\section{Introduction}
The ideas for computing polarized parton densities have been around for
a long time\cite{kaur}. What has been lacking is sufficient experimental data
to define those densities.
Good progress has been made from deep inelastic scattering experiments.
At present, the data cover the ranges $0.003<x<0.8$
and $1\,$GeV${}^2<Q^2<60\,$GeV${}^2$,
and this has provided reasonable fits describing the up quark and
down quark distributions. Data that would tightly constrain the gluon
and sea quark densities have yet to be obtained.
Recent theoretical progress has given us the next-to-leading-order
Altarelli-Parisi splitting kernels for polarized partons\cite{rolf}.
The first helicity distributions based on this higher order evolution
have already appeared\cite{grsv}.
\begin{table}[t]
\caption{This is a partial listing of parameterizations for
the polarized parton distribution functions.}
\begin{center}
\begin{tabular}{ l | l | l }
\hline\hline
\multicolumn{3}{c}{$\Delta$PDFs}\\
\hline\hline
$\Delta$PDFs & AUTHORS & { REFERENCE} \\
\hline
BT-95 & Bartelski \& Tatur & { preprint, CAMK 95-288} \\
GRSV-95& Gluck, Reya, Stratmann \& Vogelsang
& { preprint, DO-TH 95/13} \\
GRV-95 & Gluck, Reya \& Vogelsang & { preprint, DO-TH 95/11} \\
CLW-95 & Cheng, Liu and Wu & { preprint, IP-ASTP-17-95} \\
BS-95 & Bourrely and Soffer & { Nucl.~Phys.} {B445} (1995) 341 \\
F-95 & de Florian, et al. & { Phys.~Rev.} {D51} (1995) 37 \\
GS-95 & Gehrmann and Stirling & { Z.~Phys.} {C65} (1995) 461 \\
BBS-95 & Brodsky, Burkardt \& Schmidt & { Nucl.~Phys.} {B441} (1995) 197 \\
N-94 & Nadolsky & { Z.~Phys.} {C63} (1994) 601 \\
CCGN-93& Chiappetta, et al. & { Z.~Phys.} {C59} (1993) 629 \\
F-93 & de Florian, et al. & { Phys.~Lett.} {B319} (1993) 285 \\
CW-92 & Cheng and Wai & { Phys.~Rev.} {D46} (1992) 125 \\
SL-92 & Sridhar and Leader & { Phys.~Lett.} {B295} (1992) 283 \\
CN-91 & Chiappetta and Nardulli & { Z.~Phys.} {C51} (1991) 435 \\
GRV-90 & Gluck, Reya and Vogelsang & { Nucl.~Phys.} {B329} (1990) 347 \\
G-90 & Gupta, et al. & { Z.~Phys.} {C46} (1990) 111 \\
CL-90 & Cheng and Lai & { Phys.~Rev.} {D41} (1990) 91 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
A partial listing of polarized parton densities is given in Table~1.
As we can see, 1995 has been a good year. Soon, we should have about
as many distributions as there are data points to fit.
This table mainly focuses on distributions from the 1990s, but
there are other distributions that people have found useful that
appeared before 1990\cite{extras}.
\begin{table}[t]
\caption{This is a listing of input parameters for the polarized
parton distribution functions in Table~1.}
\begin{center}
\begin{tabular}{ l | l | l | c | c | l}
\hline\hline
\multicolumn{6}{c}{$\Delta$PDFs}\\
\hline\hline
Mode & $\Delta$PDF & Order & $Q^2_0$ (GeV${}^2$) &
$\Lambda_{QCD}^{(4)}$ (MeV)& Unpolarized PDF\\
\hline
270 & BT-95 & LO & 4.0 & 230 & U-MRSA (set A)\\
260 & CLW-95 & LO & 10.0 & 231 & U-MRSA${}^\prime$ (set A${}^\prime$)\\
250 & GRSV-95 & NLO & 0.34& 200 & U-GRVt-95 \\
240 & GRV-95 & LO & 0.23& 200 & U-GRVt-95 \\
230 & BS-95 & LO & 3.0 & 200 & BS-95 \\
220 & F-95 & LO & 10.0 & 230 & U-MRS${}^\prime$ (set D${}_-^\prime$) \\
210 & GS-95 & LO & 4.0 & 177 & U-O \\
200 & BBS-95 & LO & 4.0 & 230 & U-MRS${}^\prime$ (set D${}_0^\prime$) \\
190 & N-94 & LO & 11.0 & 200 & U-GRVt-92 \\
180 & CCGN-93 & LO & 1.0 & 260 & U-DFLM (avg. set) \\
170 & F-93 & LO & 4.0 & 168 & U-CTEQ1 \\
160 & CW-92 & LO & 10.0 & 260 & U-DFLM (avg. set) \\
150 & SL-92 & LO & 4.0 & 177 & U-O \\
140 & CN-91 & LO & 1.0 & 260 & U-DFLM (avg. set) \\
130 & GRV-90 & LO & 10.0 & 360 & U-GHR \\
120 & GPS-90 & LO & 5.0(15.0) & 200(90) & U-EHLQ(U-EMC) \\
110 & CL-90 & LO & 10.7 & 260 & U-DFLM (avg. set)\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Using the CTEQ evolution package\cite{cteq}, these distributions have
been evolved consistently to allow for a comparison of a few of the
more recent distributions as well as to facilitate future research. A
library in the spirit of PDFLIB by Plothow-Besch\cite{pdflib} is under
development and will soon be available for distribution. In the
remainder of this report, details of the evolution are presented and
some comparisons among a few distributions from the 1990s are made.
\label{sect2}
\section{On the Parameters of the $Q^2$ Evolution}
To have a properly defined parton distribution function (PDF) requires that
a number of parameters be defined from the outset (something about which many
papers are not very explicit). In Table~2 the values of the
parameters used in this evolution are presented.
Generally, the evolution for each parton
distribution starts with an initial distribution at a given energy scale
($Q_0$) as provided in the published work. Besides the initial scale for
the evolution and the order in perturbation theory in which the
evolution is being performed, it is also necessary to set the QCD scale
($\Lambda_{QCD}^{(n_f)}$) for a given number of quark flavors
$n_f$ and the quark masses.
For those papers that did not explicitly
present a value for $\Lambda_{QCD}^{(n_f)}$, the value was taken from
their choice of unpolarized PDF.
The quark masses are not listed in the tables. A few of the
distributions begin their evolution at scales $Q_0$ around 1~GeV,
which can be below the charm and bottom quark thresholds; this
affects the evolution when those thresholds are crossed.
Some authors indicate which quark masses were used, as in
GRV-95, where $m_c=1.5\,$GeV and $m_b=4.5\,$GeV were given, but in
most cases the values $m_c=1.6\,$GeV and $m_b=5.0\,$GeV
have been adopted.
In all the evolution
performed here, the top quark mass has been set to $m_t=180\,$GeV.
Since the data lie at low $Q^2$, quite a number of papers perform
their fits based on evolutions with three quark flavors. For the
evolutions here, however, the full six flavors are used, with the
understanding that some users will want to do computations at higher
energy scales.
\begin{table}[t]
\caption{This is a listing of references for the unpolarized parton
distribution functions associated with the $\Delta$PDFs in Table~1.}
\begin{center}
\begin{tabular}{ l | l | l }
\hline\hline
\multicolumn{3}{c}{Unpolarized PDFs}\\
\hline\hline
PDFs & AUTHORS & { REFERENCE} \\
\hline
U-GRVt-95& Gluck, Reya and Vogt & { Z.~Phys.} {C67} (1995) 433 \\
U-BS & Bourrely and Soffer & { Nucl.~Phys.} {B445} (1995) 341 \\
U-BBS & Brodsky, Burkardt \& Schmidt & { Nucl.~Phys.} {B441} (1995) 197 \\
U-MRSA${}^\prime$
& Martin, Stirling \& Roberts & { Phys.~Lett.} {B354} (1995) 155 \\
U-MRSA & Martin, Roberts \& Stirling & { Phys.~Rev.} {D56} (1994) 6734 \\
U-MRS${}^\prime$
& Martin, Roberts \& Stirling & { Phys.~Lett.} {B306} (1993) 145 \\
U-CTEQ1 & CTEQ Collaboration & { Phys.~Lett.} {B304} (1993) 159 \\
U-GRVt-92& Gluck, Reya \& Vogt & { Z.~Phys.} {C53} (1992) 127 \\
U-O & Owens & { Phys.~Lett.} {B266} (1991) 126 \\
U-DFLM & Diemoz, et al. & { Z.~Phys.} {C39} (1988) 21 \\
U-EMC & Sloan, Smajda and Voss & { Phys.~Rep.} {162} (1988) 45 \\
U-DO & Owens & { Phys.~Rev.} {D30} (1984) 49 \\
U-EHLQ & Eichten, et al. & { Rev.~Mod.~Phys.} {56} (1984) 579 \\
U-GHR & Gluck, Hoffman \& Reya & { Z.~Phys.} {C13} (1982) 119 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Each set of helicity distributions has been associated in some manner
with a specific set of unpolarized PDFs. In most cases, this appears
through the application of a dilution model, whereby the $\Delta$PDF
is written as a linear combination of unpolarized PDFs weighted by a
phenomenological function in $x$\cite{kaur}. This association between
the $\Delta$PDFs and the unpolarized PDFs is tabulated in Table~3. In
performing the evolutions, these associations have been maintained.
The $\Delta$PDFs that have been chosen for presentation are
N-94 (sets 1 and 2), BBS-95, GS-95 (sets A, B, and C), and
GRV-95 (``standard'' and ``valence'' scenarios).
In Figs.~1-3, these distributions are shown at the scale $Q=15\,$GeV.
To evolve the BBS-95 helicity densities, it was necessary to
assume a form for the sea quark distributions. Since the BBS-95 distributions
are close to the MRSD0${}^\prime$ form, those sea quark densities were used to
provide the helicity densities for the sea; namely, at the initial
energy scale it was assumed that $\Delta\bar{u}(x)=\bar{u}(x)/2$ and
$\Delta\bar{d}(x)=\bar{d}(x)/2$.
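As a numerical illustration of this input ansatz, the sketch below applies $\Delta\bar{q}(x)=\bar{q}(x)/2$ at the initial scale; the functional form of $\bar{q}(x)$ here is a made-up placeholder, not the actual MRSD0${}^\prime$ sea parameterization.

```python
# Sketch of the sea-quark input assumed for the BBS-95 evolution:
# at the initial scale, Delta_qbar(x) = qbar(x) / 2.  The unpolarized
# density below is a toy placeholder, not the actual MRSD0' fit.

def qbar(x):
    """Toy unpolarized sea-quark density (illustrative only)."""
    return 0.2 * x ** -0.3 * (1.0 - x) ** 7

def delta_qbar(x):
    """Input helicity sea density under the assumed 50% dilution ansatz."""
    return 0.5 * qbar(x)

for x in (0.01, 0.1, 0.3):
    print(f"x={x:5.2f}  qbar={qbar(x):8.4f}  delta_qbar={delta_qbar(x):8.4f}")
```
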
\label{sect3}
\section{The Up and Down Quarks}
The two distributions best defined by the present experimental data are the
up and down quark densities. Looking at Fig.~1, there is good agreement
between the different fits for $\Delta u$; it is not until larger
$x$ is reached (where the error bars on the data increase) that
significant deviations occur. The down quark densities appear
to have a few atypical contenders, but it should be noted that in
the plot of $x\Delta d(x)$, the BBS-95 and N-94 (set 2) distributions
both allow $\Delta d(x)$ to cross over into positive values at some $x$.
Since the $\Delta$PDFs are constrained by the moments,
where $\Delta d=\int \Delta d(x)\,dx<0$, the BBS-95
and N-94 (set 2) distributions compensate
for their positive contribution to the integral with a more negative
$\Delta d(x)$ below the crossover point.
\label{sect4}
\section{The Gluon and Sea Quark Densities}
Since the gluon and sea quark densities are, to a large degree, unconstrained,
it is here where the parameterizations are most distinguishable.
Some models have negligible or large $\Delta G(x)$ and $\Delta s(x)$
while others may carry more moderate $\Delta G(x)$ and $\Delta s(x)$.
Hopefully, this issue will be settled with results from hadron-hadron
collisions at laboratories like RHIC\cite{rhic}.
\label{sect5}
\section{Large $x$ Limits}
As discussed by Farrar and Jackson\cite{farrarj},
the expectation is that, as the momentum fraction of a parton approaches
unity, the helicity of the parton should coincide with that
of its parent hadron. In other words, we have the limit
\begin{equation}
\Delta{q}(x)/q(x) \longrightarrow 1\qquad \hbox {as}\qquad {x\rightarrow 1}.
\label{eq:fjlimit}
\end{equation}
The plots in Fig.~2 illustrate the polarization of some of the $\Delta$PDFs
in the large $x$ limit. The BBS-95 distributions carry a smooth transition
of the polarization towards unity with large $x$ while other distributions
have polarizations that sharply rise at very large $x$,
plateau around $x\stackrel{>}{\sim} 0.6$, or ignore this large $x$ behavior.
Numerically, the PDFs are small as $x$ nears unity, which minimizes the
effect such variations in the $\Delta$PDF may have on physics.
Nevertheless, it is important to note that it is in the large $x$ region
that the polarization distinguishes between the different models and
fits\cite{nadw}. In particular, different parameterizations of the
$\Delta d$ distribution cross over from negative to positive values
at different $x$.
\label{sect6}
\section{Small $x$ Extrapolations}
With the higher energy colliders like HERA, RHIC, or the LHC
comes the possibility of investigating polarization physics at higher energy
scales and smaller momentum fractions than fixed target experiments allow.
In Fig.~3, the small-$x$ extrapolation
for a sample of $\Delta$PDFs is displayed. These results indicate that,
as usual, care must be exercised when extending the use of the $\Delta$PDFs
beyond the range of the data with which they were fit. The view generally
taken is that the polarization of the helicity distributions should
vanish as $x\rightarrow 0$\cite{brodsky}. Using the distributions beyond
their range of validity can produce unreasonable results.
\label{sect7}
\section{$Q^2$ Evolution}
In Fig.~4 the $Q^2$ evolution is performed for four of the $\Delta$PDFs
we have been examining. What is specifically shown is
the charge weighted sum over the quark helicity distributions,
\begin{equation}
{1\over 2}\sum_i e^2_i \Delta f_i(x,Q^2),
\label{eq:fisum}
\end{equation}
for $Q=0.015,0.1,1,10\,$TeV and
where $i$ runs over the quark types $u,d,s,\bar{u},\bar{d},\bar{s}$.
(I do not call this $g_1$ because some papers, such as GS-95,
define this structure
function differently by including the anomalous gluon contribution.)
The thing to note in Fig.~4 is that the $Q^2$ evolution
reveals distinguishing features of the different helicity
distributions, indicating that the evolution properties themselves
will be useful to consider when establishing the $\Delta$PDFs\cite{blum}.
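The charge-weighted sum above is straightforward to evaluate once the helicity densities are known; the sketch below assumes toy numerical values for the $\Delta f_i$ at a single $(x,Q^2)$ point, purely for illustration.

```python
# Sketch of the charge-weighted helicity sum
#   (1/2) * sum_i e_i^2 * Delta f_i(x, Q^2),
# with i running over u, d, s and their antiquarks.
# The toy density values below are hypothetical placeholders.

EQ2 = {"u": 4.0 / 9.0, "d": 1.0 / 9.0, "s": 1.0 / 9.0}  # squared charges

def charge_weighted_sum(delta_f):
    """delta_f maps flavor -> Delta f_i; an antiquark shares its quark's e_i^2."""
    total = 0.0
    for q, e2 in EQ2.items():
        total += e2 * (delta_f[q] + delta_f[q + "bar"])
    return 0.5 * total

toy = {"u": 0.5, "ubar": 0.02, "d": -0.2, "dbar": 0.02,
       "s": -0.01, "sbar": -0.01}
print(charge_weighted_sum(toy))
```
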
\label{sect8}
\section{Caveats}
This work has mainly been a presentation of polarized parton densities
available in the literature. Though some features of these
distributions have been discussed, no attempt has been made here to
determine the quality or correctness of the fits or the procedures
used. In some cases the comparison would be pointless because of the
improvements in data and theory over the past decade.
There are a few caveats for the user. The many distributions have been
fit with different data at different times and may eventually become
outdated. Furthermore, not all the distributions have been determined
from the same perspective within QCD. An example of such variations
can be found in the different definitions used for the structure
function $g_1$. Inconsistencies, many of which are irrelevant due to
the lack of constraining data on the gluon and sea quark densities,
may also appear in the fits ({\it e.g.}, higher order unpolarized PDFs
sometimes have been used as the input distribution for helicity
densities evolved in leading order). Nonetheless, with the variety
available among all the $\Delta$PDFs, a wide range of possible physics
can be investigated.
\label{sect9}
\section*{Acknowledgments}
My gratitude goes to Wolf-Dieter Nowak and Johannes Blumlein
for inviting me to DESY-Zeuthen and for their hospitality
during my stay. Thanks also go to Wu-Ki Tung for the
CTEQ routines.
This work is funded in part by DESY-Zeuthen,
Michigan State University and NSF grant PHY-9309902.
\section{INTRODUCTION}
Much progress has been made recently in the synthesis of very heavy nuclides.
Deformed superheavy nuclides with proton numbers Z=108, 110, 111 and 112
have been discovered through reactions of cold--fusion at GSI
\cite{Hof1,Hof2,Hof3} and hot--fusion in Dubna \cite{Laz1,Laz2}. In 1995,
first chemical separations of element 106 were performed \cite{T}.
After these successes, the accurate calculations of the lifetimes of
nuclei situated in the upper--end of the isotopic chart became a new
challenge of nuclear theory.
The objective of the present paper is to study spontaneous--fission
$(T^{SF}_{1/2})$ and $\alpha$-decay $(T^{\alpha}_{1/2})$ half--lives for
even-even nuclei with proton number Z=100--114 and neutron number N=142--180.
This relatively broad region contains the experimentally well known
nuclides with Z$\leq$104, the deformed superheavy nuclei with Z$\geq$106, and
transitional nuclei close to a hypothetical island of spherical
superheavy elements situated around the nucleus $^{298}114_{184}$.
For all these nuclei the action integrals
describing the probability of SF are minimized using the multi--dimensional
dynamic--programming (MDP) method, based on the WKB approximation, within the same
deformation space.
Much attention is paid to finding the optimal deformation space. We examine
a~relatively rich collection of nuclear shape parameters
($\beta_{\lambda}$, with $\lambda$=2,3,4,5,6 and 8), as well as the pairing
degrees of freedom (i.e. proton $\Delta_{p}$
and neutron $\Delta_{n}$ pairing gaps). The optimal collective space
$\{\beta_{2}, \beta_{4}, \beta_{6}, \Delta_{p}, \Delta_{n}\}$
is found by comparison of the calculated $T^{SF}_{1/2}$ of Fm isotopes
with their experimental values.
Alpha--decay is one of the predominant decay modes
of superheavy nuclei. All recently discovered superheavy elements with atomic
numbers Z$\geq$107 were identified from their $\alpha$-decay chains.
Calculations of $T^{\alpha}_{1/2}$ are easier to perform than those of
$T^{SF}_{1/2}$: the half--life for $\alpha$-decay depends primarily upon the
energy release $Q_{\alpha}$, which is given simply by the appropriate
difference of ground--state masses.
To better characterize the nuclei in the investigated
region, we also calculate their ground--state electric quadrupole moments
and mean--square radii.
A part of the results of the analysis has been presented earlier
\cite{LS1,SL1,LS2}.
The description of the method and details of the calculations
are given in Sec.~2, the results and discussion are presented in Sec.~3.
\section{DESCRIPTION OF THE METHOD}
\subsection{Collective variables}
Let us consider a set of $\lambda$ collective variables
$(X_1,X_2,...,X_{\lambda})\equiv X$;
the classical collective Hamiltonian can then be written
\begin{equation}
H= \frac{1}{2}\sum_{k,l}^{\lambda} B_{kl}(X)\dot{X}_{k}\dot{X}_{l}+V(X)\,,
\end{equation}
\noindent
where $V(X)$ is the collective potential energy. The tensor of effective mass
(inertia tensor) $B_{kl}$ is symmetric and all its components may depend on
collective variables. The choice of the collective variables is arbitrary
but caution must be exercised.
Starting from the single--particle motion of the A=Z+N nucleons, a state of
the nucleus is defined by an average potential fixed by a set of parameters.
These parameters are good candidates for collective variables.
In the present paper we use a single--particle Hamiltonian ($H_{s.p.}$)
consisting of a deformed Woods--Saxon Hamiltonian
and a residual pairing interaction treated in the BCS approximation.
In our model we consider only axially symmetric
deformations of $H_{s.p.}$, i.e., the nuclear radius is expanded in terms of
spherical harmonics $Y_{\lambda 0}(\cos\vartheta)$
\begin{equation}
R(\vartheta) = R_{0}(\beta_{\lambda})\,[1 + \sum_{\lambda=2}
^{\lambda_{max}} \beta_{\lambda} Y_{\lambda 0}(\cos\vartheta)]\,,
\end{equation}
\noindent
where $\beta_{\lambda}$ is the set of deformation parameters up to
$\lambda_{max}$ multipolarity and the dependence of $R_{0}$ on
$\beta_{\lambda}$
is determined by the volume--conservation condition.
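The volume--conservation condition fixing $R_0(\beta_\lambda)$ can be made concrete numerically; the sketch below, restricted to a single $\beta_2$ deformation and unit sphere radius (both illustrative assumptions), integrates the enclosed volume over $\cos\vartheta$ and rescales $R_0$ so the volume matches that of the undeformed sphere.

```python
# Sketch of the volume-conservation condition R0(beta): the deformed
# shape R(theta) = R0 * (1 + beta2 * Y20(cos theta)) must enclose the
# same volume as the unit sphere.  One-deformation toy case only.
import math

def Y20(ct):
    """Spherical harmonic Y_{2,0} as a function of cos(theta)."""
    return math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * ct * ct - 1.0)

def shape_volume(beta2, n=4000):
    """Volume enclosed by R(theta) with R0 = 1, via the midpoint rule:
    V = int (2*pi/3) * R(theta)^3 d(cos theta) over [-1, 1]."""
    dct = 2.0 / n
    v = 0.0
    for i in range(n):
        ct = -1.0 + (i + 0.5) * dct
        v += (2.0 * math.pi / 3.0) * (1.0 + beta2 * Y20(ct)) ** 3 * dct
    return v

def r0(beta2):
    """R0(beta2) enforcing volume conservation, in units of the sphere radius."""
    return (4.0 * math.pi / 3.0 / shape_volume(beta2)) ** (1.0 / 3.0)

print(r0(0.0))    # the undeformed sphere
print(r0(0.25))   # slightly below 1: deformation at fixed R0 adds volume
```
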
Besides the deformation parameters, the other candidates for collective
variables are parameters connected with the pairing interaction.
In the usual BCS approximation we have two such parameters:
the pairing Fermi energy ($\lambda$) and the pairing energy--gap~($\Delta$).
In the present paper we choose the proton $\Delta_{p}$ and neutron
$\Delta_{n}$ pairing gaps as additional collective variables.
The significant effect of these so--called pairing vibrations on
penetration of the fission barrier was first discussed by
Moretto {\em et al.} in Ref. \cite{MB} (see also \cite{SB,SPP,PPS,D}).
Finally, the set of collective variables consists of the nuclear shape
parameters ($\beta_{\lambda}$) and the pairing degrees of freedom
($\Delta_{p}, \Delta_{n}$). These variables span the multi--dimensional
deformation space $\{X_{\lambda}\}$ within which we shall describe the fission
process.
\subsection{Inertia tensor}
The inertia tensor $B_{kl}$ describes the inertia of a nucleus with
respect to changes of its {\em shape}. It also plays a role similar to the
metric tensor in deformation space $\{X_{\lambda}\}$.
Its components for multipole vibrations as well as pairing vibrations
can be evaluated in the first order perturbation approximation
\cite{Belya}
\begin{equation}
B_{kl}(X) = 2\hbar^2 \sum_{m}
\frac{\langle 0|{\partial}/{\partial X_k}|m\rangle
\langle m|{\partial}/{\partial X_l}|0\rangle}
{{\cal E}_m - {\cal E}_0}\,,
\end{equation}
\noindent
where $|0\rangle$ and $|m\rangle$ denote a ground state and
the excited states of the nucleus with the corresponding energies
${\cal E}_0$ and ${\cal E}_m$.
For even--even nuclei the excited states can be identified with the two
quasi-particle excitations ${\cal E}_m=E_\mu+E_\nu$.
After transformation to the quasi-particle representation
the corresponding formula takes the following compact form \cite{GP}
\begin{equation}
B_{kl}(X) = 2\hbar^2 \sum_{\mu,\nu} P^{k \, \ast}_{\mu\nu}(X)
(E_\nu + E_\mu)^{-1} P^l_{\mu\nu}(X)\,,
\end{equation}
where for the shape deformations
\begin{equation}
P^k_{\mu\nu}(\beta)
= -\frac{\langle \mu|\frac{\partial{H_{s.p.}}}{\partial\beta_k}|\nu\rangle}
{E_\mu+E_\nu}
(u_\mu v_\nu + u_\nu v_\mu)-\frac{1}{2}
\delta_{\mu\nu} \left(\frac{\Delta}{E_\mu^2}\frac{\partial\lambda}
{\partial\beta_k}
+ \frac{e_\mu-\lambda}{E_\mu^2}\frac{\partial\Delta}{\partial\beta_k}\right)
\end{equation}
and in the case of pairing degrees of freedom
\begin{equation}
P^k_{\mu\nu}(\Delta)=
\delta_{\mu\nu} \frac{(e_\mu-\lambda)+\Delta
\frac{\partial\lambda}{\partial\Delta}}{2E_\mu^2}\,.
\end{equation}
\noindent
Here $v_\mu$, $u_\mu$ are the pairing occupation probability
factors, $e_\mu$ are the single--particle energies of $H_{s.p.}$,
and $E_\mu=[(e_\mu - \lambda)^2 + \Delta^2]^{1/2}$
is the quasi--particle energy corresponding to the $|\mu\rangle$
state. The above expression is equivalent to the commonly used
formula developed by Sobiczewski {\em et al.} in Ref. \cite{Sob69}.
The components of the inertia tensor are strongly affected by single--particle
and pairing effects. The relation between the energy--gap parameter
$\Delta$, the effective level density at the Fermi energy
$g_{eff}(\lambda)$ and the diagonal components of the inertia tensor
$B_{kk}$ can be shown
within the uniform model \cite{Funny}
\begin{equation}
B_{kk} \sim {\rm const}\cdot\frac{g_{eff}(\lambda)}{\Delta^2}
\, | \langle \partial H_{s.p.}/ \partial \beta_k \rangle |^2\,.
\label{delb}
\end{equation}
\noindent
This strong dependence of the inertia tensor on the pairing energy--gap
leads one to expect a considerable reduction of the spontaneous--fission
half--life values.
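A toy numerical check of this scaling (the constants below are arbitrary illustration values, not fitted quantities):

```python
# Toy illustration of the uniform-model scaling
#   B_kk ~ const * g_eff(lambda) / Delta^2:
# doubling the pairing gap cuts the inertia by a factor of four, which,
# through the action integral S ~ sqrt(B_eff), can shorten T_SF by
# many orders of magnitude.  All numbers here are illustrative.

def inertia(g_eff, delta, const=1.0):
    """Diagonal inertia component in the uniform-model approximation."""
    return const * g_eff / delta ** 2

b1 = inertia(g_eff=5.0, delta=0.7)
b2 = inertia(g_eff=5.0, delta=1.4)
print(b2 / b1)  # -> 0.25
```
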
\subsection{Collective energy}
The collective energy $V$ is calculated for a given nucleus by the
macroscopic--microscopic model developed by Strutinsky \cite{Strut}:
\begin{equation}
V=E_{\rm macr}(\beta)+\delta{}E_{\rm shell}(\beta)
+\delta{}E_{\rm pair}(\beta,\Delta)\,.
\end{equation}
\noindent
For the macroscopic part $E_{\rm macr}$ we used the
Yukawa--plus--exponential model \cite{KN}.
The so--called microscopic part, consisting of the shell
$\delta{}E_{\rm shell}$ and pairing $\delta{}E_{\rm pair}$
corrections, is calculated on the basis of the single--particle spectra of the
Woods--Saxon Hamiltonian \cite{CDN}.
The one--body Woods--Saxon Hamiltonian consists of the kinetic energy term
$T$, the potential energy $V^{WS}$,
the spin-orbit term $V^{WS}_{so}$ and the Coulomb potential $V_{Coul}$ for
protons:
\begin{equation}
H^{WS}=T+V^{WS}(\vec{r};\beta)+
V^{WS}_{so}(\vec{r};\beta)+\frac{1}{2}(1+\tau_3)V_{Coul}(\vec{r};\beta)\,.
\end{equation}
\noindent
In the above equation
\begin{equation}
V^{WS}(\vec{r};\beta) =\frac{V_0[1 \pm \kappa(N-Z)/(N+Z)]}
{1+\exp[dist(\vec{r};\beta)/a]}
\end{equation}
and
\begin{equation}
V^{WS}_{so}(\vec{r};\beta)=
-\lambda (\nabla V^{WS}\times\vec{p})\cdot\vec{s}\,,
\end{equation}
\noindent
where $dist(\vec{r};\beta)$ denotes the distance of a point $\vec{r}$ from
the surface of the nucleus given by Eq. (2) and $V_0$, $\kappa$, $a$, $\lambda$
are adjustable constants. The Coulomb potential $V_{Coul}$ is assumed to be
that of the nuclear charge equal to $(Z-1)e$ and uniformly distributed inside
the nuclear surface. In our calculations we used Woods--Saxon
Hamiltonian with the so--called ``universal'' set of its parameters
(see Ref. \cite{CDN}) which were
adjusted to the single--particle levels of odd--A nuclei
with A$\geq$40.
The term $\delta{}E_{\rm pair}$ in Eq.~(8) arises from the residual pairing
interaction, which is included in our $H_{s.p.}$ through the BCS approximation.
In the present paper we use the pairing strength constants
$G_ZA=13.3+0.217(N-Z)$ and $G_NA=19.3-0.080(N-Z)$ for protons and neutrons,
respectively, taken from Ref. \cite{DMS}.
\subsection{Lifetimes for alpha--decay}
For the calculation of the alpha--decay half--life we employ the
phenomenological formula of Viola and Seaborg \cite{VS}
\begin{equation}
\log T^{\alpha}_{1/2}\,[yr] =
(aZ + b) (Q_{\alpha}/MeV)^{-1/2} + (cZ +d ) - 7.5\,,
\end{equation}
\noindent
where $Z$ is the atomic number of the parent nucleus and
Q$_{\alpha}$ is the energy release obtained from the mass excesses
\begin{equation}
Q_{\alpha}(Z,N)=M(Z,N)-M(Z-2,N-2)-M(2,2)\,.
\end{equation}
\noindent
The parameter values $a=1.66175$, $b=-8.5166$, $c=-0.20228$
and $d=-33.9069$ in the above formula were taken from Ref. \cite{SPC}.
It should be noted that,
despite the phenomenological character of the formula,
the uncertainties in the calculated $\alpha$-decay
half-lives are far smaller than the
uncertainties in the calculated SF half-lives.
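The formula above is simple enough to evaluate directly. The sketch below uses the quoted parameter values; the input $Z$ and $Q_\alpha$, however, are made-up illustrative numbers, not a prediction for any particular nuclide.

```python
# Sketch of the Viola-Seaborg estimate
#   log10 T_alpha [yr] = (a*Z + b)/sqrt(Q_alpha/MeV) + (c*Z + d) - 7.5,
# with the parameter values quoted in the text.

def log10_t_alpha_yr(Z, Q_alpha,
                     a=1.66175, b=-8.5166, c=-0.20228, d=-33.9069):
    """Z: atomic number of the parent; Q_alpha: energy release in MeV."""
    return (a * Z + b) / Q_alpha ** 0.5 + (c * Z + d) - 7.5

# Illustrative input only: these Z and Q_alpha values are hypothetical.
print(log10_t_alpha_yr(Z=108, Q_alpha=9.0))
```
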
\subsection{Lifetimes for spontaneous--fission}
The spontaneous--fission half--life is inversely proportional to the
probability of penetration through the barrier
\begin{equation}
T^{SF}_{1/2}=\frac{\ln2}{n}\frac{1}{P}\,.
\label{tsf}
\end{equation}
\noindent
Here $n$ is the number of ``assaults''
of the nucleus on the fission barrier {\em per} unit time.
The number of assaults is usually
equated to the frequency of the zero--point vibration of the nucleus in the
fission degree of freedom; for the vibrational frequency of
$\hbar\omega_0$=1\,MeV assumed in this paper,
$n\approx10^{20.38}{\rm s}^{-1}$.
Using the one--dimensional WKB semi--classical approximation for
the penetration probability $P$ one obtains
\begin{equation}
T^{SF}_{1/2}\,[yr] = \frac{10^{-28.04}}{\hbar\omega_0}\,[1+\exp2S(L)]\,,
\end{equation}
\noindent
where $S(L)$ is the action--integral calculated along a fission path
$L(s)$ in the multi--dimensional deformation space $\{X_\lambda\}$
\begin{equation}
S(L) = \int^{s_2}_{s_1} \left\{{2 \over \hbar^2} \, B_{\rm eff}(s)
[V(s) - E]\right\}^{1/2} ds\,.
\label{act}
\end{equation}
\noindent
An effective inertia associated with the fission motion
along the path $L(s)$ is
\begin{equation}
B_{\rm eff}(s) = \sum_{k,l} \, B_{kl} \,
{dX_k \over ds} {dX_l \over ds}\,,
\label{beff}
\end{equation}
\noindent
where $B_{kl}$ are the components of the inertia tensor.
In the above equations $ds$ denotes an element of the path length in
the $\{X_\lambda\}$ space. The integration limits $s_1$ and
$s_2$ correspond to the classical turning points,
determined by a~condition $V(s) = E$, where
$E = V(X^0_{\lambda})$ + 0.5 $\hbar\omega_0$
denotes the energy of the fissioning nucleus in MeV (calculated in the
ground--state).
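A one--dimensional numerical sketch of the action integral and the resulting half--life, using a toy parabolic barrier and a constant effective inertia (both illustrative assumptions, not a realistic fission path):

```python
# Sketch of S(L) = int sqrt(2/hbar^2 * B_eff * (V - E)) ds between the
# turning points, and T_SF [yr] = 10**(-28.04)/(hbar*omega0)*(1+exp(2S)).
# Units: hbar = 1, energies in MeV; barrier and inertia are toy inputs.
import math

HBAR_OMEGA = 1.0  # MeV; zero-point vibration energy assumed in the text

def action(V, b_eff, E, s1, s2, n=4000):
    """Action integral along a 1-D path, midpoint rule."""
    ds = (s2 - s1) / n
    total = 0.0
    for i in range(n):
        s = s1 + (i + 0.5) * ds
        total += math.sqrt(max(2.0 * b_eff * (V(s) - E), 0.0)) * ds
    return total

def t_sf_years(S):
    """Spontaneous-fission half-life in years for a given action S."""
    return 10.0 ** -28.04 / HBAR_OMEGA * (1.0 + math.exp(2.0 * S))

# Toy parabolic barrier of height 6 MeV; constant effective inertia.
E = 0.5 * HBAR_OMEGA
V = lambda s: 6.0 * (1.0 - s * s)
turn = math.sqrt(1.0 - E / 6.0)          # classical turning points, V(s) = E
S = action(V, b_eff=50.0, E=E, s1=-turn, s2=turn)
print(S, t_sf_years(S))
```
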
\subsection{Calculations technique}
Dynamic calculations of $T^{SF}_{1/2}$ mean a quest for
the least--action trajectory $L_{\rm min}$ which fulfills a principle
of the least--action $\delta[S(L)]~=~0$.
To minimize the action--integral (\ref{act}) we used the
dynamic--programming method. Its application to fission was first
developed by Baran {\em et al.} (see e.g., \cite{BP} and references
cited therein).
In contrast to the method used by Smola\'nczuk {\em et al.} in Refs.
\cite{SK,SS}, where only two
coordinates ($\beta_2$ and $\beta_4$) are handled dynamically
and the remaining degrees of freedom are found by
minimization of the potential energy $V$, in our
multi--dimensional
dynamic--programming (MDP) method all coordinates are treated
as independent dynamical variables.
Figure 1 demonstrates how our model works.
Since the macroscopic--microscopic method is not analytical, it is
necessary to calculate the potential energy and all components of
the inertia tensor on a~grid in the multi--dimensional space spanned
by a set of deformation parameters $\{X_\lambda\}$.
We select one coordinate $X_0$ from this set. This
coordinate (e.g. elongation parameter) is related in a linear
way to the fission process.
In Fig.~1, to each point $X_0$ there corresponds a {\em plane} representing
the rest of the collective space $\{X_{\lambda-1}\}$.
\begin{figure}[h]
\vspace*{-3.5cm}
\hspace{-4.0cm}
\psfig{file=cas96f1.eps,height=24cm}
\vspace{-11.5cm}
\caption{
Diagrammatic illustration of the MDP method.
In the multi--dimensional deformation
space $\{X_\lambda\}$ we select the coordinate $\{X_0\}$
which is related in a linear way to the fission process. The points $s_1$
and $s_2$ correspond to entrance to the barrier and exit from the
barrier, respectively. See text for details.
}
\label{Fig.1}
\end{figure}
To find the least--action trajectory $L_{\rm min}$ between the
turning points $s_1$ and $s_2$ we proceed as follows. First,
we calculate the action--integrals
from the entrance point under the barrier $s_1$
to all grid points in the nearest {\em plane}
at $X_0 = 1$. In the next step we come to the {\em plane} at $X_0 = 2$
and from each grid point in this {\em plane} calculate the
action--integrals to all grid points in the {\em plane} at $X_0 = 1$.
The trajectories starting from each grid point at
$X_0 = 2$, passing through all grid points in the {\em plane} at $X_0 = 1$
and terminating at the point $s_1$, form a~bunch of paths. From
each such bunch we choose the path with the minimal
action--integral and store it. At the end of this
step we have the least--action integrals along trajectories
which connect the starting point $s_1$ with all grid points in
the {\em plane} at $X_0 = 2$. Next, we repeat this procedure
for all grid points at $X_0 = 3$ and again we obtain all
the least--action--integrals along trajectories starting from point
$s_1$ with ends at each grid point in the {\em plane} $X_0 = 3$. We
repeat it until we reach the $n$--th {\em plane}, the last one before
the exit point from the barrier $s_2$. Finally, we proceed to the
last step of our method, where we calculate action--integrals between
the exit point $s_2$ and all grid points
situated on the last {\em plane} at $X_0 = n$;
the minimal one among them corresponds to the sought
trajectory of least action, $L_{\rm min}$.
If we denote the number of grid points on each $X_i$ (i=1,2,...,$\lambda$-1)
axis by $n_i$, then the total number of trajectories examined in the MDP
method is equal to
($n_1\cdot{}n_2\cdot...\cdot{}n_{\lambda-1}$)$^{\textstyle n}$.
Up to now, our calculations have been carried out in at most a four--dimensional
deformation space, in view of the enormously large computational time
(and disk space) required for preparing (and storing) input data
with the potential energy and the $1/2\,\lambda(\lambda+1)$ components of the
symmetric inertia tensor for each of the
($n_1\cdot{}n_2\cdot...\cdot{}n_{\lambda-1}$)$\cdot{}n$ grid points.
Calculations are performed in various four--, three-- and two--dimensional
deformation spaces spanned by selected shape parameters
($\beta_{\lambda}$, with
$\lambda$=2,3,4,5,6 and 8) and two pairing degrees of freedom
($\Delta_p, \Delta_n$).
For the $\beta$--shape parameters we used grids with steps
$\Delta \beta_2$=$\Delta \beta_3$=0.05 and $\Delta \beta_{\lambda}$=0.04
for $\lambda$=4,5,6 and 8; for the pairing
energy--gaps the grid step is 0.2\,MeV.
In our calculations the quadrupole deformation $\beta_2$
plays the role of the coordinate $X_0$ in Fig.~1.
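The plane--by--plane minimization described above is an instance of dynamic programming. The sketch below reduces it to a single transverse coordinate per plane; the quadratic `cost` function is a made-up stand-in for the action increment between neighbouring grid points, which in the real calculation comes from the tabulated potential energy and inertia tensor.

```python
# Minimal sketch of the MDP minimization: planes are indexed by X0,
# each holding grid points of one perpendicular variable; first and
# last planes are the turning points s1 and s2.  At each plane only
# the cheapest path reaching every grid point is kept, as in the text.

def least_action(planes, cost):
    """planes: list of lists of grid points; cost(p, q): action increment."""
    best = {planes[0][0]: 0.0}              # action accumulated from s1
    for k in range(1, len(planes)):
        new_best = {}
        for q in planes[k]:
            new_best[q] = min(best[p] + cost(p, q) for p in planes[k - 1])
        best = new_best
    return best[planes[-1][0]]              # least action at exit point s2

# Toy example: a constant step cost plus a quadratic penalty for
# transverse excursion, so the cheapest path stays at zero deformation.
planes = [[0.0]] + [[-0.1, 0.0, 0.1] for _ in range(4)] + [[0.0]]
cost = lambda p, q: 1.0 + (q - p) ** 2 + q ** 2
print(least_action(planes, cost))
```
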
\section{RESULTS AND DISCUSSION}
\subsection{Optimal Multi--dimensional Deformation Space}
The experimental values of the spontaneous fission
half--lives of nine even--even Fm isotopes (N = 142,
144, ..., 158) form approximately two sides of an acute--angled
triangle with its vertex at N = 152.
This strongly nonlinear behaviour of $T^{SF}_{1/2}$ {\em vs.} neutron
number N, due to the enhanced nuclear stability in the vicinity of
the deformed shell N=152,
provides a good opportunity for testing theoretical models.
To find the proper deformation space for description of the fission
process we examined three different effects:
the effect of the higher even--multipolarity shape
parameters $\beta_6$ and $\beta_8$,
the role of the reflection--asymmetry shape parameters
$\beta_3$ and $\beta_5$, and
the influence of the pairing degrees of freedom
$\Delta_p$ and $\Delta_n$.
The following conclusions can be drawn from our previous dynamical
analysis of $T^{SF}_{1/2}$ for even--even Fm isotopes,
see Refs. \cite{LS1,SL1,LS2}.
When the $\beta_{6}$ deformation parameter is
added to our minimal two--dimensional space $\{\beta_{2}, \beta_{4}\}$,
the fission lifetimes increase by one to four orders
of magnitude.
The contribution of the parameter
$\beta_{8}$ to $T^{SF}_{1/2}$ is negligible.
The deformations with odd multipolarity $\lambda$=3,5 do not change the
SF half--lives.
The reason for this lies in the dynamical treatment of the fission process.
The parameters $\beta_3$ and $\beta_5$ reduce the width of the static
fission barrier; however, the effective inertia
$B_{eff}$, Eq. (\ref{beff}), along the corresponding static path
is larger than along the dynamic one, where $\beta_{3}$ and $\beta_{5}$
are almost equal to zero. One can say that the static path (corresponding
to minimal potential energy) is ``longer'' than the dynamical one, for which
$\beta_{3}$=$\beta_{5}$=0. The above conclusions agree with
those published in Ref. \cite{SK}.
The pairing degrees of freedom $\Delta_p$ and $\Delta_n$
reduce the SF half--lives of Fm isotopes with N$>$152 by about three orders
of magnitude and considerably improve the theoretical predictions of
$T^{SF}_{1/2}$. This effect is due to the strong dependence of the inertia
tensor on the pairing energy--gap, as shown in Eq. (\ref{delb}).
Finally, we can conclude that the optimal deformation space for
the description of the fission half--lives of heavy nuclei
is
$\{\beta_{2}, \beta_{4}, \beta_{6}, \Delta_{p}, \Delta_{n}\}$.
\begin{figure}[h]
\vspace{-4.5cm}
\centerline{\psfig{file=cas96f2.eps,height=20cm}}
\vspace{-5.5cm}
\caption{
The electric quadrupole moments of the even--even nuclei with atomic numbers
Z=100--114, plotted as a function of the neutron number.
}
\label{Fig.2}
\end{figure}
On account of the computational limitations mentioned above, we can only perform
calculations in at most a four--dimensional deformation space.
We therefore define a correction to the SF half--lives, arising
from the pairing degrees of freedom, as the difference between $T^{SF}_{1/2}$
calculated in the four--dimensional space
$\{\beta_{2}, \beta_{4}, \Delta_{p}, \Delta_{n}\}$
(where the pairing degrees of freedom are treated as dynamical variables)
and the one calculated in the two--dimensional space $\{\beta_{2}, \beta_{4}\}$
(where the pairing energy--gaps are treated in the stationary way,
i.e. by solving the BCS equations):
\begin{equation}
\delta T^{SF}_{1/2} (\Delta_{p},\Delta_{n}) \equiv
T^{SF}_{1/2} (\beta_{2},\beta_{4},\Delta_{p},\Delta_{n}) -
T^{SF}_{1/2} (\beta_{2},\beta_{4})\,.
\label{tdelta}
\end{equation}
\noindent
The calculation of $T^{SF}_{1/2}$ in the
space $\{\beta_{2}, \beta_{4}, \beta_{6}, \Delta_{p}, \Delta_{n}\}$
was then approximated by the result obtained in the
three--dimensional space $\{\beta_{2}, \beta_{4}, \beta_{6}\}$
with the pairing correction $\delta T^{SF}_{1/2} (\Delta_{p},\Delta_{n})$ added.
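Since the half--lives span many orders of magnitude, they are naturally handled as $\log_{10}T$ values (as in Fig.~4); with this convention the approximation above amounts to a simple additive shift. A minimal sketch with purely illustrative numbers:

```python
# Illustrative log10 half-lives (in years) from hypothetical calculations:
logT_4d = -2.0   # T(beta2, beta4, Delta_p, Delta_n): dynamical pairing
logT_2d =  1.0   # T(beta2, beta4): pairing from static BCS equations
logT_3d =  2.5   # T(beta2, beta4, beta6)

# Pairing correction, Eq. (\ref{tdelta}), as a shift of the log half-life:
delta_logT = logT_4d - logT_2d   # negative: pairing shortens T

# Approximation to the full five-dimensional calculation:
logT_5d_approx = logT_3d + delta_logT
print(logT_5d_approx)   # -> -0.5
```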
\subsection{Ground--state properties}
\begin{figure}[hb]
\vspace{-3.5cm}
\centerline{\psfig{file=cas96f3.eps,height=18cm}}
\vspace{-5.0cm}
\caption{
The mean square radii of the even--even superheavy nuclei, plotted
as a function of the neutron number.
}
\label{Fig.3}
\end{figure}
In the present study the even--even superheavy nuclei with
atomic numbers Z=100--114 and neutron numbers N=142--180 are considered.
First, we present the results related to the ground--state (GS) properties.
The GS properties were calculated at the equilibrium point found for
a~given nucleus by minimizing its potential energy with respect to
the $\beta_{2}$, $\beta_{4}$ and $\beta_{6}$ degrees of freedom.
In Fig.~2 we plot the electric quadrupole moments calculated
with the following formula
\begin{equation}
Q_2 = \sqrt{\frac{16\pi}{5}}
\sum_{\nu=p} \langle\,\nu\,|\,r^2 Y_{20}\,|\,\nu\,\rangle\, v^2_{\nu}\,,
\end{equation}
\noindent
where $v^2_{\nu}$ is the BCS occupation factor corresponding to the
proton single--particle state $| \nu \rangle$ at the equilibrium point.
Almost all nuclei have distinct prolate deformations.
With increasing neutron number the $Q_2$ values show
a regular decrease, except at N=162--164, where a slight
discontinuity in this behaviour can be seen for nuclei
with atomic number Z$\geq$106.
The mean square charge radii (MSR), for the same region of nuclei,
are plotted as a function of neutron number in Fig.~3.
For the calculation of the MSR we use the usual formula
\begin{equation}
<r^2> = \frac{1}{\rm Z}
\sum_{\nu=p} \langle\,\nu\,|\,r^2\,|\,\nu\,\rangle\, v^2_{\nu}
+ 0.64\,{\rm fm^2}\,,
\end{equation}
\noindent
where the last term is due to the finite range of the proton charge distribution.
One observes a rather regular dependence of the mean square radii
on both the neutron and the proton number. However, as before,
close to N=162--164 one can see local maxima in the MSR curves,
particularly for nuclei with Z$\geq$106.
This means that the Coulomb repulsion energy of these nuclei is
locally smaller, making them more stable.
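Both ground--state observables above are BCS-weighted sums over proton single--particle states. A schematic implementation, with hypothetical matrix elements, occupation factors and proton number (none of them realistic), might read:

```python
import math

# Hypothetical proton single-particle data at the equilibrium point:
# (<nu| r^2 Y_20 |nu> [fm^2], <nu| r^2 |nu> [fm^2], v^2_nu)
states = [
    (0.80, 18.0, 1.00),
    (0.50, 22.0, 0.95),
    (-0.20, 25.0, 0.40),
]
Z = 2.35  # toy "proton number"; purely illustrative

# Electric quadrupole moment (the Q_2 formula above):
Q2 = math.sqrt(16.0 * math.pi / 5.0) * sum(q * v2 for q, _, v2 in states)

# Mean square charge radius, including the 0.64 fm^2 finite-size term:
r2 = sum(r * v2 for _, r, v2 in states) / Z + 0.64

print(Q2, r2)
```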
\subsection{Spontaneous--fission versus alpha--decay}
\begin{figure}[h]
\vspace{-6.0cm}
\centerline{\psfig{file=cas96f4.eps,height=20cm}}
\vspace{-6.0cm}
\caption{
Contour map of the logarithm of the spontaneous--fission
half-life (given in years) for nuclei shown in Fig.~2
calculated in $\{\beta_2, \beta_4, \beta_6\}$ deformation space
and corrected by the effect of the pairing degrees of freedom,
Eq. (\ref{tdelta}).
}
\label{Fig.4}
\end{figure}
Figure 4 shows the results of the spontaneous--fission
half--life calculations, performed according to the MDP method in the
$\{\beta_{2}, \beta_{4}, \beta_{6}\}$ deformation space
with the pairing correction $\delta T^{SF}_{1/2} (\Delta_{p},\Delta_{n})$,
Eq. (\ref{tdelta}), for the nuclei shown in Fig.~2.
Two very specific effects can be observed on the contour map of
$T^{SF}_{1/2}$ plotted as a function of neutron and proton numbers.
One can see an enhancement in the SF half--life values at N=162
followed by a diminution at N=170.
The enhancement in nuclear stability near the deformed shell N=162
allows the appearance of a peninsula of deformed metastable
superheavy nuclei. The local maximum of the $T^{SF}_{1/2}$ values
is centered at the nucleus $^{268}_{106}{\rm Sg}_{162}$ (2.5 h).
In the vicinity of neutron number N=170 one observes the opposite behaviour.
Here, the SF half--life values form a trench. This trench separates the
peninsula of the deformed superheavy nuclei from an island of spherical
superheavy elements around the doubly magic nucleus
$^{298}114_{184}$. The local minimum is obtained for the nucleus
$^{272}_{102}{\rm No}_{170}$ (10 $\mu$s).
We also found that the $T^{SF}_{1/2}$ values of the two heaviest nuclei
considered in this paper,
$^{292}114_{178}$ and $^{294}114_{180}$, are comparable with those
of the most stable Fm isotopes with neutron numbers N=152 and~154
($\sim$ 100 yr).
In Fig.~5 we plot the alpha--decay half--lives (given in years)
estimated by means of the Viola--Seaborg relationship with the set
of constants from Ref. \cite{SPC}. The contour plot of the
$T^{\alpha}_{1/2}$ forms a relatively regular surface descending
steeply in the direction where the proton number tends to increase
and the neutron number tends to decrease (upper--left--hand
corner in the plot).
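For even--even nuclei the Viola--Seaborg relationship expresses $\log_{10}T^{\alpha}_{1/2}$ through the atomic number $Z$ and the $\alpha$--decay energy $Q_\alpha$. A sketch is given below; the constants are the values commonly quoted for the fit of Ref. \cite{SPC} and are assumed here, so they should be verified against the original before any serious use:

```python
import math

# Viola-Seaborg constants: values commonly attributed to the
# Sobiczewski-Patyk-Cwiok fit (assumed here; verify before use).
A_VS, B_VS, C_VS, D_VS = 1.66175, -8.5166, -0.20228, -33.9069

def log10_T_alpha_sec(Z, Q_alpha_MeV):
    """log10 of the alpha-decay half-life in seconds for an even-even
    nucleus (no odd-nucleon hindrance terms)."""
    return (A_VS * Z + B_VS) / math.sqrt(Q_alpha_MeV) + C_VS * Z + D_VS

SECONDS_PER_YEAR = 3.15576e7

def log10_T_alpha_yr(Z, Q_alpha_MeV):
    """Same quantity converted to years, as plotted in Fig. 5."""
    return log10_T_alpha_sec(Z, Q_alpha_MeV) - math.log10(SECONDS_PER_YEAR)

# Hypothetical example: Z=106 with an assumed Q_alpha = 8.8 MeV
print(log10_T_alpha_sec(106, 8.8))
```

A larger $Q_\alpha$ gives a shorter half-life, which is why $T^{\alpha}_{1/2}$ falls steeply towards the proton-rich corner of the map.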
\begin{figure}[h]
\vspace{-6.0cm}
\centerline{\psfig{file=cas96f5.eps,height=20cm}}
\vspace{-6.0cm}
\caption{
Contour plot of the logarithm of the alpha--decay half-life
$T^{\alpha}_{1/2}$ (given in years)
obtained from the Viola--Seaborg systematics. The alpha--decay energy
was calculated with $\{\beta_{\lambda}\}, \lambda=2,4,6$
deformation parameters.
}
\label{Fig.5}
\end{figure}
The surface of the $T^{\alpha}_{1/2}$ values shows an evident
protuberance at N=162,
which demonstrates unambiguously the magicity of this neutron
number. The shell effect at N=152 is very weak and practically
disappears for nuclei with atomic number Z$>$104.
The results presented in Figs.~4 and~5, as well as the conclusions
drawn from them, are generally similar to those recently
published by other groups employing the
macroscopic--microscopic method, Refs. \cite{SS,SSS,MN,MNK}.
To compare the SF and $\alpha$--decay modes, we show in Fig.~6 the logarithm
of the total half--life $T^{SF+\alpha}_{1/2}$ resulting from both
of these modes. Comparing the contour maps of $T^{SF+\alpha}_{1/2}$
and $T^{SF}_{1/2}$, one notices some minor differences, but the global
behaviour of both quantities stays unchanged.
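Since partial decay rates (inverse half--lives) add, the total half--life shown in Fig.~6 follows from $1/T^{SF+\alpha}_{1/2} = 1/T^{SF}_{1/2} + 1/T^{\alpha}_{1/2}$. A minimal sketch:

```python
def total_half_life(T_sf, T_alpha):
    """Total half-life of two competing decay modes (rates add)."""
    return 1.0 / (1.0 / T_sf + 1.0 / T_alpha)

def alpha_branching(T_sf, T_alpha):
    """Fraction of decays proceeding by alpha emission."""
    return (1.0 / T_alpha) / (1.0 / T_sf + 1.0 / T_alpha)

# If alpha decay is ten times faster than SF, it dominates:
print(total_half_life(10.0, 1.0))   # ~0.909, in the same units as the inputs
print(alpha_branching(10.0, 1.0))   # ~0.909: alpha-dominated
```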
The dark shadowed areas in Fig.~6 show the regions of nuclei in
which the $\alpha$--decay mode predominates.
The light shadowed area corresponds to the intermediate region of
nuclei, where the probabilities of the SF and $\alpha$--decay
processes are approximately equal (i.e. the region where the values of
$T^{\alpha}_{1/2}$ and $T^{SF}_{1/2}$ differ up to one order of
magnitude).
The black solid curve inside this area connects nuclei for which the
probabilities of both considered modes are equal.
Thus, one can observe in Fig.~6 that the region of increased $\alpha$--decay
activity diagonally separates two areas with predominant SF activity.
It is worth noting that the upper area is almost inaccessible due to
extremely short lifetimes ($\leq$ 1 $\mu$s).
\begin{figure}[h]
\vspace{-6.0cm}
\centerline{\psfig{file=cas96f6.eps,height=20cm}}
\vspace{-6.0cm}
\caption{
Logarithm of the total half-life $T^{SF+\alpha}_{1/2}$
resulting from SF (Fig.~4) and $\alpha$--decay (Fig.~5) modes.
The dark shadowed areas show the regions of nuclei, where the $\alpha$--decay
mode is a dominant one. The light shadowed area corresponds to the
intermediate region, where the probabilities of SF and $\alpha$--decay modes
are approximately equal.
}
\label{Fig.6}
\end{figure}
The total half--life values (equal to $T^{\alpha}_{1/2}$) of the two
heaviest nuclei considered in this paper,
$^{292}114_{178}$ and $^{294}114_{180}$, are found to be larger than 1 yr.
This is in agreement with results obtained recently in the fully
selfconsistent microscopic nonrelativistic
Hartree--Fock--Bogoliubov approach, Ref. \cite{BBD}.
\section{Introduction}
Turbulence is a ubiquitous phenomenon encountered in very diverse natural systems, from the large-scale atmosphere \cite{wyngaard1992atmospheric} and oceans \cite{toschi2009lagrangian} all the way down to quantum fluids \cite{vinen2002quantum}, as well as in engineered systems, such as pipelines, heat exchangers, wind turbines, etc. It relates to the complex fluid dynamics that orchestrates the interactions of flow eddies spanning many length-scales and generates non-Gaussian statistics of velocity increments. The statistical properties of these turbulent fluctuations are fundamentally changed when the flow is confined by the presence of solid walls or boundaries \cite{smits2013wall,jimenez2013near}. In contrast to bulk turbulence, which is statistically homogeneous and isotropic, wall-bounded turbulence is characterised by statistically anisotropic properties. Namely, there is a net mean-flow in the streamwise direction along the wall, and different flow structures form depending on their distance to the wall. We typically differentiate between four flow regions, moving away from the wall \cite{Ob97}: i) the {\em viscous region} closest to the wall, where viscous effects dominate, ii) the {\em buffer layer}, marking the transition from the viscous layer into the inertial layer, iii) the {\em inertial layer} where the
log-law of the wall applies, and iv) the {\em wake}, the energetic region beyond the inertial layer. A more refined division is given in \cite{CHS19}.
A classical signature of wall-bounded turbulence is the "log-law of the wall" for the mean velocity profile (MVP), due to Prandtl and von K\'arm\'an, which reads
\begin{equation}
\label{eq:l-lwall}
\langle \tilde u \rangle = \frac{1}{\kappa} \log(\tilde y)+B,
\end{equation}
where $\kappa$ is the \emph{universal} von K\'arm\'an constant that is independent of the microscopic flow characteristics and relates to generic features such as space dimensionality. The distance to the wall $y$ and the mean fluid velocity $u$ along the wall
are typically expressed in "wall units" determined by the wall shear stress $\tau_0$, which is an important theoretical concept that is also experimentally measurable. The friction velocity $u_\tau = \sqrt{\langle \tau_0 \rangle/\rho}$ is set by the wall shear stress $\tau_0$ and, together with the kinematic viscosity $\nu$, enters the unit rescalings as $\tilde u= u/u_\tau$ and $\tilde y = yu_\tau/ \nu$. Here $\rho$ is the constant fluid density and $B$ is a dimensionless constant that
is fitted to experimental data, e.g. \cite{P53}.
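In these units the log-law is trivial to evaluate; the small sketch below uses the illustrative values $\kappa = 0.4$ and $B = 5$ (in practice both are fitted to data):

```python
import math

KAPPA, B = 0.40, 5.0   # illustrative values; fitted to data in practice

def u_plus(y_plus):
    """Mean velocity in wall units from the log-law of the wall."""
    return math.log(y_plus) / KAPPA + B

def to_wall_units(y, u, u_tau, nu):
    """Convert physical (y, u) to wall units (y~, u~)."""
    return y * u_tau / nu, u / u_tau

# The log-law gains log(10)/kappa ~ 5.76 velocity units per decade of y+:
print(u_plus(1000.0) - u_plus(100.0))
```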
\begin{figure}[t]
\centering
\includegraphics[width=.4\textwidth]{Figure_0.pdf}
\caption{Theoretical predictions from the spectral theory for the MVP $\langle u \rangle$ and mean square velocity fluctuations $\langle w^2\rangle$ (dimensionless variables in wall units). }
\label{fig:wx1}
\end{figure}
A log-law of the wall was also derived from the "attached eddy hypothesis" by Townsend \cite{T76}. Townsend showed that the velocity fluctuations, $\tilde w =w/u_\tau$, $\tilde u= \langle \tilde u \rangle + \tilde w$, also follow the log-law of the wall in their second moment, namely
\begin{equation}
\label{eq:l-lfluct}
{\langle \tilde w^2 \rangle} = - A_1 \log(\tilde y) + B_1,
\end{equation}
where the coefficients $A_1$ and $B_1$, also called the Townsend-Perry constants, were first measured by Perry and Chong \cite{PC82, P86}.
More recently, the log-law was generalised to any moment of the streamwise velocity fluctuations, $\tilde w$, assuming Gaussian velocity fluctuations \cite{MM13},
\begin{equation}
\label{eq:l-pfluct}
\langle \tilde w^{2p} \rangle^{1/p} = - A_p \log(\tilde y) + B_p.
\end{equation}
While the generalised log-law is supported by wall-turbulence experiments, the dependence of $A_p$ and $B_p$ on $p$ turns out to be sub-Gaussian, which is confirmed both experimentally and numerically \cite{MM13}. The sub-Gaussian behavior was explained in Ref. \cite{BC16} using the stochastic closure theory of turbulence \cite{BB211,BB314}, and the analysis was improved in Ref. \cite{KBK19} using measurements from the Flow Physics Facility (FPF) at the University of New Hampshire.
Both of these studies used the results from homogeneous turbulence \cite{KBBS17} and made an assumption about the form of the fluctuating shear stress in the inertial layer, based on physical principles.
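For Gaussian fluctuations one has $\langle \tilde w^{2p} \rangle = (2p-1)!!\,\langle \tilde w^{2}\rangle^{p}$, so Eq. (\ref{eq:l-pfluct}) would predict $A_p = [(2p-1)!!]^{1/p}A_1$; the measured constants fall below this curve, which is what "sub-Gaussian" means here. A quick check of the Gaussian ratios:

```python
def double_factorial(n):
    """(2p-1)!! for odd n: n * (n-2) * ... * 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def gaussian_Ap_over_A1(p):
    """Ratio A_p/A_1 implied by Gaussian statistics: ((2p-1)!!)^(1/p)."""
    return double_factorial(2 * p - 1) ** (1.0 / p)

for p in (1, 2, 3, 4):
    print(p, gaussian_Ap_over_A1(p))   # 1, sqrt(3), 15^(1/3), 105^(1/4)
```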
In Ref. \cite{GGGC10}, a spectral theory for the log-law of the wall of the MVP was proposed, in which it is possible to derive the log-law in the inertial layer and the laminar profile in the viscous layer. The novel contribution is the precise form of the transition in the buffer layer, obtained using the Kolmogorov-Obukhov energy spectrum of turbulent fluctuations. The form of the MVP in the wake is also obtained. This was done by summing the energy of the wall-attached eddies, as hypothesised originally by Townsend in \cite{T76}.
In this paper, we propose a generalisation of the spectral theory that includes fluctuations in the streamwise velocity due to an essentially fluctuating wall shear stress. Fig.~\ref{fig:wx1} shows the spectral theory predictions of the profiles of the mean velocity and mean square velocity fluctuations across the viscous, buffer and inertial layers.
The rest of the paper is structured as follows. We summarise the analysis in Ref. \cite{GGGC10} and its extension in Section \ref{sec:MVP}, and generalise it to include the fluctuations in Section \ref{sec:fluct}. This produces the log-law of the wall in Eq. (\ref{eq:l-lfluct}) for the velocity fluctuations and its higher moments in Eq. (\ref{eq:l-pfluct}). Then in Section \ref{sec:functional}, we derive the functional form of the mean-square fluctuations in the viscous layer and the inertial layer. In Section \ref{sec:SCT}, we use the attached eddy hypothesis and the stochastic closure theory \cite{BB211,BB314} to derive the form of the Townsend-Perry and the generalized Townsend-Perry constants. This allows us to derive the streamwise fluctuations in the wall shear stress, and to remove the assumption, mentioned above, made in Refs. \cite{BC16} and \cite{KBK19}. Using theory-informed data analysis, we can construct the Townsend-Perry constants and the generalised Townsend-Perry constants. In Section \ref{sec:BW}, we extend the formulas for the mean square fluctuations to the buffer layer and the energetic wake. In Section \ref{sec:data}, we compare the predicted MVP and mean-square velocity profile from this spectral theory to experimental data. In Section \ref{sec:summary}, we conclude with a discussion on the proposed spectral theory and the role that Townsend's attached eddies play in it.
\section{The Spectral Theory}
\label{sec:MVP}
The typical velocity of an inertial eddy of size $s$ can be obtained by integrating out the kinetic energy contained in all eddies of sizes up to $s$ as in Ref. \cite{GGGC10}
\begin{equation}\label{eq:vs}
v_s^2 = \int_{1/s}^\infty E(k)dk,
\end{equation}
where the kinetic energy spectrum follows the Kolmogorov-Obukhov scaling with cutoffs at the injection and viscous scales, $E(k) =c_d(\eta k) \frac{2}{3}(\kappa_\epsilon \epsilon )^{2/3}k^{-5/3} c_e(Rk)$, with $\frac{2}{3}(\kappa_\epsilon \epsilon)^{2/3}k^{-5/3}$ being the Kolmogorov-Obukhov spectrum and $c_d (\eta k)$ and $c_e(R k)$ the phenomenological dimensionless correction functions in the dissipative range (set by the Kolmogorov scale $\eta$) and the energetic range (set by the system size $R$), respectively. Here $\kappa_\epsilon$ is a dimensionless parameter, $\epsilon$ is the turbulent energy dissipation rate, $\eta=\nu^{3/4}\epsilon^{-1/4}$ is the viscous length scale and $R$ is the largest length scale in the flow.
The dissipative correction function is typically an exponential cutoff function $c_d(\eta k) =\exp(-\beta_d \eta k)$, and the energetic-range
(wake) correction function is $c_e(Rk)=(1+(\beta_e/(Rk))^2)^{-17/4}$, which is the form that was proposed by von K\'arm\'an. $\beta_d$ and $\beta_e$ are non-negative fitting parameters that can be adjusted to data.
By the change of variables $\xi = sk$, we recast Eq. (\ref{eq:vs}) as
\begin{equation}
v_s^2 = (\kappa_\epsilon \epsilon s)^{2/3}I\left(\frac{\eta}{s},\frac{s}{R}\right),
\end{equation}
where the spectral function $I$ is given by the formula \cite{GGGC10}
\begin{align}
\label{eq:spectcont}
&I\left(\frac{\eta}{s},\frac{s}{R}\right)= \nonumber \\
&\frac{2}{3} \int_1^\infty e^{-\xi \beta_d \eta/s}\xi^{-5/3}\left(1+\left(\frac{\beta_e s}{R\xi}\right)^2\right)^{-17/6} d\xi.
\end{align}
The integral sums the energies of all eddies of a smaller radius than $s$, and computes their contribution to the energy of the eddy of radius $s$. This is the energy (or spectral) formulation of the attached eddy hypothesis of Townsend \cite{T76}. The $I$-function correctly captures the buffer layer, as the transition from the viscous to the inertial layer, and the asymptotics of the MVP in the energetic wake. The asymptotic values are such that in the inertial layer $I=1$ and in the viscous layer $I=0$. The $I$-function combines the Kolmogorov-Obukhov theory with the observed spectrum in the viscous layer, the inertial layer and the wake and is thus able to capture the transition from one layer to the next. In Ref.~\cite{GGGC10}, it was used to give the details of the MVP. In this paper, we will use it to capture the profile of mean-square fluctuations.
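The spectral function of Eq. (\ref{eq:spectcont}) is straightforward to evaluate numerically. A sketch using adaptive quadrature is given below; the values of $\beta_d$ and $\beta_e$ are illustrative placeholders, not the fitted parameters:

```python
import numpy as np
from scipy.integrate import quad

def I_spectral(eta_over_s, s_over_R, beta_d=7.0, beta_e=7.0):
    """Spectral function I(eta/s, s/R): summed energy of all eddies
    smaller than s, with dissipative (beta_d) and energetic-range
    (beta_e) cutoffs. The beta values here are illustrative."""
    def integrand(xi):
        c_d = np.exp(-xi * beta_d * eta_over_s)
        c_e = (1.0 + (beta_e * s_over_R / xi) ** 2) ** (-17.0 / 6.0)
        return (2.0 / 3.0) * c_d * xi ** (-5.0 / 3.0) * c_e
    val, _ = quad(integrand, 1.0, np.inf)
    return val

# Inertial-layer limit: no cutoffs, I = (2/3) * Int xi^(-5/3) dxi = 1:
print(I_spectral(0.0, 0.0))   # ~ 1.0
# Deep in the viscous layer the exponential cutoff suppresses I:
print(I_spectral(5.0, 0.0))   # ~ 0
```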
In the buffer layer a different scaling of the attached eddies comes into play: the $k_x^{-1}$ scaling of the spectrum, which has been debated in the literature but clearly shows up in recent simulations and experiments in the middle of the buffer layer, see Figure 9 (a) in Ref. \cite{LM15} and Figure 12 (b) in Ref. \cite{Sa18}. In the spectral theory, the corresponding $I$-function for this scaling regime is
\begin{align}
\label{eq:spectcont1}
&I_b\left(\frac{\eta}{s},\frac{s}{R}\right)= \nonumber\\
&\frac{2}{3}s^{-\frac{2}{3}} \int_1^\infty e^{-\xi \beta_d \frac{\eta}{s}}\xi^{-1}\left(1+\left(\frac{\beta_e s}{R\xi}\right)^2\right)^{-\frac{17}{6}} d\xi,
\end{align}
where the subscript $b$ stands for "buffer". The mean velocity is primarily influenced by the $I$-function, whereas the variation (fluctuation squared) in the buffer layer is greatly influenced by the $I_b$-function. $I$ is associated with the Kolmogorov-Obukhov energy cascade, $k_x^{-5/3}$, in the inertial layer, whereas $I_b$ is associated with the $k_x^{-1}$ scaling in the buffer layer. (Here $x$ denotes the streamwise direction.) We will take $I_b$ to be zero outside the buffer layer.
The splitting of the near-wall region based on the different scalings of the spectrum was proposed by Perry and Chong \cite{PC82}, who used it to build an interpolation model for the MVP and the variation; this model was improved in Ref. \cite{Va15}.
\section{The generalised log-law}
\label{sec:fluct}
In this section, we will give a simple derivation of the log-law for the
mean-square velocity profile that holds in the limit of large Reynolds number.
In the following section we derive the general form of the variation, which is not equally transparent.
We will generalize the derivation of the MVP in Ref. \cite{GGGC10} by adding a fluctuation to the mean velocity. We let the velocity along the wall be
\begin{equation}
v_1=u+v_1-u=u+w,
\end{equation}
where $u$ is the mean velocity obtained by averaging $v_1$ over time, and $w$ is the fluctuation.
The same derivations as in Ref. \cite{GGGC10} give the following equations for a dominant eddy of radius
$s=y$, if we include the velocity fluctuations. In Ref. \cite{GGGC10} the shear stress at the distance $y$ from the wall is given
by the formula ${\bar \tau_t} = \kappa_\tau \rho y v_y u'$, where $u'$ denotes the $y$ derivative of the velocity $u$ along the wall, and the overline indicates a non-fluctuating quantity. When velocity fluctuations are included, the shear stress becomes:
\begin{equation}
\label{eq:shear-stress}
\tau_t = \kappa_\tau \rho y v_y (u'+w'),
\end{equation}
where $\rho$ is the density, $v_y$ is the (rotational) velocity of an eddy at a distance $y$ from the wall
and $\kappa_\tau$ is a dimensionless proportionality factor.
The energy dissipation rate is related to the wall shear stress as ${\bar \epsilon} = \tau_t u'/\rho$ \cite{GGGC10}, and including the fluctuations, this becomes
\begin{equation}
\label{eq:energy}
\epsilon = \tau_t(u'+w')/\rho.
\end{equation}
The eddy velocity for an eddy with radius $s=y$ at the distance $y$ from the wall is the same
as in Ref. \cite{GGGC10}, and as discussed above,
\begin{equation}
\label{eq:e-viscosity}
v_y= (\kappa_\epsilon \epsilon y)^{1/3} \sqrt{I},
\end{equation}
where $I$ is the integral from Eq. (\ref{eq:spectcont}) and $\kappa_\epsilon$ is a dimensionless proportionality factor.
In the inertial layer $I=1$ and $\kappa_\epsilon = 4/5$ according to Kolmogorov's $4/5$ law.
Eliminating $\epsilon$ and $v_y$ from the three equations above, we obtain
\begin{equation}
\label{eq:shear-stress_1}
\tau_t= (\kappa_\epsilon \kappa_\tau^3)^{1/2} \rho y^2 (u'+w')^2 I^{3/4}.
\end{equation}
The viscous shear stress is $\rho \nu (u'+w')$ so the total shear stress, including the contribution from the
fluctuation is \cite{T76}
\begin{equation}
\tau_t + \rho \nu (u'+w') = \tau_0(1-y/R).
\end{equation}
Our assumption is that the wall shear stress $\tau_0$ also fluctuates about its mean value.
We change to rescaled variables in wall units, written here in terms of the friction factor $f$: $\tilde y=y Re\sqrt{f}/R$, $\tilde u = u/(U\sqrt{f})$ and $\tilde w=w/(U\sqrt{f})$, where $f=\langle \tau_0\rangle/\rho U^2$.
Then, the equation above becomes
\begin{equation}
\label{eq:total_stress1}
{\tilde \kappa}^2 {\tilde y}^2(\tilde u'+\tilde w')^2 I^{3/4}+(\tilde u'+\tilde w') = \frac{\tau_0}{\langle \tau_0 \rangle}\left(1-\frac{\tilde y}{Re\sqrt{f}}\right).
\end{equation}
If we let $\tilde y \to 0$, $\tilde w \to 0$ and integrate, we get the law of the viscous layer
\begin{equation}
\label{eq:viscous}
\tilde u = \tilde y,
\end{equation}
the laminar profile being
\begin{equation}
\label{eq:laminar}
\tilde u = \left(\tilde y-\frac{\tilde y^2}{2Re\sqrt{f}}\right).
\end{equation}
In the large Reynolds number limit, solving just for the mean velocity, we obtain the Prandtl-von K\'arm\'an law
\begin{equation}
\label{eq:velocity}
\tilde u = \frac{1}{\tilde \kappa}\log (\tilde y)+D.
\end{equation}
This is the correct leading term but the full formulas in the next section are more complicated.
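Equation (\ref{eq:total_stress1}) for the mean profile ($\tau_0 = \langle\tau_0\rangle$, $\tilde w = 0$) is a quadratic in $\tilde u'$ and can be solved pointwise. The sketch below, with an illustrative $\tilde\kappa$ and $I$ set to 1, shows the slope crossing over from the viscous value $\tilde u' = 1$ to the inertial-layer value $1/(\tilde\kappa \tilde y)$ that integrates to the Prandtl-von K\'arm\'an law:

```python
import math

KAPPA = 0.4   # illustrative value of the rescaled von Karman constant

def du_dy(y, I=1.0, rhs=1.0):
    """Positive root of kappa^2 y^2 (u')^2 I^(3/4) + u' = rhs,
    i.e. the mean-profile slope at distance y from the wall."""
    a = KAPPA ** 2 * y ** 2 * I ** 0.75
    if a == 0.0:
        return rhs   # viscous limit: u' = rhs
    return (-1.0 + math.sqrt(1.0 + 4.0 * a * rhs)) / (2.0 * a)

# Viscous layer: slope ~ 1; inertial layer: slope ~ 1/(kappa*y).
print(du_dy(1e-3))                 # ~ 1
print(du_dy(1e3) * KAPPA * 1e3)    # ~ 1, i.e. the log-law slope
```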
We now motivate the log-law for the variation.
If we solve for both the mean velocity and the fluctuation in the large Reynolds number limit,
we get that
\begin{equation}
\label{eq:velocity1}
\tilde u+\tilde w= \frac{\sqrt{\tau_0}}{\langle \tau_0\rangle^{1/2} \tilde \kappa} \log (\tilde y)+C.
\end{equation}
This is consistent with Eq. (\ref{eq:velocity}) in the sense that if $\sqrt{\tau_0}=\langle \tau_0\rangle^{1/2}$, then
$\tilde w =0$ and we recover Eq. (\ref{eq:velocity}).
Squaring Eq. (\ref{eq:velocity1}) then gives
\begin{equation}
{\tilde u}^2+2\tilde u \tilde w +{\tilde w}^2= \frac{\tau_0}{\langle \tau_0\rangle \tilde \kappa^2} (\log(\tilde y))^2+2\frac{\sqrt{\tau_0}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}} C\log(\tilde y) +C^2.
\end{equation}
Taking the average, using that $\langle \tilde w \rangle = 0$ and Eq. (\ref{eq:velocity}), we get that
\begin{equation}
\langle \tilde w^2 \rangle = \frac{2C\langle \sqrt{\tau_0}\rangle-2D\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}}\log(\tilde y) +C^2-D^2.
\end{equation}
By comparing this with the generalised log-law in Eq. (\ref{eq:l-lfluct}), for the fluctuations squared, we obtain
\begin{equation}
\label{eq:gloglaw}
\langle \tilde w^2 \rangle = -A \log(\tilde y)+B,
\end{equation}
where $A = - \frac{2C\langle \sqrt{\tau_0}\rangle-2D\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}}$ and $B = C^2-D^2$ are the Townsend-Perry constants. The full formulas in the next section show that Eq. (\ref{eq:gloglaw}) is the leading term and $A = - 2C\left(\frac{\langle \sqrt{\tau_0}\rangle-\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}}\right)$,
with $C=D$.
To simplify the notation, we will now drop the tildes from all variables, with the dimensionless units implicitly assumed, unless otherwise stated.
\section{The functional form of the Townsend-Perry law}
\label{sec:functional}
We will now use Eq. (\ref{eq:total_stress1}) to find the general form of the average of the squared fluctuations as a function of the distance to the wall. We consider Eq. (\ref{eq:total_stress1})
\begin{equation}
\label{eq:total_stress2}
{\kappa}^2 {y}^2( u'+ w')^2 I^{3/4}+(u'+w') = \frac{\tau_0}{\langle \tau_0 \rangle}\left(1-\frac{y}{Re\sqrt{f}}\right),
\end{equation}
and first set $I=0$ in the viscous layer. Then
\begin{equation}
\label{eq:u_o}
u = y - \frac{y^2}{2 Re\sqrt{f}}
\end{equation}
by averaging and integrating in $y$. Integrating Eq. (\ref{eq:total_stress2}) and subtracting $u$ gives
\begin{equation}
\label{eq:w_o}
w = \frac{\tau_0 - \langle \tau_0 \rangle}{\langle \tau_0 \rangle}\left( y - \frac{y^2}{2 Re\sqrt{f}}\right)
\end{equation}
and
\begin{equation}
\langle w^2 \rangle=\frac{\langle \tau_0^2 \rangle - \langle \tau_0 \rangle^2}{\langle \tau_0 \rangle^2}\left( y - \frac{y^2}{2 Re\sqrt{f}}\right)^2.
\end{equation}
In the inertial layer $I=1$ and ignoring the small $O(1/y^4)$ term, we get that
\begin{align}
u+w &= \frac{1}{2\kappa^2 y} + 2\frac{\sqrt{\tau_0}}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}} \nonumber\\
&-2 \frac{\sqrt{\tau_0}}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+K,
\end{align}
where $K$ is a constant. Then setting $w=0$, we get that
\begin{align}
u &= \frac{1}{2\kappa^2 y} + \frac{2}{\kappa}\sqrt{1-\frac{y}{2Re\sqrt{f}}}\nonumber\\
&-\frac{2}{\kappa}\tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+K',
\end{align}
where $K'$ is another constant, because $\tau_0$ becomes $\langle \tau_0 \rangle$. Subtracting $u$ from $u+w$ we get
\begin{align}
w &= 2\frac{(\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}}\nonumber\\
&-2 \frac{(\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+C,
\end{align}
where $C=K-K'$. Squaring $w$ and taking the average gives
\begin{align}
&&\langle w^2 \rangle = 4C\frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}} \nonumber\\
&-&4C \frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)\nonumber\\
&+&4\left[\frac{2(\langle\tau_0 \rangle-\sqrt{\langle \tau_0 \rangle}\langle \sqrt{\tau_0}\rangle)}{\kappa^2 \langle \tau_0 \rangle}\left(1-\frac{y}{2Re\sqrt{f}}\right.\right.\nonumber\\
&-& \left. 2\sqrt{1-\frac{y}{2Re\sqrt{f}}}\tanh^{-1}(\sqrt{1-\frac{y}{2Re\sqrt{f}}})\right)\nonumber\\
&+& \left. \left[\tanh^{-1}(\sqrt{1-\frac{y}{2Re\sqrt{f}}})\right]^2\right]+C^2.
\end{align}
From $\tanh^{-1}(x) = \frac{1}{2} \log(\frac{1+x}{1-x})$, we see that the second term in the last formula is of leading order and we get that
\begin{equation}
\label{eq:w2der}
\langle w^2 \rangle \sim 2C \frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\log\left(\frac{y}{Re\sqrt{f}}\right) + \mathrm{h.o.t.}
\end{equation}
This agrees with formula (\ref{eq:gloglaw}) above. For the higher order moments $\langle w^{2p} \rangle^{1/p}$ the similar term,
linear in $\tanh^{-1}$ and multiplied by $2C$, is of leading order,
\begin{equation}
\label{eq:wpder}
\langle w^{2p} \rangle^{1/p} \sim 2C \frac{\langle (\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})^p \rangle^{1/p}}{\kappa \sqrt{\langle \tau_0 \rangle}}\log\left(\frac{y}{Re\sqrt{f}}\right) + \mathrm{h.o.t.}
\end{equation}
These formulas establish the log dependence of the second moment of the fluctuations, with the Townsend-Perry constants, and the log dependence of the higher moments of the fluctuations, with the generalized Townsend-Perry constants,
and justify Eqs. (\ref{eq:l-lfluct}) and (\ref{eq:l-pfluct}). Together, Eqs. (\ref{eq:l-lfluct}) and (\ref{eq:l-pfluct}) can be called the generalised log-law of the wall.
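The leading logarithms in Eqs. (\ref{eq:w2der}) and (\ref{eq:wpder}) can be made explicit. Writing $x = y/(2Re\sqrt{f})$ and using $1-\sqrt{1-x} = x/(1+\sqrt{1-x})$,
\begin{equation}
\tanh^{-1}\left(\sqrt{1-x}\right)
= \frac{1}{2}\log\frac{1+\sqrt{1-x}}{1-\sqrt{1-x}}
= \frac{1}{2}\log\frac{\left(1+\sqrt{1-x}\right)^{2}}{x}
\sim -\frac{1}{2}\log\frac{x}{4}, \qquad x\to 0,
\end{equation}
so the terms linear in $\tanh^{-1}$ supply the $\log(y/Re\sqrt{f})$ behaviour, up to additive constants absorbed into $B_p$.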
\section{Derivation of the Generalized Townsend-Perry Constants}
\label{sec:SCT}
We consider the dependence of the fluctuation $w$ on the distance $x$ along the wall, to understand the Townsend-Perry constants. So far we have only considered $w(y)$ as a function of the distance $y$ from the wall, but $w(x,y)$ obviously depends on both variables $x$ and $y$. If we consider the eddy depicted in Fig. \ref{fig:wx}, then we see that the difference in momentum in the $x$ direction, across the eddy, is given by
\begin{equation}
\rho(w(x+s)-w(x-s)) \sim 2\rho s w_x,
\end{equation}
for $y$ fixed, where $w_x=\frac{d}{dx}w$.
\begin{figure}[h]
\centering
\includegraphics[width=.3\textwidth]{Figure_1}
\caption{The eddy of radius $s$ and the variation in the fluctuations across it in the $x$ (streamwise) direction.}
\label{fig:wx}
\end{figure}
This means that the total turbulent stress, across a vertical surface at $x$, denoted by a dotted line on Fig. \ref{fig:wx} for an eddy of radius $s\sim y$, is
\begin{equation}
\tau_0 = \tau_t+\tau_x,
\end{equation}
where $\tau_x= 2\kappa_\tau \rho y w_x v_y$, analogous to Eq. (\ref{eq:shear-stress}) above. Then we get, using
Eq. (\ref{eq:e-viscosity}) and
\begin{equation}
\epsilon = (\tau_t+\tau_x)(u'+w_x) \rho,
\end{equation}
that
\begin{equation}
\tau_t+\tau_x= \kappa^2 \rho I^{3/4} y^2(u'+w_x)^2,
\end{equation}
where prime denotes the derivative with respect to $y$, and
\begin{align}
(\tau_t+\tau_x)^{1/2} &= \kappa \rho^{1/2}I^{3/8}y(u'+w_x) \nonumber\\
&= \langle \tau_0 \rangle^{1/2}+ \kappa \rho^{1/2}I^{3/8}y|w_x|,
\end{align}
since both parts must be positive. The derivation is completely analogous to the derivation in Sec. \ref{sec:fluct},
but here with $w$ varying in the $x$ direction and $w_y=0$.
This gives that for $y$ fixed,
\begin{align}
\tau_0^{1/2}-\langle \tau_0 \rangle^{1/2} &= (\tau_t+\tau_x)^{1/2}-\langle \tau_0 \rangle^{1/2} \nonumber\\
&= \kappa \rho^{1/2}I^{3/8}y|w_x|.
\end{align}
Considering the leading order $\log(y/2Re\sqrt{f})$ term in Eq. (\ref{eq:w2der}) gives the Townsend-Perry constant
\begin{equation}
\label{eq:TP}
A_1=\frac{2C \rho^{1/2} y\langle |w_x|\rangle}{\sqrt{\langle \tau_0 \rangle}},
\end{equation}
and the generalized Townsend-Perry constants
\begin{equation}
\label{eq:GTP}
A_p=\frac{2C \rho^{1/2} y \langle |w_x|^{p}\rangle^{1/p}}{\sqrt{\langle \tau_0 \rangle}},
\end{equation}
by use of Eq. (\ref{eq:wpder}). This justifies the form of the stress tensor assumed in Ref. \cite{BC16} and used in Ref. \cite{KBK19}.
Finally, we get the expressions
\begin{equation}
A_1 = K \langle |w(x+y)-w(x-y)|\rangle
\end{equation}
and
\begin{equation}
A_p = K \langle |w(x+y)-w(x-y)|^{p} \rangle^{1/p},
\end{equation}
where $K$ is a constant. This produces the relationship between the Townsend-Perry and the generalized Townsend-Perry constants and the structure functions of turbulence, see Refs. \cite{BB211,BB314,KBBS17}, used in Refs. \cite{BC16,KBK19},
\begin{equation}
\label{eq:TPstru1}
A_1 = K C_1|y^*|^{\zeta_1},
\end{equation}
\begin{equation}
\label{eq:TPstru2}
A_2 = K C^{1/2}_2|y^*|^{\zeta_2/2},
\end{equation}
and
\begin{equation}
\label{eq:TPstrup}
A_p = K C^{1/p}_{p}|y^*|^{\zeta_{p}/p},
\end{equation}
where $-y\leq y^* \leq y$.
Taking the ratio washes out the constant $K$,
\begin{equation}
\label{eq:ratio}
\frac{A_p}{A_2}= \frac{C^{1/p}_{p}}{C^{1/2}_2}|y^*|^{\zeta_{p}/p-\zeta_2/2},
\end{equation}
where the $C_p$s are the Kolmogorov-Obukhov coefficients of the structure functions from Ref. \cite{BB211,BB314,KBBS17}.
The last ratio was used in Ref. \cite{KBK19} to get agreement between experimental data and theory.
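To see concretely that the prefactor $K$ drops out of the ratio above, one can evaluate $A_p/A_2$ both directly and via the closed form; the exponents $\zeta_p$ and coefficients $C_p$ below are placeholders for illustration, not the fitted Kolmogorov-Obukhov values.

```python
# Placeholder scaling exponents zeta_p and coefficients C_p (illustrative only).
zeta = {2: 0.70, 4: 1.28}
C = {2: 20.0, 4: 110.0}
K, ystar = 1.0 / 12.952, 0.3  # K is arbitrary; it cancels in the ratio

def A(p):
    # A_p = K * C_p^{1/p} * |y*|^{zeta_p / p}
    return K * C[p] ** (1.0 / p) * abs(ystar) ** (zeta[p] / p)

ratio_direct = A(4) / A(2)
ratio_formula = (C[4] ** 0.25 / C[2] ** 0.5) * abs(ystar) ** (zeta[4] / 4 - zeta[2] / 2)
print(ratio_direct, ratio_formula)  # identical: K has washed out
```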
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figure_2bis.pdf}
\caption{The average of the MVP as a function of $\log(y)$, where $y$ is the distance from the wall. Comparison of experimental data with theory (black line). (a) The theoretical curve is given by an $I$-integral that interpolates between the $k_x^{-5/3}$ and the $k_x^{-1}$ scalings with $a=0.9994$ in the buffer region. (b) The theoretical curve has a uniform $I$-integral with the $k_x^{-5/3}$ scaling present in the buffer and inertial regions.}
\label{fig:mean velocity}
\end{figure}
\section{The Spectral Theory of Mean-Square Fluctuations}
\label{sec:BW}
In the above sections we have not used the spectral information in the integral $I$, in Eq. (\ref{eq:spectcont}). We have just used the attached eddy hypothesis and set $I=0$ in the viscous layer and $I=1$ in the inertial layer. But following
Ref. \cite{GGGC10}, we can now use the spectral information through the integral $I$ to find the beginning of the buffer layer and the form of both the MVP $u$ and the fluctuation $w$ in the buffer layer and in the wake. This allows one to obtain the full functional form of both $u$ and $w$ as functions of the distance $y$ from the wall and to compare them with the experimental data in the next section. By use of the energy Eq. (\ref{eq:energy}) and the relation $\eta = \nu^{3/4}\epsilon^{-1/4}$ we can find an expression for $\eta/y$, the viscosity parameter that increases as we approach the wall $y\to 0$. Setting the fluctuation equal to zero gives
\begin{equation}
\eta/y = (\tilde u'(1-\tilde y/Re\sqrt{f})-(\tilde u')^2)^{-1/4}\tilde y^{-1}
\end{equation}
and we find a formula for $\tilde y$ by combining this equation with
\begin{equation}
{\kappa}^2 {\tilde y}^2( u')^2 I^{3/4}+u' = \frac{\tau_0}{\langle \tau_0 \rangle}\left(1-\frac{y}{Re\sqrt{f}}\right).
\end{equation}
The resulting formula is given in Ref. \cite{GGGC10},
\begin{equation}
\tilde y = \left(\frac{(\eta/y)^{4/3}+ \kappa^{4/3}I^{1/2}(\eta/y,0)}{\kappa^{2/3}(\eta/y)^{8/3}I^{1/4}(\eta/y,0)}\right).
\end{equation}
It gives the minimum value of $\tilde y$ for which $I(\eta/y,0)>0$ and the small eddies begin to contribute to the turbulent shear stress $\tau_t >0$. In fact, for each value of the parameter $\beta_d$ there is a minimum value of $\tilde y$, denoted $\tilde y_v$, below which $I=0$.
Only after this minimum does $\tilde y$ increase with $\eta/y$. This gives the end of the viscous layer and the beginning of the buffer layer and a value of the MVP, $u_v$ at
$\tilde y_v$. It also gives the value of the fluctuation $w$ at $\tilde y_v$ and we can integrate the differential equations
for $u$ and $w$, with respect to $y$, to get the form of both functions in the buffer layer, inertial layer and the wake. Along with the formulas in the viscous layer this gives the full functional form. The differential equations use the spectral information through the full functional form of $I$ and the two parameters $\beta_d$ and $\beta_e$ must be fitted to experimental data.
Approximations to the MVP and mean square fluctuations, based on the formulas in Sec. \ref{sec:functional} are given in Fig. \ref{fig:mean velocity} and \ref{fig:mean variation}, respectively. To compare with experimental data one must solve the differential equations
\begin{equation}
\label{eq:udiff}
u'=-\frac{1}{2 \kappa^2 I^{3/4}y^2} + \frac{1}{\kappa I^{3/8}y} \sqrt{1 - \frac{y}{Re\sqrt{f}}+\frac{1}{4\kappa^2 I^{3/4}y^2}}
\end{equation}
with the initial condition $u = 4.17$ at the beginning of the buffer layer $y =4.17$. For the fluctuation we first have to solve the differential equation, ignoring terms of order $O(1/y^3)$ and higher,
\begin{equation}
\label{eq:wdiff}
w'=\frac{ \sqrt{\tau_0}-\sqrt{\langle\tau_0\rangle}}{ \kappa I^{3/8}y\sqrt{\langle \tau_0 \rangle}} \sqrt{1 - \frac{y}{Re\sqrt{f}}},
\end{equation}
with the initial condition $w=\frac{ \tau_0-\langle\tau_0\rangle}{ \langle \tau_0 \rangle}\left(4.17-\frac{17.39}{2 Re\sqrt{f}}\right)$, from Eq. (\ref{eq:w_o}), at the beginning of the buffer layer. Here $I(y)$ is the integral in Eq. (\ref{eq:spectcont}).
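A minimal numerical sketch of this integration for the MVP, using the right-hand side of Eq. (\ref{eq:udiff}): here the spectral integral is frozen at its inertial-layer value $I=1$, and $\kappa$ and $Re\sqrt{f}$ are illustrative assumptions — the full calculation uses the $y$-dependent $I(y)$ of Eq. (\ref{eq:spectcont}).

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, I, Re_sqrtf = 0.41, 1.0, 1.0e4  # assumed illustrative parameters

def du_dy(y, u):
    # Right-hand side of the MVP equation with I held constant
    c = kappa**2 * I**0.75 * y**2
    return -1.0 / (2.0 * c) + np.sqrt(1.0 - y / Re_sqrtf + 1.0 / (4.0 * c)) / (
        kappa * I**0.375 * y
    )

# Integrate from the beginning of the buffer layer with u = 4.17 there.
sol = solve_ivp(du_dy, (4.17, 1000.0), [4.17], dense_output=True, rtol=1e-9, atol=1e-9)
u500, u1000 = sol.sol(500.0)[0], sol.sol(1000.0)[0]
slope = (u1000 - u500) / np.log(2.0)
print(slope)  # approaches 1/kappa ~ 2.44 deep in the log layer
```

With constant $I$ the solution recovers the classical log-law slope $1/\kappa$ far from the wall; the interest of the full theory lies in what $I(y)$ does in the buffer layer and the wake.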
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{Figure_3.pdf}
\caption{The average of the fluctuation squared as a function of $\log(y)$, where $y$ is the distance from the wall (dimensionless units). Comparison of experimental data with theory (blue line).}
\label{fig:mean variation}
\end{figure*}
In practice it is easier to vary the initial conditions than to change $\beta_d$ and $\beta_e$; thus we let the initial condition $y_o$ of $w$, from Eq. (\ref{eq:w_o}), vary slightly depending on the Reynolds number in the simulations below. The other initial condition $w_o$ is given by the formula $w_o=\frac{ \tau_0-\langle\tau_0\rangle}{ \langle \tau_0 \rangle}\left(y_o-\frac{y_o^2}{2 Re\sqrt{f}}\right)$.
\section{Comparison with Experimental Data}
\label{sec:data}
The data we use to compare with the theory come from wind tunnel experiments at the University of Melbourne, which used the nano-scale thermal anemometry probe (NSTAP) to conduct velocity measurements in the high Reynolds number boundary layer up to $Re_\tau = 20000$. The NSTAP has a sensing length almost one order of magnitude smaller than conventional hot-wires, and hence allows fully resolved measurements of the velocity fluctuations, \cite{Sa18}, \cite{Ba19}. The size of the University of Melbourne wind tunnel and the accuracy of the NSTAP permit measurements over a very large range of scales. We use the averaged velocity time-series at Reynolds numbers $Re_\tau=6000, 10000, 14500, 20000$ and the averaged variance at the same Reynolds numbers. Fig. \ref{fig:mean velocity} shows the mean velocity profiles as a function of normalized distance from the wall, whereas Fig. \ref{fig:mean variation} shows the averaged fluctuation squared (variation) as a function of the normalized distance to the wall.
Both are semi-log plots.
First, let us consider the curve describing the MVP in Fig. \ref{fig:mean velocity} (panel b). It starts with Eq. (\ref{eq:u_o}) for the viscous profile because the $I$-function is zero. But then we reach the value $y_v$ where
the first attached eddies appear ($y=4.17$) and the viscous profile changes: instead of reaching its maximum $u=Re \sqrt{f}/2$ at $y=Re \sqrt{f}$, the attached eddies increase the viscosity (decrease the Reynolds number) and the MVP reaches its maximum increase at $y \approx 15$, independent of the Reynolds number. The energy transfer of the attached eddies is captured by the $I$-integral, and we integrate the differential equation given by Eq. (\ref{eq:udiff}), from $y=4.17$, with the initial condition $u=4.17$. This gives the MVP in Fig. \ref{fig:mean velocity} (b). This was already done in Ref. \cite{GGGC10} and describes how the attached eddies transfer energy
into the buffer and the inertial layer. However, we notice that the predicted MVP overestimates the mean velocity in the buffer region. This is because the $I$-function from Eq. (\ref{eq:spectcont}) does not account for the formation of the attached eddies, which reduce the net energy transfer in the direct cascade.
The curves for the fluctuations squared in Fig. \ref{fig:mean variation} are obtained in a similar manner. The attached eddies fix the peak of $\langle w^2 \rangle$ at $y \approx 15$, and the peak profiles can be fitted by the viscous formula $\langle w^2 \rangle = a (y-\frac{y^2}{30})^2$, where $a \sim( \langle \tau _o^2\rangle -\langle \tau_o \rangle^2)/ \langle \tau_o \rangle^2$. This fit is shown in Fig. \ref{fig:mean variation} (c). The peak position is experimentally observed to be fixed, but its height shows a weak Reynolds number dependence,
$a = -3.06+0.99 \log(Re)$, see \cite{Sa18}. This relationship can be tested using our theory; this will be done in another publication, see also \cite{CS20}. Then we integrate the differential equation from Eq. (\ref{eq:wdiff}) for $w$, with the initial data described in the last section, starting from a point to the right of the peak where the peak profile above fits the initial condition; this gives the profile of the fluctuations squared down to the flat part in the buffer layer. At the beginning of the flat part, $y \approx 60$, the second scaling from Section \ref{sec:MVP} begins to dominate the fluctuations, modeling an inverse cascade of attached eddies in the buffer layer. We then switch to the buffer $I$-function $I_b$ in the integration and integrate with $I_b$ until we get into the inertial region, where the Kolmogorov-Obukhov scaling dominates again and the attached eddies break up. This produces the curves in Fig. \ref{fig:mean variation}.
We can now compare the functional form of the fluctuations squared shown in Fig. \ref{fig:mean variation}
with the predictions of the stochastic closure theory (SCT) of turbulence, used in Refs. \cite{BC16} and \cite{KBK19}, to compute the Townsend-Perry constants in the inertial (log) layer. These computations use the first structure function $S_1$ of turbulence, and we explain how they are performed; see \cite{BC16} and \cite{KBK19} for more information. The computed Townsend-Perry constants are listed in Table I.
The first structure function of turbulence is, see \cite{KBBS17},
\begin{eqnarray*}
&&E(\vert u(x,t)-u(y,t)\vert)=S_1(x,y,t)\nonumber\\
&&=\frac{2}{C}\sum_{k\in\mathbb{Z}^3\backslash\{0\}}\frac{\vert d_k\vert(1-e^{-\lambda_kt})}{\vert k\vert^{\zeta_1}+\frac{4\pi^2\nu}{C}\vert k\vert^{\zeta_1+\frac{4}{3}}}\vert \sin(\pi k\cdot(x-y))\vert,
\end{eqnarray*}
where the Reynolds number dependence enters through the viscosity $\nu$, and $E$ denotes the expectation (ensemble average).
To get the Kolmogorov-Obukhov coefficients, $C_p$ in
\begin{equation}
S_p(r, \infty) \sim C_p r^{\zeta_p},
\end{equation}
for the lag variable $r$ small, and $\zeta_p$ the scaling exponents, we send $t$ to $\infty$ in the above formulas
and project onto the longitudinal lag variable ${\bf r} = (r,0,0)$. For $p=1$ this becomes
\begin{align}
S_1 &\sim \frac{2\pi^{\zeta_1}}{C} \sum_{k\neq 0} \frac{|d_k |}{(1+\frac{4\pi^2\nu}{C}|k|^{4/3})} r^{\zeta_1}\nonumber\\
&=\frac{4\pi^{\zeta_1}}{C} \sum_{k = 1}^\infty \frac{a}{(a^2+k^m)(1+\frac{4\pi^2\nu}{C}|k|^{4/3})} r^{\zeta_1},
\end{align}
see \cite{KBBS17}, where $\zeta_1 = 0.37$, see \cite{BB211}. Now we use the values for $\nu$ in Table 1 of \cite{KBK19}, and the corresponding values
for $a$, $m$ and $C$ from Table 3 in the same paper. The Reynolds numbers 6430, 10,770, 15,740 and 19,670 are close enough to ours, 6000, 10,000, 14,500, and 20,000, that we can use the values of the parameters in \cite{KBK19}. This gives the values in Table I, where $A_1 \sim K|y^*|^{\zeta_1} C_1$, see Section \ref{sec:SCT}, and the proportionality factor
$K|y^*|^{\zeta_1} = 1/12.952$ is computed at the Reynolds number 15,740, where the approximated $A_1$ coincides with the measured $A_1$. The $\log$ functions with coefficient $A_1$, from the third column in Table I, and using the constant $B_1$ from the fourth column in Table I, are then compared to the experimental and theoretical values in Fig. \ref{fig:mean variation}. The spanwise Townsend-Perry constants, for the spanwise fluctuations, can be computed similarly by projecting onto the spanwise lag variable ${\bf t}=(0,t,0).$
In Fig. \ref{fig:mean variation}, panel (a), the Townsend-Perry constant $A_1$ computed by the SCT does
not agree with the measured slope. This was already observed in Ref. \cite{KBK19}: for low Reynolds numbers the $C_1$s do not provide a good approximation to the $A_1$s; they only do so for large Reynolds numbers, and the
discrepancy in (a) occurs at the smallest Reynolds number. This does not happen for the generalized Townsend-Perry constants; the reasons are explained in Ref. \cite{KBK19}, and for them the $C_p$s, $p \ge 2$, provide good approximations to the $A_p$s for all Reynolds numbers.
\begin{table}
\begin{center}
\begin{tabular}{ | l | l | l | p{1.0cm} |}
\hline
$Re_\lambda$ & $C_1$ & $A_1$ & $B_1$ \\ \hline
6000&\ 9.449&0.730& 9.373 \\ \hline
10,000&15.628&1.207&13.073 \\ \hline
14,500&15.500&1.197&13.573 \\ \hline
20,000&14.994&1.158&13.673 \\ \hline
\end{tabular}
\caption{Here, the approximate $A_1$ value is computed from $C_1$ using the proportionality factor
$A_1=C_1/(K|y^*|^{\zeta_1})=C_1/12.952$.}
\end{center}
\end{table}
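The arithmetic behind Table I can be checked directly: dividing each $C_1$ by the stated proportionality factor $12.952$ reproduces the tabulated $A_1$ values to three decimals.

```python
# C_1 and A_1 values as tabulated in Table I, and the stated factor.
C1 = [9.449, 15.628, 15.500, 14.994]
A1_table = [0.730, 1.207, 1.197, 1.158]
factor = 12.952

A1_computed = [c / factor for c in C1]
for a_comp, a_tab in zip(A1_computed, A1_table):
    print(round(a_comp, 3), a_tab)  # the two columns agree
```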
\begin{figure}[t]
\centering
\includegraphics[width=.4\textwidth]{Figure_4.pdf}
\caption{Sketch of the instantaneous streaks, in the streamwise direction, and the wall-attached eddies, in the spanwise direction.}
\label{fig:wx1}
\end{figure}
\section{Discussion}
\label{sec:summary}
We used the spectral theory of the MVP and the variation profile to represent both, and compared them with the experiments of Ref. \cite{Sa18} for a range of Reynolds numbers. Assuming that the wall shear stress is a fluctuating quantity, we derived the log-law for the variation, Eq. (\ref{eq:l-lfluct}), that was proposed by Townsend and measured by Perry and Chong. This law involves the Townsend-Perry constants.
This was first done in the large Reynolds number limit and then for general Reynolds numbers. The Reynolds number dependence of the Townsend-Perry constants is determined by the stochastic closure theory \cite{BC16}, \cite{KBK19}. We derive the log-law for the higher moments of the fluctuations and the Generalized Townsend-Perry constants based on the functional form of the variation and use the stochastic closure theory to express them in terms of the Kolmogorov-Obukhov coefficients of the structure functions of turbulence \cite{KBBS17}. This confirms the results in Refs. \cite{BC16} and \cite{KBK19}.
The spectral function $I$ derived in Ref. \cite{GGGC10} plays a central role in this theory. It can be considered to be the analytic expression of Townsend's theory of wall-attached eddies. It quantifies when the first eddies appear at the boundary of the viscous and the buffer layer and when they are fully developed in the inertial layer. It even quantifies the limit of their influence in the energetic wake. By introducing the spectral theory into the analysis, it resolves many of the issues that we face in boundary layer turbulence.
The $I$-function corresponds to the Kolmogorov-Obukhov cascade $k_x^{-5/3}$ in the inertial layer, but in the buffer layer another cascade, $k_x^{-1}$, dominates the fluctuations, although its influence on the MVP is small. This is an inverse cascade that can accelerate larger and larger attached eddies. The energy transfer of this cascade is captured by the $I$-function in the buffer layer, $I_b$. With it we are able to produce the functional form of the averaged fluctuation squared in the buffer layer. Once in the inertial layer, the original $I$-function dominates again.
The final confirmation of this spectral theory is that we are able to improve the fit to the experimental values of the MVP in Ref. \cite{GGGC10} by use of the $I_b$ function in the buffer layer. Although this effect on the MVP is small, the attached eddies siphon a small amount of energy from the MVP in the buffer layer. We model this by a linear combination of the $I$ and $I_b$ functions, $(1-a)I+aI_b$, in the buffer layer, where $a$ is small. This produces a better fit to the measured MVP in the buffer region, as shown in Fig. \ref{fig:mean velocity} (a), whereas the fit without this linear combination, shown in Fig. \ref{fig:mean velocity} (b), is not as good.
It is fair to ask what the Townsend attached eddies actually look like since our spectral method is based on them.
Unlike the streamwise streaks and associated vortices that have been visualized since the experiments of Kline et al. in the 1960s, see Refs. \cite{Kl67} and \cite{Ji99}, the attached eddies are difficult to visualize, either in experiments or in simulations. We provide a sketch in Fig. \ref{fig:wx1}, where streamwise streaks are shown gradually being lifted from the boundary by the flow, and perpendicular to them spanwise attached eddies are being deformed by the alternating slow and fast streamwise flow into hairpin vortices. This happens both in experiments and in simulations, see Ref. \cite{MM19}. However, these hairpin vortices are made unstable by the striations in the streamwise flow, and the typical attached eddies are irregular in shape, with the general feature of being stretched by the flow and attached to the wall. One must interpret their influence in a statistical sense.
\paragraph*{Acknowledgements:} We are thankful to Ivan Marusic, Milad Samie and Christian E. Willert for kindly sharing with us the wind turbulence experimental data, and to Joe Klewicki for useful conversations. We are grateful to Knut Bauer for providing us with the graphic illustrations. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 through the Kavli Institute for Theoretical Physics.
\bibliographystyle{plain}
\newcommand{\head}[1]{\subsubsection*{#1}}
\pagestyle{headings}
\markright{Reference sheet: \texttt{natbib}}
\usepackage{shortvrb}
\MakeShortVerb{\|}
\begin{document}
\thispagestyle{plain}
\newcommand{\BibTeX}{\textsc{Bib}\TeX}
\newcommand{\package}[3]{\texttt{#1}\def\filedate{#2}\def\fileversion{#3}}
\begin{center}{\bfseries\Large
Reference sheet for \texttt{natbib}\ usage}\\
\large(Describing version \fileversion\ from \filedate)
\end{center}
\begin{quote}\slshape
For a more detailed description of the \texttt{natbib}\ package, \LaTeX\ the
source file \texttt{natbib.dtx}.
\end{quote}
\head{Overview}
The \texttt{natbib}\ package is a reimplementation of the \LaTeX\ |\cite| command,
to work with both author--year and numerical citations. It is compatible with
the standard bibliographic style files, such as \texttt{plain.bst}, as well as
with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago},
\texttt{astron}, \texttt{authordate}, and of course \texttt{natbib}.
\head{Loading}
Load with |\usepackage[|\emph{options}|]{natbib}|. See list of
\emph{options} at the end.
\head{Replacement bibliography styles}
I provide three new \texttt{.bst} files to replace the standard \LaTeX\
numerical ones:
\begin{quote}\ttfamily
plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst
\end{quote}
\head{Basic commands}
The \texttt{natbib}\ package has two basic citation commands, |\citet| and
|\citep| for \emph{textual} and \emph{parenthetical} citations, respectively.
There also exist the starred versions |\citet*| and |\citep*| that print
the full author list, and not just the abbreviated one.
All of these may take one or two optional arguments to add some text before
and after the citation.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. (1990)\\
|\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex]
|\citep{jon90}| & (Jones et al., 1990)\\
|\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\
|\citep[see][]{jon90}| & (see Jones et al., 1990)\\
|\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex]
|\citet*{jon90}| & Jones, Baker, and Williams (1990)\\
|\citep*{jon90}| & (Jones, Baker, and Williams, 1990)
\end{tabular}
\end{quote}
\head{Multiple citations}
Multiple citations may be made by including more than one
citation key in the |\cite| command argument.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\
|\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\
|\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\
|\citep{jon90a,jon90b}| & (Jones et al., 1990a,b)
\end{tabular}
\end{quote}
\head{Numerical mode}
These examples are for author--year citation mode. In numerical mode, the
results are different.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. [21]\\
|\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex]
|\citep{jon90}| & [21]\\
|\citep[chap.~2]{jon90}| & [21, chap.~2]\\
|\citep[see][]{jon90}| & [see 21]\\
|\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex]
|\citep{jon90a,jon90b}| & [21, 32]
\end{tabular}
\end{quote}
\head{Suppressed parentheses}
As an alternative form of citation, |\citealt| is the same as |\citet| but
\emph{without parentheses}. Similarly, |\citealp| is |\citep| without
parentheses. Multiple references, notes, and the starred variants
also exist.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citealt{jon90}| & Jones et al.\ 1990\\
|\citealt*{jon90}| & Jones, Baker, and Williams 1990\\
|\citealp{jon90}| & Jones et al., 1990\\
|\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\
|\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\
|\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\
|\citetext{priv.\ comm.}| & (priv.\ comm.)
\end{tabular}
\end{quote}
The |\citetext| command
allows arbitrary text to be placed in the current citation parentheses.
This may be used in combination with |\citealp|.
\head{Partial citations}
In author--year schemes, it is sometimes desirable to be able to refer to
the authors without the year, or vice versa. This is provided with the
extra commands
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citeauthor{jon90}| & Jones et al.\\
|\citeauthor*{jon90}| & Jones, Baker, and Williams\\
|\citeyear{jon90}| & 1990\\
|\citeyearpar{jon90}| & (1990)
\end{tabular}
\end{quote}
\head{Forcing upper cased names}
If the first author's name contains a \textsl{von} part, such as ``della
Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the
beginning of a sentence. One can force the first letter to be in upper case
with the command |\Citet| instead. Other upper case commands also exist.
\begin{quote}
\begin{tabular}{rl@{\quad$\Rightarrow$\quad}l}
when & |\citet{dRob98}| & della Robbia (1998) \\
then & |\Citet{dRob98}| & Della Robbia (1998) \\
& |\Citep{dRob98}| & (Della Robbia, 1998) \\
& |\Citealt{dRob98}| & Della Robbia 1998 \\
& |\Citealp{dRob98}| & Della Robbia, 1998 \\
& |\Citeauthor{dRob98}| & Della Robbia
\end{tabular}
\end{quote}
These commands also exist in starred versions for full author names.
\head{Citation aliasing}
Sometimes one wants to refer to a reference with a special designation,
rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be
defined and used, textual and/or parenthetical with:
\begin{quote}
\begin{tabular}{lcl}
|\defcitealias{jon90}{Paper~I}|\\
|\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\
|\citepalias{jon90}| & $\Rightarrow$ & (Paper~I)
\end{tabular}
\end{quote}
These citation commands function much like |\citet| and |\citep|: they may
take multiple keys in the argument, may contain notes, and are marked as
hyperlinks.
\head{Selecting citation style and punctuation}
Use the command |\bibpunct| with one optional and 6 mandatory arguments:
\begin{enumerate}
\item the opening bracket symbol, default = (
\item the closing bracket symbol, default = )
\item the punctuation between multiple citations, default = ;
\item the letter `n' for numerical style, or `s' for numerical superscript
style, any other letter for
author--year, default = author--year;
\item the punctuation that comes between the author names and the year
      (default = ,);
\item the punctuation that comes between years or numbers when common author
lists are suppressed (default = ,);
\end{enumerate}
The optional argument is the character preceding a post-note, default is a
comma plus space. In redefining this character, one must include a space if
one is wanted.
Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep{jon90,jon91,jam92}|
\end{quote}
into [Jones et al. 1990; 1991, James et al. 1992].
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep[and references therein]{jon90}|
\end{quote}
into (Jones et al. 1990; and references therein).
\head{Other formatting options}
Redefine |\bibsection| to the desired sectioning command for introducing
the list of references. This is normally |\section*| or |\chapter*|.
Define |\bibpreamble| to be any text that is to be printed after the heading but
before the actual list of references.
Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to
the list of references.
Define |\citenumfont| to be a font declaration or command like |\itshape|
or |\textit|.
Redefine |\bibnumfmt| as a command with an argument to format the numbers in
the list of references. The default definition is |[#1]|.
The indentation after the first line of each reference is given by
|\bibhang|; change this with the |\setlength| command.
The vertical spacing between references is set by |\bibsep|; change this with
the |\setlength| command.
\head{Automatic indexing of citations}
If one wishes to have the citations entered in the \texttt{.idx} indexing
file, it is only necessary to issue |\citeindextrue| at any point in the
document. All following |\cite| commands, of all variations, then insert
the corresponding entry to that file. With |\citeindexfalse|, these
entries will no longer be made.
\head{Use with \texttt{chapterbib} package}
The \texttt{natbib}\ package is compatible with the \texttt{chapterbib} package
which makes it possible to have several bibliographies in one document.
The package makes use of the |\include| command, and each |\include|d file
has its own bibliography.
The order in which the \texttt{chapterbib} and \texttt{natbib}\ packages are loaded
is unimportant.
The \texttt{chapterbib} package provides an option \texttt{sectionbib}
that puts the bibliography in a |\section*| instead of |\chapter*|,
something that makes sense if there is a bibliography in each chapter.
This option will not work when \texttt{natbib}\ is also loaded; instead, add
the option to \texttt{natbib}.
Every |\include|d file must contain its own
|\bibliography| command where the bibliography is to appear. The database
files listed as arguments to this command can be different in each file,
of course. However, what is not so obvious, is that each file must also
contain a |\bibliographystyle| command, \emph{preferably with the same
style argument}.
\head{Sorting and compressing citations}
Do not use the \texttt{cite} package with \texttt{natbib}; rather use one of the
options \texttt{sort} or \texttt{sort\&compress}.
These also work with author--year citations, making multiple citations appear
in their order in the reference list.
\head{Long author list on first citation}
Use option \texttt{longnamesfirst} to have first citation automatically give
the full list of authors.
Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|,
given before the first citation.
\head{Local configuration}
Any local recoding or definitions can be put in \texttt{natbib.cfg} which
is read in after the main package file.
\head{Options that can be added to \texttt{\char`\\ usepackage}}
\begin{description}
\item[\ttfamily round] (default) for round parentheses;
\item[\ttfamily square] for square brackets;
\item[\ttfamily curly] for curly braces;
\item[\ttfamily angle] for angle brackets;
\item[\ttfamily colon] (default) to separate multiple citations with
colons;
\item[\ttfamily comma] to use commas as separators;
\item[\ttfamily authoryear] (default) for author--year citations;
\item[\ttfamily numbers] for numerical citations;
\item[\ttfamily super] for superscripted numerical citations, as in
\textsl{Nature};
\item[\ttfamily sort] orders multiple citations into the sequence in
which they appear in the list of references;
\item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple
numerical citations are compressed if possible (as 3--6, 15);
\item[\ttfamily longnamesfirst] makes the first citation of any reference
the equivalent of the starred variant (full author list) and subsequent
citations normal (abbreviated list);
\item[\ttfamily sectionbib] redefines |\thebibliography| to issue
|\section*| instead of |\chapter*|; valid only for classes with a
|\chapter| command; to be used with the \texttt{chapterbib} package;
\item[\ttfamily nonamebreak] keeps all the authors' names in a citation on
one line; causes overfull hboxes but helps with some \texttt{hyperref}
problems.
\end{description}
\end{document}
\section{Introduction} \label{sec:intro}
A variety of small scale energy releases on the Sun (flaring bright points, active region transient brightenings, coronal jets, etc.) have been studied using X-ray and radio observations. While X-rays are dominated by thermal emission from the coronal plasma, radio observations are sensitive to non-thermal emission also. The observations of low frequency type III radio bursts in association with X-ray bright point flares \citep{Kundu1980,Kundu1994} clearly indicated that the latter are capable of accelerating particles to non-thermal energies, as well as producing the heated material detected in soft X-rays.
The detection of type III bursts together with coronal X-ray jets strengthened the above argument \citep{Aurass1994,Kundu1995}.
These results imply that radio observations are a useful complementary tool for observing signatures of weak, transient energy releases in the solar atmosphere, since the related non-thermal emission can be easily detected \citep{Benz1995,Mugundhan2017}.
Note that counterparts to some of the X-ray transients have been reported at higher radio frequencies also.
For example, \cite{Gopalswamy1994,White1995,Gary1997} observed correlated active region transient brightenings in soft X-rays and microwaves.
X-ray microflares are another independent observational evidence for the small scale energy releases in the solar atmosphere. They were first reported by \cite{Lin1984}. The energy involved (${\sim}10^{26}$\,erg) is approximately six orders of magnitude lower than the corresponding value for some of the largest solar flares.
Sensitive observations with the soft X-ray telescope onboard YOHKOH revealed that the microflares are present in the `quiet' Sun also \citep{Krucker1997}.
The study of these microflares is of interest because of their possible bearing on the problems of coronal heating and solar flares \citep{Hudson1991,Hannah2011,Benz2017}. Analogous to microflares, \cite{Kundu1986} reported observations of weak non-thermal microbursts in the solar corona at low radio frequencies.
Though it was hinted that a common source of energetic particles could be responsible for both the microflares and microbursts, reports of direct association are rare. The microbursts were found to have some characteristics similar to those of normal type III bursts, but the relationship was inconclusive. Further, the observations reported were at separate individual frequencies, unlike typical spectral observations of type III bursts \citep{Kundu1986,White1986,Thejappa1990,Subramanian1993}.
Recent spectroscopic imaging observations indicate that the weak non-thermal radio emission at low frequencies is more like type I radio bursts \citep{Sharma2018,Mondal2020}.
However, there were no details about the counterparts to the radio events in other frequency bands of the electromagnetic spectrum. Note that type I bursts represent the smallest discrete releases of energy observable \citep{Bastian1998}. They are considered to be evidence of successive electron accelerations. So, establishing their association with activities in other regions of the solar atmosphere would be useful to understand the acceleration processes of the non-thermal electrons at the sites of elementary/weakest energy releases.
In this situation, we report observations of weak type I radio burst emission during the same time as soft X-ray observations of a sub-A class level flare and EUV brightening from the `quiet' solar corona in the complete absence of active regions and flare/coronal mass ejection (CME) activity.
\section{Observations} \label{sec:obs}
The radio observations were carried out using the different facilities operated by the Indian Institute of Astrophysics (IIA) in the Gauribidanur Observatory\footnote{\url{https://www.iiap.res.in/?q=centers/radio}} \citep{Ramesh2011a,Ramesh2014}. The radio spectral images were obtained with the
Gauribidanur LOw-frequency Solar Spectrograph (GLOSS) in the frequency range 85\,-\,40 MHz \citep{Ebenezer2001,Ebenezer2007,Kishore2014,Hariharan2016b}. The GLOSS is a one-dimensional array of eight log-periodic dipole antennas (LPDA) along a North-South baseline. The half-power width of the response pattern of GLOSS around local noon is
${\approx}90{\arcdeg}{\times}6{\arcdeg}$ (right ascension, R.A.\,{$\times$}\,declination, decl.) at the highest frequency of operation, i.e. 85\,MHz. While the width of the response pattern along R.A. is nearly independent of frequency, its width along the declination varies inversely with the frequency due to the interferometric arrangement of the individual antennas. The observations were carried out with an integration time of ${\approx}$1\,sec and bandwidth of
${\approx}$1\,MHz. The minimum detectable flux density is ${\approx}$75\,Jy
(1\,Jy\,=\,$\rm 10^{-26}\,Wm^{-2}Hz^{-1}$) at a typical frequency like 80 MHz.
The antenna and the receiver systems were calibrated by carrying
out observations in the direction of the Galactic center as described in \cite{Kishore2015}. The two-dimensional radio images were obtained with the Gauribidanur RAdioheliograPH (GRAPH) at 80 MHz \citep{Ramesh1998,Ramesh1999a,Ramesh2006b}. The GRAPH is a T-shaped radio interferometer array of 384 LPDAs. Its angular resolution (`beam' size) for observations close to the zenith is
${\approx}5{\arcmin}{\times}7{\arcmin}$ (R.A.\,$\times$\,decl.) at the above frequency. The integration time is $\approx$250\,msec and the observing bandwidth is
$\approx$2\,MHz. The field-of-view (FOV) in the GRAPH images is
${\approx}2{\arcdeg}{\times}2{\arcdeg}$, and the pixel size is
${\approx}14{\arcsec}$. The minimum detectable flux density is
${\approx}$2\,Jy. The GRAPH data were calibrated using the standard Astronomical Image Processing System (AIPS). The combined use of the imaging and spectral data helps to understand the radio signatures associated with the corresponding solar activity in a better manner (see, e.g., \cite{Sasikumar2014}).
Figure \ref{fig:figure1} shows the GLOSS observations on 2020 April 21 in the time interval 04:51\,-\,05:30\,UT. The patches of bright emission during the period
$\approx$05:09\,-\,05:11\,UT are typical of type I or noise storm bursts from the solar corona (see, e.g., \cite{Iwai2013,Mugundhan2018b}). It is widely believed that the bursts are due to plasma radiation at the fundamental plasma frequency \citep{Melrose1980,Kai1985}.
Figure \ref{fig:figure2} shows the frequency averaged time profile of the dynamic spectrum in Figure \ref{fig:figure1}. The presence of enhanced activity during the interval $\approx$05:09\,-\,05:11\,UT can be clearly noticed. It is also similar to the time profiles of groups of type I bursts (see, e.g., \cite{Ramesh2013b,Mugundhan2016}).
No H$\alpha$ and/or GOES soft X-ray flares were reported during the burst interval mentioned above\footnote{\url{https://www.solarmonitor.org/data/2020/04/21/meta/noaa{\_}events{\_}raw{\_}20200421.txt}}. The Sun was totally free of any active regions\footnote{\url{https://www.solarmonitor.org/?date=20200421}} and/or CMEs\footnote{\url{https://cdaw.gsfc.nasa.gov/CME{\_}list/UNIVERSAL/2020{\_}04/univ2020{\_}04.html}}.
The overall location of the bursts can be inferred from the GRAPH difference image (obtained by subtracting a pre-event image to clearly identify the weak emission features) in Figure \ref{fig:figure3} at 80\,MHz.
The two spatially separated contours marked 1 \& 2 correspond to the two maxima (indicated by the same set of numbers) in the time profile of the bursts in Figure \ref{fig:figure2}.
The brightness temperature ($T_{b}$) of the contours 1 \& 2, estimated using the `beam' size of the GRAPH at 80\,MHz, are ${\approx}3{\times}10^{5}$\,K. The `dots' inside the contours in Figure \ref{fig:figure3} correspond to the centroids of some of the individual type I bursts (see Figure \ref{fig:figure4}). We located them following the methodology described in \cite{Ramesh2020a}.
Any ionospheric refraction effects on the radio source positions in the present case are expected to be minimal since the observations were carried out close to local noon, when the zenith angle of the Sun is smallest.
Note that the elevation of the Sun on 2020 April 21 during the present radio observations was ${\approx}90^{\arcdeg}$.
Secondly, the total duration of the type I radio bursts in the present case is only
$\approx$2\,min (see Figures \ref{fig:figure2} \& \ref{fig:figure4}). This is less than the period
of $\approx$20\,min over which radio source positions at low frequencies usually change due to ionospheric effects \citep{Stewart1982,Mercier1996}.
\begin{figure}[t!]
\centerline{\includegraphics[angle=0,height=7.5cm,width=12cm]{figure1new.pdf}}
\caption{GLOSS dynamic spectrum of the solar radio emission observed on 2020 April 21. The bright emission during the period $\approx$05:09\,-\,05:11\,UT correspond to the type I solar radio bursts mentioned in the text.}
\label{fig:figure1}
\end{figure}
\begin{figure}[t!]
\centerline{\includegraphics[angle=0,height=7.5cm,width=12cm]{figure2new.pdf}}
\caption{The `green' colour plot corresponds to the frequency averaged time profile of the GLOSS dynamic spectrum in Figure \ref{fig:figure1}. Its amplitude values are indicated in the left hand side ordinate axis. The `red' colour line is the fit to the data points. The labels 1 \& 2 indicate the epochs of maximum radio emission from the regions 1 \& 2 in Figure \ref{fig:figure3}, respectively. The `blue' colour profile is the light curve of the soft X-ray emission from the Sun close to the same epoch as the radio observations. Its amplitude values are indicated in the right hand side ordinate axis. The data were obtained with the Chandrayaan-2/XSM \citep{Mithun2020} in the energy range
$\approx$1\,-\,5\,keV with a time binning of ${\approx}$120\,sec.}
\label{fig:figure2}
\end{figure}
\begin{figure}[t!]
\centerline{\includegraphics[angle=0,height=6cm,width=12.5cm]{figure3new.pdf}}
\caption{A composite of the GRAPH difference image of the bursts in Figure \ref{fig:figure1} at 80 MHz and the EUV observations at 94{\AA} with the SDO/AIA \citep{Lemen2012} around the same time as the radio and X-ray observations in Figures \ref{fig:figure1} \& \ref{fig:figure2} on 2020 April 21. The contours labelled 1 \& 2 correspond to the GRAPH observations. The background is the EUV image. Solar north is straight up and east is to the left. The bigger and smaller `boxes' in the left panel image indicate the region around the EUV brightening and the location of maximum emission, respectively. The `zoomed' version of the same brightening is shown in the right side panel. The peak flux density in the GRAPH observations is $\rm {\approx}\,241\,Jy$. It is nearly the same for the contours 1 \& 2, which correspond to the two maxima 1 \& 2 in the radio time profile in Figure \ref{fig:figure2}, respectively. The contours shown are at 80\% level. The `dots' inside the contours 1 \& 2 indicate the centroid locations of the individual type I bursts a\,-\,e \& f\,-\,k in Figure \ref{fig:figure4}, respectively.}
\label{fig:figure3}
\end{figure}
\section{Analysis and Results} \label{sec:anares}
Recently, \cite{Vadawale2021} reported observations of `quiet' Sun X-ray microflares with the Chandrayaan-2/XSM during the solar minimum 2019-2020 (see, e.g., \cite{Ramesh2020b}). Upon inspection we found that some of these flares were observed during the same epoch as the low frequency radio observations of the Sun from Gauribidanur. We considered the X-ray flare observed on 2020 April 21 at
${\approx}$05:10 UT (see Figure \ref{fig:figure2}) for the present work since both radio spectral and imaging observations were available. There was also an EUV brightening observed with the
SDO/AIA at 94{\AA} (see Figure \ref{fig:figure3}) around the same time as the type I radio bursts (Figures \ref{fig:figure1} \& \ref{fig:figure2}) and the X-ray flare (Figure \ref{fig:figure2}). The location of the northern radio contour with label `1' in Figure \ref{fig:figure3} corresponds reasonably well with the location of the EUV brightening. The observations of the type I radio bursts over a larger area compared to the EUV brightening could be due to the divergence of the associated field lines (see, e.g., \cite{Li2017}). We speculate that the presence of the two spatially separated radio contours 1 \& 2 (particularly with the latter located just below the equator in the southern hemisphere) suggests interaction at two different locations between inclined, large magnetic loops with foot points in the same hemisphere, north in the present case \citep{Wild1968,Simnett1998}. Note that the probability of trans-equatorial loops is expected to be minimal since there were no active regions on the solar disk. Information on the polarization characteristics of the regions 1 \& 2 would have helped to verify the above. But, observations with the GRAPH in its current configuration are limited to the total intensity mode.
We also checked the location of the 1st sidelobe in the GRAPH `beam', particularly in the north-south direction, to rule out the possibility of any spurious pick-up. It was $\gtrsim\,14^{\arcmin}$ away from the main lobe. The spacing between the contours 1 \& 2 in Figure \ref{fig:figure3} is shorter than this. Secondly, the amplitude of the sidelobe is smaller by a factor of
20 (${\approx}$\,13\,dB). But the strengths of the two sources 1 \& 2 are nearly the
same.
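The dB-to-linear conversion behind the quoted factor of 20 is a one-liner (a sketch; the 13 dB figure is from the text):

```python
# Convert the ~13 dB sidelobe suppression quoted in the text to a linear factor.
sidelobe_suppression_db = 13
factor = 10 ** (sidelobe_suppression_db / 10)
print(round(factor))  # ~20
```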
The peak flux of the XSM flare is
$\rm {\approx}6{\times}10^{-9}\,Wm^{-2}$. It was a very weak event (see
Figure \ref{fig:figure2}).
The total duration of the event is ${\approx}$5\,min. There appear to be two `peaks' in the flare light curve with a noticeable difference between the corresponding count rates.
The type I radio bursts are present only during the initial phase of the X-ray emission, i.e. close to the 1st of the two `peaks' mentioned above.
The total duration of the radio event is shorter $({\approx}$2\,min).
Assuming that both the X-ray and radio events are related to a common primary phenomenon, the comparatively shorter duration of the radio event indicates that the electrons responsible for its occurrence are probably thermalized quickly. As a result they cannot travel to larger heights in the corona, from where the low frequency radio emission primarily originates \citep{Mondal2020}. The shorter duration of the radio bursts could also be due to the emission being non-thermal in nature as compared to the soft X-ray emission \citep{Reid2017}. Nevertheless we independently calculated the associated energy from the radio observations.
\begin{figure}[t!]
\centerline{\includegraphics[angle=0,height=7.5cm,width=12cm]{figure2anew.pdf}}
\caption{`Zoomed' version of the observations around the maxima in the radio time profile in Figure \ref{fig:figure2}. The labels a\,-\,k indicate some of the individual type I bursts.}
\label{fig:figure4}
\end{figure}
The energy associated with a type I burst can be estimated using the relation $\rm E\,{=}\,S{\delta}t{\delta}{\nu}R^{2}{\Omega}e^{\tau}$ \citep{Elgaroy1977}. Here $\rm S$ is the flux density of the burst, $\rm {\delta}$t is the duration of the burst, ${\nu}$ is the frequency of observation, $\rm {\delta}{\nu}$ is the bandwidth of the burst, $\rm R$ is the Sun-Earth distance, ${\Omega}$ is the solid angle into which the radio waves are emitted, and $\rm \tau$ is the optical depth. In the present case, $\rm S\,{\approx}\,241\,Jy$ (see Figure \ref{fig:figure3}),
$\rm {\delta}t\,{\approx}\,$1\,sec, and $\rm {\delta}{\nu}\,{\approx}\,5\,MHz$ near 80\,MHz (see Figure \ref{fig:figure1}). Assuming
${\Omega}\,{=}\,$0.15\,steradians \citep{Steinberg1974}, $\rm {\tau}\,{\approx}$\,3 at 80 MHz \citep{Ramesh2005b}, and an efficiency ($\eta$) of ${\approx}10^{-10}$ for the type I burst emission process \citep{Prasad2004}, we find $\rm E\,{\approx}\,8.1{\times}10^{22}$\,erg. This is consistent with the reports that ${\sim}10^{21}$\,{-}\,$10^{23}$\,ergs are needed for a single type I burst \citep{Tomin2018}.
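As a quick numerical cross-check (a sketch; the Sun-Earth distance and the J-to-erg conversion are standard constants, not from the text), the quoted values reproduce $\rm E\,{\approx}\,8.1{\times}10^{22}$\,erg once the radiated energy is divided by the assumed efficiency $\eta$:

```python
import math

# Values quoted in the text for the type I burst near 80 MHz.
S = 241e-26      # flux density: 241 Jy in W m^-2 Hz^-1
dt = 1.0         # burst duration, s
dnu = 5e6        # burst bandwidth, Hz
R = 1.496e11     # Sun-Earth distance, m (standard value)
Omega = 0.15     # solid angle of emission, sr
tau = 3.0        # optical depth at 80 MHz
eta = 1e-10      # assumed radiative efficiency

E_radiated = S * dt * dnu * R**2 * Omega * math.exp(tau)  # radiated radio energy, J
E_electrons = E_radiated / eta * 1e7                      # total electron energy, erg
print(f"E ~ {E_electrons:.1e} erg")  # ~8.1e22 erg
```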
We also calculated the energy using the relation $\rm E\,{=}\,n_{th}(n/n_{th})VE_{m}$ (see for e.g. \cite{Ramesh2010c}). Here $\rm n_{th}$ is the number density of the background thermal electrons, $\rm n$ is the number density of the non-thermal electrons, $\rm V$ is the volume of the burst source, and $\rm E_{m}$ is the mean energy of the individual electrons. In the present case $\rm n_{th}\,{=}\,7.9{\times}10^{7}\rm cm^{-3}$ and $\rm E_{m}\,{\approx}\,5$\,keV \citep{Vadawale2021}. Assuming $\rm n/n_{th}\,{=}\,1.23{\times}10^{-7}$ at 80 MHz \citep{Thejappa1991} and
V\,=\,$10^{30}\rm cm^{3}$ (corresponding to a density scale height of
$\rm {\approx}\,10^{10}\,cm$ in the solar corona), we find $\rm E\,{\approx}\,7.8{\times}10^{22}$\,erg. This is in good agreement with the estimated energy using the observed flux density, duration, bandwidth, etc. of the burst in the present case.
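The second estimate can be checked the same way (a sketch; the eV-to-erg conversion is a standard constant, not from the text):

```python
n_th = 7.9e7               # background thermal electron density, cm^-3
ratio = 1.23e-7            # assumed n/n_th at 80 MHz
V = 1e30                   # burst source volume, cm^3
E_m = 5e3 * 1.602e-12      # mean electron energy: 5 keV in erg

E = n_th * ratio * V * E_m
print(f"E ~ {E:.1e} erg")  # ~7.8e22 erg
```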
We would like to mention here that the noise storm radiative efficiency $\eta$ mentioned above is typically in the range ${\sim}\,10^{-6}$\,-\,$10^{-10}$. In the present case both the type I bursts and the associated X-ray emission were short lived. Therefore it is likely that the electron acceleration responsible for the type I bursts were triggered by the same process responsible for the associated X-ray microflare \citep{Crosby1996}. The energy of the latter is typically ${\sim}\,10^{27}$\,erg.
Reports indicate that for such an energy input, $\eta$ is expected to be in the range ${\sim}\,10^{-9}$\,-\,$10^{-10}$ \citep{Prasad2004}. We assumed $\eta\,{\approx}\,10^{-10}$ since the observed type I bursts were also weak. The close agreement between the different energy estimates mentioned above supports our assumption on the value of $\eta$. However, it should be kept in mind that the above calculations will give a lower energy for the type I burst if we assume $\eta\,>\,10^{-10}$. Hence a tighter constraint on the value of $\eta$ would be better.
Proceeding further, we find that the area enclosed by the contours in Figure \ref{fig:figure3} is nearly the same as that of the GRAPH `beam' size at 80 MHz mentioned earlier, i.e.\,${\approx}5^{\arcmin}{\times}7^{\arcmin}$. But results obtained from (i) high angular resolution observations of the solar corona at low radio frequencies during solar eclipses (lunar occultation technique) and (ii) independent long baseline interferometer observations indicate that the `true' size of the individual type I bursts is ${\lesssim}15^{\arcsec}$ \citep{Ramesh2001b,Kathiravan2011,Ramesh2012b,Mugundhan2016,Mugundhan2018a}. There are also reports that the upper limit to the size of a type I burst source is
${\approx}14^{\arcsec}$ \citep{Melrose1980}. These values are much smaller than the GRAPH `beam' size. \cite{Kundu1990,Malik1996,Willson1997} showed earlier that the centroids of type I burst sources are spatially distributed within the associated noise storm emitting region.
Type I burst models too predict scattered small-scale sites of energy release \citep{Klein1995}.
The dispersion in the centroids of some of the individual type I bursts in the present case (see Figures \ref{fig:figure3} \& \ref{fig:figure4}) is consistent with this.
Therefore it is possible that the contours in Figure \ref{fig:figure3} correspond to an ensemble of type I burst sources, each of size ${\approx}14^{\arcsec}{\times}14^{\arcsec}$. So, we calculated the maximum possible total energy of the type I bursts as
$\rm E_{t}\,{\approx}\,\frac{5{\times}7{\times}3600{\times}8.1{\times}10^{22}}{14{\times}14}\,{\approx}\,5.3{\times}10^{25}$\,erg.
This is in reasonable agreement with the range of energies
($3{\times}10^{26}$\,-\,$7{\times}10^{27}$\,erg) for the soft X-ray microflares reported by \cite{Vadawale2021} since the authors had mentioned that their estimates represent upper limits. Note that the minimum possible energy of the type I bursts in the present case is $\rm E\,{\approx}\,8.1{\times}10^{22}$\,erg. So, our estimates indicate a range of ${\approx}10^{22}$\,-\,$10^{25}$ erg.
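The beam-to-source area scaling used for $\rm E_{t}$ above can be reproduced directly (a sketch; only the arcmin-to-arcsec conversion is added):

```python
E_single = 8.1e22                 # erg, single-burst energy from the text
beam_area = (5 * 60) * (7 * 60)   # GRAPH beam, 5' x 7', in arcsec^2
source_area = 14 * 14             # assumed individual source size, arcsec^2

E_total = E_single * beam_area / source_area
print(f"E_t ~ {E_total:.1e} erg")  # ~5.3e25 erg
```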
\cite{Benz2002} had earlier reported EUV flares in the `quiet' solar corona with energy budget ${\approx}10^{24}$\,-\,$10^{26}$ erg.
\cite{Lin1985} showed that the total energy released into the interplanetary medium in solar electrons above 2\,keV is ${\approx}10^{25}$\,-\,$10^{26}$ erg.
The above numbers and arguments confirm that type I radio bursts are an independent ground-based observational tool to probe weak activity in the `quiet' regions of the corona as well, in addition to their known association with sunspot activity (see, e.g., \cite{Ramesh2000b}).
\section{Summary} \label{sum}
We have presented co-temporal/co-spatial observations of weak type I radio bursts, an X-ray microflare, and an EUV brightening from the `quiet' Sun, which was completely devoid of any active regions. There is close agreement between the energy budgets estimated independently from the radio and X-ray observations. As far as we know, this is the first time such simultaneous observations of transient activity in the `quiet' Sun have been reported. Considering that type I radio bursts like those described in this work hint at activity in the outer layers of the solar corona, which is currently inaccessible to observations in X-rays and extreme ultra-violet (EUV), combined investigations of weak energy releases observed at the same time in all the aforementioned domains would be helpful to understand the energies deposited at different levels in the solar corona in addition to the associated mechanisms themselves. For example, \cite{Li2017} showed that magnetic reconnection driven by multiple moving magnetic features \citep{Harvey1973,Bentley2000} in/near an active region at the photosphere is correlated with EUV brightenings and type I bursts. But there were reports of
H$\alpha$ and X-ray flares during the observing period reported by the above authors. Several active regions were also present. Nevertheless, it would be interesting to explore such moving features in the `quiet' Sun as well, to explain weak energy releases as described in this work.
We express our gratitude to the Gauribidanur observatory staff members for their help in the observations and upkeep of the facilities. M.Rajesh and K.P.Santosh are acknowledged for their assistance to the present work. The SDO/AIA data are courtesy of the NASA/SDO and the AIA science teams. We thank the referee for his/her kind comments which helped us to describe the results more clearly.
\section{Introduction }
For a field $k$, we denote by $\overline{k}$ its algebraic closure.
Let $R(X)=\sum_{n=0}^{\infty}b(n)X^{n}$ represent a rational function
in $\mathbb{Q}(X)$ and suppose that $b(n)$ is a $d$-th power in
$\mathbb{Q}$ for all large $n\in\mathbb{N}$. Pisot's $d$-th root
conjecture states that one can choose a $d$-th root $a(n)$ of $b(n)$
such that $\tilde{R}(X):=\sum a(n)X^{n}$ is again a rational function
in $\overline{\mathbb{Q}}(X)$. The sequence $\{b(n)\}$ coming from
the rational function $R(X)$ is a linear recurrence sequence, which
can be written as an \emph{exponential polynomial}, which we now define.
An \emph{exponential polynomial over a field $k$} is a sequence $b:\mathbb{N}\rightarrow k$
of the form
\begin{equation}
n\mapsto\sum_{i\in I}B_{i}(n)\beta_{i}^{n},\label{eq:exp-poly}
\end{equation}
where $I$ is a (finite) set of indices, each $\beta_{i}\in k^{*}$
is nonzero and each $B_{i}\in k[T]$ is a single-variate polynomial.
When it can be arranged so that each $B_{i}$ is constant, we say
that $b$ is simple. For any exponential polynomial $b:\mathbb{N}\rightarrow k$
and any natural number $d$, it is clear that $b^{d}:\mathbb{N}\rightarrow k$,
defined by $n\mapsto b(n)^{d}$, is still an exponential polynomial.
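For a concrete (hypothetical) illustration of this closure under powers: with $b(n)=(n+1)2^{n}+3^{n}$ and $d=2$, expanding the square collects the products $\beta_{i}^{n}\beta_{j}^{n}=(\beta_{i}\beta_{j})^{n}$, giving the exponential polynomial $(n+1)^{2}4^{n}+2(n+1)6^{n}+9^{n}$. A quick sketch verifying this:

```python
def b(n):
    # b(n) = (n+1)*2^n + 3^n, an exponential polynomial over Q
    return (n + 1) * 2**n + 3**n

def b_squared(n):
    # Expanding b(n)^2 term by term gives another exponential polynomial.
    return (n + 1)**2 * 4**n + 2 * (n + 1) * 6**n + 9**n

assert all(b(n)**2 == b_squared(n) for n in range(20))
```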
The following result of Zannier \cite{Zannier2000} essentially proves
its converse, which is a generalization of Pisot's $d$-th root conjecture
stated earlier.
We also refer to \cite{Zannier2000} for a survey on related works.
\begin{theorem}[\cite{Zannier2000}] \label{thm:Pisot's_Conj}Let $b$
be an exponential polynomial over a number field $k$, and $d\ge2$
be an integer. Suppose that $b(n)$ is the $d$-th power of some element
in $k$ for all but finitely many $n$. Then there exists an exponential
polynomial $a$ over $\overline{k}$ such that $a(n)^{d}=b(n)$ for
all $n$. \end{theorem}
The main purpose of this paper is to investigate a function-field
analog of Theorem \ref{thm:Pisot's_Conj}.
Let $C$ be a smooth projective algebraic curve of genus $\mathfrak{g}$
defined over an algebraically closed field ${\bf k}$ of characteristic
zero. Let $K:=\mathbf{k}(C)$ be its function field. We will always
denote by $\mathfrak{p}$ a point in $C(\mathbf{k})$, by $S$ a finite
subset of $C(\mathbf{k})$. Since $K$ contains the algebraically-closed
field $\mathbf{k}$, note that for any pair of exponential polynomials
$a:\mathbb{N}\rightarrow K$ and $c:\mathbb{N}\rightarrow\mathbf{k}$
we have that $ca^{d}:\mathbb{N}\rightarrow K$, defined by $n\mapsto c(n)a(n){}^{d}$,
is still an exponential polynomial whose $n$-th term is the $d$-th
power of some element in $K$ for all $n\in\mathbb{N}$. A plausible
statement obtained from Theorem \ref{thm:Pisot's_Conj} by replacing
the number field $k$ by our function field $K$ must therefore have
its conclusion modified to the existence of exponential polynomials
$a$, $c$, respectively over $\overline{K}$ and over $\mathbf{k}$,
such that $c(n)a(n)^{d}=b(n)$ for all $n$. Our result in this direction
is as follows.
\begin{theorem} \label{dPisot} Let $b(n)=\sum_{i=1}^{\ell}B_{i}(n)\beta_{i}^{n}$
be an exponential polynomial over $K$, i.e. $B_{i}\in K[T]$ and
$\beta_{i}\in K^{*}$. Let $\Gamma$ be the multiplicative subgroup
of $K^{*}$ generated by $\beta_{1},\hdots,\beta_{\ell}$. Assume
that $\Gamma\cap{\bf k}^{*}=\{1\}$. If $b(n)$ is a $d$-th power
in $K$ for infinitely many $n\in\NN$, then there exists an exponential
polynomial $a(m)=\sum_{i=1}^{r}A_{i}(m)\alpha_{i}^{m}$, $A_{i}\in\overline{K}[T]$, $\alpha_{i}\in\overline{K}^{*}$,
and a polynomial $R\in{\bf k}[T]$ such that $b(m)=R(m)a(m)^{d}$
for all $m\in\mathbb{N}$. \end{theorem}
\begin{remark} The assumption that $b(n)$ is a $d$-th power in
$K$ for infinitely many $n\in\NN$ is weaker than the one in Theorem
\ref{thm:Pisot's_Conj}.
We refer to \cite{CZ1998} for a result over number fields under both
this weaker assumption and the existence of a dominant $\beta_{i}$,
i.e. a unique $\beta_{i}$ of maximal or minimal absolute value. \end{remark}
\begin{remark} \label{rem: tor}
In the notation of Theorem \ref{dPisot}, it is standard to notice
that $b$ consists of the $q$ disjoint subsequences $b_{j}$, defined
by $n\mapsto b(j+qn)$, where $j\in\{0,\ldots,q-1\}$ and $q$ is
the order of the torsion subgroup of $\Gamma$; moreover, each $b_{j}$
is an exponential polynomial whose associated $\Gamma_{j}$ is torsion-free.
With this observation, we may generalize Theorem \ref{dPisot} so
that the assumption $\Gamma\cap{\bf k}^{*}=\{1\}$ is relaxed to that
$\Gamma\cap{\bf k}^{*}$ is finite and the conclusion only holds for
some $b_{j}$ rather than $b$.
\end{remark}
\begin{remark} \label{rem: rk1}
In the case where $b$ is simple, i.e., each $B_{i}$ is constant,
we can relax the hypothesis on $\Gamma$ in Theorem \ref{dPisot}
so that the case where $\Gamma\cap{\bf k}^{*}$ is infinite cyclic,
generated by $\gamma$, can also be treated. In this new case, modifying
slightly our proof of Theorem \ref{dPisot}, we can conclude
that $b(m)=c(m)a(m)^{d}$ for all $m\in\mathbb{N}$, where $c$ is
a simple exponential polynomial over $\mathbf{k}$ given by $m\mapsto\sum_{i=1}^{r}c_{i}\gamma^{e_{i}m}$
for some $c_{i}\in{\bf k}$ and $e_{i}\in\mathbb{Z}$. It seems difficult
to further relax the hypothesis on $\Gamma$.
\end{remark}
Our proof of Theorem \ref{dPisot} contains two major ingredients,
both rely on the special features of function fields of characteristic
zero. One of the ingredients is the result (restated as Theorem \ref{buchi_func_field}
in Section \ref{section4}) by Pasten and Wang \cite{PW2015} motivated
by B{\"u}chi's $d$-th power problem, which has a similar flavor as
Pisot's $d$-th root conjecture but arising from different purposes.
While working on an undecidability problem related to Hilbert's tenth
problem in the 1970s, B{\"u}chi formulated a related arithmetic problem,
which can be stated in more generality as follows: Let $k$ be a number
field. Does there exist an integer $M$ such that the only monic polynomials
$G\in k[T]$ of degree $d$ satisfying that $G(1),\hdots,G(M)$ are
$d$-th powers in $k$ are precisely those of the form $G(T)=(T+c)^{d}$
for some $c\in k$? This problem remains unsolved, while its analogs
have been investigated intensively in recent years. In particular,
the analog over function fields of characteristic zero was solved
completely, even with an explicit bound on $M$; see \cite{Buchi2013}
and \cite{Pa}. We refer to \cite{PW2015} for a survey of relevant
works. The other ingredient, which is also developed in this paper,
is the function-field analog of the recent work of Levin \cite{Levin:GCD}
for number fields and Levin-Wang \cite{levin2019greatest} for meromorphic
functions on GCD estimates of two multivariable polynomials over function
fields evaluated at arguments which are $S$-units. We will use the estimates
through the following result, which is of independent interest.
\begin{theorem} \label{thm:y^e=00003D00003D00003D00003D00003DFg^l}
Let $d\ge2$ be an integer and $F\in K[x_{1},\dots,x_{n}]$. Assume
that $F$ cannot be expressed as $a\mathbf{x}^{\mathbf{i}}G^{d}$ for any $a\in K^{*}$,
any monomial $\mathbf{x}^{\mathbf{i}}\in K[x_{1},\dots,x_{n}]$, and
any $G\in K[x_{1},\dots,x_{n}]$. Then we have the following conclusion:
For any $u_{1},\hdots,u_{n}\in\mathcal{O}_{S}^{*}$, there exist a
positive integer $m$ and rationals $c_{1}$ and $c_{2}$,
all depending only on $\left(d,n,\deg F,\max_{1\le j\le n}h(u_{j})\right)$,
such that if $F(u_{1}^{\ell},\ldots,u_{n}^{\ell})$ is a $d$-th power
in $K$ with some $\ell\ge c_{1}\tilde{h}(F)+c_{2}\max\{1,2\gen-2+|S|\}$,
then $u_{1}^{m_{1}}\cdots u_{n}^{m_{n}}\in{\bf k}$ for some $(m_{1},\hdots, m_{n})\in\mathbb{Z}^{n}\setminus\{(0,\ldots,0)\}$
with $\sum|m_{i}|\le2m$.
\end{theorem} Here $\tilde{h}(F)$ is the
relevant height of $F$ to be defined in the next section.
{\begin{remark} We cannot drop $a$ in the assumption
of Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}. For
example, if $a=u_{1}\in\mathcal{O}_{S}^{*}$ and $F(x_{1}):=ax_{1}$,
we always have that $F(u_{1}^{d\ell-1})$ is a $d$-th power in $K$
for all $\ell\in\mathbb{N}$.\end{remark}}
The assumption in Theorem \ref{dPisot} that $\Gamma\cap\mathbf{k}^{*}$
is trivial implies that every minimal set of generators of $\Gamma$
is multiplicatively independent modulo ${\bf k}$. This suggests how
Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l} plays a
role in our proof of Theorem \ref{dPisot}.
We briefly describe the core idea connecting GCD estimates and our proof
of Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}, as introduced
by Corvaja-Zannier \cite{CZ2008}. After
some reduction, we only need to treat the case where $F$ is $d$-th
power free. Given a tuple $(u_{1},\hdots,u_{n},y)\in(\mathcal{O}_{S}^{*})^{n}\times K$
satisfying $y^{d}=F(u_{1},\ldots,u_{n})$, we
will construct a polynomial $G\in K[x_{1},\dots,x_{n}]$ with controllable
height, depending on $F$ and the $\frac{u_{i}'}{u_{i}}$, such that $(y^{d})'=G(u_{1},\ldots,u_{n})$, where
$'$ denotes a global derivation on $K$. For example, if
$F:=x_{1}^{2}+\cdots+x_{n}^{2}$, then our construction will yield
$G:=2\frac{u_{1}'}{u_{1}}x_{1}^{2}+\hdots+2\frac{u_{n}'}{u_{n}}x_{n}^{2}.$
As $d\ge2$, the number of common zeros of $y^{d}$ and
$(y^{d})'$ is essentially larger than the number of zeros of $y^{d-1}$.
On the other hand, we expect
the number of common zeros of $F(u_{1},\ldots,u_{n})$ and $G(u_{1},\ldots,u_{n})$
to be essentially smaller than the number of zeros of $y^{d-1}$ unless something special
happens. To formalize this idea, we prove the following result on
GCD estimates, where all the notation involved is defined in Section
\ref{sec:Preliminaries}.
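As a sanity check of the construction sketched above for $F=x_{1}^{2}+x_{2}^{2}$: the chain rule gives $(F(u_{1},u_{2}))'=2\frac{u_{1}'}{u_{1}}u_{1}^{2}+2\frac{u_{2}'}{u_{2}}u_{2}^{2}=G(u_{1},u_{2})$. A numerical-derivative sketch, with hypothetical $S$-units $u_{1}=t^{2}$, $u_{2}=(t+1)^{3}$ in $\mathbf{k}(t)$ chosen purely for illustration:

```python
# Hypothetical S-units in k(t) and their derivatives with respect to t.
def u1(t): return t**2
def u2(t): return (t + 1)**3
def du1(t): return 2 * t
def du2(t): return 3 * (t + 1)**2

def F_at_u(t):   # F = x1^2 + x2^2 evaluated at (u1, u2)
    return u1(t)**2 + u2(t)**2

def G_at_u(t):   # G = 2(u1'/u1)x1^2 + 2(u2'/u2)x2^2 evaluated at (u1, u2)
    return 2 * (du1(t) / u1(t)) * u1(t)**2 + 2 * (du2(t) / u2(t)) * u2(t)**2

# (F(u1,u2))' should equal G(u1,u2); compare against a central difference.
h = 1e-6
for t in (0.5, 1.0, 2.0):
    numeric = (F_at_u(t + h) - F_at_u(t - h)) / (2 * h)
    assert abs(numeric - G_at_u(t)) < 1e-4 * abs(G_at_u(t))
```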
\begin{theorem} \label{movinggcdunit}
Let $S\subset C$ be a finite set of points. Let $F,\,G\in K[x_{1},\dots,x_{n}]$
be a coprime pair of nonconstant polynomials. For any $\epsilon>0$,
there exist an integer $m$ and positive reals $c_{i}$, $1\le i\le4$,
all depending only on $\epsilon$, such that for every $n$-tuple $(g_{1},\hdots,g_{n})\in({\cal O}_{S}^{*})^{n}$
with
\[
\max_{1\le i\le n}h(g_{i})\ge c_{1}(\tilde{h}(F)+\tilde{h}(G))+c_{2}\max\{0,2\gen-2+|S|\},
\]
we have that either
\begin{align}
h(g_{1}^{m_{1}}\cdots g_{n}^{m_{n}})\le c_{3}(\tilde{h}(F)+\tilde{h}(G))+c_{4}\max\{0,2\gen-2+|S|\}\label{multiheight11}
\end{align}
holds for some integers $m_{1},\hdots,m_{n}$, not all zero, with $\sum|m_{i}|\le2m$,
or the following two statements hold.
\begin{enumerate}[label=(\alph*)]
\item[{\rm(a)}] \label{enu:;Nsgcd<} $N_{S,{\rm gcd}}(F(g_{1},\hdots,g_{n}),G(g_{1},\hdots,g_{n}))\le\epsilon\max_{1\le i\le n}h(g_{i})$;
\item[{\rm(b)}] \label{enu: hgcd<} $h_{{\rm gcd}}(F(g_{1},\hdots,g_{n}),G(g_{1},\hdots,g_{n}))\le\epsilon\max_{1\le i\le n}h(g_{i})$
if we further assume that not both of $F$ and $G$ vanish at $(0,\hdots,0)$.
\end{enumerate}
\end{theorem}
\begin{remark} In Theorem \ref{movinggcdunit}, all the quantities
claimed to exist can be given effectively. Moreover, the explicit
bounds on heights are important in our application. As mentioned earlier,
if we are given $F=x_{1}^{2}+\cdots+x_{n}^{2}$ in Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l},
then in the main step of the proof, we construct $G_{\mathbf{u}}:=2\frac{u_{1}'}{u_{1}}x_{1}^{2}+\hdots+2\frac{u_{n}'}{u_{n}}x_{n}^{2}$
for each tuple $\mathbf{u}:=(u_{1},\hdots,u_{n})\in({\cal O}_{S}^{*})^{n}$
and apply Theorem \ref{movinggcdunit} to estimate the GCD of $F(\mathbf{u})$
and $G_{\mathbf{u}}(\mathbf{u})$. The main point which makes the
proof of Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}
work is that $\tilde{h}(G_{{\bf u}})$ can be explicitly bounded
independently of $\mathbf{u}$. (See Proposition \ref{heightDu}.)
\end{remark}
It is more desirable to obtain GCD estimates, such as Statements (a)
and (b) in Theorem \ref{movinggcdunit}, under the assumption
that $g_{1},\hdots,g_{n}$ are multiplicatively independent modulo
${\bf k}$. As a result in this direction, we can actually replace
the right hand side of \eqref{multiheight11} by $0$ in the case
where $n=2$ and the coefficients of $F$ and $G$ are in ${\bf k}$.
We include a complete statement below. Although this result can be
deduced from \cite[Corollary 2.3]{CZ2008}, we will derive it from
our proof of Theorem \ref{movinggcdunit}. \begin{theorem} \label{n=00003D00003D00003D00003D00003D00003D2gcdunit}
Let $F,\,G\in{\bf k}[x_{1},x_{2}]$ be nonconstant coprime polynomials.
For any $\epsilon>0$, there exist an integer $m$ and a constant $c$,
both depending only on $\epsilon$, such that for all pairs $(g_{1},g_{2})\in({\cal O}_{S}^{*})^{2}$
with $\max\{h(g_{1}),h(g_{2})\}\ge c\,\max\{1,2\gen-2+|S|\}$, either
$g_{1}^{m_{1}}g_{2}^{m_{2}}\in{\bf k}$ holds for some integers $m_{1},m_{2}$,
not both zero, with $|m_{1}|+|m_{2}|\le2m$, or the following two statements
hold.
\begin{enumerate}
\item[{\rm(a)}] $N_{S,{\rm gcd}}(F(g_{1},g_{2}),G(g_{1},g_{2}))\le\epsilon\max\{h(g_{1}),h(g_{2})\}$;
\item[{\rm(b)}] $h_{{\rm gcd}}(F(g_{1},g_{2}),G(g_{1},g_{2}))\le\epsilon\max\{h(g_{1}),h(g_{2})\}$,
if we further assume that not both of $F$ and $G$ vanish at $(0,0)$.
\end{enumerate}
\end{theorem}
As another result in the same direction, we deduce easily from Theorem
\ref{movinggcdunit} that an \emph{effective} asymptotic version
of Statements (a) and (b) in Theorem \ref{movinggcdunit}
holds, \emph{merely assuming that }$g_{1},\hdots,g_{n}$ are multiplicatively
independent modulo ${\bf k}$; here the effectivity means that we
have an effective lower bound for $\ell$ in the following statement.
\begin{theorem} \label{movinggcdpower} Let $F,\,G\in\K[x_{1},\dots,x_{n}]$
be nonconstant coprime polynomials. Let $g_{1},\hdots,g_{n}\in\K^{*}$,
not all constant. Then for any $\epsilon>0$, there exist an integer
$m$ and constants $c_{1}$ and $c_{2}$, depending only on $\epsilon$,
such that for each positive integer
\[
\ell>c_{1}(\tilde{h}(F)+\tilde{h}(G))+c_{2}(\gen+n\max_{1\le i\le n}\{h(g_{i})\}),
\]
either we have $g_{1}^{m_{1}}\cdots g_{n}^{m_{n}}\in{\bf k}$ for
some integers $m_{1},\hdots,m_{n}$, not all zero, with $\sum|m_{i}|\le2m$,
or the following two statements hold.
\begin{enumerate}[label=(\alph*)]
\item[{\rm(a)}] $N_{S,{\rm gcd}}(F(g_{1}^{\ell},\hdots,g_{n}^{\ell}),G(g_{1}^{\ell},\hdots,g_{n}^{\ell}))\le\epsilon\max_{1\le i\le n}h(g_{i}^{\ell})$;
\item[{\rm(b)}] $h_{{\rm gcd}}(F(g_{1}^{\ell},\hdots,g_{n}^{\ell}),G(g_{1}^{\ell},\hdots,g_{n}^{\ell}))\le\epsilon\max_{1\le i\le n}h(g_{i}^{\ell})$,
if we further assume that not both of $F$ and $G$ vanish at $(0,\hdots,0)$.
\end{enumerate}
\end{theorem}
\begin{remark} When $F,\,G\in\mathbb{C}[x_{1},\dots,x_{n}]$ are a
coprime pair of nonconstant polynomials and $g_{1},\ldots,g_{n}\in\CC[z]$
are multiplicatively independent modulo $\CC$, the results in
\cite{levin2019greatest} also imply Statements (a) and (b) in Theorem \ref{movinggcdpower}. Our statement here is stronger
since we have formulated effective bounds on $\ell$ and the $m_{i}$
such that $g_{1}^{m_{1}}\cdots g_{n}^{m_{n}}\in{\bf k}$. When $n>2$,
the only other previous result in this direction appears to be a result
of Ostafe \cite[Th.~1.3]{Ostafe}, which considers special polynomials
such as $F=x_{1}\cdots x_{r}-1,G=x_{r+1}\cdots x_{n}-1$, but proves
a uniform bound in place of Statements (a) and (b), independent of $\ell$. In the case where $n=2$,
previous results include the original theorem of Ailon-Rudnick \cite{AR}
in this setting, i.e. $F=x_{1}-1$, $G=x_{2}-1$, and extensions of
Ostafe \cite{Ostafe} and Pakovich and Shparlinski \cite{PS} (all
with uniform bounds). It is noted in \cite{Ostafe} that it appears
to be difficult to extend the techniques used there to obtain results
for general $F$ and $G$. \end{remark}
We collect the background material in Section \ref{sec:Preliminaries}
and prove the main lemmas in Section \ref{mainlemmas}. The proofs
of Theorem \ref{dPisot} and Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}
are given in Section \ref{sectionPisot} and Section \ref{section4},
respectively. Finally, we establish the GCD theorems in Section \ref{gcd}.
\section{Preliminaries}
\label{sec:Preliminaries}
Recall that $K$ is the function field of the smooth projective curve
$C$ of genus $\mathfrak{g}$ defined over the algebraically closed
field ${\bf k}$ of characteristic zero. At each point $\p\in C(\mathbf{k})$,
we may choose a uniformizer $t_{\p}$ to define a normalized order
function $v_{\p}:=\ord_{\p}:\K\to\ZZ\cup\{+\infty\}$.
Let $S\subset C(\mathbf{k})$ be a finite subset. We denote the ring
of $S$-integers in $K$ and the group of $S$-units in $K$ respectively
by
\[
{\cal O}_{S}:=\{f\in\K\,|\,v_{\p}(f)\ge0\text{ for all }\p\notin S\},
\]
and
\[
{\cal O}_{S}^{*}:=\{f\in\K\,|\,v_{\p}(f)=0\text{ for all }\p\notin S\}.
\]
For simplicity of notation, for $f\in\K^{*}$ and $\mathbf{p}\in C(\mathbf{k})$
we let
\[
v_{\p}^{0}(f):=\max\{0,v_{\p}(f)\},\qquad\bar{v}_{\p}^{0}(f):=\min\{1,v_{\p}^{0}(f)\},
\]
i.e. the order of the zero of $f$ at $\p$ and its truncation;
\[
v_{\p}^{\infty}(f):=-\min\{0,v_{\p}(f)\},\qquad\bar{v}_{\p}^{\infty}(f):=\min\{1,v_{\p}^{\infty}(f)\},
\]
i.e. the order of the pole of $f$ at $\p$ and its truncation. The height
of $f$ is defined by
\[
h(f):=\sum_{\p\in C}v_{\p}^{\infty}(f),
\]
which counts the number of poles of $f$ with multiplicity. For
any ${\bf f}:=[f_{0}:\cdots:f_{m}]\in\PP^{m}(K)$ with $m\ge1$ and
$f_{0},...,f_{m}\in\K$, we define $v_{\p}(\mathbf{f}):=\min\{v_{\p}(f_{0}),...,v_{\p}(f_{m})\}$
and
\[
h({\bf f})=h(f_{0},...,f_{m}):=\sum_{\p\in C}-v_{\p}(\mathbf{f}).
\]
For a finite subset $S$ of $C$ and $f\in K^{*}$, we let
\[
\overline{N}_{S}({f})={\displaystyle \sum_{\mathbf{p}\in C\setminus S}\bar{v}_{\mathbf{p}}^{0}(f)}
\]
be the cardinality of the set of zeros of $f$ outside $S$, and
\[
N_{S}({f})=\sum_{\p\notin S}v_{\p}^{0}(f)
\]
be the number of zeros of $f$ outside $S$, counted with multiplicity.
For any $f,g\in\K,$ we let
\begin{align*}
N_{S,{\rm gcd}}(f,g):=\sum_{\p\in C\setminus S}\min\{v_{\p}^{0}(f),v_{\p}^{0}(g)\}
\end{align*}
and
\begin{align*}
h_{{\rm gcd}}(f,g):=\sum_{\p\in C}\min\{v_{\p}^{0}(f),v_{\p}^{0}(g)\}.
\end{align*}
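To illustrate these definitions, suppose for the moment that $C=\PP^{1}$,
$K=\mathbf{k}(t)$ and $S=\{\infty\}$. For $f=t^{2}(t-1)$ and $g=t^{3}$,
the only common zero is at $t=0$, which lies outside $S$, so
\[
N_{S,{\rm gcd}}(f,g)=h_{{\rm gcd}}(f,g)=\min\{2,3\}=2=\deg\gcd(f,g),
\]
while $h(f)=h(g)=3$, the common order of pole at $\infty$.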
Let $\mathbf{x}:=(x_{1},\ldots,x_{n})$ be a tuple of $n$ variables,
and $F=\sum_{{\bf i}\in I_{F}}a_{{\bf i}}{\bf x}^{{\bf i}}\in K[{\bf x}]$
be a nonzero polynomial, where $I_{F}$ is the (nonempty finite) set
of those indices ${\bf i}=(i_{1},\hdots,i_{n})$ with $a_{{\bf i}}\ne0$;
here, each $i_{j}$ is a nonnegative integer, and we put ${\bf x}^{{\bf i}}:=x_{1}^{i_{1}}\cdots x_{n}^{i_{n}}$.
We define the height $h(F)$ and the relevant height $\tilde{h}(F)$
as follows. Put
\begin{align*}
v_{\p}(F):=\min_{{\bf i}\in I_{F}}\{v_{\p}(a_{{\bf i}})\}\qquad\text{for }\p\in C
\end{align*}
and define
\begin{align*}
h(F):=\sum_{\p\in C}-v_{\p}(F),
\end{align*}
\begin{align*}
\tilde{h}(F):=\sum_{\p\in C}-\min\{0,v_{\p}(F)\}.
\end{align*}
Notice that Gauss's lemma can be stated as
\begin{align*}
v_{\p}(FG)=v_{\p}(F)+v_{\p}(G),
\end{align*}
where $F$ and $G$ are in $K[x_{1},\hdots,x_{n}]$ and $\p\in C$.
Consequently, we have that
\begin{align}
h(FG)=h(F)+h(G).\label{Gaussht}
\end{align}
Although the relevant height $\tilde{h}(F)$ is not projectively invariant,
it is better suited for comparison with the height of an individual
coefficient of $F$. Indeed, we have from the definitions that
\begin{align}
h(a_{{\bf i}})\le\tilde{h}(F)\quad\text{and}\quad\tilde{h}(a_{{\bf i}}^{-1}F)=h(a_{{\bf i}}^{-1}F)=h(F)\le\tilde{h}(F),\label{compareheight}
\end{align}
where $a_{{\bf i}}$ is any non-zero coefficient of $F$.
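For a concrete instance of \eqref{compareheight}, take $K=\mathbf{k}(t)$
and $F=tx_{1}+tx_{2}$. Then $v_{0}(F)=1$, $v_{\infty}(F)=-1$ and
$v_{\p}(F)=0$ elsewhere, so
\[
h(F)=0,\qquad\tilde{h}(F)=1=h(t),
\]
and indeed $h(t)\le\tilde{h}(F)$ while $\tilde{h}(t^{-1}F)=\tilde{h}(x_{1}+x_{2})=0=h(F)$.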
We now recall the definitions of global and local derivations on $\K$.
Let $t\in\K\setminus {\bf k}$, which will be fixed later. The mapping
${\displaystyle {g\to\frac{dg}{dt}}}$ on ${\bf k}(t)$, formal differentiation
on ${\bf k}(t)$ with respect to $t$, extends uniquely to a global
derivation on $\K$ as $\K$ is a finite separable extension of ${\bf k}(t)$.
Furthermore, since an element in $\K$ can be written as a Laurent
series in $t_{\p}$, the local derivative of $\eta\in\K$
with respect to $t_{\p}$, denoted by ${\displaystyle {d_{\p}\eta:=\frac{d\eta}{dt_{\p}}}}$,
is given by the formal differentiation on ${\bf k}((t_{\p}))$ with
respect to $t_{\p}$. Consequently,
\begin{align}
\frac{d\eta}{dt}=d_{\p}\eta\cdot(d_{\p}t)^{-1}.\label{chain rule}
\end{align}
The following results are consequences of the Riemann-Roch Theorem.
We refer to \cite[Corollary 7]{Buchi2013} for a proof.
\begin{proposition}\label{functiont}
For each point $\p_{\infty}\in C$, we can find some $t\in K\setminus{\bf k}$ satisfying the following conditions:
\begin{enumerate}[label=(\alph*)]
\item[{\rm(a)}] $t$ has exactly one pole at $\p_{\infty}$;
\item[{\rm(b)}] $h(t)\le\gen+1$;
\item[{\rm(c)}] \label{prop12_3} ${\displaystyle {\sum_{\p\in C}v_{\p}^{0}(d_{\p}t)\le3\gen}}$.
\end{enumerate}
\end{proposition}
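For instance, if $C=\PP^{1}$ and $K=\mathbf{k}(t)$ with $\p_{\infty}$
the pole of the coordinate function $t$, then $t$ itself satisfies
all three conditions: it has a single simple pole at $\p_{\infty}$,
$h(t)=1=\gen+1$, and, since $d_{\p}t=1$ at every finite point $\p$
(with $t_{\p}=t-a$) while $d_{\p_{\infty}}t=-t_{\p_{\infty}}^{-2}$,
we get
\[
\sum_{\p\in C}v_{\p}^{0}(d_{\p}t)=0=3\gen,
\]
so that (c) holds with equality in genus zero.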
We will use the following result of Brownawell-Masser \cite{BM}.
\begin{theorem}\label{BrMa} Let the characteristic of $\K$ be zero.
If $f_{1},\hdots,f_{n}\in\mathcal{O}_{S}^{*}$ and $f_{1}+\cdots+f_{n}=1$,
then either some proper subsum of $f_{1}+\cdots+f_{n}$ vanishes or
\[
\max_{1\le i\le n}h(f_{i})\le\frac{n(n-1)}{2}\max\{0,2\gen-2+|S|\}.
\]
\end{theorem} The following is an analogue of Green's lemma in Nevanlinna's
theory. \begin{corollary}\label{Green} Let the characteristic of
$\K$ be zero and $\ell$ be an integer. Let $a_{1},\hdots,a_{n},f_{1},\hdots,f_{n}\in K^{*}$,
$n\ge2$. If
\[
a_{1}f_{1}^{\ell}+\cdots+a_{n}f_{n}^{\ell}=0,
\]
and no proper subsum of $a_{1}f_{1}^{\ell}+\cdots+a_{n}f_{n}^{\ell}$
vanishes, then $\frac{f_{i}}{f_{j}}\in{\bf k}$ for all $1\le i,j\le n$,
provided that
\begin{align}\label{lbound}
\ell>(n-1)^{2}(n-2)\max\{1,\gen\}+(n-1)^{4}h(a_{1},\hdots,a_{n}).
\end{align}
\end{corollary} \begin{proof} Let $b_{i}=\frac{a_{i}}{a_{n}}$ and
$g_{i}=\frac{f_{i}}{f_{n}}$. Then
\begin{align}
b_{1}g_{1}^{\ell}+\cdots+b_{n-1}g_{n-1}^{\ell}=-1,\label{reformulate1}
\end{align}
and no proper subsum of $b_{1}g_{1}^{\ell}+\cdots+b_{n-1}g_{n-1}^{\ell}$
vanishes. Suppose that \eqref{lbound} holds and that at least one of the $g_{i}$, $1\le i\le n-1$,
is not constant. Let $S$ be the set consisting of the zeros and poles
of $b_{i}$ and $g_{i}$, $1\le i\le n-1$. Then
\[
2\le|S|\le2\sum_{i=1}^{n-1}(h(b_{i})+h(g_{i})),
\]
and all the $b_{i}$ and $g_{i}$ are $S$-units. Applying Theorem
\ref{BrMa} to the equation \eqref{reformulate1}, we have
\begin{align}
h(b_{i}g_{i}^{\ell})\le\frac{(n-1)(n-2)}{2}(2\gen-2+2\sum_{j=1}^{n-1}(h(b_{j})+h(g_{j}))),\label{hestimate1}
\end{align}
for $1\le i\le n-1$. As
\[
\ell h(g_{i})\le h(b_{i}g_{i}^{\ell})+h(b_{i}),
\]
for $1\le i\le n-1$, together with \eqref{hestimate1} we have
\[
\ell\sum_{i=1}^{n-1}h(g_{i})\le\sum_{i=1}^{n-1}h(b_{i})+(n-1)^{2}(n-2)(\gen-1+\sum_{i=1}^{n-1}(h(b_{i})+h(g_{i}))).
\]
Hence,
\[
(\ell-(n-1)^{2}(n-2))\sum_{i=1}^{n-1}h(g_{i})\le(n-1)^{2}(n-2)(\gen-1)+((n-1)^{2}(n-2)+1)\sum_{i=1}^{n-1}h(b_{i}).
\]
Since one of the $g_{i}$ is not constant, we have $\sum_{i=1}^{n-1}h(g_{i})\ge1$;
moreover, $\ell>(n-1)^{2}(n-2)$ by \eqref{lbound}
and $h(b_{i})=h(a_{i},a_{n})\le h(a_{1},\hdots,a_{n})$. It follows
that
\[
\ell-(n-1)^{2}(n-2)\le(n-1)^{2}(n-2)(\gen-1)+(n-1)^{4}h(a_{1},\hdots,a_{n}),
\]
contradicting \eqref{lbound}.
\end{proof}
\section{Main Lemmas}
\label{mainlemmas} From now on, we will fix a $t$ satisfying the
conditions in Proposition \ref{functiont} and use the notation $\eta':=\frac{d\eta}{dt}$
for $\eta\in K$. We will use the following estimate. \begin{lemma}\label{lem:NSgcd_lb}
Let $S$ be a finite subset of $C$. Then the following statements
hold.
\begin{enumerate}[label=(\alph*)]
\item[{\rm(a)}] $N_{S,\gcd}(\eta,\eta')\ge N_{S}(\eta)-\overline{N}_{S}(\eta)-3\mathfrak{g}$
for any $\eta\in K$;
\item[{\rm(b)}] $h(1,\frac{\eta_{1}'}{\eta_{1}},\hdots,\frac{\eta_{\ell}'}{\eta_{\ell}})\le|S|+3\gen$,
where $\eta_{i}\in\mathcal{O}_{S}^{*}$ for each $1\le i\le\ell$.
\end{enumerate}
\end{lemma}
\begin{proof} It is clear from \eqref{chain rule} that
\begin{align}\label{eq: v_of_diff}
v_{\p}(\eta')&=v_{\p}(\eta)-1-v_{\p}(d_{\p}t) &\text{ if }v_{\p}(\eta)\ne0;\cr
v_{\p}(\eta')&\ge-v_{\p}(d_{\p}t) &\text{ if }v_{\p}(\eta)=0.
\end{align}
Consequently,
\begin{align*}
N_{S,\gcd}(\eta,\eta') & =\sum_{\p\notin S}\min\{v_{\p}^{0}(\eta),v_{\p}^{0}(\eta')\}\\
 & =\sum_{v_{\p}(\eta)>0,\,\p\notin S}\min\{v_{\p}(\eta),v_{\p}(\eta)-1-v_{\p}(d_{\p}t)\}\\
 & \ge\sum_{v_{\p}(\eta)>0,\,\p\notin S}(v_{\p}(\eta)-1-v_{\p}^{0}(d_{\p}t))\\
& \ge N_{S}(\eta)-\overline{N}_{S}(\eta)-3\mathfrak{g}
\end{align*}
by Proposition \ref{functiont} (c). Again by \eqref{eq: v_of_diff}
and the assumption that $\eta_{i}\in\mathcal{O}_{S}^{*}$ for each
$1\le i\le\ell$, we have
\begin{align*}
h(1,\frac{\eta_{1}'}{\eta_{1}},\hdots,\frac{\eta_{\ell}'}{\eta_{\ell}}) & =\sum_{\p\in C}-\min_{1\le i\le\ell}\{0,v_{\p}(\eta_{i}')-v_{\p}(\eta_{i})\}\\
 & \le\sum_{\p\in S}-\min\{0,-1-v_{\p}(d_{\p}t)\}+\sum_{\p\in C\setminus S}-\min\{0,-v_{\p}(d_{\p}t)\}\\
 & \le|S|+\sum_{\p\in C}v_{\p}^{0}(d_{\p}t)\\
& \le|S|+3\gen
\end{align*}
by Proposition \ref{functiont} (c). \end{proof}
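Statement (a) is sharp already in genus zero: with $C=\PP^{1}$, $t$
as in Proposition \ref{functiont}, $S=\emptyset$ and $\eta=t^{k}$
for an integer $k\ge1$, we have $\eta'=kt^{k-1}$ and
\[
N_{S,\gcd}(\eta,\eta')=k-1=N_{S}(\eta)-\overline{N}_{S}(\eta)-3\gen.
\]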
For convenience of discussion, we will use the following convention.
Let $\mathbf{i}=(i_{1},\ldots,i_{n})\in\mathbb{Z}^{n}$ and $\mathbf{u}=(u_{1},\ldots,u_{n})\in(K^{*})^{n}$.
We denote by $\mathbf{x}:=(x_{1},\ldots,x_{n})$, $\mathbf{x^{i}}:=x_{1}^{i_{1}}\cdots x_{n}^{i_{n}}$,
$\mathbf{u^{i}}:=u_{1}^{i_{1}}\cdots u_{n}^{i_{n}}\in K^{*}$ and $|\mathbf{i}|:=\sum_{j=1}^{n}|i_{j}|$.
For a polynomial $F(\mathbf{x})=\sum_{\mathbf{i}}a_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}\in K[x_{1},\dots,x_{n}]$,
we denote by $I_{F}$ the set of exponents ${\bf i}$ such that $a_{\mathbf{i}}\ne0$
in the expression of $F$, and define
\begin{align}
D_{\mathbf{u}}(F)(\mathbf{x}):=\sum_{\mathbf{i}\in I_{F}}\frac{(a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}})'}{\mathbf{u}^{\mathbf{i}}}\mathbf{x}^{\mathbf{i}}.\label{DuF}
\end{align}
Clearly, we have
\begin{align}
F(\mathbf{u})'=D_{\mathbf{u}}(F)(\mathbf{u}),\label{value}
\end{align}
and the following product rule:
\begin{align}
D_{\mathbf{u}}(FG)=D_{\mathbf{u}}(F)G+FD_{\mathbf{u}}(G)\label{product}
\end{align}
for each $F,G\in K[x_{1},\dots,x_{n}]$.
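For example, if $F=x_{1}^{2}+\cdots+x_{n}^{2}$ has all its coefficients
equal to $1$, then the coefficient of $x_{i}^{2}$ in $D_{\mathbf{u}}(F)$
is $(u_{i}^{2})'/u_{i}^{2}=2u_{i}'/u_{i}$, so
\[
D_{\mathbf{u}}(F)(\mathbf{x})=2\frac{u_{1}'}{u_{1}}x_{1}^{2}+\hdots+2\frac{u_{n}'}{u_{n}}x_{n}^{2},
\]
which is exactly the polynomial $G$ described in the introduction,
and \eqref{value} reduces to $(u_{1}^{2}+\cdots+u_{n}^{2})'=\sum_{i=1}^{n}(u_{i}^{2})'$.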
The following proposition gives an upper bound on the height of the coefficients
of $D_{\mathbf{u}}(F)$ when the $u_{i}$'s are $S$-units. This is
a crucial step. \begin{proposition}\label{heightDu} Let $F$ be
a nonconstant polynomial in $K[x_{1},\dots,x_{n}]$ and $\mathbf{u}=(u_{1},\ldots,u_{n})\in(O_{S}^{*})^{n}$.
Then there exist $c_{1},c_{2}$ depending only on $\deg F$
such that
\[
\tilde{h}(D_{\mathbf{u}}(F))\le c_{1}\tilde{h}(F)+c_{2}\max\{1,2\gen-2+|S|\}.
\]
\end{proposition} \begin{proof} Let $F(x_{1},\ldots,x_{n})=\sum_{\mathbf{i}\in I_{F}}a_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}$.
We then choose $S'$ containing $S$ and all the zeros and poles of
all $a_{\mathbf{i}}$ for $\mathbf{i}\in I_{F}$. Then
\begin{align}
|S'|\leq|S|+2\sum_{\mathbf{i}\in I_{F}}h(a_{\mathbf{i}})\le|S|+2|I_{F}|\tilde{h}(F)\label{Sestimate}
\end{align}
and $a_{\mathbf{i}}\in O_{S'}^{*}$ for each $\mathbf{i}\in I_{F}$.
As
\[
D_{\mathbf{u}}(F)(\mathbf{x})=\sum_{\mathbf{i}\in I_{F}}a_{\mathbf{i}}\cdot\frac{(a_{\mathbf{i}}\mathbf{u^{i}})'}{a_{\mathbf{i}}\mathbf{u^{i}}}\mathbf{x}^{\mathbf{i}},
\]
we have that
\begin{align}\label{hFduF}
\tilde{h}(D_{\mathbf{u}}(F)) & \le h(1,(a_{\mathbf{i}})_{{\mathbf{i}}\in I_F})+ h(1,(\frac{(a_{\mathbf{i}}\mathbf{u^{i}})'}{a_{\mathbf{i}}\mathbf{u^{i}}})_{{\mathbf{i}}\in I_F})\cr
 & \le|S|+(2|I_{F}|+1)\tilde{h}(F)+3\gen
\end{align}
by Lemma \ref{lem:NSgcd_lb} and \eqref{Sestimate}. The assertion
is now clear since $|I_{F}|\le\binom{n+\deg F}{n}$ and $|S|+3\gen\le 4\max\{1,2\gen-2+|S|\}$.
\end{proof}
\begin{lemma} \label{lem:coprime-irr}For any irreducible $F(\mathbf{x})=\sum_{\mathbf{i}\in I_{F}}a_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}\in K[x_{1},\dots,x_{n}]$
and $\mathbf{u}\in(K^{*})^{n}$, the two polynomials $F$ and $D_{\mathbf{u}}(F)$
are not coprime if and only if $\frac{a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}}}{a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}}}\in{\bf k}^{*}$
whenever $\mathbf{i},\mathbf{j}\in I_{F}$. \end{lemma}
\begin{proof} It is clear from \eqref{DuF} that $\deg D_{\mathbf{u}}(F)\le\deg F$.
Since $F$ is irreducible, it follows that $F$ and
$D_{\mathbf{u}}(F)$ are not coprime if and only if $D_{\mathbf{u}}(F)=\lambda F$
for some $\lambda\in K$, i.e., $\frac{(a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}})'}{\mathbf{u}^{\mathbf{i}}}=\lambda a_{\mathbf{i}}$
for each $\mathbf{i}$. The latter condition is equivalent to requiring
that $\frac{(a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}})'}{a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}}}=\frac{(a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}})'}{a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}}}$
for all $\mathbf{i},\mathbf{j}\in I_{F}$, which in turn is equivalent
to $\left(\frac{a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}}}{a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}}}\right)'=0$,
i.e. $\frac{a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}}}{a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}}}\in{\bf k}^{*}$,
since ${\bf k}$ is the field of constants of the derivation.
\end{proof} \begin{lemma} \label{lem:coprime-gen} Let $F=\prod_{i=1}^{r}P_{i}\in K[x_{1},\hdots,x_{n}]$,
where each $P_{i}$, $1\le i\le r$, is irreducible and not a monomial in
$K[x_{1},\hdots,x_{n}]$. Let $\mathbf{u}\in(K^{*})^{n}$ and let $\mathbf{e}=(e_{1},\hdots,e_{r})$
be an $r$-tuple of positive integers. Then either the two polynomials
$F$ and
\[
F_{\mathbf{e},\mathbf{u}}:=\sum_{i=1}^{r}e_{i}D_{\mathbf{u}}(P_{i})\prod_{j\ne i}P_{j}
\]
are coprime in $K[x_{1},\hdots,x_{n}]$ or
\[
h(u_{1}^{m_{1}}\cdots u_{n}^{m_{n}})\le h(F)
\]
for some $(m_{1},\hdots,m_{n})\in\mathbb{Z}^{n}\setminus\{(0,\ldots,0)\}$
with $\sum|m_{i}|\le2\deg F$.
\end{lemma}
\begin{proof}
If $F$ and $F_{\mathbf{e},{\mathbf{u}}}$ are not coprime in $K[x_{1},\dots,x_{n}]$,
some $P_{i}$ must divide $F_{\mathbf{e},\mathbf{u}}$ and thus divide
$D_{\mathbf{u}}(P_{i})$. Since $P_{i}$ is not a monomial, we have
$P_{i}=\sum_{\mathbf{i}\in I_{P_{i}}}a_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}$
with $|I_{P_{i}}|\ge2$. Then Lemma \ref{lem:coprime-irr} implies
that $\frac{a_{\mathbf{i}}\mathbf{u}^{\mathbf{i}}}{a_{\mathbf{j}}\mathbf{u}^{\mathbf{j}}}\in{\bf k}^{*}$
whenever $\mathbf{i},\mathbf{j}\in I_{P_{i}}$. Since $|I_{P_{i}}|\ge2$,
we can choose distinct $\mathbf{i},\mathbf{j}\in I_{P_{i}}$. Thus
\[
h(\mathbf{u}^{\mathbf{i}-\mathbf{j}})=h(a_{\mathbf{i}}^{-1}a_{\mathbf{j}})\le h(P_{i})\le h(F),
\]
by \eqref{Gaussht} with $0\ne|\mathbf{i}-\mathbf{j}|\le2\deg P_{i}\le2\deg F.$
\end{proof}
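In the simplest case $r=1$, $e_{1}=1$ and $F=P_{1}=x_{1}+x_{2}$
with coefficients in ${\bf k}$, we have $F_{\mathbf{e},\mathbf{u}}=D_{\mathbf{u}}(F)=\frac{u_{1}'}{u_{1}}x_{1}+\frac{u_{2}'}{u_{2}}x_{2}$,
and by Lemma \ref{lem:coprime-irr} the two polynomials fail to be
coprime precisely when $(u_{1}u_{2}^{-1})'=0$, i.e. $u_{1}u_{2}^{-1}\in{\bf k}^{*}$;
in that case
\[
h(u_{1}u_{2}^{-1})=0\le h(F),\qquad(m_{1},m_{2})=(1,-1),\quad|m_{1}|+|m_{2}|=2=2\deg F,
\]
as predicted by the second alternative.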
\begin{lemma}\label{dth_power_count} Let $d\ge2$
be an integer, $F_{1},\hdots,F_{r}\in O_{S}[x_{1},\dots,x_{n}]$ be
distinct non-monomial polynomials which are irreducible in $K[x_{1},\dots,x_{n}]$,
and put $F:=F_{1}^{e_{1}}\cdots F_{r}^{e_{r}}$ with $1\le e_{i}<d$
for each $i$. Let $\mathbf{u}=(u_{1},\ldots,u_{n})\in(\mathcal{O}_{S}^{*})^{n}$. If $F(\mathbf{u})=g^{d}$ for some $g\in K^{*}$,
then for every $\varepsilon>0$ there exist an integer $m$ and reals
$c_{1}$, $c_{2}$, all depending only on {$(\varepsilon,\delta,d)$,
where $\deg F\le\delta$,} such that either
\begin{equation}
N_{S}(F(\mathbf{u}))\le\varepsilon\max_{1\le j\le n}\{h(u_{j})\}\label{1}
\end{equation}
or
\begin{equation}
h(u_{1}^{m_{1}}\cdots u_{n}^{m_{n}})\le c_{1}\tilde{h}(F)+c_{2}\max\{1,2\gen-2+|S|\}\label{2}
\end{equation}
for some integers $m_{1},\hdots,m_{n}$, not all zero, with $\sum|m_{i}|\le2m$.
\end{lemma}
\begin{proof}By \eqref{value} and the product rule \eqref{product} of $D_{\mathbf{u}}$,
we have
\begin{align}
dg^{d-1}g'=D_{\mathbf{u}}(F)(\mathbf{u})=(F_{1}^{e_{1}-1}(\mathbf{u})\cdots F_{r}^{e_{r}-1}(\mathbf{u}))F_{\mathbf{e},\mathbf{u}}(\mathbf{u}),\label{expression1}
\end{align}
where $F_{\mathbf{e},\mathbf{u}}:=\sum_{i=1}^{r}e_{i}D_{\mathbf{u}}(F_{i})\prod_{j\ne i}F_{j}$
is as in Lemma \ref{lem:coprime-gen} (applied with $P_{i}=F_{i}$), from which it follows
that either $\bar{F}:=F_{1}\cdots F_{r}$ and $F_{\mathbf{e},\mathbf{u}}$
are coprime in $K[x_{1},\dots,x_{n}]$ or the second assertion \eqref{2}
holds with any $(c_{1},c_{2},m)$ with $c_{1}\ge1$,
$c_{2}\ge0$ and $m\ge\delta$. It remains to consider the case where
the former condition holds. By Theorem \ref{movinggcdunit} and Proposition \ref{heightDu},
for any $\epsilon'>0$ there exist an
integer $m\ge\delta$, positive reals
$c_{i}$, $1\le i\le4$, with $c_{1}\ge1$ and $c_{2}\ge0$,
depending only on $\epsilon'$, such that whenever
\begin{align}
\max_{1\le i\le n}h(u_{i})\ge c_{3}\tilde{h}(F)+c_{4}\max\{1,2\gen-2+|S|\},\label{heightpart}
\end{align}
we have either
\[
h(u_{1}^{m_{1}}\cdots u_{n}^{m_{n}})\le c_{1}\tilde{h}(F)+c_{2}\max\{1,2\gen-2+|S|\}
\]
for some integers $m_{1},\hdots,m_{n}$, not all zero, with $\sum|m_{i}|\le2m$,
or
\begin{align}
N_{S,{\rm gcd}}(\bar{F}(\mathbf{u}),F_{\mathbf{e},\mathbf{u}}(\mathbf{u}))\le\epsilon'\max_{1\le i\le n}h(u_{i}).\label{gcdpart}
\end{align}
We note that
$h(u_{1}^{m_{1}}\cdots u_{n}^{m_{n}})\le\left(\sum_{1\le i\le n}|m_{i}|\right)\max_{1\le i\le n}h(u_{i})$,
which shows that the case where \eqref{heightpart} does not hold
leads to \eqref{2} once we enlarge
$c_{1}$ and $c_{2}$.
If \eqref{gcdpart} holds, then together with \eqref{expression1},
we have
\begin{equation}
N_{S,\gcd}(F(\mathbf{u}),D_{\mathbf{u}}(F)(\mathbf{u}))\le\sum_{i=1}^{r}(e_{i}-1)N_{S}(F_{i}(\mathbf{u}))+\varepsilon'\max_{1\le j\le n}\{h(u_{j})\}.\label{upbound}
\end{equation}
On the other hand, since $g^{d}=F(\mathbf{u})$, the
key equality \eqref{value} and Lemma \ref{lem:NSgcd_lb} imply
that
\[
N_{S,\gcd}(F(\mathbf{u}),D_{\mathbf{u}}(F)(\mathbf{u}))=N_{S,\gcd}(g^{d},(g^{d})')\ge(d-1)N_{S}(g)-3\gen;
\]
together
with the fact that $F(\mathbf{u}),F_{1}(\mathbf{u}),\ldots,F_{r}(\mathbf{u})\in O_{S}$,
this gives
\[
N_{S,\gcd}(F(\mathbf{u}),D_{\mathbf{u}}(F)(\mathbf{u}))+3\gen\ge\frac{d-1}{d}N_{S}(F(\mathbf{u}))=\frac{d-1}{d}\sum_{i=1}^{r}e_{i}N_{S}(F_{i}(\mathbf{u})).
\]
Together with \eqref{upbound}, we have
\[
\sum_{i=1}^{r}(1-\frac{e_{i}}{d})N_{S}(F_{i}(\mathbf{u}))\le2\varepsilon'\max_{1\le j\le n}\{h(u_{j})\},
\]
by further requiring that $3\gen\le\varepsilon'\max_{1\le j\le n}\{h(u_{j})\}$;
this is possible since we may assume that $c_{4}\ge3\gen/\epsilon'$
in \eqref{heightpart}. Since $e_{i}<d$ and $N_{S}(F_{i}(\mathbf{u}))\ge0$
for each $i$, it follows that
\[
\frac{1}{d}N_{S}(F_{i}(\mathbf{u}))\le2\varepsilon'\max_{1\le j\le n}\{h(u_{j})\}
\]
for each $i$. By taking $\varepsilon'= \frac{\varepsilon}{2d\delta}\le \frac{\varepsilon}{2d\deg F}\le\frac{\varepsilon}{2d(e_{1}+\cdots+e_{r})}$,
we have
\[
N_{S}(F(\mathbf{u}))=\sum_{i=1}^{r}e_{i}N_{S}(F_{i}(\mathbf{u}))\le\varepsilon\max_{1\le j\le n}\{h(u_{j})\}.
\]
\end{proof}
\section{\label{section4}Proof of Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l} }
For each finite extension $L$ of $K$, denote by $h_{L}$ the height
function (both on $L$ and on $L[x_{1},\dots,x_{n}]$) obtained from
the same construction as $h$ with $K$ replaced by $L$; similarly
for the notation $\tilde{h}_{L}$, $O_{L,\widetilde{S}}$, $N_{L,\widetilde{S}}$
and $\overline{N}_{L,\widetilde{S}}$, where $\widetilde{S}\subset C_{L}(\mathbf{k})$
is a finite subset and $C_{L}$ is a smooth projective
curve over $\mathbf{k}$ such that $L=\mathbf{k}(C_{L})$. We need the following result from \cite[Proposition 2.4]{PW2015}.
\begin{proposition}\label{Prop: genus}
Let $\alpha$ be a nonconstant algebraic element over $K$ with $[K(\alpha):K]=m$.
Denote by $L=K(\alpha)$ and let $C_{L}$ be a smooth projective curve
over $\mathbf{k}$ of genus $\gen_{L}$ such that $L=\mathbf{k}(C_{L})$.
Then
\[
\gen_{L}-1\le m(\gen-1)+(m-1)h_{L}(\alpha).
\]
\end{proposition}
In the following proof, we will use, without further mention, the
standard fact that $h_{L}(a)=[L:K]h(a)$ for every $a\in K$, and
that $\tilde{h}_{L}(P)\le{[L:K]}\tilde{h}(P)$ for every
$P\in K[x_{1},\dots,x_{n}]$.
\begin{proof}[Proof of Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}]
We may assume $|S|\ge2$, for otherwise $\mathcal{O}_{S}^{*}=\mathbf{k}^{*}$
and the desired conclusion holds trivially. For each
$\ell\in\mathbb{N}$, put $\mathbf{u}^{\ell}:=(u_{1}^{\ell},\ldots,u_{n}^{\ell})\in(O_{S}^{*})^{n}$.
We may suppose that there is indeed some $\ell\in\mathbb{N}$ with
\begin{equation}
\ell\ge c_{1}\tilde{h}(F)+c_{2}\max\{1,2\gen-2+|S|\}\label{eq: large_ell}
\end{equation}
such that $F(\mathbf{u^{\ell}})$ is a $d$-th power in $K$, where
$c_{1}$ and $c_{2}$ will be determined at the end of
the proof.
Fix a total ordering on the set of monomials in $K[x_{1},\dots,x_{n}]$
and say that an element $Q\in K[x_{1},\dots,x_{n}]$ is monic if the
coefficient attached to the largest monomial appearing in $Q$ with a
non-zero coefficient is $1$. Since $F\ne0$ in $K[x_{1},\dots,x_{n}]$,
it follows from our hypothesis that we may write $F=a\mathbf{x^{i}}G^{d}P$,
where $P\in K[x_{1},\dots,x_{n}]\setminus K$ is a $d$-th
power free monic polynomial
with no (non-trivial) monomial factors,
$G\in K[x_{1},\dots,x_{n}]$ is monic, $\mathbf{x^{i}}\in K[x_{1},\dots,x_{n}]$
is a monomial and $a\in K^{*}$. Note that
\begin{align}
\tilde{h}(P)=h(P)\le h(F)\le\tilde{h}(F),\label{heightFP}
\end{align}
and $h(a)\le\tilde{h}(F)$ since $a$ is the coefficient of the largest
monomial appearing in $F$.
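For instance, when $d=2$ and $F=t\,x_{1}^{3}x_{2}(x_{1}+x_{2})^{2}(x_{1}-x_{2})^{5}\in\mathbf{k}(t)[x_{1},x_{2}]$,
this decomposition reads
\[
a=t,\qquad\mathbf{x^{i}}=x_{1}^{3}x_{2},\qquad G=(x_{1}+x_{2})(x_{1}-x_{2})^{2},\qquad P=x_{1}-x_{2},
\]
where $P$ is indeed square-free, monic and free of monomial factors.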
Write
\begin{align}
P=\sum_{\mathbf{i}\in I_{P}}a_{\mathbf{i}}{\bf x}^{\mathbf{i}}\qquad\text{with each \ensuremath{a_{\mathbf{i}}} being nonzero}.\label{P}
\end{align}
By our setting, we have that $|I_{P}|\ge2$. Choose
a finite subset $S_{P}\subset C(\mathbf{k})$ containing $S$ such
that $a_{\mathbf{i}}\in O_{S_{P}}^{*}$ for each $\mathbf{i}\in I_{P}$,
that each monic irreducible factor of $P$ is in $O_{S_{P}}[x_{1},\dots,x_{n}]$,
and that by \eqref{heightFP} we have
\begin{align}
2\le|S_{P}|\le|S|+2|I_{P}|\tilde{h}(P)+\deg P\cdot\binom{n+\deg P}{n}\tilde{h}(P)\le|S|+(\deg F+2)\binom{n+\deg{F}}{n}\tilde{h}(F).\label{SP}
\end{align}
Let $L:=K(\alpha)$ with some $d$-th root
$\alpha$ of $a\mathbf{u^{i}}$. Since $F(\mathbf{u^{\ell}})$
is a $d$-th power in $K$, it follows that $P(\mathbf{u^{\ell}})$
is a $d$-th power in $L$. By Proposition \ref{Prop: genus}, $L$
is the function field of a smooth projective algebraic curve $C_{L}$
of genus $\mathfrak{g}_{L}$ defined over $\mathbf{k}$ with
\begin{align}
\gen_{L}-1 & \le[L:K](\gen-1)+([L:K]-1)\frac{[L:K]}{d}h(a\mathbf{u}^{\mathbf{i}})\cr
& \le[L:K]\left(\gen-1+\tilde{h}(F)+\deg F\max_{1\le j\le n}h(u_{j})\right)\label{eq: g_L}
\end{align}
since $[L:K]\le d$ and $\alpha^{d}=a\mathbf{u}^{\mathbf{i}}$.
Let $\widetilde{S_{P}}\subset C_{L}(\mathbf{k})$ be the preimage
of $S_{P}$ under the natural map $C_{L}(\mathbf{k})\rightarrow C(\mathbf{k})$.
Then
\begin{align}
2\le|\widetilde{S_{P}}|\le[L:K]|S_{P}|.\label{wildtilde_S}
\end{align}
Now we have that $P\in L[x_{1},\dots,x_{n}]$
is $d$-th power free and has no (non-trivial) monomial factor, that
each irreducible factor of $P$ is in $O_{L,\widetilde{S_{P}}}[x_{1},\dots,x_{n}]$,
and that $P(\mathbf{u^{\ell}})$ is a $d$-th power in $L$. Hence
we can apply Lemma \ref{dth_power_count} with
\begin{align}
\varepsilon=\frac{1}{\binom{n+\deg F}{n}^{2}\max_{1\le j\le n}h(u_{j})}
\end{align}
and obtain an integer $m'$ and constants $c_{1}'$, $c_{2}'$ depending
only on $(\varepsilon,\deg F,d)$ such that
either
\begin{equation}
N_{L,\widetilde{S_{P}}}(P(\mathbf{u^{\ell}}))\le\varepsilon\ell\max_{1\le j\le n}h_{L}(u_{j})=\frac{\ell[L:K]}{\binom{n+\deg F}{n}^{2}}\label{11}
\end{equation}
or
\begin{align}
h_{L}(u_{1}^{\ell m_{1}}\cdots u_{n}^{\ell m_{n}}) & \le c_{1}'\tilde{h}_{L}(P)+c_{2}'\max\{1,2\gen_{L}-2+|\widetilde{S_{P}}|\}\label{12}
\end{align}
for some integers $m_{1},\hdots,m_{n}$, not all zero, with $\sum|m_{i}|\le2m'$.
First consider the case where \eqref{12} holds; then,
by \eqref{SP}, \eqref{heightFP}, \eqref{eq: g_L} and \eqref{wildtilde_S},
we obtain
\[
h(u_{1}^{\ell m_{1}}\cdots u_{n}^{\ell m_{n}})\le\left(c_{1}'+c_{2}'(\deg F+2)\binom{n+\deg{F}}{n}\right)\tilde{h}({F})+c_{2}'\max\left\{ 1,2\gen-2+2\tilde{h}(F)+2\deg F\max_{1\le j\le n}h(u_{j})+|S|\right\} .
\]
If $u_{1}^{m_{1}}\cdots u_{n}^{m_{n}}{\in K\setminus{\bf k}}$, then
$h(u_{1}^{\ell m_{1}}\cdots u_{n}^{\ell m_{n}})\ge\ell$ and we would
get a contradiction to \eqref{12}, provided
\begin{align}
\ell>\left(c_{1}'+c_{2}'(\deg F+2)\binom{n+\deg F}{n}+2c_{2}'\right)\tilde{h}(F)+c_{2}'\left(1+2\deg F\max_{1\le j\le n}h(u_{j})\right)\max\{1,2\gen-2+|S|\}.\label{l1}
\end{align}
It remains to consider when \eqref{11} occurs. By \eqref{P}, we
have the following equality
\begin{equation}
P(\mathbf{u^{\ell}})=\sum_{\mathbf{i}\in I_{P}}a_{\mathbf{i}}\mathbf{u^{i\ell}}.\label{eq:unit-eq2}
\end{equation}
First consider the case where the right-hand side of \eqref{eq:unit-eq2}
has a nontrivial vanishing subsum. (This includes the possibility
where $P(\mathbf{u^{\ell}})=0$.) In this case, it must have a smallest
nontrivial vanishing subsum, i.e., for some $I\subset I_{P}$ (with
$|I|\ge2)$ we have
\begin{equation}
\sum_{\mathbf{i}\in I}a_{\mathbf{i}}\mathbf{u^{i\ell}}=0.\label{eq:smallest_vanish2}
\end{equation}
Corollary \ref{Green} implies that if
\begin{align}
\ell & >(|I|-1)^{2}(|I|-2)\max\{1,\gen\}+(|I|-1)^{4}h([a_{\mathbf{i}}]_{\mathbf{i}\in I}),\label{l2}
\end{align}
then $\mathbf{u}^{\mathbf{i}-\mathbf{j}}\in{\bf k}$ for any distinct
$\mathbf{i}$, $\mathbf{j}$ in $I$. Since $|I|\le|I_{P}|\le\binom{n+\deg F}{n}$
and $h([a_{\mathbf{i}}]_{\mathbf{i}\in I})\le h(P)\le\tilde{h}(F)$
as well as $\max\{1,\gen\}\le\max\{1,2\gen-2+|S|\}$, we see that
\eqref{l2} holds if
\begin{equation}
\ell> \binom{n+\deg{F}}{n}^{4} \big(\tilde{h}(F)+\max\{1,2\gen-2+|S|\} \big). \label{eq:l2}
\end{equation}
We also note that any two distinct $\mathbf{i}$, $\mathbf{j}$ in $I$
satisfy $|\mathbf{i}-\mathbf{j}|\le2\deg F$. This settles
the current case.
It remains to consider the case where the right-hand side of \eqref{eq:unit-eq2}
has no nontrivial vanishing subsum. Pick some $\mathbf{i}_{0}\in I_{P}$.
This case is equivalent to the one where the left-hand side of
\begin{equation}
\frac{P(\mathbf{u^{\ell}})}{a_{\mathbf{i}_{0}}\mathbf{u}^{\mathbf{i}_{0}\ell}}-\sum_{\mathbf{i}\in I_{P}\setminus\{\mathbf{i}_{0}\}}\frac{a_{\mathbf{i}}\mathbf{u^{i\ell}}}{a_{\mathbf{i}_{0}}\mathbf{u}^{\mathbf{i}_{0}\ell}}=1\label{eq: uniteq3-1}
\end{equation}
has no nontrivial vanishing subsum. By \eqref{11},
we see that
\begin{equation}
N_{S_{P}}(P(\mathbf{u^{\ell}}))\le\frac{\ell}{\binom{n+\deg F}{n}^{2}}.\label{11-1}
\end{equation}
Let $S_{P,\ell}\subset C(\mathbf{k})$ be a subset containing $S_{P}$
and the zeros of $P(\mathbf{u^{\ell}})$ such that
\[
2\le|S_{P,\ell}|\le|S_{P}|+\overline{N}_{S_{P}}(P(\mathbf{u^{\ell}}))\le|S|+(\deg F+2)\binom{n+\deg{F}}{n}\tilde{h}(F)+\frac{\ell}{\binom{n+\deg F}{n}^{2}}
\]
by \eqref{SP} and \eqref{11-1}. Applying Theorem \ref{BrMa}
to the $ S_{P,\ell}$-unit equation \eqref{eq: uniteq3-1},
we see that if $\mathbf{u}^{\mathbf{i}-\mathbf{i_{0}}}\not\in{\bf k}$
for some $\mathbf{i}\in I_{P}\setminus\{\mathbf{i}_{0}\}$, then
\begin{align*}
\ell\le h(\mathbf{u}^{(\mathbf{i}-\mathbf{i_{0}})\ell}) & \le h(\frac{a_{\mathbf{i}}\mathbf{u^{i\ell}}}{a_{\mathbf{i}_{0}}\mathbf{u}^{\mathbf{i}_{0}\ell}})+h(a_{\mathbf{i}},a_{\mathbf{i}_{0}})\\
& \le\frac{|I_{P}|^{2}}{2}\left(2\mathfrak{g}-2+|S_{P,\ell}|\right)+h(P)\\
& \le\frac{\ell}{2}+\left(\frac{1}{2}\deg F+2\right)\binom{n+\deg F}{n}^{3}\tilde{h}(F)+\frac{1}{2}\binom{n+\deg F}{n}^{2}\max\{1,2\gen-2+|S|\}.
\end{align*}
Hence $\mathbf{u}^{\mathbf{i}-\mathbf{i_{0}}}\in{\bf k}$ for
every $\mathbf{i}\in I_{P}\setminus\{\mathbf{i}_{0}\}$ (note that
such $\mathbf{i}$ indeed exists and that $|\mathbf{i}-\mathbf{i}_{0}|\le2\deg F$),
provided that
\begin{align}
\ell>\left(\deg F+4\right)\binom{n+\deg F}{n}^{3}\tilde{h}(F)+\binom{n+\deg F}{n}^{2}\max\{1,2\gen-2+|S|\}.\label{l3-1}
\end{align}
We obtain the desired conclusion by taking $m:=\max\{m',\deg F\}$
and choosing $c_{1}$, $c_{2}$ such that \eqref{eq: large_ell} implies
all of \eqref{l1}, \eqref{eq:l2} and \eqref{l3-1}.
\end{proof}
\section{Proof of Theorem \ref{dPisot}}
\label{sectionPisot}
We need the following result from \cite[Proposition 4.2]{NW}; it is
stated there for number fields, but the proof works over any field.
\begin{proposition}\label{moving:Prop} Let $f_{1},f_{2}\in K[x_{0},x_{1},\dots,x_{n}]\setminus K[x_{0}]$
be coprime polynomials. Then, the polynomials $f_{1}(m),f_{2}(m)\in K[x_{1},\dots,x_{n}]$
are coprime for all but perhaps finitely many $m\in\mathbb{N}$.
\end{proposition}
We also recall the following result of Pasten and the third author
on the generalized B{\"u}chi's $n$-th power problem. \begin{theorem}\cite[Theorem 3]{PW2015}\label{buchi_func_field}
Let $K$ be a function field of a smooth projective curve $C$ of
genus $\gen$ over an algebraically closed field $\mathbf{k}$ of characteristic
zero. Let $n\ge2$ and $M$ be integers with
\[
M>4n\max\{\gen-1,0\}+11n-3.
\]
Let $F\in K[x]\setminus\mathbf{k}[x]$ be a monic polynomial of degree
$n$. Write $F=PH$ where $P\in\mathbf{k}[x]$ is monic, $H\in K[x]$
is monic and $H$ is not divisible by any non-constant polynomial
in $\mathbf{k}[x]$. Let $G_{1},\dots,G_{\ell}\in K[x]$ be the distinct
monic irreducible factors of $H$ (if any) and let $e_{1},\dots,e_{\ell}\ge1$
be integers such that $H=\prod_{j=1}^{\ell}G_{j}^{e_{j}}$. Let $\mu\ge\max_{j}e_{j}$
be an integer and let $a_{1},\dots,a_{M}$ be distinct elements of
$\mathbf{k}$.
If, for each $1\le i\le M$ with $F(a_{i})\ne0$, the zero multiplicity
of $F(a_{i})\in K^{*}$ at every point $\mathfrak{p}\in C(\mathbf{k})$
is divisible by $\mu$, then $\mu=e_{1}=\cdots=e_{\ell}$ and $H=(\prod_{j=1}^{\ell}G_{j})^{\mu}$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{dPisot}] Let $u_{1},\dots,u_{n}$
be a (multiplicative) basis of $\Gamma$. Then there exists a Laurent
polynomial $f\in K[x_{0},x_{1},x_{1}^{-1},\dots,x_{n},x_{n}^{-1}]$
such that
\begin{equation}
b(m)=f(m,u_{1}^{m},\dots,u_{n}^{m}).\label{bf}
\end{equation}
We may assume that $f\in K[x_{0},x_{1},\dots,x_{n}]$ by multiplying
$f$ by $(x_{1}\cdots x_{n})^{hd}$ for some $h\in\mathbb{N}$ without
affecting the assertion.
To avoid trivialities, we assume $f$ is not the zero polynomial.
We also note that the assumption
that $\Gamma\cap k^{*}=\{1\}$ implies that $u_{1},\dots,u_{n}$ are
multiplicatively independent modulo ${\bf k}$.
For each $m\in\mathbb{N}$, it is clear that $\deg f(m,\bullet)\le\deg f$;
since $v_{\p}(a(m))\ge v_{\p}(a)$ for every $\p\in C(\mathbf{k})$
and nonzero $a\in K[x_{0}]$, we also see that $\tilde{h}(f(m,\bullet))\le\tilde{h}(f)$.
Denote by $\mathcal{N}$ the collection of $m\in\mathbb{N}$ such
that $b(m)$ is a $d$-th power in $K$, which is an infinite set
by the assumption. Thus $f(m,u_{1}^{m},\dots,u_{n}^{m})$ is a $d$-th
power in $K$ for each $m\in\mathcal{N}$. Let $S\subset C(\mathbf{k})$
be a finite subset such that $\mathbf{u}:=(u_{1},\ldots,u_{n})\in(\mathcal{O}_{S}^{*})^{n}$.
Recall $\mathbf{x}:=(x_{1},\ldots,x_{n})$. Applying Theorem \ref{thm:y^e=00003D00003D00003D00003D00003DFg^l}
to each $f(m,\bullet)\in K[\mathbf{x}]$ for each $m\in\mathcal{N}$,
we conclude that
\begin{align}
f(m,\bullet)=\alpha_{m}\mathbf{x}^{\mathbf{i}_{m}}G_{m}^{d}\label{Qm}
\end{align}
for some $\a_{m}\in K^{*}$, monomial $\mathbf{x}^{\mathbf{i}_{m}}\in K[\mathbf{x}]$
and $G_{m}\in K[\mathbf{x}]$, provided that $m\in\mathcal{N}$ is
sufficiently large.
On the other hand, we factor $f$ in $K[x_{0},\mathbf{x}]$ as
\begin{align}
f(x_{0},\mathbf{x})=Q(x_{0})\mathbf{x}^{\mathbf{i}}\prod_{i=1}^{s}P_{i}(x_{0},\mathbf{x})^{e_{i}}\label{factoringf}
\end{align}
with some $Q\in K[x_{0}]$, some monomial $\mathbf{x^{i}}\in K[\mathbf{x}]$,
and some irreducibles $P_{1},\ldots,P_{s}\in K[x_{0},x_{1},\dots,x_{n}]\setminus K[x_{0}]$
without any (non-trivial) monomial factor. Applying Proposition \ref{moving:Prop}
to all $(P_{i},P_{j})$ with $0\le i<j\le s$, where $P_{0}:=x_{1}x_{2}\cdots x_{n}$,
we may replace $\mathcal{N}$ by one of its cofinite subsets such that,
for all $m\in\mathcal{N}$, each $P_{i}(m,\bullet)\in K[\mathbf{x}]$ ($1\le i\le s$)
neither belongs to $K$ nor has a (nontrivial) monomial factor, and $P_{i}(m,\bullet)$
and $P_{j}(m,\bullet)$ share no irreducible factor in $K[\mathbf{x}]$
whenever $i\ne j$. Since each such $P_{i}(m,\bullet)\in K[\mathbf{x}]$
has at least one irreducible factor, by comparing \eqref{factoringf}
with \eqref{Qm}, we see that each $e_{i}$ must be divisible by $d$,
and thus
\begin{equation}
f(x_{0},\mathbf{x})=Q(x_{0})\mathbf{x}^{\mathbf{i}}G(x_{0},\mathbf{x})^{d}\label{eq: f}
\end{equation}
for some $G\in K[x_{0},\mathbf{x}]$. Letting $\b\in K^{*}$ be the leading
coefficient of $Q$, we have the following factorization
\begin{equation}
Q=\b Q_{0}Q_{1}^{d},\label{eq: factor_Q}
\end{equation}
where $Q_{0},Q_{1}\in K[x_{0}]$ are monic and $Q_{0}$ is $d$-th power free in $K[x_{0}]$.
Choose $\gamma_1,\gamma_2\in\overline{K}$ such that
\begin{equation}
\gamma_1^{d}=\b\qquad\text{and}\qquad\gamma_2^{d}=\mathbf{u}^{\mathbf{i}}.\label{eq: ga}
\end{equation}
By \eqref{bf},
\eqref{eq: f}, \eqref{eq: factor_Q} and \eqref{eq: ga}, we see that
\[
Q_0(m)=b(m)\left((\g_1\g_2 ^{m} )^{d} Q_1(m)^d G(m,u_{1}^{m},\dots,u_{n}^{m})^{d}\right)^{-1}
\]
is a $d$-th power in the function field $K(\g_1,\g_2)$ over $\mathbf{k}$
for these infinitely many $m\in\mathcal{N}$. Now Theorem \ref{buchi_func_field}
implies that $Q_0\in\mathbf{k}[x_{0}]$. Therefore, our desired
conclusion holds with $ R:=Q_0$ and $a$ given
by $m\mapsto\g_1\gamma_2^{m}Q_1(m)G(m,u_{1}^{m},\dots,u_{n}^{m}).$
\end{proof}
\section{Proof of the GCD Theorems}
\label{gcd}
\subsection{Key Theorems}
We first recall some definitions in order to reformulate Theorem 2.2
in \cite{Wa2004}, which deals with the case when the coefficients
of the linear forms are in $\K$ rather than constants, i.e., elements of $\mathbf{k}$.
Consider $q$ (nonzero) linear forms $L_{j}:=a_{j0}X_{0}+\dots+a_{jn}X_{n}$,
$1\le j\le q$, with each $a_{jk}$ in $K$. Recall that the Weil
function associated with $L_{j}$ at a place $\p$ of $K$ is defined
by sending those $\mathbf{a}\in\mathbb{P}^{n}(\K)$ with $L_{j}(\mathbf{a})\ne0$
to
\[
\lambda_{L_{j},\p}(\mathbf{a}):=v_{\p}(L_{j}(\mathbf{a}))-v_{\p}(\mathbf{a})-v_{\p}(L_{j}).
\]
For any finite-dimensional vector subspace $V\subset K$ over $\mathbf{k}$
and any positive integer $r$, we denote by $V(r)$ the vector space
over $\mathbf{k}$ spanned by the set of all products of $r$ (not necessarily
distinct) elements from $V$. It is easy to show (e.g., \cite[Lemma 6]{Wa1996})
that $\dim V(r+1)\ge\dim V(r)$ for each $r$ and $\liminf_{r\to\infty}\dim V(r+1)/\dim V(r)=1$.
Applying this inequality with $V$ replaced by $V(e)$, we see that
for each $e\in\mathbb{N}$
\begin{align}
\liminf_{r\to\infty}\dim V(er+e)/\dim V(er)=1.\label{infvr}
\end{align}
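A toy example, not drawn from this paper, may help fix ideas: if $V=\operatorname{span}_{\mathbf{k}}\{1,s,t\}$ with $s,t\in K$ algebraically independent over $\mathbf{k}$, then $V(r)$ consists of the polynomials in $s,t$ of total degree at most $r$, so $\dim V(r)=\binom{r+2}{2}$ and the consecutive ratios of dimensions tend to $1$, as a quick numerical sketch confirms:

```python
from math import comb

# Toy example (hypothetical choice of V, for illustration only):
# V = span_k{1, s, t} with s, t algebraically independent over k,
# so V(r) is spanned by the monomials in s, t of total degree <= r
# and dim V(r) = C(r+2, 2).
def dim_V(r):
    return comb(r + 2, 2)

# dim V(r+1)/dim V(r) = (r+3)/(r+1) -> 1, matching the liminf statement.
ratios = [dim_V(r + 1) / dim_V(r) for r in (10, 100, 1000)]
```

The quadratic growth of $\dim V(r)$ here is what forces the ratio of consecutive dimensions down to $1$; the same phenomenon holds for any finitely generated $V$.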
\begin{definition}
Let $E\subset\K$ be a vector space over ${\bf k}$. We say that $y_{1},\ldots,y_{m}\in K$ are linearly
nondegenerate over $E$ if whenever we have a linear combination $\sum_{i=1}^{m}a_{i}y_{i}=0$
with $a_{i}\in E$, then $a_{i}=0$ for each $i$; otherwise we say
that they are linearly degenerate over $E$. Similarly, a point ${\bf x}=[x_{0}:x_{1}:\cdots:x_{n}]\in\mathbb{P}^{n}(\K)$,
with each $x_{i}\in K$, is said to be linearly degenerate (resp.
linearly nondegenerate) over $E$ if $x_{0},\ldots,x_{n}$ is linearly
degenerate (resp. nondegenerate) over $E$.\end{definition}
We obtain the following variant of Theorem 2.2 in \cite{Wa2004} from
its proof.
\begin{theorem}\label{MSMT}
Consider the
collection $\mathcal{L}:=\{L_{1},\hdots,L_{q}\}$ of linear forms
$L_{i}=\sum_{j=0}^{n}a_{ij}X_{j}\in\K[X_{0},\hdots,X_{n}]$, $1\le i\le q$,
and define
\begin{equation}
h(\mathcal{L}):=-\sum_{\mathbf{p}\in C(\mathbf{k})}\min_{1\le i\le q,\,0\le j\le n}v_{\mathbf{p}}(a_{ij}).\label{eq: new_h(H)-1}
\end{equation}
Let $V_{\mathcal{L}}\subset K$ be the vector subspace over $\mathbf{k}$
spanned by the set consisting of all the $a_{ij}$. Suppose that ${\bf a}\in\mathbb{P}^{n}(\K)$
is linearly nondegenerate over $V_{\mathcal{L}}(r+1)$ for some positive
integer $r$. Then
\begin{align}
\sum_{\p\in S}\max_{J}\sum_{j\in J}\lambda_{L_{j},\p}({\bf a})\le\frac{w}{u}(n+1)\left(h({\bf a})+(r+2)h(\mathcal{L})+\frac{nw+w-1}{2}\max\{0,2\gen-2+|S|\}\right),
\end{align}
where the maximum is taken over all subsets $J\subset\{1,\hdots,q\}$
such that those linear forms $L_{j}$ with $j\in J$ are linearly
independent over $\K$, and we denote by $w:=\dim_{\mathbf{k}}V_{\mathcal{L}}(r+1)$
and $u:=\dim_{\mathbf{k}}V_{\mathcal{L}}(r)$. \end{theorem}
We now formulate the following technical theorem estimating the
counting function of the gcd. The proof is adapted from \cite{Levin:GCD}
and \cite{levin2019greatest}, with more control on the coefficients
of the constructed linear forms so that all the constants involved
can be computed effectively.
\begin{theorem}\label{Refinement} Let $F_{1},F_{2}\in K[x_{1},\cdots,x_{n}]$
be coprime polynomials of the same degree $d>0$. Assume that one
of the coefficients in the expansion of $F_{i}$ is 1 for each $i\in\{1,2\}$.
For every positive integer $m\ge2d$, we let $M:=M_{m}:=2\binom{m+n-d}{n}-\binom{m+n-2d}{n}$
and $M':=M'_{m}:=\binom{m+n}{n}-M$. For every positive integer $r$,
we denote by $V_{F_{1},F_{2}}(r)$ the (finite-dimensional) vector
space over ${\bf k}$ spanned by $\prod_{\alpha}\alpha^{n_{\alpha}}$,
where $\alpha$ runs over all non-zero coefficients of $F_{1}$ and
$F_{2}$, $n_{\alpha}\ge0$ and $\sum n_{\alpha}=r$; we also put
$d_{r}:=\dim_{{\bf k}}V_{F_{1},F_{2}}(r)$. Then $M'_{m}$ has order
$O(m^{n-2})$; moreover, if for some ${\bf g}=(g_{1},\dots,g_{n})\in({\cal O}_{S}^{*})^{n}$ those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le m$ are
linearly nondegenerate over $V_{F_{1},F_{2}}(Mr+1)$ for some positive
integers $r$ and $m\ge2d$, then we have the following estimate
\begin{align*}
& MN_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g}))\\
\le & \left(M'+\frac{d_{Mr}}{d_{M(r-1)}}M-M\right)mn\max_{1\le i\le n}h(g_{i})+cM\left(h(F_{1})+h(F_{2})\right)+c'M\max\{0,2\gen-2+|S|\},
\end{align*}
where $c:=\frac{d_{Mr}}{d_{M(r-1)}}(1+M(r+1))$ and $c':=\frac{d_{Mr}^{2}M}{2d_{M(r-1)}}$.
\end{theorem}
\begin{proof} We first fix some convenient notation. We denote by
$\mathbf{x}:=(x_{1},\ldots,x_{n})$ an $n$-tuple of algebraically
independent variables. Let $m$ be a positive integer. For a subset
$T\subset K[\mathbf{x}]$, we let
\[
T_{m}=\{f\in T\,:\deg f\le m\}.
\]
By the assumption that one of the coefficients in the expansion of
$F_{i}$ is $1$ for each $i\in\{1,2\}$, we note that
\begin{equation}
v_{\p}(F_{i})\le0\qquad\text{for every }i\in\{1,2\}\text{ and }\p\in C(\mathbf{k}).\label{eq: vF<=00003D00003D00003D00003D0}
\end{equation}
Consider the ideal $(F_{1},F_{2})\subset K[\mathbf{x}]$. If $(F_{1},F_{2})=(1)=K[\mathbf{x}]$,
then it is elementary to show that $N_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g}))$
is bounded by some constant independent of ${\bf g}$. Therefore,
we assume that the ideal $(F_{1},F_{2})$ is proper. For ${\bf i}=(i_{1},\cdots,i_{n})\in\mathbb{Z}_{\ge0}^{n}$,
we let ${\bf x}^{{\bf i}}:=x_{1}^{i_{1}}\cdots x_{n}^{i_{n}}$, and
${\bf g}^{{\bf i}}:=g_{1}^{i_{1}}\cdots g_{n}^{i_{n}}$. By Lemma
2.11 of \cite{levin2019greatest}, we note that $M=\dim_{K}(F_{1},F_{2})_{m}$,
and may choose a basis $\{\phi_{1},\hdots,\phi_{M}\}$ of the $K$-vector
space $(F_{1},F_{2})_{m}$ such that each $\phi_{j}$ is of the form
${\bf x}^{{\bf i}}F_{j}$ with $|\mathbf{i}|:=i_{1}+\cdots+i_{n}\le m-d$
and $j\in\{1,2\}$. Put
\[
\Phi:=(\phi_{1},\hdots,\phi_{M})\qquad\text{and}\qquad\Phi({\bf g}):=(\phi_{1}({\bf g}),\hdots,\phi_{M}({\bf g})).
\]
For each $\p\in S$, we construct a subset $B_{\p}\subset K[\mathbf{x}]_{m}$,
consisting of only monomials, whose images in the $K$-linear space
$V_{m}:=K[\mathbf{x}]_{m}/(F_{1},F_{2})_{m}$ form one of its bases
as follows.
Choose a monomial ${\bf x}^{{\bf i}_{\p,1}}\in K[x_{1},\hdots,x_{n}]_{m}$
so that $v_{\p}({\bf g}^{{\bf i}_{\p,1}})$ is maximum subject to
the condition ${\bf x}^{{\bf i}_{\p,1}}\notin(F_{1},F_{2}).$ If ${\bf x}^{{\bf i}_{\p,1}},\hdots,{\bf x}^{{\bf i}_{\p,j}}$
have been constructed such that their images in $V_{m}$ are $K$-linearly
independent but do not span all of $V_{m}$, then we let ${\bf x}^{{\bf i}_{\p,j+1}}\in K[x_{1},\hdots,x_{n}]_{m}$
be a monomial such that $v_{\p}({\bf g}^{{\bf i}_{\p,j+1}})$ is maximum
subject to the condition that the images of ${\bf x}^{{\bf i}_{\p,1}},\hdots,{\bf x}^{{\bf i}_{\p,j+1}}$
in $V_{m}$ are $K$-linearly independent; otherwise we stop. Because
$\dim_{K}V_{m}=\dim_{K}K[\mathbf{x}]_{m}-\dim_{K}(F_{1},F_{2})_{m}=\binom{m+n}{n}-M=M'$,
we will eventually stop and obtain that $B_{\p}:=\{{\bf x}^{{\bf i}_{\p,1}},\hdots,{\bf x}^{{\bf i}_{\p,M'}}\}\subset K[\mathbf{x}]_{m}$
is a set of monomials whose images in $V_{m}$ form one of its $K$-linear
bases such that
\begin{equation}
v_{\p}({\bf g}^{{\bf i}_{\p,1}})\ge v_{\p}({\bf g}^{{\bf i}_{\p,2}})\ge\cdots\ge v_{\p}({\bf g}^{{\bf i}_{\p,M'}})\ge v_{\p}({\bf g}^{{\bf i}_{\p}(i)})\label{eq: decreasing-v}
\end{equation}
for each $i\in\{1,\ldots,M\}$, where we denote by $\{{\bf i}_{\p}(1),\hdots,{\bf i}_{\p}(M)\}$
the set of those ${\bf i}\in\mathbb{Z}_{\ge0}^{n}$ with $|{\bf i}|\le m$
and $\mathbf{i}\notin I_{\p}$, where
\begin{equation}
I_{\p}:=\{{\bf i}_{\p,1},\hdots,{\bf i}_{\p,M'}\}.\label{eq: I_p}
\end{equation}
By direct calculation, we find that $M'_{m}=\binom{m+n}{n}-2\binom{m+n-d}{n}+\binom{m+n-2d}{n}=O(m^{n-2})$.
Alternatively, since $F_{1}$ and $F_{2}$ are coprime, the ideal
$(F_{1},F_{2})$ defines a closed subset of $\mathbb{P}^{n}$ of codimension
at least 2, and it follows from the theory of Hilbert functions and
Hilbert polynomials that $M'_{m}=\dim_{K}V_{m}=O(m^{n-2})$.
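As a quick numerical sanity check (not part of the proof), one can verify the growth order $M'_{m}=O(m^{n-2})$ directly from the binomial expression; the parameters $n=4$, $d=3$ below are arbitrary choices for illustration.

```python
from math import comb

def M(m, n, d):
    # M_m = 2*C(m+n-d, n) - C(m+n-2d, n), the dimension of (F_1, F_2)_m
    return 2 * comb(m + n - d, n) - comb(m + n - 2 * d, n)

def M_prime(m, n, d):
    # M'_m = C(m+n, n) - M_m = dim V_m
    return comb(m + n, n) - M(m, n, d)

# For fixed n, d, the ratio M'_m / m^(n-2) stays bounded as m grows
# (it approaches d^2/(n-2)!, here 3^2/2! = 4.5), so M'_m = O(m^(n-2)).
n, d = 4, 3
ratios = [M_prime(m, n, d) / m ** (n - 2) for m in (50, 100, 200, 400)]
```

The drop from order $m^{n}$ to order $m^{n-2}$ reflects that $M'_{m}$ is a second difference (with step $d$) of the degree-$n$ polynomial $m\mapsto\binom{m+n}{n}$.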
For each $i\in\{1,\ldots,M\}$, we have
\[
{\bf x}^{{\bf i_{\p}}(i)}+\sum_{j=1}^{M'}c_{\p,i,j}{\bf x}^{{\bf i}_{\p,j}}\in(F_{1},F_{2})_{m}
\]
for some (unique) choice of coefficients $c_{\p,i,j}\in\K$; by expressing
${\bf x}^{{\bf i_{\p}}(i)}+\sum_{j=1}^{M'}c_{\p,i,j}{\bf x}^{{\bf i}_{\p,j}}$
as a (unique) $K$-linear combination of $\phi_{1},\hdots,\phi_{M}$,
we let
\begin{equation}
L_{\p,i}:=\sum_{\ell=1}^{M}b_{\p,i,\ell}y_{\ell}\in K[y_{1},\ldots,y_{M}]\label{eq:L_p,i}
\end{equation}
be a linear form over $K$ such that
\begin{align}
L_{\p,i}(\Phi(\mathbf{x}))=c_{\p}\left({\bf x}^{{\bf i_{\p}}(i)}+\sum_{j=1}^{M'}c_{\p,i,j}{\bf x}^{{\bf i}_{\p,j}}\right),\label{cij}
\end{align}
where $c_{\p}\in K^{*}$ will be chosen later.
By the choice of the $\phi_{\ell}$, we may write
\begin{align}
\phi_{\ell}=\sum_{s=1}^{M}\alpha_{\p,\ell,s}{\bf x}^{{\bf i}_{\p}(s)}+\sum_{j=1}^{M'}\alpha_{\p,\ell,{\bf i}_{\p,j}}{\bf x}^{{\bf i}_{\p,j}},\label{cij3}
\end{align}
where both $\alpha_{\p,\ell,i}$ and $\alpha_{\p,\ell,{\bf i}_{\p,j}}$
are coefficients of either $F_{1}$ or $F_{2}$, thus
\begin{equation}
\min\{v_{\p}(\alpha_{\p,\ell,{\bf i}_{\p,j}}),v_{\p}(\alpha_{\p,\ell,i})\}\ge v_{\p}(F_{1})+v_{\p}(F_{2})\qquad\text{for each }\ell,i,j.\label{eq: v_=00003D00003D00003D00005Calpha}
\end{equation}
Combining \eqref{eq:L_p,i} and \eqref{cij3}, we have
\begin{align}
L_{\p,i}(\Phi(\mathbf{x}))=\sum_{\ell=1}^{M}b_{\p,i,\ell}\left(\sum_{s=1}^{M}\alpha_{\p,\ell,s}{\bf x}^{{\bf i}_{\p}(s)}+\sum_{j=1}^{M'}\alpha_{\p,\ell,{\bf i}_{\p,j}}{\bf x}^{{\bf i}_{\p,j}}\right).\label{cij4}
\end{align}
Note that if we take $c_{\p}=1$, then by comparing \eqref{cij} with
\eqref{cij4}, we find that
\[
\det(b_{\p,i,\ell})_{1\le\ell,i\le M}\det(\alpha_{\p,\ell,s})_{1\le\ell,s\le M}=1.
\]
From now on, we let
\begin{equation}
c_{\p}:=\det(\alpha_{\p,\ell,s})_{1\le\ell,s\le M}\ne0\label{eq: c_p}
\end{equation}
and note that $c_{\p}\in V_{F_{1},F_{2}}(M)$. With this choice
of $c_{\p}$, we compare \eqref{cij} with \eqref{cij4} again and
see that the inverse of $(\alpha_{\p,\ell,s})_{1\le\ell,s\le M}$
is $c_{\p}^{-1}(b_{\p,i,\ell})_{1\le\ell,i\le M}$, which shows that
\begin{equation}
b_{\p,i,\ell}\in V_{F_{1},F_{2}}(M-1)\label{eq: b_pil_in}
\end{equation}
for each $i$, $\ell$ by Cramer's rule. This comparison also gives
\begin{align}
c_{\p} & =\sum_{\ell=1}^{M}b_{\p,i,\ell}\alpha_{\p,\ell,i}\qquad\text{for each }1\le i\le M,\label{eq: key2}\\
c_{\p}c_{\p,i,j} & =\sum_{\ell=1}^{M}b_{\p,i,\ell}\alpha_{\p,\ell,{\bf i}_{\p,j}}\qquad\text{for each }1\le i\le M\text{ and }1\le j\le M'.\label{eq: key2-1}
\end{align}
From \eqref{eq: decreasing-v}, \eqref{cij}, \eqref{eq: key2}, \eqref{eq: key2-1},
\eqref{eq: v_=00003D00003D00003D00005Calpha} and \eqref{eq: vF<=00003D00003D00003D00003D0},
we have
\begin{align}
v_{\p}(L_{\p,i}(\Phi({\bf g}))) & \ge v_{\p}({\bf g}^{{\bf i}_{\p}(i)})+\min_{j}\{v_{\p}(c_{\p}),v_{\p}(c_{\p}c_{\p,i,j})\}\label{keyinequality2}\\
& \ge v_{\p}({\bf g}^{{\bf i}_{\p}(i)})+\min_{j}\{\min_{\ell}v_{\p}(b_{\p,i,\ell})+\min_{\ell}v_{\p}(\alpha_{\p,\ell,i}),\min_{\ell}v_{\p}(b_{\p,i,\ell})+\min_{\ell}v_{\p}(\alpha_{\p,\ell,{\bf i}_{\p,j}})\}\nonumber \\
& \ge v_{\p}({\bf g}^{{\bf i}_{\p}(i)})+\min_{\ell}v_{\p}(b_{\p,i,\ell})+v_{\p}(F_{1})+v_{\p}(F_{2}),\nonumber
\end{align}
which gives the following key inequality
\begin{equation}
v_{\p}(L_{\p,i}(\Phi({\bf g})))-v_{\p}(L_{\p,i})\ge v_{\p}({\bf g}^{{\bf i}_{\p}(i)})+v_{\p}(F_{1})+v_{\p}(F_{2}).\label{eq:keyineq}
\end{equation}
Thus, by the construction of \eqref{eq: I_p} and the fact that ${\bf g}^{{\bf i}}\in\mathcal{O}_{S}^{*}$
for each $\mathbf{i}$, we have
\begin{align}
\sum_{\p\in S}\sum_{1\le i\le M}v_{\p}({\bf g}^{{\bf i}_{\p}(i)}) & =\sum_{\p\in S}\sum_{|{\bf i}|\le m}v_{\p}({\bf g}^{{\bf i}})-\sum_{\p\in S}\sum_{|{\bf i}|\le m,{\bf i}\in I_{\p}}v_{\p}({\bf g}^{{\bf i}})\label{eq:main_term}\\
& \ge\sum_{|{\bf i}|\le m}\sum_{\p\in S}v_{\p}({\bf g}^{{\bf i}})-|I_{\p}|m\sum_{\p\in S}\sum_{j=1}^{n}v_{\p}^{0}(g_{j})\nonumber \\
& =-M'm\sum_{j=1}^{n}h(g_{j})\nonumber \\
& \ge-M'mn\max_{1\le j\le n}h(g_{j}).\nonumber
\end{align}
By the choice of these $\phi_{i}\in\K[x_{1},\hdots,x_{n}]_{m}$, together
with \eqref{eq: vF<=00003D00003D00003D00003D0}, we have
\begin{align*}
v_{\p}(\phi_{i}({\bf g})) & \ge m\min\{v_{\p}({\bf g}),0\}+v_{\p}(F_{1})+v_{\p}(F_{2})\\
& \ge-m\sum_{j=1}^{n}v_{\p}^{\infty}(g_{j})+v_{\p}(F_{1})+v_{\p}(F_{2})
\end{align*}
for every $\p\in C(\mathbf{k})$. It follows that
\begin{align}
h(\Phi({\bf g}))\le mn\max_{1\le i\le n}h(g_{i})+h(F_{1})+h(F_{2}).\label{phiup}
\end{align}
Also, with the fact that ${\bf g}^{{\bf i}}\in\mathcal{O}_{S}^{*}$
for each $\mathbf{i}$, we note for every $\p\notin S$ that $v_{\p}(\phi_{i}({\bf g}))=v_{\p}(F_{\epsilon_{i}}({\bf g}))\ge v_{\p}(F_{\epsilon_{i}})$
with some $\epsilon_{i}\in\{1,2\}$, and hence, together with \eqref{eq: vF<=00003D00003D00003D00003D0},
we have that
\begin{equation}
\begin{split}v_{\p}(\phi_{i}({\bf g})) & \ge\min\{v_{\p}(F_{1}({\bf g})),v_{\p}(F_{2}({\bf g}))\}\\
& \ge\min\{v_{\p}^{0}(F_{1}({\bf g})),v_{\p}^{0}(F_{2}({\bf g}))\}+v_{\p}(F_{1})+v_{\p}(F_{2}).
\end{split}
\label{phinotp}
\end{equation}
By \eqref{phinotp}, we have
\begin{align}
N_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g})) & =\sum_{\p\notin S}\min\{v_{\p}^{0}(F_{1}({\bf g})),v_{\p}^{0}(F_{2}({\bf g}))\}\nonumber \\
& \le\sum_{\p\notin S}\min_{i}v_{\p}(\phi_{i}({\bf g}))+\sum_{\p\notin S}-(v_{\p}(F_{1})+v_{\p}(F_{2}))\nonumber \\
& =\sum_{\p\not\in S}v_{\p}(\Phi({\bf g}))+\sum_{\p\notin S}-(v_{\p}(F_{1})+v_{\p}(F_{2}))\nonumber \\
& =-h(\Phi({\bf g}))-\sum_{\p\in S}v_{\p}(\Phi({\bf g}))+\sum_{\p\notin S}-(v_{\p}(F_{1})+v_{\p}(F_{2})).\label{phgcd}
\end{align}
By \eqref{eq: b_pil_in}, we may choose a finite collection $\mathcal{L}$
of linear forms over $K$ such that $L_{\p,i}\in\mathcal{L}$ for each
$\p\in S$ and $i\in\{1,\ldots,M\}$, that the finite-dimensional
$\mathbf{k}$-linear subspace $V:=V_{F_{1},F_{2}}(M)$ is spanned by
the set of all coefficients of linear forms in $\mathcal{L}$, and
that
\begin{equation}
h(\mathcal{L})\le M(h(F_{1})+h(F_{2})).\label{eq: h(L)}
\end{equation}
Since
those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le m$ are
linearly nondegenerate over $V_{F_{1},F_{2}}(Mr+1)$, we must have
that $\Phi({\bf g})\in\mathbb{P}^{M-1}(K)$ is linearly nondegenerate
over $V_{F_{1},F_{2}}(Mr)=V(r)$. By \eqref{eq: c_p}, we note that
elements from $\{L_{\p,i}(\Phi(\mathbf{x}))\,|\,1\le i\le M\}$ are
linearly independent over $\K$; thus the linear forms $L_{\p,i}$,
$1\le i\le M$, are linearly independent over $K$. Noting that $d_{Mr}=\dim_{\mathbf{k}}V(r)$
and $d_{M(r-1)}=\dim_{\mathbf{k}}V(r-1)$, we obtain from Theorem
\ref{MSMT} and \eqref{eq: h(L)} that
\begin{align}
\sum_{\p\in S}\sum_{1\le i\le M}\lambda_{L_{\p,i},\p}(\Phi({\bf g})) & \le\frac{d_{Mr}M}{d_{M(r-1)}}\left(h(\Phi({\bf g)})+(r+1)M(h(F_{1})+h(F_{2}))+\frac{Md_{Mr}-1}{2}\max\{0,2\gen-2+|S|\}\right).\label{eq: MSMT}
\end{align}
Together with \eqref{eq:keyineq}, \eqref{eq:main_term} and \eqref{phgcd},
we have
\begin{equation}
\begin{split}\sum_{\p\in S}\sum_{1\le i\le M}\lambda_{L_{\p,i},\p}(\Phi({\bf g})) & =\sum_{\p\in S}\sum_{1\le i\le M}\left(v_{\p}(L_{\p,i}(\Phi({\bf g})))-v_{\p}(L_{\p,i})\right)-M\sum_{\p\in S}v_{\p}(\Phi({\bf g}))\\
& \ge\sum_{\p\in S}\sum_{1\le i\le M}v_{\p}({\bf g}^{{\bf i}_{\p}(i)})+M\sum_{\p\in S}\left(v_{\p}(F_{1})+v_{\p}(F_{2})\right)+MN_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g}))\\
& \qquad+Mh(\Phi({\bf g)})+M\sum_{\p\notin S}(v_{\p}(F_{1})+v_{\p}(F_{2}))\\
& \ge-M'mn\max_{1\le j\le n}h(g_{j})+MN_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g}))+M(h(\Phi({\bf g)})-h(F_{1})-h(F_{2})).
\end{split}
\label{eq:MSMT-LHS}
\end{equation}
Combining \eqref{eq:MSMT-LHS} with \eqref{eq: MSMT} and \eqref{phiup},
we get
\[
\begin{split} & MN_{S,{\rm gcd}}(F_{1}({\bf g}),F_{2}({\bf g}))\\
\le & M'mn\max_{1\le j\le n}h(g_{j})+M\left(\frac{d_{Mr}}{d_{M(r-1)}}-1\right)h(\Phi({\bf g)})+\left(\frac{d_{Mr}}{d_{M(r-1)}}M^{2}(r+1)+M\right)\left(h(F_{1})+h(F_{2})\right)\\
& +\frac{d_{Mr}M(Md_{Mr}-1)}{2d_{M(r-1)}}\max\{0,2\gen-2+|S|\}\\
\le & \left(M'+\frac{d_{Mr}}{d_{M(r-1)}}M-M\right)mn\max_{1\le i\le n}h(g_{i})+\left(\frac{d_{Mr}}{d_{M(r-1)}}(1+M(r+1))\right)M\left(h(F_{1})+h(F_{2})\right)\\
& +\frac{d_{Mr}^{2}M^{2}}{2d_{M(r-1)}}\max\{0,2\gen-2+|S|\}.
\end{split}
\]
\end{proof}
\begin{theorem}\label{=00003D00003D00003D000024S=00003D00003D00003D000024 part}
Let $F\in K[x_{1},\cdots,x_{n}]$ be a polynomial of degree $d>0$
that does not vanish at $(0,\hdots,0)$. Assume that one of the coefficients
of $F$ is 1. For each $r\in\mathbb{N}$, denote by $V_{F}(r)$ the
(finite-dimensional) vector space over ${\bf k}$ spanned by $\prod_{\alpha}\alpha^{n_{\alpha}}$,
where $\alpha$ runs over all (non-zero) coefficients of $F$ with
$n_{\alpha}\ge0$ and $\sum n_{\alpha}=r$; put $d_{r}:=\dim_{{\bf k}}V_{F}(r)$. Put
$N:=\binom{n+d}{n}-1$. Let ${\bf g}=(g_{1},\dots,g_{n})\in({\cal O}_{S}^{*})^{n}$.
Suppose that, for some positive integer $r$, those $\mathbf{g}^{\mathbf{i}}$
with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$ and $|\mathbf{i}|\le d$
are linearly nondegenerate over $V_{F}(r)$. Then we have the following
estimate
\begin{align*}
\sum_{\p\in S}v_{\p}^{0}(F({\bf g}))\le & (\frac{d_{r}}{d_{r-1}}-1)(N+1)dn\max_{1\le i\le n}h(g_{i})+\frac{d_{r}(N+1)}{d_{r-1}}(r+1)h(F)\\
& +\frac{d_{r}(N+1)(Nd_{r}+d_{r}-1)}{2d_{r-1}}\max\{0,2\gen-2+|S|\}.
\end{align*}
\end{theorem}
\begin{proof} Let $\Phi=(\phi_{0},\phi_{1},\hdots,\phi_{N}):\mathbb{P}^{n}\to\mathbb{P}^{N}$
be the $d$-tuple embedding of $\mathbb{P}^{n}$ given by the set
of monomials of degree $d$ in $\K[x_{0},\hdots,x_{n}]$, where $\phi_{0}:=x_{0}^{d}$.
Let $\tilde{F}\in\K[x_{0},\hdots,x_{n}]$ be the homogenization of
$F$. Denote by $\tilde{{\bf g}}:=(g_{0},g_{1},\hdots,g_{n})$, where
$g_{0}:=1$.
Since each $\phi_{i}$ is a degree-$d$ monomial in $\K[x_{0},\hdots,x_{n}]$,
we have $v_{\p}(\phi_{i}(\tilde{{\bf g}}))\ge dv_{\p}(\tilde{{\bf g}})=d\min\{v_{\p}({\bf g}),0\}\ge-d\sum_{j=1}^{n}v_{\p}^{\infty}(g_{j})$
for every $\p\in C(\mathbf{k})$ and $i\in\{0,\ldots,N\}$; thus we
have
\begin{align}
h(\Phi(\tilde{{\bf g}}))\le dn\max_{1\le i\le n}h(g_{i}).\label{phiup-1}
\end{align}
Also, since $\phi_{i}(\tilde{{\bf g}})\in\mathcal{O}_{S}^{*}$, we
have that
\begin{equation}
\sum_{\p\in S}v_{\p}(\phi_{i}(\tilde{{\bf g}}))=0.\label{eq: sum_formula_Sunit}
\end{equation}
For each $i\in\{0,\ldots,N\}$, denote by $L_{i}$ the linear form
corresponding to the $i$-th coordinate hyperplane of $\mathbb{P}^{N}$. We
also denote by $L_{\tilde{F}}\in K[y_{0},\ldots,y_{N}]$ the linear
form coming from the monomial expansion (of degree $d$) of $\tilde{F}$;
thus $L_{\tilde{F}}(\Phi(\tilde{\mathbf{g}}))=\tilde{F}(\tilde{\mathbf{g}})$.
Let $\mathcal{L}:=\{L_{i}\,|\,i\in\{0,\ldots,N\}\}\cup\{L_{\tilde{F}}\}$.
By construction and our assumption that one of the coefficients of
$F$ is 1, we have that $h(\mathcal{L})=\tilde{h}(F)=h(F)$, that
$V_{F}(1)$ is spanned by the set of all coefficients of linear forms
in $\mathcal{L}$, and that
\begin{equation}
v_{\p}(L)\le0\qquad\text{for every }L\in\mathcal{L}\text{ and }\p\in C(\mathbf{k}).\label{eq: vL<=00003D00003D00003D0}
\end{equation}
We also note that any $N+1$ linear forms in $\mathcal{L}$ are linearly
independent over $K$ since $F(0,\hdots,0)\ne0$.
For those $\p\in S$ and $i\in\{0,\ldots,N\}$ satisfying either $i\ne0$
or $v_{\p}(F({\bf g}))\le0$, we define $L_{\p,i}:=L_{i}$; for the
remaining case, we define $L_{\p,i}:=L_{\tilde{F}}$. Hence we see
\begin{align}
L_{\p,i}(\Phi(\tilde{{\bf g}}))=F({\bf g})\qquad & \text{ if \ensuremath{i=0} and \ensuremath{v_{\p}(F({\bf g}))>0};}\label{L0}\\
L_{\p,i}(\Phi(\tilde{{\bf g}}))=\phi_{i}(\tilde{{\bf g}})\qquad & \text{otherwise.}\label{eq:Li}
\end{align}
By assumption, those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le d$ are linearly nondegenerate over $V_{F}(r)$, thus
$\Phi(\tilde{{\bf g}})\in\mathbb{P}^{N}(K)$
is linearly nondegenerate over $V_{F}(r)$. Applying Theorem \ref{MSMT}
with $\mathbf{a}=\Phi(\tilde{{\bf g}})$ and $V_{\mathcal{L}}=V_{F}(1)$,
we have
\begin{align}\label{useSMT3}
& \sum_{\p\in S}\sum_{i=0}^{N}\left(v_{\p}(L_{\p,i}(\Phi(\tilde{{\bf g}})))-v_{\p}(\Phi(\tilde{{\bf g}}))-v_{\p}(L_{\p,i})\right)\cr
\le & \frac{d_{r}(N+1)}{d_{r-1}}\big(h(\Phi(\tilde{{\bf g}}))+(r+1)h(F)+\frac{Nd_{r}+d_{r}-1}{2}\max\{0,2\gen-2+|S|\}\big).
\end{align}
Together with \eqref{eq: sum_formula_Sunit}, \eqref{L0}, \eqref{eq:Li}
and \eqref{eq: vL<=00003D00003D00003D0}, we have the following estimate
for the left hand side of \eqref{useSMT3}
\begin{align*}
\sum_{i=0}^{N}\sum_{\p\in S}\left(v_{\p}(L_{\p,i}(\Phi(\tilde{{\bf g}})))-v_{\p}(\Phi(\tilde{{\bf g}}))-v_{\p}(L_{\p,i})\right)\ge\sum_{\p\in S}\big(v_{\p}^{0}(F({\bf g}))-(N+1)v_{\p}(\Phi(\tilde{{\bf g}}))\big).
\end{align*}
Therefore, we can derive from \eqref{useSMT3} and \eqref{phiup-1}
that
\begin{align*}
\sum_{\p\in S}v_{\p}^{0}(F({\bf g}))\le & (\frac{d_{r}}{d_{r-1}}-1)(N+1)dn\max_{1\le i\le n}h(g_{i})+\frac{d_{r}(N+1)}{d_{r-1}}(r+1)h(F)\\
& +\frac{d_{r}(N+1)(Nd_{r}+d_{r}-1)}{2d_{r-1}}\max\{0,2\gen-2+|S|\}.
\end{align*}
\end{proof}
\subsection{Proof of Theorem \ref{movinggcdunit}}
\begin{proof}[Proof of Theorem \ref{movinggcdunit}] Let $\alpha$
and $\beta$ be one of the nonzero coefficients of $F$ and $G$, respectively,
and put ${\bf g}:=(g_{1},\hdots,g_{n})\in(\mathcal{O}_{S}^{*})^{n}$.
Since $v_{\p}^{0}(F({\bf g}))\le v_{\p}^{0}(\frac{1}{\alpha}F({\bf g}))+v_{\p}^{0}(\alpha)$
and $v_{\p}^{0}(G({\bf g}))\le v_{\p}^{0}(\frac{1}{\beta}G({\bf g}))+v_{\p}^{0}(\beta)$,
we have
\begin{align*}
N_{S,{\rm gcd}}(F({\bf g}),G({\bf g}))\le N_{S,{\rm gcd}}(\frac{1}{\alpha}F({\bf g}),\frac{1}{\beta}G({\bf g}))+\tilde{h}(F)+\tilde{h}(G),
\end{align*}
and
\begin{align*}
h_{{\rm gcd}}(F({\bf g}),G({\bf g}))\le h_{{\rm gcd}}(\frac{1}{\alpha}F({\bf g}),\frac{1}{\beta}G({\bf g}))+\tilde{h}(F)+\tilde{h}(G).
\end{align*}
Then by elementary reductions, which we omit, it suffices to prove
the theorem for $\frac{1}{\alpha}F$ and $\frac{1}{\beta}G$. Therefore,
we assume that, with respect to some fixed total ordering on the set
of monomials in $K[x_{1},\ldots,x_{n}]$, the coefficient attached
to the largest monomial appearing in $F$ (resp. in $G$) is 1. In
this case, $\tilde{h}(F^{e})=h(F^{e})=eh(F)=e\tilde{h}(F)$ for every
$e\in\mathbb{N}$, thus we may assume that $F$ and $G$ have the same
degree $d$ via replacing $F$ (resp. $G$) by some of its powers.
Let $\epsilon>0$ be given. We first choose $m$ sufficiently large
so that $m\ge2d$ and
\begin{align}
\frac{M'mn}{M}\le\frac{\epsilon}{4},\label{findm}
\end{align}
where $M:=M_{m}:=2\binom{m+n-d}{n}-\binom{m+n-2d}{n}$ and $M':=M'_{m}:=\binom{m+n}{n}-M$;
this is possible because $M_{m}=\frac{m^{n}}{n!}+O(m^{n-1})$ and
$M'=O(m^{n-2})$ (by the proof of Theorem \ref{Refinement}). By \eqref{infvr} we
may then choose a sufficiently large integer $r\in\mathbb{N}$ such
that
\begin{align}
\frac{w}{u}-1\le\frac{\epsilon}{4mn},\label{wu}
\end{align}
where $w:=\dim_{{\bf k}}V_{F,G}(Mr)$ and $u:=\dim_{{\bf k}}V_{F,G}(Mr-M)$
(as in Theorem \ref{Refinement}); in the case where $F(0,\ldots,0)\ne0$,
we further require that
\begin{align}
\frac{w'}{u'}-1\le\frac{\epsilon}{8dn(N+1)},\label{wu'}
\end{align}
where $w':=\dim V_{F}(r)$, $u':=\dim V_{F}(r-1)$ and $N:=\binom{n+d}{n}-1$
(as in Theorem \ref{=00003D00003D00003D000024S=00003D00003D00003D000024 part}).
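The existence of a suitable $m$ in \eqref{findm} can also be checked numerically; the following sketch (with hypothetical parameters $n=3$, $d=2$, $\epsilon=0.1$, chosen only for illustration) searches for the smallest admissible $m$:

```python
from math import comb

def M(m, n, d):
    # M_m = 2*C(m+n-d, n) - C(m+n-2d, n)
    return 2 * comb(m + n - d, n) - comb(m + n - 2 * d, n)

def M_prime(m, n, d):
    # M'_m = C(m+n, n) - M_m
    return comb(m + n, n) - M(m, n, d)

# Since M_m ~ m^n/n! while M'_m = O(m^(n-2)), the ratio M'*m*n/M decays
# like 1/m, so the threshold epsilon/4 is eventually met.
n, d, eps = 3, 2, 0.1
m = 2 * d
while M_prime(m, n, d) * m * n / M(m, n, d) > eps / 4:
    m += 1
```

This is only a feasibility check for the choice of $m$; the proof itself needs nothing beyond the stated asymptotics.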
We first consider the case where those $\mathbf{g}^{\mathbf{i}}$ with
$\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$ and $|\mathbf{i}|\le m$ are
linearly degenerate over $V_{F,G}(Mr+1)$, i.e., there is a non-trivial
relation
\begin{align}
\sum_{\mathbf{i}}\alpha_{\mathbf{i}}\mathbf{g}^{\mathbf{i}}=0,\label{uniteq}
\end{align}
where the sum runs over those $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
with $|\mathbf{i}|\le m$, and $\alpha_{\mathbf{i}}\in V_{F,G}(Mr+1)$
for each $\mathbf{i}$, with $\alpha_{\mathbf{i}_{0}}\ne0$ for
some $\mathbf{i}_{0}$. Then we have
\begin{align}
\sum_{\mathbf{i}\ne\mathbf{i}_{0}}\frac{\alpha_{\mathbf{i}}}{\alpha_{\mathbf{i}_{0}}}\mathbf{g}^{\mathbf{i}-\mathbf{i}_{0}}=-1.\label{uniteq-1}
\end{align}
Since ${\bf g}\in(\mathcal{O}_{S}^{*})^{n}$ and the number of zeros
and poles of each $\alpha_{\mathbf{i}}$ appearing in \eqref{uniteq}
is bounded by $2h(\alpha_{\mathbf{i}})\le2(Mr+1)(\tilde{h}(F)+\tilde{h}(G))$,
we can apply Theorem \ref{BrMa} with some $S'$ including $S$ and
the zeros and poles of those $\alpha_{\mathbf{i}}$ appearing in \eqref{uniteq}
to get
\[
h(\frac{\alpha_{\mathbf{i}}}{\alpha_{\mathbf{i}_{0}}}\mathbf{g}^{\mathbf{i}-\mathbf{i}_{0}})\le\tilde{c}\max\left\{ 0,2\gen-2+|S|+2(Mr+1)\binom{n+m}{n}(\tilde{h}(F)+\tilde{h}(G))\right\} ,
\]
where $\tilde{c}:=\frac{1}{2}\left(\binom{n+m}{n}-1\right)\left(\binom{n+m}{n}-2\right)$.
Then
\begin{align}\label{multiht1}
h(\mathbf{g}^{\mathbf{i}-\mathbf{i}_{0}}) & \le h(\frac{\alpha_{\mathbf{i}}}{\alpha_{\mathbf{i}_{0}}})+h(\frac{\alpha_{\mathbf{i}}}{\alpha_{\mathbf{i}_{0}}}\mathbf{g}^{\mathbf{i}-\mathbf{i}_{0}})\cr
& \le2(Mr+1)(\tilde{c}\binom{n+m}{n}+1)(\tilde{h}(F)+\tilde{h}(G))+\tilde{c}\max\{0,2\gen-2+|S|\},
\end{align}
which fits the assertion \eqref{multiheight11} with $(m_{1},\ldots,m_{n})=\mathbf{i}-\mathbf{i}_{0}$
and $\sum_{j=1}^{n}|m_{j}|\le2m$.
We now consider when those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le m$ are linearly nondegenerate over $V_{F,G}(Mr+1)$. By Theorem \ref{Refinement}
combined with \eqref{findm} and \eqref{wu},
\begin{align*}
N_{S,{\rm gcd}}(F({\bf g}),G({\bf g})) & \le(\frac{M'mn}{M}+(\frac{w}{u}-1)mn)\max_{1\le i\le n}\{h(g_{i})\}+\tilde{c}_{1}(\tilde{h}(F)+\tilde{h}(G))+\tilde{c}_{2}\max\{0,2\gen-2+|S|\}\\
& \le\frac{\epsilon}{2}\max_{1\le i\le n}\{h(g_{i})\}+\tilde{c}_{1}(\tilde{h}(F)+\tilde{h}(G))+\tilde{c}_{2}\max\{0,2\gen-2+|S|\},
\end{align*}
where $\tilde{c}_{1}$ and $\tilde{c}_{2}$ depend only on $(M_{m},r)$,
thus only on $\epsilon$. Hence, if
\begin{align}
\max_{1\le i\le n}\{h(g_{i})\}\ge\frac{4}{\epsilon}\left(\tilde{c}_{1}(\tilde{h}(F)+\tilde{h}(G))+\tilde{c}_{2}\max\{0,2\gen-2+|S|\}\right),\label{heightbound1}
\end{align}
then
\begin{align}
N_{S,{\rm gcd}}(F({\bf g}),G({\bf g}))\le\frac{3\epsilon}{4}\max_{1\le i\le n}\{h(g_{i})\}.\label{NSgcd}
\end{align}
We now estimate $h_{{\rm gcd}}(F({\bf g}),G({\bf g}))$ using Theorem \ref{=00003D00003D00003D000024S=00003D00003D00003D000024 part}
with the extra assumption that $F$ or $G$ does not vanish at the origin.
We may assume that $F(0,\hdots,0)\ne0$. Since $m\ge2d$ and those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le m$ are linearly nondegenerate over $V_{F,G}(Mr+1)$, it is clear that
those $\mathbf{g}^{\mathbf{i}}$ with $\mathbf{i}\in\mathbb{Z}_{\ge0}^{n}$
and $|\mathbf{i}|\le d$ are linearly nondegenerate over $V_{F}(r)$. Then by Theorem \ref{=00003D00003D00003D000024S=00003D00003D00003D000024 part}
and \eqref{wu'}, we have the following
\begin{align}
\sum_{\p\in S}v_{\p}^{0}(F({\bf g})) & \le\frac{\epsilon}{8}\max_{1\le i\le n}h(g_{i})+c_{1}'h(F)+c_{2}'\max\{0,2\gen-2+|S|\},\label{eq: S-part}
\end{align}
where $c_{1}':=\frac{w'(N+1)}{u'}(r+1)$ and $c_{2}':=\frac{w'(N+1)(Nw'+w'-1)}{2u'}$
with $N:=\binom{n+d}{n}-1$. Note that $c_{1}'$ and $c_{2}'$ depend
only on $(w',u')$, thus only on $\epsilon$. By \eqref{eq: S-part},
we see that if both \eqref{heightbound1} and
\begin{align*}
\max_{1\le i\le n}\{h(g_{i})\}\ge\frac{8}{\epsilon}\left(c_{1}'h(F)+c_{2}'\max\{0,2\gen-2+|S|\}\right)
\end{align*}
hold, then
\begin{align*}
\sum_{\p\in S}\min\{v_{\p}^{0}(F({\bf g})),v_{\p}^{0}(G({\bf g}))\}\le\sum_{\p\in S}v_{\p}^{0}(F({\bf g}))\le\frac{\epsilon}{4}\max_{1\le i\le n}\{h(g_{i})\},
\end{align*}
and hence together with \eqref{NSgcd}, we have
\begin{align*}
h_{{\rm gcd}}(F({\bf g}),G({\bf g}))\le\epsilon\max_{1\le i\le n}\{h(g_{i})\}.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{n=00003D00003D00003D00003D00003D00003D2gcdunit}]
Since $F,\,G\in{\bf k}[x_{1},x_{2}]$, we have that $V_{F,G}(r)=V_{F}(r)=\mathbf{k}$
for each $r\in\mathbb{N}$. Given $\epsilon>0$, we first choose $m$
sufficiently large satisfying \eqref{findm} with $n=2$. Suppose that those $g_1^{i_1}g_2^{i_2}$ with $(i_1,i_2)\in\mathbb{Z}_{\ge0}^2$ and $i_1+i_2\le m$
are linearly dependent over ${\bf k}$. Then there is a linear relation
\begin{align}
\sum_{\mathbf{j}=(j_{1},j_{2})}\alpha_{\mathbf{j}}g_{1}^{j_{1}}g_{2}^{j_{2}}=1,\label{uniteq0}
\end{align}
where $\alpha_{\mathbf{j}}\in{\bf k}^{*}$, $\mathbf{j}\in\mathbb{Z}^{2}\setminus\{(0,0)\}$
and $|j_{1}|+|j_{2}|\le2m$ for each appearing $\mathbf{j}=(j_{1},j_{2})$.
We may also assume that no proper subsum of the left hand side of
\eqref{uniteq0} vanishes. Consider the subgroup $J\subset\mathbb{Z}^{2}$
generated by those $\mathbf{j}$ appearing in \eqref{uniteq0}. If
$J$ has rank one, i.e., there exists $(m_{1},m_{2})\in\mathbb{Z}^{2}\setminus\{(0,0)\}$
such that $(j_{1},j_{2})=\lambda_{\mathbf{j}}(m_{1},m_{2})$ with
$\lambda_{\mathbf{j}}\in\mathbb{Z}$ for every $\mathbf{j}=(j_{1},j_{2})$
appearing in \eqref{uniteq0}, then $|m_{1}|+|m_{2}|\le2m$ and
\begin{align*}
\sum_{\mathbf{j}}\alpha_{\mathbf{j}}(g_{1}^{m_{1}}g_{2}^{m_{2}})^{\lambda_{\mathbf{j}}}=1,
\end{align*}
which implies that $g_{1}^{m_{1}}g_{2}^{m_{2}}\in\mathbf{k}$. For
the other cases, $J$ must have rank two, thus we can find $(j_{1},j_{2})$
and $(j_{1}',j_{2}')$ appearing in \eqref{uniteq0} such that $\mathbb{Q}\cdot(j_{1},j_{2})\ne\mathbb{Q}\cdot(j_{1}',j_{2}')$
(i.e. $(j_{1},j_{2})$ and $(j_{1}',j_{2}')$ are $\mathbb{Q}$-linearly
independent), and
\begin{equation}
\max\{h(g_{1}^{j_{1}}g_{2}^{j_{2}}),h(g_{1}^{j_{1}'}g_{2}^{j_{2}'})\}\le\frac{1}{2}\left(\binom{m+2}{2}-1\right)\left(\binom{m+2}{2}-2\right)\max\{0,2\gen-2+|S|\}\label{eq: htb}
\end{equation}
by using Theorem \ref{BrMa}. Since $\mathbb{Q}\cdot(j_{1},j_{2})\ne\mathbb{Q}\cdot(j_{1}',j_{2}')$
and thus $j_{1}j_{2}'\ne j_{1}'j_{2}$, we note that $(j_{2}'j_{1}-j_{2}j_{1}',0)=j_{2}'(j_{1},j_{2})-j_{2}(j_{1}',j_{2}')$,
thus by \eqref{eq: htb} we have
\begin{align*}
h(g_{1})\le h(g_{1}^{j_{2}'j_{1}-j_{2}j_{1}'})&\le|j_{2}'|h(g_{1}^{j_{1}}g_{2}^{j_{2}})+|j_{2}|h(g_{1}^{j_{1}'}g_{2}^{j_{2}'})\cr
&\le2m\left(\binom{m+2}{2}-1\right)\left(\binom{m+2}{2}-2\right)\max\{0,2\gen-2+|S|\}.
\end{align*}
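The elimination step above uses nothing more than an integer linear combination of exponent vectors. A minimal numeric illustration (with hypothetical exponents chosen only for orientation):

```python
# Check the exponent-vector identity used above:
#   j2' * (j1, j2) - j2 * (j1', j2') = (j2'*j1 - j2*j1', 0),
# whose first entry is the determinant j1*j2' - j1'*j2, nonzero exactly
# when (j1, j2) and (j1', j2') are Q-linearly independent.
def eliminate_second_coordinate(v, w):
    """Return the integer combination w[1]*v - v[1]*w of exponent vectors."""
    return (w[1] * v[0] - v[1] * w[0], w[1] * v[1] - v[1] * w[1])

v, w = (3, 2), (1, 4)          # illustrative Q-independent pair: 3*4 != 1*2
combo = eliminate_second_coordinate(v, w)
assert combo == (10, 0)        # second coordinate is eliminated
assert combo[0] == v[0] * w[1] - w[0] * v[1]  # first entry is the determinant
```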
With similar estimates for $h(g_{2})$, this implies that
$\max\{h(g_{1}),h(g_{2})\}\le c\max\{0,2\gen-2+|S|\}$, where $c:=2m\left(\binom{m+2}{2}-1\right)\left(\binom{m+2}{2}-2\right)$
depends only on $\epsilon$.
In the case that
those $g_1^{i_1}g_2^{i_2}$ with $(i_1,i_2)\in\mathbb{Z}_{\ge0}^2$ and $i_1+i_2\le m$ are linearly independent over
${\bf k}$, we shall conclude that
$N_{S,{\rm gcd}}(F(g_{1},g_{2}),G(g_{1},g_{2}))\le\epsilon\max\{h(g_{1}),h(g_{2})\}$, and that
$h_{{\rm gcd}}(F(g_{1},g_{2}),G(g_{1},g_{2}))\le\epsilon\max\{h(g_{1}),h(g_{2})\}$ if we further assume that not both of
$F$ and $G$ vanish at $(0,0)$. The corresponding part in the proof of Theorem
\ref{movinggcdunit} works, but actually an easier proof suffices. We omit the details.
\end{proof}
\begin{proof}[Proof of Theorem \ref{movinggcdpower}] Let $S=S_{{\bf g}}:=\{\p\in C\,|\,v_{\p}(g_{i})\ne0\text{ for some }1\le i\le n\}.$
Then, $g_{i}\in\mathcal{O}_{S}^{*}$ for each $1\le i\le n$, and
\begin{align}
|S|\le2\sum_{i=1}^{n}h(g_{i})\le2n\max_{1\le i\le n}\{h(g_{i})\}.\label{sizeS}
\end{align}
Let $\epsilon>0$. Suppose that our assertion (i) (resp. (ii)) does
not hold for some $\ell$. By Theorem \ref{movinggcdunit} applied to
$(g_{1}^{\ell},\hdots,g_{n}^{\ell})\in({\cal O}_{S}^{*})^{n}$, there exist an integer
$m$ and positive constants $c_{i}'$, $1\le i\le4$, all depending only on
$\epsilon$, such that we have either
\begin{equation}
\max_{1\le i\le n}h(g_{i}^{\ell})\le c_{1}'(\tilde{h}(F)+\tilde{h}(G))+c_{2}'\max\{0,2\gen-2+|S|\},\label{eq:maxht}
\end{equation}
or
\begin{align}
h(g_{1}^{{\ell}m_{1}}\cdots g_{n}^{{\ell}m_{n}})\le c_{3}'(\tilde{h}(F)+\tilde{h}(G))+c_{4}'\max\{0,2\gen-2+|S|\}\label{multiheight1}
\end{align}
for some integers $m_{1},\hdots,m_{n}$, not all zero, with
$\sum|m_{i}|\le2m$. If
$g_{1}^{m_{1}}\cdots g_{n}^{m_{n}}\notin{\bf k}$, then we must have
$\max_{1\le i\le n}h(g_{i})\ge1$ and
$h(g_{1}^{m_{1}}\cdots g_{n}^{m_{n}})\ge1$. Hence
\eqref{sizeS}, \eqref{eq:maxht} and \eqref{multiheight1} imply that
\begin{align*}
\ell & \le(c_{1}'+c_{3}')(\tilde{h}(F)+\tilde{h}(G))+(c_{2}'+c_{4}')\max\{0,2\gen-2+|S|\}\\
 & \le(c_{1}'+c_{3}')(\tilde{h}(F)+\tilde{h}(G))+2(c_{2}'+c_{4}')(\gen+n\max_{1\le i\le n}\{h(g_{i})\}).
\end{align*}
This shows that our desired conclusion holds with
$c_{1}:=c_{1}'+c_{3}'$ and $c_{2}:=2(c_{2}'+c_{4}')$.
\end{proof}
\section{Introduction}
J.H. Oort first pointed out the {\it missing matter problem} in the 30's of last century \cite{Oort1,Oort2}. The issue came out
by observing the Doppler shift of stars moving near the plane of our Galaxy and calculating the star velocities. The result was that there had to be a large amount of matter inside the galaxy to prevent the stars from escaping. Such a "matter" should give rise to a central gravitational force much larger than Sun's gravitational pull to keep a planet in its orbit. However it turned out that there was not enough luminous mass in the Galaxy to account for this dynamics. The discrepancy was very large and the Galaxy had to be at least two or three times more massive than the sum of all its luminous components in order to match the result.
Later on, the tangential velocity of stars orbiting the Galactic center was measured as a function of distance from the center. Surprisingly, it was found that, far from the Galactic center, stars move with the same velocity independent of their distance. These results posed the problem that either luminous matter does not reliably trace the radial mass profile of the Galaxy, or the Newtonian potential cannot describe dynamics far from the Galactic center.
Soon after, other dark matter issues came out from dynamical descriptions of self-gravitating astrophysical systems like stellar clusters, galaxies, groups and clusters of galaxies.
In all these cases, more matter is dynamically inferred than can be accounted for by the luminous components. The mass discrepancy comes out assuming the validity of the Newton law at all astrophysical scales. Problems emerged also at larger scales: F. Zwicky discovered anomalous motions of galaxies in the Coma cluster, finding that the visible mass was too small to produce enough gravitational force to hold the cluster together \cite{Zwicky}.
At the beginning, the only possibility considered was to assume that the Newton law holds at all scales and to postulate some non-luminous component making up the missing mass. Many names have been coined for these invisible components.
For example, MAssive Compact Halo Objects (MACHOs) are objects like black holes and neutron stars (in general, sub-luminous objects) that populate the outer reaches of galaxies like the Milky Way. There are also Weakly Interacting Massive Particles (WIMPs), which do not interact with standard matter (constituted by baryons such as protons and neutrons): they are supposed to be particles beyond the Standard Model of particle physics but, up to now, there is no final indication of their existence \cite{revdonne}. In general, dark matter is assumed to come in two flavors, hot (HDM) and cold (CDM).
The CDM should reside in dead stars, planets, brown dwarfs, etc., while the HDM should be constituted by fast-moving relativistic particles such as neutrinos, tachyons, etc. However, there is still no definitive proof that WIMPs exist, or that MACHOs will ever make up more than five percent of the total amount of missing matter.
On the other hand, the need for unknown components such as dark energy (coming from cosmology)
and dark matter could be considered nothing else but a signal
of the breakdown of Einstein General Relativity (GR) at astrophysical
(galactic and extragalactic) and cosmological scales.
In this context, Extended Theories of Gravity (ETGs) could be, in
principle, an interesting alternative to explain cosmic
acceleration and large scale structure
without any dark components. In their simplest version, the Ricci
curvature scalar $R$, linear in the Hilbert-Einstein action, could
be replaced by a generic function $f(R)$ whose true form could be
"reconstructed" by the data. In fact, there is no a priori reason
to consider the gravitational Lagrangian linear in the Ricci
scalar while observations and experiments could contribute to
define and constrain the "true" theory of gravity (see \cite{PRnostro,reviewodi,reviewodi1,reviewmauro,reviewvalerio,libro,libro1}).
Coming to the weak-field limit, any alternative relativistic theory of gravity is expected to
reproduce GR results which, in any case, are firmly tested only at
Solar System scales in the Newtonian limit \cite{Will93}. Even this limit is
a matter of debate, since several relativistic theories do not
reproduce it. For example,
Yukawa-like corrections to the Newtonian potential easily come out \cite{Stelle:1976gc} with
interesting physical consequences. For instance, it is
claimed by some authors that the flat rotation
curves of
galaxies can be explained by such
terms \cite{Sanders90}. Other authors have shown that a
conformal theory of gravity is nothing else but a fourth order
theory containing such terms in the Newtonian limit.
In general, any relativistic theory of gravitation yields
corrections to the weak-field gravitational potentials ({\em
e.g.}, \cite{Qua91}) which, at the post-Newtonian level and
in the Parametrized Post-Newtonian formalism,
could constitute a test of these theories \cite{Will93}.
This point deserves a deep discussion. Beside the fundamental physics motivations coming from Quantum Gravity and unification theories (see
\cite{PRnostro,libro}), ETGs pose the problem that there are further gravitational degrees of freedom (related to higher order terms, non-minimal couplings and scalar fields in the field equations) and that the gravitational interaction is {\it not} scale-invariant. This means that, besides the Schwarzschild radius, other characteristic gravitational scales could come out from dynamics. Such scales, in the weak field approximation, should be responsible for characteristic lengths of astrophysical structures that should
result {\it confined} in this way \cite{annalen}.
In this paper, without claiming for completeness, we will try to address the problem of describing galaxy rotation curves {\it without dark matter} but asking for corrections to the Newtonian potential that could fit data and reproduce dynamics.
These corrections are not phenomenological but come out from the weak field limit of relativistic theories of gravity that predict the existence of corrections (e.g. Yukawa-like corrections) to the Newtonian potential. The only exception is GR, where the action is chosen to be $R$, linear in the Ricci curvature scalar, which does not give corrections to the Newtonian potential in the weak field limit. Relaxing such a hypothesis, it is possible to show that {\it any analytic ETG presents Yukawa corrections in the weak-field limit} (see also \cite{Qua91} for a detailed calculation). From an astrophysical point of view, these corrections mean that further scales have to be taken into account, while their effects could be irrelevant at local scales such as the Solar System.
With this scheme in mind, we will give a summary of ETGs in Sec. \ref{due}, discussing also their conformal properties. In fact, any ETG can be conformally transformed to the Einstein theory plus scalar fields representing the further gravitational degrees of freedom. This feature is extremely important in order to select characteristic length scales (related to the effective masses of the scalar fields) that could account for dynamics. In this sense, considering $f(R)$ gravity means taking into account an Einstein theory plus a scalar field; considering $f(R,\,\Box R)$-gravity means assuming Einstein + two scalar fields, and so on.
The emergence of Yukawa-like corrections to the Newtonian potential is discussed in Sec. \ref{tre} where the weak-field limit of $f(R)$-gravity, the simplest ETG, is worked out. Here, $f(R)$ is a generic analytic function of the Ricci curvature scalar $R$.
Furthermore, we discuss the case of $f(R,\phi)$-gravity, corresponding to $f(R,\Box R)$-gravity, i.e. Einstein plus two scalar fields, showing that a further free parameter is needed to better model dynamics.
Sec. \ref{cinque} is devoted to the rotation curves of galaxies. It is shown that the phenomenological Sanders potential, suitable to realistically fit observations, can be reproduced by the weak field limit of $f(R,\phi)$-gravity. Sec. \ref{sei} is devoted to discussion and conclusions.
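Since the analysis of Sec. \ref{cinque} hinges on Yukawa-corrected potentials, it may help to see how such a correction feeds into a rotation curve. The following sketch (illustrative parameter values, not fits to data) computes the circular velocity $v^{2}(r)=r\,\Phi'(r)$ for a point-mass potential of the Sanders-like form $\Phi(r)=-\frac{GM}{r}\left(1+\alpha\, e^{-r/\lambda}\right)$:

```python
import math

def v_circ_sq(r, GM=1.0, alpha=-0.9, lam=5.0):
    """Circular-velocity squared v^2(r) = r * dPhi/dr for the Yukawa-corrected
    point-mass potential Phi(r) = -(GM/r) * (1 + alpha * exp(-r/lam)).
    alpha (strength) and lam (range) are illustrative values, not fits."""
    x = r / lam
    # r * dPhi/dr = (GM/r) * (1 + alpha * exp(-x) * (1 + x))
    return (GM / r) * (1.0 + alpha * math.exp(-x) * (1.0 + x))

# alpha = 0 recovers the Keplerian case v^2 = GM/r; a nonzero alpha modifies
# the curve only over scales comparable to lam, leaving small radii untouched.
assert abs(v_circ_sq(1.0, alpha=0.0) - 1.0) < 1e-12
outer = [v_circ_sq(r) for r in (5.0, 10.0, 20.0)]
assert all(v > 0 for v in outer)
```

With $\alpha<0$ the correction suppresses gravity below the scale $\lambda$ and switches off beyond it, so $v^{2}(r)$ falls off more slowly than the Keplerian $GM/r$ over an intermediate range of radii.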
\section{Extended Gravity and Conformal Transformations}
\label{due}
Higher-order and scalar-tensor gravities are examples of ETGs. For a comprehensive discussion, see
\cite{PRnostro,reviewodi,reviewodi1,reviewmauro,reviewvalerio,libro}. Essentially, these theories can be characterized by
two main features: the geometry can
non-minimally couple to some scalar
field, and derivatives of the metric components of order higher than second
may appear.
In the first case, we say that we have
scalar-tensor gravity, and in the second case we have
higher-order theories. Combinations of non-minimally
coupled and higher order terms can
also emerge in effective
Lagrangians, producing mixed higher order/scalar-tensor
gravity. The physical foundation of such models can be found at fundamental level by considering effective actions coming from quantum fields in curved space-times, string/M theory and so on \cite{libro}.
A general class of higher-order-scalar-tensor theories in four dimensions is
given by the effective action
\begin{eqnarray} \label{V3.1}
{\cal S}&=&\int d^{4}x\sqrt{-g}\left[f(R,\Box R,\Box^{2}R,\dots,\Box^kR,\phi)+\omega(\phi)
\phi_{; \alpha} \phi^{;\alpha}+ \mathcal{X} \mathcal{L}_m\right],
\end{eqnarray}
where $f$ is an unspecified function of curvature invariants and scalar
field $\phi$ and $\mathcal{X}\,=\,8\pi G$\footnote{Here we use the convention
$c\,=\,1$.}. The convention for Ricci's tensor is
$R_{\mu\nu}={R^\sigma}_{\mu\sigma\nu}$, while for the Riemann
tensor is
${R^\alpha}_{\beta\mu\nu}=\Gamma^\alpha_{\beta\nu,\mu}+...$. The
affinities are the usual Christoffel symbols of the metric:
$\Gamma^\mu_{\alpha\beta}=\frac{1}{2}g^{\mu\sigma}(g_{\alpha\sigma,\beta}+g_{\beta\sigma,\alpha}
-g_{\alpha\beta,\sigma})$. The adopted signature is $(+---)$.
The term $\mathcal{L}_m$ is the minimally
coupled ordinary matter contribution, considered as a {\it perfect fluid}; $\omega(\phi)$ is a function of the scalar field which specifies the theory. Actually, its
values can be $\omega(\phi) =\pm 1,0$ fixing the nature and the
dynamics of the scalar field which can be a canonical scalar
field, a phantom field or a field without dynamics (see
\cite{valerio,odi2005,singularity} for details).
In the metric approach, the field equations are obtained by
varying (\ref{V3.1}) with respect to $g_{\mu\nu}$. By introducing the Einstein tensor $G_{\mu\nu}$ we get
\begin{eqnarray} \label{3.2cc}
\mathcal{G}\,G_{\mu\nu}\,=&&\mathcal{X}\,T_{\mu\nu}+\frac{f-{\cal G}R}{2}g_{\mu\nu}+
{\cal G}_{;\mu\nu}-g_{\mu\nu}\Box\mathcal{G}-\omega(\phi)\biggl(\phi_{;\mu}\phi_{;\nu}-
\frac{\phi_{;\alpha}\phi^{; \alpha}}{2}g_{\mu\nu}\biggr)\nonumber\\&&+\frac{1}{2}\sum_{i=1}^{k}\sum_{j=1}^{i}(g_{\mu\nu}
g^{\lambda\sigma}+g_\mu^{\,\,\,\lambda} g_\nu^{\,\,\,\sigma})(\Box^{j-1}R)_{;\sigma}
\times\left(\Box^{i-j}\frac{\partial f}{\partial \Box^{i}R}\right)_{;\lambda}-g_{\mu\nu}\left((\Box^{j-1}R)_{;\sigma}
\Box^{i-j}\frac{\partial f}{\partial \Box^{i}R}\right)^{;\sigma},
\end{eqnarray}
where we have introduced the quantity
\begin{eqnarray}
\label{3.4gg}
{\cal G}\equiv\sum_{j=0}^{n}\Box^{j}\left(\frac{\partial f}{\partial \Box^{j} R}
\right)\,,\end{eqnarray}
the energy-momentum tensor of matter
\begin{eqnarray}\label{en_ten}
T_{\mu\nu}\,=\,-\frac{1}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\,\mathcal{L}_m)}{\delta
g^{\mu\nu}}
\end{eqnarray}
and $\Box={{}_{;\sigma}}^{;\sigma}$ is the d'Alembert operator. The differential Eqs.(\ref{3.2cc}) are of order at most
$(2k+4)$. The (possible) contribution of
a self-interaction potential $V(\phi)$ is contained in the definition of $f$.
By varying with respect to the scalar field $\phi$, we obtain the generalized
Klein-Gordon equation
\begin{eqnarray}\label{3.62}
2\,\omega(\phi)\,\Box\,\phi+\omega_{\phi}(\phi)\,\phi_{;\alpha}\phi^{;\alpha}-f_{\phi}\,=\,0\,,
\end{eqnarray}
where $f_\phi\,=\,\frac{df}{d\phi}$ and
$\omega_\phi(\phi)\,=\,\frac{d\omega(\phi)}{d\phi}$. Several interesting cases can be worked out starting from the action (\ref{V3.1}). Below, we give some significant examples that will result useful for the astrophysical applications of this paper.
\subsection{The case of $f(R)$-gravity}
The simplest extension of GR is achieved by assuming
\begin{eqnarray}\label{fr}
R\rightarrow\,f(R)\,,\qquad \omega(\phi)\,=\,0\,
\end{eqnarray}
and the action (\ref{V3.1}) becomes
\begin{equation} \label{s1_fR}
{\cal S}= \int d^{4}x\sqrt{-g}\left[f(R)+\mathcal{X}\mathcal{L}_m\right]\,.
\end{equation}
Then the field equations (\ref{3.2cc}) become
\begin{eqnarray}\label{VAR12.34}
f_R\,G_{\mu\nu}\,=\,\mathcal{X}T_{\mu\nu}+\frac{f-f_R R}{2}g_{\mu\nu}+
{f_R}_{;\mu\nu} - g_{\mu\nu}\Box f_R\,\equiv\,\mathcal{X}T_{\mu\nu}+f_RT^{f(R)}_{\mu\nu}
\end{eqnarray}
where ${\displaystyle f_R\,=\,\frac{df}{dR}}$. The gravitational contribution due to higher-order terms can
be reinterpreted as a stress-energy tensor contribution $T^{f(R)}_{\mu\nu}$.
This means that additional and higher-order terms in the
gravitational action act, in principle, as a stress-energy tensor,
related to the form of $f$.
In the case of GR, $T^{f(R)}_{\mu\nu}$ identically vanishes while the
standard, minimal coupling is recovered for the matter
contribution.
The peculiar behavior of $f(R)\,=\,R$ is due to the
particular form of the Lagrangian itself which, even though it is
a second order Lagrangian, can be non-covariantly rewritten as the
sum of a first order Lagrangian plus a pure divergence term. The
Hilbert-Einstein Lagrangian can be in fact recast as follows:
\begin{eqnarray}
L_{HE}&=& {\cal L}_{HE} \sqrt{-g}=\Big[ p^{\alpha \beta}
(\Gamma^{\rho}_{\alpha \sigma} \Gamma^{\sigma}_{\rho
\beta}-\Gamma^{\rho}_{\rho \sigma} \Gamma^{\sigma}_{\alpha
\beta})+ \nabla_\sigma (p^{\alpha \beta} {u^{\sigma}}_{\alpha
\beta}) \Big]\,,\nonumber\\
\end{eqnarray}
\noindent where:
\begin{equation}
p^{\alpha \beta} =\sqrt{-g} g^{\alpha \beta} = \frac{\partial {\cal{L}}}{\partial R_{\alpha \beta}}\,,
\end{equation}
$\Gamma$ is the Levi-Civita connection of $g$ and
$u^{\sigma}_{\alpha \beta}$ is a quantity constructed from the
variation of $\Gamma$ \cite{weinberg}. Since $u^{\sigma}_{\alpha
\beta}$ is not a tensor, the above expression is not covariant;
however, standard procedures can be used to restore covariance \cite{libro}. This clearly shows that
the field equations have to be of second order and the
Hilbert-Einstein Lagrangian is thus degenerate.
\subsection{The case of scalar-tensor gravity}
From the action (\ref{V3.1}), it is possible to obtain another
interesting case by choosing
\begin{eqnarray}
f\,=\,F(\phi)R+V(\phi)\,,\qquad \omega(\phi)\,=\,1/2\,,
\end{eqnarray}
then
\begin{equation} \label{s1}
{\cal S}= \int d^{4}x\sqrt{-g}\left[F(\phi) R+V(\phi)+\frac{\phi_{;\alpha}\phi^{;\alpha}}{2}+\mathcal{X}\mathcal{L}_m\right]\,,
\end{equation}
where $V(\phi)$ and $F(\phi)$ are generic functions describing respectively the
potential and the coupling of a scalar field $\phi$. The
Brans-Dicke theory of gravity is a particular case of the action
(\ref{s1}) for $V(\phi)\,=\,0$ \cite{libro}. The variation with respect to $g_{\mu\nu}$ now gives the second-order field equations (a particular form of the field equations (\ref{3.2cc}))
\begin{equation} \label{s2}
F(\phi)\,G_{\mu\nu}\,=\,\mathcal{X}T_{\mu\nu}+\frac{V(\phi)}{2}g_{\mu\nu} +F(\phi)_{;\mu\nu}- g_{\mu\nu} \Box F(\phi)-\frac{1}{2}\biggl(\phi_{;\mu}\phi_{;\nu}-
\frac{\phi_{;\alpha}\phi^{; \alpha}}{2}g_{\mu\nu}\biggr)\,\equiv\,\mathcal{X}T_{\mu\nu}+F(\phi)T^{(\phi)}_{\mu\nu}
\end{equation}
where $T^{(\phi)}_{\mu\nu}$ is the energy-momentum
tensor relative to the scalar field $\phi$. The variation with respect
to $\phi$ provides the Klein-Gordon equation, {\it i.e.} the field
equation for the scalar field
\begin{equation} \label{s5}
\Box \phi-F_{\phi}(\phi)R-V_{\phi}(\phi)= 0\,,
\end{equation}
where $\displaystyle{F_{\phi}(\phi)= \frac{dF(\phi)}{d\phi}}$, $\displaystyle{V_{\phi}(\phi)= \frac{dV(\phi)}{d\phi}}$. This last equation
is equivalent to the Bianchi contracted identity \cite{cqg}.
\subsection{Conformal transformations}
These models, and, in general, any theory of the class (\ref{V3.1}), can be conformally reduced to the Einstein theory plus scalar fields.
Conformal transformations are mathematical tools very useful in ETGs
in order to disentangle the further gravitational degrees of freedom coming from general actions \cite{libro,MagnanoSokolowski94,FGN98,FaraoniNadeau}. The idea is to perform a conformal rescaling of the
space-time metric $g_{\mu\nu} \rightarrow \tilde{g}_{\mu\nu}$.
Often a scalar field is present in the theory and the metric
rescaling is accompanied by a (nonlinear) redefinition of this
field $\phi \rightarrow \tilde{\phi}$. New dynamical variables
$ \left\{ \tilde{g}_{\mu\nu} , \tilde{\phi} \right\}$ are thus
obtained. The scalar field redefinition serves the purpose of
casting the kinetic energy density of this field in a canonical
form. The new set of variables $\left\{\tilde{g}_{\mu\nu},
\tilde{\phi} \right\}$ is called the {\em Einstein conformal
frame}, while $\left\{ g_{\mu\nu}, \phi
\right\}$ constitute the {\em
Jordan frame}. When a scalar degree of
freedom $\phi$ is present
in the theory, as in scalar tensor
or $f(R)$ gravity, it generates the
transformation to the Einstein frame in
the sense that the
rescaling is completely determined by a function of $\phi$. In
principle, infinitely
many conformal frames could be introduced, giving rise to as many
representations of the theory.
Let the pair $\{{\cal M}, g_{\mu\nu}\}$ be a space-time, with ${\cal M}$ a smooth
manifold of dimension $n \geq 2$ and $g_{\mu\nu}$ a
(pseudo-)Riemannian metric on ${\cal M}$. The point-dependent rescaling of the
metric tensor
\begin{eqnarray} \label{cft33}
g_{\mu\nu} \longrightarrow \tilde{g}_{\mu\nu}=\Omega^2
g_{\mu\nu} \, ,
\end{eqnarray}
where the {\em conformal factor}
$\Omega$ is a nowhere
vanishing, regular function, is called a {\em Weyl} or {\em conformal}
transformation. Due to this metric rescaling, the
lengths of space-like and time-like intervals and the norms of
space-like and time-like vectors are changed, while null vectors
and null intervals of the metric $g_{\mu\nu}$ remain
null in the rescaled metric $\tilde{g}_{\mu\nu}$. The light cones
are left unchanged by the transformation (\ref{cft33})
and the space-times $\{{\cal M}, g_{\mu\nu}\}$ and $\{{\cal M},
\tilde{g}_{\mu\nu}\} $
exhibit the same causal structure; the converse is also true
\cite{Wald84}. A vector that is time-like, space-like,
or null with respect to the metric $g_{\mu\nu}$ has the same
character with respect to $\tilde{g}_{\mu\nu}$, and
{\em vice-versa}.
Conformal invariance corresponds to the absence of a characteristic length (or mass)
scale in the physics. In general, the effective potential $V(\phi)$ coming from conformal transformations
contains dimensional parameters (such as a mass $m$, that is a further "characteristic gravitational length").
This means that the further degrees of freedom coming from ETGs give rise to features that could play a fundamental role in the dynamics of astrophysical structures. In what follows, we will see that these further gravitational lengths could solve, in principle, the dark matter problem.
\subsection{Conformal transformations and higher-order gravity}
Performing the conformal transformation for $f(R)$-gravity with $\Omega^2\,=\,f_R$ we have
\begin{eqnarray} \label{h7}
\int d^4x\sqrt{-g}[f(R)+\mathcal{X}\mathcal{L}_m]\,=\,\int d^4x\sqrt{-\tilde{g}} \left(\tilde{R}+
W(\tilde{\phi})-\frac{\tilde{\phi}_{;\alpha}\tilde{\phi}^{;\alpha}}{2}+\mathcal{X}\tilde{\mathcal{L}}_m\right)
\end{eqnarray}
where $\tilde{\phi}\,=\,\sqrt{3}\,\ln f_R$, while the potential $W$ and the non-minimally coupled Lagrangian of ordinary matter $\tilde{\mathcal{L}}_m$ are given by
\begin{eqnarray}\label{transconfTS}
&&W(\tilde{\phi})\,=\,e^{-2\,\tilde{\phi}/\sqrt{3}}\,V(e^{\tilde{\phi}/\sqrt{3}})\nonumber\\
&&\tilde{\mathcal{L}}_m\,=\,e^{-2\,\tilde{\phi}/\sqrt{3}}\,\mathcal{L}_m\biggl(e^{-\tilde{\phi}/\sqrt{3}}\tilde{g}_{\rho\sigma}\biggr)
\end{eqnarray}
The function $V$ is defined by the analogy between $f(R)$-gravity and scalar-tensor gravity (the so-called O'Hanlon Lagrangian)
\begin{eqnarray}\label{h4_a}
V(\phi)\,=\,f(R)-R\,f_R(R)
\end{eqnarray}
where $\phi\,=\,f_R$. The field equations in the Einstein frame are given in standard form as follows
\begin{eqnarray}\label{h4}
\tilde{G}_{\mu\nu}\,=\,\mathcal{X}\,\tilde{T}_{\mu\nu}+\frac{W(\tilde{\phi})}{2}\,\tilde{g}_{\mu\nu}+\frac{1}{2}\biggl(\tilde{\phi}_{;\mu}\tilde{\phi}_{;\nu}-
\frac{\tilde{\phi}_{;\alpha}\tilde{\phi}^{;\alpha}}{2}\tilde{g}_{\mu\nu}\biggr)
\end{eqnarray}
\begin{eqnarray}\label{h4_sf}
\tilde{\Box}\,\tilde{\phi}+W_{\tilde{\phi}}(\tilde{\phi})\,=\,
-\mathcal{X}\,\frac{\delta\tilde{\mathcal{L}}_m}{\delta\tilde{\phi}}
\end{eqnarray}
However, the problem is completely solved if $\phi\,=\,f_R$ can be analytically inverted. In summary, a fourth-order theory is conformally equivalent to the standard
second-order Einstein theory plus a scalar field (see also
\cite{francaviglia,ordsup}).
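As a concrete illustration of this equivalence, consider the quadratic model $f(R)\,=\,R+aR^{2}$ (a choice made here purely for illustration). Then $\phi\,=\,f_R\,=\,1+2aR$ can be inverted analytically, and (\ref{h4_a}) gives
\begin{eqnarray*}
R\,=\,\frac{\phi-1}{2a}\,,\qquad V(\phi)\,=\,f(R)-R\,f_{R}(R)\,=\,-aR^{2}\,=\,-\frac{(\phi-1)^{2}}{4a}\,,
\end{eqnarray*}
so the extra degree of freedom is a massive scalar field, and its mass fixes one of the characteristic gravitational lengths discussed above.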
If the theory is higher than fourth order, we have Lagrangian
densities of the form \cite{buchdahl,gottloeber,sixth},
\begin{eqnarray}\label{h10}
\mathcal{L}\,=\,\mathcal{L}(R,\,\Box R,\,\dots,\,\Box^{k} R)\,.
\end{eqnarray}
Every $\Box$ operator introduces two further terms of derivation
into the field equations. For example a theory like
\begin{eqnarray}\label{h11}
\mathcal{L}\,=\,R\,\Box R\,,
\end{eqnarray}
is a sixth-order theory, and the above approach can be pursued considering a conformal
factor of the form
\begin{eqnarray}\label{h12}
\Omega^2\,=\,\frac{\partial\mathcal{L}}{\partial R} +\Box\frac{\partial\mathcal{L}}{\partial \Box R}\,.
\end{eqnarray}
In general, increasing the order of derivation in the field
equations by two (\emph{i.e.} every additional term $\Box R$) corresponds to adding a scalar
field in the conformally transformed frame \cite{gottloeber}. A
sixth-order theory can be reduced to an Einstein theory with two
minimally coupled scalar fields; a $2n$-order theory can be, in
principle, reduced to an Einstein theory + $(n-1)$-scalar fields.
On the other hand, these considerations can be directly
generalized to higher-order-scalar-tensor theories in any number
of dimensions as shown in \cite{libro}.
With these considerations in mind, we can state that a higher-order theory like $f(R,\Box R)$ is dynamically equivalent to $f(R,\phi)$.
This feature, as we will show, provides the minimal ingredients needed to reproduce the rotation curves of galaxies, since two Yukawa-like corrections come out. For a detailed derivation see \cite{Qua91}.
\section{Yukawa-like corrections to the gravitational potential}
\label{tre}
In order to deal with standard self-gravitating systems, any theory of gravity has to be developed to its Newtonian or post-Newtonian limit, depending on the order of approximation of the theory in terms of powers of the velocity $v^2$ \cite{cqg,stabile}. The paradigm of the Newtonian limit starts from the expansion of the
metric tensor (and of all the additional quantities in the theory) with respect to the dimensionless velocity\footnote{The velocity $v$ is expressed in units of the speed of light.} $v$ of the moving massive bodies embedded in the gravitational field. The perturbative expansion retains only the first term of the $0,0$- and $i,j$-components of the metric tensor $g_{\mu\nu}$ (for details, see \cite{PRD1,PRD1_2}). The metric then assumes the following form
\begin{eqnarray}\label{me0}
{ds}^2\,=\,(1+2\Phi)\,dt^2-(1-2\Psi)\,\delta_{ij}dx^idx^j
\end{eqnarray}
where the gravitational potentials $\Phi,\, \Psi\,<\,1$ are proportional to $v^2$. The adopted set of coordinates\footnote{The Greek indices run from $0$ to $3$; the Latin indices run from $1$ to $3$.}, the so-called {\it isotropic coordinates}, is $x^\mu\,=\,(t,\textbf{x})\,=\,(t,x^1,x^2,x^3)$. The Ricci scalar is approximated as $R\,=\,R^{(1)}\,+\,R^{(2)}\,+\,\dots$, where $R^{(1)}$ is proportional to $\Phi$ and $\Psi$, while $R^{(2)}$ is proportional to $\Phi^2$, $\Psi^2$ and $\Phi\Psi$.
Here we show how a general gravitational potential, with a Yukawa correction, can be
obtained in the Newtonian limit of any analytic $f(R)$-gravity
model. From a phenomenological point of view, this correction
allows one to consider this kind of model as viable even at small
distances, provided that the Yukawa correction turns out to be
irrelevant in this approximation, as in the so-called ``chameleon
mechanism'' \cite{chameleon}.
\subsection{Yukawa-like corrections in $f(R)$-gravity}
Starting from the action (\ref{V3.1}) for the case $f(R,\Box R,\Box^{2}R,\dots,\Box^kR,\phi)+\omega(\phi)
\phi_{; \alpha} \phi^{;\alpha}$ reduced to $f(R)$, the field equations are (\ref{VAR12.34}).
In principle, the following analysis can be developed for any ETGs. Let us now start with the $f(R)$ case.
As discussed in \cite{PRnostro,arturosferi,arturonoether}, we can deal with the
Newtonian limit of $f(R)$-gravity
by adopting spherical symmetry. By introducing the radial coordinate $r\,=\,|\textbf{x}|$, the metric (\ref{me0}) can be recast as follows
\begin{eqnarray}\label{me}
ds^2\,=\,[1+g^{(1)}_{tt}(t,r)]\,dt^2-[1-g^{(1)}_{rr}(t,r)]\,dr^2-r^2\,d\Omega\,,
\end{eqnarray}
where $d\Omega\,=\,d\theta^2+\sin^2\theta\,d\phi^2$ is the angular element and $(t,r,\theta,\phi)$ are standard coordinates. Since we want to obtain the most general result, we do not fix any specific form of $f(R)$. We assume, however, $f(R)$ functions that are analytic, and hence Taylor expandable, with respect to the value $R\,=\,0$ (Minkowskian background):
\begin{eqnarray}\label{sertay}
f(R)\,=\,\sum_{n\,=\,0}^{\infty}\frac{f^{(n)}(0)}{n!}\,R^n\,=\,
f_0+f_1R+\frac{f_2}{2}R^2+\frac{f_3}{6}R^3+...
\end{eqnarray}
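The coefficients of the expansion (\ref{sertay}) can be extracted mechanically for any analytic model. The minimal sketch below is our own illustration (the model $f(R)\,=\,R+aR^2$ with $a\,<\,0$ is hypothetical): it recovers $f_0$, $f_1$, $f_2$ by central differences and forms the combination $-f_1/(3f_2)$, which will play the role of an effective mass squared.

```python
# Sketch: numerically extracting the Taylor coefficients f_0, f_1, f_2 of an
# analytic f(R) around R = 0 by central differences. The model choice
# f(R) = R + a R^2 (with a < 0, so that f_1 and f_2 have opposite signs)
# is purely illustrative, not one of the paper's models.
a = -0.5
f = lambda R: R + a * R**2

def taylor_coeffs(f, h=1e-4):
    f0 = f(0.0)
    f1 = (f(h) - f(-h)) / (2.0 * h)              # f'(0)
    f2 = (f(h) - 2.0 * f(0.0) + f(-h)) / h**2    # f''(0)
    return f0, f1, f2

f0, f1, f2 = taylor_coeffs(f)
m2 = -f1 / (3.0 * f2)    # effective mass squared of the Yukawa mode
print(f0, f1, f2, m2)    # expect approximately 0, 1, 2a, -1/(6a)
```

For this toy choice $f_1\,=\,1$ and $f_2\,=\,2a\,=\,-1$, so $m^2\,=\,1/3\,>\,0$ and the Yukawa (rather than oscillating) branch applies.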
In order to obtain the weak field approximation, one has to
insert expansions (\ref{me}) and (\ref{sertay}) into
field Eqs. (\ref{VAR12.34}) and expand the system up
to the orders ${\mathcal O}(0)$ and ${\mathcal O}(1)$. This approach provides
general results and specific (analytic) theories are selected by
the coefficients $f_i$ in Eq.(\ref{sertay}). It is worth noticing
that, at the order ${\mathcal O}(0)$, the field equations give the condition
$f_0 =0$ and then the solutions at further orders do not depend on
this parameter, as we will show below. If we now consider the
${\mathcal O}(1)$-order approximation, the field equations in
vacuum ($T_{\mu\nu}\,=\,0$) turn out to be
\begin{eqnarray}\label{eq2}
&&f_1rR^{(1)}-2f_1g^{(1)}_{tt,r}+4f_2R^{(1)}_{,r}-f_1rg^{(1)}_{tt,rr}+2f_2rR^{(1)}=0\,,\nonumber
\\\nonumber\\
&&f_1rR^{(1)}-2f_1g^{(1)}_{rr,r}+4f_2R^{(1)}_{,r}-f_1rg^{(1)}_{tt,rr}=0\,,\nonumber
\\\nonumber\\
&&2f_1g^{(1)}_{rr}-r\left[f_1rR^{(1)}-f_1g^{(1)}_{tt,r}-f_1g^{(1)}_{rr,r}+2f_2R^{(1)}_{,r}+2f_2rR^{(1)}_{,rr}\right]=0\,,
\\\nonumber\\
&&f_1rR^{(1)}+3f_2\left[2R^{(1)}_{,r}+rR^{(1)}_{,rr}\right]=0\,,\nonumber
\\\nonumber\\
&&2g^{(1)}_{rr}+r\left[2g^{(1)}_{tt,r}-rR^{(1)}+2g^{(1)}_{rr,r}+rg^{(1)}_{tt,rr}\right]=0\,.\nonumber
\end{eqnarray}
It is evident that the trace equation (the fourth in the system
(\ref{eq2})) provides a differential equation for the
Ricci scalar, which allows one to solve the system (\ref{eq2}) exactly
at ${\mathcal O}(1)$-order. Finally, one gets the general solution\,:
\begin{eqnarray}\label{sol}
&&g^{(1)}_{tt}\,=\,\delta_0-\frac{Y}{f_1r}+\frac{\delta_1(t)e^{-mr}}{3m^2
r}+\frac{\delta_2(t)e^{mr}}{6m^3r}\nonumber
\\\nonumber\\
&&g^{(1)}_{rr}\,=\,-\frac{Y}{f_1r}-\frac{\delta_1(t)[1+mr]e^{-mr}}{3m^2
r}-\frac{\delta_2(t)[1-mr]e^{mr}}{6m^3r}
\\\nonumber\\
&&R^{(1)}\,=\,\frac{\delta_1(t)\,e^{-mr}}{r}+\frac{\delta_2(t)e^{mr}}{2m
r}\nonumber
\end{eqnarray}
where $m^2\doteq-\frac{f_1}{3f_2}$, $\delta_0$ and $Y$ are arbitrary constants, while $\delta_1(t)$ and $\delta_2(t)$
are arbitrary time functions. In the limit $f(R)\rightarrow R$, one has $m\rightarrow\infty$ and $f_1\,\rightarrow\,1$ and, in the case
of a point-like source of mass $M$, we recover the standard
Schwarzschild solution by setting $\delta_0\,=\,0$ and $Y\,=\,2GM$. Let us notice that the
integration constant $\delta_0$ is dimensionless, while the two functions $\delta_1(t)$ and $\delta_2(t)$ have
the dimensions of $length^{-1}$ and $length^{-2}$, respectively. These functions are completely arbitrary, since the
differential system (\ref{eq2}) contains only spatial
derivatives, and can be fixed to constant values. Besides, the condition $\delta_0\,=\,0$ is allowed,
since $\delta_0$ represents an inessential additive constant in the potential.
The solutions (\ref{sol}) are valid if $m^2\,>\,0$, \emph{i.e.} if $f_1$ and $f_2$ in Eq. (\ref{sertay}) have opposite signs. If the signs are equal, we find an oscillating solution in which the correction to the Newtonian component ($\propto\,1/r$) is proportional to $(\cos m r+\sin mr)/r$ \cite{PRD1}. In this paper we consider only the Yukawa-like correction.
It is now possible to write the general solution of the problem by considering
the previous expressions (\ref{sol}). In order to match the Minkowskian
prescription for the metric at infinity, one can discard the
growing Yukawa mode in (\ref{sol}), thus obtaining\,:
\begin{eqnarray}\label{mesol}
&&ds^2\,=\,\biggl[1-\frac{2GM}{f_1r}+\frac{\delta_1(t)e^{-mr}}{3m^2
r}\biggr]dt^2- \biggl[1+\frac{2GM}{f_1r}+\frac{\delta_1(t)[1+mr]e^{-mr}}{3m^2
r}\biggr]dr^2-r^2d\Omega\,,\nonumber\\\\
&&R\,=\,\frac{\delta_1(t)e^{-mr}}{r}\,.\nonumber
\end{eqnarray}
At this point, one can write the solution in terms of the
gravitational potentials. The first of (\ref{sol}) gives the first-order solution in terms of the metric expansion (see the
definition (\ref{me})). This term coincides with the
gravitational potential at the Newtonian order. In particular, since
$g_{tt}\,=\,1+2\Phi_{grav}\,=\,1+g_{tt}^{(1)}$, the
gravitational potential of an $f(R)$-gravity model, analytic in the Ricci
scalar $R$, is
\begin{eqnarray}\label{gravpot}
\Phi_{grav}\,=\,-\left(\frac{GM}{f_1r}-\frac{\delta_1(t)e^{-mr}}{6\,m^2
r}\right)\,.
\end{eqnarray}
This general result means that the standard Newtonian potential is
recovered only in the particular case $f(R)=R$, while it is not so
for generic analytic $f(R)$ models, up to exceptions of measure zero. Specifically, all models with $f_1/f_2\,>\,0$ are excluded by construction. Eq. (\ref{gravpot}) deserves
some comments. The parameters $f_1$, $m$ and the function
$\delta_1(t)$ represent the deviations with respect to the standard
Newtonian potential. To test these theories of gravity inside the
Solar System, we need to compare such quantities with the
current experimental bounds or, in other words, Solar System
constraints should be evaded by suitably fixing such parameters \cite{chameleon}. On the other
hand, these parameters could acquire non-trivial values (e.g.
$f_1\neq 1,\,\delta_1(t)\neq 0,\,m\,<\,\infty$) at scales different
from the Solar System ones.
Since the parameter $m$ has the dimensions of an inverse length, it can be related to an effective scale length $L\,=\,m^{-1}$, and
Eq. (\ref{gravpot}) can be recast as
\begin{eqnarray}\label{gravpot1}
\Phi_{grav}\,=\,-\left(\frac{G M}{1+\delta}\right)\frac{1+\delta\,e^{-r/L}}{r}\,,
\end{eqnarray}
where the first term is the Newtonian-like part of the potential
for a point-like mass ${\displaystyle \frac{M}{1+\delta}}$ and the second
term is a modification of gravity including a new scale
length, $L$ associated to the coefficients of the Taylor expansion.
If $\delta\,=\,0$ the Newtonian potential and the standard gravitational coupling are recovered.
Comparing Eqs. (\ref{gravpot}) and (\ref{gravpot1}), we assumed
$f_1\,=\,1+\delta$ and ${\displaystyle \delta_1(t)\,=\,-\frac{6\,GM}{L^2}\left(\frac{\delta}{1+\delta}\right)}$
where $\delta$ can be chosen quasi-constant.
Under this assumption, the scale length $L$ could naturally arise and reproduce several phenomena ranging from the Solar System to large-scale structure
\cite{annalen}. Understanding
on which scales the modifications to GR are at work, and what is the weight of the corrections
to the Newtonian potential, is a crucial point that could confirm or rule out these extended approaches to the gravitational interaction.
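To make the roles of $\delta$ and $L$ concrete, the following minimal sketch (our own illustration; units and parameter values are arbitrary, not fits) evaluates the potential (\ref{gravpot1}) and checks its two limits: the Newtonian one for $\delta\,=\,0$ and the rescaled coupling at $r\,\gg\,L$.

```python
import math

# Sketch of the Yukawa-corrected potential
#   Phi = -(G M / (1 + delta)) * (1 + delta * exp(-r/L)) / r
# All numbers below are illustrative (G = 1 in arbitrary units).
G = 1.0

def phi_grav(r, M, delta, L):
    """Newtonian + Yukawa potential with coupling delta and scale length L."""
    return -(G * M / (1.0 + delta)) * (1.0 + delta * math.exp(-r / L)) / r

# delta = 0 recovers the Newtonian potential -GM/r exactly:
r, M = 2.0, 1.0
print(phi_grav(r, M, 0.0, 10.0))          # -0.5, i.e. -GM/r

# at r >> L the Yukawa term dies off; only the rescaled coupling survives:
print(phi_grav(1.0e4, M, 0.5, 10.0) * 1.0e4)   # approx -GM/(1+delta) = -2/3
```

The second limit shows explicitly why, at large distances, only an effectively rescaled gravitational coupling distinguishes the model from the Newtonian case.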
\subsection{Yukawa-like corrections in $f(R,\phi)$-gravity}
A further step is to analyze the Newtonian limit starting from the action (\ref{V3.1}) and considering a generic function of Ricci scalar and scalar field. Then the action becomes
\begin{eqnarray}\label{HOGaction}
\mathcal{A}=\int d^{4}x\sqrt{-g}\biggl[f(R,\phi)+\omega(\phi)\,\phi_{;\alpha}\,\phi^{;\alpha}+\mathcal{X}\mathcal{L}_m\biggr]
\end{eqnarray}
The field equations are obtained from (\ref{3.2cc}) by setting $f(R,\Box R,\Box^{2}R,\dots,\Box^kR,\phi)\,\rightarrow\,f(R,\phi)$. As discussed in Sec. II, this case can be considered from a purely geometric point of view assuming $f(R,\Box R)$ theories where the terms $\Box R$ gives a further scalar field contribution \cite{Qua91}. We get
\begin{eqnarray}
\label{fieldequationHOG}
&&f_RR_{\mu\nu}-\frac{f+\omega(\phi)\,\phi_{;\alpha}\,\phi^{;\alpha}}{2}\,g_{\mu\nu}+\omega(\phi)\,\phi_{;\mu}\,\phi_{;\nu}-f_{R;\mu\nu}+g_{\mu\nu}\Box\,
f_R\,=\,\mathcal{X}\,T_{\mu\nu}\nonumber\\\\
&&2\,\omega(\phi)\,\Box\,\phi+\omega_{\phi}(\phi)\,\phi_{;\alpha}\phi^{;\alpha}-f_{\phi}\,=\,0\nonumber
\end{eqnarray}
A further equation is the trace of the field equations with respect to the metric tensor $g_{\mu\nu}$
\begin{eqnarray}\label{trace}
f_R\,R-2f-\omega(\phi)\,\phi_{;\alpha}\phi^{;\alpha}+3\,\Box\,f_R\,=\,\mathcal{X}\,T
\end{eqnarray}
where $T\,=\,T^{\sigma}_{\,\,\,\,\,\sigma}$ is the trace of the energy-momentum tensor.
Let us consider a point-like source with mass $M$. The energy-momentum tensor is
\begin{eqnarray}\label{emtensor}
T_{\mu\nu}\,=\,\rho\,u_\mu
u_\nu\,,\,\,\,\,\,\,\,\,\,\,T\,=\,\rho
\end{eqnarray}
where $\rho$ is the mass density and $u_\mu$ satisfies the conditions $g^{00}{u_0}^2\,=\,1$ and $u_i\,=\,0$. Here, we are not interested in the internal structure of the source.
It is possible to analyze the problem in the more general case by using the isotropic coordinates $(t,x^1,x^2,x^3)$; the metric is then expressed as in Eq. (\ref{me0}).
In this framework, also the scalar field $\phi$ is expanded in analogy with the Ricci scalar. In particular, we set $\phi\,=\,\phi^{(0)}\,+\,\phi^{(1)}\,+\,\phi^{(2)}\,+\dots$, and the function $f(R,\phi)$, with its partial derivatives ($f_R$, $f_{RR}$, $f_{\phi}$, $f_{\phi\phi}$ and $f_{\phi R}$) and $\omega(\phi)$, can be replaced by their corresponding Taylor series. In the case of $f(R,\phi)$, we have
\begin{eqnarray}
f(R,\phi)\,\sim\,f(0,\phi^{(0)})+f_R(0,\phi^{(0)})\,R^{(1)}+f_\phi(0,\phi^{(0)})\,\phi^{(1)}+\dots
\end{eqnarray}
and analogous relations for the derivatives are obtained. From the lowest order of field Eqs. (\ref{fieldequationHOG}) we have
\begin{eqnarray}\label{PPN-field-equation-general-theory-fR-O0}
f(0,\,\phi^{(0)})\,=\,0\,,\,\,\,\,\,\,\,\,\,\,f_{\phi}(0,\phi^{(0)})\,=\,0
\end{eqnarray}
and also in this modified fourth-order gravity a missing cosmological component in the action (\ref{HOGaction}) implies that the space-time is asymptotically Minkowskian (the same outcome as in the previous section); moreover, the background value of the scalar field $\phi$ must be a stationary point of the potential. In the Newtonian limit, we have
\begin{eqnarray}
\label{NL-fieldequationHOG}
&&\triangle\biggl[\Phi-\frac{f_{RR}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,R^{(1)}-\frac{f_{R\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\phi^{(1)}\biggr]-\frac{R^{(1)}}{2}\,=\,\frac{\mathcal{X}\,\rho}{f_R(0,\phi^{(0)})}\nonumber\\\nonumber\\
&&\biggl\{\triangle\biggl[\Psi+\frac{f_{RR}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,R^{(1)}+\frac{f_{R\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\phi^{(1)}\biggr]+\frac{R^{(1)}}{2}\biggr\}\delta_{ij}+\nonumber\\\nonumber\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\biggr\{\Psi-\Phi-\frac{f_{RR}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}R^{(1)}-\frac{f_{R\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\phi^{(1)}\biggr\}_{,ij}\,=\,0\\\nonumber\\
&&\triangle\phi^{(1)}+\frac{f_{\phi\phi}(0,\phi^{(0)})}{2\,\omega(\phi^{(0)})}\,\phi^{(1)}\,=\,-\frac{f_{R\phi}(0,\phi^{(0)})}{2\,\omega(\phi^{(0)})}\,R^{(1)}\nonumber\\\nonumber\\
&&\triangle R^{(1)}+\frac{f_R(0,\phi^{(0)})\,R^{(1)}}{3\,f_{RR}(0,\phi^{(0)})}\,=\,-\frac{\mathcal{X}\,\rho}{3\,f_{RR}(0,\phi^{(0)})}-\frac{f_{R,\phi}(0,\phi^{(0)})}{f_{RR}(0,\phi^{(0)})}\,\triangle\phi^{(1)}\nonumber
\end{eqnarray}
where $\triangle$ is the Laplacian in flat space. The last equation in (\ref{NL-fieldequationHOG}) is the trace coming from Eq. (\ref{trace}). These equations are not simply the merging of the field equations of $f(R)$-gravity with those of a further massive scalar field: the model $f(R,\phi)$ generates a system of equations coupled in the Ricci scalar $R$ and the scalar field $\phi$. By supposing that $f_{\phi\phi}\,\neq\,0$ and, obviously, $f_{RR}\,\neq\,0$, we can introduce the two characteristic length scales
\begin{eqnarray}\label{mass_definition}
{m_R}^2\,\doteq\,-\frac{f_R(0,\phi^{(0)})}{3f_{RR}(0,\phi^{(0)})},\,\,\,\,\,\,\,\,
{m_\phi}^2\,\doteq\,-\frac{f_{\phi\phi}(0,\phi^{(0)})}{2\,\omega(\phi^{(0)})}
\end{eqnarray}
where the two masses are assumed to be real; this gives further restrictions on the set of viable models.
The gravitational potentials $\Phi$ and $\Psi$ are given by\footnote{The potential $\Psi$ can be found also as $\Psi(\textbf{x})\,=\,\frac{1}{8\pi}\int
d^3\textbf{x}'\frac{R^{(1)}(\textbf{x}')}{|\textbf{x}-
\textbf{x}'|}+\frac{R^{(1)}(\textbf{x})}{3{m_R}^2}-\frac{f_{R,\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\phi^{(1)}(\textbf{x})$.}
\begin{eqnarray}\label{new_sol}
\Phi(\mathbf{x})\,&=&\,-\frac{\mathcal{X}}{4\pi\,f_R(0,\phi^{(0)})}\int
d^3\textbf{x}'\frac{\rho(\textbf{x}')}{|\textbf{x}-
\textbf{x}'|}-\frac{1}{8\pi}\int
d^3\textbf{x}'\frac{R^{(1)}(\textbf{x}')}{|\textbf{x}-
\textbf{x}'|}-\frac{R^{(1)}(\textbf{x})}{3{m_R}^2}+\frac{f_{R,\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\phi^{(1)}(\textbf{x})\nonumber\\\\
\Psi(\mathbf{x})\,&=&\,\Phi(\textbf{x})-\frac{R^{(1)}(\textbf{x})}{3{m_R}^2}+\frac{f_{R,\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\phi^{(1)}(\textbf{x})\nonumber\end{eqnarray}
while, for the Ricci scalar and the scalar field, we have the coupled system of equations
\begin{eqnarray}
\label{coupledsyst}
&&\biggl[\triangle-{m_\phi}^2\biggr]\phi^{(1)}\,=\,-\frac{f_{R\phi}(0,\phi^{(0)})}{2\,\omega(\phi^{(0)})}\,R^{(1)}\nonumber\\\\
&&\biggl[\triangle-{m_R}^2\biggr]R^{(1)}\,=\,\frac{{m_R}^2\,\mathcal{X}\,\rho}{f_R(0,\phi^{(0)})}+\frac{3\,{m_R}^2\,f_{R\phi}(0,\phi^{(0)})}{f_R(0,\phi^{(0)})}\,\triangle\phi^{(1)}\nonumber
\end{eqnarray}
The definition of ${m_R}^2$ generalizes that of $m^2$ in the case of pure $f(R)$-gravity. By using the Fourier transform, the system (\ref{coupledsyst}) has the following solutions
\begin{eqnarray}
\label{coupledsyst_sol}
&&\phi^{(1)}(\textbf{x})\,=\,-\frac{{m_R}^2\,f_{R\phi}(0,\phi^{(0)})\,\mathcal{X}}{2\,\omega(\phi^{(0)})\,f_R(0,\phi^{(0)})}\int\frac{d^3\textbf{k}}{(2\pi)^{3/2}}\frac{\tilde{\rho}(\textbf{k})\,e^{i\textbf{k}\cdot\textbf{x}}}{(\textbf{k}^2+{k_1}^2)(\textbf{k}^2+{k_2}^2)}\nonumber\\\\
&&R^{(1)}(\textbf{x})\,=\,-\frac{{m_R}^2\,\mathcal{X}}{f_R(0,\phi^{(0)})}\int\frac{d^3\textbf{k}}{(2\pi)^{3/2}}\frac{\tilde{\rho}(\textbf{k})\,(\textbf{k}^2+{m_\phi}^2)\,e^{i\textbf{k}\cdot\textbf{x}}}{(\textbf{k}^2+{k_1}^2)(\textbf{k}^2+{k_2}^2)}\nonumber
\end{eqnarray}
where
\begin{eqnarray}
2\,{k_{1,2}}^2\,=\,{m_R}^2+{m_\phi}^2-\frac{3{f_{R\phi}(0,\phi^{(0)})}^2{m_R}^2}{2\,\omega(\phi^{(0)})\,f_R(0,\phi^{(0)})}\pm\sqrt{\biggl[{m_R}^2+{m_\phi}^2-\frac{3{f_{R\phi}(0,\phi^{(0)})\,}^2{m_R}^2}{2\,\omega(\phi^{(0)})f_R(0,\phi^{(0)})}\biggr]^2-4{m_R}^2{m_\phi}^2}
\end{eqnarray}
In order to understand the relevant physical consequences of the solutions (\ref{new_sol}), it is sufficient to analyze the point-like source case. Then, if we consider $\tilde{\rho}(\textbf{k})\,=\,M/(2\pi)^{3/2}$, where $M$ is the mass, and ${k_{1,2}}^2\,>\,0$, Eqs. (\ref{coupledsyst_sol}) become
\begin{eqnarray}
\label{coupledsyst_sol_point}
&&\phi^{(1)}(\textbf{x})\,=\,
\frac{f_{R\phi}(0,\phi^{(0)})}{2\,\omega(\phi^{(0)})\,f_R(0,\phi^{(0)})}\frac{r_g}{|\textbf{x}|}\frac{e^{-m_R\tilde{k}_1\,|\textbf{x}|}-e^{-m_R\tilde{k}_2\,|\textbf{x}|}}{{\tilde{k}_1}^2-{\tilde{k}_2}^2}\nonumber\\\\
&&R^{(1)}(\textbf{x})\,=\,
-\frac{{m_R}^2}{f_R(0,\phi^{(0)})}\frac{r_g}{|\textbf{x}|}\frac{({\tilde{k}_1}^2-\eta^2)\,e^{-m_R\tilde{k}_1\,|\textbf{x}|}-({\tilde{k}_2}^2-\eta^2)\,e^{-m_R\tilde{k}_2\,|\textbf{x}|}}{{\tilde{k}_1}^2-{\tilde{k}_2}^2}\nonumber
\end{eqnarray}
where $r_g$ is the Schwarzschild radius. Furthermore, we introduced the dimensionless quantities
\begin{eqnarray}
2\,{\tilde{k}_{1,2}}^2\,=\,\frac{2\,{k_{1,2}}^2}{{m_R}^2}\,=\,1-\xi+\eta^2\pm\sqrt{(1-\xi+\eta^2)^2-4\eta^2}
\end{eqnarray}
and ${\displaystyle \eta\,=\,\frac{m_\phi}{m_R}}$, ${\displaystyle \xi\,=\,\frac{3{f_{R\phi}(0,\phi^{(0)})}^2}{2\,\omega(\phi^{(0)})\,f_R(0,\phi^{(0)})}}$. These two parameters have to ensure that the roots ${k_{1,2}}^2$ are both real and positive, \emph{i.e.} ${k_{1,2}}^2\,>\,0$. Such conditions can be reformulated as $\xi\,\leq\,(\eta-1)^2$, and this fact restricts the class of viable Lagrangians. In fact we have
\begin{eqnarray}
\xi\,=\,\frac{({k_1}^2-{m_R}^2)({k_2}^2-{m_R}^2)}{{m_R}^4}\,,\,\,\,\,\,\,\,\,\,\,
\eta^2\,=\,\frac{{k_1}^2{k_2}^2}{{m_R}^4}
\end{eqnarray}
where $\xi$ and $\eta$ are given in terms of ${k_{1,2}}^2$ and $m_R$, which are the parameters defining the form of the Yukawa-like terms in the potentials. Specifically, one obtains the conditions
\begin{eqnarray}
&&f_{RR}(0,\phi^{(0)})\,=\,-\frac{f_R(0,\phi^{(0)})}{3\,{m_R}^2}\,,\,\,\,\,\,\,\,\,\,\,f_{\phi\phi}(0,\phi^{(0)})\,=\,-2\,\omega(\phi^{(0)})\,\frac{{k_1}^2{k_2}^2}{{m_R}^2}\,,\nonumber\\\\
&&f_{R\phi}(0,\phi^{(0)})\,=\,\sqrt{\frac{2}{3}\omega({\phi^{(0)}})f_R(0,\phi^{(0)})\biggl[\frac{({k_1}^2-{m_R}^2)({k_2}^2-{m_R}^2)}{{m_R}^4}\biggr]}\nonumber
\end{eqnarray}
which, together with the conditions (\ref{PPN-field-equation-general-theory-fR-O0}), fix the form of the possible Lagrangians. It is worth noticing that $f_R(0,\phi^{(0)})$ can be assumed equal to $1$ in standard units, while $\omega(\phi^{(0)})$ fixes the normalization of the scalar-field kinetic term, which is equal to $1/2$ in the canonical case. For $\omega(\phi^{(0)})\,<\,0$ a ghost scalar field is possible.
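The viability conditions above can be checked with a few lines of illustrative code (our own sketch, not the paper's numerics): given $\xi$ and $\eta$, the dimensionless roots ${\tilde{k}_{1,2}}^2$ follow from the quadratic formula, and their sum and product equal $1-\xi+\eta^2$ and $\eta^2$, respectively.

```python
import math

# Dimensionless roots: 2 k~_{1,2}^2 = 1 - xi + eta^2 +/- sqrt((1 - xi + eta^2)^2 - 4 eta^2)
def ktilde_sq(xi, eta):
    s = 1.0 - xi + eta**2
    disc = s**2 - 4.0 * eta**2
    if disc < 0.0:
        raise ValueError("complex roots: parameters outside the viable region")
    root = math.sqrt(disc)
    return (s + root) / 2.0, (s - root) / 2.0

# Reality of both (positive) roots is equivalent to xi <= (eta - 1)^2.
# eta = 0.1, xi = -2 are the values adopted in the figures below.
eta, xi = 0.1, -2.0
assert xi <= (eta - 1.0)**2
k1sq, k2sq = ktilde_sq(xi, eta)
print(k1sq > 0.0 and k2sq > 0.0)   # True: both Yukawa modes are real
```

The root invariants (sum $=\,1-\xi+\eta^2$, product $=\,\eta^2$) provide a quick sanity check on any chosen parameter pair.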
The potentials (\ref{new_sol}) become
\begin{eqnarray}\label{new_sol_point}
\Phi(\mathbf{x})\,&=&\,-\frac{GM}{f_R(0,\phi^{(0)})|\textbf{x}|}\biggl\{1+g(\xi,\eta)\,e^{-m_R\tilde{k}_1\,|\textbf{x}|}+[1/3-g(\xi,\eta)]\,e^{-m_R\tilde{k}_2\,|\textbf{x}|}\biggr\}\nonumber\\\\
\Psi(\mathbf{x})\,&=&\,-\frac{GM}{f_R(0,\phi^{(0)})|\textbf{x}|}\biggl\{1-g(\xi,\eta)\,e^{-m_R\tilde{k}_1\,|\textbf{x}|}-[1/3-g(\xi,\eta)]\,e^{-m_R\tilde{k}_2\,|\textbf{x}|}\biggr\}\nonumber
\end{eqnarray}
where ${\displaystyle g(\xi,\eta)\,=\,\frac{{\tilde{k}_1}^2(2\eta^2-2{\tilde{k}_1}^2-2\xi+3)-3\eta^2}{3{\tilde{k}_1}^2({\tilde{k}_1}^2-{\tilde{k}_2}^2)}}$. In Figs. (\ref{plotscalar}), (\ref{plotricciscalar}), (\ref{plotpotential}) and (\ref{plotpotential_2}) we show respectively the spatial behaviors of the scalar field, the Ricci scalar and the potentials $\Phi$, $\Psi$.
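The behavior of the coefficient $g(\xi,\eta)$ can also be verified numerically. The sketch below (illustrative code of ours) evaluates $g$ through the dimensionless roots ${\tilde{k}_{1,2}}^2$: in the minimally coupled limit $\xi\,=\,0$ one finds $g\,=\,1/3$ exactly, while for the figures' values $\eta\,=\,0.1$, $\xi\,=\,-2$ both Yukawa coefficients, $g$ and $1/3-g$, are positive.

```python
import math

# Sketch of the coefficient g(xi, eta) entering the point-like potentials,
#   g = [k1(2 eta^2 - 2 k1 - 2 xi + 3) - 3 eta^2] / [3 k1 (k1 - k2)]
# with k1, k2 standing for the dimensionless roots k~_1^2, k~_2^2.
def g_coeff(xi, eta):
    s = 1.0 - xi + eta**2
    root = math.sqrt(s**2 - 4.0 * eta**2)
    k1, k2 = (s + root) / 2.0, (s - root) / 2.0
    num = k1 * (2.0 * eta**2 - 2.0 * k1 - 2.0 * xi + 3.0) - 3.0 * eta**2
    return num / (3.0 * k1 * (k1 - k2))

# minimally coupled limit xi = 0: g = 1/3, recovering pure f(R)-gravity
print(abs(g_coeff(0.0, 0.5) - 1.0 / 3.0) < 1e-12)   # True

# figures' values: both Yukawa coefficients are positive (attractive corrections)
g = g_coeff(-2.0, 0.1)
print(g > 0.0 and (1.0 / 3.0 - g) > 0.0)            # True
```

This makes quantitative the statement, discussed later, that $g(\xi,\eta)$ reaches its maximum $1/3$ at $\xi\,=\,0$ and that both corrections remain attractive away from that limit.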
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{scalar_field.eps}\\
\caption{The spatial behavior of the scalar field $\phi^{(1)}$ generated by a point-like source (\ref{coupledsyst_sol_point}) for $\eta\,=\,0.1$ and $\xi\,=\,-2$.}
\label{plotscalar}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{ricci.eps}\\
\caption{The spatial behavior of the Ricci scalar $R^{(1)}$ (dashed line) generated by a point-like source (\ref{coupledsyst_sol_point}), compared with the same quantity (dotted line) in $f(R)$-gravity. In both cases we set $\eta\,=\,0.1$ and $\xi\,=\,-2$.}
\label{plotricciscalar}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{potential_1.eps}\\
\caption{Comparison among the potentials $\Phi$ generated by a point-like source in three frameworks: the first one (\ref{new_sol_point}) induced by the action (\ref{HOGaction}) (dashed line), the second one (dotted line) induced by $f(R)$-gravity (\ref{new_sol_point_fR}) and the last one (solid line) is the Newtonian limit of GR. In the two alternative theories, we set $\eta\,=\,0.1$ and $\xi\,=\,-2$.}
\label{plotpotential}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{potential_2.eps}\\
\caption{Comparison among the potentials $\Psi$ generated by a point-like source in three frameworks. The scheme is the same of Fig. \ref{plotpotential}.}
\label{plotpotential_2}
\end{figure}
The solutions (\ref{coupledsyst_sol_point}) and (\ref{new_sol_point}) generalize the solutions obtained in $f(R)$-gravity and in scalar-tensor gravity. In fact, in the case of a minimally coupled scalar field, \emph{i.e.} $f_{R\phi}\,=\,0\,\rightarrow\,\xi\,=\,0$, ${\tilde{k}_{1,2}}^2\,=\,1,\,\eta^2$, $g(\xi,\eta)\,=\,1/3$, we easily recover the point-like solutions of $f(R)$-gravity \cite{PRD1, PRD1_2}
\begin{eqnarray}\label{new_sol_point_fR}
\Phi_{f(R)}(\mathbf{x})\,&=&\,-\frac{GM}{f_R(0)|\textbf{x}|}\biggl\{1+\frac{1}{3}\,e^{-m_R\,|\textbf{x}|}\biggr\}\nonumber\\\\
\Psi_{f(R)}(\mathbf{x})\,&=&\,-\frac{GM}{f_R(0)|\textbf{x}|}\biggl\{1-\frac{1}{3}\,e^{-m_R\,|\textbf{x}|}\biggr\}\nonumber
\end{eqnarray}
while in the case of the Brans-Dicke theory \emph{i.e.} $f_{R\phi}=1,\,\,f_{R}=\phi,\,\,\omega(\phi)\,=\,-\omega_0/\phi,\,\,f_{RR}\,=\,0,\,\,f_{\phi\phi}\,=\,0$, from the field Eqs. (\ref{NL-fieldequationHOG}), we find the classical solutions of Brans-Dicke gravity \cite{BD}
\begin{eqnarray}\label{sol_point_BD}
\Phi_{BD}(\mathbf{x})\,&=&\,-\frac{GM}{\phi^{(0)}|\textbf{x}|}\frac{2(2+\omega_0)}{2\,\omega_0+3}
\nonumber\\\\
\Psi_{BD}(\mathbf{x})\,&=&\,-\frac{GM}{\phi^{(0)}|\textbf{x}|}\frac{2(1+\omega_0)}{2\,\omega_0+3}
\nonumber
\end{eqnarray}
The Brans-Dicke behavior is also contained in the solutions (\ref{new_sol_point}). In fact, we recover the solutions (\ref{sol_point_BD}) when $m_R\,\rightarrow\,\infty,\,\,m_\phi\,=\,0,\,\,{\displaystyle \xi\,=\,-\frac{3}{2\,\omega_0}}$. Also the solutions (\ref{new_sol_point_fR}) have been found in vacuum, but here the boundary conditions for the Yukawa term at the origin (where the mass is placed) have also been imposed. In this case, the arbitrary time function $\delta_1(t)$ in Eq. (\ref{gravpot}) is fixed to the value ${\displaystyle -\frac{2\,GM\,m^2}{f_1}}$, so that we can identify $\Phi_{grav}\,=\,\Phi_{f(R)}$, while the expression for $\Psi_{grav}$ in Eq. (\ref{mesol}) becomes
\begin{eqnarray}\label{pot_grav_2}
\Psi_{grav}\,=\,\frac{g^{(1)}_{rr}}{2}\,=\,-\frac{GM}{f_1r}\biggl\{1-\frac{1+mr}{3}\,e^{-mr}\biggr\}
\end{eqnarray}
It is possible to show that the potential $\Psi_{grav}$ is equal to the potential $\Psi_{f(R)}$ of Eqs. (\ref{new_sol_point_fR}) if we adopt standard coordinates. The passage from the isotropic coordinates $(t,x^1,x^2,x^3)$ to the standard ones $(t,r,\theta,\phi)$ is given by the transformation $\biggl[1-2\,\Psi_{f(R)}(|\textbf{x}|)\biggr]|\textbf{x}|^2\,=\,r^2$, where $|\textbf{x}|^2\,=\,x_ix^i$; then, at first order in the quantity $r_g/r$ (or $r_g/|\textbf{x}|$), the metrics
\begin{eqnarray}\label{isotropic_metric}
&&ds^2\,=\,\biggl[1-\frac{r_g}{f_R(0)|\textbf{x}|}\biggl(1+\frac{1}{3}\,e^{-m_R|\textbf{x}|}\biggr)\biggr]dt^2-\biggl[1+\frac{r_g}{f_R(0)|\textbf{x}|}
\biggl(1-\frac{1}{3}\,e^{-m_R|\textbf{x}|}\biggr)\biggr]\delta_{ij}dx^idx^j\nonumber\\\\
&&ds^2\,=\,\biggl[1-\frac{r_g}{f_1\,r}\biggl(1+\frac{1}{3}\,e^{-mr}\biggr)\biggr]dt^2-\biggl[1+\frac{r_g}{f_1\,r}
\biggl(1-\frac{1+mr}{3}\,e^{-mr}\biggr)\biggr]dr^2-r^2d\Omega\nonumber
\end{eqnarray}
coincide and, obviously, for $f(R,\phi)\,\rightarrow\,f(R)$, we have $m_R\,=\,m$ and $f_R(0)\,=\,f_1$.
It is interesting to note that, in the case of a minimally coupled scalar field ($f_{R\phi}\,=\,0$), the Newtonian level of the field Eqs. (\ref{fieldequationHOG}) for the metric tensor is unaffected by the presence of the scalar field $\phi$. Moreover, $\phi$ is not linked to the energy-momentum tensor \emph{via} the Ricci scalar and must satisfy only the boundary conditions at infinity, while the amplitude of the scalar field is generic and depends only on time. In fact, the solution of the first of Eqs. (\ref{coupledsyst}) is
\begin{eqnarray}
\phi^{(1)}(t,\textbf{x})\,=\,\frac{K(t)}{2\,\omega(\phi^{(0)})}\frac{e^{-m_\phi|\textbf{x}|}}{|\textbf{x}|}\,,
\end{eqnarray}
where $K(t)$ is a generic function of time. The evolution of $K(t)$ is fixed by the post-Newtonian level of the field equations. By considering $f_{R\phi}\,\neq\,0$, we find a further contribution to the energy-momentum tensor. Another interesting case is the generalization of Brans-Dicke theory: we can consider a scalar-tensor theory in which the geometric sector is given only by the Ricci scalar. Without loss of generality, we can set the interaction term in the action (\ref{HOGaction}) as $\phi\,R$ rather than $f(\phi)\,R$, because by introducing a new scalar field we obtain formally the same equations \cite{TS_analogy}. By setting $f_{R\phi}\,=\,1,\,\,f_{RR}\,=\,0,\,\,f_R\,=\,\phi$, the field Eqs. (\ref{NL-fieldequationHOG}) become\footnote{Also in this case we can find the solution of $\Psi$ as $\Psi(\textbf{x})\,=\,\frac{1}{8\pi}\int
d^3\textbf{x}'\frac{R^{(1)}(\textbf{x}')}{|\textbf{x}-
\textbf{x}'|}-\frac{\phi^{(1)}(\textbf{x})}{\phi^{(0)}}$.}
\begin{eqnarray}
\label{NL-fieldequationST}
&&\triangle\Phi\,=\,\frac{\mathcal{X}\,\rho}{\phi^{(0)}}+\frac{\triangle\phi^{(1)}}{\phi^{(0)}}+\frac{R^{(1)}}{2}\nonumber\\\nonumber\\
&&\Psi\,=\,\Phi+\frac{\phi^{(1)}}{\phi^{(0)}}\nonumber\\\\
&&\biggl[\triangle-{m_\phi}^2\biggr]\phi^{(1)}\,=\,-\frac{R^{(1)}}{2\,\omega(\phi^{(0)})}\nonumber\\
&&R^{(1)}\,=\,-\frac{\mathcal{X}\,\rho}{\phi^{(0)}}-\frac{3\,\triangle\phi^{(1)}}{\phi^{(0)}}\nonumber
\end{eqnarray}
and their solutions are
\begin{eqnarray}
\label{NL-solution_ST}
&&\phi^{(1)}(\textbf{x})\,=\,-\frac{1}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}\frac{r_g}{|\textbf{x}|}\,e^{-\sqrt{\frac{2\,\omega(\phi^{(0)})\,\phi^{(0)}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}}\,m_\phi |\textbf{x}|}\nonumber\\\nonumber\\
&&R^{(1)}(\textbf{x})\,=\,-\frac{4\pi\,r_g}{\phi^{(0)}}\,\delta(\textbf{x})+\frac{6\,\omega(\phi^{(0)})\,{m_\phi}^2}{[2\,\omega(\phi^{(0)})\,\phi^{(0)}-3]^2}\frac{r_g}{|\textbf{x}|}\,e^{-\sqrt{\frac{2\,\omega(\phi^{(0)})\,\phi^{(0)}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}}\,m_\phi |\textbf{x}|}\nonumber\\\\
&&\Phi_{ST}(\textbf{x})\,=\,-\frac{GM}{\phi^{(0)}|\textbf{x}|}\biggl\{1-\frac{e^{-\sqrt{\frac{2\,\omega(\phi^{(0)})\,\phi^{(0)}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}}\,m_\phi |\textbf{x}|}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}\biggr\}\nonumber\\\nonumber\\
&&\Psi_{ST}(\textbf{x})\,=\,-\frac{GM}{\phi^{(0)}|\textbf{x}|}\biggl\{1+\frac{e^{-\sqrt{\frac{2\,\omega(\phi^{(0)})\,\phi^{(0)}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}}\,m_\phi |\textbf{x}|}}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}\biggr\}\nonumber
\end{eqnarray}
which, in the case of a massless scalar field, reduce to the typical solutions of Brans-Dicke theory.
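As a consistency check on the massless limit just stated, the short sketch below (our own illustration, with an arbitrary test value of $\omega_0$ and using $\omega(\phi)\,=\,-\omega_0/\phi$ as above, so that $\omega(\phi^{(0)})\,\phi^{(0)}\,=\,-\omega_0$) compares the coefficients of $-GM/(\phi^{(0)}|\textbf{x}|)$ in the two sets of solutions.

```python
# Numerical check that the massless limit of the scalar-tensor solutions
# reproduces the Brans-Dicke coefficients. With omega(phi) = -omega0/phi,
# one has omega(phi0) * phi0 = -omega0.
omega0 = 5.0                      # arbitrary test value
w_phi0 = -omega0                  # omega(phi0) * phi0

# massless scalar field: the exponentials reduce to 1, leaving
phi_factor = 1.0 - 1.0 / (2.0 * w_phi0 - 3.0)   # coefficient in Phi_ST
psi_factor = 1.0 + 1.0 / (2.0 * w_phi0 - 3.0)   # coefficient in Psi_ST

# Brans-Dicke coefficients:
phi_BD = 2.0 * (2.0 + omega0) / (2.0 * omega0 + 3.0)
psi_BD = 2.0 * (1.0 + omega0) / (2.0 * omega0 + 3.0)

print(abs(phi_factor - phi_BD) < 1e-12, abs(psi_factor - psi_BD) < 1e-12)  # True True
```

Algebraically, $1+1/(2\omega_0+3)\,=\,2(2+\omega_0)/(2\omega_0+3)$ and $1-1/(2\omega_0+3)\,=\,2(1+\omega_0)/(2\omega_0+3)$, so the agreement holds for any $\omega_0$.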
In the cases shown above, the Newtonian contribution ($\propto|\textbf{x}|^{-1}$) to the potential is always present. A difference appears in the definition of the gravitational constant $G$, since in these theories it is multiplied by a factor $f_R(0,\phi^{(0)})^{-1}$, while the additional terms depend on the form of the Lagrangian. It is important to stress that, in all the cases considered, the limits and results of GR are fully recovered.
If we have a generic matter source distribution $\rho(\textbf{x})$, it is sufficient to use the superposition principle, starting from the point-like solutions. Then, in the solutions (\ref{new_sol_point}), we substitute the integral expression $\Phi\,\rightarrow\,\int\Phi$. This approach is correct only in the Newtonian limit, since such a limit corresponds to the linearized version of the theory.
\section{Rotation curves of galaxies}
\label{cinque}
At the astrophysical level, a probe of the validity of alternative theories of gravity is the correct reproduction of the rotation curves of spiral galaxies \cite{annalen}. As discussed above, the foundation of the dark matter issue lies in this observational evidence.
In order to face such a problem, one has to discuss the motion of a body embedded in a gravitational field. Let us take into account the geodesic equation
\begin{eqnarray}\label{geodesic}
\frac{d^2\,x^\mu}{ds^2}+\Gamma^\mu_{\alpha\beta}\frac{dx^\alpha}{ds}\frac{dx^\beta}{ds}\,=\,0
\end{eqnarray}
where $ds\,=\,\sqrt{g_{\alpha\beta}dx^\alpha dx^\beta}$ is the relativistic interval. In the Newtonian limit, from Eq. (\ref{geodesic}) we obtain the equation of motion
\begin{eqnarray}
\frac{d^2\,\mathbf{x}}{dt^2}\,=\,-\nabla\Phi(\mathbf{x})\,.
\end{eqnarray}
In our case, the gravitational potentials are given by (\ref{new_sol_point}). The study of the motion is very simple if we consider a particular symmetry of the mass distribution $\rho$; otherwise, analytic solutions are not available. Our aim is to evaluate the corrections to the classical motion in the simplest situation, the circular motion; in this case we do not consider radial and vertical motions. The condition of stationary motion on a circular orbit is
\begin{eqnarray}\label{stazionary_motion}
v_c(|\mathbf{x}|)\,=\,\sqrt{|\mathbf{x}|\frac{\partial\Phi(\mathbf{x})}{\partial|\mathbf{x}|}}
\end{eqnarray}
where $v_c$ is the circular velocity. Generally, the correction terms do not satisfy the Gauss theorem \cite{Stabile_Capozziello}, which implies that a sphere cannot simply be reduced to a point. In fact, the gravitational potential generated by a sphere (even with constant density) depends also on the Fourier transform of the sphere itself \cite{Stabile_Capozziello}. Only in the limiting case where the radius of the sphere is small with respect to the distance (point-like source) do we obtain the simple expression (\ref{new_sol_point}).
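To illustrate Eq. (\ref{stazionary_motion}) in practice, the minimal sketch below (our own illustration, in arbitrary units; the point-like $f(R)$ potential (\ref{new_sol_point_fR}) is used for definiteness) computes $v_c$ from a finite-difference derivative of the potential and shows that the attractive Yukawa term raises the curve above the Keplerian one at $r\,\sim\,1/m$.

```python
import math

G, M = 1.0, 1.0   # illustrative units, not a fit to data

def phi_fR(r, f1=1.0, m=1.0):
    # point-like potential of f(R)-gravity: -(GM / f1 r) [1 + exp(-m r)/3]
    return -(G * M / (f1 * r)) * (1.0 + math.exp(-m * r) / 3.0)

def v_circ(phi, r, h=1.0e-6):
    # stationary circular motion: v_c = sqrt(r dPhi/dr),
    # with the derivative evaluated by a central difference
    dphi_dr = (phi(r + h) - phi(r - h)) / (2.0 * h)
    return math.sqrt(r * dphi_dr)

v_mod = v_circ(phi_fR, 1.0)
v_newt = v_circ(lambda r: -G * M / r, 1.0)
print(v_mod > v_newt)   # True: the attractive Yukawa term raises v_c
```

For a realistic source, the same recipe applies to the superposed (integrated) potential rather than the point-like one, as noted above.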
A further remark on Eqs. (\ref{new_sol_point}) is needed. The structure of the solutions is mathematically similar to that of the fourth-order gravity generated by $f(R,R_{\alpha\beta}R^{\alpha\beta})$. However, there is a fundamental difference with respect to the present case: there the two Yukawa corrections have different algebraic signs. In particular, the Yukawa correction induced by a generic function of the Ricci scalar implies a stronger attractive gravitational force, while the one induced by the squared Ricci tensor implies a repulsive force \cite{PRD1, stabile_scelza}. In the present paper, the Yukawa corrections are induced by a generic function of the Ricci scalar and a non-minimally coupled scalar field, and both corrections have positive coefficients. In fact, in Fig. (\ref{plotcoefficient_1}) we show the coefficient $g(\xi,\eta)$ with respect to $\xi$ for given values of $\eta$. The function $g(\xi,\eta)$ assumes its maximum value ($=\,1/3$) when $\xi\,=\,0$ (pure $f(R)$-gravity, for which the scalar field does not contribute to the gravitational potential); otherwise we have two Yukawa corrections with positive coefficients. The scalar field thus gives rise to a more attractive force than in $f(R)$-gravity. The interesting range of values of $\eta$ is between $0$ and $1$: in the case $\eta\,>\,1$, \emph{i.e.} $m_\phi\,>\,m_R$, the correction induced by the scalar field is suppressed with respect to the other one.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{coeff.eps}\\
\caption{Plot of coefficient $g(\xi,\eta)$ with respect to quantity $\xi$ for $0\,\leq\eta\,\leq\,0.99$ with step $0.33$.}
\label{plotcoefficient_1}
\end{figure}
From this analysis, the choice of $f(R,\phi)$-gravity appears better than $f(R,R_{\alpha\beta}R^{\alpha\beta})$-gravity, but a problem arises in the limit $|\textbf{x}|\,\rightarrow\,\infty$: the interaction is, of course, scale-dependent (the scalar fields are massive) and in the vacuum the corrections turn off. For this reason, at large distances, we recover only the classical Newtonian contribution; the presence of the scalar fields merely makes the profile smooth. This behavior is very clear in the study of the rotation curves (\ref{stazionary_motion}). Let us assume a phenomenological point-like gravitational potential as proposed by Sanders \cite{Sanders90,sanders}
\begin{eqnarray}\label{san_pot}
\Phi_{SP}(\textbf{x})\,=\,-\frac{GM}{|\textbf{x}|}(1+\alpha\,e^{-m|\textbf{x}|})
\end{eqnarray}
where $\alpha$ and $m$ are free parameters that, following Sanders \cite{sanders}, can be taken to be $\alpha\,\simeq\,-0.92$ and $r_0\,=\,1/m\,\simeq\,40\, \text{kpc}$ to fit the galactic rotation curves. This potential was introduced to explain the rotation curves of spiral galaxies \cite{sanders,cardoneyoukawa}, although the theoretical framework generating it is purely phenomenological. Recently, the same potential has been used to fit elliptical galaxies \cite{cap_de_na}. In both cases, by setting a negative value of $\alpha$, an almost constant rotation-curve profile is recovered. Such a rotation curve is certainly possible, but there are two problems. The first is that no $f(R, \phi)$-gravity, once all boundary conditions at the origin and at infinity are imposed, gives such a negative value of $\alpha$. The second is linked to the value of the gravitational constant $G$: in the presence of a Yukawa-like correction with negative coefficient, we find a lower rotation curve, and only by rescaling $G$ (or the point-like mass) can we fit the experimental data. In Fig.~(\ref{plotcircul}) we compare the profiles derived in the Newtonian limit of GR, $f(R)$- and $f(R,\phi)$-gravity with the potential (\ref{san_pot}). It is extremely interesting to note that the presence of the scalar field $\phi$, in the case $m_\phi\,\sim\,m_R$, guarantees a rotation curve higher than the other ones, but also in this case the asymptotic flatness derived from observations is not recovered.
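As an illustrative numerical sketch (not part of the analysis; $GM$ is set to unity), the circular velocity following from the Sanders potential (\ref{san_pot}) via $v_c=\sqrt{|\mathbf{x}|\,\partial\Phi/\partial|\mathbf{x}|}$ is $v_c^2(r)=\frac{GM}{r}\left[1+\alpha(1+mr)e^{-mr}\right]$:

```python
import math

def v_circ(r, GM=1.0, alpha=-0.92, m=1.0 / 40.0):
    """Circular velocity for the Sanders potential
    Phi(r) = -(GM/r) (1 + alpha exp(-m r)).
    From v_c^2 = r dPhi/dr one gets
    v_c^2 = (GM/r) [1 + alpha (1 + m r) exp(-m r)].
    Units are arbitrary here (GM = 1); r and 1/m in kpc."""
    return math.sqrt((GM / r) * (1.0 + alpha * (1.0 + m * r) * math.exp(-m * r)))

# alpha = 0 recovers the Keplerian curve v_c = sqrt(GM/r)
print(v_circ(10.0, alpha=0.0))
# with alpha = -0.92 the curve is suppressed relative to Keplerian,
# consistent with the lower rotation curve discussed in the text
print(v_circ(10.0) < v_circ(10.0, alpha=0.0))
```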
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{circul.eps}\\
\caption{Comparison (in the vacuum case) of the spatial behaviors of rotation curves in the gravitational field generated by a point-like source. The dotted and dashed lines are the Sanders model for $-0.95\,<\,\alpha\,<\,-0.92$, the solid line is the GR curve, the dotted line is the $f(R)$-gravity and the dashed line is the fourth order gravity non-minimally coupled with a scalar field. In the last case, we set $\xi\,=\,-2$, $\eta\,=\,0.1$.}
\label{plotcircul}
\end{figure}
Only if we consider a massive non-minimally coupled scalar-tensor theory do we get a potential with a negative coefficient in Eq.~(\ref{NL-solution_ST}). In fact, by setting the gravitational constant as ${\displaystyle G_0\,=\,\frac{2\,\omega(\phi^{(0)})\,\phi^{(0)}-4}{2\,\omega(\phi^{(0)})\,\phi^{(0)}-3}\frac{G_\infty}{\phi^{(0)}}}$, where $G_\infty$ is the gravitational constant as measured at infinity, and by imposing $\alpha^{-1}\,=\,3-2\,\omega(\phi^{(0)})\,\phi^{(0)}$, the potential $\Phi_{ST}$ in (\ref{NL-solution_ST}) becomes
\begin{eqnarray}
\label{ST_pot}
\Phi_{ST}(\textbf{x})\,=\,-\frac{G_\infty M}{|\textbf{x}|}\biggl\{1+\alpha\,e^{-\sqrt{1-3\alpha}\,m_\phi |\textbf{x}|}\biggr\}
\end{eqnarray}
and then the Sanders potential (\ref{san_pot}) is fully recovered.
\section{Discussion and Conclusions}
\label{sei}
The dark matter issue, together with dark energy, can be considered the major problem of modern astrophysics and cosmology. Despite the huge amount of observations confirming its effects at practically all astrophysical scales, no final answer exists at the fundamental level that definitively confirms one (or more than one) of the candidates supposed to explain the phenomenology. Furthermore, GR has been firmly tested only up to Solar System scales, and its features at larger scales have been inferred from there. In this situation, dark matter and dark energy could be nothing else but the manifestation that GR does not work at IR scales.
A similarly disturbing situation is found at UV scales, where no Quantum Gravity theory is definitively available up to now. Alternative gravities (in particular ETGs) could represent a way out of this puzzle, being effective theories of gravity that provide a reliable picture of quantum fields in high-curvature regimes \cite{libro} and an approach to overcome the dark side problem at larger scales.
In this paper, we have discussed the weak field limit (in particular the Newtonian limit) of some classes of ETGs with the aim of explaining the almost flat rotation curves of spiral galaxies. We have shown that ETGs, in general, present Yukawa-like corrections in the gravitational potential, and we have analyzed the cases of $f(R)$- and $f(R,\phi)$-gravity; the latter is known to be analogous to $f(R,\Box R)$.
After a discussion of the mathematical features of the emerging corrections, we have compared the results with the phenomenological Sanders potential, assumed as a possible dynamical explanation of flat rotation curves. The suitable values of the phenomenological parameters can be exactly reproduced in the framework of
$f(R,\phi)$-gravity, since the concurring Yukawa corrections allow one to recover both attractive and repulsive components of the potential. In this case, no dark matter is required to fit the dynamics, unlike the case discussed in \cite{cardoneyoukawa}, where a single Yukawa-like correction was not sufficient to reproduce realistic rotation curves.
\section{Introduction}
The Standard Model predicts that the Cabibbo-Kobayashi-Maskawa (CKM) matrix,
which describes the mixing between different flavors of quarks, must be unitary.
Any deviation from unitarity would be an indication of new physics,
and constraints can be established on the scale of new physics even if unitarity is fulfilled.
The CKM matrix elements $\lvert{V_{us}}\rvert$, $\lvert{V_{cs}}\rvert$
and $\lvert{V_{cd}}\rvert$ can be precisely determined
from the semileptonic decays $K\to\pi l\nu$, $D\to K l \nu$ and $D\to\pi l \nu$ respectively.
This determination can be done combining experimental results with lattice calculations of the corresponding vector form factors
at zero momentum transfer, $f_+(0)$.
Here we present a calculation of these form factors using HISQ valence quarks on MILC $N_f=2+1+1$ HISQ lattices.
The details of the ensembles used are given in Table~\ref{ensembles}.
\begin{table}[b]
\centering
\caption{Ensembles used in these calculations. \textcolor{red}{Red} indicates physical quark mass ensembles.}
{
\begin{tabular}{c c c c c c c c c}\hline\hline
\multirow{2}{*}{$a$ (fm)} & \multirow{2}{*}{$m_l/m_s$} & \multirow{2}{*}{Volume} & \multicolumn{2}{c}{$N_{conf}\times N_{t_{source}}$}&
\multirow{2}{*}{$am_s^{sea}$} & \multirow{2}{*}{$am_s^{val}$} & \multirow{2}{*}{$am_c^{sea}$} & \multirow{2}{*}{$am_c^{val}$} \\
& & & ($K\to\pi$) & ($D\to K/\pi$) & & & & \\ \hline
0.15 &\textcolor{red}{0.035} & $32^3\times 48$ & $1000\times 4$ & & \textcolor{red}{0.0647} & \textcolor{red}{0.0691} & 0.831 & 0.8531 \\ \hline
0.12 & 0.2 & $24^3\times 64$ & $1053\times 8$ & $1050 \times 8$ &0.0590 & 0.0535 & 0.635 & 0.6363 \\
& 0.1 & $32^3\times 64$ & $993 \times 4$ & $993 \times 4$ & 0.0507 & 0.053 & 0.628 & 0.650 \\
& 0.1 & $40^3\times 64$ & $391 \times 4$ & &0.0507 & 0.053 & 0.628 & 0.650 \\
& \textcolor{red}{0.035} & $48^3\times 64$ & $945 \times 8$ & $943 \times 8 $ &\textcolor{red}{0.0507} & \textcolor{red}{0.0531} & 0.628 & 0.6269 \\ \hline
0.09 & 0.2 & $32^3\times 96$ & $755 \times 4$ & $773 \times 4 $& 0.037 & 0.038 & 0.45 & 0.44 \\
& 0.1 & $48^3\times 96$ & $853 \times 4$ & $851 \times 4$ & 0.0363 & 0.038 & 0.44 & 0.43 \\
& \textcolor{red}{0.035} & $64^3\times 96$ & $963 \times 8$ & $905 \times 8$ &\textcolor{red}{0.0363} & \textcolor{red}{0.0363} & 0.432 & 0.432 \\ \hline
0.06 & 0.2 & $48^3\times 144$& $362 \times 4$ & &0.024 & 0.024 & 0.286 & 0.286 \\
& \textcolor{red}{0.035} & $96^3\times 192$& $565 \times 6$ & $565 \times 6$ &\textcolor{red}{0.022} & \textcolor{red}{0.022} & 0.26 & 0.26 \\ \hline \hline
\end{tabular}\label{ensembles}}
\end{table}
\section{Method}
We do not calculate the vector form factor directly, but rather follow the method introduced in Ref.~\cite{Na:2010uf}
which uses a Ward Identity to relate the matrix element of a vector current to that of a scalar current:
\begin{equation}
f_0^{DK}(q^2)=\frac{m_{c}-m_s}{m^2_{D}-m^2_K}\langle{K}\lvert{S}\rvert{D}\rangle_{q^2},
\end{equation}
together with the kinematic constraint $f_+(0)=f_0(0)$.
The same applies to the other processes considered in these proceedings, $D\to\pi$ and $K\to\pi$.
The main advantage of this approach over calculating the vector form factor directly is that the combination of
$(m_{c}-m_s)S$ does not need renormalization.
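The Ward-identity relation above can be sketched numerically as follows (all values below are placeholders in lattice units, purely for illustration, not our data):

```python
def f0_from_scalar(m_c, m_s, M_D, M_K, scalar_me):
    """Scalar form factor from the Ward identity
    f_0(q^2) = (m_c - m_s) / (M_D^2 - M_K^2) * <K|S|D>_{q^2}.
    The combination (m_c - m_s) S needs no renormalization."""
    return (m_c - m_s) / (M_D**2 - M_K**2) * scalar_me

# Placeholder values, chosen only to resemble typical lattice-unit magnitudes:
f0 = f0_from_scalar(m_c=0.63, m_s=0.053, M_D=1.13, M_K=0.31, scalar_me=1.5)
print(f0)
```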
Determination of the form factor requires the calculation of 2pt and 3pt correlation functions,
where the 3pt correlators have the structure depicted in Figure~\ref{3ptdiagram}.
We generate a light (strange) quark propagator at $t_{source}$ with random-wall sources and
contract it with an extended strange (charm) propagator generated at $T+t_{source}$.
\begin{figure}\centering
\includegraphics[width=0.6\linewidth]{Semileptonic_2014.pdf}\vspace{-200pt}
\caption{The structure of the 3pt correlators used for the calculation of the $D\to\pi$ and $D\to K$ scalar form factors,
with the quantities in brackets corresponding to $K\to\pi$.}\label{3ptdiagram}
\end{figure}
In order to calculate 3pt correlators with zero momentum transfer we tune the momentum of the
child particle using twisted boundary conditions.
For each spatial direction $k$ with length $L$ these boundary conditions are defined as,
\begin{equation}
\psi(x_k + L) = e^{i\theta_k}\psi(x_k)
\end{equation}
This results in a propagator carrying momentum $p_k=\pi\frac{\theta_k}{L}$.
Our initial $K\to\pi$ calculations experimented with putting the momentum on either the
parent kaon or child pion ($\vec{\theta}_1\ne 0$ or $\vec{\theta}_2\ne 0$ in Fig~\ref{3ptdiagram} respectively),
however the statistical errors were significantly larger with the momentum on the kaon.
Therefore we now only put momentum on the child particle ($\vec{\theta}_1 = 0$ and $\vec{\theta}_2\ne 0$).
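For a parent at rest, the condition $q^2=(p_D-p_K)^2=0$ fixes the child momentum at $|\vec p_K| = (M_D^2-M_K^2)/(2M_D)$. The sketch below works this out with continuum meson masses in GeV and converts to a twist angle using the $p_k=\pi\theta_k/L$ convention quoted above; the lattice spacing and extent are illustrative, not tied to a specific ensemble:

```python
import math

HBARC = 0.19733  # GeV fm

def p_child_q2zero(M_parent, M_child):
    """Child-meson momentum (GeV) giving q^2 = 0 with the parent at rest:
    |p| = (M_parent^2 - M_child^2) / (2 M_parent)."""
    return (M_parent**2 - M_child**2) / (2.0 * M_parent)

def twist_angle(p_gev, a_fm, L):
    """Twist angle along one axis for momentum p (GeV), using the
    p_k = pi * theta_k / L convention quoted in the text."""
    a_inv_gev = HBARC / a_fm      # lattice cutoff in GeV
    p_lat = p_gev / a_inv_gev     # momentum in lattice units
    return p_lat * L / math.pi

# Continuum masses (GeV) and an illustrative a = 0.12 fm, L = 24 lattice:
p = p_child_q2zero(1.8696, 0.4937)   # kaon momentum for D -> K, ~0.87 GeV
theta = twist_angle(p, a_fm=0.12, L=24)
print(p, theta)
```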
\section{$K\to\pi l \nu$ Updated Results}
Our results for $f_+^{K\pi}(0)$ and $\lvert{V_{us}}\rvert$ have been published previously in Ref.~\cite{Bazavov:2013maa}.
In Fig.~\ref{elviraextrap} we show new preliminary results for the 0.06 fm physical quark mass ensemble
as well as for the 0.09 fm physical mass ensemble with improved statistics in comparison
with our previous results.
The statistical error in the 0.09 fm point decreases from $\sim 0.4\%$ to $\sim 0.3\%$ when we double the number of time sources
and add around 300 more configurations, while the central value is nearly unchanged.
Our preliminary 0.06 fm result agrees very well with the other physical quark mass points,
as well as with the extrapolated value of Ref.~\cite{Bazavov:2013maa},
and has similar errors.
We are still generating data on this ensemble, so we expect to reduce the error shown in Fig.~\ref{elviraextrap}.
\begin{figure}[b]
\centering
\includegraphics[width=0.6\linewidth]{ElviraExtrap.pdf}
\caption{Chiral interpolation of $f_+^{K\pi}(q^2=0)$ from Ref.~\cite{Bazavov:2013maa}.
The data points with open symbols show our preliminary new results
for the $a\approx 0.06,\, 0.09$~fm physical mass ensembles.}
\label{elviraextrap}
\end{figure}
Even when simulations at physical quark masses are available,
application of a $\chi PT$ interpolation is useful for a number of reasons.
Computationally cheaper data at $m_\pi > m_\pi^{phys}$ with high statistics can be included
to reduce the final statistical errors.
Dominant discretization and finite volume effects can be analytically incorporated and removed.
Small mass mistunings and partially quenched effects ($m_s^{sea}\ne m_s^{val}$) can be corrected at leading order.
In order to arrive at a final result we use a combined chiral interpolation and continuum extrapolation.
The form factor $f^{K\pi}_+(0)$ is written in terms of one-loop (NLO) partially quenched staggered $\chi$PT
supplemented by two-loop (NNLO) continuum $\chi$PT and analytic terms;
see Ref.~\cite{Bazavov:2013maa} for details.
\begin{figure}\centering
\includegraphics[width=0.45\linewidth]{formfactor.pdf}
\includegraphics[width=0.3\linewidth]{FirstRow.pdf}
\caption{(left) Our value for $f_+^{K\pi}(0)$ from Ref.~\cite{Bazavov:2013maa} compared with previous work.
(right) Bounds on first row unitarity using $V_{ud}$ from Towner \& Hardy,
$f_k/f_\pi$ from Ref.~\cite{Bazavov:2014wgs} and $f^{K\pi}_+(0)$ from Ref.~\cite{Bazavov:2013maa}.}
\label{comparison}
\end{figure}
The result of the extrapolation for the form factor at physical quark mass in the continuum reported in Ref. \cite{Bazavov:2013maa} is:
\begin{equation}
f_+^{K\pi}(0)=0.9704(24)(22)=0.9704(32)
\end{equation}
Fig.~\ref{comparison} (left panel) shows a comparison of the result with previous lattice and non-lattice calculations.
Combining it with the latest experimental average ($\lvert{V_{us}}\rvert f^{K\pi}_+(0)=0.2163(5)$ Ref.~\cite{Moulson:2013nsa})
yields $\lvert{V_{us}}\rvert$ and a subsequent check of first row unitarity:
\begin{equation} \lvert{V_{us}}\rvert = 0.22290 \pm 0.00074_{lat} \pm 0.00052_{exp} \end{equation}
\begin{equation}
\rightarrow \Delta_{CKM} \equiv \lvert{V_{ud}}\rvert^2 + \lvert{V_{us}}\rvert^2 + \lvert{V_{ub}}\rvert^2 -1 =
-0.00115(40)_{V_{us}}(43)_{V_{ud}} \label{bounds}
\end{equation}
where Eq.~\ref{bounds} uses the $\lvert{V_{ud}}\rvert$ determination of Ref.~\cite{Hardy:2013lga}.
A graphical representation of the unitarity test is shown in the right panel of Fig.~\ref{comparison},
which also includes the result of Ref.~\cite{Bazavov:2014wgs}.
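The arithmetic behind the unitarity test can be sketched directly from the quoted inputs (the $\lvert V_{ud}\rvert$ and $\lvert V_{ub}\rvert$ values below are illustrative commonly quoted numbers, inserted here as assumptions, not taken from this work):

```python
# Inputs quoted in the text: |V_us| f_+(0) = 0.2163(5), f_+(0) = 0.9704(32)
Vus_f = 0.2163
f0 = 0.9704
Vus = Vus_f / f0                 # ~0.2229

# Illustrative (assumed) first-row inputs:
Vud = 0.97425                    # superallowed beta decays
Vub = 0.00415                    # negligible in the sum

Delta_CKM = Vud**2 + Vus**2 + Vub**2 - 1.0
print(Vus, Delta_CKM)            # Delta_CKM ~ -1.1e-3
```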
\section{$D\to K(\pi) l \nu$ Current Status}
The calculations of $D\to K$ and $D\to\pi$ follow the same procedure as that for $K\to\pi$.
The main difference comes from the significantly larger mass of the $D$ meson compared to that of the kaon.
This leads to a much larger momentum that must be injected into the kaon or pion to get $q^2=0$.
As a result the moving kaon (pion) two-point correlators and $D\to K(\pi)$
three-point correlators are quite noisy.
The challenge then is in finding stable fits and keeping statistical and systematic errors under control.
Our initial calculations focused on the physical quark mass ensembles at lattice spacings
$a\approx 0.12$, $0.09$, and $0.06$ fm.
We are currently expanding our analysis to a number of unphysical quark mass ensembles
as indicated in Table~\ref{ensembles}.
We can also analyse the form factor at $q^2_{max}$ (both particles stationary) without calculating any additional propagators.
This allows us to investigate the dependence of statistical errors on the external momentum.
The $D\to K$ form factor correlator fits are stable under variations of the fit parameters,
which means that errors due to excited states and other fit systematics are under control.
Our preliminary results for the form factor at $q^2=0$ and $q^2_{max}$ are shown in Fig.~\ref{dtok}
as functions of light-quark mass.
We observe that $f_0(q^2=0)$ shows no light-quark mass dependence within the statistical errors while $f_0(q^2_{max})$
exhibits a statistically significant trend, increasing as the light-quark mass approaches its physical value.
The analyses for other ensembles listed in Table~\ref{ensembles} are still in progress.
The statistical errors for $f_0(q^2=0)$ are in general around 10 times as large as those for $f_0(q^2_{max})$.
Finding good stability across fit windows and small statistical errors is difficult in the fits for $D\to\pi$ at $q^2=0$
on a number of the ensembles.
The large errors in the moving pion 2pt correlator and associated 3pt correlator lead to
the usual $\chi^2$ per degree of freedom being a poor criterion for judging fits, since almost all reasonable choices of fit window return
a small $\chi^2/[dof]$ (or, equivalently, a $p$-value close to 1).
These challenges will require more work to improve our fits.
\begin{figure}\centering\vspace{-24pt}\hspace{-48pt}
\includegraphics[width=0.50\linewidth]{{DtoK0.f_0}.pdf}\hspace{-24pt}
\includegraphics[width=0.50\linewidth]{{DtoK.f_0}.pdf}\\\vspace{-24pt}
\caption{$f_0(q^2)$ for $D\to K$ as a function of quark mass at $q^2_{max}$ (left) and $q^2=0$ (right).}
\label{dtok}
\end{figure}
\section{Future Plans}
The preliminary results presented here for $f_+^{K\pi}(0)$ on two of our physical quark mass ensembles look promising
for our goal of decreasing one of the main sources of uncertainty, statistical error, in our previous calculation.
The reduction of the other dominant source of error, finite volume corrections, will require an explicit one-loop $\chi$PT calculation.
Due to the large statistical errors that come with the momenta required to reach $q^2=0$ for D semileptonic decays,
it is valuable to work with other values of $q^2$ as well.
We plan to calculate $f_+^{DK(\pi)}(q^2)$ for a range of momentum values, which will require the evaluation of correlation functions
with insertion of vectors currents, in addition to those with insertion of scalar currents.
This will yield lattice form factors that cover the allowed range of $q^2$, which can be fit to a $z$-expansion \cite{Koponen:2013tua}
to determine the shape.
After checking the shape of the lattice form factor against experimental data we will use them to extract $\lvert{V_{cd(s)}}\rvert$
in a combined fit.
\acknowledgments
This work was supported by the U.S. Department of Energy and National Science Foundation.
Computation for this work was done at the Argonne Leadership Computing Facility (ALCF),
the National Center for Atmospheric Research (UCAR),
Blue Waters at the National Center for Supercomputing Applications (NCSA),
the National Energy Research Scientific Computing Center (NERSC),
the National Institute for Computational Sciences (NICS),
the Texas Advanced Computing Center (TACC),
and the USQCD facilities at Fermilab, under grants from the NSF and DOE.
A.K.~is supported by the U.S. Department of Energy under Grant No.~DE-FG02-13ER42001 and by the URA Visiting Scholars' program.
E.G.~is supported in part by MINECO (Spain) under Grants No.~FPA2010-16696 and No.~FPA2006-05294;
by Junta de Andaluc\'{\i}a (Spain) under Grants No.~FQM-101 and No.~FQM-6552;
and by the European Commission under Grant No.~PCIG10-GA-2011-303781.
Fermilab is operated by Fermi Research Alliance, LLC,
under Contract No.~DE-AC02-07CH11359 with the U.S. Department of Energy.
\section{Introduction}
\label{section1}
Planetary nebulae (PN) are evolved objects ejected by stars with main sequence masses in the
range of 0.8 to 8 $M_\odot$, so that the expected ages of their central stars are of the order
of, or greater than, about 1 Gyr.
stars implies that an age distribution is to be expected, which has some consequences in the
interpretation of the PN data in the Galaxy and other stellar systems. The determination of
ages of the central stars is a difficult problem, and most usual methods have large
uncertainties when applied to intermediate and old age objects. We have recently developed
three different methods to estimate the age distribution of the CSPN (Maciel, Costa \& Idiart
\citeyear{mci2010}, see also Maciel et al. \citeyear{mcu2003}, \citeyear{mlc2005},
\citeyear{mlc2006}), and applied these methods to a sample of PN in the disk of the Galaxy, most
of which are located in the solar neighbourhood, within 3 kpc from the Sun. These methods
include the determination of the age distribution of CSPN using (i) an age-metallicity
relation that also depends on the Galactocentric distance, (ii) an age-metallicity relation
obtained for the disk, and (iii) the central star masses obtained from the observed
nitrogen abundances. We concluded that most CSPN in our sample have ages under 6 Gyr,
and that the age distribution is peaked around 2-4 Gyr. The average uncertainties were
estimated as 1-2 Gyr, and the results were compared with the expected distribution based
both on the observed mass distribution of white dwarfs and on the age distribution derived
from available masses of CSPN.
In the present work we develop two additional and more accurate methods to estimate the age
distribution of the CSPN based on their kinematical properties, namely: (i) A method
based on the expected rotation velocities of the nebulae at their Galactocentric distances,
which are then compared with the predicted values for a given rotation curve; the
differences are attributed to the different ages of the evolved stars; (ii) A method
based on the derived U, V, W, velocity components of the stars and their corresponding
dispersions. In both cases, the age-velocity dispersion relations from the Geneva-Copenhagen
survey are used to infer the age distribution. These methods are applied to two PN samples,
(i) the previous sample of disk PN used by Maciel, Costa \& Idiart (\citeyear{mci2010}),
for which a detailed data set is available, and (ii) a sample containing all PN for which
accurate radial velocities are known. The methods are developed in Section~2, and the samples
used are described in Section~3. The main results and discussion are given in Section~4.
\section{Determination of the age distribution of CSPN}
\label{section2}
\subsection{Method 1: The PN rotation velocity}
As objects of intermediate age, PN in the disk of the Galaxy describe a rotation curve similar
to the one defined by younger objects, such as HII regions, although with a higher dispersion,
as discussed in detail by Maciel and Lago (\citeyear{ml2005}). Therefore, the discrepancies
between the rotation velocities inferred from the PN radial velocities and distances and the
expected velocities from the known rotation curve may be at least partially ascribed to their
evolved status. In other words, a given nebula located at a distance $d$, with galactic
coordinates $\ell$ and $b$ and observed heliocentric radial velocity $V_r(hel)$ can be associated
with a rotation velocity $\theta(R)$, after obtaining its Galactocentric distance $R$ and its
radial velocity relative to the Local Standard of Rest (LSR), $V_r(LSR)$. Assuming circular
orbits, the rotation velocity $\theta(R)$ at the Galactocentric distance $R$ can be written as
\begin{equation}
\theta(R) = {R \over R_0} \ \biggl[{V_r(LSR) \over \sin \ell \, \cos b} + \theta_0\biggr]
\end{equation}
\noindent
where $R_0$ and $\theta_0$ are the Galactocentric distance and rotation velocity at the solar
position (see for example Maciel \& Lago \citeyear{ml2005}, Maciel \& Dutra \citeyear{md1992}).
On the other hand, the expected rotation velocity at the given Galactocentric distance,
$\theta_c(R)$, can be obtained from an adopted rotation curve. The difference $\Delta \theta =
\vert{\theta(R) - \theta_c(R)}\vert$ can then be considered as proportional to the age difference
between the PN and the objects defining the rotation curve. We have adopted the radial velocities
from the catalogue by Durand et al. (\citeyear{durand}), and two distance scales, those by Maciel
(\citeyear{m1984}) and Stanghellini et al. (\citeyear{ssv2008}). The first one was based on a
relationship between the ionized mass and the radius of the nebulae, while the second is an update
of the distance scale by Cahn et al. (\citeyear{cks1992}), using a modified Shklovsky method
following Daub (\citeyear{daub}). Since the distances of planetary nebulae in the Galaxy may contain
large individual uncertainties, the use of two different scales which are considered as
\lq\lq short\rq\rq\ (Maciel \citeyear{m1984}) and \lq\lq long\rq\rq (Stanghellini et al.
\citeyear{ssv2008}) warrants that these uncertainties will not affect the derived age distributions.
We have adopted $R_0 = 8.0\,$kpc for the distance of the Sun to the centre and
$\theta_0 = 220\,$km/s for the rotation velocity at $R_0$. Slightly different values can be found
in the literature (see for example Perryman, \citeyear{perryman}, and Reid, \citeyear{reid}),
but the values above are frequently adopted, so that a comparison with other works is made easier.
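Eq.~(1) translates directly into a short routine; a minimal sketch using the adopted $R_0=8.0\,$kpc and $\theta_0=220\,$km/s:

```python
import math

R0, THETA0 = 8.0, 220.0  # kpc, km/s (values adopted in the text)

def theta_rot(R, vr_lsr, l_deg, b_deg):
    """Rotation velocity theta(R) in km/s from Eq. (1), assuming circular
    orbits: theta = (R/R0) [ V_r(LSR) / (sin l cos b) + theta0 ]."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (R / R0) * (vr_lsr / (math.sin(l) * math.cos(b)) + THETA0)

# Sanity check: an object at R = R0 with zero LSR radial velocity
# seen at l = 90 deg, b = 0 rotates with theta = theta0.
print(theta_rot(8.0, 0.0, 90.0, 0.0))   # 220.0
```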
For the \lq\lq theoretical\rq\rq\ rotation curve we have also adopted two possibilities, namely,
the PN derived curve by Maciel \& Lago (\citeyear{ml2005}), and the HII region derived curve by
Clemens (\citeyear{clemens}). In the first case, the rotation velocity can be written as
\begin{equation}
\theta_c(R) = a_0 + a_1 \, R + a_2 \, R^2 \ .
\end{equation}
\noindent
where the constants are $a_0 = 269.2549$, $a_1 = -14.7321$, and $a_2 = 0.7847$, the Galactocentric
distance $R$ is in kpc and $\theta_c(R)$ in km/s. For the CO/HII region based Clemens (\citeyear{clemens})
curve, we have made an adjustment for $R_0 = 8.0\,$kpc and $\theta_0 = 220\,$km/s, in which
case we have
\begin{equation}
\theta_c(R) = \sum a_i\, R^i \ .
\end{equation}
\noindent
where the constants are given in Table 1, with the same units as in Eq.(2).
\begin{table*}
\small
\caption[]{Coefficients of the polynomial given by Eq. (3).}
\label{table1}
\begin{flushleft}
\begin{tabular}{cccccc}
\noalign{\smallskip}
\hline\noalign{\smallskip}
$R$ (kpc) & $0-0.765$ & $0.765-2.9$ & $2.9-3.825$ & $3.825-13$ & $> 13$ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$a_0$ & 0.0 & 325.0912 & 329.8 & $-$2346.0 & 230.6 \\
$a_1$ & 3069.81 & $-$248.1467 & $-$250.1 & 2507.60391 & $-$ \\
$a_2$ & $-$15809.8 & 231.87099 & 231.87099 & $-$1024.068760 & $-$ \\
$a_3$ & 43980.1 & $-$110.73531 & $-$110.73531 & 224.562732 & $-$ \\
$a_4$ & $-$68287.3 & 25.073006 & 25.073006 & $-$28.4080026 & $-$ \\
$a_5$ & 54904.0 & $-$2.110625 & $-$2.110625 & 2.0697271 & $-$ \\
$a_6$ & $-$17731.0 & $-$ & $-$ & $-$0.080508084 & $-$ \\
$a_7$ & $-$ & $-$ & $-$ & 0.00129348 & $-$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table*}
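For reference, the adjusted Clemens curve of Eq.~(3) with the Table~1 coefficients can be evaluated as follows (the $R>13\,$kpc branch is the constant $230.6\,$km/s):

```python
# Coefficients of Eq. (3) from Table 1, adjusted to R0 = 8 kpc, theta0 = 220 km/s
CLEMENS = [
    (0.765, [0.0, 3069.81, -15809.8, 43980.1, -68287.3, 54904.0, -17731.0]),
    (2.9,   [325.0912, -248.1467, 231.87099, -110.73531, 25.073006, -2.110625]),
    (3.825, [329.8, -250.1, 231.87099, -110.73531, 25.073006, -2.110625]),
    (13.0,  [-2346.0, 2507.60391, -1024.068760, 224.562732, -28.4080026,
             2.0697271, -0.080508084, 0.00129348]),
]

def theta_clemens(R):
    """Rotation velocity (km/s) at Galactocentric distance R (kpc)."""
    for r_max, coeffs in CLEMENS:
        if R <= r_max:
            return sum(a * R**i for i, a in enumerate(coeffs))
    return 230.6  # flat beyond 13 kpc

print(theta_clemens(8.0))   # ~220 km/s, by construction of the adjustment
```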
The recent Geneva-Copenhagen Survey of the Solar Neighbourhood (cf. Nordstr\"om et al.
\citeyear{nordstrom}, Holmberg et al. \citeyear{holmberg2007}, \citeyear{holmberg2009})
has considerably improved the relations involving the ages, kinematics, and chemical
composition of a large sample containing about 14\,000 nearby F and G stars. Using basically
the original {\it Hipparcos} parallaxes, uvby$\beta$\ photometry and the Padova stellar
evolution models, several basic relations were investigated. In particular, high
correlations have been obtained between the velocity dispersions $\sigma_U$, $\sigma_V$,
$\sigma_W$, and $\sigma_T$ and the age of the star, which clearly show a smooth
increase of the velocity dispersions in the U, V, W components and total velocity $T$
with time. From the calibration by Holmberg et al. (\citeyear{holmberg2009}) these
correlations can be approximately written as
\begin{equation}
\log \sigma = a \, \log t + b \ ,
\end{equation}
\noindent
where the age $t$ is in Gyr and the constants $a$, $b$ are given in Table~2. This approximation
is valid in the age interval $0 < t {\rm (Gyr)} < 14$ with an estimated average age uncertainty
of about 25\%. Method~1 consists of assuming that the discrepancy in the rotation velocity
$\Delta \theta$ is due to the evolved status of the CSPN, so that we should expect a correlation
between $\Delta \theta$ and the velocity dispersion, as given by Eq.~(4). Since in this method
we are using the rotation velocity, we have considered two possibilities, according to which the
velocity discrepancy $\Delta \theta$ can be associated with (i) the $V$ component of the total
velocity ($\sigma_V$), or (ii) the total velocity ($\sigma_T$). Moreover, since we are adopting
two distance scales and two theoretical rotation curves, we have 8 different age distributions
for Method~1, characterized by the timescales $t_1$ to $t_8$, as explained in Table~3.
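Inverting Eq.~(4), the age assigned to a velocity discrepancy $\Delta\theta$ (identified here with a dispersion $\sigma$) is $t = 10^{(\log\sigma - b)/a}$; a minimal sketch with the Table~2 coefficients:

```python
import math

# Table 2 coefficients of log(sigma) = a log(t) + b (t in Gyr, sigma in km/s)
AVR = {"U": (0.39, 1.31), "V": (0.40, 1.10), "W": (0.53, 0.94), "T": (0.40, 1.40)}

def age_from_dispersion(sigma, component="V"):
    """Invert Eq. (4): t = 10**((log10(sigma) - b) / a),
    meaningful within the calibrated range 0 < t (Gyr) < 14."""
    a, b = AVR[component]
    return 10.0 ** ((math.log10(sigma) - b) / a)

# e.g. a 30 km/s discrepancy in the V component maps to roughly 9 Gyr
print(age_from_dispersion(30.0, "V"))
```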
\begin{table*}
\small
\caption[]{Coefficients of Eq. (4).}
\begin{changemargin}{4cm}{4cm}
\label{table2}
\begin{flushleft}
\begin{tabular}{ccc}
\noalign{\smallskip}
\hline\noalign{\smallskip}
& $a$ & $b$ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$U$ & 0.39 & 1.31 \\
$V$ & 0.40 & 1.10 \\
$W$ & 0.53 & 0.94 \\
Total & 0.40 & 1.40 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{changemargin}
\end{table*}
\begin{table*}
\small
\caption[]{Parameters for Method 1.}
\begin{changemargin}{2cm}{2cm}
\label{table3}
\begin{flushleft}
\begin{tabular}{cccc}
\noalign{\smallskip}
\hline\noalign{\smallskip}
Distance & Rotation Curve & Dispersion & Age \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
Maciel & PN & $\sigma_V$ & $t_1$ \\
Maciel & PN & $\sigma_T$ & $t_2$ \\
Maciel & Clemens & $\sigma_V$ & $t_3$ \\
Maciel & Clemens & $\sigma_T$ & $t_4$ \\
Stanghellini & PN & $\sigma_V$ & $t_5$ \\
Stanghellini & PN & $\sigma_T$ & $t_6$ \\
Stanghellini & Clemens & $\sigma_V$ & $t_7$ \\
Stanghellini & Clemens & $\sigma_T$ & $t_8$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{changemargin}
\end{table*}
\subsection{Method 2: The U, V, W velocity components}
Method~2 is also a kinematic method, and in principle more accurate than Method~1,
as discussed in more detail in Section~4.
From the PN radial velocities and distances, we have estimated their proper motions both in
right ascension and declination, $\mu_\alpha$ and $\mu_\delta$. We have assumed that, on average,
the tangential velocities are similar to the radial velocities, namely $V_t \simeq V_r$. In
view of the large distances of the nebulae, in practice this hypothesis does not introduce any
major uncertainties in the results. Considering further the equatorial coordinates
($\alpha,\delta$) of the PN, we have used the equations by Boesgaard and Tripicco
(\citeyear{boesgaard}) to derive the $U$, $V$, $W$ velocity components of the nebulae, as well
as the total velocity $T$ and the velocity dispersions $\sigma_U$, $\sigma_V$, $\sigma_W$, and
$\sigma_T$. According to these equations we derive the following parameters: $C = f(d)$,
$X = f(C, \mu_\alpha, \mu_\delta, \alpha, \delta, V_r)$,
$Y = f(C, \mu_\alpha, \mu_\delta, \alpha, \delta, V_r)$, and
$Z = f(C, \mu_\delta, \delta, V_r)$, from which the velocities can be written as
$U = f(X, Y, Z)$, $V = f(X, Y, Z)$, $W = f(X, Y, Z)$, and $T = f(X, Y, Z)$,
so that the dispersions are given by
\begin{equation}
\sigma_i = \sqrt{(V_i-\bar V_i)^2}
\end{equation}
\noindent
where $V_i$ stands for the velocities $U, V, W, T$. Then, we have again used the detailed
correlations between the velocity dispersions and the ages as given by the Geneva-Copenhagen
survey (Holmberg et al. \citeyear{holmberg2009}), adopting the same coefficients given in
Table~2. We have used the same distance scales (Maciel \citeyear{m1984} and Stanghellini et
al. \citeyear{ssv2008}), so that we have again 8 different age distributions, corresponding
to the timescales $t_9$ to $t_{16}$, as described in Table~4.
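In the limiting case $\mu_\alpha\simeq\mu_\delta\simeq 0$ the space velocity is purely radial, so the $(U,V,W)$ components reduce to the projection of $V_r$ along the line of sight. A simplified sketch in Galactic coordinates (rather than the equatorial form of Boesgaard \& Tripicco; the sign convention, with $U$ toward the Galactic centre, $V$ along rotation, and $W$ toward the north Galactic pole, is an assumption here, since conventions differ in the literature):

```python
import math

def uvw_radial(vr, l_deg, b_deg):
    """(U, V, W) in km/s for a purely radial space velocity V_r,
    i.e. the mu_alpha ~ mu_delta ~ 0 limit discussed in the text."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    U = vr * math.cos(b) * math.cos(l)
    V = vr * math.cos(b) * math.sin(l)
    W = vr * math.sin(b)
    return U, V, W

# A PN seen toward l = 0, b = 0 receding at 50 km/s has (U, V, W) = (50, 0, 0)
print(uvw_radial(50.0, 0.0, 0.0))
```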
\begin{table*}
\small
\caption[]{Parameters for Method 2.}
\begin{changemargin}{3.5cm}{3.5cm}
\label{table4}
\begin{flushleft}
\begin{tabular}{ccc}
\noalign{\smallskip}
\hline\noalign{\smallskip}
Distance & Dispersion & Age \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
Maciel & $\sigma_U$ & $t_9$ \\
Maciel & $\sigma_V$ & $t_{10}$ \\
Maciel & $\sigma_W$ & $t_{11}$ \\
Maciel & $\sigma_T$ & $t_{12}$ \\
Stanghellini & $\sigma_U$ & $t_{13}$ \\
Stanghellini & $\sigma_V$ & $t_{14}$ \\
Stanghellini & $\sigma_W$ & $t_{15}$ \\
Stanghellini & $\sigma_T$ & $t_{16}$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{changemargin}
\end{table*}
In practice, we have considered several additional cases, in order to better investigate
the hypothesis of $V_t \simeq V_r$. Assuming that these velocities are of the same magnitude,
but allowing for the possibility of different signs, we have as a result several possibilities
for the proper motions $\mu_\alpha$ and $\mu_\delta$, all of which are consistent with either
$V_t \simeq V_r$ or $\vert V_t\vert \simeq \vert V_r\vert$. It turns out that these
possibilities produce very similar age distributions, which will be discussed in
Section~4. Therefore, we will present only the distributions of the ages $t_9$ to $t_{16}$,
as defined in Table~4, for the cases where $\mu_\alpha \simeq \mu_\delta \simeq 0$.
An interesting alternative to overcome the lack of proper motion and
tangential velocity measurements would be to apply the singular value decomposition
(SVD) technique, as used by Branham (\citeyear{branham}) to solve the inverse
problem, that is, obtaining the space velocities from available proper motions. However,
in view of the similarity of the results for different assumptions regarding
the tangential velocities, it is unlikely that this technique would produce
results very different from those presented here.
\section{The samples}
\label{section3}
As mentioned in the Introduction, we have considered two samples of Milky Way PN. In order to make
comparisons with our previous work, we have first considered the same sample used by Maciel et al.
(\citeyear{mcu2003}, \citeyear{mlc2005}, \citeyear{mlc2006}), which we will call Sample~1. This
sample contains 234 well-observed nebulae located in the solar neighbourhood and in the
disk, for which all data were obtained with the highest accuracy. Their Galactocentric distances
are in the range $4 < R {\rm (kpc)} < 14$, and most (69\%) are located in the solar neighbourhood,
with distances $d < 3\,$kpc.
The second sample considered in this work, called Sample~2, includes all the nebulae for which
accurate radial velocities are available in the catalogue by Durand et al. (\citeyear{durand}),
comprising 867 objects. This is a more complete sample, so the derived results are expected
to extend to the observed population of PN in the Galaxy. In both samples, the number of
nebulae used depends on the availability of the statistical distances. The actual number of objects
from the Maciel (\citeyear{m1984}) and Stanghellini et al. (\citeyear{ssv2008}) distance
scales are 195 and 170 for Sample 1 and 493 and 403 for Sample~2, respectively. We have then applied
the approximation given by Eq. (4) for both samples, with the coefficients shown in Table~2,
considering only the objects for which ages in the interval $0 < t ({\rm Gyr}) < 14$ could be
obtained.
\section{Results and discussion}
\label{section4}
The main results for the age distribution of the CSPN are shown in Figures 1--4, where we have
used the age parameter definitions given in Tables~3 and 4 for Methods 1 and 2, respectively.
Figures~1 and 2 refer to Sample~1, while Figures~3 and 4 refer to Sample~2. It can be seen
that the age distributions obtained by both methods are similar, in the sense that most
objects have ages under 5 Gyr, with a strong peak at ages typically between 1 and 3 Gyr.
The histograms of Figures 3--4 are summarized in Table~5, where the fractions of stars
obtained by Method~1 (ages $t_1$ to $t_8$) and Method~2 (ages $t_9$ to $t_{16}$) are shown
for three age bins, namely $0 - 3\,$Gyr, $3 - 6\,$Gyr, and $t > 6\,$Gyr.
\begin{table*}
\small
\caption[]{Fraction of stars at three age intervals.}
\begin{changemargin}{2cm}{2cm}
\label{table5}
\begin{flushleft}
\begin{tabular}{ccccc}
\noalign{\smallskip}
\hline\noalign{\smallskip}
& $\Delta t$ (Gyr) & $0 - 3$ & $3 - 6$ & $> 6$ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
Method 1 & $t_1$ & $0.57$ & $0.13$ & $0.30$ \\
& $t_2$ & $0.62$ & $0.18$ & $0.20$ \\
& $t_3$ & $0.57$ & $0.19$ & $0.24$ \\
& $t_4$ & $0.67$ & $0.18$ & $0.16$ \\
& $t_5$ & $0.51$ & $0.13$ & $0.36$ \\
& $t_6$ & $0.71$ & $0.17$ & $0.12$ \\
& $t_7$ & $0.61$ & $0.15$ & $0.24$ \\
& $t_8$ & $0.71$ & $0.11$ & $0.18$ \\
& & & & \\
Method 2 & $t_9$ & $0.76$ & $0.12$ & $0.12$ \\
& $t_{10}$ & $0.79$ & $0.10$ & $0.11$ \\
& $t_{11}$ & $0.92$ & $0.04$ & $0.04$ \\
& $t_{12}$ & $0.77$ & $0.18$ & $0.05$ \\
& $t_{13}$ & $0.78$ & $0.10$ & $0.12$ \\
& $t_{14}$ & $0.78$ & $0.11$ & $0.11$ \\
& $t_{15}$ & $0.93$ & $0.03$ & $0.04$ \\
& $t_{16}$ & $0.76$ & $0.18$ & $0.06$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{changemargin}
\end{table*}
The similarity
of the results of both methods is remarkable, especially considering that Method~2
is probably more accurate than Method~1. Method~2 consists of straightforward calculations
of the velocities and velocity dispersions followed by an application of relatively
accurate correlations involving the kinematics and ages of the objects considered. On
the other hand, Method~1 is based on the assumption that the differences between the observed
and predicted rotation velocities are essentially due to age effects. However, other processes
may be important, such as deviations from the circular rotation, which is particularly important
for nearby objects. According to Table~5, in all cases the vast majority of CSPN have ages
under 3 Gyr. For Method~1 the total fraction of objects with $t \leq 3\,$ Gyr is $50 - 70\%$,
while for Method~2 this fraction is somewhat higher, $70 - 90\%$. It is unlikely that this is a
result from biased samples, as the results for the larger Sample~2 are essentially the same as
in the smaller Sample~1. It should be pointed out that the latter, albeit smaller, includes
only well studied nebulae, for which all individual parameters (distances, velocities, abundances)
are better determined.
Also, there are no significant differences in the results using the different
velocity components $U$, $V$, $W$, and $T$. For Method~1, the distributions
using the $V$ velocity component are essentially the same as those using the total
velocity, for both distance scales and samples. For Method~2, the distributions
are slightly more concentrated in the first few age bins for the $W$ component, compared
with the distributions for the $U$ and $V$ components and the total velocity, again
for both distance scales and samples. Since the $W$ component is more clearly associated
with the disk heating, essentially caused by age effects, the corresponding distributions
are probably more accurate.
Similar remarks can be made regarding the adopted values for the proper motions. As
mentioned at the end of Section~2, the results shown here assume that
$\mu_\alpha \simeq \mu_\delta \simeq 0$. Adopting nonzero values for these
quantities ($\mu_\alpha \simeq \mu_\delta \neq 0$), either the $V$ or $W$ component
distributions become slightly less concentrated at the first few age bins, but most
objects still have ages under about 4~Gyr. Again, the application of
the SVD technique could be useful to confirm these results.
The uncertainties in the distances of the Milky Way PN are difficult to estimate, but
the procedure adopted here ensures that the obtained age distributions are not
particularly affected by the individual distances of the objects in the samples.
As mentioned in Section~2, we have adopted two very different statistical scales,
and the derived age distributions are essentially the same in both cases. The
individual distances may depend on the particular scale, but the results shown
in Figures~1--4 and in Table~5 do not depend on the choice of the distance scale.
This can be seen by comparing the results for the timescales $t_1 - t_4$ with those
for $t_5 - t_8$, or the results for $t_9 - t_{12}$ with those for $t_{13} - t_{16}$.
The uncertainties in the radial velocities also do not seem to have an important
effect on the age distributions. In the catalogue by Durand et al. (\citeyear{durand}),
most objects ($\sim 90\%$) have uncertainties smaller than 20 km/s, and many
objects have much lower uncertainties. Concerning Method~1, from Maciel \& Lago
(\citeyear{ml2005}), the average rms deviation in the rotation velocity is
about 50 km/s for PN, which can be compared with the values of about
20 km/s for HII regions (see also Clemens \citeyear{clemens} and Maciel \& Dutra
\citeyear{md1992}).
Probably the main uncertainty of the age distributions is due to the calibration
between the stellar ages and the velocity dispersions, given by Eq. (4), which affects
both Methods~1 and 2. From the Geneva-Copenhagen Survey, this relation has a
dispersion of about 20 km/s on average, which corresponds roughly to an age
uncertainty of about 25\%, amounting to less than 1.2 Gyr for the objects of
Figures~1--4. Therefore, the uncertainties of the present method are comparable
to, and probably smaller than, those of the methods based on age-metallicity
relations considered by Maciel et al. (\citeyear{mci2010}).
The results for Sample~2 are not essentially different from those of Sample~1, so that
a direct comparison can be made with the results by Maciel et al. (\citeyear{mci2010}).
The results of both investigations are similar, even though the present methods are
completely independent of the metallicity-based methods used by Maciel et al. (\citeyear{mci2010}).
The main difference is that the kinematic methods used in the present investigation suggest
somewhat lower ages for the CSPN in our samples. In this respect, these results fit
nicely with the probability distribution for the progenitors of the CSPN according to
Maciel et al. (\citeyear{mci2010}, cf. figure~7, dashed line). In this case the well
known relation between the main sequence mass and the stellar ages by Bahcall \& Piran
(\citeyear{bahcall}) was adopted, taking $t = 10\,$Gyr for $1 M_\odot$ stars on the
main sequence. Taking into account the uncertainties of the methods, which are
typically in the range $1-2\,$ Gyr, this case was considered as the most realistic,
so that it is reassuring that the kinematic methods produce similar results.
\begin{figure}
\centering
\includegraphics[angle=-90, width=13.5cm]{m1s1t1t2.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s1t3t4.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s1t5t6.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s1t7t8.eps}
\caption{Age distribution of CSPN, Method 1, Sample 1.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90, width=13.5cm]{m2s1t9t12.eps}
\includegraphics[angle=-90, width=13.5cm]{m2s1t13t16.eps}
\caption{Age distribution of CSPN, Method 2, Sample 1.}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90, width=13.5cm]{m1s2t1t2.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s2t3t4.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s2t5t6.eps}
\vskip 0.6 true cm
\includegraphics[angle=-90, width=13.5cm]{m1s2t7t8.eps}
\caption{Age distribution of CSPN, Method 1, Sample 2.}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90, width=13.5cm]{m2s2t9t12.eps}
\includegraphics[angle=-90, width=13.5cm]{m2s2t13t16.eps}
\caption{Age distribution of CSPN, Method 2, Sample 2.}
\label{fig4}
\end{figure}
\bigskip
{\it Acknowledgements. We thank Dr. R. Branham, Jr., for some interesting comments on
an earlier version of this paper. This work was partly supported by FAPESP and CNPq.}
\section{Introduction}\label{sec:intro}
Spatial capture-recapture (SCR) models are now widely used to estimate demographic parameters, particularly density. SCR data inherently vary across space because animal movements are not completely random and an individual is more likely to be detected close to its centre of activity (`activity centre', AC). SCR models account for and, in fact, exploit such spatial heterogeneity in detection by modelling the detection probability as a decreasing function of the distance between a detector (e.g., an observer, a trap, or a search location) and a latent AC \citep{efford2004density, borchers2008spatially}.
However, the relative distance between a detector and an AC may not be the only cause of variation in detection probability. Spatially variable and autocorrelated detection probability can occur due to various other factors, such as local differences in how animals use space and how sampling is performed \citep{moqanaki2021consequences,stevenson2021spatial}.
Known sources of variation in detection probability are readily modelled in SCR using covariates, for example, through proxies or direct measures of sampling effort \citep{efford2013varying}, resource selection data obtained from telemetry studies \citep{royle2013integrating}, or information about landscape connectivity \citep{sutherland2015modelling}. However, not all sources of variation are known and fully observed. For example, local site-specific characteristics affecting detector exposure, or the effect of local atmospheric conditions on the genotyping success rate of non-invasively collected DNA samples, may remain unaccounted for during SCR analyses \citep{moqanaki2021consequences, kendall2019bear, efford2013varying}. Furthermore, large-scale wildlife monitoring programs sometimes include both structured and unstructured sampling data. The latter may be data collected by the general public to increase the extent and/or intensity of sampling \citep{thompson2012framework, bischof2020estimating}. Unstructured and opportunistic sampling data is likely to be associated with unknown spatial variation in detection probability. Unmodelled spatial variation in detection probability, particularly in the presence of high spatial autocorrelation, can lead to biased and overdispersed population size estimates in SCR analyses \citep{moqanaki2021consequences}. A worst-case scenario is pockets or clusters of detectors where, unbeknownst to the investigator, detection probability is null.
Adequately accounting for spatial heterogeneity and autocorrelation in detection probability is essential for obtaining reliable statistical inference in SCR analyses \citep{moqanaki2021consequences,howe2022estimating}. In the absence of known covariates, the effect of detector-specific variation in detection probability can be modelled by using a function that explains the true pattern of heterogeneity. This function is always unknown and we approximate it using random effects, i.e., by extending SCR with generalized linear mixed models (GLMM). Bayesian implementation of SCR-GLMMs allows modelling and estimation of heterogeneous detection probability surfaces in SCR models \citep{hooten2003predicting}. Spatially-explicit estimates of detection probability can in turn reveal problematic areas (e.g., regions with very low detection probability), which are important to wildlife monitoring and conservation.
Using simulations, we describe and test three extensions of Bayesian SCR-GLMMs that aim to account for latent spatial heterogeneity in detection probability via the use of random effects: (1) a simple GLMM extension of the basic single-season SCR model by assigning independent random effects (RE) to detector-specific baseline detection probabilities - with the aim to account for unknown spatial variation in detection probability among detectors; (2) a GLMM extension of the basic single-season SCR model incorporating spatial autocorrelation between detectors by means of spatially autocorrelated random effects (SARE), where covariance is modelled as a function of inter-detector distance, thus implicitly defining an ordered neighbourhood structure; (3) a two-group finite mixture (FM) model to identify latent detectability classes of each detector.
We assessed and compared these three structurally different models in terms of (i) their ability to produce unbiased abundance estimates, (ii) their capacity to realistically predict detection probability surfaces, (iii) their model complexity and (iv) their computational overhead. Finally, we considered the role that model comparison could play in selecting the `best' SCR model under different conditions.
\section{Methods}\label{sec:methods}
We first describe a basic single-season SCR model, where we assume a homogeneous baseline detection probability across all the detectors. Following that, we describe three extensions of the SCR model, namely: (i) an SCR-GLMM with independent random effects, (ii) an SCR-GLMM with spatially autocorrelated random effects, and (iii) an SCR-GLMM with a two-group finite mixture for detector-specific baseline detection probabilities. Lastly, as a reference point for making comparisons, we outline a special case of the model in (ii), where the known true cause of the variation in detection probability is modelled using fixed effects.
\subsection{Model 1: Basic single-season SCR model (SCR) }\label{sec:snapshotSCR}
A single-season SCR model typically consists of two submodels: a submodel for the spatial distribution of individual ACs within a given habitat $\ensuremath{\mathcal{V}} \subset \mathbb{R}^2 $, and another submodel for the individual and detector-specific observations, conditional on the location of ACs.
\subsubsection{The ecological submodel}\label{sec:binomialPP}
We considered $N$ individuals to reside in $\ensuremath{\mathcal{V}}$, each of whom was assumed to move randomly around its AC (with coordinates $\ensuremath{\textbf{\emph{s}}}_i$). Following a homogeneous point process, each individual AC was assumed to be uniformly distributed across the habitat $\ensuremath{\mathcal{V}}$:
\begin{align}\label{eq:binomial.pt.proc}
\ensuremath{\textbf{\emph{s}}}_i \sim \text{Uniform} (\ensuremath{\mathcal{V}}),\, i = 1,2, \dots,N.
\end{align}
In our analysis, the location $\ensuremath{\textbf{\emph{s}}}_i$ of individual ACs and the number of these ACs ($N$) are both unknown. We used a data augmentation approach to model $N$ \citep{royle2007analysis}, with a large integer $M$ as an upper bound for $N$. We introduced a vector of $M$ latent binary variables $\ensuremath{\textbf{\emph{z}}} = (z_1, z_2, \dots, z_M)'$ such that $z_i = 1$ if individual $i$ is a member of the population and $z_i=0$ otherwise. Then we assumed that each $z_i$ follows a Bernoulli distribution with inclusion probability $\psi$, the probability that an arbitrary individual from the augmented population of $M$ individuals is a member of the population under study:
\begin{align}\label{eq:data.augmentation}
\ensuremath{\textbf{\emph{z}}}_i \sim \text{Bernoulli} (\psi).
\end{align}
Consequently, population size $N = \sum_{i=1}^M z_i$ is a derived parameter, following a binomial distribution with parameters $M$ and $\psi$.
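The data-augmentation construction can be sketched in a few lines of Python. Here $\psi$ and $M$ are fixed at arbitrary illustrative values; in the model, $\psi$ receives a Uniform(0,1) prior and is estimated:

```python
# Data augmentation sketch: each of M augmented individuals is a population
# member with probability psi; population size N is then a derived quantity.
# psi is fixed here for illustration only.
import random

random.seed(1)
M, psi = 500, 0.6
z = [1 if random.random() < psi else 0 for _ in range(M)]
N = sum(z)   # derived population size, N ~ Binomial(M, psi)
```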
\subsubsection{The observation submodel}\label{sec:obs.submodel}
We considered one sampling occasion and a set of $J$ detectors located in $\ensuremath{\mathcal{V}}$. The capture history of the $i$-th individual is denoted as $(y_{i1}, y_{i2}, \dots, y_{iJ})$, where each $y_{ij}$ is binary, i.e., $y_{ij}$ is 1 if individual $i$ is detected at detector $j$ and 0 otherwise.
The observed capture-recapture data set, denoted by $\ensuremath{\textbf{\emph{Y}}}_{\text{obs}}$, is of dimension $n\times J$, where $n$ is the number of detected individuals during the SCR survey. We augmented this data set $\ensuremath{\textbf{\emph{Y}}}_{\text{obs}}$ with $M-n$ ``all-zero'' capture histories $\mathbf{0}_J$ following the data augmentation approach. The zero-augmented data set is denoted by $\ensuremath{\textbf{\emph{Y}}}$ and is of dimension $M \times J$. We assumed a Bernoulli model for each $y_{ij}$, conditional on $z_i$:
\begin{align}\label{eq:bernoulliSCRmodel}
y_{ij} \sim \text{Bernoulli}(p_{ij}z_{i}),
\end{align}
where $p_{ij}$ denotes the detection probability of the $i$-th individual at the $j$-th detector.
The detection probability $p_{ij}$ is a decreasing function of distance, modelled following a half-normal form \citep{efford2004density}:
\begin{align}\label{eq:halfnormal}
p_{ij} = p_0 \, \exp\Big{(}-\frac{d_{ij}^2}{2\sigma^2}\Big{)}
\end{align}
where $d_{ij} = d(\ensuremath{\textbf{\emph{s}}}_i, \ensuremath{\textbf{\emph{x}}}_j) = ||\ensuremath{\textbf{\emph{s}}}_i - \ensuremath{\textbf{\emph{x}}}_j||$ is the Euclidean distance between the detector location $\ensuremath{\textbf{\emph{x}}}_j$ and individual AC $\ensuremath{\textbf{\emph{s}}}_i$, $p_0$ is the baseline detection probability, and the scale parameter $\sigma$ quantifies the rate of decline in detection probability $p_{ij}$ with distance $d_{ij}$. The full SCR model can thus be written as:
{\footnotesize
\begin{align}\label{model:SCR}
& \psi \sim \text{Uniform} (0,1) \nonumber\\[-0.50em]
& \sigma \sim \text{Uniform} (0,50) \nonumber\\[-0.50em]
& \mbox{logit}(p_0) \sim \ensuremath{\mathcal{N}}(0, 2) \nonumber\\[-0.50em]
& \hspace{-2em} i = 1,2,\dots,M: \nonumber\\[-0.50em]
& \ensuremath{\textbf{\emph{s}}}_i \sim \text{Uniform} (\ensuremath{\mathcal{V}}) \nonumber\\[-0.50em]
& z_i \sim \text{Bernoulli} (\psi) \nonumber\\[-0.50em]
& p_{ij} = p_{0} * \exp(- d_{ij}^2 / (2\sigma^2)) \text{ for } j = 1,2,\dots,J \nonumber\\[-0.50em]
& y_{ij} \sim \text{Bernoulli} (p_{ij} * z_i) \text{ for } j = 1,2,\dots,J
\end{align}
}
By modelling detection probability $p_{ij}$ in terms of individual ACs and fitting a decreasing detection function (as in (\ref{eq:halfnormal})) using the distance between ACs and detector location, the SCR model accounts for the spatial autocorrelation within individual capture histories. However, under this model, detection probabilities $p_{ij}$ and $p_{ij'}$ are equal at detectors $j$ and $j'$ whenever the two detectors are located at the same distance from the AC $\ensuremath{\textbf{\emph{s}}}_i$ regardless of other potential sources of variation between the two detectors. In other words, this model does not consider the additional variation in detection probability that may be present at different detectors due to their locations in the landscape and other heterogeneous characteristics.
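The half-normal detection function above is straightforward to evaluate directly; a minimal Python sketch, with illustrative values of $p_0$ and $\sigma$:

```python
import math

def half_normal_p(p0, d, sigma):
    """Half-normal detection function: p = p0 * exp(-d^2 / (2 * sigma^2))."""
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

# Detection probability equals p0 when the detector sits on the AC (d = 0)
# and decays with distance; at d = 3*sigma it is about 1% of p0.
p_near = half_normal_p(0.3, 0.0, 1.5)
p_far = half_normal_p(0.3, 4.5, 1.5)
```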
\subsection{Model 2: Independent random effects SCR model (RE)}\label{sec:RAND}
To account for spatial heterogeneity in detection probability, causing detector-specific variation in detection probabilities, we used a simple GLMM extension of the basic single-season SCR model (Model 1). Here, we assigned a logistic-regression type model to baseline detection probability for each detector:
\begin{align}\label{model:RE}
& \mbox{logit}(p_{0j}) = \mu + W_j,\, j = 1,2,\dots,J
\end{align}
where $\mu$ denotes the intercept and $W_j$ denotes the random effect for the $j$-th detector. The detection probability $p_{ij}$ for individual $i$ at detector $j$ is expressed as
\begin{align}\label{eq:halfnormal.RE}
p_{ij} = p_{0j} \, \exp\Big{(}-\frac{d_{ij}^2}{2\sigma^2}\Big{)}.
\end{align}
We assumed a $\ensuremath{\mathcal{N}}(0, \sigma_w^2)$ prior for each $W_j$, $j = 1,2,\dots,J$ and a $\ensuremath{\mathcal{N}}(0, 2^2)$ prior for $\mu$. The variance parameter $\sigma_w^2$ can be given a weakly informative prior. We referred to this model as the independent random effects SCR model (RE). Note that the RE model does not specifically account for spatial autocorrelation in detection probability across detectors.
\subsection{Model 3: Spatially autocorrelated random effects SCR model (SARE)}\label{sec:SARE}
We extended the basic single-season SCR model (Model 1) to account for spatial autocorrelation among detectors. In particular, we developed an SCR model for situations where detectors in close proximity are more likely to have similar detection probability than more distant detectors. We modelled this spatial autocorrelation by introducing an autocorrelated random effect $\ensuremath{\textbf{\emph{W}}} = (W_1, W_2, \dots, W_J)'$ of length $J$. We assumed $\ensuremath{\textbf{\emph{W}}}$ to follow a multivariate normal distribution with mean $\mathbf{0}_J$ and covariance matrix $\Gamma = ((\gamma_{jj'}))$, which controls the spatial dependence between detectors. We modelled each element $\gamma_{jj'}$ of this covariance matrix as a decreasing function of the distance between detectors $j$ and $j'$ following \cite{moqanaki2021consequences},
\begin{align}\label{eq:covariance.function}
\gamma_{jj'} = \exp (- \phi \, \delta_{jj'} )
\end{align}
where $\delta_{jj'} = d(\ensuremath{\textbf{\emph{x}}}_j, \ensuremath{\textbf{\emph{x}}}_{j'}) = ||\ensuremath{\textbf{\emph{x}}}_j - \ensuremath{\textbf{\emph{x}}}_{j'}||$ is the Euclidean distance between the detector locations $\ensuremath{\textbf{\emph{x}}}_j$ and $\ensuremath{\textbf{\emph{x}}}_{j'}$. This covariance function implicitly defines an ordered neighbourhood for each detector and $\phi$ controls the rate of distance-dependent decay of spatial autocorrelation between the detectors. In particular, detectors are highly autocorrelated if $\phi$ is small (e.g., 0.05), and autocorrelation decreases as $\phi$ increases (Figures~\ref{fig:sample.autocorrelated.det} and \ref{fig:predictedp0Surfaces.p0.3}). Similar to the RE model, the detection probability $p_{ij}$ for individual $i$ at detector $j$ is then expressed as
\begin{align}\label{eq:halfnormal.SARE}
p_{ij} = p_{0j} \, \exp\Big{(}-\frac{d_{ij}^2}{2\sigma^2}\Big{)},
\end{align}
where
\begin{align}\label{eq:baselineDetProb.SARE}
\mbox{logit}(p_{0j}) = \mu + W_j,\, j = 1,2,\dots,J.
\end{align}
Here, we assigned a $\ensuremath{\mathcal{N}}(0, 2^2)$ prior for $\mu$ and a $\ensuremath{\mathcal{N}}(0, 5^2)$ prior for log-transformed $\phi$.
We referred to this model as spatially autocorrelated random effect SCR model (SARE).
When $\phi = 0$, each component $\gamma_{jj'}=1$ (for any $j$ and $j'$), and the random effect $\ensuremath{\textbf{\emph{W}}}$ becomes a degenerate process, implying exact dependence between the detectors. Hence, the value of each random effect $W_j$ is identical at any location of the detector grid. This is equivalent to the basic single-season SCR model (Model 1), where we use a homogeneous baseline detection probability $p_0$ for each detector in the detector grid. Conversely, when $\phi \rightarrow \infty$, the covariance matrix $\Gamma$ reduces to an identity matrix, and consequently the SARE model reduces to the GLMM with independent random effects (the RE model).
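The exponential covariance function and the role of $\phi$ can be illustrated with a toy detector layout; locations and $\phi$ values below are arbitrary:

```python
import math

def exp_cov(locs, phi):
    """Covariance matrix Gamma with entries exp(-phi * ||x_j - x_j'||)."""
    return [[math.exp(-phi * math.dist(a, b)) for b in locs] for a in locs]

locs = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]   # toy detector locations
G_high = exp_cov(locs, 0.05)   # small phi: strong autocorrelation, entries near 1
G_weak = exp_cov(locs, 5.0)    # large phi: Gamma approaches the identity matrix
```

Comparing the off-diagonal entries of `G_high` and `G_weak` shows the distance-dependent decay sharpening as $\phi$ grows.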
\subsection{Model 4: Two-group finite mixture SCR model (FM)}\label{sec:FM}
Variable sampling intensity could be associated with ordered classes of unknown variation in detection probability across the landscape.
For our study, we proposed using a two-group finite mixture SCR model (FM) to model heterogeneity in detection probability between detectors \citep{cubaynes2010importance, turek2021bayesian}. Here, we defined two groups of heterogeneity, viz. groups 1 and 2, assuming the first group to have lower detection probability than the second. We introduced two detection probability parameters $\eta_1$ and $\eta_2$, where $\eta_b$ is the detection probability of the $b$-th subgroup, $b = 1,2$. The constraint $\eta_1 \leq \eta_2$ is imposed on these parameters to ensure identifiability. Further, we defined binary indicator variables $u_j$ ($j=1,2,\dots,J$) to indicate the subgroup that a detector belongs to:
\begin{align}\label{model:FM}
p_{0j} = (1-u_j)\, \eta_1 + u_j \, \eta_2, \, j = 1,2,\dots,J.
\end{align}
MCMC computation allows the binary classification in our two-group mixture model to implicitly account for the group membership probabilities $\ensuremath{\mbox{Pr}}(u_j=1)$ and, consequently, allows estimation of each $p_{0j}$ via (\ref{model:FM}) while accounting for the uncertainty in the group membership of each detector. This provides a flexible approach to estimating spatial heterogeneity in the detection probability surface. We assigned a Bernoulli prior to each $u_j$ with probability $\pi$ of being assigned to the second group. Further, we assumed weakly informative bounded uniform priors for the probability parameters $\eta_1, \eta_2$ and $\pi$.
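The mixture construction for the detector baselines can be sketched as follows; the values of $\eta_1$, $\eta_2$, $\pi$, and the number of detectors are arbitrary illustrations, not estimates:

```python
import random

random.seed(2)
eta1, eta2, pi = 0.05, 0.40, 0.5   # eta1 <= eta2 for identifiability
J = 8
u = [1 if random.random() < pi else 0 for _ in range(J)]   # latent group labels
p0 = [(1 - uj) * eta1 + uj * eta2 for uj in u]             # detector baselines
```

In the Bayesian model the `u` labels are latent and sampled at each MCMC iteration, so each $p_{0j}$ inherits the uncertainty in group membership.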
\subsection{Model 5: SCR model with known true effects (FE)} \label{sec:FE}
For the sake of assessing and comparing the performance of the above models, we considered a GLM extension of the basic single-season SCR model (Model 1) using detector-specific effects (the true source of variation) to model baseline detection probability. This can be executed by supplying the known simulated effect $\ensuremath{\textbf{\emph{W}}}$ as an observed ``virtual'' covariate and then modelling the baseline detection probability: $\mbox{logit}(p_{0j}) = \mu + W_j$. Consequently, the detection probability $p_{ij}$ is expressed as: $p_{ij} = p_{0j} \, \exp(-d_{ij}^2/(2\sigma^2))$.
The rest of the model remains the same as Model 1 and we referred to this model as FE.
\section{Simulation study}\label{sec:simstudy}
For simulations, we used a $32 \times 32$ detector array (number of traps $J = 1024$) with 1 distance unit (du) of minimum inter-detector spacing. The detector array is centred on a $41 \times 41$ du habitat, surrounded by a 5-du habitat buffer (Figure~\ref{fig:sample.autocorrelated.det}). We used a $\sigma$ value of 1.5 for all the simulations, so that the buffer width is larger than $3\sigma$, resulting in negligible detection probability for individuals with ACs near the habitat boundary \citep{efford2011estimation}. We simulated SCR data sets for $N=300$ individuals, leading to a population-level home range overlap index $k=\sigma \sqrt{\text{Density}} = 0.63$ \citep{efford2016density}. We set the size of the augmented population $M$ to 500.
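The simulated design can be reproduced with a short Python sketch; the habitat origin is placed at $(0,0)$ for convenience, which is an arbitrary choice:

```python
# 32 x 32 detector array with 1 du spacing, centred in a 41 x 41 du habitat,
# leaving a 5 du buffer on every side (the 32-detector row spans 31 du).
buffer = 5.0
side = 32
dets = [(buffer + i, buffer + j) for i in range(side) for j in range(side)]
J = len(dets)                      # 1024 detectors
extent = 2 * buffer + (side - 1)   # 41 du habitat width

# Population-level home range overlap index k = sigma * sqrt(Density)
sigma, N = 1.5, 300
k = sigma * (N / extent**2) ** 0.5   # approximately 0.63
```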
\subsection{Simulation scenarios}\label{sec:simscenarios}
For each simulation, we used the SARE model (Model 3, Section~\ref{sec:SARE}) to generate SCR data with spatially autocorrelated detection probability between detectors. We created simulation scenarios by varying spatial autocorrelation rate parameter $\phi$ with high ($\phi = 0.05$) and intermediate ($\phi = 1$) spatial autocorrelation to simulate spatially varying random effect $\ensuremath{\textbf{\emph{W}}} = (W_1, W_2, \dots,W_J)'$ (Figure~\ref{fig:sample.autocorrelated.det}).
\subsubsection{Continuous detector-specific variation in detection probability}\label{sec:simscenarios.cont}
Detection probability may exhibit continuous spatial variation if it is linked with underlying habitat characteristics, such as elevation, forest cover, or distance from roads, that influence animal behavior or detection effort and efficiency \citep{moqanaki2021consequences}. To simplify the interpretation of $\mu$ in SARE (Model 3), we transformed it into a new variable $\eta$ via the link $\mu = \mbox{logit}(\eta)$, so that $\eta$ can be viewed as the average baseline detection probability. In simulations, we used three values of $\eta$ to generate low ($\eta = 0.1$), intermediate ($\eta = 0.3$), and high ($\eta = 0.6$) baseline detection probability for each detector, subject to spatial autocorrelation infused by $\ensuremath{\textbf{\emph{W}}}$ (Figure~\ref{fig:sample.autocorrelated.det}, row 1).
\subsubsection{Categorical detector-specific variation in detection probability}\label{sec:simscenarios.cat}
Discrete differences in sampling or environmental characteristics can lead to categorical classes of variation in detection probability between detectors. We considered an extreme case, where 50\% of the detectors would remain inactive, and the remaining detectors would have a constant detection probability \citep{moqanaki2021consequences}. Thus, a portion of the study area would remain entirely unsampled. For simulating such scenarios, we transformed each $p_{0j}$ into a discrete variable taking only one of the two values 0 and $\mbox{logit}(\eta)$ to create two classes of detector-specific baseline detection probability using (\ref{eq:baselineDetProb.SARE}):
\begin{align}
p_{0j} =
\begin{cases}
0, & \text{if } W_j \leq q_{50}, \\
\mbox{logit}(\eta), & \text{otherwise,}
\end{cases}
\nonumber
\end{align}
where $q_{50}$ is the 50\% quantile (median) of the effects $W_j$.
We used two values of $\eta$ to generate low ($\eta = 0.1$) and intermediate ($\eta = 0.3$) levels of baseline detection probability for each detector (Figure~\ref{fig:sample.autocorrelated.det}, row 2).
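The median thresholding above can be sketched as follows (a toy illustration; for simplicity the active class is assigned the probability $\eta$ directly, and the stand-in random effects are drawn rather than taken from a SARE simulation):

```python
import numpy as np

def categorical_p0(W, eta=0.3):
    """Split detectors into two classes at the median of the random
    effects W: inactive detectors (p0 = 0) versus a constant baseline
    detection probability (illustrative sketch)."""
    q50 = np.quantile(W, 0.5)            # 50% quantile of the W_j's
    return np.where(W <= q50, 0.0, eta)

W = np.random.default_rng(2).normal(size=1024)  # stand-in for simulated W
p0 = categorical_p0(W, eta=0.3)                 # half the detectors inactive
```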
In summary, we divided all the simulation scenarios into two broad setups, continuous and categorical, with respect to detector-specific variation in detection probability. In the continuous setup (`CON'), we generated six simulation scenarios by combining two levels of autocorrelation $\phi$ and three levels of detection $\eta$. In the categorical setup (`CAT'), we generated four simulation scenarios by combining two levels of $\phi$ and two levels of $\eta$. In total, this yielded 10 simulation scenarios. For each scenario, we generated 100 independent SCR data sets, resulting in 1000 simulated SCR data sets (Table~\ref{table:ndets}).
\subsection{Curse of dimensionality}\label{sec:scaling}
In many SCR studies, the majority of detectors are associated with no or very few detections \citep{gerber2015spatial,tourani2022review}. In such situations, fitting complex models such as SARE, FM and RE, which involve a large number of parameters and latent variables, may lead to poor Markov chain Monte Carlo (MCMC) convergence, poor mixing, and over-fitting. This phenomenon, known as the curse of dimensionality, is expected to occur when models are over-parameterised \citep{wikle2010general}.
We mitigated this issue by reducing the dimension of the random effects. To do this, we aggregated the random effects used to model baseline detection probability, such that a single random effect value is assigned to a cluster of neighboring detectors. Note that we did not aggregate the detections themselves. For instance, in SARE, if each cluster contains $n_c$ detectors, then all detectors belonging to the $j$-th cluster share the same random effect value $W_j$, $j = 1,2, \dots, J/n_c$ ($J$ being the total number of detectors).
Here, we aggregated the random effects by a factor of 4 in each dimension (squares of $4\times4$ detectors = one cluster). When aggregated, the $32\times32$ detector grid (i.e., 1024 detectors, Figure~\ref{fig:sample.autocorrelated.det}) forms a grid of $8\times8$ clusters.
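The $4\times4$ aggregation can be expressed as a simple index mapping from detectors to clusters (a sketch; names are illustrative):

```python
import numpy as np

def cluster_index(j_side=32, n_side=4):
    """Assign each detector on a j_side x j_side grid the id of its
    n_side x n_side cluster, so all detectors in a cluster can share one
    random effect (32x32 detectors -> 8x8 = 64 clusters)."""
    rows, cols = np.meshgrid(np.arange(j_side), np.arange(j_side),
                             indexing="ij")
    return (rows // n_side) * (j_side // n_side) + (cols // n_side)

idx = cluster_index().ravel()  # cluster id for each of the 1024 detectors
# W_cluster[idx] then expands 64 shared effects to all 1024 detectors
```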
\subsection{Model fitting description}\label{sec:model.fitting}
We fitted five SCR models to the same simulated datasets: (i) the basic single-season SCR model without aggregation (Section~\ref{sec:snapshotSCR}), (ii) the RE model, both with and without aggregation (Section~\ref{sec:RAND}), (iii) the SARE model, both with and without aggregation (Section~\ref{sec:SARE}), (iv) the FM model, both with and without aggregation, and (v) the FE model without aggregation (Section~\ref{sec:FE}). The models were fitted using MCMC simulation with NIMBLE \citep{valpine2017nimble, nimbleSoftware2021} in R version 3.6.2 \citep{Rsoftware2019}. We used the R package nimbleSCR \citep{nimblescr2020bischof, turek2021bayesian}, which implements the local evaluation approach \citep{milleret2019local} to increase MCMC efficiency. For each simulated data set, we ran three chains of (i) 30,000 iterations (including a 12,000-iteration burn-in) for both the basic single-season SCR and FE models, (ii) 100,000 iterations (including a 20,000-iteration burn-in) for SARE and RE, both with and without aggregation, and (iii) 60,000 iterations (including a 12,000-iteration burn-in) for FM without aggregation and 20,000 iterations (including a 4,000-iteration burn-in) for FM with aggregation.
MCMC convergence of each model was monitored using the Gelman-Rubin convergence diagnostics $\hat{R}$ \citep[with upper threshold 1.1,][]{gelman2014bayesian} and visual inspection of traceplots.
\subsection{Model performance measures}\label{sec:model.performance.measures}
We used relative bias, coefficient of variation, and coverage probability to evaluate the performance of each fitted model with respect to the estimation of focal parameters (e.g., population size, $\sigma$). Suppose $\{\theta^{(r)} \, : \, r = 1, 2, \dots, R\}$ denotes a set of MCMC draws from the posterior distribution of a scalar parameter $\theta$.
\tbf{Relative bias}.
Relative bias (RB) is calculated as
\begin{align}
\widehat{\text{RB}} (\theta) = \frac{\hat{\theta} - \theta_0}{\theta_0},
\end{align}
where $\hat{\theta}$ denotes the posterior mean $\frac{1}{R} \sum_{r=1}^{R} \theta^{(r)}$ and $\theta_0$ gives the true value.
\tbf{Coefficient of variation}.
Precision was measured by the coefficient of variation (CV):
\begin{align}
\widehat{\text{CV}} (\theta) = \frac{\widehat{\text{SD}}(\theta)}{\hat{\theta}},
\end{align}
where $\widehat{\text{SD}}(\theta) = \sqrt{\frac{1}{R-1} \sum_{r=1}^{R} (\theta^{(r)} - \hat{\theta})^2}$ is the posterior standard deviation of parameter $\theta$.
\tbf{Coverage probability}.
Coverage probability was computed as the proportion of converged model fits for which the estimated 95\% credible interval (CI) of the parameter $\theta$ contained the true value $\theta_0$.
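All three measures can be computed directly from the posterior draws; a minimal sketch (toy draws, illustrative names):

```python
import numpy as np

def performance(draws, theta0):
    """Relative bias, coefficient of variation, and whether the 95%
    credible interval covers the true value, for one scalar parameter."""
    theta_hat = draws.mean()                     # posterior mean
    rb = (theta_hat - theta0) / theta0           # relative bias
    cv = draws.std(ddof=1) / theta_hat           # posterior SD / posterior mean
    lo, hi = np.quantile(draws, [0.025, 0.975])  # 95% credible interval
    return rb, cv, bool(lo <= theta0 <= hi)

draws = np.random.default_rng(0).normal(250.0, 20.0, size=5000)
rb, cv, covered = performance(draws, theta0=250.0)
```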
\subsubsection{Effective sample size and MCMC efficiency}\label{sec:ess}
To compare the efficiency of the different MCMC algorithms of the fitted models, we computed the effective sample size (ESS) and MCMC efficiency (= ESS/MCMC run time) of each top-level parameter for each of the model runs. We used the `effectiveSize' function from the R package coda to compute ESS \citep{coda2006}. The calculation of ESS is based on the combined samples of the converged MCMC chains after discarding the burn-in period. The MCMC computation time was calculated excluding the burn-in period. To obtain stable estimates of the quantities of interest, it is recommended to have ESS greater than 400 \citep{vehtari2021newRhat}.
Although MCMC algorithms are all designed to generate samples from posterior distributions, their efficiency can vary considerably. There are two primary measures of the efficiency of an MCMC algorithm: the quality of MCMC mixing and the speed of MCMC computation. We computed `MCMC efficiency' as a combined metric capturing both characteristics, so that we could compare different MCMC algorithms. For our study, we reported the mean MCMC efficiency for each top-level parameter in the model, across all the converged replicates in each scenario.
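As a rough illustration of the metric (coda's `effectiveSize' uses a spectral-density estimate; the truncated-autocorrelation version below is a simplification):

```python
import numpy as np

def ess(chain):
    """Crude effective sample size R / (1 + 2 * sum of positive-lag
    autocorrelations), truncating at the first non-positive lag; a
    simplification of what coda::effectiveSize computes."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    r = len(x)
    acov = np.correlate(x, x, mode="full")[r - 1:] / r  # lag-k autocovariances
    rho = acov / acov[0]
    s = 0.0
    for k in range(1, r):
        if rho[k] <= 0:
            break
        s += rho[k]
    return r / (1.0 + 2.0 * s)

def mcmc_efficiency(chain, runtime_seconds):
    return ess(chain) / runtime_seconds  # ESS per second of sampling

iid = np.random.default_rng(1).normal(size=4000)  # ESS close to 4000 for iid
```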
\subsubsection{Spatial accuracy of predicted baseline detection probability surfaces}\label{sec:method.compare.det.maps}
Baseline detection probability surfaces obtained from SCR analyses are useful for evaluating the performance of our SCR models, as they have the potential to reveal spatial patterns in detection probability (such as pockets of very low or high detection probability) that could be of practical relevance. We compared the accuracy of the detector-specific baseline detection probability surfaces predicted by the different models with the true simulated surface $(p_{01},p_{02},\dots,p_{0J})'$. We quantified the accuracy by calculating the expected sum of squared errors (SSE). In practice, we first obtained posterior MCMC samples of the baseline detection probability surface $(p_{01}, p_{02},\dots, p_{0J})'$ and computed the mean squared error for detector $j$: $\text{SSE}_j = \frac{1}{R} \sum_{r=1}^{R} (p_{0j}^{(r)} - p_{0j})^2$, where $\{p_{0j}^{(1)}, p_{0j}^{(2)}, \dots, p_{0j}^{(R)} \}$ denotes the posterior MCMC sample of $p_{0j}$, $j=1,2,\dots,J$. Finally, we calculated the total error sum of squares $\text{SSE} = \sum_{j=1}^{J} \text{SSE}_j$ as a measure of the predictive accuracy of the detection probability surface. A smaller SSE implies a more accurate prediction (closer to the truth). We used $\Delta \text{SSE}$, relative to the model with the lowest SSE ($\Delta \text{SSE} = \text{SSE} - \min\{\text{SSE}\}$), to compare the accuracy of the predicted baseline detection probability surfaces amongst the different models.
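The SSE comparison can be sketched as follows (model names and toy posterior draws are illustrative):

```python
import numpy as np

def delta_sse(post_draws, p0_true):
    """SSE per model (sum over detectors of the posterior mean squared
    error of p0_j), reported relative to the best model."""
    sse = {m: ((d - p0_true) ** 2).mean(axis=0).sum()
           for m, d in post_draws.items()}
    best = min(sse.values())
    return {m: v - best for m, v in sse.items()}

rng = np.random.default_rng(3)
p0_true = rng.uniform(0.05, 0.6, size=100)                   # J = 100 detectors
draws = {"SARE": p0_true + rng.normal(0, 0.01, (500, 100)),  # R = 500 draws
         "SCR":  p0_true + rng.normal(0, 0.10, (500, 100))}
d = delta_sse(draws, p0_true)  # d["SARE"] = 0 (most accurate surface)
```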
\subsubsection{Model comparison using WAIC}\label{sec:model.comparison.waic}
We compared the fitted models using Watanabe-Akaike information criterion (WAIC) \citep{watanabe2010asymptotic}, which is computed as
\begin{align}\label{waic}
\text{WAIC} = -2\sum_{i=1}^{M} \log \Big( \frac{1}{R} \sum_{r=1}^{R} f(\mathbf{Y}_i \, | \, \boldsymbol{\ensuremath{\boldsymbol{\theta}}}^{(r)}) \Big) + 2 \, p_w,
\end{align}
where $f(\mathbf{Y}_i \, | \, \boldsymbol{\ensuremath{\boldsymbol{\theta}}})$ denotes the likelihood of $i$-th individual capture history $\mathbf{Y}_i = (y_{i1}, y_{i2}, \dots, y_{iJ})'$ in the model. Here, we adopt the second of the two variants of the penalty term $p_w$ proposed by \cite{gelman2014understanding}:\vspace*{-0.1em}
\begin{align}\label{p.waic}
p_w = \sum_{i=1}^{M} \Big\{ \frac{1}{R-1} \sum_{r=1}^{R}\Big(\log f(\mathbf{Y}_i \, | \, \boldsymbol{\ensuremath{\boldsymbol{\theta}}}^{(r)}) - \frac{1}{R} \sum_{r'=1}^{R} \log f(\mathbf{Y}_i \, | \, \boldsymbol{\ensuremath{\boldsymbol{\theta}}}^{(r')}) \Big)^2 \Big\}.
\end{align}
A model with smaller WAIC is preferred. We use $\Delta \text{WAIC}$ ($= \text{WAIC} - \min\{\text{WAIC}\}$) to compare the different models in terms of their model fit and complexity.
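Given an $R \times M$ matrix of pointwise log-likelihoods, (\ref{waic}) and (\ref{p.waic}) can be computed as in the following sketch (toy draws, illustrative names):

```python
import numpy as np

def waic(loglik):
    """WAIC from an (R draws) x (M individuals) matrix of pointwise
    log-likelihoods log f(Y_i | theta^(r)), with the variance-based
    penalty p_w."""
    r, m = loglik.shape
    # log of the posterior-mean likelihood per individual (stable form)
    lppd_i = np.logaddexp.reduce(loglik, axis=0) - np.log(r)
    p_w = loglik.var(axis=0, ddof=1).sum()
    return -2.0 * lppd_i.sum() + 2.0 * p_w

loglik = np.random.default_rng(4).normal(-3.0, 0.2, size=(1000, 50))
w = waic(loglik)  # toy draws; a smaller WAIC indicates the preferred model
```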
\section{Results}\label{sec:results}
During comparison and interpretation, we only considered models that had reached convergence and exhibited proper mixing of all the top-level parameters (e.g., $N$, $\sigma$, $\phi$, $\eta$), with $\hat{R} \leq 1.1$. While all SCR (Model 1) and FE (Model 5) fits converged, convergence of the SARE, RE and FM models was challenging without aggregation of the random effects (Table~\ref{table:nconverged}). Only under the extreme categorical scenarios with low baseline detection probability ($\eta=0.1$) did the convergence rate (i.e., the number of converged models out of 100 repetitions) of RE (Model 2) exceed those of the other two GLMMs ($\geq60\%$). The convergence rates of RE and FM improved substantially when random effects were aggregated (66 -- 100\%). The convergence rate of SARE improved substantially (67 -- 96\%) after aggregating the random effects under high spatial autocorrelation scenarios ($\phi = 0.05$), whereas the improvement was less pronounced (5 -- 39\%) under intermediate autocorrelation scenarios ($\phi = 1$).
For all the models that converged, mean ESS was considerably higher than the suggested threshold of 400, indicating that the MCMC chains were long enough to provide stable estimates (Tables~S6 -- S9, Supp. material). SCR and FE had the highest MCMC efficiency (mean MCMC eff. $>0.8$) under most scenarios, except for the extreme categorical scenario with $\eta=0.1$ and $\phi = 0.05$, where SCR had a relatively lower mean MCMC efficiency of 0.41. The MCMC efficiencies of both the SARE and RE models were 1.5 -- 2 times lower than those of the SCR and FE models in these scenarios, despite aggregation of the random effects. Among the three GLMM formulations, SARE showed the highest MCMC efficiency in most scenarios (mean 0.8 -- 1.6 in continuous and 0.27 -- 0.77 in categorical scenarios). The FM model had the lowest MCMC efficiency (mean 0.01 -- 0.1) across all scenarios, primarily due to its higher MCMC computation time (Table~S9). Considering the overall poor MCMC convergence of the three GLMMs when fitted without aggregation, we only considered the results obtained with dimension reduction.
\subsection{Estimates of population size}\label{sec:par.estimates}
All five models showed negligible bias in estimates of population size $N$ under most simulation scenarios tested here. Both SARE and FE (the models that explicitly account for spatial autocorrelation) estimated population size with moderate accuracy across all scenarios (median RB: $-9\%$ -- $6\%$; Tables~S3, S5). Although SCR and RE did not specifically model spatial autocorrelation between detectors, population size estimates from these models also showed negligible bias (median RB: $-10\%$ -- $5\%$) in most scenarios considered (Tables~S1 and S2, Supp. material).
However, under scenarios with categorical spatial variation in baseline detection probability and high autocorrelation ($\phi=0.05$), the SCR model showed approximately 30\% negative bias in population size estimates (Figure~\ref{fig:Nest.p0.3}). The RE model produced an elevated negative bias (median RB: $-17\%$) under the categorical scenario with $\eta=0.3$ and $\phi=0.05$ (Table~S2). FM showed a level of accuracy in estimating population size similar to that of the SARE and FE models for all the continuous scenarios (Table~S4). Although FM seemed structurally better suited for the scenarios with categorical variation in detection probability (due to its integration of membership in discrete detectability groups), it showed an 11\% negative bias in each of the categorical scenarios with high autocorrelation.
The coefficient of variation (CV) of population size estimates from the five fitted models varied moderately (median CV: 3 -- 16\%) under $\eta = 0.1$ and was less than 8\% for the remaining scenarios with $\eta \geq 0.3$. Coverage probabilities for SCR were $> 90\%$ for the scenarios with intermediate autocorrelation ($\phi= 1$). However, when spatial autocorrelation was high ($\phi= 0.05$), coverage declined (65 -- 97\%) for the continuous scenarios and dropped to less than $20\%$ for the categorical scenarios (Table~S1). Coverage probabilities for SARE, RE and FE were $\geq 90\%$ (coverage for FM $\geq 81\%$) for all the scenarios except the extreme categorical scenario with $\eta=0.3$ and $\phi = 0.05$, where coverage probabilities for SARE, RE and FM were 0.77, 0.29 and 0.51, respectively (Tables~S2 -- S5).
\subsection{Detection probability surfaces and model comparison}\label{sec:results.compare.det.maps}
Both the SARE and FM models produced reliable detection probability surfaces in the presence of high spatial autocorrelation between detectors. SARE-generated surfaces were the most accurate, with the lowest SSE in 65 -- 94\% of the replicates in both continuous and categorical scenarios with high autocorrelation. Although FM was more precise than SARE and RE in scenarios with intermediate autocorrelation, SCR (which assumes homogeneous baseline detection probability) had the lowest SSE in 72 -- 97\% of the replicates in these scenarios (Figures~\ref{fig:SSE.WAIC.p0.3}, S2).
Under the scenarios with continuous spatial variation in detectability and high autocorrelation, SARE was selected 4 -- 6 times more frequently than the other models in model comparison based on WAIC when $\eta \geq 0.3$. With intermediate autocorrelation, FM and RE were selected 1.5 -- 2 times more frequently than the other models when $\eta$ was 0.3 and 0.6, respectively. For all remaining scenarios (including the scenarios with categorical spatial variation), SCR (Model 1) was selected by WAIC.
\section{Discussion}\label{sec:discussion}
Using a simulation study, we developed and tested three SCR-GLMMs (RE, SARE and FM) to assess their performance in accounting for latent spatial heterogeneity and autocorrelation in detection probability among detectors. SARE (Model 3), the data generating model, was the most reliable model in estimating population size across all the tested scenarios. When autocorrelation was high ($\phi = 0.05$), SARE also performed best in predicting the baseline detection probability surface (as indicated by SSE). The population size estimates from RE and FM (Models 2 and 4) were largely unbiased in the presence of continuous detector-specific variation in baseline detection probability surface, but the estimates were subject to a pronounced negative bias when fitted under the extreme scenarios with categorical variation and high autocorrelation. FM outperformed SARE and RE in terms of SSE for predicted surfaces of baseline detection probability when autocorrelation was at intermediate level ($\phi = 1$).
Unknown latent and autocorrelated variation in detection probability among detectors is common in SCR surveys \citep{gaspard2019residual}, and can remain present in SCR data due to sampling design or landscape characteristics. As shown by \cite{moqanaki2021consequences}, failure to properly account for spatially autocorrelated detection probability may result in biased and overdispersed population size estimates (Figure~\ref{fig:Nest.p0.3}). In this study, we presented a Bayesian SCR-GLMM (SARE, Model 3) that explicitly accounts for spatial autocorrelation between detectors. The primary advantage of modelling spatial autocorrelation among detectors is the ability to use information on detector configuration to correctly account for uncertainty in the estimates. In a practical context, this may aid the identification of locations or regions inside the study area without any detection record. Fitting SCR (Model 1) under high autocorrelation produced 30\% RB with approximately zero coverage probability, whereas SARE (Model 3) showed less than 10\% RB and greater than 77\% coverage probability. Even models that allow variation among detectors but do not explicitly account for spatial autocorrelation (RE and FM) produced estimates of population size with little bias for the majority of the simulated data sets, showcasing the potential of SCR-GLMMs.
In large-scale monitoring programs, data often hail from both structured and unstructured or opportunistic sampling \citep{altwegg2019occupancy, bischof2020estimating, isaac2020data}. In certain extreme cases (e.g., citizen science data), large portions of the study area may be left unsampled, unbeknownst to the investigator \citep{johnston2022outstanding, bird2014statistical}. The three SCR-GLMMs tested here (SARE, RE and FM) allow modelling unknown spatial variation in detection probability in the absence of known fixed effects, and can also help quantify this unknown variation in detectability. Spatially explicit estimates of detection probability obtained with SCR-GLMMs can be useful for planning and adjusting large-scale surveys, as they can help investigators identify regions with high and low detection probability, including apparent holes in sampling. On the flip side, Bayesian SCR-GLMMs involve a large number of unknown parameters, making these models challenging to fit, which manifests in slow computation and convergence issues under certain conditions (e.g., SCR data with a low number of detections per detector, or fitting GLMMs without aggregating the random effects). When choosing a model, practitioners will need to weigh the benefits of accounting for spatial heterogeneity in detection probability against the costs associated with model complexity.
High-dimensional random effects models can easily overfit typical SCR data with few or no detections at the majority of detectors. Dimension reduction of the random effects is a typical strategy to avoid overfitting and control the number of random effects in a model (Section~\ref{sec:scaling}) \citep{hefley2017basis, gelman2014bayesian}. Pooling information allows reliable inference from model fits that would otherwise be computationally unstable, as shown by the improved convergence rates of all SCR-GLMMs when random effects were aggregated (Table~\ref{table:nconverged}). The choice of aggregation level implies a trade-off between sample size per random effect (high aggregation to achieve dimension reduction) and the resolution of spatially explicit estimates of detectability (low aggregation for more spatial detail). We recommend increasing the aggregation level until the MCMC convergence criteria are met for the key parameters of interest. This keeps the aggregation level as low as possible, so that model fitting is feasible and the level of spatial detail meets the requirements of the investigation. In empirical analyses, random effects can also be aggregated based on natural groups of detectors, such as administrative units, sub-regions that differ in sampling effort, or other categorical factors.
For the SARE model, we advise caution when choosing an upper bound for the aggregation scale, as the spatial autocorrelation is explicitly modelled as a decreasing function of inter-detector distance. Estimating model parameters may become computationally intractable with a very low number of random effects, since the fitted coarse surface would over-dilute the true scale of variation in the autocorrelated surface. Further, strong negative bias may arise in the estimate of population size under highly autocorrelated scenarios, similar to what we experienced when fitting the basic SCR model to our simulated data with heterogeneous and spatially autocorrelated detection probability.
When the number of detections per detector in SCR data sets is low, multicollinearity can occur between the detector- or cluster-specific random effects and other parameters of the half-normal detection function. For instance, such multicollinearity arises when SARE is fitted to SCR data sets that are not sufficiently informative to reveal the underlying autocorrelation amongst detectors. Based on the SSE values and WAIC in our study, we recommend fitting SARE models primarily to data from extreme sampling situations where both detection probability and spatial autocorrelation are high. In all other situations, SARE is expected to give a poorer fit (and poor MCMC convergence), whereas basic single-season SCR can cope with moderate levels of variation among detectors even under low detectability (as indicated by SSE and WAIC; Figure~S3, Supp. material). Overall, we found WAIC useful for selecting the best model in scenarios with different levels of autocorrelation, which holds promise for WAIC application in empirical analyses.
In this study, we focused on three extensions of the SCR model that can account for latent heterogeneous detection probability. Other potential modelling solutions for dealing with a lack of covariates include: (a) Bayesian nonparametric models allowing for a possibly infinite number of subgroups of detection probability \citep{turek2021bayesian}, (b) conditionally autoregressive (CAR) random effects models that explicitly model spatial autocorrelation between detectors \citep{nicolau2020incorporating}, and (c) basis function models obtained by factorizing a pre-specified correlation matrix \citep{hefley2017basis}. Recently, \cite{stevenson2021spatial} developed an SCR model of spatially autocorrelated detections based on Gaussian random fields. While that approach can be advantageous in situations where variation in detection probability occurs regularly within individual home ranges, the latent detection field SCR model requires integrating out the spatially autocorrelated random effects as well as the ACs, resulting in a significant computational burden. Each of these classes of models is computationally expensive, over-parameterized, and likely to overfit the sparse SCR data sets that are common in ecological studies \citep{gerber2015spatial,tourani2022review}, but we anticipate that future advances can overcome the computational and/or modelling barriers to facilitate successful application of these sophisticated techniques for modelling heterogeneity in detection probability.
\subsection{Conclusions}\label{sec:conclusions}
Properly accounting for spatial autocorrelation in detection probability can mitigate bias in population size estimates. Dimension reduction of the random effects is a computationally stable technique to avoid overfitting of such complex models, but caution should be applied when choosing the aggregation scale, given the trade-offs between MCMC efficiency and spatial detail. Investigators specifically interested in predicting detection probability surfaces should choose SARE in situations where spatial autocorrelation is high and the number of detections per detector is above 1. In situations where either detectability or autocorrelation is low to moderate, we recommend using FM instead.
\section*{Acknowledgements}
This work was funded by the Norwegian Environment Agency (Miljødirektoratet), the Swedish Environmental Protection Agency (Naturvårdsverket), and the Research Council of Norway (NFR 286886).
\section*{Supplementary material}
Additional tables and figures can be found in supplementary material
(\url{https://www.dropbox.com/s/9ur4nvyy808j2hi/HetDetSol_Supporting_Material_arxiv.pdf?dl=0}). R code for generating simulated data and data analysis are available at: \url{https://github.com/soumenstat89/HetDetSol}.
\bibliographystyle{apalike}
\section*{Introduction}
\noindent The \textit{Turing Test} \cite{Turing1950} was designed to replace the philosophical question ``Can machines think?'' with a more pragmatic one: ``Can digital computers imitate human behaviors in answering text questions?'' Since 1990, the \textit{Loebner Prize} competition has awarded the AI with the most human-like behaviour during a five-minute Turing Test. This contest is controversial within the field of AI for encouraging low-quality interactions and simplistic chatbots \cite{Shieber1994}.
Programmers force chatbots to make mistakes, such as typing errors, to appear more human-like. Because computers achieve super-human performance in some tasks, such as arithmetic \cite{Turing1950} or video games \cite{Liden2003}, their abilities may need to be artificially constrained. Such deliberate mistakes are called \textit{Artificial Stupidity} in the media \cite{Economist1992} \cite{Salon12003}.
Solving the Turing Test problem in its general form implies building an AI capable of delivering human-like replies to any question of human interest. As such, it is known to be AI-Complete \cite{Yampolskiy2013}: by solving this problem, we would be able to solve most problems of interest in Artificial Intelligence, by reformulating them as questions during a Turing Test. To appear human, an AI will need to fully understand human limits and biases. Thus, the Turing Test can be used to test whether an AI is capable of understanding \textit{human stupidity}.
By deliberately limiting an AI's ability to achieve a task, to better match humans' ability, an AI can be made safer, in the sense that its capabilities will not exceed humans' abilities by several orders of magnitude. Upper bounds on the number of operations performed per second by a human brain have been estimated \cite{Moravec1997} \cite{Bostrom1998}. To obtain an AI that does not far exceed humans' abilities, for instance in arithmetic, the computing power allotted to mathematical capabilities must be artificially diminished. Besides, humans exhibit cognitive biases, which result in systematic errors in judgment and decision making \cite{Haselton2005}. In order to build a safe AGI, some of those biases may need to be replicated.
We will start by introducing the concept of Artificial Stupidity. Then, we will recommend limitations to build a safer AGI.
\section{Artificial Stupidity}
\subsection{Passing the Turing Test}
Deliberately simplistic programs perform well at Turing Test contests such as the Loebner Prize. The computer program A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) \cite{Wallace2009} won the Loebner Prize in 2000, 2001 and 2004, even though ``there is no representation of knowledge, no common-sense reasoning, no inference engine to mimic human thought. Just a very long list of canned answers, from which it picks the best option'' \cite{Salon22003}. A.L.I.C.E. takes an approach similar to ELIZA's \cite{Weizenbaum1966}: it identifies some relevant keywords and gives appropriate answers without learning anything about the interrogator \cite{Salon22003}.
A general trend among computer programs written for the Loebner Prize is to avoid being asked questions they cannot answer, by directing the conversation towards a simpler conversational context. For A.L.I.C.E. and ELIZA, that means focusing mainly on what has been said in the last few sentences (stateless context). Another example of an AI performing well at Turing Test contests is Eugene Goostman, which convinced 33\% of the judges that it was human \cite{Zdnet2014}. Goostman is portrayed as a thirteen-year-old Ukrainian boy who does not speak English well. Thus, Goostman makes typing mistakes, and its interrogators are more inclined to forgive its grammatical errors or lack of general knowledge. Introducing deliberate mistakes, what we call \textit{Artificial Stupidity}, is necessary to cover up an even greater gap in intelligence during a Turing Test.
\subsection{Interacting with Humans}
Outside these very specific Turing Test contests, Artificial Stupidity is increasingly being introduced into systems that interact with humans. In ``Artificial Stupidity: The Art of Intentional Mistakes'' \cite{Liden2003}, Liden describes the design choices an AI programmer must make in the context of video games. He gives general principles that a Non-Player Character (NPC) must follow to make the game playable. For instance, NPCs must ``move before firing'' (e.g., by rolling when the player enters the room) so that the player has additional time to understand that a fight is happening, or ``miss the first time'' to indicate the direction of the attack without hurting the player. In video games, because computer programs can be much more capable than human beings (for instance, because of their perfect aim in First-Person Shooters (FPS)), developers force NPCs to make mistakes to make life easier for the human player.
This tendency to make AI deliberately stupid can be observed across multiple domains. For example, at Google I/O 2018, Sundar Pichai introduced Google Duplex, ``a new technology for conducting natural conversations to carry out `real world' tasks over the phone'' \cite{Duplex2018}. In the demo, the Google Assistant used this Google Duplex technology to successfully make an appointment with a human. To that end, it used the interjection ``uh'' to imitate the space fillers humans use in day-to-day conversations. This interjection was not \textit{necessary} to make the appointment. If anything, when humans use this kind of space filler, it is a sign of poor communication skills. The developers introduced this type of Artificial Stupidity to make the call more fluid, more human-like.
\subsection{Exploiting Human Vulnerabilities}
When interacting with an AI, humans want to fulfill some basic needs and desires. The AI can exploit these cravings by giving people what they want. Accordingly, the AI may want to appear vulnerable to make humans feel better about themselves. For instance, Liden \cite{Liden2003} suggests giving NPCs ``horrible aim'', so that humans feel superior because they think they are dodging gunshots. Similarly, he encourages ``kung-fu style'' fights, where amongst dozens of enemies, only two are effectively attacking the player at any moment. Thus, the player feels powerful (because he believes he is fighting multiple enemies). Instead of unleashing the AI's full potential, the designers diminish its power to please the players.
The same mechanism applies to the computer program A.L.I.C.E.. The program delivers a simple \textit{verbal behavior}, and because ``a great deal of our interactions with others involves verbal behavior, and many people are interested in what happens when you talk to someone'' \cite{Salon22003}, it fulfills this elementary craving of obtaining a verbal behavioral response. In part 2 of his ``Artificial Stupidity'' article, Sundman quotes Wallace, the inventor of A.L.I.C.E., explaining why he thinks humans enjoy talking to his program: ``It's merely a machine designed to formulate answers that will keep you talking. And this strategy works, [...] because that's what people are: mindless robots who don't listen to each other but merely regurgitate canned answers.''
More generally, virtual assistants and chatbots are technologies aimed at helping consumers. To that end, they must provide value. They might achieve this by gaining users' trust and improving their overall well-being. For instance, Woebot, a Facebook chatbot developed by Stanford researchers, improves users' mental health through Cognitive Behavioral Therapy (CBT) \cite{MIT2017}. In addition to CBT, Woebot uses two therapeutic process-oriented features: first, it provides empathic responses matched to the mood the user reports; second, it presents content tailored to that mood \cite{Woebot2017}. The chatbot delivers specifically what will help the user best, without detailing the user's mental health. It gives the very simple answer the consumer wanted, masking the intelligence of its algorithm. Chatbots are designed to help humans by giving appropriately simple responses; they are not designed to appear smarter than humans.
\subsection{Avoiding Superintelligence requires Superintelligence}
In "Computing Machinery and Intelligence" \cite{Turing1950}, Turing exposes common fallacies in arguments that a machine cannot pass the Turing Test. In particular, he explains why the belief that "the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic", because "the machine would be unmasked because of its deadly accuracy", is false. Indeed, the machine "would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator." Thus, the machine would hide its super-human ability by giving a wrong answer, or by simply saying that it could not compute the result. Similarly, in a video game, AI designers deliberately make the AI non-omniscient, so that it does not miraculously guess where each and every weapon in the game is located \cite{Liden2003}.
The general trend here is that AIs tend to reach super-human performance quickly after having achieved human-level performance. For instance, in the game of Go, the state of the art went, within a few months, from strong amateur to weak professional to super-human performance \cite{Silver2016}. From that point onwards, to make the AI pass a Turing Test, or behave human-like to satisfy human desires, AI designers must deliberately limit its capabilities.
We now take the point of view of algorithmic complexity. We call \textit{AI-problems} the problems that can be solved (i.e. for which the correct output for a given input can be computed) by the union of all humans \cite{Yampolskiy2013}. The Turing Test is said to be \textit{AI-complete} \cite{Yampolskiy2013}, because it is itself an AI-problem and every AI-problem can be reduced to it in polynomial time (by framing the problem as a question during a Turing Test). Additionally, we say that a problem is \textit{AI-Hard} if we can find a polynomial-time reduction from an AI-complete problem to it. In this setting, the problem of comprehensively specifying the limits of human cognition to an AI is AI-Hard. Indeed, it requires being able to determine which questions a human can answer in a given time during a Turing Test, and which answers humans are likely to produce. Therefore, the Turing Test reduces polynomially to specifying the limits of human cognition, so specifying such limits is AI-Hard.
Although the precise limits of human cognition are not fully known, specific recommendations on minima or maxima for different capabilities can be given.
\section{The Cognitive Limits of the Human Brain}
\epigraph{I believe that in about fifty years time it will be possible to programme computers, with a storage capacity of about $10^9$, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.}{Turing, Computing Machinery and Intelligence, 1950}
Sixty-eight years ago, Turing estimated that by the 2000s, an AI with only 1 Gb of storage could pass a five-minute Turing Test 30\% of the time. We showed above that passing the Turing Test \textit{in general} is AI-complete. The amount of computing resources necessary to pass the Turing Test is therefore a relevant estimate of the computing power needed to attain Human-Level Machine Intelligence (HLMI). In what follows, we try to estimate the computing power of the human brain.
\subsection{Limits in Computing}
The brain is a complex system with an architecture completely different from the usual von Neumann computer architecture. However, estimates about the storage capacity of the brain and the number of operations per second were attempted.
\subsubsection{Long-term Memory}
Here is how Turing \cite{Turing1950} justifies the $10^9$ bits of storage capacity:
\begin{quote}
``Estimates of the storage capacity of the brain vary from $10^{10}$
to $10^{15}$ binary digits. I incline to the lower values and believe
that only a very small fraction is used for the higher types of
thinking. Most of it is probably used for the retention of visual
impressions.''
\end{quote}
The storage capacity of the brain is generally considered to lie within the bounds given by Turing ($10^{10}$ to $10^{15}$ bits). Although the encoding of information in our brains differs from the encoding in a computer, we observe many similarities \cite{Lecoq2013}.
To estimate the storage capacity of the human brain, we first evaluate the number of synapses available in the brain. The human brain has about 100 billion neurons \cite{Williams1988}. Each neuron has about five thousand potential synapses, which amounts to about $5\times10^{14}$ synapses \cite{Bostrom1998}, i.e. $5\times10^{14}$ potential datapoints.
This shows that the brain could in theory encode between $10^{12}$ and $10^{15}$ bits of information, assuming that each synapse stores one bit. However, such estimates remain approximate, because neuroscientists do not know precisely how synapses actually encode information: some of them can encode multiple bits by transmitting signals of different strengths, and individual synapses are not completely independent \cite{Wickman2012}.
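The back-of-the-envelope arithmetic above can be reproduced directly; the figures are the cited estimates, not measurements:

```python
# Rough storage estimate for the human brain, using the figures cited above.
neurons = 100e9              # ~10^11 neurons (Williams & Herrup, 1988)
synapses_per_neuron = 5e3    # ~5,000 potential synapses each (Bostrom, 1998)

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synapses")  # 5e+14

# Assuming one bit per synapse, the total falls inside the range in the text.
bits_lower, bits_upper = 1e12, 1e15
assert bits_lower <= total_synapses <= bits_upper
```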
\subsubsection{Processing}
Even though the brain can encode Terabits of information, humans are in practice very limited in the amount of information they can process.
In his classic article \cite{Miller1956}, Miller showed that our minds can hold only about $7\pm2$ concepts in working memory. More generally, three essential bottlenecks have been shown to limit information processing in the brain: the Attentional Blink (AB) limits our ability to consciously perceive, the Visual Short-Term Memory (VSTM) our capacity to hold items in mind, and the Psychological Refractory Period (PRP) our ability to act upon the visual world \cite{Marois2005}. In particular, the brain takes up to 100 ms to process complex images \cite{Rousselet2004}.
Moreover, processing takes longer when the choice to be made involves more complex information. This is known as \textit{Hick's Law} \cite{Hick1952}: the time it takes to make a choice grows linearly with the entropy of the possible alternatives.
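Hick's Law is commonly written $T = b \cdot \log_2(n+1)$ for $n$ equally likely alternatives. A minimal sketch, where the per-bit constant $b$ is an illustrative assumption rather than an empirical value:

```python
import math

def hick_reaction_time(n_choices: int, b: float = 0.2) -> float:
    """Hick's Law: mean choice time is linear in the entropy of the
    alternatives.  For n equally likely options the entropy term is
    log2(n + 1); b is a per-bit constant (illustrative here)."""
    return b * math.log2(n_choices + 1)

# Choice time grows logarithmically in the number of alternatives,
# i.e. linearly in their entropy:
for n in (1, 3, 7):
    print(n, round(hick_reaction_time(n), 2))
```

With `b = 0.2`, one, three, and seven alternatives give 0.2, 0.4, and 0.6 seconds: each doubling-plus-one of the options adds one bit of entropy, hence a constant time increment.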
\subsubsection{Computing}
One approach to evaluating the complexity of the processes happening in the brain is to estimate the maximum number of operations per second. Moravec \cite{Moravec1997} estimates that replicating all human functions as a whole would require about 100 million MIPS (Millions of Instructions Per Second), by extrapolating from the computational needs of edge extraction in robotics. Using the same estimate for the number of synapses in the brain (mentioned in \textit{Long-term Memory}), Bostrom \cite{Bostrom1998} concludes that the brain performs at most about $10^{17}$ operations per second (for a survey of the different estimates of the computational capacity of the brain, see Bostrom \cite{Bostrom2008}).
\subsubsection{Clock Speed}
The brain does not operate with a central clock, so the term "clock speed" does not accurately describe the processes happening in the brain. However, it is possible to compare the transmission of information in the brain and inside a computer.
Processes emerge and dissolve in parallel in different parts of the brain at different frequency bands: theta (5-8 Hz), alpha (9-12 Hz), beta (14-28 Hz) and gamma (40-80 Hz). Comparing computer and brain frequencies, Bostrom notes that ``biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (\textasciitilde2 GHz)'' \cite{Bostrom2014}.
It is important to note that clock speed alone does not fully characterize the performance of a processor \cite{Smith2002}. Furthermore, the processes happening in the brain use several orders of magnitude more parallelization than modern processors.
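The seven-orders-of-magnitude gap quoted from Bostrom is a one-line computation:

```python
import math

neuron_peak_hz = 200   # peak firing rate of biological neurons
cpu_hz = 2e9           # a modern ~2 GHz microprocessor

gap_orders = math.log10(cpu_hz / neuron_peak_hz)
print(round(gap_orders))  # 7 orders of magnitude
```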
\subsection{Cognitive biases}
\epigraph{Humans, like other animals, see the world through the lens of evolved adaptation}{The Evolution of Cognitive Bias, 2005 \cite{Haselton2005}}
Natural selection has shaped human perception under the constraint of limited computational power. Cognitive biases lead humans to draw inferences or adopt beliefs without corresponding empirical evidence. The fundamental work of Tversky and Kahneman \cite{Tversky1974} highlighted the existence of heuristics and systematic errors in judgment. However, those biases helped solve adaptive problems, i.e. ``problems that recurred across many generations during a species' evolutionary history, and whose solution statistically promoted reproduction in ancestral environments'' \cite{Cosmides1994}.
\subsubsection{Limited rationality}
The economist Herbert A. Simon opposed the classical view of the \textit{rational} economic agent. He viewed humans as organisms with limited computational power, and introduced the concept of \textit{bounded rationality} to take those limits into account in decision-making \cite{Herbert1955}.
Cosmides and Tooby also criticized the study of economic agents as following "rational" decision rules without studying the "computational devices" inside. Natural selection's invisible hand created the human mind, and economics is made of the interactions of those minds. Evolution led humans to develop domain-specific functions rather than general-purpose problem-solvers. The intelligence of humans comes from those specific "reasoning instincts" that make inferences ``just as easy, effortless, and "natural" to humans as spinning a web is to a spider or building a dam is to a beaver'' \cite{Cosmides1994}.
\subsubsection{Heuristics}
The intractability of certain problems, the limited computational power of human minds, and uncertainty, are the most common ways to explain cognitive biases, and in particular heuristics, which are ``rules of thumb that are prone to breakdown in systematic ways'' \cite{Haselton2005}. Heuristics aim at reducing the cost of computing while delivering good-enough solutions. Processes are limited by brain ontogeny, i.e. the development of different parts of the brain, and complex algorithms take longer and require additional resources.
One of the most famous biases resulting from mental shortcuts is the "Linda Problem": individuals consider the assertion "Linda is a bank teller and active in the feminist movement" more probable than "Linda is a bank teller". This error is caused by the \textit{conjunction fallacy}, i.e. believing that the conjunction of two events is more probable than a single one of them \cite{Tversky1983}. Another notorious example is the Fundamental Attribution Error: attributing certain mental states to individuals because of their behavior, rather than because of the logical implications of the context \cite{Jones1967} \cite{Ross1977}.
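The conjunction rule that the Linda Problem violates, $P(A \cap B) \leq P(A)$, can be checked on any toy distribution; the probabilities below are invented purely for illustration:

```python
# Toy probabilities (invented for illustration, not survey data).
p_bank_teller = 0.05            # P(A): Linda is a bank teller
p_feminist_given_teller = 0.60  # P(B | A): feminist, given bank teller

p_conjunction = p_bank_teller * p_feminist_given_teller  # P(A and B)

# The conjunction can never exceed either conjunct -- judging otherwise
# is exactly the conjunction fallacy described above.
assert p_conjunction <= p_bank_teller
print(round(p_conjunction, 2))  # 0.03
```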
\subsubsection{Error management biases}
Error Management Theory (EMT) studies cognitive biases in the context of error management. It distinguishes two error types \cite{Haselton2005}:
\begin{itemize}
\item false positives (adopting a false belief)
\item false negatives (failing to adopt a true belief)
\end{itemize}
One of the findings of EMT is that humans are biased toward making the less costly error, even if it is the more frequent one \cite{Haselton2006}. Biases of this kind include \cite{Haselton2005}:
\begin{itemize}
\item \textit{Protective biases} (e.g. avoiding a noninfectious person)
\item \textit{Bias in Interpersonal Perception} (for instance sexual overperception in males and commitment skepticism in females)
\item \textit{Positive Illusions} (estimating unrealistically high likelihoods for positive events)
\end{itemize}
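EMT's core claim, a bias toward the cheaper error even when it is the more frequent one, falls out of a simple expected-cost comparison. A minimal sketch with invented numbers:

```python
def expected_cost(p_error: float, cost: float) -> float:
    """Expected cost of committing a given error type."""
    return p_error * cost

# Invented illustration: treating a harmless person as infectious
# (false positive) is frequent but cheap; missing a real infection
# (false negative) is rare but very costly.
fp = expected_cost(p_error=0.30, cost=1.0)    # frequent, mild
fn = expected_cost(p_error=0.02, cost=100.0)  # rare, severe

# The adaptive policy is biased toward the frequent-but-cheap error.
assert fp < fn
```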
\subsubsection{Artifacts}
Humans might appear irrational in experiments because the tested abilities were not optimized by evolution. These cases are called \textit{biases as artifacts} \cite{Haselton2005}. For instance, humans are better at statistical prediction when the inputs are presented in frequency form \cite{Gigerenzer1998}.
\section{Recommendations to Build a Safer AGI}
Humans have clear computational constraints (memory, processing, computing and clock speed) and have developed cognitive biases. An Artificial General Intelligence (AGI) is not \textit{a priori} constrained by such computational and cognitive limits.
Hence, if humans do not deliberately limit an AGI in hardware and software, it could become a \textit{superintelligence}, i.e. "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" \cite{Bostrom2014}, and humans could lose control over the AI. In this section, we discuss how to constrain an AGI to be at most as capable as an average person while still exhibiting general intelligence. To achieve this, resources such as memory, clock speed, or electricity must be restricted.
However, intelligence is not just about computing. Bostrom distinguishes three forms of superintelligence: speed superintelligence (``can do all that a human intellect can do, but much faster''), collective superintelligence (``A system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system'') and quality superintelligence (``A system that is at least as fast as a human mind and vastly qualitatively smarter'') \cite{Bostrom2014}. A hardware-limited AI could be human-level in speed, yet still qualitatively superintelligent.
\subsection{Hardware}
To begin with, we focus on how to avoid speed superintelligence by limiting the AI's hardware. For instance, its maximum number of operations per second can be bounded by the maximum number of operations a human performs. Similarly, by limiting its RAM (or anything that can be used as working memory), we limit it to processing information at a rate similar to humans.
Focusing only on limiting the hardware is nonetheless insufficient. We assume that, in parallel, there exist other limitations (in software) that prevent the AI from becoming qualitatively superintelligent, from upgrading its hardware by changing its own physical structure, or from simply buying computing power online.
\subsubsection{Storage Capacity}
\epigraph{ I should be surprised if more than $10^9$ was required
for satisfactory playing of the imitation game, at any rate against
a blind man. [...] A storage capacity of $10^7$
would be a very practicable possibility even by present techniques.}{Turing, Computing Machinery and Intelligence, 1950}
We estimated the storage capacity of the human brain to be at most $10^{15}$ bits, using one bit per synapse. The cost of hard drives reached \$0.05/Gb in 2017 \cite{Backblaze2017}. Hence, the storage capacity of a human brain would cost at most approximately \$50,000. This is a pessimistic estimate: the brain may use orders of magnitude less information for storage, and the price per Gb could drop even lower in the future.
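The \$50,000 figure follows from the paper's own numbers, reading "Gb" as gigabit throughout:

```python
brain_capacity_bits = 1e15   # upper-bound estimate from the text
cost_per_gigabit = 0.05      # $/Gb in 2017, as cited (Backblaze)

gigabits = brain_capacity_bits / 1e9
cost_dollars = gigabits * cost_per_gigabit
print(f"${cost_dollars:,.0f}")  # $50,000
```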
To have a safe AGI, one should use much less storage capacity. For instance, as quoted in the epigraph, Turing \cite{Turing1950} estimated $10^{7}$ bits (about 10 Mb) to be a practical storage capacity for passing the Turing Test (and therefore attaining AGI). Even if this seems very low, consider that an AGI could have very elegant data structures and semantics, allowing it to store information much more concisely than our brains do. In comparison, English Wikipedia as compressed text is about 12 Gb, and is growing at a steady rate of 1 Gb/year \cite{Wikipedia2018}. For this reason, allowing more than 10 Gb of storage capacity is unsafe: with 10 Gb of storage, the AGI could keep permanent access to an offline version of Wikipedia, and be qualitatively superintelligent in the sense of having direct access to all human knowledge.
A counter-argument to such a memory limit is that our brains process much more information than 10 Mb when observing the world, and would store all those images in long-term memory. The human eye could capture, at most, the equivalent of 576 megapixels in a single glance (ignoring all visual flaws) \cite{Clarkvision2005}. However, such resolution is not necessary for edge detection and image recognition. For instance, a 75x50 pixel image is enough to identify George W. Bush \cite{Yang2003}, and MNIST, the popular database for handwritten digit recognition, uses 28x28 images \cite{lecun1998gradient}. Thus, we can imagine a "visual processing unit" that would transform the photons received by the AGI's sensor into a low-resolution image, precise enough to be interpreted by the AGI, but still orders of magnitude smaller than a Mb.
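The "visual processing unit" described above is essentially aggressive downsampling. A dependency-free sketch using block averaging (all sizes illustrative):

```python
def downsample(image, factor):
    """Average non-overlapping factor x factor blocks of a grayscale
    image (list of rows).  Dimensions must be divisible by factor."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 56x56 capture reduced to an MNIST-like 28x28: 4x fewer values to store.
capture = [[(i + j) % 256 for j in range(56)] for i in range(56)]
small = downsample(capture, 2)
assert len(small) == 28 and len(small[0]) == 28
```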
\subsubsection{Memory access}
Manuel Blum opposes the application of traditional complexity theory to formalizing how humans process information and do mental computations, in particular to generate a password from a previously memorized private key \cite{Blum2017}. In his Human Model, memory can be modeled as a two-tape Turing machine: one tape for long-term memory, one for short-term memory. Blum considers potentially infinite tapes, because tape size is irrelevant to complexity theory; for our purpose, however, we can bound the tapes by the sizes discussed previously for memory (e.g. 10 Mb). According to Miller's magical number $7\pm2$ \cite{Miller1956}, human working memory operates on a limited number of chunks. So our two-tape Turing machine should have a very short "short-term memory" tape, containing at most two or three 64-bit pointers to chunks in the long-term memory (the other tape). More specifically, storing information in long-term memory is slow, but reading from long-term memory (given the correct pointer) is fast.
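Blum's two-tape picture with a tiny working memory can be sketched directly; the capacities below are illustrative stand-ins for the bounds discussed in the text:

```python
class HumanLikeMemory:
    """Toy version of the two-tape model described above: a large,
    slow-to-write long-term store and a working memory holding only
    a few pointers (Miller's 7 +/- 2, here capped at 3)."""

    WORKING_CAPACITY = 3          # pointers held "in mind" at once
    LONG_TERM_CAPACITY = 10_000   # illustrative stand-in for the ~10 Mb bound

    def __init__(self):
        self.long_term = {}       # address -> chunk (the long-term tape)
        self.working = []         # small list of addresses (pointers)

    def memorize(self, address, chunk):
        """Writing to long-term memory is the slow operation in the model."""
        if len(self.long_term) >= self.LONG_TERM_CAPACITY:
            raise MemoryError("long-term tape is full")
        self.long_term[address] = chunk

    def attend(self, address):
        """Loading a new pointer evicts the oldest one beyond capacity."""
        self.working.append(address)
        self.working = self.working[-self.WORKING_CAPACITY:]

    def recall(self):
        """Reading via a held pointer is the fast operation."""
        return [self.long_term[a] for a in self.working]
```

Attending to a fourth chunk evicts the first, mirroring the small short-term tape: after memorizing and attending to "a", "b", "c", "d" in order, `recall()` returns only the last three chunks.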
In modern computers, RAM bandwidth is about 10 GBytes/s, hard drive bandwidth about 100 MBytes/s, and with a high clock rate a CPU can process about 25 GBytes/s \cite{Lecoq2013}. To build a safer AGI, the memory access of the two tapes mentioned above must be restricted, so that we are sure data is retrieved no faster than by humans. However, the computing paradigms being very different, it is difficult to give a clear estimate of how much information the brain retrieves per second \cite{Lecoq2013}.
\subsubsection{Processing}
We previously mentioned that the human brain can only process a limited amount of information per second. In addition to a limited number of chunks in working memory, other features must be implemented to slow down the AGI and make it human-level intelligent. For instance, one could introduce an artificial delay for processing information, such as images, depending on the content. We already noted the 100 ms needed to process complex images \cite{Rousselet2004}; similarly, the time to process an image could depend on its complexity and size. More generally, a model similar to \textit{Hick's Law} \cite{Hick1952} can be implemented, so that the AGI takes linearly more time to make decisions as the information-theoretic entropy of the decision increases.
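Such a Hick's-Law delay could be bolted onto any decision function; the per-bit delay constant and the decorator shape are assumptions for illustration:

```python
import math
import time

def hicks_law_throttle(seconds_per_bit=0.1):
    """Decorator sketch: sleep in proportion to the entropy (in bits)
    of the set of alternatives before returning a decision."""
    def wrap(decide):
        def throttled(options, *args, **kwargs):
            entropy_bits = math.log2(len(options) + 1)
            time.sleep(seconds_per_bit * entropy_bits)
            return decide(options, *args, **kwargs)
        return throttled
    return wrap

@hicks_law_throttle(seconds_per_bit=0.01)
def pick_first(options):
    # Stand-in for an arbitrary decision procedure.
    return options[0]

print(pick_first(["a", "b", "c"]))  # "a", after an artificial ~0.02 s delay
```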
\subsubsection{Clock Speed}
As mentioned several times, the brain parallelizes far more than the von Neumann architecture, using a totally different computing paradigm. Therefore, a clock rate close to the brain's frequencies (typically \textasciitilde10 Hz) is not relevant to our purpose, and it might prove difficult to build an AGI that exhibits human-level intelligence in real time at such a low clock rate.
To solve this, one possibility is to first better measure the trajectory of thoughts occurring in the brain, and then give a precise estimate of how frequently the brain's processes are refreshed (i.e. evaluate some kind of clock rate). Another solution is to abandon the von Neumann architecture and build the AGI with a computer architecture more similar to the human brain.
\subsubsection{Computing}
In "The Cognitive Limits of the Human Brain", we mentioned Bostrom's estimate \cite{Bostrom1998} of at most $10^{17}$ operations per second for the brain. This is a very large number, and could only be reached if the AGI's hardware allowed that much computing power, which it will not under the restrictions described in \textit{Storage Capacity} and \textit{Memory access}.
More importantly, even if we could enforce a number of operations per second lower than what a human brain performs for any given task, this might not be a correct bound. Why? The brain evolved to achieve some very specific tasks useful for survival, but nothing guarantees that the processes happening in the brain are algorithmically optimal. Thus, the AGI could possess a structure far more optimized for computing than the human brain.
Therefore, restricting the number of operations alone is insufficient: the algorithmic processes and the structure of the AGI must be precisely specified, so that it is clear the resulting processes perform tasks at a lower rate than humans.
\subsection{Software}
In November 2017, there were more than 45 companies in the world working on Artificial General Intelligence \cite{Baum2017}. Ben Goertzel distinguishes three major approaches \cite{Goertzel2018}:
\begin{enumerate}
\item Use neural networks to emulate the different parts of the brain (e.g. visual and auditory processing), and connect those parts by emulating how they communicate with each other. DeepMind is a representative example.
\item Take Marcus Hutter's Universal Artificial Intelligence model \cite{hutter2004universal} and try to limit the required computing power.
\item Ben Goertzel's approach with OpenCog: look at the cognitive processes happening in the brain from a high-level cognitive point of view, and map them onto a weighted labeled hypergraph.
\end{enumerate}
In this paper, in order to build a safe AGI with at most human intelligence, we focus on the first approach. Indeed, a more universal or high-level Artificial General Intelligence would have a computing paradigm very different from the human brain's, making it difficult to restrict the AGI's computing resources accordingly.
Beyond those neural networks emulating brain processes, we now consider additional safety measures that must be implemented to obtain a safe AGI.
\subsubsection{No self-improvement}
A limited initial hardware is not a real restriction if the AGI can buy additional computing power online or change its own structure. To prevent the AGI from changing its own code, one possibility is to hard-code the rule directly: "you shall not rewrite your own code". Another (subtler) possibility is to encrypt its source code, making self-modification more difficult.
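One cheap mechanical complement to such rules is an integrity check over the AGI's own code. A minimal sketch; a real deployment would need a trusted external verifier, since a program checking itself can in principle be subverted:

```python
import hashlib

def fingerprint(source: bytes) -> str:
    """SHA-256 digest of the program's source code."""
    return hashlib.sha256(source).hexdigest()

# Digest recorded at deployment time (toy source for illustration).
TRUSTED_DIGEST = fingerprint(b"def act(): ...")

def verify_unmodified(current_source: bytes) -> bool:
    """Refuse to run if the code no longer matches the trusted digest."""
    return fingerprint(current_source) == TRUSTED_DIGEST

assert verify_unmodified(b"def act(): ...")
assert not verify_unmodified(b"def act(): self_improve()")
```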
However, this might not completely solve the problem, as the AGI could manipulate humans into changing its code. Yet, with our AGI design, the AGI would not be superintelligent, but at most human-intelligent. So the AGI would not have the "social manipulation superpowers" \cite{Bostrom2014} necessary to convince humans to change its code.
\subsubsection{Cognitive Biases}
Humans have developed cognitive biases through natural selection. Incorporating human biases into the AGI presents several advantages: they can limit the AGI's intelligence and make the AGI fundamentally safer by avoiding behaviors that might harm humans. Thus, heuristics limiting the possible results of a computation, or error management biases, can help build a less capable AGI that would also make fewer errors.
Here is a list of cognitive biases that could make the AGI safer \cite{List2018}:
\begin{itemize}
\item \underline{Planning fallacy}: would prohibit the AGI from successfully planning a takeover or a treacherous turn
\item \underline{Bandwagon Effect}: the AGI will acquire human values that are shared among the group it belongs to
\item \underline{Confirmation Bias}: the AGI will rationalize and confirm that it is useful to help humans, or that AI Safety is an important problem
\item \underline{Conservatism}: the AGI will keep the same initial values, and not become evil
\item \underline{Courtesy bias}: the AGI will try to not offend anyone, avoiding aggressive behaviors
\item \underline{Functional Fixedness}: the AGI will only use objects like humans do. It will not "hack" anything or use objects with malicious intent
\item \underline{Information bias}: the AGI will have the tendency to seek information and think more, avoiding errors
\item \underline{Mere-exposure effect}: the AGI will have good intentions towards humans because it will be exposed to humans
\item \underline{Omission bias}: bad actions will rank much lower in its utility function than the status quo
\item \underline{Status quo bias}: an action that does not change the environment or even inaction will be desirable
\item \underline{Authority bias}: some human values are shared among most authority figures. The AGI will avoid extreme individual opinions and prefer conventional ones
\item \underline{System justification}: the AGI will defend human values, and won't promote revolution
\item \underline{Processing difficulty effect}: the AGI will prefer well-thought arguments and well-argued ideas over simple actions or decisions that can be harmful
\item \underline{Spotlight effect}: the AGI will overestimate the amount of people observing its behavior. Hence, it will behave in a similar fashion both with high and low supervision
\end{itemize}
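Several of the listed biases (omission bias, status quo bias) amount to penalizing environment-changing actions in the utility function. A toy sketch, with invented values and an invented penalty constant:

```python
def biased_utility(raw_utility: float, changes_world: bool,
                   status_quo_penalty: float = 5.0) -> float:
    """Status-quo / omission bias as a utility penalty: any action that
    alters the environment is discounted by a fixed amount (invented)."""
    return raw_utility - (status_quo_penalty if changes_world else 0.0)

# A mildly beneficial but disruptive action loses to doing nothing:
act = biased_utility(raw_utility=3.0, changes_world=True)    # 3.0 - 5.0 = -2.0
wait = biased_utility(raw_utility=0.0, changes_world=False)  # 0.0
assert wait > act
```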
It is clear that some of these biases can lead to irrational behavior. Therefore, even though they can produce safer behavior, they must be implemented with caution.
\subsubsection{Avoiding a treacherous turn}
In Chapter 8 of \textit{Superintelligence} \cite{Bostrom2014}, Bostrom notes the following: behaving nicely in a sandbox environment is a convergent instrumental goal, both for a friendly and for a malicious AI. At some point, an AI could exhibit enough intelligence to understand that it is being observed, and could restrain its own intelligence accordingly. Bostrom calls this first moment of realization ``the conception of deception''; it would be the last moment of vulnerability during which we could observe a truly malicious intent. After this point, the AI's behavior would appear "safe" until a certain threshold is reached, where the AI no longer feels threatened by its supervisors and decides to take a treacherous turn, revealing its true nature by taking over.
In the implementation of the AI, however, we can make it very explicit which processes are happening, when, and how much computing power each task consumed. Such an interface would allow the human supervisors to better monitor the behavior of the AI in the sandbox environment and detect a treacherous turn. Additionally, the designers can implement a safety feature preventing the AI from erasing any of the log files, or from even knowing that it is being observed.
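The tamper-evident log described above can be sketched as a hash chain: each entry commits to everything before it, so the AI cannot erase or edit a record without breaking verification. A minimal illustration (the log messages are invented):

```python
import hashlib

class AppendOnlyLog:
    """Hash-chained log: each entry's digest covers the previous digest,
    so deleting or editing any record invalidates the chain."""

    def __init__(self):
        self.entries = []  # list of (message, chained_digest) pairs

    def append(self, message: str):
        prev = self.entries[-1][1] if self.entries else ""
        digest = hashlib.sha256((prev + message).encode()).hexdigest()
        self.entries.append((message, digest))

    def verify(self) -> bool:
        prev = ""
        for message, digest in self.entries:
            expected = hashlib.sha256((prev + message).encode()).hexdigest()
            if digest != expected:
                return False
            prev = digest
        return True

log = AppendOnlyLog()
log.append("task=plan ops=1.2e6")    # invented supervision records
log.append("task=answer ops=3.4e5")
assert log.verify()

log.entries[0] = ("task=plan ops=1", log.entries[0][1])  # tampering
assert not log.verify()
```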
\section*{Conclusion}
In the history of Artificial Intelligence, one of the greatest challenges has been to pass the Turing Test. To win the Imitation Game, chatbots are made \textit{Artificially Stupid}. More generally, introducing Artificial Stupidity into an AI can improve its interaction with humans (for instance in video games), but can also serve as a safety measure to better control an AGI. In this paper, we proposed a design for a safer, humanly-manageable AGI. It would be hardware-constrained, so as to have less computing power than humans, with an architecture very similar to the human brain's. Additionally, some software features might help avoid self-improvement or a treacherous turn, or simply make the AI safer (e.g. cognitive biases).
This approach presents several limitations and makes multiple assumptions:
\begin{itemize}
\item this paper is limited to the case where the first approach to AGI (emulating a human brain with neural networks) is possible, and will be the first to succeed
\item prohibiting the AGI from hardware or software self-improvement might prove a very difficult problem to solve, and may even be incompatible with corrigibility
\item it may be impossible to build an AI that simultaneously operates under such heavy constraints and still exhibits general intelligence
\item this approach does not generalize: it is impossible to build a safe Superintelligence from this AGI design
\end{itemize}
Therefore, progress must still be made to generalize this approach. Future directions of this research include:
\begin{itemize}
\item exploring other limits of human cognition
\item determining how to apply those limits for different AGI designs (Marcus Hutter's approach or Ben Goertzel's approach for instance)
\end{itemize}
\printbibliography
\end{document}
\def\exists{\exists}
\def\sharp{\sharp}
\def\natural{\natural}
\def\flat{\flat}
\def\emptyset{\emptyset}
\def\implies{{\Rightarrow}}
\def\impliedby{\impliedby}
\def\rightharpoonup{\rightharpoonup}
\def\uparrow{\uparrow}
\def\downarrow{\downarrow}
\def\nabla{\nabla}
\def\partial{\partial}
\def\triangle{\triangle}
\def\alpha{\alpha}
\def\beta{\beta}
\def\gamma{\gamma}
\def\delta{\delta}
\def\epsilon{\epsilon}
\def\zeta{\zeta}
\def\theta{\theta}
\def\kappa{\kappa}
\def\lambda{\lambda}
\def\mu{\mu}
\def\nu{\nu}
\def\xi{\xi}
\def\pi{\pi}
\def\rho{\rho}
\def\upsilon{\upsilon}
\def\iota{\iota}
\def\varphi{\varphi}
\def\sigma{\sigma}
\def\phi{\phi}
\def\psi{\psi}
\def\omega{\omega}
\def\Gamma{\Gamma}
\def\Delta{\Delta}
\def\Theta{\Theta}
\def\Lambda{\Lambda}
\def\Xi{\Xi}
\def\Sigma{\Sigma}
\def\Upsilon{\Upsilon}
\def\Phi{\Phi}
\def\Psi{\Psi}
\def\Omega{\Omega}
\begin{document}
\title{A Remark on Nonlinear Dirac Equations}
\author{Changyou Wang\thanks{Partially supported by NSF grant 0601162}\\
Department of Mathematics\\
University of Kentucky\\
Lexington, KY 40506, USA\\
{\it [email protected]}}
\date{}
\maketitle
\begin{abstract}
For an $n$-dimensional spin manifold $M$ with a fixed spin structure and a spinor bundle $\Sigma M$,
we prove an $\epsilon$-regularity theorem for weak solutions to the nonlinear Dirac equation
of cubic nonlinearity. This, in particular, answers a regularity question raised by Chen-Jost-Wang
\cite{chen-jost-wang2} when $n=2$.
\end{abstract}
\section{Introduction}
Linear Dirac type equations, including the Cauchy-Riemann equation in dimension two,
are the most fundamental first order systems of elliptic equations. In the course
of studying {\it Dirac-harmonic maps with curvature term} from a Riemann surface into a
Riemannian manifold, Chen-Jost-Wang \cite{chen-jost-wang1, chen-jost-wang2} introduced the nonlinear
Dirac equation with cubic nonlinearity:
\begin{equation}{}\label{dirac0}
\partial\hspace{-0.25cm}/\hspace{+0.05cm}\psi^i=\sum_{j,k,l=1}^N H_{jkl}^i\langle \psi^j, \psi^k\rangle \psi^l, \ 1\le i\le N.
\end{equation}
In dimension two, an interesting feature of this nonlinear Dirac equation is
that it is conformally invariant and has critical nonlinearity, so that the classical
methods fail to apply. It is therefore an interesting question to study the regularity of
weak solutions of (\ref{dirac0}). The aim of this short note is to provide an elementary
proof of a general regularity criterion for (\ref{dirac0}).
In order to describe the results, we briefly review some background materials on spin manifolds.
The interested reader may consult Lawson-Michelsohn \cite{lawson-michelsohn} and Chen-Jost-Li-Wang
\cite{chen-jost-li-wang1, chen-jost-li-wang2} for more details.
For $n\ge 2$, let $(M,g)$ be a spin manifold with a given spin structure and an associated
spinor bundle $\Sigma$. Let $\langle\cdot,\cdot\rangle$ be a Hermitian metric on $\Sigma$ and
$\nabla$ be the Levi-Civita connection on $\Sigma$ compatible with both $\langle\cdot,\cdot\rangle$
and $g$. The Dirac operator on $M$ is defined by
$\partial\hspace{-0.25cm}/\hspace{+0.05cm}=e_\alpha\circ\nabla_{e_\alpha}$,
where $\{e_\alpha\}_{\alpha=1}^n$ is a local orthonormal frame on $M$, and $\circ: TM\otimes_{\mathbb C}\Sigma\to\Sigma$
is the Clifford multiplication.
Let us now rewrite (\ref{dirac0}) in the form
\begin{equation}{}\label{dirac}
\partial\hspace{-0.25cm}/\hspace{+0.05cm}\psi= H_{jkl}\langle \psi^j, \psi^k\rangle \psi^l,
\end{equation}
where $\psi=(\psi^1,\cdots,\psi^N)\in \left(\Gamma\Sigma\right)^N$, $N\ge 1$,
$H_{jkl}=\left(H_{jkl}^1,\cdots, H_{jkl}^N\right)\in C^\infty(M,\mathbb R^N)$.
We refer the reader to \S 1 of \cite{chen-jost-wang2}, where the authors discuss two interesting
examples in which (\ref{dirac}) arises naturally. The first example is the Dirac-harmonic map
$(\phi,\psi)$ associated with the Dirac-harmonic energy functional with curvature term,
a nonlinear $\sigma$-model in superstring theory, in which the nonlinear Dirac equation
for $\psi$ reduces to (\ref{dirac}) when $\phi$ is a constant map. The second example is
the Weierstrass representation formula for minimal surfaces $X$ immersed in $\mathbb R^3$
by holomorphic $1$-forms and meromorphic functions, in which an equation of the form (\ref{dirac})
appears naturally.
It turns out that the natural function space for the equation (\ref{dirac}) is $L^4(M)$. As pointed
out in \cite{chen-jost-wang2}, any weak solution $\psi$ of (\ref{dirac}) is smooth provided
$\psi\in L^p(M)$ for some $p>4$. In \cite{chen-jost-wang2}, the authors proved three interesting
analytic properties of (\ref{dirac}) for $n=2$: (i) the gradient estimate for {\it smooth} solutions $\psi$
of (\ref{dirac}) under a smallness condition on the $L^4$-norm of $\psi$, (ii) the removable
isolated singularity theorem, and (iii) the energy identity theorem for weakly convergent sequences of smooth solutions
of (\ref{dirac}). At the end of \S1 in \cite{chen-jost-wang2}, the authors asked
\bcon{}\label{full_reg}
{\em For $n=2$, any weak solution $\psi\in L^4(M)$ of (\ref{dirac}) is smooth.}
\econ
In this short note, we answer Conjecture \ref{full_reg} affirmatively. In fact, we prove a general regularity
theorem for weak solutions of (\ref{dirac}) in any dimension. The idea is based on an application of the
estimate of Riesz potentials between Morrey spaces, due to Adams \cite{adams}. Similar techniques have
been employed in the proof of higher order regularity of Dirac-harmonic maps by Wang-Xu \cite{wang-xu}.
The proof turns out to be very elementary, and may be applicable to other similar problems.
Before stating our results, let's first recall
the definition of weak solutions of (\ref{dirac}).
\bde{}\label{weak_sol} A section $\psi\in L^4((\Gamma\Sigma)^N)$ is a weak solution of (\ref{dirac}) if
\begin{equation}{}
\label{weak_sol1}
\int_M \langle\psi, \partial\hspace{-0.25cm}/\hspace{+0.05cm}\eta\rangle
=\int_M H_{jkl}\left\langle \psi^j, \psi^k\right\rangle\left\langle \psi^l, \eta\right\rangle
\end{equation}
holds for any smooth section $\eta\in C^\infty\left((\Gamma\Sigma)^N\right)$.
\ede
Denote by ${\it i}_M>0$ the injectivity radius of $M$. For $0<r<{\it i}_M$ and $x\in M$, denote by $B_r(x)$
the geodesic ball in $M$ with center $x$ and radius $r$. Now we state our theorems.
\bth{}\label{epsilon_reg} For any $n\ge 2$, there exists $\epsilon_0>0$ depending on $n$ such that
if $\psi\in L^4((\Gamma\Sigma)^N)$ is a weak solution of the Dirac equation
(\ref{dirac}) and satisfies, for some $x_0\in M$ and $0<r_0\le \frac12{\it i}_M$,
\begin{equation}{} \label{small_cond}
\sup_{x\in B_{r_0}(x_0), \ 0<r\le r_0}\left\{\frac{1}{r^{n-2}}\int_{B_r(x)} |\psi|^4 \right\}\le\epsilon_0^4,
\end{equation}
then $\psi\in C^\infty(B_{\frac{r_0}2}(x_0))$.
\eth
Note that by H\"older inequality, we have for $n\ge 2$,
$$\frac{1}{r^{n-2}}\int_{B_r(x)} |\psi|^4 \le \left(\int_{B_r(x)} |\psi|^{2n}\right)^{\frac{2}{n}}.$$
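For the reader's convenience, here is the computation behind this step (a sketch; $\omega_n=|B_1|$ denotes the volume of the unit ball, our notation; for $n=2$ the exponent of $r$ is zero and no constant is needed):

```latex
% Hoelder's inequality on B_r(x) with exponents n/2 and n/(n-2), n >= 3:
$$\int_{B_r(x)} |\psi|^4
\le \left(\int_{B_r(x)} |\psi|^{2n}\right)^{\frac{2}{n}}
\left|B_r(x)\right|^{\frac{n-2}{n}}
=\omega_n^{\frac{n-2}{n}}\, r^{n-2}
\left(\int_{B_r(x)} |\psi|^{2n}\right)^{\frac{2}{n}},$$
% dividing by r^{n-2} gives the stated bound with the harmless
% dimensional constant \omega_n^{(n-2)/n}.
```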
Thus, as an immediate consequence of Theorem \ref{epsilon_reg}, we obtain
\bco{} \label{l-2n-reg} For $n\ge 2$, if $\psi\in L^{2n}((\Gamma\Sigma)^N)$ is a weak solution of
the Dirac equation (\ref{dirac}), then $\psi\in C^\infty((\Gamma\Sigma)^N)$.
\eco
It is clear that when $n=2$, Corollary \ref{l-2n-reg} implies Conjecture \ref{full_reg}.
\section{Proof of Theorem \ref{epsilon_reg}}
This section is devoted to the proof of Theorem \ref{epsilon_reg}. Since the regularity is a local property,
we assume, for simplicity of presentation, that for $x_0\in M$, the geodesic ball $B_{{\it i}_M}(x_0)\subset M$
with the metric $g$ is identified with $(B_2,g_0)$. Here $B_2\subset\mathbb R^n$ is the ball with center $0$
and radius $2$, and $g_0$ is the Euclidean metric on $\mathbb R^n$. We also assume that
$\Sigma\big|_{B_2}=B_2\times \mathbb C^L$, with $L={\rm{rank}}_{\mathbb C}\Sigma$.
Let's also recall the definition of Morrey spaces.
\bde{}\label{morrey} For $1\le p\le n$, $0<\lambda\le n$, and a domain $U\subseteq\mathbb R^n$, the
Morrey space $M^{p,\lambda}(U)$ is defined by
$$
M^{p,\lambda}(U)
:=\left\{f\in L^p_{\hbox{loc}}(U): \|f\|_{M^{p,\lambda}(U)}<+\infty\right\},$$
where
$$\left\Vert f\right\Vert _{M^{p,\lambda}(U)}^p=\sup\left\{r^{\lambda-n}\int_{B_r}|f|^p:\ B_r\subseteq U\right\}.$$
It is easy to see that for $1\leq p\leq n$, $M^{p,\lambda}(U)\subset L^p(U)$,
$M^{p,n}(U)=L^p(U)$, and $M^{p,p}(U)$ behaves like $L^{n}(U)$ from the viewpoint of scaling.
\ede
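The scaling claim in the last sentence can be verified directly. A minimal sketch, with $t>0$ a scaling parameter and $f_t(x):=f(tx)$ (our notation):

```latex
% change of variables y = tx:
$$r^{\lambda-n}\int_{B_r}|f_t|^p
= t^{-\lambda}\,(tr)^{\lambda-n}\int_{B_{tr}}|f|^p,
\qquad\hbox{hence}\qquad
\|f_t\|_{M^{p,\lambda}(\mathbb R^n)}
= t^{-\frac{\lambda}{p}}\,\|f\|_{M^{p,\lambda}(\mathbb R^n)}.$$
% For \lambda = p this homogeneity is t^{-1}, which is exactly that of
% \|f_t\|_{L^n(\mathbb R^n)} = t^{-1}\|f\|_{L^n(\mathbb R^n)}.
```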
It is clear that the condition (\ref{small_cond}) in Theorem \ref{epsilon_reg} is equivalent to
$$\left\|\psi\right\|_{M^{4,2}(B_{r_0}(x_0))}\le \epsilon_0.$$
Thus Theorem \ref{epsilon_reg} follows from the following lemma.
\ble{}\label{small_integ} For any $4<p<+\infty$ and $n\ge 2$,
there exists $\epsilon_0>0$ depending only on $p$ and $n$
such that if $\psi\in M^{4,2}(B_1)$ is a weak solution of (\ref{dirac}) and
$$\|\psi\|_{M^{4,2}(B_1)}\le\epsilon_0,$$
then $\psi\in L^p(B_\frac1{16},\mathbb C^{NL})$. Furthermore, $\psi\in C^\infty(B_\frac1{16}, \mathbb C^{NL})$ and
the estimate
\begin{equation}{}\label{prior-est}
\left\|\nabla^l\psi\right\|_{C^0(B_{\frac{1}{16}})}\le C(\epsilon_0, l), \ \forall l\ge 1
\end{equation}
holds.
\ele
\proof Applying $\partial\hspace{-0.25cm}/\hspace{+0.1cm}$ to (\ref{dirac}), we have, for $1\le i\le N$,
\begin{equation}{}\label{dirac1}
\partial\hspace{-0.25cm}/\hspace{+0.05cm}^2\psi^i
=\partial\hspace{-0.25cm}/\hspace{+0.05cm}\left(H_{jkl}^i\langle\psi^j, \psi^k\rangle \psi^l\right)
\end{equation}
in the sense of distributions. By Lichnerowicz's formula (cf. \cite{lawson-michelsohn}), since the metric on $B_2$ is flat, we have
$$-\Delta\psi^i=\partial\hspace{-0.25cm}/\hspace{+0.05cm}^2\psi^i.$$
Hence we have
\begin{equation}{}\label{dirac2}
-\Delta\psi^i
=\partial\hspace{-0.25cm}/\hspace{+0.05cm}\left(H_{jkl}\langle\psi^j, \psi^k\rangle \psi^l\right)
\end{equation}
in the sense of distributions.
For $m=1,2$, let $\eta_m\in C_0^\infty(B_1)$ be such that $0\le\eta_m\le 1$, $\eta_m\equiv 1$ on
$B_{2^{1-2m}}$. For $1\le i\le N$, define $f^i_m:\mathbb R^n\to\mathbb C^L$ by letting
\begin{equation}{}\label{auxil1}
f^i_m(x)=\int_{\mathbb R^n}\frac{\partial G(x,y)}{\partial y_\alpha}\,
e_\alpha\circ \left(\eta_m^3H^i_{jkl}\langle\psi^j, \psi^k\rangle \psi^l\right)
(y)\,dy,\end{equation}
where $G(x,y)$ is the fundamental solution of $\Delta$ on $\mathbb R^n$. For $1\le i\le N$, define $g_m^i: B_1\to\mathbb C^L$ by
letting
\begin{equation}{}\label{auxil2} \psi^i=f_m^i+g_m^i.
\end{equation}
Direct calculations imply that for $m=1,2$ and $1\le i\le N$,
\begin{eqnarray}\label{dirac3}
-\Delta f^i_m&=&\partial\hspace{-0.25cm}/\hspace{+0.05cm}
\left(\eta_m^3H_{jkl}\langle\psi^j, \psi^k\rangle \psi^l\right)\nonumber\\
&=&\partial\hspace{-0.25cm}/\hspace{+0.05cm}\left(
H_{jkl}\langle\psi^j, \psi^k\rangle \psi^l\right) \ \hbox{ in } B_{2^{1-2m}}.
\end{eqnarray}
This and (\ref{dirac2}) imply
\begin{equation}{}\label{dirac4}
\Delta g^i_m=0 \ \hbox{ in }B_{2^{1-2m}}.
\end{equation}
It follows from (\ref{auxil1}) that for $m=1,2$ and $1\le i\le N$,
\begin{equation}{}\label{one_reisz}
\left|f^i_m\right|(x)\le C\int_{\mathbb R^n}\left|x-y\right|^{1-n}\left(\eta_m(y)|\psi(y)|\right)^3\,dy
=CI_1(\eta_m^3|\psi|^3)(x),
\end{equation}
where
$$I_1(f)(x)=\int_{\mathbb R^n}\left|x-y\right|^{1-n}|f(y)|\,dy,\ \ f:\mathbb R^n\to \mathbb R,$$
is the Riesz potential of order one. Let's recall Adams' inequality on Morrey spaces (cf. \cite{adams}):
\begin{equation}{}\label{adams}
\left\|I_1(f)\right\|_{M^{\frac{\lambda q}{\lambda-q},\lambda}(\mathbb R^n)}
\le C\|f\|_{M^{q,\lambda}(\mathbb R^n)}, \ \forall 1\le q<\lambda\le n.
\end{equation}
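Both applications of (\ref{adams}) below take $q=\frac43$, since $\psi\in M^{4,\lambda}$ implies $|\psi|^3\in M^{\frac43,\lambda}$. The resulting target exponents are a matter of arithmetic:

```latex
% Step 1: q = 4/3, lambda = 2:
$$\frac{\lambda q}{\lambda-q}=\frac{2\cdot\frac43}{2-\frac43}=4;$$
% Step 2: q = 4/3, lambda = 2(1-\alpha), 0 < \alpha < 1/3:
$$\frac{\lambda q}{\lambda-q}
=\frac{\frac83(1-\alpha)}{2(1-\alpha)-\frac43}
=\frac{4(1-\alpha)}{1-3\alpha}
\;\longrightarrow\;+\infty
\quad\hbox{as }\alpha\uparrow\tfrac13.$$
```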
\noindent{\it Step} 1 ($m=1$).
Since $(\eta_1|\psi|)^3\in M^{\frac43,2}(\mathbb R^n)$, (\ref{adams}) implies that
for $1\le i\le N$,
\begin{eqnarray}\label{morrey_est}
\|f^i_1\|_{M^{4,2}(\mathbb R^n)}
&\le & C\|\eta_1^3|\psi|^3\|_{M^{\frac43,2}(\mathbb R^n)}
=C\|\eta_1|\psi|\|_{M^{4,2}(\mathbb R^n)}^3\nonumber\\
&\le& C\|\psi\|_{M^{4,2}(B_1)}^3\le C\epsilon_0^2\|\psi\|_{M^{4,2}(B_1)}.
\end{eqnarray}
On the other hand, by the standard estimate for harmonic functions, we have that for any
$\theta\in (0,\frac14)$ and $x_0\in B_{\frac14}$
\begin{equation}{}\label{morrey_est1}
\|g^i_1\|_{M^{4,2}(B_\theta(x_0))}\le C\theta^{\frac12} \|g^i_1\|_{M^{4,2}(B_{\frac12})},
\ \forall 1\le i \le N.
\end{equation}
Putting (\ref{morrey_est}) and (\ref{morrey_est1}) into (\ref{auxil2}) yields that for $1\le i\le N$,
\begin{eqnarray}
\|\psi^i\|_{M^{4,2}(B_\theta(x_0))}
&\le & C\theta^{\frac12}\|g^i_1\|_{M^{4,2}(B_{\frac12})}+C\epsilon_0^2\|\psi\|_{M^{4,2}(B_1)}\nonumber\\
&\le & C\theta^{\frac12}\left[\|\psi^i\|_{M^{4,2}(B_{\frac12})}
+\|f^i_1\|_{M^{4,2}(B_{\frac12})}\right]
+C\epsilon_0^2\|\psi\|_{M^{4,2}(B_1)}\nonumber\\
&\le & C\left(\epsilon_0^2+\theta^{\frac12}\right)\|\psi\|_{M^{4,2}(B_1)}.
\end{eqnarray}
This clearly implies that for any $\theta\in (0, \frac14)$ and $x_0\in B_{\frac14}$,
\begin{equation}{}\label{morrey_est2}
\|\psi\|_{M^{4,2}(B_\theta(x_0))}\le C\left(\epsilon_0^2+\theta^\frac12\right)\|\psi\|_{M^{4,2}(B_1)}.
\end{equation}
For any $\alpha\in (0,\frac13)$, first choose $\theta\in (0,\frac14)$ such that $C\theta^{\frac12}
\le \frac12\theta^{\frac{\alpha}2}$, and then choose $\epsilon_0>0$ such that
$C\epsilon_0^2\le \frac12\theta^{\frac{\alpha}2}$. Then we have
\begin{equation}{}\label{morrey_est3}
\|\psi\|_{M^{4,2}(B_\theta(x_0))}\le \theta^{\frac{\alpha}2}\|\psi\|_{M^{4,2}(B_1)},
\ \forall x_0\in B_{\frac14}.
\end{equation}
Iteration of (\ref{morrey_est3}) yields
\begin{equation}{}\label{morrey_est4}
\|\psi\|_{M^{4,2}(B_r(x_0))}\le Cr^{\frac{\alpha}2}\|\psi\|_{M^{4,2}(B_1)},
\ \forall x_0\in B_{\frac14}\ {\rm{ and } }\ 0\le r<\frac14.
\end{equation}
In particular, we have for any $0<\alpha<\frac13$,
\begin{equation}{}\label{morrey_est4a}
r^{2(1-\alpha)-n}\int_{B_r(x_0)}|\psi|^4\le C\int_{B_1}|\psi|^4,
\ \forall x_0\in B_{\frac14} \ \hbox{ and } 0<r<\frac14.
\end{equation}
Thus $\psi\in M^{4,2(1-\alpha)}(B_\frac14)$ for any $\alpha\in (0,\frac13)$.\\
\noindent{\it Step} 2 ($m=2$).
We want to repeat the above argument to show that
$\psi\in M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac1{16}})$.
In fact, since $(\eta_2|\psi|)^3\in M^{\frac43,2(1-\alpha)}(\mathbb R^n)$,
(\ref{adams}) implies that $f_2^i\in M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(\mathbb R^n)$,
and
\begin{eqnarray}\label{morrey_est5}
\left\|f_2^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac18})}
&\le& \left\|f_2^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(\mathbb R^n)}\nonumber\\
&\le& C\left\|\eta_2^3|\psi|^3\right\|_{M^{\frac43, 2(1-\alpha)}(\mathbb R^n)}\nonumber\\
&\le &C \left\|\psi\right\|_{M^{4,2(1-\alpha)}(B_{\frac14})}.
\end{eqnarray}
On the other hand, since $g_2^i$ is a harmonic function on $B_{\frac18}$, we have, by (\ref{morrey_est5}),
\begin{eqnarray}\label{morrey_est6}
&&\left\|g_2^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac1{16}})}\nonumber\\
&\le& C\left\|g_2^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac18})}\nonumber\\
&\le& C\left[\left\|f_2^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac18})}
+\left\|\psi^i\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac18})}\right]\nonumber\\
&\le & C\left\|\psi\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac18})}.
\end{eqnarray}
Putting (\ref{morrey_est5}) and (\ref{morrey_est6}) into (\ref{auxil2}) yields
that $\psi\in M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac1{16}})$ and
\begin{equation}{}\label{morrey_est7}
\left\|\psi\right\|_{M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac1{16}})}
\le C\left\|\psi\right\|_{M^{4,2(1-\alpha)}(B_{\frac14})} \le C\left\|\psi\right\|_{M^{4,2}(B_1)}.
\end{equation}
Since
$$\lim_{\alpha\uparrow\frac13}\frac{4(1-\alpha)}{1-3\alpha}=+\infty
\ {\rm{and}}\ \ M^{\frac{4(1-\alpha)}{1-3\alpha}, 2(1-\alpha)}(B_{\frac1{16}})
\subseteq L^{\frac{4(1-\alpha)}{1-3\alpha}}(B_{\frac1{16}}),$$
it follows that $\psi\in L^p(B_{\frac1{16}})$ for any $p>4$, and
\begin{equation}{}\label{lp_est}
\left\|\psi\right\|_{L^p(B_{\frac1{16}})}\le C(n,p)\|\psi\|_{M^{4,2}(B_1)}.
\end{equation}
Since
$|\partial\hspace{-0.25cm}/\hspace{0.125cm}\psi|
\le C|\psi|^3$,
the $W^{1,p}$-estimate implies that $\psi\in W^{1,p}_{{\rm{loc}}}(B_{\frac1{16}},\mathbb C^{NL})$
for any $p>4$. Hence, by the Sobolev embedding theorem, $\psi\in C^\mu(B_{\frac1{16}},\mathbb C^{NL})$
for any $\mu\in (0,1)$. By the Schauder estimate, this yields $\psi\in C^{1,\mu}(B_{\frac1{16}},\mathbb C^{NL} )$.
Hence, by the bootstrap argument, we conclude $\psi\in C^\infty(B_{\frac1{16}},\mathbb C^{NL})$
and the estimate (\ref{prior-est}) holds.
\qed
\section{Introduction} In 1983, L.G.~Brown~\cite{Br} introduced a spectral
distribution measure for non-normal elements in a finite von Neumann
algebra with respect to a fixed normal faithful tracial state, which
is called the Brown measure of the operator. Recently, U.~Haagerup
and H.~Schultz~\cite{H-S1} proved a remarkable result which states
that if the support of the Brown measure of an operator in a type ${\rm
II}_1$ factor contains more than one point, then the operator has a
non-trivial hyperinvariant subspace affiliated with the type ${\rm
II}_1$ factor. In general, the computation of Brown measures
of non-normal operators is nontrivial. The first essential result
was given by Haagerup and F.~Larsen. In~\cite{H-L}, Haagerup and
Larsen computed the spectrum and Brown measure of R-diagonal
operators in a finite von Neumann algebra, in terms of the
distribution of its radial part. Brown measures of some non-normal,
non-R-diagonal operators were computed by P.~Biane and F.~Lehner
in~\cite{B-L}; examples include $u_n+u_\infty$, where $u_n$ and
$u_\infty$ are the generators of $\mathbb{Z}_n$ and $\mathbb{Z}$,
respectively, in the free product $\mathbb{Z}_n*\mathbb{Z}$, and
elements of the form $S_\alpha+iS_\beta$, where $S_\alpha$ and
$S_\beta$ are free semi-circular elements of variance $\alpha$ and
$\beta$. The purpose of
this paper is to compute the spectra and Brown measures of some
non-hermitian operators in $(M_2(\mathbb{C}), \frac{1}{2}Tr)*(M_2(\mathbb{C}),
\frac{1}{2}Tr)$, the reduced free product von Neumann algebra of
$M_2(\mathbb{C})$ with $M_2(\mathbb{C})$ (cf. [Ch]). Examples include $AB$ and
$A+B$, where $A$ and $B$ are matrices in $(M_2(\mathbb{C}),
\frac{1}{2}Tr)*1$ and $1*(M_2(\mathbb{C}),
\frac{1}{2}Tr)$, respectively. This paper is organized as follows.\\
In section 2 we recall preliminary facts about Brown measures,
R-diagonal operators, Haagerup and Larsen's result on Brown measures
of R-diagonal operators and some notation used in this paper. In
section 3, we provide some results on the spectra and spectral
radius of operators in $M_2(\mathbb{C})*M_2(\mathbb{C})$, the universal free
product C*-algebra of $M_2(\mathbb{C})$ with $M_2(\mathbb{C})$. First, we compute
the spectral radius of $AB$ for two normal matrices $A\in
M_2(\mathbb{C})*1$ and $B\in 1*M_2(\mathbb{C})$ relative to $M_2(\mathbb{C})*M_2(\mathbb{C})$.
As a corollary, we also get the spectral radius of $AB$ for normal
matrices $A\in (M_2(\mathbb{C}), \frac{1}{2}Tr)*1$ and $B\in 1*(M_2(\mathbb{C}),
\frac{1}{2}Tr)$, relative to the reduced free product von Neumann
algebra of $M_2(\mathbb{C})$ with $M_2(\mathbb{C})$. Then we obtain the following
result: Let $A$,$B$ be matrices in $M_2(\mathbb{C})*1$ and $1*M_2(\mathbb{C})$,
respectively, such that $Tr(A)=Tr(B)=0$. Then $\sigma(AB)$, the
spectrum of $AB$, relative to $M_2(\mathbb{C})*M_2(\mathbb{C})$, is the closure of
the annulus centered at 0 with inner radius
$\|A^{-1}\|^{-1}\|B^{-1}\|^{-1}$ and outer radius $\|A\|\|B\|$,
where we use the convention $\infty^{-1}=0$ and if $A$ is not
invertible then $\|A^{-1}\|:=\infty$.\\
In section 4 we prove that $AB$ is an R-diagonal operator if and
only if $Tr(A)=Tr(B)=0$, where $A\in (M_2(\mathbb{C}), \frac{1}{2}Tr)*1$
and $B\in 1* (M_2(\mathbb{C}), \frac{1}{2}Tr)$. As a corollary, we
explicitly compute the spectrum and Brown measure of $AB$
($Tr(A)=Tr(B)=0$) in terms of
$\S-$transform of $A^*A$ and $B^*B$.\\
In
section 5, we develop algebraic techniques used in~\cite{Dy}. Let
$X\in 1*(M_2(\mathbb{C}), \frac{1}{2}Tr)$. With respect to the matrix units
of $(M_2(\mathbb{C}), \frac{1}{2}Tr)*1$,
$X=\left(\begin{array}{cc} x_1&x_2\\
x_3&x_4
\end{array}\right).$ By~\cite{Dy}, $(M_2(\mathbb{C}),
\frac{1}{2}Tr)*(M_2(\mathbb{C}), \frac{1}{2}Tr)\cong L(\mathbb{F}_3)\otimes
M_2(\mathbb{C})$. So $x_1,x_2,x_3,x_4\in L(\mathbb{F}_3)$. In section 5, we
find $\ast$-free generators $h,u,v$ of $L(\mathbb{F}_3)$ (different
from the free generators given in~\cite{Dy}) so that we may
explicitly write out $x_1,x_2,x_3, x_4$ in terms of $h,u,v$.\\
In section 6, we compute miscellaneous examples of Brown measures of
operators $A+B$ and $AB$, where $A\in (M_2(\mathbb{C}), \frac{1}{2}Tr)*1$
and $B\in 1* (M_2(\mathbb{C}), \frac{1}{2}Tr)$. As a corollary, we show
that $A+B$ is an R-diagonal operator if and
only if $A+B=0$.\\
In section 7, we prove the following result: Let $A\in (M_2(\mathbb{C}),
\frac{1}{2}Tr)*1$ and $B\in 1* (M_2(\mathbb{C}),\frac{1}{2}Tr)$. If
$X=A+B$ or $X=AB$ and $A,B$ are not scalar matrices, then the Brown
measure of $X$ is not concentrated on a single point. As a corollary
of Theorem 7.1 of~\cite{H-S1}, we prove that if $X=A+B$ or $X=AB$ and
$X\neq \lambda 1$, then $X$ has a nontrivial hyperinvariant subspace
affiliated with $(M_2(\mathbb{C}), \frac{1}{2}Tr)*(M_2(\mathbb{C}),
\frac{1}{2}Tr)$.\\
Many concrete examples of spectra and Brown measures are given in
this paper.
For some interesting applications, we refer to~\cite{FDM}.\\
\section{Preliminaries}
\subsection{Fuglede-Kadison determinant and Brown's spectral
measure.} Let $\mathcal M$ be a finite von Neumann algebra with a faithful
tracial state $\tau$. The \emph{Fuglede-Kadison
determinant}~\cite{F-K}, $\Delta:\,\mathcal M\rightarrow [0,\infty[$, is
given by
\[\Delta(T)=\exp\{\tau(\log|T|)\},\qquad T\in\mathcal M,\]
with $\exp\{-\infty\}:=0$. For an arbitrary element $T$ in $\mathcal M$ the
function $\lambda\mapsto \log \Delta(T-\lambda 1)$ is
subharmonic on $\mathbb{C}$, and its Laplacian
\[d\mu_{T}(\lambda):=\frac{1}{2\pi} \bigtriangledown^2 \log
\Delta(T-\lambda 1),\] in the distribution sense, defines a
probability measure $\mu_T$ on $\mathbb{C}$, called the
\emph{Brown measure}~\cite{Br} of $T$. By definition, the Brown
measure $\mu_T$ depends only on the joint
distribution of $T$ and $T^*$.\\
If $T$ is normal, $\mu_T$
is the trace $\tau$ composed with the spectral projections of $T$.
If $\mathcal M=M_n(\mathbb{C})$ and $\tau=\frac{1}{n} Tr$ is the normalized trace
on $M_n(\mathbb{C})$, then $\mu_T$ is the normalized counting measure
$\frac{1}{n}\left(\delta_{\lambda_1}+\delta_{\lambda_2}+\cdots+\delta_{\lambda_n}\right)$,
where $\lambda_1,\lambda_2,\cdots,\lambda_n$ are the eigenvalues of $T$
repeated according to root multiplicity.
\\
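As a simple illustration (our example): for the normalized trace on $M_2(\mathbb{C})$ one has $\Delta(T)=|\det T|^{1/2}$, so for the non-normal Jordan block one computes

```latex
$$T=\left(\begin{array}{cc} 0&1\\ 0&0\end{array}\right),\qquad
\Delta(T-\lambda 1)=\left|\det(T-\lambda 1)\right|^{\frac12}=|\lambda|,$$
% and since (1/2\pi)\bigtriangledown^2 \log|\lambda| = \delta_0,
$$\mu_T=\delta_0,$$
% in agreement with the counting-measure formula: both eigenvalues
% of T vanish, even though T is not normal.
```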
The Brown measure has the following properties (see~\cite{Br,
H-S2}): $\mu_T$ is the unique compactly supported measure on $\mathbb{C}$
such that $\log\Delta(T-\lambda 1)=\int_{\mathbb{C}}
\log|z-\lambda|d\mu_T(z)$ for all $\lambda\in\mathbb{C}$. The support of
$\mu_T$ is contained in $\sigma(T)$, the spectrum of $T$.
$\mu_{ST}=\mu_{TS}$ for arbitrary $S,T$ in $\mathcal M$, and if $f(z)$ is
analytic in a neighborhood of $\sigma(T)$, $\mu_{f(T)}=(\mu_T)_f$,
the push-forward measure of $\mu_T$ under the map
$\lambda\rightarrow f(\lambda)$. If $E\in\mathcal M$ is a projection such
that $E\in Lat T$, then with respect to $E, I-E$ we can write
\[T=\left(\begin{array}{cc}
A&B\\
0&C
\end{array}\right),\]
where $A=ETE$ and $C=(I-E)T(I-E)$ are elements of $\mathcal M_1=E\mathcal M E$ and
$\mathcal M_2=(I-E)\mathcal M (I-E)$, respectively. Let $\mu_{A}$ and $\mu_{C}$ be
the Brown measures of $A$ and $C$ computed relative to $\mathcal M_1$ and
$\mathcal M_2$, respectively. Then $\mu_T=\alpha\mu_A+(1-\alpha)\mu_C$,
where $\alpha=\tau(E)$. \\
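A minimal instance of the last property (our example): take $\mathcal M=M_2(\mathbb{C})$ with the normalized trace and $E$ the projection onto the first basis vector, so $\alpha=\tau(E)=\frac12$. Then

```latex
$$T=\left(\begin{array}{cc} a&b\\ 0&c\end{array}\right),\qquad
\mu_T=\tfrac12\,\delta_a+\tfrac12\,\delta_c
\quad\hbox{for every }b\in\mathbb{C},$$
% the Brown measure is blind to the off-diagonal entry b, consistent
% with the eigenvalue description of \mu_T for matrices given above.
```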
For a generalization of Brown measures of sets of commuting
operators in a type ${\rm II}_1$ factor, we refer to~\cite{Sc}.
\subsection{R-diagonal operators}
In 1995, A.~Nica and S.~Speicher~\cite{N-S1} introduced the class
of R-diagonal operators in non-commutative probability spaces.
Recall that an operator $T$ in a non-commutative probability space
is an $R-$diagonal operator if the $R-$transform $R_{\mu(T,T^*)}$ of
the joint distribution $\mu(T,T^*)$ of $T, T^*$ is of the form
\[R_{\mu(T,T^*)}(z_1,z_2)=\sum_{n=1}^{\infty}\alpha_n (z_1z_2)^n+
\sum_{n=1}^{\infty}\alpha_n (z_2z_1)^n.\] Nica and
Speicher~\cite{N-S1} proved that $T$ is an $R-$diagonal operator if
and only if $T$ has the same $\ast$-distribution as a product $UH$, where
$U$ and $H$ are $\ast$-free random variables in some tracial
noncommutative probability space, $U$ is a Haar unitary operator and
$H$ is positive. If $T$ is an R-diagonal operator, then the
$\ast$-distribution of $T$ is uniquely determined by the
distribution of $T^*T=|T|^2$. If $T$ is an R-diagonal operator and
$S$ is $\ast$-free with $T$, then both $ST$ and $TS$ are R-diagonal
operators (see~\cite{N-S1}). If $T$ is an R-diagonal operator and
$n\in \mathbb{N}$, then $T^n$ is also an R-diagonal operator
(see~\cite{H-L,La}). For other important properties of R-diagonal
operators, we refer to~\cite{H-L, La,N-S1, N-S2}.
\subsection{Brown measures of R-diagonal operators}
In~\cite{H-L}, Haagerup and Larsen explicitly computed the Brown
measures of {R-diagonal} operators in a finite von Neumann algebra.
\begin{Theorem} \emph{(Theorem 4.4 of~\cite{H-L})} Let $U,H$ be $\ast$-free random variables in a noncommutative probability space $(\mathcal M,\tau)$,
with $U$ a Haar unitary operator and $H$ a positive operator such
that the distribution $\mu_H$ of $H$ is not a Dirac measure. Then
the Brown measure $\mu_{UH}$ of $UH$ can be computed as follows.
\begin{enumerate}
\item $\mu_{UH}$ is rotation invariant and its support is the
annulus with inner radius $\|H^{-1}\|_2^{-1}$ and outer radius
$\|H\|_2$.
\item $\mu_{UH}(\{0\})=\mu_{H}(\{0\})$ and for $t\in ]\mu_H(\{0\}),
1]$,
\[\mu_{UH}\left(\mathbb{B}\left(0, \left(\S_{\mu_{H^2}}(t-1)\right)^{-1/2}\right)\right)=t,\]
where $\S_{\mu_{H^2}}$ is the $\S-$transform of $H^2$ and
$\mathbb{B}(0,r)$ is the open disc with center 0 and radius $r$;
\item $\mu_{UH}$ is the only rotation invariant symmetric
probability measure satisfying property 2.
\end{enumerate}
Furthermore, if $H$ is invertible, then $\sigma(UH)={\rm supp}\,\mu_{UH}$;
if $H$ is not invertible, then
$\sigma(UH)=\overline{\mathbb{B}(0,\|H\|_2)}.$
\end{Theorem}
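For illustration (a standard example, assuming the well-known fact that if $H$ is quarter-circular with $\|H\|_2=1$, then $\mu_{H^2}$ is the free Poisson distribution with $\S_{\mu_{H^2}}(z)=\frac{1}{1+z}$): in this case $UH$ is Voiculescu's circular element, and part 2 of the theorem gives

```latex
$$\left(\S_{\mu_{H^2}}(t-1)\right)^{-1/2}=\sqrt{t},
\qquad
\mu_{UH}\left(\mathbb{B}(0,\sqrt{t}\,)\right)=t,
\quad t\in[0,1],$$
% i.e. \mu_{UH} is the uniform probability measure on the closed unit
% disc. Since H is not invertible, \sigma(UH) is the closed unit disc.
```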
\subsection{Some Notation}
The following notation will be used in the rest of the paper:
\begin{itemize}
\item $(\mathcal M,\tau)=(M_2(\mathbb{C}), \frac{1}{2}Tr)* (M_2(\mathbb{C}), \frac{1}{2}Tr)$ denotes the reduced free product von Neumann algebra
of $M_2(\mathbb{C})$ with $M_2(\mathbb{C})$ with the unique tracial state $\tau$;
\item $M_2(\mathbb{C})_{(1)}:= (M_2(\mathbb{C}), \frac{1}{2}Tr)*1$ and $M_2(\mathbb{C})_{(2)}:=
1*(M_2(\mathbb{C}), \frac{1}{2}Tr)$;
\item $\{E_{ij}\}_{i,j=1,2}, \{F_{ij}\}_{i,j=1,2}$ are matrix units of
$M_2(\mathbb{C})_{(1)}$ and $M_2(\mathbb{C})_{(2)}$, respectively;
\item $P=E_{11}$ and $Q=F_{11}$;
\item $\mathcal M\cong \mathcal N\otimes M_2(\mathbb{C})_{(1)}\cong \mathcal N\otimes M_2(\mathbb{C})_{(2)}$.
For $X\in \mathcal M$,
$X=\left(\begin{array}{cc} x_1&x_2\\
x_3&x_4
\end{array}\right)_{(1)}=\left(\begin{array}{cc} x_1'&x_2'\\
x_3'&x_4'
\end{array}\right)_{(2)}$ means the decomposition is
with respect to above matrix units of $M_2(\mathbb{C})_{(1)}$ and
$M_2(\mathbb{C})_{(2)}$, respectively.
\item $W_0=\left(\begin{array}{cc}
1&0\\
0&1
\end{array}\right)_{(1)}, W_1=\left(\begin{array}{cc}
1&0\\
0&-1
\end{array}\right)_{(1)}, W_2=\left(\begin{array}{cc}
0&-1\\
1&0
\end{array}\right)_{(1)}, W_3=\left(\begin{array}{cc}
0&1\\
1&0
\end{array}\right)_{(1)}$;
\item $V_0=\left(\begin{array}{cc}
1&0\\
0&1
\end{array}\right)_{(2)}, V_1=\left(\begin{array}{cc}
1&0\\
0&-1
\end{array}\right)_{(2)}, V_2=\left(\begin{array}{cc}
0&-1\\
1&0
\end{array}\right)_{(2)}, V_3=\left(\begin{array}{cc}
0&1\\
1&0
\end{array}\right)_{(2)}$;
\item $A,A_1,\cdots,A_n$ denote elements in $M_2(\mathbb{C})_{(1)}$,
$B,B_1,\cdots,B_n$ denote elements in $M_2(\mathbb{C})_{(2)}$, $X,Y,Z$
denote general elements in $\mathcal M$;
\item An element $X$ in $\mathcal M$ is called \emph{centered} if
$\tau(X)=0$.
\end{itemize}
We end this section with the following lemma. The proof is an easy
exercise.
\begin{Lemma} $V_1M_2(\mathbb{C})_{(1)} V_1$ is free with
$M_2(\mathbb{C})_{(1)}$.
\end{Lemma}
\section{Spectra of elements in the universal free product of $M_2(\mathbb{C})$ and $M_2(\mathbb{C})$}
Let $\AA=M_2(\mathbb{C})*M_2(\mathbb{C})$ denote the universal free product
C*-algebra of $M_2(\mathbb{C})$ with $M_2(\mathbb{C})$. Then there is a
$\ast$-homomorphism $\pi$ from $\AA$ onto the reduced free product
C*-algebra of $M_2(\mathbb{C})$ and $M_2(\mathbb{C})$, the C*-subalgebra generated
by $M_2(\mathbb{C})_{(1)}$ and $M_2(\mathbb{C})_{(2)}$ in $\mathcal M$. Since
$\sigma(\pi(a))\subseteq \sigma(a)$ for $a\in \AA$, it is useful to
obtain some information on the spectrum of $AB$, where $A\in M_2(\mathbb{C})*1$
and $B\in 1* M_2(\mathbb{C})$.
\subsection{``Free product" of normal matrices}
\begin{Lemma} Let $A\in M_2(\mathbb{C})*1$ and $B\in 1*M_2(\mathbb{C})$ be normal
matrices. Then $r(AB)=\|A\|\cdot \|B\|$ relative to $\AA$.
\end{Lemma}
\begin{proof} $r(AB)\leq \|AB\|\leq \|A\|\cdot\|B\|$. We only need to prove that $r(AB)\geq \|A\|\cdot\|B\|$.
Since $A$ is a normal matrix, there is a unitary matrix $U_1\in
M_2(\mathbb{C})*1$ such that $U_1AU_1^*=\left(\begin{array}{cc}
\alpha_1&0\\
0&\beta_1
\end{array}\right)$
and $|\alpha_1|=\|A\|$. Similarly, there is a unitary matrix $U_2\in
1*M_2(\mathbb{C})$ such that $U_2BU_2^*=\left(\begin{array}{cc}
\alpha_2&0\\
0&\beta_2
\end{array}\right)$
and $|\alpha_2|=\|B\|$. Let $\pi_1(X)=U_1XU_1^*$ and
$\pi_2(Y)=U_2YU_2^*$ be $\ast$-representations of $M_2(\mathbb{C})*1$ and
$1*M_2(\mathbb{C})$ to $M_2(\mathbb{C})$, respectively. Then there is a
$\ast$-representation $\pi=\pi_1*\pi_2$ from $\AA$ to $M_2(\mathbb{C})$ and
$\pi(AB)=\left(\begin{array}{cc}
\alpha_1\alpha_2&0\\
0&\beta_1\beta_2
\end{array}\right)$. Therefore, $\alpha_1\alpha_2\in
\sigma(\pi(AB))\subseteq \sigma(AB)$. So $r(AB)\geq
|\alpha_1\alpha_2|=\|A\|\cdot \|B\|$.
\end{proof}
\begin{Corollary}Let $A\in M_2(\mathbb{C})_{(1)}$ and $B\in M_2(\mathbb{C})_{(2)}$ be normal
matrices. Then $r(AB)=\|A\|\cdot \|B\|$ relative to $\mathcal M$.
\end{Corollary}
\begin{proof} We may assume
that $A$ and $B$ are diagonal matrices. Then we can treat $AB$ as an
operator in the full free product $C^*(\mathbb{Z}_2\ast
\mathbb{Z}_2)$. The same technique as in the previous lemma gives the
corollary.
\end{proof}
\subsection{``Free product'' of non-normal matrices}
It is well-known that two matrices $X,Y$ in $M_2(\mathbb{C})$ are unitarily
equivalent if and only if $Tr(X)=Tr(Y), Tr(X^2)=Tr(Y^2)$ and
$Tr(X^*X)=Tr(Y^*Y)$. The proof of the following lemma is now an
easy exercise.
\begin{Lemma} If $A\in M_2(\mathbb{C})$ and $Tr(A)=0$, then
$A$ is unitarily equivalent to a matrix of form $\displaystyle
\left(\begin{array}{cc} 0&\alpha\\
\beta&0
\end{array}\right),$ where $\alpha,\beta$ are complex numbers.
\end{Lemma}
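For completeness, here is the computation behind the exercise. Since
$Tr(A)=0$, the Cayley-Hamilton theorem gives $A^2=-\det(A)1$, so
$Tr(A^2)=-2\det(A)$. For $X=\left(\begin{array}{cc} 0&\alpha\\
\beta&0
\end{array}\right)$ we have $Tr(X)=0$, $Tr(X^2)=2\alpha\beta$ and
$Tr(X^*X)=|\alpha|^2+|\beta|^2$. By the criterion above, it suffices
to choose $\alpha,\beta$ with
\[\alpha\beta=-\det(A),\qquad |\alpha|^2+|\beta|^2=Tr(A^*A),\]
which is possible because $Tr(A^*A)=s_1^2+s_2^2\geq
2s_1s_2=2|\det(A)|$, where $s_1,s_2$ are the singular values of $A$.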
\begin{Remark} We have the following useful observations:
\begin{itemize}
\item $\left(\begin{array}{cc}
0&1\\
1&0
\end{array}\right)\left(\begin{array}{cc}
0&\alpha\\
\beta&0
\end{array}\right)\left(\begin{array}{cc}
0&1\\
1&0
\end{array}\right)=\left(\begin{array}{cc}
0&\beta\\
\alpha&0
\end{array}\right).$
\item $\left(\begin{array}{cc}
1&0\\
0&e^{i(\theta_1-\theta_2)/2}
\end{array}\right)\left(\begin{array}{cc}
0&|\alpha|e^{i\theta_1}\\
|\beta|e^{i\theta_2}&0
\end{array}\right)\left(\begin{array}{cc}
1&0\\
0&e^{-i(\theta_1-\theta_2)/2}
\end{array}\right)=e^{i(\theta_1+\theta_2)/2}\left(\begin{array}{cc}
0&|\alpha|\\
|\beta|&0
\end{array}\right).$
\end{itemize}
\end{Remark}
\begin{Lemma}Let $A\in M_2(\mathbb{C})*1$ and $B\in 1*M_2(\mathbb{C})$ be
matrices such that $Tr(A)=Tr(B)=0$. Then $r(AB)=\|A\|\cdot \|B\|$
relative to $\AA$.
\end{Lemma}
\begin{proof} We need only to prove $r(AB)\geq \|A\|\cdot \|B\|$.
By Lemma 3.3 and Remark 3.4, there are unitary matrices
$U,V$ in $M_2(\mathbb{C})$ such that
$UAU^*=\left(\begin{array}{cc} 0&\alpha_1\\
\beta_1&0\\
\end{array}\right)$ and $VBV^*=\left(\begin{array}{cc} 0&\alpha_2\\
\beta_2&0\\
\end{array}\right)$ and $|\alpha_1|=\|A\|, |\beta_2|=\|B\|$. Let
$\pi_1(X)=UXU^*$ and $\pi_2(Y)=VYV^*$ be $\ast$-representations of
$M_2(\mathbb{C})*1$ and $1*M_2(\mathbb{C})$ to $M_2(\mathbb{C})$, respectively. Let
$\pi=\pi_1*\pi_2$ be the induced $\ast$-representation of $\AA$ to
$M_2(\mathbb{C})$. Then $\sigma(AB)\supseteq
\sigma(\pi(AB))=\sigma(\pi_1(A)\pi_2(B))=\{\alpha_1\beta_2,
\alpha_2\beta_1\}$. Therefore, $r(AB)\geq
|\alpha_1\beta_2|=\|A\|\cdot \|B\|$.
\end{proof}
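Explicitly, the matrix product behind the last step is
\[\left(\begin{array}{cc} 0&\alpha_1\\
\beta_1&0
\end{array}\right)\left(\begin{array}{cc} 0&\alpha_2\\
\beta_2&0
\end{array}\right)=\left(\begin{array}{cc} \alpha_1\beta_2&0\\
0&\beta_1\alpha_2
\end{array}\right),\]
so $\pi(AB)$ is diagonal with eigenvalues $\alpha_1\beta_2$ and
$\alpha_2\beta_1$.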
\begin{Theorem}Let $A\in M_2(\mathbb{C})*1$ and $B\in 1*M_2(\mathbb{C})$ be
matrices such that $Tr(A)=Tr(B)=0$. Then
\[\sigma(AB)=[\|A^{-1}\|^{-1}\|B^{-1}\|^{-1},
\|A\|\|B\|]\times_p [0,2\pi],\] where $\times_p$ denotes the polar
set product $\{re^{i\theta}:\,\, r\in
[\|A^{-1}\|^{-1}\|B^{-1}\|^{-1}, \|A\|\|B\|],\,\, \theta\in
[0,2\pi]\}$ (with the convention $\|A^{-1}\|^{-1}=0$ when $A$ is not
invertible, and similarly for $B$).
\end{Theorem}
\begin{proof} We prove the theorem in two cases.\\
\noindent Case 1. Either $A$ or $B$ is not invertible. We may assume
that $A$ is not invertible. By $Tr(A)=0$, Lemma 3.3 and Remark 3.4,
$A$ is unitarily
equivalent to $\left(\begin{array}{cc} 0&\alpha_1\\
0&0
\end{array}\right)$. Without loss of generality, we assume that $A=\left(\begin{array}{cc} 0&1\\
0&0
\end{array}\right)\in M_2(\mathbb{C})*1$. By Lemma 3.3 and Remark 3.4, we may also assume that $B=\left(\begin{array}{cc} 0&\alpha\\
\beta&0
\end{array}\right)\in 1*M_2(\mathbb{C})$ and $\beta\geq \alpha\geq 0$. We need to prove
that $\sigma(AB)$ is the closed disc of complex plane with center 0
and radius $\beta$. Since $A$ is unitarily equivalent to
$e^{i\theta}A$ in $M_2(\mathbb{C})*1$, $\sigma(AB)$ is rotation invariant.
For $\theta\in [0,2\pi]$, let $U=\left(\begin{array}{cc} \cos\theta& \sin\theta\\
-\sin\theta&\cos\theta
\end{array}\right)$. Let $\pi_1(X)=X$ and $\pi_2(Y)=UYU^*$ be
$\ast$-representations of $M_2(\mathbb{C})*1$ and $1*M_2(\mathbb{C})$ to
$M_2(\mathbb{C})$, respectively. Let $\pi=\pi_1*\pi_2$ be the induced
$\ast$-representation of $\AA$ to $M_2(\mathbb{C})$. Then
\[\pi(AB)=AUBU^*=\left(\begin{array}{cc}
-\alpha
\sin^2\theta+\beta\cos^2\theta&-(\alpha+\beta)\sin\theta\cos\theta\\
0&0
\end{array}\right).\] So $\sigma(\pi(AB))=\{-\alpha
\sin^2\theta+\beta\cos^2\theta, 0\}$. Since $[0,\beta]\subseteq
[-\alpha,\beta]=\{-\alpha \sin^2\theta+\beta\cos^2\theta:\,
\theta\in [0,2\pi]\}$, $[0,\beta]\subseteq \sigma(AB)$. Since
$\sigma(AB)$ is rotation invariant, $\sigma(AB)$ contains the closed
disc with center 0 and radius $\beta$. By Lemma 3.5, $\sigma(AB)$ is
the closed disc of complex plane with center 0 and
radius $\beta$.\\
\noindent Case 2. Both $A$ and $B$ are invertible. By Lemma 3.3 and
Remark 3.4, we may assume that $A=\left(\begin{array}{cc} 0&1\\
\beta_1&0 \end{array}\right) $ and $B=\left(\begin{array}{cc}0&1\\
\beta_2&0\end{array}\right)$ such that $\beta_1,\beta_2\geq 1$. Then
$A^{-1}=\left(\begin{array}{cc} 0&\beta_1^{-1}\\
1&0 \end{array}\right)$ and $B^{-1}=\left(\begin{array}{cc} 0&\beta_2^{-1}\\
1&0 \end{array}\right)$. We need to prove that
$\sigma(AB)=[1,\beta_1\beta_2]\times_p [0,2\pi]$. By Lemma 3.5,
$r(AB)=\beta_1\beta_2$ and $r((AB)^{-1})=1$. This implies that
$\sigma(AB)\subseteq [1,\beta_1\beta_2]\times_p [0,2\pi]$. So we
need only to prove $\sigma(AB)\supseteq [1,\beta_1\beta_2]\times_p
[0,2\pi]$.\\
For $\phi,\psi\in [0,2\pi]$, let $U=\left(\begin{array}{cc}
\cos\psi& e^{i\phi}\sin\psi\\
-\sin\psi& e^{i\phi}\cos\psi
\end{array}\right)$. Then $U$ is a unitary matrix. Let $\pi_1(X)=UXU^*$ and $\pi_2(Y)=Y$ be
$\ast$-representations of $M_2(\mathbb{C})*1$ and $1*M_2(\mathbb{C})$ to
$M_2(\mathbb{C})$, respectively. Let $\pi=\pi_1*\pi_2$ be the induced
$\ast$-representation of $\AA$ to $M_2(\mathbb{C})$. Then
\[\pi(AB)=\left(\begin{array}{cc}
-\beta_1\beta_2e^{i\phi}\sin^2\psi+\beta_2e^{-i\phi}\cos^2\psi&*\\
*&\beta_1e^{i\phi}\cos^2\psi-e^{-i\phi}\sin^2\psi
\end{array}\right).\]
Let $\lambda_1(\phi,\psi),\lambda_2(\phi,\psi)$ be the eigenvalues
of $\pi(AB)$. Then
\begin{equation}
\lambda_1(\phi,\psi)\lambda_2(\phi,\psi)=\det
(\pi(AB))=\det(A)\det(B)=\beta_1\beta_2,
\end{equation}
\begin{equation}
\lambda_1(\phi,\psi)+\lambda_2(\phi,\psi)=(\beta_1e^{i\phi}+\beta_2e^{-i\phi})\cos^2\psi-
(\beta_1\beta_2e^{i\phi}+e^{-i\phi})\sin^2\psi.
\end{equation}
Note that $\sigma(AB)\supseteq \{\lambda_1(\phi,\psi):\,
\phi,\psi\in [0,2\pi]\}$. We only need to prove that
$\{\lambda_1(\phi,\psi):\, \phi,\psi\in [0,2\pi]\}\supseteq
[1,\beta_1\beta_2]\times_p [0,2\pi]$. For this purpose, we need to
show for any $r\in [1,\beta_1\beta_2]$, $\theta\in [0,2\pi]$, there
are $\phi,\psi\in [0,2\pi]$ such that
\begin{equation}
re^{i\theta}+\frac{\beta_1\beta_2}{r}e^{-i\theta}=
(\beta_1e^{i\phi}+\beta_2e^{-i\phi})\cos^2\psi-
(\beta_1\beta_2e^{i\phi}+e^{-i\phi})\sin^2\psi.
\end{equation}
Let $\alpha=\cos^2\psi$. Simple computations show that equation (3.3)
is equivalent to the following
\[
\left(r+\frac{\beta_1\beta_2}{r}\right)\cos\theta+i\left(r-\frac{\beta_1\beta_2}{r}\right)\sin\theta=\]\[
(\alpha(1+\beta_1)(1+\beta_2)-(1+\beta_1\beta_2))\cos\phi+i(\alpha(\beta_1-1)(\beta_2+1)+
(1-\beta_1\beta_2))\sin\phi.\]
Let
\[\Omega_1=\left\{\left(r+\frac{\beta_1\beta_2}{r}\right)\cos\theta+
i\left(r-\frac{\beta_1\beta_2}{r}\right)\sin\theta: \,\, r\in
[1,\beta_1\beta_2],\,\theta\in [0,2\pi]\right\},\]
\[\Omega_2=\{(\alpha(1+\beta_1)(1+\beta_2)-(1+\beta_1\beta_2))\cos\phi+i(\alpha(\beta_1-1)(\beta_2+1)+
(1-\beta_1\beta_2))\sin\phi\]\[:\, \alpha\in [0,1],\,
\phi\in[0,2\pi]\}.\] Now we need only to prove $\Omega_1=\Omega_2$.
Note that $\Omega_1$ is the union of a family of ellipses centered at
the origin with semimajor and semiminor axes
$|r+\frac{\beta_1\beta_2}{r}|$ and $|r-\frac{\beta_1\beta_2}{r}|$,
$1\leq r\leq \beta_1\beta_2$, respectively. Similarly, $\Omega_2$ is
the union of a family of ellipses centered at the origin with
semimajor and semiminor axes
$|\alpha(1+\beta_1)(1+\beta_2)-(1+\beta_1\beta_2)|$ and
$|\alpha(\beta_1-1)(\beta_2+1)+ (1-\beta_1\beta_2)|$, $0\leq
\alpha\leq 1$, respectively. The ``largest'' ellipse in $\Omega_1$
has semimajor and semiminor axes $|1+\beta_1\beta_2|$ and
$|\beta_1\beta_2-1|$, respectively, while the ``smallest'' one
degenerates to a segment, with axes $2\sqrt{\beta_1\beta_2}$ and $0$.
The ``largest'' ellipse in $\Omega_2$ again has axes
$|1+\beta_1\beta_2|$ and $|\beta_1\beta_2-1|$, while the
``smallest'' one degenerates to a segment, with axes $0$ and
$\frac{2\beta_1(\beta_2-1)}{\beta_1+1}$. Since the axes vary
continuously with $r$ (respectively, with $\alpha$), both $\Omega_1$
and $\Omega_2$ are the closure of the region enclosed by the ellipse
centered at the origin with semimajor and semiminor axes
$|1+\beta_1\beta_2|$ and $|\beta_1\beta_2-1|$. Thus
$\Omega_1=\Omega_2$.
\end{proof}
\section{R-diagonal operators in
$\mathcal M$}
In this section, we prove the following result. We will use the
notation introduced in section 2.4.
\begin{Theorem} In $\mathcal M$, let $A\in M_2(\mathbb{C})_{(1)}$ and $B\in M_2(\mathbb{C})_{(2)}$. Then
$AB$ is an R-diagonal operator if and only if $\tau(A)=\tau(B)=0$.
\end{Theorem}
To prove Theorem 4.1, we need the following lemmas.
\begin{Lemma} $\{W_1,V_1,W_3V_3\}''\cong L(\mathbb{Z}_2)*L(\mathbb{Z}_2)*L(\mathbb{Z}).$
\end{Lemma}
\begin{proof} Let $U=W_3V_3$. Then $U$ is a Haar unitary operator.
We need only to prove that $U$ is $\ast$-free with the von Neumann
subalgebra generated by $W_1$ and $V_1$. Let $g_1g_2\cdots g_n$ be
an alternating product of $\{U^n: n\neq 0\}$ and $\{W_1,V_1,
W_1V_1,V_1W_1, W_1V_1W_1,V_1W_1V_1,\cdots\}$. By regrouping, it is
an alternating product of $\{W_1, W_1W_3, W_3^*W_1, W_3^*W_1W_3,
W_3, W_3^*\}$ and $\{V_1, V_3V_1, V_1V_3^*, V_3V_1V_3^*, V_3,
V_3^*\}$. Thus the trace is 0.
\end{proof}
\begin{Lemma} $\displaystyle
\left(\begin{array}{cc} 0&\alpha_1\\
\beta_1&0
\end{array}\right)_{(1)}
\left(\begin{array}{cc} 0&\alpha_2\\
\beta_2&0
\end{array}\right)_{(2)}$ is an R-diagonal operator.
\end{Lemma}
\begin{proof} Note that
\[\left(\begin{array}{cc} 0&\alpha_1\\
\beta_1&0
\end{array}\right)_{(1)}
\left(\begin{array}{cc} 0&\alpha_2\\
\beta_2&0
\end{array}\right)_{(2)}=\left(\begin{array}{cc} \alpha_1&0\\
0&\beta_1
\end{array}\right)_{(1)}
\left(\begin{array}{cc} 0&1\\
1&0
\end{array}\right)_{(1)}\left(\begin{array}{cc} 0&1\\
1&0
\end{array}\right)_{(2)}
\left(\begin{array}{cc} \beta_2&0\\
0&\alpha_2
\end{array}\right)_{(2)}.
\] By Lemma 4.2 and the basic properties of R-diagonal operators given in Section 2.2, the lemma follows.
\end{proof}
\begin{Lemma} With the assumptions of Theorem 4.1, suppose that $AB$ is an R-diagonal operator and
$\tau(A^2)\neq 0$. Then $\tau(B)=0$.
\end{Lemma}
\begin{proof} Since $AB$ is an R-diagonal operator, $\tau(AB)=0$.
Since $A,B$ are $\ast$-free, $\tau(A)\tau(B)=\tau(AB)=0$. If
$\tau(B)=0$, we are done. Otherwise $\tau(A)=0$, and since $AB$ is
R-diagonal, $0=\tau(ABAB)=\tau(A^2)(\tau(B))^2$. Since
$\tau(A^2)\neq 0$ by assumption, $\tau(B)=0$.
\end{proof}
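The moment identity used above can be obtained by centering $B$:
write $B=\tau(B)1+B_0$ with $\tau(B_0)=0$. Since $\tau(A)=0$,
expanding and using that alternating products of centered free
elements have trace zero,
\[\tau(ABAB)=(\tau(B))^2\tau(A^2)+2\tau(B)\tau(A^2B_0)+\tau(AB_0AB_0)
=(\tau(B))^2\tau(A^2),\]
since $\tau(A^2B_0)=\tau(A^2)\tau(B_0)=0$ and $\tau(AB_0AB_0)=0$.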
\begin{Lemma} Let $B\in M_2(\mathbb{C})_{(2)}$ and $\lambda$ be any complex
number. Then $\sigma(E_{12}B)=\sigma(E_{12}(\lambda+B))$.
\end{Lemma}
\begin{proof} By Jacobson's theorem, \[\sigma(E_{12}(\lambda+B))\cup
\{0\}=\sigma (E_{11}E_{12}(\lambda+B))\cup\{0\}=\sigma
(E_{12}(\lambda+B)E_{11})\cup\{0\}\]\[=\sigma(E_{12}BE_{11})\cup\{0\}=\sigma(BE_{12})\cup\{0\}.\]
\end{proof}
\noindent\emph{Proof of Theorem 4.1.}\,\, If $\tau(A)=\tau(B)=0$,
then by Lemma 3.3 and Lemma 4.3, $AB$ is an R-diagonal operator.
Conversely, assume that $AB$ is an R-diagonal operator. Then
$0=\tau(AB)=\tau(A)\cdot\tau(B)$. So either $\tau(A)=0$ or
$\tau(B)=0$. Without loss of generality, we assume that $\tau(A)=0$.
If $\tau(A^2)\neq 0$, then $\tau(B)=0$ by Lemma 4.4. If
$\tau(A^2)=0$, then $A$ is unitarily equivalent to $\alpha E_{12}$. We
may assume that $A=E_{12}$. By Theorem 2.1, if $E_{12}B$ is an
R-diagonal operator, then
$(r(E_{12}B))^2=\tau(B^*E_{21}E_{12}B)=\tau(E_{21}E_{12}BB^*)=\|E_{12}\|_2^2\cdot
\|B\|_2^2$. Since $E_{12}(B-\tau(B))$ is an R-diagonal operator,
$(r(E_{12}(B-\tau(B))))^2=\|E_{12}\|_2^2\cdot \|B-\tau(B)\|_2^2$. By
Lemma 4.5, $\|B\|_2^2=\|B-\tau(B)\|_2^2$. This implies that
$\tau(B)=0$. This ends the proof.\\
Combining Theorem 4.1, Theorem 2.1 and the $S$-transform of Voiculescu
(see~\cite{Vo, VDN}), we obtain the following theorem (it is
interesting to compare it with Theorem 3.6).
\begin{Theorem} Let $A\in M_2(\mathbb{C})_{(1)}, B\in M_2(\mathbb{C})_{(2)}$ and $\tau(A)=\tau(B)=0$. Then
\begin{enumerate}
\item $\mu_{AB}$ is rotation invariant;
\item $\sigma(AB)=supp\mu_{AB}=[\|A^{-1}\|_2^{-1}\|B^{-1}\|_2^{-1},
\|A\|_2\|B\|_2]\times_p [0,2\pi]$;
\item $\mu_{AB}(\{0\})=\max\{\mu_{A^*A}(\{0\}),\,\,\mu_{B^*B}(\{0\})\}$ and
\[\mu_{AB}(\mathbb{B}(0,
(\S_{\mu_{A^*A}}\S_{\mu_{B^*B}}(t-1))^{-1/2}))=t,\qquad
\text{for}\,\, t\in [\mu_{AB}(\{0\}), 1].\]
\end{enumerate}
\end{Theorem}
\section{Algebraic techniques}
For $X\in \mathcal M$, define
\[\Phi(X)=\left(\begin{array}{cc}
E_{11}XE_{11}& E_{11}XE_{21}\\
E_{12}XE_{11}& E_{12}XE_{21}
\end{array}\right).\]
Then $\Phi$ is a $\ast$-isomorphism from $\mathcal M$ onto $E_{11}\mathcal M
E_{11}\otimes M_2(\mathbb{C})_{(1)}.$ We will identify $\mathcal M$ with $E_{11}\mathcal M
E_{11}\otimes M_2(\mathbb{C})_{(1)}$ by the canonical isomorphism $\Phi$.
In~\cite{Dy}, K.~Dykema proved that $E_{11}\mathcal M E_{11}\cong
L(\mathbb{F}_3)$. For $B\in M_2(\mathbb{C})_{(2)}$, we may write
\[B=\left(\begin{array}{cc}
b_{11}&b_{12}\\
b_{21}&b_{22}
\end{array}\right)\]
with respect to the matrix units $\{E_{ij}\}_{i,j=1,2}\subset M_2(\mathbb{C})_{(1)}$. Then $b_{ij}\in
L(\mathbb{F}_3)$. In this section, we will develop the algebraic
techniques used in~\cite{Dy}. Combining them with matrix techniques, we
explicitly express $b_{ij}$ in terms of free generators of
$L(\mathbb{F}_3)$. \\
Let $\Lambda\{W_1,V_1\}$ be the set of
words generated by $W_1,V_1$. Note that $W_1^2=V_1^2=1$ and
$\tau(W_1)=\tau(V_1)=0$. The following observation is crucial
in~\cite{Dy}. The proof is an easy exercise.
\begin{Lemma}$\tau(g_1g_2\cdots g_n)=0$ for any alternating product
$g_1g_2\cdots g_n$ of elements of $\Lambda\{W_1,V_1\}\setminus \{1,W_1\}$ and $\{E_{12},E_{21}\}$.
\end{Lemma}
Recall that $P=E_{11}$ and $Q=F_{11}$. Let $W$ be the ``polar'' part
of $(1-P)QP$ and $U=E_{12}W$. The following corollary is a special
case of Theorem 3.5 of~\cite{Dy}.
\begin{Corollary}\label{C:dykema's corollary} $U$ is a Haar unitary operator in $\mathcal M_P=P\mathcal M P$
and $U$, $PQP$ are $\ast$-free in $\mathcal M_P$.
\end{Corollary}
With the canonical identification of $\mathcal M$ with $\mathcal M_P\otimes
M_2(\mathbb{C})_{(1)}$,
\[Q=\left(\begin{array}{cc}
PQP& \sqrt{PQP-(PQP)^2}U\\
U^* \sqrt{PQP-(PQP)^2}& U^*(1-PQP)U
\end{array}\right).\]
By~\cite{Vo}, the distribution of $PQP$ (relative to $\mathcal M_P$) is
non-atomic and the density function is
\begin{equation}
\displaystyle \rho (t)=\frac{1}{\pi }\frac{1}{\sqrt{\frac{1}{4}-(\frac{1}{2}%
-t)^{2}}}, \qquad 0\leq t\leq 1.
\end{equation}
By Corollary~\ref{C:dykema's corollary}, the von Neumann
subalgebra $\mathcal M_1$ generated by $M_2(\mathbb{C})_{(1)}$ and $Q$ is
$\ast$-isomorphic to $L(\mathbb{F}_2)\otimes M_2(\mathbb{C})_{(1)}$. Since
$\mathcal M_1$ is also $\ast$-isomorphic to $M_2(\mathbb{C})* L(\mathbb{Z}_2)$,
$M_2(\mathbb{C})* L(\mathbb{Z}_2)\cong L(\mathbb{F}_2)\otimes M_2(\mathbb{C})$,
which is proved by Dykema in~\cite{Dy}.\\
Since $V_1=2Q-1$,
\[V_1=\left(\begin{array}{cc}
2PQP-1& 2\sqrt{PQP-(PQP)^2}U\\
2U^* \sqrt{PQP-(PQP)^2}& U^*(1-2PQP)U
\end{array}\right).\]
Simple computation shows that the density function of $2PQP-1$ is
\[\displaystyle \sigma(t)=\frac{1}{\pi}\frac{1}{\sqrt{1-t^2}},\qquad -1\leq t\leq 1.\] Let $H=2PQP-1$, then
\[V_1=\left(\begin{array}{cc}
H&\sqrt{1-H^2} U\\
U^*\sqrt{1-H^2}& -U^*HU
\end{array}\right).\]
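The density of $2PQP-1$ stated above follows from the density $\rho$
of $PQP$ by the change of variables $t=2s-1$:
\[\sigma(t)=\rho\left(\frac{t+1}{2}\right)\cdot\frac{1}{2}
=\frac{1}{2\pi}\frac{1}{\sqrt{\frac{1}{4}-\frac{t^{2}}{4}}}
=\frac{1}{\pi}\frac{1}{\sqrt{1-t^2}},\qquad -1\leq t\leq 1.\]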
Let $H=V|H|$ be the polar decomposition of $H$. Since
$H$ is selfadjoint with a symmetric distribution, $V^2=1$ and $V$ is
independent of the von Neumann algebra generated by $|H|$ in the
classical probability sense. Let $h=|H|, u=VU, v=UV$. Then $u,v$ are
Haar unitary operators and the distribution of $h$ relative to
$\mathcal M_P$ is non-atomic.
\begin{Lemma} $h, u, v$ are $\ast$-free.
\end{Lemma}
\begin{proof}Let $g_1g_2\cdots g_n$ be an alternating product of
elements of $\mathfrak{S}=\{|H|\}''\ominus \mathbb{C} I$, $\{(VU)^n: n\neq
0\}$, $\{(UV)^n: n\neq 0\}$. By regrouping, it is an alternating
product of elements of $\{\mathfrak{S}, V, V\mathfrak{S},
\mathfrak{S}V, V\mathfrak{S}V\}$ and $\{U^n: n\neq 0\}$. Since $H$
and $U$ are $\ast$-free, $\{\mathfrak{S}, V, V\mathfrak{S},
\mathfrak{S}V, V\mathfrak{S}V\}$ and $\{U^n: n\neq 0\}$ are free.
Since $V$ and $\mathfrak{S}$ are independent, $\tau(VS)=\tau(SV)=0$
for $S\in \mathfrak{S}$. This implies that $\tau(g_1g_2\cdots
g_n)=0$.
\end{proof}
By simple computations, we have the following.
\begin{equation}
V_1E_{11}V_1=\left(\begin{array}{cc}
h^2&\sqrt{1-h^2}h u\\
u^*h\sqrt{1-h^2}& u^*(1-h^2)u
\end{array}\right), \label{E:ve11v}
\end{equation}
\begin{equation}
V_1E_{12}V_1=\left(\begin{array}{cc}
HU^*\sqrt{1-H^2}&-HU^*HU\\
U^*\sqrt{1-H^2}U^*\sqrt{1-H^2}& -U^*\sqrt{1-H^2}U^*HU
\end{array}\right)\label{E:ve12v-}\end{equation}
\begin{equation}=
\left(\begin{array}{cc}
hv^*\sqrt{1-h^2}&-hv^*hu\\
u^*\sqrt{1-h^2}v^*\sqrt{1-h^2}& -u^*\sqrt{1-h^2}v^*hu
\end{array}\right).\label{E:ve12v}
\end{equation}
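The passage between the two expressions for $V_1E_{12}V_1$ uses only
that $V$ commutes with $|H|$ (both are functions of $H$) and the
definitions $u=VU$, $v=UV$; for instance,
\[HU^*=V|H|U^*=|H|VU^*=hv^*,\qquad HU=V|H|U=|H|VU=hu,\]
\[u^*\sqrt{1-h^2}v^*=U^*V\sqrt{1-h^2}VU^*=U^*\sqrt{1-h^2}U^*.\]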
By Lemma 2.2, $\mathcal M\cong M_2(\mathbb{C})_{(1)}*(V_1M_2(\mathbb{C})_{(1)}V_1)\cong
\mathcal M_P\otimes M_2(\mathbb{C})_{(1)}$. With this isomorphism, $\mathcal M_P$ is the
von Neumann algebra generated by $h, u $ and $v$ by~(\ref{E:ve11v})
and~(\ref{E:ve12v}). So $\mathcal M_P\cong L(\mathbb{F}_3)$. By simple
computations, we have
\[V_1\left(\begin{array}{cc}
\alpha&\beta\\
\gamma&\sigma
\end{array}\right)_{(1)}V_1=\left(\begin{array}{cc}
b_{11}&b_{12}\\
b_{21}& b_{22}
\end{array}\right),\]
where
\[
\begin{array}{cl}
b_{11}= & \sigma+(\alpha-\sigma)h^2+\gamma\sqrt{1-h^2}vh+\beta hv^*\sqrt{1-h^2}, \\
b_{12}= & (\alpha-\sigma)h\sqrt{1-h^2}u+\gamma \sqrt{1-h^2}v\sqrt{1-h^2}u-\beta hv^*hu, \\
b_{21}= & (\alpha-\sigma)u^*h\sqrt{1-h^2}-\gamma u^*hvh+\beta u^*\sqrt{1-h^2}
v^*\sqrt{1-h^2}, \\
b_{22}=& \alpha+(\sigma-\alpha) u^*h^2u-\gamma
u^*hv\sqrt{1-h^2}u-\beta u^*\sqrt{1-h^2}v^*hu.
\end{array}
\]
\begin{Theorem}\label{T:algebraic techinque}$\mathcal M{\cong} L(\mathbb{F}_3)\otimes
M_2(\mathbb{C})_{(1)}$; furthermore, let $\displaystyle
B=\left(\begin{array}{cc}
\alpha&\beta\\
\gamma&\sigma
\end{array}\right)_{(2)}$ in $M_2(\mathbb{C})_{(2)}$, then with respect to the matrix units
$\{E_{ij}\}_{i,j=1,2}\subset M_2(\mathbb{C})_{(1)}$, $\displaystyle
B=\left(\begin{array}{cc}
b_{11}&b_{12}\\
b_{21}& b_{22}
\end{array}\right)_{(1)}$, where $b_{ij}$ are given as above.
\end{Theorem}
\begin{Example}\label{E:normal matrices}\emph{ In Theorem~\ref{T:algebraic techinque}, let $\beta=\gamma=0$. Then we
have
\[ \left(\begin{array}{cc}
\alpha&0\\
0&\sigma
\end{array}\right)_{(2)}=\left(\begin{array}{cc}
\sigma+(\alpha-\sigma)h^2&(\alpha-\sigma)h\sqrt{1-h^2}u\\
(\alpha-\sigma)u^*h\sqrt{1-h^2}&\alpha+(\sigma-\alpha)u^*h^2u
\end{array}\right)_{(1)}.\]}
\end{Example}
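As a consistency check, $\tau(h^2)=\tau_P(PQP)=\frac{1}{2}$, so both
diagonal entries on the right hand side have trace
$\frac{\alpha+\sigma}{2}$, in agreement with
$\tau\left(\left(\begin{array}{cc}
\alpha&0\\
0&\sigma
\end{array}\right)_{(2)}\right)=\frac{\alpha+\sigma}{2}$.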
\begin{Example}\label{E:algebra techinque}\emph{ In Theorem~\ref{T:algebraic techinque}, let $\alpha=\sigma$ and $\gamma=0$. Then we
have
\[\left(\begin{array}{cc}
\alpha&\beta\\
0&\alpha
\end{array}\right)_{(2)}=\left(\begin{array}{cc}
\alpha+\beta hv^*\sqrt{1-h^2}&-\beta hv^*hu\\
\beta u^*\sqrt{1-h^2} v^*\sqrt{1-h^2}&\alpha-\beta
u^*\sqrt{1-h^2}v^*hu
\end{array}\right)_{(1)}.
\]}
\end{Example}
\begin{Remark}\emph{ By equation (\ref{E:ve11v}), the distribution of $h^2$ is the
distribution of $E_{11}V_1E_{11}V_1E_{11}$ relative to $\mathcal M_P$. So the
distribution of $h^2$ is the same as the distribution of $PQP$ (relative
to $\mathcal M_P$). By~\cite{Vo}, the distribution of $PQP$ (relative to $\mathcal M_P$)
is non-atomic and the density
function is \[\displaystyle \rho (t)=\frac{1}{\pi }\frac{1}{\sqrt{\frac{1}{4}-(\frac{1}{2}%
-t)^{2}}}, \qquad 0\leq t\leq 1.\]}
\end{Remark}
\section{Miscellaneous examples}
\begin{Example}\label{E:E12+F12}\emph{ We compute the Brown spectrum of $\alpha
E_{12}+\beta F_{12}$. Let $F_{12}=\left(\begin{array}{cc}0&1\\
0&0
\end{array}\right)_{(2)}=\left(\begin{array}{cc}b_1&b_2\\
b_3&b_4
\end{array}\right)_{(1)}.$ Then
\[(\alpha E_{12}+\beta
F_{12})^2=\alpha\beta(E_{12}F_{12}+F_{12}E_{12})=\alpha\beta(E_{12}+F_{12})^2=
\alpha\beta\left(\begin{array}{cc} b_3&b_1+b_4\\
0&b_3\end{array}\right)_{(1)}.
\] So $\mu_{(\alpha E_{12}+\beta
F_{12})^2}=\mu_{\alpha\beta b_3}$. By equation~(\ref{E:ve12v-}), the
distribution of $b_3$ is the same as the distribution of
$(U^*\sqrt{1-H^2})^2$. Since $U^*\sqrt{1-H^2}$ is an R-diagonal
operator, $(U^*\sqrt{1-H^2})^2$ is also an R-diagonal operator.
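For the record, the block computation behind the displayed identity is
\[E_{12}F_{12}=\left(\begin{array}{cc} 0&1\\
0&0\end{array}\right)_{(1)}\left(\begin{array}{cc} b_1&b_2\\
b_3&b_4\end{array}\right)_{(1)}=\left(\begin{array}{cc} b_3&b_4\\
0&0\end{array}\right)_{(1)},\]
\[F_{12}E_{12}=\left(\begin{array}{cc} b_1&b_2\\
b_3&b_4\end{array}\right)_{(1)}\left(\begin{array}{cc} 0&1\\
0&0\end{array}\right)_{(1)}=\left(\begin{array}{cc} 0&b_1\\
0&b_3\end{array}\right)_{(1)},\]
and the sum of these two products is the displayed upper triangular
matrix.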
Since the distribution of $\alpha E_{12}+\beta F_{12}$ is rotation
invariant, $\mu_{\alpha E_{12}+\beta
F_{12}}=\mu_{\sqrt{|\alpha\beta|}b},$ where $b=U^*\sqrt{1-H^2}$.
Simple computations show that (or by Proposition 5.10 and Corollary
5.11 of~\cite{FDM})
\[d\mu_{b}(z)=\frac{1}{\pi }\frac{1}{(1-r^2)^2}drd\theta\qquad
0\leq r\leq \frac{1}{\sqrt{2}}.
\] Hence
\[d\mu_{\alpha E_{12}+\beta
F_{12}}(z)=d\mu_{\sqrt{|\alpha\beta|}b}(z)=\frac{1}{\pi
}\frac{|\alpha\beta|^{3/2}}{(|\alpha\beta|-r)^2}drd\theta\qquad
0\leq r\leq \sqrt{\frac{|\alpha\beta|}{2}}
\] and
\[\sigma(\alpha E_{12}+\beta
F_{12})=\overline{\mathbb{B}\left(0,\sqrt{\frac{|\alpha\beta|}{2}}\right)}.
\]}
\end{Example}
\begin{Corollary}\label{C:E12+F12}
$r(\alpha E_{12}+\beta F_{12})=\sqrt{\frac{|\alpha\beta|}{2}}.$
\end{Corollary}
\begin{Corollary} Let $A\in M_2(\mathbb{C})_{(1)}$ and $B\in M_2(\mathbb{C})_{(2)}$. Then $A+B$ is
an R-diagonal operator if and only if $A+B=0$.
\end{Corollary}
\begin{proof} Indeed, if $A+B$ is an R-diagonal operator, then $\tau((A+B)^n)=0$ for all
$n=1,2,\cdots$; in particular $\tau(A+B)=0$, so we may assume
that $\tau(A)=\tau(B)=0$. Let $\lambda, -\lambda$ and $\eta,-\eta$
be the spectra of $A$ and $B$, respectively. Then
$0=\tau((A+B)^2)=\tau(A^2)+\tau(B^2)=\lambda^2+\eta^2$. By simple
computation we have
$0=\tau((A+B)^4)=\tau(A^4)+\tau(B^4)+4\tau(A^2)\tau(B^2)=\lambda^4+\eta^4+4\lambda^2\eta^2=(\lambda^2+\eta^2)^2+2\lambda^2\eta^2=2\lambda^2\eta^2$.
Thus $\lambda=\eta=0$. This implies that $A$ and $B$ are unitarily
equivalent to $\alpha E_{12}$ and $ \beta F_{12}$, respectively. By
Corollary~\ref{C:E12+F12}, $r(A+B)=\sqrt{\frac{|\alpha\beta|}{2}}.$ On the other
hand, since we assume that $A+B$ is an R-diagonal operator, by
Theorem 2.1, $(r(A+B))^2=\tau((A^*+B^*)(A+B))=\tau((\bar{\alpha}
E_{21}+\bar{\beta} F_{21})(\alpha E_{12}+\beta
F_{12}))=\frac{|\alpha|^2+|\beta|^2}{2}$. So
$|\alpha|^2+|\beta|^2=|\alpha\beta|$. Since
$|\alpha|^2+|\beta|^2\geq 2|\alpha||\beta|$, this forces $\alpha=0$
and $\beta=0$. Hence $A+B=0$.
\end{proof}
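The fourth-moment formula used above follows from freeness: expanding
$(A+B)^4$, the words containing an odd number of $A$'s or $B$'s have
trace zero (e.g. $\tau(A^3B)=\tau(A^3)\tau(B)=0$), so
\[\tau((A+B)^4)=\tau(A^4)+\tau(B^4)+4\tau(A^2B^2)+2\tau(ABAB)
=\tau(A^4)+\tau(B^4)+4\tau(A^2)\tau(B^2),\]
since $\tau(A^2B^2)=\tau(A^2)\tau(B^2)$ and $\tau(ABAB)=0$ for
$\ast$-free centered elements.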
\begin{Example}\label{E:P times ab}\emph{ We compute the spectrum and Brown spectrum of
\[ X=\left(\begin{array}{cc}
1&0\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\alpha&\beta\\
0&\alpha
\end{array}\right)_{(2)}.\] By Example~\ref{E:algebra techinque}, we have the following
\[X=\displaystyle \left(\begin{array}{cc}
1&0\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\alpha&\beta\\
0&\alpha
\end{array}\right)_{(2)}=\displaystyle \left(\begin{array}{cc}
1&0\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\alpha+\beta hv^*\sqrt{1-h^2}&-\beta hv^*hu\\
\beta u^*\sqrt{1-h^2} v^*\sqrt{1-h^2}&\alpha-\beta
u^*\sqrt{1-h^2}v^*hu
\end{array}\right)_{(1)}\]\[=\left(\begin{array}{cc}
\alpha+\beta hv^*\sqrt{1-h^2}&-\beta hv^*hu\\
0&0
\end{array}\right)_{(1)}.\]
So $\sigma(X)=\{0\}\cup \sigma(\alpha+\beta hv^*\sqrt{1-h^2})$ and
$\mu_X=\frac{1}{2}\delta_0+\frac{1}{2}\mu_{\alpha+\beta
hv^*\sqrt{1-h^2}}$. Note that
$\mu_{hv^*\sqrt{1-h^2}}=\mu_{v^*\sqrt{1-h^2}h}$ and
$v^*\sqrt{1-h^2}h$ is an R-diagonal operator. We have the following
computations:
\[\|\sqrt{1-h^2}h\|_2^2=\tau_P((1-h^2)h^2)=\tau_P((1-PQP)PQP)=\int_{0}^1
\frac{1}{\pi }\frac{t(1-t)dt}{\sqrt{\frac{1}{4}-(\frac{1}{2}%
-t)^{2}}}=\frac{1}{8},\]
\[\|(\sqrt{1-h^2}h)^{-1}\|_2^2=\tau_P(((1-h^2)h^2)^{-1})=\tau_P(((1-PQP)PQP)^{-1})\]\[=\int_{0}^1
\frac{1}{\pi }\frac{dt}{t(1-t)\sqrt{\frac{1}{4}-(\frac{1}{2}%
-t)^{2}}}=\infty.\]
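The first integral can be evaluated with the substitution
$t=\frac{1}{2}+\frac{1}{2}\sin\theta$,
$-\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}$, so that
$t(1-t)=\frac{1}{4}\cos^2\theta$,
$\sqrt{\frac{1}{4}-(\frac{1}{2}-t)^{2}}=\frac{1}{2}\cos\theta$ and
$dt=\frac{1}{2}\cos\theta\, d\theta$:
\[\int_{0}^1
\frac{1}{\pi }\frac{t(1-t)\,dt}{\sqrt{\frac{1}{4}-(\frac{1}{2}-t)^{2}}}
=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2}\frac{1}{4}\cos^2\theta\,
d\theta=\frac{1}{4\pi}\cdot\frac{\pi}{2}=\frac{1}{8}.\]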
By Theorem 2.1,
\[\sigma(X)=supp\mu_X=\{0\}\cup \overline
{\mathbb{B}\left(\alpha,\frac{|\beta|}{2\sqrt{2}}\right)}.\]}
\end{Example}
\begin{Example}\label{E:P times normal}\emph{ We compute the spectrum and Brown spectrum of
\[\displaystyle Y=\left(\begin{array}{cc}
0&1\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\alpha&0\\
0&\beta
\end{array}\right)_{(2)}.\] By Example~\ref{E:normal matrices}, we have the following
\[Y=\left(\begin{array}{cc}
0&1\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\alpha&0\\
0&\beta
\end{array}\right)_{(2)}=\left(\begin{array}{cc}
0&1\\
0&0
\end{array}\right)_{(1)}\left(\begin{array}{cc}
\beta+(\alpha-\beta)h^2&(\alpha-\beta)h\sqrt{1-h^2}u\\
(\alpha-\beta)u^*h\sqrt{1-h^2}&\alpha+(\beta-\alpha)u^*h^2u
\end{array}\right)_{(1)}\]\[=\left(\begin{array}{cc}
(\alpha-\beta)u^*h\sqrt{1-h^2}&\alpha+(\beta-\alpha)u^*h^2u\\
0&0
\end{array}\right)_{(1)}.\]
Since $u^*h\sqrt{1-h^2}$ is an R-diagonal operator, computations
similar to those in Example~\ref{E:P times ab} give
\[\sigma(Y)=supp \mu_Y=\overline
{\mathbb{B}\left(0,\frac{|\alpha-\beta|}{2\sqrt{2}}\right)}.\]}
\end{Example}
\begin{Example}\label{E:1+E12 times 1+F12}\emph{We compute the spectrum and Brown spectrum of
\[\displaystyle Z=(1+\alpha E_{12})(1+\beta F_{12})=\left(\begin{array}{cc}
1&\alpha\\
0&1
\end{array}\right)_{(1)}\left(\begin{array}{cc}
1&\beta\\
0&1
\end{array}\right)_{(2)}.\] For $\lambda\in\mathbb{C}$, we have
\[Z-\lambda 1=(1+\alpha E_{12})(1+\beta F_{12})-
\lambda (1+\alpha E_{12})(1-\alpha E_{12})=(1+\alpha
E_{12})(\lambda\alpha E_{12}+\beta F_{12}-(\lambda-1)).\] This
implies that $\lambda \in \sigma(Z)$ if and only if $\lambda -1\in
\sigma (\lambda\alpha E_{12}+\beta F_{12})$. By
Example~\ref{E:E12+F12}, $\lambda -1\in \sigma (\lambda\alpha
E_{12}+\beta F_{12})$ if and only if
\[|\lambda-1|^2\leq \frac{ |\alpha\beta||\lambda|}{2}.\] So
\[\sigma(Z)=\left\{\lambda\in \mathbb{C}:\,\, |\lambda-1|^2\leq \frac{
|\alpha\beta||\lambda|}{2}\right\}.\] In the following, we will show
that $supp\mu_{Z}\supseteq \partial\sigma(Z)=\left\{\lambda\in
\mathbb{C}:\,\, |\lambda-1|^2= \frac{ |\alpha\beta||\lambda|}{2}\right\}$.
For this purpose, we need only to prove that $supp\mu_{Z-1}\supseteq
\partial\sigma(Z-1)=\left\{\lambda\in \mathbb{C}:\,\, |\lambda|^2= \frac{
|\alpha\beta||\lambda+1|}{2}\right\}$.\\
\vskip 0.5cm
Note that $\Delta (1+\alpha
E_{12})=1$. For $\lambda\in \mathbb{C}$, we have \[\log \Delta
((Z-1)-\lambda)=\log \Delta ((1+\alpha E_{12})(1+\beta
F_{12})-(1+\lambda) (1+\alpha E_{12})(1-\alpha E_{12}))\]
\[=\log \Delta (1+\alpha E_{12})+\log \Delta (1+\beta
F_{12}-\lambda+(1+\lambda)\alpha E_{12})=\log\Delta
((1+\lambda)\alpha E_{12}+\beta F_{12}-\lambda).\] By
Example~\ref{E:E12+F12}, $\mu_{(1+\lambda)\alpha E_{12}+\beta
F_{12}}=\mu_{\sqrt{|1+\lambda||\alpha\beta|} b}.$ Hence,
\[\log \Delta ((Z-1)-\lambda)=\log\Delta(\sqrt{|1+\lambda||\alpha\beta|}
b-\lambda)\]
\[
=\log\Delta\left(b-\frac{\lambda}{\sqrt{|1+\lambda||\alpha\beta|}}\right)
-\log\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}+\log|\lambda|.\]
Since $b$ is an R-diagonal operator, this implies that
\begin{equation}\log\Delta\left(b-\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}\right)=
\log\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}-\log|\lambda|+\log
\Delta ((Z-1)-\lambda). \label{Eq:6.1}
\end{equation}
Suppose $\lambda_0\in \partial \sigma(Z-1)$ and $\lambda_0\notin
supp\mu_{Z-1}$. Then there is $\delta>0$ such that
$\mathbb{B}(\lambda_0,\delta)\subset \mathbb{C}\setminus supp\mu_{Z-1}$.
Now $\log\Delta ((Z-1)-\lambda)$ is a harmonic function on
$\mathbb{B}(\lambda_0,\delta)$. Since $\tau((Z-1)^n)=0$ for all
$n=1,2,\cdots$, by Lemma 4.3 of~\cite{H-L}, for $\lambda\in \mathbb{C}$
such that $|\lambda|\geq r(Z-1)$, $\log\Delta
((Z-1)-\lambda)=\log|\lambda|$. By the uniqueness of harmonic
functions, we have $\log\Delta ((Z-1)-\lambda)=\log|\lambda|$ for
$\lambda\in \mathbb{B}(\lambda_0,\delta)$. By
equation~(\ref{Eq:6.1}), this implies that
\begin{equation}\log\Delta\left(b-\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}\right)=
\log\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}.\label{Eq:6.2}
\end{equation}
Let $r=\frac{|\lambda|}{\sqrt{|1+\lambda||\alpha\beta|}}$. Then
equation~(\ref{Eq:6.2}) implies that
\begin{equation*}\log\Delta\left(b-r\right)=
\log r
\end{equation*}
for $r\in (s,t)\subseteq [0, \frac{1}{\sqrt{2}}]$. Since $b$ is an
R-diagonal operator, $\log\Delta(b-z)$ depends only on $|z|$, so
$\log\Delta(b-z)=\log|z|$ is harmonic on the annulus with inner
radius $s$ and outer radius $t$, $0<s<t<\frac{1}{\sqrt{2}}$, and
hence $\mu_b$ puts no mass on this annulus. By Theorem 2.1,
$supp\mu_b=\overline{\mathbb{B}(0,\frac{1}{\sqrt{2}})}$. This is a
contradiction.}
\end{Example}
\section{Hyperinvariant subspaces for operators in $\mathcal M$}
\begin{Lemma}\label{L:lemma for center element} For $X\in \mathcal M$, if $supp\mu_X=\{\lambda\}$, then
$\tau(X^n)=\lambda^n$ for $n=1,2,\cdots$.
\end{Lemma}
\begin{proof} $\tau(X^n)=\int_{supp\mu_X} z^n d\mu_X(z)=\lambda^n$.
\end{proof}
The converse of Lemma~\ref{L:lemma for center element} is not true:
for an R-diagonal operator $X$ we have $\tau(X^n)=0$ for
$n=1,2,\cdots$, while $supp\mu_X$ need not be $\{0\}$.
\begin{Proposition} Let $X=A+B$, where $A\in M_2(\mathbb{C})_{(1)}$ and
$B\in M_2(\mathbb{C})_{(2)}$. If $A,B$ are not scalar matrices, then
$supp\mu_X$ contains more than one point.
\end{Proposition}
\begin{proof} Suppose $A,B$ are not scalar matrices. Since
$A+B=\tau(A)1+\tau(B)1+(A-\tau(A)1)+(B-\tau(B)1)$, to show
$supp\mu_X$ contains more than one point, we need only to show that
$supp\mu_{(A-\tau(A)1)+(B-\tau(B)1)}$ contains more than one point. So
we may assume that $\tau(A)=\tau(B)=0$ and $A,B\neq 0$. Assume that
the spectra of $A$ and $B$ are $\lambda_1,-\lambda_1$ and
$\lambda_2,-\lambda_2$. If $\tau(A^2)=\tau(B^2)=0$, then $A$ and
$B$ are unitarily equivalent to $\alpha E_{12}$ and $\beta F_{12}$
in $M_2(\mathbb{C})_{(1)}$ and $M_2(\mathbb{C})_{(2)}$, respectively. Thus
$\mu_X=\mu_{\alpha E_{12}+\beta F_{12}}$. By
Example~\ref{E:E12+F12}, $supp\mu_X$ contains more than one point.
Now suppose
$\tau(A^2)\neq 0$ or $\tau(B^2)\neq 0$. Without loss of generality,
we assume that $\lambda_1^2=\tau(A^2)\neq 0$. Note that
$\tau(A+B)=0$ and
$\tau((A+B)^2)=\tau(A^2)+\tau(B^2)=\lambda_1^2+\lambda_2^2$. If
$\lambda_1^2+\lambda_2^2\neq 0$, by Lemma~\ref{L:lemma for center
element}, $supp\mu_X$ contains more than one point. Suppose
$\lambda_1^2+\lambda_2^2=0$. Then $\tau(B^2)=\lambda_2^2\neq 0$.
Simple computations show that
\[\tau((A+B)^4)=\tau(A^4)+\tau(B^4)+4\tau(A^2)\tau(B^2)=
\lambda_1^4+\lambda_2^4+4\lambda_1^2\lambda_2^2=(\lambda_1^2+\lambda_2^2)^2+2\lambda_1^2\lambda_2^2=2\lambda_1^2\lambda_2^2\neq
0.\] Note that $\tau(A+B)=0$. By Lemma~\ref{L:lemma for center
element}, $supp\mu_X$ contains more than one point.
\end{proof}
\begin{Proposition}\label{P:brown measure} Let $X=AB$, where $A\in M_2(\mathbb{C})_{(1)}$ and
$B\in M_2(\mathbb{C})_{(2)}$. If $A, B$ are not scalar matrices, then
$supp\mu_X$ contains more than one point.
\end{Proposition}
\begin{proof} Suppose $A,B$ are not scalar matrices. We
consider the following cases:\\
\noindent Case 1. $\tau(A)=\tau(B)=0$ and $A,B\neq 0$. By Theorem
4.1, $AB(\neq 0)$ is an
R-diagonal operator. So $supp\mu_X$ contains more than one point.\\
\noindent Case 2. $\tau(A)=0, \tau(B)\neq 0$ or $\tau(A)\neq 0,
\tau(B)=0$. Without loss of generality, we assume that $\tau(A)=0$
and $\tau(B)\neq 0$. Then $\tau(AB)=0$ and
$\tau(ABAB)=\tau(A^2)\tau(B)$. If $\tau(A^2)\neq 0$, then
$\tau(ABAB)\neq 0$. By Lemma 10.1, $supp\mu_X$ contains more than
two points. If $\tau(A^2)=0$, then $A$ is unitarily equivalent to
$\alpha E_{12}$ in $M_2(\mathbb{C})_{(1)}$. By Lemma 4.5,
$\mu_X=\mu_{\alpha E_{12}(B-\tau(B)1)}$. Since $\alpha
E_{12}(B-\tau(B)1)(\neq 0)$ is an R-diagonal operator, $supp\mu_X$
contains
more than two points.\\
\noindent Case 3. $\tau(A)\neq 0$ and $\tau(B)\neq 0$. We may assume
that $\tau(A)=\tau(B)=1$. Let $A=1+A_1$ and $B=1+B_1$. Then
$\tau(A_1)=\tau(B_1)=0$.\\
\noindent Subcase 3.1. $\tau(A_1^2)\neq 0$ or $\tau(B_1^2)\neq 0$.
We may assume that $\tau(A_1^2)\neq 0$. Simple computation shows
that $\tau(AB)=1$, $\tau(ABAB)=1+\tau(A_1^2)+\tau(B_1^2)$ and
$\tau((AB)^3)=1+3(\tau(A_1^2)+\tau(B_1^2))+3\tau(A_1^2)\tau(B_1^2).$
If $\tau(A_1^2)+\tau(B_1^2)\neq 0$, then $\tau(ABAB)\neq 1$. By
Lemma~\ref{L:lemma for center element}, $supp\mu_X$ contains more
than two points. If $\tau(A_1^2)+\tau(B_1^2)=0$, then
$\tau(B_1^2)=-\tau(A_1^2)\neq 0$. So $\tau((AB)^3)\neq 1$. By
Lemma~\ref{L:lemma for center element} again, $supp\mu_X$ contains
more than two points.\\
\noindent Subcase 3.2. $\tau(A_1^2)=\tau(B_1^2)=0$. Then $A_1$ and
$B_1$ are unitarily equivalent to $\alpha E_{12}$ and $\beta F_{12}$
in $M_2(\mathbb{C})_{(1)}$ and $M_2(\mathbb{C})_{(2)}$, respectively. So
$\mu_X=\mu_{(1+\alpha E_{12})(1+\beta F_{12})}$. We may assume
that $A=\left(\begin{array}{cc} 1&\alpha\\
0&1
\end{array}\right)_{(1)}$ and $B=\left(\begin{array}{cc} 1&\beta\\
0&1
\end{array}\right)_{(2)}$. By Example~\ref{E:1+E12 times 1+F12}, $supp\mu_X$ contains more
than two points.
\end{proof}
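The ``simple computations'' invoked in Subcase 3.1 amount to expanding $AB=(1+A_{1})(1+B_{1})$ and using freeness of the centered elements $A_{1},B_{1}$; for instance,

```latex
\begin{align*}
\tau(AB) &= 1+\tau(A_{1})+\tau(B_{1})+\tau(A_{1}B_{1}) = 1\ ,\\
\tau(ABAB) &= 1+\tau(A_{1}^{2})+\tau(B_{1}^{2})\ ,
\end{align*}
```

because in the expansion of $(1+A_{1})(1+B_{1})(1+A_{1})(1+B_{1})$ every word which alternates between the centered free elements $A_{1}$ and $B_{1}$ (such as $A_{1}B_{1}$, $A_{1}B_{1}A_{1}$ or $A_{1}B_{1}A_{1}B_{1}$) has trace zero, so the only surviving nontrivial words are $A_{1}\cdot 1\cdot A_{1}\cdot 1$ and $1\cdot B_{1}\cdot 1\cdot B_{1}$.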
\begin{Corollary} Let $X=AB$ or $X=A+B$, where $A\in M_2(\mathbb{C})_{(1)}$ and
$B\in M_2(\mathbb{C})_{(2)}$. If $X\neq \lambda 1$, then $X$ has a
nontrivial hyperinvariant subspace relative to $\mathcal M$.
\end{Corollary}
\begin{proof} If $X=A+B$ and $A=\lambda
1$ or $B=\lambda 1$, then $X=\lambda 1+ B$ or $X=\lambda 1+ A$. If
$X$ is not a scalar matrix and $\eta$ is an eigenvalue of $X$, then
$ker(X-\eta 1)$ is a nontrivial hyperinvariant subspace of $X$. If
$X=A+B$ and $A,B\neq \lambda 1$, then $supp\mu_X$ contains more than
two points by Proposition~\ref{P:brown measure}. By~\cite{H-S1}, $X$
has a nontrivial hyperinvariant subspace relative to $\mathcal M$. If $X=AB$
and $A=\lambda 1$ or $B=\lambda 1$, then $X=\lambda B$ or $X=\lambda
A$. If $X$ is not a scalar matrix and $\eta$ is an eigenvalue of
$X$, then $ker(X-\eta 1)$ is a nontrivial hyperinvariant subspace of
$X$. If $X=AB$ and $A,B\neq \lambda 1$, then $supp\mu_X$ contains
more than two points. By~\cite{H-S1}, $X$ has a nontrivial
hyperinvariant subspace relative to $\mathcal M$.
\end{proof}
{\bf Acknowledgements:}\,The authors want to express their deep
gratitude to Professor Eric Nordgren for valuable discussions. The
authors also thank the referee for some useful suggestions.
\section{Introduction and results}
In biology, several key processes of cellular spatial organisation are driven by chemotaxis. The effective mechanism by which individual cells undergo directed motion varies among organisms. We are particularly interested here in bacterial migration, characterized by the smallness of the cells, and by their ability to swim up attractant concentrations spanning several orders of magnitude. Several mathematical models, depending on the level of description, have been developed for the collective motion of cells \cite{Perthame04,PerthameBook}. Among them the kinetic model due to Othmer, Dunbar and Alt (ODA) \cite{Alt, ODA} describes a population of bacteria in motion ({\em e.g. E. coli} or {\em B. subtilis}) \cite{ErbanOthmer04} in a field of chemoattractant (a process called {\em chemokinesis}). These small cells are not capable of measuring any head-to-tail gradient of the chemical concentration, nor of directly choosing some preferred direction of motion towards highly concentrated regions. Therefore they develop an indirect strategy to select favourable areas, detecting a sort of time derivative of the concentration along their pathways and reacting accordingly \cite{MK72}. In fact they undergo a jump process where free movements (runs) are punctuated by reorientation phenomena (tumbles) \cite{WWS03}. For instance it is known that {\em E. coli} increases the time spent running along a favourable direction \cite{MK72,BB74,ErbanOthmer04}.
This jump process can be described by two pieces of information. First, cells switch the rotation of their flagella, from counter-clockwise CCW (free runs) to clockwise CW (reorientation, or tumbling phase), and conversely. This decision is the result of a complex chain of reactions inside the cells, driven by the external concentration of the chemoattractant \cite{GarrityOrdal,SPO,WWS03}.
Then cells select a new direction. Although we expect large organisms (like { the slime mold amoebae} {\em D. discoideum}) to choose directly a favourable direction, bacteria are unable to do so, and they randomly choose a new direction of motion. Actually some { directional persistence} may influence this selection, privileging some angles over others.
However we will not consider inertia here for simplicity.
{ From the molecular point of view, the frequency of tumbling events is driven by a regulatory protein network made of the membrane receptor complex (MCP), the switch complex located at the flagellar motor, and six main proteins in between (namely CheA, CheW, CheY, CheZ, CheB and CheR -- more are involved in {\em B. subtilis}, but the whole picture is similar \cite{GarrityOrdal}). This regulatory network exhibits a remarkable excitation/adaptation process \cite{BB74,SBB86,SPO}. When the attractant concentration increases suddenly, the tumbling frequency decreases on a short time scale (excitation), but increases back to the basal activity after a while (adaptation). This allows bacteria to follow favourable pathways over several orders of magnitude of the concentration. Note that a similar adaptation process is involved in bigger organisms like {\em D. discoideum} \cite{Hofer,OthmerSchaap}. Realistic models have been proposed based on the complete regulatory network \cite{HauriRoss,SPO}, as well as toy models capturing the key behaviour (basically made of a two-species relaxing ODE system \cite{ErbanOthmer07}). Note that this network is also known to select positive perturbations of the chemoattractant concentration only \cite{BB74}, and to be highly sensitive to very low changes in the chemoattractant concentration \cite{SBB86}.}
As a drift-diffusion limit of the ODA kinetic model, one recovers the so-called Keller-Segel model \cite{HillenOthmer,CMPS,CDMOSS}, where the diffusion and chemosensitivity coefficients can be derived from the mesoscopic description. The Keller-Segel model exhibits a remarkable dichotomy where cells aggregate if they are sufficiently numerous, and disperse if not \cite{Horstmann}. Particularly in the two dimensional case, the total mass of cells is the key parameter which { selects between these phenomena {(respectively global existence {\em versus} blow-up in finite time)}. This simple alternative is depicted in the whole space ${\mathbb {R}}^2$ in \cite{BDP}.} In the three dimensional case however, the relevant quantity ensuring global existence is rather the $L^{3/2}$ norm of the initial cell density \cite{CoPeZa}. Therefore it is of interest to ask the question of global existence at the mesoscopic level. As far as we know, no blow-up phenomenon has been found in the ODA kinetic model.
{ The goals of this paper are twofold.}
First we investigate global existence theory for several kinetic models depending on the growth of the reorientation kernel with respect to the chemical. In a previous work we successfully applied dispersion and Strichartz estimates to kinetic models including delocalization effects \cite{BCGP}, which can be either a time delay effect due to intracellular dynamics, or some measurement at the tip of a cell protrusion. Those techniques are applied here to a class of assumptions where the reorientation kernel is actually independent of the (inner and outer) velocities. Those assumptions are very rough from the biological point of view, but they aim to determine the critical growth of the turning kernel ensuring global existence.
On the other hand we apply those ideas to a more realistic kinetic model including internal molecular variables, improving the results of \cite{ErbanHwang}. We present general assumptions for global existence that can be satisfied by the two-species excitation/adaptation ODE system, or more generally by a complex network.
We consider the following ODA kinetic model for bacterial chemotaxis:
\begin{subequations}\label{kinmodel}
\begin{align}
\partial_{t} f + v\cdot\nabla_{x} f &= \int_{v'\in V} T[S](t,x,v,v') f(t,x,v') dv' \nonumber \\
&\qquad - \int_{v'\in V} T[S](t,x,v',v) f(t,x,v) dv' \ ,\quad t>0\ , x\in {\mathbb {R}}^d \label{eq:ODA f}\\
- \Delta S + S &= \rho(t,x)=\int_{v\in V} f(t,x,v) dv\ ,
\end{align}
\end{subequations}
associated with the initial condition $f(0,x,v)= f_0(x,v)$. The space density of cells is denoted by $\rho(t,x)$. We assume in this paper that the space dimension is $d=2$ or $d=3$. We assume as usual that the set $V\subset {\mathbb {R}}^d$ of { admissible} cell velocities is bounded.
{The free transport operator $\partial_t f + v\cdot\nabla_x f$ describes the free runs of the bacteria which have velocity $v$.} On the other hand, the scattering operator in the right-hand side of \eqref{eq:ODA f} expresses the reorientation process (tumbling) occurring during the bacterial migration towards regions of high concentration in chemoattractant $S$.
\subsection*{Partial review of plausible reorientation mechanisms.}
We review below the assumptions existing in the literature concerning the reorientation kernel, in order to motivate the forthcoming work.
\subsubsection*{Delocalization effects.}
In a previous article \cite{BCGP}, we considered mild assumptions of the type:
\begin{equation}
\label{eq:turning kernel delay}
0\leq T[S](t,x,v,v') \leq C \Big( 1 + S(t,x-\varepsilon v') + |\nabla S|(t,x-\varepsilon v') \Big) , \end{equation}
or,
\begin{equation}
\label{eq:turning kernel protrusion}
0\leq T[S](t,x,v,v') \leq C \Big( 1 + S(t,x+\varepsilon v) + |\nabla S|(t,x+\varepsilon v) + |D^2S|(t,x+\varepsilon v) \Big). \end{equation}
Those assumptions were studied for example in \cite{CMPS,HKS} { in two or three dimensions of space.}
Under assumption \eqref{eq:turning kernel delay}, the bacteria take the decision to reorient with the frequency $\lambda[S] = \int T[S](t,x,v',v)\ dv'$, and then {choose} a new direction randomly. Therefore the turning frequency increases once cells have entered a favourable area (where some delay effect { due to internal dynamics} is expressed by the space shift $ - \varepsilon v'$: the concentration measurement is performed at position $x- \varepsilon v'$ by the cell with velocity $v'$, turning at position $x$).
Intuitively, the cells increase the turning frequency to be confined in highly concentrated areas.
The hypothesis \eqref{eq:turning kernel protrusion} is even more intuitive: cells, when they decide to turn (after a complex averaging over the surrounding area within a radius $\simeq\varepsilon$), simply choose a better new direction $v$ with higher probability. This anticipation measurement can be the result of sending protrusions { into the surroundings}, or of considering that the cells have some finite radius with receptors located all over the membrane (see also \cite{HPS} for a similar interpretation at the parabolic level -- volume effects have also been considered at the kinetic level in \cite{ChalubRodrigues}). However this interpretation is hardly relevant for bacteria, which are small cells, unable to feel gradients or to send protrusions.
\begin{remark}
The gradient in assumption \eqref{eq:turning kernel delay} has to be motivated, since we have highlighted that bacteria cannot feel gradients of chemical concentration. As a matter of fact, from a homogeneity viewpoint, $\nabla S$ has the same weight as the time derivative $\partial_t S$. Therefore we can indeed replace \eqref{eq:turning kernel delay} by the assumption
\[ 0\leq T[S](t,x,v,v') \leq C \Big( 1 + S(t,x-\varepsilon v') + |\partial_t S|(t,x-\varepsilon v') \Big) \ , \]
which makes sense biologically (although a more realistic assumption is expressed for example in \cite{DolakSchmeiser}, see below \eqref{eq:directional derivative}).
To see that $\nabla S$ and $\partial_t S$ do have the same homogeneity, observe that
\[ \partial_t S = G* \partial_t\rho = - G*\nabla\cdot j \ , \]
where $G$ is the Bessel potential, and the flux $j$ is given by
\[ j(t,x) = \int_V v f(t,x,v)\ dv \ , \quad |j(t,x) | \leq (\max_{v\in V} |v|) \rho(t,x)\ . \]
As a consequence, { in the three dimensional case we have}
\[ |\partial_t S| \simeq \left| \frac1{|x|}*(\nabla \cdot j) \right|\simeq \frac1{|x|^2}* |j| \lesssim \frac1{|x|^2}* \rho \simeq |\nabla S|\ . \]
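Here we use the explicit form of the three-dimensional Bessel kernel (a standard fact, recalled for convenience):

```latex
G(x)=\frac{e^{-|x|}}{4\pi|x|}\ , \qquad
\nabla G(x) = -\frac{(1+|x|)\,e^{-|x|}}{4\pi|x|^{2}}\,\frac{x}{|x|}\ ,
```

so that $G(x)\simeq \frac{1}{4\pi|x|}$ and $|\nabla G(x)|\simeq \frac{1}{4\pi|x|^{2}}$ near the origin, which is exactly the homogeneity used in the chain of comparisons above.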
\end{remark}
The dispersion lemma turned out to be a powerful tool for dealing with those assumptions (even the second derivative of $S$ can be
added in \eqref{eq:turning kernel protrusion}).
It turns out that putting together those two hypotheses \eqref{eq:turning kernel delay} and \eqref{eq:turning kernel protrusion} is a much harder task (due to the fact that we lose the benefit of the decay term in the balance law { along the estimates}).
{ Some progress} in this direction was recently made in \cite{CMPS, HKS, BCGP} but the whole picture is not clear so far. For example in \cite{BCGP} it was shown that in $d=3$ dimensions we have global existence of weak solutions if
\begin{equation}\label{1}
0 \leq T[S](t,x,v,v') \lesssim 1 + S(t,x+\varepsilon v) + S(t,x-\varepsilon v') + \abs{\nabla S(t,x+\varepsilon v)} + \abs{\nabla S(t,x-\varepsilon v')},
\end{equation}
provided that the initial data are small in the critical space $L^{3/2}$. If \eqref{1} is strengthened by dropping the last term,
then a global existence result was established without a smallness assumption on the initial data. The proofs use the
dispersion and Strichartz estimates of \cite{CP} and rely on the delocalization effects induced by $x+\varepsilon v$ and $x-\varepsilon v'$.
Interestingly, the fact that some directed motion emerges from turning kernels which resemble assumptions \eqref{eq:turning kernel delay}, \eqref{eq:turning kernel protrusion} or \eqref{1} -- as pointed out by the diffusion limit -- seems to involve a completely different
mechanism from the following commonly described behaviour in {\em E. coli}.
\subsubsection*{Persistence of motion in the good directions.}
As opposed to the previous set of hypotheses, it is commonly accepted that bacteria increase the time spent running in a favourable direction \cite{WWS03,ErbanOthmer04}. That is, the turning kernel is expected to decrease as the chemical concentration increases along the cell's trajectory, like \begin{equation} T[S](v,v') = T_0 + \psi(S_t + v'\cdot \nabla S)\ , \label{eq:directional derivative}\end{equation} where $\psi$ is nonnegative and decreasing, and $S_t + v'\cdot \nabla S$ denotes the directional derivative along the free run before turning (see \cite{ErbanHwang}, \cite{DolakSchmeiser} { where this hypothesis is injected in a model for {\em D. discoideum} self-organization, and its drift-diffusion limit is derived}). One may think of $\psi$ as: $\psi(\eta) = 0$ if $\eta>0$ and $\psi(\eta) = 1$ if $\eta<0$ for instance. { Actually, in \cite{ErbanHwang} the authors make explicit two caricatures of behaviour, where cells might "perfectly avoid going in wrong directions", or "perfectly follow good directions". The latter is stressed there and intuitively leads the system to regular solutions, whereas the former might develop singularities where cells aggregate.}
{ The above mechanism is also part of more complex models including internal variables (which are reviewed and analysed further below). In fact some molecular concentration denoted by $y$ (standing for the phosphorylated CheY-P), which induces a tumbling behaviour, is actually reduced under attractant binding to the membrane receptor (excitation phase). The chemical chain of reactions is in fact inhibited under activation of the membrane receptor complex. On the contrary, expression of a repellent activates this internal network, favouring tumbling.} Global existence theory for such a class of models has been discussed in \cite{ErbanHwang} for the one dimensional case.
\subsubsection*{Internal dynamics}
Complex models of bacterial motility include a cascade of chemical reactions. This chain of activator/inhibitor reactions links the evaluation of the chemical concentration by the membrane receptors to the rotational switch of the flagella, inducing { or inhibiting} the tumbling phase. Several works propose a chemical network describing this complexity \cite{HauriRoss,SPO}. In particular, the global short-term excitation/mid-term adaptation mechanism is crucial for the cells to crawl up across levels of magnitude of the chemical concentration. Caricatures of such an excitation/adaptation process are depicted in \cite{ErbanOthmer04,DolakSchmeiser} for instance. However we will keep in this paper the abstract level required for our purpose ({ for an illustrative example, see Section \ref{sec:internal}}).
In the following, $y\in {\mathbb {R}}^m$ denotes the whole internal state of the cells, which can correspond to a large set of { molecular} concentrations in the chemical network (in fact $m=2$ in the caricatural excitation/adaptation system).
In accordance with previous notations, $p(t,x,v,y)$ denotes the cell density at position $x$, velocity $v$, and with internal state $y$. As before, $f(t,x,v) = \int_y p(t,x,v,y)\ dy$ is the cell density in position$\times$velocity space. On the other hand we introduce $\mu(t,x,y) = \int_v p(t,x,v,y) \ dv$, and as usual $\rho(t,x) = \int_{v,y} p(t,x,v,y) \ dvdy$.
The chemical potential is given by a mean-field equation $-\Delta S + S = \rho(t,x)$. But this could be extended to a more realistic influence of the internal state on the chemical secretion (as it is in \cite{DolakSchmeiser})
\[- \Delta S + S = \int_y \omega(y) \mu(t,x,y)\ dy \ ,\] under suitable assumptions on the weight $\omega$.
The dynamics inside an individual cell are driven by an ODE system representing the protein network in an abstract way:
\[ \frac{dy}{dt} = G\big(y,S(t,x)\big)\ , \quad y\in {\mathbb {R}}^m\ . \]
The cell master equation describing the run and tumble processes, and the chemical potential equation are respectively:
\begin{subequations}\label{kinmodel_y intro}
\begin{align}
\partial_{t} p + v\cdot\nabla_{x} p + \nabla_y \cdot \Big( G(y,S) p \Big) &= \int_{ v'\in V} T(t,x,v,v',y) p(t,x,v',y) dv' \nonumber\\
&\qquad - \int_{ v'\in V} T(t,x,v',v,y) p(t,x,v,y) dv' \ ,\\
-\Delta S + S & = \rho \ .
\end{align}
\end{subequations}
The turning kernel $T$ can be decomposed in this context as a product of a turning frequency $\lambda[y]$, depending on the internal state only, and a reorientation kernel $K(v,v')$ which may describe some persistence in the choice of a new direction with respect to the old one. Without loss of generality here we assume that $K(v,v')$ is constant and renormalized as being $K(v,v') = 1/|V|$.
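With this normalization, a direct substitution of $T=\lambda[y]/|V|$ shows that the scattering operator on the right-hand side of \eqref{kinmodel_y intro} takes the simple relaxation form

```latex
\int_{v'\in V} T(t,x,v,v',y)\, p(t,x,v',y)\, dv'
- \int_{v'\in V} T(t,x,v',v,y)\, p(t,x,v,y)\, dv'
= \lambda[y]\left(\frac{\mu(t,x,y)}{|V|} - p(t,x,v,y)\right)\ ,
```

that is, tumbling relaxes the velocity distribution, at rate $\lambda[y]$, towards the uniform density on $V$.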
It is worth noticing that this realistic kinetic model may encode an enormous amount of information on the microscopic cell biology, and links different scales of description, since we eventually end up with a cell population density $\rho(t,x)$.
\medskip
As a partial conclusion, we observe that several scenarios, relying on different kinds of underlying hypotheses, drive the system to positive chemotaxis (at least considering the formal drift-diffusion limit of those models).
\subsection*{Statement of the main results}
In this paper we investigate the critical growth of the turning kernel, in terms of space norms of the chemical, which ensures global existence for the kinetic model. { In particular
we consider controls on the turning kernel without any dependence upon the velocity variables, that is, with some abuse of notation:
\[ 0\leq T[S](t,x,v,v') \leq T[S](t,x)\ , \] under suitable conditions on the growth of $T[S]$.}
We exhibit examples in 2D and 3D, restricting ourselves to some $L^p$ norms of the chemical (and not of its gradient for instance) for which our method appears to be borderline.
In particular, it is natural to ask (see Section 3 of \cite{CMPS} and the concluding remarks in \cite{BCGP})
whether global existence can be established under a hypothesis of the form
\begin{equation}\label{hypothesisA}
0 \leq T[S](t,x,v,v') \leq C\Big( 1+ \norm{S(t,\cdot)}{L^{\infty}({\mathbb {R}}^d)}^{\alpha}\Big)\ ,
\end{equation}
where $\alpha>0$.
\subsubsection*{Exponential growth in dimension 2}
Consider first the case of dimension $d=2$. It is easy to see using the methods of \cite{CMPS} that we have global existence
for any exponent $\alpha>0$ within \eqref{hypothesisA}. In analogy with global existence results for nonlinear wave or Schr\"odinger equations
\cite{IMM1, IMM2, NO1, NO2} we can ask whether the turning kernel can grow exponentially:
\begin{equation}\label{hypothesisB}
0 \leq T[S](t,x,v,v') \leq C\left( 1+ \exp\left[\norm{S(t,\cdot)}{L^{\infty}({\mathbb {R}}^2)}^{\beta}\right] \right) .
\end{equation}
We will show that this is actually possible: if $0<\beta<1$ then we have global existence for large data; if $\beta=1$
we have global existence for initial data of small mass. Our proof requires $M<\pi$, but we do not know whether this bound is optimal.
Neither do we know whether blow-up { may} occur for large $M$ or for exponents $\beta>1$.
We shall prove the following
\begin{theorem} \label{exp3C}
Consider the system \eqref{kinmodel} in $d=2$ dimensions under hypothesis \eqref{hypothesisB} and let $1<p<2$.
Assume $0 <\beta \leq 1$.
If $\beta =1 $ assume also that $M<\pi$, where $M=\iint_{{\mathbb {R}}^2\times V} f_0(x,v)\, dx\, dv$ is the { total mass of cells.}
If $f_0\in L^{1}_{x}L^{p}_{v} \cap L^{1}_{x,v}$, then
\eqref{kinmodel}
has a global weak solution $f$ with $f(t)\in L^{p}_{x}L^{1}_{v} \cap L^{1}_{x,v}$.
\end{theorem}
\subsubsection*{Almost $L^\infty$ growth in 3D}
Naturally, from the global existence point of view, we address the question of an $\|S\|_\infty$ growth of the turning kernel in the case of $d=3$ dimensions: $T[S]\leq C\big( 1+ \|S\|_\infty\big)$. This case cannot be handled by our method in three dimensions so far. Even in the simpler case $T[S] \leq C\big(1+ S(t,x)\big)$ our dispersion method fails.
Puzzlingly enough, if $T[S]= C\big(1 + S(t,x)\big)$ then a very simple symmetrization trick does the job perfectly (see Section \ref{sec:dispersion}).
It was noticed
in \cite{CMPS} that if $\alpha<1$ in \eqref{hypothesisA} then we have global existence (a sketch of the proof will be given
in Section \ref{sublinear}). The case $\alpha=1$ remains open.
In this direction we will use the methods of \cite{BCGP} to show that we have global existence under the assumption
\begin{equation}\label{lra2}
0\leq T[S](t,x,v,v') \leq C\left( 1+ \norm{S(t,\cdot)}{L^{r}({\mathbb {R}}^3)}^{\alpha}\right)\ ,
\end{equation}
where $0<\alpha < \frac{r}{r-3}$ and $r$ can be arbitrarily large. Notice that $\frac{r}{r-3} \to 1^{+}$
as $r\to\infty$, { which is consistent with the above obstruction}. More precisely we shall prove the following
\begin{theorem} \label{lra1}
Let $d= 3$ and $1<p<3/2$. Suppose that the turning kernel $T[S]$ satisfies hypothesis \eqref{lra2} for some $r$ and $\alpha$ { verifying:}
if $1\leq r \leq 3$, $\alpha$ can be any positive number, whereas in case of $3< r <\infty$,
$0<\alpha < \frac{r}{r-3}$.
If $f_0\in L^{1}_{x} L^{p}_{v}\cap L^{1}_{x,v}$, then the kinetic model \eqref{kinmodel}
has a global weak solution with $f(t)\in L^{p}_{x}L^{1}_{v}\cap L^{1}_{x,v}$.
\end{theorem}
If we assume that the turning kernel satisfies \eqref{hypothesisA} with $\alpha =1$,
we can use the Strichartz estimates of \cite{CP} to show global existence, provided that the critical norm
$\norm{f_0}{L^{3/2}\left({\mathbb {R}}^{6}_{x,v}\right)}$ is small.
\begin{theorem} \label{3dsmall}
Let $d=3$ and assume that the turning kernel satisfies
\[ 0 \leq T[S](t,x,v,v') \leq C\left( 1+ \norm{S(t,\cdot)}{L^{\infty}({\mathbb {R}}^3)}\right)\ .\]
Assume also that
$f_0 \in L^{1}\left({\mathbb {R}}^{6}_{x,v}\right) \cap
L^{3/2}\left({\mathbb {R}}^{6}_{x,v}\right)$
and that $\norm{f_0}{L^{3/2}\left({\mathbb {R}}^{6}_{x,v}\right)}$ is sufficiently
small. Then \eqref{kinmodel} has a global weak solution.
\end{theorem}
\subsubsection*{Internal dynamics.}
We shall prove the following theorem for global existence in three dimensions of space.
\begin{theorem}
\label{the:internal}
Let $d=3$.
Assume that the turning kernel has the form $T = \lambda[y]\times K(v,v')$ where $K$ is uniformly bounded, and $\lambda$ grows at most linearly: $\lambda[y]\leq C\big( 1+ |y|\big)$. On the other hand, assume that $G$ has a (sub)critical growth with respect to $y$ and $S$: there exists $0\leq \alpha<1$ such that
\[ |G|(y,S)\leq C\Big( 1 + |y| + S^\alpha\Big)\ . \]
Then there exists an exponent $1<p<3/2$ such that the system \eqref{kinmodel_y intro} admits globally existing solutions with $p(t)\in L^p_xL^1_vL^1_y$.
\end{theorem}
\section{The dispersion lemma applied to kinetic chemotaxis, and the symmetrization trick}
\label{sec:dispersion}
In this section we present a direct application of the dispersion lemma \cite{CP} to system \eqref{kinmodel}. As a consequence we are led to the following question, which is decoupled from \eqref{kinmodel}:
\begin{quote}
\emph{Investigate the critical norm for the turning kernel ensuring the bound
\[ 0 \leq T[S] \leq C\Big( 1 + \| \rho(t) \|_{L^{p}}\Big)\ , \]
for $p<d'$ in dimension $d$.}
\end{quote}
The rest of this paper will be devoted to this question of critical growth.
\begin{lemma} \label{lem:dispersion} Assume the turning kernel can be controlled without any dependence on the velocity variables $v$ or $v'$:
\[ 0 \leq T[S](t,x,v,v') \leq T[S](t,x)
.\] Then, applying the dispersion estimate, we get the following for $p\in [1,d')$:
\[ \| \rho(t) \|_{L^p} \leq \|f_0(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}} + |V|^{1/p} \int_{s=0}^t (t-s)^{-\lambda} \int_x T[S](s,x) \rho(s,x) \ dxds \ , \]
where $\lambda = d / p'$.
\end{lemma}
Observe that the condition $d<p'$ is crucially required here to ensure the time integrability of the right-hand side in what follows.
\begin{proof}
As usual, dropping the nonpositive loss term in the Duhamel formula and using the bound on the turning kernel, the solution of \eqref{kinmodel} satisfies
\[f(t,x,v)\leq f_0(x-tv,v) + \int_{0}^{t} T[S](s,x-(t-s)v) \rho(s,x-(t-s)v) ds \ .\]
Using dispersion we get immediately
\[\|f(t,x,v)\|_{L^{p}_{x}L^{1}_{v}} \leq \|f_0(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}} +
\int_{0}^{t} \frac{1}{(t-s)^{d(1-1/p)}} \Big\| T[S](s,x) \rho(s,x) \Big\|_{L^1_x L^p_v} \! ds \ .\]
\end{proof}
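For completeness, the dispersion estimate invoked above (borrowed from \cite{CP}) follows from the change of variables $w=x-(t-s)v$ together with Minkowski's integral inequality; extending $g$ by zero outside $V$, one has

```latex
\begin{align*}
\int_{V} g\big(x-(t-s)v,v\big)\, dv
  &= (t-s)^{-d}\int_{{\mathbb {R}}^{d}} g\Big(w,\frac{x-w}{t-s}\Big)\, dw\ ,\\
\Big\| \int_{V} g\big(x-(t-s)v,v\big)\, dv \Big\|_{L^{p}_{x}}
  &\leq (t-s)^{-d}\int_{{\mathbb {R}}^{d}} \Big\| g\Big(w,\frac{x-w}{t-s}\Big)\Big\|_{L^{p}_{x}} dw
   = (t-s)^{-d(1-\frac{1}{p})}\, \|g\|_{L^{1}_{x}L^{p}_{v}}\ .
\end{align*}
```

Applied with $g(x,v)=T[S](s,x)\rho(s,x)$, which does not depend on $v$, this gives $\|g\|_{L^{1}_{x}L^{p}_{v}}=|V|^{1/p}\int_x T[S](s,x)\rho(s,x)\, dx$, whence the statement of the lemma.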
We also state a second lemma, which is interesting in its own right, but which will not be used in the sequel. Following \cite{Perthame04b}, it claims that a kernel which is symmetric with respect to $v$ and $v'$ ensures global existence. It is relevant from the mathematical point of view because we consider bounds that do not depend on $v$ and $v'$. It is however biologically irrelevant in the case of a purely symmetric kernel, because no directed motion emerges in the drift-diffusion limit \cite{CMPS}.
\begin{lemma}
Consider the scattering equation,
\begin{equation}\label{13}
\partial_{t} f + v\cdot \nabla_{x} f = \int_{V} \left(K(t,x,v,v')f(t,x,v') - K(t,x,v',v)f(t,x,v)\right) dv'\ .
\end{equation}
and assume that $K$ is symmetric w.r.t. $v$ and $v'$, i.e.
\begin{equation}
K(t,x,v,v')= K(t,x,v',v) \geq 0 .
\end{equation}
Then all $L^{p}_{x,v}$-norms of the density $f$ ($1\leq p<\infty$) are uniformly bounded:
$\norm{f(t)}{L^{p}_{x,v}} \leq \norm{f_0}{L^{p}_{x,v}}$.
\end{lemma}
\begin{proof}
First rewrite \eqref{13} using the symmetry property. It becomes
\begin{equation}\label{13b}
\partial_{t} f + v\cdot \nabla_{x} f = \int_{V} K(t,x,v,v') \left(f(t,x,v') - f(t,x,v)\right) dv'\ .
\end{equation}
Next multiply \eqref{13b} by $p f^{p-1}(t,x,v)$ to get
\begin{align*}
& \partial_{t} f^{p} + v\cdot \nabla_{x} f^{p} = p \int f^{p-1}(t,x,v)
K(t,x,v,v') \left(f(t,x,v') - f(t,x,v)\right) dv'
\end{align*}
Integrate with respect to $x$ and $v$ to get
\begin{align*}
&\dfrac d{dt} \iint f^{p} dv dx =
p \iiint f^{p-1}(t,x,v)
K(t,x,v,v')\left(f(t,x,v') - f(t,x,v)\right) dv' dv dx\ .
\end{align*}
We can symmetrize the latter expression to obtain eventually
\begin{multline*}
\dfrac d{dt} \iint f^{p} dv dx = \\
- \frac p2 \iiint K(v,v')
\left(f^{p-1}(t,x,v)-f^{p-1}(t,x,v')\right)\left(f(t,x,v) - f(t,x,v')\right) dv' dv dx \ .
\end{multline*}
Since $f\geq 0$ we have
\[\left(f^{p-1}(t,x,v)-f^{p-1}(t,x,v')\right)\left(f(t,x,v) - f(t,x,v')\right) \geq 0\ , \]
because these two factors always have the same sign.
It follows that
\begin{equation*}
\dfrac d{dt} \iint f^{p} dv dx \leq 0 \ .
\end{equation*}
\end{proof}
\section{Exponential growth in $L^\infty$ in dimension 2}\label{d=2}
In this Section we prove Theorem \ref{exp3C}.
Working as in the proof of Trudinger's inequality, we expand the exponential into a power series
and use Young's inequality as in \cite{CMPS, BCGP} to estimate each term. The dispersion method is then used as in \cite{BCGP} through Lemma~\ref{lem:dispersion}.
Throughout these processes we keep track of the growth of the various constants in order to make sure that the resulting series converges.
A similar approach has been used in \cite{IMM1, IMM2, NO1, NO2} to study nonlinear wave and Schr\"odinger equations.
We will need the following two Lemmas.
\begin{lemma}\label{exp3A} Let
$G(x)= \frac{1}{4\pi}\int_{0}^{\infty} e^{- \pi \frac{|x|^2}{s}} e^{-\frac{s}{4\pi}} \frac{ds}{s}$.
There exists a positive constant $A$ such that
\begin{equation}\label{exp3B}
G(x) \leq A + \frac{1}{2\pi} \abs{\log|x|\,}\ \ ,\ \ |x|\leq 1 .
\end{equation}
\end{lemma}
\begin{proof}
Fix $x$ with $|x|\leq 1 $. Write $G(x)=G_{1}(x) + G_{2}(x) + G_{3}(x)$ where
\begin{align*}
G_{1}(x)&=\frac{1}{4\pi}\int_{0}^{|x|^2} e^{- \pi \frac{|x|^2}{s}} e^{-\frac{s}{4\pi}} \frac{ds}{s} ,\\
G_{2}(x)&=\frac{1}{4\pi}\int_{|x|^2}^{1} e^{- \pi \frac{|x|^2}{s}} e^{-\frac{s}{4\pi}} \frac{ds}{s} ,\\
G_{3}(x)&=\frac{1}{4\pi}\int_{1}^{\infty} e^{- \pi \frac{|x|^2}{s}} e^{-\frac{s}{4\pi}} \frac{ds}{s} .
\end{align*}
For $G_{1}$ use $e^{-s/4\pi} \leq 1$ and then change variables $s\mapsto t$, where $s=|x|^2 t$, to get
$G_{1}(x)\leq \frac{1}{4\pi}\int_{0}^{1} e^{-\pi\frac{1}{t}} \frac{dt}{t}=: A_{1}$.
For $G_{2}$ we have
$G_{2}(x) \leq \frac{1}{4\pi}\int_{|x|^2}^{1} \frac{ds}{s} = \frac{-\log |x|}{2\pi}$.
For $G_{3}$ use $e^{-\pi |x|^2 /s} \leq 1$ to get
$G_{3}(x) \leq \frac{1}{4\pi}\int_{1}^{\infty} e^{-\frac{s}{4\pi}} ds =: A_{2}$. Collecting the three terms, \eqref{exp3B} holds with $A=A_{1}+A_{2}$.
\end{proof}
\begin{remark}
In fact the exact asymptotics of $G$ near the origin is:
\[ G(x) = -\frac1{2\pi} \log|x| + \gamma + \frac1{2\pi} \log 2 + o(1) \ , \]
where $\gamma$ is the Euler constant.
\end{remark}
\begin{lemma}\label{exp6A}
For $x > 0$ define $\Gamma(x)=\int_{0}^{\infty} t^{x-1} e^{-t} dt .$
Then (Stirling's formula)
\begin{align}
&n! = \Gamma(n+1) \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^{n} \ \ (n\to+\infty) , \label{exp6B}\\
&\Gamma(x+1) \sim \sqrt{2\pi x} \left(\frac{x}{e}\right)^{x} \ \ (x\to+\infty)\label{exp6C} .
\end{align}
Moreover, for all $\beta>0$, $x>0$,
\begin{equation}\label{exp6D}
e^{x} > \frac{x^{\beta}}{\Gamma(\beta+1)} .
\end{equation}
\end{lemma}
\begin{proof}
\eqref{exp6B} and \eqref{exp6C} are well known.
For \eqref{exp6D} we have
\[\Gamma(\beta+1) = \int_{0}^{\infty} t^{\beta} e^{-t} dt > \int_{x}^{\infty}t^{\beta} e^{-t} dt >
x^{\beta} \int_{x}^{\infty} e^{-t} dt = x^{\beta} e^{-x} .\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{exp3C}]
Recall from section \ref{sec:dispersion} that a control of the turning kernel of the form $ T[S] \leq C\big( \| \rho(t) \|_{L^{p}}\big)$ is sufficient to guarantee global existence. The rest of this section is devoted to the proof of this estimate.
Pick $1<p<2$ and set $\mu = p'>2$. In the case $\beta = 1$, assume in addition that $\mu < \frac{2\pi}M$.
Write $S=S^{long}+S^{short}$ where \[S^{long}= \left( \mathbbm{1}_{|x|> 1} G(x)\right) \ast \rho\ , \mbox{and}\quad S^{short}=\left( \mathbbm{1}_{|x|\leq 1} G(x)\right) \ast \rho\ .\]
Since $0<\beta \leq 1$ we have
\[\norm{S}{L^\infty}^{\beta}\leq \left(\norm{S^{\text{long}}}{L^{\infty}} + \norm{S^{\text{short}}}{L^{\infty}}\right)^{\beta} \leq
\norm{S^{\text{long}}}{L^{\infty}}^{\beta} + \norm{S^{\text{short}}}{L^{\infty}}^{\beta}\ ,\]
where we have used the fact that $(x+y)^{\beta}\leq x^{\beta} + y^{\beta}$ for $x,y>0$ and $0<\beta\leq 1$.
Therefore
\[
\exp\left\{\norm{S(t,\cdot)}{L^{\infty}}^{\beta}\right\} \leq
\exp\left\{\norm{S^{long}(t,\cdot)}{L^{\infty}}^{\beta}\right\} \cdot
\exp\left\{\norm{S^{short}(t,\cdot)}{L^{\infty}}^{\beta}\right\} .
\]
For $S^{long}$ we have
\[\norm{S^{long}}{L^\infty}^{\beta} \leq \norm{ \mathbbm{1}_{|x|> 1} G(x)}{L^{\infty}}^{\beta}
\norm{\rho}{L^1}^{\beta} \leq c M^{\beta} ,\]
where $c$ is a positive constant (depending on $\beta$), therefore
\begin{align*}
\exp\left\{\norm{S(t,\cdot)}{L^{\infty}}^{\beta}\right\} & \leq
e^{c M^{\beta}} \exp\left\{\norm{S^{short}(t,\cdot)}{L^{\infty}}^{\beta}\right\}\\
& = e^{c M^{\beta}} \left( 1 + \sum_{j=1}^{\infty} \frac{1}{j!} \norm{S^{short}(t,\cdot)}{L^\infty}^{j\beta} \right) .
\end{align*}
For $S^{short}$ we have
\[
\norm{S^{short}(t,\cdot)}{L^{\infty}} \leq \norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}}
\norm{\rho}{L^{\frac{\mu j}{\mu j-1}}}
\]
For all $j\geq 1$ we have
$\frac{\mu j}{\mu j-1} \leq \frac{\mu }{\mu -1} = p$ therefore
\begin{align*}
\norm{S^{short}(t,\cdot)}{L^\infty} & \leq
\norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}}
\norm{\rho(t,\cdot)}{L^1}^{1-\frac1j} \norm{\rho(t,\cdot)}{L^p}^{\frac1j}\\
&= \norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}}
M^{1-\frac1j} \norm{\rho(t,\cdot)}{L^p}^{\frac1j} ,
\end{align*}
therefore
\[\norm{S^{short}}{L^\infty}^{j\beta} \leq \norm{ G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}}^{j\beta}
M^{j\beta-\beta}
\norm{\rho(t,\cdot)}{L^p}^{\beta} .\]
Consequently
\begin{equation}
\exp\left\{\norm{S(t,\cdot)}{L^{\infty}}^{\beta}\right\} \leq
e^{c M^{\beta}} \left( 1 + \left[\sum_{j=1}^{\infty} \frac{1}{j!}
\norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}}^{j\beta}
M^{j\beta}\right] M^{-\beta} \norm{\rho}{L^p}^{\beta}\right)\ . \label{exp3J}
\end{equation}
We need to guarantee that the series on the right-hand side above converges. Using \eqref{exp3B} we have:
\begin{align*}
\norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}} & \leq
\norm{A + \frac{1}{2\pi} \abs{\log|x|}\, }{L^{\mu j} (|x|\leq 1)}\\
&\leq A \pi^{\frac{1}{\mu j}} + \frac{1}{2\pi} \norm{\log|x|\,}{L^{\mu j}(|x|\leq 1)}\ ,
\end{align*}
and also
\begin{align*}
\norm{\log|x|\,}{L^{\mu j}(|x|\leq 1)}
&=\left( 2\pi \int_{0}^{1} \left(\, - \log r \, \right)^{\mu j} r dr \right)^{1/\mu j}\\
&\leq \left( 2 \pi \int_{0}^{\infty} s^{\mu j} e^{-2s} ds \right)^{1/\mu j} \\
&\leq \left( 2 \pi \int_{0}^{\infty} \frac{s^{\mu j}}{\frac{s^{\mu j}}{\Gamma(\mu j+1)} } e^{-s} ds \right)^{1/\mu j}
\ \ \ \text{by}\ \ \
\eqref{exp6D}\\
&=\left( 2 \pi\right)^{\frac{1}{\mu j}} \left( \Gamma(\mu j+1) \right)^{\frac{1}{\mu j}} \ .
\end{align*}
As a consequence
\begin{equation}\label{exp3H}
\norm{G(x) \mathbbm{1}_{|x|\leq 1} }{L^{\mu j}} \leq A \pi^{\frac{1}{\mu j}} + \frac{1}{2\pi} \left( 2 \pi \right)^{\frac{1}{\mu j}}
\left( \Gamma(\mu j+1) \right)^{\frac{1}{\mu j}} .
\end{equation}
Then the infinite sum in \eqref{exp3J} can be estimated by
\begin{equation}\label{exp3K}
\sum_{j=1}^{\infty} \frac{1}{j!} \left(A \pi^{\frac{1}{\mu j}} + \frac{1}{2\pi} \left( 2 \pi \right)^{\frac{1}{\mu j}}
\left( \Gamma(\mu j +1) \right)^{\frac{1}{\mu j}}
\right)^{j\beta} M^{j\beta} .
\end{equation}
We show that for $\beta<1$ the series converges for any mass $M$, and that
for $\beta=1$ it converges thanks to the restriction $\frac{M\mu}{ 2\pi}<1$. Using the root test we have
\begin{align*}
&\left( \frac{1}{j!} \left(A \pi^{\frac{1}{\mu j}} + \frac{1}{2\pi} \left( 2 \pi\right)^{\frac{1}{\mu j}}
\left( \Gamma(\mu j +1) \right)^{\frac{1}{\mu j}}
\right)^{j\beta} M^{j\beta} \right)^{\frac{1}{j}} \\
& \ \ \ \ =
\frac{1}{\left(j!\right)^{\frac{1}{j}}}
\left(A \pi^{\frac{1}{\mu j}} + \frac{1}{2\pi} \left( 2 \pi\right)^{\frac{1}{\mu j}} \left( \Gamma(\mu j +1) \right)^{\frac{1}{\mu j}}
\right)^{\beta} M^{\beta} \\
&\ \ \ \ \leq \frac{1}{\left(j!\right)^{\frac{1}{j}}}
\left(A^{\beta} \pi^{\frac{\beta}{\mu j}} + \left(\frac{1}{2\pi}\right)^{\beta} \left( 2 \pi \right)^{\frac{\beta}{\mu j}}
\left(\Gamma(\mu j +1) \right)^{\frac{\beta}{\mu j}}
\right) M^{\beta} .
\end{align*}
We have $\frac{1}{\left(j!\right)^{\frac{1}{j}}} A^{\beta} \pi^{\frac{\beta}{\mu j}} \to 0$, therefore it remains
to examine the limit of
\begin{equation}\label{exp6E}
\frac{1}{\left(j!\right)^{\frac{1}{j}}} \left(\frac{1}{2\pi}\right)^{\beta} \left( 2 \pi \right)^{\frac{\beta}{\mu j}}
\left( \Gamma(\mu j +1) \right)^{\frac{\beta}{\mu j}} M^{\beta} .
\end{equation}
From \eqref{exp6B} we have $j! \sim \sqrt{2\pi j} \left(\frac{j}{e}\right)^{j} $ therefore
$ \left( j! \right)^{\frac{1}{j}} \sim \left(2\pi j\right)^{\frac{1}{2j}} \frac{j}{e} \sim \frac{j}{e} .$
From \eqref{exp6C} we have $\Gamma(\mu j +1) \sim \sqrt{2\pi \mu j} \left(\frac{\mu j}{e}\right)^{\mu j} $ therefore
\[ \left( \Gamma(\mu j +1) \right)^{\frac{\beta }{\mu j}} \sim \left(2\pi \mu j\right)^{\frac{\beta}{2\mu j}}
\left(\frac{\mu j}{e}\right)^{\beta} \sim
\left(\frac{\mu j}{e}\right)^{\beta} .\]
Therefore
\[
\eqref{exp6E}\ \sim \left(\frac{M}{2\pi}\right)^{\beta} \frac{\left(\frac{\mu j}{e}\right)^{\beta}}{ \frac{j}{e}}
\to \begin{cases}
0 &,\ \ \text{if}\ \ \beta < 1 \\
\frac{M \mu}{2\pi} &,\ \ \text{if}\ \ \beta =1
\end{cases} .
\]
The limit is smaller than 1 in all cases, therefore the series converges.
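The root test computation above can be checked numerically by evaluating the $j$-th root of the general term of \eqref{exp3K} via \texttt{lgamma}; since the limit does not depend on the constant $A$, we simply set $A=1$, and the sample values of $\mu$, $M$ and the tolerances are our own choices.

```python
import math

def jth_root(j, beta, mu, M, A=1.0):
    # j-th root of the general term in (exp3K); A is an arbitrary stand-in,
    # the limit as j -> infinity does not depend on it
    fact = math.exp(math.lgamma(j + 1.0) / j)                  # (j!)^(1/j)
    gam = math.exp(math.lgamma(mu * j + 1.0) / (mu * j))       # Gamma(mu j+1)^(1/(mu j))
    base = A * math.pi ** (1.0 / (mu * j)) \
        + (2.0 * math.pi) ** (1.0 / (mu * j)) * gam / (2.0 * math.pi)
    return (base ** beta) * (M ** beta) / fact

# beta = 1 with M*mu/(2*pi) < 1: the roots approach M*mu/(2*pi) < 1
mu, M = 3.0, 1.5
limit = M * mu / (2.0 * math.pi)
assert limit < 1.0
for j in (50, 200, 800):
    assert jth_root(j, 1.0, mu, M) < 1.0
assert abs(jth_root(800, 1.0, mu, M) - limit) < 0.05

# beta < 1: the roots tend to 0 regardless of M
assert jth_root(800, 0.5, mu, M) < 0.1
```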
Summing up, we obtain
\[
T[S](t,x)\leq C\left(1+ \exp\left\{\norm{S(t,\cdot)}{L^{\infty}}^{\beta}\right\}\right) \leq C + C \norm{\rho(t,\cdot)}{L^p}^{\beta}\ .
\]
Recall that we have chosen $p<2$
such that Lemma \ref{lem:dispersion} applies.
We end up with
\begin{equation*}
\norm{\rho(t,x)}{L^{p}}\leq t^{-\lambda}\|f_0(x,v)\|_{L^{1}_{x}L^{p}_{v}}
+ C
\int_{0}^{t}\Big(1+
\norm{\rho(s,x)}{L^{p}_{x}}^{\beta}\Big) \frac{ds}{(t-s)^\lambda}\ ,
\end{equation*}
where $\lambda = 2/p' <1$ so that we can bootstrap.
\end{proof}
\section{(Almost) $L^\infty$ growth in dimension $3$}\label{d=3}
\subsection{Almost $L^\infty$ growth}
In this Section we consider the kinetic model \eqref{kinmodel} in $d=3$ dimensions under hypothesis \eqref{lra2}.
\begin{proof}[Proof of Theorem \ref{lra1}.]
If $1\leq r<3$, $\alpha >0$ and $T[S]$ satisfies \eqref{lra2} then $T[S]$ can be estimated {\em a priori} in terms of the mass $M$.
Indeed,
\[\norm{S(t,\cdot)}{L^{r}({\mathbb {R}}^3)}\leq \norm{G}{L^{r}({\mathbb {R}}^3)}\norm{\rho(t,\cdot)}{L^1({\mathbb {R}}^3)}\leq C M ,\]
because $G(x) \sim \frac{C}{|x|}$ for small $|x|$, and $G(x)$ decays exponentially for large $x$.
Therefore $T[S](t,x,v,v') \leq C + C M^{\alpha} $ and global existence follows easily.
Assume now that $3\leq r <\infty$ and $0<\alpha < \frac{r}{r-3}$.
Choose $p$ defined by \[ \frac1{p'} = \frac{\alpha(r-3)}{3r}<\frac13\ , \] and define $B$ such that
\[
\frac1{B'} = \frac13 - \frac1r = \frac1{\alpha p'}\ .
\]
Using fractional integration \cite{LiebLoss} and the pointwise bound $S = G*\rho\leq \frac C{|x|}*\rho$ (valid for both the short and long range parts), we get
\begin{align*}
\norm{S}{L^r}&\leq C \norm{\frac{1}{|x|} \ast \rho}{L^r}
\leq C
\norm{\rho}{L^{B}}
\leq C M^{1-\frac{p'}{B'}}\norm{\rho}{L^p}^{\frac{p'}{B'}} .
\end{align*}
Consequently we get the crucial estimate required in Lemma \ref{lem:dispersion}:
\[T[S](t,x)\leq C + C \norm{S}{L^r}^\alpha \leq C+C \norm{\rho}{L^p} \ , \]
where $p$ is smaller than $3/2$.
We can complete the proof as in Theorem \ref{exp3C}.
\end{proof}
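The exponent bookkeeping in the case $3\leq r<\infty$ can be verified numerically for a few admissible pairs $(r,\alpha)$ (the sample values are arbitrary): one checks that $p<3/2$ and that the resulting power of $\norm{\rho}{L^p}$ in the bound for $\norm{S}{L^r}^\alpha$ is exactly one.

```python
def exponents(r, alpha):
    # 1/p' = alpha(r-3)/(3r) and 1/B' = 1/3 - 1/r; returns (p, p'/B')
    inv_pp = alpha * (r - 3.0) / (3.0 * r)
    assert inv_pp < 1.0 / 3.0
    p = 1.0 / (1.0 - inv_pp)
    inv_Bp = 1.0 / 3.0 - 1.0 / r
    return p, inv_Bp / inv_pp

for r, alpha in ((3.5, 1.0), (6.0, 1.7), (30.0, 1.05)):
    assert 0.0 < alpha < r / (r - 3.0)
    p, ratio = exponents(r, alpha)
    assert p < 1.5                            # dispersion lemma applies
    assert abs(alpha * ratio - 1.0) < 1e-12   # ||S||^alpha <= C ||rho||_p^1
```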
\subsection{$L^\infty$ growth: global existence for small data}
\begin{proof}[Proof of Theorem \ref{3dsmall}]
We have
\begin{equation}
\label{sd1}
\partial_{t} f + v \cdot\nabla_{x} f \leq C \int_{V} \Big(1+\norm{S(t)}{L^\infty}\Big) f(t,x,v')\ dv' = C \Big(1+ \norm{S(t)}{L^\infty}\Big) \rho(t,x) .
\end{equation}
To apply the Strichartz estimate \cite{CP} we need four parameters $q,p,r,a$ such that
\begin{subequations}\label{sd30}
\begin{align}
&1\leq r \leq p \leq \infty \label{sd3a}\\
& 0 \leq \frac{1}{r} - \frac{1}{p} < \frac{1}{3} \\
& 1 \leq \frac{1}{r} + \frac{1}{p} \label{sd3c}\\
&\frac{2}{q}= 3 \left( \frac{1}{r} - \frac{1}{p}\right) \label{sd3d}\\
&a=\frac{2pr}{p+r} \label{sd3e}
\end{align}
\end{subequations}
More conditions will be imposed later. We get:
\begin{align}
\norm{f}{L^{q}_{t} L^{p}_{x} L^{r}_{v}} & \leq \norm{f_0}{L^{a}_{x,v}} + C
\Big\| (1+ \norm{S(t)}{L^\infty}) \rho(t,x)\Big\|_{L^{q'}_{t} L^{r}_{x} L^{p}_{v}} \nonumber \\
& = \norm{f_0}{L^{a}_{x,v}} + C(|V|)
\norm{ (1+ \norm{S(t)}{L^\infty}) \norm{\rho(t,x)}{L^{r}_{x}} \ }{L^{q'}_{t} } \label{sd2c}.
\end{align}
In the sequel we omit the constant part in the growth of the turning kernel for the sake of clarity. Assume
\begin{equation}
\label{sd7}
p > \frac{3}{2} .
\end{equation}
Then $p'<3$; therefore,
\begin{equation}\label{sd21}
\norm{S(t)}{L^\infty}\leq \norm{G * \rho(t)}{L^\infty} \leq \norm{G}{L^{p'}} \norm{\rho(t)}{L^{p}}
\leq C \norm{\rho(t)}{L^{p}} ,
\end{equation}
because $G(x) \sim \frac{C}{|x|}$ for small $|x|$, and $G(x)$ decays rapidly for large $|x|$.
Moreover, since $r\leq p$ we have by interpolation,
\begin{equation*}\label{sd22}
\norm{\rho(t)}{L^r} \leq \norm{\rho(t)}{L^1}^{1-\frac{p'}{r'}} \norm{\rho(t)}{L^p}^{\frac{p'}{r'}}
= M^{1-\frac{p'}{r'}} \norm{\rho(t)}{L^p}^{\frac{p'}{r'}} .
\end{equation*}
Therefore
\begin{align*}
\norm{\ \norm{S(t)}{L^\infty} \norm{\rho(t,x)}{L^{r}_{x}}\ }{L^{q'}_{t} }
& \leq C \norm{\ \norm{\rho(t)}{L^p} \ \norm{\rho(t)}{L^{p}}^{\frac{p'}{r'}} \ }{L^{q'}_{t}}\\
&= \norm{\ \norm{\rho(t)}{L^p} \ }{L^{q'\left(1 + \frac{p'}{r'} \right)}_{t}}^{ 1 + \frac{p'}{r'} }
\end{align*}
Now
\[
\norm{\rho(t)}{L^p}=\norm{f(t,x,v)}{L^{p}_{x} L^{1}_{v}} \leq C(|V|)
\norm{f(t,x,v)}{L^{p}_{x} L^{r}_{v}}
\]
therefore
\begin{equation*}
\norm{\ \norm{S(t)}{L^\infty} \norm{\rho(t,x)}{L^{r}_{x}}\ }{L^{q'}_{t} }
\leq C
\norm{f(t,x,v)}{L^{ q'\left(1 + \frac{p'}{r'} \right) }_{t} L^{p}_{x} L^{r}_{v}}^{ 1 + \frac{p'}{r'} }
\label{sd23e}.
\end{equation*}
Suppose that
\begin{equation}
\label{sd24}
q'\left(1 + \frac{p'}{r'} \right) = q .
\end{equation}
Then
\begin{equation}
\label{sd25}
\norm{\ \norm{S(t)}{L^\infty} \norm{\rho(t,x)}{L^{r}_{x}}\ }{L^{q'}_{t} }
\leq C \norm{f(t,x,v)}{L^{ q }_{t} L^{p}_{x} L^{r}_{v}}^{ 1 + \frac{p'}{r'} }
\end{equation}
and plugging this into \eqref{sd2c} we get
\begin{equation*}
\norm{f(t,x,v)}{L^{ q }_{t} L^{p}_{x} L^{r}_{v} } \leq \norm{f_0}{L^{a}_{x,v}} +
C \norm{f(t,x,v)}{L^{ q }_{t} L^{p}_{x} L^{r}_{v}}^{ 1 + \frac{p'}{r'} }
\end{equation*}
If $\norm{f_0}{L^{a}_{x,v}}$ is small enough then we can bootstrap.
\medskip
We need to verify that there exist $(q,p,r,a)$ satisfying \eqref{sd30}, \eqref{sd7} and \eqref{sd24}. There are many possible
choices. For example, if we want initial data $f_0 \in L^{a}_{x,v}$ with
$a=\frac{3}{2}$ (critical exponent in dimension 3) we must choose $p$ and $r$ so that
$\frac{1}{p}+\frac{1}{r}=\frac{4}{3}$.
The complete set of exponents solving these constraints is:
\[q=1+\sqrt{2}\ ,\ p=\frac{9+3\sqrt{2}}{7}\ ,\ r= 3 \left(\sqrt{2} -1 \right) \ ,\]
where all conditions are fulfilled.
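The stated exponents can be checked directly against the constraints \eqref{sd30}, \eqref{sd7} and \eqref{sd24}; the tolerances in the numerical verification below are our own choices.

```python
import math

q = 1.0 + math.sqrt(2.0)
p = (9.0 + 3.0 * math.sqrt(2.0)) / 7.0
r = 3.0 * (math.sqrt(2.0) - 1.0)
a = 2.0 * p * r / (p + r)

assert 1.0 <= r <= p                                # ordering condition
assert 0.0 <= 1.0 / r - 1.0 / p < 1.0 / 3.0         # admissibility window
assert 1.0 <= 1.0 / r + 1.0 / p                     # duality condition
assert abs(2.0 / q - 3.0 * (1.0 / r - 1.0 / p)) < 1e-12   # scaling relation
assert abs(a - 1.5) < 1e-12                         # a = 3/2, critical exponent
assert p > 1.5                                      # condition (sd7)
qp = q / (q - 1.0)   # q'
pp = p / (p - 1.0)   # p'
rp = r / (r - 1.0)   # r'
assert abs(qp * (1.0 + pp / rp) - q) < 1e-9         # condition (sd24)
```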
\end{proof}
\subsection{Sublinear $L^\infty$ growth}\label{sublinear}
To close this section we
give a quick sketch of the observation in \cite{CMPS} that the hypothesis
\[
0\leq T[S](t,x,v,v') \leq C\Big( 1 + \norm{S(t,\cdot)}{L^{\infty}}^{\alpha}\Big)\ ,
\]
implies global existence.
Fix $p$ and $q$ such that
\[ \frac{\alpha}3 + \frac1p = \frac1q \ , \quad p>\frac32\ . \]
Then we have the following elliptic estimate (see below),
\begin{equation}\label{ell}
\norm{S(t,\cdot)}{L^\infty}= \norm{G * \rho(t)}{L^\infty}
\leq C(M) \norm{\rho(t)}{L^p}^{p'/3}\ .
\end{equation}
Therefore (again omitting the constant contribution of the turning kernel)
\begin{equation*}
f(t,x,v)\leq
f_0(x-tv,v) + C \int_{0}^{t}\norm{\rho(s)}{L^p }^{\alpha} \rho(s,x-(t-s)v) ds .
\end{equation*}
Take the $L^p_xL^q_v$ norm and use the dispersion estimate with $\lambda = 3(1/q -1/p) = \alpha$ to get
\begin{align}
\norm{f(t)}{L^{p}_xL^q_v}
&\leq t^{-\alpha}\|f_0(x,v)\|_{L^{q}_{x}L^{p}_{v}} + |V|^{1/p} \int_{0}^{t} \frac1{(t-s)^\alpha} \norm{\rho(s)}{L^p }^{p' \alpha/3}
\norm{ \rho(s) }{L^{q}_{x}} ds \\
& \leq t^{-\alpha}\|f_0(x,v)\|_{L^{q}_{x}L^{p}_{v}} + C \int_{0}^{t} \frac1{(t-s)^\alpha} \norm{\rho(s)}{L^p }^{p' \alpha/3 + p'/q'}\ ds , \label{lto.5}
\end{align}
where $p' \alpha /3 + p'/q' = 1$ by definition.
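The exponent identity $p'\alpha/3 + p'/q' = 1$ follows from the definition $\alpha/3 + 1/p = 1/q$; a numerical spot check on a few admissible pairs $(p,\alpha)$ (sample values are ours):

```python
def sublinear_exponents(p, alpha):
    # q is defined by alpha/3 + 1/p = 1/q; returns (q, p'*alpha/3 + p'/q')
    q = 1.0 / (alpha / 3.0 + 1.0 / p)
    pp = p / (p - 1.0)
    return q, pp * alpha / 3.0 + pp * (1.0 - 1.0 / q)

for p, alpha in ((2.0, 0.5), (1.6, 0.9), (3.0, 0.2)):
    assert p > 1.5 and 0.0 < alpha < 1.0
    q, total = sublinear_exponents(p, alpha)
    assert abs(total - 1.0) < 1e-12
```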
To prove the elliptic estimate \eqref{ell} write
\[S\leq C \rho * \frac{\chi_{|x|\leq R}}{|x|} + C \rho * \frac{\chi_{|x|\geq R}}{|x|}\]
Then, if $p'<3$,
\begin{align*}
\|S\|_{L^\infty}& \leq C \|\rho\|_{L^p} \|\frac{\chi_{|x|\leq R}}{|x|}\|_{L^{p'}} + C
\|\rho\|_{L^1} \|\frac{\chi_{|x|\geq R}}{|x|}\|_{L^{\infty}}\\
&\leq C \Big( \|\rho\|_{L^p} R^{\frac{3}{p'}-1} + \|\rho\|_{L^1} R^{-1} \Big) \ .
\end{align*}
Choose $R$ so that $\|\rho\|_{L^p} R^{\frac{3}{p'}-1}=\|\rho\|_{L^1} R^{-1} $, {\em i.e.} choose
\[R=\left(\frac{\|\rho\|_{L^1}}{\|\rho\|_{L^p}}\right)^{p'/3} .\]
This gives
\[\|S\|_{L^\infty} \leq C \|\rho\|_{L^p}^{p'/3} \|\rho\|_{L^{1}}^{1-p'/3} = C M^{1-p'/3} \|\rho\|_{L^p}^{p'/3}\ . \]
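The optimization over $R$ can be illustrated numerically: balancing the two terms yields a value within a factor $2$ of the best possible choice of $R$. The sample values of $\norm{\rho}{L^p}$, $M$ and $p'$ below are arbitrary (with $p'<3$).

```python
def bound(R, rho_p, M, pprime):
    # two-term bound ||rho||_p R^{3/p'-1} + M R^{-1} (constants set to 1)
    return rho_p * R ** (3.0 / pprime - 1.0) + M / R

rho_p, M, pprime = 2.0, 5.0, 2.5   # sample values with p' < 3
R_star = (M / rho_p) ** (pprime / 3.0)
balanced = bound(R_star, rho_p, M, pprime)
# balancing the two terms gives 2 * M^(1 - p'/3) * ||rho||_p^(p'/3) ...
assert abs(balanced
           - 2.0 * M ** (1.0 - pprime / 3.0) * rho_p ** (pprime / 3.0)) < 1e-9
# ... which is within a factor 2 of the minimum over a log-spaced grid of R
best = min(bound(10.0 ** k, rho_p, M, pprime)
           for k in [i / 100.0 for i in range(-300, 301)])
assert best <= balanced <= 2.0 * best
```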
\section{Extension to internal dynamics}
\label{sec:internal}
Recall the kinetic model with internal dynamics:
\begin{subequations}\label{kinmodel_y}
\begin{align}
\partial_{t} p + v\cdot\nabla_{x} p + \nabla_y \cdot \Big( G(y,S) p \Big)= & \int_{ v'\in V} \lambda[y] K(v,v') p(t,x,v',y) dv'
\nonumber\\
& \qquad \qquad - \lambda[y] p(t,x,v,y) \ ,\\
\label{eq:mean field} -\Delta S + S= \rho\ , &
\end{align}
\end{subequations}
Assuming that $K$ is bounded we reduce to $K = 1/|V|$ without loss of generality.
This model takes into account the transport along characteristics of the internal cellular dynamics
\[ \frac{dy}{dt} = G(y,S(t,x))\ , \quad y\in {\mathbb {R}}^m\ . \]
For {\em E. coli}, the regulatory network described by $G$ consists essentially of six main proteins (the so-called Che proteins), and the main events are methylation and phosphorylation.
Indeed, in the absence of a chemoattractant (basal activity), the phosphorylated protein Che-Y is thought to diffuse inside the cell and to reach the flagellar motor complex, enhancing switching between CCW and CW rotation, that is, tumbling. This transduction pathway is in fact inhibited when the chemoattractant (say aspartate) binds a membrane receptor, triggering methylation of the membrane receptor complex and, eventually, inhibition of the tumbling process.
This network exhibits a remarkable excitation/adaptation behavior, which is crucial for cell migration. For the sake of simplicity, one deals in general with a system of two coupled ODEs which captures the same features. This system should be {\em excitable} with slow adaptation -- there is a single, stable equilibrium state, but a perturbation above a small threshold triggers a large excursion in the phase plane (see figure \ref{fig:FHN} ({\em left})) -- and possibly one-sided -- in the case of positive chemotaxis, the cells do not respond specifically to a decrease of the chemoattractant concentration \cite{BB74}. This characterization of dynamical systems is very well known in biological modeling, as it is the basis of the FitzHugh-Nagumo models \cite{Murray} for potential activity in axons. Furthermore, it is often associated with the phenomenon of pulse wave propagation ({\em e.g.} calcium waves) \cite{Keener}. In the context of cell migration, it is also involved in the aggregation process of the slime mold amoeba {\em D. discoideum}, where the chemoattractant cAMP is relayed by the cells \cite{Hofer,DolakSchmeiser}.
To be more concrete, the following set of equations is generally proposed \cite{ErbanOthmer04}
\begin{equation}
\left\{ \begin{array}{rll}
\dfrac {dy_1}{dt} = & \dfrac1{\tau_e} \big( h(S) - (y_1+y_2) \big) \quad & \mbox{(excitation)} \vspace{.2cm}\ , \\
\dfrac {dy_2}{dt} = & \dfrac1{\tau_a} \big( h(S) - y_2 \big) \quad & \mbox{(adaptation)} \ .
\end{array}
\right.
\end{equation}
Considered to be decoupled from the transport equation, these two internal quantities relax respectively to
\[ \lim_{t\to \infty} y_1 = 0 \ , \quad \lim_{t\to \infty} y_2 = h(S)\ , \] with a slow time scale associated to adaptation provided that $\tau_e\ll \tau_a$. However, this system cannot reproduce true excitability with a large gain factor for small perturbations because it is linear with respect to the variable $y$.
In a slightly different context (pulsatory cAMP waves), Dolak and Schmeiser considered an even simpler system \cite{DolakSchmeiser}, namely
\begin{equation}
\left\{ \begin{array}{rll}
y_1 = &\big( h(S) - y_2 \big)_+ \quad & \mbox{(excitation)} \ , \vspace{.2cm} \\
\dfrac {d y_2}{dt} = &\dfrac1{\tau_a} \big( h(S) - y_2 \big) \quad & \mbox{(adaptation)} \ .
\end{array}
\right.
\end{equation}
This particular choice does select responses to one-sided stimuli, but fails for true excitability. We suggest to consider the following phenomenological translated slow-fast, FHN type, system,
\begin{equation}
\left\{ \begin{array}{rll}
\dfrac {dy_1}{dt} = & \dfrac1{\tau_e} \big(h(S) - q(y_1) - y_2 \big) \quad & \mbox{(excitation)} \vspace{.2cm}\ , \\
\dfrac {dy_2}{dt} = & \dfrac1{\tau_a} \big( h(S) + y_1 - y_2 \big) \quad & \mbox{(adaptation)} \ ,
\end{array}
\right.
\end{equation}
where $q$ is a cubic function depicted in figure \ref{fig:FHN}.
\begin{figure}
\includegraphics[width = .48\linewidth]{FHNplus.ps} \,
\includegraphics[width = .48\linewidth]{FHNminus.ps}
\caption{A system of two coupled ODEs exhibiting short-time excitation and mid-time adaptation with one-sided selection. The picture is the same as for the FHN model. The perturbation of the equilibrium state is enhanced by a displacement of the basal line $y_2 = h(S_0+dS)$, translating the system up ({\em left}) or down ({\em right}). Here the cubic function is given by $q(u) = u(u-1)(u-0.2)$, the saturating ligand function is given by $h(S) = S/(1+S)$, the basal aspartate concentration is $S_0 = 0.4$ and the system reacts to perturbations $dS = 0.1$.}
\label{fig:FHN}
\end{figure}
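The excitation/adaptation behavior of the proposed system can be illustrated with a simple explicit Euler simulation using the functions from the caption of figure \ref{fig:FHN}. The time scales $\tau_e$, $\tau_a$, the time step, the final time and the thresholds below are our own illustrative choices, not values from the text; note that $(y_1,y_2)=(0,h(S))$ is the unique equilibrium, since $q(y_1)+y_1=0$ has $y_1=0$ as its only real root.

```python
def h(S):            # saturating ligand function from the caption
    return S / (1.0 + S)

def q(u):            # cubic nonlinearity from the caption
    return u * (u - 1.0) * (u - 0.2)

# Explicit Euler on the slow-fast system; tau_e << tau_a enforces the
# time-scale separation, dt resolves the fast scale
tau_e, tau_a, dt, T = 0.01, 1.0, 1e-3, 20.0
S0, dS = 0.4, 0.1
y1, y2 = 0.0, h(S0)          # start at the equilibrium for the basal level S0
S = S0 + dS                  # step perturbation applied at t = 0
peak = 0.0
for _ in range(int(T / dt)):
    dy1 = (h(S) - q(y1) - y2) / tau_e
    dy2 = (h(S) + y1 - y2) / tau_a
    y1 += dt * dy1
    y2 += dt * dy2
    peak = max(peak, y1)

assert peak > 0.7                     # large excursion (excitation)
assert abs(y1) < 0.05                 # back to rest (adaptation)
assert abs(y2 - h(S0 + dS)) < 0.05    # adapted to the new basal line
```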
\begin{proof}[Proof of Theorem \ref{the:internal}.]
Our next step is to prove global existence under the general and manageable assumptions stated in Theorem \ref{the:internal}. But let us begin with an important remark on the methodology.
\begin{remark}
To obtain {\em a priori} estimates, one possible strategy would be to handle \eqref{kinmodel_y} by the method of characteristics, as is done in \cite{ErbanHwang} in 1D. For this purpose, integrating the hyperbolic equation \eqref{kinmodel_y}
along the backward-in-time auxiliary problem
\[ \dot{X}(s) = v \ , \quad \dot{Y}(s) = G(Y,S(s,X)) \ ,\quad (X(t),Y(t)) = (x,y)\ , \] gives the estimate
\[ \dfrac{d}{ds} p(s,X(s),v,Y(s)) - \Big(\nabla_y\cdot G\Big) p \leq \lambda[Y] \mu(s,X,Y)\ .
\]
The difficulty arises at two levels here. First one has to control the $\nabla_y\cdot G$ contribution, and secondly one has to perform later on the change of variables $z = Y(y)$. This induces a Jacobian contribution $\left| \dfrac{\partial Y}{\partial y} \right|^{-1}$, and one has to control it too. In the sequel we avoid these two difficulties by working on averaged quantities.
\end{remark}
We use a partial representation formula of the solution based on the free transport operator $\partial_t p + v\cdot \nabla_x p$. First integrating the equation with respect to $y$ (the $y$-divergence term contributes zero), we obtain
\[ \partial_t f + v\cdot \nabla_x f + 0 \leq \frac1{|V|} \int_y \lambda[y] \mu(t,x,y)\ dy\ , \]
so that
\[ f(t,x,v) \leq f_0(x-tv,v) + \frac1{|V|} \int_{s=0}^t \int_y \lambda[y] \mu(s,x-(t-s)v,y)\ dy ds\ . \]
Using the $L^p_xL^1_v$ dispersion Lemma \ref{lem:dispersion} we get as usual
\begin{align}
\|\rho(t)\|_{L^p} &\leq \|f_0(x-tv,v)\|_{L^{p}_{x}L^{1}_{v}} + \frac1{|V|} \int_{s=0}^t \left\| \int_y \lambda[y] \mu(s,x-(t-s)v,y)\ dy \right \|_{L^p_xL^1_v} ds \nonumber\\
&\leq t^{-\lambda}\|f_0(x,v)\|_{L^{1}_{x}L^{p}_{v}} \nonumber
\\ &\qquad + |V|^{1/p-1} \int_{s=0}^t \frac1{(t-s)^\lambda} \iint_{x,y} \lambda[y] \mu(s,x,y)\ dx dy ds\ ,\label{eq:estimate rho(y)}
\end{align}
where $\lambda=3/p'$.
We now use the two growth assumptions on $\lambda$ and $G$: \[\lambda[y]\leq C(1+|y|)\ , \quad |G|(y,S)\leq C(1 + |y| + S^\alpha) \ ,\ 0\leq \alpha<1\ , \] to control the time growth of the average quantity $\iint_{x,y} |y| \mu(t-s,x,y)\ dx dy $.
\begin{remark} Note that in dimension $d=2$ we can handle any nonnegative $\alpha$. \end{remark}
We test the master equation \eqref{kinmodel_y} against $|y|$:
\[
\dfrac{d}{d s} \iint_{x,y} |y| \mu(s,x,y) \ dxdy + 0 + \iint_{x,y} |y| \nabla_y\cdot (G(y,S) \mu(s,x,y))\ dy dx = 0\ ,
\]
therefore, using $|G|\leq C(1+|y|+S^\alpha)$,
\begin{align}
\dfrac{d }{ds} \iint_{x,y} |y| \mu(s) \ dxdy & = \iint_{x,y} \frac{y}{|y|}\cdot G(y,S) \mu(s,x,y)\ dy dx\nonumber \\
& \leq \iint_{x,y} |G|(y,S) \mu(s,x,y)\ dy dx\nonumber \\
& \leq C + C \iint_{x,y} |y| \mu(s,x,y)\ dy dx + C \int_x |S(s,x)|^\alpha \rho(s,x)\ dx \ . \label{eq:preDuhamel}
\end{align}
\begin{remark}
If we agree to diminish $\alpha$ it is possible to deal with higher exponents in $\lambda[y]\leq C(1+|y|^\gamma)$. For instance we have by Young's inequality:
\begin{align*} \dfrac{d }{ds} \iint_{x,y} |y|^\gamma \mu(s) \ dxdy & \leq \gamma \iint_{x,y} |y|^{\gamma-1} |G|(y,S) \mu(s,x,y)\ dy dx \\ & \leq C \iint_{x,y} |y|^\gamma \mu(s,x,y)\ dy dx \\ & \quad + C\gamma \iint_{x,y} \left(\frac{\gamma - 1}\gamma |y|^\gamma + \frac 1\gamma |S(s,x)|^{\alpha\gamma}\right) \mu(s,x,y)\ dy dx \ .
\end{align*}
The same argument follows provided that $\alpha\gamma<1$. More general combinations of exponents could be considered. We have chosen here a simple framework for the sake of clarity.
\end{remark}
We can use the Duhamel formula to represent the inequality \eqref{eq:preDuhamel} as
\begin{multline*}
\iint_{x,y} |y| \mu(s,x,y) \ dxdy \leq Ce^{Cs} + e^{Cs} \iint_{x,y} |y| \mu_0(x,y)\ dx dy \\
+ C \int_{\tau = 0}^s e^{C(s-\tau)} \int_x |S(\tau,x)|^\alpha \rho(\tau,x)\ dx d\tau\ .
\end{multline*}
Plugging that into \eqref{eq:estimate rho(y)} gives
\begin{equation*}
\|\rho\|_{L^p} \leq C_0 (t) + C \int_{s=0}^t \frac1{(t-s)^\lambda} \int_{\tau = 0}^s e^{C(s-\tau)} \int_{x} |S(\tau,x)|^\alpha \rho(\tau,x)\ dx d\tau ds \ .
\end{equation*}
We choose $p<3/2$ so that $\lambda = 3/p'<1$. Since $\alpha < 1$ we have $3< \frac{3}{\alpha}$ and we can choose $p$ sufficiently close to $3/2$ so that $ 3 < p' < \frac{3}{\alpha}$.
Then
\[
\int S(t,x)^\alpha \rho(t,x) dx \leq \norm{S^\alpha}{L^{p'}} \norm{\rho}{L^{p}} = \norm{S}{L^{\alpha p'}}^{\alpha} \norm{\rho}{L^p} \ .
\]
From the mean field chemical equation \eqref{eq:mean field} $-\Delta S+S=\rho$ we deduce the following elliptic estimate.
We have $\alpha p' < 3$, and $S=G*\rho$
where $G(x) \sim \frac{C}{|x|}$ for small $|x|$ (short range) and $G(x)$ decreases exponentially fast for large $|x|$ (long range). Thus we obtain
\[
\norm{S}{L^{\alpha p'}} = \norm{G * \rho}{L^{\alpha p'}} \leq \norm{G}{L^{\alpha p'}} \norm{\rho}{L^1} \leq C M\ ,
\]
therefore
\[
\int S(t,x)^\alpha \rho(t,x) dx \leq C M^\alpha \norm{\rho}{L^p}\ .
\]
We obtain
\begin{align*}
\|\rho(t)\|_{L^p} &\leq C_0 (t) + C\int_{0}^{t} \frac1{(t-s)^\lambda} \int_{\tau = 0}^s e^{C(s-\tau)} \norm{\rho(\tau)}{L^p} d\tau ds \\
& \leq C_{0}(t) + C \int_{0}^{t} \norm{\rho(\tau)}{L^p} \int_{\tau}^{t} \frac{e^{C(s-\tau)}}{(t-s)^\lambda} ds d\tau\ .
\end{align*}
Using the boundedness of $\int_{s = \tau}^t \frac1{(t-s)^\lambda} e^{C(s-\tau)}\ ds$ with respect to $\tau$, we conclude thanks to a Gronwall estimate.
\end{proof}
\section{Introduction}
According to the Centers for Disease Control and Prevention \citep{cdc2018}, as of 2016, approximately 36.9 million people have HIV worldwide, with 1.8 million new cases reported in 2017 alone. While the rate of AIDS-related deaths and reports of new HIV cases have decreased since the late 1990s, there has been an increase in HIV diagnoses amongst people who inject drugs (PWID), co-occurring with the rise in opioid and fentanyl use in the United States \citep{fentanyluse2018, frieden2015, burnett2018}. PWID are an important subpopulation for interventions because they have a higher risk of HIV infection, have higher rates of comorbid diseases and mortality once infected, and have worse health outcomes even when treated with antiretroviral therapy (ART) \citep{altice2010}.
Researchers have evaluated social context and its role in perpetuating or reducing engagement in HIV risk behavior to better deliver HIV interventions among PWID \citep{druginjection2020, friedman2017, latkin1995, unger2006, altice2010}. Two of these observational network studies, the Social Risk Factors and HIV Risk (SFHR) Study and the Transmission Reduction Intervention Project (TRIP), were conducted with participants who engaged in HIV risk behaviors with others. The SFHR Study included participants who had injected drugs within the past year and lived in New York City or the surrounding areas between 1991 and 1993 \citep{friedman2002}. Participants in TRIP were injection drug users and their contacts who lived in Athens, Greece between 2013 and 2015 \citep{nikolopoulos2016}. \cite{dombrowski2013} conducted a study in the SFHR network using exponential random graph models (ERGMs) and examined the influence of transitive closure (i.e., the tendency for two people who are connected to someone in common to then connect with each other) and homophily effects (i.e., the tendency of individuals to associate and form connections with others who are similar) for race/ethnicity, age, gender, and number of risk partners on having ties in a network. They found that transitive closure was more important than race/ethnicity, age, gender, and number of injection partners in determining connections in the network \citep{dombrowski2013}; however, they did not discuss other attributes such as education, employment, and living situation. These individual socioeconomic attributes may influence social network formation \citep{nodalattribute2014}, e.g., the position of the individual in the network. Including both demographic and socioeconomic attributes in the model could improve our understanding of how likely PWID are to engage in HIV risk behaviors with one another.
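The ERGM terms discussed above (transitive closure, homophily) can be made concrete: in an ERGM, the conditional log-odds of a tie is a linear combination of "change statistics". The pure-Python sketch below uses an invented four-node network, invented attribute values, and made-up coefficients purely for illustration; it is not fitted to SFHR or TRIP data.

```python
import math

def change_stats(adj, attr, i, j):
    # Change statistics for adding edge (i, j): one edge, one homophilous
    # edge if the attributes match, and one triangle per common neighbor
    edges = 1.0
    homophily = 1.0 if attr[i] == attr[j] else 0.0
    triangles = float(len(adj[i] & adj[j]))
    return edges, homophily, triangles

def tie_log_odds(theta, adj, attr, i, j):
    # logit P(y_ij = 1 | rest of the network) = theta . delta(y_ij)
    return sum(t * d for t, d in zip(theta, change_stats(adj, attr, i, j)))

# Toy network: nodes 0-3; edges 0-1 and 0-2 present; attributes are age groups
adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
attr = {0: "18-29", 1: "18-29", 2: "30-44", 3: "30-44"}
theta = (-2.0, 0.8, 1.2)   # illustrative coefficients, not fitted values

# Tie 1-2 closes a triangle through node 0 but is not homophilous
assert abs(tie_log_odds(theta, adj, attr, 1, 2) - (-2.0 + 1.2)) < 1e-12
# Tie 2-3 is homophilous but closes no triangle
assert abs(tie_log_odds(theta, adj, attr, 2, 3) - (-2.0 + 0.8)) < 1e-12
# Converting log-odds to a tie probability
p = 1.0 / (1.0 + math.exp(-tie_log_odds(theta, adj, attr, 1, 2)))
assert 0.0 < p < 1.0
```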
Although previous studies have identified individual-level risk factors, such as housing instability and race/ethnicity, for engagement in HIV risk behaviors, few studies have evaluated the role of these social and economic risk factors from a network perspective. In this study, we applied ERGMs to data from both the SFHR Study and TRIP using demographic and socioeconomic individual attributes. We examined network- and individual-level attributes possibly associated with the likelihood of individuals sharing an edge between them, defined by engaging in HIV risk behavior together. People are connected if they engage in HIV risk behaviors together, including engaging in unprotected sex, sharing needles or syringes for injection drug use, using drugs around others who also share needles, having sex around others, or being seen interacting with study participants.
Most network analyses ignore the problem of missing data by including only individuals with complete attribute data and assuming a fully observed network. Only a limited number of studies analyze the effects of missing data with known or unknown missingness mechanisms. This study explores the effects of missingness at the individual and network levels, such as missing attributes and unreported risk connections, on the likelihood of individuals engaging in HIV risk behaviors among PWID. We analyzed SFHR and TRIP using both the fully observed network and only the largest connected component, in combination with imputation methods for missing individual attributes. We consider two imputation methods, namely propensity score matching and missForest imputation. (refs) The findings of this study can be used to understand where and how to target interventions to most effectively decrease engagement in shared HIV risk behaviors among networks of PWID without excluding individuals with only partial information.
The rest of the paper is organized as follows. We begin by introducing the Network-based studies, SFHR and TRIP, and the statistical methods in Section 2. The analysis results of both network studies will be discussed in Section 3. Section 4 concludes with a discussion.
\section{Methods}
\subsection{Network-based Studies}
Participants in the Social Risk Factors and HIV Risk (SFHR) study lived in New York City and the surrounding areas (e.g., New Jersey, New York State, and Connecticut) between 1991 and 1993. Some were recruited by study staff who engaged in ethnographic work where they spent significant amounts of time with people living and using drugs in the neighborhood, while others were brought in by friends or had been given a coupon to participate. Participants were all 18 years of age or older and had injected drugs within the last year \citep{friedman2002}. They completed interviews with study staff and could choose to have their blood drawn and tested for various diseases, including HIV and Hepatitis B. They provided descriptions of their contacts (people with whom they had sex and/or used drugs, not necessarily in a risky manner). Study staff located those contacts within the neighborhood and asked them to confirm that they knew the participant who had nominated them as a contact. In addition to staff prompted confirmation, some participants also recruited their contacts and brought them to the storefront for confirmation. Researchers also collected data on demographic information (e.g., age, race, ethnicity, sex, employment status), health attitudes and beliefs, and health-seeking behavior. As a result, 767 people enrolled in the study, with 516 shared connections between them \citep{friedman2002}.
Participants in the Transmission Reduction Intervention Project (TRIP) study were people who inject drugs and their contacts who lived in Athens, Greece, between 2013 and 2015. Those who were initially recruited into the study were injection drug users referred to the study by HIV testing centers in Athens. These participants were initially recruited by ARISTOTLE, a program that followed a respondent-driven sampling (RDS) design and sought to contact trace PWID and enroll them in HIV care \citep{aristotle}. For individuals initially recruited into the study, it was determined whether they were recently infected with HIV (within the last 6 months) or were considered long-term HIV infected (more than 6 months prior) \citep{nikolopoulos2016}. Researchers then used two-wave contact tracing for each initial recruit, asking them about their sexual and drug use partners. Once those nominated participants were contacted and agreed to participate, they had their HIV status ascertained. Those who were HIV positive were tested using the Limited Antigen Avidity (LAg) assay to determine if they were recently infected with HIV ($<$ 130 days since infection) or long-term infected ($>$ 130 days since infection) \citep{nikolopoulos2016}. It is important to differentiate between short-term and long-term infected because those who are recently infected are more contagious and likely not to have many symptoms while in their most acutely infectious period \citep{selik2018, volz2013}. Both those who were recently infected and those who were long-term infected were asked to provide contact information about their sexual and drug use partners. Many of those individuals were recruited into the study \citep{nikolopoulos2016}.
In TRIP, participants completed computer-assisted and paper interviews, and also had their HIV status ascertained. If HIV status was positive, the recency of infection was determined with the LAg test. Participants provided demographic information and their contacts' information, and answered questions about engagement in risk behaviors, their HIV status, substance use, access to care, HIV knowledge, stigma, injection norms, and their opinions on the project. Follow-up interviews were conducted with participants about six months after they completed their baseline interview. Data were collected over two years of follow-up; 356 people enrolled in the study, sharing 542 connections \citep{nikolopoulos2016}.
\subsection{Statistical Methods}
This study used existing, de-identified data from two observational network-based studies, namely, SFHR and TRIP. Each node in the network represents a participant in the study. If the participants engage in HIV risk behavior together, then there exists an edge between the corresponding nodes. Descriptive statistics for the participants' attributes are reported for each study network in Tables \ref{tab:SHFRdescriptivestat}, \ref{tab:TRIPdescriptivestat}, and \ref{tab:network}. The primary attributes in SFHR were age, sex, race/ethnicity, education level, employment, living situation, and marital status. These variables were chosen because they could be important for the delivery of interventions and are easy to ascertain \citep{ware1981}. The primary attributes of interest in the TRIP analyses were age, sex, nationality, education level, employment, and living situation. Sexual identity was not included as 97\% of participants identified as heterosexual. The detailed description of individual attributes and data pre-processing are described in Appendix A.
\begin{table}
\centering
\caption{Descriptive statistics of SFHR attribute variables. These include counts and frequencies for each variable in both the full network, which includes all nodes, and the largest connected component (LCC), as well as means and standard deviations for age, a continuous variable.}
\begin{tabular}{llrrrr}
\toprule
& & \multicolumn{2}{c}{Full Network } & \multicolumn{2}{c}{LCC} \\
& & \multicolumn{2}{c}{N=767} & \multicolumn{2}{c}{N=277} \\
& & {N} & {\%} & \multicolumn{1}{c}{N} & \multicolumn{1}{c}{ \%} \\
\midrule
Sex & Male & 541 & 71 & 195 & 70 \\
\midrule
\multirow{4}[2]{*}{Race} & African American & 206 & 27 & 81 & 29 \\
          & Latinx & 311 & 41 & 87 & 31 \\
& White & 243 & 32 & 104 & 38 \\
& Other & 7 & 1 & 5 & 2 \\
\midrule
\multirow{2}[2]{*}{Education} & {Less than high school} & 472 & 61 & 176 & 64 \\
& {High school or more} & 295 & 39 & 101 & 36 \\
\midrule
\multirow{2}[2]{*}{Employment} & Employed & 78 & 10 & 24 & 9 \\
& Unemployed & 689 & 90 & 253 & 91 \\
\midrule
\multicolumn{1}{l}{\multirow{4}[2]{*}{\shortstack{Living\\Situation}}} & In own place & 244 & 32 & 76 & 27 \\
& Someone else's place & 370 & 48 & 132 & 48 \\
& Homeless & 137 & 18 & 62 & 22 \\
& Missing & 16 & 2 & 7 & 3 \\
\midrule
\multicolumn{1}{l}{\multirow{3}[1]{*}{\shortstack{Marital\\Status}}} & Single & 400 & 51 & 136 & 49 \\
& Married & 167 & 22 & 53 & 19 \\
& Divorced & 200 & 26 & 88 & 32 \\
\midrule
Age & Mean and SD & 35.25 & 6.97 & 35.19 & 6.63 \\
\bottomrule
\end{tabular}%
\label{tab:SHFRdescriptivestat}%
\end{table}%
\begin{table}
\centering
\caption{Descriptive statistics of TRIP attribute variables. These include counts and frequencies for each variable in both the full network, which includes all nodes, and the largest connected component (LCC), as well as means and standard deviations for age, a continuous variable.}
\begin{tabular}{llrrrr}
\toprule
& & \multicolumn{2}{c}{Full Network } & \multicolumn{2}{c}{LCC} \\
& & \multicolumn{2}{c}{N=356} & \multicolumn{2}{c}{N=241} \\
& & \multicolumn{1}{c}{N} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{N} & \multicolumn{1}{c}{ \%} \\
\midrule
Sex & Male & 281 & 79 & 191 & 79 \\
\midrule
Nationality & Greek & 323 & 91 & 212 & 88 \\
\midrule
\multirow{3}[2]{*}{Education} & Primary School & 113 & 32 & 74 & 31 \\
& High School & 196 & 55 & 135 & 56 \\
& Post High School & 47 & 13 & 32 & 13 \\
\midrule
\multirow{4}[2]{*}{Employment} & Employed & 61 & 17 & 38 & 16 \\
& Unemployed: looking for work & 89 & 25 & 51 & 21 \\
& Can't work, health reasons & 161 & 45 & 117 & 49 \\
& Other & 45 & 13 & 35 & 14 \\
\midrule
\multicolumn{1}{l}{\multirow{4}[2]{*}{\shortstack{Living\\Situation}}} & Paying rent & 75 & 21 & 39 & 16 \\
& Not paying rent & 193 & 54 & 124 & 51 \\
& Homeless & 80 & 23 & 71 & 29 \\
& Missing & 8 & 2 & 7 & 4 \\
\midrule
Age & (Mean and SD) & 35.87 & 8.39 & 35.99 & 8.54 \\
\bottomrule
\end{tabular}%
\label{tab:TRIPdescriptivestat}%
\end{table}%
\begin{table}
\begin{center}
\footnotesize
\caption{Network descriptive statistics of SFHR (left) and TRIP (right), calculated for both the full network and the largest connected component (LCC).}
\begin{tabular}{lrrrr}
& \multicolumn{2}{c}{SFHR} & \multicolumn{2}{c}{TRIP} \\
& \multicolumn{1}{c}{Full Network} & \multicolumn{1}{c}{LCC} & \multicolumn{1}{c}{Full Network} & \multicolumn{1}{c}{LCC} \\
\midrule
Node Count & 767 & 277 & 356 & 241 \\
Edge Count & 516 & 380 & 542 & 502 \\
Assortativity & 0.1 & -0.0004 & 0.2 & 0.15 \\
Transitivity & 0.12 & 0.11 & 0.24 & 0.23 \\
Average degree (SD) & 1.35 (2.25) & 2.74 (3.17) & 3.04 (3.46) & 4.17 (3.6) \\
Average betweenness centrality & 0.0008 & 0.017 & 0.005 & 0.016 \\
Density & 0.002 & 0.01 & 0.009 & 0.017 \\
\bottomrule
\end{tabular}%
\label{tab:network}%
\end{center}
{\footnotesize \flushleft Note: transitivity, average betweenness centrality, and density range from 0 (low) to 1 (high); average degree ranges from 0 to the number of people in a network minus 1; and assortativity ranges from $-1$ to $+1$, indicating negatively related to positively related.}
\end{table}%
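The network-level measures reported in the table above can be computed directly from an edge list. The following is a minimal, stdlib-only Python sketch; the toy edge list is illustrative, not the SFHR or TRIP data:

```python
from itertools import combinations

def network_stats(n_nodes, edges):
    """Density, average degree, and transitivity (global clustering
    coefficient) of an undirected network given as an edge list."""
    adj = {i: set() for i in range(n_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)
    density = 2 * m / (n_nodes * (n_nodes - 1))
    avg_degree = 2 * m / n_nodes
    # Transitivity = (closed connected triples) / (all connected triples).
    closed = triples = 0
    for nbrs in adj.values():
        k = len(nbrs)
        triples += k * (k - 1) // 2
        closed += sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    transitivity = closed / triples if triples else 0.0
    return density, avg_degree, transitivity

# Toy example: a triangle (0, 1, 2) with a pendant node 3.
density, avg_degree, transitivity = network_stats(
    4, [(0, 1), (1, 2), (0, 2), (0, 3)]
)
```

In practice these quantities would be obtained from a network analysis library, but the definitions above match the statistics reported in the table.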
ERGMs were used to determine which network and individual attributes were associated with individuals possibly engaging in HIV risk behavior with others in both the SFHR and TRIP networks. The individual attributes examined in the current study, including sex, employment status, age, education level, living situation, race/ethnicity, marital status, and nationality, were entered into the ERGMs. Each nodal attribute variable was included using three different terms: node match, node factor, and node mix, fit in three separate models for each study network. The first model included node match effects (the odds of people being connected based on a shared attribute); the second included node factor effects (comparing the odds of people being connected across levels of an attribute variable); and the third included node mix effects (comparing the odds of people being connected within and between levels of an attribute variable). These were modeled separately because the effects of every term for each attribute could not be estimated when they were included together in a single model. After the node match, node factor, and node mix models were run separately, final models for each study were created containing a combination of those terms, based on which terms had been statistically significant in the previous models. First, univariate models were developed that included a term for each attribute separately. If a term was significant at the $p < 0.2$ level in the univariate models, it was included in the adjusted models. The entire network sample (full network) and the largest connected component were modeled separately, and the parameter estimates were compared to see whether the results differed. The full network model included all participants, some of whom were isolates (they did not share any edges with others in the network). 
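The odds ratios reported in the Results are obtained from ERGM coefficients, which are on the log-odds scale: the odds ratio is $\exp(\hat\beta)$ and a 95\% Wald confidence interval is $\exp(\hat\beta \pm 1.96\,\mathrm{SE})$. A small sketch of this conversion, using hypothetical coefficient values rather than estimates from these data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert an ERGM log-odds coefficient and its standard error
    into an odds ratio with a Wald confidence interval."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient and standard error, for illustration only.
or_est, (ci_lo, ci_hi) = odds_ratio_ci(0.365, 0.10)
```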
The evaluation of the goodness-of-fit of all the models indicated no evidence of a lack of fit.
We also incorporated network structure in the ERGMs. The only network structure term that could consistently be estimated across all scenarios was edges. The only additional network term that could be included in some models was the geometrically-weighted degree term, and it could only be estimated in models of the largest connected component of the SFHR data set. Geometrically-weighted degree can be thought of as a measure of ``popularity'': it estimates the likelihood of a person having a tie in the network based on their degree, which is the number of connections (edges) they have in the network, and the degrees of others \citep{hunter2008}.
Missing attribute information for the nodes is common in network-based studies, particularly among vulnerable populations. In the presence of missing nodal attribute data, techniques to address missing data in the analysis are needed, as ERGMs do not converge when missing data are included in the model. In network studies, excluding a person with missing data could significantly change network structure by removing their ties with other people in the network \citep{gile2006}. To address missing nodal attribute data, we used two imputation methods: propensity score matching and multiple imputation using missForest. Propensity score matching requires the propensity score model to be correctly specified \citep{dagostino2000, little2002}. The missForest non-parametric technique uses observed information to predict missing values and does not require any modeling assumptions \citep{stekhoven2012}. For both methods, data were imputed using the same set of attributes that were candidates for the ERGMs. For the TRIP data, missing values of the variable describing where a person was currently living were imputed based on sex, employment status, nationality, education, and age; for the SFHR data, the same variable was imputed based on sex, race/ethnicity, employment status, marital status, education, and age. This secondary data analysis study was reviewed and approved by the Institutional Review Board (IRB) at the University of Rhode Island.
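As a far simpler stand-in for the propensity-score and missForest procedures described above, the sketch below fills a missing categorical attribute with the modal observed value among participants who match on a set of conditioning attributes, falling back to the overall mode. The attribute names and records are hypothetical:

```python
from collections import Counter

def impute_mode(records, target, keys):
    """Fill missing `target` values with the modal observed value among
    records that match on `keys`; fall back to the overall mode."""
    overall = Counter(r[target] for r in records if r[target] is not None)
    groups = {}
    for r in records:
        if r[target] is not None:
            key = tuple(r[k] for k in keys)
            groups.setdefault(key, Counter())[r[target]] += 1
    filled = []
    for r in records:
        r = dict(r)  # copy, so the input records are left untouched
        if r[target] is None:
            g = groups.get(tuple(r[k] for k in keys))
            r[target] = (g or overall).most_common(1)[0][0]
        filled.append(r)
    return filled

# Hypothetical records: impute 'living' conditioning on sex and employment.
records = [
    {"sex": "M", "emp": "U", "living": "homeless"},
    {"sex": "M", "emp": "U", "living": "homeless"},
    {"sex": "M", "emp": "U", "living": None},
    {"sex": "F", "emp": "E", "living": "own place"},
]
filled = impute_mode(records, "living", ("sex", "emp"))
```

Unlike missForest, this sketch uses no iterative model fitting; it is meant only to illustrate the shape of the imputation step.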
\section{Results}
The full networks and largest connected components (LCC) of SFHR and TRIP were analyzed using complete cases and two missing data imputation techniques, propensity score matching and missForest. The results across the full networks and LCCs using complete cases and both imputation methods are comparable. In this section, we focus on the results of the final models for the full SFHR and TRIP networks using complete cases. The model results using complete cases can be found in Appendix B.
\subsection{SFHR}
The final model using complete cases for the SFHR full network included terms for node mixing based on living situation and race/ethnicity, homophily terms for sex and marital status, and a node factor term for educational attainment (Table B4). Those who lived in their own place were 65\% less likely to be connected to others who lived in their own place (odds ratio (OR) = 0.35, 95\% confidence interval (CI) = (0.23, 0.52)), 73\% less likely to connect with those who lived in someone else's place (OR = 0.27, 95\% CI = (0.23, 0.52)), and, similarly, 73\% less likely to be connected to those who were homeless (OR = 0.27, 95\% CI = (0.18, 0.41)), compared to ties between two people who were homeless. Those who lived in someone else's place were 67\% less likely to be connected to others who lived in someone else's place (OR = 0.33, 95\% CI = (0.23, 0.46)) and 60\% less likely to be connected to those who were homeless (OR = 0.40, 95\% CI = (0.28, 0.56)), compared to ties between two people who were homeless.
We also observed homophily by race, sex, and marital status. African Americans were 1.44 times (95\% CI = (1.11, 1.87)) more likely to be connected to other African Americans, compared to the likelihood of a connection between two White people in the network. Males had 1.27 times (95\% CI = (1.05, 1.54)) the odds of connecting with other males. Females had 1.55 times (95\% CI = (1.13, 2.11)) the odds of connecting with other females. Single people had 1.44 times (95\% CI = (1.19, 1.76)) the odds of being connected to other single people. Lastly, people with a high school education or more were 23\% less likely to have ties in the network, compared to those with less than high school education.
The two ERGM models using the geometrically-weighted degree term were the node match model and the node factor model (Table B5). The node match models with and without the geometrically-weighted degree term were similar; however, being in a relationship was statistically significant in this model: those who were in a relationship had 1.72 times (95\% CI = (1.03, 2.86)) the odds of being connected with others who were also in relationships. The node factor models with and without the geometrically-weighted degree term were again similar; however, in the model with the geometrically-weighted degree term, having a high school education or more was no longer statistically significant. The geometrically-weighted degree term was statistically significant in both the node match and node factor models, indicating a potential popularity effect, according to which people were more likely to form ties with higher-degree people in the network.
\subsection{TRIP}
The final model for the TRIP full network using complete cases included node mixing terms based on living situation, sex and nationality, and node factor terms based on educational attainment and employment status (Table B9). Male to male (OR = 0.61, 95\% CI = (0.43, 0.86)) and male to female (OR = 0.59, 95\% CI = (0.41, 0.84)) connections were significantly less likely to occur, compared to female to female connections. Connections between those who were Greek and those who were not (OR = 0.34, 95\% CI = (0.20, 0.56)) and connections between two Greek people (OR = 0.43, 95\% CI = (0.26, 0.72)) were also less likely to occur, compared to connections between two non-Greek people. Those who paid rent were less likely to be connected to others who paid rent (OR = 0.21, 95\% CI = (0.12, 0.36)), those who did not pay rent (OR = 0.18, 95\% CI = (0.13, 0.25)), and those who were homeless (OR = 0.23, 95\% CI = (0.16, 0.34)), compared to connections between two people who were homeless. Those who did not pay rent were less likely to be connected to those who did not pay rent (OR = 0.25, 95\% CI = (0.19, 0.34)), and those who were homeless (OR = 0.32, 95\% CI = (0.25, 0.42)), compared to connections between two people who were homeless. People with post high school education were 1.35 times (95\% CI = (1.12, 1.63)) more likely to have ties in the network, compared to those with less than a high school education. People who could not work for health reasons were 1.34 times (95\% CI = (1.08, 1.66)) more likely to have ties in the network, compared to those who were employed. People who marked "other" as their employment status had 1.31 times (95\% CI = (1.02, 1.70)) the odds of having connections in the network, compared to those who were employed.
\section{Discussion}
In this paper, we analyzed the network and individual attributes associated with the likelihood of people engaging in HIV risk behaviors with each other among PWID in two network-based studies, SFHR and TRIP. The people who consistently had the highest odds of being connected within both networks across all models were those experiencing housing instability: they were more likely to have network connections and more likely to be connected with one another. Although they are often challenging to reach and to keep engaged with public health interventions, these individuals represent a subpopulation with the potential to benefit from interventions that leverage network connections, owing to their positioning in the observed networks. Interventions for PWID who experience homelessness could include establishing safe injection sites or delivering harm reduction interventions through outreach at places they frequent or at community centers, which could decrease risky injection drug use and overdose deaths. People who experience housing instability are a particularly vulnerable subpopulation and often lack access to medical care, treatment services, and other supports, so interventions could be made more readily accessible to them through both outreach efforts and from within their communities using intraventions \citep{intravention2004}.
The results suggest that study participants tend to engage in potentially risky behavior with others who are similar to them in sex and in race/ethnicity or nationality. Delivering interventions to those on the periphery of the network, such as those who are employed and live in their own place or pay rent, could be challenging because peer influence and reinforcement may have less of an impact. Considering the attributes of those engaging in risk behavior with each other could better inform the development of interventions. For example, when an intervention is delivered by recruiting individuals and expecting them to share information about the intervention and recruit others they know, researchers must ensure that they are intervening in diverse groups of individuals. Interventions aimed at HIV prevention among PWID should focus on increasing accessibility to as many people as possible. Network analysis is one approach to identifying individuals and groups that might be good candidates for interventions, particularly when the delivery of an intervention has a social component.
The study has potential limitations. The missing data imputation techniques, propensity score matching and missForest, assume independent observations. In future work, incorporating network structure variables into missing attribute data imputation techniques is important to align the methodology with the observed data structure. With other data sources, ERGMs may be able to include additional network structure variables to examine whether there are structural effects on engagement in potentially risky behavior. Conducting new longitudinal studies among PWID would also be informative for understanding how behaviors and network structures have changed over time: recent years have seen increases in heroin use and prescription opioid use, specifically fentanyl, coinciding with a sharp rise in opioid overdoses. Developing intervention strategies based on information from such longitudinal studies among PWID could help mitigate opioid overdose.
\section*{Acknowledgement}
These findings are presented on behalf of the Social Risk Factors and HIV Risk Study (SFHR) and the Transmission Reduction Intervention Project (TRIP). We would like to thank all of the SFHR and TRIP investigators, data management teams, and participants who contributed to this project. The project described was supported by grant 1DP2DA046856-01 from the Avenir Award Program for Research on Substance Abuse and HIV/AIDS (DP2) of the National Institute on Drug Abuse of the National Institutes of Health, and by grant P30 DA011041 (Ending HIV/AIDS among People Who Use Drugs: Overcoming Challenges) from the Center for Drug Use and HIV Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
\section{\label{sec:Intro}Introduction}
The methyl radical, \ce{CH3}, is one of the smallest organic molecules and, owing to the presence of an unpaired electron, it is among the most important reaction intermediates in combustion and atmospheric chemistry \cite{Baulch:2005}. Therefore, detailed spectroscopic and reaction dynamics studies of this molecular radical have traditionally been of paramount interest to researchers \cite{Hirota1985, McFadden1972, Whitehead1996}. However, since radicals have to be produced \textit{in situ}, such studies are technically challenging owing to low signal intensities. Due to its planarity and the lack of a permanent electric dipole moment, the \ce{CH3} molecule does not show a pure rotational spectrum. In addition to that, there are no spectroscopic transitions in the visible range of the electromagnetic spectrum, and predissociation prevents the acquisition of well-resolved molecular spectra in the ultraviolet regime. Spectroscopic studies have thus been limited to rovibrational spectroscopy in the mid- and near-infrared region of the spectrum, as described in Refs. \cite{Hirota1985}, \cite{Davis:1997} and references therein. It is therefore desirable to produce cold and intense beams of \ce{CH3} in order to achieve good signal quality and long interrogation times with the sample. Using Zeeman deceleration, the production of translationally cold supersonic beams of \ce{CH3} has been demonstrated by some of us \cite{Momose2013}. Recently, we have also attained stationary samples of \ce{CH3} using magnetic trapping after Zeeman deceleration \cite{Momose:2017}.
A pulsed supersonic nozzle expansion, in which internal energy is converted into directed motion by the expansion of a highly pressurized gas into vacuum, is the most common approach to produce dense and internally cold molecular beams \cite{Miller1988, Morse1996}. Since radical production requires the dissociation of molecular bonds, external energy is applied during the expansion, which may lead to rotational and vibrational excitation of the molecules. A large number of techniques have been developed for the production of \ce{CH3} molecular beams, including pyrolysis \cite{Digiuseppe1982, Robinson1988, Whitehead1996}, photodissociation \cite{Zahedi1994, Holt1984} and discharge sources \cite{Ishiguro1996, Davis:1997}. However, only the pulsed discharge sources have been shown to yield cold molecular beams with rotational temperatures of around \mbox{25 K} \cite{Davis:1997, Ishiguro1996}. Ishiguro et al. used a supersonic jet expansion combined with a discharge modulation technique \cite{Ishiguro1996}, and Davis and co-workers have developed a shaped plate discharge source (slit discharge) to produce cold radical beams \cite{Anderson:1996, Davis:1997}.
In this article, we present the comparison of a plate discharge and a dielectric barrier discharge (DBD) source in combination with a home-built pulsed valve for the generation of rotationally cold \ce{CH3} radical beams. Both discharge techniques are widely used in industrial applications \cite{Kogelschatz:2003} and they have already been used for the production of molecular radicals and electronically excited atoms and molecules in supersonic jets \cite{Lewandowski:2004, Raunhardt:2008, Luria:2009, Ploenes:2016, vanBeek2001}. The two methods differ in terms of the underlying discharge mechanism and in terms of their technical complexity. The realization of a DBD is technically more demanding compared to the set-up of a plate discharge. A simple and robust two-plate electrode scheme can be used to ignite a DC discharge at relatively low voltages and at intermediate current strengths in the glow regime \cite{Roth:1995}. In this regime, the nozzle does not suffer from sputtering, and high electron excitation energies, which would lead to a heating of the gas pulse, are avoided. For the operation of a DBD, several AC high voltage pulses are applied to the electrodes, which are shielded from their surroundings using a dielectric material. This mode of operation ensures that the discharge current is kept at very low values \cite{Kogelschatz:2003, Luria:2009}. A DBD source has been shown to generate very cold and intense supersonic beams \cite{Luria:2009, Ploenes:2016}. Since the discharge is initiated in filaments, which are uniformly spread over a large surface, the formation of highly energetic species through arcing is prevented. It is therefore of particular interest to investigate whether a DBD can also be used as an efficient \ce{CH3} radical source.
In this paper, we give a detailed characterization and a direct comparison of both discharge sources in terms of \ce{CH3} radical intensities, beam velocity and rotational state distributions. We have also used two different precursor species, methane (\ce{CH4}) and di-tert-butyl peroxide (\ce{[(CH3)3CO]2}, DTBP), for radical generation. The relative efficiencies of both precursors in terms of \ce{CH3} radical production are discussed here as well.
\section{\label{sec:Setup}Experimental setup}
The experimental setup, which is schematically depicted in Fig.~\ref{fig:Setup} for the plate discharge source, consists of two differentially pumped vacuum chambers separated by a skimmer (2 mm diameter). The source chamber hosts a room-temperature, pulsed valve built at the Canadian Center for Research on Ultra-Cold Systems (hereafter referred to as the \textit{CRUCS valve}) which is optimized for the generation of short, variable pulse duration (25 - 100 $\mu$s) and intense gas pulses. The valve opening duration was set to 50 $\mu$s for all the experiments described in this paper, which generated gas pulses with a total duration of 90 - 100 $\mu$s at the discharge point. The valve design is conceptually similar to the commercially available Even-Lavie valve \cite{Even:2015}, i.e., a short current pulse generates a magnetic field inside a coil, which, in turn, lifts a magnetic plunger that admits gas from a high-pressure reservoir into the vacuum chamber. The nozzle is conically shaped at a 40$^{\circ}$ opening angle, and the valve orifice has a 250 $\mu$m diameter.
\begin{figure}[ht!]
\includegraphics[width=8.5cm]{Fig1.eps}
\caption{\label{fig:Setup} Schematic illustration of the experimental setup used for the characterization of the plate discharge source.}
\end{figure}
Methane (\ce{CH4}, Praxair, 99 \% purity) or di-tert-butyl peroxide (\ce{[(CH3)3CO]2}, DTBP, Sigma Aldrich, 98 \% purity) is used as the radical precursor; neither is further purified. In the case of \ce{CH4}, the radical precursor species is supersonically expanded from a 30 \% \ce{CH4}-\ce{Kr} premix in a gas cylinder at a 5 bar stagnation pressure. The precursor-noble-gas mixing ratio was initially optimized for the generation of methyl radical beams suitable for Zeeman deceleration \cite{Momose:2017} and is thus not further varied in the experiments described here.
An $\approx0.6\%$ room-temperature gas mixture of DTBP in Kr carrier gas \cite{Indritz:1978} is obtained by filling a small amount of liquid DTBP into a lecture bottle and letting it equilibrate with Kr gas (5 bar overall pressure) overnight.
Molecular beams of \ce{CH3} radicals are formed by a supersonic expansion of the precursor gas mixture followed by bond cleavage initiated by one of the two discharge sources described below. Both sources are directly attached to the front plate of the valve (cf. Fig.~\ref{fig:Setup}). The plate discharge source, which is schematically shown in Fig.~\ref{fig:Setup}, is based on the design described by Lewandowski et al. \cite{Lewandowski:2004}. It consists of one insulated, stainless steel electrode plate of the same outer diameter as the valve, with a 3 mm diameter hole and a thickness of 0.7 mm, mounted to the valve head. The electrode is set to high negative voltage, and the valve head serves as the ground electrode. The electrode is insulated against the valve using a polytetrafluoroethylene (PTFE) spacer (same outer diameter as the valve, 7 mm hole diameter, 2.5 mm thickness).
In our case, the supersonic expansion creates conditions of a steady charge flow that prevents the discharge from arcing \cite{Phelps:1960, Davis:1997}, and thus also avoids an unwanted heating of the supersonic beam. Such heating would be caused by the high current flow along very distinct and spatially small paths. We did not observe an increase of the beam temperature when the duration of the discharge pulse was increased. For this reason, we have applied DC voltages to the electrode during the experiments described here. Owing to the negative voltage applied to the electrode, the gas flow is directed against the drag of the electrons. In contrast to schemes in which the electrode voltages are pulsed on and off for very short time periods \cite{Lewandowski:2004}, we found that the discharge process can be driven in the glow regime, i.e., without arcing, at relatively low DC voltages of around -1 kV. To ensure that the discharge is operated in the glow regime, we monitor the applied voltage using a voltage probe. In the case of arcing, a strong and rapid decrease of the voltage is observed. In contrast to that, a glow discharge is characterized by a much smaller and less abrupt voltage drop. In our setup, the use of an additional glow filament for the generation of seed electrons for the discharge, as reported in Refs. \cite{Lewandowski:2004} and \cite{Halfmann:2000}, was not required for the stable operation of the discharge unit. To optimize the plate discharge source, the amplitude of the electrode voltage was varied between -0.6 kV and -1.6 kV.
For comparison with the plate discharge source, we used a dielectric barrier discharge head (DBD) \cite{Even:2015} in combination with a CRUCS valve of the same dimensions as detailed for the plate discharge setup above. Apart from the pulsed valve assembly, the experimental setup is identical to the plate discharge setup. The alumina ($\mathrm{Al_2O_3}$) nozzle for the DBD source (250 $\mu$m orifice diameter) has a 40$^{\circ}$ opening angle and a parabolic shape. The ring electrode inside the DBD head is driven by a ferrite-core step-up transformer built in-house which is optimized for the generation of peak-to-peak AC voltages up to 5 kV at a frequency of 1 MHz. The transformer is externally triggered by two externally programmed pulse trains from a commercial digital delay generator. In the experiments, a fixed number of 0.5 $\mu$s-long TTL pulses with a gap of \mbox{0.5 $\mu$s} is used which results in a time period of 1 $\mu$s for each channel. The TTL pulse trains of each channel are offset by \mbox{0.5 $\mu$s} with respect to each other. The external generation of the DBD pulse sequence allows for the optimization of the time delay between the AC voltage pulse for the DBD source and the pulsed valve, and it is used to adjust the duration of the AC voltage pulse. The operation of the discharge source is monitored by using a fast photodiode (Thorlabs, APD38-A) mounted to a window in the source chamber.
For the experimental characterization of the DBD source, both the peak-to-peak voltage, $U_{\mathrm{DBD}}$, and the number of AC voltage periods contained in a pulse train, $N_{\mathrm{P}}$, were adjusted. The discussion below is limited to $U_{\mathrm{DBD}}$ = 3.0 kV and 4.4 kV and to $N_{\mathrm{P}}=8$ and 120 for the following reasons: \ce{CH3} radical signal is only observed at $U_{\mathrm{DBD}} \geq$ 3.0 kV and $N_{\mathrm{P}} \geq$ 8, and arcing starts to occur at $U_{\mathrm{DBD}} >$ 4.4 kV which in turn leads to large fluctuations of the signal intensity. For $N_{\mathrm{P}} >$ 120, no further increase of the \ce{CH3} signal intensity was observed. In addition to that, for low $N_{\mathrm{P}}$, the delay of the pulse train with respect to the falling edge of the valve trigger was optimized to the maximum of the gas pulse. For $N_{\mathrm{P}} \geq$ 120, the DBD pulse train covered the full duration of the gas pulse.
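The timing of the DBD drive described above — two TTL channels of 0.5 $\mu$s pulses, offset by 0.5 $\mu$s so that each channel repeats with a 1 $\mu$s period — can be sketched as follows (a timing illustration only, not control code for the actual delay generator):

```python
def dbd_pulse_trains(n_periods, pulse_us=0.5, period_us=1.0):
    """(rise, fall) edge times in microseconds for the two TTL channels
    driving the step-up transformer; channel B lags channel A by one
    pulse width, so the combined train spans n_periods * period_us."""
    ch_a = [(i * period_us, i * period_us + pulse_us)
            for i in range(n_periods)]
    ch_b = [(rise + pulse_us, fall + pulse_us) for rise, fall in ch_a]
    return ch_a, ch_b

ch_a, ch_b = dbd_pulse_trains(8)  # N_P = 8, the shortest train used here
```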
To detect \ce{CH3} radicals, [2+1]-resonance-enhanced multiphoton ionization (REMPI) of the molecule via the 4p Rydberg state ~\cite{Black:1988} is used in combination with mass-selected ion detection in a Wiley-McLaren-type ion-time-of-flight spectrometer. For this, laser light at 573 nm at a 10 Hz repetition rate is generated by a Nd:YAG-laser-pumped, pulsed dye laser (Sirah, PrecisionScan) and subsequently frequency-doubled in a BBO crystal (9 mJ pulse energy, 8 ns pulse duration) to yield the desired laser radiation at \mbox{286.3 nm}. The laser beam is focused into the interaction volume using a lens (350 mm focal length). The methyl ions are then accelerated towards an MCP detector perpendicular to the molecular beam axis by applying DC voltages of $U_1$ = 1200 V and $U_2$ = 600 V to the extraction plates, respectively. The use of DC extraction voltages prevents the detection of ions produced during the discharge process.
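As a consistency check on the wavelengths quoted above: two photons of the frequency-doubled dye-laser output drive the REMPI transition, so the two-photon wavenumber is $2/\lambda$. A quick sketch, using the nominal vacuum wavelength (the transition wavenumber quoted later in the text is the calibrated value):

```python
def two_photon_wavenumber_cm(lambda_nm):
    """Two-photon excitation wavenumber (cm^-1) for a UV wavelength in nm."""
    return 2.0 / (lambda_nm * 1e-7)  # 1 nm = 1e-7 cm

wavenumber = two_photon_wavenumber_cm(286.3)  # close to 6.99e4 cm^-1
```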
To characterize the discharge sources, rotationally resolved REMPI spectra and \ce{CH3} beam time-of-flight profiles (TOF) were measured (see below). Here, the time-of-flight is defined as the time between discharge excitation and laser ionization. The excitation time could be deduced from the sudden voltage decrease at the electrode (plate discharge source) and from the light pulses emitted by the discharge (DBD source), respectively.
To ensure the comparability of both discharge sources, the settings of both pulsed valves in the absence of a discharge are adjusted such that the beam temperatures and profiles of both valves are the same. This is done by monitoring the expansion of molecular oxygen (without a discharge) using a [2+1]-REMPI scheme from the X$^3\mathrm{\Sigma_g}$($v$=0) into the $\mathrm{\tilde{C}}^3\Pi_{\mathrm{g}}$($v$=2) state at 287.5 nm \cite{Russell:1987} (6 mJ pulse energy) prior to the characterization of each discharge source.
\section{Results and Discussion}
The experimental results for the two discharge sources presented in Sections \ref{sec:TOFs} and \ref{sec:ROT} below were obtained using \ce{CH4} as \ce{CH3} radical precursor. The efficiency of DTBP for \ce{CH3} production is discussed in Section \ref{sec:DTBP}.
\subsection{\label{sec:TOFs}Time-of-flight profiles}
\begin{figure}[ht!]
\includegraphics[width=8.5cm]{Fig2.eps}
\caption{\label{fig:TOF} \ce{CH3} beam TOF traces at a two-photon wavenumber $\tilde{\nu}$ = \mbox{69852.8 cm$^{-1}$} for the plate discharge source (dashed lines) and the DBD source (solid lines) measured under different discharge conditions using \ce{CH4} as a radical precursor. The relative beam intensity is normalized to the maximum \ce{CH3+} ion yield obtained using the plate discharge source. A pulse train with $N_{\mathrm{P}}=120$ is used for the DBD source. For discharges using \ce{CH4} as a radical precursor, \ce{CH3+} ion signal is not observed when the discharge voltage is set to zero.}
\end{figure}
\ce{CH3} beam TOF traces for both discharge methods are shown in Fig. \ref{fig:TOF}. For clarity, only a selection of traces is given. As can be seen from Fig. \ref{fig:TOF}, similar \ce{CH3} radical intensities can be observed for both discharge sources under certain experimental settings. For the plate discharge, we have found that radical production sets in at $U_{\mathrm{PD}} \approx$ -0.6 kV, and a maximum in signal intensity is measured at around -1.1 kV. At more negative voltages, a decrease in signal intensity and a significant broadening of the TOF spectrum -- indicative of substantial heating of the beam by the discharge -- are observed. As the magnitude of the voltage is increased, the maximum of the TOF profile shifts to shorter times, i.e. the beam velocity increases.
For the DBD source, the $\mathrm{CH_3}$ signal intensity increases as the applied voltage $U_{\mathrm{DBD}}$ is raised. The highest $\mathrm{CH_3}$ radical yield is thus observed at the highest voltage $U_{\mathrm{DBD}}$ that can be applied without causing arcing, and for long pulse trains. At lower values of $U_{\mathrm{DBD}}$, a shift of the mean velocity to slightly higher values is observed. This can be due to an asymmetric density profile of the gas pulse, i.e. due to a higher intensity of the molecular beam at earlier times. Using very short pulses with $N_{\mathrm{P}}=8$ or $N_{\mathrm{P}}=15$, we have also observed a dependence of the peak arrival times on the time delay of the DBD pulse train with respect to the valve trigger timing (cf. Fig.~\ref{fig:Delay}). As can be seen from the lower panel in Fig.~\ref{fig:Delay}, the shift in the mean arrival time of the molecules in the detection volume is not equal to the change in pulse train delay (dashed line). Instead, the peak arrival time is delayed much further than expected from the set pulse train delay, i.e. a faster (slower) beam is produced at early (late) pulse train delays. The early fraction of the gas pulse is probably faster than the later fraction, because it is less efficiently cooled by collisions during the expansion process. The \ce{CH3+} ion yield obtained at each DBD pulse train delay (upper panel in Fig.~\ref{fig:Delay}) reflects the intensity profile of the emitted gas pulse. Our findings also show that \ce{CH3} radical production is induced during the expansion process, where several velocity classes can be addressed during a very short time interval. Hence, to obtain a relatively slow supersonic beam, which is preferable, for example, for supersonic beam deceleration techniques, DBD excitation at late pulse train delays should be used. We have also observed a similar, trigger-delay-dependent time shift of the TOF profiles for a pulsed plate discharge source (not shown).
\begin{figure}[ht!]
\includegraphics[width=8.5cm]{Fig3.eps}
\caption{\label{fig:Delay} Integrated \ce{CH3+} ion yield (upper panel) and time shift of the TOF maximum (lower panel) as a function of the DBD pulse train delay for $N_{\mathrm{P}}=8$ (red points) and $N_{\mathrm{P}}=15$ (black points). Here, the time delay is defined as the time difference between the falling edge of the valve trigger pulse and the starting time of the DBD pulse sequence. The dashed line is a guide to the eye (see main text). The uncertainty is given as the error in the TOF arrival time only.}
\end{figure}
To determine the beam velocity, each TOF profile was fitted to a shifted Boltzmann velocity distribution of a supersonic expansion, convolved with a rectangular function. For simplicity, we assume that the radicals are produced within a rectangular pulse of 50 $\mu$s width, which is equal to the FWHM of the observed voltage drop during the discharge process. Since the discharge duration is small compared to the flight time of the beam through the apparatus, the uncertainty related to this approximated discharge profile is small but not negligible.
The molecular density in the expanding beam is only sufficient to induce a discharge very close to the nozzle orifice. Since this distance is small compared to the overall distance between nozzle and detector, the uncertainty related to the flight distance is also small. We estimate that the overall uncertainty of the velocity determination is within 3$\%$ for the mean velocity of the beam. A relative comparison of the velocities under different excitation conditions is thus possible. However, absolute values of the beam velocities cannot be accurately given due to the simple rectangular pulse shape assumed for the excitation process. The uncertainty for the full width at half maximum (FWHM) is about 10$\%$, which is due to the uncertainties in the fit procedure, i.e. a higher (lower) FWHM can be balanced by a higher (lower) amplitude and then still yield a reasonable fit to the data.
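A minimal version of such a fit can be sketched as follows (the flight distance, the exact functional form of the velocity distribution, and the synthetic data are illustrative assumptions, not the values used in the actual analysis):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import fftconvolve

L_M = 0.5          # assumed nozzle-detector flight distance (m), illustrative
GATE_S = 50e-6     # rectangular discharge window (FWHM of the voltage drop)

def tof_model(t, amp, v0, dv):
    """Shifted-Boltzmann beam profile mapped to arrival time and
    convolved with a 50-us rectangular excitation window."""
    t = np.asarray(t, dtype=float)
    v = L_M / np.clip(t, 1e-6, None)
    # v^3 exp(-((v - v0)/dv)^2) is a common supersonic-beam flux form
    f = amp * v**3 * np.exp(-((v - v0) / dv) ** 2)
    dt = t[1] - t[0]
    gate = np.ones(max(int(GATE_S / dt), 1))
    gate /= gate.size
    return fftconvolve(f, gate, mode="same")

# Synthetic data mimicking a 540 m/s beam, then recover the parameters.
t = np.linspace(0.4e-3, 2.0e-3, 400)
data = tof_model(t, 1.0, 540.0, 60.0)
popt, _ = curve_fit(tof_model, t, data, p0=(0.5, 500.0, 80.0))
print(popt[1])  # close to 540 m/s
```

Because the gate width is fixed at the measured 50 $\mu$s, only the amplitude, mean velocity, and velocity spread remain as free parameters, which mirrors the trade-off between FWHM and amplitude discussed above.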
The measured values of the mean \ce{CH3} beam velocity, the FWHM of the velocity distributions and the relative signal intensities obtained from the TOF profiles in \mbox{Fig. \ref{fig:TOF}} are summarized in Table \ref{tav:FWHM_PD} for the plate discharge source and in Table \ref{tav:FWHM_DBD} for the DBD source. As can be seen from \mbox{Table \ref{tav:FWHM_PD}}, both the beam velocity and the FWHM increase as the applied voltage is raised. In terms of the longitudinal beam temperature, the broadening of the TOF profile between -0.65 kV $\leq U_{\mathrm{PD}} \leq$ -1.6 kV corresponds to a temperature increase from 2 K to 12 K, i.e., by a factor of six. However, the maximum signal intensity is observed at $U_{\mathrm{PD}}$ = -1.1 kV. Hence, it has to be decided whether a beam with a high signal intensity and a larger velocity spread or a beam with a lower signal intensity and a smaller velocity spread is desired. In our Zeeman deceleration measurements, a voltage of $U_{\mathrm{PD}}$ = -1.1 kV is chosen for the electrode to maximize the radical density; the small increase in beam velocity can be compensated for by using a higher phase angle for deceleration \cite{Momose:2017}.
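The quoted longitudinal temperatures can be reproduced from the tabulated values with the Gaussian-width relation $\Delta v_{\mathrm{FWHM}}=\sqrt{8\ln 2\,k_{\mathrm{B}}T/m}$ (a back-of-the-envelope estimate):

```python
import math

K_B = 1.380649e-23            # Boltzmann constant (J/K)
M_CH3 = 15.035 * 1.66054e-27  # CH3 mass (kg)

def long_temperature(v_mean, fwhm_frac):
    """Longitudinal temperature from a fractional FWHM velocity spread,
    assuming a Gaussian velocity distribution."""
    dv = fwhm_frac * v_mean
    return M_CH3 * dv**2 / (8 * math.log(2) * K_B)

print(round(long_temperature(510, 0.17), 1))  # ~2.5 K at -0.65 kV
print(round(long_temperature(560, 0.35), 1))  # ~12.5 K at -1.6 kV
```

The two results match the 2 K to 12 K range quoted in the text for the plate discharge source.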
\begin{table}[ht!]
\caption{\label{tav:FWHM_PD}Summary of \ce{CH3} beam characteristics obtained using the plate discharge source.}
\begin{tabular}{rccc}
\toprule
& \multicolumn{3}{c}{$U_{\mathrm{PD}}$ (kV)}\\
& -0.65 & -1.1 & -1.6 \\
\midrule
Mean beam velocity (m/s) & 510 & 540 & 560\\
FWHM (\%) & 17 & 24 & 35\\
Relative beam intensity (a.u.) & 0.60 & 1 & 0.55\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht!]
\caption{\label{tav:FWHM_DBD} Summary of \ce{CH3} beam characteristics for the DBD source. The beam intensity is normalized to the maximum \ce{CH3+} ion yield obtained using the plate discharge source (cf. Table \ref{tav:FWHM_PD}).}
\begin{tabular}{rcc}
\toprule
 & \multicolumn{2}{c}{$U_{\mathrm{DBD}}$ (kV) at $N_{\mathrm{P}}=120$}\\
 & 3.3 & 4.4 \\
\midrule
Mean beam velocity (m/s) & 555 & 550\\
FWHM (\%) & 30 & 28\\
Relative beam intensity (a.u.) & 0.33 & 0.87\\
\bottomrule
\end{tabular}
\end{table}
\subsection{\label{sec:ROT}Rotationally resolved REMPI spectra}
Fig. \ref{fig:Rot_DBD} shows rotationally resolved REMPI spectra of \ce{CH3} obtained using the plate discharge and the DBD source, respectively. Similar spectra were obtained under all experimental conditions, but for reasons of clarity, only two exemplary spectra are depicted. The experimental conditions were the same as for the measurement of TOF spectra (Section \ref{sec:TOFs}). Here, $N^{\prime\prime}$ and $N^{\prime}$ denote the rotational angular momenta of the electronic ground state and of the electronically excited state, respectively, and $K^{\prime\prime}$ and $K^{\prime}$ are the corresponding projections of $N^{\prime\prime}$ and $N^{\prime}$ onto the principal axis. The spectroscopic assignments are labeled as P, Q, R and S and correspond to transitions with $\Delta N = N^{\prime}-N^{\prime\prime}$ = -1, 0, 1, and 2, respectively. As can be seen from the spectra in Fig. \ref{fig:Rot_DBD}, only transitions arising from $N^{\prime\prime}=0$ (S(0)) and $N^{\prime\prime}=1$ (R(1) and S(1)) have non-zero spectral intensity regardless of the discharge source\footnote{Since P(2) has zero spectral intensity, R(2) must also be zero, so that the transition at $\tilde{\nu}$ = 69912 cm$^{-1}$ can be unambiguously assigned to S(0).}. We can thus conclude that the discharge process does not affect the rotational cooling of \ce{CH3} into the lowest-lying rotational state of each nuclear-spin isomer, ortho-\ce{CH3} ($N^{\prime\prime}=0, K^{\prime\prime}=0$) and para-\ce{CH3} ($N^{\prime\prime}=1, |K^{\prime\prime}|=1$), by the supersonic expansion. Judging from the energy-level structure of the molecule, we can deduce a rotational temperature of $\leq$ 15 K, which is colder than the results obtained in Refs. \cite{Davis:1997} and \cite{Ishiguro1996}. This means that the heating of the supersonic beam at high discharge voltages only affects the translational motion and thus the longitudinal beam temperature of the molecules in the supersonic beam. 
This finding is of particular interest for applications in spectroscopy and supersonic beam deceleration experiments, where not only the velocity distribution of the beam is important but also the population of internal states plays a crucial role.
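The non-observation of $N^{\prime\prime}=2$ lines at $\leq$ 15 K is consistent with a simple rigid-rotor population estimate (a sketch using the literature rotational constant $B\approx9.58$ cm$^{-1}$ and ignoring the $K$ sub-structure and nuclear-spin statistics):

```python
import math

B_CM = 9.578          # CH3 rotational constant (cm^-1), literature value
KB_CM_PER_K = 0.695   # Boltzmann constant in cm^-1/K

def rel_population(n, temp_k):
    """(2N+1)-weighted Boltzmann factor for a rigid rotor, relative
    to N=0. K sub-structure and spin statistics are ignored."""
    e_cm = B_CM * n * (n + 1)
    return (2 * n + 1) * math.exp(-e_cm / (KB_CM_PER_K * temp_k))

for n in (0, 1, 2):
    print(n, rel_population(n, 15.0))
# N=2 carries only ~2% of the N=0 weight at 15 K, consistent with
# the absence of P(2)/R(2) lines in the spectra.
```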
\begin{figure}[ht!]
\includegraphics[width=8.5cm]{Fig4.eps}
\caption{\label{fig:Rot_DBD} Rotationally resolved REMPI spectra of \ce{CH3} obtained using the plate discharge source (dashed line) and the DBD source (solid line). For the plate discharge source, $U_{\mathrm{PD}} = -1.1$ kV was used. For the DBD source, a voltage of $U_{\mathrm{DBD}}= 4.4$ kV was applied and a pulse train with $N_{\mathrm{P}}= 120$ was used. All traces are normalized to the maximum signal intensity at $\tilde{\nu}$ = 69852.8 cm$^{-1}$ (Q branch). Spectroscopic assignments are given on top of the spectra, where the labels P, Q, R and S correspond to $\Delta N = N^{\prime}-N^{\prime\prime}$ = -1, 0, 1, and 2, respectively. The dip in the plate discharge spectrum at $\tilde{\nu} \approx$ 69855 cm$^{-1}$ is an experimental artifact caused by instabilities in the laser system. The linewidth of the REMPI signal is limited by the lifetime of the intermediate 4p Rydberg state of \ce{CH3} \cite{Black:1988}.}
\end{figure}
\subsection{\label{sec:DTBP}DTBP as a radical precursor}
It is known that, upon heating, di-tert-butyl peroxide (\ce{[(CH3)3CO]2}, DTBP) decomposes into two acetone molecules and two methyl radicals \cite{Raley1948}. The activation energy for unimolecular decomposition is very low (1.64 eV \cite{Dickey1949}), so that even a mild discharge should be sufficient to produce a considerable amount of \ce{CH3} radicals. In contrast to other peroxides, DTBP does not react with metal surfaces \cite{Dickey1949} and it is thus suitable for use with pulsed valves. DTBP has, thus far, been mainly used for \ce{CH3} radical generation in flash pyrolysis sources \cite{Yamada1981, Digiuseppe1981, Digiuseppe1982, Hudgens1983, Hoffmann1985, Robinson1988, Balucani2011}. A 60 Hz AC discharge of DTBP has also been tested for \ce{CH3} production in a gas cell \cite{Yamada1981, Amano1982} but not in a supersonic jet. Here, we explore the possibility of using DTBP in combination with a discharge source attached to a pulsed valve.
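The low barrier implies a very steep temperature dependence of the decomposition rate; a bare Boltzmann-factor estimate (ignoring the Arrhenius pre-exponential factor) illustrates why even mild heating can drive the dissociation:

```python
import math

E_A_EV = 1.64              # DTBP decomposition barrier from the text (eV)
K_B_EV = 8.617333e-5       # Boltzmann constant (eV/K)

def boltzmann_factor(temp_k):
    """exp(-Ea/kT): the activation term of the Arrhenius rate."""
    return math.exp(-E_A_EV / (K_B_EV * temp_k))

# Doubling the temperature changes the activation term by ~14 orders
# of magnitude, so a mild discharge readily initiates decomposition.
print(boltzmann_factor(300))   # ~3e-28
print(boltzmann_factor(600))   # ~2e-14
```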
\begin{figure*}[htb!]
\includegraphics[width=0.97\textwidth]{Fig5.eps}
\caption{\label{fig:tBu_DBD} \ce{CH3} beam TOF traces at $\tilde{\nu}$ = 69852.8 cm$^{-1}$ for a) the plate discharge source and b) the DBD source using DTBP as a radical precursor. The nozzle conditions were optimized independently for both the plate discharge and the DBD source in order to obtain the maximum signal difference between discharge on and off. The relative beam intensity in both figures is normalized to the maximum \ce{CH3+} ion yield measured when the discharge is turned off (black curves). In a), traces at different electrode voltages are shown (green and red curves). In b), plots at distinct peak-to-peak voltages are displayed (green and red curves, at $N_{\mathrm{P}}$ = 120). The dissociative ionization of DTBP by the detection laser pulse also leads to the formation of \ce{CH3+} even when the discharge is turned off.}
\end{figure*}
\ce{CH3} radical TOF traces obtained from DTBP discharges are shown in Fig. \ref{fig:tBu_DBD}. Even in the absence of a discharge, \ce{CH3+} ion signal is observed (black curves), which is probably due to the photodissociation of DTBP and subsequent ionization of the \ce{CH3} photofragments by the detection laser. The longer flight times through the apparatus compared to discharges in \ce{CH4} (cf. \mbox{Fig. \ref{fig:TOF}}) are due to the larger amount of Kr carrier gas as well as the higher mass of the DTBP precursor species. The observation of several TOF peaks is caused by a rebound of the valve plunger at the long valve opening times, which were used to obtain sufficient methyl radical signal intensity. For the plate discharge source, the \ce{CH3+} signal intensity at a flight time of around 1.4 ms is increased when the discharge is operated at $U_{\mathrm{PD}}$ = -0.8 kV (red curve in Fig. \ref{fig:tBu_DBD} a)), which indicates that \ce{CH3} radicals are indeed produced during the discharge process. However, the \ce{CH3+} signal intensity originating from the discharge is only about 30 \% of the corresponding intensity obtained with \ce{CH4} as a precursor. At the trailing edge of the gas pulse (flight times $\geq$ 1.5 ms), the gas density was probably not high enough to sustain a discharge that produces a detectable number of \ce{CH3} radicals. At high DC voltages ($U_{\mathrm{PD}}$ = -1.4 kV, green curve in Fig. \ref{fig:tBu_DBD} a)), the \ce{CH3+} signal at flight times around 1.4 ms is nearly depleted. This suggests that, at high DC voltages, DTBP rapidly decomposes into atomic and molecular fragments other than \ce{CH3}. In addition, the measured rotationally resolved REMPI spectrum (Fig. \ref{fig:REMPI_DTBP}) displays a much broader Q branch (FWHM of 20 cm$^{-1}$) compared to the spectrum obtained using \ce{CH4} as radical precursor (FWHM of 6 cm$^{-1}$). Furthermore, the P(2) transition can also be observed for a DTBP discharge.
These observations indicate a higher rotational temperature for \ce{CH3} radicals obtained from DTBP in comparison to using \ce{CH4} as a precursor in the plate discharge source.
\begin{figure*}[htb!]
\includegraphics[width=8.5cm]{Fig6.eps}
\caption{\label{fig:REMPI_DTBP} Rotationally resolved REMPI spectra of \ce{CH3} radicals obtained using DTBP (solid line) and \ce{CH4} (dashed line) as precursors. A plate discharge source was used which was operated at a voltage of $U_{\mathrm{PD}}$ = -0.7 kV for DTBP and at $U_{\mathrm{PD}}$ = -1.1 kV for \ce{CH4}, respectively. All traces are normalized to the maximum signal intensity at $\tilde{\nu}$ = 69852.8 cm$^{-1}$ (Q branch). Spectroscopic assignments are the same as in Fig. \ref{fig:Rot_DBD}.}
\end{figure*}
Using the DBD source, we were unable to see an increased \ce{CH3+} signal intensity upon discharge operation. Again, we attribute this to radical decomposition, since the lowest value of $U_{\mathrm{DBD}}$ is already higher than the maximum voltage for $U_{\mathrm{PD}}$ used for the plate discharge source (cf. Fig. \ref{fig:tBu_DBD} a) and b)). Methyl radicals may still be produced at very low AC discharge voltages which would -- in our case -- require a redesign of the step-up transformer core and could thus not be studied here.
\subsection{Conclusion}
We have found that both a plate discharge and a DBD source can yield cold supersonic beams of methyl radicals of similar intensity. Even though the absolute values of the mean velocities cannot be determined with high accuracy, the relative velocities obtained from the TOF profiles indicate that both discharge sources lead to an increase of the longitudinal beam temperature, and this effect is largest at high electrode voltages. In contrast, the rotational cooling of the beam into the lowest-lying rotational state of each nuclear-spin isomer, induced by inelastic collisions during the supersonic expansion, is not affected by the discharge process. However, in terms of technical complexity, a plate discharge source is much easier to set up than a DBD source. In particular, a plate discharge source does not require special ceramic and magnetic discs, and additional electronics equipment is not necessary for the generation of a pulse train. The use of \ce{CH4} as a radical precursor is preferable to DTBP, since it provides a more efficient, rotationally cold and background-free source of \ce{CH3} for both a plate discharge and a DBD setup over a wide range of experimental settings. In our laboratory, a plate discharge source has been used for \ce{CH3} radical production for several years now and, working with \ce{CH4} as a precursor, has proven very reliable.
\begin{acknowledgments}
Funding by the Deutsche Forschungsgemeinschaft (International Research Training Group 2079) is gratefully acknowledged. K.D. acknowledges the Fonds der Chemischen Industrie (FCI) for financial support through a Liebig fellowship. The study was also supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and by funds from the Canada Foundation for Innovation (CFI) for the Centre for Research on Ultra-Cold Systems (CRUCS) at UBC. The authors thank Edvardas Narevicius (Weizmann Institute, Israel) for the loan of the DBD head.
\end{acknowledgments}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2\doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{35}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Baulch \latin{et~al.}(2005)Baulch, Bowman, Cobos, Cox, Just, Kerr,
Pilling, Stocker, Troe, Tsang, Walker, and Warnatz]{Baulch:2005}
Baulch,~D.~L.; Bowman,~C.~T.; Cobos,~C.~J.; Cox,~R.~A.; Just,~T.; Kerr,~J.~A.;
Pilling,~M.~J.; Stocker,~D.; Troe,~J.; Tsang,~W.; Walker,~R.~W.; Warnatz,~J.
\emph{J. Phys. Chem. Ref. Data} \textbf{2005}, \emph{34}, 757\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hirota(1985)]{Hirota1985}
Hirota,~E. In \emph{High-Resolution Spectroscopy of Transient Molecules};
Sch{\"a}fer,~F.~P., Ed.; Springer Series in Chemical Physics; Springer, 1985;
Vol.~40\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[McFadden \latin{et~al.}(1972)McFadden, Jr., Kalos, Gentry, and
Ross]{McFadden1972}
McFadden,~D.~L.; Jr.,~E. A.~M.; Kalos,~F.; Gentry,~W.~R.; Ross,~J. \emph{J.
Chem. Phys.} \textbf{1972}, \emph{57}, 1351\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Whitehead(1996)]{Whitehead1996}
Whitehead,~J.~C. \emph{Rep. Prog. Phys.} \textbf{1996}, \emph{59}, 993\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Davis \latin{et~al.}(1997)Davis, Anderson, Duxbury, and
Nesbitt]{Davis:1997}
Davis,~S.; Anderson,~D.~T.; Duxbury,~G.; Nesbitt,~D.~J. \emph{J. Chem. Phys.}
\textbf{1997}, \emph{107}, 5661\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Momose \latin{et~al.}(2013)Momose, Liu, Zhou, Djuricanin, and
Carty]{Momose2013}
Momose,~T.; Liu,~Y.; Zhou,~S.; Djuricanin,~P.; Carty,~D. \emph{Phys. Chem.
Chem. Phys.} \textbf{2013}, \emph{15}, 1772\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2017)Liu, Vashishta, Djuricanin, Zhou, Zhong,
Mittertreiner, Carty, and Momose]{Momose:2017}
Liu,~Y.; Vashishta,~M.; Djuricanin,~P.; Zhou,~S.; Zhong,~W.; Mittertreiner,~T.;
Carty,~D.; Momose,~T. \emph{Phys. Rev. Lett.} \textbf{2017}, \emph{118},
093201\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Miller(1988)]{Miller1988}
Miller,~D.~R. In \emph{Atomic and Molecular Beam Methods}; Scoles,~G., Ed.;
Oxford University Press: New York, 1988; Vol.~1; Chapter 2, pp 14 -- 53\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Morse(1996)]{Morse1996}
Morse,~M.~D. In \emph{Atomic, Molecular, and Optical Physics: Atoms and
Molecules}; Dunning,~F., Hulet,~R.~G., Eds.; Experimental Methods in the
Physical Sciences; Academic Press, 1996; Vol. 29, Part B; pp 21 -- 47\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[DiGiuseppe \latin{et~al.}(1982)DiGiuseppe, Hudgens, and
Lin]{Digiuseppe1982}
DiGiuseppe,~T.~G.; Hudgens,~J.~W.; Lin,~M.~C. \emph{J. Phys. Chem.}
\textbf{1982}, \emph{86}, 36\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Robinson \latin{et~al.}(1988)Robinson, Nathanson, Continetti, and
Lee]{Robinson1988}
Robinson,~G.~N.; Nathanson,~G.~M.; Continetti,~R.~E.; Lee,~Y.~T. \emph{J. Chem.
Phys.} \textbf{1988}, \emph{89}, 6744\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zahedi \latin{et~al.}(1994)Zahedi, Harrison, and Nibler]{Zahedi1994}
Zahedi,~M.; Harrison,~J.~A.; Nibler,~J.~W. \emph{J. Chem. Phys.} \textbf{1994},
\emph{100}, 4043\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Holt \latin{et~al.}(1984)Holt, McCurdy, Weisman, Adams, and
Engel]{Holt1984}
Holt,~P.~L.; McCurdy,~K.~E.; Weisman,~R.~B.; Adams,~J.~S.; Engel,~P.~S.
\emph{J. Chem. Phys.} \textbf{1984}, \emph{81}, 3349\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ishiguro \latin{et~al.}(1996)Ishiguro, Imajo, Harada, Matsubara,
Tanaka, and Tanaka]{Ishiguro1996}
Ishiguro,~M.; Imajo,~T.; Harada,~K.; Matsubara,~M.; Tanaka,~K.; Tanaka,~T.
\emph{Chem. Phys. Lett.} \textbf{1996}, \emph{263}, 629\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Anderson \latin{et~al.}(1996)Anderson, Davis, Zwier, and
Nesbitt]{Anderson:1996}
Anderson,~D.~T.; Davis,~S.; Zwier,~T.~S.; Nesbitt,~D.~J. \emph{Chem. Phys.
Lett.} \textbf{1996}, \emph{258}, 207\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kogelschatz(2003)]{Kogelschatz:2003}
Kogelschatz,~U. \emph{Plasma Chem. Plasma P.} \textbf{2003}, \emph{23}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lewandowski \latin{et~al.}(2004)Lewandowski, Hudson, Bochinski, and
Ye]{Lewandowski:2004}
Lewandowski,~H.; Hudson,~E.~R.; Bochinski,~J.; Ye,~J. \emph{Chem. Phys. Lett.}
\textbf{2004}, \emph{395}, 53\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Raunhardt \latin{et~al.}(2008)Raunhardt, Sch{\"a}fer, Vanhaecke, and
Merkt]{Raunhardt:2008}
Raunhardt,~M.; Sch{\"a}fer,~M.; Vanhaecke,~N.; Merkt,~F. \emph{J. Chem. Phys.}
\textbf{2008}, \emph{128}, 164310\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Luria \latin{et~al.}(2009)Luria, Lavie, and Even]{Luria:2009}
Luria,~K.; Lavie,~N.; Even,~U. \emph{Rev. Sci. Instrum.} \textbf{2009},
\emph{80}, 104102\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ploenes \latin{et~al.}(2016)Ploenes, Haas, Zhang, van~de Meerakker,
and Willitsch]{Ploenes:2016}
Ploenes,~L.; Haas,~D.; Zhang,~D.; van~de Meerakker,~S. Y.~T.; Willitsch,~S.
\emph{Rev. Sci. Instrum.} \textbf{2016}, \emph{87}, 053305\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[van Beek and ter Meulen(2001) van Beek and ter Meulen]{vanBeek2001}
van Beek,~M. C.; ter Meulen,~J. J.
\emph{Chem. Phys. Lett.} \textbf{2001}, \emph{337}, 237\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Roth(1995)]{Roth:1995}
Roth,~J.~R. \emph{Industrial Plasma Engineering}; CRC Press: Boca Raton, 1995;
Vol.~I\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Even(2015)]{Even:2015}
Even,~U. \emph{EPJ Tech. Instrum.} \textbf{2015}, \emph{2}, 17\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Indritz \latin{et~al.}(1978)Indritz, Stone, and
Williams]{Indritz:1978}
Indritz,~D.; Stone,~J.; Williams,~F. \emph{J. Chem. Eng. Data} \textbf{1978},
\emph{23}, 6\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Phelps \latin{et~al.}(1960)Phelps, Pack, and Frost]{Phelps:1960}
Phelps,~A.~V.; Pack,~J.~L.; Frost,~L.~S. \emph{Phys. Rev.} \textbf{1960},
\emph{117}, 470\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Halfmann \latin{et~al.}(2000)Halfmann, Koensgen, and
Bergmann]{Halfmann:2000}
Halfmann,~T.; Koensgen,~J.; Bergmann,~K. \emph{Meas. Sci. Technol.}
\textbf{2000}, \emph{11}, 1510\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Black and Powis(1988)Black, and Powis]{Black:1988}
Black,~J.~F.; Powis,~I. \emph{J. Chem. Phys.} \textbf{1988}, \emph{89},
3986\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Johnson \latin{et~al.}(1987)Johnson, Long, and Hudgens]{Russell:1987}
Johnson,~R.~D.; Long,~G.~R.; Hudgens,~J.~W. \emph{J. Chem. Phys.}
\textbf{1987}, \emph{87}, 1977\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
\section{Introduction}
\vspace*{-0.1in}
In this paper, we consider the following stochastic optimization problem:
\begin{align}\label{eqn:stocn}
\min_{\mathbf{x}\in\mathbb{R}^d}f(\mathbf{x})=\mathbb{E}_{\xi}[f(\mathbf{x}; \xi)],
\end{align}
where $f(\mathbf{x}; \xi)$ is a random function but not necessarily convex.
The above formulation plays an important role for solving many machine learning problems, e.g., deep learning~\cite{goodfellow2016deep}.
A prevalent algorithm for solving the problem is stochastic gradient descent (SGD)~\cite{ghadimi2013stochastic}. However, SGD can only guarantee convergence to a first-order stationary point (i.e., $\|\nabla f(\mathbf{x})\|\leq\epsilon_1$, where $\|\cdot\|$ denotes the Euclidean norm) for non-convex optimization, which could be a saddle point.
A potential solution to this issue is to find a nearly second-order stationary point $\mathbf{x}$ such that $\|\nabla f(\mathbf{x})\|\leq \epsilon_1\ll 1$ and $-\lambda_{\text{min}}(\nabla^2 f(\mathbf{x}))\leq \epsilon_2\ll 1$, where $\lambda_{\text{min}}(\cdot)$ denotes the smallest eigenvalue. When the objective function is non-degenerate (e.g., strict saddle~\cite{pmlr-v40-Ge15}, meaning the Hessian at every saddle point has a negative eigenvalue), an approximate second-order stationary point is close to a local minimum.
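As a concrete check of this target condition, here is a small numpy sketch (purely illustrative, using an exact Hessian rather than the stochastic oracles considered below) that tests whether a point is an $(\epsilon_1, \epsilon_2)$-second-order stationary point:

```python
import numpy as np

def is_second_order_stationary(grad, hess, eps1, eps2):
    """Check the (eps1, eps2)-condition: gradient norm at most eps1 and
    smallest Hessian eigenvalue at least -eps2."""
    lam_min = np.linalg.eigvalsh(hess)[0]  # eigvalsh returns eigenvalues in ascending order
    return bool(np.linalg.norm(grad) <= eps1 and lam_min >= -eps2)

# f(x, y) = x^2 - y^2 has a saddle at the origin: the gradient vanishes
# there, but the Hessian has a negative eigenvalue, so the check fails.
grad = np.zeros(2)
hess = np.diag([2.0, -2.0])
print(is_second_order_stationary(grad, hess, eps1=1e-3, eps2=1e-3))  # False
```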
Although a number of algorithms have emerged for finding a nearly second-order stationary point of a deterministic non-convex function~\cite{nesterov2006cubic,conn2000trust,Cartis2011,Cartis2011b,DBLP:conf/stoc/AgarwalZBHM17,DBLP:journals/corr/CarmonDHS16,royer2017complexity}, results for stochastic non-convex optimization are still limited. There are three closely related works~\cite{pmlr-v40-Ge15, DBLP:conf/colt/ZhangLC17,natasha2}. A summary of the algorithms in these works and their convergence results is presented in Table~\ref{tab:2}. It is notable that Natasha2, which involves switching between several sub-routines including SGD, a degenerate version of Natasha1.5 for finding a first-order stationary point, and an online power method (i.e., Oja's algorithm~\cite{oja1982simplified}) for computing the negative curvature (i.e., the eigenvector corresponding to the minimum eigenvalue) of the Hessian matrix, is more complex than noisy SGD and SGLD.
\begin{table*}[t]
\caption{Comparison with existing stochastic algorithms for achieving an $(\epsilon_1, \epsilon_2)$-second-order stationary solution to~(\ref{eqn:stocn}), where $p$ is a number at least $4$, IFO (incremental first-order oracle) and ISO (incremental second-order oracle) are terminologies borrowed from~\cite{reddi2017generic}, representing $\nabla f(\mathbf{x}; \xi)$ and $\nabla^2 f(\mathbf{x}; \xi)\mathbf{v}$ respectively, $T_h$ denotes the runtime of ISO and $T_g$ denotes the runtime of IFO. The proposed algorithm SNCG has two variants with different time complexities, where the result marked with $*$ admits a practical improvement detailed later. }
\centering
\label{tab:2}
\begin{small}\begin{tabular}{l|lll}
\toprule
algo.& oracle & second-order guarantee in &time complexity\\
&& expectation or high probability&\\
\midrule
Noisy SGD~\cite{pmlr-v40-Ge15} &IFO&$(\epsilon, \epsilon^{1/4})$, high probability&$\widetilde O\left(T_gd^p\epsilon^{-4}\right)$\\
\midrule
SGLD~\cite{DBLP:conf/colt/ZhangLC17} &IFO&$(\epsilon, \epsilon^{1/2})$, high probability&$\widetilde O\left(T_gd^p\epsilon^{-4}\right)$\\
\midrule
Natasha2~\cite{natasha2} &IFO + ISO&$(\epsilon, \epsilon^{1/2})$, expectation&$\widetilde O\left( T_g\epsilon^{-3.5}+T_h\epsilon^{-2.5} \right)$\\
\midrule
SNCG&IFO + ISO&$(\epsilon, \epsilon^{1/2})$, high probability&$\widetilde O\left(T_g\epsilon^{-4} + T_h\epsilon^{-3}\right)^*$\\
&&&$\widetilde O\left(T_g\epsilon^{-4} + T_h\epsilon^{-2.5}\right)$\\
\bottomrule
\end{tabular}
\end{small}
\vspace*{-0.2in}
\end{table*}
In this paper, we propose new stochastic optimization algorithms for solving~(\ref{eqn:stocn}). Similar to several existing algorithms, we also use the negative curvature to escape from saddle points. The key difference is that we compute a noisy negative curvature based on a proper mini-batch of sampled random functions. A novel updating step is proposed that follows a stochastic gradient or the noisy negative curvature depending on which decreases the objective value most. Building on this step, we present two algorithms that have different time complexities. A summary of our results and comparison with previous similar results are presented in Table~\ref{tab:2}. To the best of our knowledge, the proposed algorithms are the first for stochastic non-convex optimization with a second-order convergence in {\it high probability} and a time complexity that is {\it almost linear} in the problem's dimensionality. It is also notable that our result is much stronger than the mini-batch SGD analyzed in~\cite{Ghadimi:2016:MSA:2874819.2874863} for stochastic non-convex optimization in that (i) we use the same number of IFO as in~\cite{Ghadimi:2016:MSA:2874819.2874863} but achieve the second-order convergence using a marginal number of ISO; (ii) our high probability convergence is for a solution from a single run of the proposed algorithms instead of from multiple runs and using a boosting technique as in~\cite{Ghadimi:2016:MSA:2874819.2874863}.
Before moving to the next section, we would like to remark that stochastic algorithms with second-order convergence guarantees were recently proposed for solving a finite-sum problem~\cite{reddi2017generic}, which alternate between a first-order sub-routine (e.g., stochastic variance reduced gradient) and a second-order sub-routine (e.g., Hessian descent). Since full gradients are computed occasionally, they are not applicable to the general stochastic non-convex optimization problem~(\ref{eqn:stocn}) and hence are excluded from the comparison. Nevertheless, our idea in the proposed NCG-S step of letting negative curvature descent compete with gradient descent can be borrowed to reduce the number of stochastic Hessian-vector products in their Hessian descent. We will elaborate on this point later.
\section{Preliminaries and Building Blocks}
\vspace*{-0.1in}
Our goal is to find an $(\epsilon_1, \epsilon_2)$-second order stationary point $\mathbf{x}$ such that
$\|\nabla f(\mathbf{x})\|\leq \epsilon_1$, and $\lambda_{\min}(\nabla^2 f(\mathbf{x}))\geq -\epsilon_2$.
To this end, we make the following assumptions regarding~(\ref{eqn:stocn}).
\begin{ass}\label{ass:1} (i) Every random function $f(\mathbf{x}; \xi)$ is twice differentiable, and it has Lipschitz continuous gradient, i.e., there exists $L_1>0$ such that $\|\nabla f(\mathbf{x}; \xi) - \nabla f(\mathbf{y}; \xi)\|\leq L_1\|\mathbf{x} - \mathbf{y}\|$, (ii) $f(\mathbf{x})$ has Lipschitz continuous Hessian, i.e., there exists $L_2>0$ such that $\|\nabla^2 f(\mathbf{x}) - \nabla^2 f(\mathbf{y})\|_2\leq L_2\|\mathbf{x} - \mathbf{y}\|$, (iii) given an initial point $\mathbf{x}_0$, there exists $\Delta<\infty$ such that $f(\mathbf{x}_0) - f(\mathbf{x}_*)\leq \Delta$, where $\mathbf{x}_*$ denotes the global minimum of $f(\mathbf{x})$; (iv) there exists $G>0$ such that $\mathbb{E}[\exp(\|\nabla f(\mathbf{x}; \xi) - \nabla f(\mathbf{x})\|/G)]\leq \exp(1)$ holds.
\end{ass}
\vspace*{-0.1in}
{\bf Remark:} The first three assumptions are standard for non-convex optimization in order to establish second-order convergence. The last assumption is standard in stochastic optimization and is necessary for the high-probability analysis.
The proposed algorithms require noisy first-order information at each iteration and possibly noisy second-order information. We first discuss approaches to compute this information, which will lead us to the updating step NCG-S. To compute noisy first-order information, we use an incremental first-order oracle (IFO) that takes $\mathbf{x}$ as input and returns $\nabla f(\mathbf{x}; \xi)$. In particular, at a point $\mathbf{x}$ we sample a set of random variables $\mathcal{S}_1 = \{\xi_1, \xi_2, \ldots,\}$ and compute a stochastic gradient $\mathbf{g}(\mathbf{x}) = \frac{1}{|\mathcal S_1|}\sum_{\xi_i\in\mathcal{S}_1}\nabla f(\mathbf{x}; \xi_i)$ such that $\|\mathbf{g}(\mathbf{x}) - \nabla f(\mathbf{x})\|\leq \epsilon_4\leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$ holds with high probability. This can be guaranteed by the following lemma.
\begin{lemma}
\label{lem:gc}
Suppose {\bf Assumption 1} (iv) holds. Let $\mathbf{g}(\mathbf{x}) = \frac{1}{|\mathcal S_1|}\sum_{\xi_i\in\mathcal{S}_1}\nabla f(\mathbf{x}; \xi_i)$. For any $\epsilon_4,\delta\in(0,1)$, $\mathbf{x}\in\mathbb{R}^d$, when
$|\mathcal{S}_1|\geq\frac{4G^2(1+3\log^2(1/\delta))}{\epsilon_4^2}$,
we have
$ \Pr(\|\mathbf{g}(\mathbf{x})-\nabla f(\mathbf{x})\|\leq\epsilon_4)\geq 1-\delta.$
\end{lemma}
The lemma can be proved by using a large-deviation theorem for vector-valued martingales (see, e.g., Lemma 4 in~\cite{Ghadimi:2016:MSA:2874819.2874863}).
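As an illustration of Lemma~\ref{lem:gc}, the sketch below computes the required mini-batch size from the stated bound and empirically checks concentration of the averaged gradient on synthetic Gaussian noise (the value of $G$, the noise scale, and the true gradient are toy choices, not from the paper):

```python
import math
import numpy as np

def gradient_batch_size(G, eps4, delta):
    """Mini-batch size from Lemma 1: |S1| >= 4 G^2 (1 + 3 log^2(1/delta)) / eps4^2."""
    return math.ceil(4 * G**2 * (1 + 3 * math.log(1 / delta)**2) / eps4**2)

# Toy check: average i.i.d. noisy gradients and verify the error is small.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
n = gradient_batch_size(G=1.0, eps4=0.1, delta=0.01)
samples = true_grad + rng.normal(scale=0.3, size=(n, 3))
err = np.linalg.norm(samples.mean(axis=0) - true_grad)
print(n, err)
```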
To compute noisy second-order information, we calculate a noisy negative curvature of a stochastic Hessian that is sufficiently close to the true Hessian. In particular, at a point $\mathbf{x}$ we sample a set of random variables $\mathcal{S}_2 = \{\xi'_1, \xi'_2, \ldots, \}$ and compute a noisy negative curvature $\mathbf{v}$ of the stochastic Hessian $H(\mathbf{x}) = \frac{1}{|\mathcal S_2|}\sum_{\xi'_i\in\mathcal{S}_2}\nabla^2 f(\mathbf{x}; \xi'_i)$, where $|\mathcal{S}_2|$ is sufficiently large such that $\|H(\mathbf{x}) - \nabla^2 f(\mathbf{x})\|_2\leq \epsilon_3\leq \epsilon_2/24$ holds with high probability, where $\|\cdot\|_2$ denotes the spectral norm of a matrix. This can be guaranteed according to the following lemma.
\begin{lemma}
\label{lem:Hc}
Suppose {\bf Assumption 1} (i) holds. Let $H(\mathbf{x}) = \frac{1}{|\mathcal S_2|}\sum_{\xi_i\in\mathcal{S}_2}\nabla^2 f(\mathbf{x}; \xi_i)$. For any $\epsilon_3,\delta\in(0,1)$, $\mathbf{x}\in\mathbb{R}^d$, when $|\mathcal{S}_2|\geq\frac{16L_1^2}{\epsilon_3^2}\log(\frac{2d}{\delta})$, we have
$ \Pr(\|H(\mathbf{x})-\nabla^2 f(\mathbf{x})\|_2\leq\epsilon_3)\geq 1-\delta.$
\end{lemma}
The above lemma can be proved by using matrix concentration inequalities; see Lemma 4 in~\cite{peng16inexacthessian} for a proof. To compute a noisy negative curvature of $H(\mathbf{x})$, we can leverage approximate PCA algorithms~\cite{DBLP:conf/nips/ZhuL16,DBLP:conf/icml/GarberHJKMNS16} using the incremental second-order oracle (ISO) that computes $\nabla^2 f(\mathbf{x}; \xi)\mathbf{v}$.
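Note that the ISO never requires forming the Hessian explicitly: the product $\nabla^2 f(\mathbf{x}; \xi)\mathbf{v}$ can be obtained via automatic differentiation or, as in the illustrative sketch below, via a central finite difference of gradients (the quadratic test function and step size $h$ are our own choices for illustration):

```python
import numpy as np

def hvp_finite_diff(grad_fn, x, v, h=1e-5):
    """Approximate the Hessian-vector product (the ISO) by a central
    finite difference of the gradient along direction v."""
    return (grad_fn(x + h * v) - grad_fn(x - h * v)) / (2 * h)

# Toy quadratic f(x) = 0.5 x^T A x, so the exact HVP is A @ v.
A = np.array([[2.0, 0.5], [0.5, -1.0]])
grad_fn = lambda x: A @ x
x = np.array([0.3, -0.7])
v = np.array([1.0, 2.0])
print(hvp_finite_diff(grad_fn, x, v))  # close to A @ v = [3.0, -1.5]
```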
\begin{lemma}\label{lem:approxPCA}
Let $H = \frac{1}{m}\sum_{i=1}^mH_i$ where $\|H_i\|_2\leq L_1$. There exists a randomized algorithm $\mathcal A$ such that with probability at least $1- \delta$, $\mathcal A$ produces a unit vector $\mathbf{v}$ satisfying $\lambda_{\min}(H)\geq \mathbf{v}^{\top}H\mathbf{v} - \varepsilon$ with a time complexity of $\widetilde O(T_h\max\{m, m^{3/4}\sqrt{L_1/\varepsilon}\})$, where $T_h$ denotes the time of computing $H_i\mathbf{v}$ and $\widetilde O$ suppresses a logarithmic term in $d, 1/\delta, 1/\varepsilon$.
\end{lemma}
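The lemma is stated abstractly; as a hedged illustration (ignoring the $m^{3/4}$ refinement of the stated complexity), one simple routine of this type is power iteration on the shifted matrix $L_1 I - H$, whose top eigenvector is the eigenvector of the smallest eigenvalue of $H$, using only Hessian-vector products:

```python
import numpy as np

def min_eigvec_power(hvp, dim, L1, iters=500, seed=0):
    """Approximate the eigenvector of the smallest eigenvalue of H by
    power iteration on L1*I - H (top eigenvector of the shifted matrix
    = bottom eigenvector of H). `hvp(v)` returns H @ v."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = L1 * v - hvp(v)          # (L1*I - H) v
        v = w / np.linalg.norm(w)
    return v, float(v @ hvp(v))      # unit vector and its Rayleigh quotient

H = np.diag([2.0, 0.5, -1.0])        # smallest eigenvalue is -1
v, rq = min_eigvec_power(lambda u: H @ u, dim=3, L1=2.0)
print(rq)  # close to -1.0
```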
\textbf{NCG-S: the updating step.} With the approaches for computing noisy first-order and second-order information, we present a novel updating step called NCG-S in Algorithm \ref{alg:sgnc}, which uses a competing idea: it takes a step along the noisy negative gradient direction or the noisy negative curvature direction, depending on which decreases the objective value more. One striking feature of NCG-S is that the noise level in computing a noisy negative curvature of $H(\mathbf{x})$ is set to a free parameter $\varepsilon$ instead of the target accuracy level $\epsilon_2$ as in many previous works~\cite{DBLP:conf/stoc/AgarwalZBHM17,DBLP:journals/corr/CarmonDHS16,peng16inexacthessian}, which allows us to design an algorithm with a much reduced number of ISO calls in practice. The following lemma establishes the sufficient decrease in the objective value achieved by each NCG-S step.
\begin{lemma}
\label{lemma:ncg-s}
Suppose Assumption 1 holds.
Conditioned on the event $\mathcal A=\{\|H(\mathbf{x}_j) - \nabla^2 f(\mathbf{x}_j)\|_2\leq \epsilon_3\} \cap \{\|\mathbf{g}(\mathbf{x}_j) - \nabla f(\mathbf{x}_j)\|\leq \epsilon_4\}$ where $\epsilon_3\leq \epsilon_2/24$ and $\epsilon_4 \leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$, the update $\mathbf{x}_{j+1}=\text{NCG-S}(\mathbf{x}_j,\varepsilon,\delta,\epsilon_1,\epsilon_2)$ satisfies
$ f(\mathbf{x}_j) - f(\mathbf{x}_{j+1})\geq \max\left(\frac{1}{4L_1}\|\mathbf{g}(\mathbf{x}_j)\|^2 - \frac{\epsilon_1^2}{8L_1}, \frac{-\epsilon_2^2\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j}{2L_2^2} - \frac{11\epsilon_2^3}{48L_2^2}\right).$
\end{lemma}
\setlength\floatsep{0.1\baselineskip plus 3pt minus 1pt}
\setlength\textfloatsep{0.1\baselineskip plus 1pt minus 1pt}
\setlength\intextsep{0.1\baselineskip plus 1pt minus 1 pt}
\begin{algorithm}[t]
\caption{The stochastic NCG step: $(\mathbf{x}^+, \mathbf{v}^\top H(\mathbf{x})\mathbf{v})=\text{NCG-S}(\mathbf{x}, \varepsilon, \delta,\epsilon_1, \epsilon_2)$}\label{alg:sgnc}
\textbf{Input}: $\mathbf{x}$, $\varepsilon$, $\delta$, $\epsilon_1, \epsilon_2$\;
let $\mathbf{g}(\mathbf{x})$ and $H(\mathbf{x})$ be a stochastic gradient and Hessian according to Lemma~\ref{lem:gc} and~\ref{lem:Hc}\;
Find a unit vector $\mathbf{v}$ such that $\lambda_{\min}(H(\mathbf{x}))\geq \mathbf{v}^{\top}H(\mathbf{x})\mathbf{v} - \varepsilon$
according to Lemma~\ref{lem:approxPCA}\;
\If{$-\frac{\epsilon_2^2}{2L_2^2}\mathbf{v}^\top H(\mathbf{x})\mathbf{v}-\frac{11\epsilon_2^3}{48L_2^2}>\frac{\|\mathbf{g}(\mathbf{x})\|^2}{4L_1} - \frac{\epsilon_1^2}{8L_1}$}{
Compute $\mathbf{x}^+ = \mathbf{x} - \frac{\epsilon_2}{L_2}\text{sign}(\mathbf{v}^{\top}\mathbf{g}(\mathbf{x}))\mathbf{v}$\;
}
\Else{
Compute $\mathbf{x}^+ = \mathbf{x} - \frac{1}{L_1}\mathbf{g}(\mathbf{x})$\;
}
return $\mathbf{x}^+, \mathbf{v}^\top H(\mathbf{x})\mathbf{v}$
\end{algorithm}
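The core comparison of Algorithm~\ref{alg:sgnc} can be sketched in a few lines of numpy. This minimal sketch takes the stochastic gradient, the noisy curvature direction, and its Rayleigh quotient as given inputs; the mini-batch construction and the approximate PCA routine are omitted:

```python
import numpy as np

def ncg_s_step(x, g, v, vHv, L1, L2, eps1, eps2):
    """One NCG-S update: compare the guaranteed decrease of a
    negative-curvature step against that of a gradient step and take
    the better one. g: stochastic gradient, v: unit noisy
    negative-curvature direction, vHv: v^T H(x) v."""
    curv_gain = -(eps2**2) * vHv / (2 * L2**2) - 11 * eps2**3 / (48 * L2**2)
    grad_gain = np.dot(g, g) / (4 * L1) - eps1**2 / (8 * L1)
    if curv_gain > grad_gain:
        # negative-curvature step, signed against the gradient as in Algorithm 1
        return x - (eps2 / L2) * np.sign(np.dot(v, g)) * v
    return x - g / L1

x = np.zeros(2)
# large gradient, no negative curvature: the gradient step wins
print(ncg_s_step(x, np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.0, 1.0, 1.0, 0.1, 0.1))
```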
\section{The Proposed Algorithms: SNCG}
\vspace*{-0.1in}
In this section, we present two variants of the proposed algorithm based on the NCG-S step, shown in Algorithm~\ref{alg:sgncA} and Algorithm~\ref{alg:SGSNCG}. The differences between the two variants are: (i) SNCG-1 uses NCG-S at every iteration to update the solution, while SNCG-2 only uses NCG-S when the approximate gradient's norm is small; (ii) the noise level $\varepsilon$ for computing the noisy negative curvature (as in Lemma~\ref{lem:approxPCA}) in SNCG-1 is set to $\max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)/2$, adaptive to the magnitude of the stochastic gradient, where $\alpha\in(0,1]$ is a parameter characterized by $\epsilon_2 = \epsilon_1^\alpha$. In contrast, the noise level $\varepsilon$ in SNCG-2 is simply set to $\epsilon_2/2$. These differences lead to different time complexities for the two algorithms.
\setlength\floatsep{0.1\baselineskip plus 3pt minus 2pt}
\setlength\textfloatsep{0.1\baselineskip plus 1pt minus 2pt}
\setlength\intextsep{0.1\baselineskip plus 1pt minus 2 pt}
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{SNCG-1: $(\mathbf{x}_0, \epsilon_1, \alpha, \delta)$}\label{alg:sgncA}
\textbf{Input}: $\mathbf{x}_0$, $\epsilon_1, \alpha$, $\delta$\;
Set $\mathbf{x}_1=\mathbf{x}_0$, $\epsilon_2 = \epsilon_1^\alpha$, $\delta' = \delta /(1+\max\left(\frac{48L_2^2}{\epsilon_2^3}, \frac{8L_1}{\epsilon_1^2}\right)\Delta)$\;
\For{$j=1,2,\ldots,$}{
$(\mathbf{x}_{j+1}, \mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j) = \text{NCG-S}(\mathbf{x}_j, \max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)/2, \delta', \epsilon_1, \epsilon_2)$\;
\If{ $\mathbf{v}_j^{\top}H(\mathbf{x}_j)\mathbf{v}_j> -\epsilon_2/2$ and $\|\mathbf{g}(\mathbf{x}_j)\|\leq \epsilon_1$}
{return $\mathbf{x}_j$}
}
\end{algorithm}
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{SNCG-2: $(\mathbf{x}_0, \epsilon_1, \delta)$}\label{alg:SGSNCG}
\textbf{Input}: $\mathbf{x}_0$, $\epsilon_1, \delta$\;
Set $\mathbf{x}_1=\mathbf{x}_0$, $\delta' = \delta /(1+\max\left(\frac{48L_2^2}{\epsilon_2^3}, \frac{8L_1}{\epsilon_1^2}\right)\Delta)$\;
\For{$j=1,2,\ldots,$}{
Compute $\mathbf{g}(\mathbf{x}_j)$ according to Lemma~\ref{lem:gc}\;
\If{$\|\mathbf{g}(\mathbf{x}_j)\|\geq\epsilon_1$}{
compute $\mathbf{x}_{j+1}=\mathbf{x}_j-\frac{1}{L_1}\mathbf{g}(\mathbf{x}_j)$// SG step\; }
\Else{
compute $(\mathbf{x}_{j+1},\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j)=\text{NCG-S}(\mathbf{x}_j,\epsilon_2/2,\delta',\epsilon_1, \epsilon_2)$\;
\If{$\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j>-\epsilon_2/2$}
{
return $\mathbf{x}_j$\;
}
}
}
\end{algorithm}
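To illustrate the control flow of SNCG-2 on a toy problem, the following sketch replaces the stochastic oracles with exact gradients and Hessians and uses an exact eigendecomposition in place of the approximate PCA routine; the test function and all constants are illustrative choices, not from the paper:

```python
import numpy as np

# Toy run of the SNCG-2 control flow on f(x, y) = x^2 - y^2 + y^4 / 4,
# which has a saddle point at the origin and local minima at (0, +/-sqrt(2)).
def grad(x): return np.array([2 * x[0], -2 * x[1] + x[1]**3])
def hess(x): return np.array([[2.0, 0.0], [0.0, -2.0 + 3 * x[1]**2]])

L1, L2, eps1, eps2 = 8.0, 6.0, 1e-2, 1e-1
x = np.array([0.0, 1e-3])            # start near the saddle at the origin
for _ in range(10000):
    g = grad(x)
    if np.linalg.norm(g) >= eps1:    # SG step
        x = x - g / L1
    else:                            # NCG-S step
        w, V = np.linalg.eigh(hess(x))
        if w[0] > -eps2 / 2:         # no significant negative curvature: done
            break
        v = V[:, 0]                  # noisy negative-curvature direction
        x = x - (eps2 / L2) * np.sign(v @ g) * v
print(x)  # escapes the saddle, converging near the local minimum (0, sqrt(2))
```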
\begin{theorem}
\label{cor:SGSNCG}
Suppose Assumption~\ref{ass:1} holds,
$\epsilon_3\leq \epsilon_2/24$ and $\epsilon_4 \leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$.
With probability $1-\delta$, SNCG-1 terminates with at most $[1+\max\left(\frac{48L_2^2}{\epsilon_2^3},\frac{8L_1}{\epsilon_1^2}\right)\Delta]$ NCG-S steps,
and furthermore, each NCG-S step requires time in the order of $
\widetilde O\left(T_h|\mathcal S_2| + T_h|\mathcal S_2|^{3/4} \frac{\sqrt{L_1}}{\max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)^{1/2}} + |\mathcal S_1|T_g\right)$;
SNCG-2 terminates with at most $\frac{8L_1}{\epsilon_1^2}\Delta$ SG steps and at most $(1+\frac{48L_2^2}{\epsilon_2^3})\Delta$ NCG-S steps, and each NCG-S step requires time in the order of $
\widetilde O\left(T_h|\mathcal S_2| + T_h|\mathcal S_2|^{3/4} \frac{\sqrt{L_1}}{\epsilon_2^{1/2}} + |\mathcal S_1|T_g\right)$.
Upon termination, with probability $1-3\delta$, both algorithms return a solution $\mathbf{x}_{j_*}$ such that $\|\nabla f(\mathbf{x}_{j_*})\|\leq2\epsilon_1 $ and $\lambda_{\text{min}}\left(\nabla^2f(\mathbf{x}_{j_*})\right)\geq -2\epsilon_2.$
\end{theorem}
\vspace*{-0.1in}
{\bf Remark:} To analyze the time complexity, we can plug in the order of $|\mathcal{S}_1|$ and $|\mathcal{S}_2|$ as in Lemma~\ref{lem:gc} and Lemma~\ref{lem:Hc}. It is not difficult to show that when $\epsilon_2=\sqrt{\epsilon_1}$, the worst-case time complexities of these two algorithms are given in Table~\ref{tab:2}, where the result marked by $^*$ corresponds to SNCG-1. However, this worst-case result is computed by simply bounding $T_h/\sqrt{\max(\epsilon_2, \|\mathbf{g}(\mathbf{x})\|^\alpha)}$ by $T_h/\sqrt{\epsilon_2}$. In practice, before reaching a saddle point (i.e., $\|\mathbf{g}(\mathbf{x}_j)\|\geq \epsilon_1)$, the number of ISO calls for each NCG-S step in SNCG-1 can be less than that of each NCG-S step in SNCG-2. In addition, the NCG-S step in SNCG-1 can be faster than the SG step in SNCG-2 before reaching a saddle point. More importantly, the idea of competing between gradient descent and negative curvature descent and the adaptive noise parameter $\varepsilon$ for computing the noisy negative curvature can also be useful in other algorithms. For example, the Hessian descent (also known as negative curvature descent) in~\cite{reddi2017generic} can adopt the competing idea and use an adaptive noise level for computing a noisy negative curvature.
\section{Conclusion}
\vspace*{-0.1in}
In this paper, we have proposed new algorithms for stochastic non-convex optimization with a strong high-probability second-order convergence guarantee. To the best of our knowledge, the proposed stochastic algorithms are the first with a second-order convergence in {\it high probability} and a time complexity that is almost linear in the problem's dimensionality.
{
\bibliographystyle{abbrv}
\section{The Analysis of Main Mechanism} \label{sec:analysis}
We now present the analysis of the approximation ratio of $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$.
\paragraph{Notation.} To avoid confusion, throughout this section, we use ``$i$'' to index the iterations, ``$j$'' to index the auctions inside an iteration, ``$b$'' to index the bidders, and ``$\ell$'' to index the items.
We pick an optimal allocation $O = (O_1,\ldots,O_n)$ of items with supporting prices $\bm{q} = (q_1,\ldots,q_m)$ and denote by $\ensuremath{\mbox{\sc opt}}\xspace$ the welfare of this allocation.
We further define $O^{\star}$ as the restriction of $O$ to items with supporting prices in $\bm{q}$ that belong to the modified price tree $\TT^{\star}$ chosen by the mechanism.
Similarly, $\bq^{\star}$ is defined by zeroing out the price of items in $\bm{q}$ that are not allocated by $O^{\star}$ and leaving the rest unchanged.
We also define the following series of refinement of $\bm{q}$ based on the bidders in $N_1,\ldots,N_{\beta+1}$ and the choice of $\TT^{\star}$. For every $i \in [\beta+1]$,
$\bqi{i} = (\qi{i}_1,\ldots,\qi{i}_m)$ is defined so that for every item $\ell \in M$, $\qi{i}_\ell = 0$ iff $\ell$ is allocated in $O^{\star}$ to some bidder in $N_{1},\ldots,N_{i-1}$ or is not allocated at all,
and otherwise $\qi{i}_\ell = q_\ell$ for $q_\ell \in \bq^{\star}$.
Fix any iteration $i$ and the price vector $\bpricei{i} = (\pricei{i}_1,\ldots,\pricei{i}_m)$ obtained by the mechanism so far. We say an item $\ell \in M$ is \emph{correctly priced} in iteration $i$ iff
$\pricei{i}_\ell$ belongs to the same level-$i$ node in $\TT^{\star}$ as $\qi{i}_\ell$. Note that by construction, $\pricei{i}_\ell$ always strongly belongs to a node, and hence for any correctly priced item, we have
$\pricei{i}_\ell \leq \qi{i}_\ell$. We use $\Ci{i}$ to denote the set of all items that are correctly priced throughout \emph{all} iterations $1$ to $i$. Hence, under this definition, $O^{\star}=\Ci{1} \supseteq \Ci{2} \supseteq \ldots \supseteq \Ci{\beta+1}$.
The definition of the price tree ensures that by moving from $\Ci{1}$ towards $\Ci{\beta+1}$ we are learning the prices of correctly priced items more and more accurately.
\paragraph{Learnable-Or-Allocatable Lemma.} The goal of our mechanism is to learn a set $\Ci{\beta+1}$ such that $\bqi{\beta+1}(\Ci{\beta+1})$ is still sufficiently large compared to $\bq^{\star}(O^{\star})$.
Having reached such a state, we can run a fixed price auction
with price vector $\bpricei{\beta+1}/2$ with bidders $N_{\beta+1}$. Since for items in $\Ci{\beta+1}$, their price in $\bpricei{\beta+1}$ and $\bqi{\beta+1}$ are within a $\gamma$ factor of each other,
we can invoke Lemma~\ref{lem:fixed-price} and obtain an allocation with welfare at least $\gamma$ fraction of $\bqi{\beta+1}(\Ci{\beta+1})$.
Of course, in general, it is too much to expect that our mechanism can converge to a particular price vector $\bq^{\star}$ (think of a case where
there are many different optimal allocations with different prices; converging to one such price vector necessarily means not converging to the other ones).
The following lemma, which is the heart of the proof, however states that in each iteration, we can either ``learn'' the prices of most items more accurately than before, or we can already ``allocate'' the items
efficiently enough at the current prices.
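For intuition, the greedy fixed-price auction can be sketched as follows: bidders arrive one by one, and each takes a profit-maximizing bundle of the remaining items at the posted prices. For illustration only, we assume additive valuations, where the best bundle is simply every remaining item whose value exceeds its price; computing the demanded bundle for richer valuation classes is the bidder's own computation:

```python
def fixed_price_auction(values, prices):
    """Greedy fixed-price auction sketch (additive valuations).
    values[b][l]: bidder b's value for item l; prices[l]: posted price.
    Returns (allocation, welfare)."""
    remaining = set(range(len(prices)))
    allocation, welfare = [], 0.0
    for val_b in values:
        # profit-maximizing bundle under additivity: all items with value > price
        bundle = {l for l in remaining if val_b[l] > prices[l]}
        remaining -= bundle
        allocation.append(bundle)
        welfare += sum(val_b[l] for l in bundle)
    return allocation, welfare

values = [[3.0, 1.0, 0.0], [2.0, 2.0, 2.0]]
prices = [1.5, 1.5, 1.5]
alloc, welfare = fixed_price_auction(values, prices)
print(alloc, welfare)  # bidder 0 takes item 0; bidder 1 takes items 1 and 2
```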
\begin{Lemma}[\textbf{Learnable-Or-Allocatable Lemma}]\label{lem:main}
For any iteration $i \in [\beta]$, conditioned on any outcome of first $i-1$ iterations and choice of $\TT^{\star}$:
\begin{enumerate}[label=(\roman*)]
\item\label{item:main-learnable} either $\expect{\bqi{i+1}(\Ci{i+1})} \geq \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}$, where the expectation is over $N_i$;
\item\label{item:main-allocatable} or $\expect{\val{\Ai{i}_{j^{\star}}}} \geq \frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\alpha \cdot \beta^2}$, where the expectation is over $N_i$ and $j^{\star} \in [\alpha]$.
\end{enumerate}
We refer to the first case as \textbf{Learnable} and to the second one as \textbf{Allocatable}.
\end{Lemma}
We prove Lemma~\ref{lem:main} next and then use it to conclude the proof of Theorem~\ref{thm:main-mech}.
\subsection{Proof of Lemma~\ref{lem:main} -- Learnable-Or-Allocatable Lemma}\label{sec:main-proof}
We start with a high level overview. We prove this lemma in three steps:
\begin{enumerate}[label=($\roman*$)]
\item \emph{No underestimating prices:} We first show (Lemma~\ref{lem:random-arrival}) that for any of the auctions in this iteration, either most of the correctly priced items (with respect to this auction)
are sold, or this auction itself can result in a high welfare. This step allows us to argue that many of the items can be sold in these auctions at a price \emph{at least as high} as their true price, and hence
we will not underestimate their prices in this iteration. The proof of this part crucially uses the fact that the bidders arrive in a random order and follows the lines of a similar argument by Dobzinski~\cite{Dobzinski16}.
\item \emph{No overestimating prices:} We then show (Lemma~\ref{lem:no-overestimate}) that in these auctions only a small fraction of items may continue to get sold even past their correct price. Roughly speaking, this is because if
we could actually sell many items in auctions with higher prices, this implies that the true welfare of the auction is larger than $\ensuremath{\mbox{\sc opt}}\xspace$, a contradiction. This part relies on
the ``price gap'' we introduced in price trees by picking $\TT^e$ or $\TT^o$ (instead of $\ensuremath{\mathcal{T}}$ itself).
\item \emph{Handling removed bidders:} Finally, in Claim~\ref{clm:qi-qi+1} we argue that even if we ignore the items for bidders in $N_i$ (as the mechanism no longer considers these bidders), the \emph{remaining} correctly priced items
still have a substantial contribution. This part of the proof uses the fact that we only consider a small random subset $N_i$ of the remaining bidders.
\end{enumerate}
We now present the formal proof.
Throughout the proof, we fix $i \in [\beta]$ and condition on the outcome of the first $i-1$ iterations and the choice of $\TT^{\star}$.
Conditioning on the outcome of the first $i-1$ iterations fixes the set of bidders $N_1,\ldots,N_{i-1}$ but bidders in $N_i$ are chosen randomly from the remaining bidders.
Fixing the bidders $N_1,\ldots,N_{i-1}$ also fixes the price vector $\bqi{i}$. This conditioning also fixes the level-$i$ price vector $\bpricei{i}$ and its canonical level-$(i+1)$ price vectors
$\bpricei{i}_1,\ldots,\bpricei{i}_{\alpha}$. The set $\Ci{i}$ of items that have been correctly priced so far is also fixed.
We partition the correctly priced items $\Ci{i}$ into $\alpha$ sets $\Di{i}_1,\ldots,\Di{i}_\alpha$, defined as follows. For an item $\ell \in \Ci{i}$, let $z_\ell$ denote the node in level $i$ of $\ensuremath{\mathcal{T}}$ that both $\pricei{i}_\ell$ and $\qi{i}_\ell$
belong to. Suppose the child-node of $z_\ell$ to which $\qi{i}_\ell$ belongs is $z_{\ell,j}$ for some $j \in [\alpha]$. We place item $\ell$ in $\Di{i}_j$ in this case. Note that under this partitioning, the level $(i+1)$ node
$z_{\ell,j}$ to which $\qi{i}_\ell$ belongs is the same node that $p_\ell \in \bpricei{i}_{j}$ (strongly) belongs to; thus, for items in $\Di{i}_j$, $\bpricei{i}_j \leq \bqi{i}$.
In the following lemma, we use the construction of $\ensuremath{\textnormal{\texttt{Partition}}}$ to argue that for any $j \in [\alpha]$, we either allocate most
items in $\Di{i}_j$ in the fixed price auction with price vector $\bpricei{i}_j$ or otherwise this auction is obtaining a large welfare.
\begin{lemma}\label{lem:random-arrival}
For any $j \in [\alpha]$, we have
$20\beta \cdot \expect{\val{\Ai{i}_j}} + \expect{\bqi{i}(\Ai{i}_j \cap \Di{i}_j)} \geq \bqi{i}(\Di{i}_j). $
\end{lemma}
\begin{proof}
We define $\ensuremath{N_{\geq i}}:= N \setminus (N_1 \cup \ldots \cup N_{i-1})$. In the following, all expectations are taken over the choice of $N_i$ from $\ensuremath{N_{\geq i}}$. Recall that in $\ensuremath{\textnormal{\texttt{Partition}}}$, $N_i$ is chosen from $\ensuremath{N_{\geq i}}$
by picking a random permutation and picking the first $\card{\ensuremath{N_{\geq i}}}/(10\beta)$ bidders in $N_i$.
Define $O^{D}_{N_i}$ as the restriction of $O^{\star}$ to items in $\Di{i}_j$ and bidders in $N_i$. Similarly, define $O^{D}_{\ensuremath{N_{> i}}}$ as the restriction of $O$ to items in $\Di{i}_j$ and bidders in $\ensuremath{N_{> i}}:= \ensuremath{N_{\geq i}} \setminus N_i$.
Note that $\bqi{i}(\Di{i}_j) = \bqi{i}(O^{D}_{N_i}) + \bqi{i}(O^{D}_{\ensuremath{N_{> i}}})$ (recall that $\bqi{i}$ gives price $0$ to items not allocated to bidders in $\ensuremath{N_{\geq i}}$).
The proof of this lemma is a simple combination of the following two claims.
\begin{claim}\label{clm:random-arrival-1}
Deterministically, $\val{\Ai{i}_j} \geq \bqi{i}(O^{D}_{N_i} \setminus \Ai{i}_j)/2$.
\end{claim}
\begin{proof}
For any bidder $b \in N_i$, when it was bidder $b$'s turn to pick a set in allocation $\Ai{i}_j$ of $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N_i,M,\bpricei{i}_j/2)$, $b$ could have picked $O^{D}_b \setminus \Ai{i}_j \subseteq O^{D}_{N_i}$ and obtained a
profit of
\begin{align*}
v_b(O^{D}_b \setminus \Ai{i}_j) - \bpricei{i}_j(O^{D}_b \setminus \Ai{i}_j)/2 ~~ \geq ~~ \bqi{i}(O^{D}_b \setminus \Ai{i}_j) - \bpricei{i}_j(O^{D}_b \setminus \Ai{i}_j)/2 ~~ \geq ~~ \bqi{i}(O^{D}_b \setminus \Ai{i}_j)/2.
\end{align*}
The first inequality is because $\bqi{i}$ is a supporting price for $O^{D}_b \setminus \Ai{i}_j$ and the second one is because $\bpricei{i}_j \leq \bqi{i}$ on the items in $\Di{i}_j$.
As bidder $b$ maximizes the profit by picking $\Ai{i}_{j,b}$, we have
\begin{align*}
\val{\Ai{i}_j} \quad = \quad \sum_{b \in N_i} v_b(\Ai{i}_{j,b}) \quad \geq \quad \sum_{b \in N_i} \bqi{i}(O^{D}_b \setminus \Ai{i}_j)/2 \quad = \quad \bqi{i}(O^{D}_{N_i} \setminus \Ai{i}_j)/2. \Qed{Claim~\ref{clm:random-arrival-1}}
\end{align*}
\end{proof}
\begin{claim}\label{clm:random-arrival-2}
By randomness of choice of $N_i$ from $\ensuremath{N_{\geq i}}$, $\expect{\val{\Ai{i}_j}} \geq (\frac{1}{10\beta}) \cdot \expect{\bqi{i}(O^{D}_{\ensuremath{N_{> i}}} \setminus \Ai{i}_j)/2}$.
\end{claim}
\begin{proof}
For the purpose of this proof, it helps to think of picking $N_i$ alternatively by repeating the following for $n_i:=\card{N_i}$ \emph{steps}:
sample a bidder uniformly at random from $\ensuremath{N_{\geq i}}$, include it in $N_i$, and remove it from consideration for sampling from now on. It is immediate that the distribution of $N_i$ is the same under this and the original definition.
For every $k \in [n_i]$, define $N_{i,k} \subseteq N_i$ as the set $N_i$ constructed \emph{before} step $k$ and $\ensuremath{O^{D}_{\geq k}}$ as the restriction of $O^{\star}$ to $\Di{i}_j$ and $\ensuremath{N_{\geq i}} \setminus N_{i,k}$. Thus, $\ensuremath{O^{D}_{\geq k}} \supseteq O^{D}_{\ensuremath{N_{> i}}}$ and
hence $\bqi{i}(\ensuremath{O^{D}_{\geq k}}) \geq \bqi{i}(O^{D}_{\ensuremath{N_{> i}}})$ for every $k$. Recall that $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace$ operates in a greedy manner and hence allocation of bidders participating in the auction before step $k$ are already determined by
step $k$. Define $\ensuremath{A_{< k}}$ as the set of items allocated by auction \emph{before} step $k$ and let $u_k := v_b(\Ai{i}_{j,b})$ where $b$ is the chosen bidder in step $k$
and $\Ai{i}_{j,b}$ is the allocation $b$ will get by participating in $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N_i,M,\bpricei{i}_j/2)$.
We first prove that $u_k \geq \bqi{i}(\ensuremath{O^{D}_{\geq k,b}} \setminus \ensuremath{A_{< k}})/2$. This holds for precisely the same reason as in Claim~\ref{clm:random-arrival-1}: $b$ could have chosen $\ensuremath{O^{D}_{\geq k,b}} \setminus \ensuremath{A_{< k}}$ but decided to pick another set.
Define $n_{\geq i} := \card{\ensuremath{N_{\geq i}}}$. Recall that $b$ is chosen uniformly at random from the $(n_{\geq i}-k+1)$ bidders at step $k$ and hence,
\begin{align}
\Exp_{b}\bracket{u_k} \quad \geq \quad \frac{1}{n_{\geq i}-k+1} \cdot \frac{\bqi{i}(\ensuremath{O^{D}_{\geq k}} \setminus \ensuremath{A_{< k}})}{2} \quad \geq \quad \frac{1}{n_{\geq i}} \cdot \frac{\bqi{i}(\ensuremath{O^{D}_{\geq k}} \setminus \ensuremath{A_{< k}})}{2}. \label{eq:rand1}
\end{align}
We can thus write,
\begin{align*}
\Exp_{N_i}\bracket{\val{\Ai{i}_j}} &= \sum_{k=1}^{n_i} \Exp_{N_{i,k}}\Exp_{b}\bracket{u_k \mid N_{i,k}} \\
&\geq \frac{1}{2n_{\geq i}} \cdot \sum_{k=1}^{n_i} \Exp_{N_{i,k}}\bracket{\bqi{i}(\ensuremath{O^{D}_{\geq k}} \setminus \ensuremath{A_{< k}}) \mid N_{i,k}} \tag{by Eq~(\ref{eq:rand1})}\\
&\geq \frac{1}{2n_{\geq i}} \cdot \sum_{k=1}^{n_i} \Exp_{N_{i}}\bracket{\bqi{i}(O^{D}_{\ensuremath{N_{> i}}} \setminus \Ai{i}_j)} \tag{as $O^{D}_{\ensuremath{N_{> i}}} \subseteq \ensuremath{O^{D}_{\geq k}}$ and $\Ai{i}_j \supseteq \ensuremath{A_{< k}}$ always} \\
&= \frac{n_i}{n_{\geq i}} \cdot \Exp_{N_{i}}\bracket{\bqi{i}(O^{D}_{\ensuremath{N_{> i}}} \setminus \Ai{i}_j)/2}
\quad = \quad \left(\frac{1}{10\beta}\right) \cdot \expect{\bqi{i}(O^{D}_{\ensuremath{N_{> i}}} \setminus \Ai{i}_j)/2}. \Qed{Claim~\ref{clm:random-arrival-2}}
\end{align*}
\end{proof}
We can now conclude the proof of Lemma~\ref{lem:random-arrival} as follows. By Claims~\ref{clm:random-arrival-1} and~\ref{clm:random-arrival-2},
\begin{align*}
20\beta \cdot \expect{\val{\Ai{i}_j}} &\geq \expect{\bqi{i}(O^{D}_{N_i} \setminus \Ai{i}_j) + \bqi{i}(O^{D}_{\ensuremath{N_{> i}}} \setminus \Ai{i}_j)} \\
&= \expect{\bqi{i}(\Di{i}_j \setminus \Ai{i}_j)} \quad = \quad \bqi{i}(\Di{i}_j) - \expect{\bqi{i}(\Di{i}_j \cap \Ai{i}_j)}.
\end{align*}
This concludes the proof. \Qed{Lemma~\ref{lem:random-arrival}}
\end{proof}
The quantity $\Ai{i}_j \cap \Di{i}_j$ bounded in Lemma~\ref{lem:random-arrival} is closely related to the set of correctly priced items at iteration $i+1$, namely $\Ci{i+1}$. The only
difference between the two sets is that some items in $\Ai{i}_j \cap \Di{i}_j$ may also be allocated in $\Ai{i}_k$ for some $k > j$, in which case $\ensuremath{\textnormal{\texttt{PriceUpdate}}}\xspace$ assigns them a larger price.
In the following, we prove that the contribution of such items cannot be too large.
\begin{lemma}\label{lem:no-overestimate}
We have $\bqi{i}(\Ci{i+1}) \geq \sum_{j=1}^{\alpha}\bqi{i}(\Ai{i}_j \cap \Di{i}_j) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{10\beta}.$
\end{lemma}
\begin{proof}
By definition of $\ensuremath{\textnormal{\texttt{PriceUpdate}}}\xspace$, we know that items in $\Ai{i}_j \cap \Di{i}_j$ join $\Ci{i+1}_j$ iff they do not belong to any $\Ai{i}_k$ for $k > j$.
For each $k > j$, let $\OE_k$ be the set of items in $\Di{i}_j \cap \Ai{i}_j$ that are also allocated in $\Ai{i}_k$. Then, $\OE_{j+1} \cup \ldots \cup\OE_{\alpha}$ forms the set of all items in $\Di{i}_j$ whose price the mechanism \emph{overestimates} in iteration $i$. We bound the contribution of such items.
Fix some $k > j$. Consider the level $i+1$ of the price tree $\TT^{\star}$. There are $\alpha^i$ nodes in this level to which the $\bqi{i}$-price of an item in $\Di{i}_j$ can belong. Let $oe_{k,1},\ldots,oe_{k,\alpha^i}$ be the number of items corresponding to these
nodes that were allocated in $\Ai{i}_k$ as well. Hence, $\card{\OE_{k}} = \sum_{\ell=1}^{\alpha^i} oe_{k,\ell}$. Moreover, let $p_{k,1},\ldots,p_{k,\alpha^i}$ be the \emph{maximum} prices that belong to these nodes.
Finally, let $p'_{k,1},\ldots,p'_{k,\alpha^i}$ be the prices at which these items were sold in $\Ai{i}_k$. Because $\TT^{\star}$ is either $\TT^o$ or $\TT^e$, we have $\gamma^{k-j} \cdot p_{k,\ell} \leq p'_{k,\ell}$ (there is a factor $\gamma$ gap between the maximum
price of any bin $B_x$ and the minimum price of $B_{x+2}$).
Since all the items in $\OE_{k}$ are sold in a single application of $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace$, we know that there exists an allocation with supporting prices $p'_{k,\ell}$ for $oe_{k,\ell}$ items for all $\ell \in [\alpha^i]$.
As such,
\begin{align}
\ensuremath{\mbox{\sc opt}}\xspace \quad \geq \quad \sum_{\ell=1}^{\alpha^i} p'_{k,\ell} \cdot oe_{k,\ell} \quad \geq \quad \gamma^{k-j} \cdot \sum_{\ell=1}^{\alpha^i} p_{k,\ell} \cdot oe_{k,\ell} \quad \geq \quad \gamma^{k-j} \cdot \bqi{i}(\OE_k), \label{eq:no-overestimate}
\end{align}
by definition of $p_{k,\ell}$ as the maximum price inside the nodes of $\TT^{\star}$ to which the $\bqi{i}$-prices of items in $\OE_k$ belong. Summing up Eq~(\ref{eq:no-overestimate}) over all choices of $k > j$, we have
\begin{align*}
\sum_{k=j+1}^{\alpha} \bqi{i}(\OE_k) \quad \leq \quad \sum_{k=j+1}^{\alpha}\frac{1}{\gamma^{k-j}} \cdot \ensuremath{\mbox{\sc opt}}\xspace \quad \leq \quad \frac{2}{\gamma} \cdot \ensuremath{\mbox{\sc opt}}\xspace.
\end{align*}
Here, the last inequality uses the geometric-series bound $\sum_{t \geq 1} \gamma^{-t} = \frac{1}{\gamma-1} \leq \frac{2}{\gamma}$, which holds as $\gamma \geq 2$. Finally, as there are $\alpha$ choices for $j$, we have
\begin{align*}
\paren{\sum_{j=1}^{\alpha}\bqi{i}(\Ai{i}_j \cap \Di{i}_j)} - \bqi{i}(\Ci{i+1}) \quad \leq \quad \sum_{j=1}^{\alpha} \frac{2}{\gamma} \cdot \ensuremath{\mbox{\sc opt}}\xspace \quad = \quad \frac{2\alpha}{\gamma} \cdot \ensuremath{\mbox{\sc opt}}\xspace \quad \leq \quad \frac{\ensuremath{\mbox{\sc opt}}\xspace}{10\beta},
\end{align*}
by the choice of $\gamma \geq 20\alpha\beta$ in Eq~(\ref{eq:equations}). \Qed{Lemma~\ref{lem:no-overestimate}}
\end{proof}
So far we only considered prices with respect to $\bqi{i}$. We now extend the bounds to $\bqi{i+1}$, for which we need to remove the correctly priced items corresponding to bidders in $N_i$.
\begin{claim}\label{clm:qi-qi+1}
We have $\expect{\bqi{i+1}(\Ci{i+1})} \geq \expect{\bqi{i}(\Ci{i+1})} - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{10\beta}$.
\end{claim}
\begin{proof}
For a bidder $b$, we write $\Ci{i}_b$ for the set of items in $\Ci{i}$ that are allocated to $b$ in $O^{\star}$ (i.e., that take their price in $\bq^{\star}$ because of bidder $b$); $\Ci{i+1}_b$ is defined similarly.
We can write,
\begin{align*}
\expect{\bqi{i+1}(\Ci{i+1})} &~=~ \expect{\bqi{i}(\Ci{i+1}) - \sum_{b \in N_i} \bqi{i}(\Ci{i+1}_b)} ~ \geq ~ \expect{\bqi{i}(\Ci{i+1}) - \sum_{b \in N_i} \bqi{i}(\Ci{i}_b)},
\intertext{because $\Ci{i+1} \subseteq \Ci{i}$. Since each bidder joins $N_i$ with probability $(1/10\beta)$, this implies }
\expect{\bqi{i+1}(\Ci{i+1})} &~\geq~ \expect{\bqi{i}(\Ci{i+1})} - \frac{1}{10\beta} \cdot \sum_{b} \bqi{i}(\Ci{i}_b) ~ \geq ~ \expect{\bqi{i}(\Ci{i+1})} - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{10\beta}. \Qed{Claim~\ref{clm:qi-qi+1}}
\end{align*}
\end{proof}
We now have all the ingredients needed to prove Lemma~\ref{lem:main}.
\begin{proof}[Proof of Lemma~\ref{lem:main}]
By applying Lemma~\ref{lem:random-arrival} to every $j \in [\alpha]$, we have
\begin{align}
20\beta \cdot \sum_{j=1}^{\alpha} \expect{\val{\Ai{i}_j}} + \expect{\sum_{j=1}^{\alpha}\bqi{i}(\Ai{i}_j \cap \Di{i}_j)} \geq \sum_{j=1}^{\alpha}\bqi{i}(\Di{i}_j). \label{eq:lhs-rhs}
\end{align}
The RHS above is simply $\bqi{i}(\Ci{i})$. The second term on the LHS can be upper bounded using Lemma~\ref{lem:no-overestimate} and Claim~\ref{clm:qi-qi+1}:
\begin{align*}
\expect{\sum_{j=1}^{\alpha}\bqi{i}(\Ai{i}_j \cap \Di{i}_j)} \quad \leq \quad \expect{\bqi{i}(\Ci{i+1})} + \frac{\ensuremath{\mbox{\sc opt}}\xspace}{10\beta}
\quad \leq \quad \expect{\bqi{i+1}(\Ci{i+1})} + \frac{2\cdot\ensuremath{\mbox{\sc opt}}\xspace}{10\beta}.
\end{align*}
Plugging these bounds into Eq~(\ref{eq:lhs-rhs}), we obtain
\begin{align}
20\beta \cdot \sum_{j=1}^{\alpha} \expect{\val{\Ai{i}_j}} + \expect{\bqi{i+1}(\Ci{i+1})} \geq \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{5\beta} \label{eq:cases}.
\end{align}
Now let us consider two cases. First suppose,
\begin{align}
\sum_{j=1}^{\alpha} \expect{\val{\Ai{i}_j}} \geq \frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\beta^2}. \label{eq:cont}
\end{align}
In this case, $\expect{\val{\Ai{i}_{j^{\star}}}}$ for $j^{\star}$ chosen uniformly at random from $[\alpha]$ is at least $\frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\alpha\beta^2}$, hence satisfying item~\ref{item:main-allocatable} of the lemma (Allocatable case).
We now consider the other case, where the LHS of Eq~\eqref{eq:cont} is smaller than the RHS. Plugging this bound into Eq~\eqref{eq:cases} implies that
\begin{align*}
\expect{\bqi{i+1}(\Ci{i+1})} \quad \geq \quad \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{5\beta} - 20\beta \cdot \frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\beta^2} \quad >\quad \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}.
\end{align*}
This satisfies item~\ref{item:main-learnable} of the lemma (Learnable case), concluding the proof. \Qed{Lemma~\ref{lem:main}}
\end{proof}
\input{thm-main-mech}
\section{Missing Details}\label{app:missing}
\subsection{Formal Definitions of Mechanisms and Truthfulness}\label{app:truthful}
Let $\ensuremath{\mathcal{V}}$ be a class of valuation functions defined over $M$, say, all submodular functions $2^M \rightarrow \ensuremath{\mathbb{R}}^+$, and $\AA$ be the set of all possible allocations of $M$ to $n$ bidders.
A deterministic mechanism for combinatorial auctions is a pair $(f,\bm{\price})$ where $f: \ensuremath{\mathcal{V}}^n \rightarrow \AA$ (representing the allocation to bidders)
and $\bm{\price}=(p_1,\ldots,p_n)$ where $p_i : \ensuremath{\mathcal{V}}^n \rightarrow \ensuremath{\mathbb{R}}^+$ (representing the price charged to bidder $i$). A randomized mechanism is simply a probability distribution over deterministic mechanisms.
\begin{definition}[Truthfulness and Universal Truthfulness]\label{def:truthful}
A deterministic mechanism $(f,\bm{\price})$ is \emph{truthful} iff for all $i \in N$, $v_i,v'_i \in \ensuremath{\mathcal{V}}$, and $v_{-i} \in \ensuremath{\mathcal{V}}^{n-1}$, we have
\[
v_i(f(v_i,v_{-i})_i) - p_i(v_i,v_{-i}) \geq v_i(f(v'_i,v_{-i})_i) - p_i(v'_i,v_{-i}).
\]
A randomized mechanism is \emph{universally truthful} iff it is a distribution over truthful mechanisms.
\end{definition}
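To make the definition concrete, the following small Python snippet (an illustration we add here; the single-bidder posted-price mechanism and all numbers are our own toy choices, not part of any mechanism in the paper) checks truthfulness by brute force: on a grid of true values and misreports, reporting truthfully never yields lower utility.

```python
def posted_price_outcome(report, p):
    """Single-item posted-price mechanism: the bidder wins the item iff her
    reported value is at least the posted price p, and pays p upon winning."""
    wins = report >= p
    return wins, (p if wins else 0.0)

def utility(true_value, report, p):
    """Quasi-linear utility v_i(f(v')_i) - p_i(v'), as in the definition above."""
    wins, payment = posted_price_outcome(report, p)
    return (true_value if wins else 0.0) - payment

# Brute-force truthfulness check over a grid of true values and misreports.
grid = [x / 4 for x in range(21)]  # 0, 0.25, ..., 5
assert all(utility(v, v, 2.0) >= utility(v, w, 2.0) for v in grid for w in grid)
```

This is the one-bidder special case of the fixed-price auctions used throughout the paper, and the same argument (a misreport can only change whether the bidder wins, never the price) is what makes those auctions truthful.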
We note that besides universal truthfulness, the notion of \emph{truthfulness-in-expectation} has also been considered for randomized mechanisms; such mechanisms only guarantee that bidding truthfully maximizes the \emph{expected} profit; see, e.g.,~\cite{LaviS05,DughmiV11,DughmiRY11} and references therein.
This is a much weaker guarantee than the universal truthfulness we consider in this paper. In particular, such mechanisms are only applicable when bidders are risk neutral and have no information about the outcomes of the random coin flips before they need to act;
see~\cite[Section 1.2]{DobzinskiNS06} for more details.
\subsection{Proof of Lemma~\ref{lem:fixed-price} -- Fixed-Price Auctions}\label{app:lem-fixed-price}
\begin{lemma*}[Restatement of Lemma~\ref{lem:fixed-price}]
Let $A:=\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N,M,\bm{\price})$ and $\delta <1/2$ be a parameter.
Suppose $O$ is any allocation with supporting prices $\bm{q}$ and $M^* \subseteq M$ is the set of items $j$ with $\delta \cdot q_j \leq p_j < \frac{1}{2} \cdot q_j$.
Then, $\val{A} \geq \delta \cdot \bm{q}(M^*)$.
\end{lemma*}
\begin{proof}
Define the allocation $O^{\star} = (O^{\star}_1,\ldots,O^{\star}_n)$ as the restriction of $O$ to $M^*$. Define $\overline{A}_i = O^{\star}_i \setminus A$ for every $i \in N$ and $\overline{A} := \overline{A}_1 \cup \ldots \cup \overline{A}_n$.
Bidder $i$ could have chosen $\overline{A}_i$ in \ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace but decided to pick another bundle $A_i$ instead. This implies that:
\begin{align}
v_i(A_i) - \bm{\price}(A_i) \geq v_i(\overline{A}_i) - \bm{\price}(\overline{A}_i). \label{eq:fixed-price-1}
\end{align}
We now use Eq~(\ref{eq:fixed-price-1}) to prove the lemma. We have,
\begin{align*}
\val{A} &= \sum_{i=1}^{n} v_i(A_i) \quad = \quad \bm{\price}(A) + \sum_{i=1}^{n} (v_i(A_i) - \bm{\price}(A_i)) \\
&\geq \bm{\price}(A) + \sum_{i=1}^{n} (v_i(\overline{A}_i) - \bm{\price}(\overline{A}_i)) \quad \geq \quad \bm{\price}(A) + \sum_{i=1}^{n} (\bm{q}(\overline{A}_i) - \bm{\price}(\overline{A}_i))
\end{align*}
by Eq~(\ref{eq:fixed-price-1}) and since $\bm{q}$ are supporting prices for $O$ and $\overline{A}_i \subseteq O^{\star}_i \subseteq O_i$. Now, using that $p_j < q_j/2$ for all $j \in \overline{A} \subseteq O^{\star} = M^*$, we get
\begin{align*}
\val{A} &\geq \bm{\price}(A) + \sum_{i=1}^{n} \bm{\price}(\overline{A}_i) \\
&\geq \bm{\price}(O^{\star}) \quad \geq \quad \delta \cdot \bm{q}(O^{\star}),
\end{align*}
where the last two inequalities use $O^{\star} \subseteq \overline{A} \cup A$ and $\overline{A} \cap A = \emptyset$, and that $p_j \geq \delta \cdot q_j$ for all $j \in O^{\star}$.
This concludes the proof as $\bm{q}(O^{\star}) = \bm{q}(M^*)$ by definition.
\end{proof}
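For intuition, here is a minimal Python sketch of the greedy fixed-price auction analyzed above (our own illustration; the valuations, posted prices, and the brute-force demand oracle are hypothetical and only suitable for tiny instances). Each bidder in turn takes a profit-maximizing bundle of the \emph{remaining} items at the posted prices, which is exactly the behavior underlying Eq~(\ref{eq:fixed-price-1}).

```python
from itertools import chain, combinations

def subsets(items):
    """All subsets of a collection of items (brute-force demand oracle; fine for tiny m)."""
    s = sorted(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def fixed_price_auction(valuations, items, price):
    """Greedy fixed-price auction: bidders arrive one by one, and each takes a
    profit-maximizing bundle of the remaining items at the posted prices."""
    remaining = set(items)
    allocation = []
    for v in valuations:
        best = max(subsets(remaining),
                   key=lambda S: v(frozenset(S)) - sum(price[j] for j in S))
        allocation.append(frozenset(best))
        remaining -= set(best)
    return allocation

# Tiny additive instance: the optimal allocation gives item 'a' to bidder 0 and
# item 'b' to bidder 1, with supporting prices q_a = 4 and q_b = 3.
items = ['a', 'b']
valuations = [
    lambda S: sum({'a': 4, 'b': 1}[j] for j in S),
    lambda S: sum({'a': 2, 'b': 3}[j] for j in S),
]
price = {'a': 1.5, 'b': 1.0}  # hypothetical posted prices with p_j < q_j / 2
alloc = fixed_price_auction(valuations, items, price)
welfare = sum(v(A) for v, A in zip(valuations, alloc))
```

On this instance, $\delta = 1/3$ satisfies $\delta \cdot q_j \leq p_j < q_j/2$ for both items, so $M^* = \set{a,b}$ and the lemma guarantees welfare at least $\delta \cdot \bm{q}(M^*) = 7/3$; the greedy run above attains welfare $7$.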
\section{Concluding Remarks and Open Problems}\label{sec:conc}
We gave a randomized, computationally-efficient, and universally truthful mechanism for combinatorial auctions with submodular (even XOS) bidders that achieves an $O((\log\log{m})^3)$-approximation.
This reduces the gap between the approximation ratio achievable by truthful mechanisms vs arbitrary algorithms for this problem by an exponential factor, from $\mbox{\rm poly}(\log{m})$ to $\mbox{\rm poly}(\log\log{m})$.
The obvious question left open by our work is whether this gap can be improved further. We do not believe in any way that our $O((\log\log{m})^3)$ approximation is the best possible\footnote{Indeed, using a slightly more nuanced argument, our bounds can be
improved to $O(\frac{(\log\log{m})^3}{\log\log\log{m}})$; however, as this $\Theta(\log\log\log{m})$ improvement is minor, for the sake of clarity we use the slightly weaker analysis in the paper.}.
On the other hand, the limit of our approach seems to be an $\Omega(\log\log{m})$ approximation. It is a fascinating open question whether one can improve the approximation factor all the way down to a constant; even improving the
ratio of our mechanism to $O(\log\log{m})$ already seems challenging. On the lower bound front, proving any separation between the power of truthful mechanisms and algorithms when
access to the input is via arbitrary queries, namely, in the communication complexity setting, is also very interesting.
\section{Removing the Extra Assumptions}\label{sec:end-mech}
We now show how to remove Assumption~\ref{assumption1} and prove our main result in its full generality.
We shall emphasize that the main contribution of our work is in establishing Theorem~\ref{thm:main-mech}; the remaining ideas here are standard for the most part and appear in similar forms in previous work on truthful mechanisms, e.g. in~\cite{DobzinskiNS06,Dobzinski07,Dobzinski16}. We present them for completeness.
Let $O=(O_1,\ldots,O_n)$ be an optimal allocation with welfare $\ensuremath{\mbox{\sc opt}}\xspace$ and supporting prices $\bm{q}$.
In order to remove Assumption~\ref{assumption1}, we find prices $\psi_{\min}$ and $\psi_{\max}$ such that $\psi_{\max}/\psi_{\min} = O(m^2)$, and for most items allocated by $O$, their price in $\bm{q}$ belongs to the range $[\psi_{\min} : \psi_{\max}]$; here, ``most
items'' means that the total $\bm{q}$-price of these items is a constant fraction of $\ensuremath{\mbox{\sc opt}}\xspace$. Having found such prices, we can then run $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ from Section~\ref{sec:mech} and apply
Theorem~\ref{thm:main-mech} to finalize the proof (strictly speaking, Assumption~\ref{assumption1} stated that \emph{all} prices in all valuations of bidders need to be in range $[\psi_{\min} : \psi_{\max}]$; however, as is evident from the proof of
Theorem~\ref{thm:main-mech}, we only applied this assumption to prices in $\bm{q}$).
To find $\psi_{\min}$ and $\psi_{\max}$, we partition $N$ into two (almost) equal-size groups $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ and $\ensuremath{N_{\textnormal{\textsf{mech}}}}$ randomly. We run any constant-factor approximation algorithm (and not a truthful mechanism) for welfare maximization
with bidders in $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ and items $M$, say, the algorithm of~\cite{LehmannLN06}, to compute a value $\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}$ which is an $O(1)$-approximation to $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{stat}}}}$, namely, the value of the welfare-maximizing allocation for $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ and $M$.
We completely ignore the allocation of these bidders and instead only set $\psi_{\min} := \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}/m^2$ and $\psi_{\max} := \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}} \cdot \Theta(1)$. We then run $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace(\ensuremath{N_{\textnormal{\textsf{mech}}}},M)$ with $\psi_{\min}$ and $\psi_{\max}$, and return
the resulting allocation to bidders in $\ensuremath{N_{\textnormal{\textsf{mech}}}}$.
The intuition behind the approach is that because we partitioned $N$ into two \emph{random} groups, $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{stat}}}}$, and consequently $\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}$, should be an $O(1)$-approximation to $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}$, namely, the value of the welfare-maximizing allocation for bidders in
$\ensuremath{N_{\textnormal{\textsf{mech}}}}$ (this intuition is not quite correct, but let us ignore that for the moment). Thus, in an optimal allocation of $M$ to $\ensuremath{N_{\textnormal{\textsf{mech}}}}$, no item has price more than $\psi_{\max}$, and the total contribution of items with price smaller than $\psi_{\min}$ is
negligible, so we can safely ignore them. This in turn implies that Assumption~\ref{assumption1} holds and, by Theorem~\ref{thm:main-mech}, $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace(\ensuremath{N_{\textnormal{\textsf{mech}}}},M)$ outputs an allocation with
welfare $\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{mech}}}}$ such that $\expect{\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{mech}}}}} \geq \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}} \cdot \Omega\paren{\frac{1}{(\log\log{m})^3}}$. Moreover, by the choice of $\ensuremath{N_{\textnormal{\textsf{mech}}}}$, we have $\expect{\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}} = \ensuremath{\mbox{\sc opt}}\xspace/2$. Thus, this should give us
an $O((\log\log{m})^3)$ approximation in expectation.
As stated earlier, there is a slight problem with the above intuition. One cannot in general guarantee that by partitioning the bidders into two parts randomly, each part will have roughly the same contribution to the value of $\ensuremath{\mbox{\sc opt}}\xspace$. In particular, if there exists a
bidder with a much higher contribution to $\ensuremath{\mbox{\sc opt}}\xspace$ than the rest, the above approach is bound to fail. So we take care of this case separately as follows: With probability half, we simply run a second-price auction on the grand bundle $M$ and
sell it to the highest bidder entirely. With the remaining half probability, we run the above procedure. This ensures that if such a bidder exists, we get her contribution with probability half. Otherwise, with probability half, we can run the previous analysis.
\subsection{The Final Mechanism}\label{sec:final}
Our final mechanism is as follows.
\begin{tbox}
\underline{$\ensuremath{\textnormal{\textsf{FinalMechanism}}}\xspace(N,M)$}
\begin{enumerate}
\item With probability $1/2$, run a second-price auction on grand bundle $M$ with all bidders, return the resulting allocation, and terminate. With the remaining probability, continue.
\item Pick $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ by sampling each bidder in $N$ independently w.p. $1/2$. Let $\ensuremath{N_{\textnormal{\textsf{mech}}}} := N \setminus \ensuremath{N_{\textnormal{\textsf{stat}}}}$.
\item\label{line:alg} Run the $2$-approximation algorithm of~\cite{LehmannLN06} on items $M$ and bidders $\ensuremath{N_{\textnormal{\textsf{stat}}}}$. Let $\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}$ be the welfare of the returned allocation. Let $\psi_{\min} := \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}/m^2$ and $\psi_{\max} := 8 \cdot \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}$.
\item\label{line:mech} Run $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace(\ensuremath{N_{\textnormal{\textsf{mech}}}},M)$ with $\psi_{\min}$ and $\psi_{\max}$, and return the allocation.
\end{enumerate}
\end{tbox}
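To summarize the control flow, here is a compact Python sketch of $\ensuremath{\textnormal{\textsf{FinalMechanism}}}\xspace$ (our own illustration with toy additive valuations; \texttt{price\_learning\_mechanism} is a stand-in for $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$, which we do not reproduce here). The greedy routine is the $2$-approximation of~\cite{LehmannLN06}: items are considered one by one and each is given to the bidder with the largest marginal value.

```python
import random

def greedy_welfare(valuations, items):
    """Greedy 2-approximation for submodular bidders (Lehmann-Lehmann-Nisan):
    give each item, in turn, to the bidder with the largest marginal value.
    Returns the welfare of the resulting allocation."""
    if not valuations:
        return 0.0
    bundles = [frozenset() for _ in valuations]
    for j in items:
        gains = [v(B | {j}) - v(B) for v, B in zip(valuations, bundles)]
        i = max(range(len(valuations)), key=gains.__getitem__)
        bundles[i] |= {j}
    return sum(v(B) for v, B in zip(valuations, bundles))

def final_mechanism(valuations, items, price_learning_mechanism, rng=random):
    """Control flow of FinalMechanism; `price_learning_mechanism(bidders, items,
    psi_min, psi_max)` stands in for the actual price-learning mechanism."""
    n, m = len(valuations), len(items)
    if rng.random() < 0.5:
        # Second-price auction on the grand bundle M (only the winner is shown).
        winner = max(range(n), key=lambda i: valuations[i](frozenset(items)))
        return {winner: frozenset(items)}
    stat = [i for i in range(n) if rng.random() < 0.5]
    mech = [i for i in range(n) if i not in stat]
    a_stat = greedy_welfare([valuations[i] for i in stat], items)
    psi_min, psi_max = a_stat / m ** 2, 8 * a_stat
    return price_learning_mechanism(mech, items, psi_min, psi_max)

# Tiny additive demo (additive valuations are submodular).
items = ['a', 'b']
valuations = [
    lambda S: sum({'a': 4, 'b': 1}[j] for j in S),
    lambda S: sum({'a': 2, 'b': 3}[j] for j in S),
]
```

Note that the statistics bidders $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ only influence the price range $[\psi_{\min}:\psi_{\max}]$ and receive no items, which is what makes this step compatible with truthfulness.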
We have the following theorem that formalizes our main result from Section~\ref{sec:intro}.
\begin{theorem}\label{thm:final}
For a combinatorial auction with $n$ submodular (even XOS) bidders and $m$ items, $\ensuremath{\textnormal{\textsf{FinalMechanism}}}\xspace$ is universally truthful, uses $\mbox{\rm poly}(m,n)$ demand and value queries, and achieves an approximation ratio of $O((\log\log{m})^3)$ in expectation.
\end{theorem}
The proof of truthfulness in Theorem~\ref{thm:final} is straightforward. The case where we run the second-price auction is clearly truthful. For the other case, note that we never allocate any item to bidders in $\ensuremath{N_{\textnormal{\textsf{stat}}}}$, and so they might as well reveal their
true valuations in response to the algorithm in Line~\eqref{line:alg}. Finally, $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ with bidders $\ensuremath{N_{\textnormal{\textsf{mech}}}}$ is truthful by Theorem~\ref{thm:main-mech}. The bound on the number of queries also follows from~\cite{LehmannLN06} for Line~\eqref{line:alg}
and Theorem~\ref{thm:main-mech} for Line~\eqref{line:mech}, and since the second-price auction can be implemented with $n$ value queries for the grand bundle. It thus only remains to analyze the approximation ratio of $\ensuremath{\textnormal{\textsf{FinalMechanism}}}\xspace$, which we do in the next section.
\subsection{Approximation Ratio of Final Mechanism}\label{sec:final-analysis}
We use the following standard result, which follows directly from the Chernoff-Hoeffding bound.
\begin{lemma}[cf.~\cite{DobzinskiNS06,Dobzinski07,Dobzinski16}]\label{lem:chernoff-bidder}
Let $O=(O_1,\ldots,O_n)$ be an optimal allocation of items $M$ to bidders $N$ with welfare $\ensuremath{\mbox{\sc opt}}\xspace$. Suppose we sample each $i \in N$ w.p. $\rho$ independently to obtain $N'$. If for every $i \in N$, we have
$v_i(O_i) \leq \epsilon \cdot \ensuremath{\mbox{\sc opt}}\xspace$, then $\sum_{i \in N'} v_i(O_i) \geq (\rho/2) \cdot \ensuremath{\mbox{\sc opt}}\xspace$ w.p. at least $1-2\cdot\exp\paren{-\frac{\rho}{2 \cdot \epsilon}}$.
\end{lemma}
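For intuition, we sketch how such a bound follows from the multiplicative Chernoff bound (the constants below are not optimized to match the exact statement). For $i \in N$, let $X_i := v_i(O_i)/(\epsilon \cdot \ensuremath{\mbox{\sc opt}}\xspace)$ if $i \in N'$ and $X_i := 0$ otherwise, so that each $X_i \in [0,1]$ by the assumption $v_i(O_i) \leq \epsilon \cdot \ensuremath{\mbox{\sc opt}}\xspace$. For $X := \sum_{i \in N} X_i$, we have $\expect{X} = \rho/\epsilon$ since $\sum_{i \in N} v_i(O_i) = \ensuremath{\mbox{\sc opt}}\xspace$, and the event $\sum_{i \in N'} v_i(O_i) < (\rho/2) \cdot \ensuremath{\mbox{\sc opt}}\xspace$ is precisely the event $X < \expect{X}/2$. The Chernoff bound for sums of independent $[0,1]$-valued random variables then gives
\begin{align*}
\Pr\bracket{X < \frac{1}{2} \cdot \expect{X}} \quad \leq \quad \exp\paren{-\frac{\expect{X}}{8}} \quad = \quad \exp\paren{-\frac{\rho}{8\epsilon}},
\end{align*}
which is of the same form as the probability bound in Lemma~\ref{lem:chernoff-bidder} up to the constant in the exponent.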
Fix an optimal allocation $O=(O_1,\ldots,O_n)$ of items to bidders in $N$ with welfare $\ensuremath{\mbox{\sc opt}}\xspace$. We say that a bidder $i \in N$ is \emph{dominant} iff $v_i(O_i) \geq \ensuremath{\mbox{\sc opt}}\xspace/8$. For the analysis, we consider two cases: either $(i)$ there exists at least one dominant bidder, or $(ii)$ no bidder is dominant.
\paragraph{Case $(i)$: A dominant bidder exists.} W.p. half, we decide to run the second-price auction. Let $i$ be the bidder that gets the grand bundle $M$ in this auction, and let $d$ be a dominant bidder. By monotonicity, $v_d(M) \geq v_d(O_d) \geq \ensuremath{\mbox{\sc opt}}\xspace/8$, and since $i$ wins the auction, $v_i(M) \geq v_d(M) \geq \ensuremath{\mbox{\sc opt}}\xspace/8$. As such, in this case, the expected welfare of the allocation is at least $\ensuremath{\mbox{\sc opt}}\xspace/16$, concluding the proof.
\paragraph{Case $(ii)$: No dominant bidder exists.} W.p. half, we decide not to run the second-price auction. Let $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{stat}}}}$ and $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}$ be the welfare of the optimal allocations of $M$ to $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ and to $\ensuremath{N_{\textnormal{\textsf{mech}}}}$, respectively.
By Lemma~\ref{lem:chernoff-bidder}, applied to choice of $\ensuremath{N_{\textnormal{\textsf{stat}}}}$ and $N \setminus \ensuremath{N_{\textnormal{\textsf{stat}}}}$ (both sets have the same distribution) with $\rho = 1/2$ and $\ensuremath{\varepsilon} = 1/8$, and a union bound, w.p. at least $1/2$, we have
\begin{align}
\frac{1}{4} \cdot \ensuremath{\mbox{\sc opt}}\xspace ~\leq ~ \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{stat}}}} ~ \leq ~ \ensuremath{\mbox{\sc opt}}\xspace \qquad \textnormal{and} \qquad \frac{1}{4} \cdot \ensuremath{\mbox{\sc opt}}\xspace ~ \leq ~ \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}} ~ \leq ~ \ensuremath{\mbox{\sc opt}}\xspace. \label{eq:opt-stat-mech}
\end{align}
In the following, we condition on the (independent) events that we do not run the second-price auction, and that Eq~\eqref{eq:opt-stat-mech} holds, which happens w.p. $1/4$.
Fix a welfare maximizing allocation of $M$ to $\ensuremath{N_{\textnormal{\textsf{mech}}}}$ with welfare $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}$ and supporting prices $\bm{q} = (q_1,\ldots,q_m)$.
Since we run a $2$-approximation algorithm in Line~\eqref{line:alg}, we know that $\frac{1}{8} \cdot \ensuremath{\mbox{\sc opt}}\xspace \leq \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}} \leq \ensuremath{\mbox{\sc opt}}\xspace$ by Eq~\eqref{eq:opt-stat-mech}. Hence, setting $\psi_{\max}=8 \cdot \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}$ ensures
that $q_j \leq \psi_{\max}$ for every item $j \in M$. Moreover, let $M' \subseteq M$ be the set of items $j$ such that $q_j \leq \psi_{\min}$. By definition of $\psi_{\min} = \ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}}/m^2$ and since $\ensuremath{\ensuremath{\mathcal{A}}\xspace_{\textnormal{\textsf{stat}}}} \leq \ensuremath{\mbox{\sc opt}}\xspace \leq 4 \cdot \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}$, we get
$\bm{q}(M') \leq m \cdot \psi_{\min} \leq 4 \cdot \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}/m \leq \ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}/2$ (as $m \geq 8$). As such, we can simply ignore the contribution of all items in $M'$ and still have a set of items $M \setminus M'$ that can be allocated to bidders in $\ensuremath{N_{\textnormal{\textsf{mech}}}}$ with welfare at least $\ensuremath{\ensuremath{\mbox{\sc opt}}\xspace_{\textnormal{\textsf{mech}}}}/2 \geq \ensuremath{\mbox{\sc opt}}\xspace/8$. Moreover, the supporting prices of these items now belong to $[\psi_{\min}:\psi_{\max}]$. Hence, we can apply Theorem~\ref{thm:main-mech} under Assumption~\ref{assumption1} and
obtain that in this case, the expected welfare of the allocation is smaller than $\ensuremath{\mbox{\sc opt}}\xspace$ by at most an $O((\log\log{m})^3)$ factor, finishing the proof of Theorem~\ref{thm:final}.
\section{Introduction}\label{sec:intro}
In a combinatorial auction, $m$ items are to be allocated among $n$ bidders.
Each bidder $i$ has a valuation function $v_i$ that describes their value $v_i(S)$ for every bundle $S$ of items. The goal is to design a mechanism that finds an allocation $A$ of items that maximizes the \emph{social welfare}, which is defined
as $\val{A} := \sum_{i} v_i(A_i)$ where $A_i$ is the bundle allocated to bidder $i$.
For a mechanism to be feasible, it needs to be \emph{computationally-efficient}, i.e., run in $\mbox{\rm poly}(m,n)$ time given access to certain queries to valuation functions, namely value queries and demand queries (see Section~\ref{sec:prelim} for definitions).
Mechanisms should also take into account the strategic behavior of the bidders.
A mechanism in which the dominant strategy of each bidder is to reveal their true valuation in response to given queries is called
\emph{truthful}. For randomized mechanisms, we consider \emph{universally truthful} mechanisms which are distributions over truthful mechanisms
(this is a stronger guarantee than truthful-in-expectation considered also in the literature, e.g.~\cite{LaviS05,DughmiRY11}; see Appendix~\ref{app:truthful}).
A ``paradigmatic''~\cite{DobzinskiNS06,AbrahamBDR12,FotakisKV17}, ``central''~\cite{MualemN08,DughmiV11}, and ``arguably the most important''~\cite{Dobzinski07} problem in Algorithmic Mechanism Design is to design
mechanisms for combinatorial auctions that are both \emph{computationally-efficient} and \emph{truthful}.
At the root of this problem is the question of whether there is an inherent clash between computational-efficiency and truthfulness. On one hand,
the celebrated VCG mechanism of Vickrey, Clarke, and Groves~\cite{Vickrey61,Clarke71,Groves73} is a truthful mechanism for this problem that returns the welfare-maximizing allocation. Alas, this mechanism requires finding the welfare-maximizing allocation \emph{exactly}, which is not possible in $\mbox{\rm poly}(m,n)$ time for most classes of valuations. On the other hand, from a purely algorithmic point of view, constant-factor \emph{approximation} algorithms exist for many interesting classes of valuations, but they are no longer truthful.
A particular case of this problem that has received significant attention is when the valuation functions of all the bidders are \emph{submodular}\footnote{A valuation function $v$ is submodular iff $v(S \cup T) + v(S \cap T) \leq v(S) + v(T)$ for all $S$ and $T$; see also Section~\ref{sec:valuations}.} (see,
e.g.~\cite{DobzinskiNS06,LehmannLN06,DobzinskiS06,DobzinskiV12,Dobzinski07,FeigeV10,Dobzinski11,KrystaV12,DobzinskiV13,Dobzinski16} and references therein).
There is no poly-time algorithm for finding the optimal allocation of submodular bidders~\cite{MirrokniSV08,FeigeV10,DobzinskiV13}, and thus the VCG mechanism is not computationally-efficient here.
On the other hand, using only value queries, a simple greedy algorithm achieves a $2$-approximation~\cite{LehmannLN06}; this can be further improved to a $(\frac{e}{e-1})$-approximation~\cite{Vondrak08},
and even slightly better by using demand queries~\cite{FeigeV06}. This leads to one of the earliest and most basic questions in Algorithmic Mechanism Design:
\vspace{-4pt}
\begin{quote}
\emph{How closely can the approximation ratio of truthful mechanisms for submodular bidders match what is possible from an algorithmic point of view that ignores strategic behavior?}
\end{quote}
\vspace{-4pt}
Already more than a decade ago, Dobzinski, Nisan, and Schapira~\cite{DobzinskiNS05} gave the first non-trivial answer to this question by designing an $O(\sqrt{m})$-approximation mechanism.
This approximation ratio was soon after exponentially improved by the same
authors~\cite{DobzinskiNS06} to $O(\log^2{m})$, which in turn was improved to $O(\log{m}\log\log{m})$ by Dobzinski~\cite{Dobzinski07}, and then to $O(\log{m})$ by Krysta and V{\"o}cking~\cite{KrystaV12}.
Breaking this logarithmic barrier remained elusive until a recent breakthrough of Dobzinski~\cite{Dobzinski16} that achieved an $O(\sqrt{\log m})$ approximation.
\paragraph{Our Result.} We give an \emph{exponential} factor improvement over this $\Theta(\sqrt{\log{m}})$ approximation mechanism of~\cite{Dobzinski16}, proving the following result.
\begin{result}
There exists a universally truthful mechanism for combinatorial auctions with submodular valuations that achieves an approximation ratio of $O((\log\log{m})^3)$ to the social welfare in expectation, using a polynomial number of value and demand queries.
\end{result}
We shall note that our mechanism (as well as all previous ones in~\cite{DobzinskiNS06,Dobzinski07,KrystaV12,Dobzinski16}) actually works for the much broader class of \emph{XOS} valuations (see Section~\ref{sec:prelim} for definition).
Our result significantly reduces the gap between the approximation ratio of truthful mechanisms and that of general algorithms for submodular and XOS bidders, namely, from $\mbox{\rm poly}(\log{m})$ in prior work to $\mbox{\rm poly}(\log\log{m})$.
Similar to~\cite{Dobzinski16}, our result implies a $\mbox{\rm poly}(m,n)$ time algorithm with \emph{explicit access} to valuations, when valuations are \emph{budget additive}, i.e., for
every $S$, $v(S) = \min(b,\sum_{j \in S}v(\set{j}))$ for some fixed $b$. These valuations have been studied extensively in the past (see, e.g.~\cite{AndelmanM04,ChakrabartyG08,Dobzinski16}) and a simple reduction from Knapsack shows
that it is NP-hard to compute a demand query for these valuations. Yet, similar to~\cite{Dobzinski16}, our mechanism uses demand queries of a very specific form, and these can be computed in poly-time. We omit the details here and
instead refer the reader to~\cite[Section 6]{Dobzinski16}.
\paragraph{Our Techniques.}
All previous work on this problem~\cite{DobzinskiNS06,Dobzinski07,KrystaV12,Dobzinski16}, at their core, relied on the following key observation: to design truthful mechanisms for submodular or XOS bidders,
``all'' we need is to find ``good'' estimates of the \emph{item prices} in an optimal allocation; the rest can be handled by a simple \emph{fixed-price auction} using these prices. We also use this observation but depart from prior work
in the following key conceptual way. Previous work mainly aimed to learn coarse-grained ``statistics'' about the prices, say, the range they should belong to~\cite{DobzinskiNS06,Dobzinski07}, and used these statistics to
``guess'' a small number of good prices (e.g., $O(1)$ prices in~\cite{DobzinskiNS06,Dobzinski07}, and $O(\sqrt{\log{m}})$ in~\cite{Dobzinski16}), whereas we instead strive to ``learn'' the entire price vector of items in a fine-grained way (at least for a large fraction of items). This fine-grained view is the key factor that allows us
to get much more accurate prices and ultimately leads to the exponentially improved performance of our mechanism.
A cornerstone of our approach is a ``learning process'' which starts with a simple guess of item prices and \emph{iteratively} refines this guess until it converges to suitable prices for the different items.
Each iteration of this process involves running \emph{several} fixed-price auctions with the prices learned so far and using the resulting allocations to refine the learned prices further. The key to the analysis of
this mechanism is the ``Learnable-Or-Allocatable Lemma'' (Lemma~\ref{lem:main}): Roughly speaking, we prove that in each iteration of this process,
we can either refine our learned prices significantly (Learnable), or the fixed-price auction with the currently learned prices already gets a high-welfare allocation (Allocatable). Thus, after a \emph{few} iterations, the resulting
prices have been refined enough to allow for a high-welfare allocation.
One ingredient in the proof of this lemma is
an interesting property of fixed-price auctions that stems from their greedy nature: if we run a fixed-price auction with a \emph{random ordering} of bidders,
either we obtain a high-welfare allocation or we sell almost all items (most likely to wrong bidders). Such a property was first proved (in a similar but not identical form) by Dobzinski~\cite{Dobzinski16} and is closely related to other similar results
about greedy algorithms for maximum matching~\cite{KonradMM12}, matroid intersection~\cite{GS-IPCO17}, and constrained submodular maximization~\cite{Norouzi-FardTMZ18}.
\paragraph{Further related work.} The gap between the approximation ratio of truthful mechanisms and general algorithms has been studied from numerous angles in the literature. It is known that algorithms that use only $\mbox{\rm poly}(m,n)$ many value queries,
or are poly-time in the input representation (for succinctly representable valuations), can achieve only an $m^{\Omega(1)}$-approximation~\cite{PapadimitriouSS08,Dobzinski11,DughmiV11,DobzinskiV12,DanielySS15} (the latter
assuming RP $\neq$ NP). However, these results no longer apply for mechanisms that are allowed other natural types of queries, e.g., demand queries\footnote{Demand queries are quite natural from an economic point of view as they simply return the most
valued bundle for the bidder at the given item prices; see Section~\ref{sec:prelim}.}. This has led researchers to study
the communication complexity of this problem that can capture arbitrary queries to valuations~\cite{Nisan00,BlumrosenN02,DobzinskiNS05,NisanS06,DobzinskiV13,DobzinskiNO14,Dobzinski16b,Assadi17ca,BravermanMW17,EzraFNTW18}. Although a clear
path for proving a separation between the communication complexity of truthful mechanisms and general algorithms was shown recently in~\cite{Dobzinski16b} (see also~\cite{BravermanMW17,EzraFNTW18}), no such separation is known yet.
\subsection*{Acknowledgements}
We are grateful to Matt Weinberg for illuminating discussions on the related work, and the anonymous reviewers of FOCS 2019 for many helpful comments on the presentation of this paper.
{\small
\bibliographystyle{abbrv}
\section{The Main Mechanism}\label{sec:mech}
We give our main mechanism for combinatorial auctions with XOS valuations in this section. In the following, we present our mechanism
with a simplifying assumption (Assumption~\ref{assumption1}). This assumption is made primarily for simplicity of exposition and we show how to remove it in Section~\ref{sec:end-mech}.
\begin{assumption}\label{assumption1}
We assume there exist two non-negative numbers $\psi_{\min} \leq \psi_{\max}$ such that:
\begin{enumerate}[label=(\roman*)]
\item for every valuation, the supporting price of any item in any clause belongs to $\set{0} \cup [\psi_{\min}:\psi_{\max}]$;
\item the ratio of these numbers, denoted by $\Psi:= \psi_{\max}/\psi_{\min}$, is bounded by some fixed $\mbox{\rm poly}(m)$.
\end{enumerate}
We further assume that the mechanism is given $\psi_{\min}$ and $\psi_{\max}$ as input.
\end{assumption}
In the following, we first present a simple tree-structure, named the \emph{price tree},
that we use in our mechanism for discretizing prices at different scales. We then describe the method with which we assign different
bidders to different auctions run by our mechanism. Finally, we present our mechanism and prove its computational efficiency and universal truthfulness guarantees. The analysis of the approximation ratio of our mechanism---the main technical contribution of the paper---appears
in the subsequent section.
\paragraph{Parameters:}
We define and use the following parameters in our mechanism.
\begin{itemize}
\item $\alpha:= \Theta(1)$ -- number of different auctions run in each iteration of our mechanism;
\item $\beta := O(\log\log{\Psi})$ -- number of iterations in our mechanism;
\item $\gamma := \Theta(\alpha\beta)$ -- the accuracy to which we aim to learn the true prices.
\end{itemize}
Moreover, the above parameters satisfy the following equations:
\begin{align}
&\alpha^{\beta+1} \geq \log_{\gamma}{\Psi} \notag \\ &20\alpha\beta \leq \gamma \leq 30\alpha\beta. \label{eq:equations}
\end{align}
It is immediate to verify that one can choose $\alpha,\beta,\gamma$ satisfying all the above equations.
\subsection{Price Trees and Their Properties}\label{sec:price-tree}
We define a simple tree-structure used for discretizing the range of prices in $[\psi_{\min} : \psi_{\max}]$ by our mechanism.
The first part is a geometric partition of the set of available prices, as follows.
\begin{definition}\label{def:bins}
We partition $[\psi_{\min} : \psi_{\max}]$ into $t:= \ceil{\log_{\gamma}{\Psi}}$ \textbf{\emph{bins}} $B_1,\ldots,B_t$
where values inside each $B_i$ are within a factor $\gamma$ of each other. We use $\ensuremath{\textnormal{\textsf{price}}}(B_i)$ to denote the min value in $B_i$.
\end{definition}
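To make this definition concrete, here is a minimal Python sketch (our own illustration, not part of the mechanism's formal description) of the geometric bin partition; \texttt{psi\_min}, \texttt{psi\_max}, and \texttt{gamma} play the roles of $\psi_{\min}$, $\psi_{\max}$, and $\gamma$.

```python
import math

def make_bins(psi_min, psi_max, gamma):
    """Partition [psi_min, psi_max] into t = ceil(log_gamma(Psi)) geometric
    bins; values inside each bin are within a factor gamma of each other.
    Returns price(B_1), ..., price(B_t), the minimum value of each bin."""
    psi = psi_max / psi_min
    t = max(1, math.ceil(math.log(psi, gamma)))
    return [psi_min * gamma ** i for i in range(t)]

def bin_index(p, psi_min, gamma):
    """0-based index of the bin containing price p; bin i covers
    [psi_min * gamma**i, psi_min * gamma**(i+1)).  The small additive
    epsilon guards against floating-point error at bin boundaries."""
    return math.floor(math.log(p / psi_min, gamma) + 1e-12)
```

For instance, with $\psi_{\min}=1$, $\psi_{\max}=16$, and $\gamma=2$ this yields $t=4$ bins with representative prices $1,2,4,8$.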
We now use the concepts of bins to define a multi-level partitioning of $[\psi_{\min}:\psi_{\max}]$ with different scales of accuracy.
\begin{definition}[Price Tree]\label{def:price-tree}
A \textbf{\emph{price tree}} $\ensuremath{\mathcal{T}}$ is a rooted tree in which each node $z$ is assigned two attributes: $(i)$ $\ensuremath{\textnormal{\textsf{bins}}}(z)$ which is a subset of bins $B_1,\ldots,B_t$ with \emph{consecutive} indices, and $(ii)$ $\ensuremath{\textnormal{\textsf{price}}}(z)$ which is the value of $\ensuremath{\textnormal{\textsf{price}}}(B_i)$
where $B_i$ is the \emph{smallest indexed bin} inside $\ensuremath{\textnormal{\textsf{bins}}}(z)$. The tree $\ensuremath{\mathcal{T}}$ satisfies the following properties:
\begin{itemize}
\item For the root $z_r$ of $\ensuremath{\mathcal{T}}$, $\ensuremath{\textnormal{\textsf{bins}}}(z_r) := (B_1,\ldots,B_t)$.
\item $\ensuremath{\mathcal{T}}$ has $t$ leaf-nodes, where the $i$-th left-most leaf-node $z_i$ of $\ensuremath{\mathcal{T}}$ has $\ensuremath{\textnormal{\textsf{bins}}}(z_i) = B_i$.
\item Every non-leaf node $z$ of $\ensuremath{\mathcal{T}}$ has $\alpha$ children $z_1,\ldots,z_\alpha$ such that $\ensuremath{\textnormal{\textsf{bins}}}(z_1)$ contains the first $1/\alpha$ fraction of $\ensuremath{\textnormal{\textsf{bins}}}(z)$, $\ensuremath{\textnormal{\textsf{bins}}}(z_2)$ contains the second $1/\alpha$ fraction, and so on.
\end{itemize}
By the choice of $\alpha^{\beta+1} \geq t$, the number of levels in $\ensuremath{\mathcal{T}}$ is $\beta+1$ (see Figure~\ref{fig:price-tree} for an illustration).
\end{definition}
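The recursive structure of this definition can be sketched in a few lines of Python; this is an illustrative rendering only, and for simplicity it assumes the number of bins is an exact power of $\alpha$ (the mechanism itself only needs $\alpha^{\beta+1} \geq t$).

```python
def build_price_tree(bin_prices, alpha):
    """A price tree node keeps a run of consecutive bins and the price of its
    smallest-indexed bin; every non-leaf splits its bins into alpha equal
    consecutive parts, one per child.  Assumes len(bin_prices) is a power
    of alpha so the splits are exact."""
    node = {"bins": list(bin_prices), "price": bin_prices[0], "children": []}
    if len(bin_prices) > 1:
        step = len(bin_prices) // alpha
        node["children"] = [
            build_price_tree(bin_prices[k * step:(k + 1) * step], alpha)
            for k in range(alpha)
        ]
    return node

def num_levels(node):
    """Depth of the tree, i.e., log_alpha(t) + 1 for t bins."""
    return 1 if not node["children"] else 1 + num_levels(node["children"][0])
```

With $t=8$ bins and $\alpha=2$, the root holds all eight bins, its two children hold four bins each, and so on down to the eight single-bin leaves.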
\begin{figure}[t]
\centering
\input{price-tree}
\caption{An illustration of a price tree $\ensuremath{\mathcal{T}}$ with $\alpha=2$, $\beta=2$, and $t=8$. By considering only the bold-face bins (with odd indices), we obtain the modified price tree $\ensuremath{\mathcal{T}}^o$.} \label{fig:price-tree}
\end{figure}
A price tree $\ensuremath{\mathcal{T}}$ gives a nested partitioning of the range $[\psi_{\min}:\psi_{\max}]$ into $\beta+1$ levels with different granularities.
We say that a price $p$ \emph{belongs} to a node $z$ of $\ensuremath{\mathcal{T}}$ iff $p$ appears in one of the bins in $\ensuremath{\textnormal{\textsf{bins}}}(z)$; moreover, if $p = \ensuremath{\textnormal{\textsf{price}}}(z)$,
then we say $p$ \emph{strongly belongs} to $z$.
\begin{definition}\label{def:level-price}
We say a price vector $\bm{\price}=(p_1,\ldots,p_m)$ is a \textbf{\emph{level-$i$ price vector}} iff every $p_j$ strongly belongs to some node $z_j$ in level $i$ of $\ensuremath{\mathcal{T}}$.
We assign $\alpha$ \textbf{\emph{canonical level-$(i+1)$ price vectors}} $\bm{\price}_1,\ldots,\bm{\price}_\alpha$ to a level-$i$ price vector $\bm{\price}$, where in $\bm{\price}_k=(p'_{1},\ldots,p'_{m})$
each $p'_j$ strongly belongs to the $k$-th child of $z_j$ to which $p_j$ strongly belongs.
\end{definition}
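The canonical vectors of this definition can also be phrased arithmetically over bin indices rather than over an explicit tree; the following sketch (our own simplification, assuming $t$ is a power of $\alpha$) computes the $k$-th canonical level-$(i+1)$ vector of a level-$i$ price vector.

```python
def canonical_vector(bin_prices, price_vec, level, k, alpha):
    """k-th canonical level-(level+1) vector of a level-`level` price vector.
    The level-i node a price p strongly belongs to spans t / alpha**(i-1)
    consecutive bins starting at p's own bin, so its k-th child starts
    k * t / alpha**i bins later (levels 1-indexed, k 0-indexed)."""
    t = len(bin_prices)
    child_width = t // alpha ** level
    return [bin_prices[bin_prices.index(p) + k * child_width]
            for p in price_vec]
```

For example, over bins with prices $1,2,\ldots,2^7$ and $\alpha = 2$, the level-$1$ (root) price $1$ has canonical children $1$ and $16$, matching the two halves of the bin range.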
\paragraph{Modified price trees.} Using price trees in our mechanism directly is problematic primarily because
it is possible that a price $p \in B_i$ for some bin $B_i$ is actually closer to $\ensuremath{\textnormal{\textsf{price}}}(B_{i+1})$ than to $\ensuremath{\textnormal{\textsf{price}}}(B_i)$, making it infeasible to learn the correct bin for $p$.
To fix this issue, we consider the following two modified price trees $\TT^o$ and $\TT^e$ instead:
$\TT^o$ is a subtree of $\ensuremath{\mathcal{T}}$ obtained by retaining only the \emph{odd} indexed bins $B_1,B_3,\ldots$ in $\ensuremath{\textnormal{\textsf{bins}}}$ of $\ensuremath{\mathcal{T}}$; $\TT^e$ is defined analogously by retaining all \emph{even} indexed bins.
In our mechanism, we pick one of $\TT^o$ or $\TT^e$ at random and from there on only consider prices that belong to the bins appearing in the chosen modified price tree. This way, for any two prices $p,p'$ that belong
to two different nodes of the modified tree, there is at least a factor $\gamma$ gap between $p$ and $p'$.
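The odd/even filtering that turns $\ensuremath{\mathcal{T}}$ into $\TT^o$ or $\TT^e$ is a one-line operation on the list of bin prices; a small illustrative sketch:

```python
def modified_bin_prices(bin_prices, keep_odd):
    """Retain only odd-indexed (T^o) or even-indexed (T^e) bins, with bins
    1-indexed as B_1, B_2, ....  A full bin is dropped between any two
    surviving bins, so prices in different surviving bins are a factor
    >= gamma apart."""
    offset = 0 if keep_odd else 1
    return bin_prices[offset::2]
```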
\subsection{Partitioning Bidders}\label{sec:bidder-partition}
Our main mechanism involves partitioning the set of bidders into $\beta+1$ different \emph{groups} $N_1,\ldots,N_{\beta+1}$ and assigning them to different auctions throughout the mechanism:
\begin{itemize}
\item $\ensuremath{\textnormal{\texttt{Partition}}}(N)$: Let $N' \leftarrow N$ and for $i=1$ to $\beta$ iterations: pick a random permutation of $N'$ and insert the first $|N'|/(10\beta)$ bidders into $N_i$; update $N' \leftarrow N' \setminus N_i$. At the end,
return $N_1,\ldots,N_\beta$ and $N_{\beta+1} := N'$.
\end{itemize}
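A direct Python rendering of $\ensuremath{\textnormal{\texttt{Partition}}}(N)$ (the \texttt{seed} argument is ours, added only for reproducibility; it is not part of the procedure):

```python
import random

def partition_bidders(bidders, beta, seed=None):
    """For beta rounds, move a random 1/(10*beta) fraction of the remaining
    bidders into the next group; the leftover bidders form the large final
    group N_{beta+1}."""
    rng = random.Random(seed)
    remaining = list(bidders)
    groups = []
    for _ in range(beta):
        rng.shuffle(remaining)
        take = len(remaining) // (10 * beta)
        groups.append(remaining[:take])
        remaining = remaining[take:]
    groups.append(remaining)  # N_{beta+1}: everyone not yet assigned
    return groups
```

For instance, with $100$ bidders and $\beta=2$, the three groups have $5$, $4$, and $91$ bidders, illustrating that $N_{\beta+1}$ dominates in size.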
\noindent
We note that the sizes of $N_1,\ldots,N_{\beta}$ are \emph{decreasing} in expectation, while $N_{\beta+1}$ is larger than all the rest.
\subsection{Formal Description of the Mechanism}\label{sec:main-mech}
We are now ready to give our main mechanism under Assumption~\ref{assumption1}. For that, we also need the following procedure first:
\begin{itemize}
\item $\ensuremath{\textnormal{\texttt{PriceUpdate}}}\xspace(\Ai{i}_1,\ldots,\Ai{i}_{\alpha},\bpricei{i}_1,\ldots,\bpricei{i}_\alpha)$: For any item $j \in M$, we let $p'_j$ be equal to $p_j \in \bpricei{i}_k$ where $k$ is the \emph{largest} index such that item $j$ is allocated in $\Ai{i}_k$ (if $j$ is never allocated, we set $k=1$). Return $\bpricei{i+1} = (p'_1,\ldots,p'_m)$.
\end{itemize}
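The $\ensuremath{\textnormal{\texttt{PriceUpdate}}}\xspace$ rule can be sketched as follows, representing each allocation $\Ai{i}_k$ simply by the set of items it sells:

```python
def price_update(allocations, price_vectors):
    """Each item takes its price from the highest-indexed auction in which it
    was allocated, defaulting to the first price vector if it was never sold;
    iterating in order lets later auctions overwrite earlier ones."""
    new_prices = list(price_vectors[0])  # k = 1 is the default
    for sold, prices in zip(allocations, price_vectors):
        for j in sold:
            new_prices[j] = prices[j]
    return new_prices
```

For example, with price vectors $(1,1,1)$ and $(4,4,4)$, an item sold only in the first auction keeps price $1$, an item sold in both gets $4$, and an unsold item defaults to $1$.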
\noindent
We now define our mechanism.
\begin{tbox}
\underline{$\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace{(N,M)}$}
\begin{enumerate}
\item Let $(N_1,N_2,\ldots,N_{\beta+1}) := \ensuremath{\textnormal{\texttt{Partition}}}(N)$.
\item Pick one of the modified trees $\TT^o$ or $\TT^e$ uniformly at random and denote it by $\TT^{\star}$.
\item Let $\bpricei{1}$ be the (unique) level-$1$ (root) price of $\TT^{\star}$. For $i = 1$ to $\beta$ \underline{iterations}:
\begin{enumerate}
\item Let $\bpricei{i}_1,\ldots,\bpricei{i}_{\alpha}$ be the level-$(i+1)$ canonical price vectors of $\bpricei{i}$ in $\TT^{\star}$ (Definition~\ref{def:level-price}).
\item For $j=1$ to $\alpha$: run $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N_i,M,\frac{\bpricei{i}_j}{2})$ and let $\Ai{i}_j$ be the allocation.
\item\label{line:coin-toss} W.p. $(1/\beta)$, pick $j^{\star} \in [\alpha]$ uniformly at random and return $\Ai{i}_{j^{\star}}$ as the final allocation;
otherwise, let $\bpricei{i+1} := \ensuremath{\textnormal{\texttt{PriceUpdate}}}\xspace(\Ai{i}_1,\ldots,\Ai{i}_{\alpha},\bpricei{i}_1,\ldots,\bpricei{i}_\alpha)$, and continue.
\end{enumerate}
\item Run $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N_{\beta+1},M,\frac{\bpricei{\beta+1}}{2})$ and return the allocation $A^*$.
\end{enumerate}
\end{tbox}
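The control flow of $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ can be summarized in the following sketch; here \texttt{auction} and \texttt{canonical} are hypothetical callables standing in for $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace$ (returning the set of sold items) and for the canonical-vector computation, and the price-update rule is inlined.

```python
import random

def price_learning_mechanism(groups, root_prices, canonical, auction,
                             alpha, beta, rng=None):
    """groups = Partition(N), with beta + 1 groups; canonical(pv, i, k) is
    the k-th canonical level-(i+1) vector of pv; auction(group, prices)
    returns the set of items sold in a fixed-price auction at those prices."""
    rng = rng or random.Random()
    prices = list(root_prices)
    for i in range(beta):
        cand = [canonical(prices, i + 1, k) for k in range(alpha)]
        allocs = [auction(groups[i], [p / 2 for p in pv]) for pv in cand]
        if rng.random() < 1.0 / beta:      # stop early w.p. 1/beta
            return allocs[rng.randrange(alpha)]
        prices = list(cand[0])             # inline PriceUpdate
        for sold, pv in zip(allocs, cand):
            for j in sold:
                prices[j] = pv[j]
    return auction(groups[beta], [p / 2 for p in prices])
```

Note the sketch only tracks allocated item sets; payments and the bidder-by-bidder demand queries live inside \texttt{auction}.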
We remark right away that in $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$, every price vector $\bpricei{i}$ computed in iteration $i$ is a level-$i$ price vector, and hence the canonical price vectors defined in each iteration indeed exist. We have the following
theorem which is the main technical result of this paper.
\begin{theorem}\label{thm:main-mech}
For a combinatorial auction with $n$ submodular (even XOS) bidders and $m$ items, under Assumption~\ref{assumption1}, $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ is universally truthful, uses $O(n)$ demand queries and polynomial time, and achieves an approximation ratio of $O((\log\log{m})^3)$ in expectation.
\end{theorem}
We remark that our mechanism in Theorem~\ref{thm:main-mech} only makes $O(1)$ queries to the valuation of each bidder, which is clearly optimal, and results in a {highly efficient} mechanism (computationally).
To see that $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ is truthful, notice that every bidder $b$ is participating in at most $\alpha$ fixed-price auctions of $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace$ for which the prices of items have already been fixed entirely independent of $b$'s valuations (and responses).
Moreover, for bidders in $N_1,\ldots,N_{\beta}$ that participate in more than one auction, the choice of which items (if any) they are being allocated across the auctions is entirely independent of the auction outcome and is determined by the random
coin tosses in Line~(\ref{line:coin-toss}). This still does not imply that truth telling is a dominant strategy, as a bidder can ``threaten'' another bidder by reporting wrong valuations in subsequent auctions they both participate in (see, e.g.~\cite{Dobzinski07,Dobzinski16}).
To fix this, we make each bidder $b$ output its preferences in all fixed-price auctions $b$ participates in \emph{simultaneously} (or, alternatively, hide bidders' responses from each other). As was observed in~\cite{Dobzinski07,Dobzinski16}, this ensures the truthfulness of the mechanism.
Computational efficiency of $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ and the bound on the number of demand queries follow immediately from the fact that each bidder participates in at most $\alpha = \Theta(1)$ fixed-price auctions, each of which requires one demand query per bidder.
\section{The High-Level Overview}\label{sec:overview}
We describe our mechanism using three parameters $\alpha := \Theta(1)$, $\beta := O(\log\log{m})$, and $\gamma := \Theta(\alpha \beta)$.
Let $O=(O_1,\ldots,O_n)$ be an optimal allocation with welfare $\ensuremath{\mbox{\sc opt}}\xspace$ and $\bm{q} = (q_1,\ldots,q_m)$ be its supporting prices (obviously, $O$ and $\bm{q}$ are unknown). For now, let us assume that every $q_j$ belongs to $\set{1,\gamma,\gamma^2,\ldots,\gamma^{K}}$, for some $K=O(\log{m})$ (and hence prices
are roughly $\mbox{\rm poly}{(m)}$ large).
The crux of our mechanism is to ``learn'' $\bm{q}$, namely, find another price vector $\bm{\price}$ such that
for some subset $C \subseteq M$ with $\bm{q}(C) \approx \val{O}$, $\bm{\price}$ \emph{point-wise} $\gamma$-approximates $\bm{q}$ for items in $C$ (i.e., within a multiplicative factor of $\gamma$).
Having learned such prices, we can run a fixed-price auction with prices $\bm{\price}$, and by Lemma~\ref{lem:fixed-price}, obtain an allocation with welfare $\approx \val{O}/\gamma$, i.e., a $\gamma$-approximation.
In order to obtain the price vector $\bm{\price}$, we start with a rough guess $\bpricei{1}$ for what prices should be (say, all ones), and update
our guess over (at most) $\beta$ \emph{iterations}. In each iteration $i \in [\beta]$, we use the prices $\bpricei{i}$ learned so far to find $\alpha$ new price vectors $\bpricei{i}_1,\ldots,\bpricei{i}_\alpha$, and
``explore'' for \emph{each} item~$j\in M$ which of these $\alpha$ vectors best represents its price in $\bm{q}$, and then assign that price to item $j$ in $\bpricei{i+1}$. We continue this for $\beta$ iterations until we converge to the desired price vector $\bm{\price} := \bpricei{\beta+1}$, or we
decide along the way that the prices learned so far are already ``good enough''. There are three main questions to answer here: $(i)$ how to choose which prices to explore in each iteration, $(ii)$ how to explore
a new price for each item, and finally $(iii)$ how to implement all this in a truthful (and computationally-efficient) manner.
We elaborate on each part below.
\paragraph{Part $(i)$ -- which prices to explore.}
This question can be best answered from the perspective of a single item $j \in M$.
Originally, we set $\pricei{1}_j \in \bpricei{1}$ to be $1$, and so with our assumption that $q_j\in \set{1,\gamma,\ldots,\gamma^{K}}$, price $\pricei{1}_j$ will $(\gamma^K)$-approximate $q_j \in \bm{q}$. We want $\pricei{2}_j$ to
$(\gamma^{K/\alpha})$-approximate $q_j$ in the next iteration. Thus, we simply need to check for every $\ell \in \set{0,\ldots,\alpha-1}$, whether $q_j \geq \gamma^{\ell \cdot K/\alpha}$ or not (using part $(ii)$ below).
By picking the largest $\ell^{*}$ for which this is true, we can get a $(\gamma^{K/\alpha})$-approximation to $q_j$.
As such, for each item, there are only $\alpha$ choices of prices that we need to explore next, which allows us to devise price vectors $\bpricei{1}_1,\ldots,\bpricei{1}_\alpha$ accordingly.
We repeat the same idea for later iterations as well, maintaining that in iteration $i$, price $\pricei{i}_j \in \bpricei{i}$ will $(\gamma^{K/\alpha^{i-1}})$-approximate $q_j$, and use $\alpha$
prices as before in $\bpricei{i}_1,\ldots,\bpricei{i}_\alpha$ to update this to a $(\gamma^{K/\alpha^i})$-approximation for the next iteration.
This way, after $\beta=O(\log\log{m})$ iterations, we obtain $\pricei{\beta+1}_j$ that $\gamma$-approximates $q_j$ as desired. See Figure~\ref{fig:price-trajectory} for an illustration.
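Under the simplifying assumption $q_j \in \set{1,\gamma,\ldots,\gamma^{K}}$, this refinement is pure arithmetic on exponents; the sketch below (ours, purely for illustration) reproduces the kind of trajectory shown in Figure~\ref{fig:price-trajectory}.

```python
def refine_once(q_exp, lo, width, alpha):
    """One iteration for a single correctly-tracked item: the current guess
    covers exponents [lo, lo + width); pick the largest of the alpha equal
    sub-ranges whose lower end does not exceed the true exponent q_exp."""
    step = width // alpha
    ell = min(alpha - 1, (q_exp - lo) // step)
    return lo + ell * step, step

def learn_exponent(q_exp, K, alpha, beta):
    """After beta iterations the exponent window shrinks from K to
    K / alpha**beta, i.e., to a gamma**(K/alpha**beta)-approximation."""
    lo, width = 0, K
    for _ in range(beta):
        lo, width = refine_once(q_exp, lo, width, alpha)
    return lo
```

With $\alpha=4$, $\beta=2$, $K=16$, and true price $\gamma^9$, two iterations narrow the exponent window from $[0,16)$ to $[8,12)$ and then to exactly $9$.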
In the above discussion, we talked about an item $j$ as if its price is learned correctly throughout (i.e., $\pricei{i}_j$ is $(\gamma^{K/\alpha^{i-1}})$-approximating $q_j$ for all $i \in [\beta+1]$).
Our mechanism cannot guarantee this property for every item (but rather for most of them).
Moreover, we are also not able to decide which items have been correctly priced, so we simply treat all items as being priced correctly in the mechanism and perform
the above process for them. This means that for some items, their price may have been learned incorrectly in some iteration; so we conservatively ignore their contribution from now on
in the analysis. A key part of our analysis is to show that this does not hurt the performance of the mechanism by much, namely, $\bm{q}(C)$ is still a good approximation to $\val{O}$, where $C$ is the set of items for which we learn their prices correctly.
\begin{figure}[h!]
\centering
\input{price-trajectory}
\caption{An illustration of the trajectory of the prices of a single item throughout the mechanism. Here, $\alpha=4$ and $\beta=2$.
Each block $i$ corresponds to price $\gamma^{i}$. Arrows correspond to the price of this item in the corresponding fixed-price auction; a solid arrow means the item was sold, while a dashed arrow means it was not.
The learned price of this item in this example is $\gamma^{9}$.} \label{fig:price-trajectory}
\end{figure}
\paragraph{Part $(ii)$ -- how to explore a new price.} For this part, we build on a key idea from~\cite{Dobzinski16} in using fixed-price auctions themselves as a ``proxy'' for determining correctness of a guess for item prices. The idea is as follows:
suppose we run a fixed-price auction with prices $\bpricei{i}_\ell$ for $\ell \in [\alpha]$ that we want to explore in an iteration $i$. As these prices may still be very far from $\bm{q}$, there is no guarantee that this auction returns a high-welfare allocation. However,
if we choose the ordering of bidders \emph{randomly}, then the \emph{only way} this auction fails to output a high-welfare allocation is that it sold \emph{almost all} the items at the current prices (most likely to the wrong bidders).
Hence, an item getting sold in a certain fixed-price auction is a ``good indicator'' that its price in $\bm{q}$ is at least as high as the price used in this fixed-price auction.
Such an idea was used in~\cite{Dobzinski16} to narrow down the range of item prices from $O(\log{m})$ values to $O(\sqrt{\log{m}})$, which in turn allows the mechanism to simply guess a correct price for each item and achieve an
$O(\sqrt{\log{m}})$-approximation.
We take this idea to the next step to obtain our Learnable-Or-Allocatable Lemma (Lemma~\ref{lem:main}).
Roughly speaking, we show that in each iteration $i$, starting from the set $\Ci{i}$ of correctly priced items, either one of the $\alpha$ auctions for exploring prices will lead to an $O(\beta^2)$-approximate allocation,
or after this iteration we will manage to further refine the prices of almost all items in $\Ci{i}$. That is, we obtain a set $\Ci{i+1}$ with $\bm{q}(\Ci{i+1}) \approx \bm{q}(\Ci{i})$ and with $\bpricei{i+1}$ approximating the prices $\bm{q}$ for $\Ci{i+1}$ much more accurately than $\bpricei{i}$ (as described in part $(i)$).
Hence, either during one of the iterations there is an auction that gives us an $O(\beta^2)$-approximation, or we eventually end up with $\bpricei{\beta+1}$ that
point-wise $\gamma$-approximates $\bm{q}$ for a large set of items $\Ci{\beta+1}$. Therefore, by ensuring $\bm{q}(\Ci{\beta+1}) = \Omega(\ensuremath{\mbox{\sc opt}}\xspace)$, a fixed-price auction with prices~$\bpricei{\beta+1}$ gives a $\gamma = O(\alpha\beta)$-approximation by Lemma~\ref{lem:fixed-price}.
This outline oversimplifies many details. Let us briefly mention two here. Firstly, running fixed-price auctions only helps us avoid \emph{underpricing} items for the next iteration; we also need to take care of \emph{overpricing}.
This is handled by making sure there is a \emph{gap} of $\gamma$ between different prices explored so that not many overpriced items can be sold in an auction.
Also while for the purpose of this discussion we simply assumed the existence of this gap, in the actual mechanism we need to \emph{create} this gap using a basic randomization idea. Secondly, our mechanism has no way of determining (in a truthful way)
which case of the
Learnable-Or-Allocatable Lemma we are in. This means that there are $\alpha \cdot \beta$ auctions in the mechanism and any one of them may give an $O(\beta^2)$-approximation of the welfare. (If not, then we can learn the prices accurately and the final
auction would be an $O(\alpha\beta)$-approximation.) The solution here is then to simply {pick} one of the $(\alpha\beta + 1)$ auctions \emph{uniformly at random} and allocate according to that.
This way we succeed in finding a good auction with probability at least $1/\alpha \beta$ and hence, in expectation, we obtain an $O(\alpha\beta^3)$-approximation.
\paragraph{Part $(iii)$ -- how to ensure truthfulness.}
Recall that a fixed-price auction is truthful primarily because the responses of the bidders have no effect on the price of their allocated bundle. However, our mechanism consists of multiple fixed-price auctions, and the outcomes of these auctions do influence the
prices for \emph{later} iterations. As such, to ensure truthfulness, each bidder should only participate in the auctions of a single iteration. Hence, at the beginning of the mechanism, we randomly partition the bidders into $\beta+1$ groups
$N_1,\ldots,N_{\beta+1}$. Then, in each iteration $i$, we use the bidders in group $N_i$ for fixed-price auctions with prices $\bpricei{i}_1,\ldots,\bpricei{i}_\alpha$ to learn prices $\bpricei{i+1}$, and in the final iteration we run one fixed-price auction with bidders $N_{\beta+1}$ and prices $\bm{\price} = \bpricei{\beta+1}$.
This partitioning of bidders results in a key challenge: Our goal in learning the prices should actually be different from what was stated earlier. In particular, the auctions in each iteration $i$ with bidders $N_i$ should reveal the $\bm{q}$ prices of items
allocated in $O$ to bidders in $N_{>i} := N_{i+1},\ldots,N_{\beta+1}$, \emph{as opposed to} bidders in $N_i$.
This is because we are no longer able to allocate any item to bidders in $N_1,\ldots,N_{i}$. We handle this, too, via our Learnable-Or-Allocatable Lemma.
Instead of learning the set $\Ci{i+1}$ with $\bm{q}(\Ci{i+1}) \approx \bm{q}(\Ci{i})$ in the Learnable case, we have a more refined statement in which the LHS is replaced with $\bm{q}$ of items allocated \emph{only} to bidders in $N_{>i}$.
This in turn requires a delicate choice of parameters and analysis to balance out two opposing forces: on one hand, we need $N_i$ to be large enough so that we can ``extrapolate'' the learned prices in auctions with $N_i$ to $N_{>i}$ (in the Learnable case); on the other hand, each $N_i$ should be small enough so that by the time we end up learning the prices, the contribution of the remaining bidders is still large enough.
\paragraph{Comparison to Dobzinski~\cite{Dobzinski16}.} We conclude this section by comparing our work with the previous best $O(\sqrt{\log{m}})$ approximation mechanism of Dobzinski~\cite{Dobzinski16}.
As stated in Part $(ii)$, our mechanism builds on a key idea from~\cite{Dobzinski16} in using fixed-price auctions as a proxy for finding ``good'' prices. At a high level, the main difference between the two works is that Dobzinski~\cite{Dobzinski16} uses fixed-price auctions to ``learn'' item prices in a \emph{single shot}, but with relatively poor accuracy. Instead, we use fixed-price auctions \emph{iteratively} in order to learn the prices of (most) items quite accurately.
Concretely, assuming that all prices $q_j\in \set{1,\gamma,\ldots,\gamma^{K}}$ for $K = O(\log m)$, Dobzinski's mechanism can be viewed as a
special case of our mechanism by setting $\beta=1$ and $\alpha = \sqrt{\log m}$: Dobzinski first uses a set $N_1$ of bidders to run $\alpha = \sqrt{\log m}$ auctions to learn the prices of items (to within an $O(\sqrt{\log{m}})$ factor),
and then runs one more fixed-price auction with these learned prices on bidders $N_{\beta+1}=N_2$. The final auction is then chosen uniformly at random from these $\sqrt{\log m}+1$ auctions to ensure truthfulness. Considering both that the prices
are learned only to within an $O(\sqrt{\log{m}})$ factor and the final auction is chosen from $\sqrt{\log{m}}+1$ auctions, the approximation ratio of this mechanism is $O(\sqrt{\log{m}})$.
Our mechanism on the other hand learns prices of items in multiple iterations ($\beta=O(\log\log{m})$ iterations) via the Learnable-Or-Allocatable Lemma. This allows us to both use a much smaller number of auctions ($\mbox{\rm poly}{(\log\log{m})}$ many),
and at the same time learn prices much more accurately (again to within a $\mbox{\rm poly}{(\log\log{m})}$ factor), which ultimately leads to our improved approximation ratio of $O((\log\log{m})^3)$.
\section{Preliminaries}\label{sec:prelim}
\paragraph{Notation.} We denote by $N$ the set of bidders and by $M$ the set of items. We use bold-face letters to denote vectors of prices and capital letters for allocations. For a price vector $\bm{\price}$ and a set of items $M' \subseteq M$, we define $\bm{\price}(M') := \sum_{j \in M'} \ensuremath{p}_j$.
For an allocation $A = (A_1,\ldots,A_n)$, we sometimes abuse the notation and use $A$ to denote the set of allocated items. A restriction of allocation $A$ to bidders in $N' \subseteq N$ and items $M' \subseteq M$
is an allocation $A'$ consisting of $A_i \cap M'$ for every $i \in N'$.
\subsection{Submodular and XOS Valuation Functions}\label{sec:valuations}
We make the standard assumption that valuation $v_i$ of each bidder $i$ is normalized, i.e., $v_i(\emptyset)=0$, and monotone, i.e., $v_i(S) \leq v_i(T)$ for every $S \subseteq T \subseteq M$.
We are interested in the case when bidders' valuations are \emph{submodular}, and hence capture the notion of ``diminishing marginal utility'' of items for the bidders.
A valuation $v$ is submodular iff $v(S \cup T) + v(S \cap T) \leq v(S) + v(T)$ for any $S,T \subseteq M$.
Submodular functions are a strict
subset of \emph{XOS} valuations also known as \emph{fractionally additive} valuations (see, e.g.~\cite{Feige06,LehmannLN06}) defined as follows.
A valuation $a$ is additive iff $a(S) = \sum_{j \in S} a(\set{j})$ for every bundle $S$. A valuation function $v$ is XOS iff there exist $t$ additive valuations $\set{a_1,\ldots,a_t}$ such that $v(S) = \max_{r \in [t]} a_r(S)$
for every $S \subseteq M$. Each $a_r$ is referred to as a \emph{clause} of $v$. If $a \in \argmax_{r \in [t]} a_r(S)$, then $a$ is called a \emph{maximizing clause} for $S$ and $a(\set{j})$ is a \emph{supporting price} of
item $j$ in this maximizing clause. We say that an allocation $A = (A_1,\ldots,A_n)$ of items to $n$ bidders with XOS valuations is \emph{supported} by prices $\bm{q} = (q_1,\ldots,q_m)$ iff each $q_j$ is a supporting price
for item $j$ in the maximizing clause of the bidder $i$ to whom $j$ is allocated, i.e., $j \in A_i$.
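As a concrete illustration of these definitions (not needed by the mechanism itself), an XOS valuation can be represented by its list of additive clauses, each a map from items to per-item values:

```python
def xos_value(clauses, S):
    """v(S) = max over clauses of the clause's additive value of S; each
    clause is a dict mapping items to their per-item values (0 if absent)."""
    return max(sum(c.get(j, 0) for j in S) for c in clauses)

def supporting_prices(clauses, S):
    """Per-item values of S in a maximizing clause; by definition these
    supporting prices sum to exactly v(S)."""
    best = max(clauses, key=lambda c: sum(c.get(j, 0) for j in S))
    return {j: best.get(j, 0) for j in S}
```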
\paragraph{Query access to valuations.} Since valuations have size exponential in $m$, a common assumption is that valuations are specified via certain queries instead, in particular, value queries and demand queries.
A value query to valuation $v$ on bundle $S$ reveals the value of $v(S)$. A demand query specifies a price vector $\bm{\price}$ on items and the
answer is the ``most demanded'' bundle under this pricing, i.e., a bundle $S \in \argmax_{S'} \{v(S')-\bm{\price}(S')\}$.
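For general XOS valuations, answering a demand query is computationally nontrivial, but for a purely additive valuation the demanded bundle is simply the set of items with positive margin. A small sketch of this special case (the numbers are illustrative, not from the text):

```python
def demand_query_additive(a, prices):
    # argmax_S { v(S) - p(S) } for additive v: keep items with a_j > p_j.
    return {j for j, aj in a.items() if aj > prices.get(j, 0.0)}

a = {1: 3.0, 2: 1.0, 3: 2.0}
p = {1: 2.0, 2: 1.5, 3: 0.5}
assert demand_query_additive(a, p) == {1, 3}    # items 1 and 3 have a_j > p_j
```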
\subsection{A Fixed-Price Auction}\label{sec:fixed-price}
We use a standard fixed-price auction as a subroutine in our mechanism. For an \emph{ordered} set $N$ of bidders, $M$ of items, and a price vector $\bm{\price}$, $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N,M,\bm{\price})$ is defined as follows.
\begin{tbox}
\underline{$\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N,M,\bm{\price})$}
\begin{enumerate}
\item Iterate over the bidders $i$ of the ordered set $N$ in the given order:
\begin{enumerate}
\item Allocate $A_i \in \argmax_{S \subseteq M} \{ v_i(S) - \bm{\price}(S) \}$ to bidder $i$ and update $M \leftarrow M \setminus A_i$.
\end{enumerate}
\item Return the allocation $A = (A_1,\ldots,A_n)$.
\end{enumerate}
\end{tbox}
It is easy to see that $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace$ can be implemented using one demand query per bidder. Its truthfulness is also easy to check as bidders have no influence on the pricing mechanism.
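The auction above can be sketched directly in Python, with each bidder represented by a demand oracle, i.e., a callable answering a demand query restricted to the remaining items. The oracles and example numbers below are illustrative only.

```python
def fixed_price_auction(bidders, items, prices):
    """bidders: ordered list of demand oracles; each maps (available items,
    prices) to a utility-maximizing bundle. One demand query per bidder."""
    remaining = set(items)
    allocation = []
    for demand in bidders:
        bundle = demand(remaining, prices) & remaining
        allocation.append(bundle)
        remaining -= bundle             # update M <- M \ A_i
    return allocation

def additive_oracle(a):
    # Demand oracle for an additive valuation (illustrative special case).
    return lambda avail, p: {j for j in avail if a.get(j, 0.0) > p.get(j, 0.0)}

bidders = [additive_oracle({1: 3.0, 2: 1.0}),
           additive_oracle({1: 5.0, 2: 4.0})]
A = fixed_price_auction(bidders, {1, 2}, {1: 2.0, 2: 2.0})
assert A == [{1}, {2}]   # bidder 1 takes item 1; item 2 is left for bidder 2
```

Note that the price vector never depends on the bids, which is why reporting the true demand is optimal for every bidder.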
The following lemma gives a key property of this auction used in our proofs. Variants of this lemma have already appeared in the literature, e.g., in~\cite{DobzinskiNS06,Dobzinski07,FeldmanGL15,Dobzinski16,EhsaniHKS18} (although we are not aware of this exact
statement). For completeness, we prove this lemma in Appendix~\ref{app:lem-fixed-price}.
\begin{lemma}\label{lem:fixed-price}
Let $A:=\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N,M,\bm{\price})$ and $\delta <1/2$.
Suppose $O$ is an allocation with supporting prices $\bm{q}$ and $M^*$ is the set of items $j$ with $\delta \cdot q_j \leq p_j < \frac{1}{2} \cdot q_j$.
Then, $\val{A} \geq \delta \cdot \bm{q}(M^*)$.
\end{lemma}
\subsection{Proof of Theorem~\ref{thm:main-mech} -- Approximation Ratio}\label{sec:thm-main-mech}
We now prove the bound on expected approximation ratio of $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$. We first need some definitions.
For an iteration $i \in [\beta+1]$, we use $\Algi{i} = (\ensuremath{Alg}_1,\ldots,\ensuremath{Alg}_n)$ to denote the allocation returned by our mechanism, conditioned on the mechanism reaching iteration $i$ and on the outcomes of
$N_1,\ldots,N_{i-1}$ as well as the choice of $\TT^{\star}$. We use $\algi{i}$ to denote the welfare of allocation $\Algi{i}$. We note that except for $i=\beta+1$, $\Algi{i}$ is a random variable.
Our main tool in this section is the following inductive lemma.
\begin{lemma}\label{lem:induction}
For $i\in [\beta+1]$,
\begin{align*}
\expect{\algi{i}} \geq \frac{1}{200 \alpha \beta^3} \cdot \left(1-\frac{1}{\beta}\right)^{\beta+1-i} \cdot \paren{ \bqi{i}(\Ci{i})- \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}(\beta + 1-i)},
\end{align*}
where the expectation is taken over the choice of $N_i,N_{i+1},\ldots,N_{\beta}$.
\end{lemma}
Before proving this lemma, we show how it immediately implies the proof of Theorem~\ref{thm:main-mech}.
\begin{proof}[Proof of Theorem~\ref{thm:main-mech} -- Approximation Ratio]
By Lemma~\ref{lem:induction} for $i=1$,
\begin{align*}
\expect{\algi{1}} ~ \geq ~ \frac{1}{200 \alpha \beta^3} \cdot \left(1-\frac{1}{\beta}\right)^{\beta} \cdot \paren{\bqi{1}(\Ci{1}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta} \cdot \beta} ~ = ~ \Omega\Big(\frac{1}{\alpha\beta^3}\Big) \cdot \paren{\bqi{1}(\Ci{1}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3}}.
\end{align*}
The only random event that we have not conditioned on in $\bqi{1}(\Ci{1})$ is the choice of $\TT^{\star}$. Let $\ensuremath{\mathcal{A}}\xspace$ denote the welfare of the allocation returned by the mechanism. We have,
\begin{align*}
\expect{\ensuremath{\mathcal{A}}\xspace} ~ = ~ \Omega\Big(\frac{1}{\alpha\beta^3}\Big) \cdot \Exp_{\TT^{\star}}\bracket{\bqi{1}(\Ci{1}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3}} ~ = ~ \Omega\Big(\frac{1}{\alpha\beta^3}\Big) \cdot \paren{\frac{\ensuremath{\mbox{\sc opt}}\xspace}{2} - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3}} ~ = ~ \Omega\Big(\frac{1}{\alpha\beta^3}\Big) \cdot \ensuremath{\mbox{\sc opt}}\xspace,
\end{align*}
where the second equality is because $\TT^{\star}$ is chosen uniformly at random to be $\TT^o$ or $\TT^e$ and the bins in these two price trees partition the prices in $\bm{q}(O)$ by Assumption~\ref{assumption1}.
As $\alpha=\Theta(1)$, and $\beta = O(\log\log{\Psi})$, which is $O(\log\log{m})$ under Assumption~\ref{assumption1},
we obtain that $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ achieves an $O((\log\log{m})^3)$ approximation in expectation. \Qed{Theorem~\ref{thm:main-mech}}
\end{proof}
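The step that absorbs $(1-\frac{1}{\beta})^{\beta}$ into the $\Omega(\frac{1}{\alpha\beta^3})$ factor uses the standard fact that this quantity is bounded below by a constant; a quick numerical sanity check:

```python
# (1 - 1/beta)^beta increases from 1/4 (at beta = 2) towards 1/e, so it can
# safely be absorbed into the Omega(1/(alpha * beta^3)) factor.
for beta in range(2, 200):
    c = (1 - 1 / beta) ** beta
    assert 0.25 <= c < 0.3679
```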
We prove Lemma~\ref{lem:induction} using backward induction. We first show that the lemma easily holds true for the base case, namely, for $i=\beta+1$, because of the performance of \ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace for correctly priced items (Lemma~\ref{lem:fixed-price}).
The heart of the induction step lies in the Learnable-or-Allocatable Lemma (Lemma~\ref{lem:main}), which states that $\expect{\bqi{i+1}(\Ci{i+1})}$ is close to $\bqi{i}(\Ci{i})$ unless we already have a good allocation. So we first use the induction hypothesis to show that the
expected welfare of the mechanism is close to $\expect{\bqi{i+1}(\Ci{i+1})}$ and then use Lemma~\ref{lem:main} to show it is close to $\bqi{i}(\Ci{i})$.
\begin{proof}[Proof of Lemma~\ref{lem:induction}]
We use backward induction on $i$. Consider the base case $i=\beta+1$, where we want to show the following (note that $\algi{\beta+1}$ is no longer a random variable):
\[
{{\algi{\beta+1}}} \geq \frac{1}{200 \alpha \beta^3} \cdot \left( \bqi{\beta+1}(\Ci{\beta+1}) \right).
\]
Since our mechanism has already reached iteration $i=\beta+1$, for every correctly priced item $j \in \Ci{\beta+1}$, the prices $p_j \in \bpricei{\beta+1}$ and $q_j \in \bqi{\beta+1}$ both belong to a leaf-node of $\TT^{\star}$, and consequently to the same
price bin. As such, by the construction of bins, $p_j \leq q_j \leq \gamma \cdot p_j$, and hence running $\ensuremath{\textnormal{\texttt{FixedPriceAuction}}}\xspace(N_{\beta+1},M,\bpricei{\beta+1}/2)$ in this step of $\ensuremath{\textnormal{\texttt{PriceLearningMechanism}}}\xspace$ results, by Lemma~\ref{lem:fixed-price}, in an allocation with welfare
\begin{align*}
\algi{\beta+1} \geq \frac{1}{\gamma} \cdot \bqi{\beta+1}(\Ci{\beta+1}) > \frac{1}{200\alpha\beta^3} \cdot \bqi{\beta+1}(\Ci{\beta+1}),
\end{align*}
by the choice of $\gamma = \Theta(\alpha\beta)$ in Eq~(\ref{eq:equations}). This proves the induction base.
We now prove the induction step. Suppose the lemma is true for iterations $\geq i+1$; we prove it for iteration $i$. Notice that with probability $1/\beta$ the mechanism outputs an allocation $\Ai{i}_{j^{\star}}$ for $j^{\star}$ chosen randomly from $[\alpha]$, and otherwise it continues to the next iteration. This implies:
\begin{align} \label{eq:wrapUp}
\expect{\algi{i}} &\geq \frac{1}{\beta} \Exp_{N_i,j^{\star}}\bracket{\val{\Ai{i}_{j^{\star}}}} + \paren{1-\frac{1}{\beta}} \cdot \Exp_{N_i}\bracket{{\algi{i+1}}} \notag \\
&\geq \frac{1}{\beta} \cdot \Exp_{N_i,j^{\star}}\bracket{\val{\Ai{i}_{j^{\star}}}} + \left(1-\frac{1}{\beta}\right)^{\beta+1-i} \Exp_{N_i}\left[\frac{1}{200 \alpha \beta^3} \left( \bqi{i+1}(\Ci{i+1}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}(\beta -i) \right)\right],
\end{align}
where the second inequality uses the induction hypothesis. To prove the induction step, we consider the two cases corresponding to Lemma~\ref{lem:main}:
\begin{enumerate}[label=(\roman*)]
\item \textbf{Learnable case}, i.e., $\expect{\bqi{i+1}(\Ci{i+1})} \geq \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}$: Combining this with Eq.~\eqref{eq:wrapUp},
\begin{align*}
\expect{\algi{i}} ~\geq~ \left(1-\frac{1}{\beta}\right)^{\beta+1-i} \cdot \Exp_{N_i}\bracket{\frac{1}{200 \alpha \beta^3} \paren{ \bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}- \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}(\beta -i) }},
\end{align*}
which implies the induction step.
\item \textbf{Allocatable case}, i.e., $\expect{\val{\Ai{i}_{j^{\star}}}} \geq \frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\alpha \cdot \beta^2}$: Combining this with Eq.~\eqref{eq:wrapUp},
\[ \expect{\algi{i}} ~ \geq ~ \frac{\ensuremath{\mbox{\sc opt}}\xspace}{200\alpha \beta^3} ~ \geq ~ \frac{1}{200 \alpha \beta^3} \left(1-\frac{1}{\beta}\right)^{\beta+1-i} \expect{\bqi{i}(\Ci{i}) - \frac{\ensuremath{\mbox{\sc opt}}\xspace}{3\beta}(\beta + 1-i)},
\]
where the last inequality uses $\ensuremath{\mbox{\sc opt}}\xspace \geq \bqi{i}(\Ci{i})$ and implies the induction step.
\end{enumerate}
This concludes the proof of the lemma. \Qed{Lemma~\ref{lem:induction}}
\end{proof}
\section{Introduction}
Random geometric network models~\cite{Penrose, FM} comprise a collection of entities
called nodes embedded in a region of typically two or three dimensions, together
with connecting links between pairs of nodes that exist with a probability related
to the node locations. They appear in numerous complex systems including
in nanoscience~\cite{Kyr}, epidemiology~\cite{Miller,Danon}, forest fires~\cite{Pueyo},
social networks~\cite{Palla,Parshani}, and wireless
communications~\cite{HABDF,Li,Wang}. Such networks exhibit a general
phenomenon called {\em percolation}~\cite{Callaway,BR}, where at a critical connection probability
(controlled by the node density), the largest connected
component (cluster) of the network jumps abruptly from being independent of system size (microscopic) to being proportional to system size (macroscopic).
Percolation phenomena are closely related to thermodynamic phase transitions where the number of nodes
$N$ goes to infinity and the critical percolation density $\rho_{c}$ is largely independent of the system size, shape,
and of the microscopic details of the model: the phenomenon of universality.
At the critical point, conformal invariance in two dimensional networks leads to detailed expressions for the probability
of a connection across general regions~\cite{Cardy} and more general connections with conformal field theory~\cite{Fuchs} and Schramm-Loewner Evolution~\cite{StAubin}.
Here, we take a different approach and are concerned with finite networks and with questions related to percolation,
but fundamentally different: What node density ensures a specified probability $P_{fc}$ that
the entire network is a single connected component (cluster), that is, {\em fully connected}? How is this probability
affected by the shape of the network domain?
These questions are crucial for many applications, including for example the design of
reliable wireless mesh networks. These consist of communication devices
(the nodes) that pass messages to each other via other nodes rather
than a central router. This allows the network to operate seamlessly over a large
area, even when nodes are moved or deactivated. A fully connected network means that every node can communicate with every other node through direct or indirect connections. Mesh networks have been developed for many communication systems, including laptops, power distribution (``smart grid'') technologies, vehicles for road safety or environmental monitoring, and robots in hazardous locations such as factories, mines and disaster areas~\cite{Li}.
For many applications of random geometric networks including those above,
{\em direct connection} between two nodes $i$ and $j$ can be well described
by a probability $H_{ij}=H(r_{ij})$, a given function of the distance between the nodes
$r_{ij}=|{\bf r}_i-{\bf r}_j|$.
Often, the nodes are mobile or otherwise not located in advance, hence we assume $N$ uniformly distributed nodes confined in
a specified $d$-dimensional region $\cal V$ with area ($d=2$) or volume ($d=3$)
denoted by $V$. The node density is then defined as $\rho=N/V$.
For reference, we will later take $H(r_{ij})=\exp[-(r_{ij}/r_{0})^\eta]$, where $r_{0}$ is a
relevant length scale, and $\eta$ determines the sharpness
of the cut-off. Note that when $\eta\rightarrow\infty$ a step function corresponding to the popular
{\em unit disk} deterministic model~\cite{DJLN} is obtained, where connections have a
fixed range $r_0$. Our derivation however is completely general and only requires that
$H_{ij}$ is sufficiently short-ranged compared to system size.
Using this as a basis, we find that contrary to common belief and practice, the
geometrical details of the confined space boundaries ({\em corners, edges} and {\em faces}) dominate the
properties of the percolation transition. Moreover, the short-range nature of $H_{ij}$ allows us to separate individual boundary components and
obtain analytic expressions for $P_{fc}$ at high densities as a sum over their contributions. We confirm this through computer simulations and argue that the substantial improvement
offered by our main result Eq.~\ref{e:bdy} can be used to predict, control, optimize or even set benchmarks
for achieving full network connectivity in a wide variety of suitable models and applications involving finite size geometries.
\begin{figure}
\centerline{\includegraphics[width=450pt]{G500and700_arx.jpg}}
\caption{\label{f:balls}
Isolated nodes shown as black balls concentrate at the boundaries of the domain and particularly near corners at higher densities. Nodes are placed randomly in a cube, with lighter
colors indicating a higher probability of being in the largest connected component. We use $\eta=2$, while the side length of the cube is $L=10r_0$.
There are 500 nodes in (a) and 700 nodes in (b).}
\end{figure}
\section{Full connection probability}
As in conventional continuum percolation theory~\cite{Stell}, we start by utilizing a cluster
expansion approach~\cite{Hill} to derive a systematic perturbative method for
determining the full connection probability $P_{fc}$ as a function of density $\rho$.
Formulation of the expansion can be summarized as follows.
The probability of two nodes being connected (or not) leads to the trivial identity
$1\equiv H_{ij}+ (1-H_{ij})$.
Multiplying over all links expresses the probabilities $\mathcal{H}_g$ of all
$2^{N(N-1)/2}$ possible graphs $g$,
\begin{equation}
1= \prod_{i<j}[H_{ij}+ (1-H_{ij})]\equiv \sum_{g}\mathcal{H}_{g}.
\end{equation}
Collecting terms according to the largest cluster size, we get
\begin{equation}\label{2}
1= \sum_{g\in \mathcal{G}_N}\mathcal{H}_{g} + \sum_{g\in \mathcal{G}_{N-1}}\mathcal{H}_{g} + \ldots + \sum_{g\in \mathcal{G}_{1}}\mathcal{H}_{g},
\end{equation}
where $\mathcal{G}_{n}$ is the set of all possible graphs with largest cluster of size
$n\in\{1\ldots N\}$. The first term on the right hand side is the probability of being fully
connected given a specific configuration of nodes. The average over all random
configurations $\langle\rangle\equiv V^{-N}\int_{\mathcal{V}} d^{N}{\bf r}$ of this
quantity is thus the overall probability of obtaining a fully connected network $P_{fc}$.
Moreover, the main idea conveyed by Eq.~(\ref{2}) is that at high densities, full connectivity is most likely to be broken by a single isolated node (the $\mathcal{G}_{N-1}$ term); this is sufficient
detail for most applications. Further corrections incorporate the probability of several isolated
single nodes and smaller clusters of nodes, for which a systematic expansion can be
developed~\cite{CDG11}.
Averaging Eq.~(\ref{2}) over all configurations and noting that to leading order the $N-1$ cluster
is fully connected, and that all nodes are identical, the first order approximation becomes
\begin{eqnarray}\label{first}
P_{fc}&\approx&1-\langle\sum_{g\in\mathcal{G}_{N-1}}\mathcal{H}_g\rangle\nonumber\\
&=&1-N\langle\prod_{j=2}^N(1-H_{1j})\rangle\\
&=&1-\rho\int_{\cal V}\left(1-\frac{M({\bf r}_1)}{V}
\right)^{N-1} {\rm d}{\bf r}_1\;\;,\nonumber
\end{eqnarray}
where the ``connectivity mass'' accessible from a node placed at ${\bf r}_1$ is given by
\begin{equation}\label{mass}
M({\bf r}_1)=\int_{\cal V} H(r_{12})\,{\rm d}{\bf r}_2 .
\end{equation}
Assuming that $V\gg \rho M({\bf r}_1)^2$ for any ${\bf r}_1$ (which is reasonable if the system is significantly larger than $r_{0}$ at moderate densities) and that the number of nodes $N$ is large, Eq.~(\ref{first}) simplifies to
\begin{equation}\label{e:MA}
P_{fc}\approx 1-\rho\int_{\cal V} e^{-\rho M({\bf r}_1)}{\rm d}{\bf r}_1\;\;.
\end{equation}
This equation is equivalent to Eq.~(8) in Mao and Anderson~\cite{Mao} which
was derived for the specific case of a square domain. Following numerous
studies by probabilists and engineers~\cite{Penrose,FM}, these authors however
assumed an exponential scaling of the system size $V$ with $\rho$, which essentially renders boundary effects negligible. Scaling the system in such a way is a common approach
as it corresponds to the limit of infinite density at fixed connection probability;
in practice, however, this limit is approached only for unphysically large volumes.
In contrast, we do not assume exponential growth of $V$, and also consider far more
general geometries in any dimension $d\geq 1$.
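To illustrate the quantity under study, the following self-contained Monte Carlo sketch (ours, not from the paper) estimates $P_{fc}$ directly by sampling node positions in a unit square and links with $H(r)=\exp[-(r/r_0)^2]$; all parameter values are arbitrary.

```python
import math, random

def fully_connected(pts, r0, rng):
    """One random-graph draw: link i,j with probability exp(-(r_ij/r0)^2),
    then test for a single spanning cluster via union-find."""
    n = len(pts)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            if rng.random() < math.exp(-r2 / r0 ** 2):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def estimate_pfc(n_nodes, r0, trials=200, seed=0):
    """Monte Carlo estimate of P_fc for uniform nodes in the unit square."""
    rng = random.Random(seed)
    hits = sum(
        fully_connected([(rng.random(), rng.random()) for _ in range(n_nodes)],
                        r0, rng)
        for _ in range(trials))
    return hits / trials

# A short connectivity range leaves the network fragmented; a long one makes
# full connection very likely at the same node density.
assert estimate_pfc(40, 0.02) < estimate_pfc(40, 0.4)
```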
Without an exponentially growing volume $V$, the behavior of the
full connection probability at high densities is qualitatively different: It is
controlled by the exponential in Eq.~(\ref{e:MA}), and hence node positions
${\bf r}_1$ where the connectivity mass is small, that is,
near the boundary of $\cal V$.
Thus in contrast to the usual situation in statistical mechanics, the boundaries (and in particular corners) are important, and we will see they in fact dominate.
We illustrate this in Fig.~\ref{f:balls} where nodes are placed randomly inside a cube and an average over a large number of possible graphs gives the connectivity of each node. Notice that isolated and hard-to-connect nodes shown as dark balls concentrate near the boundaries of the domain and particularly near corners at higher densities.
This observation forms
the basis of our work, and has led to a radically different understanding
of connectivity in confined geometries which we now detail further.
\section{Boundary effects}
The contributions to the integrals in Eq.~(\ref{e:MA}) come from ${\bf r}_1$
at {\em boundary components} $B\subset \cal V$ of dimension $d_B$, for example the bulk, the faces, and right angled edges and corners of a cube,
with $d_B=3,2,1$ and $0$ respectively. The short-range nature of $H_{ij}$ allows us to isolate each boundary component, whilst to leading order the connectivity mass splits into independent
radial and angular integrals, depending only on the local geometry of $B$ and hence
\begin{equation}\label{e:sgf}
M_B=M({\bf r}_B)=\omega_B\int_0^\infty H(r) r^{d-1}{\rm d}r\;\;,
\end{equation}
where $\omega_B$ is the angle ($d=2$) or solid angle ($d=3$) subtended by $B$. For example, if ${\bf r}_{B}$ is near a corner of the cube then $\omega_B= (4\pi)/8$, while near an edge $\omega_B= (4\pi)/4$, near faces $\omega_B= (4\pi)/2$ and $\omega_B= (4\pi)$ for the bulk interior. Hence, from Eq.~(\ref{e:MA}) we see that corner contributions to $P_{fc}$ as a function of $\rho$ are exponentially larger than edge contributions which are themselves exponentially larger than face contributions etc. This simple argument shows that the dominant contribution to $P_{fc}$ at high densities comes from the ``pointiest'' corners.
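For $H(r)=\exp[-(r/r_0)^2]$ and $d=3$, the radial integral in Eq.~(\ref{e:sgf}) has the closed form $\sqrt{\pi}\,r_0^3/4$, so the bulk mass ($\omega_B=4\pi$) is $\pi^{3/2} r_0^3$. A quick numerical check of this closed form (our sketch):

```python
import math

def radial_integral(r0, d=3, rmax=10.0, steps=100000):
    """Trapezoid approximation of the radial integral
    int_0^inf exp(-(r/r0)^2) r^(d-1) dr; the tail beyond rmax*r0 is negligible."""
    h = rmax * r0 / steps
    total = 0.0
    for k in range(steps + 1):
        r = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(-(r / r0) ** 2) * r ** (d - 1)
    return total * h

r0 = 1.0
exact = math.sqrt(math.pi) * r0 ** 3 / 4        # closed form for eta = 2, d = 3
assert abs(radial_integral(r0) - exact) < 1e-6

# Bulk solid angle omega = 4*pi gives M = pi^(3/2) * r0^3.
assert abs(4 * math.pi * radial_integral(r0) - math.pi ** 1.5 * r0 ** 3) < 1e-4
```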
Expanding $H(r_{12})$ about ${\bf r}_{2}$ near the corresponding boundary component we obtain a next to leading order expansion for $M({\bf r}_{B})$ which we can then use to approximately evaluate the integral in Eq. (\ref{e:MA}). Ignoring exponentially smaller correction terms and combining all boundary contributions we arrive at
our main result
\begin{equation}\label{e:bdy}
P_{fc}\approx 1-\rho\sum_BG_BV_B e^{-\rho M_B}\;\;,
\end{equation}
where $V_B$ is the $d_B$-dimensional ``volume'' of each component (equal to one in the case of a $0$-dimensional corner and $V$ when $d_B=d$), $G_B$ is a geometrical factor depending on $B$ and implicitly on $H$ and $M_B$ is as in (\ref{e:sgf}); see examples below.
Notice that Eq.~(\ref{e:bdy}) is completely general as we have only assumed a short-ranged $H_{ij}$ and not used its specific form. Moreover, it also does not depend on using Euclidean distance and holds in any dimension $d\geq1$ and geometry where the lack of connectivity is dominated by a situation involving an $N-1$ cluster and a single disconnected node. Hence Eq.~(\ref{e:bdy}) is a powerful and useful multi-purpose tool
for analyzing full network connectivity at high densities in a wide variety of suitable models and applications involving finite size geometries.
For example, in the context of single input single output (SISO) wireless communication
channels and a Rayleigh fading model \cite{TV}, information theory predicts
$H(r_{ij})=\exp[-(r_{ij}/r_0)^\eta]$ with $\eta$ an environment and wavelength dependent decay parameter equal to $2$ for free propagation, increasing to $\eta\approx 4$ for a cluttered environment, while $r_{0}$ depends on the minimum outage rate threshold.
For nodes confined to a cube of side length $L$ and $\eta=2$ we find $V_B=L^{d_B}$, $G_B=(2^{3-d_B-1}/\pi\rho r_0^2)^{3-d_B}$, and $M_B=(r_0\sqrt{\pi})^32^{d_B-3}$ with contributions from each of the eight corners, twelve edges, six faces and bulk. However the derivation is general: Once $G_B$ and $M_B$ have been evaluated for these boundary components (right angled edges etc.) by standard asymptotic analysis of the relevant
integrals, they apply to any geometry with these features and length scales significantly
larger than $r_0$. This independence on the large scale geometry
follows from the short-range nature of $H_{ij}$ and is a type of universality allowing for the calculation of $P_{fc}$ in complex high dimensional geometries without increased difficulty.
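As a concrete illustration of Eq.~(\ref{e:bdy}), the sketch below evaluates each boundary contribution $\rho\, n_B G_B V_B e^{-\rho M_B}$ for the cube using the $\eta=2$ formulas quoted above; the density and side length are arbitrary example values.

```python
import math

def cube_pout_terms(rho, L, r0=1.0):
    """Per-component contributions rho * n_B * G_B * V_B * exp(-rho * M_B)
    to P_out = 1 - P_fc for a cube of side L, eta = 2 (formulas from the text)."""
    counts = {0: 8, 1: 12, 2: 6, 3: 1}          # corners, edges, faces, bulk
    terms = {}
    for dB, nB in counts.items():
        VB = L ** dB
        GB = (2 ** (2 - dB) / (math.pi * rho * r0 ** 2)) ** (3 - dB)
        MB = (r0 * math.sqrt(math.pi)) ** 3 * 2 ** (dB - 3)
        terms[dB] = rho * nB * GB * VB * math.exp(-rho * MB)
    return terms

t = cube_pout_terms(rho=10.0, L=7.0)
# At high density the 'pointiest' components dominate:
assert t[0] > t[1] > t[2] > t[3]
assert 0.0 < sum(t.values()) < 1.0
```

The ordering corners $>$ edges $>$ faces $>$ bulk at high density reflects the exponentially larger corner contribution discussed above.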
\begin{figure}
\centerline{\includegraphics[width=450pt]{cube_arx.jpg}}
\caption{\label{f:cube}
(a) Comparison of the full analytic prediction of Eq.~(\protect\ref{e:bdy}) (solid curve) with direct numerical
simulation of the random network in a cube of side length $7r_0$ (jagged curve). The dashed line corresponds to the bulk contribution (previous theory).
(b) Contributions from the bulk (dotted blue, left), faces (red), edges (yellow) and
corners (green, right), together with the total (solid blue) and numerical simulation
(black jagged curve), showing the dominance of the corners at the highest densities and
good agreement between theory and simulation at moderate
to high densities. Here it is convenient to plot
the outage probability $P_{out}=1-P_{fc}$.}
\end{figure}
The substantial improvement offered by Eq.~(\ref{e:bdy}) becomes clear when compared with the ``bulk'' contribution corresponding to current conventional wisdom shown in Fig.~\ref{f:cube}a) for a network confined to a cube. Fig.~\ref{f:cube}b) further demonstrates the inaccuracy of the bulk model as well as the benefits of including boundary effects when analyzing network connectivity in confined geometries.
\begin{figure}
\centerline{\includegraphics[width=450pt]{triangles_arx.jpg}}
\caption{\label{f:tri}
Corner contributions in triangles with equal area and perimeter:
Comparison of theory with direct simulation, as in Fig.~\protect\ref{f:cube}.
The red triangle has side lengths of 26.88, 15.44 and 15.44 in units of the
connectivity length scale $r_0$, while the
blue triangle has side lengths of 8.40, 24.68 and 24.68.
The black dashed lines correspond to the equal bulk (left curve) and
bulk$+$edge (right curve) contributions while neglecting corner contributions. The colored curves give the
total (including crucial corner) contributions for each triangle. Both theory
and simulation are plotted, showing excellent agreement with the numerical simulations (jagged curves) which cover them completely for $\rho>4$.}
\end{figure}
We can go beyond simple geometries restricted to right-angled corners. Consider the case
of a two dimensional triangle
with general angles $0<\omega_B<\pi$. The relevant
integrals for this case come to $M_B=r_0^2\omega_B/2$, with $G_B=4/\pi\rho^2r_0^2\sin\omega_B$
for the corners and $G_B=(2^{2-d_B-1}/\pi\rho r_0^2)^{2-d_B}$ for the edges and bulk and can be generalized easily to higher dimensions.
Fig.~\ref{f:tri} shows two triangles chosen to have
identical perimeter and area; the connectivity at a given
density differs only due to the corner angles and agrees
perfectly with the full theory of Eq.~(\protect\ref{e:bdy}). A bulk theory,
even supplemented with edge contributions, is clearly
incapable of explaining the difference between the
connectivities of networks in these two triangles.
Moreover, such a situation motivates inverse problems, similar to ``hearing the shape of a drum" \cite{Kac66} by attempting to determine the size and shape details of an unknown domain containing a random network.
\section{Discussion}
An important aspect of the theory presented here is how it affects the design of real life random geometric networks.
For wireless mesh networks, the lack of connectivity near the boundaries can be mitigated
by increasing the signal power, the number of spatial channels, or by constructing a hybrid network with
a regular array of fixed nodes along the boundaries as well as randomly placed nodes in the interior. In each of these cases, the
design can now be analyzed given information about the cost and
connectivity function $H(r)$ and of course the desired connectivity region.
Conversely, boundary effects can be harnessed to avoid full connectivity if desired. For example in the case of forest fires \cite{Pueyo}
we have a prediction for the number of unburnt regions as a function of the geometric landscape and environment
parameters (for example angles between fire-lanes and/or natural boundaries), again given a specific model for
connectivity that depends on the type of vegetation, temperature, moisture content etc. Similar models could be
devised for the spread of epidemics \cite{Miller} or mobile phone viruses \cite{Wang} where boundaries are embedded in a more complex (possibly non-Euclidean) space yet
$H_{ij}$ is still short-ranged.
We examined connectivity in confined geometries and illustrated the importance of the often neglected boundary effects.
We then derived a general high density expansion Eq.~(\ref{e:bdy}) for the probability of full connectivity $P_{fc}$ assuming only a short-ranged connectivity function relative to system size and
showed that it displays universal features allowing for its easy calculation in complex geometries. This we have confirmed through computer simulations and argued that our approach
is well placed to facilitate efficiency in design in a variety of physical applications ranging from wireless networks to forest fire-lanes. Appropriate modifications of our theory can aid the understanding of small boundary-dominated systems such as for example the electrical conduction through carbon nanotubes in a
polymer matrix~\cite{Kyr} but possibly larger systems such as highly connected social and financial networks~\cite{Palla,Parshani}.
\section*{Acknowledgments}
The authors thank the Directors of the Toshiba Telecommunications Research Laboratory for
their support, and Charo Del Genio, Jon Keating and Mark Walters for helpful discussions.
\section{Introduction}
\label{intro}
\begin{figure}
\vspace*{-10pt}
\centering
\includegraphics[width=0.45\textwidth]{./images/loc2.pdf}
\caption{Qualitative results on MVTec 3D-AD~\cite{mvtec3d}. The two left columns show the input, the third the ground truth and the fourth our anomaly detection. Images are masked by foreground extraction. Our method is able to successfully combine RGB and 3D data to detect defects even if only present in one data domain.}
\label{fig:loc}
\vspace*{-10pt}
\end{figure}
To ensure product quality and safety standards in industrial manufacturing processes, products are traditionally inspected by humans, which is expensive and unreliable in practice.
For this reason, image-based methods for automatic inspection have been developed recently using advances in deep learning \cite{ae_ssim, itae, cutpaste, differnet, csflow}.
Since there are no or only very few negative examples, \ie erroneous products, available, especially at the beginning of production, and new errors occur repeatedly during the process, traditional supervised algorithms cannot be applied to this task.
Instead, it is assumed that only data of a \textit{normal} class of defect-free examples is available in training, which is termed semi-supervised \emph{anomaly detection}.
This work and others~\cite{ae_ssim, cflow, patchcore, differnet, csflow} specialize for industrial anomaly detection.
This domain differs from others in that normal examples are similar to each other and to defective products.
In this work, we not only show the effectiveness of our method for common RGB images but also on 3D data and their combination as shown in Figure \ref{fig:loc}.
Several approaches try to solve the problem by so-called \textit{student-teacher networks} \cite{st_bergmann2, st_bergmann1, georgescu2021anomaly, wang2021student, xiao2021unsupervised}.
First, the teacher is trained on a pretext task to learn a semantic embedding.
In a second step, the student is trained to match the output of the teacher.
The motivation is that the student can only match the outputs of the teacher on normal data since it is trained only on normal data.
The distance between the outputs of student and teacher is used as an indicator of an anomaly at test-time.
It is assumed that this distance is larger for defective examples compared to defect-free examples.
However, this is not necessarily the case in previous work: we observe that both teacher and student are conventional (i.e.\ non-injective) neural networks with similar architecture.
A student with a similar architecture tends to undesired generalization, such that it extrapolates outputs similar to the teacher's for inputs that are out of the training distribution, which, in turn, gives an undesired low anomaly score.
This effect is shown in Figure \ref{fig:teaser} using an explanatory experiment with one-dimensional data:
If the same neural network with one hidden layer is used for student and teacher, the outputs are still similar for anomalous data in the yellow area of the upper plot.
In contrast, the outputs for anomalies diverge if an MLP with 3 hidden layers is used as the student.
In general, it is not guaranteed that an out-of-distribution input will cause a sufficiently large change in both outputs due to the missing injectivity of common neural networks.
In contrast to normalizing flows, conventional networks have no guarantee to provide out-of-distribution outputs for out-of-distribution inputs.
These problems motivate us to use an asymmetric student-teacher pair (\emph{AST}):
A bijective normalizing flow \cite{nf} acts as a teacher while a conventional sequential model acts as a student.
In this way, the teacher is guaranteed to be sensitive to changes in the input caused by anomalies.
Furthermore, the usage of different architectures and thus of different sets of learnable functions enforces the effect of distant outputs for out-of-distribution samples.
\begin{figure}
\vspace*{-10pt}
\centering
\includegraphics[width=0.48\textwidth]{./images/symm_stud.pdf}
\includegraphics[width=0.48\textwidth]{./images/asymm_stud.pdf}
\caption{Toy example with mini-MLPs: The students were optimized to match the outputs in the grey area.
While the symmetric student-teacher pair (top) generalizes unintentionally and maps anomalous data very similarly, the distance between student and teacher outputs can be used for anomaly detection in the asymmetric student-teacher pair (bottom).}
\label{fig:teaser}
\vspace*{-10pt}
\end{figure}
As a pretext task for the teacher, we optimize it to transform the distribution of image features and/or depth maps into a normal distribution via maximum likelihood training, which is equivalent to density estimation~\cite{realnvp}.
This optimization itself is used in previous work~\cite{cflow, differnet, csflow} for anomaly detection by utilizing the likelihoods as an anomaly score:
A low likelihood of being normal should be an indicator of anomalies.
However, Le and Dinh~\cite{le2021perfect} have shown that even perfect density estimators cannot guarantee anomaly detection.
For example, just reparameterizing the data would change the likelihoods of samples.
Furthermore, unstable training leads to misestimated likelihoods.
We show that our student-teacher distance is a better measure for anomaly detection compared to the obtained likelihoods by the teacher.
The advantage over using the normalizing flow by itself for anomaly detection is that possible misestimations of the likelihood can be compensated for:
If a low likelihood of being normal is incorrectly assigned to normal data, this output can be predicted by the student, thus still resulting in a small anomaly score.
If a high likelihood of being normal is incorrectly assigned to anomalous data, this output cannot be predicted by the student, again resulting in a high anomaly score.
In this way, we combine the benefits of student-teacher networks and density estimation with normalizing flows.
We further enhance the detection by a positional encoding and by masking the foreground using 3D images.
Our contributions are summarized as follows:
\begin{packed_enum}
\item Our method avoids the undesired generalization from teacher to student by having highly asymmetric networks as a student-teacher pair.
\item We improve student-teacher networks by incorporating a bijective normalizing flow as a teacher.
\item Our AST outperforms the density estimation capability of the teacher by utilizing student-teacher distances.
\item Code is available on GitHub\footnote{\url{https://github.com/marco-rudolph/ast}}.
\end{packed_enum}
\section{Related Work}
\subsection{Student-Teacher Networks}
Originally, the motivation for having a student network that learns to regress the output of a teacher network was to distill knowledge and save model parameters \cite{hinton2015distilling, mirzadeh2020improved, tian2019contrastive}.
In this case, a student with clearly fewer parameters compared to the teacher almost matches the performance.
Some previous work exploits the student-teacher idea for anomaly detection by using the distance between their outputs:
The larger the distance, the more likely the sample is anomalous.
Bergmann et al.~\cite{st_bergmann1} propose an ensemble of students which are trained to regress the output of a teacher for image patches.
This teacher is either a distilled version of an ImageNet-pre-trained network or trained via metric learning.
The anomaly score is composed of the student uncertainty, measured by the variance of the ensemble, and the regression error.
Wang et al.~\cite{wang2021student} extend the student task by regressing a feature pyramid rather than a single output of a pre-trained network.
Bergmann and Sattlegger \cite{st_bergmann2} adapt the student-teacher concept to point clouds.
Local geometric descriptors are learned in a self-supervised manner to train the teacher.
Xiao et al.~\cite{xiao2021unsupervised} let teachers learn to classify applied image transformations.
The anomaly score is a weighted sum of the regression error and the class score entropy of an ensemble of students.
By contrast, our method requires only one student and uses the regression error as the sole criterion to detect anomalies.
All of the existing work is based on identical and conventional (non-injective) networks for student and teacher, which causes undesired generalization of the student as explained in Section~\ref{intro}.
\subsection{Density Estimation}
Anomaly detection can be viewed from a statistical perspective:
By estimating the density of normal samples, anomalies are identified through a low likelihood.
The concept of density estimation for anomaly detection can be simply realized by assuming a multivariate normal distribution.
For example, the Mahalanobis distance of pre-extracted features can be applied as an anomaly score~\cite{padim, rippel} which is equivalent to computing the negative log likelihood of a multivariate Gaussian.
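As a minimal sketch of this baseline (illustrative only, not the implementation of~\cite{padim, rippel}):

```python
import numpy as np

def fit_gaussian(train_feats):
    """Estimate mean and precision matrix of normal features (rows = samples)."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize to ensure invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, prec):
    """Squared Mahalanobis distance: the Gaussian NLL up to constants."""
    d = x - mu
    return d @ prec @ d
```

Samples far from the estimated mean (relative to the covariance) receive a high score and are flagged as anomalous.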
However, this method is very inflexible with respect to the training distribution, since the assumption of a Gaussian distribution is a strong simplification.
To this end, many works try to estimate the density more flexibly with a Normalizing Flow~(NF) \cite{nf_trajectory, cflow, differnet, csflow, nf_deep, nf_time_series}.
Normalizing Flows are a family of generative models that map bijectively by construction \cite{inn, realnvp,nf, tomINN} as opposed to conventional neural networks.
This property enables exact density estimation in contrast to other generative models like GANs~\cite{gan} or VAEs~\cite{vae}.
Rudolph et al.~\cite{differnet} make use of this concept by modeling the density of multi-scale feature vectors obtained by pre-trained networks.
Subsequently, they extend this to multi-scale feature maps instead of vectors to avoid information loss caused by averaging~\cite{csflow}.
To handle differently sized feature maps, so-called cross-convolutions are integrated.
A similar approach by Gudovskiy et al.~\cite{cflow} computes a density on feature maps with a conditional normalizing flow, where likelihoods are estimated on the level of local positions which act as a condition for the NF.
A common problem of normalizing flows is unstable training, which trades off against the flexibility of the density estimation~\cite{cinn}.
However, even the ground truth density estimation does not provide perfect anomaly detection, since the density strongly depends on the parameterization~\cite{le2021perfect}.
\subsection{Other Approaches}
\hspace{-4.2mm}\textbf{Generative Models}\\
Many approaches try to tackle anomaly detection with generative models other than normalizing flows, such as autoencoders~\cite{ae_ssim, itae, memae, sae,dsebm, adae} or GANs~\cite{ganomaly, ADGAN, anogan}.
This is motivated by the inability of these models to generate anomalous data.
Usually, the reconstruction error is used for anomaly scoring.
Since the magnitude of this error depends highly on the size and structure of the anomaly, these methods underperform in the industrial inspection setting.\\
\textbf{Anomaly Synthesization}\\
Some work reformulates semi-supervised anomaly detection as a supervised problem by synthetically generating anomalies.
Either parts of training images~\cite{cutpaste, nsa, anoseg} or random images~\cite{draem} are patched into normal images.
Synthetic masks are created to train a supervised segmentation.
A disadvantage of these methods is that the synthetic anomalies cannot imitate many real anomalies.\\
\textbf{Traditional Approaches}\\
In addition to deep-learning-based approaches, there are also classical approaches for anomaly detection.
The one-class SVM~\cite{ocsvm} is a max-margin method optimizing a function that assigns a higher value to high-density regions than to low-density regions.
Isolation forests~\cite{isoforest} are based on decision trees, where a sample is considered anomalous if it can be separated from the rest of the data by a few constraints.
Local Outlier Factor~\cite{lof} compares the density of a point with that of its neighbors.
A comparatively low density of a point identifies anomalies.
Traditional approaches usually fail in visual anomaly detection due to the high dimensionality and complexity of the data.
This can be circumvented by combining them with other techniques:
For example, the distance to the nearest neighbor, as first proposed by Amer and Goldstein~\cite{amer2012nearest}, is used as an anomaly score after features are extracted by a pre-trained network~\cite{nazare,patchcore}.
Alternatively, point cloud features~\cite{fpfh} or density-based clustering~\cite{DocBra2015, DocBra2016} can be used to characterize a point's neighborhood and label it accordingly.
However, the runtime is linearly related to the dataset size.
\section{Method}
\label{overview}
Our goal is to train two models, a student model $f_s$ and a teacher model $f_t$, such that the student learns to regress the teacher outputs on defect-free image data only.
The training process is divided into two phases:
First, the teacher model is optimized to transform the training distribution $p_X$ to a normal distribution $\mathcal{N}(0,\,I)$ bijectively with a normalizing flow.
Second, the student is optimized to match the teacher outputs by minimizing the distance between $f_s(x)$ and $f_t(x)$ of training samples $x \in X$.
We apply the distance for anomaly scoring at test-time, which is further described in Section~\ref{student}.
We follow~\cite{st_bergmann1, cflow, csflow} and use features extracted by a network pre-trained on ImageNet~\cite{imagenet} instead of RGB images as the direct input to our models.
Such networks have been shown to be universal feature extractors whose outputs carry relevant semantics for industrial anomaly detection.
In addition to RGB data, our approach is easily extendable to multimodal inputs including 3D data.
If 3D data is available, we concatenate depth maps to these features along the channels.
Since the feature maps are reduced in height and width compared to the depth map resolution by a factor $d$, we apply pixel-unshuffling~\cite{pixelunshuffle} by grouping a depth image patch of $d \times d$ pixels as one pixel with $d^2$ channels to match the dimensions of the feature maps.
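Pixel-unshuffling is a pure rearrangement and can be sketched for a single-channel depth map as follows (a minimal sketch; actual implementations such as~\cite{pixelunshuffle} operate on batched multi-channel tensors):

```python
import numpy as np

def pixel_unshuffle(depth, d):
    """Rearrange a (H, W) depth map into (H//d, W//d, d*d):
    every d x d spatial patch becomes one pixel with d**2 channels."""
    h, w = depth.shape
    assert h % d == 0 and w % d == 0
    x = depth.reshape(h // d, d, w // d, d)  # split both axes into blocks
    x = x.transpose(0, 2, 1, 3)              # -> (h//d, w//d, d, d)
    return x.reshape(h // d, w // d, d * d)

depth = np.arange(16.0).reshape(4, 4)
print(pixel_unshuffle(depth, 2)[0, 0])  # top-left 2x2 patch: [0. 1. 4. 5.]
```

No information is lost: spatial resolution is traded for channels so the depth map matches the feature map dimensions.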
Any 3D data that may be present is used to extract the foreground.
This is straightforward and reasonable whenever the background is static or planar, which is the case for almost all real applications.
Pixels that are in the background are ignored when optimizing the teacher and student by masking the distance and negative log likelihood loss, which are introduced in Sections \ref{teacher} and \ref{student}.
If no 3D data is available, the whole image is considered as foreground.
Details of the foreground extraction are given in Section~\ref{preprocessing}.
Similar to~\cite{cflow}, we use a sinusoidal positional encoding~\cite{posenc} for the spatial dimensions of the input maps as a condition for the normalizing flow $f_t$.
In this way, the occurrence of a feature is related to its position to detect anomalies such as misplaced objects.
An overview of our pipeline is given in Figure~\ref{fig:overview}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{./images/overview.pdf}
\caption{
Overview of our pipeline: Teacher and student receive image features and/or depth maps as input which is conditioned by a positional encoding.
First, the teacher represented by a normalizing flow is optimized to reduce the negative log likelihood loss that may be masked by a foreground map from 3D.
Second, the student is trained to match the teacher outputs by minimizing the (masked) distance between them.
}
\label{fig:overview}
\vspace*{-10pt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{./images/student_teacher.pdf}
\caption{Model architecture of teacher (left side) and student (right side). While the teacher is a Real-NVP-based~\cite{realnvp} conditional normalizing flow~\cite{cinn}, the student is a conventional convolutional neural network.}
\label{fig:architecture}
\end{figure}
\subsection{Teacher}
\label{teacher}
Similar to \cite{cflow, differnet, csflow}, we train a normalizing flow based on Real-NVP~\cite{realnvp} to transform the training distribution to a normal distribution $\mathcal{N}(0,\,I)$.
In contrast to previous work, we do not use the outputs to compute likelihoods and thereby obtain anomaly scores directly.
Instead, we interpret this training as a pretext task to create targets for our student network.
The normalizing flow consists of multiple subsequent affine coupling blocks.
Let the input $x \in \mathbb{R}^{w\times h \times n_{\mathrm{feat}}}$ be feature maps with $n_{\mathrm{feat}}$ features of size $w\times h$.
Within these blocks, the input $x$ is split evenly along the channels into the parts $x_1$ and $x_2$ after applying a randomly chosen channel permutation that remains fixed.
\interfootnotelinepenalty=10000
These parts are each concatenated with a positional encoding $c$ as a static condition.
Both are used to compute scaling and shift parameters for an affine transformation of the counterpart by having subnetworks $s_i$ and $t_i$ for each part:
\begin{equation}
\begin{aligned}
y_2 = x_2 \odot e^{s_1([x_1, c])} + t_1([x_1, c]) \\
y_1 = x_1 \odot e^{s_2([y_2, c])} + t_2([y_2, c]),
\end{aligned}
\label{aff}
\end{equation}
where $\odot$ is the element-wise product and $[\cdot , \cdot]$ denotes concatenation.
The output of one coupling block is the concatenation of $y_1$ and $y_2$ along the channels.
Note that the number of dimensions of input and output does not change due to invertibility.
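The forward and inverse pass of one such coupling block can be sketched with random linear maps standing in for the learned subnetworks $s_i$ and $t_i$ (an illustrative sketch, not the actual implementation; the second half is transformed conditioned on the already-updated counterpart, which is what makes the closed-form inverse below possible):

```python
import numpy as np

rng = np.random.default_rng(0)
n_half, n_cond = 4, 2

# Random linear maps stand in for the learned subnetworks s_i, t_i.
Ws1 = 0.1 * rng.standard_normal((n_half, n_half + n_cond))
Wt1 = 0.1 * rng.standard_normal((n_half, n_half + n_cond))
Ws2 = 0.1 * rng.standard_normal((n_half, n_half + n_cond))
Wt2 = 0.1 * rng.standard_normal((n_half, n_half + n_cond))

def forward(x1, x2, c):
    h1 = np.concatenate([x1, c])            # first half + condition
    y2 = x2 * np.exp(Ws1 @ h1) + Wt1 @ h1   # affine transform of x2
    h2 = np.concatenate([y2, c])            # updated half + condition
    y1 = x1 * np.exp(Ws2 @ h2) + Wt2 @ h2   # affine transform of x1
    return y1, y2

def inverse(y1, y2, c):
    h2 = np.concatenate([y2, c])
    x1 = (y1 - Wt2 @ h2) * np.exp(-(Ws2 @ h2))
    h1 = np.concatenate([x1, c])
    x2 = (y2 - Wt1 @ h1) * np.exp(-(Ws1 @ h1))
    return x1, x2
```

Because each affine transform only depends on the respective other (already known) half and the condition, the block can be inverted exactly, step by step.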
To stabilize training, we apply alpha-clamping of scalar coefficients as in~\cite{cinn} and the gamma-trick as in \cite{csflow}.
Using the change-of-variable formula with $z$ as our final output
\begin{equation}
\label{eqn:change_of_variables}
p_X(x) = p_Z(z) \@ifstar{\oldabs}{\oldabs*}{
\det{
\frac{\partial z}
{\partial x}
}}\quad ,
\end{equation}
we minimize the negative log likelihood with $p_Z$ as the normal distribution $\mathcal{N}(0,\,I)$ by optimizing the mean of
\begin{equation}
\begin{aligned}
\mathcal{L}_{ij}^t = -\log{p_X(x_{ij})} = \frac{\@ifstar{\oldnorm}{\oldnorm*}{z_{ij}}_2^2}{2} - \log{\@ifstar{\oldabs}{\oldabs*}{
\det{
\frac{\partial z_{ij}}
{\partial x_{ij}}
}}}
\end{aligned}
\label{formula:loglikelihood}
\end{equation}
over all (foreground) pixels at pixel position $(i,j)$.
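Since the Jacobian of a stack of affine coupling blocks is triangular, its log-determinant is simply the sum of all scale outputs, and the per-pixel loss reduces to a one-liner (a sketch; the constant term of the Gaussian density is dropped, as it does not affect optimization):

```python
import numpy as np

def teacher_nll(z, log_det_jac):
    """Per-pixel negative log likelihood under N(0, I), up to an additive
    constant: ||z||^2 / 2 minus the log |det| of the Jacobian.
    For affine coupling blocks, log_det_jac is the sum of the scale outputs."""
    return 0.5 * np.sum(z ** 2) - log_det_jac

# Sanity check: an identity flow (z = x, log|det| = 0) just measures ||x||^2 / 2.
print(teacher_nll(np.array([1.0, 1.0]), 0.0))  # 1.0
```

Minimizing this quantity pushes the teacher outputs toward a standard normal distribution while keeping the transformation volume-preserving on average.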
\begin{table}
\small
\begin{center}
\footnotesize
\begin{tabular}{|l|c|c|}
\hline
Dataset & MVTec AD & MVTec 3D-AD\\
Alias & (MVT2D) & (MVT3D) \\ \hline
RGB images & \checkmark & \checkmark\\
3D scans & $\times$ & \checkmark \\
\#categories & 15 & 10 \\
image side length & 700-1024 & 400-800 \\
\#train samples per cat. & 60-320 & 210-300\\
\#test samples per cat. & 42-160 & 100-159\\
\#defect types per cat. & 1-7 & 3-5\\
\hline
\end{tabular}
\end{center}
\vspace{-0.25cm}
\caption{Overview of the used datasets.}
\vspace{-1.0em}
\label{table:datasets}
\end{table}
\begin{table*}
\begin{center}\resizebox{0.9\linewidth}{!}{
\hskip-0.5cm
\footnotesize
\begin{tabular}{c|l|cccccccccccc|}
\cline{2-14}
& Category & ARNet & DR\AE M & GAN & Rippel & PatchCore & DifferNet & PaDiM & CFlow & CS-Flow & Uninf. & STFPM* & \textbf{AST}\\
& &\cite{itae}& \cite{draem} &\cite{ganomaly} & \cite{rippel} & \cite{patchcore}& \cite{differnet} & \cite{padim} & \cite{cflow} & \cite{csflow} & Stud. \cite{st_bergmann1} & \cite{wang2021student} & (ours)\\
\cline{2-14}
& Grid & 88.3 & 99.9 & 70.8 & 93.7 & 98.2 & 84.0 & - & 99.6 & 99.0 & 98.1 & \textbf{100} & 99.1 $\pm$ 0.2\\
& Leather & 86.2 & 100 & 84.2 &\textbf{100} & \textbf{100} & 97.1 & - & \textbf{100} & 99.9 & 94.7 & \textbf{100} & \textbf{100} $\pm$ 0.0\\
& Tile & 73.5 & 99.6 & 79.4 & \textbf{100} & 98.7 & 99.4 & - & 99.9 & \textbf{100} & 94.7 & 95.5 & \textbf{100} $\pm$ 0.0\\
& Carpet & 70.6 & 97.8 & 69.9 & 99.6 & 98.7 & 92.9 & - & 98.7 & \textbf{100} & 99.9 & 98.9 & 97.5 $\pm$ 0.4\\
\rotatebox[origin=c]{90}{\parbox[c]{0cm}{Textures}}& Wood & 92.3 & 99.1 & 83.4 & 99.2 & 98.8 & 99.8 & - & 99.1 & \textbf{100} & 99.1 & 99.2 & \textbf{100} $\pm$ 0.0\\
\cline{2-14}
& Avg. Text. & 82.2 & 99.3 & 77.5 & 98.5 & 98.3 & 94.6 & 99.0 & 99.5&\textbf{99.8} & 97.3 & 98.7 & 99.3 $\pm$ 0.08\\
\cline{2-14}
& Bottle & 94.1 & 99.2 & 89.2 & 99.0 & \textbf{100} & 99.0 & - & \textbf{100} & 99.8 & 99.0 & \textbf{100} & \textbf{100} $\pm$ 0.0\\
& Capsule & 68.1 & 98.5 & 73.2 & 96.3 & 98.1 & 86.9 & - & 97.7 & 97.1 & 92.5 & 88.0 & \textbf{99.7} $\pm$ 0.1\\
& Pill & 78.6 & 98.9 & 74.3 & 91.4 & 96.6 & 88.8 & - & 96.8 & 98.6 & 92.2 & 93.8 & \textbf{99.1} $\pm$ 0.1\\
& Transistor & 84.3 & 93.1 & 79.2 & 98.2 & \textbf{100} & 91.1 & - & 95.2 & 99.3 & 79.4 & 93.7 & 99.3 $\pm$ 0.1\\
& Zipper & 87.6 & \textbf{100} & 74.5 & 98.8 & 99.4 & 95.1 & - & 98.5 & 99.7 & 94.4 & 93.6 & 99.1 $\pm$ 0.1\\
& Cable & 83.2 & 91.8 & 75.7 & 99.1 & \textbf{99.5} & 95.9 & - & 97.6 & 99.1 & 78.7 & 92.3 & 98.5 $\pm$ 0.2 \\
\rotatebox[origin=c]{90}{\parbox[c]{0cm}{Objects}}& Hazelnut & 85.5 & \textbf{100} & 78.5 & \textbf{100} & \textbf{100} & 99.3 & - & \textbf{100} & 99.6 & 99.1 & \textbf{100} & \textbf{100} $\pm$ 0.0 \\
& Metal Nut & 66.7 & 98.7 & 70.0 & 97.4 & \textbf{100} & 96.1 & - & 99.3 & 99.1 & 89.1 & \textbf{100} & 98.5 $\pm$ 0.2\\
& Screw & \textbf{100} & 93.9 & 74.6 & 94.5 & 98.1 & 96.3 & - & 91.9 & 97.6 & 86.0 & 88.2 & 99.7 $\pm$ 0.1\\
& Toothbrush & \textbf{100} & \textbf{100} & 65.3 & 94.1 & \textbf{100} & 98.6 & - & 99.7 & 91.9 & \textbf{100} & 87.8 & 96.6 $\pm$ 0.1\\
\cline{2-14}
& Avg. Obj. & 84.8 & 97.4 & 75.5 & 96.9 & \textbf{99.2} & 94.7 & 97.2& 97.7 & 98.2 & 91.0 & 93.7 & 99.1 $\pm$ 0.03\\
\cline{2-14}
& \textbf{Average} & 83.9 & 98.0 & 76.2 & 97.5 & 99.1 & 94.7 & 97.9 & 98.3 & 98.7 & 93.2 & 95.4 & \textbf{99.2} $\pm$ 0.04\\
\cline{2-14}
\end{tabular}
}
\end{center}
\caption{AUROC in \% for detecting defects of all categories of MVT2D \cite{mvtec} on image-level grouped into textures and objects. We report the mean and standard deviation over 5 runs for our method. Best results are in bold.
Beside the average value, detailed results of PaDiM~\cite{padim} were not provided by the authors. The numbers of STFPM*~\cite{wang2021student} were obtained by a reimplementation.
}
\label{table:mvtec}
\end{table*}
\subsection{Student}
\label{student}
As opposed to the teacher, the student is a conventional feed-forward network that does not map injectively or surjectively.
We propose a simple fully convolutional network with residual blocks which is shown in Figure~\ref{fig:architecture}.
Each residual block consists of two sequences of $3 \times 3$ convolutional layers, batch normalization~\cite{batchnorm} and leaky ReLU activations.
We add convolutions as the first and last layer to increase and decrease the feature dimensions.
Similarly to the teacher, the student takes image features as input which are concatenated with 3D data if available.
In addition, the positional encoding $c$ is concatenated.
The output dimensions match the teacher to enable pixel-wise distance computation.
We minimize the squared $\ell_2$-distance between student outputs $f_s(x)$ and the teacher outputs $f_t(x)$ on training samples $x \in X$, given the training set $X$, at a pixel position $(i, j)$ of the output:
\begin{equation}
\mathcal{L}^s_{ij} = \@ifstar{\oldnorm}{\oldnorm*}{f_s(x)_{ij} - f_t(x)_{ij}}^2_2 .
\label{eqn: st_loss}
\end{equation}
Averaging $\mathcal{L}^s_{ij}$ over all (foreground) pixels gives us the final loss.
The distance $\mathcal{L}^s$ is also used in testing to obtain an anomaly score on image level:
Ignoring the anomaly scores of background pixels, we aggregate the pixel distances of one sample by computing either the maximum or the mean over the pixels.
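The per-pixel distance and its aggregation to an image-level score can be sketched as follows (a simplified sketch of the loss above and the described aggregation, not the full training code):

```python
import numpy as np

def distance_map(f_s, f_t):
    """Squared l2 distance between student and teacher outputs per pixel.
    f_s, f_t: arrays of shape (H, W, C)."""
    return np.sum((f_s - f_t) ** 2, axis=-1)

def image_score(dist_map, fg_mask=None, agg="max"):
    """Aggregate pixel distances to one image-level anomaly score,
    ignoring background pixels if a boolean foreground mask is given."""
    vals = dist_map[fg_mask] if fg_mask is not None else dist_map.ravel()
    return vals.max() if agg == "max" else vals.mean()
```

During training, the mean of the (masked) distance map is minimized; at test time, the same map is aggregated by the maximum or mean to score an image.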
\section{Experiments}
\subsection{Datasets}
\label{datasets}
To demonstrate the benefits of our method on a wide range of industrial inspection scenarios, we evaluate with a diverse set of 25 scenarios in total, including natural objects, industrial components and textures in 2D and 3D.
Table~\ref{table:datasets} shows an overview of the used benchmark datasets MVTec AD~\cite{mvtec} and MVTec 3D-AD~\cite{mvtec3d}.
For both datasets, the training set only contains defect-free data and the test set contains defect-free and defective examples.
In addition to image-level labels, the datasets also provide pixel-level annotations about defective regions which we use to evaluate the segmentation of defects.
MVTec AD, which will be called \textit{MVT2D} in the following, is a high-resolution 2D RGB image dataset containing 10 object and 5 texture categories.
A total of 73 defect types appear in the test set, for example in the form of displacements, cracks or scratches of various sizes and shapes.
The images have a side length of 700 to 1024 pixels.
MVTec 3D-AD, to which we refer as \textit{MVT3D}, is a very recent 3D dataset containing 2D RGB images paired with 3D scans for 10 categories.
These categories include deformable and non-deformable objects, partially with natural variations (e.g.\ peach and carrot).
In addition to the defect types in MVT2D there are also defects that are only recognizable from the depth map, such as indentations.
On the other hand, there are anomalies such as discoloration that can only be perceived from the RGB data.
The RGB images have a resolution of 400 to 800 pixels per side, paired with rasterized 3D point clouds at the same resolution.
\subsection{Implementation Details}
\label{impdetails}
\subsubsection{Image Preprocessing}
\label{preprocessing}
Following \cite{padim, csflow}, we use the layer 36 output of EfficientNet-B5~\cite{efficientnet} pre-trained on ImageNet~\cite{imagenet} as a feature extractor.
This feature extractor is not trained during training of the student and teacher networks.
The images are resized to a resolution of $768\times768$ pixels resulting in feature maps of size $24\times24$ with 304 channels.
\subsubsection{3D Preprocessing}
We discard the $x$ and $y$ coordinates due to their low information content and use only the depth component $z$ in centimeters.
Missing depth values are filled iteratively (3 iterations) with the average of valid pixels in an 8-connected neighborhood.
We model the background as a 2D plane by interpolating the depth of the 4 corner pixels.
A pixel is assumed to be foreground if its depth deviates by more than $7\,$mm from the background plane.
As an input to our models, we first resize the masks to $192\times192$ pixels via bilinear downsampling and then perform pixel-unshuffling~\cite{pixelunshuffle} with $d=8$ as described in Section~\ref{overview} to match the feature map resolution.
In order to detect anomalies at the edge of the object and to fill holes of missing values, the foreground mask is dilated using a square structuring element of size 8.
We subtract the mean foreground depth from each depth map and set its background pixels to 0.
The binary foreground mask $M$ with ones as foreground and zeros as background is downsampled to feature map resolution to mask the loss for student and teacher.
This is done by a bilinear interpolation $f_\downarrow$ followed by a binarization where all entries greater than zero are assumed as foreground to mask the loss at position~$(i, j)$:
\begin{equation}
\mathcal{L}^{\mathrm{masked}}_{ij} =
\begin{cases}
\mathcal{L}_{ij} & \text{if }\quad f_\downarrow(M)_{ij} > 0 \\
0 & \text{otherwise}
\end{cases}
.
\label{eq:mask}
\end{equation}
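A minimal sketch of this masking step (block averaging stands in here for the bilinear interpolation $f_\downarrow$; the exact interpolation is an implementation detail):

```python
import numpy as np

def downsample_binarize(mask, d):
    """Reduce a binary foreground mask by factor d and keep every output
    pixel that received any non-zero contribution, i.e. binarize with > 0.
    Block averaging stands in for the bilinear downsampling of the paper."""
    h, w = mask.shape
    blocks = mask.reshape(h // d, d, w // d, d).mean(axis=(1, 3))
    return blocks > 0

mask = np.zeros((4, 4))
mask[0, 0] = 1  # a single foreground pixel
print(downsample_binarize(mask, 2).astype(int))
```

Any pixel with partial foreground coverage is kept, so the loss is only zeroed where the block is pure background.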
\subsubsection{Teacher}
For the normalizing flow architecture of the teacher, we use 4 coupling blocks which are conditioned on a positional encoding with 32 channels.
Each pair of internal subnetworks $s_i$ and $t_i$ is designed as one shallow convolutional network $r_i$ with one hidden layer whose output is split into the scale and shift components.
Inside $r_i$ we use ReLU-Activations and a hidden channel size of 1024 for MVT2D and 64 for MVT3D.
We choose the alpha-clamping parameter $\alpha=3$ for MVT2D and $\alpha=1.9$ for MVT3D.
The teacher networks are trained for 240 epochs for MVT2D and 72 epochs for MVT3D, respectively, with the Adam optimizer~\cite{adam}, using the default momentum parameters $\beta_1=0.9$ and $\beta_2=0.999$, a learning rate of $2 \cdot 10^{-4}$ and a weight decay of $10^{-5}$.
\subsubsection{Student}
For the student networks, we use $n_{\mathrm{st\_blocks}}=4$ residual convolutional blocks as described in Section \ref{student}.
The Leaky-ReLU-activations use a slope of 0.2 for negative values.
We choose a hidden channel size of $n_{\mathrm{hidden}}=1024$ for the residual blocks.
We adopt the number of epochs and the optimizer parameters from the teacher training.
The scores at feature map resolution are aggregated for evaluation at image level by the maximum distance if a foreground mask is available, and the average distance otherwise (RGB only).
\subsection{Evaluation Metrics}
\label{metrics}
As is common for anomaly detection, we evaluate the performance of our method at image level by calculating the area under the receiver operating characteristic (AUROC).
The ROC measures the true positive rate dependent on the false positive rate for varying thresholds of the anomaly score.
Thus, it is independent of the choice of a threshold and invariant to the class balance in the test set.
For measuring the segmentation of anomalies, we compute the AUROC at pixel level given the ground truth masks provided by the datasets.
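For reference, the AUROC can be computed directly from the ranks of the anomaly scores (an illustrative implementation that assumes no tied scores; libraries such as scikit-learn additionally handle ties):

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC (Mann-Whitney U statistic); assumes no tied scores.
    labels: 1 = anomalous, 0 = normal; higher score = more anomalous."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auroc([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # perfect separation: 1.0
```

Because only the ordering of scores enters, the metric is threshold-free and invariant to the class balance of the test set.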
\begin{table*}[t]
\begin{center}
\footnotesize
\resizebox{0.98\linewidth}{!}{
\begin{tabular}{@{\hskip5pt}l@{\hskip5pt}l|cccccccccc|c}
\toprule
\footnotesize
& Method & Bagel & \begin{tabular}[c]{@{}c@{}}Cable\\ Gland\end{tabular} & Carrot & Cookie & Dowel & Foam & Peach & Potato & Rope & Tire & Mean \\
\midrule
\multirow{8}{*}{\rotatebox[origin=c]{90}{3D}}
\footnotesize
& Voxel GAN \cite{mvtec3d}& 38.3 & 62.3 & 47.4 & 63.9 & 56.4 & 40.9 & 61.7 & 42.7 & 66.3 & 57.7 & 53.7 \\
& Voxel AE \cite{mvtec3d}& 69.3 & 42.5 & 51.5 & 79.0 & 49.4 & 55.8 & 53.7 & 48.4 & 63.9 & 58.3 & 57.1 \\
& Voxel VM \cite{mvtec3d}& 75.0 & \textbf{74.7} & 61.3 & 73.8 & 82.3 & 69.3 & 67.9 & 65.2 & 60.9 & \textbf{69.0} & 69.9 \\
& Depth GAN \cite{mvtec3d}& 53.0 & 37.6 & 60.7 & 60.3 & 49.7 & 48.4 & 59.5 & 48.9 & 53.6 & 52.1 & 52.3 \\
& Depth AE \cite{mvtec3d}& 46.8 & 73.1 & 49.7 & 67.3 & 53.4 & 41.7 & 48.5 & 54.9 & 56.4 & 54.6 & 54.6 \\
& Depth VM \cite{mvtec3d}& 51.0 & 54.2 & 46.9 & 57.6 & 60.9 & 69.9 & 45.0 & 41.9 & 66.8 & 52.0 & 54.6 \\
& 1-NN (FPFH) \cite{fpfh}& 82.5 & 55.1 & 95.2 & 79.7 & \textbf{88.3} & 58.2 & 75.8 & 88.9 & 92.9 & 65.3 & 78.2 \\
& 3D-ST$_{128}$ \cite{st_bergmann2}\phone& 86.2 & 48.4 & 83.2 & 89.4 & 84.8 & 66.3 & 76.3 & 68.7 & 95.8 & 48.6 & 74.8 \\
& \textbf{AST (ours)} & \textbf{88.1} $\pm$ 2.0 & 57.6 $\pm$ 6.9 & \textbf{96.5} $\pm$ 1.0 & \textbf{95.7} $\pm$ 0.6 & 67.9 $\pm$ 1.1 & \textbf{79.7} $\pm$ 1.2 & \textbf{99.0} $\pm$ 0.9 & \textbf{91.5} $\pm$ 2.1 & \textbf{95.6} $\pm$ 0.7 & 61.1 $\pm$ 3.4 & \textbf{83.3} $\pm$ 0.8 \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{RGB}}
& PatchCore \cite{patchcore} & 87.6 & 88.0 & 79.1 & 68.2 & 91.2 & 70.1 & 69.5 & 61.8 & 84.1 & 70.2 & 77.0 \\
& DifferNet \cite{differnet}\phone & 85.9 & 70.3 & \textbf{64.3} & 43.5 & 79.7 & 79.0 & 78.7 & 64.3 & 71.5 & 59.0 & 69.6 \\
& PADiM \cite{padim}* & \textbf{97.5} & 77.5 & 69.8 & 58.2 & 95.9 & 66.3 & 85.8 & 53.5 & 83.2 & 76.0 & 76.4 \\
& CS-Flow \cite{csflow}\phone & 94.1 & \textbf{93.0} & 82.7 & 79.5 & \textbf{99.0} & 88.6 & 73.1 & 47.1 & 98.6 & 74.5 & 83.0 \\
& STFPM \cite{wang2021student}* & 93.0 & 84.7 & \textbf{89.0} & 57.5 & 94.7 & 76.6 & 71.0 & 59.8 & 96.5 & 70.1 & 79.3 \\
& \textbf{AST (ours)} & 94.7 $\pm$ 0.7 & 92.8 $\pm$ 1.2 & 85.1 $\pm$ 1.2 & \textbf{82.5} $\pm$ 0.8 & 98.1 $\pm$ 0.4 & \textbf{95.1} $\pm$ 0.6 & \textbf{89.5} $\pm$ 1.1 & 61.3 $\pm$ 2.4 & \textbf{99.2} $\pm$ 0.2 & \textbf{82.1} $\pm$ 0.9 & \textbf{88.0} $\pm$ 0.6 \\
\midrule
\multirow{8}{*}{\rotatebox[origin=c]{90}{3D + RGB}}
& Voxel GAN \cite{mvtec3d}& 68.0 & 32.4 & 56.5 & 39.9 & 49.7 & 48.2 & 56.6 & 57.9 & 60.1 & 48.2 & 51.7 \\
& Voxel AE \cite{mvtec3d}& 51.0 & 54.0 & 38.4 & 69.3 & 44.6 & 63.2 & 55.0 & 49.4 & 72.1 & 41.3 & 53.8 \\
& Voxel VM \cite{mvtec3d}& 55.3 & 77.2 & 48.4 & 70.1 & 75.1 & 57.8 & 48.0 & 46.6 & 68.9 & 61.1 & 60.9 \\
& Depth GAN \cite{mvtec3d}& 53.8 & 37.2 & 58.0 & 60.3 & 43.0 & 53.4 & 64.2 & 60.1 & 44.3 & 57.7 & 53.2 \\
& Depth AE \cite{mvtec3d}& 64.8 & 50.2 & 65.0 & 48.8 & 80.5 & 52.2 & 71.2 & 52.9 & 54.0 & 55.2 & 59.5 \\
& Depth VM \cite{mvtec3d}& 51.3 & 55.1 & 47.7 & 58.1 & 61.7 & 71.6 & 45.0 & 42.1 & 59.8 & 62.3 & 55.5 \\
& PatchCore+FPFH \cite{fpfh} & 91.8 & 74.8 & 96.7 & 88.3 & \textbf{93.2} & 58.2 & 89.6 & 91.2 & 92.1 & \textbf{88.6} & 86.5 \\
& \textbf{AST (ours)} & \textbf{98.3} $\pm$ 0.4 & \textbf{87.3} $\pm$3.3& \textbf{97.6} $\pm$ 0.5& \textbf{97.1} $\pm$ 0.3& \textbf{93.2}$\pm$2.1 & \textbf{88.5} $\pm$ 1.4 & \textbf{97.4}$\pm$ 1.4 & \textbf{98.1} $\pm$ 1.2 & \textbf{100} $\pm$ 0.0 & 79.7 $\pm$ 1.0 & \textbf{93.7}$\pm$ 0.2 \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{AUROC in \% for detecting defects of all categories of MVT3D \cite{mvtec3d} on image-level for 3D data, RGB data and the combination of both. We report the mean and standard deviation over 5 runs for our method. Best results per data domain are in bold. Numbers of listed methods followed by a \phone\ are unpublished results obtained from the corresponding authors on request. A * indicates that we used a reimplementation. The numbers for PatchCore are taken from \cite{fpfh}.}
\label{table:mvt3d}
\vspace{-0.25cm}
\end{table*}
\begin{table}
\small
\begin{center}
\footnotesize
\begin{tabular}{l|c|c}
Method & MVT2D & MVT3D (RGB+3D) \\
\hline
AE-SSIM \cite{ae_ssim}& 87.0 & -\\
PatchCore \cite{patchcore}& \textbf{98.4} & -\\
PatchCore+FPFH \cite{fpfh}& - & \textbf{99.2}\\
\textbf{AST (ours)} & 95.0 $\pm$ 0.03 & 97.6 $\pm$ 0.02
\end{tabular}
\end{center}
\vspace{-1mm}
\caption{Anomaly segmentation results measured by the mean pixel-AUROC over all classes and its standard deviation over 5 runs. Although image-level detection is the focus of this work, our method is able to localize defects for practical purposes with an AUROC of 95.0\% and 97.6\%, respectively.}
\vspace{-5mm}
\label{tab:seg}
\end{table}
\subsection{Results}
\subsubsection{Detection}
Table \ref{table:mvtec} shows the AUROC of our method and previous work for detecting anomalies on the 15 classes of MVT2D as well as the averages for textures, objects and all classes.
We set a new state-of-the-art performance on the mean detection AUROC over all classes, improving it slightly to 99.2\%.
This is mainly due to the good performance on the more challenging objects, where, except for PatchCore~\cite{patchcore}, we outperform previous work by a comparatively large margin of 0.9\%.
The detection of anomalies on textures, which CS-Flow~\cite{csflow} has already almost solved with a mean AUROC of 99.8\%, still works very reliably at 99.3\%.
Compared to the two student-teacher approaches \cite{st_bergmann1, wang2021student} in particular, a significant improvement of 6\% and 3.6\%, respectively, is achieved.
Moreover, our student-teacher distances prove to be a better indicator of anomalies than the likelihoods of current state-of-the-art density estimators \cite{cflow, csflow}, which, like our teacher, are based on normalizing flows.
Even though MVT2D has established itself as a standard benchmark, this dataset (especially the textures) is easily solvable for recent methods; differences are mainly in the sub-percent range, which is only a minor distinction given the comparatively small size of the dataset.
In the following, we focus on the newer, more challenging MVT3D dataset where the normal data shows more variance and anomalies only partly occur in one of the two data modalities, RGB and 3D.
The results for individual classes of MVT3D grouped by data modality are given in Table \ref{table:mvt3d}.
We are able to outperform all previous methods for all data modalities regarding the average of all classes by a large margin of 5.1\% for 3D, 5\% for RGB and 7.2\% for the combination.
Facing the individual classes and data domains, we set a new state-of-the-art in 21 of 30 cases.
Note that this dataset is much more challenging, as a comparison of the best results from previous work shows (99.1\% AUROC for MVT2D vs.\ 86.5\% for MVT3D).
Nevertheless, we detect defects in 7 out of 10 cases for RGB+3D at an AUROC of at least 93\%, which demonstrates the robustness of our method.
In contrast, the nearest-neighbor approach PatchCore~\cite{patchcore}, which provides comparable performance to us on MVT2D, struggles with the increased demands of the dataset and is outperformed by 11\% on RGB.
The same applies to the 3D extension~\cite{fpfh} using FPFH~\cite{fpfh_orig}, despite its additional use of a foreground mask.
Figure \ref{fig:loc} shows qualitative results for the RGB+3D case given both inputs and ground truth annotations.
More examples can be found in the supplemental material.
Despite the low resolution, the regions of the anomaly can still be localized well for practical purposes.
Table \ref{tab:seg} reports the pixel-AUROC of our method and previous work.
For the class peach in the RGB+3D setting, the top of Figure \ref{fig:viz2d} compares the distribution of student-teacher distances for anomalous and normal regions.
The distribution of anomalous samples shows a clear shift towards larger distances.
At the bottom of Figure \ref{fig:viz2d}, the outputs of student and teacher as well as the distance of corresponding pairs, which represents our anomaly score, are visualized by a random orthographic 2D projection.
Note that visualizations made by techniques such as t-SNE~\cite{tsne} or PCA~\cite{pca} are not meaningful here, since the teacher outputs (and therefore most of the student outputs) follow an isotropic standard normal distribution.
Therefore, different random projections barely differ qualitatively.
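To make this visualization choice concrete, the following minimal NumPy sketch (dimensionality, sample count, and all names are illustrative assumptions, not taken from the paper) projects high-dimensional outputs onto a random orthonormal 2D basis:

```python
import numpy as np

def random_orthographic_projection(feats, dim=2, seed=0):
    """Project D-dimensional feature vectors onto `dim` random
    orthonormal directions (an orthographic projection)."""
    rng = np.random.default_rng(seed)
    # QR decomposition of a Gaussian matrix yields an orthonormal basis;
    # reduced mode keeps exactly `dim` columns.
    q, _ = np.linalg.qr(rng.standard_normal((feats.shape[1], dim)))
    return feats @ q

# Isotropic standard-normal "teacher outputs" (shapes assumed).
teacher_out = np.random.default_rng(1).standard_normal((500, 64))
proj = random_orthographic_projection(teacher_out)
```

Since the teacher outputs are isotropic, any such random basis yields qualitatively similar scatter plots, which is exactly the point made above.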
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{./images/peach_diff_norms.pdf}
\includegraphics[width=0.23\textwidth]{./images/peach_dist_norm.pdf}
\includegraphics[width=0.23\textwidth]{./images/peach_dist_ano.pdf}
\caption{Top: Histogram of our AST distances for normal and anomalous regions of the class peach in MVT3D. Bottom: Random orthographic projections of student and teacher outputs, grouped into non-defective (left plot) and anomalous (right plot) regions for the class peach.
The plotted student-teacher distance representing the anomaly score is clearly higher for anomalous regions since the student is not able to match the teacher outputs, as it was only trained on non-defective regions.}
\vspace{-0.25cm}
\label{fig:viz2d}
\end{figure}
\label{detection}
\subsubsection{Ablation Studies}
\label{ablation}
We demonstrate the effectiveness of our contributions and design decisions with several ablation studies.
Table \ref{table:ablation} compares the performance of variants of students with the teacher, which can be used as a density estimator itself for anomaly detection by using its likelihoods, given by Eq.~\ref{eqn:change_of_variables}, as anomaly score.
In comparison, a symmetric student-teacher pair worsens the results by 1 to 2\%, except for the RGB case.
However, the performance already improves for RGB and 3D+RGB when the asymmetry is created by making the student deeper than the teacher, doubling the number of coupling blocks to 8.
This effect is further enhanced if the architecture of the NF-teacher is replaced by a conventional feedforward network as we suggest.
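For reference, the likelihood-based anomaly score of the teacher-only baseline follows the change-of-variables formula for normalizing flows. A hedged sketch, assuming a standard-normal latent (function name and shapes are our own, not from the paper):

```python
import numpy as np

def nf_anomaly_score(z, log_det_j):
    """Negative log-likelihood up to an additive constant:
    -log p(x) = 0.5 * ||z||^2 - log|det J|, assuming z ~ N(0, I).
    z: latent codes of shape (N, D); log_det_j: shape (N,)."""
    return 0.5 * (z ** 2).sum(axis=-1) - log_det_j
```

Higher scores correspond to less likely, i.e. more anomalous, inputs.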
We also varied the depth of our student network and analyzed its relation to performance, model size and inference time in Table \ref{table:depth}.
With an increasing number of residual blocks $n_{\mathrm{st\_blocks}}$, we observe increasing performance, which almost saturates after 4 blocks.
Since the remaining potential gain in detection performance does not justify the linearly increasing computational effort per block, we suggest choosing 4 blocks as a good trade-off.
In Table \ref{table:pe_mask} we investigate the impact of the positional encoding and the foreground mask.
For MVT3D, positional encoding improves the detection by 1.4\% of our AST-pair when trained with 3D data as the only input.
Even though the effect is not present when combining both data modalities, we consider the positional encoding generally reasonable, since its integration adds just 32 channels and thus does not significantly increase the computational effort.
Foreground extraction in order to mask the loss for training and anomaly score for testing is also highly effective.
Since the majority of the image area often consists of background, the teacher otherwise has to spend a large part of the modeled distribution on the background.
Masking allows the teacher and student to focus on the essential structures.
Moreover, noisy background scores are eliminated.
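The effect of masking can be made concrete with a small sketch (shapes and names are illustrative assumptions): the per-position squared student-teacher distance serves as the anomaly score, and, averaged over foreground positions only, as the training loss.

```python
import numpy as np

def masked_score_map(student_out, teacher_out, mask):
    """Per-position squared distance over an (H, W, C) output pair;
    background positions are zeroed out by the binary mask (H, W)."""
    dist = ((student_out - teacher_out) ** 2).sum(axis=-1)
    return dist * mask

def masked_loss(student_out, teacher_out, mask):
    """Mean squared distance over foreground positions only."""
    dist = ((student_out - teacher_out) ** 2).sum(axis=-1)
    return (dist * mask).sum() / max(mask.sum(), 1.0)
```

Masking thus removes both the background's share of the modeled distribution during training and noisy background scores at test time.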
\begin{table}
\begin{center}
\footnotesize
\begin{tabular}{l|c|c|c}
Method & 3D & RGB & 3D+RGB \\
\hline
Teacher only & 82.2 & 69.8 & 90.9\\
NF student (symm.) & 81.8 & 76.0 & 88.9\\
NF student (deeper) & 81.8 & 76.7 & 92.7\\
\textbf{AST} (ours) & \textbf{83.3} & \textbf{88.0} & \textbf{93.7}\\
\end{tabular}
\end{center}
\caption{Comparison of average detection performance in AUROC percentage on MVT3D of teacher and student-teacher in a symmetric and asymmetric setting. Our proposed asymmetric student-teacher pair outperforms all baselines in all cases.}
\label{table:ablation}
\end{table}
\begin{table}
\small
\begin{center}
\footnotesize
\begin{tabular}{c|c|c|c}
$n_{\mathrm{st\_blocks}}$ & AUROC $[\%] \uparrow$ & \#Params. [M] $\downarrow$ & inf. time [ms] $\downarrow$\\
\hline
1 & 92.8 & 26.0 & 3.4\\
2 & 93.3 & 44.8 & 6.1\\
4 & 93.7 & 82.6 & 10.4\\
8 & 93.7 & 151.1 & 19.8\\
12 & 93.8 & 233.6 & 29.4\\
\hline
teacher & 90.9 & 3.8 & 4.5 \\
\end{tabular}
\end{center}
\caption{Tradeoff between performance and computational effort on 3D+RGB data of MVT3D. The inference time was measured with an \textit{NVIDIA GTX 1080 Ti}.}
\label{table:depth}
\end{table}
\begin{table}
\footnotesize
\begin{center}\begin{tabularx}{0.95\linewidth}{ c *{2}{|YY} }
input & pos. enc. & mask & teacher & \textbf{AST} \\ \hline
& \ding{55}& \checkmark&78.4 & 81.9 \\
3D & \checkmark& \ding{55}& 59.4 & 67.2 \\
& \checkmark& \checkmark& 82.2 & \textbf{83.3} \\ \hline
& \ding{55}& \ding{55}& 69.3 & 87.8\\
RGB & \checkmark& \ding{55}& 69.8 & \textbf{88.0} \\
& \checkmark& \checkmark& n. a. & n. a.\\ \hline
& \ding{55}& \checkmark& 90.9 & \textbf{93.8}\\
3D+RGB & \checkmark& \ding{55}& 66.2 & 84.0 \\
& \checkmark& \checkmark& 90.9 & 93.7 \\
\end{tabularx}
\end{center}
\caption{Impact of the positional encoding and the foreground mask on the detection performance of student and teacher on MVT3D. Numbers are given in AUROC percentage. Since masks are obtained from 3D data, there is no mask for RGB.
}
\label{table:pe_mask}
\end{table}
\section{Conclusion}
We identified the generalization problem of previous student-teacher pairs for AD and introduced an alternative student-teacher method that prevents this issue by using highly different architectures for student and teacher.
We were able to compensate for the skewed likelihoods of a normalizing flow-based teacher, which was used directly for detection in previous work, by additionally using a student.
Future work could extend the approach to more data domains and improve the localization resolution.
\vspace{-0.1em}
\small{\paragraph{Acknowledgements.}
This work was supported by the Federal Ministry of Education and
Research (BMBF), Germany under the project LeibnizKILabor (grant no.
01DD20003), the Center for Digital Innovations (ZDIN) and the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122).
\newpage
\clearpage
{\small
\bibliographystyle{ieee_fullname}
}
\section{Introduction}
\label{sec:Introduction}
Markov decision processes (MDP,~\cite{Put94}) are a widely-used formalism to represent discrete-state and -time systems in which \emph{probabilistic} effects meet controllable \emph{nondeterministic} decisions.
The former may arise from an environment or agent whose behaviour is only known statistically (\eg message loss in wireless communication or statistical user profiles), or it may be intentional as part of a randomised algorithm (such as exponential backoff in Ethernet).
The latter may be under the control of the system---then we are in a planning setting and typically look for a \emph{scheduler} (or strategy, policy) that minimises the probability of unsafe behaviour or maximises a reward---or it may be considered adversarial, which is the standard assumption in verification: we want to establish that the maximum probability of unsafe behaviour is below, or that the minimum reward is above, a specified threshold.
Extensions of MDP cover continuous time~\cite{EHZ10,KNSS02}, and the analysis of complex formalisms such as stochastic hybrid automata~\cite{FHHWZ11} can be reduced to the analysis of MDP abstractions.
The standard algorithm to compute optimal (maximum or minimum) probabilities or reward values on MDP is \emph{value iteration} (VI).
It implicitly computes the corresponding optimal scheduler, too.
It keeps track of a value for every state of the MDP, locally improves the values iteratively until a ``convergence'' criterion is met, and then reports the final value for the initial state as the overall result.
The initial values are chosen to be an underapproximation of the true values (\eg 0 for all states in case of probabilities or non-negative rewards).
The final values are then an improved underapproximation of the true values.
For unbounded (infinite-horizon) properties, there is unfortunately no (known) convergence criterion that could guarantee a predefined error on the final result.
Still, probabilistic model checkers such as \tool{Prism}~\cite{KNP11} report the final result obtained via simple relative or absolute global error criteria as the definitive probability.
This is because, on \emph{most} case studies considered so far, value iteration in fact converges fast enough that the (relative or absolute) difference between the reported and the true value meets the error $\epsilon$ specified for the convergence criterion.
Only relatively recently has this problem of soundness come to the attention of the probabilistic verification and planning communities~\cite{BCCFKKPU14,HM14,MLG05}.
First highlighted on hand-crafted counterexamples, it has by now been found to affect benchmarks and real-life case studies, too~\cite{BKLPW17}.
The first proposal to compute sound reachability probabilities was to use \emph{interval iteration} (II)~\cite{HM18}.
The idea is to perform two value iterations concurrently, one starting from 0 as before, and one starting from 1 for all relevant states.
The latter iterations improve an overapproximation of the true values, and the entire process can be stopped once the (relative or absolute) difference between the two values for the initial state is below the specified $\epsilon$.
Interval iteration, however, requires the MDP to be in a form where value iteration has a single fixed point.
For minimum probabilities, this is achieved via graph-based (\ie not numerical) precomputations~\cite[algs.\ 1-4]{FKNP11}.
For maximum probabilities, however, additionally end components need to be eliminated, requiring a state space transformation whose extra memory usage exacerbates the state space explosion problem.
Baier \etal extended interval iteration to expected accumulated reward values~\cite{BKLPW17}; here, the complication is to find initial values that are guaranteed to be an overapproximation.
The proposed graph-based algorithm in practice computes very conservative initial values, from which many iterations are needed until convergence.
More recently, \emph{sound value iteration} (SVI)~\cite{QK18} improved upon interval iteration by computing upper bounds on-the-fly and performing larger value improvements per iteration, for both probabilities and expected rewards.
It still requires the same precomputations and end component reduction as interval iteration, though; it only does not need a priori upper bounds for expected rewards (although they may improve performance if provided).
\paragraph{Our contribution.}
We present (in \Cref{sec:OVI}) a new approach to computing sound reachability probabilities and expected rewards that is both simple and practically efficient.
We first (1)~perform standard value iteration until ``convergence'', resulting in a lower bound on the value for every state.
To this we (2)~apply heuristics to ``guess'', for every state, a candidate upper bound value.
Further iterations (3)~then confirm (if all values decrease) or disprove (if all values increase, or lower and upper bounds cross) the soundness of the upper bounds.
In the latter case, we perform more lower bound iterations with reduced $\epsilon$ before retrying from step~2.
In problematic cases, many retries may be needed, and performance may be worse than interval or sound value iteration.
However, on the vast majority of existing case studies, value iteration already worked well, and our approach attaches a soundness proof to its result with moderate overhead.
We thus refer to it as \emph{optimistic value iteration} (OVI).
It does not require any of the precomputations, end component reductions, or a priori bound computations that are needed for II and SVI, further simplifying implementations and improving scalability.
Our experimental evaluation in \Cref{sec:Experiments} uses all applicable models from the Quantitative Verification Benchmark Set~\cite{HKPQR19} to confirm that OVI indeed performs as expected.
It uses our publicly available implementations of II, SVI, and now OVI in the \mcsta model checker of the \toolset~\cite{HH14}.
\paragraph{Related work.}
As an alternative to the iterative numeric road that we take in this paper, guaranteed correct results (modulo implementation errors) can also be obtained by using precise rational arithmetic.
It does not combine too well with sound iterative methods like II or SVI due to the increasingly small differences between the values and the actual solution.
The probabilistic model checker \tool{Storm}~\cite{DJK017} thus combines topological decomposition, policy iteration, and exact solvers for linear equation systems based on Gaussian elimination when asked to use rational arithmetic~\cite[Section 7.4.8]{Hen18}.
Alternatively, rational search~\cite{BMCSV17} can be used, which proceeds in a very similar way to OVI:
starting from the (floating-point) result of value iteration, it finds the next rational number, checks whether it is the correct result, and if not, continues the floating-point iterations.
The disadvantage of both methods is the significant runtime cost of the unlimited-precision calculations, limiting them to relatively small~MDP.
\section{Preliminaries}
\label{sec:Preliminaries}
$\RRpluszero$ is the set of all non-negative real numbers.
We write $\set{ x_1 \mapsto y_1, \dots }$ to denote the function that maps all $x_i$ to $y_i$, and if necessary in the respective context, implicitly maps to~$0$ all $x$ for which no explicit mapping is specified.
Given a set $S$, its powerset is $\powerset{S}$.
A (discrete) \emph{probability distribution} over $S$ is a function $\mu \in S \to [0, 1]$ with countable \emph{support} $\support{\mu} \defeq \set{ s \in S \mid \mu(s) > 0 }$ and $\sum_{s \in \support{\mu}} \mu(s) = 1$.
$\Dist{S}$ is the set of all probability distributions over $S$.
\paragraph{Markov decision processes}
(MDP) combine nondeterministic choices as in labelled transition systems with discrete probabilistic decisions as in discrete-time Markov chains (DTMC).
We define them formally and describe their semantics.
\begin{definition}
\label{def:MDP}
A \emph{Markov decision process} (MDP) is a triple\\[5pt]
\centerline{
$M =\tuple{S, s_I, T}$
}\\[-5pt]
where
\begin{itemize}
\item
$S$ is a finite set of \emph{states} with \emph{initial state} $s_I \in S$ and
\item
$T \colon \mathit{S} \to \powerset{\Dist{\RRpluszero \times S}}$ is the \emph{transition function}.
\end{itemize}
$T(s)$ must be finite and non-empty for all $s \in S$.
\end{definition}
For $s \in S$, an element of $T(s)$ is a \emph{transition}, and a pair $\tuple{r, s'} \in \support{T(s)}$ is a \emph{branch} to successor state $s'$ with \emph{reward} $r$ and probability $T(s)(\tuple{r, s'})$.
Let $M^{(s_I')}$ be $M$ but with initial state $s_I'$, and $M^0$ be $M$ with all rewards set to zero.
\begin{example}
\label{ex:MDP}
\begin{figure}[t]
\begin{floatrow}
\ffigbox[0.26\textwidth]{
\begin{tikzpicture}[on grid,auto]
\node[state] (s0) {$\mathstrut s_0$};
\coordinate[above=0.3 of s0.north] (start);
\node[] (me) [above left=0.5 and 0.8 of s0] {\small$M_e$:};
\node[dot] (n0) [below right=0.5 and 0.5 of s0] {};
\node[state] (sp) [below=1.25 of s0] {$\mathstrut s_+$};
\node[state] (sm) [below right=1.25 and 1.0 of s0] {$\mathstrut s_-$};
\node[state] (s1) [below=1.5 of sp] {$\mathstrut s_1$};
\node[state] (s2) [below=1.5 of sm] {$\mathstrut s_2$};
\node[dot] (n1) [above left=0.75 and 0.5 of s2] {};
;
\path[-]
(s0) edge[bend left] node[inner sep=1pt] {\texttt{a}} (n0)
(s2) edge[bend right=20] node[inner sep=1.5pt,pos=0.1,swap] {\texttt{c}} (n1)
;
\path[->]
(start) edge node {} (s0)
(s0) edge [bend right=60] node[swap,pos=0.06,inner sep=1pt] {\texttt{b}} (s1)
(s1) edge [bend left] node {} (s2)
(s2) edge [bend left] node {} (s1)
(n0) edge[bend left] node[align=left,inner sep=-2pt,pos=0.8] {$0.1$\\[-1pt]$~~{+1}$} (sm)
(n0) edge[bend right] node[align=right,swap,inner sep=-2pt,pos=0.8] {$0.1$\\[-1pt]\phantom{${+1}~~$}} (sp)
(n0) edge[out=20,in=30,looseness=3] node[align=left,right,inner sep=3pt,pos=0.45] {$0.8$\\[-1pt]${+1}$} (s0)
(n1) edge[bend right] node[align=left,swap,inner sep=-2pt,pos=0.7] {$~~0.6$\\[-1pt]${+1}$} (sm)
(n1) edge[bend left] node[align=right,inner sep=-2pt,pos=0.7] {$0.4~~$\\[-1pt]\phantom{${+1}$}} (sp)
(sm) edge [loop,out=30,in=-30,looseness=5] node {} (sm)
(sp) edge [loop,out=30,in=-30,looseness=5] node {} (sp)
;
\end{tikzpicture}}{
\caption{Example MDP}
\label{fig:ExampleMDP}}
\capbtabbox[0.71\textwidth]{
\renewcommand{\arraystretch}{1.05}
\setlength{\tabcolsep}{2.0pt}
\scriptsize
\begin{tabular}{@{}cllllllll@{}}
\toprule
$i$ & $v(s_0)$ & $u(s_0)$ & $v(s_1)$ & $u(s_1)$ & $v(s_2)$ & $u(s_2)$ & $\mathit{error}$ & $\epsilon_\mathit{VI}$ \\
\midrule
$0$ & $0$ & & $0$ & & $0$ & & & $0.05$ \\
$1$ & $0.1$ & & $0$ & & $0.4$ & & $0.4$ & $0.05$ \\
$2$ & $0.18$ & & $0.4$ & & $0.4$ & & $0.4$ & $0.05$ \\
$3$ & $0.4$ & & $0.4$ & & $0.4$ & & $0.22$ & $0.05$ \\
$4$ & $0.42$ & $\textsl{0.47}$ & $0.4$ & $\textsl{0.45}$ & $0.4$ & $\textsl{0.45}$ & $0.02$ & $0.05$ \\
$5$ & $0.436$ & $0.476$ & $0.4$ & $0.45$ & $0.4$ & $0.45$ & $0.016$ & \\
$6$ & $0.4488$ & & $0.4$ & & $0.4$ & & $0.0128$ & $0.008$ \\
$7$ & $0.45904$ & & $0.4$ & & $0.4$ & & $0.01024$ & $0.008$ \\
$8$ & $0.467232$ & & $0.4$ & & $0.4$ & & $0.008192$ & $0.008$ \\
$9$ & $0.4737856$ & $\textsl{0.5237856}$ & $0.4$ & $\textsl{0.45}$ & $0.4$ & $\textsl{0.45}$ & $0.0065536$ & $0.008$ \\
$10$ & $0.47902848$ & $0.51902848$ & $0.4$ & $0.45$ & $0.4$ & $0.45$ & $0.00524288$ & \\
\bottomrule
\end{tabular}}{
\caption{VI and OVI example on $M_e$}
\label{tab:ExampleOVI}}
\end{floatrow}
\end{figure}
\Cref{fig:ExampleMDP} shows our example MDP $M_e$.
We draw transitions as lines to an intermediate node from which branches labelled with probability and reward (if not zero) lead to successor states.
We omit the intermediate node and probability~$1$ for transitions with a single branch, and label some transitions to refer to them in the text.
$M_e$ has 5~states, 5~transitions, and 8~branches.
\end{example}
In practice, higher-level modelling languages like \modest~\cite{HHHK13} are used to specify MDP.
The semantics of an MDP is captured by its \emph{paths}.
A path represents a concrete resolution of all nondeterministic and probabilistic choices.
Formally:
\begin{definition}
A \emph{finite path} is a sequence\\[5pt]
\centerline{
$\pi_\mathrm{fin} = s_0\, \mu_0\, r_0\, s_1\, \mu_1\, r_1\, s_2 \dots \mu_{n-1} r_{n-1} s_n$}\\[5pt]
where $s_i \in S$ for all $i \in \set{ 0, \dots, n }$ and $\exists\, \mu_i \in T(s_i) \colon \tuple{r_i, s_{i+1}} \in \support{\mu_i}$ for all $i \in \set{ 0, \dots, n - 1 }$.
Let $|\pi_\mathrm{fin}| \defeq n$, $\mathrm{last}({\pi_\mathrm{fin}}) \defeq s_n$, and $\mathrm{rew}({\pi_\mathrm{fin}}) \defeq \sum_{i=0}^{n-1} r_i$.
$\Pi_\mathit{fin}$ is the set of all finite paths starting in $s_I$.
A \emph{path} is an analogous infinite sequence $\pi$, and $\Pi$ are all paths starting in $s_I$.
We define $s \in \pi \iff \exists\, i \colon s = s_i$, and $\pi_{\to G}$ as the shortest prefix of $\pi$ that contains a state in $G \subseteq S$, or $\bot$ if $\pi$ contains no such state.
Let $\mathrm{rew}(\bot) \defeq \infty$.
\end{definition}
A scheduler (or \emph{adversary}, \emph{policy} or \emph{strategy}) only resolves the nondeterministic choices of~$M$.
For this paper, memoryless deterministic schedulers suffice.
\begin{definition}
\label{def:MDPReductionFunction}
A function $\mathfrak{s} \colon S \to \Dist{\RRpluszero \times S}$ is a \emph{scheduler} if, for all $s \in S$, we have $\mathfrak{s}(s) \in T(s)$.
The set of all schedulers of~$M$ is $\mathfrak{S}(M)$.
\end{definition}
Given an MDP $M$ as above, let $M|_\mathfrak{s} = \tuple{S, s_I, T|_\mathfrak{s}}$ with $T|_\mathfrak{s}(s) = \set{\mathfrak{s}(s)}$ be the DTMC induced by $\mathfrak{s}$.
Via the standard cylinder set construction~\cite[Sect.\ 2.2]{FKNP11} on $M|_\mathfrak{s}$, a scheduler induces a probability measure $\mathbb{P}_\mathfrak{s}^M$ on measurable sets of paths starting in $s_I$.
For goal state $g \in S$, the maximum and minimum probabilities of reaching $g$ are defined as $\mathrm{P}_{\!\max}^M(\diamond\: g) = \sup_{\mathfrak{s} \in \mathfrak{S}} \mathbb{P}_\mathfrak{s}^M(\set{ \pi \in \Pi \mid g \in \pi })$ and $\mathrm{P}_{\!\min}^M(\diamond\: g) = \inf_{\mathfrak{s} \in \mathfrak{S}} \mathbb{P}_\mathfrak{s}^M(\set{ \pi \in \Pi \mid g \in \pi })$, respectively.
The definition extends to sets $G$ of goal states.
Let $R_G^M \colon \Pi \to \RRpluszero$ be the random variable defined by $R_G^M(\pi) = \mathrm{rew}(\pi_{\to G})$ and let $\mathbb{E}_\mathfrak{s}^M(G)$ be the expected value of $R_G^M$ under $\mathbb{P}_\mathfrak{s}^M$.
Then the maximum and minimum expected reward to reach $G$ is defined as $\mathrm{E}_\max^M(G) = \sup_{\mathfrak{s}}\mathbb{E}_\mathfrak{s}^M(G)$ and $\mathrm{E}_\min^M(G) = \inf_{\mathfrak{s}}\mathbb{E}_\mathfrak{s}^M(G)$, respectively.
We omit the superscripts for $M$ when they are clear from the context.
From now on, whenever we have an MDP with a set of goal states $G$, we assume that they have been made absorbing, \ie for all $g \in G$ we only have a self-loop: $T(g) = \set{ \set{ \tuple{0, g} \mapsto 1 } }$.
\begin{definition}
An \emph{end component} of $M$ as above is a (sub-)MDP $\tuple{S', T', s_I'}$ where $S' \subseteq S$, $T'(s) \subseteq T(s)$ for all $s \in S'$, if $\mu \in T'(s)$ for some $s \in S'$ and $\tuple{r, s'} \in \support{\mu}$ then $r = 0$, and the directed graph with vertex set $S'$ and edge set $\set{ \tuple{s, s'} \mid \exists\,\mu \in T'(s) \colon \tuple{0, s'} \in \support{\mu} }$ is strongly connected.
\end{definition}
\section{Value Iteration}
The standard algorithm to compute reachability probabilities and expected rewards is \emph{value iteration} (VI)~\cite{Put94}.
In this section, we recall its theoretical foundations as well as its limitations regarding convergence.
Let $\VV = \set{v ~|~ v \colon S \to \RRpluszero \cup \{\infty\}}$ be a space of vectors of values.
It can easily be shown that $\tuple{\VV,\, {\preceq}}$, with\\[5pt]
\centerline{
$v \preceq w \qquad\text{if and only if}\qquad \forall\, s \in S\colon v(s) \leq w(s)$,
}\\[5pt]
forms a complete lattice, i.e.\ every subset $V \subseteq \VV$ has a supremum (and an infimum) in $\VV$ with respect to $\preceq$.
Minimum and maximum reachability probabilities and expected rewards can be expressed as the \emph{least fixed point} of the \emph{Bellman operator}~\mbox{$\Phi\colon \VV \to \VV$} given by
\vspace{-6pt}
\begin{align*}
\Phi(v) \defeq \lambda\: s.~ \begin{cases}
\opt_{\mu \in T(s)} ~ \sum_{\tuple{r, s'} \in \support{\mu}} ~ \mu(\tuple{r, s'}) \cdot (r + v(s')) & \text{ if } s \in S_?\\
d &\text{ if } s \not\in S_?~,
\end{cases}
\end{align*}\\[-11pt]
where $\opt \in \set{ \max, \min }$ and the choice of both $S_? \subseteq S$ and $d$ depends on whether we wish to compute reachability probabilities or expected rewards.
In any case, the Bellman operator $\Phi$ can be shown to be Scott-continuous \cite{abramskyjung94}, \ie in our case: for any subset $V \subseteq \VV$, we have $\Phi( \sup V) = \sup \Phi(V)$.
The Kleene fixed point theorem for Scott-continuous self-maps on complete lattices \cite{abramskyjung94,DBLP:journals/ipl/LassezNS82} guarantees that the least fixed point of $\Phi$, denoted by $\lfp \Phi$, indeed exists.
Note that $\Phi$ can have more than one fixed point---only the least fixed point is guaranteed to exist (and it is necessarily unique).
In addition to mere existence of $\lfp \Phi$, the Kleene fixed point theorem states that $\lfp \Phi$ can \mbox{be expressed by}
\begin{align}
\lfp \Phi = \lim_{n \to \infty} \Phi^n(\vec{0}) \label{eq:limit-lfp}
\end{align}
where $\vec{0} \in \VV$ is the zero vector and $\Phi^n(v)$ denotes $n$-fold application of $\Phi$ to $v$.
\Cref{eq:limit-lfp} forms the theoretical basis for VI:
the algorithm iteratively constructs a sequence of vectors with\\[5pt]
\centerline{
$v_0 = \vec{0} \qquad\text{and}\qquad v_{i+1} = \Phi(v_i)$,
}\\[5pt]
which converges to the sought-after least fixed point.
This convergence is \emph{monotonic}:
for every $n \in \NN$, we have $\Phi^n(\vec{0}) \preceq \Phi^{n+1}(\vec{0})$ and hence $\Phi^n(\vec{0}) \preceq \lfp \Phi$.
In particular, $\Phi^n(\vec{0})(s_I)$ is an \emph{under}approximation of the sought-after quantity for every $n$.
Note that iterating $\Phi$ on \emph{any} underapproximation $v \preceq \lfp \Phi$ (instead of~$\vec{0}$) will still converge to $\lfp \Phi$ and $\Phi^n(v) \preceq \lfp \Phi$ will hold for any $n$.
For determining concrete reachability probabilities, we operate on $M^0$ and choose $S_? = S \setminus G$ and $d = 1$.
Then the least fixed point of the corresponding Bellman operator satisfies\\[-0pt]
\centerline{
$(\lfp \Phi)(s) = \mathrm{P}_\mathit{\!\!opt}^{M^{(s)}}(\diamond\: G)$,
}\\[5pt]
and VI will iteratively approximate this quantity \emph{from below}.
For determining the expected reward $\mathrm{E}_\opt^{M^{(s)}}(G)$, we operate on $M$ and first have to determine the set $S_\infty$ of states from which the minimum (if $\opt = \max$) or maximum (if $\opt = \min$) probability to reach $G$ is less than $1$.%
\footnote{This can be done via Algs.\ 2 and~4 of \cite{FKNP11}, respectively.
These algorithms do not consider the actual probabilities, but only whether there is a transition and branch (with positive probability) from one state to another or not.
We thus call them \emph{graph-based} (as opposed to \emph{numeric}) algorithms.}
If $s_I \in S_\infty$, then the result is $\infty$ due to the definition of $\mathrm{rew}(\bot)$.
Otherwise, we choose $S_? = S \setminus S_\infty$ and $d = \infty$.
Then, for $\opt = \max$, the least fixed point of the corresponding Bellman operator satisfies\\[5pt]
\centerline{
$(\lfp \Phi)(s) = \mathrm{E}_\opt^{M^{(s)}}(G)$.
}\\[5pt]
Again, VI underapproximates this quantity.
The same holds for $\opt = \min$ if $M$ does not have end components containing states other than those in $G$ and $S_\infty$.
\paragraph{Gauss-Seidel value iteration.}
\begin{algorithm}[t]
\Function{\texttt{GSVI}$(M = \tuple{S, s_I, T}$, $S_?$, $\opt \in \set{ \max, \min }$, $v$, $\epsilon_\mathit{VI})$}{
\Repeat{$\mathit{error} < \epsilon_\mathit{VI}$\label{alg:VI:Until}}{
$\mathit{error} := 0$\;
\ForEach{$s \in S_?$\label{alg:VI:Foreach}}{
$v_\mathit{new} := \opt_{\mu \in T(s)} \sum_{\tuple{r, s'} \in \support{\mu}}{\mu(\tuple{r, s'}) \cdot (r + v(s'))}$\label{alg:VI:Update}\;
\lIf{$v_\mathit{new} > 0$}{
$\mathit{error} := \max\,\set{\mathit{error}, (v_\mathit{new} - v(s)) / v_\mathit{new} }$\label{alg:VI:Error}
}
$v(s) := v_\mathit{new}$
}
}
}
\caption{Gauss-Seidel value iteration with relative-error convergence}
\label{alg:VI}
\end{algorithm}
\Cref{alg:VI} shows the pseudocode of a VI implementation that uses the so-called \emph{Gauss-Seidel optimisation}:
Whereas standard VI needs to store two vectors $v_i$ and $v_{i+1}$, Gauss-Seidel VI stores only a single vector~$v$ and performs updates in place.
This does not affect the correctness of VI, but may speed up convergence depending on the order in which the loop in line~\ref{alg:VI:Foreach} considers the states in $S_?$.
To move towards the least fixed point, we call $\texttt{GSVI}$ with a trivial underapproximation:
$v = \set{ s \mapsto 0 \mid s \in S \setminus G } \cup \set{ s \mapsto 1 \mid s \in G }$ for $\mathrm{P}_\mathit{\!\!opt}(\diamond\: G)$ (and we operate on $M^0$ instead of $M$), and $v = \set{ s \mapsto 0 \mid s \in S \setminus S_\infty } \cup \set{ s \mapsto \infty \mid s \in S_\infty }$ for $\mathrm{E}_\opt(G)$.
\paragraph{Convergence.}
\texttt{GSVI} will not, in general, reach a fixed point (and neither will classical VI); we thus use the standard \emph{relative error} convergence criterion to decide when to stop iterations (lines \ref{alg:VI:Error} and~\ref{alg:VI:Until}).
To use the absolute error, replace line~\ref{alg:VI:Error} by $\mathit{error} := \max\,\set{\mathit{error}, v_\mathit{new} - v(s) }$.
Upon termination of \texttt{GSVI}, $v$ is closer to the least fixed point, but remains an underapproximation.
In particular, the parameter $\epsilon_\mathit{VI}$ (which is $10^{-6}$ by default in most probabilistic model checkers) has, in general, no formal relation whatsoever to the final difference between $v(s_I)$ and $\mathrm{P}_\mathit{\!\!opt}(\diamond\: G)$ or $\mathrm{E}_\opt(G)$, respectively.
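For concreteness, \Cref{alg:VI} can be transliterated into Python as follows; the encoding of transitions as lists of $(p, r, s')$ branches, and all identifiers, are our own illustrative choices, not part of the algorithm:

```python
def gsvi(S_unknown, transitions, v, opt, eps):
    """Gauss-Seidel value iteration with relative-error convergence.

    transitions[s]: list of distributions, each a list of
    (probability, reward, successor) branches. v is updated in place
    (the single-vector Gauss-Seidel scheme)."""
    while True:
        error = 0.0
        for s in S_unknown:
            v_new = opt(sum(p * (r + v[t]) for p, r, t in mu)
                        for mu in transitions[s])
            if v_new > 0:
                error = max(error, (v_new - v[s]) / v_new)
            v[s] = v_new
        if error < eps:
            return v

# P_max(<> s+) on the example MDP M_e (rewards are zero on M^0;
# the absorbing states s+ and s- are not iterated):
T = {
    's0': [[(0.1, 0, 's-'), (0.1, 0, 's+'), (0.8, 0, 's0')],  # a
           [(1.0, 0, 's1')]],                                 # b
    's1': [[(1.0, 0, 's2')]],
    's2': [[(0.6, 0, 's-'), (0.4, 0, 's+')],                  # c
           [(1.0, 0, 's1')]],
}
v = {'s+': 1.0, 's-': 0.0, 's0': 0.0, 's1': 0.0, 's2': 0.0}
gsvi(['s0', 's1', 's2'], T, v, max, 1e-6)
```

With this encoding and iteration order, the first sweeps reproduce the lower-bound trace of \Cref{tab:ExampleOVI}, and $v(s_0)$ approaches the true value $\mathrm{P}_{\!\max}(\diamond\: s_+) = 0.5$ from below.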
\begin{example}
Consider MDP $M_e$ of \Cref{fig:ExampleMDP} again.
The first four rows in the body of \Cref{tab:ExampleOVI} show the values for $v$ after the $i$-th iteration of the outer loop of a call to $\texttt{GSVI}(M_e^0, \set{ s_0, s_1, s_2 }, \max, \set{ s_+ \mapsto 1 } \cup \set{ s \mapsto 0 \mid s \neq s_+ }, 0.05 )$ using absolute-error convergence.
After the fourth iteration, \texttt{GSVI} terminates since the error is less than~$\epsilon_\mathit{VI} = 0.05$; at this point, we have $\mathrm{P}_{\!\max}(\diamond\: s_+) - v(s_0) = 0.08 > \epsilon_\mathit{VI}$.
In $M_e$, states $s_1$ and $s_2$ and the two transitions in between form an end component.
$v = \set{ s \mapsto 1 }$ is another fixed point for the corresponding Bellman operator here; in fact, with appropriate values for $s_1$ and $s_2$, we can obtain fixed points with any $v(s_0) > 0.5$ of our choice.
Similarly, we have $\mathrm{E}_\min^M(\set{s_+, s_-}) = 0.6$ (by scheduling \texttt{b} in $s_0$), but due to the end component (which has only zero-reward transitions by definition), the fixed point is such that $v(s_0) = 0$.
\end{example}
Value iteration thus comes with the two problems of convergence and uniqueness of fixed points.
In practice, the latter is not critical for $\mathrm{P}_{\!\min}$, $\mathrm{P}_{\!\max}$, and $\mathrm{E}_\max$:
we simply call \texttt{GSVI} with a (trivial) underapproximation.
For $\mathrm{E}_\min$, (zero-reward) end components rarely occur in case studies since they indicate Zeno behaviour \wrt the reward.
As rewards are often associated to time progress, such behaviour would be unrealistic.
To make the fixed points unique, for $\mathrm{E}_\max$ we fix the value of all goal states to~$0$.
For $\mathrm{P}_{\!\min}$, we precompute the set of states that reach the goal with probability~$0$ using algs.\ 1 and~3 of \cite{FKNP11}, then fix their values to~$0$.
For $\mathrm{P}_{\!\max}$ and $\mathrm{E}_\min$, we additionally need to \emph{eliminate end components}:
we determine the maximal end components using algorithms similar to \cite[Alg.\ 1]{HM18}, then replace each of them by a single state, keeping all transitions leading out of the end component.
In contrast to the precomputations, end component elimination changes the structure of the MDP and is thus more memory-intensive, yet a sound probabilistic model checker cannot avoid it for $\mathrm{E}_\min$ properties.
The convergence problem is more severe.
Current solutions consist of computing an upper bound in addition to the lower bound provided by VI.
Interval iteration (II)~\cite{HM14,BKLPW17} does so by essentially performing, in parallel, a second value iteration on a second vector $u$ that starts from an overapproximation of the values.
For probabilities, the all-ones vector $\vec{1} = \set{s \mapsto 1}$ is a trivial overapproximation; for rewards, more involved graph-based algorithms as presented in~\cite{BKLPW17} need to be used to precompute a (very conservative) one.
Interval iteration terminates as soon as $u(s_I) - v(s_I) \leq 2\epsilon \cdot v(s_I)$ (assuming $\epsilon$ specifies a relative-width requirement; if it is an absolute width, we compare with just $2\epsilon$) and returns $v_{\mathit{II}} = \frac{1}{2}(u(s_I) + v(s_I))$.
With $v_\True = \mathrm{P}_\mathit{\!\!opt}(\diamond\: G)$, it thus guarantees that $v_{\mathit{II}} \in [v_\True - \epsilon \cdot v_\True, v_\True + \epsilon \cdot v_\True]$ and analogously for expected rewards.
To ensure termination, II requires a unique fixed point: the precomputations, and in particular end component elimination for $\mathrm{P}_{\!\max}$, are thus no longer optional.
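As a concrete (and heavily simplified) illustration, the following Python sketch implements interval iteration for $\mathrm{P}_{\!\max}(\diamond\: g)$ on a tiny hypothetical MDP with a unique fixed point. The model, the state names, and the use of the absolute-width termination check (rather than the relative one) are our own choices for illustration, not taken from any benchmark:

```python
# Hypothetical toy MDP (our own illustration): each state maps to a list of
# actions; an action is a list of (probability, successor) pairs.
# "g" is the goal (value 1), "z" is a sink (value 0).
mdp = {"s0": [[(0.5, "g"), (0.5, "s1")], [(1.0, "s1")]],
       "s1": [[(0.7, "g"), (0.3, "z")]]}

def interval_iteration(mdp, s_init, eps=1e-6):
    """Interval iteration for P_max(<> g): run value iteration on a lower
    vector v (from the trivial underapproximation 0) and an upper vector u
    (from the trivial overapproximation 1) until the interval at s_init
    has absolute width at most 2*eps, then return its centre."""
    val = lambda x, s: 1.0 if s == "g" else 0.0 if s == "z" else x[s]
    v = {s: 0.0 for s in mdp}
    u = {s: 1.0 for s in mdp}
    while u[s_init] - v[s_init] > 2 * eps:
        for x in (v, u):                      # one Gauss-Seidel sweep each
            for s in mdp:
                x[s] = max(sum(p * val(x, t) for p, t in a) for a in mdp[s])
    return 0.5 * (u[s_init] + v[s_init])
```

On this acyclic model both vectors converge to the exact maximal reachability probability $0.85$ for $s_0$, so the returned interval centre coincides with the true value.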
Sound value iteration (SVI)~\cite{QK18} uses a different approach to deriving upper bounds that makes it perform better overall, and that eliminates the need to precompute an initial overapproximation for expected rewards.
It still requires unique fixed points and hence precomputations.
\section{Optimistic Value Iteration}
\label{sec:OVI}
We now describe a new, practical approach to solving the convergence problem for unbounded reachability and expected rewards.
It exploits the observation that VI does deliver results that are in fact $\epsilon$-close to the true value on \emph{most} case studies to which probabilistic model checking has been applied so far---it only lacks the ability to prove it.
Our approach, called \emph{optimistic value iteration} (OVI), extends standard value iteration with the ability to deliver such a proof.
The key idea is to exploit a property of the Bellman operator $\Phi$ as well as of Gauss-Seidel value iteration as in \Cref{alg:VI} to determine whether a candidate vector is a lower bound, an upper bound, or neither.
The foundation of our approach is basic domain theory:
By Scott-continuity of $\Phi$ it follows easily that $\Phi$ is monotonic, meaning $v \preceq w$ implies $\Phi(v) \preceq \Phi(w)$.
A principle called \emph{Park induction} \cite{park1969fixpoint} for monotonic self-maps on complete lattices yields the following induction rule:
For any $u \in \VV$
\vspace{-6pt}\begin{align}
\Phi(u) \preceq u \qquad\text{implies}\qquad \lfp \Phi \preceq u \label{eq:park}
\end{align}\\[-19pt]
Thus, if we can construct a candidate vector $u$ that satisfies $\Phi(u) \preceq u$, then $u$ is in fact an upper bound on the sought-after least fixed point.
We call such a $u$ an \emph{inductive upper bound}.
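The induction rule lends itself to a mechanical check: apply $\Phi$ once and compare elementwise. The Python sketch below (with a hypothetical toy MDP and function names of our own choosing) does exactly that for the $\mathrm{P}_{\!\max}$ Bellman operator:

```python
# Hypothetical toy model (our own): P_max to reach goal "g"; "z" is a sink.
mdp = {"s0": [[(0.5, "g"), (0.5, "s1")], [(1.0, "s1")]],
       "s1": [[(0.7, "g"), (0.3, "z")]]}
terminal = {"g": 1.0, "z": 0.0}

def bellman_max(x, mdp, terminal):
    """One application of the Bellman operator Phi for P_max:
    Phi(x)(s) = max over actions of the expected successor value."""
    val = lambda s: terminal[s] if s in terminal else x[s]
    return {s: max(sum(p * val(t) for p, t in a) for a in mdp[s])
            for s in mdp}

def is_inductive_upper_bound(u, mdp, terminal):
    """Park induction: if Phi(u) <= u pointwise, then lfp Phi <= u."""
    phi_u = bellman_max(u, mdp, terminal)
    return all(phi_u[s] <= u[s] for s in mdp)
```

Here the least fixed point assigns $0.85$ to $s_0$ and $0.7$ to $s_1$; the candidate $\set{s_0 \mapsto 0.9, s_1 \mapsto 0.75}$ is certified as an inductive upper bound, whereas $\set{s_0 \mapsto 0.8, s_1 \mapsto 0.7}$ is rejected (and is indeed no upper bound).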
Optimistic value iteration uses this insight and can---in a nutshell---be summarised as follows:\\[-5pt]
\fbox{\parbox{0.9\textwidth}{
\begin{enumerate}
\item Perform Gauss-Seidel value iteration until the current underapproximation $v$ satisfies the VI convergence criterion.
\item Heuristically determine a candidate $u$ and compute $\Phi(u)$.
\item If $\Phi(u) \preceq u$, then $v \preceq \lfp \Phi \preceq u$.
\begin{itemize}
\item If $u(s_I) - v(s_I) < 2\epsilon$, \textbf{terminate and return $\boldsymbol{\tfrac{1}{2}\bigl(u(s_I) + v(s_I)\bigr)}$}.
\end{itemize}
\item Tweak parameters pertaining to convergence of VI and goto step 1.
\end{enumerate}}}\\[1pt]
\begin{algorithm}[t]
\Function{\texttt{OVI}$(M = \tuple{S, s_I, T}$, $S_?$, $\opt \in \set{ \max, \min }$, $v$, $\epsilon)$}{
$\mathit{error} := \epsilon$\;
\WhileTrue{}{
$\texttt{GSVI}(M, S_?, \opt, v, \mathit{error})$\label{alg:OVI:CallVI}\tcp*{perform standard value iteration}
$u := \set{ s \mapsto v(s) \cdot (1 + \epsilon) \mid s \in S_? }$\label{alg:OVI:Guess}\tcp*{guess candidate upper bound}
\WhileTrue{\label{alg:OVI:Verif}\tcp*[f]{start the verification phase}}{
$\mathit{error} := 0$, $\mathit{up} := \True$, $\mathit{down} := \True$, $\mathit{cross} := \False$\;
\ForEach{$s \in S_?$\label{alg:OVI:Foreach}}{
$v_\mathit{new} := \opt_{\mu \in T(s)} \sum_{\tuple{r, s'} \in \support{\mu}}{\mu(s') \cdot (r + v(s'))}$\label{alg:OVI:UpdateV}\;
$u_\mathit{new} := \opt_{\mu \in T(s)} \sum_{\tuple{r, s'} \in \support{\mu}}{\mu(s') \cdot (r + u(s'))}$\label{alg:OVI:UpdateU}\;
\lIf{$v_\mathit{new} > 0$}{
$\mathit{error} := \max\,\set{\mathit{error}, (v_\mathit{new} - v(s)) / v_\mathit{new} }$\label{alg:OVI:Error}
}
\lIf{$u_\mathit{new} < u(s)$}{$\mathit{up} := \False$\tcp*[f]{upper value decreased}}
\lElseIf{$u_\mathit{new} > u(s)$}{$\mathit{down} := \False$\tcp*[f]{upper value increased}}
\lIf{$u_\mathit{new} < v_\mathit{new}$}{$\mathit{cross} := \True$\tcp*[f]{upper value below lower}}
$v(s) := v_\mathit{new}$, $u(s) := u_\mathit{new}$
}
\lIf{$\mathit{up} \vee \mathit{cross}$}{\Break\label{alg:OVI:NotUpper}\tcp*[f]{$u$ is definitely not an upper bound}}
\ElseIf(\tcp*[f]{$u$ is an upper bound}){$\mathit{down} \wedge u(s_I) - v(s_I) \leq 2\epsilon \cdot v(s_I)$\label{alg:OVI:Upper}}{
\Return{$\frac{1}{2}(u(s_I) + v(s_I))$}\tcp*{and we have converged}
}
}
$\mathit{error} := \frac{1}{2} \mathit{error}$\label{alg:OVI:DecrErr}\tcp*{decrease error for next iteration phase}
}
}
\caption{Optimistic value iteration}
\label{alg:OVI}
\end{algorithm}
\noindent{}The resulting procedure in more detail is shown as \Cref{alg:OVI}.
Starting from the same initial vectors $v$ as for VI, we first perform standard Gauss-Seidel value iteration (in line~\ref{alg:OVI:CallVI}).
We refer to this as the \emph{iteration phase} of OVI.
After that, vector $v$ is an improved, and in practice usually very close, underapproximation of the actual probabilities or reward values.
We then ``guess'' an overapproximating vector $u$ of \emph{upper values} from the \emph{lower values} in $v$ by adding to $v$ the desired relative error, i.e.\ we multiply $v$ element-wise by $1 + \epsilon$ to obtain $u$ (line~\ref{alg:OVI:Guess}).
Then the \emph{verification phase} starts (in line~\ref{alg:OVI:Verif}):
we now perform value iteration on both the lower values $v$ and the upper values $u$ at the same time, keeping track of the direction in which the upper values move.
If, in some iteration, the upper values \emph{for all states moved down} (line~\ref{alg:OVI:Upper}), then we know by Park induction that the current $u$ is an inductive upper bound for the values of all states, see \Cref{eq:park}, and the true value must be in the interval $[v(s_I), u(s_I)]$.
If the interval is small enough \wrt $\epsilon$ (line~\ref{alg:OVI:Upper} checks a relative-width requirement of $2\epsilon$), then we can return its centre $v_I = \frac{1}{2}(u(s_I) + v(s_I))$ and be sure that the true value $v_\True = (\lfp \Phi)(s_I)$ is in $[v_I - \epsilon \cdot v_\True, v_I + \epsilon \cdot v_\True]$.
Otherwise, we remain in the verification phase---effectively performing interval iteration---until the two vectors $v$ and $u$ are sufficiently close.
If, on the other hand, the upper values for all states moved \emph{up}, or if we have $u(s) < v(s)$ for some state $s$, then the current $u$ is not an inductive upper bound. In line~\ref{alg:OVI:NotUpper}, we then cancel verification and go back to the iteration phase to further improve $v$ before trying again.
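The interplay of the two phases can be sketched in a few lines of Python. This is a minimal illustration of the idea only, using the absolute-error and absolute-width variants, a hypothetical toy model, and none of the optimisations of the actual \mcsta implementation:

```python
def ovi(mdp, terminal, s_init, eps=1e-4):
    """Minimal sketch of optimistic value iteration for P_max reachability
    (absolute-error / absolute-width variant). Assumes the Bellman operator
    has a unique fixed point, e.g. after end component elimination.
    A real implementation would additionally cap upper values at 1."""
    val = lambda x, s: terminal[s] if s in terminal else x[s]
    step = lambda x, s: max(sum(p * val(x, t) for p, t in a) for a in mdp[s])
    v = {s: 0.0 for s in mdp}
    err = eps
    while True:
        # Iteration phase: Gauss-Seidel VI until the absolute error is small.
        while True:
            e = 0.0
            for s in mdp:
                new = step(v, s)
                e, v[s] = max(e, new - v[s]), new
            if e <= err:
                break
        # Optimistically guess candidate upper values.
        u = {s: v[s] + eps for s in mdp}
        # Verification phase: iterate v and u together, tracking direction.
        while True:
            err, up, down, cross = 0.0, True, True, False
            for s in mdp:
                vn, un = step(v, s), step(u, s)
                err = max(err, vn - v[s])
                if un < u[s]:
                    up = False          # some upper value moved down
                elif un > u[s]:
                    down = False        # some upper value moved up
                if un < vn:
                    cross = True        # upper value fell below lower value
                v[s], u[s] = vn, un
            if up or cross:
                break                   # u is not an inductive upper bound
            if down and u[s_init] - v[s_init] <= 2 * eps:
                return 0.5 * (u[s_init] + v[s_init])
        err /= 2                        # retry with a stricter VI criterion

# Hypothetical toy model (our own): P_max to reach goal "g"; "z" is a sink.
mdp = {"s0": [[(0.5, "g"), (0.5, "s1")], [(1.0, "s1")]],
       "s1": [[(0.7, "g"), (0.3, "z")]]}
```

On this model a single verification phase suffices: the guessed $u$ immediately moves down everywhere, and the returned centre is within $\pm\epsilon$ of the true value $0.85$.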
\paragraph{Optimisation.}
Recall that Park induction reads $\Phi(u) \preceq u$ implies $\lfp \Phi \preceq u$.
Conversely---in case the fixed point of $\Phi$ is \emph{unique}---$u \preceq \Phi(u)$ implies that $u$ is a lower bound on $\lfp \Phi$.
In such situations of single fixed points, we can---as an optimisation---additionally replace $v$ by $u$ if all upper values have moved up at some point in the verification phase and continue with the iteration phase.
\paragraph{Heuristics.}
OVI is inherently a \emph{practical} approach that relies extensively on heuristics to gain an advantage over alternative methods such as II or SVI; it cannot be better on \emph{all} MDP.
Concretely, an implementation of OVI can choose
\begin{enumerate}
\item
a stopping criterion for the iteration phase,
\item
how to guess candidate upper values from the result of the iteration phase,~and
\item
how much to increase the ``precision'' requirement when going back from verification to iteration.
\end{enumerate}
\Cref{alg:OVI} shows the default choices made by our current implementation:
It (1.)~uses \Cref{alg:VI} and its standard relative-error stopping criterion for the iteration phase, but can be configured to use the absolute-error method instead.
We (2.)~guess upper values as shown in line~\ref{alg:OVI:Guess} if $\epsilon$ specifies a relative width; if an absolute width is required instead, then we simply add $\epsilon$ to all values in $v$.
In case of probabilities, we additionally replace values greater than $1$ by $1$ (not shown in \Cref{alg:OVI}).
Finally, when (3.)~going back to the iteration phase, we use half the error of the last iteration in the verification phase as the next value of the $\epsilon$ parameter of \texttt{GSVI} (as shown in line~\ref{alg:OVI:DecrErr}).
Reducing the error too much may cause more and potentially unnecessary iterations in \texttt{GSVI} (continuing to iterate although switching to the verification phase would already result in upper values sufficient for termination), while using too high a value may result in more verification phases (whose iterations are computationally more expensive than those of \texttt{GSVI}) being started before the values in $v$ are high enough.
\begin{example}
\label{ex:OVI}
We now call $\texttt{OVI}(M_e^0, \set{s_0, s_1, s_2}, \max, \set{ s_+ \mapsto 1 } \cup \set{ s \mapsto 0 \mid s \neq s_+ }, 0.05)$.
\Cref{tab:ExampleOVI} shows the values in $v$ and $u$ during this run, using an absolute-width requirement of $\epsilon = 0.05$ and the absolute-error stopping criterion in \texttt{GSVI}.
The first iteration phase lasts from $i = 0$ to~$4$.
At this point, $u$ is initialised with the values shown in italics.
The first verification phase needs only one iteration to realise that $u$ is actually a lower bound (to a fixed point which is not the least fixed point, due to the uneliminated end component).
We then resume \texttt{GSVI} from $i = 6$.
The error in \texttt{GSVI} is again below $\epsilon_\mathit{VI}$, which had been reduced to $0.008$, during iteration $i = 9$.
We thus start another verification phase, which immediately (in one iteration) finds the newly guessed vector $u$ to be an upper bound, and the difference between $u(s_0)$ and $v(s_0)$ to be small enough.
\end{example}
\paragraph{Termination.}
We first consider situations where the Bellman operator has a single fixed point (which is always achievable by precomputations and transformations as described in \Cref{sec:Preliminaries}).
At some point, all values in $v$ will be close enough to the true values so that the guessing phase picks a \emph{valid} upper bound $u \succeq \lfp \Phi$.
However, this bound need not be \emph{inductive}, i.e.\ $\Phi(u) \not\preceq u$.
Moreover, even in the case where value iteration on this upper bound does converge towards the least fixed point\footnote{This could even be guaranteed by modifying VI to enforce monotonicity as in~\cite{BKLPW17}.}, i.e.\ if $\lim_{n \to \infty} \Phi^n(u) = \lfp \Phi$, the improved upper bound may never become inductive, i.e.\ there may not exist an $n$ such that $\Phi^{n+1}(u) \preceq \Phi^n(u)$.
In this scenario, OVI will remain unaware that it has in fact picked a valid upper bound~$u$ because it is unable to prove this fact by Park induction.
Thus, if the guessing heuristics of OVI continually picks such an unfavourable $u$, then OVI will not terminate.
\emph{In practice}, however, we have not yet encountered such a situation outside of constructed examples with \emph{constructed vectors} $u$ that OVI with our implemented guessing heuristics could not have chosen.
If we apply OVI in a situation with multiple fixed points (\eg by skipping the precomputations, or by not computing and eliminating end components\footnote{We must however ensure that at least the \emph{least} fixed point corresponds to the true~values, \ie we must eliminate end components for $\mathrm{E}_\min$ properties---but only for those.}), then we can additionally get nontermination due to the guessed upper values \emph{always} moving \emph{up} towards a higher fixed point, resulting in infinitely many validation phases being cancelled.
The situation where they move \emph{down} in verification phase iteration $i$, but another fixed point exists between $u$ and the true values, is only problematic with our guessing heuristics if additionally values moved up in iterations $j < i$ such that the difference between $v(s_I)$ and $u(s_I)$ forever remains higher than the required width.
Again, we have not encountered either situation on practical examples yet.
To mitigate (but not eliminate) the second case in models yet unknown to us, as well as the case of never reaching an inductive upper bound described in the previous paragraph, our implementation additionally cancels verification when the current verification phase took more than ten times as many iterations as the previous iteration phase.
In summary, OVI is a semi-algorithm: it need not terminate.
On all MDP that we have tested, however, it does terminate.
This, together with the importance of heuristics, again underlines the practical nature of OVI.
\section{Experimental Evaluation}
\label{sec:Experiments}
We have implemented interval iteration (II) (using the ``variant 2'' approach of~\cite{BKLPW17} to compute initial overapproximations for expected rewards), sound value iteration (SVI), and now optimistic value iteration (OVI) precisely as described in the previous section, in the \mcsta model checker of the \toolset~\cite{HH14}, which is publicly available at \href{http://www.modestchecker.net/}{modestchecker.net}.
It is cross-platform, implemented in C\#, and built around the \modest~\cite{HHHK13} high-level modelling language.
Via support for the \jani format~\cite{BDHHJT17}, the toolset can exchange models with other tools like \tool{Storm}~\cite{DJK017} and \tool{Epmc}~\cite{HLSTZ14}.
\mcsta is the toolset's explicit-state probabilistic model checker.
Its performance is competitive with \tool{Storm} and \tool{Prism}~\cite{HHHKKKPQRS19}.
\begin{figure}[t]
\centering
\scatterplotpvi{results-p.csv}{mcsta.ovi.std}{OVI (time)}{mcsta.vi.std}{VI (time)}{0.52\textwidth}{false}
\scatterplotiters{results-p.csv}{mcsta.ovi.std}{OVI (iterations)}{mcsta.vi.std}{VI (iterations)}{0.52\textwidth}{true}
\caption{OVI runtime and iteration count compared to VI (probabilistic reachability)}
\label{fig:PlotsPVI}
\end{figure}
\begin{figure}[t]
\centering
\scatterplotevi{results-e.csv}{mcsta.ovi-vi.std}{OVI (time)}{mcsta.vi.std}{VI (time)}{0.52\textwidth}{false}
\scatterplotiters{results-e.csv}{mcsta.ovi.std}{OVI (iterations)}{mcsta.vi.std}{VI (iterations)}{0.52\textwidth}{true}
\caption{OVI runtime and number of iterations compared to VI (expected rewards)}
\label{fig:PlotsEVI}
\end{figure}
In the following, we report on our experimental evaluation of OVI using \mbox{\mcsta} on all applicable models of the Quantitative Verification Benchmark Set (QVBS)~\cite{HKPQR19}.
All models in the QVBS are available in \jani and can thus be used by \mcsta.
Most of them are parameterised, and come with multiple properties of different types.
Aside from MDP models, the QVBS also includes DTMCs (which are a special case of MDP), continuous-time Markov chains (CTMC, for which the analysis of unbounded properties reduces to checking the embedded DTMC), Markov automata (MA~\cite{EHZ10}, on which the embedded MDP suffices for unbounded properties), and probabilistic timed automata (PTA~\cite{KNSS02}, some of which can be converted into MDP via the digital clocks semantics~\cite{KNPS06}).
We use all of these model types.
The QVBS thus gives rise to a large number of benchmark \emph{instances}:
combinations of a model, a parameter valuation, and a property to check.
For every model, we chose a representative set of instances, aiming to cover all its unbounded probabilistic reachability and expected-reward properties as well as one or two suitable parameter valuations.
We only excluded
\begin{itemize}
\item
models with multiple initial states (which \mcsta does not yet support),
\item
probabilistic reachability properties for which the result is $0$ or $1$ (since they can easily be solved by the graph-based precomputations),
\item
the \textit{oscillators} model due to its very large model files,
\item
model-property combinations for which we found no parameter valuation s.t.
\begin{itemize}
\item[--]
VI, II, SVI, or OVI took more than 1 second (since lower runtimes do not allow reliable comparisons) and
\item[--]
the entire model checking process (including state space exploration) did not run out of memory or exceed a 2-minute timeout.
\end{itemize}
\end{itemize}
As a result, we considered 47 instances with probabilistic reachability and 47 instances with expected-reward properties.
For many of them, ``reference results'' are available; in those cases, we also checked that the result delivered by the respective method is correct up to the requested error width.
We ran all experiments on an Intel Core i7-4790 workstation ($3.6$-$4.0\sunit{GHz}$) with 8\sunit{GB} of memory and 64-bit Ubuntu Linux 18.04, using version 3.1 of the \toolset.
We request a relative half-width of $\epsilon = 10^{-6}$ for the result probability or reward value, and configure OVI to use the relative-error criterion with $\epsilon_\mathit{VI} = 10^{-6}$ in the iteration phase.
We report the average over three runs for every instance.
Due to the number of instances, we show the results of our experiments as scatter plots like in \Cref{fig:PlotsPVI}.
Each such plot compares two methods in terms of runtime or number of iterations.
Every point $\tuple{x, y}$ corresponds to an instance and indicates that the method noted on the x-axis took $x$ seconds or iterations to solve this instance while the method noted on the y-axis took $y$ seconds or iterations.
Thus points above the solid diagonal line correspond to instances where the x-axis method was faster (or needed fewer iterations); points above (below) the upper (lower) dotted diagonal line are where the x-axis method took less than half (more than twice) as long or as many iterations.
\paragraph{Comparison with VI.}
All methods except VI delivered correct results, \ie within $\pm\,\epsilon \cdot r$ where a reference result $r$ is available.
VI offers low runtime at the cost of occasional incorrect results, and in general the absence of any guarantee about the result.
We thus compare with VI separately to judge the overhead caused by performing additional verification, and possibly iteration, phases.
\Cref{fig:PlotsPVI,fig:PlotsEVI} show the results.
The unfilled shapes indicate instances where a reference result is available and VI produced an incorrect result.
In terms of runtime, we see that OVI does not often take more than twice as long as VI, and in most cases requires less than $50\,\%$ extra time.
On many of the instances where OVI incurs a significant overhead, VI produces an incorrect result, indicating that they are ``hard'' instances for value iteration.
The unfilled CTMCs where OVI takes much longer to compute probabilities are all instances of the \textit{embedded} model; the DTMC on the x-axis is \textit{haddad-monmege}, an adversarial model built to highlight the convergence problem of VI in~\cite{HM14}.
The problematic cases for expected rewards include the two instances of the \textit{ftwc} MA model, the two expected-reward instances of the \textit{embedded} CTMC, and again \textit{haddad-monmege}.
In terms of iterations, the overhead of OVI is even less than in runtime.
When inspecting the output of \mcsta, we found that OVI usually requires few very short verification phases.
\begin{figure}[t]
\centering
\scatterplotp{results-p.csv}{mcsta.ovi.std}{OVI/std}{mcsta.ii.std}{II/std}{0.52\textwidth}{false}
\scatterplotp{results-p.csv}{mcsta.ovi.pre}{OVI/pre}{mcsta.ii.pre}{II/pre}{0.52\textwidth}{true}\\[5pt]
\scatterplotp{results-p.csv}{mcsta.ovi.std}{OVI/std}{mcsta.svi.std}{SVI/std}{0.52\textwidth}{false}
\scatterplotp{results-p.csv}{mcsta.ovi.pre}{OVI/pre}{mcsta.svi.pre}{SVI/pre}{0.52\textwidth}{false}
\caption{OVI runtime compared to II and SVI (probabilities)}
\label{fig:PlotsP}
\end{figure}
\begin{figure}[t]
\centering
\scatterplote{results-e.csv}{mcsta.ovi.std}{OVI/std}{mcsta.ii.std}{II/std}{0.52\textwidth}{false}
\scatterplote{results-e.csv}{mcsta.ovi.pre}{OVI/pre}{mcsta.ii.pre}{II/pre}{0.52\textwidth}{true}\\[5pt]
\scatterplote{results-e.csv}{mcsta.ovi.std}{OVI/std}{mcsta.svi.std}{SVI/std}{0.52\textwidth}{false}
\scatterplote{results-e.csv}{mcsta.ovi.pre}{OVI/pre}{mcsta.svi.pre}{SVI/pre}{0.52\textwidth}{false}
\caption{OVI runtime compared to II and SVI (expected rewards)}
\label{fig:PlotsE}
\end{figure}
\paragraph{Comparison with II and SVI.}
We compare the runtime of OVI with the runtime of II and that of SVI separately for reachability probabilities (shown in \Cref{fig:PlotsP}) and expected rewards (shown in \Cref{fig:PlotsE}).
OVI has the same requirements on precomputations as VI (\ie end component elimination is needed only for $\mathrm{E}_\min$ properties), while II and SVI require the use of precomputations and of end component elimination (for $\mathrm{P}_{\!\max}$ properties) as discussed in \Cref{sec:Preliminaries}.
The precomputations and end component elimination need extra runtime (which turned out to be negligible in some cases but significant enough to cause a timeout in others) prior to the numeric iterations.
However, doing the precomputations can reduce the size of the set $S_?$, and end component elimination can reduce the size of the MDP itself.
Both can thus reduce the runtime needed for the numeric iterations.
For the overall runtime, we found that none of these effects dominates the other over all models.
Thus sometimes it may be better to perform only the required precomputations and transformations, while on other models performing all applicable ones may lead to lower total runtime.
We thus compare OVI, II, and SVI in two scenarios:
once in the default (``std'') setting of \mcsta that uses only required precomputations and transformations (where we report the total runtime for precomputations, transformations, and numeric iterations), and once with all of them enabled (``pre'', where we report only the runtime for numeric iterations, plus the computation of initial upper bounds in case of~II).
For probabilistic reachability, we see in \Cref{fig:PlotsP} that there is no clear winner among the three methods in the ``std'' setting.
We found that, for the QVBS models, value iteration to compute probabilities is usually fast, and the overall model checking time is dominated by the time needed for state space exploration.
We were unable to scale up several models to require more than $1\sunit{s}$ for value iteration without running out of memory due to state space explosion.
Similarly, the precomputations and transformations take long enough, relative to the numeric iterations, to significantly influence the outcome.
The ``pre'' setting, in which all three algorithms operate on exactly the same input \wrt MDP $M$ and set $S_?$, however, shows a clearer picture:
OVI is consistently faster than both II and SVI, with only 6 instances where it takes longer (which are the single instances of the \textit{stream} and \textit{nand} models as well as two instances each of \textit{csma} and \emph{zeroconf}).
Expected-reward properties were more challenging for all three methods (as well as for VI, which produced more errors here than for probabilities), and the precomputations and transformations have less impact on runtime.
The plots in \Cref{fig:PlotsE} paint a very clear picture of OVI being significantly faster for expected rewards than II (which suffers from the need to precompute initial upper bounds that then turn out to be rather conservative), and faster (though by a lesser margin) than SVI.
The outliers are the single instances of \textit{coupons} and \textit{polling-system}, one instance each of \textit{csma} and \textit{firewire}, and two instances of \textit{wlan}.
\section{Conclusion}
\label{sec:Conclusion}
We have presented \emph{optimistic value iteration} (OVI), a new approach to making non-exact probabilistic model checking via iterative numeric algorithms sound in the sense of delivering results within a prescribed interval around the true value (modulo floating-point and implementation errors).
Compared to the existing approaches of interval (II) and sound value iteration (SVI), OVI is \emph{theoretically} weaker since it cannot guarantee termination.
However, it is deeply \emph{practical}:
\begin{itemize}
\item
It terminates on ``regular'' models, including on all applicable models and properties of the Quantitative Verification Benchmark Set.
\item
It relies on a combination of heuristics that can be arbitrarily modified and tuned, but that crucially determine its effectiveness and efficiency.
\item
It is faster than II and SVI when computing probabilities on a ``level playing field'' (\ie modulo precomputations and transformations), and it is unconditionally faster than either of the two when computing expected rewards.
\item
It is very simple to add to any tool that already implements value iteration.
\end{itemize}
In summary, there is no more excuse for a probabilistic model checker (several of which still default to unsound VI due to the effort required to implement II or SVI) not to (try to) produce sound results now (via OVI).
\paragraph{Future work.}
We have so far implemented OVI (in \mcsta) with one set of heuristics as described in this paper.
While they turned out to work very well, making OVI faster than all current alternatives, we see ample room for improvement especially in devising better methods to guess the initial upper bounds for the verification phase, and in tuning how $\epsilon_\mathit{VI}$ is adjusted when going back to the iteration phase.
We also plan to run more extensive experiments, in particular comparing OVI across different absolute and relative-width requirements, and with initial values for $\epsilon_\mathit{VI}$ that differ from the specified half-width~$\epsilon$.
\paragraph{Acknowledgments.}
The authors thank Tim Quatmann (RWTH Aachen) for fruitful discussions when the idea of OVI initially came up in late 2018, and for his help in implementing and optimising the SVI implementation in \mcsta.
\section{Introduction and results}
This paper is devoted to the study of a smoothing effect for a damped
Schr\"{o}dinger equation on an exterior domain. In order to formulate the
results, we begin by recalling some results for the Schr\"{o}dinger
equation linking the regularity of solutions to the geometry of the domain
where these equations are posed. \newline
It is well known that the free Schr\"{o}dinger equation enjoys the property
of the $\ensuremath{\mathscr C}^{\infty }$ smoothing effect, which can be described as follows:
For any distribution $u_{0}$ of compact support, the solution of the Cauchy
problem
\begin{equation*}
\left\{
\begin{array}{l}
(i\partial _{t}+\Delta )u=0\text{ in }\mathbb{R}\times \mathbb{R}^{d} \\ [4pt]
u_{|t=0}=u_{0},
\end{array}
\right.
\end{equation*}
is infinitely differentiable with respect to $t$ and $x$ when $t\neq 0$ and
$x\in \mathbb{R}^{d}$.
Another type of smoothing effect says that if
$u_{0}\in L^{2}(\mathbb{R}^{d}) $ then the solution of the Schr\"{o}dinger equation satisfies the Kato
$\frac{1}{2}$-smoothing effect ($H^{1/2}$-smoothing effect):
\begin{equation*}
\int_{\mathbb{R}}\left\Vert \langle x\rangle ^{-s}\Delta ^{1/4}u\right\Vert
_{L^{2}(\mathbb{R}^{d})}^{2}\leq C\Vert u_{0}\Vert _{L^{2}}^{2},\text{\ }
s>1/2.
\end{equation*}
This property of gain of regularity was first observed in the case of
$\mathbb{R}^{d}$ in the works of Constantin-Saut \cite{co.sa1}, Sj\"olin \cite{sjolin} and Vega~\cite{vega}, and it has been extended,
locally in time,
to variable
coefficient operators with non-trapping metric by Doi~\cite{doi1,Do}.
In the case of domains with boundary, Burq, G\'erard and Tzvetkov \cite{b.g.t}
proved a local smoothing estimate for $\exp(it\Delta )$ in exterior
domains under a non-trapping assumption. Using the $TT^{\star }$ argument, the
proof of the smoothing effect with respect to initial data in \cite{b.g.t}
is reduced to the non-homogeneous bound which, by performing Fourier
transform in time, can be deduced from the bounds on the cut-off resolvent:
\begin{equation*}
\Vert \chi (\lambda ^{2}-\Delta )^{-1}\chi \Vert _{L^{2}\rightarrow
L^{2}}\leq C,\text{ }\forall \lambda \gg 1.
\end{equation*}
The resolvent bound, for which the non-trapping assumption plays a crucial
role, is proven for $|\lambda |\gg 1$ in greater generality by Lax-Phillips~\cite{LaxPh}, Melrose-Sj\"ostrand~\cite{melJost,melJost2}, Vainberg~\cite{vainb} and Vasy-Zworski~\cite{V.Z}.
The Kato effect has been extended by Robbiano and Zuily in \cite{RZ} to
variable coefficient operators with unbounded potential in exterior domains
with non-trapping metric. The proof of their result is reduced to an
estimate localized in frequency, which has been established by contradiction,
using in a crucial way the semiclassical defect measures introduced by
P.~G\'erard~\cite{gerard} (see also \cite{Leb}). The use of microlocal defect
measures to prove an estimate by a contradiction argument (Wilcox~\cite{wilcox})
goes back to Lebeau~\cite{Leb}. This idea has been followed with success by
several authors (see Burq~\cite{Bu,B2,bursmot}, Aloui and
Khenissi~\cite{alkh,alkh2,kh}).
In \cite{bursmot}, Burq proved that the non-trapping condition is necessary
for the $H^{1/2}$ smoothing effect and showed, in the case of several convex
obstacles satisfying certain assumptions, the smoothing effect with an
$\varepsilon >0$ loss:
\begin{equation*}
\Vert \chi u\Vert _{L^{2}(H^{1/2-\varepsilon }(\Omega ))}\leq C\Vert
u_{0}\Vert _{L^{2}(\Omega )},
\end{equation*}
where $\chi$ is compactly supported.
On the other hand, the non-trapping assumption is also equivalent to the
uniform decay of the local energy for the wave equation
(see \cite{LaxPh,rals,Melros}). For trapping domains, when no such decay can be expected,
the idea of stabilization for the wave equation is to add a dissipative term
to the equation to force the energy of the solution to decrease uniformly.
There is a large literature on the problem of stabilization of wave
equation. In the case of bounded domains, we quote essentially the work of
J. Rauch and M. Taylor \cite{rauch} and the one of C. Bardos, G. Lebeau and
J. Rauch \cite{BLR}, who introduced and developed the geometric control
condition (GCC). This condition, which asserts, roughly speaking, that every
ray of geometric optics enters the region where the damping term is
effective in uniform time, turns out to be almost necessary and sufficient
for the uniform exponential decay of waves. In \cite{alkh}, Aloui and
Khenissi introduced the Exterior Geometric control condition (see below
Definition~\ref{egc})
and hence extended the result of \cite{BLR} to the case of exterior
domains (see also \cite{alkh2}).
Recently, by analogy with the stabilization problem, the first author~\cite{al1,al2} introduced the forced smoothing effect for the Schr\"{o}dinger
equation in bounded domains; it consists in acting on the equation to produce
some smoothing effect. More precisely, he considered the following equation
\begin{equation}
\left\{
\begin{array}{lll}
i\partial _{t}u-\Delta_{D} u +ia(x)(-\Delta_{D})^{\frac{1}{2}}a(x)u=0 &
\text{in} & ]0,+\infty)\times \Omega, \\
u(0,.)=f & \text{in} & \Omega, \\
u|_{\mathbb{R}^{+}\times \partial \Omega }=0, & &
\end{array}
\right. \label{eqr}
\end{equation}
where $\Omega$ is a bounded domain and $\Delta_{D}$ is the Dirichlet-Laplace
operator on $\Omega$.
Using the strategy of \cite{b.g.t}, Aloui~\cite{al2} proved a weak
Kato-smoothing effect:
\begin{equation}
\left\Vert v\right\Vert _{L^{2}([\varepsilon ,T],H_{D}^{s+1}(\Omega ))}\leq
c\left\Vert v_{0}\right\Vert _{H_{D}^{s}(\Omega )}, \label{fai}
\end{equation}
where $0<\varepsilon <T<\infty $ and $v_{0}\in H_{D}^{s}(\Omega )$ (see
\cite{al2} for the definition of $H_{D}^{s}$).
By iterating this result, Aloui also deduced a
$\ensuremath{\mathscr C}^{\infty }$-smoothing effect for the regularized Schr\"{o}dinger equation
(\ref{eqr}).
Recently, Aloui, Khenissi and Vodev~\cite{alKhVo} proved that the
Geometric Control condition is not necessary to obtain the forced
$\ensuremath{\mathscr C}^{\infty }$-smoothing effect.
On the other hand, using the arguments of \cite{b.g.t}, we can prove, for
equation (\ref{eqr}) in exterior domains, the cut-off resolvent bound,
which is sufficient to deduce the non-homogeneous bound. Unfortunately,
the generator $\Delta _{D}-ia(x)(-\Delta _{D})^{\frac{1}{2}}a(x)$
is not self-adjoint, so the $TT^{\star }$ argument fails. For this
reason, we cannot prove (with this strategy) the weak Kato-smoothing effect
(\ref{fai}) in exterior domains.
The question is now the following:
can we establish the Kato-smoothing effect for the regularized
Schr\"{o}dinger equation (\ref{eqr}), for which the Geometric Control Condition is
necessary? And if so, does this result still hold for exterior problems?
In this paper, we give an affirmative answer. Indeed, under the Exterior
Geometric Control condition, we prove the Kato-smoothing effect and the
non-homogeneous bound for the regularized Schr\"{o}dinger equation in exterior
domains. Notice that the case of bounded domains can be treated by the same
method.
Our approach for deriving such results is to combine the strategies of
Robbiano-Zuily \cite{RZ} and Aloui-Khenissi \cite{alkh,kh}.
In order to state our results, we give several notations and assumptions.
\newline
Let $K$ be a compact obstacle in $\mathbb{R}^{d}$ whose complement $\Omega $
is an open set with $\ensuremath{\mathscr C}^{\infty }$ boundary $\partial \Omega $, and let
$\tilde{P}$ be a second-order differential operator of the form
\begin{equation} \label{eq:P}
\tilde{P}=\sum_{j,k=1}^{d}D_j(b^{jk}D_k)+V(x), \qquad D_j=\frac{\partial}{
i\partial x_j},
\end{equation}
where the coefficients $b^{jk}$ and $V$ are assumed to be in
$\ensuremath{\mathscr C}^{\infty }(\mathbb{R}^d)$ and real valued, with
$b^{jk}=b^{kj}$ for $1\leq j,k\leq d$.
Throughout this paper,
$\left\langle x\right\rangle :=(1+|x|^{2})^{\frac{1}{2}}$,
and we denote by $S_{\Omega }(M,g)$ the H\"{o}rmander class of symbols
associated with a weight $M$ and the metric
\begin{equation*}
g=\frac{dx^{2}}{\left\langle x\right\rangle ^{2}}+
\frac{d\xi ^{2}}{\left\langle \xi \right\rangle ^{2}}.
\end{equation*}
We shall denote by $p$ the principal symbol of $\tilde{P}$, namely
\begin{equation*}
p(x,\xi )=\sum_{j,k=1}^{d}b^{jk}(x)\xi _{j}\xi _{k},
\end{equation*}
and we assume that
\begin{equation}
\exists \text{ }c>0:p(x,\xi )\geq c|\xi |^{2},\text{ \ for }x\text{ in }
\mathbb{R}^{d}\text{\ and }\xi \text{\ in }\mathbb{R}^{d}, \label{eq:illip}
\end{equation}
\begin{equation}
\left\{
\begin{array}{l}
(i)\text{ }b^{jk}\in S_{\Omega }(1,g),\text{ }\nabla _{x}b^{jk}(x)=
o(\frac{1}{|x|}),\text{ \ }|x|\rightarrow +\infty ,\text{ \ }1\leq j,\text{ }k\leq d.
\\
(ii)\text{ }V\in S_{\Omega }(\left\langle x\right\rangle ^{2},g),\text{ \ }
V\geq -C_{0}\text{ for some positive constant }C_{0}.
\end{array}
\right. \label{hyp1}
\end{equation}
Under the assumptions (\ref{eq:illip}) and (\ref{hyp1}), the operator
$\tilde{P}$ is essentially self-adjoint on $\ensuremath{\mathscr C}^\infty_0(\Omega)$ and we
denote by $P$ its self-adjoint extension. \newline
Now we set
\begin{equation*}
\Lambda =((1+C_{0})Id+P)^{1/2},
\end{equation*}
which is well defined by functional calculus of self-adjoint positive
operators. \newline
We consider the following regularized Schr\"{o}dinger equation
\begin{equation}
\left\{
\begin{array}{l}
(D_{t}+P)u-ia\Lambda au=f\text{ in }]0,+\infty)\times \Omega \\[4pt]
u=0\text{ on }[0,+\infty)\times \partial \Omega , \\
u_{|t=0}=u_{0},
\end{array}
\right. \label{eq: Equa}
\end{equation}
where $(u_{0},f)\in \ensuremath{\mathscr C}_{0}^{\infty }(\Omega)\times\ensuremath{\mathscr C}_{0}^{\infty }(]0,+\infty)\times\Omega )$ and
$a\in \ensuremath{\mathscr C}_{0}^{\infty }(\overline{\Omega }).$
Let us recall the Exterior Geometric Control (E.G.C.) condition of \cite{alkh}.
\begin{definition}[E.G.C.]
\label{egc}Let $R>0$ be such that $K\subset B_{R}=\{|x|<R\}$ and $\omega $ be a subset
of $\Omega .$ We say that $\omega $ verifies the Exterior Geometric Control
condition on $B_{R}$ (E.G.C.) if there exists $T_{R}>0$ such that every
generalized bicharacteristic $\gamma $ starting from $B_{R}$ at time $t=0$
satisfies one of the following:
\begin{itemize}
\item $\gamma $ leaves $\mathbb{R}^{+}\times B_{R}$ before the time $T_{R},$
or
\item $\gamma $ meets $\mathbb{R}^{+}\times \omega $ between the times $0$
and $T_{R}.$
\end{itemize}
\end{definition}
We assume also that the bicharacteristics have no contact of infinite order
with the boundary (see Definition~\ref{contact
d'ordre infini} for a precise statement).
Under this condition on $\omega =\{x\in \Omega ,a^{2}(x)>0\},$ we can state
our main result.
\begin{theorem}
\label{A}Let $T>0$, $\alpha \in (-1/2,1/2)$ and $s\in (1/2,1]$. Let $P$ be
defined by (\ref{eq:P}) and satisfy the assumptions (\ref{eq:illip}) and (\ref
{hyp1}). Then, under the E.G.C. on $\omega$, one can find a positive constant $C(T,\alpha
,s)=C$ such that
\begin{equation}
\int_{0}^{T}\left\Vert \Lambda ^{\alpha +1/2}\langle x\rangle
^{-s}u\right\Vert _{L^{2}(\Omega )}^{2}dt+\!\sup_{t\in \lbrack 0,T]}\Vert
\Lambda ^{\alpha }u(t)\Vert _{L^{2}(\Omega )}^{2}\!\leq \!C\left( \Vert
\Lambda ^{\alpha }u_{0}\Vert _{L^{2}(\Omega )}^{2}+\int_{0}^{T}\left\Vert
\Lambda ^{\alpha -1/2}\langle x\rangle ^{s}f\right\Vert _{L^{2}(\Omega
)}^{2}dt\right) \label{eq:estmGlob}
\end{equation}
for all $u_{0}$
in $\ensuremath{\mathscr C}_{0}^{\infty }(\Omega )$,
$f$ in $\ensuremath{\mathscr C}_{0}^{\infty }(\Omega \times \mathbb{R}^{+} )$, where $u$ denotes the
solution of (\ref{eq: Equa}).
\end{theorem}
Working with $\tilde{u}=e^{i(1+C_{0})t}u,$ one may assume $V\geq 1$ in
(\ref{hyp1}) and $\Lambda =P^{1/2},$ which will be assumed in the sequel.
The equation then becomes
\begin{equation}
\left\{
\begin{array}{l}
(D_{t}+P)u-iaP^{1/2}au=f\text{ in }[0, +\infty)\times \Omega \\[4pt]
u=0\text{ on }[0,\infty)\times \partial \Omega , \\
u_{|t=0}=u_{0},
\end{array}
\right.
\end{equation}
where $P\geq 1.$
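For completeness, the computation behind this reduction is the standard gauge argument; it is a routine check, and the sign of the phase depends on the convention $D_{t}=\frac{1}{i}\partial _{t}$. Writing $\theta =1+C_{0}$, one has, for either choice of sign,

```latex
% conjugating by a time-dependent phase shifts the operator by a constant
D_{t}\bigl(e^{\pm i\theta t}u\bigr)
  =\pm \theta \,e^{\pm i\theta t}u+e^{\pm i\theta t}D_{t}u,
\qquad\text{hence}\qquad
\bigl(D_{t}+P\mp \theta \bigr)\bigl(e^{\pm i\theta t}u\bigr)
  =e^{\pm i\theta t}\,(D_{t}+P)u .
```

Choosing the sign that replaces $P$ by $P+(1+C_{0})\mathrm{Id}$ shifts the potential to $V+(1+C_{0})\geq 1$, and $\Lambda =((1+C_{0})\mathrm{Id}+P)^{1/2}$ is exactly the square root of the shifted operator; the damping term $a\Lambda a$, the data and the source $f$ only pick up the same harmless phase factor.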
\begin{remark}
\begin{enumerate}
\item When the obstacle is non-trapping, we recover the result of Robbiano
and Zuily \cite{RZ} by taking $a(x)=0$; moreover, we improve their result to
the non-homogeneous bound.
\item If we consider the equation in a bounded domain $\Omega $ of $\mathbb{R
}^{d}$ and replace the Exterior Geometric Control condition (E.G.C.) by the
classical microlocal condition of Bardos-Lebeau-Rauch \cite{BLR}, we can
still prove the Kato effect, and we thus improve the result of
Aloui~\cite{al2}.
\item If there is a trapped ray which does not intersect the regularized
region, then, by a result of Burq \cite{bursmot}, the Kato effect does not hold. In this
sense, our result is optimal.
\end{enumerate}
\end{remark}
The rest of the paper is organized as follows: Section~\ref{proofs} is devoted to the
proof of Theorem \ref{A}, while in Section~\ref{appendix} we prove some Lemmata
used in Section~\ref{proofs}.
\section{Proofs}\label{proofs}
Let us describe the strategy of the proof of Theorem~\ref{A}. In a first step, we reduce the estimate \eqref{eq:estmGlob} to an analogous one localized in frequency. Arguing by contradiction, we then construct an adapted microlocal defect measure, and the aim of the rest of the proof is to reach a contradiction on this measure. First, we prove that the measure does not vanish identically. Next, we show that it vanishes on the incoming set and on $\{ a^2>0\}$. Finally, using the geometric assumption (E.G.C.) and the fact that the support of the measure propagates along the generalized flow, we conclude that the measure vanishes identically, which gives the contradiction.
\subsection{Reduction to an estimate localized in frequency}
\label{reduction en frequences}
We recall the Littlewood-Paley decomposition. Let $\Phi \in \ensuremath{\mathscr C}_{0}^{\infty
}([0,+\infty ))$ be a decreasing function such that
\begin{equation*}
\Phi (s)=1\text{ if \ }s\leq 1/2,\text{ \ }\Phi (s)=0\text{ \ if \ }s\geq 1.
\end{equation*}
Let $\psi (s)=\Phi (4^{-1}s)-\Phi (s)$, so that $\psi (s)=0$ if $s\leq 1/2$ or
$s\geq 4$, and $0\leq \psi \leq 1$. For $s\geq 0$ we have
\begin{equation*}
\displaystyle1=\Phi (s)+\sum_{n=0}^{+\infty }\psi (4^{-n}s),
\end{equation*}
and using $P\geq 1$, we have
\begin{equation*}
\displaystyle u=\sum_{n=0}^{+\infty }\psi (4^{-n}P)u.
\end{equation*}
For support reasons,
\begin{equation*}
\psi (4^{-n}s)\psi (4^{-k}s)=0\text{ if }|k-n|\geq 2,
\end{equation*}
thus there exists $C>0$ such that for all $u\in L^{2}(\Omega )$,
\begin{equation*}
\Vert u\Vert _{L^{2}(\Omega )}^{2}\leq C\sum_{n=0}^{+\infty }\Vert \psi
(4^{-n}P)u\Vert _{L^{2}(\Omega )}^{2}\leq C^{2}\Vert u\Vert _{L^{2}(\Omega
)}^{2}.
\end{equation*}
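The two-sided bound above follows from the almost-orthogonality encoded in the support property; here is a short justification (with the crude constant $3$, not optimized). Since the operators $\psi (4^{-n}P)$ are self-adjoint and $\psi (4^{-n}P)\psi (4^{-k}P)=0$ for $|k-n|\geq 2$,

```latex
\Vert u\Vert _{L^{2}(\Omega )}^{2}
  =\sum_{n,k\geq 0}\bigl(\psi (4^{-n}P)u,\psi (4^{-k}P)u\bigr)
  =\sum_{|n-k|\leq 1}\bigl(\psi (4^{-n}P)u,\psi (4^{-k}P)u\bigr)
  \leq 3\sum_{n\geq 0}\Vert \psi (4^{-n}P)u\Vert _{L^{2}(\Omega )}^{2},
```

by Cauchy-Schwarz on each near-diagonal term; conversely, $\sum_{n}\psi ^{2}(4^{-n}s)\leq 2$ for every $s\geq 0$ (at most two terms are nonzero), so the spectral theorem for $P$ gives $\sum_{n}\Vert \psi (4^{-n}P)u\Vert _{L^{2}(\Omega )}^{2}\leq 2\Vert u\Vert _{L^{2}(\Omega )}^{2}$.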
In the sequel we set $h_{n}=2^{-n}$ and $u_{n}=u_{h_{n}}=\psi
(h_{n}^{2}P)u $.\newline
If $u$ satisfies
\begin{equation}
D_{t}u+Pu-iaP^{1/2}(au)=f, \label{eq:Schrod}
\end{equation}
then $u_{n}$ is a solution of the following semi-classical Schr\"odinger equation:
\begin{equation}
h_{n}^{2}(D_{t}+P)u_{n}-ih_{n}a(h_{n}^{2}P)^{1/2}(au_{n})=h_{n}g_{n},
\label{eq:lowfreq}
\end{equation}
where
\begin{equation}
g_{n}=g_{h_{n}}=h_{n}\psi (h_{n}^{2}P)f+i[\psi
(h_{n}^{2}P),a](h_{n}^{2}P)^{1/2}(au)+ia(h_{n}^{2}P)^{1/2}[\psi
(h_{n}^{2}P),a]u. \label{eq:g}
\end{equation}
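Equation \eqref{eq:lowfreq}, with $g_{n}$ as in \eqref{eq:g}, is obtained by applying $\psi (h_{n}^{2}P)$ to \eqref{eq:Schrod}: this operator commutes with $D_{t}$, $P$ and $P^{1/2}$, and commuting it through the two factors $a$ gives

```latex
\psi (h_{n}^{2}P)\,aP^{1/2}(au)
  =aP^{1/2}\bigl(a\,\psi (h_{n}^{2}P)u\bigr)
  +aP^{1/2}[\psi (h_{n}^{2}P),a]u
  +[\psi (h_{n}^{2}P),a]P^{1/2}(au).
```

Multiplying the resulting equation by $h_{n}^{2}$ and using $h_{n}^{2}P^{1/2}=h_{n}(h_{n}^{2}P)^{1/2}$ yields \eqref{eq:lowfreq}.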
\begin{proposition}
\label{prop:lowfreq} Let $s\in (1/2,1]$, $T>0$ and $\alpha \in (-1/2,1/2)$.
Assume there exists $C>0$ such that for $u_{n}=\psi (h_{n}^{2}P)u$
satisfying \eqref{eq:lowfreq}, we have, for all $n\geq 1$
\begin{equation}
\Vert \langle x\rangle ^{-s}u_{n}\Vert _{L^{2}([0,T]\times \Omega
)}^{2}+h_{n}\sup_{t\in \lbrack 0,T]}\Vert u_{n}(t)\Vert _{L^{2}(\Omega
)}^{2}\leq C\left( h_{n}\Vert u_{n}(0)\Vert _{L^{2}(\Omega )}^{2}+\Vert
\langle x\rangle ^{s}g_{n}\Vert _{L^{2}([0,T]\times \Omega )}^{2}\right),
\label{prop:lowfreq1}
\end{equation}
then there exists $C^{\prime }>0$ such that for all $u$ satisfying
\eqref{eq:Schrod} we have
\begin{equation} \label{inegalite avec P alpha}
\begin{split}
\| P^{\alpha/2+1/4}\langle x\rangle^{-s}u\|_{L^2([0,T]\times
\Omega)}^2+\sup_{t\in[0,T]}\| P^{\alpha/2} u(t)\|_{L^2(\Omega)}^2 \qquad
\qquad\qquad \qquad \qquad \qquad\qquad \\
\le C^{\prime } \left( \| P^{\alpha/2}u(0)\|_{L^2(\Omega)}^2+\|
P^{\alpha/2-1/4}\langle x\rangle^s f\|_{L^2([0,T]\times \Omega)}^2 \right) .
\end{split}
\end{equation}
\end{proposition}
\begin{prooff}
Multiplying \eqref{prop:lowfreq1} by $h_{n}^{-2\alpha -1}$ and summing over
$n\in {\mb{N}}$, we obtain
\begin{equation}
\begin{split}
& \sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \langle x\rangle ^{-s}u_{n}\Vert
_{L^{2}([0,T]\times \Omega )}^{2}+\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha }\sup_{t\in
\lbrack 0,T]}\Vert u_{n}(t)\Vert _{L^{2}(\Omega )}^{2}\qquad \qquad \\
& \leq C\left( \sum_{n\in {\mb{N}}}h_{n}^{-2\alpha }\Vert u_{n}(0)\Vert
_{L^{2}(\Omega )}^{2}+\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \langle
x\rangle ^{s}g_{n}\Vert _{L^{2}([0,T]\times \Omega )}^{2}\right) .
\end{split}
\label{inegalite sur la somme en n}
\end{equation}
Now, let us estimate each term appearing in inequality
\eqref{inegalite
avec P alpha}. We have,
\begin{align}
\sup_{t\in \lbrack 0,T]}\Vert P^{\alpha /2}u(t)\Vert _{L^{2}(\Omega )}^{2}&
\leq C\sup_{t\in \lbrack 0,T]}\sum_{n\in {\mb{N}}}\Vert \psi (h_{n}^{2}P)P^{\alpha
/2}u(t)\Vert _{L^{2}(\Omega )}^{2} \notag \\
& \leq C\sup_{t\in \lbrack 0,T]}\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha }\Vert \psi
_{0}(h_{n}^{2}P)u(t)\Vert _{L^{2}(\Omega )}^{2}\text{ where }\psi
_{0}(\sigma )=\sigma ^{\alpha /2}\psi (\sigma ) \notag \\
& \leq C\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha }\sup_{t\in \lbrack 0,T]}\Vert \psi
(h_{n}^{2}P)u(t)\Vert _{L^{2}(\Omega )}^{2}.
\label{Premiere inegalite P alpha}
\end{align}
We also have, with
$\psi _{1}(\sigma )=\sigma ^{\alpha /2+1/4}\psi (\sigma )$,
\begin{align}
\Vert P^{\alpha /2+1/4}\langle x\rangle ^{-s}u\Vert _{L^{2}([0,T]\times
\Omega )}^{2}& \leq C\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \psi
_{1}(h_{n}^{2}P)\langle x\rangle ^{-s}u\Vert _{L^{2}([0,T]\times \Omega
)}^{2} \notag \\
& \leq C\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \langle x\rangle ^{-s}\psi
(h_{n}^{2}P)u\Vert _{L^{2}([0,T]\times \Omega )}^{2}
\text{( by Lemma~\ref{equivalence norme H alpha} )} \notag \\
& \leq C\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \langle x\rangle
^{-s}u_{n}\Vert _{L^{2}([0,T]\times \Omega )}^{2}.
\label{deuxieme
inegalite P alpha}
\end{align}
Now we can estimate, with $\psi _{2}(\sigma )=\sigma ^{-\alpha /2}\psi
(\sigma )$,
\begin{align}
\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha }\Vert u_{n}(0)\Vert _{L^{2}(\Omega )}^{2}&
\leq C\sum_{n\in {\mb{N}}}\Vert \psi _{2}(h_{n}^{2}P)P^{\alpha /2}u(0)\Vert
_{L^{2}(\Omega )}^{2} \notag \\
& \leq C\Vert P^{\alpha /2}u(0)\Vert _{L^{2}(\Omega )}^{2}.
\label{troisieme
inegalite P alpha}
\end{align}
The term $g_{n}$ contains three terms (see \eqref{eq:g}). For the first, we
have, with $\psi _{3}(\sigma )=\sigma ^{-\alpha /2+1/4}\psi (\sigma )$,
\begin{align}
\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha +1}\Vert \langle x\rangle ^{s}\psi
(h_{n}^{2}P)f\Vert ^2_{L^{2}([0,T]
\times \Omega )}& \leq \sum_{n\in {\mb{N}}
}h_{n}^{-2\alpha +1}\Vert \psi (h_{n}^{2}P)\langle x\rangle ^{s}f\Vert^2
_{L^{2}([0,T]
\times \Omega )} \notag \\
& \leq C\sum_{n\in {\mb{N}}}\Vert \psi _{3}(h_{n}^{2}P)P^{\alpha /2-1/4}\langle
x\rangle ^{s}f\Vert _{L^{2}([0,T]\times \Omega )}^{2} \notag \\
& \leq C\Vert P^{\alpha /2-1/4}\langle x\rangle ^{s}f\Vert
_{L^{2}([0,T]\times \Omega )}^{2}. \label{estimation premier term
g_n}
\end{align}
For the second and third terms of $g_{n}$ we can apply Lemmata~\ref
{lemma : premier commutateur} and \ref{lemme deuxieme terme} to obtain, together with
\eqref{estimation premier term g_n},
\begin{equation}
\sum_{n\in {\mb{N}}}h_{n}^{-2\alpha -1}\Vert \langle x\rangle ^{s}g_{n}\Vert
_{L^{2}([0,T]\times \Omega )}^{2}\leq C\Vert P^{\alpha /2-1/4}\langle
x\rangle ^{s}f\Vert _{L^{2}([0,T]\times \Omega )}^{2}+C\Vert P^{\alpha
/2}u\Vert _{L^{2}([0,T]\times \Omega )}^{2}.
\label{Quatrieme inegalite P alpha}
\end{equation}
Then, combining \eqref{inegalite sur la somme en n},
\eqref{Premiere
inegalite P alpha}, \eqref{deuxieme inegalite P alpha},
\eqref{troisieme
inegalite P alpha} and \eqref{Quatrieme inegalite
P alpha}, we obtain
\begin{equation*}
\begin{split}
& \Vert P^{\alpha /2+1/4}\langle x\rangle ^{-s}u\Vert _{L^{2}([0,T]\times
\Omega )}^{2}+\sup_{t\in \lbrack 0,T]}\Vert P^{\alpha /2}u(t)\Vert
_{L^{2}(\Omega )}^{2}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \\
& \leq C\left( \Vert P^{\alpha /2}u(0)\Vert _{L^{2}(\Omega )}^{2}+\Vert
P^{\alpha /2-1/4}\langle x\rangle ^{s}f\Vert _{L^{2}([0,T]\times \Omega
)}^{2}+\Vert P^{\alpha /2}u\Vert _{L^{2}([0,T]\times \Omega )}^{2}\right) .
\end{split}
\end{equation*}
By Gronwall's Lemma, we can remove the last term in the previous inequality
and we obtain~\eqref{inegalite avec P alpha}.
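The Gronwall step can be made explicit as follows (a minimal sketch, with constants not tracked). The same chain of estimates applies on $[0,t]$ for every $t\leq T$; setting

```latex
G(t)=\sup_{\tau \in \lbrack 0,t]}\Vert P^{\alpha /2}u(\tau )\Vert _{L^{2}(\Omega )}^{2},
\qquad
A=\Vert P^{\alpha /2}u(0)\Vert _{L^{2}(\Omega )}^{2}
  +\Vert P^{\alpha /2-1/4}\langle x\rangle ^{s}f\Vert _{L^{2}([0,T]\times \Omega )}^{2},
```

the previous inequality gives $G(t)\leq C\bigl(A+\int_{0}^{t}G(\tau )\,d\tau \bigr)$, since $\Vert P^{\alpha /2}u\Vert _{L^{2}([0,t]\times \Omega )}^{2}\leq \int_{0}^{t}G(\tau )\,d\tau $. Gronwall's lemma then yields $G(T)\leq CAe^{CT}$, and re-injecting this bound into the right-hand side controls the remaining term and gives~\eqref{inegalite avec P alpha}.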
\end{prooff}
\subsection{Construction of the microlocal defect measure}
In this section we prove the localized frequency
estimate~(\ref{prop:lowfreq1}) by a contradiction argument, using a microlocal defect
measure.
More precisely, let $u_{h}$ be a solution of
\begin{equation}
h^{2}(D_{t}+P)u_{h}-iha(h^{2}P)^{1/2}(au_{h})=hg_{h}. \label{eq:lowfre2}
\end{equation}
We will prove by contradiction the following estimate,
\begin{equation}
\Vert \langle x\rangle ^{-s}u_{h}\Vert _{L^{2}([0,T]\times \Omega
)}^{2}+h\sup_{t\in \lbrack 0,T]}\Vert u_{h}(t)\Vert _{L^{2}(\Omega
)}^{2}\leq Ch\Vert u_{h}(0)\Vert _{L^{2}(\Omega )}^{2}+C\Vert \langle
x\rangle ^{s}g_{h}\Vert _{L^{2}([0,T]\times \Omega )}^{2}.
\label{eq:lowfreqIstim2}
\end{equation}
Assume it is false. Taking $C=k\in \mathbb{N}$, we obtain sequences
$h_{k}\underset{k\rightarrow +\infty }{\rightarrow }0,$
$u_{k}^{0}=u_{h_k}(0)\in
L^{2}(\Omega )$ and $g_{k}=g_{h_k}\in L^{2}(\Omega )$ such that
\begin{equation}
h_{k}\left\Vert u_{k}^{0}\right\Vert _{L^{2}(\Omega )}^{2}
\underset{k\rightarrow +\infty }{\rightarrow }0,\text{ }\left\Vert \left\langle
x\right\rangle ^{s}g_{k}\right\Vert _{L^{2}([0,T]\times \Omega )}^{2}
\underset{k\rightarrow +\infty }{\rightarrow }0. \label{eq:contradiction}
\end{equation}
We normalize by the left-hand side of (\ref{eq:lowfreqIstim2}), thus
\begin{equation*}
\left\Vert \left\langle x\right\rangle ^{-s}u_{k}\right\Vert
_{L^{2}([0,T]\times \Omega )}^{2}+\text{ }h_{k}\sup_{t\in \lbrack
0,T]}\left\Vert u_{k}(t)\right\Vert _{L^{2}(\Omega )}^{2}=1,
\end{equation*}
where, for simplicity, we have denoted $u_{h_k}=u_k$. By
Lemma~\ref{lemma:A} we have
\begin{equation}
\text{ }h_{k}\sup_{t\in \lbrack 0,T]}\left\Vert u_{k}(t)\right\Vert
_{L^{2}(\Omega )}^{2}\underset{k\rightarrow +\infty }{\rightarrow }0,
\label{eq:contradiction2}
\end{equation}
and therefore
\begin{equation}
\left\Vert \left\langle x\right\rangle ^{-s}u_{k}\right\Vert
_{L^{2}([0,T]\times \Omega )}^{2}\underset{k\rightarrow +\infty }
{\rightarrow }1 . \label{eq:contradiction3}
\end{equation}
The sequence $(u_{k})$ is bounded in $L_{loc}^{2}
(\mathbb{R}_{t},L_{loc}^{2}(\Omega ))$. Indeed, for $R>0$, there exists $c>0$ such
that $\left\langle x\right\rangle ^{-2s}\geq c$ for all $x\in B(0,R)$, and then we have
\begin{equation}
\int_{0}^{T}\int_{\Omega \cap B_{R}}|u_{k}|^{2}dtdx\leq \frac{1}{c}
\int_{0}^{T}\int_{\Omega \cap B_{R}}\left\langle x\right\rangle
^{-2s}|u_{k}|^{2}dtdx\leq \frac{1}{c}. \label{eq:bound}
\end{equation}
We set
\begin{equation}
\left\{
\begin{array}{c}
w_{k}=1_{\Omega }u_{k}(t) \\
W_{k}=1_{[0,T]}w_{k}.
\end{array}
\right. \label{eq:plong}
\end{equation}
It follows from (\ref{eq:bound}) that the sequence $(W_{k})$ is bounded in
$L^{2}(\mathbb{R}_{t},L_{loc}^{2}(\mathbb{R}^{d})).$\newline
We associate to a symbol $b=b(x,t,\xi ,\tau )\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\ast }
{\mathbb{R}}^{d+1})$ the semiclassical pseudo-differential operator (pdo) by
the formula
\begin{equation*}
{\mathcal{O}}p(b)(y,s,hD_{x},h^{2}D_{t})v(x,t)=\frac{1}{(2\pi h)^{d+1}}\iint
e^{i\left( \frac{x-y}{h}\xi +\frac{t-s}{h^{2}}\tau \right) }\varphi
(y)b(x,t,\xi ,\tau )v(y,s)dydsd\xi d\tau,
\end{equation*}
where $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }({\mathbb{R}}^{d})$ is equal to one on
a neighborhood of the $x$-projection of the support of $b$. As in \cite{RZ}
we can associate to $(W_{k})$ a semi-classical measure $\mu .$ More
precisely,
\begin{proposition}
\label{mesure}There exists a subsequence $(W_{\sigma (k)})$ and a Radon
measure $\mu $ on $T^{\ast }\mathbb{R}^{d+1}$ such that for every
$b\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\ast }{\mathbb{R}}^{d+1})$ one has
\begin{equation*}
\lim_{k\rightarrow +\infty }\left( \mathcal{O}p(b)\left( x,t,h_{\sigma
(k)}D_{x},h_{\sigma (k)}^{2}D_{t}\right) W_{\sigma (k)},W_{\sigma
(k)}\right) _{L^{2}({\mathbb{R}}^{d+1})}=\left\langle \mu ,b\right\rangle .
\end{equation*}
\end{proposition}
We first prove that the measure $\mu $ satisfies the
following property.
\begin{proposition}
\label{prop:sopport}The support of $\mu $ is contained in the characteristic
set of the operator $D_{t}+P$
\begin{equation}
\Sigma =\{(x,t,\xi ,\tau )\in T^{\ast }\mathbb{R}^{d+1}:x\in
\overline{\Omega },t\in \lbrack 0,T]\text{ and }\tau +p(x,\xi )=0\}.
\label{prop:carac}
\end{equation}
\end{proposition}
\begin{prooff}
According to (\ref{eq:plong}), it is obvious that
\begin{equation*}
\supp\mu \subset \{(x,t,\xi ,\tau )\in T^{\ast }\mathbb{R}^{d+1}:x\in
\overline{\Omega },t\in \lbrack 0,T]\}.
\end{equation*}
Therefore it remains to show that if $m_{0}=(x_{0},t_{0},\xi _{0},\tau _{0})$
with $x_{0}\in \overline{\Omega },t_{0}\in \lbrack 0,T],$ and $\tau
_{0}+p(x_{0},\xi _{0})\neq 0$ then $m_{0}\notin \supp \mu .$ For simplicity, we
shall denote
the sequence $W_{\sigma(k)}$ by $W_k$.
\begin{case}
Assume that $x_{0}\in \Omega .$
Let $\varepsilon >0$ be such that $B(x_{0},\varepsilon )\subset \Omega $,
$\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(B(x_{0},\varepsilon )),$ $\varphi =1$ on
$B(x_{0},\frac{\varepsilon }{2})$ and $\tilde{\varphi}\in \ensuremath{\mathscr C}_{0}^{\infty
}(\Omega ),$ $\tilde{\varphi}=1$ on $\supp\varphi .$ Let $b\in
\ensuremath{\mathscr C}_{0}^{\infty }(\mathbb{R}_{x}^{d}\times \mathbb{R}_{\xi }^{d})$ such that
$\pi _{x}\supp b\subset B(x_{0},\frac{\varepsilon }{2})$ and
$\chi \in \ensuremath{\mathscr C}_{0}^{\infty }(\mathbb{R}_{t}\times \mathbb{R}_{\tau }).$ Recall that we
have $W_{k}=1_{[0,T]}1_{\Omega }u_{k}$ and that $(u_{k})$ is a bounded
sequence in $L^{2}([0,T],L_{loc}^{2}(\Omega )).$ We set
\begin{equation*}
I_{k}=\left( b(x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
(x)h_{k}^{2}(D_{t}+P(x,D_{x}))W_{k},\tilde{\varphi}W_{k}\right) _{L^{2}
(\mathbb{R}^{d+1})}.
\end{equation*}
As in \cite{RZ} we have
\begin{equation}
\lim_{k\rightarrow +\infty }I_{k}=\left\langle \mu ,(\tau +p)b\chi
\right\rangle . \label{eq:carac}
\end{equation}
On the other hand, since we have
\begin{equation*}
h_{k}^{2}(D_{t}+P(x,D_{x}))u_{k}=h_{k}ia(h_{k}^{2}P)^{1/2}au_{k}+h_{k}g_{k},
\end{equation*}
and $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(\Omega )$,
\begin{equation}
\varphi (h_{k}^{2}D_{t}+h_{k}^{2}P(x,D_{x}))W_{k}=\varphi
(ih_{k}a(h_{k}^{2}P)^{1/2}au_{k}+h_{k}g_{k})+h_k^2\varphi (u_{k}(0)\delta
_{t=0}-u_{k}(T)\delta _{t=T}) . \label{eq:terms}
\end{equation}
Then $I_{k}$ is a sum of four terms,
\begin{equation*}
I_{k}=I_{k}^{1}+I_{k}^{2}+I_{k}^{3}+I_{k}^{4},
\end{equation*}
\begin{align*}
&I_{k}^{1} =ih_{k}\left( b(x,h_{k}D_{x})\chi
(t,h_{k}^{2}D_{t})\varphi (x)a(h_{k}^{2}P)^{1/2}au_{k},\tilde{\varphi}
W_{k}\right) _{L^{2}(\mathbb{R}^{d+1})} \\
&I_{k}^{2} =h_{k}\left( b(x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
(x)g_{k},\tilde{\varphi}W_{k}\right) _{L^{2}(\mathbb{R}^{d+1})} \\
&I_{k}^{3} =\left( b(x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})h_k^2\varphi
(x)u_{k}(0)\delta _{t=0},\tilde{\varphi}W_{k}\right) _{L^{2}(\mathbb{R}^{d+1})} \\
&I_{k}^{4} =-\left( b(x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})h_k^2\varphi
(x)u_{k}(T)\delta _{t=T},\tilde{\varphi}W_{k}\right) _{L^{2}(\mathbb{R}^{d+1})}.
\end{align*}
For the first term $I_{k}^{1}$, we use Lemma~\ref{lemma:H}:
we have
\begin{equation}
\left\Vert (h_{k}^{2}P)^{1/2}au_{k}\right\Vert _{L^{2}(\Omega )}^2\leq
Ch_{k}^{2}\Vert u_{k}\Vert _{L^{2}(\Omega )}^{2}+C\Vert au_{k}\Vert
_{L^{2}(\Omega )}^{2} , \label{eq:auk}
\end{equation}
and we deduce,
\begin{equation}
|I_{k}^{1}|\leq c(h_{k}^{2}\sup_{t\in \lbrack 0,T]}\Vert u_{k}\Vert
_{L^{2}(\Omega )}^{2}+h_{k}\sup_{t\in \lbrack 0,T]}\Vert u_{k}\Vert
_{L^{2}(\Omega )}^{2}). \label{eq:I11}
\end{equation}
Hence $I^1_k$ goes to zero by \eqref{eq:contradiction2}.
For the second term $I_{k}^{2}$,
\begin{align*}
\left\vert I_{k}^{2}\right\vert & \leq h_{k}\left\Vert g_{k}\right\Vert
_{L^{2}([0,T],B(x_{0},\varepsilon ))}\left\Vert \tilde{\varphi}
W_{k}\right\Vert _{L^{2}(\mathbb{R}^{d+1})} \\
& \leq Ch_{k}\left\Vert \left\langle x\right\rangle ^{s}g_{k}\right\Vert
_{L^{2}([0,T]\times \Omega )}\left\Vert \left\langle x\right\rangle
^{-s}u_{k}\right\Vert _{L^{2}([0,T]\times \Omega )}.
\end{align*}
Using (\ref{eq:contradiction}) and (\ref{eq:contradiction3}), we deduce that
\begin{equation}
\lim_{k\rightarrow +\infty }I_{k}^{2}=0. \label{eq:I2}
\end{equation}
The third and fourth terms in (\ref{eq:terms}) have the following form,
\begin{equation*}
J_{k}=\left( b(x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
h_{k}^{2}u_{k}(s)\delta _{t=s},\tilde{\varphi}W_{k}\right) _{L^{2}
(\mathbb{R}^{d+1})},\text{ \ \ }s=0\text{ or }T.
\end{equation*}
Since $(\tilde{\varphi}W_{k})$ is bounded in $L^{2}(\mathbb{R}^{d+1}),$ we
see that
\begin{equation*}
|J_{k}|^{2}\leq c\left\Vert b\varphi w_{k}(s)\right\Vert _{L^{2}
(\mathbb{R}^{d})}^{2}\left\Vert h_{k}^{2}\chi (t,h_{k}^{2}D_{t})\delta
_{t=s}\right\Vert _{L^{2}(\mathbb{R})}^{2}\sup_{t\in \lbrack 0,T]}\Vert
u_{k}(t)\Vert _{L^{2}(\Omega )}^{2},
\end{equation*}
so, using \cite[Lemma A.5]{RZ} with $p=2$ and $l=2$, we deduce that,
\begin{equation}
|J_{k}|^{2}\leq ch_{k}^{2}\left\Vert u_{k}(s)\right\Vert _{L^{2}(\Omega
)}^{2}\sup_{t\in \lbrack 0,T]}\Vert u_{k}(t)\Vert _{L^{2}(\Omega )}^{2}\leq
c\,h_{k}^{2}\sup_{t\in \lbrack 0,T]}\Vert u_{k}(t)\Vert _{L^{2}(\Omega
)}^{4}. \label{eq:I3}
\end{equation}
It follows from (\ref{eq:I11}), (\ref{eq:I2}), (\ref{eq:I3}) and
(\ref{eq:contradiction2}) that
\begin{equation}
\lim_{k\rightarrow \infty }I_{k}=0. \label{eq:limit}
\end{equation}
As linear combinations of products $\chi(t,\tau)b(x,\xi)$ are dense in $\ensuremath{\mathscr C}_0^\infty(T^\star({\mathbb{R}}^{d+1}))$, using (\ref{eq:carac}) and (\ref{eq:limit}),
we deduce that $m_{0}=(x_{0},t_{0},\xi _{0},\tau _{0})\notin \supp \mu $.
\end{case}
\begin{case}
Assume that $x_{0}\in \partial \Omega .$
We would like to show that one can find a neighborhood $U_{x_{0}}$ of $x_{0}$
in ${\mathbb{R}}^{d}$ such that for any $b\in \ensuremath{\mathscr C}_{0}^{\infty
}(U_{x_{0}}\times \mathbb{R}_{t}\times \mathbb{R}_{\xi }^{d}\times
\mathbb{R}_{\tau }),$ we have
\begin{equation}
\left\langle \mu ,(\tau +p)b\right\rangle =0 . \label{eq:caraBord}
\end{equation}
Indeed this will imply that the point $m_{0}=(x_{0},t_{0},\xi _{0},\tau _{0})$
(with $\tau _{0}+p(x_{0},\xi _{0})\neq 0$) does not belong to the support of
$\mu $, as claimed.
Formula (\ref{eq:caraBord}) will follow from
\begin{equation}
\left\{
\begin{array}{l}
\lim\limits_{k\rightarrow +\infty }I_{k}=0\text{ where} \\
I_{k}=\left( b(x,t,h_{k}D_{x},h_{k}^{2}D_{t})\varphi
h_{k}^{2}(D_{t}+P)W_{k},W_{k}\right) _{L^{2}(\mathbb{R}^{d+1})}.
\end{array}
\right. \label{eq:termeIk}
\end{equation}
where $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(U_{x_{0}})$, $\varphi =1$ on
$\pi _{x}\supp b.$
Let $U_{x_{0}}$ be a neighborhood of $x_0$ such that there exists a
$\ensuremath{\mathscr C}^{\infty }$ diffeomorphism $F$ from $U_{x_{0}}$ to a neighborhood $U_{0}$
of the origin in $\mathbb{R}^{d}$ satisfying
\begin{equation}
\left\{
\begin{array}{c}
F(U_{x_{0}}\cap \Omega )=\{y\in U_{0}:y_{1}>0\} \\
F(U_{x_{0}}\cap \partial \Omega )=\{y\in U_{0}:y_{1}=0\} \\
(P(x,D)W_{k})\circ F^{-1}=(D_{1}^{2}+R(y,D^{\prime })
+ L(x,D))
(W_{k}\circ F^{-1}),
\end{array}
\right. \label{eq:U0}
\end{equation}
where $R$ is a second-order differential operator, $D^{\prime
}=(D_{2},...,D_{d})$,
and $L(x,D)$ is a first-order differential operator.
Let us set
\begin{equation}
v_{k}=u_{k}\circ F^{-1},\text{ \ \ }V_{k}=1_{[0,T]}1_{y_{1}>0}v_{k},
\label{eq:v0}
\end{equation}
so that
\begin{equation}
\left\{
\begin{array}{l}
\left( D_{t}+D_{1}^{2}+R(y,D^{\prime })+ L(x,D)\right) v_{k}=iaP^{1/2}(au_{k})\circ
F^{-1}+h_{k}^{-1}g_{k}\circ F^{-1}:=f_{k} \\
v_{k}|_{y_{1}=0}=0.
\end{array}
\right. \label{eq:probRedresse}
\end{equation}
Making the change of variable $x=F^{-1}(y)$ on the right-hand side of the
second line of (\ref{eq:termeIk}), we see that
\begin{equation*}
I_{k}=\left( \tilde{b}(y,t,h_{k}D_{y},h_{k}^{2}D_{t})\psi
h_{k}^{2}(D_{t}+D_{1}^{2}+R(y,D^{\prime })+ L(x,D))V_{k},V_{k}\right) _{L^{2}
(\mathbb{R}^{d+1})},
\end{equation*}
where $\tilde{b}\in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0}\times \mathbb{R}_{t}\times
\mathbb{R}_{\eta }^{d}\times \mathbb{R}_{\tau }),$ and $\psi \in
\ensuremath{\mathscr C}_{0}^{\infty }(U_{0}),$ $\psi =1$ on $\pi _{y}\supp\tilde{b}.$
To prove (\ref{eq:termeIk}) it is sufficient to prove that,
\begin{equation*}
\lim_{k\rightarrow +\infty }J_{k}=\lim_{k\rightarrow +\infty }\left( T\psi
_{0}(y_{1})\psi _{1}(y^{\prime })h_{k}^{2}(D_{t}+D_{1}^{2}+R(y,D^{\prime
})+ L(x,D))V_{k},V_{k}\right) _{L^{2}(\mathbb{R}^{d+1})}=0,
\end{equation*}
where $T=\theta (y_{1},h_{k}D_{1})\Phi (y^{\prime },h_{k}D^{\prime })\chi
(t,h_{k}^{2}D_{t}),$ $\theta \Phi \chi \in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0}\times
\mathbb{R}_{t}\times \mathbb{R}_{\eta }^{d}\times \mathbb{R}_{\tau }),$
$\psi _{0}\psi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0}),$ $\psi _{0}\psi _{1}=1$ on $
\pi _{y}\supp\theta \Phi \chi $.
According to (\ref{eq:probRedresse}) we have
\begin{align*}
(D_{t}+D_{1}^{2}+R(y,D^{\prime })+ L(x,D))V_{k}& =f_{k}-i1_{y_{1}>0}v_{k}(0,.)\delta
_{t=0}+i1_{y_{1}>0}v_{k}(T,.)\delta _{t=T} \\
& \quad -i1_{[0,T]}(D_{1}v_{k}|_{y_{1}=0})\otimes \delta _{y_{1}=0}.
\end{align*}
Therefore (\ref{eq:termeIk}) will be proved if we can prove that
\begin{equation}
\left\{
\begin{array}{l}
\lim\limits_{k\rightarrow +\infty }A_{k}^{j}=0,\text{ \ }j=1,2,3,
\text{ where } \\
A_{k}^{1}=\left( \theta (y_{1},h_{k}D_{1})\Phi (y^{\prime },h_{k}D^{\prime
})\chi (t,h_{k}^{2}D_{t})\psi _{0}\psi
_{1}h_{k}^{2}1_{y_{1}>0}v_{k}(s,.)\delta _{t=s},V_{k}\right) ,\text{ }s=0,
\text{ }T, \\[3pt]
A_{k}^{2}=\left( \theta (y_{1},h_{k}D_{1})\Phi (y^{\prime },h_{k}D^{\prime
})\chi (t,h_{k}^{2}D_{t})\psi _{0}\psi
_{1}h_{k}^{2}1_{[0,T]}(D_{1}v_{k}|_{y_{1}=0})\otimes \delta
_{y_{1}=0},V_{k}\right) , \\[3pt]
A_{k}^{3}=\left( \theta (y_{1},h_{k}D_{1})\Phi (y^{\prime },h_{k}D^{\prime
})\chi (t,h_{k}^{2}D_{t})\psi _{0}\psi _{1}h_{k}^{2}f_{k},V_{k}\right) .
\end{array}
\right. \label{eq:A-k}
\end{equation}
As in \cite[A.18]{RZ}, we have
\begin{equation}
\lim\limits_{k\rightarrow +\infty }A_{k}^{1}=0 . \label{A1}
\end{equation}
To estimate the term $A_{k}^{2}$ we need a lemma.
With $U_{0}$ introduced in (\ref{eq:U0}), we set $U_{0}^{+}=\{y\in
U_{0}:y_{1}>0\}.$ We consider a smooth solution of the problem:
\begin{equation}
\left\{
\begin{array}{l}
\left( D_{t}+D_{1}^{2}+R(y,D^{\prime })+ L(x,D)\right) u=g\text{ \ in \ }
U_{0}^{+}\times \mathbb{R}_{t} \\[2pt]
u|_{y_{1}=0}=0
\end{array}
\right. \label{eq:sec}
\end{equation}
\begin{lemma}
\label{LemmaA6} Let $\chi \in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0})$ and $\chi _{1}\in
\ensuremath{\mathscr C}_{0}^{\infty }(U_{0})$ $\chi _{1}=1$ on $\supp \chi .$ There exists $C>0$
such that for any solution $u$ of (\ref{eq:sec}) and all $h$ in $]0,1],$ we
have
\begin{align*}
\int_{0}^{T}\left\Vert \left( \chi h\partial _{1}u\right)
_{|y_{1}=0}(t)\right\Vert _{L^{2}}^{2}dt &\leq C\left(
\int_{0}^{T}\sum_{|\alpha |\leq 1}\left\Vert \chi _{1}(hD)^{\alpha
}u(t)\right\Vert _{L^{2}(U_{0}^{+})}^{2}dt\right. \\
&\quad +\left\Vert h^{\frac{1}{2}}\chi u(0)\right\Vert
_{L^{2}(U_{0}^{+})}\left\Vert h^{\frac{1}{2}}(h\partial _{1}u)(0)\right\Vert
_{L^{2}(U_{0}^{+})} \\
&\quad \left. +\left\Vert h^{\frac{1}{2}}\chi u(T)\right\Vert
_{L^{2}(U_{0}^{+})}\left\Vert h^{\frac{1}{2}}(h\partial _{1}u)(T)\right\Vert
_{L^{2}(U_{0}^{+})}+\left\Vert \chi _{1}hg\right\Vert _{L^{2}}^{2}\right).
\end{align*}
\end{lemma}
\end{case}
\begin{prooff}[Proof of the Lemma]
It is analogous to the proof of \cite[Lemma A.6]{RZ}.
\end{prooff}
Replacing $g$ in the previous lemma by $iaP^{1/2}(au_k)\circ
F^{-1}+h_k^{-1}g_k\circ F^{-1} $ and using \eqref{eq:v0}, we easily obtain the
following corollary.
\begin{corollary}
\label{corollaryA7} One can find a constant $C>0$ such that
\begin{align*}
\int_{0}^{T}\left\Vert \left( \chi h_{k}\partial _{1}v_{k}\right)
_{|y_{1}=0}(t)\right\Vert _{L^{2}}^{2}dt &\leq C\left(
\int_{0}^{T}\left\Vert \tilde{\chi}u_{k}(t)\right\Vert _{L^{2}(\Omega
)}^{2}dt+\left\Vert h_{k}^{1/2}u_{k}(0)\right\Vert _{L^{2}(\Omega
)}^{2}\right. \\
&\quad \left. +\int_{0}^{T}\left( \left\Vert \tilde{\chi}
a(h_{k}^{2}P)^{1/2}au_{k}\right\Vert _{L^{2}}^{2}+\left\Vert \tilde{\chi}
g_{k}\right\Vert _{L^{2}}^{2}\right) dt\right) \\
&\leq C,
\end{align*}
where $v_{k}$ has been defined in (\ref{eq:v0}) and $\tilde{\chi}\in
\ensuremath{\mathscr C}_{0}^{\infty }(\mathbb{R}^{d}).$
\end{corollary}
Let us go back to the estimate of $A_{k}^{2}$ defined in (\ref{eq:A-k}). We
have
\begin{equation*}
\left\vert A_{k}^{2}\right\vert ^{2}\leq Ch_{k}^{2}\left\Vert \theta
(y_{1},h_{k}D_{1})\delta _{y_{1}=0}\right\Vert _{L^{2}
(\mathbb{R})}^{2}\left\Vert \left( \psi _{2}V_{k}\right) \right\Vert _{L^{2}
(\mathbb{R}^{d+1})}^{2}\int_{0}^{T}\left\Vert \left( \psi _{1}h_{k}D_{1}v_{k}\right)
_{|y_{1}=0}(t)\right\Vert _{L^{2}(\mathbb{R}^{d-1})}^{2}dt.
\end{equation*}
Applying (\ref{eq:bound}) and \cite[Lemma A.5]{RZ} with $p=2$, $l=1$, and
Corollary \ref{corollaryA7}, we obtain
\begin{equation}
\left\vert A_{k}^{2}\right\vert \leq ch_{k}\longrightarrow 0 . \label{eq:A2}
\end{equation}
The term $\left\vert A_{k}^{3}\right\vert $ can be treated like the first and
second terms in Case 1.\newline
Using (\ref{A1}) and (\ref{eq:A2}), we deduce (\ref{eq:A-k}), which implies
(\ref{eq:termeIk}) and thus (\ref{eq:caraBord}). The proof of
Proposition~\ref{prop:sopport} is complete.
\end{prooff}
\subsection{The microlocal defect measure does not vanish identically}
First let us prove that the sequence $(u_k)$ has mass in a compact domain.
\begin{lemma}
\label{Lemma2.6} There exist a subsequence $(k_\nu )$ and $R>0$ such
that
\begin{equation*}
\int_0^T\| u_{k_\nu}(t)\|_{L^2(x\in\Omega,\ |x|<R)}^2dt\ge 1/2.
\end{equation*}
\end{lemma}
\begin{prooff}[Proof of Lemma]
We prove the Lemma by contradiction. Assume that
\begin{equation}
\forall R>R_{0},\ \limsup_{k}\int_{0}^{T}\Vert u_{k}(t)\Vert _{L^{2}(x\in
\Omega ,\ |x|\leq 2R
+1)}^{2}dt\leq 3/4, \label{Contradiction lemme 2.6}
\end{equation}
where $R_{0}$ is large enough such that $\supp a\subset \{|x|\leq R_{0}/2\}$.
Let $\chi \in \ensuremath{\mathscr C}^{\infty }(\mathbb{R}^{d})$ such that $\chi =1$ for $|x|>2$
and $\chi =0$ for $|x|<1$. We set $\chi _{R}(x)=\chi (x/R)$ and by the
choice of $R_{0}$ we have $a\chi _{R}=\chi _{R}a=0$. The function
$v_{k}:=\chi _{R}u_{k}$ satisfies
\begin{equation*}
D_{t}v_{k}+Pv_{k}=h_{k}^{-1}\chi _{R}g_{k}+[P,\chi _{R}]u_{k}.
\end{equation*}
From \cite[Theorem 2.8]{doi05}, we have
\begin{equation}
\int_{0}^{T}\Vert \langle x\rangle ^{-s}v_{k}\Vert _{L^{2}
(\mathbb{R}^{d})}^{2}\leq C(\Vert E_{-\frac{1}{2}}v_{k}(0)\Vert _{L^{2}
(\mathbb{R}^{d})}^{2}+\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\left(
h_{k}^{-1}\chi _{R}g_{k}+[P,\chi _{R}]u_{k}\right) \Vert _{L^{2}
(\mathbb{R}^{d})}^{2}dt), \label{Doi}
\end{equation}
where $E_{s}$ is the pseudo-differential operator with symbol $e_{s}=(1+{p}
(x,\xi )+|x|^{2})^{\frac{s}{2}}$, which belongs to $S((|\xi |+\langle x\rangle )^{s},g)$.
For the first term on the right-hand side of (\ref{Doi}) we have,
where $(\cdot,\cdot)$ denotes the scalar product in $L^2(\Omega)$,
\begin{align*}
\Vert E_{-\frac{1}{2}}v_{k}(0)\Vert _{L^{2}}^{2} &=h_k\Vert E_{-\frac{1}{2}
}\chi_R P^{\frac{1}{4}}(h_k^{2}P)^{-\frac{1}{4}}\psi _{1}(h_k^{2}P)\psi
(h_k^{2}P)u(0)\Vert _{L^{2}}^{2}, \\
&=h_k ( S\psi _{2}(h_k^{2}P)u_{k}(0),S\psi
_{2}(h_k^{2}P)u_{k}(0)) ,\text{ where } S=E_{-\frac{1}{2}}\chi_R P^{
\frac{1}{4}},\text{ and }\psi _{2}(t)=t^{-\frac{1}{4}}\psi _{1}(t) \\
&=h_k( \psi _{2}(h_k^{2}P)S^{\star }S\psi
_{2}(h_k^{2}P)u_{k}(0),u_{k}(0)) \\
&=h_k( \psi _{2}(h_k^{2}P)(h_k^{2}P)^{-\frac{1}{4}}Q\chi_R (h_k^{2}P)^{
\frac{1}{4}}\psi _{2}(h_k^{2}P)u_{k}(0),u_{k}(0)) \\
&\leq Ch_k\Vert u_{k}(0)\Vert _{L^{2}}^{2},
\end{align*}
where $\psi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }(0,+\infty) $ and $\psi _{1}=1\text{
on }\supp(\psi )$, $S^{\star }S=P^{-\frac{1}{4}}Q\chi_R P^{\frac{1}{4}}$, $
Q=P^{\frac{1}{2}}\chi_R A_{-1}$, and $A_{-1}=E_{-\frac{1}{2}}^{\star }
E_{-\frac{1}{2}}$. We have used that the operator $Q$ is bounded from $L^{2}(
\mathbb{R}^{d})$ to $L^{2}(\Omega )$ (see \cite[Lemma 4.2]{RZ}).\newline
Then from (\ref{eq:contradiction2}), we deduce that
\begin{equation}
\lim_{k\rightarrow +\infty }\Vert E_{-\frac{1}{2}}v_{k}(0)\Vert
_{L^{2}}^{2}=0 . \label{eq:E-1/2}
\end{equation}
Concerning the term $\displaystyle\int_{0}^{T}\Vert \langle x\rangle
^{s}E_{-1}h_k^{-1}\chi_R
g_{k}\Vert _{L^{2}}^{2}dt$, we will prove that it
tends to zero.\newline
Let $\psi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }(\mathbb{R})$, such that $\psi _{1}=1$
on $\supp\psi $.\newline
Since $\psi _{1}(h_{k}^{2}P)u_{k}=u_{k}$, applying $1-\psi_1(h_k^2P)$ to
formula \eqref{eq:lowfre2} we obtain
\begin{equation*}
h_{k}^{-1}g_{k}=h_{k}^{-1}\psi
_{1}(h_{k}^{2}P)g_{k}-ih_{k}^{-1}a(h_k^{2}P)^{1/2}a\psi
_{1}(h_{k}^{2}P)u_{k}+ih_{k}^{-1}\psi
_{1}(h_{k}^{2}P)a(h_k^{2}P)^{1/2}au_{k}.
\end{equation*}
Using that $\chi_R a=0$, we have
\begin{equation*}
h_{k}^{-1}\chi_R g_{k}=h_{k}^{-1}\chi_R \psi
_{1}(h_{k}^{2}P)g_{k}+ih_{k}^{-1}\chi_R \psi
_{1}(h_{k}^{2}P)a(h_k^{2}P)^{1/2}au_{k}.
\end{equation*}
Then
\begin{align*}
&\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}h_{k}^{-1}\chi_R g_{k}\Vert
_{L^{2}}^{2}dt \\
&\quad \quad \leq \int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi_R
h_k^{-1}\psi _{1}(h_{k}^{2}P)g_{k}\Vert ^{2}dt+\int_{0}^{T}\Vert \langle
x\rangle ^{s}E_{-1}\chi_R h_{k}^{-1}\psi
_{1}(h_{k}^{2}P)a(h_{k}^{2}P)^{1/2}au_{k}\Vert ^{2}dt \\
&\quad \quad \leq \int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi_R
P^{1/2}\psi _{2}(h_{k}^{2}P)g_{k}\Vert ^{2}dt+\int_{0}^{T}\Vert \langle
x\rangle ^{s}E_{-1}\chi_R h_{k}^{-1}\psi
_{1}(h_{k}^{2}P)a(h_{k}^{2}P)^{1/2}au_{k}\Vert ^{2}dt,
\end{align*}
where $\psi _{2}(t)=t^{-1/2}\psi _{1}(t).$
We have,
\begin{align}
\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi_R P^{1/2}\psi
_{2}(h_{k}^{2}P)g_{k}\Vert ^{2}dt&\le I + I\!I,
\end{align}
where
\begin{equation}
I= \int_{0}^{T}\Vert \langle x\rangle^{s}E_{-1}\langle x\rangle ^{-s}\chi_R P^{1/2}\psi _{2}(h_{k}^{2}P)\langle x\rangle ^{s}g_{k}\Vert ^{2}dt \notag
\end{equation}
and
\begin{equation}
I\!I=h_k^{-2} \int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi_R
[(h_k^2P)^{1/2}\psi _{2}(h_{k}^{2}P),\langle x\rangle ^{-s} ]\langle
x\rangle ^{s}g_{k}\Vert ^{2}dt. \notag
\end{equation}
The symbol of $\langle x\rangle ^{s}E_{-1}\langle x\rangle ^{-s}$
belongs to $S((|\xi|+\langle x\rangle)^{-1},g)$, hence $\langle x\rangle
^{s}E_{-1}\langle x\rangle ^{-s}\chi_R
P^{1/2}$ is bounded on $L^2(\Omega)$
(see \cite[Lemma 4.2]{RZ}) and we have
\begin{equation}
I\le C \int_{0}^{T}\Vert \langle x\rangle ^{s}g_{k}\Vert ^{2}dt . \notag
\end{equation}
According to Lemma~\ref{lemma:D}, $h_k^{-1}\langle x\rangle ^{s}
[(h_k^2P)^{1/2}\psi_{2}(h_{k}^{2}P),\langle x\rangle ^{-s} ]$ is bounded on
$L^2(\Omega)$ and we get
\begin{equation}
I\!I\le C \int_{0}^{T}\Vert \langle x\rangle ^{s}g_{k}\Vert ^{2}dt. \notag
\end{equation}
To estimate $$\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi
_{R}h_{k}^{-1}\psi _{1}(h_{k}^{2}P)a(h_{k}^{2}P)^{1/2}au_{k}\Vert ^{2}dt,$$
we write, with $\psi _{2}(s)=s^{-1}\psi
_{1}(s)$ and $\tilde{\chi}$ a smooth function such that $\tilde{\chi}=1$
for $|x|\geq 1$, $\tilde{\chi}=0$ for $|x|\leq 1/2$, and $\tilde{\chi}_{R}(x)=
\tilde{\chi}(x/R)$,
\begin{align}
\langle x\rangle ^{s}E_{-1}\chi _{R}h_{k}^{-1}\psi _{1}(h_{k}^{2}P)a&
=\langle x\rangle ^{s}E_{-1}\chi _{R}Ph_{k}\psi _{2}(h_{k}^{2}P)a=\langle
x\rangle ^{s}E_{-1}\chi _{R}P\tilde{\chi}_{R}h_{k}\psi _{2}(h_{k}^{2}P)a
\notag \\
& =\langle x\rangle ^{s}E_{-1}\langle x\rangle ^{-s}\chi
_{R}P^{1/2}(h_{k}^{2}P)^{1/2}\langle x\rangle ^{s}[\tilde{\chi}_{R},\psi
_{2}(h_{k}^{2}P)]a \notag \\
& \quad +\langle x\rangle ^{s}E_{-1}\langle x\rangle ^{-s}\chi _{R}[\langle
x\rangle ^{s},P]\tilde{\chi}_{R}h_{k}[\psi _{2}(h_{k}^{2}P),a],
\label{egalite pour le terme 2}
\end{align}
where we have used that $a\tilde{\chi}_{R}=0$ for $R$ large enough.
By \cite[Lemma A.5]{RZ} and Lemma~\ref{lemma:B bis}, the first term of
\eqref{egalite pour le terme 2} is bounded on $L^2(\Omega)$ by $Ch_k$. As
$[\langle x\rangle ^{s},P]$ is a sum of terms $\alpha \partial_{x_j}$ where
$\alpha $ is bounded, $\langle x\rangle ^{s}E_{-1}\langle x\rangle
^{-s}\chi_R[\langle x\rangle ^{s},P]$ is bounded on $L^2(\Omega)$, and
$[\psi_{2}(h_{k}^{2}P),a]$ is bounded on $L^2(\Omega)$ by \cite[Lemma 6.3]{RZ}. Then the second term of \eqref{egalite pour le terme 2} is bounded on
$L^2(\Omega)$ by $Ch_k$. Finally, by Lemma~\ref{lemma:H} we obtain
\begin{align}
\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}\chi_R h_{k}^{-1}\psi
_{1}(h_{k}^{2}P)a(h_{k}^{2}P)^{1/2}au_{k}\Vert ^{2}dt &\leq C_R
h_{k}^{2}\int_{0}^{T}\Vert
(h_{k}^{2}P)^{1/2}a
u_{k}\Vert ^{2}dt \notag \\
&
\le C_Rh_k^2\sup_{t\in [0,T]}\|u_k(t,.)\|^2.
\end{align}
According to (\ref{eq:contradiction}) and (\ref{eq:contradiction2}), we
conclude that the second term on the right-hand side of (\ref{Doi}) goes to
zero as $k$ tends to $+\infty $:
\begin{equation}
\lim_{k\rightarrow \infty }\int_{0}^{T}\Vert \langle x\rangle
^{s}E_{-1}h_{k}^{-1}\chi _{R}g_{k}\Vert _{L^{2}}^{2}dt=0.
\label{Doi terme 2}
\end{equation}
Now we estimate the term $\displaystyle\int_{0}^{T}\Vert \langle x\rangle
^{s}E_{-1}[P,\chi _{R}]u_{k}\Vert _{L^{2}}^{2}dt$.\newline
Let $\chi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }(R-1<|x|<2R+1),\chi _{1}\geq 0,\chi
_{1}=1\mbox{ on
}\supp(\nabla \chi _{R}),$
\begin{align}
\int_{0}^{T}\Vert \langle x\rangle ^{s}E_{-1}[P,\chi _{R}]u_{k}\Vert
_{L^{2}}^{2}dt& \leq \int_{0}^{T}\Vert \langle x\rangle ^{s}\chi
_{1}E_{-1}[P,\chi _{R}]\chi _{1}u_{k}\Vert _{L^{2}(\Omega )}^{2}dt \notag
\\
& \quad +\int_{0}^{T}\Vert \langle x\rangle ^{s}(1-\chi _{1})E_{-1}[P,\chi
_{R}]\chi _{1}u_{k}\Vert _{L^{2}(\Omega )}^{2}dt,\qquad \notag \\
& \leq CR^{2(s-1)}\int_{0}^{T}\Vert u_{k}\Vert
_{L^{2}(R-1<|x|<2R+1)}^{2}dt\leq CR^{2(s-1)}, \label{Doi terme 3}
\end{align}
where we have used, first, that $E_{-1}\partial _{x}$ is bounded on $L^{2}$, that $
\langle x\rangle ^{s}$ is estimated by $CR^{s}$ on the support of $\chi _{1}$, and that
$\partial _{x}\chi _{R}$ is the product of a bounded function with $R^{-1}$;
second, that the symbol of $\langle x\rangle ^{s}(1-\chi _{1})E_{-1}[P,\chi _{R}]$
is uniformly bounded in $R^{-1}S((\langle x\rangle +|\xi |)^{-N},g)$ for all
$N$. The last inequality uses the contradiction assumption
\eqref{Contradiction lemme 2.6}.
Combining \eqref{Doi}, \eqref{eq:E-1/2}, \eqref{Doi terme 2} and
\eqref{Doi terme 3}, we have,
\begin{equation*}
\int_{0}^{T}\Vert \langle x\rangle ^{-s}u_{k}\Vert
_{L^{2}(|x|>2R)}^{2}dt\leq \int_{0}^{T}\Vert \langle x\rangle
^{-s}v_{k}\Vert _{L^{2}(\mathbb{R}^{d})}^{2}\le C_R\delta_k+CR^{2(s-1)},
\end{equation*}
where $\delta_k\to 0$ as $k\to +\infty $, $C$ is independent of $R$, and $
C_R$ may depend on $R$. Then we have
\begin{align*}
\int_0^T\| u_k\|^2_{L^2(x\in\Omega,\ |x|<2R)}&\ge \int_0^T\| \langle
x\rangle^{-s}u_k\|^2_{L^2(x\in\Omega,\ |x|<2R)} \\
&\ge \int_0^T\| \langle x\rangle^{-s}u_k\|^2_{L^2(x\in\Omega)} -\int_0^T\|
\langle x\rangle^{-s}u_k\|^2_{L^2( |x|>2R)} \\
&\ge \int_0^T\| \langle
x\rangle^{-s}u_k\|^2_{L^2(x\in\Omega)}-C_R\delta_k-CR^{2(s-1)}.
\end{align*}
Together with \eqref{eq:contradiction3}, this contradicts
\eqref{Contradiction lemme 2.6} and proves the Lemma.
\end{prooff}
In the sequel, for simplicity, we
shall denote
the sequence $u_{k_\nu}$ found in
Lemma~\ref{Lemma2.6} by $u_k$. Thus there exist $R_{0}>0$, $k_{0}>0$ such
that
\begin{equation*}
\int_{0}^{T}\Vert u_{k}(t)\Vert _{L^{2}(|x|<R)}^{2}dt\geq \frac{1}{2},
\end{equation*}
when $R>R_{0}$ and $k>k_{0}$. \newline
We consider $\chi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }(\mathds{R}^{d})$ such that $$
0\leq \chi _{1}\leq 1,\; \chi _{1}(x)=1 \mbox{ if }|x|\leq R_{1}+2\mbox{ and }\supp\chi
_{1}\subset \{|x|\leq R_{1}+3\},$$ with $R_{1}>R_{0}$.\\ Let $A\geq 1$, $R\geq
1 $, $\psi _{A}\in \ensuremath{\mathscr C}_{0}^{\infty }(\mathds{R})$, $\phi _{R}\in
\ensuremath{\mathscr C}_{0}^{\infty }(\mathds{R})$ be such that $0\leq \psi _{A}$,
$\phi _{R}\leq 1$
and
\begin{equation*}
\psi _{A}(\tau )=1\text{ if }|\tau |\leq A,\phi _{R}(t)=1\text{ if }|t|\leq
R.
\end{equation*}
We recall that $w_k(t)=1_\Omega u_k(t)$.
\begin{proposition}
There exist positive constants $A_0$, $R_0$, $k_0$ such that
\begin{equation*}
\int_{\mathds{R}}\|\psi_A(h^2_kD_t)\phi_R(h^2_k\Delta)1_{[0,T]}
\chi_1w_k(t)\|^2_{L^2(\mathds{R}^d)}dt\geq \frac{1}{4},
\end{equation*}
when $A\geq A_0$, $R\geq R_0$, $k\geq k_0$.
\end{proposition}
\begin{corollary}
The measure $\mu$ does not vanish identically.
\end{corollary}
\begin{prooff}[Proof of the Proposition]
Set $I=(Id-\psi _{A}(h_{k}^{2}D_{t}))1_{[0,T]}\chi _{1}w_{k}$ and
$\widetilde{\psi }_{A}(\tau )=\dfrac{1-\psi _{A}(\tau )}{\tau }$. It is easy to
see that $\widetilde{\psi }_{A}\in L^{\infty }(\mathbb{R})$ and
$|\widetilde{\psi }_{A}(\tau )|\leq \frac{1}{A}$ for all
$\tau \in \mathbb{R}$.
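The $L^{\infty }$ bound is a one-line check, using only that $0\leq \psi _{A}\leq 1$ and $\psi _{A}=1$ on $[-A,A]$:

```latex
\left\vert \frac{1-\psi _{A}(\tau )}{\tau }\right\vert
  \leq \frac{\mathbf{1}_{\{|\tau |>A\}}}{|\tau |}  % the numerator vanishes for |\tau|\le A and is at most 1
  \leq \frac{1}{A}.
```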
We have
\begin{align*}
I &=\widetilde{\psi }_{A}(h_{k}^{2}D_{t})h_{k}^{2}D_{t}(1_{[0,T]}\chi
_{1}w_{k}) \\
&=\frac{h_k^{2}}{i}\widetilde{\psi }_{A}(h_{k}^{2}D_{t})\chi
_{1}(u_{k}(0)\delta _{t=0}-u_{k}(T)\delta _{t=T}) \\
&\quad +\widetilde{\psi }_{A}(h_{k}^{2}D_{t})\chi
_{1}1_{[0,T]}(-h_{k}^{2}Pu_{k}+ih_{k}a(h_{k}^{2}P)^{1/2}
au_{k}+h_{k}g_{k}) \\
&=B_{k}^{1}+B_{k}^{2}+B_{k}^{3}+B_{k}^{4}.
\end{align*}
From the proof of \cite[Proposition 6.1]{RZ} we know that $\Vert
\widetilde{\psi }_{A}(h_{k}^{2}D_{t})\delta _{t=a}\Vert _{L^{2}
(\mathbb{R})}\leq Ch_{k}^{-1}$, so we deduce that
\begin{equation*}
\lim_{k\rightarrow +\infty }\int_{\mathbb{R}}\Vert B_{k}^{1}\Vert
_{L^{2}(\Omega )}^{2}dt\leq \lim_{k\rightarrow +\infty
}Ch_{k}^{4}h_{k}^{-2}(\Vert u_{k}(0)\Vert _{L^{2}(\Omega )}^{2}+\Vert
u_{k}(T)\Vert _{L^{2}(\Omega )}^{2})=0.
\end{equation*}
Using (\ref{eq:auk}) and (\ref{eq:contradiction2}), we can prove easily that
\begin{equation*}
\lim_{k\rightarrow +\infty }\int_{\mathbb{R}}\Vert B_{k}^{3}\Vert
_{L^{2}(\Omega )}^{2}dt\leq C\lim_{k\rightarrow +\infty
}\int_{0}^{T}h_{k}\Vert (h_{k}^{2}P)^{1/2}au_{k}\Vert _{L^{2}(\Omega
)}^{2}dt=0 .
\end{equation*}
From \eqref{eq:contradiction} we can see that
\begin{equation*}
\lim_{k\rightarrow +\infty }\int_{\mathbb{R}}\Vert B_{k}^{4}\Vert
_{L^{2}(\Omega )}^{2}dt\leq C\lim_{k\rightarrow +\infty }\int_{0}^{T}\Vert
\chi _{1}g_{k}\Vert _{L^{2}(\Omega )}^{2}dt=0 .
\end{equation*}
Now, for $B_{k}^{2}$ we argue as in the proof of \cite[Proposition 6.1]{RZ}. Let $\tilde\theta\in \ensuremath{\mathscr C}_0^\infty (0,+\infty) $
be such that $\tilde\theta=1$ on the support of $\psi$, and let
$\tilde\theta_1(s)=s\tilde\theta(s)$. We
have
\begin{align*}
B_{k}^{2} &=-\widetilde{\psi }_{A}(h_{k}^{2}D_{t})
\chi _{1}1_{[0,T]}h_k^{2}P\widetilde{\theta }(h_{k}^{2}P)u_{k} \\
&=-\widetilde{\psi }_{A}(h_{k}^{2}D_{t})1_{[0,T]}[\chi _{1},
\widetilde{\theta }_{1}(h_{k}^{2}P)]u_{k}-\widetilde{\psi }_{A}
(h_{k}^{2}D_{t})1_{[0,T]}
\widetilde{\theta }_{1}(h_k^{2}P)\chi _{1}u_{k}.
\end{align*}
Using Lemma 6.3 in \cite{RZ} and the fact that
\begin{equation*}
\Vert \widetilde{\psi }_{A}(h_{k}^{2}D_{t})\Vert _{L^{2}(\mathbb{R}
)\rightarrow L^{2}(\mathbb{R})}=O\left( \frac{1}{A}\right) ,\,\Vert
\widetilde{\theta }_{1}(h_{k}^{2}P)\Vert _{L^{2}
(\Omega)\rightarrow
L^{2}(\Omega )}=O(1),
\end{equation*}
uniformly in $k$, we deduce that
\begin{equation*}
\int_{\mathbb{R}}\Vert B_{k}^{2}\Vert _{L^{2}(\Omega )}^{2}dt\leq
C(h_{k}^{2}\sup_{t\in \lbrack 0,T]}\Vert u_{k}(t)\Vert _{L^{2}(\Omega
)}^{2}+\frac{1}{A}\int_{0}^{T}\Vert \chi _{1}u_{k}\Vert _{L^{2}(\Omega
)}^{2}dt).
\end{equation*}
Taking $k$ and $A$ sufficiently large we obtain
\begin{equation}
\int_{\mathds{R}}\Vert \psi _{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi
_{1}w_{k}(t)\Vert _{L^{2}(\mathds{R}^{d})}^{2}dt\geq \frac{1}{3}.
\label{trocature D_t}
\end{equation}
Now, we set
\begin{equation*}
\text{II} =(Id-\phi _{R}(h_{k}^{2}\Delta ))\psi
_{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi _{1}w_{k}.
\end{equation*}
It is proved in \cite{RZ} that
\begin{equation}
\int_{\mathbb{R}}\Vert \text{II}\Vert _{L^{2}(\mathds{R}^{d})}^{2}dt\leq
\frac{C_{R_1}}{R}(1+h_{k}^{2}), \label{RZ1}
\end{equation}
where $C_{R_1}$ depends on $R_1$. The proof does not depend on the
equation, so it remains valid in our case; nevertheless, we recall it
below for the convenience of the reader. First, we complete the
proof of the Proposition.
Taking $R$ sufficiently large and using (\ref{trocature D_t}), we obtain
\begin{equation*}
\int_{\mathds{R}}\Vert \phi _{R}(h_{k}^{2}\Delta )\psi
_{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi _{1}w_{k}(t)\Vert _{L^{2}(\mathds{R}
^{d})}^{2}dt\geq \frac{1}{4}.
\end{equation*}
We now return to the proof of \eqref{RZ1}. Since $0\leq \phi _{R}\leq 1$ and
$\phi _{R}(t)=1$ for $|t|\leq R$, we have $|1-\phi _{R}(t)|\leq \frac{|t|^{1/2}}{\sqrt{R}}$,
hence $\displaystyle |1-\phi_R(h_k^2|\xi|^2)|\le \frac{h_k|\xi|}{\sqrt{R}} $, and we obtain,
\begin{align}
\int_{\mathbb{R}}\Vert \text{II}\Vert _{L^{2}(\mathds{R}^{d})}^{2}dt &\leq
C\frac{h_{k}^{2}}{R}\int_{\mathbb{R}}\sum_j\Vert \partial _{j}\psi
_{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi _{1}w_{k}\Vert _{L^{2}(\mathds{R}
^{d})}^{2}dt \notag \\
&\leq C\frac{h_{k}^{2}}{R}\int_{\mathbb{R}}\sum_j\Vert \partial _{j}\psi
_{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi _{1}u_{k}\Vert _{L^{2}(\Omega )}^{2}dt
\notag \\
&\le \frac{h_{k}^{2}}{R}\sum_j\left( \int_{\mathbb{R}}\Vert \partial _{j}
\widetilde{\theta }(h_{k}^{2}P)\psi _{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi
_{1}u_{k}\Vert _{L^{2}(\Omega )}^{2}dt\right. \notag \\
&\quad +\left. \int_{\mathbb{R}}\Vert \partial _{j}(1-\widetilde{\theta }
(h_{k}^{2}P))\psi _{A}(h_{k}^{2}D_{t})1_{[0,T]}\chi _{1}u_{k}\Vert
_{L^{2}(\Omega )}^{2}dt\right) \notag \\
&:=\frac{h_{k}^{2}}{R}(C_{k}^{1}+C_{k}^{2}), \label{C^1_k+C^2_k}
\end{align}
where $\widetilde{\theta }\in \ensuremath{\mathscr C}_{0}^{\infty }(\mathbb{R})$ satisfies
$\widetilde{\theta }(t)=1$ for $t\in \supp(\theta _{1})$, so that
$\widetilde{\theta }\theta _{1}=\theta _{1}$.
By \cite[Lemma 6.3]{RZ} we have
\begin{equation}
C^1_k \leq Ch_k^{-2}\int_0^T\| \chi_1 u_k\|^2_{L^2(\Omega)}dt\leq ch_k^{-2} ,
\label{C^1_k}
\end{equation}
and
\begin{align}
C^2_k &\leq \int_{\mathbb{R}} \|\partial_j[\widetilde{\theta}
(h_k^2P),\chi_1]\psi_A(h_k^2D_t)1_{[0,T]}\widetilde{\chi}_1
u_k\|^2_{L^2(\Omega)}dt \notag \\
&\leq \int_{\mathbb{R}} \|\psi_A(h_k^2D_t)1_{[0,T]}\widetilde{\chi}_1
u_k\|^2_{L^2(\Omega)}dt \notag \\
&\leq C\int_0^T\| \widetilde{\chi}_1 u_k\|^2_{L^2(\Omega)}dt\leq C_{R_1}
\int_0^T\|\langle x\rangle^{-s} u_k\|^2_{L^2(\Omega)}dt, \label{C^2_k}
\end{align}
where $\widetilde{\chi}_1\in \ensuremath{\mathscr C}^\infty_0(\overline{\Omega})$, $\widetilde{
\chi}_1=1$ on $\supp (\chi_1)$.\newline
Combining (\ref{C^1_k+C^2_k}), (\ref{C^1_k}) and (\ref{C^2_k}),
we obtain (\ref{RZ1}).
\end{prooff}
\subsection{The microlocal defect measure vanishes in the incoming set}
In this section we prove that the microlocal defect measure $\mu$ vanishes
in the incoming set.
First we recall some notation introduced in \cite[Section 7]{RZ}. We keep the
same notation whenever possible.
We set
\begin{equation*}
b(x,\xi )=\sum_{j,k=1}^{d}b^{jk}(x)x_{j}\xi _{k}.
\end{equation*}
\begin{proposition}
\label{Prop Incom} Let $m_0=(x_0,t_0,\xi_0,\tau_0)\in T^\star({\mb{R}}^{d+1})$ be such that
$\xi_0\not=0$, $\tau_0+p(x_0,\xi_0)=0$, $|x_0|\ge3R_0$, $b(x_0,\xi_0)\le
-3\delta|x_0||\xi_0|$ for some $\delta>0$ small enough. Then
$m_0\notin\supp \mu$.
\end{proposition}
We recall the results proved in \cite[Section 7, Lemma 7.5 and
Corollary 7.6]{RZ}. Part of the proof is in Doi~\cite{Do}. We use the Weyl
quantization of symbols, denoted by $Op^{w}$.
There exist a symbol $\Phi \in S(1,g)$ such that $0\leq \Phi \leq 1$ and a
symbol $\lambda _{1}\in S(1,g)$ such that,
\begin{gather}
\supp\lambda _{1}\subset \supp\Phi \subset \{(x,\xi )\in T^{\ast }({\mb{R}}^{d}),\
|x|\geq 2R_{0},\ b(x,\xi )\leq -\frac{\delta }{2}|x||\xi |,\ |\xi |\geq
\frac{|\xi _{0}|}{4}\} , \label{Incom eq1} \\
\{(x,\xi )\in T^{\ast }({\mb{R}}^{d}),\ |x|\geq \frac{5}{2}R_{0},\ b(x,\xi )\leq
-\delta |x||\xi |,\ |\xi |\geq \frac{|\xi _{0}|}{2}\}\subset \{(x,\xi )\in
T^{\ast }({\mb{R}}^{d}),\ \Phi (x,\xi )=1\} , \notag \\
\Phi (x,h\xi )=\Phi (x,\xi )\text{ when }|h\xi |\geq \frac{|\xi _{0}|}{2},
\text{ and }0<h\leq 1, \notag \\
H_{{p}}\Phi (x,\xi )\leq 0\text{ on the support of }\lambda _{1} , \notag \\
\lambda _{1}\geq 0 , \notag \\
[\tilde{P},Op^{w}(\lambda _{1})]-\frac{1}{i}Op^{w}(H_{{p}}\lambda _{1})\in
Op^w(S(1,g)) , \label{Icom eq7} \\
\text{there exist two positive constants }C,\ C^{\prime }\text{ such that}
\notag \\
-H_{{p}}\lambda _{1}\geq C\langle x\rangle ^{-2s}\Phi ^{2}(x,\xi )(|x|+|\xi
|)-C^{\prime }\Phi^2(x,\xi ). \label{Icom eq8}
\end{gather}
\begin{prooff}
Let $\varphi _{1}\in \ensuremath{\mathscr C}_{0}^{\infty }({\mb{R}}^{d})$ such that
\begin{equation}
\varphi _{1}(x)=1\text{ if }|x|\leq \frac{4}{3}R_{0},\ \supp\varphi
_{1}\subset \{x,\ |x|\leq \frac{3}{2}R_{0}\}. \label{Incom eq9}
\end{equation}
Let $M$ be large enough such that,
\begin{equation*}
|((1-\varphi _{1})Op^w(\lambda _{1})(1-\varphi _{1})u|u)|\leq \frac{M}{2}
\Vert u\Vert ^{2}.
\end{equation*}
Here and in the sequel $(\cdot |\cdot )$ and $\Vert \cdot \Vert $ denote the
$L^{2}(\Omega )$ inner product and norm respectively. The cutoffs make sense
with this $L^{2}$ product. We set,
\begin{equation*}
N(t)=((M-(1-\varphi _{1})Op^{w}(\lambda _{1})(1-\varphi
_{1}))u_{k}(t)|u_{k}(t)),
\end{equation*}
and we have
\begin{equation}
\frac{M}{2}\Vert u_{k}(t)\Vert ^{2}\leq N(t)\leq 2M\Vert u_{k}(t)\Vert ^{2}.
\label{Incom eq11}
\end{equation}
Setting $\Lambda =M-(1-\varphi_1)Op^{w}(\lambda_1)(1-\varphi_1)$,
we have,
\begin{equation*}
\frac d{dt}N(t)=(\Lambda \frac d{dt}u_k(t)|u_k(t))+(\Lambda u_k(t)| \frac
d{dt}u_k(t)).
\end{equation*}
From \eqref{eq:lowfre2} we have
\begin{equation*}
\frac{d}{dt}
u_{k}=-iPu_{k}-h_{k}^{-1}a(h_{k}^{2}P)^{1/2}(au_{k})+ih_{k}^{-1}g_{k}.
\end{equation*}
We obtain,
\begin{align}
\frac{d}{dt}N=& (i[P,\Lambda ]u_{k}|u_{k}) \notag \\
& -h_{k}^{-1}(\Lambda a(h_{k}^{2}P)^{1/2}au_{k}|u_{k})-h_{k}^{-1}(\Lambda
u_{k}|a(h_{k}^{2}P)^{1/2}au_{k}) \notag \\
& +ih_{k}^{-1}(\Lambda g_{k}|u_{k})-ih_{k}^{-1}(\Lambda u_{k}|g_{k}) \notag
\\
=& A_{1}+A_{2}+A_{3} . \label{Icom eq16}
\end{align}
For support reasons we have $a(1-\varphi_1)=0$, thus we deduce,
\begin{align}
A_{2}& =-\frac{M}{h_{k}}
[(a(h_{k}^{2}P)^{1/2}(au_{k})|u_{k})+(u_{k}|a(h_{k}^{2}P)^{1/2}(au_{k}))]
\notag \\
& =-\frac{2M}{h_{k}}\Vert (h_{k}^{2}P)^{1/4}(au_{k})\Vert ^{2}\leq 0.
\label{Icom eq17}
\end{align}
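The second equality in \eqref{Icom eq17} uses only that $a$ is real-valued and that $(h_{k}^{2}P)^{1/2}$ is self-adjoint; for instance, for the first of the two terms,

```latex
(a(h_{k}^{2}P)^{1/2}(au_{k})\,|\,u_{k})
  = ((h_{k}^{2}P)^{1/2}(au_{k})\,|\,au_{k})                         % a is real-valued, so it moves across the inner product
  = ((h_{k}^{2}P)^{1/4}(au_{k})\,|\,(h_{k}^{2}P)^{1/4}(au_{k}))     % split the self-adjoint nonnegative power
  = \Vert (h_{k}^{2}P)^{1/4}(au_{k})\Vert ^{2},
```

and the conjugate term contributes the same quantity.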
We have, for a constant $C_1>0$
\begin{align} \label{Icom eq18}
|A_3|\le \frac{C_1}{h_k}\|\langle x\rangle ^sg_k\|\| \langle x\rangle
^{-s}u_k\|.
\end{align}
To estimate $A_{1}$ we remark that $[P,\Lambda ]=[\tilde{P},\Lambda ]$ and
\begin{equation}
\lbrack P,\Lambda ]=[\tilde{P},\varphi _{1}]Op^{w}(\lambda _{1})(1-\varphi
_{1})-(1-\varphi _{1})[\tilde{P},Op^{w}(\lambda _{1})](1-\varphi
_{1})+(1-\varphi _{1})Op^{w}(\lambda _{1})[\tilde{P},\varphi _{1}].
\label{Icom eq19}
\end{equation}
Following \eqref{Incom eq1} and \eqref{Incom eq9}, the supports of $\lambda_1$
and $\varphi_1$ are disjoint; thus, taking account of \eqref{Incom eq11}, we
have
\begin{equation} \label{Incom eq20}
|(\big[[\tilde P,\varphi_1]Op^{w}(\lambda_1)(1-\varphi_1)+(1-\varphi_1)Op^{w}(\lambda_1)[\tilde P,\varphi_1]\big]u_k|u_k)|\le C_2N(t).
\end{equation}
Let $d(x,\xi)\in \ensuremath{\mathscr C}_{0}^{\infty }({\mb{R}}^{2d}) $ be supported in $\{|x-x_0|\le 1,\ |\xi-\xi_0|\le 1\}$, with
$d(x_0,\xi_0)=1$. According to \eqref{Icom eq7}, \eqref{Icom eq8} and the G\aa rding
inequality, we get,
\begin{equation} \label{Icom eq21}
(-i(1-\varphi_1)[\tilde P,Op^{w}(\lambda_1)](1-\varphi_1)u_k|u_k)\ge
C_3h_k^{-1}\|\langle x\rangle^{-s}d(x,h_kD_x)u_k\|^2-C_4N(t).
\end{equation}
From \eqref{Icom eq19}, \eqref{Incom eq20} and \eqref{Icom eq21}
we obtain,
\begin{equation} \label{Icom eq22}
A_1\ge C_3h_k^{-1}\|\langle x\rangle^{-s}d(x,h_kD_x)u_k\|^2-C_5N(t).
\end{equation}
Following \eqref{Icom eq16}, \eqref{Icom eq17}, \eqref{Icom eq18} and
\eqref{Icom eq22}, we have
\begin{equation} \label{Incom eq22bis}
N^{\prime }(t)+C_3h_k^{-1}\|\langle x\rangle^{-s}d(x,h_kD_x)u_k\|^2\le
\beta(t)+C_6N(t),
\end{equation}
where we have set $$\beta(t)= \frac{C_1}{h_k}\|\langle x\rangle ^sg_k(t)\|\,\|
\langle x\rangle ^{-s}u_k(t)\|.$$
Integrating \eqref{Incom eq22bis} between $0$ and $t$ for $t\in[0,T]$ we
obtain,
\begin{equation} \label{Incom eq23}
N(t)+C_3h_k^{-1}\|\langle
x\rangle^{-s}d(x,h_kD_x)u_k\|^2_{L^2([0,T]\times\Omega)}\le
\int_0^T\beta(t)dt+N(0)+C_8\int_0^tN(s)ds.
\end{equation}
By Gronwall's inequality we have for $t\in [0,T]$,
\begin{equation} \label{Incom eq24}
N(t)\le C_7\int_0^T\beta(t)dt+ C_8N(0).
\end{equation}
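For completeness, a minimal sketch of the Gronwall step, writing $G=\int_{0}^{T}\beta (t)dt+N(0)$ (a shorthand not used elsewhere): dropping the nonnegative second term on the left of \eqref{Incom eq23} gives

```latex
N(t)\leq G+C_{8}\int_{0}^{t}N(s)\,ds
  \quad \Longrightarrow \quad        % Gronwall's inequality
N(t)\leq G\,e^{C_{8}t}
  \leq e^{C_{8}T}\Big( \int_{0}^{T}\beta (t)dt+N(0)\Big),
```

which is \eqref{Incom eq24} with constants of order $e^{C_{8}T}$.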
Using \eqref{Incom eq24} in \eqref{Incom eq23}, we get
\begin{align*}
\|\langle x\rangle^{-s}d(x,h_kD_x)u_k\|_{L^2([0,T]\times\Omega)}^2 & \notag
\\
\le& {C_8}\|\langle x\rangle ^sg_k\|_{L^2([0,T]\times\Omega)}\| \langle
x\rangle
^{-s}u_k\|_{L^2([0,T]\times\Omega)}+C_9h_k\|u_k(0)\|^2_{L^2(\Omega)}.
\end{align*}
Following \eqref{eq:contradiction} and \eqref{eq:contradiction3} we obtain
\begin{equation*}
\|\langle x\rangle^{-s}d(x,h_kD_x)u_k\|_{L^2([0,T]\times\Omega)}^2\to 0
\text{ when }k\to+\infty.
\end{equation*}
Let $\chi (t,\tau )\in \ensuremath{\mathscr C}_{0}^{\infty }({\mb{R}}^{2})$ supported in a
neighborhood sufficiently small around $(t_{0},\tau _{0})$ and taking
account that $d$ is supported in a neighborhood of $(x_{0},\xi _{0})$, we
have
\begin{equation*}
\|\chi(t,h^2_kD_t)d(x,h_kD_x)u_k\|_{L^2([0,T]\times\Omega)}\to 0 \text{ when }
k\to+\infty,
\end{equation*}
then $\langle \mu, \chi^2d^2\rangle=0$, thus $(x_0,t_0,\xi_0,\tau_0)\not\in
\supp \mu$.
\end{prooff}
\subsection{The microlocal defect measure vanishes on $\{a^{2}>0\}$}
The goal of this section is to prove that the microlocal defect measure
vanishes on $\{a^{2}>0\}$. More precisely we have the following proposition.
\begin{proposition}
\label{prop: vanish on a>0} Let $u_k=\psi(h^2_kP)u$ satisfying
\begin{equation} \label{eq1:prop a>0}
h_k^2(D_t+P)u_k-ih_ka(h_k^2P)^{1/2}(au_k)=h_kg_k,
\end{equation}
\begin{equation} \label{eq2:prop a>0}
\left\Vert \left\langle x\right\rangle ^{s}g_{k}\right\Vert
_{L^{2}([0,T]\times\Omega)}^{2}+h_k\sup_{t\in \lbrack 0,T]}\left\Vert
u_{k}(t)\right\Vert _{L^{2}(\Omega)}^{2}+h_k\underset{k\rightarrow +\infty }{
\rightarrow }0,
\end{equation}
and
\begin{equation} \label{eq3:prop a>0}
\left\Vert \left\langle x\right\rangle ^{-s}u_{k}\right\Vert
_{L^{2}([0,T]\times\Omega)}^{2}\underset{k\rightarrow +\infty }{\rightarrow }
1.
\end{equation}
We assume that the sequence $(W_k)=(1_{[0,T]}1_\Omega u_k)$ admits a microlocal
defect measure $\mu$; then $a^2\mu=0$.
\end{proposition}
\begin{prooff}
Taking the imaginary part of the $L^{2}([0,T]\times \Omega )$ inner product
of \eqref{eq1:prop a>0} with $u_{k}/h_{k}$, we obtain,
\begin{align}
& \Im m \left[
(h_{k}(D_{t}+P)u_{k}|u_{k})-i(a(h_{k}^{2}P)^{1/2}(au_{k})|u_{k})\right] =\Im m
(g_{k}|u_{k}). \notag
\end{align}
Using that $P$ is self-adjoint, we get
\begin{align}
& \Im m (h_{k}\int_{0}^{T}\int_{\Omega }\frac{1}{2}
D_{t}|u_{k}|^{2}dxdt)-((h_{k}^{2}P)^{1/2}(au_{k})|au_{k})=\Im m(\langle
x\rangle ^{s}g_{k}|\langle x\rangle ^{-s}u_{k}). \label{eq4=prop a>0}
\end{align}
From \eqref{eq2:prop a>0} and \eqref{eq3:prop a>0}, we have
\begin{equation*}
h_{k}\int_{0}^{T}\int_{\Omega }D_{t}|u_{k}|^{2}dxdt=ih_{k}\Vert
u_{k}(0)\Vert _{L^{2}(\Omega )}^{2}-ih_{k}\Vert u_{k}(T)\Vert _{L^{2}(\Omega
)}^{2}\underset{k\rightarrow +\infty }{\rightarrow }0,
\end{equation*}
and
\begin{equation*}
|(\langle x\rangle ^{s}g_{k}|\langle x\rangle ^{-s}u_{k})|\leq \Vert \langle
x\rangle ^{s}g_{k}\Vert _{L^{2}(\Omega )}\Vert \langle x\rangle
^{-s}u_{k}\Vert _{L^{2}(\Omega )}\underset{k\rightarrow +\infty }{
\rightarrow }0.
\end{equation*}
Following \eqref{eq4=prop a>0}, we deduce
\begin{equation}
((h_{k}^{2}P)^{1/2}(au_{k})|au_{k})\underset{k\rightarrow +\infty }{
\rightarrow }0. \label{eq4:prop a>0}
\end{equation}
Let $\theta\in\ensuremath{\mathscr C}^\infty_0((0,+\infty))$ with $\theta=1$ on the support of $
\psi$. Thus we have $\theta(h_k^2P)u_k=u_k$.
Let $\tilde\theta(t)=t^{-1/4}
\theta(t)$, we have $\tilde\theta\in\ensuremath{\mathscr C}^\infty_0((0,+\infty))$ and,
\begin{align}
(au_k|au_k)&=(a\theta^2(h^2_kP)u_k|au_k)=(a(h^2_kP)^{1/2}\tilde
\theta^2(h^2_kP)u_k|au_k) \notag \\
&=((h^2_kP)^{1/2}\tilde\theta^2(h^2_kP)au_k|au_k)+([a,(h^2_kP)^{1/2}\tilde
\theta^2(h^2_kP)]u_k|au_k). \label{eq5:prop a>0}
\end{align}
From Lemma 6.3 \cite{RZ}, we have
\begin{equation} \label{eq6:prop a>0}
\|[a,(h^2_kP)^{1/2}\tilde\theta^2(h^2_kP)]u_k\|_{L^2(\Omega)}\le
Ch_k\|u_k\|_{L^2(\Omega)}.
\end{equation}
We have also,
\begin{align}
((h^2_kP)^{1/2}\tilde\theta^2(h^2_kP)au_k|au_k)&=
\|(h^2_kP)^{1/4}\tilde\theta(h^2_kP)au_k\|^2_{L^2([0,T]\times\Omega )}
\notag \\
&\le \|(h^2_kP)^{1/4}au_k\|^2_{L^2([0,T]\times\Omega
)}=((h^2_kP)^{1/2}au_k|au_k)\underset{k\rightarrow +\infty }{\rightarrow }0,
\label{eq7:prop a>0}
\end{align}
from \eqref{eq4:prop a>0}. Following \eqref{eq5:prop a>0},
\eqref{eq6:prop a>0} and \eqref{eq7:prop a>0}, we obtain,
\begin{equation} \label{eq8:prop a>0}
(au_k|au_k)\underset{k\rightarrow +\infty }{\rightarrow }0.
\end{equation}
Let $b(x,t,\xi ,\tau )\in \ensuremath{\mathscr C}_{0}^{\infty }({\mb{R}}^{d}\times {\mb{R}}\times {\mb{R}}
^{d}\times {\mb{R}})$. By standard semi-classical symbolic calculus we have
\begin{align}
(a^{2}(x)b(x,t,h_{k}D_{x},h_{k}^{2}D_{t})W_{k}|W_{k})=&
(b(x,t,h_{k}D_{x},h
_k^{2}D_{t})(aW_{k})|aW_{k}) \notag \\
& +h_k(r(x,t,h_{k}D_{x},h_{k}^{2}D_{t})W_{k}|W_{k}) , \label{eq9:prop a>0}
\end{align}
where $r(x,t,h_{k}D_{x},h_{k}^{2}D_{t})$ is bounded on
$L^{2}([0,T]\times {\mb{R}}^{d})$. Thus from \eqref{eq2:prop a>0}, we have,
\begin{equation}
h_{k}|(r(x,t,h_{k}D_{x},h_{k}^{2}D_{t})W_{k}|W_{k})|\leq Ch_{k}\Vert
W_{k}\Vert _{L^{2}([0,T]\times {\mb{R}}^{d})}^{2}\underset{k\rightarrow +\infty }{
\rightarrow }0. \label{eq10:prop a>0}
\end{equation}
From \eqref{eq8:prop a>0} and using $\Vert aW_{k}\Vert _{L^{2}({\mb{R}}\times {\mb{R}}
^{d})}^{2}=\Vert au_{k}\Vert _{L^{2}([0,T]\times \Omega )}^{2}$ we obtain,
\begin{equation}
|(b(x,t,h_{k}D_{x},h_{k}^{2}D_{t})(aW_{k})|aW_{k})_{L^{2}({\mb{R}}\times {\mb{R}}^{d})}|\leq
C\Vert aW_{k}\Vert _{L^{2}({\mb{R}}\times {\mb{R}}^{d})}^{2}\underset{k\rightarrow
+\infty }{\rightarrow }0. \label{eq11:prop a>0}
\end{equation}
According to the definition of the microlocal defect measure $\mu $, \eqref{eq9:prop a>0}, \eqref{eq10:prop a>0} and \eqref{eq11:prop a>0} imply Proposition~
\ref{prop: vanish on a>0}.
\end{prooff}
\subsection{Propagation properties of microlocal defect measure and end of
proof}
The statement of our results requires some geometric notions which are
classical in the microlocal study of boundary problems (cf. \cite{hor} p.
424 and 430-432). \newline
Let $M=\Omega \times \mathbb{R}_{t}$. We set $$T^{*}_b M=T^{*}M\backslash\{0\}\cup T^{*}\partial M\backslash\{0\}.$$ We have the natural restriction map $$\pi:T^{*}\mathbb{R}^{d+1}_{\overline{M}}\rightarrow T^{*}_b M,$$ which is the identity on $T^{*}\mathbb{R}^{d+1}_{M}\backslash\{0\}$ (see \cite{RZ} for details). Consider, near a point of the boundary
$z=(x_{1},x^{\prime },t)\in \partial M$,
a geodesic system of coordinates given by the diffeomorphism $F$ in
(\ref{eq:U0}), for which $z=(0,0,t)$, $M=\{(x_{1},x^{\prime },t):x_{1}>0\}$, and
the operator $D_{t}+P$ has the form (near $z$)
\begin{equation*}
D_{t}+P=D_{t}+D_{x_{1}}^{2}+R(x_{1},x^{\prime },D_{x^{\prime }})+S(x,D_{x}),
\end{equation*}
with $R$ a second order tangential operator and $S$ a first order operator.
Denoting $r(x_{1},x^{\prime },\xi ^{\prime })$ the principal symbol of $R$
and $r_{0}=r|_{x_{1}=0}$, the cotangent bundle to the boundary $T^{\star
}\partial M\backslash \{0\}$ can be decomposed (in this coordinate system)
as the disjoint union of the following regions:
\begin{itemize}
\item the elliptic region $\mathcal{E}=\{(x^{\prime },t,\xi^{\prime
},\tau)\in T^{\star}\partial M\backslash\{0\};\quad r_{0}(x^{\prime
},\xi^{\prime })+\tau >0\}$,
\item the hyperbolic region $\mathcal{H}=\{ (x^{\prime },t,\xi^{\prime
},\tau)\in T^\star\partial M\backslash\{0\} ;\quad r_{0}(x^{\prime
},\xi^{\prime })+\tau <0\}$,
\item and the glancing region $\mathcal{G}=\{(x^{\prime
},t,\xi^\prime,\tau)\in T^ \star\partial M\backslash\{0\};\quad
r_{0}(x^{\prime },\xi^{\prime })+\tau =0\}$.
\end{itemize}
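Equivalently (this is a standard observation, consistent with the factorization used in Lemma~\ref{lemma:A10} below), the three regions can be read off from the roots in $\xi _{1}$ of the principal symbol $p=\tau +\xi _{1}^{2}+r(x,\xi ^{\prime })$ of $D_{t}+P$ in these coordinates: over a point of $T^{\star }\partial M\backslash \{0\}$,
\begin{equation*}
p=0\quad \Longleftrightarrow \quad \xi _{1}^{2}=-\left( \tau +r_{0}(x^{\prime },\xi ^{\prime })\right) ,
\end{equation*}
which has no real root over $\mathcal{E}$ (no bicharacteristic reaches the point), two simple roots $\xi _{1}=\pm \sqrt{-(\tau +r_{0})}$ over $\mathcal{H}$ (transversal reflection), and the double root $\xi _{1}=0$ over $\mathcal{G}$ (tangency).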
For the purpose of the proofs, it is important to consider the following
subsets of the glancing region:
\begin{itemize}
\item the diffractive region $\mathcal{G}_d=\{ \zeta\in \mathcal{G},
\partial_{x_{1}}r|_{x_{1}=0}(\zeta)<0\}$,
\item the gliding region $\mathcal{G}_{g}=\{ \zeta\in \mathcal{G},
\partial_{x_{1}}r|_{x_{1}=0}(\zeta)> 0\}$; we set $\mathcal{G}^2=\mathcal{G}
_d\cup\mathcal{G}_g$,
\item and $\mathcal{G}^k=\{ \zeta\in \mathcal{G},
H_{r_0}^j(\partial_{x_{1}}r|_{x_{1}=0})(\zeta)=0,\; 0\leq j<k-2,\;
H_{r_0}^{k-2}(\partial_{x_{1}}r|_{x_{1}=0})(\zeta)\neq 0\}\quad k\geq 3$,
where $$H_{r_0}=\frac{\partial r_0}{\partial \xi^{\prime }}\cdot\frac{\partial }{
\partial x^{\prime }}- \frac{\partial r_0}{\partial x^{\prime }}\cdot\frac{
\partial }{\partial \xi^{\prime }}.$$
\end{itemize}
\begin{definition}
\label{contact d'ordre infini} We say that the bicharacteristics have no
contact of infinite order with the boundary if $\displaystyle \mathcal{G}
=\bigcup _{k=2}^{+\infty }\mathcal{G}^{k}$.
\end{definition}
Now we recall the definition of the measure $\nu $ on the boundary. By
Lemma~\ref{LemmaA6}, the sequence $(1_{[0,T]}h_{k}(\frac{
\partial w_{k}}{\partial n}))$ is bounded in $L^{2}(\mathbb{R}_{t};
L^{2}(\partial \Omega )).$ Therefore, with the notations of (\ref{eq:plong})
and Proposition~\ref{mesure}, we have the following lemma.
\begin{lemma}
\label{lemma:Bord}There exists a subsequence $(W_{\sigma _{1}(k)})$ of $
(W_{\sigma (k)})$ and a Radon measure $\nu $ on $T^{\star }(\partial \Omega
\times \mathds{R}_{t})$ such that for every $b\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\ast
}(\partial \Omega \times \mathds{R}_{t}))$ we have
\begin{equation*}
\lim_{k\rightarrow +\infty }\left( \mathcal{O}p(b)\left( x,t,h_{\sigma
_{1}(k)}D_{x},h_{\sigma _{1}(k)}^{2}D_{t}\right) h_{\sigma _{1}(k)}\frac{1}{i
}\tfrac{\partial W_{\sigma _{1}(k)}}{\partial n},h_{\sigma _{1}(k)}\frac{1}{i
}\tfrac{\partial W_{\sigma _{1}(k)}}{\partial n}\right) _{L^{2}(\partial
\Omega \times \mathds{R}_{t})}=\left\langle \nu ,b\right\rangle .
\end{equation*}
\end{lemma}
We now give two results on the propagation of the support of the microlocal
defect measure: the first, Proposition~\ref{Prop Propagation interieur}, for
points inside $T^\star M $, and the second, Proposition~\ref{prop:propag}, at
the boundary of $M$.
\begin{proposition}
\label{Prop Propagation interieur} Let $m_{0}=(x_{0},\xi _{0},t_{0},\tau
_{0})\in T^{\star }M$ and $U_{m_{0}}$ be a neighborhood of this point in $
T^{\star }M$. Then for every $b\in \ensuremath{\mathscr C}_{0}^{\infty }(U_{m_{0}})$, we have
\begin{equation}
\langle \mu ,H_{p}b\rangle =0 . \label{eq:Hp}
\end{equation}
\end{proposition}
\begin{prooff}
It is enough to prove (\ref{eq:Hp}) when $b(x,t,\xi ,\tau )=\Phi (x,\xi
)\chi (t,\tau )$ with $\pi _{x}\supp \Phi \subset V_{x_{0}}\subset \Omega $.
Let $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(\Omega )$ be such that $\varphi =1$ on $
V_{x_{0}}$. We introduce
\begin{align*}
A_{k}&=\frac{i}{h_{k}}[(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
h_{k}^{2}(D_{t}+P)1_{[0,T]}w_{k},1_{[0,T]}w_{k})_{L^{2}(\Omega \times
\mathds{R})} \\
&\quad\quad \quad -(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
1_{[0,T]}w_{k},h_{k}^{2}(D_{t}+P)1_{[0,T]}w_{k})_{L^{2}(\Omega \times
\mathds{R})}].
\end{align*}
We claim that we have
\begin{equation}
\lim_{k\rightarrow +\infty }A_{k}=0 . \label{eq:damp term}
\end{equation}
We have
\begin{align}
A_{k}&=\frac{i}{h_{k}}[(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
h_{k}^{2}[D_{t},1_{[0,T]}]w_{k},1_{[0,T]}w_{k})_{L^{2}(\Omega \times
\mathds{R})} \notag \\
&\quad -(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
1_{[0,T]}w_{k},h_{k}^{2}[D_{t},1_{[0,T]}]w_{k})_{L^{2}(\Omega \times
\mathds{R})}] \notag \\
&\quad -2\Im (\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
g_{k},1_{[0,T]}w_{k})_{L^{2}(\Omega \times \mathds{R})} \notag \\
&\quad -2\Re (\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi
1_{[0,T]}a(h_{k}^{2}P)^{1/2}aw_{k},1_{[0,T]}w_{k})_{L^{2}(\Omega \times
\mathds{R})}+o(1) , \notag
\end{align}
where we used that $(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi)-(\Phi (x,h_{k}D_{x})\chi (t,h_{k}^{2}D_{t})\varphi)^*=o(1)$ by pseudo-differen\-tial calculus.
It was proved in \cite[proof of Proposition~A.9]{RZ} that the first and the
second terms tend to zero when $k\rightarrow +\infty $. Since $
g_{k}\rightarrow 0$ in $L_{loc}^{2}$, the third term tends also to zero when
$k\rightarrow +\infty $.\newline
For the fourth term, according to (\ref{eq9:prop a>0}) and (\ref{eq11:prop
a>0}), it is easy to see that it tends to zero.
Thus (\ref{eq:damp term}) is proved.
On the other hand, it was shown in the proof of Proposition~A.9 of \cite{RZ} that
\begin{equation*}
\lim_{k\rightarrow +\infty }A_{k}=-\langle \mu ,H_{p}(\Phi \chi )\rangle .
\end{equation*}
Combining this with (\ref{eq:damp term}), we obtain $\langle \mu
,H_{p}b\rangle =0$ when $b=\Phi \chi $, which implies our proposition.
\end{prooff}
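In the model case where the principal symbol in the interior is $p=\tau +|\xi |^{2}$ (e.g. for the flat Laplacian), we have
\begin{equation*}
H_{p}=\partial _{t}+2\xi \cdot \partial _{x},
\end{equation*}
so that \eqref{eq:Hp} expresses the invariance of $\mu $ along the bicharacteristics $s\mapsto (x_{0}+2s\xi _{0},t_{0}+s,\xi _{0},\tau _{0})$.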
We consider now the case of a point $m_{0}=(x_{0},\xi _{0},t_{0},\tau _{0})\in
T^{\star }\mathds{R}^{d+1}$ with $x_{0}\in \partial \Omega .$ We take, as in
\cite{RZ}, a neighborhood $U_{x_{0}}$ so small that we can perform the
diffeomorphism $F$ described in (\ref{eq:U0}).
Let $\mu $ and $\nu $ be the measures on $T^{\star }\mathds{R}^{d+1}$\ and $
T^{\star }(\partial \Omega \times \mathds{R}_{t})$\ defined in
Proposition~\ref{mesure} and Lemma ~\ref{lemma:Bord}.
We denote by $\tilde{\mu}$ and $
\tilde{\nu}$ the measures on
$T^{\star }(U_{x_{0}}\times \mathds{R}_{t})$
and $T^{\star }(U_{x_{0}}\cap \{y_{1}=0\}\times \mathds{R}_{t})$
which are the pullbacks of $\mu $ and $\nu $ by the diffeomorphism
$\tilde{F}:(x,t)\mapsto (F(x),t).$
We first recall Lemma~A.10 established in \cite{RZ}.
\begin{lemma}
\label{lemma:A10} Let $b\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\star }(U_{x_{0}}\times
\mathds{R}_{t})).$ We can find $b_{j}\in \ensuremath{\mathscr C}_{0}^{\infty }(U_{x_{0}}\times
\mathds{R}_{t}\times \mathds{R}_{\eta ^{\prime }}^{d-1}\times \mathds{R}
_{\tau }),$ $j=0,1$\ and $b_{2}\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\star
}(U_{x_{0}}\times \mathds{R}_{t}))$\ with compact support in $(y,t,\eta
^{\prime },\tau )$ such that with the notations of (\ref{eq:U0}),
\begin{equation*}
b(y,t,\eta ,\tau )=b_{0}(y,t,\eta ^{\prime },\tau )+b_{1}(y,t,\eta ^{\prime
},\tau )\eta _{1}+b_{2}(y,t,\eta ,\tau )(\tau +\eta _{1}^{2}+r(y,\eta
^{\prime })),
\end{equation*}
where $r$ is the principal symbol of $R(y,D^{\prime }).$
\end{lemma}
\begin{proposition}
\label{prop:propag} With the notations of Lemma~\ref{lemma:A10} for every
$b\in \ensuremath{\mathscr C}_{0}^{\infty }(T^{\star }(U_{0}\times \mathds{R}_{t}))$, we have
\begin{equation*}
\langle \widetilde{\mu },H_{p}b\rangle =-\langle \widetilde{\nu },b_{1\mid
Y_{1}=0}\rangle.
\end{equation*}
\end{proposition}
\begin{prooff}
The proof is similar to that of Proposition~A.12 of \cite{RZ}. We first
recall some results from \cite{RZ} used in its proof.
\end{prooff}
\begin{lemma}[Lemma A.13 \protect\cite{RZ}]
Let for $j=0,1$, $b_{j}=b_{j}(Y,t,\eta ^{\prime },\tau )\in \ensuremath{\mathscr C}_{0}^{\infty
}(U_{0}\times \mathbb{R}^{d+1})$ and $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0})$
, $\varphi =1$ on $\pi _{Y}\supp b_{j}$. Then,
\begin{align}
& \frac{i}{h_{k}}[((b_{0}(\Lambda _{k})+b_{1}(\Lambda
_{k})h_{k}D_{1})\varphi
h_{k}^{2}(D_{t}+P)1_{[0,T]}v_{k}|1_{[0,T]}v_{k})_{L_{+}^{2}} \notag \\
& \quad -\int_{U_{0}^{+}}\langle (b_{0}(\Lambda _{k})+b_{1}(\Lambda
_{k})h_{k}D_{1})\varphi 1_{[0,T]}v_{k},h_{k}^{2}(D_{t}+P)\overline{
1_{[0,T]}v_{k}}\rangle dY] \notag \\
& =-\frac{i}{h_{k}}([h_{k}^{2}(D_{t}+P),(b_{0}(\Lambda _{k})+b_{1}(\Lambda
_{k})h_{k}D_{1})\varphi 1_{[0,T]}]v_{k}|1_{[0,T]}v_{k})_{L_{+}^{2}} \notag
\\
& \quad -(b_{1}(0,Y^{\prime },t,h_{k}D_{Y^{\prime }},h_{k}^{2}D_{t})\varphi {
_{|Y_{1}=0}}1_{[0,T]}(h_{k}D_{1}v_{k}{_{|Y_{1}=0}})|1_{[0,T]}(h_{k}D_{1}v_{k}
{_{|Y_{1}=0}}))_{L^{2}(\mathbb{R}^{d-1}\times \mathbb{R})}. \label{Identity}
\end{align}
Here $\langle . , .\rangle $ denotes the bracket in $\mathcal{D}^{\prime }(
\mathbb{R}_{t})$.
\end{lemma}
\begin{lemma}[Lemma A.15 \protect\cite{RZ}]
\label{mu1} Let for $j=0,1,2$, $b_{j}=b_{j}(Y,t,\eta ^{\prime },\tau )\in
\ensuremath{\mathscr C}_{0}^{\infty }(U_{0}\times \mathbb{R}^{d+1})$ and $\varphi \in
\ensuremath{\mathscr C}_{0}^{\infty }(U_{0})$, $\varphi =1$ on $\pi _{Y}\supp b_{j}$.
Let us set
\begin{equation*}
L_{k}^{j}=(b_{j}(\Lambda _{k})\varphi
(h_{k}D_{1})^{j}1_{[0,T]}v_{k},1_{[0,T]}v_{k})_{L_{+}^{2}}.
\end{equation*}
Then we have for $j=0,1,2$
\begin{equation*}
\lim_{k\rightarrow +\infty }L_{\sigma (k)}^{j}=\langle \widetilde{\mu }
,b_{j}\eta _{1}^{j}\rangle .
\end{equation*}
\end{lemma}
The previous Lemmas still hold in our case, since they are independent of
the equation.
\begin{lemma}
\label{I^j_k,J^j_k} Let $b=b(Y,t,\eta ^{\prime },\tau )\in \ensuremath{\mathscr C}_{0}^{\infty
}(U_{0}\times \mathbb{R}^{d+1})$ and $\varphi \in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0})$
, $\varphi =1$ on $\pi _{Y}\supp b$. For $j=0,1$ we set,
\begin{align*}
I_{k}^{j}&=(h_{k}^{-1}b(\Lambda _{k})\varphi
(h_{k}D_{1})^{j}h_{k}^{2}(D_{t}+P)1_{[0,T]}v_{k}|1_{[0,T]}v_{k})_{L_{+}^{2}},
\\
J_{k}^{j}&=\int_{U_{0}^{+}}\langle h_{k}^{-1}b(\Lambda _{k})\varphi
(h_{k}D_{1})^{j}1_{[0,T]}v_{k}|h_{k}^{2}(D_{t}+P)1_{[0,T]}v_{k}\rangle dY.
\end{align*}
Then $\lim\limits_{k\rightarrow +\infty }I_{k}^{j}=\lim\limits_{k\rightarrow
+\infty }J_{k}^{j}=0$.
\end{lemma}
\begin{prooff}
The proof is similar to that of Lemma~A.14 of \cite{RZ}. We have,
\begin{align*}
\!\! \! \!\! I^j_k & = \frac{1}{i}[(h_k
b(\Lambda_k)\delta_{t=0}\varphi(h_kD_1)^j
v_k(0,.)|1_{[0,T]}v_k)_{L^2_+}-(h_k
b(\Lambda_k)\delta_{t=T}\varphi(h_kD_1)^j v_k(0,.)| 1_{[0,T]}v_k)_{L^2_+}]
\notag \\
&\quad + ( b(\Lambda_k)\varphi(h_kD_1)^j 1_{[0,T]} g_k|
1_{[0,T]}v_k)_{L^2_+}+( b(\Lambda_k)\varphi(h_kD_1)^j 1_{[0,T]}
a(h_k^2P)^{1/2}av_k|1_{[0,T]}v_k)_{L^2_+}.
\end{align*}
By Lemma~A.14 of \cite{RZ}, the first and the second terms of the RHS in the
previous identity tend to zero.\\
Since $\|g_k\|_{L^2}\rightarrow 0$, the third term also tends to zero.
By Lemma~\ref{lemma:H} and \eqref{eq8:prop a>0}, the fourth term tends
to zero.
We conclude that $I_k^j$ tends to zero. For $J_k^j$ we argue as for $I_k^j$.
\end{prooff}
\begin{prooff}[Proof of Proposition~\protect\ref{prop:propag}]
From Proposition~\ref{prop:sopport} $(\tau +p)\mu =0$, so we have
\begin{equation*}
\langle \widetilde{\mu },H_{p}b\rangle =\langle \widetilde{\mu }
,H_{p}(b_{0}+b_{1}\eta _{1})\rangle.
\end{equation*}
Let us consider the identity (\ref{Identity}): by Lemma~\ref{I^j_k,J^j_k}, the
LHS tends to zero when $k\rightarrow +\infty $. By the semiclassical
symbolic calculus, we have
\begin{equation*}
\frac{i}{h_{k}}[h_{k}^{2}(D_{t}+P),(b_{0}(\Lambda _{k})+b_{1}(\Lambda
_{k})h_{k}D_{1})\varphi ]=\sum_{j=0}^{2}c_{j}(\Lambda _{k})\varphi _{1}
(h_{k}D_{1})^{j},
\end{equation*}
where $c_{j}\in \ensuremath{\mathscr C}_{0}^{\infty }(U_{0}\times \mathbb{R}^{d+1})$, $\varphi
_{1}=1$ on $\supp\varphi $, and $\{p,b_{0}+b_{1}\eta
_{1}\}=\sum\limits_{j=0}^{2}c_{j}\eta _{1}^{j}$. Hence, using Lemma~\ref{mu1}
and Lemma~\ref{lemma:Bord}, the RHS of (\ref{Identity}) tends to
\begin{equation*}
-\langle \widetilde{\mu },H_{p}(b_{0}+b_{1}\eta _{1})\rangle -\langle
\widetilde{\nu },b_{1}{_{|Y_{1}=0}}\rangle ,
\end{equation*}
when $k\rightarrow +\infty $.
We conclude that
\begin{equation*}
\langle \widetilde{\mu},H_p b\rangle=\langle \widetilde{\mu}
,H_p(b_0+b_1\eta_1)\rangle=-\langle\widetilde{\nu},b_1{_{|Y_1=0}}\rangle,
\end{equation*}
which proves the Proposition~\ref{prop:propag}.
\end{prooff}
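For the reader's convenience, let us note that the coefficients $c_{j}$ above can be computed explicitly from $p=\tau +\eta _{1}^{2}+r(Y,\eta ^{\prime })$, with the convention $H_{p}b=\{p,b\}$: since $b_{0}$ and $b_{1}$ do not depend on $\eta _{1}$, a direct computation gives
\begin{equation*}
c_{0}=Xb_{0}-b_{1}\,\partial _{Y_{1}}r,\qquad c_{1}=Xb_{1}+2\,\partial
_{Y_{1}}b_{0},\qquad c_{2}=2\,\partial _{Y_{1}}b_{1},
\end{equation*}
where $X=\partial _{t}+\partial _{\eta ^{\prime }}r\cdot \partial _{Y^{\prime
}}-\partial _{Y^{\prime }}r\cdot \partial _{\eta ^{\prime }}$ denotes the
tangential part of the Hamilton field.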
\begin{proposition}
\label{prop: mesure G} With the notations of \cite{RZ}, we have
\begin{equation*}
\widetilde{\nu }(\mathcal{G}_{d}\cup (\bigcup\limits_{k=3}^{+\infty }
\mathcal{G}^{k}))=0.
\end{equation*}
\end{proposition}
\begin{prooff}
The proof is the same as that of Lemma~A.17 in \cite{RZ}.
\end{prooff}
By measure-theoretic arguments (see \cite{Bu}, \cite{B2} and \cite{RZ}), the
propagation of the measure $\mu $ along the generalized bicharacteristic
flow is equivalent to Propositions~\ref{Prop Propagation interieur},
\ref{prop:propag} and \ref{prop: mesure G}.
\section{Introduction}\label{sec_intro}
In the current paradigm of hierarchical structure formation, dark matter halos form through the gravitational collapse of the initial density perturbation and evolve via mass accretion and mergers with other halos \citep{White1978}. Since galaxies form and evolve in the potential well of dark matter halos, it is expected that the orientations of galaxies are related to those of their host halos, as well as the large-scale environment. Consequently, understanding the relation between the shapes of galaxies and halos is one of the most important parts of galaxy formation and evolution. Meanwhile, the alignment between galaxies and halos can be used to constrain the systematic errors in the weak gravitational lensing measurements \citep{Codis2015, Joa2015, Kilbinger2015, Kirk2015, Fortuna2020}.
In the past decades, numerous studies have characterized various types of alignments among galaxies, halos and the large-scale environment. The large-scale environment is commonly characterized by the tidal field to define the orientations of filaments or sheets in the cosmic web, leading to studies of the alignment of galaxies or halos with respect to their large-scale environment \citep{Zhang2009, Zhang2013, Zhang2015, Tempel2015, Hirv2017, Xia2017, Gane2018, Chen2019, Gane2019,Lee2019, Kraljic2020}. Observationally, the galaxy groups are usually adopted to study the alignment of the shapes of centrals, satellites or galaxy groups with respect to central-satellite position vector \citep{Brain2005, Pere2005, Agu2006, Yang2006, Fal2007, Kang2007, Siverd2009, Agu2010, Hao2011, LiZhigang2013, Chisari2014, Singh2015, Huang2018, Georgiou2019}, as well as the alignment between the major axes of the galaxies and those of their host groups \citep{WangY2008, Nie2010, Huang2016, West2017,Ume2018}. The observational evidence shows that the major axes of central galaxies are well aligned with their host clusters, and the alignment signals are stronger for redder and more luminous galaxies.
For example, \citet{Huang2016} measured the projected alignments between the major axes of the central galaxies and those of their host clusters using three different
galaxy shape measurement methods. The average alignment angle based on isophote shape measurement is about $33$ degrees in the two-dimensional case.
Besides the few observational measurements using galaxy clusters, the vast majority of studies in the literature rely on cosmological hydrodynamical simulations to determine the alignment between galaxies and their host halos \citep{Bett2010, Okabe2018, Okabe2020, Brain2019, Bho2020, Soussana2020, Tang2020, Tenneti2020}. There is general agreement in the literature that the major axes of central galaxies are well aligned with their halos, and that the alignment signals are stronger for galaxies in more massive halos. However, the average alignment angle measured in the simulations spans a wide range from $30^\circ$ to $50^\circ$ in the three-dimensional case \citep[see e.g.,][]{Tenneti2014,Vell2015, Shao2016, Chisari2017}. The wide range of variations in the alignment signals from cosmological simulations may be caused by the adoption of different hydro subgrid physics modeling, different estimators in the quantification of position angle, and different
numerical techniques with or without adaptive mesh refinement \citep{Springel2010}.
The above two methods both have pros and cons. In observation, the orientations of the galaxy clusters determined from the satellite galaxy distribution may not be exactly the same as those of the halos, which would affect the accuracy of the alignment angle. On the other hand, the alignment angle measured in the hydrodynamical simulations strongly depends on the complex baryonic physics in the galaxy formation and evolution models. For example, \citet{Tenneti2017} found that the orientation of the stellar distribution can be affected by the angular momentum of the galactic wind used in the hydrodynamical simulation. In this paper, we propose a different method of measuring the alignment angle by connecting the observed galaxies to the dark matter
halos in the constrained $N$-body simulation of ELUCID \citep{WangHY2014}, which reasonably reproduces the large-scale environment of the observed galaxies.
The initial condition of ELUCID is extracted from the
density field of galaxies in the SDSS DR7 Main Sample \citep{WangHY2014}. The mass and positions of the halos at $z\sim0$ in the ELUCID simulation are consistent with those in the galaxy group catalogs, especially for halos with mass larger than $10^{13}\,h^{-1}{\rm M}_\odot$. Using a novel neighborhood abundance matching method, \citet{Yang2018} built up the connection between galaxies in SDSS DR7 to the subhalos in the corresponding observed region in the ELUCID simulation. Based on their galaxy-subhalo connection, we are able to make a one-to-one comparison between the projected major axes of galaxies and those of the subhalos.
The paper is organized as follows. In Section~\ref{sec_data}, we present the observational data from SDSS DR7 and the subhalo catalog from the ELUCID simulation. In Section~\ref{sec_method}, we describe the novel neighborhood abundance matching method which links galaxies in SDSS DR7 to dark matter subhalos in the ELUCID simulation and present the method to calculate the shapes of dark matter subhalos in simulation and groups in observation. In addition, we describe the methodology to quantify the various alignment signals. In Section~\ref{sec_result}, we present the alignments of galaxies with respect to their host groups in observation and their corresponding subhalos in simulation. In Section~\ref{sec_summary}, we summarize our main results.
\section{Data}\label{sec_data}
\subsection{Galaxies in the SDSS DR7}
Based on the galaxy sample of the New York
University Value-Added Galaxy Catalog \citep[NYU-VAGC;][]{Blanton2005}, constructed from the SDSS DR7 \citep{Abazajian2009}, we collect a total of $639,359$ galaxies in the redshift range of $0.01 \le z \le 0.2$. We use $472,416$ groups identified from these galaxies with the adaptive halo-based group finder as in \citet{Yang2005} and \citet{Yang2007}. Most of the groups have only a single member galaxy, and there are $68,170$ groups with at least two members. We refer the readers to \citet{Yang2007} for details.
Following \citet{Yang2008}, we separate the galaxies into red and blue subsamples according to the $g-r$ color cut in the absolute magnitudes,
\begin{equation}
\centering
g-r = 1.022 - 0.0651x - 0.00311x^2,
\label{eqn:color}
\end{equation}
where $x=M_r - 5\log h + 23.0$ and $M_r$ is the absolute r-band magnitude k+e corrected to $z=0.1$. Since the initial condition of the ELUCID simulation is constructed from the distribution of the galaxies in the continuous Northern Galactic Cap (NGC), in this paper we only select galaxies in the NGC region in the range $99^{\circ} < \alpha < 283^\circ$ and $-7^{\circ}<\delta < 75^{\circ}$, where $\alpha$ and $\delta$ are the right ascension and declination, respectively. This results in a final sample of $396,069$ galaxies in the NGC.
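As an illustration (not part of the original analysis), the color cut of Equation~(\ref{eqn:color}) can be sketched as a short function; the function name and interface are ours:

```python
def is_red(M_r_5logh, g_minus_r):
    """Red/blue split of the color cut: a galaxy is 'red' when its g-r
    color lies above the magnitude-dependent cut (Yang et al. 2008).
    M_r_5logh is M_r - 5 log h, k+e corrected to z = 0.1."""
    x = M_r_5logh + 23.0
    cut = 1.022 - 0.0651 * x - 0.00311 * x**2
    return g_minus_r > cut
```

For $M_r - 5\log h = -23$ the cut sits at $g-r=1.022$, so galaxies redder than that are classified as red.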
\subsection{Subhalos in the ELUCID simulation}
The halo catalog used in this study is from the ELUCID simulation \citep{WangHuiyuan2016}, which is a dark matter only, constrained simulation carried out in the Center for High Performance Computing of Shanghai Jiao Tong University using L-GADGET2 code, a memory-optimized version of GADGET2 \citep{Springel2005}. The simulation evolves $3072^3$ dark matter particles in a periodic box of $500\,h^{-1}{\rm Mpc}$ on a side from redshift $z=100$ to the present epoch. The particle mass and softening length are $3.0875 \times 10^8 \,h^{-1}{\rm M}_\odot$ and $3.5 \,h^{-1}{\rm kpc}$, respectively. The cosmological parameters adopted in the simulation are $\Omega_{\rm m} = 0.258$, $\Omega_{\rm b} = 0.044$, $\Omega_\Lambda = 0.742$, $h =0.72$, $n_s = 0.963$ and $\sigma_8 = 0.796$, where $\Omega_{\rm m}$ is the matter density, $\Omega_{\rm b}$ is the baryon density, $\Omega_\Lambda$ is the dark energy density, $h$ is the normalized Hubble constant today, $n_s$ is the power law index of the initial power spectrum, and $\sigma_8$ is the amplitude of matter fluctuations on $8\,h^{-1}{\rm Mpc}$ scales today.
Dark matter halos are identified using the standard friends-of-friends (FOF) algorithm \citep{Davis1985} with a linking length of $b=0.2$ times the mean particle separation and containing at least $20$ particles. Since the FOF algorithm may accidentally identify two independent structures linked by a particle bridge as one structure, it is necessary to detect substructures inside a larger FOF halo. In this paper, a ${\tt subhalo}$ is defined as a locally overdense, self-bound substructure within a larger parent FOF halo. Using the SUBFIND algorithm designed by \citet{Springel2001}, we decompose a given FOF halo into a set of disjoint self-bound subhalos. The center of a subhalo is defined as the position of its most bound particle, and its velocity is defined as the mean velocity of the particles in the subhalo. Among these subhalos, the most massive one is regarded as the main subhalo. Obviously, compared with FOF halos, self-bound subhalos are more suitable for linking to central or satellite galaxies in observation.
\section{Methods}\label{sec_method}
\subsection{Matching between galaxies and subhalos}
Since the initial condition of the ELUCID simulation is constrained by the mass density field extracted from the galaxy distribution in observation, it is expected that the subhalo distribution at present is tightly correlated with the spatial distribution of galaxies in SDSS DR7.
\citet{Yang2018} proposed a novel neighborhood abundance matching method to link galaxies in SDSS DR7 to dark matter subhalos in the ELUCID simulation, according to the likelihood of the subhalo to be linked to the candidate galaxy \begin{equation}\label{eqn:match} P = M_{\rm sh} \exp \left( - \frac {r_p^2} {2 r_{\rm off}^2} \right) \exp \left( -\frac {\pi^2} {2 v_{\rm off}^2} \right), \end{equation} where $r_p$ and $\pi$ are the separations between the galaxy and the subhalo in the directions perpendicular and parallel to the line of sight, $M_{\rm sh}$ is the mass of the subhalo under consideration, and $r_{\rm off}$ and $v_{\rm off}$ are two free parameters. Note that $r_{\rm off} = \infty$ and $v_{\rm off} = \infty$ correspond to the traditional abundance matching method. Here, the parameters are set to $r_{\rm off} = 5 \,h^{-1}{\rm Mpc}$ and $v_{\rm off} = 1000 ~{\rm km/s}$.
In what follows, we use the galaxy-subhalo pairs obtained by matching separately for central and satellite galaxies: central galaxies are linked to the main subhalos of the host FOF halos and satellite galaxies to the other subhalos. This matching criterion gives a better constraint on the luminosity (stellar mass)-subhalo mass relation, and results in a total of $396,069$ galaxy-subhalo pairs in the matching catalog. Using the galaxy-subhalo pairs, \citet{Yang2018} reproduced the satellite fraction, the conditional luminosity function (conditional stellar mass function), and the biases of the galaxies.
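A minimal sketch of the matching likelihood of Equation~(\ref{eqn:match}) (illustrative only; we assume the line-of-sight separation $\pi$ is expressed in the same velocity units as $v_{\rm off}$):

```python
import math

def match_likelihood(M_sh, r_p, pi_los, r_off=5.0, v_off=1000.0):
    """Likelihood that a subhalo of mass M_sh matches a galaxy at
    transverse separation r_p (same units as r_off, here Mpc/h) and
    line-of-sight separation pi_los (same units as v_off, here km/s,
    an assumption on our part)."""
    return (M_sh
            * math.exp(-r_p**2 / (2.0 * r_off**2))
            * math.exp(-pi_los**2 / (2.0 * v_off**2)))
```

With $r_{\rm off}=v_{\rm off}=\infty$ both Gaussian factors reduce to unity and the likelihood degenerates to ranking by subhalo mass, i.e. traditional abundance matching.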
In this paper, only galaxies in groups with at least two member galaxies are selected to measure the alignment signals. After this selection, there are $43,316$ central galaxies and $118,095$ satellite galaxies.
\subsection{Shape and alignment definition}
The shape of a subhalo containing $N$ dark matter particles is calculated by the simple inertia tensor \begin{equation}\label{eqn:it} I_{\alpha \beta} = m \sum\limits_{i=1}^N x_{i,\alpha} x_{i,\beta}, \end{equation} where $m$ is the particle mass, $\alpha$ and $\beta$ are the inertia tensor indices with values of $1$, $2$ or $3$, and $x_{i,\alpha}$ is the position of the particle $i$ with respect to the center of the subhalo, which is defined as the position of the particle with the minimum potential. The axis lengths $a$, $b$ and $c$ ($a\geq b \geq c$) of the ellipsoidal subhalo are given by the square roots of the eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$ ($\lambda_1 \geq \lambda_2 \geq \lambda_3$) of the inertia tensor: $a=\sqrt{\lambda_1}$, $b=\sqrt{\lambda_2}$ and $c=\sqrt{\lambda_3}$. The corresponding eigenvectors denote the directions of the major, middle and minor axes of the subhalos.
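For concreteness, the shape measurement of Equation~(\ref{eqn:it}) can be sketched as follows (an illustrative implementation, not the code used in the paper):

```python
import numpy as np

def subhalo_shape(pos, m=1.0):
    """Axis lengths (a >= b >= c) and principal axes from the simple
    inertia tensor.  pos is an (N, 3) array of particle positions
    relative to the subhalo centre; m is the particle mass."""
    I = m * pos.T @ pos                       # I_ab = m * sum_i x_ia x_ib
    lam, vec = np.linalg.eigh(I)              # eigenvalues in ascending order
    lam, vec = lam[::-1], vec[:, ::-1]        # reorder: lam_1 >= lam_2 >= lam_3
    a, b, c = np.sqrt(np.maximum(lam, 0.0))   # clip tiny negative round-off
    return (a, b, c), vec                     # vec columns: major, middle, minor
```

For a set of particles strung out along a single axis, the major axis recovered by the eigen-decomposition points along that axis and the other two axis lengths vanish.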
In order to calculate the projected orientation on the sky of the halos, the Cartesian coordinates in the simulation are transformed into the redshift $z$ and sky coordinates $\alpha$ and $\delta$, where $\alpha$ and $\delta$ are the right ascension and declination, respectively. For a subhalo at the location $\boldsymbol x$ with the three-dimensional direction of the major axis $\Delta{\boldsymbol x}$, the projected direction $\theta_{\rm H}$ on the sky can be calculated by \begin{equation}\label{eqn:project} \theta_{\rm H} = \arctan \left( \frac {\Delta \alpha \cos \delta} {\Delta \delta} \right), \end{equation} where $\Delta\alpha$ and $\Delta\delta$ are the right ascension and declination differences between the locations $\boldsymbol x$ and $\Delta \boldsymbol x$.
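This projection can be sketched as below; we use the two-argument arctangent for numerical robustness when $\Delta\delta=0$, which agrees with Equation~(\ref{eqn:project}) up to the branch choice:

```python
import math

def projected_angle(d_alpha, d_delta, delta):
    """Position angle theta_H on the sky for a major-axis offset
    (d_alpha, d_delta) in RA/Dec at declination delta (radians)."""
    return math.atan2(d_alpha * math.cos(delta), d_delta)
```

An offset purely in declination gives $\theta_{\rm H}=0$, while an offset purely in right ascension gives $\theta_{\rm H}=\pi/2$.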
In observation, for each group with more than one member, the group shape is calculated using the projected two-dimensional case of Equation~\ref{eqn:it}, where $N$ is the number of the galaxies in the group, $x_{i,1} = \Delta \alpha \cos \delta$ and $x_{i,2}= \Delta \delta $ are the $i$-th projected coordinates on the sky with the origin at the position of the central galaxy, where $\Delta\alpha$ and $\Delta\delta$ are the right ascension and declination differences between the locations of satellite galaxies and central galaxies. For each galaxy, the orientation angle $\theta_{\rm G}$ of the major axis of the galaxy on the sky is specified by the $25$ mag~arcsec$^{-2}$ isophote in the r-band.
In order to quantify the alignment signal between the orientation of the major axis of a galaxy in observation and its corresponding subhalo in the ELUCID simulation, we calculate the normalized probability distribution of the angle $\theta$ between the two orientations as
\begin{equation}
P(\theta) = N(\theta)/\langle N_{\rm R}(\theta)\rangle,
\end{equation}
where $N(\theta)$ is the number of the galaxy-subhalo or galaxy-group pairs in each $\theta$ bin, and $\langle N_R(\theta)\rangle$ is the average number of such pairs obtained from $100$ random samples, in which the orientations of the galaxies are kept fixed, but the other orientations (subhalos or groups) are randomized. The standard deviation of $P_{\rm R}(\theta) = N_{\rm R}(\theta)/\langle N_R(\theta)\rangle$ calculated from the random samples is used to assess the significance of the deviation of $P(\theta)$ from unity. We note that $P(\theta)=1$ is the case without any alignment. Since the significance is quantified with respect to the null hypothesis, throughout the paper the error bars are plotted on top of the $P(\theta)=1$ line. The angle $\theta$ is constrained in the range of $0^{\circ}\leq \theta \leq 90^{\circ}$: for two parallel orientations, $\theta = 0^{\circ}$, while for two perpendicular orientations, $\theta = 90^{\circ}$. In addition, we calculate the average angle $\langle \theta \rangle$ and $\theta_\sigma$, which is the standard deviation of $\langle \theta_{\rm R} \rangle$ of the $100$ random samples. In the absence of alignment, $\langle \theta \rangle = 45^{\circ}$; however, $\langle \theta \rangle = 45^{\circ}$ does not necessarily imply an isotropic distribution.
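The estimator above and its random-realization errors can be sketched as follows (illustrative; the binning and random seed are our choices):

```python
import numpy as np

def alignment_distribution(theta, bins=6, n_random=100, seed=0):
    """Normalized P(theta) for alignment angles theta (degrees, 0-90),
    with the error on the null hypothesis estimated from n_random
    realizations in which one set of orientations is randomized
    (so the angles become uniform on [0, 90))."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 90.0, bins + 1)
    N, _ = np.histogram(theta, bins=edges)
    NR = np.array([np.histogram(rng.uniform(0.0, 90.0, np.size(theta)),
                                bins=edges)[0] for _ in range(n_random)])
    NR_mean = NR.mean(axis=0)
    P = N / NR_mean
    P_err = (NR / NR_mean).std(axis=0)   # scatter of P_R around unity
    return edges, P, P_err
```

For an isotropic input sample, $P(\theta)$ scatters around unity within the random-realization errors.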
\section{Results}\label{sec_result}
\subsection{Galaxy-group alignment in observation}
\begin{figure}
\includegraphics[width=0.5\textwidth]{gg_cen.eps}
\caption{The normalized probability distribution of the angles between
the major axes of central galaxies and the major axes of their host groups
in observation. The black solid line shows the alignment signal for a
total of $43,316$ central galaxies in groups with at least two member
galaxies. The horizontal line shows an isotropic distribution of
alignment angles, while the error bars are obtained from $100$ random
realizations. The average angle and its error are also indicated.}
\label{fig:gg_cen}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{gg_sat.eps}
\caption{Similar to Figure~\ref{fig:gg_cen}, but for the alignments
between satellite galaxies and their host groups. }
\label{fig:gg_sat}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{gg_cen_member.eps}
\caption{Galaxy-group alignment for central galaxies in groups with
member galaxies $N \geq 2$, $N \geq 4$, and $N \geq 8$,
respectively. }
\label{fig:gg_member}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{gg_rb.eps}
\caption{Galaxy-group alignment for red and blue central galaxies. The
error bars on top of the horizontal line are also shown for red and
blue subsamples, respectively. }
\label{fig:gg_rb}
\end{figure}
In this subsection, we measure the alignment signal between the major axes of the galaxies in the SDSS DR7 group catalog and those of their host groups. For each group with at least two members, we calculate the shape of the group using the projected two-dimensional case of Equation~\ref{eqn:it} based on the angular positions of satellite galaxies with respect to their central galaxies. There are a total of $43,316$ groups with more than one member for $396,069$ galaxies in the continuous region. Figure~\ref{fig:gg_cen} shows the probability distribution $P(\theta)$, with the black line for all centrals. With an average alignment angle of $\langle \theta \rangle = 42\degree86\pm 0\degree13$ for a total of $43,316$ central galaxies, there is a clear trend that the major axes of central galaxies are preferentially aligned parallel to the major axes of their host groups. The error bars of the measurements are shown on top of the horizontal dotted line taken from $100$ realizations, in which the major axes of the groups have been randomized.
In observation, based on the redMapper cluster catalog from SDSS DR8, \citet{Huang2016} also found that the shapes of central galaxies are aligned with the shapes of their parent clusters traced by the satellite location in the clusters at redshifts $0.1<z<0.35$. Using a sample of $65$ distant galaxy clusters from HST observation at redshifts $0.19<z<1.8$, \citet{West2017} reported the similar alignment signals, which are also confirmed in cosmological hydrodynamical simulations \citep{Shao2016,Tenneti2020}.
To study the mass dependence of the alignment signal, the central galaxies in the galaxy-group pairs are then separated into three different halo mass subsamples with $\log (M_{\rm h}/\,h^{-1}{\rm M}_\odot)$ in the ranges of $(11,12)$, $(12,13)$, and $(13,\infty)$. Here for each central galaxy, the halo mass is taken from its matched main subhalo in the ELUCID simulation. The alignment signals for the subsamples are shown using different types of lines with different colors in Figure~\ref{fig:gg_cen}. There is a clear indication that the alignment is stronger for more massive halos. A similar trend is also found for the galaxy stellar mass.
We also study the alignment signals of satellite galaxies with respect to their host groups, as shown in Figure~\ref{fig:gg_sat}. Similar trends of alignment angle distribution, as well as the halo mass dependence, are found for the satellite galaxies. The alignment signals of satellite galaxies with $\langle \theta \rangle = 44\degree39\pm 0\degree07$ are weaker than those of central galaxies with $\langle \theta \rangle = 42\degree86 \pm 0\degree13$. In the following analysis, we mainly focus on the central galaxies.
To investigate the effect of group richness, all groups are separated into different richness subsamples, as shown in Figure~\ref{fig:gg_member}. Central galaxies in richer groups have a higher chance of being aligned with their host groups, confirming the result of \citet{Nie2010} based on the galaxy cluster catalog from SDSS DR6.
Finally, we show the dependence of the alignment signal on the galaxy color in Figure~\ref{fig:gg_rb}, where we separate the galaxies into red and blue subsamples according to Equation~(\ref{eqn:color}). The alignment signal for the red centrals with $\langle \theta \rangle = 41\degree93\pm 0\degree16$ is much stronger than that of the blue centrals with $\langle \theta \rangle = 44\degree10\pm 0\degree18$, similar to the finding of \citet{Agu2010}.
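The randomized error bars quoted throughout this subsection can be reproduced with a simple Monte Carlo null test (a sketch under the assumption of fully isotropic position angles, not the exact randomization used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_alignment_angle(theta):
    """Average alignment angle <theta> for angles folded into [0, 90] deg."""
    return float(np.mean(theta))

def randomized_error(n_gal, n_real=100):
    """Scatter of <theta> over realizations with randomized major axes,
    mimicking the null-hypothesis error bars described in the text."""
    means = [mean_alignment_angle(rng.uniform(0.0, 90.0, n_gal))
             for _ in range(n_real)]
    return float(np.std(means))
```

For an isotropic distribution, $\langle \theta \rangle \to 45$ degrees and the scatter of the mean shrinks roughly as $1/\sqrt{N}$; for $N \sim 43,000$ this gives a scatter of order $0.1$ degrees, consistent with the size of the quoted errors.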
\subsection{Alignment between galaxies in observation and subhalos in simulation}
\begin{figure}
\includegraphics[width=0.5\textwidth]{spin_axis_3d.eps}
\caption{The probability distribution of the angle $\cos\theta$
between the angular momentum and the major (red),
middle (green), and minor (blue) axes of the subhalos in
the ELUCID simulation in three-dimensional case.}
\label{fig:ss_3d}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{spin_axis_2d.eps}
\caption{The probability distribution of the angle $\theta$
between the projected angular momentum and the projected major (red),
middle (green), and minor (blue) axes in two-dimensional case.}
\label{fig:ss_2d}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{sa.eps}
\caption{The average alignment angle $\langle \theta \rangle$
as a function of the inner particle fraction of the subhalos in the
ELUCID simulation for spin-major (red), spin-middle (green), and
spin-minor (blue) alignments.}
\label{fig:sa}
\end{figure}
In this section, we investigate the alignment signal between the major axes of the SDSS galaxies and those of the matching subhalos in the ELUCID simulation \citep{Yang2018}. As the satellite galaxy-group alignment is very weak, in what follows we only focus on the central-main subhalo pairs.
\subsubsection{spin-shape alignment in simulation}
In order to check the reliability of the shape calculation for subhalos in simulation, we first examine the spin-shape alignment within the subhalos themselves in the ELUCID simulation. The subhalo spin vector is defined as \begin{equation}\label{eqn:spin} \boldsymbol J = m \sum \limits_{i=1}^N \boldsymbol r_i \times \boldsymbol v_i, \end{equation} where $m$ is the dark matter particle mass, $\boldsymbol r_i$ is the position vector of the $i$-th particle with respect to the subhalo center of mass, and $\boldsymbol v_i$ is the velocity of the $i$-th particle relative to the bulk velocity of the subhalo.
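A minimal sketch of this spin definition, assuming equal-mass particles and using the center of mass and mean particle velocity as the reference frame:

```python
import numpy as np

def subhalo_spin(pos, vel, m=1.0):
    """Spin vector J = m * sum_i r_i x v_i, as in the equation above.

    pos, vel: (N, 3) arrays of particle positions and velocities, which
    are first recentred on the centre of mass and on the bulk velocity
    (here approximated by simple unweighted means, assuming equal-mass
    particles as in an N-body simulation).
    """
    r = pos - pos.mean(axis=0)   # offsets from the centre of mass
    v = vel - vel.mean(axis=0)   # velocities relative to the bulk motion
    return m * np.cross(r, v).sum(axis=0)
```

For two particles on a circular orbit in the $xy$-plane, this returns a spin vector along the $z$-axis, as expected for planar rotation.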
Previous studies have mainly focused on the spin-shape alignment in the three-dimensional case. For comparison, Figure~\ref{fig:ss_3d} shows the probability distribution of $\cos\theta$, where $\theta$ is the three-dimensional alignment angle between the subhalo spin vector and the principal axes of the subhalo shape. Note that two parallel orientations correspond to $\cos\theta = 1$, while two perpendicular orientations correspond to $\cos\theta =0$. As shown in Figure~\ref{fig:ss_3d}, the spin direction is preferentially parallel to the subhalo minor axis, in good agreement with previous results \citep{Bett2007, Zhang2009, Chisari2017, Gane2018}.
The spin direction of the subhalo in the three-dimensional case is calculated using Equation~\ref{eqn:spin} and then projected on the sky using Equation~\ref{eqn:project}, in the same way as the projected shape orientations \citep{Zhang2013, Zhang2015}. Figure~\ref{fig:ss_2d} shows the alignment signals of Figure~\ref{fig:ss_3d} in the projected two-dimensional case. The spin direction has a strong tendency to be parallel to the minor axis, with $\langle \theta \rangle = 36\degree99\pm 0\degree06$, and a weaker tendency to be perpendicular to the major axis, with $\langle \theta \rangle = 50\degree94\pm 0\degree06$.
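Since the projection equation is defined elsewhere in the paper, the sketch below assumes the simplest case of a fixed Cartesian line of sight; the hypothetical helpers project a 3D axis onto the sky plane and fold the angle between two projected axes into [0, 90] degrees:

```python
import numpy as np

def project_to_sky(vec, los=np.array([0.0, 0.0, 1.0])):
    """Project a 3D vector onto the plane perpendicular to the line of
    sight. A simplified stand-in for the projection used in the text,
    assuming a fixed Cartesian line of sight rather than the true
    per-object radial direction.
    """
    los = los / np.linalg.norm(los)
    return vec - np.dot(vec, los) * los

def projected_angle(u, v):
    """Angle between two projected axes, folded into [0, 90] degrees
    (axes have no preferred sense, hence the absolute value)."""
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))
```

The `abs` in `projected_angle` encodes the fact that a major axis and its reversal describe the same orientation.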
The spin and shape orientations are known to vary with distance from the center \citep{Bailin2005, Bett2010}. For each subhalo, we therefore also calculate the spin and shape orientations for the inner subhalo regions consisting of the innermost $1/8$, $1/4$, and $1/2$ of the subhalo particles. Figure~\ref{fig:sa} shows the average angle $\langle \theta \rangle$ of the spin-shape alignment in the two-dimensional case as a function of the inner particle fraction of the subhalos in the ELUCID simulation. As indicated in Figure~\ref{fig:sa}, the spin-shape alignment depends on the radial extent of the subhalo: both the spin-minor and spin-major alignments are weaker in the inner part than for the entire subhalo. In the following analysis, we mainly use the subhalo shapes calculated from all the subhalo particles.
\subsubsection{galaxy-subhalo alignment}
\citet{WangHY2012} generated the initial conditions of the constrained ELUCID simulation from the galaxy density field, using the groups with halo masses larger than $10^{12}\,h^{-1}{\rm M}_\odot$. \citet{Tweed2017} compared the halos in the reconstructed and original simulations and found that the reconstruction techniques are reliable for the most massive halos with masses larger than $5\times10^{13} \,h^{-1}{\rm M}_\odot$, for which more than half of the halo particles in the original simulation are matched in the reconstructed one.
In the neighborhood abundance matching method developed by \citet{Yang2018}, for each galaxy, the likelihood of the corresponding subhalo calculated by Equation~\ref{eqn:match} is proportional to the subhalo mass in a certain small volume. In this method, \citet{Yang2018} started from the most massive galaxy to search for its corresponding subhalo. Therefore, the most massive subhalo in the searching volume is mostly linked to the most massive galaxy, leading to the fact that matching pairs of central galaxies are more reliable than those of satellite galaxies. In the following analysis, we mainly focus on the alignment signals of central galaxies. In total, there are $43,316$ central galaxies in groups with at least two members.
\begin{figure}
\includegraphics[width=0.5\textwidth]{gh_cen.eps}
\caption{The probability distribution of the angles between the major
axes of the galaxies in observation and the projected major axes of
their embedding main subhalos from the ELUCID simulation.
The black solid
line shows the alignment signal for a total of $43,316$ central galaxies
in the galaxy-halo
pairs catalog. The horizontal line shows an isotropic distribution of
alignment angles, while the error bars are obtained from $100$ random
realizations for total galaxies. The average angle and its error are
also indicated.}
\label{fig:gh}
\end{figure}
Figure~\ref{fig:gh} shows the probability distribution of the angles between the major axes of the central galaxies from the SDSS DR7 and the projected major axes of their host subhalos in the ELUCID simulation. With an average alignment angle of $\langle \theta \rangle = 44\degree55 \pm 0\degree13 $ for $43,316$ central galaxies, there is a clear tendency that the major axes of central galaxies are preferentially parallel to those of their host subhalos. However, the alignment signals are weaker than those of the galaxy-group alignment. Here the shapes of the subhalos in simulation are computed from the distribution of all the dark matter particles in the subhalos. We have also checked the alignment signals using only the innermost $1/8$ of the particles in the subhalos, and the alignment between the galaxies and the inner regions of the subhalos remains weak.
In addition, we have checked the alignment signals using the galaxy-subhalo pairs with smaller separations $r_p$ and $\pi$ in Equation~\ref{eqn:match}. The alignment signals are slightly stronger for the galaxy-subhalo pairs with smaller separations. For the $50\%$ cleanest galaxy-subhalo pairs, with $r_p < 2.8 \,h^{-1}{\rm Mpc}$ and $\pi < 3.2 \,h^{-1}{\rm Mpc}$, the average angle of the galaxy-subhalo alignment is $44\degree15 \pm 0\degree25$.
We also study the mass dependence of the alignment signals using subsamples separated by the embedding subhalo mass, with $\log (M_{\rm h} /\,h^{-1}{\rm M}_\odot)$ in the ranges of $(11,12)$, $(12,13)$, and $(13,\infty)$. In Figure~\ref{fig:gh}, different line types with blue, green, and red colors show the alignment signals for galaxies in the different subhalo mass ranges. There is a clear indication that galaxies in more massive subhalos have stronger alignment signals. Based on previous studies \citep{WangHY2012, Tweed2017, Yang2018}, the cross-identification method is more reliable for more massive subhalos. In addition, the shape measurements are more accurate for subhalos containing more dark matter particles. The results for galaxies in subhalos with $\log (M_{\rm h}/ \,h^{-1}{\rm M}_\odot) \geq 13 $ are therefore more reliable. As shown by the red line in Figure~\ref{fig:gh}, there is a clear and significant alignment signal with $\langle \theta \rangle = 43\degree66\pm 0\degree30$ for galaxies in subhalos of $\log (M_{\rm h}/ \,h^{-1}{\rm M}_\odot) \geq 13$.
\subsection{The impact of survey selection effects on the alignment signals}
\begin{figure}
\includegraphics[width=0.5\textwidth]{sub_fof_dm.eps}
\caption {The probability distribution of the angles between the projected major axes of the main subhalos and the projected major axes of FOF halos in simulation.
The black solid line shows the result for a total of $43,316$ FOF halos. Different colors correspond to the subsamples with different main subhalo mass.}\label{fig:sub_fof_dm}
\end{figure}
There are a total of $43,316$ FOF halos in the ELUCID simulation corresponding to the observed groups with at least two members.
In this section, based on the subhalo and FOF halo catalogs from the ELUCID simulation, we investigate three types of alignment signals: the alignment between the main subhalo shapes and the shapes of their host FOF halos traced by all dark matter particles, the alignment between the main subhalo shapes and the satellite subhalo distributions, and the alignment between the main subhalo shapes and the SDSS-matched satellite subhalo distributions.
\subsubsection{Signals for dark matter particles }
Based on the subhalo and FOF halo catalogs from the ELUCID simulation, we first calculate the alignments between the main subhalos and their parent FOF halos,
where the shapes of the FOF halos are calculated by the position distribution of all the dark matter particles in the FOF halos. The sample is separated into four subsamples according to
the main subhalo mass, resulting in $15,288$, $19,379$, $5,765$, and $471$ subhalos with the mass $\log (M_{\rm h} /\,h^{-1}{\rm M}_\odot)$ in the ranges of $(11,12)$, $(12,13)$, $(13,14)$ and $(14,\infty)$.
Figure~\ref{fig:sub_fof_dm} shows the alignments of the major axes of the main subhalos with respect to the FOF halo shapes traced by dark matter particles. With an average alignment angle of $\langle \theta \rangle = 18\degree7\pm 0\degree13$, the major axes of the main subhalos in the dark-matter-only simulation are strongly correlated with the major axes of their parent FOF halos. In addition, there is a mass dependence: the alignment signals are stronger for more massive halos.
\begin{figure}
\includegraphics[width=0.5\textwidth]{sub_fof.eps}
\caption{The probability distribution of the angles between the projected major axes of the main subhalos and the distribution of all the subhalos in FOF halos in simulation. The black solid line shows the result for a total of $43,316$ halos. Different colors correspond to the subsamples with different halo mass.}\label{fig:sub_fof}
\end{figure}
\subsubsection{Signals for all the subhalos }
To fully understand the alignment signals in observation and the ELUCID simulation, we then calculate the alignments between the main subhalos and their subhalo systems, where the
shapes of the subhalo systems are calculated by the position distribution of all the subhalos in the FOF halos.
Figure~\ref{fig:sub_fof} shows the alignments of the major axes of the main subhalos with respect to the distribution of all the subhalos in the FOF halos. There is a mass dependence that the alignment signals are stronger for more massive halos. For halo mass larger than $10^{14}\,h^{-1}{\rm M}_\odot$, the average alignment angle is $\langle \theta \rangle = 11\degree55\pm 1\degree05$.
Such a mass dependence of the alignment signal is also found in hydrodynamical simulations \citep{Vell2015, Tenneti2020}. From the Horizon-AGN simulation, \citet{Okabe2020} reported that the major axes of central galaxies are aligned with their subhalo systems with an average angle of $\sim 20$ degrees in the projected plane for $40$ cluster-sized halos with masses larger than $5\times 10^{13}\,h^{-1}{\rm M}_\odot$. As shown in Figure~\ref{fig:sub_fof}, the two-dimensional average alignment angle is $\langle \theta \rangle = 17\degree70\pm 0\degree32$ for halo masses of $ 13\leq \log (M_{\rm h}/\,h^{-1}{\rm M}_\odot)\leq 14$, which is in agreement with the results of \citet{Okabe2020}. This agreement is expected, given that the galaxy-halo alignment in hydrodynamical simulations is determined, to first order, by the alignment between the inner dark matter distribution and the entire dark matter distribution in the FOF halos. \citet{Vell2015} showed that, for stars and dark matter enclosed in spheres of the same radius, the orientation of the stellar distribution follows that of the dark matter (see the right panel of their Figure~$8$), while the dark matter itself changes orientation with increasing radius, resulting in the misalignment between galaxies and halos in hydrodynamical simulations.
\begin{figure}
\includegraphics[width=0.5\textwidth]{sh_cen.eps}
\caption{The probability distribution of the angles between the projected major axes of the main subhalos and their hosts calculated using the subhalos linked with satellites
in the groups from SDSS DR7.}\label{fig:sh_cen}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{sh_member.eps}
\caption{Similar to Figure~\ref{fig:sh_cen}, but for host system richness $N \geq 2$, $N \geq 4$, $N \geq 6$, and $N \geq 8$, respectively. }\label{fig:sh_member}
\end{figure}
\subsubsection{Signal for the observed subhalos}
In simulation, the alignments between main subhalos and their host halos are generally much stronger than the alignments between galaxies and their host groups in observation, which may be due to the fact that the satellite subhalos resolved in N-body or hydrodynamical simulations are much more complete than those in observation. To explore the impact of the survey selection effects, we use only the subhalos that are associated with the satellite galaxies in the SDSS observation to measure the shape and orientation of the subhalo systems.
In observation, the shape of a group is calculated from the distribution of the satellites in the group. For consistency, the shape of the subhalo system in simulation is calculated using the positions of the subhalos that are linked with the satellites in the observed group, which ensures that the shape calculation of the subhalo system in simulation uses the same number $N$ in Equation~\ref{eqn:it} as that of the group in observation.
Figure~\ref{fig:sh_cen} shows the distribution of the angles between the projected major axes of
the main subhalos and their subhalo systems, which are calculated only using the subhalos linked with
satellites in the observed group.
Similar to the galaxy-group alignments, there is a clear indication that the major axes of the main subhalos are preferentially aligned with the major axes of their subhalo systems in simulation. The alignments between the main subhalos and the SDSS-matched satellite subhalo systems in simulation, with $\langle \theta \rangle = 42\degree08\pm 0\degree13$, are slightly stronger than the galaxy-group alignments in observation, with $\langle \theta \rangle = 42\degree86\pm 0\degree13$. As shown by the red lines in Figure~\ref{fig:sh_cen}, there is a clearly significant alignment signal with $\langle \theta \rangle = 37\degree91\pm 0\degree30$ for $6,236$ halos with masses larger than $10^{13}\,h^{-1}{\rm M}_\odot$, again somewhat stronger than the signals obtained from the observations.
We also investigate the dependence of the alignment signals on the richness. The host systems are separated into different subsamples with richness of at least $2$, $4$, $6$, and $8$. Figure~\ref{fig:sh_member} shows the alignment signal as a function of the richness. Similar to the tendency of the galaxy-group alignment, the alignment signals are stronger in richer systems. For host systems with richness $N \geq 8$, there is a clearly significant alignment with $\langle \theta \rangle = 39\degree40\pm 0\degree51$, in good agreement with the galaxy-group alignment of $\langle \theta \rangle = 39\degree98\pm 0\degree51$ for $N \geq 8$.
These results demonstrate that using only a small fraction of galaxies to trace the shape of the host halo significantly reduces the alignment signals between the orientations of the main subhalo and the FOF host halo. Given that the ELUCID simulation can only faithfully reproduce the most massive halos, this results in a further reduction of the alignment signals.
Taking the results shown in Figure~\ref{fig:sh_cen} as a benchmark for the central (main subhalo)-group shape (as traced by subhalos) alignment in the absence of baryonic effects in a dark-matter-only simulation, the lower alignment signals shown in Figure~\ref{fig:gg_cen} highlight the importance of taking baryonic effects on the intrinsic alignment signals into account, as discussed in the works of \citet{Tenneti2017} and \citet{Soussana2020}.
\section{Summary}\label{sec_summary}
In this paper, we have studied the orientation of galaxies relative to their host groups in observation and their corresponding subhalos in the ELUCID simulation.
Observationally, from the group catalog, we select $43,316$ groups with at least two member galaxies. The orientations of the galaxies are defined by the position angles of the major axes specified by the $25$ mag~arcsec$^{-2}$ isophote in the $r$-band. The shapes of the groups are characterized by the inertia tensor of the member galaxies in the groups. Based on the $43,316$ groups from SDSS DR7, we have investigated the alignment between the major axes of galaxies and their host groups. We have found that the major axes of central and satellite galaxies have a tendency to be aligned with the major axes of their host groups. The galaxy-group alignment signals of satellite galaxies are weaker than those of central galaxies. There is a mass and richness dependence: the alignment signals are stronger for galaxies in massive halos and in richer groups. In addition, the galaxy-group alignment is found to depend on galaxy color. Red centrals show a stronger alignment than blue centrals.
For $43,316$ central galaxies in groups with at least two member galaxies, we have matched $43,316$ main subhalos from the ELUCID simulation using a novel neighborhood abundance matching method \citep{Yang2018}. From the ELUCID simulation, we have calculated the shapes of the halos from the inertia tensor of the thousands of dark matter particles within the halos. The shapes of the halos are then projected on the sky using Equation~\ref{eqn:project}, in order to compare with the shapes of the galaxies in observation. Using the $43,316$ main subhalos matched to central galaxies in observation, we have examined the alignment between the major axes of galaxies and those of their corresponding subhalos. We find that central galaxies are preferentially parallel to the major axes of their corresponding subhalos. Galaxies in more massive subhalos have stronger galaxy-subhalo alignment signals.
For $43,316$ main subhalos matched to central galaxies in observation, we have calculated the alignments between the main subhalos and their host systems in simulation. The shapes of the host systems in simulation are calculated using the positions of the subhalos matched to galaxies in observation. Overall, the alignments between main subhalos and the SDSS-matched subhalo systems in simulation are slightly stronger than the galaxy-group alignments in observation. Similar to the galaxy-group alignments in observation, the projected major axes of the main subhalos are aligned with the major axes of their subhalo systems in simulation.
Overall, the major axes of central galaxies, groups, and halos are preferentially parallel to each other. The alignment signals are stronger for galaxies in more massive halos. In particular, for the $6,236$ central galaxies with corresponding halo masses larger than $10^{13}\,h^{-1}{\rm M}_\odot$, there are clearly significant alignment signals with average angles of $\langle \theta \rangle = 40\degree70\pm 0\degree30$, $\langle \theta \rangle = 43\degree66\pm 0\degree30$, and $\langle \theta \rangle = 37\degree91\pm 0\degree30$ for the galaxy-group, galaxy-subhalo, and main subhalo-subhalo system alignments, respectively.
In addition, we have examined the spin-shape alignments within the subhalos themselves in the ELUCID simulation. The spin vectors of the subhalos are calculated by the angular momenta of the subhalos. The major (minor) axes of the subhalos are preferentially perpendicular (parallel) to the directions of angular momenta, which is in good agreement with the previous studies \citep{Bett2007, Zhang2009, Chisari2017, Gane2018}. For each subhalo, we also calculate the spin-shape alignment within the inner regions consisting of the inner $1/8$, $1/4$, and $1/2$ of the subhalo particles. The spin-shape alignment signals are found to be weaker in the inner part of the subhalos.
\section*{Acknowledgements}
We thank the anonymous referee for the helpful comments that significantly improved the presentation of this paper.
This work is supported by the National Natural Science Foundation of China
(Nos. 11833005, 11890692, 11621303), 111 Project No. B20019, and
Shanghai Natural Science Foundation grant No. 15ZR1446700.
We also thank the support of the Key Laboratory for Particle
Physics, Astrophysics and Cosmology, Ministry of Education.
This work is also supported by the High Performance Computing Resource
in the Core Facility for Advanced Research Computing at Shanghai
Astronomical Observatory.
Funding for the Sloan Digital Sky Survey IV has been provided by the
Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
Science, and the Participating Institutions. SDSS acknowledges support
and resources from the Center for High-Performance Computing at the
University of Utah. The SDSS web site is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the
French Participation Group, Harvard-Smithsonian Center for
Astrophysics, Instituto de Astrof{\'i}sica de Canarias, The Johns
Hopkins University, Kavli Institute for the Physics and Mathematics of
the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National
Laboratory, Leibniz Institut f{\"u}r Astrophysik Potsdam (AIP),
Max-Planck-Institut f{\"u}r Astronomie (MPIA Heidelberg),
Max-Planck-Institut f{\"u}r Astrophysik (MPA Garching),
Max-Planck-Institut f{\"u}r Extraterrestrische Physik (MPE), National
Astronomical Observatories of China, New Mexico State University, New
York University, University of Notre Dame, Observat{\'o}rio Nacional /
MCTI, The Ohio State University, Pennsylvania State University,
Shanghai Astronomical Observatory, United Kingdom Participation Group,
Universidad Nacional Aut{\'o}noma de M{\'e}xico, University of
Arizona, University of Colorado Boulder, University of Oxford,
University of Portsmouth, University of Utah, University of Virginia,
University of Washington, University of Wisconsin, Vanderbilt
University, and Yale University.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\begin{document}
\title{Generating Synthetic Handwritten Historical Documents With OCR Constrained GANs}
\begin{comment}
\author{Anonymous Author\inst{1} \and
Anonymous Author\inst{1} \and
Anonymous Author\inst{1} \and
Anonymous Author\inst{1} \and
Anonymous Author\inst{1, 2}}
\authorrunning{Author et al.}
\institute{\textit{Anonymous Group (AG)} \\
Anonymous University, Anonymousstreet 1A, Anonymous City, Anonymous Country, \\
\email{\{firstname.lastname\}@AnonymousUniversity.abc} \and
\textit{Anonymous Group 2 (AG)} \\
Anonymous University 2, Anonymousstreet 2A, Anonymous City, Anonymous Country}
\end{comment}
\author{Lars~V\"ogtlin\thanks{Both authors contributed equally to this work.} \and
Manuel~Drazyk\textsuperscript{\thefootnote} \and
Vinaychandran~Pondenkandath \and
Michele~Alberti \and
Rolf~Ingold}
\authorrunning{V\"ogtlin et al.}
\institute{\textit{Document Image and Voice Analysis Group (DIVA)} \\
University of Fribourg, Switzerland \\
\email{\{firstname.lastname\}@unifr.ch}}
\maketitle
\input{section/acronym}
\input{section/abstract}
\input{section/introduction}
\input{section/dataset}
\input{section/method}
\input{section/experimental_setup}
\input{section/results}
\input{section/conclusion}
\input{section/acknowledgment}
\bibliographystyle{splncs04}
\section*{Acknowledgment}
The work presented in this paper has been partially supported by the HisDoc III project funded by the Swiss National Science Foundation with the grant number 205120\_169618.
A big thanks to our co-workers Paul Maergner and Linda Studer for their support and advice.
\section{Conclusion}
\begin{comment}
In this paper, we present a framework to generate synthetic historical documents in a two-step process.
First, we produce a template document with user-specified content and structure; this document can be customized in several different ways.
Second, these documents are transformed with the help of a deep neural network into synthetic historical documents.
With the introduction of two \acf{TR} networks, we show that we can constrain the CycleGAN transformation to produce correct and readable historical-looking text.
We show that the source text and layout are preserved through the transformation from the source domain document to the historical style with several examples.
This makes it possible for us to reuse the ground truth from the source document for the synthetic historical document.
This technique allows us to generate exact ground truth for any style of a historical document.
To evaluate our framework, we perform quantitative experiments on the Saint Gall dataset as well as different visual inspections.
With these evaluations, we show that our method is superior compared to other pre-training methods.
However, there remain some limitation of our approach like the quality of decorations; high memory usage because of high-resolution images; expensive retraining and fine-tuning on new styles; some mismatch of the ground truth and the historical document caused by wrong letter mapping; dependency of the layout with the source document.
To tackle the issues with the letter mapping and decoration, we plan to increase the weight of the reading loss $\lambda_{\text{read}}$ during training after a certain amount of epochs to raise the textual awareness of the network.
Additionally, we aim to decouple the style definition from the source document to increase the variability of our method.
We also aim to perform a more rigorous evaluation of the synthetic documents with additional quantitative testing on more challenging datasets.
Finally, we intend to evaluate the ground truth available due to our approach for other tasks like text line segmentation~\cite{alberti2019labeling} or layout analysis~\cite{pondenkandathExploitingStateoftheArtDeep2017}.
\end{comment}
We presented a two-step framework for generating synthetic historical images that appear realistic.
The two steps are (1) creating electronic user-defined datasets (e.g., with LaTeX) for which the text content is known, and (2) using an improved CycleGAN-based deep learning model to learn the mapping from these datasets to a target (real) historical dataset.
In contrast to previous work in the field, our approach leverages two \acf{TR} networks to further constrain the learning process to produce images from which the text can still be read.
The outcome of the process is a model capable of synthesizing a user-specified template image into historical-looking images.
The content is known, i.e., we have the perfect ground truth for all the synthetic data we generate.
These synthetic images --- which come with a \ac{OCR} ground truth --- can then be used to pre-train models for downstream tasks.
We measured the performances of a standard deep learning model using images created with our approach as well as other existing real historical datasets.
We show that our approach consistently outperforms the baselines through a robust set of benchmarks, thus becoming a valid alternative as a source dataset for transfer learning.
This work extends the already considerable body of work in the field of synthetic document generation.
It distinguishes itself by providing ground truth together with high-quality synthetic historical images.
Finally, the images generated with our method are still distinguishable from genuine ones due to small imperfections.
We therefore envisage that future work will improve upon our open-source implementation.
\section{Datasets}\label{sec:datasets}
In this work, we use three datasets: a user-specified template document dataset (the source domain dataset; see Section~\ref{sec:source_dataset}); a dataset of real unlabeled historical documents (the target domain dataset) whose style we want to learn in the transformation function; and a dataset of real labeled historical documents (the evaluation dataset) with transcription ground truth, which we use to evaluate our method.
\subsection{Source Domain Dataset}\label{sec:source_dataset}
We create a collection of template documents with user-specified content and structure, following Pondenkandath et al.~\cite{pondenkandathHistoricalDocumentSynthesis2019a}.
Our template document images are generated based on the specifications from \LaTeX~files; they define the layout, font, size, and content (see Figure~\ref{fig:intro_fig}).
As text, we use the \textit{Bellum Gallicum}~\cite{edwards1917caesar} in a one- or two-column layout.
Additionally, we populated each document with different decorative starting letters.
The advantage of this technique is that we have very precise ground truth, namely the transcription of the document and the exact positions of the words on the page.
This dataset contains 455 document images with a resolution of $2754 \times 3564$.
\subsection{Target Domain Dataset}\label{sec:target_dataset}
The target domain dataset refers to the collection of historical documents whose style we aim to learn in the transformation function.
To create this dataset, we use the historical documents present in the \ac{HBA} 1.0 dataset~\cite{mehriHBAPixelbasedAnnotated2017}.
The HBA dataset comprises 11 books (5 manuscripts and 6 printed books), containing 4436 scanned historical document images.
These books were published between the 13th and 19th centuries and are written in different scripts and languages.
We use one book of this dataset; the handwritten Latin book ``Plutarchus, Vitae illustrium virorum''.
This book contains 730 colored pages with a resolution of $6158 \times 4267$ (see Figure~\ref{fig:dataset_hba}) from which we filtered out 120 pages (blank, binding, and title pages), leaving us with 600 pages that contain only text.
To validate the best model for our downstream evaluation task, we hand-labeled 350 individual word crops from this book.
\subsection{Evaluation Dataset}\label{sec:eval_dataset}
As part of the evaluation process, we use two different datasets.
Our evaluation protocol involves pre-training a \ac{HTR} model using synthetic data generated using our method, and then evaluating it in a fine-tuning setting on the St. Gall dataset~\cite{fischer2011TranscriptionAlignmentLatin} (see Figure~\ref{fig:dataset_sg}).
The Saint Gall dataset includes 40 pages of labeled historical handwritten manuscripts containing 11'597 words and 4'890 word labels.
Each image has a resolution of $3328 \times 4992$ at a quality of 300 dpi.
To compare our synthetic data pre-training against pre-training on a real handwritten dataset, we pre-train an \ac{HTR} model on the IAM Handwriting Database~\cite{marti2002iam} (see Figure~\ref{fig:dataset_iam}).
This \ac{HTR} model (pre-trained on the IAM Handwriting Database) is then evaluated similarly in a fine-tuning setting on the St. Gall dataset.
The IAM Handwriting Database contains 1'539 handwritten scanned pages with 115'320 words and 6'625 unique words.
The word images are normalized and binarized (black and white).
\section{Experimental Setup}
\subsubsection{Model Architecture}
To achieve the goal of learning a transformation from source domain $X$ to target domain $Y$ using unpaired collections of images, we use an architecture based on the CycleGAN~\cite{zhuUnpairedImagetoimageTranslation2017} framework.
The generators $G$ and $F$ are each 24-layer deep \ac{CNN} architectures with $11.3$ million parameters.
The discriminators $D_x$ and $D_y$ are based on the PatchGAN architecture~\cite{isolaImagetoimageTranslationConditional2017}, and have 5 layers and $2.7$ million parameters each.
Our \acf{TR} networks $T$ and $T^\prime$ are based on the winning \ac{HTR} model from the ICFHR2018 competition~\cite{straussICFHR2018CompetitionAutomated2018}.
Both these networks contain 10 convolutional and batch normalization layers followed by 2 bi-directional \ac{LSTM} layers for a total of $8.3$ million parameters.
For all architectures, we apply the preprocessing steps (e.g., resizing) suggested in their respective publications.
The data is min-max normalized.
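As a sketch of this normalization step (our own illustration, not the exact implementation), min-max scaling linearly maps each patch to the $[0, 1]$ range:

```python
import numpy as np

def min_max_normalize(patch: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale pixel values of a patch linearly to the [0, 1] range."""
    lo, hi = patch.min(), patch.max()
    return (patch - lo) / (hi - lo + eps)  # eps guards against constant patches
```
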
\subsubsection{Task}
The first step in our two-stage method is to create the source domain dataset images as described in Section~\ref{sec:source_dataset}.
The structure and content of these documents are specified using \LaTeX.
In the second step, we use the source domain dataset files along with a collection of unlabeled historical document images (see Section~\ref{sec:target_dataset}) to train our CycleGAN and \ac{TR} networks.
In the training process, we learn a mapping function $g$ that transforms source domain documents to the target domain as well as a mapping function $f$, which works in the other direction.
The \ac{TR} networks are trained simultaneously to recover the user-specified content from $g(x)$ and $f(g(x))$.
After completing training, we use the generator $G$ to transform document images from the source domain to the target domain while preserving content and structure.
\subsubsection{Pre-processing}
Due to GPU memory constraints, we train our models using image patches of size $256 \times 256$.
These image patches are randomly cropped from the document images and fed into the CycleGAN architecture.
The \ac{TR} networks $T$ and $T^\prime$ receive individual words cropped ($128\times32$) from $g(x)$ and $f(g(x))$ respectively.
Additionally, we add Gaussian noise to $g(x)$ as described in Section~\ref{sec:model_architecture}.
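The random patch cropping can be sketched as follows (a minimal illustration of our own; the actual pipeline may differ in details such as padding or additional augmentation):

```python
import numpy as np

def random_patch(image: np.ndarray, size: int = 256, rng=None) -> np.ndarray:
    """Crop a random size x size patch from an H x W (x C) image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return image[top:top + size, left:left + size]
```
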
\subsubsection{Training Procedure}
We train the CycleGAN and \ac{TR} components of our system simultaneously.
The models are trained for 200 epochs using the Adam optimizer~\cite{kingma2017AdamMethodStochastic} with a learning rate of $2\times 10^{-4}$ and a linear decay starting at 100 epochs.
The optimizer uses a weight decay of $5\times10^{-5}$ and beta values of $0.5$ and $0.999$ for both the generators and discriminators.
We use a batch size of 1 to accommodate the varying number of words per patch fed to $T$ and $T^\prime$.
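The learning-rate schedule described above can be sketched as a simple function consistent with the stated hyperparameters (constant for the first 100 epochs, then linear decay to zero at epoch 200):

```python
def linear_decay_lr(epoch: int, base_lr: float = 2e-4,
                    total_epochs: int = 200, decay_start: int = 100) -> float:
    """Constant learning rate for the first decay_start epochs,
    then linear decay to zero at total_epochs."""
    if epoch < decay_start:
        return base_lr
    frac = (epoch - decay_start) / (total_epochs - decay_start)
    return base_lr * (1.0 - frac)
```
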
\subsubsection{Evaluation Procedure}\label{sec:experimental_setup_evaluation_procedure}
We evaluate the quality of the synthetic historical documents produced with our method qualitatively and quantitatively.
We first evaluate the produced synthetic historical documents qualitatively with a visual inspection, highlighting the successful transformations as well as the key limitations of the produced synthetic documents.
For the quantitative evaluation, we use synthetic historical documents produced with our method in a pre-training setting.
We generate 70'000 synthetic words in the historical style of the target domain dataset and use these words to train a new \ac{TR} network called $\mathcal{R}_{\text{syn}}$.
We then fine-tune $\mathcal{R}_{\text{syn}}$ using various subsets ($10\%$, $20\%$, $50\%$, and $100\%$) of the training data from the St. Gall dataset (see Section~\ref{sec:eval_dataset}) and evaluate its text recognition performance on the test set.
As baselines, we compare $\mathcal{R}_{\text{syn}}$ against $\mathcal{R}_{\text{base}}$ and $\mathcal{R}_{\text{IAM}}$.
$\mathcal{R}_{\text{base}}$ is randomly initialized and then trained directly on the St. Gall dataset in a similar manner as $\mathcal{R}_{\text{syn}}$.
$\mathcal{R}_{\text{IAM}}$ is pre-trained on the IAM Handwriting Database (see Section~\ref{sec:eval_dataset}) and fine-tuned on the St. Gall dataset.
To determine the best performing pre-trained models of $\mathcal{R}_{\text{syn}}$ and $\mathcal{R}_{\text{IAM}}$, we train both networks until convergence and select the best performing model based on the validation score from the hand-labeled subset of HBA (see Figure~\ref{fig:hba_words}) and the validation split of the IAM Handwriting Database, respectively.
The performance of these three models is compared on the test split of the St. Gall dataset using the \acf{CER} and \ac{WER} metrics~\cite{margnerToolsMetricsDocument2014}.
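For reference, the \ac{CER} can be computed as the Levenshtein distance between predicted and reference transcription, normalized by the reference length. The following is a standard textbook formulation, not the exact implementation of~\cite{margnerToolsMetricsDocument2014}:

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance via dynamic programming over one rolling row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return edit_distance(ref, hyp) / max(len(ref), 1)
```

The \ac{WER} follows the same formula with word tokens instead of characters.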
\section{Introduction}
\begin{figure}[t]
\centering
\subfloat[]{{%
\includegraphics[width=0.3\textwidth]{Images/manuel/Intro_Real.pdf}}%
\label{fig:intro_real}}%
\hspace{0.4em}
\subfloat[]{{%
\includegraphics[width=0.3\textwidth]{Images/manuel/Intro_Template.pdf}}%
\label{fig:intro_template}}%
\hspace{0.4em}
\subfloat[]{{%
\includegraphics[width=0.3\textwidth]{Images/manuel/Intro_Synthetic.pdf}}%
\label{fig:intro_synthetic}}%
%
\caption{
Inputs for the second step of our framework and the output of the network.
(a) represents the style template for our output document.
(b) shows a source document that was generated using \LaTeX.
(c) shows the corresponding transformed version of the template image (b).
The transformation between (b) and (c) preserves overall structure and content.
}
\label{fig:intro_fig}
\end{figure}
Large labeled datasets play a major role in the significant performance increases seen in \ac{DIA} and computer vision over the last decade.
These datasets -- often containing millions of labeled samples -- are typically used to train deep neural networks in a supervised setting, achieving state-of-the-art performance in tasks such as text line segmentation~\cite{alberti2019labeling}, \ac{OCR}~\cite{blucheScanAttendRead2017} or layout analysis~\cite{scius-bertrandLayoutAnalysisText2019}.
However, such methods are much more challenging to train in settings where no labeled data is available, or where the size of labeled datasets is limited to a few hundred or a few thousand samples -- as is often the case with historical documents~\cite{journetDocCreatorNewSoftware2017a,clausnerAletheiaAdvancedDocument2011}.
Common strategies to deal with limited labeled data include (1) transfer-learning, (2) synthesizing artificial data, or (3) unsupervised learning.
In (1), the typical procedure is to train a deep neural network on similar data and then fine-tune this network on the small labeled target dataset.
The success depends on having datasets similar enough to the target dataset to perform pre-training.
(2) has been an active area of \ac{DIA} research.
Baird~\cite{bairdDocumentImageDefect1992}, Kieu et
al.~\cite{kieuCharacterDegradationModel2012} and Seuret et
al.~\cite{seuretGradientdomainDegradationsImproving2015a} focus on degrading real
document images using defect models to augment datasets.
Other tools such as DocEmul~\cite{capobiancoDocEmulToolkitGenerate2017a} and
DocCreator~\cite{journetDocCreatorNewSoftware2017a} aim to create synthetic
document images using a combination of user-specified structure, background
extraction, degradation methods, and other data augmentation approaches.
However, such approaches still require human expertise in designing appropriate
pipelines to generate realistic documents.
When large unlabeled datasets are available for the target task, a common practice is to use unsupervised learning methods such as autoencoders~\cite{masciStackedConvolutionalAutoEncoders2011} to learn representations.
However, recent work~\cite{albertiPitfallUnsupervisedPreTraining2017} shows that autoencoders trained for reconstruction do not learn representations that are useful for such downstream tasks.
Another possibility is to use unlabeled data in a Generative Adversarial~\cite{goodfellowGenerativeAdversarialNetworks2014,zhuUnpairedImagetoimageTranslation2017} setting to synthesize artificial data that looks similar in appearance to the unlabeled data.
More recent work in document image synthesis has used deep learning, and \ac{GAN} based approaches.
However, these approaches~\cite{pondenkandathHistoricalDocumentSynthesis2019a,tensmeyerGeneratingRealisticBinarization2019,guanImprovingHandwrittenOCR2020,kangGANwritingContentConditionedGeneration2020} suffer from various issues: the produced data matches the overall visual style of historical documents but fails to contain meaningful textual content; they require paired datasets, which defeats the purpose of using unlabeled data; or they only create text of fixed length.
In this paper, we present a framework to generate historical documents without relying on human expertise or labeled data\footnote{https://github.com/DIVA-DIA/Generating-Synthetic-Handwritten-Historical-Documents}.
We approach this problem in two steps.
First, we create template document images that contain user-specified content and structure using \LaTeX\footnote{This can be done with any other word processing tool such as MS Word.}.
Second, using the user-specified template documents and a collection of unlabeled historical documents, we learn a mapping function to transform a given template document into the historical style while preserving the textual content and structure present in the template document.
We evaluate the usefulness of our synthetically generated images by measuring the performances of a deep learning model on the downstream task of \ac{OCR}.
Specifically, we measure the performances of this model when (1) trained only on the target dataset (St. Gall \cite{fischer2011TranscriptionAlignmentLatin}), (2) pre-trained on a similar dataset (IAM Handwriting database \cite{marti2002iam}) and then fine-tuned on the target dataset and finally when (3) pre-trained on our synthetic images and then fine-tuned on the target dataset.
This will allow us to compare against a standard supervised baseline as well as a reasonable transfer learning baseline.
Our empirical experiments show that the model pre-trained on our synthetic images (point 3 above) outperforms the supervised and transfer-learning baselines with a 38\% and 14\% lower \acf{CER}, respectively.
\subsection*{Main Contribution}
This paper extends the existing work on synthetic document generation by providing a general framework that can produce realistic-looking historical documents with a specific style and textual content/structure.
We introduce a two-step CycleGAN based process that leverages two \ac{TR} networks to condition the learning process.
This additional signal lets us overcome the main limitations of previous work and enables us to obtain significantly better performance on a robust set of benchmarks.
\section{Method}\label{ch:MB}
Our method uses a CycleGAN formulation, along with \ac{HTR} models, to further constrain the synthesis process.
To train the CycleGAN, we use unpaired collections of user-specified template images (source domain) and real historical images (target domain).
The source domain documents specify the content and overall structure, and the target domain documents exemplify the style we want in our final synthetic historical documents.
Pondenkandath et al.~\cite{pondenkandathHistoricalDocumentSynthesis2019a} have shown that using only the CycleGAN formulation with the source and target domain datasets is enough to produce synthetic documents that appear stylistically similar to the target domain.
However, the resulting documents do not contain the content or structure specified in the source domain documents.
To address this issue, we add a loss term based on \ac{HTR} models that aim to read the user-specified content from the synthesized historical documents.
After completing training, we obtain a generator that transforms any given template image to a corresponding synthetic historical version.
\newpage
\subsection{Model Architecture}\label{sec:model_architecture}
Our model architecture is based on the CycleGAN formulation.
It uses the cycle-consistency loss to transform an image from a given source domain to a target domain in a bi-directional fashion.
This architecture introduces two main challenges.
First, generating text in the target domain that is human-readable at the character and word levels is difficult due to the under-constrained nature of the CycleGAN architecture for our task.
Second, CycleGANs are prone to emergent \ac{GAN} steganography~\cite{zhangSteganoGANHighCapacity2019}, where the generators learn to hide information from the discriminator within the synthesized image and use it for perfect reconstruction.
\begin{figure}[t]
\center{\includegraphics[width=\textwidth]
{Images/Meth_CycleGAN2RD_architecture}}
\caption{
The CycleGAN architecture presented in \cite{zhuUnpairedImagetoimageTranslation2017} with two additional \ac{TR}s $T \text{ and } T^\prime$ and the five different loss terms.
}
\label{fig:cycleGAN_architecture}
\end{figure}
To tackle the first problem of generating human-readable text, we introduce two \ac{HTR} models $T \text{ and } T^\prime$ to our architecture (see Figure~\ref{fig:cycleGAN_architecture}).
We aim to adjust for the under-constrained nature of the CycleGAN by adding additional loss terms based on these \ac{HTR} models.
We adopt the bi-directional \ac{LSTM} and \ac{CTC} based \ac{HTR} architecture used by the winners of the text recognition competition at ICFHR'18~\cite{straussICFHR2018CompetitionAutomated2018}.
The first \ac{HTR} model $T$ evaluates the quality of the characters or words produced by transforming a source domain template image to the target historical domain.
To do this, it takes as input the synthetic images produced by the source-to-target generator $G$ as well as the textual content and location information from the template document images.
The second \ac{HTR} model $T^\prime$ evaluates the quality of the reconstructed source domain documents (produced by the target-source generator $F$) by comparing the reconstructed image against the same textual content and location information as $T$.
The second challenge is overcoming the tendency of CycleGANs to hide information about the input within the generated synthetic image~\cite{chuCycleGANMasterSteganography2017}.
This tendency arises naturally due to cooperation between the generators and is potentially exacerbated by the presence of the \ac{HTR} models.
To minimize the cyclic consistency loss as well as the loss introduced by the \acp{TR}, the generator $G$ attempts to hide information that can be effectively decoded by generator $F$ to produce good reconstructions, as well as information that allows the \acp{TR} to recover the textual content.
This results in synthetic documents that do not satisfy the constraints of our synthesis process, yet produce very low reconstruction and \ac{HTR} losses.
In some of our preliminary experiments, the generator placed the encoded template document into the target document by adding or subtracting the encoded value from each pixel.
The influence on the image is so small that it is nearly impossible for humans to detect, and it is even challenging for the style discriminator to detect.
Allowing the CycleGAN to cheat prevents it from learning the correct mapping from the target domain back to the source domain, negatively affecting the style representation learned by the \ac{GAN}.
%
To prevent the CycleGAN from creating this hidden embedding, we add Gaussian noise to the synthetic document images.
This low-frequency noise disturbs the encoded message of the generator, making it much harder to cheat by using steganography.
This noise effectively prevents the network from cheating: a much stronger hidden signal would be needed, which would alter the appearance of the image in a way that is more easily detected by the human eye as well as by the style discriminator, and would thus achieve a much lower performance score.
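A minimal sketch of this defense follows (our own illustration; the noise level $\sigma$ and the $[0,1]$ pixel range are assumptions, not the exact values of our implementation):

```python
import numpy as np

def disturb_steganography(fake_y: np.ndarray, sigma: float = 0.05,
                          rng=None) -> np.ndarray:
    """Add Gaussian noise to the generator output before passing it on,
    destroying any low-amplitude hidden (steganographic) signal."""
    rng = rng or np.random.default_rng()
    noisy = fake_y + rng.normal(0.0, sigma, size=fake_y.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixels in the normalized range
```
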
\subsection{Loss Functions}
We train with a loss objective that consists of five different loss terms (see Figure~\ref{fig:cycleGAN_architecture}).
The identity loss, the adversarial loss, and the cycle consistency loss are the loss terms presented in the original CycleGAN paper~\cite{zhuUnpairedImagetoimageTranslation2017}.
To solve the readability problem described in Section~\ref{sec:model_architecture}, we introduce two additional loss terms based on the \ac{HTR} systems: the reading loss and the recovered reading loss.
The identity, adversarial, and cycle consistency loss are calculated in both directions, but the reading loss terms are just calculated once per cycle.
Formally, we aim to learn mappings between two domains $X$ and $Y$ with the help of $N$ training samples $x \in X$ and $M$ samples $y \in Y$.
Each document image $x$ is composed of pairs of word images and the corresponding word texts (ground truth), $x = ((x_{1}, z_{1}), (x_{2}, z_{2}), \ldots, (x_{n}, z_{n}))$, where $n = |x|$ is the number of words in the document.
The data distributions are denoted as $x \sim p_{data}(x)$ and $y \sim p_{data}(y)$.
We also define a projection function $\alpha$ where $\alpha_1(x)$ refers to the first and $\alpha_2(x)$ to the second element of the tuple.
The transformation functions of generators $G$ (source-target) and $F$ (target-source) are denoted respectively by $g : X \rightarrow Y$ and $f : Y \rightarrow X$.
Additionally, we have two adversarial discriminators $D_x$ and $D_y$.
The task of $D_x$ is to distinguish the images of $\{x\}$ and $\{f(y)\}$, and in the same fashion $D_y$ learns to differentiate between $\{y\}$ and $\{g(x)\}$.
\subsubsection{Identity Loss}
This loss term~\cite{zhuUnpairedImagetoimageTranslation2017,taigmanUnsupervisedCrossDomainImage2016} is used to regularize both generators to function as identity mapping functions when provided with real samples of their respective output domains.
Zhu et al.~\cite{zhuUnpairedImagetoimageTranslation2017} observed that in the absence of this identity loss term, the generators $G$ and $F$ are free to change the tint between the source and target domains even when there is no need to do so.
The identity loss is defined as follows:
\begin{align}
\mathcal{L}_{\text{identity}}(G, F) =& \mathbb{E}_{x\sim p_{\text{data}}(x)}[\norm{G(\alpha_1(x))-\alpha_1(x)}_1] \nonumber \\
+& \mathbb{E}_{y\sim p_{\text{data}}(y)}[\norm{F(y)-y}_1].\lbleq{identity}
\end{align}
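The identity terms above can be sketched numerically as follows (our own illustration, assuming the generators act as plain functions on image arrays and taking the mean rather than the sum of absolute differences):

```python
import numpy as np

def identity_loss(G, F, x_img: np.ndarray, y_img: np.ndarray) -> float:
    """Mean L1 identity terms for a single sample pair."""
    return float(np.abs(G(x_img) - x_img).mean()
                 + np.abs(F(y_img) - y_img).mean())
```
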
\subsubsection{Adversarial Loss}
The adversarial loss~\cite{goodfellowGenerativeAdversarialNetworks2014} measures how well the mapping function $g$ can create images $g(x)$ that look similar to images in the domain $Y$, while the discriminator $D_y$ tries to distinguish between images from $g(x)$ and real samples from $Y$.
$g$ tries to minimize this objective against $D_y$, which tries to maximize it, i.e. $\min_g \max_{D_Y} \mathcal{L}_{\text{GAN}}(g,D_Y,X,Y)$.
As we use a CycleGAN, this loss is applied twice, once for $g$ and its discriminator $D_y$, as well as for $f$ and the discriminator $D_x$.
\begin{align}
\mathcal{L}_{\text{GAN}}(g,D_Y,X,Y) =& \mathbb{E}_{y \sim p_{\text{data}}(y)}[\log D_Y(y)] \nonumber \\
+& \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log (1-D_Y(g(\alpha_1(x)))].\lbleq{GAN}
\end{align}
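As a numerical sketch of this term (our own illustration, assuming the discriminator outputs probabilities in $[0, 1]$ and using a small $\varepsilon$ for numerical stability):

```python
import numpy as np

def adversarial_loss(d_real: np.ndarray, d_fake: np.ndarray,
                     eps: float = 1e-8) -> float:
    """L_GAN for one direction: D_Y's scores on real samples y and on g(x)."""
    return float(np.mean(np.log(d_real + eps))
                 + np.mean(np.log(1.0 - d_fake + eps)))
```
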
\subsubsection{Cycle Consistency Loss}\label{sec:ccl}
The cycle consistency loss~\cite{zhuUnpairedImagetoimageTranslation2017} further restricts the freedom of the \ac{GAN}.
Without it, there is no guarantee that a learned mapping function correctly maps an individual $x$ to the desired $y$.
Hence, for each pair $(x_\text{i}, z_\text{i}) \in x$ the CycleGAN should be able to bring the image $x_\text{i}$ back into the original domain $X$, i.e. $x_\text{i} \rightarrow g(x_\text{i}) \rightarrow f(g(x_\text{i})) \approx x_\text{i}$.
As the nature of the CycleGAN is bidirectional, the reverse mapping must also be fulfilled, i.e. $y \rightarrow f(y) \rightarrow g(f(y)) \approx y$.
\begin{align}
\mathcal{L}_{\text{cyc}}(g, f) = & \mathbb{E}_{x\sim p_{\text{data}}(x)}[\norm{f(g(\alpha_1(x)))-\alpha_1(x)}_1] \nonumber \\
+ & \mathbb{E}_{y\sim p_{\text{data}}(y)}[\norm{g(f(y))-y}_1].\lbleq{cycle}
\end{align}
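The cycle terms can be sketched analogously (again modeling the generators as plain functions on arrays, with the mean L1 distance as illustration):

```python
import numpy as np

def cycle_loss(g, f, x_img: np.ndarray, y_img: np.ndarray) -> float:
    """Mean L1 cycle-consistency terms for a single sample pair."""
    return float(np.abs(f(g(x_img)) - x_img).mean()
                 + np.abs(g(f(y_img)) - y_img).mean())
```
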
\subsubsection{Reading Loss and Recovered Reading Loss}
As described in Section~\ref{sec:model_architecture} and shown in Figure~\ref{fig:cycleGAN_architecture}, we use the reading loss to ensure that the \ac{GAN} produces readable images, i.e. images containing valid Latin characters.
The \acp{TR} $T \text{ and } T^\prime$ are trained with a \ac{CTC} loss~\cite{gravesConnectionistTemporalClassification2006,liReinterpretingCTCTraining2020}, which is well suited to tasks that entail challenging sequence alignments.
\begin{align}
\mathcal{L}_{CTC}(\mathbf{x, y})=-\ln p(\mathbf{x \mid y}).
\label{equ:ctcLoss}
\end{align}
To calculate the reading loss, the template word text $z_\text{i}$ and the corresponding transformed word image $G(x_\text{i})$ are passed to the \ac{TR} $T$.
This loss evaluates the mapping $g$ to our target domain $Y$ at the character level.
The second \ac{TR} $T^\prime$ evaluates the readability of the reconstructed image.
Hence, its input is a word text from the source domain $z_\text{i}$ and the respective reconstruction $f(g(x_\text{i}))$.
As above, we calculate the \ac{CTC} loss at the word level. Since the documents all have different lengths, the per-word losses for each document are summed up and divided by the number of words $|x|$.
The two reading loss terms are combined to form the overall reading loss defined as
\begin{align}
\mathcal{L}_{\text{reading}}(g, f) = \mathbb{E}_{x\sim p_{\text{data}}(x)}
& \left [\frac{\sum_{v, w \in s(g, x)} \mathcal{L}_{CTC}(\alpha_2(v), w)}{|x|}\right] \nonumber \\
+& \left[\frac{\sum_{v, w \in s(f \circ g, x)} \mathcal{L}_{CTC}(\alpha_2(v), w)}{|x|}\right]
\label{equ:reading_loss}
\end{align}
where $s(h, u) = \{ (u_i, h(\alpha_1(u_i))) \mid i = 1, \ldots, |u| \}$, $h$ represents the transformation function, and $u$ denotes all word-image and ground-truth pairs of a document.
\subsubsection{Combined Loss}
The different loss terms are weighted with $\lambda_{\text{cyc}} = 10$, $\lambda_{\text{read}} = 1$, and $\lambda_{\text{id}} = 5$, as suggested by Zhu et al.~\cite{zhuUnpairedImagetoimageTranslation2017} and Touvron et al.~\cite{touvronPowersLayersImagetoimage2020}, and summed up to form the overall loss objective:
\begin{align}
\mathcal{L_{\text{total}}}(g,f,D_X,D_Y) = & \mathcal{L}_{\text{GAN}}(g,D_Y,X,Y) \nonumber
+\ \mathcal{L}_{\text{GAN}}(f,D_X,Y,X) \nonumber \\
+&\ \lambda_{\text{cyc}} \times \mathcal{L}_{\text{cyc}}(g, f) \nonumber
+\ \lambda_{\text{id}} \times \mathcal{L}_{\text{identity}}(g, f) \nonumber \\
+&\ \lambda_{\text{read}} \times \mathcal{L}_{\text{reading}}(g, f).
\lbleq{full_objective}
\end{align}
The combined loss is used in a min-max fashion, the generator tries to minimize it, and the discriminators aim to maximize it:
\begin{equation}
g^*,f^* = \arg\min_{g,f}\max_{D_X,D_Y} \mathcal{L_{\text{total}}}(g, f, D_X, D_Y).
\lbleq{minmax}
\end{equation}
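The weighting of the individual terms can be sketched as follows (with the $\lambda$ values stated above; the per-term loss values are placeholders):

```python
def total_loss(l_gan_g: float, l_gan_f: float, l_cyc: float,
               l_id: float, l_read: float,
               lam_cyc: float = 10.0, lam_id: float = 5.0,
               lam_read: float = 1.0) -> float:
    """Weighted sum of the five loss terms of the overall objective."""
    return (l_gan_g + l_gan_f
            + lam_cyc * l_cyc + lam_id * l_id + lam_read * l_read)
```
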
\section{Related Work}
\label{toc:related_Work}
This section briefly outlines the research most closely related to our work.
\subsection{Document Image Annotation and Synthesis}
The process of creating ground truth for labeled datasets is often
time-consuming and challenging.
To ease this process, several software tools have been created to assist
researchers in the annotation process.
Groundskeeper~\cite{yanikoglu1995complete},
TrueViz~\cite{haleeArchitectureTrueVizGroundTRUth2003} and
PerfectDoc~\cite{yacoubPerfectDocGroundTruthing2005} are some examples of ground
truthing tools that allow users to annotate documents in terms of regions,
lines, words, character glyphs and reading order.
These tools do not incorporate automation, and rely on the user to annotate the
entire document.
Other tools incorporate automated or semi-automated methods in addition to manual
methods to assist the user with annotations.
Aletheia~\cite{clausnerAletheiaAdvancedDocument2011} and
DIVAnnotation~\cite{seuretSemiautomatizedModularAnnotation2018} assist the user
with methods to shrink bounding boxes, generate outlines with smearing-based
techniques, component selection, binarization and text-line extraction methods.
GraphManuScribble~\cite{garzCreatingGroundTruth2016} and the approach of Kassis
and El-Sana~\cite{kassisScribbleBasedInteractive2016} use graph-based approaches
to provide the user with an interface to perform page layout segmentation by
scribbling on the page to isolate various components.
Another approach towards creating labeled datasets involves the creation of
synthetic documents and their corresponding ground truth.
Baird~\cite{bairdDocumentImageDefect1992}, Kieu et
al.~\cite{kieuCharacterDegradationModel2012} and Seuret et
al.~\cite{seuretGradientdomainDegradationsImproving2015a} focus on degrading real
document images using defect models to augment datasets.
Other tools such as DocEmul~\cite{capobiancoDocEmulToolkitGenerate2017a} and
DocCreator~\cite{journetDocCreatorNewSoftware2017a} aim to create synthetic
document images using a combination of user-specified structure, background
extraction, degradation methods, and other data augmentation approaches.
However, such approaches still require human expertise in designing appropriate
pipelines to generate realistic documents.
\subsection{Image-to-Image Translation}
The goal of an image-to-image translation process is to translate a given image
to another image.
These transformations can work on complete images, transforming them between
different domains or depictions (drawing to photo-realistic images) or on
objects within the images (horses to zebras).
Such image translation problems can be viewed as an image conditioned
synthesis process~\cite{Isola_2017_CVPR}, with applications towards style
transfer~\cite{Chang_2018_CVPR},
super-resolution~\cite{jiangEdgeEnhancedGANRemote2019},
colorization~\cite{nazeriImageColorizationUsing2018} and
depth-estimation~\cite{Zheng_2018_ECCV}.
Some approaches towards image-to-image translation rely on paired datasets with
matched image pairs from both the source and target domains, while other
approaches work with unmatched collections of images from the two different
domains.
Using paired datasets, Isola et
al.~\cite{isolaImagetoimageTranslationConditional2017} used an
image-conditioned \ac{GAN} approach (Pix2Pix) to learn a mapping from input images to
output images (e.g., overhead satellite view to map view).
Approaches towards working with unpaired datasets can be categorized into
four main categories following the scheme described by Huang et
al.~\cite{huangIntroductionImageSynthesis2018}.
These approaches use:
\begin{itemize}
\item Cycle consistency loss with bi-directional
reconstruction~\cite{zhuUnpairedImagetoimageTranslation2017,Yi_2017_ICCV,kimLearningDiscoverCrossdomain2017}
\item Pair-wise distance constraint loss~\cite{benaimOnesidedUnsupervisedDomain2017}
\item Task specific auxiliary classifier~\cite{Bousmalis_2017_CVPR}
\item \acp{VAE} with shared-weight generators~\cite{liuUnsupervisedImagetoimageTranslation2017}
\end{itemize}
In this work, we use the CycleGAN approach outlined by Zhu et
al.~\cite{zhuUnpairedImagetoimageTranslation2017}, which extends the Pix2Pix
architecture with two additional networks and the cycle consistency loss.
\section{Background}
In this section, we discuss the related work that sets the context of the paper and the theoretical background necessary to understand it.
\subsection{Transkribus}\label{subsec:transcribus}
Transkribus~\cite{kahle2017transkribus, jander2016handwritten} is an open-source tool for computer-aided transcription, recognition, and retrieval of digitized historical documents. After users upload their documents into the tool, the first task is to segment all words, either manually or automatically with the built-in layout analysis. In a second step, all located words are pre-transcribed with the \ac{HTR} of Transkribus, and the user has to check each transcribed word and correct the mistakes made by the \ac{HTR}. Afterwards, the \ac{HTR} is trained on these words, and the improvements of every individual training run are synchronized with the global \ac{HTR} of Transkribus, which allows all users to optimize it further. Since an \ac{HTR} works best on the kind of handwritten documents it was trained on, the automated transcription works significantly better on historical documents from popular and well-known authors, or from epochs with many already transcribed documents, where only small errors have to be corrected afterwards. For completely unknown authors and epochs, on the other hand, most word predictions will be wrong, so the transcription has to be done largely manually, which in turn trains the \ac{HTR} for similar upcoming documents.
\subsection{DocCreator}\label{subsec:doccreator}
DocCreator~\cite{journet2017doccreator, journet2017massive} is an open-source tool to generate synthetic ground truth documents.
One possible approach is to generate this new synthetic document by composing it with a text given as an XML file that is pasted in the defined position with a desired background and font.
For that, the tool offers to extract the background and the font from the given documents (the source domain)~\cite{journet2017doccreator}.
A key difference between our approach and DocCreator is that, in the latter, no deep learning is involved in the synthesis process.
However, deep learning is used in DocCreator for the extraction of characters to create a font, as this is done with the \ac{OCR}~\cite{islam2017survey, arica2001overview} of Tesseract~\cite{smith2007overview, tesseractGit}.
Since DocCreator uses \ac{OCR} for font extraction, the automatic pre-assignment therefore only works if the handwritten characters of the document are known to the \ac{OCR}.
To create the font, the OCR of DocCreator analyzes the input of one (historical) document and tries to identify all characters that the document consists of.
For every character the font contains, up to five (or more if configured) different recognized characters from the document are extracted and one of them is picked randomly every time a character is inserted with the font.
As far as we are aware, it is currently not possible to create a font with more than one image.
Errors in the recognition of one character can be fatal, as they would create wrong ground truth for the characters in the training data, which can result in a malfunction of the deep learning algorithm from which it may not be able to recover~\cite{achille2018critical}.
It is also possible to reconstruct the background from the desired document in a semi-automatic way.
This is accomplished by first applying a binarization to the documents from which the background should be constructed and then manually choosing a threshold that captures the complete text of the document but not the background.
It should be noted that the synthesized documents that are generated in the section "Generation with ground truth" have no new elements in them.
All characters, symbols as well as the background the new synthesized document contains are extracted components that were collected from the training data and just reordered according to the XML file specified for the synthesized document.
\subsection{CycleGAN}\label{subsec:CycleGAN}
In 2014, \acp{GAN}~\cite{goodfellow2014generative} became popular. They are able to generate new, synthetic data that can pass for real data, and consist of a generator that generates the data and a discriminator that judges it.
The task of the generator is to create an image out of a random input.
The discriminator then receives the output of the generator as well as the real images that the generator is supposed to resemble.
Based on the results of how the discriminator judged the images, a loss is determined to train both, the discriminator and the generator.
If the discriminator correctly predicted the nature of the majority of the images, the loss will be high for the generator and low for the discriminator, and vice versa if the discriminator was wrong in its prediction for the majority of the images.
Since both, the generator and the discriminator, start their respective processes with no training, the first generated images of the generator are merely random pixels. The discriminator then guesses if these generated images are real or fake.
As the training progresses, they start to train each other by playing a minimax two-player game.
The goal is to balance out the training of the two and to avoid that one gets better than the other, so that they can learn from each other as long as possible.
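As an illustrative sketch (not the architecture of~\cite{goodfellow2014generative}), the opposing objectives can be written as binary cross-entropy terms on the discriminator's scores; the function names below are ours, chosen for illustration:

```python
import math

def bce(p, target):
    # binary cross-entropy for a single probability in (0, 1)
    eps = 1e-12
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

def gan_losses(d_real, d_fake):
    """Standard GAN losses, given the discriminator's scores.

    d_real: score assigned to a real image (1 = "real")
    d_fake: score assigned to a generated image
    """
    # the discriminator wants real -> 1 and fake -> 0
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    # the generator wants its fakes scored as real (non-saturating form)
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss

# a correct, confident discriminator: low loss for D, high loss for G
d_good, g_high = gan_losses(d_real=0.9, d_fake=0.1)
# a fooled discriminator: high loss for D, low loss for G
d_bad, g_low = gan_losses(d_real=0.5, d_fake=0.9)
```

When the discriminator judges most images correctly, the generator's loss dominates and drives it to improve; when the discriminator is fooled, the roles reverse.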
In 2017, a successor of the \ac{GAN}~\cite{goodfellow2014generative}, CycleGAN~\cite{zhu2017unpaired}, was published, which can achieve image-to-image translation in an unsupervised manner.
In image-to-image translation, the goal is to take an image of a source domain and convert it to resemble the style of the target domain.
Like a normal \ac{GAN}, the CycleGAN tries to learn a mapping G from a source domain to target domain \( G : X \to Y \) with the adversarial loss, so that G(X) and Y are not distinguishable.
The main difference is that CycleGAN additionally learns the reversed mapping \(F: Y \to X \) with a cycle consistency loss that ensures \( F(G(X)) = X \).
This cycle consistency enforces a strong connection between the domains, for the reason that a source image that is transformed to the target domain style has to contain enough information of the source to be able to be transformed back~\cite{zhu2017unpaired, almahairi2018augmented}.
The third loss is the identity loss, which maps the source domain back to the source domain using F and should result in the same image: \( F(X) = X \).
The identity loss helps CycleGAN to preserve the colors of the input images, which could otherwise be mapped into a completely different color spectrum~\cite{zhu2017unpaired}.
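A minimal sketch of how the three losses combine into the generators' objective; the loss weights `lam_cyc` and `lam_id` are hypothetical placeholders, not the values used in any particular training run:

```python
import numpy as np

def cyclegan_generator_loss(x, y, G, F, d_y_of_Gx, d_x_of_Fy,
                            lam_cyc=10.0, lam_id=5.0):
    """Combined generator objective of a CycleGAN (sketch).

    x, y       : images from the source domain X and target domain Y
    G, F       : mappings G: X -> Y and F: Y -> X
    d_y_of_Gx  : discriminator score in (0, 1) for G(x) looking like Y
    d_x_of_Fy  : discriminator score in (0, 1) for F(y) looking like X
    """
    # adversarial terms: the generators try to push both scores toward 1
    adv = -np.log(d_y_of_Gx) - np.log(d_x_of_Fy)
    # cycle consistency: F(G(x)) should reconstruct x, and G(F(y)) -> y
    cyc = np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()
    # identity: mapping an image into its own domain should not change it
    idt = np.abs(F(x) - x).mean() + np.abs(G(y) - y).mean()
    return adv + lam_cyc * cyc + lam_id * idt

# with identity mappings, only the adversarial terms remain
ident = lambda img: img
x, y = np.zeros((8, 8)), np.ones((8, 8))
loss = cyclegan_generator_loss(x, y, ident, ident,
                               d_y_of_Gx=0.5, d_x_of_Fy=0.5)
```

The cycle and identity terms act as regularizers on the adversarial game: they vanish exactly when the mappings are mutually consistent, which is what ties the two domains together.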
\section{Results}
We use two ways to evaluate the results of our generative model: a qualitative visual inspection and a quantitative evaluation.
We use a qualitative human-based approach to evaluate the output from a visual perspective and a quantitative approach to measure the influence of our generated data on a downstream text recognition task.
\subsection{Visual Analysis}
As we can see in Figure~\ref{fig:results_visual}, the synthetic historical documents generated using our method achieve a high degree of similarity to documents from the target domain (see Section~\ref{sec:target_dataset}).
The two primary goals of our approach were to preserve structure and content during the transformation of the source domain document into the target domain.
From Figure~\ref{fig:intro_fig}, we can observe that the generator preserves the location of the text from the source domain to the target domain, resulting in the overall structure in the synthetic document matching the input document structure.
In most cases, the transformation preserves the number of characters, words, and lines from the source document.
However, we observe that on rare occasions, our approach results in synthetic documents where two letters in the source document are combined into a single letter (\textit{legatis} in Figure~\ref{fig:results_visual-1}) or a single letter is expanded into multiple letters (\textit{rem} in Figure~\ref{fig:results_visual-6}).
We can also see from Figure~\ref{fig:intro_synthetic} that our approach is not very effective at transforming the large decorative characters at the beginning of paragraphs.
The color of these decorative characters is transformed to the historical style, but they appear slightly distorted.
This effect can be viewed as a side-effect of our training procedure, which does not emphasize transforming the decorative elements apart from the general style discrimination provided by $D_x$ and $D_y$.
Additionally, we see artifacts where the patches are stitched together because they are generated individually with 10\% overlap and then combined by averaging.
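The stitching step can be sketched as follows, assuming patch values are accumulated and the per-pixel coverage counts are used to average the overlap regions (the function and variable names are ours, not from a specific implementation):

```python
import numpy as np

def stitch_patches(patches, positions, out_shape):
    """Combine individually generated patches into one image by
    averaging wherever patches overlap."""
    acc = np.zeros(out_shape, dtype=float)  # sum of patch values per pixel
    cnt = np.zeros(out_shape, dtype=float)  # number of patches covering a pixel
    for patch, (row, col) in zip(patches, positions):
        h, w = patch.shape
        acc[row:row + h, col:col + w] += patch
        cnt[row:row + h, col:col + w] += 1.0
    return acc / np.maximum(cnt, 1.0)       # avoid division by zero

# two 4x4 patches overlapping by two columns on a 4x6 canvas
a, b = np.full((4, 4), 1.0), np.full((4, 4), 3.0)
img = stitch_patches([a, b], [(0, 0), (0, 2)], (4, 6))
# the overlap (columns 2-3) becomes the average of the two patches
```

Plain averaging keeps the pipeline cheap, but whenever adjacent patches disagree, the blended overlap region is exactly where the visible seam artifacts appear.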
Considering the preservation of textual content, our approach successfully transforms most individual characters to the style of the target domain dataset.
Individual words are readable and require some effort to distinguish from real historical image samples -- even to expert eyes.
However, our approach struggles with the transformation of certain letters.
From Figure~\ref{fig:results_visual-1}, we can see that the character `o' is mistransformed into an `a'.
However, the shape and appearance of these two letters are very similar and often hard to distinguish.
Our approach also has problems transforming the letter `s'.
This character is sometimes transformed into the character `n': e.g., in the word \textit{superas} in Figure~\ref{fig:results_visual-5}, the first `s' is transformed into an `n', while the second `s' is correctly preserved.
Despite these small mistakes, we can observe that overall the method produces a very faithful transformation of the source document into the target historical style while preserving content and structure.
\subsection{Quantitative Evaluation}
\begin{figure}[t]
\centering
\subfloat[CER]{{%
\includegraphics[width=0.48\textwidth]{Images/manuel/CER-Chart.pdf}}%
\label{fig:results_cer_chart}}%
\quad
\subfloat[WER]{{%
\includegraphics[width=0.48\textwidth]{Images/manuel/WER-Chart.pdf}}%
\label{fig:results_wer_chart}}%
\caption{
We can see that the network pre-trained with synthetic data (in green) outperforms the two baselines (orange and blue) in all categories and for both metrics \ac{CER} and \ac{WER}.}
\label{fig:results_result_plots}
\end{figure}
In Figure~\ref{fig:results_result_plots} we visualize the empirical results of our experiments where we compare our proposed approach against a purely supervised method and a transfer learning baseline method, with respect to the fraction of labels used in the target dataset.
This way, we can assess the performance of those methods under the conditions of arbitrarily (and here, controlled) small datasets.
We recall that small datasets are the common scenario in this domain, as opposed to more mainstream computer vision domains. As expected, with a small amount of data, the pre-trained methods ($\mathcal{R}_{\text{syn}}$ and $\mathcal{R}_{\text{IAM}}$ ) vastly outperform the baseline ($\mathcal{R}_{\text{base}}$).
This margin decreases as we train on large proportions of training data from St. Gall; however, $\mathcal{R}_{\text{syn}}$ consistently achieves the lowest \ac{CER} (see Figure~\ref{fig:results_cer_chart}), and is narrowly beaten by $\mathcal{R}_{\text{IAM}}$ only when considering the \ac{WER} at the $20\%$ subset (see Figure~\ref{fig:results_wer_chart}).
On average, $\mathcal{R}_{\text{syn}}$ has a $38\%$ lower \ac{CER} and a $26\%$ lower \ac{WER} compared to the model trained only on the St. Gall dataset, and a $14\%$ lower \ac{CER} and $10\%$ lower \ac{WER} compared to the model pre-trained on the IAM Handwriting Database.
Interestingly, when using the entire training set, $\mathcal{R}_{\text{base}}$ achieves a lower error rate than $\mathcal{R}_{\text{IAM}}$, which could be attributed to stylistic differences between the IAM Handwriting Database and the St. Gall dataset.
Similar to observations from Studer et al.~\cite{studerComprehensiveStudyImageNet2019}, the benefits of pre-training on a different domain could decrease when more training data is available from the actual task.
Therefore, the stylistic similarity of the synthetic historical images and documents from the St. Gall dataset could explain the lower error rates of $\mathcal{R}_{\text{syn}}$ compared to $\mathcal{R}_{\text{base}}$.
\section{Introduction}
\par Since 2003, there has been a worldwide effort to plan new
reactor neutrino experiments using two or more detectors to measure
or further limit the only unmeasured neutrino mixing angle, $\theta_{13}$.
The best current limit on $\theta_{13}$ comes from the reactor experiment
CHOOZ,\cite{bib:chooz} which ran in the 1990s along with
Palo Verde\cite{bib:palo} to determine if the atmospheric neutrino
anomaly could be explained with $\theta_{12}$. Here I will describe some
features and the current status of reactor neutrino experiments,
which I expect to be the first to improve our knowledge of
$\theta_{13}$ further. The reader whose
only interest is in the status of current
projects can skip to
Section 3.2.
\par In a sense, reactor neutrino $\bar{\nu}_e$ disappearance
experiments are complementary to the new off-axis accelerator $\nu_e$
appearance experiments, T2K \& NO$\nu$A\cite{bib:t2k,bib:nova}, whose
goal is also to study $\theta_{13}$. The magnitude of a $\theta_{13}$ signal
at a new reactor
neutrino experiment is affected only by the
uncertainty of the value of $\Delta m^2_{32}$, which is currently bounded by
$2.48 \times 10^{-3} < \Delta m^2_{32} < 3.18 \times 10^{-3}~{\rm eV^2}/c^4$.\cite{bib:minos,bib:trish}
On the other hand,
the ability of an accelerator experiment to measure $\theta_{13}$ is also
affected by the uncertainty in $\theta_{23}$,
$ 0.36 < \sin^2(\theta_{23}) < 0.63$\cite{bib:pdg}, and
the uncertainty in the CP violating
phase $\delta$, $0 < \delta < 2\pi$. Thus a precise measurement
of $\theta_{13}$ by both reactor and accelerator experiments could be
used to constrain $\theta_{23}$ and/or $\delta$. On the other hand,
a failure to find a value for $\theta_{13}$
by the reactor experiments will have a
rather negative effect on the expected physics capabilities for
the accelerator experiments which are much more expensive. For example, a
limit of $\sin^2 (2 \theta_{13}) < 0.02$, which reactor experiments could
achieve before T2K or NO$\nu$A start running, would mean
that the accelerator experiments, even with
increases in beam power, could not measure evidence for matter effects
or CP violation and would only have a narrow window to
find evidence for a non-zero $\theta_{13}$.
The current limit on $\theta_{13}$ from the Chooz experiment is shown
in Figure \ref{fig:chooz}. The analysis was actually done for
$\theta_{12}$, but given the value of $\Delta m^2_{21}$, it serves
as a valid analysis for $\theta_{13}$. The curve shows the 90\% CL allowed
and prohibited values of $\sin^2(2\theta_{13})$ as a function of $\Delta m^2_{32}$.
In order to compare experiments, it
is common to quote a single number as
the $\theta_{13}$ limit, but this requires a few assumptions.
As a consequence, a large variety of numbers are
quoted as the CHOOZ limit, such as $\sin^2(2\theta_{13}) < 0.10, 0.11, 0.14, 0.15, 0.20$.
One cause for this is the time-dependence of the $\Delta m^2_{32}$ measurement of
Super-K and now MINOS. There is also no unique method of picking
the value of $\Delta m^2_{32}$ to use. (The union of two CL curves is not
a Confidence Level.) While the ``best fit" $\Delta m^2_{32}$ value is often chosen,
the PDG has elected to use the one sigma low value of $\Delta m^2_{32}$ where
the larger value of $\theta_{13}$ is achieved, and they
obtain $\sin^2(2\theta_{13}) < 0.19$. Another
more mundane issue which requires care is that, depending on the
application, $\theta_{13}$ is often expressed in degrees, in radians, as
$\sin(\theta_{13})$, $\sin^2(\theta_{13})$, $\sin^2(\theta_{\mu e})$,
$\sin^2(2\theta_{13})$ and $U_{e3}$. The relationships between these
expressions are simple, but factors of
two errors are common. Finally, since comparison with the
sensitivity of future accelerator
experiments is often made, note that the accelerator
experiments have additional ambiguities and degeneracies in interpreting
a $\theta_{13}$ limit from $\theta_{23}$, $\delta$ and the mass
hierarchy.\cite{bib:lindneramb}
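For reference, in the standard parameterization $|U_{e3}| = \sin \theta_{13}$, so the common conventions are related by
\[
\sin^2(2\theta_{13}) \;=\; 4\sin^2\theta_{13}\,\bigl(1-\sin^2\theta_{13}\bigr)
\;=\; 4\,|U_{e3}|^2\bigl(1-|U_{e3}|^2\bigr)
\;\simeq\; 4\sin^2\theta_{13}
\]
for small angles, which is where the stray factors of two and four typically enter.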
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=chooz.eps,width=8.0cm}}
\caption{The CHOOZ and Palo Verde Limits on $\theta_{13}$ as a function of $\Delta m^2_{32}$.}
\label{fig:chooz}
\end{center}
\end{figure}
\section{The planning of a new generation of reactor neutrino
experiments}
The ingredients for the design of a
new reactor $\theta_{13}$ experiment have been laid out in detail in the
white paper, ``A new reactor neutrino experiment to measure $\theta_{13}$"
prepared by an International Working Group comprised of 125 authors from
40 institutions in 9 countries\cite{bib:white}. The optimum location
for the far detector depends on $\Delta m^2_{32}$, and also on the
experiment's ultimate exposure. Sites from 1.1 to 2.0 km have been
chosen. The near detector needs to be located
close to the core to measure the unoscillated spectrum. Local factors,
such as reactor access and topological features modify where
detectors will be placed. Important general features of reactor
experiments are the effects of luminosity on the sensitivity,
detector design, scintillator stability, calibration, backgrounds
and systematic errors.
\par The neutrino oscillation sensitivity for a reactor neutrino experiment
comes from measuring a smaller number of neutrinos than would be expected
if $\theta_{13}=0$, and measuring an energy distribution consistent with
$\bar{\nu}_e$ disappearance due to oscillations. These can be called the ``rate" test and the
``shape" test, but every experiment will use all available
information.
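Both tests probe the two-flavor survival probability (neglecting the small $\Delta m^2_{21}$-driven terms),
\[
P(\bar{\nu}_e \rightarrow \bar{\nu}_e) \;\simeq\; 1 - \sin^2(2\theta_{13})\,
\sin^2\!\left( \frac{1.27\,\Delta m^2_{32}[{\rm eV^2}/c^4]\; L[{\rm m}]}{E[{\rm MeV}]} \right),
\]
where $L$ is the baseline and $E$ the antineutrino energy: the rate test measures the overall deficit, while the shape test looks for the $E$-dependent distortion.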
The effective ``luminosity" for a reactor
experiment can be expressed in GW-ton-years, or the product of the
reactor's thermal power times the size of the detector times the length
of time the detectors operate. An example of how the sensitivity of an
experiment varies with luminosity is given in Figure \ref{fig:lindner}.
Two extreme examples, which represent straight lines on this log-log
plot, are for no systematic error, and for infinite systematic error
in normalization and energy calibration. In the latter case, an oscillation
signature is recognized by the appropriate wiggles in an energy distribution.
Such a signal would be affected by bin-to-bin systematic errors, but
not by the same systematic errors which limit the ``rate" test.
Two other curves are drawn with possibly realistic estimates of
systematic error for the next round of experiments.
Vertical lines are drawn at 12 GW-ton-years, corresponding to CHOOZ,
400 GW-ton-years, which could quickly and dramatically increase the
world's sensitivity, and a more ambitious project with 8000 GW-ton-years.
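The exposure bookkeeping, and the statistics-only scaling that appears as a straight line on the log-log plot, can be sketched as follows (the reference numbers are illustrative placeholders, not the experiments' actual parameters):

```python
import math

def exposure_gw_ton_years(thermal_power_gw, target_mass_tons, years):
    """Effective 'luminosity' of a reactor experiment: the product of
    reactor thermal power, detector target mass, and running time."""
    return thermal_power_gw * target_mass_tons * years

def stat_only_limit(lum, ref_lum=12.0, ref_limit=0.10):
    """In the systematics-free limit, a sin^2(2theta_13) rate limit
    scales like 1/sqrt(luminosity) -- a straight line on a log-log plot."""
    return ref_limit * math.sqrt(ref_lum / lum)

small = exposure_gw_ton_years(8.5, 5.0, 0.28)  # ~12 GW-ton-year scale
# quadrupling a statistics-limited exposure halves the bound
quarter = stat_only_limit(48.0)
```

With systematic errors included, the curve flattens: beyond some exposure, the limit is set by the normalization and energy-scale uncertainties rather than by statistics.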
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=lindner.eps,width=14.0cm}}
\caption{Luminosity scaling for a reactor experiment's sensitivity
for $\theta_{13}$. The solid curve
shows what might be achieved with reasonable assumptions for systematic
error.}
\label{fig:lindner}
\end{center}
\end{figure}
\par CHOOZ had
a volume of Gd-loaded liquid scintillator, optically connected
and surrounded by
liquid scintillator without Gadolinium. New detector designs
involve the addition of a third volume of mineral oil without
scintillator, as shown in the Double Chooz design of
Figure~\ref{fig:choozdet}. An inner volume of Gd loaded scintillator
serves as a well-defined fiducial volume for neutrino interactions,
with a very high neutron capture cross section. A second layer
of scintillator, called the ``$\gamma$-catcher", measures the energy
of any photons from positron annihilation or neutron capture which
escape the fiducial volume, and a third volume, or ``buffer", shields
the active volume from backgrounds originating in the rock or
phototubes.
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=choozdet.eps,width=12.0cm}}
\caption{Plan for the Double Chooz detector(s) with three
optically connected volumes: a target with 0.1\% Gd,
a $\gamma$-catcher with scintillator and no Gd, and a buffer
with mineral oil and no scintillator.}
\label{fig:choozdet}
\end{center}
\end{figure}
\par The scintillator in CHOOZ showed a degradation of its
transparency over time, which resulted in a decrease of the light
yield. Such a degradation would be unacceptable in a new experiment,
particularly if it differed between two detectors. Suspicions concentrate
on possible contaminants which may have caused the Gd to come out of solution,
so that clean and robust liquid handling systems will be required to
maintain good optical qualities. Newly developed
scintillator formulations from groups at Heidelberg\cite{bib:dc}
and Brookhaven\cite{bib:db}
have shown that it is possible
to satisfy the stability requirements for the long time periods needed.
\par Precise calibration will be necessary to ensure that
the response of two or more
detectors is identical. This will be accomplished
by the introduction of radioactive sources that emit
gammas, electrons, positrons and neutrons. Light flasher systems and
lasers will be used to test the stability of photo-detectors. Cosmic ray
muons will also be used, and particular cosmogenic nuclei, such as
$^{12}$B, can also be used to provide a calibration. An important
consideration is that identical calibration systems be used for all
detectors. One possibility that has been proposed is to use
multiple and movable detectors, in order to increase the available
information regarding cross-calibration.
\par The neutrino signature is a coincidence between a prompt positron
annihilation and a delayed neutron capture with a mean life of
30 $\mu$s.
There are two kinds of backgrounds:
accidental ones where the two signals have different causes,
and correlated backgrounds. Two important correlated
backgrounds are fast neutrons, which can cause two
signals separated by a typical neutron capture time, and
$^9$Li, which can be created by spallation when a muon passes
through the scintillator. The danger of $^9$Li is that it has a
long decay time ($\sim$ 130 ms), and the decay
leads to both a neutron and an electron, creating a signal much
like a reactor neutrino. The long decay time makes it unrealistic
to veto every throughgoing muon which might have been the cosmogenic
source. While $^9$Li production has been measured,
its dependence on the muon energy is poorly known, so predictions
of the rates at a particular depth may not be accurate.
All correlated backgrounds can be reduced by putting the detectors
deep enough underground so that there are large overburdens, though this has a cost
in civil construction.
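For the accidental class, the rate follows from the product of the uncorrelated singles rates and the coincidence window; a back-of-the-envelope sketch with purely hypothetical rates:

```python
def accidental_rate_per_day(prompt_rate_hz, delayed_rate_hz, window_s):
    """Accidental coincidences: R_acc = R_prompt * R_delayed * dt,
    for uncorrelated singles rates and a coincidence window dt (seconds)."""
    seconds_per_day = 86400.0
    return prompt_rate_hz * delayed_rate_hz * window_s * seconds_per_day

# e.g. hypothetical 10 Hz of prompt-like and 0.02 Hz of neutron-like
# singles, with a 100 microsecond window (a few capture lifetimes):
r_acc = accidental_rate_per_day(10.0, 0.02, 100e-6)  # events per day
```

Unlike the correlated backgrounds, this contribution can be measured in situ from the singles spectra and subtracted, which is why the correlated components dominate the design considerations.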
\par Finally, it is necessary to reduce the systematic errors for
counting reactor neutrino events below those that were achieved
by CHOOZ and Palo Verde. The use of a second detector and the
definition of the fiducial volume at the target/$\gamma$-catcher
interface provide large reductions in systematic errors, and the
experiments have to be careful that other effects do not limit them
for their planned-for statistics. A comparison of the
systematic error goals for Double Chooz and Daya Bay has been
tabulated by Mention et al.\cite{bib:mention} and is presented
in Table \ref{tab:syst}.
\begin{table}[ht]
\caption{Comparison of Systematic errors
for the CHOOZ analysis and estimates of the relative and
absolute errors in Double Chooz and Daya Bay, as tabulated by
Mention et al. in Reference 19.}
\vspace*{5pt}
\begin{tabular}{l|c|cc|ccc}\hline
Error Description & CHOOZ & ~~Double & Chooz~~~ & & Daya Bay & \\
& & & & & No R\&D & R\&D \\
& {\small Absolute} & {\small Absolute} & {\small Relative} & {\small Absolute} & {\small Relative} & {\small Relative}\\\hline \hline
Reactor & & & & & & \\ \hline
Production $\sigma$ & 1.90~\% & 1.90~\% & & 1.90~\% & & \\
Core powers & 0.70~\% & 2.00~\% & & 2.00~\% & & \\
Energy/fission & 0.60~\% & 0.50~\% & & 0.50~\% & & \\
Solid angle & & & 0.07~\% & & 0.08~\% & 0.08~\% \\ \hline
Detector & & & & & & \\ \hline
Detection $\sigma$ & 0.30~\% & 0.10~\% & & 0.10~\% & & \\
Target Mass & 0.30~\% & 0.20~\% & 0.20~\% & 0.20~\% & 0.20~\% & 0.02~\% \\
Fiducial volume & 0.20~\% & & & & & \\
Target \% H & 0.80~\% & 0.50~\% & & ? & 0.20~\% & 0.10~\% \\
Dead time & 0.25~\% & & & & & \\ \hline
Analysis & & & & & & \\ \hline
$e^+$ escape & 0.10~\% & & & & & \\
$e^+$ identification & 0.80~\% & 0.10~\% & 0.10~\% & & & \\
n escape & 0.10~\% & & & & & \\
n capture \% Gd & 0.85~\% & 0.30~\% & 0.30~\% & 0.10~\% & 0.10~\% & 0.10~\% \\
n identification & 0.40~\% & 0.20~\% & 0.20~\% & 0.20~\% & 0.20~\% & 0.10~\% \\
$\bar{\nu}_e$ time cut & 0.40~\% & 0.10~\% & 0.10~\% & 0.10~\% & 0.10~\% & 0.03~\% \\
$\bar{\nu}_e$ distance cut & 0.30~\% & & & & & \\
n multiplicity & 0.50~\% & & & & 0.05~\% & 0.05~\% \\ \hline \hline
{\bf Total} & {\bf 2.72~\%} & 2.88~\% & {\bf 0.44~\%} & 2.82~\% & 0.39~\% &
0.20~\% \\ \hline\hline
\end{tabular}
\label{tab:syst}
\end{table}
\section{Sites for reactor experiments}
\subsection{Projects that were previously considered}
\par Four reactor $\nu$ projects have received some funding and
are moving forward. These four experiments, which will be described
in the following sections, are Double Chooz, Daya Bay, RENO and Angra.
As members of the International Working Group considered locations
for new reactor experiments, a number of other
possible locations were studied
which have since been dropped. It is instructive to consider some of
the strengths and weaknesses of sites that are not currently being
pursued.
\par The first idea for a two-detector experiment to measure $\theta_{13}$
was KR2DET\cite{bib:kr2det}. This would have been built at the
Krasnoyarsk reactor in Russia, which was originally built underground
for producing weapons-grade plutonium. Two 46 ton detectors would
have been 115 m and 1000 m from the reactor.
Since the whole complex is
underground, the 600 meters of water equivalent (m.w.e.)
overburden would shield against a high rate
of cosmogenic nuclei, such as $^9$Li, and the near and far detectors
would have the same low backgrounds. Unfortunately, local officials
were not cooperative at the prospect of an international collaboration
at their formerly secret Soviet city.
\par The reactor complex
in the United States with the highest power, and the site of
a former reactor neutrino experiment, is Palo Verde, in Arizona. The previous
collaboration had a poor decommissioning experience and the reactor
company was not approached about a new project. There was a collaboration which
did an extensive site study at the Diablo Canyon reactor on the
coast of California. The hills there offered an opportunity for
considerable overburden for both the near and far detector. However PG\&E,
the reactor power company, had recently gone through a politically
motivated bankruptcy, and they decided not to cooperate with
the collaboration after the initial studies. The most complete proposal
in the United States was put forward by a large collaboration at the
Braidwood reactor, about an hour's drive from Chicago\cite{bib:braidwood}.
Good cooperation
with the Exelon Corporation was obtained after efforts from the Directors of
Fermilab and Argonne. In Illinois, the overburden would need to
be achieved
by a vertical shaft rather than horizontal tunnels. Although the per-foot cost of a shaft
is higher than that of a tunnel, reaching a given overburden requires
less digging for a shaft than for the length of a typical mountain
tunnel, so civil construction costs are comparable. An experiment was
designed which could move two pairs of 65 ton detectors between two 180 m
shafts about 1 km apart outside the reactor's security fence. After
consideration by the Neutrino Scientific Assessment
Group (NuSAG)\cite{bib:nusag},
the DOE decided not to fund the Braidwood experiment, presumably because
it was more expensive than the alternative, which was support for U.S.
participation in the Daya Bay project.
\par In Japan, a collaboration formed to prepare an experiment called
KASKA at the Kashiwazaki-Kariwa complex south of
Niigata\cite{bib:kaska}. With seven 3.4 GW$_{th}$ nuclear power plants,
it is the world's most powerful reactor complex. The plants are
located in two small clusters, so two near detectors were planned. The
absence of hard rock at the desired depth led to a design
in which the detectors were placed in deep narrow shafts.
However the economics of shafts limits their size and hence the size of the
detector that could be placed in them.
The collaboration developed an excellent relationship with the nuclear
power company and conducted extensive boring studies
to plan the shafts for 4.5 ton detectors. The Japanese funding
agencies, which also support the KamLAND and Super-Kamiokande
experiments, decided not to support this project.
\newpage
\subsection{Double Chooz in France}
\label{sec:dc}
Double Chooz will use the location of the CHOOZ
experiment as its site for its far detector. By avoiding civil
construction costs for the far site, Double Chooz will be less
expensive and will be
able to get started more quickly than the alternatives. There will be
a near detector location 270 m from the middle of the two reactor cores,
with an overburden of about 90 m.w.e., which conservatively maintains
a signal-to-background ratio similar to that of the far detector.
Engineering for a near site has been provided by the French
Electricity Company, Ed.F. The
final design will be completed during 2007 and the
lab will be ready in 2009.
The design for the three volume
Double Chooz detector was shown in
Figure~\ref{fig:choozdet}. A three volume prototype was built for
the R\&D stage, and the project is now entering
the construction stage. Key parameters of the Double Chooz Experiment are
given in Table~\ref{tab:detsum}.
Initial tenders
for the steel shielding and for the scintillator
have already been issued, and the far detector will be
installed and operated while the near detector lab is under
construction. With just the 10 ton far detector, the CHOOZ limit on
$\theta_{13}$ can be passed in a few months. When the near detector is
operational, the full sensitivity can be reached quickly, as shown
in Figure \ref{fig:choozsched}.
As the site of a former reactor neutrino experiment with extensive
reactor off running, Chooz is one place where backgrounds have been
measured. Accidental backgrounds in Double Chooz will be much
lower than CHOOZ because sand will be replaced by 170 mm steel
shielding, and because of the buffer.
At the far detector, where 60 neutrinos per day will be
measured, accidental backgrounds will be about 2 per day, while
correlated backgrounds from fast neutrons will be an order of magnitude
smaller. The estimate for $^9$Li is 1.4 per day, based on measurements
in Chooz. The near detector, which should measure 1012 neutrino events
per day, will have accidental backgrounds of about 22 per day, and 1.3
per day from fast
neutrons. The estimate for $^9$Li is 9 per day.
While Double Chooz will be both the cheapest experiment and the first to
provide new knowledge on $\theta_{13}$, its lower ultimate sensitivity has been
used by some funding agencies to deny it the resources that could have
provided that knowledge in a more timely way.
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=dclimit.eps,width=8.0cm}}
\caption{Expected Double Chooz $\theta_{13}$
sensitivity versus time.}
\label{fig:choozsched}
\end{center}
\end{figure}
\begin{table}[htbp]
\caption[Detectors] {Summary of the some parameters of the
proposed Double Chooz experiment.}
\label{tab:detsum}
\begin{center}
\begin{tabular}{lrc}
\hline
Thermal power & 4.27 GW & each of 2 cores \\
Electric power & 1.5 GWe & each of 2 cores \\
$\bar{\nu}_e$ target volume & 10.3 m$^3$ & Gd loaded LS (0.1\%) \\
$\gamma$-catcher thickness & 55 cm & Gd-free LS\\
Buffer thickness & 105~cm & nonscintillating \\
Total liquid volume & $\sim$237~m$^3$ & \\
Number of phototubes per detector & 534 8{\tt "} & 13\% coverage \\
Far detector distance & 1050~m & averaged\\
Near detector distance & 280~m & averaged\\
Far detector overburden & 300 m.w.e. & hill topology\\
Near detector overburden & 70--80 m.w.e. & shaft\\
$\bar{\nu}_e$ 5 years far detector events & 75,000 & with a 60.5\% efficiency\\
$\bar{\nu}_e$ 5 years near detector events & 789,000 & with a 43.7\% efficiency\\
Relative systematic error & 0.6\% & \\
Effective bin-to-bin error & 1\% & background systematics \\
Running time with far detector only & 1--1.5 year & \\
Running time with far+near detector & 3 years & \\
$\sin^2(2\theta_{13})$ goal in 3 years & 0.02--0.03 & (90\% CL) \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Daya Bay in China}
The Daya Bay Complex, located near Hong Kong in Guangdong Province China,
currently consists of two pairs of reactors, called Daya Bay and Ling Ao.
The centers of each pair of reactors are about 1100 m apart. In addition,
two more reactor cores near Ling Ao are under construction (Ling Ao II)
and should be in operation by 2011, resulting in a total 17.4 GW$_{th}$ reactor
power. With this geometry, two near detectors are needed to monitor the reactor
power, as well as a far detector, as shown in Figure~\ref{fig:daya1}.
Important factors for the near sites are the estimated muon induced
backgrounds, which are a function of overburden.
The near sites were optimized using a global $\chi^2$, which takes into
account backgrounds, mountain profile, detector systematics and residual
reactor related systematics. A summary of distances obtained
is provided in Table~\ref{tab:db}.
\begin{table}[ht]
\caption{Distances between reactors and planned detectors
at Daya Bay.}
\begin{center}
\begin{tabular}{|l||r|r|r|}\hline
~~~~~~Detectors & DB near & LA near & far \\
Reactors & (m) & (m) & (m) \\ \hline
DB cores & 363 & 1347 & 1985 \\ \hline
LA cores & 857 & 481 & 1618 \\ \hline
LA II cores & 1307 & 526 & 1613 \\ \hline
\end{tabular}
\label{tab:db}
\end{center}
\end{table}
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=daya1.eps,width=8.0cm}}
\caption{The layout of reactors and detectors at Daya Bay.}
\label{fig:daya1}
\end{center}
\end{figure}
The cylindrical Daya Bay detector will contain three zones, with a target,
$\gamma$-catcher and buffer, as described above. The 224 phototubes
will be located on the sides of each 20 ton detector,
with reflective surfaces at the top
and bottom. The multiple detectors at each site will be used to
cross-calibrate each other, and the possibility of movable detectors is being
studied. In Daya Bay's baseline design, the detectors at each site are
placed inside a large water buffer/water Cerenkov muon detector.
For the far hall, this is similar to a swimming pool with
dimensions 16 m $\times$ 16 m $\times$ 10 m (high). In addition,
water tanks of 1 m $\times$ 1 m are used as an outer muon tracker.
\begin{figure}[ht]
\begin{minipage}[t]{0.60\textwidth}
\begin{center}
\includegraphics*[width=8cm,angle=0,clip]{daya2.eps}
\caption{\label {fig:daya2}
Design of a Daya Bay Module showing a variety of
monitoring tools.}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.30\textwidth}
\begin{center}
\includegraphics*[width=4cm,angle=0,clip]{daya3.eps}
\caption{\label {fig:daya3}
Daya Bay Detectors in the water buffer/Veto System}
\end{center}
\end{minipage}
\hfill
\end{figure}
The large depth of the Daya Bay detectors will be used to
keep cosmogenic backgrounds at a small level. Currently,
tests involving muon and neutron backgrounds are taking place
with a number of detectors at the Aberdeen tunnel in Hong Kong,
which has a similar overburden.
\par With the large reactor power and large overburden to reduce
backgrounds, Daya Bay is an excellent choice for a reactor $\theta_{13}$
experiment. With support from the Chinese government and
the U.S. Department of Energy, the project is well positioned to
proceed. With a baseline detector systematic error of 0.38\%
and a goal of 0.18\%, the collaboration hopes to take full advantage of the statistical
uncertainty of 0.2\%. Data taking with the two near halls and the far hall
could begin in June 2010. With three years of running, Daya
Bay will reach $\sin^2(2\theta_{13}) < 0.008$ or better.
\subsection{RENO in South Korea}
The South Korean Reactor Experiment for Neutrino Oscillation (RENO) collaboration
is working on an experimental project at the YoungGwang reactor
complex, which consists of six equally spaced reactors in a line
on the west coast of South Korea.
A schematic setup showing the topography and the proposed location
of the near and far detectors is shown in Figure \ref{fig:skreno}.
The near detector at a distance of
about 150 m would be under a 70 m high hill,
and the far detector at a distance of
1.5 km would be under a 260 m high mountain.
\begin{figure}
\begin{center}
\mbox{\epsfig{figure=skreno.eps,width=8.0cm}}
\caption{A topographic map of the YoungGwang site showing
the proposed locations of the near and far detectors for RENO.}
\label{fig:skreno}
\end{center}
\end{figure}
\par After securing \$9M in funding from the government of
Korea, the RENO collaboration\cite{bib:reno} has been undertaking
detector design since May 2006. Various samples of liquid scintillator
are under investigation with respect to the
long-term stability of their optical
properties. Other tests include compatibility with stainless steel
and mylar, and tests for cracking of the acrylic.
A RENO prototype contained
50 liters of Gd loaded scintillator with a 400 liter $\gamma$-catcher
and a 60 cm $\times$ 100 cm stainless steel dark container.
The prototype was used to do performance tests and background studies,
R\&D for the detector structure and phototube mounting scheme, and to
establish a data analysis effort.
The phototube layout in a simulation of a three-volume
detector is shown in Figure~\ref{fig:reno}. Each detector would have
a fiducial mass of 15.4 ton using scintillator with density 0.73 g/cm$^3$.
\begin{figure}
\begin{center}
\mbox{\epsfig{figure=reno.eps,width=5.0cm}}
\caption{The phototube configuration in the RENO
GEANT simulation.}
\label{fig:reno}
\end{center}
\end{figure}
\par RENO has received support from the South Korean government and good
cooperation from the Y.K. Power Plant Company.
The expected number of $\bar{\nu}_e$'s is about 5000 per day at the near detector
and about 100 per day at the far detector. With a systematic error near 1\%,
the project could reach $\sin^2(2\theta_{13}) < 0.03$ in 3 years.
\subsection{Angra in Brazil}
The Angra dos Reis reactor complex in Brazil, about 150 km south
of Rio de Janeiro, contains two reactors, Angra-I and Angra-II, which
have thermal powers of 2 and 4 GW and up times of 83\% and 94\%, respectively.
The nearby site has high terrain consisting of granite, so both near
and far detectors could have a substantial overburden. Initial
designs for a $\theta_{13}$ experiment involve a near detector, 300 m from
Angra-II, with 250 m.w.e. overburden, and a far detector, under the peak of
a mountain called ``Morro de Frade", which would provide 2000 m.w.e. at
a distance of 1.5 km.
The plan is to build a 50 ton near detector and a 500 ton far detector,
and concentrate on reducing any bin-to-bin systematic errors. The
1000 ton KamLAND detector is proof that large reactor neutrino
detectors are possible. Unlike KamLAND but like the other new $\theta_{13}$
projects, the Angra collaboration plans to build a three-volume detector.
For such a large detector, phototube costs scale as $V^{2/3}$. A statistical
precision of $\sin^2(2\theta_{13}) < 0.006$ could be obtained in three years.
The Angra experiment was originally conceived as a large
$\theta_{13}$ detector under a considerable overburden
together with a single reactor, in order to obtain a large luminosity but
still have substantial reactor-off running.
A funding request
to the Brazilian Minister of Science and Technology in 2006 was approved
for initial stages of the project.
The experiment
will be a long term project which will take advantage of lessons
learned at Double Chooz, Daya Bay and RENO. In the meantime,
smaller detectors are being constructed with possible applications
toward the monitoring of reactor operations. The collaboration is
establishing a formal agreement with Eletronuclear for
permanent access to the site. They are already authorized to
place one ton of Gd-loaded scintillator provided by LVD
near to the reactor for
muon background measurements.
Other tests have measured noise and singles rate in the vicinity
of the proposed detectors.
\par The next stage is a very near detector with three
volumes of scintillator and a muon veto, to
be placed between 50 and 100 m from the core.
The current design is a cylinder, 1.3 m high and with a 0.5 m radius
for the target,
1.9 m high and 0.8 m radius for the $\gamma$-catcher, and 3.1 m high
with a 1.4 m radius for the buffer.
\begin{figure}
\vspace*{3pt}
\begin{center}
\mbox{\epsfig{figure=angra.eps,width=8.0cm}}
\caption{Design for the Angra Very Near Detector with
A) target of liquid scintillator and Gd, B) $\gamma$-catcher
of scintillator, C) buffer of mineral oil and D,E) muon veto
system of plastic scintillator panels.}
\label{fig:angra}
\end{center}
\end{figure}
\subsection{A comparison of current reactor $\nu$ projects}
Two summaries of some features of the four
current projects are given in Tables \ref{tab:1} and \ref{tab:2}.
These tables were prepared with input from each collaboration's
management in October 2005\cite{bib:pc} and
may not be up to date.
In any case, the exact size and location
of the detectors is subject to further modifications in design, and
the ``optimistic start dates" need to be taken with a huge clump of
salt.
\begin{table}[ht]
\begin{center}
\caption{Comparison of Detectors for four reactor $\nu$ projects.
}
\begin{tabular}{|l|l|l|l|l|}\hline
Project & Power (P) & $<P>$ & Location & Detectors \\ \hline
& $GW_{th}$ & $GW_{th}$ & & km/ton/m.w.e. \\ \hline
& & & & 0.05/1/20 \\
Angra & 6.0 & 5.3 & Brazil & 0.3/50/250 \\
& & & & 1.5/500/2000 \\ \hline
RENO & 17.3 & 16.4 & Korea & 0.15/20/230 \\
& & & & 1.5/20/675 \\ \hline
Daya Bay & 11.6 & 9.9 & China & 0.36/40/260 \\
& (17.4 after 2010) & (14.8 after 2010) & & 0.50/40/260\\
& & & & 1.75/[40x2]/910 \\ \hline
Double & 8.7 & 7.4 & France & 0.27/10.2/90 \\
Chooz & & & & 1.067/10.2/300 \\ \hline
\end{tabular}
\label{tab:1}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{Comparison of Physics for reactor experiments. (* For Daya Bay
after 2010)}
\begin{tabular}{|l|l|r|l|c|c|l|}\hline
Project& Start Date & GW-t-yr & 90\% CL & for $\Delta m^2_{32}$ & efficiencies & Far event
\\
& optimistic & (yr) & $\sin^2(2\theta_{13})$ & ($10^{-3}eV^2$) & & rate \\ \hline
& & 3900(1) & 0.0070 & & & \\
Angra & 2013(full) & 9000(3) & 0.0060 & 2.5 & 0.8$\times$0.9 & 350,000/yr \\
& & 15000(5) & 0.0055 & 2.5 & & \\ \hline
RENO & 2009 & 340(1) & 0.03 & 2.0 & 0.8 & 18,000/yr \\ \hline
Daya Bay & 2009 & 3700(3) & 0.008 & 2.5 & 0.75$\times$0.83 & 70,000/yr\\
& & & & & & 110,000/yr$^*$ \\ \hline
Double & 2007(far) & 29(1) & 0.08 & & & \\
Chooz & 2008(near) & 29(1+1) & 0.04 & 2.5 & 0.8$\times$0.9 & 15,000/yr \\
& & 80(1+3) & 0.025 & & & \\ \hline
\end{tabular}
\label{tab:2}
\end{center}
\end{table}
\par A comparison of the philosophy of the new reactor projects can
be discerned by a critical examination of Figure \ref{fig:lindner}.
The thick curve shows the evolution of $\theta_{13}$ sensitivity with reactor
luminosity for a particular set of assumptions about the
detector locations, $\Delta m^2_{32}$ and systematic error.
The curve shows a transition from near the sensitivity of the rate-only
test to near the sensitivity of the shape-only test between 200 and 2000
GW-ton-year.
The
Double Chooz and RENO projects aim to quickly improve the limit by
reaching the ``transition" near 200 GW-ton-year. Daya Bay
has adopted a goal to work hard to reduce systematic errors
below the assumptions of Figure \ref{fig:lindner}. It will reach
$\sin^2(2\theta_{13}) \sim 0.01$ with 2000 GW-ton-year, perhaps by using movable
detectors. Angra's strategy is to build a much larger far
detector with $> 10,000$ GW-ton-year to make it less sensitive
to systematic error. Depending, of course, on $\Delta m^2_{32}$, it is
reasonable for the field to have a sensitivity goal of $\sin^2(2\theta_{13}) \sim 0.01$,
as might be achievable with the Daya Bay or Angra experiments.
However, as can be seen from Figure \ref{fig:lindner}, the luminosity
requirement for 0.01 is 70 times larger than for 0.03, following the
thick curve. In that sense, a 0.01 experiment is 70 times harder
than a 0.03 experiment, and the earlier and less expensive Double
Chooz and RENO projects can be valuable steps on the learning curve
for a successful 0.01 experiment.
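The GW-ton-year exposures quoted in the tables are essentially the product of average thermal power, far-detector target mass, and live time. A minimal sketch of this figure of merit (the function name and the round numbers are illustrative, not the experiments' actual bookkeeping, which also folds in efficiencies and baselines):

```python
def exposure_gw_ton_yr(avg_power_gw, target_mass_ton, years):
    """Rough reactor-experiment figure of merit: exposure in GW-ton-years.

    A simple product of average thermal power, far-detector target mass,
    and live time; detection efficiencies and baseline geometry are
    accounted for separately by each experiment, so this is only an
    order-of-magnitude guide.
    """
    return avg_power_gw * target_mass_ton * years

# Hypothetical round numbers (not the table entries): a 10 GW complex
# viewed by an 80 ton far detector for 3 years.
print(exposure_gw_ton_yr(10.0, 80.0, 3.0))  # 2400.0
```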
\par It would be desirable to compare the real schedules of these
four projects. All four projects have some funding, though not necessarily
enough to reach their design goals (yet).
Even
though it is the cheapest experiment, Double Chooz's schedule is limited
only by funding. The other three projects, which will require
considerably more
civil construction, have schedules that are probably limited
both by funding and by technical considerations.
\newpage
\section{The near future}
\par The earliest results from reactor experiments may be three years or more
away. However, at the next Neutrino Telescopes meeting in 2009, observers
will be able to gauge progress by paying attention to the following
subjects:
\begin{itemize}
\item Updated estimates of GW-ton-year as the final detector design
and efficiencies are completed,
\item Liquid scintillator production and stability and attenuation
length studies using large amounts of liquid scintillator,
\item Civil construction issues and, in particular, experience with
costs and schedules,
\item Improved estimates of the background for
cosmogenic sources such as $^9$Li. (It may be possible to achieve
an improved understanding of the
possible production mechanisms for cosmogenic sources. In any case,
each experiment should carefully estimate the range of uncertainty of
their background estimates, the impact that uncertainty would have on
the $\theta_{13}$ sensitivity, and quantitative methods for measurements that
will lead to a reduction of the uncertainty when data taking is
underway.)
\item Calibration system development and the results of relative
calibration measurements between two or more detectors.
\item Progress in the implementation of movable detectors as
a calibration technique, and evidence as to whether this is a reliable
method, given the progress or absence of changes when the detectors
are moved.
\end{itemize}
\section{The longer term}
Due to the importance of $\theta_{13}$ for CP violation and the mass hierarchy,
a potential long-term program of reactor neutrino measurements
lies ahead of us.
Results from Double Chooz, Daya Bay, RENO, and later Angra,
will be used to determine
the value of upgrades, additional detectors, and new projects. An
important factor will be whether the goal becomes further limits on
a small value of $\theta_{13}$, or more precise measurements of a non-zero
value. Statistical precision better than $\delta(\sin^2(2\theta_{13})) < 0.01$
can be imagined, but experience with systematic errors and backgrounds
must be weighed along with the capabilities and needs of accelerator
experiments. Ideas already exist for more ambitious reactor experiments
to study $\theta_{13}$ further, as well as $\theta_{12}$. Some examples
are Triple Chooz\cite{bib:triple}, R2D2\cite{bib:r2d2} and
Hano Hano\cite{bib:learned}. If such projects become reality, they
will certainly be based on lessons not yet learned by Double Chooz,
Daya Bay, RENO and Angra.
\section{Acknowledgments}
Thanks to the organizers of Neutrino Telescopes 2007 for
the opportunity to discuss reactor neutrinos. I am indebted to
many colleagues for information in the preparation of this paper,
including Jo{\~a}o dos Anjos, Milind Diwan, Karsten Heeger,
Ernesto Kemp,
Soo-Bong Kim, Thierry Laserre,
Manfred Lindner, Kam-Biu Luk,
Guillaume Mention, David Reyna,
Michael Shaevitz and Patricia Vahle.
\section{Introduction}
The BICEP2 experiment\cite{Ade:2014xna,Ade:2014gua},
dedicated to the observation of the
cosmic microwave background (CMB) polarization,
has announced the detection of primordial B-mode polarization,
based on observations of an approximately 380 square degree
low-foreground region of sky at the South Pole during 2010 to 2012.
The detected B-mode power is in the multipole range $30<\ell<150$,
a clear excess over the base lensed-$\Lambda$CDM model at these
small $\ell$s. This excess cannot be explained by the lensing
contribution, since the CMB lensing contribution to B-mode polarization
peaks at $\ell\sim1000$, while the contributed power at $\ell\sim100$
is small. The BICEP team has also examined possible systematic errors and
potential foreground contamination and excluded these as the
source of the observed B-mode power. The
cross-correlation between frequency bands shows little
change in the observed amplitude, implying that frequency-dependent
foregrounds are not the dominant contributor.
The presence of B-modes induced by primordial gravitational
waves in the early universe provides direct evidence for inflation.
The tensor mode contribution to the CMB anisotropy may affect the
global fitting of the cosmological parameters. The BICEP group reported their
measured value of tensor-to-scalar ratio as
$r=0.20^{+0.07}_{-0.05}$, based on the lensed-$\Lambda$CDM+tensor
model, and derived from importance sampling of the Planck MCMC chain
using the direct likelihood method, but they did not give constraints
on other parameters. The unexpectedly large tensor-to-scalar
ratio has inspired much interest in re-examining
inflation models
\cite{ Hertzberg:2014aha,Choudhury:2014kma,Ma:2014vua,Gong:2014cqa,Xia:2014tda,Cai:2014bda}
and observational datasets\cite{Zhao:2014rna,Zhao:2010ic,Zhang:2014dxk}.
In this paper, we use the newly published BICEP2 CMB B-mode data,
combined with the Planck CMB temperature data\cite{Collaboration:2013uv},
the WMAP 9 year CMB polarization data\cite{Hinshaw:2013dd, Bennett:2013ew},
and the BAO data from the SDSS DR9\cite{Anderson:2013jb},
SDSS DR7\cite{Padmanabhan:2012ft}, 6dF\cite{Beutler:2011ea},
to constrain the cosmological parameters in the lensed $\Lambda$CDM model.
We derive constraints on the lensed $\Lambda$CDM model
using the publicly available code COSMOMC\cite{Lewis:2002ah},
which implements a Metropolis-Hastings algorithm to perform
a MCMC simulation in order to fit the cosmological
parameters. This method also provides reliable error estimates
on the measured variables.
Previous CMB observations from the Planck satellite, the WMAP satellite and
other CMB experiments yielded a much smaller limit on the
tensor-to-scalar ratio, $r<0.11$ (at $95\%$ C.L.)\cite{Collaboration:2013uv},
so there is some tension between these results and the BICEP result,
at least in the simplest lensed $\Lambda$CDM+tensors model.
As pointed out by the BICEP team\cite{Ade:2014xna}, a simple way to
relax this tension is to take the running of the spectral index into
account; we explore this possibility in our fit.
There is also widespread
interest in the tensor power spectral index, as it
is an important additional source of information for
distinguishing inflation models \cite{2014arXiv1403.5922A,2014arXiv1403.5163G,2014arXiv1403.4927B}, and a blue tensor power spectrum tilt $n_t\sim 2$ has
been reported using the B-mode measurement\cite{2014arXiv1403.5732G}. Here
we shall also investigate this problem and obtain an estimate of
$n_t$ and its measurement error.
\section{The fitting of cosmological parameters}
We explore the cosmological parameter space and
obtain limits on cosmological parameters by using the MCMC technique with
the CosmoMC code \cite{Lewis:2002ah}. In our simulation we
collected about 500000 chain samples; the first 1/3 of each chain
was discarded as burn-in and not used in the final analysis.
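The burn-in step amounts to dropping an initial fraction of each chain before computing statistics. A minimal sketch of this post-processing step (the toy chain and helper name are illustrative; CosmoMC's own chain files have a different format):

```python
import numpy as np

def discard_burn_in(chain, burn_frac=1.0 / 3.0):
    """Drop the first `burn_frac` of MCMC samples as burn-in."""
    n_burn = int(round(len(chain) * burn_frac))
    return chain[n_burn:]

# Toy chain of 9 samples: the first 3 are discarded, 6 are kept.
chain = np.arange(9.0)
kept = discard_burn_in(chain)
print(len(kept), kept[0])  # 6 3.0
```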
In addition to the BICEP data \cite{Ade:2014xna}, we use the
Planck CMB temperature data\cite{Collaboration:2013uv},
the WMAP 9 year CMB polarization data\cite{Hinshaw:2013dd, Bennett:2013ew},
and the BAO data from the SDSS DR9\cite{Anderson:2013jb},
SDSS DR7\cite{Padmanabhan:2012ft}, 6dF\cite{Beutler:2011ea} in our cosmological
parameter fitting. Below, we use the following labels to denote the
different data sets included in the fitting:
\begin{itemize}
\item Planck + WP : Planck high $\ell$, low $\ell$\cite{Collaboration:2013uv}, and WMAP9 polarization data\cite{Hinshaw:2013dd, Bennett:2013ew}.
\item Planck + WP + BAO : add BAO data from SDSS DR9\cite{Anderson:2013jb},
SDSS DR7\cite{Padmanabhan:2012ft}, 6dF\cite{Beutler:2011ea}.
\item Planck + WP + BAO + BICEP : add BICEP data\cite{Ade:2014xna,Ade:2014gua}.
\end{itemize}
As noted by the BICEP group, in order to be compatible with the
Planck data, the running of the spectral tilt $\alpha_s$ is needed.
We shall consider a $\Lambda$CDM model, and
assume that the scalar perturbations are purely adiabatic, and the scalar and
tensor mode power spectra are parameterized by
\begin{eqnarray}
P_\zeta \left( k \right) &\equiv& A_s \left( \frac{k}{k_0} \right)^{n_s - 1 + \frac{1}{2} \alpha_s \ln \frac{k}{k_0} } \;\; \ ,
\label{parametrizationP}\\
P_t \left( k \right) &\equiv& A_t \left( \frac{k}{k_0} \right)^{n_t } \;\; \ ,
\label{parametrizationPT}
\end{eqnarray}
where $k_0 = 0.05$ Mpc$^{-1}$ is the pivot scale,
roughly in the middle of the logarithmic
range of scales probed by the WMAP and Planck experiments. The parameter
$\alpha_s$ denotes the running of the scalar
spectral tilt\cite{1995PhRvD..52.1739K}
with $\alpha_s = d \, n_s / d \, {\rm ln } \, k$.
The primordial tensor-to-scalar ratio is defined by $r\equiv A_t /A_s$ at a
chosen pivot scale, for example $r_{0.05}$ is defined
at $k_0=0.05 \ensuremath{\,{\rm Mpc}}^{-1}$, and $r_{0.002}$ at $k_0=0.002 \ensuremath{\,{\rm Mpc}}^{-1}$.
The relation between $r_{0.05}$ and $r_{0.002}$ can be inferred from Eqs.~(\ref{parametrizationP}) and (\ref{parametrizationPT}):
\begin{eqnarray}
r_{0.002}=r_{0.05} \frac{0.04^{n_t } }{ 0.04^{n_s - 1 + \frac{1}{2} \alpha_s \ln 0.04 } } ~ .
\label{eq:r0.05-r0.002}
\end{eqnarray}
Throughout
this paper, $r$ without the subscript (as in our plots)
is $r_{0.002}$. In the Planck Collaboration paper
XVI (2013)\cite{Collaboration:2013uv}, $n_t$ is
assumed to be close to zero and to satisfy the single-field slow-roll inflation
consistency relation
\begin{equation}
n_t = -\frac{r}{8} ~ .
\label{eq:consistency_relation}
\end{equation}
Note that in Eq.(\ref{eq:consistency_relation}), $n_t$ and $r$ should be
defined at the same pivot scale.
The BICEP group adopted the same assumption, and applied the importance
sampling method on the Planck MCMC chains \cite{Collaboration:2013uv} with the
addition of the B-mode data to obtain constraints on $r$ and $n_s$ \cite{Ade:2014xna}.
Here we study the more general case, with $n_t$ and $r$ treated
as independent parameters.
We fix the effective number of neutrinos as $N_{eff}=3.046$, and the
sum of neutrino masses as the Planck best fit $\sum m_\nu =0.06 \ensuremath{\,{\rm eV}}$.
The lensing amplitude parameter $A_L$ is fixed to $1$,
and we put flat priors on all fitting parameters.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{graph/triangle_power_simplest.eps}
\caption{Joint constraints on primordial power spectrum parameters.}
\label{fig:triangle-pw}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{graph/triangle_cosmo_simplest.eps}
\caption{Constraints on other cosmological parameters.}
\label{fig:triange-other-param}
\end{center}
\end{figure}
The constraints on the primordial power spectrum parameters are shown
in Fig.\ref{fig:triangle-pw},
and constraints on other parameters in Fig.\ref{fig:triange-other-param}.
The marginalized $68\%$ bounds on the parameters based on different datasets
are listed in Table \ref{tb:params}.
For the scalar spectral index and running, using the Planck + WP + BAO + BICEP dataset,
we obtain
$$n_s=0.9617 \pm {0.0061}, \qquad \alpha_s = -0.0175\pm_{ 0.0097}^{ 0.0105}$$
while for the Planck + WP + BAO data
$$n_s=0.9616 \pm{0.0061}, \qquad \alpha_s = -0.0148\pm_{ 0.0095}^{ 0.0108}$$
(see Table \ref{tb:params}); the central value of $\alpha_s$
is $0.0027$ smaller after including the BICEP2 data.
The decrease in $\alpha_s$ reduces the $TT$ angular power in the
small $\ell$ region (see Fig.\ref{fig:diff_alpha}), and helps to alleviate
the tension between the high $r$ value obtained with BICEP data and the limit derived
from the large scale $TT$ auto-correlation power from Planck experiment.
The effect of decreasing $\alpha_s$ on the other power spectra is that it
could lower the matter power spectrum at very large
and very small scales. If future galaxy surveys
can probe the matter power spectrum at extremely large scales,
the constraint on $\alpha_s$ can be further improved.
For the tensor-to-scalar ratio $r$, the BICEP group
reported a value of $r = 0.20^{+0.07}_{-0.05}$ based on their fit
to the CMB power spectrum, with the consistency relation
Eq.(\ref{eq:consistency_relation}).
Using the Planck + WP + BAO + BICEP dataset, we obtain
$$ r = 0.1043\pm_{ 0.0914}^{ 0.0307},$$
where we have taken $n_t$ and $r$ as independent free parameters.
If we fix $n_t$ to the single-field inflation consistency relation value,
as the Planck and BICEP2 groups did,
we obtain a higher value of $r$ and can also
place a tighter constraint on it.
We compare the two-dimensional likelihood contours of $r$ vs $n_t$
with and without the single-field inflation consistency relation
in Fig.\ref{fig:contour_r02_ns}.
Blue dashed curves in Fig.\ref{fig:contour_r02_ns} show the result
if we impose the single-field slow roll inflation consistency relation
$n_t=-r/8$ in the fitting, with
$r=0.2130\pm_{0.0609}^{0.0446}$ (1$\sigma$ error bar). Note that with the
consistency relation, the $r$ value is significantly higher than the
one without.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{graph/contour_r02_ns.eps}
\caption{Comparison of the two-dimensional likelihood contours obtained
with and without the single-field inflation consistency relation.
The dataset used is Planck + WP + BAO + BICEP.}
\label{fig:contour_r02_ns}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{graph/png_different_dataset/power_result_diff_alpha.eps}
\caption{Comparison of CMB and matter power spectra for
different running index $\alpha$.}
\label{fig:diff_alpha}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3in]{graph/power_result_comp_bicep_us.eps}
\caption{Comparison of the power spectra predicted by two sets of $n_t$ and $r$ parameters.
One set is our best-fit values (red curves); the other is from the BICEP2 paper\cite{Ade:2014xna} (blue dotted curves). }
\label{fig:diff_nt_r}
\end{center}
\end{figure}
For the tensor spectrum index $n_t$, using the Planck + WP + BAO + BICEP
dataset and without consistency relation, we obtain
$$ n_t = 0.5198\pm_{ 0.4579}^{ 0.4515}.$$
This result shows that a blue tilt is slightly favored,
but a flat or even red tilt is still consistent with the current data.
So, without imposing the consistency relation,
the best fitting values of $r_{0.002}$ and $n_t$ we obtain are $r \sim 0.1$
and $n_t \sim 0.52$, while the BICEP group reported
$r \sim 0.2$ and $n_t \sim -0.024$ (fixed by the single-field slow-roll
inflation consistency relation prior) \cite{Ade:2014xna}.
We plot the CMB power spectra and matter power spectra according
to these two fits in Fig.\ref{fig:diff_nt_r}.
The red and blue curves overlap for most $\ell$ values; the
main difference is at very large scales ($\ell<15$), where the statistics
are poor, so it is hard to distinguish the two cases with the current
observational data.
Constraints on the other cosmological parameters are shown
in Fig.\ref{fig:triange-other-param} and Table \ref{tb:params}.
Adding the BAO data tightens the constraints on all parameters
and helps break parameter degeneracies. Nevertheless, there is not
much difference in these parameters after the addition of the BICEP data.
\begin{table*}
\begin{center}
\caption{\label{tb:params}
Summary of the best fit values of cosmological parameters and the corresponding
68\% intervals. The ``Best fit'' column is the best fitting value inferred from the minimum $\chi^2$ point in the whole MCMC chains.
The central value in the ``$68\%$ limits'' column corresponds to the peak of the one-dimensional
likelihood distribution.
The ``Planck + WP" column lists the result of using the
temperature map from Planck and polarization map from WMAP9;
``Planck + WP + BAO" column shows the result with BAO data added;
``Planck + WP + BAO+ BICEP" column is the result with BICEP2
data added.
}
\begin{tabular}{c|cc|cc|cc} \hline\hline
& \multicolumn{2}{|c}{Planck + WP + BAO + BICEP}& \multicolumn{2}{|c}{Planck + WP + BAO}& \multicolumn{2}{|c}{Planck + WP}\\
\hline
Parameter & Best fit & $68\%$ limits& Best fit & $68\%$ limits& Best fit & $68\%$ limits\\
\hline
$n_\mathrm{s}$ & $ 0.9618$ & $ 0.9617\pm_{ 0.0061}^{ 0.0061}$ & $ 0.9591$ & $ 0.9616\pm_{ 0.0061}^{ 0.0061}$ & $ 0.9631$ & $ 0.9614\pm_{ 0.0073}^{ 0.0072}$ \\
$r_{0.002}$ & $ 0.0634$ & $ 0.1043\pm_{ 0.0914}^{ 0.0307}$ & $ 0.0015$ & $< 0.0649$ & $ 0.0117$ & $< 0.0684$ \\
$\alpha_\mathrm{s}$ & $ -0.0080$ & $ -0.0175\pm_{ 0.0097}^{ 0.0105}$ & $ -0.0129$ & $ -0.0148\pm_{ 0.0095}^{ 0.0108}$ & $ -0.0090$ & $ -0.0150\pm_{ 0.0094}^{ 0.0109}$ \\
$n_\mathrm{t}$ & $ 0.7293$ & $ 0.5198\pm_{ 0.4579}^{ 0.4515}$ & $ 0.6230$ & $ 0.8324\pm_{ 1.1676}^{ 0.3823}$ & $ 1.3546$ & $ 0.8361\pm_{ 1.1803}^{ 0.3868}$ \\
$\Omega_{\mathrm{m}}$ & $ 0.3015$ & $ 0.3026\pm_{ 0.0094}^{ 0.0094}$ & $ 0.3080$ & $ 0.3032\pm_{ 0.0095}^{ 0.0095}$ & $ 0.3095$ & $ 0.3037\pm_{ 0.0138}^{ 0.0137}$ \\
$\Omega_\Lambda$ & $ 0.6985$ & $ 0.6974\pm_{ 0.0094}^{ 0.0094}$ & $ 0.6920$ & $ 0.6968\pm_{ 0.0095}^{ 0.0095}$ & $ 0.6905$ & $ 0.6963\pm_{ 0.0137}^{ 0.0138}$ \\
$\sigma_8$ & $ 0.8173$ & $ 0.8242\pm_{ 0.0099}^{ 0.0099}$ & $ 0.8218$ & $ 0.8237\pm_{ 0.0098}^{ 0.0098}$ & $ 0.8303$ & $ 0.8236\pm_{ 0.0099}^{ 0.0099}$ \\
$H_0$ & $ 68.2990$ & $ 68.2599\pm_{ 0.7416}^{ 0.7430}$ & $ 67.8668$ & $ 68.2224\pm_{ 0.7437}^{ 0.7472}$ & $ 67.6910$ & $ 68.2043\pm_{ 1.0658}^{ 1.0663}$ \\
$100\theta_{\mathrm{MC}}$ & $ 1.0417$ & $ 1.0415\pm_{ 0.0006}^{ 0.0006}$ & $ 1.0413$ & $ 1.0415\pm_{ 0.0006}^{ 0.0006}$ & $ 1.0411$ & $ 1.0415\pm_{ 0.0006}^{ 0.0006}$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\section{Conclusion}
In this paper, we use the newly published BICEP2 CMB B-mode data,
Planck CMB temperature data\cite{Collaboration:2013uv},
the WMAP 9 year CMB polarization data\cite{Hinshaw:2013dd, Bennett:2013ew}
to constrain the base lensed $\Lambda$CDM model. In addition to the CMB data,
we also use the BAO data
from SDSS DR9\cite{Anderson:2013jb}, SDSS DR7\cite{Padmanabhan:2012ft},
6dF\cite{Beutler:2011ea}, which help to break parameter degeneracy.
For most parameters, we find that the best fit values and
measurement errors are not altered much by the introduction of
the BICEP2 data. The most affected parameters are $r$, $\alpha_s$ and $n_t$.
Combining the Planck + WP + BICEP + BAO dataset, we obtain the following marginalized
$68\%$ bounds on the parameters of interest:
\begin{eqnarray}
r &=& 0.1043\pm_{ 0.0914}^{ 0.0307} ~ , \\
n_s &=& 0.9617\pm_{ 0.0061}^{ 0.0061} ~ , \\
\alpha_s &=& -0.0175\pm_{ 0.0097}^{ 0.0105} ~ , \\
n_t &=& 0.5198\pm_{ 0.4579}^{ 0.4515} ~ .
\label{eq:final}
\end{eqnarray}
We find that a blue tensor tilt ($n_t>0$) is slightly favored,
but a flat or red tilt is consistent with the data.
The best fitting value of $r$ we obtain is slightly smaller
than the BICEP2 group obtained, and our constraint on $r$ is also
looser. This result is reasonable,
as we have not imposed the single-field slow-roll inflation
consistency relation on $n_t$, and have treated it as an independent
parameter. If we impose this relation, we instead
obtain $r=0.2130\pm_{0.0609}^{0.0446}$ ($1\sigma$ error).
In the near future, Planck and other experiments will
provide more data on CMB polarization, and help improve the constraints
on these parameters.
\section*{Acknowledgements}
We thank Antony Lewis for kindly providing us the beta version
of CosmoMC code for testing.
Our MCMC computation was performed on the Laohu cluster in
NAOC and on the GPC supercomputer at the SciNet HPC Consortium.
This work is supported by the Chinese Academy
of Science Strategic Priority Research Program ``The Emergence of
Cosmological Structures'' Grant No. XDB09000000, by the
NSFC grant 11103027, 11373030 and
the Ministry of Science and Technology 863 project grant 2012AA121701.
\section{Abstract}
In this paper, we propose an effective training strategy to extract robust speaker representations from a speech signal.
One of the key challenges in speaker recognition tasks is to learn latent representations or embeddings containing solely speaker characteristic information in order to be robust in terms of intra-speaker variations.
By modifying the network architecture to generate both speaker-related and speaker-unrelated representations, we exploit a learning criterion which minimizes the mutual information between these disentangled embeddings.
We also introduce an identity-change loss criterion which applies a reconstruction error across different utterances spoken by the same speaker.
Since the proposed criteria reduce the variation of speaker characteristics caused by changes in background environment or spoken content, the resulting embeddings of each speaker become more consistent.
The effectiveness of the proposed method is demonstrated through two tasks: disentanglement performance, and improvement of speaker recognition accuracy compared to the baseline model on a benchmark dataset, VoxCeleb1.
Ablation studies also show the impact of each criterion on overall performance.
\noindent\textbf{Index Terms}: speaker verification, disentanglement, mutual information
\section{Introduction}
Speaker recognition systems have been studied for many years due to their usefulness in various applications.
Recently, the accuracy of speaker recognition has dramatically improved due to advances in deep learning and the availability of large-scale datasets for training.
The main objective of deep learning-based speaker recognition is to extract a high dimensional embedding vector such that it uniquely represents the characteristic of each speaker.
The d-vector~\cite{variani2014deep,chung2018voxceleb2} and x-vector~\cite{snyder2018x} are typical examples, where they are estimated via an identity classification task with an encoder style network.
The detailed extraction process differs with respect to the type of network structure and the criterion of the objective function such as softmax, triplet, and angular softmax~\cite{huang2018angular}.
However, given that the extracted embeddings also include speaker-unrelated information, there remains room for further improvement.
To overcome the aforementioned limitation inherent to the encoder style framework, a method for disentangling the embeddings with the use of relevant and irrelevant speaker information was proposed~\cite{Tai2020SEFALDRAS}.
The method consists of two encoders, a speaker purifying encoder and a dispersing encoder, as well as a decoder for reconstruction.
While the speaker purifying encoder is trained by the original speaker classification scheme, the dispersing encoder is trained by an adversarial training scheme designed to fool it from correctly classifying the speaker identity.
The two encoded features are then concatenated and fed to the decoder, which applies a reconstruction loss with respect to the original input so that all information is embedded within the representative features.
In other words, they decompose the entirety of the speech information into speaker identity-related and -unrelated information.
Although the speaker and non-speaker embeddings are learned effectively using the adversarial classifier, the method does not directly address the task of dispersing both embeddings simultaneously in disentanglement.
There is an opportunity to improve the disentanglement performance by adopting a method which considers the relation of embeddings simultaneously.
In this paper, we propose a method to effectively disentangle speaker identity-related and identity-unrelated information using various types of criteria.
We first introduce a criterion for minimizing mutual information between speaker-related and -unrelated representations, which is beneficial because it directly considers the relation between those features.
We also propose a novel identity change criterion which measures the difference between the input and generated mel-spectrums.
The reconstructed mel-spectrum used for the identity change loss is generated via a speaker embedding from one utterance and a residual embedding from the other utterance possessing the same speaker identity.
Since the criterion enforces speaker embeddings to be similar to a different set of utterances, it reduces intra-variation within each speaker's cluster.
The main contributions of this paper are as follows: (1) we propose an effective method for disentangling identity-related and identity-unrelated information using a mutual information criterion through an auto-encoder framework; (2) we introduce a speaker identity change loss criterion to further enhance the performance of speaker embeddings; (3) we use this framework to improve speaker verification performance on benchmark datasets.
The remainder of the paper is organized as follows.
Section 3 presents a brief overview of related works on speaker embedding and disentanglement.
In Section 4, we present the details of the proposed method such as network architectures and loss functions.
Experimental results are presented in Section 5, and the conclusion follows in Section 6.
\section{Related works}
\subsection{Speaker embedding strategy}
Speaker embedding vectors are high level representations (typically obtained via deep neural networks) that aim to compactly represent a speaker's identity. They are very important for many applications such as speaker recognition and diarization.
There are various speaker embedding methods that differ in terms of the type of network architecture, feature aggregation, and training criteria.
Deep learning architectures such as DNN-~\cite{variani2014deep,heigold2016end,snyder2016deep}, CNN-~\cite{nagrani2017voxceleb,chung2018voxceleb2,li2017deep, hajibabaei2018unified,jungimproving}, or LSTM-based ones~\cite{wan2018generalized} first extract the frame-level features from a variable length of utterances.
Then, a pooling method~\cite{snyder2017deep, cai2018exploring,cai2018analysis, xie2019utterance} is used to aggregate the frame-level features to a fixed length of utterance-level.
In terms of the objective function, they are trained by performing a classification task with a criterion of softmax or angular softmax, or a metric learning task using a contrastive loss~\cite{nagrani2017voxceleb, chung2018voxceleb2}, triplet loss~\cite{li2017deep}, etc.~\cite{chung2020defence, kye2020meta}.
Nevertheless, there is still room for improvement if we introduce the concept of target-unrelated information to the extracted embedding features.
\subsection{Disentangled feature learning}
Disentanglement is a learning technique that represents the input signal's characteristics through multiple separated dimensions or embeddings.
Therefore, it is beneficial for obtaining representations that contain certain attributes or for extracting discriminative features.
Adversarial training~\cite{ganin2016domain,zhou2019training, meng2019adversarial, peng2019domain,bhattacharya2019generative} and reconstruction based training~\cite{zhang2019non, chou2018multi,liu2018exploring,eom2019learning,gonzalez2018image} are widely used to obtain disentangled representations.
Tai et al.~\cite{Tai2020SEFALDRAS} proposed a disentanglement method for speaker recognition that is the baseline for our work.
By constructing an identity-related and an identity-unrelated encoder, they trained each encoder to represent only speaker-related and -unrelated information using speaker identification loss and adversarial training loss.
They also adopted an auto-encoder framework to maintain all input speech information within output embeddings.
The information contained in the output embeddings is preserved using spectral reconstruction approaches.
\subsection{Mutual Information Neural Estimator}
Mutual information (MI) based feature learning methods have long been popular, but they are difficult to apply in deep learning-based approaches because computing the MI of high-dimensional continuous variables is hard.
Recently, a mutual information neural estimator (MINE)~\cite{belghazi2018mine} was proposed to estimate mutual information with a neural network architecture.
By definition, the MI is equivalent to the Kullback-Leibler (KL) divergence of a joint distribution, $P_{X,Y}$, and the product of marginals, $P_{X \otimes Y}$.
According to the Donsker-Varadhan representation~\cite{donsker1983asymptotic}, the lower bound of mutual information can be represented by:
\begin{equation}
I(X,Y) \geq \sup_T\mathbb{E}_{P_{X,Y}}[T_\theta]-\log(\mathbb{E}_{P_{X \otimes Y}}[e^{T_\theta}]).
\label{mine}
\end{equation}
The $T$ function is trained by a neural network with the parameter $\theta$, for which the output can be considered to be an approximated value of mutual information between $X$ and $Y$.
It has been widely used in recent works on feature learning~\cite{hjelm2018learning, ravanelli2018learning, sanchez2019learning}.
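The Donsker--Varadhan bound above can be checked numerically. The sketch below is an illustration, not the paper's implementation: it uses NumPy and a fixed quadratic critic $T(x,y)=xy/2$ in place of the trained network $T_\theta$. The empirical bound is clearly positive for strongly correlated Gaussians and drops below zero for independent ones.

```python
import numpy as np

def dv_bound(x, y, T, rng):
    """Empirical Donsker-Varadhan lower bound on I(X;Y) for a fixed critic T.

    Joint samples are the paired (x, y); samples from the product of
    marginals are obtained by shuffling y to break the pairing.
    """
    joint_term = T(x, y).mean()
    y_shuffled = rng.permutation(y)
    marginal_term = np.log(np.exp(T(x, y_shuffled)).mean())
    return joint_term - marginal_term

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y_dep = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.standard_normal(n)  # correlated
y_ind = rng.standard_normal(n)                                   # independent

T = lambda a, b: 0.5 * a * b   # fixed toy critic; MINE trains T as a network
mi_dep = dv_bound(x, y_dep, T, rng)   # bound is positive here
mi_ind = dv_bound(x, y_ind, T, rng)   # bound is (slightly) negative here
```

With a trained critic, the bound would approach the true MI; with this fixed critic it merely separates the dependent and independent cases.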
\section{Proposed Method}
The main goal of the proposed algorithm is to extract a high-level latent embedding that contains only speaker-related information.
To achieve this goal, we propose a disentanglement method to decouple speaker information from an input signal such that the embedding represents the speaker's identity while remaining robust to variations in linguistic information.
\subsection{Overview of the proposed algorithm}
Figure~\ref{fig:overall} illustrates the proposed training strategies in our disentanglement method.
Our network consists of three modules: a speaker encoder $E_{spk}$, a residual encoder $E_{res}$, and a decoder $D_r$.
$f_{spk}$ and $f_{res}$ are respectively the output features of encoders $E_{spk}$ and $E_{res}$.
Our method reconstructs the mel-scaled spectrum instead of the magnitude spectrum so that it efficiently disentangles embeddings without losing speaker information.
The network is trained with the learning criteria used in the baseline model, depicted in Figure~\ref{baseloss}, together with an auxiliary loss that minimizes the intra-cluster variance: speaker loss, disentanglement loss, reconstruction loss, and our novel criterion -- {\em identity change} loss.
Also, we replace the disentanglement loss, which uses an adversarial classifier on the residual embedding in the baseline method, with the mutual information between $f_{spk}$ and $f_{res}$.
Details of each criterion are described in Section~\ref{subsec:training_objective}.
\subsection{Training Objective}
\label{subsec:training_objective}
In this section, we describe the details of the proposed method with the objective functions used for training: speaker loss $L_S$, disentanglement loss $L_{MI}$, reconstruction loss $L_R$ and identity change loss $L_{IC}$.
The total objective function of the proposed method consists of four loss functions:
\begin{equation}
\begin{split}
L_{total}=&\lambda_1L_S+\lambda_2L_{MI}+\lambda_3L_R+\lambda_4L_{IC}.
\end{split}
\label{total}
\end{equation}
The hyper-parameters are set based on experimental results, $[\lambda_1, ... ,\lambda_4] = [1, 0.1, 0.1, 0.1]$.
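As a minimal illustration of the weighted combination (the per-criterion loss values below are made up, not results from the paper):

```python
# Hypothetical per-criterion loss values for one mini-batch.
losses = {"L_S": 2.31, "L_MI": 0.42, "L_R": 0.87, "L_IC": 0.95}
weights = {"L_S": 1.0, "L_MI": 0.1, "L_R": 0.1, "L_IC": 0.1}  # lambda_1..lambda_4

# Total objective: weighted sum of the four criteria.
total_loss = sum(weights[k] * losses[k] for k in losses)
```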
\vspace{2pt}
\noindent\textbf{Speaker loss.}
The objective of the speaker loss is to embed the speaker representation $f_{spk}$ into the latent space using the encoder $E_{spk}$, as done in~\cite{ nagrani2017voxceleb, huang2018angular, li2017deep, wan2018generalized}.
Following the baseline model, the speaker encoder is trained in a speaker label classification task using a cross-entropy criterion.
The loss function is denoted as:
\begin{equation}
L_{S}=-\sum_{i=1}^C t_i \log(softmax(f_{spk})_i),
\end{equation}
where $C$ is the number of speakers and $t$ is the label index.
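For concreteness, the cross-entropy above can be sketched in NumPy as follows; the toy logits stand in for the classifier output on $f_{spk}$ (an illustrative example, not the actual network):

```python
import numpy as np

def speaker_loss(logits, label):
    """Cross-entropy over C speakers for a single utterance (the L_S criterion).

    `logits` plays the role of the (linearly projected) speaker embedding
    fed to the softmax classifier; `label` is the speaker index t.
    """
    z = logits - logits.max()             # numerically stabilised softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

logits = np.array([2.0, 0.5, -1.0])       # toy 3-speaker example
loss = speaker_loss(logits, label=0)      # small: label 0 already dominates
```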
\vspace{2pt}
\noindent\textbf{Disentanglement loss.}
In the disentanglement mechanism, the residual embedding $f_{res}$ contains information which is not included in the speaker vector $f_{spk}$.
The baseline method adopts adversarial classification to embed the residual of the speaker characteristics.
The adversarial classifier shares the network parameters used in the speaker loss, whereas its objective is to eliminate the speaker information by fooling the classifier.
The residual encoder $E_{res}$ is trained not to predict any particular speaker label by targeting the uniform distribution, and its definition is as follows:
\begin{equation}
L_{adv}= {1 \over C} \sum_{j=1}^{C}\log(\emph{softmax}(f_{res})_j),
\end{equation}
where $C$ is the number of classes.
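A small sketch of this criterion (illustrative NumPy, not the paper's code) shows that $L_{adv}$ peaks at $\log(1/C)$ exactly when the classifier output is uniform:

```python
import numpy as np

def adv_loss(logits):
    """Average log-probability over all C speaker classes (L_adv).

    This quantity is maximised (at log(1/C)) when the classifier output
    is the uniform distribution, i.e. when the residual embedding
    carries no speaker information.
    """
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return log_probs.mean()

C = 4
uniform = adv_loss(np.zeros(C))                    # equals log(1/C)
peaked = adv_loss(np.array([5.0, 0.0, 0.0, 0.0]))  # confident prediction
```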
In our strategy, we attempt disentanglement using mutual information between $f_{spk}$ and $f_{res}$ instead of adversarial learning.
Since {\em genuine} disentanglement is achieved by dispersing the residual information, not merely by embedding the features separately, we consider both $f_{spk}$ and $f_{res}$ in the disentanglement criterion.
Here, we adopt the MINE method, which estimates the correspondence between embeddings using deep learning.
In~\cite{ravanelli2018learning}, MINE controls the information difference between speakers: minimizing it for the same speaker and maximizing it for different speakers.
MINE, in our paper, maximizes the discrepancy between the disentangled features ($f_{spk}$, $f_{res}$) and minimizes it between speaker representations extracted from different segments of the same speech signal, as shown in Figure~\ref{mineloss}.
The criterion is designed as Equation~\ref{eq:mi_loss}.
\begin{equation}
\label{eq:mi_loss}
\begin{split}
L_{MI} = \mathbb{E}[T_\theta(f_{spk}^{A},f_{spk}^{A'})] -\log\Big(\mathbb{E}\Big[e^{T_\theta(f_{spk}^{A},f_{res}^{A})}\Big]\Big) \\
+ \mathbb{E}[T_\theta(f_{spk}^{A'},f_{spk}^{A})] -\log\Big(\mathbb{E}\Big[e^{T_\theta(f_{spk}^{A'},f_{res}^{A'})}\Big]\Big),
\end{split}
\end{equation}
where $f_{spk}^A$ and $f_{spk}^{A'}$ are speaker embeddings extracted from the same speech signal with different offsets, and $f_{res}^A$ and $f_{res}^{A'}$ are the corresponding residual embeddings.
This criterion retains the information common to the two speaker embeddings while dispersing the residual information away from them.
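Given batches of the four embeddings and a critic $T$, the symmetrised criterion of Equation~\ref{eq:mi_loss} can be sketched as below. This is an illustration with a toy per-pair inner-product critic; in the paper $T_\theta$ is a trained network.

```python
import numpy as np

def mi_loss(f_spk_a, f_spk_a2, f_res_a, f_res_a2, T):
    """Symmetrised MINE-style criterion, written for batches of embeddings."""
    term = lambda s1, s2, r: T(s1, s2).mean() - np.log(np.exp(T(s1, r)).mean())
    return term(f_spk_a, f_spk_a2, f_res_a) + term(f_spk_a2, f_spk_a, f_res_a2)

# Toy critic: per-sample inner product of two embedding batches.
T = lambda a, b: (a * b).sum(axis=1)

f_spk_a  = np.array([[1.0, 0.0], [0.0, 1.0]])
f_spk_a2 = f_spk_a.copy()               # same speaker, different offset
f_res_a = f_res_a2 = np.zeros((2, 2))   # residuals share nothing with f_spk here
loss = mi_loss(f_spk_a, f_spk_a2, f_res_a, f_res_a2, T)
```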
\vspace{2pt}
\noindent\textbf{Reconstruction loss.}
The disentangled embeddings $f_{spk}$ and $f_{res}$ preserve the spectral information of the input spectrum when they are combined. The decoder $D_r(f_{spk},f_{res})$ is trained to generate a reconstructed spectrum from the concatenated embeddings.
The reconstruction loss $L_R$ is defined by measuring the distance between the input and the reconstructed spectrum using an MSE criterion as follows:
\begin{equation}
L_{R}=||D_r(f_{spk},f_{res}) - S_{mel} ||^2,
\end{equation}
where $S_{mel}$ is a mel-spectrum of the input speech signal $S$.
Reconstructing the mel-spectrum instead of a magnitude spectrum reduces the burden on the decoder during the spectrum generation process, while it still ensures that the embeddings contain all the information of the input.
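A minimal NumPy sketch of $L_R$ with a toy stand-in decoder (the real $D_r$ is a transposed-convolution network, described later):

```python
import numpy as np

def recon_loss(decoder, f_spk, f_res, s_mel):
    """Squared-error reconstruction loss L_R between the decoded
    mel-spectrum and the input mel-spectrum S_mel."""
    return np.sum((decoder(f_spk, f_res) - s_mel) ** 2)

# Toy stand-in decoder: outer product of the two embeddings.
decoder = lambda spk, res: np.outer(spk, res)
f_spk, f_res = np.array([1.0, 2.0]), np.array([0.5, 0.5])
s_mel = np.outer(f_spk, f_res)            # perfectly reconstructable target
loss = recon_loss(decoder, f_spk, f_res, s_mel)
```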
\vspace{2pt}
\noindent\textbf{Identity change~(IC) loss.}
The intra-class variance inevitably present in each speaker cluster is caused by variations in linguistic content, recording environment, and the speaker's emotional or health state.
To further improve speaker recognition performance by minimizing the intra-class variances in speaker clusters, we propose the identity change loss.
Instead of minimizing the intra-class variance directly, we use a reconstruction loss criterion that measures the spectral distance between the reference spectrum and the reconstructed one.
Since the reconstructed mel-spectrum is generated by substituting the identity embedding with the one extracted from different utterances spoken by the same speaker, we may obtain perfect reconstruction only when the substitute embedding has the same distribution as the original identity.
The identity change loss is described in Equation~\ref{eq:ic_loss}.
\begin{equation}
\begin{split}
L_{IC}=&\|{\hat{S}_{A}-S_A}\|^2 +\|{\hat{S}_{B}-S_B}\|^2,\\
\hat{S}_{A}=&D_r\bigg(\frac{f_{spk}^A+f_{spk}^{B}}{2}, f_{res}^A\bigg),\\
\hat{S}_{B}=&D_r\bigg(\frac{f_{spk}^A+f_{spk}^{B}}{2}, f_{res}^B\bigg),
\end{split}
\label{eq:ic_loss}
\end{equation}
where $S_A$ and $S_B$ are the mel-spectrum of speech signals $A$, $B$ spoken by the same speaker, and $\hat{S}_A$ and $\hat{S}_{B}$ are the reconstructed mel-spectrum using substituted identities.
In the proposed method, $f_{spk}^A$ and $f_{spk}^B$ are substituted with the mean of the two identities, as depicted in Figure~\ref{idloss}; this guides the direction in which the speaker embeddings are gathered to minimize the intra-class variance.
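The mean-identity substitution of Equation~\ref{eq:ic_loss} can be sketched as follows (illustrative NumPy with a toy outer-product decoder; all embedding values are made up):

```python
import numpy as np

def identity_change_loss(decoder, f_spk_a, f_spk_b, f_res_a, f_res_b, s_a, s_b):
    """Identity change loss L_IC: both utterances are decoded with the
    *mean* of their two speaker embeddings."""
    f_mean = (f_spk_a + f_spk_b) / 2
    s_a_hat = decoder(f_mean, f_res_a)
    s_b_hat = decoder(f_mean, f_res_b)
    return np.sum((s_a_hat - s_a) ** 2) + np.sum((s_b_hat - s_b) ** 2)

decoder = lambda spk, res: np.outer(spk, res)   # toy stand-in decoder
make_target = lambda spk, res: np.outer(spk, res)
f_res = np.array([1.0, 1.0])

# Identical speaker embeddings for the two utterances give zero loss ...
e = np.array([1.0, 0.0])
same = identity_change_loss(decoder, e, e, f_res, f_res,
                            make_target(e, f_res), make_target(e, f_res))
# ... while intra-speaker drift between the two embeddings is penalised.
e1, e2 = np.array([2.0, 0.0]), np.array([0.0, 0.0])
apart = identity_change_loss(decoder, e1, e2, f_res, f_res,
                             make_target(e1, f_res), make_target(e2, f_res))
```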
\begin{table}[t]
\centering
\caption{Verification results on VoxCeleb1 test set. S, C and AM are Softmax, Contrastive and Angular margin loss, respectively.}
\begin{tabular}{c | c c c c}
\toprule
&\bf Model & \bf Criterion &\bf EER \\
\midrule\midrule
Chung \emph{et al.}~\cite{chung2018voxceleb2} & Encoder& S + C & 5.04\% \\
Xie \emph{et al.}~\cite{xie2019utterance} & Encoder & S & 5.02\% \\
\midrule
Tai \emph{et al.}~\cite{Tai2020SEFALDRAS} & Enc(2)+Dec& S & 3.83\% \\
\midrule
\multirow{2}{*}{\bf Proposed} & Enc(2)+Dec & S & \textbf {3.18\%} \\
& Enc(2)+Dec & AM & \textbf {2.54\%} \\
\bottomrule
\end{tabular}
\vspace{-10pt}
\label{table:1}
\end{table}
\section{Experiments}
\subsection{Dataset configuration}
We train our model on VoxCeleb2~\cite{chung2018voxceleb2}, which is a large-scale audio-visual dataset containing over 1 million utterances for 5,994 celebrities, extracted from YouTube videos.
We evaluate our model on VoxCeleb1~\cite{nagrani2017voxceleb} test set which consists of 677 clips spoken by 40 speakers.
Clips are segmented into 3 seconds with a random offset from each utterance for training.
They are sliced every 10\,ms with a 25\,ms window length and transformed into a log-magnitude spectrum with an FFT size of 512; thus, the dimension of the input speech features is $300 \times 257$.
For reconstruction, we prepare log-scale mel-spectrograms using 64 mel-filterbanks as outputs.
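The stated input dimension follows directly from these settings; a quick check:

```python
clip_sec, hop_ms, n_fft = 3.0, 10, 512

n_frames = int(clip_sec * 1000 / hop_ms)  # 3 s at a 10 ms hop -> 300 frames
n_bins = n_fft // 2 + 1                   # one-sided FFT spectrum -> 257 bins
input_shape = (n_frames, n_bins)          # matches the stated 300 x 257
```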
\subsection{Implementation details}
The structures of the speaker encoder and the residual encoder are designed based on ResNet34 with small changes to the pooling strategy.
Both encoders use a time average pooling (TAP) method to embed variable length input features into a fixed dimension of utterance level.
The decoder consists of 3 fully-connected layers and 9 transposed convolutional layers, following~\cite{radford2015unsupervised}.
In the training phase, the batch size of the input is set to 32 and the model is trained with the Adam optimizer~\cite{kingma2014adam}.
The learning rate is set to 1e-3 and reduced by half every 10 epochs until convergence.
\begin{table}[t]
\centering
\caption{Ablation study of the proposed method}
\begin{tabular}{ c | c c c c c | c }
\toprule
& $L_S$ & $L_R$ & $L_{adv}$ & $L_{MI}$ & $L_{IC}$ & EER (\%) \\
\midrule
Baseline & \checkmark & \checkmark & \checkmark & - & - & 3.83\% \\
\midrule
\multirow{4}{*}{\bf Proposed}
& \checkmark & \checkmark & \checkmark & \checkmark & - & 3.71\% \\
& \checkmark & \checkmark & - & \checkmark & - & 3.81\% \\
& \checkmark & \checkmark & \checkmark & - & \checkmark & 3.59\% \\
& \checkmark & \checkmark & - & \checkmark & \checkmark & \bf 3.18\% \\
\bottomrule
\end{tabular}
\label{table:2}
\vspace{-10pt}
\end{table}
\subsection{Training strategy}
\noindent \textbf{Phase I. Disentanglement training.}
In phase I, the network is pre-trained using speaker loss, disentanglement loss and reconstruction loss, similar to the baseline strategy.
Depending on the experimental setup, either the adversarial loss or the mutual information loss is used.
\vspace{2pt}
\noindent \textbf{Phase II. Identity change training.}
During phase II, we consider an efficient training strategy for identity change loss.
The motivation is to disperse information by fixing one embedding as an anchor while the other embedding adapts stably.
The detailed process is shown below and the stages are processed recursively:
\begin{enumerate}
\item {\em Intra-class minimization} -- The identity is replaced by the mean of two identities to generate mel-spectrogram, and its reconstruction error $L_{IC}$ is minimized through backpropagation on the decoder and residual encoder.
\item {\em Adaptation} -- The original identity is ingested on the decoder and the parameters of the decoder and the speaker encoder are updated to minimize reconstruction error $L_{R}$.
\end{enumerate}
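The recursion above alternates which modules receive gradients at each stage; a schematic schedule (hypothetical helper, for illustration only) makes the alternation explicit:

```python
def phase2_schedule(n_rounds):
    """Alternating Phase II schedule: stage 1 minimises L_IC through the
    decoder and residual encoder; stage 2 minimises L_R through the
    decoder and speaker encoder. The two stages are applied recursively."""
    steps = []
    for _ in range(n_rounds):
        steps.append(("L_IC", ("decoder", "residual_encoder")))
        steps.append(("L_R", ("decoder", "speaker_encoder")))
    return steps

steps = phase2_schedule(2)   # two recursive rounds -> four update steps
```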
\subsection{Experimental results}
We compare the performance of our models to that of conventional models and analyze the impact of each loss function on overall performance with an ablation study under the same settings.
All models used for comparison were re-implemented by us.
Table~\ref{table:1} shows the equal error rate (EER) obtained by the VoxCeleb1~\cite{nagrani2017voxceleb} testset, where we compare our models with the encoder model~\cite{xie2019utterance} and the disentanglement model~\cite{Tai2020SEFALDRAS}.
With the standard softmax loss and TAP aggregation, our model outperforms previous models based on the ResNet encoder by 36.6\% and the disentanglement model using an adversarial method~\cite{Tai2020SEFALDRAS} by 16.9\%.
These results demonstrate that the represented embeddings of the proposed disentanglement approach are more informative than those of the baseline.
The proposed method trained with angular margin softmax provided our best results among the experiments.
\vspace{2pt}
\noindent\textbf{Ablation study.}
Table \ref{table:2} shows equal error rates (EERs) obtained by ablation studies, which indicates the effectiveness of loss functions used in the proposed model.
First, we trained the model using the mutual information criterion with and without the adversarial criterion.
The results confirm that minimizing the mutual information between speaker and residual embeddings is effective to disentangle speaker information.
Unlike adversarial training, which is applied to the encoders independently, mutual information is calculated between speaker and residual embedding simultaneously, resulting in more powerful disentanglement performance.
Among these experiments, the case without the adversarial criterion still outperforms the baseline, with an EER of 3.81\%.
Then, the other experiments are conducted in order to investigate the effect of identity change loss.
The results show that the identity change loss improves the performance of the speaker embedding, and the best result is obtained when the model is trained with the mutual information and identity change criteria together, giving an EER of 3.18\%.
Figure~\ref{fig:fig} illustrates t-SNE plots~\cite{maaten2008visualizing} to visualize the effectiveness of the proposed method more concretely.
As shown in Figure~\ref{fig:sub-first} and Figure~\ref{fig:sub-second}, the proposed model effectively disentangles speaker-related and speaker-unrelated information.
Moreover, comparing the baseline with the proposed model in Figure~\ref{fig:sub-first} and Figure~\ref{fig:sub-third}, our method shows more densely clustered identities with smaller variance.
These experimental results show that the mutual information loss and the identity change loss are helpful for learning clearly disentangled features for speaker recognition.
\begin{figure}[t]
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{spk_base.png}
\caption{}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{res_base.png}
\caption{}
\label{fig:sub-second}
\end{subfigure}
\newline
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{spk_propose.png}
\caption{}
\label{fig:sub-third}
\end{subfigure}
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{res_propose.png}
\caption{}
\label{fig:sub-fourth}
\end{subfigure}
\caption{t-SNE plot of the extracted embeddings: 10 speakers with 20 utterances each; each color corresponds to a different speaker. {\normalfont (a)} and {\normalfont (b)} are extracted from the baseline model. {\normalfont (c)} and {\normalfont (d)} are from our proposed model.}
\vspace{-8pt}
\label{fig:fig}
\end{figure}
\section{Conclusion}
In this paper, we present a novel disentanglement training scheme to estimate more informative speaker embedding vectors for robust speaker recognition.
Our method is built upon an auto-encoder framework with two encoders and trained via mutual information and identity change losses, which extract more discriminative representations by reducing the intra-cluster variance.
Experimental results demonstrated that our algorithm achieved improved EER compared to the baseline method.
Through ablation experiments, we demonstrated the impact of each criterion on the overall performance.
\noindent\textbf{Acknowledgements.}
This research is sponsored by Naver Corporation.
\section{Discussion}
\begin{comment}
\subsection{Training strategy}
\noindent The training process consists of three phases.
\noindent \textbf{Phase I.}
All parameters are initialized by the Xavier method~\cite{glorot2010understanding}.
In this phase, the speaker encoder $E_{id}$ is trained using the speaker loss $L_s$.
A learning rate is set to 0.001.
\noindent \textbf{Phase II.}
When we train the whole networks using all criteria simultaneously, the network does not converge well.
Thus, it is necessary to limit the embedding space to be placed in the appropriate vector space such that it can be understood well by the network.
Before training the other network, we fix the speaker encoder after phase I.
Then, we train the residual encoder $E_{res}$, decoder $G$, MINE network $T$ and discriminator $D_d,D_c$ with disentanglement loss, reconstruction loss, id change loss and discriminator loss.
\noindent \textbf{Phase III.}
Finally, we train the whole network with an end-to-end manner until converged.
Since all of the networks are already trained once, phase III. is considered as a kind of fine tuning process.
We set the learning rate to 1e-4 for the training.
\end{comment}
\begin{comment}
\noindent\textbf{Discriminator loss.}
We use multi-task discriminators, $D_d$ and $D_c$.
The $D_d$ helps the generator $D_r$ to reconstruct more realistic spectrum, and the class discriminator $D_c$ forces the synthesized output to have speaker characteristics.
The loss function of each discriminator, $L_d$ and $L_c$ is defined as follows:
\begin{equation}
\begin{split}
L_d=&min~max[\sum logD_d(S) \\
& \qquad {} + \sum log(1-D_d(D_r(f_{spk},f_{res})))],
\end{split}
\end{equation}
\begin{equation}
L_c= -\sum_{c=1}^{C}\sum_{k=1}^{K}q_c^klogD_c(S),
\end{equation}
where $S$ is a mel-spectrum fed to the discriminator, $C$ is the number of speaker, and $k$ is the number of samples.
\vspace{5pt}
\end{comment}
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{IC_loss_ver2.png}
\caption{Description of identity interchange loss.}
\label{fig:loss}
\end{figure}
\end{comment}
\begin{comment}
Table \ref{table:2} shows equal error rates (EERs) obtained by ablation studies, which provides the effectiveness of loss functions used in proposed model.
From the experiment 1~3, we confirm that minimizing the mutual information between speaker and residual embeddings is effective to disentangle the speaker information from input speech compared to the adversarial training method which is used in our baseline.
Unlike the disentanglement method via adversarial training, which is applied to the encoders independently, mutual information is calculated between speaker and residual embedding simultaneously, resulting in more powerful disentanglement results.
The experiment 4 demonstrates the proposed identity change loss is effective to speaker recognition.
Idnetity change loss utilzes speaker embeddings extracted from different utterances to regenerate input speech, which helps to extract a constant speaker embedding, or common identity, in utterance with different linguistic information.
Fig.~\ref{fig:fig} illustrates t-SNE plots~\cite{maaten2008visualizing} to visualize the effectiveness of the proposed method.
Fig.~\ref{fig:sub-first},\ref{fig:sub-second} are the plots of embedding extracted from the speaker and residual encoder, $E_{spk}$, $E_{res}$, trained by the mutual information criterion.
The features generated by the speaker encoder are well-clustered but those by the residual encoder are randomly distributed in the low-dimensional feature space.
It demonstrate the mutual information effectively separate the speaker information and residual embedding and embeds into each embedding, repectively.
Fig.~\ref{fig:sub-first} and \ref{fig:sub-second} illustrate the plot of speaker embeddings extracted from the baseline model and the ones from the proposed model with the MI and identity change loss.
As shown in these figures, the clusters obtained by the proposed method show clearer boundaries between identities and smaller distances to the same identities.
\end{comment}
\begin{comment}
To apply the identity change loss, it needs a pair of speech spoken by a same speaker.
At first, we extract both speaker and residual embeddings using the speaker and residual encoder.
To reduce the difference in intra-cluster, the speaker embedding is replaced with a mean of all the speaker embedding from the same speaker and they are fed to the decoder.
Finally, using the reconstructed mel-spectrum $\hat{S}_{A}=D_r(f_{mean,spk}, f_{A,res})$ and $\hat{S}_{B}=D_r(f_{mean,spk}, f_{B,res})$, the reconstruction loss is calculated for training.
\begin{equation}
\begin{split}
L_{ch}=&L_r(\hat{S}_{A}) + L_r(\hat{S}_{B}).
\end{split}
\end{equation}
Although the speaker embeddings are replaced, the generated mel-spectrum should be the same as the original one because the pair has the same speaker identity.
This identity chage process encourages the speaker encoder $E_{spk}$ to focus on common information and makes speaker embedding robust to intra-variations.
\end{comment}
\begin{figure*}[t]
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=\linewidth]{base_loss.png}
\caption{Baseline loss}
\label{fig:baseloss}
\end{subfigure}
\unskip\ \vrule\
\begin{subfigure}{.418\textwidth}
\centering
\includegraphics[width=\linewidth]{id_change_loss.png}
\caption{Identity change loss}
\label{fig:idloss}
\end{subfigure}
\unskip\ \vrule\
\begin{subfigure}{.342\textwidth}
\centering
\includegraphics[width=\linewidth]{mine_loss.png}
\caption{Mutual information loss}
\label{mineloss}
\end{subfigure}
\caption{Overview of the proposed training criteria. {\normalfont (a)} Training criteria based on~\cite{Tai2020SEFALDRAS}: speaker loss, disentanglement loss and reconstruction loss.
{\normalfont (b)} Identity change loss: the speaker embeddings are replaced with their mean.
{\normalfont (c)} Mutual information loss: the mutual information is estimated by MINE.}
\label{fig:overall}
\end{figure*}
\section{Introduction}
\subsection{Statement of the problem and summary of results}
Let $H$ be a matrix from the Gaussian unitary ensemble (GUE) and let each $G_i$ $(i=1,\dots, M)$ denote a complex Ginibre matrix, i.e. a matrix with i.i.d. standard complex Gaussian entries. In this paper, we investigate the eigenvalues of the Hermitised product matrix
\begin{equation}\label{W1}
W_M = G_M^\dagger \cdots G_1^\dagger H G_1 \cdots G_M
\end{equation}
under the assumption that all matrices, $H$ and $G_i$ ($i=1,\ldots,M$), are independent. We will see that the eigenvalues form a bi-orthogonal ensemble~\cite{Bo98}. Furthermore, this ensemble is closely related (in a sense that will be specified in the next subsection) to the so-called Hermite Muttalib--Borodin ensemble~\cite{Mu95,Bo98}. The latter is defined by the joint eigenvalue probability density function (PDF)
\begin{equation}\label{MB-Hermite}
\tilde P(x_1,\ldots,x_N)=\frac1{\tilde Z_N^{(M)}}
\prod_{1\leq j<k\leq N}(x_k-x_j)(x_k^{2M+1}-x_j^{2M+1})
\prod_{\ell=1}^N|x_\ell|^\alpha e^{-x_\ell^2},
\end{equation}
where $\tilde Z_N^{(M)}$ is a normalisation constant and $\alpha$ is a non-negative constant.
It will transpire that the bi-orthogonal ensemble structure associated with the eigenvalue PDF of the product matrix~\eqref{W1} is a corollary of the following more basic result.
\begin{thm}\label{T1}
Let $G$ be an $n \times N$ ($n \le N$) standard complex Gaussian matrix and let $A$ be an $n \times n$
Hermitian matrix with eigenvalues $a_1,\ldots,a_n$.
If the eigenvalues of $A$ are pairwise distinct and ordered as
\begin{equation}\label{as}
a_1<a_2<\cdots<a_{n_0}<0<a_{n_0+1}<\cdots<a_n,
\end{equation}
then the PDF of the non-zero eigenvalues of matrix $X=G^\dagger AG$ is given by
\begin{multline}\label{as1}
P^{n_0}_{n}(\{a_j\}_{j=1}^n;\{x_j\}_{j=1}^n)=\\
\prod_{l=1}^n\frac1{|a_l|}\frac{(x_l/a_l)^{N-n}}{(N-l)!}
\prod_{1\leq j<k\leq n}\frac{x_k-x_j}{a_k-a_j}\det\big[e^{-x_i/a_j}\big]_{i,j=1}^{n_0}
\det\big[e^{-x_{i}/a_{j}}\big]_{i,j=n_0+1}^{n},
\end{multline}
where
\begin{equation}\label{as2}
x_1< \cdots < x_{n_0} < 0 < x_{n_0+1} < \cdots < x_n.
\end{equation}
In particular, we see that $X$ has $n_0$ ($n-n_0$) negative (positive) eigenvalues, i.e. the same number as $A$.
The remaining $N-n$ eigenvalues are all identically zero.
\end{thm}
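The sign-counting statement in Theorem~\ref{T1} is an instance of Sylvester's law of inertia and is easy to probe numerically. The following sketch is an illustration only, not part of any proof; the matrix size $n=N=2$ and the test eigenvalues are our own choices. It draws a square standard complex Gaussian $G$ and checks that $X=G^\dagger AG$ inherits the inertia of $A=\diag(a_1,a_2)$, using the closed-form eigenvalues of a $2\times2$ Hermitian matrix.

```python
import math
import random

def eig2_herm(a, b, d):
    """Eigenvalues of the 2x2 Hermitian matrix [[a, b], [conj(b), d]], with a, d real."""
    t = a + d
    disc = math.sqrt((a - d) ** 2 + 4 * abs(b) ** 2)
    return (t - disc) / 2.0, (t + disc) / 2.0

def negatives_of_GAG(a, rng):
    """Number of negative eigenvalues of X = G^dagger A G for A = diag(a[0], a[1])
    and G a 2x2 matrix of i.i.d. standard complex Gaussian entries."""
    g = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(2)] for _ in range(2)]
    # X_{ij} = sum_k conj(G_{ki}) a_k G_{kj}
    x11 = sum(abs(g[k][0]) ** 2 * a[k] for k in range(2))
    x22 = sum(abs(g[k][1]) ** 2 * a[k] for k in range(2))
    x12 = sum(g[k][0].conjugate() * a[k] * g[k][1] for k in range(2))
    return sum(1 for v in eig2_herm(x11, x12, x22) if v < 0)

rng = random.Random(42)
# A has one negative and one positive eigenvalue (n0 = 1), and so does X in every draw.
assert all(negatives_of_GAG([-1.3, 0.7], rng) == 1 for _ in range(200))
```

The check holds for every draw, since $G$ is almost surely invertible, in agreement with the statement that $X$ has $n_0$ negative and $n-n_0$ positive eigenvalues.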
We remark that in Theorem \ref{T1} the case $n>N$ remains open, although it is certainly of high interest; see \cite{DF06,ACLS} and references therein for related questions.
The rest of this paper is organised as follows:
In Section~\ref{sec:product}, we use Theorem~\ref{T1} to find the PDF for the eigenvalues of the product~\eqref{W1} as a bi-orthogonal ensemble. Moreover, the explicit expression for the PDF is seen to reduce asymptotically to the functional form~\eqref{MB-Hermite} specifying the Hermite Muttalib--Borodin ensemble.
Explicit expressions for the bi-orthogonal functions are derived in Section~\ref{sec:biortho}. Analogous to the theory of Hermite polynomials (see e.g.~(\ref{HL}) below), we will see that it is convenient to consider bi-orthogonal functions of even and odd degree separately.
Section~\ref{sec:int} provides reformulations of the bi-orthogonal functions and the correlation kernel as integral representations, which are more suited for asymptotic analysis. These integral representations can also be expressed in terms of Meijer $G$-functions, and we will see that they are closely related to known formulae stemming from the product ensemble of Laguerre type.
The local scaling limit at the origin is derived and seen to be related to the Meijer $G$-kernel in Section~\ref{sec:hard}. This result is also compared with the local scaling limit of the Hermite Muttalib--Borodin ensemble.
Section~\ref{sec:global} includes derivations of the global spectrum of the product~\eqref{W1} as well as the Hermite Muttalib--Borodin ensembles. Since our product ensemble reduces asymptotically to the Hermite Muttalib--Borodin ensemble, they are as expected seen to have the same global spectrum, which in turn is given in terms of the Fuss--Catalan density.
Finally, Theorem~\ref{T1} is proven in the appendix. This theorem is an important result in itself. For this reason, we provide three separate proofs, each with its own merits.
\subsection{First motivation: Muttalib--Borodin ensembles}
\label{sec:motivation}
Orthogonal polynomial ensembles are point processes on (a subset of) the real line with a joint distribution given by
\begin{equation}\label{jpdf-classical}
P(dx_1,\ldots,dx_n)=\frac1{Z_n}\Delta_n(\{x\})^2\prod_{k=1}^nw(x_k)dx_k,
\end{equation}
where $Z_n$ is a normalisation constant, $w(x)$ is a non-negative weight function, and $\Delta_n(\{x\})$ denotes the Vandermonde determinant,
\begin{equation}
\Delta_{n}(\{x\})=\det_{1\leq i,j\leq n}\big[x_i^{\,j-1}\big]=\prod_{1\leq i<j\leq n}(x_j-x_i).
\end{equation}
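For small $n$ the equality of the determinant and the product can be confirmed with exact rational arithmetic. The sketch below is purely illustrative; it evaluates the determinant by the Leibniz expansion over permutations and compares it with the product formula.

```python
from fractions import Fraction
from itertools import permutations

def det_leibniz(m):
    """Determinant via the Leibniz sum over permutations (fine for small n)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign of the permutation from its cycle decomposition
        sign, seen = 1, [False] * n
        for i in range(n):
            if not seen[i]:
                j, cyc = i, 0
                while not seen[j]:
                    seen[j] = True
                    j = perm[j]
                    cyc += 1
                if cyc % 2 == 0:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= m[i][perm[i]]
        total += sign * term
    return total

def vandermonde(xs):
    """Return (det of the Vandermonde matrix, product of differences)."""
    n = len(xs)
    v = det_leibniz([[xs[i] ** j for j in range(n)] for i in range(n)])
    p = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            p *= xs[j] - xs[i]
    return v, p

xs = [Fraction(k, 3) for k in (-4, -1, 2, 5)]
v, p = vandermonde(xs)
assert v == p
```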
As for the corresponding moment problem, it is often useful to distinguish between models supported on a finite, semi-infinite, or doubly infinite interval. The canonical examples are the Jacobi, Laguerre, and Hermite ensembles summarised in Table~\ref{table:weights}. These ensembles are named after the corresponding classical orthogonal polynomials. In fact, for $a\neq0$ the latter would more appropriately be called the generalised Hermite ensemble.
\begin{table}[htbp]
\centering
\caption{Summary of weight functions and support for the three canonical orthogonal ensembles in random matrix theory given by the joint distribution~\eqref{jpdf-classical}.}
\vspace*{.5em}
\label{table:weights}
\begin{tabular}{l@{\qquad\qquad}l@{\qquad\qquad}l}
\hline\hline
ensemble & \hspace*{1.5em}weight & support\\ \hline \\[-.9em]
Jacobi & $w(\lambda)=\lambda^a(1-\lambda)^b$ & $\lambda\in(0,1)$ \\[.2em]
Laguerre & $w(\lambda)=\lambda^ae^{-\lambda}$ & $\lambda\in(0,\infty)$ \\[.2em]
Hermite & $w(\lambda)=|\lambda|^ae^{-\lambda^2}$ & $\lambda\in(-\infty,\infty)$\\[.1em] \hline\hline
\end{tabular}
\end{table}
In random matrix theory these three ensembles play a fundamental role as they appear as the distribution of the eigenvalues (or singular values) for the transfer (or truncated unitary) ensemble, the complex Wishart (or chiral) ensemble, and the Gaussian unitary ensemble, respectively; see e.g.~\cite{Fo10}.
A fundamental insight, which can be traced back to Wigner~\cite{Wi57}, is that the joint distribution~\eqref{jpdf-classical} allows an interpretation as the equilibrium measure for a one-dimensional gas of pairwise repulsive point particles in a confining potential.
More precisely, consider the Gibbs measure for a classical gas of $n$ point particles which are pairwise repulsive according to a two-point potential $U(x,y)$ and confined by a common one-point potential $V(x)$, i.e.
\begin{equation}\label{gibbs}
P(dx_1,\ldots,dx_n)=\frac{1}{Z_n}e^{-\beta E(x_1,\ldots,x_n)}\prod_{k=1}^n dx_k
\end{equation}
with $\beta$ denoting the inverse temperature and $E$ the energy functional
\begin{equation} \label{H1}
E(x_1,\ldots,x_n)=\frac12\sum_{k=1}^n V(x_k)-\sum_{1\leq i<j\leq n}U(x_j,x_i).
\end{equation}
We see that at $\beta=2$ (sometimes referred to as the free fermion point) the Gibbs measure~\eqref{gibbs} is identical to~\eqref{jpdf-classical} provided we set $V(\lambda)=-\log w(\lambda)$ and $U(x_i,x_j)=\log|x_j-x_i|$. In this way, the eigenvalues of random matrices relate to the Boltzmann factor of a simple statistical mechanical system with one- and two-body interactions only.
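This identification can be verified at any fixed configuration: with $V(x)=x^2$ (the Hermite weight with $a=0$, chosen here purely for illustration) and $U(x,y)=\log|x-y|$, the Boltzmann factor $e^{-2E}$ equals the unnormalised density $\Delta_n(\{x\})^2\prod_k w(x_k)$ term by term. A minimal sketch:

```python
import math

def energy(xs, V, U):
    """Energy functional E = (1/2) sum_k V(x_k) - sum_{i<j} U(x_j, x_i)."""
    n = len(xs)
    one_body = 0.5 * sum(V(x) for x in xs)
    two_body = sum(U(xs[j], xs[i]) for i in range(n) for j in range(i + 1, n))
    return one_body - two_body

def boltzmann_beta2(xs):
    """exp(-2 E) with V = -log w for the Hermite weight w(x) = exp(-x^2), a = 0."""
    return math.exp(-2.0 * energy(xs,
                                  V=lambda x: x * x,
                                  U=lambda x, y: math.log(abs(x - y))))

def jpdf_numerator(xs):
    """Unnormalised joint density: Vandermonde squared times the Hermite weight."""
    n = len(xs)
    vdm = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            vdm *= xs[j] - xs[i]
    return vdm ** 2 * math.exp(-sum(x * x for x in xs))

xs = [-1.2, -0.3, 0.4, 1.7]
assert abs(boltzmann_beta2(xs) - jpdf_numerator(xs)) < 1e-9 * jpdf_numerator(xs)
```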
A recent development in random matrix theory is the study of exactly solvable product ensembles; see~\cite{AI15} for a review.
As an example, let $G_i$ ($i=1,\ldots,M$) be independent complex Ginibre matrices (matrices whose entries are i.i.d. standard complex Gaussians)
and consider the Hermitian product
\begin{equation}\label{old-product}
G_{M}^\dagger\cdots G_1^\dagger G_1\cdots G_M.
\end{equation}
From the work of Akemann et al.~\cite{AKW13,AIK13}, we know that the explicit
PDF for the eigenvalues of the matrix~\eqref{old-product} is
\begin{equation}\label{2}
P^{(M)}(x_1,\ldots,x_n)=\frac1{Z_n^{(M)}}\Delta_n(\{x\})\det_{1\leq i,j\leq n}\big[g_j^{(M)}(x_i)\big],
\qquad x_i > 0 \: (i=1,\dots,n)
\end{equation}
where $Z_n^{(M)}$ is a known normalisation constant and $g_j^{(M)}(x)$ ($j=1,\ldots,n$) are given by certain Meijer $G$-functions.
Generally, such PDFs are known as polynomial ensembles~\cite{KS14}.
It seems natural to ask whether these product ensembles also have (at least approximately) an interpretation as a Gibbs measure of the form~\eqref{gibbs} and~\eqref{H1}.
However, unlike the Vandermonde determinant, the determinant in~\eqref{2} cannot be evaluated as a product (for $M \ge 2$).
This prohibits a literal interpretation of the eigenvalues of~\eqref{old-product} as a statistical mechanical system
with only one- and two-body interactions. One might therefore fear that~\eqref{2} admits no simple physical interpretation.
However, if we consider (\ref{2}) with each $x_j$ large, the Meijer $G$-functions can be replaced by their asymptotic approximation~\cite{Fi72}.
After a change of variables, the joint density~\eqref{2} to leading order in the asymptotic expansion becomes~\cite{FLZ15}
\begin{equation}\label{MB-laguerre}
\tilde P^{(M)}(x_1,\ldots,x_n)=
\frac1{\tilde Z_n^{(M)}}\Delta_n(\{x\})\Delta_n(\{x^M\})\prod_{k=1}^n x_k^a\,e^{-x_k},
\qquad x_k > 0 \: (k=1,\dots,n)
\end{equation}
where $a$ is a known non-negative constant.
This does correspond to the Boltzmann factor of a statistical mechanical system with one- and two-body interactions only.
The closeness of~\eqref{2} and~\eqref{MB-laguerre} can also be confirmed a posteriori by comparing their scaling limits.
A connection between the two ensembles was first noted by Kuijlaars and Stivigny~\cite{KS14}, who observed that the hard edge scaling limit of~\eqref{MB-laguerre} found in~\cite{Bo98} took the same functional form as the Meijer $G$-kernel found in the product ensemble~\cite{KZ14}, albeit with a different choice of parameters. Due to recent progress, even more is known about the scaling limits of both models, and their similarities. Thus it has been established that the two ensembles also share the same global spectral distribution~\cite{Mu02,BJLNS10,BBCC11,PZ11,FW15}. Furthermore, in both cases the local correlations in the bulk and near the soft edge are given by the familiar sine and Airy process, respectively~\cite{LWZ14,Zh15}.
The ensemble~\eqref{MB-laguerre} had, in fact, appeared in earlier random matrix literature.
It was first isolated by Muttalib~\cite{Mu95}, who suggested it as a naive approximation to the transmission eigenvalues in a problem about quantum transport.
A feature of the new interaction is that bi-orthogonal polynomials (rather than orthogonal polynomials) are needed in the study of correlation functions. Such bi-orthogonal ensembles were considered in greater generality by Borodin~\cite{Bo98}, who devoted special attention to PDFs
\begin{equation}\label{MB}
P(x_1,\ldots,x_n)=\frac1{Z_n}\prod_{l=1}^n w(x_l)\prod_{1\leq j<k\leq n} \big|x_k-x_j\big|\,
\big|\sgn(x_k)|x_k|^{\theta}-\sgn(x_j)|x_j|^{\theta}\big|,
\end{equation}
with $\theta>0$ and $w(x)$ representing one of the three classical weight functions from Table~\ref{table:weights}.
Following \cite{FW15}, we will henceforth refer to these ensembles as the (Jacobi, Laguerre, Hermite) Muttalib--Borodin ensembles.
We note that the awkward dependence of signs in the last factor in~\eqref{MB} disappears when the eigenvalues are non-negative (e.g. for Laguerre- and Jacobi-ensembles) and when $\theta$ is an odd integer as in~\eqref{MB-Hermite}.
At the time of their introduction,
the Muttalib--Borodin ensembles had no obvious relation to any random matrix models defined in terms of PDFs on their entries (except for the trivial case $\theta=1$), and could merely be interpreted as a simple one-parameter generalisation of the classical ensembles.
However, we now see that the Laguerre Muttalib--Borodin ensemble has a close connection
to products of complex Gaussian random matrices~\eqref{old-product} through the approximation~\eqref{MB-laguerre}.
Knowing that the Laguerre Muttalib--Borodin ensemble appears as an asymptotic approximation to the Gaussian product~\eqref{old-product}, it seems natural to ask the reverse question: \emph{Can we find product ensembles which reduce asymptotically to the Jacobi and Hermite Muttalib--Borodin ensembles?} If this is possible, it would be reasonable to say we have completed a link between the Muttalib--Borodin ensembles with classical weights and the new family of product ensembles.
For the Jacobi Muttalib--Borodin ensemble a link to products of random matrices is provided by looking at the squared singular values of a product of truncated unitary matrices~\cite{KKS15,FW15}. In this paper, it is our aim to isolate a random matrix product structure for which the eigenvalue PDF reduces asymptotically to the functional form of the Hermite Muttalib--Borodin ensemble. This construction therefore completes the correspondence between product ensembles and the three Muttalib--Borodin ensembles with classical weights, i.e. Laguerre, Jacobi, Hermite. Furthermore, the relevant product ensemble provides by itself a new interesting class of integrable models, which unlike all previous product ensembles (see~the review \cite{AI15}) allows for negative eigenvalues.
As the product ensemble in question must allow for negative eigenvalues, it is no longer sufficient to investigate Wishart-type matrices like~\eqref{old-product} which are positive-definite by construction.
It turns out that the correct structure is the Hermitised product of a GUE matrix and $M$ complex Ginibre matrices given by~\eqref{W1}.
The case $M = 1$ of~\eqref{W1} has previously been isolated in the recent paper of Kumar~\cite{Ku15} as an example
of a matrix ensemble which permits an explicit eigenvalue PDF.
\subsection{Second motivation: hyperbolic Harish-Chandra--Itzykson--Zuber integrals}
Another reason that the Hermitised random matrix product~\eqref{W1} is of particular interest is its relation to the so-called hyperbolic Harish-Chandra--Itzykson--Zuber (HCIZ) integral. By way of introduction on this point, we note that it is by now evident that the family of exactly solvable product ensembles is intimately linked to a family of exactly solvable group integrals sometimes referred to as integrals of HCIZ type. For the study of products of Ginibre matrices~\eqref{old-product} it was sufficient to know the familiar (and celebrated) HCIZ integral~\cite{HC57,IZ80}:
\begin{equation}\label{HCIZ}
\int_{U(N)/U(1)^N}e^{-\tr AVBV^{-1}}\,(V^{-1}dV)=\pi^{N(N-1)/2}
\frac{\det[e^{-a_ib_j}]_{i,j=1}^{N}}
{\prod_{1\leq i<j\leq N} (a_j-a_i)(b_j-b_i)},
\end{equation}
where $(V^{-1}dV)$ denotes the Haar measure on the unitary quotient group $U(N)/U(1)^N$, while $A$ and $B$ are Hermitian $N\times N$ matrices with eigenvalues $a_1<\cdots<a_N$ and $b_1<\cdots<b_N$, respectively. However, for studies of products of spherical, truncated unitary, or coupled random matrices, generalisations of the HCIZ integral are needed; see~\cite{KKS15,AS16,Liu17} for the latter two cases. We emphasise that the product of truncated unitary matrices considered by Kieburg et al.~\cite{KKS15} required a previously unknown generalisation of the HCIZ integral. Likewise, our study of the Hermitised random matrix product~\eqref{W1} requires knowledge about the so-called hyperbolic HCIZ integral, in which the integration on the left-hand side of~\eqref{HCIZ} is replaced with an integration over the pseudo-unitary group (see Section~\ref{sec:fyodorov} for details). The study of such hyperbolic group integrals was initiated by Fyodorov~\cite{Fy02,FS02}.
An interesting feature of the hyperbolic HCIZ integral is that the integration over the pseudo-unitary group is non-compact, which forces us to introduce additional constraints on the Hermitian matrices $A$ and $B$ to ensure convergence; this difficulty does not arise for other HCIZ-type integrals. Finally, we mention that HCIZ-type integrals have other applications in theoretical and mathematical physics beyond products of random matrices; e.g.~the hyperbolic HCIZ integral was used to find the spectral properties of the Wilson--Dirac operator in lattice quantum chromodynamics~\cite{KVZ13}. Moreover, HCIZ-type integrals represent a rich area of mathematical research, for example within the study of Lie groups, harmonic analysis, combinatorics, and probability (e.g. matrix-valued Brownian motion); see
e.g.~the text \cite{Te88a}.
\section{Products of random matrices and Hermite Muttalib--Borodin ensembles}
\label{sec:product}
In this section, we establish that the eigenvalue PDF of the matrix product~\eqref{W1}
is a polynomial ensemble and show that it reduces asymptotically to the Hermite Muttalib--Borodin ensemble~\eqref{MB-Hermite}.
As stated in the introduction, the eigenvalue PDF of~\eqref{W1} follows as a consequence of Theorem~\ref{T1}. The idea is simple: let $A$ be a random matrix from a polynomial ensemble, i.e. it has an eigenvalue PDF of the form
\begin{equation}\label{af}
P_A(\{a_k\}_{k=1}^n)=\frac1{Z_n}\prod_{1\leq i<j\leq n}(a_j-a_i)\det[w_j(a_i)]_{i,j=1}^n,
\end{equation}
where $a_1\leq a_{2}\leq \cdots\leq a_{n}$ are the (ordered) eigenvalues of $A$, $w_j$ ($j=1,\ldots,n$) is a family of weight functions, and $Z_n$ is a normalisation constant. Now, let $G$ be an $n \times N$ $(n \le N)$ standard complex Gaussian matrix. Then Theorem~\ref{T1} gives the eigenvalue PDF of $G^\dagger AG$. Moreover, it is seen that this new eigenvalue PDF is also a polynomial ensemble. In other words, Theorem~\ref{T1} provides a map from the class of polynomial ensembles into itself. Thus, we may apply Theorem~\ref{T1} recursively to construct hierarchies of polynomial ensembles.
Let us make this statement more precise.
\begin{lemma}\label{C1}
Let $G$ be an $n \times N$ $(n \le N)$ standard complex Gaussian matrix, and let $A$ be a random matrix from a
polynomial ensemble with eigenvalue PDF (\ref{af}), independent of $G$.
Then the PDF for the non-zero eigenvalues of the random matrix product $G^\dagger AG$ is equal to
\begin{equation}\label{af2}
\frac1{Z_n}\prod_{l=1}^n\frac{1}{(N-l)!}\prod_{1\leq j<k\leq n}(x_k-x_j)
\det_{1\leq i,j\leq n}\bigg[\int_0^\infty \frac{da\,e^{-a}}{a^{n-N+1}}\,w_{j}\big(\frac{x_i}{a}\big)\bigg]
\end{equation}
with the eigenvalues ordered $x_1\leq x_{2}\leq \cdots\leq x_{n}$.
\end{lemma}
\begin{proof}
In order to use Theorem~\ref{T1}, we fix an $n_0\in\{0,1,\ldots,n\}$ and assume that the eigenvalues of $A$ are ordered as~\eqref{as}.
Consequently, the non-zero eigenvalues of $G^\dagger AG$ can be ordered as~\eqref{as2} almost surely.
It follows from the conditional eigenvalue PDF~\eqref{as1} and~\eqref{af} that the eigenvalue PDF of $G^\dagger AG$ (up to $N-n$ eigenvalues which are identically zero) is given by
\begin{multline}\label{recurrence-step}
\int_D P_A(\{a_k\}_{k=1}^n)P^{n_0}_{n}(\{a_j\}_{j=1}^n;\{x_j\}_{j=1}^n)\, da_1\cdots da_n
=\frac1{Z_n}\prod_{1\leq j<k\leq n}(x_k-x_j)\\
\times\int_D\prod_{l=1}^n\frac1{|a_l|}\frac{(x_l/a_l)^{N-n}}{(N-l)!}
\det
\begin{bmatrix}
\{e^{-x_i/a_j}\}_{i,j=1}^{n_0} & 0 \\
0 & \{e^{-x_{i}/a_{j}}\}_{i,j=n_0+1}^{n}
\end{bmatrix}
\det[w_j(a_i)]_{i,j=1}^n
\, da_1\cdots da_n,
\end{multline}
where the domain of integration $D$ is given according to~\eqref{as} and the eigenvalues $x_i$ ($i=1,\ldots,n$) are ordered according to~\eqref{as2}.
We note that the integral on the second line in~\eqref{recurrence-step} is a close cousin to the well-known Andreief integral~\cite{An83,Br55}.
Upon expansion of the determinants, it is readily seen that~\eqref{recurrence-step} may be rewritten as
\begin{equation}\label{recurrence-step2}
\frac1{Z_n}\prod_{l=1}^n\frac{1}{(N-l)!}\prod_{1\leq j<k\leq n}(x_k-x_j)
\det
\begin{bmatrix}
\displaystyle\bigg\{-\int_{-\infty}^0\frac{da}a (x_i/a)^{N-n} e^{-x_i/a}
w_j(a)\bigg\}_{\substack{i=1,\ldots,n_0\\j=1,\ldots,n}} \\
\displaystyle\bigg\{\int^{\infty}_0\frac{da}a (x_i/a)^{N-n} e^{-x_i/a}
w_j(a)\bigg\}_{\substack{i=n_0+1,\ldots,n\\j=1,\ldots,n}}
\end{bmatrix}.
\end{equation}
If we make a change of variables $a\mapsto -a$ in the first $n_0$ rows in the determinant in~\eqref{recurrence-step2} then we get
\begin{equation}
\frac1{Z_n}\prod_{l=1}^n\frac{1}{(N-l)!}\prod_{1\leq j<k\leq n}(x_k-x_j)
\det_{1\leq i,j\leq n}\bigg[\int_0^\infty \frac{da}a\,\frac{e^{-|x_i|/a}}{(|{x_i}|/a)^{n-N}}\,w_j((\sgn x_i)a)\bigg];
\end{equation}
recall that $x_i$ ($i=1,\ldots,n$) is ordered according to~\eqref{as2}. Finally, if we make another change of variables $a\mapsto |x_i|/a$ in the $i$-th row, then~\eqref{af2} follows for any fixed $n_0$. Since the result is independent of the choice of $n_0$, the proof is complete.
\end{proof}
\begin{remark}
The study of maps from the space of polynomial ensembles onto itself is an interesting endeavour, since such maps give rise to new random matrix ensembles without destroying integrability. In fact, the study of such maps is presently an active area of research in random matrix theory~\cite{CKW15,Ku16,KK16,KR16}. Lemma~\ref{C1} provides a new transformation of this kind, which cannot be obtained directly from any of the previously established transformations. We note that Lemma~\ref{C1} includes the transformation~\cite[Theorem 2.1]{KS14} as a special case, arising when the matrix $A$ is positive definite. A restriction of Lemma~\ref{C1} is that $n\leq N$. Indeed, for $n>N$ the PDF~\eqref{af2} develops a singularity, indicating that, depending on the properties of $w_j$, the formula is no longer generally valid in this case. It would be interesting to extend the above results to include the case $n>N$ more generally.
\end{remark}
With Lemma~\ref{C1} at hand, we are ready to write down the eigenvalue PDF for the product~\eqref{W1}.
\begin{thm}\label{cor-matrix} Let $\nu_1, \ldots, \nu_M$ be non-negative integers and set $\nu_0=0$.
Suppose that
$H$ is an $n\times n$ GUE matrix and $G_1, \ldots, G_M$ are independent standard complex Gaussian matrices where $G_m$ is of size $(\nu_{m-1}+n) \times (\nu_{m} +n)$.
Then the joint PDF for the non-zero eigenvalues of the matrix~\eqref{W1} is given by
\begin{equation}\label{PDF-matrix}
P^{(M)}(x_1,\ldots,x_n)=\frac{1}{Z^{(M)}_n}\prod_{1\leq i<j\leq n}(x_j-x_i)\det_{1\leq i,j\leq n}\big[g_{j-1}^{(M)}(x_i)\big],
\end{equation}
where the weight functions $g_j^{(M)}$ are defined recursively by
\begin{equation}\label{weight-recursive}
g_j^{(0)}(x)=x^{j}e^{-x^2}
\quad\text{and}\quad
g_j^{(m)}(x)=\int_0^\infty \frac{dy}{y}\,y^{\nu_m}e^{-y}\,g_j^{(m-1)}(x/y),\quad m=1,\ldots,M
\end{equation}
and the normalisation constant is
\begin{equation}
Z^{(M)}_n={2^{-n(n-1)/2}\pi^{n/2}}\prod_{m=0}^M\prod_{j=1}^n\Gamma(\nu_m+j).
\end{equation}
\end{thm}
\begin{proof}
First, let us consider the simplest situation, that is a product of square matrices, i.e. $\nu_1=\cdots=\nu_M=0$. As the eigenvalue PDF of an $n\times n$ GUE matrix is given by~\eqref{PDF-matrix} with $M=0$, the theorem follows immediately by applying Lemma~\ref{C1} $M$ times.
We need to be a little more careful when the case of rectangular matrices is considered.
The $M=1$ case of the theorem is again an immediate consequence of Lemma~\ref{C1}, which gives us the eigenvalues of $W_1=G_1^\dagger HG_1$. However, in order to apply Lemma~\ref{C1} a second time and find the non-zero eigenvalues of $W_2=G_2^\dagger W_1G_2$, we have to take into account that $W_1$ has a zero eigenvalue with multiplicity $\nu_1$. To proceed, we can use the same idea as in~\cite{IK14}. The unitary invariance of Gaussian matrices tells us that $W_1\stackrel{d}=U^\dagger W_1 U$ and $G_2\stackrel{d}{=}VG_2$ for any $U,V\in U(n+\nu_1)$. It thus follows that
\begin{equation}\label{reduction}
W_2=G_2^\dagger W_1 G_2\stackrel{d}{=}
\begin{bmatrix} \tilde G_2^\dagger & g_2^\dagger \end{bmatrix}
\begin{bmatrix} X_1 & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \tilde G_2 \\ g_2 \end{bmatrix}
=\tilde G_2^\dagger X_1 \tilde G_2,
\end{equation}
where $X_1=\diag(x_1,\ldots,x_n)$ is an $n\times n$ diagonal matrix distributed according to~\eqref{PDF-matrix} with $M=1$, while $\tilde G_2$ and $g_2$ are standard Gaussian matrices of size $n\times (n+\nu_2)$ and $\nu_1\times (n+\nu_2)$, respectively. Now, Lemma~\ref{C1} can be applied to the right-hand side in~\eqref{reduction}, which gives us the PDF of the non-zero eigenvalues of $W_2$. Repeating this procedure completes the proof.
\end{proof}
\begin{remark}
We note that the case $M=1$ of Theorem~\ref{cor-matrix} is in agreement with the result stated by Kumar~\cite[Eq.~(46) and~(47)]{Ku15}. However, the derivation therein is incomplete due to its reliance on the HCIZ integral~\eqref{HCIZ} rather than the hyperbolic variant derived in the appendix below.
\end{remark}
There are many other representations for the weight functions in Theorem~\ref{cor-matrix}
beyond the recursive definition~\eqref{weight-recursive}.
As usual, it is particularly useful for analytic purposes to write the weight functions in their contour integral representation.
\begin{lemma}
We have
\begin{equation}\label{contour-matrix}
g_j^{(M)}(x)=\frac{(\sgn x)^{j}}{4\pi i}\int_{c-i\infty}^{c+i\infty}ds\,|x|^s\,\Gamma\Big(\frac{j-s}{2}\Big)\prod_{m=1}^M\Gamma(\nu_m-s),
\end{equation}
where $c$ is a negative constant.
\end{lemma}
\begin{proof}
By means of the residue theorem, it is seen that
\begin{equation}
g_j^{(0)}(x)=x^{j}e^{-x^2}=\frac{(\sgn x)^{j}}{4\pi i}\int_{c-i\infty}^{c+i\infty}ds\,|x|^s\,\Gamma\Big(\frac{j-s}{2}\Big).
\end{equation}
Now, assume that $g_j^{(M-1)}(x)$ is given by~\eqref{contour-matrix}. From the recursive formula~\eqref{weight-recursive}, we have
\begin{equation}
g_j^{(M)}(x)=\int_0^\infty \frac{dy}{y}\,y^{\nu_M}e^{-y}\,\frac{(\sgn x)^{j}}{4\pi i}
\int_{c-i\infty}^{c+i\infty}ds\,\Big(\frac{|x|}y\Big)^s\,\Gamma\Big(\frac{j-s}{2}\Big)\prod_{m=1}^{M-1}\Gamma(\nu_m-s).
\end{equation}
It is a straightforward exercise, considering the asymptotic decay, to show that
with $c<0$ the order of the integrals may be interchanged. Thus we have
\begin{equation}
g_j^{(M)}(x)=
\frac{(\sgn x)^{j}}{4\pi i}\int_{c-i\infty}^{c+i\infty}ds\,|x|^s\,\Gamma\Big(\frac{j-s}{2}\Big)\prod_{m=1}^{M-1}\Gamma(\nu_m-s)\,
\int_0^\infty \frac{dy}{y}\,y^{\nu_M-s}e^{-y}
\end{equation}
and the lemma follows by induction.
\end{proof}
As already mentioned, functions with a contour integral representation like~\eqref{contour-matrix} have certain properties which are useful for analytical purposes.
In fact, many of these properties may be found in the literature if we first recognise the contour integral as a Fox $H$-function
\begin{equation}\label{weight-fox}
g_j^{(M)}(x)=\frac{(\sgn x)^{j}}2
\FoxH{M+1}{0}{0}{M+1}{-}{(\nu_1,1),\ldots,(\nu_M,1),(\frac{j}2,\frac12)}{|x|}
\end{equation}
or as a Meijer $G$-function
\begin{equation}\label{weight}
g_j^{(M)}(x)=(\sgn x)^{j}\prod_{m=1}^M\frac{2^{\nu_m-1}}{\sqrt{\pi}}
\MeijerG{2M+1}{0}{0}{2M+1}{-}{\frac{\nu_1}{2},\frac{\nu_1+1}{2},\ldots,\frac{\nu_M}{2},\frac{\nu_M+1}{2},\frac{j}2}{\frac{x^2}{4^M}}.
\end{equation}
We refer to the book~\cite{MSH09} for an extensive review of these functions; the Fox $H$- and Meijer $G$-functions are defined by~\cite[Def.~1.1]{MSH09} and~\cite[Def.~1.5]{MSH09}, respectively.
As discussed in Section~\ref{sec:motivation}, one of our goals is to find a `classical gas' approximation for~\eqref{PDF-matrix}. For this purpose, we can use the asymptotic result~\cite{Fi72}
\begin{equation}\label{asymp}
\MeijerG{q}{0}{0}{q}{-}{b_1,\ldots,b_q}{x}\sim\frac1{q^{1/2}}\Big(\frac{2\pi}{x^{1/q}}\Big)^{(q-1)/2}
x^{(b_1+\cdots+b_q)/{q}}e^{-qx^{1/q}}\big(1+O(x^{-1/q})\big),
\end{equation}
for $x\to\infty$ (recall that a typical eigenvalue grows with $n$). This immediately allows us to find a Muttalib--Borodin ensemble~\eqref{MB},
which approximates the product ensemble~\eqref{PDF-matrix}.
However, for notational simplicity, it is convenient to first make a change of variables
\begin{equation}\label{change}
x_i\mapsto x_i'=2^M\Big(\frac{y_i}{\sqrt{2M+1}}\Big)^{2M+1}
\end{equation}
for $i=1,\ldots,n$. Using the asymptotic formula~\eqref{asymp} and making the change of variables~\eqref{change}, we find the approximate PDF
\begin{equation}\label{hermite-MB}
\tilde P^{(M)}(y_1,\ldots,y_n)=\frac{1}{\tilde Z^{(M)}}\prod_{1\leq i<j\leq n} (y_j-y_i)(y_j^{2M+1}-y_i^{2M+1})
\prod_{k=1}^n|y_k|^\alpha e^{-y_k^2},
\end{equation}
where $\tilde Z^{(M)}$ is a new normalisation constant and $\alpha=\sum_{m=1}^M(2\nu_m+1)$.
We recognise~\eqref{hermite-MB} as the Hermite Muttalib--Borodin ensemble~\eqref{MB-Hermite}.
Note that the approximation~\eqref{asymp} is valid for large $x$ and that the absolute value of a typical eigenvalue grows with the matrix dimension $n$. Thus, one might suspect agreement between the two models in the large-$n$ limit except for local correlations near the origin. We will return to a comparison between the two models in Section~\ref{sec:hard} and Section~\ref{sec:global}.
It is worth noting that the following exact relation holds
\begin{equation}\label{asymp-exact}
\MeijerG{q}{0}{0}{q}{-}{0,\frac1q,\ldots,\frac{q-1}q}{x}=\frac{(2\pi)^{(q-1)/2}}{q^{1/2}}e^{-qx^{1/q}}
\end{equation}
for integer $q$.
This may be proven by writing the Meijer $G$-function on the left-hand side in its integral representation and then using Gauss' multiplication formula for the gamma functions. The exact relation~\eqref{asymp-exact} tells us that the parameters $b_k$ $(k=1,\ldots,q)$ in~\eqref{asymp} can be chosen such that all subleading terms in the expansion vanish, in which case the reduction to the functional form~\eqref{hermite-MB} is exact.
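The identity underpinning~\eqref{asymp-exact} is Gauss' multiplication formula, $\prod_{k=0}^{q-1}\Gamma(s+k/q)=(2\pi)^{(q-1)/2}q^{1/2-qs}\Gamma(qs)$, which matches the Mellin transform of the left-hand side of~\eqref{asymp-exact} with that of the right-hand side. A quick numerical confirmation of the multiplication formula (illustrative only; the sampled values of $q$ and $s$ are arbitrary):

```python
import math

def gauss_multiplication_lhs(s, q):
    """Product over k = 0, ..., q-1 of Gamma(s + k/q)."""
    p = 1.0
    for k in range(q):
        p *= math.gamma(s + k / q)
    return p

def gauss_multiplication_rhs(s, q):
    """Closed form (2 pi)^{(q-1)/2} q^{1/2 - q s} Gamma(q s)."""
    return (2 * math.pi) ** ((q - 1) / 2) * q ** (0.5 - q * s) * math.gamma(q * s)

for q in (2, 3, 5):
    for s in (0.3, 1.0, 2.7):
        lhs = gauss_multiplication_lhs(s, q)
        rhs = gauss_multiplication_rhs(s, q)
        assert abs(lhs - rhs) < 1e-10 * rhs
```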
\section{Biorthogonality and correlations}
\label{sec:biortho}
Generally, polynomial ensembles give rise to determinantal point processes. The correlation kernel may be written as
\begin{equation}\label{kernel-sum}
K_n(x,y)=\sum_{k=0}^{n-1}\frac{p_k(x)\phi_k(y)}{h_k},
\end{equation}
where $p_k(x)$ and $\phi_k(x)$ are bi-orthogonal functions with normalisation $h_k$, i.e.
\begin{equation}
\int_{-\infty}^\infty dx\, p_k(x)\phi_\ell(x)=h_k\delta_{k\ell}.
\end{equation}
For both \eqref{PDF-matrix} and \eqref{hermite-MB}, $p_{k}(x)$ will be a monic polynomial of degree $k$, while
\begin{equation}
\phi_k(x)-g_k^{(M)}(x)\in\Span\{g_{k-1}^{(M)}(x),\ldots,g_{0}^{(M)}(x)\}
\end{equation}
for the product ensemble~\eqref{PDF-matrix} and
\begin{equation}
\phi_k(x)-x^{(2M+1)k} |x|^{\alpha}e^{-x^2}\in\Span\{x^{(2M+1)(k-1)} |x|^{\alpha}e^{-x^2},\ldots, |x|^{\alpha}e^{-x^2}\}
\end{equation}
for the Hermite Muttalib--Borodin ensemble~\eqref{hermite-MB}. In the latter case, the bi-orthogonal structure is already known~\cite{Ko67,Ca68,Bo98,FI16}. The main purpose of this section is to determine the bi-orthogonal functions --- and consequently the kernel~\eqref{kernel-sum} --- for the product ensemble~\eqref{PDF-matrix}.
\subsection{The oddness of being even}
For $M=0$, the bi-orthogonal functions are $p_n(x)=\tilde H_n(x)$ and $\phi_n(x)=\tilde H_n(x)e^{-x^2}$ with $\tilde H_n(x)$ denoting the Hermite polynomial in monic normalisation. We recall that the $n$-th Hermite polynomial is an even (odd) function when $n$ is even (odd);
this is due to the reflection symmetry of the Gaussian weight about the origin. A similar phenomenon is present for our product generalisation. In order to see this, we use an alternative form of the kernel~\eqref{kernel-sum}. We have~\cite{Co39,Bo98}
\begin{equation}\label{kernel}
K_n(x,y)=\sum_{k,\ell=0}^{n-1}(B_{n}^{(M)})^{-1}_{\ell,k}\,x^{k}g_{\ell}^{(M)}(y),
\end{equation}
where $B_n^{(M)}=(b_{i,j}^{(M)})_{i,j=0}^{n-1}$ is the $n$-th bi-moment matrix constructed from the bi-moments
\begin{equation}\label{bimoment}
b_{k,\ell}^{(M)}=\int_{-\infty}^\infty x^kg_\ell^{(M)}(x)dx.
\end{equation}
Simon~\cite{Si08} refers to the inverse moment matrix representation~\eqref{kernel} as the ABC (Aitken--Berg--Collar) theorem.
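For $M=0$ the representation~\eqref{kernel} can be checked against the sum~\eqref{kernel-sum} directly, since both sides are explicit: the bi-moments are Gaussian moments and the bi-orthogonal functions are monic Hermite polynomials. The sketch below is illustrative only; it assumes the monic three-term recurrence $\tilde H_{k+1}(x)=x\tilde H_k(x)-\tfrac k2\tilde H_{k-1}(x)$ and the norms $h_k=\sqrt\pi\,k!/2^k$, inverts the bi-moment matrix numerically, and compares the two kernels at a sample point.

```python
import math

def bimoment(k, l):
    """b_{k,l}^{(0)} = int x^{k+l} exp(-x^2) dx (Gaussian moments)."""
    return math.gamma((k + l + 1) / 2) if (k + l) % 2 == 0 else 0.0

def inverse(mat):
    """Matrix inverse by Gauss-Jordan elimination with partial pivoting."""
    n = len(mat)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        f = aug[col][col]
        aug[col] = [v / f for v in aug[col]]
        for r in range(n):
            if r != col:
                fr = aug[r][col]
                aug[r] = [v - fr * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def kernel_abc(n, x, y):
    """K_n(x, y) from the inverse bi-moment matrix (ABC form), for M = 0."""
    binv = inverse([[bimoment(i, j) for j in range(n)] for i in range(n)])
    return sum(binv[l][k] * x ** k * y ** l * math.exp(-y * y)
               for k in range(n) for l in range(n))

def kernel_cd(n, x, y):
    """K_n(x, y) as the sum over monic Hermite polynomials."""
    hx, hy = [1.0, x], [1.0, y]
    for k in range(1, n):
        hx.append(x * hx[k] - k / 2 * hx[k - 1])
        hy.append(y * hy[k] - k / 2 * hy[k - 1])
    return sum(hx[k] * hy[k] * math.exp(-y * y)
               / (math.sqrt(math.pi) * math.factorial(k) / 2 ** k)
               for k in range(n))

assert abs(kernel_abc(4, 0.7, -0.4) - kernel_cd(4, 0.7, -0.4)) < 1e-9
```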
In the following, it will be useful to extend the concept of odd and even moments to bi-moments.
We say that the bi-moments are odd (even) when $k+\ell$ is odd (even).
Now, using that $g_\ell^{(M)}(-x)=(-1)^\ell g_\ell^{(M)}(x)$,
we see that the bi-moments satisfy
\begin{equation}
b_{k,\ell}^{(M)}=(-1)^{k+\ell}b_{k,\ell}^{(M)},\qquad k,\ell=0,1,\ldots,
\end{equation}
which implies that all odd bi-moments are equal to zero. In other words, the entries of the bi-moment matrix vanish in a chequerboard pattern. Thus, by reordering rows and columns we may write the bi-moment matrix in a block diagonal form
$B_n^{(M)}\mapsto \diag(B_n^{(M,\text{even})},B_n^{(M,\text{odd})})$ with $B_n^{(M,\text{even})}=(b_{2k,2\ell}^{(M)})_{k,\ell}$ and
$B_n^{(M,\text{odd})}=(b_{2k+1,2\ell+1}^{(M)})_{k,\ell}$.
Using this reordering in the sum~\eqref{kernel}, we see that the kernel splits into two parts
\begin{equation}
K_n(x,y)=K_n^\text{even}(x,y)+K_n^\text{odd}(x,y),
\end{equation}
where
\begin{align}
K_n^\text{even}(x,y)&=\sum_{k,\ell=0}^{\lfloor \frac{n-1}2\rfloor}(B_n^{(M,\text{even})})^{-1}_{\ell,k}\,x^{2k}g_{2\ell}^{(M)}(y)
=\sum_{k=0}^{\lfloor \frac{n-1}2\rfloor} \frac{p_{2k}(x)\phi_{2k}(y)}{h_{2k}}, \label{kernel-even} \\
K_n^\text{odd}(x,y)&=\sum_{k,\ell=0}^{\lfloor \frac n2\rfloor-1}(B_n^{(M,\text{odd})})^{-1}_{\ell,k}\,x^{2k+1}g_{2\ell+1}^{(M)}(y)
=\sum_{k=0}^{\lfloor \frac n2\rfloor-1} \frac{p_{2k+1}(x)\phi_{2k+1}(y)}{h_{2k+1}}. \label{kernel-odd}
\end{align}
Here, the latter equality in both~\eqref{kernel-even} and~\eqref{kernel-odd} follows from comparison with~\eqref{kernel-sum}.
Finally, we note that
\begin{equation}
K_{2n}^\text{even}(x,y)=K_{2n-1}^\text{even}(x,y)
\qquad\text{and}\qquad
K_{2n}^\text{odd}(x,y)=K_{2n+1}^\text{odd}(x,y).
\end{equation}
Thus, in the following we can restrict our attention to kernels with an even subscript.
\subsection{Bi-orthogonal functions}
We are now ready to write down the bi-orthogonal functions for our product ensemble. As explained in the previous subsection, it is convenient to consider functions of odd and even degree separately.
\begin{prop}\label{thm:bi-func-sum}
The ensemble defined by Theorem~\ref{cor-matrix} is bi-orthogonalised by
\begin{align}
p_{2n}(x)&=\sum_{\ell=0}^n\frac{(-\tfrac14)^{n-\ell}}{(n-\ell)!}
\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+1)}{\Gamma(\nu_m+2\ell+1)}x^{2\ell}, &
\phi_{2n}(x)&=\sum_{\ell=0}^n\frac{(-\tfrac14)^{n-\ell}}{(n-\ell)!}\frac{(2n)!}{(2\ell)!}g_{2\ell}^{(M)}(x), \nonumber \\
p_{2n+1}(x)&=\sum_{\ell=0}^n\frac{(-\tfrac14)^{n-\ell}}{(n-\ell)!}
\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+2)}{\Gamma(\nu_m+2\ell+2)}x^{2\ell+1}, &
\phi_{2n+1}(x)&=\sum_{\ell=0}^n\frac{(-\tfrac14)^{n-\ell}}{(n-\ell)!}\frac{(2n+1)!}{(2\ell+1)!}g_{2\ell+1}^{(M)}(x), \nonumber \\
h_n&=2^{-n}\pi^{1/2}\prod_{m=0}^M\Gamma(\nu_m+n+1)\label{h_n},
\end{align}
with notation as above (recall that $\nu_0=0$).
\end{prop}
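As a quick sanity check of Proposition~\ref{thm:bi-func-sum}, consider $M=0$: then $p_n$ reduces to the monic Hermite polynomial $\tilde H_n$ and, assuming the $M=0$ weight is $g_\ell^{(0)}(x)=x^\ell e^{-x^2}$ (consistent with the Hermite reduction used in the proof below), $\phi_n(x)=\tilde H_n(x)e^{-x^2}$ and $h_n=2^{-n}\sqrt{\pi}\,n!$. A minimal numerical sketch, not part of the proof:

```python
import math

def monic_hermite(n, x):
    # Monic Hermite polynomials via the three-term recurrence
    # H~_{n+1}(x) = x H~_n(x) - (n/2) H~_{n-1}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, x * p - (k / 2.0) * p_prev
    return p

def inner(m, n, a=-12.0, b=12.0, N=4000):
    # int p_m(x) phi_n(x) dx with phi_n(x) = H~_n(x) e^{-x^2} (the M = 0 case);
    # the trapezoidal rule is spectrally accurate for Gaussian-decaying integrands.
    h = (b - a) / N
    s = 0.0
    for i in range(N + 1):
        x = a + i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * monic_hermite(m, x) * monic_hermite(n, x) * math.exp(-x * x)
    return s * h

for m in range(4):
    for n in range(4):
        expected = 2.0 ** (-n) * math.sqrt(math.pi) * math.factorial(n) if m == n else 0.0
        assert abs(inner(m, n) - expected) < 1e-8
print("bi-orthogonality verified for M = 0")
```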
There are several different approaches to prove Proposition~\ref{thm:bi-func-sum}. Here, we will present a method which emphasizes the relation to the Hermite polynomials (see~\cite[Prop.~3.5]{Ip15} for the same method applied to the product ensemble of Laguerre type). In order to use this approach, it is convenient to first calculate the bi-moments.
\begin{lemma}
The bi-moments are given by
\begin{equation}\label{bimoment-gamma}
b_{k,\ell}^{(M)}=\Gamma\Big(\frac{k+\ell+1}{2}\Big)\prod_{m=1}^M\Gamma(\nu_m+k+1),
\end{equation}
for $k+\ell$ even and zero otherwise.
The bi-moment determinant is
\begin{equation}\label{bi-det}
D_n^{(M)}:=\det_{0\leq k,\ell\leq n}[b_{k\ell}^{(M)}]=2^{-n(n+1)/2}\pi^{(n+1)/2}\prod_{m=0}^M\prod_{j=0}^n\Gamma(\nu_m+j+1).
\end{equation}
\end{lemma}
\begin{proof}
Inserting the contour integral representation of the weight functions~\eqref{contour-matrix} into the expression for the bi-moments~\eqref{bimoment}, we see that the even moments are
\begin{equation}
b_{k\ell}=\frac{1}{2\pi i}\int_0^\infty dx\int_{c-i\infty}^{c+i\infty} ds \,x^s\,\Gamma\Big(\frac{k+\ell-s}{2}\Big)
\prod_{m=1}^M\Gamma(\nu_m+k-s).
\end{equation}
The integrals in this expression can be recognised as a combination of a Mellin and an inverse Mellin transform, which yields~\eqref{bimoment-gamma}.
In order to evaluate the bi-moment determinant~\eqref{bi-det}, we note that
\begin{equation}\label{bimoment-det}
D_n^{(M)}=\prod_{m=1}^M\prod_{j=0}^n\Gamma(\nu_m+j+1)\det_{0\leq k,\ell\leq n}[b_{k\ell}^{(M=0)}].
\end{equation}
This completes the proof, since the $M=0$ case is the well-known Hermite (or GUE) case.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:bi-func-sum}]
The bi-orthogonal functions may be expressed in terms of their bi-moments, exactly as orthogonal polynomials are expressed through their moments.
Thus, we have
\begin{equation}
p_n(x)=\frac{1}{D_{n-1}^{(M)}}
\det_{\substack{i=0,\ldots,n\\j=0,\ldots,n-1}}
\begin{bmatrix}
b_{i,j}^{(M)} \bigg\vert\, x^i
\end{bmatrix}
\qquad\text{and}\qquad
\phi_n(x)=\frac{1}{D_{n-1}^{(M)}}
\det_{\substack{i=0,\ldots,n-1\\j=0,\ldots,n}}
\bigg[\begin{array}{@{}cc @{}}
b_{i,j}^{(M)} \\ \hline
g_j^{(M)}(x)
\end{array}\bigg].
\end{equation}
Furthermore, we have $h_n=D_n^{(M)}/D^{(M)}_{n-1}$.
The constants $h_n$ are immediate from the above and~\eqref{bi-det}. Thus, it remains only to find the bi-orthogonal functions. To do so, we first note that
\begin{align}
p_n(x)&=\frac{\prod_{k=0}^n\prod_{m=1}^M\Gamma(\nu_m+k+1)}{D_{n-1}^{(M)}}
\det_{\substack{i=0,\ldots,n\\j=0,\ldots,n-1}}
\begin{bmatrix}
b_{i,j}^{(M=0)} \bigg\vert\, \displaystyle\frac{x^i}{\prod_{m=1}^M\Gamma(\nu_m+i+1)}
\end{bmatrix},\\
\phi_n(x)&=\frac{\prod_{k=0}^{n-1}\prod_{m=1}^M\Gamma(\nu_m+k+1)}{D_{n-1}^{(M)}}
\det_{\substack{i=0,\ldots,n-1\\j=0,\ldots,n}}
\bigg[\begin{array}{@{}cc @{}}
b_{i,j}^{(M=0)} \\ \hline
g_j^{(M)}(x)
\end{array}\bigg].
\end{align}
This observation is important, since we know that the monic Hermite polynomials (with respect to the weight $e^{-x^2}$) are given by
\begin{equation}
\tilde H_n(x)=2^{-n}H_n(x)=
\frac{1}{D_{n-1}^{(M=0)}}
\det_{\substack{i=0,\ldots,n\\j=0,\ldots,n-1}}
\begin{bmatrix}
b_{i,j}^{(M=0)} \bigg\vert\, x^i
\end{bmatrix}.
\end{equation}
It follows that the expressions for the bi-orthogonal functions $p_n(x)$ and $\phi_n(x)$ can be found by taking the known expressions for the Hermite polynomials and making the substitutions
\begin{equation}
x^k\mapsto \frac{x^k}{\prod_{m=1}^M\Gamma(\nu_m+k+1)}
\qquad\text{and}\qquad
x^k\mapsto g_k^{(M)}(x),
\end{equation}
respectively. We recall that
\[
\tilde H_{2n}(x)=\sum_{\ell=0}^n\frac{(-1)^{n-\ell}}{(n-\ell)!}\frac{(2n)!}{(2\ell)!}\frac{(2x)^{2\ell}}{2^{2n}}
\quad\text{and}\quad
\tilde H_{2n+1}(x)=\sum_{\ell=0}^n\frac{(-1)^{n-\ell}}{(n-\ell)!}\frac{(2n+1)!}{(2\ell+1)!}\frac{(2x)^{2\ell+1}}{2^{2n+1}},
\]
which makes it a straightforward exercise to verify the proposition.
\end{proof}
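As a small consistency check, the quoted series for the monic Hermite polynomials can be compared against the three-term recurrence $\tilde H_{n+1}(x)=x\tilde H_n(x)-\tfrac n2\tilde H_{n-1}(x)$ (an illustrative sketch, not part of the proof):

```python
import math

def hermite_series(n, x):
    # The explicit series for the monic Hermite polynomials quoted above.
    if n % 2 == 0:
        m = n // 2
        return sum((-1) ** (m - l) / math.factorial(m - l)
                   * math.factorial(n) / math.factorial(2 * l)
                   * (2 * x) ** (2 * l) / 2 ** n for l in range(m + 1))
    m = (n - 1) // 2
    return sum((-1) ** (m - l) / math.factorial(m - l)
               * math.factorial(n) / math.factorial(2 * l + 1)
               * (2 * x) ** (2 * l + 1) / 2 ** n for l in range(m + 1))

def hermite_rec(n, x):
    # Recurrence H~_{n+1}(x) = x H~_n(x) - (n/2) H~_{n-1}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, x * p - (k / 2.0) * p_prev
    return p

for n in range(8):
    for x in (-1.3, 0.4, 2.0):
        assert abs(hermite_series(n, x) - hermite_rec(n, x)) < 1e-9
```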
\begin{remark}
We recall that the bi-orthogonal functions can also be obtained from the characteristic polynomial using that
\begin{equation}
p_N(x)=\big\langle\det[x\mathbb I_N-W_M]\,\big\rangle
\qquad\text{and}\qquad
\int_{\mathbb R} dx\,\frac{\phi_{N-1}(x)}{z-x}=\Big\langle\,\frac1{\det[z\mathbb I_N-W_M]}\,\Big\rangle
\end{equation}
with $\langle\cdots\rangle$ denoting the matrix average and $z\in\mathbb C\setminus\mathbb R$. The first relation allows for an alternative method to calculate $p_N(x)$; see e.g.~\cite{FL15}.
\end{remark}
\section{Integral representations and correlation kernels}
\label{sec:int}
The explicit expressions for the bi-orthogonal functions given by Proposition~\ref{thm:bi-func-sum} allow us to write down an explicit form for the correlation kernel by insertion in~\eqref{kernel-sum}. However, this formulation of the kernel is not optimal for asymptotic analysis. For this reason, in this section we will provide integral representations of the bi-orthogonal functions as well as the kernel.
\begin{prop}\label{prop:bi-func-int}
The bi-orthogonal functions given by Proposition~\ref{thm:bi-func-sum} have integral representations
\begin{align}
p_{2n}(|x|)&=\frac{\sqrt\pi(2n)!}{(-1)^n2^{2n}}\frac{1}{2\pi i}\oint_\Sigma ds\,|x|^{2s}\,
\frac{\Gamma(-s)}{\Gamma(n+1-s)\Gamma(s+\frac12)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2n+1)}{\Gamma(\nu_m+2s+1)}, \label{p2n-int} \\
p_{2n+1}(|x|)&=\frac{\sqrt\pi(2n+1)!}{(-1)^n2^{2n+1}}\frac{1}{2\pi i}\oint_\Sigma ds\,|x|^{2s+1}\,
\frac{\Gamma(-s)}{\Gamma(n+1-s)\Gamma(s+\frac32)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2n+2)}{\Gamma(\nu_m+2s+2)}, \label{p2n1-int} \\
\frac{\phi_{2n}(|x|)}{h_{2n}}&=\frac{(-1)^n\,2^{2n}}{\sqrt\pi(2n)!}\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} dt\,
|x|^{-2t-1}\frac{\Gamma(n-t)\Gamma(t+\frac12)}{\Gamma(-t)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2n+1)}, \\
\frac{\phi_{2n+1}(|x|)}{h_{2n+1}}&=\frac{(-1)^n\,2^{2n+1}}{\sqrt\pi(2n+1)!}\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} dt\,
|x|^{-2t-2}\frac{\Gamma(n-t)\Gamma(t+\frac32)}{\Gamma(-t)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+2)}{\Gamma(\nu_m+2n+2)},
\end{align}
where the contour $\Sigma$ encloses the integers $0,1,\ldots,n$ in the negative direction and $-\frac12<c<0$.
We recall that $p_{2n}(x)$ and $\phi_{2n}(x)$ are even functions, while $p_{2n+1}(x)$ and $\phi_{2n+1}(x)$ are odd functions.
\end{prop}
\begin{proof}
The integrands in~\eqref{p2n-int} and~\eqref{p2n1-int} have $n+1$ simple poles located at $0,1,\ldots,n$; thus the series representations in Proposition~\ref{thm:bi-func-sum} follow upon a straightforward application of the residue theorem.
In order to find the integral representation of the bi-orthogonal functions $\phi_n(x)$, we first note that the weight functions can be written as
\begin{equation}\label{weight-int-proof}
g_\ell^{(M)}(x)=(\sgn x)^\ell\int_0^\infty \frac{dy}{y}\,y^\ell\,e^{-y^2}
\MeijerG{M}{0}{0}{M}{-}{\nu_1,\ldots,\nu_M}{\frac{|x|}{y}};
\end{equation}
this is easily seen starting from the recursive definition~\eqref{weight-recursive}. Now, using~\eqref{weight-int-proof} in the expression for $\phi_n(x)$ (cf. Proposition~\ref{thm:bi-func-sum}), we see that
\begin{equation}
\phi_n(x)=\int_0^\infty \frac{dy}{y}e^{-y^2}\tilde H_n(y)
\MeijerG{M}{0}{0}{M}{-}{\nu_1,\ldots,\nu_M}{\frac{|x|}{y}}.
\end{equation}
In other words, the bi-orthogonal functions $\phi_n(x)$ are an integral transform of the Hermite polynomials with respect to a Meijer $G$-function as integral kernel. The Hermite polynomial can itself be expressed as a Meijer $G$- or Fox $H$-function (see~\cite[Sec.~1.8.1.]{MSH09}) and the remaining integral is well-known from the literature~\cite[Sec.~2.3.]{MSH09}.
\end{proof}
\begin{prop}\label{prop:kernel-finite}
Integral representations of the even and odd kernels are
\begin{align}
K_{2n}^\textup{even}(x,y)&=\frac{1}{2(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{s}|y|^{-t-1}}{s-t}
\frac{\Gamma(-\frac s2)\Gamma(\frac{t+1}2)}{\Gamma(-\frac t2)\Gamma(\frac{s+1}2)}\frac{\Gamma(\frac {2n-t}2)}{\Gamma(\frac{2n-s}2)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+t+1)}{\Gamma(\nu_m+s+1)}, \nonumber \\
K_{2n}^\textup{odd}(x,y)&=\frac{\sgn(xy)}{2(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{s}\,|y|^{-t-1}}{s-t}\,
\frac{\Gamma(\frac{1-s}2)\Gamma(\frac{t+2}2)}{\Gamma(\frac{1-t}2)\Gamma(\frac{s+2}2)}
\frac{\Gamma(\frac{2n-t+1}2)}{\Gamma(\frac{2n-s+1}2)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+t+1)}{\Gamma(\nu_m+s+1)},
\end{align}
where $-1<c<0$ and the contour $\Sigma$ is chosen such that it encircles $0,1,\ldots,2n-1$ in the negative direction with $\textup{Re}\{s\}>c$ for all $s\in\Sigma$.
\end{prop}
\begin{proof}
As the proofs for the odd and even kernels are almost identical, we provide only the proof for the even case; the odd case is easily verified by the reader.
It follows from the definition of the even kernel~\eqref{kernel-even}, together with the contour integral representations of the bi-orthogonal functions from Proposition~\ref{prop:bi-func-int}, that
\begin{equation}
K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,|x|^{2s}|y|^{-2t-1}
\frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}
\sum_{k=0}^{n-1}\frac{\Gamma(k-t)}{\Gamma(k+1-s)}.
\end{equation}
Following similar steps as in~\cite{KZ14}, we note that the sum allows a telescopic evaluation. This gives
\begin{multline}
K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
\frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\frac{\Gamma(n-t)}{\Gamma(n-s)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}\\
-\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
\frac{\Gamma(t+\frac12)}{\Gamma(s+\frac12)}\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}.
\end{multline}
Here, the integral on the second line vanishes, since its integrand has no poles encircled by the contour $\Sigma$, and thus
\begin{equation}
K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
\frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\frac{\Gamma(n-t)}{\Gamma(n-s)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}.
\end{equation}
Finally, the proposition follows by a change of variables $s\mapsto s/2$ and $t\mapsto t/2$.
\end{proof}
The above integral representations for the bi-orthogonal functions and the kernel are probably the most convenient form for further asymptotic analysis, as we will see in Section~\ref{sec:hard}. However, it is often also helpful to express these formulae in terms of special functions, as this (for example) allows the use of pre-implemented routines in mathematical software. Furthermore, such reformulations often reveal patterns which would otherwise go unnoticed.
The integral representations for the bi-orthogonal functions given by Proposition~\ref{prop:bi-func-int} can also be recognised as several different types of special functions, including generalised hypergeometric, Meijer $G$-, and Fox $H$-functions. Here, we will restrict ourselves to their Meijer $G$-function formulation.
Let us first consider the bi-orthogonal polynomials which may be written as
\begin{align}
p_{2n}(x)&=\frac{(-1)^n}{2^{2n}}\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+1)}{2^{\,\nu_m}\pi^{-1/2}} \nonumber
\MeijerG{1}{0}{1}{2M+2}{n+1}{-\frac{\nu_0}2,-\frac{\nu_0}2+\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2+\frac12}{\frac{x^2}{2^{2M}}}, \\
\frac{p_{2n+1}(x)}x&=\frac{(-1)^n}{2^{2n}}\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+2)}{2^{\,\nu_m+1}\pi^{-1/2}}
\MeijerG{1}{0}{1}{2M+2}{n+1}{-\frac{\nu_0}2,-\frac{\nu_0}2-\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2-\frac12}{\frac{x^2}{2^{2M}}}.
\label{p_2n-meijer}
\end{align}
It is worth comparing these polynomials with those found in the study of the Laguerre-like matrix product~\eqref{old-product}.
Akemann et al.~\cite{AIK13} found that in this case the bi-orthogonal polynomial is given by
\begin{equation}
P_n^{(M)}(x)=(-1)^n\prod_{m=0}^M\Gamma(\nu_m+n+1)
\MeijerG{1}{0}{0}{M+1}{n+1}{-\nu_0,-\nu_1,\ldots,-\nu_M}{x}.
\end{equation}
It is clear that the two families of polynomials are related as
\begin{equation}
p_{2n}(x)\propto P_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big)
\qquad\text{and}\qquad
p_{2n+1}(x)\propto xP_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big)
\end{equation}
with
\begin{equation}\label{nu-map}
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}-1)/2\}_{m=0}^M
\qquad\text{and}\qquad
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}+1)/2\}_{m=0}^M,
\end{equation}
respectively.
This is a generalisation of the relation between Hermite and Laguerre polynomials. Recall that
\begin{equation}\label{HL}
\tilde H_{2n}(x)=\tilde L^{(-\frac12)}_n(x^2)
\qquad\text{and}\qquad
\tilde H_{2n+1}(x)=x\tilde L^{(+\frac12)}_n(x^2),
\end{equation}
where $\tilde H_n(x)$ and $\tilde L_n^{(\alpha)}(x)$ denote the Hermite and Laguerre polynomials in monic normalisation.
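The monic relation~\eqref{HL} is easy to confirm numerically from the three-term recurrences $\tilde H_{n+1}(x)=x\tilde H_n(x)-\tfrac n2\tilde H_{n-1}(x)$ and $\tilde L_{n+1}^{(\alpha)}(x)=\big(x-(2n+1+\alpha)\big)\tilde L_n^{(\alpha)}(x)-n(n+\alpha)\tilde L_{n-1}^{(\alpha)}(x)$; an illustrative sketch:

```python
import math

def monic_hermite(n, x):
    # H~_{n+1}(x) = x H~_n(x) - (n/2) H~_{n-1}(x)
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, x * p - (k / 2.0) * p_prev
    return p

def monic_laguerre(n, alpha, x):
    # L~_{n+1}(x) = (x - (2n+1+alpha)) L~_n(x) - n(n+alpha) L~_{n-1}(x)
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x - (1.0 + alpha)
    for k in range(1, n):
        p_prev, p = p, (x - (2 * k + 1 + alpha)) * p - k * (k + alpha) * p_prev
    return p

# H~_{2n}(x) = L~_n^{(-1/2)}(x^2)  and  H~_{2n+1}(x) = x L~_n^{(+1/2)}(x^2)
for n in range(6):
    for x in (-1.7, 0.3, 2.2):
        assert abs(monic_hermite(2 * n, x) - monic_laguerre(n, -0.5, x * x)) < 1e-8
        assert abs(monic_hermite(2 * n + 1, x) - x * monic_laguerre(n, 0.5, x * x)) < 1e-8
```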
Likewise, the (non-polynomial) bi-orthogonal functions may be written as
\begin{align}
\frac{\phi_{2n}(|x|)}{|x|}&=(-1)^n\prod_{m=1}^M\frac{2^{\nu_m-2}}{\pi^{1/2}}
\MeijerG{2M+1}{1}{1}{2M+2}{-n}{\frac{\nu_M}2-\frac12,\frac{\nu_M}2,\ldots,\frac{\nu_0}2-\frac12,\frac{\nu_0}2}{\frac{x^2}{2^{2M}}}, \\
\phi_{2n+1}(|x|)&=(-1)^n\prod_{m=1}^M\frac{2^{\nu_m-1}}{\pi^{1/2}}
\MeijerG{2M+1}{1}{1}{2M+2}{-n}{\frac{\nu_M}2+\frac12,\frac{\nu_M}2,\ldots,\frac{\nu_0}2+\frac12,\frac{\nu_0}2}{\frac{x^2}{2^{2M}}}.
\end{align}
Again, we want to compare to the formula in~\cite{AIK13} which this time reads
\begin{equation}
\Phi_n^{(M)}(x)=(-1)^n\MeijerG{M}{1}{1}{M+1}{-n}{\nu_M,\ldots,\nu_1,\nu_0}{x}.
\end{equation}
Evidently, we have the following relations
\begin{equation}\label{phi-relations}
\phi_{2n}(|x|)\propto |x|\Phi_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big)
\qquad\text{and}\qquad
\phi_{2n+1}(|x|)\propto \Phi_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big),
\end{equation}
with~\eqref{nu-map} as before.
Yet again, this is a generalisation of the relation between Hermite and Laguerre polynomials. In the simplest case the relations~\eqref{phi-relations} reduce to
\begin{equation}
\tilde H_{2n}(x)w_\text{H}(x)=|x|\tilde L^{(-\frac12)}_n(x^2)w_\text{L}^{(-\frac12)}(x^2)
\qquad\text{and}\qquad
\tilde H_{2n+1}(|x|)w_\text{H}(x)=\tilde L^{(+\frac12)}_n(x^2)w_\text{L}^{(\frac12)}(x^2), \nonumber
\end{equation}
where $w_\text{H}(x)=e^{-x^2}$ and $w_\text{L}^{(\alpha)}(x)=x^\alpha e^{-x}$ are the Hermite and Laguerre weight functions.
It is, of course, well-known that there are relations between ensembles with reflection symmetry about the origin and ensembles on the half-line (albeit explicit formulae may be elusive). A general description of such relations in the Muttalib--Borodin ensembles can be found in~\cite{FI16}.
\section{Scaling limits at the origin in product and Muttalib--Borodin ensembles}
\label{sec:hard}
With the integral representations of the correlation kernels established by Proposition~\ref{prop:kernel-finite}, we can turn to a study of asymptotic properties. Perhaps the most interesting scaling regime is that of the local correlations near the origin, referred to as the hard edge when the eigenvalues are strictly positive. For other product ensembles~\cite{KZ14,Fo14,KKS15,FL16}, it has been observed that the correlations at the hard edge are determined by the so-called Meijer $G$-kernel, which generalises the more familiar Bessel kernel. Below, we will see that the Meijer $G$-kernel appears once again, but this time involving a sum.
\begin{thm}\label{thm:hard} Let $K_n(x,y)=K_n^\textup{even}(x,y)+K_n^\textup{odd}(x,y)$ with the even and odd kernels given by Proposition~\ref{prop:kernel-finite}. For $x,y\in\mathbb R\setminus\{0\}$ and $\nu_1,\ldots,\nu_M$ fixed, the microscopic limit near the origin is
\begin{equation}\label{hard-limit}
\lim_{n\to\infty}\frac{1}{\sqrt n}K_{2n}\Big(\frac x{\sqrt n},\frac y{\sqrt n}\Big)
=K^\textup{even}(x,y)+K^\textup{odd}(x,y)
\end{equation}
with
\begin{align}
K^\textup{even}(x,y)
&=\frac{1}{2(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\int_\Sigma ds\,\frac{|x|^{s}|y|^{-t-1}}{s-t}
\frac{\Gamma(-\frac s2)\Gamma(\frac{t+1}2)}{\Gamma(-\frac t2)\Gamma(\frac{s+1}2)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+t+1)}{\Gamma(\nu_m+s+1)} \label{hard-even} \\
K^\textup{odd}(x,y)&=\frac{\sgn(xy)}{2(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\int_\Sigma ds\,\frac{|x|^{s}\,|y|^{-t-1}}{s-t}\,
\frac{\Gamma(\frac{1-s}2)\Gamma(\frac{t+2}2)}{\Gamma(\frac{1-t}2)\Gamma(\frac{s+2}2)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+t+1)}{\Gamma(\nu_m+s+1)} \label{hard-odd},
\end{align}
where $-1<c<-1/2$ and $\Sigma$ encloses the non-negative half-line in the negative direction starting and ending at $+\infty$ such that $\textup{Re}\{s\}>c$ for all $s\in\Sigma$.
\end{thm}
\begin{proof}
We only consider the even kernel, since the odd case is very similar. After rescaling, we rewrite the integral representation of the even kernel in Proposition~\ref{prop:kernel-finite} as
\begin{align}\label{5.4}
\frac 1{\sqrt n} K_{2n}^\textup{even}(\frac x{\sqrt n},\frac y{\sqrt n})= \frac{1}{2(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\int_\Sigma ds\,\frac{|x|^{s}|y|^{-t-1}}{s-t} \frac{f_{n}(s)}{f_{n}(t)} \frac{g(t)}{g(s)},
\end{align}
with \begin{equation}
f_{n}(s)= \frac{\Gamma(n)\Gamma(-\frac s2)}{n^{\frac s2} \Gamma(n-\frac s2)}, \qquad
g(s)=\Gamma\big(\frac{s+1}{2}\big) \prod_{m=1}^M \Gamma(\nu_m+s+1).
\end{equation}
For any fixed $t\in c+i \mathbb{R}$ and $s \in \Sigma$, using \cite[eq. 5.11.13]{NIST} we see that
\begin{equation}\label{5.6}
f_{n}(s)= \Gamma(-\frac s2) \big(1+O(\frac{1}{n})\big), \qquad f_{n}(t)= \Gamma(-\frac t2) \big(1+O(\frac{1}{n})\big). \end{equation}
Formally, substituting (\ref{5.6}) in (\ref{5.4}) gives (\ref{hard-even}). To proceed rigorously, we need to justify the interchange of limit and integration. For this purpose, we will
find dominating bounds for $1/|f_{n}(t)|$ and $|f_{n}(s)|$.
First, using \cite[eq. 5.11.13]{NIST} we have for sufficiently large $n$
\begin{equation}
\frac{1}{|f_{n}(t)|} \leq \frac{n^{\frac c2} \Gamma(n-\frac c2) }{ \Gamma(n)|\Gamma(-\frac t2)|} \leq \frac{2}{|\Gamma(-\frac t2)|}, \quad \forall t\in c+i \mathbb{R}. \label{t-bound}
\end{equation}
Second, we require an upper bound for $|f_{n}(s)|$. Noting the asymptotic expansion, valid as $z\rightarrow \infty$ in the sector $|\mathrm{arg}(z)|\leq \pi-\delta$ (with $0<\delta<\pi$),
\begin{equation}
\Gamma(z)=e^{-z}z^{z-\frac{1}{2}}\sqrt{2\pi} \big(1+O(\frac{1}{z})\big), \label{Agamma}
\end{equation}
it is easy to see that for a given $y_0>0$ we can choose the contour $\Sigma=\Sigma_{l}\cup \Sigma_{r}$ with \begin{equation} \Sigma_{l}=\big\{\frac{c}{2}+iy:|y|\leq y_{0}\big\}\cup \big\{x\pm iy_0: \frac{c}{2} \leq x\leq 1\big\}, \quad \Sigma_{r}= \big\{x\pm iy_0: x> 1\big\}.\end{equation}
Thus, we get from \eqref{Agamma} and the boundedness of $\Gamma(-s/2)$ over $\Sigma_l$ that for large $n$ there exists a constant $C_1=C_1(y_0) > 0$ such that
\begin{equation}
|f_{n}(s)| \leq C_1, \qquad \forall s\in \Sigma_{l}. \label{upl}
\end{equation}
In order to estimate $f_{n}(s)$ with $s\in \Sigma_{r}$, we use the integral representation
\begin{equation}
f_{n}(s)= \frac{n^{-\frac{s}{2}}}{ 2i \sin \frac{\pi s}{2}} \int_{\mathcal{C}_0} (1-u)^{n-1}(-u)^{-\frac{s}{2}-1}du,
\end{equation}
where $\mathcal{C}_0$ is a counter-clockwise path which begins and ends at $1$ and encircles the origin once; see e.g. \cite[eq. 5.12.10]{NIST}. Note that we choose $(-u)^{-1-s/2}=e^{-(1+s/2)\log(-u)}$ with $-\pi<\mathrm{arg}(-u)<\pi$. Replacing $u$ by $u/n$ and deforming the resulting contour into the path which starts at $n$, proceeds along the (upper side of the) real axis to $1$, describes a counter-clockwise circle of radius one around the origin, and returns to $n$ along the (lower side of the) real axis, we obtain
\begin{equation}
f_{n}(s)= \frac{1}{ 2i \sin \frac{\pi s}{2}} \int_{\mathcal{C}} (1-\frac{u}{n})^{n-1}(-u)^{-\frac{s}{2}-1}du.
\end{equation}
Let $s=v\pm iy_0$ with $v>1$. On the unit circle of the $u$-integral above, write $-u=e^{i\theta}$. Then we easily obtain, for $n\geq1$,
\begin{equation}
|f_{n}(s)|\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{-\pi}^{\pi} \big(1+\frac{1}{n}\big)^{n-1} |e^{-(\frac{s}{2}+1)i\theta}|d\theta\leq
\frac{\pi e^{ 1+\frac{\pi y_{0}}{2}}}{ |\sin \frac{\pi s}{2}|}. \label{ub1}
\end{equation}
On the upper and lower real axis, we have
\begin{align}
|f_{n}(s)|&\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{1}^{n} \big(1-\frac{u}{n}\big)^{n-1}|u^{-\frac{s}{2}-1} e^{-(\frac{s}{2}+1)(\mp i\pi)}|du \nonumber \\
&\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{1}^{n} u^{-\frac{v}{2}-1} e^{\frac{1}{2}\pi y_{0}} du \nonumber\\
& =\frac{1}{ |\sin \frac{\pi s}{2}|} e^{\frac{1}{2}\pi y_{0}} \frac{1- n^{-\frac{v}{2}} }{v}\leq \frac{1}{ |\sin \frac{\pi s}{2}|} e^{\frac{1}{2}\pi y_{0}}. \label{ub2}
\end{align}
Using the simple fact that $|\sin \frac{\pi s}{2}| \geq |\sinh\big( \frac{\pi }{2} \mathrm{Im}(s)\big)|$, combining \eqref{ub1} and \eqref{ub2} shows that
there exists a constant $C_2=C_2(y_0)>0$ such that
\begin{equation}
|f_{n}(s)| \leq C_2, \qquad \forall s\in \Sigma_{r}.
\end{equation}
Together with \eqref{upl}, this gives a constant $C>0$ such that, for large $n$,
\begin{equation}
|f_{n}(s)| \leq C, \qquad \forall s\in \Sigma. \label{s-bound}
\end{equation}
Finally, using \eqref{Agamma} and the asymptotic formula that, as $y\rightarrow \pm \infty$,
\begin{equation}
| \Gamma(x + iy) |\sim \sqrt{2\pi} |y|^{x-\frac{1}{2}} e^{-\frac{1}{2}\pi |y|}
\end{equation}
with $x$ bounded (see \cite[eq. 5.11.9]{NIST}), it is easy to conclude that the function of $s$ and $t$
\begin{equation}
\,\frac{||x|^{s}|y|^{-t-1}|}{|s-t|} \frac{2}{|\Gamma(-\frac t2)|} \frac{|g(t)|}{|g(s)|},
\end{equation}
is integrable along the chosen contours,
whenever $-1<c<-1/2$. Here we emphasize that the assumption $-1/2\leq c<0$ does not ensure the convergence in the special case $M=0$ while for $M\geq 1$ it can be relaxed to $-1<c<0$ as in Proposition~\ref{prop:kernel-finite}.
With this, combining \eqref{t-bound} and \eqref{s-bound}, we have justified the interchange of limit and integrals for every $M$ by the dominated convergence theorem, which completes the proof.
\end{proof}
For a comparison with other known results, it is useful to rewrite the hard edge correlation function of Theorem~\ref{thm:hard} in terms of Meijer $G$-functions.
Using that
\begin{equation}
\int_0^1du\,u^{s-t-1}=\frac1{s-t},
\end{equation}
we see that the even and odd kernel can be written as
\begin{align}
K^\textup{even}(|x|,|y|)=\frac{|y|}{2^{2M}}\int_0^1du\, \nonumber
&\MeijerG{1}{0}{0}{2M+2}{-}{-\frac{\nu_0}2,-\frac{\nu_0}2+\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2+\frac12}{\frac{x^2}{2^{2M}}u} \\
&\times\MeijerG{2M+1}{0}{0}{2M+2}{-}{\frac{\nu_M}2-\frac12,\frac{\nu_M}2,\ldots,\frac{\nu_0}2-\frac12,\frac{\nu_0}2}{\frac{y^2}{2^{2M}}u}, \\
K^\textup{odd}(|x|,|y|)=\frac{|x|}{2^{2M}}\int_0^1du\, \nonumber
&\MeijerG{1}{0}{0}{2M+2}{-}{-\frac{\nu_0}2,-\frac{\nu_0}2-\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2-\frac12}{\frac{x^2}{2^{2M}}u} \\
&\times\MeijerG{2M+1}{0}{0}{2M+2}{-}{\frac{\nu_M}2+\frac12,\frac{\nu_M}2,\ldots,\frac{\nu_0}2+\frac12,\frac{\nu_0}2}{\frac{y^2}{2^{2M}}u},
\end{align}
respectively. We recall that the so-called Meijer $G$-kernel is given by~\cite{KZ14}
\begin{equation}
K_\text{Meijer}^M(x,y)=\int_0^1du\,\MeijerG{1}{0}{0}{M+1}{-}{-\nu_0,\ldots,-\nu_M}{xu}\MeijerG{M}{0}{0}{M+1}{-}{\nu_M,\ldots,\nu_0}{yu} \label{G-kernel}
\end{equation}
with $x,y>0$. We note that this kernel is single-sided ($x,y\in\mathbb R_+$) while the kernel from Theorem~\ref{thm:hard} is double-sided ($x,y\in\mathbb R\setminus\{0\}$). However, it is also evident that our new kernel may be re-expressed in terms of the Meijer $G$-kernel. We have
\begin{equation}
K^\textup{even}(|x|,|y|)=\frac{|y|}{2^{2M}}K_\text{Meijer}^{2M+1}\Big(\frac{x^2}{2^{2M}},\frac{y^2}{2^{2M}}\Big)
\quad\text{and}\quad
K^\textup{odd}(|x|,|y|)=\frac{|x|}{2^{2M}}K_\text{Meijer}^{2M+1}\Big(\frac{x^2}{2^{2M}},\frac{y^2}{2^{2M}}\Big)
\end{equation}
with
\begin{equation}
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}-1)/2\}_{m=0}^M
\qquad\text{and}\qquad
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}+1)/2\}_{m=0}^M,
\end{equation}
respectively. Thus, the random product matrix~\eqref{W1} provides yet another appearance of the Meijer $G$-kernel, albeit this time in a double-sided version. For a graphical representation of the Meijer $G$-kernel we refer to~\cite[Fig.~3.2]{Ip15}, which shows plots of the local density (i.e. the kernel with $x=y$) for different values of $M$.
A double-sided hard edge scaling limit near the origin is also present in the Hermite Muttalib--Borodin ensemble.
In this case the kernel is found to be~\cite{Bo98}
\begin{equation}\label{kernel-borodin}
K^\text{even}(x,y)=K^{(\frac{\alpha-1}2,\theta)}(x^2,y^2)
\qquad\text{and}\qquad
K^\text{odd}(x,y)=\sgn(xy)|x|^\theta|y|\,K^{(\frac{\alpha+\theta}2,\theta)}(x^2,y^2)
\end{equation}
where
\begin{equation}\label{kernel-wright-bessel}
K^{(\alpha,\theta)}(x,y)=\theta\int_0^1du(xu)^\alpha J_{\frac{\alpha+1}\theta,\frac1\theta}(xu)J_{\alpha+1,\theta}((yu)^\theta)
\end{equation}
with $J_{a,b}(x)$ denoting Wright's Bessel function. In the case relevant to us~\eqref{MB-Hermite}, we also have $\theta=2M+1$. Furthermore, it is known from~\cite{KS14} that the kernel~\eqref{kernel-wright-bessel} is a Meijer $G$-kernel whenever $\theta$ is a positive integer. In particular, we have
\begin{equation}
\Big(\frac{x^2}{2^{2M}}\Big)^{\frac{1}{2M+1}-1}
K^{(\alpha,2M+1)}\Big((2M+1)\Big(\frac{x^2}{2^{2M}}\Big)^{\frac{1}{2M+1}},(2M+1)\Big(\frac{y^2}{2^{2M}}\Big)^{\frac{1}{2M+1}}\Big)
=K_\text{Meijer}^{2M+1}\Big(\frac{y^2}{2^{2M}},\frac{x^2}{2^{2M}}\Big),
\end{equation}
where the Meijer $G$-kernel on the right-hand side has indices
\begin{equation}
\nu_m=\frac{\alpha+m-1}{2M+1},\qquad m=1,\ldots,2M+1,
\end{equation}
and as always $\nu_0=0$. It follows from~\eqref{kernel-borodin} and~\eqref{kernel-wright-bessel} that the hard edge correlations for the Hermite Muttalib--Borodin ensemble with appropriately chosen parameters may be expressed in terms of the Meijer $G$-kernel in a similar fashion as done for the product ensemble above. We note that the choice of variables in~\eqref{kernel-wright-bessel} should be compared to the change of variables~\eqref{change} performed in the derivation of the asymptotic reduction~\eqref{hermite-MB}.
It is worth verifying consistency in the simplest case, $M=0$.
When $M=0$ our matrix ensemble~\eqref{W1} reduces to the GUE, hence the kernel given by Theorem~\ref{thm:hard} must reduce to the sine kernel. To see this, we use
\begin{equation}
\MeijerG{1}{0}{0}{2}{-}{0,\frac12}{\frac{x^2}{4}}=\frac{\cos x}{\sqrt\pi}
\qquad\text{and}\qquad
\MeijerG{1}{0}{0}{2}{-}{\frac12,0}{\frac{x^2}{4}}=\frac{\sin |x|}{\sqrt\pi}.
\end{equation}
It follows that
\begin{align}
K^\textup{even}(x,y)&=\frac{1}{\pi}\int_0^1\frac{du}{\sqrt u}\cos(2x\sqrt u)\cos(2y\sqrt u)
=\frac1{\pi}\Big(\frac{\sin 2(x-y)}{2(x-y)}+\frac{\sin 2(x+y)}{2(x+y)}\Big), \\
K^\textup{odd}(x,y)&=\frac{1}{\pi}\int_0^1\frac{du}{\sqrt u}\sin(2x\sqrt u)\,\sin(2y\sqrt u)
=\frac1{\pi}\Big(\frac{\sin 2(x-y)}{2(x-y)}-\frac{\sin 2(x+y)}{2(x+y)}\Big),
\end{align}
which upon insertion into~\eqref{hard-limit} indeed reproduces the sine kernel.
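This reduction is easily confirmed numerically. The sketch below evaluates the $u$-integrals above (after the substitution $u=v^2$) with the trapezoidal rule and compares them against the closed forms:

```python
import math

def sinc2(t):
    # sin(2t) / (2t), with the limiting value 1 at t = 0
    return math.sin(2 * t) / (2 * t) if t != 0.0 else 1.0

def kernel(x, y, even=True, N=20000):
    # After u = v^2 the integrals become (2/pi) int_0^1 cos(2xv) cos(2yv) dv
    # (even) and (2/pi) int_0^1 sin(2xv) sin(2yv) dv (odd).
    f = math.cos if even else math.sin
    h = 1.0 / N
    s = 0.0
    for i in range(N + 1):
        v = i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * f(2 * x * v) * f(2 * y * v)
    return 2.0 / math.pi * s * h

x, y = 0.7, -1.2
even_closed = (sinc2(x - y) + sinc2(x + y)) / math.pi
odd_closed = (sinc2(x - y) - sinc2(x + y)) / math.pi
assert abs(kernel(x, y, even=True) - even_closed) < 1e-6
assert abs(kernel(x, y, even=False) - odd_closed) < 1e-6
# The sum of the two closed forms is the sine kernel sin(2(x-y)) / (pi (x-y)).
assert abs(even_closed + odd_closed
           - math.sin(2 * (x - y)) / (math.pi * (x - y))) < 1e-12
```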
To conclude this section, let us emphasize that there also exists a contour integral representation of the limiting kernel in Theorem~\ref{thm:hard} which combines the odd and even parts into a single formula.
\begin{prop}\label{prop:kernelrep2} With the same notation as in Theorem \ref{thm:hard}, the limiting kernel at the origin can be rewritten as
\begin{align}
K^\textup{even}(x,y)+ K^\textup{odd}(x,y)=2\, \mathcal{K}_{\nu_{1},\ldots,\nu_{M}}(2x,2y), \label{equivalence}
\end{align}
where the kernel on the right-hand side is defined as
\begin{align}
\mathcal{K}_{\nu_{1},\ldots,\nu_{M}}(x,y)&
=\int_{C_{R}} \frac{dv}{2\pi i}
\,\MeijerG{1}{0}{0}{M+1}{-}{0,-\nu_1, \ldots,-\nu_M}{-\sgn(y)xv}\MeijerG{M+1}{0}{0}{M+1}{-}{0, \nu_1,\ldots,\nu_M}{|y|v},
\label{doubleG-kernel}
\end{align}
with $C_{R}$ denoting a path in the right half-plane from $-i$ to $i$.
\end{prop}
\begin{proof}
Using Euler's reflection formula and duplication formula for the gamma function, we see that
\begin{equation*}
K^\textup{even}(x,y)+ K^\textup{odd}(x,y)=\frac{1}{(2\pi i)^2}\int dt\int ds\, (2|x|)^{s}(2|y|)^{-t-1}
\frac{ g(s,t)}{s-t} \frac{\Gamma(t+1)}{\Gamma(s+1)}
\prod_{m=1}^M\frac{\Gamma(\nu_m+t+1)}{\Gamma(\nu_m+s+1)},
\end{equation*}
where
\begin{equation}
g(s,t)=\frac{\sin\frac{\pi}{2}t}{\sin\frac{\pi}{2}s}+\sgn(xy) \frac{\cos\frac{\pi}{2}t}{\cos\frac{\pi}{2}s}.
\end{equation}
In order to proceed, we will consider the cases $xy<0$ and $xy>0$ separately.
For $xy<0$, it is seen that
\begin{equation}
g(s,t)= \frac{2}{\sin\pi s}\sin\frac{\pi}{2}(t-s)=-\frac{2}{\pi} \Gamma(-s)\Gamma(1+s) \, \sin\frac{\pi}{2}(t-s).
\end{equation}
Now~\eqref{equivalence} can be obtained using the integral representation
\begin{equation}
\frac{1}{\pi i} \int_{C_{R}}dv \,v^{s-t-1}= \frac{1}{t-s}\sin\frac{\pi}{2}(t-s),
\end{equation}
with the contour $C_R$ as above,
together with the definition of the Meijer $G$-function. For $xy>0$, we note that
\begin{equation}
e^{i\pi s}g(s,t)=
\left(\frac{\sin\frac{\pi}{2}t}{\sin\frac{\pi}{2}s}-\frac{\cos\frac{\pi}{2}t}{\cos\frac{\pi}{2}s} \right)+2e^{i\frac{\pi}{2}(t+s)}.
\end{equation}
The $s$-integrand arising from the second term has no pole within the contour $\Sigma$. Thus, the problem reduces to the case already proven.
\end{proof}
The simplest non-trivial case is $M=1$. Here, we get
\begin{equation}
\mathcal{K}_{\nu}(x,y)
=\left(\frac{y}{x}\right)^{\nu/2} \frac{1}{\pi i} \int_{C_{R}}dv
\, I_{\nu}\big(2\sqrt{\sgn(y)xv}\big)\, K_{\nu}\big(2\sqrt{|y|v}\big), \label{doubleM1-kernel}
\end{equation}
with the modified Bessel functions $I_{\nu}$ and $K_{\nu}$, which follows immediately from the fact that
\begin{align}
\MeijerG{1}{0}{0}{2}{-}{0,-\nu}{-z}=z^{-\nu/2} I_{\nu}(2\sqrt{z}), \qquad \MeijerG{2}{0}{0}{2}{-}{\nu,0}{z}=2 z^{\nu/2} K_{\nu}(2\sqrt{z}).
\end{align}
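These two Bessel-function identities can be cross-checked numerically; the sketch below assumes the third-party library mpmath (with its \texttt{meijerg}, \texttt{besseli} and \texttt{besselk} routines) is available:

```python
# Numerical cross-check of the two Meijer G / Bessel identities above,
# assuming the third-party library mpmath is available.
from mpmath import mp, meijerg, besseli, besselk, sqrt

mp.dps = 30
nu = mp.mpf("0.3")
z = mp.mpf("0.7")

# G^{1,0}_{0,2}(-z | -; 0, -nu) = z^(-nu/2) I_nu(2 sqrt(z))
lhs_I = meijerg([[], []], [[0], [-nu]], -z)
rhs_I = z ** (-nu / 2) * besseli(nu, 2 * sqrt(z))

# G^{2,0}_{0,2}(z | -; nu, 0) = 2 z^(nu/2) K_nu(2 sqrt(z))
lhs_K = meijerg([[], []], [[nu, 0], []], z)
rhs_K = 2 * z ** (nu / 2) * besselk(nu, 2 * sqrt(z))

assert abs(lhs_I - rhs_I) < mp.mpf("1e-25")
assert abs(lhs_K - rhs_K) < mp.mpf("1e-25")
```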
\section{Global spectra in product and Muttalib--Borodin ensembles}
\label{sec:global}
The study of the scaling limit at the origin in the previous section introduces a scale in which the average spacing between eigenvalues is of order unity. A very different, but still well-defined, limiting process is the so-called global scaling regime. In this regime the average spacing between eigenvalues tends to zero in such a way that the spectral density tends to a quantity $\rho(x)$ with compact support $I\subset\mathbb R$ and $\int_I \rho(x)dx=1$. Here $\rho(x)$ is referred to as the global density.
Throughout this section, the indices $\nu_1,\ldots,\nu_M$ are kept fixed.
For the Laguerre Muttalib--Borodin ensemble specified by the density~\eqref{MB-laguerre} the global scaling limit corresponds to a change of variables $x_j\mapsto nx_j$. Introducing the further change of variables $x_j\mapsto Mx_j^M$, the global density is known to be the so-called Fuss--Catalan density with parameter $M$ \cite{FW15}. It can be specified by the moment sequence
\begin{equation}
\text{FC}_M(k)=\frac1{Mk+1}\binom{(M+1)k}k,\qquad k=0,1,\ldots\,.
\end{equation}
These are the Fuss--Catalan numbers (the Catalan numbers are the case $M=1$).
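As a quick illustration (ours, not part of the text), the moment sequence can be tabulated directly from this formula; for $M=1$ it reproduces the Catalan numbers:

```python
from math import comb

def fuss_catalan(M, k):
    # FC_M(k) = binom((M+1)k, k) / (Mk + 1), always an integer
    return comb((M + 1) * k, k) // (M * k + 1)

assert [fuss_catalan(1, k) for k in range(5)] == [1, 1, 2, 5, 14]  # Catalan
assert [fuss_catalan(2, k) for k in range(5)] == [1, 1, 3, 12, 55]
```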
Now, consider the product of $M$ standard complex Gaussian random matrices. Consistent with the discussion in Section~\ref{sec:motivation}, the corresponding global density is again the Fuss--Catalan density with parameter $M$~\cite{Mu02,AGT10,BBCC11,NS06}.
It is known that the Fuss--Catalan density, $\rho^{(M)}_\text{FC}(x)$ say, can also be characterised as the minimiser of the energy functional
\begin{equation}\label{energy-laguerre}
E[\rho]=M\int_0^Ldx\,\rho(x)x^{\frac1M}-\frac{1}{2}\int_0^Ldx\int_0^Ldy\,\rho(x)\rho(y)\log\big(|x-y||x^{\frac1M}-y^{\frac1M}|\big)
\end{equation}
with $L=(M+1)^{M+1}/M^M$; see~\cite{CR14,FL15,FLZ15}. Note that the energy functional~\eqref{energy-laguerre} relates to~\eqref{MB-laguerre} through the aforementioned change of variables. Similarly, the energy functional corresponding to~\eqref{MB-Hermite} is
\begin{align}
\tilde E[\tilde\rho]&=\theta\int_{-\tilde L}^{\tilde L}dx\,\tilde\rho(x)|x|^{\frac2\theta}
-\frac{1}{2}\int_{-\tilde L}^{\tilde L}dx\int_{-\tilde L}^{\tilde L}dy\,
\tilde\rho(x)\tilde\rho(y)\log\big(|x-y||\sgn x|x|^{\frac1\theta}-\sgn y|y|^{\frac1\theta}|\big) \nonumber \\
&=2\theta\int_{0}^{\tilde L}dx\,\tilde\rho(x)x^{\frac2\theta}-\int_{0}^{\tilde L}dx\int_{0}^{\tilde L}dy\,
\tilde\rho(x)\tilde\rho(y)\log\big(|x^2-y^2||(x^2)^{\frac1\theta}-(y^2)^{\frac1\theta}|\big)
\label{energy-hermite}
\end{align}
with $\theta=2M+1$.
We note that changing variables $x^2\mapsto x$ and $y^2\mapsto y$, then setting $\tilde\rho(x)=x\rho(x^2)$, reduces~\eqref{energy-hermite} to~\eqref{energy-laguerre} with $M$ replaced by $\theta=2M+1$ and $L=\tilde L^2$. Thus, the minimiser in~\eqref{energy-hermite} is given in terms of the Fuss--Catalan density by
\begin{equation}\label{double-sided-FC}
\tilde\rho(x)=|x|\rho_\text{FC}^{(2M+1)}(x^2)
\end{equation}
and is symmetric about the origin.
As an illustration, let us consider the simplest case, $\theta=1$, corresponding to $M=0$. The Fuss--Catalan density becomes the celebrated Mar\v cenko--Pastur density,
\begin{equation}
\rho_\text{FC}^{(1)}(x)=\frac{1}{2\pi}\sqrt{\frac{4-x}{x}},\qquad 0<x<4.
\end{equation}
The formula~\eqref{double-sided-FC} then gives the standard result (see e.g.~\cite{PSbook}) that the energy functional
\begin{equation}
\tilde E[\tilde\rho]=\int_{-2}^{2}dx\,\tilde \rho(x)x^{2}
-\int_{-2}^{2}dx\int_{-2}^{2}dy\,\tilde\rho(x)\tilde\rho(y)\log|x-y|
\end{equation}
is minimised by
\begin{equation}
\rho_\text{Wigner}(x)=\frac{\sqrt{4-x^2}}{2\pi},\qquad -2<x<2,
\end{equation}
which is Wigner's semi-circle law.
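This identity between the two-sided Mar\v cenko--Pastur density and the semicircle can be checked pointwise. The following minimal sketch (our illustration, not from the paper) verifies $|x|\rho_{\rm MP}(x^2)=\rho_{\rm Wigner}(x)$ numerically:

```python
import math

def marchenko_pastur(x):
    # Fuss-Catalan density with parameter 1, i.e. Marchenko-Pastur on (0, 4)
    return math.sqrt((4 - x) / x) / (2 * math.pi)

def semicircle(x):
    # Wigner's semi-circle law on (-2, 2)
    return math.sqrt(4 - x * x) / (2 * math.pi)

for x in (-1.5, -0.3, 0.7, 1.9):
    assert abs(abs(x) * marchenko_pastur(x * x) - semicircle(x)) < 1e-12
```

The agreement is exact, since $|x|\sqrt{(4-x^2)/x^2}=\sqrt{4-x^2}$.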
It has been demonstrated in Section~\ref{sec:product} that the energy functional implicit in~\eqref{energy-hermite} underlies the eigenvalue distribution of the random matrix product~\eqref{W1}. Thus, we can anticipate that after appropriate scaling the global density for the product ensembles is given by~\eqref{double-sided-FC}. A direct proof of this can be obtained through a number of different strategies. We consider first a method based on the characteristic polynomial.
In terms of the globally scaled variables, the key equation relating the averaged characteristic polynomial to the global density is the asymptotic formula~\cite{FW15}
\begin{equation}\label{asymp-stieltjes}
\frac1n\frac d{dz}\log\big\langle\det(z n^{M+\frac12}\mathbb I_n-W_M)\big\rangle=\tilde G_M(z)+O(n^{-1}),
\end{equation}
where $\tilde G_M(z)$ is the Stieltjes transform of the global spectral density,
\begin{equation}\label{stieltjes}
\tilde G_M(z)=\int_{-\tilde L}^{\tilde L}dx\,\frac{\tilde\rho(x)}{z-x}.
\end{equation}
Following the strategy first used in~\cite{FL15}, the formula~\eqref{asymp-stieltjes} leads to a characterisation of the Stieltjes transform~\eqref{stieltjes}, upon realising that the characteristic polynomial satisfies a linear differential equation.
\begin{prop}
Consider a matrix product~\eqref{W1} with even matrix dimension, $n=2N$. Let
\begin{equation}\label{char-poly}
f(z)=\big\langle\det(z\mathbb I_{n}-W_M)\big\rangle
\end{equation}
denote the characteristic polynomial. Then~\eqref{char-poly} is a solution of the differential equation of order $2M+2$,
\begin{equation}\label{diff}
2z^2\Big(z\frac d{dz}-n\Big)f(z)
=\prod_{m=0}^M\Big(z\frac d{dz}+\nu_m\Big)\Big(z\frac d{dz}+\nu_m-1\Big)f(z),
\end{equation}
with asymptotic boundary condition $f(z)\sim z^n$ for $|z|\to \infty$.
\end{prop}
\begin{proof}
The characteristic polynomial~\eqref{char-poly} is identical to the bi-orthogonal polynomial $p_{2N}(z)$. As shown earlier, this polynomial is proportional to a Meijer $G$-function~\eqref{p_2n-meijer}. It is well-known that such Meijer $G$-functions satisfy the differential equation~\eqref{diff}. The asymptotic boundary condition follows trivially, since $f(z)$ is a monic polynomial.
\end{proof}
Changing variables $z\mapsto n^{M+\frac12}\hat z/\sqrt{2}$ in~\eqref{diff} and using that~\cite{FL15}
\begin{equation}
\frac{f^{(k)}(\hat z)}{f(\hat z)}\sim\Big(\frac{f'(\hat z)}{f(\hat z)}\Big)^k
\end{equation}
to leading order in $n$, we see that for large $n$ the differential equation~\eqref{diff} reduces to the algebraic equation (see e.g. \cite{bai07} for $M=1$)
\begin{equation}\label{alg-eq1}
z^2(z\tilde G_M(z)-1)=(z\tilde G_M(z))^{2M+2}
\end{equation}
with asymptotic condition $\tilde G_M(z)\sim 1/z$ as $|z|\to\infty$.
This equation is to be compared to the algebraic equation satisfied by the Stieltjes transform of the Fuss--Catalan density,
\begin{equation}\label{alg-eq2}
z(zG_M(z)-1)=(zG_M(z))^{M+1},
\end{equation}
see e.g.~\cite{FL15}. With $z\mapsto z^2$, $M\mapsto 2M+1$ and setting $\tilde G_{M}(z)=zG_{2M+1}(z^2)$, we see that~\eqref{alg-eq2} reduces to~\eqref{alg-eq1}. This prescription is equivalent to~\eqref{double-sided-FC}, thus verifying this formula as the evaluation of the global density.
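As a numerical sanity check (ours, not part of the argument), one can solve~\eqref{alg-eq1} by fixed-point iteration at a large real $z$ and compare against the moment expansion implied by the prescription $\tilde G_M(z)=zG_{2M+1}(z^2)$, namely $\tilde G_M(z)\approx\sum_{k\geq 0}\mathrm{FC}_{2M+1}(k)\,z^{-2k-1}$:

```python
from math import comb

def fc(p, k):
    # Fuss-Catalan number FC_p(k) = binom((p+1)k, k) / (pk + 1)
    return comb((p + 1) * k, k) // (p * k + 1)

def stieltjes(z, M, iters=400):
    # Solve z^2*(w - 1) = w^(2M+2) for w = z*G(z) on the branch with
    # w -> 1 as z -> infinity, via the contraction w = 1 + w^(2M+2)/z^2.
    w = 1.0
    for _ in range(iters):
        w = 1.0 + w ** (2 * M + 2) / z ** 2
    return w / z

M, z = 1, 10.0
series = sum(fc(2 * M + 1, k) / z ** (2 * k + 1) for k in range(12))
assert abs(stieltjes(z, M) - series) < 1e-10
```

The even moments of the two-sided density are thus the Fuss--Catalan numbers with parameter $2M+1$, consistent with the derivation above.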
The same result can also be obtained using free probability techniques. To see this, we need some additional notation. Let $a$ be a non-commutative random variable with distribution $d\mu(x)=\rho(x)dx$. The Stieltjes transform $G_a(z)$ of the variable $a$ is defined analogously to~\eqref{stieltjes}. The $S$-transform is defined as
\begin{equation}
S_a(z)=\frac{1+z}{z}\gamma^{-1}(z)
\qquad\text{with}\qquad
\gamma(z)=-1+z^{-1}G_a(z^{-1}).
\end{equation}
Now assume that $a$ and $b$ are two freely independent non-commutative random variables and that the Stieltjes transform $G_b(z)$ satisfies a functional equation $P(z,G_b(z))=0$.
It is known~\cite{NS06} that under these conditions the Stieltjes transform $G_{ab}(z)$ of the product $ab$ satisfies
\begin{equation}\label{functional-recursion}
P\Big(zS_a(zG_{ab}(z)-1),\frac{zG_{ab}(z)}{S_a(zG_{ab}(z)-1)}\Big)=0.
\end{equation}
Moreover, we know that if $a$ is given by the free normal distribution (i.e. Wigner's semi-circle) and $b$ is given by the free Poisson distribution (i.e. Mar\v cenko--Pastur), then
\begin{equation}
S_a(z)=\frac1{1+z} \qquad\text{and}\qquad G_{b}(z)^2-zG_b(z)+1=0.
\end{equation}
We can now use that the limiting distributions for the GUE and the Wishart ensemble are the free normal and the free Poisson, respectively. Thus, using~\eqref{functional-recursion} $M$ times, we see that our product~\eqref{W1} indeed gives rise to the functional equation~\eqref{alg-eq1}.
It is also possible to construct a parametrisation of the global density in terms of elementary functions based on the polynomial equation \eqref{alg-eq1}. With
\begin{equation}
x_{0}^{2}=\frac{\big(\sin((2M+2)\varphi)\big)^{2M+2}}{\sin\varphi\,\big(\sin((2M+1)\varphi)\big)^{2M+1}},
\qquad 0\leq\varphi\leq\frac{\pi}{2M+2},
\end{equation}
we have
\begin{equation}
\tilde\rho(x_0)
=\frac{1}{\pi}\sqrt{\frac{\sin\varphi}{\sin((2M+1)\varphi)}}\left(\frac{\sin((2M+1)\varphi)}{\sin((2M+2)\varphi)}\right)^{M}\sin\varphi,
\qquad 0\leq\varphi\leq\frac{\pi}{2M+2};
\end{equation}
see e.g.~\cite{FL15}. We remark that it follows that the density blows up at the origin like
\begin{equation}
\tilde\rho(x_0)\sim \frac{1}{\pi } \sin\frac{\pi}{2M+2} \, |x_0|^{-\frac{M}{M+1}}
\end{equation}
as $x_0\to0$.
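This rate can be checked numerically against the parametrisation above; in the sketch below (our illustration) the ratio of the parametrised density to the claimed asymptote tends to $1$ as $\varphi\to\pi/(2M+2)$:

```python
import math

def x0(phi, M):
    # parametrisation of the spectral variable, 0 <= phi <= pi/(2M+2)
    s = math.sin
    return math.sqrt(s((2 * M + 2) * phi) ** (2 * M + 2)
                     / (s(phi) * s((2 * M + 1) * phi) ** (2 * M + 1)))

def rho(phi, M):
    # parametrised global density tilde(rho)(x0(phi))
    s = math.sin
    return (math.sqrt(s(phi) / s((2 * M + 1) * phi))
            * (s((2 * M + 1) * phi) / s((2 * M + 2) * phi)) ** M
            * s(phi) / math.pi)

M = 2
for eps in (1e-3, 1e-5):
    phi = math.pi / (2 * M + 2) - eps   # approach the origin x0 -> 0
    asymptote = (math.sin(math.pi / (2 * M + 2)) / math.pi
                 * x0(phi, M) ** (-M / (M + 1)))
    assert abs(rho(phi, M) / asymptote - 1) < 50 * eps
```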
\section{Conclusion and outlook}
In this paper, we have shown that it is possible to construct a Hermitised random matrix product for which the eigenvalues form a determinantal point process on the entire real line with an explicit kernel. This is a fundamental new contribution to the study of random matrix product ensembles, since all previous exactly solvable models of this type have had eigenvalues restricted to the positive half-line. Furthermore, we have argued that this Hermitised product ensemble can be considered a natural generalisation of the classical Hermite ensemble (i.e. GUE) in a similar way as the squared singular values of matrix products with Gaussian matrices~\cite{AKW13,AIK13} and truncated unitary matrices~\cite{KKS15} can be considered generalisations of the Laguerre and Jacobi ensembles, respectively.
To this point, we have shown that the joint eigenvalue PDF reduces asymptotically to the Muttalib--Borodin ensemble of Hermite type.
On another front, we have shown that the local scaling limit near the origin is described by a two-sided generalisation of the so-called Meijer $G$-kernel~\cite{KZ14}. This two-sided kernel reduces to the sine kernel in the simplest case. We have also seen that the global density can be found explicitly and that it is expressed in terms of the so-called Fuss--Catalan distribution in a simple manner. Our result relies on an explicit double contour integral formulation of the correlation kernel (Proposition \ref{prop:kernel-finite}).
It is worth stressing that one could make full use of this double contour integral formulation and give an analytical proof of the global density. In fact, following almost exactly the same steps as introduced in \cite{LWZ14}, it can be proven that the sine kernel arises in the bulk
and that the Airy kernel arises at the soft edge; cf. the proofs of Theorems 1.1 and 1.3 as well as Remark 2 in~\cite{LWZ14}. The full details are beyond the scope of this paper, so let us only mention that a basic starting point is to approximate the integrand by elementary functions and rewrite the kernel, say the even part, as
\begin{multline}
\frac{1}{n \tilde\rho(x_0)}\Big( \sqrt{\frac{2}{n}} \Big)^{2M+1}K_{2n}^\text{even}
\bigg(
\Big( \sqrt{\frac{2}{n}} \Big)^{2M+1} \Big(\frac{x_0}{\sqrt{2}}+\frac{x}{ \tilde\rho(x_0)n}\Big), \Big( \sqrt{\frac{2}{n}} \Big)^{2M+1}\Big(\frac{x_0}{\sqrt{2}}+\frac{y}{ \tilde\rho(x_0)n}\Big)\bigg)
\\ \sim \frac{\sqrt{2}}{|x_0|\tilde\rho(x_0)}\frac{1}{(2\pi i)^2}\int dt\int ds\,\frac{e^{n(g(s)-g(t))}}{s-t} \Big|1+\frac{\sqrt{2}x}{ x_0\tilde\rho(x_0)n}\Big|^{2ns} \Big|1+\frac{\sqrt{2}y}{ x_0\tilde\rho(x_0)n}\Big|^{-2nt-1}
\frac{h_{n}(s)}{h_{n}(t)},
\end{multline}
where the phase function is given by
\begin{equation}
g(z)=(2M+1)z-2(M+1)z\log z+z\log(z-1)-\log(1-z)+z\log x_{0}^2.
\end{equation}
Hence, the saddle point equation $g'(z)=0$ is exactly expressed through the equation \eqref{alg-eq1}. We stress that the parametrisation above plays a key role in the proof of the sine kernel via the steepest descent method.
Finally, we emphasize that our construction of a Hermitised random product ensemble is based on a matrix transformation which maps the space of polynomial ensembles onto itself (Theorem~\ref{T1}). Such matrix transformations are important since they preserve exact solvability. Our proof of Theorem~\ref{T1} is applicable to the Hermitised product ensemble multiplied by a Gaussian matrix, crucially with the help of the hyperbolic HCIZ integral over the pseudo-unitary group. However, it would be interesting to see whether this could be extended to the product ensemble multiplied by other types of random matrices, say, truncated unitary matrices. For this, a possible way is to first extend the matrix integral formula stated in \cite[Theorem 2.3]{KKS15} from the unitary group to the pseudo-unitary case, and then perform the same steps as in Section \ref{sec:fyodorov}. This will be an interesting and challenging problem for us.
\paragraph{Acknowledgements}
We thank S. Kumar for useful discussions on his paper and M. Kieburg for comments on a first draft.
We acknowledge support by the Australian Research Council through grant DP170102028 (PJF),
the ARC Centre of Excellence for Mathematical and Statistical Frontiers (PJF,JRI), and partially by ERC Advanced Grant \#338804, the National Natural Science Foundation of China \#11771417, the Youth Innovation Promotion Association CAS \#2017491, the Fundamental Research Funds for the Central Universities \#WK0010450002, Anhui Provincial Natural Science Foundation \#1708085QA03 (DZL). D.-Z. Liu is particularly grateful to L\'{a}szl\'{o} Erd\H{o}s for funding his one-year stay at IST Austria.
\section{Introduction}
Voter models are a classical example of an interacting particle system defined on a graph that models how consensus is formed e.g.\ across a social network.
In the standard model voters can have one of two opinions and at rate $1$ a vertex updates and copies the opinion from one of its neighbours.
Classically, the underlying graph is $\mathbb{Z}^d$, see e.g.~\cite{liggett1985interacting}, and typical questions study the structure and existence of invariant measures.
When considered on a finite graph, the invariant measures become trivial and the main question is how long it takes to reach consensus.
One example where this question can be answered is the complete graph on $N$ vertices, in which case the voter model is a variation of the Moran model from evolutionary biology, and it is known that consensus is reached in time of order $N$.
More recently, the voter model has also been studied on random graphs, see e.g.~\cite[Chapter 6.9]{durrett2007random}, and any result is very much dependent on the underlying model.
Moreover, the case when the underlying graph is inhomogeneous in the sense that its empirical degree distribution shows power law behaviour has not
been treated systematically. In the nonrigorous literature, this analysis has been carried out by~\cite{sood2008voter} for a mean-field model.
It is well known that voter models are dual to coalescing random walks (which in some sense trace back where opinions came from). In particular,
if we consider the final coalescence time at which a set of random walks started at each vertex in the underlying graph have coalesced into a single walker, then
any paper that bounds the final coalescence time also bounds the consensus time in the voter model. Examples
include \cite{oliveira2013mean, kanade2016coalescence}.
In this paper, we consider the voter model on subcritical inhomogeneous random graphs showing power law behaviour. The reason that we focus
on subcritical random graphs is that, as we will see below, the behaviour observed here cannot be captured by mean-field methods. Moreover, in our model the
random graphs are disconnected, but the components are still of polynomial order in the number of vertices and have the fractal-like structure seen in Figure~\ref{component_figure}. Therefore, the asymptotics of the consensus time (defined as the first time that all components reach consensus) depends on a subtle interplay of the different structures of the components.
Furthermore, we also introduce a ``temperature'' parameter $\theta \in \mathbb{R}$ into the model, so that vertices update at rate proportional to $\operatorname{d}(v)^\theta$, where $\operatorname{d}(v)$ denotes the degree of vertex $v$. This extra parameter leads to interesting phase transitions in $\theta$, where in the different phases different structural elements of the underlying random graphs dominate the consensus time.
Finally, we also consider a variation of the voter model, also considered by \cite{moinet2018generalized} and similar to the ``oblivious'' model of \cite{cooper2018discordant}, which we will refer to as the \emph{discursive voter model}. In this model, vertices update at rate $\operatorname{d}(v)^\theta$, but then they `discuss' with a randomly chosen neighbour and agree on one opinion chosen at random from their respective opinions. This model gives a very different phase diagram -- surprisingly, the large components do not explain the consensus time for large $\theta$ or for small $\theta$.
Our proofs rely on duality to coalescing random walkers and using the right tools to bound the expected coalescence times. This is combined with a very fine analysis of the structure of subcritical inhomogeneous random graphs models that is not readily available in the literature.
\begin{figure}[t!]
\centering
\vspace{-1cm}
\centerline{\includegraphics[width=1\textwidth ]{bigger_cpt_picture_move.eps}}
\caption{The component containing the vertex $1$, for a graph in the class $\mathcal{G}_{\beta,\gamma}$ with subcritical network parameters $(\beta,\gamma)=(0.05,0.45)$. On these $4429$ vertices we can already see the emerging fractal structure.}\label{component_figure}
\end{figure}
{\bf Notation.} Throughout the paper, we will use the following notation.
For sequences of positive random variables $(X_N)_{N \geq 1}$ and $(Y_N)_{N \geq 1}$, we write
$X_N=O_{\mathbb{P}}^{\log N}(Y_N)$ if there exists $K> 0$ such that
\[
\mathbb{P}\left(X_N\leq Y_N (\log N)^K \right)\rightarrow 1
\]
as $N \rightarrow \infty$. Similarly, we write $X_N=\Omega_{\mathbb{P}}^{\log N}(Y_N)$ if
$Y_N=O_{\mathbb{P}}^{\log N}(X_N)$. If both bounds hold we write $X_N=\Theta_{\mathbb{P}}^{\log N}(Y_N)$.
Throughout we write $[N] = \{1,\ldots, N\}$.
For any graph $G$, we write $V(G)$ for its vertex set and $E(G)$ for its edge set. Moreover, if $v , w\in V(G)$, we write $v \sim w$ if $v$ and $w$ are neighbours, i.e.\ if $\{ v, w \} \in E(G)$. Also, for $v \in V(G)$, we
denote by $\operatorname{d}(v)$ its degree (i.e.\ the number of its neighbours).
\section{Main results}
In this paper, we will consider the following two variants of the voter model.
\begin{definition}\label{def:voter}
Let $G = (V,E)$ be a (simple) finite graph. Given $\eta \in \{ 0,1\}^V$, define for $i\neq j \in V$,
\[ \eta^{i \leftarrow j} (k) = \left\{ \begin{array}{ll} \eta(j) & \mbox{if } k = i \in V, \\ \eta(k) & \mbox{if } k \in V \setminus \{ i \} . \end{array}\right. \]
The \emph{voter model} is a Markov process $(\eta_t)_{t \geq 0}$ with state space $\{0,1\}^V$ and with the following update rules depending on a parameter $\theta \in \mathbb{R}$:
\begin{itemize}
\item[(a)] In the \emph{classical} voter model, for any neighbours $i$ and $j \in V$, the state $\eta \in \{0,1\}^V$ is replaced by $\eta^{i \leftarrow j}$ at rate
\[ \operatorname{d}(i)^{\theta -1} . \]
\item[(b)] For the \emph{discursive voter model}, for any neighbours $i$ and $j \in V$, the state $\eta \in \{0,1\}^V$ is replaced by $\eta^{i \leftarrow j}$ at rate
\[ \frac{1}{2} \big(\operatorname{d}(i)^{\theta -1} + \operatorname{d}(j)^{\theta -1}\big) . \]
\end{itemize}
\end{definition}
The classical voter model has the interpretation that each vertex $i$ updates its opinion at rate $\operatorname{d}(i)^\theta$ by copying the opinion of a uniformly chosen neighbour.
In the discursive model, each vertex $i$ becomes active at rate $\operatorname{d}(i)^\theta$, then it chooses a neighbour uniformly at random and then both neighbours agree on a common opinion by picking one of their opinions randomly.
Note also that for the `temperature' parameter $\theta$, $\theta = 0$ corresponds to the `standard' voter model, where each vertex $i \in [N]$ updates its opinion at rate $1$.
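To make the dynamics concrete, here is a minimal Gillespie-type simulation of the classical voter model (our illustrative sketch, not used in the proofs; the graph and parameter values are toy choices):

```python
import random

def classical_voter(adj, theta, opinions, rng):
    """Classical voter model: vertex i updates at rate deg(i)**theta and
    copies the opinion of a uniformly chosen neighbour. Mutates `opinions`
    in place and returns the consensus time of a single run."""
    rates = [len(adj[i]) ** theta for i in adj]
    verts, total, t = list(adj), sum(rates), 0.0
    while len(set(opinions.values())) > 1:
        t += rng.expovariate(total)            # exponential waiting time
        (i,) = rng.choices(verts, weights=rates)
        opinions[i] = opinions[rng.choice(adj[i])]
    return t

rng = random.Random(1)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path on 4 vertices
opinions = {i: i % 2 for i in adj}             # alternating initial opinions
t = classical_voter(adj, theta=0.0, opinions=opinions, rng=rng)
assert len(set(opinions.values())) == 1        # consensus reached
```

On a connected finite graph the chain is absorbed in consensus almost surely, so the run terminates.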
We consider a general class of inhomogeneous random graphs, which includes the Chung-Lu model as a special case. The latter has vertex set $[N]$ and each edge is present independently with probability
\begin{equation}\label{chung_lu_edges}
p_{ij}:=\frac{\beta N^{2\gamma-1}}{i^\gamma j^\gamma}\wedge 1.
\end{equation}
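For illustration (our sketch, not part of the analysis), a graph with these edge probabilities can be sampled naively in $O(N^2)$ time:

```python
import random

def chung_lu(N, beta, gamma, rng):
    # Adjacency lists for the Chung-Lu graph on [N] with
    # p_ij = min(beta * N^(2*gamma-1) / (i*j)^gamma, 1).
    adj = {i: [] for i in range(1, N + 1)}
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):
            p = min(beta * N ** (2 * gamma - 1) / (i * j) ** gamma, 1.0)
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

G = chung_lu(200, beta=0.05, gamma=0.45, rng=random.Random(7))
assert all(i in G[j] for i in G for j in G[i])   # adjacency is symmetric
assert sum(len(nb) for nb in G.values()) > 0     # some edges are present
```

The parameter choice $(\beta,\gamma)=(0.05,0.45)$ matches the subcritical example of Figure~\ref{component_figure}.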
We generalize this model as follows.
\begin{definition}\label{define_G} Fix $\beta, \gamma >0$ such that $\beta+2\gamma<1$.
We say that a sequence of (simple) random graphs $(G_N)_{N \geq 1}$, where $V(G_N) = [N]$, is
in the class $\mathcal{G}_{\beta,\gamma}$ of \emph{subcritical inhomogeneous random graph models}
with parameters $\beta$ and $\gamma$, if for any $N$ there exists a symmetric array $(q_{i,j})_{i ,j \in [N]}$ of numbers in $(0,\frac 12)$ such that each edge $\{i,j\}$, $i \neq j$, is present in $G_N$ independently of all others with probability $q_{i,j}$. Moreover, for $(p_{i,j})_{i,j \in [N]}$ as in~\eqref{chung_lu_edges}, we require that
\[
\lim_{N\rightarrow\infty} \sum_{i \neq j} \frac{(p_{i,j}-q_{i,j})^2}{p_{i,j}}=0.
\]
\end{definition}
\begin{remark}\label{rem:IRG}
\begin{itemize}
\item[(a)] Our definition includes various well-known models of inhomogeneous random graphs additionally to the Chung-Lu (CL) model, where $q_{i,j} = p_{i,j}$.
Other models are the simple Norros-Reittu random graph (SNR) with $q_{i,j}=1-e^{-p_{i,j}}$ obtained from the Norros-Reittu model by flattening all multiedges, see also Section~\ref{Branching_section} below.
It also includes the Generalised Random Graph (GRG) with $q_{i,j}=\frac{p_{i,j}}{1+p_{i,j}}$, which has the distribution of a particular configuration model conditioned to be simple \cite[Theorem 7.18]{van2016random}.
\item[(b)] In the following, we will slightly abuse notation and write $G_N \in \mathcal{G}_{\beta, \gamma}$ if we mean that $G_N$ is the random graph with vertex set $[N]$ in a sequence of random graphs in $\mathcal{G}_{\beta, \gamma}$.
\item[(c)]
Any two representatives of the class $\mathcal{G}_{\beta, \gamma}$ are asymptotically equivalent, see~\cite[Theorem 6.18]{van2016random}, so that if a statement holds with high probability for one particular model it also holds for any other one from the class. In the proofs, we will sometimes make use of this freedom and choose a particular model when it is convenient.
\item[(d)] By \cite{bollobas2007phase} we know that the regime $\beta +2\gamma<1$ is the complete subcritical region, so in particular our class of random graphs does not have a giant component.
Also, by~\cite{chung2006complex} it is known that these models have power-law exponent $\tau = 1 + 1/\gamma$.
\end{itemize}
\end{remark}
Note that our network model is typically disconnected, so then the voter model $(\eta_t)_{t \geq 0}$ can hit an absorbing state without being in global consensus.
So, let $C_1, \ldots, C_k$ be the components of a graph $G$ on the vertex set $[N]$.
The \emph{consensus time} is the first time that there is consensus on each component, i.e.\
\[ \tau_{\rm cons} = \inf\{ t \geq 0 \, : \, \eta_t|_{C_i} \mbox{ is constant for each } i \in [k] \} . \]
We can now state our first main theorem on the expected consensus time.
\begin{theorem}\label{class_subcrit}
Suppose $G_N \in \mathcal{G}_{\beta,\gamma}$ for some $\beta+2\gamma<1$ and that the initial conditions are chosen according to $\mu_u$ such that each initial opinion is an independent $\operatorname{Bernoulli}(u)$ random variable, for some $u \in (0,1)$. Then, for the classical voter model with parameter $\theta \in \mathbb{R}$, we have
\begin{equation}\label{eq:2611-1}
\mathbb{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}|G_N)=
\Theta_{\mathbb{P}}^{\log N}\left( N^{\mathbbm{c}}\right)
\end{equation}
where the exponent $\mathbbm{c}=\mathbbm{c}(\gamma,\theta)$ is given as
\[
\mathbbm{c}=
\begin{cases}
\gamma & \theta\geq 1 ,\\
\gamma \theta & \frac{1}{2-2\gamma}<\theta< 1 , \\
\frac{\gamma}{2-2\gamma} & 0 \leq \theta\leq \frac{1}{2-2\gamma} ,\\
\frac{\gamma(1- \theta)}{2-2\gamma} & \theta < 0.
\end{cases}
\]
\end{theorem}
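The piecewise exponent can be encoded directly; the following check (ours, for illustration only) confirms that $\mathbbm{c}(\gamma,\theta)$ is continuous in $\theta$ at the boundaries of the four regimes:

```python
def c_classical(gamma, theta):
    # exponent c(gamma, theta) of the classical voter model theorem
    if theta >= 1:
        return gamma
    if theta > 1 / (2 - 2 * gamma):
        return gamma * theta
    if theta >= 0:
        return gamma / (2 - 2 * gamma)
    return gamma * (1 - theta) / (2 - 2 * gamma)

g = 1 / 3
b = 1 / (2 - 2 * g)  # boundary of the two middle regimes, here 0.75
assert abs(c_classical(g, 1 - 1e-12) - c_classical(g, 1)) < 1e-9
assert abs(c_classical(g, b) - g * b) < 1e-12
assert abs(c_classical(g, -1e-12) - c_classical(g, 0)) < 1e-9
```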
Note that we are only averaging over the voter model dynamics, so that the expectation in~\eqref{eq:2611-1} is still random (but depends only on the realization of the particular random graph).
The averaged $\theta=0$ classical dynamics are studied in~\cite{sood2008voter}, where a linear expected time to reach \emph{global} consensus is found whenever $\gamma<1/2$. In our non-averaged dynamics, however, consensus is componentwise and hence we find a sublinear consensus time $N^\frac{\gamma}{2-2\gamma}=N^\frac{1}{2\tau-4}=o( \sqrt{N} )$.
\begin{remark}[Dominating contributions for the classical model]\label{class_dominating_contrib} We remark that the theorem shows that dominating contributions to the consensus time come from different parts of the random graph in the different regimes.
The proof can be adapted to show that on $\mathscr{C}(1)$, the component of vertex $1$, which is with high probability the largest component, we have the following asymptotics:
\[
\mathbb{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}(\mathscr{C}(1))|G_N)
=
\Theta_{\mathbb{P}}^{\log N}\left(N^c\right), \quad \text{where }
c=
\begin{cases}
\gamma & \theta\geq 1 ,\\
\gamma \theta & \frac{\gamma}{1-\gamma}<\theta< 1 , \\
\frac{\gamma^2}{1-\gamma} & 0 \leq \theta\leq \frac{\gamma}{1-\gamma} ,\\
\frac{\gamma^2(1- \theta)}{1-\gamma} & \theta < 0.
\end{cases}
\]
Therefore, by comparison with Theorem \ref{class_subcrit}, in the regime $\theta<1/(2-2\gamma)$ we find that $\mathscr{C}(1)$ is not the component that takes longest to reach consensus. Instead, in this case the dominating contribution to the consensus time of Theorem \ref{class_subcrit} comes from
the consensus time on a \emph{double star} component, i.e.\ a tree component with two connected vertices of polynomially maximal degree, which exists with high probability.
\end{remark}
Next we consider the discursive model, for which we have the following phase diagram.
\begin{theorem}\label{obl_subcrit}
Suppose $G_N \in \mathcal{G}_{\beta,\gamma}$ for some $\beta+2\gamma<1$ and that the initial conditions are chosen according to $\mu_u$ such that each initial opinion is an independent $\operatorname{Bernoulli}(u)$ random variable, for some $u \in (0,1)$. Then, for the discursive voter model with parameter $\theta \in \mathbb{R}$, we have
\[
\mathbf{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}|G_N)=
\Theta^{\log N}_{\mathbb{P}}\left( N^{\mathbf{c}} \right)
\]
where the exponent $\mathbf{c}=\mathbf{c}(\gamma,\theta)$ is given as
\[
\mathbf{c}=
\begin{cases}
\frac{\gamma}{2-2\gamma} & \theta \geq \frac{3-4\gamma}{2-2\gamma} , \\
\gamma(2-\theta) & 1 < \theta < \frac{3-4\gamma}{2-2\gamma} , \\
\gamma & 2\gamma \leq \theta \leq 1 ,\\
\frac{\gamma(2-\theta)}{2-2\gamma} & \theta < 2\gamma.
\end{cases}
\]
\end{theorem}
Unlike for the classical model, where large positive $\theta$ slows down consensus when compared to the standard model $\theta=0$, for the discursive model we see that large $\theta$ accelerates consensus by accelerating mixing: for each $\gamma$, $\mathbf{c}(\gamma,\theta)$ is non-increasing in $\theta$. See also Figure~\ref{fig:exponent_figure} for an illustration.
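Encoding $\mathbf{c}(\gamma,\theta)$ in the same way (our check, not part of the proofs) confirms continuity and the claimed monotonicity in $\theta$:

```python
def c_discursive(gamma, theta):
    # exponent c(gamma, theta) of the discursive voter model theorem
    if theta >= (3 - 4 * gamma) / (2 - 2 * gamma):
        return gamma / (2 - 2 * gamma)
    if theta > 1:
        return gamma * (2 - theta)
    if theta >= 2 * gamma:
        return gamma
    return gamma * (2 - theta) / (2 - 2 * gamma)

g = 1 / 3
vals = [c_discursive(g, -2 + 0.01 * k) for k in range(500)]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))  # non-increasing
assert c_discursive(g, 1) == g                              # boundary value
```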
\begin{figure}[b!]
\centering
\centerline{\includegraphics[width=1.2\textwidth ]{exponents.eps}}
\caption{This figure shows the typical shapes by setting $\gamma=1/3$. Somewhat surprisingly, for any subcritical $(\beta,\gamma)$ parameters the function $\mathbbm{c}(\gamma,\theta)$ is \emph{not} monotonic in $\theta$ for the classical model. On the left we see that the most popular model $\theta=0$ is one of the fastest parameter values for consensus, as part of the optimal interval $\theta \in [ 0 , {1}/({2-2\gamma})]$. Conversely, the discursive model shows monotonicity in the exponent.}\label{fig:exponent_figure}
\end{figure}
\begin{remark}[Dominating contributions for the discursive model]
For comparison, we state the consensus order of $\mathscr{C}(1)$ with the discursive dynamic.
\[
\mathbf{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}(\mathscr{C}(1))|G_N)
=
\Theta_{\mathbb{P}}^{\log N}\left(N^c\right), \quad \text{where }
c=
\begin{cases}
\frac{\gamma^2}{1-\gamma} & \theta\geq \frac{2-3\gamma}{1-\gamma} ,\\
\gamma(2-\theta) & 1<\theta< \frac{2-3\gamma}{1-\gamma} , \\
\gamma & 3-\frac{1}{\gamma} \leq \theta\leq 1 ,\\
\frac{\gamma^2(2- \theta)}{1-\gamma} & \theta < 3-\frac{1}{\gamma}.
\end{cases}
\]
The most obvious difference here is that $\mathscr{C}(1)$ makes a dominating contribution to the consensus order on $G_N$, as seen in Theorem \ref{obl_subcrit}, only for parameters $\theta$ in an \emph{intermediate} range $\theta \in [2\gamma, \frac{3-4\gamma}{2-2\gamma} ]$, as opposed to Remark \ref{class_dominating_contrib}, where this was the case for all $\theta$ sufficiently large. Again we will see in the proofs that in the regimes where $\mathscr{C}(1)$ does not dominate, a component of double star type, which exhibits slow mixing, dominates instead.
\end{remark}
\begin{remark}[Transition in the power law] For illustration, we rephrase the main theorems by fixing $\theta$ and varying the tail exponent $\tau = 1 + 1/\gamma$.
For $G_N \in \mathcal{G}_{\beta,\gamma}$ and for the classical dynamics we obtain for $\theta \in \left(\frac{1}{2}, 1 \right)$,
\[
\mathbb{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}|G_N)=
\Theta_{\mathbb{P}}^{\log N}
\begin{cases} N^{\frac{1}{2\tau-4}} & \tau \leq 3 + 2 \left( \frac{1-\theta}{2\theta -1} \right) ,\\
N^{\frac{\theta}{\tau-1}} & \text{otherwise,}
\\
\end{cases}
\]
and for the discursive dynamics with $\theta \in \left(1,\frac{3}{2}\right)$, this translates to
\[
\mathbf{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}|G_N)=
\Theta_{\mathbb{P}}^{\log N}
\begin{cases}
N^{\frac{1}{2\tau-4}} & \tau \leq 3+2\left(\frac{\theta-1}{3-2\theta}\right) ,\\
N^{\frac{2-\theta}{\tau-1}} & \text{otherwise} .\\
\end{cases}
\]
From the proof one can see that in both these cases the consensus time on the largest component is dominant only for small $\tau$. If $\theta \in (0,1)$ then for the discursive dynamics we have that
\[
\mathbf{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}|G_N)=
\Theta_{\mathbb{P}}^{\log N}
\begin{cases}
N^{\frac{1}{\tau-1}} & \tau \leq 3+ 2\left( \frac{1-\theta}{\theta} \right) ,\\
N^{\frac{2-\theta}{2\tau-4}} & \text{otherwise.} \\
\end{cases}
\]
In this case, one can see from the proofs that the asymptotics for the largest component dominate for large $\tau$ values.
\end{remark}
\begin{remark}[The supercritical regime]
On the complete graph the model reduces to the one-dimensional Wright-Fisher model, an example of a well-mixed regime. On a supercritical scale-free network, too, we expect polylogarithmic mixing times in most cases, so that the consensus time on a component $\mathscr{C}$ has order
\begin{equation}\label{order_conjecture}
\Theta^{\log N}_{\mathbb{P}} \left( \frac{1}{\sum_{v \in \mathscr{C}} q(v) \pi(v)^2} \right)
\end{equation}
where $\pi, q$ are the stationary distribution and vertex jump rate of the
random walk obtained by tracing back opinions when restricted to the component $\mathscr{C}$, see Section~\ref{sec:duality} for precise definitions.
The order in Equation \eqref{order_conjecture} is that of the \emph{mean-field} model, see \cite{sood2008voter} for the mean-field approach. A very general rigorous analysis is made in \cite{cox2016convergence}, under assumptions on the mixing and meeting times for the dual chain. However, mixing times in particular are delicate to work with and therefore highly model-dependent.
For the Erd\H{o}s-R\'enyi graph we have detailed mixing time and structural results, see \cite{benjamini2014mixing}. Also on configuration models we have some mixing results, and note that we include a configuration model (with random degrees) in the class of graphs that we are considering in Definition \ref{define_G}. However, existing results assume \emph{subpolynomial maximum degree} as in \cite{berestycki2018random}, or a \emph{degree lower bound} as in \cite{abdullah2012cover}.
The conjecture is that these results do extend to general configuration models with power-law degree sequence and so $t_{\text{mix}}=\Theta_{\mathbb{P}}\left( \log^2 N \right)$. Results \cite[(3.21)]{cox2016convergence} and \cite[Lemma 3.17]{aldous-fill-2014}, with discrete-time analogues in \cite{kanade2016coalescence}, would then give the bounds
\[
\Omega_{\mathbb{P}}^{\log N} \left( \frac{1}{\sum_v q(v) \pi(v)^2} \right)
,\quad
O_{\mathbb{P}}^{\log N} \left( \frac{t_{\text{mix}}}{\sum_v \pi(v)^2} \right),
\]
which in many regimes, those where $\sum_v q(v) \pi(v)^2 \approx \sum_v \pi(v)^2$, are polynomially tight.
In fact, \cite{durrett2010some} conjectures for the $\theta=0$ model via Aldous' ``Poisson Clumping Heuristic'' \cite{aldous2013probability} that the order of the mean consensus time is really the exact polynomial without logarithmic corrections
as found by the mean-field approximation in \cite{sood2008voter}.
A structural result comparable to \cite{ding2014anatomy} but for the rank one scale-free network would solve the open question of mixing time, but also potentially give a direct handle on meeting time without the logarithmic factors.
\end{remark}
The remainder of the paper is organised as follows:
In Section~\ref{sec:duality}, we describe the (classical) duality of the voter model to coalescing random walks and then develop various tools for coalescing random walks. This section applies whenever the random walks follow the dynamics of a reversible Markov chain, and hence covers both of our models.
In Section~\ref{sec:structure}, we derive structural results for subcritical inhomogeneous random graphs. In Section~\ref{voter_models_section}, we combine the structural results with the bounds in Section~\ref{sec:duality} to complete the proofs of Theorems~\ref{class_subcrit} and~\ref{obl_subcrit}.
\section{Duality and bounds on the coalescence time}\label{sec:duality}
In this section, we consider the voter model on an arbitrary finite state space. Moreover, we discuss the main tool for analysing the voter model, namely the duality to a system of coalescing random walks. In the rest of the section, we then show various bounds on the coalescence time of a system of general random walks.
We will describe a general voter model, where the voters are indexed by $[n]=\{1,\ldots,n\}$ for some $n \in \mathbb{N}$ and the dynamics are governed by a matrix $Q = (Q(i,j))_{i, j \in [n]}$ which is the generator of a continuous-time, reversible Markov chain $(X_t)_{t \geq 0}$ on $[n]$.
Let $O$ be the set of possible opinions, then the $Q$-voter model $(\eta_t)_{t \geq 0}$ with $\eta_t \in O^{[n]}$
evolves as follows: for all $i\neq j \in [n]$ at rate $Q(i,j)$ the current state $\eta \in O^{[n]}$ is replaced by
\[ \eta^{i \leftarrow j} (k) = \left\{ \begin{array}{ll} \eta(j) &\mbox{if } k = i \\ \eta(k) &\mbox{if } k \in [n]\setminus \{ i \} .\end{array} \right. \]
In other words, at rate $Q(i,j)$ the voter $i$ copies the opinion from voter $j$.
It is classical that the voter model is dual to a system of coalescing random walks, see~\cite{liggett1985interacting}.
The duality can be described via a graphical construction. We start with the graph $\{(j,t) \, : \, j \in [n], t \geq 0\}$ and independent Poisson point processes $(N_{i,j}(t))_{t \in \mathbb{R}}, i \neq j$ (with rates $Q(i,j)$ respectively). If $t_k$ denotes a jump of $N_{i,j}$ we draw an arrow from $(t_k,j)$ to $(t_k,i)$. Given any initial condition $\eta_0 \in O^{[n]}$, we then let the opinions flow upwards starting at time $0$; whenever an opinion encounters an arrow, it follows the direction of the arrow and replaces the opinion of the voter found at the tip of the arrow.
Now, we fix a time horizon $T > 0$ and start with $n$ random walkers located at the points $(j,T), j \in [n]$; their trajectories run downwards through the graph, following each arrow, and whenever two walkers meet they coalesce. Denote
by $\xi_t^T := \{ \xi_t^T(j) \, : \, j \in [n]\} \subseteq [n]$ the set of positions of these walkers at time $t \geq 0$, so that $\xi_0^T = [n]$. From this construction, it follows that each walker follows the dynamics of the Markov chain $X$, so we obtain a system of coalescing Markov chains/random walks.
Moreover, one can immediately see that the voter model at time $T$ can be obtained by tracing the random walk paths backwards, i.e.\ for any $j \in [n]$,
\begin{equation}\label{eq:duality} \eta_T(j) = \eta_0 ( \xi_{T}^T(j)) . \end{equation}
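As an illustration of the graphical construction, the pathwise identity \eqref{eq:duality} can be checked by simulation. The following sketch (Python, standard library only; the function name and interface are ours, not from the literature) samples the Poisson arrows once, runs the voter model forwards and the coalescing walks backwards on the same realisation, and returns both sides of the identity.

```python
import random

def simulate_duality(Q, eta0, T, seed=0):
    """Sample the graphical construction once up to horizon T; return
    eta_T computed forwards and via the dual walks (they agree pathwise)."""
    rng = random.Random(seed)
    n = len(Q)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j and Q[i][j] > 0]
    rates = [Q[i][j] for (i, j) in pairs]
    # Poisson arrows (t, i, j): at time t, voter i copies voter j.
    events, t = [], 0.0
    while True:
        t += rng.expovariate(sum(rates))
        if t > T:
            break
        events.append((t,) + rng.choices(pairs, weights=rates)[0])
    # Forward voter model: apply the arrows in chronological order.
    eta = list(eta0)
    for _, i, j in events:
        eta[i] = eta[j]
    # Dual walks traced backwards from time T: at an arrow from j to i,
    # a walker sitting at i moves to j.
    xi = list(range(n))
    for _, i, j in reversed(events):
        xi = [j if p == i else p for p in xi]
    return eta, [eta0[xi[k]] for k in range(n)]
```

Running both computations on the same arrows always produces identical configurations, which is precisely the identity $\eta_T(j) = \eta_0(\xi_T^T(j))$.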
We are interested in general reversible Markov chains, so we do not necessarily assume that the Markov chain is irreducible. However, since $X = (X_t)_{t \geq 0}$ is reversible, we can decompose the state space into its irreducible components, which we
will denote by $C_1, \ldots, C_k$, so that $X$ restricted to $C_j$ is irreducible.
In this case,
for any $j \in [k]$, we denote the consensus time on the $j$th component by
\[ \tau_{\rm cons}(C_j) = \inf\{ t \geq 0 \, : \, \eta_t |_{C_j} \mbox{ is constant}\}. \]
Then, define the overall consensus time
\[ \tau_{\rm cons} = \max_{j \in [k]} \tau_{\rm cons} (C_j) . \]
Our main interest in this article is in the case when $O = \{0 ,1\}$ and the initial conditions $\eta_0$ are distributed according to $\mu_u$, the product of Bernoulli$(u)$ measures for some $u \in [0,1]$.
Then, we set
\[ t_{\rm cons}^{(u)} = \E_{\mu_u}(\tau_{\rm cons}). \]
For the duality, it will be easier to consider the voter model where each
voter starts with a different opinion, i.e.\ $\eta_0 = [n]$. Here, we define
\[ t_{\rm cons}^* = \E_{[n]} (\tau_{\rm cons}). \]
For the system of coalescing random walks, we define for each irreducible component $C_j, j \in [k]$,
\[ \tau_{\rm coal} (C_j) = \inf\{ t \geq 0 \, : \, |\xi^T_t \cap C_j| = 1 \} , \]
i.e.\ the first time all walkers started in this component have coalesced into a single walker. Moreover, we then define
\[ t_{\rm coal} = \E_{[n]} (\tau_{\rm coal}) , \quad \mbox{where }
\tau_{\rm coal} = \sup_{j \in [k]} \tau_{\rm coal}(C_j) . \]
By duality, we have that if the voter model starts in $\eta_0 = [n]$, then
\[ \p_{[n]} (\tau_{\rm coal} \leq T) = \p_{[n]} ( \tau_{\rm cons} \leq T) , \]
so $\tau_{\rm coal}$ and $\tau_{\rm cons}$ agree in distribution and in particular $t_{\rm coal} = t_{\rm cons}^*$.
As the following lemma shows, we can also get bounds for $t_{\rm cons}^{(u)}$.
\begin{lemma}[Duality]\label{binary_lower_bound}
In the setting above, we have for any $u \in (0,1)$,
\[ t^{(u)}_{\text{\textnormal{cons}}}
\leq t_{\text{\textnormal{coal}}}. \]
Suppose additionally that the dual Markov chain is irreducible, then for all $u \in (0,1)$,
\[
t^{(u)}_{\text{\textnormal{cons}}}
\geq 2u(1-u) \, t_{\text{\textnormal{coal}}} .
\]
\end{lemma}
\begin{proof}
By recolouring, we see that consensus is reached from the product Bernoulli measure $\mu_u$ no later than from the initial condition with distinct opinions, and hence
\[
t^{(u)}_{\text{\textnormal{cons}}}
\leq
t^*_{\text{\textnormal{cons}}}
=
t_{\text{\textnormal{coal}}}.
\]
For the other direction, suppose that the dual Markov chain is irreducible. Then, observe from the duality relation \eqref{eq:duality} that
\[
\mathbb{P}_{\mu_u}\left(\eta_T \text{ constant} \right)
=
\mathbb{P}_{\mu_u}\left( \eta_0 \text{ constant on } \xi_{T}^T \right)
=
\mathbb{E} \left(
u^{\left|\xi_{T}^T\right|}
+
(1-u)^{\left|\xi_{T}^T\right|}
\right)
\]
which we can crudely upper bound by considering the event $\{ \left|\xi_{T}^T\right| =1 \}$
\[
\begin{split}
\mathbb{E} \left( u^{\left|\xi_{T}^T\right|}+(1-u)^{\left|\xi_{T}^T\right|} \right)
&\leq
\mathbb{P}\left(
\left|\xi_{T}^T\right| =1
\right)+
\left(u^2+(1-u)^2\right)
\mathbb{P}\left(
\left|\xi_{T}^T\right| \geq 2
\right)\\
&=1-2u(1-u)
\mathbb{P}\left(
\left|\xi_{T}^T\right| \geq 2
\right).
\end{split}
\]
Therefore we have
\[
\begin{split}
t^{(u)}_{\text{\textnormal{cons}}}
&= \int_0^\infty \p_{\mu_u}( \tau_{\rm cons} > T) \, {\rm d} T \\
& =
\int_0^\infty 1-
\mathbb{E} \left(
u^{\left|\xi_{T}^T\right|}
+
(1-u)^{\left|\xi_{T}^T\right|}
\right)
{\rm d}T\\
&\geq
2u(1-u)
\int_0^\infty
\mathbb{P}\left(
\left|\xi_{T}^T\right| \geq 2
\right)
{\rm d}T
=
2u(1-u)t_{\text{\textnormal{coal}}},
\end{split}
\]
where we used irreducibility in the last step.
\end{proof}
We will control the time $t_{\rm coal}$ until all random walkers have coalesced using the following two bounds in terms of two auxiliary quantities that we define next.
First of all, let $X = (X_t)_{t \geq 0}$ and $Y = (Y_t)_{t \geq 0}$ be two independent reversible Markov chains with generator $Q$. Then, define the (expected) meeting time for $j \in [k]$ as
\[ t_{\rm meet}(C_j) = \max_{x,y \in C_j} \E_{x,y} ( \tau_{\rm meet}) , \quad \mbox{where } \tau_{\rm meet} = \inf\{ t \geq 0 \, : \, X_t = Y_t \} . \]
Moreover, an important role will be played by the (expected) hitting time
defined for $j \in [k]$ as
\[ t_{\rm hit}(C_j) = \max_{x,y \in C_j} \E_x (T_y) \, , \quad
\mbox{where } T_y = \inf\{ t \geq 0 \, : \, X_t = y \} . \]
Both these quantities give bounds on the coalescence time and thus on the consensus time.
\begin{proposition}\label{prop:coal} With the notation as above, we have that
\[ \sup_{j \in [k]} t_{\rm meet}(C_j) \leq t_{\rm coal} \leq e( \log n +2) \sup_{j \in [k]} t_{\rm meet}(C_j) .\]
Moreover,
for any $ j \in [k]$, \[ t_{\rm meet}(C_j) \leq t_{\rm hit}(C_j) .\]
\end{proposition}
\begin{remark} Recall that $t_{\rm coal}$ is defined as
\[ t_{\rm coal} = \E_{[n]}\Big( \sup_{j \in [k]} \tau_{\rm coal} (C_j) \Big) \, \]
so the non-standard part of the statement is that we can take the supremum out of the expectation.
For irreducible chains, the statement is
Proposition~14.11 in~\cite{aldous-fill-2014}. However, their proof does not really need this extra assumption. For the convenience of the reader, we repeat the proof below.
Furthermore, we note that for reducible chains the first bound is shown
in~\cite{oliveira2012coalescence} without the $\log n$ factor.
The stronger bound does not hold without the assumption of irreducibility.
Indeed, by considering a Markov chain consisting of $n$ components of size $2$ (e.g.\ with transition rate $1$ within each component), one sees that the factor $\log n$ in the proposition is sharp.
\end{remark}
\begin{proof}
The reversible Markov chain decomposes into irreducible recurrence classes; write $\mathscr{C}(i)$ for the class containing the state $i$. As in the proof of \cite[Proposition 14.11]{aldous-fill-2014}, consider for each $i$ an independent walker $W^{(i)}$ started in $i$. We have $n(n+1)/2$ meeting times
\begin{equation}\label{eq:meeting_ij}
\tau^{i,j}_{\text{meet}}:=\inf
\left\{
t \geq 0 :
W^{(i)}_t=W^{(j)}_t
\right\}
\end{equation}
for the walkers $1 \leq i \leq j \leq n$, where $\inf \emptyset := \infty$ and $\tau^{i,i}_{\text{meet}} = 0$. Define the function $f$ mapping each state to the vertex of lowest index in its recurrence class,
\[
f: i \mapsto \min \mathscr{C}(i) .
\]
Then the random coalescence time satisfies
\[
\tau_{\text{coal}}:=\max_{i=1}^n \tau_{\text{coal}}(\mathscr{C}(i))\leq \max_{i=1}^n \tau^{i,f(i)}_{\text{meet}}.
\]
We then apply a general result on the exponential tails of hitting times of finite Markov chains \cite[Equation (2.20)]{aldous-fill-2014}: for a continuous-time reversible chain started from an arbitrary initial distribution $\mu$, and for any subset $A \subseteq [n]$,
\[
\mathbb{P}_{\mu}(T_A>t)\leq \exp \left( - \left\lfloor \frac{t}{e \max_v \mathbb{E}_v T_A } \right\rfloor \right).
\]
For the meeting time variables, writing $t_{\text{meet}} := \sup_{j \in [k]} t_{\text{meet}}(C_j)$, this leads to
\[
\mathbb{P}(\tau^{i,j}_{\text{meet}}>t)\leq \exp\left( - \left\lfloor \frac{t}{e t_{\text{meet}}} \right\rfloor \right) .
\]
We can deduce that
\[\mathbb{P}(\tau_{\text{coal}}>t)
\leq \sum_{i=1}^n \mathbb{P}\left(\tau^{f(i),i}_{\text{meet}}>t\right)
\leq n\exp\left( - \left\lfloor \frac{t}{e t_{\text{meet}}} \right\rfloor \right).
\]
Finally, we conclude as in \cite[Proposition 14.11]{aldous-fill-2014} by integrating to get
\[
t_{\rm coal}\leq
\int_0^\infty
1 \wedge
\left(
n e \exp\left( - \frac{t}{e t_{\text{meet}}} \right)
\right) {\rm d} t
=
e \left(
2+\log n
\right) t_{\text{meet}},
\]
which proves the first claim.
The second claim of the proposition is \cite[Proposition 14.5]{aldous-fill-2014}.
\end{proof}
In particular, Proposition~\ref{prop:coal} allows us to bound the consensus time by bounding either hitting times or meeting times for an irreducible chain. We start by collecting the bounds on hitting times in Section~\ref{ssec:hitting} and continue with the bounds on meeting times in Section~\ref{ssec:meeting}.
\subsection{Bounds on hitting times}\label{ssec:hitting}
Throughout the following two sections $X = (X_t)_{t \geq 0}$ will be a reversible, irreducible Markov chain with state space $[n] = \{1, \ldots, n\}$ and transition rates given by a matrix~$Q$. Moreover, we denote by $\pi = (\pi(i))_{i \in [n]}$ the invariant measure of $X$.
In this case, as there is only one irreducible component the (expected) hitting time is defined as
\[ t_{\rm hit} = \max_{k, j \in [n]} \E_k (T_j) . \]
For our bound on the hitting time, we will make use of the well-known correspondence between
Markov chains and electric networks, see e.g.~\cite{aldous-fill-2014, wilmer2009markov}.
In this context, we associate to $Q$ a graph $G_Q$ with vertex set $[n]$ and connect $i$ and $j$ by an edge, written $i \sim j$, if the conductance $c(ij)$ is nonzero, where the conductance is defined as
\begin{equation}\label{conductance_definition}
c(ij) := \pi(i) Q(i,j) = \pi(j) Q(j,i).
\end{equation}
This is also known as the \emph{ergodic flow} of the edge. Moreover, the interpretation as an electric network lets us define the effective resistance between two vertices $i,j \in [n]$, denoted $\mathcal{R}(i \leftrightarrow j)$, as in \cite[Chapter 9]{wilmer2009markov}.
To state the following proposition, we also define ${\rm diam}(Q)$ to be the diameter in the graph theoretic
sense for the graph obtained from $Q$ as above.
The proof uses the representation of the effective resistances in terms of the Markov chains, combined with Thomson's principle.
\begin{proposition}[Conductance bounds]\label{prop:max_resistance}
Let $(X_t)_{t \geq 0}$ be a reversible, irreducible Markov chain on $[n]$ with associated conductances $c$.
Let $P_{i,j}$ be a path from $i$ to $j$ in $G_Q$ and denote by $E(P_{i,j})$ the set of edges in $P_{i,j}$. Then
\[
\mathbb{E}_i \left( T_j \right) +\mathbb{E}_j \left( T_i \right) \leq \sum_{e \in E(P_{i,j})}\frac{1}{c(e)}.
\]
In particular, we have
\[ t_{\rm hit} \leq {\rm diam} (Q) \max_{i\sim j \in [n]} \frac{1}{c(ij)} . \]
\end{proposition}
\begin{proof}
Let $T_i^+ = \inf\{ t > 0 \, : \, X_t = i, \lim_{s \uparrow t} X_s \neq i \}$ be the return time to state $i$.
From \cite[Proposition 9.5]{wilmer2009markov},
\[
\mathcal{R}(i \leftrightarrow j)=\frac{1}{c(i) \mathbb{P}_i(T_j<T_i^+)},
\]
where $c(i) = \sum_{j \sim i}c({ij})$ is the conductance around a vertex. We further have from \cite[Corollary 2.8 (continuous time version)]{aldous-fill-2014}
\[
\mathbb{E}_i \left( T_j \right) +\mathbb{E}_j \left( T_i \right) =\frac{1}{\pi(i)q(i) \mathbb{P}_i(T_j<T_i^+)},
\]
where $q(i) = - Q(i,i)$ is the jump rate at $i$, and by the choice of $c$ these two expressions are equal. Finally, by Thomson's principle (which implies monotonicity of the effective resistance in the edge resistances),
\[
\begin{split}
\mathbb{E}_i \left( T_j \right) +\mathbb{E}_j \left( T_i \right) ={\mathcal{R}(i \leftrightarrow j)} &\leq {\mathcal{R}(i \leftrightarrow j \text{ through } P_{i,j})}\\
&= \sum_{\{u,v\} \in E(P_{i,j})}\frac{1}{c(uv)},
\end{split}
\]
which gives the required bound.
\end{proof}
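As a sanity check of Proposition~\ref{prop:max_resistance}, the commute time and the conductance bound can be computed exactly on a small example. The sketch below (Python, assuming NumPy; the helper name is ours) solves the linear system for the expected hitting times of a birth-death chain with unit rates, for which the path bound is attained with equality, since on a tree the unique path carries all the current.

```python
import numpy as np

def hitting_times_to(Q, y):
    """h[x] = E_x[T_y] for the continuous-time chain with generator Q,
    obtained by solving (Q h)(x) = -1 for x != y, with h(y) = 0."""
    n = Q.shape[0]
    idx = [x for x in range(n) if x != y]
    h = np.zeros(n)
    h[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], -np.ones(n - 1))
    return h

# Birth-death chain on {0,...,4} with unit jump rates; pi is uniform.
n = 5
Q = np.zeros((n, n))
for x in range(n - 1):
    Q[x, x + 1] = Q[x + 1, x] = 1.0
np.fill_diagonal(Q, -Q.sum(axis=1))
pi = np.ones(n) / n

commute = hitting_times_to(Q, n - 1)[0] + hitting_times_to(Q, 0)[n - 1]
# Conductances c(x, x+1) = pi(x) Q(x, x+1) = 1/5, so the path bound is
bound = sum(1.0 / (pi[x] * Q[x, x + 1]) for x in range(n - 1))
assert commute <= bound + 1e-9
```

Here both sides equal $20$, matching the series formula for effective resistance on a path.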
\subsection{Bounds on meeting times}\label{ssec:meeting}
In this section, we continue to use the notation from the beginning of Section~\ref{ssec:hitting}. In particular, $(X_t)_{t \geq 0}$ is a reversible, irreducible Markov chain on $[n]$ with transition rates given by $Q$ and
invariant measure $\pi$.
It will often be easier to work with the (expected) meeting time when both chains are started in
the invariant measure, i.e.\ we define
\[ t_{\rm meet}^\pi := \sum_{i, j \in [n]} \pi(i) \pi(j) \E_{i,j}(\tau_{\rm meet}) .\]
In order to make the connection to $t_{\rm meet}$, we will need the time it takes to reach stationarity. There are two standard notions of distance from stationarity, both of which we need in order to apply results from the literature.
\begin{definition}\label{stat_dist}
For a Markov chain on $[n]$
\[
d(t):=\frac{1}{2}\max_{x \in [n]}\| p^{(t)}_{x,\cdot}-\pi(\cdot) \|_1,
\]
\[
\bar{d}(t):=\frac{1}{2}\max_{x,y \in [n]}\| p^{(t)}_{x,\cdot}-p^{(t)}_{y,\cdot} \|_1.
\]
\end{definition}
The mixing time $t_{\rm mix}$ is then defined as
\[
t_{\text{\textnormal{mix}}}:=\min \Big\{ t\geq 0 : d(t)\leq \frac{1}{4}\Big\},
\]
and the mixing time from a point $i \in [n]$ as
\[
t_{\text{\textnormal{mix}}}(i):=\min \Big\{ t\geq 0 : \big\|p_{i\cdot}^{(t)}-\pi \big\|_1 \leq \frac{1}{2}\Big\}.
\]
Closely related to the mixing time is the relaxation time
\[
t_{\text{\textnormal{rel}}}:=\max \left\{ \frac{1}{\lambda} : \lambda \text{ a positive eigenvalue of } -Q \right\} \]
and we describe the relationship in the following lemma, since standard references either work in discrete time or use different definitions.
\begin{lemma}\label{mixing_and_relaxation}
\[
t_{\text{\textnormal{mix}}} \geq \frac{t_{\text{\textnormal{rel}}}}{1+\frac{1}{\log 2}}.
\]
\end{lemma}
\begin{proof}
By the definitions of the mixing time and the distances from stationarity, we have $d(t_{\text{\textnormal{mix}}}) \leq \frac{1}{4}$, and since $\bar{d}(t) \leq 2 d(t)$ it follows that $\bar{d}(t_{\text{\textnormal{mix}}})\leq \frac{1}{2}$.
Therefore, by the submultiplicativity of $\bar{d}$ shown in \cite{aldous-fill-2014} for any $C\geq 1$ we have
\[
\bar{d}(C t_{\text{\textnormal{mix}}})\leq 2^{-\lfloor C\rfloor}\leq 2^{1-C}.
\]
The right-hand side is at most $e^{-1}$ for $C=1+\frac{1}{\log 2}$. Therefore, by \cite[Lemma 4.23]{aldous-fill-2014} we can deduce that
$
C t_{\text{\textnormal{mix}}} \geq t_{\text{\textnormal{rel}}}
$
and the claim follows.
\end{proof}
\begin{proposition}
\begin{enumerate}[label={(\alph*)},ref={\theproposition~(\alph*)}]
\item \label{lower_meeting_bound}
\[
t_{\text{\textnormal{meet}}}^{\pi} \geq \frac{(1-\sum_{i \in [n]} \pi(i)^2)^2}{4\sum_{i \in [n]} q(i) \pi(i)^2},
\]
where $q(i) = - Q(i,i)$.
\item \label{conductance_theorem}
There exists an absolute constant $c_{\rm cond} > 0$ such that
\[
t_{\rm{meet}} \geq
c_{\rm cond} \left(
\max_{A \subset [n]}
\frac{\pi(A) \pi(A^c)}{\sum_{x \in A} \sum_{y \in A^c}c(xy)}
\right).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
For Part (a) see Remark 3.5 in~\cite{cox2016convergence}.
(b) From the standard coupling bound for mixing times seen in \cite[Theorem 9.2]{aldous-fill-2014} and with $\tau_{\rm meet}^{ij}$ as in~\eqref{eq:meeting_ij},
\[
d(t)
\leq \max_{i,j} \mathbb{P}\left( \tau_{\text{meet}}^{i,j}>t \right)
\leq \exp\left( - \left\lfloor \frac{t}{e t_{\text{meet}}} \right\rfloor \right)
\leq \exp\left(1 - \frac{t}{e t_{\text{meet}}} \right)
\]
where the second inequality is from \cite[Equation (2.20)]{aldous-fill-2014}.
So by integrating
\begin{equation}\label{mixing_and_meeting}
\frac{1}{4} t_{\text{mix}}\leq \int_0^{\infty} d(t) {\rm d} t \leq e^2 t_{\text{meet}} .
\end{equation}
Because $c(xy)=\pi(x)Q(x,y)$, by \cite[Corollary 4.37]{aldous-fill-2014},
\[
\max_{A \subset [n]}
\frac{\pi(A) \pi(A^c)}{\sum_{x \in A} \sum_{y \in A^c}c(xy)} \leq t_{\text{\textnormal{rel}}}.
\]
Combining this with Equation \eqref{mixing_and_meeting} and Lemma \ref{mixing_and_relaxation} proves the claim.
\end{proof}
\cite[Corollary 1.2]{peres2017intersection} has the consequence that, for some universal $C>0$, $t_{\rm mix} \leq C \min_i t_{\rm hit}(i)$. We present a simple proof of this fact for the convenience of the reader and to give an explicit constant factor.
\begin{lemma}\label{mixing_below_central_hitting} For any $i \in [n]$,
\[
t_{\text{\textnormal{mix}}}(i)\leq 2\mathbb{E}_{\pi}\left( T_i \right).
\]
\end{lemma}
\begin{proof}
Let $i \in [n]$, then by Cauchy-Schwarz we have that
\[ \left\|p_{i\cdot}^{(t)}-\pi\right\|_1^2 = \bigg( \sum_{j \in [n]} \Big| \frac{p_{i j}^{(t)}}{\pi(j)} - 1\Big| \pi(j) \bigg)^2
\leq \sum_{j \in [n]} \Big| \frac{p_{i j}^{(t)}}{\pi(j)} - 1\Big|^2 \pi(j) =
\Big\|\frac{p_{i\cdot}^{(t)}}{\pi}-1\Big\|_\pi^2.
\]
To simplify the right hand side, we use reversibility to obtain
\[
\Big\|\frac{p_{i\cdot}^{(t)}}{\pi}-1\Big\|^2_\pi
=-1+\sum_j \frac{\left( p_{ij}^{(t)} \right)^2}{\pi(j)}
=-1+\frac{1}{\pi(i)}\sum_j p_{ij}^{(t)}p_{ji}^{(t)}
=-1+\frac{p_{ii}^{(2t)}}{\pi(i)}.
\]
Now, by \cite[Lemma 2.11]{aldous-fill-2014}, we have that for any $t \geq 0$,
\[
\begin{split}
\mathbb{E}_\pi \left( T_i \right) &= \int_0^\infty \left( -1+\frac{p_{ii}^{(s)}}{\pi(i)} \right) {\rm d}s
\geq 2 t \left( -1+\frac{p_{ii}^{(2t)}}{\pi(i)} \right),
\end{split}
\]
because the integrand is non-increasing \cite[Equation 3.40]{aldous-fill-2014}.
Combining these inequalities, we have for $t > 0$,
\begin{equation}\label{l1_bound}
\Big\|p_{i\cdot}^{(t)}-\pi\Big\|_1 \leq
\Big\|\frac{p_{i\cdot}^{(t)}}{\pi}-1\Big\|_\pi \leq
\sqrt{\frac{\mathbb{E}_\pi \left( T_i \right)}{2 t}} .
\end{equation}
Hence, if $t$ is such that
\[
\frac{\mathbb{E}_\pi \left( T_i \right)}{2 t}\leq \frac{1}{4},
\]
then we can deduce that $t_{\rm mix}(i) \leq t$, which completes the proof.
\end{proof}
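To illustrate the bound \eqref{l1_bound} numerically, one can compute $p^{(t)} = e^{tQ}$ exactly by diagonalising the symmetrised generator $D_\pi^{1/2} Q D_\pi^{-1/2}$. The following sketch (Python, assuming NumPy; the example chain is an arbitrary reversible chain built from symmetric conductances and vertex weights of our choosing) checks the inequality for several times $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
C = rng.uniform(0.5, 2.0, (n, n))
C = np.triu(C, 1) + np.triu(C, 1).T      # symmetric conductances c(ij)
m = rng.uniform(0.5, 2.0, n)             # vertex weights, pi proportional to m
Q = C / m[:, None]                       # reversible: pi(i) Q(i,j) = c(ij)/sum(m)
np.fill_diagonal(Q, -Q.sum(axis=1))
pi = m / m.sum()

d = np.sqrt(pi)
w, U = np.linalg.eigh(Q * d[:, None] / d[None, :])   # symmetrised generator

def p_t(t):
    """Transition matrix exp(tQ), via the spectral decomposition."""
    return ((U * np.exp(t * w)) @ U.T) / d[:, None] * d[None, :]

# E_pi(T_i) by a linear solve, as in the hitting-time system (Q h) = -1.
i = 0
idx = [x for x in range(n) if x != i]
h = np.zeros(n)
h[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], -np.ones(n - 1))
E_pi_Ti = pi @ h

for t in (0.5, 1.0, 2.0):
    l1 = np.abs(p_t(t)[i] - pi).sum()
    assert l1 <= np.sqrt(E_pi_Ti / (2 * t)) + 1e-9
```

Since \eqref{l1_bound} holds for every reversible chain, the assertions succeed for any choice of conductances and weights.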
\begin{proposition}\label{corr:strong_stationary_time}
For two independent copies $(X_t)_t$ and $(Y_t)_t$ of a Markov Chain on $[n]$, and any state $s \in [n]$, we find
\[
t_{\rm mix} \leq 16 \, t_{\text{\textnormal{hit}}}(s)
\]
and further we can construct a time for the product chain with
\[
\mathbb{E}_{x,y}(\mathcal{S})\leq 188 \, t_{\rm hit}(s)
\]
which is a strong stationary time in the sense that for any $t \geq 0$ we have
$
\mathcal{L}(X_{t+\mathcal{S}},Y_{t+\mathcal{S}})=\pi \otimes \pi
$
and, further, $(X_{t+\mathcal{S}},Y_{t+\mathcal{S}})_{t \geq 0}$ and $\mathcal{S}$ are independent.
\end{proposition}
\begin{proof}
Define the time $M_1 = 8 t_{\rm hit}(s)$. Then, by Markov's inequality
\[
\max_x \mathbb{P}_x (T_s \geq M_1)
\leq
\max_x \frac{\mathbb{E}_x (T_s)}{M_1}
=
\frac{t_{\text{\textnormal{hit}}}(s)}{M_1}
=
\frac{1}{8}
\]
so we will hit $s$ in the timeframe $[0,M_1]$ with probability at least $\frac 78$. Define also the time $M_2$ by
\[
\frac{\mathbb{E}_\pi \left( T_s \right)}{2 M_2} = \frac{1}{16}
\]
then by recalling equation \eqref{l1_bound} we have that
\[ \frac{1}{2} \| p_{s,\cdot}^{(M_2)} - \pi(\cdot) \|_1 \leq \frac{1}{8}. \]
Hence, by distinguishing the cases of hitting $s$ by $M_1$, or not, we obtain that $d(M_1 + M_2) \leq \frac{1}{4}$.
Thus,
\[
t_{\text{\textnormal{mix}}}
\leq
M_1+M_2
=
8 \, t_{\text{\textnormal{hit}}}(s)+8 \, \mathbb{E}_\pi \left( T_s \right)
\leq
16 \, t_{\text{\textnormal{hit}}}(s).
\]
It remains to prove the second claim. By Theorem 1.1 in \cite{fill1991time} we can construct a strong stationary time with
\[
\mathbb{P}_s\left(\mathcal{S}_X>t\right)=\operatorname{sep}_s(t):=1-\min_{j \in [n]} \frac{p^{(t)}_{s, j}}{\pi(j)}.
\]
We now relate the separation distance to the distances from stationarity of Definition \ref{stat_dist}, using several results from \cite{aldous-fill-2014}.
These distances satisfy
\[
\operatorname{sep}_s(2t)\leq\max_{v \in [n]}\operatorname{sep}_v(2t)<2 \, \bar{d}(t)\leq 4 \, d(t) .
\]
Therefore, we have that
\[
\tau_1:=\min \left\{ t: \bar{d}(t) \leq \frac{1}{2}\right\}\leq \min \left\{ t: d(t) \leq \frac{1}{4}\right\} = t_{\text{mix}} .
\]
Then we use that $\bar{d}$ is submultiplicative to obtain
\[
\bar{d}(t)\leq 2^{-\lfloor t/\tau_1 \rfloor }\leq 2^{-\lfloor t/t_{\text{mix}} \rfloor }.
\]
Thus, we can bound the expectation of the time to stationarity
\[
\mathbb{E}_s \left(\mathcal{S}_X\right)
=2\int_0^\infty \operatorname{sep}_s(2t) {\rm d}t
\leq 4 \int_0^\infty 2^{1- t/t_{\text{mix}}} {\rm d}t
=\frac{8 \, t_{\text{mix}}}{\log 2}
\leq \frac{64 \, t_{\text{\textnormal{hit}}}(s)}{\log 2}.
\]
This becomes a strong stationary time for $(X_t)_t$ with $X_0=x$ by constructing another time $\tilde{\mathcal{S}}_X$, which first waits until the walker hits $s$ and then waits an additional period $\mathcal{S}_X$. Thus
\[
\mathbb{E}_x\left(\tilde{\mathcal{S}}_X \right)\leq t_{\rm hit}(s)
+
\frac{64 \, t_{\text{\textnormal{hit}}}(s)}{\log 2}
<
94 \, t_{\rm hit}(s).
\]
We construct the analogous time $\tilde{\mathcal{S}}_Y$ for $(Y_t)_t$ and finally take the time
\[
\mathcal{S}:= \tilde{\mathcal{S}}_X \vee \tilde{\mathcal{S}}_Y
\]
so that
\[
\mathbb{E}_{x,y}(\mathcal{S}) \leq
\mathbb{E}_x\left(\tilde{\mathcal{S}}_X \right)+\mathbb{E}_y\left(\tilde{\mathcal{S}}_Y \right)
\leq 188 \, t_{\rm hit}(s),
\]
as claimed.
\end{proof}
\begin{proposition}\label{tree_meeting_theorem}
For any state $s \in [n]$
\begin{equation*}
t_{\text{\textnormal{meet}}} \leq \frac{189 \, t_{\text{\textnormal{hit}}}(s)}{\pi(s)}.
\end{equation*}
\end{proposition}
\begin{proof}
From any configuration of two walkers, we can apply Proposition \ref{corr:strong_stationary_time} to construct a strong stationary time $\mathcal{S}$ with $\mathbb{E}(\mathcal{S})\leq 188 \, t_{\rm hit}(s) $. Then, wait for $(X_t)_t$ to hit $s$, which in expectation takes an additional time period of length $t_{\rm hit}(s)$.
When $(X_t)_t$ hits $s$, the walker $(Y_t)_t$ is still stationary and independent, and so sits at $s$ at that instant with probability exactly $\pi(s)$. Otherwise, we restart the argument with mixing and hitting periods to get another chance to meet at $s$.
Thus the number of attempts is stochastically dominated by a geometric random variable with success probability $\pi(s)$, hence with mean $1/\pi(s)$, and each attempt takes expected time at most $188 \, t_{\rm hit}(s)+ t_{\rm hit}(s) = 189 \, t_{\rm hit}(s)$; the claim follows by Wald's identity.
\end{proof}
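As a concrete check of Proposition~\ref{tree_meeting_theorem}, the meeting time can be computed exactly as the expected hitting time of the diagonal by the product chain. The sketch below (Python, assuming NumPy; the helper name is ours) does this for the rate-one random walk on a star with three leaves, taking $s$ to be the centre.

```python
import numpy as np

def expected_hitting(Q, targets):
    """h[z] = E_z[T_A] for generator Q and target set A, via a linear solve."""
    n = Q.shape[0]
    idx = [z for z in range(n) if z not in targets]
    h = np.zeros(n)
    h[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], -np.ones(len(idx)))
    return h

# Star graph on {0,1,2,3} with centre 0 and rate 1 per edge; pi is uniform.
n = 4
Q = np.zeros((n, n))
for leaf in range(1, n):
    Q[0, leaf] = Q[leaf, 0] = 1.0
np.fill_diagonal(Q, -Q.sum(axis=1))
pi = np.ones(n) / n

# Two independent walkers: product generator kron(Q, I) + kron(I, Q);
# their meeting time is the hitting time of the diagonal {(x, x)}.
I = np.eye(n)
Q2 = np.kron(Q, I) + np.kron(I, Q)
diagonal = {x * n + x for x in range(n)}
t_meet = expected_hitting(Q2, diagonal).max()
t_hit_s = expected_hitting(Q, {0}).max()
assert t_meet <= 189 * t_hit_s / pi[0]
```

Here $t_{\rm hit}(0) = 1$ and $t_{\rm meet} = 3/2$, while the right-hand side is $189/(1/4) = 756$; the bound is far from sharp on such a small graph, but it requires no information beyond $t_{\rm hit}(s)$ and $\pi(s)$.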
\begin{remark}
We find the following illustrative discrete-time bound in \cite{kanade2016coalescence}
\begin{equation*}
t_{\text{\textnormal{meet}}}^{\pi} = O\left(\frac{t_{\text{\textnormal{mix}}}}{||\pi||_2^2}\right)
\end{equation*}
which, while it may appear stronger, is often not so for Markov chains on trees. The mixing time for a Markov chain on a tree (which is necessarily a reversible chain) is always of the order of the hitting time of a \emph{central} vertex, i.e.\ a vertex $c$ with
\[
\mathbb{E}_{\pi}(T_c)=\min_{v \in [n]}\mathbb{E}_{\pi}(T_v).
\]
Then,
\[
t_{\text{\textnormal{mix}}} = \Theta \left( t_{\text{\textnormal{hit}}}(c) \right)
\]
and so because $\|\pi\|_2^2\leq\|\pi\|_\infty$, Proposition \ref{tree_meeting_theorem} will often give a tighter bound.
\end{remark}
We will also need the following large deviations result from \cite{saloff1997lectures}.
\begin{theorem}\label{thm:markov_chain_large_deviations}
For any finite, irreducible continuous-time Markov chain $(X_t)_t$ with stationary distribution $\pi$, and any function $f$ on the state space with
\[
\langle f, \pi \rangle =0, \qquad ||f||_\infty \leq 1,
\]
we have for $x>0$
\[
\mathbb{P}_\mu \left(
\frac{1}{t} \int_0^t f(X_s) {\rm d}s > x
\right)
\leq
||\mu/\pi||_2 \exp \left(
-
\frac{x^2 t}{10 \, t_{\text{\textnormal{rel}}}}
\right),
\]
where $\mu$ is an arbitrary distribution on the state space.
\end{theorem}
We now use the concept of the chain $(X_t)_{t \geq 0}$ \emph{observed on a subset} $V \subseteq [n]$, described in Section 2.7.1 in~\cite{aldous-fill-2014}: define a clock process
\[
U(t):=\int_0^t \mathbbm{1}_{V} \left( X_s \right) {\rm d}s,
\]
with generalised right-continuous inverse $U^{-1}$. Then the partially observed chain $(P_t)_{t \geq 0}$ is defined for any $t \geq 0$ via
\[
P_t:=X_{U^{-1}(t)}.
\]
This corresponds to the deletion of states in $V^c$ from the history of $(X_t)_{t\geq 0}$, and so it can be shown that $(P_t)_{t \geq 0}$ is Markovian and has the natural stationary distribution
\[
\frac{\pi(\cdot) \mathbbm{1}_{V} (\cdot)}{\pi(V)}.
\]
Then, we can define the random \emph{subset meeting time} $\tau^\pi_{\text{\textnormal{meet}}}(A)$ analogously to $\tau^\pi_{\text{\textnormal{meet}}}$ except for the partially observed product chain on $A \times A$ rather than the full chain. Similarly, $t^\pi_{\text{\textnormal{meet}}}(A)=\mathbb{E}(\tau^\pi_{\text{\textnormal{meet}}}(A))$.
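The observed chain also admits an explicit generator: deleting the time spent in $V^c$ amounts to taking the Schur complement of $Q$ with respect to the block $V^c$, a standard representation for watched chains (this fact is not needed in our proofs and is included only as an illustration). The sketch below (Python, assuming NumPy; the example chain is of our choosing) verifies that the Schur complement is again a generator and that it leaves $\pi(\cdot)\mathbbm{1}_{V}(\cdot)/\pi(V)$ invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
C = rng.uniform(0.5, 2.0, (n, n))
C = np.triu(C, 1) + np.triu(C, 1).T          # symmetric conductances
m = rng.uniform(0.5, 2.0, n)                 # vertex weights, pi proportional to m
Q = C / m[:, None]                           # reversible off-diagonal rates
np.fill_diagonal(Q, -Q.sum(axis=1))
pi = m / m.sum()

V, W = [0, 1], [2, 3]                        # observed subset and its complement
# Generator of the chain observed on V: the Schur complement of the W-block.
S = (Q[np.ix_(V, V)]
     - Q[np.ix_(V, W)] @ np.linalg.inv(Q[np.ix_(W, W)]) @ Q[np.ix_(W, V)])

assert np.allclose(S.sum(axis=1), 0.0)       # rows sum to zero: S is a generator
piV = pi[V] / pi[V].sum()
assert np.allclose(piV @ S, 0.0)             # pi restricted to V is stationary
```

The second assertion is exactly the statement that the observed chain has stationary distribution $\pi(\cdot)\mathbbm{1}_V(\cdot)/\pi(V)$, and it follows algebraically from $\pi Q = 0$.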
\begin{theorem}\label{partial_meeting} For any $A \subseteq [n]$ and any $s \in [n]$,
\[
t_{\rm meet}
\leq
188 t_{\rm hit}(s)
+
\frac{2 \, t^\pi_{\text{\textnormal{meet}}}(A)}{\pi(A)^2}
+
\frac{1568 \, t_{\rm hit}(s)}{\pi(A)^4}
.
\]
\end{theorem}
\begin{proof}
We first prove the claim
\begin{equation}\label{eq:claim1}
t^\pi_{\text{\textnormal{meet}}}
\leq
\frac{2 \, t^\pi_{\text{\textnormal{meet}}}(A)}{\pi(A)^2}
+
\frac{98 \, t_{\text{\textnormal{mix}}}}{\pi(A)^4}.
\end{equation}
Consider two independent stationary copies $(X_t)_t$ and $(Y_t)_t$ of the chain, so that in particular, for any $t \geq 0$, we have $\mathcal{L}(X_t, Y_t)= \pi \otimes \pi$.
Define the time-change
\[
U(t):=\int_0^t \mathbbm{1}_{A \times A} \left( X_s,Y_s \right) {\rm d}s .
\]
Then, the product chain $(\tilde X_t, \tilde Y_t)_{t \geq 0}$ observed on $A \times A$ satisfies $(\tilde X_t, \tilde Y_t) = (X_{U^{-1}(t)}, Y_{U^{-1}(t)})$ for any $t \geq 0$.
Therefore, we have that for any $t \geq 0$,
\[ \p ( U(\tau_{\rm meet}^\pi) \geq t ) \leq \p( \tau_{\rm meet}^\pi(A) \geq t) ,\]
since a meeting might also happen outside $A$.
In particular, we can deduce that
\begin{align}\label{eq:2605-1}
\E(\tau_{\rm meet}^\pi) & =
\int_0^\infty \p( \tau_{\rm meet}^\pi >t) \,{\rm d} t \notag\\
& \leq \int_0^\infty \p \Big( U( \tau_{\rm meet}^\pi) > U(t); U(t) \geq \frac{\pi(A)^2}{2} t\Big){\rm d }t + \int_0^\infty\p \Big(U(t) \leq \frac{\pi(A)^2}{2} t \Big) \,{\rm d} t\notag \\
& \leq\int_0^\infty \p \Big( \tau^\pi_{\rm meet}(A) \geq \frac{\pi(A)^2}{2} t\Big){\rm d }t + \int_0^\infty\p \Big(U(t) \leq \frac{\pi(A)^2}{2} t \Big) \,{\rm d} t \notag\\
& \leq \frac{2}{\pi(A)^2} t_{\rm meet}^\pi(A) + \int_0^\infty\p \Big(U(t) \leq \frac{\pi(A)^2}{2} t \Big) \,{\rm d} t .
\end{align}
It remains to estimate the second integral on the right hand side.
For this purpose, we apply Theorem \ref{thm:markov_chain_large_deviations}
to the function
$ f :=
\mathbbm{1}_{ A \times A }
-\pi(A)^2$
to obtain
\[
\mathbb{P}_\pi \left(
\frac{1}{t}\int_0^t \mathbbm{1}_{A \times A} \left( X_s, Y_s \right) {\rm d}s
-\pi(A)^2
<-x
\right)
\leq
\exp \left(
-
\frac{x^2 t}{10 \, t_{\text{\textnormal{rel}}}}
\right)
\]
and hence
\[
\begin{split}
\mathbb{P}\left(U(t) \leq \frac{\pi(A)^2}{2}t\right)
&= \mathbb{P}\left( \frac{U(t)}{t}-\pi(A)^2 \leq -\frac{\pi(A)^2}{2} \right)
\leq \exp \left( - \frac{t \pi(A)^4}{40 \, t_{\text{rel}}}\right) .
\end{split}
\]
We deduce that
\[ \int_0^\infty \p \left( U(t) \leq \frac{\pi(A)^2}{2}t \right) {\rm d} t
\leq \frac{40 \, t_{\rm rel}}{\pi(A)^4}. \]
Moreover,
Lemma \ref{mixing_and_relaxation} gives
\[
40 \, t_{\rm rel}
\leq
40 \left(1+\frac{1}{\log 2}\right) t_{\rm mix}
< 98 \, t_{\rm mix} .
\]
Combining these estimates with~\eqref{eq:2605-1} gives the claim~\eqref{eq:claim1}.
To obtain the statement of the theorem, recall from Proposition \ref{corr:strong_stationary_time} that there exists a strong stationary time $\mathcal{S}$ such that
\[
t_{\rm mix} \leq 16 \, t_{\text{\textnormal{hit}}}(s)
\quad \text{and} \quad
\mathbb{E}_{x,y}(\mathcal{S})\leq 188 \, t_{\rm hit}(s).
\]
Using the strong stationary time constructed in that proposition gives the bound
\[
t_{\rm meet}\leq
\max_{x,y \in [n]}
\mathbb{E}_{x,y}(\mathcal{S})
+
t^\pi_{\rm meet},
\]
which together with~\eqref{eq:claim1} proves the theorem.
\end{proof}
\section{Structural results for subcritical random graphs}\label{sec:structure}
In this section, we collect some of the structural results on subcritical inhomogeneous
random graphs that we will need later on in Section~\ref{voter_models_section}.
Some of these results are known, but as the literature on subcritical inhomogeneous random
graphs is less developed than for supercritical random graphs, we have to prove
the more specialised ones.
Let $G_N \in \mathcal{G}_{\beta,\gamma}$.
Denote by ${\rm Comp}(G_N)$ the set of (connected) components of $G_N$.
For any $\mathscr{C} \in {\rm Comp}(G_N)$ we write the graph as $(V(\mathscr{C}),E(\mathscr{C}))$ and denote by $|\mathscr{C}|:=|V(\mathscr{C})|$ the number of vertices in $\mathscr{C}$. Moreover, we let $\mathscr{C}(i)$ denote the component containing vertex $i$.
Throughout this section, we will use the notation
\[ K_\gamma:= N^\frac{1-2\gamma}{2-2\gamma} \log N , \]
and call a component $\mathscr{C} \in {\rm Comp}(G_N)$ \emph{big} if
$\mathscr{C} = \mathscr{C}(i)$ for some $i \leq K_\gamma$. Otherwise, the component is called \emph{small}.
Moreover, we define the collection of all vertices lying in big components as
\[ V_{\rm big} := \bigcup_{i \leq K_\gamma} V(\mathscr{C}(i)) . \]
The first proposition is a standard result on the (componentwise) diameter.
\begin{proposition}\label{prop:diameter}
For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta + 2 \gamma < 1$, we have that
\[ \diam (G_N) := \sup_{\mathscr{C} \in {\rm Comp}(G_N)} \diam (\mathscr{C}) = O_{\p} ( \log N ). \]
\end{proposition}
As we will see later on, for the classical voter model the invariant measure of the associated random walk is normalized by $\sum_{z \in \mathscr{C}(k)} \operatorname{d}(z)^{\theta-1}$; in the following we therefore collect various bounds on
$\sum_{z \in \mathscr{C}(k)} \operatorname{d}(z)$.
\begin{proposition}
For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta+2\gamma<1$, with high probability,
\begin{enumerate}[label={(\alph*)},ref={\theproposition~(\alph*)}]
\item \label{prop:big_sum_of_degrees}
\[
\max_{k \leq K_\gamma}
\frac{\sum_{z \in \mathscr{C}(k)} \operatorname{d}(z)}{(N/k)^\gamma} \leq \log N.
\]
\item \label{prop:small_sum_of_degrees}
\[
\max_{i\notin V_{\rm big}}
\sum_{v \in \mathscr{C}(i)} \operatorname{d}(v)
=O_{\mathbb{P}}^{\log N}
\left(N^{\frac{\gamma}{2-2\gamma}}\right).
\]
\end{enumerate}
\end{proposition}
For the largest component this result is not optimal as we lose a $\log$ factor,
see also~\cite[Theorem 1.1]{janson2008largest}, but the latter result does not cover the other components.
Next, we need that the large degrees $\operatorname{d}(i)$ are well approximated by their means.
We also need to know that, for each vertex with large degree, a positive proportion of its neighbours has degree $1$.
One of the challenges in the proof is that we need these bounds uniformly over all big components.
\begin{proposition}\label{prop:stars_and_leaves}
For $G_N \in \mathcal{G}_{\beta, \gamma}$ with $\beta+2\gamma<1$
the following statements hold:
\begin{enumerate}[label={(\alph*)},ref={\theproposition~(\alph*)}]
\item \label{prop:star_degrees}
\[
\min_{k \leq K_\gamma}
\frac{\operatorname{d}(k)}{(N/k)^\gamma}=\Omega_{\mathbb{P}}(1),
\quad \quad
\max_{k \leq K_\gamma}
\frac{\operatorname{d}(k)}{(N/k)^\gamma}=O_{\mathbb{P}}(1).
\]
\item \label{prop:leaf_counts} For any $k \in [N]$, let $L_k$ be the number of neighbours of $k$ of degree $1$, then we have
\[
\min_{k \leq K_\gamma}
\frac{|L_k|}{\operatorname{d}(k)}=\Omega_{\mathbb{P}}(1) .
\]
\end{enumerate}
\end{proposition}
\begin{definition}\label{branch_defn} For $G_N \in \mathcal{G}_{\beta,\gamma}$ and any component $\mathscr{C} \in {\rm Comp}(G_N)$,
we define the set of \emph{branches} $\mathcal{B}(\mathscr{C})$ of $\mathscr{C}$ as the set of connected components of the subgraph of $\mathscr{C}$ induced by the vertex set $V(\mathscr{C}) \setminus \{ i\}$, where $i = \min V(\mathscr{C})$.
\end{definition}
We will use this definition specifically in the context when $\mathscr{C}$ is a tree, so
that this terminology makes sense.
The next lemma states that big components are trees and have branches that are small (at least when compared to the largest components of order $N^\gamma$).
\begin{lemma}\label{le:subcritical_branch_control}
For $G_N \in \mathcal{G}_{\beta, \gamma}$ with $\beta+2\gamma<1$, with high probability
every \emph{big} component is a tree. On this event we have that
\[
\max_{k \leq K_\gamma }
\max_{B \in \mathcal{B}(\mathscr{C}(k))}
\sum_{v \in B} \operatorname{d}(v)
=
\max_{k \leq K_\gamma}
\max_{B \in \mathcal{B}(\mathscr{C}(k))}
(2|B|-1)
=
O_{\mathbb{P}}^{\log N}\left( N^{\frac{\gamma}{2-2\gamma}} \right).
\]
\end{lemma}
The following bound on the empirical moments of the degree distribution of $\mathscr{C}(1)$ will allow us to prove a lower bound involving this component for certain parameters of both the classical and discursive models.
\begin{lemma}\label{le:empirical_moment}
For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta+2\gamma<1$ and any $\eta \geq 1$, we have
\[
\sum_{v \in \mathscr{C}(1)}\operatorname{d}(v)^\eta =\Theta^{\log N}_{\mathbb{P}}\left( N^{\gamma\eta} \right).
\]
\end{lemma}
Two of our lower bounds require the existence of a `double star' component together with a suitable bound on the empirical moment.
\begin{proposition}\label{prop:existence_simple_double_star}
For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta+2\gamma<1$, there exists with high probability a tree component containing two adjacent vertices $x,y \in [K_\gamma]$ such that
\[
\operatorname{d}(x) \mbox{ and } \operatorname{d}(y) \mbox{ are }\Theta_{\mathbb{P}}^{\log N} \left(
N^\frac{\gamma}{2-2\gamma}
\right)
\]
and further for any $\eta\geq 1$
\[
\sum_{v \in \mathscr{C}(x)} \operatorname{d}(v)^\eta = \Theta_{\mathbb{P}}\left(
N^\frac{\gamma \eta}{2-2\gamma}
\right).
\]
\end{proposition}
The final proposition of this section states that we can always find a ``long double star'' in $G_N$, i.e.\ two vertices with degree of order at least $N^{\gamma/(2-2 \gamma)}$ that are connected by a short path with two intermediate vertices of degree $2$ each. The fact that the path has length at least $3$ is important for the discursive voter model dynamics.
\begin{proposition}\label{prop:existence_long_double_star}
With high probability any $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta + 2\gamma <1$ contains a path
$\mathcal{P}=(v_1,v_2,v_3,v_4)$
such that:
\begin{enumerate}[label={(\alph*)},ref={\theproposition~(\alph*)}]
\item $\operatorname{d}(v_2)=\operatorname{d}(v_3)=2$.
\item $\{ v_1, v_4 \} \subset [K_\gamma]$ (and hence the component is a tree).
\item $\operatorname{d}(v_1), \operatorname{d}(v_4) =\Theta_{\mathbb{P}}^{\log N} \left(
N^\frac{\gamma}{2-2\gamma}
\right).$
\end{enumerate}
\end{proposition}
In the remaining part of this section, we will prove these results. An essential tool will be a coupling
with a branching process that we set up in Section~\ref{Branching_section}. Then in Section~\ref{subcrit_cpts_section} we will prove the structural results stated above.
\subsection{Coupling with a branching process}\label{Branching_section}
By Remark~\ref{rem:IRG}(c), we have some flexibility as to which model in the class $\mathcal{G}_{\beta,\gamma}$ we use to show our results. For most of our proofs, we will prove the statements for the simple Norros-Reittu (SNR) model, i.e.\ the model where
edges are present independently with probabilities
\begin{equation}\label{nr_edge_probabilities}
q_{i,j}=1-e^{-p_{i,j}}.
\end{equation}
The reason for this choice is the close relation with the standard multigraph Norros-Reittu (MNR) model.
In the MNR multigraph $G_N^{\rm NR}$ each vertex $i \in [N]$ has weight $w(i)>0$ and independently for each pair $\{i,j\}$ with $i,j \in [N]$, the number of edges between $i$ and $j$ has the distribution
\[
\operatorname{Pois}\left(\frac{w(i) w(j)}{w([N])}\right),
\]
where $w([N])=\sum_{i=1}^N w(i)$ is the total weight and where we write ${\rm Pois}(\mu)$ for a Poisson distribution with parameter $\mu > 0$. Note this graph model not only has multiple edges, but also allows for self-loops.
The SNR model with edge probability as in~\eqref{nr_edge_probabilities} is then obtained by first choosing
\begin{equation}\label{weight_defn}
w(i):=\sum_{j=1}^N \beta N^{2\gamma-1} i^{-\gamma} j^{-\gamma}
\sim \frac{\beta}{1-\gamma}\left( \frac{N}{i} \right)^\gamma ,
\end{equation}
and then collapsing all multi-edges to simple edges and deleting the loops.
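As an illustration (not part of any proof), the MNR/SNR construction above can be sketched in a few lines of Python; the weights, the Poisson edge counts and the collapsing step follow~\eqref{weight_defn} and the surrounding text, while the parameter values in the usage below are arbitrary.

```python
import math
import random

def mnr_to_snr(N, beta, gamma, rng):
    """Sample the MNR multigraph and collapse it to the simple SNR graph.

    Weights follow w(i) = beta * N**(2*gamma-1) * i**(-gamma) * sum_j j**(-gamma),
    with vertices 1-indexed; returns the adjacency sets of the collapsed graph.
    """
    s = sum(j ** -gamma for j in range(1, N + 1))
    w = [beta * N ** (2 * gamma - 1) * i ** -gamma * s for i in range(1, N + 1)]
    total = sum(w)  # the total weight w([N])

    def poisson(lam):
        # Knuth's method; adequate for the small means occurring here.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    adj = {i: set() for i in range(1, N + 1)}
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):  # i < j: self-loops are deleted when collapsing
            if poisson(w[i - 1] * w[j - 1] / total) >= 1:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

Since $w(i)w(j)/w([N]) = \beta N^{2\gamma-1}(ij)^{-\gamma}$, an edge survives the collapsing with probability $1-e^{-p_{i,j}}$, consistent with~\eqref{nr_edge_probabilities}.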
The MNR model is particularly nice, because it allows for an exact coupling with a
two-stage Galton-Watson process with thinning and cycle creation.
Our construction here extends the coupling introduced in~\cite{norros2006conditionally} (see also \cite{van2016random}) by additionally keeping track of the number of edges, so that we can control when cycles are created.
Define the \emph{mark distribution} to be the random variable $M$ on $[N]$ which chooses a vertex with probability proportional to its weight,
\[
\mathbb{P}(M=m)\propto w(m) \mathbbm{1}_{m\in [N]}
\propto m^{-\gamma} \mathbbm{1}_{m\in [N]} ,
\]
so that if $W_N$ denotes the empirical weight distribution of the network, then the weight of a typical neighbour in our local picture is simply the size-biased version of $W_N$, denoted by $W_N^*$:
\[
w(M)\stackrel{({\rm d})}{=}W_N^*.
\]
Fix $k \in [N]$; we now describe the (marked) branching process corresponding to the exploration of the cluster started from vertex $k$. To describe the branching process, we label the tree vertices using the standard Ulam-Harris notation, in
particular we denote by $\emptyset$ the root of the tree, by $1$ the first offspring of the root, by $11$ the first offspring of tree vertex $1$ etc. We will write $v < w$ if $v$ comes first in the breadth-first ordering of the tree, i.e.\
vertices are first sorted according to length and then according to lexicographical ordering if the lengths are the same.
For the root of the branching process, we define
\[
M_{\emptyset}=k , \ X_{\emptyset} \sim \operatorname{Pois}\left(w(k)\right).
\]
Next, we define independent random variables $\left(X_v\right)_{v \neq \emptyset}$ in two stages:
we first choose marks $\left(M_v\right)_{v \neq \emptyset}$ which are i.i.d.\ with the same distribution as $M$.
Then, conditionally on $M_v$, let $X_v \sim \operatorname{Pois}\left(w(M_v)\right)$,
where we write
${\rm Pois}(Y)$ for the mixed Poisson law with random mixing parameter $Y$.
Moreover, if we take $X_v$ to be the number of children of vertex $v$ (if it exists in the tree), this construction can be used
to define a (marked) Galton-Watson tree $\mathcal{T}^k$ (where only the root has a different offspring distribution).
To obtain the cluster at $k$ in $G_N$ from $\mathcal{T}^k$, we introduce a thinning procedure.
We set $\emptyset$ to be unthinned and then explore the tree in the breadth-first order described above and thin a tree vertex $w$ if either one of the tree vertices in the unique path between $\emptyset$ and $w$ has been thinned or if
there exists an unthinned $v < w$ with $M_v = M_w$.
Now, for $i \in [N]$, denote by $X_v(i)$ the number of children of $v$ with mark $i$.
If $v$ and $w$ are unthinned tree vertices, then we define
\begin{equation}\label{eq:edges} E(M_v, M_w) = \left\{ \begin{array}{ll} X_v(M_w) &\mbox{if } v < w ,\\
X_w(M_v) &\mbox{if } w \leq v .\end{array} \right. \end{equation}
We can define the multigraph $\mathcal{T}^k_{\rm thin}$ by specifying that the vertex set is
$\{ M_v \, : \, v \mbox{ unthinned}\}$ and the number of edges are given by
$( E(M_v, M_w) \, : \, v, w \mbox{ unthinned} )$.
Similarly, we can define a forest $(\mathcal{T}^1, \mathcal{T}^2, \ldots, \mathcal{T}^N)$ of independent trees constructed as above, where the root of the $k$th tree has mark $k$. Then, we can apply the same thinning operation as above, starting in the tree $\mathcal{T}^1$ and moving to the next tree when the algorithm terminates, where now also the roots of the trees may be thinned if their mark has appeared in a previous tree.
If we define the edges as in~\eqref{eq:edges}, then we obtain a multigraph $(\mathcal{T}^1, \mathcal{T}^2, \ldots, \mathcal{T}^N)_{\rm thin}$ with vertex set $\{ M_v \, : \, v \mbox{ unthinned} \} = [N]$ and the number of edges between $i$ and $j$ given by $E(M_v, M_w)$, where $v$ and $w$ are the unique unthinned tree vertices with $M_v = i$ and $M_w =j$.
With this construction, we have the following proposition.
\begin{proposition}\label{prop:tree_coupling}
Let $G_N^{\rm NR}$ be a realization of a Norros-Reittu multigraph.
For any fixed vertex $k \in [N]$, we have for the component $\mathscr{C}(k)$ in $G_N^{\rm NR}$ containing $k$,
\[ \mathscr{C}(k) \stackrel{d}{=} \mathcal{T}^k_{\rm thin} . \]
Moreover,
\[ G_N^{\rm NR} \stackrel{d}{=}(\mathcal{T}^1, \mathcal{T}^2, \ldots, \mathcal{T}^N)_{\rm thin} . \]
\end{proposition}
This proposition can be proved in the same way as Prop.\ 3.1 in \cite{norros2006conditionally}. The only difference is that we explicitly keep track of the number of edges.
\begin{figure}
\begin{center}
\includegraphics[width=13cm]{thinning.eps}
\caption{On the left a realisation of the labelled Galton-Watson tree $\mathcal{T}^1$ and on the right the resulting random graph. Note that the number of edges between $2$ and $5$ is determined by the number of children of type $5$ of the first child of the root.}\label{fig:thinning}
\end{center}
\end{figure}
\begin{remark}\label{rem:cycles} \emph{Thinning and creation of cycles.}
Note that by construction, we only create edges that lead to cycles if there are tree vertices $v, v', w$ such that $v'$ is a child of $v$, but in the breadth-first order $v < w < v'$, $v$ and $w$ are unthinned and such that $M_w = M_{v'}$,
see Figure~\ref{fig:thinning} for an example. The reason for this is that the number of edges between $M_w$ and $M_v$ is determined by looking at the types of children of the first vertex in breadth-first order.
As a consequence, by this procedure we do not create edges between different components of $\mathcal{T}^i$.
Moreover, if an unthinned tree vertex $v$ has children $v_1',\ldots,v_\ell'$ with $M_{v'_i} = M_v$ for all $i \leq \ell$, then this
leads to $\ell$ self-loops.
\end{remark}
\begin{remark}\label{rem:depletion}
Note that for the second construction, if the root of the $k$th tree $\mathcal{T}^k$ is not thinned, then any vertex in the tree with root $k$ that receives mark $j \leq k$ will be thinned.
So to get a stochastic upper bound on the number of vertices and their degrees in the component $\mathscr{C}(k)$, we can replace $\mathcal{T}^k$ by $\mathcal{T}^k_k$, where the marks are chosen independently with
distribution
\[
M_k \stackrel{({\rm d})}{=}
\begin{cases*}
M & if $M>k$, \\
\dagger & otherwise.
\end{cases*} \]
Then, the offspring distribution is ${\rm Pois}(W_{N,k}^*)$ with
$W_{N,k}^* \stackrel{d}{=} w(M_k)$, where we set $w(\dagger)=0$.
The error in this upper bound comes from thinning within $\mathcal{T}^k$ and from the fact that thinned vertices are included as leaves of zero weight rather than simply being removed.
\end{remark}
\subsection{Proofs for the simple Norros-Reittu network}\label{subcrit_cpts_section}
In this section we will prove the statements made at the beginning of Section~\ref{sec:structure} using the coupling with a branching process as outlined in Section~\ref{Branching_section}.
A standard strategy will be to use that
the SNR model can be obtained from the MNR model by collapsing multi-edges, so that an upper bound on the degrees in the MNR model can be derived in terms of the Galton-Watson trees described in Proposition~\ref{prop:tree_coupling}.
\begin{proof}[Proof of Proposition \ref{prop:diameter}]
By the construction of Proposition~\ref{prop:tree_coupling}, for an upper bound it suffices to
bound the diameter in each component of $\mathcal{T}^1, \ldots, \mathcal{T}^N$, as extra edges are only created within components and can only decrease the diameter, see also Remark~\ref{rem:cycles}.
Recall that in $\mathcal{T}^i$ the root has a ${\rm Pois}(w(i))$-distributed number of offspring, while the number of offspring of any other vertex has the same distribution as
$D \sim \operatorname{Pois}\left( W_N^* \right)$, where
the offspring mean satisfies
\begin{equation}\label{eq:2204-1}
\mathbb{E}\left( D \right) \rightarrow \frac{\beta}{1-2\gamma}<1.
\end{equation}
Let $(Z_k)_{k \geq 0}$ be a Galton-Watson process with offspring distribution $D$
and $Z_0 = 1$. Then, for any
$\rho \in \left(\frac{\beta}{1-2\gamma},1\right)$ and $N$ large enough we have, by Markov's inequality for all $k \geq 0$,
\[
\p(Z_k \neq 0) \leq \mathbb{E}(Z_k)\leq \rho^k.
\]
By construction, in order to show the required bound on the diameter it suffices to bound the maximal depth of $Y$ independent Galton-Watson trees $\mathcal{T}_i^*$, $i =1,\ldots, Y$, with offspring distribution $D$, where $Y$ is a Poisson-distributed random variable with parameter
\begin{equation}\label{eq:total_weight} w([N]) = \sum_{i,j \in [N]} \beta N^{2\gamma-1}i^{-\gamma}j^{-\gamma}
\sim \frac{\beta N}{(1-\gamma)^2},
\end{equation}
where here and in the following, we write $w(A) = \sum_{i \in A} w(i)$ for any $A \subset [N]$.
Since with high probability $Y \leq 2w([N]) \leq K N $ for a suitable constant $K \in \mathbb{N}$, we can get the required bound by noting that for any $C >0$,
\[\begin{aligned} \p \Big(\max_{i =1,\ldots, K N} \diam ( \mathcal{T}_i^*) \geq C \log N \Big)
& \leq \sum_{i=1}^{ K N} \p( \diam (\mathcal{T}_1^* ) \geq C \log N) \\ & \leq
K N \p( Z_{\lfloor C \log N \rfloor } \neq 0 ) \leq K N \rho^{C \log N - 1} ,
\end{aligned} \]
which converges to $0$ if we choose $C$ large enough such that $C \log \rho < -1$.
\end{proof}
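The two ingredients of this proof, the bound $\p(Z_k \neq 0) \leq \rho^k$ and the union bound over polynomially many trees, can be checked numerically via the generating-function recursion $q_{k+1} = e^{\mu(q_k-1)}$ for $q_k = \p(Z_k = 0)$ with Poisson offspring. The following sketch (an illustration only) uses a plain ${\rm Pois}(\mu)$ law with an arbitrary subcritical mean $\mu$ in place of the mixed-Poisson variable $D$.

```python
import math

def survival_probs(mu, depth):
    """P(Z_k != 0), k = 1..depth, for a Galton-Watson process with Pois(mu) offspring.

    Uses the generating-function recursion q_{k+1} = exp(mu * (q_k - 1)),
    where q_k = P(Z_k = 0); this is exact up to floating-point error.
    """
    q, out = 0.0, []
    for _ in range(depth):
        q = math.exp(mu * (q - 1.0))
        out.append(1.0 - q)
    return out

mu = 0.6  # stands in for the subcritical offspring mean beta / (1 - 2*gamma)
probs = survival_probs(mu, 50)
```

Since $1 - e^{\mu(q-1)} \leq \mu(1-q)$, induction gives $\p(Z_k \neq 0) \leq \mu^k$, matching the Markov bound in the proof.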
In the following, we will develop a stochastic upper bound on the sizes of the trees $\mathcal{T}^i$ in Proposition~\ref{prop:tree_coupling}
that no longer depends on $N$.
Throughout we will write $X \preceq Y$ if the random variable $Y$ stochastically dominates the random variable $X$.
\begin{lemma}\label{le:domination}
For any $\alpha > 1$, $\gamma<\frac{1}{2}$ and $N$ sufficiently large, we have
\[
W^*_N \preceq \alpha W^* ,
\]
where $W^*$ is the weak limit of $(W_N^*)_N$ with density
\begin{equation}\label{eq:W_density}
\frac{\mathbb{P}(W^* \in {\rm d}x)}{{\rm d}x}= \mathbbm{1}_{x>\frac{\beta}{1-\gamma}} \frac{1-\gamma}{\gamma}\left( \frac{\beta}{1-\gamma}\right)^{\frac{1}{\gamma}-1} x^{-\frac{1}{\gamma}}.
\end{equation}
\end{lemma}
\begin{proof}
Note that the MNR weights satisfy for each $i \in [N]$,
\begin{equation}\label{eq:up_bound_w}
\begin{split}
w(i)&= \beta N^{2\gamma-1} i^{-\gamma} \sum_{j = 1}^N j^{-\gamma}
\leq\frac{\beta}{1-\gamma}\left( \frac{N}{i} \right)^\gamma =: \lambda(i) .
\end{split}
\end{equation}
Moreover, by the definition of the distribution of the marks, we have that
\[
\mathbb{P}(M \leq k) = \sum_{\ell =1}^k \frac{\ell^{-\gamma}}{\sum_{i=1}^N i^{-\gamma}} \leq \frac{1}{1-\gamma} \frac{k^{1-\gamma}}{\sum_{i=1}^N i^{-\gamma}}.
\]
Now we can consider the tail of the distribution function of $W_N^*
\stackrel{{\rm d}}{=} w(M)$ and
estimate for any $x \geq 0$,
\[
\mathbb{P}\left( W_N^* \geq x \right)
=
\mathbb{P}\left( w(M) \geq x \right)
\leq \mathbb{P} (\lambda(M) \geq x ) =\mathbb{P}\left( M \leq \lambda^{-1}(x) \right)
\leq
\frac{\left( \lambda^{-1}(x) \right)^{1-\gamma}}{(1-\gamma)\sum_{i=1}^N i^{-\gamma}},
\]
where we write $\lambda(x) := \frac{\beta}{1-\gamma} N^\gamma x^{-\gamma}$.
Furthermore, we can compare this expression to
\[
\mathbb{P}(\alpha W^* \geq x)
=\left( \frac{1-\gamma}{\beta}\frac{x}{\alpha}\right)^{1-1/\gamma}
=\alpha^{1/\gamma-1}\left(\frac{\lambda^{-1}(x)}{N} \right)^{1-\gamma} .
\]
Therefore, we can conclude that
\[
\inf_{x\geq 0} \frac{\mathbb{P}(\alpha W^* \geq x)}{\mathbb{P}\left( W_N^* \geq x \right)}
\geq \alpha^{1/\gamma-1} \frac{(1-\gamma)\sum_{i=1}^N i^{-\gamma}}{N^{1-\gamma}}
\rightarrow \alpha^{1/\gamma-1}>1,
\]
which gives the claimed stochastic domination.
\end{proof}
\begin{proposition}\label{le:poisson_powerlaw}
If $\alpha \in (1, \frac{1-2\gamma}{\beta})$,
then $D_\alpha \sim \operatorname{Pois}\left( \alpha W^* \right)$ satisfies
\[
p_k:=\mathbb{P}(D_\alpha=k)=\Theta(k^{-1/\gamma}).
\]
In particular, if $T$ is a Galton-Watson tree with offspring distribution $(p_k)_{k\geq 0}$, then the total size $|T|$ satisfies
\[ \mathbb{P}(|T|=k)=\Theta\big(k^{-1/\gamma} \big).\]
\end{proposition}
\begin{proof}
The second statement follows from the first one by \cite[Thm.\ 4.1]{jonsson2011condensation} (which holds in general for trees whose offspring distribution follows a power law with exponent greater than~$3$).
To prove the first statement, we use that
by~\eqref{eq:W_density} we know that $\alpha W^*$ has density $\operatorname{f}$ where
$
\operatorname{f}(x)= C x^{-\frac{1}{\gamma}} \mathbbm{1}_{x>\frac{\alpha \beta}{1-\gamma}},
$
for some $C>0$ that makes this a probability measure.
Therefore,
\[
p_{k-1} = \frac{C}{(k-1)!}\int_{\alpha \beta/(1-\gamma)}^\infty x^{k-1-1/\gamma} e^{-x} {\rm d}x = C \frac{\Gamma(k-1/\gamma)-E_k}{\Gamma(k)},
\]
where the error term $E_k$ is defined as
\[
E_k= \int_0^{\alpha \beta/(1-\gamma)} x^{k-1-1/\gamma} e^{-x} {\rm d}x
\leq \left( \frac{\alpha \beta}{1-\gamma} \right)^{k-1/\gamma}\leq 1,
\]
and where the bound holds for $k \geq \frac{1}{\gamma} + 1$, using that $\frac{\alpha \beta}{1-\gamma} < 1$, which follows from the assumption $\alpha < \frac{1-2\gamma}{\beta}$.
We deduce that
$E_k=O(1)$, so that $E_k$ is negligible compared to $\Gamma(k - 1/\gamma)$. Now we use $\Gamma (k)= \Theta \left( k^{k-{{1}/{2}}}e^{-k} \right)$ to rearrange
\[
p_{k-1}= \Theta \left( \frac{(k-1/\gamma)^{k-1/\gamma-{{1}/{2}}}e^{1/\gamma-k}}{k^{k-{{1}/{2}}}e^{-k}} \right)
= \Theta \left( \left(k-\frac{1}{\gamma}\right)^{-1/\gamma} \right)
\]
by using the classical limit
$
\left(1-\frac{1}{k\gamma}\right)^{k-1/2} \rightarrow e^{-1/\gamma}
$.
\end{proof}
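The Stirling step at the end of this proof can be sanity-checked numerically: dropping the constant and the bounded error term $E_k$, the claim is that $\Gamma(k-1/\gamma)/\Gamma(k)$ behaves like $k^{-1/\gamma}$. A minimal sketch via log-gamma, with $\gamma = 0.3$ an arbitrary admissible value:

```python
import math

def gamma_ratio_times_power(k, gamma):
    """Gamma(k - 1/gamma) / Gamma(k) * k**(1/gamma), computed stably via lgamma.

    By the computation in the proof, this quantity should tend to 1 as k grows.
    """
    a = 1.0 / gamma
    return math.exp(math.lgamma(k - a) - math.lgamma(k) + a * math.log(k))

vals = [gamma_ratio_times_power(k, 0.3) for k in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6)]
```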
The statement of Proposition \ref{prop:big_sum_of_degrees} follows immediately from the following lemma, but we will also need this upper bound on the unthinned Galton-Watson forest later on.
\begin{lemma} \label{le:unthinned_sum_of_degrees}
For the Galton-Watson forest $(\mathcal{T}^1,\ldots, \mathcal{T}^N)$ as defined in the paragraph before Proposition~\ref{prop:tree_coupling}, we have that with high probability
\[
\max_{k \leq K_\gamma}
\frac{\sum_{z \in \mathcal{T}^k} \operatorname{d}(z)}{(N/k)^\gamma} \leq \log N.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{le:unthinned_sum_of_degrees}]
First, we consider the degrees of the roots, where we write $\emptyset_k$ for the root of
the tree $\mathcal{T}^k$. Then, since $\operatorname{d}(\emptyset_k) \sim
{\rm Pois}(w(k))$, standard large deviation estimates for Poisson distributions
give a $C_1>0$ such that the event
\[
E_1 := \Big\{ \max_{k \leq K_\gamma}
\frac{\operatorname{d}(\emptyset_k)}{(N/k)^\gamma}\leq C_1 \Big\},
\]
satisfies $\lim_{N \rightarrow \infty} \p(E_1) = 1$.
Then, in distribution, each $\mathcal{T}^k$ consists of the root $\emptyset_k$ with
$\operatorname{d}(\emptyset_k)$ edges, to which we attach independent Galton-Watson trees whose offspring numbers have the same distribution as ${\rm Pois}(W_N^*)$.
In particular, by Lemma~\ref{le:domination}, we can dominate the size of $\mathcal{T}^k$ by the size of $\mathcal{T}^{k,\alpha}$, a tree where the root $\emptyset_k$ has $\operatorname{d}(\emptyset_k)$ children, which are each connected to an independent Galton-Watson tree with offspring distribution $D_\alpha \sim\operatorname{Pois} ( \alpha W^*)$, for some $\alpha \in (1,\frac{1-2\gamma}{\beta})$.
Now, if $T_1, \ldots, T_n$ denote independent copies of $D_\alpha$-Galton-Watson trees, then by Proposition~\ref{le:poisson_powerlaw} the total sizes $|T_i|$ of these trees satisfy $\p(|T_i| =k) = \Theta(k^{-\frac{1}{\gamma}})$,
so that they are subexponential, see~\cite{embrechts2013modelling},
in the sense that
\[
\lim_{n \rightarrow \infty} \sup_{x \geq \gamma n}\left|1- \frac{\mathbb{P}\left(\sum_{k=1}^n \left(|T_k| -\mathbb{E}(|T_k|) \right)>x\right)}{n\mathbb{P}\left(|T_1|>x\right)}\right|=0.
\]
In particular, we have that for any $\epsilon > 0$, for all $d$ sufficiently large and for $x \geq (\gamma + \E(|T_1|))d$
\begin{equation}\label{eq:subexp}
\mathbb{P}\Big(\sum_{k=1}^d |T_k| >x \Big)
\leq(1+\epsilon)d \, \mathbb{P}(|T_1|>x- d \E(|T_1|) ).
\end{equation}
Since $\E(D_\alpha) = \frac{\alpha\beta}{1-2\gamma}$, we have that
\[ \E(|T_1|) = \sum_{k=0}^\infty \Big( \frac{\alpha\beta}{1-2\gamma}\Big)^k = \frac{1}{1- \frac{\alpha \beta}{1- 2 \gamma}}. \]
Therefore, if we define $C_2 = 1 + C_1 \E(|T_1|)$, we have
for any $k \leq K_\gamma$,
\[
\begin{split}
\mathbb{P}\Big( |\mathcal{T}^k|>\Big( & \frac{N}{k} \Big)^\gamma \log N ; E_1 \Big)
\leq \mathbb{P}\Big(|\mathcal{T}^{k,\alpha}|>\Big( \frac{N}{k} \Big)^\gamma \log N ; E_1\Big) \\
& \leq \mathbb{P}\left( 1+ \sum_{i=1}^{\lfloor C_1 (\frac{N}{k})^\gamma\rfloor } |T_i| >\left( \frac{N}{k} \right)^\gamma \log N \right) \\
&\leq (1+\epsilon) C_1 \left( \frac{N}{k} \right)^\gamma \mathbb{P}\left(|T_1|>\left( \frac{N}{k} \right)^\gamma \log N - C_2\left( \frac{N}{k} \right)^\gamma\right)\\
&\leq (1+2\epsilon) C_1 \left( \frac{N}{k} \right)^\gamma \mathbb{P}\left(
|T_1|>\left( \frac{N}{k} \right)^\gamma \log N
\right),
\end{split}
\]
where the last inequality holds for $N$ sufficiently large because we know $|T_1|$ has a power law tail. Hence, by a union bound
\[\begin{aligned}
\mathbb{P}\bigg( \exists k \leq K_\gamma : & |\mathcal{T}^k|> \left( \frac{N}{k} \right)^\gamma \log N \bigg)\\
& \qquad \leq
(1+2\epsilon) C_1 N^\gamma \sum_{k \leq K_\gamma} k^{-\gamma}\, \mathbb{P}\left(
|T_1|>\left( \frac{N}{k} \right)^\gamma \log N\right),
\end{aligned} \]
which for $N$ large enough we can bound for some suitable constant $C_3>0$ by
\[
\begin{split}
&
(1+2\epsilon) C_3 N^\gamma \sum_{k \leq K_\gamma} k^{-\gamma}
\left(\left( \frac{N}{k} \right)^\gamma \log N\right)^\frac{\gamma-1}{\gamma}\\
&=
(1+2\epsilon) C_3 N^{2\gamma-1} \log^{\frac{\gamma-1}{\gamma}} N \sum_{k \leq K_\gamma} k^{1-2\gamma}\\
&\leq
\frac{(1+ 2\epsilon) C_3}{2-2\gamma} \log^{3-\frac{1}{\gamma}-2\gamma} N = o(1),
\end{split}
\]
where we used $K_\gamma = N^{\frac{1-2\gamma}{2-2\gamma}} \log N$.
From this bound on the number of vertices, we can immediately deduce
the claimed bound for the sum of the degrees, since
\[
\sum_{v \in \mathcal{T}^k}\operatorname{d}(v)=2\big|\mathcal{T}^k\big|-2 ,
\]
as each $\mathcal{T}^k$ is a tree.
\end{proof}
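The identity $\E(|T_1|) = (1-m)^{-1}$ for a subcritical Galton-Watson tree with offspring mean $m$, used in the proof above with $m = \frac{\alpha\beta}{1-2\gamma}$, can be illustrated by a quick simulation; here a plain ${\rm Pois}(m)$ offspring law with the arbitrary mean $m = 0.5$ stands in for $D_\alpha$.

```python
import math
import random

def gw_total_progeny(m, rng, cap=10 ** 6):
    """Total progeny of a Galton-Watson tree with Pois(m) offspring, m < 1."""
    def poisson(lam):
        # Knuth's method, sufficient for small means
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    alive, size = 1, 0
    while alive > 0 and size < cap:
        alive += poisson(m) - 1  # remove the current individual, add its children
        size += 1
    return size

rng = random.Random(1)
m = 0.5
sizes = [gw_total_progeny(m, rng) for _ in range(20000)]
mean_size = sum(sizes) / len(sizes)  # should be close to 1 / (1 - m) = 2
```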
\begin{proposition}\label{prop:small_degrees_arent_big}
For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta+2\gamma<1$, we have the following uniform bound on the degrees of vertices with larger index:
\[
\max_{k > K_\gamma} \operatorname{d}(k) = O_{\mathbb{P}} \left( N^\frac{\gamma}{2-2\gamma} \right).
\]
\end{proposition}
\begin{proof}
As before we can use that the degrees are stochastically dominated by the degrees for the MNR model, where each $\operatorname{d}(k) \sim {\rm Pois}(w(k))$.
By Markov's inequality applied to $e^{\operatorname{d}(k)}$, we have that
\[
\begin{split}
\log \mathbb{P}\left(d(k) \geq N^\frac{\gamma}{2-2\gamma} \right) &\leq \log \mathbb{P}\left(\operatorname{Pois}( w(k) ) \geq N^\frac{\gamma}{2-2\gamma} \right)\\
&\leq \log \mathbb{E}\left(e^{\operatorname{Pois}( w(k) )} \right) - N^\frac{\gamma}{2-2\gamma} \\
&= w(k) (e-1) - N^\frac{\gamma}{2-2\gamma} ,
\end{split}
\]
where by a slight abuse of notation we also write ${\rm Pois}(w(k))$ for a Poisson random variable with parameter $w(k)$.
Hence, by a union bound and using that $w(k)$ is decreasing, we obtain
\[
\begin{split}
\mathbb{P}\left( \exists k>K_\gamma : d(k)> N^\frac{\gamma}{2-2\gamma} \right)&\leq e^{- N^\frac{\gamma}{2-2\gamma}} \sum_{k=K_\gamma+1}^N e^{w(k) (e-1)} \\
&\leq (N-K_{\gamma}+1) e^{w(K_\gamma) (e-1)- N^\frac{\gamma}{2-2\gamma}} \\
&\leq N \exp \left( N^{\frac{\gamma}{2-2\gamma}} \left( -1 + \frac{\beta (e-1)}{1-\gamma} \frac{1}{\log^{\gamma} N} \right) \right)=o(1),\\
\end{split}
\]
where in the last step we used that
$w(k) \leq \frac{\beta}{1-\gamma} (\frac{N}{k})^\gamma$, see also~\eqref{eq:up_bound_w}.
\end{proof}
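The exponential-moment step above rests on $\E\big(e^{{\rm Pois}(\lambda)}\big) = e^{\lambda(e-1)}$, giving $\log \p({\rm Pois}(\lambda) \geq x) \leq \lambda(e-1) - x$. A minimal numerical check, with the values $\lambda = 3$, $x = 20$ chosen arbitrarily:

```python
import math

def poisson_upper_tail(lam, x, terms=400):
    """P(Pois(lam) >= x), summing the pmf of the upper tail directly.

    Summing the upper tail avoids the cancellation in 1 - CDF when the tail
    is tiny; the pmf terms are computed in log space via lgamma.
    """
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(x, x + terms))

lam, x = 3.0, 20
tail = poisson_upper_tail(lam, x)
chernoff_bound = lam * (math.e - 1) - x  # bound on log P(Pois(lam) >= x)
```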
\begin{proof}[Proof of Proposition \ref{prop:small_sum_of_degrees}]
As before it suffices to bound the degrees in the MNR graph.
By Remark~\ref{rem:depletion}, the tree construction yields the stochastic upper bound
\[ \max_{k \notin V_{\rm big} } \sum_{v \in \mathscr{C}(k)} \operatorname{d} (v)
\preceq \max_{k = K_\gamma +1,\ldots,N} \sum_{v \in \mathcal{T}^k_k} \operatorname{d}(v)
\leq \max_{k = K_\gamma + 1,\ldots,N} 2 |\mathcal{T}^k_k| , \]
where $\mathcal{T}^k_k$ are independent Galton-Watson trees with the following law:
the root of $\mathcal{T}^k_k$ has a ${\rm Pois}(w(k))$ number of offspring and all other offspring
are independent and have a ${\rm Pois}(W^*_{N,k})$ distribution.
For any thinning level $z \in [N]$, we recall that $W^*_{N,z}$ is defined
as
\begin{equation}\label{truncated_weight}
W^*_{N,z}
\stackrel{({\rm d})}{=}
w (M)
\mathbbm{1}_{M>z},
\end{equation}
where $M$ is the usual mark distribution (which chooses $i \in [N]$ with probability proportional to $i^{-\gamma}$).
The same argument as in the proof of Proposition~\ref{prop:small_degrees_arent_big} also shows that there exists a constant $C_1 > 0$ such that if $\emptyset_k$ denotes the root of $\mathcal{T}_k^k$, then we have that the event
\begin{equation}\label{eq:event_E1} E_1 := \Big\{ \max_{k =K_\gamma +1 , \ldots, N} \operatorname{d}(\emptyset_k) \leq C_1 N^{\frac{\gamma}{2-2 \gamma}} \Big\} \end{equation}
satisfies $\p(E_1) \rightarrow 1$ as $N \rightarrow \infty$.
To bound the size of the trees $\mathcal{T}_k^k$, we use the standard connection to random walks, see e.g.~\cite[Section 3.3]{van2016random}, where we consecutively record the number of offspring of each individual in the branching process.
Define
\[
R:=\left\lfloor C_1 N^\frac{\gamma}{2-2\gamma} \right \rfloor,
\]
for the same constant $C_1$ as in~\eqref{eq:event_E1}.
Then, define a random walk $(S_n)_{n\geq 0}$, where we set $S_0 = 0, S_1 = R$ and for $i \geq 1$ suppose that the increments $S_{i+1} - S_i$
are independent and with the same distribution as $D-1$, where
$D \sim {\rm Pois}(W^*_{N,z})$. The random walk connection then yields
that for any $z \in \{K_\gamma + 1, \ldots, N\}$ and any $L \geq 0$
\[ \p(|\mathcal{T}^z_z| > L; E_1) \leq \p(S_{L+1} \geq 0 ) . \]
Now, for large $N$
we define
\[ L := \frac{2 R}{1-\mathbb{E}(W^*_N)} , \]
which is well-defined as $\mathbb{E}(W^*_N) \rightarrow \E(W^*) < 1$.
Moreover, we define $(X_i^{(z)})_{i \geq 1}$ as a sequence of i.i.d.\ random variables with the same distribution as $D - \E(W_{N,z}^*)$, where $D \sim {\rm Pois}(W_{N,z}^*)$. Writing $D_i := S_{i+1} - S_i + 1$ for the offspring numbers and using that $\E(W^*_{N,z}) \leq \E(W^*_N)$, so that $L\big(1-\E(W^*_{N,z})\big) \geq L\big(1-\E(W^*_N)\big) = 2R$, we can estimate
\begin{align}
\mathbb{P}\left(
|\mathcal{T}^z_z| > L; E_1\right)
& \leq \mathbb{P} ( S_{L+1} \geq 0 )
= \mathbb{P} \bigg( R + \sum_{i=1}^L (D_i - 1) \geq 0 \bigg)
\notag \\ &
\leq
\mathbb{P} \bigg(
\sum_{i=1}^{L} X^{(z)}_i \geq L\Big(1-\mathbb{E}( W^*_{N,z} )\Big)-R
\bigg)
\notag \\ & \leq
\mathbb{P} \bigg(
\sum_{i=1}^{L} X^{(z)}_i \geq R
\bigg) . \notag
\end{align}
Then, by Markov's inequality for any $r>2 \vee \left(\frac{1}{\gamma}-1\right)$ and $N$ sufficiently large, we can deduce that
\begin{equation}\label{post_markov}
\mathbb{P}\left(
|\mathcal{T}^z_z| > L; E_1\right)
\leq \frac{\mathbb{E}|
\sum_{i=1}^{L} X^{(z)}_i
|^r}{R^r} \leq
C_2\frac{
L^{\frac{r}{2}} w(z)^{\frac{r}{2}\left( 3-\frac{1}{\gamma} \right)^+} + L w(z)^{r+1-\frac{1}{\gamma}}
}{R^r} ,
\end{equation}
where
$C_2 > 0$ is a suitable constant, coming out of the estimate on the fractional moment, which we defer to Lemma \ref{walk_moments}.
For the remainder of the proof,
we will need to fix an even larger $r$ and assume that
\[
r>\frac{4-4\gamma}{\gamma \wedge (1-2\gamma)}.
\]
By a union bound combined with the bound in~\eqref{post_markov} and the definitions of $L$ and $R$, we find that there exists a $C_3 >0$ such that
\begin{equation}\label{eq:4_2_union}\begin{aligned}
\p\Big( \max_{z \in \{K_\gamma+1, \ldots, N\}} |\mathcal{T}_z^z| >L; E_1
\Big)& \leq
\sum_{z = K_\gamma + 1}^N
\mathbb{P}\left(
|\mathcal{T}_z^z| >L; E_1
\right)\\
& \leq
C_3 \sum_{z =K_\gamma + 1}^N
\frac{N^{\frac{r}{2}\left( 3\gamma-1 \right)^+ - \frac{r}{2}\frac{\gamma}{2-2\gamma}}}{z^{\frac{r}{2}\left( 3-\frac{1}{\gamma} \right)^+} \log^{\frac{r}{2}} N}
+
\frac{N^{\gamma r + \gamma -1 +(1-r)\frac{\gamma}{2-2\gamma}}}{z^{\gamma r + \gamma -1}\log^{r-1} N}\end{aligned}
\end{equation}
and it remains to show that this sum tends to $0$. For the first term, observe
\[
\frac{N^{\frac{r}{2}\left( 3\gamma-1 \right)^+ - \frac{r}{2}\frac{\gamma}{2-2\gamma}}}{z^{\frac{r}{2}\left( 3-\frac{1}{\gamma} \right)^+} }
\leq
\frac{N^{\frac{r}{2}\left( 3\gamma-1 \right)^+ - \frac{r}{2}\frac{\gamma}{2-2\gamma}}}{N^{\frac{1-2\gamma}{2-2\gamma}\frac{r}{2}\left( 3-\frac{1}{\gamma} \right)^+} }
=
\left(
N^{\frac{\left( 3\gamma-1 \right)^+-\gamma}{2-2\gamma}}
\right)^\frac{r}{2},
\]
and we note that the exponent of $N$ in this expression is less than $-1$ by our choice of $r$ and since
\[
\left( 3\gamma-1 \right)^+-\gamma=
\begin{cases}
2\gamma-1, &\mbox{if } \gamma>\frac{1}{3}, \\
-\gamma, &\mbox{if } \gamma\leq \frac{1}{3}.\\
\end{cases}
\]
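In fact, the case distinction shows that $\left( 3\gamma-1 \right)^+-\gamma=-\big(\gamma\wedge(1-2\gamma)\big)$ in both cases, so the choice $r>\frac{4-4\gamma}{\gamma \wedge (1-2\gamma)}$, i.e.\ $\frac{r}{2}>\frac{2-2\gamma}{\gamma\wedge(1-2\gamma)}$, gives
\[
\frac{r}{2}\cdot\frac{\left( 3\gamma-1 \right)^+ -\gamma}{2-2\gamma}
=-\frac{r}{2}\cdot\frac{\gamma\wedge(1-2\gamma)}{2-2\gamma}
<-\frac{2-2\gamma}{\gamma\wedge(1-2\gamma)}\cdot\frac{\gamma\wedge(1-2\gamma)}{2-2\gamma}=-1,
\]
confirming that the exponent is indeed below $-1$.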
In particular, the first term in~\eqref{eq:4_2_union} converges to $0$.
For the second term, again by our choice of $r$, we have that $r > \frac{2-\gamma}{\gamma}$
so that we can deduce that
\[
\sum_{z = K_\gamma+1}^N
\frac{1}{z^{\gamma r + \gamma -1}}=O\left(
N^{\frac{1-2\gamma}{2-2\gamma}(2-\gamma r - \gamma)}
\right).
\]
In particular,
we have that
\[\begin{aligned}
\frac{N^{\gamma r + \gamma -1 +(1-r)\frac{\gamma}{2-2\gamma}}}{\log^{r-1} N}
\sum_{z>K_\gamma}\frac{1}{z^{\gamma r + \gamma -1}}
& =O\left(
\frac{N^{\gamma r + \gamma -1 +(1-r)\frac{\gamma}{2-2\gamma}}}{\log^{r-1} N}
N^{\frac{1-2\gamma}{2-2\gamma}(2-\gamma r - \gamma)}
\right)
\\
& =O\left(\log^{1-r} N\right)=o(1) , \end{aligned}
\]
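The exact cancellation of the powers of $N$ in the last step can be checked directly: multiplying the total exponent by $2-2\gamma$ gives
\[
(\gamma r+\gamma-1)(2-2\gamma)+(1-r)\gamma+(1-2\gamma)(2-\gamma r-\gamma)=0,
\]
as one verifies by expanding term by term, so only the factor $\log^{1-r}N$ remains.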
which shows that also the second term in~\eqref{eq:4_2_union} tends to $0$.
Thus, we can conclude from~\eqref{eq:4_2_union} that, with high probability on the event $E_1$, every tree has size at most $L$. Since $E_1$ itself occurs with high probability, the asymptotics of $L$ then give the required bound.
\end{proof}
The following lemma provides the moment estimate that was required in the proof of Proposition \ref{prop:small_sum_of_degrees} above.
\begin{lemma}\label{walk_moments}
For $L \in \mathbb{N}$ and $z \in [N]$, suppose that $X^{(z)}_i, i\leq L$, are independent random variables with the same distribution as $D - \E(W_{N,z}^*)$, where $D \sim {\rm Pois} (W^*_{N,z})$ and where $W^*_{N,z}$ is defined in~\eqref{truncated_weight}. Then,
for any $N \in \mathbb{N}$, $r>2 \vee \left(\frac{1}{\gamma}-1\right)$ and $z>1$
we have
\[
\mathbb{E}\left(
\Big|
\sum_{i=1}^{L} X^{(z)}_i
\Big|^r
\right)
\leq C \Big(
L^{\frac{r}{2}} w(z)^{\frac{r}{2}\left( 3-\frac{1}{\gamma} \right)^+} + L w(z)^{r+1-\frac{1}{\gamma}}
\Big) ,
\]
where $C>0$ is a constant depending only on $r$ and $\gamma$
and $w(z)$ is defined in~\eqref{weight_defn}.
\end{lemma}
\begin{proof}
These calculations use a similar strategy to the proof of~\cite[Theorem 1.1]{janson2008largest}. We write $X^{(z)}$ for a random variable with the same distribution as $X^{(z)}_i$.
We start by estimating the second and the $r$th moment of $X^{(z)}$.
First, note that as
\[
\mathbb{E} \left(
W^*_{N,z}
\right)\leq
\mathbb{E} \left(
W^*_{N}
\right)
\rightarrow
\frac{\beta}{1-2\gamma},
\]
we deduce for $N$ sufficiently large, writing $D \sim \operatorname{Pois}(W^*_{N,z})$ as in the statement of the lemma and noting that $\mathbb{E}(X^{(z)})=0$,
\[ \begin{aligned}
\mathbb{E}\big( (X^{(z)})^2 \big)
& =
\operatorname{Var} ( D )
=
\operatorname{Var} \left(
\mathbb{E} \left(
D \mid W^*_{N,z}
\right)
\right)
+
\mathbb{E} \left(
\operatorname{Var} \left(
D \mid W^*_{N,z}
\right)
\right)
\\
& =
\operatorname{Var} \left(
W^*_{N,z}
\right)
+
\mathbb{E} \left(
W^*_{N,z}
\right)
\leq
\mathbb{E} \left( \left(
W^*_{N,z}
\right)^2 \right)
+
\frac{\beta}{1-2\gamma}.
\end{aligned} \]
Using Lemma \ref{le:domination}, for any $\alpha>1$ we find a constant $C_1>0$, independent of $N$, such that
\[
\begin{split}
\mathbb{E} \left( (W^*_{N,z})^2 \right)
&= \int_0^\infty 2x \, \mathbb{P}(W^*_{N,z}>x)\, {\rm d}x
\leq \left( \frac{\beta}{1-\gamma} \right)^2 + \int_{\frac{\beta}{1-\gamma}}^{w(z)} 2x \, \mathbb{P}(W^*_{N}>x)\, {\rm d} x\\
&\leq \left( \frac{\beta}{1-\gamma} \right)^2 + \int_{\frac{\beta}{1-\gamma}}^{w(z)} 2x \, \mathbb{P}(\alpha W^*>x)\, {\rm d} x
\leq C_1 \int_{\frac{\beta}{1-\gamma}}^{w(z)} x^{2-\frac{1}{\gamma}} \, {\rm d}x\\
&\leq C_1 \left( 1+ w(z)^{\left( 3-\frac{1}{\gamma} \right)^+} \right),\\
\end{split}
\]
for $N$ sufficiently large, where we used the explicit density of $W^*$ identified in~\eqref{eq:W_density}.
We now have to estimate $\mathbb{E}\big((X^{(z)})^r\big)$ for $r$ as above. We claim that
\begin{equation}\label{eq:claim_0604}
\sup_{\lambda\geq \frac{\beta}{1-\gamma}} \frac{\mathbb{E}(\operatorname{Pois}(\lambda)^r)}{\lambda^r}<\infty.
\end{equation}
Indeed,
since the Poisson distribution has an exponential moment, we know
$
\lambda \mapsto {\mathbb{E}(\operatorname{Pois}(\lambda)^r)}/{\lambda^r}
$
is finite and continuous on $\left[ \frac{\beta}{1-\gamma},\infty \right)$. Further
\[
\left\| \frac{\operatorname{Pois}(\lambda)}{\lambda} \right\|_r
\leq
\frac{ \lambda + \left\| \operatorname{Pois}(\lambda) - \lambda \right\|_r }{\lambda}
=
1+\frac{ \left\| \frac{\operatorname{Pois}(\lambda) - \lambda}{\sqrt{\lambda}} \right\|_r }{\sqrt{\lambda}}
\rightarrow
1
\]
as $\lambda \rightarrow \infty$, by the central limit theorem.
This proves the claim~\eqref{eq:claim_0604}.
Hence, since $X^{(z)} \preceq \operatorname{Pois} \left( W^*_{N,z} \right)$, there is some $C_2>0$ such that
\[
\mathbb{E}\left(\left(X^{(z)}\right)^r\right) \leq C_2 \mathbb{E}( (W^*_{N,z})^r)
\]
for $N$ sufficiently large. We have, given $r>\frac{1}{\gamma}-1$,
\[ \begin{aligned}
\frac{\mathbb{E}( (W^*_{N,z})^r)}{z\mathbb{P}(M=z)w(z)^r}
& =
\frac{\sum_{j=z}^N \mathbb{P}(M=j) w(j)^r}{z\mathbb{P}(M=z)w(z)^r}
=
\sum_{j=z}^N
\frac{1}{z} \frac{j^{-\gamma} j^{-\gamma r}}{z^{-\gamma} z^{-\gamma r}}\\
& \leq z^{\gamma+\gamma r -1} \int_{z-1}^\infty j^{-\gamma-\gamma r} {\rm d} j
= \frac{1}{\gamma+\gamma r -1} \left(1-\frac{1}{z} \right)^{1-\gamma-\gamma r}\\
&\leq\frac{2^{\gamma+\gamma r -1}}{\gamma+\gamma r -1}
=O(1),
\end{aligned}
\]
where we used that $1-\frac{1}{z}\geq \frac{1}{2}$ for $z\geq 2$ and $1-\gamma-\gamma r<0$, so the bound holds uniformly over all $z>1$. Thus there is some constant $C_3$ such that, for every $z>1$,
\[
\mathbb{E}\left(\left( X^{(z)} \right)^r\right) < C_3 z\mathbb{P}(M=z)w(z)^r.
\]
Therefore by Rosenthal's inequality \cite[Chapter 3, Theorem 9.1]{gut2013probability} we obtain for $r>2\vee (\frac{1}{\gamma}-1)$
\[ \begin{aligned}
\mathbb{E}\bigg(\Big|
\sum_{i=1}^{L} \left( X_i^{(z)} \right)
\Big|^r\bigg)
& \leq
C_3 L^{r/2} \big( \mathbb{E}\big((X^{(z)})^2\big) \big)^{r/2}
+C_4 L \mathbb{E}\big((X^{(z)})^r\big)\\
& \leq
C_5 L^{r/2}
\left(1+w(z)^{\left( 3-\frac{1}{\gamma} \right)^+}\right)^{r/2}
+C_6 L z\mathbb{P}(M=z)w(z)^r\\
& \leq
C_7 L^{r/2}
w(z)^{\left( 3-\frac{1}{\gamma} \right)^+\frac{r}{2}}
+C_8 L
\frac{z^{1-\gamma}w(z)^r}{N^{1-\gamma}},
\end{aligned}
\]
as claimed, since $w(z)=\Theta\big((N/z)^\gamma\big)$ and hence $(z/N)^{1-\gamma} = \Theta\big( w(z)^{1-\frac{1}{\gamma}} \big)$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:stars_and_leaves}]
(a) Again we construct the SNR network in $\mathcal{G}_{\beta,\gamma}$ via the MNR network and use the tree construction of Proposition~\ref{prop:tree_coupling} for an upper bound. Recall that, by a standard concentration argument, there is some universal constant $C>0$ such that for $\lambda$ large enough
\begin{equation}\label{eq:poisson_deviations}
\mathbb{P}\left( \left| \operatorname{Pois} \left( \lambda \right) - \lambda \right| \geq \frac{\lambda}{2} \right)\leq e^{-C \lambda},
\end{equation}
so that for the unthinned degrees in $\mathcal{T}^k$ we can immediately compare $\operatorname{d}(k)$ to $(N/k)^\gamma$. The upper bound on $\operatorname{d}(k)$ in the SNR network then follows immediately, because thinning can only decrease the degree.
For a lower bound, it suffices to show that overall not too many vertices are being thinned in the big components. More precisely, define
\[ Z^{\rm thin} := \sum_{k =1}^{\lfloor K_\gamma \rfloor} \sum_{v \in \mathcal{T}^k} \mathbbm{1}_{\{ v \mbox{ thinned}\}}. \]
We will show in the following that $Z^{\rm thin} \leq (\log N)^3$ with high probability, which immediately implies the lower bound on the degrees (as these are polynomially large) by the same concentration argument for the Poisson degrees as before.
From Lemma \ref{le:unthinned_sum_of_degrees} we have with high probability
\[
\max_{k \leq K_\gamma}
\frac{|\mathcal{T}^k|}{(N/k)^\gamma}
\leq
\max_{k \leq K_\gamma}
\frac{\sum_{z \in \mathcal{T}^k} \operatorname{d}(z)}{(N/k)^\gamma}\leq \log N.
\]
Therefore, by summation and the definition of $K_\gamma$ we have that the event
\[
E_1 = \Big\{
\Big|
\bigcup_{i \leq K_\gamma} V(\mathscr{C}(i))\Big|
=
|V_{\rm Big}|
\leq
\frac{1}{1-\gamma}\sqrt{N}\log N \Big \},
\]
satisfies $\p(E_1) \rightarrow 1$ as $N\rightarrow \infty$.
We can bound $Z^{\rm thin}$ by the number of pairs of vertices that carry the same mark. Thus, if we write $M,M'$ for two independent copies of the mark distribution, then by distinguishing the cases of root vertices and remaining vertices, we obtain
\[ \begin{aligned}
\E(Z^{\rm thin}\mathbbm{1}_{E_1}) & \leq
\frac{1}{1-\gamma}
\sqrt{N} (\log N ) \mathbb{P}\left(M\leq K_\gamma \right)
+
\frac{1}{(1-\gamma)^2}
N(\log N )^{2} \mathbb{P}\left(M=M'\right) \\
& =O \left( \log^{2-\gamma} N + \log^2 N \right),
\end{aligned}
\]
where we used that
\begin{equation}\label{eq:MMprime}
\mathbb{P}(M=M^\prime)=\sum_{i=1}^N \left(\frac{i^{-\gamma}}{\sum_{j=1}^N j^{-\gamma}}\right)^2=\Theta\left( \frac{N^{1-2\gamma}}{N^{2-2\gamma}} \right)=\Theta\left( \frac{1}{N} \right).
\end{equation}
Hence, by Markov's inequality, we have that
\[ \p(Z^{\rm thin} \geq (\log N)^3) \leq \frac{\E(Z^{\rm thin}\mathbbm{1}_{E_1})}{(\log N)^3} + \p(E_1^c) , \]
and the right hand side tends to zero as $N \rightarrow \infty$.
(b) Now each neighbour of a root vertex $\emptyset_k$ which was not thinned has offspring distribution $D \sim \operatorname{Pois}(W_N^*)$, and hence independently has no children with probability
\[
p_0=
\mathbb{P}(D=0)
=\mathbb{E}(e^{-W_N^*})
\geq
e^{-\mathbb{E}(W_N^*)}
\rightarrow
e^{-\frac{\beta}{1-2\gamma}}
>0
\]
where we used Jensen's inequality. This gives the bound on $|L_k|$ by a standard binomial concentration argument.
\end{proof}
\begin{proof}[Proof of Lemma \ref{le:subcritical_branch_control}]
By Lemma \ref{le:unthinned_sum_of_degrees} we know that the event
\begin{equation}\label{eq:big_components}
E_1 := \bigg\{ |\mathscr{C}(k)|\leq \left(\frac{N}{k}\right)^\gamma \log N \mbox{ for all } k \leq K_\gamma \bigg\},
\end{equation}
satisfies $\p(E_1) \rightarrow 1$ as $N \rightarrow \infty$.
As observed in Remark~\ref{rem:cycles}, the thinning operation does not create cycles between components, nor does it create extra edges between the root vertex and one of its children.
We now bound the surplus of each component, defined as the number of edges in excess of a spanning tree on the same vertex set. Writing $M$ and $M'$ for two independent copies of the mark distribution, we get
\[
\mathbb{E}(\text{surplus}(\mathscr{C}(k)); E_1)\leq \mathbb{P}(M=M^\prime) \left(\frac{N}{k}\right)^{2\gamma} \log^2 N .
\]
Hence, we obtain for the total surplus in the big components, using~\eqref{eq:MMprime}
\[
\begin{split}
\sum_{i =1}^{\left\lfloor K_\gamma \right\rfloor }\mathbb{E}(\text{surplus}(\mathscr{C}(i)); E_1)
&\leq \mathbb{P}(M=M^\prime) N^{2\gamma} \log^2 N \int_0^{K_\gamma} i^{-2\gamma} {\rm d}i \\
&=O^{\log N} \left( N^{2\gamma-1} N^{\frac{(1-2\gamma)^2}{2-2\gamma}} \right)\\
&=O^{\log N} \left( N^{\frac{2\gamma-1}{2-2\gamma}} \right)=o(1).
\end{split}
\]
Hence, combining this with the fact that $\p(E_1) \rightarrow 1$, we obtain
by Markov's inequality that the big components form a forest with high probability.
Now that we know each component is a tree, it makes sense to talk about \emph{branches} of the root vertices. Again, we can stochastically upper bound the sizes of these branches in SNR by the ones in MNR, which are bounded by the (unthinned) branches in the forest $\mathcal{T}^1, \ldots, \mathcal{T}^{\lfloor K_\gamma \rfloor}$.
In the latter, each of the branches is an independent ${\rm Pois}(W_N^*)$-GW tree.
Note that the total number of these trees is bounded by $\sum_{k = 1}^{\lfloor K_\gamma \rfloor} \operatorname{d} (\emptyset_k)$, where $\emptyset_k$ is the root of $\mathcal{T}^k$.
By the same argument as in the proof of Lemma~\ref{le:unthinned_sum_of_degrees}, we therefore have that there exists a constant $C_2 >0$ such that the event
\[ E_2 = \bigg\{ \sum_{k=1}^{\left\lfloor K_\gamma \right\rfloor }
\operatorname{d}(\emptyset_k) \leq C_2 \sqrt{N} \log^{1-\gamma} N \bigg\}, \]
satisfies $\p(E_2) \rightarrow 1$, since
$\sum_{i=1}^{\left\lfloor K_\gamma \right\rfloor } \left(\frac{N}{i}\right)^\gamma = O(\sqrt{N} \log^{1-\gamma} N )$.
Let $(T_i)_{i \geq 1}$ be a sequence of i.i.d.\ ${\rm Pois}(\alpha W^*)$-GW trees, where
$\alpha \in \left(1,\frac{1-2\gamma}{\beta}\right)$.
Further, let $J=\left\lfloor C_2 \sqrt{N} \log N \right\rfloor$. Then, by the above argument and
Lemma~\ref{le:domination}, we have that
\[
\begin{split}
\mathbb{P}\Big( \max_{k \leq K_\gamma} \max_{B \in \mathcal{B}
(\mathscr{C}(k))} & \sum_{ v \in B} \operatorname{d} (v) \geq N^{\frac{\gamma}{2-2\gamma}}\log N ; E_2 \Big) \\
& \leq
\mathbb{P}\left( \max_{i=1}^{J} |T_i|>N^{\frac{\gamma}{2-2\gamma}}\log N \right)\\
& \leq \sum_{i=1}^J \p( |T_1| > N^{\frac{\gamma}{2-2\gamma}} \log N) \\
&= O \Big( J \big(N^{\frac{\gamma}{2-2\gamma}} \log N \big)^{-(\frac{1}{\gamma} -1)} \Big)
= O \left( \log(N)^{2 - \frac{1}{\gamma} }\right) = o(1) ,
\end{split}
\]
where we used Proposition~\ref{le:poisson_powerlaw} in the final line.
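The powers of $N$ in the final estimate cancel exactly, since
\[
J \cdot \Big(N^{\frac{\gamma}{2-2\gamma}}\Big)^{-\left(\frac{1}{\gamma}-1\right)}
=\Theta\Big( N^{\frac{1}{2}}\log N\Big)\cdot N^{-\frac{1-\gamma}{2-2\gamma}}
=\Theta\left(\log N\right),
\]
and the remaining factor $(\log N)^{-\left(\frac{1}{\gamma}-1\right)}$ accounts for the stated order $\log^{2-\frac{1}{\gamma}} N$.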
Since $\p(E_2^c) \rightarrow 0$, this completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \ref{le:empirical_moment}]
Note that $\operatorname{d}(\emptyset_1)$, the degree of the root of $\mathcal{T}^1$,
satisfies $\operatorname{d}(\emptyset_1) \sim {\rm Pois}(w(1))$ and $w(1) = \Theta(N^\gamma)$. Hence, we can immediately deduce by standard Poisson concentration that there are constants $c_1 , C_1 > 0$ such that the event
\[ E_1 := \{ c_1 N^\gamma \leq \operatorname{d} (\emptyset_1) \leq C_1 N^\gamma \} , \]
holds with high probability.
For a lower bound, we note that for the SNR model \[ \sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)^{\eta} \geq \operatorname{d}(1)^{\eta} . \]
However, on the event $E_1$, the expected number of repeated labels among the children of the root $\emptyset_1$ of $\mathcal{T}^1$ is of order
\[
O^{\log N}\left(
N^\gamma \mathbb{P}(M=1)
+
N^{2\gamma} \mathbb{P}(M=M')
\right)
=
O^{\log N}\left(N^{2\gamma-1}\right)
=
o(1) , \]
where as before we write $M, M'$ for independent copies of the mark distribution.
So with high probability we have that $\operatorname{d}(1) = \operatorname{d}(\emptyset_1)\geq c_1 N^\gamma$, which gives the required lower bound.
For the upper bound, by the same arguments used before we only have to bound the degrees in $\mathcal{T}^1$.
Note that with the exception of the degree of the root, all other degrees have the same distribution as $D$, where $D \sim 1+ \operatorname{Pois}\left( W_N^* \right)$.
Note also that by Lemma~\ref{le:unthinned_sum_of_degrees} there exists a constant $C_2$ such that
the event $E_2 = \{ |\mathcal{T}^1| \leq C_2 N^{\gamma} \log N \}$ occurs with high probability.
Therefore, if we let $(D_i)_{i \geq 1}$ be i.i.d.\ random variables with the same distribution as $D$ and set $J := \lfloor C_2 N^{\gamma} \log N \rfloor$, then for any $\eta > 1$,
\[ \p \bigg( \sum_{v \in \mathcal{T}^1, v \neq \emptyset_1} \operatorname{d}(v)^\eta
\geq (\log N)^3 N^{\eta\gamma}; E_2 \bigg)
\leq \p\bigg( \sum_{i=1}^J D_i^\eta \geq (\log N)^3 N^{\eta\gamma} \bigg) .\]
To estimate the probability on the right, we use a first moment bound.
We notice that
\[
\mathbb{E}((W_N^*)^\eta)
= \sum_{i=1}^N \frac{w(i)^{\eta + 1} }{\sum_{j = 1}^N w(j)} = \Theta\begin{cases}
N^{\gamma(\eta +1) -1} & \eta>\frac{1}{\gamma} - 1,\\
\log N & \eta = \frac{1}{\gamma} -1,\\
1 & \eta < \frac{1}{\gamma} - 1.
\end{cases}
\]
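This follows since $w(i)=\Theta\big((N/i)^\gamma\big)$ and $\sum_{j=1}^N w(j)=\Theta(N)$, so that
\[
\sum_{i=1}^N w(i)^{\eta+1}
=\Theta\Big( N^{\gamma(\eta+1)} \sum_{i=1}^N i^{-\gamma(\eta+1)} \Big),
\]
where the inner sum is of order $N^{1-\gamma(\eta+1)}$, $\log N$ or $O(1)$ according to whether $\gamma(\eta+1)$ is smaller than, equal to, or larger than $1$; note that $\gamma(\eta+1) \lessgtr 1$ is equivalent to $\eta \lessgtr \frac{1}{\gamma}-1$.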
Moreover, there exists a constant $C_3 > 0$ such that
\[
\mathbb{E}\left(\left(
1+ \operatorname{Pois}\left( W_N^* \right)
\right)^\eta
\right)
\leq
2^\eta \mathbb{E}\left(
1 \vee \operatorname{Pois}\left( W_N^* \right)^\eta
\right)
\leq
C_3 \mathbb{E}\left(
\left(W_N^*\right)^\eta
\right),
\]
and in particular we have that
\[
\mathbb{E} \left( \sum_{i=1}^J D_i^\eta \right)
=O
\left( (\log N)^2
N^{(\gamma(\eta +1)-1)^+ +\gamma}
\right)
\leq
O
\left( (\log N)^2
N^{\eta\gamma}
\right) ,
\]
where we used that $\gamma < \frac 12$ in the last step.
Hence, by Markov's inequality,
\[ \p \bigg( \sum_{v \in \mathcal{T}^1, v \neq \emptyset_1} \operatorname{d}(v)^\eta
\geq (\log N)^3 N^{\eta\gamma}; E_2 \bigg)
\leq O ((\log N)^{-1} ) .
\]
Since the event $E_2$ occurs with high probability and since we have by the first part that $\operatorname{d}(\emptyset_1)^\eta \leq C_1^\eta N^{\eta\gamma}$ on the high probability event $E_1$, the upper bound in the statement of the lemma follows immediately.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:existence_simple_double_star}]
Consider the index set
\[
I:=
\left[
N^\frac{1-2\gamma}{2-2\gamma},
N^\frac{1-2\gamma}{2-2\gamma} \log N
\right] \cap \mathbb{N}
\subset [K_\gamma].
\]
We calculate the MNR weight as
\[
w(I)\sim \sum_{k \in I} \frac{\beta}{1-\gamma}\left( \frac{N}{k} \right)^\gamma
=\Theta\left(
\sqrt{N} \log^{1-\gamma} N
\right)
\]
and so for the MNR model the expected number of edges in the subgraph induced on $I$ is
\[
\frac{w(I)^2}{w([N])}=\Theta \left( \log^{2-2\gamma} N \right),
\]
and so diverges to $\infty$.
Since this number is Poisson distributed, it is nonzero with high probability; after collapsing any multi-edges to arrive at the SNR model it must still be nonzero with high probability.
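Indeed, if the number of such edges is ${\rm Pois}(\lambda_N)$ with $\lambda_N = \Theta\left( \log^{2-2\gamma} N \right) \rightarrow \infty$, then
\[
\mathbb{P}\left( {\rm Pois}(\lambda_N)=0 \right)=e^{-\lambda_N}\rightarrow 0.
\]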
We can take any such adjacent pair $(x,y) \in I^2$ to create a double star, which by Lemma \ref{le:subcritical_branch_control} is a tree and by Proposition \ref{prop:star_degrees} has
\[
\operatorname{d}(x) \mbox{ and } \operatorname{d}(y) =\Theta_{\mathbb{P}}^{\log N} \left(
N^\frac{\gamma}{2-2\gamma}
\right).
\]
For the final claim of the proposition we consider the empirical moment. The estimate on the moment can be proved in the same way as in the proof of Lemma \ref{le:empirical_moment} above, but instead we now have to control the $\eta$th empirical moment
of an i.i.d.\ sequence $D_i, i =1, \ldots, \lfloor N^\frac{\gamma}{2-2\gamma} \log N \rfloor$ where
$D_{i} \sim 1+ \operatorname{Pois}\left( W_N^* \right).$
Then, the result follows by an analogous argument, combined with the fact that with high probability we do not see any thinning on this double star.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:existence_long_double_star}]
We first prove the statement for the multigraph MNR.
Let $V$ be the set of vertices with weight $w$ (as defined in~\eqref{weight_defn}) less than $1$,
\[
V:=\left\{ v \in[N] : w(v) < 1 \right\}.
\]
As in the previous proof, we consider
$I:=
\left[
N^\frac{1-2\gamma}{2-2\gamma},
N^\frac{1-2\gamma}{2-2\gamma} \log N
\right] \cap \mathbb{N}
\subset [K_\gamma].$
We split both vertex sets into even and odd vertices as
\[
V^{\rm even}:=V \cap \left( 2\mathbb{N} \right);
\qquad
V^{\rm odd}:=V \cap \left( 2\mathbb{N} +1 \right);
\]
\[
I^{\rm even}:=I \cap \left( 2\mathbb{N} \right);
\qquad
I^{\rm odd}:=I \cap \left( 2\mathbb{N} +1 \right).
\]
Recall that $w(v) \leq \frac{\beta}{1-\gamma} (\frac{N}{v} )^\gamma$ by~\eqref{eq:up_bound_w}, so
any vertex $v$ with $v >
N \left( \frac{\beta}{1-\gamma} \right)^{1/\gamma}
$ is in $V$. Since we also have by assumption $\frac{\beta}{1-\gamma}<\frac{1-2\gamma}{1-\gamma}<1$, we can conclude that
$|V|=\Theta(N)$. Thus,
\[
\frac{w(V^{\rm even})}{N}
\sim
\frac{w(V^{\rm odd})}{N}
\rightarrow
\rho>0,
\]
where as before we write $w(A) = \sum_{i \in A} w(i)$ for any $A\subset [N]$.
We also recall from~\eqref{eq:total_weight} that $w([N]) \sim \frac{\beta N}{(1-\gamma)^2} = \Theta(N)$
and finally for the large degree sets, we get
\[
w(I^{\rm even})
\sim w(I^{\rm odd})
\sim \frac{1}{2} \sum_{k \in I}\frac{\beta}{1-\gamma}\left( \frac{N}{k}\right)^\gamma
\sim \frac{\beta }{2(1-\gamma)^2} N^\gamma K_\gamma^{1-\gamma}.
\]
The number of edges from $V^{\rm even}$ to $I^{\rm even}$ in the MNR model is Poisson distributed with mean
\[
\frac{w(V^{\rm even})w(I^{\rm even})}{w([N])}
=\Theta \left(
\frac{N \cdot N^\gamma K_\gamma^{1-\gamma}}{N}
\right)
=\Theta \left(
\sqrt{N}\log^{1-\gamma}N
\right)
\]
and similarly for $V^{\rm odd}$ to $I^{\rm odd}$, so by Poisson concentration we have $\Theta_{\mathbb{P}}\left(
\sqrt{N}\log^{1-\gamma}N
\right)$ edges between each pair. However, for any particular $v \in V$ and ${\rm par} \in \{{\rm odd}, {\rm even}\}$, the number of edges between $v$ and $I^{\rm par}$ has mean
\[
\frac{w(v)w(I^{\rm par})}{w([N])}
\leq
\frac{1 \cdot \frac{\beta }{2(1-\gamma)^2} N^\gamma K_\gamma^{1-\gamma}}{\frac{\beta }{(1-\gamma)^2}N}
=
\frac{\log^{1-\gamma} N}{2\sqrt{N}}
\]
so the probability that this $v$ receives more than one edge from $I^{\rm par}$ is bounded by
\[
O\left(
\frac{\log^{2-2\gamma} N}{N}
\right)
\]
and hence by a union bound we will see only $O_{\mathbb{P}}^{\log N}(1)$ such instances. Further, any vertex $v \in V$ has $\operatorname{d} (v) \preceq \operatorname{Pois}(1)$ and so by the union bound
\[
\max_{v \in V} \operatorname{d} (v)=O_{\mathbb{P}}\left(
\log N
\right).
\]
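The bound on repeated edges uses the elementary Poisson tail estimate: for $X \sim {\rm Pois}(\mu)$,
\[
\mathbb{P}(X \geq 2)=1-e^{-\mu}(1+\mu)\leq \frac{\mu^2}{2},
\]
which with $\mu = \frac{\log^{1-\gamma}N}{2\sqrt{N}}$ gives the order $O\left( \log^{2-2\gamma} N/N \right)$ claimed above.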
Thus, of the $\Theta_{\mathbb{P}}\left(
\sqrt{N}\log^{1-\gamma}N
\right)$ total edges, only $O_{\mathbb{P}}^{\log N}(1)$ vertices in $V^{\rm par}$ receive more than one edge, each of them receiving at most $O_{\mathbb{P}}^{\log N}(1)$ edges, so we conclude that $\Theta_{\mathbb{P}}\left(
\sqrt{N}\log^{1-\gamma}N
\right)$ vertices in $V^{\rm par}$ receive exactly one edge. Denote by $\mathcal{E} \subset V^{\rm even}$ and $\mathcal{O} \subset V^{\rm odd}$ the sets of vertices connected by such a unique edge.
For the final stage of the construction, conditionally on the above, each vertex $o \in \mathcal{O}$ satisfies
\[
e(o,\mathcal{E}) \succeq \operatorname{Bin}\left(
\Theta_{\mathbb{P}}\left(
\sqrt{N}\log^{1-\gamma}N
\right),
\frac{\beta}{N}
\right)
\]
so we find a single edge into $\mathcal{E}$ with probability $\omega_{\mathbb{P}}\left(1/{\sqrt{N}}\right)$, and each vertex incident to this edge has no further edges with probability at least $1/e$, so that both endpoints have no further edges with probability at least $1/e^2$. Hence, amongst the $|\mathcal{O}|=\omega_{\mathbb{P}}\left(\sqrt{N}\right)$ trials we find, with high probability, an adjacent pair of vertices each of degree $2$.
We have thus found a path $\mathcal{P}$ connecting
$I^{\rm odd} \leftrightarrow
V^{\rm odd} \leftrightarrow
V^{\rm even} \leftrightarrow
I^{\rm even}$ in the MNR model.
Since these sets are pairwise disjoint, we know that after collapsing multi-edges to obtain the SNR model the path will still exist, and it then satisfies the criteria for our ``double star''.
\end{proof}
\section{Voter models}\label{voter_models_section}
In this section, we will prove the two main theorems about the asymptotics of the consensus time. In Section~\ref{ssec:proof_classical}, we will consider the classical voter model and prove Theorem~\ref{class_subcrit}. Then, in Section~\ref{ssec:proof_discursive} we will prove Theorem~\ref{obl_subcrit}
for the discursive voter model.
Throughout we will use the duality of the voter model to a system of coalescing random walks as described in Section~\ref{sec:duality}. We will also use the notation regarding various random walks statistics from that section.
\subsection{Consensus time for the classical voter model}\label{ssec:proof_classical}
In this section, we will consider the classical voter model as defined in Definition~\ref{def:voter}(a). Throughout, let $G_N \in \mathcal{G}_{\beta,\gamma}$ for $\beta +2 \gamma < 1$ be the underlying graph.
We note that this version of the voter model fits into the general setting of a $Q$-voter model of Section~\ref{sec:duality} if for $\theta \in \mathbb{R}$ we consider $Q = Q^\theta$ defined as
\begin{equation}\label{eq:Q_theta} Q^\theta(i,j) = \operatorname{d}(i)^{\theta -1} \quad \mbox{if } i \sim j \mbox{ in } G_N. \end{equation}
As before, we write $\p^\theta$ for the law of (and $\E^\theta$ for the expectation with respect to) the coalescing random walks with generator $Q^\theta$.
If we denote by $\mathscr{C}_1, \ldots, \mathscr{C}_k$ the connected components of $G_N$, then these also correspond to the irreducible components of the Markov chain with generator $Q^\theta$. So if we let $\pi = (\pi(z) , z \in V(G_N))$ be defined via
\[ \pi(z) = \frac{\operatorname{d}(z)^{1-\theta}}{\sum_{y \in \mathscr{C}_j} \operatorname{d}(y)^{1-\theta}} , \quad \mbox{ for } z \in \mathscr{C}_j, \]
for $j \in [k]$,
then $\pi|_{\mathscr{C}_j}$ is the invariant measure of the $Q^\theta$ Markov chain restricted to $\mathscr{C}_j$.
Before the main proof, we show an elementary bound on the meeting time of two independent random walks, when the component contains a star, i.e.\
if there exists a vertex $k$ with a set of neighbours $L_k$, each of degree $1$
(compare Proposition~\ref{prop:stars_and_leaves}).
\begin{lemma}\label{le:meeting_star}
Let $k \in [N]$ be such that $L_k$, the set of its neighbours of degree $1$, is non-empty.
Let $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ be independent Markov chains on $\mathscr{C}(k)$ with generator $Q^\theta$.
Then, for the product chain observed on $\{ k \} \cup L_k$ (as defined before Theorem~\ref{partial_meeting}), we have
\[ t_{\rm meet}^\pi (\{ k \} \cup L_k) \leq \frac{3+\operatorname{d}(k)^\theta}{2} . \]
\end{lemma}
\begin{proof}
Let $S_t $ count how many of the two walkers are currently in the leaf set $L_k$. Then $(S_t)_{t \geq 0}$ is a Markov chain on $\{0,1,2\}$ with transition rates $(s_{i j})$, where in particular
\[ s_{2 1}=2,\quad s_{10} \geq 1 ,\quad s_{1 2} = |L_k| \operatorname{d}(k)^{\theta-1}\leq \operatorname{d}(k)^{\theta}, \]
using that $|L_k| \leq \operatorname{d}(k)$.
Now, note that $S_t = 0$ implies that $\tau_{\rm meet} \leq t$. In particular, if $T_i = \inf\{ t \geq 0\, : \, S_t = i\}$ for $i \in \{0,1,2\}$, then we have that $T_0 \geq \tau_{\rm meet}$.
From the explicit transition rates, we can see that $\mathbb{E}_2(T_1)=\frac{1}{2}$ and if we write $s(1) = s_{10} + s_{12}$, then
\[ \mathbb{E}_1(T_0)=\frac{1}{s(1)}+\frac{s_{12}}{s(1)}\left( \frac{1}{2}+\mathbb{E}_1(T_0) \right) \, \]
so that, rearranging and using $s(1)-s_{12}=s_{10}\geq 1$,
\[ \mathbb{E}_1(T_0)=\frac{1+{s_{12}}/{2}}{s_{10}}\leq 1+\frac{s_{12}}{2} .
\]
Since $\E_2(T_0) = \E_2(T_1) + \E_1(T_0)$ by the strong Markov property, we conclude that
\[
\sup_{v,w \in \{k\} \cup L_k} \E_{(v,w)} (\tau_{\rm meet})
\leq \max\{ \E_1(T_0) , \E_2(T_0) \}
\leq \frac{1}{2}+1+\frac{s_{12}}{2}\leq \frac{3+\operatorname{d}(k)^\theta}{2},
\]
as claimed.
\end{proof}
\begin{proof}[Proof of Theorem \ref{class_subcrit}] We will start by showing the {\bf upper bounds}.
For the cases $\theta \geq 1$, we use that by Lemma~\ref{binary_lower_bound} and Proposition~\ref{prop:coal} we can bound
\[\E_{\mu_u} ( \tau_{{\rm cons}} \, |\, G_{N}) \leq e(2+\log N)t_{\rm hit}(G_N),\]
where $t_{\rm hit}(G_N) = \sup_{j \in [k]} t_{\rm hit}(\mathscr{C}_j)$ for $\mathscr{C}_1, \ldots, \mathscr{C}_k$ the components of $G_N$. Note in particular that the right hand side is still random and the expectation is only over the random walks.
We recall that the random walk associated to the classical voter model has transition rates
$Q^\theta(x,y) = \operatorname{d}(x)^{\theta-1}\mathbbm{1}_{x\sim y}$ and $\pi(x) \propto \operatorname{d}(x)^{1-\theta}$. In particular, for any component $\mathscr{C} \in {\rm Comp}(G_N)$ the conductances as defined in~\eqref{conductance_definition} are
\[ c(xy) = \pi(x) Q^{\theta}(x,y) = \frac{1}{\sum_{z \in \mathscr{C}} \operatorname{d}(z)^{1- \theta}} \mathbbm{1}_{\{ x \sim y \}} , \quad
\mbox{for any } x,y \in \mathscr{C}. \]
Hence, by Proposition~\ref{prop:max_resistance}, we have for any component $\mathscr{C}$,
\begin{equation}\label{eq:upper_hit} t_{{\rm hit}}(\mathscr{C}) \leq \diam (\mathscr{C}) \sum_{z \in \mathscr{C}} \operatorname{d}(z)^{1- \theta} . \end{equation}
Because $\theta \geq 1$, we have that
$\sum_{z \in \mathscr{C}} \operatorname{d}(z)^{1-\theta} \leq |\mathscr{C}|$. Therefore, by Proposition~\ref{prop:big_sum_of_degrees} and Proposition~\ref{prop:diameter}, we get that
\[ \sup_{\mathscr{C} \in {\rm Comp}(G_N)} t_{{\rm hit}}(\mathscr{C})
\leq
\max_{\mathscr{C} \in {\rm Comp}(G_N)}\diam (\mathscr{C})
\times
\max_{\mathscr{C} \in {\rm Comp}(G_N)}|\mathscr{C}|
=
O_{\p}^{\log N}(N^\gamma)
, \]
which completes the upper bound for $\theta \geq 1$.
For $\theta \leq 0$, we first deal with the small components, where we recall that the vertex set of the `small' components is defined as
\[
V_{\rm small}:=[N]\setminus V_{\rm big},
\qquad
\mbox{where }
V_{\rm big}:= \bigcup_{k \leq K_\gamma} V \left( \mathscr{C}(k) \right)
\]
and $K_\gamma = N^\frac{1-2\gamma}{2-2\gamma}\log N$.
By Proposition~\ref{prop:small_sum_of_degrees}, we know that
\[
\max_{k \in V_{\rm small}}
\sum_{x \in \mathscr{C}(k)} \operatorname{d}(x)
=O_{\mathbb{P}}^{\log N}\left(N^{\frac{\gamma}{2-2\gamma}}\right).
\]
In particular, we get from~\eqref{eq:upper_hit} using $\sum_{i} x_i^p \leq (\sum_i x_i)^p$ for any $p\geq1$ and $x_i\geq 0$ that
\[\begin{aligned}
\max_{ k \in V_{\rm small}} t_{\rm hit}(\mathscr{C}(k))
& \leq \diam (G_N) \max_{ k \in V_{\rm small}} \sum_{x \in \mathscr{C}(k)} \operatorname{d}(x)^{1-\theta}\\
& \leq \diam (G_N)
\max_{ k \in V_{\rm small}} \Big(\sum_{x \in \mathscr{C}(k)} \operatorname{d}(x)\Big)^{1- \theta}
=O_{\mathbb{P}}^{\log N}\left(N^{\frac{\gamma(1-\theta)}{2-2\gamma}}\right),
\end{aligned} \]
where we also used Proposition~\ref{prop:diameter} to bound the diameter.
To bound the consensus time on large components, we use that by Proposition~\ref{tree_meeting_theorem}
for any $k \leq K_\gamma$,
\[ t_{\rm meet}(\mathscr{C}(k)) \leq 189 \, \frac{t_{\rm hit}(k)}{\pi(k)} , \]
and find a suitable upper bound on the right hand side, which in turn gives us by Lemma~\ref{binary_lower_bound} and Proposition~\ref{prop:coal} an upper bound on $\E_{\mu_u} ( \tau_{\rm cons}(\mathscr{C}(k)) \, |\, G_N)$.
In order to bound the invariant measure, we note that since $\theta \leq 0$, we have from Propositions \ref{prop:big_sum_of_degrees} and~\ref{prop:star_degrees} that
\[ \min_{k \leq K_\gamma}\pi(k) = \min_{k \leq K_\gamma} \frac{\operatorname{d}(k)^{1-\theta}}{\sum_{z \in \mathscr{C}(k)} \operatorname{d}(z)^{1-\theta} }
\geq \min_{k \leq K_\gamma} \Big( \frac{\operatorname{d}(k) }{ \sum_{z \in \mathscr{C}(k)} \operatorname{d}(z) }\Big)^{1-\theta}= \Omega_{\p}^{\log N}(1) . \]
In order to bound the hitting time $t_{\rm hit}(k)$ we apply the same argument as for the small component, but
for the random walk restricted to each branch of $\mathscr{C}(k)$ (see also Definition~\ref{branch_defn} above for the formal definition of a branch).
The bound on the sum of degrees comes from Lemma~\ref{le:subcritical_branch_control}.
Together, we obtain that
\[ \sup_{k \leq K_\gamma} t_{\rm meet}(\mathscr{C}(k)) = O_{\mathbb{P}}^{\log N}\Big(N^{\frac{\gamma(1-\theta)}{2-2\gamma}}\Big) . \]
Combined with the bound on the small components, this completes the upper bound in the case $\theta \leq 0$.
We complete the upper bounds by showing for $\theta \in (0,1)$ that
\begin{equation}\label{eq:mid_upper_bd}
\max_{k \in [N]} t_{\text{\textnormal{meet}}}\left( \mathscr{C}(k) \right)=O_{\mathbb{P}}^{\log N}\left( N^{\gamma\theta} + N^\frac{\gamma}{2-2\gamma} \right).
\end{equation}
The upper bound on the consensus time then follows by Proposition~\ref{prop:coal} and by noting that in each of the two different regimes
one of the summands dominates.
For $k \in V_{\rm small}$, we use a similar strategy as above and obtain by~\eqref{eq:upper_hit} that
\[ t_{\rm hit}(\mathscr{C}(k) ) \leq \diam(\mathscr{C}(k)) \sum_{z \in \mathscr{C}(k)} \operatorname{d}(z)^{1-\theta}
\leq \diam(\mathscr{C}(k)) \sum_{z \in \mathscr{C}(k)} \operatorname{d}(z), \]
which, combining Proposition~\ref{prop:small_sum_of_degrees} and Proposition~\ref{prop:diameter}, is seen to be
$O_{\mathbb{P}}^{\log N} \left(N^{\frac{\gamma}{2 -2 \gamma}}\right)$ uniformly in $k \in V_{\rm small}$.
For the bound on the large components, define for $k \leq K_\gamma$ the set $L_k$
as the neighbours of $k$ that have degree $1$. By Proposition~\ref{prop:leaf_counts} we have that
\begin{equation}\label{eq:number_star}
\min_{k \leq K_\gamma} \frac{|L_k|}{\operatorname{d}(k)}=\Omega_{\mathbb{P}}(1).
\end{equation}
Hence, since all vertices in $L_k$ have degree $1$, we obtain
\[ \pi(L_k \cup \{ k\}) \geq \frac{\sum_{x \in L_k} \operatorname{d}(x)^{1-\theta}}{\sum_{x \in \mathscr{C}(k)} \operatorname{d}(x)^{1-\theta}}
\geq \frac{|L_k|}{\sum_{x \in \mathscr{C}(k)} \operatorname{d}(x) }
. \]
Thus by~\eqref{eq:number_star} and Proposition \ref{prop:big_sum_of_degrees} we have
\begin{equation}\label{eq:low_pi}
\min_{k \leq K_\gamma} \pi(L_k \cup\{k\})
=
\Omega \left( \min_{k \leq K_\gamma } \frac{\operatorname{d}(k)}{\sum_{x \in \mathscr{C}(k)} \operatorname{d}(x) } \right)
=
\Omega_{\mathbb{P}}\left( \frac{1}{\log N} \right).
\end{equation}
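The stationary weights used above are of the form $\pi(x) \propto \operatorname{d}(x)^{1-\theta}$. As a quick sanity check, this measure is reversible for the classical chain if each walker at $x$ jumps along every incident edge at rate $\operatorname{d}(x)^{\theta-1}$ (that is, total rate $q(x)=\operatorname{d}(x)^\theta$ split uniformly over the $\operatorname{d}(x)$ neighbours, which is my reading of the model; the small test graph below is an arbitrary choice, not one from the paper):

```python
import numpy as np

# Illustrative check (graph and theta values are made up): for per-edge rate
# q(x,y) = d(x)**(theta-1), the measure pi(x) ~ d(x)**(1-theta) satisfies
# detailed balance, since pi(x)*q(x,y) = d(x)**(1-theta) * d(x)**(theta-1) = 1
# (up to normalisation) for every oriented edge.

def detailed_balance_gap(adj, theta):
    """Max |pi(x)q(x,y) - pi(y)q(y,x)| over edges of the graph `adj`."""
    deg = adj.sum(axis=1)
    pi = deg ** (1.0 - theta)
    pi /= pi.sum()
    gap = 0.0
    n = len(deg)
    for x in range(n):
        for y in range(n):
            if adj[x, y]:
                flow_xy = pi[x] * deg[x] ** (theta - 1.0)
                flow_yx = pi[y] * deg[y] ** (theta - 1.0)
                gap = max(gap, abs(flow_xy - flow_yx))
    return gap

# small "star with a tail": hub 0 joined to leaves 1..4, leaf 4 joined to 5
adj = np.zeros((6, 6), dtype=int)
for leaf in (1, 2, 3, 4):
    adj[0, leaf] = adj[leaf, 0] = 1
adj[4, 5] = adj[5, 4] = 1

for theta in (-1.0, 0.0, 0.5, 2.0):
    assert detailed_balance_gap(adj, theta) < 1e-12
```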
Because we have a large stationary mass in $L_k \cup \{k\}$, Theorem~\ref{partial_meeting} gives us that
\begin{equation}\label{eq:partial}
t_{\text{\textnormal{meet}}}(\mathscr{C}(k))
=O_{\mathbb{P}}^{\log N}\left(
t_{\rm meet}^\pi(L_k \cup \{k\})
+
t_{\rm hit}
\left( k \right)
\right),
\end{equation}
where we recall that $t_{\rm meet}^\pi(L_k \cup \{k\})$ is the meeting time for the Markov chain observed on $\{k\} \cup L_k$ (see also the definition just before Theorem~\ref{partial_meeting}). We obtain by Lemma \ref{le:meeting_star} that
\[
\max_{k \leq K_\gamma} t^\pi_{\rm meet}\left( \{k\} \cup L_k \right) \leq
\max_{k \leq K_\gamma} \frac{3+\operatorname{d}(k)^\theta}{2}
=O_{\mathbb{P}}(N^{\gamma \theta}) .
\]
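The order of the star bound $(3+\operatorname{d}(k)^\theta)/2$ can be illustrated with a small Gillespie-style Monte Carlo. This is my own sketch, not code from the paper: the star size, $\theta$, and trial count are arbitrary, and I assume the classical-model dynamics in which a walker at $x$ jumps at total rate $\operatorname{d}(x)^\theta$ to a uniformly chosen neighbour.

```python
import random

def star_meeting_time(n, theta, rng):
    """One sample of the meeting time of two independent continuous-time
    walkers on a star with hub 0 and leaves 1..n.  A walker at x jumps at
    rate d(x)**theta to a uniform neighbour (assumed classical-model rates)."""
    pos = [1, 2]                   # start on two distinct leaves
    t = 0.0
    hub_rate = float(n) ** theta   # hub has degree n
    leaf_rate = 1.0                # leaves have degree 1
    while pos[0] != pos[1]:
        rates = [hub_rate if p == 0 else leaf_rate for p in pos]
        total = rates[0] + rates[1]
        t += rng.expovariate(total)
        mover = 0 if rng.random() < rates[0] / total else 1
        # a hub walker jumps to a uniform leaf; a leaf walker jumps to the hub
        pos[mover] = rng.randrange(1, n + 1) if pos[mover] == 0 else 0
    return t

rng = random.Random(1)
n, theta, trials = 50, 1.0, 2000
mean_t = sum(star_meeting_time(n, theta, rng) for _ in range(trials)) / trials
bound = (3 + n ** theta) / 2   # bound of the form (3 + d(k)**theta)/2
assert 5 < mean_t < bound
```

For $n=50$ and $\theta=1$ a first-step analysis of the two-state chain (both walkers on leaves vs.\ one on the hub) gives an exact mean of $53/4 = 13.25$, comfortably below the bound $26.5$, which the simulation reproduces.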
Moreover, by Lemma \ref{le:subcritical_branch_control}
\[\begin{aligned}
\max_{k \leq K_\gamma}t_{\text{hit}}(k)
& \leq
\max_{k \leq K_\gamma} \max_{B \in \mathcal{B}(\mathscr{C}(k))} \operatorname{diam}(B) \sum_{v \in B} \operatorname{d}(v)^{1-\theta}\\
& \leq \max_{k \leq K_\gamma}
\operatorname{diam}(G_N) \max_{B \in \mathcal{B}(\mathscr{C}(k))} \sum_{v \in B} \operatorname{d}(v)
=O_{\mathbb{P}}^{\log N}\left( N^{\frac{\gamma}{2-2\gamma}} \right).
\end{aligned}
\]
Substituting both bounds into~\eqref{eq:partial}, we obtain
\[ \max_{k \leq K_\gamma} t_{\rm meet}(\mathscr{C}(k) )
= O_{\mathbb{P}}^{\log N} \left(N^{\gamma \theta}+N^{\frac{\gamma}{2-2\gamma}} \right). \]
By combining this with the bound on the small components, we have completed the proofs for the upper bounds in all cases.
We continue with the {\bf lower bounds}.
For the first part, we suppose that $\theta >0$ and consider the consensus time on $\mathscr{C}(1)$. By Lemma~\ref{binary_lower_bound} and Proposition \ref{prop:coal}
\[ \mathbb{E}^{\theta}_{\mu_u}(\tau_{\text{\textnormal{cons}}}(\mathscr{C}(1)) \, |\, G_N)
\geq 2u(1-u)t_{\text{meet}}(\mathscr{C}(1))
\geq 2u(1-u)t_{\text{meet}}^\pi(\mathscr{C}(1)), \]
where the last inequality follows from the definitions. To bound the right hand side, we recall from Proposition~\ref{lower_meeting_bound}
that
\begin{equation}\label{eq:L2} t_{\text{meet}}^\pi(\mathscr{C}(1)) \geq \frac{( 1- \sum_{x \in \mathscr{C}(1)} \pi(x)^2)^2}{4
\sum_{x \in \mathscr{C}(1)} q(x) \pi(x)^2 } . \end{equation}
In order to find a lower bound on the right hand side, we first bound the maximum of the invariant distribution.
If $\theta \in (0,1)$, then we have by Proposition~\ref{prop:star_degrees}
\[\max_{v \in \mathscr{C}(1)} \pi(v)
= \frac{\max_{v \in \mathscr{C}(1)} \operatorname{d}(v)^{1 - \theta}}{ \sum_{z \in \mathscr{C}(1)}
\operatorname{d}(z)^{1-\theta}} \leq \frac{ |\mathscr{C}(1)|^{1 - \theta}}{ |\mathscr{C}(1)|} \leq \operatorname{d}(1)^{-\theta}
= O_{\mathbb{P}} (N^{-\gamma \theta} ) . \]
Similarly, if $\theta \geq 1$ we recall the leaf neighbours $L_1$ from Proposition~\ref{prop:leaf_counts},
\[ \max_{v \in \mathscr{C}(1)} \pi(v)
\leq \frac{1}{ \sum_{z \in \mathscr{C}(1)}
\operatorname{d}(z)^{1-\theta}}
\leq \frac{1}{ |L_1| }
= O_{\mathbb{P}} (N^{-\gamma}) . \]
In particular, in both cases we have
\[ \sum_{x \in \mathscr{C}(1)}\pi(x)^2 \leq \max_{x \in \mathscr{C}(1)} \pi(x) \sum_{v \in \mathscr{C}(1)} \pi(v) = o_{\mathbb{P}}(1). \]
To estimate the denominator in~\eqref{eq:L2}, we note that
for $\theta \geq 1$,
\[\begin{aligned} \sum_{v \in \mathscr{C}(1)} q(v) \pi(v)^2 & =\frac{\sum_{v \in \mathscr{C}(1)}\operatorname{d}(v)^{\theta}\operatorname{d}(v)^{2-2\theta}}{\left( \sum_{v \in \mathscr{C}(1)}\operatorname{d}(v)^{1-\theta} \right)^2}
\leq \frac{\sum_{v \in \mathscr{C}(1)}\operatorname{d}(v) }{|L_1|^2}\\
& = O_{\mathbb{P}}^{\log N} \Big( \frac{N^\gamma}{N^{2 \gamma} } \Big)
= O_{\mathbb{P}}^{\log N} (N^{-\gamma}), \end{aligned}
\]
where we used Proposition~\ref{prop:leaf_counts} for the denominator and Proposition~\ref{prop:big_sum_of_degrees} for the numerator.
By the same results and Lemma \ref{le:empirical_moment}, we have for
$\theta \in (0,1)$,
\[ \sum_{v \in \mathscr{C}(1)} q(v) \pi(v)^2 \leq \frac{ \sum_{v \in \mathscr{C}(1)}\operatorname{d}(v)^{2- \theta} }{\big( \sum_{v \in \mathscr{C}(1)}\operatorname{d}(v)^{1-\theta} \big)^2} = O^{\log N}_{\mathbb{P}} \left( \frac{N^{(2-\theta)\gamma}}{N^{2 \gamma} } \right)
= O^{\log N}_{\mathbb{P}} ( N^{- \theta \gamma}) .
\]
Hence, we obtain from~\eqref{eq:L2} for $\theta > 0$
\begin{equation}\label{eq:low_bd_1} t_{\text{meet}}^\pi(\mathscr{C}(1)) = \left\{ \begin{array}{ll} \Omega^{\log N}_{\mathbb{P}} ( N^\gamma) & \mbox{if } \theta \geq 1 ,\\ \Omega^{\log N}_{\mathbb{P}} (N^{\gamma \theta}) &\mbox{if } \theta \in (0,1). \end{array} \right. \end{equation}
For the second part of the lower bound, we use a component that contains a sufficiently large ``double star'' structure and consider parameters $\theta<1$. More precisely,
by Proposition~\ref{prop:existence_simple_double_star}, with high probability, there exists a tree component that contains two adjacent vertices $x$ and $y$ such that
\begin{equation}\label{simple_double_star_properties}
\operatorname{d}(x),\operatorname{d}(y)
\mbox{ and }\sum_{v \in \mathscr{C}(x)} \operatorname{d}(v)
\mbox{ are }
\Theta_{\mathbb{P}}^{\log N}\left( N^\frac{\gamma}{2-2\gamma} \right).
\end{equation}
Now, let $A_x$ be the set of vertices in $\mathscr{C}(x)$ that are closer to $x$ than to $y$, and $A_y$ the complement.
Then, we will use that by Proposition~\ref{conductance_theorem}
\begin{equation}\label{eq:cond_bound} \E_{\mu_u}(\tau_{\rm cons} \, |\, G_N) = \Omega \left( \frac{\pi(A_x)\pi(A_y)}{ \sum_{v \in A_x} \sum_{w \in A_y} c(vw) } \right) . \end{equation}
We start by estimating the term $\pi(A_x)\pi(A_y)$.
Note that for $\theta \in (0,1)$, we have that
\[ \operatorname{d}(x) \leq |A_x| \leq \sum_{v \in A_x} \operatorname{d}(v)^{1-\theta}
\leq \sum_{v \in \mathscr{C}(x) } \operatorname{d}(v)^{1-\theta} \leq
\sum_{v \in \mathscr{C}(x)} \operatorname{d} (v) , \]
and the same bounds hold when replacing $x$ by $y$.
Therefore, for $\theta \in (0,1)$, we obtain
\[
\frac{\operatorname{d}(x)}{\sum_{v \in \mathscr{C}(x)} \operatorname{d} (v)}
\leq
\frac{\sum_{v \in A_x} \operatorname{d} (v)^{1-\theta}}{\sum_{v \in A_y} \operatorname{d} (v)^{1-\theta}}
\leq
\frac{\sum_{v \in \mathscr{C}(x)} \operatorname{d} (v)}{\operatorname{d}(y)},
\]
so that we can deduce from~\eqref{simple_double_star_properties} that
\[ \pi(A_x) \pi(A_y)
=\left(
\sqrt{\frac{\sum_{v \in A_x} \operatorname{d} (v)^{1-\theta}}{\sum_{v \in A_y} \operatorname{d} (v)^{1-\theta}}}+\sqrt{\frac{\sum_{v \in A_y} \operatorname{d} (v)^{1-\theta}}{\sum_{v \in A_x} \operatorname{d} (v)^{1-\theta}}}
\right)^{-2}
= \Omega^{\log N}_{\mathbb{P}} (1) . \]
Furthermore, since $\mathscr{C}(x)$ is a tree, the denominator in~\eqref{eq:cond_bound} reduces to
\[
c(xy) = \frac{1}{\sum_{v \in \mathscr{C}(x)} \operatorname{d}(v)^{1-\theta} }
\leq
\frac{1}{\left| \mathscr{C}(x) \right|}=
O_{\mathbb{P}}^{\log N} \left( N^{- \frac{\gamma}{2-2\gamma}} \right)
.
\]
We finally consider the case $\theta \leq 0$ on this double star.
Again, we start by estimating $\pi(A_x) \pi(A_y)$. By Proposition \ref{prop:existence_simple_double_star} we have
\[
\sum_{v \in A_x}\operatorname{d}(v)^{1-\theta}
=O_{\mathbb{P}}\Big( N^\frac{\gamma(1-\theta)}{2-2\gamma}\Big).
\]
Since further $\operatorname{d}(x)=\Theta_{\mathbb{P}}(N^\frac{\gamma}{2-2\gamma})$ by~\eqref{simple_double_star_properties},
we also have that
\[ \sum_{v \in A_x}\operatorname{d}(v)^{1-\theta}
\geq \operatorname{d}(x)^{1-\theta} = \Omega_{\mathbb{P}}\Big( N^\frac{\gamma(1-\theta)}{2-2\gamma}\Big).
\]
The same bounds hold for $y$ and so by the same argument as above,
we have that $\pi(A_x)\pi(A_y)=\Omega_{\mathbb{P}}(1)$. Moreover,
\[
c(xy)=\frac{1}{\sum_{v \in \mathscr{C}(x)} \operatorname{d}(v)^{1-\theta} }
\leq \frac{1}{\operatorname{d}(x)^{1-\theta}}
= O_{\mathbb{P}}^{\log N} \Big( N^\frac{-\gamma(1-\theta)}{2-2\gamma} \Big).
\]
Combining the estimates on the stationary distribution and the conductance $c(xy)$,
we conclude from \eqref{eq:cond_bound} that
\begin{equation}\label{eq:low_bd_2}
\E_{\mu_u}(\tau_{\rm cons} \, |\, G_N) = \left\{ \begin{array}{ll} \Omega_{\mathbb{P}}^{\log N} ( N^\frac{\gamma}{2-2\gamma} )
& \mbox{if } \theta \in (0,1), \\
\Omega_{\mathbb{P}}^{\log N} ( N^\frac{\gamma(1-\theta)}{2-2\gamma} )
& \mbox{if } \theta \leq 0. \end{array} \right.
\end{equation}
Combining Equations~\eqref{eq:low_bd_2} and~\eqref{eq:low_bd_1} completes the proof of Theorem~\ref{class_subcrit} by giving all the required lower bounds.
\end{proof}
\subsection{Consensus time for the discursive voter model}\label{ssec:proof_discursive}
In this section, we will consider the discursive voter model as defined in Definition~\ref{def:voter}(b). This version of the voter model fits into the general setting of a $Q$-voter model of Section~\ref{sec:duality} if for $\theta \in \mathbb{R}$ we consider $Q = \mathbf{Q}^\theta$ defined as
\begin{equation}\label{eq:Q_theta_discursive} \mathbf{Q}^\theta(i,j) =
\frac{\operatorname{d}(i)^{\theta-1}+\operatorname{d}(j)^{\theta-1}}{2}
\quad \mbox{if } i \sim j \mbox{ in } G_N. \end{equation}
As before, we write $\mathbf{P}^\theta$ for the law of (and $\mathbf{E}^\theta$ for the expectation with respect to) the coalescing random walks with generator $\mathbf{Q}^\theta$.
If we denote by $\mathscr{C}_1, \ldots, \mathscr{C}_k$ the connected components of $G_N$, then define $\pi = (\pi(z) , z \in V(G_N))$ via
\[ \pi(z) = \frac{1}{|\mathscr{C}_j|} , \quad \mbox{ for } z \in \mathscr{C}_j, \]
for $j \in [k]$.
Then $\pi|_{\mathscr{C}_j}$, i.e.\ the uniform measure on $\mathscr{C}_j$, is the invariant measure of the $\mathbf{Q}^\theta$ Markov chain restricted to $\mathscr{C}_j$.
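The uniformity of the invariant measure follows because the generator in \eqref{eq:Q_theta_discursive} is symmetric in $i$ and $j$, and any symmetric rate matrix has the uniform measure as invariant. A short numerical sketch of this fact (the small graph and the $\theta$ values below are arbitrary illustrative choices):

```python
import numpy as np

# For the discursive chain, Q_theta(i,j) = (d(i)**(theta-1)+d(j)**(theta-1))/2
# for i ~ j is symmetric, so the uniform measure is invariant.
# The graph below is an arbitrary small example, not one from the paper.

def discursive_generator(adj, theta):
    """Generator matrix of the discursive chain on the graph `adj`."""
    deg = adj.sum(axis=1).astype(float)
    n = len(deg)
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                Q[i, j] = 0.5 * (deg[i] ** (theta - 1) + deg[j] ** (theta - 1))
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero
    return Q

# triangle 0-1-2 with a pendant path 2-3-4
adj = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1

for theta in (-0.5, 0.0, 1.0, 3.0):
    Q = discursive_generator(adj, theta)
    assert np.allclose(Q, Q.T)                       # generator is symmetric
    assert np.allclose(np.full(5, 1 / 5) @ Q, 0.0)   # uniform pi is invariant
```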
First we require another application of Theorem~\ref{partial_meeting}, which is simpler than for the classical voter model, but covers a wider range of cases.
\begin{lemma}\label{meeting_obl_subcrit} For $G_N \in \mathcal{G}_{\beta,\gamma}$ with $\beta + 2\gamma <1$, we have that
\[
t_{\text{\textnormal{meet}}}(G_N) = \max_{j \in [k]} t_{\rm meet}(\mathscr{C}_j) =
\begin{cases}
O^{\log N}_{\mathbb{P}}\left( N^\frac{\gamma}{2-2\gamma} \right) & \theta > \frac{3-4\gamma}{2-2\gamma} \\
O^{\log N}_{\mathbb{P}}\left( N^{\gamma(2-\theta)} \right) & 1 < \theta \leq \frac{3-4\gamma}{2-2\gamma} \\
O^{\log N}_{\mathbb{P}}\left( N^\gamma \right) & 2\gamma \leq \theta \leq 1\\
O^{\log N}_{\mathbb{P}}\left( N^\frac{\gamma(2-\theta)}{2-2\gamma} \right) & \theta < 2\gamma
\end{cases}
\]
\end{lemma}
\begin{proof}
By Lemma~\ref{le:subcritical_branch_control}, we can
work on the high probability set where all big components are trees.
Recall that $ K_\gamma:=N^\frac{1-2\gamma}{2-2\gamma} \log N$
and denote for any $k \leq K_\gamma$
by $L_k$ the set of degree $1$ vertices adjacent to $k$. By Proposition~\ref{prop:leaf_counts} and since the stationary distribution is uniform
\[
\min_{k \leq K_\gamma} \pi(L_k)= \min_{k \leq K_\gamma} \frac{|L_k|}{|\mathscr{C}(k)|}=\Omega^{\log N}_{\mathbb{P}}(1).
\]
Then by exchangeability, coalescence observed in $L_k$ is just complete graph (Wright-Fisher) coalescence. This is because a simultaneous move by both walkers gives the same probability to coalesce as a single move. Thus, coalescence occurs for the partially observed process at rate
\[
\frac{1+\operatorname{d}(k)^{\theta-1}}{|L_k|}
\]
and we conclude by Proposition~\ref{prop:leaf_counts}
\begin{equation}\label{eq:2703-1}
\max_{k \leq K_\gamma} t^\pi_{\text{\textnormal{meet}}}( L_k )
\leq \max_{k \leq K_\gamma} \frac{|L_k|}{1+\operatorname{d}(k)^{\theta-1}}
=
\begin{cases}
O_{\mathbb{P}}^{\log N}(N^{\gamma(2-\theta)}) & \theta>1, \\
O_{\mathbb{P}}^{\log N}(N^\gamma) & \theta \leq 1 .
\end{cases}
\end{equation}
Now, we let $\mathcal{S}$ be the collection of small components and branches in large components. If we denote by $P_{x,y}$ the set of paths between any vertices $x$ and $y$, then by Proposition~\ref{prop:max_resistance} we obtain
\[
\begin{split}
\max_{S \in \mathcal{S}} t_{\text{hit}}(S) &\leq \max_{S \in\mathcal{S}} \max_{x,y \in V(S)} \min_{P_{x,y}} \sum_{\{u,v\} \in E(P_{x,y})} \frac{2|S|}{\operatorname{d}(u)^{\theta-1}+\operatorname{d}(v)^{\theta-1}}\\
&\leq \max_{S \in\mathcal{S}} |S| \operatorname{diam}(S) \max_{v \in S} \left( \operatorname{d}(v)^{1-\theta} \right)\\
&\leq \max_{S \in\mathcal{S}} |S| \operatorname{diam}(G_N) \max_{v > K_\gamma} \left( \operatorname{d}(v)^{1-\theta} \right)\\
&=
\begin{cases}
O^{\log N}_{\mathbb{P}}\left(N^{\frac{(2-\theta)\gamma}{2-2\gamma}}\right) & \theta<1 ,\\
O^{\log N}_{\mathbb{P}}\left(N^{\frac{\gamma}{2-2\gamma}}\right) & \theta \geq 1,
\end{cases}
\end{split}
\]
where we used Lemma~\ref{le:subcritical_branch_control} and Propositions~\ref{prop:small_sum_of_degrees} and \ref{prop:diameter} in the last step.
If we combine this last bound with~\eqref{eq:2703-1} and apply
Theorem~\ref{partial_meeting}, then we obtain
\[
\begin{split}
\max_{k \leq K_\gamma}t_{\text{\textnormal{meet}}}(\mathscr{C}(k))
&=
\begin{cases}
O^{\log N}_{\mathbb{P}}\left( N^{\gamma(2-\theta)}+ N^\frac{\gamma}{2-2\gamma} \right) & \theta > 1 ,\\
O^{\log N}_{\mathbb{P}}\left( N^\gamma + N^\frac{\gamma(2-\theta)}{2-2\gamma} \right) & \theta \leq 1,
\end{cases}\\
&=
\begin{cases}
O^{\log N}_{\mathbb{P}}\left( N^\frac{\gamma}{2-2\gamma} \right) & \theta > \frac{3-4\gamma}{2-2\gamma} ,\\
O^{\log N}_{\mathbb{P}}\left( N^{\gamma(2-\theta)} \right) & 1 < \theta \leq \frac{3-4\gamma}{2-2\gamma} ,\\
O^{\log N}_{\mathbb{P}}\left( N^\gamma \right) & 2\gamma \leq \theta \leq 1,\\
O^{\log N}_{\mathbb{P}}\left( N^\frac{\gamma(2-\theta)}{2-2\gamma} \right) & \theta < 2\gamma,
\end{cases}
\end{split}
\]
which completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{obl_subcrit}]
The upper bound for all four cases follows immediately from
Lemma~\ref{binary_lower_bound}, Proposition \ref{prop:coal} and Lemma~\ref{meeting_obl_subcrit}, so it only remains to prove the lower bounds. For these, it will be very useful that {the stationary distribution $\pi$ on each component is always uniform.}
The first lower bound is for the case $\theta \geq \frac{3-4\gamma}{2-2\gamma}$, for which we must consider the \emph{long} double star component, whose existence is proved in Proposition \ref{prop:existence_long_double_star}. First, we note that the `separating edge' $\{ v_2,v_3\}$ on the
long double star has conductance
\[
c(v_2v_3)=O_{\mathbb{P}}^{\log N} \left( N^{-\frac{\gamma}{2-2\gamma}} \right).
\]
Moreover, by Propositions \ref{prop:big_sum_of_degrees} and \ref{prop:star_degrees}
\[
\operatorname{d}(v_1),\operatorname{d}(v_4) \mbox{ and } \left| \mathscr{C}( v_1 ) \right| \mbox{ are }
\Theta_{\mathbb{P}}^{\log N}\left( N^\frac{\gamma}{2-2\gamma} \right)
\]
which implies that we have $\Theta_{\mathbb{P}}^{\log N}(1)$ stationary mass on each side (by a similar argument as before). Hence by Proposition \ref{conductance_theorem} we have consensus time $\Omega_{\mathbb{P}}^{\log N} \left( N^{\frac{\gamma}{2-2\gamma}} \right).$
For the lower bound when $2\gamma<\theta < \frac{3-4\gamma}{2-2\gamma}$, we apply Proposition \ref{lower_meeting_bound} to $\mathscr{C}(1)$ to see that
\[
t_{\text{\textnormal{meet}}}^{\pi}( \mathscr{C}(1))
\geq \frac{(1-\sum_{v \in \mathscr{C}(1)} \pi(v)^2)^2}{4\sum_{v \in \mathscr{C}(1)} q(v) \pi(v)^2}
=\Theta_{\mathbb{P}}\left( \frac{|\mathscr{C}(1)|^2}{\sum_{v \in \mathscr{C}(1)} q(v)} \right)
=\Theta_{\mathbb{P}}\left(
\frac{|\mathscr{C}(1)|^2}{\sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)^\theta}\right).
\]
Recall the moment calculation in Lemma \ref{le:empirical_moment} to see that when $\theta \geq 1$
\[
\sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)^\theta
=
\Theta^{\log N}_{\mathbb{P}}\left( N^{\gamma\theta} \right),
\]
whereas for $\theta \in (2\gamma,1)$ we instead have by Proposition \ref{prop:big_sum_of_degrees}
\[
\sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)^\theta \leq \sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)=O^{\log N}_{\mathbb{P}}(N^\gamma ).
\]
Combining the statements yields
\[
\frac{|\mathscr{C}(1)|^2}{\sum_{v \in \mathscr{C}(1)} \operatorname{d}(v)^\theta}=\Omega^{\log N}_{\mathbb{P}}\left(
N^{(2-\theta)\gamma}\vee N^\gamma
\right) .
\]
By Lemma \ref{binary_lower_bound} and Proposition \ref{prop:coal} this expression gives a lower bound for the consensus time.
For the final case, when $\theta <2\gamma$, we require another double star component, but this time one without a connecting path, whose existence is stated in Proposition \ref{prop:existence_simple_double_star}. This double star is a tree structure with two adjacent ``star'' vertices $x$, $y$, where
\[
\operatorname{d}(x),\operatorname{d}(y)\mbox{ and } \left| \mathscr{C}( x ) \right| \mbox{ are }
\Theta_{\mathbb{P}}^{\log N}\left( N^\frac{\gamma}{2-2\gamma} \right) .
\]
Therefore, we have stationary mass of $\Theta_{\mathbb{P}}^{\log N}(1)$ in the vertices closest to $x$ and in those closest to $y$.
We note that
\[
\mathbf{Q}^\theta (x,y)=O_{\mathbb{P}}^{\log N}\left( N^\frac{\gamma(\theta-1)}{2-2\gamma} \right)
\]
and since the stationary distribution is uniform
\[
\pi(x)=O_{\mathbb{P}}^{\log N}\left( N^{-\frac{\gamma}{2-2\gamma}} \right).
\]
Thus, by the definition of the conductance~\eqref{conductance_definition}
\[
c(xy)=\pi(x)\mathbf{Q}^\theta (x,y)
=O_{\mathbb{P}}^{\log N}\left( N^\frac{\gamma(\theta-1)}{2-2\gamma} N^{-\frac{\gamma}{2-2\gamma}} \right).
\]
Combining the estimate on the stationary mass and the conductance, we have by Proposition \ref{conductance_theorem},
\[
t_{\rm meet}( \mathscr{C}(1))
=\Omega_{\mathbb{P}}^{\log N}\left(
N^{\frac{\gamma(2-\theta)}{2-2\gamma}}
\right),
\]
which gives the last remaining lower bound.
\end{proof}
{\bf Acknowledgements.}
We would like to thank Peter M\"orters
and Alexandre Stauffer
for many useful discussions.
JF is supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1.
\bibliographystyle{abbrv}
\section{MOTIVATION \& PREVIOUS WORK}
Since the first investigations of galaxy interactions
\citep{Holmberg41} using light bulbs, the use of numerical simulations
in galaxy formation has developed dramatically. Not only gravity but
also hydrodynamics and cooling are standard ingredients in the
sophisticated computer models studying galaxy formation and
interactions. In hierarchical structure formation, dark matter (DM)
halos merge to form larger halos while the gas infalls into these
potential wells \citep{Peebles68, White78}. \citeauthor{White78}
provided the basis for modern galaxy formation, in which small
galaxies form early and continuously merge into larger systems.
As more high redshift galaxies were observed in the following 10
years, \citet{White91} refined the theory to address the observed
characteristics in these galaxies. In their model, the halo
accumulates mass until the gas cools faster than a Hubble time, $t_{\rm{H}}$,
which usually occurs when atomic hydrogen line, specifically Ly$\alpha$,
cooling is efficient. This happens when the halo has $T_{\rm{vir}}$~$>$~10$^4$
K, where the cooling function sharply rises by several orders of
magnitude because the number of free electrons able to excite hydrogen
greatly increases at this temperature \citep{Spitzer78}. One can
define a cooling radius, $r_{\rm{cool}}$, in which the interior material is
able to cool within a Hubble time. Once the halo reaches this first
milestone, $r_{\rm{cool}}$~ increases through additional accretion and cooling.
A rapid baryonic collapse ensues when $t_{\rm{cool}}$~$\lower0.3em\hbox{$\,\buildrel <\over\sim\,$}$~$t_{\rm{dyn}}$~
\citep{Rees77}. The material accelerates towards the center, and its
density quickly increases. In the model discussed in White \& Frenk,
this collapse will halt when one of the following circumstances
occurs. First, angular momentum can prevent the gas from collapsing
further, and the system becomes rotationally supported. Afterwards,
this disc fragments and star formation follows. Alternatively, star
formation does not necessarily develop in a disc component, but the
energy released by stars during their main sequence and associated
supernovae (SNe) terminates the collapse.
These concepts have been applied also to the earliest galaxies in the
universe \citep{Mo98, Oh02, Begelman06, Lodato06}. Many studies
\citep[e.g.][]{Ostriker96, Haiman97b, Cen03, Somerville03, Wise05}
demonstrated that OB-stars within protogalaxies at $z > 6$ can produce
the majority of photons required for reionization. These
protogalaxies contain an ample gas reservoir for widespread star
formation, and the accompanying radiation propagates into and ionizes
the surrounding neutral intergalactic medium. Several high redshift
starburst galaxies have been observed that host ubiquitous star
formation at $z > 6$ \citep{Stanway03, Mobasher05, Bouwens06}.
Additionally, supermassive black holes (SMBH) more massive than 10$^8
M_\odot$ are present at these redshifts \citep[e.g.][]{Becker01, Fan02,
Fan06}. Finally, a reionization signature in the polarization of
the cosmic microwave background (CMB) at z $\sim$ 10 \citep{Page07}
further supports and constrains stellar and SMBH activity at high
redshifts.
The distinction between SMBH formation and a starburst galaxy should
depend on the initial ingredients (i.e. seed BHs, metallicity, merger
histories) of the host halo, but the evolution of various initial
states is debatable. It is essential to study the hydrodynamics of
high redshift halo collapses because the initial luminous object(s)
that emerges will dynamically and thermally alter its surroundings.
For example, as the object emits ultraviolet radiation, the nearby gas
heats and thus the characteristic Jeans mass increases, which may
inhibit the accretion of new gas for future star formation
\citep{Efstathiou92, Thoul96}.
The following work will attempt to clarify early galaxy formation by
focusing on protogalactic ($T_{\rm{vir}}$~$>10^4$ K) halos and following their
initial gaseous collapse. \citet[][hereafter Paper I]{Wise07a} studied
the virialization of protogalactic halos and the virial generation of
supersonic turbulence. In this paper, we address the gas dynamics of
the continued, turbulent collapse of a halo and study the evolution
and characteristics of the central regions. In later studies, we will
introduce the effects from primordial star formation and feedback and
H$_2$~cooling. The progressive introduction of new processes is
essential to understand the relevance of each mechanism. We argue
that our results may be relevant for scenarios that envisage SMBH
formation from gaseous collapses.
\citet{Loeb94} and \citet{Bromm03} conducted smoothed particle
hydrodynamics (SPH) simulations that focused on the collapse of
idealized, isolated protogalactic halos. The former group concluded
that a central $10^6 M_\odot$ SMBH must exist to stabilize the thin
gaseous disc that forms in their calculations. \citeauthor{Bromm03}
considered cases with and without H$_2$~chemistry and a background UV
radiation field. They observed the formation of a dense object with a
mass $M \sim 10^6 M_\odot$, or $\lower0.3em\hbox{$\,\buildrel >\over\sim\,$} 10\%$ of the baryonic matter, in
simulations with no or strongly suppressed H$_2$~formation. These
calculations without metal cooling and stellar feedback are useful to
explore the hydrodynamics of the collapse under simplified conditions.
\citet{Spaans06} analytically studied the collapse of 10$^4$ K halos
with an atomic equation of state. They find that $\sim$0.1\% of the
baryonic mass results in a pre-galactic BH with a mass $\sim$$10^5
M_\odot$. \citet{Lodato06} also found that $\sim$5\% of the gas mass in
$M = 10^7 M_\odot$ halos at $z \sim 10$ becomes unstable in a gaseous disc
and forms a SMBH. Recently, \citet{Clark07} studied the effects of
metal and dust cooling on the fragmentation of a collapsing
protogalactic core with varying metallicities ($Z = 0, 10^{-6},
10^{-5} Z_\odot$) and found the gas fragmenting ten times as much in
the $10^{-5} Z_\odot$ case as in the primordial case. In addition, the
fragments in the primordial case are biased toward larger masses.
A runaway gaseous collapse requires angular momentum transport so
material can inflow to small scales and form a central object. The
stability of rotating gaseous clouds has been the subject of much
interest over the last four decades and was thoroughly detailed by
the work of \citet[][hereafter EFE]{Chandra69}. In the 1960's and
1970's, studies utilizing virial tensor techniques
\citep[EFE;][]{Lebovitz67, Ostriker69, Ostriker73a}, variational
techniques \citep{LyndenBell67, Bardeen77}, and N-body simulations
\citep{Ostriker73b} all focused on criteria in which a stellar or
gaseous system becomes secularly or dynamically unstable. The first
instability encountered is an $m = 2$ bar-like instability that is
conducive for angular momentum transport in order to form a dense,
central object. \citet{Begelman06} investigated the conditions where
a gaseous disc in a pre-galactic halo would become rotationally
unstable to bar formation \citep[see][]{Christodoulou95a,
Christodoulou95b}. They adapt the ``bars within bars'' scenario
\citep{Shlosman89, Shlosman90}, which was originally formulated to
drive SMBH accretion from a gaseous bar that forms within a stellar
galactic bar, to the scenario of pre-galactic BH formation. Here a
cascade of bars form and transport angular momentum outwards, and the
system can collapse to small scales to form a quasistar with runaway
neutrino cooling, resulting in a central SMBH. The simulations
detailed below show how many central bar-like instabilities form.
In \S2, we describe our simulations and their cosmological context.
In the following section, we present our analysis of the halo collapse
simulations and investigate the structural and hydrodynamical
evolution, the initial halo collapse, rotational instabilities, and
the importance of turbulence. In \S4, we discuss the relevance of
angular momentum transport and rotational instabilities in early
galaxy and SMBH formation. There we also examine the applicability
and limitations of our results and desired improvements for future
simulations. Finally we conclude in the last section.
\begin{figure*}
\resizebox{\textwidth}{!}{\rotatebox{0}{\includegraphics*{f1a}}}
\resizebox{\textwidth}{!}{\rotatebox{0}{\includegraphics*{f1b}}}
\caption{An overview of the final state of the collapsing
protogalactic gas cloud. Slices of log gas density in cm$^{-3}$
are shown through the densest point in the halo. The
\textit{first} and \textit{third} rows show simulation A, and the
\textit{second} and \textit{fourth} rows show simulation B. The
columns in the top two rows from left to right are slices with a
field of view of 10 kpc, 1 kpc, 100 pc, and 1 pc. For the bottom
two rows, the fields of view are 0.01pc, 20AU, 0.2AU, and 4
R$_\odot$. Note that each color scale is logarithmic, spans 5
orders of magnitude, and is unique for every length scale.}
\label{fig:slices}
\end{figure*}
\section{SIMULATION TECHNIQUES}
To investigate protogalactic halo collapses in the early universe, we
utilize an Eulerian structured, adaptive mesh refinement (AMR),
cosmological hydrodynamical code, {\sl Enzo}\footnote{See
http://lca.ucsd.edu/software/enzo/} \citep{Bryan97, Bryan99,
OShea04}. {\sl Enzo}~solves the hydrodynamical equations using a second
order accurate piecewise parabolic method \citep{Woodward84, Bryan94},
while a Riemann solver ensures accurate shock capturing with minimal
viscosity. Additionally {\sl Enzo}~ uses a particle-mesh N-body method to
calculate the dynamics of the collisionless dark matter particles
\citep{Couchman91}. Regions of the simulation grid are refined by a
factor of two when one or more of the following conditions are met:
(1) Baryon density is greater than 3 times $\Omega_b \rho_0
N^{l(1+\phi)}$, (2) DM density is greater than 3 times
$\Omega_{\rm{CDM}} \rho_0 N^{l(1+\phi)}$, and (3) the local Jeans
length is less than 16 cell widths. Here $N = 2$ is the refinement
factor; $l$ is the AMR refinement level; $\phi = -0.3$ causes more
frequent refinement with increasing AMR levels, i.e. super-Lagrangian
behavior; $\rho_0 = 3H_0^2/8\pi G$ is the critical density; and the
Jeans length, $L_J = \sqrt{15kT/4\pi\rho G \mu m_H}$, where $H_0$,
$k$, T, $\rho$, $\mu$, and $m_H$ are the Hubble constant, Boltzmann
constant, temperature, gas density, mean molecular weight in units of
the proton mass, and hydrogen mass, respectively. The Jeans length
refinement ensures that we meet the Truelove criterion, which requires
the Jeans length to be resolved by at least 4 cells on each axis
\citep{Truelove97}. Runs with a refinement criterion of 4, 8, and 16
Jeans lengths have indistinguishable mass weighted radial profiles.
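The Jeans-length refinement criterion quoted above can be sketched numerically. In the sketch below, the CGS constants and the gas state ($T$, $n_{\rm H}$, $\mu$) are values I am supplying for illustration, not parameters from the simulations; the formula is the one quoted in the text, $L_J = \sqrt{15kT/4\pi\rho G \mu m_H}$.

```python
import math

# Physical constants in CGS (values supplied for illustration)
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
G   = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
m_H = 1.6735e-24     # hydrogen mass [g]
pc  = 3.086e18       # cm per parsec

def jeans_length(T, rho, mu):
    """L_J = sqrt(15 k T / (4 pi rho G mu m_H)), as quoted in the text."""
    return math.sqrt(15.0 * k_B * T / (4.0 * math.pi * rho * G * mu * m_H))

def needs_refinement(T, rho, mu, cell_width, n_cells=16):
    """Refine when the local Jeans length is resolved by fewer than
    `n_cells` cell widths (the criterion used in the simulations)."""
    return jeans_length(T, rho, mu) < n_cells * cell_width

# made-up gas state: T = 1000 K, n_H = 1e4 cm^-3, neutral atomic gas
T, mu = 1.0e3, 1.22
rho = 1.0e4 * mu * m_H
L_J = jeans_length(T, rho, mu)
assert 0.1 * pc < L_J < 10 * pc                          # of order a few pc
assert needs_refinement(T, rho, mu, cell_width=L_J / 4)       # too coarse
assert not needs_refinement(T, rho, mu, cell_width=L_J / 64)  # well resolved
```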
We conduct the simulations within the concordance $\Lambda$CDM model
with WMAP 1 year parameters of $h$ = 0.72, $\Omega_\Lambda$~= 0.73, $\Omega_M$~= 0.27,
$\Omega_b$~= 0.024$h^{-2}$, and a primordial scale invariant ($n$ = 1) power
spectrum with $\sigma_8$ = 0.9 \citep{Spergel03}. $h$ is the Hubble
parameter in units of 100 km s$^{-1}$ Mpc$^{-1}$. $\Omega_\Lambda$, $\Omega_M$, and
$\Omega_b$~are the fractions of critical energy density of vacuum energy,
total matter, and baryons, respectively. $\sigma_8$ is the rms of the
density fluctuations inside a sphere of radius 8$h^{-1}$ Mpc. Using
the WMAP1 parameters versus the significantly different WMAP third
year parameters \citep[WMAP3;][]{Spergel07} has no effect on the
evolution of individual halos that are considered here. At high
redshifts, statistical differences in structure formation within WMAP3
cosmology when compared to WMAP1 are primarily caused by less
small-scale power prescribed by the lower $\sigma_8$ value (0.9
$\rightarrow$ 0.76) and scalar spectral index $n$ (1 $\rightarrow$
0.96) of primordial density perturbations. This manifests in (1) a
time delay of $\sim$$40\%$ of the halo formation times for a given
virial mass \citep{Alvarez06}, (2) a corresponding lower halo
abundance for star-forming halos \citep{Gao07, Wang08}, and (3)
stronger clustering of halos \citep{Wang08}. The initial conditions
of this simulation are well-established by the primordial temperature
fluctuations in the cosmic microwave background (CMB) and big bang
nucleosynthesis (BBN) \citep[][and references therein]{Burles01,
Hu02}.
We perform two realizations in which we vary the box size and random
phase to study different scenarios and epochs of halo collapse. In
the first simulation, we setup a cosmological box with 1 comoving Mpc
on a side (simulation A), periodic boundary conditions, and a 128$^3$
top grid. The other simulation is similar but with a box side of 1.5
comoving Mpc and a different random phase (simulation B). We provide
a summary of the simulation parameters in Table \ref{tab:params}.
These volumes are adequate to study halos of interest because the
comoving number density of $>$10$^4$ K halos at $z=10$ is $\sim$6
Mpc$^{-3}$ according to an ellipsoidal variant of Press-Schechter
formalism \citep{Sheth02}. We use the COSMICS package to calculate
the initial conditions%
\footnote{To simplify the discussion, simulation A will always be
quoted first with the value from simulation B in parentheses.} at
$z$ = 129 (119) \citep{Bertschinger95, Bertschinger01}. It calculates
the linearized evolution of matter fluctuations. We first run a dark
matter simulation to $z=10$ and locate the DM halos using the HOP
algorithm \citep{Eisenstein98}. We identify the first dark matter
halo in the simulation that has $T_{\rm{vir}}$~$>$ 10$^4$ K and generate three
levels of refined, nested initial conditions with a refinement factor
of two that are centered around the Lagrangian volume of the halo of
interest. Each nested grid that contains a finer grid has 8 cells
between its boundary and its child grid. During the simulation, the
initial grids retain their positions and are always refined to their
initial resolution or higher. Their boundary conditions with each
other are treated as any other adaptive grid. The finest grid has an
equivalent resolution of a 1024$^3$ unigrid and a side length of 250
(300) comoving kpc. This resolution results in a DM particle mass of
30 (101) $M_\odot$ and an initial gas resolution of 6.2 (21) $M_\odot$. These
simulations continue from the endpoints of simulations A6 and B6 of
Paper I. Table \ref{tab:runs} lists the parameters of the most
massive halo in each realization. We evolve the system until the
central object has collapsed and reached our resolution limit. If we
were to follow the simulation to later times and focus on subsequently
collapsing halos, the nature of the gaseous collapses in these halos
should be similar because we do not consider any non-local feedback
processes that affect neighboring halos. At redshift 15, the mean
separation of halos with $T_{\rm{vir}}$~$>$ 10$^4$ K is 540 and 910 comoving
kpc in WMAP1 and WMAP3 cosmology, respectively, using Sheth-Tormen
formalism \citep{Sheth02}. Thus we argue that the results presented
here should be applicable to all high-redshift protogalactic
collapses.
There are 1.23 $\times$ 10$^8$ (498$^3$) and 7.40 $\times$ 10$^7$
(420$^3$) unique cells in the final simulation output of simulations A
and B, respectively. The finest grid then has a refinement level of
41 and a spatial resolution of roughly 0.01 of a solar radius in both
simulations.
{\sl Enzo}~employs a non-equilibrium chemistry model \citep{Abel97,
Anninos97}, and we consider six species in a primordial gas (H,
H$^{\rm +}$, He, He$^{\rm +}$, He$^{\rm ++}$, e$^{\rm -}$). Compton
cooling and heating of free electrons from the CMB and radiative losses
from atomic cooling are computed in the optically thin limit. At high
densities in the halo cores, the baryonic component dominates the
material. However, the discrete sampling of the DM potential by
particles can become inadequate, and artificial heating (cooling) of
the baryons (DM) can occur. To combat this effect, we smooth the DM
particles in cells with a width $<$0.24 ($<$0.36) comoving pc, which
corresponds to a refinement level of 15.
\vspace{1em}
\section{RESULTS}
In this section, we first describe how the halo collapses when it
starts to cool through Ly$\alpha$~line emission. Then we discuss the role
of turbulence in the collapse. Last we describe the rotational
properties and stability of the halo and central object.
\begin{figure*}
\includegraphics[width=0.48\textwidth]{f2a_color}
\hspace{0.025\textwidth}
\includegraphics[width=0.48\textwidth]{f2b_color}
\caption{Slices of electron fraction (\textit{left}) and temperature
(\textit{right}) of simulation A (\textit{top}) and B
(\textit{bottom}). The field of view is 1.5 kpc (\textit{left
panels}) and 200 pc (\textit{right panels}). The color scale is
logarithmic for electron fraction and linear for temperature in
units of 10$^3$ K. Supersonic turbulent shocks are ubiquitous
throughout the halos.}
\label{fig:tempElec1}
\end{figure*}
\subsection{Halo Collapse}
\label{sec:collapse}
Beginning at z = 21.1 in simulation A, the progenitor of the final
halo (\ifmmode{{\rm M_{vir}}}\else{M$_{\rm{vir}}$}\fi~= 4.96 $\times$ 10$^6 M_\odot$) starts to experience two major
mergers, which continue until z = 17.2 when \ifmmode{{\rm M_{vir}}}\else{M$_{\rm{vir}}$}\fi~= 2.36 $\times$
10$^7 M_\odot$. We define \ifmmode{{\rm M_{vir}}}\else{M$_{\rm{vir}}$}\fi~as the mass M$_{200}$ in a sphere that
encloses an average DM overdensity of 200. In simulation B, no recent
major merger occurs before the cooling gas starts to collapse, but it
accumulates mass by accretion and minor mergers.
Mergers disrupt the relaxed state of the progenitor and create
turbulence as these systems collide and combine. Additional turbulence
arises during virialization, as discussed in Paper I. More small
scale density fluctuations are thus present in simulation A. These
fluctuations penetrate farther into the potential well, down
to scales%
\footnote{Note that all masses concerning the collapse are gas mass,
not total mass. The central regions of r $<$ 10 pc are baryon
dominated so that $M_{{\rm enc,\; gas}} \approx M_{{\rm enc,\;
tot}}$. All length scales are in proper units unless otherwise
noted.} of 1 pc, compared to simulation B that contains nearly no
fluctuations between 1 and 50 pc. This is apparent in the $l$ = 1 pc
panels of Figure \ref{fig:slices} that show the density slices at
eight length scales covering 11 orders of magnitude. At the 10 kpc
scale, the filamentary large-scale structure is shown, and the
protogalactic halo exists at the intersection of these filaments. In
the next scale, we show the protogalactic gas cloud. At the 100 pc
scale, a thick disc is seen in simulation B. It is nearly edge-on and
oriented northwest to southeast in this view. In simulation B at 1
pc, a bar forms from a rotational secular instability that transports
angular momentum outwards. Similar instabilities exist at radii of
0.2 pc, 2700 AU, 17 AU, and 0.5 R$_\odot$ in simulation B. Simulation A
also undergoes secular bar instabilities at smaller scales, at radii of
150 AU, 1.3 AU, and 0.8 R$_\odot$, but shows a more disorganized medium at
larger scales.
The virial temperatures are now $\geq 10^4$ K, and therefore they can
efficiently cool by atomic hydrogen transitions. The gas fulfills the
critical condition for contraction, $t_{\rm{dyn}}$~$>$ $t_{\rm{cool}}$, and proceeds to
collapse continuously on approximately a dynamical time. We note that
this collapse and level of fragmentation are strongly influenced by
the magnitude of radiative cooling that the gas can achieve. Here we
present the case in which the gas cools without any external radiation
backgrounds or radiation trapping, which may alter the nature of the
collapse.
\begin{figure}[b]
\resizebox{\columnwidth}{!}{\includegraphics*{f3_color.eps}}
\caption{The ratio of the enclosed gas mass and Bonnor-Ebert mass
(eq. \ref{eqn:mbe}) for the final output {\em (black with
circles)} and selected previous times that are listed in the
legend. Simulation A (\textit{left}) and B (\textit{right}). For
values above the horizontal line at $M_{\rm{enc}} / M_{\rm{BE}} =
1$, the system is gravitationally unstable.}
\label{fig:mbe}
\end{figure}
\begin{figure*}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f4a_color}}}
\hspace{0.025\textwidth}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f4b_color}}}
\caption{Mass-weighted radial profiles at various times of (a) gas
mass enclosed, (b) number density, (c) mass-weighted temperature,
and (d) mass-weighted radial velocity for simulation A (\textit{left
panels}) and simulation B (\textit{right panels}). The quantities
in the left and right panels are plotted with respect to radius and
gas mass enclosed, respectively. In (b), the dashed line with
crosses is the dark matter density in units of $m_H$ cm$^{-3}$. In
(d), the dashed line with crosses is the negative of the sound speed
in the final output. The times in the legends correspond to time
before the end of the simulation.}
\label{fig:profilesA}
\end{figure*}
\begin{figure*}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f5a_color}}}
\hspace{0.025\textwidth}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f5b_color}}}
\caption{Same as Figure \ref{fig:profilesA} for the inner parsec of
simulation A (\textit{left panels}) and simulation B (\textit{right
panels}). The maximum AMR level is listed next to the times in
the legend. In simulation B, the local minima in radial velocities
at $2 \times 10^4$, 40, 0.3, and 0.01 $M_\odot$ occur as angular
momentum is transported outwards in secular bar-like
instabilities.}
\label{fig:profilesA3}
\end{figure*}
Figure \ref{fig:tempElec1} depicts slices of electron fraction and gas
temperature at scales of 200 and 1500 pc. At the larger scale, the
gas is heated both in virial shocks at $r \sim 600$ pc and in internal
turbulent shocks. Gas within the virial radius ranges from
$\sim$2000 K in cold inflows from filaments up to 30,000 K in
turbulent shocks. Electron fractions rise to 0.5\% because
of collisional ionizations behind the shocks. The majority of the
ionizations occur in the turbulent shocks inside \ifmmode{{\rm r_{vir}}}\else{r$_{\rm{vir}}$}\fi~where the
densities are greater and temperatures at the shocks are similar to
values in the virial shock. In addition, 84\% of the cooling
radiation originates in converging flows ($\nabla \cdot v < 0$). In
the inner 200 pc, turbulent shocks are widespread as seen in the
temperature variations. However, these are less pronounced than the
shocks at larger radii. In the central 50 pc, the gas becomes nearly
isothermal despite the low free electron fraction.
The halo collapses in two stages. We denote the beginning of the
first stage when $t_{\rm{dyn}}$~$>$ $t_{\rm{cool}}$~for the first time. The second
stage begins when the central object becomes gravitationally unstable.
1. \textit{Cooling stage}--- As mass infalls toward the center, the
increased cooling rate, which is $\propto nn_e$ until Ly$\alpha$~radiation
becomes trapped within the inner condensation at a density of $\sim 5
\times 10^8 \ifmmode{~{\rm cm^{-3}}}\else{~cm$^{-3}$}\fi$ \citep{Oh02}, catalyzes the collapse as atomic
line transitions convert kinetic energy to radiation. Here $n$ and
$n_e$ are the number density of baryons and electrons, respectively.
Although we do not treat the radiative effects of Ly$\alpha$, radiation
trapping from recombination lines cannot prevent the collapse
\citep{Rees77}. This first stage starts 520 (40) kyr before the last
output. The inner 100 pc have a steady decrease in electron fraction
that indicates atomic hydrogen cooling is now efficient in this
region, which can be seen in the 200 pc slices of Figure
\ref{fig:tempElec1}. However, only the gas within 1.5 (1.0) pc has
$t_{\rm{dyn}}$~$\lower0.3em\hbox{$\,\buildrel >\over\sim\,$}$ $t_{\rm{cool}}$~= 383 (100) kyr at this epoch.
2. \textit{Gravitationally unstable stage}--- This starts when the
central region becomes unstable to gravitational collapse.
\citet{Ebert55} and \citet{Bonnor55} investigated the stability of an
isothermal sphere with an external pressure $P_{ext}$ and discovered
that the critical mass (BE mass hereafter) for gravitational collapse
is
\begin{equation}
\label{eqn:mbe}
M_{{\rm BE}} = 1.18 \frac{c_s^4}{G^{3/2}} P_{ext}^{-1/2} .
\end{equation}
If we set $P_{ext}$ to the local pressure, then
\begin{equation}
M_{{\rm BE}} \approx 20 T^{3/2} n^{-1/2} \mu^{-2} \gamma^2 M_\odot ,
\end{equation}
where $\gamma$ = 5/3 is the adiabatic index. For both simulations,
this stage occurs between 10 and 100 kyr before we end the simulation.
We plot the ratio of the enclosed gas mass and BE mass in Figure
\ref{fig:mbe} for several epochs in the collapse. When the clump
becomes gravitationally unstable, the central 3.3 $\times$ 10$^5$ (5.5
$\times$ 10$^4$) $M_\odot$ in the central $r_{\rm{BE}}$ = 5.8 (0.9) pc
exceeds the BE mass, and its $t_{\rm{dyn}}$~= 520 (80) kyr. Thus our numerical
results agree with these analytic expectations.
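The numerical coefficient in the approximate form above follows from substituting $P_{ext} = nkT$ and $c_s^2 = \gamma kT/\mu m_H$ into equation (\ref{eqn:mbe}). A short sketch in cgs units (illustrative only; the sample density is arbitrary and this is not our analysis code) confirms the factor of $\approx$20:

```python
# Verify the coefficient ~20 in M_BE ~ 20 T^{3/2} n^{-1/2} mu^-2 gamma^2 Msun
# by substituting P_ext = n k T and c_s^2 = gamma k T / (mu m_H) into
# M_BE = 1.18 c_s^4 G^{-3/2} P_ext^{-1/2}.  All constants in cgs.
k_B   = 1.3807e-16   # Boltzmann constant [erg/K]
m_H   = 1.6726e-24   # hydrogen mass [g]
G     = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33     # solar mass [g]

def m_be_exact(T, n, mu=1.22, gamma=5.0/3.0):
    """Bonnor-Ebert mass with P_ext = n k T, in solar masses (n in cm^-3)."""
    cs2 = gamma * k_B * T / (mu * m_H)   # sound speed squared
    P = n * k_B * T                      # local pressure
    return 1.18 * cs2**2 / (G**1.5 * P**0.5) / M_sun

def m_be_approx(T, n, mu=1.22, gamma=5.0/3.0):
    """Approximate form, M_BE ~ 20 T^{3/2} n^{-1/2} mu^-2 gamma^2 Msun."""
    return 20.0 * T**1.5 * n**-0.5 * mu**-2 * gamma**2

# T from the isothermal collapse; the density here is an arbitrary example.
print(m_be_exact(8000.0, 1.0e4), m_be_approx(8000.0, 1.0e4))
```

The two forms agree to better than a percent, confirming the coefficient.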
We follow the evolution of the accretion and contraction until the
simulation\footnote{We stop the simulation because of ensuing
round-off errors from a lack of precision. We use 80-bit precision
arithmetic for positions and time throughout the calculation.}
reaches a refinement level of 41 (41) that corresponds to a resolution
of 0.01 (0.014) R$_\odot$. At this point, the central 4.7 $\times$
10$^5$ (1.0 $\times$ 10$^5$) $M_\odot$ are gravitationally unstable and
\textit{not} rotationally supported. The central mass is nearly
devoid of free electrons where the electron fraction, $n_e / n <
10^{-6}$, and the temperature is $\sim8000$ K. It has a radius of 7.9
(1.5) pc. The central number density is 5.8 (7.6) $\times$ 10$^{21}$
cm$^{-3}$. We repeat that this isothermal collapse occurs through
atomic hydrogen cooling only, but in reality, H$_2$~cooling is important
even in the presence of an ultraviolet background
\citep[e.g.][]{Machacek01, Wise07b, OShea08}. Thus our results should
be regarded as an idealized numerical experiment on turbulent
collapse (see \S\ref{sec:applicability} for more
discussion).
Next we show the radial profiles of the final and preceding outputs in
Figures \ref{fig:profilesA} and \ref{fig:profilesA3}, where we plot
(a) enclosed gas mass, (b) number density, (c) mass-weighted
temperature, and (d) mass-weighted radial velocity. Figure
\ref{fig:profilesA} focuses on length scales from roughly 20 AU out to $r
> \ifmmode{{\rm r_{vir}}}\else{r$_{\rm{vir}}$}\fi$. The halo collapses in a self-similar manner with $\rho(r)
\propto r^{-12/5}$. We also overplot the DM density in units of
m$_{\rm{H}} \ifmmode{~{\rm cm^{-3}}}\else{~cm$^{-3}$}\fi$ in the $b$ panels. The DM density in simulation
A does not flatten as much as simulation B with $\rho_{\rm{DM}}
\propto r^{-4/3}$ and $r^{-2/3}$, respectively, yet higher DM
resolution simulations will be needed to address the significance of
this difference in central slopes. In the $c$ panels, one sees that
the entire system is isothermal within 10\% of 8000 K. In the $d$
panels, the sound speed $c_s$ in the final epoch is plotted, and there
is a shock where $v_r > c_s$ at the mass scale where $M_{{\rm enc}}$
first exceeded $M_{{\rm BE}}$. Here $v_r$ is the radial velocity, and
$c_s$ is the local sound speed.
Figure \ref{fig:profilesA3} shows the data within 1 pc at times 100
years before the end of the simulation. The self-similar, isothermal
collapse continues to stellar scales. However, the structure in the
radial velocity in simulation B exhibits striking behavior with
four repeated minima at mass scales $2 \times 10^4$, $10^3$, 6, and
$10^{-3} M_\odot$. We attribute this to rotational bar-like instabilities
that we discuss later in the paper (\S\ref{sec:rotInstab}).
If we take $v_r$ from the last output to be constant in time, we can
determine
the infall times, which are shown in Figure \ref{fig:infall}. The
infall time, $t_{in} = r/v_r$, of the shocked BE mass is 350 (50) kyr.
The infall times approximately follow a broken power law, $t_{in}
\propto M_{\rm{enc}}^\beta$. Within $M_{\rm{enc}} \sim 0.1 M_\odot$,
$\beta \approx 1/2$. In the range $0.1 \lower0.3em\hbox{$\,\buildrel <\over\sim\,$} M_{\rm{enc}}/M_\odot \lower0.3em\hbox{$\,\buildrel <\over\sim\,$} 3
\times 10^4$, $\beta \approx 1$; above this mass interval, the slope
of the mass infall times increases to $\beta \approx 3/2$. The
increased radial velocities when the central object becomes
gravitationally unstable cause the steepening of the slope at
$\sim$$3 \times 10^4 M_\odot$.
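The slopes $\beta$ quoted above can be measured as local logarithmic derivatives of a tabulated profile. The sketch below uses centered log-log finite differences; the input profile here is a synthetic power law, not simulation output:

```python
# Estimate the local power-law slope beta of t_in ∝ M_enc^beta from
# tabulated (M_enc, t_in) pairs via centered log-log finite differences.
import math

def local_slopes(mass, t_in):
    """Return d(ln t_in)/d(ln M_enc) at the interior points of a profile."""
    slopes = []
    for i in range(1, len(mass) - 1):
        dlnt = math.log(t_in[i + 1]) - math.log(t_in[i - 1])
        dlnm = math.log(mass[i + 1]) - math.log(mass[i - 1])
        slopes.append(dlnt / dlnm)
    return slopes

# Synthetic check: a pure t_in ∝ M^{3/2} profile returns beta = 1.5 everywhere.
mass = [10.0**k for k in range(-2, 5)]
t_in = [m**1.5 for m in mass]
print(local_slopes(mass, t_in))
```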
\begin{figure}
\resizebox{\columnwidth}{!}{\rotatebox{0}{\includegraphics*{f6_color}}}
\caption{Radial profiles of gas infall times at the final output. To
approximate a collapse timescale, the quantities $r/v_r$ {\em
(solid)} and $M/(dM/dt)$ {\em (dashed)} are calculated and plotted
here.}
\label{fig:infall}
\end{figure}
\vspace{1em}
\subsection{Global Disc}
In simulation B, a thick disc with a radius of 50 pc and disc scale
height of $\sim$10 pc forms that is pressure supported and only
partially rotationally supported. The circular velocities within this
disc achieve only a third of Keplerian velocities. The lack of full
rotational support and the large scale height suggest that a central
collapse occurs before any fragmentation in this large-scale disc is
possible. In contrast, we see a disorganized, turbulent medium and no
large scale disc formation in simulation A.
\subsection{Turbulence}
\label{sec:turbulence}
\citet{Kolmogorov41} described a theory of the basic behavior of
incompressible turbulence that is driven on a large scale and forms
eddies at that scale. These eddies then interact to form smaller
eddies and transfer some of their energy to smaller scales. This
cascade continues until energy is dissipated through viscosity. In
supersonic turbulence, most of the turbulent energy is dissipated
through shock waves, which minimizes the local nature of cascades
found in incompressible turbulence.
In Paper I, we found that turbulence is stirred during virialization.
When radiative cooling is efficient, the gas cannot virialize by
gaining thermal energy and must increase its kinetic energy in order
to reach equilibrium, which it achieves by radial infall and turbulent
motions.
In addition to virial turbulence generation, mergers stir turbulence.
Here the largest driving scale will be approximately the scale of the
merging objects, and the turbulent cascade starts from that length
scale. Additional driving may come from Kelvin-Helmholtz
instabilities of a multi-phase gas as the mergers occur
\citep{Takizawa05}. \citeauthor{Takizawa05} considered mergers of
galaxy clusters; however, that work may still apply to the formation of
protogalactic halos since similar temperature contrasts exist in this
regime of mergers. As the lighter halo falls into the massive halo, a
bow shock and small-scale eddies from the Kelvin-Helmholtz instability
form between the two interacting objects. At later times, a dense,
cool core remains in the substructure of the lesser halo. The
instabilities grow and destroy the baryonic substructure, and the gas
mixes with the existing gas of the massive halo and becomes turbulent.
\begin{figure}
\resizebox{\columnwidth}{!}{\rotatebox{0}{\includegraphics*{f7_color}}}
\caption{The turbulent Mach number, $v_{rms} / c_s$, for the final
output {\em (black with diamonds)} and selected previous times that
are listed in the legend. Simulation A (\textit{left}) and B
(\textit{right}).}
\label{fig:mach}
\end{figure}
To quantify aspects of this turbulence, we inspect the turbulent Mach
number,
\begin{equation}
\mathcal{M} = \frac{v_{rms}}{c_s}; \quad
c_s^2 = \frac{dP}{d\rho} = \frac{\gamma k T}{\mu m_H}.
\end{equation}
Here $P$ is pressure, $v_{rms}$ is the 3D velocity dispersion, and
$\gamma$ is the adiabatic index that we set to 5/3. We evaluate
$v_{rms}$ with respect to the mean velocity of each spherical shell.
Radial profiles of $\mathcal{M}$ are shown in Figure \ref{fig:mach}.
Before the core becomes gravitationally unstable, the turbulence is
subsonic within the virial shock. After the core becomes
gravitationally unstable, the turbulent Mach number rises to 2--4.
The collapse produces turbulence on a timescale that is faster than it
can be dissipated.
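The diagnostic above can be sketched as follows; the cell masses and velocities in the example are hypothetical stand-ins for the shell data extracted from the simulations:

```python
# Turbulent Mach number of a spherical shell: mass-weighted 3D velocity
# dispersion about the shell's mean velocity, divided by the local sound
# speed c_s = sqrt(gamma k T / mu m_H).  SI units throughout.
import math

k_B, m_H = 1.3807e-23, 1.6726e-27  # Boltzmann constant, hydrogen mass

def sound_speed(T, mu=1.22, gamma=5.0/3.0):
    """Adiabatic sound speed in m/s for temperature T in Kelvin."""
    return math.sqrt(gamma * k_B * T / (mu * m_H))

def mach_number(masses, velocities, T):
    """v_rms / c_s for one shell; velocities are 3-vectors in m/s."""
    mtot = sum(masses)
    vbar = [sum(m * v[i] for m, v in zip(masses, velocities)) / mtot
            for i in range(3)]
    v2 = sum(m * sum((v[i] - vbar[i])**2 for i in range(3))
             for m, v in zip(masses, velocities)) / mtot
    return math.sqrt(v2) / sound_speed(T)

# At 8000 K the sound speed is ~9.5 km/s, so a ~20 km/s dispersion is
# supersonic with Mach ~2 (hypothetical two-cell example).
print(sound_speed(8000.0) / 1e3)
print(mach_number([1.0, 1.0], [(2.0e4, 0.0, 0.0), (-2.0e4, 0.0, 0.0)], 8000.0))
```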
The turbulence that exists before the initial collapse may impact the
nature of the central object. In simulation A, the core initially has
$\mathcal{M} \approx 1$, and this results in a central object with 4.7
$\times$ 10$^5 M_\odot$ and a radius of 7.9 pc. The core in simulation B
has $\mathcal{M} \approx 0.2$, and the central object is about five
times less massive and smaller, which corresponds to a free-fall time
approximately five times shorter as well.
\subsection{Spin Parameter Evolution}
During the hierarchical buildup of structure, tidal forces from
neighboring structures impart angular momentum to a halo, particularly
when its radius is maximal at the turn-around time \citep{Hoyle49,
Peebles69}. However in recent years, several groups have recognized
that the mergers may impart a considerable fraction of angular
momentum to the system \citep{Steinmetz95, Gardner01, Vitvitska02,
Maller02}. Over many realizations of mergers, the net angular
momentum change would be zero. In reality, an angular momentum
residual remains after the last major merger occurs because there are
too few events to cancel the randomization of halo spin. Although
each halo has unique rotational properties, it is useful to define a
dimensionless spin parameter
\begin{equation}
\lambda \equiv \frac{\vert L \vert \sqrt{\vert E \vert}} {G M^{5/2}},
\end{equation}
where G is the gravitational constant and L, E, and M are the angular
momentum, energy, and mass of the object, that measures the rigid body
rotation of the halo \citep{Peebles71}. In Figure \ref{fig:spin}, we
display the time evolution of $\lambda$ of the DM and baryons in our
simulations and mark the occurrence of the major merger in simulation
A. \citet{Eisenstein95b} \citep[preceded by][]{Barnes87} calculated
that the mean spin parameter, $\langle\lambda\rangle \approx 0.04$, is
weakly dependent on object mass and cosmological model, and this value
is also marked in Figure \ref{fig:spin}. Also $\lambda$ weakly
depends on its merger history, where $\langle\lambda\rangle$ increases
during mergers and slowly dissipates afterwards. Most of the angular
momentum is acquired from steady minor mergers and accretion because
major mergers only happen rarely (usually only once per logarithmic
mass interval). In 96\% of mergers, the majority of the internal spin
originates from the orbital energy of the infalling halo
\citep{Hetznecker06}.
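A minimal sketch of the spin-parameter measurement, assuming particle masses, positions, velocities, and a precomputed total energy $E$ as inputs (all hypothetical; this is not the analysis code used here):

```python
# Dimensionless spin parameter lambda = |L| sqrt(|E|) / (G M^{5/2}),
# with the total angular momentum summed over particles.
import math

def spin_parameter(masses, positions, velocities, E, G=6.674e-11):
    """Spin parameter from particle data and total energy E (SI units)."""
    L = [0.0, 0.0, 0.0]
    for m, r, v in zip(masses, positions, velocities):
        # accumulate m * (r x v)
        L[0] += m * (r[1] * v[2] - r[2] * v[1])
        L[1] += m * (r[2] * v[0] - r[0] * v[2])
        L[2] += m * (r[0] * v[1] - r[1] * v[0])
    L_mag = math.sqrt(sum(c * c for c in L))
    M = sum(masses)
    return L_mag * math.sqrt(abs(E)) / (G * M**2.5)
```

For a single unit-mass particle on a unit circular orbit with $E = -1$ and $G = 1$, the function returns $\lambda = 1$, which serves as a quick sanity check.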
\begin{figure}
\resizebox{\columnwidth}{!}{\rotatebox{0}{\includegraphics*{f8_color}}}
\caption{Spin parameter, $\lambda \equiv \vert L \vert \sqrt{\vert E
\vert} / GM^{5/2}$, evolution of the main halo in the
simulation. {\em (left)} simulation A. {\em (right)} simulation
B. The dashed and solid lines are the interpolated values for the
DM and baryonic spin parameter. The squares and circles
correspond to the actual measurements from the DM and gas data,
respectively. The horizontal dashed line at $\lambda$ = 0.04
marks the mean cosmological spin parameter. In simulation A, two
major mergers cause the large increase beginning at z $\approx$
21 in the hashed region. The oscillations occur as the merging
halos orbit each other until they virialize.}
\label{fig:spin}
\vspace{-0.5em}
\end{figure}
At $z \approx 22$ in simulation A, the spin parameter $\lambda = 0.06$
before the last major merger. Then the spin parameter increases by a
factor of 3 during its major merger because the system is far
from dynamical equilibrium. The system becomes virialized after
approximately a dynamical time, and the spin parameter stabilizes at
$\lambda \approx 0.03$ and proceeds to decrease with time until
$\lambda = 0.022$ at the time of collapse. The above evolution of
$\lambda$ agrees with the findings of Hetznecker \& Burkert.
Simulation B describes a halo that does not undergo a recent major
merger, and its final $\lambda$ = 0.013.
Both halos have less angular momentum than $\langle \lambda \rangle$
when the cooling gas collapses. The probability distribution of
$\lambda$ can be described with the log-normal function
\begin{equation}
\label{eqn:lambda_prob}
p(\lambda)d\lambda = \frac{1}{\sigma_\lambda \sqrt{2\pi}} \exp
\left[ -\frac{\ln^2 (\lambda/\lambda_0)}{2\sigma_\lambda^2} \right]
\frac{d\lambda}{\lambda},
\end{equation}
where $\lambda_0 = 0.042 \; \pm \; 0.006$ and $\sigma_\lambda = 0.5 \;
\pm \; 0.04$ \citep[e.g.][]{Bullock01}. From the cumulative
probability function resulting from equation (\ref{eqn:lambda_prob}),
89\% (99\%) of the cosmological sample of halos have larger spin
parameters than the halos described here. \citet{Eisenstein95a}
demonstrated that halos with low spin parameters are candidates for BH
formation and quasar seeds. However they argue that the angular
momentum needs to be at least an order of magnitude lower than the
mean. Next we present further evidence that a gaseous collapse is
possible with spin parameters that are not especially atypical.
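The quoted fractions follow from the cumulative form of equation (\ref{eqn:lambda_prob}); assuming the conventional log-normal (with $2\sigma_\lambda^2$ in the exponent), the survival function has a closed form, sketched below:

```python
# Fraction of halos with spin parameter larger than a given lambda,
# for the log-normal distribution with lambda_0 = 0.042, sigma = 0.5:
# P(lambda' > lam) = 0.5 * erfc( ln(lam/lambda_0) / (sigma * sqrt(2)) ).
import math

def frac_larger(lam, lam0=0.042, sigma=0.5):
    """Survival function of the log-normal spin distribution."""
    return 0.5 * math.erfc(math.log(lam / lam0) / (sigma * math.sqrt(2.0)))

# Halos A and B collapse with lambda = 0.022 and 0.013; roughly 90% and
# 99% of cosmological halos spin faster.
print(frac_larger(0.022), frac_larger(0.013))
```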
\begin{figure}
\resizebox{\columnwidth}{!}{\rotatebox{0}{\includegraphics*{f9}}}
\caption{Secular instability $e$-folding timescale in units of
$a_1^2 / \nu$ as a function of t = $T / \vert W \vert$ and $\alpha
= (tf/2)^{1/2}$ (eq. \ref{eqn:alpha}). At t $<$ 0.1375, the
system is stable to all perturbations. Above $t$ = 0.27, the
system is dynamically unstable, and this timescale is not
applicable.}
\label{fig:secular_tau}
\vspace{0.5em}
\end{figure}
\subsection{Instability of Maclaurin Spheroids}
\label{sec:analytics}
The dynamics of rotating systems is a classic topic in astrophysics
(see EFE \S\S1--6). These self-gravitating systems are susceptible to
two types of instabilities. Secular instability occurs when small
dissipative forces, e.g. viscosity, amplify perturbations to become
unstable in an otherwise stable inviscid configuration. Dynamical
(also referred to as ordinary) instability results when some
oscillatory mode exponentially grows with time, regardless of any
dissipative forces. Here we concentrate on Maclaurin spheroids
relevant for a uniform body rotating with a fixed angular velocity.
Maclaurin spheroids are a special case of Jacobi ellipsoids that are
axisymmetric. The onset of the $m = 2$ bar-like instability in
gaseous Maclaurin spheroids happens for a given eccentricity,
\begin{equation}
\label{eqn:eccentricity}
e = \left( 1 - \frac{a_3^2}{a_1^2} \right)^{1/2} \geq \left\{
\begin{array}{r@{\quad}l}
0.8127 & \mathrm{(secular)} \\
0.9527 & \mathrm{(dynamical)}
\end{array}
\right.,
\end{equation}
where $a_3$ and $a_1$ are the principal axes with $a_3 \leq a_1$ (EFE
\S33). Eccentricity is related to the ratio, $t = T / \vert W \vert$,
of rotational kinetic energy to gravitational potential by
\begin{equation}
\label{eqn:t_vs_e}
t = \frac{1}{2} [ (3e^{-2} - 2) - 3(e^{-2} - 1)^{1/2} (\sin^{-1}
e)^{-1} ],
\end{equation}
and the secular and dynamical instabilities happen at $t = (0.1375,
0.27)$, respectively \citep[e.g.][]{Ostriker73b}.
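As a consistency check, equation (\ref{eqn:t_vs_e}) can be evaluated directly; this illustrative sketch maps the critical eccentricities of equation (\ref{eqn:eccentricity}) onto the quoted critical values of $t$:

```python
# Rotational-to-potential energy ratio t = T/|W| of a Maclaurin spheroid
# as a function of eccentricity e (eq. t_vs_e in the text).
import math

def t_of_e(e):
    """t = T/|W| for a Maclaurin spheroid of eccentricity e."""
    return 0.5 * ((3.0 / e**2 - 2.0)
                  - 3.0 * math.sqrt(1.0 / e**2 - 1.0) / math.asin(e))

# The critical eccentricities 0.8127 and 0.9527 land near the secular
# and dynamical thresholds t = 0.1375 and t = 0.27.
print(t_of_e(0.8127), t_of_e(0.9527))
```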
When $t$ is larger than 0.1375 but smaller than 0.27, both the
Maclaurin spheroid and Jacobi ellipsoid are perfectly stable against
small perturbations in the inviscid case. For a given $e$, the Jacobi
configuration has a lower total energy than its Maclaurin counterpart
and is therefore a preferred state. Here any dissipative force
induces a secular bar-like instability. The system slowly and
monotonically deforms through a series of Riemann S-type ellipsoids
until its final state of a Jacobi ellipsoid with an equal angular
momentum \citep{Press73} and lower angular velocity (EFE \S32) as
specific angular momentum is transported outward. The instability
grows on an $e$-folding timescale
\begin{equation}
\label{eqn:secular_tau}
\tau = \phi a_1^2 / \nu,
\end{equation}
where $\phi$ is a constant of proportionality that asymptotes at $t$ =
0.1375, decays to zero at $t$ = 0.27, and is plotted in Figure
\ref{fig:secular_tau} (EFE \S37). Here $\nu$ is the kinematic
viscosity.
\citet{Christodoulou95a, Christodoulou95b} generalized the
formulations for bar-like instabilities to account for self-gravity.
In addition, they consider different geometries, differential
rotation, and non-uniform density distributions. They devised a new
stability criterion
\begin{equation}
\label{eqn:alpha}
\alpha \equiv \frac{T/\vert W \vert}{\Omega / \Omega_J} =
\sqrt{\frac{f}{2} \frac{T}{\vert W \vert}}
\end{equation}
where $\Omega$ is the rotation frequency,
\begin{equation}
\Omega^2_J = 2 \pi G \rho \left[ \frac{(1-e^2)^{1/2}}{e^3} \sin^{-1} e
- \frac{1-e^2}{e^2} \right]
\end{equation}
is the Jeans frequency in the radial direction for a Maclaurin
spheroid, and
\begin{equation}
\label{eqn:rotF}
f = \frac{1}{e^2} \left[ 1 - \frac{e}{\sin^{-1} e} \sqrt{1 - e^2}
\right]
\end{equation}
accounts for differing geometries\footnote{See
\citet{Christodoulou95b} for more generalized geometries.} with $f =
2/3$ for a sphere and $f = 1$ for a disc. Secular and dynamical
instabilities for Maclaurin spheroids occur above $\alpha$ = (0.228,
0.341), respectively, for $f = 1$.
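These definitions are straightforward to evaluate; the sketch below (illustrative only, SI constants) checks that $\Omega_J^2$ recovers the spherical Jeans value $4\pi G\rho/3$ in the small-$e$ limit and that $f$ interpolates between 2/3 (sphere) and 1 (disc):

```python
# Generalized stability criterion alpha = sqrt(f/2 * T/|W|), the radial
# Jeans frequency of a Maclaurin spheroid, and the geometry factor f.
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def alpha(t, f):
    """Stability parameter for rotational energy ratio t = T/|W|."""
    return math.sqrt(0.5 * f * t)

def omega_J_sq(e, rho, G=G):
    """Radial Jeans frequency squared for eccentricity e and density rho;
    tends to 4 pi G rho / 3 as e -> 0 (spherical limit)."""
    bracket = (math.sqrt(1.0 - e**2) / e**3 * math.asin(e)
               - (1.0 - e**2) / e**2)
    return 2.0 * math.pi * G * rho * bracket

def shape_f(e):
    """Geometry factor: 2/3 for a sphere (e -> 0), 1 for a disc (e -> 1)."""
    return (1.0 - e / math.asin(e) * math.sqrt(1.0 - e**2)) / e**2

print(shape_f(0.01), shape_f(0.9999))
```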
From N-body simulations of disc galaxies, \citet{Ostriker73b} found
that a massive dark halo with comparable mass to the disc could
suppress secular instabilities. In the case of a gaseous collapse to
a SMBH however, the baryonic component dominates over the dark matter
component in the central 10 pc. Secular instabilities cannot be
prevented through this process, which we demonstrate next.
\subsection{Rotational Instabilities}
\label{sec:rotInstab}
In the $l$ = 1 pc panel of simulation B in Figure \ref{fig:slices}, it
is apparent a bar-like instability exists in the gravitationally
unstable central object. Figure \ref{fig:diskB} shows the instability
criterion $\alpha$ (eq. \ref{eqn:alpha}) against enclosed gas mass.
Here we transform the velocities to align the $z$-axis with the
baryonic angular momentum vector of the entire halo. We use the
tangential velocities to calculate the rotational kinetic energy $T$.
The shape parameter $f$ = 2/3 (0.89) for simulation A (B).
\begin{figure}
\resizebox{\columnwidth}{!}{\rotatebox{0}{\includegraphics*{f10_color}}}
\caption{Rotational instability parameter $\alpha = \sqrt{fT/2\vert W
\vert}$ for the thick disc with $r \simeq 50$ pc in simulations A
(\textit{black solid line}) and B (\textit{red dashed line}). The
shaded areas show the standard deviation when varying the center on
the 100 densest points. For $\alpha > 0.22$ denoted by the
horizontal line, a secular instability occurs in the disc and leads
to bar formation. In simulation A, instabilities occur at mass
scales of 100, 0.1, and $10^{-4} M_\odot$. In simulation B, the same
happens at $2 \times 10^6$, $2 \times 10^4$, $10^3$, 6, and $10^{-3}
M_\odot$. We also mark $\alpha = 0.341$ where a rotating system becomes
dynamically unstable. Only simulation A at 0.1 $M_\odot$ experiences a
dynamical instability.}
\label{fig:diskB}
\end{figure}
As discussed before, Maclaurin spheroids are subject to secular $m =
2$ bar-like instabilities when $\alpha > 0.228$. In simulation A, the
central object becomes unstable at three approximate mass scales, $6.7
\times 10^{-4}$, 1.0, and 110 $M_\odot$ that correspond to radii of 0.75
R$_\odot$, 1.3 AU, and 150 AU, respectively. The enclosed mass ratios
of the recurring instabilities, i.e. $M_i/M_{i+1}$, are 1500:1,
110:1, and 1400:1, starting at the smallest mass scale. The
instability at 0.075 $M_\odot$ ($r$ = 0.13 AU) is dynamically unstable
with $\alpha$ peaking at 0.55. In simulation B, instabilities occur
at $5.3 \times 10^{-4}$, 7.0, $1.2 \times 10^3$, and $2.0 \times 10^4
M_\odot$ at radii of 0.49 R$_\odot$, 17 AU, 2700 AU, and 0.18 pc. The
enclosed mass ratios of these instabilities are 13,000:1, 170:1,
17:1, and 85:1.
It is interesting to note that the innermost instability in both
simulations becomes dynamical ($\alpha > 0.341$), and $\alpha$
continues to increase rapidly toward the center. However, these
features should be taken with caution since they occur near our
resolution limit, where the particular location used as the center
influences the calculated rotational energy. To evaluate
the sensitivity to the choice of center, we performed the same analysis
while varying the center over the 100 densest cells in the
simulation. We plot the standard deviation of $\alpha$ as the shaded
area in Figure \ref{fig:diskB}. Inside an enclosed mass of $3 \times
10^{-4} M_\odot$, it is $\sim$0.05 but diminishes to less than 0.01 outside
$0.1 M_\odot$.
The $e$-folding time of secular instabilities $\tau$ is proportional
to $a_1^2$ (see eq. \ref{eqn:secular_tau}). Hence small-scale
instabilities grow on a faster timescale than their parent,
large-scale bar instability. Turbulent viscosity is the main
dissipative force that drives the instability. $\tau$ is inversely
proportional to the viscosity. This further shortens $\tau$ because
supersonic turbulence is maintained to the smallest scales.
\begin{figure*}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f11a_color}}}
\hspace{0.025\textwidth}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f11b_color}}}
\caption{Mass-weighted radial profiles of various rotational
quantities in simulation A (\textit{left panels}) and simulation B
(\textit{right panels}). In panel A, we show the rotational
velocity compared to the Kepler velocity $v_{\rm{kep}} = \sqrt{GM/r}$. In
panel B, we display the typical rotational velocity. In panels C
and D, the specific angular momentum (in units of km/s pc) and the
ratio of the rotational velocity to the sound speed are shown,
respectively. The line styles correspond to the same times as in
Figure \ref{fig:profilesA}.}
\label{fig:profilesB}
\end{figure*}
\begin{figure*}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f12a_color}}}
\hspace{0.025\textwidth}
\resizebox{0.48\textwidth}{!}{\rotatebox{0}{\includegraphics*{f12b_color}}}
\caption{The same as Figure \ref{fig:profilesB} but for the inner
parsec of simulations A and B, with the output times as listed in
Figure \ref{fig:profilesA3}.}
\label{fig:profilesB3}
\end{figure*}
\subsection{Rotational Properties}
During the collapse of the gas in our simulations, rotational support
never impedes the collapse. In Figures \ref{fig:profilesB} and
\ref{fig:profilesB3}, we show (a) coherent rotational velocity divided
by Keplerian velocity $v_{kep} = \sqrt{GM/r}$, (b) rotational
velocity, (c) specific angular momentum, and (d) rotational velocity
divided by the sound speed. We compute the rotational velocities
around the center of mass of a sphere with radius of 100 cell widths
of the finest AMR level, centered on the densest point. We note that
the rotational velocity $L/r$ plotted here differs from the organized
rotation of a disc. The radial profiles only sample gas in
spherical shells, whose angular momentum vectors are not necessarily
parallel.
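As an illustrative sketch (not the analysis code actually used with \textsl{Enzo}), the shell-averaged rotational velocity $L/r$ described above can be computed from cell data as follows; all array names and the binning are hypothetical.

```python
import numpy as np

def shell_rotational_velocity(pos, vel, mass, center, r_edges):
    """Mass-weighted rotational velocity |L| / (M r) in spherical shells.

    pos, vel : (N, 3) arrays of cell positions and velocities
    mass     : (N,) array of cell masses
    center   : (3,) chosen center (e.g. the densest cell)
    r_edges  : radial bin edges
    """
    r_vec = pos - center
    r = np.linalg.norm(r_vec, axis=1)
    v_rot = np.zeros(len(r_edges) - 1)
    for i in range(len(r_edges) - 1):
        sel = (r >= r_edges[i]) & (r < r_edges[i + 1])
        if not sel.any():
            continue
        m = mass[sel]
        # remove the shell's bulk motion before computing angular momentum
        v_bulk = np.average(vel[sel], axis=0, weights=m)
        # total angular momentum of the shell about the chosen center
        L = np.sum(m[:, None] * np.cross(r_vec[sel], vel[sel] - v_bulk), axis=0)
        r_mid = 0.5 * (r_edges[i] + r_edges[i + 1])
        v_rot[i] = np.linalg.norm(L) / (m.sum() * r_mid)
    return v_rot
```

Because the angular momentum vectors within a shell need not be parallel, $|L|/(Mr)$ can differ from the rotation speed of an organized disc, as noted in the text.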
1. \textit{Simulation A}--- At $r > 1$ AU (M$_{\rm{enc}} = 1 M_\odot$),
the typical rotational speed is a factor of two or three below the
Keplerian velocity required for rotational support. At $r$
= 0.1 AU (M$_{\rm{enc}}$ = 0.07 $M_\odot$), the infall becomes marginally
rotationally supported, i.e. $L/r \sim v_{kep}$. The radial
velocities react by slowing from 15\ifmmode{~{\rm km~s^{-1}}}\else{~km s$^{-1}$}\fi~to below 5\ifmmode{~{\rm km~s^{-1}}}\else{~km s$^{-1}$}\fi. However, this
rotational support does not continue to the center. Rotational speeds
are only $\sim$0.5$v_{kep}$ within 0.1 AU (M$_{\rm{enc}} = 0.07 M_\odot$).
2. \textit{Simulation B}--- This collapse exhibits four minima in
radial velocity that are caused by rotational bar-like instabilities.
After such an instability occurs, the radial velocities increase
because angular momentum is transported outwards. As the
rotational velocities decrease, another secular instability is
instigated, and the process repeats, producing a cascade of
instabilities. The increased infall velocity and associated decrease
in rotational velocities (i.e. the dips in Figures \ref{fig:profilesA3}d and
\ref{fig:profilesB3}d) depict this behavior. At the final output, the
infalling material exhibits no rotational support at any radius,
similar to simulation A at $r > 1$ AU.
We interpret with caution the inner points where $L/r/v_{\rm{kep}}$
fluctuates well above unity, because of the difficulty of
choosing a center in a turbulent medium, i.e. when turbulent
velocities dominate over rotational ones. If the central sphere is
smaller than the radius over which the turbulent velocities average to
zero, we introduce errors into the angular momentum profiles by sampling the
turbulent gas incompletely. In the $c$-panels of Figure
\ref{fig:profilesB}, one sees that the specific angular momentum inside
M$_{\rm{enc}} < 10^6 M_\odot$ decreases over time and is transported
outwards in the collapse.
Even though the spin parameter is not atypical, the thick disc with
$r\sim50$ pc is not rotationally supported. In simulation A, a global disc does
not exist at all. We attribute this behavior to the nature of angular
momentum transport in a turbulent medium. Even with a higher spin
parameter, we do not expect a disc to fragment before the central
collapse of gas with low specific angular momentum and short dynamical
times. This low specific angular momentum material collapses to small
radii without fragmentation so that a central dense object forms with
a mass of $\sim10^5 M_\odot$ or 2\% of the halo gas mass. After the
initial collapse, the thick disc may become rotationally supported as
more high angular momentum gas infalls.
\section{DISCUSSION}
\label{sec:discuss}
In our cosmological simulations, we find that a $\sim$10$^5 M_\odot$ dense
object forms in the center of a metal-free protogalactic halo that
cools by atomic hydrogen cooling. Although we have neglected some
important processes, such as H$_2$~chemistry, star and BH formation and
feedback, our results show that angular momentum transport at both
small and large scales in the form of preferential segregation and
rotational instabilities, respectively, leads to the formation of a
dense, massive object with $r < 5$ pc. This initial central collapse
should precede any fragmentation of a global disc.
\subsection{Angular Momentum Transport}
Collapsing turbulent clouds, whether cosmological or galactic in
nature, are ubiquitous in the universe. In this paper, we focus on
the details of the turbulent collapse of a proto-galactic halo.
Angular momentum transport plays a key role in such events, e.g.,
determining the characteristics of the central object(s). However,
there exists the ``angular momentum problem'', where many orders of
magnitude of angular momentum must be shed \citep[see \S6
in][]{Larson03} from the initial molecular cloud to form a central
star, star cluster, or BH. In our simulations, there is a clear
scenario in which the inside-out collapse \citep{Shu87} proceeds even
if the initial turbulent cloud was rotating. We see three major
elements affecting angular momentum transport during the collapse.
\medskip
1. \textit{Angular momentum distribution}--- In cosmological halos,
there is a universal distribution of angular momentum
\begin{equation}
M(<j) = M_{\rm{vir}} \frac{\mu j}{j_0 + j}, \quad \mu > 1,
\end{equation}
that measures the mass with a specific angular momentum less than $j$
\citep{Bullock01}. This function is fitted with two parameters, $\mu$
and $j_0$, where $\mu$ controls the flattening of the power law at
high angular momenta, and $j_0$ determines at which $j$ this
transition occurs. \citeauthor{Bullock01} also find that more mass
resides in the tails of the distribution, especially at small $j$,
when compared to a system in solid body rotation. Thus all halos have
some intrinsic amount of gas with small $j$. If this distribution is
maintained during the collapse \citep[e.g.][]{Mestel63}, such gas can
collapse to some small radius, $r_{\rm{min}} > j/v_{\rm{kep}}$,
without becoming rotationally supported, which leads to the next
element of discussion -- angular momentum segregation.
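A quick way to see how much low-$j$ material this profile implies is to evaluate it directly; the parameter values below are illustrative, not fits from our simulations.

```python
def bullock_mass_fraction(j, mu, j0):
    """Cumulative mass fraction M(<j)/M_vir of the Bullock et al. (2001)
    universal angular-momentum profile, valid for 0 <= j <= j_max,
    where j_max = j0 / (mu - 1) gives M(<j_max) = M_vir."""
    return mu * j / (j0 + j)

# Illustrative numbers: with mu = 1.25, gas with j below 10% of j_max
# already accounts for a sizable mass fraction -- the low-j tail
# discussed in the text.
mu, j0 = 1.25, 1.0
j_max = j0 / (mu - 1.0)                          # = 4.0 in units of j0
frac = bullock_mass_fraction(0.1 * j_max, mu, j0)
print(f"{frac:.2f}")                             # prints "0.36"
```

That is, roughly a third of the halo gas carries less than a tenth of the maximum specific angular momentum in this example, which is the reservoir available for collapse to small radii without rotational support.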
2. \textit{Segregation in a turbulent medium}--- In Paper I, we
determined that most of the gas becomes supersonically turbulent as a
result of virialization. Therefore, let us theorize how angular
momentum transport happens during the transition from being pressure
supported to rapidly cooling and collapsing. First consider a
turbulent uniform-density gas cloud, where parcels of gas at a
specific radius can have many different values of $j$. This differs
from the organized rotation of a disc. If we start with such an
initial configuration, how does angular momentum transport occur
during the collapse? Gas with small (high) $j$ will preferentially
migrate to small (large) radii, following turbulent flow lines. In an
axisymmetric system, the Rayleigh criterion \citep{Rayleigh20,
Chandra61} requires that the specific angular momentum must be a
monotonically increasing function with respect to radius. The gas
with the lowest $j$ progressively piles up in the center of DM
potential wells until $t_{\rm{cool}}$~$<$~$t_{\rm{dyn}}$~when it can catastrophically
cool and collapse. Such low $j$ gas may originate in lower mass
progenitors because the gas resided in shallow potential wells
(i.e. low mass halos) that led to smaller turbulent and thermal
velocities. We argue that this effect is intimately linked to the gas
acting to achieve virial equilibrium at all stages during the collapse
(see Paper I). Furthermore, the system becomes unstable to turbulence
as the material segregates. This onset of turbulence can be delayed
if viscosity is large enough so that Reynolds numbers are below the
order of $10^2$ or $10^3$. However, there are many modes of
instability if the Rayleigh criterion is not met, and even gas with a
low Reynolds number will eventually become fully turbulent on a
timescale that is chaotic, depending on the initial perturbation and
Reynolds number \citep{Shu92, Moehlis04}. We note that a more
comprehensive approach would consider the Solberg-H\o iland criterion
\citep{Endal78} that generalizes this to include partial rotational
and pressure support in a disc.
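The Rayleigh criterion invoked above is simple to state operationally: a rotating, axisymmetric flow is stable only where the specific angular momentum increases outward. A minimal numerical check on a hypothetical $j(r)$ profile:

```python
import numpy as np

def rayleigh_unstable(r, j):
    """Flag radii where dj/dr < 0, i.e. where the Rayleigh criterion
    (specific angular momentum monotonically increasing with radius)
    for axisymmetric stability fails."""
    dj_dr = np.gradient(j, r)
    return dj_dr < 0

# A profile with an inversion: j rises overall but has a Gaussian dip.
r = np.linspace(0.1, 10.0, 100)
j = r - 1.5 * np.exp(-((r - 3.0) / 0.5) ** 2)
print(rayleigh_unstable(r, j).any())  # the dip just inside r = 3 violates dj/dr > 0
```

Shells flagged this way are the ones expected to redistribute angular momentum until the profile is again monotonic.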
3. \textit{Bar-like rotational instabilities}--- After sufficient
amounts of gas have migrated to small radii because of angular
momentum segregation, this gas increases its rotational velocity as it
conserves angular momentum. Gas with similar angular momentum now
obtains some organized rotational velocity. As the rotational energy
increases, some shells may become rotationally unstable ($T/\vert W
\vert \ge 0.14$) in a secular $m=2$ mode. In the case of a collapsing
gas cloud, turbulent viscosity provides the dissipative force that
drives the secular instability. The system then deforms into a
bar-like object, where the gas with large $j$ moves to larger radius
and gas with small $j$ can infall to even smaller radii.
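Schematically, the onset condition described above can be checked shell by shell. The thresholds below are the values quoted in this paper (secular $m=2$ instability at $T/|W| \ge 0.14$; the dynamical threshold $\alpha > 0.341$ as used for the innermost instability in Section 3); the function itself is only a sketch.

```python
def instability_class(T_rot, W_grav):
    """Classify a shell by its rotational-to-gravitational energy ratio
    alpha = T/|W|.  Thresholds follow the values quoted in the text:
    secular m=2 instability at alpha >= 0.14, dynamical at alpha > 0.341."""
    alpha = T_rot / abs(W_grav)
    if alpha > 0.341:
        return "dynamical"
    if alpha >= 0.14:
        return "secular"
    return "stable"

# The innermost instability of simulation A peaks at alpha = 0.55.
print(instability_class(0.55, -1.0))  # prints "dynamical"
```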
\medskip
The combination of these three processes alleviates the ``angular
momentum problem'' of inside-out collapses. Such a scenario of
angular momentum transport during a self-similar collapse may be
widely applicable in cosmological collapse problems.
\subsection{Secular Instability Cascade}
Our simulations follow the self-similar collapse of protogalactic
halos over 14 orders of magnitude in length. We find that a cascade
of three (four) bar-like instabilities occurs during the latter stages
of the collapse in simulation A (B). The ratio of mass enclosed in
successive instabilities varies from 10 to 10,000 in our simulations. As a
consequence of these instabilities, the collapse of the densest point
never halts because of rotational support. Instead the gas becomes
rotationally unstable when it gains sufficient rotational energy. The
lowest $j$ gas then falls to smaller radius and may become unstable
yet again. This sequence could repeat itself several times. In
addition, we find that rotational instabilities are possible without a
global disc as in simulation A.
This is the ``bars within bars'' scenario originally proposed to fuel
active galactic nuclei through dynamical rotational bar-like
instabilities \citep{Shlosman89, Shlosman90, Heller07}. It was then
adapted for funneling enough gas into pre-galactic ($M \sim 10^5 M_\odot$)
SMBHs by \citet{Begelman06}, in whose framework the angular momentum
of the disc, where the instability occurs, depends on the spin
parameter of the halo \citep[see also][]{Mo98}. Thus the amount of
gas available for accretion onto the central SMBH also depends on the
spin parameter. Dynamical instabilities require 45\% more rotational
energy to occur than secular ones. In the framework of
\citeauthor{Begelman06}, only requiring secular instabilities may
result in a larger fraction of halos forming a pre-galactic SMBH
because of the log-normal distribution of spin parameters
(eq. \ref{eqn:lambda_prob}). Nevertheless, we do not advocate our
simulations as evidence of pre-galactic SMBH formation because we have
neglected many important processes related to H$_2$~cooling and
primordial star formation that we detail briefly in the next section.
\subsection{Applicability}
\label{sec:applicability}
\subsubsection{Limitations of Current Approach}
\label{sec:limitation}
Our results depict the importance of turbulence, accretion, and atomic
hydrogen cooling in the initial collapse of these halos. However, we
are missing some essential processes, such as H$_2$~chemistry,
primordial and Population II stellar formation and feedback, SMBH
formation and feedback, and metal transport and cooling. It was our
intention to study only the hydrogen and helium cooling case first and
gradually introduce other processes at a later time to investigate the
magnitude and characteristics of their effects, which we will present
in later papers.
Gas becomes optically thick to Ly$\alpha$~radiation above column densities
of $\sim$$10^{13}$ cm$^{-2}$, and Ly$\alpha$~radiation trapping becomes
important above a density of $\sim$$5 \times 10^8 \ifmmode{~{\rm cm^{-3}}}\else{~cm$^{-3}$}\fi$
\citep{Oh02}. We continue to use optically thin cooling rates above
this density. Thus we overestimate the cooling within 0.03 pc. As a
consequence, we do not suggest that these simulated objects ever form
in nature. However this scenario poses an excellent numerical
experiment of turbulent collapse, which should be common in galaxy
formation, where turbulence is generated during virialization, and
star formation within turbulent molecular clouds.
\subsubsection{Desired Improvements}
\label{sec:improve}
Clearly local dwarf spheroidals contain stars with ages consistent
with formation at very high redshifts \citep{Ferrara00, Tolstoy02,
Tolstoy03, Helmi06}. A model that aims to fit galaxy
luminosity functions down to the faintest observed galaxies may
need a star formation and feedback model that follows molecular clouds
as small as one thousand solar masses in order to capture the
dominant mode of star formation observed locally. It should
already be technologically feasible with current cosmological
hydrodynamical models to simulate these galaxies one star at a time.
Correct initial conditions for early galaxy formation require prior
star and BH formation and feedback. The typically adopted conditions
for phenomenological star formation are velocity convergence, a
critical overdensity, $t_{\rm{dyn}}$~$>$ $t_{\rm{cool}}$, and being Jeans unstable
\citep{Cen92}. Phenomenological primordial star formation is possible
if we include two additional conditions as utilized in \citet{Abel07}.
First, the H$_2$~fraction must exceed 10$^{-3}$ \citep{Abel02a}, and
second, the metallicity of the gas must not exceed some ``critical
metallicity'' of 10$^{-3}$ -- 10$^{-6}$ of the solar value
\citep{Bromm01, Schneider06, Smith07, Jappsen07a, Jappsen07b}. From
prior studies \citep[e.g.][]{Abel02a, Bromm03, OShea05, Greif06}, we
expect these stars to form in halos that can support H$_2$~cooling
and in ones embedded in relic \ion{H}{2} regions. The Lyman-Werner
radiation from massive stars can dissociate H$_2$~from large distances
\citep{Dekel87, Haiman00}, suppress star formation in lower mass halos
\citep{Machacek01, Wise05}, and should be considered to accurately
model future star formation.
BH formation in the death of some primordial stars can also have a
profound effect on surrounding structure formation as the BH accretes
infalling matter during later mergers. In principle, one should
include feedback of seed BHs from primordial stars with masses outside
of the range between 140 and 260 solar masses. Also it is possible to
phenomenologically model SMBH formation in a similar manner as the
stellar case. If the protogalactic collapse occurs faster than the
formation timescale of a massive star, a SMBH may form inside
this region. Using the stellar formation conditions plus this
condition and allowing the particle to accrete \citep[i.e. sink
particles;][]{Bate95, Krumholz04}, protogalactic collapses can be
followed in cosmological hydrodynamic simulations \citep{Clark07}.
These sink particles should regulate the accretion with an appropriate
subgrid model. Important processes include an appropriate accretion
rate (e.g. Eddington or Bondi-Hoyle), turbulence \citep{Krumholz06},
rotational support of the infalling gas, and a viscosity timescale for
accretion discs.
For small galaxies, radiative transfer effects can have a great impact
\citep[e.g.][]{Haehnelt95, Whalen04, Kitayama04, Alvarez06} and should
not be neglected. Ionization front instabilities in these galaxies
create cometary small-scale structure and shadowing effects, which
emerge only with an explicit treatment of three-dimensional radiation
hydrodynamics. Stellar feedback can have both a positive and negative
impact on subsequent star formation. Some examples of positive
feedback include enhanced H$_2$~formation in relic \ion{H}{2} regions
\citep{Ferrara98, OShea05, Johnson07} and dust and metal-line cooling
\citep{Glover03, Schneider06, Jappsen07a}. Negative feedback may
occur from baryonic expulsion from host halos \citep{Whalen04,
Kitayama04, Yoshida07, Abel07} and halo photoevaporation
\citep{Susa06, Whalen08}.
The promising approach of \citet{Gnedin01} has recently been
implemented and coupled with the AMR hydrodynamic code ART
\citep{Gnedin08}. Also, the technique of adaptive ray tracing
\citep{Abel02b} has been implemented into {\sl Enzo}~and used to study the
outflows and ionizing radiation from a primordial star \citep{Abel07}.
This method has also been independently implemented into {\sl Enzo}~by
\citet{Razoumov06}. Finally as used in many stellar formation
routines \citep{Cen92, Tassis03}, we hope to include thermal and
radiative feedback from Population II stars in future studies.
\section{CONCLUSIONS}
We have simulated the hydrodynamics and collapse of a protogalactic
gas cloud in two cosmological AMR realizations. Our focus on the
hydrodynamics presents a basis for future studies that consider
stellar and BH feedback. In the idealized case presented, we find a
central dense object forms on the order of 10$^5 M_\odot$ and $r \lower0.3em\hbox{$\,\buildrel <\over\sim\,$} 5$
pc. This central object is not rotationally supported and does not
fragment in our simulations. However our results do not dismiss disc
formation in protogalaxies because rotationally supported disc
formation may begin after the initial central collapse. Disc
formation may be sensitively affected by feedback from the central
object.
These simulations highlight the relevance of secular bar-like
instabilities in galaxy formation and turbulent collapses. Similar
bar structures are witnessed in primordial star formation simulations.
As low angular momentum gas infalls, it gains rotational energy while
conserving angular momentum. This induces an $m$ = 2, bar-like
instability that transports angular momentum outwards, and the
self-similar collapse can proceed without becoming rotationally
supported and exhibits a density profile $\rho \propto r^{-12/5}$.
This process repeats itself as material infalls to smaller scales,
which is indicative of the ``bars within bars'' scenario. We see three and
four occurrences of embedded secular instabilities in the two
realizations studied here.
We also find that supersonic turbulence influences the collapse by
providing a channel for the gas to preferentially segregate according
to its specific angular momentum. The low angular momentum material
sinks to the center and provides the material necessary for a central
collapse. Here the possibilities of a central object include a direct
collapse into a SMBH \citep[e.g.][]{Bromm03}, a starburst
\citep[e.g.][]{Clark07}, or a combination of both
\citep[e.g.][]{Silk98}. All of these cases are viable in the early
universe, and the occurrence of these cases depends on the merger
history, local abundances in the halo, and the existence of a seed BH.
Moreover, star formation should occur whether a central BH exists or
not. Perhaps the frequency of these different protogalactic outcomes
may be traced with either 3D numerical simulations that consider star
and SMBH formation and feedback along with metal transport or Monte
Carlo merger trees that trace Pop III star formation, metallicities,
and BHs. We will attempt the former approach in future studies to
investigate protogalactic formation in more realistic detail.
\acknowledgments
This work was supported by NSF CAREER award AST-0239709 from the
National Science Foundation. We thank Kristen Menou, Michael Norman,
Ralph Pudritz, Darren Reed, and Tom Theuns for helpful discussions.
We applaud Greg Bryan and Michael Norman for developing an incredible
code. The clarity and presentation of this paper was greatly improved
by constructive comments from an anonymous referee. We are grateful
for the continuous support from the computational team at SLAC. We
performed these calculations on 16 processors of a SGI Altix 3700 Bx2
at KIPAC at Stanford University.
\section{Introduction}\label{intro}
The increasing volume of data has led to the widespread use of cloud computing, which enables users to leverage several computing machines for processing data in a parallel fashion. In order to distribute data among the computing machines of a cloud, distributed data processing frameworks are used as an integral part of modern distributed data processing systems. These frameworks distribute the storage and processing of data over a cluster of computing nodes. Hadoop \cite{1}, Spark \cite{2} and Flink \cite{3} are the most widely used frameworks for developing distributed data processing systems \cite{4}. Hadoop is one of the earliest frameworks and follows the functional programming model of MapReduce. Spark is a novel data processing framework designed to overcome the problems faced in Hadoop, and Flink is the latest entry into the market, offering features for both batch and stream processing.
Given the high computational demand, distributed data processing frameworks are deployed and operated via cloud infrastructure \cite{pu2015low}. A cloud can be deployed in three models – private, public, and hybrid. When a cloud is made available on a pay-as-you-go basis to the general public, it is called a public cloud. A private cloud is exclusively used and maintained by an organization and is not available to the general public. A hybrid cloud is a combination of private and public clouds, offering the best of both worlds \cite{8}. The cloud bursting model of hybrid clouds enables an application to use the private cloud resources and burst into the public cloud when the application faces a resource shortage on the private side. A hybrid cloud benefits its users in terms of security, cost, resilience and so on \cite{8}. For example, if the data comprises confidential and non-confidential information, the confidential data is processed in the private part and the non-confidential data is burst into the public part of a hybrid cloud \cite{9}. Meanwhile, organizations can minimize cost by paying only for the temporary resources acquired from the public cloud under spiking workloads instead of paying to purchase, program, and maintain private resources that would remain idle for most of the time \cite{10}.
Several studies have been conducted to evaluate the performance of distributed data processing frameworks in private and public clouds. Mavridis et al. \cite{11} and Dimopoulos et al. \cite{12} have compared the performance of Hadoop and Spark in private clouds. In \cite{14}, the authors have compared Hadoop with Spark deployed in public clouds. Similarly, the performance of Spark and Flink has been contrasted in private clouds \cite{marcu2016spark} as well as public clouds \cite{16}. However, none of the previous studies have comparatively evaluated the performance of Hadoop, Spark and Flink in hybrid clouds. In comparison to private and public clouds, which offer computational resources located in one data centre, the distance between the public and private data centres in a hybrid cloud plays a significant part in the performance evaluation of these frameworks. Moreover, although the previous studies have compared Hadoop with Spark or Spark with Flink, there is a paucity of empirical research on an apples-to-apples comparison among Hadoop, Spark and Flink. To fill these gaps, this study contributes to the body of knowledge by comparatively evaluating the performance of Hadoop, Spark and Flink in a hybrid cloud.
In this paper, we report on the implementation of a hybrid cloud consisting of a private and a public cloud. We then leverage the hybrid cloud to evaluate the performance of the three most widely used frameworks (Hadoop, Spark and Flink). The evaluation is carried out in terms of \textit{execution time}, \textit{resource utilization (CPU, RAM and disk)}, \textit{cost} and \textit{scalability}. Such an evaluation aims to determine which hybrid cloud configuration works best for which distributed data processing framework. It also aims to determine the impact on execution time when more nodes are borrowed by a private cloud from a public cloud. Scalability is a critical concern for data processing systems due to fluctuating workloads \cite{4}. For example, a banking system may experience a higher workload at the end of a month to process monthly salary transactions. Consequently, distributed data processing systems should scale up/down based on users' needs. Whilst the \$ cost for a private cloud is mostly upfront and related to maintenance, the \$ cost for a public cloud depends upon usage, especially that of VMs and bandwidth. Therefore, the \$ cost, being another key metric in hybrid clouds, is considered in our evaluation.
For the aimed evaluation, we implemented a hybrid cloud consisting of OpenStack\footnote{\label{openstack}https://www.openstack.org/} (the private cloud located in the CREST lab\footnote{https://www.crest-centre.net/}, Adelaide, Australia South region) and Azure\footnote{https://azure.microsoft.com/en-au/} (the public cloud located in Sydney, Australia East region). We configured Hadoop, Spark and Flink over different combinations of private and public cloud nodes/Virtual Machines (VMs). Thus, we varied the number of VMs and the number of cores in the public cloud to study the horizontal and vertical scalability, respectively, of the frameworks in a hybrid cloud \cite{el2014scaling}. We also measured the resources (i.e., CPU, RAM, disk and network) utilized during the execution of the experiments. Moreover, we measured the data transferred and received among the nodes during the experimental executions.
For this evaluative research study, we used both batch and iterative workloads that are commonly used for evaluating Hadoop, Spark and Flink \cite{18}. In a nutshell, this paper makes the following contributions.
\begin{enumerate}
\item We build a hybrid cloud spanning OpenStack and MS Azure for the implementation and evaluation of distributed data processing frameworks in hybrid clouds.
\item We evaluate and compare the performance of Hadoop, Spark and Flink deployed in the hybrid cloud. Our evaluation reveals several insights in terms of execution time, scalability, data transferred/received, resource utilization and data processing cost of Hadoop, Spark and Flink.
\end{enumerate}
The rest of this paper is organized as follows. Section \ref{frameworks} introduces the studied frameworks. Section \ref{hyrbid-section} reports the hybrid cloud implementation. Section \ref{experimental-setup} describes our experimental setup. Section \ref{results} presents the results, which are followed by practical observations reported in Section \ref{lessons}. Section \ref{related} presents related work and Section \ref{conclusion} concludes the paper.
\section{Distributed Data Processing Frameworks}
\label{frameworks}
Distributed data processing frameworks are employed in a system for distributing the storage and processing of data across a cluster of nodes. In this study, we have selected the three most popular frameworks, Hadoop, Spark and Flink, based on the fact that they are the most widely used and investigated frameworks in industry and academia \cite{4,19,30}. \textbf{Hadoop} is one of the earliest and most widely adopted disk-based data processing frameworks. As an open-source batch processing framework, it follows the MapReduce functional model and is implemented in Java \cite{1}. It is composed of two layers - a data storage layer called the Hadoop Distributed File System (HDFS) and a data processing layer - the Hadoop MapReduce Framework. \textbf{Spark} is a novel framework launched to overcome the problems faced while using Hadoop (e.g., user interface and language compatibility) \cite{marcu2016spark},\cite{20}. Unlike Hadoop, Spark is a memory-based framework with HDFS as its input source and output destination. Before an operation, the user driver program launches multiple workers to read data from a distributed file system and cache them in memory as partitions of a Resilient Distributed Dataset (RDD). This feature enables Spark to avoid reloading data from disk at each iteration and boosts the data processing speed. In Spark, most of the map tasks are processed before the reduce process starts. \textbf{Flink} is a relatively new open-source memory-based framework suitable for both batch and stream processing \cite{3,19}. Flink uses a high-throughput streaming engine written in Java and Scala. Similar to Hadoop and Spark, Flink follows a master-slave architecture. However, unlike Spark, which implements iterations as for loops, Flink executes iterations as cyclic data flows. The iterative process in Flink significantly speeds up certain algorithms by reducing the work in each subsequent iteration. For details on these frameworks, interested readers can refer to \cite{1,2,3,4}.
\section{Hybrid Cloud Implementation}\label{hyrbid-section}
In this section, we describe the implementation of the hybrid cloud used for the evaluation of Hadoop, Spark and Flink.
\subsection{Private and Public Clouds}
A hybrid cloud combines a private cloud with a public cloud. Here, we used OpenStack and Azure as the private and public clouds, respectively. OpenStack is an open-source framework that provides a set of tools to create and manage highly scalable and flexible private clouds. Azure is Microsoft's public cloud platform for creating, deploying and testing applications. To implement the hybrid cloud, we used the on-demand usage model (also called cloud bursting), where the consumer is the private cloud and the donor is the public cloud. In such a setup, under a spiking workload, the private cloud borrows resources (e.g., VMs) from the public cloud.
\subsection{Cloud Connectivity}
A secure connection between the private and public clouds is a significant part of a hybrid cloud because VMs are deployed across different networks in the two clouds \cite{4}. Azure provides various VPN connectivity solutions; however, most of them are costly. To build a secure and cost-free connection between OpenStack and Azure, we opted for WireGuard \cite{22}, a Linux kernel-based VPN. WireGuard is user-friendly and offers several benefits such as security, connection reliability, zero cost and high throughput \cite{23}.
\begin{figure}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{figs/Fig_New/steps_of_hybird_cloud_implementation.png}}
\caption{Steps of hybrid cloud implementation}
\vspace{-2 em}
\label{Hybrid-cloud}
\end{figure}
\subsection{Infrastructure Resource Deployment}
\label{infras-deploy}
Several tools, such as command-line interfaces and portable web services, are available to deploy resources in a public cloud. However, those tools only support deploying limited resources with static behaviours. Therefore, to deploy resources in a dynamic way, we leveraged Terraform \cite{24} – an open-source tool for cloud resource management. Terraform enabled us to define and execute resource deployment in large-scale clusters in a repeatable manner with little user intervention.
\subsection{Hybrid Cloud Implementation}
\label{implement-hybrid}
Here, we outline the steps we followed to set up the hybrid cloud. The implementation, inspired by \cite{23}, consists of 8 steps as shown in \autoref{Hybrid-cloud}. \textit{Step 1 - Azure broker configuration:} First, we used WireGuard to create private and public keys, which are used to create the WireGuard configuration file (wg0) for the broker VM on Azure. \textit{Step 2 - Azure broker creation:} This step used Terraform to create the Azure broker and then install WireGuard on the broker. \textit{Step 3 - OpenStack broker configuration:} As in Step 1, this step used WireGuard to create a private and a public key, but for the OpenStack broker. Then, the WireGuard config file (wg0) was created using the public keys of the OpenStack and Azure broker VMs. \textit{Step 4 - OpenStack broker creation:} This step leveraged Terraform to create the OpenStack broker and install WireGuard on the broker VM. \textit{Step 5 - Authorization of the OpenStack broker VM:} In order to connect the OpenStack broker VM to the Azure broker VM, the public key of the OpenStack broker VM was added to the WireGuard configuration file (wg0) of the Azure broker VM. \textit{Step 6 - Shared networks creation:} This step created shared networks (for hosting VMs) on Azure and OpenStack. \textit{Step 7 - Peering:} The Azure broker network was connected to the Azure shared network, and the OpenStack broker network was connected to the OpenStack shared network. \textit{Step 8 - Data routing:} This step implemented data routing in Azure and OpenStack. In Azure, the broker network's routing table was linked with the shared network and then updated in accordance with the shared network. In OpenStack, data routing was implemented by simultaneously adding static rules to the routers in the broker and shared networks.
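Steps 1, 3 and 5 amount to exchanging public keys between the two broker VMs and writing them into each broker's \texttt{wg0} configuration. A minimal sketch of that file generation follows; all keys, addresses and endpoints are placeholders, not our actual deployment values.

```python
def wireguard_config(private_key, address, listen_port, peer_public_key,
                     peer_endpoint, peer_allowed_ips):
    """Render a minimal WireGuard wg0.conf for one broker VM with a
    single peer (the broker on the other cloud)."""
    return "\n".join([
        "[Interface]",
        f"PrivateKey = {private_key}",
        f"Address = {address}",
        f"ListenPort = {listen_port}",
        "",
        "[Peer]",
        f"PublicKey = {peer_public_key}",
        f"Endpoint = {peer_endpoint}",
        f"AllowedIPs = {peer_allowed_ips}",
    ])

# OpenStack broker config pointing at the Azure broker (placeholder values).
conf = wireguard_config("OPENSTACK_PRIVATE_KEY", "10.0.0.2/24", 51820,
                        "AZURE_PUBLIC_KEY", "azure-broker.example.com:51820",
                        "10.0.0.0/24")
print(conf.splitlines()[0])  # prints "[Interface]"
```

The same template, with the key roles swapped, yields the Azure broker's configuration, which is how Step 5 authorizes the OpenStack broker.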
After the implementation, a bandwidth measurement experiment was performed in our hybrid cloud. This experiment is important for understanding the results presented in Section \ref{results}, as the bandwidth among nodes directly impacts the execution time. iperf3\footnote{https://iperf.fr/}, a cross-platform measurement tool, was installed on all nodes to measure the bandwidth, as presented in \autoref{bandwidth}. The network connection between VMs residing on the same physical machine in OpenStack achieves a mean bandwidth of 15 Gbits/s, while it is only 1.06 Gbits/s in Azure. The mean bandwidth between VMs residing on different physical machines is comparatively lower, i.e., 3.52 Gbits/s in OpenStack and 923 Mbits/s in Azure. In contrast, the bandwidth of the WAN connecting the VMs in OpenStack to the VMs in Azure is only 202.91 Mbits/s. Hence, it is evident that the bandwidth between nodes in the same cloud is much higher than that between a node in OpenStack and a node in Azure.
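iperf3 can emit its report as JSON (the \texttt{-J} flag), which makes automated collection across many node pairs straightforward. A minimal sketch of extracting the receiver-side throughput follows; the sample JSON is a hand-made fragment mirroring the WAN figure above, not a real capture.

```python
import json

def mean_bandwidth_gbits(iperf_json: str) -> float:
    """Extract receiver-side throughput (Gbit/s) from `iperf3 -J` output."""
    report = json.loads(iperf_json)
    bps = report["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

# Hand-made fragment corresponding to the 202.91 Mbit/s WAN link
sample = '{"end": {"sum_received": {"bits_per_second": 202910000.0}}}'
wan = mean_bandwidth_gbits(sample)  # ~0.203 Gbit/s
```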
\begin{figure}[!t]
\captionsetup{justification=centering}
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figs/Fig_New/bandwidth_clouds.png}
\end{subfigure}
\vspace{-0.5 em}
\caption{Bandwidth between different pairs of nodes in our hybrid cloud setup}
\label{bandwidth}
\vspace{-0.5 em}
\label{bandwidth-values}
\vspace{-1.0 em}
\end{figure}
\section{Experimental Setup}\label{experimental-setup}
\subsection{Benchmarking Workloads}\label{workloads}
Grep and K-means were selected as benchmarking workloads. Grep is a batch workload that implements one-pass processing, where the input is processed exactly once, to search plain text. This workload uses Wikipedia text files \cite{marcu2016spark}. K-means is a well-known algorithm for clustering data \cite{18}. It is an iterative workload that implements loop-caching, where the same input is processed multiple times. This workload receives its input from a random data generator. We chose a data size of 5 GB for each workload in view of two factors: (i) the time and cost incurred by the range of experiments conducted and (ii) the reliability of the experimental findings. Conducting large-scale experiments on Azure costs a significant amount - 0.03\$/hour for one vCPU and 0.13\$/GB of data sent. To ensure that cost limitations do not threaten the reliability of our findings, we compared our data size with state-of-the-art studies (e.g., \cite{14},\cite{zhang2019meteor}) on Hadoop, Spark and Flink. We found that the chosen data size is equal to or larger than those used in the most closely related works, such as 350 MB in \cite{14} and 640 MB in \cite{zhang2019meteor}.
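The access-pattern difference between the two workloads can be sketched in a few lines: Grep touches each input record exactly once, whereas K-means re-reads the same (cacheable) input on every iteration. The toy one-dimensional version below is purely illustrative, not the HiBench/BigDataBench implementation.

```python
def grep_count(lines, pattern):
    """One-pass batch workload: each input line is read exactly once."""
    return sum(1 for line in lines if pattern in line)

def kmeans_1d(points, centers, iters=10):
    """Iterative workload: the same input is re-read every iteration (loop-caching)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # full pass over the cached input, once per iteration
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

This repeated re-reading is exactly what Spark and Flink exploit by caching intermediate data in memory between iterations.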
\subsection{Infrastructure}
\label{infrastructure}
Our experimental setup consisted of 17 VMs – one master and 16 worker nodes deployed on a hybrid cloud consisting of private (OpenStack) and public (Azure) clouds. Since conducting large-scale experiments on a public cloud incurs a significant cost, we had to choose a reasonable cluster size while avoiding threats to the reliability of our findings. We first consulted related studies (e.g., 8 VMs in \cite{14}, 6 VMs in \cite{11},\cite{20}, 4 VMs in \cite{25} and 3 VMs in \cite{16}), and chose a cluster consisting of 17 VMs. Then, we conducted an experiment to observe the trend as the number of VMs increases. As presented in Section \ref{cluster-scaling-section}, the trend continues smoothly when the number of VMs increases. Hence, even a further increase in the number of VMs is unlikely to contradict our findings. For evaluating the impact of cloud bursting on the execution time, we considered different combinations of VMs that could be distributed and deployed on the private and public clouds, denoted by \textit{(x\_y)}, where \textit{x} and \textit{y} respectively represent the number of VMs in the private and public clouds. The configurations are (16\_0), (14\_2), (12\_4), (10\_6), (8\_8), (6\_10), (4\_12) and (2\_14)\footnote{Throughout the text and figures/tables, \textit{(x\_y)} denotes a hybrid cloud configuration where \textit{x} and \textit{y} respectively represent the number of VMs deployed in the private and public cloud.}.
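The eight configurations follow a regular pattern, shifting two workers at a time from the private to the public cloud, and can be enumerated programmatically:

```python
def hybrid_configs(total=16, step=2):
    """Enumerate (private, public) worker splits from non-bursting to full-bursting."""
    return [(total - pub, pub) for pub in range(0, total, step)]

configs = hybrid_configs()  # (16, 0), (14, 2), ..., (2, 14)
```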
\begin{table}[!t]
\captionsetup{justification=centering}
\caption{Specification of the virtual machines}
\vspace{-1.5 em}
\scriptsize
\renewcommand{\arraystretch}{1.1}
\begin{center}
\begin{tabular}{l||c|c}
\hline
\textbf{Feature} \hspace{1 em} & \hspace{0.5 em} \textbf{Private Cloud (OpenStack)} \hspace{0.5 em} & \hspace{0.5 em} \textbf{Public Cloud (Azure)} \hspace{0.5 em}\\
\hline
\hline
CPU & 1 vCPU & 1 vCPU\\
RAM & 2 GB & 2 GB\\
Disk & 10 GB & 30 GB\\
Location & Adelaide & Sydney\\
\hline
\end{tabular}
\vspace{-3 em}
\label{summaryofvms}
\end{center}
\end{table}
In our setup, the master VM, equipped with a 2-core CPU, 4 GB RAM and a 40 GB disk, is hosted in the private cloud, while the distribution of the worker VMs varies based on the cloud configuration. The specification of the worker VMs used in OpenStack and Azure is presented in \autoref{summaryofvms}. We use Terraform \cite{24} to reliably create and destroy VMs in the private and public clouds.
Given that our study required frequent creation and destruction of VMs, configuration of distributed data processing frameworks and installation of benchmarking suites, we automated the whole process via bash scripts to ensure minimal human intervention. During the experiments, we always destroy the used cluster and create a fresh one to avoid the impact of a previous run/setup. Each experiment is executed three times to mitigate run-to-run variability, and the mean results are reported.
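The procedure above can be sketched as a simple loop; the stub callables below stand in for our actual bash scripts, which are not reproduced here.

```python
import statistics

def run_experiment(create, destroy, run_job, repeats=3):
    """Fresh cluster per run; report the mean of repeated measurements."""
    times = []
    for _ in range(repeats):
        destroy()   # tear down any cluster left by a previous run/setup
        create()    # provision a fresh cluster (e.g., via Terraform + bash)
        times.append(run_job())  # measured execution time of one run
    return statistics.mean(times)
```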
\subsection{Experimental Scenarios}\label{experimental-scenarios}
\label{scenarios}
The experimental scenario underpins the way each experiment is executed. In our scenario, the user connects to the master node deployed in the private cloud via SSH. Using the master node, \textit{step 1} is to generate the dataset for the workload using benchmarking suites such as HiBench \cite{17} and BigDataBench \cite{18}. In \textit{step 2}, the dataset is uploaded to HDFS from the local file system of the nodes deployed on the private and public clouds. In \textit{step 3}, the measurement probes for the various metrics presented in Section \ref{metrics} are activated. In \textit{step 4}, the data processing job starts. Once data processing is completed, the results are recorded into a file. The execution time reported in Section \ref{results} is the time consumed in \textit{step 4}, i.e., data processing. The time consumed in data generation, data transmission, and uploading/downloading data to/from HDFS is reported separately in Section \ref{results_cloud_bursting}. In this study, data processing time is the focus of evaluating the three frameworks, since these frameworks operate independently of data generation, data transfer and data upload/download.
\subsection{Evaluation Metrics}\label{metrics}
During our experiments, we measured 10 metrics, i.e., execution time, data transferred, data received, vertical scalability, horizontal scalability, cost, CPU usage, RAM usage, disk read and disk write.
The time consumed in each data processing phase (e.g., map and reduce) is calculated based on log file analysis. We used iftop\footnote{https://www.tecmint.com/iftop-linux-network-bandwidth-monitoring-tool/}, running on each node, to measure the data transferred and received during experiment execution. In addition, Dstat\footnote{https://linux.die.net/man/1/dstat} is used to measure resource utilization (i.e., CPU, RAM and disk). Horizontal scalability is measured by increasing the number of nodes in the public cloud, and vertical scalability is measured by increasing the number of cores of a single node in the public cloud (details in Section \ref{scalability-s}).
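The per-second Dstat samples are averaged over each run before being reported. A minimal sketch of that aggregation follows; the CSV fragment and column names are illustrative only, as Dstat's real CSV export carries additional header rows.

```python
import csv, io

def mean_cpu_usage(dstat_csv: str) -> float:
    """Mean CPU busy %% from a dstat-style CSV with 'usr' and 'sys' columns."""
    rows = list(csv.DictReader(io.StringIO(dstat_csv)))
    busy = [float(r["usr"]) + float(r["sys"]) for r in rows]  # busy = user + system time
    return sum(busy) / len(busy)

# Two illustrative one-second samples: 50% and 60% busy
sample = "usr,sys,idl\n40,10,50\n50,10,40\n"
```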
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.21\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/gp-bursting.jpg}
\vspace{-1.5 em}
\caption{Grep}
\label{cloud-burst-a}
\end{subfigure}
\hspace{1 em}
\begin{subfigure}{.21\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/km-bursting.jpg}
\vspace{-1.5 em}
\caption{K-means}
\label{cloud-burst-b}
\end{subfigure}%
\vspace{-0.5 em}
\caption{Execution time of Hadoop, Spark and Flink for Grep and K-means with various hybrid cloud configurations
}
\vspace{-0.5 em}
\label{bursting_grep_kmeans}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\centering
\vspace{-0.3 em}
\begin{subfigure}{.35\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/bandwidth-impact.jpg}
\end{subfigure}
\vspace{-0.7 em}
\caption{Impact of bandwidth of the network connection between private and public cloud for (8\_8) configuration}
\label{bandwidth-impact}
\vspace{-2 em}
\end{figure}
\begin{figure*}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-hadoop-gp.jpg}
\caption{Hadoop-Grep}
\label{phases-a}
\end{subfigure}%
\hspace{0.2 em}
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-hadoop-km.jpg}
\caption{Hadoop-Kmeans}
\label{phases-b}
\end{subfigure}%
\hspace{0.2 em}
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-spark-gp.jpg}
\caption{Spark-Grep}
\label{phases-c}
\end{subfigure}
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-spark-km.jpg}
\caption{Spark-Kmeans}
\label{phases-d}
\end{subfigure}
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-flink-gp.jpg}
\caption{Flink-Grep}
\label{phases-e}
\end{subfigure}
\begin{subfigure}{.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/stages-flink-km.jpg}
\caption{Flink-Kmeans}
\label{phases-f}
\end{subfigure}
\caption{Execution time per stage of data processing. The same legend applies to Figs. (a) and (b)}
\label{framework-phases}\vspace{-2.5 em}
\hspace{2.0 em}
\end{figure*}
\section{Results}\label{results}
\subsection{\textit{Impact of Cloud Bursting}}\label{results_cloud_bursting}
Fig. \ref{bursting_grep_kmeans} shows the execution time of Hadoop, Spark and Flink in the hybrid cloud for the batch (Grep) and iterative (K-means) workloads. Generally, for all three frameworks, the execution time increases as more nodes are borrowed from the public cloud, i.e., as we move from non-bursting (16\_0) to full-bursting (2\_14). This can be attributed to two factors: network bandwidth and disk I/O. As depicted in Fig. \ref{bandwidth-impact}, the bandwidth between nodes in the private cloud is 3.52 Gbits/s, between nodes in the public cloud it is 1.06 Gbits/s, and across the private and public clouds (WAN) it is 202 Mbits/s. As we borrow more nodes from the public cloud, more connections with lower bandwidth are used. This is evident from the amount of data transferred between nodes in the various hybrid cloud configurations illustrated in Fig. \ref{data-transfer} (details in Section \ref{data-transfer-received}). To illustrate the impact of bandwidth, we ran a small-scale experiment in which we used Traffic Control (a Linux utility) to decrease the bandwidth of the network connection between the private and public clouds for the (8\_8) configuration (executing Grep) by 32\% of its original value. The result depicted in Fig. \ref{bandwidth-impact} reveals that as the bandwidth decreases, the execution time increases. As more nodes are exploited in the public cloud, the disk usage increases, as depicted in Fig. \ref{resource-utilization-maps} (details in Section \ref{resource-utilization}). This is because the nodes in the public cloud are equipped with a 30 GB disk, compared to a 10 GB disk on the nodes in the private cloud, as shown in Table \ref{summaryofvms}.
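A back-of-envelope calculation shows why the WAN dominates: moving the roughly 4 GB that Hadoop transfers during K-means takes over an order of magnitude longer on the 202 Mbit/s WAN than on the 3.52 Gbit/s private-cloud links (assuming sustained link rates and decimal units):

```python
def transfer_seconds(gigabytes: float, mbits_per_s: float) -> float:
    """Time to move a payload over a link: payload bits / link rate."""
    return gigabytes * 8 * 1000 / mbits_per_s  # GB -> Gbit -> Mbit

wan = transfer_seconds(4.0, 202)    # ~158 s over the WAN
lan = transfer_seconds(4.0, 3520)   # ~9 s inside the private cloud
```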
\begin{figure*}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/8_8-hadoop-kmeans.png}
\caption{Hadoop}
\label{vertical-d}
\end{subfigure}%
\hspace{1 em}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/8_8-spark-kmeans.png}
\caption{Spark}
\label{vertical-e}
\end{subfigure}%
\hspace{1 em}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/8_8-flink-kmeans.png}
\caption{Flink}
\label{vertical-f}
\end{subfigure}%
\hspace{1 em}
\caption{Data transferred/received (in MB) among the nodes executing K-means in (8\_8). M denotes the master node and W1 - W16 denote the worker nodes.}
\label{data-transfer-maps}
\vspace{-1 em}
\end{figure*}
\begin{figure*}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.225\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/private-private.jpg}
\caption{Within Private Cloud}
\label{data-transfer-a}
\end{subfigure}%
\hspace{0.5 em}
\begin{subfigure}{.225\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/private-public.jpg}
\caption{Private to Public Cloud}
\label{data-transfer-b}
\end{subfigure}%
\hspace{0.5 em}
\begin{subfigure}{.225\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/public-private.jpg}
\caption{Public to Private Cloud}
\label{data-transfer-c}
\end{subfigure}
\hspace{0.5 em}
\begin{subfigure}{.225\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/public-public.png}
\caption{Within Public Cloud}
\label{data-transfer-d}
\end{subfigure}
\vspace{-0.5 em}
\caption{Data transferred/received in our hybrid cloud infrastructure during the execution of K-means}
\label{data-transfer}\vspace{-1.8 em}
\end{figure*}
On average, Hadoop is around 27.5\% and 33.6\% slower than Spark and Flink, respectively. This is attributed to the disk-based architecture of Hadoop, which, unlike Spark and Flink, heavily leverages the disk during data processing.
Since disk-based operations are computationally heavy and time-consuming, they slow Hadoop down. This trend continues across almost all cloud configurations, as shown in Fig. \ref{resource-d}. Furthermore, Hadoop spends a lot of time in the data shuffling stage, during which data is written to the disk in a sequential manner. Flink outperforms Spark by around 8.1\% in terms of execution time. While both Spark and Flink are memory-based frameworks, the in-built optimizer of Flink enables it to use CPU and RAM more efficiently, as reported in Section \ref{resource-utilization}. When moving from non-bursting (16\_0) to full-bursting (2\_14), the mean execution time of Hadoop, Spark and Flink increases by 3.4$\times$, 2.2$\times$ and 2.1$\times$, respectively. The slowdown of Hadoop is higher than that of Spark and Flink because Hadoop transfers more data among the nodes during data processing, as shown in Fig. \ref{total-data-transfer}. For instance, Hadoop, Spark and Flink transfer around 4 GB, 2.5 GB and 2.7 GB of data, respectively, during the execution of K-means.
With respect to workloads, the difference among Hadoop, Spark and Flink is less pronounced for the batch workload than for the iterative workload. This is because Hadoop was originally designed for batch workloads, while Spark and Flink are designed for iterative workloads. Unlike Hadoop, Spark and Flink cache intermediate data across nodes between iterations to improve efficiency. Fig. \ref{bursting_grep_kmeans} also shows that the mean execution time increases by 3.4$\times$ and 1.4$\times$ for the batch and iterative workloads, respectively, as we move from (16\_0) to (2\_14).
In Fig. \ref{bursting_grep_kmeans}, the increase in execution time is not perfectly smooth when borrowing more nodes from the public cloud. This is a consequence of two uncontrolled factors: fluctuation in the WAN bandwidth and the load distribution among the workers. Except for (16\_0), all hybrid cloud configurations leverage the WAN connection for data transfer between the private and public clouds. When the strength of the WAN connection fluctuates, it directly impacts the execution time. Secondly, the load distribution among the workers is not even by default, which further impacts the execution time. As shown in Fig. \ref{resource-utilization-maps}, worker W12 leverages only 41\% CPU and 0.42 GB RAM with Spark during the execution of K-means, while the other workers are used to a higher extent (49 - 55\%). The under-utilization of workers may be caused by multiple reasons, including network congestion and software failure. For example, a worker may not be available to respond to the master; the master node then has to send more work to the other available workers. As mentioned for Fig. \ref{bursting_grep_kmeans} and Fig. \ref{bandwidth-impact}, the execution time here is the time consumed by the frameworks during data processing. The time consumed in data generation, uploading/downloading data to/from HDFS/the local file system, and data transfer is presented in Table \ref{data-generation-time}. Similar to the data processing time, the time for these other operations is also higher when more nodes are deployed in the public cloud. In addition, more time is consumed in uploading data from the local file system to HDFS on the public cloud.
\begin{table}
\captionsetup{justification=centering}
\caption{Time (sec) consumed during various operations in the three hybrid cloud configurations.
}
\centering
\vspace{-0.5 em}
\begin{tabular}{l||r|r|r}
\hline
\textbf{Operation} & \textbf{14\_2} & \textbf{8\_8} & \textbf{2\_14} \\ \hline \hline
Data generation & 104 & 240 & 415 \\
Downloading data from HDFS to local & 60 & 177 & 299 \\
Data transfer from private to public cloud & 529 & 375 & 932 \\
Uploading data to HDFS from local & 902 & 1,106 & 1,352 \\
\hline
\end{tabular}
\vspace{-2 em}
\label{data-generation-time}
\end{table}
\subsection{Data Processing Phases}
Fig. \ref{framework-phases} shows the time consumed in each phase of data processing. It is evident that Spark takes significant time (around 73 s) in framework initialization. This is because when the execution of a Spark job starts, it creates a SparkContext, which is a slow process.
Flink takes around 45 s to initialize. In contrast to Spark and Flink, Hadoop takes minimal time (around 5 s) to initialize. These results imply that the actual data processing time is not dominated by the framework initialization time in our study. As expected, the framework initialization time is similar for the executions of Grep and K-means. However, the phases of the execution of Grep and K-means are different in Spark and Flink. Fig. \ref{framework-phases} shows that the time consumed in each phase is negligibly impacted by the hybrid cloud configuration. Spark and Flink do not have a shuffle map phase for Grep because they only perform the filter transformation and the result stage in the execution of Grep \cite{marcu2016spark}.
Comparing Fig. \ref{phases-b} and Fig. \ref{phases-d}, we observe that Hadoop takes a longer time in the shuffle phase than Spark. This is because during the shuffle stage, Hadoop workers write data to the disk in a sequential manner, facing a synchronization barrier where each thread has to wait for the preceding thread to finish writing. On the other hand, Spark caches most of the data in memory during the shuffle phase, since Spark is a memory-based framework with no such synchronization barrier.
\subsection{Data Transfer/Received}\label{data-transfer-received}
Fig. \ref{data-transfer-maps} shows the data transferred between each pair of nodes during the execution of K-means by Hadoop, Spark and Flink under the (8\_8) hybrid cloud configuration. We can observe that Hadoop engages almost all nodes in large data transfers. This is because Hadoop always transfers the whole data among all nodes, while Spark and Flink can reduce the size of the data in memory after serialization \cite{shi2015clash}. In Spark and Flink, only a few nodes receive large amounts of data. For example, with Spark, only worker W15 made a single large data transfer (85.2 MB), to worker W8. The master node transfers uneven amounts of data to the workers, ranging from 0.03 - 19.9 MB in Hadoop and 0.16 - 45 MB in Spark, while even amounts of data, ranging from 0.03 - 0.12 MB, are transferred from the master to the workers in Flink. As far as the data transferred from the workers to the master is concerned, all three frameworks transfer almost even amounts of data, ranging from 0.27 - 0.73 MB, to the master node. We did not observe a significant difference in data transfer with respect to the hybrid cloud configurations (e.g., (16\_0) and (2\_14)). Therefore, we only report the data transfer among the nodes for one representative configuration, i.e., (8\_8).
Fig. \ref{total-data-transfer} shows the total amount of data communicated among the nodes during the execution of K-means. Overall, Hadoop transfers 4.31 GB of data among the nodes, followed by Flink (2.19 GB) and Spark (1.45 GB). The data transfer for Spark is quite low because transformations on RDDs in Spark are lazy in nature, which helps Spark minimize data transfer between the nodes. Unlike the total data transfer shown in Fig. \ref{total-data-transfer}, Fig. \ref{data-transfer} depicts the data transferred within or between the public and private clouds. For (16\_0), all data transfers happened among the nodes within the private cloud, since there are no nodes deployed in the public cloud. For (2\_14), most of the data is transferred among the nodes in the public cloud, as 14 out of 16 nodes are deployed there. For (8\_8), most of the data transfers happened across the clouds using the WAN. These results show that these data processing frameworks do not consider the underlying cloud infrastructure (e.g., bandwidth) to optimize the data transfer among the nodes.
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.28\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/total-data-transfer.jpg}
\end{subfigure}
\vspace{-0.5 em}
\caption{Total data transferred/received among the nodes during the execution of K-means.}
\label{total-data-transfer}
\vspace{-1 em}
\end{figure}
\begin{figure*}
\captionsetup{justification=centering}
\centering
\vspace{-1 em}
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/CPU.png}
\caption{CPU Usage (\%)}
\label{resource-a}
\end{subfigure}%
\hspace{1 em}
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/RAM.png}
\caption{RAM Usage (GB)}
\label{resource-b}
\end{subfigure}%
\hspace{1 em}
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/disk-read.png}
\caption{Disk Read (MB)}
\label{resource-c}
\end{subfigure}%
\hspace{1 em}
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/Fig_New/disk-write.png}
\caption{Disk Write (MB)}
\label{resource-d}
\end{subfigure}%
\hspace{1 em}
\vspace{-0.5 em}
\caption{Resource utilization during the execution of K-means. H, S and F denote Hadoop, Spark and Flink, respectively.
}
\label{resource-utilization-maps}
\vspace{-1 em}
\end{figure*}
\subsection{Resource Utilization}\label{resource-utilization}
Fig. \ref{resource-utilization-maps} shows the mean utilization of CPU, RAM and disk during the execution of K-means. In each hybrid cloud configuration shown in Fig. \ref{resource-utilization-maps}, the first set of nodes is deployed in the private cloud and the last set in the public cloud. For example, in (14\_2), workers W1 - W14 are deployed in the private cloud and W15 - W16 in the public cloud. The values presented in Fig. \ref{resource-utilization-maps} are averaged over the per-second values collected using Dstat. We observe that the CPU usage of the nodes in the public cloud is higher than that of the nodes in the private cloud. This is because the CPU capacity of the nodes in the public cloud is 2.4 GHz, compared to 3.0 GHz in the private cloud. Therefore, the nodes in the public cloud utilize the CPU more to keep up with the private cloud nodes. The CPU usage of the master node is quite low compared to the worker nodes. This is because the master node acts only as a manager, without directly contributing to the job execution. Similarly, the percentage of RAM used by the master node is lower than that of the worker nodes. However, the absolute RAM usage of the master node is higher than that of the worker nodes in Fig. \ref{resource-b}, because the master node is equipped with 4 GB RAM while the worker nodes are equipped with 2 GB RAM. Overall, the CPU usage ranges from 24 - 79\%, which reveals two facts: (i) all the nodes worked well in our cluster; (ii) the CPU usage did not reach 100\% due to job dependencies during data processing. For example, suppose W1 needs a particular piece of data, produced by W2, to execute its task. W1 then has to wait until W2 produces the required data, which reduces the overall CPU usage. As expected, the RAM usage of Spark and Flink is higher than that of Hadoop. However, Flink's RAM usage is also higher than that of Spark.
A potential reason for this is that Spark leverages the JVM's heap memory, while Flink maintains its own memory stack, which is more optimally designed.
Fig. \ref{resource-c} and Fig. \ref{resource-d} depict the disk reads and writes. The disk reads and writes of the master node are much lower than those of the worker nodes. On average, Hadoop, Spark and Flink write 2.38 MB/s, 1.19 MB/s and 1.13 MB/s to the disk, respectively. This is one reason why Hadoop is slower than Spark and Flink, since disk I/O operations are computationally heavy. In terms of disk reads, the trend is the opposite: Hadoop, Spark and Flink read around 0.85 MB/s, 1.69 MB/s and 3.00 MB/s from the disk, respectively. We also observe that the disk writes of the nodes deployed in the public cloud are higher than those of the nodes in the private cloud. A potential reason is the higher disk capacity (30 GB) of the nodes in the public cloud, while the nodes in the private cloud only have a 10 GB disk.
\subsection{Horizontal and Vertical Scalability}\label{scalability-s}
Horizontal scalability refers to scaling up a system by adding new computing nodes, while vertical scalability refers to scaling up a system by adding more cores to the same computing node. While previous studies (e.g., \cite{4},\cite{14,marcu2016spark,16}) have explored the scalability of these frameworks in private or public clouds, we investigate how these frameworks scale in the hybrid cloud. To assess horizontal scalability, we fixed one VM with the default settings (1 vCPU and 2 GB RAM) in the private cloud and varied the number of default VMs in the public cloud from 1 to 2 and from 2 to 4. To investigate vertical scalability, we deployed one default VM in the private cloud and one default VM in the public cloud. Then, we varied the number of cores of the VM deployed in the public cloud from 1 to 2 and from 2 to 4. The single VM in the private cloud is deployed in order to adhere to the definition of a hybrid cloud. For the sake of a fair comparison, the same resources (e.g., RAM size) are allocated to the various setups in the vertical and horizontal scalability investigations. For example, the 4-core VM in the vertical scalability investigation and the four VMs in the horizontal scalability investigation are both equipped with 8 GB RAM in total.
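The fair-comparison constraint, i.e., equal aggregate resources across the horizontal and vertical setups, can be checked mechanically. The sketch below assumes the 8 GB RAM figure refers to the aggregate of the four default 2 GB VMs; the tuples are our own illustrative encoding, not configuration data from the experiments.

```python
def total_resources(vms):
    """Sum (cores, ram_gb) across a cluster to check the fair-comparison constraint."""
    return (sum(c for c, _ in vms), sum(r for _, r in vms))

horizontal = [(1, 2)] * 4   # four default VMs: 1 vCPU, 2 GB RAM each
vertical = [(4, 8)]         # one scaled-up VM: 4 cores, 8 GB RAM
```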
Fig. \ref{scalability} shows the impact of horizontal and vertical scalability on Hadoop, Spark and Flink in the hybrid cloud. As expected, the execution time decreases with the increase in the number of VMs (horizontal scalability) or the number of cores (vertical scalability). When doubling the number of VMs, the mean execution time reduces by 51.0\%, 33.7\% and 41.7\% for Hadoop, Spark and Flink, respectively. When doubling the number of cores, the execution time reduces by 39.3\%, 28.1\% and 31.9\% for Hadoop, Spark and Flink, respectively. This shows that horizontal scaling leads to a larger reduction in execution time than vertical scaling in a hybrid cloud. This is because, with vertical scaling, the scale-up is bottlenecked by the low-capacity node in the private cloud due to non-optimal load distribution; the increase in the capacity of the node in the public cloud is therefore not fully utilized. As shown in Fig. \ref{cpu-scalability-3}, worker 1 has 2 cores while worker 2 has 8 cores. Therefore, worker 2 is only utilized at around 25\%, since it depends on a weaker node, i.e., worker 1. In Fig. \ref{cpu-scalability-5}, all workers have 2 cores, hence all workers are almost equally utilized.
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.2\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/horizontal-scalability-km.jpg}
\caption{Horizontal scalability}
\label{horizontal}
\end{subfigure}
\hspace{1 em}
\begin{subfigure}{.2\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/vertical-scalability-km.jpg}
\caption{Vertical scalability}
\label{vertical}
\end{subfigure}
\vspace{-0.5 em}
\caption{Scalability as the number of VMs (horizontal) or number of cores (vertical) increases in the public cloud}
\label{scalability}
\vspace{-0.5 em}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.23\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/cpu-scalability-v-box.jpg}
\caption{3 Node Cluster}
\label{cpu-scalability-3}
\end{subfigure}
\begin{subfigure}{.23\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/cpu-scalability-h-box.jpg}
\caption{6 Node Cluster}
\label{cpu-scalability-5}
\end{subfigure}
\vspace{-0.5 em}
\caption{CPU Usage in cluster with 3 nodes (vertical scaling) and with 6 nodes (horizontal scaling)}
\label{cpu-scalability}
\vspace{-2 em}
\end{figure}
\subsection{Cluster Scaling}\label{cluster-scaling-section}
Unlike the previous section, where we scale the cluster only in the public cloud, in this section we scale the overall cluster size. The motivation for such scaling is to determine the trend with respect to an increasing number of nodes. Moreover, this experiment justifies the cluster size (16 worker nodes) used for the experiments reported in this paper. We conducted the experiment with various numbers of nodes, i.e., 4, 8, 12 and 16 nodes, when executing Grep. As shown in Fig. \ref{cluster-scaling}, with the increase in the number of nodes, the execution times of all three frameworks decrease because of the increase in hybrid cloud capacity. In addition, when moving from non-bursting, i.e., (4\_0), (8\_0), (12\_0) and (16\_0), to full-bursting, i.e., (1\_3), (1\_7), (2\_10) and (2\_14), the execution times of Hadoop, Spark and Flink increase. This is consistent with the analysis in Section \ref{results_cloud_bursting}. Moreover, Hadoop always consumes the highest execution time, followed by Spark and then Flink. Therefore, we conclude that while the various configurations affect the performance of these frameworks in absolute terms, they do not affect the performance comparison among the frameworks in general. Thus, the experimental results in Section \ref{results}, obtained using the setup reported in Section \ref{infrastructure}, are valid.
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.35\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/cluster-scale.jpg}
\end{subfigure}
\vspace{-0.5 em}
\caption{Execution time of Grep in 4, 8, 12 and 16 node clusters.
}
\label{cluster-scaling}
\vspace{-1.5 em}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/cost.jpg}
\end{subfigure}
\vspace{-0.5 em}
\caption{Cost incurred on MS Azure for K-means execution
}
\vspace{-2 em}
\label{cost}
\end{figure}
\subsection{Cost Analysis}
The cost incurred by the frameworks deployed in the various cloud configurations is calculated using Equation \ref{equation1}, which accounts for the costs of VMs, bandwidth and IPs. These three components account for around 95\% of the total cost in MS Azure. In Equation (\ref{equation1}), $V_{N}$ denotes the number of VMs, \textit{T} denotes the time (in hours) for which a VM and its associated IP are used, $D_{S}$ denotes the data (in GB) sent out from the public cloud, $I_{N}$ denotes the number of IPs, \textit{C} denotes the hybrid cloud configuration, \textit{P} denotes the framework, and \textit{W} denotes the workload. Furthermore, 0.0264, 0.126 and 0.004 are respectively the rates (in USD) for VMs, bandwidth and IPs in MS Azure (Australia East region).
\begin{equation}
\small
\vspace{-0.5 em}
\begin{split}
Cost (C, P, W) = & (V_N\times T\times 0.0264) + (D_S\times 0.126) \\
& + (I_N\times T\times 0.004)
\end{split}
\vspace{-0.5 em}
\label{equation1}
\end{equation}
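As a minimal illustration of Equation (\ref{equation1}), the cost model can be evaluated directly; the sketch below is not the authors' tooling, the input values in the example are hypothetical, and the default rates are the Azure Australia East figures quoted above.

```python
def azure_cost(vm_count, hours, data_out_gb, ip_count,
               vm_rate=0.0264, bw_rate=0.126, ip_rate=0.004):
    """Cost(C, P, W) per Equation (1): VM time + outbound bandwidth + IP time.

    Default rates (USD) are the MS Azure Australia East figures from the text.
    """
    return (vm_count * hours * vm_rate      # V_N * T * 0.0264
            + data_out_gb * bw_rate         # D_S * 0.126
            + ip_count * hours * ip_rate)   # I_N * T * 0.004

# Hypothetical run: 4 public-cloud VMs and 4 IPs used for 0.5 h, 1.0 GB sent out
print(round(azure_cost(4, 0.5, 1.0, 4), 4))  # 0.1868
```

Note how the bandwidth term is independent of time, which is why, as discussed below, the cost does not grow monotonically with the number of borrowed VMs.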
The costs incurred by each framework and hybrid cloud configuration during the execution of K-means are presented in Fig. \ref{cost}. Since the number of IPs used is the same for the three frameworks, the main cost determinants are VM time and the bandwidth used for sending data. In the majority of cases, Hadoop is the most expensive, followed by Flink, while Spark is the least expensive. In the (12\_4) and (10\_6) configurations, Flink is slightly more expensive than Hadoop, which is attributed to its large data transfers (0.89 GB and 1.38 GB, respectively) compared to the 0.70 GB and 0.66 GB transferred by Hadoop, as illustrated in Fig. \ref{data-transfer-c}. Hadoop is otherwise the most expensive because its execution time is higher and it transfers significantly more data among nodes. However, it is worth noting that execution time and cost are only two quality attributes. There are several other quality attributes (e.g., fault tolerance and usability) on which Hadoop outclasses Spark and Flink \cite{4}. Moreover, Hadoop is a well-established framework that has been in industrial use for years; hence, replacing it with the more recent frameworks is not a straightforward undertaking \cite{4}. Fig. \ref{cost} also shows that, contrary to expectation, the cost does not increase consistently as more and more nodes are borrowed from the public cloud. This is because, in addition to the number of VMs, the bandwidth used contributes significantly to the overall cost: for instance, the mean costs of VM, bandwidth and IP are 3.14 cents, 7.98 cents, and 0.47 cents, respectively.
\begin{figure}
\captionsetup{justification=centering}
\centering
\begin{subfigure}{.3\textwidth}
\includegraphics[width=\linewidth]{figs/Fig_New/vm-creation-deletion.jpg}
\end{subfigure}
\vspace{-0.7 em}
\caption{Time consumed in creating and destroying VMs}
\label{vm-creation-deletion}
\vspace{-1.8 em}
\end{figure}
\section{Practical Observations and Future Work}
\label{lessons}
In this section, we present our practical observations acquired during this study and directions for future research.
\textbf{Infrastructure deployment in Azure is slow:} Our hybrid cloud infrastructure requires dynamically creating and destroying VMs in both Azure and OpenStack using Terraform. However, such operations take much longer in Azure than in OpenStack. Fig. \ref{vm-creation-deletion} shows the time consumed in creating and destroying VMs in three hybrid cloud configurations. The more VMs utilized in Azure, the longer these operations take. The potential reasons include, but are not limited to, allocating hardware resources from the available pool, provisioning the OS on the VMs, and informing the load balancer and firewall of the Azure data center about the new deployments.
\textbf{Automation is the key:} We observed that the hybrid cloud implementation, framework configurations and deployments are complex and tedious. Therefore, we assert that the whole process should be as automated as possible. This not only saves a practitioner/researcher's time but also ensures the reliability of data processing and associated experiments. In this regard, we recommend the use of Terraform \cite{24} for automatically creating, deploying and destroying resources on a hybrid cloud. Similarly, writing shell scripts for automating the framework configuration is a fruitful practice.
\textbf{Confirm version compatibility:} In this study, we used various frameworks, workloads and software tools in combination with each other. Before delving into implementation, it is important to confirm that the versions of these tools are compatible with each other. For example, we noticed that Flink 0.10 is only compatible with BigDataBench 5.0, while BigDataBench 5.0 requires Hadoop 2.7. Moreover, Spark 2.4 is compatible with HiBench 7.1 but not with HiBench 7.0, and Hadoop 2.7 is only compatible with Java version 1.8 or above. In addition, Spark and Flink require version compatibility with Hadoop to use HDFS and YARN as the cluster manager. Therefore, we assert that version compatibility should be confirmed before starting the implementation.
\textbf{Future Work:} \textit{Optimizing data transfers:} As shown in Fig. \ref{data-transfer-maps} and Fig. \ref{data-transfer}, the frameworks transfer data randomly among nodes in the private cloud, in the public cloud, and between the private and public clouds. In comparison to data transfers within the same cloud, data transfers between two clouds via WAN are costly both in terms of time and money. Therefore, the frameworks should be tuned/configured in a way that minimizes inter-cloud data transfers in a hybrid cloud setup. Whilst each framework comes with over 150 parameters, we suggest focusing on the parameters related to data transfer, data distribution and network usage (e.g., data replication factor and number of parallel threads for transferring data) to optimize data transfers, especially across the two clouds. \textit{VM hosting:} In this study, the VMs in the private and public clouds are hosted either on the same or on different physical machines. As presented in Fig. \ref{bandwidth-values}, the bandwidth between VMs hosted on the same physical machine is higher than between VMs on different machines. Furthermore, Fig. \ref{bandwidth-impact} also reveals that the bandwidth among VMs directly impacts the execution time. Therefore, it will be fruitful to extend this study to different VM placement strategies, such as hosting all VMs on the same physical machine, hosting only the private-cloud VMs on the same physical machine, and so on.
\section{Related Work}\label{related}
In this section, we compare our work with the existing studies to position the novelty of our work.
\textbf{Hybrid cloud implementation and cloud bursting:} Several studies (e.g., \cite{9},\cite{23},\cite{33,35,36}) have explored the implementation of hybrid clouds and cloud bursting. Mansouri et al. \cite{23} used WireGuard \cite{22} to connect private and public clouds into a hybrid cloud for comparing the impact of cloud bursting on the performance of six distributed databases (e.g., Cassandra and MongoDB). Clemente-Castelló et al. \cite{35} leveraged a low-throughput, high-latency link to connect two clouds hosted on OpenStack for building a collaborative hybrid cloud. The authors then used this hybrid cloud to build a performance model that predicts the execution time of Hadoop jobs. Similarly, Loreti et al. \cite{9} connected two OpenStack clouds to build a hybrid cloud for evaluating Hadoop performance with respect to bandwidth. The results show that the performance gain from bursting into a public cloud depends largely on the inter-cloud bandwidth. Along the same lines, Roman et al. \cite{36} studied the impact of inter-cloud bandwidth on Spark performance in a hybrid cloud. It was observed that the shuffle phase of Spark is very sensitive to the bandwidth between the two networks (private and public) in the hybrid cloud. Bicer et al. \cite{33} studied the impact of cloud bursting and scalability for data-intensive applications (e.g., KNN) in a hybrid cloud comprising local and AWS clusters. Taking inspiration from \cite{23}, we have also used WireGuard \cite{22} to implement a hybrid cloud that integrates a private cloud (OpenStack) with a public cloud (Azure).
\textbf{Comparison of distributed data processing frameworks:} Several studies (e.g., \cite{11},\cite{14},\cite{marcu2016spark},\cite{16},\cite{20},\cite{25}) have evaluated and compared the performance of distributed data processing frameworks. Veiga et al. \cite{20} used multiple benchmarking workloads to compare the performance of Hadoop, Spark and Flink deployed on a private cloud. The authors concluded that Spark and Flink outperform Hadoop, with average reductions of 77\% and 70\% in execution time, respectively. Gu et al. \cite{14} compared the performance of Hadoop and Spark deployed on a cluster of eight nodes. The findings revealed that Spark outperformed Hadoop with respect to execution time; however, Spark consumed far more memory than Hadoop. Perera et al. \cite{16} compared Spark and Flink, both deployed on Amazon EC2 instances. Due to its pipelined execution, Flink consistently outperformed Spark with respect to execution time. Shi et al. \cite{25} also compared the execution times of Hadoop and Spark, finding that Spark outperformed Hadoop by 2.5$\times$ for WordCount, and 5$\times$ for K-means and PageRank. In \cite{11}, the performance of Hadoop was compared with that of Spark by analyzing log files with respect to execution time, resource usage and scalability in a cluster of six VMs on a private cloud. The results showed that, by maximizing resource utilization, Spark outperformed Hadoop, while horizontal scale-up of worker nodes reduced the execution time by about 50\%. In \cite{marcu2016spark}, experiments were executed in a private cloud to evaluate Spark and Flink on various workloads. The evaluation showed that Flink performed better for batch workloads, whereas Spark performed well in large-scale graph processing.
In comparison to existing studies, this paper fills the following two gaps (i) none of the previous studies have evaluated/compared the execution time, data transfer/received, horizontal scalability, vertical scalability, cost and resource utilization for Hadoop, Spark and Flink in a hybrid cloud, and (ii) none of the previous studies have evaluated the performance of Hadoop, Spark and Flink with different permutations of VMs in the cloud bursting model. Hence, the previous studies are largely orthogonal to this study.
\section{Conclusion}\label{conclusion}
In this paper, we first reported on the implementation of a hybrid cloud consisting of private (OpenStack) and public (MS Azure) clouds. We then used this hybrid cloud to evaluate and compare the three most widely used distributed data processing frameworks (i.e., Hadoop, Spark and Flink) in terms of execution time, resource utilization, horizontal scalability, vertical scalability and cost. We used both batch and iterative workloads in our evaluation, and found that the execution times of the three frameworks increase as more nodes are borrowed from the public cloud. In a hybrid cloud setup, Flink is the fastest, followed by Spark and then Hadoop. With respect to cost, Spark is the least expensive, while Hadoop is the most expensive. All three frameworks scale better horizontally than vertically. In the future, we plan to evaluate the impact of the distance between private and public clouds on the performance of distributed data processing frameworks in a hybrid cloud setup.
\bibliographystyle{IEEEtran}
\vspace{-0.5 em}
\section{Introduction}
In the last few years, the study of complex networks has found
relevance in various fields including sociology, ecology, biology,
economics and physics. In these networks, vertices do not have
homogeneous links or connectivities. A particularly relevant
structure found in several empirical studies is the so-called {\em
scale-free network} (SFN), which is characterized by the power law
distribution of the degree of connectivities, $P(k) \sim k^{-\gamma}$,
with $k$ the number of links for a node, and $\gamma$ the decay
exponent of the distribution. A network with large $\gamma$
has nodes with a relatively homogeneous number of links (somewhat
resembling the case on regular lattices), while small $\gamma$
corresponds to the existence of ``very famous'' nodes (or hubs), i.e.,
those having direct links to the majority of vertices.
Many networks realized in Nature show scale-free structure. Some
examples studied include food webs~\cite{food_web}, power grids and
neural networks~\cite{watts98,amaral00}, cellular
networks~\cite{cellular}, sexual contacts~\cite{sexual}, Internet
routers~\cite{internet}, the World Wide Web~\cite{www}, actor
collaborations~\cite{watts98,albert99,amaral00,act}, the citation
network of scientists~\cite{citations} and the stock
market~\cite{market}.
In addition to the scale-free behaviour, these networks are
characterized by
a high clustering coefficient, $C$, in comparison
with random graphs~\cite{bollobas}. The clustering coefficient, $C$,
is computed as the average of local clustering, $C_i$, for the $i^{\rm th}$
node, defined as
\begin{equation}
C_i=\frac{2y_i}{z_i(z_i-1)},
\end{equation}
where $z_i$ is the total number of nodes linked to the site $i$ and $y_i$ is
the total number of links between those nodes. As a consequence
both $C_i$ and $C$ lie in the interval [0,1].
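As an illustrative sketch (assuming an adjacency-set representation of the network; this is not the code used in this work), $C_i$ and the network average $C$ can be computed as follows.

```python
def local_clustering(adj, i):
    """C_i = 2*y_i / (z_i*(z_i-1)): fraction of links realized among
    the z_i neighbours of node i (y_i is the number of such links)."""
    nbrs = adj[i]
    z = len(nbrs)
    if z < 2:
        return 0.0
    y = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * y / (z * (z - 1))

def clustering_coefficient(adj):
    """Network average C of the local clusterings C_i."""
    return sum(local_clustering(adj, i) for i in adj) / len(adj)

# A triangle (0,1,2) plus a pendant node 3: only one of node 0's three
# neighbour pairs is linked, so C_0 = 1/3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Since $y_i$ counts triangles through node $i$, averaging $C_i$ over the network directly measures the triangle density quoted in the text.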
The high level of clustering found
supports the idea that a {\em herding} phenomenon is a common feature
in social and biological communities.
The parameter $C$ also represents the density of triangles, that is,
of elementary cells, associated with the network.
Numerical studies on SFNs have demonstrated how topology
plays a fundamental role in infection spreading~\cite{pastor01}, opinion
formation in large communities~\cite{bartolozzi05} and
tolerance against random and preferential node removal~\cite{bartolozzi05,tolerance}.
A detailed description of the progress in this emerging field of
statistical mechanics can be found in the recent reviews of
Refs.~\cite{albert02, dorogovtsev02, dorogovtsev03}.
The aforementioned empirical findings have inspired physicists to investigate
the dynamics of standard models in the new case where the interactions
between elements are described by complex networks. These include
the study of various magnetic models such as the Ising model. An
intriguing issue concerns how the unusual topology acts to influence
the cooperative behaviour of the spins. Studies of the ferromagnetic
(FM) Ising model on a SFN, using several theoretical
techniques~\cite{Aleksiejuk02,Dorogovtsev02b,Igloi02,Herrero04} including the Monte
Carlo (MC) method~\cite{Herrero04}, have found the robustness of
ferromagnetic ordering against thermal fluctuations for the degree
distribution exponent $\gamma \leq 3$.
This result is actually intuitive if we notice that, as $\gamma$
gets smaller, nodes at the edge of the network will generally have more
connections. In this situation, the system resembles
the FM Ising model on a regular lattice which exceeds the
lower critical spatial dimension, $d_l= 2$. There the
ordered phase is very robust against thermal fluctuations. However,
for the antiferromagnetic (AF) case with a SFN, the situation is
different.
Two factors come to play a central role in the dynamics of the AF-SFN
model; namely the competition induced by the AF interaction in the
elementary triangles of the network and the randomness related to the
non-regular connections. The abundance of elementary triangles in the
network leads to frustration, as, for example, only two of the three
spins can be anti-aligned. More generally, frustration refers to the
inability of the system to remain in a single lowest energy state
(ground state). These ingredients lead the AF SFN to belong to a
class of randomly frustrated systems commonly referred to as spin
glasses (SGs).
Most studies of SGs have been performed on regular lattices. These
studies have shown that frustration and randomness are the key
ingredients for SG behavior, characterized by a frozen random spin
orientation at low temperatures~\cite{SG}.
Spin glasses on a SFN with mixed AF and FM bonds
have been investigated recently
by Kim {\it et al.}~\cite{Kim05}. They found, for $\gamma \le 3$ and
even distributions of the two kinds of interaction,
that the system is always in a SG state for any finite temperature.
A study of the pure AF Ising model on a SFN is of great theoretical
interest since, despite the homogeneity of the bonds,
it inherits all the characteristics of a
SG from the random frustration related to its geometry.
General reviews on SG systems can be found in Refs.~\cite{SG}.
In this paper we consider the AF Ising model on a SFN, more precisely
the Barab$\acute{\rm a}$si-Albert (BA) network with tunable
clustering~\cite{Holme02}. Using
the replica exchange algorithm~\cite{hukushima96} of the Monte Carlo method,
we calculate
the order parameters of spin glass behaviour, the so-called
overlap parameter and its distribution. For an accurate determination
of the critical temperature, we also evaluate the Binder parameter.
The paper is organized as follows: Section \ref{two} describes the
model and the method. The results are discussed in
Section~\ref{three}. Section~\ref{four} is devoted to the concluding
remarks.
\section{Model and Simulation Method}\label{two}
\subsection{The model}
\begin{figure}
\centerline{\epsfig{figure=fig1sg.eps,height=7cm, width=9cm}}
\caption{(Color online). Example of a scale-free network. The number of nodes
is 500 with clustering probability $\theta=0.9$ and $m_0=m=2$.
The number of nodes has been
kept small in order to preserve the clarity of the plot. Note that, for
such small networks, a large scale invariant range is obtained only
if one considers the ensemble average over several realizations.
This plot has been realized with the Pajek software~\cite{pajek}.}
\label{netplot}
\end{figure}
In order to create the scale-free network topology we make use of the
Barab$\acute{ \rm a}$si-Albert model~\cite{albert99}. This is based on two
main considerations: (i) linear growth and (ii) preferential attachment.
In practice the network is initialized with $m_0$ disconnected
nodes. At each step a new node with $m$ edges is added to the
pre-existing network. The probability that an edge of the new node is
linked with the $i$th node is expressed by $\Pi(k_i)=k_i/\sum_{j}k_j$.
The iteration of this preferential growing process
yields a scale free network, where the probability of having a node
with $k$ connections is $P(k)\sim k^{-\gamma}$ with $\gamma= 3$.
This is an interesting value. In the thermodynamic limit,
the second moment of the distribution diverges,
$\langle k^2 \rangle = \infty$, for $\gamma \le 3$.
This leads to peculiar properties of theoretical models in
this range of $\gamma$ values~\cite{dorogovtsev03}.
In the present work we focus
on the case in which $\gamma= 3$ and the divergence of
$\langle k^2 \rangle$ is logarithmic. An extensive
investigation of the phase space for the AF model on SFN is
left for future work.
It is also worth noting that the Barab$\acute{ \rm a}$si-Albert model cannot
reproduce a high clustering coefficient. In fact, the value of this
coefficient depends on the total number of nodes, $N$, in the
network~\cite{albert02} and in the thermodynamic limit, $N \rightarrow
\infty$, $C \rightarrow 0$.
In the AF Ising system the average cluster coefficient, $C$, plays a
fundamental role in the dynamics. In fact, it represents the average
number of triangles per node and, as a result, it is directly related to
the degree of frustration in the network. In order to keep this
parameter constant, on average, with the size of the network, we
introduce a further step in the growth process, namely the triad
formation proposed by Holme and Kim~\cite{Holme02}. In this case, if
the new added node is linked with an older node, $i$, having other
links, then with a certain probability, $\theta$, the next link of the
new node, if any remain, will be added to a randomly selected
neighbour of node $i$. This method of introducing friends to friends,
while preserving the scale-free nature of the networks with $\gamma
\sim 3$, generates high clustering coefficients that do not depend on
$N$. The only tunable parameter that changes the value of the
clustering coefficient is the {\em clustering probability}
$\theta$. An example of a SF network generated with this algorithm is
shown in Fig.~\ref{netplot} for 500 nodes.
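The growth process just described can be sketched as follows. This is an illustrative reimplementation (not the code used for the simulations): preferential attachment is realized with the standard trick of sampling uniformly from a list in which each node appears once per link it holds, and triad formation with probability $\theta$ follows Holme and Kim.

```python
import random

def holme_kim(n, m, theta, seed=0):
    """Grow an n-node Barabasi-Albert network with Holme-Kim triad formation.

    Start from m0 = m disconnected nodes; each new node attaches m edges,
    each either to a random neighbour of the previous target (probability
    theta, "friend of a friend") or by preferential attachment."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = []                       # node list weighted by degree
    for new in range(m, n):
        chosen, prev = set(), None
        while len(chosen) < m:
            t = None
            if prev is not None and rng.random() < theta:
                cands = adj[prev] - chosen - {new}   # triad formation
                if cands:
                    t = rng.choice(sorted(cands))
            if t is None:                            # preferential attachment
                pool = repeated if repeated else list(range(m))
                t = rng.choice(pool)
                if t == new or t in chosen:
                    continue
            chosen.add(t)
            prev = t
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
            repeated += [new, t]
    return adj
```

Each added node contributes exactly $m$ edges, so the average connectivity approaches $\langle k \rangle = 2m$ for large $n$, as noted below for $m=5$.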
We simulate various sizes of the network with many different realizations
and investigate the scaling behaviour of the various physical quantities we are
interested in. All the simulations have been carried out fixing
$\theta=0.9$, corresponding to an average clustering coefficient of $C
\sim 0.39$, close to the value found in many real systems
\cite{albert02}.
On each SFN constructed at the beginning of the simulation, we assign
to each vertex an Ising spin, and to each link an AF interaction. The
Hamiltonian can be written as follows
\begin{equation}\label{ham}
H = -\sum_{\langle ij \rangle} J_{ij}\, s_i\, s_j \, .
\end{equation}
Here the summation is performed over the connected spins $s_i$ and
$s_j$ occupying sites $i$ and $j$, respectively. The coupling
interaction $J_{ij} = J =-1$ is AF.
As previously mentioned, each vertex with the local cluster
coefficient $C_i > 0$ together with its neighbours, compose elementary
triangles. Due to the AF interactions the local system is frustrated.
It is worth pointing out that $C$ is related to the degree of
frustration of each network. Due to the probabilistic algorithm used
for their construction, the value of $C$ fluctuates around a mean value
from one network to the next and, therefore, provides a source
of randomness that, as we will
see, gives rise to the spin glass properties of the model.
This probabilistic growth is not shared by other algorithms which use
recursion formulas to generate scale-free structures, such as, for example,
the Apollonian networks~\cite{Andrade05}. In this case, once one fixes the number
of iterations of the algorithm, which is proportional to the number of nodes
of the final network, one also fixes its topology.
The element of randomness is therefore missing in the Apollonian procedure.
As a random system, each realization of a network of size $N$ will
differ in the ``structure'' of connectivities. Therefore, in order to
have reliable statistics, we average over many realizations of the SF
network for each specified size. The system sizes that we
simulate are $N =$ 1024, 2048, 4096, and 8192.
In general, one takes into account more realizations for small system
sizes and fewer for large system sizes, as the latter tend to
self-average. However, since the self-averaging of physical quantities
at larger system sizes is hindered by the increasing ground-state
degeneracy, we do not take fewer realizations. Instead, all physical
quantities of interest for each system size are averaged over 1000
network realizations.
Moreover, for each realization of the network, we fix $m_0=m=5$, corresponding
to a coordination number on a regular lattice of approximately 10.
In the thermodynamic limit, the average connectivity for
the BA network is $ \langle k \rangle = 2 m = 10$,
emphasizing the fact that we are implicitly
dealing with a high dimensional system.
Another peculiarity of SF networks is the existence of a broad
distribution of ``hubs'', that is nodes with a large number of
connections, $k$. The energy difference in a spin flip actually
depends on the number of connections of the spin itself, $\Delta
E_{i}= -2 s_i \sum_{j=1}^{k_i}s_j$. Thus in the AF case for the $i$th
spin with $k_i$ connections, the hubs are more likely to ``freeze''
into a particular configuration compared to the nodes with just a few
links. This property resembles the spin glass behaviour of particular
alloys where some elements freeze into a particular orientation at a
higher temperature than others.
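A minimal sketch of this single-spin-flip cost, together with a local Metropolis sweep for the Hamiltonian of Eq.~(\ref{ham}) with $J=-1$ (illustrative only, using an adjacency-set representation):

```python
import math
import random

def delta_E(adj, spins, i):
    """Energy cost of flipping spin i for H = -sum_<ij> J s_i s_j with
    J = -1, i.e. Delta E_i = -2 s_i sum_j s_j over the k_i neighbours."""
    return -2 * spins[i] * sum(spins[j] for j in adj[i])

def metropolis_sweep(adj, spins, beta, rng=random.Random(0)):
    """One local MC sweep: attempt a Metropolis flip of every spin,
    visiting the nodes in random order."""
    for i in rng.sample(sorted(adj), len(adj)):
        dE = delta_E(adj, spins, i)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]

# On a frustrated triangle, flipping the lone spin that is anti-aligned
# with both neighbours costs energy, so it tends to stay frozen:
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
spins = {0: 1, 1: 1, 2: -1}
print(delta_E(triangle, spins, 2))  # 4
```

The magnitude of $\Delta E_i$ grows with the degree $k_i$, which is the mechanism behind the freezing of hubs mentioned above.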
\subsection{Simulation method}
The calculation of the thermal averages of the physical
quantities of interest is performed using the replica exchange
MC method~\cite{hukushima96}. In this method the
evolution of $M$ replicas, each in equilibrium with a heat bath of
inverse temperature $\beta_m$ for the $m^{\rm th}$ replica, is simulated in
parallel. Given a set of inverse temperatures, $\{ \beta \}$, the
probability distribution of finding the whole system in a state $\{ X \} =
\{ X_1,X_2, \dots, X_M\}$ is
\begin{equation}
P(\{ X, \beta\}) = \prod_{m=1}^{M} \tilde{P}(X_{m},\beta_{m}),
\end{equation}
with
\begin{equation}
\tilde{P}( X_m, \beta_m) = Z(\beta_{m})^{-1} \exp(-\beta_{m} H(X_{m})),
\label{equil}
\end{equation}
and $Z(\beta_m)$ is the partition function at the $m^{\rm th}$ temperature.
We can then define an exchange matrix between the replicas in our Markov chain,
$W(X_m,\beta_m| X_n,\beta_n)$, that is the probability
to switch the configuration $X_m$ at the temperature $\beta_m$
with the configuration $X_n$ at $\beta_n$. By using the detailed balance condition,
required to keep the entire system at equilibrium, on the transition matrix
\begin{eqnarray}
P( \ldots,\{ X_m, \beta_m \},\ldots, \{ X_n, \beta_n \},\ldots )\cdot
W(X_m,\beta_m| X_n,\beta_n) \nonumber \\ = P( \ldots,\{ X_n, \beta_m \},
\ldots, \{ X_m, \beta_n \},\ldots )
\cdot W( X_n,\beta_m | X_m,\beta_n),
\end{eqnarray}
along with Eq.~(\ref{equil}), we have that
\begin{equation}
\frac{ W( X_m,\beta_m | X_n,\beta_n)}{ W( X_n,\beta_m | X_m,\beta_n)}=\exp(-\Delta),
\end{equation}
where $\Delta=(\beta_{n}-\beta_{m})(H(X_{m})-H(X_{n}))$.
With the above constraints we can choose the matrix coefficients
according to the standard Metropolis method and, therefore, we have
\begin{equation}
W(X_m,\beta_m| X_{n},\beta_n)=\left \{ \begin{array}{ccc}
1 & {\rm if} & \Delta<0, \\
\exp(-\Delta) & {\rm if} & \Delta>0.
\end{array} \right.
\label{trans}
\end{equation}
In our simulation we restrict the exchange to temperatures next to
each other; that is, we consider only the terms $W(X_m,\beta_m|
X_{m+1},\beta_{m+1})$. This choice is motivated by the fact that the
acceptance ratio decays exponentially with $(\beta_n-\beta_m)$.
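With swaps restricted to neighbouring temperatures, one exchange pass can be sketched as below (an illustrative fragment, not the simulation code; `energies[m]` is assumed to hold $H(X_m)$ for the replica currently at $\beta_m$, and `parity` selects the even or odd pairs).

```python
import math
import random

def exchange_step(replicas, betas, energies, parity, rng=random.Random(0)):
    """Attempt swaps between neighbouring temperatures (m, m+1) of one
    parity.  Accept with probability min(1, exp(-Delta)), where
    Delta = (beta_{m+1} - beta_m) * (E_m - E_{m+1})."""
    for m in range(parity, len(betas) - 1, 2):
        delta = (betas[m + 1] - betas[m]) * (energies[m] - energies[m + 1])
        if delta <= 0 or rng.random() < math.exp(-delta):
            replicas[m], replicas[m + 1] = replicas[m + 1], replicas[m]
            energies[m], energies[m + 1] = energies[m + 1], energies[m]
```

When the colder replica holds the higher energy, $\Delta < 0$ and the swap is always accepted, which is how low-energy configurations diffuse towards low temperatures.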
The replica exchange method is extremely efficient for simulating
systems such as spin glasses, that can otherwise become frozen in
some particular configuration at low temperatures when using a
standard Metropolis algorithm for the configuration update. In this
case, as we lower the temperature, the system can become trapped
into a local minimum of the free-energy where the barriers are so high
that the time required for the system to move to another allowed
region of the configuration space diverges to infinity as a function
of the system size. If the system is trapped in a local minimum, the
ergodicity condition is no longer fulfilled and the measurements one
makes become biased by the particular region of the configuration
space being sampled. By using the replica exchange method, instead, we
keep switching the temperatures between the $M$ copies of the system
and, as long as the highest temperature is in the hot phase (where the
system can easily explore the whole configuration space), we are in
principle able to explore the whole configuration space at the lower
temperatures as well. Another advantage of this method is that the
replica exchange drastically reduces the temporal correlations in the
system dynamics at each temperature. This enables one to collect more
independent measurements for the thermal averages of the physical
quantities and, therefore, reduces the statistical uncertainty.
It is important to stress that, before starting the actual
simulations, some care is required in selecting the set of inverse
temperatures, $\{ \beta \}$. In fact, the method is efficient only
when a fairly large transition probability is maintained in the range
of interest. From Eq.~\ref{trans}, we can see that, in the hot phase,
temperatures can be more coarsely spaced while in the cold phase the
temperatures need to be closer to each other. An optimal set of
temperatures can be obtained by iterating, in preliminary runs, the
following map~\cite{hukushima96}:
\begin{equation}
\begin{array}{c}
\tilde{\beta}_{1}=\beta_1, \\
\tilde{\beta}_{m}=\tilde{\beta}_{m-1}+(\beta_{m}-\beta_{m-1})\cdot p_{m}/c,
\label{map}
\end{array}
\end{equation}
where $p_{m}$ is the acceptance ratio for the switch between two
configurations at the $m$th temperature and
$c=\sum_{m=1}^{M}p_m/(M-1)$ is a normalization factor. The initial
value for the set $\{ \beta \}$ is uniform in the interval of
interest and we ensure that $\beta_1$ belongs to the hot phase. For
each iteration of the map, a run of a few thousand MC steps is carried
out to calculate the acceptance ratios, $p_m$, which are then plugged into
Eq.~(\ref{map}) in order to obtain a new set of inverse temperatures.
After a few
iterations, the map of Eq.~(\ref{map}) converges to a fixed point,
$\{ \beta^{\star} \}$, which sets the values of the temperatures to
be used in our simulations.
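One iteration of the map of Eq.~(\ref{map}) can be sketched as follows (an illustrative fragment; here `acc[m]` is assumed to be the measured acceptance ratio for the pair `(betas[m], betas[m+1])`, so there are $M-1$ entries).

```python
def update_temperatures(betas, acc):
    """One iteration of the feedback map: the spacing between adjacent
    inverse temperatures is rescaled by the measured swap acceptance
    ratio, so pairs with low acceptance are pushed closer together.
    beta_1 is held fixed; c normalizes by the mean acceptance."""
    M = len(betas)
    c = sum(acc) / (M - 1)
    new = [betas[0]]
    for m in range(1, M):
        new.append(new[-1] + (betas[m] - betas[m - 1]) * acc[m - 1] / c)
    return new

# Uniform acceptance is a fixed point: the temperature set is unchanged
print(update_temperatures([0.2, 0.4, 0.6, 0.8], [0.5, 0.5, 0.5]))
```

Iterating this update with fresh acceptance measurements from short preliminary runs drives the set towards the fixed point $\{\beta^{\star}\}$ with roughly uniform acceptance.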
In using this method, we define a ``local'' MC (LMC) update as a MC
update for each spin of each replica, either consecutively through all
elements of the network or randomly. Given that we can group the
inverse temperatures in even and odd pairs, $(\beta_{m},\beta_{m+1})$,
after each LMC update we alternate attempts to switch configurations
from one temperature to the next. According to this procedure, we
define a Monte Carlo step (MCS) as a LMC plus a half ($m$ odd or even)
exchange trial.
For each realization of the network we start from a random
configuration of the spins and then perform $10^3$ LMC updates in
order to reach thermal equilibrium. After this transient period, we
run the simulation for $3 \times 10^5$ MCSs while taking a total of
$6 \times 10^4$ measures for the thermal averages, that is one every 5
MCSs (temporal correlations are lost very quickly by using the replica exchange method).
We consider low temperatures in a search for the possible
existence of a phase transition. The thermal averages obtained for
each network are then averaged over the ensemble of networks. In the
following, we indicate $\langle...\rangle$ as the thermal average and
$\left[...\right]_{\rm av}$ as the ensemble average. The statistical
errors in the plots, where reported, are calculated via the bootstrap
method.
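For completeness, a bootstrap error estimate for a sample mean can be sketched as below; this is the generic textbook procedure, not necessarily the exact variant used for the figures.

```python
import random

def bootstrap_error(samples, n_resample=1000, seed=0):
    """Bootstrap estimate of the uncertainty of a sample mean: resample
    the data with replacement many times, recompute the mean of each
    resample, and take the standard deviation of those means."""
    rng = random.Random(seed)
    n = len(samples)
    means = []
    for _ in range(n_resample):
        draw = [samples[rng.randrange(n)] for _ in range(n)]
        means.append(sum(draw) / n)
    mu = sum(means) / n_resample
    return (sum((m - mu) ** 2 for m in means) / n_resample) ** 0.5
```

Its appeal here is that it makes no assumption about the (network-to-network) distribution of the averaged quantities.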
\section{Results and Discussion}
\label{three}
\subsection{Spatial correlations and specific heat}
As a first step we investigate the extent of spatial correlation of the spins in
the SF network by making use of the spatial autocorrelation function which is
defined on a regular lattice as
\begin{equation}
\xi(r)= \left[ \frac{1}{L_d} \langle s_i s_{i+r} \rangle \right]_{\rm av},
\label{spat_corr}
\end{equation}
where $L_d$ is the total number of pairs at distance $r$ and
depends just on the dimension considered.
In a SF network the situation is more complicated since there may be
several paths leading from a certain node to another.
We then define $r$ as the {\em minimum}
path between two nodes, and the denominator of Eq.~(\ref{spat_corr})
becomes dependent on $r$. The results, averaged over 50 configurations,
between the temperatures of $T=5.0$ and $T=2.1$ are
shown in Fig.~\ref{sp_corr} for $N=1024$.
All the temperatures in the present paper are expressed in units of $J/k_{B}$,
where $J$ is the coupling strength between spins and $k_B$ is the Boltzmann constant.
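Measuring $\xi(r)$ on a network requires the minimum path length between nodes; a sketch using breadth-first search (illustrative, assuming an adjacency-set representation; each unordered pair is counted twice, which cancels in the ratio):

```python
from collections import deque

def bfs_distances(adj, src):
    """Minimum path lengths from src to every reachable node (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def spatial_autocorrelation(adj, spins):
    """xi(r): average of s_i * s_j over all pairs at minimum distance r."""
    sums, counts = {}, {}
    for i in adj:
        for j, r in bfs_distances(adj, i).items():
            if r > 0:
                sums[r] = sums.get(r, 0) + spins[i] * spins[j]
                counts[r] = counts.get(r, 0) + 1
    return {r: sums[r] / counts[r] for r in sums}
```

Because of the small-world property, the largest $r$ that occurs grows only logarithmically with $N$, which is why the curves in Fig.~\ref{sp_corr} terminate at $r=6$ for $N=1024$.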
\begin{figure}
\centerline{\epsfig{figure=fig2sg.eps,height=7cm, width=9cm}}
\caption{ (Color online).
Spatial autocorrelation, $\xi(r)$, for $N=1024$ averaged over 50
network configurations for temperatures between $T=5.0$ and $T=2.1$.
The plot shows that
nearest-neighbour spins tend to be anti-parallel, as in the standard AF Ising model.
The AF interaction in the triangular units of the system results in high frustration.
Note that the number of nodes at large distances is much smaller than at
short distances, so the averages calculated for $r=5$ and $r=6$ include
just a few samples. This is a consequence of ``small-world'' effects in SF networks.}
\label{sp_corr}
\end{figure}
\vspace{1.5cm}
\begin{figure}
\centerline{\epsfig{figure=fig3sg.eps,height=7cm, width=9cm}}
\caption{Specific heat, $C_{\nu}$, as a function of the temperature and system size.
The plot has been obtained by averaging over
50 network configurations for each $N$. Note that the specific heat does not scale
with the size of the system.}
\label{sp_heat}
\end{figure}
In order to give a better interpretation of the plot in Fig.~\ref{sp_corr} we
remind the reader of an important
property of SF networks, namely their ``small-world'' structure. The
``hubs'', in fact, play a fundamental role in linking sites that would
otherwise be very distant. Moreover, the average path length increases
just logarithmically with the size of the network~\cite{albert02,dorogovtsev02}. In
the plot of Fig.~\ref{sp_corr}, for $N=1024$ nodes, an upper limit of $r=6$ is encountered: all 50 configurations reach $r=6$, while only a few networks exceed this value.
The plot emphasizes how neighboring spins, on average, tend to be anti-correlated, as expected
in the AF case. The autocorrelation decreases with the
distance from the node under consideration. The temperature
dependence is also in accord with expectations: the absolute value of the correlation
decreases with increasing temperature, and vice versa. Indeed,
the curves for the highest and lowest temperatures bound all
the others.
This is expected, since thermal fluctuations always tend to
reduce the correlations between spins.
We also study the behaviour of the specific heat, $C_{\nu}$, defined as
follows
\begin{equation}
C_{\nu}(T)= \left[ \frac{1}{Nk_{B}T^2}(\langle E^2 \rangle - \langle E \rangle^2) \right]_{\rm av},
\end{equation}
where $k_{B}$ is the Boltzmann constant. Although no singularity is
expected for this quantity at the spin-glass transition,
it is interesting to compare its behaviour with other studies.
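The fluctuation formula above can be sketched directly (a toy example with hypothetical energy samples, not simulation data):

```python
def specific_heat(energies, T, N, kB=1.0):
    """C_v per spin from the energy fluctuation formula,
    C_v = (<E^2> - <E>^2) / (N kB T^2)."""
    mean_e = sum(energies) / len(energies)
    mean_e2 = sum(e * e for e in energies) / len(energies)
    return (mean_e2 - mean_e ** 2) / (N * kB * T * T)

# toy sample of energies measured during a run (arbitrary units)
sample = [-10.0, -9.0, -11.0, -10.0]
print(specific_heat(sample, T=2.0, N=4))  # 0.03125
```

In practice the estimate is also averaged over network realizations, as the $[\cdot]_{\rm av}$ in the definition indicates.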
The dependence of the specific heat on temperature
is reported in Fig.~\ref{sp_heat}. The statistical errors, in this case,
are smaller than the size of the symbols and therefore are not reported.
A Schottky-like peak of the specific heat, common for finite systems, is observed at $T \sim 2.0$, independent
of the system size.
Below this point, we found that $C_\nu$ decreases and goes to
zero as $T \rightarrow 0$.
This behaviour follows from simple entropy considerations.
In fact, since we are dealing with a finite Ising system of $N$ spins,
the entropy is bounded at every temperature by its maximal value,
\begin{equation}
S(T)=\int_{0}^{T}\frac{C_{\nu}(T')}{T'}dT' \le N k_{B} \ln 2,
\end{equation}
so the integral must converge and, necessarily, $C_{\nu} \rightarrow 0$ for $T \rightarrow 0$.
The next section is dedicated to the study of the SG
behaviour and the phase transition of the system. In order to achieve
this task, we evaluate the corresponding order parameters: the overlap
parameter and the Binder parameter.
\subsection{Observing spin glass behaviour}
With the presence of frustration and randomness in the AF-SFN model,
we expect to observe a spin glass transition, i.e., a transition from
a temporally disordered to a temporally ordered phase at low temperatures.
This feature is not shared by the so-called fully frustrated
systems~\cite{tasrief}.
This type of transition may be characterized by an order
parameter such as that suggested by Edwards and Anderson~\cite{EA},
defined as follows
\begin{equation}
q_{EA} = \left[ \frac{1}{N}\sum_i\langle s_i \rangle^{2} \right]_{\rm av}.
\end{equation}
However, for an ergodic Markov chain in a system with $Z_2$ symmetry, the thermal average of the $i$th spin vanishes; a finite value of this measure therefore simply reflects non-ergodicity of the MC update.
A more appropriate quantity that is often used to characterize the SG state is the
overlap parameter, $q$, defined as~\cite{parisi,bhattY}
\begin{equation}\label{qorder}
q = \frac{1}{N}\sum_{i} s_i^{(\alpha)} s_i^{(\beta)},
\end{equation}
where the superscripts $\alpha$ and $\beta$ denote two copies of the
same configuration of connectivity at the same temperature.
The actual value of $q$ is extracted from both the thermal
and disorder average, $ \left[ \langle... \rangle \right]_{\rm av}$.
Using the replica exchange MC simulation, the two copies,
$\alpha$ and $\beta$, are allocated at each temperature of
the parallel tempering.
This means that if the measurement is performed at $M$ temperature
points, there are $M$ pairs of replicas.
The Metropolis spin update is performed on each node for every MC step.
As a part of the equilibration steps of the algorithm described in the previous
section, we exchange two
$\alpha$ (and $\beta$) replicas of neighboring temperatures,
according to a certain probability.
Then, for each temperature, the $\alpha$ and $\beta$ replicas are
superimposed every 5 MCSs in order to measure the overlap parameters,
as defined in Eq.(\ref{qorder}).
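The exchange probability referred to above is, in standard replica-exchange MC, the Metropolis rule for swapping configurations at neighbouring temperatures; a sketch in our own notation (not taken from the paper):

```python
import math
import random

def swap_accepted(beta_lo, beta_hi, E_lo, E_hi, rng=random.random):
    """Metropolis acceptance for exchanging replicas at neighbouring
    inverse temperatures beta_lo < beta_hi:
    p = min(1, exp[(beta_hi - beta_lo) * (E_hi - E_lo)])."""
    delta = (beta_hi - beta_lo) * (E_hi - E_lo)
    return delta >= 0 or rng() < math.exp(delta)

# a swap that lowers the energy of the colder replica is always accepted
print(swap_accepted(beta_lo=0.2, beta_hi=0.5, E_lo=-3.0, E_hi=-1.0))  # True
```

Swaps that raise the colder replica's energy are accepted only with probability $e^{\Delta}$, which lets configurations diffuse between the $M$ temperature points.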
In particular, for the Ising system, due to the $Z_2$ symmetry, it is
important to evaluate the absolute value of the order parameter,
\begin{equation}
|q| \equiv \left[ \left\langle \left|\frac{1}{N}\sum_{i} s_i^{(\alpha)} s_i^{(\beta)}\right|
\right\rangle \right]_{\rm av},
\end{equation}
to account for the $Z_2$ symmetry of the Hamiltonian, namely that the
configurations $\{s_i\}$ and $\{-s_i\}$ have equal Boltzmann
weights. Consequently, if the system is at thermal equilibrium and
the run is sufficiently long, the plain $q$ averages to zero. The
existence of a spin glass phase is indicated by the convergence of
$|q|$ to a finite value as we increase the network size. At the same
time, a convergence of $|q|$ to zero at high temperatures is
anticipated. In the latter case the system is in the paramagnetic
phase.
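A sketch of the overlap measurement for two superimposed replicas (illustrative helper, pure Python):

```python
def overlap(spins_a, spins_b):
    """Overlap q between two replicas with the same disorder realization."""
    N = len(spins_a)
    return sum(sa * sb for sa, sb in zip(spins_a, spins_b)) / N

# with Z2 symmetry, a replica and its globally flipped copy give q = -1,
# which is why |q| is the meaningful quantity
a = [1, -1, 1, -1]
b = [-s for s in a]
print(overlap(a, a), overlap(a, b))  # 1.0 -1.0
```

The thermal and disorder averages of $|q|$ are then taken over many such measurements, one every 5 MCSs as described above.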
The temperature dependence of $|q|$, resulting from the simulations,
is shown in Fig.~\ref{overlap}. The existence of a SG phase is
indicated by the finite value of $|q|$ in the low temperature region,
and the approach of $|q|$ to zero at higher temperatures associated
with the paramagnetic phase. For high temperatures and large networks,
$|q|$ is approaching zero in accord with the thermodynamic limit where
$|q| = 0$~\cite{Ogielski85}.
The existence of these two different phases can also be observed from
the distribution of $q$, as shown in Fig.~\ref{distrib}. For
higher temperatures we observe simple random fluctuations of the
values of $q$, leading to a singly peaked Gaussian distribution
characteristic of a paramagnetic state. By decreasing the
temperature, the distribution spreads out, reflecting the
increasing number of metastable disordered states associated with
a substantial frustration. At lower temperatures the
distribution develops double peaks reflecting the $Z_2$ symmetry and a
finite value of $|q|$, representative of the SG phase. We note that the
shape of the observed distribution at low temperatures is different from that of the
conventional Ising system where the double peaks approach delta-like
double peaks reflecting a simple doubly degenerate ground state~\cite{dotsenko}.
\begin{figure}
\centerline{\epsfig{figure=fig4sg.eps,height=7cm, width=9cm}}
\caption{Temperature dependence of the overlap parameter, $q$,
for different system sizes $N$. The increasing value of $q$ at low
temperatures indicates a SG phase. For a given network size, 1000
realizations of the SFN are averaged over. }
\label{overlap}
\end{figure}
An accurate evaluation of the critical temperature of the phase transition
is achieved via the Binder parameter, defined as follows
\begin{equation}
g_L = \frac{1}{2}\left(3 - \frac{\left[\langle q^4\rangle\right]_{\rm
av}}{\left[\langle q^2\rangle \right]_{\rm av}^{2}}\right).
\end{equation}
Here $\langle q^2 \rangle$ and $\langle q^4 \rangle$ are, respectively,
the second and fourth moments of $q$. In this calculation,
in order to avoid
systematic correlation errors that could bias the results if we were
evaluating this average over $g_L$ directly~\cite{Kawashima96}, the
second and fourth order cumulants are averaged prior to taking their
ratio. The Binder parameter is constrained in the range $0 \le g_L
\le 1$.
At high temperature, where thermal fluctuations overcome all
cooperative interaction, the system is expected to exist in the
paramagnetic phase where there is no spatial autocorrelation. As a
result, the distribution of $q$ should be Gaussian centered at $q=0$.
In this case the ratio of the cumulants, $\langle q^4 \rangle /\langle
q^2 \rangle^2 \rightarrow 3 $, resulting in $g_L \rightarrow 0$.
At low temperatures, the cooperative interaction becomes dominant and
the ratio of the cumulants approaches unity so that $g_L \rightarrow 1$.
Fig.~\ref{binder} (inset) displays the temperature dependence of the Binder
parameter for a variety of network sizes. A spin-glass state is
observed for lower temperatures where the Binder parameter deviates
from zero, and increases with the system size, approaching 1.
In the thermodynamic limit, we expect $g_L \to 1$ just below the critical
temperature. A crossing point in the size dependence of $g_L$
indicates that the critical temperature for the SG phase transition is
$T \sim 4.0$.
Fig.~\ref{binder} indicates that for temperatures above $T \sim 4.0$
the Binder parameter, while always remaining above zero, orders with
system size in the opposite manner, indicative of a genuine crossing
of the curves and hence of a genuine spin glass transition at finite
temperature. This feature is not observed for
uniformly distributed AF and FM bonds, for which $T_c = \infty$
in the thermodynamic limit~\cite{Kim05}.
However, the value of the transition temperature is not
determined with high accuracy by the crossing of the Binder parameter.
In fact, finite size effects seem to slightly distort the tendency
for very small networks, as in the case of $N=1024$. At the same time,
the statistical errors in the paramagnetic phase for large networks, see $N=8192$,
appear to be significant and some points are scattered.
\begin{figure}
\vspace{1cm}
\centerline{\epsfig{figure=fig5sg_b.eps,height=9cm, width=11cm}}
\caption{ (Color online).
The distribution of $q$ at various temperatures for different system sizes,
including (a) $N=1024$, (b) $N=2048$, (c) $N=4096$ and (d) $N=8192$.}
\label{distrib}
\end{figure}
\vspace{1cm}
\begin{figure}
\centerline{\epsfig{figure=fig6sg.eps,height=7cm, width=9cm}}
\caption{Scaling behaviour of the Binder cumulant, $g_L$, for
different system sizes. Each system size is averaged over 1000
realizations of the network configuration.}
\label{binder}
\end{figure}
\begin{figure}
\vspace{1cm}
\centerline{\epsfig{figure=fig7sg.eps,height=7cm, width=9cm}}
\caption{Scaling plot of the data illustrated in Fig.~\ref{binder},
fitted to Eq.~(\ref{scalebind}).}
\label{fig_bind}
\end{figure}
A more accurate estimate of the critical temperature, $T_c$, for
finite size systems can be obtained using scaling arguments. For a SG
system, the Binder parameter depends on the system size
$L$ as
\begin{equation}\label{scalebind}
g_{L} = \tilde{g}_{L}[(T-T_c)L^{1/\nu}],
\end{equation}
where $\nu > 0$ is the spin-glass correlation-length exponent, implying
that at $T_c$ the Binder cumulant does not depend on $L$. For the
SFN, the system size scales logarithmically with the number of nodes
$N$~\cite{albert02,dorogovtsev02,dorogovtsev03,Kim05} and therefore we take
$L = \log(N)$.
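Under these assumptions, the scaling variable used for the data collapse can be sketched as (the $T_c$ and $\nu$ values below are the fitted ones quoted later in this section):

```python
import math

def scaling_variable(T, N, Tc, nu):
    """x = (T - Tc) * L^(1/nu), with effective size L = log(N) for the SFN."""
    return (T - Tc) * math.log(N) ** (1.0 / nu)

# at a fixed T, larger networks map to larger |x|; at T = Tc, x = 0 for all N
for N in (1024, 2048, 4096, 8192):
    print(N, round(scaling_variable(4.5, N, Tc=4.0, nu=1.10), 3))
```

Plotting $g_L$ against this variable for all network sizes should collapse the curves of Fig.~\ref{binder} onto a single scaling function $\tilde{g}_L$.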
This slow increase in the diameter of the system, as well as the
average path length, is a manifestation of the ``small-world''
property of this network, induced by the presence of a large number
of highly connected hubs which create shortcuts between the nodes.
An important implication of this feature is that we cannot embed the
network in any finite dimensional lattice: we are implicitly dealing
with a high dimensional system.
The correlation length, in this case, is still well defined although its value
gets close to the densely-connected, mean field limit as we increase the
average connectivity of the nodes, $\langle k \rangle =2 m$.
The parameters $T_c$ and $\nu$ are determined by constraining the
temperature dependence of the Binder parameter for each network size
to lie on a single curve. The curves following the scaling behaviour
of Eq.~(\ref{scalebind}) are shown in Fig.~\ref{fig_bind}.
From this fit we estimate the critical temperature $T_c\sim 4.0(1)$
and the exponent of the SG correlation length $\nu \sim 1.10(2)$.
It is important to underline that this kind of behaviour is not observed
for an AF system on a regular triangular lattice.
\section{Concluding Remarks}\label{four}
In summary, we have investigated the antiferromagnetic Ising model on
a Barab\'asi-Albert scale-free network using the replica
exchange Monte Carlo method. Through the calculation of the overlap
parameter we observe spin glass behaviour at low temperatures.
Using the scaling behaviour of the Binder parameter the critical
temperature separating the SG and the paramagnetic phases is
found to be $T_c=4.0(2)$ with a scaling exponent of SG correlation
length $\nu \sim 1.10(2)$. Such behaviour is not observed for the AF
Ising model on regular triangular lattices. Hence the topology of the
interactions plays a critical role in the dynamics of the system.
\section*{Acknowledgments}
The authors wish to thank Y. Okabe, E. Marinari and J.-S. Wang
for valuable discussions.
One of the authors (TS) is grateful for the hospitality of the Center for the
Subatomic Structure of Matter (CSSM) at the University of Adelaide
during his academic visit to the Center. The computation of this
work has been done using the Hydra teraflop supercomputer
facility of the South
Australian Partnership for Advanced Computing (SAPAC).
\section{Introduction}
Thermal emission from neutron stars (NSs) may potentially
be used to directly measure the
NS surface magnetic field, temperature, and composition,
achieve a more complete understanding of the evolution
of the NSs, and constrain the properties of matter and physical
processes
under extreme conditions. It was realized long ago
that a NS
atmosphere model should properly include a strong magnetic field
and partial ionization \citep[see, e.g.,][for an early review]{Pavlov-ea95}.
Models of \emph{fully ionized} NS atmospheres with strong magnetic fields
were constructed by several research groups
\citep[e.g.,][and references therein]{Shib92,Zane00,HoLai02}.
The most recent papers highlighted the effects that
may be important for the atmospheres of magnetars: the ion cyclotron feature
\citep{HoLai,Zane01} and vacuum polarization effects,
including a conversion of the normal modes of radiation propagating in the
magnetized atmosphere \citep{HoLai02,LaiHo02,LaiHo03}.
Early considerations of \emph{partial ionization}
in the magnetized NS atmospheres
(\citealt{Miller92,RRM}; also reviewed briefly by \citealt{ZP02})
were
not very reliable because of oversimplified treatments of
atomic physics and nonideal plasma effects in strong magnetic fields.
At the typical NS atmosphere parameters,
the effects of thermal motion of the bound species are important.
In the 1990s, binding energies
and radiative transition rates with allowance
for the motion effects in strong magnetic fields
have been calculated for the H atom \citep{P94,PP97}.
Recently these atomic data
have been implemented in calculations
of thermodynamic functions \citep{PCS99,PC03,PC04},
radiative opacities \citep{PC03,PC04},
and spectra \citep{hoetal}
of the partially ionized H atmospheres of the NSs.
Some results have been presented at the previous {\sc Cospar} meeting
\citep{Ho-COSPAR}. Now our atmosphere model has been complemented by
the effects of the bound states on the polarization properties of
the strongly magnetized plasma \citep{KK}.
Below we briefly summarize the results that allow us to calculate
realistic X-ray spectra of thermal radiation from hydrogen NS
atmospheres with magnetic fields
$B\sim10^{12}-10^{14}$~G and
effective temperatures $T\gtrsim10^{5.5}$~K,
and outline the problems that remain to be
addressed at other atmospheric parameters and compositions.
\section{The atmosphere model}
We use the equation of state (EOS) for H in strong magnetic fields
\citep{PCS99} based on
the free-energy minimization method, which ensures the thermodynamic
consistency and allows one to determine
number fractions of chemical species, required for opacity calculations.
The model takes into account all
available theoretical results on the
moving H atoms and nonideal Coulomb plasmas in the magnetic fields.
This EOS has been tabulated and employed for calculation of
opacities for astrophysical
use \citep{PC03,PC04}.
\begin{figure}[t]
\begin{center}
\epsfxsize=\textwidth
\epsffile{ang13_0bw.eps}
\caption{Opacities $\kappa_j$ (Eq.~[\ref{opac}]) in the O-mode (left panels)
and X-mode (right panels)
versus
photon energy in the hydrogen plasma at $B=10^{13}$~G,
$T=10^{5.5}$~K, and $\rho=1$ g cm$^{-3}$,
for $\theta_B=10^\circ$ (upper panels)
and $45^\circ$ (lower panels).
Solid lines: a self-consistent calculation
for a partially ionized plasma (70\% of neutrals);
dot-dashed lines: the model of full ionization.
\label{fig-ang13_0}
}
\end{center}
\end{figure}
It is well known that under typical
conditions (e.g., far from the resonances) radiation propagates in a
magnetized plasma in the form of two normal modes,
called the extraordinary (X) and the ordinary (O).
The opacity in the mode $j$ ($j=$X,O)
depends on the
photon frequency $\omega$, magnetic field $B$, density $\rho$,
temperature $T$, and the angle
$\theta_B$ between the magnetic field and propagation direction.
It can be written as
\begin{equation}
\kappa_j(\omega,\theta_B) = \sum_{\alpha=-1}^1 \!\!
|e_\alpha^j(\omega,\theta_B)|^2 \,\hat\kappa_\alpha(\omega),
\label{opac}
\end{equation}
where $e_\alpha^j$ ($\alpha=-1,0,1$) are the cyclic coordinates
of the polarization vectors
of the normal modes, and the quantities
$\hat\kappa_\alpha$ ($\alpha=-1,0,1$)
do not depend on $\theta_B$.
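As a sketch, Eq.~(\ref{opac}) is simply a weighted sum over the cyclic components of the mode polarization vector; the numbers below are purely illustrative, not actual plasma data:

```python
def mode_opacity(e_cyc, kappa_hat):
    """kappa_j = sum_alpha |e_alpha^j|^2 * kappa_hat_alpha (Eq. [opac]),
    with cyclic components alpha = -1, 0, +1 of the polarization vector."""
    return sum(abs(e) ** 2 * k for e, k in zip(e_cyc, kappa_hat))

# hypothetical mode dominated by its alpha = +1 component;
# kappa_hat values in cm^2/g are invented for illustration only
e_cyc = [0.1 + 0.0j, 0.2 + 0.1j, 0.94 + 0.0j]
kappa_hat = [5.0, 1.0, 0.2]
print(mode_opacity(e_cyc, kappa_hat))
```

The angular dependence enters only through the $|e_\alpha^j(\omega,\theta_B)|^2$ weights, while the $\hat\kappa_\alpha(\omega)$ are computed once per plasma state.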
\citet{PC03} calculated $\hat\kappa_\alpha(\omega)$
and evaluated $\kappa_j(\omega,\theta_B)$
using the polarization vectors of the normal modes in
the fully ionized plasma.
Such a calculation (dubbed ``hybrid'' in \citealt{KK})
was employed in our previous model of partially ionized
hydrogen
atmospheres of the NSs with strong magnetic fields
\citep{hoetal,Ho-COSPAR}.
In the new model
\citep{KK}, we take into account the
influence of the bound species on the polarization vectors of
the normal modes, making use of the Kramers-Kronig relation\footnote{%
Previously this relation has been used by \citet{BulikPavlov}
for a neutral gas of H atoms in a strong magnetic field.}
between the imaginary and real parts of the plasma polarizability. Thus
the calculation of the polarization vectors and opacities
of the normal modes has become self-consistent. The
calculations of thermal spectra of the NSs
show that such self-consistent
treatment is necessary if the number fraction of
the bound states exceeds several percent.
In Fig.~\ref{fig-ang13_0} we compare
radiative opacities calculated with and without allowance for
the bound species
for one particular set of plasma parameters, typical near
the radiative surface of a moderately cool neutron star
with magnetic field $B=10^{13}$~G, for two
$\theta_B$ values. In the case shown in the figure,
the neutral fraction is 70\%, thus the self-consistent treatment
of the opacities is important.
\section{Conclusion and unsolved problems}
The constructed atmosphere models allow us to calculate realistic spectra of
thermal X-ray radiation from H atmospheres of
the NSs with $10^{12}\mbox{ G}\lesssim B \lesssim 10^{14}\mbox{ G}$
and $T\gtrsim10^{5.5}$~K. Examples of these spectra are presented elsewhere
\citep{KK}.
There remain the following
unsolved problems that
prevent us from making
similarly reliable calculations beyond these limits.
{1.} Although the H EOS and opacities have been calculated for $B$
up to $10^{15}$~G and implemented in the atmosphere
model \citep{hoetal,Ho-COSPAR}, the calculated spectra at
$B\gtrsim10^{14}$~G depend on the adopted model of mode conversion due to
the vacuum resonance and on description of propagation of photons with
frequencies below the plasma frequency. Neither of these problems has been
definitively solved. Their solution is also important for modeling the
low-frequency (UV and optical) tail of the spectrum.
{2.} At lower $T$ or higher $B$, H atoms recombine into
H$_n$ molecules and eventually form the condensed phase
\citep[][and references therein]{Lai-RMP}.
Corresponding quantum-mechanical data are very incomplete.
{3.} At $10^9\mbox{ G}\lesssim B \lesssim 10^{11}$~G,
transition rates of the moving H atom have
not been calculated previously because of their
complexity.
The first calculation of the energy spectrum
appropriate in this range of $B$
has been published when the present paper was in preparation
\citep{LozovikVolkov}.
{4.} At present it is not possible to calculate accurate
atmospheric spectra at $B\gtrsim10^{12}$~G for chemical elements
other than hydrogen, because of the importance of
the effects of finite nuclear mass in the strong field regime.
Apart from the H atom, these effects have been
calculated only for the He atom \emph{at rest} \citep{Hujaj03a,Hujaj03b}
and for the He$^+$ ion (at only one value of $B$, \citealt{BPV}).
{5.} A more rigorous treatment of radiative transfer in the atmosphere
requires solving the transfer equations of the Stokes parameters (see,
e.g., \citealt{LaiHo03} for the cases of fully ionized atmospheres).
However, since the nonorthogonal features of the modes due to neutral
species are diminished by the center-of-mass motion, the effect is
expected to be small.
Finally, let us note that the atmosphere model presented here, together with
a model of radiation from the condensed magnetic surface
\citep{surfem}, has been successfully used for fitting the spectrum
of the isolated neutron star RX J1856.5$-$3754 \citep{Ho-RXJ}.
\textbf{Acknowledgements.}
A.P.\ acknowledges the hospitality
of the Astronomy Department of Cornell University
and the theoretical astrophysics group
at the Ecole Normale Su\-p\'e\-r\-i\-eure de Lyon.
The work of A.P.\ is supported in part by RFBR grants
02-02-17668 and 03-07-90200,
and RLSS grant 1115.2003.2.
D.L.\ is supported in part by NSF grant AST 0307252,
NASA grant NAG 5-12034,
and SAO grant TM4-5002X.
W.H.\ is supported by NASA through Hubble Fellowship grant
HF-01161.01-A awarded by STScI, which is operated by AURA, Inc.,
for NASA, under contract NAS 5-26555.
\section{Introduction}
Image translation has become one of the most important areas of research, and advances in deep learning have helped address it. A GAN contains two neural networks that compete against each other (adversarially) to produce more accurate predictions. Consider the case of translating winter scenes to summer and then back to winter. CycleGAN consists of two GANs, i.e., two generators and two discriminators: one generator transforms images from the source domain to the target domain (e.g., selfies into anime), and the other transforms them back. During training, the discriminators check whether the images produced by the generators are fake or real. This loop ensures that an image created by a generator is cycle consistent: applying both generators consecutively to an image should yield a similar image. This sidesteps the main shortcoming of pix2pix, which requires paired training data.
In supervised image translation (paired data) there is no ambiguity about what kind of output should be generated, but in the unsupervised (unpaired) case it is more important to constrain the task. For example, in orange-to-tomato translation there are two possibilities: the orange is completely translated into a tomato, or only its colour is changed. In unsupervised learning the data sets play a very important role in determining the mapping function. The special thing about GANs is that they can create new objects from a random input.
\section{Related Work}
\textbf{Image-to-Image Translation :}
The idea behind image-to-image translation is to learn not only the mapping from input images to output images but also the loss function used to train that mapping. CycleGAN learns the mapping from the original image to the generated image and, along with it, a loss function to train this input-output mapping.
\textbf{Generative adversarial Networks:}
GANs have proved to be expert at image-to-image translation, and the same idea can be applied to video and text translation. The reason behind the success of GANs is the adversarial loss, which forces the generated images to be indistinguishable from real images: the mapping function is learned in such a way that the generated images cannot be distinguished from the target domain. GANs take random input from Gaussian noise and generate meaningful outputs. The generator and discriminator are implemented using convolutional neural networks to perform feature extraction and mapping. Both the generator and the discriminator are trained through back-propagation, so the generator learns to produce better results and the discriminator learns not to be fooled.
\textbf{Unpaired Image-to-Image Translation:}
Initially, image-to-image translation was supervised, i.e., it required a paired data set; the model proposed here is unsupervised, performing image-to-image translation from unpaired images. Being unsupervised, the model has a wider range of applications: in health care, for example, paired medical data is scarce and expensive.
\begin{figure}[h!]
\centering
\includegraphics[width=10cm]{paired_and_unpaired}
\caption{Paired and Unpaired Images}
\label{fig:galaxy}
\end{figure}
\textbf{Neural style transfer:}
This is one of the methods for image-to-image translation: it renders the content of domain X in the style of domain Y.
\textbf{Cycle consistency loss:}
In CycleGAN, generator G converts the original image from domain X to domain Y, and generator F tries to reconstruct that image from domain Y back to domain X. During this conversion it is important to track the reconstruction loss at every step; that responsibility is taken by the cycle consistency loss. The cycle consistency loss is added to the adversarial loss and back-propagated, so that the model becomes better at the task.
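A toy sketch of the cycle consistency term (our helper names; $\lambda=10$ as used later in the training details; 1-D lists stand in for images):

```python
def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 ): the forward and
    backward reconstruction terms added to the adversarial loss."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# toy generators on 1-D "images": G shifts values up, F shifts them down
G = lambda img: [v + 1.0 for v in img]
F = lambda img: [v - 1.0 for v in img]
x, y = [0.0, 0.5], [1.0, 1.5]
print(cycle_consistency_loss(x, y, G, F))  # 0.0: F undoes G exactly
```

When F exactly inverts G, both reconstruction terms vanish; any mismatch is penalized and back-propagated through both generators.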
\section{Network Architecture}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{CycleGAN}
\caption{CycleGAN}
\label{CycleGAN}
\end{figure}
\subsection{Model}
In CycleGAN there are two discriminators and two generators. The model for the generative networks is taken from Johnson et al.\cite{johnson2016perceptual}, whose results are impressive for style transfer and super-resolution. This architecture contains three convolutional layers, six residual blocks, two fractionally strided (transpose) convolutional layers, and one convolutional layer that maps the features onto the RGB plane. The researchers made slight modifications to this network and designed the generators G and F, which are inverses of each other and perform one-to-one mappings. The aim is to create a mapping from domain X to domain Y and then back to domain X. Consider $x_i$ belonging to X and $y_i$ belonging to Y; the mathematical view of the model is $G : X \rightarrow Y$ and another translator $F : Y \rightarrow X$, so G and F are inverses of each other, and both mappings are one-to-one.
\subsection{Generator}
The generator is a combination of an encoder, residual layers, and a decoder. It is responsible for creating a fake image from the input image, and is subdivided into three parts: the encoder block, the residual block, and the decoder block.
\subsubsection{Encoder}
The encoder plays a key role in extracting features from the original image.
It is designed using convolutional layers. There are three convolutional layers in the encoder: the initial layer accepts an input of dimension 3, and at the output of the third layer we have 256 channels. Each convolutional layer is followed by instance normalization, with a batch size of one.
\emph{IN -- instance normalization, Conv -- convolutional layer.}
$\boldsymbol{Encoder: } Conv-64-IN, Conv-128-IN, Conv-256-IN$
\subsubsection{Residual Block}
In the generator, the encoder and decoder are linked together by residual blocks. Deep models suffer from the vanishing-gradient problem and are difficult to train well, so the results fall short of what is desired; this can be mitigated with residual blocks, which learn residual functions.
There are six resnet blocks, each containing two convolutional layers.
The overall layout is as follows; here Res stands for a block {Conv-IN}. The output of the encoder is fed to the residual layers.
$\boldsymbol{Residual\ blocks:} Res-256, Res-256, Res-256, Res-256, Res-256, Res-256$
\subsubsection{Decoder}
The output of the residual layers is taken by the decoder. The decoder is basically an up-sampler: it reconstructs all the features into an image. It is built from two transpose convolutional layers, which take 256 input channels and reduce them back to 64; the last layer is a convolutional layer that converts these features into a three-channel RGB image and is followed by a tanh activation. Here a transpose convolutional layer is denoted TransposeConv.
$\boldsymbol{Decoder:} TransposeConv-128-IN, TransposeConv-64-IN, Conv-3-Tanh$
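As a rough sanity check of the down- and up-sampling stages described above (the kernel sizes here are illustrative assumptions, not taken from the paper), one can trace the spatial dimensions:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def transpose_conv_out(size, kernel, stride, pad):
    """Spatial output size of a transpose (fractionally strided) convolution."""
    return (size - 1) * stride - 2 * pad + kernel

# trace a 256x256 input through a stride-2 downsampling conv and the
# matching stride-2 transpose conv (hypothetical kernels 3 and 4, pad 1)
s = conv_out(256, kernel=3, stride=2, pad=1)          # 128
s = transpose_conv_out(s, kernel=4, stride=2, pad=1)  # back to 256
print(s)  # 256
```

The residual blocks in between preserve the spatial size, so the decoder's transpose convolutions restore exactly what the encoder's strided convolutions removed.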
\subsection{Discriminator}
The discriminator consists of five convolutional layers, used to classify $70\times70$ image patches as real or fake. The patchGAN doubles the number of channels and halves the spatial size at each stage, repeated until the output converges to the desired state. The discriminator filter size is $3\times3$; it takes a 3-dimensional input image, expands it to 32, 64, 128, and 256 channels stage-wise, and finally maps back to dimension 3. In the discriminator the ReLUs are leaky, with $\alpha = 0.2$.
$\boldsymbol{Discriminator:} Conv-32, Conv-64, Conv-128, Conv-256, Conv-3$
\section{Training Details}
Adam\cite{kingma2017adam} is used as the optimizer, one of the best performers available. To decay the running average of the gradients, beta-1 is set to 0.5, and to decay the square of the gradient, beta-2 is set to 0.999. The learning rate is initially set to 0.002 and the batch size to one, i.e., instance normalization, which implies that batch normalization is not used. In all tasks the value of $\lambda$ is taken as 10.
\subsection{Generator Training}
The real image $x$ is given to generator $G_x$, which creates a fake image fake$_y$. That fake image is passed to discriminator $D_y$, which outputs a fake decision; from this decision the mean-square loss of $G_x$ is estimated. The generated fake$_y$ is given to generator $G_y$, which tries to reconstruct the original image $R_x$ with some loss. To train generator $G_y$, an original image $y$ from domain Y is taken and a fake image fake$_x$ is generated. This fake$_x$ is passed to discriminator $D_x$, which outputs a fake decision; the mean-square loss of generator $G_y$ is calculated from it.
The \textbf{forward cyclic loss} is calculated from the reconstructed image $R_x$ and the real image $x$; the \textbf{backward cyclic loss} is calculated from the reconstructed image $R_y$ and the original image $y$. The total generator loss is the sum of the $G_x$ loss, the $G_y$ loss, the forward cyclic loss, and the backward cyclic loss; this final loss is back-propagated. In all cases the Adam optimizer is used.
\subsection{Discriminator Training}
Discriminator $D_x$ is trained with the real image $x$ and with fake$_x$, producing a $D_x$ real decision and a $D_x$ fake decision. The decision loss of $D_x$ is calculated from these real and fake decisions and back-propagated. Similarly, discriminator $D_y$ is trained with the real image $y$ and with fake$_y$, producing a $D_y$ real decision and a $D_y$ fake decision, from which the decision loss of $D_y$ is calculated and back-propagated. In summary, both discriminators are trained on the real images $x$, $y$ and on the corresponding fake images, and the sum of the real and fake losses of $D_x$ and $D_y$ is back-propagated.
\subsection{Image buffer}
Since the generators and discriminators are trained simultaneously, it is important that the model does not change drastically between successive epochs. To prevent this\cite{shrivastava2017learning}, the discriminator is fed previously generated images rather than only the single image just produced by the generator. An image pool stores the 50 most recently generated images. Training with this buffer reduces both model oscillation\cite{goodfellow2017nips} and overfitting, which would otherwise lead to mode collapse.
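The 50-image history buffer can be sketched as below (class name is ours); with probability one half the discriminator receives a stored older fake instead of the freshly generated one:

```python
import random

class ImagePool:
    """History buffer of previously generated images.
    Once full, a query either returns the new image directly or swaps it
    with a randomly chosen stored fake and returns the old one, so the
    discriminator does not react only to the generator's latest output."""
    def __init__(self, size=50):
        self.size = size
        self.images = []

    def query(self, image):
        if len(self.images) < self.size:   # buffer not yet full: store and pass through
            self.images.append(image)
            return image
        if random.random() < 0.5:          # swap with a random stored fake
            idx = random.randrange(self.size)
            old, self.images[idx] = self.images[idx], image
            return old
        return image
```

Each generated fake is routed through `query` before being shown to the discriminator, so roughly half of the discriminator's fake inputs come from earlier training steps.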
\section{Experiments:}
\textbf{Maps dataset:} The dataset contains 1100 samples, partitioned into trainA, trainB, testA and testB for training and testing. It is collected from Kaggle\cite{Maps2satellite}. On this dataset the CycleGAN is trained for 150 epochs with a learning rate of 0.002, after which the learning rate is decayed linearly up to epoch 315.
\textbf{Vangogh2photo dataset:} This is a small dataset, likewise partitioned into four sets for training and testing. It is trained for 150 epochs with a learning rate of 0.002, and from epoch 150 to 230 with a decayed learning rate, i.e.\ the learning rate gradually becomes zero. The dataset is collected from Kaggle\cite{Vangogh2photo}.
\textbf{Summer2winter dataset:} This dataset is used to demonstrate season transfer. The data is again partitioned into four parts for training and testing. The model is trained for 120 epochs with a learning rate of 0.002, and then from epoch 120 to 230 with the learning rate linearly decayed to zero. The dataset is taken from Kaggle\cite{summer2winter}; the images are normalized to $256 \times 256$ pixels. There are 1273 summer and 854 winter training images.
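The constant-then-linear-decay schedule used in these runs can be sketched as follows (epoch counts below follow the vangogh2photo run and are just parameters; the function name is ours):

```python
def learning_rate(epoch, base_lr=0.002, constant_epochs=150, total_epochs=230):
    """Keep base_lr for the first `constant_epochs` epochs, then decay it
    linearly so that it reaches zero at `total_epochs`."""
    if epoch < constant_epochs:
        return base_lr
    frac = (epoch - constant_epochs) / float(total_epochs - constant_epochs)
    return base_lr * max(0.0, 1.0 - frac)
```

The maps run uses the same shape with 150 constant epochs and decay until 315, and the summer2winter run with 120 constant epochs and decay until 230.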
\section{Objective functions}
The CycleGAN objective has two components, an adversarial loss and a cycle-consistency loss; both are essential for producing good outputs. Each generator tries to fool its respective discriminator.
The loss of the mapping function $G: X \rightarrow Y$ is as follows.
\begin{equation}\label{eu_eqn1}
L_{GAN}(G, D_Y, X, Y) = E_{y \sim p_{\rm data}(y)}[\log D_Y(y)] + E_{x \sim p_{\rm data}(x)}[\log(1 - D_Y(G(x)))]
\end{equation}
The discriminator tries to maximize the above expression while the generator tries to minimize it against the adversary discriminator, i.e.\ $\min_{G} \max_{D_Y} L_{GAN}(G, D_Y, X, Y)$. Similarly, the loss of the mapping $F: Y \rightarrow X$ is given below.
\begin{equation}\label{eu_eqn2}
L_{GAN}(F, D_X, Y, X) = E_{x \sim p_{\rm data}(x)}[\log D_X(x)] + E_{y \sim p_{\rm data}(y)}[\log(1 - D_X(F(y)))]
\end{equation}
Similarly, for the mapping from $Y$ to $X$ the adversarial objective is $\min_{F} \max_{D_X} L_{GAN}(F, D_X, Y, X)$. The adversarial loss alone is not enough to produce good output images; because both generators are trained at the same time, a cyclic loss is needed.
\begin{equation}\label{eu_eqn3}
L_{cyc}(G, F) = E_{x \sim p_{\rm data}(x)}[\|F(G(x)) - x\|_{1}] + E_{y \sim p_{\rm data}(y)}[\|G(F(y)) - y\|_{1}].
\end{equation}
The full objective is formed by combining the above loss functions, weighting the cycle-consistency loss by a hyperparameter $\lambda$.
\begin{equation}\label{eu_eqn4}
L(G, F, D_X, D_Y) = L_{GAN}(G, D_Y, X, Y)
+ L_{GAN}(F, D_X, Y, X)
+ \lambda L_{cyc}(G, F),
\end{equation}
So the final aim is to solve the optimization $G^{*}, F^{*} = \arg \min_{G,F} \max_{D_X, D_Y} L(G, F, D_X, D_Y)$.
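A numerical sketch of the adversarial, cycle and full objectives above, treating discriminator outputs and images as plain arrays (illustrative only; all names are ours):

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """Adversarial value: E[log D(real)] + E[log(1 - D(fake))]."""
    d_real = np.asarray(d_real, float)
    d_fake = np.asarray(d_fake, float)
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def cycle_loss(x, fgx, y, gfy):
    """Cycle-consistency: L1 norms of the two reconstruction errors."""
    def l1(a, b):
        return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))
    return l1(fgx, x) + l1(gfy, y)

def full_objective(dy_real, dy_fake, dx_real, dx_fake,
                   x, fgx, y, gfy, lam=10.0):
    """Both adversarial terms plus lambda times the cycle term."""
    return gan_loss(dy_real, dy_fake) + gan_loss(dx_real, dx_fake) \
        + lam * cycle_loss(x, fgx, y, gfy)
```

The discriminators try to drive the adversarial terms up while the generators drive the whole expression down, with $\lambda = 10$ as stated earlier.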
\section{Applications of CycleGANs}
CycleGAN can be applied in many areas, for example converting a selfie to an anime character, winter scenes to summer scenes, one animal species to another, sketches to photos, low-resolution images to high-resolution ones, object transfiguration, and photo enhancement. There is a wide range of applications in computer vision, graphics, video games, etc.
\section{Results}
\textbf{NOTE-1:} All images are shown in the order domain-X (original image) $\rightarrow$ domain-Y (generated image) $\rightarrow$ reconstructed image.
\begin{figure}[htbp]
\minipage{0.5\textwidth}
\includegraphics[width=\linewidth]{VP1.png}
\caption{\textbf{Row-1:} Vangogh $\rightarrow$ Picture $\rightarrow$ Vangogh and \textbf{Row-2:} Picture $\rightarrow$ Vangogh $\rightarrow$ Picture}\label{fig:awesome_image1}
\endminipage\hfill
\minipage{0.5\textwidth}
\includegraphics[width=\linewidth]{M1.png}
\caption{\textbf{Row-1:} Satellite $\rightarrow$ Aerial $\rightarrow$ Satellite and \textbf{Row-2:} Aerial $\rightarrow$ Satellite $\rightarrow$ Aerial}\label{fig:awesome_image2}
\endminipage\hfill
\minipage{0.5\textwidth}%
\includegraphics[width=\linewidth]{M2.png}
\caption{\textbf{Row-1:} Satellite $\rightarrow$ Aerial $\rightarrow$ Satellite and \textbf{Row-2:} Aerial $\rightarrow$ Satellite $\rightarrow$ Aerial}\label{fig:awesome_image3}
\endminipage
\minipage{0.5\textwidth}
\includegraphics[width=\linewidth]{SW3.png}
\caption{\textbf{Row-1:} Summer $\rightarrow$ Winter $\rightarrow$ Summer and \textbf{Row-2:} Winter $\rightarrow$ Summer $\rightarrow$ Winter}\label{fig:awesome_image4}
\endminipage\hfill
\end{figure}
\subsection{Loss Curves}
The loss curves below are for the summer-to-winter and winter-to-summer conversion task. There is no single definitive metric for the performance of a CycleGAN, but the accuracy of the model can be observed from the loss figures.
\begin{figure}[h!]
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=0.9\textwidth, height=2in]{cy.png}
\caption{\label{fig:7a}}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=0.9\textwidth, height=2in]{lo.png}
\caption{\label{fig:7b}}
\end{subfigure}
\caption{Loss plots for summer to winter and vice versa. (\subref{fig:7a}) Cyclic losses. (\subref{fig:7b}) Generator and discriminator losses.}
\label{fig:2}
\end{figure}
\section{Limitations}
\begin{figure}[h!]
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=0.9\textwidth, height=2in]{L1.png}
\caption{\label{fig:8a}}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=0.9\textwidth, height=2in]{L2.png}
\caption{\label{fig:8b}}
\end{subfigure}
\caption{Some figures illustrating CycleGAN's difficulties. (\subref{fig:8a}) Summer-to-winter conversion (the woman is missing from the generated images). (\subref{fig:8b}) The statue is covered with grass in the generated image.}
\label{fig:3}
\end{figure}
Even though CycleGAN produces good results, there are some failure cases in the summer-to-winter and winter-to-summer translations. In Fig.~\ref{fig:8a}, the woman in front of the tree is not present in the reconstructed image. In Fig.~\ref{fig:8b}, the generated image of a large statue is completely covered by grass, which is a major failure of CycleGAN. A similar observation was made for the horse-to-zebra translation by the researchers in the original paper. Besides the resolution change from the original image to the generated and regenerated images, the clarity of the image is slightly reduced.
\section{Conclusion}
As demonstrated above, the results are accurate on all three datasets used to test the CycleGAN. In a few cases the model recreates images at low resolution, and sometimes background or colour changes are introduced. These results and limitations clearly show that CycleGAN still needs a good amount of further research.
\printbibliography
\end{document}
\section{Introduction}
The origins of prosocial behavior in groups of unrelated individuals are difficult to trace down. There exists ample evidence indicating that between-group conflicts may have been instrumental for enhancing in-group solidarity \cite{bowles_11}. On the other hand, some argue that our pre-human ancestors may have been confronted by more pressing challenges than simply to avoid being wiped out by their neighbors. About two million years ago some hominids were beginning to evolve larger brains and body size and to mature more slowly than other apes, which likely created serious challenges in rearing offspring that survived \cite{peters_83, calder_84}. Hence, alloparental care and provisioning for someone else's young have also been proposed as viable for igniting the evolution of the remarkable other-regarding abilities of the genus \textit{Homo} that we witness today \cite{hrdy_11}. Regardless of its origins, it is a fact that cooperation in groups is crucial for the remarkable evolutionary success of the human species, and it is therefore of the utmost importance to identify mechanisms that might have spurred its later development \cite{nowak_11}.
Evolutionary game theory \cite{sigmund_93, weibull_95, hofbauer_98, nowak_06, sigmund_10} is firmly established as the theoretical framework of choice for those studying the emergence and sustainability of cooperation at different levels of organization \cite{axelrod_84}. Recent reviews attest clearly to the fact that interdisciplinary approaches, linking together knowledge from biology, sociology, economics as well as mathematics and physics, are especially successful in identifying new ways by means of which the successful evolution of cooperation amongst selfish and unrelated individuals can be understood \cite{doebeli_el05, nowak_s06, szabo_pr07, schuster_jbp08, perc_bs10}. The public goods game, in particular, has proven itself times and again as the classic paradigm that succinctly captures the essential social dilemma that emerges as a consequence of group and individual interests being inherently different, which may ultimately result in the ``tragedy of the commons'' \cite{hardin_g_s68}. Governed by group interactions, the public goods game requires that players decide simultaneously whether they wish to contribute to the common pool, \textit{i.e.} to cooperate, or not. Regardless of the chosen strategy, each member of the group receives an equal share of the public good after the initial investments are multiplied by a synergy factor that takes into account the added value of collaborative efforts. Evidently, individuals are best off by not contributing anything to the common pool, \textit{i.e.} by defecting, while the group, and indeed the society as a whole, is most successful if everybody cooperates.
Recent research has made it clear that spatial structure plays a pivotal role by the evolution of cooperation, as comprehensively reviewed in \cite{szabo_pr07}. Inspired by the seminal paper introducing games on grids \cite{nowak_n92b}, evolutionary games on graphs and complex networks \cite{abramson_pre01, ebel_pre02, zimmermann_pre04, vukov_pre05, lieberman_n05, santos_pnas06, lozano_ploso08, santos_n08, szolnoki_epl09, liu_rr_pa10, poncela_njp09, zschaler_njp10, zhang_j_pa11, van-segbroeck_njp11, lee_s_prl11, poncela_pre11, dai_ql_njp10, wu_t_pone11, gomez-gardenes_c11} have proven instrumental in raising the awareness of the fact that relaxing the simplification of well-mixed interactions may lead to qualitatively different results that are due to pattern formation and intricate organization of the competing strategies, which reveals itself in most unexpected ways. Specifically for the spatial public goods game \cite{wakano_pnas09, szolnoki_pre09c}, it has recently been shown that inhomogeneous player activities \cite{guan_pre07}, appropriate partner selection \cite{wu_t_pre09, zhang_hf_epl11}, diversity \cite{yang_hx_pre09, ohdaira_acs11, perc_njp11}, the critical mass \cite{szolnoki_pre10}, heterogeneous wealth distributions \cite{wang_j_pre10b}, the introduction of punishment \cite{brandt_prsb03, helbing_ploscb10} and reward \cite{szolnoki_epl10}, as well as both the joker \cite{arenas_jtb11} and the Matthew effect \cite{perc_pre11}, can all substantially promote the evolution of public cooperation.
Apart from rare exceptions, the large majority of previously published works assumed unconditional strategies, \textit{i.e.} cooperators that always cooperated and defectors that always defected. Nevertheless, the usage of unconditional strategies constitutes a simplification that deserves further exploration. It is a fact that individuals, be it humans or animals, will likely behave differently under different circumstances. This invites the introduction of conditional strategies, by means of which such considerations can be appropriately taken into account. With this motivation, we here study the evolution of cooperation in the spatial public goods game containing conditional cooperators. Conditional cooperators will contribute to the common pool only if there is a sufficiently high number of other conditional cooperators in the group. If not, conditional cooperators will defect, at least until the group acquires more players that are likely to cooperate. While the details of the model and the main results will be presented in the following two sections, beforehand, the key finding of this work is that conditional cooperators are able to quarantine defectors into isolated convex ``bubbles'' from where they are unable to exploit the public good, and in doing so warrant completely defector-free states even if the synergy factor is close to one. Perhaps even more interestingly, we find that just the signalling of the willingness to cooperate, instead of a hard promise, is sufficient to elevate the level of collaborative efforts. As we will show, these observations rely on the spatial structure and cannot be observed in well-mixed systems, although they are robust against the topological variations of the interaction network and the group size.
\section{Spatial public goods game with conditional strategies}
The public goods game is staged on a square lattice with periodic boundary conditions where $L^2$ players are arranged into overlapping groups of size $G=5$ such that everyone is connected to its $G-1$ nearest neighbors. Accordingly, each individual belongs to $g=1,\ldots,G$ different groups. Initially each player on site $x$ is designated either as a conditional cooperator ($s_x = C_i$), where $i=0,\ldots,G-1$, or defector ($s_x = D$) with equal probability. Conditional cooperators contribute a fixed amount (here taken to be equal to $1$ without loss of generality) to the public good only if there are at least $i$ other players within the group $g$ who are also willing to cooperate (whose strategy is $C_0$, $C_1$, $C_2$, $C_3$ or $C_4$), while defectors contribute nothing. Formally, $C_0$ thus recovers the unconditional cooperator $C$, while $C_G$ recovers the unconditional defector $D$. Note that in the presence of a player having strategy $s_x=C_G$ there cannot be $G$ other conditional cooperators within a group. The sum of all contributions in each group is multiplied by the synergy factor $r$ and the resulting public goods are distributed equally amongst all the group members irrespective of their contributions.
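To make the payoff rule concrete, here is a small sketch (the function name and integer encoding are ours, not from the paper) of the payoffs in a single group; strategy index $i<G$ encodes $C_i$ and $i=G$ an unconditional defector:

```python
def group_payoffs(strategies, r, G=5):
    """Payoffs in one group of the conditional public goods game.
    strategies[k] = i encodes C_i (i < G) or an unconditional defector (i = G).
    C_i contributes 1 only if at least i OTHER group members are willing to
    cooperate (index below G); everyone receives an equal share r*pool/G."""
    willing = [s < G for s in strategies]
    contributes = [s < G and sum(willing) - 1 >= s for s in strategies]
    pool = r * sum(contributes)
    share = pool / G
    return [share - (1 if c else 0) for c in contributes]
```

For example, a group of five $C_4$ players at $r=1.05$ yields $r-1=0.05$ for everyone, whereas replacing one member by a defector silences all contributions, so nobody earns anything; this is the microscopic root of the quarantining mechanism discussed in the Results.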
Monte Carlo simulations of the game are carried out comprising the following elementary steps. A randomly selected player $x$ plays the public goods game with its $G-1$ partners as a member of all the $g$ groups, whereby its overall payoff $P_{s_x}$ is thus the sum of all the payoffs acquired in the five groups. Next, player $x$ chooses one of its nearest neighbors at random, and the chosen co-player $y$ also acquires its payoff $P_{s_y}$ in the same way. Finally, player $x$ enforces its strategy $s_x$ onto player $y$ with a probability $w(s_x \to s_y)=1/\{1+\exp[(P_{s_y}-P_{s_x})/K]\}$, where $K=0.5$ quantifies the uncertainty by strategy adoptions \cite{szolnoki_pre09c}, implying that better performing players are readily adopted, although it is not impossible to adopt the strategy of a player performing worse. Such errors in decision making can be attributed to mistakes and external influences that adversely affect the evaluation of the opponent. Each Monte Carlo step (MCS) gives a chance for every player to enforce its strategy onto one of the neighbors once on average. The average densities of conditional cooperators ($\rho_{i}$) and defectors ($\rho_{D}$, alternatively denoted as $C_5$ and $\rho_5$) were determined in the stationary state after sufficiently long relaxation times. Depending on the actual conditions (proximity to phase transition points and the typical size of emerging spatial patterns) the linear system size was varied from $L=180$ to $720$ and the relaxation time was varied from $10^4$ to $10^6$ MCS to ensure proper accuracy.
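The strategy-adoption step with the Fermi probability $w(s_x \to s_y)$ and $K=0.5$ can be sketched as follows (naming is ours):

```python
import math
import random

def adopt_probability(payoff_x, payoff_y, K=0.5):
    """Fermi rule: probability that player x's strategy replaces player y's.
    Better performing players are readily imitated, but adopting a worse
    performer's strategy remains possible (errors in decision making)."""
    return 1.0 / (1.0 + math.exp((payoff_y - payoff_x) / K))

def imitate(strategy_x, strategy_y, payoff_x, payoff_y, K=0.5):
    """Player y keeps its strategy or adopts s_x with the Fermi probability."""
    if random.random() < adopt_probability(payoff_x, payoff_y, K):
        return strategy_x
    return strategy_y
```

At equal payoffs adoption is a coin flip, while a payoff advantage of a few units makes the outcome nearly deterministic at $K=0.5$.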
It is worth pointing out that this is not a threshold-type model because the goods will always be shared between all the group members, even if the conditional cooperators do not all contribute. We note that the condition for conditional cooperators to cooperate introduced above is the softest possible in terms of how many players are actually expected to cooperate. More precisely, it is likely that a very cautious cooperator (with a high $i$ value) will not cooperate, even though it may itself be reason enough for a less cautious cooperator to do so. Hence, in our model conditional cooperators require only a positive signal, or what can be interpreted as an ``easy promise'' from other group members, rather than a definite mutual agreement to contribute to the common pool. A much stricter and more sophisticated condition would be that a player having $s_x=C_i$ will cooperate only if there are at least $i$ other players in the group whose index is less or equal to $i$. This rule, imposing thus much stricter conditions, can only be applied if there is also at least one $C_G$ player (unconditional defector) in the group. Note that without it this definition yields misleading commands to conditional cooperators. For example, in a group containing players $C_0$, $C_3$, $C_3$, $C_4$ and $C_4$, the strict $C_i \leq C_j$ rule would dictate defection for all $C_3$ players. In the following, we will refer to the dynamics relying on the ``more careful'' conditional strategies as the strict rule. We will comment on the outcome of such and other alternative models in the next section, where we now proceed with presenting the main results.
\section{Results}
It is instructive to first examine the evolution of a subset of all the possible strategies. Figure~\ref{three} shows the outcomes of three-strategy games, where besides the unconditional cooperators $C_0$ and defectors $C_G=D$ one conditionally cooperative strategy ($C_1$, $C_2$, $C_3$ or $C_4$) is initially present. Depicted is the stationary density of defectors $\rho_D$ versus the synergy factor $r$ for the four possible strategy triples, as well as for the traditional two-strategy version of the spatial public goods game ($C_0$ curve). In the latter case, cooperators die out at $r \leq 3.748$, which is a well-known result \cite{szolnoki_pre09c}. Additionally introducing one type of conditional cooperators to the traditional setup continuously decreases the minimally required $r$ for cooperative behavior to survive as $i$ increases. Most remarkably, if initially unconditional cooperators $C_0$ and defectors $C_G=D$ and conditional cooperators $C_4$ each occupy $1/3$ of the lattice, we find that defectors cannot survive even in the $r \to 1$ limit ($C_4$ curve). This indicates that simple conditional strategies have ample potential for elegantly avoiding the ``tragedy of the commons'' even under the worst of conditions.
\begin{figure}
\centerline{\epsfig{file=fig1.eps,width=8cm}}
\caption{\label{three} (Color online) The fraction of defectors $\rho_D$ as a function of the synergy factor $r$, as obtained for different combinations of strategies that compete for space on the square lattice. Besides pure cooperators ($C$) and defectors ($D$), $1/3$ of the lattice is initially occupied by one conditionally cooperative strategy ($C_i$), as marked by each depicted curve. It can be observed that the higher the value of $i$, the earlier (at lower $r$) the downfall of $\rho_D$. Remarkably, if the most cautious conditional cooperators are introduced ($C_4$), defectors are completely defeated irrespective of $r$. We note that the results obtained with the related 3-strategy strict rule model are identical.}
\end{figure}
\begin{figure}[b]
\centerline{\epsfig{file=fig2.eps,width=4cm}}
\caption{\label{stability} (Color online) Schematic presentation supporting the interface stability analysis of competing domains. The leading process, which modifies the interface between the ordered domains more intensively, is the invasion across the border marked by the dashed line.}
\end{figure}
The critical value of $r$ where cooperators die out can be estimated by means of a simple approach that considers the competition between two ordered domains of strategies \cite{szabo_jtb12}. As Fig.~\ref{stability} illustrates, the elementary change which modifies the step-like interface between competing domains is an invasion across the dashed line between unequal strategies. Assuming $C_j$ players as conditional cooperators and unconditional defectors, the accumulated payoffs of competing players are as follows:
\begin{equation}
{r \over G} \sum_{i=j+1}^{G-1} i = {r \over G} \sum_{i=j+1}^{G} i - (G-j) \,\,.
\label{equal}
\end{equation}
From this equation the critical synergy factor for the conditional cooperator strategy $C_j$ is $r_c^j = G-j$. Thus, even this simple analysis is able to reproduce the decreasing critical values of $r_c$ by increasing $j$, and moreover, warns that unconditional defectors cannot exist if a $C_{G-1}$ strategy is present. Evidently, other elementary processes are also possible, but their contributions to the boundary velocity are smaller, and to consider them would make this analysis untraceable. What is important is to note that the result is independent of the group size $G$, and is qualitatively valid for all lattice types. This is a straightforward consequence of the multi-point interaction of public goods games, which diminishes several microscopic differences of graphs and makes topological features like the clustering coefficient irrelevant \cite{szolnoki_pre09c}. The latter are, of course, essential for games that are governed by pair-wise interactions \cite{szabo_pr07}. As evidenced by the stability analysis, the crucial property in the presently studied model, however, is the ``spatiality'', allowing interfaces that separate domains of different strategies, which can only be fulfilled in structured populations.
\begin{figure}
\centerline{\epsfig{file=fig3.eps,width=8cm}}
\caption{\label{lowr} (Color online) Time evolution of the complete six-strategy public goods game with unconditional cooperators ($C_0$) and defectors ($C_5$) as well as the four conditionally cooperative strategies ($C_1$, $C_2$, $C_3$ and $C_4$), as obtained for $r=1.05$. As can be deduced from results presented in Fig.~\ref{three}, at such low synergy factors all but the $C_4$ strategy are outperformed by defectors. However, after the defector-induced extinction of $C_{0\ldots3}$, the most cautious conditional cooperators ($C_4$) are able to completely invade the defectors ($C_5$). Inset shows the same evolution as obtained if using the strict rule model, and it can be observed that the final outcome is the same.}
\end{figure}
Turning to the complete six-strategy version of the spatial public goods game, we find that our main conclusion, arrived at based on the analysis of different three-strategy games, remains fully valid. In Fig.~\ref{lowr}, we first present characteristic time courses of all six strategies as obtained for a very low value of $r$. As expected based on results presented in Fig.~\ref{three}, unconditional cooperators $C_0$, as well as conditional cooperators $C_1$, $C_2$ and $C_3$, albeit marginally later, all die out fast due to invading defectors $C_5$. However, after the defectors are left on their own with $C_4$, they cannot withstand the invasion of this strain of conditional cooperators and die out. Interestingly, although $C_4$ appear to be eating into the territory of defectors from the very beginning of the evolutionary process, their full potential is unleashed only after all the other ``less cautious'' conditional cooperators die out. This is because although $C_4$ are obviously impervious to defectors, this is not necessarily the case with regards to other conditionally cooperative strategies. In particular, it may well be that under certain circumstances the lesser criteria for when to cooperate may yield a temporary advantage of $C_{0\ldots3}$ over $C_4$. Thus in fact, the other cooperative strategies hinder $C_4$ at effectively invading defectors by invading $C_4$ themselves. This, however, is very short lived as defectors are able to invade $C_{0\ldots3}$ extremely effectively at $r \to 1$. In a rather twisted turn of events, defectors, by invading $C_{0\ldots3}$, actually pave the way themselves towards a premature extinction. Although one could thus, in principle at least, hypothesize an alliance between $C_{0\ldots3}$ and $C_5=D$ to successfully invade $C_4$, our simulations reveal that the evolutionary window for such a complicated alliance to remain stable is too small to exist.
\begin{figure}
\centerline{\epsfig{file=fig4.eps,width=8cm}}
\caption{\label{highr} (Color online) Time evolution of the complete six-strategy public goods game with unconditional cooperators ($C_0$) and defectors ($C_5$) as well as the four conditionally cooperative strategies ($C_1$, $C_2$, $C_3$ and $C_4$), as obtained for $r=4.5$. At such a high synergy factor all five cooperative strategies $C_{0\ldots4}$ are able to withstand being wiped out by defectors. In fact, the latter are forced to extinction primarily by $C_4$, and to a much lesser extent by $C_3$. As soon as defectors die out, however, all cooperative strategies become equivalent, and their evolution becomes identical to that of the voter model. As in Fig.~\ref{lowr}, the inset shows the same outcome for the strict rule model.}
\end{figure}
At high synergy factors, however, the outcome of the six-strategy public goods game is significantly different. As evidenced by results presented in Fig.~\ref{highr}, at $r=4.5$ all the cooperative strategies are able to withstand being invaded by defectors. In fact, both $C_4$ and to a substantially lesser degree $C_3$ are able to do the exact opposite, which is to gain ground at the expense of retreating $C_5=D$. Importantly, however, after the defectors die out all five remaining strategies $C_{0\ldots4}$ become completely equivalent. Note that due to the soft condition, requiring only the presence of a certain number of conditional cooperators within the group, but regardless of their type (see Section II for details), all group members now receive the required number of positive signals from others to actually go ahead and contribute to the common pool. Henceforth, the evolution becomes identical to that of the voter model \cite{cox_ap83}, entailing logarithmically slow coarsening in the absence of surface tension \cite{dornic_prl01}. The final stationary state is thus determined primarily by the share of the square lattice that is occupied by any given strategy at the time of defector extinction.
\begin{figure}
\centerline{\epsfig{file=fig5.eps,width=8cm}}
\caption{\label{fix} (Color online) Fixation probabilities $P_f$ of the complete six-strategy game to eventually arrive at a pure $C_i$ phase in dependence on the synergy factor $r$. It can be observed that while at $r \to 1$ the fixation at $C_4$ is practically unavoidable, in the high $r$ limit all five cooperative strategies ($C_{0\ldots4}$) become equally probable as the victors of the evolutionary process. Importantly, regardless of $r$ defectors are unable to survive, let alone dominate.}
\end{figure}
This interpretation can be made more precise by determining the fixation probabilities of the cooperative strategies in dependence on $r$. Results presented in Fig.~\ref{fix} indicate that, because of the neutral relations between the five cooperative strategies, which set in after the defectors die out, the governing voter-model dynamics will, through coarsening, result in a homogeneous state where the system fixates into one of the remaining $C_i$ strategies, where $i<G$. The fixation probability depends on the fractions of the competing strategies at the time of $\rho_D\to0$, which in turn depend on the effectiveness of the cooperative strategies to invade unconditional defectors, and to a lesser degree also on their effectiveness to invade each other. Based on the time evolutions presented in Figs.~\ref{lowr} and \ref{highr}, it is understandable that at low values of $r$ the fixation probability of $C_4$ will be practically one, while in the opposite limit the eventual dominance of either cooperative strain will be equally probable (see Fig.~\ref{fix}).
\begin{figure}
\centerline{\epsfig{file=fig6.eps,width=8cm}}
\caption{\label{snaps} (Color online) Characteristic snapshots depicting the competition between conditional cooperators $C_4$ and unconditional defectors $C_5=D$ when starting from a random initial state, as obtained for $r=1.05$. Black are $C_5$, white are $C_4$ when they cooperate in at least three out of five groups, while light blue (gray if printed BW) are $C_4$ that cooperate in less than three of the five groups where they are members. The snapshots were taken at $0$ (a), $10$ (b), $30$ (c) and $100$ (d) full Monte Carlo steps (MCS). The final state is a pure $C_4$ phase (all players depicted white), which is not shown. It can be observed that initially all the cooperators are practically inactive or hidden [light blue and black dominates in panel (a)]. Only after the spatial reciprocity takes effect and first cooperative clusters are formed do the conditional cooperators actually start cooperating, although they do so only in the interior of the clusters [see panel (b)]. At the borders separating the two competing strategies, however, virtually all $C_4$ remain inactive as they are unable to gather the required number of positive signals from their neighbors [see panel (c)]. This thin intermediate layer of hidden and inactive $C_4$ then acts as a shield that makes it incredibly difficult for defectors to invade. In fact, defectors become effectively quarantined into ``bubbles'' from where they are unable to exploit cooperators [see panel (d) and Fig.~\ref{bubble} for a zoom-in]. Ultimately, this mechanism results in the ``tragedy of the defectors'' irrespective of the value of $r$. The linear system size used here is $L=200$.}
\end{figure}
In order to reveal the main mechanism behind the rather remarkable inability of defectors to survive in the presence of $C_4$, it is instructive to visualize the spatial patterns emerging as a consequence of their direct competition. Figure~\ref{snaps} features a series of four characteristic snapshots that were taken at different times [increasing from panel (a) to panel (d)] where only the two mentioned strategies compete for space. While defectors are depicted black, for convenience we graphically distinguish between two types of $C_4$ players. Namely between such that are predominantly active as cooperators, and such that are predominantly inactive or hidden. The criterion separating the two is simply the number of groups in which the player has actually contributed to the common pool. If the player, at the time the snapshot was taken, has cooperated in three or more out of the five groups where it is member, we mark it as active and depict it white, while otherwise, if it has cooperated in two or fewer groups, we mark it as inactive and depict it light blue (gray if printed BW). With this graphical distinction, we reveal that the reason why defectors cannot exploit $C_4$ effectively is due to the spontaneous emergence of very persistent interfaces of inactive $C_4$ players that separate cooperative and defective domains. Since $C_4$ players immediately stop cooperating in a group that contains at least one $C_5=D$, the defectors cannot collect large, competitive payoffs near the interfaces (and certainly not in the middle of the sea of $D$). On the other hand, hidden cooperators are still capable to collect significant payoffs from $C_4$ players that are on the opposite side of the interface, where in general the condition to actually cooperate will be fulfilled. 
In this way, hidden cooperators not only shield the active cooperators from the invasion of defectors, but also enable an effective invasion of the defectors' territory, allowing $C_4$ to eventually dominate the whole population. The phalanx of hidden cooperators quarantines unconditional defectors into convex isolated ``bubbles'', as demonstrated in Fig.~\ref{bubble}, which ultimately leads to an unavoidable ``tragedy of the defectors''.
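To make the decision rule concrete, the following minimal sketch (our own illustration in Python, not the code used for the simulations; all names are hypothetical) implements the conditional-cooperation rule described above, assuming, as the strategy labels suggest, that a $C_k$ player contributes to a group only if at least $k$ of the other four group members signal willingness to cooperate:

```python
GROUP_SIZE = 5  # a focal player and its four nearest neighbours on the lattice

def contributes(k, willing_others):
    """Decision rule for a conditional cooperator C_k: contribute to a group
    only if at least k of the other GROUP_SIZE - 1 members signal willingness
    to cooperate. C_5 = D (the unconditional defector) never contributes,
    while C_0 is an unconditional cooperator."""
    if k >= GROUP_SIZE:  # C_5 = D
        return False
    return willing_others >= k

def group_payoff(r, n_contributors, contributed):
    """Public goods payoff from one group: n_contributors unit contributions
    are multiplied by the synergy factor r and shared equally among all
    GROUP_SIZE members; contributors bear a unit cost."""
    return r * n_contributors / GROUP_SIZE - (1.0 if contributed else 0.0)

# At an interface, a C_4 player facing even one defector stays hidden:
# it earns nothing from that group, but also denies the defector any
# exploitable contribution -- the "shield" described in the text.
```

In the bulk of a cooperative cluster a $C_4$ player sees four willing neighbours and cooperates, whereas at an interface it remains inactive, which is precisely the quarantining mechanism discussed above.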
\begin{figure}
\centerline{\epsfig{file=fig7.eps,width=6.25cm}}
\caption{\label{bubble} (Color online) An $80 \times 80$ zoom-in of Fig.~\ref{snaps}(d), demonstrating clearly the spontaneous emergence of convex isolated ``bubbles'' of defectors (depicted black) that are contained by inactive conditional cooperators of type $C_4$ (depicted light blue, gray if printed BW). While the latter will predominantly cooperate with the bulk of active conditional cooperators of the same type (depicted white), they will certainly defect in the opposite direction where there are unconditional defectors. Consequently, defectors cannot exploit $C_4$ type players, which leads to a gradual but unavoidable shrinkage of the defector quarantines.}
\end{figure}
From the described workings of the mechanism, it is clear that it cannot emerge under well-mixed conditions, as then players adopting the $C_4$ strategy will essentially never actually cooperate, given that an encounter with at least one defector is virtually unavoidable. Despite this, $C_4$ players can survive, but they will always reveal only their defector face. Accordingly, the ``tragedy of the commons'' cannot be avoided by means of similar conditional strategies in well-mixed settings of the public goods game. On the other hand, it is also clear that the mechanism is robust and potent not only on the square lattice, but in fact on all other types of interaction graphs where long-standing bonds between players are assumed (note that certain coevolutionary rules \cite{perc_bs10}, especially those that rely on frequent rewiring of the links between players, may render the mechanism dysfunctional). Finally, we emphasize another positive message of this study, namely that cooperation can be promoted simply by others signaling that they are willing to cooperate, rather than by a firm oath that they will actually do so. We have observed that our results remain valid also when introducing the more sophisticated conditional strategies discussed in Section II, although we find the more elegant and simple model much more rewarding and interesting.
\section{Summary}
In summary, we have shown that an intuitive introduction of conditional cooperative strategies provides the ultimate boost to the mechanism of spatial reciprocity \cite{nowak_n92b}. In particular, the most cautious conditional cooperators provide an escape hatch out of the ``tragedy of the commons'' for all values of the synergy factor $r$ by spontaneously forming a protective shield between themselves and the defectors. The shield not only makes it extremely difficult for defectors to exploit the collaborative efforts of others, but at the same time provides an evolutionary advantage to cooperators that enables their invasion of the territory of defectors, eventually leading to their complete dominance. The quarantining of defectors is crucial especially at very low values of $r$, where otherwise they can reap huge benefits at the expense of cooperators. At intermediate and high values of $r$, however, all the different strains of conditional cooperators become increasingly able to withstand being wiped out by defectors on their own. Thus, as soon as defectors die out, the evolution of the remaining cooperative strategies becomes neutral and proceeds by means of coarsening that is characteristic of voter-model-type dynamics \cite{dornic_prl01}. By determining the fixation probabilities as a function of the synergy factor $r$, we have shown that in the low-$r$ limit fixation at $C_4$ (the most cautious conditional cooperators) is practically unavoidable, while in the high-$r$ limit all five cooperative strategies ($C_{0\ldots4}$) become equally likely to emerge as the dominant trait. Regardless of $r$, however, the defectors are unable to survive the evolutionary process, which is a very rewarding discovery to arrive at simply by means of a conditional strategy ($C_4$). Conceptually at least, our approach can be related to a recent study by Vukov et al. \cite{vukov_jtb11}, where directed investments were introduced to the public goods game.
In their model, however, a cooperator will necessarily invest somewhere, while in our case cooperators may remain dormant for long periods of time before eventually deciding to contribute to the common pool. In terms of the potential implications of our findings, apart from their relevance for the successful evolution of prosocial behavior between selfish and unrelated individuals, we note that, from the biological point of view, the way inactive cooperators quarantine defectors and force them into convex isolated ``bubbles'' bears resemblance to the way the immune system works when trying to contain an infection \cite{szabo_jtb07}. We hope that this study will inspire future research aimed at investigating the role of conditional strategies in structured populations.
\begin{acknowledgments}
This research was supported by the Hungarian National Research Fund (grant K-73449) and the Slovenian Research Agency (grant J1-4055).
\end{acknowledgments}
\section{Introduction}\label{sec1}
One of the most powerful tools in representation theory is the notion of the {\bf block decomposition}. Given a finite-dimensional $\bK$-algebra $A$, the block decomposition of $A$ gives a partition of the set of irreducible $A$-modules. We may then study the representation theory of $A$ block-by-block. In particular, if we understand one block well (say, a block containing a trivial module) then we can often use {\bf translation functors} to gain insight into the structure of other blocks.
If $G$ is an algebraic group over an algebraically closed field $\bK$ of characteristic $p>0$ and $\fg$ is its Lie algebra, then we may form, for each $\chi\in\fg^{*}$, the {\bf reduced enveloping algebra} $U_\chi(\fg)$. This is a finite-dimensional $\bK$-algebra which is important to the representation theory of $\fg$, and so we would like to understand its blocks. The leading result in this direction is {\bf Humphreys' conjecture on blocks}.
\begin{conj}[Humphreys' conjecture on blocks]
Suppose $G$ is reductive and let $\chi\in\fg^{*}$ be nilpotent. Then there exists a natural bijection between the blocks of $U_\chi(\fg)$ and the set $\Lambda_\chi/W_{\bullet}$. In particular, $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi(\fg)\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert.$$
\end{conj}
Here, $\Lambda_\chi$ is a certain finite subset of $\fh^{*}$, where $\fh$ is the Lie algebra of a maximal torus $T$ of $G$, and $W$ is the Weyl group of $(G,T)$, which acts on $\fh^{*}$ via the dot-action and thus induces an equivalence relation on $\Lambda_\chi$. The requirement that $\chi$ is nilpotent means that $\chi$ vanishes on the Lie algebra $\fb$ of a Borel subgroup $B$ of $G$ (which we may assume contains $T$).
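To illustrate the statement, consider the simplest case $G=SL_2$ with $p$ odd and $\chi=0$. Identifying $\Lambda_\chi=\Lambda_0$ with $\{0,1,\ldots,p-1\}$ by evaluating weights at the coroot element $h_\alpha=[e_\alpha,e_{-\alpha}]$, the non-trivial element $s\in W$ acts by $s\cdot\lambda=-\lambda-2 \pmod p$, so the $W_\bullet$-orbits are the pairs $\{\lambda,-\lambda-2\}$ together with the singleton $\{p-1\}$ (the Steinberg weight). Hence $$\left\vert \Lambda_0/W_{\bullet}\right\vert=\frac{p-1}{2}+1=\frac{p+1}{2},$$ which is the classical count of blocks of the restricted enveloping algebra $U_0(\fs\fl_2)$.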
This conjecture was proved by Humphreys \cite{Hu3.1} in 1971 for $\chi=0$, subject to the requirements that $G$ be semisimple and that $p>h$, where $h$ is the Coxeter number of $(G,T)$. Humphreys then extended the result further to $\chi$ in so-called standard Levi form in 1998 in \cite{Hu.1} (the paper \cite{Hu.1} does not explicitly state what assumptions are being made, but the argument holds for any connected reductive algebraic group whose derived group is simply-connected). Under three assumptions (which we will call Jantzen's standard assumptions \cite{J1.1,J2.1} and denote (A), (B) and (C)), the conjecture was then proved by Brown and Gordon in \cite{BG.1} for all $\chi\in\fg^{*}$ when $p>2$, and then improved by Gordon in \cite{Go.1} to include the $p=2$ case (so long as (A), (B) and (C) still hold). In fact, under assumptions (A), (B) and (C), Humphreys' conjecture on blocks allows us to count the number of blocks of $U_\chi(\fg)$ for {\em all} $\chi\in\fg^{*}$, as these assumptions are sufficient to reduce the computation to the case of nilpotent $\chi$ (see \cite{FP1.1}, also Remark~\ref{NilpRed}, {\em infra}). Furthermore, Braun \cite{Br.1} recently proved the conjecture for $\fg=\fs\fl_n$ with $p\vert n$, where assumptions (A) and (B) hold but (C) does not. In this case, however, the restriction to nilpotent $\chi$ is necessary, as the analogous result for semisimple $\chi$ was shown in \cite{Br.1} to fail when $p=n=3$.
Let us now explain Jantzen's standard assumptions. These are: (A) that the derived group of $G$ is simply-connected; (B) that the prime $p$ is good for $G$; and (C) that there exists a non-degenerate $G$-invariant bilinear form on $\fg$. The primes that are not good for a given $G$ can be listed explicitly (and are all less than or equal to $5$), and the existence of a non-degenerate $G$-invariant bilinear form on $\fg$ holds whenever $\fg$ is simple.
The question motivating this note is: what happens to Humphreys' conjecture on blocks for nilpotent $p$-characters if we remove assumptions (B) and/or (C)? We see in Section~\ref{sec3} that there is a natural surjection $f:\{\mbox{Blocks of}\,\,\, U_\chi(\fg)\}\to\Lambda_\chi/W_{\bullet}$ under only assumption (A). It turns out that this can be deduced from the literature \cite{J2.1,KW.1}. Furthermore, we show in Theorem~\ref{BlockNumb} that the known proof of the injectivity of $f$ works without assumption (B). We also provide a different approach to the proof of the injectivity in Proposition~\ref{prop1}, which demonstrates that injectivity in fact holds whenever there exists a collection of irreducible modules of a certain nice form (namely, which are so-called {\bf baby Verma modules}). Premet's theorem \cite{Pr1.1} shows the existence of such irreducible modules under assumptions (A), (B) and (C), and we observe in Corollary~\ref{BlockG23} that the existence also holds for the almost-simple algebraic group of type $G_2$ in characteristic 3 (where assumption (C) fails). This thus proves Humphreys' conjecture on blocks for $G_2$ in characteristic 3, which could not be deduced using the previous approach.
In the Appendix, we conduct some calculations with a view to finding other examples where these irreducible modules exist. Unfortunately, the calculations do not lead to further examples, but we hope they are interesting in their own right, as they demonstrate divisibility bounds for irreducible modules for certain nice $\chi$ and small primes.
{\bf Statements and Declarations:} The author was supported during this research by the Engineering and Physical Sciences Research Council, grant EP/R018952/1, and later by a research fellowship from the Royal Commission for the Exhibition of 1851.
{\bf Acknowledgments:} The author would like to thank Ami Braun and Dmitriy Rumynin for suggesting this question, and Simon Goodwin for engaging in many useful discussions regarding this subject and comments on earlier versions of this paper.
\section{Preliminaries on Lie algebras}\label{sec2}
Throughout this note we work with a connected algebraic group $G$ over an algebraically closed field $\bK$ of characteristic $p>0$. More precise assumptions on $G$ are given section-by-section, but it is always at least a reductive algebraic group with simply-connected derived subgroup. Inside $G$, we fix a maximal torus $T$ and a Borel subgroup $B$ of $G$ containing $T$. Write $X(T)$ for the character group of $T$, $Y(T)$ for the cocharacter group of $T$, and $\langle\cdot,\cdot\rangle:X(T)\times Y(T)\to\bZ$ for the natural pairing. We write $\fg$ for the Lie algebra of $G$, $\fb$ for the Lie algebra of $B$ and $\fh$ for the Lie algebra of $T$. As Lie algebras of algebraic groups these are all restricted, so come equipped with $p$-th power maps $\fg\to\fg$ (resp. $\fb\to\fb$, $\fh\to\fh$) written $x\mapsto x^{[p]}$.
Set $\Phi$ to be the root system of $G$ with respect to $T$, $\Phi^{+}$ to be the positive roots corresponding to $B$ and $\Pi$ to be the simple roots. For $\alpha\in \Phi$ we set $\alpha^\vee\in Y(T)$ to be the corresponding coroot, and we write $\fg_\alpha$ for the root space of $\alpha$ in $\fg$. We then define $\fn^{+}=\bigoplus_{\alpha\in\Phi^{+}}\fg_\alpha$ and $\fn^{-}=\bigoplus_{\alpha\in\Phi^{+}}\fg_{-\alpha}$, so $\fg=\fn^{-}\oplus\fh\oplus\fn^{+}$. For $\alpha\in \Phi$ we define $h_\alpha\coloneqq d\alpha^\vee(1)\in\fh$, and we choose $e_\alpha\in \fg_\alpha$ and $e_{-\alpha}\in\fg_{-\alpha}$ so that $[e_\alpha,e_{-\alpha}]=h_\alpha$ (see, for example, \cite{J4.1} for more details on this procedure). We also choose a basis $h_1,\ldots,h_d$ of $\fh$ with the property that $h_i^{[p]}=h_i$ for all $1\leq i\leq d$.
Set $W$ to be the Weyl group of $\Phi$, which acts naturally on $X(T)$ and $\fh^{*}$. We fix $\rho\in X(T)\otimes_{\bZ}\bQ$ to be the half-sum of positive roots in $\Phi$. This then allows us to define the dot-action of $W$ on $X(T)$ as $w\cdot\lambda=w(\lambda+\rho)-\rho$ (noting that this action makes sense even if $\rho\notin X(T)$). When $\rho\in X(T)$, $d\rho(h_\alpha)=1$ for all $\alpha\in \Pi$. If $\rho\notin X(T)$, we may still define $d\rho\in\fh^{*}$ such that $d\rho(h_\alpha)=1$ for all $\alpha\in\Pi$, since the derived subgroup being simply-connected implies that these $h_\alpha$ are linearly independent in $\fh$. We may therefore define the dot action on $\fh^{*}$ similarly to how it was defined on $X(T)$. When we wish to specify that $W$ is acting through the dot-action, we may write $W_\bullet$ instead of $W$.
We write $U(\fg)$ for the universal enveloping algebra of $\fg$. We write $Z_p$ for the central subalgebra of $U(\fg)$ generated by all $x^p-x^{[p]}$ with $x\in\fg$, which we call the {\bf $p$-centre} of $U(\fg)$. Given $\chi\in\fg^{*}$, we write $U_\chi(\fg)$ for the reduced enveloping algebra $U_\chi(\fg)\coloneqq U(\fg)/\langle x^p-x^{[p]}-\chi(x)^p \,\vert\, x\in\fg\rangle$. Each irreducible $\fg$-module is finite-dimensional \cite[Theorem A.4]{J1.1} and so, by Schur's lemma, each irreducible $\fg$-module is a $U_\chi(\fg)$-module for some $\chi\in\fg^{*}$. For $\chi\in\fg^{*}$, we recall that the centraliser of $\chi$ in $\fg$ is defined as $c_{\fg}(\chi)\coloneqq \{x\in\fg\,\vert\,\chi([x,\fg])=0\}.$
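For orientation, we recall a standard consequence of the PBW theorem: $U_\chi(\fg)$ has dimension $p^{\dim\fg}$, independently of $\chi$. For instance, when $\fg=\fs\fl_2$, the algebra $U_\chi(\fs\fl_2)$ is $p^3$-dimensional, spanned by the images of the monomials $e_{-\alpha}^{a}h_\alpha^{b}e_\alpha^{c}$ with $0\leq a,b,c<p$.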
The adjoint action of $G$ on $\fg$ induces the coadjoint action of $G$ on $\fg^{*}$, and if $\chi,\mu\in\fg^{*}$ lie in the same coadjoint $G$-orbit then $U_\chi(\fg)\cong U_\mu(\fg)$. The derived group of $G$ being simply-connected implies (see \cite{J2.1,KW.1}) that any $\mu\in\fg^{*}$ lies in the same $G$-orbit as some $\chi\in\fg^{*}$ with $\chi(\fn^{+})=0$. Putting these two observations together, we always assume $\chi(\fn^{+})=0$ throughout this paper.
We can define, for each $\lambda\in\fh^{*}$, a one-dimensional $\fb$-module $\bK_\lambda$ on which $\fn^{+}$ acts as zero and $\fh$ acts via $\lambda$. The assumption that $\chi(\fn^{+})=0$ means that $\bK_\lambda$ is a $U_\chi(\fb)$-module if and only if $\lambda\in\Lambda_\chi$, where \begin{equation*}
\begin{split}
\Lambda_\chi & \coloneqq \{\lambda\in\fh^{*}\mid\lambda(h)^p-\lambda(h^{[p]})=\chi(h)^p\,\,\mbox{for all}\,\, h\in\fh\} \\ & = \{\lambda\in\fh^{*}\mid\lambda(h_i)^p-\lambda(h_i)=\chi(h_i)^p\,\,\mbox{for all}\,\, 1\leq i\leq d\}
\end{split}
\end{equation*} and that all irreducible $U_\chi(\fb)$-modules are of this form. We therefore may define the {\bf baby Verma module} $Z_\chi(\lambda)=U_\chi(\fg)\otimes_{U_{\chi}(\fb)}\bK_\lambda$, a $U_\chi(\fg)$-module of dimension $p^N$, where $N=\left\vert\Phi^{+}\right\vert$. Every irreducible $U_\chi(\fg)$-module is the quotient of some baby Verma module (see \cite[Lem. B.4]{J1.1}).
Since $W_\bullet$ acts on $\fh^{*}$, we may define an equivalence relation on $\Lambda_\chi$ by setting $\lambda\sim\mu$ if and only if there exists $w\in W$ with $w\cdot\lambda=\mu$. We write $\Lambda_\chi/W_{\bullet}$ for the set of equivalence classes of $\Lambda_\chi$ under this relation.
If $\chi(\fb)=0$ then $\Lambda_\chi=\Lambda_0=\{d\lambda\in\fh^{*}\,\vert\,\lambda\in X(T)\}=X(T)/pX(T)$. In this case, $W_\bullet$ in fact acts on $\Lambda_\chi$, so $\Lambda_\chi/W_\bullet$ is the set of $W_\bullet$-orbits for this action. The condition that $\chi(\fb)=0$ is sufficiently important in this paper that we make the definition $$\fb^{\perp}\coloneqq\{\chi\in\fg^{*}\,\vert\,\chi(\fb)=0\}.$$
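As a quick sanity check of the orbit count in the simplest case, the following short script (our own illustration, not part of the paper's arguments) enumerates the $W_\bullet$-orbits on $\Lambda_0\cong\{0,\ldots,p-1\}$ for $\fg=\fs\fl_2$, where the non-trivial reflection acts by the dot-action $\lambda\mapsto -\lambda-2 \pmod p$:

```python
def dot_orbits(p):
    """W-dot-orbits on {0, ..., p-1} for sl_2: the non-trivial Weyl group
    element acts by the dot-action s . lam = -lam - 2 (mod p)."""
    seen, orbits = set(), []
    for lam in range(p):
        if lam in seen:
            continue
        orbit = {lam, (-lam - 2) % p}
        seen |= orbit
        orbits.append(sorted(orbit))
    return orbits

# For odd p: (p - 1)/2 two-element orbits plus the singleton {p - 1}
# (the Steinberg weight), giving (p + 1)/2 orbits in total.
```

For $p=5$ this yields the orbits $\{0,3\}$, $\{1,2\}$ and $\{4\}$, i.e.\ three orbits, consistent with $\left\vert\Lambda_0/W_\bullet\right\vert=(p+1)/2$.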
We say that $\chi\in\fb^{\perp}$ is in {\bf standard Levi form} if there exists a subset $I\subseteq\Pi$ of simple roots such that $$\chi(e_{-\alpha})=\twopartdef{1}{\alpha\in I,}{0}{\alpha\in\Phi^{+}\setminus I.}$$ If $I=\Pi$ we say that $\chi$ is {\bf regular nilpotent in standard Levi form}. In general, we say $\chi\in\fg^{*}$ is {\bf regular nilpotent} if it is in the same $G$-orbit as the $\mu\in\fg^{*}$ which is regular nilpotent in standard Levi form.
\section{Preliminaries on Blocks}\label{sec3}
Let us briefly recall the definition of the {\bf blocks} of a finite-dimensional $\bK$-algebra $A$ (one can find more details in \cite[I.16, III.9]{BGo.1}, for example). We say that one irreducible $A$-module $M$ is {\bf linked to} another irreducible $A$-module $N$ if $\Ext^1(M,N)\neq 0$. This is not an equivalence relation, but we may extend it to one by taking the reflexive, symmetric and transitive closure. The equivalence classes under the resulting equivalence relation are then called the {\bf blocks} of $A$.
In this note, we are concerned with the case of $A=U_\chi(\fg)$ with $\chi\in\fb^\perp$. Under assumptions (A), (B) and (C) the results in this section are well-known -- for example, they are contained within the proof of Proposition C.5 in \cite{J1.1}. Nonetheless, we recall them to highlight when assumptions (A), (B) and (C) are or are not necessary. Remember from Section~\ref{sec2} that each irreducible $U_\chi(\fg)$-module is a quotient of a baby Verma module $Z_\chi(\lambda)$, and thus all irreducible $U_\chi(\fg)$-modules appear as composition factors of baby Verma modules. Recall also that the {\bf Grothendieck group} $\sG (U_\chi(\fg))$ of the category of finite-dimensional $U_\chi(\fg)$-modules is the abelian group generated by symbols $[M]$, for $M$ running over the collection of all finite-dimensional $U_\chi(\fg)$-modules, subject to the relation that $[P]+[N]=[M]$ if
$$0\to P\to M\to N\to 0$$ is a short exact sequence of $U_\chi(\fg)$-modules. It is clear that in $\sG (U_\chi(\fg))$ we have, for $\lambda\in\Lambda_0$, $$[Z_\chi(\lambda)]=\sum_{\tiny L\in\Irr(U_\chi(\fg))} [Z_\chi(\lambda):L][L],$$ where $\Irr(U_\chi(\fg))$ is the set of isomorphism classes of irreducible $U_\chi(\fg)$-modules and $[Z_\chi(\lambda):L]$ indicates the composition multiplicity of $L$ in $Z_\chi(\lambda)$.
We wish to define the map
$$f:\{\mbox{Blocks of}\,\, U_\chi(\fg)\}\to \{[Z_\chi(\lambda)]\,\vert\,\lambda\in \Lambda_0\}\subseteq \sG(U_\chi(\fg)),$$ as follows. Let $\fB$ be a block of $U_\chi(\fg)$, and let $E$ be an irreducible module in this block. There must exist $\lambda\in \Lambda_0$ such that $E$ is a quotient of $Z_\chi(\lambda)$. We then define $f(\fB)=[Z_\chi(\lambda)]$.
For this to be well-defined, it is necessary to see that it does not depend on our choice of $E\in\fB$ or on our choice of $Z_\chi(\lambda)\twoheadrightarrow E$. For this, we note that $U(\fg)^G\subseteq Z(U(\fg))$ acts on the baby Verma module $Z_\chi(\lambda)$ via scalar multiplication as follows. Under the assumption that the derived group of $G$ is simply-connected (assumption (A)), the argument of Kac and Weisfeiler in \cite[Th. 1]{KW.1} (c.f. \cite[Th. 9.3]{J2.1}) shows that there exists an isomorphism $\pi:U(\fg)^G\to S(\fh)^{W_{\bullet}}$, where the dot-action on $S(\fh)$ is obtained by identifying $S(\fh)$ with the algebra $P(\fh^{*})$ of polynomial functions on $\fh^{*}$ and then defining $(w\cdot F)(\lambda)=F(w^{-1}\cdot\lambda)$ for $w\in W$, $F\in P(\fh^{*})$ and $\lambda\in\fh^{*}$. This isomorphism allows us, as in \cite{J2.1}, to define a homomorphism $\cen_{\lambda}:U(\fg)^G\to\bK$ which sends $u\in U(\fg)^G$ to $\pi(u)(\lambda)$, viewing $\pi(u)$ as an element of $P(\fh^{*})$. Then $U(\fg)^G$ acts on $Z_\chi(\lambda)$ via the character $\cen_\lambda$, for $\lambda\in \Lambda_0$.
If $E$ and $E'$ lie in the same block then it is easy to see that $U(\fg)^G$ must act via the same character on both modules, and if $Z_{\chi}(\lambda_E)\twoheadrightarrow E$ and $Z_{\chi}(\lambda_{E'})\twoheadrightarrow E'$ then $U(\fg)^G$ acts on $E$ via $\cen_{\lambda_E}$ and on $E'$ via $\cen_{\lambda_{E'}}$. Thus, $\cen_{\lambda_E}=\cen_{\lambda_{E'}}$ and so, as in \cite[Cor. 9.4]{J2.1} (see also \cite[Th. 2]{KW.1}), we have $\lambda_E\in W_\bullet \lambda_{E'}$. One may then observe, using \cite[C.2]{J1.1}, that $[Z_\chi(\lambda_E)]=[Z_{\chi}(\lambda_{E'})]$. This shows that $f$ is well-defined. Furthermore, $f$ is clearly surjective (just take the block containing an irreducible quotient of the desired $Z_{\chi}(\lambda)$).
The above discussion also shows that $[Z_\chi(\lambda)]=[Z_\chi(\mu)]$ if and only if $\lambda\in W_\bullet\mu$. Thus, there is a bijection $$\{[Z_\chi(\lambda)]\,\vert\,\lambda\in \Lambda_0\}\leftrightarrow \Lambda_0/W_\bullet.$$ In particular, we get the following proposition (which may also more-or-less be found in \cite[C.5]{J1.1}), observing that at no point thus far have we required assumptions (B) or (C).
\begin{prop}\label{Lower}
Let $G$ be a connected reductive algebraic group over an algebraically closed field $\bK$ of characteristic $p>0$, with simply-connected derived subgroup, and let $\chi\in\fb^\perp$. Then there exists a natural surjection between the set of blocks of $U_\chi(\fg)$ and the set $\Lambda_\chi/W_{\bullet}=\Lambda_0/W_{\bullet}$. In particular, $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi(\fg)\}\right\vert\geq\left\vert \Lambda_0/W_{\bullet}\right\vert.$$
\end{prop}
\begin{rmk}
We have used in the above argument the fact that, when assumption (A) holds, there exists an isomorphism $U(\fg)^G\xrightarrow{\sim}S(\fh)^{W_\bullet}$. This result dates back to Kac and Weisfeiler \cite{KW.1}, who proved it for connected almost-simple algebraic groups under the assumption that $G\neq SO_{2n+1}(\bK)$ when $p=2$.\footnote{In \cite[Th. 1]{KW.1} it is required that either $p\neq 2$ or $\rho\in X(T)$, where $\rho$ is the half sum of positive roots. This is then generalised to the given assumptions in \cite[Th. 1 BIS]{KW.1}. The $W$-action used in the latter theorem can be easily seen to be the same as the dot-action we are using.} According to Jantzen \cite[Rem. 9.3]{J2.1}, the argument of Kac and Weisfeiler holds for reductive $\fg$ whenever assumption (A) holds. Jantzen further gives an argument \cite[9.6]{J2.1} using reduction mod $p$ techniques which holds under his standard assumptions. In fact, slightly weaker assumptions are sufficient: assumption (B) is only needed to ensure $p$ is not a so-called torsion prime of $\Phi^\vee$ (in the sense of \cite[Prop. 8]{Dem.1}), which is also satisfied for the bad prime 3 in case $G_2$, while assumption (C) is only needed to ensure that the (derivatives of the) simple roots are linearly independent in $\fh^{*}$, which is also satisfied for $p=2$ in type $F_4$ and $p=3$ in type $G_2$. In particular, the argument of Kac-Weisfeiler is unnecessary for our later result (Corollary~\ref{BlockG23}) that Humphreys' conjecture on blocks holds for the almost-simple algebraic group of type $G_2$ in characteristic 3.
\end{rmk}
\section{Upper bound}
Humphreys' conjecture on blocks claims that the map $f$ defined in the previous section is, in fact, a bijection. What remains, therefore, is to show that $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi(\fg)\}\right\vert\leq\left\vert \Lambda_0/W_{\bullet}\right\vert.$$ Gordon \cite{Go.1} has shown that this inequality holds under assumptions (A), (B) and (C), and a similar argument is reproduced in \cite[C.5]{J1.1}. We give a version of this argument here in order to observe that it does not require assumption (B), and to highlight where assumption (C) is necessary:
The discussion in Section~\ref{sec3} shows that $U_\chi(\fg)$ has $\left\vert \Lambda_0/W_{\bullet}\right\vert$ blocks if, for each $\lambda\in\Lambda_0$, all composition factors of the baby Verma module $Z_\chi(\lambda)$ lie in the same block. This property holds for the $\mu\in\fg^{*}$ which is regular nilpotent in standard Levi form, since the corresponding baby Verma module has a unique maximal submodule and so is indecomposable, and it is well-known that all composition factors of an indecomposable module lie in the same block. Therefore $U_\chi(\fg)$ has $\left\vert \Lambda_0/W_{\bullet}\right\vert$ blocks for all $\chi$ in the $G$-orbit of $\mu$.
Suppose now that the intersection of $\fb^\perp$ with the $G$-orbit of $\mu$ is dense in $\fb^\perp$. By \cite[Prop. 2.7]{Ga.1}, $$D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}\coloneqq\{\chi\in\fb^{\perp}\,\vert\,U_\chi(\fg)\,\,\mbox{has at most}\,\,\left\vert \Lambda_0/W_{\bullet}\right\vert\,\,\mbox{blocks}\}$$ is closed in $\fb^{\perp}$. Since $(G\cdot\mu)\cap\fb^{\perp}\subseteq D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}$, Humphreys' conjecture on blocks would follow.
When can we say that $(G\cdot\mu)\cap\fb^\perp$ is dense in $\fb^\perp$? Well, if there exists a $G$-equivariant isomorphism $\Theta:\fg\xrightarrow{\sim}\fg^{*}$, we can set $y\coloneqq\Theta^{-1}(\mu)$. Then \cite[6.3, 6.7]{J3.1} (which make no assumptions on $p$) establish that the $G$-orbit of $y$ is dense in the nilpotent cone $\cN$ of $\fg$, and so the $G$-orbit of $\mu$ is dense in $\Theta(\cN)$. Thus, $(G\cdot\mu)\cap \fb^\perp$ is dense in $\fb^{\perp}$, and so (cf. \cite[Th. 3.6]{Go.1}) under assumptions (A) and (C) we get Humphreys' conjecture on blocks:
\begin{theorem}\label{BlockNumb}
Let $G$ be a connected reductive algebraic group over an algebraically closed field $\bK$ of characteristic $p>0$, with simply-connected derived subgroup. Suppose that there exists a $G$-module isomorphism $\Theta:\fg\xrightarrow{\sim}\fg^{*}$. Let $\chi\in\fb^{\perp}$. Then $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi(\fg)\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert.$$
\end{theorem}
\begin{rmk}\label{NilpRed}
Suppose $\chi\in\fg^{*}$ with $\chi(\fn^{+})=0$. Under assumption (C), there exists a $G$-module isomorphism $\Theta:\fg\to\fg^{*}$, so we may fix $x\in\fg$ such that $\Theta(x)=\chi$. In $\fg$ it is well-known that $x$ has a (unique) Jordan decomposition $x=x_s+x_n$, where $x_s$ is semisimple, $x_n$ is nilpotent, and $[x_s,x_n]=0$, and thus we may define the Jordan decomposition $\chi=\chi_s+\chi_n$ where $\chi_s=\Theta(x_s)$ and $\chi_n=\Theta(x_n)$. In fact, Kac and Weisfeiler \cite[Th. 4]{KW.1} show that a Jordan decomposition of $\chi$ may be defined even when assumption (C) does not hold, so long as assumption (A) does instead: we say that $\chi=\chi_s+\chi_n$ is a Jordan decomposition if there exists $g\in G$ such that $g\cdot\chi_s(\fn^{+}\oplus\fn^{-})=0$, $g\cdot\chi_n(\fh\oplus\fn^{+})=0$, and, for $\alpha\in\Phi^{+}$, $g\cdot\chi(h_\alpha)\neq 0$ only if $g\cdot\chi(e_{\pm\alpha})=0$. Under assumptions (A) and (B), Friedlander and Parshall \cite{FP1.1} show that there is an equivalence of categories between $\{U_\chi(\fg)-\mbox{modules}\}$ and $\{U_\chi(\fc_{\fg}(\chi_s))-\mbox{modules}\}$ (the categories of finite-dimensional modules). It can then further be shown under those assumptions (see, for example, \cite[B.9]{J1.1}) that there is an equivalence of categories between $\{U_\chi(\fc_{\fg}(\chi_s))-\mbox{modules}\}$ and $\{U_{\chi_n}(\fc_{\fg}(\chi_s))-\mbox{modules}\}$. Under assumptions (A) and (B), this then often allows us to reduce representation-theoretic questions to the case of nilpotent $\chi$.
When assumption (C) holds, we may do this for Humphreys' conjecture on blocks (we assume here that $\chi$ is chosen so that $g$ may be taken as $1$ in the definition of the Jordan decomposition, recalling that reduced enveloping algebras are unchanged by the coadjoint $G$-action on their corresponding $p$-character). The equivalence of categories between $\{U_\chi(\fg)-\mbox{modules}\}$ and $\{U_{\chi_n}(\fl)-\mbox{modules}\}$ (where $\fl\coloneqq \fc_{\fg}(\chi_s)$) clearly preserves the number of blocks of the respective algebras. Thus, Humphreys' conjecture on blocks for $(\fl,\chi_n)$ will imply it for $(\fg,\chi)$ if and only if $\left\vert\Lambda_\chi/W_{\bullet}\right\vert=\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$, where $W'$ is the Weyl group corresponding to $\fl$. What is $W'$? Well, the root system for $\fl$ is $\{\alpha\in\Phi\mid \chi_s(h_\alpha)= 0\}$ so it is easy to see that $W'$ lies inside $W(\Lambda_\chi)$, the set of $w\in W$ which fix $\Lambda_\chi$ setwise (it is straightforward to see under our assumptions that it doesn't matter in defining this subgroup whether we consider the usual action or the dot-action of $W$, since $\rho\in \Lambda_0$). When assumption (C) holds, $W(\Lambda_\chi)$ is parabolic (see \cite[Lem. 7]{MR.1}, \cite[Prop. 1.15]{Hu4.1}), and so one can easily check that $W'=W(\Lambda_\chi)$ in this case (see \cite[Rem. 3.12(3)]{BG.1}). This then obviously implies that $\left\vert\Lambda_\chi/W_{\bullet}\right\vert=\left\vert\Lambda_{\chi} /W'_\bullet\right\vert$, and so what remains is to show that $\left\vert\Lambda_{\chi} /W'_\bullet\right\vert=\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$. One can check that there exists $\lambda\in \Lambda_\chi$ such that $w(\lambda)=\lambda$ for all $w\in W'=W(\Lambda_\chi)$. 
Then the map $\Lambda_\chi=\lambda+\Lambda_0\to \Lambda_0=\Lambda_{\chi_n}$, $\lambda+\tau\mapsto\tau$, induces a bijection $\Lambda_\chi/W'_\bullet\xrightarrow{\sim}\Lambda_{\chi_n}/W'_\bullet$ as required.
Braun \cite[Th. 6.23, Ex. 6.25]{Br.1} has shown that when assumption (C) fails to hold, it can be the case that Humphreys' conjecture on blocks holds for nilpotent $\chi$ but fails for general $\chi$. Specifically, set $\fg=\fs\fl_3$, $p=3$, and choose $\chi\in\fs\fl_3^{*}$ such that $\chi(e_{11}-e_{22})=\chi(e_{22}-e_{33})\neq 0$ (using $e_{ij}$ for the usual basis elements of $\fg\fl_3$). Recalling that the Weyl group for $\fs\fl_3$ is the symmetric group $S_3$, one can check that $W(\Lambda_\chi)=\{\Id,(1,2,3),(1,3,2)\}$ and so is not a parabolic subgroup of $W$. Thus, $W'\neq W(\Lambda_\chi)$ and so there can be linkages under $W$ which do not exist under $W'$. In particular, choosing suitable $\chi$, one can use this to show that $\left\vert\Lambda_\chi/W_{\bullet}\right\vert<\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$. Braun's argument then shows that the latter value is the number of blocks of $U_{\chi_n}(\fl)$ and so the number of blocks of $U_\chi(\fg)$. We note that this argument highlights that \cite[Lem. 7]{MR.1} requires the assumption that $p$ be very good for the root system.
\end{rmk}
The argument above highlights one approach to proving Humphreys' conjecture on blocks; namely, to obtain the desired result it suffices to find a dense subset of $\fb^{\perp}$ lying inside $D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Note that $\fb^\perp=\bK^N$, where $N=\left\vert\Phi^{+}\right\vert$, and recall that any non-empty open subset is dense in $\bK^N$ when it is equipped with the Zariski topology. For each $\lambda\in \Lambda_0$, define $$C_\lambda\coloneqq \{\chi\in\fb^{\perp}\,\vert\,\,\mbox{All composition factors of }\, Z_\chi(\lambda)\,\,\mbox{are in the same block of }\,U_\chi(\fg)\,\},$$ and define $$C\coloneqq\bigcap_{\lambda\in \Lambda_0}C_\lambda.$$ It is straightforward from the arguments in Section~\ref{sec3} to see that $C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Furthermore, if for each $\lambda\in \Lambda_0$ we can find a dense open subset $\widehat{C}_\lambda$ of $\fb^{\perp}$ with $\widehat{C}_\lambda\subseteq C_\lambda$, then $$\widehat{C}\coloneqq\bigcap_{\lambda\in \Lambda_0}\widehat{C}_\lambda$$ would be a dense open subset of $\fb^{\perp}$ contained in $C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Finding the desired $\widehat{C}_\lambda$ therefore provides an approach to proving Humphreys' conjecture on blocks, and in the rest of this section we explore one particular way of obtaining such $\widehat{C}_\lambda$.
For each $\lambda\in \Lambda_0$, consider the set $$S_\lambda\coloneqq\{\chi\in\fb^{\perp}\,\vert\,Z_\chi(\lambda)\,\,\mbox{is an irreducible } U_\chi(\fg)\mbox{-module}\}.$$ It is remarked in \cite[C.6]{J1.1} that $S_\lambda$ is open in $\fb^\perp$. Specifically, if we define, for $s=1,\ldots,p^N-1$, the set $$N_{\lambda,s}=\{\chi\in\fb^{\perp}\,\vert\,Z_\chi(\lambda)\,\,\mbox{has a } U_\chi(\fg)\mbox{-submodule of dimension } s\},$$ then clearly $S_\lambda=\bigcap_{s=1}^{p^N-1}N_{\lambda,s}^c$ (where, for $X\subseteq \fb^\perp$, $X^c$ denotes $\fb^\perp\setminus X$). The openness of $S_\lambda$ then follows from the fact that each $N_{\lambda,s}$ is closed in $\fb^\perp$ (this is proved in \cite[C.6]{J1.1}, and one can check that the proof does not use assumptions (B) or (C)).
\begin{prop}\label{prop1}
Let $G$ be a connected reductive algebraic group over an algebraically closed field $\bK$ of characteristic $p>0$, with simply-connected derived subgroup. Let $\chi\in\fb^{\perp}$, and suppose that for each $\lambda\in \Lambda_0$ there exists $\mu_\lambda\in \fb^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is an irreducible $U_{\mu_\lambda}(\fg)$-module. Then $\left\vert\{\mbox{Blocks of}\,\, U_\chi(\fg)\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert$.
\end{prop}
\begin{proof}
Our assumption guarantees that each $S_\lambda$, for $\lambda\in\Lambda_0$, is non-empty. Each $S_\lambda$ is thus a dense open subset of $\fb^{\perp}$ and it is clear that $S_\lambda\subseteq C_\lambda$ for each $\lambda\in\Lambda_0$. We therefore have that $\bigcap_{\lambda\in \Lambda_0}S_\lambda$ is a dense open subset of $\fb^{\perp}$. Since $\bigcap_{\lambda\in \Lambda_0}S_\lambda\subseteq C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$ and $D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$ is closed in $\fb^\perp$, we conclude $\fb^\perp=D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Hence, $\left\vert\{\mbox{Blocks of}\,\, U_\chi(\fg)\}\right\vert\leq \left\vert \Lambda_\chi/W_{\bullet}\right\vert$ and, together with Proposition~\ref{Lower}, this gives the desired result.
\end{proof}
\begin{cor}\label{BlockG23}
Suppose $G$ is the almost-simple simply-connected algebraic group of type $G_2$ over an algebraically closed field $\bK$ of characteristic $3$. If $\chi\in\fg^{*}$ satisfies $\chi(\fb)=0$, then $U_\chi(\fg)$ has exactly $\left\vert\Lambda_\chi/W_\bullet\right\vert=3$ blocks.
\end{cor}
\begin{proof}
The calculations in Subsection~\ref{G23}, {\em infra}, show that, for each $\lambda\in \Lambda_0$, the regular nilpotent $\chi$ in standard Levi form gives an irreducible baby Verma module. The result then follows from Proposition~\ref{prop1} (and one can check directly that $\left\vert\Lambda_\chi/W_\bullet\right\vert=3$).
\end{proof}
\begin{rmk}
From the discussion in Section \ref{sec3}, it is sufficient to check the condition of Proposition~\ref{prop1} for representatives $\lambda\in\Lambda_0/{W_\bullet}$.
\end{rmk}
\begin{rmk}
By Premet's theorem \cite{Pr1.1,Pr3.1}, Proposition~\ref{prop1} gives a proof of Humphreys' conjecture on blocks when Jantzen's standard assumptions hold. This is similar to the proof of Proposition~\ref{BlockNumb}, {\em supra}.
\end{rmk}
Proposition~\ref{prop1} shows that Humphreys' conjecture on blocks holds whenever irreducible baby Verma modules exist. The next proposition describes the situation when they do not.
\begin{prop}\label{NoIrred}
Let $\lambda\in \Lambda_0$. If there does not exist $\mu_\lambda\in\fb^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is irreducible, then there exists $1\leq s\leq p^N-1$ such that, for all $\chi\in\fb^{\perp}$, $Z_\chi(\lambda)$ has an $s$-dimensional submodule.
\end{prop}
\begin{proof}
If there does not exist $\mu_\lambda\in\fb^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is irreducible then, using the above notation, $$\bigcap_{s=1}^{p^N-1}N_{\lambda,s}^c=\emptyset.$$ Since each $N_{\lambda,s}^c$ is open in $\fb^{\perp}$, and each non-empty open set in $\fb^{\perp}$ is dense, this implies that there exists $1\leq s\leq p^N-1$ such that $N_{\lambda,s}^c=\emptyset$. This implies that $N_{\lambda,s}=\fb^{\perp}$, as required.
\end{proof}
We end by observing an obvious generalisation of the statement that, for $\lambda\in\Lambda_0$, $S_\lambda$ is open dense in $\fb^{\perp}$ whenever there exists $\chi\in\fb^{\perp}$ with $Z_\chi(\lambda)$ irreducible.
\begin{prop}
Let $\lambda\in \Lambda_0$. Suppose that there exists $\chi_\lambda\in\fb^{\perp}$ and $0\leq k\leq N$ such that every submodule of $Z_{\chi_\lambda}(\lambda)$ has dimension divisible by $p^k$. Then the subset $$V_\lambda\coloneqq\{\mu\in\fb^{\perp}\,\vert\,\mbox{Each } U_\mu(\fg)\mbox{-submodule of }\,Z_\mu(\lambda)\,\,\mbox{has dimension divisible by }\, p^{k}\}$$ is a dense open subset of $\fb^{\perp}$.
\end{prop}
\begin{proof}
The result follows once we note that $$V_\lambda=\bigcap_{\substack{1\leq s\leq p^N \\ p^k\nmid s}}N_{\lambda,s}^c.$$ Indeed, each $N_{\lambda,s}^c$ is open, so $V_\lambda$ is open; it is non-empty since it contains $\chi_\lambda$, and any non-empty open subset of $\fb^{\perp}$ is dense.
\end{proof}
\begin{rmk}
This proposition therefore allows us to use the results of Appendix~\ref{sec6} to find dense open subsets $V_\lambda$ of $\fb^{\perp}$. These subsets are thus candidates for the sets $\widehat{C}_\lambda$ discussed earlier; all that remains to show is that $V_\lambda\subseteq C_\lambda$ for all $\lambda\in\Lambda_0$. If this were to hold, then the previous discussion would give a proof of Humphreys' conjecture on blocks for such $\fg$.
\end{rmk}
\section{Introduction}
It has long been recognized \cite{Berezinskii70, Berezinskii71, Kosterlitz73, Kosterlitz74, Froehlich81} that the two-dimensional XY model undergoes a Kosterlitz-Thouless (KT) phase transition upon varying
temperature $T$. This transition is peculiar in a number of
respects: it is not accompanied by the appearance of long-range order (which is prohibited by the Mermin-Wagner theorem \cite{Mermin66}) and the free energy is a smooth ($C^{\infty}$ class)
function of the thermodynamic parameters. Nonetheless, the low-$T$ phase displays long-range correlations and order-parameter stiffness. The latter exhibits a universal jump upon
crossing the transition temperature $T_{KT}$. The correlation length is characterized by an essential singularity in the vicinity of the transition in the high-$T$ phase. A distinct nonuniversal feature of the
$XY$ model is the
pronounced, asymmetric peak of the specific heat as a function of $T$. The occurrence of this maximum is usually attributed to a rapid increase of entropy upon unbinding the vortex-antivortex pairs.
The peak is located somewhat above the transition temperature $T_{KT}$. It is peculiar that on one hand the maximum is well separated from the asymptotic critical region, and, on the other, it occurs in a temperature
range where the correlation length is still very large compared to the microscopic scale.
The Kosterlitz-Thouless transition is relevant in a number of physical contexts \cite{Chaikin_book} such as magnetism, liquid crystals, melting of two-dimensional ($d=2$) solids,
superfluidity and superconductivity.
Experimentally the KT-type behavior was observed in liquid-helium films \cite{Bishop78, Maps81, Maps82} and atomic gases \cite{Hadzibabic06, Tung10, Desbuquois12, Murthy15}.
The universal aspects of the KT transition are conventionally described in the language of vortex-antivortex pair unbinding and a mapping to a Coulomb gas or sine-Gordon field theory.
The predictive power of such formulations is typically restricted to the behavior in the vicinity of the transition, which makes it harder to access the non-universal, system-specific properties,
such as the critical temperature or the position, magnitude and width of the specific-heat peak.
In the present work we develop and extend the description of the KT transition using the non-perturbative renormalization group (RG).
We build upon earlier works \cite{Graeter95, Gersdorff01, Jakubczyk14}, which, however, were limited to $\phi^4$-type effective models and focused exclusively on universal aspects of the
transition.
The formulation evades the explicit
introduction of vortices as degrees of freedom, and, in the form presented here, takes
a microscopic spin system as the starting point. On the other hand, the approach captures the long-wavelength infrared (IR) asymptotics and respects the Mermin-Wagner theorem.
In the present analysis we focus primarily on the relatively
simple XY model on a square lattice, where
the results obtained at different approximation levels of the RG framework can be compared to ample Monte Carlo (MC) data \cite{Tobochnik79, Himbergen81, Olsson95, Hasenbusch97, Hasenbusch05, Xu07, Komura12, Yu14}.
Observe however that the verification of the theoretical predictions of the KT theory by MC
simulations has
not always been conclusive even with respect to the most basic properties. For a critical discussion of these issues see Ref.~\cite{Hasenbusch05}. The present approach complements the MC in that it is formulated directly for infinite volume and does not
invoke finite-size scaling theory. It also differs from the standard RG treatments in evading the introduction of vortices or any expansion in powers of the order-parameter field. We show that the latter is crucial for a
correct (even qualitatively) account of thermodynamics in the high-$T$ phase.
The nonperturbative RG is among the methods allowing for accurate computations of critical behavior in diverse systems. Its applicability is in addition by no means restricted to the vicinity of a phase transition.
It has proven useful
in a wide range of complex physical contexts. Examples include models with competing orders \cite{Metzner_review, Friederich11, Giering12, Eberlein14} or situations out of equilibrium \cite{Kloss12, Kloss14, Mesterhazy15}.
The formalism by itself sheds light on fundamental aspects of critical phenomena (see e.g. \cite{Leonard15, Delamotte16}), leading to genuine progress in the field. On the other hand, not so often do
the computations within this approach reach high-precision accuracy away from the critical region. The precise predictions also typically depend somewhat on the choice of
regularization. Ref.~\cite{Machado10} shows that the critical temperature of the 3-dimensional Ising model may be calculated with an accuracy of around 1\%. Going beyond this precision level would require a substantial effort.
The presently analyzed case of the 2-dimensional
XY model is methodologically very distinct for at least two reasons: (1) The physics governing the vicinity of the phase transition is dominated by the anomalous dimension (which is negligible in $d=3$ for most purposes); (2)
fluctuation effects are stronger (and lead to ultimate obliteration of long-range order) due to the presence of the Goldstone mode.
Our framework automatically encodes the Mermin-Wagner theorem and is (upon slight modifications) extendable to more complex systems characterized by similar low-energy behavior at finite temperatures. These include quantum spins as well as interacting bosons or fermions
in $d=2$. Such systems were already studied within simpler nonperturbative RG truncations, see e.g.~\cite{Krahl07, Floerchinger09, Rancon12, Rancon13, Rancon14, Strack14}. However, the latter are not sufficient to
correctly account for nonuniversal features related to the KT
transition - see Sec.~V.
Before embarking on the more complex problems mentioned above, it is important to understand the merits and limitations of the method in situations where the results can be reliably compared to other approaches.
\section{The XY model and the corresponding lattice field theory}
The classical XY model on a lattice is defined by the Hamiltonian
\begin{equation}
\label{Hamiltonian}
\mathcal{H}\left(\{\vec{s_i}\}_{i=1}^{N}\right) = -\frac{1}{2}J_{ij}\vec{s_i}\vec{s_j}\;
\end{equation}
where $i,j\in\{1\dots N\}$ label the sites of the lattice, $\vec{s_i}\in \mathbb{R}^2$, $|\vec{s_i}|=1$, and the summation is implicit wherever the index appears exactly
twice in a product expression.
The corresponding partition function is given by
\begin{equation}
\label{Z_XY}
\mathcal{Z} = \sum_{\{\vec{s}\}}e^{-\beta\mathcal{H}}, \;\;\; \textrm{where} \;\;\; \sum_{\{\vec{s}\}} = \int \prod_{i}d\theta_i\;.
\end{equation}
Here $\beta^{-1}=k_BT$ and $\theta_i$ denotes the angle between the vector $\vec{s_i}$ and the $x$-axis, so that $\vec{s_i}\vec{s_j}=\cos(\theta_i-\theta_j)$ and
$\theta_i\in [0,2\pi [$ for each $i$.
In order to cast the problem of evaluating the partition function in the language of field theory, we employ the identity
\begin{equation}
\label{identity}
e^{\frac{1}{2} A_{ij}\vec{s_i}\vec{s_j}}=\mathcal{N}^{-1}\int\prod_id\vec{\psi_i}e^{-\frac{1}{2}\left(\mathbb{A}^{-1}\right)_{ij}\vec{\psi_i}\vec{\psi_j}+\vec{s_i}\vec{\psi_i}}\;,
\end{equation}
where the normalization factor is given by
\begin{equation}
\mathcal{N} = (2\pi)^N\det \mathbb{A}\;.
\end{equation}
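As a sanity check, Eq.~(\ref{identity}) can be verified numerically in the simplest case of a single site ($N=1$), where $\mathbb{A}$ reduces to a positive scalar. The following sketch (with arbitrary illustrative values for $A$ and $\vec{s}$) compares the two sides by brute-force grid integration:

```python
import numpy as np

# Single-site (N = 1) check of the Gaussian linearization identity:
# exp(A/2) = N^{-1} * int d^2psi exp(-psi^2/(2A) + s.psi), with N = 2*pi*A
# and a unit spin s. A and s are arbitrary illustrative choices.
A = 1.7
s = np.array([1.0, 0.0])

L, n = 12.0, 801                      # integration box and grid resolution
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
integrand = np.exp(-(X**2 + Y**2) / (2.0 * A) + s[0] * X + s[1] * Y)
rhs = integrand.sum() * dx * dx / (2.0 * np.pi * A)
lhs = np.exp(A / 2.0)                 # exp(A |s|^2 / 2) with |s| = 1
assert abs(rhs - lhs) / lhs < 1e-6
```

The agreement is limited only by the grid resolution; the same bookkeeping fixes the normalization $\mathcal{N}$ for general $N$.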
Here $\vec{\psi}_i$ is a two-dimensional vector attributed to the lattice site $i$. Eq.~(\ref{identity}) applies provided the matrix $\mathbb{A}$ is positive-definite. The non-positivity can
be cured by shifting the matrix by a constant diagonal term, which, in our setup, amounts to transforming the Hamiltonian Eq.~(\ref{Hamiltonian}) via
\begin{equation}
\mathcal{H}\left(\{\vec{s_i}\}_{i=1}^{N}\right) \longrightarrow \mathcal{H}_c \left(\{\vec{s_i}\}_{i=1}^{N}\right) = -\frac{1}{2}J_{ij}\vec{s_i}\vec{s_j}-\frac{1}{2}c
\vec{s_i}\vec{s_i}\;,
\end{equation}
i.e.\ shifting it by the constant $-\frac{1}{2}Nc$. Specifying
\begin{equation}
A_{ij} = \beta (J_{ij} +c\delta_{ij} )
\end{equation}
we apply Eq.~(\ref{identity}) to Eq.~(\ref{Z_XY}). The resulting expression for the partition function $\mathcal{Z}$ still involves the multiple integration over the spin variables $(\{\vec{s}\})$, which can now be explicitly performed
\begin{equation}
\sum_{\{\vec{s}\}}e^{\vec{s_i}\vec{\psi_i}} = \left(2\pi\right)^N\prod_i I_0\left(|\vec{\psi_i}|\right)\;.
\end{equation}
Here $I_\alpha(x)$ denotes the modified Bessel function of the first kind. This way we cast the partition function in the form
\begin{equation}
\mathcal{Z} = \left( \det \mathbb{A}\right)^{-1} \int \prod_i d\vec{\psi_i}\, e^{-\frac{1}{2}\vec{\psi_i}(\mathbb{A}^{-1})_{ij}\vec{\psi_j}+\sum_i\ln I_0\left(|\vec{\psi_i}|\right) }\;.
\end{equation}
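The single-site angular integral behind this representation, $\int_0^{2\pi}d\theta\, e^{|\vec{\psi}|\cos\theta}=2\pi I_0(|\vec{\psi}|)$, is easy to confirm numerically (the magnitudes below are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

# check int_0^{2pi} exp(psi*cos(theta)) d(theta) = 2*pi*I_0(psi)
for psi in (0.3, 1.0, 4.2):           # arbitrary test magnitudes
    val, _ = quad(lambda th: np.exp(psi * np.cos(th)), 0.0, 2.0 * np.pi)
    assert abs(val / (2.0 * np.pi * i0(psi)) - 1.0) < 1e-6
```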
In order to make all the temperature dependencies explicit, we rescale the interaction matrix $\mathbb{A}$ and the fluctuating field $\vec{\psi}$ according to:
\begin{equation}
\label{change_of_var}
\tilde{A}_{ij} = \beta^{-1} A_{ij}\;,\;\;\;\;\;\;\; \vec{\phi_i} = \beta^{-\frac{1}{2}}\vec{\psi_i}\;.
\end{equation}
In this way the partition function is expressed as
\begin{equation}
\label{zet_rep}
\mathcal{Z} = \int \mathcal{D}\vec{\phi}\, e^{-\beta S[\vec{\phi}]},
\end{equation}
with
\begin{equation}
\label{initial_action}
\beta S[\vec{\phi}] = \frac{1}{2} \vec{\phi_i}\left(\tilde{\mathbb{A}}^{-1}\right)_{ij}\vec{\phi_j}-\sum_i\log I_0(\beta^{\frac{1}{2}}|\vec{\phi_i}|)
\end{equation}
and
\begin{equation}
\label{measure}
\mathcal{D}\vec{\phi} = \left(\det \tilde{A}\right)^{-1} \prod_i d \vec{\phi_i}\;.
\end{equation}
Importantly, the change of variables of Eq.~(\ref{change_of_var}) removes the temperature dependence from the integration measure $\mathcal{D}\vec{\phi}$ as well as from the kinetic term
$\frac{1}{2}\vec{\phi_i}\left(\tilde{\mathbb{A}}^{-1}\right)_{ij}\vec{\phi_j}$ in the effective action $S[\vec{\phi}]$, absorbing it fully into the local potential term
$\log I_0(\beta^{\frac{1}{2}}|\vec{\phi_i}|)$. This aspect is crucial for the validity of the subsequent approximate RG procedure of Secs.~III and IV. The choice introduced in Eq.~(\ref{change_of_var})
differs from some standard conventions \cite{Amit_book}.
Eqs.~(\ref{zet_rep})--(\ref{measure}) define the starting point for our computations. Specific lattice and interaction types may now be addressed by specifying the corresponding
matrix $\mathbb{J}$.
Assuming translational invariance, the kinetic term in $S[\vec{\phi}]$ is diagonalized by the Fourier transform:
\begin{equation}
\frac{1}{2} \vec{\phi_i}\left(\tilde{\mathbb{A}}^{-1}\right)_{ij}\vec{\phi_j} = \frac{1}{2}\sum_{\vec{q}}\tilde{A}^{-1}_{\vec{q}}\vec{\phi}_{\vec{q}}\vec{\phi}_{-\vec{q}}\;,
\end{equation}
where
\begin{equation}
\tilde{A}_{\vec{q}} = c + J_{\vec{q}} = c + \frac{1}{N}J_{ij}e^{i\vec{q}(\vec{r}_i-\vec{r}_j)}\;.
\end{equation}
This establishes the explicit form of the kinetic term.
\subsection{Mean-field theory for ferromagnetic order}
Assuming a form of $\mathbb{J}$ favoring ferromagnetic ordering, one identifies the mean-field free energy as the minimum of $S[\vec{\phi}]$. Restricting to uniform field
configurations, the mean-field equilibrium value of $|\vec{\phi}|$ is determined by
\begin{equation}
\tilde{A}_0^{-1}|\vec{\phi}|-\frac{I_1(\beta^{1/2}|\vec{\phi}|)}{I_0(\beta^{1/2}|\vec{\phi}|)}\beta^{1/2} = 0\;.
\end{equation}
The mean-field critical temperature is obtained by expanding the above around $|\vec{\phi}|=0$ up to terms linear in $|\vec{\phi}|$. This relates the critical temperature to
$\tilde{A}_0$:
\begin{equation}
\label{MF_temp}
k_B T_c = \frac {1}{2}\tilde{A}_0 \;.
\end{equation}
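A minimal numerical sketch of the mean-field analysis, with illustrative values $J=1$, $c=4$ (hence $\tilde{A}_0=c+4J=8$) and $k_B=1$: below $k_BT_c=\tilde{A}_0/2$ the stationarity condition admits a non-trivial root, while above $T_c$ only $|\vec{\phi}|=0$ remains.

```python
import numpy as np
from scipy.special import i0, i1
from scipy.optimize import brentq

A0 = 8.0            # \tilde{A}_0 = c + 4J for the illustrative J = 1, c = 4
Tc = A0 / 2.0       # mean-field k_B T_c, Eq. (MF_temp)

def gap(phi, T):
    """Negative of the left-hand side of the mean-field stationarity condition."""
    b = 1.0 / T                      # beta, with k_B = 1
    x = np.sqrt(b) * phi
    return np.sqrt(b) * i1(x) / i0(x) - phi / A0

# below T_c: a non-trivial minimum exists
phi_low = brentq(lambda p: gap(p, 0.5 * Tc), 1e-6, 50.0)
assert phi_low > 0.1

# above T_c: gap(phi) < 0 for all phi > 0, so only phi = 0 is stationary
grid = np.linspace(1e-3, 20.0, 400)
assert all(gap(p, 1.5 * Tc) < 0 for p in grid)
```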
The corresponding critical exponents are classical and the critical temperature of Eq.~(\ref{MF_temp}) carries a strong, linear dependence on the parameter $c$ \cite{Amit_book}. Obviously, the resulting occurrence of long-range
order at mean-field level contradicts the Mermin-Wagner theorem. In addition we observe that the mean-field free energy and, in consequence, also the specific heat
are zero in the high-temperature phase.
\subsection{Nearest-neighbor interactions}
For the square lattice with nearest-neighbor interactions we obtain
\begin{equation}
\label{NNJ}
J_{\vec{q}} = 2J\left[\cos(a q_x) + \cos(a q_y)\right]\;,
\end{equation}
where $J$ is the nearest-neighbor coupling and $a$ denotes the lattice spacing. The latter will be put equal to 1 in all numerical calculations.
\section{Nonperturbative RG}
The central idea of the nonperturbative renormalization group approach to equilibrium condensed-matter systems is to recast the problem of computing the partition function
$\mathcal{Z}$ in the form of a (functional) differential equation. There exist a number of variants of this program. The presently applied formulation, developed by Wetterich \cite{Wetterich93},
relies on the concept of a flowing scale-dependent effective action $\Gamma_k[\vec{\phi}]$. This quantity continuously connects the microscopic action (in the present work given
by Eq.~(\ref{initial_action})) with the full free energy $F$ upon varying the flow parameter $k$. The latter is here taken to be an IR momentum cutoff scale. It serves
to add a mass of order $k^2$ to the fluctuation modes, effectively freezing their propagation for momenta $q<k$. Lowering the cutoff scale implies including modes of
progressively lower momenta. For vanishing $k$ all fluctuation modes are included into the partition function and we find $\Gamma_k[\vec{\phi}]\longrightarrow \beta F[\vec{\phi}]$
as $k\to 0$.
The variation of $\Gamma_k[\vec{\phi}]$ upon changing $k$ is governed by the flow equation \cite{Wetterich93}:
\begin{equation}
\partial_k \Gamma_k[\vec{\phi}] = \frac{1}{2} \Tr\left\lbrace \partial_k R_k\left(\Gamma^{(2)}_k[\vec{\phi}] + R_k\right)^{-1} \right\rbrace ,
\label{rgeq}
\end{equation}
where $\Gamma^{(2)}_k[\vec{\phi}]$ denotes the second functional derivative of $\Gamma_k[\vec{\phi}]$. In Fourier space, the trace (Tr) sums over momenta and the field
index $a\in \{1,2\}$. The quantity $R_k(q)$ is the momentum cutoff function added to the inverse propagator to freeze the fluctuations with momenta $q<k$. An exact solution
of Eq.~(\ref{rgeq}) with the initial condition given by Eq.~(\ref{initial_action}) would imply finding the partition function $\mathcal{Z}$. This is not achievable, but the framework of
Eq.~(\ref{rgeq}) offers a number of approximation schemes \cite{Berges02, Kopietz_book, Metzner_review, RG_book} going beyond those accessible within the more traditional approaches.
\subsection{Derivative expansion}
In this work we apply the derivative expansion \cite{Berges02, RG_book, Canet03, Delamotte04} (DE) in which the symmetry-allowed terms in $\Gamma_k$ are classified according to the number of
derivatives (or powers of $\vec{q}$ in momentum space). The most general expression at
level $\partial^2$ (or $q^2$) reads:
\begin{equation}
\Gamma_k[\vec{\phi}] = \int d^2x \left\lbrace U_k(\rho) + \frac{1}{2} Z_k(\rho) (\boldsymbol{\nabla}\vec{\phi})^2 + \frac{1}{4} Y_k(\rho) (\boldsymbol{\nabla}\rho)^2 \right\rbrace ,
\label{DE}
\end{equation}
where $\rho = \frac{1}{2} \vec{\phi}^2$. We impose restrictions neither on the effective potential $U_k(\rho)$ nor on the gradient functions $Z_k(\rho)$, $Y_k(\rho)$, which are allowed to depend on the cutoff scale $k$. The
occurrence of two gradient terms is due to the fact that the transverse and radial components of the field are characterized by different stiffness coefficients.
We also observe here that the initial condition Eq.~(\ref{initial_action}) contains terms of all powers of $\rho$. In fact, the initial condition does not quite fit the ansatz of Eq.~(\ref{DE}) since the kinetic term involves functions
of arbitrarily high
order in $|\vec{q}|$. We come back to this point later on. Plugging the ansatz of Eq.~(\ref{DE}) into Eq.~(\ref{rgeq}) yields a projection of the Wetterich equation onto a set of three coupled
non-linear partial differential equations describing the flow of
$U_k(\rho)$, $Z_k(\rho)$ and $Y_k(\rho)$, which may be handled numerically. It is advantageous to perform a canonical rescaling of the flowing quantities by defining
\begin{equation}
\begin{gathered}
\tilde U_k({\tilde\rho}) = v_2^{-1} k^{-2} U_k(\rho), \quad \tilde Z_k({\tilde\rho}) = Z_k^{-1} Z_k(\rho), \\ \tilde Y_k({\tilde\rho}) = v_2 Z_k^{-2} Y_k(\rho) ,
\end{gathered}
\label{dimless}
\end{equation}
where ${\tilde\rho}=v_2^{-1} Z_k\rho$ and the factor $v_2^{-1}=8 \pi $ is conventional. The $k$-dependent constant $Z_k$ (wave-function renormalization) is defined by imposing the
condition $\tilde Z_k(\trho_{\rm r})=1$ where $\trho_{\rm r}$ is an arbitrary renormalization point on the rescaled grid. The scale-dependent anomalous dimension $\eta$ is then given by
\begin{equation}
\eta_k = - k\partial_k \ln Z_k ,
\label{eta}
\end{equation}
and the physical anomalous dimension follows from $\eta=\lim_{k\to 0} \eta_k$. We refrain from quoting the lengthy explicit expressions for the flow equations.
These are given in Ref.~\cite{Gersdorff01} and in the appendix of Ref.~\cite{Jakubczyk14}.
The transition temperature $T_{KT}$ is extracted following Ref.~\cite{Jakubczyk14} by using the fact that the flowing minimum $\rho_{0,k}$ of the (nonrescaled) effective potential vanishes as
$\rho_{0,k}\sim k^{\eta}$ in the low-$T$ phase. This is consistent with both the absence of the long-range order and algebraic decay of correlations governed by the anomalous dimension $\eta$. Since
$Z_k\sim k^{-\eta}$ for $T<T_{KT}$, the minimum of the rescaled potential $\tilde{\rho}_{0,k} = v_2^{-1}Z_k \rho_{0,k}$ remains finite for $k\to 0$ as long as $T<T_{KT}$, and vanishes otherwise.
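This criterion can be illustrated by a toy flow in which the asymptotic scaling forms are inserted by hand (the prefactors and the value of $\eta$ below are arbitrary): $\rho_{0,k}\sim k^{\eta}$ and $Z_k\sim k^{-\eta}$ combine into a $k$-independent plateau of $\tilde{\rho}_{0,k}$.

```python
import numpy as np

eta = 0.25                          # illustrative anomalous dimension
k = np.logspace(0, -6, 200)         # decreasing cutoff scale
rho0 = 2.0 * k**eta                 # flowing minimum in the low-T phase
Zk = 0.7 * k**(-eta)                # wave-function renormalization
rho0_resc = Zk * rho0 / (8.0 * np.pi)   # v_2^{-1} Z_k rho_{0,k}

# the rescaled minimum stays on a k-independent plateau
assert np.allclose(rho0_resc, rho0_resc[0])
```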
\subsection{Initial condition for the propagator}
The proposed approach relies on two approximations. First: the flowing effective action $\Gamma_k[\vec{\phi}]$ is parametrized by the ansatz of Eq.~(\ref{DE}). This implies retaining the most general $U(1)$-invariant
form of the action but only up to terms of order $\partial^2$. In particular the local potential is allowed to contain arbitrarily high powers of $\rho$. Second: as we already remarked, the initial action of Eq.~(\ref{initial_action})
involves terms of higher order in $|\vec{q}|$ than $|\vec{q}|^2$. We, however, cast it in a form consistent with Eq.~(\ref{DE}) by expanding the dispersion in Eq.~(\ref{initial_action}) around $\vec{q}=0$. Physically this may be
understood as ``smearing'' or coarse-graining the lattice structure ``by hand''.
In a somewhat more
subtle treatment one might split the flow into two stages. In the initial part ($k>a^{-1}$) of the flow hardly any renormalization of $Z_k(\rho)$ and $ Y_k(\rho)$ occurs,
but the cosine dispersion may play a role. In the second stage ($k<a^{-1}$) the lattice no longer matters.
An interesting alternative is provided by the lattice non-perturbative RG framework \cite{Machado10}, where the initial
stage of
the flow is bypassed altogether, and the initial condition is not given by the bare action but is instead computed from the local limit of decoupled sites. This program, however, places restrictions on the cutoff, which
are most naturally fulfilled by a non-smooth Litim-type regulator \cite{Litim01}. This in turn renders the flow much less stable numerically. Such complications are most severe in $d=2$. We observe no signatures of numerical instabilities in our variant of
approximation. In addition our calculation requires a significantly smaller field grid than that of Ref.~\cite{Machado10}.
Our strategy to perform the $q$-expansion from the outset
instead of the slightly more accurate treatments mentioned above also stems from the aim to develop a numerically modest and stable framework flexibly extendable to other contexts (such as interacting quantum gases in $d=2$). The present
approximation allows us to avoid any two-dimensional integrations in the flow equations. Also observe that the dominant contributions to all the integrals come from small momenta even for large $k$, and, of course, upon
reducing $k$ the approximation becomes progressively more accurate. Reasoning \textsl{a posteriori}, our results suggest that the error from neglecting the cosine dispersion amounts to a shift of the critical
temperature (see Sec.~IV).
The second derivative matrix in the propagator at the beginning of the flow thus reads
\begin{equation}
\frac{\delta^2 \beta S [\vec{\phi}]}{\delta \phi_{\vec{q_1}}^{\alpha_1}\delta \phi_{\vec{q_2}}^{\alpha_2} } = \delta_{\alpha_1, \alpha_2}\delta_{\vec{q_1}+\vec{q_2},0} \tilde{A}_{\vec{q_1}}^{-1} \;.
\end{equation}
We extract $Z_{k=k_0}(\rho)$ and $Y_{k=k_0}(\rho)$ from the $\vec{q_1}^2$ coefficients of the expansion of $\tilde{A}_{\vec{q_1}}^{-1}$ around $\vec{q_1}=0$. Here $k_0\gg a^{-1}$ is the initial cutoff scale.
In fact, we obtain $Y_{k=k_0}(\rho)=0$ and $Z_{k=k_0}(\rho)=\mathrm{const}>0$. We note that rescalings of the fluctuating field and interaction matrix alternative to
Eq.~(\ref{change_of_var}), such as that of Ref.~\cite{Amit_book}, generate both $Z_{k=k_0}(\rho)$ and $Y_{k=k_0}(\rho)$ dependencies in the initial condition.
The momentum integrations in the flow equations are computed over a disc of radius $k_{UV} = \pi/a$. In a non-approximate treatment they should run over the Brillouin zone
($]-\frac{\pi}{a},\frac{\pi}{a}]\times ]-\frac{\pi}{a},\frac{\pi}{a}] $ for a square lattice). The scale $k_{UV}$ is often identified with the scale $k_0$ where the flow is initiated. In fact these quantities are distinct, and,
in principle $k_0$ should be taken to infinity to ensure that all fluctuations are frozen, so that the action of Eq.~(\ref{initial_action}) is the correct starting point. In a practical numerical implementation
we take $k_{UV}\ll k_{0} <\infty$ and verify that the inverse propagators at high scales are completely dominated by the cutoff term.
\subsection{Numerical solution}
Numerical integration of the flow equations proceeds along the lines of Ref.~\cite{Jakubczyk14}. There are however two important differences. Ref.~\cite{Jakubczyk14} used an effective
$\phi^4$ action as a
starting point, while here the initial condition follows from Eq.~(\ref{initial_action}) (see below). The other difference is that in the present calculation we extract thermodynamic quantities
(specific heat in particular) related directly to the free energy, which is given by $U_{k\to 0}(\rho)$. While for the purposes of Ref.~\cite{Jakubczyk14} it was sufficient to compute the flow of the
${\tilde\rho}$-derivative of the rescaled potential $\tilde U_k'({\tilde\rho})$, here we additionally compute the flow of the (nonrescaled) potential $U_k({\tilde\rho})$. The corresponding flow equation, supplementing
the flow equations given in Ref.~\cite{Jakubczyk14} reads:
\begin{equation}
\begin{gathered}
k^{-1}\partial_k U_k(\tilde{\rho}) = \tilde{\rho}\eta \tilde{U}_k'(\tilde{\rho})- \\ \int dx \left(\eta x r(x) +2x^2r'(x)\right) \left(\tilde{G}_L(x,\tilde{\rho}) + \tilde{G}_T(x,\tilde{\rho})\right) \;,
\end{gathered}
\end{equation}
where $x=q^2/k^2$ and the dimensionless $\tilde{\rho}$-dependent longitudinal and transverse inverse propagators are given by
\begin{equation}
\begin{split}
\tilde G_{\rm L}^{-1}(x,\tilde{\rho}) &= x[\tilde Z_k(\tilde{\rho}) + {\tilde\rho} \tilde Y_k(\tilde{\rho}) + r(x)] + \tilde U_k'(\tilde{\rho}) + 2{\tilde\rho} \tilde U_k''(\tilde{\rho}) \\
\tilde G_{\rm T}^{-1}(x,\tilde{\rho}) &= x[\tilde Z_k(\tilde{\rho}) + r(x)] + \tilde U_k'(\tilde{\rho}) \;.
\end{split}
\end{equation}
A reliable calculation demands high numerical accuracy. This is because we solve the flow equations for a set of initial conditions parametrized by temperature $T$ and subsequently
evaluate the entropy and the specific heat by numerically computing the first two derivatives of the result for the free energy with respect to $T$.
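The post-processing step can be sketched as follows, applied here to a toy free energy with known derivatives rather than the actual RG output, so that the finite-difference accuracy can be checked:

```python
import numpy as np

# toy free energy standing in for the RG result F(T); for F = -T log T
# one has S = log T + 1 and c_v = -T F'' = 1 exactly
T = np.linspace(0.5, 2.0, 2001)
F = -T * np.log(T)

S = -np.gradient(F, T)                          # entropy
cv = -T * np.gradient(np.gradient(F, T), T)     # specific heat

# compare with the exact results away from the grid edges
assert np.max(np.abs(S[10:-10] - (np.log(T[10:-10]) + 1.0))) < 1e-4
assert np.max(np.abs(cv[10:-10] - 1.0)) < 1e-3
```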
We also observe that, since we explore the high-temperature phase and the vicinity of the phase transition in the low-$T$ phase, the principal problem of encountering the pole of the flow equations
described in Ref.~\cite{Jakubczyk14} is irrelevant here.
In the practical numerical solution we employ the smooth Wetterich cutoff
\begin{equation}
R_k(\vec{q}) = Z_k \vec{q}^2 r (\vec{q}^2/k^2), \qquad r(x) = \frac{\alpha}{e^x-1} \;.
\label{Rdef}
\end{equation}
The inclusion of the wave-function renormalization in $R_k(\vec{q})$ is required for obtaining scale-invariant solutions. The parameter $\alpha$ is in principle arbitrary.
Ref.~\cite{Jakubczyk14} raised the question of the existence of exact (functional) fixed points of the flow depending on its value. The present analysis is performed at fixed $\alpha=2.0$, which is close to the
``optimal'' value
in the immediate vicinity of the transition. We refer to Ref.~\cite{Jakubczyk14} for an extensive discussion of this issue.
\section{Numerical Results}
For the nearest-neighbor XY model $J_{\vec{q}}$ is given by Eq.~(\ref{NNJ})
and it follows that the initial condition for $Z_k(\rho)$ and $Y_k(\rho)$ reads
\begin{equation}
Z_{k_0}(\rho) = Z_0 = \frac{J a^2}{(c+4J)^2}\;,\;\;\;\;\; Y_{k_0}(\rho)=0\;,
\end{equation}
while the initial effective potential is given by
\begin{equation}
U_{k_0}(\rho) = U_0(\rho) = \frac{1}{(c + 4J)}\rho - \log I_0 \left(\sqrt{2\rho\beta}\right) \;.
\end{equation}
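The quoted value of $Z_0$ is simply the $q^2$ coefficient of $\tilde{A}_{\vec{q}}^{-1}$ expanded around $\vec{q}=0$; a quick numerical check (with illustrative couplings):

```python
import numpy as np

J, a, c = 1.0, 1.0, 4.0             # illustrative couplings

def A_inv(qx, qy):
    # \tilde{A}_q^{-1} for nearest-neighbor interactions, Eq. (NNJ)
    return 1.0 / (c + 2.0 * J * (np.cos(a * qx) + np.cos(a * qy)))

Z0 = J * a**2 / (c + 4.0 * J) ** 2  # claimed q^2 coefficient
q = 1e-3
num = (A_inv(q, 0.0) - A_inv(0.0, 0.0)) / q**2
assert abs(num - Z0) / Z0 < 1e-4
```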
We observe that both $U_0(\rho)$ and $Z_0$ depend on the arbitrary parameter $c$. In fact, as we already pointed out, the mean-field transition temperature of Eq.~(\ref{MF_temp}) carries a strong $c$-dependence. Adding
fluctuations by the
non-perturbative RG flow drastically reduces this dependence, but does not remove it completely, as discussed below. Also observe that
(at least formally) the above expressions for $U_0(\rho)$ and $Z_0$ make sense for arbitrary non-negative values of $c$. On the other hand, for the present case of nearest-neighbor interactions, the
matrix $\mathbb{A}$ is positive-definite for $c>4J$.
\subsection{Critical temperature}
The critical temperature $T_{KT}$ is estimated by following the flow of the minimum of the (rescaled) effective potential. This quantity reaches zero at a finite scale $k>0$ for the system
in the high-$T$ phase, and attains an (approximate) fixed point in the KT phase. Equivalently, one may inspect the evolution of the anomalous exponent $\eta_k$, vanishing in the high-$T$ phase for $k$ sufficiently
small and attaining a ``plateau'' otherwise. The procedure follows Ref.~\cite{Jakubczyk14} and is illustrated in Fig.~\ref{flow_plot} where we plot $\tilde{U}'(0)$ as a function of $s=-\log\left(k/k_0\right)$.
Also note that the method of estimating $T_{KT}$ is different from the MC, which typically employs a
fit of the theoretical formulae for the correlation length and susceptibility to the simulation data.
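The resulting estimate of $T_{KT}$ amounts to a bisection in temperature on the qualitative behavior of the flow; a minimal sketch follows, where a stand-in criterion with a transition placed by hand at $T=0.91$ replaces the actual flow integration:

```python
def locate_transition(is_low_T_phase, T_lo, T_hi, tol=1e-6):
    """Bisect the temperature axis on the qualitative behavior of the flow:
    is_low_T_phase(T) is True when the couplings attain fixed-point-like
    behavior (KT phase) and False when the potential minimum hits zero
    at a finite scale (high-T phase)."""
    assert is_low_T_phase(T_lo) and not is_low_T_phase(T_hi)
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if is_low_T_phase(T_mid):
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

# stand-in criterion with the transition placed by hand at T = 0.91
T_KT = locate_transition(lambda T: T <= 0.91, 0.5, 1.5)
```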
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{w0_flows_plot.eps}}
\caption{(Color online) Exemplary flows of the (rescaled) derivative of the effective potential at $\tilde{\rho}=0$ for a range of temperatures containing $T_{KT}$. For $T>T_{KT}$ the minimum of the rescaled potential
($\tilde{\rho}_0$) hits zero at a finite scale $k$, and $\tilde{U}'(0)>0$. On the other hand, for $T\leq T_{KT}$ the flowing couplings (including $\tilde{U}'(0)$) attain fixed-point-like behavior. The lowest curve
corresponds to $T\approx 0.95 T_{KT}$, the highest one to $T\approx 1.05 T_{KT}$. }
\label{flow_plot}
\end{figure}
In Ref.~\cite{Machado10} only a very weak dependence of the critical temperature on the parameter $c$ was found in the case of the Ising model in three dimensions.
In the present case we observe a monotonic dependence
of the KT temperature on the parameter $c$, ranging between $0.91 J/k_B$ for $c=4$ and $1.02 J/k_B$ for $c=8$. The dependence of $T_{KT}$ on $c$ slowly ceases at larger values of $c$. Large values of $c$ are, however,
impractical because
$Z_0\sim c^{-2}$ becomes very small. Our estimate of $T_{KT}$ may be compared to the MC results, which give
$T_{KT}\approx 0.89J/k_B$. The lattice version of nonperturbative RG yielded the estimate $0.9<T_{KT}/J<1$ \cite{Machado10}.
We believe that the mechanism responsible for the annihilation (or significant reduction) of the $c$-dependence of the critical temperature is related to the fact that even though the position of the
minimum of $U_0(\rho)$ carries a strong $c$-dependence, the minimum $\tilde{\rho}_0$ of the initial rescaled potential $\tilde{U}_0(\tilde{\rho})$ shows only a very weak sensitivity to the variation of $c$. On the other hand, the
profile of $\tilde{U}_0(\tilde{\rho})$ for $\tilde{\rho}<\tilde{\rho}_0$ does depend on $c$. The dependence of $T_{KT}$ on $c$ should be efficiently eliminated by the flow in situations, where the essential features of the
flow are captured by the behavior of the effective action around $\tilde{\rho}=\tilde{\rho}_0$. The mechanism is expected to be less efficient otherwise. This condition is better fulfilled in $d=3$.
Our procedure of performing the $q$-expansion from the beginning is also of relevance for the results for $T_{KT}$. The dependence of our estimate of $T_{KT}$ on $c$ is an unpleasant feature and we perceive it as
a deficiency of the present approach. It is possible to choose $c$ so that we obtain $T_{KT}$ in precise agreement with MC, but this is not what we aim at.
We note however that the $c$-dependence of $T_{KT}$ is by far weaker than at mean-field level. In addition, the thermodynamic quantities discussed below are insensitive to the choice of $c$ provided they are
computed relative to $T_{KT}$. This suggests that the error related to our approximation is absorbed by a shift of $T_{KT}$, leaving other thermodynamic quantities hardly affected.
\subsection{Entropy and specific heat}
We proceed by evaluating the entropy at zero magnetic moment (or, equivalently, zero magnetic field), which, by elementary thermodynamics, follows from $S(T,N) = Ns(T) = - \frac{\partial F}{\partial T}$. The free energy
$F(T,\phi=0,N)$ is obtained from the integrated flow via $F=k_B T\lim_{k\to 0} U_k(0)$. It is also possible to extend the analysis to non-zero fields since the magnetic field, the order parameter field and
free energy are related by
$h = k_B T\lim_{k\to 0} \partial_{|\phi|} U_k(|\phi|)$. By computing the flow for different $T$ we extract the free energy profiles $U(\rho)$ for a range of temperatures and subsequently evaluate the (discrete)
derivative with respect to $T$. The results are shown in Fig.~(\ref{entropy_plot}). We observe a collapse of the curves computed for different $c$ if the variables are scaled by the critical values. The entropy is a
positive,
monotonically increasing function of temperature, as expected from the principles of thermodynamics.
The signatures of the transition are not visible in the $T$-derivatives of the thermodynamic potential (as is expected from the $KT$-theory and also consistent with the
results of simulations).
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{entropy_plot.eps}}
\caption{(Color online) Entropy as a function of the reduced temperature for a sequence of values of $c$. The curves collapse once scaled by the critical temperature $T_c=T_{KT}$ and the corresponding entropy density $s_c$. }
\label{entropy_plot}
\end{figure}
In the next step we evaluate the specific heat (at zero magnetization). This is given by
\begin{equation}
c_v = T\frac{\partial s}{\partial T}=-\frac{T}{N}\frac{\partial^2 F}{\partial T^2}\;.
\end{equation}
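Numerically, both derivatives are taken as discrete differences of the sampled free-energy profiles; a minimal sketch follows, where the analytic test profile is our own illustration and not the model's $F(T)$:

```python
import numpy as np

def entropy_and_specific_heat(T, F):
    """s = -dF/dT and c_v = T ds/dT from a free energy sampled on a T-grid."""
    s = -np.gradient(F, T)
    c_v = T * np.gradient(s, T)
    return s, c_v

# illustrative analytic profile F(T) = -T^2/2, for which s = T and c_v = T
T = np.linspace(0.5, 1.5, 2001)
s, c_v = entropy_and_specific_heat(T, -0.5 * T**2)
```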
The results obtained for different choices of $c$ are plotted in Fig.~(\ref{c_v_plot}). Again, the dependence on $c$ disappears once the reduced variable is used. The pronounced maximum shows an asymmetry similar to that
found in MC. The peak is located around $T_p\approx 1.1T_{KT}$ and reaches up to $c_v^m\approx 1.6k_B$. Both these quantities are close to the MC and tensor RG results.
More specifically (see e.g.~\cite{Tobochnik79, Xu07, Yu14}), the MC peak is located at
$T_p^{MC}\approx 1.15 T_c$ and reaches up to $c_v^{m(MC)}\approx 1.55k_B$. Note however, that the free energy plotted in Ref.~\cite{Yu14} has positive slope, and the entropy obtained therein is
negative. The reasons for this are not clear to us. The level of agreement of $T_{KT}$ and $c_v(T)$ between the MC and tensor RG results reported in \cite{Yu14} is very high.
It is striking that the rich thermodynamic structure described in this section emerged via the functional RG flow from the mean-field free energy, which is trivially
equal to zero in the high-$T$ phase (see Sec.~II A).
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{c_v_plot.eps}}
\caption{(Color online) Specific heat as function of the reduced temperature for a sequence of values of $c$. The location and magnitude of the peak agree well with MC and tensor RG (see the main text). }
\label{c_v_plot}
\end{figure}
\subsection{Equation of state}
The magnetic field $\vec{h}$ at given $\vec{\phi}$ is extracted from the definition
\begin{equation}
\vec{h}=\frac{\partial F}{\partial \vec{\phi}} = k_B T \lim_{k\to 0}\frac{\partial U_k(\rho)}{\partial \vec{\phi}}\;,
\label{hphi}
\end{equation}
which yields the equation of state $\vec{h}(T,\vec{\phi})$. The isothermal susceptibility at zero field is given as the derivative
\begin{equation}
\chi^{-1} (T) = \frac{\partial h}{\partial \phi}|_{\phi=0} = k_BT v_2^{-1}\lim_{k\to 0}\left(Z_k \frac{\partial U_k(\tilde{\rho})}{\partial \tilde{\rho}}\right)|_{\tilde{\rho}=0}\;,
\end{equation}
and becomes very large upon lowering temperature towards $T_{KT}$. The dependence $\phi(h)$ is shown in Fig.~\ref{mag_field_plot} for a sequence of temperatures.
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{mag_field_plot.eps}}
\caption{(Color online) Dependence of the order-parameter on magnetic field for a sequence of temperatures. The presented calculation was performed at $c=4$. }
\label{mag_field_plot}
\end{figure}
\section{Remarks on the $\phi^4$ truncation}
It is useful to compare the above calculation to a much simpler treatment relying on the $\phi^4$-type expansion. It is natural to invoke a largely simplified ansatz for the effective potential
\begin{equation}
U_k(\rho) = \frac{\lambda_k}{2}\left(\rho-\rho_{0,k}\right)^2 + \gamma_k\;,
\label{phi4_1}
\end{equation}
and also restrict $Z_k(\rho)$ and $Y_k(\rho)$ to flowing couplings corresponding to the functions evaluated at the potential minimum ($\rho_0$). The problem may then be cast into a set of five coupled ordinary
differential flow equations for the couplings $\rho_{0,k}$, $\lambda_k$, $\gamma_k$, $Z_k$ and $Y_k$. The initial condition for the potential is extracted by expanding the effective potential in
Eq.~(\ref{initial_action}) around its minimum. The ansatz Eq.~(\ref{phi4_1}) makes sense for $Z_k\rho_{0,k}>0$. Once the flow crosses over into the regime with $Z_k\rho_{0,k}=0$ one switches to the
parametrization suitable for the high-temperature phase
\begin{equation}
U_k(\rho) = \frac{\lambda_k}{2}\rho^2 + \delta_k\rho + \gamma_k\;.
\label{phi4_2}
\end{equation}
The free energy may then be extracted from $\lim_{k\to 0}\gamma_k$. In fact, a very similar truncation (neglecting $Y_k$ and $\gamma_k$) was employed in Ref.~\cite{Graeter95} and yielded a plausible picture
of the KT transition.
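The extraction of the initial couplings $\rho_{0,k_0}$, $\lambda_{k_0}$, $\gamma_{k_0}$ by expanding the potential around its minimum can be sketched numerically; the finite-difference step and the quadratic test potential below are our own illustrative choices, not the model's $U_{k_0}$:

```python
import numpy as np

def expand_around_minimum(U, rho_grid, h=1e-4):
    """Fit U(rho) ~ (lambda/2)(rho - rho0)^2 + gamma around its minimum,
    using finite differences and a few Newton refinement steps."""
    rho0 = rho_grid[np.argmin(U(rho_grid))]
    for _ in range(50):
        d1 = (U(rho0 + h) - U(rho0 - h)) / (2.0 * h)
        d2 = (U(rho0 + h) - 2.0 * U(rho0) + U(rho0 - h)) / h**2
        rho0 -= d1 / d2
    lam = (U(rho0 + h) - 2.0 * U(rho0) + U(rho0 - h)) / h**2
    return rho0, lam, U(rho0)

# illustrative check: U(rho) = (rho - 2)^2 + 3 gives rho0 = 2, lambda = 2, gamma = 3
rho0, lam, gamma = expand_around_minimum(lambda r: (r - 2.0)**2 + 3.0,
                                         np.linspace(0.0, 5.0, 101))
```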
We have solved the above-mentioned set of flow equations and compared the results to those obtained within the complete derivative expansion in Sec.~IV. Even though the estimate of the critical
temperature $T_{KT}$ is in a reasonable range, the $\phi^4$ approximation badly fails for the thermodynamic quantities. In fact the obtained free energy $F(T)$ is not a concave function of temperature, yielding,
for example, a negative specific heat in a range of temperatures. The reason for this becomes clear after inspecting Eq.~(\ref{initial_action}). Expanding the effective potential in $\rho$ implies an uncontrolled
dropping of temperature dependencies, which, as it turns out, leads to a drastic deformation of the result.
\section{Summary and outlook}
We have solved the non-perturbative RG flow equations for the two-dimensional XY model at the approximation level of the complete second order derivative expansion.
From the obtained free energy $F(T,\phi,N)$ we computed the non-universal thermodynamic
properties in the high-temperature phase.
Wherever possible, we compared the results to Monte Carlo simulations. We found satisfactory agreement for the entropy and specific heat. In particular, the location and magnitude of the specific heat peak relative
to $T_{KT}$ compare very well to MC data.
This is one of the few RG-based computations for this quantity in
this model. As we pointed out, the specific heat peak occurs in a regime which on one hand is off the asymptotic critical region, and, on the other, is characterized by large correlation length. Such a situation is
somewhat atypical. An interesting RG calculation was performed in Ref.~\cite{Yu14} within the tensor RG framework, which may be viewed as a reincarnation of the ideas of direct real-space
coarse-graining. However, it is not clear to what extent that approach can be generalized to other systems.
Our estimate of the
Kosterlitz-Thouless temperature is not far from the correct value; however, the present method cannot serve as a high-precision tool in this case. We argue that the location of the transition is the quantity that is
most strongly affected by the approximations, in particular by the relatively simple treatment of the dispersion at the initial stages of the flow.
As we pointed out, the thermodynamic functions become insensitive to the arbitrary parameter $c$ of the Hamiltonian upon scaling by $T_{KT}$.
We compared the full derivative expansion to a simplified treatment
invoking a vertex expansion ($\phi^4$-type theory), which is commonly applied in different contexts. The latter framework turns out not to be sufficient for computing the non-universal thermodynamic quantities,
since it truncates relevant temperature dependencies in the neglected vertices.
The present calculation bridges a microscopic model with macroscopic thermodynamics via the functional flow equation, accounting for the low-energy asymptotics specific to two-dimensional systems with $U(1)$ symmetry.
It will now be natural and interesting to perform analogous studies of systems characterized by similar infrared physics, including interacting quantum gases in $d=2$.
\begin{acknowledgments}
We are grateful to Nicolas Dupuis and Walter Metzner for useful discussions. We also thank Walter Metzner for reading the manuscript and a number of valuable remarks.
PJ acknowledges funding from the Polish National Science Center via grant 2014/15/B/ST3/02212. AE acknowledges support from the German National Academy of Sciences
Leopoldina through grant LPDS~2014-13.
\end{acknowledgments}
\section{Introduction}
Sphere packings appear diversely in chemistry, physics and mathematics, in crystal structures, granular systems or number theory \cite{umayahara2018crystal,granular,fuchs_stange_zhang_2019}. There is a long trajectory of numerical and analytic approaches, but many questions about the algebraic and combinatorial aspects of non-congruent sphere packings are still open. This work is a continuation of \cite{RR21_1}, where Ram\'irez Alfons\'in and the author introduced a polytopal approach to investigate $d$-dimensional sphere packings. This led, in particular, to a number of results on generalizations of Apollonian circle packings and Descartes' Theorem based on the Platonic solids. Here, we
focus our attention on the three-dimensional analogue by investigating the connection between sphere packings and the $4$-dimensional regular polytopes.\\
A particular case of sphere packing based on a regular $4$-polytope was given by Nakamura and Dias in two independent works \cite{nakamura2014localglobal,Dias2014TheLP}. Nakamura called the construction the \textit{orthoplicial Apollonian sphere packing}, and it is obtained as follows. Consider a packing of $8$ spheres whose tangency graph is the $1$-skeleton of an orthoplex. For each subset of four pairwise tangent spheres, there is a unique dual sphere orthogonal to the four. Invert the whole configuration through every dual sphere, and then repeat this process infinitely (see Figure \ref{fig:Ahoctdepth}). Nakamura and Dias used the orthoplicial Apollonian sphere packing to prove the corresponding analogue of the \textit{local-to-global principle} of tetrahedral Apollonian circle packings, conjectured by Graham, Lagarias, Mallows, Wilks and Yan in \cite{apoNumber}. The original conjecture states that, for every integral tetrahedral Apollonian packing, every large enough integer satisfying certain congruence conditions appears as a curvature. Nowadays, this conjecture is still open \cite{fuchs_stange_zhang_2019}.
\begin{figure}[H]
\centering
\includegraphics[trim=0 3 0 3,clip,width=1\textwidth]{img/orthoplicialapo2.png}
\caption{The orthoplicial Apollonian sphere packing at $0$, $1$ and $2$ iterations. Each color represents an orbit.}
\label{fig:Ahoctdepth}
\end{figure}
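The dual sphere used at each step of the construction above can be obtained directly from the orthogonality conditions $|c-c_i|^2=r^2+r_i^2$; the following sketch (our own coordinate-space formulation; the paper itself works with inversive coordinates) solves the linearized system for four unit spheres placed at the vertices of a regular tetrahedron:

```python
import numpy as np

def orthogonal_sphere(centers, radii):
    """Sphere orthogonal to four given spheres: |c - c_i|^2 = r^2 + r_i^2.
    Subtracting the first condition from the others yields a linear
    system for the center c; the radius then follows from any condition."""
    c = np.asarray(centers, float)
    rad = np.asarray(radii, float)
    A = 2.0 * (c[0] - c[1:])
    b = rad[1:]**2 - rad[0]**2 - np.sum(c[1:]**2, axis=1) + np.sum(c[0]**2)
    center = np.linalg.solve(A, b)
    r = np.sqrt(np.sum((center - c[0])**2) - rad[0]**2)
    return center, r

# four pairwise tangent unit spheres centered at the vertices of a
# regular tetrahedron with edge length 2
verts = [[0, 0, 0], [2, 0, 0], [1, 3**0.5, 0], [1, 1 / 3**0.5, (8 / 3)**0.5]]
center, r = orthogonal_sphere(verts, [1, 1, 1, 1])
```

For this configuration the dual sphere is centered at the circumcenter of the tetrahedron and has radius $\sqrt{1/2}$.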
Cross sections are a natural way to study Apollonian configurations in high dimensions \cite{Baragar2018}. In order to recognize planar structures in sphere packings, not only geometrically but also arithmetically, we shall present an algebraic generalization of cross sections of Apollonian clusters that we call \textit{Apollonian sections}. Through this notion we shall prove the following theorem, which implies that the curvatures in every integral tetrahedral, octahedral or cubical Apollonian packing are contained in an integral orthoplicial Apollonian packing (see Figures \ref{fig:inttetrasec}, \ref{fig:intoctosec} and \ref{fig:intcubsec}).
\begin{thm}\label{thm:sectionscurv}
Let $\Omega(\mathcal{B}_\mathcal{P})$ be either a tetrahedral, an octahedral or a cubical Apollonian packing. There is an orthoplicial Apollonian packing $\Omega(\mathcal{B}_{\mathcal{O}^4})$ containing an Apollonian section arithmetically equivalent to $\Omega(\mathcal{B}_\mathcal{P})$. Moreover, $\Omega(\mathcal{B}_\mathcal{P})$ is integral if and only if $\Omega(\mathcal{B}_{\mathcal{O}^4})$ is integral.
\end{thm}
\vspace{-.6cm}
\begin{figure}[H]
\centering
\includegraphics[width=.32\textwidth]{img/gaskets/tetragasket358.pdf}
\includegraphics[ width=.31\textwidth]{img/sections/ortho358.png}
\includegraphics[width=.31\textwidth]{img/sections/tetrasec358.png}
\caption{(Left) An integral tetrahedral Apollonian circle packing $\Omega(\mathcal{B}_{\mathcal{T}^3})$, (center) an integral orthoplicial Apollonian sphere packing $\Omega(\mathcal{B}_{\mathcal{O}^4})$, (right) an Apollonian section of $\Omega(\mathcal{B}_{\mathcal{O}^4})$ arithmetically equivalent to $\Omega(\mathcal{B}_{\mathcal{T}^3})$.
}\label{fig:inttetrasec}
\end{figure}
\vspace{-.5cm}
\subsection{Main results}
In addition to Theorem \ref{thm:sectionscurv}, our main contributions are the following:
\begin{enumerate}[label=(\roman*)]
\item\label{Result1} New representations of the regular $4$-polytopes called \textit{Centered Ball Packing projections} (Figures \ref{fig:CBP1} and \ref{fig:CBP2}).
\item\label{Result2} We prove that
cross polytopes in dimension $\ge3$ as well as the $24$-cell are Möbius unique (Theorem \ref{thm:octmobunique} and Proposition \ref{thm:24mobunique}).
\item\label{Result3} We present a new set of non edge-scribable $4$-polytopes (Proposition \ref{prop:nonedge}).
\item\label{Result4} We obtain a generalization of the octahedral Descartes' Theorem of Guettler and Mallows in every higher dimension (Corollary \ref{cor:descaroct}).
\item We relate integral polytopal $d$-ball packings based on the $(d+1)$-simplex, $(d+1)$-cross polytope and $(d+1)$-cube to integer solutions of three diophantine equations (Corollary \ref{cor:diaph}).
\end{enumerate}
\subsection{Organization of the paper.}
In Section \ref{sec:regularpol}, after recalling the main tools introduced in \cite{RR21_1}, we show the Centered Ball Packing projections of the regular $4$-polytopes. Then, we extend the results on Möbius unicity to the family of cross polytopes in dimension 3 and above and to the 24-cell. We briefly discuss the Möbius spectra of regular polytopes and use the Möbius unicity of the orthoplex, cube and 24-cell to obtain a class of 4-polytopes which are not edge-scribable. After that, we restate the polytopal generalization of Descartes' Theorem given in \cite{RR21_1} in terms of quadratic forms. Through this new approach, we obtain analogues of the equation of Descartes' Theorem for the families of simplices, cross polytopes and hypercubes in every dimension, which turn out to be useful for finding primitive solutions to three different diophantine equations.\\
In Section \ref{sec:ortho}, we show that Nakamura's orthoplicial sphere packings are in fact polytopal. We then discuss some aspects concerning the duality of orthoplicial sphere packings and we
introduce the notion of \textit{orthoplicial trinity}. Then, we compute the symmetrized Apollonian group and use it to give an alternative matrix representation of the orthoplicial Apollonian group.\\
Finally, in Section \ref{sec:aposections}, we focus our attention on integral packings. After introducing the notion of \textit{Apollonian sections} of Apollonian clusters, we prove the main theorem. We end with some conjectures and concluding remarks.
\section{Preliminaries on polytopal $d$-ball packings.}\label{sec:regularpol}
The construction given in \cite{RR21_1} to obtain $d$-ball packings from $(d+1)$-polytopes is described as follows. Take a $(d+1)$-polytope $\mathcal{P}$ whose vertices lie outside the unit sphere of the Euclidean space $\mathbb E^{d+1}$. Then, put at each vertex of $\mathcal{P}$ a ``light source'' illuminating the sphere. By projecting stereographically the illuminated areas, one obtains a $d$-ball arrangement in $\widehat{\mathbb{R}^d}$ called the \textit{ball-arrangement projection} of $\mathcal{P}$, denoted by $\mathsf B(\mathcal{P})$. If $\mathcal{P}$ is edge-scribed, that is, with every edge tangent to the sphere, then $\mathsf B(\mathcal{P})$ is a $d$-ball packing. In this case, every $d$-ball packing $\mathcal{B}_\mathcal{P}$ equivalent to $\mathsf B(\mathcal{P})$ under Möbius transformations is said to be \textit{polytopal} and $\mathcal{P}$ is called the \textit{tangency polytope} of $\mathcal{B}_\mathcal{P}$. The tangency graph of $\mathcal{B}_\mathcal{P}$ corresponds to the $1$-skeleton of $\mathcal{P}$ \cite{chen2016}. We notice that there are $d$-ball packings which are not polytopal \cite{RR21_1}.\\
One of the main features of polytopal $d$-ball packings is that they admit a consistent definition of duality in every dimension, contrary to general $d$-ball packings, where the combinatorics is given by a graph or a simplicial complex. If $\mu$ is the Möbius transformation sending $\mathsf B(\mathcal{P})$ to $\mathcal{B}_\mathcal{P}$, then the \textit{dual} of $\mathcal{B}_\mathcal{P}$, denoted by $\mathcal{B}_\mathcal{P}^*$, is given by $\mu(\mathsf B(\mathcal{P}^*))$, where $\mathcal{P}^*$ denotes the polar of $\mathcal{P}$. Duality is then needed to generalize the \textit{Apollonian group} and its variants to every polytopal $d$-ball packing (see \cite{RR21_1} for more details).\\
We say that a polytopal $d$-ball packing is \textit{regular} if its tangency polytope is regular. Let us recall the list of regular polytopes in every dimension. All $2$-polytopes (polygons) admit a regular realization. The \textit{Platonic solids}, namely, the tetrahedron $\mathcal{T}^3$, the octahedron $\mathcal{O}^3$, the cube $\mathcal{C}^3$, the icosahedron $\mathcal{I}^3$ and the dodecahedron $\mathcal{D}^3$, are the five regular $3$-polytopes (polyhedra). The $4$-simplex $\mathcal{T}^4$, the orthoplex $\mathcal{O}^4$, the hypercube $\mathcal{C}^4$, the 600-cell $\mathcal{I}^4$ and the $120$-cell $\mathcal{D}^4$ are five regular $4$-polytopes, which can be thought of as $4$-dimensional analogues of the Platonic solids. The remaining regular $4$-polytope, the 24-cell $\mathcal{R}^4$ (the notation is not standard), completes the list of regular $4$-polytopes. For every $d\ge2$, we shall denote by $\mathcal{T}^{d+1}$, $\mathcal{O}^{d+1}$ and $\mathcal{C}^{d+1}$ the $(d+1)$-dimensional simplex, cross polytope and cube. It is well-known that in dimension 5 or above, these three families are the only regular polytopes \cite{coxeter1973regular}. The following table contains all the information needed in this paper about the regular $(d+1)$-polytopes for $d\ge1$.
\begin{table}[H]
\small
\centering
\begin{tabular}{ccccccccc}
$d$&Name&Notation& Schläfli symbol& Eigenvalues & Multiplicities&Midsphere ratio\\
\toprule
$d=1$&$p$-gon ($p\ge3$)&-&$\{p\}$&\multicolumn{2}{c}{Not Möbius unique for $p>3$}&$\tan(\pi/p)$\\
\midrule
\multirow{2}{*}{$d\ge1$} & \multirow{2}{*}{$(d+1)$-Simplex}&
\multirow{2}{*}{$\mathcal{T}^{d+1}$}&
\multirow{2}{*}{$\{\underbrace{3,\ldots,3}_{d-1},3\}$} &$-d$&$1$&\multirow{2}{*}{$\sqrt{\frac{d+2}{d}}$} \\
& & & & $2$&$d+1$\\
\hline
\multirow{6}{*}{$d\ge2$} &
\multirow{3}{*}{$(d+1)$-Cross polytope}
& \multirow{3}{*}{$\mathcal{O}^{d+1}$}
&\multirow{3}{*}{$\{\underbrace{3,\ldots,3}_{d-1},4\}$}
&$-2(d+1)$&$1$&\multirow{3}{*}{$1$} \\
& & & & $4$&$d+1$\\
& & & & $0$&$d$\\
\cmidrule{2-7}
&\multirow{3}{*}{$(d+1)$-Cube}
&\multirow{3}{*}{$\mathcal{C}^{d+1}$}
&\multirow{3}{*}{$\{4,\underbrace{3,\ldots,3}_{d-1}\}$}
&$-2^{d+1}d$&$1$&\multirow{3}{*}{$d^{-1/2}$} \\
& & & & $2^{d+1}$&$d+1$\\
& & & & $0$&$2^{d+1}-d-2$\\
\hline
\multirow{6}{*}{$d=2$}
&\multirow{3}{*}{Icosahedron}
&\multirow{3}{*}{$\mathcal{I}^3$}
&\multirow{3}{*}{$\{3,5\}$}
&$-12\varphi^2$&$1$&\multirow{3}{*}{$\varphi^{-1}$} \\
& & & & $4(\varphi^2+1)$&$3$\\
& & & & $0$ &$8$\\
\cmidrule{2-7}
&\multirow{3}{*}{Dodecahedron}
&\multirow{3}{*}{$\mathcal{D}^3$} &\multirow{3}{*}{$\{5,3\}$}&$-20\varphi^4$&$1$ &\multirow{3}{*}{$\varphi^{-2}$}\\
& && & $20\varphi^2$&$3$\\
& & & & $0$ &$16$\\
\hline
\multirow{6}{*}{$d=3$}
&\multirow{3}{*}{$24$-cell}
& \multirow{3}{*}{$\mathcal{R}^4$}&\multirow{3}{*}{$\{3,4,3\}$} &$-72$&$1$&\multirow{3}{*}{$3^{-1/2}$} \\
& & & & $24$&$4$\\
& & & & $0$&$19$\\
\cmidrule{2-7}
&$600$-cell&
$\mathcal{I}^4$&
$\{3,3,5\}$&
\multicolumn{2}{c}{Möbius unique?}&$5^{-1/4}\varphi^{-3/2}$\\
\cmidrule{2-7}
&$120$-cell&
$\mathcal{D}^4$&
$\{5,3,3\}$&
\multicolumn{2}{c}{Möbius unique?}&$3^{-1/2}\varphi^{-3}$
\end{tabular}
\caption{Notations, Möbius spectra and midsphere ratio of regular $(d+1)$-polytopes for $d\ge1$.}
\label{tab:mobspec}
\end{table}
For the latter computations on the space of $d$-balls, we shall use the inversive coordinates and the inversive product of Wilker \cite{wilker}. We refer the reader to \cite[Section 2]{RR21_1} and the references therein for more details about how to compute centers, curvatures and inversions of $d$-balls in inversive coordinates. We shall refer to polytopal $2$-ball (resp. $3$-ball) packings as polytopal disk (resp. sphere) packings.
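For illustration, one common normalization of the inversive product of two balls with centers $z_i$ and radii $r_i$ is $\langle b_1,b_2\rangle = (|z_1-z_2|^2-r_1^2-r_2^2)/(2r_1r_2)$; the sign convention here is our own choice and may differ from that of \cite{wilker,RR21_1}. A minimal sketch:

```python
import numpy as np

def inversive_product(z1, r1, z2, r2):
    """Inversive product of two balls in one common sign convention:
    1 for externally tangent balls, 0 for orthogonal ones, -1 for equal balls."""
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    return (np.sum((z1 - z2)**2) - r1**2 - r2**2) / (2.0 * r1 * r2)

# two externally tangent unit circles, and an orthogonal pair
tangent = inversive_product([0.0, 0.0], 1.0, [2.0, 0.0], 1.0)
orthogonal = inversive_product([0.0, 0.0], 1.0, [2.0**0.5, 0.0], 1.0)
```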
\newpage
\subsection{CBP projections of $4$-polytopes}
For every $d\ge1$, a \textit{Centered Ball Packing projection} of a regular $(d+1)$-polytope $\mathcal{P}$ is a polytopal $d$-ball packing obtained by the ball-arrangement projection of an edge-scribed regular realization of $\mathcal{P}$ containing an $i$-face $f_i$ centered on the ray going from the origin to the North Pole of $\mathbb S^d$ ($\infty$ in $\widehat{\mathbb{R}^d}$).
Such a packing will be called an \textit{$i$-CBP projection} of $\mathcal{P}$. The CBP projections of the Platonic solids were presented in \cite{RR21_1}. We show in Figures \ref{fig:CBP1} and \ref{fig:CBP2} the CBP projections of the six regular $4$-polytopes.\\
The curvatures in each layer in a $i$-CBP projection can be expressed as a linear combination of two numbers $\kappa_\mathcal{P}$ and $h_i$, where $\kappa_\mathcal{P}$ is the mean of all the curvatures, and $h_i$ is the minimum of the positive \textit{heights} (in the direction of the vector $(0,0,0,1)$), among all the vertices of $\mathcal{P}$. We notice that $\kappa_\mathcal{P}=\ell_\mathcal{P}^{-1}$ where $\ell_\mathcal{P}$ is the midsphere ratio of $\mathcal{P}$ (see \cite[Lemma 3]{RR21_1}).
\vspace{-.5cm}
\begin{figure}[H]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
&Vertex centered at $\infty$
&Edge centered at $\infty$
&Ridge centered at $\infty$
&Facet centered at $\infty$\\
\toprule
\begin{tabular}[c]{c}
$\mathcal{T}^4$\\
$4$-Simplex\\
$\{3,3,3\}$\\
\end{tabular}&
\includegraphics[align=c,scale=.25]{img/projections/VC5simplex} &
\includegraphics[align=c,scale=.3]{img/projections/EC5simplex} &
\includegraphics[align=c,scale=.28]{img/projections/RC5simplex} &
\includegraphics[align=c,scale=.28]{img/projections/FC5simplex}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=\sqrt{3/5}$\\
$h_0=\sqrt{1/10}$\\
$h_1=\sqrt{2/15}$\\
$h_2=\sqrt{2/15}$ \\
$h_3=\sqrt{1/10}$
\end{tabular}&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
$1$&1&$\kappa_\mathcal{P}-4 h_0$\\
$2$&4&$\kappa_\mathcal{P}+ h_0$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
$1$&2 &$\kappa_\mathcal{P}-\frac{3}{2} h_1$\\
$2$&3 &$\kappa_\mathcal{P}+ h_1$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&3&$\kappa_\mathcal{P}- h_2$ \\
2&2&$\kappa_\mathcal{P}+\frac{3}{2} h_2$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&4&$\kappa_\mathcal{P}- h_3$\\
2&1&$\kappa_\mathcal{P}+4 h_3$
\end{tabular}\\
\hline\\[-.2cm]
\begin{tabular}[c]{c}
$\mathcal{O}^4$\\
Orthoplex\\
$\{3,3,4\}$\\
\end{tabular}
&
\includegraphics[align=c,scale=.25]{img/projections/VCorthoplex.png} &
\includegraphics[align=c,scale=.3]{img/projections/ECorthoplex.png} &
\includegraphics[align=c,scale=.28]{img/projections/RCorthoplex.png} &
\includegraphics[align=c,scale=.28]{img/projections/FCorthoplex.png}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=1$\\
$h_0=\sqrt2$\\
$h_1=1$\\
$h_2=\sqrt{2/3}$\\
$h_3=\sqrt{1/2}$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$& Curvatures\\
\hline
1&1&$\kappa_\mathcal{P}-h_0$\\
2&4&$\kappa_\mathcal{P}$\\
3&1&$\kappa_\mathcal{P}+h_0$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1& 2 &$\kappa_\mathcal{P}- h_1$\\
2& 4 &$\kappa_\mathcal{P}$\\
3& 2 &$\kappa_\mathcal{P}+ h_1$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&3&$\kappa_\mathcal{P}- h_2$ \\
2&2&$\kappa_\mathcal{P}$\\
3&3&$\kappa_\mathcal{P}+ h_2$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&4&$\kappa_\mathcal{P}- h_3$\\
2&4&$\kappa_\mathcal{P}+ h_3$
\end{tabular}\\
\hline \\[-.2cm]
\begin{tabular}[c]{c}
$\mathcal{C}^4$\\
Hypercube\\
$\{4,3,3\}$\\
\end{tabular}
&
\includegraphics[align=c,scale=.25]{img/projections/VChypercube.png} &
\includegraphics[align=c,scale=.18]{img/projections/EChypercube.png} &
\includegraphics[trim=10 0 10 0, clip,align=c,scale=.3]{img/projections/RChypercube.png} &
\includegraphics[align=c,scale=.25]{img/projections/FChypercube.png}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=\sqrt{3}$\\
$h_0=1$\\
$h_1=\sqrt{1/3}$\\
$h_2=\sqrt2$\\
$h_3=1$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&1&$\kappa_\mathcal{P}-2 h_0$\\
2&4&$\kappa_\mathcal{P}- h_0$\\
3&6&$\kappa_\mathcal{P}$\\
4&4&$\kappa_\mathcal{P}+ h_0$\\
5&1&$\kappa_\mathcal{P}+2 h_0$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1& 2 &$\kappa_\mathcal{P}-3 h_1$\\
2& 6 &$\kappa_\mathcal{P}- h_1$\\
3& 6 &$\kappa_\mathcal{P}+ h_1$\\
4& 2 &$\kappa_\mathcal{P}+3 h_1$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&4&$\kappa_\mathcal{P}- h_2$ \\
2&8&$\kappa_\mathcal{P}$\\
3&4&$\kappa_\mathcal{P}+ h_2$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&8&$\kappa_\mathcal{P}- h_3$\\
2&8&$\kappa_\mathcal{P}+ h_3$
\end{tabular}
\end{tabular}
\end{center}
\caption{CBP projections of the $4$-simplex, orthoplex and hypercube.}
\label{fig:CBP1}
\end{figure}
\begin{figure}[H]
\begin{center}
\tiny
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{0.9}
\begin{tabular}{lcccc}
\midrule
\begin{tabular}[c]{c}
\scriptsize
$\mathcal{R}^4$\\
24-cell\\
$\{3,4,3\}$\\
\end{tabular}
&
\includegraphics[align=c,scale=.25]{img/projections/VC24cell.png} &
\includegraphics[align=c,scale=.3]{img/projections/EC24cell.png} &
\includegraphics[align=c,scale=.28]{img/projections/RC24cell.png} &
\includegraphics[trim=10 10 10 15,clip,align=c,scale=.28]{img/projections/FC24cell.png}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=\sqrt{3}$\\
$h_0=1$\\
$h_1=\sqrt{1/3}$\\
$h_2=\sqrt{2/3}$\\
$h_3=\sqrt{2}$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&1&$\kappa_\mathcal{P}-2 h_0$\\
2&8&$\kappa_\mathcal{P}- h_0$\\
3&6&$\kappa_\mathcal{P}$\\
4&8&$\kappa_\mathcal{P}+ h_0$\\
5&1&$\kappa_\mathcal{P}+2 h_0$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer& $n$&Curvatures\\
\hline
1&2 &$\kappa_\mathcal{P}-3 h_1$\\
2&3 &$\kappa_\mathcal{P}-2 h_1$\\
3&6 &$\kappa_\mathcal{P}- h_1$\\
4&2 &$\kappa_\mathcal{P}$\\
5&6 &$\kappa_\mathcal{P}+ h_1$\\
6&3 &$\kappa_\mathcal{P}+2 h_1$\\
7&2 &$\kappa_\mathcal{P}+3 h_1$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&3&$\kappa_\mathcal{P}-2 h_2$ \\
2&6&$\kappa_\mathcal{P}- h_2$\\
3&6&$\kappa_\mathcal{P}$\\
4&6&$\kappa_\mathcal{P}+ h_2$\\
5&3&$\kappa_\mathcal{P}+2 h_2$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer& $n$&Curvatures\\
\hline
1&6&$\kappa_\mathcal{P}- h_3$\\
2&12&$\kappa_\mathcal{P}$\\
3&6&$\kappa_\mathcal{P}+ h_3$
\end{tabular} \\
\midrule
\multicolumn{1}{c}{
\scriptsize
\begin{tabular}[c]{c}
$\mathcal{I}^4$\\
600-cell\\
$\{3,3,5\}$\\
\end{tabular}
}
&
\includegraphics[align=c,scale=.25]{img/projections/VC600cell.png} &
\includegraphics[align=c,scale=.33]{img/projections/EC600cell.png} &
\includegraphics[align=c,scale=.28]{img/projections/RC600cell.png} &
\includegraphics[trim=10 10 10 30,clip, align=c,scale=.28]{img/projections/FC600cell.png}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=\sqrt{5}\varphi^{3/2}$\\
$h_0=1$\\
$h_1=(\varphi^2+1)^{-\frac12}$\\
$h_2=(\varphi^4+1)^{-\frac12}$\\
$h_3=(\varphi^3+1)^{-\frac12}$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&Curvatures\\
\hline
1&1&$\kappa_\mathcal{P}-2\varphi h_0$\\
2&12&$\kappa_\mathcal{P}-\varphi^2 h_0$\\
3&20&$\kappa_\mathcal{P}-\varphi h_0$\\
4&12&$\kappa_\mathcal{P}- h_0$\\
5&30&$\kappa_\mathcal{P}$\\
6&12&$\kappa_\mathcal{P}+ h_0$\\
7&20&$\kappa_\mathcal{P}+\varphi h_0$\\
8&12&$\kappa_\mathcal{P}+\varphi^2 h_0$\\
9&1&$\kappa_\mathcal{P}+2\varphi h_0$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer& $n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&2 &$\kappa_\mathcal{P}-(\varphi^3+\varphi) h_1$\\
2&5 &$\kappa_\mathcal{P}-(\varphi^3+1) h_1$\\
3&10&$\kappa_\mathcal{P}-\varphi^3 h_1$\\
4&2 &$\kappa_\mathcal{P}-(\varphi^{2}+1) h_1$\\
5&5 &$\kappa_\mathcal{P}-(\varphi^{3}-1) h_1$\\
6&10&$\kappa_\mathcal{P}-\varphi^{2} h_1$\\
7&10&$\kappa_\mathcal{P}-\varphi h_1$\\
8&10&$\kappa_\mathcal{P}- h_1$\\
9&12&$\kappa_\mathcal{P}$\\
10&10&$\kappa_\mathcal{P}+ h_1$\\
$\vdots$&$\vdots$ & $\vdots$\\
17& 2 &$\kappa_\mathcal{P}+(\varphi^3+\varphi) h_1$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&3 &$\kappa_\mathcal{P}-(\varphi^4+\varphi) h_2$\\
2&2 &$\kappa_\mathcal{P}-(\varphi^4+1) h_2$\\
3&6 &$\kappa_\mathcal{P}-\varphi^4 h_2$\\
4&6 &$\kappa_\mathcal{P}-(\varphi^3+\varphi) h_2$\\
5&6 &$\kappa_\mathcal{P}-(\varphi^3+1)\varphi h_2$\\
6&6 &$\kappa_\mathcal{P}-\varphi^3 h_2$\\
7&3 &$\kappa_\mathcal{P}-(\varphi^3-1) h_2$\\
8&12&$\kappa_\mathcal{P}-\varphi^2 h_2$\\
9&6 &$\kappa_\mathcal{P}-\varphi h_2$\\
10&6 &$\kappa_\mathcal{P}- h_2$\\
11&8 &$\kappa_\mathcal{P}$\\
12&6 &$\kappa_\mathcal{P}+ h_2$\\
$\vdots$&$\vdots$ & $\vdots$\\
21&3 &$\kappa_\mathcal{P}+(\varphi^4+\varphi) h_2$\\[.1cm]
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&4 &$\kappa_\mathcal{P}-\varphi^4 h_3$\\
2&4 &$\kappa_\mathcal{P}-(\varphi^3+\varphi) h_3$\\
3&6 &$\kappa_\mathcal{P}-(\varphi^3+1) h_3$\\
4&12&$\kappa_\mathcal{P}-\varphi^3 h_3$\\
5&12&$\kappa_\mathcal{P}-\varphi^2 h_3$\\
6&12&$\kappa_\mathcal{P}-\varphi h_3$\\
7&4 &$\kappa_\mathcal{P}- h_3$\\
8&12&$\kappa_\mathcal{P}$\\
9&4 &$\kappa_\mathcal{P}+ h_3$\\
$\vdots$&$\vdots$ & $\vdots$\\
15&4 &$\kappa_\mathcal{P}+\varphi^4 h_3$\\
\end{tabular}\\
\hline\\[-.2cm]
\multicolumn{1}{c}{
\begin{tabular}[c]{c}
\scriptsize
$\mathcal{D}^4$\\
120-cell\\
$\{5,3,3\}$\\
\end{tabular}
}
&
\includegraphics[align=c,scale=.25]{img/projections/VC120cell.png} &
\includegraphics[trim=0 10 0 20, clip, align=c,scale=.23]{img/projections/EC120cell.png} &
\includegraphics[align=c,scale=.28]{img/projections/RC120cell.png} &
\includegraphics[align=c,scale=.26]{img/projections/FC120cell.png}\\
\begin{tabular}[t]{l}
$\kappa_\mathcal{P}=\varphi^3\sqrt{3}$\\
$h_0=\sqrt{1/2}$\\
$h_1=(\varphi^4+1)^{-\frac12}$ \\
$h_2=(\varphi^2+1)^{-\frac12}$\\
$h_3=1$
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
\tiny
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1& 1 & $\kappa_\mathcal{P}-\left(\varphi ^5-\varphi^{-1}\right) h_0$\\
2& 4 & $\kappa_\mathcal{P}-\left(\varphi ^5-1 \right) h_0$\\
3& 12 & $\kappa_\mathcal{P}-\left(\varphi ^4+\varphi ^2\right) h_0$\\
4& 24 & $\kappa_\mathcal{P}-\left(\varphi ^4+\varphi\right) h_0$\\
5& 12 & $\kappa_\mathcal{P}-\left(\varphi ^4+1 \right) h_0$\\
6& 4 & $\kappa_\mathcal{P}-\left(\varphi ^4+\varphi^{-1}\right) h_0$\\
7& 24 & $\kappa_\mathcal{P}-\varphi ^4 h_0$\\
8& 24 & $\kappa_\mathcal{P}-\left(\varphi ^3+\varphi\right) h_0$\\
9& 32 & $\kappa_\mathcal{P}-\left(\varphi ^3+1 \right) h_0$\\
10& 24 & $\kappa_\mathcal{P}-\varphi ^3 h_0$\\
11& 12 & $\kappa_\mathcal{P}-\left(\varphi ^2+1 \right) h_0$\\
12& 24 & $\kappa_\mathcal{P}-\left(\varphi ^2+\varphi^{-1}\right) h_0$\\
13& 28 & $\kappa_\mathcal{P}-\varphi ^2 h_0$\\
15& 24 & $\kappa_\mathcal{P}-\varphi h_0$\\
16& 24 & $\kappa_\mathcal{P}- h_0$\\
17& 54 & $\kappa_\mathcal{P} $\\
18& 24 & $\kappa_\mathcal{P}+ h_0$\\
$\vdots$&$\vdots$ & $\vdots$\\
33& 1 & $\kappa_\mathcal{P}+\left(\varphi ^5-\varphi^{-1}\right) h_0$\\
\end{tabular}
&
\tiny
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&2 & $\kappa_\mathcal{P}-(\varphi ^6+\varphi ^2) h_1$\\
2& 6 & $\kappa_\mathcal{P}-(\varphi ^6+\varphi ) h_1$ \\
3& 3 & $\kappa_\mathcal{P}-(\varphi ^6+1) h_1$ \\
4& 12 & $\kappa_\mathcal{P}-\varphi ^6 h_1$ \\
5& 6 & $\kappa_\mathcal{P}-(\varphi ^6-1) h_1$ \\
6& 12 & $\kappa_\mathcal{P}-(\varphi ^6-\varphi ) h_1$ \\
7& 18 & $\kappa_\mathcal{P}-(\varphi ^5+\varphi ^3) h_1$ \\
8& 12 & $\kappa_\mathcal{P}-(\varphi ^5+\varphi ^2) h_1$ \\
9& 14 & $\kappa_\mathcal{P}-(\varphi ^5+\varphi ) h_1$ \\
10& 12 & $\kappa_\mathcal{P}-(\varphi ^5+1) h_1$ \\
11& 18 & $\kappa_\mathcal{P}-\varphi ^5 h_1$ \\
12& 6 & $\kappa_\mathcal{P}-(\varphi ^5-1) h_1$ \\
13& 24 & $\kappa_\mathcal{P}-(\varphi ^4+\varphi ^2) h_1$ \\
14& 15 & $\kappa_\mathcal{P}-(\varphi ^4+\varphi ) h_1$ \\
15& 2 & $\kappa_\mathcal{P}-(\varphi ^4+1) h_1$ \\
16& 24 & $\kappa_\mathcal{P}-\varphi ^4 h_1$ \\
17& 18 & $\kappa_\mathcal{P}-(\varphi ^3+\varphi ) h_1$ \\
18& 12 & $\kappa_\mathcal{P}-(\varphi ^3+1) h_1$ \\
19& 18 & $\kappa_\mathcal{P}-\varphi ^3 h_1$ \\
20& 24 & $\kappa_\mathcal{P}-\varphi ^2 h_1$ \\
21& 18 & $\kappa_\mathcal{P}-\varphi h_1$ \\
22& 12 & $\kappa_\mathcal{P}- h_1$ \\
23& 24 & $\kappa_\mathcal{P}$ \\
24& 12 & $\kappa_\mathcal{P}+ h_1$ \\
$\vdots$&$\vdots$ & $\vdots$\\
47& 2 & $\kappa_\mathcal{P}+(\varphi ^6+\varphi ^2) h _1$\\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&5 & $\kappa_\mathcal{P}-(\varphi ^5+\varphi ^2) h_2$ \\
2&10 & $\kappa_\mathcal{P}-(\varphi ^5+\varphi ) h_2$ \\
3&10 & $\kappa_\mathcal{P}-(\varphi ^5+1) h_2$ \\
4&20 & $\kappa_\mathcal{P}-\varphi ^5 h_2$ \\
5&10 & $\kappa_\mathcal{P}-(\varphi ^5-1) h_2$ \\
6&20 & $\kappa_\mathcal{P}-(\varphi ^4+\varphi ^2) h_2$ \\
7&20 & $\kappa_\mathcal{P}-(\varphi ^4+\varphi ) h_2$ \\
8&10 & $\kappa_\mathcal{P}-(\varphi ^4+1) h_2$ \\
9&30 & $\kappa_\mathcal{P}-\varphi ^4 h_2$ \\
10&20 & $\kappa_\mathcal{P}-(\varphi ^3+\varphi ) h_2$ \\
11&20 & $\kappa_\mathcal{P}-(\varphi ^3+1) h_2$ \\
12&30 & $\kappa_\mathcal{P}-\varphi ^3 h_2$ \\
13&5 & $\kappa_\mathcal{P}-(\varphi ^3-1) h_2$ \\
14&30 & $\kappa_\mathcal{P}-\varphi ^2 h_2$ \\
15&30 & $\kappa_\mathcal{P}-\varphi h_2$ \\
16&20 & $\kappa_\mathcal{P}- h_2$ \\
17&20 & $\kappa_\mathcal{P}$ \\
18&20 & $\kappa_\mathcal{P}+ h_2$ \\
$\vdots$&$\vdots$ & $\vdots$\\
35&5 & $\kappa_\mathcal{P}+(\varphi ^5+\varphi ^2) h_2$ \\
\end{tabular}
&
\begin{tabular}[t]{c|c|l}
Layer&$n$&\multicolumn{1}{c}{Curvatures} \\
\hline
1&20 & $\kappa_\mathcal{P}-\varphi ^4 h_3$\\
2&20 & $\kappa_\mathcal{P}-(\varphi^3+\varphi) h_3 $\\
3&30 & $\kappa_\mathcal{P}-(\varphi^3+1) h_3 $\\
4&60 & $\kappa_\mathcal{P}-\varphi ^3 h_3$\\
5&60 & $\kappa_\mathcal{P}-\varphi ^2 h_3 $\\
6&60 & $\kappa_\mathcal{P}-\varphi h_3$\\
7&20 & $\kappa_\mathcal{P}- h_3 $\\
8&60 & $\kappa_\mathcal{P} $\\
9&20 & $\kappa_\mathcal{P}+ h_3 $\\
$\vdots$&$\vdots$ & $\vdots$\\
17&20 & $\kappa_\mathcal{P}+\varphi ^4 h_3 $\\
\end{tabular}
\end{tabular}
\end{center}
\vspace{-.3cm}
\caption{CBP projections of the 24-cell, 600-cell and 120-cell.}
\label{fig:CBP2}
\end{figure}
\newpage
\subsection{Möbius unicity of regular polytopes}
An edge-scribable $(d+1)$-polytope $\mathcal{P}$ is said to be \textit{Möbius unique} if the ball-arrangement projections of all its edge-scribed realizations are \textit{Möbius equivalent}, i.e. connected by a Möbius transformation. Equivalently, $\mathcal{P}$ is Möbius unique if all its edge-scribed realizations are connected by a projective transformation which preserves the unit sphere of $\mathbb{E}^{d+1}$. As noticed in \cite{RR21_1}, we have that:
\begin{enumerate}[label=\arabic*.]
\item The $2$-simplex is the only Möbius unique $2$-polytope \cite[Corollary 2.3]{RR21_1}.
\item All the $3$-polytopes are Möbius unique (by the Midsphere theorem of Brightwell and Scheinerman \cite{bright-sch}).
\item For every $d\ge 2$, the $d$-simplex and the $(d+1)$-cube are Möbius unique \cite[Corollaries 2.2 and 2.4]{RR21_1}.
\end{enumerate}
Here we shall show that the 24-cell and the family of cross polytopes, in dimension $3$ or above, are also Möbius unique. For the proof of the Möbius unicity of the cross polytopes, we need the following proposition given in \cite{RR21_1}. We recall that a $d$-ball packing has \textit{maximal rank} if the rank of its Gramian is $d+2$. In particular, polytopal $d$-ball packings always have maximal rank.
\begin{prop}[\cite{RR21_1}]\label{thm:gram}
Let $\mathcal{B}$ and $\mathcal{B}'$ be two $d$-ball packings with maximal rank. Then $\mathcal{B}$ is Möbius equivalent to $\mathcal{B}'$ if and only if $\Gram(\mathcal{B})=\Gram(\mathcal{B}')$.
\end{prop}
\begin{thm}\label{thm:octmobunique}
For every $d\ge2$, the $(d+1)$-octahedron is Möbius unique.
\end{thm}
\begin{proof}
We proceed similarly to the proof of the Möbius unicity of the $(d+1)$-cube in \cite{RR21_1}. Let $d\ge2$ and let $\mathcal{B}_{\mathcal{O}^{d+1}}$ be a polytopal $d$-ball packing where $\mathcal{O}^{d+1}$ is an edge-scribed realization of the $(d+1)$-octahedron.
We shall prove by induction on $d$, that for every two vertices $u$, $v$ of $\mathcal{O}^{d+1}$, we have
\begin{align}\label{eq:hoct}
\langle b_u,b_v \rangle = 1-2\mathbf{d_G}(u,v)
\end{align}
where $\mathbf{d_G}(u,v)$ is the distance between $u$ and $v$ in the $1$-skeleton of $\mathcal{O}^{d+1}$. This would imply that for any other polytopal $d$-ball packing $\mathcal{B}_{\mathcal{O}^{d+1}_2}$, where $\mathcal{O}^{d+1}_2$ is any other edge-scribed realization of the $(d+1)$-octahedron, we can find an ordering such that $\Gram(\mathcal{B}_{\mathcal{O}^{d+1}})=\Gram(\mathcal{B}_{\mathcal{O}^{d+1}_2})$. The Möbius unicity then follows from Proposition \ref{thm:gram}.\\
The case $d=2$ can be easily checked in a single octahedral disk packing. Since $3$-polytopes are Möbius unique, \eqref{eq:hoct} then holds for every edge-scribed realization of $\mathcal{O}^3$. Let us now suppose that \eqref{eq:hoct} holds for any edge-scribed realization of the $(d+1)$-octahedron for some $d\ge2$. Let $\mathcal B_{\mathcal{O}^{d+2}}$ be a polytopal $(d+1)$-ball packing where $\mathcal{O}^{d+2}\subset\mathbb{E}^{d+2}$ is an edge-scribed realization of the $(d+2)$-octahedron. We give the vertices of $\mathcal{O}^{d+2}$ an \textit{antipodal labelling} $$V(\mathcal{O}^{d+2})=\{v_1,\ldots,v_{d+2},v_{-1},\ldots,v_{-(d+2)}\}$$ where $v_i$ and $v_j$ are connected by an edge of $\mathcal{O}^{d+2}$ if and only if $j\not=-i$. For every $1\le i<j\le d+1$, we consider the following collection of $(d+1)$-balls
$$\mathcal{B}_{i,j}:=\{b_1,\ldots,b_{d+1},b_{-i},b_{-j}\}\subset\mathcal{B}_{\mathcal{O}^{d+2}} $$
where $b_k:=b_{v_k}$. Since $\mathcal{B}_{\mathcal{O}^{d+2}}$ is polytopal, the tangency graph of $\mathcal{B}_{\mathcal{O}^{d+2}}$ is the $1$-skeleton of $\mathcal{O}^{d+2}$. Therefore, $b_{d+2}$ and $b_{-(d+2)}$ are tangent to every $b_k\in\mathcal{B}_{i,j}$. Translating the tangency conditions with the inversive product (see \cite{RR20}), we have that $b_{d+2}$ and $b_{-(d+2)}$ satisfy
\begin{align}\label{eq:tancond}
\langle b_k, b \rangle=-1\quad \text{for every ${b_k}\in\mathcal{B}_{i,j}$},
\end{align}
and $b\in\{b_{d+2},b_{-(d+2)}\}$. In inversive coordinates, \eqref{eq:tancond} becomes the following linear system
\begin{align}\label{eq:tansys}
\mathbf B_{i,j} \mathbf Q_{d+3} \mathbf X = -\mathbf 1_{d+3}
\end{align}
where $\mathbf B_{i,j}$ is the matrix of the inversive coordinates of $\mathcal{B}_{i,j}$, $\mathbf Q_{d+3}$ is the matrix of the inversive product, $\mathbf X$ is a $(d+3)$-column vector, and $\mathbf 1_{d+3}$ is the $(d+3)$-column vector of only $1$'s. Since $b_{d+2}$ and $b_{-(d+2)}$ are distinct, \eqref{eq:tansys} has more than one solution. Therefore, $\mathbf B_{i,j}$ is singular, which implies that there is a hyperplane $H_{i,j}$ of $\mathbb E^{d+2}$ such that
$$V_{i,j}:=\{v_1,\ldots,v_{d+1},v_{-i},v_{-j}\}\subset H_{i,j}. $$
Moreover, since for every $1\le j'\le d+1$, the hyperplanes $H_{i,j}$ and $H_{i,j'}$ share $d+2$ points of $\mathbb E^{d+2}$, they must be the same hyperplane. Therefore, there is one hyperplane $H$ containing all the vertices $V(\mathcal{O}^{d+2})\setminus\{v_{d+2},v_{-(d+2)}\}$.\\
We thus can find a Möbius transformation $\mu\in\mathsf{M\ddot{o}b}(\mathbb{S}^{d+1})$ whose corresponding projective transformation sends $H$ to the hyperplane $\{x_{d+2}=0\}\subset\mathbb{E}^{d+2}$. After identifying $\mu(H)$ with $\mathbb{E}^{d+1}$ we obtain that $\mu(H\cap\mathcal{O}^{d+2})$ becomes an edge-scribed realization of the $(d+1)$-octahedron $\mathcal{O}^{d+1}$. The identification $\mu(H)\simeq\mathbb{E}^{d+1}$ preserves the inversive product of the $(d+1)$-balls corresponding to the points lying in $H$. Moreover, the distance between $u$ and $v$ in the graph of $\mathcal{O}^{d+2}$ is equal to the distance in the graph of $\mathcal{O}^{d+1}$. By the invariance of the inversive product under Möbius transformations and the induction hypothesis, we have that \eqref{eq:hoct} holds for any two $(d+1)$-balls of $\mathcal B_{\mathcal{O}^{d+2}}\setminus\{b_{d+2},b_{-(d+2)}\}$.\\
The same arguments work if we exchange $b_1$ with $b_{d+2}$ in $\mathcal{B}_{i,j}$, so \eqref{eq:hoct} holds in the remaining cases.
\end{proof}
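For $d=2$, relation \eqref{eq:hoct} can be checked numerically on an explicit octahedral disk packing. The configuration below (a disk of radius $\sqrt2-1$ at the origin, four unit disks around it, and the complement of the disk of radius $\sqrt2+1$) is an illustrative sketch of ours, not taken from the text, and it assumes the standard inversive coordinates $(\bar\kappa,\kappa,\kappa c)$ of an oriented disk with curvature $\kappa$, co-curvature $\bar\kappa$ and center $c$:

```python
import math

SQRT2 = math.sqrt(2.0)

def inv_coords(cx, cy, r):
    # inversive coordinates (co-curvature, curvature, k*cx, k*cy) of a disk
    k = 1.0 / r
    return ((cx * cx + cy * cy - r * r) / r, k, k * cx, k * cy)

def complement(b):
    # the complement of a disk has all four inversive coordinates negated
    return tuple(-t for t in b)

def inv_prod(a, b):
    # <a,b> = w . w' - (cocurv * curv' + curv * cocurv')/2
    return a[2] * b[2] + a[3] * b[3] - 0.5 * (a[0] * b[1] + a[1] * b[0])

# an octahedral disk packing, ordered so that disks i and i+3 correspond
# to antipodal vertices of the octahedron
disks = [
    inv_coords(0, 0, SQRT2 - 1),
    inv_coords(SQRT2, 0, 1),
    inv_coords(0, SQRT2, 1),
    complement(inv_coords(0, 0, SQRT2 + 1)),
    inv_coords(-SQRT2, 0, 1),
    inv_coords(0, -SQRT2, 1),
]

def graph_dist(i, j):
    # distance in the 1-skeleton of the octahedron
    return 0 if i == j else (2 if abs(i - j) == 3 else 1)

for i in range(6):
    for j in range(6):
        assert abs(inv_prod(disks[i], disks[j]) - (1 - 2 * graph_dist(i, j))) < 1e-12
```

All $36$ products agree with $1-2\mathbf{d_G}(u,v)$: $1$ on the diagonal, $-1$ for tangent pairs and $-3$ for the three antipodal pairs.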
\begin{prop}\label{thm:24mobunique}
The $24$-cell is Möbius unique.
\end{prop}
\begin{proof}
It is well-known that the $1$-skeleton of a 24-cell admits a $3$-coloring such that the vertices of each color span an orthoplex. The vertices of any two of the three colors span a hypercube. Moreover, the edges of this hypercube are also edges of the initial 24-cell. Therefore, every polytopal sphere packing whose tangency polytope is the $24$-cell contains a hypercubical sphere packing.\\
Let $\mathcal{B}_{\mathcal{R}^4}$ be the $0$-CBP projection of the 24-cell. The even layers of $\mathcal{B}_{\mathcal{R}^4}$ give a $3$-CBP projection $\mathcal{B}_{\mathcal{C}^4}$ of the hypercube (see Figure \ref{fig:CBP1}). Let $\mathcal{B}_{\mathcal{P}}$ be a polytopal sphere packing where $\mathcal{P}$ is another edge-scribed realization of the 24-cell. Let $\mathcal Q\subset\mathcal{P}$ be one of the hypercubes contained in $\mathcal{P}$. Since the hypercube is Möbius unique, there is a Möbius transformation $\mu$ sending the packing $\mathcal{B}_{\mathcal Q}$ to $\mathcal{B}_{\mathcal{C}^4}$. Since Möbius transformations preserve the tangency relation, the image under $\mu$ of every sphere $b\in \mathcal{B}_{\mathcal{P}}\setminus\mathcal{B}_{\mathcal Q}$ must be tangent to $6$ spheres of $\mathcal{B}_{\mathcal{C}^4}$ corresponding to the vertices of a facet of $\mathcal{C}^4$. This condition forces $\mu$ to send the spheres in $\mathcal{B}_{\mathcal{P}}\setminus\mathcal{B}_{\mathcal Q}$ to $\mathcal{B}_{\mathcal{R}^4}\setminus\mathcal{B}_{\mathcal{C}^4}$, implying that $\mathcal{B}_\mathcal{P}$ and $\mathcal{B}_{\mathcal{R}^4}$ are Möbius equivalent.
\end{proof}
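The $3$-coloring used in the proof can be verified combinatorially. The following sketch (our own consistency check, with the 24-cell realized by the permutations of $(\pm1,\pm1,0,0)$ and edges between vertices at distance $\sqrt2$) confirms that each color class is an independent set with four antipodal pairs, and that any two classes induce a $4$-regular, triangle-free subgraph on $16$ vertices, as expected for the graph of a hypercube:

```python
from itertools import combinations, product

# vertices of a 24-cell: the 24 vectors with two entries +-1 and two entries 0
verts = []
for i, j in combinations(range(4), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0, 0, 0, 0]
        v[i], v[j] = si, sj
        verts.append(tuple(v))
assert len(verts) == 24

def adjacent(u, v):
    # edges of this 24-cell connect vertices at distance sqrt(2)
    return sum((a - b) ** 2 for a, b in zip(u, v)) == 2

edges = {(u, v) for u in verts for v in verts if u != v and adjacent(u, v)}
assert all(sum(adjacent(u, v) for v in verts if v != u) == 8 for u in verts)
# the full 24-cell graph contains triangles...
assert any(adjacent(u, v) and adjacent(v, w) and adjacent(u, w)
           for u, v, w in combinations(verts, 3))

# colour a vertex by the pair-partition of {0,1,2,3} its support belongs to
PARTS = [({0, 1}, {2, 3}), ({0, 2}, {1, 3}), ({0, 3}, {1, 2})]
def colour(v):
    s = {i for i in range(4) if v[i] != 0}
    return next(c for c, p in enumerate(PARTS) if s in p)

classes = [[v for v in verts if colour(v) == c] for c in range(3)]
for cl in classes:
    assert len(cl) == 8
    # a colour class is an independent set with four antipodal pairs: it
    # spans an orthoplex whose edges are diagonals, not edges, of the 24-cell
    assert not any(adjacent(u, v) for u, v in combinations(cl, 2))
    assert all(tuple(-x for x in v) in cl for v in cl)

# ...while any two colour classes induce a 4-regular, triangle-free graph on
# 16 vertices whose edges are edges of the 24-cell (the hypercube graph)
for a, b in combinations(range(3), 2):
    cube = classes[a] + classes[b]
    nbrs = {u: [v for v in cube if v != u and adjacent(u, v)] for u in cube}
    assert all(len(nb) == 4 for nb in nbrs.values())
    assert not any(adjacent(v, w) for u in cube
                   for v, w in combinations(nbrs[u], 2))
```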
\subsection{Möbius spectra}
The \textit{Möbius spectra} of an edge-scribable Möbius unique $(d+1)$-polytope $\mathcal{P}$, introduced in \cite{RR21_1}, is a spectral invariant defined as the spectrum of the Gramian of the ball-arrangement projection of an edge-scribed realization of $\mathcal{P}$. We believe that Möbius spectra may shed light on the properties of edge-scribable polytopes. We show in Table \ref{tab:mobspec} the Möbius spectra of the regular polytopes that are known to be Möbius unique.
\subsection{Non edge-scribable 4-polytopes.}The first examples of $4$-polytopes not admitting an edge-scribed realization were given by Schulte in \cite{schulte87}. In \cite[Corollary 9]{eppstein2002}, Eppstein, Kuperberg and Ziegler extended the list by noticing that every stacked $4$-polytope with more than 6 vertices is not edge-scribable.
We recall that a \textit{stacked polytope} is a polytope obtained by consecutively applying connected sums of simplices, where the \textit{connected sum} of two polytopes $\mathcal{P}$ and $\mathcal{P}'$ along a facet $f$ is another polytope $\mathcal{P}\#_f\mathcal{P}'$ whose faces are the union of the faces of $\mathcal{P}$ and $\mathcal{P}'$ minus $f$. The main ingredient for the proof of this impossibility result is the following.
\begin{lem}\label{lem:impo}
The consecutive connected sum of three $4$-simplices is not edge-scribable.
\end{lem}
\begin{proof}
We consider the $4$-polytope $(\mathcal{T}\#_{f_1}\mathcal{T}')\#_{f_2}\mathcal{T}''$ where $\mathcal{T}$, $\mathcal{T}'$ and $\mathcal{T}''$ are three $4$-simplices. The facets $f_1$ and $f_2$ must intersect in a common ridge $r$ (a triangle) of $\mathcal{T}$, $\mathcal{T}'$ and $\mathcal{T}''$. We label the vertices by $V(r)=\{1,2,3\}$, $V(\mathcal{T})=\{1,2,3,4,5\}$, $V(\mathcal{T}')=\{1,2,3,5,6\}$ and $V(\mathcal{T}'')=\{1,2,3,6,7\}$. Let us suppose that $(\mathcal{T}\#_{f_1}\mathcal{T}')\#_{f_2}\mathcal{T}''$ admits an edge-scribed realization $\mathcal{P}$. By applying the ball-arrangement projection to $\mathcal{P}$ and then an inversion on a sphere centered at the tangency point of the spheres $b_1$ and $b_2$, we obtain a polytopal sphere packing $\mathcal{B}_\mathcal{P}$ Euclidean congruent to the packing depicted in Figure \ref{fig:impo}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1]
\clip (-4.5,-2) rectangle (4.5,2);
\node at (0,0) {\includegraphics[width=7cm]{img/imposibility.png}};
\node at (-2.5,1.5) {$b_1$};
\node at (-2.1,1.3) {$b_2$};
\node at (-1.3,-0.3) {$b_4$};
\node at (1.4,-0.3) {$b_7$};
\node at (3.4,-0.5) {$b_{f_0}$};
\node at (0,-0.3) {$b_3$};
\node at (0.7,0.7) {$b_6$};
\node at (-0.7,0.7) {$b_5$};
\end{tikzpicture}
\caption{The sphere packing $\mathcal{B}_{\mathcal{P}}$ obtained by inverting the ball-arrangement projection of a glueing of three $4$-simplices.}
\label{fig:impo}
\end{figure}
In this packing, the dual sphere $b_{f_0}$, corresponding to the facet $f_0$ of $\mathcal{T}$ with vertices $\{1,2,3,4\}$, cuts orthogonally the sphere $b_7$. This implies that the vertex $v_7$ of $\mathcal{T}''$ lies in the affine hull of $f_0$, so $f_0$ is not a face of $(\mathcal{T}\#_{f_1}\mathcal{T}')\#_{f_2}\mathcal{T}''$. This contradicts the condition on the set of faces in the definition of connected sum. Therefore, $(\mathcal{T}\#_{f_1}\mathcal{T}')\#_{f_2}\mathcal{T}''$ is not edge-scribable.
\end{proof}
Chen mentioned arguments similar to those above in \cite[Section 5]{chen2016}, by considering a $2$-CBP projection of the $4$-simplex instead of a $1$-CBP projection. A natural generalization of stacked polytopes are the \textit{stacked $\mathcal{P}$-polytopes}, introduced by Chen and Padrol in \cite{chenpadrol}, as polytopes obtained by connected sums of several copies of a given polytope $\mathcal{P}$. With the new results on Möbius unicity and the following lemma given in \cite{RR20}, we can generalize the construction of Eppstein, Kuperberg and Ziegler by applying the same arguments to the stacked $\mathcal{P}$-polytopes, where $\mathcal{P}=\mathcal{O}^4,\mathcal{C}^4,\mathcal{R}^4$. We recall that a $d$-ball packing $\mathcal B$ is said to be \textit{standard} if it contains the half-spaces $b_i=\{x_d\ge1\}$ and $b_j=\{x_d\le-1\}$, denoted by $[\mathcal B]^i_j$.
\begin{lem}{\normalfont\cite[Lemma 2]{RR20}} \label{lem:congru} Let $\mathcal{B}$ and $\mathcal{B}'$ be two $d$-ball packings with same tangency graph $G$ and let $ij$ be an edge of $G$. Then $\mathcal{B}$ and $\mathcal{B}'$ are Möbius equivalent if and only if $[\mathcal{B}]^{i}_j$ and $[\mathcal{B}']^{i}_j$ are Euclidean congruent.
\end{lem}
\begin{prop}\label{prop:nonedge} The following $4$-polytopes are not edge-scribable:
\begin{itemize}
\item The connected sum of two orthoplexes.
\item The consecutive connected sum of three hypercubes sharing a ridge.
\item The consecutive connected sum of three 24-cells sharing a ridge.
\end{itemize}
\end{prop}
\begin{proof}
We may apply the same arguments as those in the proof of Lemma \ref{lem:impo}.
Combining the Möbius unicity of the orthoplex, the hypercube and the 24-cell with Lemma \ref{lem:congru}, we have that, after applying the corresponding inversion and a proper rescaling, we must obtain a packing which is Euclidean similar to a glueing by reflections of $1$-CBP projections of the corresponding $4$-polytope. Then, as above, we would have a dual sphere cutting orthogonally spheres of the different components in the connected sum, and the same contradiction arises.
\end{proof}
\subsection{The Polytopal Descartes' Theorem. }\label{sec:descartesflag}
In \cite{RR21_1}, a generalization of the Descartes' theorem for regular polytopal $d$-ball packings was used to construct integral Apollonian packings of the Platonic solids. In this section, we restate this result with quadratic forms, which will be useful to obtain further consequences. Let us first define and recall the terminology needed for the rest of the paper.\\
We define the \textit{polytopal curvatures} of a polytopal $d$-ball packing $\mathcal{B}_\mathcal{P}$ as the real numbers
\begin{align}
\kappa_{f}:=\frac{1}{|V(f)|}\sum_{v\in V(f)}\kappa(b_v)
\end{align}
where $f$ is a face of $\mathcal{P}$, $V(f)$ is the set of vertices of $f$ and $\kappa(b_v)$ is the curvature of the $d$-ball $b_v\in\mathcal{B}_\mathcal{P}$. From the point of view of the polytope, these curvatures correspond to the \textit{Lorentzian curvatures} of the faces of $\mathcal{P}$, which were defined in \cite{RR21_1} by the following Lorentzian product:
\begin{align}\label{eq:curvasprod}
\kappa_{f}=-\langle \mathbf x_N,\mathbf x_f\rangle
\end{align}
where $\mathbf x_N$ is the vector $e_{d+1}+e_{d+2}\in\mathbb{L}^{d+1,1}$ and $\mathbf x_f$ is the Lorentzian barycenter of $f$, that is,
\begin{align}
\mathbf x_{f}:=\frac{1}{|V(f)|}\sum_{v\in V(f)}\mathbf x_{b_v}
\end{align}
where $\mathbf x_{b_v}$ is the Lorentzian vector corresponding to $b_v$. The polytopal curvature of a vertex $v$ is exactly the curvature of the corresponding $d$-ball $b_v$. \\
The next lemma applies to every centrally symmetric regular $(d+1)$-polytope. These are the $p$-gons with $p$ even when $d=1$, and all the regular polytopes not belonging to the simplex family when $d\ge2$. The case $d=2$ was also given in \cite{RR21_1}.
\begin{lem}[Antipodal relation]
Let $\mathcal{B}_\mathcal{P}$ be a polytopal $d$-ball packing where $\mathcal{P}$ is a regular edge-scribed $(d+1)$-polytope which is centrally symmetric. Then, for any two vertices $v$, $\bar v$ at maximal distance in the $1$-skeleton of $\mathcal{P}$, we have
\begin{align*}
\kappa_\mathcal{P}=\frac{\kappa_v+\kappa_{\bar v}}{2}.
\end{align*}
\end{lem}
\begin{proof}
Since $\mathcal{P}$ is centrally symmetric, $\frac12(\mathbf x_{b_v}+\mathbf x_{b_{\bar v}})$ is the Lorentzian barycenter of $\mathcal{P}$. The lemma follows from \eqref{eq:curvasprod} and linearity.
\end{proof}
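The antipodal relation can be read off directly from the curvature layers of the projection of the 24-cell tabulated in Figure \ref{fig:CBP2} (first data column, $\kappa_\mathcal{P}=\sqrt3$, $h_0=1$); a quick numerical check assuming only those layer data:

```python
import math

# curvature layers of the vertex-centred projection of the 24-cell:
# (number of spheres, curvature) for each layer, with kP = sqrt(3), h0 = 1
kP = math.sqrt(3)
layers = [(1, kP - 2), (8, kP - 1), (6, kP), (8, kP + 1), (1, kP + 2)]
assert sum(n for n, _ in layers) == 24

# the mean vertex curvature is the Lorentzian curvature of the polytope
mean = sum(n * k for n, k in layers) / 24
assert abs(mean - kP) < 1e-12

# antipodal layers (first/last, second/fourth, middle/middle) average to kP
for (n1, k1), (n2, k2) in zip(layers, layers[::-1]):
    assert n1 == n2 and abs((k1 + k2) / 2 - kP) < 1e-12
```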
We define the \textit{flag quadratic form} of a regular $(d+1)$-polytope $\mathcal{P}$ as
\begin{align}\label{eq:defflagf}
\Phi_{\mathcal{P}}(x_0,x_1,\ldots,x_{d+1}):=\sum_{i=0}^{d}\frac{(x_i-x_{i+1})^2}{L_\mathcal{P}(i)-L_\mathcal{P}(i+1)}+\frac{x_{d+1}^2}{L_\mathcal{P}(d+1)}
\end{align}
where
\begin{align*}
L_\mathcal{P}(i):=\left\lbrace
\begin{array}{cc}
-1 & i=0 \\
0 & i=1 \\
\ell_{f_i}^{-2} & \text{ if }2\le i\le d+1
\end{array}\right.
\end{align*}
where $\ell_{f_i}$ denotes the half edge-length of a regular edge-scribed realization of $f_i$. We call this value the \textit{midsphere ratio} of a regular polytope, since it can be obtained from any regular realization $\mathcal{P}$ (not necessarily edge-scribed) as the ratio between the half edge-length of $\mathcal{P}$ and the radius of the \textit{midsphere} (the sphere tangent to every edge) of $\mathcal{P}$. The midsphere ratios of every regular polytope are given in Table \ref{tab:mobspec} and were adapted from \cite{coxeter1973regular}. The flag quadratic forms of the $(d+1)$-simplex, the $(d+1)$-cross polytope and the $(d+1)$-cube, for every $d\ge1$, are
\begin{align}
\Phi_{\mathcal{T}^{d+1}}(x_0,\ldots,x_{d+1})&=-\sum_{i=0}^{d}{i+2\choose 2}(x_{i}-x_{i+1})^2+\frac{d+2}{d}x_{d+1}^2\\
\Phi_{\mathcal{O}^{d+1}}(x_0,\ldots,x_{d+1})&=-\sum_{i=0}^{d-1}{i+2\choose 2}(x_{i}-x_{i+1})^2-\frac{d+1}{2}(x_d-x_{d+1})^2+x_{d+1}^2\\
\Phi_{\mathcal{C}^{d+1}}(x_0,\ldots,x_{d+1})&=-\sum_{i=0}^{d}(x_{i}-x_{i+1})^2+\frac{1}{d}x_{d+1}^2.
\end{align}
In terms of the flag quadratic form, the polytopal generalization of the Descartes' Theorem given in \cite[Theorem 4.1]{RR21_1} is restated as follows.
\begin{thm}[Polytopal Descartes' Theorem, \cite{RR21_1}]\label{thm:regdescartes}
For $d\ge1$, let $\mathcal B_\mathcal{P}$ be a polytopal $d$-ball packing where $\mathcal{P}$ is a regular edge-scribed $(d+1)$-polytope. Then, for any flag $(v,\ldots,f,\mathcal{P})$, the polytopal curvatures $\kappa_v,\ldots,\kappa_f,\kappa_\mathcal{P}$ of $\mathcal B_\mathcal{P}$ satisfy
\begin{align}\label{eq:descartesflageq}
\Phi_\mathcal{P}(\kappa_v,\ldots,\kappa_f,\kappa_\mathcal{P})=0.
\end{align}
\end{thm}
\begin{cor}\label{cor:glueingpol} Let $\mathcal{B}_{\mathcal{P}^+}$ and $\mathcal{B}_{\mathcal{P}^-}$ be two regular polytopal $d$-ball packings where one is obtained from the other by the inversion on a dual $d$-ball $b_f$. Then,
\begin{align}\label{eq:glueingpol}
\kappa_{\mathcal{P}^{\pm}}=\left(\frac{\ell_f}{\ell_\mathcal{P}}\right)^2\kappa_f\pm\ell_\mathcal{P}^{-2}\sqrt{\left(\ell_f^2-\ell_\mathcal{P}^2\right)\Phi_f(\kappa_v,\ldots,\kappa_f)}.
\end{align}
\end{cor}
\begin{proof}
It follows from the definition \eqref{eq:defflagf} that
\begin{align}
\Phi_{\mathcal{P}}(x_0,x_1,\ldots,x_{d+1})=\Phi_{f}(x_0,x_1,\ldots,x_{d})-\frac{\left(\ell_{f}^2 x_d -\ell_\mathcal{P}^2x_{d+1} \right){}^2}{\ell_{f}^2-\ell_\mathcal{P}^2}
\end{align}
By combining this with \eqref{eq:descartesflageq} and then solving for $\kappa_\mathcal{P}$ we obtain \eqref{eq:glueingpol}.
\end{proof}
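The identity used in this proof can be spot-checked on the cube family, for which the flag form and the midsphere ratios are explicit; the sketch below assumes $\ell_{\mathcal{C}^k}^2=1/(k-1)$ (computed from the edge-scribed realization) and compares both sides on random inputs:

```python
import random

def phi_cube(x):
    # flag quadratic form of the (d+1)-cube, with d = len(x) - 2
    d = len(x) - 2
    return -sum((x[i] - x[i + 1]) ** 2 for i in range(d + 1)) + x[-1] ** 2 / d

random.seed(1)
for d in range(2, 7):
    # squared half edge-lengths of the edge-scribed facet C^d and of C^(d+1)
    lf2, lP2 = 1 / (d - 1), 1 / d
    for _ in range(20):
        x = [random.uniform(-2, 2) for _ in range(d + 2)]
        rhs = phi_cube(x[:-1]) - (lf2 * x[-2] - lP2 * x[-1]) ** 2 / (lf2 - lP2)
        assert abs(phi_cube(x) - rhs) < 1e-9
```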
Let us explain in detail why Theorem \ref{thm:regdescartes} generalizes the Descartes' Theorem. We define the \textit{simplicial}, \textit{hyperoctahedral} and \textit{hypercubical quadratic form} as
\begin{align}
\mathfrak T_{d+1}(u_1,\ldots,u_{d+2}):=&\Phi_{\mathcal T^{d+1}}\left(u_1,\frac12(u_1+u_2),\ldots,\frac1{d+2}(u_1+\ldots+u_{d+2})\right)\\=&\frac{1}{2}\left(\frac1d(\sum_{i=1}^{d+2}u_i)^2- \sum_{i=1}^{d+2}u_i^2\right)\nonumber\\
\mathfrak O_{d+1}(u_1,\ldots,u_{d+2}):=&\Phi_{\mathcal O^{d+1}}\left(u_1,\frac12(u_1+u_2),\ldots,\frac1{d+1}(u_1+\ldots+u_{d+1}),u_{d+2}\right)\\
=&u_{d+2}^2-\frac{1}{2}\sum_{i=1}^{d+1}(u_i-u_{d+2})^2
\nonumber\\
\mathfrak C_{d+1}(u_1,\ldots,u_{d+2}):=&\Phi_{\mathcal C^{d+1}}\left(u_1,\frac12(u_1+u_2),\ldots,\frac12(u_1+u_{d+2})\right)\\
=&\frac14\left(\frac1d(u_1+u_{d+2})^2- \sum_{i=1}^{d+1}(u_i-u_{i+1})^2 \right) \nonumber
\end{align}
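These three expansions can be checked numerically against the definition \eqref{eq:defflagf}; the sketch below (assuming nothing beyond the formulas as printed) evaluates both sides on random inputs for $d=1,\ldots,6$:

```python
import random
from math import comb

def phi_T(x):  # flag form of the (d+1)-simplex, d = len(x) - 2
    d = len(x) - 2
    return -sum(comb(i + 2, 2) * (x[i] - x[i + 1]) ** 2
                for i in range(d + 1)) + (d + 2) / d * x[d + 1] ** 2

def phi_O(x):  # flag form of the (d+1)-cross polytope
    d = len(x) - 2
    return -sum(comb(i + 2, 2) * (x[i] - x[i + 1]) ** 2 for i in range(d)) \
           - (d + 1) / 2 * (x[d] - x[d + 1]) ** 2 + x[d + 1] ** 2

def phi_C(x):  # flag form of the (d+1)-cube
    d = len(x) - 2
    return -sum((x[i] - x[i + 1]) ** 2 for i in range(d + 1)) + x[-1] ** 2 / d

def frak_T(u):  # simplicial quadratic form
    d = len(u) - 2
    return 0.5 * (sum(u) ** 2 / d - sum(t * t for t in u))

def frak_O(u):  # hyperoctahedral quadratic form
    return u[-1] ** 2 - 0.5 * sum((t - u[-1]) ** 2 for t in u[:-1])

def frak_C(u):  # hypercubical quadratic form
    d = len(u) - 2
    return 0.25 * ((u[0] + u[-1]) ** 2 / d
                   - sum((u[i] - u[i + 1]) ** 2 for i in range(d + 1)))

random.seed(0)
for d in range(1, 7):
    for _ in range(20):
        u = [random.uniform(-2, 2) for _ in range(d + 2)]
        avg = [sum(u[: i + 1]) / (i + 1) for i in range(d + 2)]
        half = [(u[0] + u[i]) / 2 for i in range(d + 2)]
        assert abs(frak_T(u) - phi_T(avg)) < 1e-9
        assert abs(frak_O(u) - phi_O(avg[: d + 1] + [u[-1]])) < 1e-9
        assert abs(frak_C(u) - phi_C(half)) < 1e-9
```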
\begin{cor}[Soddy-Gosset Theorem] \label{cor:descartetra} Let $\kappa_1,\ldots,\kappa_{d+2}$ be the curvatures of a polytopal $d$-ball packing $\mathcal{B}_{\mathcal{T}^{d+1}}$. Then,
\begin{align}\label{eq:DescartesTh}
d \sum_{i=1}^{d+2}\kappa_i^2=(\sum_{i=1}^{d+2}\kappa_i )^2
\end{align}
\end{cor}
\begin{proof}
Since $\mathcal{T}^{d+1}$ is Möbius unique, the Polytopal Descartes' Theorem holds for any polytopal $d$-ball packing $\mathcal{B}_{\mathcal{T}^{d+1}}$. For every $i=1,\ldots,d+2$, let $v_i$ be the vertex of $\mathcal{T}^{d+1}$ corresponding to the $d$-ball of curvature $\kappa_i$. Since $\mathcal{T}^{d+1}$ is a $(d+1)$-neighborly polytope, that is, every set of vertices spans a face, we can find a flag $(f_0,\ldots,f_d,f_{d+1}=\mathcal{T}^{d+1})$ where the vertices of $f_i$ are $v_1,\ldots,v_{i+1}$. Therefore, we have
\begin{align}\label{eq:kappai}
\kappa_{f_i}=\frac1{i+1}(\kappa_1+\ldots+\kappa_{i+1}).
\end{align}
Then, by the Polytopal Descartes' Theorem,
\begin{align*}
\mathfrak T_{d+1}(\kappa_1,\ldots,\kappa_{d+2})=\Phi_{\mathcal{T}^{d+1}}(\kappa_{f_0},\ldots,\kappa_{f_{d+1}})=0
\end{align*}
which is equivalent to \eqref{eq:DescartesTh}.
\end{proof}
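A quick sanity check of \eqref{eq:DescartesTh} on classical configurations: two parallel lines with two unit disks between them ($d=2$), the integral Descartes quadruple $(-1,2,2,3)$ ($d=2$), and two parallel planes with three mutually tangent unit spheres between them ($d=3$):

```python
# (dimension, curvature vector) for three classical mutually tangent
# configurations satisfying the Soddy-Gosset relation
configs = [
    (2, (0, 0, 1, 1)),      # two parallel lines and two unit disks
    (2, (-1, 2, 2, 3)),     # the classical integral Descartes quadruple
    (3, (0, 0, 1, 1, 1)),   # two parallel planes and three unit spheres
]
for d, kappas in configs:
    assert d * sum(k * k for k in kappas) == sum(kappas) ** 2
```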
\begin{rem}
In the above corollary, ``\textit{polytopal $d$-ball packing $\mathcal{B}_{\mathcal{T}^{d+1}}$}" can be replaced by a ``\textit{$d$-ball packing made by $d+2$ pairwise tangent $d$-balls}", as in the original statement. By Proposition \ref{thm:gram}, both definitions are equivalent.
\end{rem}
\begin{cor}\label{cor:descaroct} Let $\kappa_1,\ldots,\kappa_{d+1}$ be the curvatures of $d+1$ pairwise tangent $d$-balls of a polytopal $d$-ball packing $\mathcal{B}_{\mathcal{O}^{d+1}}$. Then,
\begin{align}\label{eq:DescartesTh2}
\sum_{i=1}^{d+1}(\kappa_i-\kappa_{\mathcal{O}^{d+1}})^2=2\kappa_{\mathcal{O}^{d+1}}^2
\end{align}
\end{cor}
\begin{proof}
We can apply the same arguments as in the proof of Corollary \ref{cor:descartetra}. The vertices corresponding to the curvatures are the vertices of a facet of $\mathcal{O}^{d+1}$, which is a $d$-simplex. Therefore, we can find a flag $(f_0,\ldots,f_d,\mathcal{O}^{d+1})$ where \eqref{eq:kappai} is satisfied for every $i=0,\ldots,d$. The Polytopal Descartes' Theorem combined with the hyperoctahedral quadratic form gives the result.
\end{proof}
\begin{rem}
Equation \eqref{eq:DescartesTh2} for $d=2,3$ is equivalent to the equation given in the generalizations of Descartes' Theorem of Guettler-Mallows \cite{guettler}, Nakamura \cite{nakamura2014localglobal} and Dias \cite{Dias2014TheLP}, respectively. The extra equations needed to obtain the remaining curvatures correspond to the antipodal relation.
\end{rem}
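Relation \eqref{eq:DescartesTh2} can be tested on an explicit octahedral disk packing; the configuration below (a disk of radius $\sqrt2-1$ at the origin, four unit disks around it and the complement of the disk of radius $\sqrt2+1$) is an illustrative example of ours, whose six curvatures are $1+\sqrt2,1,1,1,1,1-\sqrt2$:

```python
import math

# curvatures of an explicit octahedral disk packing: central disk of radius
# sqrt(2)-1, four unit disks centred at (+-sqrt(2),0), (0,+-sqrt(2)), and
# the complement of the disk of radius sqrt(2)+1 (negative curvature)
kappas = [1 + math.sqrt(2), 1.0, 1.0, 1.0, 1.0, 1 - math.sqrt(2)]
kappa_O = sum(kappas) / 6   # Lorentzian curvature of the orthoplex
assert abs(kappa_O - 1.0) < 1e-12

# a facet: the central disk and two neighbouring unit disks, pairwise tangent
facet = [1 + math.sqrt(2), 1.0, 1.0]
lhs = sum((k - kappa_O) ** 2 for k in facet)
assert abs(lhs - 2 * kappa_O ** 2) < 1e-9
```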
\begin{cor} \label{cor:descarcube}
For every $d\ge1$, let $\kappa_1,\ldots,\kappa_{d+2}$ be the curvatures of $d+2$ consecutive tangent $d$-balls of a polytopal $d$-ball packing $\mathcal{B}_{\mathcal{C}^{d+1}}$ where $\kappa_1$ and $\kappa_{d+2}$ are the curvatures of two $d$-balls at distance $d+1$ in the tangency graph of $\mathcal{B}_{\mathcal{C}^{d+1}}$. Then we have
\begin{align}\label{eq:DescartesTh3}
d \sum_{i=1}^{d+1}(\kappa_i-\kappa_{i+1})^2=(\kappa_1+\kappa_{d+2})^2.
\end{align}
\end{cor}
\begin{proof}
By Möbius unicity, we can consider that $\mathcal{C}^{d+1}$ is regular. Let $(v_1,\ldots,v_{d+2})$ be a path passing through the vertices of $\mathcal{C}^{d+1}$ corresponding to the curvatures. There is a flag $(f_0,\ldots,f_{d+1})$ where $f_i$ is the unique $i$-face containing the vertices $v_1,\ldots,v_{i+1}$. For every $i=2,\ldots,d+1$, the intersection $f_i\cap \mathbb S^d$ gives a polytopal $(i-1)$-ball packing $\mathcal{B}_{\mathcal{C}^i}$. Applying the antipodal relation to each $\mathcal{B}_{\mathcal{C}^{i}}$, and noting that $v_1$ and $v_{i+1}$ are antipodal vertices of $f_i$, we obtain that, for every $i=0,\ldots,d+1$,
\begin{align}
\kappa_{f_i}=\frac12(\kappa_1+\kappa_{i+1}),
\end{align}
and therefore, by the Polytopal Descartes' Theorem,
\begin{align*}
\mathfrak C_{d+1}(\kappa_1,\ldots,\kappa_{d+2})=\Phi_{\mathcal{C}^{d+1}}(\kappa_{f_0},\ldots,\kappa_{f_d},\kappa_{f_{d+1}})=0
\end{align*}
which is equivalent to \eqref{eq:DescartesTh3}.
\end{proof}
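Relation \eqref{eq:DescartesTh3} can also be verified numerically for $d=2$ by projecting the polar caps of an edge-scribed cube stereographically; the construction below (our own sketch, with vertices $(\pm1,\pm1,\pm1)/\sqrt2$ and each disk fitted through three projected boundary points) checks the relation along a path of four consecutively tangent disks whose endpoints are antipodal in the graph of the cube:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def stereo(p):
    # stereographic projection from the north pole onto the plane z = 0
    return (p[0] / (1 - p[2]), p[1] / (1 - p[2]))

def circumradius(p1, p2, p3):
    a, b, c = math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)
    s = (a + b + c) / 2
    return a * b * c / (4 * math.sqrt(s * (s - a) * (s - b) * (s - c)))

def cap_curvature(v):
    # curvature of the projection of the cap {x in S^2 : <x,v> >= 1} polar
    # to the vertex v; none of the caps below contains the north pole
    u = normalize(v)
    t = 1 / math.sqrt(sum(x * x for x in v))   # height of the boundary circle
    rho = math.sqrt(1 - t * t)                 # its radius inside the sphere
    e = normalize(cross(u, [1, 0, 0] if abs(u[0]) < 0.9 else [0, 1, 0]))
    f = cross(u, e)
    pts = [stereo([t * u[i] + rho * (math.cos(th) * e[i] + math.sin(th) * f[i])
                   for i in range(3)]) for th in (0.0, 2.1, 4.2)]
    return 1 / circumradius(*pts)

# an edge-scribed cube (edge midpoints on the unit sphere) and a path of
# four consecutive vertices whose endpoints are antipodal
s = 1 / math.sqrt(2)
path = [(s, s, s), (s, s, -s), (s, -s, -s), (-s, -s, -s)]
k = [cap_curvature(v) for v in path]
lhs = 2 * sum((k[i] - k[i + 1]) ** 2 for i in range(3))   # d = 2
assert abs(lhs - (k[0] + k[3]) ** 2) < 1e-9
```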
When all the curvatures of the $d$-ball packings in Corollaries \ref{cor:descartetra}, \ref{cor:descaroct} and \ref{cor:descarcube} are integers, we obtain a geometric method to find solutions to three Diophantine equations.
\begin{cor}\label{cor:diaph}
Let $d\ge1$. If there is an integral polytopal $d$-ball packing $\mathcal{B}_{\mathcal{T}^{d+1}}$, $\mathcal{B}_{\mathcal{O}^{d+1}}$ or $\mathcal{B}_{\mathcal{C}^{d+1}}$, then the following Diophantine equations
\begin{align}
\label{eq:diaph1} d(m_1^2+\cdots+m_{d+2}^2)=n^2\\
\label{eq:diaph2} m_1^2+\cdots+m_{d+1}^2=2n^2\\
\label{eq:diaph3} d(m_1^2+\cdots+m_{d+1}^2)=n^2,
\end{align}
admit integer solutions.
\end{cor}
\begin{proof}
Equations \eqref{eq:diaph1}, \eqref{eq:diaph2} and \eqref{eq:diaph3} are obtained from the equalities \eqref{eq:DescartesTh}, \eqref{eq:DescartesTh2} and \eqref{eq:DescartesTh3}, respectively, by taking $m_i$ and $n$ to be the integer combinations of curvatures appearing inside the parentheses of each equality.
\end{proof}
\begin{figure}[H]
\centering
\includestandalone[scale=1]{tikzs/diaph}
\caption{Three primitive solutions to Equations \eqref{eq:diaph1}, \eqref{eq:diaph2}, \eqref{eq:diaph3} for $d=2$, obtained by the relations between the curvatures in an integral tetrahedral, octahedral and cubical disk packing, respectively.
}\label{fig:diaphos}
\end{figure}
\section{The orthoplicial sphere packing.}\label{sec:ortho}
In two independent works, Nakamura \cite{nakamura2014localglobal} and Dias \cite{Dias2014TheLP} defined the \textit{orthoplicial sphere packings} as a class of packings whose tangency graph is the $1$-skeleton of an orthoplex. These packings were also studied by Sheydvasser in \cite{SHEYDVASSER201941} as packings arising from a quaternary quadratic form. They can also be obtained as a particular case of a Boyd--Maxwell packing \cite{chenlabbe} or a crystallographic sphere packing \cite{KontorovichNakamura}. In this section, we shall study orthoplicial sphere packings as \textit{polytopal} sphere packings, i.e. as packings Möbius equivalent to the ball-arrangement projection of an edge-scribed orthoplex. As we show below, orthoplicial sphere packings, in the sense of Nakamura and Dias, are polytopal.
\begin{lem}
Orthoplicial sphere packings are polytopal.
\end{lem}
\begin{proof}
For any orthoplicial sphere packing $\mathcal{B}$, the tangency conditions give all the entries of $\Gram(\mathcal{B})$ except for those corresponding to the inversive product of disjoint spheres, which must be strictly less than $-1$. Therefore, since the rank of $\Gram(\mathcal{B})$ must be equal to $5$, the entries corresponding to disjoint spheres must be $-3$. By Equation \eqref{eq:hoct}, we can reorder the spheres of $\mathcal{B}$ so that $\Gram(\mathcal{B})=\Gram(\mathcal{B}_{\mathcal{O}^4})$, where $\mathcal{B}_{\mathcal{O}^4}$ is the ball-arrangement projection of an edge-scribed orthoplex. By Proposition \ref{thm:gram}, $\mathcal{B}$ and $\mathcal{B}_{\mathcal{O}^4}$ are Möbius equivalent, and therefore $\mathcal{B}$ is polytopal.
\end{proof}
\subsection{Orthoplicial trinities.}
By polarity, the dual $\mathcal{B}_{\mathcal{O}^4}^*$ of an orthoplicial sphere packing is Möbius equivalent to the ball-arrangement projection of a ridge-scribed hypercube. Therefore, $\mathcal{B}_{\mathcal{O}^4}^*$ is not a packing. However, by alternating the vertices of the hypercube, we can split the dual into two orthoplicial sphere packings $\mathcal{B}_{\mathcal{O}^4}^*=\mathcal{B}_{\mathcal{O}^4}'\cup\mathcal{B}_{\mathcal{O}^4}''$. Such a collection of three orthoplicial sphere packings $\{\mathcal{B}_{\mathcal{O}^4},\mathcal{B}_{\mathcal{O}^4}',\mathcal{B}_{\mathcal{O}^4}''\}$ will be called an \textit{orthoplicial trinity}.
\begin{lem}\label{lem:trinity} Let $\{\mathcal{B}_{\mathcal{O}^4},\mathcal{B}_{\mathcal{O}^4}',\mathcal{B}_{\mathcal{O}^4}''\}$ be an orthoplicial trinity. Then, for any $\mathcal{B}\in\{\mathcal{B}_{\mathcal{O}^4},\mathcal{B}_{\mathcal{O}^4}',\mathcal{B}_{\mathcal{O}^4}''\}$, we have that $\mathcal{B}^*=\{\mathcal{B}_{\mathcal{O}^4},\mathcal{B}_{\mathcal{O}^4}',\mathcal{B}_{\mathcal{O}^4}''\}\setminus \mathcal{B}$.
\end{lem}
\begin{proof}
The Möbius unicity of the orthoplex, combined with the fact that Möbius transformations preserve duality, implies that it is enough to prove the result in a particular case. Let us consider the \textit{standard orthoplicial sphere packing} $\mathcal{B}_0$ given in Figure \ref{fig:hoct34}.
\begin{figure}[H]
\centering
\includestandalone[align=c,scale=1]{tikzs/standardhoct}
\hspace{1cm}
{\small
\begin{tabular}{c|rrrrr}
Spheres&\multicolumn{5}{c}{\text{Inversive coordinates}}\\
\hline
$b_1$ & 0 & 0 & 1 & 1 & 1 \\
$b_2$ & 0 & 0 & $-1$ & 1 & 1 \\
$b_3$ & 1 & 1 & 0 & 0 & 1 \\
$b_4$ &$-1$ & 1 & 0 & 0 & 1 \\
$b_{-1}$ & 0 & 0 & $-1$ & $-1$ & 1 \\
$b_{-2}$ & 0 & 0 & 1 & $-1$ & 1 \\
$b_{-3}$ &$-1$ & $-1$ & 0 & 0 & 1 \\
$b_{-4}$ & 1 & $-1$ & 0 & 0 & 1 \\
\end{tabular}
}
\caption{The standard orthoplicial sphere packing $\mathcal{B}_0$.}
\label{fig:hoct34}
\end{figure}
It can be checked that $\mathcal{B}_0^*$ can be split into two orthoplicial ball packings, both obtained from $\mathcal{B}_0$ by a rotation of angle $\frac{\pi}{2}$, one around the $x$-axis and the other around the $y$-axis, as shown in Figure \ref{fig:trinidad}.
\end{proof}
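The Gram matrix computation underlying the previous lemma can be verified directly from the coordinates in Figure \ref{fig:hoct34}. Below is a short Python sketch, under the assumption (suggested by the table) that the inversive product is the bilinear form of signature $(4,1)$ given by $\mathrm{diag}(1,1,1,1,-1)$ with each sphere normalized so that $\langle b,b\rangle=1$; tangent pairs then have product $-1$ and the four antipodal (disjoint) pairs have product $-3$:

```python
# Gram-matrix check for the standard orthoplicial sphere packing B0,
# using the inversive coordinates of the table (assumed convention:
# Lorentz form diag(1,1,1,1,-1), spheres normalized to <b,b> = 1).
B = {
     1: (0, 0, 1, 1, 1),    2: (0, 0, -1, 1, 1),
     3: (1, 1, 0, 0, 1),    4: (-1, 1, 0, 0, 1),
    -1: (0, 0, -1, -1, 1), -2: (0, 0, 1, -1, 1),
    -3: (-1, -1, 0, 0, 1), -4: (1, -1, 0, 0, 1),
}
Q = (1, 1, 1, 1, -1)  # signature (4,1)

def inv_prod(x, y):
    # Inversive product with respect to the Lorentz form Q.
    return sum(q * a * b for q, a, b in zip(Q, x, y))

gram = {(i, j): inv_prod(B[i], B[j]) for i in B for j in B}
```

Every non-antipodal pair of vertices of the orthoplex spans an edge, so all off-diagonal entries other than the four antipodal ones should equal $-1$.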
\begin{figure}[H]
\centering
\includegraphics[trim=20 20 20 20, clip,width=.35\textwidth]{img/trinity1.png}
\caption{An orthoplicial trinity containing the standard orthoplicial sphere packing $\mathcal{B}_0$.
}\label{fig:trinidad}
\end{figure}
The three packings in the previous orthoplicial trinity are $1$-CBP projections of the orthoplex. A similar case arises when an orthoplicial trinity contains a $0$-CBP projection of the orthoplex. In this case, the other two packings must be $3$-CBP projections, as shown in Figure \ref{fig:trinidad2}. We notice that any orthoplicial trinity is Möbius equivalent to the ball-arrangement projection of a circumscribed $24$-cell.
\begin{figure}[H]
\centering
\includegraphics[width=.24\textwidth]{img/orthoplicial/trinityA.png}\includegraphics[width=.24\textwidth]{img/orthoplicial/trinityB.png} \includegraphics[width=.24\textwidth]{img/orthoplicial/trinityC.png}
\includegraphics[width=.24\textwidth]{img/orthoplicial/trinityABC.png}
\caption{From left to right: a $3$-CBP projection $\mathcal{B}_{\mathcal{O}^4}$ of the orthoplex; a $0$-CBP projection contained in $\mathcal{B}_{\mathcal{O}^4}^*$; the other $3$-CBP projection contained in $\mathcal{B}_{\mathcal{O}^4}^*$; the orthoplicial trinity containing $\mathcal{B}_{\mathcal{O}^4}$.}
\label{fig:trinidad2}
\end{figure}
\subsection{Apollonian groups of orthoplicial sphere packings.}
In \cite{RR21_1}, several groups generalizing the notion of the Apollonian, SuperApollonian and symmetric groups of \cite{apoGI,apoGII,apoGIII} were introduced for polytopal $d$-ball packings. We recall that for any polytopal $d$-ball packing $\mathcal{B}_\mathcal{P}$, the \textit{Apollonian group} $\mathsf A(\mathcal{B}_\mathcal{P})$ is the group generated by the inversions on the dual spheres of $\mathcal{B}_\mathcal{P}^*$, the \textit{symmetric group} of $\mathcal{B}_\mathcal{P}$ is the group generated by the inversions leaving $\mathcal{B}_\mathcal{P}$ invariant, and the \textit{symmetrized Apollonian group} $\mathsf {SA}(\mathcal{B}_\mathcal{P})$ is the group generated by the previous two groups. When $\mathcal{P}$ is Möbius unique, the three groups can be defined canonically for $\mathcal{P}$.\\
In this context, the \textit{orthoplicial Apollonian group} $\mathsf{A}(\mathcal{B}_{\mathcal{O}^4})$, introduced by Nakamura in \cite{nakamura2014localglobal} and also by Dias in \cite{Dias2014TheLP}, corresponds to the Apollonian group of the standard orthoplicial sphere packing $\mathcal{B}_0$.
\begin{lem} \label{lem:symortho}
The symmetrized orthoplicial Apollonian group is the hyperbolic Coxeter group with Coxeter graph
\raisebox{-.37cm}{
\begin{tikzpicture}[scale=0.5]
\draw[thick] (0,0) -- (4,0);
\node (NE) at (0,0) [circle,fill=black,inner sep=0pt,minimum size=.15cm] {};
\node (NW) at (1,0) [circle,fill=black,inner sep=0pt,minimum size=.15cm] {};
\node (NW) at (2,0) [circle,fill=black,inner sep=0pt,minimum size=.15cm] {};
\node (NW) at (3,0) [circle,fill=black,inner sep=0pt,minimum size=.15cm] {};
\node (NW) at (4,0) [circle,fill=black,inner sep=0pt,minimum size=.15cm] {};
\draw (2.5,0) circle (0pt) node[anchor= north] {$4$};
\draw (3.5,0) circle (0pt) node[anchor= north] {$4$};
\end{tikzpicture} }and it is isomorphic to the group generated by the following five matrices:
\begingroup
\renewcommand*{\arraystretch}{.7}
\setlength{\arraycolsep}{1.5pt}
\begin{align*}
\mathbf{V}=\small
\begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}\,
\mathbf{E}=\small\frac{1}{2}
\begin{pmatrix}
1 & 1 & -1 & -1 & 0 \\
1 & 1 & 1 & 1 & 0 \\
-1 & 1 & 1 & -1 & 0 \\
-1 & 1 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 2 \\
\end{pmatrix}\,
\mathbf{R}=\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}\,
\mathbf{F}=\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}\,
\mathbf{S}=\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & -2 & 2 \\
0 & 0 & 1 & 0 & 0 \\
0 & -2 & 0 & -1 & 2 \\
0 & -2 & 0 & -2 & 3 \\
\end{pmatrix}.
\end{align*}
\endgroup
\end{lem}
\begin{proof}
Let $\mathcal B_0$ be the standard orthoplicial ball packing. Let $\{r_v, r_e, r_r, r_f\}$ be the fundamental generators of the symmetric group of $\mathcal B_0$, with respect to the flag $(v,e,r,f,\mathcal{O}^4)$, where $v=1$, $e=12$, $r=123$ and $f=1234$. Then, $\{r_v, r_e, r_r, r_f\}$ are represented by the following Möbius transformations:
\begin{itemize}
\item $r_v$ is the reflection on the plane $\{x=0\}.$
\item $r_e$ is the inversion on the sphere with center $(-1,1,-1)$ and radius $2$.
\item $r_r$ is the reflection on the plane $\{z=0\}.$
\item $r_f$ is the inversion on the sphere with center $(0,0,1)$ and radius $\sqrt 2$.
\end{itemize}
Since the orthoplex is regular, $\mathsf{SA}(\mathcal B_0)$ is generated by $\{r_v, r_e, r_r, r_f,s_f\}$, where $s_f$ is the inversion on the sphere orthogonal to $b_1$, $b_2$, $b_3$ and $b_4$. By using the inversive coordinates, we obtain a faithful linear representation of $\mathsf{SA}(\mathcal B_0)$ as a discrete subgroup of $O^\uparrow_{4,1}(\mathbb{Q})$, in which the generators $\{r_v, r_e, r_r, r_f,s_f\}$ are represented by the matrices $\{\mathbf V, \mathbf E, \mathbf R, \mathbf F,\mathbf S\}$, respectively. The relations in the Coxeter graph can be checked by straightforward computations on the matrices. By the Möbius unicity of the orthoplex, the symmetrized Apollonian group of any other orthoplicial ball packing is isomorphic to $\mathsf{SA}(\mathcal B_0)$.
\end{proof}
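The straightforward matrix computations mentioned in the proof can be carried out in exact arithmetic. A minimal Python sketch (the helper names are ours; the generators are listed along the path of the Coxeter graph, so the unlabelled edges have order $3$, the two edges labelled $4$ have order $4$, and non-adjacent generators commute):

```python
# Exact verification of the Coxeter relations of the five matrices
# V, E, R, F, S along the path graph of the lemma.
from fractions import Fraction as Fr

N = 5
I5 = [[Fr(int(i == j)) for j in range(N)] for i in range(N)]

def mul(A, B):
    # 5x5 matrix product over the rationals.
    return [[sum(Fr(A[i][k]) * Fr(B[k][j]) for k in range(N))
             for j in range(N)] for i in range(N)]

def power(A, n):
    P = I5
    for _ in range(n):
        P = mul(P, A)
    return P

V = [[-1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0],[0,0,0,0,1]]
E = [[Fr(x, 2) for x in row] for row in
     [[1,1,-1,-1,0],[1,1,1,1,0],[-1,1,1,-1,0],[-1,1,-1,1,0],[0,0,0,0,2]]]
R = [[1,0,0,0,0],[0,1,0,0,0],[0,0,-1,0,0],[0,0,0,1,0],[0,0,0,0,1]]
F = [[1,0,0,0,0],[0,1,0,0,0],[0,0,0,1,0],[0,0,1,0,0],[0,0,0,0,1]]
S = [[1,0,0,0,0],[0,-1,0,-2,2],[0,0,1,0,0],[0,-2,0,-1,2],[0,-2,0,-2,3]]

gens = [V, E, R, F, S]                      # path: V - E - R - F - S
edge_orders = {(0, 1): 3, (1, 2): 3, (2, 3): 4, (3, 4): 4}
```

The checks confirm that each generator is an involution, that the products along the edges have exactly the stated orders, and that the six non-adjacent pairs commute.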
\newpage
\begin{cor}\label{cor:matrix}
The orthoplicial Apollonian group is isomorphic to the subgroup of
$O_{4,1}^\uparrow(\mathbb Z)$ generated by the following 16 matrices:
\begingroup
\renewcommand*{\arraystretch}{.7}
\setlength{\arraycolsep}{2pt}
\begin{align*}
\mathbf{S}_{1234}=&
{\small
\begin{pmatrix}
1 &0& 0 & 0 & 0 \\
0 & -1 & 0 & -2 & 2 \\
0 & 0 & 1 & 0 & 0 \\
0 & -2 & 0 & -1 & 2 \\
0 & -2 & 0 & -2 & 3 \\
\end{pmatrix}=\mathbf{S}
},&&
\mathbf{S}_{123\overline4}=
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & -2 & 0 & 2 \\
0 & -2 & -1 & 0 & 2 \\
0 & 0 & 0 & 1 & 0 \\
0 & -2 & -2 & 0 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{1234}\mathbf{F}
},\\
\mathbf{S}_{12\overline34}=&
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 2 & 0 & 2 \\
0 & 2 & -1 & 0 & -2 \\
0 & 0 & 0 & 1 & 0 \\
0 & -2 & 2 & 0 & 3 \\
\end{pmatrix}=\mathbf{R}\mathbf{S}_{123\overline4}\mathbf{R}
},&&
\mathbf{S}_{12\overline{34}}=
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 2 & 2 \\
0 & 0 & 1 & 0 & 0 \\
0 & 2 & 0 & -1 & -2 \\
0 & -2 & 0 & 2 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{12\overline34}\mathbf{F}
},\\
\mathbf{S}_{1\overline234}=&
{\small
\begin{pmatrix}
-1 & 0 & 0 & -2 & 2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
-2 & 0 & 0 & -1 & 2 \\
-2 & 0 & 0 & -2 & 3 \\
\end{pmatrix}=\mathbf{E}\mathbf{S}_{12\overline34}\mathbf{E}
},&&
\mathbf{S}_{1\overline23\overline4}=
{\small
\begin{pmatrix}
-1 & 0 & -2 & 0 & 2 \\
0 & 1 & 0 & 0 & 0 \\
-2 & 0 & -1 & 0 & 2 \\
0 & 0 & 0 & 1 & 0 \\
-2 & 0 & -2 & 0 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{1\overline234}\mathbf{F}
},\\
\mathbf{S}_{1\overline{23}4}=&
{\small
\begin{pmatrix}
-1 & 0 & 2 & 0 & 2 \\
0 & 1 & 0 & 0 & 0 \\
2 & 0 & -1 & 0 & -2 \\
0 & 0 & 0 & 1 & 0 \\
-2 & 0 & 2 & 0 & 3 \\
\end{pmatrix}=\mathbf{R}\mathbf{S}_{1\overline23\overline4}\mathbf{R}
},&&
\mathbf{S}_{1\overline{234}}=
{\small
\begin{pmatrix}
-1 & 0 & 0 & 2 & 2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
2 & 0 & 0 & -1 & -2 \\
-2 & 0 & 0 & 2 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{1\overline{23}4}\mathbf{F}
},\\
\mathbf{S}_{\overline1234}=&
{\small
\begin{pmatrix}
-1 & 0 & 0 & 2 & -2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
2 & 0 & 0 & -1 & 2 \\
2 & 0 & 0 & -2 & 3 \\
\end{pmatrix}=\mathbf{V}\mathbf{S}_{1\overline234}\mathbf{V}
},&&
\mathbf{S}_{\overline123\overline4}=
{\small
\begin{pmatrix}
-1 & 0 & 2 & 0 & -2 \\
0 & 1 & 0 & 0 & 0 \\
2 & 0 & -1 & 0 & 2 \\
0 & 0 & 0 & 1 & 0 \\
2 & 0 & -2 & 0 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{\overline1234}\mathbf{F}
},\\
\mathbf{S}_{\overline12\overline34}=&
{\small
\begin{pmatrix}
-1 & 0 & -2 & 0 & -2 \\
0 & 1 & 0 & 0 & 0 \\
-2 & 0 & -1 & 0 & -2 \\
0 & 0 & 0 & 1 & 0 \\
2 & 0 & 2 & 0 & 3 \\
\end{pmatrix}=\mathbf{R}\mathbf{S}_{\overline123\overline4}\mathbf{R}
},&&
\mathbf{S}_{\overline{1}2\overline{34}}=
{\small
\begin{pmatrix}
-1 & 0 & 0 & -2 & -2 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
-2 & 0 & 0 & -1 & -2 \\
2 & 0 & 0 & 2 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{\overline12\overline34}\mathbf{F}
},\\
\mathbf{S}_{\overline{12}34}=&
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 2 & -2 \\
0 & 0 & 1 & 0 & 0 \\
0 & 2 & 0 & -1 & 2 \\
0 & 2 & 0 & -2 & 3 \\
\end{pmatrix}=\mathbf{E}\mathbf{S}_{\overline12\overline34}\mathbf{E}
},&&
\mathbf{S}_{\overline{12}3\overline{4}}=
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 2 & 0 & -2 \\
0 & 2 & -1 & 0 & 2 \\
0 & 0 & 0 & 1 & 0 \\
0 & 2 & -2 & 0 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{\overline{12}34}\mathbf{F}
},\\
\mathbf{S}_{\overline{123}4}=&
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & -2 & 0 & -2 \\
0 & -2 & -1 & 0 & -2 \\
0 & 0 & 0 & 1 & 0 \\
0 & 2 & 2 & 0 & 3 \\
\end{pmatrix}=\mathbf{R}\mathbf{S}_{\overline{12}3\overline4}\mathbf{R}
},&&
\mathbf{S}_{\overline{1234}}=
{\small
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & -2 & -2 \\
0 & 0 & 1 & 0 & 0 \\
0 & -2 & 0 & -1 & -2 \\
0 & 2 & 0 & 2 & 3 \\
\end{pmatrix}=\mathbf{F}\mathbf{S}_{\overline{123}4}\mathbf{F}
}.
\end{align*}
\endgroup
\end{cor}
The previous representation of the orthoplicial Apollonian group in $O^\uparrow_{4,1}(\mathbb Z)$ can be obtained by conjugating the matrix representation in $O^\uparrow_W(\mathbb Z)$ given by Nakamura in \cite{nakamura2014localglobal}, where $W$ is the matrix of the inversive product in \textit{augmented curvature-center coordinates} (see \cite{LagariasOct}). Both representations satisfy the following relations:
\begin{itemize}
\item[(R1)] $ \mathbf{S}_{ijkl}^2=1$ for every $(i,j,k,l)=(\pm1,\pm2,\pm3,\pm4)$.
\item[(R2)] $(\mathbf{S}_{ijkl} \mathbf{S}_{i'j'k'l'})^2=1$ if the labels $ijkl$ and $i'j'k'l'$ differ in only one letter.
\end{itemize}
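Relations (R1) and (R2) can be checked mechanically by rebuilding the sixteen generators from $\mathbf S$ through the conjugations displayed in the corollary. A Python sketch in exact rational arithmetic (the labels are encoded as sign vectors in $\{\pm1\}^4$, with $-1$ standing for a barred letter; the helper names are ours):

```python
# Rebuild the 16 generators of the orthoplicial Apollonian group from S
# via the conjugations of the corollary, then check (R1) and (R2).
from fractions import Fraction as Fr

def mul(A, B):
    return [[sum(Fr(A[i][k]) * Fr(B[k][j]) for k in range(5))
             for j in range(5)] for i in range(5)]

def conj(G, X):
    # G X G, where G is an involution of the symmetrized group.
    return mul(mul(G, X), G)

V = [[-1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0],[0,0,0,0,1]]
E = [[Fr(x, 2) for x in row] for row in
     [[1,1,-1,-1,0],[1,1,1,1,0],[-1,1,1,-1,0],[-1,1,-1,1,0],[0,0,0,0,2]]]
R = [[1,0,0,0,0],[0,1,0,0,0],[0,0,-1,0,0],[0,0,0,1,0],[0,0,0,0,1]]
F = [[1,0,0,0,0],[0,1,0,0,0],[0,0,0,1,0],[0,0,1,0,0],[0,0,0,0,1]]

S = {}  # generators indexed by sign vectors (-1 = barred letter)
S[(1,1,1,1)]     = [[1,0,0,0,0],[0,-1,0,-2,2],[0,0,1,0,0],
                    [0,-2,0,-1,2],[0,-2,0,-2,3]]
S[(1,1,1,-1)]    = conj(F, S[(1,1,1,1)])
S[(1,1,-1,1)]    = conj(R, S[(1,1,1,-1)])
S[(1,1,-1,-1)]   = conj(F, S[(1,1,-1,1)])
S[(1,-1,1,1)]    = conj(E, S[(1,1,-1,1)])
S[(1,-1,1,-1)]   = conj(F, S[(1,-1,1,1)])
S[(1,-1,-1,1)]   = conj(R, S[(1,-1,1,-1)])
S[(1,-1,-1,-1)]  = conj(F, S[(1,-1,-1,1)])
S[(-1,1,1,1)]    = conj(V, S[(1,-1,1,1)])
S[(-1,1,1,-1)]   = conj(F, S[(-1,1,1,1)])
S[(-1,1,-1,1)]   = conj(R, S[(-1,1,1,-1)])
S[(-1,1,-1,-1)]  = conj(F, S[(-1,1,-1,1)])
S[(-1,-1,1,1)]   = conj(E, S[(-1,1,-1,1)])
S[(-1,-1,1,-1)]  = conj(F, S[(-1,-1,1,1)])
S[(-1,-1,-1,1)]  = conj(R, S[(-1,-1,1,-1)])
S[(-1,-1,-1,-1)] = conj(F, S[(-1,-1,-1,1)])

I5 = [[Fr(int(i == j)) for j in range(5)] for i in range(5)]
```

The last assertion below compares one rebuilt generator, $\mathbf S_{1\overline234}$, with the matrix printed in the corollary.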
\section{Apollonian sections of the orthoplicial Apollonian sphere packing.} \label{sec:aposections}
Let $\mathcal{B}_\mathcal{P}$ be a polytopal $d$-ball packing and let
$\Omega(\mathcal B_\mathcal{P})$ be the \textit{Apollonian cluster} of $\mathcal{B}_\mathcal{P}$, defined as the orbit of $\mathsf A(\mathcal{B}_\mathcal{P})$ on $\mathcal{B}_\mathcal{P}$. We say that a subset $\Sigma\subset\Omega(\mathcal B_\mathcal{P})$ is an \textit{Apollonian section} of $\Omega(\mathcal{B}_\mathcal{P})$ if there is a subgroup $\Gamma<\mathsf{A}(\mathcal{B}_\mathcal{P})$ and a subset $X\subseteq \mathcal{B}_\mathcal{P}$ such that $\Sigma=\Gamma\cdot X$.
Let $\Sigma=\Gamma\cdot X$ and $\Sigma'=\Gamma'\cdot X'$ be two Apollonian sections of two polytopal Apollonian clusters $\Omega(\mathcal{B}_\mathcal{P})$ and $\Omega(\mathcal{B}_{\mathcal{P}'})$, respectively. We say that $\Sigma$ and $\Sigma'$ are \textit{algebraically equivalent} if $\Gamma$ and $\Gamma'$ are isomorphic and there is an equivariant bijection between $\Gamma\cdot X$ and $\Gamma'\cdot X'$ with respect to the actions. More precisely, $\Sigma$ and $\Sigma'$ are algebraically equivalent if there exist two bijections $\phi:\Gamma\rightarrow \Gamma'$ and $\psi:\Sigma\rightarrow \Sigma'$ such that
\begin{enumerate}[label=(\roman*)]
\item $\phi:\Gamma\rightarrow \Gamma'$ is a group isomorphism.
\item For all $g\in \Gamma$ and all $b\in X$, $\psi(g\cdot b)=\phi(g)\cdot \psi(b)$.
\end{enumerate}
If in addition $\psi$ preserves curvatures, we say that $\Sigma$ and $\Sigma'$ are \textit{arithmetically equivalent}.\\
In \cite{RR21_1}, the Platonic Apollonian groups were defined as the Apollonian groups of the polytopal disk packings whose tangency polytope is one of the Platonic solids. We shall say that an Apollonian cluster has a tetrahedral, octahedral, cubical, icosahedral or dodecahedral Apollonian section if it contains an Apollonian section algebraically equivalent to an Apollonian packing of the corresponding Platonic solid.
\begin{thm}\label{thm:aposections}Every orthoplicial Apollonian packing contains a tetrahedral, octahedral and cubical Apollonian section.
\end{thm}
\begin{proof}
Since the orthoplex is Möbius unique, it is enough to find the desired Apollonian sections in a particular orthoplicial Apollonian packing. First, we shall construct a tetrahedral and octahedral Apollonian section in $\Omega(\mathcal{B}_0)$ (see Figure \ref{fig:standaportho}).
\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{img/sections/apostandortho4.png}
\caption{The orthoplicial Apollonian packing $\Omega(\mathcal{B}_0)$ at depth $\le3$, seen from above.}\label{fig:standaportho}
\end{figure}
(Tetrahedral section) Let $\Sigma_\mathcal{T}=\Gamma_{\mathcal{T}}\cdot X_{\mathcal{T}}$ where $X_{\mathcal{T}}=\{b_1,b_2,b_3,b_4\}\subset \mathcal{B}_0$, $\Gamma_{\mathcal{T}}=\langle s_{\overline1234},s_{1\overline234},s_{12\overline34},s_{123\overline4}\rangle<\mathsf{A}(\mathcal{B}_0)$ and let $b_{1234}\in\mathcal{B}_0^*$. One can check that $\Gamma_{\mathcal{T}}$ leaves $b_{1234}$ invariant. Let $H_\mathcal{T}$ be the boundary of $b_{1234}$. For $\mathcal{B}_0$, $H_\mathcal{T}$ is the plane $\{y=1\}$. After identifying $H_\mathcal{T}$ with $\widehat{\mathbb R^2}$, we obtain that
$$X_{\mathcal{T}}\cap H_\mathcal{T}:=\{b_1\cap H_\mathcal{T},b_2\cap H_\mathcal{T},b_3\cap H_\mathcal{T},b_4\cap H_\mathcal{T}\}$$
becomes a tetrahedral disk packing $\mathcal{B}_{\mathcal{T}^3}$. Moreover, the restriction of $\Gamma_{\mathcal{T}}$ on $H_\mathcal{T}$ induces an isomorphism between $\Gamma_{\mathcal{T}}$ and $\mathsf{A}(\mathcal{B}_{\mathcal{T}^3})$ given by
\begin{align*}
\phi_\mathcal{T}:\Gamma_{\mathcal{T}}&\longrightarrow\mathsf{A}(\mathcal{B}_{\mathcal{T}^3})\\
s_{b}&\longmapsto s_{b\cap H_\mathcal{T}}
\end{align*}
where $s_b$ denotes the inversion on $b$. On the other hand, we can construct a bijection
\begin{align*}
\psi_\mathcal{T}:\Sigma_{\mathcal{T}}&\longrightarrow\Omega(\mathcal{B}_{\mathcal{T}^3})\\
g\cdot b&\longmapsto (g\cdot b)\cap H_\mathcal{T}
\end{align*}
By inspection, one can check that for every $ijkl\in\{\overline1234,1\overline234,12\overline34,123\overline4\}$ and for every $m=1,2,3,4,$ we have
\begin{align*}
\psi_\mathcal{T}(s_{ijkl}\cdot b_m)&=(s_{ijkl}\cdot b_m)\cap H_\mathcal{T}\\
&=\phi_\mathcal{T}(s_{ijkl})\cdot(b_m\cap H_\mathcal{T})\\
&=\phi_\mathcal{T}(s_{ijkl})\cdot \psi_\mathcal{T}(b_m)
\end{align*}
which gives the equivariance of $\psi_\mathcal{T}$. Therefore, $\Sigma_\mathcal{T}$ is algebraically equivalent to the tetrahedral Apollonian packing $\Omega(\mathcal{B}_{\mathcal{T}^3})$.\\
(Octahedral section) We may apply a similar strategy to construct an octahedral section in $\Omega(\mathcal{B}_0)$ by intersecting with the plane $H_\mathcal{O}:=\{x-y=1\}$, orthogonal to every $b\in X_\mathcal{O}:=\mathcal{B}_0\setminus\{b_4,b_{-4}\}$. The intersection $H_\mathcal{O}\cap X_\mathcal{O}$ gives an octahedral disk packing $\mathcal{B}_{\mathcal{O}^3}$. We define then $\Sigma_\mathcal{O}:=\Gamma_\mathcal{O}\cdot X_\mathcal{O}\subset\Omega(\mathcal{B}_0)$ where
\begin{align*}
\Gamma_{\mathcal{O}}=\langle t_{123},t_{\overline123},t_{1\overline23},t_{12\overline3},t_{\overline{12}3},t_{1\overline{23}},t_{\overline12\overline3}\mid t_{ijk}:=s_{ijk4}s_{ijk\overline4}\rangle.
\end{align*}
In this case, the group isomorphism $\phi_\mathcal{O}:\Gamma_\mathcal{O}\mapsto \mathsf{A}(\mathcal{B}_{\mathcal{O}^3})$ is given by $t_{ijk}\mapsto s_{ijk}$, where $s_{ijk}$ denotes the inversion on the disk orthogonal to the three disks $\{b_i\cap H_\mathcal{O},b_j\cap H_\mathcal{O},b_k\cap H_\mathcal{O}\}$. The equivariant bijection $\psi_\mathcal{O}:\Sigma_\mathcal{O}\rightarrow \Omega(\mathcal{B}_{\mathcal{O}^3})$ is then given by $g\cdot b\mapsto (g\cdot b)\cap H_\mathcal{O}$. We illustrate in Figure \ref{fig:tetraoctosections} the tetrahedral and octahedral sections $\Sigma_\mathcal{T}$ and $\Sigma_\mathcal{O}$.
\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{img/sections/tetrasection2.png}
\includegraphics[width=.4\textwidth]{img/sections/octosection2.png}
\caption{A tetrahedral (left) and octahedral (right) section of the orthoplicial Apollonian packing $\Omega(\mathcal{B}_0)$.}\label{fig:tetraoctosections}
\end{figure}
(Cubical section) This case is a bit trickier. We consider the orthoplicial ball packing $\mathcal{B}_1$ obtained by a $3$-CBP projection of the orthoplex, with the labelling and coordinates given in Figure \ref{fig:hoctB1}.
\begin{figure}[H]
\centering
\includestandalone[align=c,scale=1]{tikzs/orthoB1}
\hspace{1cm}
{\small
\begin{tabular}{c|rrrrr}
\text{Spheres}&\multicolumn{5}{c}{\text{Inversive coordinates}}\\
\hline
$b_1$&$1/\sqrt{2}$&$1/\sqrt{2}$&$1/\sqrt{2}$&$1/\sqrt{2}$& 1 \\
$b_2$&$1/\sqrt{2}$&$-1/\sqrt{2}$&$1/\sqrt{2}$&$-1/\sqrt{2}$& 1 \\
$b_3$&$1/\sqrt{2}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$&$1/\sqrt{2}$& 1 \\
$b_4$&$1/\sqrt{2}$&$1/\sqrt{2}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$& 1 \\
$b_{-1}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$& 1 \\
$b_{-2}$&$-1/\sqrt{2}$&$1/\sqrt{2}$&$-1/\sqrt{2}$&$1/\sqrt{2}$& 1 \\
$b_{-3}$&$-1/\sqrt{2}$&$1/\sqrt{2}$&$1/\sqrt{2}$&$-1/\sqrt{2}$& 1 \\
$b_{-4}$&$-1/\sqrt{2}$&$-1/\sqrt{2}$&$1/\sqrt{2}$&$1/\sqrt{2}$& 1 \\
\end{tabular}
}
\caption{The orthoplicial ball packing $\mathcal{B}_1$.}
\label{fig:hoctB1}
\end{figure}
In this labelling, $i$ is positive if and only if the first coordinate of the center of $b_{i}$ is positive. The orthoplicial Apollonian packing $\Omega(\mathcal{B}_1)$ and a cubical Apollonian section of $\Omega(\mathcal{B}_1)$ are shown in Figure \ref{fig:cubicbw2}. Let us now describe the construction. First, we consider the ridge-scribed hypercube
$$\mathcal{C}^4=\mathsf{conv}\{\frac{1}{\sqrt{2}}(\pm1,\pm1,\pm1,\pm1)\}.$$
We then split $\mathcal{C}^4$ into two edge-scribed orthoplexes induced by the classes of a $2$-coloring of its vertices. Let $\mathcal{O}^4$ be the orthoplex in the class of the vertex $\frac{1}{\sqrt{2}}(1,1,1,1)$. We have that $\mathcal{B}_1$ is the ball-arrangement projection of $\mathcal{O}^4$. Let $\pi:\mathbb E^4\rightarrow\mathbb{E}^3$ be the orthographic projection onto the hyperplane $\{x_1=0\}\subset\mathbb{E}^4$.
We have that $\pi(\mathcal{O}^4)$ is the edge-scribed cube $\mathcal{C}^3$ with vertices $\frac{1}{\sqrt{2}}(\pm1,\pm1,\pm1)$. Let $\mathcal{B}_{\mathcal{C}^3}$ be the cubical disk packing obtained by the ball-arrangement projection of $\mathcal{C}^3$. By mapping $b_v\rightarrow b_{\pi(v)}$, for every $v\in\mathcal{O}^4$, we construct a bijection $\widetilde{\pi}:\mathcal{B}_1\rightarrow\mathcal{B}_{\mathcal{C}^3}$.\\
Let $\mathcal{B}_x$ be the packing made by the six spheres of $\mathcal{B}_1^*$ which are orthogonal to $H_\mathcal{C}:=\{x=0\}\subset\widehat{\mathbb{R}^3}$ (see the blue packing in Figure \ref{fig:trinidad2}), and let $\Gamma_\mathcal{C}<\mathsf{A}(\mathcal{B}_1)$ be the group generated by the inversions on the spheres of $\mathcal{B}_x$. According to the labelling of $\mathcal{B}_1$, we have that $\Gamma_\mathcal{C}$ corresponds to the parabolic subgroup of $\mathsf A(\mathcal{B}_1)$ $$\langle s_{12\overline{34}},s_{1\overline{23}4},s_{\overline{12}34},s_{\overline123\overline4},s_{\overline12\overline34},s_{1\overline23\overline4}\rangle.$$
By intersecting with $H_\mathcal{C}$, and then identifying $H_\mathcal{C}$ to $\widehat{\mathbb{R}^2}$, we map $\mathcal{B}_x$ to $\mathcal{B}_{\mathcal{C}^3}^*$ (see Figure \ref{fig:blues}).
\begin{figure}[H]
\includegraphics[width=.26\textwidth]{img/orthocubical1}
\hspace{1cm}
\includegraphics[width=.262\textwidth]{img/primal_dualcubic}
\caption{(Left) $\mathcal{B}_1$ in gray with $\mathcal{B}_x$ in blue; (right)
$\mathcal{B}_{\mathcal{C}^3}$ in gray with $\mathcal{B}_{\mathcal{C}^3}^*$ in blue.}\label{fig:blues}
\end{figure}
Therefore, we can define a group isomorphism $\phi_\mathcal{C}:\Gamma_\mathcal{C}\rightarrow \mathsf{A}(\mathcal{B}_{\mathcal{C}^3})$ by mapping the inversion on every sphere $b\in \mathcal{B}_x$ to the inversion on the circle $(b\cap H_\mathcal{C})\in\mathcal{B}_{\mathcal{C}^3}^*$. Let $\Sigma_\mathcal{C}:=\Gamma_\mathcal{C}\cdot \mathcal{B}_1$. It can be checked that for every $g\in \Gamma_\mathcal{C}$ and every $b\in \mathcal B_1$,
\begin{align*}
\psi_\mathcal{C}:\Sigma_\mathcal{C}&\rightarrow\Omega(\mathcal{B}_{\mathcal{C}^3})\\
g\cdot b&\mapsto \phi(g)\cdot\widetilde\pi(b)
\end{align*}
defines an equivariant bijection with respect to the action of $\Gamma_\mathcal{C}$ on $\mathcal B_1$. Therefore, $\Sigma_\mathcal{C}$ is a cubical Apollonian section of $\Omega(\mathcal{B}_1)$.
\end{proof}
We notice that the bijections $\psi_\mathcal{T}$ and $\psi_\mathcal{O}$ described above both preserve the inversive product, which is not the case for $\psi_\mathcal{C}$. Indeed, spheres which are tangent in the cubical Apollonian section may correspond to disjoint disks in the cubical Apollonian packing. To see this, consider the cubical Apollonian packing $\Omega(\mathcal{B}_{\mathcal{C}^3})$ given above with a $2$-coloring. Then, two disks $\psi_\mathcal{C}(b)$ and $\psi_\mathcal{C}(b')$ have the same color if and only if the centers of $b$ and $b'$ lie on the same side of the plane $H_\mathcal{C}$. When two disks of the same color (and therefore disjoint) correspond to two vertices lying in a same square face, the corresponding spheres are tangent (see Figure \ref{fig:cubicbw2}).
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{scope}[xshift=-5.6cm]
\node at (0,0) { \includegraphics[width=.25\textwidth]{img/sections/apohcubeB1.png}};
\end{scope}
\begin{scope}[xshift=-1.9cm]
\node at (0,0) { \includegraphics[width=.25\textwidth]{img/sections/cubsection.png} };
\draw[very thick] (-.55,-.55) rectangle (.55,.55);
\end{scope}
\begin{scope}[xshift=2cm]
\node at (0,0) {
\includegraphics[width=.25\textwidth]{img/orthocubical0packzoom} };
\draw[very thick] (-2.,-2.) rectangle (2.,2.);
\end{scope}
\begin{scope}[xshift=6.2cm]
\node at (0,0) {
\includegraphics[width=.25\textwidth]{img/apolloniancubicbw} };
\end{scope}
\end{tikzpicture}
\caption{From left to right: the orthoplicial Apollonian sphere packing $\Omega(\mathcal{B}_1)$; a cubical Apollonian section $\Sigma_\mathcal{C}$ of $\Omega(\mathcal{B}_1)$; $\Sigma_\mathcal{C}$ zoomed; the corresponding cubical Apollonian disk packing with a minimal coloration.}\label{fig:cubicbw2}
\end{figure}
On the arithmetic side, the bijections $\psi_\mathcal{T}$ and $\psi_\mathcal{O}$ preserve the curvatures, in contrast to $\psi_\mathcal{C}$. However, by composing $\psi_\mathcal{C}$ with a rescaling of $\widehat{\mathbb{R}^2}$ by a factor of $\sqrt{2}$, we obtain a bijection that does preserve the curvatures. These properties will be important in the next section in order to construct integral orthoplicial Apollonian packings containing an Apollonian section with prescribed curvatures.
\newpage
\subsection{Construction of orthoplicial Apollonian packings containing a given integral section.} In this section we shall prove Theorem \ref{thm:sectionscurv}. For the integrality part, we will need the following three propositions.
\begin{prop}
\label{prop:intaportho} Let $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ be the curvatures of four pairwise tangent spheres of an orthoplicial sphere packing $\mathcal{B}_{\mathcal{O}^4}$. Then, $\Omega(\mathcal B_{\mathcal{O}^4})$ is integral if and only if $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ and $ \sqrt{\mathfrak T_3(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}$ are integers.
\end{prop}
\begin{proof}
Nakamura gave a proof in \cite{nakamura2014localglobal} for all orthoplicial sphere packings Möbius equivalent to the standard $\mathcal{B}_0$. Since the orthoplex is Möbius unique, Nakamura's proposition applies to every orthoplicial sphere packing.
\end{proof}
\begin{prop} Let $\kappa_1, \kappa_2, \kappa_3$ be the curvatures of three pairwise tangent disks of an octahedral disk packing $\mathcal{B}_{\mathcal{O}^3}$. Then, $\Omega(\mathcal{B}_{\mathcal{O}^3})$ is integral if and only if $\kappa_1$, $\kappa_2$, $\kappa_3$ and $\sqrt{2\mathfrak T_2(\kappa_1,\kappa_2,\kappa_3)}$ are integers.\label{prop:intocto}
\end{prop}
\begin{proof}
The sufficiency part was treated in \cite{RR21_1}. For the necessity, we mimic the method used by Nakamura to prove Proposition \ref{prop:intaportho}. Let us suppose that $\Omega(\mathcal{B}_{\mathcal{O}^3})$ is integral.
Then
\begin{align}\label{eq:integerlem}
\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)=\kappa_1\kappa_2+\kappa_2\kappa_3+\kappa_1\kappa_3\in\mathbb Z.
\end{align}
By the antipodal relation we have that
\begin{align*}
2\kappa_{\mathcal{O}^3}=\kappa_v+\kappa_{\bar v}\in \mathbb{Z}.
\end{align*}
On the other hand, by Corollary \ref{cor:glueingpol}, we have
\begin{align*}
\kappa_{\mathcal{O}^3}=\kappa_1+\kappa_2+\kappa_3\pm \sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}.
\end{align*}
Therefore,
\begin{align*}
2\kappa_{\mathcal{O}^3}= 2(\kappa_1+\kappa_2+\kappa_3)\pm2\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}\in \mathbb{Z}
\end{align*}
which implies that $2\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}$ is an integer. Let us show that it is an even integer. If there were $m\in\mathbb Z$ such that
$2\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}=2m+1$, then we would have
\begin{align*}
\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}=m+\frac12\Leftrightarrow 2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)=m^2+m+\frac14\not\in\mathbb Z,
\end{align*}
contradicting \eqref{eq:integerlem}. Hence, $2\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}$ is an even integer, so $\sqrt{2\mathfrak{T}_2(\kappa_1,\kappa_2,\kappa_3)}$ is also an integer.
\end{proof}
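Proposition \ref{prop:intocto} gives an effective integrality test. As an illustration, the following Python sketch (the search bound is arbitrary) enumerates small triples of non-negative integer curvatures passing the test, together with the curvature $\kappa_{\mathcal{O}^3}=\kappa_1+\kappa_2+\kappa_3+\sqrt{2\mathfrak{T}_2}$ given by Corollary \ref{cor:glueingpol} with the $+$ sign:

```python
# Enumerate integral octahedral seeds: triples of non-negative integer
# curvatures (k1 <= k2 <= k3) with 2*T2(k1,k2,k3) a perfect square.
from itertools import combinations_with_replacement
from math import isqrt

def T2(k1, k2, k3):
    # The polynomial of Equation (eq:integerlem).
    return k1 * k2 + k2 * k3 + k1 * k3

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

seeds = [t for t in combinations_with_replacement(range(12), 3)
         if t != (0, 0, 0) and is_square(2 * T2(*t))]

# Curvature of the glued disk, choosing the + sign in the corollary.
k_oct = [sum(t) + isqrt(2 * T2(*t)) for t in seeds]
```

For instance, the triple $(1,2,2)$ passes the test, since $2\mathfrak{T}_2(1,2,2)=16$, and yields $\kappa_{\mathcal{O}^3}=5+4=9$.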
\begin{prop}
Let $\kappa_1,\kappa_2,\kappa_3$ be the curvatures of three consecutive tangent disks in a cubical disk packing $\mathcal{B}_{\mathcal{C}^3}$. Then $\Omega(\mathcal{B}_{\mathcal{C}^3})$ is integral if and only if $\kappa_1$, $\kappa_2$, $\kappa_3$ and $\sqrt{2\mathfrak C_2(\kappa_1,\kappa_2,\kappa_3)}$ are integers.\label{prop:intcube}
\end{prop}
\begin{proof}
Identical to the proof of Proposition \ref{prop:intocto}.
\end{proof}
We may now prove Theorem \ref{thm:sectionscurv}. Let us recall the statement.\\
\paragraph{\textbf{Theorem 1}.}\textit{Let $\Omega(\mathcal{B}_\mathcal{P})$ be either a tetrahedral, an octahedral or a cubical Apollonian packing. There is an orthoplicial Apollonian packing $\Omega(\mathcal{B}_{\mathcal{O}^4})$ containing an Apollonian section arithmetically equivalent to $\Omega(\mathcal{B}_\mathcal{P})$. Moreover, $\Omega(\mathcal{B}_\mathcal{P})$ is integral if and only if $\Omega(\mathcal{B}_{\mathcal{O}^4})$ is integral.}
\begin{proof}
We recall that the curvature of a $d$-ball $b$ can be obtained from its inversive coordinates $\mathbf{x}_b$ by
\begin{align*}
\kappa(b)=\mathbf{k}_{d+2}^T\mathbf x_b
\end{align*}
where $\mathbf{k}_{d+2}$ is the $(d+2)$-column matrix $(0,\ldots,0,-1,1)$, c.f. Equation \eqref{eq:curvasprod}.\\
Let $\mathcal{B}_{\mathcal{O}^4}$ be the orthoplicial ball packing obtained from $\mathcal{B}_0$ by applying a translation of $\mathbb{R}^3$ of vector $(-1,0,0)$, so $H_\mathcal{T}$ becomes the plane given by $\{x=0\}$. Let $\Sigma_\mathcal{T}=\Gamma_\mathcal{T}\cdot X_\mathcal{T}$ be the corresponding tetrahedral section of $\Omega(\mathcal{B}_{\mathcal{O}^4})$ and let $\mathcal{B}_{\mathcal{T}^3}$ be the tetrahedral disk packing given by $\psi_\mathcal{T}(X_\mathcal{T})$. In inversive coordinates, the bijection $ \psi^{-1}_\mathcal{T}:\Omega(\mathcal{B}_{\mathcal{T}^3})\rightarrow \Sigma_\mathcal{T}\subset \Omega(\mathcal{B}_{\mathcal{O}^4})$ is given by
\begin{align*}
\begin{array}{rccc}
\psi^{-1}_\mathcal{T}:& \Omega(\mathcal{B}_{\mathcal{T}^3})&\rightarrow&\Sigma_\mathcal{T}\\
&\mathbf x_b&\mapsto& (0,\mathbf{x}_b)
\end{array}
\end{align*}
Let $\mathcal{B}_{\mathcal{T}^3}'$ be any tetrahedral disk packing. Since the tetrahedron is Möbius unique, there exists $\mu\in\mathsf{M\ddot{o}b}(\widehat{\mathbb{R}^2})$ such that $\mu\cdot\mathcal{B}_{\mathcal{T}^3}=\mathcal{B}_{\mathcal{T}^3}'$. Therefore, $\mu\cdot \Omega(\mathcal{B}_{\mathcal{T}^3})=\Omega(\mathcal{B}_{\mathcal{T}^3}')$. Let $\mathbf M\in O^\uparrow_{3,1}(\mathbb{R})$ be the matrix corresponding to $\mu$ and let $\tilde\mu\in\mathsf{M\ddot{o}b}(\widehat{\mathbb{R}^3})$ be the Möbius transformation corresponding to the matrix
\begin{align*}
\left(
\begin{array}{c|c}
1 & \mathbf0_4^T \\
\hline
\mathbf0_4 & \mathbf M
\end{array}\right)\in O^\uparrow_{4,1}(\mathbb{R})
\end{align*}
where $\mathbf0_4$ is the null column-matrix of size $4$.
We have that $\Sigma_\mathcal{T}':=\widetilde{\mu}\cdot \Sigma_\mathcal{T}$ is a tetrahedral section of the orthoplicial Apollonian packing $\Omega(\mathcal{B}_{\mathcal{O}^4}')=\widetilde{\mu}\cdot \Omega(\mathcal{B}_{\mathcal{O}^4})$. We define the bijection
$$\widetilde \psi_\mathcal{T}:= \mu\circ \psi_\mathcal{T}\circ\widetilde \mu^{-1}$$
mapping $\Sigma_\mathcal{T}'$ to $\Omega(\mathcal{B}_{\mathcal{T}^3}')$. Let us show that $\widetilde \psi_\mathcal{T}$ preserves curvatures. For every disk $b\in \Omega(\mathcal{B}_{\mathcal{T}^3}')$, we have
\begin{align*}
\kappa(\widetilde \psi_\mathcal{T}^{-1} (b))&=\kappa(\widetilde \mu\circ \psi_\mathcal{T}^{-1}\circ \mu^{-1}(b))\\
&=\mathbf k_5^T\left(
\begin{array}{c|c}
1 & \mathbf0_4^T \\
\hline
\mathbf0_4 & \mathbf M
\end{array}\right)
\left(
\begin{array}{c}
\mathbf0_4^T \\
\hline
\mathbf I_4
\end{array}\right)
\mathbf M^{-1} \mathbf x_{b}&\text{where $\mathbf I_4$ is the identity matrix of size $4$}\\
&=\mathbf k_5^T
\begin{pmatrix}
0 \\
\mathbf x_{b}
\end{pmatrix}\\
&=\mathbf k_4^T\, \mathbf x_{b}=\kappa(b)
\end{align*}
and therefore, $\Omega(\mathcal{B}_{\mathcal{T}^3}')$ and $\Sigma_\mathcal{T}'\subset\Omega(\mathcal{B}_{\mathcal{O}^4}')$ are arithmetically equivalent.\\
We may use the beginning of the proof to show the octahedral case in a similar way. In this case, we consider the orthoplicial sphere packing obtained from $\mathcal{B}_0$ after a rotation of angle $\frac{\pi}{4}$ around the axis directed by $(0,0,1)$, so that the plane $H_\mathcal{O}$ becomes the plane $\{x=0\}$. Then, the bijection $\psi^{-1}_\mathcal{O}:\Omega(\mathcal{B}_{\mathcal{O}^3})\rightarrow \Sigma_\mathcal{O}$ has the same expression in inversive coordinates as $\psi^{-1}_\mathcal{T}$, and the rest of the computation is identical to the above.\\
The cubical case is more delicate since the corresponding Apollonian section is not obtained by a cross section. Let $\mathcal{B}_1$ be
the orthoplicial sphere packing of Figure \ref{fig:hoctB1} and let $\Sigma_\mathcal{C}=\Gamma_\mathcal{C}\cdot\mathcal{B}_1$ be the cubical section described in the proof of Theorem \ref{thm:aposections}. Let $\mathcal{B}_{\mathcal{C}^3}$ be the cubical disk packing given by $\psi_\mathcal{C}(\mathcal{B}_1)$. We define the function $\epsilon:\mathcal{B}_{\mathcal{C}^3}\rightarrow\{1,-1\}$ by $\epsilon(b)=\sign(x_1)$ where $x_1$ is the first coordinate of the vertex corresponding to $\psi_\mathcal{C}^{-1}(b)$. Then, in inversive coordinates, the bijection $\psi^{-1}_\mathcal{C}:\mathcal{B}_{\mathcal{C}^3}\rightarrow \mathcal{B}_1$ and the group isomorphism $\phi_\mathcal{C}^{-1}:\mathsf{A}(\mathcal{B}_{\mathcal{C}^3})\rightarrow \Gamma_\mathcal{C}$ are given by
\begin{align}\label{eq:cubsecsmaps}
\begin{array}{rccc}
\psi^{-1}_\mathcal{C}:& \mathcal{B}_{\mathcal{C}^3}&\longrightarrow& \mathcal{B}_1 \\
&\mathbf x_b&\longmapsto& \tfrac{1}{\sqrt{2}} (\epsilon(b),\mathbf{x}_b)
\end{array}&&
\begin{array}{rccc}
\phi_\mathcal{C}^{-1}:& \mathsf{A}(\mathcal{B}_{\mathcal{C}^3})&\rightarrow& \Gamma_\mathcal{C} \\
&\mathbf A&\mapsto& \left(\begin{array}{c|c}
1 & \mathbf0_4^T \\
\hline
\mathbf0_4 & \mathbf A
\end{array}\right)
\end{array}
\end{align}
The equivariance of $\psi_\mathcal{C}$ allows us to extend $\psi_\mathcal{C}^{-1}:\Omega(\mathcal{B}_{\mathcal{C}^3})\rightarrow\Sigma_\mathcal{C}$ by
\begin{align}\label{eq:equivFC}
\psi_\mathcal{C}^{-1}(g\cdot b)=\phi_\mathcal{C}^{-1}(g)\cdot \psi_\mathcal{C}^{-1}(b)
\end{align}
for every $g\in\mathsf A(\mathcal{B}_{\mathcal{C}^3})$ and every $b\in\mathcal{B}_{\mathcal{C}^3}$. Let $\mathcal{B}_{\mathcal{C}^3}'$ be any cubical disk packing. As before, the Möbius uniqueness of the cube implies that there is $\mu\in\mathsf{M\ddot{o}b}(\widehat \mathbb{R}^2)$ such that $\mu\cdot \Omega(\mathcal{B}_{\mathcal{C}^3})=\Omega(\mathcal{B}_{\mathcal{C}^3}')$. We define $\widetilde\mu\in\mathsf{M\ddot{o}b(\widehat{\mathbb{R}^3})}$ analogously to the tetrahedral case described above. Let $\widehat{\mu}=\lambda_{\frac{1}{\sqrt{2}}}\circ\widetilde{\mu}\in\mathsf{M\ddot{o}b(\widehat{\mathbb{R}^3})}$ where $\lambda_{\frac{1}{\sqrt{2}}}$ is the rescaling of $\widehat{\mathbb{R}^3}$ with factor $\frac{1}{\sqrt{2}}$. We have that $\Sigma_\mathcal{C}':=\widehat{\mu}\cdot \Sigma_\mathcal{C}$ is a cubical section of the orthoplicial Apollonian packing $\Omega(\mathcal{B}_{\mathcal{O}^4}')=\widehat{\mu}\cdot \Omega(\mathcal{B}_1)$. We define the bijection
$\widehat \psi_\mathcal{C}=\mu\circ \psi_\mathcal{C}\circ\widehat\mu^{-1}$ mapping $\Sigma_\mathcal{C}'$ to $\Omega(\mathcal{B}_{\mathcal{C}^3}')$. As above, we need to prove that $\widehat \psi_\mathcal{C}$ preserves curvatures. To do so, we first notice that for any disk $b\in \Omega(\mathcal{B}_{\mathcal{C}^3}')$, there are, by definition, $g'\in \mathsf A(\mathcal{B}_{\mathcal{C}^3}')$ and $b'\in\mathcal{B}_{\mathcal{C}^3}'$ such that $b=g'\cdot b'$. Thus, for any $b\in \Omega(\mathcal{B}_{\mathcal{C}^3}')$, we have
\begin{align*}
\kappa\left(\widehat \psi_\mathcal{C}^{-1} (b)\right)&=\kappa\left(\widehat \psi_\mathcal{C}^{-1} (g'\cdot b')\right)\\
&=\kappa\left(\widehat \mu\circ \psi_\mathcal{C}^{-1}\circ \mu^{-1}(g'\cdot b')\right)\\
&=\kappa\left(\lambda_{\frac{1}{\sqrt{2}}}\circ\widetilde{\mu}\circ \psi_\mathcal{C}^{-1}\circ \mu^{-1}(g'\cdot b')\right)\\
&=\sqrt{2}\kappa\left(\widetilde{\mu}\circ \psi_\mathcal{C}^{-1}\circ \mu^{-1}(\mu g\mu^{-1})\cdot(b')\right)& \text{where $g=\mu^{-1} g'\mu\in \mathsf A(\mathcal{B}_{\mathcal{C}^3})$}\\
&=\sqrt{2}\kappa\left(\widetilde{\mu}\circ \phi_\mathcal{C}^{-1}(g)\cdot \psi_\mathcal{C}^{-1}\circ \mu^{-1}(b')\right)& \text{by \eqref{eq:equivFC}} \\
&=\sqrt{2}\mathbf k_5^T\left(
\begin{array}{c|c}
1 & \mathbf0_4^T \\
\hline
\mathbf0_4 & \mathbf M
\end{array}\right)
\left(
\begin{array}{c|c}
1 & \mathbf0_4^T \\
\hline
\mathbf0_4 & \mathbf A
\end{array}\right)\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
\epsilon(\mu^{-1}(b')) \\
\mathbf x_{\mu^{-1}(b')}
\end{array}\right)& \text{where $\mathbf A$ is the matrix of $g$} \\
&=\mathbf k_5^T
\left(
\begin{array}{c}
\epsilon(\mu^{-1}(b')) \\
\mathbf M\mathbf A\mathbf x_{\mu^{-1}(b')}
\end{array}\right)\\
&=\mathbf k_4^T\,\mathbf M\mathbf A\mathbf x_{\mu^{-1}(b')}\\
&=\mathbf k_4^T\,\mathbf M\mathbf A\mathbf M^{-1}\mathbf x_{b'}\\
&=\kappa(\mu g\mu^{-1}\cdot b')\\
&=\kappa(g'\cdot b')=\kappa(b)\\
\end{align*}
Let us now suppose that $\Omega(\mathcal{B}_{\mathcal{T}^3}')$, $\Omega(\mathcal{B}_{\mathcal{O}^3}')$ and $\Omega(\mathcal{B}_{\mathcal{C}^3}')$ are integral. We shall show that, in all three cases, we can find four pairwise tangent spheres of $\mathcal{B}_{\mathcal{O}^4}'$ with curvatures $\kappa_1$, $\kappa_2$, $\kappa_3$ and $\kappa_4$ such that $\sqrt{\mathfrak T_3(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}\in\mathbb Z$. The integrality of $\Omega(\mathcal{B}_{\mathcal{O}^4}')$ then follows from Proposition \ref{prop:intaportho}. \\
(Tetrahedral section) Let $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ be the curvatures of the four disks of $\mathcal{B}_{\mathcal{T}^3}'$. By Descartes' theorem, we have that
$$\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_4)=0.$$
Since $\widetilde\psi^{-1}_\mathcal{T}$ preserves curvatures, $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ are also the curvatures of four pairwise tangent spheres of $\mathcal{B}_{\mathcal{O}^4}'$, and they satisfy $\sqrt{\mathfrak T_3(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}=0\in\mathbb Z$.\\
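As a quick numerical illustration (not part of the proof), one can check Descartes' relation $\mathfrak T_3=0$ on known integral quadruples. The sketch below writes $\mathfrak T_3$ with the normalization implied by the octahedral computation further on; the sample quadruples are classical Apollonian examples, not taken from this paper.

```python
# Hedged sketch: T3 with the normalization implied by the identity
# sqrt(T3) = (1/2) * sqrt((k1+k2+k3+k4)^2 - 2*(k1^2+k2^2+k3^2+k4^2)).
def t3(k1, k2, k3, k4):
    s = k1 + k2 + k3 + k4
    q = k1**2 + k2**2 + k3**2 + k4**2
    return (s * s - 2 * q) / 4

# Classical integral Descartes quadruples (curvatures of four pairwise
# tangent disks); Descartes' theorem says T3 vanishes on each of them.
for quad in [(-1, 2, 2, 3), (-3, 5, 8, 8), (-3, 5, 8, 12)]:
    print(quad, t3(*quad))  # -> 0.0 for every quadruple
```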
(Octahedral section) Let $\kappa_1,\kappa_2,\kappa_3,\kappa_{-1},\kappa_{-2},\kappa_{-3}$ be the curvatures of the six disks of $\mathcal{B}_{\mathcal{O}^3}'$ under an antipodal labelling, so $\kappa_i$ and $\kappa_{-i}$ are the curvatures of non-tangent disks. By \eqref{eq:DescartesTh2}, we have
\begin{align*}
(\kappa_1-\kappa_{\mathcal{O}^3})^2+ (\kappa_2-\kappa_{\mathcal{O}^3})^2+ (\kappa_3-\kappa_{\mathcal{O}^3})^2=2\kappa_{\mathcal{O}^3}^2
\end{align*}
where $\kappa_{\mathcal{O}^3}=\frac12(\kappa_i+\kappa_{-i})$ for every $i=1,2,3$. Since $\widetilde\psi^{-1}_\mathcal{O}$ preserves curvatures, $\kappa_1,\kappa_2,\kappa_3,\kappa_{-1},\kappa_{-2},\kappa_{-3}$ are also the curvatures of six spheres of $\mathcal{B}_{\mathcal{O}^4}'$. Let $\kappa_4$ and $\kappa_{-4}$ be the curvatures of the two remaining spheres of $\mathcal{B}_{\mathcal{O}^4}'$. One can check that the labelling of the curvatures in $\mathcal{B}_{\mathcal{O}^4}'$ is also antipodal. By the antipodal relation, which holds for both $\mathcal{B}_{\mathcal{O}^3}'$ and $\mathcal{B}_{\mathcal{O}^4}'$, we have
\begin{align}\label{eq:antioctoortho}
\kappa_{\mathcal{O}^3}=\frac{\kappa_1+\kappa_{-1}}{2}=\kappa_{\mathcal{O}^4}.
\end{align}
We also have that $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ are the curvatures of four pairwise tangent spheres of $\mathcal{B}_{\mathcal{O}^4}'$. By combining \eqref{eq:antioctoortho} with Corollary \ref{cor:descaroct} we obtain
\begin{align*}\label{eq:orthodes}
(\kappa_1-\kappa_{\mathcal{O}^4})^2+ (\kappa_2-\kappa_{\mathcal{O}^4})^2+ (\kappa_3-\kappa_{\mathcal{O}^4})^2+ (\kappa_4-\kappa_{\mathcal{O}^4})^2&=2\kappa_{\mathcal{O}^4}^2\\
\Leftrightarrow (\kappa_1-\kappa_{\mathcal{O}^3})^2+ (\kappa_2-\kappa_{\mathcal{O}^3})^2+ (\kappa_3-\kappa_{\mathcal{O}^3})^2+ (\kappa_4-\kappa_{\mathcal{O}^3})^2&=2\kappa_{\mathcal{O}^3}^2\\
\Leftrightarrow \kappa_4&=\kappa_{\mathcal{O}^3}\\
\end{align*}
Therefore,
\begin{align*}
\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}
&=\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_{\mathcal{O}^3})}\\
&=\frac12\sqrt{\left(\kappa _1+\kappa _2+\kappa _3+\kappa _{\mathcal{O}^3}\right){}^2-2 \left(\kappa _1^2+\kappa _2^2+\kappa _3^2+\kappa _{\mathcal{O}^3}^2\right)}\\
&= \frac12 \sqrt{2(\kappa_1\kappa_2+\kappa_2\kappa_3+\kappa_1\kappa_3)+2\kappa_{\mathcal{O}^3}^2-
(\kappa_1-\kappa_{\mathcal{O}^3})^2- (\kappa_2-\kappa_{\mathcal{O}^3})^2- (\kappa_3-\kappa_{\mathcal{O}^3})^2}\\
&= \frac12 \sqrt{2(\kappa_1\kappa_2+\kappa_2\kappa_3+\kappa_1\kappa_3)}\\
&= \frac12 \sqrt{2\mathfrak T_{2}(\kappa_1,\kappa_2,\kappa_3)}
\end{align*}
Since $\Omega(\mathcal{B}_{\mathcal{O}^3}')$ is integral, $2\mathfrak T_{2}(\kappa_1,\kappa_2,\kappa_3)$ is an even integer. Moreover, by Proposition \ref{prop:intocto}, $\sqrt{2\mathfrak T_{2}(\kappa_1,\kappa_2,\kappa_3)}$ is also an even integer, thus implying that $\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}$ is an integer.\\
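The octahedral computation above can be spot-checked numerically. The sketch below uses the initial curvatures $(-2,4,5)$ from Figure \ref{fig:intoctosec}; the closed form for $\kappa_{\mathcal{O}^3}$ comes from solving the displayed quadratic relation for $\kappa_{\mathcal{O}^3}$, and is an assumption of this sketch rather than a formula stated in the text.

```python
import math

def t3(k1, k2, k3, k4):
    s = k1 + k2 + k3 + k4
    return (s * s - 2 * (k1**2 + k2**2 + k3**2 + k4**2)) / 4

def t2(k1, k2, k3):
    return k1 * k2 + k2 * k3 + k1 * k3

# (k1-kO)^2 + (k2-kO)^2 + (k3-kO)^2 = 2*kO^2 is quadratic in kO;
# its roots are k1+k2+k3 +/- sqrt(2*T2).  Take the smaller root.
k1, k2, k3 = -2, 4, 5
kO = (k1 + k2 + k3) - math.sqrt(2 * t2(k1, k2, k3))

print(kO)                                     # -> 5.0
print(t3(k1, k2, k3, kO), t2(k1, k2, k3)/2)   # equal: T3 = T2/2
```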
(Cubical section) Let $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ be the curvatures of four consecutive tangent disks $b_1,b_2,b_3,b_4$ of $\mathcal{B}_{\mathcal{C}^3}'$ with $b_4$ tangent to $b_1$. By applying the antipodal relation on a square-face of $\mathcal{B}_{\mathcal{C}^3}'$, we have
$$\kappa_1+\kappa_3=\kappa_2+\kappa_4.$$
Since $\widehat\psi^{-1}_\mathcal{C}$ preserves curvatures, $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ are also the curvatures of four pairwise tangent spheres of $\mathcal{B}_{\mathcal{O}^4}'$. Then, we have
\begin{align*}
\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}
&=\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_1+\kappa_3-\kappa_2)}\\
&=\sqrt{\kappa_1\kappa_2+\kappa_2\kappa_3+\kappa_1\kappa_3-\kappa_2^2}\\
&=\sqrt{2\mathfrak C_3(\kappa_1,\kappa_2,\kappa_3)}
\end{align*}
By the integrality of $\Omega(\mathcal{B}_{\mathcal{C}^3}')$ and Proposition \ref{prop:intcube}, we have that $\sqrt{\mathfrak T_{3}(\kappa_1,\kappa_2,\kappa_3,\kappa_4)}$ is an integer.
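The algebraic collapse of $\mathfrak T_3$ used in the cubical case can also be confirmed by brute force over random integer triples (a sanity sketch, not part of the argument):

```python
import random

def t3(k1, k2, k3, k4):
    s = k1 + k2 + k3 + k4
    return (s * s - 2 * (k1**2 + k2**2 + k3**2 + k4**2)) / 4

# With k4 = k1 + k3 - k2 (square-face antipodal relation), T3 reduces
# to k1*k2 + k2*k3 + k1*k3 - k2**2, as in the derivation above.
random.seed(1)
for _ in range(1000):
    k1 = random.randint(-50, 50)
    k2 = random.randint(-50, 50)
    k3 = random.randint(-50, 50)
    k4 = k1 + k3 - k2
    assert t3(k1, k2, k3, k4) == k1*k2 + k2*k3 + k1*k3 - k2**2
print("identity verified on 1000 random triples")
```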
\end{proof}
The proof of Theorem \ref{thm:sectionscurv} is constructive, thus giving a method to obtain integral orthoplicial Apollonian packings containing a given tetrahedral, octahedral or cubical section. We have applied this method to produce the three orthoplicial Apollonian packings
shown in Figures \ref{fig:inttetrasec}, \ref{fig:intoctosec} and \ref{fig:intcubsec}.
\begin{figure}[H]
\centering
\includegraphics[width=.32\textwidth]{img/gaskets/octgasket.pdf}
\includegraphics[width=.32\textwidth]{img/sections/orthocto245L.png}
\includegraphics[width=.32\textwidth]{img/sections/octosec245.png}
\caption{From left to right: an integral octahedral Apollonian packing with initial curvatures $(-2,4,5)$; an integral orthoplicial Apollonian packing with initial curvatures $(-2,4,5,5)$; an octahedral Apollonian section of the packing in the center, which is arithmetically equivalent to the packing on the left.
}\label{fig:intoctosec}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.32\textwidth]{img/gaskets/cubicgasket4.pdf}
\includegraphics[width=.32\textwidth]{img/sections/ortho3512.png}
\includegraphics[width=.32\textwidth]{img/sections/orthocubsec.png}
\caption{From left to right: an integral cubical Apollonian packing with initial curvatures $(-3,5,12)$; an integral orthoplicial Apollonian packing with initial curvatures $(-3,5,12,20)$; a cubical Apollonian section of the packing in the center, which is arithmetically equivalent to the packing on the left.
}\label{fig:intcubsec}
\end{figure}
\section{Open questions and concluding remarks}\label{sec:conclu}
In \cite{RR21_1} it was also conjectured that for $d\ge2$, all edge-scribable $(d+1)$-polytopes are Möbius unique. In view of the family of polytopes that are known to be Möbius unique, a natural next step would be to show the validity of the conjecture for regular polytopes.
\begin{question}
Are the 600-cell and the 120-cell Möbius unique?
\end{question}
We believe that the techniques developed in this work can be useful to study open questions on the behaviour of the curvatures in integral Apollonian packings, such as the local-global principle. Similar techniques can be applied to construct integral Apollonian packings of the hypercube and the 24-cell containing Platonic Apollonian sections.\\
For instance, we can consider the polytopal sphere packing $\mathcal{B}_{\mathcal{R}^4}$ obtained by rescaling a $1$-CBP projection of the $24$-cell by a factor of $\frac{1}{\sqrt3}$ (see Figure \ref{fig:stand24}). Numerical experiments suggest that every integer is represented, with no modulo restriction, as a curvature of $\Omega(\mathcal{B}_{\mathcal{R}^4})$. We end with the following two conjectures.
\begin{conj}
The set of curvatures of $\Omega(\mathcal{B}_{\mathcal{R}^4})$ is $\mathbb N$.
\end{conj}
\begin{conj}
There is a sequence of consecutive tangent spheres $(b_0,b_1,\ldots)\subset \Omega(\mathcal{B}_{\mathcal{R}^4})$ such that, for every $n\in\mathbb N$, the curvature of $b_n$ is $n$.
\end{conj}
\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{img/gaskets/stand24.png}
\includegraphics[width=.4\textwidth]{img/gaskets/cell24conj.png}
\caption{The sphere packing $\mathcal{B}_{\mathcal{R}^4}$ (left) and its Apollonian packing $\Omega(\mathcal{B}_{\mathcal{R}^4})$ (right) with the curvatures.}\label{fig:stand24}
\end{figure}
\paragraph{\textbf{Figures:}} All the figures in this paper were produced with our own software written in Mathematica 12.2 \cite{Mathematica}.\\
\paragraph{\textbf{Acknowledgements:}} This work is a part of the content of the PhD thesis of the author at the University of Montpellier. The author would like to thank his advisor, Prof. Jorge Ram\'irez Alfons\'in for several helpful suggestions and many fruitful discussions.
\printbibliography[
heading=bibintoc,
title={References}
]
\end{document}
\section{Introduction}
Optical properties of metallic nanostructures are determined by the collective excitations of their conduction electrons with respect to the positive background ions, which are called plasmons.\cite{Kreibig} Plasmonic nanostructures are widely based on Ag and Au, which exhibit localized surface plasmon resonance as a collective dipolar oscillation of free electrons in phase with the incident electromagnetic field. Plasmonic nanostructures have a wide variety of applications. \cite{Caucheteur,Choi,Hou,Mandal} If plasmons on neighboring parts of the nanostructures joined by surfaces and interfaces interact, they can hybridize just like the electron wave functions of molecular orbitals. Such plasmon hybridization represents a powerful paradigm for designing metallic resonant nanostructures.\cite{Prodan,Laghrissi} The resulting optical properties of plasmonic nanostructures are in many ways complementary to those of semiconductor quantum dot nanostructures.\cite{Wang,Kim,Mishra,Smith}
Metallic nanogaps have been advantageous in offering interesting quantum plasmonics opportunities.\cite{Esteban} It has been observed that a metal nanogap can highly confine an electric field of the incident long-wavelength light between the gap of two metal surfaces (referred to as the field enhancement).\cite{Im,Chen,Siegfried} A metal/insulator/metal nanogap offers effective mode volumes well below the diffraction limit in the gap material, even if there is a significant loss of energy inside the metal.\cite{Maier} Even under non-resonant excitation, the spontaneous emission rate can be enhanced by coupling a dipolar emitter in the slot waveguide structure due to strong confinement of electromagnetic modes between the two metal surfaces.\cite{Jun} Large scale spacers of nanometer size gaps can be prepared using Al$_{2}$O$_{3}$ \cite{Chen} and a self-assembled monolayer.\cite{Beesley} The incorporation of quantum dots (QDs) using linker molecule has also been demonstrated.\cite{Tripathi} To reduce the interfacial electron-hole recombination, the thiol (--SH) functional group is used as a hole quencher, which also has the potential to anchor the catalytic center.\cite{Das,Yue,AR} Thiol exhibits a strong affinity towards the transition metal, such as Au, Ag, Pt, and metal chalcogenide-based semiconductors, e.g. CdS and CdSe nanoparticles.\cite{Pham,Love}
Advances in the development of plasmonics have stimulated intensive investigations of electrical detection of plasmon resonance using electrical currents in semiconductors generated due to an energy transfer from plasmons.\cite{Tang,Atwater,Moskowitz} The near-field optical intensity resulting from the interaction of light with nanostructured metals can exceed the incident light intensity by two to three orders of magnitude.\cite{Seo} However, the transformation of this near-field optical energy into electricity in semiconductors has received much less attention.\cite{Moskowitz,Falk,Ishi} Such a conversion would allow for easy integration and high-speed, low-capacitance operation of planar devices that merge optics and electronics.
Here, the plasmon-enhanced photocurrent is observed in thiol-linked-CdSe QDs formed in gold nanogaps. The surface plasmon resonance peaked at about 500 nm is verified by using a photoacoustic (PA) technique. The photoexcited electrons in CdSe transfer to the neighboring dot and then to closely spaced Au surface through thiol links thus forming the photocurrent, which can be tuned by a bias voltage.
\section{Experimental}
A self-assembly lithography method was used to manufacture the CdSe QDs-gold nanogap structure with periodic dimensions, as previously proposed.\cite{Tripathi2016,Tripathi} The resulting structure is shown schematically in Fig.~\ref{Scheme}. The electrical circuit measuring a current through the structure includes two 10 nm thick layers of Al$_{2}$O$_{3}$. A cross-sectional scanning electron microscopy (SEM) image of the structure is given in Fig.~\ref{SEM}. The breakdown voltage for ALD Al$_{2}$O$_{3}$ is found to be about 0.7 V/nm, so that the 10 nm Al$_{2}$O$_{3}$ layer should work as an insulator for applied voltages $V$ smaller than 5 V. The value $V=5$ V was therefore fixed in the photocurrent measurements. A gate bias voltage $V_{g}=\pm 0.1$ V was applied to a 50 nm thick Au layer as shown in Fig.~\ref{Scheme}.
\begin{figure}
\includegraphics[width=80mm]{fig1}
\caption{\label{Scheme} Schematics of the sample fabrication procedure and photocurrent measurement setup.}
\end{figure}
Our measurements demonstrate that a 14 nm thick gold layer has about 50\% transmission of light in the visible region and is thus also suitable for use as a conducting layer. The structure was illuminated from the bottom side, so the light was largely blocked by the 50 nm thick gold. Consequently, the photo-induced processes in the CdSe QDs placed in the nanogaps dominate the effects discussed below.
\begin{figure}
\includegraphics[width=90mm]{fig2}
\caption{\label{SEM} SEM cross-sectional image of the grown structure.}
\end{figure}
Schematic view of the photoacoustic resonator cell is shown in Fig.~\ref{PAsetup}. When light absorption occurs in the structure, a resonance-enhanced acoustic signal is generated and sensed by a microphone. The PA measurements were performed in the modulation frequency range from 10 Hz to 1 kHz. A chopped 250 W tungsten halogen lamp (Osram) and a 405-nm pulsed laser diode (LD) were used as light sources. The chopped or pulsed radiation from the external source passed through a grating monochromator and guided to the cell shown in Fig.~\ref{PAsetup}.
\begin{figure}
\includegraphics[width=70mm]{fig3}
\caption{\label{PAsetup} Schematic view of the air filled photoacoustic resonator cell.}
\end{figure}
\section{Results and discussion}
Figure~\ref{PC} shows the photocurrent response to the broad band light irradiation, which can be tuned by a bias voltage. It is seen that the photocurrent increases with $V_{g}$ and that the current direction for the positive potential (Fig.~\ref{PC}(a)) is reversed compared to that observed for the negative potential (Fig.~\ref{PC}(c)) or the unbiased middle electrode (Fig.~\ref{PC}(b)).
\begin{figure}
\includegraphics[width=70mm]{fig4}
\caption{\label{PC} Time evolution of the photoinduced current through the structure shown in Fig.~\ref{Scheme} at a broad band excitation with relative intensities indicated in the rectangular areas and $V_{g}=+$0.1 V (a), 0 (b) and --0.1 V (c).}
\end{figure}
The Au used for the top and bottom electrodes has a work function of about 5 eV,\cite{Anderson} whereas the electron affinity and bandgap of CdSe QDs are 4.3 eV and 2.2 eV, respectively.\cite{Oertel} The characteristic work functions for Al$_{2}$O$_{3}$ have also been discussed.\cite{Zheng} It can be assumed that the photoexcited electrons in CdSe QDs transfer from the dots to the closely spaced middle Au layer through thiol links after they are generated in CdSe. These electrons have energies higher than the Fermi energy of Au, which enables them to transfer to the gold layer directly. Indeed, if an electric field given by $V_{g}$ is applied across the structure, it provides a current flowing through the two Al$_{2}$O$_{3}$ dielectric layers with a thickness of 10 nm, which is usually considered as an upper thickness limit in the case of quantum mechanical tunneling.\cite{Chiu,Molina-Reyes} This forms either the positive or the negative photocurrent directed downwards or upwards in Fig.~\ref{Scheme}, respectively, via electron transport through the Al$_{2}$O$_{3}$ layers, depending on the sign of $V_{g}$. At $V_{g}=0$, the electric field is much smaller and a small negative current is observed in Fig.~\ref{PC}(b), as defined by the polarity of the external voltage applied to the structure. The electric field of incident light induces coherent collective oscillation of free electrons in the metal layer. This is coupled to a positively charged metallic core, yielding dipolar oscillations resonant with the incident light at a specific frequency of the Au layer. In our structure, the surface plasmon resonance (SPR) wavelength is approximately 500 nm, as confirmed in previous studies.\cite{Tripathi2016,Tripathi} Here, the SPR enhanced photoelectric activity is checked using a photoacoustic technique, as the resulting acoustic pressure can be related to the absorption coefficient of the absorbing medium.\cite{Mandelis}
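A back-of-the-envelope check of this energy bookkeeping can be scripted. The numbers are those quoted above; the value of $hc$ is approximate, and the script is only an illustrative sketch, not part of the reported analysis.

```python
# Rough energy bookkeeping for the proposed transfer mechanism.
AU_WORK_FUNCTION = 5.0    # eV below vacuum (Au Fermi level), from the text
CDSE_AFFINITY = 4.3       # eV (CdSe conduction band edge below vacuum)
CDSE_GAP = 2.2            # eV (CdSe bandgap)

# A photoexcited electron sits at the CdSe conduction band edge,
# i.e. above the Au Fermi level by:
headroom = AU_WORK_FUNCTION - CDSE_AFFINITY
print(f"electron headroom above Au Fermi level: {headroom:.1f} eV")

# SPR photon at ~500 nm versus the CdSe gap (hc ~ 1239.84 eV*nm):
HC_EV_NM = 1239.84
spr_energy = HC_EV_NM / 500.0
print(f"SPR photon energy: {spr_energy:.2f} eV "
      f"(exceeds {CDSE_GAP} eV gap: {spr_energy > CDSE_GAP})")
```

The ~0.7 eV headroom is consistent with the direct electron transfer to gold invoked above, and the ~2.5 eV plasmon energy exceeds the CdSe gap, consistent with plasmon-driven interband excitation.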
Figure~\ref{PA1} shows a comparison of the PA signal magnitudes obtained with a 405 nm LD light on the same structure, before (curve 1) and after (curve 2) filling the nanogaps with CdSe QDs. It is seen that the PA magnitude increases by about 20\% due to QDs, whereas the frequency dependence does not change much in the two types of structures.
\begin{figure}
\includegraphics[width=70mm]{fig5}
\caption{\label{PA1} Photoacoustic signal increase due to CdSe quantum dots excited at the wavelength of 405 nm. Curve 1 - before, curve 2 - after filling the nanogaps with CdSe QDs.}
\end{figure}
Figure~\ref{PA2} shows the photoacoustic spectra of our samples. The enhancement in the PA response peaks at about 500 nm and is related to plasmonic absorption by the Au layers (curve 1 in Fig.~\ref{PA2}). The response is found to be enhanced even further throughout the spectrum due to the CdSe QDs (curve 2 in Fig.~\ref{PA2}). The underlying physical mechanisms responsible for the photocurrent behavior observed in Fig.~\ref{PC} may come from the fact that noble metal nanostructures exhibit localized SPR leading to an extremely enhanced localized electromagnetic field near the surface.\cite{Mao} The likely mechanism of the photocurrent in the CdSe QDs-gold nanogap structure is as follows. The quantum dot is excited with above-bandgap light to generate electron-hole ($e^{-}-h^{+}$) pairs. While the holes may acquire electrons from impurities adsorbed on the QD surface, mobile excess electrons in CdSe transfer to the neighboring dot and then to the closely spaced Au surface through thiol links, establishing a smooth transmission channel of electrons and forming the photocurrent. However, the light absorbance of CdSe QDs is generally weak due to the small number of dots involved in the absorption processes, so that the photocurrent is small. When the surface of the CdSe QDs is in close proximity with Au, the photocurrent can be enhanced by the localized surface plasmons of gold nanotrenches, e.g.\ through the bandgap breaking effect.\cite{Zhang}
\begin{figure}
\includegraphics[width=70mm]{fig6}
\caption{\label{PA2} Spectrum of the photoacoustic signal observed before (1) and after (2) filling the nanogaps with CdSe QDs.}
\end{figure}
In this case, the resonance energies of the surface plasmons can be transferred to CdSe QDs non-radiatively and excite the interband transition in CdSe. The power of this energy transfer and the generation rate of the $e^{-}-h^{+}$ pairs are proportional to the square of the local field strength.\cite{Zhang} Consequently, the enhanced localized electric field near the Au surface close to the CdSe QD results in an increased density of the electron-hole pairs. Meanwhile, applying a bias voltage $V_{g}$ makes the photogenerated $e^{-}-h^{+}$ pairs separate rapidly, which precludes their recombination and hence increases the photocurrent, as indeed observed in Fig.~\ref{PC}(a) and (c) in comparison with the unbiased case shown in Fig.~\ref{PC}(b).
\section{Conclusions}
In summary, a low voltage-controlled, plasmon-enhanced photocurrent is observed in a system of thiol-linked CdSe quantum dots grown in Au nanogaps. The surface plasmon resonance peak at about 500 nm is revealed using a photoacoustic technique. The observed enhancement in the photoacoustic response is related to plasmonic absorption by the Au layers, and the response is further enhanced by about 20\% due to the CdSe QDs. The photocurrent behavior is explained by enhanced localized electric fields near the Au surface. The photoexcited electrons in CdSe transfer to the neighboring dot and then to the closely spaced Au surface through thiol links. Such plasmonic nanogap structure geometries can open up new possibilities for carrier collection in quantum-dot photovoltaic devices, as well as in nanometer-scale photodetectors and sensors.
\section{Acknowledgements}
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP: NRF-2015R1A3A2031768, NRF-2017R1E1A1A01074650) and the research project funded by start-up (1.190055.01) and U-K Brand (1.190109.01) of UNIST (Ulsan National Institute of Science and Technology). The work at Kyiv was funded by the Ministry of Education and Science of Ukraine, grant numbers 0119U100303 and 0122U001953. O.K. acknowledges support from the Erwin Schr\"{o}dinger Institute by the Special Research Fellow Programme.
\section{Introduction}
There are two main classes of dynamos, small scale and large scale dynamos.
This distinction is not completely unambiguous because in the presence
of shear any small scale field will attain a large scale component in
the direction of the shear.
Therefore, the class of small scale dynamos is here confined to the case of
nonhelical, isotropic and homogeneous turbulent flows (\Sec{SSdynamos}).
With shear, however, even nonhelical dynamos can produce large scale
fields (\Sec{LSdynamos}).
Members of the large scale dynamo class include turbulent flows with
sufficient amounts of helicity and/or shear such that a large scale
field is generated.
Here, a large scale field is the component that survives after averaging.
The averaging procedure has to be defined appropriately and depends on
circumstances and also on what kind of field is generated.
In the presence of a shear flow, ${\bm{U}}_0(x,z)=(0,U_0,0)$, for example,
a useful average (denoted by an overbar) will be
\begin{equation}
\overline{\bm{B}}={1\over L_y}\int_0^{L_y}{\bm{B}}\,{\rm d} {} y.
\label{AzimthalAverage}
\end{equation}
This definition will obviously preclude the study of nonaxisymmetric fields.
In the case of helical turbulence, we have essentially an $\alpha$ effect,
i.e.\ the turbulent electromotive force has a field-aligned component and
$\overline{{\bm{u}}\times{\bm{b}}}\cdot\overline{\bm{B}}\neq0$.
Here, ${\bm{b}}={\bm{B}}-\overline{\bm{B}}$ is the fluctuating magnetic field and
${\bm{u}}={\bm{U}}-\overline{\bm{U}}$ is the fluctuating velocity.
In unbounded space, for example, the mean field can have any orientation.
Even in a triply-periodic domain there are still three ultimate field
configurations, corresponding to Beltrami waves with variation in any
of the three coordinate directions.
The appropriate mean field is then best defined as a two-dimensional
average over the other two coordinate directions.
Ideally, when defining averages we want to conform with the
Reynolds rules (e.g.\ Krause \& R\"adler 1980).
In particular, we want to make sure that the average of an average gives
the same average (which is not the case for a running mean), and that
average and derivative operators commute (not the case for averages over
non-periodic directions, including time averages).
We also want the average of a product of fluctuating and mean quantities
to vanish (which is not the case for spectral filtering).
Therefore, for many practical purposes, \Eq{AzimthalAverage} is the
preferred choice, avoiding any of the aforementioned problems.
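The Reynolds rules for an average over a periodic direction can be illustrated numerically. The following sketch (our own illustration, not from the original text) checks them for a discrete version of Eq.~(1), using a periodic central difference for $\partial/\partial y$:

```python
import numpy as np

# y-average over a periodic direction, a discrete stand-in for Eq. (1).
ny = 64
y = np.linspace(0, 2*np.pi, ny, endpoint=False)
dy = y[1] - y[0]

B = 1.5 + np.sin(y) + 0.3*np.cos(3*y)   # toy field B(y)
B_bar = B.mean()                        # overbar: (1/L_y) * integral dy
b = B - B_bar                           # fluctuation

# (i) the average of an average is the same average
assert np.isclose(np.full(ny, B_bar).mean(), B_bar)

# (ii) averaging commutes with d/dy: with a periodic central difference
# the mean of dB/dy is (to rounding) zero, matching d(B_bar)/dy = 0
dBdy = (np.roll(B, -1) - np.roll(B, 1)) / (2*dy)
assert abs(dBdy.mean()) < 1e-12

# (iii) the average of (mean * fluctuation) vanishes
assert np.isclose((B_bar * b).mean(), 0.0)
print("Reynolds rules satisfied")
```

A running mean or a spectral filter substituted for `B.mean()` would fail checks (i) and (iii), respectively, which is the point made in the text.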
A more formal characteristic of a {\it successful} large scale dynamo
is therefore one where the ratio
\begin{equation}
q\equiv\bra{\overline{\bm{B}}^2}/\bra{{\bm{B}}^2}\gg R_{\rm m}^{-1}.
\end{equation}
Here, angular brackets denote volume averages.
Values of $q$ around 0.7 are typical under ideal conditions
(see \Sec{LSdynamos}).
Spiral galaxies tend to have $q\ge0.2$ (e.g.\ Beck et al.\ 1996).
For the sun the value of $q$ is unclear, but we would still classify it
as a large scale dynamo even if $q$ was as small as 0.01, say.
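To make the diagnostic concrete, here is a sketch (our own toy example with made-up amplitudes, not a simulation from the literature) measuring $q$ for a synthetic field consisting of a large scale part plus small scale noise:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 32, 64
x = np.linspace(0, 2*np.pi, nx, endpoint=False)

# Large scale part varying in x, plus small scale fluctuations in (x, y).
B = np.sin(x)[:, None] + 0.5 * rng.standard_normal((nx, ny))

B_bar = B.mean(axis=1, keepdims=True)   # y-average, as in Eq. (1)
q = (B_bar**2).mean() / (B**2).mean()
print(f"q = {q:.2f}")                   # of order 0.7 for these amplitudes
```

With these (arbitrary) amplitudes the mean field carries most of the energy, so $q$ comes out near the "ideal" values quoted above; shrinking the large scale amplitude drives $q$ toward zero.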
In the following we give a brief overview of recent progress in the
fields of small scale and large scale dynamos.
In this review we focus on numerical results.
\section{Small scale dynamos}
\label{SSdynamos}
Small scale dynamos are generally much harder to excite than large
scale dynamos.
In fact, for unit magnetic Prandtl numbers
($\mbox{Pr}_{\rm M}\equiv\nu/\eta=1$) the critical value of the
magnetic Reynolds number, $R_{\rm m}=u_{\rm rms}/(\eta k_{\rm f})$,
is about 35 for the small scale dynamo (e.g., Haugen et al.\ 2004a) and only 1.2
for fully helical dynamos (Brandenburg 2001).
Here we should emphasize that there is an advantage in defining
$R_{\rm m}$ with respect to the forcing wavenumber $k_{\rm f}$ instead
of the forcing scale $2\pi/k_{\rm f}$ (which would make $R_{\rm m}$
larger by $2\pi$) or even the scale of the box (which would make
$R_{\rm m}$ larger by the scale separation ratio).
The advantage is that with our definition (which is actually quite
common in the forced turbulence community) the value of $R_{\rm m}$
can be regarded as a reasonable approximation to $\eta_{\rm t}/\eta$,
where $\eta_{\rm t}$ is the turbulent (effective) magnetic diffusivity
and $\eta$ is the microscopic value.
In the following we summarize what is now known about the energy
spectra in the linear and nonlinear regimes and what happens in the
presence of shocks.
\subsection{Kazantsev and Kolmogorov spectra}
In the kinematic regime, the magnetic field is still weak and so the
velocity field is like in the nonmagnetic case, with the usual Kolmogorov
spectrum followed by a dissipative subrange.
Due to an extended bottleneck effect\footnote{
For details regarding the bottleneck effect see the recent paper
by Dobler et al.\ (2003), where the difference between the fully
three-dimensional spectra and the one-dimensional spectra available from
wind tunnel experiments is explained.}, the inertial range is only
marginally indicated; see \Fig{plot_poweru_early} for a simulation by
Haugen et al.\ (2004a) where $\mbox{Pr}_{\rm M}=1$ and $R_{\rm m}=600$.
During the kinematic stage, the magnetic field shows a clear $k^{3/2}$
Kazantsev (1968) spectrum, which was originally expected only in the
large magnetic Prandtl number limit,
i.e.\ for $\mbox{Pr}_{\rm M}\equiv\nu/\eta\gg1$.
During the kinematic phase the spectral magnetic energy grows at
all wavenumbers exponentially in time and the spectrum remains
shape-invariant.
\begin{figure}[t!]\centering
\includegraphics[width=.50\textwidth]{plot_poweru_early}
\includegraphics[width=.45\textwidth]{power1024a}
\caption{
Left: early spectra of kinetic and magnetic energy,
normalized by $\frac{1}{2}u_{\rm rms}^2/k_1$,
during the kinematic stage of run D2.
[Adapted from Haugen et al.\ (2004a).]
Right: magnetic, kinetic and total energy spectra.
$1024^3$ meshpoints.
The Reynolds number is $u_{\rm rms}/(\nu k_{\rm f})\approx960$.
[Adapted from Haugen et al.\ (2003).]
}\label{plot_poweru_early}\end{figure}
As the magnetic energy increases, the spectrum arranges itself underneath
an envelope given by the original kinetic energy spectrum.
During this process, the kinetic energy decreases by a certain amount and,
above a certain wavenumber, the field can be in super-equipartition with
the velocity; see the right hand panel of
\Fig{plot_poweru_early} for a high resolution run with $1024^3$
meshpoints (Haugen et al.\ 2003).
These spectra are, as usual, integrated
over shells in $k$ space and normalized such that
$\int E_{\rm K}{\rm d} {} k={1\over2}\bra{{\bm{u}}^2}$ and
$\int E_{\rm M}{\rm d} {} k={1\over2}\bra{{\bm{B}}^2}/\mu_0$.
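A minimal sketch of how such shell-integrated spectra can be computed and checked against this normalization is given below (a toy random field on a periodic box, numpy only; the field itself is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
u = rng.standard_normal((3, n, n, n))        # toy velocity field on a periodic box

uk = np.fft.fftn(u, axes=(1, 2, 3)) / n**3   # Fourier coefficients
ek = 0.5 * np.sum(np.abs(uk)**2, axis=0)     # spectral energy per mode

k1 = np.fft.fftfreq(n, d=1.0/n)              # integer wavenumbers
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

# Integrate over spherical shells of unit width centred on integer k.
kmax = int(np.ceil(kmag.max()))
E = np.array([ek[(kmag >= s - 0.5) & (kmag < s + 0.5)].sum()
              for s in range(kmax + 1)])

# Parseval: the shell-integrated spectrum sums to (1/2)<u^2>.
print(E.sum(), 0.5*np.mean(np.sum(u**2, axis=0)))
```

Every Fourier mode falls into exactly one shell, so the sum of $E_{\rm K}(k)$ over shells recovers the mean kinetic energy exactly, which is the normalization stated above.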
The magnetic energy displays a nearly flat spectrum in the range
$1\leq k\leq5$, peaks at $k\approx5$,
and begins to exhibit an inertial range in $8\leq k\leq25$, followed by
a dissipative subrange over one decade.
In the inertial range $E_{\rm M}(k)/E_{\rm K}(k)$ is about 2.5.
At larger magnetic Prandtl numbers one begins to see the possible
emergence of a $k^{-1}$ tail in the magnetic energy spectrum; see
\Fig{prandtl256}.
The $k^{-1}$ tail has recently been found in large magnetic Prandtl number
simulations with an imposed magnetic field (Cho et al.\ 2002).
The $k^{-1}$ spectrum has its roots in early work by Batchelor (1959)
for a passive scalar and Moffatt (1963) for the magnetic case.
In the low magnetic Prandtl number case, evidence is accumulating that
the critical magnetic Reynolds number continues to increase as
$\mbox{Pr}_{\rm M}$ decreases
(Schekochihin et al.\ 2004, Boldyrev \& Cattaneo 2004, Haugen et al.\ 2004).
\begin{figure}[t!]\centering\includegraphics[width=.5\textwidth]
{prandtl256}\caption{
Magnetic energy spectra for runs with magnetic Prandtl numbers ranging
from 0.3 to 30.
[Adapted from Haugen et al.\ (2004a).]
}\label{prandtl256}\end{figure}
\begin{figure}[t!]\centering
\includegraphics[width=.40\textwidth]{Mach_dep_Rmcrit}
\includegraphics[width=.40\textwidth]{Mach_dep_Rmcrit_Pm5}\caption{
Critical magnetic Reynolds number $\mbox{\rm Re}_{\rm M,crit}$ as a function of $\mbox{\rm Ma}$
for simulations with $\mbox{Pr}_{\rm M}=1$ (left)
and $\mbox{Pr}_{\rm M}=5$ (right).
Note that $\mbox{\rm Re}_{\rm M,crit}$ depends strongly on Mach number for
$\mbox{\rm Ma}\approx1$.
The simulations with shock-capturing viscosity give approximately
the correct growth rates. The simulations that provide these data points
have resolutions ranging from $64^3$ to $512^3$ mesh points.
[Adapted from Haugen et al.\ (2004b).]
}\label{Mach_dep_Rmcrit}\end{figure}
\begin{figure}\centering
\includegraphics[width=.48\textwidth]{vis_divu_512}
\includegraphics[width=.32\textwidth]{vis_bb_512}\caption{
Grey (or color) scale representation of $\mbox{\boldmath $\nabla$} {}\cdot{\bm{u}}$ (left)
and $B_z$ (right) in an $xy$ cross-section through $z=0$
for $\mbox{\rm Ma}=1.1$ using constant viscosity (Run~3a of Haugen et al.\ 2004b
with $512^3$ meshpoints).
[Adapted from Haugen et al.\ (2004b).]
}\label{vis_bb}\end{figure}
\subsection{Do shocks kill dynamos?}
In the interstellar medium the flows are generally supersonic and
in places even highly supersonic.
Naively, one could think of highly supersonic turbulence as being
nearly irrotational, in which case the dynamo should actually become
much more efficient and the growth rate should
increase with Mach number (Kazantsev et al.\ 1985).
This is now known not to be the case: in a mixture of irrotational
($\mbox{\boldmath $\omega$} {}\equiv\mbox{\boldmath $\nabla$} {}\times{\bm{u}}=0$) and solenoidal ($\mbox{\boldmath $\nabla$} {}\cdot{\bm{u}}=0$) flows
the critical value of the magnetic Reynolds number for the onset
of dynamo action does actually increase as a function of the ratio
$\sigma\equiv\bra{(\mbox{\boldmath $\nabla$} {}\cdot{\bm{u}})^2}/\bra{\mbox{\boldmath $\omega$} {}^2}$
(Rogachevskii \& Kleeorin 1997).
However, the ratio $\sigma$ is found to stay finite in the limit
of large Mach numbers (Porter et al.\ 1998, Padoan \& Nordlund 1999).
This explains the recent finding that the minimum magnetic Reynolds number
for dynamo action displays a bimodal behavior as a function of the
Mach number $\mbox{Ma}=u_{\rm rms}/c_{\rm s}$ (Haugen et al.\ 2004b).
In fact, for $\mbox{Pr}_{\rm M}=1$, they find
$R_{\rm m,crit}\approx35$ for $\mbox{Ma}\ll1$ and
$R_{\rm m,crit}\approx80$ for $\mbox{Ma}\gg1$.
For $\mbox{Pr}_{\rm M}=5$, the critical values are a bit lower
(25 and 50 respectively); see \Fig{Mach_dep_Rmcrit}.
Indeed, in these simulations the ratio $\sigma$ is always about 1/4.
Visualizations of the magnetic field show that the effect of shocks
is rather weak; see \Fig{vis_bb}, where the flow divergence and the
magnetic field are compared.
It is worth noting that the use of a shock capturing viscosity
({\it not} used in the calculations presented in \Fig{vis_bb}) seems to
reproduce the critical values of the magnetic Reynolds number rather well;
see the left hand panel of \Fig{Mach_dep_Rmcrit}.
In \Fig{spectra} we compare kinetic and magnetic energy spectra for
the constant and shock-capturing viscosity solutions.
The two show excellent agreement at low wavenumbers.
At high wavenumbers there are major differences.
The direct simulations show an extended diffusion range which is
rather different from the diffusive subrange in incompressible simulations
(e.g.\ Kaneda et al.\ 2003).
The large extent of this diffusive subrange in supersonic turbulence
is obviously the reason for the tremendously high resolution needed
in the direct simulations.
Fortunately, as far as dynamo action is concerned, this extended subrange
does not need to be fully resolved and it suffices to just cut it off with
a shock-capturing viscosity.
\begin{figure}\centering\includegraphics[width=0.6\textwidth]
{spectra}\caption{
Energy spectra for Runs~1a and 1c of Haugen et al.\ (2004b).
The dotted lines give the result using shock-capturing viscosity.
[Adapted from Haugen et al.\ (2004b).]
}\label{spectra}\end{figure} \section{Large scale dynamos}
\label{LSdynamos}
There are some major uncertainties in what exactly are the relevant
large scale dynamo mechanisms in galaxies and stars.
Simulations have enabled us to make close contact with theory.
We are therefore beginning to see some significant progress in that
many of the uncertainties intrinsic to mean field dynamo theory
can now be eliminated.
However, the anticipated agreement between theory and simulations does
not yet extend to the more realistic cases with strong inhomogeneities
and anisotropies, for example where the turbulence is driven by convection,
supernovae, or by the magneto-rotational instability.
Crucial to improving the agreement between theory and simulations
has been the realization that the dominant nonlinear feedback comes
from the current helicity term.
This term can produce a ``magnetic $\alpha$ effect'' (which is
$\alpha_{\rm M}={\textstyle{1\over3}}\tau\overline{{\bm{j}}\cdot{\bm{b}}}/\rho_0$ in the
isotropic case) -- even if there is no kinetic $\alpha$ effect.
The latter is $\alpha_{\rm K}=-{\textstyle{1\over3}}\tau\overline{\mbox{\boldmath $\omega$} {}\cdot{\bm{u}}}$
in the isotropic case,
where $\mbox{\boldmath $\omega$} {}=\mbox{\boldmath $\nabla$} {}\times{\bm{u}}$ is the vorticity, ${\bm{j}}=\mbox{\boldmath $\nabla$} {}\times{\bm{b}}/\mu_0$
the current density, and $\rho_0$ the mean density.
An example where $\alpha_{\rm M}$ can be generated -- even with
$\alpha_{\rm K}=0$ -- is the shear-current effect of
Rogachevskii \& Kleeorin (2003, 2004).
\begin{figure}[t!]\centering\includegraphics[width=.55\textwidth]
{pmean_comp}\caption{
Evolution of the energies of the total field $\bra{{\bm{B}}^2}$ and of
the mean field $\bra{\overline{\bm{B}}^2}$, in units of $B_{\rm eq}^2$,
for runs with non-helical forcing
and open or closed boundaries; see the solid and dotted lines, respectively.
The inset shows a comparison of the ratio $\bra{\overline{\bm{B}}^2}/\bra{{\bm{B}}^2}$
for nonhelical ($\alpha=0$) and helical ($\alpha>0$) runs.
For the nonhelical case the run with closed boundaries is also
shown (dotted line near $\bra{\overline{\bm{B}}^2}/\bra{{\bm{B}}^2}\approx0.07$).
Note that saturation of the large scale field occurs on a
dynamical time scale; the resistive time scale is given on the
upper abscissa.
}\label{pmean_comp}\end{figure}
Recently, quantitative comparisons between theory and
simulations have shown that whatever mean electromotive force
($\overline{\mbox{\boldmath ${\cal E}$}}{}\equiv\overline{{\bm{u}}\times{\bm{b}}}$) is produced by the mean field
dynamo, this produces magnetic helicity in the large scale field.
Because of magnetic helicity conservation, a corresponding negative
contribution in the magnetic helicity of the small scale magnetic field
must be generated.
This results in the production of a small scale current helicity
which enters in the mean field equations as a magnetic alpha effect.
To satisfy magnetic helicity conservation at all times, the small
scale magnetic helicity equation (or equivalently the evolution
equation for $\alpha_{\rm M}$) has to be solved simultaneously
with the mean field equations.
This reproduces quantitatively the resistively slow saturation of
helically forced dynamos in a periodic box (Field \& Blackman 2002,
Blackman \& Brandenburg 2002, Subramanian 2002).
The explicitly time-dependent evolution equation for the magnetic
$\alpha$ effect was first derived by Kleeorin \& Ruzmaikin (1982).
Early work by Ruzmaikin (1981) focused attention on the possibility
of chaotic dynamics introduced by this effect (see also subsequent
work by Schmalz \& Stix 1991 and Covas et al.\ 1997),
and the first connection between dynamical and catastrophic quenching
was made by Kleeorin et al.\ (1995).
In order to avoid too much repetition with recent reviews on this subject
(e.g., Brandenburg et al.\ 2002) we discuss here only in a few words
the main consequences of the dynamical quenching model.
It reproduces quantitatively the resistively slow saturation phase of
an $\alpha^2$ dynamo in a periodic box (Field \& Blackman 2002).
Secondly, it reproduces reasonably accurately the saturation amplitude
and the cycle period of $\alpha\Omega$ dynamos with a sinusoidal shear
profile in a periodic box.
In the case of galaxies the dynamical time scale is $10^7\,{\rm yr}$, which
puts rather stringent constraints if one wants to explain microgauss
field strengths in very young galaxies that are $10^9\,{\rm yr}$ old.
It is likely that a successful dynamo has to have magnetic helicity
fluxes that allow the dynamo to get rid of small scale magnetic helicity
to allow for a rapid build-up of a large scale field that tends to
have magnetic helicity of the opposite sign (Blackman \& Field 2000,
Kleeorin et al.\ 2000).
The currently most convincing example where the presence of boundaries
has been found to be important is in connection with turbulent dynamos
that work mainly in the presence of shear (Brandenburg et al.\ 2005).
Here, the main effect that is thought to be responsible is the
so-called shear--current effect where the electromotive force has
a component proportional to $\overline{\bm{W}}\times\overline{\bm{J}}$. Here, $\overline{\bm{W}}$ is
the vorticity of the mean flow and $\overline{\bm{J}}$ is the mean current density.
This effect is technically related to the $\mbox{\boldmath $\Omega$} {}\times\overline{\bm{J}}$ effect
(e.g.\ Krause \& R\"adler 1980), which is obviously quite distinct from the
famous $\alpha$ effect.
Another possibility is the Vishniac \& Cho (2001) mechanism.
However, it has not yet been possible to verify any of them explicitly.
Preliminary mean field calculations suggest, however, that the $\overline{\bm{W}}\times\overline{\bm{J}}$
effect produces qualitatively favorable agreement with the direct simulations.
\FFig{pmean_comp} shows that in the presence of closed (i.e.\ perfectly
conducting) boundaries, magnetic energy both of the total field
and of the mean field saturates at a much lower level than with open
boundaries ($\mbox{\boldmath $n$} {}\times{\bm{B}}=0$).
The inset shows the ratio between the energies contained in the mean field
to that in the total field.
Note that this ratio stays small when the boundaries are closed, but it increases to
fairly large values (around 0.7) when the boundaries are open.
\section{Conclusions}
In many astrophysical bodies some type of large scale dynamo is likely
to operate.
This dynamo may operate with nonhelical turbulence and shear alone,
i.e.\ without $\alpha$ effect.
Small scale dynamos, on the other hand, represent the opposite extreme,
where there is no shear and no helicity.
Both types of dynamos are vulnerable in their own ways: a large scale
dynamo requires magnetic and current helicity fluxes in order to be successful,
while small scale dynamos may require magnetic Prandtl numbers that are
not too small.
However, as we have shown in the present paper, small scale dynamos
still work in the compressible regime.
In fact, it now seems that, once the Mach number exceeds unity, the
onset of small scale dynamo action becomes independent of the Mach number.
This hypothesis has only been tested for small Mach numbers, so it would
be useful to extend these studies to larger values.
However, as we have also been able to show,
the use of shock-capturing viscosities seems to be a reasonably accurate
approximation for this purpose.
\section*{Acknowledgements}
The Danish Center for Scientific Computing is acknowledged
for granting time on the Linux cluster in Odense (Horseshoe).
\section{Introduction}
Quantum walks are a quantum mechanical analogue of classical random walks. They provide a powerful tool for the study and development of quantum algorithms~\cite{Childs2002,szegedy2004}. Based on how time evolves, a quantum walk can be either continuous or discrete. For discrete quantum walks, there are several models that have been proposed and studied~\cite{Staggered,coinedwalk,szegedy2004}. In this paper, the walks we focus on are called bipartite walks; they generalize many known models such as arc-reversal walks and vertex-face walks.
We turn to a description of bipartite walks. A discrete quantum walk is given by a unitary operator
$U$ on a complex vector space $\mathbb{C}^n$.
We refer to $U$ as the \textsl{transition matrix} of a discrete quantum walk.
The state of the underlying quantum system is a unit
vector in $\mathbb{C}^n$. If the initial state is $z$, then after $k$ steps of the walk, the state
is $U^kz$. This is a unit vector, and so the squared absolute values of its entries sum to $1$.
The outcome of a measurement after $k$ steps is an element $i$ of $\{1,\ldots,n\}$, and the
probability that the result is $i$ is $|(U^kz)_i|^2$.
In our case, the state space is the space of complex functions
on the edges of a bipartite graph $G$. We assume that $X$ and $Y$ are the two colour classes
of $G$ and using these we construct two partitions of $E(G)$.
For the first partition, $\pi_0$, two edges are in the
same cell if they have a vertex in common, and that vertex is in $X$.
For the second partition
$\pi_1$, two edges are in the same cell if they have a vertex in common, and that vertex is in $Y$. Each of these partitions determines a projection, namely the projection onto the
functions on $E(G)$ that are constant on the cells of $\pi_0$ and $\pi_1$.
We denote these projections by $P$ and $Q$ respectively.
If $R$ is a projection, then
\[
(2R-I)^2 = 4R^2 -4R +I =4R -4R +I = I
\]
and, since $R=R^*$, we see that $2R-I$ is unitary. (Geometrically it is a reflection.)
Hence we can define a unitary operator $U$ by
\[
U := (2P-I)(2Q-I).
\]
This is the transition matrix of the bipartite walk on $G$.
Konno et al.~in~\cite{twopartition} introduce a family of discrete-time quantum walks, called the two-partition model, which is based on two equivalence-class partitions of the computational basis. The two partitions used in that model do not necessarily give rise to two reflections, and bipartite walks are a special case of the two-partition model. Note that the paper by Konno et al.~focuses on showing the unitary equivalence between members of the two-partition model, while we study the Hamiltonian of the transition matrix of the bipartite walk.
On the other hand, many of the most
commonly used discrete walks can be formulated as bipartite walks. We will give a constructive proof that the arc-reversal walk can be viewed as a special case of the bipartite walk.
There is a second class of quantum walks: \textsl{continuous quantum walks}. Here
the state space is the space of complex functions on the vertices of a graph $G$.
The walk is specified by a Hermitian matrix $H$ with rows and columns indexed by the
vertices of $G$ (for example, the adjacency matrix of $G$). We then
define transition matrices $U(t)$ by
\[
U(t) := \exp(itH),\quad(t \in \mathbb{R}).
\]
If the initial state of the walk is given by the unit vector $z$, the state at time $t$
is $U(t)z$.
For each unitary matrix $U$, there are Hermitian matrices $H$ such that
\[
U=\exp(iH).
\]
(We refer to $H$ as a \textsl{Hamiltonian} of $U$.)
It follows that a discrete walk on $G$ gives rise to a continuous quantum walk on the
edges of $G$ and if the continuous walk is given by matrices $U(t)$, the transition
matrix for the discrete walk is $U(1)$.
Our goal in this paper is to study the Hamiltonians of bipartite walks. This is a topic that
has not been studied before.
For the discrete quantum walk governed by the unitary matrix $U$, there is a Hamiltonian $H$ associated with it. When there is a real skew-symmetric matrix $S$ such that the Hamiltonian is of the form $H=iS$, then $S$ can be viewed as the skew-adjacency matrix of an oriented weighted graph, which we call \textsl{the $H$-digraph}. Hamiltonians are usually associated with continuous quantum walks and have not been considered in the context of discrete quantum walks.
So far, most studies of the bipartite walk have been limited to the transition matrix and the behaviors of the walk~\cite{szegedy2004,twopartition,StefanakSkoupy}.
In this paper, we study Hamiltonians of bipartite walks and $H$-digraphs associated with it.
Spectral properties of the transition matrix is the main tool we exploit to study the Hamiltonian of $U$.
We are mainly interested in the case when the Hamiltonian $H$ can be written as $H=iS$ for some real skew-symmetric matrix $S$, which is not always possible. We prove that $H$ is of the form $H=iS$ if and only if the adjacency matrix of $G$ is invertible.
As mentioned before, vertex-face walks can be viewed as a special case of bipartite walks. In Section~\ref{vf walk}, we show the equivalence relations between bipartite walks and vertex-face walks. The Hamiltonians obtained from vertex-face walks have some interesting properties, which have been studied extensively in~\cite{harmonyphd}. Here we introduce those properties and rephrase them from the perspective of bipartite walks in Section~\ref{vf walk} and Section~\ref{vxf on CompleteG}.
When $G$ is a path on $n$ vertices, the transition matrix of the bipartite walk is a permutation matrix. When $n\geq 4$ is even, the associated $H$-digraph is a weighted oriented $K_{n-1}$. When $n\equiv 3\Mod 4$, the associated $H$-digraph is two copies of a weighted oriented $K_\frac{n-1}{2}$. Similar results can also be proved for the bipartite walk on even cycles.
Studying the Hamiltonian of bipartite walks helps us to construct examples of continuous walks with desired properties. Consider a continuous quantum walk on a graph $G$ whose Hamiltonian is the adjacency matrix of $G$. If the walk has perfect state transfer between every pair of vertices of $G$, we say the walk has universal perfect state transfer. This is a rare and interesting phenomenon.
Using the properties of bipartite walks on paths and cycles, we find a way to weight the edges of complete graphs such that the resulting weighted graph has universal perfect state transfer. This demonstrates how we can use the Hamiltonian and bipartite walks to construct interesting but previously hard-to-find phenomena in continuous walks.
\section{Preliminaries}
\label{Intro}
Let $G$ be a $(d_0,d_1)$-biregular bipartite graph with two parts $C_0,C_1$.
Now we define two partitions of the edges of $G$, denoted by $\pi_0,\pi_1$ respectively. If two edges have the same end $x$ in $C_0$, then they belong to the same cell of $\pi_0$. Similarly, if two edges have the same end $y$ in $C_1$, then they belong to the same cell of $\pi_1$.
Given a matrix $M$, we \textsl{normalize} it by scaling each column of $M$ to a unit vector.
Let $P_0,P_1$ be the characteristic matrices of $\pi_0,\pi_1$ respectively and let $\widehat{P}_0,\widehat{P}_1$ denote the normalized $P_0,P_1$ respectively.
Let
\[
P=\widehat{P}_0\widehat{P}_0^T,\quad Q=\widehat{P}_1\widehat{P}_1^T
\]
be the projections onto the vectors that are constant on the cells of $\pi_0,\pi_1$ respectively. We define the transition matrix of the bipartite walk over $G$ to be
\[
U=\left(2\widehat{P}_0\widehat{P}_0^T-I\right)\left(2\widehat{P}_1\widehat{P}_1^T-I\right)
=\left(2P-I\right)\left(2Q-I\right).
\]
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv4}{rgb}{0.0,0.0,0.0}
\definecolor{cfv4}{rgb}{1.0,1.0,1.0}
\definecolor{clv4}{rgb}{0.0,0.0,0.0}
\definecolor{cv5}{rgb}{0.0,0.0,0.0}
\definecolor{cfv5}{rgb}{1.0,1.0,1.0}
\definecolor{clv5}{rgb}{0.0,0.0,0.0}
\definecolor{cv6}{rgb}{0.0,0.0,0.0}
\definecolor{cfv6}{rgb}{1.0,1.0,1.0}
\definecolor{clv6}{rgb}{0.0,0.0,0.0}
\definecolor{cv7}{rgb}{0.0,0.0,0.0}
\definecolor{cfv7}{rgb}{1.0,1.0,1.0}
\definecolor{clv7}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv5v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv6v7}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$0$},x=0cm,y=4cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$1$},x=4cm,y=4cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$2$},x=0cm,y=3cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$3$},x=4cm,y=3cm]{v3}
\Vertex[style={minimum size=1.0cm,draw=cv4,fill=cfv4,text=clv4,shape=circle},LabelOut=false,L=\hbox{$4$},x=0.0cm,y=2cm]{v4}
\Vertex[style={minimum size=1.0cm,draw=cv5,fill=cfv5,text=clv5,shape=circle},LabelOut=false,L=\hbox{$5$},x=4cm,y=2cm]{v5}
\Vertex[style={minimum size=1.0cm,draw=cv6,fill=cfv6,text=clv6,shape=circle},LabelOut=false,L=\hbox{$6$},x=0cm,y=1cm]{v6}
\Vertex[style={minimum size=1.0cm,draw=cv7,fill=cfv7,text=clv7,shape=circle},LabelOut=false,L=\hbox{$7$},x=4cm,y=1.0cm]{v7}
\Edge[lw=0.1cm,style={color=cv0v1,},](v0)(v1)
\Edge[lw=0.1cm,style={color=cv0v5,},](v0)(v5)
\Edge[lw=0.1cm,style={color=cv1v2,},](v1)(v2)
\Edge[lw=0.1cm,style={color=cv1v4,},](v1)(v4)
\Edge[lw=0.1cm,style={color=cv2v3,},](v2)(v3)
\Edge[lw=0.1cm,style={color=cv5v6,},](v5)(v6)
\Edge[lw=0.1cm,style={color=cv6v7,},](v6)(v7)
\end{tikzpicture}
\caption{Bipartite graph on $8$ vertices}
\label{not return pst graph}
\end{center}
\end{figure}
Now consider the bipartite graph $G$ in Figure~\ref{not return pst graph} as an example. We define a bipartite walk on $G$. The two parts of $G$ are $C_0=\{0,2,4,6\}$ and $C_1=\{1,3,5,7\}$. For the partitions $\pi_0,\pi_1$, the edges $(0,1),(0,5)$ are in the same cell of $\pi_0$ and the edges $(0,1),(2,1),(4,1)$ are in the same cell of $\pi_1$.
We have that
\[
\hat{P}_0=\begin{pmatrix}
\frac{1}{\sqrt{3}} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{\sqrt{2}} & 0 \\
\frac{1}{\sqrt{3}} & 0 & 0 & 0 \\
\frac{1}{\sqrt{3}}& 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \frac{1}{\sqrt{2}}& 0 \\
0 & 0 & 0 & 1
\end{pmatrix} ,\quad
\hat{P}_1=\begin{pmatrix}
\frac{1}{\sqrt{2}} & 0 & 0 & 0 \\
\frac{1}{\sqrt{2}}& 0 & 0 & 0 \\
0 & \frac{1}{\sqrt{2}}& 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & \frac{1}{\sqrt{2}}& 0 & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}}\\
0 & 0 & 0 & \frac{1}{\sqrt{2}}
\end{pmatrix}
\]
and hence, the corresponding projections are
\[
P=
\begin{pmatrix}
\frac{1}{3}& 0 & \frac{1}{3}& \frac{1}{3}& 0 & 0 & 0 \\[2.5mm]
0 & \frac{1}{2} & 0 & 0 & 0 & \frac{1}{2}& 0 \\[2.5mm]
\frac{1}{3}& 0 & \frac{1}{3} & \frac{1}{3}& 0 & 0 & 0 \\[2.5mm]
\frac{1}{3}& 0 & \frac{1}{3}&\frac{1}{3} & 0 & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 1 & 0 & 0 \\[2.5mm]
0 & \frac{1}{2}& 0 & 0 & 0 & \frac{1}{2} & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix},\quad
Q=\begin{pmatrix}
\frac{1}{2}& \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\[2.5mm]
\frac{1}{2}& \frac{1}{2}& 0 & 0 & 0 & 0 & 0 \\[2.5mm]
0 & 0 & \frac{1}{2}& 0 & \frac{1}{2} & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 1 & 0 & 0 & 0 \\[2.5mm]
0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\[2.5mm]
0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}
\end{pmatrix}.
\]
The transition matrix of the bipartite walk on $G$ is
\[
U=\begin{pmatrix}
0 & -\frac{1}{3} & 0 & \frac{2}{3}& \frac{2}{3} & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & 0 & 1 \\[2.5mm]
0 & \frac{2}{3} & 0 & \frac{2}{3} & -\frac{1}{3} & 0 & 0 \\[2.5mm]
0 & \frac{2}{3} & 0 & -\frac{1}{3} & \frac{2}{3} & 0 & 0 \\[2.5mm]
0 & 0 & 1 & 0 & 0 & 0 & 0 \\[2.5mm]
1 & 0 & 0 & 0 & 0 & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & 1 & 0
\end{pmatrix}.
\]
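The matrices in this example can be checked numerically. The sketch below (our own, assuming the edge ordering implicit in the rows of the displayed $\widehat{P}_0,\widehat{P}_1$) rebuilds the projections and verifies that both factors are reflections and that $U$ is unitary:

```python
import numpy as np

# Cells of the two edge partitions, given as row-index sets of the
# displayed normalized characteristic matrices (0-indexed edge ordering).
cells0 = [[0, 2, 3], [4], [1, 5], [6]]   # columns of P0hat
cells1 = [[0, 1], [2, 4], [3], [5, 6]]   # columns of P1hat

def char_matrix(cells, m=7):
    """Normalized characteristic matrix: column j is the unit cell indicator."""
    M = np.zeros((m, len(cells)))
    for j, c in enumerate(cells):
        M[c, j] = 1 / np.sqrt(len(c))
    return M

P0, P1 = char_matrix(cells0), char_matrix(cells1)
P, Q = P0 @ P0.T, P1 @ P1.T              # projections onto cell-constant vectors
U = (2*P - np.eye(7)) @ (2*Q - np.eye(7))

print(np.allclose((2*P - np.eye(7)) @ (2*P - np.eye(7)), np.eye(7)))  # reflection
print(np.allclose(U @ U.T, np.eye(7)))                                # U unitary
```

Since $P$ and $Q$ are orthogonal projections, $2P-I$ and $2Q-I$ are reflections, and their product is automatically unitary; the check confirms the construction is set up consistently.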
Let $C$ denote the characteristic matrix of the incidence relation between $\pi_0,\pi_1$ with its rows indexed by the cells of $\pi_1$ and its columns indexed by the cells of $\pi_0$ such that
\[
C_{i,j}=1
\]
if there is an edge that belongs to both $c_i$ in $\pi_1$ and $c_j$ in $\pi_0$. Then we have that
\[
C=P_1^TP_0
\]
and the normalized $C$ is
\[
\hat{C}=\widehat{P}_1^T\widehat{P}_0.
\]
The adjacency matrix of $G$ can be written as
\[
\quad A(G)=
\begin{pmatrix}
\mathbf{0}&C\\
C^T&\mathbf{0}
\end{pmatrix}.
\]
The incidence matrix and the normalized incidence matrix of the bipartite graph in Figure~\ref{not return pst graph} are
\[
C=
\begin{pmatrix}
1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1
\end{pmatrix},\quad
\hat{C}=\begin{pmatrix}
\frac{1}{\sqrt{6}} & 0 & \frac{1}{2} & 0 \\[3mm]
\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{2}} & 0 & 0 \\[3mm]
\frac{1}{\sqrt{3}}& 0 & 0 & 0 \\[3mm]
0 & 0 & \frac{1}{2}& \frac{1}{\sqrt{2}}
\end{pmatrix}.
\]
\section{Arc-reversal walks are a special case}
Arc-reversal walks are a well-studied model and in this section, we give a constructive proof that arc-reversal walks can be considered as a special case of bipartite walks.
Given a graph $G$, we show that the bipartite walk on the subdivision graph of $G$ is equivalent to the arc-reversal walk on $G$.
For a graph $G$, we define a new graph $G'$ by subdividing every edge of $G$, and we call $G'$ \textsl{the subdivision graph of $G$}. Then $G'$ is a bipartite graph with parts $C_0=V(G')\backslash V(G)$ and $C_1=V(G)$. We define a bipartite walk on $G'$ with transition matrix
\[
U=(2P-I)(2Q-I).
\] For each vertex $\alpha\in C_0$ and $a\in C_1$, we have
\[
\deg_{G'}(\alpha)=2, \quad\deg_{G'}(a)=\deg_{G}(a).
\]
Now if every edge $e$ of $G$ is replaced by two arcs $e_1,e_2$ with opposite directions, we can identify the edges of the subdivision graph $G'$ with the arcs of this directed version of $G$.
Let
\[
G_a=\frac{2}{\deg(a)}J-I
\]
be the Grover coin associated with vertex $a$. Then we have that
\[
2Q-I=\bigoplus_{v\in C_1} G_v=\begin{pmatrix}
G_{v_1}&&&\\
&G_{v_2}&&\\
&&\ddots&\\
&&&G_{v_n}
\end{pmatrix},
\]
where we assign the Grover coin to $v_i$ for every vertex $v_i$ in $V(G)$. Also, we have that
\[
2P-I=\bigoplus_{v\in C_0}\left(J_2-I\right)=\begin{pmatrix}
J_2-I&&&\\[3mm]
&J_2-I&&\\[3mm]
&&\ddots&\\[3mm]
&&&J_2-I
\end{pmatrix},
\]
which can be viewed as the arc-reversal matrix $R$, i.e.,
\[
R\cdot (a,b)=(b,a)
\]
for every arc $(a,b)$. Thus, every bipartite walk defined on the subdivision graph of $G$ is equivalent to the arc-reversal walk on $G$.
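A minimal sketch of this identification (a toy triangle graph; the arc list and its ordering are our own choice): the $\pi_0$ cells pair the two arcs of each edge, each block of $2P-I$ is $J_2-I$, and the resulting matrix reverses every arc.

```python
import numpy as np

# Arcs of a triangle on {0,1,2}: both orientations of each edge,
# listed so that the two arcs of an edge are adjacent.
arcs = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]

# Each pi_0 cell (a subdivision vertex) has degree 2, so each block
# of R = 2P - I is J_2 - I, i.e. a swap of the two arcs of that edge.
R = np.zeros((6, 6))
for c in range(0, 6, 2):
    R[c:c+2, c:c+2] = np.ones((2, 2)) - np.eye(2)

# R maps the indicator of arc (a,b) to the indicator of arc (b,a).
for i, (a, b) in enumerate(arcs):
    j = arcs.index((b, a))
    e = np.zeros(6); e[i] = 1
    assert np.array_equal(R @ e, np.eye(6)[j])
print("R reverses every arc")
```

This is exactly the arc-reversal matrix $R$ from the text, realized as the reflection $2P-I$ of the bipartite walk on the subdivision graph.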
\section{Spectrum of transition matrix $U$}
Spectral properties of the transition matrix $U$ are the main machinery that we use to analyse the Hamiltonian of $U$. In this section, we present a complete characterization of the eigenvalues and eigenspaces of $U$. All the statements presented here are proved in detail by Zhan in \cite{harmonyphd}, so we omit the proofs here. Note that we use the same notation as defined before and so,
\[
P=\widehat{P}_0\widehat{P}_0^T,\quad Q=\widehat{P}_1\widehat{P}_1^T, \quad \hat{C}=\widehat{P}_1^T\widehat{P}_0
\]
and
\[
U=(2P-I)(2Q-I).
\]
\begin{theorem}[Theorem~$5.2.2$ in~\cite{harmonyphd}]
\label{1-eignsp of U}
Let $P,Q$ be projections on $\mathbb{C}^m$.
The $1$-eigenspace of $U$ is
\[
\left(\col(P)\cap \col(Q)\right)\oplus \left(\ker(P)\cap \ker(Q) \right)
\] and it has dimension
\[
m-\rk(P)-\rk(Q)+2\dim\left(\col(P)\cap \col(Q)\right).
\]
Moreover,
\[
\col(P)\cap \col(Q)=\SPAN\{\mathbf{1}\}.
\]
\end{theorem}
\begin{theorem}[Lemma $2.3.6$ in~\cite{harmonyphd}]
\label{-1 eigen}
The $(-1)$-eigenspace for $U$ is
\[
\left(\col(P)\cap \ker(Q)\right)\oplus \left(\ker(P)\cap \col(Q) \right)\] and its dimension is
\[
\abs{C_0}+\abs{C_1}-2\rk(C).
\]
\end{theorem}
\begin{theorem}[Lemma $2.3.7$ in \cite{harmonyphd}]
Let $\mu\in(0,1)$ be an eigenvalue of $\hat{C}\hat{C}^T$. Choose $\theta $ such that \[
\cos\theta=2\mu-1.
\]
The map \[
y\mapsto\left(\cos\theta+1\right)\widehat{P}_1y-\left(e^{i\theta}+1\right) \widehat{P}_0\hat{C}^Ty
\]
is an isomorphism from the $\mu$-eigenspace of $\hat{C}\hat{C}^T$ to the $e^{i\theta}$-eigenspace of $U$, and the map
\[
y\mapsto\left(\cos\theta+1\right)\widehat{P}_1y-\left(e^{-i\theta}+1\right)\widehat{P}_0\hat{C}^Ty
\]
is an isomorphism from the $\mu$-eigenspace of $\hat{C}\hat{C}^T$ to the $e^{-i\theta}$-eigenspace of $U$.
\end{theorem}
\begin{corollary}[Corollary $5.2.5$ in \cite{harmonyphd}]
\label{e^itheta-eigenspace of U}
Let $\mu\in(0,1)$ be an eigenvalue of $\hat{C}\hat{C}^T$. Choose $\theta$ such that $\cos\theta=2\mu-1.$ Let $E_\mu$ be the orthogonal projection onto the $\mu$-eigenspace of $\hat{C}\hat{C}^T$. Set
\[
W:= \widehat{P}_1E_\mu\widehat{P}_1^T.
\]
Then the $e^{i\theta}$-eigenmatrix of $U$ is
\[
\frac{1}{\sin^2(\theta)}\left( (\cos\theta+1)W-(e^{i\theta}+1)PW-(e^{-i \theta}+1)WP+2PWP\right),
\]
and the $e^{-i\theta}$-eigenmatrix of $U$ is
\[
\frac{1}{\sin^2(\theta)}\left( (\cos\theta+1)W-(e^{-i\theta}+1)PW-(e^{i \theta}+1)WP+2PWP\right).
\]
\end{corollary}
\section{Hamiltonians}
For every unitary matrix $U$, there exist Hermitian matrices $H$ such that
\[
U=\exp(iH).
\]
We call such $H$ a \textsl{Hamiltonian} of $U$. Since $U$ is unitary, it has spectral decomposition
\[
U=\sum_r e^{i\theta_r}E_{\theta_r}=\exp(iH),
\]
and we can write
\[
H=-i\sum_r \log(e^{i\theta_r})E_{\theta_r}=\sum_r \theta_r E_{\theta_r}.
\]
For each eigenvalue $e^{i\theta_r}$ of $U$, we have that
\[
e^{i\theta_r}=e^{i(\theta_r+2k_r\pi)}
\]
for every integer $k_r$ and so, the choice of $H$ is not unique. That is,
\[
H=\sum_{\theta_r} (\theta_r+2k_r\pi) E_{\theta_r}
\]
is a Hamiltonian of $U$ for any choice of integers $k_r$. Note that the $k_r$ are not necessarily equal for all the $\theta_r$.
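This non-uniqueness is easy to observe numerically. The sketch below is our own illustration: it recovers the principal Hamiltonian of a $3$-cycle permutation matrix (which has simple eigenvalues $1,e^{\pm 2\pi i/3}$) and checks that shifting one eigenvalue angle by $2\pi$ yields a different Hamiltonian with the same exponential.

```python
import numpy as np
from scipy.linalg import expm

# Sketch (ours): a Hamiltonian of a unitary is only defined up to
# 2*pi*k shifts of the eigenvalue angles.
U = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])           # 3-cycle permutation, simple spectrum

vals, vecs = np.linalg.eig(U)          # U is normal with distinct eigenvalues
angles = np.angle(vals)                # principal angles in (-pi, pi]

H0 = vecs @ np.diag(angles) @ vecs.conj().T                 # principal Hamiltonian
H1 = vecs @ np.diag(angles + 2 * np.pi * np.array([1, 0, 0])) @ vecs.conj().T

print(np.allclose(expm(1j * H0), U))   # exp(iH0) = U
print(np.allclose(expm(1j * H1), U))   # exp(iH1) = U as well
print(np.allclose(H0, H1))             # but H0 and H1 differ
```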
A real skew-symmetric matrix $S$ can be viewed as the skew-adjacency matrix of a weighted oriented graph. When $H=iS$, we define the \textsl{$H$-digraph} to be the weighted oriented graph whose skew-adjacency matrix is $S$. This paper focuses on the case when the Hamiltonian can be written as $H=iS$ and studies the associated $H$-digraph.
For each eigenvalue $e^{i\theta_r}$ of $U$, if we choose
$-\pi<\theta_r\leq \pi$ and $k_r=0$, the resulting unique Hamiltonian is called the \textsl{principal Hamiltonian}.
Let $H_0$ be the principal Hamiltonian. In general, even if there is a real skew-symmetric $S_0$ such that $H_0=iS_0$, the choice
\[
H=H_0+\sum_r 2k_r\pi E_{\theta_r}
\]
with non-constant $k_r$ cannot be written as $H=iS$ for a real skew-symmetric $S$.
Unless explicitly stated otherwise, we take the principal Hamiltonian
to be the Hamiltonian of $U$. Later in Corollary~\ref{A invertible}, we will show that there is a real skew-symmetric $S$ such that $H=iS$ if and only if the adjacency matrix of the bipartite graph $A(G)$ is invertible.
\begin{theorem}
\label{A invertible theorem}
Let $U$ be the transition matrix of the bipartite walk on a bipartite graph $G$. Let $H$ be the Hamiltonian of $U$ and let $E_{-1}$ be the projection onto the $(-1)$-eigenspace of $U$. Then there is a real skew-symmetric matrix $S$ such that
\[
H = iS +\pi E_{-1}.
\]
\end{theorem}
\proof
Using the spectral decomposition
\[
U=\sum_r e^{i\theta_r}E_{\theta_r}=\exp(iH),
\]
we can write
\[
H=-i\sum_r \log(e^{i\theta_r})E_{\theta_r}=\sum_r \theta_r E_{\theta_r},
\]
where $-\pi<\theta_r\leq \pi$. It follows that the $1$-eigenspace of $U$ corresponds to the $0$-eigenspace of $H$, the $(-1)$-eigenspace of $U$ corresponds to the $\pi$-eigenspace of $H$, and each $e^{i\theta_r}$-eigenspace of $U$ gives the $\theta_r$-eigenspace of $H$.
Since $G$ is bipartite, the adjacency matrix of $G$ can be written as
\[
A(G)=
\begin{pmatrix}
\mathbf{0}&C\\
C^T&\mathbf{0}
\end{pmatrix}
\]
for some $01$-matrix $C$. Let $\hat{C}$ denote the normalized version of $C$ and
let $\mu\in(0,1)$ be an eigenvalue of $\hat{C}\hat{C}^T$. Choose $\theta$ such that
$\cos\theta=2\mu-1.$ Let $F_\mu$ be the orthogonal projection onto the $\mu$-eigenspace
of $\hat{C}\hat{C}^T$. Set
\[
W:= \widehat{P}_1F_\mu\widehat{P}_1^T.
\]
By Corollary~\ref{e^itheta-eigenspace of U}, we have that
\begin{align*}
H&=\sum_{\theta_r\notin \{0,\pi\}}\theta_r\left(E_{\theta_r}-E_{-\theta_r}\right)+\pi\cdot E_{-1}\\
&=\sum_{\theta_r\notin \{0,\pi\}}\theta_r\left(-\frac{2i}{\sin(\theta_r)}(PW-WP)\right)+\pi\cdot E_{-1}.
\end{align*}
Since $\hat{C}\hat{C}^T$ is real and symmetric, we know that the orthogonal projection onto its $\mu$-eigenspace $F_\mu$ is real and symmetric. It follows that $W= \widehat{P}_1F_\mu\widehat{P}_1^T$ is real and symmetric. So the matrix $PW-WP$ is real. Set
\[
S=\sum_{\theta_r\notin \{0,\pi\}}\theta_r\left(-\frac{2}{\sin(\theta_r)}(PW-WP)\right)
\]
and we know that $S$ is skew-symmetric.\qed
\begin{corollary}
\label{A invertible}
Let $U$ be the transition matrix of the bipartite walk on a bipartite graph $G$. Then there is a real skew-symmetric matrix $S$ such that the Hamiltonian $H$ of $U$ can be written as $H=iS$ if and only if $A(G)$ is invertible.
\end{corollary}
\proof
By Theorem~\ref{-1 eigen}, we know that $E_{-1}$ is a real matrix.
Using Theorem~\ref{A invertible theorem}, it is sufficient to prove that $E_{-1}=0$ if and only if $A(G)$ is invertible.
Now consider the $(-1)$-eigenvalue of $U$. From Theorem~\ref{-1 eigen} we know that
\[
\dim\left(E_{-1}\right)=\abs{C_0}+\abs{C_1}-2\rk(C).
\]
This implies that $\dim\left(E_{-1}\right)=0$ if and only if
\[
\abs{C_0}+\abs{C_1}-2\rk(C)=0.
\]
Since $\rk(P_0)=\abs{C_0}$ and $\rk(P_1)=\abs{C_1}$ and $C=P_1^TP_0$, we get that
\[
\rk(C)\leq \min\{\abs{C_0},\abs{C_1}\}.
\]
Thus, $\dim\left(E_{-1}\right)=0$ if and only if $\rk(P_0)=\rk(P_1)=\rk(C)$, which is equivalent to requiring that $C$ is invertible. Therefore we can conclude that there is a real skew-symmetric $S$ such that $H=iS$ if and only if $A(G)$ is invertible.\qed
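The corollary can be illustrated on a small example. The sketch below is our own choice: the $2$-regular bipartite graph on parts $\{0,1,2\}$ and $\{3,4,5\}$ obtained by deleting a perfect matching from $K_{3,3}$, so the biadjacency matrix $C=J-I$ is invertible. Then $-1$ is not an eigenvalue of $U$ and the principal Hamiltonian is $H=iS$ with $S$ real and skew-symmetric.

```python
import numpy as np
from scipy.linalg import logm

# Numerical illustration (ours) of the corollary: parts {0,1,2} and {3,4,5},
# with the perfect matching (0,3),(1,4),(2,5) removed, so C = J - I.
edges = [(0, 4), (0, 5), (1, 3), (1, 5), (2, 3), (2, 4)]

def reflection(part):
    """The reflection 2*Proj - I for the edge partition induced by `part`."""
    m = len(edges)
    R = -np.eye(m)
    for v in part:
        cell = [i for i, e in enumerate(edges) if v in e]
        x = np.zeros(m)
        x[cell] = 1.0 / np.sqrt(len(cell))
        R += 2 * np.outer(x, x)
    return R

U = reflection({0, 1, 2}) @ reflection({3, 4, 5})        # (2P - I)(2Q - I)

print(np.min(np.abs(np.linalg.eigvals(U) + 1)) > 1e-8)   # -1 not an eigenvalue

L = logm(U)                    # principal logarithm: H = -i*L, so S = -L
print(np.allclose(L.imag, 0, atol=1e-8))  # S is real
print(np.allclose(L.real, -L.real.T))     # S is skew-symmetric
```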
Let $E_{\theta_r},E_{-\theta_r}$ be the eigenprojections corresponding to the eigenvalues
$e^{i\theta_r},e^{-i\theta_r}$ of $U$. Since $U$ is a real matrix, we have that
\[
E_{\theta_r}=\overline{E_{-\theta_r}}.
\]
It follows that when $A(G)$ is invertible, the Hamiltonian
\[
H=\sum_{r} \theta_r \left( E_{\theta_r}-\overline{E_{\theta_r}}\right)
\]
has zero diagonal, which implies that the $H$-digraph has no loops.
We have proved that when $-1$ is an eigenvalue of $U$, there is no real skew-symmetric matrix $S$
such that the Hamiltonian is of the form $H=iS$. So when $U$ has eigenvalue $-1$,
we consider instead the Hamiltonian of $U^2$ and the $H$-digraph obtained from it.
\section{Vertex-Face walks}
\label{vf walk}
Bipartite walks can be used to generalize many known walk models, one of which is the vertex-face walk. Here we show that the vertex-face walk can be viewed as a special case of the bipartite walk. As shown in~\cite{harmonyphd}, the Hamiltonian arising from the vertex-face walk has many interesting properties, some of which will be presented in the language of bipartite walks in this section and the next.
An embedding of a graph $G$ in a surface $S$ is a continuous one-to-one map from $G$ to $S$. Given an embedding $G\rightarrow S$, the components of $S-G$ are called \textsl{regions}. If each region is homeomorphic to an open disk, then the embedding is called a \textsl{cellular embedding} and the regions are also called \textsl{faces} of the embedding.
In \cite{harmonyphd}, Zhan introduces a new model of discrete quantum walk, the \textsl{vertex-face walk}. Let $\mathcal{M}$ be a circular embedding of graph $G$ on
an orientable surface. Note that here the tail of the arc $(a,b)$ is vertex $a$. Let $M,N$
denote the arc-face incidence matrix and arc-tail incidence matrix respectively. The
transition matrix of vertex-face walk on $\mathcal{M}$ is
\[
U:= \left(2\widehat{M}\widehat{M}^T-I\right)\left(2\widehat{N} \widehat{N}^T-I\right),
\]
where $\widehat{M},\widehat{N}$ are the matrices obtained from $M,N$ respectively by scaling each column to a unit vector.
The vertex-face incidence graph $X$ of the embedding $\mathcal{M}$ is a bipartite graph whose two parts are labelled by the vertices and the faces of $\mathcal{M}$. We can view the vertex-face walk on the circular embedding $\mathcal{M}$ as a bipartite walk by considering the bipartite walk on the vertex-face incidence graph of $\mathcal{M}$.
Now we show that the transition matrix of the vertex-face walk on $\mathcal{M}$ is the same as the transition matrix of the bipartite walk on the vertex-face incidence graph of $\mathcal{M}$. Since $\mathcal{M}$ is a circular orientable embedding, the edges in the vertex-face incidence graph correspond to arcs of the embedding $\mathcal{M}$ of $G$. The arc-face incidence matrix $M$ of the embedding $\mathcal{M}$ is exactly the characteristic matrix of the edge partition of the vertex-face incidence graph based on the face part. The arc-tail incidence matrix $N$ of the embedding $\mathcal{M}$ is exactly the characteristic matrix of the edge partition of the vertex-face incidence graph according to the vertex part. Hence, the bipartite walk on the incidence graph of the embedding $\mathcal{M}$ is exactly the same as the vertex-face walk on $\mathcal{M}$.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\begin{tikzpicture}
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v3}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$1$},x=2.5cm,y=5.0cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$2$},x=5.0cm,y=0cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$3$},x=0cm,y=0.0cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$0$},x=2.5cm,y=2cm]{v3}
\Edge[lw=0.1cm,style={color=cv0v1,},](v0)(v1)
\Edge[lw=0.1cm,style={color=cv0v2,},](v0)(v2)
\Edge[lw=0.1cm,style={color=cv0v3,},](v0)(v3)
\Edge[lw=0.1cm,style={color=cv1v2,},](v1)(v2)
\Edge[lw=0.1cm,style={color=cv1v3,},](v1)(v3)
\Edge[lw=0.1cm,style={color=cv2v3,},](v2)(v3)
\end{tikzpicture}
\caption{The circular embedding of $K_4$}
\end{subfigure}%
~
The facial walks of the embedding of $K_4$ above are: \begin{align*}
f_0&=\{(0,1),(1,2),(2,0)\}\\
f_1&=\{(1,3),(3,2),(2,1)\}\\
f_2&=\{(0,2),(2,3),(3,0)\}\\
f_3&=\{(0,3),(3,1),(1,0)\}
\end{align*}
~\begin{subfigure}[t]{0.5\textwidth}
\centering
\begin{tikzpicture}[rotate=270]
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv4}{rgb}{0.0,0.0,0.0}
\definecolor{cfv4}{rgb}{1.0,1.0,1.0}
\definecolor{clv4}{rgb}{0.0,0.0,0.0}
\definecolor{cv5}{rgb}{0.0,0.0,0.0}
\definecolor{cfv5}{rgb}{1.0,1.0,1.0}
\definecolor{clv5}{rgb}{0.0,0.0,0.0}
\definecolor{cv6}{rgb}{0.0,0.0,0.0}
\definecolor{cfv6}{rgb}{1.0,1.0,1.0}
\definecolor{clv6}{rgb}{0.0,0.0,0.0}
\definecolor{cv7}{rgb}{0.0,0.0,0.0}
\definecolor{cfv7}{rgb}{1.0,1.0,1.0}
\definecolor{clv7}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v7}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v7}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v7}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v7}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$f_0$},x=0.0cm,y=5.0cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$f_1$},x=1.6667cm,y=5.0cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$f_2$},x=3.3333cm,y=5.0cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$f_3$},x=5.0cm,y=5.0cm]{v3}
\Vertex[style={minimum size=1.0cm,draw=cv4,fill=cfv4,text=clv4,shape=circle},LabelOut=false,L=\hbox{$v_0$},x=0.0cm,y=0.0cm]{v4}
\Vertex[style={minimum size=1.0cm,draw=cv5,fill=cfv5,text=clv5,shape=circle},LabelOut=false,L=\hbox{$v_1$},x=1.6667cm,y=0.0cm]{v5}
\Vertex[style={minimum size=1.0cm,draw=cv6,fill=cfv6,text=clv6,shape=circle},LabelOut=false,L=\hbox{$v_2$},x=3.3333cm,y=0.0cm]{v6}
\Vertex[style={minimum size=1.0cm,draw=cv7,fill=cfv7,text=clv7,shape=circle},LabelOut=false,L=\hbox{$v_3$},x=5.0cm,y=0.0cm]{v7}
\Edge[lw=0.1cm,style={color=cv0v4,},](v0)(v4)
\Edge[lw=0.1cm,style={color=cv0v5,},](v0)(v5)
\Edge[lw=0.1cm,style={color=cv0v6,},](v0)(v6)
\Edge[lw=0.1cm,style={color=cv1v5,},](v1)(v5)
\Edge[lw=0.1cm,style={color=cv1v6,},](v1)(v6)
\Edge[lw=0.1cm,style={color=cv1v7,},](v1)(v7)
\Edge[lw=0.1cm,style={color=cv2v4,},](v2)(v4)
\Edge[lw=0.1cm,style={color=cv2v6,},](v2)(v6)
\Edge[lw=0.1cm,style={color=cv2v7,},](v2)(v7)
\Edge[lw=0.1cm,style={color=cv3v4,},](v3)(v4)
\Edge[lw=0.1cm,style={color=cv3v5,},](v3)(v5)
\Edge[lw=0.1cm,style={color=cv3v7,},](v3)(v7)
\end{tikzpicture}
\caption{ The vertex-face incidence graph of the planar embedding of $K_4$}
\end{subfigure}
\caption{The circular embedding of $K_4$ and its corresponding vertex-face incidence graph}
\end{figure}
In \cite{harmonyphd}, Zhan focuses on circular orientable embeddings of a graph $G$ such that both $G$ and its dual graph are regular. The embedding $\mathcal{M}$ has type $(k,l)$ if each vertex has degree $l$ and each face uses $k$ vertices. Note that a vertex-face walk over a $(k,l)$-type embedding $\mathcal{M}$ corresponds to a bipartite walk on the $(k,l)$-semi-regular bipartite graph that is the vertex-face incidence graph of $\mathcal{M}$.
\begin{theorem}[Theorem~$8.5.4$ in~\cite{DQW}]
Let $G$ be a semi-regular bipartite graph with degrees $(k,l)$ and let $P_0,P_1$ denote its two parts. Let $\pi_0,\pi_1$ denote the partitions of the edges of $G$ according to $P_0,P_1$ respectively. Let $U$ be the bipartite walk transition matrix for $G$. Then
\[
U^2=\exp\left(\gamma(U-U^T)\right)
\] for some real number $\gamma$ if and only if $G$ has four or five distinct eigenvalues. Moreover,
\[ S=
\frac{kl}{4}(U^T-U)
\] is the skew-adjacency matrix of some oriented graph on the edges of $G$.
Let $c_{0,k}$ denote the cell of partition $\pi_0$ containing edge $e_k$ and similarly, $c_{1,k}$ denote the cell of partition $\pi_1$ containing edge $e_k$. Then we have
\[
S_{i,j}=\begin{cases}
1,\quad\text{if }\abs{c_{0,i}\cap c_{1,j}}=1\text{ and }\abs{c_{1,i}\cap c_{0,j}}=0, \\[2.5mm]
-1,\quad\text{if }\abs{c_{0,i}\cap c_{1,j}}=0\text{ and }\abs{c_{1,i}\cap c_{0,j}}=1, \\[2.5mm]
0,\quad\text{otherwise.}
\end{cases}\qed
\]
\end{theorem}
A partial geometric design with parameters $(d,k,t,c)$ is a point-$d$-regular and block-$k$-regular design where, for each point-block pair $(p,B)$, the number of incident point-block pairs
\[
\abs{ \{(p',B'): p'\neq p,\ B'\neq B,\ p'\in B,\ p\in B'\}}
\]
equals $c$ if $p\in B$ and $t$ otherwise.
In Theorem~$8.5.5$ of~\cite{DQW}, Godsil and Zhan have shown that when $G$ is the incidence graph of a partial geometric design, we have that
\[
U^2=\exp\left(\gamma(U-U^T)\right)
\] for some real number $\gamma$.
\section{Vertex-Face walks on complete graphs}
\label{vxf on CompleteG}
In \cite{biggembedding}, Biggs shows that $K_n$ has a regular embedding if and only if $n$ is a prime power, and that every regular embedding of $K_n$ must arise from the rotation system stated in~\cite{harmonyphd}.
\begin{lemma}[Theorem $5.6.2$ in~\cite{harmonyphd}] Let $n=p^k$ for some prime $p$. Let $g$ be a primitive element of the finite field $\mathbb{F}$ of order $n$. For each element $u$ in $\mathbb{F}$, define the cyclic permutation
\[
\pi_u=\left(u+g^0,u+g^1,\cdots,u+g^{n-2}\right).
\]
The rotation system $\{\pi_u:u\in V(K_n)\}$ gives a circular embedding of $K_n$.
\end{lemma}
In the case of $H$-digraphs arising from the vertex-face walk on $K_n$, we know that the skew-adjacency matrix of the $H$-digraph, $A\big(\overrightarrow{H}\big)$, is indexed by the arcs of $K_n$.
Let $f_{ab}$ denote the unique face that contains arc $(a,b)$.
From the proof of Theorem~$8.5.4$ in \cite{DQW}, we have that
\[
A\big(\overrightarrow{H}\big)_{(a,b),(c,d)}=\begin{cases}
1,\quad\text{if }c\in f_{ab}\text{ and }a\not\in f_{cd}, \\[2.5mm]
-1,\quad\text{if }a\in f_{cd}\text{ and }c\not\in f_{ab}, \\[2.5mm]
0,\quad\text{otherwise.}
\end{cases}
\]
Note that in a self-dual circular embedding of $K_n$, each face consists of $n-1$ distinct vertices, which implies that each face misses a unique vertex of $K_n$.
We use $LD\left(K_n\right)$ to denote the line digraph of $K_n$.
\begin{theorem}
\label{H-digraph of vx-fc of K_n}
The $H$-digraph $Z_n$ obtained from the vertex-face walk on a self-dual embedding of $K_n$
is the line digraph of $K_n$.
\end{theorem}
\proof
We construct an isomorphism from $Z_n$ to $LD(K_n)$. Define a map $f:V(Z_n)\rightarrow V\left(LD(K_n)\right)$ as
\[
(a,b)\mapsto (u,a),
\]
where $u$ is the unique vertex missed by $f_{ab}$. First we show that $f$ is a homomorphism. Say
\[
f(a,b)=(u,a),\quad f(c,d)=(v,c),
\]
which implies that $u$ is the unique vertex missed by $f_{ab}$ and $v$ is the unique vertex missed by $f_{cd}$. We know that there is an arc from $(a,b)$ to $(c,d)$ in $Z_n$ if and only if
\[
c\in f_{ab}\text{ and }a\not\in f_{cd}.
\]
Since each face misses a unique vertex in the circular embedding of $K_n$, we must have that
\[
a=v,
\] which means that there is an arc from $f(a,b)$ to $f(c,d)$ in $LD(K_n)$. Thus, the map $f$ is indeed a homomorphism.
Now we prove that $f$ is a bijection; since $LD(K_n)$ is finite, it suffices to prove that $f$ is an injection. Assume towards a contradiction that two distinct arcs $(a,b)$ and $(a',b')$ are mapped to $(x,y)$ by $f$. Then by the definition of $f$, we know that
\[
a=a'=y.
\]
The vertex $x$ is missed by both $f_{ab}$ and $f_{a'b'}=f_{ab'}$. Since the faces here arise from facial walks on the circular embedding of $K_n$, we must have that
\[
(a,b)=(a',b').
\]
This means that $f$ has to be an injection and hence, a bijection. Therefore, we can conclude that the map $f$ gives an isomorphism from $Z_n$ to $LD(K_n)$. \qed
\begin{theorem}[Theorem~$5.6.3$ in \cite{harmonyphd}]
Let $n$ be a prime power. Let $U$ be the transition matrix of the vertex-face walk for a regular embedding of $K_n$. Then there is a $\gamma\in\mathbb{R}$ such that
\[
U=\exp\left(\gamma(U^T-U)\right).
\]
Further $U^T-U$ is a scalar multiple of the skew-adjacency matrix of an oriented graph, which
\begin{enumerate}[label=(\roman*)]
\item has $n(n-1)$ vertices,
\item is $(n-2)$-regular, and
\item has exactly three eigenvalues: $0$ and $\pm i\sqrt{n(n-2)}$.
\end{enumerate}
\end{theorem}
We rephrase Theorem~\ref{H-digraph of vx-fc of K_n} in terms of bipartite walks and obtain the following theorem.
\begin{theorem}
Let $G_n$ be an $(n-1)$-regular bipartite graph with each part of size $n$. Then the $H$-digraph obtained from the bipartite walk on $G_n$ is the line digraph of $K_n$.
\end{theorem}
\proof
Since every cell of $\pi_1$ misses a unique vertex in $C_0$ and every cell of $\pi_0$ misses a unique vertex in $C_1$, the proof of Theorem~\ref{H-digraph of vx-fc of K_n} applies here.\qed
\section{Paths and even cycles}
The vertex-face incidence graph of a cellular embedding of a graph must have degree at least three at each vertex. So neither a path nor a cycle can be a bipartite graph arising from the vertex-face incidence relation of a circular embedding. In this section, we discuss the bipartite walks defined on paths and even cycles.
\label{bipartite walk on paths}
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.55]
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv4}{rgb}{0.0,0.0,0.0}
\definecolor{cfv4}{rgb}{1.0,1.0,1.0}
\definecolor{clv4}{rgb}{0.0,0.0,0.0}
\definecolor{cv5}{rgb}{0.0,0.0,0.0}
\definecolor{cfv5}{rgb}{1.0,1.0,1.0}
\definecolor{clv5}{rgb}{0.0,0.0,0.0}
\definecolor{cv6}{rgb}{0.0,0.0,0.0}
\definecolor{cfv6}{rgb}{1.0,1.0,1.0}
\definecolor{clv6}{rgb}{0.0,0.0,0.0}
\definecolor{cv7}{rgb}{0.0,0.0,0.0}
\definecolor{cfv7}{rgb}{1.0,1.0,1.0}
\definecolor{clv7}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv4v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv5v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv6v7}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$0$},x=4.5cm,y=5cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$1$},x=0cm,y=5cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$2$},x=4.5cm,y=3cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$3$},x=0cm,y=3cm]{v3}
\Vertex[style={minimum size=1.0cm,draw=cv4,fill=cfv4,text=clv4,shape=circle},LabelOut=false,L=\hbox{$4$},x=4.5cm,y=1cm]{v4}
\Vertex[style={minimum size=1.0cm,draw=cv5,fill=cfv5,text=clv5,shape=circle},LabelOut=false,L=\hbox{$5$},x=0cm,y=1cm]{v5}
\Vertex[style={minimum size=1.0cm,draw=cv6,fill=cfv6,text=clv6,shape=circle},LabelOut=false,L=\hbox{$6$},x=4.5cm,y=-1cm]{v6}
\Vertex[style={minimum size=1.0cm,draw=cv7,fill=cfv7,text=clv7,shape=circle},LabelOut=false,L=\hbox{$7$},x=0cm,y=-1cm]{v7}
\Edge[lw=0.01cm,style={color=cv0v1,},label={$e_0$}](v0)(v1)
\Edge[lw=0.01cm,style={color=cv1v2,},label={$e_1$}](v1)(v2)
\Edge[lw=0.01cm,style={color=cv2v3,},label={$e_2$}](v2)(v3)
\Edge[lw=0.01cm,style={color=cv3v4,},label={$e_3$}](v3)(v4)
\Edge[lw=0.01cm,style={color=cv4v5,},label={$e_4$}](v4)(v5)
\Edge[lw=0.01cm,style={color=cv5v6,},label={$e_5$}](v5)(v6)
\Edge[lw=0.01cm,style={color=cv6v7,},label={$e_6$}](v6)(v7)
\end{tikzpicture}
\end{center}
\caption{$P_8$}
\end{figure}
We label the vertices of $P_n$ as $v_0,v_1,\cdots,v_{n-1}$ in order along the path. Note that $v_0,v_{n-1}$ are the only two vertices of degree $1$; all the others have degree $2$. Partition $\pi_0$ is the partition of edges in which edges sharing an end in $\{v_1,v_3,\cdots,v_{n-1}\}$ are in the same cell. Partition $\pi_1$ is the partition of edges in which edges sharing an end in $\{v_0,v_2,\cdots,\allowbreak v_{n-2}\}$ are in the same cell. Edge $e_i$ is the edge between $v_i,v_{i+1}$ for all integers $0\leq i\leq n-2$.
Recall that $P,Q$ are the projections onto the vectors that are constant on the cells of $\pi_0,\pi_1$ respectively. Let $c_i$ denote the characteristic vector of the edges adjacent to vertex $i$. The column space of $Q$ is
\[
\col(Q)=\SPAN\{c_0,c_2,\cdots,c_{n-2}\}.
\]
The matrix $2Q-I$ is a reflection about the column space of $Q$, which is the span of the cells of $\pi_1$. If two edges belong to the same cell, then they are ``cellmates" of each other.
Note that every vertex of a path has degree $\leq 2$, which means that each edge has at most one cellmate in each partition. For each $0\leq i\leq n-2$, let $e_j$ be the cellmate of $e_i$ in $\pi_1$. Using that each cell in $\pi_0,\pi_1$ has size $\leq 2$, we have that
\[
(2Q-I)e_i=e_j.
\] Similarly, if $e_i,e_j$ are cellmates in $\pi_0$, then we have that
\[
(2P-I)e_i=e_j.
\] Here both reflections $2P-I$ and $2Q-I$ are permutation matrices.
Thus, the transition matrix $U=(2P-I)(2Q-I)$ of the bipartite walk on $P_n$
is a permutation matrix such that for each integer $0\leq i\leq n-2$, \begin{equation}
\label{path U permutation eq}
Ue_i=
\begin{cases}
e_{i+2}, \quad\text{if }i \text{ is odd and }i\neq n-3;\\
e_{i-2}, \quad\text{if }i \text{ is even and }i\neq 0; \\
e_{1}, \quad\text{if }i =0;\\
e_{n-2},\quad\text{if }i =n-3.
\end{cases}
\end{equation}
\begin{theorem}
\label{path U permutation}
The transition matrix of the bipartite walk on $P_n$ corresponds to an $(n-1)$-cycle permutation whose cycle form is
\[
\left(e_0,e_1,e_3,\cdots,e_{n-3},e_{n-2},e_{n-4},\cdots,e_2\right).
\]
\end{theorem}
\proof It follows from the discussion above.\qed
For example, the transition matrix of the bipartite walk on $P_8$ is
\[
U=
\begin{pmatrix}
0 & 0 & 1 & 0 & 0 & 0& 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1& 0 & 0 \\
0 & 1& 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1&0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0& 1 & 0
\end{pmatrix}.
\] This corresponds to the permutation $(0135642)$ in $S_7$ and we have that \[
U^7=I.
\] Since $U(P_8)$ is a permutation matrix of order $7$, every edge of $P_8$ can be mapped to any other edge within $7$ steps of the bipartite walk. This is an interesting phenomenon called \textsl{universal perfect state transfer}. Note that if $U$ is the transition matrix of the bipartite walk on $P_n$, then
\[
U^{n-1}=I,
\]which implies that for every $n$, the bipartite walk on $P_n$ has the universal perfect state transfer. We will discuss this property further in the next section.
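The permutation structure above is easy to verify numerically. The following sketch is our own check for $n=8$: it builds $U$ from the two reflections and confirms both the cycle form from the theorem and $U^7=I$.

```python
import numpy as np

# Quick numerical check (ours, not from the paper): for P_8 the bipartite-walk
# transition matrix is the 7-cycle (e0 e1 e3 e5 e6 e4 e2), and U^7 = I.
n = 8
edges = [(i, i + 1) for i in range(n - 1)]

def reflection(part):
    """The reflection 2*Proj - I for the edge partition induced by `part`."""
    m = len(edges)
    R = -np.eye(m)
    for v in part:
        cell = [i for i, e in enumerate(edges) if v in e]
        x = np.zeros(m)
        x[cell] = 1.0 / np.sqrt(len(cell))
        R += 2 * np.outer(x, x)
    return R

# pi_0 groups edges at odd vertices, pi_1 groups edges at even vertices
U = reflection(range(1, n, 2)) @ reflection(range(0, n, 2))

cycle = [0, 1, 3, 5, 6, 4, 2]          # cycle form from the theorem
V = np.zeros((7, 7))
for i in range(7):
    V[cycle[(i + 1) % 7], cycle[i]] = 1  # V maps e_{cycle[i]} to e_{cycle[i+1]}

print(np.allclose(U, V))                                     # same permutation
print(np.allclose(np.linalg.matrix_power(U, 7), np.eye(7)))  # U^7 = I
```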
Since the cyclic permutation matrix $U$ has order $n-1$, it has eigenvalues
\[
\lambda_k=\left(e^{\frac{2\pi i}{n-1}}\right)^k
\] with eigenvector
\begin{equation}
\label{eigenvector of U(Pn)}
f_k=\begin{pmatrix}
1&
\lambda_k^{-1}&
\lambda_k&
\lambda_k^{-2}&
\lambda_k^{2}&
\cdots&
\lambda_k^{-(n-2)/2}&
\lambda_k^{(n-2)/2}
\end{pmatrix}^T,
\end{equation}
for $k=0,\cdots,n-2$.
The projection onto the $\lambda_k$-eigenspace of $U$ is
\[
E_{\lambda_k}=\frac{1}{n-1}f_kf_k^*.
\]
Note that $E_1=\frac{1}{n-1}J.$
From the eigenvectors of $U$~(\ref{eigenvector of U(Pn)}), we know that for integers $s,t$ in $\{0,\cdots,n-2\}$, we have that
\begin{equation}
\label{entry of E_r of path}
\left(E_{\lambda_r}\right)_{s,t}=
\begin{cases}\frac{1}{n-1}(\lambda_r)^{-\frac{s+1}{2}}(\lambda_r)^{\frac{t+1}{2}}\quad\text{if both }s, t \text{ are odd;}\\[2mm]
\frac{1}{n-1}(\lambda_r)^{\frac{s}{2}}(\lambda_r)^{\frac{t+1}{2}}\quad
\text{if }s\text{ is even and } t \text{ is odd;}\\[2mm]
\frac{1}{n-1}(\lambda_r)^{-\frac{s+1}{2}}(\lambda_r)^{-\frac{t}{2}}\quad\text{if }s\text{ is odd and } t \text{ is even;}\\[2mm]
\frac{1}{n-1}(\lambda_r)^{\frac{s}{2}}(\lambda_r)^{-\frac{t}{2}}\quad\text{if both }s, t \text{ are even.}
\end{cases}
\end{equation}
\begin{theorem}
\label{H-digraph of Path}
For an even $n\geq 4$, the $H$-digraph obtained from the bipartite walk on $P_n$ is an oriented $K_{n-1}$.
\end{theorem}
\proof
As discussed above, the transition matrix of the bipartite walk on $P_n$ has spectral decomposition
\[
U=\sum_{k=0}^{n-2} \lambda_k E_{\lambda_k},
\] where
\[
\lambda_k=\left(e^{\frac{2\pi i}{n-1}}\right)^k.
\]
When $n$ is even, the Hamiltonian of $U$ is
\[
H=\sum_{k=0}^{(n-2)/2} \frac{2k\pi }{n-1}\left(E_{\lambda_k}-
\overline{E_{\lambda_k}}\right).
\]
To prove that the $H$-digraph is an oriented complete graph, we show that the Hamiltonian $H$ has non-zero off-diagonal entries. Since the eigenvector of $U$ with eigenvalue $\lambda_k$ is of the form~(\ref{eigenvector of U(Pn)}), each row of $E_{\lambda_k}$ is a permutation of its first row, which implies that each row of $H$ is a permutation of its first row. So in order to prove that all the off-diagonal entries of $H$ are non-zero, it is sufficient to prove that
\[
H_{0,t}\neq 0
\] for all $t\neq 0$.
Based on the formula for the $(s,t)$-entry of $E_{\lambda_r}$ shown in~(\ref{entry of E_r of path}),
for $r\in\{0,1,2,\cdots,n-2\}$ and $s,t\in\{0,1,\cdots,n-2\}$ we have that
\[
\left(E_{\lambda_r}-\overline{E_{\lambda_r}}\right)_{s,t}=\begin{cases}
\frac{2}{n-1}\sin\left(\frac{2\pi r}{n-1}\cdot \frac{t-s}{2}\right)i,\quad\text{if both }s, t \text{ are odd;}\\[2mm]
\frac{2}{n-1}\sin\left(\frac{2\pi r}{n-1}\cdot\frac{s+t+1}{2} \right)i,\quad\text{if }s\text{ is even and } t \text{ is odd;}\\[2mm]
\frac{2}{n-1}\sin\left(\frac{2\pi r}{n-1}\cdot \frac{-t-s-1}{2}\right)i,\quad\text{if }s\text{ is odd and } t \text{ is even;}\\[2mm]
\frac{2}{n-1}\sin\left(\frac{2\pi r}{n-1}\cdot \frac{s-t}{2}\right)i,\quad\text{if both }s, t \text{ are even;}\\[2mm]
0,\quad\text{if }s=t.
\end{cases}
\]
Then the entries of the first row of $H$ are
\[
\left( H\right)_{0,t}=\sum_{k=0}^{(n-2)/2} \frac{2k\pi }{n-1}\left(E_{\lambda_k}-\overline{E_{\lambda_k}}\right)_{0,t}= \begin{cases}
i\sum_{k=0}^{(n-2)/2} \frac{4k\pi }{(n-1)^2}\sin\left(\frac{2k\pi}{n-1}\cdot\frac{t+1}{2}\right), \quad\text{if }t \text{ is odd;}\\[3mm]
i\sum_{k=0}^{(n-2)/2} \frac{4k\pi }{(n-1)^2}\sin\left(\frac{2k\pi}{n-1}\cdot\frac{-t}{2}\right), \quad\text{if }t \text{ is even and }t\neq 0;\\[3mm]
0, \quad\text{if }t=0.
\end{cases}
\]
When $n=2a+2$ for some integer $a\geq 1$, for each positive odd integer $b$ we have that
\begin{equation}
\label{sum sine}
\sum_{k=0}^{(n-2)/2} \frac{2k\pi }{n-1}\sin\left(\frac{2k\pi}{n-1}\cdot b\right)
=\frac{\pi\csc\left(\frac{b\pi}{2a+1}\right)\left(2(a+1)+\sin\left(\frac{2b\pi(a+1)}{2a+1}\right)\csc\left(\frac{b\cdot\pi}{2a+1}\right)\right)}{4a+2}
\end{equation} and
for each positive even integer $b$, we have that
\begin{equation}
\label{even sum sine}
\sum_{k=0}^{(n-2)/2} \frac{2k\pi }{n-1}\sin\left(\frac{2k\pi}{n-1}\cdot b\right)
=\frac{\pi\csc\left(\frac{b\pi}{2a+1}\right)\left(-2(a+1)+\sin\left(\frac{2b\pi(a+1)}{2a+1}\right)\csc\left(\frac{b\cdot\pi}{2a+1}\right)\right)}{4a+2}.
\end{equation}
Since the sine function is odd, we only need to show that $H_{0,t}\neq 0$ for all $1\leq t\leq \frac{n}{2}$.
Since $\csc(x)\neq 0$ everywhere on its domain and, for $1\leq b\leq a+1$,
\[
\sin\left(\frac{2b\pi(a+1)}{2a+1}\right)\csc\left(\frac{b\pi}{2a+1}\right)\pm 2(a+1)\neq 0,
\]
the sums shown in~(\ref{sum sine}) and (\ref{even sum sine}) are non-zero for all $1\leq b\leq a+1$. Thus, we have that
\[
\left( H\right)_{0,t}\neq 0
\]
for all $t\neq 0$.
Therefore, we can conclude that the $H$-digraph is an oriented $K_{n-1}$.\qed
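The theorem can be spot-checked numerically. The sketch below is our own check for $n=8$: the principal Hamiltonian of the bipartite walk on $P_8$ is $iS$ with $S$ real, skew-symmetric, and with every off-diagonal entry non-zero, i.e. the $H$-digraph is an oriented $K_7$.

```python
import numpy as np
from scipy.linalg import logm

# Numerical spot-check (ours) of the theorem for n = 8.
n = 8
edges = [(i, i + 1) for i in range(n - 1)]

def reflection(part):
    """The reflection 2*Proj - I for the edge partition induced by `part`."""
    m = len(edges)
    R = -np.eye(m)
    for v in part:
        cell = [i for i, e in enumerate(edges) if v in e]
        x = np.zeros(m)
        x[cell] = 1.0 / np.sqrt(len(cell))
        R += 2 * np.outer(x, x)
    return R

U = reflection(range(1, n, 2)) @ reflection(range(0, n, 2))

S = -np.real(logm(U))          # principal logarithm: H = -i*logm(U) = i*S
off_diagonal = S[~np.eye(n - 1, dtype=bool)]

print(np.allclose(S, -S.T))                   # S is skew-symmetric
print(np.all(np.abs(off_diagonal) > 1e-8))    # no zero off-diagonal entries:
                                              # an oriented complete graph K_7
```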
Note that when $n$ is odd, the adjacency matrix of $P_n$ is not invertible and so we consider the Hamiltonian of $U^2$. When $n=3$, the Hamiltonian of $U^2$ is the zero matrix. When $n\equiv 1\Mod 4$, the square of the transition matrix, $U^2$, still has $-1$ as an eigenvalue, which implies that there is no real skew-symmetric $S$ such that the Hamiltonian of $U^2$ is of the form $iS$. So here, we omit the case $n\equiv 1\Mod 4$.
\begin{corollary}
\label{odd path H-digraph}
When $n\equiv 3\Mod 4$, let
\[
U^2=\exp(iH),
\]
then $H$ is the weighted skew adjacency matrix of two copies of oriented $K_{\frac{n-1}{2}}$.
\end{corollary}
\proof By Theorem~\ref{path U permutation}, we know that $U^2$ corresponds to two $\left(\frac{n-1}{2}\right)$-cycles. Each $\left(\frac{n-1}{2}\right)$-cycle is equivalent to the permutation associated with the transition matrix of $P_{\frac{n+1}{2}}$. The result follows from Theorem~\ref{H-digraph of Path}.\qed
Even cycles are another class of bipartite graphs that cannot arise from the vertex-face incidence relation of a circular embedding.
For an even integer $n$, consider a path $P_n$ with the same labelling as before and add an edge $e_{n-1}$ between $v_0$ and $v_{n-1}$, which gives an even cycle $C_n$.
Partition $\pi_0$ is the partition of the edges based on the vertices $\{v_1,v_3,\cdots,v_{n-1}\}$ and partition $\pi_1$ is the partition of the edges based on the vertices $\{v_0,v_2,\cdots,v_{n-2}\}$.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv4}{rgb}{0.0,0.0,0.0}
\definecolor{cfv4}{rgb}{1.0,1.0,1.0}
\definecolor{clv4}{rgb}{0.0,0.0,0.0}
\definecolor{cv5}{rgb}{0.0,0.0,0.0}
\definecolor{cfv5}{rgb}{1.0,1.0,1.0}
\definecolor{clv5}{rgb}{0.0,0.0,0.0}
\definecolor{cv6}{rgb}{0.0,0.0,0.0}
\definecolor{cfv6}{rgb}{1.0,1.0,1.0}
\definecolor{clv6}{rgb}{0.0,0.0,0.0}
\definecolor{cv7}{rgb}{0.0,0.0,0.0}
\definecolor{cfv7}{rgb}{1.0,1.0,1.0}
\definecolor{clv7}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v4}{rgb}{0.0,0.0,0.0}
\definecolor{cv4v5}{rgb}{0.0,0.0,0.0}
\definecolor{cv5v6}{rgb}{0.0,0.0,0.0}
\definecolor{cv6v7}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$0$},x=4.5cm,y=6cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$1$},x=0cm,y=6cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$2$},x=4.5cm,y=2.5cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$3$},x=0cm,y=2.5cm]{v3}
\Vertex[style={minimum size=1.0cm,draw=cv4,fill=cfv4,text=clv4,shape=circle},LabelOut=false,L=\hbox{$4$},x=4.5cm,y=0.5cm]{v4}
\Vertex[style={minimum size=1.0cm,draw=cv5,fill=cfv5,text=clv5,shape=circle},LabelOut=false,L=\hbox{$5$},x=0cm,y=0.5cm]{v5}
\Edge[lw=0.01cm,style={color=cv0v1,},label={$e_0$}](v0)(v1)
\Edge[lw=0.01cm,style={color=cv5v6,},label={$e_1$}](v1)(v2)
\Edge[lw=0.01cm,style={color=cv1v2,},label={$e_2$}](v2)(v3)
\Edge[lw=0.01cm,style={color=cv2v3,},label={$e_3$}](v3)(v4)
\Edge[lw=0.01cm,style={color=cv3v4,},label={$e_4$}](v5)(v4)
\Edge[lw=0.01cm,style={color=cv4v5,},label={$e_5$}](v0)(v5)
\end{tikzpicture}
\end{center}
\caption{$C_6$}
\end{figure}
When $n$ is even and $U$ is the transition matrix of the bipartite walk on $C_n$,
the same argument used for the transition matrix of the bipartite walk on paths shows that
\begin{equation}
\label{cycle U permutation}
Ue_i=
\begin{cases}
e_{i+2\Mod n} \quad\text{if }i \text{ is odd;}\\
e_{i-2 \Mod n} \quad\text{if }i \text{ is even.} \\
\end{cases}
\end{equation}
\begin{theorem}
\label{U cycle permutation}
When $n$ is even, the transition matrix $U$ of the bipartite walk on $C_n$ is a permutation matrix of order $n/2$.
\end{theorem}
\proof The mapping relation~\ref{cycle U permutation} implies that $U$ is a permutation whose cycle form is
\[
(e_0,e_{n-2},\cdots,e_2)(e_1,e_3,\cdots,e_{n-1}).\qed
\]
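The cycle structure claimed in the proof can be checked directly from the mapping rule~\ref{cycle U permutation}; the helper names below are ours:

```python
def bipartite_cycle_permutation(n):
    """The rule i -> i+2 (i odd) or i -> i-2 (i even), taken mod n."""
    assert n % 2 == 0
    return {i: (i + 2) % n if i % 2 == 1 else (i - 2) % n for i in range(n)}

def cycles(perm):
    """Cycle decomposition of a permutation given as a dict."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

for n in (6, 8, 10, 12):
    cs = cycles(bipartite_cycle_permutation(n))
    # two cycles of length n/2, so the permutation has order n/2
    assert sorted(len(c) for c in cs) == [n // 2, n // 2]
```

For $n=6$ this recovers exactly the cycle form $(e_0,e_4,e_2)(e_1,e_3,e_5)$ stated above.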
Note that eigenvalues of $C_n$ are
\[
\Bigg\{2\cos\left(\frac{2\pi k}{n}\right): k\in\{0,1,\cdots,n-1\}\Bigg\}.
\]
So when $n\equiv 0 \Mod 4$, the adjacency matrix of $C_n$ is not invertible and we consider the Hamiltonian of $U^2$ instead.
\begin{corollary}
\label{H-digraph of cycles}
Let $U$ be the transition matrix of the bipartite walk on $C_n$ for some even $n$.
When $n\equiv 2 \Mod 4$, let $H$ be the Hamiltonian of $U$, then the corresponding $H$-digraph is two copies of a weighted oriented $K_{\frac{n}{2}}$. When $n\equiv 0 \Mod 4$ and $n\geq 12$, let $H$ be the Hamiltonian of $U^2$, then the corresponding $H$-digraph is three copies of a weighted oriented $K_{\frac{n}{4}}$.
\end{corollary}
\proof From Theorem~\ref{U cycle permutation}, the transition matrix $U$ is a product of two $\frac{n}{2}$-cycles and each cycle is the permutation associated with the transition matrix of the bipartite walk on $P_{\frac{n}{2}+1}$. The results follow from Theorem~\ref{H-digraph of Path} and Corollary~\ref{odd path H-digraph}.\qed
Note that when $n=4$, the Hamiltonian of $U$ is the zero matrix. When $n=8$, the transition matrices $U$ and $U^2$ both have $-1$ as an eigenvalue. There is no real skew-symmetric $S$ such that the Hamiltonian of $U$ or of $U^2$ is of the form $iS$, and so we omit the case $n=8$.
\section{Universal PST}
Let $U$ be the transition matrix of a continuous walk defined on a graph $G$. We say there is perfect state transfer from state $a$ to state $b$ at time $t$ if \[
\abs{U(t)_{a,b}}=1.
\]
A graph $G$ has universal perfect state transfer if it has perfect state transfer between every pair of its vertices.
According to Cameron et al.~in~\cite{Cameron2014}, the only known graphs that have universal perfect state transfer are the oriented $K_2$ and $C_3$ with constant weight $i$ assigned to each arc.
In this section, we show that bipartite walks can be used to construct weighted oriented graphs on which the continuous quantum walk has universal perfect state transfer. Note that when we talk about continuous walks on a weighted graph, the Hamiltonian is the weighted adjacency matrix $A$ of the graph, i.e., the transition matrix is of the form
\[
\exp(iA).
\]
If the transition matrix $U$ of a bipartite walk is a permutation matrix with finite order, then its $H$-digraph has universal perfect state transfer.
\begin{lemma}
\label{when U is permutation}
Let $G$ be a connected bipartite graph.
The transition matrix of the bipartite walk on $G$ is a permutation matrix if and only if every vertex of $G$ has degree either $1$ or $2$.
\end{lemma}
\proof Here, we use the notation defined in Section~\ref{Intro}. If every vertex of $G$ has degree either $1$ or $2$, then both $2P-I$ and $2Q-I$ are permutation matrices. Hence, the transition matrix $U$ is also a permutation matrix.
For the other direction, note that $2P-I,2Q-I$ are reflections about the spaces spanned by the characteristic vectors of the cells of $\pi_0,\pi_1$ respectively, and the cells within one partition are disjoint. Then in order for $U$ to map an edge $e_i$ to another edge $e_j$, the size of each cell of both partitions $\pi_0,\pi_1$ cannot be greater than two.\qed
We have shown in Theorem~\ref{path U permutation} that the transition matrix of the bipartite walk over $P_n$ for some even $n$ is a permutation matrix with finite order. We can use this to produce weighted graphs over which continuous walks have universal perfect state transfer.
The following corollary follows directly from the fact that $U^{n-1}=I$ and Theorem~\ref{H-digraph of Path}.
\begin{corollary} Let $n$ be an even integer. Let $s,t$ be distinct integers in $\{0,\cdots,n-2\}$. We define
\[
\alpha=\begin{cases}
\frac{t-s}{2}\quad\text{if both }s, t \text{ are odd;}\\[2mm]
\frac{s+t+1}{2}\quad\text{if }s\text{ is even and } t \text{ is odd;}\\[2mm]
\frac{-t-s-1}{2}\quad\text{if }s\text{ is odd and } t \text{ is even;}\\[2mm]
\frac{s-t}{2}\quad\text{if both }s, t \text{ are even.}
\end{cases}
\]
The edge $(s,t)$ of $K_{n-1}$ is assigned the weight
\[
\frac{2}{n-1}
\sum_{r=1}^{\frac{n}{2}-1}\frac{2\pi r}{(n-1)} \sin\left(\frac{2\pi r}{n-1}\alpha\right)
\]
for all distinct $s,t\in\{0,\cdots,n-2\}$. Let $A$ be the weighted adjacency matrix of the resulting weighted $K_{n-1}$. Then the continuous walk with transition matrix $\exp(iA)$ has universal perfect state transfer, and every state is transferred perfectly to every other state within time $t\leq n-1$.
\end{corollary}
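As a numerical illustration of the corollary, the sketch below builds the real skew-symmetric weight matrix $S$ from the stated weights for $n=8$ and checks that the resulting transition matrix is a single $(n-1)$-cycle permutation matrix, which gives perfect state transfer between every pair of states. We assume the convention of the previous sections that the Hamiltonian has the form $iS$, so the transition matrix is $\exp(i\cdot iS)=\exp(-S)$; the opposite sign convention simply produces the inverse permutation, so the check is insensitive to the orientation choice. All function names are ours, and the matrix exponential is a plain Taylor series, adequate for these small matrices.

```python
import math

def weight(s, t, n):
    """Edge weight from the corollary; alpha depends on the parities of s and t."""
    m = n - 1
    if s % 2 == 1 and t % 2 == 1:
        alpha = (t - s) / 2
    elif s % 2 == 0 and t % 2 == 1:
        alpha = (s + t + 1) / 2
    elif s % 2 == 1 and t % 2 == 0:
        alpha = (-t - s - 1) / 2
    else:
        alpha = (s - t) / 2
    return (2.0 / m) * sum((2 * math.pi * r / m) * math.sin(2 * math.pi * r * alpha / m)
                           for r in range(1, n // 2))

def expm(M, terms=120):
    """Taylor-series matrix exponential: sum_k M^k / k!."""
    d = len(M)
    P = [[float(i == j) for j in range(d)] for i in range(d)]  # identity
    T = [row[:] for row in P]
    for k in range(1, terms):
        T = [[sum(T[i][l] * M[l][j] for l in range(d)) / k for j in range(d)]
             for i in range(d)]
        P = [[P[i][j] + T[i][j] for j in range(d)] for i in range(d)]
    return P

n = 8
m = n - 1
S = [[0.0 if s == t else weight(s, t, n) for t in range(m)] for s in range(m)]
U = expm([[-x for x in row] for row in S])  # exp(-S)
perm = {}
for s in range(m):
    big = [t for t in range(m) if abs(U[s][t]) > 0.999]
    assert len(big) == 1 and all(abs(U[s][t]) < 1e-6 for t in range(m) if t != big[0])
    perm[s] = big[0]
# a single (n-1)-cycle: powers of U carry every state onto every other state
seen, i = [], 0
while i not in seen:
    seen.append(i)
    i = perm[i]
assert len(seen) == m
```

The check works because $\alpha$ is antisymmetric in $s,t$, so $S$ is skew-symmetric, and the chosen weights place the eigenvalues of $S$ at $\pm 2\pi i k/(n-1)$, exactly those of the logarithm of a full-cycle permutation.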
\section{Open questions}
Since the Hamiltonian of a continuous quantum walk is symmetric, perfect state transfer in continuous walks is symmetric. That is, if there is a time $t$ at which there is perfect state transfer from state $a$ to state $b$, then there is also perfect state transfer from $b$ to $a$. However, perfect state transfer in discrete quantum walks is not necessarily symmetric. Because the transition matrices of discrete quantum walks are not symmetric in general, perfect state transfer from $a$ to $b$ at some step does not guarantee a step at which the state is transferred back. In fact, there may be cases where there is perfect state transfer from state $a$ to state $b$ while there is no perfect state transfer from state $b$ to state $a$.
Recall that the transition matrix of the bipartite walk defined on the graph in Figure~\ref{not return pst graph} is
\[
U=\begin{pmatrix}
0 & -\frac{1}{3} & 0 & \frac{2}{3}& \frac{2}{3} & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & 0 & 1 \\[2.5mm]
0 & \frac{2}{3} & 0 & \frac{2}{3} & -\frac{1}{3} & 0 & 0 \\[2.5mm]
0 & \frac{2}{3} & 0 & -\frac{1}{3} & \frac{2}{3} & 0 & 0 \\[2.5mm]
0 & 0 & 1 & 0 & 0 & 0 & 0 \\[2.5mm]
1 & 0 & 0 & 0 & 0 & 0 & 0 \\[2.5mm]
0 & 0 & 0 & 0 & 0 & 1 & 0
\end{pmatrix}
.\]
State $e_i$ is the characteristic vector of vertex $i$. It is easy to see that there is perfect state transfer from state $e_1$ to $e_6$ at step $k=1$. But up to $k=300000$ steps, no perfect state transfer from $e_6$ to $e_1$ is observed. We suspect that there is no perfect state transfer from $e_6$ to $e_1$. We would like to find a condition on the graph $G$ that determines whether or not perfect state transfer is symmetric.
So far, the graphs we have observed over which the bipartite walk has perfect state transfer all have minimum degree at most two. We would like to know if there is any graph $G$ with minimum degree at least three that admits perfect state transfer in the bipartite walk defined on $G$.
We would like to know how the structure of the graph $G$ affects the behavior of state transfer in the bipartite walk, and whether any feature of the bipartite walk can be determined by the combinatorial or algebraic properties of the graph it is defined on. This will be the future direction of our studies.
\bibliographystyle{plain}
\section{Free energy and average loop length \label{FLsection}}
\setcounter{equation}{0}
\baselineskip 18.5pt
The formulas of the preceding section in the case $d=2$ yield directly the free
energy density of the model~(\ref{ZFPL}) for the $n>2$ phase as the logarithm
of the maximum eigenvalue of the transfer matrix, rescaled by the factor of
equation~(\ref{identification}). This free energy was actually derived in 1970
by Baxter~\cite{Baxter1970} as the solution to a weighted three-coloring
problem on the honeycomb lattice. The free energy density of the FPL model in
the $n>2$ phase is
\begin{eqnarray}
F_{FPL}(n) & \equiv & \lim_{N \rightarrow \infty}
\frac{1}{N} \log Z_{FPL}(n) \nonumber \\
& = & \log \left\{ q^{1/3} \prod_{m=1}^\infty
\frac{(1-q^{-6m+2})^2}{(1-q^{-6m+4})(1-q^{-6m})} \right\}
\label{FFPL}
\end{eqnarray}
where $n=q+q^{-1}$, and $q=e^\gamma>1$. This function has an essential
singularity at $q=1$. The free energy for $n<2$ and with periodic boundary
conditions is given in integral form in~\cite{Batchelor}.
It is interesting to note that for both phases, the free energy density gives
the ensemble average length of loops. Since a configuration $C$ on a lattice
of
$N$ faces has $2N$ occupied links, the total length of loops is always $2N$.
The average loop length of configuration $C$ is therefore $2N/P(C)$.
If we define the ensemble average loop length $L_N(n)$ by
\begin{equation}
L_N(n) = \frac{1}{Z_{FPL}(n)} \sum_C \frac{2N}{P(C)} n^{P(C)}.
\label{Ldef}
\end{equation}
then from inspection of equation~(\ref{ZFPL}) it is clear that
\begin{equation}
\frac{d}{dn} \left[ L_N(n) Z_{FPL}(n) \right] = \frac{2N}{n} Z_{FPL}(n).
\label{Leqn}
\end{equation}
The general solution to this equation can be written up to quadrature by direct
integration:
\begin{equation}
L_N(n) = \frac{1}{Z_{FPL}(n)} \int^n_C \frac{2N}{n'} Z_{FPL}(n') dn'
\end{equation}
where the lower limit of integration is an undetermined constant. In terms of
the free energy density $F_{FPL} = (1/N) \log Z_{FPL}$, this becomes
\begin{equation}
L_N(n) = 2N e^{-N F_{FPL}(n)} \int^n_C \frac{e^{N F_{FPL}(n')}}{n'} dn'.
\label{Lintegral}
\end{equation}
The integral in equation~(\ref{Lintegral}) can be evaluated by steepest
descent. The result is
\begin{equation}
L_N(n) = 2N e^{-N F_{FPL}(n)}
\left[ \frac{e^{N F_{FPL}(n')}}
{N n' \frac{dF_{FPL}}{dn}(n')} \right]^n_C.
\end{equation}
The constant of integration may now be determined from the known value of
$L_N(n)$ as $n \rightarrow \infty$. As will be shown in
section~\ref{perturbSection}, in this limit $Z_{FPL}(n) \simeq 3 n^{N/3}$,
$\frac{dF_{FPL}}{dn}(n) \simeq 1/(3n)$, and $L_N(n) = 6$. These imply that $C =
- \infty$, so in the thermodynamic limit
\begin{equation}
L_N(n) = \frac{2}{n \frac{dF_{FPL}}{dn}}.
\end{equation}
In this calculation we have neglected corrections of order $1/N$ to $L_N(n)$.
A graph of the ensemble average loop length versus $n$ in the large-$n$ phase
is shown in Figure~\ref{lengthGraph}. This verifies the conjecture
of~\cite{Reshetikhin} that loop length diverges at the critical point.
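The divergence visible in Figure~\ref{lengthGraph} can be reproduced numerically from Baxter's free energy~(\ref{FFPL}), with $q=(n+\sqrt{n^2-4})/2$, by truncating the infinite product and differentiating with central differences. The function names below are ours:

```python
import math

def free_energy(n, M=200):
    """Baxter's free energy density (eq. FFPL), truncated at M product terms."""
    q = (n + math.sqrt(n * n - 4)) / 2
    F = math.log(q) / 3
    for m in range(1, M + 1):
        F += 2 * math.log(1 - q ** (2 - 6 * m))
        F -= math.log(1 - q ** (4 - 6 * m))
        F -= math.log(1 - q ** (-6 * m))
    return F

def loop_length(n, h=1e-6):
    """L(n) = 2 / (n F'(n)), with F'(n) taken by central difference."""
    dF = (free_energy(n + h) - free_energy(n - h)) / (2 * h)
    return 2 / (n * dF)

assert abs(loop_length(50) - 6) < 0.05  # large-n limit is 6
# the average loop length grows as n decreases toward the critical point n = 2
assert loop_length(2.05) > loop_length(2.5) > loop_length(4) > loop_length(50)
```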
\begin{figure}
\epsfbox{lengthGraph.ps}
\caption{This is a graph of the average loop length of the FPL model versus
$n$, the fugacity of loops. The critical point is at $n=2$.}
\label{lengthGraph}
\end{figure}
\section {Correlation length \label{xiSection}}
\setcounter{equation}{0}
\baselineskip 18.5pt
To obtain the correlation length, we must compute the
expression~(\ref{formula}) for the minimal hole distribution. When $d=2$,
there are two choices for the $N_q$. Either $N_1=3$ and $N_2=0$, or $N_1=1$
and $N_2=1$. In each case, the eigenvalue gap is minimized for holes at
$\theta^q_h=\frac{\pi}{2}$ where the sum in equation~(\ref{density}) after
integration in~(\ref{formula}) is oscillatory. The transfer matrix of the
model is symmetric at the point $\theta=-\frac{1}{2}\gamma$, and its
eigenvalues are then real. After setting $\theta$ to this value, the
correlation length of the model is given by equation~(\ref{xiDef}).
Considering the case $N_1=3$ and $N_2=0$, we denote the next-leading eigenvalue
for this hole distribution as $\Lambda_{30}$. The equation~(\ref{formula})
together with the formula for densities~(\ref{density}) gives
\begin{equation}
\log \frac{\Lambda_{30}}{\Lambda_{\rm max}} =
\sum_{m=-\infty}^{\infty} \phi_m \left(-\frac{1}{\pi}\right) e^{|m\gamma|}
3 (-1)^m \frac{\sinh(2m\gamma)}{\sinh(3m\gamma)}
\label{formula2}
\end{equation}
where $\phi_m$ are the integrals over roots,
\begin{equation}
\phi_m \equiv \int^{\pi/2}_{-\pi/2} e^{2im\lambda}
\log \left[ \frac{\sinh(i\lambda+\frac{1}{2}\gamma-\theta)}
{\sinh(i\lambda-\frac{1}{2}\gamma-\theta)} \right]
\,d\lambda.
\label{integral}
\end{equation}
The integral in equation~(\ref{integral}) can easily be performed by contour
integration. After introducing the variables $q=e^\gamma$ and $z=e^\theta$,
the result for $-\frac{1}{2}\gamma < \theta < 0$ is
\begin{equation}
\int^{\pi/2}_{-\pi/2} e^{2im\lambda}
\log \left[ \frac{\sinh(i\lambda+\frac{1}{2}\gamma-\theta)}
{\sinh(i\lambda-\frac{1}{2}\gamma-\theta)} \right]
\,d\lambda =
\left\{ \begin{array}{ll}
\frac{\pi}{m} [ 1-(z^2 q^{-1})^m ], & m>0 \\
-2\pi\log z, & m=0 \\
\frac{\pi}{m} [ 1-(z^2 q)^m ], & m<0
\end{array} \right. .
\end{equation}
Substituting this result into equation~(\ref{formula2}) gives
\begin{equation}
\log \frac{\Lambda_{30}}{\Lambda_{\rm max}} =
2 \log z + 3 \sum_{m>0} \frac{(-1)^m}{m} (z^{2m} - z^{-2m})
\left( \frac{q^{2m}-q^{-2m}}{q^{3m}-q^{-3m}} \right) .
\end{equation}
After expanding the denominator of the summand in a power series in $q^{-1}$,
this may be resummed to the form,
\begin{equation}
\log \frac{\Lambda_{30}}{\Lambda_{\rm max}} =
2 \log z - 3 \sum_{m \geq 0}
\log \left[ \frac{(1+z^2q^{-1}q^{-6m})(1+z^{-2}q^{-5}q^{-6m})}
{(1+z^{-2}q^{-1}q^{-6m})(1+z^2q^{-5}q^{-6m})} \right].
\end{equation}
This form is now convergent at the symmetric point,
$\theta = - \frac{1}{2}\gamma$ or equivalently $z^2=q^{-1}$. We may therefore
evaluate it there to obtain the correlation length according to
equation~(\ref{xiDef}),
\begin{equation}
\xi^{-1} = 3 \log \left\{
q^{1/3} \prod_{m>0} \frac{(1+q^2q^{-6m})(1+q^4q^{-6m})}
{(1+q^{-6m})(1+q^6q^{-6m})}
\right\}.
\label{xi}
\end{equation}
This is the desired result, the correlation length of the FPL model where
$n=q+q^{-1}$ and $q>1$, or equivalently $q=\frac{1}{2}\left(n+\sqrt{n^2-4}\right)$.
The other possible choice of holes, $N_1=1$ and $N_2=1$ may be computed in
the same way to give
\begin{equation}
\log \frac{\Lambda_{\rm max}}{\Lambda_{11}} =
\log \left\{ q \prod_{m>0}
\frac{(1+q^2q^{-6m})(1+q^3q^{-6m})(1+q^3q^{-6m})(1+q^4q^{-6m})}
{(1+q^{-6m})(1+qq^{-6m})(1+q^5q^{-6m})(1+q^6q^{-6m})} \right\} .
\label{other}
\end{equation}
This quantity is greater than~(\ref{xi}) for all $q>1$, so it is not the
inverse correlation length. For large $q$, the inequality may be seen by
considering the limiting forms of expressions~(\ref{xi}) and~(\ref{other}).
Rigorously, the multiplicands in~(\ref{other}) may be seen to be greater than
those in~(\ref{xi}) term by term in $m$.
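The inequality between the two gaps can also be checked numerically by truncating the infinite products; \texttt{gap\_30} and \texttt{gap\_11} below are our names for the right-hand sides of~(\ref{xi}) and~(\ref{other}):

```python
import math

def gap_30(q, M=400):
    """Right-hand side of eq. (xi): 3*log{ q^(1/3) * prod (...) }."""
    s = math.log(q) / 3
    for m in range(1, M + 1):
        s += math.log((1 + q ** (2 - 6 * m)) * (1 + q ** (4 - 6 * m)))
        s -= math.log((1 + q ** (-6 * m)) * (1 + q ** (6 - 6 * m)))
    return 3 * s

def gap_11(q, M=400):
    """Right-hand side of eq. (other)."""
    s = math.log(q)
    for m in range(1, M + 1):
        s += math.log((1 + q ** (2 - 6 * m)) * (1 + q ** (3 - 6 * m)) ** 2
                      * (1 + q ** (4 - 6 * m)))
        s -= math.log((1 + q ** (-6 * m)) * (1 + q ** (1 - 6 * m))
                      * (1 + q ** (5 - 6 * m)) * (1 + q ** (6 - 6 * m)))
    return s

for q in (1.5, 2.0, 4.0, 10.0, 100.0):
    assert gap_11(q) > gap_30(q)  # the (1,1) gap exceeds the (3,0) gap for q > 1
```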
\section {Perturbative analysis \label{perturbSection}}
\setcounter{equation}{0}
\baselineskip 18.5pt
The FPL model has a natural large-$n$ expansion which allows simple
perturbative verifications of results.
When $n$ is large, the dominant configurations are those with large numbers of
loops. The perturbative procedure is to approximate the sum over states by
including the configurations with the highest numbers of loops.
On a hexagonal lattice with number of faces $N$ a multiple of three, there
are three configurations with the maximum possible number of loops. In these
states, one out of every three faces has a small loop around it and these
small loops lie on a triangular lattice. A sample is shown in
Figure~\ref{noDefects}. These three configurations differ by translations
and each has $N/3$ loops.
\newcommand{\put(20,0){\line(-3,5){10}}}{\put(20,0){\line(-3,5){10}}}
\newcommand{\put(20,0){\line(-3,-5){10}}}{\put(20,0){\line(-3,-5){10}}}
\newcommand{\put(10,-17){\line(-1,0){20}}}{\put(10,-17){\line(-1,0){20}}}
\newcommand{\put(-20,0){\line(3,-5){10}}}{\put(-20,0){\line(3,-5){10}}}
\newcommand{\put(-20,0){\line(3,5){10}}}{\put(-20,0){\line(3,5){10}}}
\newcommand{\put(-10,17){\line(1,0){20}}}{\put(-10,17){\line(1,0){20}}}
\newcommand{\edgeA \edgeB \edgeC \edgeD \edgeE \edgeF}{\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}} \put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}}}
\begin{figure}
\begin{center}
\begin{picture}(140,153)
\thicklines
\put(20,0){\dashbox{10}(140,153){}}
\put(60,0){\put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}}}
\put(120,0){\put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}}}
\put(30,51){\put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,51){\edgeA \edgeB \edgeC \edgeD \edgeE \edgeF}
\put(150,51){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}} \put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}}}
\put(60,102){\edgeA \edgeB \edgeC \edgeD \edgeE \edgeF}
\put(120,102){\edgeA \edgeB \edgeC \edgeD \edgeE \edgeF}
\put(30,153){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,153){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}}}
\put(150,153){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}}}
\end{picture}
\caption{A sample from a configuration with the maximum number of loops.}
\label{noDefects}
\end{center}
\end{figure}
The smallest change in the number of loops that can be made is to introduce
a defect somewhere in one of the maximal configurations, as shown in
Figure~\ref{oneDefect}. There are $2N/3$ different such defects that can
be introduced and each reduces the number of loops by 2.
\begin{figure}
\begin{center}
\begin{picture}(140,153)
\thicklines
\put(20,0){\dashbox{10}(140,153){}}
\put(60,0){\put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}}}
\put(120,0){\put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}}}
\put(30,51){\put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,51){\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}} \put(-20,0){\line(3,5){10}}}
\put(150,51){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}} \put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}}}
\put(60,102){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}} \put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}}}
\put(120,102){\put(-20,0){\line(3,5){10}} \put(-10,17){\line(1,0){20}} \put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(30,153){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,153){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}}}
\put(150,153){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,-5){10}}}
\put(90,85){\put(20,0){\line(-3,-5){10}} \put(-20,0){\line(3,-5){10}} \put(-10,17){\line(1,0){20}}}
\end{picture}
\caption{A configuration with two fewer than the maximum number of loops.}
\label{oneDefect}
\end{center}
\end{figure}
Introducing defects in this way, we can reach all possible configurations.
To see that this is so, we can represent a configuration by labelling the
links on the lattice that do not contain part of a path. One of every three
links is unoccupied, and every vertex touches one unoccupied link. These
unoccupied links form a dimer configuration for the vertices of the lattice.
If we draw rhombuses around every dimer and interpret the resulting picture
as the projection of the edges of a stack of cubes, we see that an FPL
configuration is equivalent to a stack of cubes. Such an identification is
shown in Figure~\ref{cubes}.
\newcommand{\put(0,0){\line(0,1){34}}}{\put(0,0){\line(0,1){34}}}
\newcommand{\put(0,0){\line(5,-3){30}}}{\put(0,0){\line(5,-3){30}}}
\newcommand{\put(0,0){\line(-5,-3){30}}}{\put(0,0){\line(-5,-3){30}}}
\begin{figure}
\begin{center}
\begin{picture}(300,150)
\thicklines
\put(0,34){
\put(0,0){\put(20,0){\line(-3,-5){10}}}
\put(30,-17){\put(20,0){\line(-3,-5){10}}}
\put(60,-34){\put(20,0){\line(-3,5){10}}}
\put(0,34){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(30,17){\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(60,0){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,-17){\put(20,0){\line(-3,5){10}}}
\put(0,68){\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(30,51){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(60,34){\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}}}
\put(90,17){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(30,85){\put(20,0){\line(-3,5){10}} \put(20,0){\line(-3,-5){10}}}
\put(60,68){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(90,51){\put(20,0){\line(-3,-5){10}}}
\put(60,102){\put(20,0){\line(-3,-5){10}}}
\put(90,85){\put(20,0){\line(-3,-5){10}} \put(10,-17){\line(-1,0){20}}}
\put(120,68){\put(10,-17){\line(-1,0){20}}}}
\put(150,34){
\put(0,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(30,-17){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(60,-34){\put(0,0){\line(0,1){34}}}
\put(0,34){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(30,17){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(60,0){}
\put(90,-17){\put(0,0){\line(0,1){34}} \put(0,0){\line(-5,-3){30}}}
\put(0,68){\put(0,0){\line(5,-3){30}}}
\put(30,51){}
\put(60,34){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(90,17){\put(0,0){\line(-5,-3){30}}}
\put(120,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(-5,-3){30}}}
\put(30,85){\put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(60,68){\put(0,0){\line(-5,-3){30}}}
\put(90,51){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(120,34){\put(0,0){\line(0,1){34}} \put(0,0){\line(-5,-3){30}}}
\put(60,102){\put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(90,85){\put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(120,68){}}
\end{picture}
\end{center}
\caption{An example of the identification of FPL configurations and stacks of
cubes. One rhombus is drawn centered on each unoccupied link.}
\label{cubes}
\end{figure}
In this new representation, the action of inserting a defect is just the action
of adding or removing a cube. This identification is exhibited in
Figure~\ref{cubeDefect}. The result then follows that since every stack of
cubes can be made by adding or removing cubes, every FPL configuration can be
made from one of the maximal ones by inserting some combination of defects.
\begin{figure}
\begin{center}
\begin{picture}(200,200)
\thicklines
\put(0,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(30,-17){\put(0,0){\line(0,1){34}}}
\put(0,34){\put(0,0){\line(5,-3){30}}}
\put(30,17){}
\put(60,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(-5,-3){30}}}
\put(30,51){\put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(60,34){\put(0,0){\line(-5,-3){30}}}
\put(120,0){
\put(0,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}}}
\put(30,-17){}
\put(0,34){}
\put(30,17){\put(0,0){\line(0,1){34}} \put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(60,0){\put(0,0){\line(0,1){34}} \put(0,0){\line(-5,-3){30}}}
\put(30,51){\put(0,0){\line(5,-3){30}} \put(0,0){\line(-5,-3){30}}}
\put(60,34){}}
\put(0,120){
\put(0,0){\put(20,0){\line(-3,-5){10}} \put(-10,17){\line(1,0){20}}}
\put(60,0){\put(-20,0){\line(3,-5){10}} \put(-10,17){\line(1,0){20}}}
\put(0,34){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(60,34){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,5){10}}}
\put(30,17){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,5){10}}}}
\put(120,120){
\put(0,0){\put(20,0){\line(-3,-5){10}} \put(-10,17){\line(1,0){20}}}
\put(60,0){\put(-20,0){\line(3,-5){10}} \put(-10,17){\line(1,0){20}}}
\put(0,34){\put(20,0){\line(-3,5){10}} \put(10,-17){\line(-1,0){20}}}
\put(60,34){\put(10,-17){\line(-1,0){20}} \put(-20,0){\line(3,5){10}}}
\put(30,17){\put(20,0){\line(-3,-5){10}} \put(-20,0){\line(3,-5){10}} \put(-10,17){\line(1,0){20}}}}
\end{picture}
\end{center}
\caption{In the cube representation, introducing a defect is adding or removing
a cube.}
\label{cubeDefect}
\end{figure}
To obtain an approximation for the free energy, consider first the maximal
state shown in Figure~\ref{noDefects}. For a lattice of $N$ faces, this
configuration has $(N/3)$ loops. There are 3 such configurations corresponding
to the three-fold translational degeneracy of the state. To lowest order
then $Z_{FPL} = 3\,n^{N/3}[1+O(n^{-1})]$. This result was used
in section~\ref{FLsection} to determine the asymptotics of the average loop
length.
Allowing defects, there are $(2N/3)$ locations for a defect and each defect
reduces the number of loops by two. Defects may be applied in any number and
in any combination, giving the usual sum over disconnected diagrams. We can
write this as the exponential of the connected diagram (one defect) and we
will be correct except for the effects of excluded volumes which begin with
two-defect connected diagrams and are therefore higher order. To the next
order, $Z_{FPL} = 3\,n^{N/3} \exp[(2N/3) n^{-2}] \exp[O(n^{-4})]$.
Perturbatively calculating the FPL free energy, we see that
\begin{equation}
F_{FPL}(n) = \frac{1}{3} \log(n) + \frac{2\,n^{-2}}{3} + O(n^{-4}),
\end{equation}
in conformity with Baxter's result shown in equation~(\ref{FFPL}).
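The agreement can be confirmed numerically by truncating the product in~(\ref{FFPL}), with $q=(n+\sqrt{n^2-4})/2$; the difference from the two-term perturbative expansion is observed to decay faster than $n^{-4}$, consistent with the stated error term. The function names are ours:

```python
import math

def free_energy(n, M=100):
    """Baxter's exact free energy density (eq. FFPL), truncated product."""
    q = (n + math.sqrt(n * n - 4)) / 2
    F = math.log(q) / 3
    for m in range(1, M + 1):
        F += 2 * math.log(1 - q ** (2 - 6 * m))
        F -= math.log(1 - q ** (4 - 6 * m))
        F -= math.log(1 - q ** (-6 * m))
    return F

def perturbative(n):
    """Two-term large-n expansion: (1/3) log n + (2/3) n^-2."""
    return math.log(n) / 3 + 2 * n ** -2 / 3

for n in (8.0, 16.0, 32.0):
    assert abs(free_energy(n) - perturbative(n)) < 5 * n ** -4
```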
This point of view incidentally leads to a simple expression for the
entropy density of the FPL configurations at $n=1$. At this point, all
configurations are weighted equally and $Z_{FPL}$ is just the number of
configurations, or the exponential of the entropy. Then calculating the
partition function is just the problem of counting the number of coverings of
the honeycomb lattice by paths, which is the number of different
possible stacks of cubes, which is the old combinatorial problem of counting
plane partitions. Elser~\cite{Elser} has calculated the asymptotics of plane
partitions for large arrays of numbers.
The result applied to this case is entirely dependent on the shape of the
boundary, even in the thermodynamic limit. This is to be expected when $n=1$,
because this is in the small-$n$ phase where the model is critical. For a
lattice of $N$ faces and free boundary conditions, the maximum entropy is
obtained for a hexagon-shaped boundary and in that case the partition function
is asymptotically
\begin{equation}
Z_{FPL}(1) = \exp \left[ N \left( \frac{3}{2} \log 3 - 2 \log 2 \right)
\right].
\end{equation}
\section {Comparison with surface tension}
\setcounter{equation}{0}
\baselineskip 18.5pt
The ground state of the $sl_q(d)$ integrable lattice model is
$(d+1)$-fold degenerate. This implies the existence
of a notion of interfacial tension $S(\gamma)$ away from the critical point
between regions of differing antiferromagnetic polarization. By considering
finite-size corrections, de~Vega~\cite{deVega} has derived transcendental
equations for this interfacial tension and computed the asymptotic behavior
of $S$ in the limits $\gamma \rightarrow 0$ and $\gamma \rightarrow \infty$.
Scaling arguments originally due to Widom~\cite{Widom} predict that the
scaling relation $S \xi \sim 1$ should hold near the critical point, $\gamma
\rightarrow 0$ or equivalently $n \rightarrow 2^+$. It would be interesting to
test this relation in this case, but we know of no explicit expression for the
interfacial tension.
Away from the critical point however, a comparison can be made. The asymptotic
behavior of the interfacial tension for $\gamma \rightarrow \infty$ was
extracted by de~Vega, and the result is
\begin{equation}
S(\gamma) = \frac{d}{d+1} \gamma + O(1).
\end{equation}
In the case of the FPL model, $d=2$, $n=e^\gamma+e^{-\gamma}$, and
\begin{equation}
S(n) = \frac{2}{3} \log(n) + O(1).
\end{equation}
This result may be compared with a perturbative calculation. Consider the sum
over FPL states at large-$n$ with the constraint that boundary conditions are
fixed to cause frustration in the bulk, as in Figure~\ref{interface}. The
configuration in that figure has the maximum number of loops possible and
is the analog of the configuration shown in Figure~\ref{noDefects}. Denoting
the sum over defects in this configuration by $Z'_{FPL}$, the interfacial
tension is defined to be the change in free energy per unit length of the
interface:
\begin{equation}
\frac{Z'_{FPL}}{Z_{FPL}} \sim e^{-LS}
\label{Sdef}
\end{equation}
where $L$ is the vertical size of the lattice.
\begin{figure}
\epsfbox{interface.ps}
\caption{An interface separating two regions of differing polarization.}
\label{interface}
\end{figure}
For a lattice of $N$ faces, the maximum number of loops possible in the presence
of the constraint is $(N/3) - (2L/3)$ instead of $(N/3)$. The maximal state
in the presence of the constraint is now $3 \times 2^{2L/3}$-fold
degenerate, because there are $(2L/3)$ locations near the interface where
defects may be freely introduced without changing the number of loops. To
lowest order therefore,
\begin{eqnarray}
Z_{FPL} & \simeq & 3 n^{N/3} \\
Z'_{FPL} & \simeq & 3 \cdot 2^{2L/3}\, n^{(N-2L)/3}.
\end{eqnarray}
Reading off the exponents, we have from equation~(\ref{Sdef}) the result that
\begin{equation}
S(\gamma) = \frac{2}{3} \log (n) + O(1).
\label{S}
\end{equation}
Equation~(\ref{S}) is apparently consistent with the large-$\gamma$ asymptotics
derived in~\cite{deVega}.
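The leading-order result can be checked numerically from the two partition-sum estimates above; the following is a small illustrative sketch, not part of the original derivation:

```python
from math import log

def interfacial_tension(n, N, L):
    """Leading-order estimate of S from the defect sums in the text:
    Z_FPL  ~ 3 * n**(N/3)
    Z'_FPL ~ 3 * 2**(2L/3) * n**((N - 2L)/3),
    so S = -log(Z'/Z)/L = (2/3) * (log(n) - log(2)).
    The N-dependence cancels in the ratio; the leading term is (2/3) log n.
    """
    log_ratio = (2.0 * L / 3.0) * log(2.0) - (2.0 * L / 3.0) * log(n)
    return -log_ratio / L

S = interfacial_tension(n=100.0, N=900, L=30)
```

For large $n$ the constant $-(2/3)\log 2$ is subleading, consistent with $S = (2/3)\log(n) + O(1)$.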
Equation~(\ref{S}) together with equation~(\ref{xi}) shows that $S \xi \neq 1$
in the FPL model. More generally, from the correlation length calculation it
is clear that for large $\gamma$ the leading behavior of the correlation length
for any value of $d$ will always be $\gamma$, and the leading behavior of $S$
is always $\gamma d / (d+1)$.
\vskip .5truein
{\Large{\bf Acknowledgement}}
The author is grateful to Professor Nikolai Reshetikhin for many helpful
conversations.
\baselineskip 14pt
\section{Introduction}
\label{sec:intro}
Object detectors \cite{multiregion,girshick2014rich,girshick2015fast,ren2015faster,he2014spatial,song2011contextualizing,insideoutside} are normally trained as classifiers to recognize the objects contained in candidate boxes. For example, most traditional detectors, like deformable parts models (DPM) \cite{DPM}, use sliding-window techniques in which classifiers are run over the entire image at regularly spaced locations. More recent detectors like R-CNN \cite{girshick2014rich} employ a region proposal method to produce potential bounding boxes in an image and perform classification over these boxes. Object proposals can significantly reduce unnecessary predictions and alleviate the negative effects of distracting background on region classification.
There exist hard negative regions (regions whose intersection over union (IoU) with any groundtruth object region is $< 0.5$) that contain parts of the background as well as parts of objects, as illustrated in Fig.~\ref{fig:metric}. Accurately discriminating between such hard negative background regions and positive object regions ($IoU \ge 0.5$) is expected to improve the performance of object detectors.
\begin{figure}[]
\centering
\vspace*{-5pt}
\includegraphics[width=1\linewidth]{metric.png}
\caption{Illustration of RoI feature distribution of the groundtruth, positives and negatives. The proposed approach aims to improve the RoI classification performance by dealing with hard negatives via similarity distance learning.}
\label{fig:metric}
\vspace*{-15pt}
\end{figure}
To improve the performance of detectors, many research efforts have been made to strengthen the capability of distinguishing positive regions from negative regions. In FastRCNN \cite{girshick2015fast}, many hyperparameters are introduced for efficient learning, \textit{e.g.}, the thresholds to define foreground RoIs (regions of interest) and background RoIs, the sampling ratio of positive (foreground RoIs) and negative (background RoIs) samples in mini-batch stochastic gradient descent (SGD) optimization, etc. Li \textit{et al}. \cite{attentivecontext} proposed to use LSTM cells \cite{hochreiter1997long} to capture the local context information of proposal boxes and the global context information of entire images to strengthen the discriminative ability of RoI features. In \cite{li2015scale}, Li \textit{et al.} took advantage of the outputs of multiple subnetworks to deal with large scale changes. In \cite{insideoutside}, Bell \textit{et al.} introduced the Inside-Outside Net to capture multi-scale representations and incorporated context via spatial recurrent units.
\begin{figure*}
\vspace{-5pt}
\begin{minipage}[t]{0.24\linewidth}
\centering
\centerline{\includegraphics[width=0.97\linewidth]{a.png}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[t]{0.26\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{b.png}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=0.97\linewidth]{c.png}}
\centerline{(c)}
\end{minipage}
\vspace*{-5pt}
\caption{Object detection is based on the classification of RoIs (proposals). The triplet embedding operates on the groundtruth object RoIs (a) and on the positive and negative RoIs (c) with respect to different object classes (like Table and Chair), with the similarity distance constraint (b) applied. }
\vspace*{-15pt}
\label{fig:example}
\end{figure*}
Moreover, some methods use bootstrapping (usually called hard example mining) to improve performance. In \cite{girshick2014rich}, Girshick \textit{et al.} proposed to cache all RoI features and employ an SVM classifier to identify hard examples and accordingly update detector models in an iterative manner. In \cite{online_hard_example_mining}, Shrivastava \textit{et al.} proposed to select hard negative RoIs online according to the RoIs' classification and localization loss in the network forward stage; those RoIs' features are then forwarded once again for better learning. In those methods, the rules for selecting hard examples are well motivated towards effective learning. However, the distribution of positives vs. hard negatives in feature space, and their relative similarity distances, are yet to be investigated to improve the classification performance in the context of object detectors.
In this paper, we propose triplet embedding to incorporate the constraint of relative similarity distances between positives and (hard) negatives into region-based detector learning. The learning objective is to enforce that the similarity distance between any pair of RoIs from the same object class (positives) is smaller than the distance between any pair of RoIs from different classes, including background negatives. We implement the triplet embedding over the FastRCNN~\cite{girshick2015fast} and OHEM~\cite{online_hard_example_mining} network models.
Our contributions are twofold. First, to the best of our knowledge, this is the first work to incorporate triplet embedding into region-based detector learning, which strengthens the classification of positives vs. (hard) negatives with respect to different object classes. By jointly optimizing the injected triplet loss and the original FastRCNN loss, we significantly improve detector performance. Second, we propose a so-called Top-K pooling to further improve detector performance, which is empirically shown to be effective in reducing noise in feature maps. The triplet embedding, together with Top-K pooling, advances the state-of-the-art FastRCNN and OHEM models. The superior performance is demonstrated on the benchmark PASCAL VOC 2007 dataset.
The rest of this paper is organized as follows. In Section 2, we present the problem. In Section 3, we introduce the proposed approach. Comparison experiments are given in Section 4. Finally, we conclude this paper in Section 5.
\section{Problem Statement}
\label{sec:format}
We take the state-of-the-art FastRCNN as the baseline region-based detector, which optimizes a joint objective of classification and localization and performs end-to-end training with mini-batch stochastic gradient descent. Each true RoI from the training images is labeled with a groundtruth class $g$ and a bounding-box regression target $t^*$. The optimization objective can be formulated as a multi-task loss:
\begin{equation}
\vspace{-5pt}
\begin{array}{cl}
L =L_{cls}(p,g) + 1 [ {g} \geq 1 ] L_{loc} (t,t^*),
\end{array}
\label{eq1}
\end{equation}
where $L_{cls}$ and $L_{loc}$ are the losses for classification and bounding-box regression, respectively. Specifically, $L_{cls}$ is a log loss and $L_{loc}$ is a smooth $L1$ loss. For training, $p$ is the predicted class label and $t$ is the predicted box coordinates; $1 [ {g} \geq 1 ]$ equals $1$ when ${g} \geq 1$; for background RoIs ($g=0$) it equals $0$ and $L_{loc}$ is ignored.
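For concreteness, the multi-task loss of Eq.~(\ref{eq1}) can be sketched in a few lines; the smooth $L1$ form used here follows the standard FastRCNN definition, which is an assumption insofar as it is not spelled out in the text:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 used by FastRCNN for box regression:
    0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def multi_task_loss(p, g, t, t_star):
    """Eq. (1): log loss on the class probabilities plus a smooth-L1
    box loss; the latter is switched off for background (g == 0)."""
    l_cls = -np.log(p[g])
    l_loc = smooth_l1(np.asarray(t) - np.asarray(t_star)).sum() if g >= 1 else 0.0
    return l_cls + l_loc
```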
From the classification point of view, those hard negatives lying close to decision hyperplanes are prone to misclassification. In the spirit of triplet embedding, it is beneficial to leverage similarity distance learning for better classification, in which those negatives are pushed away from positives, and meanwhile the positives of the same class are pulled together as shown in Fig.~\ref{fig:metric}. Hence, we aim to incorporate similarity distance constraint in multi-loss optimization:
\vspace{-2pt}
\begin{equation}
\begin{array}{cl}
D(R^g_i, R^n) > D(R^g_i, R^p_j),
\end{array}
\vspace{-5pt}
\label{eq2}
\end{equation}
where $D$ is the similarity distance in feature space, $R^g_i$, $R^p_j$ and $R^n$ denote the groundtruth, positive and negative RoIs, respectively. Triplet embedding is then applied as shown in Fig.~\ref{fig:example} (b).
\section{Proposed Approach}
\begin{figure*}[!ht]
\centering
\vspace{-5pt}
\includegraphics[width=1\linewidth]{network_structure.png}
\caption{Illustration of a region-based object detection network with triplet embedding and Top-K pooling. In the forward stage, RoIs (object proposals) are generated and labelled with respect to different object/background classes. The triplet loss is added to strengthen the optimization objective and consequently improve the classification performance. }
\label{fig:network}
\vspace*{-10pt}
\end{figure*}
In this section, we first present RoI sample selection for triplet embedding as well as hard triplet units, and then describe the joint optimization loss function.
\subsection{Incorporating Triplet Embedding}
The loss per RoI is the sum of a log loss and a smooth $L1$ loss in FastRCNN training. Here we consider a similarity distance constraint based on RoI features. Specifically, given an image $X$ and a set of RoIs $(R_1,R_2,...,R_N)$ generated by object proposal algorithms as input to the network, we take the RoI features $f(R)$ from the last fully connected layer. We use the squared $L2$ distance to compute the similarity between RoIs:
\begin{equation}
\begin{array}{cl}
D(R_i, R_j) = {\left\| f(R_i)-f(R_j) \right \|}^2_2.
\end{array}
\label{eq3}
\end{equation}
In network training, we set up the similarity distance constraint as follows: the distance $D(R_i^g,R_j^p)$ between object RoIs and their groundtruth must be smaller than the distance $D(R_i^g,R^n)$ between the groundtruth and background RoIs. Formally, this constraint can be formulated as:
\begin{equation}
\begin{array}{cl}
D(R^g_i, R^n) > D(R^g_i, R^p_j)+\alpha,
\end{array}
\label{eq4}
\end{equation}
where $\alpha$ is the minimum margin between $D(R_i^g,R_j^p)$ and $D(R_i^g,R^n)$ as shown in Fig.~\ref{fig:metric}. We empirically set $\alpha=0.5$ in this work. Thus the loss of a triplet unit $<R_i^g, R_j^p,R^n>$ is defined as:
\begin{equation}
\begin{array}{cl}
L(R_i^g, R_j^p,R^n) = \max(D(R_i^g,R_j^p) - D(R_i^g,R^n) + \alpha, 0).
\end{array}
\label{eq5}
\end{equation}
When the constraint in Eq.~(\ref{eq4}) is violated for any RoI triplet, the loss is back-propagated. Therefore, the optimization objective is to minimize the loss function:
\begin{small}
\begin{equation}
\begin{array}{cl}
L \!= \!\sum_{1}^{N}\max({\left \|f(R_i^g)\!-\!f(R_j^p)\right \|}^2_2 \!+\!\alpha\! -\! {\left \|f(R_i^g)\!-\!f(R^n)\right \|}^2_2, \!0),
\end{array}
\label{eq6}
\end{equation}
\end{small}
where $N$ is the total number of triplet units in training.
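A minimal sketch of the triplet loss of Eqs.~(\ref{eq5}) and (\ref{eq6}), assuming plain NumPy arrays for the RoI features (the actual implementation operates on network layers):

```python
import numpy as np

def triplet_loss(f_g, f_p, f_n, alpha=0.5):
    """Eqs. (5)/(6): hinge on squared L2 distances between the
    groundtruth (anchor), positive and negative RoI features.
    alpha = 0.5 follows the empirical margin chosen in the text."""
    d_pos = np.sum((f_g - f_p) ** 2, axis=-1)
    d_neg = np.sum((f_g - f_n) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + alpha, 0.0).sum()
```

The loss is zero whenever the negative already sits more than $\alpha$ farther from the anchor than the positive, so only violating triplets contribute gradients.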
\subsection{Class-specific Triplet Embedding}
Each RoI (\textit{i.e.}, proposal) is assigned a class label $l_{class}$ to indicate whether it is a positive or negative RoI. The RoI labeling relates to the definition of foreground and background.
\textbf{Foreground RoIs.} A RoI is labeled as foreground when its IoU overlap with a groundtruth bounding box exceeds 0.5. Threshold 0.5 is compliant with the evaluation protocol in PASCAL VOC detection benchmark.
\textbf{Background RoIs.} A RoI is labeled as background when its maximum IoU overlap with any groundtruth bounding box falls into the interval $[bg_{low},0.5)$. $bg_{low}$ is the low overlap threshold; FastRCNN uses $bg_{low} = 0.1$, on the assumption that RoIs with very small overlap ($<0.1$) with any groundtruth bounding box are uncertain.
To effectively select negatives for triplet units, we introduce a class-specific label $l_{proposal}$ for each background (negative) RoI. Instead of a single category label, we assign each negative background RoI the class label of the groundtruth RoI with which it has the maximum IoU overlap, as shown in Fig.~\ref{fig:example} (c). For example, $l_{proposal} = c$ means this negative RoI has a maximum overlap with an object of class $c$, and is likely to be a ``qualified'' hard negative. As multiple object classes may be involved in an image, the RoIs are assigned to different groups $(G_1,G_2,...,G_M)$ according to $l_{class}$ and $l_{proposal}$, in which group $G_c$ consists of positive RoIs with $l_{class}=c$ and negative background RoIs with $l_{proposal}=c$. Accordingly, the triplet sampling strategy is applied between group-specific positives and background negatives. Referring to Eqs.~(\ref{eq5}) and (\ref{eq6}), for each group $G_c$, $R_i^g$ is determined by the groundtruth, $R_j^p$ are RoIs with $l_{class}=c$, and $R^n$ are RoIs with $l_{class}=background$ and $l_{proposal}=c$.
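The labeling rule above can be sketched as follows; the `[x1, y1, x2, y2]` box format and the helper names are illustrative assumptions, not the paper's actual code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def label_roi(roi, gt_boxes, gt_classes, bg_low=0.1):
    """Assign (l_class, l_proposal) to one RoI: foreground keeps the
    matched class; background in [bg_low, 0.5) inherits the class of
    its maximally overlapping groundtruth; lower overlaps are dropped."""
    overlaps = [iou(roi, g) for g in gt_boxes]
    best = max(range(len(overlaps)), key=overlaps.__getitem__)
    if overlaps[best] >= 0.5:
        return gt_classes[best], gt_classes[best]   # foreground
    if overlaps[best] >= bg_low:
        return 0, gt_classes[best]                  # class-specific negative
    return None, None                               # uncertain, discarded
```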
\subsection{Hard Triplet Units Sampling}
The number of RoIs (proposals) produced by selective search \cite{selectivesearch} or edgebox \cite{edgebox} in an image is around $2000$, so the number of possible triplet units can reach up to $2000^3$. As a large portion of triplet units do not violate the similarity constraint, it is meaningful to select a subset of hard triplet units for effective and efficient training.
We select hard triplet units within $G_c$ through computing:
\begin{small}
\begin{equation}
\begin{array}{cl}
\text{argmax}_i{\left \| f(R_a^g)\!-\!f(R_i^p)\right \|}^2_2 \;\; and \;\; \text{argmin}_j{\left \| f(R_a^g)\!-\!f(R_j^n)\right \|}^2_2 \\
R_a^g, R_i^p, R_j^n \in G_c,
\end{array}
\label{eq7}
\end{equation}
\end{small}
where $R_a^g$ and $R_i^p$ are groundtruth and positive RoIs, and $R_j^n$ is a negative RoI; $R_a^g$ serves as a reference anchor selected from the groundtruth RoIs in the experiments.
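The hard-triplet selection of Eq.~(\ref{eq7}) reduces to an argmax/argmin over squared distances within one group; a minimal sketch, assuming feature matrices with one row per RoI:

```python
import numpy as np

def hard_triplet(f_anchor, pos_feats, neg_feats):
    """Eq. (7): within one group G_c, pick the positive farthest from
    the groundtruth anchor and the negative closest to it."""
    d_pos = np.sum((pos_feats - f_anchor) ** 2, axis=1)
    d_neg = np.sum((neg_feats - f_anchor) ** 2, axis=1)
    return int(np.argmax(d_pos)), int(np.argmin(d_neg))
```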
\begin {table*}[!ht]
\label{tab:detection}
\scriptsize
\centering
\renewcommand\arraystretch{1.1}
\begin{tabular}{p{2.25cm}|p{0.85cm}|p{0.35cm}|p{0.22cm}p{0.22cm}p{0.22cm}p{0.23cm}p{0.22cm}p{0.21cm}p{0.21cm}p{0.21cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.23cm}p{0.22cm}p{0.21cm}}
\hline
Method &Train set&mAP& aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike &persn &plant& sheep & sofa & train & tv\\
\hline
VGGM& 07& 59.4 & 71.6 & 71.9 & 55.9 & 51.8& 26.5 & 69.0 & 73.2 & 73.0 & 30.4 & 65.2 & 61.5 & 67.5 & 71.2 & 70.0 & 60.2 & 27.7 & 59.2 & 62.3 & 68.6 & 61.2\\
VGGM+TopK& 07& 60.0 & 71.5& 70.1 & 58.1 & 45.8 & 30.0 & 69.1 & 74.1 & 72.4 & 37.1 & 64.0 & 61.6 & 70.4 & 72.4 & 70.6 & 61.6 & 27.0 & 52.0 & 62.2 & 69.5 & 61.0 \\
VGGM+Triplet& 07& 61.1 & 70.3 & 73.0 & 57.9 & 48.2 & 29.6 & 67.4 & 73.2 & 72.9 & 39.3 & 66.4 & 63.0 & 68.1 & 71.6 & 71.3 & 63.0 & 28.8 & 58.2 & 62.7 & 73.0 & 64.1\\
VGGM+Triplet+TopK & 07& \textbf{61.6}& 70.5 & {73.5} & {58.7} & 49.3 & {31.6} & 67.7 & {74.0} & 72.0 & {40.2} & {65.3} & {62.1} & {68.2} & 71.2 & {72.3} & {62.7} & {29.2} & {59.5} & {64.7} & {75.6} & 63.9\\
\hline
VGG16& 07& 66.9 & 74.5& 78.3& 69.2& 53.2& 36.6& 77.3& 78.2& 82.0& 40.7& 72.7& 67.9& 79.6& 79.2& 73.0& 69.0& 30.1& 65.4& 70.2& 75.8& 65.8 \\
VGG16+TopK& 07& 67.3 & 74.6 & 78.2 & 69.4 & 55.0 & 39.6 & 77.1 & 77.7 & 78.6 & 46.1 & 72.0 & 67.8 & 79.9 & 79.1 & 74.3 & 69.7 & 31.2 & 68.5 & 68.7 & 76.3 & 63.3\\
VGG16+Triplet& 07& 68.3 & 74.6 & 77.6 & 65.0 & 56.0 & 40.2 & 76.5 & 77.9 & 83.1 & 47.9 & 73.1 & 68.0 & 81.7 & 78.2 & 75.7 & 72.0 & 37.0 & 64.5 & 67.4 & 76.0 & 73.1\\
VGG16+Triplet+TopK & 07 & 68.7 & 75.6 & 78.6 & 68.0 & {56.2} & 40.5 & {76.7} & 78.9 & {84.3} & 47.0 & 73.3 & {68.0} &81.0 & {78.7} & 75.4 & 72.2 & 37.8 & 65.5 & {67.8} & {76.5} & {73.7}\\
MR-CNN \cite{multiregion} & 07 &\textbf{69.1} &\textbf{82.9} & 78.9 & 70.8 & 52.8 & \textbf{55.5} & 73.7 & 73.8 & 84.3 & 48.0 & 70.2 & 57.1 & 84.5 & 76.9 & \textbf{81.9} & 75.5 & 42.6 & 68.5 & 59.9 & 72.8 & 71.7 \\
Yuting et al.\cite{zhang2015improving}& 07 & 68.5 & 74.1 & \textbf{83.2} & 67.0 & 50.8 & 51.6 & 76.2 & 81.4 & 77.2 & 48.1 & 78.9 & 65.6 & 77.3 & 78.4 & 75.1 & 70.1 & 41.4 & 69.6 & 60.8 & 70.2 & 73.7 \\
\hline
VGG16 & 07+12 & 70.0 & 77.0 & 78.1 & 69.3 & 59.4 & 38.3 & 81.6 & 78.6 & 86.7 & 42.8 & 78.8 & 68.9 & 84.7 & 82.0 & 76.6 & 69.9 & 31.8 & 70.1 & 74.8 & 80.4 & 70.4 \\
VGG16+Triplet+TopK& 07+12& \textbf{72.1} & 79.0 & 78.8 & 71.9 & {62.0} & 42.7 & 80.0 & 80.5 &{87.2} & {48.5} & 80.3 &{72.1} & 83.4 & 84.8 & 77.2 & 71.3 &{39.9} & {72.5} & 73.9 & 83.2 & 72.6\\
AC-CNN\cite{attentivecontext} & 07+12 & 72.0 & 79.3 & 79.4 & 72.5 & 61.0 & 43.5 & 80.1 & 81.5 & 87.0 & 48.5 & 81.9 & 70.7 & 83.5 & \textbf{85.6} & 78.4 & 71.6 & 34.9 & 72.0 & 71.4 & \textbf{84.3} & 73.5\\
\hline
OHEM \cite{online_hard_example_mining} & 07 & 69.9& 71.2 & 78.3 & 69.2 & 57.9 & 46.5 & 81.8 & 79.1 & 83.2 & 47.9 & 76.2 & 68.9 & 83.2 & 80.8 & 75.8 & 72.7 & 39.9 & 67.5 & 66.2 & 75.6 & 75.9\\
OHEM \cite{online_hard_example_mining} + Ours & 07 & 71.7 & 74.4 & 80.9 & 72.1& 61.4 &49.7 & 80.9 &79.5 & 83.7 & 53.3 & 75.4& 71.4 & 80.7 & 81.9 & 76.8 & 74.8 & 42.4 & 68.5 & 73.1& 78.0 &75.1\\
OHEM \cite{online_hard_example_mining}& 07+12 & 74.6 & 77.7 & 81.2 & 74.1 & 64.2 & 50.2 & \textbf{86.2} & \textbf{83.8} & 88.1 & 55.2 & 80.9 & 73.8 & 85.1 & 82.6 & 77.8 & 74.9 & 43.7 & 76.1 & 74.2 & 82.3 & \textbf{79.6}\\
OHEM \cite{online_hard_example_mining} + Ours &07+12 & \textbf{75.8} & 79.6 & 81.7 & \textbf{75.2} & \textbf{66.4} & 54.7 & 84.0 & 83.1& \textbf{88.6} & \textbf{58.0} & \textbf{83.3} & \textbf{74.0} & \textbf{86.4} & 85.0 & 80.4 & \textbf{76.1} & \textbf{44.9} & \textbf{78.6} & \textbf{77.8} & 80.7 & 78.1\\
\hline
\end{tabular}
\caption{Detection performance comparisons on PASCAL VOC 2007. Different networks (VGGM, VGG16, OHEM) are applied. Separate results are given for two training sets: VOC 07 and VOC 07+12, respectively. In addition, the impact of triplet embedding and Top-K pooling on detection performance is studied.}
\vspace*{-10pt}
\end{table*}
\subsection{Joint Optimization of Multiple Loss Functions}
Apart from the original classification and localization regression losses, we need to minimize the triplet loss, as illustrated in Fig.~\ref{fig:network}. Hence, a linear weighting is applied to form the joint optimization objective:
\begin{equation}
L_{total} = w_1 L_{cls} + w_2 L_{loc} + w_3 L_{triplet},
\end{equation}
where we empirically set $w_1 = 1$, $w_2 = 1$ and $w_3 = 0.5$ in this work. $L_{cls}$ and $L_{loc}$ are the classification and localization losses, and $L_{triplet}$ enforces the similarity distance constraint. L2 normalization is applied to the output of the $fc7$ layer (the last fully connected layer). The output of the network contains: (1) a probability distribution over object classes and background, and (2) regressed coordinates for bounding-box localization.
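The weighted combination itself is a one-liner; the default weights follow the empirical setting stated in the text:

```python
def total_loss(l_cls, l_loc, l_triplet, w=(1.0, 1.0, 0.5)):
    """Linear weighting of the three objectives; w3 = 0.5 follows the
    empirical choice in the text."""
    return w[0] * l_cls + w[1] * l_loc + w[2] * l_triplet
```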
\section{Top-K Pooling}
\label{sec:pagestyle}
In the network forward stage, as layers go deeper, the feature maps become smaller and noise exerts a stronger influence on pooling operations \cite{zhi2016two}. In this work, we propose a Top-K pooling that computes the mean of the top K response values in each pooling window, formulated as follows:
\begin{equation}
\vspace{-5pt}
y_{i}= \frac {1}{K} \sum_{j=1}^{K} {x_{i,j}',}
\end{equation}
where $x_{i,j}$ denotes the $j$-th element in the $i$-th pooling window and ${y_{i}}$ denotes the output of the $i$-th pooling window. ${x_{i,j}'}$ are the elements sorted in descending order. For each $y_{i}$, a K-length array $R({y_{i}}) = \{x_{i, j} \mid j=1,2,...,K \}$ of the indices of the top K elements is maintained to readily compute the gradient.
Rather than applying Top-K pooling as post-processing as in \cite{zhi2016two}, we not only compute Top-K responses in pooling windows but also incorporate the pooling operation into network training. During the backward stage, the derivatives of the error with respect to the layer's inputs are:
\begin{equation}
\frac{ \partial E}{ \partial x_{i,j}} = \frac {1}{K} \frac{ \partial E}{ \partial y_{i}}, x_{i,j} \in R( y_{i}).
\vspace{-3pt}
\end{equation}
Traditional max pooling is susceptible to noise, while Top-K pooling performs better than mean pooling in terms of capturing the statistics of the response values. Note that Top-K pooling degenerates to max pooling when $K=1$ and to mean pooling when $K$ equals the window size.
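A minimal NumPy sketch of the forward and backward passes of Top-K pooling for a single window (the batched GPU implementation used in the paper is not shown):

```python
import numpy as np

def topk_pool(window, k):
    """Forward pass: mean of the k largest responses in one pooling
    window; also return their indices for the backward pass."""
    flat = np.asarray(window, dtype=float).ravel()
    idx = np.argsort(flat)[::-1][:k]
    return flat[idx].mean(), idx

def topk_pool_backward(grad_out, idx, window_size, k):
    """Backward pass: the gradient (1/K) * dE/dy is routed only to the
    top-k inputs recorded in the forward pass."""
    grad_in = np.zeros(window_size)
    grad_in[idx] = grad_out / k
    return grad_in
```

With $K=1$ this reproduces max pooling and with $K$ equal to the window size it reproduces mean pooling, matching the degenerate cases noted above.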
\section{Experiments}
\label{sec:majhead}
\textbf{Datasets and Metrics}. We perform evaluation on the PASCAL VOC 2007 and 2012 datasets, which contain 9963 and 22531 images, respectively. The datasets are divided into \textit{train}, \textit{val} and \textit{test} subsets, and cover 20 target object classes. The evaluation metric is mean Average Precision (mAP). The overall and per-class performance are reported on the VOC 2007 dataset.
\noindent\textbf{Implementation Details}. The deep learning platform Caffe \cite{jia2014caffe} is used to train the networks. Two FastRCNN models based on $VGG\_CNN\_M\_1024$ \textit{(VGG\_M)} and \textit{VGG16} network architectures are trained in the experiment. Both networks are initialized with the models pre-trained on ILSVRC 2012 image classification dataset. In addition, the input object proposals are provided by \textit{selective search} \cite{2013selective}.
\subsection{Experiment Results on VOC 2007}
Table 1 presents the detection results on the VOC benchmarks. We first perform comparison experiments with $VGG\_M$. Incorporating both the triplet loss and Top-K pooling into $VGG\_M$ achieves a 2.2\% mAP improvement on the 07 \textit{train\_val} set. Injecting the triplet loss alone brings about a 1.7\% mAP improvement, while Top-K pooling contributes a 0.6\% mAP improvement (from 59.4\% to 60.0\% mAP). With \textit{VGG16}, a deeper network than $VGG\_M$, we obtain an mAP improvement of 1.8\% on the 07 \textit{train\_val} set when combining the triplet loss and Top-K pooling. On $07+12$ \textit{train\_val}, \textit{VGG16} achieves up to a 2.1\% mAP improvement. Moreover, compared to other typical region-based detectors, such as AC-CNN~\cite{attentivecontext}, Yuting et al.~\cite{zhang2015improving} and MR-CNN~\cite{multiregion}, the proposed approach yields competitive performance as well. OHEM~\cite{online_hard_example_mining} is the state-of-the-art object detection approach, which introduces online bootstrapping into the network structure based on the FastRCNN framework. As listed in Table 1, our method further improves the detection performance of OHEM, yielding a 1.2\% mAP improvement.
Note that the mAP improvements from Top-K pooling on \textit{VGG\_M} are larger than on \textit{VGG16}. This may be attributed to the different pooling kernel sizes: \textit{VGG\_M} adopts $3\times3$ max pooling with stride $2$, while \textit{VGG16's} default kernel size is $2\times2$ with stride $1$. For a large max-pooling kernel, replacing max pooling with Top-K pooling can effectively mitigate the negative effects of background noise. Nevertheless, although the $2\times2$ pooling window in \textit{VGG16} is already very small, Top-K pooling ($K=2$) still yields a $0.4$\% mAP gain.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{pie_chart.png}
\vspace*{-5pt}
\caption{The performance analysis of exemplar top detections with the original FastRCNN (a) and our approach of TripletLoss + Top-K pooling over FastRCNN (b). The pie charts show the percentages of Correct Detection (COR), False Positives from poor localization (LOC), visually similar objects (SIM), other VOC objects (OTH), or background (BG).}
\label{fraction_analysis}
\vspace{-10pt}
\end{figure}
As listed in Table 1, compared with FastRCNN, our method achieves better detection results on most object categories. In addition, we depict pie charts with the percentages of true positives (correct) and false positives, following \cite{hoiem2012diagnosing}, in Fig.~\ref{fraction_analysis}. With the triplet loss and Top-K pooling, the percentage of background false positives (BG) is reduced. In particular, the performance improvements on some classes, like tvmonitor and chair, are significant. For instance, the mAP gain on chair reaches up to 10\% with \textit{VGG\_M}. This can be partially attributed to the similar visual characteristics of positives and negatives, for which the enforced distance optimization is beneficial for more accurate discrimination. For other classes like boat or cow, the detection performance is mainly limited by localization, so the improvements from our approach are smaller.
\subsection{Discussion}
The state-of-the-art OHEM~\cite{online_hard_example_mining} applies the idea of online bootstrapping to the network structure to improve learning. Both OHEM and our proposed approach attempt to deal with hard negative RoIs. It is worth noting that our approach can be elegantly integrated into any region-based detector learning. For example, the triplet loss can be applied in OHEM as an additional optimization objective. From Table 1, the mAP improvements of OHEM with the proposed triplet embedding and Top-K pooling are 1.8\% and 1.2\% on the 07 and 07+12 \textit{train\_val} sets, respectively.
To investigate the impact of triplet embedding and Top-K pooling on FastRCNN training, we illustrate the evolution of the SoftMax + Smooth L1 loss for the VGGM network. As shown in Fig.~\ref{fig:loss_curve}, over the training course, the loss of the original FastRCNN is significantly reduced with our approach. The lower loss values demonstrate the improved efficiency and effectiveness of learning region-based detectors.
\begin{figure}
\centering
\vspace{-5pt}
\includegraphics[width=0.95\linewidth]{loss_curve.png}
\vspace{-15pt}
\caption{Comparison of the training loss (Softmax + Smooth L1) for VGGM with and without triplet loss + Top-K pooling. For a fair comparison, although the additional triplet loss is applied in the optimization objective, the plotted loss values (green curve) exclude the triplet-loss contribution.}
\label{fig:loss_curve}
\vspace{-15pt}
\end{figure}
\section{Conclusion}
We have proposed to incorporate triplet embedding into the optimization objective of region-based detectors. The triplet loss effectively enforces the similarity distance constraints between groundtruth, positive and negative RoIs. Moreover, a practically useful Top-K pooling is employed to further reduce the negative effects of feature map noise in network training. The proposed triplet embedding and Top-K pooling significantly advance the state-of-the-art FastRCNN and OHEM models, as demonstrated on the PASCAL VOC 2007 benchmark.
\vspace{10pt}
\textbf{Acknowledgments:} This work was supported by grants from National Natural Science Foundation of China (U1611461, 61661146005, 61390515) and National Hightech R\&D Program of China (2015AA016302). This research is partially supported by the PKU-NTU Joint Research Institute, that is sponsored by a donation from the Ng Teng Fong Charitable Foundation.
\bibliographystyle{IEEEbib}
\section{Introduction}
Graphene, a monolayer of carbon atoms, first obtained in
2004,\cite{Novoselov2004} has attracted increasing interest due to its
exceptional electronic properties and original transport physics.%
\cite{Novoselov2007}
Gradual miniaturization of graphene devices increases the importance
of edge effects with respect to the bulk two-dimensional physics.
Starting from a graphene sheet, nanoribbons and quantum dots can be
produced by lithography and etching.%
\cite{Ruoff1999,Kim2007,Avouris2007,Novoselov2008,Ensslin2009,Goldhaber2009}
Edges also play a fundamental role in the quantum Hall effect.
Graphene edges can be studied by several experimental techniques.
Scanning tunneling microscopy (STM) and transmission electron
microscopy (TEM) can resolve the structure of the edge on the
atomic scale.%
\cite{Klusek2000,Kobayashi2005,Kobayashi2006,Dresselhaus2008,Liu2009,Ritter2009}
Raman scattering has also proven to be a powerful technique
to probe graphene edges.%
\cite{Cancado2004,Novotny,You2008,Gupta2009,Casiraghi2009}
The so-called $D$~peak at 1350~cm$^{-1}$ is forbidden by
momentum conservation in a perfect infinite graphene crystal,
and can only be activated by impurities or edges. Invoking
the double-resonance mechanism for the $D$~peak activation,%
\cite{ThomsenReich2000} Can\c{c}ado \emph{et al.} have
shown that a perfect zigzag edge does not give rise to the
$D$~peak.\cite{Cancado2004}
It should be emphasized that this property is determined by
the effect of the edge on the electronic states.
A great deal of theoretical studies of electronic properties
near the edge has focused on the case of ideal zigzag or
armchair edges, most commonly adopting the tight-binding
description. One of the spectacular results obtained by
this approach was the existence of electronic states confined
to the zigzag edge,%
\cite{Stein1987,Tanaka1987,Fujita1996-1,Fujita1996-2,Dresselhaus1996}
which was later confirmed experimentally.%
\cite{Klusek2000,Kobayashi2005,Kobayashi2006,Ritter2009}
The question about general boundary condition for Dirac electron
wave function at a translationally invariant graphene edge has
been addressed\cite{McCann2004} and a detailed analysis of
boundary conditions which can arise in the tight-binding model
has been performed.\cite{Akhmerov2008}
In spite of the fact that all graphene samples produced so
far have rough edges, the number of theoretical works dedicated
to rough edges is limited. Most of them model edge roughness
in the tight-binding model by randomly removing lattice sites.%
\cite{Areshkin2007,Guinea2007,Querlioz2008,Evaldsson2008}
The opposite limit of smooth and weak roughness has been
considered.\cite{Fang2008}
Edge states on zigzag segments of finite length have also
been studied recently.\cite{Tkachov2009}
The present work has several purposes.
One is to develop analytically treatable models which would
describe electron scattering on various types of edges in terms
of as few parameters as possible.
The second one is to calculate the polarization dependence of
the $D$~peak intensity for different models of the edge, and
thus see what information can be extracted from this dependence.
The third one is to identify the characteristic length scale
which confines the Raman process to the vicinity of the edge,
i.~e., the spatial extent of the Raman process.
It will be shown that the last two issues are intimately related
to the quasiclassical character of the electron motion during
the Raman scattering process.
The paper is organized as follows. In Sec.~\ref{sec:Qualitative}
we discuss the problem in qualitative terms and summarize the
main results of the work. In Sec.~\ref{sec:Free} we summarize
the Dirac description of single-electron states in an infinite
graphene crystal and formulate the Huygens-Fresnel principle
for Dirac electrons.
In Sec.~\ref{sec:Edge} we discuss models for the electron
scattering from a graphene edge, considering translationally
invariant as well as rough edges.
Sec.~\ref{sec:Phonons} introduces the model for electron-phonon
coupling and describes the general scheme of the calculation of
the $D$~peak intensity using the standard perturbation theory
in the coordinate representation. Finally,
Secs.~\ref{sec:Regular}, \ref{sec:Rough}, and
\ref{sec:fragmented} are dedicated to the calculation of the
$D$~peak intensity for an ideal armchair edge, an atomically
rough edge, and an edge consisting of a random collection of
long zigzag and armchair segments, respectively.
\section{Qualitative discussion and summary of the main results}
\label{sec:Qualitative}
\subsection{Electron scattering by the edge}
\label{sec:Qreflection}
\begin{figure}
\includegraphics[width=8cm]{edges}
\caption{\label{fig:edges} Examples of ordered edges: (a)~zigzag,
(b)~armchair, (c)~a more complicated but still translationally
invariant edge.
}
\end{figure}
First, we discuss translationally invariant edges. For example
(see Fig.~\ref{fig:edges}), a zigzag edge has a spatial period
$d_e=a\sqrt{3}$ ($a\approx{1}.42\:\mbox{\AA}$ is the C--C bond
length), an armchair edge has $d_e=3a$, and a more complicated
edge, shown in Fig.~\ref{fig:edges}(c), has
$d_e=\sqrt{21}\,a\approx{4.6}\,a$ (the spatial period is
measured along the average direction of the edge).
It is important to compare
$d_e$ to the electronic wavelength (we prefer to divide the
latter by $2\pi$), $\lambdabar_\ep\equiv{v}/|\epsilon|$, where $\epsilon$~is the
electron energy, and $v\approx{1}.1\cdot{10}^8~\mbox{cm/s}\approx%
{7}.3\:\mbox{eV}\cdot\mbox{\AA}$ is the electron velocity (the
slope of the Dirac cones). For comparison, at $\epsilon=1\:\mbox{eV}$,
$\lambdabar_\ep\approx 7.3\:\mbox{\AA}\approx{5}a$.
As long as $d_e<\pi\lambdabar_\ep$, the component of the electronic momentum
along the edge, $p_\|$, is conserved (we measure the electron
momentum from the Dirac point). For longer periods the edge acts
analogously to a reflective diffraction grating in optics; this
case is not considered here.
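For orientation, these conditions can be checked against the numbers
quoted above (an order-of-magnitude comparison):
\begin{equation}
d_e^{\mathrm{zigzag}}=a\sqrt{3}\approx{2}.5\:\mbox{\AA},\qquad
d_e^{\mathrm{armchair}}=3a\approx{4}.3\:\mbox{\AA},\qquad
\pi\lambdabar_\ep\approx{23}\:\mbox{\AA}\quad\mbox{at }\epsilon=1\:\mbox{eV},
\end{equation}
so all edges shown in Fig.~\ref{fig:edges}, including that of
panel~(c) with $d_e\approx{4}.6\,a\approx{6}.5\:\mbox{\AA}$,
conserve $p_\|$ at typical optical excitation energies.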
In the limit $d_e\ll\lambdabar_\ep$ the reflection of electrons from any
periodic edge can be described by an effective energy-independent
boundary condition for the electronic wave
function.\cite{McCann2004,Akhmerov2008}
Next, we study rough edges. An extreme case is when the edge is
rough at the atomic scale, like that in the tight-binding model
with randomly removed sites. Then it is reasonable to assume
that in the vicinity of the edge all plane-wave components with
a given energy in both valleys are mixed randomly, as there is
no small or large parameter which would suppress or favor any
particular channel (the smallness $a/\lambdabar_\ep\ll{1}$ suppresses the
direct intravalley scattering, but multiple intervalley scattering
efficiently mixes states within the same valley as well). The
electron is thus scattered randomly both in all directions (as
a consequence of the randomization of the momentum direction
within the same valley), and between the two valleys. For this
case in Sec.~\ref{sec:irregular} we
propose a phenomenological model which describes such random
scattering of electrons by the edge, respecting only the
particle conservation and the time-reversal symmetry.
Essentially, each point of the edge is represented by an
independent point scatterer which randomly rotates the valley
state of the electron. This model is used for quantitative
calculations in the subsequent sections.
Edges that are rough on length scales much larger than the lattice
constant are likely to consist of distinct segments of zigzag
and armchair edges, as shown by
STM\cite{Kobayashi2005,Kobayashi2006,Ritter2009} and
TEM.\cite{Dresselhaus2008,Liu2009}
Then the overall probability of scattering within the same valley
or into the other valley is simply determined by the fraction of
the corresponding segments. The problem of angular distribution
of the scattered electrons is analogous to the well-studied
problem of light scattering by rough surfaces.%
\cite{Brown1984,Maradudin1985a,Maradudin1985b,GarciaStoll,%
TranCelli,Maradudin1989,Maradudin1990}
The main qualitative features of the scattering, namely, the
sharp coherent peak in the specular direction, the smooth
diffuse background, and the enhanced backscattering peak,
should be analogous for the electrons in graphene as well.
The so-called surface polaritons, shown to play an important
role in the light scattering, are analogous to the electronic
edge states in graphene. Still, full adaptation of this theory
for the case of Dirac electrons in graphene represents a
separate problem and is beyond the scope of the present work.
Here we only consider the case when regular edge segments are
sufficiently long, i.~e., their typical length $d_e\gg\lambdabar_\ep$.
Then the diffraction corrections, small in the parameter
$\lambdabar_\ep/d_e\ll{1}$, can be found just as in
classical optics,\cite{BornWolf} using the Huygens-Fresnel
principle for Dirac electrons, Eq.~(\ref{Huygens=}).
\subsection{Quasiclassical picture of Raman scattering}
\label{sec:quasiclassical}
Since graphene is a non-polar crystal, Raman scattering involves
electronic excitations as intermediate states: the electromagnetic
field of the incident laser beam interacts primarily with the electronic
subsystem, and emission of phonons occurs due to electron-phonon
interaction. The matrix element of the one-phonon Raman process
can be schematically represented as
\begin{equation}\label{Ramanmatrixelement=}
\mathcal{M}\sim\sum_{a,b}
\frac{\langle{i}|\hat{H}_{e-em}|a\rangle\langle{a}|\hat{H}_{e-ph}|b\rangle
\langle{b}|\hat{H}_{e-em}|f\rangle}
{(E_i-E_a+2i\gamma)(E_i-E_b+2i\gamma)}.
\end{equation}
Here $|i\rangle$ is the initial state of the process (the incident
photon with a given frequency and polarization, and no excitations
in the crystal), $|f\rangle$ is the final state (the scattered photon
and a phonon left in the crystal), while $|a\rangle$ and $|b\rangle$
are the intermediate states where no photons are present, but an
electron-hole pair is created in the crystal and either no phonon
or one phonon has been emitted, respectively. Note that these
intermediate states correspond to electronic eigenstates in the
presence of the edge, i.~e., scattered states rather than plane waves.
$E_i=E_f$, $E_a$, and~$E_b$ are the energies of the corresponding
states, and $2\gamma$~is the inverse of the inelastic scattering time (the
overall rate of phonon emission and electron-electron collisions).
$\hat{H}_{e-em}$~and~$\hat{H}_{e-ph}$ stand for the
terms in the system hamiltonian describing interaction of electrons
with the electromagnetic field and with phonons, respectively.
As discussed in Refs.~\onlinecite{shortraman,megapaper}, for
one-phonon scattering processes it is impossible to satisfy the
energy conservation in all elementary processes. This means that
the electron-hole pair represents a virtual intermediate state,
and no real populations are produced. Formally, at least one of
the denominators in Eq.~(\ref{Ramanmatrixelement=}) must be at
least of the order of the phonon frequency~$\omega_\mathrm{ph}\gg\gamma$. In
fact, the main contribution to the matrix element comes from such
states that the electron and the hole have the energy $\epsilon$ close
(within $\sim\omega_\mathrm{ph}$) to half of the energy $\omega_{in}$ of the
incident photon: $|\epsilon-\omega_{in}/2|\sim\omega_\mathrm{ph}$. These two energy
scales are well separated: $\omega_\mathrm{ph}\approx{0}.17$~eV, while typically
$\omega_{in}/2\approx{1}$~eV. According to the uncertainty principle,
the energy uncertainty, $\omega_\mathrm{ph}$, determines the typical lifetime of
the virtual state (electron-hole pair), $\sim{1}/\omega_\mathrm{ph}$. This time
scale determines the duration of the whole process.
As we are dealing with a translationally non-invariant system,
it is useful to analyze the Raman process in the coordinate
representation. The time scale $1/\omega_\mathrm{ph}$, introduced above,
translates into the length scale $\ell_\mathrm{ph}=v/\omega_\mathrm{ph}$.
Thus, this length scale,
$\ell_\mathrm{ph}\approx{4}\:\mbox{nm}$, determines the spatial extent of
the process (we will return to this point below). Its largeness
compared to the electron wavelength,
$\ell_\mathrm{ph}/\lambdabar_\ep=\omega_{in}/(2\omega_\mathrm{ph})\gg{1}$, ensures that the
electronic wave functions determining the matrix elements for
each elementary process, {\em admit a quasiclassical
representation}. The quasiclassical approximation for the
electronic wave functions is fully analogous to the geometrical
optics approximation for electromagnetic waves, with electronic
trajectories corresponding to light rays. Corrections to
this approximation are known as diffraction and are small in the
parameter $\omega_\mathrm{ph}/\omega_{in}\ll{1}$. It should be emphasized that
the quasiclassical picture is neither an assumption, nor a
hypothesis, but it arises automatically in the direct calculation
of the Raman matrix element which is performed in the main part
of the paper.
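With the numbers quoted above, the separation of scales underlying
the quasiclassical picture is explicit:
\begin{equation}
\ell_\mathrm{ph}=\frac{v}{\omega_\mathrm{ph}}
\approx\frac{7.3\:\mbox{eV}\cdot\mbox{\AA}}{0.17\:\mbox{eV}}
\approx{43}\:\mbox{\AA},\qquad
\frac{\ell_\mathrm{ph}}{\lambdabar_\ep}=\frac{\omega_{in}}{2\omega_\mathrm{ph}}
\approx\frac{2\:\mbox{eV}}{2\times{0}.17\:\mbox{eV}}\approx{6}.
\end{equation}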
In the quasiclassical picture, the photoexcited electron and hole
can be viewed as wave packets of the size $\sim\lambdabar_\ep$, initially
created at an arbitrary point of the sample. More precisely,
instead of a point one can consider a region of a size $\delta{l}$,
such that $\lambdabar_\ep\ll\delta{l}\ll\ell_\mathrm{ph}$. Then momentum conservation
holds up to $\delta{p}\sim{1}/\delta{l}\ll\epsilon/v$ by virtue
of the uncertainty principle, so that electron and hole momenta
whose magnitude is $\epsilon/v$ (counted from the Dirac point),
have approximately opposite directions, as the photon momentum
is very small. The same argument holds for the phonon emission
and for the radiative recombination process: in order to emit a
photon, the electron and the hole must meet in the same region of
space of the size~$\delta{l}$ with almost opposite momenta (up to
$1/\delta{l}$). Momentum conservation at the reflection from the
edge depends on the quality of the edge, as discussed in
Sec.~\ref{sec:Qreflection}. Regardless of the properties of the
edge, an elementary geometric consideration, illustrated by
Fig.~\ref{fig:backscatt}, shows that for the electron and the hole
to be able to meet, the scattering on both the phonon and on the
edge must be backward.
\begin{figure}
\centerline{\includegraphics[width=8cm]{backscatt.eps}}
\caption{\label{fig:backscatt} (Color on-line) Real-space
representation of the scattering process responsible for the
D-peak near graphene edges. The lightning represents the
incoming photon which generates the electron-hole pair. The
solid black arrows represent the quasi-classical trajectories
of the electron and the hole. The dashed arrow represents the
emitted phonon. The flash represents the radiative recombination
of the electron-hole pair producing the scattered photon.
(a)~Backscattering off a translationally invariant edge is
possible only at normal incidence (up to the quantum uncertainty).
(b)~For oblique incidence on a translationally invariant edge
the reflection is specular, so the electron and the hole will
not be able to meet at the same point.
(c)~For a rough edge backscattering is possible even at
oblique incidence.
}\end{figure}
In the quasiclassical picture, the electron and the hole have to travel the
same distance between creation and annihilation, as their velocities are
equal. Then the process in Fig.~\ref{fig:traject}~(a) has more phase space
satisfying this restriction, and this process gives the main contribution to
the Raman matrix element. This will be also shown by an explicit estimate
in Sec.~\ref{sec:integrated}. Note that the three processes shown in
Fig.~\ref{fig:traject}, can be considered in the momentum space, as shown
in Fig.~\ref{fig:resfig}. According to the above argument, the processes
(b)~and~(c), often shown in the literature as an illustration of the
double resonance,\cite{ThomsenReich2000} are in fact weaker than the
process~(a) by a factor $\sim\omega_\mathrm{ph}/\omega_{in}$.
\begin{figure}
\includegraphics[width=8cm]{traject}
\caption{\label{fig:traject}(Color on-line.) Real space representation
of different contributions to the matrix element of the scattering
process responsible for the D-peak at an ideal armchair edge, placed
at $x=0$.
The solid black arrows represent the
quasi-classical trajectories of the electron and the hole corresponding
to the three Green's functions in Eqs.~(\ref{D1tr=}),~(\ref{D2tr=}).
Trajectories (a), (b), (c) correspond to decomposition of each of the
three Green's functions in Eq.~(\ref{D1tr=}).
$\ell$~is the overall spatial extent of the process, and
$\lambdabar_\ep=v/\epsilon$ is the electron wavelength divided by 2$\pi$.
}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{resfig}
\caption{\label{fig:resfig}(Color on-line.) Momentum space representation
of different contributions to the matrix element of the scattering
process responsible for the D-peak at an armchair graphene edge,
corresponding to the real space picture shown in Fig.~\ref{fig:traject}.
Solid lines represent the Dirac cones around $K$~and~$K'$ points of the
first Brillouin zone.
Vertical solid arrows represent interband electronic transitions accompanied
by photon absorption or emission (photon wave vector is neglected), dashed
arrows represent phonon emission, the horizontal dotted arrow represents
the scattering from the edge.
}
\end{figure}
\subsection{Polarization dependence}
The backscattering condition has immediate consequences for the
polarization dependence of the Raman scattering intensity. Indeed,
the matrix element of creation/annihilation of an electron and a
hole pair with momenta $\vec{p},-\vec{p}$ (counted from the Dirac
point) by a photon with the polarization~$\vec{e}$, is proportional
to $[\vec{e}\times\vec{p}]_z$, reaching its maximum when
$\vec{e}\perp\vec{p}$. Since a perfect edge conserves the
component of momentum along the edge, the backscattering is possible
only at normal incidence, as seen from Fig.~\ref{fig:backscatt}.%
\cite{singularity}
This gives the polarization dependence of the D peak intensity as
$I_D\propto\sin^2\varphi_{in}\sin^2\varphi_{out}$, where
$\varphi_{in}$ and $\varphi_{out}$ are the angles between the
polarizations of the incident and scattered photons and the normal
to the edge.
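Schematically, for backscattering off a perfect edge the momentum
$\vec{p}$ lies along the normal, so each photon vertex contributes
a factor $[\vec{e}\times\vec{p}]_z\propto\sin\varphi$, and
\begin{equation}
\mathcal{M}\propto
[\vec{e}_{in}\times\vec{p}]_z\,[\vec{e}_{out}\times\vec{p}]_z
\propto\sin\varphi_{in}\sin\varphi_{out}
\;\Rightarrow\;
I_D\propto|\mathcal{M}|^2\propto\sin^2\varphi_{in}\sin^2\varphi_{out}.
\end{equation}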
If one does not use the analyzer to fix the polarization of the
scattered photons, the dependence is $I_D\propto\sin^2\varphi_{in}$.
In experiments, however, the intensity never goes exactly to zero
for the polarizations perpendicular to the edge, but remains a
finite fraction~$\varepsilon$ of the intensity in the maximum.%
\cite{Cancado2004,Gupta2009,Casiraghi2009}
What determines this residual intensity in the minimum?
For an ideal edge the finite value of the intensity in the
minimum is entirely due to the \emph{quantum uncertainty}.
Namely, the momenta of the electron and the hole upon their
creation are not exactly opposite, but up to an uncertainty
$\sim{1}/\delta{l}$; the annihilation occurs not exactly at
the same spatial point, but within the spatial
uncertainty~$\delta{l}$. If the spatial extent of the process
is~$\ell$, the uncertainty is estimated as
$\delta{l}\sim\sqrt{\lambdabar_\ep\ell}$, and the ratio $\varepsilon$
of the intensities in the minimum and in the maximum (i.~e.,
for polarizations perpendicular and parallel to the edge,
respectively) should be small as a power of the small
quasiclassical parameter $\lambdabar_\ep/\ell$. The calculation
is performed in Sec.~\ref{sec:integrated}, the result is given
by Eqs.~(\ref{IDregularb=}) and (\ref{IDregularc=}) for the
detection without and with the analyzer, respectively. Up
to logarithmic factors, the ratio
$\varepsilon\sim\omega_\mathrm{ph}^2/\omega_{in}^2\sim(\lambdabar_\ep/\ell_\mathrm{ph})^2$.
This corresponds to $\ell_\mathrm{ph}\sim\ell$, in accordance with
the energy-time uncertainty principle, as discussed in the
previous subsection.
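With $\omega_\mathrm{ph}\approx{0}.17\:\mbox{eV}$ and
$\omega_{in}\approx{2}\:\mbox{eV}$, this estimate gives, up to
logarithmic factors and numerical coefficients of order one,
\begin{equation}
\varepsilon\sim\frac{\omega_\mathrm{ph}^2}{\omega_{in}^2}
\approx\left(\frac{0.17\:\mbox{eV}}{2\:\mbox{eV}}\right)^2
\approx{7}\cdot{10}^{-3}.
\end{equation}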
For rough edges the intensity in the minimum is determined by
the ability of the edge to backscatter electrons at oblique
incidence, as shown in Fig.~\ref{fig:backscatt}(c).
If the edge is rough at the atomic scale, oblique
backscattering is nearly as efficient as normal backscattering.
Still, such oblique trajectories are longer than those
corresponding to the normal incidence, so they are expected to
have a smaller weight since the virtual electron-hole pair
lives only for a restricted time. So one still can expect a
minimum of intensity for perpendicular polarization, but it is
of a purely geometrical origin, so one does not expect a
parametric smallness of the ratio~$\varepsilon$.
The calculation using the model of Sec.~\ref{sec:irregular} for an
atomically rough edge is performed in Sec.~\ref{sec:disordpolariz},
and the result is given by
Eqs.~(\ref{IDdisordb=}) and (\ref{IDdisordc=}) for the
detection without and with the analyzer, respectively.
In the former case the ratio $\varepsilon=1/3$, and the minimum
is indeed for the polarization of the incident light perpendicular
to the edge. With the analyzer, the absolute minimum is
$\varepsilon={1}/10$, reached when the polarizer and the analyzer
are oriented at the angle $\pi/3$ with respect to the edge and to
each other. When the polarizer and the analyzer are both rotated
parallel to each other, the minimum is $\varepsilon=1/5$.
For an edge consisting of segments longer than the electronic
wavelength, $d_e\gg\lambdabar_\ep$, we first analyze the contribution of
a single armchair segment. It is calculated in
Sec.~\ref{sec:fragmented}, and given by Eqs.~(\ref{IDfragma=})
and (\ref{IDfragmb=}) for the detection without and with the
analyzer, respectively.
The minimum is reached for the polarization perpendicular
to the \emph{armchair direction} (which does not have to
coincide with the average direction of the edge!), and is
determined by the \emph{quantum diffraction} of the electron
on the segment, $\varepsilon\sim\lambdabar_\ep/d_e$ (provided that
$\lambdabar_\ep/d_e\gtrsim\omega_\mathrm{ph}^2/\omega_{in}^2$, the ratio for the
infinite armchair edge, which is the case when
$d_e\lesssim{50}\:\mbox{nm}$ for $\omega_{in}=2\:\mbox{eV}$).
To obtain the contribution of the whole
edge, it is sufficient to multiply these expressions by the
total number of such segments and replace $d_e$ by its average,
provided that all armchair segments have the same orientation.
It is crucial, however, that up to three different orientations
of armchair segments are possible, at the angle $\pi/3$ to each
other. When contributions corresponding to several different
orientations are added together, the polarization dependence
may change quite dramatically,
as was briefly discussed by the author and coworkers in
Ref.~\onlinecite{Casiraghi2009} and is considered in more
detail in Sec.~\ref{sec:fragmented}.
Note that if the average direction of the edge is armchair or
zigzag, the possible orientations of armchair segments are
symmetric with respect to the average direction: three
orientations at angles 0 and $\pm\pi/3$ for an armchair
edge, and two at angles $\pm\pi/6$ for a zigzag edge. Most
likely, the number of segments corresponding to ``$+$''
and ``$-$'' signs will be equal on the average. Then, by
symmetry, the maximum of intensity in both cases will be
reached for the polarization along the average direction of
the edge, in agreement with recent experimental
observations.\cite{Gupta2009,Casiraghi2009}
When the average direction is armchair, the ratio
between the minimum and the maximum of intensity is
determined by the relative fraction of segments oriented
at $\pm\pi/3$ with respect to those oriented along the
average direction.
When the average direction is zigzag, the polarization
dependence is fully determined by the symmetry (on the
average) between the two armchair directions, and is
given by Eq.~(\ref{IDzigzag=}). The ratio between
the minimum and the maximum is $\varepsilon=1/3$ for detection
without analyzer, and $\varepsilon=1/9$ for detection with an
analyzer parallel to the polarizer of the incident light,
again, in agreement with
Refs.~\onlinecite{Gupta2009,Casiraghi2009}.
Thus, quantum diffraction effects appear to be masked
by the purely geometrical effects.
Remarkably, the quantum diffraction limit is still
accessible if only \emph{two} orientations of armchair
segments are present (which is the case when the average
direction is zigzag or close to it). It is sufficient
to put the polarizer perpendicular to one of the armchair
directions, and the analyzer perpendicular to the other
one, thereby killing the leading specular contribution
for both segments. In this polarization configuration the
absolute minimum of the intensity is reached, and it is
indeed determined by the quantum diffraction, as given
by Eq.~(\ref{IDquantum=}).
\subsection{Excitation position dependence}
\label{sec:qualpos}
In Ref.~\onlinecite{Novotny}, confocal Raman spectroscopy was
suggested and used as a way to probe the length scale~$\ell$
which restricts the $D$ peak to be in the vicinity of the edge
(the spatial extent of the Raman process).
The idea is to focus the incident light beam as tightly as
possible, so that its electric field $\mathcal{E}_{in}(\vec{r})$
has the shape
$\mathcal{E}_{in}(\vec{r})\propto{e}^{-|\vec{r}-\vec{r}_0|^2/(2L^2)}$,
where the width $L$ can be measured independently, and the spot
center position~$\vec{r}_0$ can be varied experimentally.
Then, given that the intensity of the $D$~peak is proportional to
\begin{equation}
I_D\propto\int\mathcal{K}(\vec{r},\vec{r}')\,
\mathcal{E}_{in}(\vec{r})\,\mathcal{E}_{in}^*(\vec{r}')\,
d^2\vec{r}\,d^2\vec{r}',
\end{equation}
where $\mathcal{K}(\vec{r},\vec{r}')$ is a certain kernel,
decaying away from the edge with a characteristic length
scale~$\ell$, a measurement of the dependence $I_D(\vec{r}_0)$
would give information on the kernel.
The measurement would be especially simple in the case $L\ll\ell$.
In reality, however, the relation is the opposite:
in Ref.~\onlinecite{Novotny} $L=186.5\:\mbox{nm}$, and $\ell$ is
a few tens of nanometers at most.\cite{citeGupta}
In this situation the
dependence of the Raman intensity $I_D(\vec{r}_0)$ is very close
to the excitation intensity profile $|\mathcal{E}_{in}(\vec{r})|^2$,
and the nonlocality of the kernel $\mathcal{K}(\vec{r},\vec{r}')$
manifests itself only in a slight change of the shape of
$I_D(\vec{r}_0)$ with respect to $|\mathcal{E}_{in}(\vec{r})|^2$.
In the first approximation it can be viewed just as a small shift
and broadening.
When the signal-to-noise ratio is not sufficiently high to perform
the full functional deconvolution, one has to assume a specific
functional form for the kernel and do a few-parameter fit. It is
clear that different functional forms will give values of~$\ell$
differing by a factor of the order of~1.
In Ref.~\onlinecite{Novotny} the form
$\mathcal{K}(\vec{r},\vec{r}')=%
\theta(x)\,\theta(x')\,e^{-2\gamma(x+x')/\ell_\gamma}$
was assumed, where $x$ is the distance from the edge, $\theta(x)$
is the step function, and $\ell_\gamma=v/(2\gamma)$ is the electron
inelastic scattering length [$2\gamma$ is the electron inelastic
scattering rate, see Eq.~(\ref{Ramanmatrixelement=})]. This
assumption seems to contradict the fact that the lifetime of the
virtual electron-hole pair is $\sim{1}/\omega_\mathrm{ph}$, discussed in
Sec.~\ref{sec:quasiclassical}, as it was pointed out in
Refs.~\onlinecite{megapaper,Casiraghi2009}.
The explicit form of the kernel $\mathcal{K}(\vec{r},\vec{r}')$
for an ideal armchair edge is calculated in Sec.~\ref{sec:regSpatial},
it is given by Eq.~(\ref{spatialkernel=}), and it turns out to be more
complicated than a simple exponential. In fact, it depends on both
length scales, $\ell_\mathrm{ph}$ and $\ell_\gamma$. The length $\ell_\mathrm{ph}$ is
shorter, but the spatial cutoff it provides is only power-law. The
longer length~$\ell_\gamma$ is responsible for the strong exponential
cutoff.
Which of the two lengths plays the dominant role in the Raman process,
turns out to depend on the {\em specific observable} to be measured.
The total integrated intensity for
$\mathcal{E}_{in}(\vec{r})=\mathrm{const}$
is proportional to the integral
$\int\mathcal{K}(\vec{r},\vec{r}')\,d^2\vec{r}\,d^2\vec{r}'$,
which is determined mainly by $\ell_\mathrm{ph}$, while $\ell_\gamma$ enters
only in a logarithmic factor. The same can be said about
the polarization dependence and diffraction corrections, discussed
in the previous subsection. However, the change in the shape of
$I_D(\vec{r}_0)$, compared to the excitation intensity profile
$|\mathcal{E}_{in}(\vec{r})|^2$, is determined by the second and
higher \emph{moments} of the kernel,
$\int{x}^n(x')^{n'}\,\mathcal{K}(\vec{r},\vec{r}')\,%
d^2\vec{r}\,d^2\vec{r}'$, with $n+n'\geq{2}$. These moments turn
out to be determined by the longer scale $\ell_\gamma$. Thus, the
interpretation of the experiment by the authors of
Ref.~\onlinecite{Novotny} is qualitatively correct.
Analysis of the experimental data of Ref.~\onlinecite{Novotny}
using kernel~(\ref{spatialkernel=}) gives
$\ell_\gamma=66\:\mbox{nm}$, corresponding to
$2\gamma\approx{11}\:\mbox{meV}$.
Analogous analysis was done for the case of strongly disordered
edge in Sec.~\ref{sec:disordspatial}, and in this model one
obtains $\ell_\gamma=120\:\mbox{nm}$. Indeed, as the disordered
edge gives more weight to oblique trajectories, as shown in
Fig.~\ref{fig:backscatt}(c), the effective distance from the
edge at which the kernel decays is shorter than for
normal incidence, so a larger value of $\ell_\gamma$ is required
to fit the data.
The inelastic scattering rate for an electron with the
energy~$\epsilon$ due to phonon emission can be written as
$2\gamma=(\lambda_\Gamma+\lambda_K)\epsilon/2$,\cite{shortraman,megapaper}
where $\lambda_\Gamma$ and $\lambda_K$ are dimensionless
electron-phonon coupling constants [$\lambda_K$ is defined in
Eq.~(\ref{lambdaphonon=}), and $\lambda_\Gamma$~is defined
analogously, but the optical phonons at the $\Gamma$~point should
be considered].
The value of the constant~$\lambda_\Gamma$ can be reliably
taken to be about $\lambda_\Gamma\approx{0}.03$.
Indeed, a DFT calculation\cite{Piscanec2004} gives
$\lambda_\Gamma\approx{0}.028$; measurements of the dependence
of the $G$-peak frequency $\omega_G$ on the electronic Fermi
energy~$\epsilon_F$,
$d\omega_G/d|\epsilon_F|\approx\lambda_\Gamma/2\pi$ give
$\lambda_{\Gamma}\approx{0}.034$\cite{Yan2007} and
$\lambda_{\Gamma}\approx{0}.027$;\cite{Pisana2007}
the value of $\lambda_\Gamma$ is not renormalized by the Coulomb
interaction.\cite{BaskoAleiner}
The value of $\lambda_K$ has been debated recently.%
\cite{Calandra2007,BaskoAleiner,Lazzeri2008}
The measurements of the phonon group velocity (see
Ref.~\onlinecite{Lazzeri2008} for the summary of the experimental
data) give $\lambda_K\approx{0}.04$.
The ratio between the two coupling constants can be also
extracted from the experimental ratio of the two-phonon
peak intensities,\cite{megapaper}
$2(\lambda_K/\lambda_\Gamma)^2\approx{20}$,\cite{Ferrari2006}
which gives $\lambda_K\approx{0}.10$.
Thus, $\lambda_\Gamma+\lambda_K\approx{0}.1\pm{0.03}$ seems
to be a reasonable estimate. This estimate gives
$2\gamma\approx{50}\:\mbox{meV}$ for electrons with the
energy $\omega_{in}/2=0.98\:\mbox{eV}$, which translates
into a value of $\ell_\gamma$ several times shorter than
that following from the results of Ref.~\onlinecite{Novotny}.
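Explicitly, this chain of estimates reads
\begin{equation}
2\gamma\approx\frac{\lambda_\Gamma+\lambda_K}{2}\,\frac{\omega_{in}}{2}
\approx\frac{0.1\times{0}.98\:\mbox{eV}}{2}\approx{50}\:\mbox{meV},
\qquad
\ell_\gamma=\frac{v}{2\gamma}
\approx\frac{7.3\:\mbox{eV}\cdot\mbox{\AA}}{0.05\:\mbox{eV}}
\approx{15}\:\mbox{nm},
\end{equation}
roughly four times shorter than the value
$\ell_\gamma=66\:\mbox{nm}$ extracted above from the data of
Ref.~\onlinecite{Novotny}.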
\subsection{On the Tuinstra-K\"onig relation}
For a sample of a finite size $L_a$ the total $D$~peak intensity is
proportional to the total length of the edge, i.~e., the sample
perimeter, so that $I_D\propto{L}_a$.
At the same time, the intensity of the $G$~peak at 1581~cm$^{-1}$
is proportional to the area of the sample, i.~e., $I_G\propto{L}_a^2$.
These simple facts result in the so-called Tuinstra-K\"onig
relation, established in experiments on graphite nanocrystallites
long ago,\cite{Tuinstra1970,Knight1989} and confirmed experimentally
many times afterwards:%
\cite{Dresselhaus1999,Cancado2006,Cancado2007,Sato2006}
$I_D/I_G\propto{1}/L_a$.
The proportionality coefficient cannot be determined universally;
it clearly depends on the character of the edge
(as an extreme case, one can imagine a hexagonal flake with entirely
zigzag edges which do not give any $D$~peak at all, except at the
junctions between them; then $I_D$ is not even proportional
to $L_a$).
What is the boundary of validity of the Tuinstra-K\"onig relation
on the small-size side?
It was noted in Ref.~\onlinecite{Ferrari2000}
that since the atomic displacement pattern corresponding to the $D$~peak
must involve at least one aromatic ring, the size of the ring, a few
angstroms, represents an absolute lower bound. From the results of
the present work it follows that the dependence $I_D\propto{L}_a$
becomes logarithmically sensitive to the presence of the opposite
edge, $I_D\propto{L}_a\ln(\omega_\mathrm{ph}{L}_a/v)$, if the sample size is
smaller than the electron inelastic length, $L_a<v/(2\gamma)$, and
the whole approach
becomes invalid when $L_a\sim{v}/\omega_\mathrm{ph}\approx{4}\:\mbox{nm}$.
The breakdown of the $1/L_a$ dependence has indeed been observed
for $L_a$ smaller than a few nanometers.\cite{Zickler2006}
\section{Free Dirac electrons}\label{sec:Free}
\begin{figure}
\includegraphics[width=8cm]{lattice2}
\caption{\label{fig:lattice} Honeycomb lattice with the $A$ and $B$
sublattices and the elementary translation vectors.
}
\end{figure}
\begin{table}
\begin{tabular}[t]{|c|c|c|c|c|c|c|} \hline
$C_{6v}$ & $E$ & $C_2$ & $2C_3$ & $2C_6$ & $\sigma_{a,b,c}$ &
$\sigma_{a,b,c}'$
\\ \hline\hline $A_1$ & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline $A_2$ & 1
& 1 & 1 & 1 & $-1$ & $-1$ \\ \hline $B_2$ & 1 & $-1$ & 1 & $-1$ &
$1$ & $-1$ \\ \hline $B_1$ & 1 & $-1$ & 1 & $-1$ & $-1$ & $1$ \\
\hline $E_1$ & 2 & $-2$ & $-1$ & $1$ & 0 & 0 \\ \hline $E_2$ & 2 & 2 &
$-1$ & $-1$ & 0 & 0 \\ \hline\end{tabular}\hspace{1cm}
\caption{Irreducible representations of the group $C_{6v}$ and their
characters.\label{tab:C6vC3v}}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
irrep &
$A_1$ & $B_1$ & $A_2$ & $B_2$ & $E_1$ & $E_2$ \\ \hline
\multicolumn{7}{|c|}{valley-diagonal matrices}\\
\hline matrix & $\openone$ & $\Lambda_z$ & $\Sigma_z$ &
$\Lambda_z\Sigma_z$ & $\Sigma_x,\,\Sigma_y$ &
$-\Lambda_z\Sigma_y,\Lambda_z\Sigma_x$ \\ \hline
\multicolumn{7}{|c|}{valley-off-diagonal matrices} \\
\hline matrix & $\Lambda_x\Sigma_z$ & $\Lambda_y\Sigma_z$ & $\Lambda_x$
& $\Lambda_y$ & $\Lambda_x\Sigma_y,-\Lambda_x\Sigma_x$ &
$\Lambda_y\Sigma_x,\Lambda_y\Sigma_y$ \\ \hline
\end{tabular}
\caption{Classification of $4\times{4}$ hermitian matrices according to
irreducible representations (irreps) of the $C_{6v}$~group.
\label{tab:matrices}}
\end{table}
In this section we summarize the model for the bulk graphene only,
which is fully analogous to that of Ref.~\onlinecite{megapaper}.
Properties of the edge are discussed in the next section.
We measure the single-electron energy~$\epsilon$ from the Fermi level of the
undoped (half-filled) graphene. The Fermi surface of undoped graphene
consists of two points, called $K$~and~$K'$. The graphene unit cell
contains two atoms, labeled $A$ and $B$ (see Fig.~\ref{fig:lattice}),
each of which has one $\pi$-orbital, so there are
two electronic states for each point of the first Brillouin zone (we
disregard the electron spin). Thus, there are exactly four electronic
states with zero energy. An arbitrary linear combination of them is
represented by a 4-component column vector~$\psi$. States with low
energy are obtained by including a smooth position dependence
$\psi(\vec{r})$, $\vec{r}\equiv(x,y)$. The low-energy hamiltonian has
the Dirac form:
\begin{equation}\label{Hel=}
\hat{H}_0=\int{d}^2\vec{r}\,\hat\psi^\dagger(\vec{r})\,
(-iv\vec\Sigma\cdot\vec\nabla)\,\hat\psi(\vec{r}).
\end{equation}
Here we used the second-quantized notation and introduced
the electronic $\psi$-operators $\hat\psi(\vec{r}),\hat\psi^\dagger(\vec{r})$.
It is convenient to define the $4\times{4}$ isospin matrices
$\vec\Sigma\equiv(\Sigma_x,\Sigma_y)$, not through their explicit
form, which depends on the choice of the basis (specific arrangement
of the components in the column~$\psi$), but through their
transformation properties. Namely, all 16 generators of the $SU(4)$
group, forming the basis in the space of $4\times{4}$ hermitian
matrices, can be classified according to the irreducible
representations of~$C_{6v}$, the point group of the graphene crystal
(Tables \ref{tab:C6vC3v} and~\ref{tab:matrices}). They can be
represented as products of two mutually commuting algebras of Pauli
matrices $\Sigma_x,\Sigma_y,\Sigma_z$ and
$\Lambda_x,\Lambda_y,\Lambda_z$,\cite{McCann2006,AleinerEfetov} which
fixes their algebraic relations. By definition, $\Sigma_x,\Sigma_y$ are
the matrices, diagonal in the $K,K'$ subspace, and transforming
according to the $E_1$~representation of~$C_{6v}$.
In the following we will take advantage of the symmetry with
respect to time reversal. The action of the time-reversal operation
on the four-component envelope function $\psi(\vec{r})$ is
defined as
\begin{equation}\label{UTpsi=}
\psi(\vec{r})\mapsto
U_t\psi^*(\vec{r}),
\end{equation}
where $U_t$~is a unitary $4\times{4}$ matrix. When
applied twice, the time-reversal operation must give the identity,
which results in the additional requirement $U_tU_t^*=\openone$.
The explicit form of $U_t$ depends on the choice of the basis.
In those rare cases when a specific representation has to be chosen,
we use that of Ref.~\onlinecite{AleinerEfetov},
\begin{equation}\label{Aleinerrep=}
\psi=\left[\begin{array}{c}
\psi_{AK} \\ \psi_{BK} \\ \psi_{BK'} \\ -\psi_{AK'}
\end{array}\right],
\end{equation}
where the first subscript labels the sublattice ($A,B$), and the
second one labels the valley ($K,K'$). In this basis $\Sigma_i$~are
the Pauli matrices acting within upper and lower 2-blocks of the
column (the sublattice subspace), while $\Lambda_i$ are the Pauli
matrices acting in the ``external'' subspace of the 2-blocks (the
valley subspace). The time reversal matrix in this representation
is given by $U_t=\Sigma_y\Lambda_y$.
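The isospin algebra just described can be checked with a short numerical sketch (not part of the derivation; it assumes the explicit representation of Ref.~\onlinecite{AleinerEfetov} described above, with $\Sigma_i=\openone\otimes\sigma_i$ and $\Lambda_i=\sigma_i\otimes\openone$). It verifies that the two Pauli algebras mutually commute and that $U_t=\Sigma_y\Lambda_y$ obeys $U_tU_t^*=\openone$.

```python
# Sketch: check the Sigma/Lambda algebra, assuming the basis of
# Ref. AleinerEfetov with Sigma_i = I (x) sigma_i (sublattice) and
# Lambda_i = sigma_i (x) I (valley).

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def mul(A, B):
    """Product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I2 = [[1, 0], [0, 1]]
sigma = {"x": [[0, 1], [1, 0]],
         "y": [[0, -1j], [1j, 0]],
         "z": [[1, 0], [0, -1]]}

Sigma = {i: kron(I2, s) for i, s in sigma.items()}   # sublattice isospin
Lam = {i: kron(s, I2) for i, s in sigma.items()}     # valley isospin

# Time-reversal matrix U_t = Sigma_y Lambda_y; it should satisfy
# U_t U_t^* = 1 (time reversal applied twice is the identity).
Ut = mul(Sigma["y"], Lam["y"])
Ut_conj = [[z.conjugate() for z in row] for row in Ut]
UtUtstar = mul(Ut, Ut_conj)
```

Any basis related to this one by a constant unitary rotation passes the same checks, with $U_t$ transformed accordingly.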
The electron Green's
function, corresponding to the hamiltonian~(\ref{Hel=}), is given by
\begin{equation}\label{Greenfp=}
G_0(\vec{p},\epsilon)=
\frac{\epsilon+v\vec{p}\cdot\vec\Sigma}{\epsilon^2-(vp-i\gamma_\epsilon)^2},
\end{equation}
where $\vec{p}$ and $\epsilon$ are electronic momentum and energy, counted
from the Dirac point. The inelastic broadening $\gamma_\epsilon\ll|\epsilon|$ is
introduced phenomenologically. In the coordinate representation
the Green's function is given by
\begin{eqnarray}
G_0(\vec{r},\epsilon)
&=&\frac{\epsilon+i\gamma_\epsilon\mathop{\mathrm{sgn}}\epsilon-iv\vec\Sigma\cdot\vec\nabla}{4iv^2}\,H_0^{(1)}(\zeta),
\label{Greenfr=}
\end{eqnarray}
where $H_0^{(1)}(\zeta)$ is the Hankel function and
$\zeta\equiv(|\epsilon|+i\gamma_\epsilon)r/v$.
We will mostly need the asymptotic form valid at distances
$r\gg{v}/|\epsilon|$,
\begin{eqnarray}
G_0(\vec{r},\epsilon)=-\sqrt{\frac{i\zeta}{2\pi}}\,\frac{e^{i\zeta}}{vr}
\left[\frac{\mathop{\mathrm{sgn}}\epsilon+\vec\Sigma\cdot\vec{r}/r}{2}
\left(1+\frac{i}{8\zeta}\right)\right.\nonumber\\
-\left.\frac{\mathop{\mathrm{sgn}}\epsilon-\vec\Sigma\cdot\vec{r}/r}{2}\,\frac{i}{4\zeta}+O(\zeta^{-2})\right].
\label{GreenfLarger=}
\end{eqnarray}
Any wave function $\psi(\vec{r})$ satisfying the Dirac equation,
$(\epsilon+i\gamma_\epsilon\mathop{\mathrm{sgn}}\epsilon+iv\vec\Sigma\cdot\vec\nabla)\psi(\vec{r})=0$,
in some region~$\mathcal{O}$ of space, also satisfies the
Huygens-Fresnel principle. Namely, the value of $\psi(\vec{r})$
at an arbitrary point $\vec{r}\in\mathcal{O}$ can be written as
an integral over the boundary~$\partial\mathcal{O}$,
\begin{equation}
\psi(\vec{r})=iv\oint\limits_{\partial\mathcal{O}}
\vec{n}\cdot\left[G_0(\vec{r}-\vec{r}_e,\epsilon)\,
\vec\Sigma\,\psi(\vec{r}_e)\right]d\vec{r}_e.
\label{Huygens=}
\end{equation}
Here $\vec{r}_e$ is the coordinate along the boundary, and
$\vec{n}$ is the inner normal to the boundary.
This relation follows from the Gauss theorem and the fact that
$(p^2+\nabla^2)H_0^{(1)}(pr)=4i\delta(\vec{r})$ for any~$p$.
\section{Models for electrons near the edge}\label{sec:Edge}
\subsection{Translationally invariant edge}\label{sec:edgeReg}
The main assumption of this subsection is that the component of
the electronic momentum~$\vec{p}$ along the edge is conserved upon
reflection, so that a plane wave is reflected as a plane wave. The
most studied ideal zigzag and armchair edges fall into this category.
Here we do not restrict ourselves just to zigzag or armchair edges,
requiring only that the spatial period~$d_e$ of the edge is smaller
than half the electron wavelength, $d_e<\pi\lambdabar_\ep$.
For $d_e\ll\lambdabar_\ep$ the reflection of electrons from the edge can be
described by an effective boundary condition for the electronic wave
function.\cite{McCann2004,Akhmerov2008}
The edge is assumed to be a straight line determined by its normal
unit vector~$\vec{n}$, so that graphene occupies the half-plane
$\vec{n}\cdot\vec{r}>0$.
The microscopic Schr\"odinger equation determines the effective
boundary condition on the wave function~$\psi(\vec{r})$, which
for smooth functions (on the scale~$d_e$) can be simply written as
$\left.B\psi\right|_\mathrm{edge}=0$, where $B$~is a $4\times{4}$
hermitian matrix. The rank of~$B$ is equal to~2 since the linear
space of incident states at fixed energy is two-dimensional due
to the valley degeneracy. Thus, $B$~has two zero eigenvalues, while
the other two can be set to~1 without loss of generality (only
the zero subspace of~$B$ matters), so one can impose the
condition $B^2=B$. Equivalently, one can write $B=(\openone-M)/2$,
where $M$~has the same eigenvectors as~$B$, but its eigenvalues are
equal to $\pm{1}$, hence $M^2=\openone$. To ensure current
conservation, the condition $\left.B\psi\right|_\mathrm{edge}=0$
must automatically yield $\psi^\dagger(\vec{n}\cdot\vec\Sigma)\psi=0$;
this means that $M(\vec{n}\cdot\vec\Sigma)+(\vec{n}\cdot\vec\Sigma)M=0$.
Finally, the time reversal symmetry requires that the conditions
$B\psi=0$ and $BU_t\psi^*=0$ must be equivalent, which yields
$M^*=U_t^\dagger{M}U_t$.
To summarize all the above arguments, the general energy-independent
boundary condition has the form
\begin{equation}
\left.(\openone-M)\psi\right|_\mathrm{edge}=0,
\label{boundarycond=}
\end{equation}
where the $4\times{4}$ hermitian matrix~$M$ satisfies the following
conditions, which turn out to be the same as those obtained in Ref.~\onlinecite{McCann2004}:
\begin{equation}
M^2=\openone,\quad
M(\vec{n}\cdot\vec\Sigma)+(\vec{n}\cdot\vec\Sigma)M=0,\quad
M=U_tM^*U_t^\dagger.
\end{equation}
Matrices satisfying these constraints can be parametrized
by an angle~$\chi$ and a three-dimensional unit vector $(m_x,m_y,m_z)$:%
\cite{Akhmerov2008}
\begin{subequations}\begin{eqnarray}
M&=&\left(\Sigma_z\cos\chi+[\vec{n}\times\vec\Sigma]_z\sin\chi\right)
M_\Lambda,\label{edgeMa=}\\
M_\Lambda&=&\sum_{i=x,y,z}m_i\Lambda_i,\;\;\;
m_i\in\mathbb{R},\;\;\;
\sum_{i=x,y,z}m_i^2=1.\label{edgeMb=}
\end{eqnarray}\label{edgeM=}\end{subequations}
Without loss of generality we can assume $\cos\chi\geq{0}$
(the negative sign can always be incorporated in $M_\Lambda$).
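As a numerical consistency check (a sketch under the same basis assumption as above, $\Sigma_i=\openone\otimes\sigma_i$, $\Lambda_i=\sigma_i\otimes\openone$, $U_t=\Sigma_y\Lambda_y$; the values of $\chi$, the edge orientation, and $m_i$ below are arbitrary), one can verify that the parametrization~(\ref{edgeM=}) satisfies all three constraints on~$M$:

```python
import math

# Sketch: check M^2 = 1, {M, n.Sigma} = 0, and M = U_t M* U_t^dagger for
# the parametrization (edgeM=). Basis assumption: Sigma_i = I (x) sigma_i,
# Lambda_i = sigma_i (x) I, so U_t = Sigma_y Lambda_y is real and hermitian.

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lin(c1, A, c2, B):
    """Linear combination c1*A + c2*B."""
    n = len(A)
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(n)] for i in range(n)]

I2 = [[1, 0], [0, 1]]
sx, sy, sz = [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]
Sx, Sy, Sz = (kron(I2, s) for s in (sx, sy, sz))   # Sigma_i
Lx, Ly, Lz = (kron(s, I2) for s in (sx, sy, sz))   # Lambda_i
Ut = mul(Sy, Ly)                                   # U_t = U_t^dagger here

chi, phi = 0.7, 1.1                  # arbitrary angle and edge orientation
mx, my, mz = 0.36, 0.48, 0.80        # arbitrary unit vector (sum of squares = 1)
nx, ny = math.cos(phi), math.sin(phi)

nSigma = lin(nx, Sx, ny, Sy)                        # n . Sigma
nxSigma_z = lin(nx, Sy, -ny, Sx)                    # [n x Sigma]_z
MLam = lin(1, lin(mx, Lx, my, Ly), mz, Lz)          # M_Lambda
M = mul(lin(math.cos(chi), Sz, math.sin(chi), nxSigma_z), MLam)

M2 = mul(M, M)                                      # should be the identity
anti = lin(1, mul(M, nSigma), 1, mul(nSigma, M))    # should vanish
M_conj = [[z.conjugate() for z in row] for row in M]
M_tr = mul(mul(Ut, M_conj), Ut)                     # U_t M* U_t^dagger
```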
Explicit expressions for the matrix~$M$, corresponding
to the edges shown in Fig.~\ref{fig:edges}, can be obtained
in the tight-binding model on the terminated honeycomb
lattice\cite{Brey2006} [it is convenient to use
representation~(\ref{Aleinerrep=})].
For the zigzag edge with $\vec{n}=-\vec{e}_y$
[Fig.~\ref{fig:edges}(a)] the boundary condition is
$\psi_{BK}=\psi_{BK'}=0$, which gives $M=\Sigma_z\Lambda_z$.
This agrees with the prediction of Ref.~\onlinecite{Cancado2004}
that upon reflection from a zigzag edge the electron cannot
scatter between the valleys, which corresponds to a valley-diagonal
matrix $M$.
For the armchair edge with $\vec{n}=\vec{e}_x$
[Fig.~\ref{fig:edges}(a) rotated by $2\pi/3$ counterclockwise]
we have $\psi_{AK}+\psi_{AK'}=\psi_{BK}+\psi_{BK'}=0$, and
$M=-\Sigma_y\Lambda_y$. It can be shown that in
the nearest-neighbor tight-binding model on a terminated
honeycomb lattice only zigzag and armchair boundary
conditions can be obtained, the latter occurring only if
the edge direction is armchair; to obtain the full
form of Eq.~(\ref{edgeMa=}), one has to include an on-site
potential in the vicinity of the edge.\cite{Akhmerov2008}
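The two examples above can be checked directly. The sketch below (assuming the basis~(\ref{Aleinerrep=}) with component order $(\psi_{AK},\psi_{BK},\psi_{BK'},-\psi_{AK'})$ and $\Sigma_i=\openone\otimes\sigma_i$, $\Lambda_i=\sigma_i\otimes\openone$) verifies that $(\openone-M)\psi=0$, i.~e. $M\psi=\psi$, reproduces the stated microscopic conditions:

```python
# Sketch: verify the zigzag and armchair boundary matrices quoted above,
# assuming the basis (psi_AK, psi_BK, psi_BK', -psi_AK') with
# Sigma_i = I (x) sigma_i, Lambda_i = sigma_i (x) I.

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(M, psi):
    return [sum(M[i][k] * psi[k] for k in range(4)) for i in range(4)]

I2 = [[1, 0], [0, 1]]
sy, sz = [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]

M_zigzag = mul(kron(I2, sz), kron(sz, I2))          # Sigma_z Lambda_z
M_armchair = [[-z for z in row]
              for row in mul(kron(I2, sy), kron(sy, I2))]  # -Sigma_y Lambda_y

# Zigzag: psi_BK = psi_BK' = 0, i.e. psi = (a, 0, 0, b).
psi_zz = [1.0, 0.0, 0.0, 2.0]
res_zz = [apply(M_zigzag, psi_zz)[i] - psi_zz[i] for i in range(4)]

# Armchair: psi_AK + psi_AK' = 0 and psi_BK + psi_BK' = 0; since the
# fourth component is -psi_AK', this reads psi = (a, b, -b, a).
psi_ac = [1.0, 2.0, -2.0, 1.0]
res_ac = [apply(M_armchair, psi_ac)[i] - psi_ac[i] for i in range(4)]
```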
It has been known for quite some time that a perfect zigzag edge supports
states confined to the edge.%
\cite{Stein1987,Tanaka1987,Fujita1996-1,Fujita1996-2,Dresselhaus1996}
Let us see which class of boundary conditions is compatible with the
existence of such states. The wave function of an edge state must have the form
\begin{equation}
\psi(\vec{r})=\psi_0e^{-\kappa(\vec{n}\cdot\vec{r})+ip_\|[\vec{n}\times\vec{r}]_z},
\quad\kappa>0,
\end{equation}
where the vector $\psi_0$ is such that the solution satisfies both the Dirac
equation in the bulk with some energy~$\epsilon$, as well as the boundary
condition at the edge.
It is convenient to make a unitary substitution
$\psi_0=e^{i\Sigma_x\chi/2}e^{-i\Sigma_z\varphi_\vec{n}/2}\tilde\psi_0$,
where $\varphi_\vec{n}$ is the polar angle of the direction~$\vec{n}$.
Then the two conditions have the following form:
\begin{subequations}\begin{eqnarray}
&&\left(i\kappa\Sigma_x+p_\|\cos\chi\Sigma_y+p_\|\sin\chi\Sigma_z\right)\tilde\psi_0
=\frac\epsilon{v}\,\tilde\psi_0,\\
&&\Sigma_zM_\Lambda\tilde\psi_0=\tilde\psi_0.
\end{eqnarray}\end{subequations}
The boundary condition is satisfied by two linearly independent vectors
$\tilde\psi_\pm$ which can be chosen to satisfy
$\Sigma_z\tilde\psi_\pm=\pm\tilde\psi_\pm$
(to find them it is sufficient to diagonalize the matrix~$M_\Lambda$).
Each of them satisfies the first condition if and only if
$\epsilon=\pm{v}p_\|\sin\chi$, $\kappa=\mp{p}_\|\cos\chi$.
The requirement $\kappa>0$ leaves only one of them,
$\tilde\psi_{-\mathop{\mathrm{sgn}}{p}_\|}$, and the energy of the edge state is
$\epsilon=-v|p_\||\sin\chi$. Thus, it seems that almost any edge
can support a bound state, the exception being the case
$\cos\chi=0$ (armchair-like edge), which is therefore special
rather than generic.
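This piece of algebra can be verified directly. The sketch below (a $2\times2$ reduction within the $\Sigma_z$-eigenbasis; the parameter values are arbitrary, chosen only for illustration) confirms that $\epsilon=\pm{v}p_\|\sin\chi$, $\kappa=\mp{p}_\|\cos\chi$ solve the bulk equation for $\tilde\psi_\pm$:

```python
import math

# Sketch: for tilde-psi_± with Sigma_z tilde-psi_± = ± tilde-psi_±,
# check that epsilon = ± v p sin(chi), kappa = ∓ p cos(chi) solve
# (i kappa Sigma_x + p cos(chi) Sigma_y + p sin(chi) Sigma_z) psi = (eps/v) psi,
# reduced to 2x2 Pauli matrices.
chi, p, v = 0.4, 1.3, 1.0       # arbitrary illustrative values
residuals = []
for sgn in (+1, -1):
    eps = sgn * v * p * math.sin(chi)
    kappa = -sgn * p * math.cos(chi)
    # 2x2 matrix i*kappa*sigma_x + p*cos(chi)*sigma_y + p*sin(chi)*sigma_z:
    H = [[p * math.sin(chi), 1j * kappa - 1j * p * math.cos(chi)],
         [1j * kappa + 1j * p * math.cos(chi), -p * math.sin(chi)]]
    psi = [1, 0] if sgn > 0 else [0, 1]      # Sigma_z eigenvector
    Hpsi = [sum(H[i][k] * psi[k] for k in range(2)) for i in range(2)]
    residuals.append(max(abs(Hpsi[i] - (eps / v) * psi[i]) for i in range(2)))
```

The requirement $\kappa>0$ then selects one of the two branches, exactly as stated in the text.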
Now we turn to the scattering (reflecting) states, which are the ones
responsible for the edge-assisted Raman scattering.
Even though the component~$p_\|$ of the electron momentum~$\vec{p}$,
parallel to the edge, is conserved, reflection can change the valley structure
of the electron wave function. The general form of such a solution is
\begin{eqnarray}
&&\psi(\vec{r})=
\psi_\vec{p}
e^{ip_\perp(\vec{n}\cdot\vec{r})+ip_\|[\vec{n}\times\vec{r}]_z}+
\nonumber\\&&\qquad{}+
S_\Lambda[\vec{n}\times\vec\Sigma]_z
\psi_\vec{p}
e^{-ip_\perp(\vec{n}\cdot\vec{r})+ip_\|[\vec{n}\times\vec{r}]_z},
\label{reflectingsolution=}
\end{eqnarray}
where
$p_\perp=(\vec{n}\cdot\vec{p})<0$, $p_\|=[\vec{n}\times\vec{p}]_z$,
and $\psi_\vec{p}$ is an eigenvector of $\vec{p}\cdot\vec\Sigma$:
$(\vec{p}\cdot\vec\Sigma)\psi_\vec{p}=\pm|\vec{p}|\psi_\vec{p}$.
The first term represents the wave incident on the edge, the second one
is the reflected wave. The matrix $[\vec{n}\times\vec\Sigma]_z$ simply
aligns the isospin of the reflected particle with the new direction of
momentum. The unitary matrix~$S_\Lambda$ represents a rotation in the valley
subspace. It should be found from the boundary condition~(\ref{boundarycond=})
(this is conveniently done in the basis of the eigenvectors of~$M_\Lambda$),
which gives
\begin{subequations}\begin{eqnarray}
&&S_\Lambda=\zeta\,
\frac{\Lambda_0+M_\Lambda(\cos\chi+\zeta^*\sin\chi)}%
{\Lambda_0+M_\Lambda(\cos\chi+\zeta\sin\chi)},\\
&&\zeta=\pm\frac{1}p
\left\{-[\vec{n}\times\vec{p}]_z+i(\vec{n}\cdot\vec{p})\right\},\quad
|\zeta|=1.
\end{eqnarray}\end{subequations}
For $\sin\chi=0$ (zigzag edge) we have $S_\Lambda=\zeta$.
For $\cos\chi=0$ (armchair edge) we have $S_\Lambda=M_\Lambda$,
independent of the direction of~$\vec{p}$.
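Unitarity of~$S_\Lambda$ (required for current conservation) can be confirmed numerically. The sketch below treats $M_\Lambda$ as the $2\times2$ matrix $\sum_i m_i\sigma_i$ in the valley subspace (an assumption consistent with Eq.~(\ref{edgeMb=})); the values of $\chi$, $\zeta$, and $m_i$ are arbitrary. It also checks the armchair limit $\cos\chi=0$, where $S_\Lambda=M_\Lambda$:

```python
import cmath
import math

# Sketch: check unitarity of S_Lambda, with M_Lambda = sum_i m_i sigma_i
# taken as a 2x2 matrix in the valley subspace.

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def S_Lambda(chi, zeta, m):
    ML = [[m[2], m[0] - 1j * m[1]], [m[0] + 1j * m[1], -m[2]]]
    c, s = math.cos(chi), math.sin(chi)
    num = [[(1 if i == j else 0) + ML[i][j] * (c + zeta.conjugate() * s)
            for j in range(2)] for i in range(2)]
    den = [[(1 if i == j else 0) + ML[i][j] * (c + zeta * s)
            for j in range(2)] for i in range(2)]
    return [[zeta * x for x in row] for row in mul2(num, inv2(den))]

m = (0.36, 0.48, 0.80)          # arbitrary unit vector
zeta = cmath.exp(2.1j)          # |zeta| = 1
S = S_Lambda(0.8, zeta, m)
Sdag = [[S[j][i].conjugate() for j in range(2)] for i in range(2)]
SSdag = mul2(S, Sdag)           # should be the 2x2 identity
```

Note that for $\sin\chi=0$ the ratio in the formula becomes $0/0$ on part of the spectrum of $M_\Lambda$, so the zigzag limit $S_\Lambda=\zeta$ is to be understood as the limiting value rather than evaluated directly.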
The reflected part of Eq.~(\ref{reflectingsolution=}) can be
identically rewritten using the Huygens-Fresnel principle,
Eq.~(\ref{Huygens=}), as
\begin{equation}
\psi(\vec{r})=\psi_\vec{p}e^{i\vec{p}\vec{r}}
-\int\limits_\mathrm{edge} d\vec{r}_e\,
G_0(\vec{r}-\vec{r}_e,\epsilon)\,
v\Sigma_zS_\Lambda
\psi_\vec{p}e^{i\vec{p}\vec{r}_e},\label{reflsolHuyg=}
\end{equation}
so that $-v\Sigma_zS_\Lambda$ can be viewed as the $T$-matrix
of the edge.
When $S_\Lambda$ does not depend on the direction of~$\vec{p}$
(armchair edge), it is easy to write down the exact explicit
expression for the single-particle Green's function:
\begin{eqnarray}\nonumber
G(\vec{r},\vec{r}';\epsilon)&=&G_0(\vec{r}-\vec{r}',\epsilon)\nonumber\\
&&{}+ G_0(\vec{r}-\vec{r}'+2\vec{n}(\vec{n}\cdot\vec{r}'),\epsilon)\,
[\vec{n}\times\vec\Sigma]_zS_\Lambda.\nonumber\\ && \label{imagesource=}
\end{eqnarray}
The second term represents nothing but the contribution of a fictitious
image source of particles, appropriately rotated, and placed at the
point $\vec{r}'-2\vec{n}(\vec{n}\cdot\vec{r}')$ obtained
from~$\vec{r}'$ by the reflection with respect to the edge.
In the quasiclassical approximation (analogous to geometric optics),
Eq.~(\ref{imagesource=}) is also valid for a general edge at large
distances $r,r'\gg{v}/|\epsilon|$, provided that a position-dependent
$S_\Lambda$ is taken, determined by
\begin{equation}
\zeta=-\mathop{\mathrm{sgn}}\epsilon\,\frac{i(\vec{n}\cdot(\vec{r}+\vec{r}'))+
[\vec{n}\times(\vec{r}-\vec{r}')]_z}
{\sqrt{(\vec{n}\cdot(\vec{r}+\vec{r}'))^2
+[\vec{n}\times(\vec{r}-\vec{r}')]_z^2}}.
\end{equation}
Again, using the Huygens-Fresnel principle, we can rewrite
Eq.~(\ref{imagesource=}) identically as
\begin{eqnarray}\nonumber
&&G(\vec{r},\vec{r}';\epsilon)=G_0(\vec{r}-\vec{r}',\epsilon)\nonumber\\
&&\qquad{}-\int\limits_\mathrm{edge} d\vec{r}_e\,G_0(\vec{r}-\vec{r}_e,\epsilon)\,
v\Sigma_zS_\Lambda\,G_0(\vec{r}_e-\vec{r}',\epsilon).\nonumber\\
\label{HuygensG=}
\end{eqnarray}
\subsection{Atomically rough edge}
\label{sec:irregular}
As discussed in Sec.~\ref{sec:Qreflection}, when the edge is rough
on the atomic length scale, electron scattering is random both
in direction and between the two valleys.
This case will be of main interest for us, as it (i)~represents
the opposite limiting case to that of an ordered edge, and (ii)~can
be described by a simple model proposed below.
The main assumption is that each point of an atomically rough edge
acts as a point scatterer, independent of the other segments
of the edge.
Associating thus a $T$-matrix to each point of the edge, we write
the scattered wave function in the form
\begin{eqnarray}
&&\psi(\vec{r})=\psi_\vec{p}e^{i\vec{p}\vec{r}}\nonumber\\&&\qquad{}+
\int\limits_{\mathrm{edge}}d\vec{r}_e\,
G_0(\vec{r}-\vec{r}_e,\epsilon)\,
T(\vec{s},\vec{e}_\vec{p};\vec{r}_e)\,
\psi_\vec{p}e^{i\vec{p}\vec{r}_e},\label{wfTmatrix=}\\
&&\vec{s}=\frac{\vec{r}-\vec{r}_e}{|\vec{r}-\vec{r}_e|},\quad
\vec{e}_\vec{p}=\frac{\vec{p}}{|\vec{p}|},\nonumber
\end{eqnarray}
where the energy argument of the Green's function is $\epsilon=\pm{v}p$
for electrons and holes, respectively. The (one-dimensional)
integration is performed along the edge, which is assumed to be
a straight line determined, as in the previous subsection, by the
condition $(\vec{n}\cdot\vec{r}_e)=0$, where the unit vector
$\vec{n}$ is the normal to the edge. The unit vectors
$\vec{e}_\vec{p}$ and $\vec{s}$ indicate the incident and
scattering directions.
The $T$-matrix must satisfy
(i)~the particle conservation condition (unitarity), and
(ii)~the time reversal symmetry (reciprocity),
\begin{equation}
T(\vec{s},\vec{e}_\vec{p};\vec{r}_e)
=U_t\,T^T(-\vec{e}_\vec{p},-\vec{s};\vec{r}_e)\,U_t^\dagger.\label{Ttimerev=}
\end{equation}
Here $T^T$ stands for the $4\times{4}$ matrix transpose.
Unitarity and reciprocity are discussed in Appendix~\ref{app:Smatrix}
in the context of a general scattering theory, similar to
that for light scattering on a rough surface.\cite{Brown1984}
We propose the following form of the $T$-matrix:
\begin{subequations}
\begin{equation}\label{Tmatrix=}
T(\vec{s},\vec{e}_\vec{p};\vec{r}_e)=-\sqrt{\rho(\vec{s},\vec{e}_\vec{p})}\,
v\Sigma_zS_\Lambda(\vec{r}_e).
\end{equation}
The angular factor $\rho(\vec{s},\vec{e}_\vec{p})$ ensures the particle
conservation (unitarity of the edge scattering, see
Appendix~\ref{app:Smatrix} for details),
\begin{equation}\label{rho=}
\rho(\vec{s},\vec{e}_\vec{p})=
\frac{-2(\vec{n}\cdot\vec{e}_\vec{p})(\vec{n}\cdot\vec{s})}%
{1-(\vec{n}\cdot\vec{e}_\vec{p})(\vec{n}\cdot\vec{s})
-[\vec{n}\times\vec{e}_\vec{p}]_z[\vec{n}\times\vec{s}]_z}.
\end{equation}
If we introduce the angles of incidence and scattering by
$(\vec{n}\cdot\vec{e}_\vec{p})=-\cos\varphi_i$,
$(\vec{n}\cdot\vec{s})=\cos\varphi_s$,
$[\vec{n}\times\vec{e}_\vec{p}]_z=\sin\varphi_i$,
$[\vec{n}\times\vec{s}]_z=\sin\varphi_s$ (the specular
direction corresponding to $\varphi_s=\varphi_i$),
then
$\rho=2\cos\varphi_i\cos\varphi_s/[1+\cos(\varphi_i+\varphi_s)]$.
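A quick numerical sketch (not part of the derivation) confirms the equivalence of the vector and angular forms of~$\rho$, as well as the property $\rho=1$ in the specular direction, which underlies the unitarity of Eq.~(\ref{Tmatrix=}):

```python
import math

# Sketch: check the angular factor rho of Eq. (rho=) against its
# trigonometric form, and rho = 1 in the specular direction phi_s = phi_i.

def rho(phi_i, phi_s):
    # Vector form: n.e_p = -cos(phi_i), n.s = cos(phi_s),
    # [n x e_p]_z = sin(phi_i), [n x s]_z = sin(phi_s).
    n_ep, n_s = -math.cos(phi_i), math.cos(phi_s)
    num = -2 * n_ep * n_s
    den = 1 - n_ep * n_s - math.sin(phi_i) * math.sin(phi_s)
    return num / den

def rho_trig(phi_i, phi_s):
    # Angular form quoted in the text.
    return (2 * math.cos(phi_i) * math.cos(phi_s)
            / (1 + math.cos(phi_i + phi_s)))
```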
Note that since the structure of the wave functions in the
$\Sigma$-subspace is fixed by the direction of momentum, one may
suggest slightly different forms of Eq.~(\ref{rho=}) which would be
equivalent. For example, the $T$-matrix obtained from
Eq.~(\ref{Tmatrix=}) by replacing $\Sigma_z$ with
$i[\vec{n}\times\vec\Sigma]_z$ and by changing the sign of the third
term in the denominator of Eq.~(\ref{rho=}) would have the same matrix
elements between Dirac eigenstates of the same energy.
For a coordinate-independent $S_\Lambda$ the $\vec{r}_e$ integration
eliminates all directions $\vec{s}$ different from the specular one.
Since for the latter $\rho=1$, Eq.~(\ref{Tmatrix=}) reduces to
Eq.~(\ref{reflsolHuyg=}). In this case $S_\Lambda$ can be identified
with the scattering matrix. For short-range disorder such an
identification is not possible, since the scattering process is necessarily
non-local on a length scale of at least $\sim{1}/p$. The connection
between the $T$-matrix in Eq.~(\ref{wfTmatrix=}) and the scattering
matrix is more complicated, and is discussed in detail in
Appendix~\ref{app:Smatrix}. Here we only mention that it would make
absolutely no sense to require the unitarity of $S_\Lambda(\vec{r}_e)$
at each given point. Instead, we use the following form:
\begin{equation}
S_\Lambda(\vec{r}_e)=\varpi(\vec{r}_e)\sum_{i=x,y,z}m_i(\vec{r}_e)\,\Lambda_i,
\quad \sum_{i=x,y,z}m_i^2(\vec{r}_e)=1.\label{SLambdaCOE=}
\end{equation}
Here $\varpi(\vec{r}_e)$ is a complex gaussian random variable whose
real and imaginary parts are distributed independently and identically
(so its phase is uniformly distributed between 0 and $2\pi$). The
numbers $m_x,m_y,m_z$, which can be viewed as components of a unit
three-dimensional vector, must be real to ensure the time-reversal
symmetry. One may assume them to be constant or to take just a few
definite values, which would correspond to an edge composed of
segments of definite types (e.~g., zigzag or armchair) with lengths
$a\ll{d}_e\lesssim{1}/p$. For $d_e\sim{a}$, when the scattering between the valleys
is completely random, the vector $(m_x,m_y,m_z)$ can be taken uniformly
distributed over the unit sphere. We assume the matrices
$S_\Lambda(\vec{r}_e)$ to be uncorrelated at different points on the
edge, distant by more than~$d_e$, by writing
\begin{equation}
\overline{\varpi(\vec{r}_e)\,\varpi^*(\vec{r}_e')}=
\frac{\pi{v}}{|\epsilon|}\,\delta(\vec{r}_e-\vec{r}_e').
\label{COEaverage=}
\end{equation}
\end{subequations}
Here the overline denotes the ensemble averaging. However, we assume
that this product is self-averaging upon spatial integration, i.~e.,
that Eq.~(\ref{COEaverage=}) holds even in the absence of the ensemble
averaging when integrated over a sufficiently long segment of the edge
(namely, longer than~$d_e$). The prefactor $\pi{v}/|\epsilon|$ in front of the
$\delta$-function ensures the unitarity of scattering, see
Appendix~\ref{app:Smatrix}.
Eq.~(\ref{wfTmatrix=}) for the wave function yields an analogous
expression for the Green's function, valid sufficiently far from
the edge, $(\vec{n}\cdot\vec{r})\gg{v}/|\epsilon|$,
$(\vec{n}\cdot\vec{r}')\gg{v}/|\epsilon|$:
\begin{eqnarray}
&&G(\vec{r},\vec{r}';\epsilon)=G_0(\vec{r}-\vec{r}',\epsilon)+\nonumber\\
&&\qquad{}+\int\limits_{\mathrm{edge}}d\vec{r}_e\,
G_0(\vec{r}-\vec{r}_e,\epsilon)\,T(\vec{s},\vec{s}';\vec{r}_e)\,
G_0(\vec{r}_e-\vec{r}',\epsilon),
\label{gfTmatrix=}\\
&&\vec{s}=\frac{\vec{r}-\vec{r}_e}{|\vec{r}-\vec{r}_e|},\quad
\vec{s}'=-\frac{\vec{r}'-\vec{r}_e}{|\vec{r}'-\vec{r}_e|}.
\end{eqnarray}
\section{Phonons and Raman scattering}\label{sec:Phonons}
\begin{figure}
\includegraphics[width=8cm]{phonons}
\caption{\label{fig:phonons} Phonon modes responsible for the Raman
$D$~peak.}
\end{figure}
We restrict our attention to scalar phonons with wave vectors close to
$K$ and $K'$ points -- those responsible for the Raman $D$ peak. The
two real linear combinations of the modes at $K$ and $K'$ points
transform according to $A_1$ and $B_1$ representations of~$C_{6v}$ and
are shown in Fig.~\ref{fig:phonons}. We take the magnitude of the
carbon atom displacement as the normal coordinate for each mode,
denoted by $u_a$ and~$u_b$, respectively. Upon quantization of the
phonon field, the displacement operators $\hat{u}_a,\hat{u}_b$ and
the lattice hamiltonian $\hat{H}_{\mathrm{ph}}$ are expressed in
terms of the phonon creation and annihilation operators
$\hat{b}^\dagger_{\vec{q}\mu},\hat{b}_{\vec{q}\mu}$,
$\mu=a,b$, as
\begin{subequations}\begin{eqnarray}
&&\hat{u}_\mu(\vec{r})=L_xL_y\int\frac{d^2\vec{q}}{(2\pi)^2}
\frac{\hat{b}_{\vec{q}\mu}e^{i\vec{q}\vec{r}}
+\hat{b}_{\vec{q}\mu}^\dagger e^{-i\vec{q}\vec{r}}}
{\sqrt{2NM\omega_\mathrm{ph}}},\\
&&\hat{H}_{\mathrm{ph}}=L_xL_y\int\frac{d^2\vec{q}}{(2\pi)^2}
\sum_{\mu=a,b}\omega_\mathrm{ph}
\left(\hat{b}_{\vec{q}\mu}^\dagger\hat{b}_{\vec{q}\mu}
+\frac{1}2\right).
\label{Hph=}
\end{eqnarray}\end{subequations}
The crystal is assumed to have the area $L_xL_y$, and to contain
$N$~carbon atoms of mass~$M$.
The area per carbon atom is $L_xL_y/N=\sqrt{27}a^2/4$.
The phonon frequency $\omega_\mathrm{ph}\approx{1350}\:\mbox{cm}^{-1}$, appearing in
Eq.~(\ref{Hph=}), is assumed to be independent of the phonon momentum.
To check the validity of this assumption one should compare the
corresponding energy scale (the spread of the phonon momenta $\Delta{q}$
multiplied by the phonon group velocity~$v_\mathrm{ph}$) with the
electronic energy uncertainty. The latter is given by $\omega_\mathrm{ph}$ itself.
Recalling that phonon emission corresponds to the backscattering of
the electron (hole), $\Delta{q}$ is given by the uncertainty of the
electronic momentum, $\Delta{q}\sim\omega_\mathrm{ph}/v$. Since
$v_\mathrm{ph}/v\approx{7}\cdot{10}^{-3}\ll{1}$, the phonon dispersion
can be safely neglected.
If we neglect the phonon dispersion, the normal modes and the phonon
hamiltonian can be rewritten in the coordinate representation by
introducing the creation and annihilation operators for a phonon
in a given point of the sample:
\begin{subequations}\begin{eqnarray}
&&\hat\Phi_\mu(\vec{r})=
\sum_\vec{q}\frac{\hat{b}_{\vec{q},\mu}e^{i\vec{q}\vec{r}}}{\sqrt{L_xL_y}},\\
&&\hat{u}_\mu(\vec{r})= \sqrt{\frac{L_xL_y}{2NM\omega_\mathrm{ph}}}
\left[\hat\Phi_\mu(\vec{r})+\hat\Phi_\mu^\dagger(\vec{r})\right],\\
&&\hat{H}_{\mathrm{ph}}= \sum_\mu\omega_\mathrm{ph}\int{d}^2\vec{r}
\left[\hat\Phi_\mu(\vec{r})\hat\Phi_\mu^\dagger(\vec{r})
+\frac{N}{2L_xL_y}\right].
\end{eqnarray}\end{subequations}
Then it is convenient to define the phonon Green's function as
the time-ordered average of the $\Phi$-operators,
\begin{eqnarray}
&&D_{\mu}^{(+)}(\vec{r},\omega)=-i\int\langle\mathrm{T}
\hat\Phi_\mu(\vec{r},t)\,\hat\Phi^\dagger_{\mu}(\vec{0},0)\rangle
e^{i\omega{t}}\,dt=\nonumber\\
&&\qquad=\frac{\delta(\vec{r})} {\omega-\omega_\mathrm{ph}+io}.
\end{eqnarray}
By symmetry, in the electron-phonon interaction hamiltonian the
normal mode displacements $u_\mu$ couple to the corresponding
valley-off-diagonal $4\times{4}$ matrices from
Table~\ref{tab:matrices}:\cite{megapaper}
\begin{equation}\label{Heph=}
\hat{H}_\mathrm{int}=F_K\int{d}^2\vec{r}\,\hat\psi^\dagger(\vec{r})
\left[\hat{u}_a(\vec{r})\Lambda_x\Sigma_z
+\hat{u}_b(\vec{r})\Lambda_y\Sigma_z\right] \hat\psi(\vec{r}).
\end{equation}
Here $F_K$ is the coupling constant having the dimensionality of
a force.
It is more convenient to use the dimensionless coupling constant
\begin{equation}\label{lambdaphonon=}
\lambda_K=
\frac{F_K^2}{M\omega_\mathrm{ph} v^2}\frac{\sqrt{27}a^2}4.
\end{equation}
The value of~$\lambda_K$ was discussed in Sec.~\ref{sec:qualpos}.
The hamiltonian describing interaction of electrons with light is
obtained from the Dirac hamiltonian~(\ref{Hel=}) by replacement
$\vec\nabla\to\vec\nabla-i(e/c)\hat{\vec{A}}$, where the vector
potential~$\hat{\vec{A}}$ is expressed in terms of creation and
annihilation operators
$\hat{a}^\dagger_{\vec{Q},\ell},\hat{a}_{\vec{Q},\ell}$ of
three-dimensional photons in the quantization volume $V=L_xL_yL_z$,
labeled by the three-dimensional wave vector~$\vec{Q}$ and two
transverse polarizations
$\ell=1,2$ with unit vectors $\vec{e}_{\vec{Q},\ell}$:
\begin{equation}
\hat{\vec{A}}(\vec{r})= \sum_{\vec{Q},\ell}\sqrt{\frac{2\pi{c}}{VQ}}
\left(\vec{e}_{\vec{Q},\ell}\hat{a}_{\vec{Q},\ell}e^{i\vec{Q}\vec{r}}
+\mathrm{h.c.} \right).\label{Afree=}
\end{equation}
The derivation of the formal expression for the Raman scattering probability
is fully analogous to that given in Ref.~\onlinecite{megapaper}. The only
difference is that the calculation is done in the coordinate representation.
As a result, we obtain the following expression for the probability for an
incident photon with wave vector~$\vec{Q}_{in}$ and polarization~$\vec{e}_{in}$
to be scattered with emission of a single phonon within an elementary
area~$d^2\vec{R}$ around a given point~$\vec{R}$:
\begin{subequations}
\begin{eqnarray}
\frac{dI_D}{d^2\vec{R}}&=&\frac{1}{cL_xL_y}\sum_{\vec{e}_{out}}
\int\frac{d^3\vec{Q}_{out}}{(2\pi)^3}\,
2\pi\delta(c|\vec{Q}_{out}|-\omega_{out})\nonumber\\
&&{}\times\sum_{\mu=a,b}|2\mathcal{M}_\mu|^2,\label{dId2Rgeneral=}\\
\mathcal{M}_\mu&=&\sqrt{\frac{\lambda_K}2}
\frac{2\pi{e}^2v^3}{\sqrt{\omega_{in}\omega_{out}}}
\int\frac{d\epsilon}{2\pi}\,{d}^2\vec{r}_{in}\,{d}^2\vec{r}_{out}\nonumber\\
&&{}\times e^{i\vec{Q}_{in}\vec{r}_{in}-i\vec{Q}_{out}\vec{r}_{out}}
\mathop{\mathrm{Tr}}\left\{\mathcal{D}_\mu+\bar{\mathcal{D}}_\mu\right\},
\label{M1ph=}\\
\mathcal{D}_{a,b}&=&G(\vec{r}_{out},\vec{r}_{in};\epsilon)
(\vec{e}_{in}\cdot\vec\Sigma)\,
G(\vec{r}_{in},\vec{R};\epsilon-\omega_{in})\,\nonumber\\
&&{}\times\Lambda_{x,y}\Sigma_z\,
G(\vec{R},\vec{r}_{out};\epsilon-\omega_{out})(\vec{e}_{out}^*\cdot\vec\Sigma),
\label{D1tr=}\\
\bar{\mathcal{D}}_{a,b}&=&G(\vec{r}_{in},\vec{r}_{out};\epsilon)\,
(\vec{e}_{out}^*\cdot\vec\Sigma)\,
G(\vec{r}_{out},\vec{R};\epsilon+\omega_{out})\,\nonumber\\
&&{}\times\Lambda_{x,y}\Sigma_z\,
G(\vec{R},\vec{r}_{in};\epsilon+\omega_{in})(\vec{e}_{in}\cdot\vec\Sigma).
\label{D2tr=}
\end{eqnarray}\end{subequations}
Here $G(\vec{r},\vec{r}';\epsilon)$ is the electronic Green's function
corresponding to the full single-particle part of the hamiltonian
(i.~e., including not only the Dirac term, but the edge as well).
It can be represented in terms of the exact single-electron
eigenfunctions $\psi_s(\vec{r})$ and energies $\epsilon_s$ as a sum
over the eigenstates~$s$:
\begin{equation}
G(\vec{r},\vec{r}';\epsilon)=\sum_s\frac{\psi_s(\vec{r})\,\psi^\dagger_s(\vec{r}')}
{\epsilon-\epsilon_s+i\gamma_\epsilon\mathop{\mathrm{sgn}}\epsilon}.
\end{equation}
Using this representation, integrating over the energy and the
coordinates one obtains Eq.~(\ref{Ramanmatrixelement=}).
The summation in Eq.~(\ref{dId2Rgeneral=}) is performed over the
wave vectors $\vec{Q}_{out}$ and the polarizations $\vec{e}_{out}$
of the scattered photon. When integrated over the area of the crystal,
Eq.~(\ref{dId2Rgeneral=}) gives the absolute dimensionless probability
of the one-phonon Raman scattering for a single incident photon.
The matrix element $\mathcal{M}_\mu$ can always be represented in the form
\begin{equation}
\mathcal{M}_\mu=\mathcal{M}_\mu^x(e_{out}^x)^*
+\mathcal{M}_\mu^y(e_{out}^y)^*.
\end{equation}
If one collects all the light scattered in the full solid angle $4\pi$,
without analyzing the polarization, the integration over the angles of
$\vec{Q}_{out}$ is straightforward. It gives
\begin{subequations}\begin{equation}
\frac{dI_D}{d^2\vec{R}}=\frac{8}{3\pi}\,
\frac{\omega_{out}^2}{c^4L_xL_y}\, \sum_{\mu=a,b}
\left(|\mathcal{M}_\mu^x|^2+|\mathcal{M}_\mu^y|^2\right).
\label{dId2total=}
\end{equation}
The dependence on the polarization of the scattered light is obtained
most easily when the light is collected in a small solid
angle~$o_{out}\ll{4\pi}$ around the normal (the case of an arbitrary
solid angle was considered in Ref.~\onlinecite{megapaper}). If the
analyzer is oriented at an angle $\varphi_{out}$ to the $x$~axis, the
polarization-dependent intensity is given by
\begin{equation}
\frac{dI_D}{d^2\vec{R}}=\frac{o_{out}}{\pi^2}
\frac{\omega_{out}^2}{c^4L_xL_y} \sum_{\mu=a,b}
\left|\mathcal{M}_\mu^x\cos\varphi_{out}+
\mathcal{M}_\mu^y\sin\varphi_{out}\right|^2.
\label{dId2phiout=}
\end{equation}\end{subequations}
Eq.~(\ref{Afree=}) corresponds to the free-space quantization of the
electromagnetic field whose normal modes are plane waves. In the case
of a spatially resolved experiment as in Ref.~\onlinecite{Novotny},
Eqs. (\ref{dId2Rgeneral=}),~(\ref{M1ph=}) must be modified in order
to account for the spatial profile of the electric field, induced by the
focusing lens.
Namely, the electric field, corresponding to a single photon with a
wave vector~$\vec{Q}_{in}$, incident from vacuum, should be replaced
by the field~$\mathcal{E}_{in}(\vec{r}_{in})$ of the focused laser beam,
\begin{equation}
i\sqrt{\frac{2\pi\omega_{in}}{L_xL_yL_z}}\,\vec{e}_{in}e^{i\vec{Q}_{in}\vec{r}_{in}}
\to\vec{e}_{in}\mathcal{E}_{in}(\vec{r}_{in}).
\end{equation}
As long as the distance between the lens and the sample is much larger
than the light wavelength, the summation over the continuum of the final
states of the scattered photon can still be performed using the vacuum
mode structure.
Finally, dividing the resulting probability by the photon attempt
period $L_z/c$, we obtain the number of photons emitted per unit time,
$dI_D/dt$ (which is more appropriate when the incident light is
characterized by its electric field strength). As a result,
Eqs.~(\ref{dId2Rgeneral=}),~(\ref{M1ph=}) are modified as follows:
\begin{subequations}
\begin{eqnarray}
\frac{dI_D}{d^2\vec{R}\,dt}&=&\frac{1}{2\pi\omega_{in}}\sum_{\vec{e}_{out}}
\int\frac{d^3\vec{Q}_{out}}{(2\pi)^3}\,
2\pi\delta(c|\vec{Q}_{out}|-\omega_{out})\nonumber\\
&&{}\times\sum_{\mu=a,b}|2\mathcal{M}_\mu|^2,\label{dId2Rdtgeneral=}\\
\mathcal{M}_\mu&=&\sqrt{\frac{\lambda_K}2}
\frac{2\pi{e}^2v^3}{\sqrt{\omega_{in}\omega_{out}}}
\int\frac{d\epsilon}{2\pi}\,{d}^2\vec{r}_{in}\,{d}^2\vec{r}_{out}\nonumber\\
&&{}\times \mathcal{E}_{in}(\vec{r}_{in})\,e^{-i\vec{Q}_{out}\vec{r}_{out}}
\mathop{\mathrm{Tr}}\left\{\mathcal{D}_\mu+\bar{\mathcal{D}}_\mu\right\}.
\label{M1phE=}
\end{eqnarray}\end{subequations}
\section{Raman scattering on a translationally invariant edge}\label{sec:Regular}
\subsection{General considerations}\label{sec:reggeneral}
For simplicity we consider an armchair edge characterized by
$\vec{n}=\vec{e}_x$, $M=-\Sigma_y\Lambda_y$, as discussed in the previous
section. Due to the translational invariance in the $y$~direction, it is
sufficient to calculate the probability of phonon emission at the point
$\vec{R}=X\vec{e}_x$. As we will see below, the main contribution to the
signal comes from $X\gg\lambdabar_\ep=2v/\omega_{in}$. In this regime the motion
of the photoexcited electron-hole pair can be described quasiclassically,
and the asymptotic large-distance expansion for the Green's functions,
Eq.~(\ref{GreenfLarger=}), can be used.
Namely, the electron and the hole can be viewed as wave packets of the size
$\sim\sqrt{\lambdabar_\ep{X}}$, propagating across the crystal along classical
trajectories. Initially, they are created in the same region of space
of the size $\sim\sqrt{\lambdabar_\ep{X}}$ with opposite momenta and opposite
velocities. As they undergo scattering processes (emission of a phonon or
reflection from the edge), they change the directions of their momenta.
In order to recombine radiatively and contribute to the Raman signal,
the electron and the hole should meet again within a spatial region of the
size $\sim\sqrt{\lambdabar_\ep{X}}$. Clearly, these conditions can be fulfilled if
all momenta are almost perpendicular to the edge. Small deviations by an
angle $\sim\sqrt{\lambdabar_\ep/X}$ are allowed by quantum uncertainty.
These considerations are illustrated by Fig.~\ref{fig:traject}.
Emission of one of the phonons shown in Fig.~\ref{fig:phonons} corresponds
to intervalley scattering of the photoexcited electron or the hole, as
represented formally by one of the valley-off-diagonal matrices $\Lambda_x$
or $\Lambda_y$ in the matrix element, see Eqs.~(\ref{M1ph=})--(\ref{D2tr=}).
For the process to be allowed, another act of intervalley scattering is
required, so one of the three electronic Green's functions in
Eqs.~(\ref{D1tr=}), (\ref{D2tr=}) must contain another $\Lambda_x$ or
$\Lambda_y$ matrix from the decomposition~(\ref{imagesource=}). Otherwise,
the trace in Eq.~(\ref{M1ph=}) vanishes. Thus, for an armchair edge with
$M=-\Lambda_y\Sigma_y$ only the $B_1$ phonon can be emitted, so
$\mathcal{M}_a=0$, $\mathcal{M}_b=\mathcal{M}$.
Trajectories corresponding to the decomposition of each of the three
Green's functions in Eq.~(\ref{D1tr=})
are shown in Fig.~\ref{fig:traject} (a), (b), (c), respectively. According
to the quasiclassical picture, the electron and the hole have to travel the
same distance between creation and annihilation, as their velocities are
equal. Then the process in Fig.~\ref{fig:traject}~(a) has more phase space
satisfying this restriction, and this process gives the main contribution to
the Raman matrix element. This will also be shown explicitly below.
The general polarization structure of the matrix element, compatible with
the reflection symmetry $y\to{-}y$ possessed by the armchair edge, is
\begin{eqnarray}
\mathcal{M}&=&\mathcal{M}_\|e_{in}^y(e_{out}^y)^* +\mathcal{M}_\perp
e_{in}^x(e_{out}^x)^*\nonumber\\
&=&\mathcal{M}_\|\sin\varphi_{in}\sin\varphi_{out}
+\mathcal{M}_\perp\cos\varphi_{in}\cos\varphi_{out}\label{Mparperp=}
\end{eqnarray}
(we introduced $\varphi_{in},\varphi_{out}$, the polar angles of the
polarization vectors). Since the
interband transition dipole moment is perpendicular to the electronic
momentum, and for a regular edge the momentum is (almost)
perpendicular to the edge, $\mathcal{M}_\perp$ is entirely due to
quantum diffraction. Thus, $\mathcal{M}_\perp$ is smaller than
$\mathcal{M}_\|$ by the parameter $\lambdabar_\ep/X\ll{1}$ (it is this parameter
that governs the quantum diffraction, as discussed above).
Nevertheless, in a polarization-resolved experiment the two intensities
can be measured independently, so below we calculate both
$\mathcal{M}_\|$ and $\mathcal{M}_\perp$, each to the leading order in
$\lambdabar_\ep/X$.
\subsection{Spatially integrated intensity and polarization dependence}
\label{sec:integrated}
In this subsection the electric field of the excitation wave is assumed to be
spatially homogeneous. As displacements along the edge are expected to be
parametrically smaller than those away from the edge, $|y|\sim\sqrt{x\lambdabar_\ep}$,
we use the paraxial approximation, $|\vec{r}|\approx|x|+y^2/(2|x|)$.
The Green's function can be approximated as
\begin{eqnarray}
G_0(\vec{r},\epsilon)&\approx&-\sqrt{\frac{i|\epsilon|}{2\pi{v}^3|x|}}\,
e^{(i|\epsilon|-\gamma)|x|/v+i|\epsilon|y^2/(2v|x|)}\nonumber\\
&&{}\times\left[\frac{\mathop{\mathrm{sgn}}\epsilon+\Sigma_x\mathop{\mathrm{sgn}}{x}}{2}\right.\nonumber\\
&&\;\;{}+\left.\frac{y\Sigma_y}{2|x|}
+\frac{\mathop{\mathrm{sgn}}\epsilon-\Sigma_x\mathop{\mathrm{sgn}}{x}}{8}
\left(\frac{y^2}{x^2}-\frac{iv}{|\epsilon{x}|}\right)\right],
\nonumber\\ && \label{G0parax=}
\end{eqnarray}
where the coefficient of each matrix is kept to the leading order.
Taking the first term in the square brackets in Eq.~(\ref{G0parax=}) and
evaluating the matrix traces, we obtain the following expression for
$\mathcal{M}_\|$, corresponding to the process in Fig.~\ref{fig:traject}~(a):
\begin{subequations}\begin{eqnarray}
\mathcal{M}_\|&=&
\sqrt{\frac{i\lambda_K}{\pi^3v}}\,\frac{e^2}{\omega_{in}}\nonumber\\
&&{}\times\int\limits_{-\infty}^\infty\frac{d\epsilon}v
\int\limits_0^Xdx_{in}dx_{out}
\int\limits_{-\infty}^\infty{d}y_{in}dy_{out}\,e^{i\Phi-2\gamma{X}/v}\nonumber\\
&&{}\times\sqrt{\frac{|(\omega_{in}-\epsilon)(\omega_{out}-\epsilon)\epsilon|}
{(X-x_{in})(X-x_{out})(x_{in}+x_{out})}},\label{Minitial=}\\
\Phi&=&\frac{|\epsilon|({x}_{in}+x_{out})}v
+\frac{|\epsilon|(y_{in}-y_{out})^2}{2v(x_{in}+x_{out})}
+\nonumber\\
&&{}+\frac{|\omega_{out}-\epsilon|(X-x_{out})}v
+\frac{|\omega_{out}-\epsilon|y_{out}^2}{2v(X-{x}_{out})}+\nonumber\\
&&{}+\frac{|\omega_{in}-\epsilon|(X-x_{in})}v+\frac{|\omega_{in}-\epsilon|y_{in}^2}{2v(X-{x}_{in})}.\label{Phi=}
\end{eqnarray}\end{subequations}
For the calculation of $\mathcal{M}_\perp$ we need the rest of the
terms in the square brackets in Eq.~(\ref{G0parax=}). As a result,
$\mathcal{M}_\perp$ is given
by an analogous integral, but with an extra factor in the integrand,
\[\begin{split}
&\left[\frac{1}4
\left(\frac{y_{out}-y_{in}}{x_{in}+x_{out}}+\frac{y_{in}}{X-x_{in}}\right)
\left(\frac{y_{in}-y_{out}}{x_{in}+x_{out}}+\frac{y_{out}}{X-x_{out}}\right)\right.\\
&+\left.\frac{iv}{4|\epsilon|(x_{in}+x_{out})}\right].
\end{split}\]
The details of integration are given in Appendix~\ref{app:integrals}.
First, we integrate over $y_{in}$ and $y_{out}$.
The subsequent integration over~$\epsilon$ fixes
$x_{in}/v+x_{out}/v\approx(X-x_{in})/v+(X-x_{out})/v$ [the difference
is allowed to be $\sim\sqrt{X/(v\omega_{in})}\ll{X}/v$], which has the
meaning of the time spent by the electron and the hole in traveling
from the creation point~$x_{in}$ to the annihilation point~$x_{out}$.
At the same time, $x$-integration fixes $\epsilon\approx\omega_{in}/2$ with
the precision $\sim\sqrt{\omega_{in}v/X}\ll\omega_{in}$. Performing all
the integrations, we obtain the matrix element,
\begin{subequations}\begin{eqnarray}
\mathcal{M}_\|&=&
e^2\sqrt{\frac{\pi\lambda_K}{i\omega_{in}X/v}}\,
\frac{\sin[\omega_\mathrm{ph}{X}/(2v)]}{\omega_\mathrm{ph}/(2v)}
\times\nonumber\\&&{}\times
{e}^{i(\omega_{in}+\omega_{out})X/(2v)-2\gamma{X}/v},
\label{Mfinal=}\\
\mathcal{M}_\perp&=&\frac{i\mathcal{M}_\|}{\omega_{in}X/v}.
\label{Mperpfinal=}
\end{eqnarray}\end{subequations}
According to Eq.~(\ref{dId2total=}), the integrated intensity into the
full solid angle $4\pi$, summed over two polarizations of the emitted
photon, is given by
\begin{subequations}\begin{eqnarray}
\frac{d^2I_D}{d^2\vec{R}}&=&
\frac{8\lambda_K}{3}\,\frac{(e^2/c)^2}{{L}_xL_y}\,
\frac{\omega_{in}^2}{c^2}\,\frac{\sin^2[\omega_\mathrm{ph}{X}/(2v)]}{[\omega_\mathrm{ph}/(2v)]^2}\,
\frac{e^{-4\gamma{X}/v}}{\omega_{in}X/v}\times\nonumber\\&&{}\times
\left[\sin^2\varphi_{in}+\frac{\cos^2\varphi_{in}}{(\omega_{in}X/v)^2}\right],
\label{IDregulara=}\\
I_D&=&\frac{8\lambda_K}{3}\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{v}{\omega_{in}L_x}\,\frac{\omega_{in}^2}{\omega_\mathrm{ph}^2}
\times\nonumber\\&&{}\times
\left[\sin^2\varphi_{in}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\right.+\nonumber\\
&&{}\qquad+\left.\cos^2\varphi_{in}\,\frac{\omega_\mathrm{ph}^2}{\omega_{in}^2}
\ln\frac{\omega_{in}}\omega_\mathrm{ph}\right].\label{IDregularb=}
\end{eqnarray}
The second term in the square brackets is written with logarithmic
precision, since the short-distance cutoff $\sim{v}/\omega_{in}$ is
known only up to a factor of the order of 1. If we use
Eq.~(\ref{dId2phiout=}) for the intensity in a solid angle $o_{out}$ in
the presence of an analyzer, we obtain
\begin{eqnarray}
I_D&=&4\lambda_K\,\frac{o_{out}}{4\pi}
\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{v}{\omega_{in}L_x}\,\frac{\omega_{in}^2}{\omega_\mathrm{ph}^2}
\times\nonumber\\&&{}\times
\left[\sin^2\varphi_
{in}\sin^2\varphi_{out}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\right.+\nonumber\\
&&{}\qquad+\left.\cos^2\varphi_{in}\cos^2\varphi_{out}\,\frac{\omega_\mathrm{ph}^2}{\omega_{in}^2}
\ln\frac{\omega_{in}}\omega_\mathrm{ph}\right].
\label{IDregularc=}
\end{eqnarray}
\end{subequations}
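As a rough illustration of the smallness of the perpendicular response in
Eq.~(\ref{IDregularb=}), the sketch below evaluates the ratio of the
$\cos^2\varphi_{in}$ to the $\sin^2\varphi_{in}$ coefficient. The phonon
energy and the broadening $2\gamma$ are illustrative assumptions of this
sketch, not values given in the text; $\omega_{in}=2$~eV is the value used
later in the text.

```python
# Ratio of the cos^2 (perpendicular) to the sin^2 (parallel) coefficient
# in Eq. (IDregularb).  w_ph and two_gamma are assumed illustrative values.
import math

w_in = 2.0        # eV, excitation energy (value used in the text)
w_ph = 0.17       # eV, D-phonon energy (assumption)
two_gamma = 0.02  # eV, electron inelastic broadening (assumption)

par = math.log((w_ph**2 + (2*two_gamma)**2) / (2*two_gamma)**2)
perp = (w_ph/w_in)**2 * math.log(w_in/w_ph)
ratio = perp/par
assert ratio < 0.05  # the perpendicular response is weak
```

The smallness of this ratio is the quantitative content of the statement
that $\mathcal{M}_\perp$ is suppressed by quantum diffraction.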
Let us estimate the contribution to the matrix element~$\mathcal{M}'$
corresponding to Fig.~\ref{fig:traject}~(b), i.~e., when
decomposition~(\ref{imagesource=}) is applied to
$G(\vec{R},\vec{r}_{out};\epsilon-\omega_{out})$.
Eq.~(\ref{Mparperp=}) remains valid, as it is based on symmetry only.
The expression for~$\mathcal{M}_\|'$ looks analogous to
Eqs.~(\ref{Minitial=}), (\ref{Phi=}):
\begin{subequations}\begin{eqnarray}
\mathcal{M}_\|^\prime&=&\frac{e^2}{\omega_{in}}\,
\sqrt{\frac{i\lambda_K}{\pi^3v}}\int\limits_{-\infty}^\infty\frac{d\epsilon}v
\times\nonumber\\
&&{}\times
\int\limits_0^Xdx_{in}\int\limits_0^{x_{in}}dx_{out}
\int\limits_{-\infty}^\infty{d}y_{in}dy_{out}\,e^{i\Phi'-2\gamma{X}/v}\times\nonumber\\
&&{}\times\sqrt{\frac{|(\omega_{in}-\epsilon)(\omega_{out}-\epsilon)\epsilon|}
{(X-x_{in})(X+x_{out})(x_{in}-x_{out})}},\label{Mprimea=}\\
\Phi'&=&\frac{|\epsilon|({x}_{in}-x_{out})}v
+\frac{|\epsilon|(y_{in}-y_{out})^2}{2v(x_{in}-x_{out})}
+\nonumber\\
&&{}+\frac{|\omega_{out}-\epsilon|(X+x_{out})}v
+\frac{|\omega_{out}-\epsilon|y_{out}^2}{2v(X+{x}_{out})}+\nonumber\\
&&{}+\frac{|\omega_{in}-\epsilon|(X-x_{in})}v+\frac{|\omega_{in}-\epsilon|y_{in}^2}{2v(X-{x}_{in})}.
\label{Mprimeb=}
\end{eqnarray}\end{subequations}
However, here integration over~$\epsilon$ fixes
$x_{in}-x_{out}\approx{x}_{out}+X+(X-x_{in})$, which is compatible with
the limits of the spatial integration only when
$x_{out}\sim{v}/\omega_{in}$, $X-x_{in}\sim{v}/\omega_{in}$, as shown
in Fig.~\ref{fig:traject}~(b). This restriction results in the suppression
$\mathcal{M}_\|'/\mathcal{M}_\|\sim\omega_\mathrm{ph}/\omega_{in}\ll{1}$.
If $x_{out},|X-x_{in}|\sim{v}/\omega_{in}$, the asymptotic form,
Eq.~(\ref{G0parax=}), cannot be used for
$G(\vec{r}_{in},\vec{R};\omega_{in}-\epsilon)$
[thus, Eqs.~(\ref{Mprimea=}),~(\ref{Mprimeb=}) represent only an
estimate of $\mathcal{M}_\|'$ by the order of magnitude], but it can
be used for $G(\vec{R},\vec{r}_{out};\omega_{out}-\epsilon)$,
and $G(\vec{r}_{out},\vec{r}_{in};\epsilon)$. This fact results in an
additional smallness for the matrix element $\mathcal{M}_\perp'$:
assuming the typical value $X\sim{v}/\omega_\mathrm{ph}$, we can write
$|\mathcal{M}_\perp'|\sim|\mathcal{M}_\|'|(\omega_\mathrm{ph}/\omega_{in})\sim%
|\mathcal{M}_\perp|(\omega_\mathrm{ph}/\omega_{in})$.
Thus, $\mathcal{M}_\perp'$ produces only a small correction to the
$\cos^2\varphi_{in}$ term in Eqs.~(\ref{IDregulara=}),~(\ref{IDregularb=}),
and to the $\cos^2\varphi_{in}\cos^2\varphi_{out}$ term in
Eq.~(\ref{IDregularc=}).
Finally, the intensity in Eq.~(\ref{IDregularc=}) has an
interference contribution
$\propto\Re(\mathcal{M}_\|^*\mathcal{M}_\perp')$, which
produces a term
$\propto\sin\varphi_{in}\cos\varphi_{in}\sin\varphi_{out}\cos\varphi_{out}$.
Note that the interference term $\Re(\mathcal{M}_\|^*\mathcal{M}_\perp)$
is absent because of the factor~$i$ in Eq.~(\ref{Mperpfinal=}). We have not
been able to calculate $\mathcal{M}_\perp'$ explicitly or to establish a
phase relationship between $\mathcal{M}_\|$ and $\mathcal{M}_\perp'$ in
any other way.
However, it is hard to imagine that the interference of two
strongly oscillating amplitudes, corresponding to two different
processes, would survive the integration over~$X$.
\subsection{Excitation position dependence}
\label{sec:regSpatial}
This subsection aims at describing a spatially resolved experiment like
that of Ref.~\onlinecite{Novotny} and clarifying the role of different
length scales in the dependence of the Raman intensity on the position
of the excitation spot.
Consequently, we use Eqs.~(\ref{dId2Rdtgeneral=}),~(\ref{M1phE=}), and
repeat the calculation of Sec.~\ref{sec:integrated} with an arbitrary
dependence of $\mathcal{E}_{in}(\vec{r})$, smooth on the length scale
$v/\omega_{in}$ (we assume detection without analyzer and sum over the
polarizations of the scattered photon). The result is
\begin{subequations}\begin{eqnarray}
&&\frac{dI_D}{dt}=
\frac{4\lambda_K}{3\pi}\left(\frac{e^2}c\right)^2\frac{v}{c}\,\sin^2\varphi_{in}
\int\limits_{-\infty}^\infty{d}y\,
\mathcal{I}\!\left(\frac{v}{\omega_\mathrm{ph}},\frac{v}{2\gamma}\right),
\label{dIDdtreg=}\\
&&\mathcal{I}(\ell_\mathrm{ph},\ell_\gamma)=\int\limits_0^\infty
\mathcal{E}_{in}(x,y)\,\mathcal{E}_{in}^*(x',y)\,\mathcal{K}(x,x')\,
dx\,dx',\label{Inonlocal=}\\
&&\mathcal{K}(x,x')=-e^{-i(x-x')/\ell_\mathrm{ph}}\mathop{\mathrm{Ei}}(-2\max\{x,x'\}/\ell_\gamma),
\label{spatialkernel=}
\end{eqnarray}
where the exponential integral $\mathop{\mathrm{Ei}}(z)$ is defined as
\begin{equation}
-\mathop{\mathrm{Ei}}(-z)=\int\limits_z^\infty\frac{e^{-t}\,dt}t.
\end{equation}\end{subequations}
Let us assume the excitation intensity to have the form
$|\mathcal{E}_{in}(x)|^2=w(x-x_0)$, where $w(x)$ is some smooth
bell-shaped function centered at $x=0$, and $x_0$~is the position of
the focus, which serves as the experimental control parameter.
In the following we also assume that
the phase of $\mathcal{E}_{in}(x)$ does not depend on~$x$; then
$\mathcal{E}_{in}(x)$ can be taken real without loss of generality.
The integral in Eq.~(\ref{Inonlocal=}) is
determined by three length scales. The width~$L$ of the excitation
profile $w(x)$ is assumed to be much longer than $\ell_\mathrm{ph}=v/\omega_\mathrm{ph}$
and the electron inelastic scattering length $\ell_\gamma=v/(2\gamma)$:
$\ell_\mathrm{ph},\ell_\gamma\ll{L}$. So far, no assumption has been made
in this section about the relative magnitude of
$\ell_\mathrm{ph}$~and~$\ell_\gamma$. However, it is reasonable to assume
$\ell_\mathrm{ph}\lesssim\ell_\gamma$;
also, the final expressions become more compact in this limit.
In the zeroth approximation in $1/L$ one can assume
$w(x)=\mathrm{const}$ in the effective region of integration in
Eq.~(\ref{Inonlocal=}), i.~e., replace the kernel by a
$\delta$-function. This gives the result of Sec.~\ref{sec:integrated},
\begin{subequations}\begin{eqnarray}
&&\mathcal{I}_{x_0}= l_0^2w(-x_0)+O(\ell/L),\\
&&l_0^2=\int\limits_0^\infty\mathcal{K}(x,x')\,dx\,dx'
=\ell_\mathrm{ph}^2\ln\frac{\ell_\gamma^2+4\ell_\mathrm{ph}^2}{4\ell_\mathrm{ph}^2}.
\end{eqnarray}\end{subequations}
The length scale $l_0$, appearing here, may be viewed as the effective
range of integration in Eq.~(\ref{Inonlocal=}), which determines the
magnitude of the signal. As we see, this length is mainly determined by
$\ell_\mathrm{ph}$ and is only logarithmically sensitive to the electron
inelastic scattering.
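The closed form for $l_0^2$ can be cross-checked numerically. Carrying out
the integration over the symmetric domain $x<x'$, $x>x'$ first (a step not
written out in the text) reduces the double integral of the
kernel~(\ref{spatialkernel=}) to a single integral,
$l_0^2=2\ell_\mathrm{ph}\int_0^\infty[-\mathop{\mathrm{Ei}}(-2x/\ell_\gamma)]
\sin(x/\ell_\mathrm{ph})\,dx$. A minimal sketch, using the illustrative
values $\ell_\mathrm{ph}=4$~nm and $\ell_\gamma=66$~nm quoted later in the
text:

```python
# Check l0^2 = l_ph^2 * ln((l_g^2 + 4 l_ph^2)/(4 l_ph^2)) against a direct
# numerical evaluation of the reduced single integral of the kernel.
import numpy as np
from scipy.special import expi        # Ei(x); -expi(-z) gives -Ei(-z), z > 0
from scipy.integrate import trapezoid

l_ph, l_g = 4.0, 66.0  # nm, illustrative values

analytic = l_ph**2 * np.log((l_g**2 + 4*l_ph**2) / (4*l_ph**2))

x = np.linspace(1e-9, 20*l_g, 1_000_001)  # -Ei(-2x/l_g) negligible beyond
integrand = -expi(-2*x/l_g) * np.sin(x/l_ph)
numeric = 2*l_ph*trapezoid(integrand, x)

assert abs(numeric - analytic)/analytic < 1e-3
```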
What is detected experimentally in Ref.~\onlinecite{Novotny} is the
difference in the profiles of $w(-x_0)$ and $\mathcal{I}_{x_0}$,
appearing because of the non-locality of the kernel.
This difference appears in the second order of the expansion of
Eq.~(\ref{Inonlocal=}) in the spatial derivatives of
$\mathcal{E}_{in}(x)$ (i.~e., in the order $1/L^2$):
\begin{equation}
\mathcal{I}_{x_0}=l_0^2w(l_1-x_0)+\frac{l_0^2l_2^2}2\,w''(l_1-x_0)
+O(\ell^3/L^3).\label{expandprofile=}
\end{equation}
Here the length $l_1$ [the ``center of mass'' of the kernel in
Eq.~(\ref{spatialkernel=})] is given by
\begin{eqnarray}
l_1&=&\Re\int\limits_0^\infty{x}\,\mathcal{K}(x,x')\,\frac{dx\,dx'}{l_0^2}
\nonumber\\
&=&\frac{\ell_\gamma^3/8}{\ell_\gamma^2/4+\ell_\mathrm{ph}^2}
\left(\ln\frac{\ell_\gamma^2/4+\ell_\mathrm{ph}^2}{\ell_\mathrm{ph}^2}\right)^{-1}.
\end{eqnarray}
It describes the overall shift of the profile, which may be difficult
to detect experimentally, unless the precise location of the edge is known.
The length $l_2$, appearing in Eq.~(\ref{expandprofile=}), determines the
broadening of the signal profile $\mathcal{I}_{x_0}$ with respect to
the excitation profile $w(-x_0)$, proportional to $w''(x)$ (the second
derivative). In the limit $\ell_\gamma\gg\ell_\mathrm{ph}$ it is given by
(see Appendix~\ref{app:kernel} for the full expression and other details):
\begin{equation}\label{l2=}
l_2^2
=\ell_\gamma^2
\frac{2\ln(\ell_\gamma/\ell_\mathrm{ph})-1}{16\ln^2(\ell_\gamma/\ell_\mathrm{ph})}
+O(\ell_\mathrm{ph}^2).
\end{equation}
Note that this length is indeed determined by the electronic
inelastic length (up to logarithmic corrections), in qualitative
agreement with the assumption of Ref.~\onlinecite{Novotny}.
Instead of Eq.~(\ref{spatialkernel=}), Can\c{c}ado, Beams and
Novotny\cite{Novotny} fitted the experimental profile $\mathcal{I}_{x_0}$
using the following expression:
\begin{equation}\label{wrongkernel=}
\mathcal{I}_{x_0}\propto\left|\int\limits_{x_e}^\infty
e^{-(x-x_e)/x_D}\mathcal{E}_{in}(x-x_0)\,dx\right|^2,
\end{equation}
where the excitation profile was independently determined to be
Gaussian: $|\mathcal{E}_{in}(x)|^2\propto{e}^{-x^2/L^2}$. The effective
position of the edge~$x_e$, the width~$x_D$, as well as the overall
proportionality coefficient were used as fitting parameters, and the
value $x_D=20$~nm was obtained. Expanding Eq.~(\ref{wrongkernel=}) to
the order $1/L^2$ and comparing it to Eq.~(\ref{expandgaussian=}), we
obtain
\begin{equation}\begin{split}
&Ae^{-\frac{(x_0-x_e-x_D)^2}{L^2}}
\left[1+\frac{x_D^2}{L^2}\left(\frac{(x_0-x_e-x_D)^2}{L^2}-1\right)\right]=\\
&=e^{-\frac{(x_0-l_1)^2}{L^2}}
\left[1+\frac{l_2^2}{L^2}\left(\frac{2(x_0-l_1)^2}{L^2}-1\right)\right]
+O(L^{-3}).
\end{split}\end{equation}
This equation is satisfied for all~$x_0$ provided that $x_e+x_D=l_1$,
$A=1+l_2^2/L^2$, $x_D^2=2l_2^2$. Thus, even though an incorrect kernel
was used in Ref.~\onlinecite{Novotny}, we can still take
the experimentally found~$l_2$ and use it with the correct kernel.
Namely, the experimentally
measured $x_D=20$~nm yields $l_2=14$~nm. Using Eq.~(\ref{l2=}) and
taking $\ell_\mathrm{ph}=4$~nm, we obtain $\ell_\gamma=66$~nm, which gives
$2\gamma=11$~meV. As discussed in Sec.~\ref{sec:qualpos} and in
Ref.~\onlinecite{Casiraghi2009}, this value is significantly smaller
than an estimate obtained using other sources of information.
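The matching conditions can be verified directly: with $x_e+x_D=l_1$,
$A=1+l_2^2/L^2$ and $x_D^2=2l_2^2$, the two sides of the comparison
equation above coincide up to $O(1/L^3)$, and the measured $x_D=20$~nm
translates into $l_2=x_D/\sqrt{2}\approx14.1$~nm. A minimal numerical
sketch with arbitrary test values:

```python
# Verify that the parameter matching makes the two truncated expansions
# agree to O(1/L^3).  L, l1, l2 are arbitrary test numbers.
import math

L, l1, l2 = 100.0, 7.0, 5.0
x_D = math.sqrt(2)*l2
x_e = l1 - x_D
A = 1 + l2**2/L**2

def lhs(x0):
    u = x0 - x_e - x_D
    return A*math.exp(-u**2/L**2)*(1 + (x_D**2/L**2)*(u**2/L**2 - 1))

def rhs(x0):
    u = x0 - l1
    return math.exp(-u**2/L**2)*(1 + (l2**2/L**2)*(2*u**2/L**2 - 1))

for x0 in (-50.0, 0.0, 30.0, 80.0):
    assert abs(lhs(x0) - rhs(x0)) < 5e-5   # residual is O(1/L^4)

# the measured x_D = 20 nm then corresponds to l2 = x_D/sqrt(2) ~ 14.1 nm
assert round(20.0/math.sqrt(2), 1) == 14.1
```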
\section{Raman scattering on an atomically rough edge}\label{sec:Rough}
In this section we calculate the Raman intensity for an edge rough
on atomic scale, and described by the model of Sec.~\ref{sec:irregular}.
The general arguments of Sec.~\ref{sec:reggeneral} mostly remain valid,
except for that of the symmetry $y\to-y$, not possessed by any given
realization of the disorder. This symmetry is restored upon averaging
of the intensity over the realizations of disorder, but the
matrix element~$\mathcal{M}_\mu$ must be taken in the general form.
\subsection{Spatially integrated intensity and polarization dependence}
\label{sec:disordpolariz}
\begin{figure}
\includegraphics[width=8cm]{trajectoblique}
\caption{\label{fig:trajectoblique}(Color on-line.) Electron
trajectories corresponding to Raman scattering on a disordered edge.
Notations are the same as in Fig.~\ref{fig:traject}.}
\end{figure}
Since the edge can scatter an electron by an arbitrary angle,
it is convenient to use the rotated coordinates $(\xi,\eta)$, as shown
in Fig.~\ref{fig:trajectoblique}. Taking the first Green's function in
Eqs.~(\ref{D1tr=}), (\ref{D2tr=}) as given by Eq.~(\ref{gfTmatrix=}),
and taking the free Green's functions in the paraxial approximation
(i.~e., assuming $|\eta|\ll\xi$), we arrive at the following expression
for the Raman matrix element:
\begin{subequations}\begin{eqnarray}
\mathcal{M}_\mu&=&
\sqrt{\frac{\lambda_K}{2}}\,\frac{e^2v}{4\pi^2\omega_{in}}\times\nonumber\\
&&{}\times
\int\limits_{-\infty}^\infty{dy}_e\,e^{-2(\gamma/v)\sqrt{X^2+y_e^2}}
\mathop{\mathrm{Tr}}_{2\times{2}}\{\Lambda_\mu{S}_\Lambda(y_e)\}\times\nonumber\\
&&{}\times
\frac{X(Xe_{in}^y+y_ee_{in}^x)(Xe_{out}^y+y_ee_{out}^x)^*}{(X^2+y_e^2)^{3/2}}\times\nonumber\\
&&{}\times\int\limits_{-\infty}^\infty\frac{d\epsilon}v
\int\limits_0^{\sqrt{X^2+y_e^2}}d\xi_{in}d\xi_{out}
\int\limits_{-\infty}^\infty{d}\eta_{in}d\eta_{out}\times\nonumber\\
&&{}\times \sqrt{\frac{|\omega_{in}-\epsilon||\omega_{out}-\epsilon||\epsilon|^2}
{v^4\xi_{in}\xi_{out}\xi_{in}'\xi_{out}'}}\,
e^{i\Phi_{in}+i\Phi_{out}},\label{Mirrinitial=}\\
\Phi_a&=&
\frac{|\epsilon|}v\left(\xi_a+\frac{\eta_a^2}{2\xi_a}\right)
+\frac{|\omega_a-\epsilon|}{v}\left(\xi_a'
+\frac{\eta_a^2}{2\xi_a'}\right),\\
\xi_a'&=&\sqrt{X^2+y_a^2}-\xi_a,
\end{eqnarray}\end{subequations}
where $a$ is either ``$in$'' or ``$out$''. Analogously to the case of
the regular edge, we first integrate over $\eta_{in},\eta_{out}$, and
subsequent integration over $\epsilon$ and $\xi_{in,out}$ fixes
$\xi_{in}+\xi_{out}\approx\sqrt{X^2+y_e^2}$, $\epsilon\approx\omega_{in}/2$.
As a result, we obtain
\begin{eqnarray}
\mathcal{M}_\mu&=& \frac{ie^2}{4}\sqrt{\frac{\lambda_K}{2}}
\int\limits_{-\infty}^\infty{dy}_e\,
\mathop{\mathrm{Tr}}_{2\times{2}}\{\Lambda_\mu{S}_\Lambda(y_e)\}
\times\nonumber\\
&&{}\times
e^{(i\omega_{in}/2+i\omega_{out}/2-2\gamma)\sqrt{X^2+y_e^2}/v}
\times\nonumber\\
&&{}\times
\frac{X(Xe_{in}^y+y_ee_{in}^x)(Xe_{out}^y+y_ee_{out}^x)^*}{(X^2+y_e^2)^2}\times\nonumber\\
&&{}\times\frac{\sin[\omega_\mathrm{ph}\sqrt{X^2+y_e^2}/(2v)]}{\omega_\mathrm{ph}/(2v)}.
\end{eqnarray}
To calculate the intensity, we use
Eqs.~(\ref{SLambdaCOE=}),~(\ref{COEaverage=}) to average the square of
the matrix element, and sum over the two phonon modes. Angular
integration and summation over the two detector polarizations according
to Eq.~(\ref{dId2total=}) gives
\begin{subequations}\begin{eqnarray}
\frac{dI_D}{d^2\vec{R}}&=&
\frac{2\lambda_K}{9}\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{\omega_{in}/v}{L_xL_y}\int\limits_{-\infty}^\infty{d}y_e\,
e^{-4\gamma\sqrt{X^2+y_e^2}/v}\times\nonumber\\
&&{}\times\frac{\sin^2[\omega_\mathrm{ph}\sqrt{X^2+y_e^2}/(2v)]}{[\omega_\mathrm{ph}/(2v)]^2}
\times\nonumber\\&&{}\times
\frac{X^2(X^2\sin^2\varphi_{in}+y_e^2\cos^2\varphi_{in})}{(X^2+y_e^2)^3},\\
I_D&=&
\frac{\pi\lambda_K}{36}\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{\omega_{in}}{vL_x}\,\frac{v^2}{\omega_\mathrm{ph}^2}
\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\times\nonumber\\
&&{}\times\left(3\sin^2\varphi_{in}+\cos^2\varphi_{in}\right).
\label{IDdisordb=}
\end{eqnarray}
Eq.~(\ref{dId2phiout=}) for the intensity emitted into a solid angle
$o_{out}$ in the presence of an analyzer gives
\begin{eqnarray}
I_D&=&
\frac{\pi\lambda_K}{24}\frac{o_{out}}{4\pi}
\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{\omega_{in}}{vL_x}\,\frac{v^2}{\omega_\mathrm{ph}^2}
\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\times\nonumber\\
&&{}\times\left[\sin^2\varphi_{in}+\sin^2\varphi_{out}
+\frac{1}2\cos(2\varphi_{in}-2\varphi_{out})\right].\nonumber\\
\label{IDdisordc=}
\end{eqnarray}
\end{subequations}
The trigonometric expression in the square brackets can be identically
rewritten as
$\sin^2\varphi_{in}+\sin(2\varphi_{out}-\varphi_{in})\sin\varphi_{in}+1/2$,
so its absolute minimum is 1/4, reached at
$\varphi_{out}=-\varphi_{in}=\pm\pi/6$.
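The stated minimum can be confirmed by a direct numerical scan of the
bracket in Eq.~(\ref{IDdisordc=}):

```python
# Scan f = sin^2(p_in) + sin^2(p_out) + cos(2 p_in - 2 p_out)/2 on a grid:
# its absolute minimum is 1/4, reached at p_out = -p_in = +/- pi/6.
import numpy as np

p = np.linspace(-np.pi, np.pi, 1201)
pin, pout = np.meshgrid(p, p)
f = np.sin(pin)**2 + np.sin(pout)**2 + 0.5*np.cos(2*pin - 2*pout)

assert abs(f.min() - 0.25) < 1e-3        # grid minimum is 1/4
at_min = np.sin(np.pi/6)**2 + np.sin(-np.pi/6)**2 + 0.5*np.cos(2*np.pi/3)
assert abs(at_min - 0.25) < 1e-12        # exact value at p_out = -p_in = pi/6
```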
\subsection{Excitation position dependence}
\label{sec:disordspatial}
Here we follow the same logic as in Sec.~\ref{sec:regSpatial},
but instead of Eqs.~(\ref{dIDdtreg=})--(\ref{spatialkernel=}) we have
\begin{eqnarray}
\frac{dI_D}{dt}&=&\frac{\lambda_K}{9\pi}\left(\frac{e^2}c\right)^2\frac{v}c
\nonumber\\ &&{}\times
\int\limits_{-\infty}^\infty dY\,dX\,dy_e\,
e^{-4(\gamma/v)\sqrt{X^2+y_e^2}}\,
\nonumber\\ &&{}\times
\frac{X^2(X\sin\varphi_{in}+y_e\cos\varphi_{in})^2}{(X^2+y_e^2)^3}
\nonumber\\ &&{}\times
\int\limits_0^{\sqrt{X^2+y_e^2}}d\xi_{in}\,d\xi_{in}'\,
e^{-i(\omega_\mathrm{ph}/v)(\xi_{in}-\xi_{in}')}
\nonumber\\ &&{}\times
\mathcal{E}_{in}\!\left(\frac{\xi_{in}X}{\sqrt{X^2+y_e^2}},Y+y_e-\frac{\xi_{in}y_e}{\sqrt{X^2+y_e^2}}\right)
\nonumber\\ &&{}\times
\mathcal{E}_{in}^*\!\left(\frac{\xi_{in}'X}{\sqrt{X^2+y_e^2}},Y+y_e-\frac{\xi_{in}'y_e}{\sqrt{X^2+y_e^2}}\right),
\end{eqnarray}
where $(X,Y)$ is the point where the phonon is emitted.
As in Sec.~\ref{sec:regSpatial}, we expand in the spatial derivatives
of $\mathcal{E}_{in}(\vec{r})$, and obtain Eq.~(\ref{expandprofile=})
with $l_2$ given by (actually, the result depends slightly on the
polarization; here we choose unpolarized detection and excitation
polarization along the edge, $\varphi_{in}=\pi/2$):
\begin{equation}
l_2^2=\ell_\gamma^2
\frac{5\ln(\ell_\gamma/\ell_\mathrm{ph})-3\cdot{2}^{14}/(45\pi)^2}{48\ln^2(\ell_\gamma/\ell_\mathrm{ph})}
+O(\ell_\mathrm{ph}^2).
\end{equation}
This expression reproduces the experimental value $l_2=14\:\mbox{nm}$
if $\ell_\gamma=120\:\mbox{nm}$ (again, $\ell_\mathrm{ph}=4\:\mbox{nm}$ is taken).
\section{Raman scattering on a fragmented edge}
\label{sec:fragmented}
In this section we consider the Raman scattering on an edge
consisting of armchair and zigzag segments whose typical
length $d_e$ significantly exceeds the electronic wavelength,
$d_e\gg\lambdabar_\ep$, where $\lambdabar_\ep=2v/\omega_{in}$. This is an
intermediate case between the two limiting cases considered
in the previous two sections.
Only armchair segments contribute to the Raman
process.\cite{Cancado2004} Moreover, contributions from different
segments add up incoherently. Thus, we first focus on the
contribution of a single armchair segment, placed at $x=0$,
$-d_e/2\leq{y}\leq{d}_e/2$ (as before, graphene is assumed to
occupy the half-space $x>0$).
The electronic Green's function
corresponding to the reflection from a single armchair segment can be
easily written from the Huygens-Fresnel principle, Eq.~(\ref{Huygens=}),
if one approximates its value on the boundary by that for an infinite
perfect armchair edge:
\begin{eqnarray}
&&{G}(\vec{r},\vec{r}';\epsilon)=G_0(\vec{r}-\vec{r}',\epsilon)
-iv\int\limits_{-d_e/2}^{d_e/2}dy_e\nonumber\\
&&\qquad\qquad{}\times G_0(x,y-y_e;\epsilon)\,
\Sigma_x\,G_0(x',y_e-y';\epsilon)\,\Sigma_y\Lambda_y.\nonumber\\
\end{eqnarray}
This approximation ignores the change of the exact wave function
within the distance $\sim\lambdabar_\ep$ from the ends of the segment, which gives
a small error if $d_e\gg\lambdabar_\ep$. In fact, it is the standard approximation
for the study of diffraction in optics.\cite{BornWolf}
Using this Green's function, we obtain the following expression for the
matrix element corresponding to emission of a phonon in an arbitrary
point $(X,Y)$:
\begin{subequations}\begin{eqnarray}
&&\mathcal{M}=
-\sqrt{\frac{\lambda_K}{2}}\,\frac{e^2v}{\pi^2\omega_{in}}
\int\limits_{-\infty}^\infty\frac{d\epsilon}v
\int{d}^2\vec{r}_{in}\,d^2\vec{r}_{out}\int\limits_{-d_e/2}^{d_e/2}dy_e\nonumber\\
&&\qquad{}\times\sqrt{\frac{|\omega_{in}-\epsilon||\omega_{out}-\epsilon|\epsilon^2}
{v^4\rho_{in}\rho_{out}\rho_{in}'\rho_{out}'}}\,e^{i\Phi_{in}+i\Phi_{out}}\nonumber\\
&&\qquad{}\times\cos\frac{\phi_{in}-\phi_{out}}{2}
\sin\frac{2\varphi_{out}+\phi_{out}'-\phi_{out}}{2}\nonumber\\
&&\qquad{}\times\cos\frac{\phi_{in}'-\phi_{out}'}{2}
\sin\frac{2\varphi_{in}+\phi_{in}'-\phi_{in}}{2},\\
&&\rho_a=\sqrt{x_a^2+(y_a-y_e)^2},\\ &&\rho_a'=\sqrt{(X-x_a)^2+(y_a-Y)^2},\\
&&\phi_a=\arctan\frac{y_a-y_e}{x_a},\quad
\phi_a'=\arctan\frac{y_a-Y}{X-x_a},\\
&&\Phi_a=\frac{|\epsilon|+i\gamma}{v}\,\rho_a+\frac{|\omega_a-\epsilon|+i\gamma}{v}\,\rho_a',
\end{eqnarray}\end{subequations}
where $a$ is either ``$in$'' or ``$out$''.
It is convenient to use the paraxial approximation with respect
to the axis connecting the points $(0,y_e)$ and $(X,Y)$.
In this approximation we expand
\begin{subequations}\begin{eqnarray}
&&\rho_a+\rho_a'\approx\frac{X}{\cos\phi_0}
+\frac{X\cos^3\phi_0}{x_a(X-x_a)}\,\frac{(y_a-y_{a0})^2}2,\\
&&\phi_a\approx\phi_0+\frac{y_a-y_{a0}}{x_a}\,\cos^2\phi_0,\\
&&\phi_a'\approx-\phi_0+\frac{y_a-y_{a0}}{X-x_a}\,\cos^2\phi_0,\\
&&y_{a0}=x_a\tan\phi_0,\quad\tan\phi_0=\frac{Y-y_e}{X}.
\end{eqnarray}\end{subequations}
Integrating over $\vec{r}_{in},\vec{r}_{out}$ in the usual way, we obtain
\begin{eqnarray}
\mathcal{M}&=&
-ie^2\sqrt{\frac{\lambda_K}{2}}
\int\limits_{-d_e/2}^{d_e/2}dy_e\,\frac{\sin[\omega_\mathrm{ph}{X}/(2v\cos\phi_0)]}{\omega_\mathrm{ph}{X}/(2v\cos\phi_0)}\nonumber\\
&&{}\times\exp\left[\left(i\,\frac{\omega_{in}+\omega_{out}}{2}-2\gamma\right)\frac{X}{v\cos\phi_0}\right]\nonumber\\
&&{}\times\sin(\varphi_{in}-\phi_0)\sin(\varphi_{out}-\phi_0).
\end{eqnarray}
This integral can be calculated analogously to the standard
diffraction integral in optics.\cite{BornWolf}
According to Eq.~(\ref{dId2total=}), the integrated intensity into the
full solid angle $4\pi$, summed over two polarizations of the emitted
photon, and integrated over~$\vec{R}$, is given by
\begin{subequations}\begin{eqnarray}
I_D&=&\frac{8\lambda_K}{3}\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{v}{\omega_{in}L_x}\,\frac{d_e}{L_y}\,\frac{\omega_{in}^2}{\omega_\mathrm{ph}^2}
\times\nonumber\\&&{}\times
\left\{\sin^2\varphi_{in}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}+\cos^2\varphi_{in}
\right.\nonumber\\ &&{}\qquad\times\left.
\left[\frac{\omega_\mathrm{ph}^2}{\omega_{in}^2}\ln\frac{\omega_{in}}\omega_\mathrm{ph}
+\frac{2v}{\omega_{in}d_e}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\right]\right\}.
\nonumber\\ \label{IDfragma=}
\end{eqnarray}
This expression is analogous to Eq.~(\ref{IDregularb=}), weighted by
the factor $d_e/L_y$.
The coefficient of the last term, $\propto\cos^2\varphi_{in}$, determines
the minimum of the intensity, reached at $\varphi_{in}=0$, and
has two contributions: one corresponding to the infinite edge,
and one due to the finite size of the segment. The latter
is dominant unless
$d_e\gtrsim(v/\omega_{in})(\omega_{in}/\omega_\mathrm{ph})^2\approx{50}\:\mbox{nm}$
for $\omega_{in}=2\:\mbox{eV}$.
Still, as long as
$\omega_{in}d_e/v\gg{1}$, the ratio between the intensities for the
parallel and perpendicular polarizations is large.
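The $\approx 50$~nm figure quoted above follows from standard graphene
numbers; in this sketch $\hbar v\approx 0.66\;\mbox{eV}\cdot\mbox{nm}$ and
$\omega_\mathrm{ph}\approx 0.17$~eV are assumptions, not values given in
the text:

```python
# Crossover segment length d_e ~ (v/w_in)(w_in/w_ph)^2 at w_in = 2 eV.
hbar_v = 0.66   # eV*nm, graphene Fermi velocity (assumption)
w_in = 2.0      # eV, excitation energy (value used in the text)
w_ph = 0.17     # eV, D-phonon energy (assumption)

d_e = (hbar_v/w_in) * (w_in/w_ph)**2
assert 40 < d_e < 60   # consistent with the quoted ~50 nm
```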
If we use
Eq.~(\ref{dId2phiout=}) for the intensity in a solid angle $o_{out}$ in
the presence of an analyzer, we obtain
\begin{eqnarray}
I_D&=&4\lambda_K\,\frac{o_{out}}{4\pi}
\left(\frac{e^2}c\right)^2\frac{v^2}{c^2}\,
\frac{v}{\omega_{in}L_x}\,\frac{d_e}{L_y}\,\frac{\omega_{in}^2}{\omega_\mathrm{ph}^2}
\times\nonumber\\&&{}\times
\left\{\sin^2\varphi_{in}\sin^2\varphi_{out}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}
\right. +\nonumber\\ &&{}\qquad+ \left.
\left[\sin^2(\varphi_{in}+\varphi_{out})+\frac{1}2\sin{2}\varphi_{in}\sin{2}\varphi_{out}\right]
\right.\nonumber\\ &&{}\qquad\;\;\times\left.
\frac{v}{\omega_{in}d_e}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}
\right. +\nonumber\\ &&{}\qquad+ \left.
\cos^2\varphi_{in}\cos^2\varphi_{out}\right.\nonumber\\ &&{}\qquad\;\;\times\left.
\left[\frac{\omega_\mathrm{ph}^2}{\omega_{in}^2}\ln\frac{\omega_{in}}\omega_\mathrm{ph}+
\frac{v}{\omega_{in}d_e}\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\right]\right\}.\nonumber\\
\label{IDfragmb=}
\end{eqnarray}
\end{subequations}
Eqs.~(\ref{IDfragma=}), (\ref{IDfragmb=}) describe the contribution
of a single armchair segment to the Raman intensity. To obtain the
contribution of the whole edge, it is sufficient to multiply these
expressions by the total number of such segments and replace $d_e$
by its average, if all segments have the same orientation. It is
crucial, however, that up to three different orientations of armchair
segments are possible, at the angle $\pi/3$ to each other, as
discussed by the author and coworkers in
Ref.~\onlinecite{Casiraghi2009}.
Let us first consider a measurement in the absence of an analyzer.
Since the intensity is a bilinear form in the polarization
vector~$\vec{e}_{in}$, it can always be written in the form
\begin{eqnarray}\label{IDphigeneric=}
I(\varphi_{in})&\propto&\cos^2(\varphi_{in}-\varphi_{max})
+\varepsilon\sin^2(\varphi_{in}-\varphi_{max})\nonumber\\
&=&\frac{1+\varepsilon}{2}+
\frac{1-\varepsilon}{2}\,\cos(2\varphi_{in}-2\varphi_{max}),
\end{eqnarray}
where $\varphi_{max}$ is the angle where the intensity is maximum
and $\varepsilon$ is the ratio between the intensities in the minimum
and in the maximum.
Eq.~(\ref{IDfragma=}) corresponds to $\varphi_{max}=\pi/2$ and a
small $\varepsilon\ll{1}$ due to the quantum diffraction.
Let the edge have $N_0$ armchair segments oriented along
the $y$~direction, such as the one considered above, and $N_\pm$
segments oriented at $\pm\pi/3$ to the $y$ axis. Note that the average
direction of the edge may still be arbitrary, as it depends on the
distribution of zigzag segments too. Let each segment be characterized
by the same values of $\varepsilon$ and $\varphi_{max}$, when the
latter is measured with respect to the corresponding normal. Adding the
contributions, we again obtain an expression of the form of
Eq.~(\ref{IDphigeneric=}), but with different values of parameters:
\begin{subequations}\begin{eqnarray}
&&N_0I(\varphi_{in})+N_+I(\varphi_{in}-\pi/3)+N_-I(\varphi_{in}+\pi/3)\nonumber\\
&&\propto\cos^2(\varphi_{in}-\tilde\varphi_{max})
+\tilde\varepsilon\sin^2(\varphi_{in}-\tilde\varphi_{max}),\\
&&\tilde\varphi_{max}=\varphi_{max}
+\frac{1}2\arctan\frac{\sqrt{3}(N_+-N_-)}{2N_0-N_+-N_-},\\
&&\tilde\varepsilon=
\frac{(1+\varepsilon)N_{tot}-(1-\varepsilon)\tilde{N}}%
{(1+\varepsilon)N_{tot}+(1-\varepsilon)\tilde{N}},\label{tvep=}\\
&&\tilde{N}\equiv\sqrt{N_{tot}^2-3(N_+N_-+N_0N_++N_0N_-)},\\
&&N_{tot}\equiv N_0+N_++N_-.
\end{eqnarray}\end{subequations}
Inspection of Eq.~(\ref{tvep=}) shows that $\tilde\varepsilon\ll{1}$
if and only if (i)~$\varepsilon\ll{1}$ and (ii)~$N_{tot}-\tilde{N}\ll N_{tot}$.
The latter condition is equivalent to one of $N_0,N_+,N_-$ being
much larger than the others. If these conditions hold, we can write
(assuming $N_0\gg{N}_+,N_-$ for concreteness)
\begin{equation}
\tilde\varepsilon\approx\varepsilon+\frac{3}{4}\frac{N_++N_-}{N_0}.
\end{equation}
In the opposite case $N_0=N_+=N_-$ we have $\tilde\varepsilon=1$,
so that isotropy is fully restored and no signatures of quantum
diffraction are left.
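The closed forms for $\tilde\varphi_{max}$ and $\tilde\varepsilon$, together with the small-admixture approximation above, can be cross-checked numerically. The sketch below (the segment counts $N_0$, $N_\pm$ and the value of $\varepsilon$ are arbitrary illustrations) represents each rotated $\cos^2+\varepsilon\sin^2$ pattern by its second angular harmonic and compares the resulting parameters with the quoted expressions.

```python
import cmath
import math

def summed_pattern(N0, Np, Nm, eps, phi_max):
    """Parameters (tilde_phi_max, tilde_eps) of the summed pattern
    N0*I(phi) + Np*I(phi - pi/3) + Nm*I(phi + pi/3), where
    I(phi) = cos^2(phi - phi_max) + eps * sin^2(phi - phi_max)."""
    Ntot = N0 + Np + Nm
    # Complex amplitude of the cos(2*phi) harmonic of the sum.
    z = N0 + Np * cmath.exp(-2j * math.pi / 3) + Nm * cmath.exp(2j * math.pi / 3)
    tilde_phi = phi_max - cmath.phase(z) / 2
    tilde_eps = ((1 + eps) * Ntot - (1 - eps) * abs(z)) / \
                ((1 + eps) * Ntot + (1 - eps) * abs(z))
    return tilde_phi, tilde_eps

N0, Np, Nm, eps = 40, 7, 3, 0.05  # illustrative segment counts
tphi, teps = summed_pattern(N0, Np, Nm, eps, phi_max=math.pi / 2)
Ntot = N0 + Np + Nm
Ntilde = math.sqrt(Ntot ** 2 - 3 * (Np * Nm + N0 * Np + N0 * Nm))
# Closed forms quoted in the text:
assert abs(tphi - (math.pi / 2 + 0.5 * math.atan(
    math.sqrt(3) * (Np - Nm) / (2 * N0 - Np - Nm)))) < 1e-12
assert abs(teps - ((1 + eps) * Ntot - (1 - eps) * Ntilde)
           / ((1 + eps) * Ntot + (1 - eps) * Ntilde)) < 1e-12
# Small-admixture approximation for N0 >> Np, Nm:
assert abs(teps - (eps + 0.75 * (Np + Nm) / N0)) < 0.02
```

The modulus of the harmonic amplitude reproduces $\tilde{N}$, and its phase reproduces the shift of $\tilde\varphi_{max}$.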
An analogous summation can be performed in the presence of an analyzer
in the general case,
but the final expressions are very bulky and not very informative. The
qualitative conclusion is the same: the terms which were small compared
to the leading term $\sin^2\varphi_{in}\sin^2\varphi_{out}$ grow as one
adds segments with different orientations. At $N_0=N_+=N_-$ the isotropy
is restored,
\begin{eqnarray}
&&\sin^2\varphi_{in}\sin^2\varphi_{out}
+\sin^2(\varphi_{in}-\pi/3)\sin^2(\varphi_{out}-\pi/3)\nonumber\\
&&{}+\sin^2(\varphi_{in}+\pi/3)\sin^2(\varphi_{out}+\pi/3)\nonumber\\
&&{}=\frac{3}8\sin^2(\varphi_{in}-\varphi_{out})+\frac{9}8\cos^2(\varphi_{in}-\varphi_{out}),
\end{eqnarray}
and signatures of the quantum diffraction are lost.
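This restoration of isotropy is a trigonometric identity; as a sanity check (purely illustrative, not part of the derivation), the following sketch verifies it on a grid of polarizer and analyzer angles.

```python
import math

def lhs(phi_in, phi_out):
    # Sum of the leading sin^2 * sin^2 contributions of the three
    # armchair orientations, rotated by 0 and +/- pi/3.
    return sum(
        math.sin(phi_in - d) ** 2 * math.sin(phi_out - d) ** 2
        for d in (0.0, math.pi / 3, -math.pi / 3)
    )

def rhs(phi_in, phi_out):
    # Isotropic form: depends only on the angle between the polarizations.
    delta = phi_in - phi_out
    return 3 / 8 * math.sin(delta) ** 2 + 9 / 8 * math.cos(delta) ** 2

for i in range(25):
    for j in range(25):
        a, b = 0.26 * i, 0.26 * j
        assert abs(lhs(a, b) - rhs(a, b)) < 1e-12
```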
Let us focus on the special case when the average direction of edge is
zigzag, and it is symmetric on the average, $N_+=N_0$, $N_-=0$.
Then we have
\begin{eqnarray}
I_D&\propto&\sin^2\varphi_{in}\sin^2\varphi_{out}\nonumber\\
&&{}+\sin^2\!\left(\varphi_{in}-\frac\pi{3}\right)
\sin^2\!\left(\varphi_{out}-\frac\pi{3}\right)+O(\varepsilon)\nonumber\\
&=&\frac{3}8+\frac{3}4\cos^2(\varphi_{in}-\varphi_{out})\nonumber\\
&&{}-\cos^2\!\left(\varphi_{in}-\frac\pi{6}\right)
\cos^2\!\left(\varphi_{out}-\frac\pi{6}\right)+O(\varepsilon).\nonumber\\
\label{IDzigzag=}
\end{eqnarray}
The maximum of the intensity is reached when both polarizations are
along the average direction of the edge. For unpolarized detection,
we add the contributions with $\varphi_{out}=0$ and
$\varphi_{out}=\pi/2$ and obtain the ratio between the minimum and
the maximum intensity to be $1/3+O(\varepsilon)$; if an analyzer is used
and $\varphi_{in}=\varphi_{out}$, we obtain
$\tilde\varepsilon=1/9+O(\varepsilon)$. These findings agree
with the available experimental data.\cite{Gupta2009,Casiraghi2009}
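The quoted ratios $1/3$ and $1/9$ follow from Eq.~(\ref{IDzigzag=}) in the limit $\varepsilon\to 0$ and can be checked by direct numerical minimization over the angles, as in the following sketch.

```python
import math

def I_D(phi_in, phi_out):
    # Leading (eps -> 0) intensity for an edge with N_+ = N_0, N_- = 0:
    # armchair segments of two orientations, at 0 and pi/3.
    return (math.sin(phi_in) ** 2 * math.sin(phi_out) ** 2
            + math.sin(phi_in - math.pi / 3) ** 2
            * math.sin(phi_out - math.pi / 3) ** 2)

phis = [k * math.pi / 1800 for k in range(3600)]
# Unpolarized detection: sum over two orthogonal analyzer settings.
unpol = [I_D(p, 0.0) + I_D(p, math.pi / 2) for p in phis]
assert abs(min(unpol) / max(unpol) - 1 / 3) < 1e-9
# With an analyzer and phi_in = phi_out:
par = [I_D(p, p) for p in phis]
assert abs(min(par) / max(par) - 1 / 9) < 1e-9
```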
The dependence of Eq.~(\ref{IDzigzag=}) has a remarkable property:
at $\varphi_{in}=0$, $\varphi_{out}=\pi/3$ (or vice versa) the leading
term vanishes and $I_D=O(\varepsilon)$, i.~e., the quantum limit is still
accessible. In fact, the same will be true for any edge with
only two orientations of the segments (i.~e., for $N_-=0$ but
$N_0\neq{N}_+$, generally speaking).
The ratio of the intensity in this minimum to the maximum intensity
without an analyzer is given by
\begin{subequations}\begin{eqnarray}
&&\tilde{\varepsilon}=\frac{2}{Z}\left[\frac{v}{\omega_{in}d_e}+
\frac{\omega_\mathrm{ph}^2}{4\omega_{in}^2}\,\ln\frac\omega_\mathrm{ph}{\omega_{in}}
\left(\ln\frac{\omega_\mathrm{ph}^2+(4\gamma)^2}{(4\gamma)^2}\right)^{-1}\right],
\nonumber\\ && \label{IDquantum=}\\
&&Z\equiv{1}+\frac{\sqrt{N_0^2+N_+^2-N_0N_+}}{N_0+N_+}.
\end{eqnarray}\end{subequations}
\section{Conclusions}
We have studied scattering of Dirac electrons on a graphene edge.
For a translationally invariant edge (such
as a zigzag or armchair edge, or any other edge with a definite spatial
period) the reflection can be described by an effective
low-energy boundary condition for the electronic wave
function.\cite{McCann2004,Akhmerov2008}
For edges which are rough on the atomic scale we have proposed a
random-matrix model which
describes random scattering of electrons on the edge, respecting
the particle conservation and time-reversal symmetry. Essentially,
each point of the edge acts as an independent point scatterer with
a random rotation of the valley state. We have also considered
edges consisting of distinct zigzag and armchair segments longer
than the electron wavelength; each such segment can be treated as a
piece of an ideal edge, while the small corrections due to quantum
diffraction can be found using the Huygens-Fresnel principle for
Dirac electrons, analogously to the standard treatment of diffraction
in classical optics.
Next, we have calculated the intensity of the edge-induced $D$~peak
in the Raman scattering spectrum of graphene. It is shown how the
quasiclassical character of the electron motion manifests itself in
the polarization dependence of the intensity. For an ideal armchair
edge the maximum of intensity corresponds to the case when both the
polarizer and the analyzer are along the edge, and the large ratio of
intensities in the maximum and the minimum turns out to be determined
by the quantum corrections to the quasiclassical motion of the
photoexcited electron and the hole. For an edge consisting of randomly
distributed zigzag and armchair segments of the length significantly
exceeding the electron wavelength, the effect of quantum diffraction
can be masked by the presence of armchair segments of different
orientations. The maximum and the minimum of the intensity are
determined by the number of the armchair segments with different
orientations, rather than the average direction of the edge. If only
two orientations of armchair segments are present in the edge, the
quantum diffraction limit can still be probed by a careful choice of
the polarizer and the analyzer (the polarizer should be oriented
perpendicularly to one of the armchair directions, the analyzer
perpendicularly to the other one). For an edge that is rough at the
atomic scale, no segments can be identified, and the intensity reaches its
maximum for the polarization along the average direction of the edge.
The ratio of the maximum and the minimum intensity is determined by
the ability of the edge to scatter electrons at oblique angles.
As the whole Raman process is edge-assisted, one can pose the question
about the characteristic length scale which restricts the process to
the vicinity of the edge. We find that the answer is not unique, and
the length scale depends on the specific observable under study.
If one is interested in the total intensity or its polarization
dependence, the effective length scale is $v/\omega_\mathrm{ph}$ ($v$~being the
electronic velocity and $\omega_\mathrm{ph}$~the phonon frequency). However, if one
makes a spatially resolved experiment, measuring the dependence of
the intensity on the position of the excitation spot, the relevant
length scale is the electron inelastic scattering length $v/(2\gamma)$.
We have thus found a qualitative agreement with the interpretation of
Ref.~\onlinecite{Novotny}, but we argued that the inelastic scattering
length found in that work is too large to be consistent with other
available information on electron inelastic scattering in graphene.
\section{Acknowledgements}
The author is grateful to A. C. Ferrari, S. Piscanec, and M. M. Fogler
for stimulating discussions.
\section{Proof of Theorem~\ref{thm:lb2}}\label{apd:lb2}
\restatelb*
\begin{proof}
We construct an adversary with oblivious primary losses and adaptive secondary losses to prove the theorem. The adversary is inspired by the proof of the lower bound by~\cite{altschuler2018online}. We divide $T$ evenly into $T^{1-\alpha}$ epochs, and the primary losses do not change within each epoch. Let $\lceil t \rceil_e =\min_{m:mT^\alpha\geq t} mT^\alpha$ denote the last time step of the epoch containing time step $t$. For each expert $h\in\mathcal{H}$, at the beginning of each epoch, we toss a fair coin and let $\loss{1}_{t,h}=0$ throughout the epoch if it comes up heads and $\loss{1}_{t,h}=1$ if it comes up tails. It is well-known that there exists a universal constant $a$ such that $\EE{\min_{h\in\mathcal{H}} Z_h} = E/2- a\sqrt{E\log(K)}$ for i.i.d. $Z_h\sim \mathrm{Bin}(E,1/2)$; applying this with $E=T^{1-\alpha}$ epochs, we have
\begin{align*}
\EE{\min_{h\in\mathcal{H}}\sum_{t=1}^T\loss{1}_{t,h}}\leq \frac{T}{2}-aT^{\frac{1+\alpha}{2}}\sqrt{\log(K)}\:.
\end{align*}
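The binomial-minimum estimate behind this bound can be checked by a quick Monte-Carlo experiment; the values of $E$, $K$ and the number of trials below are arbitrary illustrations.

```python
import math
import random

random.seed(0)

def min_binomial_mean(E, K, trials=500):
    # Monte-Carlo estimate of E[min_h Z_h] for K i.i.d. Z_h ~ Bin(E, 1/2).
    total = 0
    for _ in range(trials):
        total += min(sum(random.getrandbits(1) for _ in range(E))
                     for _ in range(K))
    return total / trials

E, K = 400, 8  # illustrative sizes
est = min_binomial_mean(E, K)
# Implied constant a in E[min Z_h] = E/2 - a*sqrt(E*log(K)):
a = (E / 2 - est) / math.sqrt(E * math.log(K))
assert 0.2 < a < 1.5  # a universal constant of order unity
```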
For algorithm $\mathcal{A}$, let $\mathcal{A}_t$ denote the selected expert at time $t$. Then we construct adaptive secondary losses as follows. First, for the first $T^\alpha$ rounds, $\loss{2}_{t,h} = c+\delta $ for all $h\in\mathcal{H}$. For $t\geq T^\alpha +1$,
\begin{align*}
\loss{2}_{t,h}=\begin{cases}c & \text{if } h=\mathcal{A}_{t-1}=\ldots= \mathcal{A}_{t-T^\alpha}\\c+\delta & \text{otherwise}\end{cases}\:.
\end{align*}
This indicates that the algorithm can obtain $\loss{2}_{t,\mathcal{A}_t}=c$ only by selecting the expert she has consecutively selected in the last $T^\alpha$ rounds and that each switching leads to $\loss{2}_{t,\mathcal{A}_t}= c+\delta$. Let $S$ denote the total number of switchings and $\tau_1, \ldots,\tau_S$ denote the time steps at which $\mathcal{A}$ switches. For notation simplicity, let $\tau_{S+1}=T+1$. If $\EE{\Loss{1}_{T,\mathcal{A}}}\geq {T}/{2}-aT^{\frac{1+\alpha}{2}}\sqrt{{\log(K)}}/2$, then $\EE{\reg{1}}\geq aT^{\frac{1+\alpha}{2}}\sqrt{{\log(K)}}/2$; otherwise,
\begin{align*}
\frac{T}{2} - \frac{1}{2}\EE{\sum_{s=1}^{S} \min\left(\tau_{s+1}-\tau_s,\lceil \tau_s\rceil_e+1-\tau_s\right)}\labelrel\leq{myeq:a} \EE{\Loss{1}_{T,\mathcal{A}}} < \frac{T}{2} - aT^{\frac{1+\alpha}{2}}\sqrt{{\log(K)}}/2\:,
\end{align*}
where Eq.~\eqref{myeq:a} holds because the $s$-th switching can decrease the expected primary loss by at most $\min\left(\tau_{s+1}-\tau_s,\lceil \tau_s\rceil_e+1-\tau_s\right)/2$. Since the $s$-th switching increases the secondary loss to $c+\delta$ for at least $\min(\tau_{s+1}-1-\tau_{s}, T^\alpha)$ rounds, we have
\begin{align*}
\EE{\Loss{2}_{T,\mathcal{A}}} \geq& cT+\delta \EE{\sum_{s=1}^{S} \min(\tau_{s+1}-\tau_{s}, T^\alpha)}\\
\geq &cT+\delta \EE{\sum_{s=1}^{S} \min\left(\tau_{s+1}-\tau_s,\lceil \tau_s\rceil_e+1-\tau_s\right)}\\
> &cT+\delta aT^{\frac{1+\alpha}{2}}\sqrt{{\log(K)}},
\end{align*}
which indicates that $\EE{\reg{2}_c}=\Omega(T^{\frac{1+\alpha}{2}})$. Therefore, $\EE{\max\left(\reg{1}, \reg{2}_c\right)}\geq \max\left(\EE{\reg{1}},\EE{\reg{2}_c}\right)=\Omega(T^{\frac{1+\alpha}{2}})$.
\end{proof}
\section{Proof of Theorem~\ref{thm:subopt}}\label{apd:subopt}
\restatesubopt*
\begin{proof}
We divide $T$ into $T^{1-\beta}$ intervals evenly with $\beta=\frac{1+\alpha}{2}$ and construct $T^{1-\beta}+1$ worlds with $2$ experts. For computation simplicity, we let $\delta =1/2$. The adversary selects a random world $W$ at the beginning. She selects world $0$ with probability ${1}/{2}$ and world $w$ with probability ${1}/{2T^{1-\beta}}$ for all $w\in[T^{1-\beta}]$.
In world $0$, we design the losses of experts as shown in Table~\ref{tab:alglb}. During the $w$-th interval with $w\in[T^{1-\beta}]$ being odd, we set $(\loss{1}_{t,h_1},\loss{2}_{t,h_1},\loss{1}_{t,h_2},\loss{2}_{t,h_2}) = (0,c+\delta T^{\alpha - \beta},1,c-\delta T^{\alpha - \beta})$ for the first $T^\beta/2$ rounds and $(\loss{1}_{t,h_1},\loss{2}_{t,h_1},\loss{1}_{t,h_2},\loss{2}_{t,h_2}) = (1,c,0,c)$ for the second $T^\beta/2$ rounds. For $w$ being even, we swap the losses of the two experts, i.e., $(\loss{1}_{t,h_1},\loss{2}_{t,h_1},\loss{1}_{t,h_2},\loss{2}_{t,h_2}) = (1,c-\delta T^{\alpha - \beta},0,c+\delta T^{\alpha - \beta})$ for the first $T^\beta/2$ rounds and $(\loss{1}_{t,h_1},\loss{2}_{t,h_1},\loss{1}_{t,h_2},\loss{2}_{t,h_2}) = (0,c,1,c)$ for the second $T^\beta/2$ rounds.
The intuition behind the construction of world $w\in[T^{1-\beta}]$ is as follows. In world $w$, the secondary loss is the same as that in world $0$. The primary losses of each expert $h\in\mathcal{H}$ in the first $w-1$ intervals are an approximately random permutation of those in world $0$. Therefore, any algorithm will attain almost the same expected primary loss (around $(w-1)T^\beta/2$) in the first $w-1$ intervals of world $w$. The primary losses during the first $T^\beta/2$ rounds in the $w$-th interval are the same as those in world $0$. Therefore, the cumulative losses from the beginning to any time $t$ in the first half of the $w$-th interval are almost the same in world $0$ and world $w$, which makes any algorithm that depends only on the cumulative losses behave nearly the same during the first half of the $w$-th interval in the two worlds. For $t=(w-1/2)T^\beta+1,\ldots, T$, we set $\loss{1}_{t,h}=1$ for all $h\in\mathcal{H}$, which means that no algorithm can improve its primary loss after $t=(w-1/2)T^\beta+1$. To prove the theorem, we show that if the algorithm selects the expert with loss $(1,c-\delta T^{\alpha - \beta})$ during a large fraction of the first half of the $w$-th interval, then $\reg{1}$ will be large in world $w$; otherwise, $\reg{2}_c$ will be large in world $0$.
More specifically, for the first $w-1$ intervals in world $w$, we need to make the cumulative primary losses equal to $(w-1)T^\beta/2$ with high probability. Let $t' =(w-1)T^\beta-2\sqrt{(w-1)T^\beta\log(T)}$. For $t=1,\ldots, t'$, $\loss{1}_{t,h}$ are i.i.d. samples from $\mathrm{Ber}({1}/{2})$ for all $h\in\mathcal{H}$. We denote by $E_h^{(w)}$ the event of $\abs{\sum_{t=1}^{t'} (\loss{1}_{t,h} -1/2)}\leq \sqrt{(w-1)T^\beta\log(T)}$ and denote by $E$ the event of $\cap_{h\in\mathcal{H},w\in[T^{1-\beta}]} E_h^{(w)}$. If $E_{h_1}^{(w)}\cap E_{h_2}^{(w)}$ holds, we compensate the cumulative primary losses by assigning $\loss{1}_{t,h}=1$ for $(w-1)T^\beta/2 - \sum_{t=1}^{t'} \loss{1}_{t,h}$ rounds and $\loss{1}_{t,h}=0$ for the remaining rounds during $t=t'+1,\ldots, (w-1)T^\beta$ for all $h\in\mathcal{H}$, such that the cumulative primary losses in the first $w-1$ intervals for both experts are $(w-1)T^\beta/2$; otherwise, we set $\loss{1}_{t,h} = 1$ for all $h\in\mathcal{H}$ during $t=t'+1,\ldots, (w-1)T^\beta$. Hence, if $E_{h_1}^{(w)}\cap E_{h_2}^{(w)}$ holds, the cumulative losses satisfy $\Loss{1}_{(w-1)T^\beta,h} = (w-1)T^\beta/2$ for all $h\in\mathcal{H}$. To make it clearer, the values of the secondary losses in world $w$ for an even $w$ when $E_{h_1}^{(w)}\cap E_{h_2}^{(w)}$ holds are illustrated in Table~\ref{tab:alglb2}.
Let $q_w=2\sum_{t=(w-1)T^\beta+1}^{(w-1/2)T^\beta}\EE{\mathds{1}(\loss{1}_{t,\mathcal{A}_t}=0)}/T^\beta$ denote the expected fraction of rounds in which the expert with losses $(0,c+\delta T^{\alpha-\beta})$ is selected in the $w$-th interval in world $0$, which is also the corresponding fraction in world $w$ when $E$ holds. We denote by $\reg{1,w}=\Loss{1,w}_{T,\mathcal{A}}- \Loss{1,w}_{T,h_0}$ the regret with respect to the primary loss over all $T$ rounds in world $w$ and by $\reg{1,w}_{t'}=\Loss{1,w}_{t',\mathcal{A}}- \Loss{1,w}_{t',h_0}$ the regret incurred during $t=1,\ldots,t'$, where $h_0=\argmin_{h\in\mathcal{H}} \Loss{1,w}_{T,h}$ is the best expert in hindsight. We denote by $\reg{2,w}_c$ the regret to $cT$ with respect to the secondary loss in world $w$. Then we have
\begin{align*}
\EEc{\reg{2,W}_c}{W=0} = \sum_{w\in[T^{1-\beta}]} \frac{\delta(2q_w-1)T^\alpha}{2}\:,
\end{align*}
and for all $w\in[T^{1-\beta}]$,
\begin{align*}
\EEc{\reg{1,W}}{W=w,E} \geq& (1-q_w)\frac{T^\beta}{2} +\EEc{\reg{1,w}_{t'}}{W=w,E}-\left((w-1)T^\beta-t'\right) \\
\geq& (1-q_w)\frac{T^\beta}{2}-2\sqrt{(w-1)T^\beta\log(T)}\: .
\end{align*}
Due to Hoeffding's inequality and union bound, we have $\PP{\neg E_{h}^{(w)}}\leq \frac{2}{T^2}$ for all $h\in\mathcal{H}$ and $w\in[T^{1-\beta}]$ and $\PP{\neg E}\leq \frac{4}{T^{1+\beta}}$. Let $Q=\frac{{\sum_{w=1}^{T^{1-\beta}} q_w}}{T^{1-\beta}}$ denote the average of $q_w$ over all $w\in[T^{1-\beta}]$. By taking expectation over the adversary, we have
\begin{align}
&\EE{\max\left(\reg{1}, \reg{2}_c\right)}\nonumber\\
\geq & \PP{E}\cdot\EEc{\max\left(\reg{1}, \reg{2}_c\right)}{E}\nonumber\\
\geq &\left(1-\frac{4}{T^{1+\beta}}\right)\left(\frac{1}{2T^{1-\beta}}\sum_{w=1}^{T^{1-\beta}} \EEc{\reg{1,W}}{W=w,E}+ \frac{1}{2}\EEc{\reg{2,W}_c}{W=0,E}\right)\nonumber\\
\geq& \frac{1}{2}\left(\frac{1}{2T^{1-\beta}}\left(\sum_{w} {(1-q_w)}\frac{T^\beta}{2} -2\sum_{w=1}^{T^{1-\beta}}\sqrt{(w-1)T^\beta\log(T)}\right)+ \frac{\delta}{4}{\sum_{w=1}^{T^{1-\beta}}(2q_w-1)}T^\alpha\right) \nonumber\\
\geq& \frac{1}{8}(1-Q)T^\beta -\sqrt{T\log(T)} + \frac{\delta}{8}(2Q-1)T^{1-\beta+\alpha}\nonumber\\
\geq& \frac{1}{16}T^{\frac{1+\alpha}{2}}-\sqrt{T\log(T)},\label{eq:alglb}
\end{align}
where Eq.~\eqref{eq:alglb} holds by setting $\beta=\frac{1+\alpha}{2}$ and $\delta =1/2$.
\end{proof}
\begin{table}[H]
\caption{The losses in world $0$.}\label{tab:alglb}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
\multicolumn{2}{c|}{experts\textbackslash time} &$T^\beta/2$ &$T^\beta/2$ &$T^\beta/2$ &$T^\beta/2$ &$T^\beta/2$ & $\ldots$ \\
\cmidrule{1-8}
\multirow{2}{*}{$h_1$} &$\loss{1}$& $0$ & $1$ & $1$ & $0$ & $0$ & $\ldots$\\
\cmidrule{2-8}
& $\loss{2}$ & $c+\delta T^{\alpha-\beta}$ & $c$ &$c-\delta T^{\alpha-\beta}$ & $c$ & $c+\delta T^{\alpha-\beta}$& $\ldots$\\
\cmidrule{1-8}
\multirow{2}{*}{$h_2$} &$\loss{1}$& $1$ & $0$ & $0$ & $1$ & $1$ & $\ldots$\\
\cmidrule{2-8}
& $\loss{2}$ & $c-\delta T^{\alpha-\beta}$ & $c$ &$c+\delta T^{\alpha-\beta}$ & $c$ & $c-\delta T^{\alpha-\beta}$& $\ldots$\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\caption{The primary losses in world $w$ (which is even) if $E_{h_1}^{(w)}\cap E_{h_2}^{(w)}$ holds.}\label{tab:alglb2}
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
\multicolumn{2}{c|}{experts\textbackslash time} &$t'$ &$(w-1)T^\beta- t'$ &$T^\beta/2$ &$T^\beta/2$ &$T-T^\beta$ \\
\cmidrule{1-7}
{$h_1$} &$\loss{1}$& i.i.d. from $\mathrm{Ber}({1}/{2})$ & compensate & $1$ & $1$ & $1$\\
\cmidrule{1-7}
{$h_2$} &$\loss{1}$& i.i.d. from $\mathrm{Ber}({1}/{2})$ & compensate & $0$ & $1$ & $1$\\
\bottomrule
\end{tabular}
\end{table}
\section{Proof of Theorem~\ref{thm:linK}}\label{apd:linK}
\restatelinK*
\begin{proof}
The idea is to construct an example in which the best expert with respect to the primary loss is deactivated sequentially while incurring an extra $\Theta(T^\alpha)$ secondary loss. In the example, we set $\mathcal{H}=[K]$. Let $T_{k} = T^{\alpha+\frac{(k-1)(1-\alpha)}{K-1}}$ for $k\in[K]$ and $T_{0}=0$. For each expert $k\in\mathcal{H}$, we set $(\loss{1}_{t,k},\loss{2}_{t,k}) = (1,c)$ for $t\leq T_{{k-1}}$ and $(\loss{1}_{t,k},\loss{2}_{t,k}) = (0,c+\frac{\delta T^\alpha}{T_{k}-T_{{k-1}}})$ for $t\geq T_{{k-1}}+1$. Then expert $k$ will be deactivated at time $t=T_{k}$. For any algorithm with $\sreg{1}_{k}=o(T_{k})$ for all $k \in \mathcal{H}$, expert $k$ should be selected for $T_{k}-2T_{{k-1}}-o(T_{k})$ rounds during $t=T_{{k-1}}+1,\ldots,T_{k}$. Therefore, we have $\reg{2}_c \geq \sum_{k\in[K]}\frac{\delta T^\alpha}{T_{k}-T_{{k-1}}} (T_{k}-2T_{{k-1}}-o(T_{k}))= \Omega(KT^\alpha)$.
\end{proof}
\section{Proof of Theorem~\ref{thm:ub}}\label{apd:ub}
\restateub*
\begin{proof}
Let $\tLoss{1,h^*}_h = \sum_{m=1}^{T^{1-\alpha}}I_{h^*}(e_m)\tloss{1}_{e_m,h}$ and $\tLoss{1,h^*}_\mathcal{A} = \sum_{m=1}^{T^{1-\alpha}}I_{h^*}(e_m)\tloss{1}_{e_m,\mathcal{A}}$ denote the cumulative pseudo primary losses of expert $h$ and algorithm $\mathcal{A}$ during the time when $h^*$ is active. First, since we update $w_{m+1,h}^{h^*} = w_{m,h}^{h^*}\eta^{I_{h^*}(e_m)(\tloss{1}_{e_m,h}-\eta \tloss{1}_{e_m,\mathcal{A}})+1}\leq w_{m,h}^{h^*}$ with $\eta\in[1/\sqrt{2},1]$ and experts will not be reactivated between (not including) $t_n$ and $t_{n+1}$, the quantity $\frac{w_{m+1,h_{m}}}{w_{m,h_{m}}}$ used as the probability of following the first rule on Line~7 in Algorithm~\ref{alg:orc2} is a valid probability. Next, we show that at each epoch $m$, the probability of getting $h_m = h$ is $\PP{h_m = h}=p_{m,h}$. The proof follows Lemma~1 of~\cite{geulen2010regret}. For a reactivating epoch $m\in\{(t_n-1)/T^\alpha+1\}_{n=0}^N$, $h_m$ is drawn from $p_m$ and thus $\PP{h_m = h}=p_{m,h}$ holds. For other epochs $m\notin\{(t_n-1)/T^\alpha+1\}_{n=0}^N$, we prove it by induction. Assume that $\PP{h_{m-1} = h}=p_{m-1,h}$; then
\begin{align*}
\PP{h_m = h} &= \PP{h_{m-1} = h}\frac{w_{m,h}}{w_{m-1,h}} + p_{m,h} \sum_{h'\in \mathcal{H}} \PP{h_{m-1} = h'}\left(1-\frac{w_{m,h'}}{w_{m-1,h'}}\right)\\
& = \frac{w_{m-1,h}}{W_{m-1}}\cdot\frac{w_{m,h}}{w_{m-1,h}} +\frac{w_{m,h}}{W_m}\left(1- \sum_{h'\in \mathcal{H}}\frac{w_{m-1,h'}}{W_{m-1}}\cdot \frac{w_{m,h'}}{w_{m-1,h'}}\right)\\
& =p_{m,h}\:.
\end{align*}
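The induction above can be illustrated by a small simulation: for any per-expert nonincreasing weight sequence, the keep-or-resample rule leaves the marginal distribution of $h_m$ equal to $p_m$. The weight values in this sketch are arbitrary illustrations.

```python
import random

random.seed(1)

def sd_marginals(weights, runs=20000):
    """Empirical distribution of h_m after running the keep-or-resample
    rule on a fixed sequence of weight vectors (one per epoch)."""
    K = len(weights[0])
    counts = [0] * K
    for _ in range(runs):
        p0 = [w / sum(weights[0]) for w in weights[0]]
        h = random.choices(range(K), p0)[0]
        for m in range(1, len(weights)):
            if random.random() >= weights[m][h] / weights[m - 1][h]:
                # Resample from p_m; otherwise keep the previous expert.
                pm = [w / sum(weights[m]) for w in weights[m]]
                h = random.choices(range(K), pm)[0]
        counts[h] += 1
    return [c / runs for c in counts]

# Any per-expert nonincreasing weights will do for this check.
W = [[1.0, 1.0, 1.0], [0.9, 0.5, 1.0], [0.7, 0.4, 0.6], [0.7, 0.1, 0.5]]
emp = sd_marginals(W)
p_last = [w / sum(W[-1]) for w in W[-1]]
assert all(abs(e - p) < 0.02 for e, p in zip(emp, p_last))
```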
To prove the upper bound on sleeping regrets, we follow Claim~12 by \cite{blum2007external} to show that $\sum_{h,h^*} w_{m,h}^{h^*}\leq K \eta^{m-1}$ for all $m\in [T^{1-\alpha}]$.
First, we have
\begin{align}
W_m \tloss{1}_{e_m,\mathcal{A}} = W_m \sum_{h\in\mathcal{H}}p_{m,h} \tloss{1}_{e_m,h} = \sum_{h\in\mathcal{H}}w_{m,h} \tloss{1}_{e_m,h}= \sum_{h\in\mathcal{H}}\sum_{h^*\in\mathcal{H}}I_{h^*}(e_m)w_{m,h}^{h^*} \tloss{1}_{e_m,h}\:. \label{eq:intm}
\end{align}
Then according to the definition of $w_{m,h}^{h^*}$, we have
\begin{align*}
&\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m+1,h}^{h^*}\\=&\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m,h}^{h^*}\eta^{I_{h^*}(e_m)(\tloss{1}_{e_m,h}-\eta \tloss{1}_{e_m,\mathcal{A}})+1}\\
\leq& \eta\left(\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m,h}^{h^*} \left(1-(1-\eta)I_{h^*}(e_m)\tloss{1}_{e_m,h}\right)\left(1+(1-\eta)I_{h^*}(e_m)\tloss{1}_{e_m,\mathcal{A}}\right)\right)\\
\leq& \eta\left(\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m,h}^{h^*} - (1-\eta)\left(\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m,h}^{h^*}I_{h^*}(e_m)\tloss{1}_{e_m,h} - W_m\tloss{1}_{e_m,\mathcal{A}}\right)\right)\\
=&\eta\sum_{h\in\mathcal{H},h^*\in\mathcal{H}} w_{m,h}^{h^*}\:,
\end{align*}
where the last inequality adopts Eq.~\eqref{eq:intm}. Combined with $w_{1,h}^{h^*} = \frac{1}{K}$ for all $h\in\mathcal{H},h^*\in\mathcal{H}$, we have $\sum_{h,h^*} w_{m+1,h}^{h^*}\leq K \eta^{m}$. Since $w_{m+1,h}^{h^*} = w_{1,h}^{h^*}\eta^{\sum_{i=1}^{m} I_{h^*}(e_i)\tloss{1}_{e_i,h}-\eta\sum_{i=1}^{m} I_{h^*}(e_i)\tloss{1}_{e_i,\mathcal{A}}+m} \leq K \eta^{m}$, we have
\begin{align*}
\tLoss{1,h^*}_\mathcal{A} -\tLoss{1,h^*}_h\leq \frac{(1-\eta)\tLoss{1,h^*}_h +\frac{2\log(K)}{\log(1/\eta)}}{\eta}\,.
\end{align*}
By setting $\eta = 1-\sqrt{2\log(K)/T^{1-\alpha}}$, we have $\sreg{1}(h^*) \leq 2\sqrt{\log(K)T^{1+\alpha}} +2T_{h^*}\sqrt{\log(K)T^{\alpha-1}}$.
To derive $\reg{2}_c$, we bound the number of switchings. We denote by $S_n$ the number of epochs in which some experts are deactivated during $(t_n-1)/T^\alpha+1 < m< (t_{n+1}-1)/T^\alpha+1$ and by $\tau_1,\ldots,\tau_{S_n}$ the deactivating epochs, i.e., $\Delta \mathcal{H}_{\tau_i}\neq \emptyset$ for $i\in [S_n]$. We denote by $\alpha_m$ the probability of following the second rule on Line~7 in Algorithm~\ref{alg:orc2}, i.e., of drawing $h_m$ from $p_m$. Then we have
\begin{align*}
\alpha_m = \sum_{h\in\mathcal{H}} \PP{h_{m-1} = h}\left(1-\frac{w_{m,h}}{w_{m-1,h}}\right)=\sum_{h\in\mathcal{H}} \frac{w_{m-1,h}}{W_{m-1}}\left(1-\frac{w_{m,h}}{w_{m-1,h}}\right) = \frac{W_{m-1}-W_{m}}{W_{m-1}}\:.
\end{align*}
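The identity $\alpha_m=(W_{m-1}-W_m)/W_{m-1}$ is a one-line telescoping computation, verified directly below with arbitrary illustrative weights.

```python
w_prev = [0.9, 0.4, 0.7]  # arbitrary weights w_{m-1,h}
w_curr = [0.6, 0.3, 0.7]  # nonincreasing weights w_{m,h}
W_prev, W_curr = sum(w_prev), sum(w_curr)
# alpha_m = sum_h P(h_{m-1} = h) * (1 - w_{m,h} / w_{m-1,h})
alpha = sum((wp / W_prev) * (1 - wc / wp)
            for wp, wc in zip(w_prev, w_curr))
assert abs(alpha - (W_prev - W_curr) / W_prev) < 1e-12
```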
Since $W_{\tau_{i+1}}/W_{\tau_i+1}\geq \eta^{2(\tau_{i+1}-\tau_i-1)}$, we have
\begin{align*}
\sum_{m=\tau_i+1}^{\tau_{i+1}} \alpha_m &\leq 1-\sum_{m=\tau_i+2}^{\tau_{i+1}} \log(1-\alpha_m) = 1-\sum_{m=\tau_i+2}^{\tau_{i+1}} \log\left(\frac{W_m}{W_{m-1}}\right) = 1+\log\left(\frac{W_{\tau_{i+1}}}{W_{\tau_i+1}}\right)\\
&\leq 1+2\sqrt{2}(\tau_{i+1}-\tau_i-1)(1-\eta) = 1+4(\tau_{i+1}-\tau_i-1)\sqrt{\log(K)/T^{1-\alpha}}\,.
\end{align*}
Therefore, during time $(t_n-1)/T^\alpha \leq m< (t_{n+1}-1)/T^\alpha$, the algorithm will switch at most $K+4(t_{n+1}-t_n)\sqrt{\log(K)/T^{1-\alpha}}+1$ times in expectation,
which results in $\reg{2}_c \leq 4\delta\sqrt{\log(K)T^{1+\alpha}}+\delta N(K+1)T^{\alpha} = O(\sqrt{\log(K)T^{1+\alpha}}+NKT^{\alpha})$.
\end{proof}
\section{Introduction}
\vspace{-0.05in}
The online learning problem has been studied extensively in the literature and used increasingly in many applications including hiring, advertising, and recommender systems. One classical problem in online learning is prediction with expert advice, in which a decision maker makes a sequence of $T$ decisions with access to $K$ strategies (also called ``experts''). At each time step, the decision maker observes a scalar-valued loss of each expert. The standard objective is to perform as well as the best expert in hindsight. For example, a recruiter (the decision maker) sequentially decides which job applicants to hire with the objective of minimizing errors (of hiring an unqualified applicant and rejecting a qualified one). However, this may give rise to social concerns, since the decision receiver has a different objective (getting a job) that receives no attention. This problem can be modeled as an online learning problem with the primary loss (for the decision maker) and secondary loss (for the decision receiver). Taking the social impact into consideration, we ask the following question:
\vspace{0.03in}
{\centerline{\em Can we achieve low regret with respect to the primary loss, while performing}}
{\centerline{\em
not much worse than the worst expert with respect to the secondary loss?}}
\vspace{-0.05in}
Unfortunately, we answer this question negatively. More generally, we consider a bicriteria goal of minimizing the regret to the best expert with respect to the primary loss while minimizing the regret to a linear threshold $cT$ with respect to the secondary loss for some $c$. When the value of $c$ is set to the average secondary loss of the worst expert with respect to the secondary loss, the objective reduces to no-regret for the primary loss while performing no worse than the worst expert with respect to the secondary loss. Other examples, e.g., the average secondary loss of the worst expert with respect to the secondary loss among the experts with optimal primary loss, lead to different criteria of the secondary loss. Therefore, with the notion of regret to the linear threshold, we are able to study a more general goal. Based on this goal, we pose the following two questions:
\begin{enumerate}
\vspace{-0.1in}
\item If all experts have secondary losses no greater than $cT+o(T)$ for some $c$, can we achieve
no-regret (i.e., perform comparably to the best expert) for the primary loss while achieving secondary loss no worse than $cT+o(T)$?\label{q1}
\item If we are given some external oracles to deactivate some ``bad'' experts with unsatisfactory secondary loss, can we perform as well as each expert with respect to the primary loss during the time they are active while achieving secondary loss no worse than $cT+o(T)$?\label{q2}
\end{enumerate}
\vspace{-0.1in}
These two questions are trivial in the i.i.d. setting, as we can learn the best expert with respect to the primary loss within $O(\log(T))$ rounds and simply follow it thereafter. In this paper, we focus on answering these two questions in the adversarial online setting.
\vspace{-0.1in}
\subsection{Contributions}
\vspace{-0.05in}
\paragraph{An impossibility result without a bounded variance assumption}
We show that without any constraints on the variance of the secondary loss, even if all experts have secondary loss no greater than $cT$, achieving no-regret with respect to the primary loss while bounding the secondary loss by $cT+o(T)$ is still unachievable. This negatively answers our motivating question: it is impossible to achieve low regret with respect to the primary loss while performing not much worse than the worst expert with respect to the secondary loss. This result explains why minimizing one loss while bounding another
is non-trivial and why applying existing algorithms for scalar-valued losses after scalarizing the primary and secondary losses does not work. We propose an assumption on the experts: the secondary loss of each expert during any time interval does not exceed $cT$ by more than $O(T^\alpha)$ for some $\alpha\in [0,1)$.
Then we study the problem in two scenarios: a ``good'' one in which all experts satisfy this assumption, and a ``bad'' one in which experts only partially satisfy this assumption and we are given access to an external oracle that deactivates and reactivates experts.
\vspace{-0.05in}
\paragraph{Our results in the ``good'' scenario}
In the ``good'' scenario, we show that running an algorithm with a limited number of switching rounds, such as Follow the Lazy Leader~\citep{kalai2005efficient} or Shrinking Dartboard (SD)~\citep{geulen2010regret}, achieves both the regret to the best expert with respect to the primary loss and the regret to $cT$ with respect to the secondary loss at $O(T^{\frac{1+\alpha}{2}})$. We also provide a lower bound of $\Omega(T^\alpha)$.
From another perspective, we relax the ``good'' scenario constraint by introducing adaptiveness to the secondary loss and constraining the variance of the secondary loss between any two switchings of the algorithm instead of that of any expert. We show that in this weaker version of the ``good'' scenario, the upper bound achieved by switching-limited algorithms matches the lower bound at $\Theta(T^{\frac{1+\alpha}{2}})$.
\vspace{-0.05in}
\paragraph{Our results in the ``bad'' scenario}
In the ``bad'' scenario, we assume that we are given an external oracle that determines which experts to deactivate as they fail to satisfy the bounded variance assumption. We study two oracles here. One oracle deactivates the experts that do not satisfy the bounded variance assumption as soon as a violation is detected and never reactivates them. The other one reactivates the inactive experts at fixed rounds. In this framework, we are limited to selecting among the active experts at each round, and we adopt a more general metric, the sleeping regret, to measure the performance with respect to the primary loss. We provide algorithms for the two oracles with theoretical guarantees on the sleeping regret with respect to the primary loss and the regret to $cT$ with respect to the secondary loss.
\vspace{-0.1in}
\subsection{Related work}
\vspace{-0.05in}
One line of closely related work is online learning with a multi-objective criterion. A bicriteria setting which examines not only the regret to the best expert but also the regret to a fixed mixture of all experts is investigated by~\cite{even2008regret,kapralov2011prediction,sani2014exploiting}. The objective of~\cite{even2009online} is to learn an optimal static allocation over experts with respect to a global cost function. Another multi-objective criterion called the Pareto regret frontier, studied by~\cite{koolen2013pareto}, examines the regret to each expert. Different from our work, all these criteria are studied in the setting of scalar-valued losses. The problem of multiple loss functions is studied by~\cite{chernov2009prediction} under a heavy geometric restriction on the loss functions. For vector losses, one fundamental concept is the Pareto front, the set of feasible points none of which is dominated by any other point given several criteria to be optimized~\citep{hwang2012multiple,auer2016pareto}. However, the Pareto front contains unsatisfactory solutions, such as the one minimizing the secondary loss, which implies that learning the Pareto front cannot achieve our goal. Another classical concept is approachability, in which a learner aims at making the averaged vector loss converge to a pre-specified target set~\citep{blackwell1956analog,abernethy2011blackwell}. However, we show that our fair solution is unapproachable without additional bounded variance assumptions. Approachability to an expansion target set based on the losses in hindsight is studied by~\cite{mannor2014approachability}. However, the expansion target set is not guaranteed to meet our criteria. Multi-objective criteria have also been studied in multi-armed bandits~\citep{turgay2018multi}.
\vspace{-0.1in}
\section{Model}\label{sec:model}
\vspace{-0.05in}
We consider the adversarial online learning setting with a set of $K$ experts $\mathcal{H} = \{1,\ldots,K\}$. At round $t=1,2,\ldots,T$, given an active expert set $\mathcal{H}_t\subseteq \mathcal{H}$, an online learner $\mathcal{A}$ computes a probability distribution $p_t\in \Delta_K$ over $\mathcal{H}$ with support only over $\mathcal{H}_t$ and selects one expert from $p_t$. Simultaneously an adversary selects two loss vectors $\loss{1}_t,\loss{2}_t\in [0,1]^K$, where $\loss{1}_{t,h}$ and $\loss{2}_{t,h}$ are the primary and secondary losses of expert $h\in \mathcal{H}$ at time $t$. Then $\mathcal{A}$ observes the loss vectors and incurs expected losses $\loss{i}_{t,\mathcal{A}}=p_t^\top \loss{i}_t$ for $i\in \{1,2\}$. Let $\Loss{i}_{T,h} = \sum_{t=1}^T\loss{i}_{t,h}$ denote the loss of expert $h$ and $\Loss{i}_{T,\mathcal{A}}=\sum_{t=1}^Tp_t^\top \loss{i}_t$ denote the loss of algorithm $\mathcal{A}$ for $i\in\{1,2\}$ during the first $T$ rounds.
We will begin by focusing on the case that the active expert set $\mathcal{H}_t =\mathcal{H}$.
\vspace{-0.1in}
\subsection{Regret notions}
\vspace{-0.05in}
Traditionally, the regret (to the best) is used to measure the scalar-valued loss performance of a learner, which compares the loss of the learner and the best expert in hindsight. Similar to~\cite{even2008regret}, we adopt the regret notion of $\mathcal{A}$ with respect to the primary loss as
\begin{align*}
\reg{1} \triangleq \max\left({\Loss{1}_{T,\mathcal{A}}- \min_{h\in \mathcal{H}} \Loss{1}_{T,h}},1\right)\:.
\end{align*}
We introduce another metric for the secondary loss called {\em regret to $cT$} for some $c\in[0,1]$, which compares the secondary loss of the learner with a linear term $cT$,
\begin{align*}
\reg{2}_c \triangleq \max\left(\Loss{2}_{T,\mathcal{A}}- cT,1\right)\:.
\end{align*}
Sleeping experts were developed to model the problem in which not all experts are available at all times~\citep{blum1997empirical,freund1997using}. At each round, each expert $h\in \mathcal{H}$ decides whether to be active, and the learner can then only select among the active experts, i.e., place non-zero probability $p_{t,h}$ only on active experts. The goal is to perform as well as $h^*$ in the rounds where $h^*$ is active, for all $h^*\in \mathcal{H}$. We denote by $\mathcal{H}_t$ the set of active experts at round $t$. The sleeping regret for the primary loss with respect to expert $h^*$ is defined as
\begin{align*}
\sreg{1}(h^*) \triangleq \max\left(\sum_{t:h^*\in\mathcal{H}_t}\sum_{h\in \mathcal{H}_t}p_{t,h}\loss{1}_{t,h} - \sum_{t:h^*\in\mathcal{H}_t} \loss{1}_{t,h^*},1\right)\:.
\end{align*}
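For concreteness, the three regret notions above can be computed directly from full loss tables. The following Python sketch is our own illustration; the function name and interface are made up for exposition and are not part of the paper.

```python
# Our illustrative implementation of Reg^1, Reg^2_c and the sleeping regret.
# p[t][h] is the learner's distribution, l1/l2 are the loss tables, and
# active[t] is the set of awake experts at round t.

def regrets(p, l1, l2, c, active):
    T, K = len(l1), len(l1[0])
    alg1 = sum(p[t][h] * l1[t][h] for t in range(T) for h in range(K))
    best1 = min(sum(l1[t][h] for t in range(T)) for h in range(K))
    reg1 = max(alg1 - best1, 1.0)

    alg2 = sum(p[t][h] * l2[t][h] for t in range(T) for h in range(K))
    reg2_c = max(alg2 - c * T, 1.0)

    def sreg1(h_star):
        # only the rounds in which h_star is awake count
        rounds = [t for t in range(T) if h_star in active[t]]
        alg = sum(p[t][h] * l1[t][h] for t in rounds for h in active[t])
        ref = sum(l1[t][h_star] for t in rounds)
        return max(alg - ref, 1.0)

    return reg1, reg2_c, sreg1
```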
The sleeping regret notion we adopt here is different from the regret to the best ordering of experts in the sleeping expert setting of~\cite{kleinberg2010regret}. Since achieving the optimal regret bound in Kleinberg's setting is computationally hard~\citep{kanade2014learning}, we focus on the sleeping regret notion defined above.
\vspace{-0.1in}
\subsection{Assumptions}
\vspace{-0.05in}
Following standard terminology, we call an adversary oblivious if her selections are independent of the learner's actions. Otherwise, we call the adversary adaptive. First, we assume that the primary loss is oblivious. This is a common assumption in the online learning literature, and it holds throughout the paper.
\begin{assumption}\label{asp:loss1}
The primary losses $\{\loss{1}_t\}_{t\in[T]}$ are oblivious.
\end{assumption}
For an expert $h\in\mathcal{H}$, we propose a bounded variance assumption on her secondary loss: the average secondary loss over any interval does not exceed $c$ by much. More formally, the assumption is stated below.
\begin{assumption}\label{asp:intv}
For some given $c, \delta,\alpha \in [0,1]$ and for all expert $h\in \mathcal{H}$, for any $T_1,T_2\in [T]$ with $T_1\leq T_2$,
\begin{align*}
\sum_{t=T_1}^{T_2}(\loss{2}_{t,h}-c)\leq \delta T^\alpha\:.
\end{align*}
\end{assumption}
We show that such a bounded variance assumption is necessary in Section~\ref{sec:neg}.
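Although Assumption~\ref{asp:intv} quantifies over all $\Theta(T^2)$ intervals, it can be verified for a given loss sequence in $O(T)$ time, since the maximum interval sum of $\loss{2}_{t,h}-c$ obeys a Kadane-style recursion. The following is a minimal sketch of ours, not part of the paper:

```python
# Checking Assumption 2 for one expert in O(T): the best interval sum
# ending at round t satisfies m_t = max(m_{t-1}, 0) + (l2_t - c), so the
# maximum over all intervals [T1, T2] is the running maximum of m_t.

def satisfies_bounded_variance(l2, c, delta, alpha):
    T = len(l2)
    bound = delta * T ** alpha
    m = 0.0                    # best interval sum ending at the current round
    best = float("-inf")
    for x in l2:
        m = max(m, 0.0) + (x - c)
        best = max(best, m)
    return best <= bound
```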
We call a scenario ``good'' if all experts satisfy Assumption~\ref{asp:intv}. Otherwise, we call the scenario ``bad''. This ``good'' constraint can be relaxed by introducing adaptiveness to the secondary loss. We have a relaxed version of the ``good'' scenario in which the average secondary loss between any two switches does not exceed $c$ by much for any algorithm. More formally,
\begin{customasp}{2$'$}\label{asp:intv2}
For some given $c, \delta,\alpha \in [0,1]$, for any algorithm $\mathcal{A}$, let $\mathcal{A}_t\in \mathcal{H}$ denote the selected expert at round $t$. For any expert $h\in \mathcal{H}$ and $T_1\in [T]$ such that $\mathcal{A}_{T_1}=h$ and $\mathcal{A}_{T_1-1}\neq h$ (where $\mathcal{A}_{T+1} = \mathcal{A}_{0}=0$ for notational simplicity), we have
\begin{align*}
\sum_{\tau =T_1}^{\min_{t >T_1: \mathcal{A}_t \neq h}t-1}\left(\loss{2}_{\tau,h}-c\right)\leq \delta T^\alpha\:.
\end{align*}
\end{customasp}
In the ``good'' scenario, the active expert set $\mathcal{H}_t =\mathcal{H}$ for all rounds and the goal is minimizing both $\reg{1}$ and $\reg{2}_c$. In the ``bad'' scenario, we consider that we are given an oracle which determines $\mathcal{H}_t$ at each round and the goal is minimizing $\sreg{1}(h^*)$ for all $h^*\in \mathcal{H}$ and $\reg{2}_c$.
\vspace{-0.1in}
\section{Impossibility result without any bounded variance assumption}\label{sec:neg}
\vspace{-0.05in}
In this section, we show that without any additional assumption on the secondary loss, even if all experts have secondary loss no greater than $cT$ for some $c\in [0,1]$, there exists an adversary such that any algorithm incurs $\mathbb{E}[{\max(\reg{1},\reg{2}_c)}]=\Omega(T)$.
\begin{theorem}\label{thm:notwork}
Given a fixed expert set $\mathcal{H}$, there exists an adversary such that any algorithm will incur $\mathbb{E}[{\max(\reg{1}, \reg{2}_c)}]=\Omega(T)$ with $c = \max_{h\in \mathcal{H}} \Loss{2}_{T,h}/T$, where the expectation is taken over the randomness of the adversary.
\end{theorem}
\vspace{-0.1in}
\begin{proof}
To prove this theorem, we construct a binary classification example as below.
In a binary classification problem, for each sample with true label $y\in\{+,-\}$ and prediction $\hat{y}\in\{+,-\}$, the primary loss is defined as the expected $0/1$ loss for incorrect prediction, i.e., $\EEs{y,\hat{y}}{\II{\hat{y}\neq y}}$, and the secondary loss is defined as the expected $0/1$ loss for false negatives, i.e., $\EEs{y,\hat{y}}{\II{\hat{y}\neq y,y=+}}$. We denote by $h(b)$ the expert predicting $-$ with probability $b$ and $+$ otherwise. Then every expert can be represented by a sequence of values of $b$. At round $t$, the true label is negative with probability $a$. We divide $T$ into two phases evenly, $\{1,\ldots,T/2\}$ and $\{T/2+1,\ldots,T\}$, in each of which the adversary generates outcomes with a different value of $a$ and the two experts $\mathcal{H}=\{h_1,h_2\}$ have different values of $b$. We construct two worlds with different values of $a$ and $b$ in phase $2$, so any algorithm must behave identically in phase $1$ of both worlds. The adversary randomly chooses one world with equal probability. The specific values of $a$ and $b$ are given in Table~\ref{tab:bin}. Let $c=1/16$.
\vspace{-0.1in}
\begin{table}[H]
\caption{The values of $a$ and $b$ in different phases for the binary classification example.}\label{tab:bin}
\centering
\begin{tabular}{c|c|c|c}
\toprule
experts\textbackslash phase&$1: a = \frac{5}{8}$& $2: a = \frac{3}{4}$ (world I)&$2: a = \frac{5}{8}$ (world II)\\
\cmidrule{1-4}
$h_1$ & $b =\frac{1}{6}$ &$b = 0$& $b =\frac{1}{6}$\\
\cmidrule{1-4}
$h_2$ & $b =0$&$b = \frac{1}{2}$ & $b =0$\\
\bottomrule
\end{tabular}
\end{table}
\vspace{-0.1in}
The loss of expert $h(b)$ is
${\loss{1}_{t,h(b)}}=(1-a)b+a(1-b)$ and ${\loss{2}_{t,h(b)}}=(1-a)b$.
In phase $1$ and phase $2$ of world II, ${\loss{1}_{t,h_1}}= 7/12$, ${\loss{2}_{t,h_1}}=1/16$, ${\loss{1}_{t,h_2}}={5}/{8}$ and ${\loss{2}_{t,h_2}}=0$. In phase $2$ of world I, ${\loss{1}_{t,h_1}}=3/4$, ${\loss{2}_{t,h_1}}=0$, ${\loss{1}_{t,h_2}}={1}/{2}$ and ${\loss{2}_{t,h_2}}=1/8$. For any $h\in \mathcal{H}$, we have $\Loss{2}_{T,h}\leq T/16$.
Consider any algorithm which selects $h_1$ for $T_1$ rounds (in expectation) in phase $1$ and for $T_2$ rounds (in expectation) in phase $2$ of world I. If $T_1\leq T/4$, then ${\reg{1}} \geq (T/2-T_1)/24\geq T/96$ in world II; else if $T_1>T/4$ and $T_2\geq T_1/4$, then ${\reg{1}} \geq T_2/4-T_1/24\geq T/192$ in world I; else ${\reg{2}_c}= T_1/16+(T/2-T_2)/8-T/16 = (T_1-2T_2)/16\geq T/128$ in world I. In any case, we have $\mathbb{E}[{\max(\reg{1}, \reg{2}_c)}]=\Omega(T)$.
\end{proof}
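The per-round losses quoted in the proof follow from $\loss{1}_{t,h(b)}=(1-a)b+a(1-b)$ and $\loss{2}_{t,h(b)}=(1-a)b$. As a sanity check of ours (not part of the paper), the table's values can be verified with exact rational arithmetic:

```python
# Sanity check of the loss values used in the proof of Theorem 1,
# using exact rationals to avoid floating-point comparisons.
from fractions import Fraction as F

def losses(a, b):
    # expected primary and secondary losses of expert h(b) under label bias a
    return (1 - a) * b + a * (1 - b), (1 - a) * b

# phase 1 and phase 2 of world II: a = 5/8
assert losses(F(5, 8), F(1, 6)) == (F(7, 12), F(1, 16))   # h1
assert losses(F(5, 8), F(0)) == (F(5, 8), 0)              # h2
# phase 2 of world I: a = 3/4
assert losses(F(3, 4), F(0)) == (F(3, 4), 0)              # h1
assert losses(F(3, 4), F(1, 2)) == (F(1, 2), F(1, 8))     # h2
```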
\vspace{-0.1in}
The proof of Theorem~\ref{thm:notwork} implies that an expert whose total secondary loss is no greater than $cT$ but whose secondary loss is high at the beginning will consume much of the budget for the secondary loss, which makes switching to other experts with low primary loss later costly in terms of the secondary loss. The theorem answers our first question negatively, i.e., we are unable to achieve no-regret for the primary loss while performing as well as the worst expert with respect to the secondary loss.
\vspace{-0.1in}
\section{Results in the ``good'' scenario}\label{sec:good}
\vspace{-0.05in}
In this section, we consider the problem of minimizing $\max(\reg{1}, \reg{2}_c)$ under Assumption~\ref{asp:intv} or~\ref{asp:intv2}. We first provide lower bounds of $\Omega(T^\alpha)$ under Assumption~\ref{asp:intv} and of $\Omega(T^\frac{1+\alpha}{2})$ under Assumption~\ref{asp:intv2}. Then we show that applying any switching-limited algorithm, such as Shrinking Dartboard (SD)~\citep{geulen2010regret} or Follow the Lazy Leader (FLL)~\citep{kalai2005efficient}, achieves $\max(\reg{1}, \reg{2}_c) = O(T^\frac{1+\alpha}{2})$ under Assumption~\ref{asp:intv} or~\ref{asp:intv2}, which matches the lower bound under Assumption~\ref{asp:intv2}.
\vspace{-0.1in}
\subsection{Lower bound}
\vspace{-0.05in}
\begin{theorem}\label{thm:lb1}
If Assumption~\ref{asp:intv} holds with some given $c,\delta,\alpha$, then there exists an adversary such that any algorithm incurs $\mathbb{E}[{\max(\reg{1}, \reg{2}_c)}]=\Omega(T^\alpha)$.
\end{theorem}
\vspace{-0.1in}
\begin{proof}
We construct a binary classification example to prove the lower bound.
The losses and the experts $\mathcal{H} = \{h_1,h_2\}$ are defined based on $h(b)$ in the same way as that in the proof of Theorem~\ref{thm:notwork}. We divide $T$ into $3$ phases, the first two of which have $T^\alpha$ rounds and the third has $T-2T^\alpha$ rounds. Each expert has different $b$s in different phases as shown in Table~\ref{tab:lbbin}. At each time $t$, the sample is negative with probability $3/4$. We set $c=0$.
Since $(\loss{1}_{t,h(0)},\loss{2}_{t,h(0)}) = (3/4,0)$ and $(\loss{1}_{t,h(1)},\loss{2}_{t,h(1)}) = (1/4,1/4)$, the cumulative losses of both experts are $(\Loss{1}_{T,h},\Loss{2}_{T,h}) = (3T/4-T^\alpha/2,T^\alpha/4)$. Any algorithm $\mathcal{A}$ achieving $\Loss{1}_{T,\mathcal{A}}\leq 3T/4-T^\alpha/4$ will incur $\reg{2}_c\geq T^\alpha/8$.
\end{proof}
\vspace{-0.1in}
\begin{table}[H]
\caption{The values of $b$ in different phases for the binary classification example.}\label{tab:lbbin}
\centering
\begin{tabular}{c|c|c|c}
\toprule
experts\textbackslash phase&$1: T^\alpha$ & $2:T^\alpha$&$3:T-2T^\alpha$\\
\cmidrule{1-4}
$h_1$ & $b =1$ &$b = 0$& $b =0$\\
\cmidrule{1-4}
$h_2$ & $b =0$ &$b = 1$& $b =0$ \\
\bottomrule
\end{tabular}
\end{table}
\vspace{-0.1in}
Combined with the classical lower bound of $\Omega(\sqrt{T})$ in online learning~\citep{cesa2006prediction}, $\mathbb{E}[\max(\reg{1}, \reg{2}_c)]=\Omega(\max(T^\alpha, \sqrt{T}))$. In the relaxed version of the ``good'' scenario, we have the following theorem.
\begin{restatable}{theorem}{restatelb}\label{thm:lb2}
If Assumption~\ref{asp:intv2} holds with some given $c,\delta,\alpha$, then there exists an adversary such that any algorithm incurs $\mathbb{E}[\max(\reg{1}, \reg{2}_c)]=\Omega(T^{\frac{1+\alpha}{2}})$.
\end{restatable}
\vspace{-0.1in}
\paragraph{Sketch of the proof}
Inspired by the proof of the lower bound by~\cite{altschuler2018online}, we construct an adversary such that any algorithm achieving $\reg{1}=O(T^{\frac{1+\alpha}{2}})$ has to switch a certain number of times. For the secondary loss, the adversary sets $\loss{2}_{t,h}=c$ only if $h$ has been selected for more than $T^\alpha$ consecutive rounds up to time $t-1$; otherwise $\loss{2}_{t,h}=c+\delta$. In this case, every switch increases the secondary loss. Then we can show that either $\reg{1}$ or $\reg{2}_c$ is $\Omega(T^{\frac{1+\alpha}{2}})$. The complete proof can be found in Appendix~\ref{apd:lb2}.
\vspace{-0.1in}
\subsection{Algorithm}
\vspace{-0.05in}
Under Assumption~\ref{asp:intv} or~\ref{asp:intv2}, we are likely to suffer an extra $\delta T^\alpha$ secondary loss every time we switch from one expert to another. Inspired by this, we can upper bound $\max(\reg{1}, \reg{2}_c)$ by limiting the number of switches. Given a switching-limited learner $\mathcal{L}$ on scalar-valued losses, e.g., Shrinking Dartboard (SD)~\citep{geulen2010regret} or Follow the Lazy Leader (FLL)~\citep{kalai2005efficient}, our algorithm $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ is described as follows.
We divide the time horizon into $T^{1-\alpha}$ epochs evenly and within each epoch we select the same expert. Let $e_i = \{(i-1)T^\alpha +1,\ldots,i T^\alpha\}$ denote the $i$-th epoch and $\loss{1}_{e_i,h}=\sum_{t\in e_i} \loss{1}_{t,h}/T^{\alpha}$ denote the average primary loss of the $i$-th epoch. We apply $\mathcal{L}$ over $\{\loss{1}_{e_i,h}\}_{h\in\mathcal{H}}$ for $i=1,\ldots,T^{1-\alpha}$. Let $s_{\mathrm{SL}}(E)$ and $r_{\mathrm{SL}}(E)$ denote the expected number of switching times and the regret of running $\mathcal{L}$ for $E$ rounds. Then we have the following theorem.
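The epoch-based reduction can be sketched as follows. Here `learner` stands for any switching-limited learner $\mathcal{L}$ (e.g., SD or FLL); its `select`/`update` interface is an assumption of this sketch, not the API of any published implementation:

```python
# Our sketch of A_SL(L): play in epochs of length ~T^alpha and feed the
# per-epoch average primary losses to a switching-limited learner L.

def run_A_SL(learner, l1, T, alpha):
    """l1[t][h]: primary losses; returns the expert played at each round."""
    epoch_len = max(1, int(round(T ** alpha)))
    K = len(l1[0])
    played = []
    for start in range(0, T, epoch_len):
        epoch = range(start, min(start + epoch_len, T))
        h = learner.select()              # one expert for the whole epoch
        played.extend(h for _ in epoch)
        avg = [sum(l1[t][k] for t in epoch) / len(epoch) for k in range(K)]
        learner.update(avg)               # L only ever sees epoch averages
    return played
```

Since $\mathcal{L}$ is invoked only once per epoch, every switch of $\mathcal{L}$ translates into at most one switch of the wrapper per epoch boundary, which is what the proof of the theorem below exploits.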
\begin{theorem}\label{thm:alg}
Under Assumption~\ref{asp:intv} or~\ref{asp:intv2}, given a switching-limited learner $\mathcal{L}$, $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ achieves $\reg{1} \leq T^\alpha{r_{\mathrm{SL}}(T^{1-\alpha})}$ and $\reg{2}_c \leq \delta T^\alpha ({s_{\mathrm{SL}}(T^{1-\alpha})}+1)$. By adopting SD or FLL as the learner $\mathcal{L}$, $\mathcal{A}_{\mathrm{SL}}(\mathrm{SD})$ and $\mathcal{A}_{\mathrm{SL}}(\mathrm{FLL})$ achieve $\max(\reg{1}, \reg{2}_c) = O(\sqrt{\log(K)T^{{1+\alpha}}})$.
\end{theorem}
\vspace{-0.1in}
\begin{proof}
It is obvious that $\reg{1} \leq T^\alpha{r_{\mathrm{SL}}(T^{1-\alpha})}$. We denote by $S$ the random variable of the total number of switching times and $\tau_1,\ldots,\tau_S$ the time steps the algorithm switches. For notation simplicity, let $\tau_0 =1$ and $\tau_{S+1} = T+1$. Then $\reg{2}_c =\mathbb{E}_{\mathcal{A}}[{\sum_{t=1}^T (\loss{2}_{t,\mathcal{A}_t}-c)}]\leq \mathbb{E}_{\mathcal{A}}[{\sum_{s=0}^S\sum_{t=\tau_{s}}^{\tau_{s+1}-1}(\loss{2}_{t,\mathcal{A}_t}-c)}] \leq \mathbb{E}_{\mathcal{A}}[{\sum_{s=0}^S\delta T^\alpha}] = \delta T^\alpha (s_{\mathrm{SL}}(T^{1-\alpha})+1)$. Both SD and FLL have ${s_{\mathrm{SL}}(T^{1-\alpha})}=O(\sqrt{\log(K)T^{1-\alpha}})$ and ${r_{\mathrm{SL}}(T^{1-\alpha})}=O(\sqrt{\log(K)T^{1-\alpha}})$~\citep{geulen2010regret,kalai2005efficient}, which completes the proof.
\end{proof}
\vspace{-0.1in}
$\mathcal{A}_{\mathrm{SL}}(\mathrm{SD})$ and $\mathcal{A}_{\mathrm{SL}}(\mathrm{FLL})$ match the lower bound of $\Theta(T^\frac{1+\alpha}{2})$ under Assumption~\ref{asp:intv2}. But there is a gap between the upper bound $O(T^\frac{1+\alpha}{2})$ and the lower bound $\Omega(T^\alpha)$ under Assumption~\ref{asp:intv}, which is left as an open question. We investigate this question by answering negatively whether the analysis of $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ can be improved to achieve $O(T^\alpha)$. We define a class of algorithms which depend only on the cumulative losses of the experts, i.e., there exists a function $g: \mathbb{R}^{2K}\mapsto \Delta^K$ such that $p_t = g(\Loss{1}_{t-1},\Loss{2}_{t-1})$. Many classical algorithms such as Exponential Weights~\citep{littlestone1989weighted} and Follow the Perturbed Leader~\citep{kalai2005efficient} belong to this class. The following theorem shows that any algorithm depending only on the cumulative losses cannot achieve a better bound than $\Omega(T^{\frac{1+\alpha}{2}})$, which provides some intuition for designing algorithms in future work. The detailed proof can be found in Appendix~\ref{apd:subopt}.
\begin{restatable}{theorem}{restatesubopt}\label{thm:subopt}
Under Assumption~\ref{asp:intv}, for any algorithm only dependent on the cumulative losses of the experts, $\mathbb{E}[{\max(\reg{1}, \reg{2}_c)}] = \Omega(T^\frac{1+\alpha}{2})$.
\end{restatable}
\vspace{-0.1in}
\section{Results in the ``bad'' scenario}\label{sec:bad}
\vspace{-0.05in}
In the ``bad'' scenario, some experts may have secondary losses with high variance. To compete with the best expert over the period in which it has low variance, we assume that the learner is given some fixed external oracle determining which experts to deactivate and reactivate. In this section, we consider the goal of minimizing $\sreg{1}(h^*)$ for all $h^*\in \mathcal{H}$ and $\reg{2}_c$. Here we study two oracles: one deactivates an ``unsatisfactory'' expert upon detecting high variance of its secondary loss and never reactivates it; the other deactivates such an expert upon detection and reactivates it at fixed time steps.
\vspace{-0.1in}
\subsection{The first oracle: deactivating the ``unsatisfactory'' experts}
\vspace{-0.05in}
The oracle is described as below. The active expert set is initialized to contain all experts $\mathcal{H}_1 = \mathcal{H}$. At time $t=1,\ldots,T$, we let $\Delta\mathcal{H}_{t}=\{h\in \mathcal{H}_{t}: \exists t' \leq t, \sum_{\tau = t'}^{t}(\loss{2}_{\tau,h}-c)> \delta T^\alpha\}$ denote the set of active experts which do not satisfy Assumption~\ref{asp:intv}. Then we remove these experts from the active expert set, i.e., $\mathcal{H}_{t+1} = \mathcal{H}_{t}\setminus \Delta\mathcal{H}_{t}$. We assume that there always exist some active experts, i.e. $\mathcal{H}_T\neq \emptyset$.
One direct approach is to run $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ as a subroutine and restart it at time $t$ if some experts were deactivated at the end of round $t-1$, i.e., $\Delta\mathcal{H}_{t-1}\neq \emptyset$. However, restarting leads to a linear dependency on $K$ in the sleeping regrets. To avoid this, we construct pseudo primary losses for each expert such that if $h$ is active at time $t$, $\tloss{1}_{t,h}= \loss{1}_{t,h}$; otherwise, $\tloss{1}_{t,h}= 1$. The probability of selecting inactive experts decays due to their high pseudo losses. For the inactive experts we cannot select, we construct a mapping $f:\mathcal{H} \mapsto \mathcal{H}$ which maps each expert to an active expert. If $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ decides to select an inactive expert $h$ at time $t$, we select $f(h)$ instead. The detailed algorithm is described in Algorithm~\ref{alg:orc1}. Although the algorithm takes $\alpha$ as an input, it is worth mentioning that it only uses $\alpha$ to decide the length of each epoch; we can choose a different epoch length and derive different regret upper bounds.
\begin{algorithm}[ht] \caption{$\mathcal{A}_{1}$}\label{alg:orc1}
{\begin{algorithmic}[1]
\STATE {\bfseries Input:} $T$, $\mathcal{H}$, $\alpha$ and a learner $\mathcal{L}$
\STATE Initialize $f(h) = h$ for all $h\in \mathcal{H}$.
\STATE Start an instance $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$.
\FOR{$t=1,\ldots,T$}
\STATE Get expert $h_t$ from $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$.
\STATE Select expert $f(h_t)$.
\STATE Feed $\tloss{1}_{t}$ to $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$.
\STATE For all $h$ with $f(h)\in \Delta\mathcal{H}_{t}$, set $f(h) = h_0$, where $h_0$ is any expert in $\mathcal{H}_{t+1}$.
\ENDFOR
\end{algorithmic}}
\end{algorithm}
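A compressed Python rendering of Algorithm~\ref{alg:orc1}'s bookkeeping may be helpful. Here `base` abstracts the $\mathcal{A}_{\mathrm{SL}}(\mathcal{L})$ subroutine behind an assumed `select`/`update` interface, so this is a sketch of our reading of the pseudocode rather than a definitive implementation:

```python
# Our sketch of Algorithm 1: pseudo primary losses for inactive experts
# plus a remapping f from every expert to some currently active expert.

def run_A1(base, l1, active, T, K):
    f = list(range(K))                  # initially f(h) = h
    selected = []
    for t in range(T):
        h = base.select()
        selected.append(f[h])           # play f(h) instead of h
        # inactive experts are charged the maximal pseudo loss 1
        pseudo = [l1[t][k] if k in active[t] else 1.0 for k in range(K)]
        base.update(pseudo)
        if t + 1 < T:
            newly_dead = active[t] - active[t + 1]
            if newly_dead:              # remap experts whose image just died
                h0 = next(iter(active[t + 1]))
                f = [h0 if f[k] in newly_dead else f[k] for k in range(K)]
    return selected
```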
\begin{theorem}
Let $T_{h^*}$ denote the number of rounds where expert $h^*$ is active. Running Algorithm~\ref{alg:orc1} with learner $\mathcal{L}$ being SD or FLL can achieve
\begin{align}
\sreg{1}(h^*) =O(\sqrt{\log(K)T_{h^*}T^\alpha})\:,\label{eq:sr1}
\end{align}
for all $h^*\in \mathcal{H}$ and
\begin{align}
\reg{2}_c= O(\sqrt{\log(K)T^{1+\alpha}} + K T^\alpha)\:.\label{eq:regc1}
\end{align}
\end{theorem}
\vspace{-0.1in}
\begin{proof}
Since $\loss{1}_{t, h}\leq \tloss{1}_{t,h}$ for all $t$ and $h$, we have
\begin{align*}
\sreg{1}(h^*) = &\left(\sum_{t=1}^{T_{h^*}}\EEs{\mathcal{A}}{\loss{1}_{t,\mathcal{A}_t}} - \sum_{t=1}^{T_{h^*}} \loss{1}_{t,h^*} \right)\leq \left(\sum_{t=1}^{T_{h^*}}\EEs{\mathcal{A}}{\tloss{1}_{t,\mathcal{A}_t}} - \sum_{t=1}^{T_{h^*}} \tloss{1}_{t,h^*} \right)
\\=&O(\sqrt{\log(K)T_{h^*}T^\alpha})\:,
\end{align*}
where the last step uses the results in Theorem~\ref{thm:alg}. It is straightforward to obtain $\reg{2}_c= O(\delta T^{\alpha}(\sqrt{\log(K)T^{1-\alpha}}+K))=O(\sqrt{\log(K)T^{1+\alpha}} + K T^\alpha)$, where the first term comes from the number of switches made by $\mathcal{A}_{\mathrm{SL}}$ and the second term comes from the extra switches caused by deactivating experts.
\end{proof}
\vspace{-0.1in}
For the sleeping regret of expert $h^*$, the right-hand side of Eq.~\eqref{eq:sr1} is $o(T_{h^*})$ if $T_{h^*}=\omega(T^\alpha)$, which is consistent with the impossibility result without bounded variance in Section~\ref{sec:neg}. When $\alpha \geq 1/2$, the right-hand side of Eq.~\eqref{eq:regc1} is dominated by $KT^\alpha$. This linear dependency on $K$ is inevitable if we want $\sreg{1}(h^*)=o(T_{h^*})$ for all $h^*\in \mathcal{H}$. The proof is given in Appendix~\ref{apd:linK}.
\begin{restatable}{theorem}{restatelinK}\label{thm:linK}
Let $T_{h^*}=\omega(T^\alpha)$ for all $h^*\in\mathcal{H}$. There exists an adversary such that any algorithm achieving $\sreg{1}(h^*)=o(T_{h^*})$ for all $h^*\in \mathcal{H}$ will incur $\reg{2}_c = \Omega(KT^\alpha)$ for $K=O(\log(T))$.
\end{restatable}
\vspace{-0.1in}
\subsection{The second oracle: reactivating at fixed times}
\vspace{-0.05in}
Now we consider the oracle which deactivates unsatisfactory experts once detected and reactivates them at fixed times. The oracle is described as follows. At given $N+1$ fixed time steps $t_0=1,t_1,\ldots,t_{N}$ with $t_{n+1}-t_{n}=\Omega(T^\beta)$ for some $\beta>\alpha$ (where $t_{N+1} = T+1$ for notational simplicity), the active expert set $\mathcal{H}_t$ is reset to $\mathcal{H}$. At time $t=t_{n},\ldots,t_{n+1}-2$ for any $n=0,\ldots,N$, the experts $\Delta\mathcal{H}_{t}=\{h\in \mathcal{H}_{t}: \exists t' \text{ such that } t_n \leq t' \leq t, \sum_{\tau = t'}^{t}(\loss{2}_{\tau,h}-c)> \delta T^\alpha\}$ are deactivated, i.e., $\mathcal{H}_{t+1} = \mathcal{H}_{t}\setminus \Delta\mathcal{H}_{t}$. We assume that there always exist some satisfactory experts, i.e., $\mathcal{H}_{t_{n}-1}\neq \emptyset$ for all $n=1,\ldots, N+1$.
Restarting Algorithm~\ref{alg:orc1} at $t=t_0,\ldots,t_{N}$ is one of the most direct methods. Let $T^{(n)}_{h^*}$ denote the number of rounds $h^*$ is active during $t=t_n,\ldots,t_{n+1}-1$ and $T_{h^*}=\sum_{n=0}^N T^{(n)}_{h^*}$ denote the total number of rounds $h^*$ is active. Then we have $\sreg{1}(h^*) =O(\sum_{n=0}^{N} \sqrt{\log(K)T^{(n)}_{h^*}T^{\alpha}})= O(\sqrt{\log(K)T_{h^*}T^{\alpha}N})$ and $\reg{2}_c = O(\sum_{n=0}^{N}( \sqrt{\log(K)T^\alpha (t_{n+1}-t_n)} + K\delta T^\alpha))=O(\sqrt{\log(K)T^{1+\alpha}N} + NKT^\alpha)$.
However, if all experts are active at all times, then the upper bound on $\sreg{1}(h^*)$ for the restarting algorithm is $O(\sqrt{\log(K)T^{1+\alpha}N}) = O(\sqrt{\log(K)T^{2+\alpha-\beta}})$, which is quite large. We consider a smarter algorithm with better sleeping regrets when $T_{h^*}$ is large. The algorithm combines the construction of meta experts for time-selection functions by~\cite{blum2007external}, which bounds the sleeping regrets, with expert selection based on SD~\citep{geulen2010regret} inside each interval, which bounds the number of switches. We run the algorithm in epochs of length $T^\alpha$ and within each epoch we play the same expert. For simplicity, we assume that the active expert set is updated only at the beginning of each epoch, which can be easily generalized. Let $e_i = \{(i-1)T^\alpha +1,\ldots,i T^\alpha\}$ denote the $i$-th epoch and $E=\{e_i\}_{i\in[T^{1-\alpha}]}$ denote the set of epochs. We let $\loss{1}_{e,h}=\sum_{t\in e} \loss{1}_{t,h}/T^{\alpha}$ and $\loss{1}_{e,\mathcal{A}}=\sum_{t\in e} \loss{1}_{t,\mathcal{A}_t}/T^{\alpha}$ denote the average primary losses of expert $h$ and of the algorithm, and we let $\mathcal{H}_{e}$ and $\Delta \mathcal{H}_{e}$ denote the active expert set at the beginning of epoch $e$ and the set of experts deactivated at the end of epoch $e$. We define the time-selection function for epoch $e$ as $I_{h^*}(e)=\mathds{1}(h^* \text{ is active in epoch }e)$ for each $h^*\in \mathcal{H}$, and we construct $K$ meta experts for each time-selection function. Similar to Algorithm~\ref{alg:orc1}, we adopt the same expert-mapping function $f$ and use pseudo losses $\tloss{1}_{e,h} = \loss{1}_{e,h}$ if $h$ is active and $\tloss{1}_{e,h} = 1$ otherwise. The detailed algorithm is shown as Algorithm~\ref{alg:orc2}. Then we have the following theorem, whose detailed proof is provided in Appendix~\ref{apd:ub}.
\begin{restatable}{theorem}{restateub}\label{thm:ub}
Running Algorithm~\ref{alg:orc2} can achieve
\begin{align*}
\sreg{1}(h^*) = O(\sqrt{\log(K)T^{1+\alpha}} +T_{h^*}\sqrt{\log(K)T^{\alpha-1}})\:,
\end{align*}
for all $h^*\in\mathcal{H}$ and
\begin{align*}
\reg{2}_c = O(\sqrt{\log(K)T^{1+\alpha}}+\log(K)T^{\alpha}N+NKT^{\alpha})\:.
\end{align*}
\end{restatable}
\vspace{-0.1in}
Algorithm~\ref{alg:orc2} achieves $o(T_{h^*})$ sleeping regrets for $h^*$ with $T_{h^*}=\omega(T^{\frac{1+\alpha}{2}})$ and outperforms restarting Algorithm~\ref{alg:orc1} when $NT_{h^*}=\omega(T)$. $\sreg{1}(h^*)$ of Algorithm~\ref{alg:orc2} is $O(\sqrt{\log(K)T_{h^*}^{1+\alpha}})$ when $T_{h^*} = \Theta(T)$, which matches the results in Theorem~\ref{thm:alg}.
\begin{algorithm}[ht] \caption{$\mathcal{A}_2$}\label{alg:orc2}
{\begin{algorithmic}[1]
\STATE {\bfseries Input:} $T$, $\mathcal{H}$, $\alpha$ and $\eta$
\STATE Initialize $f(h) = h$ for all $h\in \mathcal{H}$.
\STATE $w_{1,h}^{h^*} = \frac{1}{K}$ for all $h\in \mathcal{H}$, for all $h^*\in\mathcal{H}$.
\FOR{$m=1,\ldots, T^{1-\alpha}$}
\STATE $w_{m,h} =\sum_{h^*} I_{h^*}(e_m)w_{m,h}^{h^*}$, $W_m = \sum_{h}w_{m,h}$ and $p_{m,h} = \frac{w_{m,h}}{W_m}$.
\STATE \textbf{if} $m\in\{(t_n-1)/T^{1-\alpha}+1\}_{n=0}^N$ \textbf{then} get $h_m$ from $p_m$. \textbf{else}
\STATE With prob. $\frac{w_{m,h_{m-1}}}{w_{m-1,h_{m-1}}}$, get $h_m=h_{m-1}$; with prob. $1-\frac{w_{m,h_{m-1}}}{w_{m-1,h_{m-1}}}$, get $h_m$ from $p_m$.
\STATE \textbf{end if}
\STATE Select expert $f(h_m)$.
\STATE Update $w_{m+1,h}^{h^*} = w_{m,h}^{h^*}\eta^{I_{h^*}(e_m)(\tloss{1}_{e_m,h}-\eta \tloss{1}_{e_m,\mathcal{A}})+1}$ for all $h,h^*\in\mathcal{H}$.
\STATE For all $h$ with $f(h)\in \Delta\mathcal{H}_{e_m}$, set $f(h) = h_0$, where $h_0$ is any expert in $\mathcal{H}_{e_{m+1}}$.
\ENDFOR
\end{algorithmic}}
\end{algorithm}
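The switch-limiting step of Algorithm~\ref{alg:orc2} (line 7) can be isolated as the following Shrinking-Dartboard-style resampling rule. This is a sketch of ours, not the full algorithm; in particular, clipping the keep probability at $1$ is our safeguard, since unlike in SD the weights here need not be monotone:

```python
# Our sketch of the lazy resampling step: keep the previous expert with
# probability w_cur[h_prev] / w_prev[h_prev], otherwise redraw from w_cur.
import random

def sd_step(h_prev, w_prev, w_cur, rng=random):
    keep_prob = min(1.0, w_cur[h_prev] / w_prev[h_prev])
    if rng.random() < keep_prob:
        return h_prev                   # no switch this epoch
    total = sum(w_cur)                  # redraw proportionally to w_cur
    r, acc = rng.random() * total, 0.0
    for h, w in enumerate(w_cur):
        acc += w
        if r <= acc:
            return h
    return len(w_cur) - 1
```

When the weights barely change between epochs, the keep probability is close to $1$, so switches are rare; this is exactly the mechanism that bounds the number of switches.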
\vspace{-0.1in}
\vspace{-0.1in}
\section{Discussion}
\vspace{-0.05in}
We introduce the study of online learning with primary and secondary losses. We find that achieving no-regret with respect to the primary loss while performing no worse than the worst expert with respect to the secondary loss is impossible in general. We propose a bounded variance assumption over experts such that we can control secondary losses by limiting the number of switching times. Therefore, we are able to bound the regret with respect to the primary loss and the regret to $cT$ with respect to the secondary loss. Our work is only a first step in this problem and there are several open questions.
One is the optimality under Assumption~\ref{asp:intv}. As mentioned above, our bounds on $\max(\reg{1}, \reg{2}_c)$ in the ``good'' scenario are not tight, and we show that any algorithm depending only on the cumulative losses has $\max(\reg{1}, \reg{2}_c) = \Omega(T^{\frac{1+\alpha}{2}})$, which indicates that an optimal algorithm cannot depend only on the cumulative losses if the optimal bound is $o(T^{\frac{1+\alpha}{2}})$. Under Assumption~\ref{asp:intv2}, the upper bound of the switching-limited algorithm matches the lower bound. This possibly implies that limiting switching may not be the best way to exploit the information provided by Assumption~\ref{asp:intv}.
In the ``bad'' scenario with access to the oracle which reactivates experts at fixed times, our sleeping regret bounds depend not only on $T_{h^*}$ but also on $T$, which makes the bounds vacuous when $T_{h^*}$ is small. It is unclear whether we can obtain optimal sleeping regrets dependent only on $T_{h^*}$ for all $h^*\in \mathcal{H}$. The {\em AdaNormalHedge} algorithm by~\cite{luo2015achieving} can achieve a sleeping regret of $O(\sqrt{T_{h^*}})$ without a bound on the number of switches. However, how to achieve a sleeping regret of $o(T_{h^*})$ with limited switching cost is of independent research interest.
In the ``bad'' scenario where Assumption~\ref{asp:intv} does not hold, we assume that $c$ is pre-specified and known to the oracle. Theorem~\ref{thm:notwork} shows that achieving $\max(\reg{1},\reg{2}_c)=o(T)$ with $c = \max_{h} \Loss{2}_{T,h}/T$ is impossible without any external oracle. How to define a setting with an unknown $c$ and design a reasonable oracle for it is an open question.
\vspace{-0.1in}
\section*{Broader Impact}
\vspace{-0.05in}
This research studies a society-constrained online decision making problem, where we take the decision receiver's objective into consideration. Therefore, in a decision making process (e.g. deciding whether to hire a job applicant, whether to approve a loan, or whether to admit a student to an honors class), the decision receiver (e.g., job applicants, loan applicants, students) could benefit from our study at the cost of increasing the loss of the decision maker (e.g., recruiters, banks, universities) a little. The consequences of failure of the system and biases in the data are not applicable.
\begin{ack}
This work was supported in part by the National Science Foundation under grant CCF-1815011.
\end{ack}
\section{INTRODUCTION}
In this work, we attempt to address the problem of performing metric localization in a known environment under extreme changes in visual scale. Our localization approach is based on the identification of objects in the environment and their use as landmarks. By ``objects'' we here mean physical entities which are distinct from their surroundings and have some consistent physical properties of structure and appearance.
Many robotic applications involve repeated traversals of a known environment over time. In such applications, it is usually beneficial to first construct a map of the environment, which can then be used by a robot to navigate the environment in subsequent missions. Surveying the environment from a very high altitude allows complete geographic coverage of the environment to be obtained with shorter, and thus more efficient, surveyor paths. At the same time, a robot that makes use of this high-altitude map to localize may have mission parameters requiring it to operate at a much lower altitude.
\begin{figure}
\centering{}%
\begin{minipage}[t]{0.2\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/nearTrottier2/1_small.png}}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.2\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/nearTrottier2/2_small.png}}%
\end{minipage}\hspace{1px}\smallskip \\
\begin{minipage}[t]{0.2\textwidth}%
\begin{center}Far Image\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.2\textwidth}%
\begin{center}Near Image\par\end{center}%
\end{minipage}\smallskip \\
\begin{minipage}[t]{0.2\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/12_nearTrottier2_sift_err_13.png}}
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.2\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/12_nearTrottier2_resnet_50_res3d_64_err_11.png}}
\end{minipage}\smallskip \\
\begin{minipage}[t]{0.2\textwidth}%
\begin{center}SIFT\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.2\textwidth}%
\begin{center}res3d, $64 \times 64$ input\par\end{center}%
\end{minipage}\smallskip \\
\caption{The results of SIFT features alone, and of the overall-best configuration of our system, on the largest-scale-change image pair in the dataset. Despite the scale factor of 6, the highly-distinctive object in the foreground allows our system to determine an accurate homography between the images, while SIFT features fail to do so.}
\label{fig:nearTrottier2}
\end{figure}
One such scenario is that of performing visual surveys of benthic environments, such as coral reefs, as in Johnson-Roberson et al.~\cite{underwaterMappingExample}. A fast-moving surface vehicle may be used to rapidly map a large area of a reef. This map may then be used by a slower-moving, but more maneuverable, autonomous underwater vehicle (AUV) such as the Aqua robot~\cite{aquaRobot}, to navigate the reef while capturing imagery very close to the sea floor. Another relevant scenario is that of a robot performing loop closure over long distances as part of \ac{SLAM}. Loop closure, the recognition of a previously-viewed location when viewing it a second time, is key to accurate \ac{SLAM}, and the overall accuracy of \ac{SLAM} techniques could be considerably improved if loop closure could be conducted across major changes in scale and perspective.
In scenarios such as the above, a robot must deal with the change in visual scale between two perspectives, which may be $5\times$ or even greater. In some scenarios, such as benthic environments, other factors may also intrude, such as colour shifting due to the optical properties of water and image noise due to particulates suspended in the water. Identifying scenes across such large changes in scale is very challenging for modern visual localization techniques. Even the most scale-robust techniques, such as \ac{SIFT}, can only localize reliably under scale factors of less than about $3\times$.
We hypothesize that the hierarchical features computed by the intermediate layers of a \ac{CNN}~\cite{deepTextbook} may prove robust to changes in scale, due to their high degree of abstraction. We propose a technique for performing metric localization across significant changes in scale by identifying and describing non-semantic objects in a way that allows them to be associated between scenes. We show that these associations can be used to guide the matching of \ac{SIFT} features between images in a way that improves the robustness of matching to scale changes, allowing accurate localization under visual scale factors of 3 and greater. The proposed system does not require any environment-specific training, and in principle can be deployed out-of-the-box in arbitrary environments. The objects used by our system are defined functionally, in terms of their utility as scale-invariant landmarks, and are not limited to semantically-meaningful object categories.
We specifically consider the problem of localizing between pairs of images known to contain the same scene at different visual scales. A solution to this problem is an essential component of a system that can perform full global localization across large scale changes, and in certain cases, such as the low-vs-high-altitude scenario described above, could suffice on its own for global localization. We demonstrate the approach both on standard localization benchmarks and on a novel dataset of image pairs from urban scenes exhibiting major scale changes.
\section{RELATED WORK}
Visual localization refers to the problem of determining a robot's pose using images from one or more cameras, with reference to a map or a set of previously-seen images. This may be done with some prior on the robot's position, or with no such prior, in which case it is called global localization~\cite{dudek2010computational}. Visual odometry is a form of non-global localization, while global localization is closely related to loop closure; both are important components of \ac{SLAM}, and there is a large body of literature exploring both problems. Prominent early work includes Leonard et al.~\cite{leonard1991mobile}, Mackenzie et al.~\cite{mackenzie1994precise}, and Fox et al.~\cite{fox1999markov}.
Many traditional visual approaches to these problems, and particularly global localization, have been based on the recognition of whole-image descriptors of particular scenes, such as GIST features~\cite{gist}. Successful instances include SeqSLAM~\cite{seqslam}, which uses a heavily downsampled version of the input image as a descriptor, and LSD-SLAM~\cite{lsdslam}, which performs direct image alignment for loop closure, as well as Hansen et al.~\cite{seqslamvar1}, Cadena et al.~\cite{robustPlaceRecogWithSequences}, Liu et al.~\cite{zhangWholeImageComparison} and Naseer et al.~\cite{seqslamvar2}. Because whole-image descriptors encode the geometric relationships of features in the 2D image plane, images of the same scene from different perspectives can have very different descriptors, making such methods very sensitive to changes in perspective and scale.
Another common approach is to discretize point-feature descriptors and build bag-of-words histograms of the input images. FAB-MAP~\cite{fabmap}, ORB-SLAM~\cite{orbslam}, and the system of Ho et al.~\cite{seqslamvar3} perform variants of this for loop closure, starting from SURF~\cite{surf}, ORB~\cite{orb}, and \ac{SIFT}~\cite{sift} features, respectively. While suitable for place-recognition tasks, such approaches alone are not appropriate for global localization, because spatial information about the visual words is not contained in the histogram.
Hence, state-of-the-art \ac{SLAM} systems such as ORB-SLAM and LSD-SLAM rely on visual odometry for pose estimation. Their visual odometry techniques are limited in robustness to changes in scale, perspective, and appearance, and so rely on successive estimations from closely-spaced frames.
Other global localization approaches attempt to recognize particular landmarks in an image, and use those to produce a metric estimate of the robot's pose. The SLAM++ system of Salas-Moreno et al.~\cite{salas2013slam++} performs \ac{SLAM} by recognizing landmarks from a database of 3D object models. Linegar et al.~\cite{landmarks24Hour} and Li et al.~\cite{highLevelFeaturesUnderwater} both train a bank of support vector machines (SVMs) to detect specific landmarks in a known environment, one SVM per landmark. More recently, the authors of~\cite{semanticSlam} made use of a Deformable Parts Model (DPM)~\cite{deformablePartsModel} to detect objects for use as loop-closure landmarks in their \ac{SLAM} system. All of these approaches require a pre-existing database of either object types or specific objects to operate. These databases can be costly to construct, and these systems will fail in environments in which too few landmarks belonging to the database are present.
Some work has explored the use of \acp{CNN} for localization. PoseNet~\cite{posenet} is a \ac{CNN} that learns a mapping from images in an environment to metric camera poses, but it can only operate on the environment on which it was trained. In S{\"{u}}nderhauf et al.~\cite{convWholeImagePlaceRecog}, the intermediate activations of a \ac{CNN} trained for image classification were used as whole-image descriptors for place recognition, a non-metric form of global localization. In a similar fashion, Vysotska et al.~\cite{vysotska2016lazy} use whole-image descriptors from a \ac{CNN} in a SeqSLAM-like framework. Subsequent work of S{\"{u}}nderhauf et al.~\cite{convNetLandmarks} refined this approach by using the same descriptor for object proposals within an image instead of the whole image. Cascianelli et al.~\cite{cascianelli2017robust} and Panphattarasap et al.~\cite{panphattarasap2016visual} both expand on this technique. These works consider only place recognition, however, and do not attempt to deal with the more challenging problem of full global localization (which necessitates returning a pose estimate). Schmidt et al.~\cite{denseDeepPointDescriptors} and Simo-Serra et al.~\cite{sparseDeepPointDescriptors} have both explored the idea of learning point-feature descriptors with a \ac{CNN}, which could replace classical point features in a bag-of-words model.
When exploring robustness to perspective change, all of these works consider positional variations of at most a few meters, while the scenes exhibit within-image scale variations of tens or hundreds of meters and the reference or training datasets consist of images taken over traversals of environments ranging from hundreds to thousands of meters. As a result, little significant change in scale exists between map images and query images in these experiments. To the best of our knowledge, ours is the first work to attempt to combine deep object-like features and point features into a single, unified representation of landmarks. This synthesis provides superior metric localization to either technique in isolation, particularly under significant ($3\times$ and greater) changes in scale.
\section{PROPOSED SYSTEM}\label{sec:system}
The first stage of our metric localization pipeline consists of detecting objects in a pair of images, computing convolutional descriptors for them, and matching these descriptors between images. Our approach here closely follows that used for image retrieval by S{\"{u}}nderhauf et al.~\cite{convWholeImagePlaceRecog}; we differ in using Selective Search (SS), as proposed by Uijlings et al.~\cite{selectiveSearch}, to propose object regions, and in our use of a more recent \ac{CNN} architecture.
To extract objects from an image, Selective Search object proposals are first extracted and filtered to remove proposals whose bounding boxes are smaller than 200 pixels or whose aspect ratio is greater than 3 or less than 1/3. The image regions defined by each surviving SS bounding box are then cropped from the image, rescaled to a fixed size via bilinear interpolation, and run through a \ac{CNN}. We use a ResNet-50 architecture trained on the ImageNet image-classification dataset, as described in He et al.~\cite{resnet}. Experiments were run using six different layers of the network as feature descriptors, and with inputs to the network at four different resolutions. The network layers and resolutions are listed in Table~\ref{table:resNetSizes}.
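As an illustration, the proposal-filtering step above might be sketched as follows. Interpreting the 200-pixel threshold as a minimum bounding-box area and the aspect ratio as width over height are our assumptions, not details taken from the paper.

```python
def filter_proposals(boxes, min_area=200, max_aspect=3.0):
    """Filter Selective Search proposals (boxes as (x, y, w, h) tuples).

    A proposal is discarded if its bounding-box area is below min_area,
    or if its aspect ratio (assumed here to be w / h) lies outside
    [1 / max_aspect, max_aspect].
    """
    kept = []
    for (x, y, w, h) in boxes:
        if w * h < min_area:
            continue  # bounding box too small
        aspect = w / h
        if aspect > max_aspect or aspect < 1.0 / max_aspect:
            continue  # bounding box too elongated
        kept.append((x, y, w, h))
    return kept

# Example: only the first box survives (area >= 200, aspect within [1/3, 3]):
# filter_proposals([(0, 0, 10, 30), (0, 0, 5, 5), (0, 0, 100, 10)])
# -> [(0, 0, 10, 30)]
```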
Having extracted objects and their descriptors from a pair of images, we perform brute-force matching of the objects between the images. Following~\cite{convNetLandmarks}, we take the match of each object descriptor $\mathbf{u}$ in image $i$ to be the descriptor $\mathbf{v}$ in image $j$ that has the smallest cosine distance from $\mathbf{u}$, defined as $d_{\cos}(\mathbf{u}, \mathbf{v}) = 1 - \frac{\mathbf{u} \cdot \mathbf{v}}{||\mathbf{u}||_2 \, ||\mathbf{v}||_2}$. Matches are validated by cross-checking: a match $(\mathbf{u}, \mathbf{v})$ is only considered valid if $\mathbf{u}$ is the most similar object to $\mathbf{v}$ in image $i$ and $\mathbf{v}$ is the most similar object to $\mathbf{u}$ in image $j$.
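A minimal sketch of this brute-force, cross-checked matching; descriptors are plain tuples here, whereas a real implementation would operate on the CNN activation vectors:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity, as defined in the text."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def cross_checked_matches(descs_i, descs_j):
    """Brute-force nearest neighbours with cross-checking.

    Returns index pairs (a, b) such that descs_j[b] is the nearest
    neighbour of descs_i[a] under cosine distance, and vice versa.
    """
    def nearest(q, pool):
        return min(range(len(pool)), key=lambda k: cosine_distance(q, pool[k]))

    matches = []
    for a, u in enumerate(descs_i):
        b = nearest(u, descs_j)
        if nearest(descs_j[b], descs_i) == a:  # cross-check
            matches.append((a, b))
    return matches
```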
\begin{table}[ht]
\caption{The sizes, as a number of floating-point values, of the output layers of ResNet-50 at different input resolutions. Values in bold indicate layer-resolution pairs which provided the best results in any of our experiments.}
\begin{center}
\begin{tabular}{c|c c c c c c}
Input res. & pool1 & res2c & res3d & res4f & res5c & pool5 \\
\hline
$224 \times 224$ & 201k & 803k & 401k & 200k & \textbf{100k} & 2k \\
$128 \times 128$ & 66k & 262k & 131k & 65k & 131k & 8k \\
$64 \times 64$ & 16k & 66k & \textbf{32k} & 66k & \textbf{131k} & 8k \\
$32 \times 32$ & 4k & 16k & 32k & 66k & 131k & 8k \\
\end{tabular}
\end{center}
\label{table:resNetSizes}
\end{table}
Once object matches are found, we extract \ac{SIFT} features from both images, using 3 octave layers, an initial Gaussian with $\sigma=1.6$, an edge threshold of 10, and a contrast threshold of 0.04. For each pair of matched objects, we match \ac{SIFT} features that lie inside the corresponding bounding boxes to one another. \ac{SIFT} features are matched via their Euclidean distance, and cross-checking is again used to filter out bad matches. By limiting the search for \ac{SIFT} matches to matched object regions, we hypothesize that the scope for error in \ac{SIFT} matching will be significantly reduced, and thus that the accuracy of the resulting metric pose estimates will be increased. As baselines against which to compare our results, experiments were also run using \ac{SIFT} alone, with no objects, and objects alone, without \ac{SIFT} features; this last is essentially a na\"ive application of the place-recognition system of S{\"{u}}nderhauf et al.~\cite{convNetLandmarks} to metric localization. In these baseline experiments, \ac{SIFT} matching was performed in the same way, but the search for matches was conducted over all \ac{SIFT} features in both images. When object proposals alone were used, they were matched in the manner described above, and their bounding box centers were used as match points.
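The restriction of the match search to matched object regions can be sketched as follows; the (x, y, w, h) box format and the inclusive containment test are our assumptions:

```python
def points_in_box(points, box):
    """Indices of (x, y) points inside an axis-aligned box (x, y, w, h)."""
    x0, y0, w, h = box
    return [i for i, (px, py) in enumerate(points)
            if x0 <= px <= x0 + w and y0 <= py <= y0 + h]

def restricted_match_pools(kps_i, kps_j, box_pairs):
    """For each matched object pair, return the keypoint index pools
    over which SIFT matching is run, instead of matching all-vs-all.

    kps_i, kps_j: keypoint (x, y) locations in images i and j.
    box_pairs: list of (box_in_i, box_in_j) matched object regions.
    """
    pools = []
    for box_i, box_j in box_pairs:
        idx_i = points_in_box(kps_i, box_i)
        idx_j = points_in_box(kps_j, box_j)
        if idx_i and idx_j:  # skip regions with no keypoints on either side
            pools.append((idx_i, idx_j))
    return pools
```

Matching then proceeds per pool, so a keypoint in one object region can never be matched to a keypoint outside the corresponding region in the other image.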
The resulting set of match points is used to produce a metric pose estimate. Depending on the experiment, we compute either a homography $H$ or an essential matrix $E$~\cite{Hartley2004}. In either case, $H$ or $E$ is calculated from the point correspondences via a RANSAC algorithm with an inlier threshold of 6 pixels.
\begin{figure}
\centering
\includegraphics[width=8.5cm,keepaspectratio]{Images/SS_and_ResNet.pdf}
\caption{A simplified illustration of our object-detection architecture.}
\label{fig:ssResNetArch}
\end{figure}
\section{KITTI EXPERIMENTS}\label{sec:kitti}
\subsection{Experimental Setup}
To evaluate the robustness of our proposed method to changes in scale, we conducted experiments on the KITTI Odometry benchmark dataset~\cite{kitti}. This dataset consists of data sequences from a variety of sensors, including colour stereo imagery captured at a 15Hz frame rate, taken from a sensor rig mounted on a car as it drives along twenty-two distinct routes in the daytime. Eleven of these sequences contain precise ground truth poses for each camera frame taken on each trajectory. These trajectories were used to evaluate the proposed method.
Our evaluation consisted of first subsampling each sequence by taking every fifth frame, to make the size of the overall dataset more manageable and increase the level of scale change present between adjacent frames in the sequence. A set of image pairs was generated for each subsampled sequence by taking each frame $i$ in the sequence and pairing $i$ with the 10 subsequent frames, $i + j\ \forall j \in [1, 10]$. Each successive value of $j$ gave an image pair $(i, i+j)$ with a greater degree of visual scale change, as shown in Fig.~\ref{fig:kittiScalePairSamples}.
We finally filtered out any frame pairs whose gaze directions differed by more than $45^\circ$ in any axis, in order to consider only pairs that actually look at the same scene (in practice, only the yaw differs significantly in KITTI). In total, 40,748 image pairs were used in our evaluation. For each image pair, the images from the left colour camera (designated camera 2 in KITTI) were used for localization. An example set of images is shown in Fig.~\ref{fig:kittiScalePairSamples}.
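The subsampling and pairing scheme above can be sketched as follows; frame indices are illustrative, and the gaze-angle filter is omitted since it requires the ground-truth poses:

```python
def make_pairs(num_frames, subsample=5, max_gap=10):
    """Subsample a sequence by taking every `subsample`-th frame, then
    pair each subsampled frame with the next `max_gap` subsampled frames,
    as in the KITTI evaluation described in the text."""
    frames = list(range(0, num_frames, subsample))
    pairs = []
    for a in range(len(frames)):
        for j in range(1, max_gap + 1):
            if a + j < len(frames):
                pairs.append((frames[a], frames[a + j]))
    return pairs

# Example: an 11-frame sequence subsampled every 5 frames gives
# frames [0, 5, 10] and pairs [(0, 5), (0, 10), (5, 10)].
```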
\begin{figure}
\centering
\includegraphics[width=6cm]{Images/kitti/image_sep_small.png}
\caption{Sample frame from sequence 00. The frames below it are separated by $j = 1, 5, 10$ in the subsampled sequence, respectively. This gives an indication of the range of visual scale changes observed over all the pairs of the dataset.}
\label{fig:kittiScalePairSamples}
\end{figure}
To estimate a transform between an image pair, a set of point matches was produced between the two images according to each of the three methods we compare, as described in section~\ref{sec:system}. In each case, these point matches were used to estimate an essential matrix $E$, from which a pose estimate $(\mathbf{q}_e, \mathbf{t}_e)$, describing the transform between the two frames, was derived via the standard method of SVD and cheirality checking~\cite{Hartley2004}. To assess the quality of the estimate, we used two error metrics. The first was the relative positional error, defined in Eq.~\ref{eqn:relDist}:
\begin{equation}\label{eqn:relDist}
t_{err} = \frac{||\mathbf{t}_g - \mathbf{t}_e||_2}{||\mathbf{t}_g||_2 + ||\mathbf{t}_e||_2}
\end{equation}
where $\mathbf{t}_g$ is the ground-truth translation between the two frames and $\mathbf{t}_e$ is the estimated translation. Normalizing by the sum of the two translation magnitudes removes any correlation between the error and the magnitude of the true translation; by the triangle inequality, values of $t_{err}$ range from 0 to 1.
The second error metric was the rotational error, which following~\cite{rotationMetrics} is defined in Eq.~\ref{eqn:rotDist}:
\begin{equation}\label{eqn:rotDist}
r_{err} = 1 - |\mathbf{q}_g \cdot \mathbf{q}_e|
\end{equation}
where $\mathbf{q}_g$ and $\mathbf{q}_e$ are quaternions representing the ground-truth and estimated gaze directions, respectively. For some image pairs, no pose could be estimated, due to insufficient or inconsistent point matches. We refer to this as localization failure, and in these failure cases we substitute a value of 1, the maximum possible error under each metric, for both $t_{err}$ and $r_{err}$.
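Both metrics are direct transcriptions of the definitions above; this sketch assumes unit quaternions and leaves the substitution for failure cases to the caller:

```python
import math

def t_err(t_g, t_e):
    """Relative positional error: ||t_g - t_e|| / (||t_g|| + ||t_e||)."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(t_g, t_e)))
    return diff / (norm(t_g) + norm(t_e))

def r_err(q_g, q_e):
    """Rotational error: 1 - |q_g . q_e| for unit quaternions.

    The absolute value makes the metric insensitive to the q / -q
    double-cover ambiguity of quaternion rotations.
    """
    return 1.0 - abs(sum(a * b for a, b in zip(q_g, q_e)))
```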
A preliminary evaluation was carried out over the space of CNN input resolutions and output layers by running them on the first 1000 image pairs from the first subsampled sequence (sequence 00). We found that using an input resolution of $224 \times 224$ and the res5c feature layer as output gave both the highest accuracy and lowest localization failure rate. This configuration was used for all object-landmark experiments on KITTI that we describe below.
\subsection{Results}\label{subsec:kittiResults}
All metrics were plotted against the ground-truth translational distance, $||\mathbf{t}_g||_2$, between the frames in the image pairs. To make these plots readable, we grouped image pairs by their frame-separation $j$, and plotted the mean error of each group against its mean ground-truth distance, in Fig.~\ref{fig:kittiPosErr} (for $t_{err}$) and Fig.~\ref{fig:kittiRotErr} (for $r_{err}$). A logarithmic curve was fitted against each, as we expected that performance would initially worsen rapidly with distance, then level off. We also display the failure rate of each group versus the group's mean distance in Fig.~\ref{fig:kittiFailureRates}.
\begin{table}[ht]
\caption{Overall performance of each method across all KITTI image pairs: mean $t_{err}$, mean $r_{err}$, and the number of localization failures.}
\label{table:kittiPerformance}
\begin{center}
\begin{tabular}{c|c c c }
Method & $t_{err}$ & $r_{err}$ & failure count \\
\hline
SIFT only & 0.680 & 0.149 & 1854 \\
Objects only & 0.744 & 0.232 & 7146 \\
Proposed method & \bf{0.641} & \bf{0.086} & \bf{785} \\
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=7.5cm,keepaspectratio]{Images/kitti/All_relative_T_error.png}
\caption{This plot shows the mean normalized positional error ($t_{err}$) versus the mean inter-camera distance of each group of image pairs. Image pairs are grouped by the number of frames separating them in the sequence, from 1 to 10. $t_{err}$ is a unitless error metric that ranges from 0 (best) to 1 (worst). The improvement of the proposed method over SIFT is small but consistent.}
\label{fig:kittiPosErr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm,keepaspectratio]{Images/kitti/All_rotational_error.png}
\caption{The mean rotational error ($r_{err}$) versus the mean inter-camera distance of each group of image pairs. $r_{err}$, like $t_{err}$, is a unitless error metric that ranges from 0 (best) to 1 (worst). This metric shows a more significant improvement of the proposed method over SIFT.}
\label{fig:kittiRotErr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm,keepaspectratio]{Images/kitti/All_failure_rates.png}
\caption{The fraction of the pairs in each image-pair group for which localization failure occurred, versus the mean inter-camera distance of each group. In all groups, our proposed method has far fewer failures than either SIFT or object features alone.}
\label{fig:kittiFailureRates}
\end{figure}
The overall performance of each method across all pairs is provided in Table~\ref{table:kittiPerformance}. This table shows that our proposed method improves on SIFT under each metric: a small improvement of 6\% in $t_{err}$, and more significant improvements of 43\% in $r_{err}$, and 58\% in failure rate, overall. Meanwhile, the objects-only method performs significantly worse than both our method and SIFT on all metrics and at all pair distances.
Fig.~\ref{fig:kittiRotErr} shows that on $r_{err}$ the improvement of our method over SIFT is negligible at $j=1$, but grows significantly and consistently with the distance between frames. In Fig.~\ref{fig:kittiPosErr}, meanwhile, we see that on $t_{err}$ the improvement grows at first, and is greater than 0.05 for most of the intermediate gaps, but shrinks again at the largest gaps. Fig.~\ref{fig:kittiFailureRates} shows similar behaviour in the localization failure rate: it is lowest for all methods at the largest gaps.
From visual inspection of these extreme image pairs, this improvement at high $j$ appears to be caused by sections where the vehicle drives down a long, straight road for some distance. In these cases, the visual scale of objects visible near the end of the road will show little change over even a gap of $j = 10$, making localization relatively easy. Unlike more winding roads, such long, straight sections will not have any high-$j$ pairs removed due to the images being on either side of a bend in the road, meaning that the high-$j$ groups will contain disproportionately many pairs from these straight sections.
\section{MONTREAL IMAGE PAIR EXPERIMENTS}
\subsection{Experimental Setup}
To test the effectiveness of the proposed system in a real-world scenario, a set of 31 image pairs was captured across eleven scenes surrounding the Montreal campus. Scenes were chosen to contain a roughly-centred object of approximately uniform depth in the scene, so that a uniform change in image scale could be achieved by taking images at various distances from the object. This ensures that successful matches must be made under a single change in scale, and makes the relationship between the images amenable to description by a homography. The image pairs exhibit changes in scale ranging from factors of about 1.5 to about 7, with the exception of one image pair showing a scale change of about 15 in a prominent foreground object. All images were taken using the rear-facing camera of a Samsung Galaxy S3 phone, and were downsampled to $1200\times900$ pixels via bilinear interpolation for all experiments. Each image pair was hand-annotated with a set of 10 point correspondences, distributed approximately evenly over the nearer image in each pair. We have made this dataset publicly available\footnote{\url{http://www.cim.mcgill.ca/~mrl/montreal\_scale\_pairs/}}.
The proposed system was used to compute point matches between each image pair, and from these point matches a homography $H$ was computed as described in section~\ref{sec:system}. $H$ was used to calculate the total symmetric transfer error (STE) for the image pair over the ground-truth points:
\begin{equation}
\textup{STE} = \sum_{i=1}^{N}\left(||p^i_{far} - H p^i_{near}||_2 + ||p^i_{near} - H^{-1}p^i_{far}||_2\right)
\end{equation}
Whenever no $H$ could be found for an image pair by some method, its error on that image pair was set to the maximum STE we observed for any attempted method, $\textup{STE}_\textup{max} = 15,833,861,380.8$. The plain STE ranges over many orders of magnitude on this dataset, so we present the results using the logarithmic STE, making the results easier to interpret.
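The STE computation can be transcribed directly from the equation above; we assume $3 \times 3$ homographies as row-major nested lists and points as (x, y) tuples, with $H$ mapping near-image points to the far image:

```python
import math

def apply_h(H, p):
    """Apply a 3x3 homography H (row-major nested lists) to a 2D point,
    normalizing the homogeneous coordinate."""
    x, y = p
    xn = H[0][0] * x + H[0][1] * y + H[0][2]
    yn = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xn / w, yn / w)

def symmetric_transfer_error(H, H_inv, pairs):
    """Total STE over ground-truth point pairs (p_near, p_far):
    forward error maps near -> far via H, backward error maps
    far -> near via H_inv."""
    total = 0.0
    for p_near, p_far in pairs:
        fwd = apply_h(H, p_near)
        bwd = apply_h(H_inv, p_far)
        total += math.hypot(p_far[0] - fwd[0], p_far[1] - fwd[1])
        total += math.hypot(p_near[0] - bwd[0], p_near[1] - bwd[1])
    return total
```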
The same set of configurations was run over this dataset as in the KITTI experiments: our system at six network layers and four input resolutions, plus \ac{SIFT} alone and objects alone for comparison. However, the results from objects alone were substantially worse across all configurations than those of either SIFT or the proposed method, similar to what we observed in section~\ref{sec:kitti}. For the sake of brevity, we omit the objects-only results from the discussion and figures below.
\subsection{Results}\label{subsec:realWorldResults}
Table~\ref{table:realErrorByLayer} shows the performance of each feature layer and each input resolution over the whole Montreal dataset, along with the results from using \ac{SIFT} features alone. As this table shows, the total error using just \ac{SIFT} features is significantly greater than that of the best-performing input resolution for each feature layer. Also, the average errors of the intermediate layers res2c, res3d, and res4f are all comparable. Interestingly, this experiment favours intermediate layers, while the KITTI experiments favoured the highest resolution and the second-deepest layer of the network. This may arise from the difference in the native resolution of the images: KITTI's image resolutions vary from sequence to sequence, but are all close to $1230\times370$.
Fig.~\ref{fig:topThreeLayersPerPair} shows the error of each of the three best-performing configurations, as well as the \ac{SIFT}-only approach, on each of the image pairs in the dataset, plotted versus the median scale change over all pairs of ground-truth matches $(p^i, p^j)$ in each image. The scale change between matches $(p^i, p^j)$ is defined as $\textup{scale change}_{i,j} = \frac{||p^i_{near} - p^j_{near}||_2}{||p^i_{far} - p^j_{far}||_2}$. The lines of best fit for each method further emphasize the improvement of our system over \ac{SIFT} features at all scale factors up to 6. The best-fit lines for all of the top three configurations of our system overlap almost perfectly, although there is a fair degree of variance in their performance on individual examples.
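The median scale change statistic follows directly from the definition above, taken over all pairs of annotated correspondences:

```python
import math
from itertools import combinations
from statistics import median

def median_scale_change(near_pts, far_pts):
    """Median over all point pairs (i, j) of the ratio
    ||p_i_near - p_j_near|| / ||p_i_far - p_j_far||,
    where near_pts[k] and far_pts[k] are corresponding (x, y) annotations."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ratios = [dist(near_pts[i], near_pts[j]) / dist(far_pts[i], far_pts[j])
              for i, j in combinations(range(len(near_pts)), 2)]
    return median(ratios)
```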
The use of homographies to relate the image pairs allows us to visually inspect the quality of the estimated $H$, by using $H$ to map all pixels in the farther image to their estimated locations in the nearer image. Visual inspection of these mappings for the 31 image pairs confirms that configurations with lower logarithmic STEs tend to produce more correct-looking mappings, although all configurations of our system with mean logarithmic STE $< 10$ produce comparable mappings for most pairs, and on some pairs higher-error configurations, such as res4f with $64 \times 64$-pixel inputs, produce a subjectively better mapping than the lowest-error configuration. Fig.~\ref{fig:nearTrottier2} and Fig.~\ref{fig:nearTrotter1} display some example homography mappings.
\begin{table}[ht]
\caption{A table showing the logarithmic STE of each configuration of the system, averaged over all image pairs. The best-performing feature overall is res3d with a $64 \times 64$ input size, followed closely by res2c with $64 \times 64$ inputs and res4f with $128 \times 128$ inputs. The mean log. STE of \ac{SIFT} features alone is presented as well, for comparison.}
\label{table:realErrorByLayer}
\begin{center}
\begin{tabular}{c|c c c c c c}
Input res. & pool1 & res2c & res3d & res4f & res5c & pool5 \\
\hline
$224 \times 224$ & 11.102 & 13.057 & 9.631 & 10.277 & 10.537 & 10.011 \\
$128 \times 128$ & 10.389 & 11.716 & 10.231 & 9.458 & 9.930 & 9.505 \\
$64 \times 64$ & 10.921 & 9.381 & \textbf{9.339} & 9.777 & 10.234 & 9.667 \\
$32 \times 32$ & 10.134 & 10.162 & 10.473 & 9.658 & 10.301 & 10.607 \\
\hline
SIFT & 10.654
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=9cm]{Images/realWorld/errorsVsScale.png}
\caption{The error of \ac{SIFT} features alone, and the three best-performing configurations of our system, on each image pair in the dataset, plotted versus the median scale change exhibited in the image pair, along with a line of best fit for each method.}
\label{fig:topThreeLayersPerPair}
\end{figure}
\begin{figure}
\centering{}%
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/nearTrottier1/0_small.png}}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/nearTrottier1/1_small.png}}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/01_nearTrottier1_sift_err_16.png}}
\end{minipage}\smallskip \\
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}Far Image\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}Near Image\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}SIFT\par\end{center}%
\end{minipage}\smallskip \\
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/01_nearTrottier1_resnet_50_res3d_64_err_10.png}}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/01_nearTrottier1_resnet_50_res4f_128_err_11.png}}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\subfloat{\centering{}\includegraphics[width=1\textwidth]{Images/realWorld/mappings/01_nearTrottier1_resnet_50_res5c_128_err_8.png}}
\end{minipage}\smallskip \\
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}res3d, $64 \times 64$ input\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}res4f, $128 \times 128$ input\par\end{center}%
\end{minipage}\hspace{1px}
\begin{minipage}[t]{0.12\textwidth}%
\begin{center}res5c, $128 \times 128$ input\par\end{center}%
\end{minipage}\smallskip \\
\caption{The results of \ac{SIFT} features and three different configurations of our system on another pair. The homography estimated by the best-overall configuration, res3d at $64 \times 64$ input size, is notably worse than those produced by two other intermediate feature layers. No configuration of our system performs best on all image pairs.}
\label{fig:nearTrotter1}
\end{figure}
\section{CONCLUSIONS}
\addtolength{\textheight}{-0.5cm}
One strength of our proposed system is that it requires no domain-specific training, making use only of a pre-trained \ac{CNN}. However, as future work we wish to explore the possibility of training a \ac{CNN} with the specific objective of producing a scale- and perspective-invariant object descriptor, as doing so may result in more accurate matching of objects. We also wish to explore the possibility that including matches from multiple layers of the network in the localization process could improve the system's accuracy.
The most natural extension of this work, however, is to extend it to the full global-localization problem, where the system must localize within a large map or database of images with no prior on the position, and must moreover do so across major scale changes. Depending on the scenario, this may require combining our localization method with a similarly scale-robust place-recognition system.
We have shown that by combining deep learning with classical methods, we can perform accurate localization across major changes in scale. Our system uses a pre-trained deep network to describe arbitrary objects and correctly match them between images for use as navigation landmarks. Restricting \ac{SIFT} feature matching to matched object regions substantially improves the robustness of \ac{SIFT} matching both to changes in image noise and to changes in scale. Despite much prior work on place recognition and localization using both classical methods and deep learning, our result sets a new benchmark for metric localization performance across significant scale changes.
\begin{acronym}
\acro{CNN}{Convolutional Neural Network}
\acro{SLAM}{Simultaneous Localization And Mapping}
\acro{SIFT}{Scale-Invariant Feature Transform}
\end{acronym}
\bibliographystyle{IEEEtran}
\section{Introduction}
Ultrasound (US) imaging is widely used to assess and diagnose breast cancer. Various classic and deep learning based methods have been proposed for breast mass classification in US \cite{houssein2021deep}. Deep neural networks have achieved excellent performance in breast mass differentiation, but are commonly perceived as black-box models that are difficult to incorporate into clinical practice due to their lack of interpretability \cite{abdullah2021review}. In contrast, standard classification techniques based on handcrafted features have an established medical and physical interpretation. Flores et al. conducted a detailed comparison between texture and morphological features in the case of breast mass differentiation in US \cite{flores2015improving}. Results based purely on classification metrics indicated that the morphological features perform better for mass differentiation. However, the authors did not investigate why the shape or texture based methods failed to correctly classify specific US images. Standard classification methods may fail to produce accurate predictions for breast mass US images for several reasons. For example, when the shadowing artifact results in ill-defined mass borders, accurate estimation of the morphological features may be infeasible, see Fig.~\ref{f1}. Similarly, the texture based classifier may underperform when the US image is noisy or when the mass texture has been adversely impacted by US image processing algorithms~\cite{byra2019quantitative}. Therefore, it should be expected in practice that the texture based methods will perform better on some cases while the shape based techniques will perform better on others.
In this work, we propose a deep meta-learning based approach to the selection of the appropriate standard classification method for a particular breast mass US image. In the machine learning literature, meta-learning techniques have been used to recommend classification algorithms for specific tasks and datasets~\cite{khan2020literature}. Here, we develop a neural network that can automatically process an input breast mass US image and recommend whether to apply the shape or the texture based classifier for the analysis. By using meta-learning we aim to address the issues associated with the robustness of the standard classifiers. In our study, deep learning techniques are not used to directly classify breast masses, but to improve the performance of the standard methods based on well-understood handcrafted features.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.9\linewidth]{exa.png}
\end{center}
\caption{US images presenting breast masses with a) poorly defined shape due to the shadowing artifact and b) with a
well-defined contour.}
\label{f1}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{ext.png}
\end{center}
\caption{Scheme presenting the extraction of the morphological and texture features from breast mass US images.}
\label{f2}
\end{figure*}
\section{Methods}
\subsection{Dataset}
To develop and evaluate the proposed approach, we used breast mass US images collected from three publicly available datasets: BUSI, RODTOOK and UDIAT \cite{al-dhabyani_dataset_2020,rodtook_automatic_2018,yap_automated_2018,yap_breast_2018}. The datasets were processed to remove US images that included scanner annotations within the breast mass regions, which could impact the estimation of the texture features. After this filtering, the dataset contained 746 breast mass US images, 302 malignant and 444 benign. 8-fold cross-validation was used to assess the implemented techniques. For each fold, $1/8$ and $7/8$ of the dataset were used for testing and training, respectively. Additionally, the training set was divided with a 50\%/50\% split into the development training set and the meta-training set. The development training set was used to train the standard classifiers, while the meta-training set was utilized to train the meta-network to perform classifier selection. All sets were balanced to include the same ratio of US images from each public dataset as well as the same ratio of malignant and benign breast masses.
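For concreteness, the fold construction can be sketched as follows. This is an illustrative NumPy version with function names of our own; it omits the stratification by source dataset and by class that the paper applies:

```python
import numpy as np

def split_fold(indices, fold, n_folds=8, seed=0):
    """One cross-validation fold: 1/8 of the data for testing, and the
    remaining 7/8 split 50%/50% into a development training set and a
    meta-training set.  Stratification is omitted from this sketch."""
    folds = np.array_split(np.asarray(indices), n_folds)
    test = folds[fold]
    train = np.concatenate([f for i, f in enumerate(folds) if i != fold])
    # Shuffle before the 50/50 split so both halves are mixed.
    train = np.random.default_rng(seed).permutation(train)
    half = len(train) // 2
    return train[:half], train[half:], test  # dev, meta, test
```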
\subsection{Standard classifiers}
Following the work of Flores et al., we developed two standard classifiers to differentiate malignant and benign breast masses in US images \cite{flores2015improving}. The first classifier was based on shape parameters. In this case, the following 15 morphological features were determined based on the mass boundary outlines: depth-to-width ratio, mass area, circularity, roundness, normalized residual value, overlap ratio, convexity, orientation, long axis to short axis ratio, elliptic normalized skeleton, elliptic normalized circumference, mean of normalized radial length (NRL), standard deviation of NRL, area ratio and contour roughness \cite{flores2015improving,alvarenga2010assessing,gomez2020assessment}. The second classifier was trained with the texture features calculated using the gray-level co-occurrence matrix (GLCM) technique. We determined the following GLCM statistics based on the breast mass areas: contrast, correlation, energy, variance, maximum probability and auto-correlation \cite{gomez2012analysis}. Statistics were computed for two quantization levels (4 and 16), four orientations (0$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$ and 135$^{\circ}$) and two distances~(1 and 5 pixels), resulting in 112 texture features. Extraction of the features is illustrated in Fig.~\ref{f2}. Shape features were calculated in Python while the texture features were computed in Matlab (The MathWorks, Inc., USA).
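As an illustration of the texture descriptors, the sketch below computes a GLCM for a single offset and a few of the listed statistics (contrast, energy, maximum probability) in plain NumPy. The paper's features were computed in Matlab, so this function is our own simplified reconstruction, not the authors' code:

```python
import numpy as np

def glcm_features(img, levels=4, distance=1, angle_deg=0):
    """Quantize an image into `levels` gray levels, build the GLCM for
    one (distance, orientation) offset and return a few statistics."""
    img = np.asarray(img, dtype=float)
    span = np.ptp(img)
    q = np.floor(levels * (img - img.min()) / (span + 1e-12)).astype(int)
    q = np.clip(q, 0, levels - 1)
    # Pixel offset for the requested orientation.
    dy, dx = {0: (0, distance), 45: (-distance, distance),
              90: (-distance, 0), 135: (-distance, -distance)}[angle_deg]
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[q[y, x], q[y2, x2]] += 1
    glcm /= glcm.sum()  # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return {"contrast": float(np.sum(glcm * (i - j) ** 2)),
            "energy": float(np.sum(glcm ** 2)),
            "max_probability": float(glcm.max())}
```

Repeating this over two quantization levels, four orientations and two distances yields the $2 \times 4 \times 2 \times 7 = 112$ texture features described above.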
A logistic regression algorithm with the L1 penalty was applied for the binary classification of malignant and benign breast masses. For each cross-validation fold, we used the development training set to separately train the classifiers based on the shape and texture features. We employed class weights inversely proportional to the class frequencies in the training set to address the class imbalance problem. Additionally, the median and interquartile range of each feature were determined on the development training set and used for feature scaling of the development training data as well as of the samples from the test set and the meta-learning training set.
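The median/interquartile-range scaling step can be sketched as follows; this is an assumed minimal NumPy version in which the statistics are fitted on the development training set only and then applied unchanged to all splits:

```python
import numpy as np

def fit_robust_scaler(X_dev):
    """Per-feature median and interquartile range, estimated on the
    development training set only."""
    med = np.median(X_dev, axis=0)
    q75, q25 = np.percentile(X_dev, [75, 25], axis=0)
    iqr = q75 - q25
    iqr[iqr == 0] = 1.0  # guard against constant features
    return med, iqr

def apply_robust_scaler(X, med, iqr):
    # The same statistics scale the development, meta-training and test sets.
    return (X - med) / iqr
```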
\subsection{Meta-learning}
For each cross-validation fold, the classifiers trained on the development training set were evaluated on the meta-learning training set to determine the reference for the training of the meta-network. For each US image from the meta-learning training set we determined the classification error with the following equation:
\begin{equation}
e = |p-c|,
\end{equation}
\noindent where $p$ is the probability of mass malignancy outputted by the classifier and $c$ stands for the breast mass class (0 for benign and 1 for malignant). The classification error approaches 0 when the classifier is accurate and 1 when it is not. For each US image from the meta-learning training set we selected the better performing standard method with respect to the classification error. Next, the meta-network was trained to output the better performing standard classifier for each US image, which corresponds to a binary classification setting. Our approach is illustrated in Fig.~\ref{f3}; we expect that the meta-network will learn to recommend the more accurate classifier for a particular US image based on its characteristics.
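The construction of the meta-network's training labels follows directly from the error definition above; in this sketch the tie-breaking rule is our own assumption, as the text does not specify one:

```python
import numpy as np

def meta_labels(p_shape, p_texture, y):
    """For each image, pick the standard classifier with the smaller
    classification error e = |p - c| (label 0 = shape, 1 = texture).
    Ties are broken in favour of the shape classifier here."""
    e_shape = np.abs(np.asarray(p_shape) - np.asarray(y))
    e_texture = np.abs(np.asarray(p_texture) - np.asarray(y))
    return (e_texture < e_shape).astype(int)
```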
The meta-network was trained using the binary cross-entropy loss function. An EfficientNetV2M convolutional neural network pre-trained on the ImageNet dataset served as the backbone for the meta-network \cite{tan2021efficientnetv2}. The original dense layer was replaced by a dense layer equipped with the sigmoid activation function suitable for the binary classification problem. Gray-scale US images were converted to RGB and processed in the same way as the original ImageNet data used for the pre-training. Moreover, US images were cropped using regions of interest provided by the dataset creators and resized to the target image size of $256 \times 256$. Fine-tuning with the Adam optimizer and a learning rate of 0.0001 was applied to train the meta-network. Calculations were performed in TensorFlow \cite{abadi2016tensorflow}.
\begin{figure*}[]
\begin{center}
\includegraphics[width=0.7\linewidth]{idea.png}
\end{center}
\caption{The meta-network was developed to indicate which standard classifier should be applied to classify the input breast mass US image. We expect that the meta-network will recommend the standard classifier based on the input US image appearance.}
\label{f3}
\end{figure*}
\subsection{Evaluation}
For each cross-validation fold, the morphological and texture based classifiers trained on the development training set were evaluated on the test set. Additionally, we used the meta-network trained on the meta-learning training set to recommend the standard classifier for each test US image. To evaluate the implemented methods, we used the area under the receiver operating characteristic curve (AUC) and accuracy. Additionally, we evaluated the performance of an oracle that always recommends the better performing classifier. This approach corresponds to the best performance achievable with the classifiers based on the morphological and texture features in our meta-learning framework.
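The oracle baseline can be sketched in a few lines; this is an assumed NumPy version of the selection rule, keeping for every image the output of whichever classifier has the smaller error $e = |p - c|$:

```python
import numpy as np

def oracle_predictions(p_shape, p_texture, y):
    """Upper-bound baseline: for every test image, keep the probability
    produced by the classifier with the smaller error e = |p - c|.
    Requires the ground-truth label, so it is only an evaluation tool."""
    p_shape, p_texture, y = map(np.asarray, (p_shape, p_texture, y))
    use_texture = np.abs(p_texture - y) < np.abs(p_shape - y)
    return np.where(use_texture, p_texture, p_shape)
```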
\section{Results and Discussion}
Fig.~\ref{f4} presents the relationship between the outputs (probabilities of malignancy) of the morphological and texture based classifiers determined for the entire dataset \cite{antropova2017deep}. The correlation coefficient between the outputs was equal to 0.64, which shows that the agreement between the classifiers was fairly strong, but not perfect. In Fig.~\ref{f4}, we outlined with red boxes the cases for which the classifiers strongly disagreed in the assigned probabilities of malignancy. This result suggests that for some breast masses only one of the classifiers should be preferred.
Table \ref{t1} summarizes the classification performance obtained for the investigated techniques. The morphological feature based classifier achieved a higher AUC value, 0.93, than the texture based method, 0.88. This result confirms the findings of Flores et al., who reported that the morphological features perform better for breast mass differentiation \cite{flores2015improving}. The meta-learning based classifier selection resulted in better performance than either of the individual standard classifiers. In this case, we achieved an AUC and accuracy of 0.95 and 0.91, respectively. Table \ref{t1} also presents the performance of an oracle that always recommends the better performing classifier for the test image. In this case, the obtained AUC and accuracy were equal to 0.99 and 0.96, respectively. This result shows that the meta-network did not select the better performing classifier for every US image. Moreover, the oracle accuracy of 0.96 indicates that perfect classification was not achievable on our dataset with the combination of the shape and texture based classifiers.
To the best of our knowledge, our work is the first to use meta-learning to recommend suitable classifiers for breast mass classification in US. Flores et al. compared the performance of the morphological and texture features for breast mass differentiation \cite{flores2015improving}. However, the authors did not investigate the agreement between the classifiers or the reasons for which the classifiers fail to produce accurate predictions. Their conclusions were based solely on the obtained AUC values. In contrast, we demonstrated that the proposed meta-learning technique can successfully select the better performing standard classifier for input US images. Compared to the shape based classifier, the proposed meta-learning based approach increased the AUC value from 0.93 to 0.95.
The proposed approach partially addresses one of the important problems of deep neural networks, namely the lack of interpretability. In this study, we did not develop a neural network to perform the diagnosis, but to improve the performance of standard, well-understood classification methods. In practice, deep learning methods may fail on new data due to data shifts or adversarial attacks \cite{rabanser2019failing}. In our approach, however, the classification is based solely on the handcrafted features. Even if the meta-network fails to recommend the better performing algorithm for the input US image, one of the standard classifiers will still be used for the analysis, which should grant a certain level of robustness.
In the future, we plan to conduct additional experiments to better illustrate the proposed meta-learning based approach. First, we would like to incorporate other classification methods into our framework. For example, it would be interesting to include a classifier based on quantitative US parameters, such as the backscatter coefficient \cite{oelze2016review}. Second, we plan to utilize techniques that generate saliency maps to understand where in the input US image the convolutional meta-network focuses when producing its recommendation \cite{zhou2016learning}. Third, we would like to improve our framework to take into account the cases that were wrongly assessed by both classifiers. The presence of such cases should be considered when designing the reference for the training of the meta-network.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.7\linewidth]{agr.png}
\end{center}
\caption{The agreement between the outputs (probabilities of mass malignancy) of the shape and texture based classifiers determined on the entire data via the 8-fold cross-validation. Red boxes indicate the test cases for which the classifiers disagreed.}
\label{f4}
\end{figure}
\begin{table}[]
\begin{center}
\caption{Breast mass differentiation performance determined for the standard methods and the approaches utilizing the meta-learning based classifier selection. }
\label{t1}
\scalebox{0.75}{
\begin{tabular}{|c|c|c|}
\hline
Method & AUC & Accuracy \\
\hline \hline
GLCM features & 0.88 & 0.81 \\ \hline
Morphological features & 0.93 & 0.88 \\ \hline
Meta-learning & 0.95 & 0.91 \\ \hline
Oracle & 0.99 & 0.96 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\section{Conclusion}
In this preliminary study, we investigated the usefulness of the meta-learning techniques in the context of breast mass classification in ultrasound. We developed a meta-network that could recommend the most suitable standard classifier for particular breast mass US image. Obtained results demonstrated that meta-learning can be used to improve the performance of the standard classifiers based on handcrafted features.
\section*{Conflicts of interest}
The authors do not have any conflicts of interest to disclose.
\section*{Acknowledgement}
This work was supported by the National Centre for Research and Development of Poland (grant number INFOSTRATEG-I/0042/2021).
\bibliographystyle{IEEEtran}
\section{Introduction}
The present paper grew out of the author's attempts to understand and extend the constructions of \cite{ACD}. \\
In \cite{AC}, representations up to homotopy of a derived algebraic group $G$ were introduced as $A_\infty$-comodules over the group coalgebra $A: = \mathcal{O}(G)$. They form a DG-category $\operatorname{Rep}^h(G)$. In \cite{ACD} it was proved that the homotopy category of $\operatorname{Rep}^h(G)$ is monoidal. \\
To construct tensor products, the authors used the language of DB-algebras and DB-bimodules (see section 3.2 and 6.1 in \cite{ACD}) and studied the algebra of $\operatorname{Rep}^h(G)$ by means of a certain universal DB-pair $(\Omega,T)$. The tensor product of objects in $\operatorname{Rep}^h(G)$ was given by a diagonal $\Omega \to \Omega \boxtimes \Omega$, and the tensor product of morphisms was given by a diagonal $T \to T \boxtimes T$. The resulting tensor product of morphisms was only homotopy associative and homotopy consistent with compositions. It was left as an open question whether this monoidal structure admits some sort of a coherent lift to DG-level. \\
In operadic language, the DB-algebra $\Omega$ corresponds to an $(\mathbf{a},\mathbf{m})$-colored operad $\Omega$ governing pairs of a DG-algebra and an $A_\infty$-module over it. The DB-bimodule $T$ corresponds to an operadic $\Omega$-bimodule $T$ governing maps of such pairs which are homomorphisms in color $\mathbf{a}$ and $A_\infty$ in color $\mathbf{m}$. We axiomatize the situation by defining {\em operadic pairs} and algebras over them. The pair $(\Omega,T)$ provides an example of an operadic pair. \\
The context for operadic pairs is as follows. The category of $A_\infty$-algebras is {\em not} the category of algebras over the DG-operad $A_\infty$, because the latter category does not have enough morphisms. However, there exists an operadic pair $(A_\infty, M_\infty)$, for which there is an equivalence of categories $A_\infty\operatorname{Alg} \simeq \operatorname{Alg}(A_\infty, M_\infty)$. The operadic pair $(A_\infty, M_\infty)$ consists of cellular chains on {\em Stasheff associahedra} and {\em Stasheff multiplihedra}. The same holds for its $(\mathbf{a},\mathbf{m})$-colored version $(A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$. Algebras over $(A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$ are pairs of an $A_\infty$-algebra and an $A_\infty$-module over it, and maps of such pairs are $A_\infty$ in both colors. The operadic pair $(\Omega,T)$ above is a certain quotient of $(A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$. \\
On the polyhedral side, taking quotients corresponds to contraction. The contraction of associahedra corresponding to the projection $A_\infty^{\operatorname{Col}} \to \Omega$ was known from \cite{ACD}; it resulted in {\em cubes}. For the projection $M_\infty^{\operatorname{Col}} \to T$, the corresponding contraction was not previously known. In this paper we compute it and prove the following: \\
{\bf Theorem.} There exists an isomorphism of chain complexes $C_*(\mathcal{F}_n) \simeq T(\mathbf{a}^n,\mathbf{m};\mathbf{m})$, where $\mathcal{F}_n$ are the {\em freehedra} of \cite{San}. \\
These polytopes were originally introduced to study free loop spaces, and until now they bore no relation to operads. Therefore, the current paper establishes a dictionary between \cite{San} and \cite{ACD}. In particular, it seems that the freehedral diagonal of \cite{San} coincides with the diagonal $T \to T \boxtimes T$ of \cite{ACD}. In further research we expect to use polyhedral methods to define {\em weakly Hopf} structure on the operadic pair $(\Omega,T)$. This would provide a weakly monoidal structure on the DG-category $\operatorname{Rep}^h(G)$, giving a lift from the homotopy level.
\subsection{Organization of the paper}
This paper aims to be as self-contained as possible, hence its length. In Section 2 we give an overview of operadic theory in one and two colors, and introduce operadic pairs. In Section 3 we present associahedra and multiplihedra. In Section 4 we summarize the existing definitions of freehedra. In Section 5 we prove our main theorem, which provides an operadic meaning for freehedra. In Section 6 we discuss the existing projections between polyhedral families in terms of operadic pairs. In Section 7 we define strictly Hopf operadic pairs and prepare the ground for studying weakly Hopf operadic pairs.
\subsection{Acknowledgements} This paper would not have been written without Jim Stasheff's advice and support. I am also grateful to Sergey Arkhipov, Ryszard Nest, Lars Hesselholt, Nathalie Wahl, Camilo Abad, Stephen Forcey and Samson Saneblidze for discussions and interest. Finally, I am grateful to Vladimir Dotsenko, Timothy Logvinenko and Svetlana Makarova for inviting me to present this research at their seminars.
\section{Operads and operadic pairs}
Let $\mathsf{C}$ be a closed monoidal category with sums.
\begin{defi}
In the category ${\mathbb{N}}$-$\operatorname{Seq}(\mathsf{C})$ of $\mathbb{N}$-sequences in $\mathsf{C}$, an object $\mathcal{P}$ is a collection of $\mathcal{P}(i) \in \mathsf{C}$ for $i \geq 1$. For $\mathcal{P}$ and $\mathcal{Q}$ in ${\mathbb{N}}$-$\operatorname{Seq}(\mathsf{C})$, their tensor product $\mathcal{P} \odot \mathcal{Q}$ is given by
$$ (\mathcal{P} \odot \mathcal{Q})(n) = \bigoplus_{i_1+ \ldots +i_k = n} \mathcal{P}(k) \otimes \mathcal{Q}(i_1) \otimes \ldots \otimes \mathcal{Q}(i_k)$$
\end{defi}
This makes ${\mathbb{N}}$-$\operatorname{Seq}(\mathsf{C})$ a non-symmetric monoidal category. The unit is the ${\mathbb{N}}$-sequence $\underline{\mathbb{I}}$ with $\underline{\mathbb{I}}(1) = \mathbb{I}$ and $\underline{\mathbb{I}}(n) = 0$ for $n \geq 2$, where $\mathbb{I}$ is the monoidal unit of $\mathsf{C}$ and $0$ is the initial object of $\mathsf{C}$.
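The index set of the sum defining $(\mathcal{P} \odot \mathcal{Q})(n)$ is the set of compositions of $n$, i.e. ordered tuples of positive integers summing to $n$. The short enumeration below (our own helper, useful for sanity checks) makes this concrete: for $\mathbb{N}$-sequences with one-dimensional components, such as the operad $Ass$ introduced later, the dimension of $(\mathcal{P} \odot \mathcal{Q})(n)$ is the number of compositions of $n$, namely $2^{n-1}$.

```python
def compositions(n):
    """All ordered tuples (i_1, ..., i_k) of positive integers with
    i_1 + ... + i_k = n — the index set of the sum in (P ⊙ Q)(n)."""
    result = []
    def go(rest, acc):
        if rest == 0:
            result.append(tuple(acc))
            return
        for i in range(1, rest + 1):
            go(rest - i, acc + [i])
    go(n, [])
    return result
```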
\begin{defi}
A non-symmetric operad in $\mathsf{C}$ is a unital algebra in ${\mathbb{N}}$-$\operatorname{Seq}(\mathsf{C})$.
\end{defi}
If $\mathcal{P}$ is an operad, we say that $\mathcal{P}(k)$ is the object of arity $k$ operations. Explicitly, the operadic structure on $\mathcal{P}$ is given by a collection of composition maps
$$ \circ_{i_1,\ldots,i_k} \colon \mathcal{P}(k) \otimes \mathcal{P}(i_1) \otimes \ldots \otimes \mathcal{P}(i_k) \to \mathcal{P}(i_1+ \ldots+ i_k)$$
satisfying associativity conditions. The existence of a unit allows us to express all such compositions through
$$ \circ_{i} \colon \mathcal{P}(k) \otimes \mathcal{P}(l) \to \mathcal{P}(k+l-1)$$
\begin{defi}
Every object $X \in \mathsf{C}$ gives rise to its operad $\underline{\operatorname{End}}_X$, with $\underline{\operatorname{End}}_X(n) = \underline{\operatorname{Hom}}_\mathsf{C}(X^{\otimes n},X)$ and with operadic structure coming from compositions.
\end{defi}
\begin{defi}
For an operad $\mathcal{P}$ and an object $X$, the structure of a $\mathcal{P}$-algebra on $X$ is a map of operads $\mathcal{P} \to \underline{\operatorname{End}}_X$. If $X$ and $Y$ are $\mathcal{P}$-algebras, their map in $\operatorname{Alg}(\mathcal{P})$ is a map $X \to Y$ such that for any $n$ the diagram below commutes:
\[ \begin{tikzcd}[column sep=huge]
\mathcal{P}(n) \otimes X^{\otimes n} \arrow{r}{\operatorname{id}_{\mathcal{P}(n)} \otimes f^{\otimes n}} \arrow{d}{} & \mathcal{P}(n) \otimes Y^{\otimes n} \arrow{d}{} \\%
X \arrow{r}{f} & Y
\end{tikzcd}
\]
\end{defi}
Let $\mathsf{C}$ be the category of chain complexes $\mathsf{DGVect}(\mathsf{k})$. Operads in $\mathsf{DGVect}(\mathsf{k})$ are called DG-operads. The simplest DG-operad is $Ass$, with $Ass(n) = k$ for any $n$. Algebras over $Ass$ are DG algebras. The key operad for this paper is a classical resolution of $Ass$ called $A_\infty$. For detailed discussion of $A_\infty$-formalism, see for example \cite{Kel}.
\begin{defi}
The DG-operad $A_\infty$ is generated by operations $\mu_n$ of arity $n$ and degree $2-n$ for $n \geq 2$, with differential
$$d(\mu_n) = \sum_{i+j+k = n} \mu_{i+1+k}(\operatorname{id}^{\otimes i} \otimes \mu_j \otimes \operatorname{id}^{\otimes k})$$
\end{defi}
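A quick bookkeeping check on this differential: with the cohomological convention that $d$ has degree $+1$, every summand $\mu_{i+1+k}(\operatorname{id}^{\otimes i} \otimes \mu_j \otimes \operatorname{id}^{\otimes k})$ has degree $(2-(i+1+k)) + (2-j) = 3-n = \deg \mu_n + 1$, as it must. The sketch below (notation our own) enumerates the summands and verifies this:

```python
def deg_mu(n):
    # Degree of the generator mu_n of the A-infinity operad.
    return 2 - n

def d_mu_terms(n):
    """Summands of d(mu_n): triples (i, j, k) with i + j + k = n and
    2 <= j <= n - 1, encoding mu_{i+1+k} o (id^i ⊗ mu_j ⊗ id^k)."""
    return [(i, j, n - i - j)
            for j in range(2, n)
            for i in range(0, n - j + 1)]

def term_degree(i, j, k):
    # Degree of the composite mu_{i+1+k} o (id^i ⊗ mu_j ⊗ id^k).
    return deg_mu(i + 1 + k) + deg_mu(j)
```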
$A_\infty$-algebras are homotopy-associative algebras with an explicit system of all the higher coherences.
\begin{rem}
The category of algebras over $A_\infty$ is {\em not} what is usually meant by the category of $A_\infty$-algebras. The problem is that $\operatorname{Alg}(A_\infty)$ does not have enough morphisms. A morphism $A \to B$ in $\operatorname{Alg}(A_\infty)$ has to strictly respect the multiplication $\mu_2$ and all the higher operations. A true $A_\infty$-morphism $A \to B$ should respect the multiplication $\mu_2$ only up to homotopy, and includes the data of all the higher coherences $A^{\otimes n} \xrightarrow{\operatorname{deg} 1-n} B$.
\end{rem}
To overcome this difficulty we use operadic bimodules, i.e. bimodules in the category ${\mathbb{N}}$-$\operatorname{Seq}(\mathsf{C})$. Note that this category is not symmetric monoidal, so left and right module structures are genuinely different.
\begin{defi}
For objects $X, Y \in \mathsf{C}$, let $\underline{\operatorname{Hom}}_{X,Y}$ be an $\mathbb{N}$-sequence given by $\underline{\operatorname{Hom}}_{X,Y}(n) = \underline{\operatorname{Hom}}_\mathsf{C}(X^{\otimes n},Y)$. It has a natural structure of a right module over $\underline{\operatorname{End}}_X$ and of a left module over $\underline{\operatorname{End}}_Y$ given by compositions.
\end{defi}
Below we present the standard resolution of the trivial $Ass$-bimodule given by $Ass$ itself.
\begin{defi}
$M_\infty$ is a bimodule over $A_\infty$ generated by $f_n$ of arity $n$ and degree $1-n$ for $n \geq 0$, with differentials
$$d(f_n) = \sum f_{r+1+t} (\operatorname{id}^{\otimes r} \otimes \mu_s \otimes \operatorname{id}^{\otimes t}) + \sum \mu_r (f_{i_1}\otimes \ldots \otimes f_{i_r})$$
\end{defi}
\begin{prop}
Let $A$, $B$ be two $A_\infty$-algebras with structure maps $\alpha \colon A_\infty
\to \underline{\operatorname{End}}_A$ and $\beta \colon A_\infty
\to \underline{\operatorname{End}}_B$. Then any $A_\infty$-morphism $f \colon A \to B$ is given by a structure map $\phi \colon M_\infty \to \underline{\operatorname{Hom}}_{A,B}$ of bimodules over $A_\infty$,
where $\underline{\operatorname{Hom}}_{A,B}$ is viewed as a bimodule over $A_\infty$ via restrictions along $\alpha$ and $\beta$.
\end{prop}
Note that the composition of $A_\infty$-morphisms is induced by a map
$$c \colon\thinspace M_\infty \to M_\infty \otimes_{A_\infty} M_\infty$$ which is given on generators by
$$ c(f_n) = \sum_{i_1 + \ldots + i_k = n} f_k \otimes (f_{i_1} \otimes \ldots \otimes f_{i_k})$$
and the identity $A_\infty$-morphisms are induced by a map
$$\epsilon \colon\thinspace M_\infty \to A_\infty$$ which is given on generators by
$$ \epsilon(f_n) = \begin{cases} \operatorname{id} & n = 1 \\ 0 & n>1 \end{cases}$$
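As a degree check: in the composition of $A_\infty$-morphisms, a splitting of $f_n$ along a composition $i_1 + \ldots + i_k = n$ into $f_k$ applied to $f_{i_1} \otimes \ldots \otimes f_{i_k}$ has total degree $(1-k) + \sum_j (1-i_j) = 1-n = \deg f_n$, so the comultiplication preserves degree. A minimal verification (notation our own):

```python
def deg_f(n):
    # Degree of the generator f_n of the bimodule M-infinity.
    return 1 - n

def splitting_degree(k, parts):
    """Total degree of f_k ⊗ (f_{i_1} ⊗ ... ⊗ f_{i_k}) for a
    composition parts = (i_1, ..., i_k) of n = sum(parts)."""
    assert len(parts) == k
    return deg_f(k) + sum(deg_f(i) for i in parts)
```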
This suggests the following new definition.
\begin{defi}
An {\em operadic pair} is a pair $(\mathcal{P},\mathcal{M})$ where $\mathcal{P}$ is an operad and $\mathcal{M}$ is a counital coalgebra in operadic bimodules over $\mathcal{P}$, with comultiplication $c \colon\thinspace \mathcal{M} \to \mathcal{M} \otimes_{\mathcal{P}} \mathcal{M}$ and counit $\epsilon\colon\thinspace \mathcal{M} \to \mathcal{P}$.
\end{defi}
\begin{defi}
For an operadic pair $(\mathcal{P},\mathcal{M})$, an object of $\operatorname{Alg}(\mathcal{P},\mathcal{M})$ is just a $\mathcal{P}$-algebra. For two such objects $A$ and $B$, with structure maps $\chi_A: \mathcal{P} \to \underline{\operatorname{End}}_{A}$ and $\chi_B: \mathcal{P} \to \underline{\operatorname{End}}_{B}$, a morphism $f$ in $\operatorname{Alg}(\mathcal{P},\mathcal{M})$ is given by a structure map of $\mathcal{P}$-bimodules $\chi_f: \mathcal{M} \to \underline{\operatorname{Hom}}_{A,B}$. The composition is induced by $c$ and the identity morphisms are induced by $\epsilon$.
\end{defi}
Then $(A_\infty,M_\infty)$ is an example of a DG-operadic pair, and the category of $A_\infty$-algebras is precisely $\operatorname{Alg}(A_\infty,M_\infty)$. \\
\begin{rem}
Every operad $\mathcal{P}$ forms a counital coalgebra in bimodules over itself, resulting in a trivial operadic pair $(\mathcal{P},\mathcal{P})$. For this pair, we have $\operatorname{Alg}(\mathcal{P},\mathcal{P}) = \operatorname{Alg}(\mathcal{P})$.
\end{rem}
For an operadic pair $(\mathcal{P},\mathcal{M})$, by its underlying pair we mean the pair $(\mathcal{P},\mathcal{M})$ with the coalgebra structure on $\mathcal{M}$ forgotten. \\
We now repeat the story with colors. Fix the set of colors $\operatorname{Col}$.
\begin{defi}
In the category ${\mathbb{N}}$-$\operatorname{Seq}_{\operatorname{Col}}(\mathsf{C})$ of colored $\mathbb{N}$-sequences in $\mathsf{C}$, an object $\mathcal{P}$ is a collection of $\mathcal{P}(c_1,\ldots,c_k;c) \in \mathsf{C}$ for all tuples $c_1,\ldots, c_k, c$ with $c_i$ and $c$ in $\operatorname{Col}$. Here, $c_i$ are called input colors and $c$ is called the output color. For $\mathcal{P}$ and $\mathcal{Q}$ in ${\mathbb{N}}$-$\operatorname{Seq}_{\operatorname{Col}}(\mathsf{C})$, their tensor product $\mathcal{P} \odot \mathcal{Q}$ is given by
\begin{align*}
& (\mathcal{P} \odot \mathcal{Q})(c_1, \ldots, c_n;c) = \\
& \bigoplus_{\substack{i_1+ \ldots +i_k = n \\ c'_1, \ldots, c'_k \in \operatorname{Col}}} \mathcal{P}(c'_1,\ldots,c'_k;c) \otimes \mathcal{Q}(c_1,\ldots, c_{i_1};c'_1) \otimes \ldots \otimes \mathcal{Q}(c_{n-i_k+1},\ldots, c_n; c'_k)
\end{align*}
\end{defi}
Colored operads and colored operadic bimodules are defined as algebras and bimodules in this new monoidal category.
\begin{defi}
Let $\{X_c \}_{c \in \operatorname{Col}}$ be a collection of objects in $\mathsf{C}$. The colored operad $\underline{\operatorname{End}}_{\{X_c \}}$ is defined by
$$\underline{\operatorname{End}}_{\{X_c \}}(c_1,\ldots,c_n;c) = \underline{\operatorname{Hom}}_{\mathsf{C}}(X_{c_1} \otimes \ldots \otimes X_{c_n},X_c)$$
with operadic structure given by compositions.
\end{defi}
\begin{defi}
An algebra over a colored operad $\mathcal{P}$ is a collection of objects $\{X_c \}_{c \in \operatorname{Col}}$ with a map of operads $\mathcal{P} \to \underline{\operatorname{End}}_{\{X_c \}}$. If $\{X_c \}$ and $\{Y_c \}$ are $\mathcal{P}$-algebras, then their map in $\operatorname{Alg}(\mathcal{P})$ is a collection of maps $f_c \colon\thinspace X_c \to Y_c$ such that for every tuple $(c_1,\ldots,c_n,c)$ the following diagram commutes.
\[ \begin{tikzcd}[column sep=3 cm]
\mathcal{P}(c_1,\ldots,c_n;c) \otimes \bigotimes_{i=1}^n X_{c_i} \arrow{r}{\operatorname{id} \otimes \bigotimes_{i=1}^n f_{c_i}} \arrow{d}{} & \mathcal{P}(c_1,\ldots,c_n;c) \otimes \bigotimes_{i=1}^n Y_{c_i} \arrow{d}{} \\%
X_c \arrow{r}{f_c} & Y_c
\end{tikzcd}
\]
\end{defi}
\begin{defi}
Let $\{X_c \}_{c \in \operatorname{Col}}$ and $\{Y_c \}_{c \in \operatorname{Col}}$ be two collections of objects in $\mathsf{C}$. The colored $\mathbb{N}$-sequence $\underline{\operatorname{Hom}}_{\{X_c \},\{Y_c \}}$ is defined by
$$\underline{\operatorname{Hom}}_{\{X_c \},\{Y_c \}} (c_1,\ldots,c_n;c) = \underline{\operatorname{Hom}}_{\mathsf{C}}(X_{c_1} \otimes \ldots \otimes X_{c_n},Y_c)$$
It has a natural structure of a right module over $\underline{\operatorname{End}}_{\{X_c \}}$ and of a left module over $\underline{\operatorname{End}}_{\{Y_c \}}$ given by compositions.
\end{defi}
The definition of an operadic pair can now be repeated verbatim. \\
In the rest of the paper we will only be interested in the case $\operatorname{Col} = \{\mathbf{a},\mathbf{m}\}$, with $\mathbf{a}$ for {\em algebra} and $\mathbf{m}$ for {\em module}. The simplest example of a colored DG-operad is $Ass^{\operatorname{Col}}$, which has $Ass^{\operatorname{Col}}(\mathbf{a},\ldots,\mathbf{a};\mathbf{a}) = k$, $Ass^{\operatorname{Col}}(\mathbf{a},\ldots,\mathbf{a},\mathbf{m};\mathbf{m}) = k$ and $0$ everywhere else. An algebra over this colored operad is a pair $(A,M)$ where $A$ is a DG-algebra and $M$ is a DG-module over $A$. Similarly to the non-colored case, the operad $Ass^{\operatorname{Col}}$ has a standard resolution $A_\infty^{\operatorname{Col}}$.
\begin{defi}
$A_\infty^{\operatorname{Col}}$ is generated by operations $\mu_n^{\mathbf{a}} \in A_\infty^{\operatorname{Col}}(\mathbf{a}^n;\mathbf{a})$ of degree $2-n$ and $\mu_n^{\mathbf{m}} \in A_\infty^{\operatorname{Col}}({\mathbf{a}}^{n-1},\mathbf{m};\mathbf{m})$ of degree $2-n$, with differentials
$$d(\mu_n^\mathbf{a}) = \sum_{i+j+k = n} \mu^\mathbf{a}_{i+1+k}(\operatorname{id}_\mathbf{a}^{\otimes i} \otimes \mu^\mathbf{a}_j \otimes \operatorname{id}_\mathbf{a}^{\otimes k})$$
$$d(\mu_n^\mathbf{m}) = \sum_{\substack{i+j+k = n \\ j \geq 1,\, k \geq 1}} \mu^\mathbf{m}_{i+1+k}(\operatorname{id}_\mathbf{a}^{\otimes i} \otimes \mu^\mathbf{a}_j \otimes \operatorname{id}_\mathbf{a}^{\otimes (k-1)} \otimes \operatorname{id}_\mathbf{m}) + \sum_{\substack{i+j = n \\ j \geq 1}} \mu^\mathbf{m}_{i+1}( \operatorname{id}_\mathbf{a}^{\otimes i} \otimes \mu_j^\mathbf{m} ) $$
\end{defi}
Again, the correct category of algebras is obtained via the formalism of operadic pairs.
\begin{defi}
The operadic bimodule $M_\infty^{\operatorname{Col}}$ is generated over $A_\infty^{\operatorname{Col}}$ by $f^\mathbf{a}_n$ and $f^\mathbf{m}_n$, with differentials
$$d(f_n^\mathbf{a}) = \sum f^\mathbf{a}_{r+1+t} (\operatorname{id}_\mathbf{a}^{\otimes r} \otimes \mu^\mathbf{a}_s \otimes \operatorname{id}_\mathbf{a}^{\otimes t}) + \sum \mu^\mathbf{a}_r (f^\mathbf{a}_{i_1}\otimes \ldots \otimes f^\mathbf{a}_{i_r})$$
\begin{align*}
& d(f^\mathbf{m}_n) = \sum f^\mathbf{m}_{r+1+t} (\operatorname{id}_\mathbf{a}^{\otimes r} \otimes \mu^\mathbf{a}_s \otimes \operatorname{id}_\mathbf{a}^{\otimes (t-1)} \otimes \operatorname{id}_\mathbf{m}) + \sum \mu^\mathbf{m}_r (f^\mathbf{a}_{i_1}\otimes \ldots \otimes f^\mathbf{a}_{i_{r-1}} \otimes f^\mathbf{m}_{i_r}) + \\
& + \sum f^\mathbf{m}_{r+1} (\operatorname{id}_\mathbf{a}^{\otimes r} \otimes \mu^\mathbf{m}_t)
\end{align*}
The comultiplication $c\colon\thinspace M_\infty^{\operatorname{Col}} \to M_\infty^{\operatorname{Col}} \otimes_{ A_\infty^{\operatorname{Col}}} M_\infty^{\operatorname{Col}} $ is given on generators by
$$ c(f^\mathbf{a}_n) = \sum f^\mathbf{a}_i \otimes f^\mathbf{a}_{n-i+1}$$
$$ c(f^\mathbf{m}_n) = \sum f^\mathbf{m}_i \otimes f^\mathbf{m}_{n-i+1}$$
The counit $\epsilon \colon\thinspace M_\infty^{\operatorname{Col}} \to A_\infty^{\operatorname{Col}}$ is given on generators by
$$ \epsilon(f^\mathbf{a}_n) = \begin{cases} \operatorname{id}_\mathbf{a} & n = 1 \\ 0 & n>1 \end{cases}$$
$$ \epsilon(f^\mathbf{m}_n) = \begin{cases} \operatorname{id}_\mathbf{m} & n = 1 \\ 0 & n>1 \end{cases}$$
This makes $(A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$ an operadic pair.
\end{defi}
In this paper, we are mainly interested in a certain quotient of $(A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$.
\begin{defi} Let $\Omega$ be the quotient of $A_\infty^{\operatorname{Col}}$ by the ideal $I$ generated by all $\mu^\mathbf{a}_i$ for $i > 2$. Let $T$ be the further quotient of $M_\infty^{\operatorname{Col}}/I$ by the subbimodule generated by $f^\mathbf{a}_i$ for $i>1$.
\end{defi}
$(\Omega,T)$ remains an operadic pair. \\
Albeit in a different language, the operadic pair $(\Omega, T)$ was closely studied in \cite{ACD} in connection with representations up to homotopy. There the authors developed a convenient forest notation for bases of $\Omega$ and $T$, which we use in the main theorem of this paper. \\
\begin{defi}
A \emph{short forest} is a sequence of planar trees of depth 2. Inner edges are called branches and outer edges are called leaves. For a short forest $F$, let $l(F)$ be the number of leaves, let $b(F)$ be the number of branches and let $t(F)$ be the number of trees.
\end{defi}
Below is an example of a short forest $F$ with $l(F) = 12$, $b(F) = 8$ and $t(F) = 5$. The roots are depicted as connected with a horizontal line, the ground.
\begin{center}
\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] [[][]] ]
[{},name = two [[]] [[]] [[]] ]
[{},name = three [[][][][]] ]
[{},name = four [[]] ]
[{},name = five [[]] ]
]
\draw[black] (one) -- (two) -- (three) -- (four) -- (five);
\end{forest}
\end{center}
For a forest $F$, denote by $F^i$ its $i$-th tree. Write $F^i$ as $(F^i_1, \ldots, F^i_{b_i})$, where $F^i_j$ denotes the number of leaves on the $j$-th branch of the $i$-th tree.
\begin{prop}
\label{isom}
A basis of $\Omega (\mathbf{a}^i,\mathbf{m};\mathbf{m})$ is given by short forests $F$ with $l(F) = i$, where the degree of the forest is $t(F)-b(F)$.
\end{prop}
\begin{proof}
To the tree $F^i = (F^i_1, \ldots, F^i_{b_i})$ we assign the operation
$$ \mu(F^i) = \mu^\mathbf{m}_{b_i}\left((\mu^\mathbf{a}_2)^{F^i_1-1},\ldots, (\mu^\mathbf{a}_2)^{F^i_{b_i}-1},\operatorname{id}_\mathbf{m}\right)$$
The powers of $\mu^\mathbf{a}_2$ are well-defined since $\mu^\mathbf{a}_2$ is associative. We then build the operation for the whole forest by composing $\mu(F^i)$ for all the trees in the same order as the trees appear in the forest.
\end{proof}
Under this isomorphism, the example forest above corresponds to the operation
$$\mu^\mathbf{m}_2 \Bigg(\operatorname{id}^\mathbf{a}, \mu^\mathbf{a}_2, \mu^\mathbf{m}_3 \bigg(\operatorname{id}^\mathbf{a},\operatorname{id}^\mathbf{a},\operatorname{id}^\mathbf{a}, \mu^\mathbf{m}_1 \Big((\mu^\mathbf{a}_2)^3, \mu^\mathbf{m}_1 \big( \operatorname{id}^\mathbf{a}, \mu^\mathbf{m}_1 \big) \Big) \bigg) \Bigg) $$
The differential of $\Omega$ in this basis can be described in terms of two forest transformations, $U$ (for "unite") and $S$ (for "separate"). Let $F$ be a forest with a chosen pair of branches $B = (B_l,B_r)$ belonging to the same tree $T$.
\begin{enumerate}
\item $U(F,B)$ is the forest where $B_l$ and $B_r$ are replaced with a single branch carrying the leaves of both $B_l$ and $B_r$.
\item $S(F,B)$ is the forest where $T$ is replaced by two separate trees, $T_l$ with branches of $T$ up to $B_l$ and $T_r$ with branches of $T$ starting from $B_r$.
\end{enumerate}
For example, consider the following forest with $B = (B_l,B_r)$ highlighted in green:
\begin{center}
\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] [[][]] ]
[{},name = two [[]] [[]] ]
[{},name = three [ {}, for tree = {fill = green, edge = {color = green}} [][]] [ {}, for tree = {fill = green, edge = {color = green}} [] [] []] [[]] ]
]
\draw[black] (one) -- (two) -- (three);
\end{forest}
\end{center}
Then $U(F,B)$ and $S(F,B)$ are the two forests below.
\begin{center}
\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] [[][]] ]
[{},name = two [[]] [[]] ]
[{},name = three [ [][] [] [] []] [[]] ]
]
\draw[black] (one) -- (two) -- (three);
\end{forest}
\end{center}
\begin{center}
\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] [[][]] ]
[{},name = two [[]] [[]] ]
[{},name = three [ [][]]]
[{},name = four [ [] [] []] [[]] ]
]
\draw[black] (one) -- (two) -- (three) -- (four);
\end{forest}
\end{center}
\begin{prop}
Under the correspondence of Prop. \ref{isom}, the differential of $\Omega$ is given by
$$d(F) = \sum_{B = (B_l,B_r)} \pm U(F,B) + \sum_{B = (B_l,B_r)} \pm S(F,B)$$
where in both sums $B$ runs over the set of neighbouring branch pairs. The operadic composition is given by forest concatenation when composing two-colored operations or by leaf multiplication when composing with a one-colored operation.
\end{prop}
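For hand computations it can be convenient to mechanize the two transformations. Below is a minimal sketch; the encoding (a short forest as a list of trees, each tree the list of its branch leaf counts, so the example forest above is `[[1, 2], [1, 1, 1], [4], [1], [1]]`) and the function names are our own, and signs are omitted, as in the proposition.

```python
def unite(forest, t, b):
    # U(F, B): merge branches b and b+1 of tree t into one branch
    # carrying the leaves of both.
    new = [list(tree) for tree in forest]
    new[t][b:b + 2] = [new[t][b] + new[t][b + 1]]
    return new

def separate(forest, t, b):
    # S(F, B): split tree t into two trees, one with the branches
    # up to b, one with the branches from b+1 onward.
    return ([list(tr) for tr in forest[:t]]
            + [list(forest[t][:b + 1]), list(forest[t][b + 1:])]
            + [list(tr) for tr in forest[t + 1:]])

def differential_terms(forest):
    # All terms of d(F): one U and one S for every neighbouring
    # branch pair inside a tree; signs are omitted.
    terms = []
    for t, tree in enumerate(forest):
        for b in range(len(tree) - 1):
            terms.append(unite(forest, t, b))
            terms.append(separate(forest, t, b))
    return terms
```

For instance, `differential_terms([[1, 2]])` returns the two expected terms `[[3]]` and `[[1], [2]]`, while a forest all of whose trees have a single branch is a cycle.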
We now explain a similar description for $T$.
\begin{prop}
\label{isom2}
A basis of $T(\mathbf{a}^i,\mathbf{m};\mathbf{m})$ is given by triples $(F,T,G)$ where $F$ and $G$ are short forests, $T$ is a short tree, and the total number of leaves is $i$.
\end{prop}
\begin{proof}
To the tree $F^i = (F^i_1, \ldots, F^i_{b_i})$ in the left forest, we assign the operation
$$ \mu(F^i) = \mu^\mathbf{m}_{b_i}\left((\mu^\mathbf{a}_2)^{F^i_1-1},\ldots, (\mu^\mathbf{a}_2)^{F^i_{b_i}-1},\operatorname{id}_\mathbf{m} \right)$$
To the middle tree $T = (T_1, \ldots, T_t)$, we assign the operation
$$ \mu(T) = f^\mathbf{m}_{t}\left((\mu^\mathbf{a}_2)^{T_1-1},\ldots, (\mu^\mathbf{a}_2)^{T_t-1},\operatorname{id}_\mathbf{m} \right)$$
To the tree $G^i = (G^i_1, \ldots, G^i_{c_i})$ in the right forest, we assign the operation
$$ \mu(G^i) = \mu^\mathbf{m}_{c_i}\left((\mu^\mathbf{a}_2)^{G^i_1-1},\ldots, (\mu^\mathbf{a}_2)^{G^i_{c_i}-1},\operatorname{id}_\mathbf{m} \right)$$
We then build the operation for the whole triple by composing $\mu(F^i)$, $\mu(T)$ and $\mu(G^i)$ in the same order as the trees appear in the triple.
\end{proof}
Informally, the right forest $G$ is what happens before we map, the middle tree $T$ is the map itself, and the left forest $F$ is what happens after we map. Below is an example triple corresponding to $\mu^\mathbf{m}_1 \left( \operatorname{id}_\mathbf{a},\mu^\mathbf{m}_1 \left( \mu^\mathbf{a}_2,f^\mathbf{m}_1 \left( \mu^\mathbf{a}_2,\operatorname{id}_\mathbf{m} \right) \right ) \right)$.
\[ \left ( \vcenter{\hbox{ \begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] ]
[{},name = two [[][]] ]
]
\draw[black] (one) -- (two);
\end{forest},
\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[ [[][]] ]
]
\end{forest}, 1 }} \right ) \]
To describe the differential, we need to modify the definition of the transformation $S$ in the case when it is applied to a branch pair in the middle tree, because two trees cannot both remain in the middle. Set $S_l( (F,T,G),B) = (F \circ T_l, T_r, G)$ and $S_r( (F,T,G),B) = (F, T_l, T_r \circ G)$. Now for any neighbouring pair of branches $B$ in $(F,T,G)$, define
$$ S((F,T,G),B) = \begin{cases} (S(F,B), T, G) & B \subset F \\ S_l((F,T,G),B) + S_r((F,T,G),B) & B \subset T \\ (F, T, S(G,B)) & B \subset G \end{cases} $$
\begin{prop}
Under the correspondence of Prop. \ref{isom2}, the differential of $T$ is given by
\begin{align*}
& d(F,T,G) = \\
& \sum_{B} \pm U((F,T,G),B) + \sum_{B} \pm S((F,T,G),B) + (F \circ T, 1, G) + (F, 1, T \circ G)
\end{align*}
where in both sums $B$ runs over the set of neighbouring branch pairs anywhere in the triple. The operadic bimodule structure is given either by forest concatenation when composing with operations in $\Omega(\mathbf{a}^n,\mathbf{m};\mathbf{m})$ or by leaf multiplication when composing with operations in $\Omega(\mathbf{a}^n;\mathbf{a})$.
\end{prop}
\section{Associahedra and multiplihedra}
\subsection{Associahedra}
It is a well-known fact that the DG-operad $A_\infty$ is obtained by the functor of cellular chains from a CW-operad of {\em Stasheff associahedra} (see \cite{Sta} and \cite{Tam}).
\begin{defi}
An abstract polytope $\mathcal{K}(n)$ has faces corresponding to planar trees with $n$ leaves. The face $T$ is a subface of the face $T'$ if $T'$ can be obtained from $T$ by contracting inner edges. Viewed as an $\mathbb{N}$-sequence in the category of CW-complexes, $\mathcal{K}$ has an operadic structure given by tree grafting.
\end{defi}
\begin{prop}
\label{assoc}
$C_*(\mathcal{K}) = A_\infty$. Under this isomorphism, the $n$-corolla corresponds to $\mu_n$.
\end{prop}
$\mathcal{K}(1)$ and $\mathcal{K}(2)$ are points. The pictures below show the interval $\mathcal{K}(3)$ and the pentagon $\mathcal{K}(4)$, with faces labelled by planar trees.
\begin{center}
\begin{tikzpicture}[
vertex/.style={circle,draw,minimum size=8mm,inner sep=0pt, scale = 1.5},
edge/.style={circle,draw,minimum size=6mm,inner sep=0pt, scale = 1.5}
]
\node[vertex] (v0) at (0,0) {\RS{{L{lr}}{Rr}}};
\node[vertex] (v1) at (6,0) {\RS{{Ll}{R{rl}}}};
\draw (v0) -- node[edge,above=1pt]{\RS{{Ll}{Ii}{Rr}}} (v1);
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[
vertex/.style={circle,draw,minimum size=8mm,inner sep=0pt, scale = 1.5},
edge/.style={circle,draw,minimum size=6mm,inner sep=0pt, scale = 1.5}
]
\newcommand\R{3cm}
\node[circle,draw,scale = 2] at (0,0) {\RS{{Ll}{Qq}{Zz}{Rr}}};
\node[vertex](v0) at (0*72+36:\R) {\RS{{MMm}{S{Mm}S{ms}}}};
\node[vertex](v1) at (1*72+36:\R) {\RS{{L{li}}{R{ir}}}};
\node[vertex](v2) at (2*72+36:\R) {\RS{{SSs}{M{Ss}M{sm}}}};
\node[vertex](v3) at (3*72+36:\R) {\RS{{L{Ll}{R{lr}}}{RRr}}};
\node[vertex](v4) at (4*72+36:\R) {\RS{{R{Rr}{L{rl}}}{LLl}}};
\draw (v0) edge node[edge,above=1pt,xshift=2pt]{\usebox\sba} (v1);
\draw (v1) edge node[edge,left,yshift=4pt]{\usebox\sbb} (v2);
\draw (v2) -- node[edge,left,yshift=-4pt]{\usebox\sbc} (v3);
\draw (v3) -- node[edge,below=1pt,xshift=2pt]{\usebox\sbd} (v4);
\draw (v4) -- node[edge,right=1pt]{\usebox\sbe} (v0);
\end{tikzpicture}
\end{center}
It is a straightforward observation that the $(\mathbf{a},\mathbf{m})$-colored operad $A_\infty^{\operatorname{Col}}$ can also be obtained from associahedra via cellular chains. Precisely, let $\mathcal{K}^{\operatorname{Col}}$ be a colored CW-operad with \begin{align*}
& \mathcal{K}^{\operatorname{Col}}(\mathbf{a}^n;\mathbf{a}) = \mathcal{K}(n);\\
& \mathcal{K}^{\operatorname{Col}}(\mathbf{a}^{n-1},\mathbf{m};\mathbf{m}) = \mathcal{K}(n);\\
& \emptyset \text{ elsewhere}.
\end{align*}
Then $C_*(\mathcal{K}^{\operatorname{Col}}) = A_\infty^{\operatorname{Col}}$, with the $n$-corolla of $ \mathcal{K}^{\operatorname{Col}}(\mathbf{a}^n;\mathbf{a})$ corresponding to $\mu_n^\mathbf{a}$ and with the $n$-corolla of $\mathcal{K}^{\operatorname{Col}}(\mathbf{a}^{n-1},\mathbf{m};\mathbf{m})$ corresponding to $\mu_n^\mathbf{m}$. \\
\subsection{Multiplihedra} $M_\infty$, the operadic bimodule over $A_\infty$, is also obtained by the functor of cellular chains from polytopes $\mathcal{J}$ called {\em multiplihedra} that form a CW-operadic bimodule over $\mathcal{K}$. According to \cite{For}, multiplihedra admit a description in terms of trees, similar to the description of associahedra.
\begin{defi}
A painted planar tree $T$ is a planar tree, possibly with single-input vertices, together with a selected subtree $T_{\operatorname{painted}}$ such that
\begin{itemize}
\item the root of $T$ belongs to $T_{\operatorname{painted}}$
\item the leaves of $T$ do not belong to $T_{\operatorname{painted}}$
\item every single-input vertex of $T$ is a leaf of $T_{\operatorname{painted}}$
\item for every vertex of $T_{\operatorname{painted}}$ either all inputs are in $T_{\operatorname{painted}}$ or all inputs are not in $T_{\operatorname{painted}}$
\end{itemize}
\end{defi}
The picture below shows some examples of such painted trees.
\begin{center}
\begin{tikzpicture}
\node[scale = 2]{\usebox\rbo};
\node[right = 2cm, scale = 2]{\usebox\mbo};
\node[left = 2cm, scale = 2]{\usebox\lbo};
\end{tikzpicture}
\end{center}
\begin{defi}
For a painted tree $T$, the admissible contractions are:
\begin{enumerate}
\item contract an inner edge of $T$ that is unpainted. For example, $$ \RS{BB {LL{Ll}{Rr}} {RRRr}} \longrightarrow \RS{BB {LLl}{IIi}{RRr}}$$
\item contract an edge that is inner to $T_{\operatorname{painted}}$. For example,
$$ \RS{BB {AA{Al}{Cr}} {CCRr}} \longrightarrow \RS{BB {AAl}{BBi}{CCr}}$$
\item contract a corolla of painted leaves. For example,
$$ \RS{BB {AA{Al}{Cr}} {CCRr}} \longrightarrow \RS{BB {AA{Ll}{Rr}} {CCRr}}$$
\end{enumerate}
\end{defi}
\begin{defi}
An abstract polytope $\mathcal{J}(n)$ has faces corresponding to painted planar trees. The face $T$ is a subface of the face $T'$ if $T'$ can be obtained from $T$ by a sequence of admissible contractions. The operadic bimodule structure is again given by tree grafting: for the left module structure the grafted tree remains unpainted, and for the right module structure the grafted tree receives the maximal painting.
\end{defi}
Below are examples of left and right grafting:
$$ \RS{II {LLl} {RRr}} \circ_1 \RS{BB {AA{Al}{Cr}} {CCRr}} = \RS{BB {AA {AA{Al}{Cr}} {CCRr} } {CCCCRr} }$$
$$\RS{BB {AA{Al}{Cr}} {CCRr}} \circ_{3} \RS{II {LLl} {RRr}} = \RS{BB {AAA{Al}{Cr}} {CCR{Ll} {Rr}}}$$
The picture below illustrates the interval $\mathcal{J}(2)$ and the hexagon $\mathcal{J}(3)$, with faces labelled by painted trees.
\begin{center}
\begin{tikzpicture}[
vertex/.style={circle,draw,minimum size=8mm,inner sep=0pt, scale = 1.5},
edge/.style={circle,draw,minimum size=6mm,inner sep=0pt, scale = 1.5}
]
\node[vertex] (v0) at (0,0) {\RS{B{Al}{Cr}}};
\node[vertex] (v1) at (6,0) {\RS{BI{l}{r}}};
\draw (v0) -- node[edge,above=1pt]{\RS{B{l}{r}}} (v1);
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[
vertex/.style={circle,draw,minimum size=10mm,inner sep=0pt, scale = 1.5},
edge/.style={circle,draw,minimum size=6mm,inner sep=0pt, scale = 1.5}
]
\newcommand\R{3cm}
\node[circle,draw,scale = 2] at (0,0) {\RS{Blir}};
\node[vertex](v0) at (0:\R) {\RS{BI{L{l}{r}}{Rr}}};
\node[vertex](v1) at (60:\R) {\RS{B{AL{l}{r}}{CRr}}};
\node[vertex](v2) at (120:\R) {\RS{B{A{Al}{Cr}}{CCr}}};
\node[vertex](v3) at (180:\R) {\RS{B{AAl}{C{Al}{Cr}}}};
\node[vertex](v4) at (240:\R) {\RS{B{ALl}{CR{lr}}}};
\node[vertex](v5) at (300:\R) {\RS{BI{Ll}{R{lr}}}};
\draw (v0) -- node[above right = 2pt, edge]{\RS{B{L{l}{r}}{Rr}}} (v1);
\draw (v1) -- node[above = 2pt, edge]{\RS{B{A{l}{r}}{Cr}}} (v2);
\draw (v2) -- node[above left = 2pt, edge] {\RS{B{Al}{Bi}{Cr}}} (v3);
\draw (v3) -- node[below left = 2pt, edge] {\RS{B{Al}{C{lr}}}}(v4);
\draw (v4) -- node[below = 2pt,edge]{\RS{B{Ll}{R{lr}}}} (v5);
\draw (v5) -- node[below right = 2pt,edge]{\RS{BIlir}} (v0);
\end{tikzpicture}
\end{center}
Let $C_n$ denote the painted tree labelling the top-dimensional cell of $\mathcal{J}(n)$. For example, $C_3 = \RS{Blir}$.
\begin{prop} The isomorphism of Prop. \ref{assoc} extends to
$C_*(\mathcal{K},\mathcal{J}) = (A_\infty, M_\infty)$, where by $(A_\infty, M_\infty)$ we mean just the underlying pair. Under this isomorphism, the corolla $C_i$ corresponds to $f_i$.
\end{prop}
\begin{rem}
$(\mathcal{K},\mathcal{J})$ does not form a CW-operadic pair because the map $c \colon\thinspace C_*(\mathcal{J}) \to C_*(\mathcal{J}) \otimes_{C_*(\mathcal{K})} C_*(\mathcal{J})$ involves sums, and one cannot add maps of CW-complexes. In general, the notion of operadic pairs doesn't seem to be well adapted to non-additive categories like $Top$. However, it is often useful to realize the underlying pair of a DG-operadic pair as cellular chains on a CW-operad with a CW-bimodule.
\end{rem}
Similarly to the case of associahedra, we observe that the $(\mathbf{a},\mathbf{m})$-colored bimodule $M_\infty^{\operatorname{Col}}$ over $A_\infty^{\operatorname{Col}}$ can also be obtained from multiplihedra via cellular chains. Precisely, let $\mathcal{J}^{\operatorname{Col}}$ be a colored CW-sequence with \begin{align*}
& \mathcal{J}^{\operatorname{Col}}(\mathbf{a}^n;\mathbf{a}) = \mathcal{J}(n);\\
& \mathcal{J}^{\operatorname{Col}}(\mathbf{a}^{n-1},\mathbf{m};\mathbf{m}) = \mathcal{J}(n);\\
& \emptyset \text{ elsewhere}.
\end{align*}
Then $C_*(\mathcal{K}^{\operatorname{Col}},\mathcal{J}^{\operatorname{Col}}) = (A_\infty^{\operatorname{Col}}, M_\infty^{\operatorname{Col}})$ as underlying pairs, with the corollas again corresponding to the generators $f_n^\mathbf{a}$ and $f_n^\mathbf{m}$.
\subsection{Contraction problem}
In the category $\mathsf{DGVect}(\mathsf{k})$ there is a projection of operadic pairs $(A_\infty^{\operatorname{Col}},M_\infty^{\operatorname{Col}}) \to (\Omega,T)$. On the polyhedral side, this should correspond to some contraction of associahedra and multiplihedra. \\
The picture below illustrates how the pentagon $\mathcal{K}(4)$ contracts to a square $I^2$ once the algebra is made strictly associative. For readability we label vertices not with binary trees but with expressions in the letters $a$, $b$, $c$, $m$.
\begin{center}
\begin{tikzpicture}[
vertex/.style={minimum size=8mm,inner sep=0pt}]
\newcommand\R{2}
\node[vertex](v0) at (0*72+36:\R) {a(b(cm))};
\node[vertex](v1) at (1*72+36:\R) {(ab)(cm)};
\node[vertex](v2) at (2*72+36:\R) {((ab)c)m};
\node[vertex](v3) at (3*72+36:\R) {(a(bc))m};
\node[vertex](v4) at (4*72+36:\R) {a((bc)m)};
\draw (v3) -- (v4) -- (v0) -- (v1) -- (v2);
\draw[ultra thick, red] (v2) -- (v3);
\end{tikzpicture}
\end{center}
The polyhedral contraction behind $A_\infty^{\operatorname{Col}} \to \Omega$ was computed, albeit in a different language, in \cite{ACD}.
\begin{prop}
\label{cubes}
$\Omega(\mathbf{a}^n,\mathbf{m};\mathbf{m}) \simeq C_*(I^{n-1})$, and the projection $A_\infty^{\operatorname{Col}} \to \Omega$ comes from a projection of associahedra to cubes.
\end{prop}
\begin{proof}
For a cube $I^{n-1}$, every face can be written as a word in letters $a$, $b$ and $c$, where $a$
is interpreted as $\{0\}$, $b$ is interpreted as $[0,1]$, $c$
is interpreted as $\{1\}$, and the word is interpreted as their product. For example, for the square the top-dimensional cell is $bb$, the initial vertex is $aa$, and the right side is $cb$. Now, given a short forest, we form its word by setting the $i$-th letter equal to
\begin{itemize}
\item $a$, if the leaves with numbers $i$ and $i+1$ belong to the same branch
\item $b$, if the leaves with numbers $i$ and $i+1$ belong to different branches of the same tree
\item $c$, if the leaves with numbers $i$ and $i+1$ belong to different trees.
\end{itemize}
\end{proof}
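The word assignment in the proof is easy to mechanize. The sketch below uses the same illustrative encoding as before (a short forest as a list of trees, each tree the list of its branch leaf counts); the dimension of the resulting face is the number of letters $b$ in the word.

```python
def cube_word(forest):
    # Assigns to a short forest F its face of the cube I^(l(F)-1),
    # as a word in a, b, c; the i-th letter records how leaves i, i+1 sit:
    #   a: on the same branch
    #   b: on different branches of the same tree
    #   c: on different trees
    word = []
    for t, tree in enumerate(forest):
        for b, leaves in enumerate(tree):
            word.append("a" * (leaves - 1))   # consecutive leaves of one branch
            if b < len(tree) - 1:
                word.append("b")              # passing to the next branch
        if t < len(forest) - 1:
            word.append("c")                  # passing to the next tree
    return "".join(word)
```

For the example forest of the previous section, `cube_word([[1, 2], [1, 1, 1], [4], [1], [1]])` returns `bacbbcaaacc`, a three-dimensional face of $I^{11}$, matching $|t(F)-b(F)| = 3$.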
Note that the above isomorphisms actually arrange the cubes into a CW-operad. \\
The corresponding contraction of multiplihedra was not previously known, and its computation is the goal of the current paper. The picture below illustrates the two-dimensional case, where a hexagon contracts to a pentagon. Warning: this pentagon is not an associahedron, but actually a freehedron.
\begin{center}
\begin{tikzpicture}[
vertex/.style={minimum size=8mm,inner sep=0pt}]
\newcommand\R{2.5cm}
\node[vertex](v0) at (0:\R) {f(a)(f(b)f(m))};
\node[vertex](v1) at (60:\R) {(f(a)f(b))f(m)};
\node[vertex](v2) at (120:\R) {f(ab)f(m)};
\node[vertex](v3) at (180:\R) {f((ab)m)};
\node[vertex](v4) at (240:\R) {f(a(bm))};
\node[vertex](v5) at (300:\R) {f(a)f(bm)};
\draw (v0) -- (v1) -- (v2) -- (v3) -- (v4) -- (v5) -- (v0);
\draw[ultra thick, red] (v1) -- (v2);
\end{tikzpicture}
\end{center}
\section{Freehedra}
In this section we present freehedra directly following \cite{San} and \cite{RS}. Consequently we do not include any proofs, but instead include many details and pictures. There are three definitions: as truncations of simplices, as subdivisions of cubes, and a purely combinatorial one. The first definition is not used in the main arguments of this paper, so the reader can safely skip it.
\subsection{Freehedra as truncations of simplices}
The first way to obtain freehedra is to cook them from simplices by applying two sequences of truncations. \\
Consider the simplex $\Delta^n$ in your favourite embedding into $\mathbb{R}^n$. We now define the first sequence of truncations. Let the original vertices be labelled $0$, $1$, $\ldots$, $n$. After each truncation, some new vertices are cut from edges by the truncating hyperplane; the vertex cut from the edge $a \to b$ is denoted $(ab)$. \\
\begin{enumerate}
\item Let $Q_0$ be a hyperplane that separates $0$ from the other vertices. Remove everything connected to $0$. The resulting object is a simplicial prism. Its first simplicial face $S_1$ has vertices $(01), \ldots, (0n)$, and the second simplicial face $S_2$ has vertices $1, \ldots, n$.
\item The second hyperplane is like $Q_0$ but for the $(n-1)$-simplices $S_1$ and $S_2$ simultaneously. It separates $(01)$ and $1$ from the other vertices. Denote it by $Q_1$ and remove everything connected to $(01)$ and $1$. \\
\end{enumerate}
To define all the truncations inductively, denote by $L(k)$ the set of vertices that $Q_k$ separates from the rest. We see that $L(0) = \{0 \}$ and $L(1) = \{(01), 1 \}$. Now, having an expression for a vertex $v \in L(i-1)$, let $l_i(v)$ be the same expression with $i-1$ replaced by $i$. For example, $l_2((01)) = (02)$. Now $L(i)$ is defined to consist of vertices $l_i(v)$ and $(vl_i(v))$ for all $v \in L(i-1)$. This defines $Q_i$, and we proceed to the next step by removing everything on the side of $L(i)$. The final truncation is by $Q_{n-2}$. We leave it to the interested reader to verify that this sequence of truncations is well-defined. \\
The second sequence is the same but starting at $n$ instead of $0$. Denote the hyperplanes by $P_0$, $\ldots$, $P_{n-2}$.\\
The pictures below show $\mathcal{F}_2$ and $\mathcal{F}_3$ cut out of a triangle and a tetrahedron respectively. Note that applying only one of the two truncation sequences yields cubes.
\begin{center}
\begin{tikzpicture}
\draw[thin] (1.5,1.5) -- (3,3) -- (4.5,1.5);
\draw[densely dashdotted] (1.5,1.5) -- (2,0);
\draw[densely dashdotted] (4.5,1.5) -- (4,0);
\draw[dotted] (0,0) -- (1.5,1.5);
\draw[dotted] (0,0) -- (2,0);
\draw[thin] (2,0) -- (4,0);
\draw[dotted] (4,0) -- (6,0);
\draw[dotted] (6,0) -- (4.5,1.5);
\filldraw[black] (1.5,1.5) circle (1.5pt);
\filldraw[black] (3,3) circle (1.5pt);
\filldraw[black] (4.5,1.5) circle (1.5pt);
\filldraw[black] (2,0) circle (1.5pt);
\filldraw[black] (4,0) circle (1.5pt);
\fill[gray,opacity = 0.3] (2,0) -- (1.5,1.5) -- (3,3) -- (4.5,1.5) -- (4,0) -- cycle;
\end{tikzpicture}
\end{center}
\vskip 1cm
\begin{center}
\begin{tikzpicture}
\draw[dotted] (0,0) -- (2,3);
\draw[dotted] (0,0) -- (2,1.2);
\draw[dotted] (0,0) -- (2,0);
\draw[dotted] (1.23,1.17) -- (1,1.5) -- (1.64,1.31);
\draw[dotted] (2.32,2.81) -- (2,3) -- (2.81,3);
\draw[densely dashdotted] (2,0) -- (1.23,1.17) -- (1.64,1.31) --(2,1.2) -- cycle;
\draw[densely dashdotted] (1.23,1.17) -- (2.32,2.81) -- (2.81,3) -- (1.64,1.31);
\filldraw[black] (1.23,1.17) circle (1.5pt);
\filldraw[black] (2,0) circle (1.5pt);
\filldraw[black] (1.64,1.31) circle (1.5pt);
\filldraw[black] (2.81,3) circle (1.5pt);
\filldraw[black] (2.32,2.81) circle (1.5pt);
\filldraw[black] (2,1.2) circle (1.5pt);
\draw[dotted] (7-0,0) -- (7-2,3);
\draw[dotted] (7-0,0) -- (7-2,1.2);
\draw[dotted] (7-0,0) -- (7-2,0);
\draw[dotted] (7-1.23,1.17) -- (7-1,1.5) -- (7-1.64,1.31);
\draw[dotted] (7-2.32,2.81) -- (7-2,3) -- (7-2.81,3);
\draw[densely dashdotted] (7-2,0) -- (7-1.23,1.17) -- (7-1.64,1.31) --(7-2,1.2) -- cycle;
\draw[densely dashdotted] (7-1.23,1.17) -- (7-2.32,2.81) -- (7-2.81,3) -- (7-1.64,1.31);
\filldraw[black] (7-1.23,1.17) circle (1.5pt);
\filldraw[black] (7-2,0) circle (1.5pt);
\filldraw[black] (7-1.64,1.31) circle (1.5pt);
\filldraw[black] (7-2.81,3) circle (1.5pt);
\filldraw[black] (7-2.32,2.81) circle (1.5pt);
\filldraw[black] (7-2,1.2) circle (1.5pt);
\draw[thin] (2,0) -- (5,0);
\draw[thin] (2,1.2) -- (7-2.32,2.81);
\draw[thin] (7-2.81,3) -- (2.81,3);
\draw[thin] (2.32,2.81) -- (5,1.2);
\fill[gray,opacity = 0.6] (2,0) -- (2,1.2) -- (7-2.32,2.81) -- (7-1.23,1.17) -- (5,0) -- cycle;
\fill[gray,opacity = 0.4] (2,0) -- (2,1.2) -- (1.64,1.31) -- (1.23,1.17) -- cycle;
\fill[gray,opacity = 0.1] (1.64,1.31) -- (1.23,1.17) -- (2.32,2.81) -- (2.81,3) -- cycle;
\fill[gray,opacity = 0.3] (2,1.2) -- (1.64,1.31) -- (2.81,3) -- (7 -2.81,3) -- (7-2.32,2.81) -- cycle;
\end{tikzpicture}
\end{center}
\begin{prop}
Freehedra have two natural projections onto cubes and one natural projection onto simplices.
\end{prop}
\begin{proof}
All three projections are obtained by de-truncation.
\end{proof}
\subsection{Freehedra as subdivisions of cubes}
The second definition of freehedra is inductive. According to it, each freehedron $\mathcal{F}_n$ is a certain subdivision of $\mathcal{F}_{n-1}\times [0,1]$, so every freehedron arises as a subdivision of a cube. \\
We will first present a simplified version of this definition. At each step, the freehedron $\mathcal{F}_n$ will have a distinguished hyperface $X_n$. These distinguished faces are only needed for user-friendliness; in the full version of the definition, at each step Saneblidze keeps track of labels for all hyperfaces.
\begin{defi}
Let $\mathcal{F}_0$ be the point, and let $\mathcal{F}_1$ be the interval $[0,1]$ with distinguished vertex $X_1 = 1$. Assume $\mathcal{F}_{n-1}$ and its distinguished face $X_{n-1}$ are defined. Consider the polyhedron $\mathcal{F}_{n-1} \times [0,1]$, and split its hyperface $X_{n-1} \times [0,1]$ vertically into $X_{n-1} \times [0,\frac{1}{2}]$ and $X_{n-1} \times [\frac{1}{2},1]$. This is $\mathcal{F}_n$. Set $X_n = X_{n-1} \times [\frac{1}{2},1]$.
\end{defi}
The picture below illustrates freehedra in dimensions 1, 2 and 3. Distinguished hyperfaces are highlighted in red.
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0) -- (3,0);
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[red] (3,0) circle (1.5pt);
\draw[thin] (3+4,0) -- (0+4,0) -- (0+4,-3) -- (3+4,-3) -- (3+4,-1.5);
\draw[red] (3+4,-1.5) -- (3+4,-0);
\filldraw[black] (0+4,0) circle (1.5pt);
\filldraw[black] (0+4,-3) circle (1.5pt);
\filldraw[red] (3+4,0) circle (1.5pt);
\filldraw[red] (3+4,-1.5) circle (1.5pt);
\filldraw[black] (3+4,-3) circle (1.5pt);
\fill[gray, opacity = 0.3] (0+4,0) -- (3+4,0) -- (3+4,-3) -- (0+4,-3) -- cycle;
\draw[thin] (8,0) rectangle (11,-3);
\draw[dashed] (8,-3) -- (9,-2) -- (12,-2);
\draw[dashed] (9,-2) -- (9,1);
\filldraw[black] (8,0) circle (1.5pt);
\filldraw[black] (8,-3) circle (1.5pt);
\filldraw[black] (11,0) circle (1.5pt);
\filldraw[black] (11,-3) circle (1.5pt);
\filldraw[black] (9,1) circle (1.5pt);
\filldraw[black] (9,-2) circle (1.5pt);
\filldraw[black] (11.5,-2.5) circle (1.5pt);
\filldraw[black] (12,-2) circle (1.5pt);
\fill[gray,opacity = 0.3] (8,0) rectangle (11,-3);
\fill[gray,opacity = 0.1] (8,0) -- (9,1) -- (12,1) -- (11,0);
\fill[gray,opacity = 0.5] (11,-3) -- (11,0) -- (11.5,0.5) -- (11.5,-2.5) -- cycle;
\fill[gray,opacity = 0.5] (11.5,-2.5) -- (11.5,-1) -- (12,-0.5) -- (12,-2) -- cycle;
\draw[thin] (8,0) -- (9,1) -- (12,1);
\draw[thin] (11,0) -- (11.5,0.5);
\draw[red] (11.5,0.5) -- (12,1) -- (12,1-1.5) --(11.5,-1) -- cycle;
\fill[red, opacity = 0.5] (11.5,0.5) -- (12,1) -- (12,1-1.5) --(11.5,-1) -- cycle;
\draw[thin] (11,-3) -- (11.5,-2.5) -- (11.5,-1);
\draw[thin] (11.5,-2.5) -- (12,-2) -- (12,-0.5);
\filldraw[red] (12,1) circle (1.5pt);
\filldraw[red] (12,-0.5) circle (1.5pt);
\filldraw[red] (11.5,0.5) circle (1.5pt);
\filldraw[red] (11.5,-1) circle (1.5pt);
\end{tikzpicture}
\end{center}
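As a sanity check of the inductive procedure one can count vertices. Passing from $\mathcal{F}_{n-1}$ to $\mathcal{F}_n$, the prism doubles the vertex count, and the vertical split adds one new vertex for every vertex of $X_{n-1}$; since $X_n = X_{n-1} \times [\frac{1}{2},1]$ with $X_1$ a point, the face $X_{n-1}$ has $2^{n-2}$ vertices. The following sketch implements this bookkeeping (our own derivation, not a formula from \cite{San}):

```python
def freehedron_vertex_count(n):
    # v(F_1) = 2.  Passing from F_{n-1} to F_n:
    # the prism F_{n-1} x [0,1] doubles the vertex count, and splitting
    # the face X_{n-1} x [0,1] at height 1/2 adds v(X_{n-1}) = 2^(n-2)
    # new vertices.
    v = 2
    for k in range(2, n + 1):
        v = 2 * v + 2 ** (k - 2)
    return v
```

It returns $2$, $5$ and $12$ for $n = 1, 2, 3$, matching the interval, the pentagon and the three-dimensional picture above.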
It is useful to have labels for all the hyperfaces. For $\mathcal{F}_n$, the labels are $d^0_i$ for $1 \leq i \leq n$, $d^1_i$ for $2 \leq i \leq n$ and $d^2_i$ for $1 \leq i \leq n$. The previously defined distinguished hyperface is labelled $d_n^2$. The assignment is again given by an inductive procedure. For $1 \leq i \leq n$ and $\epsilon \in \{0,1\}$, let $e_i^\epsilon$ denote the face of the cube $[0,1]^n$ with coordinates $(x_1, \ldots, x_{i-1}, \epsilon, x_{i+1}, \ldots, x_n)$. For $\mathcal{F}_1$ label the vertex $0$ by $d^0_1$ and label the vertex $1$ by $d^2_1$. Now assume that all the hyperfaces of $\mathcal{F}_{n-1}$ are labelled. Then the hyperfaces of $\mathcal{F}_n$ viewed as a subdivision of $[0,1]^n$ are labelled according to the following table:\\
\begin{center}
\begin{tabular}{|c|c|}
\hline
{\bf Face in $\mathcal{F}_{n-1} \times [0,1]$} & {\bf Label in $\mathcal{F}_n$} \\
\hline
$e^0_i$, $1 \leq i \leq n$ & $d^0_i$ \\
\hline
$e^1_i$, $2 \leq i \leq n$ & $d^1_i$ \\
\hline
$d^2_i \times [0,1]$, $1 \leq i \leq n-2$ & $d^2_i$ \\
\hline
$d^2_{n-1} \times [0,\frac{1}{2}]$ & $d^2_{n-1}$ \\
\hline
$d^2_{n-1} \times [\frac{1}{2},1]$ & $d^2_{n}$ \\
\hline
\end{tabular}
\end{center}
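Flattening the inductive table, $\mathcal{F}_n$ has $3n-1$ hyperfaces: $d^0_i$ for $1 \leq i \leq n$, $d^1_i$ for $2 \leq i \leq n$ and $d^2_i$ for $1 \leq i \leq n$. A throwaway sketch of this label bookkeeping (the encoding is illustrative):

```python
def hyperface_labels(n):
    # Hyperface labels of the freehedron F_n, encoded as pairs
    # ("d0", i), ("d1", i), ("d2", i) for d^0_i, d^1_i, d^2_i:
    # 3n - 1 labels in total.
    return ([("d0", i) for i in range(1, n + 1)]
            + [("d1", i) for i in range(2, n + 1)]
            + [("d2", i) for i in range(1, n + 1)])
```

For $n = 2$ this yields the five edge labels of the pentagon, and for $n = 3$ the eight hyperface labels visible in the three-dimensional picture.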
The picture below illustrates the labels for $\mathcal{F}_2$ and $\mathcal{F}_3$, both in their cubical and simplicial incarnations. The colors in dimension 3 are simply for user-friendliness.
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0) rectangle (3,-3);
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[black] (0,-3) circle (1.5pt);
\filldraw[black] (3,0) circle (1.5pt);
\filldraw[black] (3,-1.5) circle (1.5pt);
\filldraw[black] (3,-3) circle (1.5pt);
\fill[gray, opacity = 0.3] (0,0) -- (3,0) -- (3,-3) -- (0,-3) -- cycle;
\draw[thin] (1.5+4,1.5-3) -- (3+4,3-3) -- (4.5+4,1.5-3);
\draw[thin] (1.5+4,1.5-3) -- (2+4,0-3);
\draw[thin] (4.5+4,1.5-3) -- (4+4,0-3);
\draw[thin] (2+4,0-3) -- (4+4,0-3);
\node[anchor = north] at (7,-3) {$d^1_2$};
\node[anchor = east] at (5.7,-2.3) {$d^0_1$};
\node[anchor = east] at (6.2,-0.6) {$d^0_2$};
\node[anchor = west] at (8.3,-2.3) {$d^2_2$};
\node[anchor = west] at (7.8, -0.6) {$d^2_1$};
\filldraw[black] (1.5+4,1.5-3) circle (1.5pt);
\filldraw[black] (3+4,3-3) circle (1.5pt);
\filldraw[black] (4.5+4,1.5-3) circle (1.5pt);
\filldraw[black] (2+4,0-3) circle (1.5pt);
\filldraw[black] (4+4,0-3) circle (1.5pt);
\fill[gray,opacity = 0.3] (2+4,0-3) -- (1.5+4,1.5-3) -- (3+4,3-3) -- (4.5+4,1.5-3) -- (4+4,0-3) -- cycle;
\node[anchor = east] at (0,-1.5) {$d^0_1$};
\node[anchor = north] at (1.5,-3) {$d^0_2$};
\node[anchor = south] at (1.5,0) {$d^1_2$};
\node[anchor = west] at (3,-0.75) {$d^2_2$};
\node[anchor = west] at (3,-0.75-1.5) {$d^2_1$};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}
\fill[violet, opacity = 0.2] (1,1) rectangle (4,-2);
\draw[thin] (0,0) rectangle (3,-3);
\draw[dashed] (0,-3) -- (1,-2) -- (4,-2);
\draw[dashed] (1,-2) -- (1,1);
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[black] (0,-3) circle (1.5pt);
\filldraw[black] (3,0) circle (1.5pt);
\filldraw[black] (3,-3) circle (1.5pt);
\filldraw[black] (1,1) circle (1.5pt);
\filldraw[black] (1,-2) circle (1.5pt);
\filldraw[black] (3.5,-2.5) circle (1.5pt);
\filldraw[black] (4,-2) circle (1.5pt);
\fill[blue,opacity = 0.3] (0,0) rectangle (3,-3);
\fill[gray,opacity = 0.1] (0,0) -- (1,1) -- (4,1) -- (3,0);
\fill[gray,opacity = 0.5] (3,-3) -- (3,0) -- (3.5,0.5) -- (3.5,-2.5) -- cycle;
\fill[gray,opacity = 0.5] (3.5,-2.5) -- (3.5,-1) -- (4,-0.5) -- (4,-2) -- cycle;
\draw[thin] (0,0) -- (1,1) -- (4,1);
\draw[thin] (3,0) -- (3.5,0.5);
\draw[black] (3.5,0.5) -- (4,1) -- (4,1-1.5) --(3.5,-1) -- cycle;
\fill[gray, opacity = 0.5] (3.5,0.5) -- (4,1) -- (4,1-1.5) --(3.5,-1) -- cycle;
\draw[thin] (3,-3) -- (3.5,-2.5) -- (3.5,-1);
\draw[thin] (3.5,-2.5) -- (4,-2) -- (4,-0.5);
\filldraw[black] (4,1) circle (1.5pt);
\filldraw[black] (4,-0.5) circle (1.5pt);
\filldraw[black] (3.5,0.5) circle (1.5pt);
\filldraw[black] (3.5,-1) circle (1.5pt);
\node[anchor = south] at (2,0.2) {$d^1_3$};
\draw[thin] (3.5,1.5) -- (3,1);
\draw[dashed,->] (3,1) -- (2.5,0.5);
\node[anchor = south] at (3.5,1.5){$d^1_2$};
\draw[thin] (-0.5,-1) -- (0,-1);
\draw[dashed, ->] (0,-1) -- (0.5,-1);
\node[anchor = east] at (-0.5,-1) {$d^0_1$};
\node[anchor = west] at (1.5,-1.5) {$d^0_2$};
\draw[thin] (2,-3.5) -- (2,-3);
\draw[dashed,->] (2,-3) -- (2,-2.5);
\node[anchor = north] at (2,-3.5) {$d^0_3$};
\node[anchor = west] at (3.45,0) {$d^2_3$};
\node[anchor = west] at (3.45,-1.6) {$d^2_2$};
\node[anchor = north] at (3.25,-0.8) {$d^2_1$};
\begin{scope}[shift={(4,-3)}, scale = 1.3]
\draw[thin] (2,0) -- (1.23,1.17) -- (1.64,1.31) --(2,1.2) -- cycle;
\draw[thin] (1.23,1.17) -- (2.32,2.81) -- (2.81,3) -- (1.64,1.31);
\filldraw[black] (1.23,1.17) circle (1.5pt);
\filldraw[black] (2,0) circle (1.5pt);
\filldraw[black] (1.64,1.31) circle (1.5pt);
\filldraw[black] (2.81,3) circle (1.5pt);
\filldraw[black] (2.32,2.81) circle (1.5pt);
\filldraw[black] (2,1.2) circle (1.5pt);
\draw[dashed] (7-1.23,1.17) -- (7-1.64,1.31) -- (7-2,1.2) -- (7-2,0);
\draw[thin] (7-2,0) -- (7-1.23,1.17);
\draw[thin] (7-1.23,1.17) -- (7-2.32,2.81) -- (7-2.81,3);
\draw[dashed] (7-2.81,3) -- (7-1.64,1.31);
\filldraw[black] (7-1.23,1.17) circle (1.5pt);
\filldraw[black] (7-2,0) circle (1.5pt);
\filldraw[black] (7-1.64,1.31) circle (1.5pt);
\filldraw[black] (7-2.81,3) circle (1.5pt);
\filldraw[black] (7-2.32,2.81) circle (1.5pt);
\filldraw[black] (7-2,1.2) circle (1.5pt);
\draw[thin] (2,0) -- (5,0);
\draw[thin] (2,1.2) -- (7-2.32,2.81);
\draw[thin] (7-2.81,3) -- (2.81,3);
\draw[dashed] (2.32,2.81) -- (5,1.2);
\fill[green,opacity = 0] (2,0) -- (1.23,1.17) -- (2.32,2.81) -- (5,1.2) -- (5,0) -- cycle;
\fill[violet,opacity = 0.4] (2,0) -- (2,1.2) -- (7-2.32,2.81) -- (7-1.23,1.17) -- (5,0) -- cycle;
\fill[gray,opacity = 0.3] (2,0) -- (2,1.2) -- (1.64,1.31) -- (1.23,1.17) -- cycle;
\fill[blue,opacity = 0.3] (1.64,1.31) -- (1.23,1.17) -- (2.32,2.81) -- (2.81,3) -- cycle;
\fill[gray,opacity = 0.1] (2,1.2) -- (1.64,1.31) -- (2.81,3) -- (7 -2.81,3) -- (7-2.32,2.81) -- cycle;
\end{scope}
\node[anchor = west] at (5.9,-1.8)
{$d^0_1$};
\node[anchor = west] at (8.2,-1.8) {$d^1_2$};
\node[anchor = west] at (7,-0.5) {$d^0_3$};
\draw[->] (6,0) -- (6.6,-0.4);
\node[anchor = south east] at (6,0) {$d^0_2$};
\draw[thin] (8.5,-3.5) node[anchor = north] {$d^1_3$} -- (8.5,-3);
\draw[dashed,->] (8.5,-3) -- (8.5,-2.5);
\draw[thin] (8.5,1.5) node[anchor = south] {$d^2_1$} -- (8.5,0.9);
\draw[dashed,->] (8.5,0.9) -- (8.5,0.4);
\begin{scope}[shift = {(0.2,0)}]
\draw[->,dashed] (11.2,-1.8) -- (10.6,-1.8);
\draw (11.5,-1.8) --(11.2,-1.8);
\node[anchor = west] at (11.5,-1.8) {$d^2_3$};
\end{scope}
\draw[->] (11.1,0.1) -- (10.5,-0.4);
\node[anchor = south west] at (11.1,0.1) {$d^2_2$};
\end{tikzpicture}
\end{center}
In general, the table below explains which cubic hyperface corresponds to which hyperplane in the truncated simplex. For the hyperface of the original simplex containing all the vertices except for $i$, the corresponding hyperplane is denoted by $D_i$.\\
\begin{center}
\begin{tabular}{|c|c|}
\hline
{\bf Cubic label} & {\bf Hyperplane in simplicial incarnation} \\
\hline
$d^0_i$, $i \leq n-1$ & $Q_{i-1}$ \\
\hline
$d^0_n$ & $D_n$\\
\hline
$d^1_i$ & $D_{i-1}$\\
\hline
$d^2_1$ & $D_0$ \\
\hline
$d^2_i$, $i \geq 2$ & $P_{n-i}$ \\
\hline
\end{tabular}
\end{center}
\begin{rem}
Cubically interpreted freehedra appear in \cite{Cha}, where a surprising connection with Dyck paths is studied.
\end{rem}
\subsection{Freehedra combinatorially}
The purely combinatorial definition of freehedra has the benefit that faces of all codimensions obtain labels. These labels are used in the main theorem of the paper.
\begin{defi} A nice $n$-expression is an expression $$s = s_l] [s_{l+1}] \ldots [s_k] | [s_0] \ldots [s_{l-1}]$$ where
\begin{itemize}
\item (the absence of the opening bracket for $s_l$ is not a typo)
\item every stretch $s_i$ is a nonempty subset of $\{0,1,\ldots,n \}$
\item for every $i$, $\max s_i = \min s_{i+1}$
\item $|s_i| \geq 2$ if $i \neq l$ ($|s_l| = 1$ is allowed)
\item $\min s_0 = 0$ and $\max s_k = n$
\item in the case $l=0$, $s_0$ is placed to the left of the bar
\end{itemize}
\end{defi}
Every face of $\mathcal{F}_n$ is labelled with a nice $n$-expression. For a nice expression $s$ as above, let $L$ be the number of elements $i \in \{0,1,\ldots, n\}$ that are not present in $s$. Then the codimension of the corresponding face is $l+L$.\\
Examples of such expressions for $n = 3$ are $3]|[01][13]$ (of codimension $2+1 = 3$) or $023]|$ (of codimension $0+1 = 1$).
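The codimension count is easy to mechanize. Below is a minimal Python sketch; the encoding of a nice expression by the bar position $l$ together with the stretch list $s_0, \ldots, s_k$ is our own convention, chosen for illustration:

```python
def codimension(n, l, stretches):
    """Codimension of the face of F_n labelled by a nice n-expression.

    `stretches` lists the stretches in index order s_0, ..., s_k (each a
    set); `l` marks the stretch written without its opening bracket.
    The codimension is l + L, where L counts the elements of
    {0, ..., n} missing from the expression.
    """
    present = set().union(*stretches)
    missing = (n + 1) - len(present)  # this is L
    return l + missing

# The two examples from the text, for n = 3:
# 3]|[01][13]  ->  l = 2, stretches s_0 = {0,1}, s_1 = {1,3}, s_2 = {3}
print(codimension(3, 2, [{0, 1}, {1, 3}, {3}]))  # 3
# 023]|        ->  l = 0, single stretch s_0 = {0,2,3}
print(codimension(3, 0, [{0, 2, 3}]))            # 1
```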
\begin{defi}
Consider a nice $n$-expression $s$ as above:
$$s = s_l] [s_{l+1}] \ldots [s_k] | [s_0] \ldots [s_{l-1}]$$
The {\em face transformations} that can be applied to $s$ are:
\begin{enumerate}
\item Drop: for some stretch $s_j$ remove some $x \in s_j$ with $\min s_j < x < \max s_j$.
\item Inner break: replace some stretch $[s_j]$ with $[s^1_j][s^2_j]$ where $s^1_j = \{a \in s_j \mid a \leq x \}$ and $s^2_j = \{a \in s_j \mid a \geq x \}$ for some $x \in s_j$ with $\min s_j < x < \max s_j$.
\item Right outer break: replace the stretch $s_l]$ with $s^1_l][s^2_l]$ where $s^1_l = \{a \in s_l \mid a \leq x \}$ and $s^2_l = \{a \in s_l \mid a \geq x \}$ for some $x \in s_l$ with $x < \max s_l$.
\item Left outer break: for $x \in s_l$ with $\min s_l < x$, replace the stretch $s_l]$ with $\{ a \in s_l \mid a \geq x \}]$, and add the stretch $[\{ a \in s_l \mid a \leq x \}]$ to the end of the expression, after $s_{l-1}$.
\end{enumerate}
\end{defi}
For example, the expression $23]|[012]$ can be transformed into $23]|[02]$ by a drop, or into $23]|[01][12]$ by an inner break, or into $2][23]|[012]$ by a right outer break, or into $3]|[012][23]$ by a left outer break.
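The four transformations are straightforward to implement. Below is a Python sketch reproducing the four examples above; the encoding of an expression by the bar position $l$ and the stretch list $s_0, \ldots, s_k$ is our own convention, not taken from the text:

```python
def render(l, stretches):
    """Print the nice expression encoded as (l, [s_0, ..., s_k])."""
    word = lambda s: ''.join(str(a) for a in sorted(s))
    left = word(stretches[l]) + ']' + ''.join('[%s]' % word(s) for s in stretches[l + 1:])
    right = ''.join('[%s]' % word(s) for s in stretches[:l])
    return left + '|' + right

def split(s, x):
    """Split a stretch at x into its lower and upper parts (both contain x)."""
    return {a for a in s if a <= x}, {a for a in s if a >= x}

def drop(l, stretches, j, x):
    return l, [s - {x} if i == j else set(s) for i, s in enumerate(stretches)]

def inner_break(l, stretches, j, x):          # requires j != l
    lo, hi = split(stretches[j], x)
    return l + (1 if j < l else 0), stretches[:j] + [lo, hi] + stretches[j + 1:]

def right_outer_break(l, stretches, x):       # requires x < max s_l
    lo, hi = split(stretches[l], x)
    return l, stretches[:l] + [lo, hi] + stretches[l + 1:]

def left_outer_break(l, stretches, x):        # requires min s_l < x
    lo, hi = split(stretches[l], x)
    return l + 1, stretches[:l] + [lo, hi] + stretches[l + 1:]

# The expression 23]|[012] is encoded as l = 1, stretches [s_0, s_1]:
e = (1, [{0, 1, 2}, {2, 3}])
print(render(*drop(*e, j=0, x=1)))             # 23]|[02]
print(render(*inner_break(*e, j=0, x=1)))      # 23]|[01][12]
print(render(*right_outer_break(*e, x=2)))     # 2][23]|[012]
print(render(*left_outer_break(*e, x=3)))      # 3]|[012][23]
```

Note that in this encoding the two outer breaks perform the same splitting of $s_l$ and differ only in where the bar (the index $l$) ends up.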
\begin{defi}
In $\mathcal{F}_n$, a face labelled $s'$ is a codimension 1 subface of a face labelled $s$ if $s'$ can be obtained from $s$ by one of the face transformations.
\end{defi}
The resulting abstract polytopes are precisely the freehedra. The cubical notation for hyperfaces translates into the combinatorial notation as follows:
\begin{itemize}
\item $d^0_i$ corresponds to $0\ldots i-1][i-1 \ldots n]|$;
\item $d^1_i$ corresponds to $0\ldots \widehat{i-1} \ldots n]|$, where the hat means the omission;
\item $d^2_i$ corresponds to $i \ldots n]|[0 \ldots i]$.
\end{itemize}
Below, nice $2$-expressions and their face transformations are shown on $\mathcal{F}_2$. Drops are labelled D, inner breaks are labelled IB, left outer breaks are labelled LOB and right outer breaks are labelled ROB.
\begin{center}
\begin{tikzpicture}
\draw[thin] (0,0) rectangle (6,6);
\filldraw[black] (0,0) circle (1.5pt);
\filldraw[black] (0,6) circle (1.5pt);
\filldraw[black] (6,0) circle (1.5pt);
\filldraw[black] (6,6) circle (1.5pt);
\filldraw[black] (6,3) circle (1.5pt);
\node[scale = 2] (A) at (3,3) {$012]|$};
\node[scale = 1.5, anchor = east] (1) at (0,3) {$0][012]|$};
\node[scale = 1.5, anchor = north] (2) at (3,0) {$[01][12]|$};
\node[scale = 1.5, anchor = south] (3) at (3,6) {$02]|$};
\node[scale = 1.5, anchor = west] (4) at (6,1.5) {$12]|[01]$};
\node[scale = 1.5, anchor = west] (5) at (6,4.5) {$2]|[012]$};
\node[anchor = north east] (11) at (0,0) {$0][01][12]|$};
\node[anchor = north west] (22) at (6,0) {$1][12]|[01]$};
\node[anchor = west] (33) at (6,3) {$2]|[01][12]$};
\node[anchor = south west] (44) at (6,6) {$2]|[02]$};
\node[anchor = south east] (55) at (0,6) {$0][02]|$};
\draw[->, shorten > = 5pt] (A) -- node[above,blue,scale = 1.5]{ROB} (1);
\draw[->, shorten > = 5pt] (A) -- node[left,blue,scale = 1.5]{ROB}(2);
\draw[->, shorten > = 5pt] (A) -- node[left,blue,scale = 1.5]{D}(3);
\draw[->, shorten > = 5pt] (A) -- node[below left,blue, scale = 1.5]{LOB} (4);
\draw[->, shorten > = 5pt] (A) -- node[above left,blue, scale = 1.5]{LOB} (5);
\draw[->] (1) edge [bend right=45] node[left,blue]{IB} (11);
\draw[->] (5) edge [bend left=70] node[right,blue]{IB} (33);
\draw[->] (4) edge [bend right=70] node[right,blue]{LOB} (33);
\draw[->] (5) edge [bend right=70] node[right,blue]{D} (44);
\draw[->] (3) edge [bend left=45] node[above,blue]{LOB} (44);
\draw[->] (4) edge [bend left=65] node[right,blue]{ROB} (22);
\draw[->] (3) edge [bend right=45] node[above,blue]{ROB} (55);
\draw[->] (1) edge [bend left=45] node[left,blue]{D} (55);
\draw[->] (2) edge [bend right=45] node[below,blue]{LOB} (22);
\draw[->] (2) edge [bend left=45] node[below,blue]{ROB} (11);
\end{tikzpicture}
\end{center}
\section{Main isomorphism}
We establish an isomorphism $I$ between the set of nice expressions and the forest-tree-forest basis of $T$ from Prop \ref{isom2}.
\begin{con}
Consider a nice $n$-expression
$$s = s_l] [s_{l+1}] \ldots [s_k] | [s_0] \ldots [s_{l-1}]$$
We form the forest-tree-forest triple $I(s) = (F,T,G)$ as follows. Every stretch gives rise to a separate tree. The stretch $s_l$ produces $T$, the stretches $s_i$ for $i>l$ (located to the left of the bar) produce the trees of $F$ and the stretches $s_i$ for $i<l$ (located to the right of the bar) produce the trees of $G$. The trees are assembled into the triple in the following order:
$$ (F,T,G) = (\iota(s_{k}) \circ \ldots \circ \iota(s_{l+1}), \iota(s_l), \iota(s_{l-1}) \circ \ldots \circ \iota(s_{0}))$$
It remains to explain $\iota$. For a stretch $s = \{a_1 < \ldots < a_m\}$, $\iota(s)$ is a tree with $m-1$ branches, where the number of leaves on the $j$th branch is $a_{j+1}-a_j$.
\end{con}
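The map $\iota$, and hence $I$, is easy to implement. Below is a Python sketch; encoding a nice expression by the bar position $l$ and the stretch list $s_0, \ldots, s_k$, and a tree by the list of leaf counts of its branches, are our own conventions for illustration:

```python
def iota(stretch):
    """Tree of a stretch: branch j carries a_{j+1} - a_j leaves."""
    a = sorted(stretch)
    return [a[j + 1] - a[j] for j in range(len(a) - 1)]

def I(l, stretches):
    """Forest-tree-forest triple of the nice expression (l, [s_0, ..., s_k])."""
    F = [iota(s) for s in reversed(stretches[l + 1:])]  # iota(s_k), ..., iota(s_{l+1})
    T = iota(stretches[l])
    G = [iota(s) for s in reversed(stretches[:l])]      # iota(s_{l-1}), ..., iota(s_0)
    return F, T, G

# The expression 23]|[012], i.e. l = 1 with s_0 = {0,1,2} and s_1 = {2,3}:
print(I(1, [{0, 1, 2}, {2, 3}]))  # ([], [1], [[1, 1]])
```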
\begin{prop}
The map $I$ above is a bijection.
\end{prop}
\begin{proof}
Given a forest-tree-forest triple $(F,T,G)$ with $n$ leaves, we form a nice $n$-expression $s = I^{-1}(F,T,G)$ as follows. Start from the rightmost branch of the rightmost tree of $G$ and move left, adding one symbol for each branch within a tree, and beginning a new stretch for each new tree. To form the next symbol of the current stretch, add the number of leaves on the current branch to the previous symbol.
\end{proof}
\begin{prop}The map $I$ provides an isomorphism of chain complexes
$$ C_*(\mathcal{F}_n) \simeq T(\mathbf{a}^n,\mathbf{m};\mathbf{m})$$
\end{prop}
\begin{proof}
We only need to verify that the resulting map of graded vector spaces is consistent with differentials. Consider a face of $\mathcal{F}_n$ labelled with a nice $n$-expression $s = s_l] [s_{l+1}] \ldots [s_k] | [s_0] \ldots [s_{l-1}]$, with $I(s) = (F,T,G)$. We go through the list of summands in $d(F,T,G)$ from Prop \ref{isom2}.
\begin{enumerate}
\item The summands $U((F,T,G),B)$ for any $B$ correspond to drop transformations.
\item The summands $S((F,T,G), B \subset F)$ correspond to inner break transformations at stretches $s_i$ for $i>l$.
\item The summands $S((F,T,G), B \subset G)$ correspond to inner break transformations at stretches $s_i$ for $i<l$.
\item The summand $(F \circ T, 1, G)$ and the summands $S_l((F,T,G), B \subset T)$ correspond to left outer breaks.
\item The summand $(F, 1, T \circ G)$ and the summands $S_r((F,T,G), B \subset T)$ correspond to right outer breaks.
\end{enumerate}
\end{proof}
Therefore we may think of forest-tree-forest triples as another collection of labels for the faces of freehedra. Recall that forests gave a collection of labels for the faces of cubes, as in Prop \ref{cubes}.
\begin{prop}
Freehedra form a CW-operadic bimodule over the CW-operad of cubes.
\end{prop}
\begin{proof}
In forest notation, the action is by forest concatenation.
\end{proof}
The theorem below summarizes the results of this section.
\begin{theo}
The underlying pair of the DG-operadic pair $(\Omega,T)$ is $C_*(I,\mathcal{F})$.
\end{theo}
\section{Projections of polyhedra}
The operadic interpretation of freehedra equips them with a natural projection from multiplihedra. We now describe it explicitly in terms of painted trees and forest-tree-forest triples. Let $T$ be a painted binary tree corresponding to a vertex of $\mathcal{J}(n)$. The projection $\pi \colon\thinspace \mathcal{J}(n) \to \mathcal{F}(n)$ sends $T$ to a triple $\pi(T) = (F,1,G)$, where $G$ is formed from the unpainted subtree $T'$ containing the right leaf, and $F$ is formed from $T \backslash T'$ with painting forgotten. The procedure converting these binary trees to forests is the same for $T'$ and $T \backslash T'$.
\begin{con}
Having a binary tree, we start from the right leaf and move towards the root. Whenever we encounter a branch $B$, we create a tree with one branch having as many leaves as eventually belong to the subtree starting at $B$ (the structure of this subtree is forgotten). These trees are arranged into a forest from right to left. \\
\end{con}
\begin{tikzpicture}
\node[scale = 2] (A) at (0,0) {\RS{B {AA LLl} {C {ALLlr} CR {Llr} {Rr}} } };
\node[scale = 2] (B) at (4,2) {\RS{I {Llr} {Rr}} };
\node[scale = 2] (C) at (4,-2) {\RS{ I {LLl} {R {L lr} {Rr} } }};
\node (E) at (7.5,-2) {\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = one [[]] ]
[{},name = two [[][]] ]
]
\draw[black] (one) -- (two);
\end{forest}};
\node (D) at (7,2) {\begin{forest}
for tree = {grow'=90,circle, fill, minimum width = 4pt, inner sep = 0pt, s sep = 13pt}
[{},phantom
[{},name = two [[][]] ]
]
\end{forest}};
\draw[->] (A) -- node[above] {$T'$} (B);
\draw[->] (A) -- node[above] {$T \backslash T'$} (C);
\draw[->] (B) -- node[above] {$F$} (D);
\draw[->] (C) -- node[above] {$G$} (E);
\end{tikzpicture}
The picture illustrates the construction of $\pi(T)$. The following proposition is now straightforward.
\begin{prop}
The projection $M_\infty^{\operatorname{Col}} \to T$ is induced by the above projection $\pi \colon\thinspace \mathcal{J} \to \mathcal{F}$.
\end{prop}
The diagram below summarizes the projections between some families of polyhedra. Note that the projections from freehedra onto cubes and simplices are best seen at the simplicial incarnation of freehedra.
\begin{center}
\begin{tikzpicture}
\node[] (Mult) {$\mathcal{J}(n)$};
\node[] (Assoc) [above right = of Mult] {$\mathcal{K}(n)$};
\node[] (Free) [below right = of Mult] {$\mathcal{F}(n)$};
\node[] (Cube) [below right = of Assoc] {$I^n$};
\node[] (Simp) [right = of Cube] {$\Delta^n$};
\node[] (Pt) [right = of Simp] {$*$};
\draw[->] (Mult) -- node[above left]{} (Assoc);
\draw[->] (Mult) -- node[below left]{} (Free);
\draw[->] (Assoc) -- node[above right]{} (Cube);
\draw[->] (Free) -- node[below right ]{} (Cube);
\draw[->] (Cube) -- node[above]{} (Simp);
\draw[->] (Simp) -- node[above]{} (Pt);
\end{tikzpicture}
\end{center}
Every family of polytopes in this diagram can be interpreted operadically as the CW-counterpart of the DG-operadic bimodule in a certain $(a,m)$-colored DG-operadic pair. The partially informal table below lists these interpretations (we denote by $B$ the bimodule responsible for $A_\infty$-morphisms of DG-modules over DG-algebras).\\
\begin{center}
\begin{tabular}{|c||c|c|c|c||c|}
\hline
{\bf Polyhedra} & {\bf Algebras} & {\bf Modules} & {\bf \shortstack{Map of \\ algebras}}& {\bf \shortstack{Map of \\ modules}} & { \bf Pair}\\
\hline
$\mathcal{J}$ & $A_\infty$
& $A_\infty$ & $A_\infty$ & $A_\infty$ & $(A_\infty^{\operatorname{Col}},M_\infty^{\operatorname{Col}})$ \\
\hline
$\mathcal{K}$ & $A_\infty$
& $A_\infty$ & strict & strict & $(A_\infty^{\operatorname{Col}},A_\infty^{\operatorname{Col}})$ \\
\hline
$\mathcal{F}$ & DG
& $A_\infty$ & strict & $A_\infty$ & $(\Omega,T)$\\
\hline
$I$ & DG
& $A_\infty$ & strict & strict & $(\Omega,\Omega)$\\
\hline
$\Delta$ & DG
& DG & strict & $A_\infty$ & $(Ass^{\operatorname{Col}}, B)$ \\
\hline
$*$ & DG
& DG & strict & strict & $(Ass^{\operatorname{Col}},Ass^{\operatorname{Col}})$ \\
\hline
\end{tabular}
\end{center}
\begin{prop}
There exists the following diagram of projections between operadic pairs. Applying the functor of cellular chains to the diagram of polyhedral projections yields a part of this diagram -- namely, the bimodule part with output $\mathbf{m}$.
\[
\begin{tikzcd}[column sep = small]
& [-25pt] (A_\infty^{\operatorname{Col}},A_\infty^{\operatorname{Col}}) \arrow{rd} & [-25pt] & . & . \\
(A_\infty^{\operatorname{Col}},M_\infty^{\operatorname{Col}}) \arrow{ru} \arrow{rd} & & (\Omega, \Omega) \arrow{r} & (Ass^{\operatorname{Col}},B) \arrow{r} & (Ass^{\operatorname{Col}},Ass^{\operatorname{Col}}) \\
. & (\Omega,T) \arrow{ru} & . & . & . \\
\end{tikzcd}
\]
\end{prop}
\begin{proof}
By direct inspection.
\end{proof}
\begin{rem}
The table above does not list all possible quotients of $(A_\infty^{\operatorname{Col}},M_\infty^{\operatorname{Col}})$, only the ones that are encountered in real life more frequently than never. For example, one can also consider the operadic pair controlling $A_\infty$-modules over DG-algebras, where morphisms are allowed to be $A_\infty$ both for algebras and for modules. This results in a family of polyhedra whose 3-dimensional member has polygon score $(0,8,0,4)$ but does not yet appear in the Encyclopedia of Combinatorial Polytope Sequences. So operadic pairs can be used as a tool for obtaining new polyhedral families.
\end{rem}
\section{Hopf operadic pairs}
In the closing section we briefly discuss the diagonals for operadic pairs. The category of colored $\mathbb{N}$-sequences ${\mathbb{N}}$-$\operatorname{Seq}_{\operatorname{Col}}(\mathsf{C})$ is equipped with a second tensor product, given by
$$(\mathcal{P} \boxtimes \mathcal{Q})(c_1,\ldots,c_n;c) = \mathcal{P}(c_1,\ldots,c_n;c) \otimes \mathcal{Q}(c_1,\ldots,c_n;c)$$
This tensor product has the property that for an operad $\mathcal{P}$, $\mathcal{P} \boxtimes \mathcal{P}$ is also an operad. The definition and the proposition below are classical.
\begin{defi}
An operad $\mathcal{P}$ is called Hopf if it is equipped with a coassociative diagonal $\Delta_{\mathcal{P}} \colon\thinspace \mathcal{P} \to \mathcal{P} \boxtimes \mathcal{P}$.
\end{defi}
\begin{prop}
\label{hopf}
For a Hopf operad $\mathcal{P}$, the category $\operatorname{Alg}(\mathcal{P})$ is monoidal.
\end{prop}
For any operad $\mathcal{P}$ with an operadic bimodule $\mathcal{M}$, the sequence $\mathcal{M} \boxtimes \mathcal{M}$ is an operadic bimodule over $\mathcal{P} \boxtimes \mathcal{P}$. For a Hopf operad, one can restrict on both sides along the diagonal $\Delta_{\mathcal{P}} \colon\thinspace \mathcal{P} \to \mathcal{P} \boxtimes \mathcal{P}$, and thus view $\mathcal{M} \boxtimes \mathcal{M}$ as a bimodule over $\mathcal{P}$ itself. This suggests the following new definition.
\begin{defi}
An operadic pair $(\mathcal{P},\mathcal{M})$ is called {\em strictly} Hopf, if $\mathcal{P}$ is a Hopf operad and there is a coassociative map of counital coalgebras $\Delta_{\mathcal{M}} \colon\thinspace \mathcal{M} \to \mathcal{M} \boxtimes \mathcal{M}$.
\end{defi}
\begin{prop}
For a strictly Hopf operadic pair $(\mathcal{P},\mathcal{M})$, the category $\operatorname{Alg}(\mathcal{P},\mathcal{M})$ is monoidal.
\end{prop}
\begin{proof}
The tensor product of objects follows from the Hopf structure on $\mathcal{P}$ via Prop \ref{hopf}. Consider $\mathcal{P}$-algebras $X^1 = \{X^1_c \}$, $X^2 = \{X^2_c \}$, $Y^1 = \{ Y^1_c \}$ and $Y^2 = \{ Y^2_c \}$, with morphisms $f^1 \colon\thinspace X^1 \to Y^1$ and $f^2 \colon\thinspace X^2 \to Y^2$, given by characteristic maps $\chi_{f^1} \colon\thinspace \mathcal{M} \to \underline{\operatorname{Hom}}_{X^1,Y^1}$ and $\chi_{f^2} \colon\thinspace \mathcal{M} \to \underline{\operatorname{Hom}}_{X^2,Y^2}$. Then the characteristic map $\chi_{f^1 \otimes f^2} \colon\thinspace \mathcal{M} \to \underline{\operatorname{Hom}}_{X^1 \otimes X^2,Y^1 \otimes Y^2}$ is the following composition:
\[
\begin{tikzcd}[column sep = huge]
\mathcal{M} \arrow[dashed]{r} \arrow{d}{\Delta_{\mathcal{M}}} & \underline{\operatorname{Hom}}_{X^1 \otimes X^2,Y^1 \otimes Y^2} \\
\mathcal{M} \boxtimes \mathcal{M} \arrow{r}{\chi_{f^1} \otimes \chi_{f^2}} & \underline{\operatorname{Hom}}_{X^1,Y^1} \otimes \underline{\operatorname{Hom}}_{X^2,Y^2} \arrow{u}
\end{tikzcd}
\]
Associativity of this tensor product follows from coassociativity of $\Delta_{\mathcal{M}}$, and consistency with compositions follows from $\Delta_{\mathcal{M}}$ being a map of coalgebras.
\end{proof}
Unfortunately, strictly Hopf DG-operadic pairs are a rare beast, with $(\Omega, T)$ being an important non-example. $\Omega$ is indeed a Hopf operad, with a formula for $\Delta_{\Omega}$ given in Cor. 5.10 of ACD. The formula for $\Delta_T$ is given in Prop. 7.4 of ACD, but this $\Delta_T$ is neither coassociative nor a map of coalgebras. Both properties only hold up to homotopy. \\
The original constructions for $\Delta_{\Omega}$ and $\Delta_T$ are purely algebraic and involve some choices. The results of the current paper suggest that both $\Delta_{\Omega}$ and $\Delta_T$ can be interpreted as known diagonals for polyhedral families. These are obtained with the help of a partial order on faces. Assume that all cubes are embedded into $\mathbb{R}^n$ as $[0,1]^n$, and that their subdivisions into freehedra are rectangular.
\begin{defi}
For $v_1$ and $v_2$ vertices either of $I^n$ or of $\mathcal{F}(n)$, we say $v_1 \leq v_2$ if the inequality holds coordinatewise.
\end{defi}
\begin{defi}
For $F_1$ and $F_2$ faces either of $I^n$ or of $\mathcal{F}(n)$, we say $F_1 \leq F_2$ if $\max F_1 \leq \min F_2$.
\end{defi}
Then the following formula from Saneblidze defines both the cubic diagonal $\Delta \colon\thinspace C_*(I^n) \to C_*(I^n) \otimes C_*(I^n)$ and the freehedral diagonal $\Delta \colon\thinspace C_*(\mathcal{F}(n)) \to C_*(\mathcal{F}(n)) \otimes C_*(\mathcal{F}(n))$:
$$\Delta(F) = \sum_{\substack{F_1, F_2 \subset F,\text{ } F_1 \leq F_2 \\ \dim F_1 + \dim F_2 = \dim F }} F_1 \otimes F_2$$
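For the cube $I^n$ this formula can be checked mechanically. Below is a Python sketch in which a face of $[0,1]^n$ is encoded as a string over $\{0,1,*\}$, with $*$ marking a free coordinate (our own encoding, for illustration); the freehedral case works the same way once the vertices of the rectangular subdivision are given coordinates:

```python
from itertools import product

def subfaces(F):
    """All faces contained in the face F of the cube."""
    opts = [('0', '1', '*') if c == '*' else (c,) for c in F]
    return (''.join(t) for t in product(*opts))

def dim(F):
    return F.count('*')

def leq(F1, F2):
    # max F1 <= min F2 coordinatewise; '*' has max 1 and min 0.
    mx = [1 if c in '1*' else 0 for c in F1]
    mn = [0 if c in '0*' else 1 for c in F2]
    return all(a <= b for a, b in zip(mx, mn))

def diagonal(F):
    """Summands F1 (x) F2 of the Saneblidze diagonal of the face F."""
    return {(F1, F2)
            for F1 in subfaces(F) for F2 in subfaces(F)
            if leq(F1, F2) and dim(F1) + dim(F2) == dim(F)}

print(sorted(diagonal('*')))  # [('*', '1'), ('0', '*')]
print(len(diagonal('**')))    # 4: 00 (x) **, 0* (x) *1, *0 (x) 1*, ** (x) 11
```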
\begin{prop}
For appropriate choices, Abad-Crainic-Dherin diagonals $\Delta_\Omega$ and $\Delta_T$ coincide with Saneblidze diagonals given by the formula above.
\end{prop}
The proof requires translating the original constructions of $\Delta_\Omega$ and $\Delta_T$ into operadic language, which is technically involved. We therefore delay the proof until the follow-up paper, where we define {\em weakly} Hopf operadic pairs and upgrade $(\Delta_\Omega, \Delta_T)$ to a weakly Hopf structure.
\subsubsection{\@startsection{subsubsection}{3}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\bf}}
\def\paragraph{\@startsection{paragraph}{4}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\textit}}
\renewcommand\@biblabel[1]{#1}
\renewcommand\@makefntext[1]{\noindent\makebox[0pt][r]{\@thefnmark\,}#1}
\makeatother
\renewcommand{\figurename}{\small{Fig.}~}
\sectionfont{\large}
\subsectionfont{\normalsize}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{1pt}
\setlength{\arrayrulewidth}{1pt}
\setlength{\columnsep}{6.5mm}
\setlength\bibsep{1pt}
\makeatletter
\DeclareRobustCommand\onlinecite{\@onlinecite}
\def\@onlinecite#1{\begingroup\let\@cite\NAT@citenum\citealp{#1}\endgroup}
\makeatother
\twocolumn[
\begin{@twocolumnfalse}
\noindent\LARGE{\textbf{Materials Design by Quantum-Chemical and other Theoretical/Computational Means:
Applications to Energy Storage and Photoemissive Materials}}
\vspace{0.6cm}
\noindent\large{\textbf{K\'aroly N\'emeth$^{\ast}$\textit{$^{a}$} }}
\vspace{0.5cm}
\noindent \textbf{\small{DOI: 10.1039/b000000x}}
\vspace{0.6cm}
\noindent \normalsize{
The present paper discusses some recent developments in the field of rational design for energy
storage and photoemissive materials. Recent and new examples of designer materials for Li-ion and Li-air
type batteries with high capacity and energy/power density as
well as photoemissive materials with low workfunctions and improved brightness are discussed as illustrative
examples of how quantum-chemical and other theoretical computational means can be used for rational materials
design.
}
\vspace{0.5cm}
\end{@twocolumnfalse}
]
\section{Introduction}
Rational materials design has been one of the holy grails of chemistry and physics. With the advent of efficient
algorithms that solve the Schr\"odinger equation at various levels of approximation, with computational
effort scaling (near) linearly with system size \cite{LinearScaling},
combined with parallel computers using ever faster processors, this goal is now within reach and the fruits of
decades of hard work in the development of such methodologies are increasingly harvested in terms of rationally
designed materials. Major projects funded by governments or by the industry invest substantial funds in rational
materials design. The US government funds the Materials Genome Initiative with approximately 100 million dollars,
through various agencies such as the NSF and DOE, to computationally explore,
design and experimentally test designer materials.
Major materials design efforts are also underway in other programs,
for example the electrode-material and electrolyte design efforts within the
Joint Center for Energy Storage Research (``Battery Hub'') program of DOE.
Similar initiatives can be seen in the private industry as well. For example,
Bosch, a major supplier of electrical appliances, conducts systematic computational
materials screening and design efforts in the fields of energy storage and conversion.
Major companies, such as IBM, Bosch, and Apple
produce thousands of patent applications each year. More than half a million patent
applications are filed in the US annually, and about a quarter of a million patents are granted.
There is an increasing number of patents solely based on theoretical materials design
(``conceptual reduction to practice'') without experimental testing (``actual reduction to practice'')
a trend that can also be attributed to the increased competition in the race for
materials that can potentially control important emerging fields of applications, such as
energy storage, clean energy production, etc.
\cite{BoschCO2patent10,CFx-ContourEnergy11,US8389178}. Patents have always been formulated so that they cover
as broad a range of potential modifications of the core invention as possible, sometimes leading to extremely
broad patents, such as Intel's patent on all nanostructures smaller than 50 nm in diameter or Rice University's patent
on all objects composed of at least 99\% carbon nanotubes \cite{JMPearce12}.
The vast majority of the materials design efforts for the exploration of crystalline materials use
electronic structure codes with periodic boundary conditions
such as VASP \cite{VASP} or Quantum Espresso \cite{QE}. These codes utilize plane wave
representation of wavefunctions, effective pseudoptentials to model
core electrons and various exchange and
correlation functionals within the DFT approach. While these codes and the underlying
methods have sever limitations
for certain phenomena, such as electron correlation effects, band gaps in transition metal compounds,
intermolecular interactions \cite{JPaier06}, excited state properties, etc.,
effective core potentials, exchange/correlation potentials and novel DFT correction methodologies have been
developed to cure some of these problems.
These latter methods include the DFT+U \cite{dftu} method,
which has turned out to be particularly practical for studying band gaps, thermo/electro-chemistry,
phase changes, magnetic states and structural properties of solids.
Gaussian and mixed Gaussian-plane-wave representation based codes, as well as
wavelet or adaptive grid based ones
are also used in the exploration of materials. Such codes include
SIESTA \cite{Siesta}, Gaussian \cite{Gaussian09}, PQS \cite{PQS}, FreeON \cite{FreeON}, to name a few.
Some of these codes are also capable of calculating explicit electron-electron correlation with periodic
boundary conditions, for example utilizing quasiparticle (electron-pair) bands on the MP2 level of correlation
and beyond \cite{SSuhai83}.
Other approaches to materials design may be based on the ``mining'' and analysis of existing data deposited in
the scientific literature, with the aim of revealing potentially useful but yet unobserved or unutilized
connections and thereby constructing new structures/compositions of materials that are potentially superior
to existing and well observed ones \cite{KNemeth13}.
The present perspectives paper will focus first on some recent developments in the field of materials design for
Li-ion and Li-air batteries, and then describe approaches to the development of improved photoemissive
materials.
\section{Results and Discussion}
\subsection{Li-ion batteries}
Most Li-ion batteries operate through the intercalation and deintercalation of Li-ions in the electroactive
materials of the positive electrode (the cathode of the discharge process). There is a great effort to optimize the properties of
the electroactive crystals. The optimization parameters include the gravimetric and volumetric energy densities
of the electroactive materials, their capacities (concentration of Li that can intercalate), the power densities
(how fast the charging and discharging may occur), the voltage associated with the corresponding electrochemical reaction,
the electrolytes that have to sustain the voltages and currents that occur, and the economy of the materials involved,
to name a few. Most design efforts focus on layered crystalline cathode materials, such as spinels or
polyanionic compounds \cite{BCMelot13,GHautier11}.
In order to computationally predict the voltage associated with an electrochemical cell reaction,
one has to calculate the Gibbs free energy change associated with the reaction, express it in eV, and divide
by the number of electrons transferred during the reaction per molecule of product. The reaction energy is almost always
approximated as the difference of the electronic energies of the products and reactants, which is usually a good
approximation when no gas molecules are involved that would cause large entropy changes \cite{FZhou04,AJain11,GHautier11}.
However, the accurate
calculation of reaction energies and other properties of solids is often problematic, especially in the case of transition
metal compounds.
It turns out that DFT(GGA)+U methods are capable of providing reaction energies and voltages that compare well
with experiments for this class of materials \cite{FZhou04,AJain11,GHautier11}.
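As a minimal illustration of this recipe, the average intercalation voltage for Host $+ x\,$Li $\to$ Li$_x$Host follows from the total energies per formula unit; the sketch below uses placeholder numbers for the energies (illustrative assumptions only, not computed DFT values):

```python
def average_voltage(E_lix_host, E_host, E_li_metal, x):
    """Average voltage (in V, vs. Li/Li+) of Host + x Li -> Li_x Host.

    All energies are total electronic energies per formula unit in eV,
    approximating the Gibbs free energy change; x electrons are
    transferred per formula unit of product.
    """
    dE = E_lix_host - E_host - x * E_li_metal  # reaction energy in eV
    return -dE / x                             # eV per electron = volts

# Placeholder energies, for illustration only:
print(round(average_voltage(E_lix_host=-112.0, E_host=-100.0,
                            E_li_metal=-1.9, x=2), 2))  # 4.1
```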
Perhaps the best intercalation material designed for cathode applications so far
is Li$_{3}$Cr(BO$_{3}$)(PO$_{4}$), a polyanionic
material with the sidorenkite (Na$_{3}$MnPO$_{4}$CO$_{3}$) structure \cite{GHautier11}. This material has not been synthesized yet.
Lithium intercalation in the deintercalated Cr(BO$_{3}$)(PO$_{4}$) structure occurs at calculated voltages of 4.2-5.1 V,
relative to Li/Li$^{+}$, resulting in theoretical energy densities of 1705 Wh/kg and 4814 Wh/L with capacities of
354 mAh/g \cite{GHautier11}.
For comparison, LiCoO$_{2}$ cathode materials in current Li-ion batteries
have a theoretical energy density of 568 Wh/kg (2868 Wh/L) and charge capacity
of 273 mAh/g (1379 mAh/cm$^{3}$) \cite{US8389178}.
While the energy density of Li$_{3}$Cr(BO$_{3}$)(PO$_{4}$) and related materials is very attractive,
their power density (the rate of charging and discharging) remains low, as the process of intercalation/deintercalation
is relatively slow, typically making the charging of Li-ion batteries an overnight process. To develop
materials with both high power and energy densities,
the present author has recently proposed \cite{BNpatent} the use of functionalized hexagonal
boron nitride (h-BN) monolayers as intercalation materials.
In this case the intercalation would happen into
the surface of a 2D material directly from the electrolyte, avoiding the
slow diffusion inside the crystallites.
The charging process would be similarly fast, allowing for large current densities.
Here, two materials based on BN are briefly discussed.
The first one is a 3D material, Li$_{3}$BN$_{2}$, the second one is a 2D one,
BNCN, a cyano (-CN) group functionalized h-BN.
\subsubsection{Li$_{3}$BN$_{2}$}
It has been known for decades
that the reaction of molten Li$_{3}$N with h-BN can produce various phases of the Li$_{3}$BN$_{2}$
crystalline material \cite{HYamane87}. $\alpha$-Li$_{3}$BN$_{2}$ has a
layered structure as shown in Fig. \ref{Li3BN2},
with two Li ions being mobile per formula unit, while the third Li is part
of a 1D rod-like polymer, with repeating -Li-N-B-N- sequence. The polymeric chains and the mobile Li ions are placed in
separate layers in the $\alpha$-Li$_{3}$BN$_{2}$ phase (space group P4$_{2}$/mnm). The cell reaction proposed
\cite{BNpatent} is
2Li + LiBN$_{2}$ $\rightarrow$ Li$_{3}$BN$_{2}$ on discharge.
The formal charge of the BN$_{2}^{n-}$ linear anion changes from n=-1 to n=-3 on discharge.
Since no heavy transition metal atoms are involved, one may
use the PBEsol functional \cite{PBE,PBEsol} to obtain realistic optimum crystal structures and reaction energies.
The Quantum Espresso \cite{QE} software has been used, with ultrasoft pseudopotentials, a 50 Ry wavefunction cutoff,
and a $6\times6\times6$ k-point grid.
The residual forces were smaller than $10^{-4}$ Ry/bohr at the optimum structures. The experimental
lattice parameters a=b and c have been
reproduced with errors smaller than 1\% and 2.5\%, respectively. Note that the c-direction is perpendicular to the
layers mentioned above. In the Li-deintercalated structure, the lattice parameters a=b and c become 1.2\% and 3.3\% shorter, which
indicates a cell volume shrinkage of 5.6\%, or 2.8\% per electron for the two-electron transfer.
Note that such a relatively small cell volume change counts as acceptable for Li-ion battery electrode applications
\cite{GHautier11}. The cell reaction energy is calculated to be ${\Delta}E$ = -7.23 eV, indicating a cell voltage of
U=3.61 V (assuming an Li/Li$^{+}$ anode). The gravimetric and volumetric energy densities are 3247 Wh/kg and 5919 Wh/L, respectively. These values are
significantly larger than those obtained for Li$_{3}$Cr(BO$_{3}$)(PO$_{4}$) \cite{GHautier11}. Especially the gravimetric energy density is almost
twice as large for Li$_{3}$BN$_{2}$ as for Li$_{3}$Cr(BO$_{3}$)(PO$_{4}$). Li$_{3}$BN$_{2}$ appears to be greatly superior to
Li$_{3}$Cr(BO$_{3}$)(PO$_{4}$) also in terms of gravimetric and volumetric capacity densities with the respective values of
899 mAh/g and 1638 mAh/cm$^{3}$.
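The quoted figures for $\alpha$-Li$_{3}$BN$_{2}$ can be reproduced from elementary definitions. A minimal sketch, assuming standard atomic masses and the usual conversion of the Faraday constant to mAh/mol (small rounding differences relative to the quoted values are expected):

```python
# Sketch: theoretical gravimetric capacity and energy density of
# alpha-Li3BN2, plus the cell-volume change implied by the quoted
# lattice-parameter shrinkage. Standard atomic masses are assumed.

F = 96485.33  # Faraday constant, C/mol; dividing by 3.6 gives mAh/mol

def capacity_mah_per_g(molar_mass: float, n_electrons: int) -> float:
    """Theoretical capacity in mAh/g for n electrons per formula unit."""
    return n_electrons * F / 3.6 / molar_mass

m_li3bn2 = 3 * 6.941 + 10.811 + 2 * 14.007   # g/mol for Li3BN2
cap = capacity_mah_per_g(m_li3bn2, 2)         # ~899 mAh/g
energy = cap * 3.61                           # Wh/kg = (mAh/g) * V; ~3244 Wh/kg

# a = b shrink by 1.2 %, c by 3.3 % upon Li deintercalation:
volume_change = 1 - (1 - 0.012) ** 2 * (1 - 0.033)  # ~5.6 %
print(round(cap), round(energy), round(100 * volume_change, 1))
```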
It is surprising that Li$_{3}$BN$_{2}$ has not been tested as a cathode electroactive material yet. The reason is
probably that only the alpha phase can be expected to preserve its layered structure after the deintercalation of the
mobile Li-ions. The monoclinic phase has recently been considered
for application as part of a conversion-based anode material \cite{THMason11}.
The high melting point of $\alpha$-Li$_{3}$BN$_{2}$, about 900~$^{\circ}$C \cite{HYamane87},
is also indicative of the stability of the polymeric chains with -Li-N-B-N- repeating units, while the Li ions that
are not incorporated in the chains are known to be very mobile, as the excellent Li-ion conductivity values indicate
\cite{HYamane87}. Therefore, the present calculations also predict the existence of stable phases with
the stoichiometry Li$_{x}$BN$_{2}$, $1<x<3$. Furthermore, the beta and monoclinic phases would potentially
transform to $\alpha$-Li$_{3}$BN$_{2}$ after repeated charge/discharge cycles, as the alpha phase
appears to be the energetically most favorable packing of the polymeric -Li-N-B-N- rods.
\begin{figure}[tb!]
\resizebox*{3.4in}{!}{\includegraphics{Fig1.eps}}
\caption{
Perspective view of the layered structure of $\alpha$-Li$_{3}$BN$_{2}$ in a 3x3x3 supercell.
Color code: Li - violet, B - magenta, N - blue.
}
\label{Li3BN2}
\end{figure}
\subsubsection{BNCNNa}
The reaction of molten sodium cyanide with h-BN is expected to lead to the cyano-functionalization of B sites on
both sides of the h-BN monolayers which then
intercalate Na$^{+}$ ions, as depicted in Fig. \ref{BNCNNa}. This structure is expected to lead to
fast charging and discharging due to the sterically unhindered access of Na$^{+}$ ions to the intercalation sites.
Using the above methodology and a $\approx$ 30 {\AA} vacuum layer to separate layers of BNCNNa, the electrochemical
properties of BNCNNa have been computed. Both intercalated and deintercalated structures are stable and preserve
the covalent cyano functionalized B centers. With Li atoms instead of Na,
the covalent functionalization would break up and LiCN and h-BN would form. The voltage of the Na + BNCN
$\rightarrow$ BNCNNa electrochemical cell is calculated to be U = 2.87 V, associated with a cell reaction energy
of 2.87 eV (assuming an Na/Na$^{+}$ anode).
The gravimetric and volumetric energy densities are 1042 Wh/kg and 3547 Wh/L,
and the capacity values are 363 mAh/g and 1236 mAh/cm$^3$. These values indicate great potential, for example, for use in
portable electronics devices. BNCN and analogous functionalized h-BN compounds can serve as universal intercalation
electrode materials capable of intercalating alkali, alkaline-earth, Al and other cations (unless conversion reactions
happen), thus allowing for batteries based on the transfer of cations other than Li$^{+}$.
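The BNCNNa figures can be cross-checked the same way; a brief sketch assuming standard atomic masses and one electron transferred per BNCNNa formula unit:

```python
# Sketch: cross-check of the quoted BNCNNa capacity and gravimetric
# energy density (one Na+ / one electron per BNCNNa formula unit).
F = 96485.33  # Faraday constant, C/mol

m_bncnna = 10.811 + 14.007 + 12.011 + 14.007 + 22.990  # B+N+C+N+Na, g/mol
cap = F / 3.6 / m_bncnna        # ~363 mAh/g, as quoted
energy = cap * 2.87             # ~1042 Wh/kg at U = 2.87 V
print(round(cap), round(energy))
```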
Note that, in principle, a similar functionalization of graphene can also be envisioned; however, the patterned and
polarized h-BN surface is a much better candidate for electrophilic and nucleophilic attack by functionalization
agents, such as molten salts. In the case of graphene, all atoms are equivalent for functionalization, while for h-BN
only half of the atoms can be subject to, say, nucleophilic attack. This leaves the other half unfunctionalized,
providing space for the intercalation of cations, and leaves free N atoms in the BN surface to contribute to the
complexation and binding of the intercalating ions, as well as to the storage of the extra negative charge per
formula unit upon discharge.
Therefore, h-BN appears to be a better candidate to build 2D intercalation
structures than graphene. Doped graphene structures may have similar properties to h-BN for covalent
functionalization. Functional groups should be selected based on whether they make h-BN a strong electron acceptor
after the functionalization, when positive electrode materials are designed. For negative electrode
materials, the functional group should be selected such that the resulting layer will be a good electron donor.
These considerations are analogous to the design of charge transfer salts \cite{JHuang08}.
While transition metal compounds currently dominate Li-ion battery electroactive materials, there is a lot
of potential in conjugated $\pi$-electron systems and their functionalized derivatives to be utilized as
electroactive materials. Concepts of these systems, such as the principles of charge transfer compounds,
have practically been completely ignored in energy storage development so far. As a direct continuation of the
present work, a great variety of functionalization of h-BN will be explored to find the best novel anode and
cathode materials of this class of compounds. Band structures, conductivities, charge
distributions and structural changes upon intercalation/deintercalation will be analyzed to understand the
mechanism of charge storage in these materials. The experimental testing of some of
these materials is already underway through collaborative work at
Illinois Institute of Technology and elsewhere.
\begin{figure}[tb!]
\resizebox*{3.4in}{!}{\includegraphics{Fig2.eps}}
\caption{
Perspective view of the cyano functionalized h-BN monolayer with intercalated Na ions in a 3x3x1 supercell.
Color code: Na - violet, B - magenta, N - blue, C - gray.
}
\label{BNCNNa}
\end{figure}
\subsection{Li-air batteries}
Li-air batteries are considered the ultimate large energy density batteries that would enable long-range all-electric
vehicles. They produce electrical energy through the reaction of metallic
Li with O$_{2}$ taken from the air, whereby the discharge product must be Li$_{2}$O$_{2}$ in order for the
battery to be rechargeable. The fact that Li$_{2}$O$_{2}$ is an aggressive oxidant and may explosively oxidize the
carbon electrode in which it forms and deposits makes Li-O$_{2}$/peroxide batteries unsafe. A thorough analysis
\cite{KNemeth13} by the present author
of the thermochemistry and kinetics of Li-oxalate, Li$_{2}$C$_{2}$O$_{4}$,
together with the availability of catalysts for selective
and energy-efficient CO$_{2}$/oxalate conversions, indicates that
CO$_{2}$, rather than O$_{2}$, should be taken from the air (or from a tank)
to construct a high energy and power density battery that can be used safely and is made of environmentally benign and
economic materials. The details of the design, based on existing experimental
data, can be found in Ref. \onlinecite{KNemeth13}.
A key component of the energy-efficiency of CO$_{2}$/oxalate cathodes is the catalyst that reduces the
overpotential of CO$_{2}$ reduction to nearly zero. While such catalysts are known, for example copper
complexes or oxygen molecule in selected ionic liquid electrolytes \cite{KNemeth13},
the applicability of these catalysts may vary in
the various implementations, therefore there is a need for the development of additional robust
catalysts to optimize the performance of metal-CO$_{2}$/oxalate batteries.
\subsection{Photoemissive materials}
Another example of materials design discussed here concerns photoemissive materials. Improved photocathodes
are needed for future electron and light sources, such as x-ray free electron lasers, energy recovery
linacs and dynamic and ultrafast transmission electron microscopes. The improvements must concern
the brightness, the quantum yield, the workfunction and the chemical inertness and lifetime of the
photocathodes. Some of our recent works provide examples of related designs \cite{KNemeth10,JZTerdik12,ARuth13}.
In the first example, we have shown \cite{KNemeth10} using band structure calculations
that the angular distribution of emitted electrons can drastically
change when depositing an ultrathin (2-4 monolayers) MgO layer on the Ag(001) crystal surface.
While a lot of electronic
structure studies have been carried out on the MgO:Ag(001) system to explain the experimentally observed
variation of its catalytic
activity with varying number of MgO monolayers, we were the first to propose using this variation to
optimize the brightness (angular distribution) of emitted electrons through overlayer deposition.
Recent experimental results \cite{TCDroubay13}
have confirmed our predictions and point out the drastic change in the
angular distribution of emitted electrons due to MgO overlayers on Ag(001).
Ultrathin oxide monolayers over metal surfaces change the electrostatic boundary conditions on the surface; thus they may
significantly decrease or increase the workfunction and alter the shape and occupation of surface bands, which leads to
drastic changes in the properties of the emitted electrons \cite{KNemeth10}.
In the second example, we have pointed out that the workfunction of the seasoned, high quantum efficiency
Cs$_{2}$Te photoemissive material can be lowered from $\approx$ 3 eV to about 2.4 eV by acetylation
resulting in Cs$_{2}$TeC$_{2}$ while its quantum efficiency is preserved \cite{JZTerdik12}.
Analogous compounds Cs$_{2}$PdC$_{2}$ and Cs$_{2}$PtC$_{2}$ exist and we
have managed to synthesize Li$_{2}$TeC$_{2}$ (x-ray diffraction confirms theoretically predicted
structure) \cite{Li2TeC2}
as the first member of ternary acetylides with Te. This design was motivated by the goal of
turning Cs$_{2}$Te into an easier to excite $\pi$-electron system.
Our current research directions in the photocathode field focus on the theoretical screening of various inorganic
and organic overlayers on metals and semiconductors to lower the workfunction and to optimize the
brightness, quantum yield and lifetime of such
photocathodes in the often harsh temperature, imperfect-vacuum and electromagnetic-field environments in which they are used.
\section{Conclusions}
The present paper discusses several examples of materials design using quantum chemical and other theoretical /
computational methods to develop improved Li-ion and Li-air batteries and improved photoemissive
materials. The experimental testing of some of these designs is underway.
\section{Acknowledgements}
Discussions with Drs. L. Shaw (IIT), K. C. Harkay and G. Srajer (Argonne) are gratefully acknowledged.
The Li-air battery and the photocathode research has been supported by the U.S. DOE Office of Science, under
contract No. DE-AC02-06CH11357 and NSF (No. PHY-0969989).
\footnotetext{\textit{$^{a}$~Address: Physics Department, Illinois Institute of Technology,
Chicago, Illinois 60616, USA, [email protected]}}
\footnotesize{
\bibliographystyle{rsc}
\providecommand*{\mcitethebibliography}{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{30}
\providecommand*{\natexlab}[1]{#1}
\providecommand*{\mciteSetBstSublistMode}[1]{}
\providecommand*{\mciteSetBstMaxWidthForm}[2]{}
\providecommand*{\mciteBstWouldAddEndPuncttrue}
{\def\unskip.}{\unskip.}}
\providecommand*{\mciteBstWouldAddEndPunctfalse}
{\let\unskip.}\relax}
\providecommand*{\mciteSetBstMidEndSepPunct}[3]{}
\providecommand*{\mciteSetBstSublistLabelBeginEnd}[3]{}
\providecommand*{\unskip.}}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}
{(\emph{\alph{mcitesubitemcount}})}
\mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space}
{\relax}{\relax}
\bibitem[Zalesny \emph{et~al.}(2011)Zalesny, Papadopoulos, Mezey, and
Leszczynski]{LinearScaling}
\emph{Linear-Scaling Techniques in Computational Chemistry and Physics}, ed.
R.~Zalesny, M.~G. Papadopoulos, P.~G. Mezey and J.~Leszczynski, Springer,
Berlin, Heidelberg, New York, 1st edn, 2011, vol.~13\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{P. Albertus et al.}(2010)]{BoschCO2patent10}
{P. Albertus et al.}, \emph{{High Specific Energy Li/O$_{2}$-CO$_{2}$ Battery;
assignee: Bosch LLC; US Patent Application 12/907,205}}, 2010\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{S. C. Jones et al.}(2011)]{CFx-ContourEnergy11}
{S. C. Jones et al.}, \emph{{Polymer Materials As Binder for a CFx Cathode;
assignee: Contour Energy Systems Inc.; US Patent Application 13/010,431}},
2011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nemeth \emph{et~al.}(2013)Nemeth, van Veenendaal, and
Srajer]{US8389178}
K.~Nemeth, M.~van Veenendaal and G.~Srajer, \emph{{Electrochemical energy
storage device based on carbon dioxide as electroactive species, assignee:
US. Dept. of Energy, Patent (granted), US8389178}}, 2013\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pearce(2012)]{JMPearce12}
J.~M. Pearce, \emph{Nature}, 2012, \textbf{491}, 519--521\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{J. Hafner}(2008)]{VASP}
{J. Hafner}, \emph{J. Comput. Chem.}, 2008, \textbf{29}, 2044--2078\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{P. Giannozzi \emph{et~al.}}(2009)]{QE}
{P. Giannozzi \emph{et~al.}}, \emph{J. Phys.: Condens. Matter}, 2009, \textbf{21},
395502\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Paier et al.}(2006)]{JPaier06}
J.~{Paier et al.}, \emph{J. Chem. Phys.}, 2006, \textbf{124}, 154709\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Anisimov et al.}(1991)]{dftu}
V.~I. {Anisimov et al.}, \emph{Phys. Rev. B}, 1991, \textbf{44}, 943\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Soler, José M. et al.}(2002)]{Siesta}
{Soler, José M. et al.}, \emph{J. Phys. Cond. Mat.}, 2002, \textbf{14},
2745\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{{M. J. Frisch} et al.}(2013)]{Gaussian09}
{{M. J. Frisch} et al.}, \emph{{\sc Gaussian09}}, 2013, Gaussian, Inc.,
Wallingford CT, http://www.gaussian.com\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Baker et al.}(2012)]{PQS}
J.~{Baker et al.}, \emph{WIREs Comput. Mol. Sci.}, 2012, \textbf{2}, 63\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bock \emph{et~al.}(2013)Bock, Challacombe, Gan, Henkelman, Nemeth,
Niklasson, Odell, Schwegler, Tymczak, and Weber]{FreeON}
N.~Bock, M.~Challacombe, C.~K. Gan, G.~Henkelman, K.~Nemeth, A.~M.~N.
Niklasson, A.~Odell, E.~Schwegler, C.~J. Tymczak and V.~Weber, \emph{{\sc
FreeON}}, 2013, Los Alamos National Laboratory (LA-CC 01-2; LA-CC-04-086),
Copyright University of California., http://www.freeon.org/\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Suhai(1983)]{SSuhai83}
S.~Suhai, \emph{Phys. Rev. B}, 1983, \textbf{27}, 3506\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[N{\'e}meth and Srajer(2014)]{KNemeth13}
K.~N{\'e}meth and G.~Srajer, \emph{RSC Advances}, 2014, \textbf{4}, 1879\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Melot and Tarascon(2013)]{BCMelot13}
B.~C. Melot and J.-M. Tarascon, \emph{Acc. Chem. Res.}, 2013, \textbf{46},
1226--1238\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Hautier et al.}(2011)]{GHautier11}
G.~{Hautier et al.}, \emph{J. Mater. Chem.}, 2011, \textbf{21},
17147–17153\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Zhou et al.}(2004)]{FZhou04}
F.~{Zhou et al.}, \emph{Electrochem. Comm.}, 2004, \textbf{6}, 1144--1148\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Jain et al.}(2011)]{AJain11}
A.~{Jain et al.}, \emph{Phys. Rev. B}, 2011, \textbf{84}, 045115\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nemeth(2013)]{BNpatent}
K.~Nemeth, \emph{{Functionalized Boron Nitride Materials as Electroactive
Species in Electrochemical Energy Storage Devices, patent pending, assignee:
Nemeth's Materials Design LLC}}, 2013\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Yamane et al.}(1987)]{HYamane87}
H.~{Yamane et al.}, \emph{J. Solid State Chem.}, 1987, \textbf{71}, 1--11\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Perdew \emph{et~al.}(1996)Perdew, Burke, and Ernzerhof]{PBE}
J.~P. Perdew, K.~Burke and M.~Ernzerhof, \emph{Phys. Rev. Lett.}, 1996,
\textbf{77}, 3865\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Perdew et al.}(2008)]{PBEsol}
J.~P. {Perdew et al.}, \emph{Phys. Rev. Lett.}, 2008, \textbf{100},
136406\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Mason et al.}(2011)]{THMason11}
T.~H. {Mason et al.}, \emph{J. Phys. Chem. C}, 2011, \textbf{115},
16681--16688\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Huang et al.}(2008)]{JHuang08}
J.~{Huang et al.}, \emph{Phys. Chem. Chem. Phys.}, 2008, \textbf{10},
2625\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{{K. N{\'e}meth, K.C. Harkay \it et.al}}(2010)]{KNemeth10}
{{K. N{\'e}meth, K.C. Harkay \it et.al}}, \emph{Phys. Rev. Lett.}, 2010,
\textbf{104}, 046801\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Terdik and {N{\'e}meth et al.}(2012)]{JZTerdik12}
J.~Z. Terdik and K.~{N{\'e}meth et al.}, \emph{Phys. Rev. B}, 2012,
\textbf{86}, 035142\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ruth and {N{\'e}meth et al.}(2013)]{ARuth13}
A.~Ruth and K.~{N{\'e}meth et al.}, \emph{J. Appl. Phys.}, 2013, \textbf{113},
183703\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Droubay et al.}(2013)]{TCDroubay13}
T.~C. {Droubay et al.}, \emph{Phys. Rev. Lett., submitted}, 2013\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[N{\'e}meth \emph{et~al.}()N{\'e}meth, Unni, and {J. Kaduk et
al.}]{Li2TeC2}
K.~N{\'e}meth, A.~K. Unni and {J. Kaduk et al.}, \emph{{The synthesis of
ternary acetylides with Te, to be published}}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
}
\end{document}
\section{Proof of Proposition \ref{lem:marginalizable}}\label{sec:pflem:marginalizable}
We use mathematical induction on $|\{S_i:i\in V\setminus S\}|$, where $S_i$ is defined in Definition \ref{def:marginalizable}.
Before starting the proof, we define the equivalence class $[\ell]=\{i\in V\setminus S:S_i=S_\ell\}$.
Now, we start the proof by considering
\begin{align*}
&\sum_{x_{V\setminus S}}\mathbb{P}_{\beta,\gamma}(x)=\sum_{x_i:\,i\in(V\setminus S)\setminus[\ell]}\ \sum_{x_i:\,i\in[\ell]}\frac1Z\exp\left(\sum_{(i,j)\in E}\beta_{ij}x_ix_j+\sum_{i\in V}\gamma_ix_i\right)\\
&=\sum_{x_i:\,i\in(V\setminus S)\setminus[\ell]}\frac1Z\exp\left(\sum_{(i,j)\in E:\,i,j\notin[\ell]}\beta_{ij}x_ix_j+\sum_{i\in V\setminus[\ell]}\gamma_ix_i\right)\\
&\qquad\times\sum_{x_i:\,i\in[\ell]}\exp\left(\sum_{(i,j)\in E:\,i\in[\ell]}\beta_{ij}x_ix_j+\sum_{i\in [\ell]}\gamma_ix_i\right)\\
&=\sum_{x_i:\,i\in(V\setminus S)\setminus[\ell]}\frac1Z\exp\left(\sum_{(i,j)\in E:\,i,j\notin[\ell]}\beta_{ij}x_ix_j+\sum_{i\in V\setminus[\ell]}\gamma_ix_i\right)f_{[\ell]}(x_{S_\ell})
\end{align*}
where $f_{[\ell]}(x_{S_\ell})$ is some positive function.
Since $|S_\ell|\le 2$, one can modify the parameters into $\beta^\dagger,\gamma^\dagger$, changing them only on the elements of $S_\ell$, so as to achieve the following identity:
$$\sum_{x_{V\setminus S}}\mathbb{P}_{\beta,\gamma}(x)=\sum_{x_i:\,i\in V\setminus(S\cup[\ell])}\frac1{Z^\dagger}\exp\left(\sum_{(i,j)\in E^\dagger}\beta_{ij}^\dagger x_ix_j+\sum_{i\in V\setminus[\ell]}\gamma_i^\dagger x_i\right)$$
where $E^\dagger=E\cup\{(j,k):S_\ell=\{j,k\}\}$.
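The modification step can be made explicit. As a sketch, assuming the $\pm1$-valued spin variables of the Ising-type parameterization used above: for $S_\ell=\{j,k\}$, any positive function $f_{[\ell]}(x_j,x_k)$ has an exact log-linear representation
$$f_{[\ell]}(x_j,x_k)=\exp\left(c+\delta_jx_j+\delta_kx_k+b\,x_jx_k\right),$$
whose coefficients follow from the orthogonality of $1,x_j,x_k,x_jx_k$ on $\{-1,+1\}^2$:
\begin{align*}
c&=\tfrac14\sum_{s,t\in\{\pm1\}}\log f_{[\ell]}(s,t), &
b&=\tfrac14\sum_{s,t\in\{\pm1\}}st\,\log f_{[\ell]}(s,t),\\
\delta_j&=\tfrac14\sum_{s,t\in\{\pm1\}}s\,\log f_{[\ell]}(s,t), &
\delta_k&=\tfrac14\sum_{s,t\in\{\pm1\}}t\,\log f_{[\ell]}(s,t).
\end{align*}
Adding $b$ to $\beta_{jk}$ (creating the edge $(j,k)$ if absent), adding $\delta_j,\delta_k$ to $\gamma_j,\gamma_k$, and absorbing $e^{c}$ into the partition function yields $\beta^\dagger,\gamma^\dagger,Z^\dagger$; when $|S_\ell|=1$, only $c$ and a single $\delta$ are needed.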
Using the induction hypothesis, the above identity completes the proof of Proposition \ref{lem:marginalizable}.
\section{Proof of Theorem \ref{thm:main}}\label{sec:pfthm:main}
Since the algorithm only uses marginals of at most $K+L$ dimensions, instead of $\sigma_t$, consider the following sequence:
$$\sigma^\prime_t=\{S\subset S^\prime:S^\prime\in\sigma_t,|S|\le K+L\}.$$
If $\sigma^\prime_t=\sigma^\prime_{t-1}$, then the sequential local framework cannot recover more marginals after the $t$-th iteration; otherwise, $\sigma^\prime_t$ increases its cardinality by at least $1$.
However, the maximum cardinality of $\sigma^\prime_t$ is $O(|V|^{K+L})$, which implies that the algorithm always terminates in $O(|V|^{K+L})$ iterations.
This completes the proof of Theorem \ref{thm:main}.
\section{Proof of Lemma \ref{thm:grid}}\label{sec:pflem:grid}
We first consider the distribution conditioned on $x_{\{a,c,k\}}$ as illustrated in Figure \ref{fig:grid2}.
In Figure \ref{fig:grid2}, observe that $g$ is a bottleneck with views $b,f,p$.
Furthermore, $g$ is label consistent for $\{a,c,k\}$ with a reference $b$ by assuming $\beta_{bg}>0$ (or $\beta_{bg}<0$).
Hence, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,f,g,p\}}|x_{\{a,c,k\}}\right)$ using $\mathtt{TensorDecomp}$ and obtain $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,f,g,k,p\}}\right)$ using the following identity.
\begin{align*}
\mathbb{P}_{\beta,\gamma}\big(&x_{\{a,b,c,f,g,k,p\}}\big)=\mathbb{P}_{\beta,\gamma}\left(x_{\{b,f,g,p\}}|x_{\{a,c,k\}}\right)\mathbb{P}_{\beta,\gamma}\left(x_{\{a,c,k\}}\right).
\end{align*}
Similarly, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,f,k,\ell,p,q,r\}}\right)$, $\mathbb{P}_{\beta,\gamma}\left(x_{\{c,d,e,i,j,o,t\}}\right)$, $\mathbb{P}_{\beta,\gamma}\left(x_{\{e,j,n,o,r,s,t\}}\right)$.
In order to recover marginals including $x_h$ or $x_m$, $h$ and $m$ should be bottlenecks.
Conditioned on $x_{\{b,d,\ell,q\}}$, as illustrated in Figure \ref{fig:grid4}, $h$ is a bottleneck with views $c,p,r$, however, we do not have a marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,\ell,p,q,r\}}\right)$ currently.
Now, we recover the marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,\ell,p,q,r\}}\right)$.
Consider the distribution conditioned on $x_{\{p,r\}}$ as illustrated in Figure \ref{fig:grid3}.
In Figure \ref{fig:grid3}, observe that $q$ and $b,c,d$ are disconnected if $\ell$ is removed.
Furthermore, $\mathbb{P}_{\beta,\gamma}\left(x_{\{\ell,q\}}|x_{\{p,r\}}\right)$ and $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,q\}}|x_{\{p,r\}}\right)$ are already observed.
Hence, using $\mathtt{LinearView}$ by setting $S\leftarrow\{b,c,d\}$, $i\leftarrow\ell$, $j\leftarrow q$ and conditioning $x_{\{p,r\}}$, one can obtain $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,\ell,p,q,r\}}\right)$.
Now, $h$ is a bottleneck with views $c,p,r$ by conditioning $x_{\{b,d,\ell,q\}}$.
Using $\mathtt{TensorDecomp}$ one can obtain $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,h,\ell,p,q,r\}}\right)$.
Using the same procedure, one can also obtain $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,g,m,q,r,s\}}\right)$.
Up to this point, we have recovered all pairwise marginals between visible and latent variables.
The remaining goal is to recover pairwise marginals between latent variables.
First, by setting $S\leftarrow\{e,j,o\}$, $i\leftarrow h$, $j\leftarrow c$ and conditioning $x_{\{b,d\}}$, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,e,h,j,o\}}\right)$ using $\mathtt{LinearView}$.
Consecutively, by setting $S\leftarrow\{h\}$, $i\leftarrow i$, $j\leftarrow j$ and conditioning $x_{\{e,o\}}$, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{e,h,i,j,o\}}\right)$ using $\mathtt{LinearView}$ which includes the pairwise marginals $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j\}}\right)$.
Other pairwise marginals between latent variables can be also recovered using the same procedure.
Since we end the sequence in 5 steps, this completes the proof of Lemma \ref{thm:grid}.
\section{Proof of Lemma \ref{lem:convRBM}}\label{sec:pflem:convRBM}
We first consider the distribution conditioned on $x_{\{c,e,f\}}$ as illustrated in Figure \ref{fig:crbm2}.
In Figure \ref{fig:crbm2}, observe that $m$ is a bottleneck with views $a,b,d$ with a reference $a$ by assuming $\beta_{am}>0$ (or $\beta_{am}<0$).
Hence, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,m\}}|x_{\{c,e,f\}}\right)$ using $\mathtt{TensorDecomp}$ and obtain $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,d,e,f,m\}}\right)$ using the following identity.
\begin{align*}
\mathbb{P}_{\beta,\gamma}\big(&x_{\{a,b,c,d,e,f,m\}}\big)=\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,m\}}|x_{\{c,e,f\}}\right)\mathbb{P}_{\beta,\gamma}\left(x_{\{c,e,f\}}\right).
\end{align*}
Similarly, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,d,e,f,n\}}\right)$, $\mathbb{P}_{\beta,\gamma}\left(x_{\{g,h,i,j,k,\ell,q\}}\right)$, $\mathbb{P}_{\beta,\gamma}\left(x_{\{g,h,i,j,k,\ell,r\}}\right)$.
In order to recover marginals including $x_o$ or $x_p$, $o$ and $p$ should be bottlenecks.
Conditioned on $x_{\{h,m,q\}}$, $o$ is a bottleneck with views $d,e,g$; however, we do not yet have the marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,m,q\}}\right)$.
Now, we recover the marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,m,q\}}\right)$.
Since we observed $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,e,g,h,j,k\}}\right)$ and $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,e,m\}}\right)$, we can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,e,g,h,j,k,m\}}\right)$ using $\mathtt{DisjointView}$ by setting $S\leftarrow\{g,h,j,k\}$, $T\leftarrow\{m\}$ and $C\leftarrow\{a,b,d,e\}$.
Likewise, using $\mathtt{DisjointView}$, one can recover a marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,e,g,h,j,k,m,q\}}\right)$ as well.
Using the recovered marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,m,q\}}\right)$, conditioning on $x_{\{h,m,q\}}$ and using $\mathtt{TensorDecomp}$, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,m,o,q\}}\right)$.
Similarly, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{e,f,h,i,n,p,r\}}\right)$.
Since the sequence ends in 4 steps, this completes the proof of Lemma \ref{lem:convRBM}.
\section{Proof of Lemma \ref{thm:regular}}\label{sec:pflem:regular}
The main idea of the proof is to show that every set of latent nodes of size at most $cN$ contains at least one latent node recoverable using $\mathtt{TensorDecomp}$, where $N=|V|$.
We first state the following condition for a latent node $i$.
\begin{condition}\label{cond:regular}
For a latent node $i$, two of its neighbors $j,k$ are visible, and every node in the set $S$ of neighbors of $j$ and $k$ (excluding $i$ and not containing $j,k$) is visible.
Also, there exists $\ell\in O\setminus S$ such that $i$ is a bottleneck with views $j,k,\ell$ in $G\setminus S$.
\end{condition}
In the above condition, $O$ denotes the set of visible nodes.
One can easily observe that if a latent node satisfies the above condition, then it is recoverable by conditioning on the neighbors of $j,k$ and applying $\mathtt{TensorDecomp}$ with views $j,k$ and some other node.
Now consider the following procedure.
First, duplicate each $i\in V$ into $d$ copies $i_1,\dots,i_d$, where $i_n$ is visible (resp.\ latent) if $i$ is visible (resp.\ latent).
Let $V^\prime$ be the resulting duplicated vertex set, and let $O^\prime\subset V^\prime$ be the set of visible nodes and $H^\prime=V^\prime\setminus O^\prime$ the set of latent nodes.
The procedure starts with a graph on $V^\prime$ without edges.
\begin{itemize}
\item[1.] Choose latent nodes $i_1,\dots,i_d\in H^\prime$.
For each $n\in\{1,\dots,d\}$
with deg$(i_n)\ne 1$,
choose a single neighbor $j_m$ of $i_n$ with probability
$$\mathbb{P}(j_m~\text{is chosen})=\frac{1-\text{deg}(j_m)}{\sum_{k_o\in V^\prime}(1-\text{deg}(k_o))}.$$
\item[2.] Similarly, for each neighbor $j_m\in O^\prime$ of $i_1,\dots,i_d$: for all copies $j_1,\dots,j_d$ satisfying deg$(j_o)=0$, add neighbors of $j_o$ as in step 1.
\item[3.] Check whether there exists an edge $(i_n,i_m)$ or a pair of edges $(i_n,j_m)$, $(i_{n^\prime},j_{m^\prime})$.
If such an edge or pair of edges exists, the procedure restarts from the beginning.
\item[4.] Let $G$ be the graph obtained by contracting $\ell_1,\dots,\ell_d$ into $\ell$ for every $\ell\in V$.
Check whether $i$ satisfies Condition \ref{cond:regular} with $j,k$, i.e., whether $i$ is a bottleneck after conditioning on the neighbors of $j,k$.
\item[5.] If $G$ satisfies the condition checked in step 4, then the procedure succeeds.
If not, repeat the procedure for the next latent node until every latent node has decided its neighbors.
\item[6.] If every latent node has decided its neighbors, the procedure fails.
\end{itemize}
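The duplication-and-pairing in steps 1--2 together with the restart in step 3 is essentially the configuration (pairing) model for random $d$-regular graphs. A minimal rejection-sampling sketch of that part (the conditional checks of steps 4--6 are omitted):

```python
import random

def random_regular_graph(n, d, rng):
    """Sample a simple d-regular graph on n nodes (n*d must be even) via
    the configuration model: pair up d stub copies of each node uniformly
    at random, restarting whenever a self-loop or multi-edge appears."""
    assert n * d % 2 == 0
    while True:  # restart on failure, as in step 3
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            e = (min(u, v), max(u, v))
            if u == v or e in edges:  # self-loop or multi-edge
                ok = False
                break
            edges.add(e)
        if ok:
            return edges

edges = random_regular_graph(10, 3, random.Random(0))
```

For constant $d$ the restart probability is bounded away from $1$, which matches the constant success probability invoked later in the proof.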
The above procedure constructs a partial set of edges of a random $d$-regular graph, obtained by contracting $\ell_1,\dots,\ell_d$ into $\ell$.
Step 3 checks whether the procedure creates a self-loop or multiple edges.
One can notice that if any node satisfies Condition \ref{cond:regular} in step 4, then there exists a recoverable latent node.
Our primary goal is to bound the probability that the procedure fails, i.e., no latent node satisfies Condition \ref{cond:regular} in the partial graph.
One can observe that if a visible node chosen as a neighbor of a latent node in the procedure is already a neighbor of another latent node, then it cannot help satisfy Condition \ref{cond:regular}.
Also, at each iteration, choosing neighbors removes at most $2d$ nodes from the pool of candidates, as at most $d^2$ edges are created.
Now, suppose there exist $\alpha N$ latent nodes, where $\alpha<\frac1{2d(d-1)}$.
Using this fact, one can observe that
the probability that a visible node connected to a latent node has $d-1$ visible neighbors is at least $p=(1-2d\alpha)^{d-1}$.
We also note that the probability that the procedure starts over in step 3 is $O(1/N)$ at each iteration.
Therefore, one can conclude that
\begin{align}
&\mathbb{P}(\text{the procedure fails})\notag\\
&\le\prod_{i\in H}\Bigg[O(1)\sum_{n=0}^{d-\text{deg}(i)}\left(\frac{\alpha N}{(1-2d\alpha)N}\right)^{d-\text{deg}(i)-n}\left((1-p)^n+np(1-p)^{n-1}+O\left(\frac1N\right)\right)^{\mathbf{1}_{n\ge 2}} \Bigg]^{\mathbf{1}_{d-\text{deg}(i)\ge2}}\notag\\
&\le\prod_{i\in H}\Bigg[O(1)\sum_{n=0}^{d-\text{deg}(i)}\alpha^{d-\text{deg}(i)-n}\left(\alpha^{n-1}+O\left(\frac1N\right)\right)^{\mathbf{1}_{n\ge 2}}\Bigg]^{\mathbf{1}_{d-\text{deg}(i)\ge2}}\notag\\
&\le\prod_{i\in H}\Bigg[O(1)\alpha^{d-\text{deg}(i)-1}\Bigg]^{\mathbf{1}_{d-\text{deg}(i)\ge2}}\notag\\
&\le\left(O(1)\alpha^{d-1}\right)^{\alpha N/(d+1)}\left(O(1)\alpha^{d-2}\right)^{d\alpha N/(d+1)^2}\notag\\
&\le \left(O(1)\alpha\right)^{k\alpha N}\label{eq:regular}
\end{align}
for sufficiently small $\alpha$ (up to a constant),
where the $O(1/N)$ term in the bracket accounts for the probability that no $\ell$ as in Condition \ref{cond:regular} exists, and for the fact that degrees vary as the procedure iterates.
Also, $\mathbf{1}_S$ is the indicator function, having value $1$ if the event $S$ occurs and $0$ otherwise.
The second-to-last inequality follows from the fact that we can first choose at least $\alpha N/(d+1)$ latent nodes of degree $0$, and then at least $d\alpha N/(d+1)^2$ latent nodes of degree at most $1$.
The constant $k$ in the last inequality is
$$k=\frac{2d^2-2d-1}{(d+1)^2}>1$$
for all $d\ge 5$.
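A quick numerical check of this exponent (and of why the lemma requires $d\ge 5$):

```python
# k(d) = (2d^2 - 2d - 1) / (d + 1)^2 exceeds 1 exactly for d >= 5;
# for d = 4 it equals 23/25 < 1, so the bound above would fail.
def k(d):
    return (2 * d * d - 2 * d - 1) / (d + 1) ** 2

values = {d: k(d) for d in range(4, 10)}
```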
One might worry that after the procedure succeeds, the extension of the procedure to all vertices may start over with high probability, so that the probability $\mathbb{P}(\text{no latent node satisfies Condition \ref{cond:regular}})$ becomes significantly larger than \eqref{eq:regular}.
However, we note that the probability that extending the procedure to all vertices forces a restart is $1-\exp\left(\frac{1-d^2}{4}\right)$ a.a.s., i.e., constant (see \cite{wormald1999models}), and therefore
$$\mathbb{P}(\text{no latent node satisfies Condition \ref{cond:regular}})\le\exp\left(\frac{d^2-1}{4}\right)\mathbb{P}(\text{the procedure fails})=\left(O(1)\alpha\right)^{k\alpha N}$$
for $O(1)\alpha<1$ in the above equation.
Now, we consider all $1\le\alpha N\le cN$ and all choices of sets of latent nodes, and apply the union bound as below.
The explicit choice of $c$ will be presented later.
\begin{align*}
&\mathbb{P}(\text{no latent node satisfies Condition \ref{cond:regular} for all choices of a set of latent node with $1\le\alpha N\le cN$})\\
&\qquad=\sum_{1\le\alpha N\le cN}\binom{N}{\alpha N}\mathbb{P}(\text{no latent node satisfies Condition \ref{cond:regular}})\\
&\qquad\le\sum_{1\le\alpha N\le cN}\binom{N}{\alpha N}\left(O(1)\alpha\right)^{k\alpha N}\\
&\qquad\le\sum_{1\le\alpha N\le cN}O(1)\sqrt{\frac{1}{\alpha(1-\alpha)N}}\alpha^{-\alpha N}(1-\alpha)^{-(1-\alpha)N}\left(O(1)\alpha\right)^{k\alpha N}\\
&\qquad\le\sum_{1\le\alpha N\le cN}O(1)\sqrt{\frac{1}{\alpha(1-\alpha)N}}\exp\Big[\Big((k-1)\alpha\log\alpha+\alpha O(1)-(1-\alpha)\log(1-\alpha)\Big)N\Big]\\
&\qquad=o(1)
\end{align*}
where the second inequality follows from Stirling's formula, and we choose $c$ satisfying $(k-1)c\log c+c\log O(1)-(1-c)\log(1-c)<0$ to obtain the last equality.
Such $c$ always exists as
\begin{align*}
(k-1)c\log c+c\log O(1)-(1-c)\log(1-c)=c((k-1)\log c+O(1)+1-O(c))<0
\end{align*}
for a sufficiently small $c$.
Now, we know that at each iteration of the sequential learning framework, there exists at least one bottleneck latent node that can be recovered without the labeling issue (forced labels).
Furthermore, using $\mathtt{LinearView}$ and conditioning, one can treat recovered latent nodes as visible nodes; the marginals involving latent nodes always contain the conditioned variables, so the order of the marginals effectively reduces: recovered marginals have fixed order, part of which consists of the constant number (at most $d-1$) of conditioned variables.
Using this fact, one can conclude that the sequential learning framework recovers every pairwise marginal within $2d|H|$ iterations, where the factor $2d$ follows from the fact that recovering a single latent node requires at most $2d-2$ calls of $\mathtt{LinearView}$ and at most two bottleneck calls.
This completes the proof of Lemma \ref{thm:regular}.
\section{Conclusion}
In this paper, we present a new learning strategy for latent graphical models.
Unlike known algebraic (e.g., $\mathtt{TensorDecomp}$) and optimization (e.g.,
$\mathtt{ExclusiveView}$) approaches for this non-convex problem,
ours is of a combinatorial flavor and is more generic, using them as subroutines.
We believe that our approach provides a new angle for the important learning task.
\section{Examples}\label{sec:example}
In this section, we provide concrete examples of loopy
latent GM where the proposed sequential learning
framework
is applicable.
In what follows,
we assume that it uses classes $\mathcal N, \mathcal M$
corresponding to $\mathtt{TensorDecomp}$, $\mathtt{ExclusiveView}$, $\mathtt{DisjointView}$ and $\mathtt{LinearView}$.
\vspace{0.05in}
\noindent {\bf Grid graph.}
We first consider a latent GM on a grid graph illustrated in Figure \ref{fig:grid1} where boundary nodes are visible and internal nodes are latent.
The following lemma states that
all pairwise marginals
can be successfully recovered given observed ones, utilizing the proposed sequential learning algorithm.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid1}
\caption{}
\label{fig:grid1}
\end{subfigure}
~
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid2}
\caption{}
\label{fig:grid2}
\end{subfigure}
~
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid3}
\caption{}
\label{fig:grid3}
\end{subfigure}
~
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid4}
\caption{}
\label{fig:grid4}
\end{subfigure}
~
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid5}
\caption{}
\label{fig:grid5}
\end{subfigure}
~
\begin{subfigure}[b]{0.145\textwidth}
\centering
\includegraphics[width=\textwidth]{grid6}
\caption{}
\label{fig:grid6}
\end{subfigure}
\caption{Sequential learning for recovering $\mathbb{P}_{\beta,\gamma}(x_h,x_i)$. (a) GM on a grid graph. (b) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,f,g,k,p\}}\right)$ using that $g$ is a bottleneck with views $b,f,p$ conditioned on $x_{\{a,c,k\}}$; similarly, recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,f,k,\ell,p,q,r\}}\right)$ and $\mathbb{P}_{\beta,\gamma}\left(x_{\{c,d,e,i,j,o,t\}}\right)$. (c) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,\ell,p,q,r\}}\right)$ using that $S\leftarrow\{b,c,d\}$, $i\leftarrow\ell$ and $j\leftarrow q$ form an input of $\mathtt{LinearView}$ conditioned on $x_{\{p,q\}}$. (d) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,h,\ell,p,q,r\}}\right)$ using that $h$ is a bottleneck with views $c,p,r$ conditioned on $x_{\{b,d,\ell,q\}}$. (e) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{b,c,d,h,e,j,o\}}\right)$ using that $S\leftarrow\{e,j,o\}$, $i\leftarrow h$ and $j\leftarrow c$ form an input of $\mathtt{LinearView}$ conditioned on $x_{\{b,d\}}$. (f) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{e,h,i,j,o\}}\right)$ using that $S\leftarrow\{h\}$, $i\leftarrow i$ and $j\leftarrow j$ form an input of $\mathtt{LinearView}$ conditioned on $x_{\{e,o\}}$.}
\label{fig:grid}
\end{figure}
\begin{lemma}\label{thm:grid}
Consider any latent GM with a parameter $\beta,\gamma$ illustrated in Figure \ref{fig:grid1}, $K=3$,
and $\sigma_0=\{S\subset O:|S|\le 6\}$.
Then, $\sigma_{5}$ updated under Algorithm \ref{alg:sequential} contains all pairwise marginals.
\end{lemma}
In the above, recall that
$O$ is the set of visible nodes.
The proof strategy is illustrated in Figure \ref{fig:grid}
and the formal proof is presented in Appendix \ref{sec:pflem:grid}.
We remark that the proof of Lemma \ref{thm:grid} does not require $\mathtt{ExclusiveView}$ or $\mathtt{DisjointView}$.
\iffalse
We also perform the real experiment comparing the performance of EM and Algorithm \ref{alg:sequential}.
We generate ten GMs on the grid graph with random parameter $\gamma_i,\beta_{ij}\sim \text{Unif}(-5,5)$.\footnote{Unif$(a,b)$ is the uniform distribution in the interval $[a,b]$.}
Since the graph is small, we can use the exact visible marginals as inputs for both algorithms, and
EM can also perform exact inference at each iteration.
We measure
\begin{align*}
&\sum_{i\in V}\frac{\left|\mathbb{P}_{\beta,\gamma}(x_i=1)-\mathbb{\widehat P}(x_i=1)\right|}{|V|+|E|}\\
&\qquad\quad+\sum_{(i,j)\in E}\frac{\left|\mathbb{P}_{\beta,\gamma}(x_ix_j=1)-\mathbb{\widehat P}(x_ix_j=1)\right|}{|V|+|E|}.
\end{align*}
where $\mathbb{\widehat P}$ is the estimated distribution returned by an algorithm.
Hence, this measure is between $0$ and $1$.
The experimental results are listed in Table \ref{table:exp}, which shows
that EM stuck at bad local optimum and its estimations are far from true ones
even though it uses true exact visible marginals as its input.
\begin{table}[t]
\centering
\caption{Performance comparisons between EM and the sequential learning framework.}
\label{table:exp}
\begin{tabular}{lll}
\cline{1-3}
\multicolumn{1}{|l|}{GM} & \multicolumn{1}{l|}{EM} & \multicolumn{1}{l|}{Algorithm \ref{alg:sequential}} \\ \cline{1-3}
\multicolumn{1}{|l|}{Figure \ref{fig:grid1}} & \multicolumn{1}{l|}{0.35} & \multicolumn{1}{l|}{0} \\ \cline{1-3}
\multicolumn{1}{|l|}{Figure \ref{fig:crbm1}} & \multicolumn{1}{l|}{0.37} & \multicolumn{1}{l|}{0} \\ \cline{1-3}
\end{tabular}
\end{table}
\fi
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{crbm1}
\caption{}
\label{fig:crbm1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{crbm2}
\caption{}
\label{fig:crbm2}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{crbm3}
\caption{}
\label{fig:crbm3}
\end{subfigure}
\caption{Sequential learning for recovering $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,o\}}\right)$. (a) CRBM where edges exist between $m$ and $\{a,b,d,e\}$, $n$ and $\{b,c,e,f\}$, $o$ and $\{d,e,g,h\}$, $p$ and $\{e,f,h,i\}$, $q$ and $\{g,h,j,k\}$, $r$ and $\{h,i,k,\ell\}$. (b) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,c,d,e,f,m\}}\right)$ using that $m$ is a bottleneck with views $a,b,d$ conditioned on $x_{\{c,e,f\}}$; similarly, recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{g,h,i,j,k,\ell,q\}}\right)$. (c) Recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{a,b,d,e,g,h,j,k,m,q\}}\right)$ using $\mathtt{DisjointView}$;
then recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{d,e,g,h,m,o,q\}}\right)$ using that $o$ is a bottleneck with views $d,e,g$ conditioned on $x_{\{h,m,q\}}$.}
\label{fig:crbm}
\end{figure}
\vspace{0.05in}
\noindent {\bf Convolutional graph.}
Second,
we consider a latent GM illustrated in Figure \ref{fig:crbm1},
which corresponds to a convolutional restricted Boltzmann machine (CRBM) \cite{lee2009convolutional},
and also prove the following lemma.
\begin{lemma}\label{lem:convRBM}
Consider any latent GM with a parameter $\beta,\gamma$ illustrated in Figure \ref{fig:crbm1}, $K=3$,
and $\sigma_0=\{S\subset O:|S|\le 8\}$.
Then, $\sigma_{4}$ updated under Algorithm \ref{alg:sequential} contains all pairwise marginals.
\end{lemma}
The proof strategy is illustrated in Figure \ref{fig:crbm}
and the formal proof is presented again in Appendix \ref{sec:pflem:convRBM}.
We remark that the proof of Lemma \ref{lem:convRBM} does not require $\mathtt{ExclusiveView}$ or $\mathtt{LinearView}$.
Furthermore, it is straightforward to generalize the proof of Lemma \ref{lem:convRBM}
for arbitrary CRBM.
\begin{lemma}\label{thm:convRBM}
Consider any CRBM with $N\times M$ visible nodes and a filter size $n\times m$, $2\le n\le m$, $K=2mn-4$
and $\sigma_0=\{S\subset O:|S|\le 4mn-2m\}$.
Then, $\sigma_{MNmn/2}$ updated under Algorithm \ref{alg:sequential} contains all pairwise marginals.\footnote{The lemma holds for an arbitrary stride of the CRBM.}
\end{lemma}
\iffalse
In this section, we provide an example of learning GM on a small convolutional RBM (CRBM).
Consider a latent GM with a parameter $\beta,\gamma$ on a grid graph illustrated as in Figure \ref{fig:crbm1} the above nodes are visible and the below nodes are latent.
Our goal is to recover every pairwise marginals utilizing the sequential learning given a set of observed marginals on $\sigma_0=\{S\subset O:|S|\le 8\}$ and conditioning at most 3 variables, i.e., $K=3$, where $O$ is a set of observed nodes.
\fi
\vspace{0.05in}
\noindent {\bf Random regular graph.}
Finally, we state the following lemma for latent random regular GMs.
\begin{lemma}\label{thm:regular}
Consider any latent GM with a parameter $\beta,\gamma$
on a random $d$-regular graph $(V,E)$ for some constant $d\geq 5$, $K=2d-2$ and
$\sigma_0=\{S\subset O:|S|\le 2(d-1)^2|H|\}$.
There exists a constant $c=c(d)$ such that if the number of latent variables is at most $c|V|$,
$\sigma_{2d|H|}$ updated under Algorithm \ref{alg:sequential} contains all pairwise marginals a.a.s.
\end{lemma}
The proof of the above lemma is presented in Appendix \ref{sec:pflem:regular},
where the argument would be impossible without our sequential learning strategy.
One can obtain an explicit formula for $c(d)$ from our proof, but it is quite a loose bound since we did not make much effort to optimize it.
\section{Introduction}
Graphical models (GMs) are a succinct representation of a joint distribution on a graph, where each node corresponds to a random variable and each edge captures a conditional dependency between random variables.
GMs have been successfully applied in various fields including
information theory \cite{gallager1962low,kschischang1998iterative},
physics \cite{parisi1988statistical}
and machine learning \cite{jordan1998learning,freeman2000learning}.
Introducing latent variables to GMs
has been a popular approach for enhancing their representational
power in recent deep models, e.g.,
convolutional/restricted/deep Boltzmann machines \cite{lee2009convolutional,salakhutdinov2009deep}.
Furthermore, latent variables are inevitable in certain
scenarios where a part of the samples is missing, e.g., see \cite{fayyad1996data}.
\iffalse
the complete random variables is usually impossible or requires high costs.
In particualr, GMs having latent variables
Recently, latent GM receive much attention due to its representation power and empirical success (e.g., RBM, DBM \cite{xx}).
Furthermore, latent GM is natural to be introduced in practical scenario since observing the complete random variables is usually impossible or requires high costs.
\fi
However, learning the parameters of latent GMs is significantly harder than
learning those of fully observed ones, since
the latent variables make the corresponding negative log-likelihood non-convex.
The main challenge comes from the difficulty of
inferring the unobserved marginal probabilities associated
with latent/hidden variables.
Nevertheless, the expectation-maximization (EM) schemes \cite{dempster1977maximum}
have been popularly used in practice with empirical successes,
e.g., contrastive divergence learning for deep models \cite{hinton2002training}.
They iteratively infer unobserved marginals given the current estimate
of the parameters, and typically get stuck at local optima of the log-likelihood function \cite{redner1984mixture}.
\iffalse
Given data,
the typical strategy for learning latent GM is the maximum likelihood estimation via the gradient method which iteratively estimates the data distribution and the model distribution of sufficient statistics.
The main bottleneck of learning latent GM is that the log-likelihood objective is non-convex, i.e., the gradient method suffers from local optima.
It is known that once we obtain the distribution of the sufficient statistics including latent variables, the log-likelihood objective becomes convex.
However, the direct observation of the distribution including latent variables is not likely to happen.
\fi
To address this issue,
spectral methods have provided a refreshing angle
on learning probabilistic latent models \cite{anandkumar2014tensor}.
These theoretical methods exploit the linear-algebraic
properties of a model to factorize
observed (low-order) moments/marginals into unobserved ones.
Furthermore, the factorization methods
can be combined with convex log-likelihood optimizations
under certain structures, coined {exclusive views},
of latent GMs \cite{chaganty2014estimating}.
Both factorization methods and exclusive views can be understood as
`local algorithms' handling certain partial structures of latent GMs.
However, up to now,
they are known to be applicable only to a quite limited class of latent GMs,
and are not as broadly applicable as EM,
which is the main motivation of this paper.
\vspace{0.05in}
\noindent{\bf Contribution.}
Our major question is ``Can we learn latent GMs of more complicated structures beyond naive applications
of local algorithms, e.g., known factorization methods or exclusive views?''.
To address this, we introduce two novel concepts, called marginalization and conditioning,
which reduce the problem of learning a larger GM to that of a smaller one.
Hence, if the smaller one is possible to be processed by
known local algorithms,
then the larger one is too.
Our marginalization concept suggests searching for a `marginalizable' subset of variables of a GM
so that their marginal distributions are invariant with respect to the other variables
under certain graphical transformations.
It allows us to focus on learning the smaller transformed GM, instead of the original larger one.
On the other hand, our conditioning concept removes some dependencies among variables of a GM,
simply by conditioning on some subset of variables.
Hence, it enables us to discover marginalizable structures that were not present before conditioning.
At first glance, conditioning looks very powerful as conditioning more variables would discover more desired marginalizable structures.
However, as more variables are conditioned, the algorithmic complexity grows exponentially.
Therefore, we set an upper bound on the number of conditioned variables.
Marginalization and conditioning
naturally motivate a sequential scheme that repeatedly
recovers larger portions of unobserved marginals given previously recovered/observed ones, i.e.,
recursively
recovering unobserved marginals utilizing any `black-box' local algorithms.
Developing new local algorithms, other than known factorization methods and exclusive views,
is not our major scope. Nevertheless, we provide two such new algorithms, coined
{disjoint views} and {linear views}, which play a similar role to exclusive views, i.e.,
they can also be combined
with known factorization methods.
Given these local algorithms, the proposed sequential learning scheme can learn a significantly
broader and more complicated class of latent GMs than previously known, including convolutional restricted Boltzmann machines
and GMs on random regular graphs, as described in Section \ref{sec:example}.
Consequently, our results imply that there exists a one-to-one correspondence between
observed distributions and parameters for the class of latent GMs.
Furthermore, for arbitrary latent GMs, it can be used to boost the performance of EM as a pre-processing stage:
first run it to recover as many unobserved marginals as possible, and then run EM using the additional information.
We believe that
our approach provides a new angle for the important problem of learning latent GMs.
\vspace{0.05in}
\noindent {\bf Related works.}
Parameter estimation of latent GMs has a long history, dating back to \cite{dempster1977maximum}. While EM can be broadly applied to most latent GMs, it suffers not only from local optima but also from a risk of slow convergence. A natural alternative to the \emph{general method} of EM is to constrain the structure of the graphical model. In independent component analysis (ICA) and its extensions \cite{Hyvarinen_2000,bach2002kernel}, latent variables are assumed to be independent, inducing a simple product form of the latent distribution.
Recently, spectral methods have been successfully applied for various classes of GMs including latent trees \cite{mossel2005learning,song2011kernel}, ICA \cite{comon2010handbook,podosinnikova2015rethinking}, Gaussian mixture models \cite{Hsu2013}, hidden Markov models \cite{siddiqi2010reduced,song2010hilbert,hsu2012spectral,anandkumar2012method,zhang2015spectral}, latent Dirichlet allocation \cite{Anandkumar2012} and others \cite{Halpern2013,Chaganty2013,zou2013contrastive,song2014nonparametric}.
In particular, \cite{anandkumar2014tensor} proposed a tensor-based algorithm
under certain graph structures.
Another important line of work using the method of moments for latent GMs concerns recovering joint or conditional probabilities only among observable variables (see \cite{Balle2012} and its references). \cite{Parikh2011,Parikh2012} proposed spectral algorithms to recover the joint distribution among observable variables when the graph structure is a bottlenecked tree. \cite{chaganty2014estimating} relaxed the tree-structure constraint and proposed a technique combining the method of moments with likelihood for certain structures. Our generic sequential learning framework allows the use of all these approaches as key components, in order to broaden their applicability.
We note that we primarily focus on undirected pairwise binary GMs in this paper, but our results can be
naturally extended to other GMs.
\iffalse
[REMOVE LATER IF NOT RELATED]
Structure estimation for latent graphical models are even more challenging problem than parameter estimation.
As a very special case of latent Gaussian GMs, \cite{Chandrasekaran2012} showed how we can learn the equivalent visible Gaussian GMs with convex problems. Beyond the Gaussian case, \cite{Erd99,Anandkumar2011,Choi2011} proposed algorithms for the tree-structured latent GMs. A method of moment approach has been also studied in \cite{Anandkumar2013} to recover the structure learning of linear Bayesian networks with latent variables. In contrast, we focus on parameter learning of latent GMs and consider more general graph structures rather assuming the structure is known a priori.
\fi
\iffalse
as well as .
sequential learning framework which could fully exploit any learning algorithm for latent GM and broaden its applications.
The main idea of our framework is to discover graph structures, which is guaranteed to learn, using two strategies called marginalization and conditioning.
Our framework sequentially discovers such graph structure and recovers the distribution corresponding to them.
We also provide an poly-time algorithm for verifying whether a latent GM can be learned or not using our framework.
\fi
\iffalse
\noindent{\bf Organization.} In Section \ref{sec:pre}, we provide necessary backgrounds on
GMs and their parameters learning problems.
Our key concepts, marginalization and conditioning, are explained in Section \ref{sec:cond}
and our learning strategy of their sequential applications is presented in Section \ref{sec:main}.
We provide concrete examples of latent GMs learnable under the proposed framework in Section \ref{sec:example}.
\fi
\section{Marginalizing and Conditioning}\label{sec:cond}
In Section \ref{sec:tensor}, we introduced sufficient conditions for recovering unobservable marginals. Specifically, Theorems \ref{thm:bottleneck} and \ref{thm:exclusiveview}
state that for certain structures of latent GMs, it is possible to recover latent marginals simply from low-order visible marginals, and in turn the parameters of latent GMs via the convex MLE estimators in \eqref{eq:ml}.
Now, a natural question arises: ``Can we recover unobserved marginals even for latent GMs with more complicated structures, beyond naive applications of bottlenecks or exclusive views?'' To address this, in this section we enlarge the class of such latent GMs by proposing two generic
concepts: marginalization and conditioning.
\subsection{Key Ideas}
We start by defining two concepts, marginalization and conditioning, formally.
The former is a combinatorial concept defined as follows.
\begin{definition}[Marginalization]\label{def:marginalizable}
Given graph $G=(V,E)$, we say $S\subset V$ is marginalizable if for all $i\in V\setminus S$, there exists a (minimal) set $S_i\subset S$ with $|S_i|\le 2$ such that $i$ and $S\setminus S_i$ are disconnected in $G\setminus S_i$.\footnote{$G\setminus S_i$
is the subgraph of $G=(V,E)$ induced by $V\setminus S_i$.}
For marginalizable set $S$ in $G=(V,E)$,
the marginalization of $S$, denoted by $\mathtt{Marg}(S,G)$,
is the graph on $S$ with edges
$$\big\{(i,j)\in E\, :\, i,j\in S\big\}\cup\big\{(j,k)\, :\, S_i=\{j,k\} \text{ for } i\in V\setminus S\big\}.$$
\end{definition}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{marginalizable1}
\caption{$G$ and $S$}
\label{fig:marginalizable1}
\end{subfigure}
\qquad\quad
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.55\textwidth]{marginalizable2}
\caption{$\mathtt{Marg}(S,G)$}
\label{fig:marginalizable2}
\end{subfigure}
\caption{Examples of (a) a graph $G$ and a marginalizable set $S$ in $G$; (b) the marginalization $\mathtt{Marg}(S,G)$ of $S$.}
\label{fig:marginalizable}
\end{figure}
In Figure \ref{fig:marginalizable}, for example, node $i$ is disconnected from $\{k,o\}$ when $S_i=\{j,n\}$ is removed. Hence, the edge between $j$ and $n$ is additionally included in the marginalization of $S$.
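Definition \ref{def:marginalizable} can be checked by brute force over candidate sets $S_i$ with $|S_i|\le 2$. The following Python sketch (on a hypothetical path graph, not the graph of Figure \ref{fig:marginalizable}) tests marginalizability and builds the edge set of $\mathtt{Marg}(S,G)$; it picks the first minimal $S_i$ found, which suffices for illustration:

```python
from collections import deque
from itertools import combinations

def reachable(adj, removed, src):
    """Vertices reachable from src once `removed` is deleted (BFS)."""
    seen, q = {src}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def marginalization(adj, S):
    """Edge set of Marg(S, G) per the definition above, or None if S is
    not marginalizable.  adj: dict mapping node -> set of neighbors."""
    extra = set()
    for i in set(adj) - S:
        for r in (0, 1, 2):  # search a minimal S_i with |S_i| <= 2
            cut = next((set(c) for c in combinations(sorted(S), r)
                        if not (reachable(adj, set(c), i) & (S - set(c)))),
                       None)
            if cut is not None:
                if len(cut) == 2:       # S_i = {j, k} adds edge (j, k)
                    extra.add(tuple(sorted(cut)))
                break
        else:
            return None  # no valid S_i exists: S is not marginalizable
    inner = {tuple(sorted((u, v))) for u in S for v in adj[u] if v in S}
    return inner | extra

# Hypothetical example: path a - b - c - d with S = {a, c, d}.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
marg_edges = marginalization(adj, {'a', 'c', 'd'})
```

Here removing $S_i=\{a,c\}$ disconnects $b$ from $S\setminus S_i=\{d\}$, so $\mathtt{Marg}(S,G)$ gains the edge $(a,c)$ in addition to the original edge $(c,d)$.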
With the definition of marginalization, the following key proposition reveals that recovering unobserved marginals of a latent GM can be actually reduced to that of much smaller latent GM.
\begin{proposition}\label{lem:marginalizable}
Consider a GM on $G=(V,E)$ with a parameter $\beta,\gamma$.
If $S\subset V$ is marginalizable in $G$, then there exists a
(unique) $\beta^\prime,\gamma^\prime$ such that the
GM on $\mathtt{Marg}(S,G)$ with parameter $\beta^\prime,\gamma^\prime$ induces
the same distribution on $x_S$, i.e.,
\begin{equation}\label{eq:equalprobGandMarg}
\mathbb{P}_{\beta,\gamma}(x_S)=\mathbb{P}_{\beta^\prime,\gamma^\prime}(x_S).
\end{equation}
\end{proposition}
The proof of the above proposition is presented in Appendix \ref{sec:pflem:marginalizable}.
Proposition \ref{lem:marginalizable} indeed provides a way of representing the marginal probability on $S$
of a GM via the smaller GM on $\mathtt{Marg}(S,G)$. Suppose there exists an algorithm (e.g., via bottlenecks, though we do not restrict ourselves to this method) that can recover a joint distribution $\mathbb{P}_{\beta^\dagger,\gamma^\dagger}(x_S)$, or equivalently its
sufficient statistics, of the latent GM on $\mathtt{Marg}(S,G)$ using only \emph{observed} marginals in $S$.
Then, it must hold that
\begin{equation}\label{eq:equalprobonS}
\mathbb{P}_{\beta^\dagger,\gamma^\dagger}(x_S)=
\mathbb{P}_{\beta^\prime,\gamma^\prime}(x_S),\footnote{Equivalently, $\beta^\dagger=\beta^\prime, \gamma^\dagger=\gamma^\prime$.}
\end{equation}
where
$\beta^\prime,\gamma^\prime$ is the unique parameter satisfying \eqref{eq:equalprobGandMarg}.
Using Proposition \ref{lem:marginalizable} and marginalization, one can recover unobserved marginals of a large GM by considering the smaller GMs corresponding to marginalizations of the large one.
The role of marginalization will be further discussed and clarified in Section \ref{sec:main}.
In addition to marginalizing, we introduce the second key ingredient,
called conditioning, with which the class of recoverable latent GMs can be further expanded.
\begin{proposition}\label{prop:conditioning}
For a graph $G=(V,E)$, for $C\subset V$ and $S\subset V\setminus C$, $\mathtt{Marg}(S,G\setminus C)$ is a subgraph of $\mathtt{Marg}(S,G)$.
\end{proposition}
The proof of the above proposition is straightforward
since $S_i$ (defined in Definition \ref{def:marginalizable})
for $S$ in $G$ contains that for $S$ in $G\setminus C$, i.e., the edge set of $\mathtt{Marg}(S,G)$ contains that of $\mathtt{Marg}(S,G\setminus C)$.
Figure \ref{fig:labeling} illustrates an example of how conditioning broadens the class of recoverable latent GMs, as suggested in Proposition \ref{prop:conditioning}. Once the node $\ell$ is conditioned out, the marginalization $\mathtt{Marg}(S,G\setminus\{\ell\})$ (Figure \ref{fig:labeling3}) has a form that can be handled by $\mathtt{TensorDecomp}$.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{labeling1}
\caption{$G$ and $S$}
\label{fig:labeling1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{labeling2}
\caption{$\mathtt{Marg}(S,G)$}
\label{fig:labeling2}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{labeling3}
\caption{$G\setminus\{\ell\}$}
\label{fig:labeling3}
\end{subfigure}
\caption{(a) a graph $G$ and a marginalizable set $S$ (b) the marginalization $\mathtt{Marg}(S,G)$ (c) the marginalization $\mathtt{Marg}(S,G\setminus\{\ell\})=G\setminus\{\ell\}$}
\label{fig:labeling}
\end{figure}
\subsection{Labeling Issues}\label{sec:label}
In spite of its usefulness, there is a caveat in performing conditioning: consistent labeling of latent nodes.
For example, consider the latent GM as in Figure \ref{fig:labeling}. Conditioned on $x_\ell$, $h$ is a bottleneck with views $i$, $j$, $k$ (Figure \ref{fig:labeling3}).
If $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell\}}\right)$ is given,
one can recover the conditional distribution $\mathbb{P}_{\beta,\gamma}\left(x_{\{h,i,j,k\}}|x_\ell=s\right)$ up to labeling of $x_h$, from
Theorem \ref{thm:bottleneck} and conditioning.
Here, conditioning worsens the relabeling problem in the sense that we might choose different labels for $x_h$ for each conditioned value $x_\ell=0$ and $x_\ell=1$. As a result, the recovered joint distribution, computed as $\sum_{x_\ell\in\{0,1\}}\mathbb{P}_{\beta,\gamma}\left(x_{\{h,i,j,k\}}|x_\ell\right)\mathbb{P}_{\beta,\gamma}(x_\ell)$ with \emph{mixed} labelings of $x_h$, would differ from the true joint distribution.
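The mixed-labeling problem can be seen numerically. The following minimal sketch (with an arbitrary toy joint over two binary variables, not taken from the paper) recombines the conditionals of $x_h$ given $x_\ell$ once with consistent labels and once with the label of $x_h$ flipped only for $x_\ell=1$:

```python
import numpy as np

# A toy joint P(x_h, x_l) over two binary variables (rows: x_h, cols: x_l).
P = np.array([[0.10, 0.30],
              [0.40, 0.20]])
P_l = P.sum(axis=0)                # marginal P(x_l)
P_h_given_l = P / P_l              # conditionals P(x_h | x_l), one column per x_l

# Consistent labeling: recombining the conditionals recovers the true joint.
recombined = P_h_given_l * P_l
assert np.allclose(recombined, P)

# Mixed labeling: flip the label of x_h only for the slice x_l = 1.
mixed = P_h_given_l.copy()
mixed[:, 1] = mixed[::-1, 1]
wrong = mixed * P_l
print(np.allclose(wrong, P))       # the recovered joint is not the true one
```

The recombination with mixed labels is a valid distribution, but not the true joint, which is why a consistent labeling rule is needed.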
To handle this issue,
we define the following concept for consistent labeling of latent variables.
\begin{definition}
[Label-Consistency] Given GM on $G=(V,E)$ with a parameter $\beta,\gamma$, we say $i\in V$ is label-consistent for $C\subset V\setminus\{i\}$ if
there exists $j\in V\setminus(C\cup\{i\})$, called `reference',
such that
$$\log \frac{\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=1,x_C=s)}{\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=0,x_C=s)},$$
called `preference', is consistently positive or negative
for all $s
\in\{0,1\}^C$.\footnote{Note that the preference cannot be zero due to Assumption \ref{assum1}.}
\end{definition}
In Figure \ref{fig:labeling} for example, $h$ is label-consistent for $\{\ell\}$ with reference $i$ since the corresponding preference is a function of $\beta_{hi}$ only, whose sign is fixed as either $\beta_{hi}>0$ or $\beta_{hi}<0$ (note that the reference can be chosen arbitrarily due to the symmetry of the structure).
Using the label-consistency of $h$, one can choose a consistent label of $x_h$ by choosing the label consistent with the preference of the reference node $i$.
Even if $i\in V$ is label-consistent under a GM with the true parameter, we need to specify the reference and the corresponding preference
to obtain a correct labeling of $x_i$.
We note however that attractive GMs (i.e., $\beta_{ij}>0$ for all $(i,j)\in E$) always satisfy the label-consistency with any reference node since for any $i,j\in V$ and $C\subset V\setminus\{i,j\}$ where $i,j$ are connected in $G\setminus C$,
$$\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=1,x_C)>\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=0,x_C).$$
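As a sanity check of this inequality, the following brute-force sketch builds a small attractive chain $0$--$1$--$2$ (couplings and fields are illustrative choices, not values from the paper) and verifies that node $1$ is label-consistent for $C=\{2\}$ with reference $0$:

```python
import itertools
import numpy as np

# A 3-node attractive chain 0-1-2 with positive couplings (illustrative values).
edges = {(0, 1): 0.8, (1, 2): 0.5}   # beta_ij > 0 (attractive)
gamma = {0: -0.3, 1: 0.2, 2: 0.1}

def weight(x):
    s = sum(b * x[i] * x[j] for (i, j), b in edges.items())
    s += sum(g * x[i] for i, g in gamma.items())
    return np.exp(s)

states = list(itertools.product([0, 1], repeat=3))
Z = sum(weight(x) for x in states)
P = {x: weight(x) / Z for x in states}

def cond(j, i, xi, C, s):
    """P(x_j = 1 | x_i = xi, x_C = s) by brute-force summation."""
    num = sum(p for x, p in P.items()
              if x[j] == 1 and x[i] == xi and all(x[c] == sc for c, sc in zip(C, s)))
    den = sum(p for x, p in P.items()
              if x[i] == xi and all(x[c] == sc for c, sc in zip(C, s)))
    return num / den

# For i = 1, reference j = 0, conditioning set C = {2}: the preference is
# positive for every conditioned value, so node 1 is label-consistent.
for s in [(0,), (1,)]:
    assert cond(0, 1, 1, (2,), s) > cond(0, 1, 0, (2,), s)
```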
Furthermore, in some settings one can force label-consistency from the structure of the latent GM alone, even without knowledge of its true parameter.
For example, consider a latent GM on $G=(V,E)$ and a parameter $\beta,\gamma$.
For a set $C\subset V$, a latent node $i\in V\setminus C$ and its neighbor $j\in V\setminus(C\cup\{i\})$ such that
$(i,j)\in E$ is the only path from $i$ to $j$ in $G\setminus C$,
by symmetry of labels of latent nodes, one can assume that $\beta_{ij}>0$, i.e.,
$$\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=1,x_C)>\mathbb{P}_{\beta,\gamma}(x_j=1|x_i=0,x_C),$$
to force the label-consistency of $i$ for $C$.
In general,
one can still choose labels of latent variables to maximize the log-likelihood of observed variables.
Like conditioning, marginalization also has a labeling issue.
Consider a latent GM on $G=(V,E)$.
Suppose that every unobserved pairwise marginal can be recovered by two marginalizations of $S_1,S_2\subset V$.
If there is a common latent node $i\in S_1\cap S_2$, then the labeling for $x_i$ might be inconsistent.
To address this issue, we make the following assumption on graph $G=(V,E)$, node $i\in V$, and parameter $\beta,\gamma$ of GM.
\begin{assumption}[Degeneracy]\label{assum2}
$\mathbb{P}_{\beta,\gamma}(x_i=1)\ne 0.5$.
\end{assumption}
Under the assumption,
one can choose a label of $x_i$ to satisfy $\mathbb{P}_{\beta,\gamma}(x_i=1)>0.5$ using the symmetry of labels of latent nodes.
\section{Sequential Marginalizing and Conditioning}\label{sec:main}
In the previous section, we introduced two concepts, marginalization and conditioning, to translate
the marginal recovery problem for a large GM into that for smaller and tractable GMs.
In this section, we present a sequential strategy, adaptively applying marginalization and conditioning, by which we substantially enlarge the class of tractable GMs with hidden/latent variables.
\subsection{Example}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{conditioning1}
\caption{}
\label{fig:conditioning1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{conditioning2}
\caption{}
\label{fig:conditioning2}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{conditioning3}
\caption{}
\label{fig:conditioning3}
\end{subfigure}
\caption{(a) A latent GM with latent nodes $h,i$ and visible nodes $j,k,\ell,m,n$ (b) A latent GM after conditioning $x_{\{j,k\}}$ (c) A latent GM after conditioning $x_{\{\ell,m,n\}}$}
\label{fig:conditioning}
\end{figure}
We begin with a simple example describing our sequential learning framework. Consider a latent GM as illustrated in Figure \ref{fig:conditioning1} and a parameter $\beta,\gamma$.
Given visible marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{j,k,\ell,m,n\}}\right)$, our goal is to recover all unobserved pairwise marginals including $x_h$ or $x_i$ in order to learn $\beta,\gamma$ via convex MLE \eqref{eq:ml}.
As neither node $h$ nor $i$ is a bottleneck, one can apply the conditioning strategy described
in the previous section, i.e., consider
the conditional distribution $\mathbb{P}_{\beta,\gamma}\left(x_{\{h,i,\ell,m,n\}}|x_{\{j,k\}}\right)$ in Figure \ref{fig:conditioning2}.
Now, node $i$ is a bottleneck with views $\ell,m,n$.
Hence, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,\ell,m,n\}}|x_{\{j,k\}}\right)$ using $\mathtt{TensorDecomp}$ where the label of $x_i$ is set to satisfy
\begin{align*}
\mathbb{P}_{\beta,\gamma}&\left(x_\ell=1|x_i=1,x_{\{j,k\}}\right) > \mathbb{P}_{\beta,\gamma}\left(x_\ell=1|x_i=0,x_{\{j,k\}}\right),
\end{align*}
i.e., node $i$ is label-consistent.
Further, $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell,m,n\}}\right)$ can be recovered
using the known visible marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{j,k\}}\right)$ and the following identity
$$\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell,m,n\}}\right)=\mathbb{P}_{\beta,\gamma}\left(x_{\{i,\ell,m,n\}}|x_{\{j,k\}}\right)\mathbb{P}_{\beta,\gamma}\left(x_{\{j,k\}}\right).$$
Since we recovered pairwise marginals between $x_i$ and $x_\ell$, $x_m$, $x_n$, the remaining goal is to recover pairwise marginals including $x_h$.
Now consider the latent GM in which $x_{\{\ell,m,n\}}$ is conditioned, as
illustrated in Figure \ref{fig:conditioning3}.
This time, the node $h$ is a bottleneck with views $i,j,k$, which can be handled by an additional application of $\mathtt{TensorDecomp}$ (the details are the same as in the previous case for node $i$).
This example shows that the sequential application of conditioning extends the class of latent GMs for which unobserved pairwise marginals are recoverable.
Here, we use the algorithm $\mathtt{TensorDecomp}$ as a black box; hence one can substitute other algorithms as long as they provide similar guarantees.
One caveat is that conditioning an arbitrary number of variables is very expensive,
as the algorithmic (and sampling) complexity of learning grows exponentially with the number of conditioned variables.
Therefore, it is reasonable to bound the number of conditioned variables.
\subsection{Algorithm Design}
Now, we are ready to state the main learning framework sequentially applying marginalization and conditioning, summarized in Algorithm \ref{alg:sequential}.
Suppose that there exists an algorithm, called $\mathtt{NonConvexSolver}$, e.g.,
$\mathtt{TensorDecomp}$, for a class of pairs $\mathcal{N}\subset\{(G,\mathcal{S}_G):G=(V,E),\mathcal{S}_G\subset 2^V\}$ such that
all $(G,\mathcal{S}_G)\in\mathcal{N}$ satisfy the following:
\begin{itemize}
\item[$\circ$]
Given GM with a parameter $\beta,\gamma$ on $G=(V,E)$
and marginals $\{\mathbb{P}_{\beta,\gamma}\left(x_S\right):S\in\mathcal{S}_G\}$, $\mathtt{NonConvexSolver}$ outputs the entire distribution $\mathbb{P}_{\beta,\gamma}(x)$,
up to labeling of variables on $V\setminus\left(\bigcup_{S\in\mathcal{S}_G}S\right)$.
\end{itemize}
For example, consider a graph $G$ illustrated in Figure \ref{fig:bottleneck}
with $\mathcal{S}_G=\big\{\{j,k,\ell\}\big\}$.
Then, $\mathtt{TensorDecomp}$ outputs the entire distribution $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell\}}\right)$.
In addition, suppose that there exists an algorithm, called $\mathtt{Merge}$, e.g., $\mathtt{ExclusiveView}$, for a class of pairs $\mathcal{M}\subset\{(G,\mathcal{T}_G):G=(V,E),\mathcal{T}_G\subset 2^V\}$ such that all
$(G,\mathcal{T}_G)\in\mathcal{M}$ satisfy the following:
\begin{itemize}
\item[$\circ$]
Given GM with a parameter $\beta,\gamma$ on $G=(V,E)$
and marginals $\{\mathbb{P}_{\beta,\gamma}\left(x_S\right):S\in\mathcal{T}_G\}$, $\mathtt{Merge}$ outputs the distribution $\mathbb{P}_{\beta,\gamma}\left(x_{T}\right)$ where $T=\bigcup_{S\in\mathcal{T}_G}S$.
\end{itemize}
Namely, $\mathtt{Merge}$ simply merges the small marginal distributions for $S\in\mathcal{T}_G$ into
the entire distribution on $\bigcup_{S\in\mathcal{T}_G}S$.
For example, consider a graph $G$ illustrated in Figure \ref{fig:exclusiveview} with $$\mathcal{T}_G=\big\{\{i,j,k,\ell\},\{i,i^\prime\},\{j,j^\prime\},\{k,k^\prime\},\{\ell,\ell^\prime\}\big\}$$
where $i^\prime,j^\prime,k^\prime,\ell^\prime\in S$ have exclusive views $i,j,k,\ell$, respectively.
Then, $\mathtt{ExclusiveView}$ outputs the distribution $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup\{i,j,k,\ell\}}\right)$.
For a GM on $G=(V,E)$ with a parameter $\beta,\gamma$, suppose we know
a family of label-consistency quadruples
\begin{align*}
\mathcal{L}=\big\{&(i,j,p,C)\,:\,\text{$i$ is label-consistent for $C$}\\
&\qquad\qquad \text{with reference $j$ and preference $p$}\big\}
\end{align*}
and marginals $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\sigma_0\}$ for some $\sigma_0\subset 2^V$.
As mentioned in the previous section, we also bound the number of conditioned variables by some $K\ge 0$.
Under the setting, our goal is to recover
more marginals beyond initially known
ones $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\sigma_0\}$.
The following conditions
on $C\subset V$ with $|C|\le K$ and $R\subset V\setminus C$
are sufficient
for the additional marginal
$\mathbb{P}_{\beta,\gamma}(x_{R\cup C})$ to be recoverable by conditioning on the variables in $C$, marginalizing onto $R$, and applying $\mathtt{NonConvexSolver}$:
\begin{itemize}
\item[$\mathcal C1.$] $(H,\mathcal{S}_H)\in\mathcal{N}$ for some $\mathcal{S}_H\subset 2^V$
\item[$\mathcal C2.$] For all $S\in\mathcal{S}_H$, there exists $S^\prime\in\sigma_0$ such that $S\cup C\subset S^\prime$
\item[$\mathcal C3.$] For all $i\in R\setminus\left(\bigcup_{S\in\mathcal{S}_H}S\right)$, there exist $j\in\bigcup_{S\in\mathcal{S}_H}S$ and $p$ such that $(i,j,p,C)\in\mathcal{L}$,
\end{itemize}
where $H=\mathtt{Marg}(R,G\setminus C)$.
In the above,
$\mathcal{C}1$ implies that if $\{\mathbb{P}_{\beta,\gamma}(x_S|x_{C}):S\in\mathcal{S}_H\}$ are given,
then
$\mathtt{NonConvexSolver}$ outputs $\mathbb{P}_{\beta,\gamma}(x_R|x_{C})$ up to labeling of $R\setminus\left(\bigcup_{S\in\mathcal{S}_H}S\right)$.
In addition,
$\mathcal{C}2$ says that the required marginals $\{\mathbb{P}_{\beta,\gamma}(x_S|x_{C}):
S\in\mathcal{S}_H\}$ and $\mathbb{P}_{\beta,\gamma}(x_C)$ are known.
Finally, $\mathcal{C}3$ ensures that all nodes whose labels we need to infer are label-consistent.
Similarly, the following condition for $C\subset V$ with $|C|\le K$ and $(G\setminus C,\mathcal{T}_{G\setminus C})\in\mathcal{M}$ is sufficient for $\mathbb{P}_{\beta,\gamma}(x_{T\cup C})$ to be recoverable by conditioning on the variables in $C$ and applying $\mathtt{Merge}$, where $T=\bigcup_{S\in\mathcal{T}_{G\setminus C}}S$:
\begin{itemize}
\item[$\mathcal C4.$] For all $S\in\mathcal{T}_{G\setminus C}$, there exists $S^\prime\in\sigma_0$ such that $S\cup C\subset S^\prime$.
\end{itemize}
In the above,
$\mathcal{C}4$ says that the required marginals for merging are given.
The above procedures imply that
given initial marginals $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\sigma_0\}$, one can recover \emph{additional} marginals
$\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\mathcal{A}_0\cup \mathcal{B}_0\}$, where
\begin{align}\label{eq:AtBt}
\mathcal{A}_0=\{&R\cup C:C\subset V,|C|\le K,R\subset V\setminus C~\text{satisfy $\mathcal{C}1$-$\mathcal{C}3$}\},\nonumber\\
\mathcal{B}_0=\{&T\cup C:C\subset V,|C|\le K,(G\setminus C,\mathcal{T}_{G\setminus C})\in\mathcal{M}\nonumber\\
&\qquad\qquad\text{satisfy $\mathcal{C}4$ where $T=\cup_{S\in\mathcal{T}_{G\setminus C}}S$}\},
\end{align}
from $\mathtt{NonConvexSolver}$ and $\mathtt{Merge}$, respectively.
One can repeat the above procedure for recovering more marginals as
$$\sigma_{t+1}=\sigma_t\cup {\mathcal A}_{t}\cup {\mathcal B}_t.$$
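The iteration $\sigma_{t+1}=\sigma_t\cup\mathcal{A}_t\cup\mathcal{B}_t$ is a fixed-point computation over families of recoverable sets. The following sketch abstracts the recovery rules behind $\mathcal{A}_t,\mathcal{B}_t$ into a hypothetical \texttt{derive} function (in the paper these would be computed from conditions $\mathcal{C}1$-$\mathcal{C}4$) and illustrates the closure with a toy merging rule:

```python
# Fixed-point iteration sigma_{t+1} = sigma_t ∪ A_t ∪ B_t, with the recovery
# rules abstracted as a user-supplied function `derive` (hypothetical interface).
def close(sigma0, derive):
    sigma = set(sigma0)
    while True:
        new = derive(sigma) - sigma
        if not new:
            return sigma
        sigma |= new

# Toy rule: any two known sets sharing an element can be merged (a stand-in
# for Merge); sets are stored as frozensets.
def derive(sigma):
    return {a | b for a in sigma for b in sigma if a & b}

sigma0 = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
print(close(sigma0, derive))
```

The loop terminates because $\sigma_t$ is monotone increasing within the finite lattice of subsets, which is also the reason the iteration count in Theorem \ref{thm:main} is bounded.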
Recall that we are primarily interested in recovering
all pairwise marginals, i.e., $$\{\mathbb{P}_{\beta,\gamma}(x_i,x_j): (i,j)\in E\}.$$
The following theorem implies that one can check the success of Algorithm \ref{alg:sequential} in
$O\left(|V|^{K+L}\right)$ time, where $K,L$ are typically chosen as small constants.
\begin{theorem}\label{thm:main}
Suppose we have a label-consistency family $\mathcal L$ of GM on $G=(V,E)$ and marginals
$\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\sigma_0\}$ for some $\sigma_0\subset 2^V$.
If Algorithm \ref{alg:sequential} eventually recovers all pairwise marginals,
then it does so
in $O\left(|V|^{K+L}\right)$ iterations,
where $K$ and $L$ denote the maximum numbers of conditioning variables
and nodes of graphs in $\mathcal{N},\mathcal{M}$, respectively.
\end{theorem}
The proof of the above theorem is presented in Appendix \ref{sec:pfthm:main}.
We note that one can design one's own sequence of recovering marginals, rather than recovering all marginals in $\mathcal{A}_t,\mathcal{B}_t$, for computational efficiency.
In Section \ref{sec:example}, we provide such examples,
whose strategies have linear-time complexity at each iteration.
We also remark that even when Algorithm \ref{alg:sequential} recovers only some, not all, unobserved pairwise marginals
for a given latent GM,
it is still useful since one can run the EM algorithm using the additional information provided by Algorithm \ref{alg:sequential}.
We leave this suggestion for further exploration in the future.
\begin{algorithm}[t]
\caption{Sequential Local Learning} \label{alg:sequential}
\begin{algorithmic}[1]
\STATE {\bf Input } $G=(V,E)$, Initially observable marginals $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\sigma_0\}$, $\mathtt{NonConvexSolver}$, $\mathtt{Merge}$
\WHILE{not converged}
\STATE $\sigma_{t+1}=\sigma_t\cup {\mathcal A}_{t}\cup {\mathcal B}_t$ from \eqref{eq:AtBt}
\ENDWHILE
\STATE {\bf Return } All recovered pairwise marginals
\end{algorithmic}
\end{algorithm}
\subsection{Recoverable Local Structures}\label{sec:exalg}
Running the sequential learning framework of the previous section requires
`black-box' knowledge of
a label-consistency family $\mathcal L$ and a class of locally recoverable structures of latent GMs, i.e.,
$\mathcal{N}$ and $\mathcal{M}$.
A complete study of these is beyond our scope, but we provide the following guidelines for their choice.
As mentioned in Section \ref{sec:label},
$\mathcal L$ can be found easily for some class of GMs including attractive ones.
One can also infer it heuristically for general GMs in practice.
As mentioned in the previous section,
one can choose $(G,\mathcal{S}_G)\in\mathcal{N}$ corresponding to
$\mathtt{TensorDecomp}$.
Beyond $\mathtt{TensorDecomp}$, in practice, one might add
further options for small-sized latent GMs, since even a generic non-convex solver
may compute a near-optimum of the MLE
due to their small dimensionality.
For the choice of $(G,\mathcal{T}_G)\in\mathcal M$,
we mentioned those corresponding to
$\mathtt{ExclusiveView}$ in the previous section.
In addition, we provide the following two more examples, called
$\mathtt{DisjointView}$ and $\mathtt{LinearView}$, as described in Algorithm \ref{alg:disjointview}
and \ref{alg:linearview}, respectively.
In Algorithm \ref{alg:linearview},
$[\mathbb{P}_{\beta,\gamma}(x_j,x_i)]^{-1}$ is defined as
\begin{align*}
\begin{bmatrix}
\mathbb{P}_{\beta,\gamma}(x_j=0,x_i=0)&\mathbb{P}_{\beta,\gamma}(x_j=0,x_i=1)\\
\mathbb{P}_{\beta,\gamma}(x_j=1,x_i=0)&\mathbb{P}_{\beta,\gamma}(x_j=1,x_i=1)
\end{bmatrix}^{-1}.
\end{align*}
\begin{algorithm}[H]
\caption{$\mathtt{DisjointView}$} \label{alg:disjointview}
\begin{algorithmic}[1]
\STATE {\bf Input } $\mathcal{T}_G=\{S\cup C,T\cup C\}$, $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\mathcal{T}_G\}$
\STATE \qquad\quad$S,T$ are disconnected in $G\setminus C$
\STATE $\mathbb{P}_{\beta,\gamma}(x_{S\cup T\cup C})$
\STATE \qquad $\leftarrow\mathbb{P}_{\beta,\gamma}(x_S|x_C)\mathbb{P}_{\beta,\gamma}(x_T|x_C)\mathbb{P}_{\beta,\gamma}(x_C)$
\STATE {\bf Return } $\mathbb{P}_{\beta,\gamma}(x_{S\cup T\cup C})$
\end{algorithmic}
\end{algorithm}
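As a numeric sanity check of the $\mathtt{DisjointView}$ identity, the following sketch builds a small chain GM $s$--$c$--$t$ (illustrative parameters, not from the paper), in which $S=\{s\}$ and $T=\{t\}$ are disconnected once $C=\{c\}$ is removed, and verifies the merge:

```python
import itertools
import numpy as np

# Chain s - c - t: s and t are conditionally independent given c.
b_sc, b_ct = 0.7, -0.4
g = [0.1, -0.2, 0.3]            # fields for (s, c, t), illustrative values

J = np.zeros((2, 2, 2))         # true joint P(x_s, x_c, x_t)
for s, c, t in itertools.product([0, 1], repeat=3):
    J[s, c, t] = np.exp(b_sc*s*c + b_ct*c*t + g[0]*s + g[1]*c + g[2]*t)
J /= J.sum()

P_sc = J.sum(axis=2)            # known marginal P(x_s, x_c)
P_ct = J.sum(axis=0)            # known marginal P(x_c, x_t)
P_c = J.sum(axis=(0, 2))        # P(x_c)

# Merge: P(x_s, x_c, x_t) = P(x_s | x_c) P(x_t | x_c) P(x_c)
merged = np.einsum('sc,ct,c->sct', P_sc / P_c, P_ct / P_c[:, None], P_c)
assert np.allclose(merged, J)
```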
\begin{algorithm}[H]
\caption{$\mathtt{LinearView}$} \label{alg:linearview}
\begin{algorithmic}[1]
\STATE {\bf Input } $\mathcal{T}_G=\{\{i,j\},S\cup\{j\}\}$, $\{\mathbb{P}_{\beta,\gamma}(x_S):S\in\mathcal{T}_G\}$
\STATE\qquad\quad $S,j$ are disconnected in $G\setminus\{i\}$
\FOR{$s\in\{0,1\}^S$}
\STATE $\mathbb{P}_{\beta,\gamma}(x_{S}=s|x_i)$
\STATE \qquad
$\leftarrow[\mathbb{P}_{\beta,\gamma}(x_j,x_i)]^{-1}$
$\mathbb{P}_{\beta,\gamma}(x_j,x_{S}=s)$
\ENDFOR
\STATE {\bf Return } $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup\{i,j\}}\right)$
\end{algorithmic}
\end{algorithm}
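Similarly, $\mathtt{LinearView}$ can be checked numerically on a chain $j$--$i$--$s$, where $S=\{s\}$ and $j$ are disconnected in $G\setminus\{i\}$, so that $\mathbb{P}(x_j,x_S)=\sum_{x_i}\mathbb{P}(x_j,x_i)\mathbb{P}(x_S|x_i)$ and the conditional is obtained by inverting the $2\times 2$ matrix $[\mathbb{P}_{\beta,\gamma}(x_j,x_i)]$ (parameters below are illustrative):

```python
import itertools
import numpy as np

# Chain j - i - s: j and s are conditionally independent given i.
b_ji, b_is = 0.6, 0.9
g = [0.2, -0.1, 0.4]            # fields for (j, i, s), illustrative values

J = np.zeros((2, 2, 2))         # true joint P(x_j, x_i, x_s)
for j, i, s in itertools.product([0, 1], repeat=3):
    J[j, i, s] = np.exp(b_ji*j*i + b_is*i*s + g[0]*j + g[1]*i + g[2]*s)
J /= J.sum()

P_ji = J.sum(axis=2)            # observed: P(x_j, x_i)
P_js = J.sum(axis=1)            # observed: P(x_j, x_s)

# Solve the 2x2 linear system for every value of x_s at once:
# entry [i, s] = P(x_s = s | x_i = i).
P_s_given_i = np.linalg.inv(P_ji) @ P_js

# Reassemble the full joint: P(x_j, x_i, x_s) = P(x_j, x_i) P(x_s | x_i).
rec = P_ji[:, :, None] * P_s_given_i[None, :, :]
assert np.allclose(rec, J)
```

The inversion is well defined whenever $x_j$ and $x_i$ are not independent, which holds generically under Assumption \ref{assum1}-type non-degeneracy.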
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.36\textwidth}
\centering
\includegraphics[width=\textwidth]{disjointview}
\caption{$\mathtt{DisjointView}$}
\label{fig:disjointview}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{linearview}
\caption{$\mathtt{LinearView}$}
\label{fig:linearview}
\end{subfigure}
\caption{Illustrations for (a) $\mathtt{DisjointView}$ (b) $\mathtt{LinearView}$}
\label{fig:merge}
\end{figure}
\noindent Figure
\ref{fig:merge}
illustrates $\mathtt{DisjointView}$ and $\mathtt{LinearView}$.
\iffalse
\textcolor{red}{
\begin{theorem}
Given GM on $G=(V,E)$ with a parameter $\beta,\gamma$, suppose $i,j\in V$ and $S\subset V$ satisfy that every path between $j$ and $S$ contains $i$.
If $\mathbb{P}_{\beta,\gamma}(x_i,x_j)$ and $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup\{j\}}\right)$ are given,
then there exists an algorithm $\mathtt{AlgAlg}$ which outputs $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup\{i,j\}}\right)$.
\end{theorem}
The above theorem enables us to treat a latent node as a visible node.
For example, consider a
}
In addition to conditioning and marginalization,
we also use the idea of merging two marginal distributions.
Suppose that we use sequential learning for recovering pairwise marginals of a latent GM on $G$ with a parameter $\beta,\gamma$.
At some moment of our sequential learning framework,
if we know $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup C}\right)$ and $\mathbb{P}_{\beta,\gamma}(x_{T\cup C})$ where $S$ and $T$ are disconnected in $G\setminus C$,
then, one can recover $\mathbb{P}_{\beta,\gamma}\left(x_{S\cup T\cup C}\right)$ using the following identity
\begin{align*}
\mathbb{P}_{\beta,\gamma}&\left(x_{S\cup T\cup C}\right)=\mathbb{P}_{\beta,\gamma}\left(x_S|x_C\right)\mathbb{P}_{\beta,\gamma}\left(x_T|x_C\right)\mathbb{P}_{\beta,\gamma}\left(x_C\right).
\end{align*}
This idea for merging two different marginals to a single marginal would help
for extending our sequential framework to a larger class of latent GMs.
However,
irrespective of the computational issue,
for guaranteeing $\mathcal P1$,
the latent GMs should have
a one-to-one mapping between a parameter $\beta,\gamma$ and
observed marginals $\mathbb{P}_{\beta,\gamma}(x_O)$.
Otherwise, it is impossible to recover latent marginals.
The following theorem state necessary conditions for
latent GMs satisfying $\mathcal P1$, which provides a guideline to choose small local GMs in $\mathcal{N}$ in practice.
of Figure \ref{xx} corresponding to $\mathtt{TensorDecomp}$
In Theorem \ref{thm:main}, we consider an arbitrary algorithm which recovers a joint distribution using observed marginals for GM on a class of graphs.
However, for some structure of latent GMs, it is impossible to find such an algorithm due to several distinct parameters (ignoring labeling of latent variables) represents the same visible marginals, i.e. there is no one-to-one mapping between $\beta,\gamma$ and $\mathbb{P}_{\beta,\gamma}(x_O)$.
Now, we introduce some necessary conditions for latent GMs of which every possible visible marginal $\mathbb{P}_{\beta,\gamma}(x_O)$ correspond to a unique parameter $\beta,\gamma$ up to labeling of latent variables.
\begin{lemma}\label{lem:localstructure}
Consider a latent GM on $G=(V,E)$ with a set of latent nodes $H\subset V$ and a set of visible nodes $O=V\setminus H$.
The following statements are necessary conditions for one-to-one correspondence between a parameter $\beta,\gamma$ and the visible marginal $\mathbb{P}_{\beta,\gamma}(x_O)$ up to labeling of latent variables.
\begin{itemize}
\item[1.] For all $i\in H$, the degree of $i$ is larger than 2.
\item[2.] For any two disjoint sets $S,T\subset H,|S|=|T|$, there does not exist a bijection
$$f:V\setminus T\rightarrow V\setminus S,~f(S)=T,~f(i)=i~\text{for all}~i\notin S$$
such that $f(N(i))=N(f(i))$ for all $i\in S$ where
$$N(i)=\{j\,:\,(i,j)\in E\},$$
i.e., there is no trivial structural symmetry in $H$
\item[3.] $|O|+|H|+|E|\le 2^{|O|}-1$
\end{itemize}
\end{lemma}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/localstructure1}
\caption{}
\label{fig:localstructure1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.137\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/localstructure2}
\caption{}
\label{fig:localstructure2}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.1\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/localstructure3}
\caption{}
\label{fig:localstructure3}
\end{subfigure}
\caption{Several GMs with gray latent nodes and white visible nodes: (a) does not satisfy conditions 1 and 3 in Lemma \ref{lem:localstructure}, (b) does not satisfy condition 2 in Lemma \ref{lem:localstructure},
(c) satisfies all conditions in Lemma \ref{lem:localstructure}.}
\label{fig:localstructure}
\end{figure}
Figure \ref{fig:localstructure} illustrates examples of latent GMs which satisfy and violate the above conditions.
\fi
\section{Preliminaries}\label{sec:pre}
\subsection{Graphical Model and Parameter Learning}
Given an undirected graph $G=(V,E)$,
we consider the following pairwise binary Graphical Model (GM), where
the joint probability distribution on $x=[x_{i} \in\{0,1\}: i \in V]$ is defined as:
\begin{equation}\label{eq:ising}
\mathbb{P}(x)=
\mathbb{P}_{\beta,\gamma}(x)=\frac{1}{Z}\exp\left(\sum_{(i,j)\in E}\beta_{ij}x_ix_j+\sum_{i\in V}\gamma_ix_i\right),
\end{equation}
for some parameter
$\beta=[\beta_{ij}:(i,j)\in E]\in\mathbb{R}^{E}$ and
$\gamma=[\gamma_i:i\in V]\in\mathbb{R}^{V}$.
The normalization constant $Z$
is called the {\it partition function}.
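As a concrete illustration, the distribution \eqref{eq:ising} and its partition function can be evaluated by brute-force enumeration on a small graph (the 3-node chain and the parameter values below are our own toy choices, not from the text):

```python
# Brute-force evaluation of the pairwise binary GM in Eq. (1):
# P(x) proportional to exp( sum_{(i,j) in E} beta_ij x_i x_j + sum_i gamma_i x_i ).
from itertools import product
import math

def joint_distribution(V, E, beta, gamma):
    """Return {x: P(x)} for all x in {0,1}^V, normalized by Z."""
    weights = {}
    for x in product([0, 1], repeat=len(V)):
        energy = sum(beta[e] * x[e[0]] * x[e[1]] for e in E)
        energy += sum(gamma[i] * x[i] for i in V)
        weights[x] = math.exp(energy)
    Z = sum(weights.values())  # the partition function
    return {x: w / Z for x, w in weights.items()}

V = [0, 1, 2]                       # chain 0 - 1 - 2
E = [(0, 1), (1, 2)]
beta = {(0, 1): 0.8, (1, 2): -0.5}
gamma = {0: 0.1, 1: 0.0, 2: -0.2}
P = joint_distribution(V, E, beta, gamma)
```

Enumeration costs $2^{|V|}$ evaluations, so this is feasible only for very small graphs; it is used here purely to make the definitions concrete.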
Given samples $x^{(1)}, x^{(2)}, \cdots,x^{(N)}\in\{0,1\}^V$
drawn
from the distribution \eqref{eq:ising} with
some true (fixed but unknown) parameter $\beta^*,\gamma^*$,
the problem of interest is to recover them.
The most popular method for this parameter learning task is
the following maximum likelihood estimation (MLE):
\begin{equation}\label{eq:ml}
\mbox{maximize}_{\beta,\gamma}\frac1{N}\sum_{n=1}^N \log \mathbb{P}_{\beta,\gamma}\left(x^{(n)}\right),
\end{equation}
where it is well known \cite{wainwright2008graphical} that the log-likelihood
$\log \mathbb{P}_{\beta,\gamma}\left(\cdot\right)$ is concave
with respect to $\beta,\gamma$, and the gradient of the log-likelihood is
\begin{align}
&\frac{\partial}{\partial\gamma_i}\frac{1}{N}\sum_{n=1}^N\log \mathbb{P}_{\beta,\gamma}\left(x^{(n)}\right)=
\frac{1}{N}\sum_{n=1}^Nx_i^{(n)}-\mathbb{E}_{\beta,\gamma}[x_i]
\label{eq:gradient1}\\
&\frac{\partial}{\partial\beta_{ij}}\frac{1}{N}\sum_{n=1}^N\log \mathbb{P}_{\beta,\gamma}\left(x^{(n)}\right)=
\frac{1}{N}\sum_{n=1}^N x_i^{(n)}x_j^{(n)}-\mathbb{E}_{\beta,\gamma}[x_ix_j].
\label{eq:gradient2}
\end{align}
Here, the last term in each expression, the expectation of the corresponding sufficient statistic, comes from the partial derivative of the log-partition function.
Furthermore, it is well known that there exists a one-to-one correspondence between the parameter $\beta,\gamma$
and the expected sufficient statistics $\mathbb{E}_{\beta,\gamma}[x_ix_j], \mathbb{E}_{\beta,\gamma}[x_i]$
(see \cite{wainwright2008graphical} for details).
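A minimal sketch of this observation (the toy graph, step size, and iteration count are our own illustrative choices): treating exact moments as the infinite-sample limit of the empirical averages, gradient ascent with the gradients \eqref{eq:gradient1} and \eqref{eq:gradient2} recovers the true parameter.

```python
# Exact-moment MLE for the pairwise binary GM: ascend
#   d/d gamma_i : E*[x_i]     - E_{beta,gamma}[x_i]
#   d/d beta_ij : E*[x_i x_j] - E_{beta,gamma}[x_i x_j]
# until the model moments match the target moments.
from itertools import product
import math

def joint(V, E, beta, gamma):
    """Brute-force joint distribution of the pairwise binary GM."""
    w = {x: math.exp(sum(beta[e] * x[e[0]] * x[e[1]] for e in E)
                     + sum(gamma[i] * x[i] for i in V))
         for x in product([0, 1], repeat=len(V))}
    Z = sum(w.values())
    return {x: v / Z for x, v in w.items()}

def moments(P, V, E):
    """Node and edge moments E[x_i], E[x_i x_j] under P."""
    m1 = {i: sum(p * x[i] for x, p in P.items()) for i in V}
    m2 = {e: sum(p * x[e[0]] * x[e[1]] for x, p in P.items()) for e in E}
    return m1, m2

def mle_exact(V, E, beta_true, gamma_true, lr=0.5, steps=20000):
    """Gradient ascent on the exact (infinite-sample) log-likelihood."""
    t1, t2 = moments(joint(V, E, beta_true, gamma_true), V, E)
    beta = {e: 0.0 for e in E}
    gamma = {i: 0.0 for i in V}
    for _ in range(steps):
        m1, m2 = moments(joint(V, E, beta, gamma), V, E)
        for i in V:
            gamma[i] += lr * (t1[i] - m1[i])
        for e in E:
            beta[e] += lr * (t2[e] - m2[e])
    return beta, gamma

V, E = [0, 1, 2], [(0, 1), (1, 2)]
beta_true = {(0, 1): 0.8, (1, 2): -0.5}
gamma_true = {0: 0.1, 1: 0.0, 2: -0.2}
beta_hat, gamma_hat = mle_exact(V, E, beta_true, gamma_true)
```

Because the log-likelihood is concave and the sufficient statistics are affinely independent here, the iteration converges to the unique maximizer.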
One can further observe that if the number of samples is sufficiently large, i.e., $N\to\infty$,
then \eqref{eq:ml} is equivalent to
\begin{align*}
& \mbox{maximize}_{\beta,\gamma}\sum_{x\in\{0,1\}^V}\mathbb{P}_{\beta^*,\gamma^*}(x)\log \mathbb{P}_{\beta,\gamma}(x),
\end{align*}
where
the true parameter $\beta^*,\gamma^*$ achieves the (unique) optimal solution.
This directly implies that, once empirical nodewise and pairwise marginals in
\eqref{eq:gradient1} and \eqref{eq:gradient2} approach the true marginals,
the gradient method can recover $\beta^*,\gamma^*$ modulo the difficulty of exactly computing the expectations of sufficient statistics.
Now let us consider a more challenging task: parameter learning under latent variables.
Given a subset $H$ of $V$ and $O=V\setminus H$,
we assume that for every sample $x=(x_O,x_H)$,
$x_{O}=\left[x_i\in\{0,1\}: i\in O\right]$
are observed/visible and other variables $x_{H}=\left[x_i\in\{0,1\}: i\in H\right]$ are hidden/latent.
In this case, the MLE involves only the observed variables:
\begin{equation}\label{eq:latentml}
\mbox{maximize}_{\beta,\gamma}\frac1N\sum_{n=1}^N \log \mathbb{P}_{\beta,\gamma}\left(x_O^{(n)}\right),
\end{equation}
where $\mathbb{P}_{\beta,\gamma}(x_O)=\sum_{x_H\in \{0,1\}^H} \mathbb{P}_{\beta,\gamma}(x_O, x_H)$.
As before,
the true parameter $\beta^*,\gamma^*$ achieves an optimal solution of \eqref{eq:latentml}
if the number of samples
is large enough.
However, the log-likelihood under latent variables is no longer concave, which
makes the parameter learning task harder. One can apply an expectation-maximization (EM) scheme, but it typically gets stuck at local optima.
\iffalse
It is known that approximating the partition function is \#P-hard in general \cite{C3}.
Given a graph $G=(V,E)$, an Ising model is a joint distribution on $x\in\Omega^{|V|},\Omega=\{-1,1\}$ defined as follows:
\begin{equation}\label{eq:ising}
\mathbb{P}(x\,|\,\beta,\gamma)=\frac{1}{Z}\exp\left(\sum_{(i,j)\in E}\beta_{ij}x_ix_j+\sum_{i\in V}\gamma_ix_i\right)
\end{equation}
where $\beta=[\beta_{ij}:(i,j)\in E]\in\mathbb{R}^{|E|},\gamma=[\gamma_i:i\in V]\in\mathbb{R}^{|V|}$ and $Z$ is a partition function.
Given a graph $G=(V\cup H,E),~V\cap H=\emptyset$ on visible nodes $V$ and latent nodes $H$, a latent Ising model is a joint distribution on $V$ defined as follows:
\begin{equation}\label{eq:latentising}
\begin{split}
&\mathbb{P}(x_V\,|\,\beta,\gamma)=\frac{1}{Z}\sum_{x_H\in\Omega^H}\mathbb{P}(x\,|\,\beta,\gamma)\\
&=\frac{1}{Z}\sum_{x_H\in\Omega^H}\exp\left(\sum_{(i,j)\in E}\beta_{ij}x_ix_j+\sum_{i\in V\cup H}\gamma_ix_i\right)
\end{split}
\end{equation}
where $x_S=[x_i:i\in S]$ for $S\subset V\cup H$.
For a notational convenience, we usually denote $h$ as a hidden variable while $x,y,z$ represents both hidden and visible variables.
Also, we use a shorthand notations $\mathbb{P}(\cdot):=\mathbb{P}(\cdot|\,\beta,\gamma)$, $x_{i:j}=[x_i,\dots,x_j]$ for $i<j$ and $x_{i_{1:n}}=[x_{i_1},\dots,x_{i_n}]$.
Given $N$ i.i.d. samples $(x_V^{(1)},\dots,x_V^{(N)})$ of a latent Ising model, where each $x_V^{(k)}$ is sampled from $\mathbb{P}(x_V\,|\,\beta^*,\gamma^*)$ for some true parameters $\beta^*,\gamma^*$,
the most popular way to learn parameters $\widehat\beta,\widehat\gamma$ from samples is the maximum likelihood which address to find $\widehat\beta,\widehat\gamma$ maximizing the likelihood function, i.e.
\begin{equation}\label{eq:mle}
[\widehat\beta,\widehat\gamma] = \arg\max_{\beta^\prime,\gamma^\prime}\sum_{i=1}^\ell\mathbb{P}(x_V^{(k)}\,|\,\beta^\prime,\gamma^\prime).
\end{equation}
If we can observe every pairwise marginals over all edges, $\mathbb{P}(x_i,x_j\,|\,\beta^*,\gamma^*)$, $(i,j)\in E$, it is known that the gradient descent algorithm recovers $\beta^*,\gamma^*$ \cite{wainwright2008graphical}.
However, for a latent Ising model, finding maximum likelihood is hard as the optimization \eqref{eq:mle} is non-convex and pairwise marginals cannot be observed due to latent nodes.
\fi
\vspace{-0.05in}
\subsection{Tensor Decomposition}\label{sec:tensor}
\iffalse
The most popular algorithm for solving \eqref{eq:ml} is the gradient method which is guaranteed to converge to the optimal $\beta,\gamma$ due to the concavity of the log-likelihood.
The gradient of the log-likelihood has the following formula:
The above gradient formula implies that the optimal $\beta,\gamma$ are only functions of empirical marginals on nodes and edges rather than the entire samples.
Now, consider \eqref{eq:latentml} that is non-concave and
suppose the (visible) samples follow the distribution of the latent GM with some true (known) parameters $\beta^*,\gamma^*$, i.e., $x_O^{(n)}\sim\mathbb{P}_{\beta^*,\gamma^*}(x_O)$.
Then, one can observe that if the number of samples is sufficiently large, then
\eqref{eq:latentml} is equivalent to
\begin{align*}
&\mbox{maximize}_{\beta,\gamma}\sum_{x_O\in\{0,1\}^O}\mathbb{P}_{\beta^*,\gamma^*}(x_O)\log \mathbb{P}_{\beta,\gamma}(x_O),
\end{align*}
where
the true parameters $\beta^*,\gamma^*$ achieve the solution for the above
optimizations.
Namely, the true parameters achieve the maximum likelihood estimation if the empirical visible distribution is equal to the true visible distribution.
As we mentioned earlier, if $H=\emptyset$, $\beta^*,\gamma^*$ can be recovered
by a gradient method using the pairwise marginals
$\mathbb{P}_{\beta^*,\gamma^*}(x_i,x_j)$ obtained from large enough samples.
\fi
The fundamental issue in parameter learning for
latent GMs is that the pairwise marginals involving latent variables are hard to infer
directly from samples.
If one could infer them, it would also be possible to recover $\beta^*,\gamma^*$, as discussed in the previous section.
Somewhat surprisingly, however,
under certain conditions on the latent GM, pairwise marginals involving latent variables can be recovered from low-order visible marginals.
Before introducing such conditions, we first make the following assumption for any GM on a graph $G=(V,E)$ considered throughout
this paper.
\begin{assumption}[Faithful]\label{assum1}
For any two nodes $i,j\in V$, if $i$ and $j$ are joined by an edge, then $x_i$ and $x_j$ are dependent.
\end{assumption}
This faithfulness assumption implies that the
GM has only the conditional independencies encoded by the graph $G$.
We also introduce the following notion \cite{anandkumar2014tensor}.
\begin{definition}
[Bottleneck] A node $i\in V$ is a bottleneck if there exist $j,k,\ell\in V$, denoted as `views', such that every path between any two of $j,k,\ell$ contains $i$.
\end{definition}
\begin{figure}[ht]
\vspace{-0.2in}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{bottleneck}
\caption{bottleneck}
\label{fig:bottleneck}
\end{subfigure}
\qquad\qquad
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{exclusiveview}
\caption{exclusive view}
\label{fig:exclusiveview}
\end{subfigure}
\caption{(a) Bottleneck $i$ has three views $j,k,\ell$; (b) Set $S$ satisfies the exclusive view property with exclusive views $i,j,k,\ell$}
\vspace{-0.05in}
\end{figure}
Figure \ref{fig:bottleneck} illustrates a bottleneck. By construction, the views are conditionally independent given the bottleneck.
Armed with this notion, we now state the following theorem, which provides sufficient conditions for recovering unobserved/latent marginals \cite{anandkumar2014tensor}.
\begin{theorem}\label{thm:bottleneck}
Given a GM with parameter $\beta,\gamma$, suppose $i$ is a bottleneck with views $j,k,\ell$.
If $\mathbb{P}_{\beta,\gamma}\left(x_{\{j,k,\ell\}}\right)$ is given,
then there exists an algorithm $\mathtt{TensorDecomp}$ which outputs $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell\}}\right)$
up to relabeling of $x_i$, i.e., ignoring the symmetry between $x_i=0$ and $x_i=1$.
\end{theorem}
The above theorem implies that using the visible marginal $\mathbb{P}_{\beta,\gamma}\left(x_{\{j,k,\ell\}}\right)$, one can recover the unobserved marginal
$\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell\}}\right)$ involving $x_i$.
For a bottleneck with more than three views, the joint distribution of the bottleneck and its views is recoverable by applying Theorem \ref{thm:bottleneck} to three views at a time.
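The structural fact underlying Theorem \ref{thm:bottleneck} can be checked numerically. The sketch below (a toy star-shaped GM with our own parameter values; this is a brute-force check, not the $\mathtt{TensorDecomp}$ algorithm itself) verifies that the views of a bottleneck are conditionally independent given the bottleneck variable:

```python
# Star graph: center 0 is a bottleneck for views 1, 2, 3.
# We verify P(x_j, x_k | x_i) = P(x_j | x_i) P(x_k | x_i) for views j=1, k=2.
from itertools import product
import math

V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3)]
beta = {(0, 1): 0.9, (0, 2): -0.4, (0, 3): 0.6}
gamma = {0: 0.2, 1: -0.1, 2: 0.0, 3: 0.3}

w = {x: math.exp(sum(beta[e] * x[e[0]] * x[e[1]] for e in E)
                 + sum(gamma[i] * x[i] for i in V))
     for x in product([0, 1], repeat=len(V))}
Z = sum(w.values())
P = {x: v / Z for x, v in w.items()}

def marg(keep):
    """Marginal of P over the coordinates listed in keep."""
    out = {}
    for x, p in P.items():
        key = tuple(x[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

P_i, P_ij, P_ik, P_ijk = marg([0]), marg([0, 1]), marg([0, 2]), marg([0, 1, 2])
for s, a, b in product([0, 1], repeat=3):
    lhs = P_ijk[(s, a, b)] / P_i[(s,)]
    rhs = (P_ij[(s, a)] / P_i[(s,)]) * (P_ik[(s, b)] / P_i[(s,)])
    assert abs(lhs - rhs) < 1e-10  # views independent given the bottleneck
```

The same check passes for any pair of views, which is exactly the mixture structure that $\mathtt{TensorDecomp}$ exploits.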
\iffalse
The main idea of $\mathtt{TensorDecomp}$ \cite{anandkumar2014tensor} is to linear-transform the 3-dimensional tensor $\mathbb{P}_{\beta,\gamma} \left(x_{\{j,k,\ell\}}\right)$ to other 3-dimensional tensor
\begin{align*}
T_j=\sum_{s\in\{0,1\}}&\mathbb{P}_{\beta,\gamma}(x_i=s)\mathbb{P}_{\beta,\gamma}(x_j|x_i=s)\\
&\otimes\mathbb{P}_{\beta,\gamma}(x_j|x_i=s)\otimes\mathbb{P}_{\beta,\gamma}(x_j|x_i=s).
\end{align*}
Then, using orthogonal tensor decomposition which takes $T_j$ and outputs $\mathbb{P}_{\beta,\gamma}(x_i=s),\mathbb{P}_{\beta,\gamma}(x_j|x_i=s)$, one can recover $\mathbb{P}_{\beta,\gamma}(x_i)$ and $\mathbb{P}_{\beta,\gamma}(x_j|x_i)$ up to relabeling of $x_i$.
The same procedure is also applied for $T_k,T_\ell$ which are analogues of $T_j$.
Finally, $\mathtt{TensorDecomp}$ recovers $\mathbb{P}_{\beta,\gamma}\left(x_{\{i,j,k,\ell\}}\right)$ as
\begin{align*}
\mathbb{P}_{\beta,\gamma}\big(x_i=s,&x_{\{j,k,\ell\}}\big)=\mathbb{P}_{\beta,\gamma}(x_i=s)\mathbb{P}_{\beta,\gamma}(x_j|x_i=s)\\
&\otimes\mathbb{P}_{\beta,\gamma}(x_k|x_i=s)\otimes\mathbb{P}_{\beta,\gamma}(x_\ell|x_i=s).
\end{align*}
\fi
Besides $\mathtt{TensorDecomp}$, there are other conditions on latent GMs under which marginals involving latent variables are recoverable.
Before elaborating on these conditions, we introduce one further notion for GMs on a graph $G=(V,E)$
\cite{chaganty2014estimating}.
\begin{definition}
[Exclusive View] For a set of nodes $S\subset V$,
we say that $S$ satisfies the exclusive view property if for each $i\in S$ there exists $j\in V\setminus S$, denoted as an `exclusive view' of $i$, such that every path between $j$ and $S\setminus\{i\}$ contains $i$.
\end{definition}
Figure \ref{fig:exclusiveview} illustrates the exclusive view property.
Now, we are ready to state the conditions for recovering unobserved marginals using this property \cite{chaganty2014estimating}.
\begin{theorem}\label{thm:exclusiveview}
Given a GM with parameter $\beta,\gamma$, suppose a set of nodes $S$ satisfies the exclusive view property with a set of exclusive views $W\subset V\setminus S$.
If $\mathbb{P}_{\beta,\gamma}(x_W)$ and $\mathbb{P}_{\beta,\gamma}\left(x_i,x_j\right)$ are given for all $i\in S$ and each exclusive view $j\in W$ of $i$,
then there exists an algorithm $\mathtt{ExclusiveView}$ which outputs $\mathbb{P}_{\beta,\gamma}(x_{S\cup W})$.
\end{theorem}
At first glance, Theorem \ref{thm:exclusiveview} does not seem useful, as it requires a set of marginals involving every variable in $S\cup W$.
However,
suppose a set of latent nodes $S$ satisfies the property while its set of exclusive views $W$ is visible, i.e., $\mathbb{P}_{\beta,\gamma}(x_W)$ is observed.
If every $i\in S$ is a bottleneck with views containing its exclusive view $j\in W$, then one can resort to $\mathtt{TensorDecomp}$ to obtain $\mathbb{P}_{\beta,\gamma}(x_i,x_j)$.
\iffalse
to know the conditional probabilities given latent variables, i.e., $\mathbb{P}_{\beta,\gamma}(x_j|x_i)$, $j\in V$, $i\in H$. However, one can resort to $\mathtt{TensorDecomp}$ to obtain these hidden quantities.
\fi
\section{Introduction}
The evolution operator for the free Schr\"odinger equation, here denoted by $e^{-it\Delta}$, is subject
to a wide variety of estimates. Functional analysis dictates that it must be an isometry on $L^2({{\mathbb R}}^n)$
at every fixed time $t$. Representing $e^{-it\Delta}$ as a convolution operator with the kernel $(-4\pi
i\,t)^{-\frac{n}2}e^{-i|x|^2/(4t)}$ leads to the dispersive bound
\begin{equation}\label{nde}
\norm[e^{-it\Delta}f][\infty] \leq (4\pi |t|)^{-\frac{n}2} \norm[f][1]
\end{equation}
valid for each $t \not= 0$. Between these two estimates one already has
most of the necessary elements to verify more subtle space-time properties
of the Schr\"odinger evolution such as global Strichartz bounds.
It is natural to ask whether a perturbed operator $e^{itH}, H =
-\Delta + V$, can satisfy (up to a constant) the same $L^1 \to
L^\infty$ estimate as the free evolution. In general, it
cannot. If $H$ has point spectrum (eigenvalues), the naive
dispersive estimate \eqref{nde} fails. Indeed, for any Schwartz
function $f$ that has nonzero inner product with an eigenfunction,
$\langle e^{itH}f,f\rangle$ does not converge to zero as $t\to
\infty$. Therefore, it is a natural endeavour to prove
\begin{equation} \label{eq:dispersive}
\norm[e^{itH}P_{ac}(H)f][\infty] \leq C|t|^{-\frac{n}2}\norm[f][1],
\end{equation}
where $P_{ac}(H)$ denotes the projection onto the absolutely continuous
spectrum\footnote{For the potentials discussed here, there is no singular
continuous spectrum by the Agmon-Kato-Kuroda
Theorem~\cite[Theorem XIII.33]{RSIV}.} of $H$.
It is known that \eqref{eq:dispersive} can fail for $t$ large in the presence of a zero-energy
eigenvalue or resonance. For more details, see \cite[Theorem 10.5]{JK},~\cite[Theorem 8.2]{Jensen},
and~\cite[\S 3]{JSS}. By assuming that zero is a regular point, that is, neither an eigenvalue nor a
resonance of $H$, one can find conditions governing the decay and regularity (but not the size, or
signature) of $V$ which are known to be sufficient to imply the dispersive bound \eqref{eq:dispersive}.
These are listed below for reference.
\begin{itemize}
\item \cite{GS1} $n=1$: \quad $(1+|x|)V \in L^1({{\mathbb R}})$
\item \cite{Sch} $n=2$: \quad $|V(x)| \leq C(1+|x|)^{-3-{\varepsilon}}$
\item \cite{Gol} $n=3$: \quad $V \in L^{\frac32-{\varepsilon}}({{\mathbb R}}^3) \cap L^{\frac32+{\varepsilon}}({{\mathbb R}}^3)$
\item \cite{JSS} $n\geq3$: \quad $\hat{V} \in L^1$ and $(1+|x|^2)^{\gamma/2}V(x)$ is a bounded operator
on the Sobolev space $H^\nu$ for some $\gamma>n+4$ and some $\nu>0$
\end{itemize}
For a more thorough discussion of the work on this problem, see the survey \cite{Bill}.
One might extrapolate from the results in dimensions 1, 2, and 3 that a
suitable $L^p$-type condition for potentials should be sufficient in
every dimension. The main result of this paper, Theorem \ref{T}, shows that
this is not true: In every dimension $n > 3$, there exist continuous and
compactly supported potentials for which the dispersive estimate
\eqref{eq:dispersive} fails.
In constructing the counterexamples, we follow the approach of \cite{GS1}
and~\cite{Gol2}. Specifically, we use Stone's formula to
construct the spectral measure from the resolvent, which in turn is studied via
a finite Born series expansion (iteration of the resolvent identity). While
we do not explicitly separate the contributions of high and low energies,
the failure of dispersive estimates in this case should be recognized as a
high-energy phenomenon.
The three dimensional analysis of \cite{GS1} relies heavily on the simple
explicit expression of the free resolvent. The free resolvent can be
written in terms of elementary functions in all odd dimensions; however, the
expressions become increasingly unwieldy as the dimension increases. In even
dimensions, Bessel/Hankel functions are required.
The key to avoiding this morass is the introduction of certain symbol classes,
$S^{i,j}$, which capture the essential features of the free resolvent.
In particular, in dimension $n$, one must integrate by parts approximately
$(n+1)/2$ times to obtain the appropriate power of $t$; this seems quite
impossible without such a unifying tool.
In dimensions four and higher, the Green's function is rather singular at the
origin, specifically, it is not locally square integrable. This necessitates
carrying the Born expansion much further
than in \cite{GS1}, which adds to the complexity of our proof.
Our analysis contains certain partial positive results. To be precise,
we show that \eqref{eq:dispersive} is attained by the tail of the Born series,
taken after a finite number (depending on the dimension) of initial terms.
The question of whether $e^{itH}P_{ac}(H)$ is dispersive then
reduces to an estimate on the initial
terms in the Born series. We
construct a potential for which the sum of these terms is bounded
below by $|t|^{-\alpha}, \alpha > \frac{n}2$, at certain times $0<t<1$. In the
limit $t\to 0$, this runs contrary to the desired bound of $|t|^{-\frac{n}2}$.
The Uniform Boundedness Principle is used
to show that the worst possible limiting behaviour can be achieved.
It should again be emphasized that the non-dispersive phenomenon takes
place over extremely short times; moreover, it is a high-energy phenomenon.
Indeed by Theorem~B.2.3 of \cite{Semigr}, for any bounded compactly supported
function $\phi$, the operator $e^{itH}\phi(H)$ maps $L^1$ into $L^\infty$ uniformly
in $t$. This is true for very general potentials, in particular those that are bounded.
A physical interpretation is that even
high-frequency waves travelling with large velocity can be
effectively scattered by a non-smooth potential. Depending on the
geometry of the potential, the first reflection may generate an
unacceptable degree of constructive interference. For the purposes
of our counterexample, ``non-smooth'' will mean that $V$ is assumed
to possess fewer than $\frac{n-3}2$ continuous derivatives.
Compare this to the smoothness conditions in \cite{JSS}, which are
sufficient to imply a dispersive bound. In that paper a potential is
only explicitly required
to possess derivatives of order $\nu$ for some $\nu > 0$. Indeed,
there exist numerous examples of functions satisfying all the hypotheses of
\cite{JSS}, yet which we would consider to be non-smooth. On the other hand,
the potentials constructed in this paper are differentiable to order
$\frac{n-3}2$ but the dispersive estimate still fails. This suggests that
while a dispersive bound may hold for all sufficiently smooth potentials (with
rapid decay at infinity), other criteria besides the number and size of
derivatives determine what happens in the absence of such strong regularity.
The additional assumption in \cite{JSS} is that $\hat{V} \in L^1$, which is
satisfied by any potential in the Sobolev space $H^{\frac{n}2+{\varepsilon}}({{\mathbb R}}^n)$.
Determining which functions of lesser regularity also have integrable Fourier
transform is a well known difficult problem. The counterexample constructed
here is motivated by a different and explicitly geometric consideration,
the focal pattern of reflections caused by an elliptical surface.
Strictly speaking, the reflection is caused by a highly oscillatory potential
whose level sets are ellipses. When presented in this light,
it is clear that some notion of curvature and/or convexity can
also determine whether dispersive estimates remain valid.
There is still considerable room between the currently known sufficient
conditions and the negative result presented here. We believe this middle
ground can be explored via some combination of geometric and
Fourier analysis and that these are most likely two sides of the same coin.
\section{Notes on the free resolvent}
We introduce here a class of symbols which will be relevant in the
study of the free resolvent, simplifying both the notation and
the analysis. For $i,j\in \mathbb{Q}$, we denote by $a_{i,j}$ a
symbol belonging to the class $S^{i,j}$, i.e., a symbol that
satisfies the following estimates
$$
\left|\frac{\partial^{k}a_{i,j}(x)}{\partial x^{k}}\right|\leq
\begin{cases}
c_{k}\, x^{i-k} & \text{if } 0<x\leq 1,\\
c_{k}\, x^{j-k} & \text{if } x>1,
\end{cases} \qquad \forall\ k\geq 0.
$$
The calculus of these symbols is quite straightforward: the
derivative of a symbol in $S^{i,j}$ is a symbol in $S^{i-1,j-1}$
and the product of a symbol in $S^{i,j}$ with a symbol in
$S^{i',j'}$ is a symbol in $S^{i+i',j+j'}$. In particular, the
product of a symbol in $S^{i,j}$ with $x^{\alpha}$ belongs to
$S^{i+\alpha,j+\alpha}$.
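As a quick sanity check of the product rule, the Leibniz formula gives, for $a\in S^{i,j}$, $b\in S^{i',j'}$, $0<x\leq 1$, and any $k\geq 0$,

```latex
$$
\left|\frac{\partial^{k}}{\partial x^{k}}\bigl(a(x)b(x)\bigr)\right|
\leq \sum_{m=0}^{k}\binom{k}{m}\bigl|\partial^{m}a(x)\bigr|\,\bigl|\partial^{k-m}b(x)\bigr|
\leq \sum_{m=0}^{k}\binom{k}{m}c_{m}c'_{k-m}\,x^{(i-m)+(i'-(k-m))}
= C_{k}\,x^{i+i'-k},
$$
```

with the identical computation (exponents $j,j'$) for $x>1$, so that $ab\in S^{i+i',j+j'}$.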
Now let us consider the resolvent of the free Schr\"odinger
equation,
$$
R_{0}(z)=(-\Delta-z)^{-1}.
$$
In dimension $n\geq4$, $R_{0}(z)$ is given by the kernel:
\begin{align}\label{free kernel}
R_{0}(z)(x,y)=\frac{i}{4}\Bigl(\frac{z^{\frac{1}{2}}}{2\pi|x-y|}\Bigr)^{\frac{n}{2}-1}H_{\frac{n}{2}-1}^{(1)}(z^{\frac{1}{2}}|x-y|)
,
\end{align}
where $\Im z^{\frac{1}{2}} \geq 0$ and $H_{\frac{n}{2}-1}^{(1)}$ is the
Hankel function of the first kind.
We encode the information contained in the asymptotic expansions
of the Hankel function near the origin and at infinity (see
\cite{GR}), together with the information provided
by the recurrence relation
$$
H_{\nu-1}^{(1)}(z) - H_{\nu+1}^{(1)}(z)=2\frac{d}{dz}
H_{\nu}^{(1)}(z),
$$
into the following formula valid for $\Re \nu>-\frac{1}{2}$ and
$|\arg z|<\pi$,
$$
H_{\nu}^{(1)}(z)=e^{iz} a_{-\nu,-\frac{1}{2}}(z).
$$
This together with \eqref{free kernel} yield a representation for
the kernel of the free resolvent in dimension $n\geq 4$ in terms
of the aforementioned symbols, that is,
\begin{align}\label{R0}
R_0^{\pm}(\lambda^2)(x,y)=a_{0,\frac{n-3}{2}}(\lambda|x-y|)\frac{e^{\pm
i\lambda|x-y|}}{|x-y|^{n-2}},
\end{align}
where $R_0^{\pm}(\lambda^2)$ denote the boundary values
$R_0(\lambda^2\pm i0)$.
Let us also point out a similar formula for the imaginary part of
the free resolvent,
\begin{align}\label{ImR0}
\Im
R_{0}(\lambda^{2})(x,y)=a_{n-2,\frac{n-3}{2}}(\lambda|x-y|)\frac{e^{\pm
i\lambda|x-y|}}{|x-y|^{n-2}},
\end{align}
by which we mean that we can write it as the sum of two terms of this type, one with phase $e^{i\lambda|x-y|}$
and the other with phase $e^{-i\lambda|x-y|}$. Indeed, using (for example) the identity
\begin{align}\label{scaling}
\lambda^{n-2}(-\Delta-1)^{-1}(\lambda x)=(-\Delta-\lambda^{2})^{-1}(x)
\end{align}
for the kernels of the free resolvents, we can write
$$
\Im
R_{0}(\lambda^{2})(x,y)=\lambda^{n-2}(\lambda|x-y|)^{\frac{2-n}{2}}J_{\frac{n-2}{2}}(\lambda|x-y|),
$$
where $J_{\frac{n-2}{2}}$ denotes the Bessel function of the first kind. Consulting
the asymptotic expansions of the Bessel function near the origin
and at infinity (see again \cite{GR}) and using the
recurrence relation
$$
J_{\nu-1}(z) - J_{\nu+1}(z)=2\frac{d}{dz} J_{\nu}(z),
$$
one easily derives \eqref{ImR0}.
The purpose of understanding the free resolvent is that it enables us to study functions of $H$
through the Stone formula for the spectral measure:
\begin{align*}
\bigl\langle F(H)P_{ac} f,g\bigr\rangle
&=2\int_{0}^{\infty} F(\lambda^2)\lambda
\langle E'(\lambda^2)f, g\rangle d\lambda \\
&=\frac{2}{\pi} \int_{0}^{\infty}F(\lambda^2)\lambda\bigl\langle
\Im R_{V}(\lambda^2)f, g\bigr\rangle d\lambda,
\end{align*}
where $f,g$ are any two Schwartz functions, $P_{ac}$ denotes the projection onto the absolutely
continuous spectrum of $H$, $E'(\lambda)$ denotes the spectral measure associated to $H$, and
$R_V^\pm(\lambda^2) := (H-\lambda^2\pm i0)^{-1}$ is the resolvent of the perturbed Schr\"odinger
equation. We have chosen signs so that $2i \Im R_V(\lambda^2) = R_V^+(\lambda^2) - R_V^-(\lambda^2)$.
In order to compute the kernel of $\Im R_V(\lambda^2)$, we make
use of the resolvent identity:
$$
R_{V}^{\pm}(\lambda^{2})=R_{0}^{\pm}(\lambda^{2})-R_{0}^{\pm}(\lambda^{2})
VR_{V}^{\pm}(\lambda^{2}),
$$
which by iteration gives rise to the following finite Born series
expansion:
\begin{align}
R_{V}^{\pm}(\lambda^{2})& \label{Born series} \\
&=\sum_{l=0}^{2m+1}R_{0}^{\pm}(\lambda^{2})[-VR_{0}^{\pm}(\lambda^{2})]^{l}
\label{first terms} \\
&\quad +R_{0}^{\pm}(\lambda^{2})V[R_{0}^{\pm}(\lambda^{2})V]^{m}R_{V}^{\pm}
(\lambda^{2})[VR_{0}^{\pm}(\lambda^{2})]^{m}VR_{0}^{\pm}(\lambda^{2}).
\label{tail}
\end{align}
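For example, the shortest case $m=0$ of \eqref{Born series} reads

```latex
$$
R_{V}^{\pm}(\lambda^{2})
= R_{0}^{\pm}(\lambda^{2})
- R_{0}^{\pm}(\lambda^{2})V R_{0}^{\pm}(\lambda^{2})
+ R_{0}^{\pm}(\lambda^{2})V R_{V}^{\pm}(\lambda^{2})V R_{0}^{\pm}(\lambda^{2}),
$$
```

obtained by substituting the companion identity $R_{V}^{\pm}=R_{0}^{\pm}-R_{V}^{\pm}VR_{0}^{\pm}$ into the resolvent identity above; larger values of $m$ iterate this substitution on the middle factor $R_{V}^{\pm}$.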
Elementary algebra can also be used to solve for $R_V^\pm(\lambda^2)$ in terms of $R_0^\pm(\lambda^2)$:
$$R_V^\pm(\lambda^2) = \big(I + R_0^\pm(\lambda^2)V\big)^{-1}R_0^\pm(\lambda^2)
=: S^\pm(\lambda^2)R_0^\pm(\lambda^2).$$
For now this identity is only a formal statement, as we have not shown that
$S^\pm(\lambda^2)=\big(I + R_0^\pm(\lambda^2)V\big)^{-1}$ exists as
a bounded operator on any space. Existence and uniform boundedness of $S^\pm(\lambda^2)$ will be
demonstrated in Section~\ref{sec:tail}.
\section{Useful lemmas} \label{sec:lemmas}
In this section we prove a few technical lemmas. We begin with
certain results related to the boundedness of the Riesz potentials
between various weighted spaces. By Riesz potentials, we mean the
operators
$$
I_{\alpha}:f\mapsto|x|^{\alpha-n}*f
$$
where $0<\alpha< n$.
Let ${\mathfrak{I}}_{q}$ denote the space of compact operators $T$ for which
$\|T\|_{{\mathfrak{I}}_{q}}=[tr(|T|^{q})]^{\frac{1}{q}}$ is finite. We recall
the following well-known result (see \cite[Theorem XI.20]{RSIII}):
\begin{lemma}\label{opbounds}
Let $f,g\in L^{q}(\mathbb{R}^{n})$, for some $2\leq q<\infty$.
Then, $f(x)g(-i\nabla)\in{\mathfrak{I}}_{q}$ and
$$
\|f(x)g(-i\nabla)\|_{{\mathfrak{I}}_{q}}\leq(2\pi)^{-\frac{n}{q}}\|f\|_{q}\|g\|_{q}.
$$
Here, $f(x)$ denotes multiplication by $f$ in physical space,
while $g(-i\nabla)$ denotes multiplication by $g$ in frequency
space.
\end{lemma}
As a consequence of Lemma \ref{opbounds}, one can derive results on the boundedness of the Riesz
potentials between various weighted spaces. To describe these spaces, we will use the notation
$$\norm[f][L^{p,\sigma}] := \norm[\langle x\rangle^\sigma f][L^p],$$
where $\langle x\rangle :=(1+|x|^2)^{1/2}$, $1\leq p\leq \infty$, and $\sigma\in {{\mathbb R}}$. Following the notation of
Jensen and Kato, we write $B(0,\sigma;0,-\sigma')$ for the set of bounded operators from $L^{2,\sigma}$ to
$L^{2, -\sigma'}$, while $B_{0}(0,\sigma;0,-\sigma')$ denotes the set of compact operators from
$L^{2,\sigma}$ to $L^{2, -\sigma'}$. Jensen proves the following result (see Lemma 2.3 in \cite{Jensen}).
\begin{proposition}\label{impprop}
{\rm1)} If $0<\alpha<\frac{n}{2}$, $\sigma, \sigma'\geq 0$, and $\sigma+\sigma'\geq \alpha$, then
$I_{\alpha}\in B(0,\sigma;0,-\sigma').$ Moreover, if $\sigma+\sigma'>\alpha,$ then $I_{\alpha}\in
B_{0}(0,\sigma;0,-\sigma').$
{\rm2)} If $\frac{n}{2}\leq\alpha<n$, $\sigma,\sigma'>\alpha-\frac{n}{2}$, and
$\sigma+\sigma'\geq\alpha$, then $I_{\alpha}\in B(0,\sigma;0,-\sigma').$ Moreover, if
$\sigma+\sigma'>\alpha$, then $I_{\alpha}\in B_{0}(0,\sigma;0,-\sigma').$
\end{proposition}
The case $\alpha \geq n$ may appear qualitatively different from the Riesz potentials considered above;
however, the mapping bounds between weighted $L^2$ spaces are still valid.
\begin{proposition}\label{alpha>n}
Let $\alpha \geq n$. The convolution operator $I_\alpha := f \mapsto |x|^{\alpha-n} * f$ is an element
of $B_0(0,\sigma;0,-\sigma')$, provided $\sigma, \sigma' > \alpha - \frac{n}2$.
\end{proposition}
\begin{proof}
As every Hilbert-Schmidt operator is compact, in order to prove the proposition it suffices to show that
$I_\alpha$ is a Hilbert-Schmidt operator between $L^{2,\sigma}$ and $L^{2,-\sigma'}$. In turn, this is
equivalent to showing the finiteness of the integral
$$\iint \langle x\rangle^{-2\sigma} |x-y|^{2(\alpha-n)} \langle y\rangle^{-2\sigma'}\,dx dy.$$
Consider the integral with respect to $x$, namely
$$\int \langle x\rangle^{-2\sigma}|x-y|^{2(\alpha-n)}dx.$$
If $|y| \leq 1$, this is dominated by the integral of $\langle x\rangle^{2(\alpha-\sigma-n)}$, which is
finite because $\sigma > \alpha-\frac{n}2$.
Now suppose $|y| > 1$. Over the region where $|x| \leq \frac12|y|$, the factor $|x-y|$ is essentially
of size $|y|$, as can be seen from the triangle inequality. Meanwhile, the factor $\langle x\rangle^{-2\sigma}$
is integrable because $\sigma > \alpha-\frac{n}2 \geq \frac{n}2$. Consequently, the integral over this region
is bounded by $|y|^{2(\alpha-n)}$, i.e.,
$$
\int_{|x| \leq \frac12|y|}\langle x\rangle^{-2\sigma}|x-y|^{2(\alpha-n)}dx \lesssim |y|^{2(\alpha-n)}.
$$
Over the region where $|x-y| \leq \frac12|y|$, the triangle inequality dictates $|x|\sim|y|$. Hence,
\begin{align*}
\int_{|x-y| \leq \frac12|y|}\langle x\rangle^{-2\sigma}|x-y|^{2(\alpha-n)}dx
&\lesssim \langle y \rangle ^{-2\sigma} \int_{|x-y| \leq \frac12|y|}|x-y|^{2(\alpha-n)}dx\\
&\lesssim \langle y\rangle^{-2\sigma}|y|^{2\alpha-n}
\lesssim |y|^{2\alpha-2\sigma-n}.
\end{align*}
Everywhere else in ${{\mathbb R}}^n$, the two functions $|x|$ and $|x-y|$
are of comparable size. Recalling that $\sigma>\alpha-\frac{n}{2}$, the integral over this region is then dominated
by
$$
\int_{|x|>\frac12|y|} \langle x\rangle^{-2\sigma}|x|^{2(\alpha-n)}
\lesssim \int_{|x|>\frac12|y|}|x|^{2(\alpha-\sigma-n)}
\lesssim |y|^{2\alpha-2\sigma-n}.
$$
Therefore, the dominant term for large $y$ comes from the region $|x|\leq \frac{1}{2}|y|$.
To complete the estimate for the Hilbert-Schmidt norm, it remains to bound
the integral over the $y$-variable. As $\sigma'>\alpha-\frac{n}{2}$, this is dominated by
$$\int \langle y\rangle^{2(\alpha-n)} \langle y\rangle^{-2\sigma'}\,dy \lesssim 1.$$
This concludes the proof of Proposition~\ref{alpha>n}.
\end{proof}
Propositions \ref{impprop} and \ref{alpha>n} immediately yield some mapping bounds for
the free resolvent and its derivatives. Indeed, we have
\begin{corollary}\label{deriv in weighted}
Let $j$ be any nonnegative integer and suppose $\sigma, \sigma' > j+\frac12$
with $\sigma + \sigma' > j + \frac{n+1}2$. Then
\begin{equation*}
\Big\| \Big(\frac{d}{d\lambda}\Big)^{j}R_0^\pm(\lambda^2)f\Big\|_{L^{2,-\sigma'}}
\lesssim \lambda^{-j}\japanese[\lambda]^{j+\frac{n-3}2} \norm[f][L^{2,\sigma}].
\end{equation*}
\end{corollary}
\begin{proof}
Recall that the kernel of $R_0^\pm(\lambda^2)$ is given by
$|x|^{2-n}e^{\pm i\lambda|x|}a_{0,\frac{n-3}2}(\lambda |x|)$.
When a symbol is differentiated, the effect is comparable to
dividing by $\lambda$; see Section~2 for the calculus of the symbols $a_{i,j}$.
Each derivative that falls on the exponential factor increases the power of $|x|$ by one.
Based on these possible outcomes, the integral kernel of
$(\frac{d}{d\lambda})^jR_0^\pm(\lambda^2)$ must be of the
form $\lambda^{-j}|x|^{2-n}e^{\pm i\lambda|x|}a_{0,\frac{n-3}2+j}(\lambda|x|)$,
which is dominated pointwise by the kernel of $\lambda^{-j}I_2 + \lambda^{\frac{n-3}2}
I_{\frac{n+1}2+j}$. Thus, for the kernels, we have the pointwise inequality
\begin{align}\label{deriv free res bound}
\Big(\frac{d}{d\lambda}\Big)^jR_0^\pm(\lambda^2)
\lesssim \lambda^{-j}\japanese[\lambda]^{j+\frac{n-3}2}(I_2 +I_{\frac{n+1}2+j}).
\end{align}
The claim follows from Propositions \ref{impprop} and \ref{alpha>n}.
\end{proof}
The estimate above is based entirely on the size of the integral kernel of
$R_0^\pm(\lambda^2)$ and its derivatives and completely ignores the
oscillatory nature of these functions. If one takes advantage of this
oscillation using Fourier analysis techniques, the result is a much more
subtle mapping estimate known as the Limiting Absorption Principle for the
free resolvent (see \cite{agmon}, \cite[Theorem XIII.33]{RSIV}).
\begin{lemma}\label{lap}
Choose any $\sigma, \sigma' > \frac12$ and ${\varepsilon} > 0$. Then for all
$\lambda \geq 1$,
\begin{equation*}
\norm[R_0^\pm(\lambda^2)f][L^{2,-\sigma'}] \lesssim \lambda^{-1+{\varepsilon}}
\norm[f][L^{2,\sigma}].
\end{equation*}
\end{lemma}
\begin{proof}[Sketch of Proof]
First, one shows that $R_0^\pm(1)$ is a bounded operator from $L^{2,\sigma}$
to $L^{2,-\sigma'}$. One characterization of $R_0^\pm(1)$ is that it
multiplies the Fourier transform of $f$ by the distribution
$m(\xi) = \frac{c_n}{|\xi|^2-1} \pm C_n i \delta_0(|\xi|^2 - 1)$.
If $f \in L^{2,\sigma}$, then $\hat{f} \in H^\sigma({{\mathbb R}}^n)$. As
$\sigma > \frac12$, the Trace Theorem (see \cite{agmon} or \cite[Theorem IX.39]{RSII}) implies that
$\hat{f}$ will restrict to an $L^2$ function on surfaces of codimension 1.
The surface of particular interest here is the unit sphere, where $m(\xi)$ becomes singular.
After a partition of unity decomposition and smooth changes of variables, each sector of the sphere
can be mapped to a subset of the hyperplane $\{\xi_1 = 0\}$. Under the
same change of variables, the singular part of $m(\xi)$ takes the form
$m(\xi) = \frac1{\xi_1} \pm i \delta_0(\xi_1)$.
This reduces matters to a one-dimensional problem. In ${{\mathbb R}}$, multiplying
the Fourier transform by $\frac{1}{\xi_1}$ or by a delta function corresponds to
an integration operator; such operators map $L^1$ to $L^\infty$ and consequently also
map $L^{2,\sigma}$ to $L^{2,-\sigma'}$ provided $\sigma, \sigma' > \frac12$.
The kernel of $R_0^\pm(\lambda^2)$ is simply a dilation of $R_0^\pm(1)$; see \eqref{scaling}.
A straightforward scaling argument shows that
\begin{align*}
\norm[R_0^\pm(\lambda^2)f][L^{2,-\sigma'}] \lesssim \lambda^{\sigma+\sigma'-2} \norm[f][L^{2,\sigma}]
\end{align*}
for all $\lambda \ge 1$. Finally, one can use the embeddings
$L^{2,\sigma} \subset L^{2, \min(\sigma, \frac{1+{\varepsilon}}2)}$ and
$L^{2,-\min(\sigma',\frac{1+{\varepsilon}}2)} \subset L^{2, -\sigma'}$
to obtain the desired power of decay in $\lambda$.
\end{proof}
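For concreteness, the scaling computation can be spelled out as follows. By \eqref{scaling} we may write $R_0^\pm(\lambda^2)f(x) = \lambda^{-2}\big[R_0^\pm(1)\big(f(\cdot/\lambda)\big)\big](\lambda x)$, and for $\lambda\geq1$ we have the elementary inequalities $\japanese[u/\lambda]\geq \japanese[u]/\lambda$ and $\japanese[\lambda v]\leq \lambda\japanese[v]$. Hence
\begin{align*}
\norm[R_0^\pm(\lambda^2)f][L^{2,-\sigma'}]
&= \lambda^{-2-\frac{n}2}\,\big\|\japanese[\cdot/\lambda]^{-\sigma'}\,R_0^\pm(1)\big(f(\cdot/\lambda)\big)\big\|_{L^2}
\leq \lambda^{\sigma'-2-\frac{n}2}\,\norm[R_0^\pm(1)\big(f(\cdot/\lambda)\big)][L^{2,-\sigma'}]\\
&\lesssim \lambda^{\sigma'-2-\frac{n}2}\,\norm[f(\cdot/\lambda)][L^{2,\sigma}]
\leq \lambda^{\sigma+\sigma'-2}\,\norm[f][L^{2,\sigma}],
\end{align*}
which is the bound stated in the sketch above.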
Note that Corollary \ref{deriv in weighted} and Lemma \ref{lap} imply that the free resolvent and its derivatives
map functions with good decay at infinity to functions with less decay.
If this is composed with multiplication by a potential $V(x)$ with
sufficient decay at infinity, the resulting operator will be bounded
from certain weighted spaces to themselves.
\begin{corollary}
Let $j$ be a nonnegative integer and suppose
$|V(x)| \leq C\japanese[x]^{-\beta}$ for some $\beta > \max(\frac{n+1}2 + j,
2j+1)$. Then for every $j+\frac12 < \sigma < \beta-(j+\frac12)$,
\begin{equation} \label{freeest}
\Big\|\Big(\frac{d}{d\lambda}\Big)^jR_0^\pm(\lambda^2)Vf\Big\|_{L^{2,-\sigma}}
\leq \left\{ \begin{aligned} &\japanese[\lambda]^{-1+{\varepsilon}}\norm[f][L^{2,-\sigma}], &&{\rm if}\ j=0, \\
&\lambda^{-j}\japanese[\lambda]^{j+\frac{n-3}2}\norm[f][L^{2,-\sigma}],
&&{\rm if}\ j \geq 1. \end{aligned}\right.
\end{equation}
\end{corollary}
\begin{remark}
It is possible to mimic the proof of the Limiting Absorption Principle to
prove stronger estimates in the cases where $1 \leq j < \frac{n-1}2$.
These are interesting in their own right, but will not be needed here.
\end{remark}
As mentioned in the introduction, the kernel of the free resolvent is not
locally square integrable, which places it outside the context of the mapping estimates above.
However, as the next results demonstrate, the kernel
associated to $[VR_0^\pm ]^m$ belongs to a weighted $L^2$ space,
provided $m$ is big enough and $V$ decays sufficiently rapidly.
We start with the following
\begin{lemma}\label{easylem}
Let $\mu$ and $\sigma$ be such that $\mu<n$ and
$n<\sigma+\mu$. Then
$$
\int_{\mathbb{R}^{n}}\frac{dy}{\langle y
\rangle^{\sigma}|x-y|^{\mu}}\lesssim
\begin{cases}
\langle x \rangle^{n-\sigma-\mu}, & \sigma<n\\
\langle x \rangle^{-\mu}, & \sigma>n.
\end{cases}
$$
\end{lemma}
\begin{proof}
We analyze the integral on each of the following three disjoint
domains:
Domain 1: $|y| \leq \frac{|x|}{2}$. From the triangle inequality
we get $|x-y|\sim|x|$; we estimate the contribution of this domain
to the integral by
$$
|x|^{-\mu} \int_{|y|\leq \frac{|x|}{2}}\langle y\rangle ^{-\sigma}
dy\lesssim |x|^{n-\mu} \langle x\rangle^{-\min(\sigma,n)}
\lesssim
\begin{cases}
\langle x \rangle^{n-\sigma-\mu}, & \sigma<n\\
\langle x \rangle^{-\mu}, & \sigma>n,
\end{cases}
$$
where we used $\int_{|y|\leq R}\langle y \rangle^{-\sigma}\,dy\lesssim R^n\langle R\rangle^{-\min(\sigma,n)}$ together with $\mu<n$.
Domain 2: $|x-y|\leq\frac{|x|}{2}$. On this domain $|y|\sim|x|$
and we estimate its contribution to the integral by
$$
\int_{|x-y|\leq\frac{|x|}{2}}\frac{dy}{\langle
x\rangle^{\sigma}|x-y|^{\mu}}
\lesssim \langle x\rangle^{-\sigma} \int_{0}^{\frac{|x|}{2}}\frac{r^{n-1}}{r^{\mu}}dr
\lesssim \langle x\rangle^{n-\sigma-\mu},
$$
where the inequality holds because $\mu<n$.
Domain 3: $|y|>\frac{|x|}{2}$ and $|x-y|>\frac{|x|}{2}$. The
triangle inequality yields $|x-y|\sim|y|$ and as $n-\sigma-\mu<0$,
we obtain the estimate
$$
\int_{|y|>\frac{|x|}{2}}\langle y\rangle^{-\sigma}|y|^{-\mu}dy
\lesssim \langle x\rangle^{n-\sigma-\mu},
$$
by treating $|x|\leq 1$ and $|x|>1$ separately.
\end{proof}
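Lemma~\ref{easylem} can likewise be sanity-checked numerically; the snippet below (an illustration only, not part of the argument) tests the case $\sigma<n$ in dimension $n=1$, with arbitrary parameters satisfying $\mu<n<\sigma+\mu$.

```python
# Numerical sanity check of the lemma above in dimension n = 1, case sigma < n
# (illustration only).  Parameters are arbitrary with mu < n < sigma + mu.
import numpy as np

n, sigma, mu, step = 1, 0.7, 0.6, 0.01

def bracket(z):
    return np.sqrt(1.0 + z * z)             # Japanese bracket <z>

def lhs(x):
    # midpoint rule: the step/2 offset keeps the integrable singularity
    # at y = x from ever being sampled
    y = np.arange(x - 500.0, x + 500.0, step) + step / 2
    return (bracket(y) ** (-sigma) * np.abs(x - y) ** (-mu)).sum() * step

ratios = [lhs(x) / bracket(x) ** (n - sigma - mu) for x in (0.0, 3.0, 20.0, 100.0)]
assert all(0.0 < r < 50.0 for r in ratios)  # ratio against <x>^{n-sigma-mu} stays bounded
```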
\begin{proposition} \label{smoothing} Suppose $|V(x)| \leq C\japanese[x]^{-\beta}$ for some
$\beta > n+3$. Then for any integer $0 \leq j \leq \frac{n}2+1$ and any pair $(p,q)$ such that
either $1< p<\frac{2n}{n+3}$ and $\frac{1}{q}=\frac{1}{p}-\frac{2}{n}$, or $p=1$ and $1\leq q<\frac{n}{n-2}$, we have
\begin{equation*}
\Big\| V \Big(\frac{d}{d\lambda}\Big)^jR_0^\pm(\lambda^2)f\Big\|_{L^{1,\frac32}\cap L^q}
\lesssim \lambda^{-j}\japanese[\lambda]^{j+\frac{n-3}2} \norm[f][L^{1,\frac32}\cap L^{p}].
\end{equation*}
\end{proposition}
\begin{proof}
In view of \eqref{deriv free res bound}, we need only prove estimates for the operator $V I_k$
for certain $2 \leq k \leq n+\frac32$.
The weighted $L^1$ estimate follows from
$$
\sup_{x\in{{\mathbb R}}^n} \japanese[x]^{-\frac32} \int_{{{\mathbb R}}^n} \japanese[y]^{\frac32-\beta}|x-y|^{k-n}\, dy \lesssim 1,
$$
which is a direct consequence of Lemma~\ref{easylem} with $\sigma=\beta-\frac32$ and $\mu=n-k$.
We turn now to the smoothing estimate. Consider first the case $p=1$.
Lemma~\ref{easylem} with $\sigma=q\beta$ and $\mu=q(n-k)$ implies that for $1\leq q<\frac{n}{n-2}$, we have
\begin{align*}
\int |V(x)|^q|x-y|^{q(k-n)}dx \lesssim \langle y \rangle^{\frac{3q}{2}}.
\end{align*}
Note that the upper bound on $q$ is dictated by $k=2$.
Thus, in the case $p=1$, the claim follows from Minkowski's inequality:
\begin{align*}
\Big\| V(x)\int|x-y|^{k-n}|f(y)|dy\Big\|_{L_x^q}
&\lesssim \int|f(y)|\langle y \rangle^{\frac{3}{2}} dy
\lesssim \norm[f][L^{1,\frac32}].
\end{align*}
Lastly, we treat the case $1<p<\frac{2n}{n+3}$. Note that given $p$, the choice of $q$ is governed by the
Hardy-Littlewood-Sobolev inequality for $I_2$. As $V\in L^\infty $, we obtain
\begin{align*}
\|V I_2(f)\|_{L^q}\lesssim \|f\|_{L^p}\lesssim \norm[f][L^{1,\frac32}\cap L^{p}].
\end{align*}
It remains to consider $I_k$ with $k=\frac{n+1}2 + j$.
For $0\leq j<\frac{n-1}{2}$, by the Hardy-Littlewood-Sobolev inequality and the fact that $V\in L^1\cap L^\infty$, we get
\begin{align*}
\|V I_{\frac{n+1}{2}+j}(f)\|_{L^q}
&\lesssim \|V\|_{\frac{2pn}{n-3+2j}}\| I_{\frac{n+1}{2}+j}(f)\|_ {\frac{2pn}{n-1-2j}}
\lesssim \|f\|_{L^p}\lesssim \norm[f][L^{1,\frac32}\cap L^{p}].
\end{align*}
For the remaining values of $j$, i.e., $\frac{n-1}{2}\leq j\leq \frac{n}{2}+1$, we use again
Lemma~\ref{easylem} with $\sigma=q\beta$ and $\mu=q(n-k)$ to obtain
\begin{align*}
\int |V(x)|^q|x-y|^{q(k-n)}dx \lesssim \langle y \rangle^{\frac{3q}{2}}
\end{align*}
for $1\leq q< \frac{2n}{n-1}$. For the values of $p$ currently under consideration, $q$ is guaranteed to lie
in this range. Another application of Minkowski's inequality yields
\begin{align*}
\|V I_{\frac{n+1}{2}+j}(f)\|_{L^q}
\lesssim \norm[f][L^{1,\frac32}] \lesssim \norm[f][L^{1,\frac32}\cap L^{p}].
\end{align*}
This completes the proof of the proposition.
\end{proof}
\begin{proposition}\label{weighted}
For any $0 \leq j \leq \frac{n}2+1$ and $\sigma > j+\frac12$,
$$ \Big\|\Big(\frac{d}{d\lambda}\Big)^j R_0^\pm(\lambda^2)f
\Big\|_{L^{2,-\sigma}} \lesssim \lambda^{-j}
\japanese[\lambda]^{j+\frac{n-3}2}\norm[f][L^{1,\frac32}\cap L^2]. $$
\end{proposition}
\begin{proof}
We use the estimate \eqref{deriv free res bound} and split the resolvent kernel into two pieces, according to whether
$|x-y| < 1$ or $|x-y| \geq 1$. The piece supported away from the diagonal $x=y$ maps $L^1$
into $L^{2,-\sigma}$ because of the bound
$$ \sup_{x\in{{\mathbb R}}^n} \int_{|x-y|\geq 1}\frac{dy}{|x-y|^{2(n-k)}\langle y\rangle^{2\sigma}}
\lesssim 1, $$
valid for any $k\leq \frac{n+1}2+j$.
The piece supported close to the diagonal $x=y$ is a convolution against an integrable function and
hence it maps $L^2$ to itself.
\end{proof}
If the map $VR_0^\pm(\lambda^2)$, or one of its derivatives (with respect to $\lambda$),
is applied enough times to a locally integrable function with fast decay, the result will be locally in $L^2$.
Any subsequent applications of the free resolvent will yield functions in weighted $L^2$ spaces. Each time the Limiting
Absorption Principle is invoked, it improves the norm bounds by a factor of $\japanese[\lambda]^{-1+{\varepsilon}}$
until eventually, some polynomial decay in $\lambda$ is achieved. Our primary estimate of this form is given below.
\begin{corollary} \label{enough}
Suppose $|V(x)| \leq C\japanese[x]^{-\beta}$ for some $\beta > n+3$.
Let $m_0 > \frac{n^2}2$ and $0 \leq j \leq \frac{n}2+1$. Then
\begin{equation}\label{enough!!}
\Big\|\Big(\frac{d}{d\lambda}\Big)^j\big[VR_0^\pm(\lambda^2)\big]^{m_0}f
\Big\|_{L^{2,\sigma}} \lesssim
\lambda^{-j}\japanese[\lambda]^{j +1-2n}\norm[f][L^{1,\frac32}]
\end{equation}
for any $\sigma < \beta - (\frac{n+3}2)$.
\end{corollary}
\begin{proof}
The lower bound of $\frac{n^2}2$ is not intended to be sharp and was
obtained in the following manner:
It requires about $\frac{n}4$ iterations of $VR_0^\pm(\lambda^2)$ to smooth
an integrable function to local $L^2$ behavior (see Proposition~\ref{smoothing}) and one more to
reach a weighted $L^2$ space (see Proposition~\ref{weighted}). Also, $\frac{n}{2}+1$ powers of
$VR_0^\pm(\lambda^2)$ can be lost to derivatives which we bound using Corollary~\ref{deriv in weighted}.
For each of these $\frac{3n+8}4$ operations, we have established only a crude bound which grows
like $\lambda^{\frac{n-3}2}$.
According to Lemma~\ref{lap}, each time the Limiting Absorption Principle is invoked,
this reduces the degree of polynomial growth by $1-{\varepsilon}$,
so it needs to be done approximately $\frac{(3n+8)(n-3)}{8} + 2n-1 $ times.
Setting $m_0 > \frac{n^2}2$ is sufficient to obtain \eqref{enough!!}.
\end{proof}
We will also need the following mapping properties of $\Im R_0(\lambda^2)$.
\begin{proposition}\label{ImR0 prop}
Let $0\leq j\leq \frac{n}{2}+1$. Then, for $\sigma>\frac{n+3}{2}$ we have
\begin{align}\label{deriv ImR0 L2}
\Big\|\Big(\frac{d}{d\lambda}\Big)^j \Im R_0(\lambda^2)\Big\|_{L^{2,\sigma}\to L^{2,-\sigma}}
\lesssim \lambda^{n-2-j}\langle \lambda\rangle^{\frac{3}{2}}.
\end{align}
Moreover, assuming $|V(x)|\leq C\langle x\rangle^{-\beta}$ for some $\beta>n+3$, we have
\begin{align}\label{deriv ImR0 L1}
\Big\|\Big(\frac{d}{d\lambda}\Big)^j V\Im R_0(\lambda^2)\Big\|_{L^{1,\frac32}\to L^{1,\frac32}}
\lesssim \lambda^{n-2-j}\langle \lambda\rangle^{\frac{3}{2}}
\end{align}
while, for $m\geq 2m_0>n^2$, $\sigma>\frac{n+3}2$, and $\beta>2\sigma$, we have
\begin{align}\label{deriv Im VR0}
\Big\|\Big(\frac{d}{d\lambda}\Big)^j \Im [VR_0^+(\lambda^2)]^m f\Big\|_{L^{2,\sigma}}
\lesssim \lambda^{n-2-j}\langle \lambda\rangle^{j+\frac{5}{2}-2n +\frac{n^2(n-3)}{4}}\|f\|_{L^{1,\frac32}}.
\end{align}
\end{proposition}
\begin{proof}
From \eqref{ImR0}, we have the following formula for the kernel of $\Im R_0(\lambda^2)$:
\begin{align*}
\Im R_{0}(\lambda^{2})(x,y)=a_{n-2,\frac{n-3}{2}}(\lambda|x-y|)\frac{e^{\pm i\lambda|x-y|}}{|x-y|^{n-2}}.
\end{align*}
Derivatives can affect $\Im R_0(\lambda^2)$ in two ways: Whenever a derivative falls on the symbol, this has the
effect of reducing the power of $\lambda$ by one. If a derivative falls on the phase, this has the effect of
increasing the power of $|x-y|$ by one. Hence, using the calculus of the symbols $a_{i,j}$, we get
\begin{align*}
\Big(\frac{d}{d\lambda}\Big)^j\Im R_0(\lambda^2)(x,y)
&=\sum_{\substack{j_1+j_2=j \\j_1,j_2\geq 0}}\lambda^{-j_1}a_{n-2,\frac{n-3}2}(\lambda|x-y|)
\frac{e^{\pm i\lambda|x-y|}}{|x-y|^{n-2-j_2}}\\
&=\sum_{\substack{j_1+j_2=j \\j_1,j_2\geq 0}}\lambda^{n-2-j}a_{j_2,j_2-\frac{n-1}2}(\lambda|x-y|)
e^{\pm i\lambda|x-y|}.
\end{align*}
Thus,
\begin{align}\label{deriv ImR0}
\Big|\Big(\frac{d}{d\lambda}\Big)^j\Im R_0(\lambda^2)(x,y)\Big|
\lesssim \lambda^{n-2-j}\langle \lambda|x-y|\rangle^{j-\frac{n-1}2}.
\end{align}
The estimate \eqref{deriv ImR0 L2} follows from \eqref{deriv ImR0} and
\begin{align}\label{showshow}
\int \frac{\langle x\rangle^{-2\sigma}\langle y\rangle^{-2\sigma}}{\langle \lambda|x-y|\rangle^{n-1-2j}}dxdy
\lesssim \langle \lambda\rangle^{3}.
\end{align}
For $0\leq j\leq \frac{n-1}{2}$, \eqref{showshow} follows from the bound
$\langle \lambda|x-y|\rangle^{-n+1+2j}\lesssim 1$; the resulting integral is finite whenever $\sigma>\frac{n}{2}$.
For $\frac{n-1}{2}<j\leq \frac{n}{2}+1$, we first bound
$\langle \lambda|x-y|\rangle^{2j+1-n}\lesssim 1+\lambda^{2j+1-n}|x-y|^{2j+1-n}$ and then apply
Lemma~\ref{easylem} to the integral in the variable $y$ to obtain
\begin{align*}
\int \frac{\langle y\rangle^{-2\sigma}}{\langle \lambda|x-y|\rangle^{n-1-2j}}dy
\lesssim 1+\lambda^{2j+1-n}\int \langle y\rangle^{-2\sigma}|x-y|^{2j+1-n}\,dy
\lesssim \langle \lambda\rangle^{3}\langle x\rangle^{2j+1-n}.
\end{align*}
The remaining integral in the variable $x$ is finite under our assumptions on $\sigma$.
In view of \eqref{deriv ImR0}, the estimate \eqref{deriv ImR0 L1} follows from
\begin{align}\label{show}
\sup_{x\in{{\mathbb R}}^n} \langle x\rangle^{-\frac32}\int \frac{dy}{\langle y\rangle^{\beta-\frac32}\langle \lambda|x-y|\rangle^{\frac{n-1}2-j}}\lesssim1.
\end{align}
To see \eqref{show} one considers separately the cases $0\leq j\leq \frac{n-1}{2}$ and
$\frac{n-1}{2}<j\leq \frac{n}{2}+1$, bounding $\langle \lambda|x-y|\rangle^{-\frac{n-1}2+j}\lesssim 1$ in the former case and
applying Lemma~\ref{easylem} with $\sigma=\beta-\frac32$ and $\mu=\frac{n-1}{2}-j$ in the latter case.
We turn now to \eqref{deriv Im VR0}. As $V$ is real-valued, $2i\,\Im [VR_0^+(\lambda^2)]^m= [VR_0^+(\lambda^2)]^m- [VR_0^-(\lambda^2)]^m$, which we expand
using the following algebraic identity:
\begin{equation} \label{algebraic identity}
\prod_{k=0}^M A_k^+ - \prod_{k=0}^M A_k^- =
\sum_{l=0}^M \Big(\prod_{k=0}^{l-1} A_k^-\Big)\big(A_l^+ - A_l^-\big)
\Big(\prod_{k=l+1}^M A_k^+\Big).
\end{equation}
Then,
\begin{align}\label{diff}
\Im [VR_0^+(\lambda^2)]^m
=\sum_{\substack{m_1+m_2=m-1 \\m_1,m_2\geq 0}}[VR_0^-(\lambda^2)]^{m_1}V\Im R_0 [VR_0^+(\lambda^2)]^{m_2}.
\end{align}
We treat the cases $m_1<m_0$ and $m_2<m_0$ separately. In the first case, use Corollary~\ref{enough} for
$[VR_0^+(\lambda^2)]^{m_2}$, \eqref{deriv ImR0 L2}, and Corollary~\ref{deriv in weighted} for
$[VR_0^-(\lambda^2)]^{m_1}$ to derive the claim. In the second case, use the weighted $L^1$ bound in
Proposition~\ref{smoothing} for $[VR_0^+(\lambda^2)]^{m_2}$, \eqref{deriv ImR0 L1}, and Corollary~\ref{enough} for
$[VR_0^-(\lambda^2)]^{m_1}$ to obtain \eqref{deriv Im VR0}.
\end{proof}
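The identity \eqref{algebraic identity} invoked above is a purely algebraic telescoping statement, so it can be verified directly; here is a quick sanity check (no part of the proof) with random matrices standing in for the operators $A_k^\pm$.

```python
# Verify the telescoping identity
#   prod A_k^+ - prod A_k^- = sum_l (prod_{k<l} A_k^-)(A_l^+ - A_l^-)(prod_{k>l} A_k^+)
# with random 4 x 4 matrices in place of the operators A_k^{+/-}.
import numpy as np

rng = np.random.default_rng(0)
M = 5
Ap = [rng.standard_normal((4, 4)) for _ in range(M + 1)]
Am = [rng.standard_normal((4, 4)) for _ in range(M + 1)]

def prod(mats):
    out = np.eye(4)
    for B in mats:
        out = out @ B
    return out

lhs = prod(Ap) - prod(Am)
rhs = sum(prod(Am[:l]) @ (Ap[l] - Am[l]) @ prod(Ap[l + 1:]) for l in range(M + 1))
assert np.allclose(lhs, rhs)
```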
We also record the following lemma whose proof is just an exercise
in integration by parts:
\begin{lemma}\label{intparts}
Given $a\in C^{\infty}_c({{\mathbb R}}\setminus\{0\})$, we have
\begin{displaymath}
\bigl|\int_{{{\mathbb R}}}e^{i t\lambda^2}\lambda a(\lambda)d\lambda\bigr|
\lesssim |t|^{-N} \sum_{s=0}^N \Bigl|\int_{{{\mathbb R}}}e^{i t\lambda^2}
\lambda^{s+1-2N} a^{(s)}(\lambda) d\lambda\Bigr|,
\end{displaymath}
for every $N\geq 0$.
\end{lemma}
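To illustrate the mechanism (this is not needed for the proofs), note that $e^{it\lambda^2} = (2it\lambda)^{-1}\frac{d}{d\lambda}e^{it\lambda^2}$, so a single integration by parts gives $\int e^{it\lambda^2}\lambda\, a(\lambda)\,d\lambda = -(2it)^{-1}\int e^{it\lambda^2}a'(\lambda)\,d\lambda$ for $a\in C_c^\infty$. This one-step identity can be checked numerically; the bump function below is an arbitrary choice supported in $(1,3)$.

```python
# Numerical check of one integration by parts behind the lemma above (N = 1):
#   int e^{i t u^2} u a(u) du = -(2 i t)^{-1} int e^{i t u^2} a'(u) du
# for a smooth bump a supported in (1, 3); the bump is an arbitrary choice.
import numpy as np

u = np.linspace(1.0, 3.0, 200001)
s = u - 2.0
inside = np.abs(s) < 1.0
a = np.zeros_like(u)
a[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
a_prime = np.zeros_like(u)
w = 1.0 - s[inside] ** 2
a_prime[inside] = np.exp(-1.0 / w) * (-2.0 * s[inside]) / w ** 2

h = u[1] - u[0]
for t in (3.0, 7.0):
    phase = np.exp(1j * t * u ** 2)
    # endpoint values vanish, so a plain Riemann sum equals the trapezoid rule
    lhs = (phase * u * a).sum() * h
    rhs = -(phase * a_prime).sum() * h / (2j * t)
    assert abs(lhs - rhs) < 1e-6
```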
\section{Dispersive Estimate for the Final Term} \label{sec:tail}
In this section we will show that the tail \eqref{tail} of the
finite Born series expansion \eqref{Born series} obeys dispersive
estimates for any potential $V$ satisfying $|V(x)|\lesssim \langle
x\rangle^{-\beta}$, provided we take $\beta$ and $m$ large
enough.
\begin{theorem}\label{dispersive tail}
Assume that the potential $V$ satisfies $|V(x)|\lesssim \langle
x\rangle^{-\beta}$ for some $\beta>\frac{3n+5}2$ and that
$m>n^2$. Then
\begin{align}
\sup_{x,y\in{{\mathbb R}}^n}\Bigl|\Im \int_{0}^{\infty}e^{it\lambda^{2}}\lambda
\bigl\{R_{0}^+(\lambda^{2})&V[R_{0}^+(\lambda^{2})V]^{m}
S^+(\lambda^2)R_{0}^+(\lambda^{2}) \notag \\
&\times \ [VR_{0}^+(\lambda^{2})]^{m}VR_{0}^+(\lambda^{2})\bigr\}(x,y)
d\lambda\Bigr| \lesssim |t|^{-\frac{n}{2}}. \label{tail integral1}
\end{align}
\end{theorem}
\begin{remark} The condition $\beta > \frac{3n+5}2$ is not intended to be
sharp. Since the function we eventually construct as a counterexample
has compact support, decay conditions are not a matter of primary concern.
\end{remark}
There are numerous oscillatory components in this integral, which suggests
the use of stationary phase methods. Although it appears natural to take the
critical point to be $\lambda = 0$, this turns out not to be the best choice.
Define the functions
$G_{\pm,x}(\lambda^{2})(\cdot) := e^{\mp i\lambda|x|}R_{0}^{\pm}(\lambda^{2})
(\cdot,x)$. Up to a constant multiple, the expression in \eqref{tail integral1} can be rewritten as
$I^+(t,x,y) - I^-(t,x,y)$, where
\begin{align} \label{Ipm}
I^\pm(t,x,y) &:= \int_{0}^{\infty}e^{it\lambda^{2}}e^{\pm i\lambda(|x|+|y|)}
\lambda \big\langle S^\pm(\lambda^2)R_{0}^\pm(\lambda^{2})
[VR_{0}^\pm(\lambda^{2})]^{m} VG_{\pm,y}(\lambda^{2}), \notag \\
& \hskip 2.5in [VR_{0}^\mp(\lambda^{2})]^{m}VG_{\mp,x}(\lambda^{2})
\big\rangle d\lambda \notag\\
& = \int_0^\infty e^{it\lambda^2}e^{\pm i\lambda(|x|+|y|)}
b^\pm_{x,y}(\lambda^2)\, d\lambda.
\end{align}
It suffices to show that $|I^+(t,x,y)-I^-(t,x,y)| \lesssim |t|^{-\frac{n}2}$
uniformly in $x$ and $y$.
The first step is to establish some properties (including existence)
of the operators $S^\pm(\lambda^2)$. This is the crux of the Limiting
Absorption Principle for perturbed resolvents. We sketch the details below.
\begin{proposition} \label{limabs}
Suppose $|V(x)| \leq C \japanese[x]^{-\beta}$ for some $\beta > \frac{n+1}{2}$
and also that zero energy is neither an eigenvalue nor a resonance of
$H = -\Delta + V$. Then
$$\sup_{\lambda\geq0} \norm[S^\pm(\lambda^2)][L^{2,-\sigma}\to L^{2,-\sigma}]
\ < \ \infty$$
for all $\sigma \in (\frac12, \beta-\frac12)$.
\end{proposition}
\begin{proof}
Under our assumptions, \eqref{deriv free res bound} and Proposition~\ref{impprop} imply that $R_0^\pm(\lambda^2)V$
is a compact operator on the space $L^{2,-\sigma}$. The Fredholm alternative then guarantees the
existence of $S^\pm(\lambda^2)$ unless there exists a nonzero function $g \in L^{2,-\sigma}$ satisfying
$g = -R_0^\pm(\lambda^2)Vg$.
For $\lambda>0$, as $g = -R_0^\pm(\lambda^2)Vg$ is formally equivalent to $(-\Delta+V)g=\lambda^2g$,
it follows by a theorem of Agmon \cite{agmon} (see also \cite[Section XIII.8]{RSIII}) that
$g$ is in fact an eigenfunction, that is, $g\in L^2$. As positive imbedded eigenvalues do not exist by Kato's theorem
(see, for example, \cite[Section XIII.8]{RSIII}), we must have $g \equiv 0$.
When $\lambda = 0$, the free resolvent $R_0(0)$ is a scalar multiple of $I_2$. Since we are in dimension $n \geq 4$,
it is possible to improve the decay of $g$ by a bootstrap argument to obtain $g\in L^{2,-\sigma'}$ for all $\sigma'>0$;
in dimension $n\geq 5$, it is in fact possible to bootstrap all the way to $g\in L^2$.
In other words, zero energy would have to be either an eigenvalue or a resonance of $H$, contradicting our assumptions.
Thus, we must have $g\equiv 0$.
To obtain a uniform bound for $S^\pm(\lambda^2)$, note that by Lemma~\ref{lap} we have
$$
\|R_0^\pm(\lambda^2)V\|_{L^{2,-\sigma}} \lesssim \japanese[\lambda]^{-1+{\varepsilon}}.
$$
Thus $I + R_0^\pm(\lambda^2)V$ converges to the identity as $\lambda \to\infty$; in particular, there exists a $\lambda_0$ so that its inverse,
$S^\pm(\lambda^2)$, exists (via the Neumann series) and has operator norm less than 2 for all $\lambda > \lambda_0$.
On the remaining interval, $\lambda \in [0,\lambda_0]$, observe that the family of operators
$R_0^\pm(\lambda^2)$ varies continuously with $\lambda$. By continuity of inverses,
$S^\pm(\lambda^2)$ is continuous and bounded on this compact interval.
\end{proof}
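The last step of the proof is a Neumann series bound: if $\|A\|\leq\frac12$ then $\|(I+A)^{-1}\|\leq(1-\|A\|)^{-1}\leq 2$. A random-matrix illustration of this bound (not part of the argument):

```python
# If ||A|| <= 1/2 then ||(I + A)^{-1}|| <= 1/(1 - ||A||) <= 2, and the
# Neumann series I - A + A^2 - ... converges to the inverse.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A *= 0.5 / np.linalg.norm(A, 2)              # rescale so the operator norm is 1/2
inv = np.linalg.inv(np.eye(6) + A)
inv_norm = np.linalg.norm(inv, 2)
assert inv_norm <= 2.0                       # guaranteed: sigma_min(I + A) >= 1/2
# partial sums of the Neumann series converge to the same inverse
S = sum(np.linalg.matrix_power(-A, k) for k in range(60))
assert np.allclose(S, inv)
```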
Derivatives of $S^\pm(\lambda^2)$ can be taken using the identity
$$\frac{d}{d\lambda}S^\pm(\lambda^2) =
- S^\pm(\lambda^2)\frac{d}{d\lambda}\big(R_0^\pm(\lambda^2)\big)
\,VS^\pm(\lambda^2). $$
From this, Corollary \ref{deriv in weighted}, and Proposition~\ref{limabs}, it follows that for
$1\leq j\leq \frac{n}{2}+1$,
\begin{align}\label{S}
\Big\|\Big(\frac{d}{d\lambda}\Big)^jS^{\pm}(\lambda^2)\Big\|_{L^{2,-\sigma}\to L^{2,-\sigma}}
\lesssim \lambda^{-j}\langle \lambda\rangle^{j+\frac{n-3}{2}},
\end{align}
provided $\frac{1}{2}+j<\sigma<\beta-(\frac{1}{2}+j)$ and $\beta>\frac{n+1}{2}+j$. Moreover, it becomes clear that
$R_V^\pm(\lambda^2) = S^\pm(\lambda^2)R_0^\pm (\lambda^2)$ and its derivatives have mapping properties comparable to
those of the free resolvent.
We now have estimates for every object in \eqref{Ipm} except
for the functions $G_{\pm,y}(\lambda^2)$. These follow from another
straightforward computation.
\begin{proposition} \label{Gboundprop}
Suppose $|V(x)| \leq C\japanese[x]^{-\beta}$ for some $\beta > \frac{3n+5}2$.
Then for each $0 \le j \le \frac{n}2+1$,
\begin{equation} \label{Gbound}
\Big\| V(\cdot)\Big(\frac{d}{d\lambda}\Big)^{j}G_{\pm,y}(\lambda^2)(\cdot)
\Big\|_{L^{1,\frac32}} \lesssim \frac{\lambda^{-j}}{\japanese[y]^{n-2}}
+ \frac{\lambda^{\frac{n-3}2-j}}{\japanese[y]^{\frac{n-1}2}}+
\frac{\lambda^{\frac{n-3}2}}{\japanese[y]^{\frac{n-1}2}}.
\end{equation}
\end{proposition}
\begin{proof}
Write out the function $G_{\pm,y}(\lambda^2)$ in the form
$$
G_{\pm,y}(\lambda^2)(x) = a_{0,\frac{n-3}2}(\lambda|x-y|) \frac{e^{\pm i\lambda(|x-y|-|y|)}}{|x-y|^{n-2}}.
$$
Derivatives can affect $G_{\pm,y}$ in one of two ways. Whenever a
derivative falls on the symbol, it has the effect of reducing the power of
$\lambda$ by one (this property was utilized previously in
Section~\ref{sec:lemmas}). When derivatives fall on the exponential factor,
the effect is to multiply by $|x-y|-|y|$, which is bounded in absolute value by $|x|\leq\japanese[x]$.
Thus, for $0 \le j \le \frac{n}2+1$,
\begin{equation*}
\Big(\frac{d}{d\lambda}\Big)^jG_{\pm,y}(\lambda^2)(x)
=\sum_{\substack{j_1+j_2=j \\j_1,j_2\geq 0}}\lambda^{-j_1}a_{0,\frac{n-3}2}(\lambda|x-y|)
\frac{(|x-y|-|y|)^{j_2}}{|x-y|^{n-2}}e^{\pm i\lambda(|x-y|-|y|)}
\end{equation*}
and hence
\begin{equation*}
\Big|\Big(\frac{d}{d\lambda}\Big)^jG_{\pm,y}(\lambda^2)(x)\Big|
\lesssim \sum_{\substack{j_1+j_2=j \\j_1,j_2\geq 0}}
\Big(\lambda^{-j_1}\frac{\langle x\rangle^{j_2}}{|x-y|^{n-2}}+\lambda^{\frac{n-3}{2}-j_1}\frac{\langle x\rangle^{j_2}}{|x-y|^{\frac{n-1}{2}}}\Big).
\end{equation*}
The result now follows from Lemma~\ref{easylem} provided $\beta>\frac{3n+5}{2}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{dispersive tail}]
Consider first what happens if $|t| \leq 4$. The bounds established in
Corollary~\ref{enough} (for $j=0$), Proposition~\ref{limabs}, and Proposition~\ref{Gboundprop} (for $j=0$)
show that the function $b^\pm_{x,y}(\lambda^2)$ in \eqref{Ipm}
is bounded by a multiple of $\japanese[\lambda]^{-2}$ uniformly in $x$ and $y$; in particular, it is integrable in $\lambda$.
This bounds $|I^+(t,x,y)-I^-(t,x,y)|$ by a constant, which is acceptable
because $|t|^{-\frac{n}2}$ is bounded away from zero when $|t| \leq 4$.
For the remainder of the calculation we will assume that $|t| > 4$.
Let $\rho: {{\mathbb R}}\to{{\mathbb R}}$ be a smooth even cutoff function which is identically one on
the interval $[-1, 1]$ and identically zero outside $[-2,2]$.
Let $b_{x,y,1}^\pm(\lambda^2) := \rho(|t|^{\frac12}\lambda)b^\pm_{x,y}(\lambda^2)$
and $b_{x,y,2}^\pm := b_{x,y}^\pm - b_{x,y,1}^\pm$ and define $I^\pm_1(t,x,y)$, $I^\pm_2(t,x,y)$ accordingly.
For simplicity, the dependence on $x$ and $y$ will be
suppressed whenever possible.
We consider the integrals $I^\pm_2(t,x,y)$ first.
\noindent {\bf Case 1: $|x| + |y| \geq |t|$.} At least one of $|x|$, $|y|$
is greater than $\frac{|t|}{2}$; without loss of generality assume it is $|y|$.
Then $|y|^{-1}\leq 2|t|^{-1}<|t|^{-\frac{1}{2}}$, so every $\lambda\in\text{supp}\ b_2^{\pm}$ satisfies $\lambda>|y|^{-1}$.
For such $\lambda$, Proposition~\ref{Gboundprop} yields the bound
\begin{align}\label{y}
\norm[V\big({\textstyle\frac{d}{d\lambda}}\big)^jG_{\pm,y}(\lambda^2)][L^{1,\frac32}]
\lesssim \frac{\lambda^{\frac{n-3}2-j}\japanese[\lambda]^j}{\japanese[y]^{\frac{n-1}2}}
\lesssim |t|^{\frac{1-n}2} \lambda^{\frac{n-3}2-j}\japanese[\lambda]^j.
\end{align}
To bound $G_{\pm,x}(\lambda^2)$, we use
\begin{align}\label{x}
\norm[V(\tfrac{d}{d\lambda})^j G_{\pm,x}(\lambda^2)][L^{1,\frac32}]
\lesssim \lambda^{-j} \japanese[\lambda]^{j + \frac{n-3}2}.
\end{align}
No additional improvement can be gained here, because the size of $|x|$ is unknown.
By Corollary \ref{deriv in weighted}, Corollary~\ref{enough}, Proposition~\ref{limabs}, \eqref{S}, \eqref{y}, and
\eqref{x}, we can deduce
$$
|b_2^\pm(\lambda^2)|
\lesssim |t|^{\frac{1-n}2}\lambda^{\frac{n-1}{2}}\langle \lambda\rangle^{-3n-1}
\lesssim |t|^{\frac{1-n}2}\langle \lambda\rangle^{-\frac{5n+3}{2}}
$$
and
$$
\Big|\frac{d}{d\lambda} b_2^\pm(\lambda^2)\Big|
\lesssim |t|^{\frac{1-n}2}\lambda^{\frac{n-3}2}\langle \lambda\rangle^{-\frac{5n+3}{2}}.
$$
Applying stationary phase around the critical
point $\lambda_0 =\mp \frac{|x|+|y|}{2t}$ and integrating by parts once away from the critical point, it follows that
$|I^\pm_2(t)| \lesssim |t|^{-\frac{n}2}$.
\noindent {\bf Case 2: $|t|^{\frac12} \leq |x|+|y| <|t|$}.
Again, assume without loss of generality that $|y| \geq \frac12|t|^{\frac12}$. Therefore, for
$\lambda \in \text{supp}\ b_2^{\pm}$ we have $|y|\geq \frac{1}{2}|\lambda|^{-1}$, which implies
$$
\norm[V\big({\textstyle\frac{d}{d\lambda}}\big)^jG_{\pm,y}(\lambda^2)][
L^{1,\frac32}] \lesssim \frac{\lambda^{\frac{n-3}2-j}\japanese[\lambda]^j}{
\japanese[y]^{\frac{n-1}2}}.
$$
For $G_{\pm,x}(\lambda^2)$ we will use \eqref{x}.
The critical point for the phase occurs at $\lambda_0 = \mp\frac{|x|+|y|}{2t}$,
which is comparable in size to $\frac{|y|}{|t|}$ and greater than
$\frac12|t|^{-\frac12}$. In the interval $[\lambda_0-\frac14|t|^{-\frac12},
\lambda_0+\frac14|t|^{-\frac12}]$ we have the bound
$$
|b^\pm_2(\lambda^2)| \ \lesssim \ \Big(\frac{|\lambda_0|}{|y|}\Big)^{\frac{n-1}2} \ \sim \ |t|^{-\frac{n}2+\frac12}.
$$
An application of stationary phase yields the desired bound on this interval.
Away from the critical point, the derivatives of $b_2^\pm(\lambda^2)$ obey the following bounds
\begin{equation}\label{deriv b2}
\Big|\Big(\frac{d}{d\lambda}\Big)^jb_2^\pm(\lambda^2)\Big| \lesssim
\frac{\lambda^{\frac{n-1}2-j}\japanese[\lambda]^{j-\frac{5(n+1)}2}}{\japanese[y]^{\frac{n-1}2}}
\end{equation}
for all $0 \le j \le \frac{n}2+1$.
Over the intervals $[|t|^{-\frac12},\lambda_0 - \frac14|t|^{-\frac12}]$
and $[\lambda_0+\frac14|t|^{-\frac12}, 2\lambda_0]$, \eqref{deriv b2} becomes
\begin{align*}
\Big|\Big(\frac{d}{d\lambda}\Big)^jb_2^\pm(\lambda^2)\Big|
&\lesssim \frac{\lambda_0^{\frac{n-1}2}}{\japanese[y]^{\frac{n-1}2}}\lambda^{-j}\japanese[\lambda]^{j-\frac{5(n+1)}2}\\
&\lesssim |t|^{-\frac{n}{2}+\frac{1}{2}}\lambda^{-j}
\end{align*}
for all $0 \le j \le \frac{n}2+1$. As on this region $|\lambda-\lambda_0|\gtrsim |t|^{-\frac{1}{2}}$, each integration
by parts in \eqref{Ipm} gains us a factor of $|t|^{-\frac12}$. Thus, integrating by parts twice (i.e., taking $j=2$)
and recalling that in this case $\lambda_0\gtrsim |t|^{-\frac12}$, we obtain the desired dispersive estimate.
Over the interval $[2\lambda_0,1]$ (where $\lambda-\lambda_0\geq \frac12\lambda$), we use \eqref{deriv b2} and
the assumption $|y|\geq \frac12|t|^{\frac12}$ to get
\begin{align*}
\Big|\Big(\frac{d}{d\lambda}\Big)^jb_2^\pm(\lambda^2)\Big|
\lesssim \lambda^{\frac{n-1}2-j}|t|^{-\frac{n-1}4}
\end{align*}
for all $0 \le j \le \frac{n}2+1$. To obtain the desired decay in $t$, it is necessary to integrate by parts
at least $\frac{n+1}4$ times.
On the interval $[1,\infty)$, \eqref{deriv b2} implies that $b_2^\pm(\lambda^2)$ and its
derivatives all decay faster than $\japanese[\lambda]^{-2} \japanese[y]^{\frac{1-n}2}$.
Using again the assumption $|y| \geq\frac12 |t|^{\frac12}$ and integrating by parts another $\frac{n+1}4$
times, we obtain the desired dispersive estimate.
\noindent {\bf Case 3: $|x|, |y| < |t|^{\frac12}$.} This time,
the critical point $\lambda_0 = \mp\frac{|x|+|y|}{2t}$ lies outside the support
of $b^\pm_2(\lambda^2)$. Therefore, one could safely integrate by parts;
however, the lack of a lower bound for $|x|$ and $|y|$ limits the usefulness
of estimates like \eqref{Gbound} in the regime $\lambda<1$. Without loss of generality, assume $|y| \geq |x|$.
For $\lambda \geq 1$, $b_2^\pm(\lambda^2)$ and its derivatives decay rapidly. Indeed, by
Corollary ~\ref{deriv in weighted}, Corollary~\ref{enough}, Proposition~\ref{limabs}, \eqref{S}, and
Proposition~\ref{Gboundprop}, for $\lambda\geq 1$ and $0\leq j\leq \frac{n}{2}+1$ we get
$$
\Big|\Big(\frac{d}{d\lambda}\Big)^j b_2^{\pm}(\lambda^2)\Big|
\lesssim \lambda^{-2n-3}\Big(\frac{1}{\langle x\rangle^{n-2}}+\frac{1}{\langle x\rangle^{\frac{n-1}{2}}}\Big)
\Big(\frac{1}{\langle y\rangle^{n-2}}+\frac{1}{\langle y\rangle^{\frac{n-1}{2}}}\Big).
$$
As the powers of $\japanese[x]$ and $\japanese[y]$ in the denominator may not make a meaningful contribution
(if $x,y$ are small), it is necessary to integrate by parts at least $\frac{n}2$ times in order to generate the
desired $|t|^{-\frac{n}2}$ decay or better.
The regime $\lambda \in [\japanese[y]^{-1},1]$ is similar to the interval $[2\lambda_0,1]$ in the previous case.
Indeed,
\begin{align*}
\norm[V\big({\textstyle\frac{d}{d\lambda}}\big)^jG_{\pm,y}(\lambda^2)][L^{1,\frac32}]
\lesssim \frac{\lambda^{\frac{n-3}2-j}\japanese[\lambda]^j}{\japanese[y]^{\frac{n-1}2}}
\lesssim \frac{\lambda^{\frac{n-3}2-j}}{\japanese[y]^{\frac{n-1}2}}
\end{align*}
and
\begin{align*}
\norm[V(\tfrac{d}{d\lambda})^j G_{\pm,x}(\lambda^2)][L^{1,\frac32}]
\lesssim \lambda^{-j} \japanese[\lambda]^{j + \frac{n-3}2}
\lesssim \lambda^{-j}
\end{align*}
for all $0\leq j\leq \frac{n}{2}+1$. Thus,
\begin{equation*}
\Big|\Big(\frac{d}{d\lambda}\Big)^jb_2^\pm(\lambda^2)\Big|
\lesssim \frac{\lambda^{\frac{n-1}2-j}\japanese[\lambda]^{j-\frac{5(n+1)}2}}{\japanese[y]^{\frac{n-1}2}}
\lesssim \frac{\lambda^{\frac{n-1}2-j}}{\japanese[y]^{\frac{n-1}2}}
\end{equation*}
for all $0 \le j \le \frac{n}2+1$. Integrating by parts $\frac{n}{2}\leq N \leq\frac{n}2+1$ times is more than
enough to create polynomial decay in $\lambda$:
\begin{align*}
|t|^{-N} \int_{\langle y\rangle^{-1}}^1 (\lambda-\lambda_0)^{-N}\frac{\lambda^{\frac{n-1}2-N}}{\japanese[y]^{\frac{n-1}2}}d\lambda
\lesssim |t|^{-N}\japanese[y]^{2N-n}.
\end{align*}
Recalling that in this case we have $|y| <|t|^{\frac12}$, the resulting bound for this piece is
$|t|^{-\frac{n}{2}}$.
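To spell out this last step: since $|y|<|t|^{\frac12}$ and $|t|>4$, we have $\japanese[y]^2\leq 1+|t|\leq 2|t|$, while $2N-n\geq 0$, so
\begin{equation*}
|t|^{-N}\japanese[y]^{2N-n}\lesssim |t|^{-N}\,|t|^{\frac{2N-n}2} = |t|^{-\frac{n}2}.
\end{equation*}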
For the remaining interval, $ [|t|^{-\frac12},\japanese[y]^{-1}]$, we exploit instead the cancellation between
$R_0^+(\lambda^2)$ and $R_0^-(\lambda^2)$ using the algebraic identity \eqref{algebraic identity}.
We apply \eqref{algebraic identity} to $I_2^+(t,x,y)-I_2^-(t,x,y)$, where
\begin{align*}
I_2^\pm(t,x,y)
&=\int_0^\infty e^{it\lambda^2} \bigl(1-\rho(|t|^{\frac12}\lambda)\bigr)
\lambda \bigl\{R_{0}^\pm(\lambda^{2})V[R_{0}^\pm(\lambda^{2})V]^{m}
S^\pm(\lambda^2)R_{0}^\pm(\lambda^{2})\\
& \hskip 2in \times \ [VR_{0}^\pm(\lambda^{2})]^{m}VR_{0}^\pm(\lambda^{2})\bigr\}(x,y)
\,d\lambda\\
&=\int_{0}^{\infty}e^{it\lambda^{2}}\bigl(1-\rho(|t|^{\frac12}\lambda)\bigr)
\lambda \big\langle \delta_y, R_{0}^\pm(\lambda^{2})V[R_{0}^\pm(\lambda^{2})V]^{m}
S^\pm(\lambda^2)R_{0}^\pm(\lambda^{2}) \\
& \hskip 2in \times \ [VR_{0}^\pm(\lambda^{2})]^{m}VR_{0}^\pm(\lambda^{2})\delta_x \big\rangle d\lambda\\
&=\int_{0}^{\infty}e^{it\lambda^{2}}c_{x,y,2}^{\pm}(\lambda^2)d\lambda.
\end{align*}
Each term in the resulting sum contains a factor of $R_0^+(\lambda^2)
-R_0^-(\lambda^2)$, an integral operator whose kernel is pointwise
dominated by $\lambda^{n-2}$ (see \eqref{deriv ImR0}).
This is even true if the cancellation falls on $S^+(\lambda^2)$ because we can write
$$
S^+(\lambda^2) - S^-(\lambda^2) = -S^-(\lambda^2)\big(R_0^+(\lambda^2)- R_0^-(\lambda^2)\big)VS^+(\lambda^2).
$$
We will integrate by parts $\frac{n+1}2$ times if $n$ is odd and $\frac{n}2+1$
times if $n$ is even. Our analysis relies on the estimates of Proposition~\ref{ImR0 prop}.
In place of the weighted $L^1$ estimate \eqref{Gbound}, we use the following two bounds for the two possible
initial functions on which the resolvents act. For $0\leq j\leq \frac{n}2+1$, we have
\begin{align}
\Big\|V(\cdot) \Big(\frac{d}{d\lambda}\Big)^j R_0^\pm(\lambda^2){\textstyle (\cdot,y)}\Big\|_{L^{1,\frac32}}
&\lesssim\frac{\lambda^{-j}}{\japanese[y]^{n-2}}, \label{show1}\\
\Big\|V(\cdot) \Big(\frac{d}{d\lambda}\Big)^{j}\big(\Im R_0(\lambda^2)\big)(\cdot,y)\Big\|_{L^{1,\frac32}}
&\lesssim\lambda^{n-2-j}. \label{show2}
\end{align}
To see \eqref{show1}, we use the pointwise bound
$$
\Big|\Big(\frac{d}{d\lambda}\Big)^jR_0^\pm(\lambda^2)(x,y)\Big|
\lesssim \lambda^{-j}I_2 +\lambda^{\frac{n-3}2}I_{\frac{n+1}2+j}
$$
and apply Lemma~\ref{easylem} to obtain
\begin{align*}
\Big\|V(\cdot) \Big(\frac{d}{d\lambda}\Big)^j R_0^\pm(\lambda^2){\textstyle (\cdot,y)}\Big\|_{L^{1,\frac32}}
\lesssim \frac{\lambda^{-j}}{\japanese[y]^{n-2}}+\frac{\lambda^{\frac{n-3}2}}{\japanese[y]^{\frac{n-1}2-j}}
\lesssim \frac{\lambda^{-j}}{\japanese[y]^{n-2}},
\end{align*}
where the last inequality holds for $\lambda \leq \japanese[y]^{-1}$.
Similarly, to prove \eqref{show2} we use \eqref{deriv ImR0}; applying Lemma~\ref{easylem} and
treating the cases $0\leq j\leq \frac{n-1}2$ and $\frac{n-1}2<j\leq \frac{n}{2}+1$ separately, we obtain
\begin{align*}
\Big\|V(\cdot) \Big(\frac{d}{d\lambda}\Big)^{j}\big(\Im R_0(\lambda^2)\big)(\cdot,y)\Big\|_{L^{1,\frac32}}
\lesssim \lambda^{n-2-j} \big(1 +\lambda^{\frac32}\langle y\rangle^{\frac{3}{2}}\big)
\lesssim\lambda^{n-2-j},
\end{align*}
again, for $\lambda \leq \japanese[y]^{-1}$.
Using the estimates in Proposition~\ref{ImR0 prop}, \eqref{show1}, and \eqref{show2}, we get
$$
\Big|\Big(\frac{d}{d\lambda}\Big)^{j}\big(c_{x,y,2}^+(\lambda^2)-c_{x,y,2}^-(\lambda^2)\big)\Big|\lesssim \frac{\lambda^{n-1-j}}{\langle y\rangle^{n-2}}.
$$
Thus, an application of Lemma~\ref{intparts} with $N=\frac{n+1}2$ for $n$ odd, or $N=\frac{n}{2}+1$ for $n$ even
yields the bound
\begin{align*}
|t|^{-N}\int_{|t|^{-\frac12}}^{\langle y\rangle^{-1}} \frac{\lambda^{n-1-2N}}{\langle y\rangle^{n-2}}\,d\lambda
\lesssim |t|^{-\frac{n}2}.
\end{align*}
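For completeness, we record the computation behind this bound: the integration by parts produces the prefactor $|t|^{-N}$, and $n-1-2N\leq -2$ for either admissible choice of $N$, so the integral is dominated by its lower endpoint:
\begin{equation*}
|t|^{-N}\int_{|t|^{-\frac12}}^{\langle y\rangle^{-1}} \lambda^{n-1-2N}\,d\lambda
\lesssim |t|^{-N}\big(|t|^{-\frac12}\big)^{n-2N} = |t|^{-\frac{n}2},
\end{equation*}
with the additional factor $\langle y\rangle^{-(n-2)}\leq 1$ only improving matters.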
In each of the three cases discussed above, the difference $I^+_2(t,x,y)-I^-_2(t,x,y)$ is
seen to be bounded by a constant multiple of $|t|^{-\frac{n}2}$.
To complete the proof of the theorem, we need to show
$$|I^+_1(t,x,y)-I^-_1(t,x,y)|\lesssim |t|^{-\frac{n}2} \ \ \text{for} \ \ |t|>4.$$
Here,
\begin{align*}
I_1^\pm(t,x,y)
&=\int_0^\infty e^{it\lambda^2} \rho(|t|^{\frac12}\lambda)
\lambda \bigl\{R_{0}^\pm(\lambda^{2})V[R_{0}^\pm(\lambda^{2})V]^{m}
S^\pm(\lambda^2)R_{0}^\pm(\lambda^{2})\\
& \hskip 2in \times \ [VR_{0}^\pm(\lambda^{2})]^{m}VR_{0}^\pm(\lambda^{2})\bigr\}(x,y)
\,d\lambda\\
&=\int_{0}^{\infty}e^{it\lambda^{2}}\rho(|t|^{\frac12}\lambda)
\lambda \big\langle \delta_y, R_{0}^\pm(\lambda^{2})V[R_{0}^\pm(\lambda^{2})V]^{m}
S^\pm(\lambda^2)R_{0}^\pm(\lambda^{2}) \\
& \hskip 2in \times \ [VR_{0}^\pm(\lambda^{2})]^{m}VR_{0}^\pm(\lambda^{2})\delta_x \big\rangle d\lambda\\
&=\int_{0}^{\infty}e^{it\lambda^{2}}c_{x,y,1}^{\pm}(\lambda^2)d\lambda.
\end{align*}
Arguing as in Case 3 above, we see that
$$
\big|c_{x,y,1}^+(\lambda^2)-c_{x,y,1}^-(\lambda^2)\big|\lesssim \lambda^{n-1}.
$$
Thus,
$$
|I^+_1(t,x,y)-I^-_1(t,x,y)|\ \lesssim\ \int_0^{|t|^{-\frac12}} \lambda^{n-1}\,d\lambda \ \lesssim \ |t|^{-\frac{n}2}.
$$
This concludes the proof of Theorem~\ref{dispersive tail}.
\end{proof}
\section{Nondispersive Estimates}
\subsection{Nondispersive estimate for the term $l=1$} \label{sec:l=1}
To summarize the progress up to this point, we have decomposed the
perturbed resolvent $R_V^\pm(\lambda^2)$ into a finite Born series
with initial terms given by \eqref{first terms} and a tail given
by \eqref{tail}. In the previous sections, the contribution of the
tail was shown to satisfy a dispersive estimate at both high and
low energies. The dispersive behavior of the full evolution
$e^{itH}P_{ac}(H)$ is therefore dictated by the contribution from
the initial terms of the Born series.
We show that there are potentials in the class
\[X=\bigg\{V\in C^\alpha({{\mathbb R}}^n),\ \alpha<\tfrac{n-3}2, \ \text{supp}V
\subset B(0,5)\setminus B(0,\tfrac52)\bigg\} \] that do not yield a dispersive estimate for the term
corresponding to $l=1$ in \eqref{first terms}. It will follow, via an argument in the next subsection, that
the entire expression \eqref{first terms} cannot satisfy a dispersive estimate either. To define the
class of potentials more precisely, let $X$ be the completion of the appropriately supported $C^\infty$
functions with respect to the $W^{\alpha,\infty}$-norm,
\[ \norm[f][X] := \norm[(1+\Delta)^{\alpha/2}f][\infty]. \]
Fix the points $x_0, y_0 \in {{\mathbb R}}^n$ so that $x_0$ is the unit vector in the
first coordinate direction and $y_0 = -x_0$.
Now let $f^{{\varepsilon}}$ and $g^{{\varepsilon}}$ be smooth approximations of $f=\delta_{x_0}$
and $g=\delta_{y_0}$ which are supported in $B(x_0,{\varepsilon})$ and $B(y_0,{\varepsilon})$,
respectively, and have unit $L^1$-norm.
Define the expression
\begin{align*}
a_1^L(t,{\varepsilon},V) :=
&t^{\frac{n}2}\int e^{it\lambda}\psi_L(\lambda) \langle
[R_{0}^+(\lambda)(x,x_{1})V(x_1)R_{0}^+(\lambda)(x_{1},y) \\
&\hskip 0.5in - R_0^-(\lambda)(x,x_1)V(x_1)R_0^-(\lambda)(x_1,y)]
f^{\varepsilon}(x),g^{\varepsilon}(y)\rangle\, dx dx_1 dy d \lambda \\
= &t^{\frac{n}2} \int I_{L}(t, |x-x_{1}|,|y-x_{1}|)V(x_{1})
f^{\varepsilon}(x)g^{\varepsilon}(y)dx_{1}dxdy
\end{align*}
where $\psi$ can be any Schwartz function with $\psi(0) = 1$ and $\psi_L(\lambda) = \psi(\lambda/L)$.
Fubini's theorem is used to perform the $d\lambda$ integral first, noting that since $f^{\varepsilon}$, $g^{\varepsilon}$,
and $V$ all have disjoint support, the singularities of $R^{\pm}_0(\lambda)(x,x_1)$ and
$R^{\pm}_0(\lambda)(x_1,y)$ can be disregarded.
If the term corresponding to $l=1$ in the Born series \eqref{first terms}
satisfied a dispersive estimate, it would yield the bound
\begin{equation} \label{eq:a1dispersive}
\lim_{L\to \infty} |a_1^L(t,{\varepsilon},V)|
\ \leq \ C(V) \norm[f^{\varepsilon}][1]\norm[g^{\varepsilon}][1] \ =\ C(V).
\end{equation}
Observe that $a_1^L(t,{\varepsilon},V)$ is linear in the last entry and can therefore be viewed as a family of
linear maps indexed by the remaining parameters $(L,t,{\varepsilon})$. By the Uniform Boundedness Principle, if
a dispersive estimate for the $l=1$ term held for every potential $V \in X$, it would imply the sharper
inequality
\begin{equation}\label{a1bdd}
\sup_{L\geq 1} |a_1^L(t,{\varepsilon},V)| \leq C\norm[V][X].
\end{equation}
For $t \ll 1$ this will not be possible, thanks to the asymptotic description of the function $I_L(t,
|x-x_1|,|y-x_1|)$ stated below.
\begin{lemma} \label{lem:asymptotic}
Suppose $n \geq 3$ and $0 < t \leq 1$. Let $\psi:{{\mathbb R}}\to{{\mathbb R}}$ be a Schwartz function with Fourier transform
supported in the unit interval and satisfying $\psi(0) = 1$, and $K$ a compact subset of $(0,\infty)$.
There exist constants $C_1, C_2 < \infty$ depending on $n$, $\psi$, and $K$ such that
\begin{align} \label{eq:asymptotic}
\Big| I_L(t,|x-x_1|,|x_1-y|)
&- {\textstyle\frac{i}{2(-4\pi i\,t)^{n-\frac32}}
\Big(\frac{(|x-x_1|+|x_1-y|)^{n-2}}{|x-x_1|^{\frac{n-1}2} |x_1-y|^{\frac{n-1}{2}}}\Big)
e^{-i\frac{(|x-x_1|+|x_1-y|)^2}{4t}}} \Big| \notag\\
&\leq C_1 t^{-(n-\frac52)}
\end{align}
for all $L > C_2t^{-3}$ and $|x-x_1|, |x_1-y| \in K$.
If $t$ is held fixed, then the
remainder converges as $L\to\infty$ to a function $G(|x-x_1|,|x_1-y|,t)$
uniformly over all pairs of distances $|x-x_1|,|x_1-y| \in K$.
\end{lemma}
The proof of Lemma \ref{lem:asymptotic} is technical and is given in Section~6 below. An immediate
consequence of this lemma is the following
\begin{corollary}
Let $n \geq 3$, $0 <t \leq 1$, and ${\varepsilon} < \frac12$. The following bound is valid for all functions $V
\in X$ with $\norm[V][X] \leq 1$:
\begin{equation} \label{eq:a1infty}
\begin{aligned}
\lim_{L\to\infty} &\bigg|a_1^L(t,{\varepsilon},V)
- \frac{i\,t^{\frac{3-n}2}}{2(-4\pi i)^{n-\frac32}} \int
\bigg(\frac{(|x-x_1|+|x_1-y|)^{n-2}}{|x-x_1|^{\frac{n-1}2}
|x_1-y|^{\frac{n-1}{2}}} \bigg) \\
& \hskip 1.8in \times e^{-i\frac{(|x-x_1|+|x_1-y|)^2}{4t}}
V(x_1) f^{\varepsilon}(x) g^{\varepsilon}(y)\, dx_1 dx dy \bigg| \\
&\leq Ct^{\frac{5-n}2}\norm[f^{\varepsilon}][1]\norm[g^{\varepsilon}][1].
\end{aligned}
\end{equation}
\end{corollary}
\begin{proof}
If ${\varepsilon} <\frac12$, then we have $|x-x_1|,|x_1-y| \in [1,10]$
for every combination of points with $x\in \text{supp}(f^{\varepsilon})$, $y\in \text{supp}(g^{\varepsilon})$, $x_1 \in
\text{supp}(V)$. Thus the conditions of Lemma~\ref{lem:asymptotic} are satisfied, with the conclusion
that $I_L(t,\cdot,\cdot)$ converges uniformly as $L\to \infty$ to a bounded function in $x,x_1,y$.
The result then follows from the dominated convergence theorem and the observation that $\norm[V][1]
\leq C\norm[V][X] \leq C$.
\end{proof}
If the integral in \eqref{eq:a1infty} were taken in absolute values, the resulting bound on
$a_1^L(t,{\varepsilon},V)$ would be of size $|t|^{\frac{3-n}2}$. In dimension $n\geq 4$, this contrasts with the
desired estimate
$$
\lim_{L\to\infty}|a_{1}^{L}(t,{\varepsilon},V)|\leq C,
$$
which is uniform in $t$. Furthermore, for a fixed small time $t$ it is not difficult to construct a
potential $V_t \in X$ which cancels the oscillation of the factor $e^{-i(|x-x_1|+|x_1-y|)^2/(4t)}$.
Let $\phi$ be a smooth cutoff which is supported in the interval $[6,8]$
and $F:{{\mathbb R}}\to{{\mathbb R}}$ a nonnegative smooth function which satisfies
$F(s) = 0$ for all $s \leq 0$ and $F(s)=s$ for all $s \geq \frac12$.
Given a time $0<t\leq 1$, define
\begin{equation}
V_t(x_1) = C_n t^{\alpha}\phi(|x_0-x_1|+|x_1-y_0|)
F\Big(\cos\Big(\frac{(|x_0-x_1|+|x_1-y_0|)^2}{4t}\Big)\Big).
\end{equation}
The constant $C_n$ will be chosen momentarily. It is perhaps unnecessary to modify the cosine function
with $F$; however, the positivity of $F$ does guarantee that zero energy will be neither an eigenvalue
nor a resonance of $-\Delta + V_t$.
\begin{proposition}
There exists a constant $C_n >0$ so that the function
$V_t$ defined above satisfies $\norm[V_t][X] \leq 1$ for all $0<t\leq 1$.
\end{proposition}
\begin{proof}
It is equivalent to show that in the absence of the coefficient $C_n$,
$\norm[V_t][X]$ would be bounded by a finite constant uniformly in $t$.
The support of $\phi(|x_0-x_1|+|x_1-y_0|)$ is located within an annular
region bounded by the ellipsoids with foci $x_0,y_0$ and major axes of
length $6$ and $8$, respectively. As this region is bounded away from both
$x_0$ and $y_0$, the length sum
$|x_0-x_1| + |x_1-y_0|$ is a scalar $C^\infty$-function of $x_1$.
It follows that any sufficiently smooth function of
$\frac{|x_0-x_1|+|x_1-y_0|}{4t}$ on this domain should have
$C^\alpha$-norm controlled by $(1+t^{-\alpha})$. The leading
coefficient $t^{\alpha}$ then ensures that the $X$-norm will be
controlled by a uniform constant for all $|t| \leq 1$. Finally,
multiplication by the fixed smooth cutoff
$\phi(|x_0-x_1|+|x_1-y_0|)$ only increases the norm by another
finite constant.
\end{proof}
Now it is a simple matter to show that $V_t$ produces a counterexample to \eqref{a1bdd} and hence to
\eqref{eq:a1dispersive} for $0<t\ll 1$.
\begin{proposition} \label{prop:a1bound}
Suppose $n > 3$. There exist constants $T, C_1, C_2 > 0$ such that
if $0 < t \leq T$ and $0 < {\varepsilon} < C_1t$, then
\begin{equation*}
\lim_{L\to\infty} \Big|a_1^L(t,{\varepsilon},V_t)\Big| \ge C_2
t^{-(\frac{n-3-2\alpha}2)}.
\end{equation*}
\end{proposition}
\begin{proof}
Start with the asymptotic integral formula in \eqref{eq:a1infty}.
For any choice of points $x\in \text{supp}(f^{\varepsilon})$,
$y\in \text{supp}(g^{\varepsilon})$, $x_1\in\text{supp}(V_t)$,
the expression $\frac{(|x-x_1|+|x_1-y|)^{n-2}}{(|x-x_1|\,|x_1-y|)^{(n-1)/2}}$
is a smooth positive function of size comparable to 1.
Consider what happens to the integral over $dx_1$ in the special case when $x = x_0$, $y = y_0$. Then,
the oscillatory part of $V_t(x_1)$ is synchronized with the real part of
$e^{-i(|x-x_1|+|x_1-y|)^2/(4t)}$ so that the real part of the product is always positive and of size
approximately 1 on a set of approximately unit measure. The real part of the integral is then bounded
below by a positive constant.
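At the special points $x=x_0$, $y=y_0$ this synchronization can be written out explicitly: with $\theta(x_1):=\frac{(|x_0-x_1|+|x_1-y_0|)^2}{4t}$,
\begin{equation*}
\Re\Big[V_t(x_1)\,e^{-i\theta(x_1)}\Big]
= C_n t^{\alpha}\,\phi\big(|x_0-x_1|+|x_1-y_0|\big)\,F\big(\cos\theta(x_1)\big)\cos\theta(x_1)\ \geq\ 0,
\end{equation*}
since $F$ is nonnegative and vanishes on $(-\infty,0]$. Moreover, wherever $\cos\theta(x_1)\geq \frac12$ one has $F(\cos\theta)\cos\theta = \cos^2\theta \geq \frac14$, and this occurs on a subset of $\text{supp}(\phi)$ of measure comparable to 1.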
For arbitrary $x\in \text{supp}(f^{\varepsilon})$ and $y\in \text{supp}(g^{\varepsilon})$, it is possible to differentiate
under the integral sign in either of the variables $x$ or $y$ and each partial derivative is controlled
by $t^{-1}$. Thus the lower bound on the real part of the integral remains valid so long as
$|x-x_0|,|y-y_0| \lesssim t$, which is ensured by setting ${\varepsilon} < C_1t$.
The definition of $V_t$ also includes a factor of $t^\alpha$. When this is substituted into
\eqref{eq:a1infty}, the resulting leading coefficient is proportional to $t^{-(\frac{n-3-2\alpha}2)}$.
There is also an error term of unknown sign, but with size controlled by $t^{-(\frac{n-5-2\alpha}2)}$.
This can be absorbed into the lower bound for any $0<t\leq T$, provided $T$ is chosen sufficiently
small.
\end{proof}
\subsection{Nondispersive Estimate for the Full Evolution}
\begin{theorem}\label{T}
Suppose $n > 3$. There cannot exist a bound of the form
\begin{equation*}
\norm[e^{itH}P_{ac}f][\infty] \le C(V)|t|^{-\frac{n}2}\norm[f][1]
\end{equation*}
with $C(V) <\infty$ for every potential $V\in X$, $\norm[V][X] \leq 1$.
\end{theorem}
\begin{proof}
Assume the contrary and write $V = \theta W$ with $\norm[W][X] \leq 1$ and
$\theta \in [0,1]$. By assumption, we would then have the bound
\begin{align}
|\langle e^{itH}P_{ac}f,g\rangle| &=
\frac{1}{2\pi} \sup_{L\geq 1} \Big|\int_0^\infty e^{it\lambda}
\psi_L(\lambda)\langle [R_{\theta W}(\lambda+i0)-R_{\theta W}(\lambda-i0)]f,
g\rangle\,d\lambda \Big| \notag \\
&\leq C(W,\theta)|t|^{-\frac{n}2} \norm[f][1]\norm[g][1] \label{Stone}
\end{align}
for $\psi$ as in Lemma~\ref{lem:asymptotic} and for every $f,g\in L^1\cap L^2$ and, in particular, for
the functions $f^{\varepsilon}, g^{\varepsilon}$ defined in subsection~\ref{sec:l=1}.
The finite Born series expansion \eqref{Born series} allows us to write the perturbed resolvent
$R_{\theta W}(\lambda\pm i0)$ as the sum of a polynomial of degree $2m+1$ in $\theta$ and a tail. When
this is substituted into \eqref{Stone} above, along with the functions $f^{\varepsilon}, g^{\varepsilon} \in L^1\cap L^2$,
the tail is shown in Theorem~\ref{dispersive tail}
to be controlled by
$C|t|^{-\frac{n}{2}}\norm[f^{\varepsilon}][1]\norm[g^{\varepsilon}][1]$ for some $C$.
It follows that the initial terms must obey a similar bound. Write
this as
$$
\sup_{L\geq 1} \Big|P^L(\theta)\Big| := \sup_{L\geq 1} \Bigl|\sum_{k=0}^{2m+1}\theta^{k}a_{k}^{L}\Bigr|
\leq C(W,\theta),
$$
where the coefficients $\{a_k^L\}_{k=0}^{2m+1}$ of the polynomial $P^L$ are defined for each
$k\in\{0,1,\ldots, 2m+1\}$ and $L < \infty$ by the formula
$$
a_{k}^{L}(t,{\varepsilon},W)=t^{\frac{n}2}\int e^{it\lambda}\psi_L(\lambda) \big\langle \big[R_{0}^+(\lambda)[-W
R_{0}^+(\lambda)]^{k} -R_0^-(\lambda)[-W R_0^-(\lambda)]^k\big]f^{{\varepsilon}},g^{{\varepsilon}}\big\rangle d\lambda.
$$
Denote by $\mathbb{V}$ the $(2m+2)$-dimensional space of all polynomials of degree at most $2m+1$, and consider
the linear maps from $\mathbb{V}$ into ${{\mathbb R}}^{2m+2}$ defined by
$$
P=\sum_{k=0}^{2m+1} a_{k}\theta^{k} \mapsto \{a_{0}, \ldots, a_{2m+1}\}
$$
and
$$
P=\sum_{k=0}^{2m+1} a_{k}\theta^{k} \mapsto \Bigl\{P\big(0\big),
P\bigl(\tfrac{1}{2m+1}\bigr), \ldots, P\bigl(\tfrac{2m+1}{2m+1}\bigr)\Bigr\}.
$$
Clearly the two maps are bijections and thus one can express each coefficient $a_{k}$ as a linear
combination of the values $P(0),P(\frac{1}{2m+1}),\ldots, P(\frac{2m+1}{2m+1})$. From our assumption that
$C(W,\theta) < \infty$ for every $0\leq \theta \leq 1$, it follows that each of the expressions
$|P^L(0)|, |P^L(\tfrac{1}{2m+1})|,\ldots, |P^L(\frac{2m+1}{2m+1})|$, as well as their maximum, is
bounded uniformly in $L\geq 1$. One concludes that
\[ \sup_{L\geq 1} |a_1^L(t,{\varepsilon},W)| \leq C(W) <\infty \]
for every $W \in X$ with $\norm[W][X] \leq 1$.
This, however, is precisely the same statement as \eqref{eq:a1dispersive}
which was already shown to be false.
\end{proof}
\section{Proof of Lemma \ref{lem:asymptotic}}
The main ingredients of Lemma~\ref{lem:asymptotic} are a recurrence relation
(in $n$) for the resolvent kernels
and explicit computations in dimensions 2 and 3. With some abuse of notation,
define $R_n^\pm(\lambda)$ to be the free resolvent $\lim_{{\varepsilon}\downarrow 0}
(-\Delta -\lambda \mp i{\varepsilon})^{-1}$ in ${{\mathbb R}}^n$. The Stone formula dictates that
\begin{equation*} \begin{aligned}
\frac{1}{2\pi i} \int_0^{\infty} e^{it\lambda}
\big\langle[R_n^+(\lambda)-R_n^-(\lambda)]f, g\big\rangle \, d\lambda
&= \big\langle e^{-it\Delta}f, g \big\rangle \\
&= (-4\pi i\,t)^{-\frac{n}2} \iint_{{{\mathbb R}}^{2n}} e^{\frac{-i |x-y|^2}{4t}} f(x)
\bar{g}(y)\, dxdy
\end{aligned} \end{equation*}
for all $t \not= 0$ and $f, g$ (say) Schwartz functions.
Recall that the resolvents $R_n(z) = (-\Delta - z)^{-1}$ can be
defined for all $z \in \C \setminus {{\mathbb R}}^+$, and that
$R_n^\pm(\lambda)$ are the analytic continuations onto the
boundary from above and below, respectively. It follows that both
$R_n^+(\lambda)$ and $R_n^-(\lambda)$ can be defined for negative
values of $\lambda$. Moreover, $[R_n^+(\lambda) - R_n^-(\lambda)]
= 0$ for all $\lambda \le 0$. The integral above may therefore be
taken over the entire real line.
One further observation is that since $R_n^+(\lambda)$ is a holomorphic family of operators for
$\lambda$ in the upper halfplane and is uniformly bounded (as operators on $L^2$, for example) away from
the real axis, its inverse Fourier transform must be supported on the halfline $\{t \leq 0\}$.
Similarly, $R_n^-(\lambda)$, which is holomorphic in the lower halfplane, has inverse Fourier transform
supported in $\{t \geq 0\}$. This leads to the conclusion
\begin{equation} \label{eq:Rtransform}
\int_{{\mathbb R}} e^{it\lambda} R_n^-(\lambda,|x|)\, d\lambda =
\left\{ \begin{aligned}
-2\pi i(-4\pi i\,&t)^{-\frac{n}{2}} e^{\frac{-i|x|^2}{4t}},& &{\rm if}\ t>0 \\
&0, & &{\rm if}\ t < 0\\
\end{aligned} \right.
\end{equation}
for all $x \in {{\mathbb R}}^n$.
Setting $|x| = r$ in the preceding identity leads to the recurrence relation
\begin{equation}
R_{n+2}^-(\lambda,r) = -\frac1{2\pi r} \ptl[][r] \big[R_n^-(\lambda,r)\big].
\end{equation}
The same identity also holds for $R_n^+(\lambda,r)$.
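This recurrence can be read off from \eqref{eq:Rtransform}: applying $-\frac1{2\pi r}\ptl[][r]$ to the right-hand side, for $t>0$,
\begin{equation*}
-\frac1{2\pi r}\ptl[][r]\Big[-2\pi i(-4\pi i\,t)^{-\frac{n}{2}} e^{\frac{-i r^2}{4t}}\Big]
= -2\pi i(-4\pi i\,t)^{-\frac{n}{2}}\,\frac{i}{4\pi t}\, e^{\frac{-i r^2}{4t}}
= -2\pi i(-4\pi i\,t)^{-\frac{n+2}{2}} e^{\frac{-i r^2}{4t}},
\end{equation*}
because $\frac{i}{4\pi t} = (-4\pi i\,t)^{-1}$. The left-hand side becomes the inverse Fourier transform of $-\frac1{2\pi r}\ptl[][r]R_n^-(\lambda,r)$, and uniqueness of the Fourier transform yields the stated identity.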
\subsection{The cases $n = 2,3$}
It should first be noted that the integral $\int_{{\mathbb R}} e^{it\lambda}R_n^-(\lambda,r)\, d\lambda$ in
\eqref{eq:Rtransform} is never absolutely convergent and is properly interpreted as the Fourier
transform of a distribution. As such, its behavior at $t=0$ requires additional clarification.
\begin{lemma}
For any fixed $r > 0$ and $n = 2,3$, the expression $\int_{{\mathbb R}} e^{it\lambda} R_n^-(\lambda,r)\, d\lambda$
agrees with the distribution $f$ given by
\begin{equation} \label{eq:Rdistribution}
(f,\phi) = -2\pi i(-4\pi i)^{-\frac{n}{2}}
\lim_{a\downarrow 0} \int_a^\infty
t^{-\frac{n}2} e^{\frac{-i r^2}{4t}} \phi(t)\, dt
\end{equation}
for all Schwartz functions $\phi$.
\end{lemma}
\begin{proof}
Because of analyticity considerations, the identity above must be correct modulo distributions supported
on $t=0$. Let $\phi \in C^\infty_c({{\mathbb R}})$ have nonvanishing derivatives at $t=0$ and consider pairings of
the form $\langle f,N\phi(N\,\cdot)\rangle$.
On one hand, the function $t^{-n/2} e^{-i r^2/(4t)}\chi_{(0,\infty)}$ has a
continuous anti-derivative $I(t)$ with asymptotic behavior
$I(t) = O(t^{2-\frac{n}2})$ as $t$ approaches zero. Integrating by parts,
\begin{equation*}
\begin{aligned}
\lim_{a\downarrow 0} \int_a^\infty N \phi(Nt) t^{-\frac{n}2}
e^{\frac{-i r^2}{4t}}\, dt &= - N\lim_{a\downarrow 0} \phi(Na)I(a)
- \lim_{a\downarrow 0} \int_a^\infty N^2\phi'(Nt) I(t)\, dt \\
&= -N^2\int_0^\infty \phi'(Nt) I(t)\, dt \\
&= O(N^{\frac{n-2}{2}}) \quad {\rm in\ the\ limit\ }\ N \to \infty.
\end{aligned}
\end{equation*}
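The asymptotic behaviour of $I(t)$ claimed above follows from one integration by parts against the oscillation: since $\frac{d}{ds}e^{\frac{-ir^2}{4s}} = \frac{ir^2}{4s^2}e^{\frac{-ir^2}{4s}}$, for fixed $r>0$ we may write
\begin{equation*}
\int_0^t s^{-\frac{n}2}e^{\frac{-ir^2}{4s}}\,ds
= \frac{4t^{2-\frac{n}2}}{ir^2}e^{\frac{-ir^2}{4t}}
- \frac{4}{ir^2}\Big(2-\frac{n}2\Big)\int_0^t s^{1-\frac{n}2}e^{\frac{-ir^2}{4s}}\,ds,
\end{equation*}
and for $n=2,3$ the last integrand satisfies $1-\frac{n}2>-1$, so both terms are $O(t^{2-\frac{n}2})$.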
Meanwhile, the pairing $\langle f,N\phi(N\,\cdot)\rangle$ is defined by
Parseval's identity to be
\begin{equation*}
\big\langle f, N\phi(N(\cdot))\big\rangle = \int_{{\mathbb R}}
R_n^-(\lambda,r) \hat{\phi}(\lambda/N)\, d\lambda.
\end{equation*}
For fixed $r > 0$, the resolvent $R_n^-(\lambda,r)$
possesses the asymptotic expansion
\begin{equation*}
R_n^-(\lambda,r) = c_1r^{\frac{1-n}2}
\lambda^{\frac{n-3}{4}}e^{-ir\sqrt{\lambda}}
+ O(\lambda^{\frac{n-5}{4}})
\end{equation*}
as $\lambda\to\infty$ and is integrable near $\lambda=0$. Thus, it has a continuous anti-derivative
$J(\lambda,r)$ which grows no faster than $O(\lambda^{\frac{n-1}{4}})$. Integrating by parts,
\begin{equation*}
\begin{aligned}
\big\langle f, N\phi(N\,\cdot)\big\rangle &= - N^{-1} \int_{{\mathbb R}}
J(\lambda,r) \hat{\phi}'(\lambda/N)\, d\lambda \\
&= O(N^{\frac{n-1}{4}}).
\end{aligned}
\end{equation*}
As $n=2,3$, the difference between the left and right sides of \eqref{eq:Rdistribution} grows no faster
than $O(N^{\frac12})$ when applied to the test functions $N\phi(Nt)$. It is well-known that any nonzero
distribution $g$ supported on $t=0$ has the form $(g,\phi) = \sum_{k=0}^M c_k \phi^{(k)}(0)$, and would
therefore grow at least as fast as $O(N)$ when applied to the same family of test functions.
\end{proof}
Having established the inverse Fourier transform of $R_n^-(\lambda,r)$ for
each $r > 0$, it is possible to calculate the inverse Fourier transform of
any product $R_n^-(\lambda,r)R_n^-(\lambda,s)$ by taking convolutions.
Given a choice of $r,s,t > 0$,
\begin{equation} \label{eq:convolution}
\int_{{\mathbb R}} e^{it\lambda}R_n^-(\lambda,r)R_n^-(\lambda,s)\,d\lambda
= \frac{-2\pi}{(-4\pi i)^n}
\int_0^t e^{-i(\frac{r^2}{4u} + \frac{s^2}{4(t-u)})}
\frac{du}{u^{\frac{n}2}(t-u)^{\frac{n}2}}
\end{equation}
where the Fourier transform has introduced a normalizing factor of
$(2\pi)^{-1}$. To make the complex exponential more manageable,
change variables to
\begin{equation*}
\frac{v}{t} = \frac{r^2}{u} + \frac{s^2}{t-u} - \frac{r^2+s^2}{t}
= \frac{(t-u)^2r^2 + u^2s^2}{u(t-u)t}.
\end{equation*}
The range of possible values for $v$ is $[2rs, \infty)$. Based on the
quadratic relationship
\begin{equation*}
(r^2+s^2+v)u^2 -(2r^2+v)tu + r^2t^2 = 0,
\end{equation*}
the variable substitutions for $u$ and $(t-u)$ are given by
\begin{equation*}
u = \bigg(\frac{2r^2}{2r^2+v \mp\sqrt{v^2-4r^2s^2}}\bigg)t, \qquad
t-u
= \bigg(\frac{\frac12\big(\sqrt{v+2rs}\mp\sqrt{v-2rs}\big)^2}{2r^2+v\mp
\sqrt{v^2-4r^2s^2}}\bigg)t.
\end{equation*}
The substitution formula for the differentials is
\begin{equation*}
du = \pm t\bigg(\frac{r \big(\sqrt{v+2rs}\mp\sqrt{v-2rs}\big)}
{2r^2+v \mp\sqrt{v^2-4r^2s^2}}\bigg)^2\frac{dv}{\sqrt{v^2-4r^2s^2}}.
\end{equation*}
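One can check directly that these expressions satisfy the quadratic relation: writing $S=\sqrt{v^2-4r^2s^2}$ and $u = \frac{2r^2t}{2r^2+v\mp S}$, multiplying the quadratic by $\frac{(2r^2+v\mp S)^2}{r^2t^2}$ reduces it to
\begin{equation*}
4r^2(r^2+s^2+v) - 2(2r^2+v)\big(2r^2+v\mp S\big) + \big(2r^2+v\mp S\big)^2
= 4r^2(r^2+s^2+v) - (2r^2+v)^2 + S^2 = 0,
\end{equation*}
since $S^2 = v^2-4r^2s^2$ and $(2r^2+v)^2 = 4r^4+4r^2v+v^2$.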
Making all appropriate substitutions and correctly accounting for the fact that each value of $v > 2rs$
is attained twice in $u\in(0,t)$, the integral in \eqref{eq:convolution} becomes
\begin{align}
\int_{{\mathbb R}} e^{it\lambda}R_2^-(\lambda,r)R_2^-(\lambda,s)\,d\lambda
&= \frac{1}{4\pi t} e^{-i\frac{r^2+s^2}{4t}}
\int_{2rs}^\infty \frac{e^{-i\frac{v}{4t}}}{\sqrt{v^2-4r^2s^2}}\,dv\notag \\
&= \frac{1}{4\pi t} e^{-i\frac{r^2+s^2}{4t}} H_0^{(1)}\big(\tfrac{-rs}{2t}\big) \label{eq:2dim}
\end{align}
in the case $n=2$. Here, $H_0^{(1)}$ is the Hankel function introduced in Section~2. Some relevant
properties of this function are that $H_0^{(1)}(z)$ is analytic in the upper halfplane and decays
asymptotically like $\sqrt{\pi i/2z}e^{iz}$ as $z \to \infty$ along any ray.
In the case $n=3$, the integral in \eqref{eq:convolution} becomes
\begin{equation*} \begin{aligned}
\int_{{\mathbb R}} &e^{it\lambda}R_3^-(\lambda,r)R_3^-(\lambda,s)\,d\lambda \\
&= \frac{-2\pi\,e^{-i\frac{r^2+s^2}{4t}}}{(-4\pi i)^3t^2}\int_{2rs}^\infty
{\textstyle \Big[\Big(\frac{2r^2+v+\sqrt{v^2-4r^2s^2}}
{r\big(\sqrt{v+2rs}+\sqrt{v-2rs}\big)}\Big)
+ \Big(\frac{2r^2+v-\sqrt{v^2-4r^2s^2}}
{r\big(\sqrt{v+2rs}-\sqrt{v-2rs}\big)}\Big)\Big]} \\
&\hskip 3.7in\times \frac{e^{-i\frac{v}{4t}}}{\sqrt{v^2-4r^2s^2}}\,dv \\
&= \frac{-2\pi\,e^{-i\frac{r^2+s^2}{4t}}}
{(-4\pi i)^3\,t^2}\Big(\frac{r+s}{rs}\Big)
\int_{2rs}^\infty \frac{e^{-i\frac{v}{4t}}}{\sqrt{v-2rs}}\,dv. \\
\end{aligned}
\end{equation*}
At this point it remains to calculate the Fourier transform of an inverse
square-root function, which yields
\begin{equation*}
\int_{2rs}^\infty \frac{e^{-i\frac{v}{4t}}}{\sqrt{v-2rs}}\,dv
= e^{-i\frac{rs}{2t}} \sqrt{-4\pi i\,t}.
\end{equation*}
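This last identity is the standard Fresnel-type integral: after the shift $w=v-2rs$,
\begin{equation*}
\int_{2rs}^\infty \frac{e^{-i\frac{v}{4t}}}{\sqrt{v-2rs}}\,dv
= e^{-i\frac{rs}{2t}}\int_0^\infty w^{-\frac12}e^{-\frac{i}{4t}w}\,dw
= e^{-i\frac{rs}{2t}}\,\Gamma\big(\tfrac12\big)\Big(\frac{i}{4t}\Big)^{-\frac12}
= e^{-i\frac{rs}{2t}}\sqrt{-4\pi i\,t},
\end{equation*}
where the integral is interpreted in the regularized sense $\lim_{{\varepsilon}\downarrow 0}\int_0^\infty w^{-\frac12}e^{-({\varepsilon}+\frac{i}{4t})w}\,dw$ and we use $\int_0^\infty w^{-\frac12}e^{-aw}\,dw = \sqrt{\pi/a}$ for $\Re a>0$.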
The final result is
\begin{equation} \label{eq:3dim}
\int_{{\mathbb R}} e^{it\lambda}R_3^-(\lambda,r)R_3^-(\lambda,s)\,d\lambda
= \frac{1}{2i(-4\pi i\,t)^{3/2}}\Big(\frac{r+s}{rs}\Big)
e^{-i\frac{(r+s)^2}{4t}}.
\end{equation}
\subsection{Dimensions $n > 3$}
The recurrence relation for $R_{n+2}^-(\lambda)$ makes it possible to
compute the analogous terms in dimensions $n=5, 7, \ldots$, by
repeatedly applying the differential operator $(4\pi^2 rs)^{-1}
\frac{\partial^2}{\partial r\partial s}$ to the three-dimensional result
\eqref{eq:3dim}.
For small values of $t$, the leading-order term occurs when all derivatives
fall on $e^{-i(r+s)^2/(4t)}$. This leads to the following asymptotic
expression as $t\to 0$, which is valid in any odd dimension $n \ge 3$.
\begin{equation} \label{eq:ndim}
\int_{{\mathbb R}} e^{it\lambda}R_n^-(\lambda,r)R_n^-(\lambda,s) \, d\lambda
= \frac{1}{2i(-4\pi i\,t)^{n-\frac32}}
\bigg[\frac{(r+s)^{n-2}}{(rs)^{\frac{n-1}{2}}}\bigg]
e^{-i\frac{(r+s)^2}{4t}}
+ O\big(t^{-(n-\frac52)}\big).
\end{equation}
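To illustrate the leading-order bookkeeping, consider the first application of the recurrence to \eqref{eq:3dim}: when both derivatives fall on the exponential,
\begin{equation*}
\frac{1}{4\pi^2 rs}\,\frac{\partial^2}{\partial r\,\partial s}\,e^{-i\frac{(r+s)^2}{4t}}
= \frac{1}{4\pi^2 rs}\Big(\frac{-i(r+s)}{2t}\Big)^2 e^{-i\frac{(r+s)^2}{4t}} + \ldots
= \frac{(r+s)^2}{rs}\,(-4\pi i\,t)^{-2}\,e^{-i\frac{(r+s)^2}{4t}} + \ldots,
\end{equation*}
which converts the prefactor $(-4\pi i\,t)^{-\frac32}$ into $(-4\pi i\,t)^{-\frac72}$ and multiplies the algebraic factor $\frac{r+s}{rs}$ by $\frac{(r+s)^2}{rs}$, in agreement with \eqref{eq:ndim} for $n=5$. The omitted terms, in which a derivative falls on the algebraic prefactor instead, carry at least one fewer power of $t^{-1}$.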
The same result is true in even dimensions as well. To see this, recall that $H_0^{(1)}(z) = (\frac{\pi
i}{2z})^{1/2}e^{iz}\omega(z)$, where the derivatives of $\omega$ satisfy the following bounds as $|z|$
goes to infinity:
\begin{equation*}
\lim_{z\to\infty} \omega(z) = 1, \qquad
\big({\textstyle\frac{d}{dz}}\big)^k \omega(z) = O(|z|^{-k}),\ k = 1,2,\ldots
\end{equation*}
The expression in \eqref{eq:2dim} can then be rewritten as
\begin{equation*}
\int_{{\mathbb R}} e^{it\lambda}R_2^-(\lambda,r)R_2^-(\lambda,s)\,d\lambda
= \frac{1}{2i(-4\pi i\,t\;rs)^{1/2}}e^{-i\frac{(r+s)^2}{4t}}
\omega\big({\textstyle -\frac{rs}{2t}}\big).
\end{equation*}
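The equivalence with \eqref{eq:2dim} is a matter of bookkeeping: substituting $z=-\frac{rs}{2t}$ into $H_0^{(1)}(z) = \big(\frac{\pi i}{2z}\big)^{1/2}e^{iz}\omega(z)$ gives
\begin{equation*}
\frac{1}{4\pi t}\,e^{-i\frac{r^2+s^2}{4t}}\,H_0^{(1)}\big(\tfrac{-rs}{2t}\big)
= \frac{1}{4\pi t}\Big(\frac{-\pi i\,t}{rs}\Big)^{1/2}e^{-i\frac{(r+s)^2}{4t}}\,
\omega\big(\tfrac{-rs}{2t}\big),
\end{equation*}
and $\frac{1}{4\pi t}(-\pi i\,t)^{1/2} = \frac{1}{2i}(-4\pi i\,t)^{-1/2}$, as one checks by squaring both sides.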
Applying the differential operators $\ptl[][r]$ and $\ptl[][s]$ only increases
the
degree of the singularity at $t=0$ when the derivative falls on the
term $e^{-i(r+s)^2/(4t)}$. If the derivative falls instead on
$\omega(-\frac{rs}{2t})$,
one power of $t$ is added to the denominator, but the effect is cancelled by
the faster decay of $\frac{d}{dz}\omega(z)$.
Consequently, when $(4\pi^2 rs)^{-1}\frac{\partial^2}{\partial r \partial s}$
is applied iteratively to \eqref{eq:2dim}, the leading-order term results
from having all of the derivatives fall on $e^{-i(r+s)^2/(4t)}$.
The recurrence relation for $R_{n+2}^-(\lambda)$ then dictates that
\begin{equation} \tag{\ref{eq:ndim}}
\int_{{\mathbb R}} e^{it\lambda}R_n^-(\lambda,r)R_n^-(\lambda,s)\,d\lambda
= \frac{1}{2i(-4\pi i\,t)^{n-\frac32}}
\bigg[\frac{(r+s)^{n-2}}{(rs)^{\frac{n-1}{2}}}\bigg] e^{-i\frac{(r+s)^2}{4t}}
+ O\big(t^{-(n-\frac52)}\big)
\end{equation}
for dimensions $n=4, 6, \ldots$, as desired. The results of this calculation
can be summarized as follows.
\begin{proposition}
Suppose $n \ge 3$ and let $K$ be a compact subset of $(0,\infty)$. There exist constants $C_1, C_2 <
\infty$, depending on $n$ and $K$, such that the remainder function
\begin{equation*}
G(r,s,t) :=
\int_{{\mathbb R}} e^{it\lambda}R_n^-(\lambda,r)R_n^-(\lambda,s)\,d\lambda
- \frac{1}{2i(-4\pi i\,t)^{n-\frac32}}
\bigg(\frac{(r+s)^{n-2}}{(rs)^{\frac{n-1}{2}}}\bigg) e^{-i\frac{(r+s)^2}{4t}}
\end{equation*}
satisfies the estimates
\begin{equation*}
|G(r,s,t)| \leq C_1 t^{-(n-\frac52)}, \qquad
\Big|\ptl[][t]G(r,s,t)\Big| \leq C_2 t^{-(n-\frac12)},
\end{equation*}
uniformly in $r,s \in K$ and $0<t\leq 1$.
\end{proposition}
\begin{proof}
One obtains an exact expression for $G(r,s,t)$ by differentiating the
base case $n=2$ or
$n=3$. Under the assumption $r,s \in K$, every monomial in $r$ and $s$
(including those with fractional and/or negative exponents) can be dominated
by a constant. Every expression of the form
$t^{-k}\frac{d^k}{dz^k}\omega\big(\frac{-rs}{2t}\big)$ can also be bounded
by a constant. Finally, nonnegative powers of $t$ are smaller than 1.
The function $G(r,s,t)$ consists of all the lower-order terms where at
least one of the partial derivatives $\ptl[][r], \ptl[][s]$ does not fall on
the exponential $e^{-i(r+s)^2/(4t)}$.
It follows that each of these terms is $O(t^{-(n-\frac52)})$.
If the derivative $\ptl[][t]$ is taken at the end, this can only increase
the sharpness of the singularity by a factor of $t^{-2}$.
\end{proof}
To be precise, the proposition above is describing the Fourier transform of a distribution as the
integrand $R_n^-(\lambda,r)R_n^-(\lambda,s)$ experiences growth on the order of
$|\lambda|^{\frac{n-3}2}$. In Lemma~\ref{lem:asymptotic}, the auxiliary function $\psi_L(\lambda)$ is
introduced to make the integral absolutely convergent. This has the effect of convolving the
distribution $G(r,s,\cdot)$ with the approximate identity $(2\pi)^{-1}\widehat{\psi_L}$.
At a fixed time $0<t \leq 1$, if $L \geq 2t^{-1}$ one can estimate the
effect of the convolutions
\begin{equation*}
\Big| \big[(2\pi)^{-1}\widehat{\psi_L} * (\cdot)^{-(n-\frac32)}
e^{-i\frac{r^2+s^2}{4(\cdot)}}\big]\, (t) - t^{-(n-\frac32)}
e^{-i\frac{r^2+s^2}{4t}} \Big|
\leq C_{n,K}L^{-1}t^{-(n+\frac12)}
\end{equation*}
and
\begin{equation*}
\Big| \big[(2\pi)^{-1}\widehat{\psi_L} * G(r,s,\cdot)\big]\,(t) - G(r,s,t)\Big|
\leq C_{n,K}L^{-1}t^{-(n-\frac12)}
\end{equation*}
by using the Mean Value Theorem and the support
property of $\hat{\psi}$. If $L > Ct^{-3}$, these resulting differences
are no larger than the initial size estimate for $G(r,s,t)$. Furthermore,
at fixed $0<t\leq 1$ they vanish in the limit $L\to\infty$ uniformly over
all pairs $r,s\in K$.
Recall the definition of $I_L(t,|x-x_1|,|x_1-y|)$ in the notation of this
section:
\begin{multline*}
I_L(t,|x-x_1|,|x_1-y|) = \int e^{it\lambda} \Big[R_n^+(\lambda,|x-x_1|)
R_n^+(\lambda,|x_1-y|) \\
- R_n^-(\lambda,|x-x_1|)R_n^-(\lambda,|x_1-y|)\Big]\,d\lambda.
\end{multline*}
Under the substitutions $r = |x-x_1|$ and $s = |x_1-y|$, we have fully characterized the contribution of
the term $e^{it\lambda}R_n^-(\lambda,r)R_n^-(\lambda,s)$ to the integral. The inverse Fourier transform
of $R_n^+(\lambda,r)R_n^+(\lambda,s)$ is a distribution supported on the half line $\{t \leq 0\}$
because of analyticity considerations. After convolution with $\widehat{\psi_L}$, it will be supported
in $(-\infty,L^{-1}]$ and therefore vanishes at any $t > 0$ once $L > t^{-1}$.
This concludes the proof of Lemma~\ref{lem:asymptotic}.
\section{Introduction}
It has long been recognized in the physics literature that ``spatial random permutations'' -- laws on permutations defined from spatial models -- are intimately related to the behaviour of low-temperature gases. We begin by recounting briefly the first such example.
In 1953, Feynman \cite{feynman} wrote the quantum-mechanical partition function for helium as a sum over the energy associated to certain interacting Brownian particles that may interchange their positions over a finite-time interval. He argued that the $\lambda$-transition undergone by the gas at low temperature is reflected by the appearance of large cycles in a measure on permutations naturally associated to this representation of the partition function.
To describe more precisely this law on permutations, we consider only the hard-core instance (which formally corresponds to a potential that is $+\infty$ when it is non-zero; the potential for helium is finite, varying from highly positive below the atomic radius to slightly negative on the order of this radius).
Fix dimension $d \geq 2$, as well as a ``time'' (or inverse-temperature) parameter $\beta \in (0,\infty)$ and a small ``interaction-range'' parameter $r > 0$.
Scatter a large number $N$ of points independently and uniformly in the $d$-dimensional torus of volume $N$, and run from each of them an independent Brownian motion for time $\beta$; the law appearing in Feynman's representation of the gas is obtained by conditioning the system on the time-$\beta$ configuration of $N$ points coinciding with the time-zero configuration, and on the avoidance constraint that no pair of points be at distance less than $r$ at any time $t \in [0,\beta]$. A random permutation arises by mapping each point at time zero to the point at time $\beta$ obtained by following the Brownian path beginning at the point during the period $[0,\beta]$.
It is anticipated that, when $d \geq 3$, and $r > 0$ is fixed at a small enough value, cycles of macroscopic volume (of order $N$) appear in the model, provided that $\beta$ exceeds a critical value, and that this behaviour reflects the condensation of the gas at low-temperature.
The behaviour of Feynman's model is understood rigorously only in the non-interacting case (where formally $r=0$, so that no avoidance conditioning is applied). Here,
the existence of the critical value for the presence of macroscopic cycles
was proved by S\"ut\H{o} \cite{sutoone,sutotwo}, who showed that it coincides with the critical density of the ideal Bose gas identified by Einstein \cite{einstein}. Some extensions of these results to other non-interacting models are made in \cite{buspatial}. To the best of our knowledge, no direct argument has been made to establish the existence of large cycles in an interacting model in a Euclidean setting.
It is a physically important and mathematically very interesting question, then, to prove the presence of large cycles in natural models of spatial random permutations.
\subsection{Main results}
In this article, we study the cycles in the random stirring model on a tree. The random stirring model on a given graph $G$
is the stochastic process $\sigma$ mapping $[0,\infty)$ to permutations of the vertex-set of $G$ which starts at the identity and under which the transposition associated to each edge in $G$
is performed at each of the points in a Poisson process with mean one, independently for each edge.
For each $T \in [0,\infty)$, we will refer to the marginal law $\sigma_T$ as the random stirring model with parameter~$T$.
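This definition is straightforward to simulate. The following Python sketch is our illustration and not part of the paper's analysis; the function names are ours. It samples $\sigma_T$ on a small finite graph by laying down the rate-one Poisson events on each edge and performing the corresponding transpositions in chronological order.

```python
import random

def random_stirring(edges, n_vertices, T, seed=0):
    """Sample sigma_T: starting from the identity, perform the
    transposition of each edge at the points of an independent
    rate-one Poisson process on [0, T], in chronological order."""
    rng = random.Random(seed)
    events = []
    for (u, v) in edges:
        t = rng.expovariate(1.0)      # inter-arrival times are Exp(1)
        while t <= T:
            events.append((t, u, v))
            t += rng.expovariate(1.0)
    events.sort()
    # sigma[x] is the current position of the particle begun at x
    sigma = list(range(n_vertices))
    for _, u, v in events:
        for x in range(n_vertices):   # swap the particles sitting at u and v
            if sigma[x] == u:
                sigma[x] = v
            elif sigma[x] == v:
                sigma[x] = u
    return sigma

def cycle_lengths(sigma):
    """Lengths of the cycles of a permutation given as a list."""
    seen, lengths = set(), []
    for x in range(len(sigma)):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                n, y = n + 1, sigma[y]
            lengths.append(n)
    return lengths
```

For $T = 0$ no event occurs and $\sigma_0$ is the identity; as $T$ grows, longer cycles typically appear.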
Omer Angel \cite{angel} has proved that, on a regular tree of degree at least five, and for a certain interval of values of $T$, the random stirring model
with parameter $T$ on the tree has infinite cycles almost surely.
We now state our main theorem. It develops Angel's result by dispensing with the hypothesis that vertices have at least four offspring, and by applying for all sufficiently high $T$ (though his result begins to apply at slightly smaller values of $T$, as we shortly discuss).
Terms from graph theory are reviewed in Subsection \ref{secctrw}.
\begin{theorem}\label{thmone}
Let $G$ be any infinite rooted tree of uniformly bounded degree each of whose vertices has at least two offspring. Then there exists $T_0 \in (0,\infty)$ such that
if $T \geq T_0$ then the random stirring model with parameter $T$ contains infinite cycles almost surely.
\end{theorem}
The second theorem quantifies the value of $T_0$ for high-degree trees.
\begin{theorem}\label{thmtwo}
Let $d \geq 39$.
Let $G$ be an infinite rooted tree of uniformly bounded degree
each of whose vertices has at least $d$ offspring.
Then we may choose $T_0 = 429 d^{-1}$ in the statement of Theorem~\ref{thmone}.
\end{theorem}
\subsection{Literature on the random stirring model}
The random stirring model (which is also called the random interchange model) was introduced in \cite{harris}. Its physical relevance was indicated by B{\'a}lint T{\'o}th \cite{toth}, who used it to give a representation of the spin-$1/2$ Heisenberg ferromagnet; the lecture notes \cite{guw} contain an overview of this topic. Recent mathematical progress on the model
includes the resolution of Aldous' conjecture identifying its spectral gap \cite{clr}, and a formula for the probability that the random permutation consists of a single cycle \cite{alonkozma}.
The emergence of a giant component under percolation on the complete graph as the percolation parameter
$p$ increases through values near $1/n$ has been intensively studied. This transition is accompanied by the appearance of large-scale cycles in the associated random stirring model, as we now review.
Under the uniform measure on permutations on a finite set $V$, the lengths of cycles, normalized by $\vert V \vert$, and listed in decreasing order, converge to the Poisson-Dirichlet distribution with parameter one. Studying a model very closely related to the random stirring model for the complete graph, Oded Schramm considered the law on permutations of an $n$-point set given by composing $tn$ uniform random transpositions \cite{comprantrans}. (We will refer to this law as the $(n,t)$-random composition model.) Under this law, say that two vertices are connected if a transposition has been made on the edge between them.
Reflecting the emergence of a giant component in percolation on the $n$-point complete graph at parameter $p = 1/n$, the $(n,t)$-random composition model with $t = 1 + \epsilon$ contains a giant connected component of some density $\theta(\epsilon) \in (0,1)$.
It is shown in \cite{comprantrans} that the ordered list of cycle lengths normalized by $\theta(\epsilon) n$ converges to the Poisson-Dirichlet distribution of parameter one; that is, a local equilibrium for large cycles inside the giant connected component is achieved as soon as this component becomes macroscopic.
Nathana{\"e}l Berestycki \cite{berestycki} has given a short proof that a cycle exists of size $\Theta(n)$ when $t = 1 + \epsilon$.
\subsection{The cyclic-time random walk}\label{secctrw}
Our analysis of the random stirring model exploits a closely related dependent random walk that Omer Angel in~\cite{angel} called the cyclic-time random walk. We now introduce further notation and define this walk.
We begin by recalling some graph-theoretic notation. The vertex and edge-sets of a given graph $G$ will be denoted by $V(G)$ and $E(G)$.
A graph is rooted if it has a distinguished vertex, the root, that we will denote by $\phi$. We write $d:V(G) \times V(G) \to \mathbb{N}$ for the graphical distance on $G$. A connected graph without cycles is called a tree. In a tree, there is a unique simple path $P_v$ leading from any given vertex $v$ to the root; the first element after $v$ on $P_v$ is called the parent of $v$, and each vertex is called an offspring of its parent. For $v,w \in V(G)$, $v$ is called a descendent of $w$ if $w$ is a vertex in $P_v$; $v$ is called a strict descendent of $w$ if, in addition, it is not equal to~$w$. Note that the set of descendents of a given vertex induces a subtree of $G$ (which we call the descendent tree of the vertex).
For each vertex $v \in V(G)$, we write $E_v$ for the set of edges incident to $v$; we write $\degr{v} = \big\vert E_v \big\vert$ for the degree of~$v$.
For each edge $e \in E(G)$, the incident vertex of $e$ closer to $\phi$ will be called the parent vertex of $e$ and will be denoted by $e^+$; the other will be called the child vertex of $e$ and denoted by $e^-$.
Throughout we take $G$ to be a rooted tree whose vertex degree is uniformly bounded and each of whose vertices has at least two offspring. Sometimes we will further invoke the hypothesis that, for some given $d \geq 2$,
\begin{equation}\label{eqatleastdegg}
\text{each vertex in $G$ has at least $d$ offspring.}
\end{equation}
We now present a construction of cyclic-time random walk. Throughout, fix $T \in (0,\infty)$. For convenience, suppose that $G$ is embedded in $\mathbb{R}^2$, so that each element of $V(G)$ is identified with a point in $\mathbb{R}^2$ and each element $e \in E(G)$ with the line segment $[v_1,v_2] \subseteq \mathbb{R}^2$ where $e = (v_1,v_2)$ for $v_1,v_2 \in V(G)$. For each $v \in V(G)$, let the pole at $v$,
$\{ v \} \times [0,T) \subseteq \mathbb{R}^3$, denote the line segment of length $T$ that rises vertically from $v$. Elements of $E(G) \times [0,T)$ will be called bars.
The bar $b = (e,h)$ is said to be supported on the edge $e$ and to have height $h$.
Note that the bar $(e,h)$ is a horizontal line segment which intersects the poles at $e^+$ and $e^-$; the intersection points $(e^+,h)$ and $(e^-,h)$ will be called the joints of $(e,h)$.
The bar set $E(G) \times [0,T)$ carries the product of counting and Lebesgue measure on its components. (As a shorthand, we will refer to this product measure simply as Lebesgue measure.)
Let $(v,h) \in V(G) \times [0,T)$. Unit-speed cyclic upward motion from $(v,h)$ is the process $[0,\infty) \to \{ v \} \times [0,T): t \to \big(v, (h + t) \, \textrm{mod} \, T \big)$.
Let $\mathcal{B}_0 \subseteq E(G) \times [0,T)$ be a collection of bars. Cyclic-time random meander $X^{\mathcal{B}_0}_{(v,h)}:[0,\infty) \to V(G) \times [0,T)$,
among $\mathcal{B}_0$ and with initial condition $(v,h) \in V(G) \times [0,T)$, is the following process. First, $X^{\mathcal{B}_0}_{(v,h)}(0) = (v,h)$; the process pursues unit-speed cyclic upward motion from $(v,h)$ until (the possibly infinite time at which) it reaches the joint of a bar in $\mathcal{B}_0$, when it jumps to the other joint of this bar. The process $X^{\mathcal{B}_0}_{(v,h)}$ then continues by iterating the same rule, until it is defined on all of $[0,\infty)$. The process is chosen to be right-continuous with left limits.
We write $X^{\mathcal{B}_0}$
for $X^{\mathcal{B}_0}_{(\phi,0)}$. (There are choices of $\mathcal{B}_0$ for which these rules fail to define $X^{\mathcal{B}_0}_{(v,h)}$ on all of $[0,\infty)$. It is a simple matter to verify that this difficulty does not arise in the case that is relevant to us and which we now discuss.)
We write $\mathbb{P}_T$ for a probability measure carrying a bar collection $\mathcal{B} \subseteq E(G) \times [0,T)$
having Poisson law with intensity one with respect to Lebesgue measure.
Cyclic-time random meander with parameter~$T$ is the random process $X^{\mathcal{B}}$. We write $X$ in place of $X^\mathcal{B}$ and call $X$ in shorthand a meander.
Cyclic-time random walk (begun at $v \in V(G)$ and with parameter~$T$) is the vertex-valued process given by projecting $X_{(v,0)}:[0,\infty) \to V(G) \times [0,T)$ onto $V(G)$. We denote it by $Y_v$ and write $Y$ in place of $Y_\phi$.
Note that the random stirring model with parameter~$T$ is the law of the random map $V(G) \to V(G): v \to Y_v(T)$.
See Figure~\ref{cycleexample} for an illustration.
\begin{figure}
\centering\epsfig{file=cycleexample.eps, width=14cm}
\caption{For the graph shown on the left, cyclic-time random meander $X$ departing from $(\phi,0)$ is illustrated on the right. The right-hand sketch depicts a construction in $\mathbb{R}^3$ in which the poles associated to vertices are the vertical dashed lines and the bars in $\mathcal{B}$ are the horizontal black lines.
Assume that there are no bars in $\mathcal{B}$ supported on edges that connect the offspring $v$ and $w$ of $\phi$ to their own offspring. The trajectory of the meander from $(\phi,0)$ is divided into three intervals of duration $T$, at the end of which, the meander returns to $(\phi,0)$. These three sub-trajectories are indicated in black, red and green in the right-hand sketch. As the left-hand sketch shows, the cycle of $\phi$ in the associated permutation
thus has three elements.}\label{cycleexample}
\end{figure}
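To make the construction concrete, the following Python sketch (ours; the data layout and function names are assumptions, not the paper's) samples a bar collection and follows the meander from $(v,0)$ for one period of duration $T$, so that mapping each vertex $v$ to the endpoint of its walk realizes the random stirring permutation $v \to Y_v(T)$.

```python
import random

def sample_bars(edges, T, seed=0):
    """For each edge, a rate-one Poisson set of bar heights in [0, T)."""
    rng = random.Random(seed)
    bars = {}
    for e in edges:
        heights, t = [], rng.expovariate(1.0)
        while t < T:
            heights.append(t)
            t += rng.expovariate(1.0)
        bars[e] = heights
    return bars

def stir(v, T, adj, bars):
    """Run the cyclic-time meander from (v, 0) for duration T and
    return the vertex reached; this is sigma_T(v)."""
    cv, h, left = v, 0.0, T
    while True:
        # joints on the pole at cv, as (height, neighbour) pairs
        joints = sorted((hb, w) for w in adj[cv]
                        for hb in bars[frozenset((cv, w))])
        if not joints:
            return cv                       # no incident bars: stay put
        above = [j for j in joints if j[0] > h]
        if above:                           # next joint straight up
            nxt, dt = above[0], above[0][0] - h
        else:                               # wrap cyclically past height T
            nxt, dt = joints[0], (T - h) + joints[0][0]
        if dt >= left:
            return cv                       # time T elapses before the joint
        left -= dt
        h, cv = nxt[0], nxt[1]              # cross the bar to the other joint
```

On a finite tree, applying `stir` to every vertex yields a permutation, in accordance with the identification of the stirring model made above: all meanders begun at height zero share the same height at every time, so their time-$T$ positions occupy distinct vertices.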
Following Angel, we say that cyclic-time random walk $Y$ is transient
if there is positive probability that it never returns to the root, in the sense that
there exists $s_0 > 0$ such that $\phi \not\in Y(s_0,\infty)$.
Theorem~\ref{thmone} will follow directly from the next proposition.
\begin{proposition}\label{propone}
Let $G$ be any infinite rooted tree of uniformly bounded degree each of whose vertices has at least two offspring.
Then there exists $T_0 \in (0,\infty)$ such that
if $T \geq T_0$ then cyclic-time random walk $Y$
is transient.
\end{proposition}
Proposition~\ref{propone} and Theorem~\ref{thmone} are proved in Section \ref{secproofs}. Theorem~\ref{thmtwo} is obtained by reprising these arguments and developing quantitative counterparts to limiting assertions made along the way. Its proof is given in Appendix $A$.
\subsection{The sharp transition conjecture and trees of high degree}
For any given graph $G$ on which the random stirring model is well-defined,
let $\mathscr{T}^G$ denote the set of $T > 0$ such that the random stirring model on $G$ with parameter $T$ has infinite cycles almost surely.
Note that $T \not\in \mathscr{T}^G$ unless the bond percolation on $G$ given by the set of edges that support a bar in $\mathcal{B}$ has an infinite component. As noted in \cite{angel}, this implies that
$\big[ 0 , - \log ( 1 - p_c ) \big) \cap \mathscr{T}^G = \emptyset$,
where $p_c = p_c(G)$ denotes the critical value for bond percolation on $G$. Writing $\mathcal{T}_d$ for the rooted regular tree each of whose vertices has $d$ offspring, note that
$p_c(\mathcal{T}_d) = d^{-1}$, and thus that,
if $d \geq 8$, then
\begin{equation}\label{eqdegglb}
\big[ 0 , d^{-1} + \tfrac{1}{2}d^{-2} \big) \cap \mathscr{T}^{\mathcal{T}_d} = \emptyset \, .
\end{equation}
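To check the containment asserted in (\ref{eqdegglb}), note (our elaboration) that expanding the logarithm gives
\[
-\log \big( 1 - d^{-1} \big) \;=\; \sum_{k \geq 1} \frac{1}{k \, d^{k}}
\;=\; d^{-1} + \tfrac{1}{2}d^{-2} + \tfrac{1}{3}d^{-3} + \cdots
\;\geq\; d^{-1} + \tfrac{1}{2}d^{-2} \, ,
\]
since every term of the series is positive; thus $\big[ 0 , d^{-1} + \tfrac{1}{2}d^{-2} \big) \subseteq \big[ 0 , - \log ( 1 - p_c(\mathcal{T}_d) ) \big)$.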
Define the critical points $T_c^1(G) = \inf \mathscr{T}^G$ and $T_c^2(G) = \sup \, \big( [0,\infty) \setminus \mathscr{T}^G \big)$.
Note that
$T_c^1(G) \leq T_c^2(G)$ trivially. Conjecture $9$ of \cite{angel} claims that, for any graph $G$ for which $\mathscr{T}^G$ is non-empty, these two critical points are equal.
The present work and \cite{angel} go some way to verifying the conjecture for high-degree trees:
we will prove the next result in Appendix $B$.
\begin{theorem}\label{thmthree}
For any $\epsilon > 0$, there exists $d_0 \in \mathbb{N}$ such that if $d \geq d_0$
then $\big[ d^{-1} + (\tfrac{7}{6} + \epsilon) d^{-2}, \infty) \subseteq \mathscr{T}^{\mathcal{T}_{d}}$.
\end{theorem}
This deduction and (\ref{eqdegglb}) show that the discrepancy $T_c^2(G) - T_c^1(G)$ is $O(d^{-2})$ for high $d$.
In \cite{hammondtwo}, the discrepancy is shown to be zero for regular trees of high-degree, thus confirming Conjecture $9$ of \cite{angel} for such trees.
\vspace{2mm}
\noindent{\bf Acknowledgments.} I thank Christophe Garban and Daniel Ueltschi for useful discussions.
\section{Proofs}\label{secproofs}
We now begin to define and describe the elements needed to prove Proposition~\ref{propone}, after which, we will give the proof of this result; Theorem~\ref{thmone} will then be an immediate consequence.
\subsection{Preliminaries}
Here we record a simple observation regarding the future of the cyclic-time random meander given its past.
\begin{lemma}\label{lemubl}
Let $t > 0$.
Consider the law $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$. Let ${\rm Found}_t \subseteq E(G) \times [0,T)$
denote the set of bars in $\mathcal{B}$ that $X$ has crossed during $[0,t]$. Let the set of time-$t$ {\em untouched} bar locations ${\rm UnTouch}_t \subseteq E(G) \times [0,T)$
denote the set of bars $b \in E(G) \times [0,T)$ neither of whose joints belongs to $X_{[0,t]}$. Then the conditional distribution of $\mathcal{B}$ is given by ${\rm Found}_t \cup \mathcal{B}_{(t,\infty)}$, where $\mathcal{B}_{(t,\infty)}$ is a random bar collection with Poisson law of intensity~$1\!\!1_{{\rm UnTouch}_t}$ with respect to Lebesgue measure on $E(G) \times [0,T)$.
\end{lemma}
\noindent{\bf Proof.} That ${\rm Found}_t \subseteq \mathcal{B}$ is known given $X$ on $[0,t]$; similarly, if $X_{[0,t]}$ visits the joint of some bar in $\mathcal{B}$,
that bar belongs to ${\rm Found}_t$. The time-$0$ distribution of the remaining bars, those in ${\rm UnTouch}_t$, is undisturbed by the data~$X_{[0,t]}$. \qed
\subsection{Useful bars}\label{secusefulbars}
We outline the strategy for proving Proposition~\ref{propone}.
As time evolves from the outset, the process $X$ will, with positive probability, jump across several bars in $\mathcal{B}$, so that $Y$ may start to move away from $\phi$. For each $t > 0$,
we will identify a subset $\mathcal{U}_t$ of ${\rm Found}_t$ of ``useful'' bars that, roughly speaking, act as regeneration points for the trajectory $Y:[0,t] \to V(G)$. Among other properties, these bars have been crossed by $X$ only once before time $t$, so that, subsequent to crossing a useful bar, the walk $Y$
is a descendent of the child vertex of the edge on which the bar is supported.
If the walk is to return to $\phi$, it must later pass back along each edge that supports a useful bar; however, we will choose a definition of useful bar so that, in endeavouring to return, the walk necessarily runs a positive probability of jumping out into a previously unvisited sub-tree. If the walk arrives in such a sub-tree, we will argue that there is a significant chance that it moves forward from there for a short while, thereby generating further useful bars (the number of which grows linearly with the period $T$).
That is, to return to the root, the walk must ``undo'' each useful bar; but any attempt to do so will generate many more such bars with a uniformly positive probability. This means that the walk returns to the root only with small probability, if $T$ is chosen to be high.
We now identify the subset $\mathcal{U}_t \subseteq {\rm Found}_t$. Some more notation is needed.
\begin{definition}
For any subset $A \subseteq V(G)$, let $H_A \in [0,\infty]$, $H_A = \inf \big\{ t \geq 0: Y(t) \in A \big\}$,
denote the hitting time of $A$ by $Y$; the convention $\inf \emptyset = \infty$ is used.
\end{definition}
Let $t > 0$ and take $B \in {\rm Found}_t$, with $B = (e,s) \in E(G) \times [0,T)$.
Then we define $B$ to be an element of $\mathcal{U}_t$ if each of the following conditions is satisfied:
\begin{itemize}
\item $H_{\parent{e}} < H_{\child{e}} < t$;
\item $\big\{ s \in [0,t]: Y(s) = \parent{e} \big\} = [H_{\parent{e}},H_{\child{e}})$;
\item $H_{\child{e}} - H_{\parent{e}} \leq \kappa$; and
\item the set $\big\{ s \in [0,t): Y(s) = \child{e} \big\}$ takes the form of an interval whose right-hand endpoint is strictly less than $t$.
\end{itemize}
In other words, a bar $(e,s) \in \mathcal{B}$ crossed before time $t$ is useful (at time $t$) if, in its history strictly before time $t$, the walk $Y$ has made a jump and arrived at the edge $e$'s parent vertex $e^+$,
and has then, without intervening jumps and before a duration $\kappa$ has passed, jumped to the child vertex $e^-$,
before jumping again to one of the offspring of $e^-$, without then returning to $e^-$.
\subsection{The return to a useful bar}
Each vertex in $G$ having at least two offspring, we note that, for $(e,s) \in \mathcal{U}_t$, each of $e^+$ and $e^-$ has an offspring, $u^+$ and $u^-$, such that
$Y_{[0,t]} \cap \big\{ u^+, u^- \big\} = \emptyset$; indeed, there are $\degr{\parent{e}} - 2 \geq 1$ choices for $u^+$ and $\degr{\child{e}} - 2 \geq 1$ for $u^-$, because $Y$ until time $t$ has visited at most one offspring of $\parent{e}$ and at most one offspring of $\child{e}$. This fact explains how we will be able to treat the elements of $\mathcal{U}_t$ as obstacles for the return of $Y$ to $\phi$ after time $t$. We will argue that, conditionally on returning to $e^-$ after time $t$, there is positive probability that $Y$ arrives at either $u^+$ or $u^-$.
\begin{definition}
The time
$t \in [0,\infty)$
is called a frontier time of $X:[0,\infty) \to V(G) \times [0,T)$ if $Y(t) \not\in Y_{[0,t)}$.
\end{definition}
For $e \in E(G)$, we will employ the shorthand notation $e^c = V(G) \setminus \big\{ \parent{e},\child{e} \big\}$.
\begin{definition}
For $t \geq 0$ and $A \subseteq V(G)$, let $H_{t,A} \in [t,\infty]$ be given by $H_{t,A} = \inf \big\{ s \geq t: Y(s) \in A \big\}$. For $x \in V(G)$, we write $H_{t,x} = H_{t,\{ x\}}$.
\end{definition}
We now provide the precise statement regarding the departure of $Y$ from the edge $e$. This includes a more precise description of how we identify the vertices $u^+$ and $u^-$.
For this, we endow the tree $G$ with a lexicographical ordering on vertices.
\begin{lemma}\label{lempioneertofrontier}
Assume~(\ref{eqatleastdegg}).
Let $t > 0$. Consider $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$. Let $(e,s) \in E(G) \times [0,T)$ denote an element of $\mathcal{U}_t$ with $e^+ \not= \phi$
selected in a manner that is measurable with respect to $X_{[0,t]}$. Let $d_*$ denote the maximum of the degrees of $e^+$ and $e^-$.
Fix indices $\ell_1 \in \{ 1,\ldots, \degr{\parent{e}} - 2 \}$ and $\ell_2 \in \{ 1,\ldots, \degr{\child{e}} - 2 \}$.
Let $u^+$ denote the $\ell_1$-st offspring of $\parent{e}$ among those not belonging to $Y_{[0,t]}$, and let $u^-$
denote the $\ell_2$-nd offspring of $\child{e}$ among those not belonging to $Y_{[0,t]}$.
Further condition on $H_{t,\child{e}} < \infty$ and on the trajectory $X:[t,H_{t,\child{e}}] \to V(G) \times [0,T)$.
Let $\tau = H_{H_{t,\child{e}}, e^c}$; that is, $\tau$ is the first time, following the return of $Y$
to $\child{e}$ after time $t$, at which $Y$ leaves the vertex-set of the edge $e$. Then
the conditional probability that $Y_\tau \in \big\{ u^+,u^- \big\}$ is at least $d_*^{-1} \big( 1 - e^{-(d - 1)(T - \kappa)}\big)$.
\end{lemma}
\begin{figure}
\centering\epsfig{file=returntoedgee.eps, width=14cm}
\caption{The return of $X$ to the pole at the child vertex of an edge $e$ supporting a bar in $\mathcal{U}_t$ is depicted. In the left-hand figure, the locale of $G$ near $e$ is shown, including the beginnings of the descendent trees of $u^+$ and $u^-$; correspondingly, the dotted line segments at the base of the right-hand sketch indicate the graph structure. The red arrows indicate the trajectory of the meander $X$ until time~$t$. The blue arrows indicate the trajectory of $X$ just prior to its return to the pole of $\child{e}$ at time~$H_{t,\child{e}}$.}\label{figreturntoedgee}
\end{figure}
\noindent{\bf Proof of Lemma \ref{lempioneertofrontier}.}
See Figure \ref{figreturntoedgee}. Writing $\eta = H_{t,\child{e}}$, we will apply Lemma \ref{lemubl} to study the conditional distribution of $X(\eta + \cdot):[0,\infty) \to V(G) \times [0,T)$.
Note that $(\child{e},s)$ is the first joint of a bar in ${\rm Found}_\eta$ encountered on the pole at $\child{e}$ by unit-speed cyclic upward motion from $X(\eta)$. Indeed, $X$ has visited the pole at $\child{e}$ before time $\eta$ during a single interval of time, arriving there at the joint $(\child{e},s)$; now, returning to $\child{e}$ at time $\eta$,
this point is the first joint of an element of ${\rm Found}_\eta$ to be located on a journey upwards from~$X(\eta)$.
Note also that, given $X_{[0,\eta]}$, were $X$ after time $\eta$ to remain at $\child{e}$ until encountering the joint $(\child{e},s)$, it would jump to $(\parent{e},s)$ at the moment of this encounter.
Let the ``jump'' event $J$ occur if $X$ does indeed remain at the pole at $\child{e}$ until meeting $(\child{e},s)$; let $I \subseteq [0,T)$ denote the interval of heights
through which $X$ passes after time $\eta$ and before encountering $(\child{e},s)$ in the case that $J$ occurs.
Note that $X_{[0,\eta]} \cap \big( \{ u^- \} \times [0,T) \big) = \emptyset$, so that $\{ (e^-,u^-) \} \times I \subseteq {\rm UnTouch}_{\eta}$.
In the notation of Lemma~\ref{lemubl}, under $\mathbb{P}_T$ given $X_{[0,\eta]}$, $J^c$ occurs if and only if $\mathcal{B}_{(\eta,\infty)}$ contains a bar with a joint in $\{ e^- \} \times I$;
the fact that $\{ (e^-,u^-) \} \times I \subseteq {\rm UnTouch}_{\eta}$ thus ensures that, under $\mathbb{P}_T$ given $X_{[0,\eta]}$ and $J^c$, there is probability at least $1/\degr{\child{e}}$
that the element of $\mathcal{B}_{(\eta,\infty)}$ of lowest height among those having a joint in $\{ e^- \} \times I$ is supported on $(e^-,u^-)$.
In this way, we see from Lemma~\ref{lemubl} and the strong Markov property that
\begin{equation}\label{eqescone}
\mathbb{P}_T \Big( Y (\tau) = u^- \Big\vert X_{[0,\eta]}, J^c \Big) \geq 1/\degr{\child{e}} \, .
\end{equation}
On the other hand, conditionally on $X_{[0,\eta]}$ and on $J$, $X$ after time $\eta$ leaves the pole at $\child{e}$ by crossing the bar $(e,s)$ to arrive at $(\parent{e},s)$.
Let $\chi = H_{ \eta , V(G) \setminus \{ \child{e} \}}$
denote the moment of this arrival. Noting that $(e,s) \in \mathcal{U}_t$ and that $X_{[t,\chi)} \cap \big( \{ \parent{e} \} \times [0,T) \big) = \emptyset$, we see that
$\big( \{ \parent{e} \} \times [0,T) \big) \cap X_{[0,\chi)}$ consists of an interval of length at most $\kappa$ whose upper endpoint is $X(\chi)$, so that
\begin{equation}\label{eqvisitchi}
(\parent{e},r \, {\rm mod} \, T) \not\in X_{[0,\chi]}
\textrm{ for all $r \in ( \chi,\chi + T - \kappa )$} \, .
\end{equation}
Moreover, $\big( \{ u^+ \} \times [0,T) \big) \cap X_{[0,\chi]} = \emptyset$.
In light of these facts, we will see that Lemma \ref{lemubl} implies that
\begin{equation}\label{eqesctwo}
\mathbb{P}_T \Big( Y (\tau) = u^+ \Big\vert X_{[0,\eta]}, J \Big) \geq \degr{\parent{e}}^{-1} \Big( 1 - e^{-(d - 1)(T - \kappa)}\Big) \, .
\end{equation}
Indeed,
(\ref{eqvisitchi}) implies that unit-speed upward cyclic motion from $(e^+,s)$
will meet no joint of a bar in ${\rm Found}_\chi$ for a duration of at least $T - \kappa$.
Whenever $v$ is a neighbour of $e^+$ such that $v \not\in Y_{[0,\chi)}$, $\{ (e^+,v) \} \times [0,T) \subseteq {\rm UnTouch}_\chi$.
Note that any offspring $v$ of $e^+$ except for $e^-$ satisfies $v \not\in Y_{[0,\chi)}$. There are at least $d - 1$ such $v$. Applying Lemma~\ref{lemubl},
the conditional probability that there exists a bar in $\mathcal{B} \setminus {\rm Found}_\chi = \mathcal{B}_{(\chi,\infty)}$ with a joint on the pole at $Y(\chi)$ having a height lying in the modulo-$T$ reduction of $(\chi,\chi + T - \kappa)$ is at least
$1 - \exp\big\{ - (d - 1)(T - \kappa) \big\}$; note also that the conditional probability, given the presence of such a bar, that the first such bar encountered by upward cyclic motion from $X_\chi$ is supported on the edge $(e^+,u^+)$ is at least $1/\degr{\parent{e}}$. Hence, we obtain (\ref{eqesctwo}) by applying Lemma \ref{lemubl} (and the strong Markov property) at time $\chi$.
The lemma follows from (\ref{eqescone}) and (\ref{eqesctwo}). \hfill $\Box$
\begin{definition}
Let $t > 0$. Consider $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$ and let $(e,s) \in \mathcal{U}_t$.
Also given $H_{t,\child{e}} < \infty$, we say that $X$ makes a frontier departure from $e$ if,
after time $H_{t,\child{e}}$, at the moment of departure of $Y$ from $\{ e^+,e^- \}$, $Y$ arrives at
an offspring of either $\parent{e}$ or $\child{e}$ that it has never visited before.
\end{definition}
We now state a slight improvement of Lemma \ref{lempioneertofrontier}.
\begin{lemma}\label{lemnew}
Assume~(\ref{eqatleastdegg}).
Let $t > 0$. Consider $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$; choosing $(e,s) \in \mathcal{U}_t$ such that $e^+ \not= \phi$ measurably with respect to $X_{[0,t]}$, condition
further on $H_{t,\child{e}} < \infty$. Then
$X$ makes a frontier departure from $e$ with probability at least $\frac{d - 1}{d + 1} \big( 1 - e^{-(d - 1)(T - \kappa)}\big)$.
\end{lemma}
\noindent{\bf Proof.} Note that (\ref{eqescone}) is valid whenever
$u^-$ is a neighbour of $\child{e}$ such that $u^- \not\in Y_{[0,t]}$,
and that
(\ref{eqesctwo}) holds whenever
$u^+$ is a neighbour of $\parent{e}$ such that $u^+ \not\in Y_{[0,t]}$.
The edge $e$ belonging to $\mathcal{U}_t$, $Y_{[0,t]}$ contains only two neighbours of $e^+$, and likewise for $e^-$. We sum (\ref{eqesctwo}) (and (\ref{eqescone})) over choices of $u^+$ (or $u^-$) that are offspring of $\parent{e}$ (or $\child{e}$) not belonging to $Y_{[0,t]}$; formally, recalling the indices $\ell_1$ and $\ell_2$ from the statement of Lemma \ref{lempioneertofrontier}, we sum (\ref{eqescone}) over $\ell_2 \in \{1,\ldots,\degr{\child{e}}- 2 \}$
and (\ref{eqesctwo}) over $\ell_1 \in \{1,\ldots,\degr{\parent{e}}- 2 \}$ to obtain
$$
\mathbb{P}_T \Big( Y (\tau) \not\in Y_{[0,\tau)} \Big\vert X_{[0,\eta]}, J^c \Big) \geq \frac{\degr{\child{e}} - 2}{\degr{\child{e}}},
$$
and
$$
\mathbb{P}_T \Big( Y (\tau) \not\in Y_{[0,\tau)} \Big\vert X_{[0,\eta]}, J \Big) \geq \frac{\degr{\parent{e}} - 2}{\degr{\parent{e}}} \Big( 1 - e^{-(d - 1)(T - \kappa)} \Big).
$$
By $\min \big\{ \degr{\child{e}}, \degr{\parent{e}} \big\} \geq d + 1$, we obtain the statement of the lemma.
\begin{flushright}
\hfill $\Box$
\end{flushright}
\subsection{Departing after the return}
We now extend the notion of a useful bar, making it relative to a non-zero start time.
\begin{definition}
Let $s > 0$, and let $t > s$. Let ${\rm Found}_{s,t} \subseteq E(G) \times [0,T)$ denote the set of bars crossed by $X$ during $[s,t)$.
Let $(e,r) \in {\rm Found}_{s,t}$.
We declare that $(e,r) \in \mathcal{U}_{s,t}$ if each of the following conditions is satisfied:
\begin{itemize}
\item $\parent{e}$ is a strict descendent of~$Y(s)$;
\item $d \big( Y(s), Y(H_{s,\parent{e}}) \big) > d \big( Y(s), Y(r) \big)$ for all $r \in [s,H_{s,\parent{e}})$;
\item $\big\{ r \in [s,t]: Y(r) = \parent{e} \big\} = [H_{s,\parent{e}},H_{s,\child{e}})$;
\item $H_{s,\child{e}} - H_{s,\parent{e}} \leq \kappa$; and
\item the set $\big\{ r \in [s,t): Y(r) = \child{e} \big\}$ takes the form of an interval whose right-hand endpoint is strictly less than $t$.
\end{itemize}
\end{definition}
In fact, the set $\mathcal{U}_{0,t}$ may be smaller than $\mathcal{U}_t$, because we do not require the second of the above conditions in defining $\mathcal{U}_t$.
The stricter definition permits the union property recorded in the next lemma.
\begin{definition}
Let $t > 0$. A time $s \in [0,t]$ is called a $t$-regeneration time if $\big\{ r \in [0,t]: Y(r) = Y(s) \big\}$
is an interval. If this interval has right-hand endpoint strictly less than $t$, then the first jump made by $Y$ after time $s$ is in the direction away from the root.
\end{definition}
\begin{lemma}\label{lemdisjoint}
Let $t > s > 0$.
Let $X:[0,\infty) \to V(G) \times [0,T)$ be such that $s$ is a $t$-regeneration time. Then $\mathcal{U}_s$ and $\mathcal{U}_{s,t} $ are disjoint subsets of $\mathcal{U}_t$.
\end{lemma}
\noindent{\bf Proof.} Let $(e,r) \in \mathcal{U}_s$ (with $e \in E(G)$ and $r \in [0,T)$). Note that $Y_{[s,t]}$ lies in the descendent tree of $Y(s)$, a vertex which is itself a strict descendent of~$e^-$.
Hence, $\mathcal{U}_s \subseteq \mathcal{U}_t$.
The other inclusion is similarly established. No vertex associated to a joint of a bar in $\mathcal{U}_s$ is a descendent of $Y(s)$, while every vertex associated to a joint of a bar in $\mathcal{U}_{s,t}$ is a strict descendent of $Y(s)$. This ensures the disjointness of the two sets. \hfill $\Box$
\begin{lemma}\label{lemrapidadvanceone}
Assume~(\ref{eqatleastdegg}).
Given $\epsilon > 0$, there exists $T_0 > 0$ such that, for $T \geq T_0$,
the $\mathbb{P}_T$-probability that
\begin{equation}\label{gbound}
\vert \mathcal{U}_{0,T} \vert \geq
\bigg( \frac{d^2 ( d - 1 )}{(d + 1)^2} \Big( 1 - e^{- ( d + 1 ) \kappa} \Big)
- \epsilon \bigg) T
\end{equation}
is at least $1 - \epsilon$.
\end{lemma}
\noindent{\bf Proof.}
Let $\mathcal{T}_\degg$ denote the rooted regular tree each of whose vertices has $d$ offspring.
We first prove the lemma when $G = \mathcal{T}_\degg$.
Let $\beta > 1$.
Let $Z:\mathbb{N} \to \mathbb{N}$, $Z(0) = 0$, denote nearest-neighbour random walk with bias $\beta$ to the right and with reflection at zero.
This is the Markov chain with transition probabilities $p_{n,m} = \delta_{m,n+1} \frac{\beta}{\beta + 1} + \delta_{m,n-1} \frac{1}{\beta + 1}$ for $n \geq 1$ and $p_{0,m} = \delta_{m,1}$.
We call $n \in \mathbb{N}$ a renewal point for $Z$ (and write $n \in {\rm RG}(Z)$)
if $m \in \mathbb{N}$ and $Z(m) = Z(n)$ implies that $m =n$. We call $n$ a strong renewal point for $Z$ (and write $n \in {\rm SRG}(Z)$) if $\{ n,n+1\} \subseteq {\rm RG}(Z)$. Note that the conditional distribution given $Z:[0,n] \to \mathbb{N}$
and $n \in {\rm SRG}(Z)$ of $Z(n + \cdot) - Z(n)$ is given by $Z$ conditioned to make two rightward steps and then to remain at values of at least two. This conditional distribution being independent of $Z:[0,n] \to \mathbb{N}$ given
$n \in {\rm SRG}(Z)$, we see that the strong renewal points form a renewal sequence (in the sense that the differences of consecutive terms are independent and have a common law).
It is easy to confirm that, for each $n \in \mathbb{N}^+$, $\mathbb{P} (n \in {\rm SRG}(Z)) = \frac{\beta (\beta - 1)}{(\beta + 1)^2}$. Hence, the renewal theorem implies that
\begin{equation}\label{eqzstrbeta}
n^{-1} \Big\vert {\rm SRG}(Z) \cap \big\{ 1,\ldots, n \big\} \Big\vert \to \frac{\beta ( \beta - 1 )}{(\beta + 1)^2}, \qquad \textrm{almost surely.}
\end{equation}
Let $W:[0,\infty) \to V(G)$ denote continuous-time random walk on $G$ departing from $\phi$ (whose jumps are given by exponential rate-one clocks on the edges of $G$).
Write $M:[0,\infty) \to \mathbb{N}$, $M(s) = d \big( \phi , W(s) \big)$, where recall that $d(\cdot,\cdot)$
denotes graphical distance on $G$. Let $J:\mathbb{N} \to \mathbb{N}$, $J(0) = 0$, denote the jump chain of $M$, which records in discrete time the successive states visited by $M$. Let $D:\mathbb{N} \to (0,\infty)$ be defined so that
$D(0)$ is the time for $W$ to make its first transition, and $D(n)$ for $n \in \mathbb{N}^+$
is the length of time that $W$ spends at its new location after its $n$-th transition.
Taking $\beta = d$, note that $J:\mathbb{N} \to \mathbb{N}$ and $Z:\mathbb{N} \to \mathbb{N}$ are equal in law.
Note that $Y:[0,T) \to V(G)$ has the distribution of $W:[0,T) \to V(G)$.
Thus, we wish to argue that~(\ref{gbound}) holds for the process $W:[0,T) \to V(G)$.
The process~$W$ making transitions at rate at least $d + 1$ except when at the root, the law of large numbers implies that, for any $\epsilon > 0$, there exists an almost surely finite random variable $T_0$ such that
$W:[0,T) \to V(G)$ makes at least $T(d + 1)(1 -\epsilon)$ transitions if $T \geq T_0$. In particular, for $\epsilon > 0$, the first $\lceil T d (1 - \epsilon) \rceil$ transitions of $W$ are made during $[0,T)$ if $T \geq T_0$ (where the law of the almost surely finite $T_0$ may have changed).
Every strong renewal point $j$ for $J$ for which $0 < j \leq Td (1-\epsilon)$ corresponds to an element of $\mathcal{U}_{0,T}$, provided that
$D(j) \leq \kappa$. For any given $k \in \mathbb{N}^+$, conditionally on a choice of $J:\mathbb{N} \to \mathbb{N}$ such that $J(k) > 0$, and on the values $\big\{ D(i): i \not= k \big\}$, the conditional probability that $D(k) \leq \kappa$ is at least $1 - e^{- (d + 1) \kappa}$.
Recalling (\ref{eqzstrbeta}), we see that, for any $\epsilon > 0$, there exists a deterministic $T_0 > 0$ such that $T \geq T_0$ implies that
$$
\big\vert \mathcal{U}_{0,T} \big\vert \geq \bigg( \frac{d ( d - 1 )}{(d + 1)^2} - \epsilon \bigg) T d \big( 1 -\epsilon \big) \Big( 1 - e^{- (d +1) \kappa} \Big)
$$
with probability at least $1 - \epsilon$.
This yields the statement of the lemma in the case that $G = \mathcal{T}_\degg$.
The general case may be reduced to the special one. Assume now that $G$ satisfies~(\ref{eqatleastdegg}).
It is straightforward to construct a coupling $\mathcal{C}$
of the continuous-time random walks $W^{\mathcal{T}_\degg}$ and $W^G$ with the property that
at any moment of time at which $W^G$ makes a transition towards the root, $W^{\mathcal{T}_\degg}$ is either at the root or makes a transition towards the root, while at any moment of time at which $W^{\mathcal{T}_\degg}$ makes a transition away from the root, so does $W^G$. We omit the details of this standard construction. Under $\mathcal{C}$, let $T_1$ denote the supremum of times at which $W^{\mathcal{T}_\degg}$ is at the root, and let $m$ denote the maximal distance from the root attained by $W^G$ before $T_1$. Let $T_2 = \inf \big\{ t \geq T_1: d \big( \phi, W^G \big) = m \big\}$.
It is easily verified that,
if $T > T_2$ and $s \in (T_2,T)$ is a moment
at which a bar in $\mathcal{U}_{0,T}^{\mathcal{T}_\degg}$ is crossed by $W^{\mathcal{T}_\degg}$,
then $s$ is also a moment at which
a bar in $\mathcal{U}_{0,T}^G$ is crossed by $W^G$.
Hence, whenever $T > T_2$, $\big\vert \mathcal{U}_{0,T}^G \big\vert \geq \big\vert \mathcal{U}_{0,T}^{\mathcal{T}_\degg} \big\vert - \big\vert \mathcal{U}_{0,T_2}^{\mathcal{T}_\degg} \big\vert$ under $\mathcal{C}$. The random variable $T_2$ being finite $\mathcal{C}$-almost surely, we infer the statement of the lemma for the graph $G$ from this statement for $\mathcal{T}_\degg$. \hfill $\Box$
\begin{lemma}\label{lemrapidadvance}
Assume~(\ref{eqatleastdegg}).
Given $\epsilon > 0$, there exists $T_0 > 0$ such that the following holds.
Let $T \geq T_0$. Fix $t > 0$. Consider $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$
such that $t$ is a frontier time.
Then the conditional probability that
\begin{equation}\label{eqgoodbd}
\vert \mathcal{U}_{t,t+T} \vert \geq
\bigg( \frac{d^2 ( d - 1 )}{(d + 1)^2} \big( 1 - e^{- (d + 1 ) \kappa} \big)
- \epsilon \bigg) T
\end{equation}
and that $t$ is a $(t+T)$-regeneration time
is at least $1/4$.
\end{lemma}
\noindent{\bf Remark.}
Define
$$
c_\kappan =
\frac{d^2 ( d - 1 )}{2(d + 1)^2} \Big( 1 - e^{- (d + 1) \kappa} \Big).
$$
Let $X:[0,\infty) \to V(G) \times [0,T)$
be such that
$t \in [0,\infty)$
is a frontier time of~$X$.
We say that $X$ makes a rapid advance from $t$ if $Y(s)$ is a descendent of $Y(t)$ for all $s \in [t,t+T]$
and if
$\vert \mathcal{U}_{t,t+T} \vert \geq c_\kappan T$.
In these terms, Lemma~\ref{lemrapidadvance} implies that there exists $T_0$ such that, if $T \geq T_0$, then,
under $\mathbb{P}_T$ given $X:[0,t] \to V(G) \times [0,T)$ such that $t$ is a frontier time,
the conditional probability that $X$ makes a rapid advance from $t$ is at least $1/4$.
\noindent{\bf Proof of Lemma \ref{lemrapidadvance}.}
Let $D$ denote the descendent tree of $Y(t)$ (with root $Y(t)$).
In this proof, we use the term $G$-process to refer to the conditional distribution of $X(t + \cdot)$
given data $X:[0,t] \to V(G) \times [0,T)$ as specified in the statement of the lemma. By the $D$-process, we mean cyclic-time random meander in $D$ from $X(t)$. In this way, each process is defined on $[0,\infty)$.
Note that the $D$- and $G$-processes may be coupled
by using the same collection of elements in $\mathcal{B}$ supported on edges in $D$.
The two processes then coincide during $[0,T]$
provided that the $G$-process departs from the pole at $Y(t)$ by jumping to the pole indexed by an offspring of $Y(t)$, and does not return to the pole at $Y(t)$ during $[0,T]$.
The probability that this happens is at least
$\frac{d - 1}{d + 1}$ by Lemma \ref{lemubl} and a basic hitting estimate.
By Lemma \ref{lemrapidadvanceone}, the random variable $\vert \mathcal{U}_{t,t+T} \vert$ under the $D$-process satisfies (\ref{eqgoodbd}) with probability arbitrarily close to one (provided that $T$ is chosen to be high enough).
On the event that the two processes coincide during $[0,T]$, the bound applies to the $G$-process as well. The bound $d \geq 2$ gives the statement of the lemma (where in fact $1/4$ might be replaced by any value less than $1/3$). \hfill $\Box$
\begin{definition}\label{defgoodreturn}
Let $t > 0$ and let the bar $(e,r)$ denote any given element of~$\mathcal{U}_t$.
We say that $X$ makes a return to $e$ if $H_{t,\child{e}} < \infty$. If $X$ makes a return to $e$, we say that the return is good if
\begin{enumerate}
\item $X$ makes a frontier departure from $e$, and then
\item $X$ makes a rapid advance from the frontier time $H_{H_{t,\child{e}},e^c}$.
\end{enumerate}
\end{definition}
Define
$$
c_{T,\kappan} = \frac{d - 1}{4(d + 1)} \big( 1 - e^{-(d - 1)T - \kappa}\big) \, .
$$
\begin{lemma}\label{lemsum}
Assume~(\ref{eqatleastdegg}).
There exists $T_0 > 0$ such that the following holds. Let $T \geq T_0$.
For any $t > 0$ and any $(e,r) \in E(G) \times [0,T)$ with $e^+ \not= \phi$, under $\mathbb{P}_T$ given
$(e,r) \in \mathcal{U}_t$ and $H_{t,\child{e}} < \infty$,
the probability that the return of $X$ to $e$ is good is at least~$c_{T,\kappan}$.
\end{lemma}
\noindent{\bf Proof.}
This is implied by Lemma \ref{lemnew}
and the remark that follows Lemma~\ref{lemrapidadvance}, as well as by Lemma \ref{lemubl}. \hfill $\Box$
\subsection{Damage limitation after a bad return}
We need a lemma that controls the damage done by a return to a useful bar which turns out not to be good.
\begin{lemma}\label{lemretnotgood}
Let $t > 0$. Let $e$ denote the edge supporting the element of $\mathcal{U}_t$ that is crossed last by $X:[0,t] \to V(G) \times [0,T)$.
Let $p(e^+)$ denote the parent of $e^+$. Then, conditionally on $e^+ \not= \phi$ and $H_{t,p(e^+)} < \infty$, we have that,
almost surely,
$\mathcal{U}_t \setminus \mathcal{U}_{H_{t,p(e^+)}}$ contains at most two elements.
\end{lemma}
\noindent{\bf Proof.} We write $\overline{\mathcal{U}}_t$ for the set of edges that support a bar in $\mathcal{U}_t$.
Note that, for each $t > 0$, these two sets are in one-to-one correspondence, since no two elements in $\mathcal{U}_t$ are supported on the same edge.
Hence, it suffices to derive the statement of the lemma with $\mathcal{U}$ replaced by $\overline{\mathcal{U}}$.
Note that the elements of $\overline{\mathcal{U}}_t$, enumerated $\big( e_1,\ldots, e_k \big)$
in the order in which the constituent edges are crossed by $Y:[0,t] \to V(G)$,
have the property that, in the list $\big( e_1^-,\ldots, e_k^- \big)$,
each entry is a descendent of each of its predecessors. By definition, $e = e_k$.
For $1 \leq i \leq k-1$, note that $\inf \big\{ s \geq t: e_i \not\in \overline{\mathcal{U}}_s \big\} = H_{t,e_i^-}$. Note also that if $H_{t,e_i^-} < \infty$ then $e_i \in \overline{\mathcal{U}}_{H_{t,e_i^-}}$,
because the fourth requirement in the definition of $\{ \mathcal{U}_t:t\geq 0 \}$ in Subsection~\ref{secusefulbars} is chosen so that a given bar leaves this process not at, but only momentarily after, a return by $Y$ to the child vertex of the edge supporting this bar.
Thus, $\overline{\mathcal{U}}_t \setminus \overline{\mathcal{U}}_{H_{t,p(e^+)}}$ may contain no edges other than $e$ and $\big(p(e^+),e^+\big)$. \hfill $\qed$
\subsection{Establishing the main results}
\noindent{\bf Proof of Proposition~\ref{propone}.}
We fix $T_0 > 0$ high enough to satisfy the hypotheses of each of the preceding lemmas.
We now form the process $X:[0,\infty) \to V(G) \times [0,T)$ iteratively.
We will construct an increasing sequence $\big\{ \tau_i : i \in \mathbb{N}^+ \big\}$
of times at which the present number of useful bars is gauged.
At the first step, provided that the positive probability event that
$\vert \mathcal{U}_t \vert \geq 2$ for some $t > 0$ occurs, we set
$\tau_1 = \inf \big\{ t \geq 0 : \vert \mathcal{U}_t \vert \geq 2 \big\}$. (We wait for two elements to appear in $\mathcal{U}_t$ because if there is only one, it may be supported on an edge incident to $\phi$, and we have not set up the tools to handle this case.) Otherwise, we set $\tau_j = -\infty$ for each $j \in \mathbb{N}^+$, in a formal device indicating that our effort to determine that $X$ does not have a periodic trajectory has failed.
(In the case that $\tau_1 \not= - \infty$, note that $\vert \mathcal{U}_{\tau_1} \vert = 2$, because the convention that $X$ be right-continuous means that $X$ crosses a bar at time $\tau_1$, and is no longer at the child vertex of the second bar to become useful, permitting this bar to join $\mathcal{U}_t$ at time $t = \tau_1$.)
Let $k \in \mathbb{N}^+$. Suppose that $0 < \tau_k < \infty$. (As will be apparent, we set $\tau_{k+1} = \infty$ in the case that it becomes evident at the $k$-th stage of the construction that $X$ does not have a periodic trajectory. For definiteness, if $\tau_{k+1}$ is set equal to $\infty$ in the subsequent definition, then we automatically also set $\tau_l = \infty$ for all $l > k+1$.)
If $\vert \mathcal{U}_{\tau_{k}} \vert \leq 1$, set $\tau_l = - \infty$ for all $l \geq k+1$.
Otherwise, let $(e_k,t_k)$ denote the bar in $\mathcal{U}_{\tau_k}$ that is the last to be crossed by $X$
before time $\tau_k$.
Let $\chi_k = H_{\tau_k,e_k^-}$.
If $\chi_k = \infty$, then set $\tau_{k+1}=\infty$.
If $\chi_k < \infty$ and the return of $X$ to $e_k$ is good,
recalling that $e_k^c$ denotes $V(G) \setminus \{ e_k^+ , e_k^- \}$, we take $\tau_{k+1}= H_{\chi_k,e_k^c} + T$.
If $\chi_k < \infty$ and the return of $X$ to $e_k$ is not good, we take $\tau_{k+1} = H_{\chi_k,p(\parent{e_k})}$,
where recall that $p(\parent{e_k})$ denotes the parent of $\parent{e_k}$. Note that $H_{\chi_k,p(\parent{e_k})}$ may be infinite.
For $k \in \mathbb{N}^+$, set $u_k = \vert \mathcal{U}_{\tau_k} \vert$. As a convention, we take $u_k = 0$ if
$\tau_k = -\infty$ and $u_k = \infty$ if $\tau_k = \infty$.
For $t \in [0,\infty)$, let $\sigma_t$ denote the sigma-algebra generated by $\big\{ X_s: 0 \leq s \leq t \big\}$.
For $k \in \mathbb{N}^+$, write $\sigma'_k = \sigma_{\tau_k}$, where, in a standard definition,
$\sigma_{\tau_k} = \big\{ A \subseteq \Omega: A \cap \big\{ \tau_k \leq t \big\} \in \sigma_t \, \textrm{for each $t > 0$} \big\}$.
We now define three $\sigma'_k$-measurable random variables, $p_k$, $q_k$ and $r_k$.
To define each of them, consider $\mathbb{P}_T$ given $\big\{ X_t: 0 \leq t \leq \tau_k \big\}$.
Then $p_k$ is set equal to the conditional probability that $\chi_k < \infty$.
Let $q_k$ denote the conditional probability, given further that $\chi_k < \infty$, that the return of $X$ to $e_k$ is good. Let $r_k$ denote the conditional probability, given that $\chi_k < \infty$ and that the return of $X$ to $e_k$ is not good, that $H_{\chi_k,p(\parent{e_k})} < \infty$.
Note that $u_k$ is $\sigma'_k$-measurable. Note also that, by the definition of a good return, and Lemmas~\ref{lemdisjoint} and~\ref{lemretnotgood}, we have that, $\sigma'_k \big( \cdot \big\vert u_k > 1 \big)$-almost surely, the conditional distribution of $u_{k+1} - u_k$ given $\big\{ X_t:0\leq t \leq \tau_k \big\}$ stochastically dominates the
law $( 1 - p_k ) \, \delta_\infty + p_k q_k \, \delta_{c_\kappan T} + p_k (1 - q_k)(1-r_k) \, \delta_\infty + p_k (1 - q_k) r_k \, \delta_{-2}$. The latter distribution is parametrized by $(r_k,p_k,q_k)$, and stochastically dominates the one obtained by replacing the values of each of $p_k$ and $r_k$ by $1$.
Moreover, by Lemma~\ref{lemsum}, $q_k \geq c_{T,\kappan}$, $\sigma'_k \big( \cdot \big\vert u_k > 1 \big)$-almost surely.
To summarise these deductions, $\sigma'_k \big( \cdot \big\vert u_k > 1 \big)$-almost surely, the conditional distribution of $u_{k+1} - u_k$ given $\big\{ X_t:0\leq t \leq \tau_k \big\}$ stochastically dominates the law
$c_{T,\kappan} \, \delta_{c_\kappan T} + \big( 1 - c_{T,\kappan} \big) \, \delta_{-2}$.
The data $\big\{ u_1,\ldots,u_k \big\}$ being $\sigma_k'$-measurable, we infer that, given such data for which $u_k > 1$, the conditional distribution of $u_{k+1} - u_k$
also stochastically dominates the law
$c_{T,\kappan} \, \delta_{c_\kappan T} + \big( 1 - c_{T,\kappan} \big) \, \delta_{-2}$.
Let $Q:\mathbb{N}^+ \to \mathbb{R}$ denote the random walk on $\mathbb{R}$ whose increments are independent and have the law
$c_{T,\kappan} \, \delta_{c_\kappan T} + \big( 1 - c_{T,\kappan} \big) \, \delta_{-2}$, with initial condition $Q(1) = 2$.
Let $\rho \in \mathbb{N}$ denote the first time at which $Q$ is at most one, and define $Q_*:\mathbb{N}^+ \to \mathbb{R}$ by
\begin{equation*}
Q_*(i) = \begin{cases} Q(i) & \text{if } i \leq \rho \, , \\
0 & \text{if } i > \rho \, ,
\end{cases}
\end{equation*}
for each $i \in \mathbb{N}^+$.
We find that, conditionally on $\tau_1 \not= -\infty$, $\big\{ u_i: i \in \mathbb{N}^+ \big\}$
stochastically dominates $\big\{ Q_*(j): j \in \mathbb{N}^+ \big\}$.
Hence, we find that the probability that $u_i \to \infty$ as $i \to \infty$
is at least the probability that $\big\{ Q(i): i \in \mathbb{N} \big\}$ is a sequence of terms all of which exceed one and which tends to infinity.
By the law of large numbers,
this occurrence has positive probability provided that
\begin{equation}\label{eqcdtcb}
c_{T,\kappan} c_\kappan T - 2 \big( 1 - c_{T,\kappan} \big) > 0 \, .
\end{equation}
Note that the left-hand side is non-decreasing and tends to infinity in the limit of high $T$; as such, we may adjust the value of $T_0 > 0$, if necessary, so that, for $T > T_0$, the condition (\ref{eqcdtcb}) is satisfied.
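As an illustration (not part of the argument), the left-hand side of (\ref{eqcdtcb}) may be evaluated directly from the displayed formulas for $c_\kappan$ and $c_{T,\kappan}$; the Python sketch below, with the arbitrary choices $d = 2$ and $\kappa = 1$, exhibits the mean increment of $Q$ turning from negative to positive as $T$ grows.

```python
import math

def c_kappa(d, kappa):
    """The constant c_kappa displayed in the remark on rapid advances."""
    return d ** 2 * (d - 1) / (2 * (d + 1) ** 2) * (1 - math.exp(-(d + 1) * kappa))

def c_T_kappa(d, kappa, T):
    """The constant c_{T,kappa} displayed before the summation lemma."""
    return (d - 1) / (4 * (d + 1)) * (1 - math.exp(-(d - 1) * T - kappa))

def mean_increment(d, kappa, T):
    """Left-hand side of (eqcdtcb): the mean increment of the walk Q."""
    c = c_T_kappa(d, kappa, T)
    return c * c_kappa(d, kappa) * T - 2 * (1 - c)

for T in (10, 100, 1000):
    print(T, mean_increment(2, 1, T))   # increases with T; positive for large T
```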
Clearly, if $X:[0,\infty) \to V(G) \times [0,T)$ has a periodic orbit, then $\tau_k$ eventually assumes the value $-\infty$.
Hence, provided that $T> T_0$, with positive probability, $X$ does not have a periodic orbit, so that $\phi \not\in Y_{[t,\infty)}$ for sufficiently high $t$. This completes the proof of Proposition~\ref{propone}. \hfill $\Box$
\vspace{1mm}
\noindent{\bf Proof of Theorem~\ref{thmone}.}
By Proposition~\ref{propone}, there is positive probability that $X = X_{(\phi,0)}$ has an aperiodic orbit, in which case, members of the semi-infinite sequence $\big\{ Y(kT): k \in \mathbb{N} \big\}$
form the consecutive elements of part of some infinite cycle in the associated random stirring model with parameter~$T$.
Should $X$ have a periodic orbit, we may search in successive generations away from the root for an edge $e$ such that no bar in $\mathcal{B}$ is supported on $e$. The conditional distribution of $X_{(e^-,0)}$ is then given by cyclic-time random meander in the descendent tree of $e^-$. Proposition~\ref{propone} being applicable to this tree, there is a further uniformly positive probability that $X_{(e^-,0)}$ has an aperiodic orbit. This procedure may continue until a meander with such an orbit is located. \qed
\section{\label{SEC001}Introduction}
Hadrons are classified by QCD quantum numbers, in particular isospin $I$, angular momentum $J$ and parity $P$. Studying a hadron by means of lattice QCD typically requires a trial state $\mathcal{O} | \Omega \rangle$, where $| \Omega \rangle$ is the vacuum and $\mathcal{O}$ a suitable hadron creation operator such that $\mathcal{O} | \Omega \rangle$ has the required quantum numbers $I(J^P)$.
When using the Wilson twisted mass lattice discretization for the quark fields, parity and isospin/flavor symmetries are broken at finite lattice spacing. Consequently, isospin $I$ and parity $P$ are only approximate quantum numbers (which, of course, become exact in the continuum limit). This might cause practical problems. For example, in general it is not possible to construct trial states free of mixing between states of different parity, or between $I_z = 0$ states with $I = 0$ and $I = 1$. To study the corresponding hadrons in a rigorous way, e.g.\ to determine their masses, one has to compute large correlation matrices containing states from different parity and isospin/flavor sectors and extract all hadron masses of interest in a single analysis. Cf.\ e.g.\ \cite{Baron:2010th} for a detailed theoretical discussion and \cite{Jansen:2008si,Blossier:2009vy,Michael:2010aa,Wagner:2010ad,Wagner:2011fs,Alexandrou:2012rm,Kalinowski:2012re,Kalinowski:2013wsa} for various recent examples.
Here we explore the possibility to combine Wilson twisted mass sea quarks with either (untwisted) Wilson + clover valence quarks or Wilson twisted mass + clover valence quarks. Since the clover term can be used to cancel part of the lattice discretization errors, the above mentioned symmetry breaking and mixing problems are expected to be reduced, when using such mixed action setups. In particular for spectroscopy these setups might be advantageous.
\section{Lattice setup}
\subsection{\label{SECsea} Sea quarks and gauge link configurations}
This work is based on gauge link configurations generated by the ETM Collaboration with the Iwasaki gauge action~\cite{Iwasaki:1985we} and $N_f = 2+1+1$ flavors of twisted mass quarks. The light degenerate $(u,d)$ quark doublet is described by the standard Wilson twisted mass action \cite{Frezzotti:2000nk},
\begin{eqnarray}
\label{EQN001} S_{\scriptsize \textrm{light}}[\chi^{(l)},\bar{\chi}^{(l)},U] \ \ = \ \ a^4 \sum_x \bar{\chi}^{(l)}(x) \Big(D_W(m_0) + i \mu \gamma_5 \tau_3\Big) \chi^{(l)}(x) ,
\end{eqnarray}
while for the heavy $(c,s)$ sea quark doublet the twisted mass formulation for non-degenerate quarks of \cite{Frezzotti:2003xj} has been used,
\begin{eqnarray}
\label{EQN002} S_{\scriptsize \textrm{heavy}}[\chi^{(h)},\bar{\chi}^{(h)},U] \ \ = \ \ a^4 \sum_x \bar{\chi}^{(h)}(x) \Big(D_W(m_0) + i \mu_\sigma \gamma_5 \tau_1 + \tau_3 \mu_\delta\Big) \chi^{(h)}(x) .
\end{eqnarray}
In both cases $D_W$ denotes the standard Wilson Dirac operator and $m_0$ the untwisted quark mass, while $\chi^{(l)} = (\chi^{(u)},\chi^{(d)})$ and $\chi^{(h)} = (\chi^{(c)},\chi^{(s)})$ are the quark fields in the so-called twisted basis. When tuning the theory to maximal twist, automatic $\mathcal{O}(a)$ improvement for physical quantities applies \cite{Frezzotti:2003xj,Frezzotti:2003ni}. This tuning has been done by adjusting $m_0$ such that the PCAC quark mass in the light quark sector vanishes.
All computations presented in the following have been performed on 100 gauge link configurations generated with $\beta = 1.9$, $(L/a)^3 \times T/a = 32^3 \times 64$, $\kappa = (2 a m_0 + 8)^{-1} = 0.16327$, $a \mu = 0.004$, $a \mu_\sigma = 0.15$ and $a \mu_\delta = 0.19$. This corresponds to a lattice spacing $a \approx 0.086 \, \textrm{fm}$ and a pion mass $m_\pi \approx 320 \, \textrm{MeV}$. More details regarding this ensemble can be found in \cite{Baron:2010bv}.
\subsection{Valence quarks}
\subsubsection{\label{SEC003}Wilson twisted mass valence quarks}
To avoid $s$ and $c$ quark mixing \cite{Baron:2010th}, one typically uses a twisted mass discretization for valence $s$ and $c$ quarks, which is different from the sea $s$ and $c$ quarks (\ref{EQN002}). It is given by (\ref{EQN001}) with $\chi^{(l)} \rightarrow \chi^{(s)} = (\chi^{(s^+)} , \chi^{(s^-)})$ and $\mu \rightarrow \mu_s$ (or $\chi^{(l)} \rightarrow \chi^{(c)} = (\chi^{(c^+)} , \chi^{(c^-)})$ and $\mu \rightarrow \mu_c$). Note that there are two possibilities to realize e.g.\ a valence $c$ quark, $\chi^{(c^+)}$ and $\chi^{(c^-)}$, which differ in the sign of the twisted mass term, $\pm i \mu_c \gamma_5$.
The bare charm quark mass $a \mu_c = 0.27678$ has been chosen such that the $D$ meson mass computed within this mixed action setup with flavor structure $\bar{c}^+ d$ agrees with the $D$ meson mass computed in the unitary setup, i.e.\ using (\ref{EQN002}) also for valence $c$ quarks.
\subsubsection{\label{SEC002}Wilson twisted mass + clover valence quarks}
As motivated in section~\ref{SEC001} we consider the clover term in the valence quark action with the intention to reduce lattice discretization errors related to parity and isospin/flavor breaking.
In the Wilson twisted mass case we add the clover term
\begin{eqnarray}
S_{\scriptsize \textrm{clover}}[\chi^{(l)},\bar{\chi}^{(l)},U] \ \ = \ \ c_\mathrm{sw} a^5 \sum_x \sum_{\mu < \nu} \bar{\chi}^{(l)}(x) \frac{1}{2} \sigma_{\mu \nu} F_{\mu\nu}(x) \chi^{(l)}(x)
\end{eqnarray}
to the quark action (\ref{EQN001}), where $\sigma_{\mu \nu} = i [\gamma_\mu , \gamma_\nu] / 2$ and $F_{\mu \nu}(x) = i (Q_{\mu \nu}(x) - Q_{\nu \mu}(x)) / 8 a^2$ is the discretized field strength tensor with $Q_{\mu \nu}$ denoting the sum over plaquettes in the $\mu$-$\nu$-plane attached to $x$ (for details cf.\ e.g.\ \cite{Gattringer:2010zz} and references therein). The coefficient $c_\mathrm{sw} = 1.62051$ has been chosen according to a perturbative expansion \cite{Aoki:1998ph}.
Wilson twisted mass quarks with and without clover term require a separate tuning to maximal twist. Again we adjust $\kappa = (2 a m_0 + 8)^{-1}$ such that the PCAC quark mass
\begin{eqnarray}
a m_\mathrm{PCAC} \ \ = \ \ \frac{\langle \partial_0 A_0^b(t/a) P^b(0) \rangle}{2 \langle P^b(t/a) P^b(0)\rangle} \quad , \quad b=1,2
\end{eqnarray}
($A^b_\mu(x) = \frac{1}{2} \bar{\chi}^{(l)}(x) \gamma_\mu \gamma_5 \tau^b \chi^{(l)}(x)$, $P^b(x) = \frac{1}{2} \bar{\chi}^{(l)}(x) \gamma_5 \tau^b \chi^{(l)}(x)$) vanishes, resulting in $\kappa = 0.13883$ (cf.\ Figure~\ref{FIG002}). Note that Wilson twisted mass quarks at maximal twist are already automatically $\mathcal{O}(a)$ improved. The intention of adding the clover term is, therefore, to cancel part of the remaining $\mathcal{O}(a^2)$ contributions \cite{Becirevic:2006ii,Bartek2013}.
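In practice the ratio defining $a m_\mathrm{PCAC}$ is evaluated with a discrete time derivative, e.g.\ the symmetric one, and fitted in the plateau region. A minimal sketch of this step (with synthetic single-state correlator data of arbitrary normalization in place of measured ones, and hypothetical array names):

```python
import math

def pcac_mass(a0p, pp, t):
    """am_PCAC(t) = <d_0 A_0(t) P(0)> / (2 <P(t) P(0)>), with a
    symmetric lattice derivative for d_0 (a common choice)."""
    d_a0p = (a0p[t + 1] - a0p[t - 1]) / 2.0
    return d_a0p / (2.0 * pp[t])

# synthetic correlators dominated by a single state with am = 0.3;
# the overall factors (and hence the sign) are arbitrary here
am, T = 0.3, 32
pp  = [math.exp(-am * t) for t in range(T)]
a0p = [0.05 * math.exp(-am * t) for t in range(T)]

plateau = [pcac_mass(a0p, pp, t) for t in range(4, 12)]
# for this exact single-state data the plateau is flat in t
```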
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7.0cm]{mpcac1.eps}
\includegraphics[width=7.0cm]{mpcac2.eps}
\caption{\label{FIG002}(left) $a m_\mathrm{PCAC}$ as a function of the temporal separation $t/a$; (right) $a m_\mathrm{PCAC}$ as a function of $1 / 2 \kappa$ (statistical errors are smaller than the symbols).}
\end{center}
\end{figure}
The bare light and charm quark masses $a \mu_l = 0.0036847$ and $a \mu_c = 0.291968$ have been tuned such that the pion mass and the $D$ meson mass are approximately the same as with the valence quark action from section~\ref{SEC003} (Wilson twisted mass valence quarks).
\subsubsection{\label{SEC004}Clover improved Wilson valence quarks}
We proceed as in section~\ref{SEC002}, this time choosing $\mu = 0$ and using quark fields in the physical basis, i.e.\ $\chi^{(l)} \rightarrow \psi^{(l)}$.
The light and charm hopping parameters $\kappa_l = 0.13832$ and $\kappa_c = 0.12286$ have been tuned such that the pion mass and the $D$ meson mass are approximately the same as with the valence quark action from section~\ref{SEC003} (Wilson twisted mass valence quarks).
\section{Numerical results}
\subsection{Computation of $D$ and the $D_0^\ast$ meson masses}
We determine the $D$ and the $D_0^\ast$ meson masses by studying the asymptotic exponential behavior of correlation functions $C_{j k}(t) = \langle (\mathcal{O}_j(t))^\dagger \mathcal{O}_k(0) \rangle$. Suitable creation operators are denoted by $\mathcal{O}_j \in \{ \bar{\chi}^{(c^+)} \gamma_5 \chi^{(d)} \ , \ \bar{\chi}^{(c^+)} \chi^{(d)} \}$ for Wilson twisted mass (+ clover) valence quarks and $\mathcal{O}_j \in \{ \bar{\psi}^{(c)} \gamma_5 \psi^{(d)} \ , \ \bar{\psi}^{(c)} \psi^{(d)} \}$ for Wilson valence quarks. These operators generate the $D$ and the $D_0^\ast$ quantum numbers $J^P = 0^-$ and $J^P = 0^+$, when applied to the vacuum. The correlation functions are computed using the one-end trick (cf.\ e.g.\ \cite{Boucaud:2008xu}) with a single set of four spin-diluted stochastic timeslice sources per gauge link configuration.
When using clover improved Wilson valence quarks, one can show analytically that the off-diagonal correlation matrix elements vanish, i.e.\ $C_{j k} = 0$ for $j \neq k$. For more complicated problems and larger correlation matrices typically half of the correlation matrix elements, which are non-zero when using Wilson twisted mass valence quarks, vanish. This might be a considerable advantage in cases, where the computation of correlation matrices requires sizable HPC resources.
When using Wilson twisted mass valence quarks (with or without clover term) the full $2 \times 2$ correlation matrix has to be computed and both the $D$ meson and the $D_0^\ast$ meson mass have to be determined by a single analysis, e.g.\ by solving a generalized eigenvalue problem,
\begin{eqnarray}
\label{EQN003} C_{j k}(t) v_k^{(n)}(t,t_0) \ \ = \ \ C_{j k}(t_0) v_k^{(n)}(t,t_0) \lambda^{(n)}(t,t_0) \quad , \quad m^{(n)}_{\scriptsize \textrm{eff}}(t,t_0) \ \ = \ \ \ln\bigg(\frac{\lambda^{(n)}(t,t_0)}{\lambda^{(n)}(t+a,t_0)}\bigg)
\end{eqnarray}
(cf.\ e.g.\ \cite{Blossier:2009kd}). A constant fit to the effective masses $m^{(n)}_{\scriptsize \textrm{eff}}(t,t_0=a)$ in the plateau-like region at large $t$ yields the masses of the $D$ and the $D_0^\ast$ meson.
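A minimal numerical sketch of this analysis step might look as follows; the $2 \times 2$ correlation matrix here is synthetic (two states with masses $0.8$ and $1.2$ in lattice units and an arbitrary overlap matrix), standing in for the measured $C_{jk}(t)$.

```python
import numpy as np

def gevp_effective_masses(C, t0=1):
    """Solve C(t) v = lambda(t,t0) C(t0) v for each t and return the
    effective masses m_eff^(n)(t,t0) = log(lambda^(n)(t)/lambda^(n)(t+1))."""
    lams = []
    for t in range(C.shape[0]):
        ev = np.linalg.eigvals(np.linalg.solve(C[t0], C[t]))
        lams.append(np.sort(ev.real)[::-1])   # largest first: ground state
    lams = np.array(lams)
    return np.log(lams[:-1] / lams[1:])

# synthetic two-state data: C_jk(t) = sum_n Z_jn Z_kn exp(-m_n t)
Z = np.array([[1.0, 0.3], [0.2, 1.0]])    # arbitrary overlaps
m = np.array([0.8, 1.2])                  # stand-ins for D and D_0*
t = np.arange(10)
C = np.einsum('jn,kn,tn->tjk', Z, Z, np.exp(-np.outer(t, m)))

meff = gevp_effective_masses(C, t0=1)
# the columns of meff plateau at 0.8 and 1.2 for t > t0
```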
Note that the determination of the meson masses is simpler with clover improved Wilson quarks: two effective masses can be determined independently from the two diagonal elements of $C_{j k}$, i.e.\ solving a generalized eigenvalue problem is not necessary.
In Figure~\ref{FIG003} we compare effective mass plots for the $D$ meson (green curves) and the $D_0^\ast$ meson (blue curves) obtained with the three valence quark actions discussed in sections \ref{SEC003} to \ref{SEC004}. While Wilson twisted mass valence quarks with and without clover term yield plateaus of similar quality, the corresponding clover improved Wilson plateaus are of somewhat lower quality. Whether this is the case also for other observables (e.g.\ mesons of different flavor structure) will be part of future investigations.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7.0cm]{proctm_Dmass.eps}
\includegraphics[width=7.0cm]{proctmclover_Dmass.eps} \\
\includegraphics[width=7.0cm]{procwilson_Dmass.eps}
\caption{\label{FIG003}effective masses of $D$ and $D_0^\ast$ obtained with different valence quark actions.}
\end{center}
\end{figure}
A certain indication, whether adding the clover term to the twisted mass action indeed reduces the mixing between $P = -$ and $P = +$ states, is provided by the squared absolute value of the eigenvector components $| v_j^{(n)} |^2$ obtained, when solving the generalized eigenvalue problem (\ref{EQN003}). These eigenvector components are plotted in Figure~\ref{FIG004} as functions of the temporal separation $t/a$. For the $D$ meson we observe that mixing is significantly reduced from $\approx 10\%$ to $\raisebox{-0.5ex}{$\,\stackrel{<}{\scriptstyle\sim}\,$} 5\%$ (left column), while for the $D_0^\ast$ meson there is no qualitative change (right column). We plan to extend this analysis to other hadrons in the near future.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7.0cm]{proctm_Dev1.eps}
\includegraphics[width=7.0cm]{proctm_Dev2.eps} \\
\includegraphics[width=7.0cm]{proctmclover_Dev1.eps}
\includegraphics[width=7.0cm]{proctmclover_Dev2.eps}
\caption{\label{FIG004}squared absolute eigenvector components for the $D$ meson (left column) and its parity partner, the $D_0^\ast$ meson (right column), for standard Wilson twisted mass valence quarks (upper line) and for clover improved Wilson twisted mass valence quarks (lower line).}
\end{center}
\end{figure}
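The eigenvector components in Figure~\ref{FIG004} come from a generalized eigenvalue problem of the schematic form $C(t)\,v^{(n)} = \lambda^{(n)}(t,t_0)\,C(t_0)\,v^{(n)}$ for the correlation matrix. A hedged sketch with synthetic $2\times 2$ matrices (the actual Eq.~(\ref{EQN003}) and its normalization conventions may differ):

```python
import numpy as np
from scipy.linalg import eigh

def gevp(Ct, Ct0):
    """Solve C(t) v = lam C(t0) v; return eigenvalues (descending)
    and normalized squared eigenvector components |v_j^(n)|^2."""
    lam, v = eigh(Ct, Ct0)              # generalized symmetric eigenproblem
    order = np.argsort(lam)[::-1]
    lam, v = lam[order], v[:, order]
    comp2 = np.abs(v) ** 2
    return lam, comp2 / comp2.sum(axis=0)

# synthetic correlator matrices at t0 and t (illustrative numbers only)
Ct0 = np.array([[1.00, 0.10], [0.10, 1.00]])
Ct  = np.array([[0.50, 0.02], [0.02, 0.10]])
lam, comp2 = gevp(Ct, Ct0)
```

Each column of `comp2` then plays the role of the $| v_j^{(n)} |^2$ plotted above.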
\subsection{Pion mass splitting}
Due to isospin breaking in twisted mass lattice QCD, the charged pion $\pi^\pm$ and the neutral pion $\pi^0$ have different masses. The mass splitting $\Delta (m_\pi)^2 = |m_{\pi^\pm}^2 - m_{\pi^{0,\textrm{\scriptsize con}}}^2|$ (``con'' denotes the neglect of disconnected diagrams, which vanish in the continuum limit) is an $\mathcal{O}(a^2)$ lattice discretization artifact. Hence, $\Delta (m_\pi)^2$ is another indicator of whether adding the clover term indeed reduces isospin breaking.
For Wilson twisted mass valence quarks with and without clover term we find
\begin{eqnarray}
a^2 \Delta (m_\pi^{\scriptsize \textrm{tm}})^2 \ \ = \ \ 0.035(4) \quad , \quad a^2 \Delta (m_\pi^{\scriptsize \textrm{tm+clover}})^2 = 0.032(2) ,
\end{eqnarray}
i.e.\ within statistical errors the splitting is not reduced. This is in contrast to a similar quenched investigation \cite{Becirevic:2006ii}, where a reduction of the pion mass splitting by more than a factor $2$ was observed.
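As a quick consistency check of this statement: the two splittings differ by $0.003$, while the combined uncertainty is $\sqrt{0.004^2+0.002^2}\approx 0.0045$, so they are indeed compatible within one standard deviation. A minimal sketch of such a comparison:

```python
import math

def compatible(x1, e1, x2, e2, nsigma=1.0):
    """True if two measurements agree within nsigma combined (quadrature) errors."""
    return abs(x1 - x2) <= nsigma * math.hypot(e1, e2)

# a^2 Delta(m_pi)^2 with and without clover term (values quoted in the text)
tm, tm_err         = 0.035, 0.004
clover, clover_err = 0.032, 0.002
agree = compatible(tm, tm_err, clover, clover_err)   # True
```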
\section{Summary and outlook}
We presented first results of a comparison of three different mixed action setups: Wilson twisted mass sea quarks with either (1) Wilson twisted mass, (2) Wilson twisted mass + clover and (3) Wilson + clover valence quarks. The goal is to reduce twisted mass parity and isospin symmetry breaking. This might be helpful for ongoing hadron spectroscopy projects, in particular \cite{Alexandrou:2012rm,Kalinowski:2012re,Kalinowski:2013wsa}.
Clover improved Wilson valence quarks have the advantage that trial states from different parity or isospin/flavor sectors are orthogonal. Therefore, only half as many correlation functions need to be computed compared to using twisted mass valence quarks. A disadvantage seems to be the stronger statistical fluctuations in effective masses (here observed for the $D$ and the $D_0^\ast$ meson).
For the case of Wilson twisted mass valence quarks it is not yet clear whether adding the clover term as discussed in section~\ref{SEC002} significantly reduces twisted mass symmetry breaking. While there is less mixing for the $D$ meson, other observables related to twisted mass symmetry breaking, in particular the pion mass splitting, essentially do not change.
To decide whether one of the clover improved mixed action setups is advantageous, further investigations and more numerical results are necessary. In particular, we plan to study different lattice spacings and a larger set of observables.
\begin{acknowledgments}
We thank our colleagues from ETMC, in particular Constantia Alexandrou, Gregorio Herdoiza, Martin Kalinowski, Andrea Shindler and Carsten Urbach for discussions. M.W.\ acknowledges support by the Emmy Noether Programme of the DFG (German Research Foundation), grant WA 3000/1-1. This work was supported in part by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse.
\end{acknowledgments}
\section{Introduction}
\para
A Weyl semimetal is an interesting and important gapless state of matter in which the low-energy excitations are described by the Weyl equation. The system exhibits many exotic transport properties due to the chiral anomaly, which has attracted considerable theoretical and experimental interest \cite{vishwanath,burkov0,Landsteiner:2016led}. As a topological quantum matter, the description of the Weyl semimetal goes beyond the Landau-Ginzburg paradigm of symmetry breaking. As in graphene \cite{jan}, the effective fine structure constant is very large due to the smallness of the Fermi velocity compared to the speed of light. This means that the Weyl semimetal can exist in a strongly interacting regime with no quasiparticles, where the perturbative quantum field theory and topological band theory descriptions break down \cite{Gonzalez:2015tsa}. Therefore, it is an important and challenging question to find a proper theoretical description of the strongly coupled Weyl semimetal.
\para
Holographic duality (or the AdS/CFT correspondence) relates a $d$-dimensional strongly coupled field theory to a $(d+1)$-dimensional weakly coupled classical gravitational theory, and is a powerful tool for tackling problems arising in field theory. This method has been applied to various problems in condensed matter physics and has yielded invaluable insights \cite{Zaanen:2015oix,book0,review}. Recently, a holographic model of the strongly coupled Weyl semimetal was constructed in Refs. \cite{Landsteiner:2015pdh,Landsteiner:2015lsa}, where the Weyl semimetal phase is characterized by a nonzero anomalous Hall conductivity. The system undergoes a topological quantum phase transition from the Weyl semimetal phase to a topologically trivial phase with vanishing anomalous Hall conductivity. Since then, many aspects of the holographic Weyl semimetal have been studied, including odd viscosity \cite{Landsteiner:2016stv}, surface states \cite{Ammon:2016mwa}, optical conductivity \cite{Grignani:2016wyz}, axial Hall conductivity \cite{Copetti:2016ewq}, topological invariants \cite{Liu:2018djq}, and nodal line semimetals \cite{Liu:2018bye,Liu:2020ymx}. Other studies can be found in Refs. \cite{Gursoy:2012ie,Hashimoto:2016ize,Ammon:2018wzb,Baggioli:2018afg,Liu:2018spp,Ji:2019pxx,Song:2019asj,Tanaka:2020yax,Juricic:2020sgg,Baggioli:2020cld,Fadafan:2020fod}; see \cite{Landsteiner:2019kxb} for a recent review of this topic.
\para
So far, investigations of the holographic Weyl semimetal have mainly focused on translationally invariant systems where momentum is conserved. In real materials, the momentum of electrons is dissipated due to scattering off the background ion lattice or disorder. In the weakly coupled field theory, the Weyl points can be destroyed by breaking the translational symmetry \cite{Hosur:2013kxa}, which means that momentum relaxation may have nontrivial effects on the properties of the Weyl semimetal. In the strongly coupled regime, the Weyl semimetal still exists, and it is important to explore the effects of momentum relaxation on the system \cite{Landsteiner:2015lsa,Landsteiner:2015pdh}. This motivates us to study momentum relaxation in the holographic Weyl semimetal by breaking the translational symmetry along the spatial directions.
\para
We will use the linear axion model \cite{Andrade:2013gsa,Baggioli:2021xuv} to implement momentum dissipation in the holographic Weyl semimetal. This enables us to break the translational symmetry while retaining the homogeneity of the background geometry. We will focus on low-temperature physics, for two reasons. First, the zero-temperature ground state is difficult to construct in the presence of the axion fields. Second, absolute zero temperature cannot be reached in experiments; because of the quantum critical region, the nature of the quantum phase transition already manifests itself at low temperature. It is therefore suitable to study low-temperature physics in order to investigate how the critical point of the phase transition behaves under momentum dissipation, which is the main focus of this paper.
\para
This paper is organized as follows. In section 2, we introduce the holographic model of Weyl semimetal including axion fields. In section 3, we calculate the dc conductivities of the vector gauge field fluctuations and investigate their behavior with respect to the momentum relaxation strength. Section 4 is devoted to the conclusion and discussion. The Appendix presents the details of the equations of motion and asymptotic expansions.
\section{Holographic Weyl semimetal with momentum relaxation}\label{sec2}
In this section, we begin our setup of the holographic Weyl semimetal with momentum relaxation which is induced by the axion fields. The action for the model reads
\begin{eqnarray}\label{action}
\mathcal{S}&=&\int d^5x\sqrt{-g}
\bigg[\frac{1}{2\kappa^2}\big(R+\frac{12}{L^2}\big)-\frac{1}{4}F^2-\frac{1}{4}\mathcal{F}^2+\frac{\alpha}{3}\epsilon^{abcde}A_a\big(F_{bc}F_{de} +3\mathcal{F}_{bc}\mathcal{F}_{de}\big) \nonumber\\
&&-(D_a\Phi)^\ast(D^a\Phi)-V(\Phi)-\frac{1}{2}\sum_{I=1}^{3}(\partial \psi_{I})^2 \bigg]+\mathcal{S}_{GH}+\mathcal{S}_{c.t.} \,,
\end{eqnarray}
where $\kappa^2$, $L$, and $\alpha$ are the gravitational constant, AdS radius, and Chern-Simons coupling, respectively.
According to the holographic dictionary, the vector gauge field $V_a$ corresponds to the vector current in the dual field theory, with field strength $\mathcal{F}_{ab}=\partial_a V_b-\partial_b V_a$. The axial gauge field $A_a$ corresponds to the axial current in the dual field theory, with field strength $F_{ab}=\partial_a A_b-\partial_b A_a$. The scalar field $\Phi$ is charged under the axial gauge transformation, with covariant derivative $D_a\Phi=(\partial_a-iqA_a)\Phi$. We choose the scalar potential $ V(\Phi)=m^2\Phi^2+\frac{\lambda}{2} \Phi^4$ with scalar mass $m^2=-3$. Therefore, the operator dual to the scalar field has conformal dimension $3$, and its source has conformal dimension $1$. Note that the scalar fields $\psi_I \ (I=1,2,3)$ are massless and their number equals the number of spatial dimensions of the dual system. $\mathcal{S}_{GH}$ is the Gibbons-Hawking boundary term, and $\mathcal{S}_{c.t.}$ is the counterterm that renders physical observables finite. Without loss of generality, we will focus on $q=1$ and $\lambda=1/10$ in the following.
\para
The finite-temperature ansatz for the background fields reads
\begin{eqnarray}\label{ansatz}
ds^2=-udt^2+\frac{dr^2}{u}+f(dx^2+dy^2)+hdz^2,\,\, A=A_z dz,\,\, \Phi=\phi(r),\,\, \psi_I=\beta_{Ij} x^j \,,
\end{eqnarray}
where the fields $u, f, h, A_z$, and $\phi$ are functions of the radial coordinate $r$. The corresponding equations of motion can be found in the Appendix. Near the UV boundary, $r\to \infty$, we demand that the background geometry is asymptotically $\text{AdS}_5$ with $u,f,h\sim r^2$. The asymptotic behavior of the axial gauge field and the scalar field reads
\begin{eqnarray}
A_z=b+\cdots,\,\,\, \phi=\frac{M}{b}+\cdots ,
\end{eqnarray}
where $M$ and $b$ correspond to the mass parameter and the time-reversal symmetry-breaking parameter in the field theory, respectively. The scalar fields $\psi_I \, (I=1,2,3)$ depend linearly on the spatial coordinates $x^j=(x, y, z)$, where $\beta_{Ij}=\beta$ is a positive real constant. As in particle physics, the scalar fields $\psi_I$ are often called axions, as they enjoy a shift symmetry. The spatial translational symmetry $x^a\to x^a+\xi^a$ is broken by the spatially dependent sources of $\psi_I$. More precisely, the axion fields contribute to the Ward identity of the boundary energy-momentum tensor, $\nabla_i\langle T^{ij}\rangle=\nabla^j \psi_I^{(0)}\langle O_I \rangle $, which indicates the nonconservation of boundary momentum. Therefore, the axion fields provide a simple holographic approach to dissipating momentum in the dual field theory, with $\beta$ parametrizing the strength of momentum dissipation.
\subsection{Holographic Weyl semimetal without axion fields }
\para
For $\beta=0 $, the translational symmetry is restored, and the axion fields drop out of the equations of motion. The holographic Weyl semimetal has been studied in this case \cite{Landsteiner:2015pdh,Landsteiner:2015lsa}, and we review it briefly in this subsection. We summarize the zero-temperature as well as the finite-temperature physics and discuss how to probe the critical point of the phase transition at low temperature.
\para
At zero temperature, the dual field theory preserves Lorentz invariance in the $(t,x,y)$ directions, which corresponds to $u=f$. There is only one tunable dimensionless parameter, $M/b$. There exist three different kinds of IR solutions, corresponding to different values of $M/b$: (I) {\em the Weyl semimetal phase} for $M/b<(M/b)_c$, (II) {\em the Lifshitz critical point} for $M/b=(M/b)_c=0.744$, and (III) {\em the topologically trivial phase} for $M/b>(M/b)_c$. At zero temperature, the critical point $(M/b)_c$ is uniquely determined by the Lifshitz critical point. The near-horizon value $A_z(0)$ is nonzero in the Weyl semimetal phase, while it always vanishes in the topologically trivial phase. By tuning the parameter $M/b$, the system undergoes a topological quantum phase transition from the Weyl semimetal phase to the topologically trivial phase. The order parameter is the anomalous Hall conductivity, which is proportional to the near-horizon value of $A_z$:
\begin{eqnarray}
\sigma_{\text{AHE}}\propto A_z(0).
\end{eqnarray}
\para
At finite temperature, the background solutions admit a regular expansion near the black hole horizon $r=r_h$ with $u(r_h)=0$. We now have two dimensionless parameters: $M/b$ and $T/b$. At finite and low temperature, the sharp quantum phase transition becomes a crossover due to thermal fluctuations, and the anomalous Hall conductivity remains very small in the topologically trivial phase. Figure \ref{fig:ahe0} shows the anomalous Hall conductivity as a function of $M/b$ at different temperatures for the holographic Weyl semimetal without momentum relaxation.
\begin{figure}[!ptb]
\begin{center}
\includegraphics[width=0.6\textwidth]{plot-ahe0.pdf}
\end{center}
\vspace{-0.6cm}
\caption{The anomalous Hall conductivity as a function of $M/b$ without momentum relaxation for different temperatures \cite{Landsteiner:2015pdh}. The black line is for zero temperature, and the colored lines are for finite temperature with $T/b=0.05$ (blue), $0.03$ (purple), and $0.02$ (green), respectively. The sharp quantum phase transition at zero temperature becomes a crossover at finite temperature.}
\label{fig:ahe0}
\end{figure}
\para
As the temperature is decreased, the anomalous Hall conductivity approaches that of the ground state, which can be used to probe the location of the critical point of the quantum phase transition. At zero temperature, the critical point is the point where $\vert \frac{\partial\sigma_{\text{AHE} }}{\partial(M/b)} \vert$ diverges. At finite temperature, we locate the critical point as the point where $\vert \frac{\partial\sigma_{\text{AHE} }}{\partial(M/b)} \vert$ is maximal. For example, the critical value obtained at $T/b=0.02$ is $0.722$, with a relative error within $3\% $, for the holographic Weyl semimetal without momentum dissipation. This probe of the critical point is therefore accurate at low temperature, and we will use this method to determine the critical point of the phase transition in the momentum-relaxed holographic Weyl semimetal.
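This finite-temperature criterion, locating the crossover at the maximum of $\vert \partial\sigma_{\text{AHE}}/\partial(M/b)\vert$, amounts numerically to a finite-difference derivative and an argmax. A sketch with a synthetic tanh-shaped crossover (the profile and its center are illustrative stand-ins, not the holographic data):

```python
import numpy as np

def critical_point(x, sigma):
    """x-location of the maximum of |d sigma / dx| (finite differences)."""
    return x[np.argmax(np.abs(np.gradient(sigma, x)))]

# smooth crossover centered at M/b = 0.72 (illustrative stand-in for sigma_AHE)
x = np.linspace(0.0, 1.5, 3001)
sigma = 0.5 * (1.0 - np.tanh((x - 0.72) / 0.05))
```

On a sufficiently fine grid this recovers the crossover location to within the grid spacing.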
\subsection{Holographic Weyl semimetal with axion fields}
\para
In the presence of axion fields with $\beta\neq0$, the holographic Weyl semimetal is expected to still exhibit a quantum phase transition between the Weyl semimetal phase and the topologically trivial phase. At zero temperature, the theory is characterized by two dimensionless parameters: $M/b$ and $\beta/b$. Different from the minimal model \cite{Landsteiner:2015pdh}, the zero-temperature solutions have $u\neq f$, which can also be seen from the background equations of motion in the Appendix. It is therefore difficult to find the ground state of the holographic Weyl semimetal in the presence of axion fields, and we leave this for future work.
\para
At finite temperature, we have three dimensionless parameters: $M/b$, $T/b$, and $\beta/b$. The asymptotic expansions of the background fields change slightly compared with the minimal model; see the Appendix for details. We focus on low-temperature physics and fix the temperature of the system to $T/b=0.02$. Using the shooting method, we then obtain a series of numerical solutions of the background equations of motion which depend on the remaining two dimensionless parameters, $M/b$ and $\beta/b$. In the next section, we study the effects of momentum relaxation on the order parameter and various dc conductivities.
\section{Momentum relaxation effects on the phase transition}\label{sec3}
\para
To explore the effects of momentum relaxation, we study the conductivities, i.e., the response of the background system to gauge field fluctuations. In the following, we obtain the phase diagram of the holographic Weyl semimetal from the anomalous Hall conductivity, compute the longitudinal and transverse dc conductivities, and study the behavior of the dc resistivity as a function of temperature in the two phases.
\para
The conductivities of the dual field system are related to the retarded current-current correlation via the Kubo formula:
\begin{eqnarray}
\sigma_{ij}=\lim_{\omega\to 0}\frac{1}{i\omega}\langle J_i J_j\rangle_R(\omega, \mathbf{k}=0) \,.
\end{eqnarray}
In holography, the retarded Green's functions can be obtained from the dual gauge fields fluctuations above the background solutions, where the infalling boundary conditions are imposed at the black hole horizon.
\para
We turn on the vector gauge field fluctuations along the spatial directions
\begin{eqnarray}
\delta V_x=v_x(r)e^{-i\omega t},\,\, \delta V_y=v_y(r)e^{-i\omega t},\,\, \delta V_z=v_z(r)e^{-i\omega t} \,.
\end{eqnarray}
Note that, since the vector field perturbations decouple from those of the axion fields, we do not need to consider the fluctuations of the axion fields as in Ref. \cite{Andrade:2013gsa}. Generally, the axion fields affect the physical system in two ways. First, they alter the background solution and its thermodynamics. Second, they cause momentum relaxation by directly coupling to the perturbation fields. For the model studied here, the absence of the axion fields in the vector fluctuation equations indicates that their effect on transport arises entirely from their effect on the equilibrium solution.
\para
Plugging the above ansatz into the vector field equations, we find
\begin{eqnarray}
v_z''+\bigg(\frac{u'}{u}+\frac{f'}{f}-\frac{h'}{2h}\bigg)v_z'+\frac{\omega^2}{u^2}v_z&=&0 \,,\\
v_\pm''+\bigg(\frac{u'}{u}+\frac{h'}{2h}\bigg)v_\pm'+\frac{\omega^2}{u^2}v_\pm \pm 8\alpha\omega\frac{A_z'}{u\sqrt{h}}v_\pm&=&0 \,,
\end{eqnarray}
where we have defined $v_\pm=v_x\pm i v_y$ to get the last equation.
\para
The full frequency-dependent conductivities can be obtained numerically by solving the above equations with ingoing boundary conditions at the horizon. However, since we are interested only in the dc conductivities, we instead use the near-far matching method, following Ref. \cite{Landsteiner:2015pdh}. This method treats the above equations semianalytically, and the final results can be expressed in terms of data at the black hole horizon $r_h$. The dc conductivities $\sigma_{xx}, \sigma_{yy}$, and $\sigma_{xy}$ computed along these lines are
\begin{eqnarray}\label{eq:ahe}
\sigma_T=\sigma_{xx}=\sigma_{yy}=\frac{G_{+}+G_{-}}{2i\omega}=\sqrt{h(r_h)},\,\, \sigma_{xy}=\frac{ G_{+}-G_{-} }{2\omega}=8\alpha \big( b-A_z(r_h) \big) \,,
\end{eqnarray}
where $G_{\pm}=\omega \big( \pm 8\alpha(b-A_z(r_h))+i \sqrt{h(r_h)} \big)$ are the Green's functions of $v_\pm$. Using the same method, the longitudinal conductivity $\sigma_{zz}$ is given by
\begin{eqnarray}
\sigma_{zz}=\frac{G_{zz}}{i\omega}=\frac{f(r_h)}{\sqrt{h(r_h)}} \,.
\end{eqnarray}
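The dc formulas above only require background data evaluated at the horizon. A minimal sketch of that evaluation (the numerical horizon values below are hypothetical placeholders, not from an actual solution):

```python
import math

def dc_conductivities(f_h, h_h, Az_h, b, alpha):
    """sigma_T = sqrt(h(r_h)), sigma_xy = 8 alpha (b - A_z(r_h)),
    sigma_zz = f(r_h)/sqrt(h(r_h)), per the horizon formulas above."""
    sigma_T = math.sqrt(h_h)
    sigma_xy = 8.0 * alpha * (b - Az_h)
    sigma_zz = f_h / math.sqrt(h_h)
    return sigma_T, sigma_xy, sigma_zz

# hypothetical horizon data
sT, sxy, szz = dc_conductivities(f_h=4.0, h_h=4.0, Az_h=0.5, b=1.0, alpha=1.5)
sigma_AHE = 8.0 * 1.5 * 1.0 - sxy   # anomalous Hall part, 8*alpha*b - sigma_xy
```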
\subsection{Phase diagram}\label{sec:pd}
\para
The phase transition is characterized by the anomalous Hall conductivity, which can be expressed as
\begin{eqnarray}\label{ahe}
\sigma_{\text{AHE} }=8\alpha b-\sigma_{xy}=8\alpha A_z(r_h) \,.
\end{eqnarray}
\para
In Fig. \ref{fig:ahe}, we plot the anomalous Hall conductivity as a function of $M/b$ for different $\beta/b$ at temperature $T/b=0.02$. For a fixed value of $\beta/b$, the figure shows that, as we increase $M/b$, the anomalous Hall conductivity decreases monotonically from the Weyl semimetal phase to a very small value in the topologically trivial phase. Comparing with the results without momentum relaxation (black dashed curve), we find that momentum relaxation affects the phase transition in an interesting way. For small momentum relaxation strength (i.e., for $\beta/b<1$), the anomalous Hall conductivity remains almost unchanged. As we increase $\beta/b$ further, the anomalous Hall conductivity changes dramatically, and its value decreases rapidly in the region $M/b<0.744$ (i.e., the original Weyl semimetal phase at $\beta/b=0$). As the topological Weyl semimetal phase is characterized by a nontrivial anomalous Hall conductivity, this behavior indicates that the region of the Weyl semimetal phase narrows and finally disappears as $\beta/b$ increases. \footnote{At $M/b=0.01$, the numerical results for the normalized AHE are less than $1$ for $\beta/b>2.75$. This seems inconsistent with the analytical result at $M/b=0$, which we explain in the next subsection.}
\begin{figure}[!ptb]
\begin{center}
\includegraphics[width=0.45\textwidth]{plot-ahe1.pdf}
\includegraphics[width=0.45\textwidth]{plot-ahe3d.pdf}
\end{center}
\vspace{-0.6cm}
\caption{Left: the normalized anomalous Hall conductivity as a function of $M/b$ for different $\beta/b$ at temperature $T/b=0.02$. The black dashed curve is the anomalous Hall conductivity without momentum dissipation, while the colored curves are for $\beta/b=1$ (red), $2$ (green), $2.5$ (blue), $2.75$ (orange), $3$ (purple), and $3.5$ (cyan), respectively. Right: the 3D version of the anomalous Hall conductivity as a function of $M/b$ and $\beta/b$, where $\beta/b$ ranges from $0$ to $3.5$ in steps of $1/4$.}
\label{fig:ahe}
\end{figure}
\para
To characterize more precisely the effects of momentum relaxation on the order parameter, we study the behavior of the critical point of the phase transition with respect to $\beta/b$. The critical point can be obtained from the anomalous Hall conductivity as the point with maximal $\vert \frac{\partial\sigma_{\text{AHE} } }{\partial(M/b)} \vert$. We show our main results in Fig. \ref{fig:critical}, which gives the critical point $(M/b)_c$ as a function of $\beta/b$ at temperature $T/b=0.02$. As we increase $\beta/b$, the value of the critical point decreases monotonically. There exists a critical $(\beta/b)_c$ above which the critical point reaches zero. The value of the critical point decreases very slowly for $\beta/b<2$, while it drops rapidly for $2<\beta/b<(\beta/b)_c$. \footnote{For $2.5< \beta/b<(\beta/b)_c$, $\vert \frac{\partial\sigma_{\text{AHE} } }{\partial(M/b)} \vert$ does not show a sharp peak, which means that the critical value $(M/b)_c$ in this region may have a relatively large error.} This indicates that momentum relaxation can reduce and even destroy the Weyl semimetal phase, which is the main finding of this paper. It is consistent with field theory predictions, as the following simple picture explains. From the dual point of view, we fix the distance between the Weyl points in momentum space to be $1$. The case $\beta/b=0$, i.e., a system without momentum relaxation, corresponds to a Brillouin zone of width $k_L\to \infty$. As we increase the momentum relaxation strength $\beta/b$, the value of $k_L$ decreases. There exists a critical $\beta/b$ at which $k_L=1$, where the two Weyl points meet and annihilate each other due to the periodicity of the Brillouin zone. This picture explains the observed disappearance of the Weyl semimetal phase as $\beta/b$ is increased.
\begin{figure}[!ptb]
\begin{center}
\includegraphics[width=0.6\textwidth]{plot-critical.pdf}
\end{center}
\vspace{-0.6cm}
\caption{The critical point of the phase transition as a function of $\beta/b$ at temperature $T/b=0.02$.}
\label{fig:critical}
\end{figure}
\subsubsection{ Anomalous Hall conductivity at $M/b=0$ }
\para
In this subsection, we analyze the two possible solutions of the momentum-relaxed system in the $M/b\to 0$ limit and then explain the apparent conflict mentioned in the footnote of the previous subsection. In the $M/b\to 0$ limit, the background geometry has a simple analytical solution \cite{Andrade:2013gsa}, which reads
\begin{eqnarray}\label{sch}
u=r^2-\frac{r_h^4}{r^2}+\frac{\beta^2}{4}\Big( -1+\frac{r_h^2}{r^2} \Big) ,\,\, f=h=r^2,\,\, A_z=b,\,\, \phi=0,\,\, \psi_I=\beta x^I \,.
\end{eqnarray}
From Eq. (\ref{ahe}), the normalized anomalous Hall conductivity is $\frac{\sigma_{\text{AHE} }}{8\alpha b}|_{\frac{M}{b}=0}=1$, independent of $\beta/b$. In addition to the analytical solution, we can find a spontaneous symmetry-breaking-type solution following the analysis in Ref. \cite{Horowitz:2009ij}. At zero temperature, $r_h=\frac{\beta}{2\sqrt{2} }$, and the near-horizon limit of Eq. (\ref{sch}) is $\text{AdS}_2\times \mathbb{R}^3$. By analyzing the linearized equation of motion for $\phi$, we find that its effective mass in the extremal geometry becomes $m_{eff}^2=\frac{m^2}{4}+\frac{2b^2q^2}{\beta^2}$. Therefore, the zero-temperature background is unstable if $m_{eff}^2$ lies below the Breitenlohner-Freedman (BF) bound of $\text{AdS}_2$, $m_{BF}^2=-1/4$. At zero temperature, the condition for instability is $\beta/b>\frac{2\sqrt{2}q}{\sqrt{1-m^2}}$. For the particular parameters studied in this paper, the new branch of solutions is therefore expected for $\beta/b>\sqrt{2}$. At finite temperature with $T/b$ fixed, there exists a critical $(\beta/b)_{n} $ above which the new solution appears.\footnote{We conjecture that this critical $(\beta/b)_n$ is equal to the $(\beta/b)_{c} $ shown in Fig. \ref{fig:critical}.}
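The quoted properties of this background can be checked symbolically: $u(r_h)=0$ holds identically, and, using the standard formula $T = u'(r_h)/4\pi$ for this ansatz, extremality fixes $r_h = \beta/(2\sqrt{2})$. A sympy sketch:

```python
import sympy as sp

r, rh, beta = sp.symbols('r r_h beta', positive=True)
u = r**2 - rh**4 / r**2 + (beta**2 / 4) * (-1 + rh**2 / r**2)

horizon = sp.simplify(u.subs(r, rh))             # u(r_h) = 0 identically
T = sp.simplify(sp.diff(u, r).subs(r, rh) / (4 * sp.pi))
rh_extremal = sp.solve(sp.Eq(T, 0), rh)          # zero-temperature horizon radius
```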
\para
From the above analysis, we know that there exist two solutions for $M/b=0$ at temperature $T/b=0.02$. The apparent inconsistency can therefore be understood as follows. As $M/b\to 0$, the numerical solutions in Fig. \ref{fig:ahe} approach the spontaneous symmetry-breaking-type solution more readily when $\beta/b$ is larger than $2.75$. A detailed analysis of the various phases near this region is beyond the scope of this paper and requires further work.
\subsection{dc conductivities and resistivities}
\para
Apart from the anomalous Hall conductivity, it is interesting to study the behavior of the diagonal conductivities as a function of $M/b$ for different $\beta/b$ at $T/b=0.02$. Figure \ref{fig:dcTL} shows that the transverse (longitudinal) conductivities exhibit a peak (minimum) at an intermediate value of $M/b$, and the location of the peak (minimum) decreases as $\beta/b$ is increased. Comparing the location of the peak (minimum) with the critical value of the phase transition (vertical lines), we find that both show a similar monotonically decreasing behavior as $\beta/b$ increases. This supports the shrinking and disappearance of the Weyl semimetal phase under momentum dissipation observed in the behavior of the anomalous Hall conductivity. As $M/b\to 0$, we find that the transverse and longitudinal conductivities take the same value at fixed $\beta/b$, and this value increases as $\beta/b$ is increased. For large $M/b$, the diagonal conductivities approach constant values, which increase slightly as $\beta/b$ is increased.
\begin{figure}[!ptb]
\begin{center}
\includegraphics[width=0.6\textwidth]{plot-dc.pdf}
\end{center}
\vspace{-0.6cm}
\caption{The linear-log plot of the transverse (dashed lines) and longitudinal (solid lines) conductivities as a function of $M/b$ for different $\beta/b$ at temperature $T/b=0.02$. The black lines are the conductivities of the system without momentum relaxation, while the colored curves are for $\beta/b=2$ (red), $2.5$ (green), and $2.75$ (blue), respectively. The dashed gray lines mark the positions of the critical points of the phase transition.}
\label{fig:dcTL}
\end{figure}
\para
Figure \ref{fig:mi} shows the dc resistivity $\rho=1/\sigma$ as a function of temperature in the topologically trivial phase and the Weyl semimetal phase. At low temperature, the dc resistivity decreases with temperature at fixed $\beta/b$ in both phases. For $\beta/b=0$, the dc resistivities (black dashed curves) in both phases behave as $\rho_{T/zz}\sim T^{-1}$, which corresponds to the linear frequency dependence of the conductivities, $\sigma_{T/zz}\sim\omega$ $(\omega\to 0,\, T=0)$, in the ground state \cite{Grignani:2016wyz,Landsteiner:2015pdh}. As we increase $\beta/b$, the behavior of the dc resistivities changes gradually, and the $T^{-1}$ dependence no longer applies. In the topologically trivial phase, the dc resistivities follow a power law, $\rho\sim T^{-1-\delta}$, where the value of $\delta$ depends on $\beta/b$. This power-law scaling hints at a possible emergent symmetry of the zero-temperature ground state. In contrast, we do not find a simple scaling behavior of the dc resistivities at $M/b=0.45$ with nonzero momentum relaxation strength.
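The exponent $\delta$ in $\rho\sim T^{-1-\delta}$ can be extracted by a linear fit in log-log variables. A sketch with synthetic data (the exponent below is illustrative, not a fitted holographic value):

```python
import numpy as np

def power_law_exponent(T, rho):
    """Fit rho ~ T^p in log-log variables and return the slope p."""
    p, _ = np.polyfit(np.log(T), np.log(rho), 1)
    return p

# synthetic resistivity with rho ~ T^(-1-delta), delta = 0.3
T = np.linspace(0.01, 0.05, 50)
rho = 2.7 * T ** (-1.3)
delta = -power_law_exponent(T, rho) - 1.0   # recovers 0.3
```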
\begin{figure}[!ptb]
\begin{center}
\includegraphics[width=0.42\textwidth]{plot-Ti.pdf}
\includegraphics[width=0.42\textwidth]{plot-Zi.pdf}
\includegraphics[width=0.42\textwidth]{plot-Tm.pdf}
\includegraphics[width=0.42\textwidth]{plot-Zm.pdf}
\end{center}
\vspace{-0.6cm}
\caption{Log-log plot of the transverse (left) and longitudinal (right) dc resistivities as a function of temperature for different $\beta/b$, where the top two panels are for the topologically trivial phase with $M/b=1.2$ and the bottom two panels are for the Weyl semimetal phase with $M/b=0.45$. The black dashed line is the dc resistivity of the system without momentum relaxation, and the colored curves are for $\beta/b=1$ (red), $2$ (green), and $3$ (blue), respectively, in each panel. The resistivity $\rho_0$ normalizes the dc resistivity in each panel.}
\label{fig:mi}
\end{figure}
\section{Conclusion and discussion}\label{sec4}
\para
In this work, we have studied the effects of momentum relaxation in the holographic Weyl semimetal with a topological quantum phase transition. The momentum relaxation is induced by axion fields that break translational symmetry along the spatial directions. The order parameter of the phase transition is the anomalous Hall conductivity. By tuning the momentum dissipation strength, we obtained the behavior of the anomalous Hall conductivity across the phase transition at finite temperature. At finite and low temperature, the critical value of the phase transition can be extracted from the anomalous Hall conductivity. We found that it decreases as the strength of momentum dissipation is increased, up to a special value above which the critical value goes to zero. This indicates that momentum relaxation can lead to the shrinking and disappearance of the Weyl semimetal phase, consistent with the predictions of the weakly coupled field theory.
\para
We have also studied the behavior of the transverse and longitudinal conductivities for different momentum relaxation strengths. We found that the location of the maximum (minimum) of the transverse (longitudinal) conductivity approaches zero as the momentum relaxation strength is increased, which supports the shrinking and disappearance of the Weyl semimetal phase under momentum relaxation. Finally, we have studied the temperature dependence of the dc resistivity for different momentum relaxation strengths in the two phases, and observed a power-law scaling of the dc resistivity in the topologically trivial phase.
\para
Momentum relaxation affects the holographic Weyl semimetal in an interesting way, and several further questions are worth exploring. First, the shrinking and disappearance of the Weyl semimetal phase under momentum relaxation is observed from the behavior of the critical point of the phase transition. However, the definition of the critical point at finite temperature is not exact compared with that of the zero-temperature ground state. It is therefore important to explore the zero-temperature physics of the Weyl semimetal with broken translational invariance in order to obtain more evidence for, and an explanation of, the phenomenon found in this paper. Second, as the momentum relaxation is induced by massless axion fields, the results found here may depend on the particular translational symmetry breaking mechanism we used. It would be interesting to apply other mechanisms of breaking translational invariance, such as massive gravity \cite{Vegh:2013sk,Davison:2013jba}, to test the universality of our results.
\vspace{.8cm}
\subsection*{Acknowledgments}
I thank Yan Liu for his suggestions on the project and helpful guidance throughout the work. I thank Hong-Da Lyu, Yan Liu, and Xin-Meng Wu for reading a preliminary version of the manuscript and providing useful comments and suggestions. I thank Zhi-Hong Li, Jie Jiang, Qi-Rong Jiao, Han-Qing Shi, and Hai-Qing Zhang for useful discussions. This work is supported by the National Natural Science Foundation of China Grant No. 11875083.
\vspace{.3 cm}
\section{Introduction}
Time dependent density functional theory (TDDFT) was introduced
by E.~Runge and E.K.U.~Gross in \cite{RG} as a non-interacting electron
model which tracks electron charge exactly. An
exposition of the subject may be found in \cite{U}.
When Kohn-Sham potentials are used, the electronic Hamiltonian includes any
(time dependent) external potentials, ionic potentials, the Hartree
potential, and the compensating exchange-correlation potential, which
together ensure the non-interacting character and exact charge density of
the model.
By permitting time dependent potentials,
TDDFT extends the nonlinear Schr\"{o}dinger equation,
which has been studied extensively \cite{CH,Caz}, principally with
potentials not directly depending on time. Some progress for time
dependent linear Hamiltonians has been made \cite{MR}.
In previous work \cite{JP,J1}, we analyzed
closed quantum systems on bounded domains of
${\mathbb R}^{3}$
via time-ordered evolution operators. The article \cite{JP} demonstrated
strong $H^{2}$ solutions, compatible with simulation, whereas
the article \cite{J1}
demonstrated weak
solutions;
\cite{J1} also includes the exchange-correlation component of the
Hamiltonian potential, not included in \cite{JP}, which
is a nonlocal time-history term, satisfying certain regularity
hypotheses.
TDDFT is a significant field for applications,
including computational nano-electronics and chemical
physics \cite{tddft}.
An important early article in the time dependent case, directed toward
Hartree-Fock Hamiltonians, is
\cite{CLB}. This article included nuclear dynamics as a coupled classical
dynamical system, and defined an electronic Hamiltonian in terms of a
kinetic term, together with a Hartree potential, an ionic potential with
mobile point masses, and an external, electric-field-induced potential.
The mathematical framework was defined on ${\mathbb R}^{3}$ in terms of a
Cauchy problem with $H^{2}$ initial datum.
A recent
article directed toward TDDFT,
in which a quantum correction is of
local density type, is \cite{SCB}; this article couples quantum mechanics
and control theory. Neither of these articles allows for a time-history
exchange-correlation potential.
In this article, we introduce a class of quantum corrections, including
the local density approximation, but also ionic Coulomb potentials and
time-history potentials.
As we demonstrate below,
smoothing of such potentials provides a model within the
framework of \cite{J1}. By using compactness arguments suggested in
\cite{Caz}, we are able to obtain a solution of the originally posed
model.
Uniqueness is also established.
The use of evolution
operators and smoothing as presented here is consistent with techniques in
the applied literature \cite{tddft} and provides direct support for
successive approximation and other numerical procedures \cite{JF,CP}.
In this sense, the results of this article are more inclusive than an
existence/uniqueness analysis.
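To indicate the connection with numerical practice, we include a small illustrative sketch of our own (it is not taken from \cite{JF,CP}, and the one-dimensional setting, the units $\hbar = 2m = 1$, the cubic-type local potential, and the dense linear solves are simplifying assumptions): a single Crank--Nicolson step of a model nonlinear Schr\"{o}dinger equation with an LDA-type potential $\lambda |\psi|^{\alpha}$, in which the implicit nonlinear system is solved by successive approximation.

```python
import numpy as np

# Illustrative sketch only (1D, hbar = 2m = 1, dense solves): one
# Crank-Nicolson step for  i psi_t = -psi_xx + (V + lam*|psi|^alpha) psi
# with homogeneous Dirichlet conditions, the nonlinear implicit system
# being solved by successive approximation (Picard iteration).

def apply_h(psi, V, lam, alpha, dx):
    """Apply H(psi) = -psi'' + (V + lam*|psi|^alpha) psi at interior points."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return -lap + (V + lam * np.abs(psi) ** alpha) * psi

def cn_step(psi, V, lam, alpha, dx, dt, iters=20):
    """One Crank-Nicolson step; the potential on the new time level is
    frozen at the previous Picard iterate, so each sweep is linear."""
    n = len(psi)
    rhs0 = psi - 0.5j * dt * apply_h(psi, V, lam, alpha, dx)
    psi_new = psi.copy()
    for _ in range(iters):
        veff = V + lam * np.abs(psi_new) ** alpha
        main = 1.0 + 0.5j * dt * (2.0 / dx**2 + veff)
        off = np.full(n - 1, -0.5j * dt / dx**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        rhs = rhs0.copy()
        A[0, :] = 0.0; A[0, 0] = 1.0; rhs[0] = 0.0      # Dirichlet at left
        A[-1, :] = 0.0; A[-1, -1] = 1.0; rhs[-1] = 0.0  # Dirichlet at right
        psi_new = np.linalg.solve(A, rhs)
    return psi_new
```

A single step with a sine initial state conserves the discrete $L^{2}$ norm to within the splitting error, reflecting the unitarity of the underlying evolution.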
In the following subsections of the introduction, we
summarize the basic results of \cite{J1},
as a starting point for the present article.
In section two, we formulate the new model, which incorporates the
category of quantum corrections,
and we prove that its smoothed version lies
within the scope of \cite{J1}. In section three, we introduce the
compactness arguments, and establish existence of a weak solution as the
limit of solutions of the smoothed model.
Uniqueness is established in section four. We conclude with some summary
remarks.
\subsection{The model}
\label{origmodel}
In its original form, without ionic influence,
TDDFT includes three components for the electronic potential:
an external potential, the Hartree potential,
and a general non-local term representing the exchange-correlation potential,
which is assumed to include a time-history part.
If $\hat H$ denotes
the Hamiltonian operator of the system, then the state $\Psi(t)$ of the
system obeys the nonlinear Schr\"{o}dinger equation,
\begin{equation}
\label{eeq}
i \hbar \frac{\partial \Psi(t)}{\partial t} = \hat H \Psi(t).
\end{equation}
Here,
$\Psi = \{\psi_{1}, \dots, \psi_{N}\}$
consists of
$N$
orbitals, and the charge density
$\rho$
is defined by
$$ \rho({\bf x}, t) = |\Psi({\bf x}, t)|^{2} =
\sum_{k = 1}^{N} |\psi_{k} ({\bf x}, t)|^{2}.
$$
An initial condition,
\begin{equation}
\label{ic}
\Psi(0) = \Psi_{0},
\end{equation}
and boundary conditions are included.
The particles are confined to a bounded Lipschitz region
$\Omega \subset {\mathbb R}^{3}$
and homogeneous Dirichlet boundary conditions hold
within a closed system.
$\Psi$
denotes a finite vector function of space and time.
The effective potential
$V_{\rm e}$
is a real scalar function of the form,
$$
V_{\rm e} ({\bf x},t, \rho) = V({\bf x}, t) +
W \ast \rho + \Phi({\bf x}, t, \rho).
$$
Here,
$W({\bf x}) = 1/|{\bf x}|$
and the convolution
$W \ast \rho$
denotes the Hartree potential. If $\rho$ is extended as zero outside
$\Omega$, then, for ${\bf x} \in \Omega$,
$$
W \ast \rho \; ({\bf x})=\int_{{\mathbb R}^{3}}
W({\bf x} -{\bf y}) \rho({\bf y})\;d {\bf y},
$$
which depends only upon values $W({\bf z})$,
$\|{\bf z}\|\leq
\mbox{diam}(\Omega)$. We may redefine $W$
smoothly outside this set,
so as to obtain a function of compact support for which Young's inequality
applies. The exchange-correlation potential
$\Phi$
represents a time-history of $\rho$:
$$
\Phi({\bf x}, t, \rho)= \Phi({\bf x}, 0, \rho) +
\int_{0}^{t} \phi({\bf x}, s, \rho) \; ds.
$$
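As a concrete illustration of the Hartree term (a numerical sketch under simplifying assumptions of our own, not part of the analysis), $W \ast \rho$ may be approximated on a small cubic grid covering $\Omega$ by a direct sum; since $\rho$ vanishes outside $\Omega$, only separations up to $\mbox{diam}(\Omega)$ contribute, and the singular self-interaction cell is dropped.

```python
import numpy as np

# Illustrative sketch: direct-sum evaluation of the Hartree potential
# (W * rho)(x) = sum_y W(x - y) rho(y) h^3, with W(z) = 1/|z|, on a small
# cubic grid of spacing h; the singular self-term (r = 0) is dropped.

def hartree(rho, h):
    n = rho.shape[0]
    idx = np.arange(n)
    X, Y, Z = np.meshgrid(idx, idx, idx, indexing="ij")
    pts = h * np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    W = np.where(r > 0.0, 1.0 / np.maximum(r, 1e-300), 0.0)
    return (W @ (rho.reshape(-1) * h**3)).reshape(rho.shape)
```

On a grid, the kernel may equivalently be truncated at $\mbox{diam}(\Omega)$ and applied by FFT; the direct sum above is used only for transparency.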
The Hamiltonian operator is given by,
\begin{equation}
\hat H
= -\frac{\hbar^{2}}{2m} \nabla^{2}
+V({\bf x}, t) +
W \ast \rho + \Phi({\bf x}, t, \rho),
\label{Hamiltonian1}
\end{equation}
and
$m$
designates the effective mass and $\hbar$ the
normalized Planck's constant.
If ionic influence is present, then (\ref{Hamiltonian1}) is adjusted,
typically by Coulomb potentials.
\subsection{Definition of weak solution and function
spaces}
The solution
$\Psi$
is continuous from the time interval
$J$,
to be
defined shortly,
into the finite energy Sobolev space
of complex-valued
vector functions which vanish in a generalized sense on the boundary,
denoted
$H^{1}_{0}(\Omega)$: $\Psi \in C(J; H^{1}_{0})$.
The time derivative is continuous from
$J$
into the dual
$H^{-1}$
of
$H^{1}_{0}$:
$\Psi \in C^{1}(J; H^{-1})$.
The spatially dependent test functions
$\zeta$
are
arbitrary in
$H^{1}_{0}$.
The duality bracket is denoted
$\langle f, \zeta \rangle$.
{\it Norms and inner products are discussed in Appendix \ref{appendixA}.}
We will make use of the equivalence of the standard $H^{1}_{0}$ norm and
the gradient seminorm, due to the Poincar\'{e} inequality,
which holds for bounded domains $\Omega$ \cite{Leoni}.
\begin{definition}
\label{weaksolution}
For
$J=[0,T]$,
the vector-valued function
$\Psi = \Psi({\bf x}, t)$
is a
weak solution of (\ref{eeq}, \ref{ic}, \ref{Hamiltonian1}) if
$\Psi \in C(J; H^{1}_{0}(\Omega)) \cap C^{1}(J;
H^{-1}(\Omega)),$
if
$\Psi$
satisfies the initial condition
(\ref{ic}) for
$\Psi_{0} \in H^{1}_{0}(\Omega)$,
and if
$\forall \; 0 < t \leq T$:
\vspace{.25in}
\begin{equation}
i \hbar\langle \frac{\partial\Psi(t)}{\partial t},
\zeta \rangle =
\int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, t)\cdotp \nabla { \zeta}({\bf x})
+ V_{\rm e}({\bf x},t,\rho) \Psi({\bf x},t) { \zeta}({\bf x})
d{\bf x}.
\label{wsol}
\end{equation}
\end{definition}
\subsection{Hypotheses and theorem statement}
\label{hyps}
We provide some discussion, relevant to the physical model, prior to the
statement of the hypotheses. Additional discussion will be provided
following the hypotheses. It is emphasized that the hypotheses of this
subsection are those required for the original theory of \cite{J1} to apply;
this was accomplished with evolution operators and the Banach fixed point
mapping. Subsequent sections of this article consider more general families
of correction potentials.
The time-history potential $\Phi({\bf x}, t, \rho)$
above has a structure, including the time-integrated part, which is
motivated by \cite[Eqs.\ (15), (17)]{MarGross}. This article characterizes
the action functionals $A$ whose variational derivatives with respect to
$\rho$ yield appropriate exchange-correlation potentials.
The form of $\Phi$ selected above represents a general statement of these
ideas. It is not unreasonable that the mathematical hypotheses, to be
stated shortly, should resemble
the known properties of the Hartree potential because of
the restorative nature of exchange and correlation. From a mathematical
perspective, the model permits multiple `copies' of $\Phi$, allowing for
quantum corrections. These are seen to be important for applications. For
example, in the quantum chemistry community \cite{SaddTeter}, it is
appropriate to split $\Phi$: the exchange part is represented by a
weighted density approximation (WDA), while the correlation part is
represented by a local density approximation (LDA).
The nonlocal WDA form for $\Phi$ is appropriate for nonuniform mixtures
\cite{DL}. The general form we have allowed for $\Phi$ is intended to
anticipate applications of this type.
The following
hypotheses are those for which the evolution operator theory of \cite{J1}
applies. The present article builds upon this established theory.
We assume the following
hypotheses in order to apply the results of
\cite{J1}.
\begin{itemize}
\item
\begin{enumerate}
\item
The time-history potential
$\Phi$
is continuous in
$t \in J$
into
$H^{1}_{0}$.
\item
$\Phi$ is bounded, uniformly in
$t \in J$,
from $H^{1}_{0}$
into
$W^{1,3}$.
More precisely,
by boundedness, we mean that the family $\{\Phi(\cdotp, t, \cdotp)\}$
maps every fixed ball in $H^{1}_{0}$
into a fixed ball in
$W^{1,3}$,
uniformly in
$t$.
\end{enumerate}
\item
The derivative
$\partial \Phi/\partial t =
\phi$
is assumed measurable, and bounded in its arguments.
\item
Furthermore, the following smoothing condition
is assumed,
expressed by a (uniform) Lipschitz norm condition:
$$\forall t \in [0,T],
\mbox{\rm if} \;
\|\Psi_{j}\|_{H^{1}_{0}},
j=1,2, \; \mbox{are bounded by} \; r,
$$
then
\begin{equation}
\|[\Phi(\cdotp,t,|\Psi_{1}|^{2})-\Phi(\cdotp,t,|\Psi_{2}|^{2})]\psi\|_{H^{1}}
\leq
C(r) \|\Psi_{1} - \Psi_{2}\|_{H^{1}_{0}} \|\psi\|_{H^{1}_{0}}.
\label{ecfollowsH}
\end{equation}
Here,
$\psi$
is arbitrary in
$H^{1}_{0}$
and
$C(r)$
depends only on
$r$.
\item
If $\Phi(\cdotp, 0, \rho)$ fails to be a nonnegative functional of
$\rho = |\Psi|^{2}$,
we assume that it
satisfies, uniformly in $t$,
for $\; \|\Psi(t)\|_{L^{2}} =
\|\Psi_{0}\|_{L^{2}}$,
the constraint that
\begin{equation}
\label{constraint}
\|\Phi(\cdotp, 0,
|\Psi|^{2})
|\Psi|^{2}\|_{L^{1}} \leq C_{1} \|\nabla \Psi\|_{L^{2}}^{2}
+ C_{2}, \; \Psi(t) \in H^{1}_{0},
\end{equation}
for nonnegative constants $C_{1}$ and $C_{2}$. It is required that
$C_{2}$ depend only on $\|\Psi_{0}\|_{L^{2}}$ and the problem data,
and $C_{1}$ is
sufficiently small:
\begin{equation}
\label{sufsmall}
C_{1} < \frac{\hbar^{2}}{2m}.
\end{equation}
\item
The so-called external potential
$V$
is assumed to be
continuously
differentiable on the closure of the space-time domain.
\end{itemize}
\begin{remark}
We comment here on the hypotheses.
\begin{enumerate}
\item
The regularity assumed for $\Phi$ in the first assumption
is consistent with certain
requirements of TDDFT. One of these is the Zero Force Theorem
\cite{U}, which imposes a gradient condition on $\Phi$.
We note that the Hartree potential satisfies these conditions. In fact,
any convolution of the form $\Phi = F \ast \rho$, where $F \in W^{1,1}$,
satisfies the conditions.
\item
An inequality of the form
(\ref{ecfollowsH})
is satisfied by the Hartree
potential \cite[Theorem 3.1]{JNA},
and by any convolution of the form $\Phi = F \ast \rho$, with
$F \in L^{2}$ and $\nabla F \in L^{1}$.
It was used in \cite{J1} to
construct the contraction mapping used there for the evolution operator.
For quantum corrections not
satisfying this condition, the smoothing is utilized in the following
section in order to place the smoothed systems within this framework.
\item
Hypotheses (\ref{constraint}, \ref{sufsmall}) are relevant only when the
associated potentials are negative. This is expected to occur for
restoring potentials and certain Coulomb potentials.
In the following section, it will be necessary to
smooth certain components of the quantum correction potential. The
smoothed Coulomb potentials satisfy
(\ref{constraint}, \ref{sufsmall}) without qualification. However, for
smoothed LDA approximations,
there is a disparity in exponent bounds for $\alpha$.
A smaller range is necessary for negative potentials (see (\ref{proplamb}) to
follow for verification in this case).
Also, unsmoothed convolutions of the form $\Phi = F \ast \rho$, with
$\nabla F \in L^{1}$, satisfy the conditions if they have sufficiently
small $L^{\infty}$ bounds.
\end{enumerate}
\end{remark}
The following theorem was proved in \cite{J1}, based upon the evolution
operator as presented in \cite{J2}, and will provide a solution
for the smoothed problem on
$J$
as introduced in the following section.
\begin{theorem}
\label{EU}
For any interval
$[0,T]$,
the system (\ref{wsol}) in Definition
\ref{weaksolution},
with Hamiltonian
defined by (\ref{Hamiltonian1}),
has a unique weak solution if the hypotheses of section \ref{hyps} hold.
\end{theorem}
\section{Quantum Corrections and the Local Density Approximation}
\label{qcsection}
In this section, we define a class of quantum correction potentials,
including the local density approximation to the
exchange-correlation potential
$\Phi$. These correction potentials are of three types.
\begin{enumerate}
\item
The local density approximation, discussed in Definition \ref{Def2.1}
to follow.
This potential is designated as $\Phi_{\mbox{\rm lda}}(\rho)$.
\item
A finite number of Coulomb ionic potentials, $c_{j} W(\cdotp - {\bf
x}_{j})$, subject to the Born-Oppenheimer approximation. In particular,
the ionic masses are assumed to be point masses, at fixed locations ${\bf
x}_{j} \in \Omega$. The function $W$ is introduced in section
\ref{origmodel}. The constants $c_{j}$ may be positive or negative.
The aggregate of these Coulomb potentials is designated $\Phi_{\mbox{\rm
c}}(\cdotp)$.
\item
A time-history potential of the structure of $\Phi$, introduced in section
\ref{origmodel}. The presence of this potential allows for physical
modeling flexibility, since the exchange potential and the correlation
potential are viewed separately in TDDFT. We permit one of these to be
approximated locally and the other by a time-history among the modeling
choices. We retain the notation $\Phi(\cdotp, t, \rho)$ for this
component, assumed to satisfy the hypotheses detailed in section
\ref{hyps}. Also, it is assumed that $\Phi(\cdotp, t,
\rho_{n}(\cdotp, t))$ converges in $L^{2}$, uniformly in $t$, if
$\rho_{n}(\cdotp, t)$ converges in $L^{2}$, uniformly in $t$.
\end{enumerate}
The consolidated quantum correction potential is then given by
\begin{equation}
\Phi_{\mbox{\rm qc}}(\cdotp, t, \rho) = \Phi_{\mbox{\rm lda}}(
\rho) + \Phi_{\mbox{\rm c}}(\cdotp) +
\Phi(\cdotp, t, \rho).
\end{equation}
\begin{definition}
\label{Def2.1}
The local density approximation $\Phi_{\mbox{\rm lda}}$ is
now defined.
We consider the following approximation,
where
$\lambda$
is a real constant, positive or negative.
\begin{equation}
\label{lda}
\Phi_{\rm lda}(\rho) = \lambda \rho^{\alpha/2} =
\lambda |\Psi|^{\alpha}.
\end{equation}
Additionally,
\begin{itemize}
\item
If
$\lambda > 0$,
the range of
$\alpha$
is
$1 \leq \alpha < 4$.
\item
If
$\lambda < 0$,
the range of
$\alpha$
is
$1 \leq \alpha \leq 4/3$.
Also, $|\lambda|$ must be sufficiently small, consistent with
(\ref{constraint}) and (\ref{sufsmall}).
\end{itemize}
\end{definition}
We redefine the Hamiltonian
considered here as
\begin{equation*}
\hat H
= -\frac{\hbar^{2}}{2m} \nabla^{2}
+V({\bf x}, t) +
W \ast \rho + \Phi_{\rm qc}(\cdotp, t, \rho),
\end{equation*}
\begin{equation}
\Phi_{\rm qc}(\cdotp, t, \rho) =
\underbrace{\lambda |\Psi|^{\alpha}(\cdotp, t)}_{\Phi_{\rm lda}}
+ \underbrace{\sum_{j=1}^{M} c_{j}\frac{1}{|\cdotp - {\bf
x}_{j}|}}_{\Phi_{\rm c}}
+ \Phi(\cdotp, t,
\rho).
\label{Hamiltonian2}
\end{equation}
The proofs accommodate a finite number of terms in
$\Phi_{\rm lda}$. One term has been chosen for simplicity.
The parameters of $\Phi_{\rm lda}$ satisfy the assumptions of Definition
\ref{Def2.1}. The numerical constants $c_{j}$ are of arbitrary sign,
and the ionic locations ${\bf x}_{j}$ are fixed interior points in
$\Omega$.
$\Phi$ satisfies the hypotheses
specified in (3) above,
and is a nonlocal potential, such as a weighted density approximation.
Convolutions, discussed in Remark 1, represent an important class. The
time integrated part of $\Phi$ is motivated by \cite{MarGross}.
The following theorem is the goal of our analysis.
\begin{theorem}
\label{central}
If the effective potential is redefined by
\begin{equation}
\label{redefined}
V_{\rm e}({\bf x}, t, \rho) =
V({\bf x}, t) +
W \ast \rho + \Phi_{\rm qc}(\cdotp, t, \rho),
\end{equation}
then there is a unique weak solution of (\ref{wsol}).
The solution is in the regularity class
$C(J; H^{1}_{0}) \cap C^{1}(J;H^{-1})$
and satisfies the specified initial condition.
\end{theorem}
The existence part of the proof of Theorem \ref{central}
is carried out in section three (see
Theorems \ref{central1} and \ref{central2}).
The uniqueness is demonstrated in section four.
\subsection{The smoothing}
We begin by defining a standard convolution \cite{LiebLoss}.
\begin{definition}
\label{convolution}
Suppose that
a nonnegative function
$\phi_{1}$
is given,
$\phi_{1} \in C^{\infty}_{0}({\mathbb R}^{3})$,
of integral one. Set
$$
\phi_{\epsilon}({\bf x}) =
\epsilon^{-3}\phi_{1}({\bf x}/\epsilon), \; {\bf x} \in {\mathbb
R}^{3},
$$
and, for
$f \in L^{p}(\Omega), 1 \leq p < \infty$,
$$
f_{\epsilon} = \phi_{\epsilon} \ast f.
$$
\end{definition}
We recall \cite{LiebLoss} that
$\lim_{\epsilon \rightarrow 0}f_{\epsilon} = f$
in
$L^{p}$
and
$\|f_{\epsilon}\|_{L^{p}} \leq \|f\|_{L^{p}}, \; \forall \epsilon > 0$.
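In one dimension, the construction of Definition \ref{convolution} can be realized numerically as follows (an illustrative sketch of our own; the grid, the particular bump function, and the discrete normalization are assumptions of the sketch, not part of the analysis).

```python
import numpy as np

# Illustrative 1D analogue of the mollifier: the standard bump
# phi_1(x) = c * exp(-1/(1 - x^2)) on |x| < 1, rescaled to phi_eps and
# normalized to unit integral on the grid; then f_eps = phi_eps * f.

def bump(x):
    """Unnormalized C^infty bump supported on (-1, 1)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def mollify(f, dx, eps):
    """Discrete f_eps = phi_eps * f for samples f with spacing dx."""
    m = int(np.ceil(eps / dx))
    s = np.arange(-m, m + 1) * dx          # symmetric stencil, odd length
    phi = bump(s / eps) / eps
    phi /= phi.sum() * dx                  # unit integral on the grid
    return np.convolve(f, phi, mode="same") * dx
```

One may check numerically the discrete analogues of the quoted properties: the $L^{1}$ norm does not increase, and $f_{\epsilon} \to f$ as $\epsilon \to 0$.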
\begin{definition}
\label{smoothpotdef}
We denote by $\Phi_{\epsilon}$ a smoothed replacement of $\Phi_{\mbox{\rm
qc}}$ as follows.
\begin{enumerate}
\item
$\Phi_{\mbox{\rm lda}} \mapsto \phi_{\epsilon} \ast
\Phi_{\mbox{\rm lda}}$.
\item
$\Phi_{\mbox{\rm c}} \mapsto \phi_{\epsilon} \ast
\Phi_{\mbox{\rm c}}$.
\item
Time-history terms are not smoothed.
\end{enumerate}
The effective potential for the approximate problem is given by:
\begin{equation}
\label{smootheff}
V_{\rm e}({\bf x}, t, \rho_{\epsilon}) =
V({\bf x}, t) +
W \ast \rho_{\epsilon} + \Phi_{\epsilon}({\bf x}, t, \rho_{\epsilon}).
\end{equation}
\end{definition}
\subsection{Existence and uniqueness for the smoothed system}
As mentioned in the introduction, we will show that the smoothed problem
has a unique weak solution on $[0, T]$ for each fixed $\epsilon > 0$.
We first state the result.
\begin{proposition}
\label{2.1}
If
$\Phi_{\rm qc}$
is replaced by its smoothing
$\Phi_{\epsilon}$, as specified in Definition \ref{smoothpotdef},
then the hypotheses of section \ref{hyps} hold, as applied to
$\Phi_{\epsilon}$.
In particular, Theorem \ref{EU} is applicable.
With $V_{\rm e}$ defined by (\ref{smootheff}),
there exists a unique weak solution
$\Psi_{\epsilon}$,
as specified in Definition \ref{weaksolution},
of the corresponding system:
\begin{equation}
i \hbar\langle \frac{\partial\Psi_{\epsilon}(t)}{\partial t},
\zeta \rangle =
\int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi_{\epsilon}({\bf x}, t)\cdotp \nabla { \zeta}({\bf x})
+ V_{\rm e}({\bf x},t,\rho_{\epsilon})
\Psi_{\epsilon}({\bf x},t) { \zeta}({\bf x})\; d{\bf x}.
\end{equation}
\end{proposition}
\begin{proof}
We observe that the time-history term, if present, is assumed to satisfy
the assumptions of section \ref{hyps}. This includes
(\ref{constraint}) and (\ref{sufsmall}),
which are required to hold
in the aggregate,
inclusive of all nonpositive terms for the potential $\Phi_{\epsilon}$.
The Coulomb potential does not depend on $t$ or
$\rho$; although the unsmoothed potential fails to be in $W^{1,3}$,
its smoothing is in this space.
Since individual terms of
$\phi_{\epsilon} \ast \Phi_{\rm c}$ may be negatively signed, we
estimate the collective potential. We show that this potential satisfies
(\ref{constraint}) and
(\ref{sufsmall}), with $C_{1}$ preselected to be arbitrarily small.
Initially, we estimate, for $\eta> 0$ arbitrary,
\begin{equation}
\label{Coulomb1}
\|(\phi_{\epsilon} \ast \Phi_{\rm c}) |\Psi|^{2} \|_{L^{1}} \leq
(1/2)[\eta^{2} \|(\phi_{\epsilon} \ast \Phi_{\rm c}) \Psi\|_{L^{2}}^{2}
+ \eta^{-2}
\|\Psi\|_{L^{2}}^{2}].
\end{equation}
By the H\"{o}lder inequality, with conjugate indices $p=3,
p^{\prime} = 3/2$, we have
\begin{equation}
\label{Coulomb2}
\|(\phi_{\epsilon} \ast \Phi_{\rm c}) \Psi\|_{L^{2}}^{2} \leq
[\|\phi_{\epsilon} \ast \Phi_{\rm c}\|_{L^{3}}
\|\Psi\|_{L^{6}}]^{2} \leq
[\|\phi_{1} \|_{L^{3}} \|\Phi_{\rm c}\|_{L^{1}}
\|\Psi\|_{L^{6}}]^{2}.
\end{equation}
By the equivalence of norms on $H^{1}_{0}$, and by Sobolev's inequality,
we may select $\eta$ so that (\ref{sufsmall}) holds for any preselected
$C_{1}$.
This verifies the final requirement for the Coulomb potential.
For the smoothing of $\Phi_{\rm lda}$,
we state the three properties required to be verified.
\begin{enumerate}
\item
$ \Phi_{\epsilon}$
maps sets bounded in
$H^{1}_{0}$
into sets
bounded in
$W^{1,3}$.
\item
The Lipschitz property (\ref{ecfollowsH}) holds.
\item
If $\lambda < 0$,
$\|\phi_{\epsilon} \ast \Phi_{\rm lda} (\rho) |\Psi|^{2} \|_{L^{1}} \leq
C_{1} \|\nabla \Psi \|_{L^{2}}^{2}$,
where $C_{1}$ does not depend on $t$ and satisfies (\ref{sufsmall}).
This is a case where $C_{2} = 0$.
\end{enumerate}
Before verifying properties (1) and (2), we note that there is no
restriction on the size of
$|\lambda|$,
and the range of
$\alpha$
is
$1 \leq \alpha < 4$,
whatever the sign of
$\lambda$.
Property (1) is immediate from the inequalities,
$$
\|\phi_{\epsilon} \ast \Phi_{\rm lda}(\rho)\|_{L^{3}}
\leq |\lambda| \; \|\phi_{\epsilon}\|_{L^{3}} \||\Psi|^{\alpha}\|_{L^{1}},
\;\;
\|\nabla \phi_{\epsilon} \ast \Phi_{\rm lda}(\rho)\|_{L^{3}}
\leq |\lambda|\; \|\nabla \phi_{\epsilon}\|_{L^{3}} \||\Psi|^{\alpha}\|_{L^{1}},
$$
which follow from Young's inequality, applied to the convolution.
Indeed, recall that
$\alpha < 4$,
so
that the Sobolev inequality may be applied.
For the verification of property (2), we begin with the gradient term,
and specifically with the product rule
as applied to the definition of
$\phi_{\epsilon} \ast \Phi_{\rm lda}/|\lambda|$:
$$
\|\nabla [(\phi_{\epsilon}\ast |\Psi_{1}|^{\alpha}-
\phi_{\epsilon}\ast |\Psi_{2}|^{\alpha}) \psi]\|_{L^{2}}
=
$$
\begin{equation}
\label{te}
\|\nabla \phi_{\epsilon}\ast (|\Psi_{1}|^{\alpha}-
|\Psi_{2}|^{\alpha}) \psi\ +
\phi_{\epsilon}\ast (|\Psi_{1}|^{\alpha}-
|\Psi_{2}|^{\alpha}) \nabla \psi
\|_{L^{2}}.
\end{equation}
We have used the differentiation property of the convolution.
When the triangle inequality is employed, the second term is the more
delicate to estimate since
$\nabla \psi \in L^{2}$
(only).
Thus, by use of the
Schwarz inequality and Young's inequality, we must estimate
$
\||\Psi_{1}|^{\alpha}-
|\Psi_{2}|^{\alpha} \|_{L^{1}}.
$
The case
$\alpha = 1$
is immediate.
We prepare for the cases
$1 < \alpha < 4$
by citing the following useful numerical inequality
\cite{LS}:
\begin{equation}
\label{leach}
\left(\frac{y^{r} - z^{r}}{y^{s} - z^{s}} \frac{s}{r} \right)
^{\frac{1}{r-s}} \leq \max(y,z),
\; y \geq 0, z \geq 0, y \not=z, r>0, s>0, s \not=r.
\end{equation}
We apply (\ref{leach}) with the identifications
$$
r = \alpha, s = 1, y = |\Psi_{1}|, z = |\Psi_{2}|,
$$
to obtain the pointwise estimate, which holds almost
everywhere in $\Omega$,
\begin{equation}
\label{estPhi}
|\;|\Psi_{1}|^{\alpha}-|\Psi_{2}|^{\alpha}|
\leq \alpha (\max(|\Psi_{1}|, |\Psi_{2}|))^{\alpha - 1}
\;|\;|\Psi_{1}| - |\Psi_{2}|\;|.
\end{equation}
Although we will require inequality (\ref{estPhi}) later in the article,
it is more convenient here to use the less sharp inequality, derived from
(\ref{estPhi}):
$$
|\;|\Psi_{1}|^{\alpha}-|\Psi_{2}|^{\alpha}|
\leq \alpha (1 + |\Psi_{1}|+ |\Psi_{2}|)^{\alpha}
\;|\;|\Psi_{1}| - |\Psi_{2}|\;|.
$$
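Both bounds are elementary (the first follows from the mean value theorem); as a numerical sanity check, which is ours and not part of the proof, one may verify them on random samples.

```python
import numpy as np

# Numerical sanity check (not part of the proof) of the pointwise bounds
#   |a^alpha - b^alpha| <= alpha * max(a, b)^(alpha - 1) * |a - b|
#   |a^alpha - b^alpha| <= alpha * (1 + a + b)^alpha     * |a - b|
# for a, b >= 0 and alpha >= 1.

def bounds_hold(alpha, a, b, tol=1e-9):
    lhs = np.abs(a**alpha - b**alpha)
    sharp = alpha * np.maximum(a, b) ** (alpha - 1.0) * np.abs(a - b)
    relaxed = alpha * (1.0 + a + b) ** alpha * np.abs(a - b)
    return bool(np.all(lhs <= sharp + tol) and np.all(lhs <= relaxed + tol))
```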
We use a technique motivated by \cite{Caz}. If
$r = \alpha + 2$,
and
$r^{\prime}$
is conjugate to
$r$,
if
$p = r/r^{\prime}$,
and
$p^{\prime}$
is conjugate to
$p$,
then
\begin{equation}
\label{successiveindices}
\alpha r^{\prime} p^{\prime} = r, \; r^{\prime} p = r,
\end{equation}
and an application of H\"{o}lder's inequality gives
$$
\|\;|\Psi_{1}|^{\alpha}-|\Psi_{2}|^{\alpha}\|_{L^{r^{\prime}}}
\leq \alpha \|1 + |\Psi_{1}|+ |\Psi_{2}|\|_{L^{r}}^{\alpha}
\|\;|\Psi_{1}| - |\Psi_{2}|\;\|_{L^{r}}
\leq C\|\Psi_{1} - \Psi_{2}\|_{L^{r}}.
$$
An application of Sobolev's inequality shows that the right-hand side of this
inequality
is dominated by a locally bounded constant times
$\|\Psi_{1} - \Psi_{2}\|_{H^{1}}$.
Since the
$L^{1}$
norm is dominated by a constant times
the
$L^{r^{\prime}}$
norm, the estimation of the second term
arising from (\ref{te}) is completed. The first term
also reduces to the estimation of
$
\||\Psi_{1}|^{\alpha}-
|\Psi_{2}|^{\alpha} \|_{L^{1}},
$
as does the non-gradient term.
Thus, the proof of property
(2) is completed.
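For the reader's convenience, the index identities (\ref{successiveindices}) may be verified directly: since
$$
r = \alpha + 2, \;\;
r^{\prime} = \frac{\alpha+2}{\alpha+1}, \;\;
p = \frac{r}{r^{\prime}} = \alpha + 1, \;\;
p^{\prime} = \frac{\alpha+1}{\alpha},
$$
it follows that
$$
\alpha r^{\prime} p^{\prime} =
\alpha \cdot \frac{\alpha+2}{\alpha+1} \cdot \frac{\alpha+1}{\alpha}
= \alpha + 2 = r,
\;\;
r^{\prime} p = \frac{\alpha+2}{\alpha+1} \, (\alpha+1) = \alpha + 2 = r.
$$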
For property (3), which corresponds to $\lambda < 0$ and $1
\leq \alpha \leq 4/3$,
we consider the following estimate
via two applications of H\"{o}lder's inequality:
\begin{equation}
\label{proplamb}
|\lambda|\left| \int_{\Omega} |\Psi_{\epsilon}|^{\alpha} |\Psi_{\epsilon}|^{2}
\; d{\bf x} \right|
\leq |\lambda| \; |\Omega|^{2/3 - \alpha/2}
\|\Psi_{\epsilon}\|_{L^{2}}^{\alpha} \; \|\Psi_{\epsilon} \|_{L^{6}}^{2}.
\end{equation}
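In detail, restating (\ref{proplamb}) with the exponents written out, the two applications of H\"{o}lder's inequality are
$$
\int_{\Omega} |\Psi_{\epsilon}|^{\alpha} |\Psi_{\epsilon}|^{2} \; d{\bf x}
\leq \| |\Psi_{\epsilon}|^{\alpha}\|_{L^{3/2}} \,
\| |\Psi_{\epsilon}|^{2}\|_{L^{3}}
= \|\Psi_{\epsilon}\|_{L^{3\alpha/2}}^{\alpha} \,
\|\Psi_{\epsilon}\|_{L^{6}}^{2},
$$
followed, since $3\alpha/2 \leq 2$ when $\alpha \leq 4/3$, by
$$
\|\Psi_{\epsilon}\|_{L^{3\alpha/2}} \leq
|\Omega|^{\frac{2}{3\alpha} - \frac{1}{2}} \, \|\Psi_{\epsilon}\|_{L^{2}},
$$
whose $\alpha$-th power yields the factor $|\Omega|^{2/3 - \alpha/2}$.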
Since the $L^{2}$ norm of $\Psi = \Psi_{\epsilon}$ is specified in
(\ref{constraint}),
$\lambda$ can be chosen to
satisfy (\ref{sufsmall}) by use of the Sobolev embedding theorem.
It follows that a unique weak solution
$\Psi_{\epsilon}$
exists for the smoothed system as formulated.
\end{proof}
\section{Existence}
\setcounter{remark}{1}
The results of this section are derived for an arbitrary time interval
$[0,T]$. They are directed toward the existence statement in Theorem
\ref{central}.
The compactness techniques are motivated by \cite{Caz}.
\subsection{`A priori' bounds for the smoothed solutions}
\label{cofe}
We begin by quoting a result proved in \cite{J1}, now applied to the
family of solutions
$\Psi_{\epsilon}$.
\begin{lemma}
\label{lemma3.1}
If the functional
${\mathcal E}(t)$
is defined
for
$0 < t \leq T$
by,
\begin{equation}
\label{Eoft}
{\mathcal E}(t) =
\int_{\Omega}\left[\frac{{\hbar}^{2}}{4m}|\nabla \Psi_{\epsilon}|^{2}
+
\left(\frac{1}{4}(W \ast |\Psi_{\epsilon}|^{2})+ \frac{1}{2}
(V+\Phi_{\epsilon}(\cdotp, t,\rho_{\epsilon}))\right)
|\Psi_{\epsilon}|^{2}\right]d{\bf x},
\end{equation}
then the following identity holds:
\begin{equation}
{\mathcal E}(t)={\mathcal E}(0)
+
\frac{1}{2}\int_{0}^{t}\int_{\Omega}[(\partial V/\partial s)({\bf x},s)
+ \phi({\bf x}, s)]
|\Psi_{\epsilon}|^{2}\;d{\bf x}ds,
\label{consener}
\end{equation}
where
${\mathcal E}(0)$
is given by
$$
\int_{\Omega}\left[\frac{{\hbar}^{2}}{4m}|\nabla
\Psi_{0}|^{2}+\left(\frac{1}{4}
(W\ast|\Psi_{0}|^{2})+\frac{1}{2}
(V(\cdotp,0) + \Phi_{\epsilon}(\cdotp, 0, \rho_{0})
\right)|\Psi_{0}|^{2}\right]
\;d{\bf x}.
$$
\end{lemma}
\begin{proposition}
\label{3.1}
The kinetic term is bounded above by a natural splitting.
For each fixed $t$:
\begin{equation*}
\frac{{\hbar}^{2}}{4m}\int_{\Omega} |\nabla
\Psi_{\epsilon}|^{2}\;d{\bf x} \leq {\mathcal F}_{\epsilon}(t) +
{\mathcal G}_{\epsilon}(t).
\end{equation*}
Here, ${\mathcal F}_{\epsilon}(t)$ is a quantity which can be bounded
above, independently of $t$ and $\epsilon$, in a manner depending only on
the data of the problem. It is given explicitly by
\begin{equation*}
{\mathcal F}_{\epsilon}(t) = {\mathcal E}(0) + \frac{1}{2}
\int_{0}^{t} \int_{\Omega}
[(\partial V/\partial s)({\bf x}, s) + \phi({\bf x},
s)]|\Psi_{\epsilon}|^{2} d{\bf x} ds - \frac{1}{2} \int_{\Omega}
V({\bf x}, t) |\Psi_{\epsilon}|^{2}d{\bf x}.
\end{equation*}
Moreover, ${\mathcal G}_{\epsilon}(t)$ can be estimated as the sum of two
terms: the first can be absorbed into the kinetic term, while the second
is independent of $\epsilon$ and $t$. ${\mathcal G}_{\epsilon}(t)$
is given explicitly by
\begin{equation*}
{\mathcal G}_{\epsilon}(t) =
- \frac{1}{2}\int_{\Omega}
\Phi_{\epsilon}(\rho_{\epsilon}) |\Psi_{\epsilon}|^{2}d{\bf x}.
\end{equation*}
\end{proposition}
\begin{proof}
\begin{itemize}
\item
The estimation of ${\mathcal F}_{\epsilon}(t)$
\end{itemize}
We notice that $V, \partial V/ \partial t, \phi$ are bounded on the finite
measure space-time domain $\Omega \times [0, T]$, so that the estimation
of ${\mathcal F}_
{\epsilon}(t)$ reduces to the analysis of the smoothed term
in ${\mathcal E}(0)$ given by
\begin{equation*}
\int_{\Omega} \Phi_{\epsilon}(\cdotp, 0, \rho_{0})|\Psi_{0}|^{2} \; d {\bf x}.
\end{equation*}
Since the time-history, if present, is not smoothed, and acts boundedly, it
suffices to examine the Coulomb and LDA potentials.
\begin{itemize}
\item
The Coulomb term.
\end{itemize}
By the Schwarz inequality and Young's inequality, we estimate
\begin{equation*}
\|(\phi_{\epsilon} \ast \Phi_{\rm c}) |\Psi_{0}|^{2}\|_{L^{1}} \leq
\|\phi_{1} \|_{L^{2}} \|\Phi_{\rm c}\|_{L^{1}}
\|\Psi_{0}\|_{L^{4}}^{2}.
\end{equation*}
An application of Sobolev's inequality concludes the argument.
\begin{itemize}
\item
The LDA term.
\end{itemize}
This is a direct estimate:
\begin{equation*}
\|\phi_{\epsilon} \ast \Phi_{\rm lda}(\rho_{0})|\Psi_{0}|^{2}\|_{L^{1}}
\leq \|\phi_{\epsilon} \ast |\Psi_{0}|^{\alpha} \|_{L^{3/2}}
\|\Psi_{0} \|_{L^{6}}^{2} \leq \|\phi_{1} \|_{L^{3/2}}
\||\Psi_{0}|^{\alpha} \|_{L^{1}}
\|\Psi_{0} \|_{L^{6}}^{2}.
\end{equation*}
Since $\alpha < 4$, the estimate follows as previously from the embedding
theorems.
\begin{itemize}
\item
The estimation of ${\mathcal G}_{\epsilon}(t)$
\end{itemize}
This represents the more delicate part of the proof.
\begin{itemize}
\item
The time-history term.
\end{itemize}
If the term,
$$
\Phi({\bf x}, t, \rho)= \Phi({\bf x}, 0, \rho) +
\int_{0}^{t} \phi({\bf x}, s, \rho) \; ds,
$$
is included, and the leading term fails to be a positive functional, then
we have required that (\ref{constraint}, \ref{sufsmall}) hold, here as
applied to $\Psi_{\epsilon}$. This is consistent with the structure of
${\mathcal G}_{\epsilon}$ as stated. The integral term has been discussed
in the previous part and is bounded. Note that (\ref{sufsmall}) is
required to hold for the {\it aggregate} potential, including those
components to be discussed now. We shall mention this at the appropriate
time.
\begin{itemize}
\item
The Coulomb term.
\end{itemize}
We use the core of the argument as developed in the proof of Proposition
\ref{2.1}. Indeed, for any preselected $C_{1}$, inequality
(\ref{sufsmall}) can be satisfied. This follows directly from
(\ref{Coulomb1}) and (\ref{Coulomb2}) with a proper choice of $\eta$.
\begin{itemize}
\item
The LDA term.
\end{itemize}
This pertains to the case $\lambda < 0$, if this term is included.
We have already derived the relevant inequality, namely (\ref{proplamb}),
near the conclusion of the proof of Proposition \ref{2.1}.
This inequality is required here also.
In order to satisfy (\ref{sufsmall}) in the aggregate sense, we reason as
follows. We take the time-history term, if present, as given. We then
choose $\lambda$ so that the sum of the LDA potential and the time-history
potential continues to satisfy this inequality. The argument extends to a
finite number of such terms.
Finally, we have shown that the
Coulomb potential can be included so as to maintain this inequality.
This concludes the proof.
\end{proof}
The following corollary is immediate from the equivalence of norms on
$H^{1}_{0}$.
\begin{corollary}
\label{Hbound}
There is a bound
$r_{0}$
in the norm of
$C(J; H^{1}_{0})$
for the smoothed
solutions.
\end{corollary}
\begin{proposition}
\label{3.2}
There is a uniform bound, in
$t \in J$
and
$\epsilon > 0$,
for the norms,
$$
\|(\Psi_{\epsilon})_{t}\|_{H^{-1}}.
$$
\end{proposition}
\begin{proof}
One begins by using the weak form of the equation as discussed in
Proposition \ref{2.1},
and isolating the time
derivative acting on an arbitrary test function $\zeta, \|\zeta
\|_{H^{1}_{0}} \leq 1$.
The gradient term
is bounded
by Corollary \ref{Hbound}, while the bound for the
external potential term follows directly from the hypothesis on $V$.
For the Hartree term, we estimate,
by H\"{o}lder's inequality and Young's inequality, for each $t \in J$,
$$
\left|\int_{\Omega} W \ast |\Psi_{\epsilon}|^{2} \; \Psi_{\epsilon} \zeta
\right|
\leq \|W\|_{L^{1}} \;
\|\Psi_{\epsilon}\|_{L^{3}}^{2} \|\Psi_{\epsilon}\|_{L^{6}}
\; \|\zeta\|_{L^{6}}.
$$
Sobolev's inequality, combined with Proposition \ref{3.1}, gives the bound
for this term.
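The Hartree estimate just used (three-factor H\"{o}lder with exponents $3/2, 6, 6$, then Young's inequality for the convolution) holds exactly in a discrete periodic setting, which permits a quick numerical check. The following Python fragment is an illustration under arbitrary choices of grid, kernel $W$, and functions; it is not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 256, 1.0 / 256            # periodic grid on [0, 1)

def norm(f, p):
    """Discrete L^p norm with cell weight h."""
    return (h * np.sum(np.abs(f) ** p)) ** (1.0 / p)

def conv(f, g):
    """Periodic convolution (f*g)[i] = h * sum_j f[j] g[i-j]."""
    return h * np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

W = rng.random(n)                # kernel
psi = rng.standard_normal(n)     # stand-in for Psi_eps at a fixed t
zeta = rng.standard_normal(n)    # test function

rho = np.abs(psi) ** 2
lhs = abs(h * np.sum(conv(W, rho) * psi * zeta))
# Hoelder (exponents 3/2, 6, 6) followed by Young's inequality:
rhs = norm(W, 1) * norm(psi, 3) ** 2 * norm(psi, 6) * norm(zeta, 6)
assert lhs <= rhs
```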
We now consider the components of the quantum correction
potential.
\begin{itemize}
\item
The LDA term.
\end{itemize}
For the
smoothed LDA term, the sign of $\lambda$ is not relevant
and we consider
$1 \leq \alpha < 4$.
We estimate by H\"{o}lder's inequality,
for
$r=\alpha +2$
and
$r^{\prime}$
conjugate to
$r$, for each $t \in J$,
$$
\left|\int_{\Omega} \phi_{\epsilon} \ast |\Psi_{\epsilon}|^{\alpha}
\; \Psi_{\epsilon} \zeta
\right| \leq
\|\phi_{\epsilon}\ast|\Psi_{\epsilon}|^{\alpha}
\; \Psi_{\epsilon}\|_{L^{r^{\prime}}}
\|\zeta\|_{L^{r}}.
$$
The first factor on the rhs requires additional
explanation. We have, by another application of H\"{o}lder's inequality,
with
$p = r/r^{\prime}$
and
$p^{\prime}$
conjugate to
$p$
(note that
$r/\alpha = r^{\prime} p^{\prime}$),
$$
\|\phi_{\epsilon}\ast|\Psi_{\epsilon}|^{\alpha}
\; \Psi_{\epsilon}\|_{L^{r^{\prime}}}
\leq
\|\phi_{\epsilon} \ast |\Psi_{\epsilon}|^{\alpha}\|_{L^{r/\alpha}}
\|\Psi_{\epsilon}\|_{L^{r}}
\leq
\||\Psi_{\epsilon}|^{\alpha}\|_{L^{r/\alpha}}
\|\Psi_{\epsilon}\|_{L^{r}}
$$
\begin{equation}
\label{sucHolder}
\leq
\|\Psi_{\epsilon}\|_{L^{r}}^{\alpha + 1}.
\end{equation}
We conclude
that the LDA term is bounded in the dual norm, as claimed.
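The successive H\"{o}lder estimate (\ref{sucHolder}) can likewise be tested on a periodic grid, where each step (H\"{o}lder with $p = r/r^{\prime}$, then Young's inequality with a unit-mass kernel) is exact. The sketch below uses the illustrative choice $\alpha = 2$, so that $r = 4$ and $r^{\prime} = 4/3$; all concrete data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 256, 1.0 / 256
alpha = 2.0                      # any 1 <= alpha < 4
r = alpha + 2.0                  # r = alpha + 2
rp = r / (r - 1.0)               # conjugate exponent r'

def norm(f, p):
    return (h * np.sum(np.abs(f) ** p)) ** (1.0 / p)

def conv(f, g):
    return h * np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# mollifier stand-in with unit mass: h * phi.sum() == 1
phi = rng.random(n)
phi /= h * phi.sum()

psi = rng.standard_normal(n)

lhs = norm(conv(phi, np.abs(psi) ** alpha) * psi, rp)
rhs = norm(psi, r) ** (alpha + 1.0)
assert lhs <= rhs + 1e-12
```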
\begin{itemize}
\item
The Coulomb term.
\end{itemize}
By the Schwarz inequality and Young's inequality, uniformly in $t$,
\begin{equation*}
\left|\int_{\Omega} \phi_{\epsilon} \ast \Phi_{\rm c}
\; \Psi_{\epsilon} \zeta
\right| \leq
\|\phi_{1} \|_{L^{2}} \|\Phi_{\rm c}\|_{L^{1}}
\|\Psi_{\epsilon}\|_{L^{4}} \;
\|\zeta\|_{L^{4}},
\end{equation*}
and the estimate is completed by Sobolev's inequality.
\begin{itemize}
\item
Time-history term.
\end{itemize}
By Proposition \ref{3.1}, the smoothed solutions are bounded in
$H^{1}_{0}$, uniformly in $t$, so that, by the first hypothesis in section
\ref{hyps}, the functions $\Phi(\cdotp, t, \Psi_{\epsilon})$ have a uniform
$H^{1}_{0}$ bound. It follows as in previous estimates that the term,
\begin{equation*}
\int_{\Omega} \Phi(\cdotp, 0, \Psi_{\epsilon}) \; \Psi_{\epsilon} \zeta
\; d{\bf x},
\end{equation*}
defines a functional which
is bounded in the dual norm.
\end{proof}
The following corollary is an immediate consequence of
Corollary \ref{Hbound} and Proposition
\ref{3.2}.
\begin{corollary}
Any sequence taken from the set
$\{\Psi_{\epsilon}\}$
of
solutions of the smoothed systems
is bounded in the norms of
$C(J; H^{1}_{0})$
and
$C^{1}(J; H^{-1})$.
\end{corollary}
\subsection{Convergent subsequences}
\label{Convsub}
We begin by stating the two basic lemmas
derived from the propositions in
Appendix B.
These are due, in the form stated there, to the authors of
\cite{Caz} and \cite{Simon}, resp.
\begin{lemma}
\label{l3.2}
There is an element
$\Psi \in L^{\infty}(J; H^{1}_{0}(\Omega)) \cap W^{1, \infty}(J;
H^{-1}(\Omega))$,
and a sequence
$\Psi_{\epsilon_{n}}$
satisfying the weak
convergence property,
\begin{equation}
\label{weakh1allt}
\Psi_{\epsilon_{n}}(t) \rightharpoonup \Psi(t), \; \mbox{in} \;
H^{1}_{0}, \; \forall t \in J.
\end{equation}
\end{lemma}
\begin{proof}
The preceding corollary, coupled with Proposition \ref{B1}, part (1),
furnishes the necessary argument.
\end{proof}
\begin{lemma}
\label{l3.3}
Suppose $r < 6$ is fixed.
A subsequence of the sequence in
(\ref{weakh1allt})
may be assumed to converge in $C(J;
L^{r}(\Omega))$.
\end{lemma}
\begin{proof}
The equicontinuity of the sequence from $J$ to $H^{1}_{0}$
is derived from the fundamental theorem of calculus
applied on an arbitrary subinterval, together with the boundedness
estimates in the dual space.
The compact embedding of $H^{1}_{0} \mapsto L^{r}$, coupled with
Proposition \ref{B2}, furnishes the necessary remaining details.
We have identified $Y$ with $L^{r}$ here.
\end{proof}
We divide the verification of Theorem \ref{central} into two parts.
\begin{theorem}
\label{central1}
The function
$\Psi$
of Lemma \ref{l3.2}
satisfies the TDDFT system discussed in Theorem \ref{central} with the
quantum corrections.
\end{theorem}
\begin{proof}
By Lemma \ref{l3.3}, after relabelling if necessary, it follows that
\begin{equation}
\label{strongrallt}
\Psi_{\epsilon_{n}}(t) \rightarrow \Psi(t), \; \mbox{in} \;
L^{r}, \; \mbox{uniformly} \; \forall t \in J,
\end{equation}
for an arbitrary $r< 6$ selected in advance.
It follows that
$\Psi \in C(J; L^{r})$.
We now examine the equation satisfied by
$\Psi$.
By weak convergence (Lemma \ref{l3.2}),
\begin{equation}
\label{lim1}
\lim_{n \rightarrow \infty}
\int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi_{\epsilon_{n}}({\bf x}, t)\cdotp \nabla \zeta({\bf x})
\; d{\bf x}
= \int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, t)\cdotp \nabla \zeta({\bf x})
\; d{\bf x}.
\end{equation}
We now consider each of the three cases required to verify that
\begin{equation}
\label{lim2}
\lim_{n \rightarrow \infty}
\int_{\Omega}
V_{\rm e}({\bf x},t,\rho_{\epsilon_{n}})
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\int_{\Omega}
V_{\rm e}({\bf x},t,\rho)
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x}.
\end{equation}
By the boundedness of the external potential, and the strong convergence of
the sequence, we conclude immediately that, for each $t$,
\begin{equation}
\label{limpot1}
\lim_{n \rightarrow \infty}
\int_{\Omega}
V({\bf x},t)
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\int_{\Omega}
V({\bf x},t)
\Psi({\bf x},t) { \zeta}({\bf x})\; d{\bf x}.
\end{equation}
For the Hartree potential, we will use the triangle inequality. Thus, we
begin by writing,
\begin{eqnarray*}
\int_{\Omega}
W \ast \rho_{\epsilon_{n}}
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} &-&
\int_{\Omega}
W \ast \rho \;
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\\
\int_{\Omega}
W \ast \rho_{\epsilon_{n}}
[\Psi_{\epsilon_{n}}({\bf x},t)-\Psi({\bf x}, t)] \zeta({\bf x})\; d{\bf x}
&+& \int_{\Omega}
W \ast [\rho_{\epsilon_{n}} - \rho]
\Psi({\bf x}, t) \zeta({\bf x})\; d{\bf x}.
\end{eqnarray*}
Each of the two rhs terms is estimated by the generalized
H\"{o}lder inequality.
This reduces to estimating the following two triple products of norms:
$$
\|W \ast \rho_{\epsilon_{n}}\|_{L^{2}}
\|\Psi_{\epsilon_{n}}(t)-\Psi(t)\|_{L^{3}} \|\zeta\|_{L^{6}}, \; \;
\|W \ast [\rho_{\epsilon_{n}} - \rho]\|_{L^{2}}
\|\Psi(t)\|_{L^{3}}
\|\zeta\|_{L^{6}}.
$$
For the first triple product, Young's inequality is applied to the convolution
term, followed by $L^{2}$ boundedness;
$L^{3}$ convergence is applied to the second term of the first product;
and Sobolev's inequality is applied to the third term.
For the second triple product, the only term requiring explanation is the
convolution term of the product. We estimate as follows.
$$
\|W \ast [\rho_{\epsilon_{n}} - \rho]\|_{L^{2}}
\leq \|W\|_{L^{2}} \|(|\Psi_{\epsilon_{n}}| - |\Psi|)
(|\Psi_{\epsilon_{n}}| + |\Psi|)\|_{L^{1}},
$$
which is estimated by the Schwarz inequality. An application of $L^{2}$
boundedness and $L^{2}$ convergence yields the final result:
\begin{equation}
\label{limpot2}
\lim_{n \rightarrow \infty}
\int_{\Omega}
W \ast \rho_{\epsilon_{n}}
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\int_{\Omega}
W \ast \rho \;
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x}.
\end{equation}
The potential $\Phi_{\rm qc}$ requires the analysis of the three
components introduced in section \ref{qcsection}.
For the smoothed LDA potential $\phi_{\epsilon} \ast \Phi_{\rm lda}$,
we will use the triangle inequality, and
we write,
\begin{equation*}
\int_{\Omega}
\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}})
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} -
\int_{\Omega}
\Phi_{\rm lda}(\rho)
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\end{equation*}
\begin{equation*}
\int_{\Omega}
\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}})
[\Psi_{\epsilon_{n}}({\bf x},t)-\Psi({\bf x}, t)] \zeta({\bf x})\; d{\bf x}
+
\end{equation*}
\begin{equation*}
\int_{\Omega}
[\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -\Phi_{\rm lda}(\rho)]
\Psi({\bf x}, t) \zeta({\bf x})\; d{\bf x}.
\end{equation*}
We apply the H\"{o}lder inequality to each of the
terms to
obtain two products
of norms:
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}})
[\Psi_{\epsilon_{n}}(t)-\Psi(t)]\|_{L^{r^{\prime}}}
\|\zeta\|_{L^{r}}, \; \;
\|[\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -\Phi_{\rm lda}(\rho)]
\Psi(t)\|_{L^{r^{\prime}}} \| \zeta \|_{L^{r}},
$$
where $r = \alpha + 2$ and $r^{\prime}$ is conjugate to $r$.
We use the method employed in the proof of Proposition
\ref{3.2} (cf.\thinspace (\ref{sucHolder}))
in order
to estimate the $L^{r^{\prime}}$ norms.
For convenience, we suppress the scalar $|\lambda|$; also, $1 \leq \alpha
< 4$.
We have, for the first product,
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}})
[\Psi_{\epsilon_{n}}(t)-\Psi(t)]\|_{L^{r^{\prime}}} \leq
\|\phi_{\epsilon_{n}} \ast |\Psi_{\epsilon_{n}}|^{\alpha}\|_{L^{r/\alpha}}
\|\Psi_{\epsilon_{n}}(t)-\Psi(t)\|_{L^{r}}
\leq
$$
$$
\||\Psi_{\epsilon_{n}}|^{\alpha}\|_{L^{r/\alpha}}
\|\Psi_{\epsilon_{n}}(t)-\Psi(t)\|_{L^{r}}
\leq
\|\Psi_{\epsilon_{n}}\|_{L^{r}}^{\alpha}
\|\Psi_{\epsilon_{n}}(t)-\Psi(t)\|_{L^{r}},
$$
which converges to zero as remarked at the beginning of the proof
(see (\ref{strongrallt})).
Thus, the first product of norms is convergent to zero. For the second
product, we begin as before, to obtain,
$$
\|[\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -\Phi_{\rm lda}(\rho)]
\Psi(t)\|_{L^{r^{\prime}}} \leq
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -\Phi_{\rm lda}(\rho)\|
_{L^{r/\alpha}} \|\Psi(t)\|_{L^{r}}.
$$
To estimate this, we apply the triangle inequality to the first factor:
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -\Phi_{\rm lda}(\rho)\|
_{L^{r/\alpha}} \leq
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -
\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho)\|
_{L^{r/\alpha}} +
$$
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho) - \Phi_{\rm lda}(\rho)\|
_{L^{r/\alpha}}.
$$
The first term on the rhs is bounded, via the smoothing property, by
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho_{\epsilon_{n}}) -
\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}(\rho)\|
_{L^{r/\alpha}} \leq
\||\Psi_{\epsilon_{n}}|^{\alpha} -
|\Psi|^{\alpha}\|
_{L^{r/\alpha}}.
$$
The estimation of this expression
requires inequality (\ref{estPhi}) with the identifications
$\Psi_{1} \mapsto \Psi_{\epsilon_{n}}, \Psi_{2} \mapsto \Psi$.
When the power $r/\alpha$ is applied to the inequality,
and integration over $\Omega$ is carried out,
one can apply H\"{o}lder's inequality with $p = \alpha$ and $p^{\prime} =
\alpha/(\alpha - 1)$ to conclude convergence. Convergence for the second
term is a consequence of the property of smoothing; since
$|\Psi|^{\alpha} \in L^{r/\alpha}$, its convolution is convergent in norm.
Altogether, we have shown:
\begin{equation}
\label{limpot3}
\lim_{n \rightarrow \infty}
\int_{\Omega}
\phi_{\epsilon_{n}} \ast \Phi_{\rm lda}
(\rho_{\epsilon_{n}})
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\int_{\Omega}
\Phi_{\rm lda}(\rho)
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x}.
\end{equation}
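The smoothing property invoked above, namely that $\phi_{\epsilon} \ast f \rightarrow f$ in norm as $\epsilon \rightarrow 0$ for $f$ in the appropriate $L^{p}$ space, is easy to observe numerically. The following sketch uses a periodic Gaussian bump as a stand-in mollifier and measures the $L^{2}$ error; all concrete choices are illustrative.

```python
import numpy as np

n, h = 1024, 1.0 / 1024
x = np.arange(n) * h
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)  # smooth stand-in

def conv(a, b):
    """Periodic convolution with cell weight h."""
    return h * np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def mollifier(eps):
    """Periodic Gaussian bump of width eps, normalized to unit mass."""
    d = np.minimum(x, 1.0 - x)
    phi = np.exp(-(d / eps) ** 2)
    return phi / (h * phi.sum())

errs = [np.sqrt(h * np.sum((conv(mollifier(eps), f) - f) ** 2))
        for eps in (0.1, 0.05, 0.01)]
assert errs[0] > errs[1] > errs[2]   # L^2 error shrinks as eps -> 0
```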
We now consider the Coulomb term. Again, we write
\begin{eqnarray*}
\int_{\Omega}
\phi_{\epsilon_{n}} \ast \Phi_{\rm c}
\Psi_{\epsilon_{n}}({\bf x},t) \zeta({\bf x})\; d{\bf x} &-&
\int_{\Omega}
\Phi_{\rm c}
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x} =
\\
\int_{\Omega}
\phi_{\epsilon_{n}} \ast \Phi_{\rm c}
[\Psi_{\epsilon_{n}}({\bf x},t)-\Psi({\bf x}, t)] \zeta({\bf x})\; d{\bf x}
&+&
\int_{\Omega}
[\phi_{\epsilon_{n}} \ast \Phi_{\rm c} -\Phi_{\rm c}]
\Psi({\bf x}, t) \zeta({\bf x})\; d{\bf x}.
\end{eqnarray*}
The estimation is now straightforward. The H\"{o}lder inequality yields
the two triple products for the rhs term estimates:
$$
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm c}\|_{L^{2}} \;
\|\Psi_{\epsilon_{n}}(t) - \Psi(t) \|_{L^{3}} \;
\|\zeta\|_{L^{6}}, \;
\|\phi_{\epsilon_{n}} \ast \Phi_{\rm c} - \Phi_{\rm c} \|_{L^{2}} \;
\|\Psi(t)\|_{L^{3}} \; \|\zeta \|_{L^{6}}.
$$
The first term is convergent because of strong convergence; the second,
because of the convergence of the smoothing in $L^{2}$.
The final term to estimate among the quantum correction terms is the
time-history term, if present. Recall that this term is not smoothed.
The term $\Phi(\cdotp, t, \rho)$
is analyzed as follows. We have the algebraic representation,
\begin{equation*}
\int_{\Omega} \Phi(\cdotp, t, \rho_{\epsilon_{n}}) \; \Psi_{\epsilon_{n}} \zeta
\; d{\bf x} -
\int_{\Omega} \Phi(\cdotp, t, \rho) \; \Psi \zeta
\; d{\bf x} =
\end{equation*}
\begin{equation*}
\int_{\Omega}[\Phi(\cdotp, t, \rho_{\epsilon_{n}}) - \Phi(\cdotp, t,
\rho)] \;
\Psi_{\epsilon_{n}} \zeta
\; d{\bf x} \; +
\end{equation*}
\begin{equation*}
\int_{\Omega}\Phi(\cdotp, t, \rho)[\Psi_{\epsilon_{n}} - \Psi] \zeta
\; d{\bf x}.
\end{equation*}
The first term converges to zero because of the
assumed uniform $L^{2}$ continuity of $\Phi$
in its third argument,
while
the second term is governed by the uniform convergence in $L^{r}$.
We now use (\ref{lim1}) and (\ref{lim2}) to conclude that
$$
\lim_{n \rightarrow \infty} \langle \partial \Psi_{\epsilon_{n}}/\partial
t, \zeta \rangle
= \int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, t)\cdotp \nabla \zeta({\bf x})
+ V_{\rm e}({\bf x},t,\rho)
\Psi({\bf x},t) \zeta({\bf x})\; d{\bf x}.
$$
However, we may deduce from Lemma \ref{l3.2} that
\begin{equation}
\label{deducefrom}
\lim_{n \rightarrow \infty}
\langle \partial \Psi_{\epsilon_{n}}/\partial t, \zeta \rangle
=
\langle \partial \Psi/\partial t, \zeta \rangle,
\end{equation}
so that $\Psi$ solves the TDDFT system. The initial condition is a
consequence of (\ref{strongrallt}) in, say, $L^{2}$ for $t=0$.
\end{proof}
It remains to verify the regularity class for $\Psi$.
\begin{theorem}
\label{central2}
The function $\Psi$ of Theorem \ref{central1} satisfies
$$\Psi \in C(J; H^{1}_{0}(\Omega)) \cap C^{1}(J;
H^{-1}(\Omega)).$$
\end{theorem}
\begin{proof}
We begin with the verification that $\Psi \in C(J; H^{1}_{0})$,
and make use of Proposition \ref{B1}, part (2), of appendix B.
In particular, it suffices to show that
\begin{equation*}
\int_{\Omega}\frac{{\hbar}^{2}}{4m}|\nabla \Psi_{\epsilon_{n}}|^{2}
\; d{\bf x}
\rightarrow
\int_{\Omega}\frac{{\hbar}^{2}}{4m}|\nabla \Psi|^{2}\; d{\bf x}, \; n
\rightarrow \infty, \; \mbox{uniformly in} \; t.
\end{equation*}
We use the representations contained in Lemma \ref{lemma3.1} as applied to
$\Psi_{\epsilon_{n}}$. We rewrite them as follows.
\begin{equation}
\label{Eoft2}
{\mathcal E}_{n}(t) =
\int_{\Omega}\left[\frac{{\hbar}^{2}}{4m}|\nabla \Psi_{\epsilon_{n}}|^{2}
+
\left(\frac{1}{4}(W \ast |\Psi_{\epsilon_{n}}|^{2})+ \frac{1}{2}
(V+\Phi_{\epsilon_{n}}(\cdotp, t,\rho_{\epsilon_{n}}))\right)
|\Psi_{\epsilon_{n}}|^{2}\right]d{\bf x},
\end{equation}
\begin{equation}
{\mathcal E}_{n}(t)={\mathcal E}(0)
+
\frac{1}{2}\int_{0}^{t}\int_{\Omega}[(\partial V/\partial s)({\bf x},s)
+ \phi({\bf x}, s)]
|\Psi_{\epsilon_{n}}|^{2}\;d{\bf x}ds.
\label{consener2}
\end{equation}
Note that the expression ${\mathcal E}_{n}(t)$, as defined in
(\ref{Eoft2}), converges uniformly in $t$ to
${\mathcal E}(t)$;
this follows from the boundedness of
$\partial V/ \partial t + \phi$,
combined with the strong convergence.
The approach now is to solve for the gradient term in
(\ref{Eoft2}) and deduce its uniform convergence from that of each of the other
terms. Because of the hypotheses made on the external potential and the
time-history terms,
the terms requiring analysis are the Hartree and
remaining quantum correction terms.
The techniques are similar to those used earlier. For the Hartree
potential, we have
\begin{eqnarray*}
\int_{\Omega}
W \ast \rho_{\epsilon_{n}}(t) \;
\rho_{\epsilon_{n}}({\bf x},t) \; d{\bf x} &-&
\int_{\Omega}
W \ast \rho(t) \;
\rho({\bf x},t) \; d{\bf x} =
\\
\int_{\Omega}
W \ast \rho_{\epsilon_{n}}(t)
[\rho_{\epsilon_{n}}({\bf x},t)-\rho({\bf x}, t)]\; d{\bf x}
&+& \int_{\Omega}
W \ast [\rho_{\epsilon_{n}}(t) - \rho(t)]
\rho({\bf x}, t) \; d{\bf x}.
\end{eqnarray*}
Each of the two rhs terms is estimated by the Schwarz inequality, so that
we must estimate the following two products of norms:
$$
\|W \ast \rho_{\epsilon_{n}}(t) \|_{L^{2}}
\|\rho_{\epsilon_{n}}(t) - \rho(t)\|_{L^{2}}, \;
\|W \ast [\rho_{\epsilon_{n}}(t) - \rho(t)]\|_{L^{2}}
\|\rho(t) \|_{L^{2}}.
$$
For the first product, the first term is estimated by Young's inequality,
to obtain a quantity bounded on $J$.
We estimate the second factor as
$$
\|\rho_{\epsilon_{n}}(t) - \rho(t)\|_{L^{2}} \leq
\||\Psi_{\epsilon_{n}}(t)| - |\Psi(t)| \|_{L^{4}}
\||\Psi_{\epsilon_{n}}(t)| + |\Psi(t)| \|_{L^{4}},
$$
which is convergent to zero as $n \rightarrow \infty$, by
the strong uniform convergence.
For the second product,
an application of Young's inequality and the strong uniform convergence
allows one to conclude uniform convergence to zero as $n
\rightarrow \infty$.
Next, we consider the LDA term.
\begin{eqnarray*}
\int_{\Omega}
\Phi_{\rm lda}(\rho_{\epsilon_{n}}(t)) \rho_{\epsilon_{n}}({\bf x}, t)
\; d{\bf x} &-&
\int_{\Omega}
\Phi_{\rm lda}(\rho (t)) \rho({\bf x}, t)
\; d{\bf x} =
\\
\int_{\Omega}
\Phi_{\rm lda}(\rho_{\epsilon_{n}}(t))
[\rho_{\epsilon_{n}}({\bf x},t)-\rho({\bf x}, t)] \; d{\bf x}
&+& \int_{\Omega}
[\Phi_{\rm lda}(\rho_{\epsilon_{n}}(t)) -\Phi_{\rm lda}(\rho(t))] \rho({\bf x}, t)
\; d{\bf x}.
\end{eqnarray*}
H\"{o}lder's inequality is applied to each of the terms on the rhs, so
that we need to estimate the following norm products:
$$
\||\Psi_{\epsilon_{n}}(t)|^{\alpha} [|\Psi_{\epsilon_{n}}(t)| - |\Psi(t)|] \|_{L^{r^{\prime}}} \;
\||\Psi_{\epsilon_{n}}(t)| + |\Psi(t)|\|_{L^{r}},
$$
$$
\|\;[|\Psi_{\epsilon_{n}}(t)|^{\alpha}- |\Psi(t)|^{\alpha}]|\Psi(t)| \; \|_{L^{r^{\prime}}}
\|\Psi(t)\|_{L^{r}},
$$
where $r = \alpha + 2$ and $r^{\prime}$ is conjugate to $r$.
As has been demonstrated previously, the first product is estimated by
$$
\|\Psi_{\epsilon_{n}}(t)\|_{L^{r}}^{\alpha}\;
\|\Psi_{\epsilon_{n}}(t) - \Psi(t) \|_{L^{r}}
(\|\Psi_{\epsilon_{n}}(t)\|_{L^{r}} +
\|\Psi(t)\|_{L^{r}}),
$$
which converges to zero as $n \rightarrow \infty$. The second product is
estimated, with the help of (\ref{estPhi}) and H\"{o}lder's inequality, as
\begin{equation}
\label{alphaminusone}
\alpha \|(|\Psi_{\epsilon_{n}}(t)| + |\Psi(t)|)^{\alpha - 1} (|\Psi_{\epsilon_{n}}(t)| - |\Psi(t)|)
\|_{L^{r/\alpha}}
\|\Psi(t)\|_{L^{r}}^{2},
\end{equation}
and another application of H\"{o}lder's inequality, with $p=\alpha$ and
$p^{\prime}$ conjugate to $\alpha$, gives the bound,
$$
\alpha \|(|\Psi_{\epsilon_{n}}(t)| + |\Psi(t)|)\|_{L^{r}}^{\alpha - 1}
\; \||\Psi_{\epsilon_{n}}(t)| - |\Psi(t)|\|_{L^{r}}
\|\Psi(t)\|_{L^{r}}^{2},
$$
$$
so that this term also converges to
zero. Finally, the Coulomb term is directly estimated via the strong
convergence; we omit the details.
It follows that $\Psi \in C(J; H^{1}_{0})$.
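The elementary pointwise inequality used repeatedly in the preceding estimates, which we read from (\ref{estPhi}) as $\big| |a|^{\alpha} - |b|^{\alpha} \big| \leq \alpha (|a| + |b|)^{\alpha - 1} \big| |a| - |b| \big|$ for $\alpha \geq 1$, can be spot-checked numerically. The sketch below samples random nonnegative values with the illustrative choice $\alpha = 2.5$.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.5                       # any alpha >= 1
a = rng.random(10_000) * 5.0      # samples standing in for |Psi_1|
b = rng.random(10_000) * 5.0      # samples standing in for |Psi_2|

lhs = np.abs(a ** alpha - b ** alpha)
rhs = alpha * (a + b) ** (alpha - 1.0) * np.abs(a - b)
assert np.all(lhs <= rhs + 1e-9)  # mean-value-theorem bound holds pointwise
```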
In order to conclude
that $\Psi \in C^{1}(J; H^{-1})$, we subtract two copies of the TDDFT
system, one evaluated at $t$, and the other at $s$, and we estimate for an
arbitrary test function $\zeta$. We need to show that this difference
satisfies a zero limit as $t \rightarrow s$, uniformly in
$\|\zeta\|_{H^{1}_{0}} \leq 1$.
The property just established, $\Psi \in C(J; H^{1}_{0})$, implies this
for the gradient and external potential terms. The remaining terms can be
estimated via a very useful analogy: replace the $n \rightarrow \infty$
limit in the estimates for Theorem \ref{central1}
by the $t \rightarrow s$ limit, after constructing parallel algebraic
representations. The
convergence of the corresponding dominating terms holds since
$\Psi \in C(J; H^{1}_{0})$.
This completes the proof.
\end{proof}
\begin{remark}
The combination of Theorem \ref{central1} and Theorem \ref{central2}
gives Theorem \ref{central} as formulated earlier. This is the
first central
result of the article.
\end{remark}
\section{Uniqueness}
\setcounter{remark}{2}
We will establish uniqueness of solutions under the following assumptions.
\begin{assumption}
\label{Green}
There is a bounded linear operator ${\mathcal G}$, the Dirichlet solver,
such that, for every $\phi \in C^{\infty}_{0}(\Omega)$, there is a unique
solution ${\mathcal G} \phi = \psi
\in C^{2}({\bar \Omega})$ to the homogeneous boundary value
problem,
\begin{equation}
\label{BVP}
-\Delta \psi = \phi, \; \psi_{|_{\partial \Omega}} = 0.
\end{equation}
We will refer to this as the Green's operator assumption.
\end{assumption}
\begin{remark}
The Green's operator assumption holds if $\Omega$ is of
class $C^{4}$. This follows from \cite[Theorem 8.13]{GT}, combined with
standard theorems involving embedding into H\"{o}lder spaces \cite{Leoni,
Adams}.
An immediate property of ${\mathcal G}$ is the following:
\begin{equation*}
{\mathcal G} C^{\infty}_{0}(\Omega) \supset C^{\infty}_{0}(\Omega).
\end{equation*}
\end{remark}
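As a concrete, low-dimensional stand-in for the Dirichlet solver ${\mathcal G}$, one may discretize $-\psi^{\prime\prime} = \phi$ on $(0,1)$ with homogeneous boundary values by second-order finite differences. The sketch below is illustrative only (one dimension, a right-hand side chosen so the exact solution is known); it is not an implementation of the assumption.

```python
import numpy as np

# One-dimensional stand-in for G: solve -psi'' = phi, psi(0) = psi(1) = 0.
n = 199                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# tridiagonal Laplacian with homogeneous Dirichlet conditions
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

phi = np.sin(np.pi * x)                 # smooth right-hand side
psi = np.linalg.solve(A, phi)           # psi = G(phi)

exact = np.sin(np.pi * x) / np.pi ** 2  # -(exact)'' = phi, vanishing at 0, 1
assert np.max(np.abs(psi - exact)) < 1e-4
```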
\begin{remark}
For the purposes of the uniqueness result, we will use an equivalent norm
on $H^{1}_{0}(\Omega)$ consisting only of the gradient seminorm part.
Since $\Omega$ is assumed to be a bounded Lipschitz domain,
it admits the divergence theorem in the form,
\begin{equation*}
\int_{\Omega} \nabla \zeta \cdotp \nabla \psi \; dx =
-\int_{\Omega} \zeta \Delta \psi \; dx,
\end{equation*}
where $\zeta \in H^{1}_{0}(\Omega)$ and $\psi$ is as in Assumption \ref{Green}.
This is documented in the Encyclopedia of Mathematics, and follows from the
boundary trace theory included in many references, such as \cite{Leoni}.
\end{remark}
\begin{theorem}
Under the additional assumptions of this section, there is a unique
weak solution of (\ref{wsol}), where $V_{{\rm e}}$ is defined in
(\ref{redefined}). The defining properties of weak solution are described in
Definition \ref{weaksolution}.
\end{theorem}
\begin{proof}
The proof employs Gronwall's inequality, following some preliminary
estimates. Suppose that $\Psi_{1}$ and $\Psi_{2}$ are weak solutions of
(\ref{wsol}) as defined in Definition \ref{weaksolution}.
The potential is given by (\ref{redefined}).
Set $\Psi$ equal to the difference, $\Psi = \Psi_{1} - \Psi_{2}$.
In particular, $\Psi(\cdotp, 0) \equiv 0$.
Upon subtraction of the two systems, one obtains,
after integration over $[0,t]$,
\begin{equation*}
i \hbar \int_{\Omega} \Psi({\bf x},t)
\zeta({\bf x}) \; d {\bf x} =
\end{equation*}
\begin{equation*}
\int_{0}^{t} \int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, s)\cdotp \nabla { \zeta}({\bf x})
+ [V_{\rm e}({\bf x},s,\rho_{1}) \Psi_{1}({\bf x},s) -
V_{\rm e}({\bf x},s,\rho_{2}) \Psi_{2}({\bf x},s)]
{ \zeta}({\bf x})
d{\bf x} ds.
\end{equation*}
Since $C^{\infty}_{0}(\Omega)$ is dense in $H^{1}_{0}(\Omega)$, we may
restrict $\zeta$ to
$C^{\infty}_{0}(\Omega)$. For any such $\zeta$, we choose
$\psi = {\mathcal G} \zeta$, and make the replacement $\zeta = -\Delta
\psi$ on the lhs. We obtain, after an application of the divergence
theorem,
\begin{equation*}
i \hbar \int_{\Omega} \nabla \Psi({\bf x}, t)
\cdotp \nabla \psi({\bf x}) \; d {\bf x} =
\end{equation*}
\begin{equation*}
\int_{0}^{t} \int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, s)\cdotp \nabla { \zeta}({\bf x})
+ [V_{\rm e}({\bf x},s,\rho_{1}) \Psi_{1}({\bf x},s) -
V_{\rm e}({\bf x},s,\rho_{2}) \Psi_{2}({\bf x},s)]
{ \zeta}({\bf x})
d{\bf x} ds.
\end{equation*}
We employ duality to estimate $H^{1}_{0}$ norms. For the lhs, we have
\begin{equation}
\label{lhsduality}
\sup_{\psi={\mathcal G}\zeta: \, \|\zeta\|_{H^{1}_{0}} \leq 1}
\left|i \hbar \int_{\Omega} \nabla \Psi({\bf x}, t)
\cdotp \nabla \psi({\bf x}) \; d {\bf x} \right| = \hbar \|{\mathcal G}\|
\|\Psi(\cdotp, t)\|_{H_{0}^{1}}.
\end{equation}
For the rhs, the subadditivity of the supremum implies that the latter is
dominated by the sum of two terms, $T_{1}$ and $T_{2}$. The first of these is
\begin{equation}
T_{1} =
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}
\left|\int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, s)\cdotp \nabla { \zeta}({\bf x}) d{\bf x}\right| ds
=
\frac{{\hbar}^{2}}{2m}
\int_{0}^{t} \|\Psi(\cdotp, s)\|_{H_{0}^{1}} \; ds.
\end{equation}
The second of these, $T_{2}$, is dominated by the sum of the three individual
potential terms: the external, the Hartree, and the
quantum correction potentials, resp.
Because the external potential acts linearly, we have the supremum bound
for this term of
\begin{equation*}
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}
\left|\int_{\Omega}
V({\bf x}, t) \Psi({\bf x}, s) \zeta({\bf x})d {\bf x}\right| ds
\leq
C \int_{0}^{t} \|\Psi(\cdotp, s)\|_{H^{1}_{0}} \; ds.
\end{equation*}
The Coulomb potential, if it is present, also acts linearly. It yields an
estimate analogous to that for the external potential.
The Hartree potential and
two of the three possible quantum correction potentials act nonlinearly and
require the triangle
inequality. For the Hartree potential, we have the upper bound,
\begin{equation*}
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}
\left | \int_{\Omega}[W \ast \rho_{1} -
W \ast \rho_{2}] \Psi_{1} \zeta \; d {\bf x}\right | \; ds
\end{equation*}
\begin{equation*}
+
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}\left | \int_{\Omega}
(W \ast \rho_{2})
[\Psi_{1} - \Psi_{2}] \zeta \; d {\bf x}\right | \; ds.
\end{equation*}
Both of these Hartree terms can be estimated from above by a constant
times
\begin{equation*}
\int_{0}^{t} \|\Psi(\cdotp, s)\|_{H^{1}_{0}} \; ds.
\end{equation*}
For the second Hartree term, we can directly apply
\cite[Theorem 3.1]{JNA} in combination with the Schwarz inequality.
For the first term, we estimate the spatial integral by
the generalized H\"{o}lder inequality:
\begin{equation*}
\left | \int_{\Omega}[W \ast \rho_{1} -
W \ast \rho_{2}] \Psi_{1} \zeta \; d {\bf x}\right |
\leq \|W \ast (\rho_{1} - \rho_{2})\|_{L^{3/2}} \; \|\Psi_{1} \|_{L^{6}}
\|\zeta\|_{L^{6}}.
\end{equation*}
The second and third rhs factors are bounded by the embedding constant times
the $H^{1}_{0}$ norm. The first factor is bounded by
\begin{equation*}
\|W\|_{L^{1}} \||\Psi_{1}| - |\Psi_{2}| \|_{L^{3}}
\||\Psi_{1}| + |\Psi_{2}| \|_{L^{3}} \leq
\|W\|_{L^{1}} \|\Psi_{1} - \Psi_{2} \|_{L^{3}}
(\|\Psi_{1}\|_{L^{3}} + \|\Psi_{2} \|_{L^{3}}),
\end{equation*}
as follows from Young's inequality, combined with the Schwarz inequality.
These estimates show that the first Hartree term is also bounded by a
constant times
\begin{equation*}
\int_{0}^{t} \|\Psi(\cdotp, s)\|_{H^{1}_{0}} \; ds.
\end{equation*}
The triangle inequality is also applied prior to the estimation of the LDA
terms. We have the upper bound
\begin{equation*}
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}
\left | \int_{\Omega}[|\Psi_{1}|^{\alpha} -
|\Psi_{2}|^{\alpha}] \Psi_{1} \zeta \; d {\bf x}\right | \; ds
\; +
\int_{0}^{t} \sup_{\|\zeta\|_{H^{1}_{0}} \leq 1}\left | \int_{\Omega}
|\Psi_{2}|^{\alpha}
[\Psi_{1} - \Psi_{2}] \zeta \; d {\bf x}\right | \; ds.
\end{equation*}
The second LDA term is estimated by
H\"{o}lder's inequality with the $r^{\prime}/r$ conjugate
pairing ($r = \alpha + 2$).
This gives an upper bound for the spatial integral of
\begin{equation*}
\| \;|\Psi_{2}|^{\alpha}
[\Psi_{1} - \Psi_{2}]\|_{L^{r^{\prime}}} \; \| \zeta \|_{L^{r}}
\leq C
\|\Psi_{2}\|_{L^{r}}^{\alpha}\;
\|\Psi_{1} - \Psi_{2} \|_{L^{r}}.
\end{equation*}
The inequality here results from an application of H\"{o}lder's inequality
with the conjugate $p^{\prime}/p$ pairing used earlier; $p =
\frac{r}{r^{\prime}}, \alpha r^{\prime}p^{\prime} = r$.
This leads to the desired estimate for this term after an application
of Sobolev's
inequality. For the first term of the LDA estimate, we again use
H\"{o}lder's inequality with the $r^{\prime}/r$ conjugate pairing.
This gives an upper bound for the spatial integral of
\begin{equation*}
\|[|\Psi_{1}|^{\alpha} -
|\Psi_{2}|^{\alpha}] \Psi_{1}\|_{L^{r^{\prime}}}
\|\zeta\|_{L^{r}}.
\end{equation*}
This is estimated in the same way as the estimation preceding and
following (\ref{alphaminusone}). One employs inequality (\ref{estPhi}),
followed by two applications of H\"{o}lder's inequality. The first
application of H\"{o}lder's inequality, with conjugacy indices $p =
\frac{r}{r^{\prime}}, p^{\prime}$, gives the upper bound of
\begin{equation*}
\|[|\Psi_{1}|^{\alpha} -
|\Psi_{2}|^{\alpha}] \Psi_{1}\|_{L^{r^{\prime}}}
\|\zeta\|_{L^{r}} \leq
\end{equation*}
\begin{equation*}
\alpha \|(|\Psi_{1}| + |\Psi_{2}|)^{\alpha - 1}\; (|\Psi_{1}|
-|\Psi_{2}|)\|_{L^{r/\alpha}} \|\Psi_{1}\|_{L^{r}}
\|\zeta\|_{L^{r}}.
\end{equation*}
A second application of H\"{o}lder's inequality, with conjugacy indices
$p = \alpha, p^{\prime}$, provides the further upper bound
\begin{equation*}
\alpha \||\Psi_{1}| + |\Psi_{2}| \|_{L^{r}}^{\alpha - 1} \;
\| \Psi_{1} - \Psi_{2}\|_{L^{r}}
\|\Psi_{1}\|_{L^{r}}
\|\zeta\|_{L^{r}}.
\end{equation*}
This leads immediately to the desired upper bound for the supremum of this
term. Only the time-history term remains to be analyzed, if present.
We have the difference formula,
\begin{equation*}
\int_{0}^{t} \left[
\int_{\Omega} \Phi(\cdotp, s, \rho_{1}) \; \Psi_{1} \zeta
\; d{\bf x} -
\int_{\Omega} \Phi(\cdotp, s, \rho_{2}) \; \Psi_{2} \zeta
\; d{\bf x}\right]\; ds =
\end{equation*}
\begin{equation*}
\int_{0}^{t}\left[
\int_{\Omega}[\Phi(\cdotp, s, \rho_{1}) - \Phi(\cdotp, s,
\rho_{2})] \;
\Psi_{1} \zeta
\; d{\bf x}\right]ds \; + \int_{0}^{t} \left[
\int_{\Omega}\Phi(\cdotp, s, \rho_{2})[\Psi_{1} - \Psi_{2}] \zeta
\; d{\bf x}\right]ds.
\end{equation*}
The first term is estimated by hypothesis (\ref{ecfollowsH}),
in conjunction with
H\"{o}lder's inequality. The second term is directly estimated by
H\"{o}lder's inequality. Both estimates are finalized by Sobolev's
inequality.
Altogether, we have obtained the required hypothesis for the application
of the Gronwall inequality. In particular, $\|\Psi(\cdotp,
t)\|_{H^{1}_{0}} \equiv 0$, so that $\Psi \equiv 0$ and uniqueness
follows.
\end{proof}
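The Gronwall step at the end of the proof can be illustrated numerically: a nonnegative function vanishing at $t = 0$ and dominated by $C \int_{0}^{t} u(s)\,ds$ must vanish identically, since iterating the integral operator on any bounded starting function produces iterates of size $C^{n} t^{n}/n!$. The discretization below (trapezoidal quadrature, arbitrary constants) is purely illustrative.

```python
import numpy as np

# Discrete illustration of the Gronwall step: iterating the majorizing
# integral map u |-> C * int_0^t u(s) ds drives any bounded start to zero.
C, n = 10.0, 1000
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

u = np.ones(n)                  # any bounded nonnegative starting function
for _ in range(60):
    # trapezoidal cumulative integral, then scale by C
    u = C * np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dt)))

# iterates behave like C^n t^n / n!, so only u == 0 is self-consistent
assert np.max(u) < 1e-8
```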
The uniqueness result permits a useful convergence result for the
smoothing `sequence'.
\begin{corollary}
\label{full}
Suppose that $\epsilon_{n}$ is any positive sequence of real numbers
convergent to zero. Then the sequence $\Psi_{\epsilon_{n}}$, satisfying
Proposition
\ref{2.1}, converges in the norm of $C(J; H^{1}_{0}(\Omega)) \cap
C^{1}(J; H^{-1}(\Omega))$ to the unique solution $\Psi$ defined in Theorem
\ref{central}.
\end{corollary}
\begin{proof}
We use the elementary fact that, if every subsequence has a further
subsequence converging to a unique limit, then the entire sequence
converges to that unique limit.
The first part of the proof of Theorem
\ref{central2} demonstrates subsequential
convergence in
$C(J; H^{1}_{0}(\Omega))$. The arguments leading to (\ref{deducefrom})
demonstrate convergence in
$C^{1}(J; H^{-1}(\Omega))$.
\end{proof}
\section{Summary Remarks}
We have formulated a model within the framework of time dependent density
functional theory. It is a closed system model, posed on a
bounded domain in ${\mathbb R}^{3}$ with homogeneous boundary
conditions.
The novelty of the article lies
in the flexibility of the choice of potentials. In addition to the Hartree
potential and a given external potential,
we permit Coulomb potentials with fixed ionic point masses, a
time-history potential, and the
local density approximation (LDA), which is
typically used in simulation.
We have obtained existence and
uniqueness for this model on a bounded domain in ${\mathbb R}^{3}$ and
a given finite time interval.
The growth of the LDA term,
as measured by the exponent $\alpha$, cannot be
modified if the methods of this article are to apply. We have selected the
form used here because of its wide usage in the literature.
Finally, Corollary \ref{full} assumes significance because the smoothed
solutions can be obtained via the evolution operator, and its
approximations (see the cited references).
We note, in closing, that the case of periodic boundary conditions occurs
frequently in applications; it is a topic of future study.
\section{Introduction}
\subsection{Motivation and context}
A key challenge that has to be faced when dealing with real-world engineering analysis and design problems
is to find a model for a process or apparatus that is able to correctly interpret the observed data.
The advantages of having at
one's disposal a mathematical model include enabling the analysis of extreme situations,
the verification of decisions, the avoidance of time-consuming and expensive experimental tests or intensive numerical simulations,
and the possibility of optimizing over model parameters for the purpose of design.
In this context, a tradeoff must typically be made between the accuracy of the model (here broadly intended as the capacity of the model to reproduce the experimental or simulation data) and its complexity, insofar as the former usually increases with the complexity of the model. Indeed, the use of ``simple'' models of complex phenomena is gaining increasing interest in engineering design; examples are the so-called {\em surrogate models} constructed from complex simulation data arising, for instance, in aerodynamics modeling, see, e.g., \cite{YoAnVa:18, forrester2008engineering, gorissen2010surrogate}.
In particular, if the purpose of the model is performing optimization-based design, then it becomes of paramount importance to have a model that is suitably tailored for optimization. To this purpose, it is well known that an extremely advantageous property for a model to possess is {\em convexity}, see, e.g., \cite{calafiore2014optimization, boyd2004convex}.
In fact, if the objective and constraints in an optimization-based design problem are convex, then efficient tools
(such as interior-point methods, see, e.g., \cite{potra2000interior}) can be used to solve the problem in an efficient, global and guaranteed sense.
Conversely, finding the solution to a generic nonlinear programming problem may be extremely difficult,
involving compromises such as long computation time or suboptimality of the solution, see \cite{boyd2004convex,rockafellar1993lagrange}.
Clearly, not all real-world models are convex, but several relevant ones are, or can at least be {\em approximated} by convex ones. In all such cases, it is of critical importance to be able to construct convex models from the available data.
The focus of this work is on the construction of functional models from data, possessing the desirable property of convexity.
Several tools have been proposed in the literature to fit data via convex or log-log-convex functions
(see Section~\ref{subsec:posy} for a definition of log-log convexity).
Notable examples include \cite{magnani2009convex}, where an efficient least-squares partition algorithm
is proposed to fit data through max-affine functions; \cite{kim2010convex}, where a similar method has been
proposed to fit max-monomial functions; \cite{hoburg2016data}, where a technique based on
fitting the data through implicit softmax-affine functions has been proposed; and
\cite{daems2003simulation,calafiore2015sparse},
where methods to fit data through posynomial models have been proposed.
\subsection{Contributions}
Since the pioneering works \cite{cybenko1989approximation,white1990connectionist,HORNIK1991251}, artificial feedforward neural networks
have been widely used to construct models that describe data, see, e.g., \cite{ruck1990multilayer,vt1994radial,andras2014function}.
However, the input-output map represented by a neural network need not possess properties such as convexity, and hence
the ensuing model is in general unsuitable for optimization-based design.
The main objective of this paper is to show that, if the activation functions of the hidden layer and of the output layer are
properly chosen, then it is possible to design a feedforward neural network with one hidden layer that fits the data and
represents a convex function of the inputs.
Such a goal is pursued by studying the properties of the
log-sum-exp (or softmax-affine) $\lse_{T}$ class of functions, by showing that they can be represented through a feedforward neural network, and by proving that they possess universal approximator properties with respect to convex functions; this constitutes our main result, stated in \Cref{cor:mainresult} and specialized in \Cref{cor:convAppr} and \Cref{cor:convApprFFNN}.
Furthermore,
we show that an exponential transformation maps the class of $\lse_{T}$ functions
into the generalized posynomial family $\mathrm{GPOS}_T$, which can be used for fitting log-log convex data, as stated in \Cref{cor:mainresult2}, \Cref{cor:llconvappr}, and
\Cref{cor:convApprFFNN_GPOS}.
Our approximation proofs rely in part on {\em tropical} techniques.
The application of tropical geometry to neural networks
is an emerging topic --- two recent works have used tropical methods
to provide combinatorial estimates, in terms of Newton polytopes,
of the ``classifying power'' of neural networks
with piecewise affine functions, see~\cite{charisopoulos2017morphological, lim}. Although there is no direct relation with the present results,
a comparison of these three works does suggest that tropical methods
may be of further interest in the learning-theoretic context.
We flank the theoretical results in this paper with a
numerical \texttt{Matlab} toolbox, named \texttt{Convex\_Neural\_Network}, which we developed and made freely available on the web\footnote{See
\url{https://github.com/Corrado-possieri/convex-neural-network/}}. This toolbox implements the proposed class
of feedforward neural networks, and
it has been used for the numerical experiments reported in the examples section.
Convex neural networks are important in engineering applications in the context of constructing {\em surrogate models} for describing and optimizing complex input-output relations. We provide examples of application
to two complex physical processes:
the amount of vibration transmitted by a vehicle suspension system as a function of its mechanical parameters,
and the peak power generated by the combustion reaction of propane as a function of the initial concentrations of the involved
chemical species.
\subsection{Organization of the paper}
The remainder of this paper is organized as follows: in Section~\ref{sec:notation} we introduce the notation and we give some preliminary results about the classes of functions under consideration.
In Section~\ref{sec:approx}, we illustrate the approximation capabilities of the considered classes of functions, by establishing
that generalized log-sum-exp functions and generalized posynomials are universal smooth approximators of convex and
log-log-convex data, respectively.
In Section~\ref{sec:algo}, we show the correspondence between these functions and
feedforward neural networks with properly chosen activation function. The effectiveness of the proposed approximation
technique in realistic applications is highlighted in Section~\ref{sec:appli}, where the $\lse_{T}$ class is used to
perform data-driven optimization of two physical phenomena. Conclusions are given in Section~\ref{sec:concl}.
\section{Notation and technical preliminaries\label{sec:notation}}
Let $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, $\R_{\geqslant 0}$, and $\R_{>0}$ denote the set of natural, integer, real, nonnegative real,
and positive real numbers, respectively.
{Given $\bm{\xi}\in\mathbb{R}^n$, $\delta_{\bm{\xi}}$ denotes the Dirac measure on the set $\{\bm{\xi}\}\subset \mathbb{R}^n$.
The vectors $\bm{\xi}_0,\dots,\bm{\xi}_k\in\mathbb{R}^n$ are \emph{linearly independent} if
$c_0\,\bm{\xi}_0+\dots+c_k\,\bm{\xi}_k\neq\bm{0}$
for all $c_0,\dots,c_k\in\mathbb{R}$ not all zero, whereas they are
\emph{affinely independent} if $\bm{\xi}_1-\bm{\xi}_0,\dots,\bm{\xi}_k-\bm{\xi}_0$ are linearly independent.}
Given $f:\mathbb{R}^n\rightarrow\mathbb{R}\cup\{+\infty\}$,
{let \[\mathrm{dom}\, f\doteq\{\mathbf{x}\in\mathbb{R}^n:f(\mathbf{x})<+\infty \}.\]
Supposing that
$\mathrm{dom}\, f\neq \emptyset$,} define the \emph{Fenchel transform}
$f^\star:\mathbb{R}^n\rightarrow\mathbb{R}\cup\{+\infty\}$ of $f$ as
\[f^\star(\mathbf{x}^\star)=\sup_{\mathbf{x}\in\mathbb{R}^n}(\inner{\mathbf{x}^\star}{\mathbf{x}}-f(\mathbf{x})),\]
where $\inner{\mathbf{x}}{\mathbf{y}}$ denotes an inner product; in particular, the standard inner product
$\inner{\mathbf{x}}{\mathbf{y}}\doteq\mathbf{x}^{\top}\mathbf{y}$ will be assumed all throughout this paper.
By the Fenchel-Moreau theorem, \cite{borwein2010convex}, it results that $f=f{}^{\star}{}^{\star}$ if and only if $f$ is convex and lower semicontinuous,
whereas, in general, it holds that $f\geqslant f{}^{\star}{}^{\star}$. We shall assume henceforth
that all the considered convex functions are {\em proper},
meaning that
their domain is nonempty.
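As a quick numerical illustration of the Fenchel transform (a sketch not taken from the paper; the grid, the test function $f(x) = x^2/2$, and the tolerances are illustrative choices), recall that $f(x) = x^2/2$ is its own conjugate:

```python
import numpy as np

# Grid-based Fenchel (Legendre) conjugate:
#   f_star(y) = sup_x (x * y - f(x)), approximated by a max over a grid.
x = np.linspace(-10.0, 10.0, 20001)
f = 0.5 * x**2  # f(x) = x^2 / 2 is self-conjugate

def fenchel(y):
    # discrete approximation of the supremum defining f_star(y)
    return np.max(x * y - f)

# Check f_star(y) ~= y^2 / 2 at a few points well inside the grid.
for y in [-2.0, 0.0, 1.5, 3.0]:
    assert abs(fenchel(y) - 0.5 * y**2) < 1e-4
```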
\subsection{The Log-Sum-Exp class of functions\label{subsec:lse}}
Let $\mathrm{LSE}$ (Log-Sum-Exp) be the class of functions $f:\mathbb{R}^n\rightarrow\mathbb{R}$ that can be written as
\begin{equation}
f(\mathbf{x}) = \log \left(\sum_{k=1}^K b_k \exp ( \inner{\bm{\alpha}^{(k)}}{\mathbf{x}} )\right),
\label{eq:lse_lse_reg}
\end{equation}
for some $K\in\mathbb{N}$, $b_k\in\R_{>0}$, $\bm{\alpha}^{(k)}=[\begin{array}{ccc}
\alpha_1^{(k)} & \cdots & \alpha_n^{(k)}
\end{array}]^\top\in\mathbb{R}^n$, $k=1,\dots,K$, where
$\mathbf{x}=[\begin{array}{ccc}
x_1 & \cdots & x_n
\end{array}]^\top$ is a vector of variables. Further, given $T\in\R_{>0}$ (usually referred to as the
\emph{temperature}), define the
class $\lse_{T}$ of functions $f_T:\mathbb{R}^n\rightarrow\mathbb{R}$ that can be written as
\begin{equation}
f_T(\mathbf{x}) = T \log \left(\sum_{k=1}^K b_k^{1/T} \exp ( \inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} )\right),
\label{eq:lse_lse_reg_trop}
\end{equation}
for some $K\in\mathbb{N}$, $b_k\in\R_{>0}$, and $\bm{\alpha}^{(k)}\in\mathbb{R}^n$, $k=1,\dots,K$.
By letting
$\beta_k \doteq \log b_k$, $ k=1,\ldots,K$,
we have that functions in the family $\lse_{T}$ can be equivalently parameterized as
\begin{equation}
f_T(\mathbf{x}) = T \log \left(\sum_{k=1}^K \exp ( \inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} + \beta_k/T )\right),
\label{eq:lse_lse_reg_trop_exp}
\end{equation}
where the $\beta_k$s have no sign restrictions.
It may sometimes be convenient to highlight the full parameterization
of $f_T$, in which case we shall write $f_T^{(\overrightarrow{\bm{\alpha}},\bm{\beta})}$,
where $\overrightarrow{\bm{\alpha}} = (\bm{\alpha}^{(1)},\ldots,\bm{\alpha}^{(K)})$, and
$\bm{\beta} = (\beta_1,\ldots,\beta_K)$.
%
It can then be observed that, for any $T>0$, the following property holds:
\begin{equation}
f_T^{(\overrightarrow{\bm{\alpha}},\bm{\beta})} (\mathbf{x}) = T f_1^{(\overrightarrow{\bm{\alpha}},\bm{\beta}/T)} (\mathbf{x}/T).
\label{eq:scaling}
\end{equation}
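The scaling identity \eqref{eq:scaling} is immediate to verify numerically. The helper \texttt{lse\_T} below is an illustrative implementation (not the paper's toolbox) of the parameterization \eqref{eq:lse_lse_reg_trop_exp}:

```python
import numpy as np

def lse_T(x, alphas, betas, T):
    # f_T(x) = T * log(sum_k exp((<alpha_k, x> + beta_k) / T)),
    # computed with the usual max-shift for numerical stability.
    z = (alphas @ x + betas) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))

rng = np.random.default_rng(0)
alphas = rng.normal(size=(5, 3))
betas = rng.normal(size=5)
x = rng.normal(size=3)
T = 0.37
# f_T^{(alpha, beta)}(x) = T * f_1^{(alpha, beta / T)}(x / T)
lhs = lse_T(x, alphas, betas, T)
rhs = T * lse_T(x / T, alphas, betas / T, 1.0)
assert abs(lhs - rhs) < 1e-10
```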
A key fact is that each $f_T\in\lse_{T}$ is smooth and convex.
Indeed, letting $\mu$ be a positive
Borel measure on $\mathbb{R}^n$, following the terminology
of \cite{klartag2012centroid}, the {\em log-Laplace transform}
of $\mu$~is
\begin{equation}\label{eq:loglaplacetransform}
M(\mathbf{x}) \doteq \log\left( \frac{1}{\mu(\mathbb{R}^n)} \int_{\mathbb{R}^n} \exp(\inner{\bm{\tau}}{\mathbf{x}})\,\mathrm{d}\mu(\bm{\tau}) \right).
\end{equation}
The convexity of this function is well known,
being a direct consequence of H\"older's inequality.
Hence, letting $\mu = \sum_{k=1}^K b_k \,\delta_{\bm{\alpha}^{(k)}}$ be a sum of Dirac measures,
we obtain that each $f\in\mathrm{LSE}$ is convex.
The convexity of all $f_T\in\lse_{T}$ follows immediately
by the fact that convexity is preserved under positive scaling. On the other hand, the smoothness of each
$f_T\in\lse_{T}$ follows by the smoothness of the functions $\exp(\cdot)$ and $\log(\cdot)$ in their domain.
The interest in this class of functions arises from the fact that, as established in the subsequent~\Cref{cor:mainresult},
functions in $\lse_{T}$ are universal smooth approximators of convex functions.
In the following proposition, we show that if
the points with coordinates
$\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}$ constitute an affine generating family of $\mathbb{R}^n$,
or, equivalently, if one
can extract $n+1$ affinely independent vectors from $\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}$, then the function $f_T(\mathbf{x})$ given in \eqref{eq:lse_lse_reg_trop} is strictly convex.
In dimension $2$, this condition means that the family of points
of coordinates $\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}$ contains the vertices of a triangle;
in dimension $3$, the same family must contain the vertices of a tetrahedron, and so on.
\begin{prop}\label{prop:strictConv}
The function $f_T(\mathbf{x})$ given in \eqref{eq:lse_lse_reg_trop} is strictly convex whenever the vectors
$\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}$ constitute an affine generating family of $\mathbb{R}^n$.
\end{prop}
\begin{proof}
Let $\mu$ be a positive Borel measure on $\mathbb{R}^n$.
For every $\bm{\xi}\in\mathbb{R}^n$, consider the random variable
$\mathbf{X}_{\bm{\xi}}$, whose distribution $\nu_{\bm{\xi}}$, absolutely continuous with respect
to $\mu$, has the Radon-Nikodym derivative $\frac{\mathrm{d}\nu_{\bm{\xi}}}{\mathrm{d}\mu}$
equal to $\mathbf{x} \mapsto \exp(\inner{\bm{\xi}}{\mathbf{x}})$. It can be checked that the Hessian of the log-Laplace
transform of $\mu$ is
$\nabla^2M(\bm{\xi}) = \mathrm{Cov}(\mathbf{X}_{\bm{\xi}})$,
where $\mathrm{Cov}(\cdot)$ denotes the covariance matrix of the random variable at argument, see the proof of \cite[Prop~7.2.1]{brazitikos}.
Hence, as soon as the support of the distribution of $\mathbf{X}_{\bm{\xi}}$
contains $n+1$ affinely independent points, this
covariance matrix is positive definite, which entails
the strict convexity of $M$. The proposition follows
by considering the log-Laplace transform of
$\mu=\sum_{k=1}^K b_k\, \delta_{\bm{\alpha}^{(k)}/T}$,
in which the support of $\mu$ is $\{\bm{\alpha}^{(1)}/T,\dots,\bm{\alpha}^{(K)}/T\}$.
\end{proof}
\begin{rem}\label{rk-affine}
If the points with coordinates $\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}$ do not constitute an affine generating family of $\mathbb{R}^n$, we can find a vector $\mathbf{u}\in\mathbb{R}^n$ such that
$\langle \mathbf{u} , \bm{\alpha}^{(k)}-\bm{\alpha}^{(1)}\rangle =0$ for $k=2,\dots,K$. It follows
that
\begin{align}
f_T(\mathbf{x} + s \mathbf{u}) = s \langle\bm{\alpha}^{(1)},\mathbf{u} \rangle + f_T(\mathbf{x}),\qquad \forall s\in\mathbb{R},
\end{align}
showing that $f_T$ is affine in the direction $\mathbf{u}$.
\end{rem}
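The affine behavior described in the remark can be checked numerically. In the sketch below (illustrative data; \texttt{lse\_T} is an ad-hoc helper, not the paper's toolbox), all exponent vectors share the same component along $\mathbf{u} = \mathbf{e}_1$:

```python
import numpy as np

def lse_T(x, alphas, betas, T):
    # f_T(x) = T * log(sum_k exp((<alpha_k, x> + beta_k) / T))
    z = (alphas @ x + betas) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))

# All alpha^(k) have the same first coordinate, so that
# <u, alpha^(k) - alpha^(1)> = 0 for u = e_1 and every k.
u = np.array([1.0, 0.0, 0.0])
alphas = np.array([[2.0, 1.0, -1.0],
                   [2.0, 0.5, 3.0],
                   [2.0, -2.0, 0.0]])
betas = np.array([0.1, -0.3, 0.7])
x = np.array([0.4, -1.2, 0.8])
T = 0.5
for s in [-1.0, 0.3, 2.5]:
    # f_T(x + s u) = s <alpha^(1), u> + f_T(x)
    lhs = lse_T(x + s * u, alphas, betas, T)
    rhs = s * (alphas[0] @ u) + lse_T(x, alphas, betas, T)
    assert abs(lhs - rhs) < 1e-10
```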
We next observe that the function class $\lse_{T}$ enlarges as $T$ decreases, as stated more precisely in the following lemma.
\begin{lem}\label{lem:nested}
For all $T>0$ and each $p\in\mathbb{N}$, $p\geqslant 1$, one has
\[\lse_{T}\subset \mathrm{LSE}_{T/p}.\]
\end{lem}
\begin{proof} By definition, for a function $f_T \in \lse_{T}$ there exist
$T>0$, $b_k >0$ and $\bm{\alpha}^{(k)}$, $k=1,\ldots,K$, such that
\begin{align*}
f_T(\mathbf{x}) &= T \log \left(\sum_{k=1}^K b_k^{1/T} \exp ( \inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} )\right)
\\
&\hspace{-3ex}= (T/p) \log \left( \sum_{k=1}^K b_k^{1/T} \exp (\inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} ) \right)^p
\\&\hspace{-3ex}= (T/p) \log \left( \sum_{k=1}^K (b_k^{1/p})^{p/T} \exp (\inner{\bm{\alpha}^{(k)}/p}{\mathbf{x}/(T/p)}) \right)^p \\
&\hspace{-3ex}= (T/p) \log \left(\sum_{k=1}^{K'} \tilde b_k^{p/T} \exp ( \inner{\tilde{\bm{\alpha}}^{(k)}}{\mathbf{x}/(T/p)})\right),
\end{align*}
where the last equality follows from the observation that, by expanding the (integer) power $p$, we obtain a summation over $K'\geqslant K$ terms,
each of which has the form of products of terms taken from the larger parentheses.
These terms retain the format
of the original terms in the parentheses, only with suitably modified parameters $\tilde{b}_k$ and $\tilde{\bm{\alpha}}^{(k)}$.
The claim then follows by observing that the last expression represents a function in $\mathrm{LSE}_{T/p}$.
\end{proof}
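For a concrete instance of this expansion, take $K = 2$ and $p = 2$: squaring the sum produces three terms, with exponent vectors $\bm{\alpha}^{(1)}$, $(\bm{\alpha}^{(1)}+\bm{\alpha}^{(2)})/2$, $\bm{\alpha}^{(2)}$, and matching the cross term yields the coefficient $2^{T/2}\sqrt{b_1 b_2}$. A numerical sketch (illustrative data; the helper implements the parameterization \eqref{eq:lse_lse_reg_trop}):

```python
import numpy as np

def lse_T(x, alphas, bs, T):
    # f_T(x) = T * log(sum_k b_k^{1/T} exp(<alpha_k, x / T>))
    z = (alphas @ x + np.log(bs)) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))

rng = np.random.default_rng(2)
a1, a2 = rng.normal(size=2), rng.normal(size=2)
b1, b2 = 1.3, 0.4
T = 0.8
# Expanded representation at temperature T/2 (K = 2, p = 2):
ta = np.array([a1, (a1 + a2) / 2, a2])
tb = np.array([b1, 2 ** (T / 2) * np.sqrt(b1 * b2), b2])
x = rng.normal(size=2)
assert abs(lse_T(x, np.array([a1, a2]), np.array([b1, b2]), T)
           - lse_T(x, ta, tb, T / 2)) < 1e-10
```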
Consider now the class $\mathrm{MA}$ of \emph{max-affine functions} with $K$ terms,
i.e., the class of all the functions that can be written as
\begin{equation}
\label{eq:maxaffine}
\bar{f}(\mathbf{x})\doteq \max_{k=1,\dots,K} ( \beta_k+\inner{\bm{\alpha}^{(k)}}{\mathbf{x}} ).
\end{equation}
When the entries of $\bm{\alpha}^{(k)}$ are nonnegative integers, the function $\bar{f}$
is called a {\em tropical polynomial},~\cite{viro,itenberg}. Allowing these entries to be relative
integers yields the class of {\em Laurent tropical polynomials}.
When these entries are real, by analogy with classical
posynomials (see Section~\ref{subsec:posy}), the function $\bar{f}(\mathbf{x})$ is sometimes referred to as a \emph{tropical posynomial}.
Note that the class of $\mathrm{MA}$ functions has been recently used in learning problems, \cite{charisopoulos2017morphological},
\cite{lim}, and in data fitting, see \cite{magnani2009convex}
and \cite{hoburg2016data}.
Such functions are convex, since the function obtained by taking the point-wise maximum of convex functions is convex.
It follows from the parameterization in (\ref {eq:lse_lse_reg_trop_exp}) that, for all $\mathbf{x}\in\mathbb{R}^n$,
$\lim_{T \searrow 0}f_T(\mathbf{x})=\bar{f}(\mathbf{x})$,
i.e., the function $f_T$ given in \eqref{eq:lse_lse_reg_trop_exp} approximates
$\bar{f}$ as $T$ tends to zero, see \cite{hoburg2016data}.
This deformation
is familiar in tropical geometry under the name of ``Maslov dequantization,''~\cite{litvinov},
and it is a key ingredient of Viro's patchworking method,~\cite{viro}.
The following uniform bounds are rather standard, but a formal proof is given here for completeness.
\begin{lem}\label{prop:approx}
For any $T\in\R_{>0}$, $f_T$ in \eqref{eq:lse_lse_reg_trop_exp}, and for all $\mathbf{x}\in\mathbb{R}^n$, it holds that
\begin{equation}
\bar f (\mathbf{x}) \leqslant f_T(\mathbf{x}) \leqslant T\log K + \bar f (\mathbf{x}).
\label{eq:metric_estimate}
\end{equation}
\end{lem}
\begin{proof}
By construction, we have that
\begin{align*}
\bar{f}(\mathbf{x}) & = \max_{k=1,\dots,K} ( \beta_k+\inner{\bm{\alpha}^{(k)}}{\mathbf{x}} )\\
& =\max_{k=1,\dots,K}T\log((\exp (\beta_k+ \inner{\bm{\alpha}^{(k)}}{\mathbf{x}} ))^{1/T})\\
& = T\log\left(\max_{k=1,\dots,K}(\exp (\beta_k+ \inner{\bm{\alpha}^{(k)}}{\mathbf{x} }))^{1/T}\right)\\
& \leqslant T\log\left(\sum_{k=1}^K (\exp (\beta_k+ \inner{\bm{\alpha}^{(k)}}{\mathbf{x}} ))^{1/T}\right)\\
& = f_T(\mathbf{x}),
\end{align*}
thus proving the left-hand side of the inequality in \eqref{eq:metric_estimate}. On the other hand,
we have that
\begin{align*}
f_T(\mathbf{x}) & = T\log\left(\sum_{k=1}^K\exp (\beta_k/T+ \inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} )\right)\\
& \leqslant T\log\left( K\left(\exp \left(\max_{k=1,\dots,K}(\beta_k+ \inner{\bm{\alpha}^{(k)}}{\mathbf{x}} )\right)\right)^{1/T}\right)\\
& = T\log( K(\exp(\bar{f}(\mathbf{x}))^{1/T}))\\
&= T\log K + \bar f (\mathbf{x}),
\end{align*}
thus proving the right-hand side of the inequality in \eqref{eq:metric_estimate}.
\end{proof}
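The estimate \eqref{eq:metric_estimate} can be confirmed numerically at random points; the sketch below uses illustrative parameters, with \texttt{lse\_T} and \texttt{max\_affine} as ad-hoc helpers:

```python
import numpy as np

def lse_T(x, alphas, betas, T):
    # f_T(x) = T * log(sum_k exp((<alpha_k, x> + beta_k) / T))
    z = (alphas @ x + betas) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))

def max_affine(x, alphas, betas):
    # fbar(x) = max_k (beta_k + <alpha_k, x>)
    return (alphas @ x + betas).max()

rng = np.random.default_rng(1)
K, n = 6, 2
alphas = rng.normal(size=(K, n))
betas = rng.normal(size=K)
for _ in range(100):
    x = rng.normal(size=n)
    fbar = max_affine(x, alphas, betas)
    for T in [1.0, 0.1, 0.01]:
        # fbar <= f_T <= T log K + fbar, and f_T -> fbar as T -> 0
        fT = lse_T(x, alphas, betas, T)
        assert fbar - 1e-9 <= fT <= T * np.log(K) + fbar + 1e-9
```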
\subsection{Posynomials\label{subsec:posy}}
Given $c_k\in\R_{>0}$ and $\bm{\alpha}^{(k)}\in\mathbb{R}^n$, a \emph{positive monomial}
is a product of the form $c_k\mathbf{x}^{\bm{\alpha}^{(k)}} = c_k x_1^{\alpha_1^{(k)}}x_2^{\alpha_2^{(k)}}\cdots x_n^{\alpha_n^{(k)}}$.
A \emph{posynomial} is a finite sum of positive monomials,
\begin{equation}
\psi(\mathbf{x}) = \sum_{k=1}^K c_k \mathbf{x}^{\bm{\alpha}^{(k)}}.
\label{eq:POS}
\end{equation}
Posynomials are thus functions $\psi:\R_{>0}^n\rightarrow\R_{>0}$; we let $\mathrm{POS}$ denote the class of all posynomial functions.
\begin{definition}[Log-log-convex function]
A function $\varphi(\mathbf{x}):\R_{>0}^n\rightarrow\R_{>0}$ is log-log-convex
if $\log \varphi$ is convex in $\log (\mathbf{x})$.
\end{definition}
A positive monomial function $\varphi_k(\mathbf{x}) \doteq c_k\mathbf{x}^{\bm{\alpha}^{(k)}}$
is clearly \emph{log-log-convex}, since $\log \varphi_k(\mathbf{x})$ is linear (hence convex)
in $\log \mathbf{x}$.
Log-log convexity of functions in the $\mathrm{POS}$ family
can be derived from the following proposition, which goes back to Kingman,~\cite{Kin61}.
\begin{prop}[Lemma p.~283 of~\cite{Kin61}]
\label{prop:loglogprop}
If $f_1(\mathbf{x})$ and $f_2(\mathbf{x})$ are log-log-convex functions, then the following functions are log-log-convex:
\begin{enumerate}[i)]
\item $\varphi_a(\mathbf{x}) = f_1(\mathbf{x})+f_2(\mathbf{x})$,
\item $\varphi_b(\mathbf{x}) = f_1(\mathbf{x})f_2(\mathbf{x})$,
\item $\varphi_c(\mathbf{x}) = \max(f_1(\mathbf{x}), f_2(\mathbf{x}))$,
\item $\varphi_d(\mathbf{x}) = f_1(\mathbf{x})^p$, $p\in\R_{>0}$.
\end{enumerate}
\end{prop}
\if{\begin{proof}
By definition, $f_1(\mathbf{x})$ and $f_2(\mathbf{x})$ are log-log-convex in $\mathbf{x}$ if and only if $f_1(\exp(\mathbf{u}))$ and $f_2(\exp(\mathbf{u}))$
are log-convex in $\mathbf{u}$. Since the sum of log-convex functions is log-convex (see, e.g., Section~3.5.2 in \cite{boyd2004convex}),
we have that \[\varphi_a(\exp(\mathbf{u})) = f_1(\exp(\mathbf{u}))+f_2(\exp(\mathbf{u}))\] is log-convex, whence $\varphi_a(\mathbf{x})$ is log-log-convex.
Function \[\log \varphi_b (\exp(\mathbf{u}))= \log f_1(\exp(\mathbf{u})) + \log f_2(\exp(\mathbf{u}))\] is convex since it is the sum of convex functions,
whence $\varphi_b(\mathbf{x})$ is log-log-convex. Similarly,
\[\log \varphi_d (\exp(\mathbf{u}))= p\log f_1(\exp(\mathbf{u}))\] is the positive multiple of a convex function, hence convex.
Finally, due to the fact that $\log$ is monotonically increasing,
\begin{multline*}
\log \varphi_c(\exp(\mathbf{u}))= \log \max \left( f_1(\exp(\mathbf{u})), f_2(\exp(\mathbf{u})) \right) \\
= \max \left( \log f_1(\exp(\mathbf{u})), \log f_2(\exp(\mathbf{u})) \right),
\end{multline*}
which is convex, since the point-wise maximum of convex functions is convex.
\end{proof}}\fi
Since $c_k\mathbf{x}^{\bm{\alpha}^{(k)}}$ is log-log-convex, then by \Cref{prop:loglogprop}
each function in the $\mathrm{POS}$ class is log-log-convex.
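A midpoint test in log-log coordinates gives a quick numerical confirmation of this fact (an illustrative posynomial; by definition, the map $\mathbf{u}\mapsto\log\psi(\exp(\mathbf{u}))$ must be midpoint convex):

```python
import numpy as np

rng = np.random.default_rng(5)
c = np.array([1.5, 0.2, 3.0])      # positive coefficients c_k
A = rng.normal(size=(3, 2))        # exponent vectors alpha^(k)

def psi(x):
    # psi(x) = sum_k c_k * prod_j x_j^{alpha_j^(k)}
    return np.sum(c * np.prod(x ** A, axis=1))

def g(u):
    # g(u) = log psi(exp(u)); log-log convexity of psi = convexity of g
    return np.log(psi(np.exp(u)))

for _ in range(100):
    u, v = rng.normal(size=2), rng.normal(size=2)
    assert g((u + v) / 2) <= (g(u) + g(v)) / 2 + 1e-12
```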
Posynomials are of great interest in practical applications since, under a log-log transform, they become
convex functions \cite{hoburg2016data,boyd2007tutorial}. More precisely, by letting $\mathbf{q} \doteq \log \mathbf{x}$,
one has that
\begin{equation*}
\log \left(\sum_{k=1}^K c_k \mathbf{x}^{\bm{\alpha}^{(k)}} \right)= \log \left(\sum_{k=1}^K c_k \exp (\inner{\bm{\alpha}^{(k)}}{\mathbf{q}})\right),
\end{equation*}
which is a function in the $\mathrm{LSE}$ family.
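This identity can be checked directly, as in the following sketch with random positive coefficients and real exponents (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(3)
K, n = 4, 3
c = rng.uniform(0.5, 2.0, size=K)   # positive coefficients c_k
A = rng.normal(size=(K, n))         # exponent vectors alpha^(k)

def posy(x):
    # psi(x) = sum_k c_k * x^{alpha^(k)}
    return np.sum(c * np.prod(x ** A, axis=1))

def lse(q):
    # log(sum_k c_k exp(<alpha^(k), q>))
    return np.log(np.sum(c * np.exp(A @ q)))

q = rng.normal(size=n)
# log psi(exp(q)) is exactly the LSE function of q
assert abs(np.log(posy(np.exp(q))) - lse(q)) < 1e-10
```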
Furthermore, given $T\in\R_{>0}$, since positive scaling preserves
convexity, \cite{boyd2004convex}, letting $\psi$ be a posynomial,
we have that functions of the form
\begin{equation}
\psi_T(\mathbf{x}) = (\psi(\mathbf{x}^{1/T}))^T
\label{eq:genposy}
\end{equation}
are log-log-convex.
Functions that can be rewritten in the form~\eqref{eq:genposy}, with
$\psi\in\mathrm{POS}$, form the class denoted here by $\mathrm{GPOS}_T$, a subset of the family of
so-called generalized posynomials.
It is a direct consequence of the above discussion that $\lse_{T}$ and $\mathrm{GPOS}_T$ functions are related by a one-to-one correspondence, as stated in the following proposition.
\begin{prop}
\label{prop:lse-gposmapping}
Let $f(\mathbf{x})\in\lse_{T}$ and $\psi(\mathbf{z})\in \mathrm{GPOS}_T$. Then,
\begin{align*}
\exp \left( f\left( \log(\mathbf{z})\right) \right) & \in \mathrm{GPOS}_T, \\
\log \left( \psi \left( \exp(\mathbf{x})\right) \right)& \in \lse_{T}.
\end{align*}
\end{prop}
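\Cref{prop:lse-gposmapping} can likewise be verified numerically; in the sketch below (illustrative parameters, not the paper's toolbox), the underlying posynomial has coefficients $c_k = b_k^{1/T}$:

```python
import numpy as np

rng = np.random.default_rng(4)
K, n, T = 3, 2, 0.6
b = rng.uniform(0.5, 2.0, size=K)
A = rng.normal(size=(K, n))

def f_T(x):
    # f_T in LSE_T:  T * log(sum_k b_k^{1/T} exp(<alpha^(k), x / T>))
    return T * np.log(np.sum(b ** (1 / T) * np.exp(A @ (x / T))))

def psi(y):
    # posynomial with coefficients c_k = b_k^{1/T}
    return np.sum(b ** (1 / T) * np.prod(y ** A, axis=1))

z = rng.uniform(0.5, 3.0, size=n)
# exp(f_T(log z)) equals the GPOS_T function (psi(z^{1/T}))^T
assert abs(np.exp(f_T(np.log(z))) - psi(z ** (1 / T)) ** T) < 1e-10
```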
\section{Data approximation via $\lse_{T}$ \\ and $\mathrm{GPOS}_T$ functions\label{sec:approx}}
The main objective of this section is to show that the classes $\lse_{T}$ and $\mathrm{GPOS}_T$ can be
used to approximate convex and log-log-convex data, respectively.
In particular, in Section~\ref{subsec:approxconv},
we establish that functions in $\lse_{T}$ are universal smooth approximators of convex
data. Similarly, in Section~\ref{subsec:posyappr}, we show that functions in $\mathrm{GPOS}_T$ are universal
smooth approximators of log-log-convex data.
\subsection{Approximation of convex data via $\lse_{T}$\label{subsec:approxconv}}
Consider a collection $\mathcal{C}$ of $m$ data pairs,
\begin{equation*}
\mathcal{C} = \{(\mathbf{x}_1,y_1),\dots,(\mathbf{x}_m,y_m) \},
\end{equation*}
where $\mathbf{x}_i\in\mathbb{R}^n$, $y_i\in\mathbb{R}$, $i=1,\dots,m$, with
\begin{equation*}
y_i = g(\mathbf{x}_i)
,\quad i=1,\ldots,m,
\end{equation*}
and where $g:\mathbb{R}^n\rightarrow \mathbb{R}$ is an unknown convex function.
The data in $\mathcal{C}$ are referred to as \emph{convex data}.
The main goal of this section is to show that there exists a function $f_T\in\lse_{T}$ that fits
such convex data with arbitrarily small absolute approximation error.
The question of the uniform approximation of a convex
function by functions $f_T\in \lse_{T}$ can be considered
either on $\mathbb{R}^n$, or on compact subsets of $\mathbb{R}^n$.
The latter situation is the most relevant to the approximation of finite
data sets. It turns out that there is a
general characterization of the class of functions
uniformly approximable over $\mathbb{R}^n$, which we state as
\Cref{thm:convAppr}.
We then derive a uniform approximation result
over compact sets (\Cref{cor:mainresult}).
However, the approximation issue over the whole $\mathbb{R}^n$ has an intrinsic
interest.
\begin{thm}\label{thm:convAppr}
The following statements are equivalent.
\begin{enumerate}[(a)]
\item\label{it-3} The function $g:\mathbb{R}^n\rightarrow\mathbb{R}$ is convex
and $\mathrm{dom}\, g^\star \doteq \{\mathbf{u}\in\mathbb{R}^n : g^\star(\mathbf{u})<\infty\}$ is a polytope.
\item\label{it-2} For all $\varepsilon\in\R_{>0}$, there is $\bar{T}\in\R_{>0}$ such that, $\forall T\in\R_{>0}$, $T\leqslant\bar{T}$,
there is $f_T\in \lse_{T}$ such that $\|f_T-g\|_\infty\leqslant \varepsilon$.
\item\label{it-0} For all $\varepsilon\in\R_{>0}$, there exists a convex polyhedral function $h$ such that $\|h-g\|_\infty\leqslant \varepsilon$.
\end{enumerate}
\end{thm}
\begin{proof}
\eqref{it-2}$\implies$\eqref{it-3}:
If $\|f_T-g\|_{\infty}\leqslant \varepsilon$ for some $\varepsilon\in\R_{>0}$, we have $\mathrm{dom}\, f_T^\star=\mathrm{dom}\, g^\star$. Therefore, item~\eqref{it-2}
together with the metric estimate~\eqref{eq:metric_estimate},
which gives $\|f_T- \bar f\|_\infty\leqslant T\log K$, implies that $\mathrm{dom}\, g^\star =\mathrm{dom}\, f_T^\star = \mathrm{dom}\, \bar{f}^\star= \operatorname{conv}\{\bm{\alpha}^{(1)},\ldots,\bm{\alpha}^{(K)}\}$.
Item \eqref{it-2}
implies that $g$ is the pointwise limit of a sequence of convex functions,
and so $g$ is convex.
\eqref{it-3}$\implies$\eqref{it-0}: Suppose now that \eqref{it-3} holds. Let us triangulate the polytope $P\doteq \mathrm{dom}\, g^\star$ into
finitely many simplices of diameter at most $\omega\in\R_{>0}$. Let $V$ denote the collection of vertices of these simplices, and define
the function $h:\mathbb{R}^n\rightarrow\mathbb{R}$,
\[
h(\mathbf{x}) \doteq \sup_{\mathbf{v}\in V} (\inner{\mathbf{v}}{\mathbf{x}} - g^\star(\mathbf{v}) ) .
\]
Observe that $h$ is convex and polyhedral. Since $g$ is convex and finite
{(hence $g$ is continuous by \cite[Thm.~10.1]{rockafellar:1970})},
we have
\begin{multline*}
g(\mathbf{x}) = g{^\star}{^\star}(\mathbf{x})
= \sup_{\mathbf{y}\in \mathbb{R}^n} (\inner{\mathbf{y}}{\mathbf{x}} - g^\star(\mathbf{y}) )\\
= \sup_{\mathbf{y}\in P} (\inner{\mathbf{y}}{\mathbf{x}} - g^\star(\mathbf{y}) )
\geqslant h(\mathbf{x}).
\end{multline*}
Moreover,
for all $\mathbf{x}\in \mathbb{R}^n$, the latter supremum is attained by a point $\mathbf{y}\in P$,
which belongs to some simplex of the triangulation. Let $\mathbf{v}_1,\ldots,\mathbf{v}_{{n+1}}\in V$
denote the vertices of this simplex, so that $\mathbf{y}=\sum_{i=1}^{{n+1}} \gamma_i \mathbf{v}_i$
where $\gamma_i\geqslant 0$, $i=1,\dots,{n+1}$, and $\sum_{i=1}^{{n+1}}\gamma_i =1$.
Since $P$ is a polytope, we know that $g^\star$, which is a convex function
taking finite values on a polyhedron, is continuous on this polyhedron \cite{rockafellar:1970}.
So, $g^\star$ is uniformly continuous on $P=\mathrm{dom}\, g^\star$.
It follows that we can choose $\omega\in\R_{>0}$ such that $\max_{i}|g^\star (\mathbf{y})-g^\star(\mathbf{v}_i)|\leqslant \varepsilon$,
for all $\mathbf{y}\in P$ included in a simplex with vertices $\mathbf{v}_1,\ldots,\mathbf{v}_{{n+1}}$ of the triangulation.
Therefore, we have that
\begin{multline*}
g(\mathbf{x}) = \inner{\mathbf{y}}{\mathbf{x}} - g^\star(\mathbf{y})
\leqslant \inner{\mathbf{y}}{\mathbf{x}} - \sum_{i=1}^{{n+1}} \gamma_i (g^\star(\mathbf{v}_i)-\varepsilon) \\
\leqslant \sum_{i=1}^{{n+1}} \gamma_i (\inner{\mathbf{v}_i}{\mathbf{x}} - g^\star(\mathbf{v}_i) )+\varepsilon
\leqslant h(\mathbf{x})+\varepsilon,
\end{multline*}
which shows that~\eqref{it-0} holds.
\eqref{it-0}$\implies$\eqref{it-2}: any convex polyhedral function $h:\mathbb{R}^n\rightarrow\mathbb{R}$ can be rewritten
in the following form:
\[
h(\mathbf{x})= \max_{k=1,\ldots,K} ( \log b_k + \inner{\bm{\alpha}^{(k)}}{\mathbf{x}}),
\]
for some $K\in\mathbb{N}$, $b_k\in\R_{>0}$, and $\bm{\alpha}^{(k)}$, $k=1,\ldots,K$.
By~\eqref{eq:metric_estimate}, for each $\varepsilon\in\R_{>0}$, there is $\bar{T}\in\R_{>0}$ such that, for each $T\in\R_{>0}$, $T\leqslant\bar{T}$,
the function $f_T$ given in \eqref{eq:lse_lse_reg_trop} satisfies
$\|h-f_T\|_\infty\leqslant \varepsilon$. Hence, if $\|h-g\|_\infty\leqslant \varepsilon$, then $\|g-f_T\|_\infty\leqslant 2\varepsilon$,
thus concluding the proof.
\end{proof}
\begin{rem}
The condition in \Cref{thm:convAppr} that the domain of $g^\star$
be a polytope is rather restrictive. It entails
that the map $g$ is Lipschitz, with constant
$\sup_{\mathbf{u}\in \mathrm{dom}\, g^\star} \|\mathbf{u}\|$, where $\|\cdot\|$ is the Euclidean
norm. Conversely, not every
Lipschitz function has a conjugate with polyhedral domain: for instance,
if $g(\mathbf{x})=\|\mathbf{x}\|$, then $\mathrm{dom}\, g^\star$ is the unit Euclidean ball.
However, the condition on the domain of $g^\star$ only involves
the behavior of $g$ ``at infinity''. \Cref{cor:mainresult} below shows that when
considering the approximation problems
over compact sets, the restriction
to a polyhedral domain can be dispensed with.
\end{rem}
\begin{thm}[Universal approximators of convex functions]\label{cor:mainresult}
Let $f$ be a real valued continuous convex function defined
on a compact convex set ${\mathcal K} \subset \mathbb{R}^n$.
Then,
for all $\varepsilon > 0$, there exist $T>0$ and a function $f_T \in \mathrm{LSE}_T$
such that
\begin{align}
|f_T(\mathbf{x}) - f(\mathbf{x})| \leqslant \varepsilon,\quad\text{ for all }\; \mathbf{x}\in {\mathcal K} .\label{e-defap}
\end{align}
\end{thm}
If~\eqref{e-defap} holds, then $f_T$ is an \emph{$\varepsilon$-approximation of $f$ on $\mathcal{K}$}.
\begin{proof}
We first show that the statement of the theorem holds
under the additional assumptions that $f$ is $L$-Lipschitz continuous on $\mathcal{K}$ for some constant $L>0$ and that $\mathcal{K}$ has non-empty interior.
Observe that there is a sequence $(\mathbf{x}_k)_{k\geqslant 1}$ of elements in the interior of $\mathcal{K}$ that is dense in $\mathcal{K}$ (for instance, we may consider the set of vectors in the interior of $\mathcal{K}$ with rational coordinates; this set is denumerable, and by indexing its elements in an arbitrary way we obtain a sequence that is dense in $\mathcal{K}$). In what follows,
we shall identify $f:\mathcal{K}\to \mathbb{R}$ with the convex function $\mathbb{R}^n\to \mathbb{R}\cup\{+\infty\}$ that coincides with $f$ on $\mathcal{K}$ and takes the value
$+\infty$ elsewhere. Recall in particular that
the {\em subdifferential} of $f$ at a point $\mathbf{y}\in \mathcal{K}$ is the
set
\[ \partial f(\mathbf{y})\doteq\{\mathbf{v}\in \mathbb{R}^n\mid f(\mathbf{x})-f(\mathbf{y})\geqslant\langle \mathbf{v}, \mathbf{x}-\mathbf{y}\rangle ,\quad
\forall \mathbf{x}\in \mathcal{K}\},
\]
and that, by Theorem~23.4 of \cite{rockafellar:1970},
$\partial f(\mathbf{y})$ is non-empty for all $\mathbf{y}$ in the relative interior of the domain of $f$, i.e., here, in the interior of $\mathcal{K}$. It is also known that
$\|\mathbf{v}\|\leqslant L$ for all $\mathbf{v}\in \mathrm{dom}\, f^\star$, and in particular for all $\mathbf{v}\in \partial f(\mathbf{x})$ with $\mathbf{x}\in \mathcal{K}$ (Corollary~13.3.3 of~\cite{rockafellar:1970}). Let us now choose
in an arbitrary way an element $\mathbf{v}_k \in \partial f(\mathbf{x}_k)$, for each $k\geqslant 1$, and consider
the map $f_\jmath: \mathbb{R}^n\to \mathbb{R}$,
\[
f_\jmath(\mathbf{x})\doteq
\max_{1\leqslant k\leqslant \jmath} \Big(f(\mathbf{x}_k) + \langle \mathbf{v}_k, \mathbf{x}-\mathbf{x}_k\rangle \Big) .
\]
By definition of the subdifferential, we have $f(\mathbf{x})\geqslant f_\jmath(\mathbf{x})$
for all $\mathbf{x}\in \mathcal{K}$, and by construction of $f_\jmath$, $f(\mathbf{x}_k)=f_\jmath(\mathbf{x}_k)$ for all $1\leqslant k\leqslant \jmath$, so the sequence $(f_\jmath)_{\jmath\geqslant 1}$ converges pointwise to $f$
on the set $X\doteq\{\mathbf{x}_k\mid k\geqslant 1\}$.
Since $\|\mathbf{v}_k\|\leqslant L$,
every map $\mathbf{x}\mapsto f(\mathbf{x}_k) + \langle \mathbf{v}_k, \mathbf{x}-\mathbf{x}_k\rangle$ is Lipschitz of
constant $L$, and so, $f_\jmath$ is also Lipschitz of constant $L$.
Hence, the sequence of maps $(f_\jmath)_{\jmath \geqslant 1}$ is equi-Lipschitz.
A fortiori, it is equicontinuous.
Then, by
the second theorem of Ascoli (Th\'eor\`eme T.2, XX, 3; 1 of \cite{schwartz}),
the pointwise convergence of the sequence
$(f_\jmath)_{\jmath \geqslant 1}$ to $f$ on the set $X$
implies that the same sequence converges {\em uniformly} to $f$ on the closure of $X$, that is, on $\mathcal{K}$. In particular, for all $\varepsilon>0$, we can find an integer $\jmath$ such that
\begin{align}
\sup_{\mathbf{x}\in\mathcal{K}} |f(\mathbf{x})-f_\jmath(\mathbf{x})|\leqslant \varepsilon/2 .
\label{e-intermediate}
\end{align}
Consider now
\[
f_T(\mathbf{x})\doteq
T\log \Big(\sum_{1\leqslant k\leqslant \jmath} \exp\big(f(\mathbf{x}_k)/T + \langle \mathbf{v}_k/T, \mathbf{x}-\mathbf{x}_k\rangle \big)\Big) .
\]
By \Cref{prop:approx}, choosing any $T>0$ such that $T\log \jmath\leqslant \varepsilon/2$ yields $|f_\jmath(\mathbf{x})-f_T(\mathbf{x})|\leqslant \varepsilon/2$ for all $\mathbf{x} \in \mathbb{R}^n$. Together with~\eqref{e-intermediate}, we get
$|f(\mathbf{x})-f_T(\mathbf{x})|\leqslant \varepsilon $ for all $\mathbf{x}\in \mathcal{K}$,
showing that the statement of the theorem indeed holds.
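The constructive argument above (sample points, subgradients, max-affine minorant, then LSE smoothing with $T\log\jmath\leqslant\varepsilon/2$) can be replayed numerically. A Python sketch on the toy function $f(x)=x^2$ over $\mathcal{K}=[-1,1]$ (all choices illustrative):

```python
import math

# Constructive step of the proof on a toy example: sample points x_k in the
# interior of K = [-1, 1], pick subgradients v_k, form the max-affine
# minorant f_j, then smooth at a temperature T with T*log(j) <= eps/2.
# Here f(x) = x^2 (Lipschitz on K with L = 2); all parameters illustrative.
def f(x):
    return x * x

xs = [k / 10 for k in range(-10, 11)]   # sample points x_k
vs = [2 * x for x in xs]                # v_k = f'(x_k), a valid subgradient

def f_j(x):
    return max(f(xk) + vk * (x - xk) for xk, vk in zip(xs, vs))

def f_T(x, T):
    terms = [(f(xk) + vk * (x - xk)) / T for xk, vk in zip(xs, vs)]
    m = max(terms)
    return T * (m + math.log(sum(math.exp(t - m) for t in terms)))

eps = 0.2
T = (eps / 2) / math.log(len(xs))       # ensures T*log(j) <= eps/2
grid = [i / 500 - 1 for i in range(1001)]
assert max(abs(f(x) - f_j(x)) for x in grid) <= eps / 2
assert max(abs(f(x) - f_T(x, T)) for x in grid) <= eps
```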
We now relax the assumption that $f$ is Lipschitz continuous.
Consider,
for all $\eta>0$, the Moreau-Yosida regularization of $f$, which
is the map $g_\eta: \mathbb{R}^n\to \mathbb{R}$ defined by
\begin{align}
g_\eta(\mathbf{x}) =\inf_{\mathbf{y}\in \mathcal{K}} \Big(
\frac{1}{2\eta}\|\mathbf{x}-\mathbf{y}\|^2 + f(\mathbf{y})
\Big)
, \forall \mathbf{x} \in \mathbb{R}^n .
\end{align}
Observe that $\eta\mapsto g_\eta$ is nonincreasing, and that
$g_\eta\leqslant f$. It is known that the function $g_\eta$ is convex, being
the inf-convolution of two convex functions (Theorem~5.4 of \cite{rockafellar:1970}), it is also known that
$g_\eta$ is Lipschitz of constant $1/(2\eta)$ (Th.~4.1.4, \cite{lemarechal}) and that the family of functions $(g_\eta)_{\eta>0}$
converges pointwise to $f$ as $\eta\to 0^+$ (Prop.~4.1.6, {\em ibid.}).
Moreover, we supposed that $f$ is continuous.
We now use a theorem of Dini, showing that if a nondecreasing family of
continuous real-valued maps defined on a compact set converges
pointwise to a continuous function, then this family converges {\em uniformly}.
It follows that $g_\eta$ converges {\em uniformly} to $f$ on the compact set $\mathcal{K}$
as $\eta \to 0^+$. In particular, we can find $\eta>0$ such that
$|f(\mathbf{x})-g_\eta(\mathbf{x})|\leqslant \varepsilon/2$ holds for all $\mathbf{x}\in \mathcal{K}$.
Applying the statement of the theorem, which is already proved in the case of Lipschitz convex maps, to the map $g_\eta$, we get that there exists a map $f_T\in \mathrm{LSE}_T$ for some $T>0$ such that $|f_T(\mathbf{x})-g_\eta(\mathbf{x})|\leqslant \varepsilon/2$ holds for all $\mathbf{x}\in \mathcal{K}$,
and so $|f_T(\mathbf{x})-f(\mathbf{x})|\leqslant \varepsilon$, for all $\mathbf{x}\in \mathcal{K}$,
showing that the statement of the theorem again holds for $f$.
Finally, it is easy to relax the assumption that $\mathcal{K}$ has non-empty
interior: denoting by $E$ the affine space generated by $\mathcal{K}$, we
can decompose a vector $\mathbf{x} \in \mathbb{R}^n$ in a unique way as $\mathbf{x} = \mathbf{y} + \mathbf{z}$
with $\mathbf{y} \in E$ and $\mathbf{z}\in E^\perp$, where $E^\perp = \{\mathbf{z} \mid \langle \mathbf{z}, \mathbf{y}-\mathbf{y}'\rangle =0,\;\forall \mathbf{y},\mathbf{y}'\in E\}$. Setting $\bar f(\mathbf{x})\doteq f(\mathbf{y})$
allows us to extend $f$ to a convex continuous function $\bar f$, constant along any direction orthogonal to $E$, and whose domain contains $\bar{\mathcal{K}}\doteq \{\mathbf{y} + \mathbf{z} \mid \mathbf{y} \in \mathcal{K}, \|\mathbf{z}\|\leqslant 1\}$, which is a compact convex set with non-empty interior. By applying the statement of the theorem to $\bar{f}$, we get an $\varepsilon$-approximation of $\bar{f}$ on $\bar{\mathcal{K}}$ by a map $f_T$ in $\mathrm{LSE}_T$. A fortiori, $f_T$ is an $\varepsilon$-approximation of ${f}$ on ${\mathcal{K}}$.
\end{proof}
\begin{rem}\label{rem:specCase}
A useful special case arises when $f$ is a convex function from $\mathbb{R}^n \to \mathbb{R}\cup\{+\infty\}$, and ${\mathcal K}$ is included in the relative interior of $\mathrm{dom}\, f$. Then, the continuity assumption in \Cref{cor:mainresult} is automatic, see
e.g., Theorem~10.4 of \cite{rockafellar:1970}.
\end{rem}
{The following proposition is now an immediate consequence of
\Cref{cor:mainresult}, where $\mathcal K$ can be taken as the convex hull of the
input data.
\begin{prop}[Universal approximators of convex data]\label{cor:convAppr}
Given a collection of convex data $\mathcal{C}\doteq\{(\mathbf{x}_i,y_i)\}_{i=1}^m$
generated by an unknown convex function,
for each $\varepsilon\in\R_{>0}$ there exists ${T} > 0$
and $f_T\in \lse_{T}$ such that
\[
|f_T(\mathbf{x}_i)-y_i|\leqslant \varepsilon,\quad i=1,\dots,m.
\]
\end{prop}
The following counterexample shows that, in general, we cannot
find a function $f_T$ matching exactly the data points,
i.e., some approximation is sometimes unavoidable.
\begin{exa}\label{rem:nofit}
Suppose first that $n=1$, consider the function $\phi(x)=\max(0,x-1)$, and the data
$\mathbf{x}_1=1$,
$\mathbf{x}_2=-1$,
$\mathbf{x}_3=0$,
$\mathbf{x}_4=2$, with $y_i=\phi(\mathbf{x}_i)$ for $i= 1,\ldots,4$,
so $y_1=y_2=y_3=0$ and $y_4=1$.
Suppose now that this dataset is matched exactly by
a function $f_T\in\mathrm{LSE}_T$ with $T>0$, parametrized
as in~\eqref{eq:lse_lse_reg_trop_exp}.
Since the points $(\mathbf{x}_1,y_1), \dots, (\mathbf{x}_4,y_4)$
are not aligned, we know, by \Cref{rk-affine}, that
the family $\{\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)}\}$ contains an affinely generating
family of $\mathbb{R}$ (in dimension $1$, this simply means that the $\bm{\alpha}^{(i)}$ take at least two distinct values). It follows from \Cref{prop:strictConv}
that $f_T$ is strictly convex. However, a strictly convex function cannot match exactly the subset of data
$(-1,0),(0,0),(1,0)$, as it
consists of three aligned points.
This entails that in any dimension $n\geqslant 2$, there are also data sets that
cannot be matched exactly. Indeed, if $f_T\in \lse_{T}$ is a function of $n$ variables, then, for any vectors $\bm{\alpha},\mathbf{u}\in\mathbb{R}^n$, the function $\bar{f}_T: s\mapsto f_T(\bm{\alpha} + s\mathbf{u})$ of one variable is also in $\mathrm{LSE}_T$. Hence, if a data set
$(\mathbf{x}_i,y_i)$, $i=1,\ldots, m$, is such that a subset of points $(\mathbf{x}_i)_{i\in I}$
is included in an affine line $L$, and if a function $f_T$ matches exactly the
data set, then the function $\bar{f}_T$ solves an exact
matching problem by a univariate function in $\lse_{T}$, and the previous one-dimensional counterexample shows that this problem need
not be solvable.
\end{exa}
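The midpoint argument of the example can be checked directly: any univariate $\mathrm{LSE}_T$ function whose exponents take at least two distinct values satisfies a strict midpoint inequality, and therefore cannot take equal values at three aligned points. A small Python check with illustrative parameters:

```python
import math

# An LSE_T function in one variable whose exponents take at least two
# distinct values is strictly convex, hence it cannot interpolate three
# collinear data points such as (-1,0), (0,0), (1,0).
# Parameters below are illustrative.
T = 0.5
alphas = [0.0, 1.0]    # two distinct exponents -> strict convexity
betas = [0.0, -1.0]

def f_T(x):
    return T * math.log(sum(math.exp((b + a * x) / T)
                            for a, b in zip(alphas, betas)))

x1, x3 = -1.0, 1.0
mid = f_T((x1 + x3) / 2)
chord = (f_T(x1) + f_T(x3)) / 2
assert mid < chord  # strict midpoint inequality: no three aligned values
```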
\subsection{Approximation of log-log-convex data via $\mathrm{GPOS}_T$\label{subsec:posyappr}}
Consider a collection $\mathcal{L}$ of $m$ data pairs,
\begin{equation*}
\mathcal{L} = \{(\mathbf{z}_1,w_1),\dots,(\mathbf{z}_m,w_m) \},
\end{equation*}
where $\mathbf{z}_i\in\R_{>0}^n$, $w_i\in\R_{>0}$, $i=1,\dots,m$, with
\begin{equation*}
w_i = \ell(\mathbf{z}_i)
,\quad i=1,\dots,m,
\end{equation*}
where $\ell:\R_{>0}^n\rightarrow \R_{>0}$ is an unknown log-log-convex function.
The data in $\mathcal{L}$ is referred to as \emph{log-log-convex}.
The following corollary
states that there exists $\psi_T\in\mathrm{GPOS}_T$ that
fits the data $\mathcal{L}$ {with arbitrarily small relative approximation error}.
A subset ${\mathcal R}\subset \R_{>0}^n$ is said to be {\em log-convex}
if its image under the entry-wise $\log$ map is convex.
\begin{cor}
[Universal approximators of log-log-convex functions]\label{cor:mainresult2}
Let $\ell$ be a log-log-convex function defined on a compact
log-convex subset
${\mathcal R}\subset \R_{>0}^n$.
Then, for any $\tilde{\varepsilon} > 0$ there exist $T>0$ and a function $\psi_T \in \mathrm{GPOS}_T$
such that, for all $\mathbf{x}\in\mathcal{R}$,
\begin{equation}\label{eq:relBoun}
\left\vert \frac{\ell(\mathbf{x})-\psi_T(\mathbf{x})}{\min(\ell(\mathbf{x}),\psi_T(\mathbf{x}))}\right\vert\leqslant \tilde{\varepsilon}.
\end{equation}
\end{cor}
\begin{proof}
By using the log-log transformation, define $\tilde{\ell}(\mathbf{q})\doteq\log(\ell(\exp(\mathbf{q})))$. Since $\ell(\mathbf{x})$ is
log-log-convex in $\mathbf{x}$, $\tilde{\ell}(\mathbf{q})$ is convex in $\mathbf{q}=\log \mathbf{x}$.
Furthermore, the set $\mathcal{K}\doteq \log(\mathcal{R})$
is convex and compact since the set $\mathcal{R}$ is log-convex and compact.
Thus, by \Cref{cor:mainresult}, for all ${\varepsilon}\in\R_{>0}$, there exist $T>0$ and a function $f_T \in \mathrm{LSE}_T$ such that $|f_T(\mathbf{q})-\tilde{\ell}(\mathbf{q})|\leqslant {\varepsilon}$ for all $\mathbf{q}\in\mathcal{K}$. Note that, by construction,
\begin{align*}
\exp(f_T(\mathbf{q}))
&=\exp\left(T\log\left(\sum_{k=1}^K \exp (\beta_k/T+ \inner{\bm{\alpha}^{(k)}}{\mathbf{q}/T} )\right)\right)\\
&= \left(\sum_{k=1}^K\exp (\beta_k/T+ \inner{\bm{\alpha}^{(k)}}{\log(\mathbf{x}^{1/T})} )\right)^T\\
&= \left(\sum_{k=1}^K c_k(\mathbf{x}^{1/T})^{\bm{\alpha}^{(k)}}\right)^T=\psi_T(\mathbf{x}),
\end{align*}
where $c_k\doteq \exp(\beta_k/T)=b_k^{1/T}$ and $\psi_T(\mathbf{x})\in\mathrm{GPOS}_T$.
Thus, since, by the reasoning given above,
we have $\exp(\tilde{\ell}(\mathbf{q}(\mathbf{x})))=\ell(\mathbf{x})$ and
$\exp(f_T(\mathbf{q}(\mathbf{x})))=\psi_T(\mathbf{x})$, it follows that
\begin{equation*}
\begin{array}{rl}
\ell(\mathbf{x})-\psi_T(\mathbf{x})&=
\exp(\tilde{\ell}(\mathbf{q}))-\exp(f_T(\mathbf{q}))\\
&=\ell(\mathbf{x})(1-\exp(f_T(\mathbf{q})-\tilde{\ell}(\mathbf{q})))\\
& = \psi_T(\mathbf{x})(\exp(\tilde{\ell}(\mathbf{q})-f_T(\mathbf{q}))-1).
\end{array}
\end{equation*}
Thus, for all $\mathbf{x}\in\mathcal{R}$,
\begin{align*}
\left\vert \frac{\ell(\mathbf{x})-\psi_T(\mathbf{x})}{\ell(\mathbf{x})} \right\vert & \leqslant \sup_{\mathbf{q}\in\mathcal{K}} \vert 1-\exp(f_T(\mathbf{q})-\tilde{\ell}(\mathbf{q}))\vert
\leqslant \tilde{\varepsilon},\\
\left\vert \frac{\ell(\mathbf{x})-\psi_T(\mathbf{x})}{\psi_T(\mathbf{x})} \right\vert& \leqslant \sup_{\mathbf{q}\in\mathcal{K}} \vert \exp(\tilde{\ell}(\mathbf{q})-f_T(\mathbf{q}))-1\vert
\leqslant \tilde{\varepsilon},
\end{align*}
where $\tilde{\varepsilon}\doteq \exp({\varepsilon})-1$.
Hence, \eqref{eq:relBoun} holds since $\tilde{\varepsilon}$ can be made arbitrarily small by letting ${\varepsilon}$ be sufficiently small.
\end{proof}
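The log-log correspondence used in the proof, $\exp(f_T(\log \mathbf{x})) = \big(\sum_{k} c_k\, \mathbf{x}^{\bm{\alpha}^{(k)}/T}\big)^T$ with $c_k=\exp(\beta_k/T)$, can be verified numerically. A Python sketch with illustrative parameters ($n=2$ variables, $K=2$ monomial terms):

```python
import math

# Numerical check of the identity used in the proof:
#   exp(f_T(log x)) = ( sum_k c_k * x^(alpha_k / T) )^T,  c_k = exp(beta_k / T),
# i.e., the log-log transform maps LSE_T exactly onto GPOS_T.
# Parameters are illustrative (n = 2 variables, K = 2 terms).
T = 0.3
alphas = [(1.0, -0.5), (0.2, 2.0)]
betas = [0.1, -0.4]

def f_T(q):
    return T * math.log(sum(
        math.exp(b / T + sum(a * qi for a, qi in zip(al, q)) / T)
        for al, b in zip(alphas, betas)))

def psi_T(x):
    cs = [math.exp(b / T) for b in betas]
    s = sum(c * math.prod(xi ** (a / T) for a, xi in zip(al, x))
            for c, al in zip(cs, alphas))
    return s ** T

for x in [(0.5, 1.5), (2.0, 0.7), (1.0, 1.0)]:
    q = tuple(math.log(xi) for xi in x)
    assert abs(math.exp(f_T(q)) - psi_T(x)) < 1e-9
```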
{The following proposition is now an immediate consequence of
\Cref{cor:mainresult2}, where $\mathcal R$ can be taken as the log-convex hull of the
input data points\footnote{For given points $\mathbf{z}_1,\ldots,\mathbf{z}_m\in\R_{>0}^n$, we define their log-convex hull as the set of vectors $\mathbf{z}= \prod_{i=1}^m\mathbf{z}_i^{\xi_i}$, where
$\xi_i\in[0,1]$ for all $i$ and $\sum_{i=1}^m\xi_i = 1$ (all operations are here intended entry-wise).}.
\begin{prop}\label{cor:llconvappr}
Given a collection of log-log-convex data $\mathcal{L}\doteq\{(\mathbf{z}_i,w_i)\}_{i=1}^m$, for each $\tilde{\varepsilon}\in\R_{>0}$ there exist ${T}\in\R_{>0}$ and a $\psi_T\in \mathrm{GPOS}_T$ such~that \[\left\vert \frac{\psi_T(\mathbf{z}_i)-w_i}{\min(\psi_T(\mathbf{z}_i),w_i)} \right \vert\leqslant \tilde{\varepsilon},\quad i=1,\dots,m.\]
\end{prop}
\begin{rem}
A reasoning analogous to the one used in Example~\ref{rem:nofit} can be employed to
show that, given a collection $\mathcal{L}$ of log-log-convex data pairs, there need not exist $\psi_T\in\mathrm{GPOS}_T$
that matches exactly the data in $\mathcal{L}$, for any $T>0$.
\end{rem}}
Propositions~\ref{cor:convAppr} and \ref{cor:llconvappr} establish that functions in $\lse_{T}$ and
$\mathrm{GPOS}_T$ can be used as universal smooth approximators of convex and log-log-convex data, respectively.
However, there is a difference between the type of approximation of these two classes of functions.
As a matter of fact, given a collection of convex data $\mathcal{C}=\{(\mathbf{x}_i,y_i) \}_{i=1}^m$, there exists
$f_T\in\lse_{T}$ such that the \emph{absolute error} between $f_T(\mathbf{x}_i)$ and $y_i$ can be made arbitrarily small,
provided that $T\in\R_{>0}$ is sufficiently small.
On the other hand, given a collection of log-log-convex data $\mathcal{L}=\{(\mathbf{z}_i,w_i) \}_{i=1}^m$,
there exists $\psi_T\in\mathrm{GPOS}_T$ such that the \emph{relative error}
between $\psi_T(\mathbf{z}_i)$ and $w_i$ can be made arbitrarily small,
provided that $T\in\R_{>0}$ is sufficiently small.
Figure~\ref{fig:impl} summarizes the results that have been established in this section
through \Cref{cor:convAppr} and \ref{cor:llconvappr}.
\begin{figure}[htb]
\centering
\resizebox{0.4\textwidth}{!}{
\includegraphics{posyfitR1-sg-figure0}
}
\caption{Relation among the classes of functions and data.\label{fig:impl}}
\end{figure}
However, it is worth noticing that, since on any compact subset of $\R_{>0}^n$ bounding relative errors
is equivalent to bounding absolute errors, the class $\mathrm{GPOS}_T$ also contains functions $\psi_T$ such that the \emph{absolute error}
between $\psi_T(\mathbf{z}_i)$ and $w_i$ can be made arbitrarily small, provided that $T\in\R_{>0}$ is sufficiently small.
\section{Relation with feedforward neural networks\label{sec:algo}}
Functions in $\lse_{T}$ can
be modeled through a feedforward neural network (\emph{$\mathrm{FFNN}$}) with one hidden layer.
Indeed, consider a $\mathrm{FFNN}$
with $n$ input nodes, one hidden layer with $K$ nodes, and one output node, as depicted in Figure~\ref{fig:appr}.
\begin{figure}[htb]
\centering
\resizebox{0.35\textwidth}{!}{
\includegraphics{posyfitR1-sg-figure1}}
\caption{A feedforward neural network with one hidden layer. \label{fig:appr}}
\end{figure}
Let the activation function of the hidden nodes be
\[s\mapsto \exp(s/T),\]
and let the activation of the output node be
\[s\mapsto T\log(s).\]
Each node in the hidden layer computes a term of the form
$s_k = \inner{\bm{\alpha}^{(k)}}{\mathbf{x}} + \beta_k$,
where the $i$-th component $\alpha^{(k)}_i$ of $\bm{\alpha}^{(k)}$ represents the weight between node $k$ and input $x_i$, and $\beta_k$ is the bias term of node $k$. Each node $k$ thus generates activations
\[a_k = \exp ( \inner{\bm{\alpha}^{(k)}}{\mathbf{x}/T} + \beta_k/T ).\]
We consider the weights from the hidden nodes to the output node to be equal to one, so that the output node computes
$s = \sum_{k=1}^K a_k$ and then, according to the output activation function, the output layer returns
the value
\[
y = T \log (s) = T \log \left(\sum_{k=1}^K a_k\right).
\]
We name such a network an $\mathrm{LSE\text{-}FFNN}$.
Comparing the expression of $y$ with \eqref{eq:lse_lse_reg_trop_exp}, it is readily seen that an
$\mathrm{LSE\text{-}FFNN}$ allows us to represent any function
in $\lse_{T}$.
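The correspondence between the network and the $\mathrm{LSE}_T$ formula can be made explicit in code. The following Python sketch implements the forward pass just described (hidden activations $\exp(s/T)$, unit output weights, output activation $T\log s$) and checks that it coincides with the direct $\mathrm{LSE}_T$ evaluation; the weights are illustrative:

```python
import math

# Forward pass of an LSE-FFNN with n inputs and K hidden nodes: hidden
# activation exp(s/T), unit weights to the output node, output activation
# T*log(s).  By construction, the result equals the LSE_T formula.
# Weights below are illustrative.
T = 0.2
alphas = [(1.0, 0.0), (0.0, 1.0), (-1.0, 2.0)]   # hidden-layer weights
betas = [0.0, 0.5, -0.3]                          # hidden-layer biases

def ffnn(x):
    a = [math.exp((sum(w * xi for w, xi in zip(al, x)) + b) / T)
         for al, b in zip(alphas, betas)]          # hidden activations a_k
    return T * math.log(sum(a))                    # output node

def lse_T(x):
    return T * math.log(sum(
        math.exp((b + sum(w * xi for w, xi in zip(al, x))) / T)
        for al, b in zip(alphas, betas)))

for x in [(0.0, 0.0), (1.0, -2.0), (0.3, 0.7)]:
    assert abs(ffnn(x) - lse_T(x)) < 1e-12
```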
We can then restate \Cref{cor:convAppr} as the following key theorem.
\begin{thm}\label{cor:convApprFFNN}
Given a collection of convex data $\mathcal{C}\doteq\{(\mathbf{x}_i,y_i)\}_{i=1}^m$
generated by an unknown convex function,
for each $\varepsilon\in\R_{>0}$ there exists an $\mathrm{LSE\text{-}FFNN}$
such that \[|f_T(\mathbf{x}_i)-y_i|\leqslant \varepsilon,\quad i=1,\dots,m,\]
where $f_T$ is the input-output function of the $\mathrm{LSE\text{-}FFNN}$.
\end{thm}
\Cref{cor:convApprFFNN} can be viewed as a specialization of the Universal Approximation Theorem \cite{cybenko1989approximation,white1990connectionist,HORNIK1991251,hornik1989multilayer}
to convex functions.
While universal-type approximation theorems provide theoretical approximation guarantees for general FFNN on general classes of functions, our \Cref{cor:convApprFFNN} only provides
guarantees for data generated by convex functions. However, while general FFNN synthesize nonlinear and non-convex functions, $\mathrm{LSE\text{-}FFNN}$ are guaranteed to provide a convex input-output map, and this is a key feature of interest when the
synthesized model is to be used at a later stage as a basis for optimizing over the input variables.
$\mathrm{LSE\text{-}FFNN}$s can also be used to fit log-log-convex data $\mathcal{L} =\{(\mathbf{z}_i,w_i)\}_{i=1}^m$:
by applying a log-log transformation $\mathbf{x}_i = \log \mathbf{z}_i$, $y_i = \log w_i$,
$i=1,\ldots,m$, we simply transform log-log-convex data into convex data
$\mathcal{C} =\{(\mathbf{x}_i,y_i)\}_{i=1}^m$ and train the network on these data. Therefore, the following theorem
is a direct consequence of \Cref{cor:convApprFFNN} and \Cref{cor:mainresult2}.
\begin{thm}\label{cor:convApprFFNN_GPOS}
Given a collection of log-log-convex data $\mathcal{L}\doteq\{(\mathbf{z}_i,w_i)\}_{i=1}^m$
generated by an unknown log-log-convex function,
for each $\tilde{\varepsilon}\in\R_{>0}$ there exists an $\mathrm{LSE\text{-}FFNN}$
such that
\[\left|\frac{\exp(f_T(\log(\mathbf{z}_i)))-w_i}{\min(\exp(f_T(\log(\mathbf{z}_i))),w_i)}\right|\leqslant \tilde{\varepsilon},\quad i=1,\dots,m,\]
where $f_T$ is the input-output function of the $\mathrm{LSE\text{-}FFNN}$.
\end{thm}
\subsection{Implementation considerations}
Given training data $\mathcal{C} =\{(\mathbf{x}_i,y_i)\}_{i=1}^m$, and for fixed $K$ and $T>0$,
the network weights $\bm{\alpha}^{(1)},\dots,\bm{\alpha}^{(K)},\beta_1,\ldots,\beta_K$ can be determined via standard training algorithms, such as
the Levenberg-Marquardt algorithm \cite{marquardt1963algorithm}, the gradient descent with momentum \cite{sutskever2013importance},
or the Fletcher-Powell conjugate gradient \cite{scales1985introduction}, which are, for instance, efficiently implemented in \texttt{Matlab}
through the \texttt{Neural Network Toolbox} \cite{nntoolbox}.
These algorithms tune the network's weights in order to minimize
a loss criterion of the form
\[L = \sum_{i=1}^m L_i(f_T(\mathbf{x}_i) -y_i ) + {R},
\]
where the observation loss $L_i$ is typically a standard quadratic or an absolute value loss, and ${ R}$ is a regularization term that does not depend on the training data.
For given network parameters $\overrightarrow{\bm{\alpha}}$
and $\bm{\beta}
$, using \eqref{eq:scaling} we observe that
\begin{eqnarray*}
f_T^{(\overrightarrow{\bm{\alpha}},\bm{\beta})}(\mathbf{x}_i) -y_i &=&
T f_1^{(\overrightarrow{\bm{\alpha}},\bm{\beta}/T)}(\mathbf{x}_i/T) -y_i \\
&=& T \left(f_1^{(\overrightarrow{\bm{\alpha}},\bm{\beta}/T)}(\mathbf{x}_i/T) - y_i/T \right).
\end{eqnarray*}
Hence, the loss term $\sum_{i=1}^m L_i(f_T(\mathbf{x}_i) -y_i )$ is proportional to
$\sum_{i=1}^m L_i(f_1(\mathbf{x}_i/T) -y_i/T )$ for the usual quadratic and absolute losses.
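The pre-scaling equivalence can be checked numerically: evaluating $f_T^{(\overrightarrow{\bm{\alpha}},\bm{\beta})}$ at $\mathbf{x}$ gives the same value as $T$ times an $\mathrm{LSE}_1$ function with biases $\bm{\beta}/T$ evaluated at $\mathbf{x}/T$. A Python sketch with illustrative parameters:

```python
import math

# Check of the scaling identity: training at temperature T is equivalent
# to training an LSE_1 network on data pre-scaled by 1/T.
# Parameters are illustrative (one input, K = 3 terms).
alphas = [-1.0, 0.5, 2.0]
betas = [0.2, 0.0, -0.7]

def lse(x, T, b):
    return T * math.log(sum(math.exp((bk + a * x) / T)
                            for a, bk in zip(alphas, b)))

T = 0.05
for x in [-2.0, 0.0, 1.3]:
    lhs = lse(x, T, betas)                                # f_T^{(a,b)}(x)
    rhs = T * lse(x / T, 1.0, [bk / T for bk in betas])   # T f_1^{(a,b/T)}(x/T)
    assert abs(lhs - rhs) < 1e-9
```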
Thus, the temperature $T>0$ can be implemented in practice by pre-scaling the data
(i.e., divide the inputs and outputs by $T$), and then feeding such scaled data to an $\mathrm{LSE\text{-}FFNN}$
which synthesizes a function in the LSE (or, equivalently,
$\mathrm{LSE}_1$) class, having
activations $s\mapsto\exp(s)$ in the hidden layer and
$s\mapsto \log(s)$ in the output layer.
Training and simulation of this type of $\mathrm{LSE\text{-}FFNN}$ is implemented in a package we developed,
which works in conjunction with \texttt{Matlab}'s \texttt{Neural Network Toolbox}.
In numerical practice, we shall fix $T>0$ and a value of $K$, train the network
with respect to the remaining model parameters as detailed above,
and possibly iterate by adjusting $T$ and $K$, until a satisfactory fit is eventually found on validation data.
Here, the parameter $T$ controls the {\em smoothness} of the fitting function (as $T$ increases $f_T$ becomes ``smoother''),
and the parameter $K$ controls the {\em complexity} of the model class (as $K$ increases $f_T$ becomes more complex).
\section{Applications to physical examples\label{sec:appli}}
We next illustrate the proposed methodology with practical numerical examples.
In Section~\ref{sec:vibration} we find a convex model expressing the amount of
vibration transmitted by a vehicle suspension system as a function of its mechanical parameters. Similarly, in Section~\ref{sec:propane},
we derive a convex model
relating the peak power generated by the chemical reaction of propane combustion as a function of the initial concentrations of all the involved chemical species.
These models are first trained using the gathered data, and next used for design (e.g., find concentrations that maximize power) by solving
convex optimization and geometric programming problems via efficient numerical algorithms.
This two-step process (model training followed by model exploitation for design)
embodies an effective tool for performing data-driven optimization of complex physical processes.
\subsection{Vibration transmitted by a vehicle suspension system\label{sec:vibration}}
In this numerical experiment, we considered the problem of identifying an $\lse_{T}$ and a
$\mathrm{GPOS}_T$ model for the amount of whole-body vibration transmitted by a vehicle suspension system having 11 degrees of freedom, as depicted in
Figure~\ref{fig:model}.
\begin{figure}[htb!]
\centering
\resizebox{0.35\textwidth}{!}{
\includegraphics{posyfitR1-sg-figure2}
}
\caption{Model for the vehicle suspension system.
\label{fig:model}}
\end{figure}
The model of the vehicle is taken from \cite{zareh2012semi} and includes the dynamics of the seats, the wheels, and
the suspension system. In order to measure the amount of vibration transmitted by the vehicle,
it is assumed that the left wheels of the vehicle are moving on a series of two bumps with constant speed, see \cite{zareh2012semi}
for further details on the vehicle model.
The amount of whole-body vibration is measured following the international standard ISO-2631 \cite{ISO2631}.
Namely, the vertical acceleration of the left seat, $a(t)$,
is frequency-weighted through the function $H(t)$, following the directions given in Appendix~A of \cite{ISO2631},
thus obtaining the filtered signal
\begin{equation*}
a_w(t)=\int_{0}^{t}H(t-\tau) \,a(\tau)\,\mathrm{d}\tau.
\end{equation*}
Then, the amount of transmitted vibration is computed as
\begin{equation*}
V= \left(\frac{1}{\Theta}\int_0^\Theta a_w^2(t)\,\mathrm{d}t\right)^{\frac{1}{2}},
\end{equation*}
where $\Theta$ is the simulation time.
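In discrete time, the weighting and averaging steps amount to a convolution followed by a root-mean-square. The following Python sketch illustrates the computation on a synthetic signal; the kernel below is a placeholder, not the actual ISO-2631 weighting $H$:

```python
import math

# Discrete-time version of the weighting step: a_w is the convolution of
# the impulse response H with the acceleration a, and V is the RMS value
# of a_w over the simulation horizon.  The signal and the kernel below are
# synthetic placeholders, not the ISO-2631 weighting.
dt = 0.01
t = [k * dt for k in range(500)]
a = [math.sin(2 * math.pi * 4.0 * tk) for tk in t]   # toy acceleration
H = [math.exp(-tk / 0.1) * dt for tk in t]           # toy filter kernel

def convolve(h, x):
    # Causal discrete convolution: a_w[k] = sum_j h[j] * x[k-j].
    return [sum(h[j] * x[k - j] for j in range(k + 1)) for k in range(len(x))]

a_w = convolve(H, a)
Theta = t[-1]
V = math.sqrt(sum(aw * aw for aw in a_w) * dt / Theta)   # RMS of a_w
assert V > 0.0
```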
Clearly, $V$ is a complicated function of the input parameters $k_w$, $k_s$, $k_t$, $c_s$, and $c_t$, which can be evaluated by simulating the dynamics given in \cite{zareh2012semi}.
However, manipulation and parameter design
using direct and exact evaluation of $V$
by integration of the dynamics can be very costly
and time consuming. Therefore,
we are interested in obtaining a simpler model via an $\mathrm{LSE\text{-}FFNN}$.
It is here worth noticing that, in practice, we may not know whether the function we are dealing with is convex, log-log-convex, or neither. Nevertheless, by fitting an $\mathrm{LSE\text{-}FFNN}$ to the observed data, we can obtain a convex (or log-log-convex) approximation of the data and check directly a posteriori whether the approximation error is satisfactory.
Further, certain types of physical data may suggest that a $\mathrm{GPOS}_T$ model might be suitable for modeling them: it is the case with data where the inputs are physical quantities that are inherently positive (e.g., in this case, stiffnesses and damping coefficients) and likewise the output quantity is also inherently positive (e.g., the mean-squared acceleration level).
In this example, we identified both a model in $\mathrm{LSE}_{T}$ and a model in $\mathrm{GPOS}_{T}$ for $V$.
Firstly, a set of 250 data points
\[\mathcal{S}\doteq\{(\mathbf{x}_i, V_i) \}_{i=1}^{250},\]
has been gathered by simulating the dynamics of the system depicted in Figure~\ref{fig:model}
for randomly chosen values of $k_w$, $k_s$, $k_t$, $c_s$, and $c_t$ with the distributions shown in Table~\ref{tab:param}.
\begin{table}[htb!]
\caption{Distribution of the parameters in the multi-body simulations.\label{tab:param}}
\centering
{\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{6pt}
\begin{tabular}{lcc}
\hline
Parameter & Distribution & Dimension\\
\hline
$k_w$ (stiffness of the wheel) & $\mathcal{N}(175.41,\,17.1)$ & $\mathrm{kN/m}$\\
$k_s$ (stiffness of the suspension) & $\mathcal{N}(17.424,\,1.72)$ & $\mathrm{kN/m}$\\
$k_t$ (stiffness of the seat) & $\mathcal{N}(1.747,\,0.17)$ & $\mathrm{kN/m}$\\
$c_s$ (damping of the suspension) & $\mathcal{N}(1.465,\,0.15)$ & $\mathrm{kN\,s/m}$\\
$c_t$ (damping of the seat) & $\mathcal{N}(0.697,\,0.07)$ & $\mathrm{kN\,s/m}$\\
\hline
\end{tabular}
}
\end{table}
\noindent
An $\mathrm{LSE\text{-}FFNN}$ with
5 inputs
and $K=10$ hidden neurons
has been implemented by interfacing
the \texttt{Neural Network Toolbox} \cite{nntoolbox} with
a \texttt{Convex\_Neural\_Network} module that we developed, which provides
a \texttt{convexnet} function that can be used for training the $\mathrm{LSE\text{-}FFNN}$.
The temperature parameter $T$
has been determined via a campaign of several cross validation experiments with varying values of $T$.
For the $\mathrm{LSE}_{T}$ model, the best fit in terms of mean absolute error has been obtained for $T=0.01$.
The same temperature value has been obtained for the $\mathrm{GPOS}_{T}$ model.
After the training, the responses of the $\mathrm{LSE}_{T}$ and $\mathrm{GPOS}_{T}$ models
to the inputs $\{\mathbf{x}_i\}_{i=201}^{250}$ (which have not been used for training) have been
compared with $\{V_i \}_{i=201}^{250}$, with the outputs of a classical $\mathrm{FFNN}$ with
symmetric sigmoid activation function for the hidden layer (with the same number of
hidden nodes) and linear activation function for the output layer and with the output of an $\mathrm{MA}$
function (with $10$ terms), which has been trained on the same data. In particular, the
$\mathrm{MA}$ function has been trained by using the heuristic given in \cite{magnani2009convex},
whereas the $\mathrm{FFNN}$, the $\lse_{T}$ and the $\mathrm{GPOS}_T$ networks have been trained by using
the \texttt{Neural Network Toolbox}.
Figures~\ref{fig:num} and \ref{fig:errors} depict the estimates and the approximation errors obtained
by using the $\mathrm{FFNN}$, $\mathrm{MA}$, $\mathrm{LSE}_{T}$, and $\mathrm{GPOS}_{T}$ models, whereas
Table~\ref{tab:predErr} summarizes the error of each model.
\begin{figure}[htb!]
\centering
\includegraphics{posyfitR1-sg-figure3}
\caption{Results of the numerical tests.\label{fig:num}}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics{posyfitR1-sg-figure4}
\caption{Absolute approximation errors.\label{fig:errors}}
\end{figure}
\begin{table}[htb!]
\caption{Prediction errors\label{tab:predErr}}
\centering
{\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{3pt}
\begin{tabular}{ccccc}
\hline
Method & Mean abs. err. & Mean rel. err.& Max abs. err. & Max rel. err.\\
\hline
$\mathrm{FFNN}$ & $0.95\,\mathrm{mm/s^2}$& $2.36\%$ & $3.54\,\mathrm{mm/s^2}$ & $11.6\%$\\
$\mathrm{MA}$ & $0.96\,\mathrm{mm/s^2}$& $2.45\%$ & $5.38\,\mathrm{mm/s^2}$ & $20.73\%$\\
$\mathrm{LSE}_{T}$ & $0.81\,\mathrm{mm/s^2}$& $1.98\%$& $3.25\,\mathrm{mm/s^2}$ & $7.85\%$\\
$\mathrm{GPOS}_{T}$ & $0.71\,\mathrm{mm/s^2}$& $1.69\%$& $2.79\,\mathrm{mm/s^2}$ & $5.87\%$\\
\hline
\end{tabular}
}
\end{table}
As shown by Table~\ref{tab:predErr}, the $\mathrm{GPOS}_T$ model has the best performance in terms of absolute and relative errors.
{The $\mathrm{FFNN}$ model $\phi$, the $\mathrm{MA}$ model $f_0$,
the $\mathrm{LSE}_{T}$ model $f_{T}$, and the $\mathrm{GPOS}_{T}$ model $\psi_{T}$
have next been used to design the parameters $\mathbf{x}$ that minimize the amount of vibration~$V$.
Namely, letting $\bar{\mathbf{x}}$ be the mean value of the random variable used to find the models,
the nonlinear programming problem
\begin{equation} \label{eq:nonlinear}
\left\vert \begin{array}{rl}
\text{minimize }& \phi(\mathbf{x})\\
\text{subject to } & 0.9 \,\bar{\mathbf{x}} \leqslant \mathbf{x} \leqslant 1.1\,\bar{\mathbf{x}} ,\\
\end{array}\right.
\end{equation}
has been solved by using the \texttt{Matlab} function \texttt{fmincon}. Similarly,
the convex optimization problems
\begin{equation} \label{eq:convexMA}
\left\vert \begin{array}{rl}
\text{minimize }& f_{0}(\mathbf{x})\;\mbox{[or $f_T(\mathbf{x})$]}\\
\text{subject to } & 0.9 \,\bar{\mathbf{x}} \leqslant \mathbf{x}\leqslant 1.1\,\bar{\mathbf{x}} ,\\
\end{array}\right.
\end{equation}
and the geometric program
\begin{equation} \label{eq:geometricProgram}
\left\vert \begin{array}{rlll}
\text{minimize }& \psi_{T}(\mathbf{x})\\
\text{subject to } & 0.9 \,\bar{x}_i\, x_i^{-1}\leqslant 1, &\quad i=1,\dots,5,\\
& 1.1 \,\bar{x}_i^{-1}\, x_i\leqslant 1, &\quad i=1,\dots,5,\\
\end{array}\right.
\end{equation}
have been solved by using \texttt{CVX}, a package for solving convex and geometric programs \cite{boyd2007tutorial,cvx,gb08,duffin1967geometric}.
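For readers without access to dedicated solvers, a box-constrained problem of the form \eqref{eq:convexMA} can also be tackled with a simple projected-gradient loop, since the gradient of an $\mathrm{LSE}_T$ function is a softmax-weighted average of the exponents $\bm{\alpha}^{(k)}$. A toy one-dimensional Python sketch (the paper itself uses \texttt{fmincon} and \texttt{CVX}):

```python
import math

# Projected gradient descent on a box-constrained LSE_T minimization.
# The gradient of f_T is the softmax-weighted average of the alpha^(k).
# Toy one-dimensional model with illustrative parameters; this is a
# sketch, not the solution method used in the paper.
T = 0.5
alphas = [-1.0, 1.0]
betas = [0.0, 0.0]

def f_T(x):
    return T * math.log(sum(math.exp((b + a * x) / T)
                            for a, b in zip(alphas, betas)))

def grad(x):
    w = [math.exp((b + a * x) / T) for a, b in zip(alphas, betas)]
    return sum(wk * a for wk, a in zip(w, alphas)) / sum(w)

lo, hi = -2.0, 3.0            # box constraints
x = hi                        # start at a bound
for _ in range(500):
    x = min(hi, max(lo, x - 0.1 * grad(x)))   # gradient step + projection

# Unconstrained minimizer of T*log(e^{-x/T}+e^{x/T}) is x = 0, which lies
# inside the box, so the iterates converge there.
assert abs(x) < 1e-3 and abs(f_T(x) - T * math.log(2)) < 1e-6
```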
Then, the dynamics of the multibody system depicted in Figure~\ref{fig:model} have been simulated
with the optimal values gathered by solving either~\eqref{eq:nonlinear}, \eqref{eq:convexMA},
or~\eqref{eq:geometricProgram}.
The computation of the solution to~\eqref{eq:nonlinear} required $4.813\,\mathrm{s}$ (larger than
the computation times reported in Table~\ref{tab:optimVal}, since convex optimization tools cannot be used for the non-convex model $\phi$),
and the computed optimal solution led to a total amount of vibration equal to $46.525\,\mathrm{mm/s^2}$.
On the other hand, the results obtained by solving \eqref{eq:convexMA},
and~\eqref{eq:geometricProgram}
are reported in Table~\ref{tab:optimVal}.
\begin{table}[htb!]
\caption{Results of the simulations with the optimal values\label{tab:optimVal}}
\centering
{\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{8pt}
\begin{tabular}{ccc}
\hline
Problem solved & Computing time & Amount of vibration\\
\hline
$f_0$
& $0.922\,\mathrm{s}$ & $30.513\,\mathrm{mm/s^2}$\\
$f_T$ & $1.628\,\mathrm{s}$ & $29.723\,\mathrm{mm/s^2}$ \\
$\psi_T$& $0.554\,\mathrm{s}$ & $29.609\,\mathrm{mm/s^2}$\\
\hline
\end{tabular}}
\end{table}
The results given in Table~\ref{tab:optimVal} highlight the effectiveness of the $\mathrm{GPOS}_T$ model, which indeed yields the best design in the least computing time.}
\subsection{Combustion of propane\label{sec:propane}}
In this numerical experiment, we considered the problem of identifying a convex
and a log-log-convex model for the peak power
generated through the combustion of propane. We adopted the reaction network for the combustion
of propane presented in \cite{jachimowski1984chemical}, which consists of 83
reactions and 29 chemical species; see Figure~\ref{fig:stoic} for a graphical representation of the stoichiometric matrix
of this chemical reaction network.
\begin{figure}[thb]
\centering
\resizebox{0.22\textwidth}{!}{
\includegraphics{posyfitR1-sg-figure5}
}
\caption{Stoichiometric matrix of the reaction network for the combustion of propane, in which
each row corresponds to a different chemical reaction and each column corresponds to
a different chemical species.\label{fig:stoic}}
\end{figure}
The instantaneous power generated by the combustion is approximately given by
\begin{subequations}\label{eq:totalpower}
\begin{equation}\label{eq:power}
P(t)=\Delta_c H^e \frac{\mathrm{d}\, m(t)}{\mathrm{d}\,t},
\end{equation}
where $\Delta_c H^e$ is the calorific value of propane ($\simeq 2220\,\mathrm{kJ}/\mathrm{mol}$)
and $m(t)$ denotes the number of moles of propane. Hence, the peak power generated by this reaction~is
\begin{equation}\label{eq:peakpower}
\overline{P}\doteq\max_{t\in\R_{\geqslant 0}} P(t).
\end{equation}
\end{subequations}
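As a concrete illustration of~\eqref{eq:totalpower}, the sketch below estimates $\overline{P}$ by numerical differentiation of a hypothetical exponentially decaying $m(t)$, standing in for the simulated mean trajectory; the absolute value fixes the sign convention, since $m(t)$ decreases while the released power is positive.

```python
import numpy as np

DELTA_C_H = 2220.0e3  # calorific value of propane, J/mol (~2220 kJ/mol)

# Hypothetical mean trajectory of propane moles m(t): an exponential decay
# standing in for the average over many stochastic simulations.
t = np.linspace(0.0, 10.0, 2001)      # time grid, s
m = 1.5e-12 * np.exp(-0.8 * t)        # moles of propane, decaying as it burns

# P(t) = Delta_c H * |dm/dt|; the magnitude of dm/dt is largest at t = 0
# for this trajectory, so the peak power occurs at ignition.
P = DELTA_C_H * np.abs(np.gradient(m, t))
P_peak = P.max()
```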
Clearly, $\overline{P}$ is a function of the initial concentrations $\mathbf{x}$ of all the involved chemical species
and can be obtained by performing exact numerical simulations of the chemical network (e.g., by using
the stochastic simulation algorithm given in \cite{possieri2018stochastic}), taking averages to determine
the mean behavior of $m(t)$, and using \eqref{eq:totalpower}. However, performing all these computations
is rather costly, since a large number of simulations has to be performed in order to
take averages. Hence, in order to maximize the effectiveness of the combustion, it is more convenient
to obtain a simplified ``surrogate'' model relating $\mathbf{x}$ and $\overline{P}$. In particular, convex and log-log-convex models relating $\mathbf{x}$
with $\overline{P}^{-1}$ are appealing since they allow the initial concentrations
that maximize $\overline{P}$ to be designed by means of computationally efficient algorithms.
In this example, we identify a model in $\mathrm{LSE}_{T}$ and a model
in $\mathrm{GPOS}_{T}$ for ${\overline{P}}^{-1}$ as a function of $\mathbf{x}$.
We observe that, also in this example, $\mathrm{GPOS}_{T}$ models appear to be potentially well adapted to the physics of the problem, since all input variables are positive concentrations of chemicals, and the output (peak power) is also positive.
First, a collection of 500 data points \[\mathcal{S}\doteq\{(\mathbf{x}_i,\overline{P}_i^{-1}) \}_{i=1}^{500}\] has been gathered
by choosing randomly the initial condition $\mathbf{x}_i$ of the
chemical reaction network with uniform distribution in the interval $[1.494,1.827]\,\mathrm{pmol/m^3}$,
by performing $1000$ simulations of the chemical reaction network through the algorithm given
in \cite{possieri2018stochastic}, by taking averages to determine the expected time behavior of
$m(t)$, and using \eqref{eq:totalpower} to determine the value of $\overline{P}_i$, $i=1,\dots,500$.
Then, the function \texttt{convexnet} of the toolbox \texttt{Convex\_Neural\_Network} has been used
to design an $\mathrm{LSE\text{-}FFNN}$ with
$n= 29$ input nodes, 1 output node, and 1 hidden layer with $K=3$ neurons.
Several preliminary cross-validation experiments have been carried out to determine a satisfactory value for the temperature parameter $T$, which turned out to be $T=0.01$ for $\mathrm{LSE}_{T}$ models and
$T=0.005$ for $\mathrm{GPOS}_{T}$ models.
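To make the role of the temperature parameter concrete, the sketch below evaluates a generic $\mathrm{LSE}_T$ ``soft max'' of affine functions with hypothetical coefficients: as $T\to 0$ the model approaches the max-affine function, while larger $T$ gives a smoother function that upper-bounds it.

```python
import numpy as np

def lse_T(x, A, b, T):
    """LSE_T model: a 'soft max' of affine functions with temperature T,
    f_T(x) = T * log(sum_k exp((a_k . x + b_k) / T)).
    As T -> 0 it approaches the max-affine value max_k (a_k . x + b_k)."""
    z = (A @ x + b) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))  # stable log-sum-exp

# Hypothetical 2-input model with K = 3 affine pieces.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
b = np.array([0.0, -0.2, 0.1])
x = np.array([0.3, 0.7])

f_small_T = lse_T(x, A, b, T=0.001)  # close to the max-affine value
f_large_T = lse_T(x, A, b, T=0.5)    # smoother, upper-bounds the max
max_affine = (A @ x + b).max()
```

The cross-validated choice of $T$ thus trades off smoothness of the fitted model against tightness with respect to the underlying max-affine approximation.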
After training the network, we considered the inputs $\{\mathbf{x}_i\}_{i=251}^{500}$ (which have not been used for training)
and computed the corresponding outputs of the $\mathrm{LSE}_{T}$ model $f_T$ and of the $\mathrm{GPOS}_{T}$ model $\psi_T$. These outputs
are compared with $\{\overline{P}_i^{-1} \}_{i=251}^{500}$, with the outputs of a classical $\mathrm{FFNN}$ with
symmetric sigmoid activation function for the hidden layer (with the same number of
hidden nodes) and linear activation function for the output layer, which has been trained by using the
same data, and with the outputs of an $\mathrm{MA}$ function $f_0$ (with 3 terms)
that has been trained on the same data by using the
heuristic given in \cite{magnani2009convex}.
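For reference, the max-affine fitting heuristic of \cite{magnani2009convex} alternates between partitioning the data according to the active affine piece and refitting each piece by least squares. The Python sketch below is a simplified deterministic reading of that idea (the initialization by $x$-order and the synthetic one-dimensional data set are our own choices for illustration):

```python
import numpy as np

def fit_max_affine(X, y, K=3, iters=20):
    """Fit a max-affine model f0(x) = max_k (a_k . x + b_k) with a
    least-squares partition heuristic in the spirit of Magnani & Boyd
    (2009): alternately refit each affine piece on the points where it
    is active, then reassign points to their active piece."""
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])          # affine features [x, 1]
    # Deterministic initial partition: split points into K groups by x-order.
    order = np.argsort(X[:, 0])
    assign = np.empty(n, dtype=int)
    assign[order] = np.repeat(np.arange(K), -(-n // K))[:n]
    theta = np.zeros((K, d + 1))
    for _ in range(iters):
        for k in range(K):
            idx = assign == k
            if idx.sum() > d:                     # enough points to refit
                theta[k], *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)
        assign = (Xa @ theta.T).argmax(axis=1)    # active-piece update
    return theta

# Fit samples of the convex function y = max(x, x^2) on [0, 2].
X = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
y = np.maximum(X[:, 0], X[:, 0] ** 2)
theta = fit_max_affine(X, y, K=3)
pred = (np.hstack([X, np.ones((200, 1))]) @ theta.T).max(axis=1)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Like most alternating heuristics, this is not guaranteed to find the globally best partition, which is one motivation for the smooth $\mathrm{LSE}_T$ and $\mathrm{GPOS}_T$ trainings compared against it here.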
Figures~\ref{fig:num2} and \ref{fig:errors2} depict the estimates and the approximation errors obtained
by using the $\mathrm{FFNN}$, $\mathrm{MA}$, $\mathrm{LSE}_{T}$ and $\mathrm{GPOS}_{T}$ models, whereas
Table~\ref{tab:predErr2}
summarizes the prediction error of each model.
\begin{figure}[htb!]
\centering
\includegraphics{posyfitR1-sg-figure6}
\caption{Results of the numerical tests.\label{fig:num2}}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics{posyfitR1-sg-figure7}
\caption{Approximation errors.\label{fig:errors2}}
\end{figure}
\begin{table}[htb!]
\caption{Prediction errors\label{tab:predErr2}}
\centering
{\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{4pt}
\begin{tabular}{ccccc}
\hline
Method & Mean abs. err. & Mean rel. err. & Max abs. err. & Max rel. err.\\
\hline
$\mathrm{FFNN}$ & $42.15\,\mathrm{W^{-1}}$ & $1.28\%$ & $172.75\,\mathrm{W^{-1}}$& $4.62\%$ \\
$\mathrm{MA}$ & $30.38\,\mathrm{W^{-1}}$ & $9.27\%$ & $115.81\,\mathrm{W^{-1}}$& $4.33\%$ \\
$\mathrm{LSE}_{T}$ & $20.48\,\mathrm{W^{-1}}$ & $0.62\%$ & $130.63\,\mathrm{W^{-1}}$& $4.18\%$ \\
$\mathrm{GPOS}_{T}$ & $14.94\,\mathrm{W^{-1}}$ & $0.45\%$ & $126.31\,\mathrm{W^{-1}}$& $4.03\%$ \\
\hline
\end{tabular}
}
\end{table}
As shown by Table~\ref{tab:predErr2}, the $\mathrm{LSE}_{T}$ and the $\mathrm{GPOS}_{T}$ models
have improved prediction capabilities with respect to the classical $\mathrm{FFNN}$
and the $\mathrm{MA}$ model. In particular, the
model in $\mathrm{GPOS}_{T}$ presents the best approximation performance.
Moreover, the model $f_0$ in $\mathrm{MA}$, the model $f_{T}$ in $\mathrm{LSE}_{T}$, and the model $\psi_{T}$ in $\mathrm{GPOS}_{T}$
can be used
to efficiently design initial concentrations $\mathbf{x}$ that are within the considered range and that maximize the peak power.
In fact, the convex optimization problems
\begin{equation} \label{eq:convexMA2}
\left\vert \begin{array}{rl}
\text{minimize }& f_{0}(\mathbf{x}) \;\mbox{[or $f_{T}(\mathbf{x})$]}\\
\text{subject to } & 1.49\cdot 10^{-12} \leqslant \mathbf{x} \leqslant 1.83\cdot 10^{-12} ,
\end{array}\right.
\end{equation}
as well as the geometric program
\begin{equation} \label{eq:geometricProgram2}
\left\vert \begin{array}{rll}
\text{minimize }& \psi_{T}(\mathbf{x})\\
\text{subject to } & 1.49\cdot 10^{-12}\, x_i^{-1}\leqslant1,& i=1,\dots,29,\\
& (1.83\cdot 10^{-12})^{-1} \, x_i\leqslant1,& i=1,\dots,29,\\
\end{array}\right.
\end{equation}
can be efficiently solved by using any solver able to deal with convex optimization problems.
On the other hand, letting $\phi$ be the $\mathrm{FFNN}$ model, solving a problem
of the form \eqref{eq:convexMA2} with $\phi$ as the objective
may be rather challenging due to the fact that it need not be (and generically is not) convex.
In fact, our attempts to solve such a nonlinear programming problem via
\texttt{fmincon} failed, whereas we have
been able to find the solutions of problems~\eqref{eq:convexMA2} and \eqref{eq:geometricProgram2}
by using the \texttt{Matlab} toolbox \texttt{CVX}.
Table~\ref{tab:optimVal2} reports the computing time required to determine such solutions
and the corresponding peak power obtained by simulating the chemical reaction network.
\begin{table}[htb!]
\caption{Results of the simulations with the optimal values\label{tab:optimVal2}}
\centering
{\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{8pt}
\begin{tabular}{ccc}
\hline
Problem solved & Computing time & Peak power\\
\hline
$f_0$ & $1.279\,\mathrm{s}$ & $0.363\,\mathrm{mW}$ \\
$f_T$ & $2.968\,\mathrm{s}$ & $0.368\,\mathrm{mW}$ \\
$\psi_T$ & $0.655\,\mathrm{s}$ & $0.369\,\mathrm{mW}$\\
\hline
\end{tabular}}
\end{table}
As shown by Table~\ref{tab:optimVal2}, the model in $\mathrm{GPOS}_{T}$
presents the best performance in the considered example.
\section{Conclusions\label{sec:concl}}
A feedforward neural network with exponential activation functions in the inner layer and logarithmic activation in the output layer
can approximate with arbitrary precision any convex function on a convex and compact input domain.
Similarly, any log-log-convex function can be approximated to arbitrary relative precision by a class of generalized posynomial functions.
This allows us to construct convex (or log-log-convex) models that approximate observed data, with the advantage over standard feedforward networks that the synthesised input-output map is convex (or log-log-convex) in the input variables, which makes it readily amenable to efficient optimization via convex or geometric programming.
The techniques given in this paper enable optimization-based design methods
in which convex optimization is applied to a surrogate
model identified from data.
Of course, some data might be more suitable than others to be approximated via
convex or log-log-convex models: if the (unknown) data-generating function underlying the observed data is indeed convex (or log-log-convex), then we may expect very good results in the fitting via LSE$_T$ functions (or $\mathrm{GPOS}_T$ functions). Actually, even when
convexity or log-log-convexity of the data-generating function is not known a priori, we can in many cases find a data fit of quality comparable to, or even better than, the one obtained via general non-convex neural network models, with the clear advantage of having a model possessing the additional and desirable feature of convexity.
\bibliographystyle{ieeetr}
\newenvironment{methods}{%
\section*{Methods}%
\setlength{\parskip}{0pt}%
}{}
\newenvironment{addendum}{%
\setlength{\parindent}{0in}%
\small%
\begin{list}{Acknowledgements}{%
\setlength{\leftmargin}{0in}%
\setlength{\listparindent}{0in}%
\setlength{\labelsep}{0em}%
\setlength{\labelwidth}{0in}%
\setlength{\itemsep}{12pt}%
\let\makelabel\addendumlabel}
}
{\end{list}\normalsize}
\newcommand*{\addendumlabel}[1]{\textbf{#1}\hspace{1em}}
\newcommand{\onlinecite}[1]{\hspace{-1 ex} \nocite{#1}\citenum{#1}}
\setlength{\parindent}{0.39in}
\setlength{\parskip}{6pt}
\newcommand{\spacing}[1]{\renewcommand{\baselinestretch}{#1}\large\normalsize}
\spacing{2}
\renewcommand{\figurename}{{\bf{Figure}}}
\let\oldtitle=\title
\def\title#1{\oldtitle{\sffamily\bfseries{#1}}}
\let\oldauthor=\author
\def\author#1{\oldauthor{\sffamily\bfseries\normalsize{#1}}}
\def\@maketitle{%
\newpage\spacing{1}\setlength{\parskip}{12pt}%
{\Large\bfseries\noindent\sloppy \textsf{\@title} \par}%
{\noindent\@author}%
}
\newenvironment{affiliations}{%
\setcounter{enumi}{1}%
\setlength{\parindent}{0in}%
\slshape\sloppy%
\begin{list}{\upshape$^{\arabic{enumi}}$}{%
\usecounter{enumi}%
\setlength{\leftmargin}{0in}%
\setlength{\topsep}{0in}%
\setlength{\labelsep}{0in}%
\setlength{\labelwidth}{0in}%
\setlength{\listparindent}{0in}%
\setlength{\itemsep}{0ex}%
\setlength{\parsep}{0in}%
}
}{\end{list}\par\vspace{12pt}}
\renewcommand{\labelitemi}{$\ast$}
\renewenvironment{abstract}{%
\setlength{\parindent}{0in}%
\setlength{\parskip}{0in}%
\sffamily\bfseries%
}{\par\vspace{-6pt}}
\title{Emergence of superconductivity from the dynamically heterogeneous insulating state in
La$_{\bm{2-x}}$Sr$_{\bm{x}}$CuO$_{\bm{4}}$}
\author{Xiaoyan Shi$^{1}$, G. Logvenov$^{2,3}$, A. T. Bollinger$^{2}$, I. Bo\v{z}ovi\'{c}$^{2}$, C. Panagopoulos$^{4,5}$ \& Dragana Popovi\'{c}$^{1*}$}
\begin{document}
\maketitle
\begin{affiliations}
\item National High Magnetic Field Laboratory and Department of Physics, Florida State University, Tallahassee, Florida 32310, USA
\item Brookhaven National Laboratory, Upton, New York 11973, USA
\item Max Planck Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart, Germany
\item Department of Physics, University of Crete and FORTH, GR-71003 Heraklion, Greece
\item Division of Physics and Applied Physics, Nanyang Technological University, 637371 Singapore
\end{affiliations}
\begin{itemize}
\item E-mail: [email protected]
\end{itemize}
\newpage
\begin{abstract}
A central issue for copper oxides is the nature of the insulating ground state at low carrier densities and the emergence of high-temperature superconductivity from that state with doping. Even though this superconductor-insulator transition (SIT) is a zero-temperature
transition, measurements are not usually carried out at low temperatures. Here we use magnetoresistance to probe both the insulating state at very low temperatures and the presence of superconducting fluctuations in La$_{\bm{2-x}}$Sr$_{\bm x}$CuO$_{\bm 4}$ (LSCO) films, for doping levels that range from the insulator to the superconductor ($\bm{x=0.03-0.08}$). We observe that the charge glass behavior, characteristic of the insulating state, is suppressed with doping, but it coexists with superconducting fluctuations that emerge already on the insulating side of the SIT. The unexpected quenching of the superconducting fluctuations by the competing charge order at low temperatures provides a new perspective on the mechanism for the SIT.
\end{abstract}
In cuprates, the long-range-ordered antiferromagnetic (AF) ground state of the parent Mott insulator is destroyed quickly by adding charge carriers\cite{Kastner-review}. The electronic ground state that separates it from a superconductor, which emerges at somewhat higher doping, remains poorly understood. The high-temperature properties of this intermediate, ``pseudogap'' region have been studied extensively, in particular in the underdoped regime, i.e. on the superconducting side of the SIT. For example, high magnetic fields that were applied to suppress superconductivity revealed the insulating nature of the underlying electronic state\cite{Ando-1995,GSB-1996}. On the other hand, there are very few data at low temperatures, especially on the insulating side of the SIT. In LSCO, it is known that, at low enough temperatures, the hole-poor, finite-size AF domains located in CuO$_2$ ($ab$) planes undergo cooperative freezing\cite{Cho92,Niedermayer98,magsusc-Lavrov} into an inhomogeneous, but magnetically ordered phase, often referred to as a cluster spin glass. The doped holes seem to be clustered into areas that separate these AF domains\cite{Julien99,Singer02NQR,Dumm03EM,Ando02Ranisotropy,Ando03MR}. In LSCO with $x=0.03$, they exhibit correlated, glassy behavior at even lower temperatures, deep within the spin-glass phase ($T \ll T_{SG}$)\cite{Ivana-PRL,Jelbert08,Ivana-pMR,Shi-PhysicaB}, suggestive of a charge glass transition as $T\rightarrow 0$. The key question is how such an insulating, dynamically heterogeneous ground state evolves with doping and gives way to high-temperature superconductivity\cite{Vlad-Christos}.
On general grounds, the behavior near the zero-temperature SIT is expected to be influenced by quantum fluctuations. In the case of the electrostatically-induced SIT\cite{LSCO-SIT,YBCO-SIT}, the scaling analysis of the temperature dependence of the resistance $R(T)$ in LSCO near the critical doping $x_c\approx 0.06$ was interpreted\cite{LSCO-SIT} in terms of models\cite{MPAFisher} where Cooper pairs emerge already on the insulating side\cite{torque-Li,Diag-Li}: the transition is driven by quantum phase fluctuations and the localized pairs form a Bose glass.
However, one could question whether the extrapolation of the experimental results to low temperatures is accurate, and whether the effects of electrostatic doping are equivalent to those of chemical doping. In this study, we use an independent and more direct technique to probe superconducting fluctuations and the properties of the insulating state near the SIT as a function of chemical doping; moreover, we extend the temperature range of measurements down to 0.3~K.
The 100~nm thick films of LSCO were grown by atomic-layer-by-layer molecular beam epitaxy (ALL-MBE), which provides exquisite control of the thickness and chemical composition of the films\cite{MBE} (see Methods). The samples with $0.03\leq x\leq0.06$ show insulating behavior in the in-plane $R(T)$, while those with $0.065\leq x\leq0.08$ become superconductors below the critical temperature $T_{c}(x)$ (Fig.~\ref{fig:RvsT}). Compared to LSCO single crystals\cite{Ando-Tdep}, we find that the films are more resistive for the same nominal doping.
\begin{figure}
\centerline{\includegraphics[width=12cm]{XShi_fig1_resubmit.eps}}
\caption{\textbf{Temperature dependence of the in-plane resistivity $\rho$ for LSCO thin films with different doping levels $x$, as shown.} The critical temperatures $T_c$, defined as the transition midpoint (on a linear scale), are $(6\pm 1)$~K, $(9\pm 1)$~K and $(12\pm 1)$~K, for $x=0.065, 0.07, 0.08$ samples, respectively.}
\label{fig:RvsT}
\end{figure}
However, while insulating $R(T)$ for $x=0.03$ and 0.05 samples is described well by two-dimensional variable-range hopping\cite{Shi-PhysicaB}, the resistance increase with decreasing temperature is much weaker for $x=0.055$ and $x=0.06$ and cannot be fitted to any simple functional form.
At the onset of the charge glass regime in LSCO single crystals with $x=0.03$, observed at $T \ll T_{SG}$, a difference appears between zero-field resistance $R(H=0)$ measured after zero-field cooling and cooling in a magnetic field\cite{Ivana-PRL,Ivana-pMR}. This difference becomes more pronounced with decreasing temperature and it reflects the presence of frozen AF domains, such that only holes in the domain boundaries contribute to transport. The magnetic field affects the domain structure because there is a weak ferromagnetic moment\cite{Thio}, oriented parallel to the $c$ axis, associated with each AF domain. We find that all non-superconducting LSCO films exhibit such history dependence (Fig.~\ref{fig:ZFC-FC}) at $T<T^{\dagger}(x)$,
\begin{figure}
\centerline{\includegraphics[width=14cm]{XShi_fig2_resubmit.eps}}
\caption{\textbf{The relative difference between field-cooled (FC) and zero-field-cooled (ZFC) ${\bm{R(H=0)}}$ in
LSCO films.}
The doping levels are $x=0.03, 0.05, 0.055$, and $0.06$, as shown. In the FC protocol, the field was oriented perpendicular to CuO$_2$ planes and applied at $T>10$~K. For each doping, $\mu_{0}H = 9$~T, 5~T, 2~T and 1~T were used during field cooling. Arrows show $T^{\dagger}(x)$, the temperature where the difference between FC and ZFC values vanishes. Solid lines are exponential fits to guide the eye.}
\label{fig:ZFC-FC}
\end{figure}
where $T^{\dagger}$ does not depend on the magnitude or the orientation of the magnetic field used during field cooling, but it decreases with doping.
Another manifestation of the onset of the charge glass behavior in strongly insulating, lightly doped La$_2$CuO$_4$ is the emergence of a hysteretic, positive magnetoresistance at low fields\cite{Ivana-PRL,Ivana-pMR,Shi-PhysicaB}. In LSCO films that exhibit variable-range hopping transport, this effect is indeed observed at low enough temperatures (Fig.~\ref{fig:MR}a,b), giving rise to the history dependent
\begin{figure}
\centerline{\includegraphics[width=15cm]{XShi_fig3_resubmit.eps}}
\caption{\textbf{In-plane magnetoresistance of non-superconducting LSCO films at different temperatures.} For each curve, the field was applied following zero-field cooling from $T\sim 10$~K. The arrows show the direction of $H$ sweeps. Where shown, the error bars correspond to the typical change in the magnetoresistance due to temperature fluctuations. The transverse ($H\parallel c$) magnetoresistance is shown for \textbf{a,} $x=0.03$, \textbf{b,} $x=0.05$, \textbf{c,} $x=0.055$ and \textbf{d,} $x=0.06$ doping levels. \textbf{e,} The data from \textbf{c,} for sweep up, plotted \textit{vs.} $H^2$. Dashed lines are linear fits representing the contributions from normal state transport, \textit{i.e.} they correspond to $[R(H)-R(0)]/R(0) = [R_n(0)-R(0)]/R(0) +\alpha H^2$. The intercept of the dashed line shows the relative difference between the fitted normal state resistance and the measured resistance at $H=0$. Arrows show $H_c'$, the field above which superconducting fluctuations are fully suppressed and the normal state is restored. \textbf{f,} The magnetoresistance for the $x=0.06$ film with field applied parallel to CuO$_2$ planes. The sweep rate was 0.005~T/min for $\mu_{0}H<1$~T and 0.02~T/min for $\mu_{0}H>1$~T.}
\label{fig:MR}
\end{figure}
zero-field resistance and memory\cite{Shi-PhysicaB}. The magnitude of the hysteretic, positive magnetoresistance is comparable for both $H\parallel c$ and $H\perp c$, as observed in single crystals\cite{Ivana-PRL,Ivana-pMR}.
A small increase in doping from $x=0.05$ to $0.055$, however, leads to dramatic changes in the magnetoresistance when the field is parallel to the $c$ axis (Fig.~\ref{fig:MR}c,d). The magnetoresistance increases by almost an order of magnitude, and its positive component dominates in the entire experimental field range. The hysteresis, however, is observed only over a limited range of the positive magnetoresistance, in contrast to the behavior in more insulating films (Fig.~\ref{fig:MR}a,b). The results indicate that, in films with $x=0.055$ and $x=0.06$, another mechanism, most likely the suppression of superconducting fluctuations, also contributes to the positive magnetoresistance. This is confirmed by measurements with field applied parallel to the $ab$ planes, which show that the non-hysteretic positive contribution is much weaker in that case (Fig.~\ref{fig:MR}f), as expected in the presence of superconducting fluctuations.
Figure~\ref{fig:3D} shows the extent of the glassy region in temperature, field and doping, mapped out using the range of the hysteretic positive magnetoresistance, as well as the zero-field $T^{\dagger}(x)$ values. Moreover, the extent of superconducting fluctuations can also be determined from the transverse ($H\parallel c$) magnetoresistance\cite{YBCO-SCF,YBCO-SCFlong,LSCO-SCF}. In particular, above a sufficiently high magnetic field $H_{c}'(T)$, superconducting fluctuations are completely suppressed and the normal state is fully restored. In the normal state at low fields, the magnetoresistance increases as $H^2$ (Ref.~\onlinecite{Harris}), so that the values of $H_{c}'$ can be found from the downward deviations from such quadratic dependence that arise from superconductivity when $H<H_{c}'$. The magnetoresistance curves in Fig.~\ref{fig:MR}c,d indeed exhibit this kind of behavior, as illustrated in Fig.~\ref{fig:MR}e for the $x=0.055$ film. A similar analysis of the data on the $x=0.06$ film at even higher magnetic fields is shown in Supplementary Figs.~1 and 2. We note that the condition for the weak-field regime $\omega_c\tau\ll 1$ (with $\omega_c$ the cyclotron frequency and $\tau$ the scattering time) is easily satisfied for lightly doped LSCO films. For $x=0.055$, for example, $\omega_c\tau\sim 0.01$ at $\mu_{0}H\sim 10$~T for $\rho\approx 1$~m$\Omega\cdot$cm (see Fig.~\ref{fig:RvsT}). The phase diagram constructed in Fig.~\ref{fig:3D} then shows, in addition to the glassy region, the values of $H_{c}'(T)$
\begin{figure}
\centerline{\includegraphics[width=15cm]{XShi_fig4_resubmit.eps}}
\caption{\textbf{Phase diagram of the glassy region and of the onset of superconducting fluctuations in LSCO.} Phase diagram shows the evolution of the glassy region and the emergence of superconducting fluctuations (SCFs) and superconductivity (SC) with doping, temperature and magnetic field. The extent of the glassy regime does not depend on the field orientation. The range of SCFs is shown for the field applied perpendicular to CuO$_2$ planes. Solid and dashed lines guide the eye. Different colors of symbols for $H_{c}'(T)$ and $T_c(H)$ correspond to different values of doping. For both $x=0.06$ and $x=0.07$ films, $H_{c}'(T)=H_{c}'(0)[1-(T/T_2)^2]$ ($x=0.06$: $\mu_{0}H_{c}'(0)=11$~T, $T_2=24$~K ; $x=0.07$: $\mu_{0}H_{c}'(0)=15$~T, $T_2=29$~K.)}
\label{fig:3D}
\end{figure}
and $T_c$ for different films, where $T_c$ was defined as the midpoint of the resistive transition. The following conclusions may be drawn.
For strongly insulating $x=0.03$ and $x=0.05$ films, where superconducting fluctuations are not observed, the extent of the glassy behavior does not depend much on doping. However, as glassiness is suppressed by further increase in doping, superconducting fluctuations (SCFs) emerge in insulating-like $x=0.055$ and $x=0.06$ films. Here the fluctuations not only coexist with glassiness but also affect transport over a much wider range of $T$ and $H$ than glassy behavior. At even higher doping, when superconductivity sets in, it is no longer possible to probe glassy dynamics using transport measurements. While the role of superconducting fluctuations in the phenomenology of the pseudogap and their significance for understanding high-temperature superconductivity have been of great interest, there has been experimental disagreement about how high in temperature they may persist. By tracking the restoration of the normal-state magnetoresistance, we find that the $H=0$ onset temperatures for SCFs in $x=0.055, 0.06$ and $0.07$ films (Fig.~\ref{fig:3D}) are lower by about $10-20$~K than those determined from the onset of diamagnetism\cite{Diag-Li} and Nernst effect\cite{Nernst} in LSCO crystals with similar $\rho(T)$ and $T_c$ values.
The origin of the discrepancy between onset temperatures for SCFs determined from different experiments, however, is still under debate\cite{Armitage,SCF-micro,SCF-Mott_Lara}. There has been similar debate concerning the values of the upper critical field $H_{c2}(T\rightarrow 0)$ in LSCO and other cuprates. We note that $\mu_{0}H_{c}'(T=0)=(15\pm 1)$~T for the $x=0.07$ film is in agreement with $\mu_{0}H_{c2}\approx 16$~T obtained from specific heat measurements of a single crystal LSCO with a similar $T_c$ value\cite{SH-Wang}. Specific heat results in LSCO at higher dopings\cite{SH-Wang} are, in turn, consistent with $H_{c2}(0)$ values determined from the $c$-axis resistive transport\cite{Ando-Hc2}. Therefore, even though the method we employed to define $H_{c}'$
has an inherent limitation in accuracy, particularly at low temperatures where the $H^2$ dependence may be obscured by strong SCFs, we conclude that our determination of the onset of SCFs in a superconducting sample ($x=0.07$) with a low $T_c$, where the discrepancies between different techniques are less pronounced, is fairly consistent with other studies. In non-superconducting samples with $x=0.055$ and $x=0.06$, the onset of SCFs takes place at even lower temperatures and fields (Fig.~\ref{fig:3D}), as expected. We note that the $H_{c}'(T)$ line is well fitted by a simple quadratic formula $H_{c}'(T)=H_{c}'(0)[1-(T/T_2)^2]$ in both superconducting ($x=0.07$) and non-superconducting ($x=0.06$) samples. The same $H_{c}'(T)$ dependence has been observed also in superconducting YBa$_2$Cu$_3$O$_y$ crystals\cite{YBCO-SCF,YBCO-SCFlong} and overdoped LSCO\cite{LSCO-SCF}.
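Since $H_{c}'(T)=H_{c}'(0)[1-(T/T_2)^2]$ is linear in $T^2$, the two fit parameters can be recovered with an ordinary linear least-squares fit. The sketch below illustrates this on synthetic $(T,H_c')$ pairs generated from the quoted $x=0.06$ parameters ($\mu_0 H_c'(0)=11$~T, $T_2=24$~K); the noise values are hypothetical, added only for illustration.

```python
import numpy as np

# Synthetic (T, Hc') data mimicking the x = 0.06 film:
# mu0*Hc'(T) = 11 T * [1 - (T / 24 K)^2], plus small hypothetical noise.
T = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0])        # temperature, K
H = 11.0 * (1.0 - (T / 24.0) ** 2)                           # field, tesla
H = H + np.array([0.05, -0.04, 0.02, -0.03, 0.04, -0.02, 0.01])

# H = H0 - (H0 / T2^2) * T^2 is linear in u = T^2, so a degree-1
# polynomial fit in u recovers H0 (intercept) and T2 (from the slope).
slope, intercept = np.polyfit(T ** 2, H, 1)
H0 = intercept
T2 = np.sqrt(-H0 / slope)
```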
In order to explore the coexistence region in more detail, we calculate the SCF contribution to the conductivity\cite{YBCO-SCF,YBCO-SCFlong} $\Delta\sigma_{SCF}(T,H)=\rho^{-1}(T,H)-\rho_{n}^{-1}(T,H)$ using the measured resistivity $\rho(T,H)$ and the normal-state resistivity $\rho_{n}(T,H)$, where $\rho_{n}$ was obtained by extrapolating the region of $H^2$ magnetoresistance observed at high enough fields and temperatures (Fig.~\ref{fig:MR}e; see also Supplementary Figs.~1 and 2). We emphasize that $\Delta\sigma_{SCF}(T,H)$ is not very sensitive to the values of $H_{c}'$, because the magnetoresistance at high fields is weak, i.e. the slope $\alpha$ of the $H^2$ dependence (see Fig.~\ref{fig:MR}e caption) is very small. Therefore, all of our conclusions are qualitatively robust. As shown in Fig.~\ref{fig:SCF} for the $x=0.06$ sample,
\begin{figure}
\centerline{\includegraphics[width=15cm]{XShi_fig5_resubmit.eps}}
\caption{\textbf{The contribution of superconducting fluctuations to conductivity and the glassy region in ${\bm{x=0.06}}$ LSCO film.} The color map and contour plot shows the SCF contribution to conductivity $\Delta\sigma_{SCF}$ as a function of $T$ and $H\parallel c$. Red squares represent $H_{c}'(T)$ and the green dashed line is a fit with $(\mu_{0}H_c')[$T$]=11[1-(T[$K$]/24)^2]$. Pink dots ($H\parallel c$) and purple diamonds ($H\perp c$) show the extent of the charge glass region as determined from the measurements of the hysteretic positive magnetoresistance.}
\label{fig:SCF}
\end{figure}
superconducting fluctuations are suppressed, as expected, by increasing temperature and magnetic field. Quite unexpectedly, however, superconducting fluctuations are also suppressed at low temperatures, with the effect becoming stronger as temperature is reduced.
Similar behavior is observed in the $x=0.055$ film, but not in $x=0.07$, which becomes superconducting at $T_c=(9\pm 1)$~K. This striking non-monotonicity in $\Delta\sigma_{SCF}(T)$ reveals the presence of a competing state. By presenting the extent of the glassy region in $(T,H)$ on the same plot, it is clear that the competing state is precisely the dynamically heterogeneous charge order that is characteristic of the insulating phase.
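The procedure for extracting $\Delta\sigma_{SCF}$ can be summarized in a few lines: fit the high-field $H^2$ magnetoresistance, extrapolate it to all fields as the normal-state $\rho_n(H)$, and subtract the conductivities. The Python sketch below applies this to a synthetic $\rho(H)$ sweep at fixed temperature; all numbers are hypothetical, chosen only to mimic the qualitative shape of the data (an SCF dip below $H_c'$ on top of a weak $H^2$ normal-state background).

```python
import numpy as np

# Synthetic rho(H) sweep at fixed T for H || c (resistivity in Ohm*m).
muH = np.linspace(0.0, 14.0, 29)                 # applied field, tesla
rho_n = 1.0e-5 * (1.0 + 1.0e-4 * muH ** 2)       # normal state: H^2 law
rho = rho_n.copy()
low = muH < 11.0                                 # below Hc' ~ 11 T
rho[low] *= 1.0 - 0.2 * (1.0 - muH[low] / 11.0)  # SCF dip, closing at Hc'

# Extrapolate the normal state from the high-field H^2 regime.
high = muH >= 11.0
b, a = np.polyfit(muH[high] ** 2, rho[high], 1)  # rho_n ~ a + b * H^2
rho_n_fit = a + b * muH ** 2

# SCF contribution to the conductivity: positive where rho < rho_n.
delta_sigma_scf = 1.0 / rho - 1.0 / rho_n_fit
```

Because the high-field slope is very small, modest errors in the choice of $H_c'$ barely change the extrapolated $\rho_n$, which is why $\Delta\sigma_{SCF}$ is robust to that choice.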
Low-temperature experiments in very lightly doped La$_2$CuO$_4$ show that, as a result of long-range Coulomb interactions, holes form a collective, glassy state of charge clusters located in the CuO$_2$ planes\cite{Ivana-PRL,Jelbert08,Ivana-pMR,Shi-PhysicaB}. Our results show that adding charge carriers in LSCO leads to the formation of localized Cooper pairs within this intrinsically heterogeneous charge ordered state, consistent with the Bose glass picture. By increasing the doping, the charge glass is suppressed, resulting in increased superconducting fluctuations, pair delocalization, and eventually the transition to a superconducting state. Surprisingly, the superconducting fluctuations on the insulating side are quenched at low temperatures by the charge glass order. Therefore, the pair localization and the onset of SIT in LSCO are influenced
by a competing charge order, and not merely by disorder, as seems to be the case in some conventional superconductors\cite{Valles,Sacepe}.
The competition between charge order and superconductivity was revealed recently in YBa$_2$Cu$_3$O$_y$, a less disordered copper oxide, using nuclear magnetic resonance in the presence of high magnetic fields\cite{Julien-YBCO} that were required to destabilize superconductivity. In contrast, our data show that the charge order in non-superconducting LSCO samples is present already in zero magnetic field. These findings strengthen the idea that there is an intrinsic charge ordering instability in the CuO$_2$ planes.
\begin{methods}
The LSCO films were grown by atomic-layer-by-layer molecular beam epitaxy (ALL-MBE)\cite{MBE} on LaSrAlO$_4$ substrates with the $c$ axis perpendicular to the surface. The films were deposited at $T\approx 680~^{\circ}$C under $3\times 10^{-6}$~Torr ozone partial pressure. The growth was monitored in real-time by reflection high energy electron diffraction (RHEED) which showed that the films were atomically smooth and without any secondary-phase precipitates. The films are 75 unit cells (about 100~nm) thick; the measured $c = 1.312$~nm. Finally, a 160~nm thick gold layer was evaporated \textit{in situ} on top of the films for contacts. The films were patterned using UV photolithography and ion milling to fabricate Hall bar patterns with the length $L = 2$~mm and the width $W = 0.3$~mm. The distance between the voltage contacts is 1.01~mm, and their width is 0.05~mm. In order to remove any excess oxygen, the films were subsequently annealed in high vacuum ($4\times 10^{-5}$~Torr) for over an hour at $200-250~^{\circ}$C.
The in-plane sample resistance and magnetoresistance were measured with a standard four-probe ac method ($\sim11$~Hz) in the Ohmic regime, at $T$ down to 0.3~K realized in a $^3$He cryostat with magnetic fields up to 9~T and in the Millikelvin Facility at the National High Magnetic Field Laboratory with fields up to 18~T. The fields, applied either parallel or perpendicular to the CuO$_2$ planes, were swept at constant temperatures. The sweep rates, typically 0.02-0.03~T/min, were low enough to avoid the heating of the sample due to eddy currents. In both field orientations, the current $I\perp B$.
\end{methods}
\section{Introduction}
Inertial confinement fusion (ICF) experiments have reached the alpha heating regime, in which energy from fusion products is a significant contributor to the fuel energy balance and almost exceeds the radiative and conduction losses. The experimental Lawson parameter, given by the areal density and temperature product, is within $30\,$\% of the expected ignition threshold \cite{1}. If ignition is achieved, the fuel will rapidly self heat on a picosecond timescale and increase the total yield by more than a factor of 100 over current experiments. This is because the fusion rate is a strong function of temperature. A significant fraction of the milligrams of deuterium-tritium fuel will react, giving a megajoule scale yield.
As the ignition threshold gets nearer, other physical processes will become more important. One important aspect is the self-generation of magnetic fields during the implosion. These fields occur due to the Biermann battery mechanism of magneto-hydrodynamics (MHD), and tend to wrap azimuthally around any intrusive plasma deformities. One study found that the rapid growth rates and radial compression can cause field strengths to approach $10^4\,$T, an exceedingly large value \cite{2}. This is high enough that it will indirectly affect hydrodynamics by inhibiting and deflecting the electron heat conduction.
In this work, we show that the $\mathbf{B}$ field may be even larger than previously thought, since there is an additional collisional magnetic source term. We derive this thermo-electric mechanism, discuss its physical origin and compare its magnitude to the Biermann term. The new term acts on ion composition gradients, such as those found at the edge of the carbon mix jets entering the hot-spot. Furthermore, the term scales with temperature, meaning that the field production will be extremely rapid in fusion conditions. We also discuss enhancement of the Biermann term in carbon mix regions due to the greater radiative cooling increasing the hot-spot temperature gradients.
Jets of carbon ablator mix have been measured entering the fuel hot-spot, with a typical total mass of up to $100\,$ng \cite{3, 4}. This mixing has a detrimental effect because the Bremsstrahlung radiative rate increases with the ion charge state. The radiation escapes, meaning the carbon region reaches a cooler temperature than the rest of the hot-spot. It then acts as a heat sink, with little fusion occurring within the mix region but a large amount of alpha particle and electron heat conduction into it. This energy is rapidly radiated away, with a measured loss of overall fusion yield \cite{3}. The magnetic insulation effect could reduce the detrimental heat loss into these mix regions.
The magneto-hydrodynamics model is expected to be valid for National Ignition Facility deuterium-tritium hot-spot conditions, which have typical temperature $5\,$keV, density $100\,$gcm$^{-3}$, radius $30\,\mu$m and areal density $0.3\,$gcm$^{-2}$. Under these hot-spot conditions, the Coulomb logarithm $\ln(\Lambda)$ is in the range 2 to 5, sufficiently high that the light elements composing the hot-spot will be fully ionised and the classical transport coefficients should be valid. In addition, the Debye length $\lambda_D\simeq 10^{-10}\,$m is significantly shorter than any plasma scale-lengths, allowing the quasi-neutral approximation. In terms of the electron mass $m_e$, charge $e$, number density $n_e$ and temperature $T_e$, average ion charge state $\tilde Z=(\sum_jZ_j^2n_j)/(\sum_jZ_jn_j)$ and vacuum permittivity $\epsilon_0$, the corresponding electron-ion Coulomb collision time
\begin{align}
\tau &= \sqrt{\frac{9\pi}{2}}\frac{4\pi\epsilon_0^2\sqrt{m_e}T_e^{3/2}}{n_e\tilde Z e^4\ln(\Lambda)}\label{tau}
\end{align}
is approximately $1\,$fs. This leads to a mean free path of $\lambda_e=40\,$nm and Knudsen number $\lambda_e|\nabla T_e|/T_e \simeq 0.005$. Since the electron and ion mean free paths are much less than the gradient scale-lengths, the kinetic non-local corrections to the heat flux and fusion reactivity will be minimal \cite{5, 6, 7}. This also ensures that the MHD fluid approximation is valid.
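As a sanity check, the collision time of eq. (\ref{tau}) and the resulting mean free path can be evaluated directly. The sketch below uses illustrative parameter choices within the stated ranges (a mean DT ion mass of 2.5 amu and $\ln\Lambda = 4$); these choices are assumptions, not values fixed by the text.

```python
import math

# Physical constants (SI)
eps0 = 8.854e-12        # vacuum permittivity, F/m
e    = 1.602e-19        # elementary charge, C
me   = 9.109e-31        # electron mass, kg

# Assumed hot-spot parameters from the text
Te   = 5e3 * e          # 5 keV in joules
rho  = 100e3            # 100 g/cm^3 in kg/m^3
mi   = 2.5 * 1.661e-27  # mean DT ion mass, kg (assumption)
ne   = rho / mi         # electron density for Z = 1, m^-3
Zt   = 1.0              # average charge state for DT fuel
lnL  = 4.0              # Coulomb logarithm, within the stated 2-5 range

# Electron-ion collision time, eq. (tau)
tau = math.sqrt(9 * math.pi / 2) * 4 * math.pi * eps0**2 \
      * math.sqrt(me) * Te**1.5 / (ne * Zt * e**4 * lnL)

# Mean free path from the electron thermal speed
lam_e = math.sqrt(Te / me) * tau

print(f"tau ~ {tau:.2e} s, lambda_e ~ {lam_e:.2e} m")
```

This reproduces the quoted order of magnitude: $\tau$ on the femtosecond scale and $\lambda_e$ of a few tens of nanometres.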
\section{Derivation}
The Braginskii generalised Ohm's law gives the steady-state electric field, including the effects of magnetised Coulomb collisions. The collisional behaviour depends on the dimensionless magnetisation parameter $\chi=\omega\tau=e|\mathbf{B}|\tau/m_e$. The plasma electric field is \cite{8}
\begin{align}
\mathbf{E} &= -\mathbf{u}\times \mathbf{B} + \frac{\mathbf{J\times B}}{n_ee} - \frac{\nabla p_e}{n_ee} + \underline\eta.\mathbf{J} - \frac{1}{e}\underline\beta.\nabla T_e.\label{ohm}
\end{align}
We have neglected the terms due to electron inertia and inter-species ion diffusion, since in sub-sonic hot-spot conditions these are smaller by the electron-ion mass ratio. The ideal term $-\mathbf{u\times B}$ is due to the relativistic transform from the fluid frame, at fluid velocity $\mathbf{u}$, to the laboratory frame. The Hall term gives the effects of currents $\mathbf{J}$. The third term gives the Debye shielded potential, occurring because the electron pressure $p_e$ must be counteracted by a charge imbalance with an electric potential.
The final two terms are due to the Coulomb collision operator. The resistive term is fairly intuitive, in that electrons carrying current will be scattered randomly by collisions with ions on a timescale $\tau$, neutralising the current. In terms of the dimensionless transport coefficients $\alpha_\perp(\tilde Z, \chi)$, $\alpha_\wedge(\tilde Z, \chi)$ and $\alpha_0(\tilde Z) = \alpha_\perp(\tilde Z, 0)$, the full tensor form is given by \cite{8}
\begin{align}
\underline\eta.\mathbf{J} = \frac{m_e}{n_ee^2\tau}\left(\alpha_0\mathbf{\hat b}(\mathbf{J.\hat b}) + \alpha_\perp\mathbf{\hat b\times(J\times\hat b)} - \alpha_\wedge\mathbf{\hat b\times J }\right)\label{eta}
\end{align}
The orthogonal basis vectors are given in terms of the magnetic field direction $\mathbf{\hat b = B/|B|}$. The $\alpha_0$ term is independent of $\chi$, since transport along the field direction cannot be affected by magnetic fields. The second term gives the resistive electric field across the field lines. The perpendicular resistivity coefficient $\alpha_\perp$ increases as $\chi$ increases. The magnetic deflection also introduces a third term which is perpendicular to both the field and the driving current. This off-diagonal term is maximal for around $\chi\simeq 5$. The transport coefficients must be found numerically from the Vlasov-Fokker-Planck equation. Fits to the dimensionless coefficients are given in reference \cite{8}.
Similarly, the collisional thermal force is given in terms of the dimensionless transport coefficients $\beta_\perp(\tilde Z, \chi)$, $\beta_\wedge(\tilde Z, \chi)$ and $\beta_0(\tilde Z) = \beta_\perp(\tilde Z, 0)$ by
\begin{align}
\underline\beta.\nabla T_e = \beta_0\mathbf{\hat b}(\nabla T_e\mathbf{.\hat b}) + \beta_\perp\mathbf{\hat b\times}(\nabla T_e\mathbf{\times\hat b)} + \beta_\wedge\mathbf{\hat b\times }\nabla T_e\label{beta}.
\end{align}
The collisional thermal force is due to the electron velocity dependence of the Coulomb collision rate. It arises because, even in pressure equilibrium, if there is a temperature gradient then faster electrons from the hotter side will be less collisional with the ions [eq. (\ref{tau})]. This means there is a net force on the electrons towards the colder side, which is balanced by an electric field also towards the colder side.
The $\mathbf{J}$ and $\nabla T_e$ vectors can be decomposed into their components parallel and perpendicular to the field, via the identity $\mathbf{J}=\mathbf{\hat b}(\mathbf{J.\hat b}) + \mathbf{\hat b\times}(\mathbf{J\times\hat b)}$. The resistive term can then be manipulated to give
\begin{align}
&\underline\eta.\mathbf{J} = \frac{m_e}{n_ee^2\tau}\left(\alpha_0\mathbf{J} + (\alpha_\perp-\alpha_0)\mathbf{\hat b\times(J\times\hat b)} - \alpha_\wedge\mathbf{\hat b\times J }\right)\\
\begin{split}
&= \frac{m_e}{n_ee^2\tau}\left[ \alpha_0\mathbf{J} + \frac{\mathbf{B}}{|\mathbf{B}|}\times\left( -\alpha_\wedge\mathbf{J } + (\alpha_\perp-\alpha_0)\mathbf{(J\times\hat b)}\right)\right].\label{eta2} \end{split}
\end{align}
We make the standard MHD approximation to neglect the displacement current in the Maxwell equations, effectively eliminating high frequency oscillation modes and electron waves, giving $\mathbf{J} = c^2\epsilon_0\nabla\times\mathbf{B}$. We also use the definition $\chi=e|\mathbf{B}|\tau/m_e$ of the magnetization and define the magnetic diffusivity $\eta_0=m_ec^2\epsilon_0\alpha_0/(n_ee^2\tau)$, to give
\begin{align}
\underline\eta.\mathbf{J} &= \eta_0\nabla\times\mathbf{B} - \mathbf{u_\alpha\times B},\label{alpha}\\
\mathbf{u_\alpha} &= \frac{1}{n_ee}\left[-\delta_\perp\mathbf{J} + \delta_\wedge(\mathbf{J\times\hat b})\right],
\end{align}
where, following reference \cite{9}, we have also defined the Hall velocity correction coefficients $\delta_\perp(\tilde Z, \chi) = \alpha_\wedge/\chi$ and $\delta_\wedge(\tilde Z, \chi) = (\alpha_\perp-\alpha_0)/\chi$. These coefficients are plotted in Fig. \ref{coefs}a. They are dimensionless and positive for all $\chi$ and $\tilde Z$. Comparing eq. (\ref{alpha}) to eq. (\ref{ohm}), it is clear that the collisional resistance alters the advection velocity of the magnetic field, with a term of the same functional form as $-\mathbf{u\times B}$.
The thermoelectric term $-\frac{1}{e}\underline\beta.\nabla T_e$ can be similarly decomposed to give
\begin{align}
&-\frac{1}{e}\left(\beta_0\nabla T_e + (\beta_\perp-\beta_0)\mathbf{\hat b\times(\nabla} T_e\mathbf{\times\hat b)} + \beta_\wedge\mathbf{\hat b\times }\nabla T_e\right)\\
= &-\frac{\beta_0}{e}\nabla T_e + \frac{\mathbf{B}}{e|\mathbf{B}|}\times\left[ -\beta_\wedge\nabla T_e + (\beta_0-\beta_\perp)(\nabla T_e\times\mathbf{\hat b)}\right]\label{beta2}.
\end{align}
Again using $\chi=e|\mathbf{B}|\tau/m_e$ and defining the Nernst velocity coefficient $\gamma_\perp(\tilde Z, \chi) = \beta_\wedge/\chi$ and the cross-gradient Nernst coefficient $\gamma_\wedge(\tilde Z, \chi) = (\beta_0-\beta_\perp)/\chi$, the thermoelectric contribution to the electric field can be written \cite{9}
\begin{align}
-\frac{1}{e}\underline\beta.\nabla T_e &= -\frac{1}{e}\beta_0\nabla T_e - \mathbf{u_\beta\times B}\\
\mathbf{u_\beta} &= \frac{\tau}{m_e}\left[ -\gamma_\perp\nabla T_e + \gamma_\wedge(\nabla T_e\times\mathbf{\hat b})\right].
\end{align}
Similarly to the $\delta$ coefficients, the newly defined Nernst coefficients are dimensionless, positive and tend towards finite order 1 values for low magnetization. The $\delta(\tilde Z, \chi)$ and $\gamma(\tilde Z, \chi)$ coefficients are plotted in Fig. \ref{coefs} for $\tilde Z=1$ and $\tilde Z\rightarrow\infty$. Note that these coefficients have been calculated using the fit functions in reference \cite{8}, which can lead to inaccuracies in the cross-gradient coefficients in the limit of low magnetization. Physically, they should tend to zero for low magnetization. More accurate fits will be explored in future work.
\begin{figure*}[t]
\includegraphics{coefs.eps}
\caption{Plots of the extended-MHD $\delta$ and $\gamma$ transport coefficients, giving the effect of the extended-MHD collisional terms on the magnetic field advection velocity. Both are shown for ion charge state $Z=1$ and in the limit for $Z\rightarrow\infty$. (a) The Hall velocity correction coefficients. (b) The Nernst velocity and cross-gradient Nernst velocity coefficients.}
\label{coefs}
\end{figure*}
The total extended-MHD electric field can therefore be written in the form
\begin{align}
\mathbf{E} &= -\mathbf{u_B\times B} -\frac{\nabla p_e}{n_ee} + \eta_0\nabla\times\mathbf{B} - \frac{1}{e}\beta_0\nabla T_e,\label{totalohm}
\end{align}
where the total field advection velocity $\mathbf{u_B}$ has been altered by the Coulomb collisions and will be discussed in the following section.
\section{Discussion}
To see the magnetic field evolution, eq. (\ref{totalohm}) can be substituted into the Maxwell equation $\partial_t\mathbf{B} = -\mathbf{\nabla\times E}$. Using the ideal gas equation of state $p_e=n_eT_e$, the pressure gradient term yields the Biermann battery magnetic source term
\begin{align}
\frac{\partial\mathbf{B}}{\partial t} = \nabla\times\left(\frac{\nabla p_e}{n_ee}\right) = -\frac{\nabla n_e\times \nabla p_e}{n_e^2e} = -\frac{\nabla n_e\times \nabla T_e}{n_ee}.
\end{align}
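This vector identity can be checked numerically. The sketch below evaluates the $z$-component of $\nabla\times(\nabla p_e/n_ee)$ with simple central finite differences, for arbitrary smooth test fields $n(x,y)$ and $T(x,y)$ (with $e$ set to 1), and compares it with the Biermann form; the test functions are arbitrary choices for illustration.

```python
import math

# Arbitrary smooth test fields n(x, y) and T(x, y); the charge e is set to 1.
def n(x, y): return 2.0 + math.sin(x) * math.cos(y)
def T(x, y): return 1.0 + 0.5 * math.cos(x + 2.0 * y)
def p(x, y): return n(x, y) * T(x, y)  # ideal gas: p_e = n_e T_e

h = 1e-5
def d(f, x, y, axis):
    # Central finite difference along x (axis=0) or y (axis=1).
    return ((f(x + h, y) - f(x - h, y)) / (2 * h) if axis == 0
            else (f(x, y + h) - f(x, y - h)) / (2 * h))

def Fx(x, y): return d(p, x, y, 0) / n(x, y)  # (grad p / n)_x
def Fy(x, y): return d(p, x, y, 1) / n(x, y)  # (grad p / n)_y

x0, y0 = 0.7, -0.3
curl_z = d(Fy, x0, y0, 0) - d(Fx, x0, y0, 1)   # [curl(grad p / n)]_z
biermann_z = -(d(n, x0, y0, 0) * d(T, x0, y0, 1)
               - d(n, x0, y0, 1) * d(T, x0, y0, 0)) / n(x0, y0)
print(curl_z, biermann_z)  # the two agree to finite-difference accuracy
```

The $T_e\nabla n_e$ part of $\nabla p_e$ supplies the curl; the $n_e\nabla T_e$ part is an exact gradient divided by $n_e$, whose curl reduces to the same cross-gradient term.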
The resistive term can be simplified using the identity $-\nabla\times(\eta_0\nabla\times\mathbf{B}) = \eta_0\nabla^2\mathbf{B} - \nabla\eta_0\times(\nabla\times\mathbf{B})$ with $\mathbf{\nabla.B}=0$. The thermoelectric term can be simplified with the same identity, along with the fact that $\nabla\times\nabla T_e = 0$.
The final form of the induction equation is therefore composed only of an advection term, a diffusion term, the resistivity gradient term and two source terms that are still active even when $\mathbf{B}=0$ \cite{9},
\begin{align}
\begin{split}
\frac{\partial\mathbf{B}}{\partial t} =&\nabla\times(\mathbf{u_B}\times \mathbf{B}) + \eta_0\nabla^2\mathbf{B} - \nabla\eta_0\times(\nabla\times\mathbf{B})\\
&-\frac{\nabla n_e\times\nabla T_e}{n_ee} + \frac{\beta_0^\prime(\tilde Z)}{e}\nabla\tilde Z\times\nabla T_e.\label{induction}
\end{split}
\end{align}
The first term causes advection of the magnetic field at velocity $\mathbf{u_B}$, although it has no effect when the advection is along the field line. The field advection velocity is given by
\begin{align}\begin{split}
\mathbf{u_B} &= \,\mathbf{u} -(1+\delta_\perp)\frac{\mathbf{J}}{n_ee} + \delta_\wedge\frac{\mathbf{J\times\hat b}}{n_ee} \\&+\frac{\tau}{m_e}\left(-\gamma_\perp\nabla T_e + \gamma_\wedge\nabla T_e\times\mathbf{\hat b}\right)\label{ub}
\end{split}
\end{align}
It is now clear that the sole effect of the anisotropic $\perp$ and $\wedge$ extended-MHD terms is to alter the magnetic field advection velocity. Instead of having $\mathbf{u_B} =\mathbf{u}$ as in ideal MHD, the advection velocity now also includes the Hall velocity, with some small correction terms containing the $\delta$ coefficients. From Fig. \ref{coefs}, it is clear that, for $Z=1$, the Hall velocity corrections do not exceed $20\,\%$. The advection also includes the Nernst velocity from the thermoelectric term, which advects the field down electron temperature gradients at a speed similar to the flow of heat from electron conduction. Due to the large heat fluxes in fusion hot-spots, the Nernst advection can significantly alter the magnetic field profile. However, the Hall velocity terms (those containing $\mathbf{J}$) are typically small in ICF hot-spot conditions, on the order of $100\,$ms$^{-1}$, compared to $10^5\,$ms$^{-1}$ for the fluid and Nernst velocities. There is also the cross-gradient $\gamma_\wedge$ Nernst advection term, which advects the field along isotherms, in the direction of $\nabla T_e\times\mathbf{B}$.
Use of a non-zero resistivity causes a diffusion of the magnetic field, whose strength is characterised by the dimensionless magnetic Reynolds number $R_M = UL/\eta_0$, where $U$ is a typical velocity and $L$ is a typical length scale. For the hot-spot conditions, use of equation (\ref{tau}) gives $\eta_0 \simeq 10^{-2}$m$^2$s$^{-1}$. Taking $U\simeq 3\times 10^{5}$ms$^{-1}$ as a typical implosion velocity and $L$ as the hot-spot size, this gives $R_M\simeq 10^3$, meaning advection of the field is dominant over its diffusion and the $\eta_0$ terms are fairly small in the present case. The smoothing effect of the diffusion term over the stagnation time $t=100\,$ps can be estimated as $L_\mathrm{diff}=\sqrt{\eta_0t}\simeq 1\,\mu$m, giving a minimum length scale for the size of magnetic features.
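These two estimates follow from simple arithmetic on the quoted hot-spot values; the sketch below reproduces them (the specific numbers are the text's order-of-magnitude assumptions).

```python
import math

# Assumed hot-spot values from the text (order-of-magnitude estimates)
eta0 = 1e-2      # magnetic diffusivity, m^2/s
U    = 3e5       # typical implosion velocity, m/s
L    = 30e-6     # hot-spot radius, m
t    = 100e-12   # stagnation time, s

R_M = U * L / eta0              # magnetic Reynolds number, ~10^3
L_diff = math.sqrt(eta0 * t)    # resistive smoothing length, ~1 um
print(f"R_M ~ {R_M:.0f}, L_diff ~ {L_diff * 1e6:.1f} um")
```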
It should be noted \cite{10} that when $\mathbf{J}$ is perpendicular to $\mathbf{B}$, the $\delta_\wedge$ advection term in eq. $(\ref{ub})$ is equivalent to additional diffusion of the magnetic field, such that the resistive terms in eq. (\ref{induction}) become $\eta_\perp\nabla^2\mathbf{B} - \nabla\eta_\perp\times(\nabla\times\mathbf{B})$, rather than $\eta_0\nabla^2\mathbf{B} - \nabla\eta_0\times(\nabla\times\mathbf{B})$. This is true, for example, in a two-dimensional geometry with self-generated fields. However, eq. (\ref{induction}) shows that the general formulation is that of isotropic diffusion with coefficient $\eta_0$, with the additional $\delta_\wedge$ advection term that will cause some additional anisotropic diffusion when $\mathbf{J}$ is not parallel to $\mathbf{B}$.
The magnetic dynamics in inertial confinement fusion hot-spots are dominated by the advection term and the two source terms (final terms in eq. \ref{induction}). The Biermann term acts on misaligned density and temperature gradients, while the thermoelectric term acts on misaligned ion composition and temperature gradients. The quantity $\beta_0(\tilde Z)$ and its derivative $\beta_0'(\tilde Z)$ are plotted in Fig. \ref{betaplot}. Clearly the collisional source term will be maximal for low $\tilde Z$ plasmas with steep gradients in $\tilde Z$, whereas the Biermann term is independent of $\tilde Z$.
In the context of inertial confinement fusion fuel impurities, there may exist a $\nabla \tilde Z$ due to carbon jets penetrating the burning fuel. The carbon region will reach equilibrium at a lower temperature than the rest of the hot-spot, since the Bremsstrahlung radiative losses increase with $\tilde Z$. This naturally introduces a $\nabla T_e$ away from the mix jet and a $\nabla \tilde Z$ towards the mix jet. Due to thermal conduction and hydrodynamic motion, these are unlikely to be exactly aligned. The conditions are therefore met for the collisional thermoelectric source term.
The mix region will radiatively contract, leading to a $\nabla n_e$ towards the mix region and $\nabla T_e$ away from it. The magnitude of the Biermann term is then approximately $fT_e/(el_nl_T)$, where $l_T=T_e/|\nabla T_e|$ is the temperature gradient scale-length, $l_n=n_e/|\nabla n_e|$ is the density scale-length and $f=\sin\theta$ is a reduction factor due to the misalignment of the gradients. With typical hot-spot temperature $T_e=5\,$keV, $f=0.1$ and scale-lengths $3\,\mu$m, this gives field growth rate $50\,$Tps$^{-1}$. The field is thus expected to reach several thousand Tesla over the $100\,$ps
stagnation time-scale.
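The Biermann growth-rate estimate $fT_e/(el_nl_T)$ is straightforward to evaluate with the values quoted above; the sketch below does so.

```python
# Order-of-magnitude Biermann growth rate f * Te / (e * l_n * l_T),
# with the values quoted in the text.
Te_over_e = 5e3   # Te / e in volts (Te = 5 keV)
f   = 0.1         # misalignment factor sin(theta)
l_n = 3e-6        # density scale-length, m
l_T = 3e-6        # temperature scale-length, m

dBdt = f * Te_over_e / (l_n * l_T)         # field growth rate, T/s
print(f"dB/dt ~ {dBdt * 1e-12:.0f} T/ps")  # a few tens of T/ps
```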
If carbon enters the hot-spot, the ion charge state gradient will be reduced by inter-species ion diffusion. This can be estimated using the model of Molvig, Simakov and Vold \cite{11}, in which an initially sharp interface between a light and heavy ion species will develop through diffusion. The diffusion coefficient can be estimated as $D=\frac{2T_i\tau}{Z^2\sqrt{m_im_e}}\simeq 0.03\,$m$^2$s$^{-1}$, similar to the thermal diffusion and resistive magnetic diffusion rates. Over the stagnation time $t=100\,$ps, this leads to a diffusive scale-length of $\sqrt{tD}=1.6\,\mu$m. This gives a lower bound on the expected scale-lengths $l_Z$, $l_n$ and $l_T$.
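The diffusion estimate above can be checked numerically; the sketch below takes $T_i\simeq T_e=5\,$keV and the femtosecond-scale collision time from eq. (\ref{tau}) as assumed inputs, and reproduces the quoted $D$ and scale-length to within the precision of the estimate.

```python
import math

# Constants and assumed values
e   = 1.602e-19
me  = 9.109e-31
mi  = 2.5 * 1.661e-27   # mean DT ion mass, kg (assumption)
Ti  = 5e3 * e           # assume Ti ~ Te = 5 keV
tau = 1.3e-15           # collision time from eq. (tau), s (assumption)
Z   = 1.0
t   = 100e-12           # stagnation time, s

D = 2 * Ti * tau / (Z**2 * math.sqrt(mi * me))  # diffusion coefficient, m^2/s
l_mix = math.sqrt(D * t)                        # diffusive scale-length, m
print(f"D ~ {D:.1e} m^2/s, l_mix ~ {l_mix * 1e6:.1f} um")
```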
The collisional source term is maximal when $\tilde Z=1$, giving $\beta_0'(1)\simeq0.3$. For $\tilde Z\simeq 1$, as in the hydrogen hot-spot, this leads to a field growth rate $0.3fT_e/(el_Zl_T)\simeq 20\,$Tps$^{-1}$, similar in magnitude to the Biermann term. In fact, it may even exceed it: hydrodynamic motion acts to smooth the pressure gradients and so reduce the Biermann growth, whereas the collisional term has no such natural stabilisation, since it acts on composition gradients which can exist even in pressure equilibrium, such as in an ideal isobaric hot-spot.
Carbon jets will also have increased Biermann fields relative to hydrogen jets, because the increased radiative cooling provides a steeper temperature gradient. We also note that the two source terms are likely to point in opposite directions: $\nabla n_e$ and $\nabla \tilde Z$ are both directed towards the centre of the mix region, but $\beta_0^\prime$ is positive and the two terms enter with opposite signs [eq. (\ref{induction})]. This may mean the magnetised hydrodynamics of hydrogen jets are quite different from those of carbon mix jets, since the alterations to heat flow could be in the opposite direction. For hydrogen jets, the magnetised Righi-Leduc heat-flow is towards the base of the jet \cite{2}. For carbon jets where the collisional thermoelectric source term is dominant, the magnetic field could be in the opposite direction and deflect heat towards the spike tip.
The Nernst advection must also be considered, since it will advect the magnetic field into the cooler mix region. Since the magnetisation scales as $T_e^{3/2}$, this will reduce the anisotropic heat flux effects.
\begin{figure*}[t]
\includegraphics{beta.eps}
\caption{(a) Plot of the Braginskii thermo-electric coefficient $\beta_0(\tilde Z)$. (b) Plot of the derivative of $\beta_0$. Although the thermo-electric force increases for higher $Z$ plasmas, it is the derivative that gives the coefficient of the magnetic field production rate. This is maximal for low Z plasma such as DT fusion fuel, with $\beta_0^\prime(1)\simeq0.3$.}
\label{betaplot}
\end{figure*}
Another important consideration is the transport of the fusion product alpha particles. These are born with energy $3.5\,$MeV, giving a gyro-radius of $r_L=(|\mathbf{B}|/\mathrm{1000\,T})^{-1}\times270\,\mu$m. At the estimated field strength of $5000\,$T, the alpha gyro-radius could therefore approach the hot-spot size $r\simeq 30\,\mu$m. However, the scale-length of the magnetic field regions will be much smaller than this, meaning the alpha particle energy deposition profile will have only minor changes. The field strength would need to reach approximately $10^5\,$T for any appreciable magnetic confinement of the alpha particle energy.
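The $270\,\mu$m reference gyro-radius follows from the non-relativistic alpha birth speed; the sketch below reproduces it.

```python
import math

e = 1.602e-19
m_alpha = 6.645e-27              # alpha particle mass, kg
E = 3.5e6 * e                    # alpha birth energy, J

v = math.sqrt(2 * E / m_alpha)   # non-relativistic birth speed, ~1.3e7 m/s
B = 1000.0                       # reference field strength, T
r_L = m_alpha * v / (2 * e * B)  # gyro-radius for charge 2e

print(f"r_L ~ {r_L * 1e6:.0f} um at 1000 T")
```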
Since most energy within the carbon regions is rapidly radiated away, heat flux into carbon regions is a primary loss mechanism from the plasma. Electron heat flux into the carbon region will be reduced by the magnetic field. This may help to insulate the carbon mix regions and slightly reduce their negative effects.
In summary, the nature of the induction equation indicates that carbon impurities mixing into the fusion hot-spot may lead to larger magnetic fields than with hydrogen jets. The field generation rate is on the order of $50\,$Tps$^{-1}$. This increase is due to two mechanisms. Firstly, the temperature gradients around the spike will be larger due to the increased radiative cooling, leading to increased Biermann growth. Secondly, there is an additional collisional thermoelectric source of magnetic field that only occurs with gradients in the average ion charge state $\tilde Z$. This will only arise if higher Z impurities enter the hot-spot. These mechanisms will lead to magnetisation of the electron heat flux, affecting the hydrodynamics of the jet.
Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20180040DR.
\section{INTRODUCTION}
Over the past decades, loop closure detection has become an important part of visual SLAM. Early SLAM systems relied on visual odometry alone, which accumulates drift, so navigation and mapping often failed over long trajectories. It was later found that graph-based optimization, aided by loop closure detection, can greatly correct this drift, and loop closure has since become an essential component of modern SLAM \cite{williams2009comparison}. A full visual SLAM system now consists of a front-end and a back-end. In the front-end, visual odometry estimates the frame-to-frame transition directly; however, it suffers from cumulative drift in real applications. In the back-end, loop closure partially resets the localization by matching the current frame against historical data, minimizing the accumulated measurement error \cite{lowry2016visual}. Visual SLAM has been widely applied in robotics, from cleaning robots and drones to autonomous cars, and has become a promising technique in the field.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.90\linewidth]{images/flowchart.pdf}
\end{center}
\captionsetup{justification=centering}
\caption{Framework of Binary Content Based Loop Closure Detection.}
\label{fig: flowchart}
\end{figure}
The idea of loop closure detection is to find repeated scenes in the historical data so that two places can be linked together. The link between two places acts as an additional constraint on the map, and graph optimization over these constraints minimizes the drifting error. Experiments have shown that loop closure detection can greatly improve the performance of SLAM \cite{williams2009comparison}. However, the problem poses several challenges. First, the database grows with time, so its size can become tremendous without proper compression. Second, the complexity of indexing grows with the database size, so the demand on computational resources steadily increases. Lastly, two frames of the same place taken at different timestamps may differ due to variations in lighting, dynamic objects, etc. Loop closure detection therefore remains a challenging topic in visual SLAM.
Existing works on loop closure detection share the common idea of using hand-crafted feature points and feature descriptors, such as FAB-MAP \cite{cummins2008fab}, Bag of Visual Words (BoVW) \cite{galvez2012bags}, VLAD \cite{huang2016vlad} and Fisher Vector \cite{uchida2016image}. These methods extract feature points from each image frame and translate them into descriptors. The descriptors are stored into the database in sequence, and loop closure is detected by comparing the current descriptor against the database. The number of comparisons grows with time, so in general the comparison of descriptors must be fast. However, there is always a trade-off between speed and precision: higher accuracy costs more computational resources. For example, in FAB-MAP \cite{cummins2008fab}, it takes 400 ms to extract SIFT features for a frame of size $640\times 480$ pixels on a normal computer. Image descriptors such as the Fisher Vector contain high-order statistics and so take even more time to process. In conclusion, existing methods focus on creating accurate image descriptors but lack satisfactory efficiency.
In this paper, we argue that extracting and comparing feature descriptors takes too many computational resources and becomes a burden on the processing system over a long run. Existing feature point based methods can achieve satisfactory recall rates but are difficult to run in real time. Meanwhile, we note that the distribution of objects or salient patterns in a scene is important information beyond feature points: the geometrical distribution of the objects, as well as the shape of each object, is usually unique to a place and can therefore be used for loop closure detection. This information is not utilized by feature point methods. Usefully, the pattern information involves no color information, and if it can be expressed in binary format, comparisons become very fast. We therefore introduce this feature for loop closure detection, expressing the object distribution information as the binary content of the image. Candidate loop closure places are then verified by checking the similarity of the binary contents of two images, while an existing feature point method is kept on top of the binary content indexing to achieve a high recall rate. The new framework consists of three parts: binary content construction, fast image retrieval and precise loop closure detection. It introduces a binary map into loop closure detection to reduce the computational cost of indexing, while applying precise image matching to guarantee precision. Compared to existing methods, no offline training is required, and experiments show that our method outperforms existing methods in both recall rate and speed. The main contributions of this paper are as follows:
\begin{itemize}
\item We propose a binary content based fast loop closure detection, which combines the advantage of both fast binary operation and traditional loop closure detection approach.
\item The performance is greatly improved: our method is much faster than existing methods without sacrificing recall rate or precision.
\item Compared to existing methods, the proposed method does not require any offline training. It can be easily integrated into a SLAM system.
\end{itemize}
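The binary-content comparison underlying the proposed framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain mean-intensity threshold stands in for the saliency step (log spectral residual), and the helper names `binary_content` and `similarity` are hypothetical.

```python
import numpy as np

def binary_content(gray):
    # Reduce a grayscale frame to a packed binary map of "salient" pixels.
    # A mean-intensity threshold stands in here for a saliency detector.
    return np.packbits(gray > gray.mean())

def similarity(a, b):
    # Hamming similarity of two packed bit maps via XOR and popcount.
    diff_bits = np.unpackbits(np.bitwise_xor(a, b)).sum()
    total_bits = a.size * 8
    return 1.0 - diff_bits / total_bits

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
assert similarity(binary_content(frame), binary_content(frame)) == 1.0
```

Because the comparison reduces to XOR and bit counting, indexing a large database of frames is far cheaper than descriptor matching, which is the efficiency argument made above.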
This paper is organized as follows: Section \MakeUppercase{\romannumeral 2} reviews the related works on loop closure detection. Section \MakeUppercase{\romannumeral 3} describes the details of the proposed method. Section \MakeUppercase{\romannumeral 4} shows experiment results and comparison with existing works, followed by conclusion in Section \MakeUppercase{\romannumeral 5}.
\section{Related Work}
Most loop closure detection methods adopt the Bag of Words structure, which originates from natural language processing. In this model, a text is represented as the multiset of its words, regardless of grammar or word order. This idea has been applied to loop closure detection in systems such as FAB-MAP and DBoW2 \cite{cummins2008fab,galvez2012bags}.
FAB-MAP defines a probabilistic model over the bag-of-words representation \cite{cummins2008fab}. It utilizes a Chow-Liu tree \cite{chow1968approximating} to approximate the co-occurrences between SURF feature points. \cite{cummins2009highly} tests datasets of 70 km and 1,000 km in length and achieves a satisfactory recall rate with only a few false positives. DBoW2 \cite{galvez2012bags} creates a tree vocabulary by offline training over a large dataset. New feature points are assigned a sequence number according to the vocabulary, so that the co-occurrence of frames can be estimated from the Euclidean distance between feature points in the vocabulary.
Some other research works aim to find an effective and efficient image descriptor for loop closure detection. \cite{mur2017orb} uses the SURF feature descriptor for loop detection; it achieves satisfactory results but consumes considerable computational resources. \cite{huang2016vlad} extracts a VLAD vector from each image. VLAD is a first-order statistic of the non-probabilistic Fisher Vector \cite{uchida2016image}, obtained by training a codebook of $k$ visual words using $k$-means. Similarity is estimated by the Euclidean distance between the corresponding vectors. In recent years, binary descriptors such as Binary Robust Invariant Scalable Keypoints (BRISK) and Binary Robust Independent Elementary Features (BRIEF) have also been used in loop closure detection \cite{lowe2004distinctive,bay2006surf,viswanathan2009features,calonder2010brief,leutenegger2011brisk}. They take advantage of fast binary operations and use probability theory to represent features. However, they carry some uncertainty, so the accuracy may drop in some cases.
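The speed advantage of binary descriptors such as BRIEF and BRISK comes from the fact that comparing two descriptors reduces to a Hamming distance, i.e. an XOR followed by a bit count. A minimal NumPy sketch (generic packed descriptors of our own making, not tied to any cited method):

```python
import numpy as np

def hamming_distance(d1: np.ndarray, d2: np.ndarray) -> int:
    """Hamming distance between binary descriptors packed into uint8 bytes."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# Two illustrative 256-bit descriptors (32 bytes each).
a = np.zeros(32, dtype=np.uint8)
b = a.copy()
b[0] = 0b00000111                  # flip 3 bits
print(hamming_distance(a, b))      # -> 3
```

On modern hardware this XOR-and-popcount pattern is what makes binary matching much cheaper than the floating-point distance computations required by SIFT- or SURF-style descriptors.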
\begin{figure}[t]
\begin{center}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/SalientRaw.jpg}
\caption{Raw Image}
\label{fig:Saliency Detection-a}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/SalientResult.jpg}
\caption{Result}
\label{fig:Saliency Detection-b}
\end{subfigure}
\end{center}
\captionsetup{justification=centering}
\caption{Example of the log spectral residual approach.}
\label{fig:Saliency Detection}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{images/Frame1.jpg}
\end{center}
\caption{First frame}
\label{fig:binaryexample-a}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{images/binaryresult1.jpg}
\end{center}
\caption{Binary content extraction result}
\label{fig:binaryexample-b}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/Frame2.jpg}
\caption{Second frame}
\label{fig:binaryexample-c}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/binaryresult2.jpg}
\caption{Binary content extraction result}
\label{fig:binaryexample-d}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/Frame3.jpg}
\caption{Third frame}
\label{fig:binaryexample-e}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{images/binaryresult3.jpg}
\caption{Binary content extraction result}
\label{fig:binaryexample-f}
\end{subfigure}
\caption{Examples of binary content extraction.}
\label{fig:binary_extraction_example}
\end{figure*}
Another trend in loop closure detection is the utilization of Deep Learning based descriptors. \cite{chatfield2014return} conducted a comprehensive evaluation and showed the advantages of Deep Learning based features. In \cite{hou2015convolutional}, the authors apply a pre-trained Convolutional Neural Network (CNN) model, where the outputs of intermediate layers are used as image descriptors. GPU acceleration brings the processing time down to the millisecond level. In \cite{xia2016loop}, the authors apply PCANet \cite{chan2015pcanet} to extract features as image descriptors; it takes only 10-60 ms per image on the City Center dataset on an NVIDIA GPU, with a recall rate of up to 20\%. Deep learning methods show good performance in loop closure detection. However, their application is limited by the requirement of a GPU, which is costly for robotic systems.
Recently, another research work uses objects for loop closure detection \cite{wang2019salient}. It performs loop closure detection based on objects cropped from each image and achieves very satisfactory speed, but at the sacrifice of recall rate. Another problem is that it can fail if there are repetitive objects in the scene.
\section{Framework}
The proposed binary content based loop closure detection framework consists of three parts: binary content construction, fast image retrieval and precise loop closure detection, as shown in Fig. \ref{fig: flowchart}. To utilize the object distribution information, the binary content construction step extracts the objects or salient regions from the image and compresses the extracted parts into a compact binary image. After that, fast image retrieval performs binary image indexing at high speed and filters out most unmatched pairs. Lastly, precise loop closure detection conducts a further check on the results to remove any false positives. Since most unmatched pairs are filtered out during fast binary content indexing, this last step takes only limited computational resources. The details of each step are explained in this section.
\subsection{Binary Content Construction}
The extracted binary content should be a highly representative summary of the original image. However, binary content cannot convey the color or grey level of a pixel, so we operate at the level of salient regions. A salient region generally refers to image parts that contain rich texture. The location of a salient region and the shape of the salient part can be useful for loop detection. Different images have different salient regions, so they can serve as a criterion when searching for matching images. To extract salient regions, we apply the log spectral residual method \cite{hou2007saliency}, which has the advantages of low computational cost and high extraction capability. Moreover, no prior knowledge is required for this approach.
Generally, given an input image $\mathcal{I}$, we define the following notations:
\begin{itemize}
\item $\mathbf{A}(f)$: The real part of Fast Fourier Transform of image $\mathcal{I}$, $\mathbf{A}(f) = \Re(\mathcal{F}(\mathcal{I}))$.
\item $\mathbf{P}(f)$: The imaginary part of Fast Fourier Transform of image $\mathcal{I}$, $\mathbf{P}(f) = \Im(\mathcal{F}(\mathcal{I}))$.
\item $\mathbf{L}(f)$: The log spectrum of $\mathbf{A}(f)$, $\mathbf{L}(f) = \log(\mathbf{A}(f))$.
\end{itemize}
The Log spectral residual $R(f)$ is defined as:
\begin{equation}
\begin{aligned}
R(f) &= L(f) - h_{n}(f) \ast L(f), \label{Equation:saliecyresult}
\end{aligned}
\end{equation}
where $h_{n}(f)$ is an $n\times n$ average filter. The salient region map $\mathcal{O}(x)$ can be derived by recovering equation (\ref{Equation:saliecyresult}) with a Gaussian filter $\mathcal{G}(x)$:
\begin{subequations}
\begin{align}
S(x) &= \mathcal{G}(x) \ast \left\vert \mathcal{F}^{-1}\left[ \exp \left( R(f) + iP(f) \right) \right] \right\vert ^{2},\\
\mathcal{O}(x) &=
\left\{
\begin{aligned}
1 &~&~&~\text{if}~ S(x) > E(S(x)) \cdot \gamma, \\
0 &~&~&~\text{otherwise},
\end{aligned}
\right.
\end{align}
\label{eqn:salient level}
\end{subequations}
where the threshold $\gamma$ controls the level of salient region extraction. A larger $\gamma$ implies that less-salient areas will be ignored, and only highly salient regions or objects will be retained. A demonstration of the log spectral residual approach is shown in Fig. \ref{fig:Saliency Detection}, where only the crafts are kept after filtering. The salient region contains the most representative information of the image and in most cases is unique to each image. By binarizing each frame into a salient region map and storing it, the database is built up for later processing.
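The pipeline above can be sketched in a few lines of NumPy. This is a simplified sketch, not the authors' implementation: the final Gaussian smoothing $\mathcal{G}(x)$ is omitted, $E(S(x))$ is taken to be the mean of $S$, and the parameter defaults are illustrative.

```python
import numpy as np

def spectral_residual_saliency(img: np.ndarray, n: int = 3, gamma: float = 3.0) -> np.ndarray:
    """Binary salient-region map O(x) via the log spectral residual.

    Simplified sketch: Gaussian smoothing G(x) is omitted and E(S(x)) is
    taken to be the mean of S; `n` and `gamma` are illustrative defaults.
    """
    F = np.fft.fft2(img.astype(float))
    L = np.log(np.abs(F) + 1e-8)      # log amplitude spectrum L(f)
    P = np.angle(F)                   # phase spectrum P(f)
    pad = n // 2                      # h_n(f): n x n mean filter applied to L(f)
    avg = np.zeros_like(L)
    for i in range(-pad, pad + 1):
        for j in range(-pad, pad + 1):
            avg += np.roll(np.roll(L, i, axis=0), j, axis=1)
    avg /= n * n
    R = L - avg                       # spectral residual R(f)
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    return (S > gamma * S.mean()).astype(np.uint8)

img = np.zeros((32, 32))
img[16, 16] = 1.0                     # a single "salient" pixel
O = spectral_residual_saliency(img)
print(O.sum(), O[16, 16])             # -> 1 1
```

For a uniform image with one anomalous pixel, the residual spectrum reconstructs only the anomaly, so the binary map marks exactly that pixel.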
\subsection{Fast Image Retrieval}
Fast image retrieval aims to match binary content against the database. The key idea is to use fast logical operations for searching. Similar scenes share similar salient region distributions. When a place is revisited, the lighting condition or view angle may change slightly, but the distribution remains largely the same. Hence, given two salient region maps $\mathcal{O}_1$ and $\mathcal{O}_2$, we can perform an element-wise similarity check:
\begin{equation}
\xi = \frac{\mathcal{F}(\mathcal{O}_1 \;\&\; \mathcal{O}_2)}{\max\{\mathcal{F}(\mathcal{O}_1),\mathcal{F}(\mathcal{O}_2)\}},
\end{equation}
where $\xi$ is the similarity factor of the two images and $\mathcal{F}(\mathbf{x})$ counts the number of ``true'' values in the matrix. Fast image retrieval can then be performed by simply setting a threshold on $\xi$. Meanwhile, we also define the binary image center $\mathcal{M}$
\begin{equation}
\mathcal{M} = \frac{\sum\limits_{\mathbf{u}\in \mathcal{O}} \mathcal{O}(\mathbf{u}) \cdot \mathbf{u}}{\mathcal{F}(\mathcal{O})},
\end{equation}
where $\mathbf{u}$ is the pixel coordinate in the image. By setting a threshold on the distance between binary image centers $\mathcal{M}$, we can quickly filter out unmatched pairs.
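The similarity factor $\xi$ and the binary image center $\mathcal{M}$ defined above reduce to a few array operations. A minimal sketch (toy $4\times 4$ maps of our own, not taken from the datasets; both maps are assumed to contain at least one true pixel):

```python
import numpy as np

def similarity_factor(o1: np.ndarray, o2: np.ndarray) -> float:
    """xi: overlap of two binary maps relative to the larger one (assumes
    each map contains at least one true pixel)."""
    inter = np.count_nonzero(np.logical_and(o1, o2))
    return inter / max(np.count_nonzero(o1), np.count_nonzero(o2))

def binary_center(o: np.ndarray) -> np.ndarray:
    """Binary image center M: mean coordinate u of the true pixels."""
    return np.argwhere(o).mean(axis=0)

o1 = np.zeros((4, 4), dtype=bool); o1[1:3, 1:3] = True   # 4 true pixels
o2 = np.zeros((4, 4), dtype=bool); o2[1:3, 1:4] = True   # 6 true pixels
print(round(similarity_factor(o1, o2), 3))   # -> 0.667
print(binary_center(o1))                      # -> [1.5 1.5]
```

Both quantities involve only logical AND, counting and a coordinate mean, which is why the indexing stage runs so much faster than descriptor comparison.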
An example of binary content based fast indexing is shown in Fig. \ref{fig:binary_extraction_example}. We randomly pick three frames from the KITTI dataset \cite{geiger2013vision}. The first and second frames are taken at the same place but at different times, while the third frame is taken at another, similar-looking place. The first and second frames form a loop closure pair, but the first and third do not. Applying fast indexing, we compute the similarity between frames: $\xi_{12} = 67\%$ and $\xi_{13} = 20\%$. Intuitively, the second frame is much more similar to the first frame.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\linewidth]{images/featurematching.jpg}
\end{center}
\caption{An example of SURF feature points matching.}
\label{fig: feature_matching}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{ccc}
\toprule
Dataset & Image Size & Source of Ground Truth \\
\midrule
KITTI & 370$\times$1226 & GPS \\
New College & 640$\times$480 & GPS \\
City Center & 640$\times$480 & GPS \\
\bottomrule
\end{tabular}
\captionsetup{justification=centering}
\caption{Information of Different Datasets.}
\label{table:datasets information.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{cccc}
\toprule
Dataset & Mean Time (ms) & Average Recall Rate (\%) & Precision (\%) \\
\midrule
KITTI & 130 & 54.9 & 100 \\
New College & 92 & 20.9 & 100 \\
City Center & 86 & 27.7 & 100 \\
\bottomrule
\end{tabular}
\caption{Loop detection results of our approach.}
\label{table:Experiment results on different dataset.}
\end{center}
\end{table}
\begin{figure}[h]
\begin{center}
\begin{subfigure}{0.70\linewidth}
\includegraphics[width=1.00\linewidth]{images/kitti_groundtruth_1.pdf}
\captionsetup{justification=centering}
\caption{Ground truth of KITTI sequence 00.}
\label{fig:kitti_result-a}
\end{subfigure}
\hfill
\begin{subfigure}{0.70\linewidth}
\includegraphics[width=1.00\linewidth]{images/kitti_result.pdf}
\captionsetup{justification=centering}
\caption{Loop closure detection result.}
\label{fig:kitti_result-b}
\end{subfigure}
\end{center}
\captionsetup{justification=centering}
\caption{Loop closure detection result of the proposed method.}
\label{fig:kitti_result}
\end{figure}
\subsection{Precise Loop Closure Detection}
Fast image retrieval removes most unmatched pairs. However, the binary content only captures the structure of the image, which makes it fast to compare but not accurate enough on its own. Since traditional methods using the SURF feature descriptor perform well in image matching, we implement a feature point based comparison to further increase precision.
SURF feature points are extracted from each frame because of their high precision in image matching \cite{bay2006surf}, and SURF descriptors are used to examine each remaining image pair. Fig. \ref{fig: feature_matching} shows an example of feature matching. The number of matched feature points reveals the similarity of an image pair.
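The feature-wise verification step can be sketched as nearest-neighbour descriptor matching with a ratio test. This is a generic stand-in, not the paper's implementation: random float vectors replace actual SURF descriptors, and the 0.7 ratio is an assumption of ours.

```python
import numpy as np

def match_descriptors(desc1: np.ndarray, desc2: np.ndarray, ratio: float = 0.7):
    """Nearest-neighbour matching with a ratio test. Generic sketch: random
    float vectors stand in for SURF descriptors; 0.7 is an assumed ratio."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]          # best and second-best candidates
        if dists[j] < ratio * dists[k]:       # keep only unambiguous matches
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
descA = rng.normal(size=(20, 64))                  # descriptors of frame A
descB = descA + 0.01 * rng.normal(size=(20, 64))   # same scene, small noise
print(len(match_descriptors(descA, descB)))        # -> 20
```

Counting the accepted matches and thresholding that count is one simple way to decide whether a candidate pair is a true loop closure.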
\section{Experiment Results}
To demonstrate its robustness, we test the proposed method on several datasets, including the KITTI, New College and City Center datasets \cite{geiger2013vision,smith2009new,engel2016photometrically}. Information on each dataset is given in Table \ref{table:datasets information.}. The most important performance indexes for loop closure detection are recall rate, recall precision and speed. Recall precision refers to the ratio of correct loop closure detections to the total number of loop closures detected. The higher the recall precision, the better, since any false positive can easily cause filter divergence. Recall rate refers to the number of correct loop pairs detected relative to the total number of loop pairs collected from the ground truth. In this section, we provide a detailed analysis of the proposed method.
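The two metrics can be computed from sets of frame-index pairs as follows (a minimal sketch; the pair values are hypothetical, not taken from the datasets):

```python
def recall_and_precision(detected, ground_truth):
    """Recall: correct detections over all true loop pairs.
    Precision: correct detections over all reported detections."""
    detected, ground_truth = set(detected), set(ground_truth)
    true_pos = detected & ground_truth
    recall = len(true_pos) / len(ground_truth)
    precision = len(true_pos) / len(detected) if detected else 1.0
    return recall, precision

# Hypothetical frame-index pairs, purely for illustration.
gt = {(10, 210), (11, 211), (12, 212), (13, 213)}
det = {(10, 210), (11, 211)}
print(recall_and_precision(det, gt))  # -> (0.5, 1.0)
```

Note that a detector reporting nothing has perfect precision but zero recall, which is why both metrics must be reported together.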
\begin{figure*}[t]
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/groundtruth.pdf}
\end{center}
\caption{Ground truth}
\label{fig:result_comparison-a}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/OurApproach.pdf}
\end{center}
\caption{Binary content-based approach}
\label{fig:result_comparison-b}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/FABMAP.pdf}
\end{center}
\caption{FABMAP}
\label{fig:result_comparison-c}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/DBoW2.pdf}
\caption{DBoW2}
\label{fig:result_comparison-d}
\end{center}
\end{subfigure}
\caption{Comparison of the proposed binary content based approach with existing methods on KITTI sequence 05.}
\label{fig:result_comparison}
\end{figure*}
\begin{table*}[t]
\setlength{\tabcolsep}{2pt}
\begin{center}
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{Sequence 00} & \multicolumn{3}{c|}{Sequence 02} &\multicolumn{3}{c}{Sequence 05} \\
& Mean Time (ms) & Recall Rate (\%) & Precision (\%) & Mean Time & Recall Rate & Precision & Mean Time & Recall Rate & Precision \\
\midrule
\multicolumn{1}{c|}{Our Approach} & 130 & 54.9 & 100 & 129 & 47.7 & 100
& 118 & 62.5 & 100 \\
\multicolumn{1}{c|}{FABMAP} & 1124 & 32.2 & 97.7
& 1162 & 23.4 & 49
& 1021 & 35.3 & 98\\
\multicolumn{1}{c|}{DBoW2} & 460 & 57.2 & 100
& 448 & 38.9 & 100
& 355 & 54.0 & 100 \\
\bottomrule
\end{tabular}
\caption{Comparison of FABMAP, DBoW2 and the proposed method on different datasets.}
\label{table:comparison of different methods}
\end{center}
\end{table*}
\subsection{Experiment result on public dataset}
We conduct the test on an intel\textregistered\;NUC mini computer, which is widely used in robotics-related applications. The proposed method is tested on the datasets mentioned above, and the loop closure detection results are collected and displayed in Matlab for visualization. An example of our loop closure detection approach on the KITTI dataset is shown in Fig. \ref{fig:kitti_result}. In the figure we plot the moving trajectory of the camera: the ground truth loop closures are marked with black circles in the first image, while the detection results are marked with red circles in the second image. Each circle refers to a loop closure pair. Intuitively, there is no false positive and most loop closure places are identified. Our proposed method achieves a recall rate of 54.9\% and a recall precision of 100\%, which is very satisfactory. More test results can be found in Table \ref{table:Experiment results on different dataset.}. In total we pick five recordings with loop closures from the KITTI dataset, and our method achieves an average recall rate of more than 50\% without any false positives. It also achieves 20\% on the New College dataset and 27\% on the City Center dataset without false positives. Meanwhile, our method still runs at a high speed of 10 Hz on average.
\subsection{Comparison with other methods}
We further compare our method with state-of-the-art methods, namely FABMAP and DBoW2 \cite{cummins2008fab,galvez2012bags}. For consistency, all experiments are conducted on the intel\textregistered\;NUC mini computer. To make the comparison clear, we pick the largest datasets with loop closures, since the efficiency gap widens as the database size increases. We use KITTI sequences 00, 02 and 05, with more than 10k frames in total. We first test KITTI sequence 05 on each method; the results are shown in Fig. \ref{fig:result_comparison}. In the experiments, we finely tuned the thresholds of both FABMAP and DBoW2 to obtain the best recall rate and recall precision, whereas our approach does not require tuning any parameter for a specific dataset. Besides, both FABMAP and DBoW2 require offline training on a similar dataset in advance, while the proposed method does not. Our method reports most of the loop closure places correctly, while FABMAP produces false positives and DBoW2 fails to report loop closures in some places. The results on the remaining sequences are shown in Table \ref{table:comparison of different methods}. The proposed method is 3 times faster than DBoW2 and 9 times faster than FABMAP. In our approach, we use the sophisticated SURF feature to achieve high precision, because feature-wise comparison does not occur frequently. Hence our approach also provides reliable precision and recall rate. A demonstration of the experiment can be found at \url{https://youtu.be/YCRd3N0LwSA}.
\section{Conclusion}
In this paper, we have presented a fast loop closure detection method based on binary content. Traditional approaches such as FABMAP and DBoW2 use feature descriptors to compress the image content and build a descriptor vocabulary for indexing. However, these methods require intensive numerical computation to estimate the similarity of two images, which is less efficient than binary operations. We observe that operations on binary images can achieve similar results at a higher speed than feature descriptors. Based on this observation, we proposed a new framework for loop closure detection consisting of three parts: binary content construction, fast image retrieval and precise loop closure detection. The experiments have demonstrated that it detects most loop closure places without false positives. The proposed method was also compared with the state-of-the-art methods FABMAP and DBoW2, and the results show that it outperforms them in both recall rate and speed. In addition, no offline training is required, so the approach is easy to implement.
\section*{ACKNOWLEDGMENT}
The author would like to thank Mr. Wang Chen for many great suggestions during the course of this research work.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
The noncommutative generalization of It\^{o} stochastic calculus developed
in [1--6] gives an adequate instrument for studying the behavior of open
quantum dynamical systems in a singular coupling with Bose stochastic
fields. The quantum stochastic (QS) calculus enables us to solve the old
problem of the stochastic description of continuous collapse of a quantum
system under continuous observation by using the stochastic theory of
quantum nondemolition measurements and filtering theory [7--9]. This gives
examples of stochastic nonunitary, nonstationary and even nonadapted
evolution equations in Hilbert space, the solution of which requires one to
define the chronologically ordered stochastic exponents of operators and
maps in an appropriate way.
Here we solve this general problem in the framework of a new QS calculus in
Fock space, based on the explicit definition of the QS integrals free of the
adaptedness restriction in a uniform inductive topology, given in [10]. We
derive the general (nonadapted) It\^{o} formula as a differential of the
Wick formula for the normal ordered products, represented in an inductive $%
\star $-algebra with respect to an indefinite metric structure. The QS
generalization of It\^{o} formula for adapted processes was obtained by
Hudson and Parhasarathy in [1], where the unitary QS evolution was
constructed for the case of time and field independent QS generators $L$.
They used the QS integral for an adapted operator-valued function $D_{t}$ as
the limit of It\^{o} integral sums in the weak operator topology, defined as
in classical case due to commutativity of forward QS differentials $\mathrm{d%
}\Lambda (t)=\Lambda (t+\mathrm{d}t)-\Lambda (t)$ with $D_{t}$. In this
approach the QS evolution for nonstationary generating operators of QS
differential equations was obtained for some finite dimensional cases by
Holevo [11].
Another definition of QS integrals, based on the Berezin-Bargmann calculus
in terms of kernels of operators in Fock space was proposed by Maassen [3].
One can show that Maassen kernel calculus corresponds to the particular
cases of our QS calculus, which is given directly in terms of the Fock
representation of integrated operators, instead of kernels [3,4]. Using this
new calculus we also construct the explicit solution of the nonstationary,
non-Markovian, even nonadapted QS Langevin equations for a QS differentiable
stochastic process in the sense of [12,13] over a unital $\star $-algebra
$\mathcal{A}\subseteq \mathcal{B}(\mathcal{H})$, as the Fock representation of
a recursively defined operator-valued process in a pseudo-Hilbert space
with noninner QS-integrable generators. Such a QS evolution in the Markovian
stationary case was constructed recently by Evans and Hudson [14], and in the
nonstationary case by Lindsay and Parthasarathy [15]. We shall obtain the
existence and uniqueness of the Evans-Hudson flow in the finite-dimensional
Markovian case by estimating the explicit solution in the introduced inductive
uniform topology under natural integrability conditions on the time-dependent
structural coefficients.
\section{Nonadapted QS integrals and differentials}
Let $\mathcal{H}$ be a Hilbert space with probability vectors $h\in \mathcal{%
H}\ ,\ \Vert h\Vert =1$, of a quantum dynamical object, described at any
instant $t\in \mathbb{R}_{+}$ by the algebra $\mathcal{B}(\mathcal{H})$ of
all linear bounded operators $L$ in $\mathcal{H}$ with Hermitian involution $%
L\rightarrow L^{\ast }$ and identity operator $I$. Let $X$ be a Borel space
with a positive measure $\mathrm{d}x$, say $X=\mathbb{R}_{+}\times \mathbb{R}%
^{d}$, and let $\{\mathcal{E}(x),x\in X\}$ be a family of complex Euclidean
subspaces $\mathcal{E}(x)\subseteq \mathcal{K}$ of a Hilbert space $\mathcal{%
K}$ (usually a dense subspace $\mathcal{E}$ of an infinite-dimensional $%
\mathcal{K}$) describing the quantum field (noise) at a point $x\in X$ of a
dimensionality $\mathrm{\dim }\mathcal{E}(x)\leq \infty $, and $\mathcal{E}%
_{-}\left( x\right) \supseteq \mathcal{K}$ be their duals, identified with
the completion of $\mathcal{K}$ (respectively to a norm $\left\Vert
k\right\Vert _{-}\leq \left\langle k\mid k\right\rangle \equiv \left\Vert
k\right\Vert _{1}$, say dual to a Hilbert norm $\left\Vert k\right\Vert \geq
\left\Vert k\right\Vert _{1}$ on $\mathcal{E}\left( x\right) =\mathcal{E}$).
We denote by $\mathcal{E}\subseteq L^{2}\left( X\right) \otimes \mathcal{K}$
the Hilbert integral $\int^{\oplus }\mathcal{E}(x)\mathrm{d}x$ of the field
state spaces $\mathcal{E}(x)$, that is the space of all square integrable
vector-functions $k\colon x\rightarrow k(x)\in \mathcal{E}(x)$,
\begin{equation*}
\langle k|k\rangle =\int \Vert k(x)\Vert ^{2}\mathrm{d}x<\infty \ ,\ \Vert
k(x)\Vert ^{2}=\langle k|k\rangle (x),
\end{equation*}%
and by $\Gamma (\mathcal{K})$ the Fock space of symmetrical tensor-functions
$k(x_{1},\dots ,x_{n})$, $n=0,1,\dots ,$ with values in $\mathcal{E}%
(x_{1})\otimes \dots \otimes \mathcal{E}(x_{n})$. Let us assume the absolute
continuity $\mathrm{d}x=\lambda (t,\mathrm{d}x)\mathrm{d}t$ with respect to
a measurable map $t\colon X\rightarrow \mathbb{R}_{+}$, say $t(x)=t$, $%
\lambda (t,\mathrm{d}x)=\mathrm{d}\mathbf{x}$ for $x=(t,\mathbf{x})\in
\mathbb{R}_{+}\times \mathbb{R}^{d}$, such that
\begin{equation*}
\int_{\Delta }f(t(x))\mathrm{d}x=\int_{0}^{\infty }f(t)\lambda (t,\Delta )%
\mathrm{d}t
\end{equation*}%
for any integrable $\Delta \subseteq X$ and essentially bounded function $%
f\colon \mathbb{R}_{+}\rightarrow \mathbb{C}$. Then one can represent the
Fock space $\Gamma (\mathcal{K})$ as the Hilbert integral $\mathcal{F}=\int_{%
\mathcal{X}}^{\oplus }\mathcal{E}^{\otimes }(\varkappa )\mathrm{d}\varkappa $
of the functions
\begin{equation*}
k\colon \varkappa \rightarrow k(\varkappa )\in \mathcal{E}^{\otimes
}(\varkappa ),\mathcal{E}^{\otimes }(\varkappa )=\otimes _{x\in \varkappa }%
\mathcal{E}(x)
\end{equation*}%
over the set $\mathcal{X}$ of all finite chains $\varkappa =(x_{1},\dots
,x_{n})$, identified with the indexed subsets $\{x_{1},\dots ,x_{n}\}\subset
X$ of cardinality $|\varkappa |=n<\infty $ and $\mathrm{d}\varkappa
=\displaystyle{\prod}_{x\in \varkappa }\mathrm{d}x$ under the order $t(x_{1})<\dots
<t(x_{n})$. We shall denote by $t(\varkappa )$ the chains (subsets) $%
\{t(x)|x\in \varkappa \}$, $\emptyset \in \mathcal{X}$ denotes the empty
chain and $1_{\emptyset }\in \mathcal{F}$ denotes the vacuum function: $%
1_{\emptyset }(\varkappa )=0$, if $\varkappa \not=\emptyset $; $1_{\emptyset
}(\emptyset )=1$.
This can be done as in the case $X=\mathbb{R}_{+},t(x)=x$ by the isometry
\begin{equation*}
\sum_{n=0}^{\infty }{\frac{1}{n!}}\int_{X^{n}}\Vert k(\varkappa )\Vert ^{2}%
\mathrm{d}\varkappa =\sum_{n=0}^{\infty }\;\idotsint\nolimits_{t_{1}<\dots
<t_{n}}\Vert k(x_{1},\dots ,x_{n})\Vert _{1}^{2}\mathrm{d}x_{1}\dots \mathrm{%
d}x_{n},
\end{equation*}%
where the integral on the right-hand side is taken over all $\varkappa
=\{x_{1}<\dots <x_{n}\}$ with distinct $t_{i}=t(x_{i})$, due to
\begin{equation*}
{\frac{1}{n!}}\int_{0}^{\infty }\dots \int_{0}^{\infty }f(t_{1},\ldots
,t_{n})\mathrm{d}t_{1}\cdots \mathrm{d}t_{n}=\int_{0}^{\infty }\mathrm{d}%
t_{1}\int_{t_{1}}^{\infty }\mathrm{d}t_{2}\dots \int_{t_{n-1}}^{\infty }%
\mathrm{d}t_{n}f(t_{1},t_{2},\dots ,t_{n})
\end{equation*}%
for the symmetrical function%
\begin{equation*}
f(t_{1},\dots ,t_{n})=\int_{X^{n}}\Vert k(\varkappa )\Vert
^{2}\displaystyle{\prod}_{i=1}^{n}\lambda (t_{i},\mathrm{d}x_{i}).
\end{equation*}
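As a quick check of this symmetrization identity, take $n=2$ and the illustrative symmetric function $f(t_{1},t_{2})=t_{1}+t_{2}$, restricting both integrals to $[0,1]$ (an example of ours, not from the original text): both sides equal $\tfrac{1}{2}$, since
\begin{equation*}
\frac{1}{2!}\int_{0}^{1}\!\!\int_{0}^{1}(t_{1}+t_{2})\,\mathrm{d}t_{1}\mathrm{d}t_{2}=\frac{1}{2},\qquad
\int_{0}^{1}\mathrm{d}t_{1}\int_{t_{1}}^{1}(t_{1}+t_{2})\,\mathrm{d}t_{2}
=\int_{0}^{1}\Bigl(t_{1}(1-t_{1})+\frac{1-t_{1}^{2}}{2}\Bigr)\mathrm{d}t_{1}=\frac{1}{2}.
\end{equation*}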
One can consider the set $X$ as the space with a causal preorder $\lesssim $
[12], and the increasing map $t\colon x\lesssim x^{\prime }\Rightarrow
t(x)\leq t(x^{\prime })$ as the local time, if for any $x\in X$ and $%
t^{\prime }>t(x)$ there exists $x^{\prime }\in X$ such that $t(x^{\prime
})=t^{\prime }$ (As it is for the map $t(x)=t$ with respect to the Galilean
or Einsteinian order in space-time $X=\mathbb{R}_{+}\times \mathbb{R}^{%
\mathrm{d}}$).
Let us denote by $\mathcal{F}(\xi )=\int_{\mathcal{X}}^{\oplus }\xi
^{|\varkappa |}\mathcal{E}_{\xi }^{\otimes }(\varkappa )\mathrm{d}\varkappa $
for all $\xi >0$ the Hilbert scale of Fock spaces $\mathcal{F}(\xi
)\subseteq \mathcal{F}(\zeta )$, $\xi \geq \zeta $, defined by the scalar
products
\begin{equation*}
\Vert k\Vert ^{2}(\xi )=\sum_{n=0}^{\infty }\xi
^{n}\idotsint\nolimits_{0\leq t_{1}<\dots <t_{n}<\infty }\Vert k(x_{1},\dots
,x_{n})\Vert _{\xi }^{2}\mathrm{d}x_{1}\cdots \mathrm{d}x_{n}\ ,
\end{equation*}%
(where $\Vert k(\varkappa )\Vert _{\xi }^{2}=\Vert k(\varkappa )\Vert
_{-}^{2},$ $\xi <1$ for $k\left( \varkappa \right) \in \mathcal{E}%
_{-}^{\otimes \varkappa }$ and $\Vert k(\varkappa )\Vert _{\xi }^{2}=\Vert
k(\varkappa )\Vert ^{2},$ $\xi >1$ for $k\left( \varkappa \right) \in
\mathcal{E}^{\otimes \varkappa }$) by $\mathcal{G}(\xi )=\mathcal{H}\otimes
\mathcal{F}(\xi )$ the Hilbert tensor products, by $\mathcal{G}^{+}=\mathcal{%
G}(\xi ^{+})$, $\mathcal{G}=\mathcal{G}(1)$, $\mathcal{G}_{-}=\mathcal{G}%
(\xi _{-})$ the Hilbert subspaces $\mathcal{G}^{+}\subseteq \mathcal{G}%
\subseteq \mathcal{G}_{-}$ for some $\xi ^{+}\geq 1\geq \xi _{-}$, and let
us note that any linear operator $L\in \mathcal{B}(\mathcal{H})$ can be
considered as $(\xi ^{+},\xi _{-})$-continuous (bounded) operator $B:%
\mathcal{G}^{+}\rightarrow \mathcal{G}_{-}$ of the form $B=L\otimes \hat{1}$%
, where $\hat{1}$ means the identity operator $\hat{1}=\int_{\mathcal{X}%
}^{\oplus }I^{\otimes }(\varkappa )\mathrm{d}\varkappa \equiv I^{\otimes }$
in $\mathcal{F}=\mathcal{F}(1)$, $I^{\otimes }(\varkappa )=\otimes _{x\in
\varkappa }I(x)$, considered as the identical map $\mathcal{F}(\xi
^{+})\rightarrow \mathcal{F}(\xi _{-})$. Following [2,8] we define the QS
integral $\Lambda ^{t}(\mathbf{D})=\int_{0}^{t}\mathrm{d}\Lambda ^{s}(%
\mathbf{D})$ for a table $\mathbf{D}=(D_{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$
of functions $\{D_{\nu }^{\mu }(x),x\in X\}$ with values in continuous
operators
\begin{equation*}
D_{0}^{0}(x):\mathcal{G}^{+}\otimes \mathcal{E}(x)\rightarrow \mathcal{G}%
_{-}\otimes \mathcal{E}_{-}(x)\ ,\ D_{+}^{-}(x):\mathcal{G}^{+}\rightarrow
\mathcal{G}_{-},\eqno(1.1a)
\end{equation*}%
\begin{equation*}
D_{+}^{0}(x):\mathcal{G}^{+}\rightarrow \mathcal{G}_{-}\otimes \mathcal{E}%
_{-}(x)\ ,\ \;D_{0}^{-}(x):\mathcal{G}^{+}\otimes \mathcal{E}(x)\rightarrow
\mathcal{G}_{-}\ ,\eqno(1.1.b)
\end{equation*}%
as the sum $\Lambda ^{t}(\mathbf{D})=\sum_{\mu ,\nu }\Lambda _{\mu }^{\nu
}(t,D_{\nu }^{\mu })$ of the operators $\Lambda _{\mu }^{\nu }(t,D):a\in
\mathcal{G}\mapsto \Lambda _{\mu }^{\nu }(t,D)a$, acting as
\begin{eqnarray*}
&[\Lambda _{0}^{0}(t,D_{0}^{0})a](\varkappa )=\sum_{x\in \varkappa
^{t}}[D_{0}^{0}(x)\dot{a}(x)](\varkappa \backslash x)&(1.2a) \\
&[\Lambda _{0}^{+}(t,D_{+}^{0})a](\varkappa )=\sum_{x\in \varkappa
^{t}}[D_{+}^{0}(x)a](\varkappa \backslash x)&(1.2b) \\
&[\Lambda _{-}^{0}(t,D_{0}^{-})a](\varkappa )=\int_{X^{t}}[D_{0}^{-}(x)\dot{a%
}(x)](\varkappa )\mathrm{d}x&(1.2c) \\
&[\Lambda _{-}^{+}(t,D_{+}^{-})a](\varkappa
)=\int_{X^{t}}[D_{+}^{-}(x)a](\varkappa )\mathrm{d}x\ ,&(1.2d)
\end{eqnarray*}%
Here $\varkappa ^{t}=\varkappa \cap X^{t}\ ,\ X^{t}=\{x\in X:t(x)<t\}$,$\
\varkappa \backslash x=\{x^{\prime }\in \varkappa :x^{\prime }\not=x\}$, and
$a\mapsto \dot{a}(x)$ is the point (Malliavin [16,17]) derivative $\mathcal{G%
}^{+}\rightarrow \mathcal{G}^{+}\otimes \mathcal{E}(x)$, evaluated in the
Fock representation almost everywhere as $[\dot{a}(x)](\varkappa )=a(x\sqcup
\varkappa )$, where $x\sqcup \varkappa =\{x,\varkappa :x\notin \varkappa \}$
is the disjoint union of the chains $x,\,\varkappa \in \mathcal{X}$. The
operator-functions (1.2) were defined in [1] as the limits of the QS It\^{o}
integral sums with respect to the gauge, creation, annihilation, and time
processes respectively for the bounded adapted operator valued functions $%
D(x)=A(x)\otimes \hat{1}_{[t}$, where $t=t(x),\hat{1}_{[t}=I_{[t}^{\otimes }$
is the identity operator in $\mathcal{F}_{[t}=\int_{\mathcal{X}%
_{[t}}^{\oplus }\mathcal{E}^{\otimes }(\varkappa )\mathrm{d}\varkappa $,$\
\mathcal{X}_{[t}=\{\varkappa \in \mathcal{X}|t(\varkappa )\geq t\}$. As
follows from Theorem 1 in [9,10], the operators (1.2) are densely defined in $%
\mathcal{G}$ as $(\zeta ^{+},\zeta _{-})$-continuous operators $\mathcal{G}%
(\zeta ^{+})\rightarrow \mathcal{G}(\zeta _{-})$ for any $\zeta ^{+}>\xi
^{+}\ ,\ \zeta _{-}<\xi _{-}$ even for the nonadapted and unbounded $D$,
satisfying local QS-integrability conditions
\begin{equation*}
\Vert D_{0}^{0}\Vert _{\xi ^{+},\infty }^{\xi _{-},t}<\infty ,\Vert
D_{+}^{0}\Vert _{\xi ^{+},2}^{\xi _{-},t}<\infty ,\Vert D_{0}^{-}\Vert _{\xi
^{+},2}^{\xi _{-},t}<\infty ,\Vert D_{+}^{-}\Vert _{\xi ^{+},1}^{\xi
_{-},t}<\infty \ ,\eqno(1.3)
\end{equation*}%
for all $t\in \mathbb{R}_{+}$ and some $\xi _{-}\ ,\ \xi ^{+}>0$, where
\begin{equation*}
\Vert D\Vert _{\xi ^{+},p}^{\xi _{-},t}=\left( \int_{X^{t}}\left( \Vert
D(x)\Vert _{\xi ^{+}}^{\xi _{-}}\right) ^{p}\mathrm{d}x\right) ^{1/p},\Vert
D\Vert _{\xi ^{+}}^{\xi _{-}}=\sup \{\Vert D\mathbf{a}\Vert (\xi _{-})/\Vert
\mathbf{a}\Vert (\xi ^{+})\}.
\end{equation*}
Let us now define the multiple QS integral
\begin{equation*}
\Lambda _{\lbrack 0,t)}(B)=\sum_{n=0}^{\infty }\;\idotsint_{0\leq
t_{1}<\dots <t_{n}<t}\mathrm{d}\Lambda ^{t_{1},\dots ,t_{n}}(B)\equiv
\int_{0\leq \tau <t}\mathrm{d}\Lambda ^{\tau }(B)
\end{equation*}%
for the operator-valued function $B(\pmb\vartheta )$ on the table $\pmb%
\vartheta =(\vartheta _{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ of four subsets
$\vartheta _{\nu }^{\mu }\in \mathcal{X}$ with values
\begin{equation*}
B\left(
\begin{matrix}
\vartheta _{0}^{-} & \vartheta _{+}^{-} \\
\vartheta _{0}^{0} & \vartheta _{+}^{0}%
\end{matrix}%
\right) :\mathcal{G}(\eta ^{+})\otimes \mathcal{E}^{\otimes }(\vartheta
_{0}^{-})\otimes \mathcal{E}^{\otimes }(\vartheta _{0}^{0})\rightarrow
\mathcal{G}(\eta _{-})\otimes \mathcal{E}_{-}^{\otimes }(\vartheta
_{0}^{0})\otimes \mathcal{E}_{-}^{\otimes }(\vartheta _{+}^{0})\eqno(1.4)
\end{equation*}%
as the operators in $\mathcal{G}$ with the action
\begin{equation*}
\lbrack \Lambda _{\lbrack 0,t)}(B)a](\varkappa )=\sum_{\vartheta
_{0}^{0}\sqcup \vartheta _{+}^{0}\subseteq \varkappa ^{t}}\int_{\mathcal{X}%
^{t}}\int_{\mathcal{X}^{t}}[B(\pmb\vartheta )\dot{a}(\vartheta
_{0}^{-}\sqcup \vartheta _{0}^{0})](\vartheta _{-}^{0})\mathrm{d}\vartheta
_{0}^{-}\mathrm{d}\vartheta _{+}^{-}\ .\eqno(1.5)
\end{equation*}
Here $\vartheta _{-}^{0}=\varkappa \cap \overline{(\vartheta _{0}^{0}\sqcup
\vartheta _{+}^{0})}=\varkappa \backslash \vartheta _{0}^{0}\backslash
\vartheta _{+}^{0}$ is the difference of a subset $\varkappa \subset X$ and
the partition $\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0}$ as the disjoint
union $\vartheta _{0}^{0}\bigcup \vartheta _{+}^{0}\subseteq \varkappa $, $%
\vartheta _{0}^{0}\cap \vartheta _{+}^{0}=\emptyset $, $\mathcal{X}%
^{t}=\{\varkappa \in \mathcal{X}|\varkappa \subset X^{t}\}$, and the point
(Malliavin [16]) derivative
\begin{equation*}
\dot{a}(\vartheta )=\int^{\oplus }a(\varkappa \sqcup \vartheta )\mathrm{d}%
\varkappa \in \mathcal{G}^{+}\otimes \mathcal{E}^{\otimes }(\vartheta )
\end{equation*}%
is defined for almost all $\varkappa \in \mathcal{X}$, $\varkappa \cap
\vartheta =\emptyset $ as $\dot{a}(\varkappa ,\vartheta )=a(\varkappa \sqcup
\vartheta )$ by a vector-function $a\in \mathcal{G}^{+}$. We shall say that
the function $B$ is locally QS integrable (in a uniform inductive limit), if
for any $t\in \mathbb{R}_{+}$ there exists a pair $(\eta ^{\bullet },\eta
_{\bullet })$ of triples $\eta ^{\bullet }=(\eta ^{-},\eta ^{0},\eta ^{+})$,$%
\;\eta _{\bullet }=(\eta _{-},\eta _{0},\eta _{+})$ of numbers $\eta ^{\mu
}>0$, $\eta _{\nu }>0$, for which $\Vert B\Vert _{\eta ^{\bullet }}^{\eta
_{\bullet }}(t)<\infty $, where
\begin{equation*}
\Vert B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t)=\int_{\mathcal{X}^{t}}%
\mathrm{d}\vartheta _{+}^{-}\left( \int_{\mathcal{X}^{t}}\int_{\mathcal{X}%
^{t}}{\frac{(\eta _{+})^{|\vartheta _{+}^{0}|}}{(\eta ^{-})^{|\vartheta
_{0}^{-}|}}}\sup_{\mathcal{X}^{t}}{\frac{(\eta _{0})^{|\vartheta _{0}^{0}|}}{%
(\eta ^{0})^{|\vartheta _{0}^{0}|}}}\left( \Vert B(\pmb\vartheta )\Vert
_{\eta ^{+}}^{\eta _{-}}\right) ^{2}\mathrm{d}\vartheta _{+}^{0}\mathrm{d}%
\vartheta _{0}^{-}\right) ^{1/2}\eqno(1.6)
\end{equation*}%
(sup is taken as essential supremum over $\vartheta _{0}^{0}\in \mathcal{X}%
^{t}$). As follows from the next theorem, the function $B(\pmb\vartheta )$
in the QS integral (1.5) need only be defined up to the equivalence $%
B\approx 0\Leftrightarrow \Vert B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet
}}(t)=0$ for all $\eta _{\bullet },\eta ^{\bullet }$ and $t$. In particular,
one can define it only for the tables $\pmb\vartheta =(\vartheta _{\nu
}^{\mu })$, which are partitions $\varkappa =\sqcup \vartheta _{\nu }^{\mu }$
of the chains $\varkappa \in \mathcal{X}$, i.e. for $\pmb\vartheta =\sqcup
_{x\in \varkappa }\mathbf{x}$, where $\mathbf{x}$ denotes one of the four
single-point (elementary) tables
\begin{equation*}
\mathbf{x}_{0}^{0}=\left(
\begin{matrix}
\emptyset & \emptyset \cr x & \emptyset%
\end{matrix}%
\right) ,\ \mathbf{x}_{+}^{0}=\left(
\begin{matrix}
\emptyset & \emptyset \cr\emptyset & x%
\end{matrix}%
\right) ,\ \mathbf{x}_{0}^{-}=\left(
\begin{matrix}
x, & \emptyset \cr\emptyset & \emptyset%
\end{matrix}%
\right) ,\ \mathbf{x}_{+}^{-}=\left(
\begin{matrix}
\emptyset & x\cr\emptyset & \emptyset%
\end{matrix}%
\right) \ .
\end{equation*}
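A table-partition $\pmb\vartheta =\sqcup _{x\in \varkappa }\mathbf{x}$ thus
amounts to assigning one of the four elementary types to each point of
$\varkappa $. This combinatorial identification can be checked on a toy
finite chain; the following sketch (a counting-measure illustration with
hypothetical names, not part of the construction above) enumerates all such
tables and confirms there are $4^{|\varkappa |}$ of them.

```python
from itertools import product

def tables(points):
    """Enumerate all tables (t00, tp0, t0m, tpm) of four pairwise
    disjoint subsets whose union is `points`: each point receives one
    of the four elementary types x_0^0, x_+^0, x_0^-, x_+^-."""
    for labels in product(range(4), repeat=len(points)):
        blocks = [[], [], [], []]
        for x, k in zip(points, labels):
            blocks[k].append(x)
        yield tuple(map(frozenset, blocks))

chain = (1, 2, 3)
all_tables = list(tables(chain))
# every point of the chain lies in exactly one of the four blocks
assert all(frozenset(chain) == b0 | b1 | b2 | b3
           for b0, b1, b2, b3 in all_tables)
assert len(all_tables) == 4 ** len(chain)  # 4^n labelled partitions
```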
\begin{theorem}
If $B$ is a locally QS integrable function (1.4), then the multiple integral
(1.5) is a $(\xi ^{+},\xi _{-})$ continuous operator $U^{t}:\mathcal{G}(\xi
^{+})\rightarrow \mathcal{G}(\xi _{-})$ for $\xi ^{+}\geq \sum_{\mu }\eta
^{\mu },\xi _{-}^{-1}\geq \sum_{\nu }\eta _{\nu }^{-1}$, having the estimate
\begin{equation*}
\Vert \Lambda _{\lbrack 0,t)}(B)a\Vert (\xi _{-})\leq \ \Vert B\Vert _{\eta
^{\bullet }}^{\eta _{\bullet }}\Vert a\Vert (\xi ^{+})\ ,\;\forall a\in
\mathcal{G}(\xi ^{+})\ .
\end{equation*}%
The formally conjugated in $\mathcal{G}$ operator is defined as QS integral
\begin{equation*}
\Lambda _{\lbrack 0,t)}(B)^{\ast }=\Lambda _{\lbrack 0,t)}(B^{{\star }}),B^{{%
\star }}(\pmb\vartheta )=B(\pmb\vartheta ^{{\star }})^{\ast },\pmb\vartheta
^{{\star }}=\left(
\begin{matrix}
\vartheta _{+}^{0} & \vartheta _{+}^{-}\cr\vartheta _{0}^{0} & \vartheta
_{0}^{-}%
\end{matrix}%
\right) \ ,\eqno(1.7)
\end{equation*}%
which is the continuous operator $\mathcal{G}(1/\xi _{-})\rightarrow
\mathcal{G}(1/\xi ^{+})$ with
\begin{equation*}
\Vert \Lambda _{\lbrack 0,t)}(B^{{\star }})\Vert _{1/\xi _{-}}^{1/\xi
^{+}}=\Vert \Lambda _{\lbrack 0,t)}(B)\Vert _{\xi ^{+}}^{\xi _{-}}\equiv
\sup \{\Vert U^{t}a\Vert (\xi _{-})/\Vert a\Vert (\xi ^{+})\}\ .
\end{equation*}%
The QS process $U^{t}=\Lambda _{\lbrack 0,t)}(B)$ has a QS differential $%
\mathrm{d}U^{t}=\mathrm{d}\Lambda ^{t}(\mathbf{D})$ in the sense
\begin{equation*}
\Lambda _{\lbrack 0,t)}(B)=B(\pmb\emptyset )+\Lambda ^{t}(\mathbf{D})\ ,\
D_{\nu }^{\mu }(x)=\Lambda _{\lbrack 0,t(x))}(\dot{B}(\mathbf{x}_{\nu }^{\mu
}))
\end{equation*}%
with $(\xi ^{+},\xi _{-})$-continuous QS derivatives $\mathbf{D}=(D_{\nu
}^{\mu })$, densely defined as in (1.5) by $\dot{B}(\mathbf{x},\pmb\vartheta
)=B(\mathbf{x}\sqcup \pmb\vartheta )$ for almost all $\pmb\vartheta
=(\vartheta _{\nu }^{\mu })$, where $\mathbf{x}=(\varkappa _{\nu }^{\mu })$
is one of the elementary tables $\mathbf{x}_{\nu }^{\mu }$, $\mu \not=+$, $%
\nu \not=-$ with $\varkappa _{\nu }^{\mu }=x$, and $\vartheta _{\nu }^{\mu
}\in \mathcal{X}^{t(x)}$.
Let $B(\pmb\vartheta )$ be defined for any partition $\varkappa =\sqcup
\vartheta _{\nu }^{\mu }\in \mathcal{X}$ as the solution $B(\pmb\vartheta )=%
\mathbf{L}^{\triangleleft }(\pmb\vartheta )\odot B(\pmb\emptyset )$ of the
recurrence
\begin{equation*}
B(\mathbf{x}\sqcup \pmb\vartheta )=L(\mathbf{x})\odot B(\pmb\vartheta
),\vartheta _{\nu }^{\mu }\in \mathcal{X}^{t(x)}
\end{equation*}%
with $B(\pmb\emptyset )=T^{0}\otimes \hat{1}$, i.e.
\begin{equation*}
\dot{B}(\mathbf{x},\pmb\vartheta )=(L(\mathbf{x})\otimes \hat{1})\cdot B(\pmb%
\vartheta )\ ,\ B(\pmb\emptyset )=T^{0}\otimes \hat{1}\eqno(1.8)
\end{equation*}%
with a table $\mathbf{L}=(L_{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ of
operator-valued functions $L_{\nu }^{\mu }(x)=L(\mathbf{x}_{\nu }^{\mu })$,%
\begin{eqnarray*}
L_{0}^{0}(x) &:&\mathcal{H}\otimes \mathcal{E}(x)\rightarrow \mathcal{H}%
\otimes \mathcal{E}(x)\ ,\;L_{+}^{-}(x):\mathcal{H}\rightarrow \mathcal{H}\
,(1.9a) \\
L_{+}^{0}(x) &:&\mathcal{H}\rightarrow \mathcal{H}\otimes \mathcal{E}(x)\
,\;L_{0}^{-}(x):\mathcal{H}\otimes \mathcal{E}(x)\rightarrow \mathcal{H}\
,(1.9b)
\end{eqnarray*}%
$L\odot B=(L\otimes \hat{1})\cdot B$, and
\begin{equation*}
B(\pmb\varkappa )\cdot B(\pmb\vartheta )=(B(\pmb\varkappa )\otimes
I^{\otimes }(\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0}))(B(\pmb\vartheta
)\otimes I^{\otimes }(\varkappa _{0}^{-}\sqcup \varkappa _{0}^{0}))\ ,
\end{equation*}%
where $I^{\otimes }(\varkappa )=\otimes _{x\in \varkappa }I(x),I(x)$ is the
identity operator in $\mathcal{E}(x)$. ($B(\mathbf{x})\cdot B(\pmb\vartheta
)$ in (1.8) means the usual product of operators in $\mathcal{G}$ if $\dim
\mathcal{E}=1$.)
Then the process $U^{t}=\Lambda _{\lbrack 0,t)}(B)$ satisfies the QS
differential equation $\mathrm{d}U^{t}=\mathrm{d}\Lambda ^{t}(\mathbf{L}%
\odot U^{t})$ in the sense
\begin{equation*}
U^{t}=U^{0}+\Lambda ^{t}(\mathbf{L}\odot U^{t})\ ,\ (\mathbf{L}\odot U)_{\nu
}^{\mu }(x)=(L(\mathbf{x}_{\nu }^{\mu })\otimes \hat{1})U^{t(x)}\ .\eqno%
(1.10)
\end{equation*}
\end{theorem}
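The structure of the solution of the recurrence (1.8) is perhaps most
transparent in the scalar-noise case. For $\dim \mathcal{E}=1$, where
$\odot $ reduces to the ordinary operator product, $B(\pmb\vartheta )=%
\mathbf{L}^{\triangleleft }(\pmb\vartheta )\odot B(\pmb\emptyset )$ is the
time-ordered product $L(\mathbf{x}_{n})\cdots L(\mathbf{x}_{1})T^{0}$; the
sketch below (toy $2\times 2$ matrices standing in for operators on
$\mathcal{H}$, an assumption made only for illustration) checks that
unfolding the recurrence point by point reproduces this product.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy single-point generators L(x_1), ..., L(x_4), ordered t1 < ... < t4,
# and initial value B(empty) = T0; 2x2 matrices model operators on H
Ls = [rng.standard_normal((2, 2)) for _ in range(4)]
T0 = np.eye(2)

def B(n):
    """Unfold the recurrence B(x_n ⊔ θ) = L(x_n) · B(θ), B(∅) = T0."""
    if n == 0:
        return T0
    return Ls[n - 1] @ B(n - 1)

# the time-ordered product L(x_n) ··· L(x_1) T0
prod = T0
for L in Ls:
    prod = L @ prod
assert np.allclose(B(len(Ls)), prod)
```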
\begin{proof}
Using the sum-point property%
\begin{equation*}
\int \sum_{\sqcup \vartheta _{\nu }=\vartheta }f(\vartheta _{-},\vartheta
_{0},\vartheta _{+})\mathrm{d}\vartheta =\iiint f(\vartheta _{-},\vartheta
_{0},\vartheta _{+}){\displaystyle{\prod}_{\nu }\mathrm{d}\vartheta _{\nu }}
\end{equation*}%
of the multiple sum-point integral, we obtain from definition (1.5) for $%
a,c\in \mathcal{G}$:
\begin{equation*}
\int \langle c(\vartheta )|[U^{t}a](\vartheta )\rangle \mathrm{d}\vartheta
=\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t}}%
\mathrm{d}\vartheta _{0}^{-}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta
_{+}^{0}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{0}^{0}\langle \dot{c}%
(\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0})|B(\pmb\vartheta )\dot{a}%
(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})\rangle =
\end{equation*}%
\begin{equation*}
\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t}}%
\mathrm{d}\vartheta _{0}^{-}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta
_{+}^{0}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{0}^{0}\langle B(\pmb%
\vartheta )^{\ast }\dot{c}(\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0}))|%
\dot{a}(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})\rangle =\int \langle
\lbrack U^{t{\ast }}c](\vartheta )|a(\vartheta )\rangle \mathrm{d}\vartheta
\ ,
\end{equation*}%
that is $U^{t{\ast }}$ acts as $\Lambda _{\lbrack 0,t)}(B^{{\star }})$ in
(1.5) with $B^{{\star }}(\pmb\vartheta )=B(\pmb\vartheta ^{{\star }})^{\ast
} $. Moreover this equation gives $\Vert \Lambda _{\lbrack 0,t)}(B)\Vert
_{\xi }^{1/\zeta }=\Vert \Lambda _{\lbrack 0,t)}(B^{{\star }})\Vert _{\zeta
}^{1/\xi }$ as
\begin{equation*}
\Vert U\Vert _{\xi }^{1/\zeta }=\sup |\langle c|Ua\rangle |/\Vert
a\Vert (\xi )\Vert c\Vert (\zeta )=\sup |\langle U^{\ast }c|a\rangle |/\Vert
c\Vert (\zeta )\Vert a\Vert (\xi )=\Vert U^{\ast }\Vert _{\zeta }^{1/\xi }\ .
\end{equation*}%
Let us estimate the integral $\langle c|U^{t}a\rangle $, using the Schwarz
inequality
\begin{equation*}
\int \Vert \dot{c}(\vartheta )\Vert (\eta _{-}^{-1})\Vert \dot{a}(\vartheta
)\Vert (\eta _{+})(\eta _{0}/\eta ^{0})^{|\vartheta |/2}\mathrm{d}\vartheta
\leq \Vert \dot{c}\Vert (\eta _{-}^{-1},\eta _{0}^{-1})\Vert \dot{a}\Vert
(\eta ^{+},\eta ^{0})
\end{equation*}%
and the following isometricity property of the multiple derivative:
\begin{equation*}
\Vert \dot{a}\Vert (\xi ,\eta )=\left( \iint \xi ^{|\vartheta |}\eta
^{|\sigma |}\Vert a(\vartheta \sqcup \sigma )\Vert ^{2}\mathrm{d}\vartheta
\mathrm{d}\sigma \right) ^{1/2}=\Vert a\Vert (\xi +\eta )\ .
\end{equation*}%
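The counting-measure analogue of this isometricity is just the binomial
theorem: summing $\xi ^{|\vartheta |}\eta ^{|\sigma |}$ over the ordered
disjoint splits $\vartheta \sqcup \sigma =\varkappa $ of a finite set yields
$(\xi +\eta )^{|\varkappa |}$. A minimal numerical sketch (arbitrary toy
weights, not part of the proof):

```python
from itertools import combinations

def subsets(S):
    """All subsets of a finite set, as frozensets."""
    S = sorted(S)
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield frozenset(c)

S = frozenset({1, 2, 3, 4})
a = {k: 0.1 + len(k) ** 2 for k in subsets(S)}  # arbitrary weights
xi, eta = 0.7, 1.3

# left-hand side: sum over ordered disjoint splits θ ⊔ σ = κ
lhs = sum(xi ** len(t) * eta ** (len(k) - len(t)) * a[k] ** 2
          for k in subsets(S) for t in subsets(k))
# right-hand side: ‖a‖(ξ+η)² via the binomial theorem
rhs = sum((xi + eta) ** len(k) * a[k] ** 2 for k in subsets(S))
assert abs(lhs - rhs) < 1e-9
```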
This gives $|\langle c|U^{t}a\rangle |=|\int \langle c(\varkappa )|[\Lambda
_{\lbrack 0,t)}^{\otimes }(B)a](\varkappa )\rangle \mathrm{d}\varkappa |$
\begin{eqnarray*}
&\leq &\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}%
^{t}}\mathrm{d}\vartheta _{0}^{-}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta
_{+}^{0}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{0}^{0}\Vert \dot{c}%
(\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0})\Vert (\eta _{-}^{-1})\,\Vert
B(\pmb\vartheta )\Vert _{\eta ^{+}}^{\eta _{-}}\Vert \dot{a}(\vartheta
_{0}^{-}\sqcup \vartheta _{0}^{0})\Vert (\eta ^{+}) \\
&\leq &\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}%
^{t}}\mathrm{d}\vartheta _{0}^{-}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta
_{+}^{0}\Vert \dot{c}(\vartheta _{+}^{0})\Vert (\eta _{-}^{-1}+\eta
_{0}^{-1})\Vert \dot{a}(\vartheta _{0}^{-})\Vert (\eta ^{+}+\eta ^{0})\,%
\Vert B\Vert _{\eta ^{+},\eta ^{0}}^{\eta _{-},\eta _{0}}(t,\pmb\vartheta
\backslash \pmb\vartheta _{0}^{0}) \\
&\leq &\Vert c\Vert (\sum_{\nu =-}^{+}\eta _{\nu }^{-1})\Vert a\Vert
(\sum_{\mu =-}^{+}\eta ^{\mu })%
\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta _{+}^{-}\left[ \int_{\mathcal{X}%
^{t}}\mathrm{d}\vartheta _{0}^{-}\int_{\mathcal{X}^{t}}\mathrm{d}\vartheta
_{+}^{0}{\frac{(\eta _{+})^{|\vartheta _{+}^{0}|}}{(\eta ^{-})^{|\vartheta
_{0}^{-}|}}}\Vert B\Vert _{\eta ^{+},\eta ^{0}}^{\eta _{-},\eta _{0}}(t,\pmb%
\vartheta \backslash \pmb\vartheta _{0}^{0})^{2}\right] ^{\frac{1}{2}}
\end{eqnarray*}%
where $\Vert B\Vert _{\eta ^{+},\eta ^{0}}^{\eta _{-},\eta _{0}}(t,\pmb%
\vartheta \backslash \pmb\vartheta _{0}^{0})=\mathrm{esssup}_{\vartheta
_{0}^{0}\in \mathcal{X}^{t}}{\frac{(\eta _{0})^{|\vartheta _{0}^{0}|/2}}{%
(\eta ^{0})^{|\vartheta _{0}^{0}|/2}}}\Vert B(\pmb\vartheta )\Vert _{\eta
^{+}}^{\eta _{-}}$. Hence
\begin{equation*}
|\langle c|U^{t}a\rangle |\leq \Vert c\Vert (\xi _{-}^{-1})\Vert a\Vert (\xi
^{+})\Vert B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t)
\end{equation*}%
for $\xi ^{+}\geq \sum_{\mu }\eta ^{\mu },\xi _{-}^{-1}\geq \sum_{\nu }\eta
_{\nu }^{-1}$.
Using the definition (1.5) and the property
\begin{equation*}
\int_{\mathcal{X}^{t}}f(\vartheta )\mathrm{d}\vartheta =f(\emptyset )+\int_{%
\mathcal{X}^{t}}\mathrm{d}x\int_{\mathcal{X}^{t(x)}}\dot{f}(x,\vartheta )%
\mathrm{d}\vartheta \ ,\ \dot{f}(x,\vartheta )=f(x\sqcup \vartheta )\ ,
\end{equation*}%
one can obtain
\begin{equation*}
\lbrack (U^{t}-U^{0})a](\varkappa )=[(\Lambda _{\lbrack 0,t)}^{\otimes
}(B)-B(\pmb\emptyset ))a](\varkappa )=
\end{equation*}%
\begin{equation*}
\int_{X^{t}}\mathrm{d}x\sum_{\vartheta _{0}^{0}\sqcup \vartheta
_{+}^{0}\subseteq \varkappa }^{t(\vartheta _{\nu }^{0})<t(x)}\int_{\mathcal{X%
}^{t(x)}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t(x)}}\mathrm{d}%
\vartheta _{0}^{-}[\dot{B}(\text{$\mathbf{x}$}_{+}^{-},\pmb\vartheta )\dot{a}%
(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})+\dot{B}(\text{$\mathbf{x}$}%
_{0}^{-},\pmb\vartheta )\dot{a}(x\sqcup \vartheta _{0}^{-}\sqcup \vartheta
_{0}^{0})](\vartheta _{-}^{0})
\end{equation*}%
\begin{equation*}
+\sum_{x\in \varkappa ^{t}}\sum_{\vartheta _{0}^{0}\sqcup \vartheta
_{+}^{0}\subseteq \varkappa }^{t(\vartheta _{\nu }^{0})<t(x)}\int_{\mathcal{X%
}^{t(x)}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t(x)}}\mathrm{d}%
\vartheta _{0}^{-}[\dot{B}(\text{$\mathbf{x}$}_{+}^{0},\pmb\vartheta )\dot{a}%
(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})+\dot{B}(\text{$\mathbf{x}$}%
_{0}^{0},\pmb\vartheta )\dot{a}(x\sqcup \vartheta _{0}^{-}\sqcup \vartheta
_{0}^{0})](\vartheta _{-}^{0})
\end{equation*}%
\begin{equation*}
=\int_{X^{t}}\mathrm{d}x[D_{+}^{-}(x)a+D_{0}^{-}(x)\dot{a}(x)](\varkappa
)+\sum_{x\in \varkappa ^{t}}[D_{+}^{0}(x)a+D_{0}^{0}(x)\dot{a}(x)](\varkappa
/x)\ .
\end{equation*}%
Hence
\begin{equation*}
U^{t}-U^{0}=\Lambda _{-}^{+}(t,D_{+}^{-})+\Lambda
_{-}^{0}(t,D_{0}^{-})+\Lambda _{0}^{+}(t,D_{+}^{0})+\Lambda
_{0}^{0}(t,D_{0}^{0})\ ,
\end{equation*}%
where $\Lambda _{\mu }^{\nu }(t)$ are the QS integrals (1.2) of operators
\begin{equation*}
\lbrack D_{+}^{\mu }(x)a](\varkappa )=\sum_{\vartheta _{0}^{0}\sqcup
\vartheta _{+}^{0}\subseteq \varkappa }^{t(\vartheta _{\nu }^{0})<t(x)}\int_{%
\mathcal{X}^{t(x)}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t(x)}}%
\mathrm{d}\vartheta _{0}^{-}[\dot{B}(\text{$\mathbf{x}$}_{+}^{\mu },\pmb%
\vartheta )\dot{a}(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})](\vartheta
_{-}^{0}),
\end{equation*}%
\begin{equation*}
\lbrack D_{0}^{\mu }(x)b](\varkappa )=\sum_{\vartheta _{0}^{0}\sqcup
\vartheta _{+}^{0}\subseteq \varkappa }^{t(\vartheta _{\nu }^{0})<t(x)}\int_{%
\mathcal{X}^{t(x)}}\mathrm{d}\vartheta _{+}^{-}\int_{\mathcal{X}^{t(x)}}%
\mathrm{d}\vartheta _{0}^{-}[\dot{B}(\text{$\mathbf{x}$}_{0}^{\mu },\pmb%
\vartheta )\dot{b}(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0})](\vartheta
_{-}^{0})\ ,
\end{equation*}%
for $a\in \mathcal{G}^{+}\ ,\ b\in \mathcal{G}^{+}\otimes \mathcal{E}(x)$.
This can be written in terms of (1.5) as $D_{\nu }^{\mu }(x)=\Lambda
_{\lbrack 0,t(x))}^{\otimes }(\dot{B}(\mathbf{x}_{\nu }^{\mu })).$ Due to
the inequality%
\begin{equation*}
\Vert U^{t}\Vert _{\xi ^{+}}^{\xi _{-}}\leq \Vert B\Vert _{\eta ^{\bullet
}}^{\eta _{\bullet }}(t),\;\;\xi ^{+}\geq \sum \eta ^{\mu },\;\;\;\xi
_{-}^{-1}\geq \sum \eta _{\nu }^{-1}
\end{equation*}%
one obtains $\Vert D_{+}^{-}\Vert _{\xi ^{+},1}^{\xi _{-},t}\leq \Vert
B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t)$:
\begin{equation*}
\int_{X^{t}}\Vert D_{+}^{-}(x)\Vert _{\xi ^{+}}^{\xi _{-}}\mathrm{d}x\leq
\int_{X^{t}}\Vert \dot{B}_{+}^{-}(x)\Vert _{\eta ^{\bullet }}^{\eta
_{\bullet }}[t(x)]\mathrm{d}x=\int_{X^{t}}\mathrm{d}x\int_{\mathcal{X}%
^{t(x)}}\Vert B_{+}^{-}(x\sqcup \vartheta )\Vert _{\eta ^{\bullet }}^{\eta
_{\bullet }}[t(x)]\mathrm{d}\pmb\vartheta
\end{equation*}%
\begin{equation*}
=\int_{\mathcal{X}^{t}}\Vert B_{+}^{-}(\varkappa )\Vert _{\eta ^{\bullet
}}^{\eta _{\bullet }}(t)\mathrm{d}\varkappa -\Vert B_{+}^{-}(\pmb\emptyset
)\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t)=\Vert B\Vert _{\eta
^{\bullet }}^{\eta _{\bullet }}(t)-\Vert B_{+}^{-}(\pmb\emptyset )\Vert
_{\eta ^{\bullet }}^{\eta _{\bullet }}(t),
\end{equation*}%
where $B_{+}^{-}(\varkappa ,\pmb\vartheta )=B(\pmb\varkappa )1_{\emptyset
}(\vartheta _{+}^{-}),\,\pmb\varkappa =%
\begin{pmatrix}
\vartheta _{0}^{-} & \varkappa \cr\vartheta _{0}^{0} & \vartheta _{+}^{0}%
\end{pmatrix}%
$. In the same way we obtain
\begin{equation*}
\int_{X^{t}}\left( \Vert D_{0}^{-}(x)\Vert _{\xi ^{+}}^{\xi _{-}}\right) ^{2}%
\mathrm{d}x\leq \int_{X^{t}}\left( \Vert \dot{B}_{0}^{-}(x)\Vert _{\eta
^{\bullet }}^{\eta _{\bullet }}[t(x)]\right) ^{2}\mathrm{d}x\leq \eta
^{-}(\Vert B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t))^{2}\ ,
\end{equation*}%
\begin{equation*}
\int_{X^{t}}\left( \Vert D_{+}^{0}(x)\Vert _{\xi ^{+}}^{\xi _{-}}\right) ^{2}%
\mathrm{d}x\leq \int_{X^{t}}\left( \Vert \dot{B}_{+}^{0}(x)\Vert _{\eta
^{\bullet }}^{\eta _{\bullet }}[t(x)]\right) ^{2}\mathrm{d}x\leq \eta
_{+}^{-1}(\Vert B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t))^{2}\ ,
\end{equation*}%
and
\begin{equation*}
\mathrm{ess}\sup {}_{x\in X^{t}}\Vert D_{0}^{0}(x)\Vert _{\xi ^{+}}^{\xi
_{-}}\leq \mathrm{ess}\sup_{x\in X^{t}}\Vert \dot{B}_{0}^{0}(x)\Vert _{\eta
^{\bullet }}^{\eta _{\bullet }}[t(x)]\leq \sqrt{\eta ^{0}/\eta _{0}}\ \Vert
B\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}(t)\ .
\end{equation*}%
This proves the QS--integrability (1.3) of the derivatives $D_{\nu }^{\mu
}(x)$ with respect to the $(\xi ^{+},\xi _{-})$ norms.
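The first-point decomposition property used at the start of this derivation
has a transparent discrete analogue: over subsets of a finite ordered set,
splitting off the latest point gives $\sum_{\vartheta \subseteq
S}f(\vartheta )=f(\emptyset )+\sum_{x\in S}\sum_{\vartheta \subseteq
S_{<x}}f(\{x\}\sqcup \vartheta )$. A toy check (the test function is a
hypothetical placeholder):

```python
from itertools import combinations

def subsets(S):
    """All subsets of a finite set, as frozensets."""
    S = sorted(S)
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield frozenset(c)

f = lambda theta: sum(theta) + 1.0   # arbitrary test function
S = {1, 2, 3, 4, 5}

total = sum(f(theta) for theta in subsets(S))
# decompose each nonempty θ by its maximal ("latest") point x,
# so that the rest of θ lies strictly before x
decomposed = f(frozenset()) + sum(
    f(frozenset({x}) | theta)
    for x in S
    for theta in subsets({y for y in S if y < x}))
assert abs(total - decomposed) < 1e-9
```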
If $B(\pmb\vartheta )$ satisfies the recurrence (1.8), then $\dot{B}(\mathbf{%
x}_{\nu }^{\mu })=L_{\nu }^{\mu }(x)\odot B$, and $D_{\nu }^{\mu }(x)=L_{\nu
}^{\mu }(x)\odot U^{t(x)}$ due to the property $\Lambda _{\lbrack
0,t)}^{\otimes }(L\odot B)=L\odot \Lambda _{\lbrack 0,t)}^{\otimes }(B)$ for
$L\odot B=(L\otimes \hat{1})\cdot B$, following immediately from the
definition (1.5). Hence $U^{t}=\Lambda _{\lbrack 0,t)}^{\otimes
}(B)=U^{0}+\Lambda ^{t}(\mathbf{D})$ with $U^{0}=T^{0}\otimes \hat{1}$ and $%
B(\pmb\vartheta )$, defined for the partitions $\varkappa =\sqcup \vartheta
_{\nu }^{\mu }$ of the chains $\varkappa \in \mathcal{X}$ as $\mathbf{L}%
^{\triangleleft }(\pmb\vartheta )\odot T^{0}$, where $\mathbf{L}%
^{\triangleleft }(\pmb\vartheta )=L(\mathbf{x}_{n})\cdot \cdot \cdot L(%
\mathbf{x}_{1})$ for $\pmb\vartheta =\sqcup _{i=1}^{n}\mathbf{x}_{i}$,
satisfies the equation (1.10).\hfill
\end{proof}
\begin{corollary}
Let $B(\pmb\vartheta )=L(\pmb\vartheta )\otimes \hat{1}$ be defined by the
QS--integrable operator--valued function
\begin{equation*}
L\left(
\begin{matrix}
\vartheta _{0}^{-} & \vartheta _{+}^{-}\cr\vartheta _{0}^{0} & \vartheta
_{+}^{0}%
\end{matrix}%
\right) :\mathcal{H}\otimes \mathcal{E}^{\otimes }(\vartheta
_{0}^{-})\otimes \mathcal{E}^{\otimes }(\vartheta _{0}^{0})\rightarrow
\mathcal{H}\otimes \mathcal{E}_{-}^{\otimes }(\vartheta _{0}^{0})\otimes
\mathcal{E}_{-}^{\otimes }(\vartheta _{+}^{0})
\end{equation*}%
with $\Vert L\Vert _{\eta ^{-},\eta ^{0}}^{\eta _{0},\eta _{+}}=\Vert B\Vert
_{\eta ^{\bullet }}^{\eta _{\bullet }}<\infty $ for $\eta ^{+},\eta
_{-}^{-1}\geq 1$. Then the QS integral $\Lambda _{\lbrack 0,t)}^{\otimes
}(B)=U^{t}$ defines an adapted $(\xi ^{+},\xi _{-})$ continuous process $%
U^{t}$ for $\xi ^{+}\geq \eta ^{-}+\eta ^{0}+1\ ,\ \xi _{-}^{-1}\geq \eta
_{+}^{-1}+\eta _{0}^{-1}+1$ in the sense $U^{t}(a^{t}\otimes c)=b^{t}\otimes
c$ for all $a^{t}\in \mathcal{G}^{t}(\xi ^{+}),c\in \mathcal{F}_{[t}$ with $%
b^{t}\in \mathcal{G}^{t}(\xi _{-})$, where $\mathcal{G}^{t}(\xi )=\mathcal{H}%
\otimes \mathcal{F}^{t}(\xi )$
\begin{equation*}
\mathcal{F}^{t}(\xi )=\int_{\mathcal{X}^{t}}^{\oplus }\xi ^{|\varkappa |}%
\mathcal{E}_{\xi }^{\otimes }(\varkappa )\mathrm{d}\varkappa ,\;\mathcal{X}%
^{t}=\{\varkappa \in \mathcal{X}|t(\varkappa )\subset \lbrack 0,t)\},\;%
\mathcal{E}_{\xi }=%
\begin{cases}
\mathcal{E}, & \xi >1\ , \\
\mathcal{E}_{-}, & \xi <1\ .%
\end{cases}%
\end{equation*}%
It has the adapted QS derivatives $D_{\nu }^{\mu }(x)=A_{\nu }^{\mu
}(x)\otimes \hat{1}_{[t(x)},$ where $\hat{1}_{[t}$ is the identity operator
in $\mathcal{F}_{[t(x)}$ with $A_{\nu }^{\mu }(x)$, defined in $\mathcal{G}%
^{t(x)}$. If $U^{t}$ is an adapted QS process with $\Vert U\Vert _{\xi
,\infty }^{\xi _{+},t}<\infty $, then $\mathrm{d}\Lambda ^{t}(\mathbf{B}%
\odot U)=\mathrm{d}\Lambda ^{t}(\mathbf{B})U^{t}$ in the sense%
\begin{equation*}
\Lambda ^{t}(\mathbf{B}\odot U)a=\int_{0}^{t}\mathrm{d}\Lambda ^{s}(\mathbf{B%
})U^{s}a,
\end{equation*}
where $\mathbf{B}\odot U=\mathbf{B}\cdot (U\otimes \mathbf{1})$ and the left
hand side is defined as the limit of the It\^{o} integral sums $\Lambda ^{t}(%
\mathbf{B}\odot U)=\lim_{n\rightarrow \infty }\sum_{i=0}^{n}\left[ \Lambda
^{t_{i+1}}(\mathbf{B})-\Lambda ^{t_{i}}(\mathbf{B})\right] U^{t_{i}}\ ,\
t_{0}=0,t_{n+1}=t$ in the uniform $(\xi ,\xi _{-})$--topology.
\end{corollary}
Indeed, the QS integral (1.5) for $B=L\otimes \hat{1}$ and $a=a^{t}\otimes c$
with $a^{t}\in \mathcal{G}^{t}\ ,\ c\in \mathcal{F}_{[t}$ can be written as
\begin{equation*}
\lbrack \Lambda _{\lbrack 0,t)}^{\otimes }(B)a](\varkappa )=c(\varkappa
_{\lbrack t})\otimes \sum_{\sqcup \vartheta _{\nu }^{0}=\varkappa ^{t}}\int_{%
\mathcal{X}^{t}}\int_{\mathcal{X}^{t}}[L(\pmb\vartheta )\otimes I(\vartheta
_{-}^{0})]a(\vartheta _{0}^{-}\sqcup \vartheta _{0}^{0}\sqcup \vartheta
_{-}^{0})\mathrm{d}\vartheta _{0}^{-}\mathrm{d}\vartheta _{+}^{-}\ .
\end{equation*}%
The norm $\Vert L\otimes \hat{1}\Vert _{\eta ^{\bullet }}^{\eta _{\bullet }}$
for $\eta ^{+},\eta _{-}^{-1}\geq 1$ does not depend on $\eta ^{+},\eta
_{-}^{-1}$, hence%
\begin{equation*}
\Vert \Lambda _{\lbrack 0,t)}^{\otimes }(L\otimes \hat{1})\Vert _{\xi
^{+}}^{\xi _{-}}\leq \Vert L\Vert _{\eta ^{-},\eta ^{0}}^{\eta _{0},\eta
_{+}},
\end{equation*}%
if $\xi ^{+}\geq \sum \eta ^{\mu }\ ,\ \xi _{-}^{-1}\geq \sum \eta _{\nu
}^{-1}$ with $\eta ^{+}=1=\eta _{-}$.
The derivatives $D_{\nu }^{\mu }$ are adapted as multiple QS integrals $%
\Lambda _{\lbrack 0,t(x))}^{\otimes }(\dot{B}(\mathbf{x}_{\nu }^{\mu }))$ of
$\dot{B}(\mathbf{x})=\dot{L}(\mathbf{x})\otimes \hat{1}$. If $U^{s}\ ,\ s<t$
is a simple adapted function $U^{s}=%
\sum_{i=0}^{n}U_{i}1_{[t_{i},t_{i+1})}(s) $ with $t_{0}=0,t_{n+1}=t\
,1_{[t,t_{+})}(s)=1$ for $s\in \lbrack t,t_{+})$, otherwise $%
1_{[t,t_{+})}(s)=0$, then
\begin{equation*}
\Lambda ^{t}(\mathbf{B}\odot U)a=\sum_{i=0}^{n}(\Lambda ^{t_{i+1}}-\Lambda
^{t_{i}})(\text{$\mathbf{B}$}\odot U_{i})a=\sum_{i=0}^{n}[\Lambda ^{t_{i+1}}(%
\text{$\mathbf{B}$})-\Lambda ^{t_{i}}(\text{$\mathbf{B}$})]b_{i}\ ,
\end{equation*}%
where $b_{i}=U_{i}a$, if $U$ is a constant adapted process on $[r,s)$:
\begin{equation*}
\lbrack \Lambda _{\lbrack r,s)}^{\otimes }(BU)a](\varkappa )=\sum_{\vartheta
_{+}^{0}\sqcup \vartheta _{0}^{0}\subseteq \varkappa _{r}^{s}}\int_{\mathcal{%
X}_{r}^{s}}\int_{\mathcal{X}_{r}^{s}}[B(\pmb\vartheta )\dot{b}(\vartheta
_{0}^{-}\sqcup \vartheta _{0}^{0})](\vartheta _{-}^{0})\mathrm{d}\vartheta
_{0}^{-}\mathrm{d}\vartheta _{+}^{-}\ .
\end{equation*}
\section{A nonadapted QS calculus and It\^{o} formula}
Now we shall consider the operators $U=\iota (T)$ acting in $\mathcal{G}=%
\mathcal{H}\otimes \mathcal{F}$ as the multiple QS integrals (1.5) with $%
B=L\otimes \hat{1}$, and $t=\infty $ according to the formula
\begin{equation*}
\lbrack \iota (T)a](\varkappa )=\sum_{\varkappa _{0}^{0}\sqcup \varkappa
_{+}^{0}=\varkappa }\iint T(\pmb\varkappa )a(\varkappa _{0}^{0}\sqcup
\varkappa _{0}^{-})\mathrm{d}\varkappa _{0}^{-}\mathrm{d}\varkappa _{+}^{-}%
\eqno(2.1)
\end{equation*}%
Here the sum is taken over all partitions of the chain $\varkappa \in
\mathcal{X}$, and the operator-valued function $T(\pmb\varkappa )$ is in
one-to-one correspondence
\begin{align*}
& T\left(
\begin{matrix}
\varkappa _{0}^{-} & \varkappa _{+}^{-}\cr\varkappa _{0}^{0} & \varkappa
_{+}^{0}%
\end{matrix}%
\right) =\sum_{\vartheta \subseteq \varkappa _{0}^{0}}L\left(
\begin{matrix}
\varkappa _{0}^{-} & \varkappa _{+}^{-}\cr\vartheta & \varkappa _{+}^{0}%
\end{matrix}%
\right) \otimes I^{\otimes }(\varkappa _{0}^{0}\backslash \vartheta ) \\
& L\left(
\begin{matrix}
\vartheta ^{-} & \vartheta _{+}^{-} \\
\vartheta & \vartheta _{+}%
\end{matrix}%
\right) =\sum_{\varkappa \subseteq \vartheta }(-1)^{|\varkappa |}T\left(
\begin{matrix}
\vartheta ^{-} & \vartheta _{+}^{-} \\
\vartheta \backslash \varkappa & \vartheta _{+}%
\end{matrix}%
\right) \otimes I^{\otimes }(\varkappa )
\end{align*}%
with the operator-valued function $L(\pmb\vartheta )$, defining the integral
representation $U=\Lambda _{\lbrack 0,\infty )}(L\otimes \hat{1})$.
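Restricted to the $\varkappa _{0}^{0}$-argument (and with scalars in place
of operators), the pair of formulas above is the standard
inclusion--exclusion inversion on the subset lattice. A quick sketch
(arbitrary toy scalars, for illustration only) checks that the alternating
sum recovers $L$ from $T$:

```python
from itertools import combinations

def subsets(S):
    """All subsets of a finite set, as frozensets."""
    S = sorted(S)
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield frozenset(c)

base = frozenset({1, 2, 3})
L = {A: 1.0 + 2.0 * sum(A) for A in subsets(base)}  # arbitrary scalars

# forward map: T(A) = sum over θ ⊆ A of L(θ)
T = {A: sum(L[t] for t in subsets(A)) for A in subsets(base)}
# inversion: L(A) = sum over κ ⊆ A of (-1)^{|κ|} T(A \ κ)
for A in subsets(base):
    recovered = sum((-1) ** len(k) * T[A - k] for k in subsets(A))
    assert abs(recovered - L[A]) < 1e-9
```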
Using the arguments of Section 1, one can prove that the operator $\iota
(T)$ is $(\xi ^{+},\xi _{-})$-continuous if $T$ is $(\zeta ^{\bullet
},\zeta _{\bullet })$-bounded for $\zeta ^{\bullet }=(\zeta ^{-},\zeta ^{0})$
and $\zeta _{\bullet }=(\zeta _{0},\zeta _{+})$, satisfying the inequalities
$\zeta ^{-}+\zeta ^{0}\leq \xi ^{+}$, $\zeta _{0}^{-1}+\zeta _{+}^{-1}\leq
\xi _{-}^{-1}$, because
\begin{equation*}
\Vert \iota (T)\Vert _{\xi ^{+}}^{\xi _{-}}\leq \Vert T\Vert _{\zeta
^{\bullet }}^{\zeta _{\bullet }}\equiv \int \left( \iint \frac{(\zeta
_{+})^{|\varkappa _{+}^{0}|}}{(\zeta ^{-})^{|\varkappa _{0}^{-}|}}\mathrm{%
ess\sup }_{\varkappa _{0}^{0}\in \mathcal{X}}{\frac{(\zeta
_{0})^{|\varkappa _{0}^{0}|}}{(\zeta ^{0})^{|\varkappa _{0}^{0}|}}}\Vert T(%
\pmb\varkappa )\Vert ^{2}\mathrm{d}\varkappa _{+}^{0}\mathrm{d}\varkappa
_{0}^{-}\right) ^{1/2}\mathrm{d}\varkappa _{+}^{-}\ .
\end{equation*}%
In this case the formally conjugated operator
\begin{equation*}
U^{\ast }=\iota (T^{{\star }})\ ,\ T^{{\star }}(\pmb\varkappa )=T(\pmb%
\varkappa ^{{\star }})^{\ast }\ ,\ \left(
\begin{matrix}
\varkappa _{0}^{-} & \varkappa _{+}^{-}\cr\varkappa _{0}^{0} & \varkappa
_{+}^{0}%
\end{matrix}%
\right) ^{{\star }}=\left(
\begin{matrix}
\varkappa _{+}^{0} & \varkappa _{+}^{-}\cr\varkappa _{0}^{0} & \varkappa
_{0}^{-}%
\end{matrix}%
\right) \eqno(2.2)
\end{equation*}%
exists as $(\xi ^{+},\xi _{-})$-continuous operator $\mathcal{G}(\xi
_{+})\rightarrow \mathcal{G}(\xi ^{-})$ with $\Vert U^{\ast }\Vert _{\xi
^{+}}^{\xi _{-}}=\Vert U\Vert _{1/\xi _{-}}^{1/\xi ^{+}}$, if $\xi ^{+}\geq
\zeta _{0}^{-1}+\zeta _{+}^{-1}$, $\xi _{-}^{-1}\geq \zeta ^{-}+\zeta ^{0}$.
As we shall prove now, the map $\iota $ is the representation in $\mathcal{G}
$ of a unital $\star $-algebra of operator-valued functions $T(\pmb\varkappa
)$, satisfying the relative boundedness condition
\begin{equation*}
\Vert T\Vert (\pmb\zeta )=\text{$\mathrm{ess\sup }$}_{\pmb\varkappa }\{\Vert
T(\pmb\varkappa )\Vert /\displaystyle{\prod}_{\mu \leq \nu }\zeta _{\nu }^{\mu }(\varkappa
_{\nu }^{\mu })\}<\infty \ ,\eqno(2.3)
\end{equation*}%
where $\zeta (\varkappa )=\displaystyle{\prod}_{x\in \varkappa }\zeta (x)$, with respect
to a triangular matrix-function $\pmb\zeta (x)=[\zeta _{\nu }^{\mu }(x)]$, $%
\mu ,\nu =-,0,+$, with $\zeta _{\nu }^{\mu }=0$ for $\mu >\nu $ under the
order $-<0<+$ and $\zeta _{-}^{-}(x)=1=\zeta _{+}^{+}(x)$, where the
functions $(\zeta _{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ are positive and $%
L^{p}$-integrable for the corresponding $p=1,2,\infty $:
\begin{equation*}
\Vert \zeta _{+}^{-}\Vert _{1}<\infty \ ,\ \Vert \zeta _{0}^{-}\Vert
_{2}<\infty \ ,\ \Vert \zeta _{+}^{0}\Vert _{2}<\infty \ ,\ \Vert \zeta
_{0}^{0}\Vert _{\infty }<\infty \ ,
\end{equation*}%
where $\Vert \zeta \Vert _{p}=(\int \zeta ^{p}(x)\mathrm{d}x)^{1/p}$. In
this case the operator $U=\iota (T)$ is $\zeta $-bounded, as follows from
the next theorem, for $\zeta >\Vert \zeta _{0}^{0}\Vert =\mathrm{ess\sup }%
_{x\in X}\zeta _{0}^{0}(x)$ in the sense of $(\xi ^{+},\xi _{-})$-continuity
of $U$ for all $\xi _{-}>0\ ,\ \xi ^{+}\geq \zeta \cdot \xi _{-}$. This is
due to the estimate
\begin{equation*}
\Vert T\Vert _{\zeta ^{\bullet }}^{\zeta _{\bullet }}\leq \int \left( \iint
\frac{(\zeta _{+})^{|\varkappa _{+}^{0}|}}{(\zeta ^{-})^{|\varkappa
_{0}^{-}|}}\text{$\mathrm{ess\sup }$}_{\varkappa _{0}^{0}}{\frac{(\zeta
_{0})^{|\varkappa _{0}^{0}|}}{(\zeta ^{0})^{|\varkappa _{0}^{0}|}}}\left[
\prod_{\mu <\nu }\zeta _{\nu }^{\mu }(\varkappa _{\nu }^{\mu })\right] ^{2}%
\mathrm{d}\varkappa _{+}^{0}\mathrm{d}\varkappa _{0}^{-}\right) ^{1/2}%
\mathrm{d}\varkappa _{+}^{-}\Vert T\Vert (\pmb\zeta )
\end{equation*}%
\begin{equation*}
=\int \displaystyle{\prod}_{x\in \varkappa }\zeta _{+}^{-}(x)\mathrm{d}\varkappa \left(
\int \displaystyle{\prod}_{x\in \varkappa }{\frac{\zeta _{0}^{-}(x)^{2}}{\zeta ^{-}}}%
\mathrm{d}\varkappa \int \displaystyle{\prod}_{x\in \varkappa }{\frac{\zeta _{+}^{0}(x)^{2}%
}{\zeta _{+}^{-1}}}\mathrm{d}\varkappa \text{$\mathrm{ess\sup }$}%
\displaystyle{\prod}_{x\in \varkappa }{\frac{\zeta _{0}^{0}(x)^{2}}{\zeta ^{0}\zeta
_{0}^{-1}}}\right) ^{1/2}\Vert T\Vert (\pmb\zeta )
\end{equation*}%
\begin{equation*}
\leq \exp \{\int (\zeta _{+}^{-}(x)+(\zeta _{0}^{-}(x)^{2}+\zeta
_{+}^{0}(x)^{2})/2\varepsilon )\mathrm{d}x\}\Vert T\Vert (\pmb\zeta )\ ,\eqno%
(2.4)
\end{equation*}%
for $\zeta ^{-},\zeta _{+}^{-1}\geq \varepsilon >0$, and $\zeta ^{0}\zeta
_{0}^{-1}\geq \Vert \zeta _{0}^{0}\Vert _{\infty }^{2}$, giving $\iota (T)=0$%
, if $T(\pmb\varkappa )=0$ for almost all $\pmb\varkappa $. Hence the
operator (2.1) is well defined even if $T(\pmb\varkappa )$ is specified
only for almost all $\pmb\varkappa =(\varkappa _{\nu }^{\mu })$, in particular, only
for the partitions $\varkappa =\sqcup \varkappa _{\nu }^{\mu }$ of the
chains $\varkappa \in \mathcal{X}$.
\begin{theorem}
If the operator-valued function
\begin{equation*}
T\left(
\begin{matrix}
\varkappa _{0}^{-} & \varkappa _{+}^{-}\cr\varkappa _{0}^{0} & \varkappa
_{+}^{0}%
\end{matrix}%
\right) :\mathcal{H}\otimes \mathcal{E}^{\otimes }(\varkappa
_{0}^{-})\otimes \mathcal{E}^{\otimes }(\varkappa _{0}^{0})\rightarrow
\mathcal{H}\otimes \mathcal{E}^{\otimes }(\varkappa _{0}^{0})\otimes
\mathcal{E}^{\otimes }(\varkappa _{+}^{0})
\end{equation*}%
satisfies the condition (2.3), then the conjugated operators $U=\iota
(T),U^{\ast }=\iota (T^{{\star }})$ are $\zeta $-bounded in $\mathcal{G}$
for any $\zeta >\zeta _{0}^{0}$, and the operator $U^{\ast }U$ is defined in
$\mathcal{G}$ as $\zeta ^{2}$-bounded operator
\begin{equation*}
\iota (S\cdot T)=\iota (S)\iota (T)\;,\quad S=T^{{\star }}
\end{equation*}%
by the following product formula
\begin{equation*}
(S\cdot T)(\pmb\varkappa )=\sum_{\vartheta _{\nu }^{\mu }\subseteq \varkappa
_{\nu }^{\mu }}^{\mu <\nu }\sum_{\sigma _{+}^{-}\cap \rho _{+}^{-}=\vartheta
_{+}^{-}}^{\sigma _{+}^{-}\cup \rho _{+}^{-}=\varkappa _{+}^{-}}S\left(
\begin{matrix}
\vartheta _{0}^{-}\sqcup \vartheta _{+}^{-}, & \varkappa _{+}^{-}\backslash
\sigma _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta _{+}^{0}, & \varkappa
_{+}^{0}\backslash \vartheta _{+}^{0}%
\end{matrix}%
\right) T\left(
\begin{matrix}
\varkappa _{0}^{-}\backslash \vartheta _{0}^{-}, & \varkappa
_{+}^{-}\backslash \rho _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{0}^{-}, & \vartheta _{+}^{-}\sqcup \vartheta _{+}^{0}%
\end{matrix}%
\right) \ .\eqno(2.5)
\end{equation*}%
This induces a unital $\star $-algebraic structure on the inductive space $%
\mathcal{U}$ of all relatively bounded functions $T$ with
\begin{equation*}
\Vert T^{{\star }}\Vert (\pmb\zeta )=\Vert T\Vert (\pmb\zeta ^{{\star }%
}),\;\;\;\Vert T^{{\star }}\cdot T\Vert (\pmb\xi )\leq \lbrack \Vert T\Vert (%
\pmb\zeta )]^{2},
\end{equation*}%
if $\xi _{\nu }^{\mu }\geq (\pmb\zeta ^{{\star }}\pmb\zeta )_{\nu }^{\mu }$,
where $\pmb\zeta ^{{\star }}(x)=\mathbf{g}\pmb\zeta (x)^{\ast }\mathbf{g}$
and $(\pmb\zeta ^{{\star }}\pmb\zeta )(x)=\pmb\zeta ^{{\star }}(x)\pmb\zeta
(x)$ are defined by usual product of the matrices
\begin{equation*}
\mathbf{g}=\left[
\begin{matrix}
0 & 0 & 1\cr0 & 1 & 0\cr1 & 0 & 0%
\end{matrix}%
\right] ,\ \pmb\zeta (x)=\left[
\begin{matrix}
1 & \zeta _{0}^{-} & \zeta _{+}^{-}\cr0 & \zeta _{0}^{0} & \zeta _{+}^{0}\cr0
& 0 & 1%
\end{matrix}%
\right] (x),\ \pmb\zeta ^{{\star }}(x)=\left[
\begin{matrix}
1 & \zeta _{+}^{0} & \zeta _{+}^{-}\cr0 & \zeta _{0}^{0} & \zeta _{0}^{-}\cr0
& 0 & 1%
\end{matrix}%
\right] (x)\ .\eqno(2.6)
\end{equation*}%
If the multiple QS integral $U^{t}=\Lambda _{\lbrack 0,t)}(B)$ is defined by
$B(\pmb\vartheta )=\iota (L(\pmb\vartheta ))$ with
\begin{equation*}
\left\Vert L\left(
\begin{matrix}
\vartheta ^{-} & \vartheta _{+}^{-}\cr\vartheta & \vartheta _{+}%
\end{matrix}%
\right) \right\Vert (\pmb\xi )\leq c\lambda _{0}^{0}(\vartheta )\lambda
_{+}^{0}(\vartheta _{+})\lambda _{0}^{-}(\vartheta ^{-})\lambda
_{+}^{-}(\vartheta _{+}^{-}),\;\lambda (\vartheta )=\displaystyle{\prod}_{x\in \vartheta
}\lambda (x)\geq 0\ ,
\end{equation*}%
then $\Lambda _{\lbrack 0,t)}\circ \iota =\iota \circ N_{[0,t)}$, i.e. $%
U^{t}=\iota (T^{t})$, where
\begin{equation*}
T^{t}(\pmb\varkappa )=\sum_{\pmb\vartheta \subseteq {\pmb\varkappa }^{t}}L(%
\pmb\vartheta ,\pmb\varkappa \backslash {\pmb\vartheta })\equiv N_{[0,t)}(L)(%
\pmb\varkappa )\eqno(2.7)
\end{equation*}%
with $\Vert T^{t}\Vert (\pmb\zeta )\leq c$, if $\zeta _{\nu }^{\mu }(x)\geq
\xi _{\nu }^{\mu }(x)+\lambda _{\nu }^{\mu }(x)$ for $t(x)<t$, and $\zeta
_{\nu }^{\mu }(x)\geq \xi _{\nu }^{\mu }(x)$ for $t(x)\geq t$. The QS
derivatives $D_{\nu }^{\mu }(x)=\Lambda _{\lbrack 0,t(x))}(\dot{B}(\mathbf{x}%
_{\nu }^{\mu }))$ for the process $U^{t}=\iota (T^{t})$ have the natural
difference form $\mathbf{D}=\mathbf{G}-\mathbf{U}$, described by the
representations of
\begin{equation*}
\dot{T}^{t}(\mathbf{x},\pmb\varkappa )=T^{t}(\pmb\varkappa \sqcup \mathbf{x})
\end{equation*}%
with $\mathbf{x}=\mathbf{x}_{\nu }^{\mu },\mu <+,\nu >-$ at $t\searrow t(x)$
\begin{equation*}
U_{\nu }^{\mu }(x)=\iota (\dot{T}^{t(x)}(\mathbf{x}_{\nu }^{\mu })),\quad
G_{\nu }^{\mu }(x)=\iota (\dot{T}^{t(x)]}(\mathbf{x}_{\nu }^{\mu }))\ ,\eqno%
(2.8)
\end{equation*}%
where $T^{s]}(\pmb\varkappa )=N_{[0,t)}(L)(\pmb\varkappa )$ for any $t>s\ ,\
t\leq t(x)$\ $\forall x\in \varkappa _{s}=\sqcup \varkappa _{\nu }^{\mu
}\cap t^{-1}(s,\infty )$, and $\mathbf{x}_{\nu }^{\mu }$ denotes an
elementary table $\pmb\vartheta =(\vartheta _{\lambda }^{\kappa })$ with $%
\vartheta _{\lambda }^{\kappa }=\emptyset $ except $\vartheta _{\nu }^{\mu
}=x$. The QS differential $\mathrm{d}U^{\ast }=\mathrm{d}\Lambda (\mathbf{D}%
^{{\star }})$ is defined by the derivative $\mathbf{D}^{{\star }}=\mathbf{G}%
^{{\star }}-\mathbf{U}^{{\star }}$, and
\begin{equation*}
\mathrm{d}(U^{\ast }U)=\mathrm{d}\Lambda (\mathbf{U}^{{\star }}\mathbf{D}+%
\mathbf{D}^{{\star }}\mathbf{U}+\mathbf{D}^{{\star }}\mathbf{D})=\mathrm{d}%
\Lambda (\mathbf{G}^{{\star }}\mathbf{G}-\mathbf{U}^{{\star }}\mathbf{U})\ ,%
\eqno(2.9)
\end{equation*}%
where the QS derivative $\mathbf{G}^{{\star }}\mathbf{G}-\mathbf{U}^{{\star }%
}\mathbf{U}$ of the QS process $(U^{\ast }U)^{t}=U^{t{\ast }}U^{t}$ is
described in terms of the usual products $(\mathbf{U}^{{\star }}\mathbf{U}%
)(x)=\mathbf{U}^{{\star }}(x)\mathbf{U}(x)$ and the pseudo Hermitian
conjugation $\mathbf{U}^{{\star }}(x)=(\hat{I}\otimes \mathbf{g}(x))\mathbf{U%
}(x)^{\ast }(\hat{I}\otimes \mathbf{g}(x))$ of the triangular matrices
\begin{equation*}
\mathbf{U}=\left[
\begin{matrix}
U & U_{0}^{-} & U_{+}^{-}\cr0 & U_{0}^{0} & U_{+}^{0}\cr0 & 0 & U%
\end{matrix}%
\right] ,\ \mathbf{D}=\left[
\begin{matrix}
0 & D_{0}^{-} & D_{+}^{-}\cr0 & D_{0}^{0} & D_{+}^{0}\cr0 & 0 & 0%
\end{matrix}%
\right] ,\ \mathbf{G}=\left[
\begin{matrix}
U & G_{0}^{-} & G_{+}^{-}\cr0 & G_{0}^{0} & G_{+}^{0}\cr0 & 0 & U%
\end{matrix}%
\right] ,
\end{equation*}%
with $U(x)=U^{t(x)},\mathbf{g}(x)=[g_{\nu }^{\mu
}(x)],g_{-}^{-}=1=g_{+}^{+},g_{0}^{0}(x)=I(x)$, otherwise $g_{\nu }^{\mu }=0$%
.
\end{theorem}
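As a check of the notation in (2.6), the pseudo-adjoint $\pmb\zeta ^{{\star }}=\mathbf{g}\pmb\zeta ^{\ast }\mathbf{g}$ is obtained by reflecting the adjoint matrix across its anti-diagonal, since the left (right) multiplication by $\mathbf{g}$ reverses the rows (columns):
\begin{equation*}
\mathbf{g}\left[
\begin{matrix}
1 & 0 & 0\cr\zeta _{0}^{-} & \zeta _{0}^{0} & 0\cr\zeta _{+}^{-} & \zeta
_{+}^{0} & 1%
\end{matrix}%
\right] (x)\,\mathbf{g}=\left[
\begin{matrix}
1 & \zeta _{+}^{0} & \zeta _{+}^{-}\cr0 & \zeta _{0}^{0} & \zeta _{0}^{-}\cr0
& 0 & 1%
\end{matrix}%
\right] (x)\ ,
\end{equation*}%
in agreement with the matrix $\pmb\zeta ^{{\star }}(x)$ displayed in (2.6).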
\begin{proof}
Let us first obtain an estimate for the representation $U=\iota (T)$ of a
relatively bounded operator-valued function $T$ in the sense (2.3). Due to
the inequalities (2.4) one obtains
\begin{equation*}
\Vert U\Vert _{\xi ^{+}}^{\xi }\leq \exp \{\Vert \zeta _{+}^{-}\Vert
_{1}+(\Vert \zeta _{0}^{-}\Vert _{2}^{2}+\Vert \zeta _{+}^{0}\Vert
_{2}^{2})/2\varepsilon \}\Vert T\Vert (\pmb\zeta )\ ,
\end{equation*}%
if
\begin{equation*}
\xi ^{+}\geq \varepsilon +\zeta ^{0},\quad \xi _{-}^{-1}\geq \zeta
_{0}^{-1}+\varepsilon \ ,\ \zeta ^{0}\zeta _{0}^{-1}\geq \Vert \zeta
_{0}^{0}\Vert _{\infty }^{2}\ .
\end{equation*}%
Hence for any $\xi ^{+}\xi _{-}^{-1}>\Vert \zeta _{0}^{0}\Vert _{\infty
}^{2} $ there exists an $\varepsilon >0$ such that this inequality holds,
namely,
\begin{equation*}
\varepsilon \leq \frac{1}{2}\left( \xi ^{+}+\xi _{-}^{-1}-\sqrt{(\xi
^{+}-\xi _{-}^{-1})^{2}+4\Vert \zeta _{0}^{0}\Vert _{\infty }^{2}}\right)
=\varepsilon (\xi ^{+},\xi _{-})\ ,
\end{equation*}%
where the upper bound $\varepsilon (\xi ^{+},\xi _{-})$ corresponds to the
solution $\varepsilon >0$ of the equation $\zeta ^{0}\zeta _{0}^{-1}=\Vert
\zeta _{0}^{0}\Vert _{\infty }^{2}$ with $\zeta ^{0}=\xi ^{+}-\varepsilon
>0,\;\zeta _{0}^{-1}=\xi _{-}^{-1}-\varepsilon >0$. Hence the operator $U$
is $\zeta $-bounded for any $\zeta >\zeta _{0}^{0}$ and also $U^{\ast }$ is $%
\zeta $-bounded due to $(\pmb\zeta ^{{\star }})_{0}^{0}=\zeta _{0}^{0}=(\pmb%
\zeta )_{0}^{0}$.
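Explicitly, the bound $\varepsilon (\xi ^{+},\xi _{-})$ quoted above is the smaller root of a quadratic: substituting $\zeta ^{0}=\xi ^{+}-\varepsilon $, $\zeta _{0}^{-1}=\xi _{-}^{-1}-\varepsilon $ into $\zeta ^{0}\zeta _{0}^{-1}=\Vert \zeta _{0}^{0}\Vert _{\infty }^{2}$ gives
\begin{equation*}
\varepsilon ^{2}-(\xi ^{+}+\xi _{-}^{-1})\varepsilon +\xi ^{+}\xi
_{-}^{-1}-\Vert \zeta _{0}^{0}\Vert _{\infty }^{2}=0\ ,
\end{equation*}%
whose smaller root
\begin{equation*}
\varepsilon (\xi ^{+},\xi _{-})=\frac{1}{2}\left( \xi ^{+}+\xi
_{-}^{-1}-\sqrt{(\xi ^{+}-\xi _{-}^{-1})^{2}+4\Vert \zeta _{0}^{0}\Vert
_{\infty }^{2}}\right)
\end{equation*}%
is positive precisely when $\xi ^{+}\xi _{-}^{-1}>\Vert \zeta _{0}^{0}\Vert _{\infty }^{2}$, since the two roots have positive sum and, under this condition, positive product.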
Now we show that the product formula (2.5) is valid for $T(\pmb\varkappa
)=X\otimes \mathbf{f}^{\otimes }(\pmb\varkappa )$, where $X\in \mathcal{B}(%
\mathcal{H})$, and
\begin{equation*}
\mathbf{f}^{\otimes }(\pmb\varkappa )=\otimes _{\mu \leq \nu }f_{\nu }^{\mu
}(\varkappa _{\nu }^{\mu })
\end{equation*}%
with $f_{\nu }^{\mu }(\varkappa )=\otimes _{x\in \varkappa }f_{\nu }^{\mu
}(x)$ defined by the operator-valued elements $(f_{\nu }^{\mu })_{\nu
=0,+}^{\mu =-,0}$ of the matrix-function
\begin{equation*}
\mathbf{f}(x)=\left[
\begin{matrix}
1 & f_{0}^{-} & f_{+}^{-}\cr0 & f_{0}^{0} & f_{+}^{0}\cr0 & 0 & 1%
\end{matrix}%
\right] (x),\quad
\begin{array}{ll}
& f_{0}^{0}(x):\mathcal{E}(x)\rightarrow \mathcal{E}(x)\ ,\;f_{+}^{-}(x):%
\mathbb{C}\rightarrow \mathbb{C} \\
& f_{+}^{0}(x):\mathbb{C}\rightarrow \mathcal{E}(x)\ ,\;f_{0}^{-}(x):%
\mathcal{E}(x)\rightarrow \mathbb{C}%
\end{array}%
\end{equation*}%
with
\begin{equation*}
\Vert f_{+}^{-}\Vert _{1}<\infty ,\quad \Vert f_{0}^{-}\Vert _{2}<\infty
,\quad \Vert f_{+}^{0}\Vert _{2}<\infty ,\quad \Vert f_{0}^{0}\Vert _{\infty
}<\infty \ .
\end{equation*}%
Let us find the action (2.1) of the operator $U=\iota (X\otimes \mathbf{f}%
^{\otimes })$ on the product vector $a=h\otimes k^{\otimes }\ ,\ h\in
\mathcal{H}\ ,\ k\in \mathcal{K}$, where $k^{\otimes }(\varkappa )=\otimes
_{x\in \varkappa }k(x)$:
\begin{align*}
& [Ua](\varkappa )=Xh\otimes \sum_{\varkappa _{0}^{0}\sqcup \varkappa
_{+}^{0}=\varkappa }\iint f_{+}^{-}(\varkappa _{+}^{-})f_{0}^{-}(\varkappa
_{0}^{-})k^{\otimes }(\varkappa _{0}^{-})f_{+}^{0}(\varkappa
_{+}^{0})\otimes f_{0}^{0}(\varkappa _{0}^{0})k^{\otimes }(\varkappa
_{0}^{0})\mathrm{d}\varkappa _{+}^{-}\mathrm{d}\varkappa _{0}^{-} \\
& =Xh\otimes \sum_{\varkappa _{0}^{0}\sqcup \varkappa _{+}^{0}=\varkappa
}\otimes _{x\in \varkappa _{+}^{0}}f_{+}^{0}(x)\otimes _{x\in \varkappa
_{0}^{0}}f_{0}^{0}(x)k(x)\int \displaystyle{\prod}_{x\in \varkappa }f_{+}^{-}(x)\mathrm{d}%
\varkappa \int \displaystyle{\prod}_{x\in \varkappa }f_{0}^{-}(x)k(x)\mathrm{d}\varkappa \\
& =Xh\otimes (f_{+}^{0}+f_{0}^{0}k)^{\otimes }(\varkappa )\exp \{\int
(f_{+}^{-}(x)+f_{0}^{-}(x)k(x))\mathrm{d}x\}\ .
\end{align*}
In the same way, acting on the product vector $Xh\otimes
(f_{+}^{0}+f_{0}^{0}k)^{\otimes }$ by $U^{\ast }=\iota (X^{\ast }\otimes
\mathbf{f}^{{\star }\otimes })$ with
\begin{equation*}
\mathbf{f}^{{\star }}(x)_{0}^{0}=f_{0}^{0}(x)^{\ast },\quad \mathbf{f}^{{%
\star }}(x)_{+}^{-}=f_{+}^{-}(x)^{\ast },\quad \mathbf{f}^{{\star }%
}(x)_{+}^{0}=f_{0}^{-}(x)^{\ast },\quad \mathbf{f}^{{\star }%
}(x)_{0}^{-}=f_{+}^{0}(x)^{\ast }\ ,
\end{equation*}%
one obtains
\begin{equation*}
\lbrack U^{\ast }Ua](\varkappa )=X^{\ast }Xh\otimes (f_{0}^{-{\ast }%
}+f_{0}^{0{\ast }}(f_{+}^{0}+f_{0}^{0}k))^{\otimes }(\varkappa )\exp \{\int
[f_{+}^{-}(x)^{\ast }+f_{+}^{0{\ast }%
}(x)(f_{+}^{0}(x)+f_{0}^{0}(x)k(x))+f_{+}^{-}(x)+f_{0}^{-}(x)k(x)]\mathrm{d}%
x\}
\end{equation*}%
\begin{equation*}
=X^{\ast }Xh\otimes ((\mathbf{f}^{{\star }}\mathbf{f})_{+}^{0}+(\mathbf{f}^{{%
\star }}\mathbf{f})_{0}^{0}k)^{\otimes }(\varkappa )\exp \{\int ((\mathbf{f}%
^{{\star }}\mathbf{f})_{+}^{-}(x)+(\mathbf{f}^{{\star }}\mathbf{f}%
)_{0}^{-}k(x))\mathrm{d}x\}\ ,
\end{equation*}%
where the operator-valued functions
\begin{equation*}
(\mathbf{f}^{{\star }}\mathbf{f})_{0}^{0}(x)=f_{0}^{0}(x)^{\ast
}f_{0}^{0}(x),\quad (\mathbf{f}^{{\star }}\mathbf{f})_{+}^{-}(x)=f_{+}^{-}(x)^{%
\ast }+f_{+}^{0}(x)^{\ast }f_{+}^{0}(x)+f_{+}^{-}(x)
\end{equation*}%
\begin{equation*}
(\mathbf{f}^{{\star }}\mathbf{f})_{+}^{0}(x)=f_{0}^{-}(x)^{\ast
}+f_{0}^{0}(x)^{\ast }f_{+}^{0}(x),\quad (\mathbf{f}^{\star }\mathbf{f}%
)_{0}^{-}(x)=f_{+}^{0}(x)^{\ast }f_{0}^{0}(x)+f_{0}^{-}(x)
\end{equation*}%
are defined as matrix elements of the product $(\mathbf{f}^{{\star }}\mathbf{%
f})(x)=\mathbf{f}(x)^{{\star }}\mathbf{f}(x)$ of triangular matrices $%
\mathbf{f}^{{\star }}$ and $\mathbf{f}$. Hence on the linear span of the
product vectors $a=h\otimes k^{\otimes }$ we have for $T=X\otimes \mathbf{f}%
^{\otimes }$ the $\star $-multiplicative property
\begin{equation*}
\iota (T)^{\ast }\iota (T)=\iota (X^{\ast }X\otimes (\mathbf{f}^{{\star }}%
\mathbf{f})^{\otimes })=\iota (T^{{\star }}\cdot T)\ ,
\end{equation*}%
where the product $(T^{{\star }}\cdot T)(\pmb\varkappa )$ is defined as
(2.5) due to $(\mathbf{f}^{{\star }}\mathbf{f})^{\otimes }=\mathbf{f}^{{%
\star }\otimes }\cdot \mathbf{f}^{\otimes }$:
\begin{equation*}
(\mathbf{f}^{{\star }}\mathbf{f})^{\otimes }(\pmb\varkappa )=\otimes _{x\in
\varkappa _{0}^{0}}(f_{0}^{0}(x)^{\ast }f_{0}^{0}(x))\otimes _{x\in
\varkappa _{+}^{0}}(f_{0}^{-}(x)^{\ast }+f_{0}^{0}(x)^{\ast
}f_{+}^{0}(x))\otimes
\end{equation*}%
\begin{equation*}
\otimes _{x\in \varkappa _{0}^{-}}(f_{+}^{0}(x)^{\ast
}f_{0}^{0}(x)+f_{0}^{-}(x))\otimes _{x\in \varkappa
_{+}^{-}}(f_{+}^{-}(x)^{\ast }+f_{+}^{0}(x)^{\ast
}f_{+}^{0}(x)+f_{+}^{-}(x))=
\end{equation*}%
\begin{equation*}
=\sum_{\vartheta _{\nu }^{\mu }\subseteq \varkappa _{\nu }^{\mu }}^{\mu <\nu
}f_{0}^{0}(\varkappa _{0}^{0})^{\ast }f_{0}^{0}(\varkappa _{0}^{0})\otimes
f_{0}^{-}(\varkappa _{+}^{0}\backslash \vartheta _{+}^{0})^{\ast }\otimes
f_{0}^{0}(\vartheta _{+}^{0})^{\ast }f_{+}^{0}(\vartheta _{+}^{0})\otimes
\end{equation*}%
\begin{equation*}
f_{+}^{0}(\vartheta _{0}^{-})^{\ast }f_{0}^{0}(\vartheta _{0}^{-})\otimes
f_{0}^{-}(\varkappa _{0}^{-}\backslash \vartheta _{0}^{-})\sum_{\sigma
_{+}^{-}\cap \rho _{+}^{-}=\vartheta _{+}^{-}}^{\sigma _{+}^{-}\cup \rho
_{+}^{-}=\varkappa _{+}^{-}}f_{+}^{-}(\varkappa _{+}^{-}\backslash \sigma
_{+}^{-})^{\ast }f_{+}^{0}(\vartheta _{+}^{-})^{\ast }f_{+}^{0}(\vartheta
_{+}^{-})f_{+}^{-}(\varkappa _{+}^{-}\backslash \rho _{+}^{-})
\end{equation*}%
\begin{equation*}
=\sum_{\vartheta _{\nu }^{\mu }\subseteq \varkappa _{\nu }^{\mu }}^{\mu <\nu
}\sum_{\sigma _{+}^{-}\cap \rho _{+}^{-}=\vartheta _{+}^{-}}^{\sigma
_{+}^{-}\cup \rho _{+}^{-}=\varkappa _{+}^{-}}\mathbf{f}^{\otimes }\left(
\begin{matrix}
\varkappa _{+}^{0}\backslash \vartheta _{+}^{0} & \varkappa
_{+}^{-}\backslash \sigma _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{+}^{0} & \vartheta _{0}^{-}\sqcup \vartheta _{+}^{-}%
\end{matrix}%
\right) ^{\ast }\mathbf{f}^{\otimes }\left(
\begin{matrix}
\varkappa _{0}^{-}\backslash \vartheta _{0}^{-} & \varkappa
_{+}^{-}\backslash \rho _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{0}^{-} & \vartheta _{+}^{-}\sqcup \vartheta _{+}^{0}%
\end{matrix}%
\right) \ .
\end{equation*}%
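As a consistency check, the four matrix elements of $\mathbf{f}^{{\star }}\mathbf{f}$ listed above can be read off directly from the multiplication of the triangular matrices, evaluated pointwise at $x$:
\begin{equation*}
\mathbf{f}^{{\star }}\mathbf{f}=\left[
\begin{matrix}
1 & f_{+}^{0\ast } & f_{+}^{-\ast }\cr0 & f_{0}^{0\ast } & f_{0}^{-\ast }\cr0
& 0 & 1%
\end{matrix}%
\right] \left[
\begin{matrix}
1 & f_{0}^{-} & f_{+}^{-}\cr0 & f_{0}^{0} & f_{+}^{0}\cr0 & 0 & 1%
\end{matrix}%
\right] =\left[
\begin{matrix}
1 & f_{0}^{-}+f_{+}^{0\ast }f_{0}^{0} & f_{+}^{-}+f_{+}^{0\ast
}f_{+}^{0}+f_{+}^{-\ast }\cr0 & f_{0}^{0\ast }f_{0}^{0} & f_{0}^{0\ast
}f_{+}^{0}+f_{0}^{-\ast }\cr0 & 0 & 1%
\end{matrix}%
\right] \ .
\end{equation*}%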
As the operator-valued functions $X\otimes \mathbf{f}^{\otimes }(\pmb%
\varkappa )$ are relatively bounded, with $\Vert X\otimes \mathbf{f}%
^{\otimes }\Vert (\pmb\zeta )=\Vert X\Vert $ with respect to $\zeta
_{\nu }^{\mu }(x)=\Vert f_{\nu }^{\mu }(x)\Vert $, and their linear span is
dense in the inductive space $\mathcal{U}$, the product formula can be obtained
as a limit for any $T\in \mathcal{U}$, and
\begin{equation*}
\Vert (T^{{\star }}\cdot T)(\pmb\varkappa )\Vert \leq \sum \Vert T^{{\star }%
}\left(
\begin{matrix}
\vartheta _{0}^{-}\sqcup \vartheta _{+}^{-} & \varkappa _{+}^{-}\backslash
\sigma _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta _{+}^{0} & \varkappa
_{+}^{0}\backslash \vartheta _{+}^{0}%
\end{matrix}%
\right) \Vert \;\Vert T\left(
\begin{matrix}
\varkappa _{0}^{-}\backslash \vartheta _{0}^{-} & \varkappa
_{+}^{-}\backslash \rho _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{0}^{-} & \vartheta _{+}^{-}\sqcup \vartheta _{+}^{0}%
\end{matrix}%
\right) \Vert \leq
\end{equation*}%
\begin{equation*}
\Vert T\Vert ^{2}(\pmb\zeta )\sum \pmb\zeta ^{\otimes }\left(
\begin{matrix}
\varkappa _{+}^{0}\backslash \vartheta _{+}^{0}, & \varkappa
_{+}^{-}\backslash \sigma _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{+}^{0}, & \vartheta _{0}^{-}\sqcup \vartheta _{+}^{-}%
\end{matrix}%
\right) \pmb\zeta ^{\otimes }\left(
\begin{matrix}
\varkappa _{0}^{-}\backslash \vartheta _{0}^{-} & \varkappa
_{+}^{-}\backslash \rho _{+}^{-}\cr\varkappa _{0}^{0}\sqcup \vartheta
_{0}^{-} & \vartheta _{+}^{-}\sqcup \vartheta _{+}^{0}%
\end{matrix}%
\right) =[\Vert T\Vert (\pmb\zeta )]^{2}(\pmb\zeta ^{{\star }}\pmb\zeta
)^{\otimes }(\pmb\varkappa )\ ,
\end{equation*}%
which means that $\Vert T^{{\star }}\cdot T\Vert (\pmb\zeta ^{{\star }}\pmb%
\zeta )\leq \lbrack \Vert T\Vert (\pmb\zeta )]^{2}$. Due to the proven continuity
of the linear map $\iota $ on $\mathcal{U}$ into the $\ast $-algebra of
relatively bounded operators on the projective limit $\cap _{\xi >0}\mathcal{%
G}(\xi )$, the $\star $-multiplicative property of $\iota $ can be extended
on the whole $\star $-algebra $\mathcal{U}$ with the unity $I(\pmb\varkappa
)=I\otimes \mathbf{1}^{\otimes }(\pmb\varkappa )$, where $\mathbf{1}(x)$ is the
identity matrix, having the representation $\iota (I)=\hat{I}$.
Now let us find the representation $U^{t}$ of the multiple quantum integral
(2.7), having the values $\iota \circ N_{[0,t)}(L)$ in $\mathcal{U}$ for
relatively bounded operator-valued functions $L(\pmb\vartheta ,\pmb\varkappa
)$ due to
\begin{equation*}
\Vert T^{t}(\pmb\varkappa )\Vert \leq c\sum_{\vartheta _{0}^{0}\subseteq
\varkappa _{0}^{0}}^{t(\vartheta _{0}^{0})<t}\sum_{\vartheta
_{+}^{0}\subseteq \varkappa _{+}^{0}}^{t(\vartheta
_{+}^{0})<t}\sum_{\vartheta _{0}^{-}\subseteq \varkappa
_{0}^{-}}^{t(\vartheta _{0}^{-})<t}\sum_{\vartheta _{+}^{-}\subseteq
\varkappa _{+}^{-}}^{t(\vartheta _{+}^{-})<t}\Vert L(\pmb\vartheta ,\pmb%
\varkappa \backslash \pmb\vartheta )\Vert
\end{equation*}%
\begin{equation*}
\leq c\displaystyle{\prod}_{\nu =0,+}^{\mu =-,0}\sum_{\vartheta _{\nu }^{\mu }\subseteq
\varkappa _{\nu }^{\mu }}^{t(\vartheta _{\nu }^{\mu })<t}\lambda _{\nu
}^{\mu }(\vartheta _{\nu }^{\mu })\xi _{\nu }^{\mu }(\varkappa _{\nu }^{\mu
}\backslash \vartheta _{\nu }^{\mu })=c\displaystyle{\prod}_{\nu =0,+}^{\mu =-,0}\zeta
_{\nu }^{\mu }(\varkappa _{\nu }^{\mu })\ ,
\end{equation*}%
where
\begin{equation*}
\zeta (\varkappa )=\displaystyle{\prod}_{x\in \varkappa }^{t(x)<t}[\lambda (x)+\xi
(x)]\displaystyle{\prod}_{x\in \varkappa }^{t(x)\geq t}\xi (x)
\end{equation*}%
for
\begin{equation*}
\lambda (\vartheta )=\displaystyle{\prod}_{x\in \vartheta }\lambda (x),\quad \xi
(\varkappa )=\displaystyle{\prod}_{x\in \varkappa }\xi (x)\ .
\end{equation*}
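The combinatorial step behind this estimate is the elementary identity (stated here for completeness)
\begin{equation*}
\sum_{\vartheta \subseteq \varkappa }\lambda (\vartheta )\xi (\varkappa
\backslash \vartheta )=\displaystyle{\prod}_{x\in \varkappa }(\lambda (x)+\xi (x))\ ,
\end{equation*}%
which follows by expanding the product: each point $x\in \varkappa $ contributes the factor $\lambda (x)$ if $x\in \vartheta $ and $\xi (x)$ if $x\notin \vartheta $, so the terms of the expansion are indexed by the subsets $\vartheta \subseteq \varkappa $. Applied separately to the points with $t(x)<t$, where the constrained sum runs, and to those with $t(x)\geq t$, where only $\xi $ contributes, it yields the function $\zeta $ above.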
From the definition (2.1) of $U^{t}=\iota (T^{t})$ we obtain the QS
integral (1.5):
\begin{equation*}
\lbrack U^{t}a](\varkappa )=\sum_{\varkappa _{0}^{0}\sqcup \varkappa
_{+}^{0}=\varkappa }\iint \sum_{\pmb\vartheta \subseteq \pmb\varkappa ^{t}}L(%
\pmb\vartheta ,\pmb\varkappa \backslash \pmb\vartheta )a(\varkappa
_{0}^{0}\sqcup \varkappa _{0}^{-})\mathrm{d}\varkappa _{0}^{-}\mathrm{d}%
\varkappa _{+}^{-}
\end{equation*}%
\begin{equation*}
=\sum_{\vartheta \sqcup \vartheta _{+}\subseteq \varkappa ^{t}}\int_{%
\mathcal{X}^{t}}\int_{\mathcal{X}^{t}}\sum_{\varkappa _{0}^{0}\sqcup
\varkappa _{+}^{0}=\vartheta _{-}}\iint L(\pmb\vartheta ,\pmb\varkappa )\dot{%
a}(\vartheta \sqcup \vartheta ^{-},\varkappa _{0}^{0}\sqcup \varkappa
_{0}^{-})\mathrm{d}\varkappa _{0}^{-}\mathrm{d}\varkappa _{+}^{-}\mathrm{d}%
\vartheta ^{-}\mathrm{d}\vartheta _{+}^{-}\ ,
\end{equation*}%
where $\vartheta _{-}=\varkappa \backslash (\vartheta \sqcup \vartheta _{+}),%
\dot{a}(\vartheta ,\varkappa _{0}^{0})=a(\vartheta \sqcup \varkappa
_{0}^{0}) $. Hence $\iota (T^{t})=\Lambda _{\lbrack 0,t)}(B)$ with
\begin{equation*}
\lbrack B(\pmb\vartheta )\dot{a}(\vartheta \sqcup \vartheta ^{-})](\varkappa
)=\sum_{\varkappa _{0}^{0}\sqcup \varkappa _{+}^{0}=\varkappa }\iint L(\pmb%
\vartheta ,\pmb\varkappa )\dot{a}(\vartheta \sqcup \vartheta ^{-},\varkappa
_{0}^{0}\sqcup \varkappa _{0}^{-})\mathrm{d}\varkappa _{0}^{-}\mathrm{d}%
\varkappa _{+}^{-}\ ,
\end{equation*}%
that is $B(\pmb\vartheta )=\iota (L(\pmb\vartheta ))$. In particular, if $%
U^{t}=U^{0}+\Lambda ^{t}(\mathbf{D})$ with $U^{0}=\iota (T^{0})$ and $%
\mathbf{D}(x)=\iota (\mathbf{C}(x))$, then $U^{t}=\iota (T^{0}+N^{t}(\mathbf{%
C}))$, i.e. $\iota \circ N^{t}=\Lambda ^{t}\;\circ \iota $, where
\begin{equation*}
N^{t}(\mathbf{C})(\pmb\varkappa )=\sum_{x\in {\pmb\varkappa }^{t}}C(\mathbf{x%
},\pmb\varkappa \backslash \mathbf{x}),\quad C(\mathbf{x}_{\nu }^{\mu },\pmb%
\varkappa )=C_{\nu }^{\mu }(x,\pmb\varkappa )\ .
\end{equation*}
In the case $U^{t}=\Lambda _{\lbrack 0,t)}(B)$ with $B=\iota (L)$ the QS
derivatives%
\begin{equation*}
D_{\nu }^{\mu }(x)=\Lambda _{\lbrack 0,t(x))}(\dot{B}(\mathbf{x}_{\nu }^{\mu
}))=\iota (C_{\nu }^{\mu }(x))
\end{equation*}%
are defined by
\begin{equation*}
C_{\nu }^{\mu }(x,\pmb\varkappa )=N_{[0,t(x))}(\dot{L}(\mathbf{x}_{\nu
}^{\mu }))(\pmb\varkappa )=\dot{T}^{t(x)]}(\mathbf{x}_{\nu }^{\mu },\pmb%
\varkappa )-\dot{T}^{t(x)}(\mathbf{x}_{\nu }^{\mu },\pmb\varkappa )\ ,
\end{equation*}%
where
\begin{equation*}
N_{[0,t)}(\dot{L}(\mathbf{x}))=\sum_{\pmb\vartheta \subseteq \pmb\varkappa
^{t}}L(\pmb\vartheta \sqcup \mathbf{x},\pmb\varkappa \backslash \pmb%
\vartheta ),
\end{equation*}%
$\mathbf{x}$ is one of the four elementary tables $\mathbf{x}_{\nu }^{\mu }$
and
\begin{equation*}
\dot{T}^{t(x)}(\mathbf{x},\pmb\varkappa )=\sum_{\pmb\vartheta \subseteq \pmb%
\varkappa ^{t(x)}}L(\pmb\vartheta ,(\pmb\varkappa \sqcup \mathbf{x}%
)\backslash \pmb\vartheta )=T^{t(x)}(\pmb\varkappa \sqcup \mathbf{x})\ ,
\end{equation*}%
\begin{equation*}
\dot{T}^{t(x)]}(\mathbf{x},\pmb\varkappa )=\sum_{\pmb\vartheta \subseteq \pmb%
\varkappa ^{t(x)}\sqcup \mathbf{x}}L(\pmb\vartheta ,(\pmb\varkappa \sqcup
\mathbf{x})\backslash \pmb\vartheta )=T^{t(x)}(\pmb\varkappa \sqcup \mathbf{x%
})
\end{equation*}%
\begin{equation*}
+\sum_{\pmb\vartheta \subseteq \pmb\varkappa ^{t(x)}}L(\pmb\vartheta \sqcup
\mathbf{x},\pmb\varkappa \backslash \pmb\vartheta )=\dot{T}^{t(x)}(\mathbf{x}%
,\pmb\varkappa )+N_{[0,t(x))}(\dot{L}(\mathbf{x}))(\pmb\varkappa )
\end{equation*}%
due to
\begin{equation*}
T^{s]}(\pmb\varkappa )=\sum_{\pmb\vartheta \subseteq \pmb\varkappa ^{s]}}L(\pmb%
\vartheta ,\pmb\varkappa \backslash \pmb\vartheta )=T^{s_{+}}(\pmb\varkappa )
\end{equation*}%
where
\begin{equation*}
\varkappa ^{s]}=\{x\in \varkappa |t(x)\leq s\}\ ,\quad s_{+}=\min
\{t(x)>s|x\in \varkappa \}\ .
\end{equation*}%
Hence the derivatives $D_{\nu }^{\mu }(x),x\in X^{t}$, defining $%
U^{t}-U^{0}=\Lambda _{\lbrack 0,t)}(\mathbf{D})$ are represented as the
differences
\begin{equation*}
D_{\nu }^{\mu }(x)=\iota \lbrack \dot{T}^{t(x)]}(\mathbf{x}_{\nu }^{\mu
})]-\iota \lbrack \dot{T}^{t(x)}(\mathbf{x}_{\nu }^{\mu })]
\end{equation*}%
of the operators (2.8), where $\dot{T}^{s]}(\mathbf{x},\pmb\varkappa )=T^{t}(%
\pmb\varkappa \sqcup \mathbf{x})=\dot{T}^{t}(\mathbf{x},\pmb\varkappa )$ for
any $t:s<t\leq s_{+}=\min \{t(x)>s|x\in \pmb\varkappa \}$.
Let us consider $\dot{T}^{t}(\mathbf{x})$ as elements $T_{\nu }^{\mu }(x)=%
\dot{T}^{t}(\mathbf{x}_{\nu }^{\mu })$ of the triangular operator-valued
matrix-function $\mathbf{T}^{t}(x)$ with $T_{\nu }^{\mu }=0$ for $\mu >\nu $%
, and $T_{-}^{-}(x)=T^{t}=T_{+}^{+}(x)$ independent of $x\in X$, defining
the triangular matrices $\mathbf{U}=[U_{\nu }^{\mu }]$ and $\mathbf{G}%
=[G_{\nu }^{\mu }]$ as $\mathbf{U}(x)=\iota (\mathbf{T}^{t(x)}(x))$ and $%
\mathbf{G}(x)=\iota (\mathbf{T}^{t(x)]}(x))$, where $\mathbf{T}^{s]}=\mathbf{%
T}^{t}$ for a $t\in (s,s_{+}]$. This helps to generalize the QS It\^{o}
formula [1] for nonadapted processes as
\begin{equation*}
U^{t{\ast }}U^{t}-U^{0{\ast }}U^{0}=\Lambda ^{t}(\mathbf{U}^{{\star }}%
\mathbf{D}+\mathbf{D}^{{\star }}\mathbf{U}+\mathbf{D}^{{\star }}\mathbf{D}),
\end{equation*}%
because
\begin{equation*}
U^{t{\ast }}U^{t}=\iota (T^{t{\star }}\cdot T^{t}),\quad (T^{{\star }}\cdot T)(\pmb%
\varkappa \sqcup \mathbf{x}_{\nu }^{\mu })=(\mathbf{T}^{{\star }}(x)\cdot
\mathbf{T}(x))_{\nu }^{\mu }(\pmb\varkappa )\ ,
\end{equation*}%
as it follows directly from the formula (2.5), in terms of the usual product
of the triangular matrices $\mathbf{T}^{{\star }}$ and $\mathbf{T}$, defined by the
multiplication $\cdot $ of the matrix elements $T_{\nu }^{\mu }(x)$, and the $\star $%
-multiplicative property
\begin{equation*}
\iota (\mathbf{T}^{{\star }}(x)\cdot \mathbf{T}(x))\;=\;\iota (\mathbf{T}%
(x))^{{\star }}\iota (\mathbf{T}(x))\;.
\end{equation*}
Applying this for $t=t(x)$ and $t=t_{+}(x)=\min \{t\in t(\varkappa
)|t>t(x)\} $ to the representation
\begin{equation*}
\iota \lbrack (\mathbf{T}^{t(x)]{\star }}\mathbf{T}^{t(x)]})(x)-(\mathbf{T}%
^{t(x){\star }}\mathbf{T}^{t(x)})(x)]
\end{equation*}%
of the QS derivative of the process $U^{t{\ast }}U^{t}$, we finally obtain
the formula
\begin{equation*}
\mathrm{d}(U^{t{\ast }}U^{t})=\mathrm{d}\Lambda ^{t}[\iota (\mathbf{T}%
^{t]})^{{\star }}\iota (\mathbf{T}^{t]})-\iota (\mathbf{T}^{t})^{{\star }%
}\iota (\mathbf{T}^{t})]\ ,
\end{equation*}%
giving the multiplication table (2.9) in terms of the triangular matrices
(2.8) with $G_{-}^{-}(x)=U^{t(x)}=G_{+}^{+}(x)\
,U_{-}^{-}(x)=U^{t(x)}=U_{+}^{+}(x)$ and $D_{\nu }^{\mu }(x)=G_{\nu }^{\mu
}(x)-U_{\nu }^{\mu }(x)\ .$
\end{proof}
\begin{corollary}
The QS process $U^{t}=\iota (T^{t})$ is adapted, iff $T^{t}(\pmb\varkappa
)=T(\pmb\varkappa ^{t})\otimes 1(\pmb\varkappa _{\lbrack t})$ for almost all
$\pmb\varkappa =(\varkappa _{\nu }^{\mu })$, where $\pmb\varkappa ^{t}=\pmb%
\varkappa \cap X^{t},\pmb\varkappa _{\lbrack t}=\pmb\varkappa \cap X_{[t}$,
and $1(\pmb\varkappa )=I(\varkappa _{0}^{0})$ for $\varkappa _{\nu }^{\mu
}=\emptyset ,\mu \not=\nu $, otherwise $1(\pmb\varkappa )=0$. The QS It\^{o}
formula for adapted processes $U^{t}$ can be written in the form
\begin{equation*}
\mathrm{d}(U^{\ast }U)=\mathrm{d}\Lambda (\mathbf{G}^{{\star }}\mathbf{G}-%
\mathbf{U}^{{\ast }}\mathbf{U}\otimes \mathbf{1})=U^{\ast }\mathrm{d}U+%
\mathrm{d}U^{\ast }U+\mathrm{d}U^{\ast }\mathrm{d}U\ ,\eqno(2.10)
\end{equation*}%
where $\mathrm{d}U^{\ast }\mathrm{d}U=\mathrm{d}\Lambda (\mathbf{D}^{{\star }%
}\mathbf{D})$ is defined by the usual product of the triangular matrices $%
\mathbf{D}=[D_{\nu }^{\mu }]$, $\mathbf{D}^{{\star }}=(\hat{I}\otimes
\mathbf{g})\mathbf{D}^{\ast }(\hat{I}\otimes \mathbf{g})$ with $D_{\nu
}^{\mu }=0$, if $\mu =+$ or $\nu =-\ .$
\end{corollary}
Indeed, if $T^{t}(\pmb\varkappa )=T(\pmb\varkappa ^{t})\otimes 1(\pmb%
\varkappa _{\lbrack t})$, then
\begin{equation*}
\lbrack U^{t}(a^{t}\otimes c)](\varkappa )=\sum_{\varkappa _{0}^{0}\sqcup
\varkappa _{+}^{0}=\varkappa ^{t}}\iint T(\pmb\varkappa ^{t})a(\varkappa
_{0}^{0}\sqcup \varkappa _{0}^{-})\otimes c(\varkappa _{\lbrack t})\mathrm{d}%
\varkappa _{0}^{-}\mathrm{d}\varkappa _{+}^{-}\ ,
\end{equation*}%
for any $a^{t}\in \mathcal{G}^{t},c\in \mathcal{F}_{[t}$, where the integral
should be taken over $\varkappa _{0}^{-},\varkappa _{+}^{-}\in \mathcal{X}%
^{t}$, otherwise $T^{t}(\pmb\varkappa )=0$. Hence $U^{t}(a^{t}\otimes
c)=b^{t}\otimes c$ for a $b^{t}\in \mathcal{G}^{t}$. In this case
\begin{equation*}
\dot{T}^{t(x)}(\mathbf{x}_{\nu }^{\mu },\pmb\varkappa )=T^{t(x)}(\pmb%
\varkappa \sqcup \mathbf{x}_{\nu }^{\mu })=T(\pmb\varkappa ^{t(x)})\otimes
1_{\nu }^{\mu }(x)\otimes 1(\pmb\varkappa _{\lbrack t(x)})\ ,
\end{equation*}%
where $1_{\nu }^{\mu }(x)=0$, if $\mu \not=\nu
,1_{-}^{-}=1=1_{+}^{+},1_{0}^{0}(x)=I(x)$. This gives $U_{\nu }^{\mu
}(x)=\iota (\dot{T}^{t(x)}(\mathbf{x}_{\nu }^{\mu }))=U^{t(x)}\otimes 1_{\nu
}^{\mu }(x),$ and
\begin{equation*}
\mathrm{d}\Lambda (\mathbf{G}^{{\star }}\mathbf{G}-U^{\ast }U\otimes \mathbf{1})=\mathrm{d}%
\Lambda ((U^{\ast }\otimes \mathbf{1})\mathbf{D}+\mathbf{D}^{{\star }%
}(U\otimes \mathbf{1})+\mathbf{D}^{{\star }}\mathbf{D})=
\end{equation*}%
\begin{equation*}
U^{\ast }\mathrm{d}\Lambda (\mathbf{D})+\mathrm{d}\Lambda (\mathbf{D}^{{%
\star }})U+\mathrm{d}\Lambda (\mathbf{D}^{{\star }}\mathbf{D})=U^{\ast }%
\mathrm{d}U+\mathrm{d}U^{\ast }U+\mathrm{d}U^{\ast }\mathrm{d}U
\end{equation*}%
as it follows from Corollary 1 for the adapted $U^{t}$.
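For the adapted case the It\^{o} correction $\mathrm{d}U^{\ast }\mathrm{d}U=\mathrm{d}\Lambda (\mathbf{D}^{{\star }}\mathbf{D})$ can be written out entrywise; since $\mathbf{D}^{{\star }}=(\hat{I}\otimes \mathbf{g})\mathbf{D}^{\ast }(\hat{I}\otimes \mathbf{g})$ has the same triangular form as $\mathbf{D}$, a direct matrix computation gives
\begin{equation*}
\mathbf{D}^{{\star }}\mathbf{D}=\left[
\begin{matrix}
0 & D_{+}^{0{\ast }} & D_{+}^{-{\ast }}\cr0 & D_{0}^{0{\ast }} & D_{0}^{-{%
\ast }}\cr0 & 0 & 0%
\end{matrix}%
\right] \left[
\begin{matrix}
0 & D_{0}^{-} & D_{+}^{-}\cr0 & D_{0}^{0} & D_{+}^{0}\cr0 & 0 & 0%
\end{matrix}%
\right] =\left[
\begin{matrix}
0 & D_{+}^{0{\ast }}D_{0}^{0} & D_{+}^{0{\ast }}D_{+}^{0}\cr0 & D_{0}^{0{%
\ast }}D_{0}^{0} & D_{0}^{0{\ast }}D_{+}^{0}\cr0 & 0 & 0%
\end{matrix}%
\right] \ ,
\end{equation*}%
so only the four products involving $D_{0}^{0}$ and $D_{+}^{0}$ as right factors survive, in accordance with the usual QS It\^{o} multiplication table.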
\section{A nonadapted QS evolution and chronological products}
The proved $\star $-homomorphism and continuity properties of the
representation $\iota $ of the unital inductive $\star $-algebra $\mathcal{U}
$ of all operator-valued functions $T(\pmb\varkappa )$ of $\varkappa _{\nu
}^{\mu }\in \mathcal{X}$, $(\mu ,\nu )\in \{-,0\}\times \{0,+\}$, relatively
bounded with respect to some $\pmb\zeta =[\zeta _{\nu }^{\mu }(x)]$, into
the $\star $-algebra $\mathcal{B}$ of all relatively bounded operators on
the projective limit $\mathcal{G}^{+}=\bigcap_{\xi >0}\mathcal{G}(\xi )$
enables us to construct a QS functional calculus.
Namely, if $T=f(Q_{1},\dots ,Q_{m})$ is an analytical function of $Q_{i}\in
\mathcal{U}$ as a limit in $\mathcal{U}$ of polynomials $T_{n}$ with some
ordering of noncommuting $Q_{1},\dots ,Q_{m}$ in the sense $\Vert
T_{n}-T\Vert (\pmb\zeta )\rightarrow 0$ for a $\pmb\zeta $, then $U=\iota
(T) $ is the ordered function $f(X_{1},\dots ,X_{m})$ of $X_{i}=\iota
(Q_{i}) $ as a limit on $\mathcal{G}^{+}$ of the corresponding polynomials $%
U_{n}=\iota (T_{n})$, that is $\Vert U_{n}-U\Vert _{\xi ^{+}}^{\xi
_{-}}\rightarrow 0$ for any $\xi _{-}>0$ and $\xi ^{+}>\xi _{-}\Vert \zeta
_{0}^{0}\Vert _{\infty }^{2}$. The function $U^{{\ast }}=f^{{\ast }}(X_{1}^{{%
\ast }},\dots ,X_{m}^{{\ast }})$ with the transposed ordering, as $U^{{\ast }%
}=\iota (T^{{\star }})$ for $T^{{\star }}=f^{{\ast }}(Q_{1}^{{\star }},\dots
,Q_{m}^{{\star }})$, is also defined as the $(\xi ^{+},\xi _{-})$-limit due to $%
\Vert T_{n}^{{\star }}-T^{{\star }}\Vert (\pmb\zeta ^{{\star }})\rightarrow
0 $ and $(\pmb\zeta ^{{\star }})_{0}^{0}=\zeta _{0}^{0}$.
The differential form of this calculus is given by the noncommutative and
nonadapted generalization of the QS It\^{o} formula
\begin{equation*}
\mathrm{d}X=\mathrm{d}\Lambda (\mathbf{A})\Rightarrow \mathrm{d}f(X)=\mathrm{%
d}\Lambda (f(\mathbf{X}+\mathbf{A})-f(\mathbf{X}))\eqno(3.1)
\end{equation*}%
defined for any analytical function $U^{t}=f(X^{t})$ of $\iota (Q^{t})$ as
QS differential of $\iota (T^{t})$ for $T^{t}=f(Q^{t})$ with
\begin{equation*}
U_{\nu }^{\mu }(x)=f(\mathbf{X})_{\nu }^{\mu }(x),\quad G_{\nu }^{\mu }(x)=f(%
\mathbf{X}+\mathbf{A})_{\nu }^{\mu }(x),
\end{equation*}%
where $f(\mathbf{Z})(x)=f(\mathbf{Z}(x))$ is the triangular matrix which is
the function of the matrix $\mathbf{Z}(x)$, representing $\mathbf{Q}%
^{t(x)}(x)$ and $\mathbf{Q}^{t(x)]}(x)$ correspondingly as $\mathbf{X}%
(x)=\iota (\mathbf{Q}^{t(x)}(x))$ and%
\begin{equation*}
\mathbf{X}(x)+\mathbf{A}(x)\ ,\ \;\;A_{\nu }^{\mu }(x)=\iota (\dot{Q}%
^{t(x)]}(\mathbf{x}_{\nu }^{\mu })-\dot{Q}^{t(x)}(\mathbf{x}_{\nu }^{\mu })).
\end{equation*}%
For the ordered functions $U^{t}=f(X_{1}^{t},\dots ,X_{n}^{t})$ this can be
written in terms of $\mathbf{X}_{i}$ with $\mathrm{d}\mathbf{X}_{i}=\mathrm{d%
}\Lambda (\mathbf{A}_{i})$ and $\mathbf{Z}_{i}=\mathbf{X}_{i}+\mathbf{A}_{i}$
as
\begin{equation*}
\mathrm{d}U=\mathrm{d}\Lambda (f(\mathbf{Z}_{1},\dots ,\mathbf{Z}_{n})-f(%
\mathbf{X}_{1},\dots ,\mathbf{X}_{n}))\ .\eqno(3.2)
\end{equation*}%
In particular, if all the triangular matrices $\{\mathbf{X}_{i},\mathbf{Z}%
_{i}\}$ are commutative, then one can obtain the exponential function $%
U^{t}=\exp \{X^{t}\}$ for $X=\sum X_{i}$ as the solution of the following QS
differential equation
\begin{equation*}
\mathrm{d}U=\mathrm{d}\Lambda \left[ (\exp \{\mathbf{A}\}-\hat{\mathbf{I}})\
\mathbf{U}\right] \ ,\eqno(3.3)
\end{equation*}%
with $\mathbf{A}=\sum \mathbf{A}_{i}$ and the initial condition $U^{0}=\hat{I%
}$. Now we shall study the problem of the solution of the general QS
evolution equation
\begin{equation*}
\mathrm{d}U=\mathrm{d}\Lambda (\left( \mathbf{S}-\hat{\mathbf{I}}\right) \
\mathbf{U})=\mathrm{d}\Lambda \left( \mathbf{B}\ \mathbf{U}\right) ,\eqno%
(3.4)
\end{equation*}%
defined by a matrix-function $\mathbf{B}(x)=[B_{\nu }^{\mu }(x)]$ with
noncommutative operator-values
\begin{align*}
B_{0}^{0}(x)& \colon \mathcal{G}\otimes \mathcal{E}(x)\rightarrow \mathcal{G}%
\otimes \mathcal{E}(x),\quad B_{+}^{-}(x)\colon \mathcal{G}\rightarrow
\mathcal{G}\ , \\
B_{+}^{0}(x)& \colon \mathcal{G}\rightarrow \mathcal{G}\otimes \mathcal{E}%
(x),\quad B_{0}^{-}(x)\colon \mathcal{G}\otimes \mathcal{E}(x)\rightarrow
\mathcal{G}\ ,
\end{align*}%
and $B_{\nu }^{\mu }(x)=0$, if $\mu =+$ or $\nu =-$. In the
adapted case this equation can be written according to Corollary 1 as $%
\mathrm{d}U=\mathrm{d}\Lambda (\mathbf{B})U$, which shows that its solution
with $U^{0}=\hat{I}$ should be defined in some sense as a chronologically
ordered exponent $U^{t}=\Gamma _{\lbrack 0,t)}(\mathbf{B})$. In particular,
if $B_{\nu }^{\mu }(x)=\hat{I}\otimes l_{\nu }^{\mu }(x)$, where $\mathbf{l}%
=[l_{\nu }^{\mu }]$ is a triangular QS-integrable matrix-function with $%
l_{\nu }^{\mu }=0$, if $\mu =+$ or $\nu =-$,
\begin{equation*}
l_{0}^{0}(x)\colon \mathcal{E}(x)\rightarrow \mathcal{E}(x),\;\Vert
l_{0}^{0}\Vert _{\infty }^{t}<\infty ;\;l_{+}^{0}(x)\in \mathcal{E}%
(x),\;l_{0}^{-}(x)\in \mathcal{E}^{{\ast }}(x),\;\Vert l\Vert
_{2}^{t}<\infty ;\Vert l_{+}^{-}\Vert _{1}^{t}<\infty ,
\end{equation*}%
then $U^{t}$ is defined as $I\otimes \Gamma _{\lbrack 0,t)}(\mathbf{l})$,
where $\Gamma _{\lbrack 0,t)}(\mathbf{l})=\iota \left( \mathbf{f}%
_{[0,t)}^{\otimes }\right) $ is the representation (2.1) of $\mathbf{f}%
_{[0,t)}^{\otimes }(\pmb\varkappa )=\otimes _{x\in \pmb\varkappa }\mathbf{f}%
^{t}(x)$ with $\mathbf{f}^{t}(x)=\mathbf{1}(x)+\mathbf{l}^{t}(x)$, $\mathbf{l%
}^{t}(x)=\mathbf{l}(x)$ if $t(x)<t$ and $\mathbf{l}^{t}(x)=0$, if $t(x)\geq
t,$ i.e. $\Gamma _{\lbrack 0,t)}(\mathbf{l})$ is the second quantization $%
\Gamma (\mathbf{l}^{t})$ of $\mathbf{l}^{t}$.
\begin{theorem}
The QS evolution equation (3.4) written in the integral form $%
U^{t}=U^{0}+\Lambda ^{t}(\mathbf{B}\mathbf{U})$ with $U^{0}=\iota (T^{0})$, $%
\mathbf{B}(x)=\iota (\mathbf{L}(x))$, is the representation $\iota $ of the
recurrences
\begin{equation*}
T^{t_{+}}(\pmb\varkappa )=\left[ F_{t(x)}\cdot T^{t}\right] (\pmb\varkappa
),x\in \varkappa =\sqcup \varkappa _{\nu }^{\mu }\ ,\eqno(3.5)
\end{equation*}%
defined for any partition $\pmb\varkappa =(\varkappa _{\nu }^{\mu })$ of a
chain $\varkappa \in \mathcal{X}$ with $t\in (t_{-}(x),t(x)]$,$%
\;t_{-}(x)=\max \;t\left( \varkappa ^{t(x)}\right) $ and $t_{+}\in
(t(x),t_{+}(x)]$, $t_{+}(x)=\min t(\varkappa _{t(x)})$, by
\begin{equation*}
F_{t(x)}\left( \pmb\varkappa \sqcup \mathbf{x}_{\nu }^{\mu }\right) =L_{\nu
}^{\mu }(x,\pmb\varkappa )+I\left( \pmb\varkappa \sqcup \mathbf{x}_{\nu
}^{\mu }\right) \equiv F_{\nu }^{\mu }(x,\pmb\varkappa ),
\end{equation*}%
where $\mathbf{x}_{\nu }^{\mu }$ is one of the four single point tables $\pmb%
\vartheta =(\vartheta _{\lambda }^{\kappa })$ with $\vartheta _{\nu }^{\mu
}=x$.
The recurrence (3.5) with the initial condition $T^{t}(\pmb\varkappa )=T^{0}(%
\pmb\varkappa )$ for all $t\leq \min t(\varkappa )$ has the unique solution
\begin{equation*}
T^{t}(\pmb\varkappa )=[F_{[0,t)}\cdot T^{0}](\pmb\varkappa
),\;\;\;\;F_{[0,t)}=\bullet _{0\leq s<t}^{\leftarrow }F_{s}\ ,\eqno(3.6)
\end{equation*}%
where $F_{s}(\pmb\varkappa )=I(\pmb\varkappa ),$ if $s\notin \sqcup
\varkappa _{\nu }^{\mu }=\varkappa $, defined for every chain $\varkappa
=(x_{1},\dots ,x_{m},\dots )\in \mathcal{X}$, $t\in (t_{m-1},t_{m}]$ as
product%
\begin{equation*}
\bullet _{x\in \varkappa ^{t}}^{\leftarrow }F_{t(x)}=F_{t_{m-1}}\cdots
F_{t_{1}}
\end{equation*}%
of $F_{t_{i}},t_{i}=t(x_{i})<t_{i+1}$ and $T^{0}$ in chronological order.
The solution $U^{t}=\iota (T^{t})$ of (3.4) is isometric $U^{{\ast }}U=\hat{I%
}$ (unitary: $U^{{\ast }}=U^{-1}$) up to a $t>0$, if $U^{0}$ is isometric
(unitary) and the triangular matrix-function $\mathbf{S}(x)=\mathbf{B}(x)+%
\hat{\mathbf{I}}(x)=\iota (\mathbf{F}(\varkappa ))$ is pseudoisometric:%
\begin{equation*}
\mathbf{S}^{{\star }}(x)\mathbf{S}(x)=\hat{\mathbf{I}}(x)=\hat{I}\otimes
\mathbf{1}(x)
\end{equation*}%
(pseudounitary $\mathbf{S}^{{\star }}(x)=\mathbf{S}(x)^{-1}$) for almost all
$x\in X^{t}$, that is
\begin{equation*}
S_{0}^{0}(x)^{{\ast }}S_{0}^{0}(x)=I(x),\quad S_{+}^{-}(x)^{{\ast }%
}+S_{+}^{0}(x)^{{\ast }}S_{+}^{0}(x)+S_{+}^{-}(x)=0\eqno(3.7)
\end{equation*}%
\begin{equation*}
S_{0}^{-}(x)^{{\ast }}+S_{0}^{0}(x)^{{\ast }}S_{+}^{0}(x)=0,\quad
S_{+}^{0}(x)^{{\ast }}S_{0}^{0}(x)+S_{0}^{-}(x)=0
\end{equation*}%
(and $S_{0}^{0}(x)$ is unitary $S_{0}^{0}(x)^{{\ast }}=S_{0}^{0}(x)^{-1}$
for almost all $x\in X^{t}$).
\end{theorem}
\begin{proof}
We are looking for the solution of the equation (3.4) as the representation $%
U=\iota (T)$ of some $T(\pmb\varkappa )$. If $\mathbf{B}(x)=\iota (\mathbf{L}%
(x))$, then $\mathbf{B}(x)\mathbf{U}(x)=\iota (\mathbf{L}(x)\mathbf{T}(x))$
and $\Lambda ^{t}(\mathbf{B}\mathbf{U})=\iota (N^{t}(\mathbf{LT}))$ due to
the property $\Lambda \circ \iota =\iota \circ N$, proved in Theorem 2, and
the multiplicative property $\iota (\mathbf{LT})=\mathbf{B}\mathbf{U}$,
where $\mathbf{U}(x)=\iota (\mathbf{T}(x))$, $\mathbf{T}(x)$ denotes the
triangular matrix $[T_{\nu }^{\mu }(x)],T_{\nu }^{\mu }=0$, if $\mu >\nu $,
with $T_{-}^{-}(x)=T^{t(x)}=T_{+}^{+}(x)$ and $T_{\nu }^{\mu }(x)=\dot{T}%
^{t(x)}$ $(\mathbf{x}_{\nu }^{\mu })$ for $\mu \not=+$, $\nu \not=-$. This
makes it possible to consider equation (3.4) in integral form as the
representation $U^{t}=\iota (T^{0}+N^{t}(\mathbf{LT}))$ of the equation
\begin{equation*}
T^{t}(\pmb\varkappa )=T^{0}(\pmb\varkappa )+N^{t}(\mathbf{LT})(\pmb\varkappa
),
\end{equation*}%
corresponding to $U^{0}=\iota (T^{0})$, where
\begin{equation*}
N^{t}(\mathbf{LT})(\pmb\varkappa )=\sum_{\nu =0,+}^{\mu =-,0}\ \sum_{x\in
\varkappa _{\nu }^{\mu }}^{t(x)<t}\left[ \mathbf{L}(x)\mathbf{T}(x)\right]
_{\nu }^{\mu }\left( \pmb\varkappa \backslash \mathbf{x}_{\nu }^{\mu }\right)
\end{equation*}%
depends on $T^{s}(\pmb\vartheta )$ with $s=t(x)<t$ for $x\in \sqcup
\varkappa _{\nu }^{\mu }\supseteq \sqcup \vartheta _{\nu }^{\mu }$. This
defines $T^{t}(\pmb\varkappa )$ for any partition $\pmb\varkappa =(\varkappa
_{\nu }^{\mu })$ of a chain $\varkappa =(x_{1},\dots ,x_{n})\in \mathcal{X}$
as the solution $T^{t}(\pmb\varkappa )=T_{m}(\pmb\varkappa ),\qquad
m=|\varkappa ^{t}|$ of the recurrence
\begin{equation*}
T_{m}(\pmb\varkappa )=T_{0}(\pmb\varkappa )+\sum_{k=1}^{m}\left[ \left(
F_{t_{k}}-I\right) T_{k-1}\right] (\pmb\varkappa )=\left[ F_{t_{m}}T_{m-1}%
\right] (\pmb\varkappa ),
\end{equation*}%
where $T_{k-1}=T^{t_{k}}$ for $t_{k}=t(x_{k})$, and the product $\mathbf{LT}$
for $T_{\nu }^{\mu }(x,\pmb\varkappa )=T^{t(x)}(\pmb\varkappa \sqcup \mathbf{%
x}_{\nu }^{\mu })$ is written as%
\begin{equation*}
\lbrack \mathbf{L}(x)\mathbf{T}(x)]_{\nu }^{\mu }(\pmb\varkappa \backslash
\mathbf{x}_{\nu }^{\mu })=\left[ \left( F_{t(x)}-I\right) T^{t(x)}\right] (%
\pmb\varkappa )
\end{equation*}%
for $x\in \varkappa _{\nu }^{\mu }$ in terms of $F_{t(x)}(\pmb\varkappa
\sqcup x_{\nu }^{\mu })=F_{\nu }^{\mu }(x,\pmb\varkappa )$ for $\mathbf{F}=%
\mathbf{L}+\mathbf{I}$. So, if the solution of (3.4) exists as $U^{t}=\iota (%
\mathbf{T}^{t})$, then it is uniquely defined by (3.6).
Let us suppose that $(\mathbf{S}^{{\star }}\mathbf{S})(x)=\hat{I}\otimes
\mathbf{1}(\mathbf{x})$ for almost all $x$ with $t(x)<t$, which is the
representation $\iota (\mathbf{F}^{{\star }}\mathbf{F})=\mathbf{S}^{{\star }}%
\mathbf{S}$ of $\left( F_{t(x)}^{{\star }}F_{t(x)}\right) (\pmb\varkappa
\sqcup \mathbf{x})=I(\pmb\varkappa )\otimes 1(\mathbf{x})$ for corresponding
$\mathbf{F}(x)$, $t(x)<t$. By the recurrence $T_{m}=F_{t_{m}}T_{m-1}$ we
obtain $(T_{k}^{{\star }}T_{k})(\pmb\varkappa )=I(\pmb\varkappa )$ for all $%
t_{k}<t$ from the initial condition $(T_{0}^{{\star }}T_{0})(\pmb\varkappa
)=I(\pmb\varkappa )$. Hence, $(T^{t{\star }}T^{t})(\pmb\varkappa )=I(\pmb%
\varkappa )$ for almost all tables $\pmb\varkappa =(\varkappa _{\nu }^{\mu
}) $, namely those which are partitions of the chains $\varkappa $. This
gives $U^{t{\ast }}U^{t}=\hat{I}$ for $U^{t}=\iota (T^{t})$. In the same way
one can obtain the condition $U^{t}U^{t{\ast }}=\hat{I}$ from $(\mathbf{SS}^{%
{\star }})(x)=\hat{I}\otimes \mathbf{1}(x)$ for almost all $x$ with $t(x)<t$%
. Writing the condition $\mathbf{S}^{{\star }}\mathbf{S}=\hat{I}\otimes
\mathbf{1}$ in terms of matrix elements, we obtain (3.7):
\begin{equation*}
(\mathbf{S}^{{\star }}\mathbf{S})(x)=\left[
\begin{matrix}
1 & S_{+}^{0}(x)^{{\ast }} & S_{+}^{-}(x)^{{\ast }}\cr0 & S_{0}^{0}(x)^{{%
\ast }} & S_{0}^{-}(x)^{{\ast }}\cr0 & 0 & 1%
\end{matrix}%
\right] \left[
\begin{matrix}
1 & S_{0}^{-}(x) & S_{+}^{-}(x)\cr0 & S_{0}^{0}(x) & S_{+}^{0}(x)\cr0 & 0 & 1%
\end{matrix}%
\right] =\hat{I}\otimes \left[
\begin{matrix}
1 & 0 & 0\cr0 & I(x) & 0\cr0 & 0 & 1%
\end{matrix}%
\right] \ .
\end{equation*}
The unitary solution of equation (3.4) under the conditions (3.7), in terms
of $\mathbf{B}=\mathbf{S}-\hat{\mathbf{I}}$, was obtained within the
framework of It\^{o} (adapted) QS calculus in [1] for the stationary
Markovian case $\mathbf{B}(t)=\mathbf{L}\otimes \hat{1}$, in [11] for the
nonstationary finite-dimensional Markovian case $\mathbf{B}(t)=\mathbf{L}%
(t)\otimes \hat{1}$, and in [5] for the non-Markovian adapted case $\mathbf{B%
}(t)=\mathbf{L}^{t}\otimes 1_{[t}$.
\end{proof}
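As a simple illustration of the conditions (3.7) (ours, for the scalar case $%
\dim \mathcal{E}=1$): writing $S_{0}^{0}=\mathrm{e}^{\mathrm{i}\varphi }$
and $S_{+}^{0}=\alpha \in \mathbb{C}$, the equations (3.7) determine
\begin{equation*}
\mathbf{S}=\left[
\begin{matrix}
1 & -\bar{\alpha }\mathrm{e}^{\mathrm{i}\varphi } & -{\frac{1}{2}}|\alpha
|^{2}-\mathrm{i}\gamma \cr0 & \mathrm{e}^{\mathrm{i}\varphi } & \alpha \cr0
& 0 & 1%
\end{matrix}%
\right] ,\qquad \varphi ,\gamma \in \mathbb{R}\ ,
\end{equation*}%
with $\gamma $ arbitrary; a direct multiplication confirms the
pseudounitarity $\mathbf{S}^{{\star }}\mathbf{S}=\mathbf{1}$, and the three
free parameters $\varphi ,\alpha ,\gamma $ correspond to the Poissonian,
Brownian and Lebesgue parts of the canonical decomposition obtained at the
end of this section.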
\begin{corollary}
Let $U^{0}=T^{0}\otimes \hat{1},T^{0}\in \mathcal{B}(\mathcal{H})$ and $%
\mathbf{S}(x)=\mathbf{F}(x)\otimes \hat{1}$ be defined by the operators $%
F_{\nu }^{\mu }=L_{\nu }^{\mu },\ \mu <\nu ,\ F_{0}^{0}=L_{0}^{0}+I$, acting
as
\begin{align*}
F_{0}^{0}(x)& \colon \mathcal{H}\otimes \mathcal{E}(x)\rightarrow \mathcal{H}%
\otimes \mathcal{E}(x),\quad F_{+}^{-}:\mathcal{H}\rightarrow \mathcal{H}, \\
F_{+}^{0}(x)& \colon \mathcal{H}\rightarrow \mathcal{H}\otimes \mathcal{E}%
(x),\quad F_{0}^{-}(x)\colon \mathcal{H}\otimes \mathcal{E}(x)\rightarrow
\mathcal{H}
\end{align*}%
with
\begin{equation*}
\Vert F_{0}^{0}\Vert _{\infty }^{(t)}<\infty ,\ \Vert F_{+}^{0}\Vert
_{2}^{(t)}<\infty ,\ \Vert F_{0}^{-}\Vert _{2}^{(t)}<\infty ,\ \Vert
F_{+}^{-}\Vert _{1}^{(t)}<\infty \ .\eqno(3.8)
\end{equation*}%
Then the solution $U^{t}=\iota \left( T^{t}\right) $ of the evolution
equation (3.4) is defined as a $\zeta $-bounded operator for $\zeta >\Vert
F_{0}^{0}\Vert _{\infty }^{(t)}$ by the adapted chronological product $T^{t}(%
\pmb\varkappa )=F_{[0,t)}(\pmb\varkappa )\cdot T^{0}$, satisfying the
recurrence
\begin{equation*}
T^{t_{+}}(\pmb\varkappa \sqcup \mathbf{x})=\ F(\mathbf{x})\cdot T^{t}(\pmb%
\varkappa ),\quad F(\mathbf{x}_{\nu }^{\mu })=F_{\nu }^{\mu }(x)\ ,\eqno(3.9)
\end{equation*}%
where $t_{-}(x)<t\leq t(x)<t_{+}\leq t_{+}(x),t_{-}(x)=\max \{t<t(x)|t\in
t(\varkappa )\},t_{+}(x)=\min \{t>t(x)|t\in t(\varkappa )\}$, and $\cdot $
denotes the semitensor product defined in Theorem 1.
The QS process $U^{t}=\iota (T^{t})$ is adapted and can be represented as
the multiple QS integral (1.5) $U^{t}=\Lambda _{\lbrack 0,t)}(\mathbf{L}%
^{\triangleleft }\cdot T^{0}\otimes \hat{1})$ with semitensor chronological
products $\mathbf{L}^{\triangleleft }(\pmb\vartheta )=\bullet _{x\in \pmb%
\vartheta }^{\leftarrow }L(\mathbf{x})$ of $\mathbf{L}(x)=\mathbf{F}(x)-%
\mathbf{I}(x)$, and has the estimate
\begin{equation*}
\Vert U^{t}\Vert _{\xi ^{+}}^{\xi _{-}}\leq \Vert T^{0}\Vert \exp \left\{
\int_{0}^{t}\left( \Vert L_{+}^{-}(x)\Vert +(\Vert L_{0}^{-}(x)\Vert
^{2}+\Vert L_{+}^{0}(x)\Vert ^{2})/2\varepsilon \right) \mathrm{d}x\right\}
\eqno(3.10)
\end{equation*}%
for $\xi ^{+}/\xi _{-}>\mathrm{ess\sup }_{x\in X^{t}}\Vert F_{0}^{0}(x)\Vert
$ and sufficiently small $\varepsilon >0$.
\end{corollary}
Indeed, if $\mathbf{S}=\mathbf{F}\otimes \hat{1}$ and $\mathbf{F}$ satisfies
the local integrability conditions (3.8), then
\begin{equation*}
\Vert \bullet _{x\in \pmb\varkappa ^{t}}^{\leftarrow }F(\mathbf{x})\Vert
\leq \displaystyle{\prod}_{x\in \pmb\varkappa ^{t}}\Vert F(\mathbf{x})\Vert =\displaystyle{\prod}_{\nu
=0,+}^{\mu =-,0}\zeta ^{t}(\varkappa _{\nu }^{\mu })\ ,
\end{equation*}%
where $\zeta ^{t}(\varkappa _{\nu }^{\mu })=\displaystyle{\prod}_{x\in \varkappa _{\nu
}^{\mu }}^{t(x)<t}\Vert F_{\nu }^{\mu }(x)\Vert $. Hence, $T^{t}(\pmb%
\varkappa )=F_{[0,t)}(\pmb\varkappa ^{t})\otimes 1(\pmb\varkappa _{\lbrack
t})$ is relatively bounded $\Vert T^{t}\Vert (\pmb\zeta ^{t})\leq \Vert
T^{0}\Vert $ with respect to $\pmb\zeta ^{t}(x)=\left( \Vert F_{\nu }^{\mu
}(x)\Vert \right) _{\nu =0,+}^{\mu =-,0}$, $x\in X^{t}$, and $\pmb\zeta
^{t}(x)=0$, $x\in X_{[t}$. Due to (2.4) and Theorem 2 this gives the $\zeta $%
-boundedness of the operator $U^{t}=\iota (T^{t})$ with respect to $%
\zeta >\Vert F_{0}^{0}\Vert _{\infty }^{(t)}$; and the estimate (3.10) for $%
\xi ^{+}>\xi _{-}\Vert F_{0}^{0}\Vert _{\infty }^{(t)},\varepsilon
<\varepsilon (\xi ^{+},\xi _{-})$ in terms of the norms (3.8) for $F_{\nu
}^{\mu }=L_{\nu }^{\mu }+I\otimes 1_{\nu }^{\mu }$. Taking into account that
$\iota \circ N_{[0,t)}=\Lambda _{\lbrack 0,t)}\circ \iota $ and
\begin{equation*}
N_{[0,t)}\left( \mathbf{L}^{\triangleleft }\cdot T^{0}\right) (\pmb\varkappa
)=\sum_{\pmb\vartheta \subset \pmb\varkappa ^{t}}\mathbf{L}^{\triangleleft }(%
\pmb\vartheta )\cdot T^{0}\otimes 1(\pmb\varkappa \backslash \pmb\vartheta )=%
\left[ \bullet _{x\in \pmb\varkappa ^{t}}^{\leftarrow }(I(\mathbf{x})+L(%
\mathbf{x}))\cdot T^{0}\right] \otimes 1(\pmb\varkappa _{\lbrack t})\ ,
\end{equation*}%
we obtain the QS integral representation $\Lambda _{\lbrack 0,t)}(\mathbf{L}%
^{\triangleleft }\cdot T^{0}\otimes \hat{1})$ of Wick chronological product $%
\iota (T^{t})$. This process is adapted and has the QS derivative
\begin{equation*}
\mathbf{D}(x)=\Lambda _{\lbrack 0,t(x))}(\mathbf{L}(x)\cdot \mathbf{L}%
^{\triangleleft }\cdot T^{0}\otimes \hat{1})=\mathbf{L}(x)\odot U^{t(x)}\ ,
\end{equation*}%
where $(L\cdot U)_{\nu }^{\mu }=(L_{\nu }^{\mu }\otimes \hat{1})\cdot U$, $%
B_{0}^{\mu }(x)\cdot U=B_{0}^{\mu }(U\otimes I(x))$, $B_{+}^{\mu }\cdot
U=B_{+}^{\mu }U$. Hence the multiple integral $U^{t}=\Lambda _{\lbrack 0,t)}(%
\mathbf{L}^{\triangleleft }\otimes \hat{1})$ satisfies the QS equation
(1.10) with $U^{0}=\hat{I}$ as the case $\mathrm{d}U=\mathrm{d}\Lambda (%
\mathbf{L}\otimes \hat{1})U$ of (3.4).
Finally let us define the solution of the unitary evolution equation (1.10)
with $\mathbf{L}=\mathrm{e}^{-\mathrm{i}\mathbf{H}}-\mathbf{I}$, $\mathbf{H}%
^{{\star }}=\mathbf{H}$, $U^{0}=\hat{I}$ as the representation (2.1) of
chronologically ordered products
\begin{equation*}
T^{t}(\pmb\varkappa )=\bullet _{x\in \pmb\varkappa }^{\leftarrow }F^{t}(%
\mathbf{x})=\mathbf{F}^{\triangleleft }(\pmb\varkappa ^{t})\otimes 1(\pmb%
\varkappa _{\lbrack t})
\end{equation*}%
for $\mathbf{F}(x)=\exp \{-\mathrm{i}\mathbf{H}(x)\}$
\begin{equation*}
\mathbf{F}^{\triangleleft }(\pmb\varkappa )=F(\mathbf{x}_{n})\cdots F(%
\mathbf{x}_{1})\qquad \text{for}\qquad \pmb\varkappa =\sqcup
_{i=1}^{n}\mathbf{x}_{i}\ .
\end{equation*}%
Here $F^{t}(\mathbf{x})=I(\mathbf{x})=I\otimes 1(\mathbf{x})$, if $t(x)\geq
t $, $F^{t}(\mathbf{x}_{\nu }^{\mu })=F_{\nu }^{\mu }(x)$, if $t(x)<t$, $\pmb%
\varkappa =(\varkappa _{\nu }^{\mu })$ is a partition $\varkappa =\sqcup
_{\nu =0,+}^{\mu =-,0}\varkappa _{\nu }^{\mu }$ of a chain $\varkappa
=(x_{1},\dots ,x_{n})\in \mathcal{X}$ ordered by $t(x_{i-1})<t(x_{i})$ with $%
x_{i}\in \varkappa _{\nu }^{\mu }$, corresponding to the single point table $%
\mathbf{x}_{i}=(\varkappa _{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ with $%
\varkappa _{\nu }^{\mu }=x_{i}$, and $F(\mathbf{x})\cdot T(\pmb\vartheta )$
is the semitensor product,
\begin{equation*}
F(\pmb\varkappa )\cdot T(\pmb\vartheta )=(F(\pmb\varkappa )\otimes
I^{\otimes }(\vartheta _{0}^{0}\sqcup \vartheta _{+}^{0}))(T(\pmb\vartheta
)\otimes I^{\otimes }(\varkappa _{0}^{-}\sqcup \varkappa _{0}^{0})),
\end{equation*}%
which is the usual product $F(\mathbf{x})T(\pmb\vartheta )$, if $\dim
\mathcal{E}=1$. As it follows from theorem 3, the solution $U^{t}=\iota
\left( F_{[0,t)}\right) $ is unitary, if the triangular matrix-function $%
\mathbf{F}(x)=\exp \{-\mathrm{i}\mathbf{H}(x)\}$ is pseudo unitary, that is
the Hamiltonian matrix-function $\mathbf{H}=[H_{\nu }^{\mu }]$ is pseudo
Hermitian $H^{{\star }}(x)=H(x)$ for almost all $x\in X$:
\begin{equation*}
H_{0}^{0{\ast }}=H_{0}^{0},H_{+}^{0{\ast }}=H_{0}^{-},H_{0}^{-{\ast }%
}=H_{+}^{0},H_{+}^{-{\ast }}=H_{+}^{-},
\end{equation*}%
$(H_{\nu }^{\mu }=0$ for $\mu =+$ or $\nu =-)$. One can easily find the
powers $\mathbf{H}^{n}$ of the triangular matrix $\mathbf{H}$: $\mathbf{H}%
^{0}=\mathbf{I}$, $\mathbf{H}^{1}=\mathbf{H}\ $, $\mathbf{H}^{2}$ is defined
by the table
\begin{equation*}
\mathbf{H}^{2}=%
\begin{pmatrix}
H_{0}^{-}H_{0}^{0}, & H_{0}^{-}H_{+}^{0}\cr (H_{0}^{0})^{2}, &
H_{0}^{0}H_{+}^{0}%
\end{pmatrix}%
,\ \mathbf{H}^{n+2}=%
\begin{pmatrix}
H_{0}^{-}(H_{0}^{0})^{n+1}, & H_{0}^{-}(H_{0}^{0})^{n}H_{+}^{0}\cr
(H_{0}^{0})^{n+2}, & (H_{0}^{0})^{n+1}H_{+}^{0}%
\end{pmatrix}%
,\quad n=1,2,\dots
\end{equation*}%
and $\mathbf{F}=\sum_{n=0}^{\infty }(-\mathrm{i}\mathbf{H})^{n}/n!$ as
the triangular matrix $F_{\nu }^{\mu }=0,\mu >\nu ,F_{-}^{-}=1=F_{+}^{+}$,%
\begin{eqnarray*}
F_{0}^{0} &=&e^{-\mathrm{i}H_{0}^{0}},\;\;\;\;\;F_{+}^{-}=H_{0}^{-}\left[
\left( e^{-\mathrm{i}H_{0}^{0}}-I_{0}^{0}+\mathrm{i}H_{0}^{0}\right)
/H_{0}^{0}H_{0}^{0}\right] H_{+}^{0}-\mathrm{i}H_{+}^{-} \\
F_{+}^{0} &=&\left[ \left( e^{-\mathrm{i}H_{0}^{0}}-I_{0}^{0}\right)
/H_{0}^{0}\right] H_{+}^{0},\;\;\;\;\;F_{0}^{-}=H_{0}^{-}\left[ \left( e^{-%
\mathrm{i}H_{0}^{0}}-I_{0}^{0}\right) /H_{0}^{0}\right] \ .
\end{eqnarray*}%
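As a consistency check (ours): in the scalar case $H_{0}^{0}=h\,I_{0}^{0}$
with $h\rightarrow 0$ these expressions recover the lowest terms of the
exponential series,
\begin{equation*}
F_{+}^{0}=\left[ (\mathrm{e}^{-\mathrm{i}h}-1)/h\right] H_{+}^{0}\rightarrow
-\mathrm{i}H_{+}^{0},\qquad F_{+}^{-}\rightarrow -{\frac{1}{2}}%
H_{0}^{-}H_{+}^{0}-\mathrm{i}H_{+}^{-}\ ,
\end{equation*}%
in agreement with $\mathbf{F}=\mathbf{I}-\mathrm{i}\mathbf{H}-{\frac{1}{2}}%
\mathbf{H}^{2}+\dots $ and the table for $\mathbf{H}^{2}$ above.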
Representing the conjugated operators $H_{0}^{-},H_{+}^{0}$ in the form $%
H_{0}^{-}=F^{{\ast }}H_{0}^{0}+\mathrm{i}E^{{\ast }}$, $H_{+}^{0}=H_{0}^{0}F-%
\mathrm{i}E$, where $E(x),F(x)$ are uniquely defined by $F^{{\star }}E=0$,
one can obtain the following canonical decomposition of the table $(L_{\nu
}^{\mu })$ of the generating operators $L_{\nu }^{\mu }(x)=F_{\nu }^{\mu
}(x)-I\otimes 1_{\nu }^{\mu }(x)$ of the unitary QS evolution $U^{t}$:
\begin{equation*}
\begin{pmatrix}
L_{0}^{-} & L_{+}^{-}\cr L_{0}^{0} & L_{+}^{0}%
\end{pmatrix}%
=%
\begin{pmatrix}
F^{{\ast }}L_{0}^{0} & F^{{\ast }}L_{0}^{0}F\cr L_{0}^{0} & L_{0}^{0}F%
\end{pmatrix}%
+%
\begin{pmatrix}
E^{{\ast }} & {\frac{1}{2}}E^{{\ast }}E\cr0 & -E%
\end{pmatrix}%
+%
\begin{pmatrix}
0 & -\mathrm{i}H\cr0 & 0%
\end{pmatrix}%
\ ,
\end{equation*}%
\begin{equation*}
H=H_{+}^{-}-FH_{0}^{0}F^{{\ast }},\;\;\;\;\ \ \ \;L_{0}^{0}=\exp \{-\mathrm{i%
}H_{0}^{0}\}-I_{0}^{0}\ .
\end{equation*}%
Each of these three tables $\mathbf{L}_{i},\ i=1,2,3$ corresponds to a
pseudounitary triangular matrix $\mathbf{F}_{i}=\mathbf{I}+\mathbf{L}_{i}$,
satisfying the condition $\displaystyle{\prod}_{i=1}^{3}\mathbf{F}_{i}=\mathbf{I}%
+\sum_{i=1}^{3}\mathbf{L}_{i}=\mathbf{F}$ due to the orthogonality of the $%
\mathbf{L}_{i}$. The first one can be diagonalized by the pseudounitary
transformation $\mathbf{F}_{0}^{{\star }}\mathbf{L}_{1}\mathbf{F}_{0}=%
\begin{pmatrix}
0 & 0\cr L_{0}^{0} & 0%
\end{pmatrix}%
$. This defines the QS unitary evolution as the composition of three
canonical types:
1) the Poissonian-type evolution, given by a diagonal matrix-function $%
\mathbf{F}(x)$, corresponding to $H_{\nu }^{\mu }=0$ for all $(\mu ,\nu
)\not=(0,0)$, for which
\begin{equation*}
U^{t}=\iota (F_{[0,t)})=F_{0\,[0,t)}^{0},\qquad F_{0}^{0}(x)=\exp \{-%
\mathrm{i}H_{0}^{0}(x)\}
\end{equation*}%
that is $U^{t}=\int^{\oplus }U^{t}(\varkappa )\mathrm{d}\varkappa $, where $%
U^{t}(\varkappa )=F_{0}^{0}(x_{n})^{t}\cdots F_{0}^{0}(x_{1})^{t}$
for any chain $\varkappa =(x_{1},\dots ,x_{n})\in \mathcal{X}$, where $%
F_{0}^{0}(x)^{t}=F_{0}^{0}(x)$, if $t(x)<t$, otherwise $F_{0}^{0}(x)^{t}=I%
\otimes 1_{0}^{0}(x)$,
2) the quantum Brownian evolution, corresponding to $H_{0}^{0}=0=H_{+}^{-}$
with $\mathrm{i}H_{+}^{0}=E=\mathrm{i}H_{0}^{-{\ast }}$, and
3) the Lebesgue-type evolution, corresponding to $H_{\nu }^{\mu }=0$ for all
$(\mu ,\nu )\not=(-,+)$, for which
\begin{equation*}
U^{t}=\iota (F_{[0,t)})=\hat{1}\otimes \int_{\mathcal{X}^{t}}(-\mathrm{i}%
)^{|\varkappa |}\overleftarrow{\displaystyle{\prod} }_{x\in \varkappa }H_{+}^{-}(x)\mathrm{%
d}\varkappa ={\overleftarrow{\exp }}\left\{ -\mathrm{i}%
\int_{X^{t}}H_{+}^{-}(x)\mathrm{d}x\right\} \otimes \hat{1}\ ,
\end{equation*}%
where $\overleftarrow{\displaystyle{\prod} }_{x\in \varkappa }H(x)=H(x_{n})\cdots H(x_{1})$
is the usual chronological product of operators $H_{+}^{-}(x)$ in $\mathcal{H%
}$ defined for any chain $\varkappa =(x_{1},\dots ,x_{n})$ by $%
t(x_{i-1})<t(x_{i})$. The sufficient conditions for the existence of the
operators $U^{t}$ as the representations $\iota $ of chronological products
of the elements $F_{\nu }^{\mu }(x)$ of $\exp \{-\mathrm{i}\mathbf{H}(x)\}$
are the local QS integrability conditions%
\begin{equation*}
\Vert H_{0}^{0}\Vert _{\infty }^{t}<\infty ,\Vert H_{+}^{0}\Vert
_{2}^{t}=\Vert H_{0}^{-}\Vert _{2}^{t}<\infty ,\Vert H_{+}^{-}\Vert
_{1}^{t}<\infty .
\end{equation*}%
These conditions define the QS integral $\sum \Lambda _{\nu }^{\mu }\left(
t,H_{\nu }^{\mu }\right) =\Lambda ^{t}(\mathbf{H})$ as a $(\xi ^{+},\xi
_{-}) $-continuous operator for any $\xi ^{+}>1>\xi _{-}$ and the QS time
ordered exponential [14] $U^{t}={\overleftarrow{\exp }}\{-\mathrm{i}\Lambda
^{t}(\mathbf{H})\}$ as
\begin{equation*}
U^{t}=\Gamma _{\lbrack 0,t)}(\mathbf{L})\equiv \iota \left( F_{[0,t)}\right)
\end{equation*}%
even if $\mathbf{H}(x)$ is not pseudo Hermitian.
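In particular (our remark, taking $X=\mathbb{R}_{+}$ with $t(x)=x$): for a
constant Hamiltonian of the Lebesgue type, $H_{+}^{-}(x)=H$ and $H_{\nu
}^{\mu }=0$ otherwise, the time-ordered exponential collapses to the
ordinary Schr\"{o}dinger group,
\begin{equation*}
U^{t}={\overleftarrow{\exp }}\left\{ -\mathrm{i}\int_{0}^{t}H\,\mathrm{d}%
x\right\} \otimes \hat{1}=\mathrm{e}^{-\mathrm{i}Ht}\otimes \hat{1}\ ,
\end{equation*}%
so that the usual deterministic unitary dynamics is contained in the QS
scheme as the commutative degenerate case.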
\section{Non-Markovian QS processes and Langevin equations}
Let $\mathcal{A}\subseteq \mathcal{B}(\mathcal{H})$ be a unital $\ast $%
-algebra of operators $A\in \mathcal{A}$, acting on a Hilbert space $%
\mathcal{H}$, and $j^{t}:\mathcal{A}\rightarrow \mathcal{B}(\mathcal{G})$ be
a family of unital $\ast $-homomorphisms, representing $\mathcal{A}$ on $%
\mathcal{G}=\mathcal{H}\otimes \mathcal{F}$ as a QS process in the sense
[12,13]:
\begin{equation*}
j^{t}(A^{{\ast }}A)=j^{t}(A)^{{\ast }}j^{t}(A)\ ,\ j^{t}(I)=\hat{I}\ .
\end{equation*}%
We shall assume that each process $A^{t}=j^{t}(A)$ has a QS differential $%
\mathrm{d}j(A)=\mathrm{d}\Lambda (\pmb\partial (A))$ in the sense
\begin{equation*}
j^{t}(A)=j^{0}(A)+\sum \Lambda _{\nu }^{\mu }(t,\partial _{\nu }^{\mu }(A))\
,\eqno(4.1)
\end{equation*}%
where $\pmb\partial =(\partial _{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ is a
family of linear maps $\partial _{\nu }^{\mu }(x):\mathcal{A}\rightarrow
\mathcal{B}(\mathcal{G})$, depending on $x\in X$ in such a way that the
table-function $\mathbf{D}(x)=\pmb\partial (x,A)$ is QS integrable for every
$A\in \mathcal{A}$. The QS derivative $\pmb\partial $ of the process $j$ was
introduced by Hudson [14] in the Markovian case $\pmb\partial =j\circ \pmb%
\lambda $, corresponding to the assumption $\partial _{\nu }^{\mu }(x,%
\mathcal{A})\subseteq \;j^{t(x)}(\mathcal{A})$ for all $\mu ,\nu $ and $x$.
Using the adapted QS It\^{o} formula he obtained the cohomology conditions
for the maps $\lambda _{\nu }^{\mu }$: $\mathcal{A}\rightarrow \mathcal{A}$,
which are necessary and sufficient in the constant case $\pmb\lambda (x)=\pmb%
\lambda $ for the homomorphism property of $j^{t}$. These conditions can be
written simply as unital $\star $-homomorphism property for $\pmb\varphi =%
\pmb\lambda +\mathbf{j}\ ,\ \mathbf{j}(A)=A\otimes \mathbf{1}$
\begin{equation*}
\pmb\varphi (x,A^{{\ast }}A)=\pmb\varphi (x,A)^{{\star }}\pmb\varphi (x,A)\
,\ \pmb\varphi (x,I)=\hat{I}
\end{equation*}%
in terms of the linear maps $\pmb\varphi (x):A\mapsto \left[ \varphi
_{\nu }^{\mu }(x,A)\right] $ into the triangular block-matrices $\mathbf{A}%
=[A_{\nu }^{\mu }]=\pmb\varphi (A)$. In the scalar case $\mathcal{E}(x)=%
\mathbb{C}$
\begin{equation*}
\varphi _{\nu }^{\mu }(A)=\lambda _{\nu }^{\mu }(A),\mu <\nu ;\;\varphi
_{\nu }^{\mu }(A)=\lambda _{\nu }^{\mu }(A)+A,\mu =\nu ;\;\varphi _{\nu
}^{\mu }(A)=0,\mu >\nu ,
\end{equation*}%
is defined as the sum of the map $\pmb\lambda =[\lambda _{\nu }^{\mu }]$
into the triangular matrices: $\lambda _{\nu }^{\mu }(A)=0$ if $\mu =+$ or $%
\nu =-$ and the diagonal map $\mathbf{j}=[j_{\nu }^{\mu }]$, $j_{\nu }^{\mu
}(A)=0$, $\mu \not=\nu $, $j_{\nu }^{\mu }(A)=A$, if $\mu =\nu $. As an
example one can consider the spatial $\star $-homomorphism%
\begin{equation*}
\pmb\varphi (x,A)=\mathbf{F}^{{\star }}(x)(A\otimes \mathbf{1}(x))\mathbf{F}%
(x),
\end{equation*}%
where $\mathbf{F}(x)=\left[ F_{\nu }^{\mu }(x)\right] $ is a pseudounitary
triangular matrix $F_{\nu }^{\mu }(x)=0,\mu >\nu $ with $%
F_{-}^{-}=I=F_{+}^{+}$.
We shall prove, as a consequence of Theorem 4, that the
pseudo-homomorphism property of a locally QS integrable function $\pmb%
\varphi =\{\pmb\varphi (x)\}$ is also sufficient for the uniqueness and
homomorphism property (4.1) of the solution $j^{t}$ of QS Langevin equation
(4.2) with a given initial $\star $-homomorphism $j^{0}$ in the
nonstationary, non-Markovian, and even nonadapted case.
Before doing this let us describe a decomposable operator representation $%
\pmb{\mathcal A}=\int^{\oplus }\pmb{\mathcal A}(\varkappa )\mathrm{d}%
\varkappa $ of the unital $\star $-algebra $\pmb{\mathcal A}$ of relatively
bounded $\mathcal{A}$-valued operator-functions $T(\pmb\varkappa )$ in a
pseudo Hilbert space $\pmb{\mathcal G}=\mathcal{H}\otimes \pmb{\mathcal F}$.
Here $\pmb{\mathcal F}$ is the pseudo Fock space defined in [8,10] as usual
Fock space over the space $\pmb{\mathcal K}=L^{1}(X)\oplus \mathcal{K}\oplus
L^{\infty }(X)$ of $L^{p}$-integrable vector-functions $\mathbf{k}(x)=[k^{\mu
}(x)|\;\mu =-,0,+]$ with the pseudoscalar product
\begin{equation*}
(\mathbf{k}|\mathbf{k})=\langle k^{-}|k^{+}\rangle +\Vert k^{0}\Vert
^{2}+\langle k^{+}|k^{-}\rangle =\langle \mathbf{k}|\mathbf{gk}\rangle ,
\end{equation*}%
$k^{-}\in L^{1}(X),k^{+}\in L^{\infty }(X),k^{0}\in \mathcal{K}$. In the
general case, when $\mathcal{K}$ is the $L^{2}$-integral $\mathcal{K}%
=\int^{\oplus }%
\mathcal{E}(x)\mathrm{d}x$ of Euclidean (Hilbert) spaces $\mathcal{E}%
(x),x\in X$ with%
\begin{equation*}
k\in \mathcal{K}\Longleftrightarrow k(x)\in \mathcal{E}(x),\int \Vert
k(x)\Vert ^{2}\mathrm{d}x<\infty ,
\end{equation*}%
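Written out explicitly (our explication), the indefinite metric $\mathbf{g}$
defining the pseudoscalar product above is the flip matrix
\begin{equation*}
\mathbf{g}=\left[
\begin{matrix}
0 & 0 & 1\cr0 & I & 0\cr1 & 0 & 0%
\end{matrix}%
\right] ,\qquad (\mathbf{k}|\mathbf{k})=2\,\mathrm{Re}\,\langle
k^{-}|k^{+}\rangle +\Vert k^{0}\Vert ^{2}\ ,
\end{equation*}%
which is indefinite on the subspace $k^{0}=0$; this is the sense in which $%
\pmb{\mathcal K}$, and the Fock space $\pmb{\mathcal F}$ over it, are only
pseudo Hilbert.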
$\pmb{\mathcal F}$ consists of all tensor-functions $k(\varkappa
^{-},\varkappa ^{0},\varkappa ^{+})\in \mathcal{E}^{\otimes }(\varkappa
^{0})=\otimes _{x\in \varkappa ^{0}}\mathcal{E}(x)$ of three chains $%
\varkappa ^{\mu }\in \mathcal{X}$, $\mu =-,0,+$, integrable in the sense%
\begin{equation*}
\Vert \mathbf{k}\Vert =\sup\limits_{\varkappa ^{+}}[\int (\int \Vert
k(\varkappa ^{-},\varkappa ^{0},\varkappa ^{+})\Vert \mathrm{d}\varkappa
^{-})^{2}\mathrm{d}\varkappa ^{0}]^{1/2}<\infty
\end{equation*}%
with the pseudoscalar product $(\mathbf{k}|\mathbf{k})=\langle \mathbf{k}|%
\mathbf{g}^{\otimes }\mathbf{k}\rangle $,
\begin{equation*}
(\mathbf{k}|\mathbf{k})=\iiint \langle k(\varkappa ^{-},\varkappa
^{0},\varkappa ^{+})|k(\varkappa ^{+},\varkappa ^{0},\varkappa ^{-})\rangle
\mathrm{d}\varkappa ^{-}\mathrm{d}\varkappa ^{0}\mathrm{d}\varkappa ^{+}.%
\eqno(4.2)
\end{equation*}%
Taking into account that $\bigcap \varkappa ^{\mu }=\emptyset $ almost
everywhere for the continuous measure $\mathrm{d}\varkappa $ on $\mathcal{X}$%
, and that for any $a\in \mathcal{H}\otimes \pmb{\mathcal F}$
\begin{equation*}
(a|a)=\int \sum_{\sqcup \varkappa ^{\mu }=\varkappa }\langle a(\varkappa
^{-},\varkappa ^{0},\varkappa ^{+})|a(\varkappa ^{+},\varkappa
^{0},\varkappa ^{-})\rangle \mathrm{d}\varkappa \equiv (\mathbf{a}|\mathbf{a}%
)
\end{equation*}%
one can consider the space $\mathbf{\mathcal{G}}$ as a pseudo Hilbert
integral $\int^{\oplus }\mathbf{\mathcal{G}}(\varkappa )\mathrm{d}\varkappa $
of tensor-functions $\mathbf{a}(\varkappa )\in \mathcal{H}\otimes %
\pmb{\mathcal E}^{\otimes }(\varkappa )=\mathbf{\mathcal{G}}(\varkappa ),%
\pmb{\mathcal E}(x)=\mathbb{C}\oplus \mathcal{E}(x)\oplus \mathbb{C}$ with
values in direct sums $\mathbf{a}(\varkappa )=\oplus _{\sqcup \varkappa
^{\mu }=\varkappa }a(\varkappa ^{-},\varkappa ^{0},\varkappa ^{+})$ over all
the partitions $\varkappa =\varkappa ^{-}\sqcup \varkappa ^{0}\sqcup
\varkappa ^{+}$ of $\varkappa \in \mathcal{X}$. The operator-valued
functions $T(\pmb\varkappa )$ of $\pmb\varkappa =(\varkappa _{\nu }^{\mu })$
are uniquely defined by the decomposable operator $\mathbf{T}=\int^{\oplus }%
\mathbf{T}(\varkappa )\mathrm{d}\varkappa $ in $\pmb{\mathcal G}$ acting as
\begin{equation*}
\lbrack \mathbf{T}\mathbf{a}](\varkappa )=\sum_{\sqcup \varkappa ^{\mu
}=\varkappa }\oplus \lbrack Ta](\varkappa ^{-},\varkappa ^{0},\varkappa
^{+})=\mathbf{T}(\varkappa )\mathbf{a}(\varkappa ),
\end{equation*}%
\begin{equation*}
\lbrack Ta](\varkappa ^{-},\varkappa ^{0},\varkappa ^{+})=\sum_{\sqcup _{\nu
\geq \mu }\varkappa _{\nu }^{\mu }=\varkappa ^{\mu }}^{\mu =-,0,+}T%
\begin{pmatrix}
\varkappa _{0}^{-} & \varkappa _{+}^{-}\cr\varkappa _{0}^{0} & \varkappa
_{+}^{0}%
\end{pmatrix}%
a(\varkappa _{-}^{-},\varkappa _{0}^{-}\sqcup \varkappa _{0}^{0},\varkappa
_{+}^{-}\sqcup \varkappa _{+}^{0}\sqcup \varkappa _{+}^{+})\eqno(4.3)
\end{equation*}%
due to $\sqcup \varkappa ^{\mu }=\sqcup \varkappa _{\nu }$ for $\varkappa
^{\mu }=\sqcup _{\nu \geq \mu }\varkappa _{\nu }^{\mu }$, $\varkappa _{\nu
}=\sqcup _{\mu \leq \nu }\varkappa _{\nu }^{\mu }$. It is easy to check that
the pseudo conjugated operator $\mathbf{T}^{{\star }}=\int^{\oplus }\mathbf{T%
}(\varkappa )^{{\star }}\mathrm{d}\varkappa $ with respect to the pseudo
scalar product (4.2) is also decomposable: $\mathbf{T}^{{\star }}(\varkappa
)=\mathbf{T}(\varkappa )^{{\star }}$, and is defined as in (4.3), by $T^{{%
\star }}(\pmb\varkappa )$ and the product $(\mathbf{T}^{{\star }}\mathbf{T}%
)(\varkappa )=\mathbf{T}^{{\star }}(\varkappa )\mathbf{T}(\varkappa )$
corresponds to the product (2.5). Moreover, the Fock representation (2.1) of
the operator $\star $-algebra $\int^{\oplus }\pmb{\mathcal A}(\varkappa )%
\mathrm{d}\varkappa $ can be described as a spatial transformation $\iota
(T)=J^{{\star }}\mathbf{T}J$, where $J$ is a pseudoisometric operator $%
(Ja|Ja)=\Vert a\Vert ^{2}$, with $J^{{\star }}\colon \langle J^{{\star }}%
\mathbf{a}|a\rangle =(\mathbf{a}|Ja)$, acting as
\begin{equation*}
\lbrack Ja](\varkappa ^{-},\varkappa ^{0},\varkappa ^{+})=\delta _{\emptyset
}(\varkappa ^{-})a(\varkappa ^{0}),\;\;\;\;\;[J^{{\star }}\mathbf{a}%
](\varkappa )=\int a(\varkappa ^{-},\varkappa ,\emptyset )\mathrm{d}%
\varkappa ^{-}\ ,
\end{equation*}%
($\delta _{\emptyset }$ means the vacuum function $\delta _{\emptyset
}(\varkappa )=0$, if $\varkappa \not=\emptyset ,\delta _{\emptyset
}(\emptyset )=1$). One can consider $J^{{\star }}\mathbf{T}J$ as a weak
limit $t\rightarrow \infty $ of the operators $J_{[0,t)}^{{\star }}\mathbf{T}%
J_{[0,t)}$, well defined on $\mathcal{G}$ as $J_{[0,t)}=\int_{\mathcal{X}%
^{t}}^{\oplus }J(\varkappa )\mathrm{d}\varkappa $ due to
\begin{equation*}
\Vert J_{[0,t)}a\Vert ^{2}=\iiint_{\varkappa ^{\mu }\in \lbrack 0,t)}\delta
_{\emptyset }(\varkappa ^{-})\Vert a(\varkappa ^{0})\Vert ^{2}\mathrm{d}%
\varkappa ^{-}\mathrm{d}\varkappa ^{0}\mathrm{d}\varkappa ^{+}\leq \;\mathrm{%
e}^{t}\Vert a\Vert ^{2},
\end{equation*}%
and to prove directly the property
\begin{equation*}
J^{{\star }}\mathbf{T}^{{\star }}JJ^{{\star }}\mathbf{T}J=J^{{\star }}%
\mathbf{T}^{{\star }}\mathbf{T}J
\end{equation*}%
corresponding to the multiplicative property of $\iota $.
Let $U^{t}=J^{{\star }}\mathbf{T}^{t}J$ be the solution of the QS evolution
equation (3.4) with $\mathbf{S}=\mathbf{J}^{{\star }}\mathbf{FJ}$, where $%
\mathbf{J}=J\otimes \mathbf{1}(x),1_{\nu }^{\mu }=0$, if $\mu \not=\nu $, $%
1_{-}^{-}=1=1_{+}^{+}$, $1_{0}^{0}(x)=I(x)$ is the identity operator in $%
\mathcal{E}(x)$, and let $\pmb\tau ^{t}\colon \pmb{\mathcal A}\rightarrow %
\pmb{\mathcal A},\upsilon ^{t}\colon \mathcal{B}\rightarrow \mathcal{B}$ be
the corresponding transformations $\pmb\tau (\mathbf{A})=\mathbf{T}^{{\star }%
}\mathbf{AT}$, $\upsilon (B)=U^{{\star }}BU$ of the algebras of relatively
bounded operators respectively in $\pmb{\mathcal
G}$ and ${\mathcal{G}}$. Then one can obtain, denoting $\mathbf{E}=JJ^{{%
\star }}$,%
\begin{equation*}
J^{{\star }}(\pmb\tau ^{t}(A))J=J^{{\star }}\mathbf{T}^{t\star }\mathbf{AT}%
^{t}J=J^{{\star }}\mathbf{T}^{t\star }\mathbf{EAE}\mathbf{T}^{t}J=\upsilon
^{t}(J^{{\star }}\mathbf{A}J),
\end{equation*}%
that is the QS process $\iota ^{t}=\upsilon ^{t}\circ \iota $ over the $%
\star $-algebra $\pmb{\mathcal A}$ is the composition $\iota ^{t}=\iota
\circ \pmb\tau ^{t}$ of the representation $\iota $ and $\pmb\tau
^{t}=\int^{\oplus }\pmb\tau ^{t}(\varkappa )\mathrm{d}\varkappa $, where $%
\pmb\tau (\varkappa ,\mathbf{A})=\mathbf{T}(\varkappa )^{{\star }}\mathbf{AT}%
(\varkappa )$ is defined due to (3.6) as chronological compositions
\begin{equation*}
\phi _{\lbrack 0,t)}(\varkappa ,\mathbf{A})=\mathbf{F}_{t_{1}}^{{\star }%
}(\varkappa )\cdots \mathbf{F}_{t_{m}}^{{\star }}(\varkappa )\mathbf{A}%
\mathbf{F}_{t_{m}}(\varkappa )\cdots \mathbf{F}_{t_{1}}(\varkappa )=\left[
\circ _{x\in \varkappa ^{t}}^{\rightarrow }\pmb\phi _{t(x)}(\varkappa )%
\right] (\mathbf{A})
\end{equation*}%
of the maps $\pmb\phi _{t(x)}(\varkappa \sqcup x)=\pmb\phi (x,\varkappa )$,
and $\pmb\tau ^{0}(\varkappa ,A)=\mathbf{T}^{0}(\varkappa )^{{\star }}%
\mathbf{A}\mathbf{T}^{0}(\varkappa )$: $\pmb\tau (\varkappa )=\pmb\tau
^{0}(\varkappa )\circ \pmb\phi _{\lbrack 0,t)}(\varkappa )$, where
\begin{equation*}
\pmb\phi (\dot{\mathbf{A}})=\int^{\oplus }\pmb\phi (\varkappa ,\dot{\mathbf{A%
}}(\varkappa ))\mathrm{d}\varkappa =\mathbf{F}^{{\star }}\dot{\mathbf{A}}%
\mathbf{F},\dot{\mathbf{A}}\in \pmb{\dot{\mathcal A}}=\int^{\oplus
}\int^{\oplus }\pmb{\mathcal A}(\varkappa \sqcup x)\mathrm{d}\varkappa
\mathrm{d}x.
\end{equation*}
Moreover, if a $\mathcal{B}$-valued process $B^{t}=\iota (\mathbf{A}^{t})$
has a QS differential $\mathrm{d}B=\mathrm{d}\Lambda (\mathbf{D})$, then the
transformed process $\hat{B}^{t}=\upsilon ^{t}(B^{t})$ satisfies the QS
equation
\begin{equation*}
\hat{B}^{t}=\hat{B}^{0}+\Lambda ^{t}(\hat{\pmb\sigma }(\mathbf{G})-\hat{%
\mathbf{B}}),\mathbf{G}=\mathbf{B}+\mathbf{D}\ ,\eqno(4.4)
\end{equation*}%
where $\hat{\mathbf{B}}(x)=\mathbf{U}^{{\star }}(x)\mathbf{B}(x)\mathbf{U}%
(x)\equiv \pmb\upsilon (x,\mathbf{B}(x))$, $\mathbf{B}(x)=\mathbf{J}^{{\star
}}\dot{\mathbf{A}}^{t(x)}(x)\mathbf{J}$, $\pmb\sigma (\mathbf{G})=\mathbf{S}%
^{{\star }}\mathbf{GS}$, as follows directly from (3.4) and the main
formula (2.9):
\begin{equation*}
\mathrm{d}\left( U^{{\star }}BU\right) =\mathrm{d}\Lambda \left( \mathbf{U}^{%
{\star }}\mathbf{S}^{{\star }}(\mathbf{B}+\mathbf{D})\mathbf{SU}-\mathbf{U}^{%
{\star }}\mathbf{BU}\right) \ .
\end{equation*}%
In the particular case $\mathbf{D}=0$ this gives the QS Langevin (non-adapted)
equation for the QS process $\iota ^{t}\colon \pmb{\mathcal A}\rightarrow
\mathcal{B}$, written in differential form as
\begin{equation*}
\mathrm{d}\iota ^{t}(\mathbf{A})=\mathrm{d}\Lambda ^{t}(\pmb\iota \circ \pmb%
\phi (\dot{\mathbf{A}})-\pmb\iota (\dot{\mathbf{A}}))=\mathrm{d}\Lambda ^{t}(%
\pmb\iota \circ \pmb\lambda (\dot{\mathbf{A}}))\ ,\eqno(4.5)
\end{equation*}%
where $\dot{\mathbf{A}}(x)=\int^{\oplus }\mathbf{A}(x\cup \varkappa )\mathrm{%
d}\varkappa \in \dot{\pmb{\mathcal A}}(x),$%
\begin{equation*}
\pmb\lambda (\dot{\mathbf{A}})=\pmb\phi (\dot{\mathbf{A}})-\dot{\mathbf{A}},%
\pmb\iota (x,\dot{\mathbf{A}})=\pmb\upsilon (x,\mathbf{J}^{{\star }}\dot{%
\mathbf{A}}\mathbf{J}),
\end{equation*}%
and $(\pmb\iota \circ \pmb\phi )(\dot{\mathbf{A}})=\pmb\upsilon \circ \pmb%
\sigma (\mathbf{J}^{{\star }}\dot{\mathbf{A}}\mathbf{J})$ due to
\begin{equation*}
\mathbf{J}^{{\star }}\pmb\phi (\dot{\mathbf{A}})\mathbf{J}=\mathbf{J}^{{%
\star }}\mathbf{F}^{{\star }}\dot{\mathbf{A}}\mathbf{FJ}=\mathbf{J}^{{\star }%
}\mathbf{F}^{{\star }}\mathbf{E}\dot{\mathbf{A}}\mathbf{EFJ}=\mathbf{S}^{{%
\ast }}\mathbf{J}^{{\ast }}\dot{\mathbf{A}}^{{\ast }}\mathbf{J}\mathbf{S}=%
\pmb\sigma (\mathbf{J}^{{\star }}\dot{\mathbf{A}}\mathbf{J})\ .
\end{equation*}%
The restriction of the equation (4.7) to the $\star $-subalgebra $\mathcal{A}%
\otimes \mathbf{1}^{\otimes }\subset \pmb{\mathcal A}$, $\mathbf{1}^{\otimes
}(\varkappa )=\otimes _{x\in \varkappa }\mathbf{1}(x)$, gives the
(non-Markovian) Langevin equation (4.2) for $j^{t}=\iota ^{t}\circ \mathbf{j}$, $%
\mathbf{j}(A)=A\otimes \mathbf{1}^{\otimes }$, with the QS derivative
\begin{equation*}
\pmb\partial =\pmb\iota \circ \pmb\lambda \circ \mathbf{j},\;\;\;\;\;\mathbf{%
j}(x,A)=A\otimes \mathbf{1}(x)\otimes \mathbf{1}^{\otimes }
\end{equation*}%
over $\mathcal{A}\subset \mathcal{B}(\mathcal{H})$.
Let us find the solution of the general Langevin QS equation (4.5) with a
nonspatial map $\pmb\varphi $. It is given by the following theorem, which
is an analog of Theorem 3 for the maps $\pmb\lambda ,\pmb\phi ,\pmb\tau $
instead of the corresponding operators $\mathbf{L},\mathbf{F},\mathbf{T}$.
\begin{theorem}
The QS equation (4.5), written as $\iota ^{t}=\iota ^{0}+\Lambda ^{t}\circ %
\pmb\delta $ for all $\mathbf{A}\in \pmb{\mathcal A}$ with
\begin{equation*}
\delta _{\nu }^{\mu }(x,\mathbf{A})=\iota _{\nu }^{\mu }(x,\pmb\lambda (x,%
\dot{\mathbf{A}})),\iota ^{0}(\mathbf{A})=J^{{\star }}\pmb\tau ^{0}(\mathbf{A%
})J
\end{equation*}%
is defined by linear decomposable maps $\pmb\lambda (x)\colon \dot{%
\pmb{\mathcal A}}(x)\rightarrow \dot{\pmb{\mathcal A}}(x)$ with $\lambda
_{\nu }^{\mu }(x,\mathbf{A})=0$, if $\mu =+$ or $\nu =-$ and $\pmb\tau
^{0}\colon \pmb{\mathcal A}\rightarrow \pmb{\mathcal A}$ as the
representation
\begin{equation*}
\iota ^{t}(\mathbf{A})=J^{{\star }}\pmb\tau ^{t}(\mathbf{A})J\ ,\ \pmb\iota
(x,\dot{\mathbf{A}}(x))=\mathbf{J}^{{\star }}\dot{\pmb\tau }^{t(x)}(x,\dot{%
\mathbf{A}}(x))\mathbf{J}
\end{equation*}%
of the recurrences
\begin{equation*}
\pmb\tau ^{t_{+}}(\varkappa )=\pmb\tau ^{t}(\varkappa )\circ \pmb\phi
_{t(x)}(\varkappa ),\quad x\in \varkappa \in \mathcal{X}\ ,\eqno(4.6)
\end{equation*}%
$t\in (t_{-}(\varkappa ),t(x)]$, $t_{+}\in (t(x),t_{+}(x)]$, where $t_{\pm
}=t_{m\pm 1}$ for a chain $\varkappa =(x_{1},\dots ,x_{m},\dots )$, $x=x_{m}$%
, $t_{m}=t(x_{m})$, and
\begin{equation*}
\pmb\phi _{t(x)}(\varkappa \sqcup x,\mathbf{A})=\pmb\lambda (x,\varkappa ,%
\mathbf{A})+\mathbf{A}\equiv \pmb\phi (x,\varkappa ,\mathbf{A}),\mathbf{A}%
\in \pmb{\mathcal A}(x\sqcup \varkappa )\ .
\end{equation*}%
The recurrence (4.6) with the initial condition $\pmb\tau ^{t}(\varkappa )=\pmb%
\tau ^{0}(\varkappa )$ for all $t\in \lbrack 0,t_{1}]$ has the unique
solution
\begin{equation*}
\pmb\tau ^{t}(\varkappa )=\pmb\tau ^{0}(\varkappa )\circ \pmb\phi _{\lbrack
0,t)}(\varkappa )\ ,\ \pmb\phi _{\lbrack 0,t)}=\circ _{0\leq
s<t}^{\rightarrow }\pmb\phi _{s}\eqno(4.7)
\end{equation*}%
defined for every $\varkappa =(x_{1},\dots ,x_{m},\dots ),t\in
(t_{m-1},t_{m}]$ by the chronological composition $\circ _{x\in \varkappa
^{t}}^{\rightarrow }\pmb\phi _{t(x)}=\pmb\phi _{t_{1}}\circ \dots \circ \pmb%
\phi _{t_{m-1}}$ of $\pmb\phi _{t(x)}(\varkappa )=\pmb\phi (x,\varkappa
\backslash x)$ for $x\in \varkappa ^{t}=\{x\in \varkappa |t(x)<t\}$, and $\pmb%
\phi _{s}(\varkappa )=\mathbf{i}(\varkappa )$ if $s\notin t(\varkappa )$,
where $\mathbf{i}(\varkappa )$ is the identity map $\pmb{\mathcal A}(\varkappa
)\rightarrow \pmb{\mathcal A}(\varkappa )$. The solution $\{\iota ^{s}\}$ of
(4.5) is Hermitian, $\iota ^{s}(\mathbf{A}^{{\star }})=\iota ^{s}(\mathbf{A}%
)^{{\star }}$ up to a $t>0$, if $\pmb\tau ^{0}$ and $\pmb\phi (x)$ are
pseudo-Hermitian:
\begin{equation*}
\pmb\tau ^{0}(\mathbf{A}^{{\star }})=\pmb\tau ^{0}(\mathbf{A})^{{\star }},%
\pmb\phi (x,\dot{\mathbf{A}}^{{\star }}(x))=\pmb\phi (x,\dot{\mathbf{A}}%
(x))^{{\star }},x\in X^{t},
\end{equation*}%
and is (unital, faithful) QS-process, representing $\pmb{\mathcal A}$, if $%
\pmb\tau ^{0}\colon \pmb{\mathcal A}\rightarrow \pmb{\mathcal A}$ and $\pmb%
\phi (x)\colon \dot{\pmb{\mathcal A}}\rightarrow \dot{\pmb{\mathcal A}}(x)$
are (unital) $\star $-endomorphisms (automorphisms) of the algebras $%
\pmb{\mathcal A}$ and $\dot{\pmb{\mathcal A}}(x)$ for almost all $x\in X^{t}$%
.
\end{theorem}
\begin{proof}
We look for the solution of equation (4.5) in the form of the representation $%
\iota ^{t}(\mathbf{A})=J^{{\star }}\mathbf{A}^{t}J$ of a process $\mathbf{A}%
^{t}=\pmb\tau ^{t}(\mathbf{A})$ transforming the $\star $-algebra $%
\pmb{\mathcal A}$. From the definition of $\pmb\iota (x)$ and $\dot{\mathbf{A}}%
(x,\varkappa )=\mathbf{A}(\varkappa \sqcup x)$ we obtain
\begin{equation*}
\pmb\iota (x,\dot{\mathbf{A}}(x))=\mathbf{J}^{{\star }}\hat{\mathbf{A}}(x)%
\mathbf{J},\hat{\mathbf{A}}(x)=\dot{\pmb\tau }^{t(x)}(x,\dot{\mathbf{A}}(x))=%
\dot{\mathbf{A}}^{t(x)}(x),
\end{equation*}%
and $\Lambda ^{t}(\pmb\delta (A))=\Lambda ^{t}(\mathbf{J}^{{\star }}\hat{%
\mathbf{L}}\mathbf{J})=J^{{\star }}N^{t}(\hat{\mathbf{L}})J$, where $\hat{%
\mathbf{L}}(x)=\dot{\pmb\tau }^{t(x)}(x,\dot{\mathbf{L}}(x)),\dot{\mathbf{L}}%
(x)=\pmb\lambda (x,\dot{\mathbf{A}}(x))$. This gives the equation (4.6) in
the integral form as the representation $J^{{\star }}(\dot{\mathbf{A}}%
^{0}+N^{t}(\hat{\mathbf{L}}))J=J^{{\star }}\mathbf{A}^{t}J$ of the equations
\begin{align*}
\pmb\tau ^{0}(\varkappa ,\mathbf{A}(\varkappa ))& +\sum_{x\in \varkappa ^{t}}%
\dot{\pmb\tau }^{t(x)}(x,\varkappa \backslash x,\pmb\lambda (x,\varkappa
\backslash x,\dot{\mathbf{A}}(x,\varkappa \backslash x)))= \\
\pmb\tau ^{0}(\varkappa ,\mathbf{A}(\varkappa ))& +\sum_{x\in \varkappa ^{t}}%
\pmb\tau ^{t(x)}(\varkappa ,\pmb\lambda _{t(x)}(\varkappa ,\mathbf{A}%
(\varkappa )))=\pmb\tau ^{t}(\varkappa ,\mathbf{A}(\varkappa ))
\end{align*}%
where $\pmb\lambda _{t(x)}(\varkappa )=\pmb\lambda (x,\varkappa \backslash
x)$ for an $x\in \varkappa $. Denoting $\pmb\lambda _{t}(\mathbf{A})+\mathbf{%
A}$ as $\pmb\phi _{t}(\mathbf{A})$, we obtain the recurrence (4.6) for $\pmb%
\tau ^{t}(\varkappa )=\pmb\tau _{m}(\varkappa )$, $m=|\varkappa ^{t}|$,
assuming the linearity of the maps $\pmb\tau ^{t}$:
\begin{equation*}
\pmb\tau _{m}(\varkappa )=\pmb\tau _{0}(\varkappa )+\sum_{k=1}^{m}(\pmb\tau
_{k-1}(\varkappa )\circ \pmb\phi _{t_{k}}(\varkappa )-\pmb\tau
_{k-1}(\varkappa ))=\pmb\tau _{m-1}(\varkappa )\circ \pmb\phi
_{t_{m}}(\varkappa ).
\end{equation*}%
This recurrence has the unique solution (4.7), which is linear as a
composition of the linear maps $\pmb\tau ^{0}$ and $\pmb\phi _{t}$; this
proves the uniqueness of the solution $\iota ^{t}=\iota \circ \pmb\tau ^{t}$
of the equation (4.5).
If the maps $\pmb\tau ^{0}(\varkappa )$ and $\pmb\phi (x,\varkappa )=\pmb%
\phi _{t(x)}(\varkappa \sqcup x)$ are pseudo-Hermitian, then the composition
$\pmb\tau ^{t}(\varkappa )$ is also pseudo-Hermitian, and if they satisfy
the (unital) $\star $-endomorphism (automorphism) property
\begin{align*}
\pmb\tau ^{0}(\mathbf{A}^{{\star }}\mathbf{A})& =\pmb\tau ^{0}(\mathbf{A})^{{%
\star }}\pmb\tau ^{0}(\mathbf{A}),\pmb\phi (x,\dot{\mathbf{A}}^{{\star }}%
\dot{\mathbf{A}})=\pmb\phi (x,\dot{\mathbf{A}})^{{\star }}\pmb\phi (x,\dot{%
\mathbf{A}}) \\
(\pmb\tau ^{0}(\mathbf{A}^{-1})& =\pmb\tau ^{0}(\mathbf{A})^{-1},\ \pmb\phi
(x,\dot{A}^{-1})=\pmb\phi (x,\dot{A})^{-1})
\end{align*}%
for $x\in X^{t}$, then the compositions (4.7) obviously have the same
properties. This proves the Hermiticity and (unital) homomorphism
(isomorphism) property for the map $\iota ^{t}\colon \mathbf{A}\in %
\pmb{\mathcal A}\rightarrow J^{{\star }}\pmb\tau ^{t}(\mathbf{A})J$.
\end{proof}
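As a simple consistency check (an illustration added here, not part of the
original argument), take the shortest nontrivial chain $\varkappa
=(x_{1},x_{2})$ with $t_{1}<t_{2}$. For $t\in (t_{1},t_{2}]$ the formula
(4.7) gives $\pmb\tau ^{t}(\varkappa )=\pmb\tau ^{0}(\varkappa )\circ \pmb%
\phi _{t_{1}}(\varkappa )$, and the recurrence (4.6) with $x=x_{2}$ then
yields for $t_{+}\in (t_{2},t_{3}]$
\begin{equation*}
\pmb\tau ^{t_{+}}(\varkappa )=\pmb\tau ^{t}(\varkappa )\circ \pmb\phi
_{t_{2}}(\varkappa )=\pmb\tau ^{0}(\varkappa )\circ \pmb\phi
_{t_{1}}(\varkappa )\circ \pmb\phi _{t_{2}}(\varkappa )\ ,
\end{equation*}%
which is again of the chronological form (4.7).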
Let us denote by $\pmb{\mathcal A}^{t}\subseteq \pmb{\mathcal A}$ the $\star
$-subalgebra of relatively bounded operators $\mathbf{A}=\int^{\oplus }%
\mathbf{A}(\varkappa )\mathrm{d}\varkappa $ with $\mathbf{A}(\varkappa )=%
\mathbf{A}(\varkappa ^{t})\otimes \mathbf{1}^{\otimes }(\varkappa _{\lbrack
t})$, and by $\mathcal{B}^{t}\subseteq \mathcal{B}$ the corresponding
algebra of operators $B=B^{t}\otimes \hat{1}_{[t}$ with $B^{t}$, acting in $%
\mathcal{G}^{t}$. The adapted QS process $\iota ^{t}$ over $\pmb{\mathcal A}$
is defined by the condition $\iota ^{t}(\pmb{\mathcal A}^{t})\subseteq
\mathcal{B}^{t}$ for almost all $t$, and corresponds to the adapted QS
evolution $\upsilon ^{t}\colon \mathcal{B}\rightarrow \mathcal{B},\upsilon
^{t}(\mathcal{B}^{t})\subseteq \mathcal{B}^{t}$, described as $\upsilon
^{t}(B)=\iota ^{t}(\mathbf{A})$ for $B=J^{{\star }}\mathbf{A}J$.
\begin{corollary}
The QS process $\iota ^{t}$, defined by the equation (4.6), is adapted, if $%
\pmb\tau ^{0}(\pmb{\mathcal A}^{0})\subset \pmb{\mathcal A}^{0}$ and $\pmb%
\phi (x,\varkappa )=\pmb\phi (x,\varkappa ^{t(x)})\otimes \mathbf{i}%
(\varkappa _{\lbrack t(x)})$ for almost all $x\in X$. In that case the QS
evolution $\upsilon ^{t}$ is defined by the adapted map $\pmb\sigma (x)\colon
\mathbf{J}^{{\star }}\dot{\pmb{\mathcal A}}^{t(x)}(x)\mathbf{J}\rightarrow
\mathbf{J}^{{\star }}\dot{\pmb{\mathcal A}}^{t(x)}(x)\mathbf{J}$; the
transformed adapted process $\hat{B}^{t}=\upsilon ^{t}(B^{t})$, with $B^{t}$
having the derivative $\mathbf{D}(x)\in \mathbf{J}^{{\star }}\dot{%
\pmb{\mathcal A}}^{t(x)}(x)\mathbf{J}$, satisfies the QS differential
equation
\begin{equation*}
\mathrm{d}\upsilon ^{t}(B^{t})=\upsilon ^{t}[\mathrm{d}\Lambda ^{t}(\pmb%
\sigma (B\otimes \mathbf{1})+\pmb\sigma (\mathbf{D})-B\otimes \mathbf{1})]%
\eqno(4.8)
\end{equation*}%
In particular, if $B^{t}=A\otimes \hat{1}$ and $\pmb\sigma (x,A\otimes \hat{1%
}\otimes \mathbf{1})=\pmb\varphi (x,A)\otimes \hat{1}$, $\pmb\varphi (x,A)\in
\pmb{\mathcal A}(x)$, where $\pmb{\mathcal A}(x)$ is the algebra of $\mathcal{A%
}$-valued triangular matrices $\mathbf{A}=[A_{\nu }^{\mu }]$, $A_{\nu }^{\mu
}=0$ if $\mu >\nu $, $A_{-}^{-}=A=A_{+}^{+}$, then the equation (4.8) has the
form (4.1) in terms of $j^{t}(A)=\upsilon ^{t}(A\otimes \hat{1})$, $\partial
_{\nu }^{\mu }(x)=j^{t(x)}\circ \lambda _{\nu }^{\mu }(x)$, where
\begin{equation*}
j^{0}(A)=\tau ^{0}(A)\otimes \hat{1},\pmb\lambda (x,A)=\pmb\varphi
(x,A)-A\otimes \mathbf{1}(x),A\in \mathcal{A}.
\end{equation*}%
If the maps $\varphi _{\nu }^{\mu }(x)$ are locally $L^{p}$-integrable in
the sense
\begin{equation*}
\Vert \lambda _{0}^{0}\Vert _{\infty }^{t}<\infty ,\Vert \lambda
_{+}^{0}\Vert _{2}^{t}<\infty ,\Vert \lambda _{0}^{-}\Vert _{2}^{t}<\infty
,\Vert \lambda _{+}^{-}\Vert _{1}^{t}<\infty ,\eqno(4.9)
\end{equation*}%
where $\Vert \lambda \Vert _{p}^{t}=\left( \int_{X^{t}}\sup_{A\in \mathcal{A}%
}\{\Vert \lambda (x,A)\Vert /\Vert A\Vert \}^{p}\mathrm{d}x\right) ^{1/p}$,
then the solution $j^{t}(A)=J^{{\star }}\pmb\tau ^{0}(\pmb\phi _{\lbrack
0,t)}(A\otimes \hat{1}))J$ exists as relatively bounded QS integral
\begin{equation*}
j^{t}(A)=\Lambda _{\lbrack 0,t)}(\pmb\tau ^{0}\circ \pmb\lambda
^{\triangleright }(A)\otimes \hat{1}),\pmb\lambda ^{\triangleright
}(\varkappa )=\circ _{x\in \varkappa }^{\rightarrow }\pmb\lambda
(x,\varkappa _{t(x)})
\end{equation*}%
where $\pmb\lambda (x,\varkappa )=\pmb\lambda (x)\otimes \mathbf{i}^{\otimes
}(\varkappa ),\mathbf{i}^{\otimes }(\varkappa )$ is the identity map for
operators in $\pmb{\mathcal E}^{\otimes }(\varkappa )$ and $\pmb\tau
^{0}(\varkappa )=\tau ^{0}\otimes \mathbf{i}^{\otimes }(\varkappa )$. It has
the estimate
\begin{equation*}
\Vert j^{t}(A)\Vert _{\xi ^{+}}^{\xi _{-}}\leq \Vert \tau ^{0}\Vert
\exp \{\int_{X^{t}}(\Vert \lambda _{+}^{-}(x)\Vert +(\Vert \lambda
_{+}^{0}(x)\Vert ^{2}+\Vert \lambda _{0}^{-}(x)\Vert ^{2})/2\varepsilon )%
\mathrm{d}x\}\eqno(4.10)
\end{equation*}%
for $\xi ^{+}/\xi _{-}>\mathrm{ess\sup }_{x\in X^{t}}\Vert \varphi
_{0}^{0}(x)\Vert ,\Vert A\Vert \leq 1$, and sufficiently small $\varepsilon
>0$.
\end{corollary}
Indeed, the process $\iota ^{t}(A^{t})=J^{{\star }}\pmb\tau ^{t}(A^{t})J$ is
adapted iff $\pmb\tau ^{t}(\varkappa ,\mathbf{A})=\pmb\tau ^{t}(\varkappa
^{t},\mathbf{A}^{t})\otimes \mathbf{1}^{\otimes }(\varkappa _{\lbrack t})$,
as was proven in Corollary 2. But due to $\pmb\tau (\varkappa ,%
\mathbf{A})=\pmb\tau (\varkappa ,\mathbf{A}(\varkappa ))$, this is possible
only in the case $\mathbf{A}^{t}(\varkappa )=\mathbf{A}^{t}(\varkappa
^{t})\otimes \mathbf{1}^{\otimes }(\varkappa _{\lbrack t})$ and $\pmb\tau
^{t}(\varkappa )=\pmb\tau ^{t}(\varkappa ^{t})\otimes \mathbf{1}^{\otimes
}(\varkappa _{\lbrack t})$, which is equivalent to the corresponding
conditions for $\pmb\tau ^{0}$ and $\pmb\phi (x)$. If $B^{t}=J^{{\star }}%
\mathbf{A}^{t}J$ is an adapted process: $\mathbf{A}^{t}(\varkappa )=\mathbf{A%
}^{t}(\varkappa ^{t})\otimes \mathbf{1}^{\otimes }(\varkappa _{\lbrack t})$,
then
\begin{align*}
\dot{\mathbf{A}}^{t}(x,\varkappa )& =\mathbf{A}^{t}(\varkappa \sqcup x)=%
\mathbf{A}^{t}(\varkappa )\otimes \mathbf{1}(x)\ ,\qquad \qquad \ \forall
t\leq t(x) \\
\mathbf{B}(x)& =\mathbf{J}^{{\star }}\dot{\mathbf{A}}^{t(x)}(x)\mathbf{J}%
=B^{t(x)}\otimes \mathbf{1}(x)\ ,\qquad \qquad \forall x\in X
\end{align*}%
and $\hat{\mathbf{B}}(x)=\upsilon (x,\mathbf{B}(x))=\hat{B}^{t(x)}\otimes
\mathbf{1}(x)$, where $\hat{B}^{t}=\upsilon ^{t}(B^{t})$ for the transformed
process $\pmb\upsilon (x,B\otimes \mathbf{1}(x))=\upsilon ^{t(x)}(B)\otimes
\mathbf{1}(x)$ evaluated at $B=B^{t(x)}$. This gives the equation (4.4) for
the adapted process $\hat{B}^{t}$ in the differential form (4.8):
\begin{equation*}
\mathrm{d}\upsilon ^{t}(B^{t})=\mathrm{d}\Lambda ^{t}(\pmb\upsilon (\pmb%
\sigma (B\otimes \mathbf{1}+\mathbf{D})-B\otimes \mathbf{1}))=\upsilon ^{t}[%
\mathrm{d}\Lambda ^{t}(\pmb\beta (B)+\pmb\sigma (\mathbf{D}))],
\end{equation*}%
where $\pmb\beta (B)=\pmb\sigma (B\otimes \mathbf{1})-B\otimes \mathbf{1}$
and $\mathrm{d}\Lambda ^{t}\circ \pmb\upsilon =\upsilon ^{t}\circ \mathrm{d}%
\Lambda ^{t}$ for the adapted evolution $\upsilon ^{t}$ by the same
arguments as in Corollary 1. This equation, restricted to $B^{t}=A\otimes
\hat{1}$, has the integral form (4.1) due to $\mathbf{D}=0$, where $%
j^{t}(A)=\upsilon ^{t}(A\otimes \hat{1})$, $\pmb\partial (x)=\upsilon
^{t(x)}\circ \pmb\beta (x)$, because $\pmb\upsilon (x)=\upsilon
^{t(x)}\otimes \mathbf{1}(x)$. If $\pmb\beta =\pmb\lambda \otimes \hat{1}$,
where $\pmb\lambda (x,A)\in \pmb{\mathcal A}(x)$, then it can be written as
\begin{equation*}
\mathrm{d}j^{t}(A)=\mathrm{d}\Lambda ^{t}(j(\pmb\lambda (A)))=j^{t}[\mathrm{d}\Lambda ^{t}(\pmb\lambda (A)\otimes \hat{1})]\ .
\end{equation*}%
The solution $j^{t}(A)=J^{{\star }}\pmb\tau ^{t}(A\otimes \mathbf{1}%
^{\otimes })J$ of this equation is defined by the chronological composition
(4.7) as
\begin{equation*}
\pmb\tau ^{t}(\varkappa ,A\otimes \mathbf{1}^{\otimes })=\pmb\tau
^{0}(\varkappa ,\pmb\varphi ^{\triangleright }(\varkappa ^{t},A)\otimes
\mathbf{1}^{\otimes }(\varkappa _{\lbrack t})),\pmb\varphi ^{\triangleright
}(\varkappa )=\circ _{x\in \varkappa }^{\rightarrow }\pmb\varphi
(x,\varkappa _{t(x)})\ ,
\end{equation*}%
where $\pmb\tau ^{0}(\varkappa )=\tau ^{0}\otimes \mathbf{i}^{\otimes
}(\varkappa )$ is the initial map, and $\pmb\varphi (x,\varkappa )=\pmb\varphi
(x)\otimes \mathbf{i}(\varkappa )$. It can be described as $j^{t}(A)=\iota
(T^{t})$ by the operator-valued function
\begin{equation*}
T^{t}(\pmb\varkappa )=\tau ^{0}[\varphi (\mathbf{x}_{1},\varphi (\mathbf{x}%
_{2},\dots ,\varphi (\mathbf{x}_{m},A)\dots ))]\otimes \mathbf{1}^{\otimes }(\pmb%
\varkappa _{\lbrack t})\ ,
\end{equation*}%
$t_{m}<t\leq t_{m+1}$, corresponding in (4.2) to the decomposable $\mathbf{T}%
^{t}=\pmb\tau ^{t}(A\otimes \mathbf{1}^{\otimes })$ for a partition $\pmb%
\varkappa =(\varkappa _{\nu }^{\mu })$ of the chain $\varkappa =(x_{1},\dots
,x_{m},\dots )\in \mathcal{X}$ with $\varphi (\mathbf{x}_{\nu }^{\mu
})=\varphi _{\nu }^{\mu }(x)$. Hence, the operator $\mathbf{T}^{t}$ is
relatively bounded for $A\in \mathcal{A}$:
\begin{equation*}
\Vert T^{t}(\pmb\varkappa )\Vert \leq \Vert \tau ^{0}\Vert \ \Vert A\Vert
\displaystyle{\prod}_{x\in \pmb\varkappa }\Vert \varphi ^{t}(\mathbf{x})\Vert =\Vert \tau
^{0}\Vert \ \Vert A\Vert \displaystyle{\prod}_{\nu =0,+}^{\mu =-,0}\zeta ^{t}(\varkappa
_{\nu }^{\mu }),
\end{equation*}%
where $\pmb\varphi ^{t}(x)=\pmb\varphi (x)$, if $t(x)<t$, otherwise $\pmb%
\varphi ^{t}(x)=\mathbf{i}(x)$, and $\zeta ^{t}(\varkappa _{\nu }^{\mu
})=\displaystyle{\prod}_{x\in \varkappa _{\nu }^{\mu }}^{t(x)<t}\left\Vert \varphi _{\nu
}^{\mu }(x)\right\Vert $. This proves, as in Corollary 3, the existence
of the solution $j^{t}(A)=J^{{\star }}\mathbf{T}^{t}J=\iota (T^{t})$ of the
equation (4.1) as a relatively bounded operator $B^{t}=j^{t}(A)\in \mathcal{B}$
for any $A\in \mathcal{A}$, with the estimate (4.10) for $\Vert A\Vert
\leq 1$ in terms of $\Vert \varphi _{\nu }^{\mu }(x)\Vert =\sup \{\Vert
\varphi _{\nu }^{\mu }(A)\Vert /\Vert A\Vert \}$, $\varphi _{\nu }^{\mu
}=\lambda _{\nu }^{\mu }$ for $\mu <\nu $. It can be written in the form of
the multiple QS integral (1.5) with respect to the operator function $B(\pmb%
\varkappa )=L(\pmb\varkappa )\otimes \hat{1}$ with
\begin{equation*}
L(\pmb\varkappa ,A)=\tau ^{0}[\lambda (\mathbf{x}_{1},\lambda (\mathbf{x}%
_{2},\ldots ,\lambda (\mathbf{x}_{n},A)\dots ))]=\tau ^{0}[\lambda
^{\triangleright }(\pmb\varkappa ,A)]
\end{equation*}%
for any partition $\sqcup _{\nu =0,+}^{\mu =-,0}\varkappa _{\nu }^{\mu
}=(x_{1},\dots ,x_{n})\in \mathcal{X}$, where $\lambda (\mathbf{x}_{\nu
}^{\mu })=\lambda _{\nu }^{\mu }(x)$ on the single-point table $\mathbf{x}%
=(\varkappa _{\nu }^{\mu })$, $\varkappa _{\nu }^{\mu }=x$.
The solution of the nonstationary Markov Langevin equation in the form of a
multiple QS integral was obtained recently in the framework of It\^{o} QS
calculus by Lindsay and Parthasarathy [15] under more restrictive conditions
than (4.9) (finite-dimensional and locally bounded $\lambda _{\nu }^{\mu }$).
Note that our estimate (4.10) is also applicable to QS flows with the
non-homomorphic maps
\begin{equation*}
\pmb\varphi (x)=\pmb\lambda (x)+\pmb\jmath (x)
\end{equation*}%
into the generalized operators $\varphi _{\nu }^{\mu }(x,A)=\lambda _{\nu
}^{\mu }(x,A)+A\otimes \mathbf{1}_{\nu }^{\mu }(x)$, where $1_{\nu }^{\mu
}(x)=0$, if $\mu \not=\nu $, $1_{-}^{-}(x)=1=1_{+}^{+}(x)$, $%
1_{0}^{0}(x)=I(x)$ is the identity operator $\mathcal{E}(x)\rightarrow
\mathcal{E}_{-}(x)$, and $\lambda _{\nu }^{\mu }(x,A)=0$ if $\mu >\nu $; the $%
(\lambda _{\nu }^{\mu })_{\nu =0,+}^{\mu =-,0}$ are functions $x\mapsto
\lambda _{\nu }^{\mu }(x)$, locally integrable in the sense of (4.9) with
respect to the norms of the maps $\lambda _{\nu }^{\mu }:A\in \mathcal{A}%
\mapsto \lambda _{\nu }^{\mu }(x,A)$ into the operators
\begin{eqnarray*}
\lambda _{0}^{0}(x,A) &:&\mathcal{H}\otimes \mathcal{E}(x)\rightarrow
\mathcal{H}\otimes \mathcal{E}_{-}(x),\qquad \lambda _{+}^{-}(x,A):\mathcal{H}\rightarrow \mathcal{H}\ , \\
\lambda _{+}^{0}(x,A) &:&\mathcal{H}\rightarrow \mathcal{H}\otimes \mathcal{E}_{-}(x)\ ,\qquad \lambda _{0}^{-}(x,A):\mathcal{H}\otimes \mathcal{E}(x)\rightarrow \mathcal{H}\ .
\end{eqnarray*}%
Considering a Hilbert scale $\{\mathcal{E}_{p}(x):p\in \mathbb{R}_{+}\}$ with
the triples $\mathcal{E}_{p}(x)\subseteq \mathcal{E}_{1}(x)\subseteq \mathcal{%
E}_{p^{-1}}(x)$, one can obtain the corresponding results also in the limits $%
\bigcap \mathcal{E}_{p}(x)$, $\bigcup \mathcal{E}_{p}(x)$ instead of $%
\mathcal{E}(x)$ and $\mathcal{E}_{-}(x)$. The restriction to the classical
Gaussian case gives the existence and uniqueness theorems for the It\^{o}
evolution equation in the white noise analysis approach [18,19]. In the
classical cases the nonadapted It\^{o} formula and noncausal differential
equations were studied recently by Nualart and Pardoux [17,20].
\begin{acknowledgement}
I wish to thank Prof. L. Accardi and Prof. F. Guerra for stimulating
discussions of the results and for the hospitality of the University of Rome
\textquotedblleft La Sapienza\textquotedblright\ and of the \textquotedblleft
Centro Matematico V. Volterra\textquotedblright , University of Rome II.
\end{acknowledgement}
\vfill\eject
\section{References}
\begin{enumerate}
\item Hudson R.L., Parthasarathy K.R. \textit{Quantum} \textit{It\^{o}'s
formula and stochastic evolution}, Comm. Math. Phys. \textbf{93}, (1984),
301-323.
\item Meyer P.A., \textit{El\'ements de Probabilit\'es Quantiques}, Expos\'es I
\`a IV, Institut de Math\'ematique, Universit\'e Louis Pasteur, Strasbourg, 1985.
\item Maassen H., \textit{Quantum Markov processes in Fock space described
by integral kernels}, in: ``Quantum Probability and Applications II'', ed.
L. Accardi and W. von Waldenfels, Lecture Notes in Mathematics, 11--36,
Berlin Heidelberg New York: Springer, 1985.
\item Lindsay J.M., Maassen H., \textit{An integral kernel approach to noise}%
, in: \textquotedblleft Quantum probability and Applications
III\textquotedblright , ed. L. Accardi and W. von Waldenfels, Lecture Notes
in Mathematics, 192--208, Berlin Heidelberg New York: Springer 1988.
\item Accardi L., Quaegebeur J., \textit{The It\^{o} Algebra of Quantum
Gaussian Fields}, Journal of Functional Analysis, \textbf{85}, N. 2 (1989),
213--263.
\item Accardi L., Fagnola F., \textit{Stochastic integration}, in: ``Quantum
Probability and Applications III'', ed. L. Accardi and W. von Waldenfels,
Lecture notes in Mathematics, 6--19, Berlin Heidelberg New York: Springer
1988.
\item Belavkin V.P., \textit{Nondemolition measurements, nonlinear filtering
and dynamic programming of quantum stochastic processes}, in: ``Modeling and
Control of Systems'', ed. A. Blaquiere, Lecture Notes in Control and
Information Sciences, \textbf{121}, 245--265, Berlin Heidelberg, New York:
Springer 1988.
\item Belavkin V.P., \textit{Nondemolition stochastic calculus in Fock space
and nonlinear filtering and control in quantum systems}, in:
\textquotedblleft Stochastic methods in Mathematics and
Physics\textquotedblright , ed. R. Gielerak and W. Karwowski, 310--324,
Singapore, New Jersey, London, Hong Kong: World Scientific, 1988.
\item Belavkin V.P., \textit{Quantum stochastic calculus and quantum
nonlinear filtering}, Centro Matematico V. Volterra, Universit\`a degli
Studi di Roma II, Preprint N. 6, Marzo, 1989.
\item Belavkin V.P., \textit{A new form and $\star $-algebraic structure of
quantum stochastic integrals in Fock space}, Seminario matematico e
fisico\dots (1989),\dots
\item Holevo A.S., \textit{Time-ordered exponentials in quantum stochastic
calculus}, Preprint N. 517 Universit\"{a}t Heidelberg, June, 1989.
\item Accardi L., Frigerio A., Lewis J.T., \textit{Quantum stochastic
processes}, Publications RIMS \textbf{18}, (1982), 97--133.
\item Belavkin V.P., \textit{Reconstruction theorem for quantum stochastic
processes}, Theor. Math. Phys., \textbf{62}, N. 3, (1985), 275--289.
\item Hudson R.L., \textit{Quantum diffusions and cohomology of algebras}.
Proceedings First World congress of Bernoulli society, vol. 1, Yu. Prohorov,
V.V. Sazonov (eds.), (1987), 479--485.
\item Lindsay J.M., Parthasarathy K.R., \textit{Cohomology of power sets
with applications in quantum probability}, Comm. Math. Phys. \textbf{124},
(1989), 337--364.
\item Malliavin P., \textit{Stochastic calculus of variations and
hypoelliptic operators} in: It\^{o} K., (ed) Proc. of Int. Symp. Stoch. D.
Eqs. Kyoto 1976 pp. 195--263. Tokyo: Kinokuniya--Wiley, 1978.
\item Nualart D., Zakai M., \textit{Generalized stochastic integrals and the
Malliavin calculus}, Prob. Theor. Rel. Fields \textbf{73}, (1986), 255--280.
\item Streit L., Hida T., \textit{Generalized Brownian functionals and the
Feynman Integral}, Stoch. Processes and their Applications, \textbf{16},
(1983), 55--69.
\item Hida T., \textit{Generalized Brownian functionals and stochastic
integrals}, Appl. Math. Optim., \textbf{12}, (1984), 115--123.
\item Nualart D., Pardoux E., \textit{Stochastic calculus with anticipating
integrands}, Prob. Theor. Rel. Fields, \textbf{78}, (1988), 535--581.
\end{enumerate}
\end{document}
\section{Introductory Remarks}
Below the scale at which strong gravitational effects become relevant, gravity can be treated as an effective theory \cite{Donoghue, Burgess}. In pure Einstein gravity, this strong coupling scale is simply given by the reduced Planck mass $\mplf = 2.435\times 10^{18}$ GeV. In the presence of matter, the strong coupling scale is lowered as
\eq{scs}{M_{**} \sim \mplf/\sqrt{N}}
where $N$ is shorthand for a weighted index that counts the total number of particles of different spins present. As reviewed in \cite{Antoniadis:2014xva}, one can understand this result in a variety of ways\footnote{See \cite{Dvali3, Dvali1, Dvali2} for a range of arguments including black hole thermodynamics.}, for which we offer our own simple derivation in the next section. On the other hand, the effective strength of gravity -- $M_*$ -- is in principle an independent quantity. One determines $M_*$ via local scattering experiments, say a test (point) mass scattering off a heavier mass. As we also review shortly, depending on the process in question and its scale\footnote{i.e., allowing for interactions that violate the equivalence principle at high energies.},
\eq{}{M_* \sim \mplf/\sqrt{N_*}}
counts the number $N_*$ of species that can mediate \textit{tree-level} interactions of gravitational strength. During inflation for example in the presence of large (but stabilized) extra dimensions, processes that couple only to the transverse traceless (TT) polarizations of the graviton will turn up the scale corresponding to $N_* = 1 + N^{\rm T}_{\rm KK}$ where $N^{\rm T}_{\rm KK}$ counts the number of TT Kaluza-Klein (KK) resonances with masses below the momentum transfer in question. Processes that couple only to the longitudinal polarization of the graviton\footnote{Recalling that in unitary gauge, spacetime is foliated such that inflaton fluctuations are gauged away. The graviton thus acquires a propagating longitudinal polarization by `eating' the fluctuating inflaton \cite{Cheung:2007st} (reviewed in \cite{Chluba}).} (equivalently, processes mediated by species that couple to the trace of the energy momentum (EM) tensor) on the other hand see the scale corresponding to $N_*$ where $N_* = 1 + \w N_*$ where $\w N_*$ effectively counts the (non KK) \textit{universally coupled} species contributing to the process in question in addition to any KK graviscalars whose couplings or expectation values may have shifted between laboratory scales and the scale of inflation. \textit{In what follows, the term universally coupled specifically refers to all species that can mediate tree-level exchange processes between covariantly conserved sources.}
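To make the scaling of $M_*$ explicit, consider the following heuristic sketch (ours, with illustrative notation): summing tree-level exchanges between two test masses $m_1$ and $m_2$ over $N_*$ universally coupled species with masses $m_i$ below the inverse distance probed, each coupling with gravitational strength, yields the static potential
\eq{}{V(r) \sim -\frac{m_1 m_2}{\mplf^2}\sum_{i=1}^{N_*}\frac{e^{-m_i r}}{r} \rightarrow -\frac{N_*\, m_1 m_2}{\mplf^2\, r} \equiv -\frac{m_1 m_2}{M_*^2\, r}\ ,}
so that such scattering experiments measure $M_* = \mplf/\sqrt{N_*}$ rather than $\mplf$.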
In this note, we further elaborate upon these facts and their consequences for \textit{inferring} absolute scales from observable quantities during inflation. In doing so, we address various concerns raised in a recent paper \cite{Kleban}, where firstly it was reasoned that KK gravitons could not contribute towards lowering $M_*$ during inflation since this would require their masses to be less than the Hubble scale, seemingly in violation of the Higuchi bound \cite{Higuchi}. A straightforward corollary of this reasoning, however, would be that it is impossible for the compactification scale to be less than the effective cosmological constant, implying a powerful no-go theorem for compactifications were it true. We demonstrate by explicit example and proof that, as expected from the underlying consistency of higher dimensional Einstein gravity, KK gravitons evade the Higuchi bound\footnote{Compactifications spontaneously break higher dimensional Lorentz invariance. As a result, the underlying healthiness of the higher dimensional theory manifests in cancellations that preclude the problematic terms that would otherwise have implied a propagating ghost in a given mass range on a de Sitter (dS) background (see appendix).}.
Secondly, it was reasoned in \cite{Kleban} that universally coupled species that are not TT spin-2 excitations could not affect the inferred scale of inflation from a positive detection of primordial tensors, since the former could not have generated perturbations that linearly source B-mode polarization in the cosmic microwave background (CMB) \cite{polnarev}. This reasoning is inaccurate since it neglects to treat gravity as an effective theory and (as we remind the reader shortly) results in the inconsistent corollary that it would be possible to infer a scale for inflation beyond the scale at which a classical description of spacetime breaks down. A more careful examination of how one connects theoretical quantities to what is extracted from the CMB confirms the basic conclusions of \cite{Antoniadis:2014xva}; that inferring an absolute scale of inflation is complicated by the uncertainty in $M_*$:
\eq{}{V_*^{1/4} \sim \frac{r_*^{1/4}}{\sqrt N_*}\,3.28 \times 10^{16}\,\rm{GeV}}
where $r_*$ is the observed tensor to scalar ratio and $N_* = (1 + N^{\rm T}_{\rm KK})(1 + \w N_*)$ with the caveat that one can only trust this inference if the resulting curvature is such that $R \lesssim \mplf^2/N$, or
\eq{hbound}{\frac{H^2_*}{\mplf^2} \lesssim \frac{1}{N}}
with $N$ being the total number of species in our theory and $H_*$ is the Hubble factor during inflation (the above is nothing more than the obvious requirement that $H_*^2/M_{**}^2 \lesssim 1$) \cite{Antoniadis:2014xva}. Furthermore, as we elaborate upon further, non KK universally coupled species can violate the so-called single field tensor to scalar consistency relation even as the spectral properties of the anisotropies remain unchanged.
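For orientation, with illustrative numbers (ours, not measured values): a detection at $r_* = 0.01$ with $N_* = 1$ would give
\eq{}{V_*^{1/4} \sim (0.01)^{1/4}\times 3.28 \times 10^{16}\,{\rm GeV} \approx 1.0\times 10^{16}\,{\rm GeV}\ ,}
whereas $N_* = 100$ would lower the inferred scale tenfold to $\sim 1.0 \times 10^{15}$ GeV, provided the bound (\ref{hbound}) is respected.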
\subsection{Outline}
Some of our findings are direct corollaries of results established elsewhere \cite{Dvali3,Dvali2,Dvali1}, while others are obvious in hindsight although evidently obscured by a number of moving parts. Therefore, we err on the side of detail in the following treatment, clarifying various points omitted in \cite{Antoniadis:2014xva} and presenting the observation on the nature of the Higuchi bound alluded to in the title. Readers interested only in the latter result and any wider lessons to be drawn are invited to skip to the conclusion and appendices. We begin by reviewing aspects of how short distance gravity can differ from macroscopic gravity in a general context, after which we fix to the specific setting of inflationary cosmology. We derive the implications of $M_*$ and $M_{**}$ differing from the macroscopic strength of gravity (set by $\mplf = 2.435 \times 10^{18}$ GeV) for cosmological observables and in particular for inferring an absolute energy scale for inflation, after which we address concerns raised in \cite{Kleban} and conclude. Along the way, we draw attention to the fact that the process dependence of $M_*$ can result in a violation of the single-field tensor to scalar consistency relation, in addition to the fact that any positive detection of primordial tensors necessarily bounds the \textit{total} number of species in the universe, hidden or otherwise.
\section{Review}
\subsection{The different scales of gravity}
In the absence of matter, gravity becomes strongly coupled as the momentum transfer for any given process approaches the (reduced) Planck scale $\mplf = 2.435\times 10^{18}$ GeV or when the background curvature approaches $R \sim \mplf^2$. In the presence of matter, the strong coupling scale is lowered as $M_{**} = \mplf/\sqrt{N}$, where $N$ is shorthand for a weighted sum that counts the total number of species present. A direct understanding is arrived at, for example, from the effective action one obtains after integrating out an arbitrary spectrum of massive particles minimally coupled to gravity, initially described by the Einstein-Hilbert action:
\eq{S1}{S = \int d^4x\sqrt{-g}\left(\frac{M^2_B}{2}R - \Lambda_B + \calL_M\right) }
where $\calL_M$ is some arbitrary matter sector and $M_B$ and $\Lambda_B$ denote recognizable dimensionful couplings that are subject to renormalization. All graviton scattering amplitudes (or on shell sub-amplitudes) can be calculated from the effective action that results from integrating out the matter fields. To one loop, these amplitudes are reproduced by the effective action
\eq{1l}{ S = \int d^4x\sqrt{-g}\left(\frac{\mplf^2}{2}R - \Lambda + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu}\right) + ... }
where the ellipses denote higher order contributions in the curvature and loop expansion. The coefficient of the Einstein-Hilbert term and the cosmological constant are fixed by measurements with the `bare' terms in (\ref{S1}) absorbing the divergences after dimensionally regularizing, and where
\eq{}{c_{1,2} \sim \sum_i \frac{w_i}{16\pi^2}\, {\rm log}\frac{m_i^2}{\mu^2}}
represents a weighted sum over all massive species present with the $w_i$ calculable in terms of the spins of the particles integrated out \cite{Birrell, Vassilevich:2003xt}. Given that $c_{1,2} \sim N$ we see from counting derivatives that the perturbative expansion breaks down when the momentum transfer approaches
\eq{}{p^2 \sim \mplf^2/N}
or when the background curvature approaches
\eq{mc}{R \sim \mplf^2/N}
both of which are manifestations of the fact that in the presence of an arbitrary spectrum of particles, the scale at which strong gravity effects become relevant is set by (\ref{scs}) -- $M_{**} \sim \mplf/\sqrt N$. Crucially, we note that \textit{the maximum allowed curvature before a classical description of spacetime breaks down} is given by $\mplf^2/N$ instead of $\mplf^2$ as would be the case in a purely gravitational theory, a fact that is not without consequence for inflationary cosmology.
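As a quick numeric sketch (the species counts below are illustrative assumptions; only the value of $\mplf$ is taken from the text), the strong-coupling scale drops with the total species number as $M_{**} = \mplf/\sqrt N$. With the $10^{32}$ hidden standard-model copies contemplated later in the text, gravity would become strong already near the TeV scale:

```python
# Sketch: strong-coupling scale M_** = M_pl / sqrt(N) as a function of the
# total (weighted) species count N. The species counts are illustrative.
M_PL = 2.44e18  # reduced Planck mass in GeV, as quoted in the text

def strong_coupling_scale(n_species):
    """Scale at which graviton loops over N species become O(1)."""
    return M_PL / n_species**0.5

for n in (1.0, 1e2, 1e32):
    print(f"N = {n:.0e}  ->  M_** ~ {strong_coupling_scale(n):.3g} GeV")
```

For $N = 10^{32}$ this gives $M_{**} \sim 244$ GeV, consistent with the statement below that such a scenario would cap any inferred inflation scale at roughly a TeV.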
On the other hand, the effective strength of gravity at any given scale, set by $M_*$, is an independent quantity. Although all massive species lower the strong coupling scale as (\ref{scs}), only those which mediate tree level interactions between covariantly conserved sources can enhance the strength of gravitational interactions (the latter being our definition of \textit{universally coupled} species) at distance scales smaller than their Compton wavelength\footnote{This is because the loop threshold effects that lower the strong coupling scale only become relevant when all higher loop corrections also become relevant, i.e. when the momentum transfer approaches $M_{**}$, precisely when the effective theory starts to break down \cite{Antoniadis:2014xva}.}. This occurs independently of the process for Kaluza-Klein (KK) resonances and in a process dependent (i.e. equivalence principle violating) manner for species that couple to the trace of the EM tensor of the source:
\eq{sg}{M_* \sim \mplf/\sqrt{N_*}}
where $N_*$ counts the number of species contributing to (and with masses below) the momentum transfer of the tree level process in question. As an example of the latter, one can contemplate (KK) graviscalars, 4-dimensional (4D) scalars or vector bosons with explicit non-minimal couplings to gravity (hence coupling to the trace of the energy momentum tensor of any source) or which mediate effective interactions via higher dimensional operators -- e.g. dimension 5 in the case of pseudo-scalar exchange \cite{Adelberger:2003zx} or dimension 6 in the context of Higgs effective field theory \cite{Antoniadis:2014xva}\footnote{One could also contemplate that $M_* \equiv \mplf$ for all momentum transfer up to the scale $M_{**}$ as was done in \cite{Gasperini}, where $M_{**}$ can be lowered arbitrarily by engineering an appropriate field content.}.
One can understand the lowering of $M_*$ immediately above the threshold $M$, where the latter denotes the mass of the species, by first considering what happens for a KK graviton. For a given TT KK resonance with mass $M$, tree level graviton exchange between any two conserved sources is augmented as (suppressing tensor structure for simplicity):
\begin{figure}[h!]\epsfig{file=nbKK.jpg, height=1.2in, width=3.3in}\end{figure}
\eq{tl}{\frac{1}{\mplf^2 p^2} \to \frac{1}{\mplf^2 p^2} + \frac{g}{\mplf^2 (p^2+M^2)}}
where $g$ counts the number of contributing KK polarizations of mass $M$ (as illustrated above). Hence in the regime $M^2 \ll p^2 \ll \mplf^2/N$ we remain within the regime of validity of our effective theory and the tree level exchange effectively becomes
\eq{mple}{\frac{1}{\mplf^2 p^2} \to \frac{g+1}{\mplf^2p^2}}
implying an increase in the strength of gravity at distances smaller than $M^{-1}$ but greater than $M_{**}^{-1}$ as per (\ref{sg}). Consider now the effects of any other universally coupled (non KK) species, for example the Higgs via the dimension 6 effective operators coupled to scalar or fermionic matter:
\eq{ucs}{\Delta\calL_{\rm eff} \sim c_1\frac{H^\dag H}{\mplf^2}\partial_\mu\p\partial^\mu\p + c_2\frac{H^\dag H}{\mplf^2}\bar\psi\slashed\partial\psi\sim c_{\{1,2\}}\frac{H^\dag H}{\mplf^2}T^\mu_\mu}
which evidently couples the singlet component of the Higgs $h$ to the trace of the energy-momentum tensor of any conserved source made up of $\phi$ or $\psi$ quanta with the effective interaction
\eq{d6}{\Delta\calL_{\rm eff}\sim c_i\frac{v\,h}{\mplf^2}T^\mu_\mu}
where $v$ denotes the vacuum expectation value (vev) of the Higgs in the phase in which we are doing perturbation theory. One finds an enhancement of gravitational interactions identical to (\ref{tl}) with the replacement
\eq{gdef}{g \to c^2_i v^2/\mplf^2.} Similar enhancements can come from KK graviscalars -- when more than one extra dimension is compactified, there are always extra scalar polarizations $H^{(n)}$ distinct from the scalar mode of the massive 4D graviton which couple to the trace of the EM tensor:
\eq{d5}{\Delta\calL_{\rm eff}\sim \sum_{n=1}\kappa\frac{H^{(n)}}{\mplf}T^\mu_\mu}
with $\kappa$ a constant numerical factor \cite{Giudice}. As soon as the momentum transfer exceeds the mass of any of these scalar KK resonances, one finds an enhancement akin to (\ref{mple}) with $g \to \kappa^2$ per resonance. One thus concludes that for Cavendish experiments performed with point masses, $M_* \sim \mplf/\sqrt{N_*}$ where $N_*$ stands for a process dependent weighted index\footnote{We note in passing that there are certain caveats and ambiguities when dealing with scale dependent quantities in the EFT of gravity (see discussions in \cite{Anber,NBB}). We evade these issues by dealing only with (unambiguously defined) physically observable quantities such as on-shell S-matrix elements.}.
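To make the threshold counting above concrete, a minimal sketch (the spectrum and couplings are illustrative assumptions, not data): each universally coupled state lighter than the momentum transfer contributes its $g$ to $M_*^2 = \mplf^2/(1 + \sum_i g_i)$, with $g \sim c_i^2 v^2/\mplf^2$ for a dimension-6 scalar coupling and $g \to \kappa^2$ per graviscalar KK resonance.

```python
# Sketch: process-dependent M_* from tree-level threshold counting,
# M_*^2 = M_pl^2 / (1 + sum_i g_i), summing over species lighter than the
# momentum transfer p. The example spectrum below is purely illustrative.
M_PL = 2.44e18  # GeV

def m_star(p, thresholds):
    """thresholds: list of (mass_GeV, g) pairs of universally coupled states."""
    n_star = sum(g for mass, g in thresholds if mass < p)
    return M_PL / (1.0 + n_star)**0.5

# Illustrative spectrum: three channels with g = 1 each (e.g. KK resonances)
# and one very weakly coupled scalar with g = c^2 v^2 / M_pl^2 << 1.
spectrum = [(1e3, 1.0), (2e3, 1.0), (3e3, 1.0), (1e2, 1e-30)]

print(m_star(1e1, spectrum) / M_PL)   # below all thresholds: M_* = M_pl
print(m_star(1e4, spectrum) / M_PL)   # above all thresholds: M_* = M_pl / 2
```

The step-wise drops at each threshold, with logarithmic running in between, are exactly the behavior described in the bullet list of the next subsection.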
\subsection{The scale of Inflation}
So what consequences, if any, do the facts reviewed above have for cosmological observations involving curvature and tensor perturbations? Not many, as it turns out, since observable quantities are always dimensionless and therefore independent of the units in which they are expressed (Planck units being the naturally available scale in early universe cosmology). The amplitude and spectral properties of the CMB anisotropies in particular are unaffected by any of the considerations above. However, in trying to \textit{infer} an energy scale of inflation, one necessarily runs up against the fact that $M_* \neq \mplf$, so that the absolute scale of inflation is uncertain given our ignorance of $M_*$ beyond laboratory scales.
To see this, we first fix a particular context for the purposes of demonstration even though the results generalize \cite{Antoniadis:2014xva}. From the outset, we stress that we work in the context of single field inflation. Any other fields present therefore couple to the inflaton only through gravitational strength interactions. Secondly, we work in a 4D context, even though we allow for KK gravitons to be excited. That is, KK masses are set by the size of extra dimensions (i.e. the expectation value of the moduli that parametrize them) which is an independent parameter from the masses of these moduli, given some stabilization mechanism. Therefore we can consider situations where
\eq{mkkc}{m^2_{\rm KK} < H_*^2 \ll \mu^2}
where $m_{\rm KK}$ is the characteristic KK mass, $\mu$ is the characteristic moduli mass that can be taken to be arbitrarily large and $H_*$ is the Hubble factor during inflation. This permits an effectively 4D description of the background over which KK modes with masses up to $H_*$ can be excited. Before deriving our desired results, it is useful to have an overview of the different moving parts at work:
\begin{itemize}
\item Cavendish experiments fix $M_* = \mplf = 2.44\times 10^{18}$ GeV up to the percent level at scales $\sim 10^{-4}$ m $\sim (10^{-2}~{\rm eV})^{-1}$ \cite{Kapner:2006si}.
\item For each mass threshold crossed between $10^{-2}$ eV up to the scale of inflation\footnote{Corresponding to a new tree level exchange channel opening up for the process in question.}, the strength of gravity increases as per (\ref{mple}), after which the usual logarithmic running sets in.
\item The species that contribute depend on the process considered, but always include KK gravitons, consistently treated as massive spin-2 excitations over a 4D solution since $H_*^2 \ll \mu^2$.
\end{itemize}
In order to illustrate the physics at work in as simple a context as possible, we further make the assumptions that:
\begin{itemize}
\item We either have one extra dimension, so that aside from the zero modes, there are no extra graviscalars/vectors other than those `eaten' by the massive spin-2 KK modes;
\item or for more than one extra dimension, the mechanism that gives masses to the zero modes of the radion and vector moduli also generates commensurate masses for their KK excitations.
\end{itemize}
The latter two conditions can readily be relaxed, although for the purposes of simple illustration they ensure through (\ref{mkkc}) that the scalar and vector fluctuation modes of the extra dimensions have masses much larger than $H_*$ thus permitting an effective 4D description for the perturbations with no other hidden scalars. For more than one extra dimension, relaxing the latter requirement would result in additional light universally coupled hidden fields which still require a mechanism to generate masses for them to avoid fifth force constraints \footnote{As for non-universally coupled fields, $N$ species of the former alone would fix $M_* \equiv \mplf$ all the way up to $M_{**} = \mplf/\sqrt N$, logarithmic running aside (cf. \cite{Gasperini}). Within this context, see also \cite{Ozsoy:2014sba, Namba:2015gja} where additional fields and interactions typically generate non-Gaussianities.}.
Since the only dynamical field that has any background time dependence (and energy density) in this set-up is the inflaton, the universe has only one physical clock. Therefore, all perturbations are adiabatic and we are entitled to foliate spacetime such that the inflaton fluctuations are gauged away (comoving/unitary gauge) \cite{Cheung:2007st}. Since we are only interested in the scalar and tensor perturbations, and since all moduli have masses much greater than any other scale in the problem, the metric induced on the spatial hypersurfaces can be parametrized as
\eq{mpdec}{h_{ij}(t,x) = a^2(t)e^{2\calR(t,x)}\hat h_{ij};~~~ \hat h_{ij} := \sum_{n}{\rm exp}[\gamma^{(n)}_{ij}],~~~\partial_i \gamma^{(n)}_{ij} = \gamma^{(n)}_{ii} = 0}
where the $\gamma^{(n)}_{ij}$ correspond to the $n^{\rm th}$ induced TT KK resonance which propagates freely at the scale of inflation (i.e. with masses less than the Hubble scale)\footnote{St\"uckelberg decomposing the propagating massive spin-2 graviton as $\hat h^{(n)}_{ij} = \gamma^{(n)}_{ij} + \partial_i A_j + \partial_j A_i + 2\partial_i\partial_j \psi$, the vector perturbation $A_i$ decouples at linear order and the longitudinal polarization $\psi$ has no evolving background and therefore contributes vanishingly to the observed adiabatic mode $\calR$ (\ref{mpdec}).}. If there are $N^{\rm T}_{\rm KK}$ of these resonances, then clearly this would result in a total power for the tensor spectrum of
\eq{tspec}{\calP_\gamma := 2(1 + N^{\rm T}_{\rm KK})\frac{H^2_*}{\pi^2 \mplf^2},}
equivalent to the replacement $M^2_{*T} = \mplf^2/(1 + N^{\rm T}_{\rm KK})$ (where the subscript is to emphasize the process dependence of this quantity). Next, we consider the effect on the curvature perturbations of a scalar $\eta$ that couples to the trace of the energy momentum tensor (equivalently, with non-minimal coupling)\footnote{This incorporates the example of dimension 6 effective interactions (\ref{ucs}) which can be converted into a non-minimal coupling to gravity via field redefinition using the background equations of motion \cite{Weinberg} if the coupling is to the matter component sourcing the background.}:
\eq{jf}{\Delta\calL_{\rm eff}\sim \xi\,\eta^2\frac{T^\mu_\mu}{\mplf^2} \equiv - \xi\, \eta^2 R}
which is to be considered in conjunction with the original Einstein-Hilbert term
\eq{jf2}{\calL_{\rm eff} \supset \frac{\mplf^2}{2}\left(1 - \frac{2\xi \eta^2}{\mplf^2} \right)R}
One might be tempted to immediately infer from the above that doing perturbation theory around a background where $\eta$ has a non-vanishing expectation value $v$ relative to one where it vanishes, would imply an effective change in the strength of gravity encoded by $M^2_* = \mplf^2/(1 + g)$, with $g := 2\xi v^2/\mplf^2 \ll 1$ (cf. (\ref{gdef})) by assumption\footnote{A concrete example of this occurs in the context of Higgs Inflation \cite{Bezrukov} where $\eta$ is identified with the inflaton itself (the singlet component of the Higgs), and where the vacuum expectation value (vev) during inflation $v \sim \mplf/\sqrt{-\xi}$ with $\xi \sim - 10^3$ is many orders of magnitude greater than its value in the EW vacuum where $v = 246$ GeV.}. As far as the curvature corrections are concerned, this is essentially correct, although we have to work a little harder to prove this. We first go to the Einstein frame via the conformal transformation
\eq{ct}{g_{\mu\nu} = \left(1 - \frac{2\xi \eta^2}{\mplf^2} \right)^{-1}{\w g}_{\mu\nu}\, := F^{-1}(\eta){\w g}_{\mu\nu}}
which rescales (\ref{jf2}) into the usual Einstein-Hilbert term
\eq{ctl}{ \mathcal L_{\rm eff} \supset \frac{\mplf^2}{2}\w R + F^{-2}(\eta)\mathcal L_m\left[F^{-1}(\eta)\w g_{\mu\nu}, \psi,A_\mu,\phi,\eta\right] + ...}
where $\calL_m$ is the Lagrangian that describes all matter content including standard model fields, the inflaton and the species $\eta$. We first observe that all massless fermions and $U(1)$ gauge fields in 4D are conformally invariant, so the transformation (\ref{ct}) has no effect on the conformally rescaled (and correspondingly canonically normalized) fields. Scalar fields on the other hand (unless non-minimally coupled with $\xi = -1/6$) are not. Therefore in the Electroweak (EW) vacuum (where the conformally rescaled Higgs vacuum expectation value dictates all particle masses), Cavendish experiments will turn up $M_* = 2.44 \times 10^{18}$ GeV as the strength of gravity, say at sub-mm scale torsion balance experiments \cite{Kapner:2006si}. The canonically normalized Higgs field is given by $\w H = F^{-1/2}H$, so that all particle masses scale $\propto F^{1/2}(\eta)$ under constant shifts of $\eta$. Keeping particle masses fixed, this is equivalent to changing the strength of gravity\footnote{One can also see this from explicitly verifying that the EM tensor derived from (\ref{ctl}) scales as $T^\mu_{\nu}(\eta_*) = \frac{F^2(\eta_0)}{F^2(\eta_*)}T^\mu_\nu(\eta_0)$ which follows directly from its conformal dimension. This is equivalent to keeping the EM tensor fixed, but scaling $M_*$ as $M_* \propto F(\eta)$.} as $M_*(\eta) = F(\eta)/F(\eta_0)\cdot 2.44\times 10^{18}$ GeV where $\xi\eta^2$ in the EW vacuum is defined to be $\xi_0 v_0^2$. If, however, during inflation the background shifts to $\xi_*v_*^2$ (either through explicit shifts in the expectation value of $\eta$ or through running of $\xi$), it is straightforward to see that the amplitude of curvature perturbations one infers from (\ref{ctl}) will be given by
\eq{cspec}{\calP_\calR := \frac{H^2_*}{8\pi^2 M_{* s}^2 \epsilon_*};~~~ \epsilon_* := -\dot H_*/H_*^2,}
where $M_{*s}$ above is
\eq{mstardef}{M_{*s} = F(\eta_*)/F(\eta_0)\cdot 2.44\times 10^{18}\,{\rm GeV}.}
This amplitude is \textit{fixed} by the observed anisotropies of the CMB to be $\calP(k_*) = \calA \times 10^{-10}$ where $\calA \sim 22.15$ \cite{planckcos}. The tensor to scalar ratio is also a quantity that would be fixed by any putative measurement of primordial tensor modes, and is given, replacing $\mplf$ with $M_{*s}$ in (\ref{tspec}) as specified in (\ref{mstardef}) and using (\ref{cspec}), by
\eq{rfix}{r_*:= \frac{\calP_\gamma}{\calP_\calR} = 16\epsilon_*\left(1 + N^{\rm T}_{\rm KK}\right) = 16\epsilon_*\left(\frac{M^2_{*s}}{M^2_{*T}}\right),}
now with $M^2_{*T} = M^2_{*s}/(1 + N^{\rm T}_{\rm KK})$. Any positive determination of $r_*$ fixes $\epsilon_*$ and implies that in the regime we can trust our effective theory, the scale of inflation is given by
\eq{sfix}{H_*^2 = M^2_{*T}\left(\frac{\pi^2\mathcal A r_*}{2 \cdot 10^{10}}\right);~~~V_*^{1/4} = M_{*T}\left(\frac{3\pi^2\mathcal A r_*}{2 \cdot 10^{10}}\right)^{1/4};~~~M^2_{*T} = M^2_{*s}/(1 + N^{\rm T}_{\rm KK})}
with
\eq{}{M^2_{*s} = \mplf^2\frac{F^2(\eta_*)}{F^2(\eta_0)}\approx \frac{\mplf^2}{1 + \w N_*}}
with $\w N_* := \sum_i g_i$ where, for example, $g_i = 2\Delta( \xi_i\eta_i^2)/\mplf^2$ for dimension 6 couplings to the trace of the EM tensor (\ref{d6}) or $g_i = 2\Delta(\xi_i\eta)/\mplf$ for dimension 5 couplings (\ref{d5}), presuming these individual shifts to be small. All of this is subject to the caveat that the inferred Hubble scale is necessarily bounded from above by (\ref{hbound})
\eq{slim}{H^2_* \lesssim \frac{\mplf^2}{N}}
where to reiterate, $N$ is the weighted index corresponding to the total number of species present, hidden or otherwise. For example, in the scenario of \cite{Dvali3} with $10^{32}$ hidden copies of the standard model, it would not be possible to infer a scale of inflation greater than a TeV. Furthermore, it is amusing to note that (\ref{sfix}) and (\ref{slim}) together imply a bound on the total number of species in the universe (universally coupled or otherwise) given the fixed amplitude of curvature perturbations in conjunction with any positive determination of $r_*$ that goes as:
\eq{}{N \leq \frac{9.15}{r_*}\times 10^7\frac{\mplf^2}{M^2_{*T}}.}
In the standard scenario, $M^2_{*T} \equiv \mplf^2$ which for any detection of $r_* \sim \mathcal O(10^{-1})$ would imply $N \lesssim 10^9$, and for any given extra dimensional scenario, the factor $\mplf^2/M^2_{*T}$ is a calculable geometrical quantity that encapsulates the size of the extra dimensions \cite{Antoniadis:2014xva}.
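As a numeric cross-check (a sketch, not part of the original analysis; $\calA = 22.15$ and $r_* = 0.1$ are illustrative inputs), the inferred scales in (\ref{sfix}) and the species bound can be evaluated directly. The prefactor $2\times10^{10}/(\pi^2\calA) \approx 9.15\times10^{7}$ reproduces the coefficient quoted above, and the standard case $M_{*T} = \mplf$ with $r_* = 0.1$ gives $N \lesssim 10^9$:

```python
import math

# Sketch evaluating Eq. (sfix) and the species bound, using the Planck
# amplitude A = 22.15 quoted in the text. m_star_t = M_PL is the standard
# case; an extra-dimensional scenario would lower it.
A = 22.15
M_PL = 2.44e18  # reduced Planck mass, GeV

def hubble_scale(r, m_star_t=M_PL):
    """H_* from H_*^2 = M_{*T}^2 * pi^2 A r / (2 x 10^10)."""
    return m_star_t * math.sqrt(math.pi**2 * A * r / 2e10)

def potential_scale(r, m_star_t=M_PL):
    """V_*^{1/4} from V_*^{1/4} = M_{*T} (3 pi^2 A r / 2e10)^{1/4}."""
    return m_star_t * (3 * math.pi**2 * A * r / 2e10)**0.25

def species_bound(r, mpl_over_mstar_sq=1.0):
    """N <= (2e10 / (pi^2 A)) / r * (M_pl / M_{*T})^2."""
    return 2e10 / (math.pi**2 * A) / r * mpl_over_mstar_sq

print(f"H_*     ~ {hubble_scale(0.1):.2e} GeV")
print(f"V_*^1/4 ~ {potential_scale(0.1):.2e} GeV")
print(f"N_max   ~ {species_bound(0.1):.2e}")
```

For $r_* = 0.1$ this returns $H_* \sim 8\times10^{13}$ GeV and $V_*^{1/4} \sim 1.8\times10^{16}$ GeV, the familiar standard-scenario numbers, which are then diluted by the factor $M_{*T}/\mplf$ in the general case.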
In summary, we see that although the amplitude and spectral properties of CMB anisotropies are unaffected by any difference in the strength of gravity during inflation, \textit{inferring} an absolute scale is complicated by our lack of knowledge of the scales $M_{*T}$ and $M_{**}$, rendering it effectively uncertain. We further note that since the tilt of the power spectrum of all of the individual graviton polarizations is still fixed by the deviation of the background from an exactly dS geometry $n_T = -2\epsilon_* = 2\dot H_*/H^2_*$, we see that (\ref{rfix}) implies a deviation from the tensor to scalar consistency relation:
\eq{csv}{n_T = -\frac{r_*}{8}\left(\frac{M^2_{*T}}{M^2_{*s}}\right),}
which is the only \textit{observable} consequence of the process and scale dependence of the strength of gravity at the scale of inflation.
\subsection{Response to arXiv:1508.01527}
Recently several concerns regarding some of the results discussed above were raised in \cite{Kleban} which we presently wish to address. Firstly, it was observed that the Higuchi bound nominally appears to forbid the presence of massive spin-2 excitations over a dS background within the mass range
\eq{}{0 \leq m^2_{\rm KK} \leq 2H^2}
which would imply that KK resonances could not be excited during inflation under the circumstances described in the previous subsection and correspondingly affect the inferred scale of inflation. However, if such a bound were to truly apply to KK gravitons, one would have a no-go theorem for compactifications that would forbid consistent solutions to higher dimensional Einstein gravity with an effective 4D Hubble scale that is greater than the compactification scale. No such no-go theorem exists\footnote{Were such a theorem to exist, higher dimensional Einstein gravity would have to possess at least one other characteristic scale in addition to the higher dimensional Planck mass.}. We demonstrate this by explicit example in appendix A, where we construct a solution to 5D Einstein gravity on an orbifold topology with an empty bulk and bounding branes of opposite tensions that support an induced 4D dS geometry:
\eq{5ddec0}{ds^2 = \frac{\left(1 + H|y|\right)^2}{H^2\tau^2}\left( -d\tau^2 + dx_1^2 + dx_2^2 + dx_3^2\right) + dy^2 }
where
\eq{hans0}{H = -\kappa_5^2\Lambda_0/6}
with $\Lambda_{0}, \Lambda_c$ being the tensions of the branes at $y=0$ and $y_c$ respectively, where $\Lambda_0$ is taken to be negative so that one has an expanding induced dS solution, the two tensions being related by the junction conditions at $y_c$ as $\Lambda_c = -\Lambda_0/(1 + H|y_c|)$. By adjusting $\Lambda_c$, one can thus keep the induced Hubble factor fixed whilst simultaneously dialing the physical inter-brane separation to be as large as one desires
\eq{}{|y_c| = \frac{6(\Lambda_0 + \Lambda_c)}{\kappa_5^2\Lambda_0\Lambda_c}.} Hence in keeping $H^2 = \Lambda_0^2\kappa_5^4/36$ fixed, our solution consistently attains $H^2 \gg \frac{1}{y_c^2}$, and given that the KK mass spectrum scales up to pre-factors of order unity as
\eq{}{m^2 \sim \pi^2\frac{n^2}{y_c^2}}
we find an explicit construction where the masses of an arbitrary but finite number of KK modes can be made less than the Hubble scale. In appendix B we understand precisely how KK gravitons \textit{evade} Higuchi's bound in this context. The reason for this is straightforward and has to do with the fact that compactifications necessarily spontaneously break higher dimensional Lorentz invariance. As a result, background sources contribute terms that precisely cancel what would have been problematic terms in the equation of motion for the massive spin-2 graviton, as could have been inferred from the outset by the healthiness of higher dimensional Einstein gravity.
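A small sketch of the counting implicit above (units arbitrary; the order-unity prefactor in the KK spectrum is dropped, as in the text): since $m_n \sim \pi n/y_c$, the number of KK gravitons below the Hubble scale is $\sim H y_c/\pi$, and can be dialed up at fixed $H$ by increasing the brane separation.

```python
import math

# Sketch: number of KK gravitons lighter than H for the 5D brane solution,
# using m_n ~ pi n / y_c with order-unity prefactors dropped. H and y_c are
# illustrative inputs in matching arbitrary units.
def n_sub_hubble_kk(H, y_c):
    """Largest n with pi n / y_c < H."""
    return max(0, math.floor(H * y_c / math.pi))

print(n_sub_hubble_kk(1.0, 1.0))     # H y_c ~ 1: no sub-Hubble resonances
print(n_sub_hubble_kk(1.0, 100.0))   # large separation at fixed H: many
```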
Furthermore, it was argued in \cite{Kleban} that lower spin particles could not affect the observed spectrum of tensor perturbations since the latter can only be generated from the Einstein Hilbert term in the effective action:
\eq{ttact}{S = \frac{\mplf^2}{2}\int\sqrt{-g}R.}
This statement is inaccurate, as taking the above as the only source of TT perturbations leads to the contradiction that it would be possible to infer a scale for inflation beyond where a classical description of geometry breaks down (\ref{mc}). Focussing presently on a strictly 4D context for clarity of discussion, one need only realize that it is in fact not (\ref{ttact}), but the effective action\footnote{Where $\phi$ is the inflaton that sources the background evolution.}
\eq{1li}{ S = \int d^4x\sqrt{-g}\left(\frac{\mplf^2}{2}R -\frac{1}{2}\partial_\mu\phi\partial^\mu\phi - V(\phi) + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu}\right) + ... }
that one has to work with in extracting the tensor power spectrum where, as reviewed in the previous section, $c_{1,2} \sim N$ are spin dependent weighted indices that count the \textit{total} number of massive species of all spins present and where the ellipses denote higher order terms in the curvature and loop expansion (with each independent loop momentum contributing a factor of $N$). On a given background, the resulting quadratic action for the TT polarizations of the graviton is given by
\eq{stt}{S_{TT} = \frac{\mplf^2}{8}\int d^4x\sqrt{-g_{0}}\left[ \dot h_{ij}\dot h_{ij} - \frac{1}{a^2}\partial_k h_{ij}\partial_k h_{ij}\right]\left(1 + c\frac{H_*^2}{\mplf^2} + ...\right)}
where the correction term is obtained from the last two terms in (\ref{1li}) with two derivatives acting on the background\footnote{More generally, the leading corrections coming from the 2n$^{th}$ derivative term in the effective action to the graviton propagator denoted by ellipses are proportional to $c^n H_*^{2n}/\mplf^{2n}$.}. This expansion breaks down precisely when
\eq{}{H_* \sim \frac{\mplf}{\sqrt N}}
implying the bound (\ref{hbound}) that the scale of inflation cannot be greater than the strong coupling scale $M_{**}$. Furthermore, as seen in the previous section, lower spin species that couple to the trace of the energy momentum tensor do indeed affect the spectrum of tensor and curvature perturbations in such a way that the usual single field tensor to scalar consistency relation is violated (\ref{csv}) due to the process dependence of $M_*$.
\section{Concluding remarks}
In this note, we have elaborated upon various consequences of the fact that the characteristic strength of gravity at a given energy ($M_*$) and its strong coupling scale ($M_{**}$) are in general different from each other and the macroscopically determined $\mplf$, particularly as they relate to inferring absolute scales from cosmological observations. This is because universally coupled species (defined as all particles that can mediate \textit{tree level} interactions between conserved sources) affect the strength of gravity at distances smaller than their Compton wavelength. Moreover, all species present (universally coupled, hidden or otherwise) drag down the strong coupling scale -- where a classical description of geometry breaks down -- as $M_{**} = \mplf/\sqrt N$, where $N$ is the total number of species. This necessarily bounds the scale of inflation from above. As stressed, although observables are dimensionless ratios of quantities measured at a fixed scale and thus independent of the units in which they are expressed, inferring an absolute scale for inflation from any detection of primordial tensors is complicated by the fact that we simply do not know $M_*$ and $M_{**}$ during inflation. Along the way, we made an observation of possible wider interest, namely that KK gravitons necessarily evade the Higuchi bound on any consistent compactification of higher dimensional Einstein gravity -- a result guaranteed by the healthiness of the Einstein-Hilbert action in any number of dimensions. We understand why this is so in the appendices.
\acknowledgements \noindent SP is supported by funds from the Swiss National Science Foundation.
\section{Introduction}
\label{ch1}
The birth of the very first stars in the Universe must have occurred at redshifts $z \sim 15 - 30$ in dark matter mini-halos with $\sim 10^{6}$ solar masses (e.g., \cite{Haiman1996}, \cite{Tegmark1997}). Such mini-halos were composed of a primordial gas of a few chemical species in which the main coolant was molecular hydrogen (\cite{Galli1998}). Thus, the expected temperature for these primordial clouds is about 300 K, thirty times greater than the temperature in typical present-day clouds. The Jeans mass associated with these clouds, and hence the mass of the collapsing objects, is therefore greater than that of their present-day counterparts.\\
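The temperature argument above can be made explicit with a short sketch (the 10 K present-day cloud temperature is an assumption for illustration; the text only states that primordial clouds are about thirty times hotter): at fixed density, $M_J \propto T^{3/2}\rho^{-1/2}$, so a 300 K cloud has a Jeans mass roughly $30^{3/2} \approx 160$ times larger than a present-day one.

```python
# Sketch: Jeans-mass scaling M_J ∝ T^{3/2} rho^{-1/2}. The 10 K present-day
# temperature is an illustrative assumption; densities are taken equal.
def jeans_mass_ratio(T_hot, T_cold, rho_hot=1.0, rho_cold=1.0):
    return (T_hot / T_cold)**1.5 / (rho_hot / rho_cold)**0.5

print(round(jeans_mass_ratio(300.0, 10.0)))  # ~160
```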
From hydrodynamical simulations, it has been shown that several stars could be born from a single dark matter mini-halo, contrary to past results that pointed towards the formation of a single star per dark matter mini-halo. Moreover, the clear impact of some chemical species on the star formation process has been established. In particular, \cite{Bromm2001} proposed that the presence of metals triggers fragmentation in metal-deficient primordial clouds above a critical metallicity $Z/Z_{\odot} = 10^{-3.5}$, which has been confirmed by recent investigations, e.g., \cite{Bovino2014}, \cite{Safranek2014}. But, although the qualitative picture of the formation of Pop III.1 stars is rather well known, how the star formation mode shifts from extremely massive stars with 100-1000 solar masses to present-day stars is still unclear. The development of surveys searching for the most metal-deficient stars has shown that the ratio of oxygen, carbon and nitrogen to iron is enhanced for around one-quarter of all known stars with $\mathrm{[Fe/H]} < -2.0$ (\cite{Beers2005}). These particular stars are now collectively known as carbon-enhanced metal poor (CEMP) stars, and have been defined, somewhat arbitrarily, to have $\mathrm{[C/Fe]} > +0.7$. Furthermore, subsequent studies have divided the stars falling under the CEMP definition into four sub-groups on the basis of the abundances of their neutron-capture species. The CEMP-s stars show an overabundance of chemical species produced by the s-process, the CEMP-r stars show an overabundance of chemical species produced by the r-process, the CEMP-s/r stars show an overabundance of elements related to both processes, while the CEMP-no stars show no overabundance of elements related to either the s-process or the r-process.\\
Abundances for the s-group are well explained by means of mass transfer in a binary system from an AGB star to a smaller secondary star, which is the one observed today. For the CEMP-no group the picture is more complicated, as several progenitors have been proposed by different authors. As found by \cite{Hansen2016b}, CEMP-no stars seem to be bona-fide second-generation stars. This has been proposed on the basis of multiple observational findings (e.g., \cite{Cook2011}, \cite{Cook2012}, \cite{Carollo2012}, \cite{Yoon2016}, \cite{Hansen2016b}).\\
The paper is structured as follows. Sec. \ref{ch2} describes the computational scheme employed in our simulations, as well as the initial conditions for all our models and their features. Sec. \ref{ch3} contains a description of our results, while our conclusions are presented in Sec. \ref{ch4}.
\section{Methods}
\label{ch2}
Simulations were carried out by combining two different codes. One was GRADSPH\footnote{\url{https://www.swmath.org/software/1046}}, developed by \cite{Vanaverbeke2009}, a parallel, fully three-dimensional TREESPH code designed to evolve self-gravitating astrophysical fluids. The other was KROME\footnote{\url{http://www.kromepackage.org}}, developed by \cite{Grassi2014}, an astrochemical open-source package to treat the microphysics of the collapse, such as the evolution of the temperature and of the chemical species included in the adopted networks. This framework has already been used by \cite{Riaz2018c} to investigate primordial star formation and its binary properties. GRADSPH has been further tested on star formation and its evolution in \cite{Riaz2018a} and \cite{Riaz2018b}.\\
We perform several simulations, varying the initial metallicity of the cloud from a primordial case to $Z/Z_{\odot}=10^{-2}$, including the one given by the observational pattern of the Keller star \citep{Keller2014}. All simulations were started at the same redshift $z = 15.0$, an initial temperature $\mathrm{T} = 300$ K, and an initial density $\rho = 10^{-22}$ g $\cdot$ cm$^{-3}$, from which, to ensure the gravitational collapse of the clouds, the initial mass was set to $\mathrm{M}_{J} = 1.026\times10^{6}$ $\mathrm{M}_{\odot}$. Moreover, we defined two groups of simulations based on their chemical pattern, labeled as p-runs, for the ones using a primordial network which includes nine chemical species: H {\sc i}, H {\sc ii}, He {\sc i}, He {\sc ii}, He {\sc iii}, e$^{-}$, H$_{2}$ {\sc i}, H$_{2}$ {\sc ii}, and H$^{-}$; and m-runs, for the ones using a metal-enriched network which includes the aforementioned species plus the metal species C {\sc i}, C {\sc ii}, O {\sc i}, O {\sc ii}, Si {\sc i}, Si {\sc ii}, and Si {\sc iii}. All species were initialized in number densities, with a value of almost zero ($n_{X} = 10^{-40}$ cm$^{-3}$), with the exception of H {\sc i}, He {\sc i}, H$_{2}$ {\sc i}, H {\sc ii}, e$^{-}$, C {\sc ii} (carbon was assumed to be totally ionized), O {\sc i}, and Si {\sc i}. The non-metal species were initialized in number densities as $n_{\mathrm{H}} = 44.81$, $n_{\mathrm{He}} = 3.72$, $n_{\mathrm{H}_{2}} = 2.98\times10^{-5}$, and $n_{\mathrm{H}^{+}} = 5.97\times10^{-3}$. The metal species were computed on-the-fly by KROME for models met2 ($Z/Z_{\odot}=10^{-2}$) and met3 ($Z/Z_{\odot}=10^{-3}$) and scaled according to their metallicities, while for the Keller model the reported observed abundances (\cite{Keller2014}) were used. The initial abundance of the electrons was computed on-the-fly by KROME for all models, such that the positive charge of the species was balanced.
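As a minimal illustration of the charge-balance initialization described above (only $n_{\mathrm{H}^{+}} = 5.97\times10^{-3}$ cm$^{-3}$ is taken from the text; the ionized-metal density below is a placeholder, not the value KROME actually computes):

```python
# Sketch: initial electron density set to balance the total positive charge,
# mirroring the KROME initialization described in the text. Only n(H II) is a
# quoted value; the C II entry is a placeholder assumption.
ions = {
    "H II": (1, 5.97e-3),   # (charge Z, number density in cm^-3), quoted
    "C II": (1, 1.0e-8),    # placeholder metal ion density (assumption)
}

n_electron = sum(Z * n for Z, n in ions.values())
print(f"n_e = {n_electron:.5e} cm^-3")
```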
\section{Results}
\label{ch3}
Fig. \ref{n_T} shows the temperature evolution as a function of density for the different cloud models resulting from the one-zone simulations. The dotted line represents the primordial model, the dotted-dashed line the met3 model, the dashed line the met2 model, and the solid line the Keller model. The red bottom line represents the CMB floor temperature given by the initial redshift of the simulations. The temperature is given in K and the number density in cm$^{-3}$. The enhancement in the cooling rate of the cloud is evident from the figure: even for a metal content as low as $Z/Z_{\odot}=10^{-3}$ the temperature of the cloud drops drastically compared to the primordial model at densities of $\sim 10^{3} \, \mathrm{cm}^{-3}$. Moreover, for a metallicity of $Z/Z_{\odot}=10^{-2}$ the cloud is already able to reach the CMB floor temperature, in agreement with previous results. Further, the temperature evolution of the Keller model is very similar to that of the met2 model, due to the high abundance of carbon. Fig. \ref{hydro_rho_T} shows the temperature evolution as a function of density for the primordial model resulting from the hydrodynamical runs. The red solid bottom line represents the CMB floor temperature.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{n_T.png}
\caption{Density profile of the temperature evolution of prestellar clouds with different chemical species abundances. The dotted line represents the primordial model, the dotted-dashed line the met3 model, the dashed line the met2 model, and the solid line the Keller model. The red bottom line represents the CMB floor temperature given by the initial redshift of the simulations.}
\label{n_T}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{hydro_rho_T.png}
\caption{Density profiles of temperature evolution of the prestellar cloud of the primordial model resulting from the hydrodynamical runs. The red bottom line represents the CMB floor temperature given by the initial redshift of the simulations.}
\label{hydro_rho_T}
\end{figure}
\section{Discussion}
\label{ch4}
We have presented the coupling between KROME and GRADSPH (Fig. \ref{hydro_rho_T}), as well as results using only the former (Fig. \ref{n_T}). From Fig. \ref{n_T} the enhancement in the cooling of the clouds is evident, which is consistent with previous results (e.g., \cite{Bovino2014}). Further, the results from the one-zone runs are in agreement with the metallicity threshold proposed by \cite{Bromm2001}, showing that for a metallicity $Z/Z_{\odot}=10^{-2}$, clouds are already able to reach the CMB floor temperature. In addition, the high abundance of ionized carbon and neutral oxygen in the Keller model allows the cloud to closely follow the temperature evolution of the met2 model. This reflects their major contribution as cooling channels under high-redshift star-forming conditions, which is in agreement with previous results.\\
In order to improve the accuracy of the results generated by the simulations, it is necessary to include further physical processes such as the presence of a UV radiation background or the treatment of a dust grain distribution. The former has been shown to have a lesser impact on the thermodynamics of the collapse of a cloud, but this needs to be confirmed by further studies. The latter has been shown by several authors to have a major impact on the thermodynamics of a collapsing cloud, acting as a catalyst for several chemical reactions that are impossible without the presence of a third body, or as a shield against the external radiation that hits the cloud.
\begin{acknowledgement}
The simulations were performed with resources provided by the Kultrun Astronomy Hybrid Cluster via the projects Conicyt Programa de Astronomia Fondo Quimal 2017 (project code QUIMAL170001), Conicyt PIA (project code ACT172033), and Fondecyt Iniciacion (project code 11170268). Powered@NLHPC: This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02). DRGS, SB, VD, CO and FF thank for funding via CONICYT PIA ACT172033. DRGS and SB acknowledge funding through CONICYT project Basal AFB-170002. RR, CO, FF and DRGS thank for funding through the 'Concurso Proyectos Internacionales de Investigaci\'on, Convocatoria 2015' (project code PII20150171). DRGS, SB and VBD thank for funding via CONICYT PIA ACT172033. FF and VBD thank for funding through Fondecyt regular (project code 1161247). VBD thanks to Conicyt for financial support on her master studies (CONICYT-PFECHA/Mag\'isterNacional/2017-22171293).
\end{acknowledgement}
\bibliographystyle{baaa}
\small
\section{Introduction}
The idea of making a note of which rules are used as they are applied is
quite a simple one and it would be easy to regard it as too trivial to
spend any time on. However, when we look into the algebraic structure of
the records themselves, things become a lot less trivial. The background
to this work includes the papers by Squier, Lafont, Prout\'e, Otto,
Cremanns, Anick, Kobayashi, Pride and others on finiteness conditions for
monoids. The `combinatorial' finiteness condition is that a monoid has a
finite complete presentation. This implies the homological finiteness
condition $FP_\infty$~\citep{Anick,Squier87,Kobayashi} and also the
`homotopical' finiteness condition FDT \citep{Squier94}. It is also known
that the weaker homological condition $FP_3$ is not sufficient for either
FDT or the existence of a complete rewriting system in the case of
monoids~\citep{Squier94}.
Our aim is to use enhanced rewriting procedures to explicitly provide:
\begin{enumerate}[i)]
\item
A finite complete rewriting system (combinatorial specification).
\item
A finite set of homotopy generators for $P^{(2)}\Gamma$ (homotopical
specification).
\item
A (small) finitely generated resolution (homological specification).
\end{enumerate}
Logged rewriting for group presentations~\citep{paper5} gives procedures
for representing consequences of the relations of the presentation as
elements of a pre-crossed module and algorithms for computing generators
of the modules of identities among relations. In the monoid case the
relations are given by pairs of terms and the structure of a crossed
module is not appropriate to represent consequences of relations. It is
well known \citep{Stell,Street} that sesquicategories or 2-categories can
be used to model rewriting systems. It has been
proved~\citep{Pride99,Gilbert} that when the rewriting system comes from a
group presentation, the 2-category can be identified with the
crossed module of the presentation.
In the special case of groups, various results are known. In particular
there are methods for calculating a set of generators for the kernel
$\Pi_2$ of the crossed module of `consequences', which is useful for
constructing resolutions and calculating (co)homology. For the case where
the rewriting system does not present a group we detail the algebraic
structure of the analogue of $\Pi_2$; presenting an algorithm for
computing a set of generators for it; and provide justification that the
constructions we make give combinatorial, homotopical and (co)homological
detail in the same spirit as $\Pi_2$.
In the case of monoids, logged rewriting techniques have further
applications. Specifically, we have so far examined applications to the
analysis of coset and double coset systems and used logged rewriting to
provide an alternative to the Reidemeister-Schreier algorithm for finding
presentations of subgroups~\citep{DCosets,RSAlt}. Additionally, we show in
Section 8 that logged rewriting techniques are easily generalised to Kan
extensions where they provide a proof technique for a wide range of
decidability problems solvable by string rewriting~\citep{paper2}.
\section{Logged Rewriting Systems} \label{LRS}
A \emph{monoid presentation} is given as a pair $\mathcal{P} = \mon
\langle X | R \rangle$ where $X$ is a set of generators and $R$ is a set
of pairs $(l,r)$ of elements of the free monoid $X^*$. The monoid
presented, $M$, is the quotient obtained by factoring $X^*$ by $=_R$, the
congruence generated by $R$. The quotient monoid morphism will be denoted
$\theta:X^* \to M$.
We will assume that the reader is familiar with standard string rewriting
as in~\citep{BoOt}. The notation we use follows the usual conventions.
The set $R$ is a \emph{rewriting system} for the monoid $M$ and its
elements are referred to as {\em rules}. The \emph{reduction relation
generated by $R$} on the free monoid $X^*$ is denoted by $\to_R$, the
reflexive, transitive closure is denoted $\stackrel{*}{\to}_R$ and the
reflexive, symmetric, transitive closure $\stackrel{*}{\leftrightarrow}_R$
coincides with the congruence $=_R$. For convenience we assume that $R$
is compatible with an admissible well-ordering $>$; i.e. for all pairs
$(l,r) \in R$, we have $l>r$. This ensures that the relation $\to_R$ is
Noetherian.
The main aim of this section is to formally define a \emph{logged rewrite
system} for $\cP$. Such a system must not only reduce any word in $X^*$ to
an irreducible word (unique if the rewriting system is complete) but must
also express the actual reduction as a consequence of the original monoid
relations. The reader who does not wish to get into the details at this
stage may wish to think of a consequence of the monoid relations as a
sequence of rewrites recorded as: {\tt [prefix, rule, direction of rule,
suffix]} which must give a valid rewrite.
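Before fixing the formal framework, the record format can be made concrete with a small illustration. The following Python sketch (the names and the toy rewriting system are our own, purely illustrative) reduces a string while appending one {\tt [prefix, rule, direction, suffix]} record per step:

```python
def apply_logged(word, rules, log):
    """Apply the first applicable rule once, recording the step as
    [prefix, rule index, direction, suffix]."""
    for i, (l, r) in enumerate(rules):
        pos = word.find(l)
        if pos >= 0:
            prefix, suffix = word[:pos], word[pos + len(l):]
            log.append([prefix, i, +1, suffix])  # +1: l -> r; -1 would mean r -> l
            return prefix + r + suffix
    return None  # word is irreducible

# Toy system on {a, b}: rule 0 is aa -> a, rule 1 is bb -> b.
rules = [("aa", "a"), ("bb", "b")]
word, log = "aabba", []
while True:
    step = apply_logged(word, rules, log)
    if step is None:
        break
    word = step

print(word)  # aba
print(log)   # [['', 0, 1, 'bba'], ['a', 1, 1, 'a']]
```

Each record determines a valid rewrite because the prefix and suffix reconstruct exactly where the rule was applied.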
It is important to identify the algebraic framework for these
`consequences' in order to understand what we may do with them. Formally,
one represents consequences of group relations by elements of a crossed
module. Consequences of monoid relations cannot be represented in that
framework; essentially this is because the free monoid does not have
inverses. However, it is well known that general string rewriting systems
may be modelled by sesquicategories or
2-categories~\citep{Benson,Stell,Street}. Therefore to every monoid
presentation we shall associate a sesquicategory. Its 2-cells correspond
to possible sequences of rewrites and inverse rewrites between strings in
the free monoid, with respect to the given rewriting system. Formally:
\begin{Def}[Sesquicategory of Rewrites]\mbox{ }\\
The \emph{sesquicategory $SQ(\cP)$ of a monoid presentation $\cP$}
consists of the following:
\begin{itemize}
\item
a single 0-cell which is denoted $*$,
\item
a free monoid of 1-cells which are the elements of $X^*$,
\item
a collection of 2-cells which are sequences
$$
\alpha = u_1 \alpha_1^{\ep_1} v_1 \cdot
\cdots \cdot u_n \alpha_n^{\ep_n} v_n
$$
where $u_1, \ldots, u_n, v_1, \ldots, v_n \in X^*$,
$\alpha_1, \ldots, \alpha_n \in R \cup \{1\}$ and
$\ep_1, \ldots, \ep_n = \pm 1$
such that $u_i \, \tgt(\alpha_i^{\ep_i}) \, v_i
= u_{i+1} \, \src(\alpha_{i+1}^{\ep_{i+1}}) \, v_{i+1}$
for $i=1, \ldots, n\! -\! 1$.
\item
left and right actions of the 1-cells upon
the 2-cells (whiskering) i.e. for any rewrite $\alpha$ and any
elements $u$ and $v$ of the free monoid we say that $u \alpha v$ is a
rewrite and
$\src(u \alpha v) = u \src(\alpha) v$ and
$\tgt(u \alpha v) = u \tgt(\alpha) v$.
\item
identity rewrites for each string $w\in X^*$, denoted
$1_w$ where $\src(1_w)=\tgt(1_w)=w$
with the property that
$$ u \cdot 1_w \cdot v = 1_{uwv}$$ for all $u, v$ in $X^*$.
\item
a partial (`vertical') composition of rewrites, defined so that
$\alpha \cdot \beta$ is a rewrite with
$\src( \alpha \cdot \beta ) = \src( \alpha )$ and
$\tgt( \alpha \cdot \beta ) = \tgt( \beta )$
whenever $\tgt( \alpha ) = \src( \beta )$.
\end{itemize}
\end{Def}
For the above definition it can be verified that the sesquicategory axioms
hold with respect to vertical composition and the whiskering action.
Further, we shall allow rewrites to be cancelled by the reverse
application of the rewriting sequence.
The formal inverse of any rewrite $\alpha$ is
denoted $\alpha^{-1}$ where $\src(\alpha^{-1})=\tgt(\alpha)$ and
$\tgt(\alpha^{-1})=\src(\alpha)$ and we allow that
$$
\alpha \cdot \alpha^{-1} = 1_{\src(\alpha)} \text{ for all rewrites }
\alpha.
$$
This gives $\cdot$ a groupoid structure, so we may refer to the
sesquigroupoid $SQ(\cP)$.
In the case where we can apply the rule $\alpha$ to one substring of a
string and the rule $\beta$ to another substring which is completely
disjoint from the first, it is natural to regard the order in which the
rules are actually applied as immaterial. This interchangeability of
non-overlapping rewrites is captured by the interchange law on the
sesquicategory, giving us a 2-category. We shall denote the set of 2-cells
in $SQ(\cP)$ by $C_2$.
\begin{Def}[2-category of Rewrites]\mbox{ }\\
The 2-category of rewrites $C_2(\cP)$ is obtained by factoring the 2-cells
$C_2$ of $SQ(\cP)$ by the interchange law:
$$
I= \{ (\alpha \, \src(\beta) \cdot \tgt(\alpha) \, \beta,
\src(\alpha) \, \beta \cdot \alpha \, \tgt(\beta)) : \alpha, \beta \in C_2
\}
$$
\end{Def}
Specifically, the set of pairs of $I$ generates a relation on $C_2$
$$ \{ (\gamma \cdot u \alpha_1 v \cdot \delta,
\gamma \cdot u \alpha_2 v \cdot \delta) : (\alpha_1,
\alpha_2) \in I, u,v \in X^*, \gamma, \delta \in C_2 \}
$$
and the reflexive, symmetric, transitive closure of this is $=_I$, which
preserves both vertical composition and whiskering.
Congruence classes are formally denoted with square brackets so
$[\alpha]_I$ denotes the class of $C_2$ under $=_I$ that contains $\alpha$.
Whiskering and vertical composition are preserved and so may be applied to
the congruence classes:
$u[\alpha]_Iv=[u \alpha v]_I$ for all $u,v \in X^*$ and $[\alpha]_I \cdot
[\beta]_I = [\alpha \cdot \beta]_I$.
A horizontal composition of the congruence classes may also be defined:
$[\alpha]_I \circ [\beta]_I = [\alpha \, \src(\beta) \cdot \tgt(\alpha)
\, \beta]_I$.
In the case of term rewriting one does not always wish to factor out by
the interchange law as it destroys the notion of length (number of steps)
of a rewrite. In the case of string rewriting we do not have to worry
about notions of length of derivation, thus we use the 2-category.
However, it should be noted that whilst rewrites may be represented
uniquely in the sesquicategory, the word problem for the 2-category (the
generalisation of a crossed module) is generally unsolvable. Like many
authors, for convenience, we abuse notation a little, representing
rewrites that should strictly be written as classes $[\alpha]_I$ by
non-unique representatives $\alpha$ in the sesquicategory. Thus a pair of
rewrites $\alpha, \beta \in C_2$ are equivalent if and only if
$[\alpha]_I = [\beta]_I$.
In the context of groups, the sesquicategory associated to a monoid
presentation is well known. Pride proved that if the monoid presentation
involved is obtained from a group presentation then the associated
2-category is isomorphic (as a crossed module) to the free crossed module
associated to the group presentation~\citep{Gilbert,Pride99}. Logged
rewriting for groups was established by using the crossed module structure
for the logs.
We now formally define logged rewriting using the 2-category associated
with a monoid presentation.
\begin{Def}[Logged Rewriting System]\mbox{ }\\
A \emph{logged rewriting system} for a presentation $\mathcal{P}$ of a
monoid $M$ is a collection of 2-cells (rewrites)
$$
\cL = \{\alpha_1, \ldots, \alpha_n\}
$$
of the associated 2-category $C_2(\cP)$ so that the
\emph{underlying rewriting system}
$$
R_\cL = \{ (\src(\alpha_1), \tgt(\alpha_1) ), \ldots,
(\src(\alpha_n), \tgt(\alpha_n) ) \}
$$
is a rewriting system for $M$.
\end{Def}
A rewriting system $R$ on a monoid $M$ generates a {\em reduction
relation} $$\to_R = \{ (ulv,urv) : (l,r) \in R, u,v \in M \}. $$
The reflexive, transitive closure of this relation is denoted
$\stackrel{*}{\to}_R$, and the reflexive, symmetric, transitive closure is
denoted $\stackrel{*}{\leftrightarrow}_R$ and coincides with the
congruence generated by $R$ on $M$, denoted $=_R$.
The logged reduction of a string by a rule $\alpha$ is written as:
$u \alpha v: u \src(\alpha) v \to u \tgt(\alpha) v$ and the rewrite
recorded is $u \alpha v$.
If the elements $(l,r)$ of a rewriting system on a free monoid $X^*$ are
ordered such that $l>r$ with respect to some well-ordering on $X^*$, then
the resulting reduction system is {\em Noetherian}; i.e. an irreducible
element is reached after finitely many reductions. A reduction system is
{\em confluent} if for any string $w$ there exists a unique irreducible
string $\bar{w}$ such that $w \stackrel{*}{\to_R} \bar{w}$. A rewrite
system is said to be {\em complete} if the corresponding reduction
relation is both Noetherian and confluent. This is a desirable property,
since any pair of strings $w_1, w_2$ can be reduced in a finite number of
steps to their irreducible forms $\bar{w_1}$ and $\bar{w_2}$ which will be
equal if and only if $w_1 =_R w_2$; i.e. the {\em word problem} is
decidable.
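The decision procedure just described can be sketched in a few lines of Python (the toy example is our own: the single rule $ba \to ab$ is complete and presents the free commutative monoid on two generators):

```python
def normal_form(word, rules):
    """Repeatedly apply rules until no left-hand side occurs.
    For a complete system the result is the unique irreducible form."""
    changed = True
    while changed:
        changed = False
        for l, r in rules:
            if l in word:
                word = word.replace(l, r, 1)
                changed = True
                break
    return word

def equal_in_monoid(w1, w2, rules):
    """Decide w1 =_R w2 by comparing irreducible forms."""
    return normal_form(w1, rules) == normal_form(w2, rules)

# ba -> ab: complete, presents the free commutative monoid on {a, b}.
rules = [("ba", "ab")]
print(equal_in_monoid("bab", "abb", rules))  # True
print(equal_in_monoid("ab", "b", rules))     # False
```

Termination is guaranteed by compatibility with a well-ordering, and confluence makes the comparison of normal forms a correct equality test.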
\section{Logged Completion}
The Knuth-Bendix algorithm attempts to convert an arbitrary rewriting
system into a complete one by adding rules compatible with the ordering to
the system to try to force confluence. The key concept here is that of
{\em critical pairs} which are pairs of reductions which can be applied to
the same string to obtain two different results. The important critical
pairs are associated with the \emph{overlaps} of the rules in the
rewriting system. When considering normal critical pairs we only care
about the sources and targets of the rewrites and the relevant information
identifying the overlap. When we are dealing with a logged rewriting
system it is necessary to think of the sequences of rules giving the
instructions permitting both of the rewrites and to include these logs as
part of the critical pair information.
\begin{Def}[Logged Critical Pairs]\mbox{ }\\
An overlap occurs between the logged rewrites
$\alpha_1 :l_1 \to r_1$ and
$\alpha_2: l_2 \to r_2$ of $\cL$ whenever one of the following is true:
\vspace{-1ex}
\begin{center}
\begin{tabular}{llll}
i) \quad $u_1 l_1 v_1 = l_2$,
& ii) \quad $u_1 l_1 = l_2 v_2$,
& iii) \quad $l_1 v_1 = u_2 l_2$,
& iv) \quad $l_1 = u_2 l_2 v_2$.
\end{tabular}
\end{center}
for some $u_1, u_2, v_1, v_2 \in X^*$.
The {\em logged critical pair} resulting from the overlap is a whiskered
pair $(u_1 \alpha_1 v_1, u_2 \alpha_2 v_2)$ for the appropriate $u_1,
u_2, v_1, v_2 \in X^*$.
\end{Def}
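The four overlap cases can be detected mechanically. A minimal Python sketch (illustrative only) that returns the superposition strings on which two left-hand sides overlap:

```python
def overlap_words(l1, l2):
    """Strings witnessing an overlap of left-hand sides l1 and l2:
    containment (cases i and iv) or a shared end (cases ii and iii)."""
    found = set()
    if l1 in l2:                      # case i:  u1 l1 v1 = l2
        found.add(l2)
    if l2 in l1:                      # case iv: l1 = u2 l2 v2
        found.add(l1)
    for k in range(1, min(len(l1), len(l2))):
        if l1[-k:] == l2[:k]:         # case iii: l1 v1 = u2 l2
            found.add(l1 + l2[k:])
        if l2[-k:] == l1[:k]:         # case ii:  u1 l1 = l2 v2
            found.add(l2 + l1[k:])
    return found

print(sorted(overlap_words("aba", "ab")))  # ['aba', 'abab']
print(sorted(overlap_words("ab", "cd")))   # []
```

Each returned string admits the two whiskered reductions that make up the corresponding logged critical pair.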
Given a monoid presentation $\mathcal{P}= \mon \langle X | R \rangle$ we
can associate to it an \emph{initial logged rewriting system} $\cL_{init}$
which consists of one 2-cell $\alpha$ for each rule $(l,r)$ of $R$ with
$\src(\alpha)=l$ and $\tgt(\alpha)=r$. These 2-cells are the generators of
the sesquigroupoid associated to the presentation.
If the initial logged rewriting system is not complete then we can attempt
to transform it into a complete logged rewriting system, by adding 2-cells
which will make the underlying rewriting system complete, in a version of
the Knuth-Bendix algorithm which records information that is usually
discarded. Clearly, this recorded completion terminates exactly when the
usual completion procedure would terminate.
\begin{Alg}[Logged Knuth-Bendix Procedure]\mbox{ }\\
\vspace{-4ex}
\begin{small}
\begin{center}
\begin{tabular}{p{13.5cm}}
\hline
\begin{enumerate}[{LKB}1:]
\item
(Input)
Let $\cP$ be a presentation of a monoid with generators $X$ and relations
$(l_1,r_1), \ldots, (l_n,r_n)$ where $l_1>r_1, \ldots, l_n>r_n$ for some
well-ordering on the free monoid $X^*$. Define $\cL_{init}$ to be the
set of 2-cells or logged rules $\{\alpha_1, \ldots, \alpha_n\}$, where
$\src(\alpha_i)=l_i$ and $\tgt(\alpha_i)=r_i$ for $i=1, \ldots, n$.
\item
(Initialise)
Set $\cL_{all}=\cL_{init}$; $\cL_{new}=\cL_{init}$; and let $C$ be the
empty list.
\item
(Search for Overlaps and Record Critical Pairs)
Whenever an overlap occurs between the rewrites $\alpha_a \in \cL_{all}$
and $\alpha_n \in \cL_{new}$, record the associated critical pair by
adding the element $(u_a, \alpha_a, v_a, u_n, \alpha_n, v_n)$ to the list
$C$ (where $u_a, v_a, u_n$ or $v_n$ may be the identity element).
\item
(Attempt to Resolve Critical Pairs)
Set $\cL_{new}=\emptyset$.
For every element of $C$ consider the pair $(u_a r_a v_a, u_n r_n v_n)$,
reducing each string by $\cL_{all}$ to the irreducible strings $z_a$ and
$z_n$ respectively. If $z_a=z_n$ then the critical pair is said to resolve
and it can be removed from $C$. Otherwise we must add a new logged rule to
the system. If $\beta_a$ and $\beta_n$ are the logs of the reductions
to $z_a$ and $z_n$ then the new logged rule is
$\gamma=\beta_a^{-1} \cdot u_a \alpha_a^{-1} v_a \cdot u_n \alpha_n
v_n \cdot \beta_n$
if $z_a > z_n$ and
$\gamma=\beta_n^{-1} \cdot u_n \alpha_n^{-1} v_n \cdot u_a \alpha_a
v_a \cdot \beta_a$ if $z_n>z_a$.
Add $\gamma$ to $\cL_{new}$ and to $\cL_{all}$.
\item
(Loop)
If $\cL_{new}$ is non-empty then loop to {LKB$3$}.
Otherwise the procedure terminates: all critical pairs of $\cL_{all}$ have
been tested, and resolve.
\item
(Output)
Output $\cL_{all}$, a complete logged rewriting system for $\cP$.
\end{enumerate}
\mbox{ }\\
\hline
\end{tabular}
\end{center}
\end{small}
\end{Alg}
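The procedure admits a compact sketch if the logs are dropped (LKB3 and LKB4 then reduce to the classical completion step; threading logs through the reduction function and the new-rule construction recovers the full procedure). The following Python sketch is illustrative, using the shortlex ordering:

```python
def nf(word, rules):
    """Reduce a word to an irreducible form under the current rules."""
    changed = True
    while changed:
        changed = False
        for l, r in rules:
            pos = word.find(l)
            if pos >= 0:
                word = word[:pos] + r + word[pos + len(l):]
                changed = True
                break
    return word

def overlap_words(l1, l2):
    """Superposition words: l1 containing l2, or a suffix of l1 matching
    a prefix of l2 (remaining cases arise by symmetry, since all ordered
    pairs of rules are examined)."""
    words = set()
    if l2 in l1:
        words.add(l1)
    for k in range(1, len(l1)):
        if l1[-k:] == l2[:k]:
            words.add(l1 + l2[k:])
    return words

def knuth_bendix(rules, max_rules=50):
    """Naive (unlogged) completion with respect to shortlex."""
    rules = list(rules)
    done = False
    while not done and len(rules) < max_rules:
        done = True
        for l1, r1 in list(rules):
            for l2, r2 in list(rules):
                for w in sorted(overlap_words(l1, l2)):
                    a = nf(r1 + w[len(l1):], rules)   # w starts with l1
                    p = w.rfind(l2)
                    b = nf(w[:p] + r2 + w[p + len(l2):], rules)
                    if a != b:                        # unresolved critical pair
                        new = (a, b) if (len(a), a) > (len(b), b) else (b, a)
                        rules.append(new)
                        done = False
    return rules

# mon<a, b | aba = b> completes by adding one rule: bba -> abb.
print(knuth_bendix([("aba", "b")]))  # [('aba', 'b'), ('bba', 'abb')]
```

In the logged version, the two reductions computed by `nf` would carry logs $\beta_a$ and $\beta_n$, and the appended rule would carry the composite 2-cell $\gamma$ of step LKB4.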
The immediate application for logged rewriting systems is in the provision
of witnesses for computation. An ordinary complete rewriting system can
determine whether or not two strings $s_1$ and $s_2$ represent the same
element of the monoid; a logged rewriting system produces a proof in terms
of a sequence of specific applications of the original monoid relations
which will transform $s_1$ into $s_2$.
This is a fairly shallow application, although variations on it are useful
in more complex algorithms such as~\citep{paper5}.
\section{Endorewrites}
Deeper information about the presentation can be gained by studying the
interaction of the relations with each other, known in group theory as the
{\em identities among relations}. The identities themselves represent
rewrite sequences which start at a word, send it through various
transformations and return it to its original form. For the monoid case,
we decided to refer to such rewrites as endorewrites. In the case of
monoids, the structure is necessarily less simple than the kernel of a
crossed module map. Note that we will continue to identify rewrites
which should strictly be written as classes $[\alpha]_I$ by (non-unique)
representatives $\alpha$ in the sesquicategory. Thus $\alpha, \beta \in EQ
\subseteq C_2$ are equal as rewrites if and only if $[\alpha]_I =
[\beta]_I$.
\begin{Def}[Endorewrites]\mbox{ }\\
A 2-cell $\alpha \in C_2$ is an \emph{endorewrite} on a string $w$ if
$\src(\alpha)=\tgt(\alpha)=w$.
\end{Def}
The set of all endorewrites is actually the equaliser object of the two
maps $\src, \tgt : C_2 \to X^*$ in the category of sets. We denote it
$EQ$.
\begin{Lem}[Endorewrite Structure]\mbox{ }\\
The set of all endorewrites $EQ$ is the disjoint union of the sets $EQ_w$
for $w \in X^*$ where
$$EQ_w = \{ \alpha : \src(\alpha)=\tgt(\alpha)=w \}.$$
Each $EQ_w$ is closed under vertical composition; and their union $EQ$ is
additionally closed under horizontal composition and whiskering.
\end{Lem}
Vertical composition is defined only within subsets $EQ_w$. Horizontal
composition is defined across subsets: if $\alpha \in EQ_w$ and
$\alpha' \in EQ_{w'}$, then $\alpha \circ \alpha' \in EQ_{ww'}$.
Whiskering means that for any substring $s$ of a string $w$ there is an
injective mapping $EQ_s \to EQ_w$ defined by $\alpha \mapsto u \alpha v$
where $usv=w$.
\begin{Lem}[Conjugate Endorewrites]\mbox{ }\\
If $\theta(w_1)=\theta(w_2)$ then for every $\beta \in C_2$ such that
$\src(\beta)=w_1$ and $\tgt(\beta)=w_2$ there exists a bijection
$\Phi_\beta:EQ_{w_1} \to EQ_{w_2}$ defined by
$\alpha \mapsto \beta^{-1} \cdot \alpha \cdot \beta$.
\end{Lem}
Thus the elements of $EQ_{w_2}$ are all conjugates of elements of $EQ_{w_1}$
so it becomes logical that we should only seek generators for
endorewrites $EQ_w$ of one representative string $w$ for each monoid
element $\theta(w)$. The next lemma helps to make this concrete.
\begin{Lem}[Partial Action of Rewrites on Endorewrites]\mbox{ }\\
There is a partial function $EQ \times (C_2, \cdot) \to EQ$,
defined by $\alpha^\beta = \beta^{-1} \cdot \alpha \cdot \beta$
for $\alpha \in EQ$ and $\beta \in C_2$ such that
$\src(\alpha)=\tgt(\alpha)=\src(\beta)$.
This satisfies the following properties:
\begin{enumerate}[i)]
\item
$\alpha^{1_{\src \alpha}} = \alpha$ for all $\alpha \in EQ$.
\item
$\alpha^{(\beta_1 \cdot \beta_2)} = (\alpha^{\beta_1})^{\beta_2}$ for all
$\alpha, \beta_1, \beta_2 \in C_2$ such that
$\src(\alpha)=\tgt(\alpha)=\src(\beta_1)$ and $\tgt(\beta_1)=\src(\beta_2)$.
\item
$u \alpha^\beta v = (u \alpha v)^{u \beta v}$ for all $u, v \in X^*$
whenever $\alpha^\beta$ is defined.
\item
$(\alpha_1 \cdot \alpha_2)^\beta = \alpha_1^\beta \cdot \alpha_2^\beta$
for all $\alpha_1, \alpha_2 \in EQ$ and $\beta \in C_2$ such that
$\tgt(\alpha_1 \cdot \alpha_2)=\src(\beta)$.
\end{enumerate}
\end{Lem}
The first two properties are the categorical equivalent of the properties
required for a partial monoid action, the second two show that the partial
action preserves the whiskering and vertical composition operations in
$EQ$. All the properties follow from the definitions of $(C_2, \cdot)$,
identity 2-cells $1_{\src(\alpha)}$ and the definition of $\alpha^\beta$.
Intuitively, $\alpha$ is like a circular walk: conjugating by $\beta$ just
means that we first walk down an additional path to the start of $\alpha$,
retracing our steps back along that path once the circular walk $\alpha$
is completed. Clearly the circular walk is not much more interesting for
having this initial path added to it and a guidebook that suggested all
conjugates of $\alpha$ were distinct jaunts would be absurd. Thus we
factor $EQ$ by this partial action and consider $\alpha$ to be
equivalent to all its possible conjugates. Formally:
\begin{Lem}[Classes of Endorewrites]\mbox{ }\\
Let $C_2(\cP)$ be the 2-category of rewrites for a monoid presentation
$\cP$ and let $EQ$ be the set of all endorewrites.
Then define
$$J = \{ (\alpha, \beta^{-1} \cdot \alpha \cdot \beta) : \alpha
\in EQ, \beta \in C_2 \text{ and } \src \alpha = \src \beta \}.$$
Let $=_{I+J}$ be the smallest congruence on $EQ$ with respect to $\cdot$
and whiskering which contains both $J$ and the interchange law $I$.
Then the quotient $EQ^J = EQ/\! =_{I+J}$ is well-defined, preserving both
vertical composition and whiskering.
\end{Lem}
To conclude this section we observe the following lemma.
\begin{Lem}[Structure of $EQ_w$]\mbox{ }\\
Let $C_2(\cP)$ be the 2-category of rewrites for a monoid presentation
$\cP$ of a monoid $M$.
Then $EQ_w^J$, the set of classes of endorewrites on any string $w \in X^*$,
is a $\mathbb{Z}M$-bimodule with respect to vertical composition and whiskering.
\end{Lem}
\begin{proof}
Vertical composition of conjugacy classes of $EQ_w$ gives an abelian
group structure: it is associative, with identity $[1_w]_{I+J}$;
the inverse of $[\alpha]_{I+J}$ is $[\alpha^{-1}]_{I+J}$;
and if $\alpha_1, \alpha_2 \in EQ_w$ then
$$[\alpha_1]_{I+J} \cdot [\alpha_2]_{I+J} =
[\alpha_1 \cdot \alpha_2]_{I+J} =
[(\alpha_1 \cdot \alpha_2)^{\alpha_2^{-1}}]_{I+J} =
[\alpha_2 \cdot \alpha_1 \cdot \alpha_2 \cdot \alpha_2^{-1}]_{I+J} =
[\alpha_2 \cdot \alpha_1]_{I+J}.$$
The left and right whiskering actions of $X^*$ on $EQ_w^J$ restrict to
well-defined left and right actions of $M$:
we have $u_1 \, \alpha \, v_1 =_{I+J} u_2 \, \alpha \, v_2$
whenever $\theta(u_1)=\theta(u_2)$ and $\theta(v_1)=\theta(v_2)$, since
$$ u_1 \, \alpha \, v_1 = 1_{u_1} \circ \alpha \circ 1_{v_1}
=_{I+J} 1_{u_2} \circ \alpha \circ 1_{v_2}
= u_2 \, \alpha \, v_2.$$
\end{proof}
\begin{Rem}
Note that horizontal composition is not abelian:
if $\alpha: w \to w$ and $\beta: z \to z$ then $\alpha \circ \beta: wz \to wz$
whilst $\beta \circ \alpha: zw \to zw$
and generally we cannot expect that $\theta(wz)=\theta(zw)$.
\end{Rem}
\section{Critical Pairs}
In this section we shall prove the intuitively reasonable idea that all
distinct circular routes come from examining the reconnection of
non-trivial diverging paths and thus provide a method for identifying all
the interesting endorewrites of any completable rewriting system.
Our main result requires that we first identify exactly what we mean by
`a generating set of endorewrites'.
A \emph{generating set} for $EQ$ must be a set of endorewrites $E$ such
that any other endorewrite of $EQ$ is equivalent under the interchange law
together with the conjugacy congruence $=_{I+J}$, to a product of
whiskered elements and inverse elements of $E$. Formally:
\begin{Def}[Generating Set for $EQ$]\mbox{ }\\
A \emph{generating set} for the endorewrites $EQ$ associated with a monoid
presentation is a set $E \subseteq EQ$ such that
for any $\gamma \in EQ$ there exist $\alpha_1, \ldots, \alpha_n \in E$
such that
$$\gamma =_{I+J} u_1 \alpha_1^{\ep_1} v_1 \cdot
\cdots \cdot u_n \alpha_n^{\ep_n} v_n$$
for some $u_1, \ldots, u_n, v_1, \ldots, v_n \in X^*$ and
$\ep_1, \ldots, \ep_n \in \{-1,1\} $.
\end{Def}
Our main theorem claims that a set of generating endorewrites $E$
can be produced from the critical pairs which result from overlaps of
the completed rewriting system.
In order to prove the theorem we use digraph arguments, a digraph being
associated with each endorewrite coming from a critical pair in the
following way:
\begin{Lem}[Digraphs associated with Endorewrites]\label{lem-yield}\mbox{ }\\
Given two strings $w$ and $z$, any pair of logged reductions $\alpha_1,
\alpha_2 : w \to z$ is represented by a labelled digraph
which is associated uniquely with an endorewrite.
\end{Lem}
\begin{proof}
\begin{tabular}{@{}p{12cm}@{}c}
Let $\alpha_1, \alpha_2 : w \to z$ in $C_2$. Then we
have a digraph $D(\alpha_1,\alpha_2)$, as shown, and the associated
endorewrite $\delta(\alpha_1, \alpha_2) = \alpha_1 \cdot \alpha_2^{-1} \in
EQ_w$ can be obtained by reading the labels anticlockwise from the edges,
beginning at the vertex which is greatest with respect to $>$ on $X^*$.
&
$$
\xymatrix{w \ar@/_/[d]_{\alpha_1} \ar@/^/[d]^{\alpha_2} \\ z}
$$
\end{tabular}
\end{proof}
\begin{Rem}[Resolved Critical Pairs Yield Endorewrites]\mbox{ }\\
If $C$ is the set of all logged critical pairs and $EQ$ is the set of all
endorewrites of a complete logged rewriting system, then there is a
map $\delta:C \to EQ$ associating an endorewrite with each critical pair.
In detail, if $c=(\alpha_1,\alpha_2)$ is a logged critical pair then there
is a string $w$ which may be rewritten in two ways -- $\alpha_1:w \to w_1$
and $\alpha_2:w \to w_2$, where $\alpha_1, \alpha_2 \in C_2$.
Since the pair can be resolved, there exists a string $z$ so that
$\beta_1:w_1 \to z$ and
$\beta_2:w_2 \to z$,
for some rewrite sequences $\beta_1, \beta_2$ in $C_2$.
It is immediate that
$\delta(c)=\alpha_1 \cdot \beta_1 \cdot \beta_2^{-1} \cdot
\alpha_2^{-1}$ is an endorewrite on $w$.
\end{Rem}
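With logs represented as step sequences, the endorewrite $\delta(c)$ of a resolved critical pair is obtained by composing the two branches, one of them formally inverted. A small illustrative Python sketch (the particular steps are hypothetical, not taken from any example above):

```python
def invert(log):
    """Formal inverse of a rewrite sequence: reverse the order of the
    steps and flip each direction flag."""
    return [(pre, rule, -d, suf) for (pre, rule, d, suf) in reversed(log)]

# A step is (prefix, rule index, direction, suffix); sequences compose
# by concatenation.  Hypothetical logs of a resolved critical pair on w:
alpha1 = [("", 0, +1, "a")]   # first branch:  w -> w1
beta1  = [("", 1, +1, "")]    # resolution:    w1 -> z
alpha2 = [("a", 0, +1, "")]   # second branch: w -> w2
beta2  = []                   # w2 = z, already irreducible

# delta(c) = alpha1 . beta1 . beta2^{-1} . alpha2^{-1}, an endorewrite on w
delta = alpha1 + beta1 + invert(beta2) + invert(alpha2)
print(delta)  # [('', 0, 1, 'a'), ('', 1, 1, ''), ('a', 0, -1, '')]
```

Inversion is an involution on logs, matching $(\alpha^{-1})^{-1} = \alpha$ in the sesquigroupoid.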
We now observe that the endorewrite resulting from a critical pair is
trivial when the rules involved are disjoint, and is otherwise a
whiskering of the endorewrite obtained by resolving the same pair of
rules on the minimal string on which the overlap occurs.
\begin{Lem}[Overlaps and Endorewrites]\label{lem-overlap}\mbox{ }\\
If $\alpha_1:l_1 \to r_1$ and $\alpha_2: l_2 \to r_2$ are rules of a
complete logged rewriting system $\cL$, such that they may both be applied to
a string $w$ then:
\begin{enumerate}[i)]
\item
if the rules overlap on $w$ then the endorewrite of the critical pair is
equivalent to a whiskering of the endorewrite given by a resolution of the
same pair of rules applied to the minimal string on which the same
overlap occurs.
\item
if the rules do not overlap on $w$ then resolution of the critical pair
yields the trivial identity.
\end{enumerate}
\end{Lem}
\begin{proof}
\noindent
\begin{tabular}{@{}p{8.2cm}@{}l}
In case (i) the rules overlap on $w$
so there exist
$u_1, v_1, v_2, x, y, z \in X^*$
such that $w = xyz$ and either
$y = u_1l_1v_1 = l_2$ or
$y = u_1l_1 = l_2v_2$.
In either case we can write $y = u_1l_1v_1 = l_2v_2$
and the logged reductions of $y$ are
$u_1 \alpha_1 v_1: y \to u_1 r_1 v_1$ and
$\alpha_2 v_2 : y \to r_2 v_2$.
By completeness there are logged reductions
$\beta_1 : u_1 r_1 v_1 \to t$ and
$\beta_2 : r_2 v_2 \to t$
such that
$\gamma =
u_1 \alpha_1 v_1 \cdot \beta_1 \cdot \beta_2^{-1} \cdot \alpha_2^{-1} v_2$
is an endorewrite.
The critical pair of reductions on $w$ are
$x u_1 \alpha_1 v_1 z : w \to x u_1 r_1 v_1 z$ and
$x \alpha_2 v_2 z : w \to x r_2 v_2 z$.
This pair can be resolved by
$x \beta_1 z : x u_1 r_1 v_1 z \to x t z$ and
$x \beta_2 z : x r_2 v_2 z \to x t z$.
The endorewrite associated to it is
$x \gamma z$.
&
$$
\xymatrix{
& y \ar@/_/[dl]_{u_1 \alpha_1 v_1}
\ar@/^/[dr]^{\alpha_2 v_2}&\\
u_1 r_1 v_1 \ar@/_/[dr]_{\beta_1} &
\gamma
& r_2 v_2 \ar@/^/[dl]^{\beta_2}\\
& t &\\
& xyz \ar@/_/[dl]_{x u_1 \alpha_1 v_1 z}
\ar@/^/[dr]^{x \alpha_2 v_2 z}&\\
x u_1 r_1 v_1 z \ar@/_/[dr]_{x \beta_1 z} &
x \gamma z
& x r_2 v_2 z \ar@/^/[dl]^{x \beta_2 z}\\
& x t z &\\}
$$
\end{tabular}
\noindent
\begin{tabular}{@{}p{7.8cm}@{}l}
In case (ii) the rules do not overlap on $w$ so there exist $x,y,z \in X^*$
such that $w = x l_1 y l_2 z$ and the logged reductions
shown in the digraph on the right apply.
This yields the endorewrite
$x \alpha_1 y l_2 z \cdot x r_1 y \alpha_2 z \cdot x \alpha_1^{-1} y r_2 z
\cdot x l_1 y \alpha_2^{-1} z$,
which is equivalent under the interchange law to $1_w$.
&
$$
\xymatrix{
& w \ar@/_/[dl]_{x\alpha_1 y l_2 z} \ar@/^/[dr]^{x l_1 y \alpha_2 z}&\\
x r_1 y l_2 z \ar@/_/[dr]_{x r_1 y \alpha_2 z} &
1_w
& x l_1 y r_2 z \ar@/^/[dl]^{x \alpha_1 y r_2 z}\\
& x r_1 y r_2 z &\\
}
$$
\end{tabular}
\end{proof}
\begin{Lem}[Digraph of Reduction Sequences]\label{alg-graph}\mbox{ }\\
For any critical pair of logged reduction sequences, there exists a finite
digraph which is the union of digraphs resulting from resolving critical
pairs as in Lemma \ref{lem-overlap}.
\end{Lem}
\begin{proof}
Given two logged reduction sequences
$\alpha : w \to w_1 \to \cdots \to w_m \to z$ and
$\alpha' : w \to w_{m+1} \to \cdots \to w_n \to z$,
we define a digraph $D$.
The vertices $V(D)$ are the distinct words occurring in these sequences, and
there is an edge labelled $\alpha_i$ from $w_i$ to $w_j$ if $w_i \to w_j$
is a reduction step labelled by $\alpha_i$ in one of the two given
reduction sequences.
The pair of reduction sequences $(\alpha, \alpha')$ yield the endorewrite
$\gamma=\delta(\alpha, \alpha')$ in the
way described in Lemma \ref{lem-overlap}.
We now add to the graph using the following algorithm (when the graph is
drawn, this looks like a subdivision into small confluence diagrams; the
proof was originally phrased in terms of `diamonds').
Note that the vertices are ordered with respect to $>$ in $X^*$.
\noindent
\emph{
\textbf{Algorithm 5.5 (Digraph Filling/Construction)}
\begin{small}
\begin{center}
\begin{tabular}{@{}p{13cm}}
\hline
\begin{enumerate}[{D}1:]
\item
(Initialise)
Given $D$ as defined above, set $V$ to be the set of vertices in $D$ and set $i=1$.
\item
(Select a Vertex)
If $V$ is empty, go to step D7. Otherwise,
set $v_i$ to be the maximum vertex in $V$ and remove $v_i$ from $V$.
\item
(Test and Resolve)
If the vertex is not the source of two distinct arrows in $D$ then discard it and go back to step D2.
Otherwise, consider the corresponding two reductions
$\beta_{i,1}: v_i \to v_{i,1}$ and $\beta_{i,2}: v_i \to v_{i,2}$.
The critical pair $(\beta_{i,1},\beta_{i,2})$
can be resolved since $\cL$ is a complete rewrite
system so we have
$\gamma_{i,1}: v_{i,1} \to z_i$ and
$\gamma_{i,2}: v_{i,2} \to z_i$.
\item
(Create New Digraph)
Define $D_i$ to be the digraph
$
\xymatrix{
& v_i \ar@/_/[dl]_{\beta_{i,1}} \ar@/^/[dr]^{\beta_{i,2}}&\\
v_{i,1} \ar@/_/[dr]_{\gamma_{i,1}} &
& v_{i,2} \ar@/^/[dl]^{\gamma_{i,2}}\\
& z_i &\\
}
$
\item
(Add to Digraph)
Add $D_i$ to $D$, identifying the vertices which have the same labels.
\item
(Loop)
Increment $i$ by 1 and go to step D2.
\item
(Terminate)
Output $D$.
\end{enumerate}\\
\hline
\end{tabular}
\end{center}
\end{small}
}
\vspace{1em}
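Steps D1--D7 can also be sketched executably. Everything in the sketch below (the dictionary encoding of $D$, the helper names, and the small four-rule system) is our own illustration, not the paper's code; a digraph maps each vertex (a word) to a set of $(\text{label}, \text{target})$ edges, where a label is a whiskered triple:

```python
# Sketch of Algorithm 5.5 (representation ours).  D maps each word to the
# set of (label, target) edges leaving it; an edge label is the whiskered
# triple (left, rule, right) of a logged rule application.

RULES = {'a1': ('ee', 'e'), 'a2': ('sss', 's'),
         'a3': ('sse', 'e'), 'a4': ('ess', 'e')}   # a small complete system

def reduce_once(word):
    """All one-step logged reductions of a word."""
    out = []
    for name, (lhs, rhs) in RULES.items():
        pos = word.find(lhs)
        while pos >= 0:
            out.append(((word[:pos], name, word[pos + len(lhs):]),
                        word[:pos] + rhs + word[pos + len(lhs):]))
            pos = word.find(lhs, pos + 1)
    return out

def logged_normalize(word):
    """One fixed logged reduction sequence down to the normal form."""
    path = []
    steps = reduce_once(word)
    while steps:
        label, word = steps[0]
        path.append(label)
        steps = reduce_once(word)
    return word, path

def fill(D):
    """D1-D7: resolve every branching vertex, largest (short-lex) first."""
    V = set(D)                                     # D1: initialise
    while V:                                       # D7: terminate when empty
        v = max(V, key=lambda w: (len(w), w))      # D2: select maximal vertex
        V.discard(v)
        if len(D.get(v, ())) < 2:                  # D3: not a branching vertex
            continue
        for _beta, target in list(D[v]):           # D3: resolve the pair;
            z, gamma = logged_normalize(target)    #     gamma_{i,j}: v_{i,j} -> z_i
            if gamma:                              # D4/D5: glue in the diamond
                D.setdefault(target, set()).add((tuple(gamma), z))
                V.add(target)                      # D6: loop
    return D

# Two one-step reductions of 'esse' diverge to 'ee'; filling adds the
# resolving edge 'ee' -> 'e'.
D = {'esse': set(reduce_once('esse')), 'ee': set(), 'e': set()}
fill(D)
print(sorted(t for _lbl, t in D['ee']))   # ['e']
```

Because vertices are the words themselves, the identification of equally labelled vertices in step D5 happens automatically.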
We note firstly that $\cL$ is finite, so there are only
finitely many rules which can be applied;
secondly, any finite word can only be reduced in a finite number of ways;
finally, the system is noetherian, so there are no
infinite reduction sequences.
This means that the procedure will terminate, giving a finite digraph
$D$ which is the union of the digraphs $D_i$, which are all of the type
considered in Lemma \ref{lem-overlap}.
\end{proof}
\begin{Lem}[Digraph Compositions]\label{lem-graph}\mbox{ }\\
The product (at the base point) of the endorewrites associated (in the
sense of Lemma \ref{lem-yield}) with the sub-digraphs is equivalent under
the interchange law to the endorewrite associated with the original
digraph.
\end{Lem}
\begin{proof}
\begin{tabular}{@{}p{10.5cm}@{}l}
Consider the composition of
digraphs of the type described, remembering that
each edge is associated uniquely to a particular log of the reduction.
The endorewrites associated to the two digraphs
are $\alpha_1 \cdot \gamma_1^{-1} \cdot \beta_1^{-1}$
and $\gamma_1 \cdot \alpha_2 \cdot \beta_2^{-1}$.
Composing them from the base point $w$ gives us
$\alpha_1 \cdot \gamma_1^{-1} \cdot \beta_1^{-1} \cdot
\beta_1 \cdot (\gamma_1 \cdot \alpha_2 \cdot \beta_2^{-1}) \cdot
\beta_1^{-1}$
which is equivalent in the sesquigroupoid to
$\alpha_1 \cdot \alpha_2 \cdot \beta_2^{-1} \cdot \beta_1^{-1}$, the
endorewrite given by taking the boundary of the composite.
The fact that the order of the digraph endorewrites is not important
corresponds with the fact that $EQ$ is abelian.
&
$$
\xymatrix{ & w \ar@/_/[ddl]_{\alpha_1} \ar@/^/[dr]^{\beta_1} &\\
&& z_1 \ar@/^/[ddl]^{\beta_2} \ar[dll]^{\gamma_1} \\
w_1 \ar@/_/[dr]_{\alpha_2} && \\
& z &}
$$
\end{tabular}
\end{proof}
Combining Lemma \ref{alg-graph} and Lemma \ref{lem-graph} with Lemma
\ref{lem-overlap} we can deduce that any digraph can be identified with a
product of whiskered endorewrites and inverse endorewrites of $E$. This
allows us to prove the main theorem:
\begin{Thm}[Critical Pairs give a Set of Generators for
$EQ$]\label{thm-main}\mbox{ }\\
Let $\cL_{init}$ be the initial logged rewriting system for a monoid
presentation, and let $\cL_{comp}$ be a completion.
Let $C$ be the set of all logged critical pairs resulting from overlaps
of the system $\cL_{comp} \cup \cL_{init}$. Then
$$E = \{ \delta(c) : c \in C \}$$
is a generating set of endorewrites.
\end{Thm}
\begin{proof}
Let $\gamma$ be an endorewrite on some string $w$.
Then consider the critical pair $(\gamma, 1_w)$.
Using the algorithm of Lemma \ref{alg-graph} we can construct a digraph $D$ whose
associated endorewrite is $\gamma$ and whose sub-digraphs yield a product
of whiskered elements of $E$ and their inverses which is equivalent to
$\gamma$ by Lemma \ref{lem-graph}.
\end{proof}
\section{Example}
This small example illustrates our methods for computing a complete set of
generators for the endorewrites of a monoid presentation from the overlaps
of a complete logged rewriting system.
Consider the monoid presentation $$mon\langle e,s \ | \ e^2=e, s^3=s,
s^2e=e, es^2=e, sese=ese, eses=ese\rangle.$$
Using the short-lex ordering with $s>e$, labelling the relations
$\alpha_1, \ldots, \alpha_6$ we have the complete logged rewriting system
consisting of the following six rules:
\begin{center}
\begin{tabular}{lll}
$\alpha_1 : e^2 \to e$,
&
$\alpha_2 : s^3 \to s$,
&
$\alpha_3 : s^2e \to e$,
\\
$\alpha_4 : es^2 \to e$,
&
$\alpha_5 : sese \to ese$,
&
$\alpha_6 : eses \to ese$.
\\
\end{tabular}
\end{center}
Consider the overlap of $\alpha_2$ and $\alpha_3$ on the string $w=s^3 e$.
Reducing it by $\alpha_2 e$ we get $se$, which is irreducible.
Alternatively, we can reduce $w$ by $s \alpha_3$ and similarly get $se$.
Thus we have an endorewrite of $se$, i.e.
$\alpha_2 e \cdot s \alpha_3^{-1}$.
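This computation can be checked mechanically. In the sketch below (our own, not the authors' implementation; the names \texttt{a1}--\texttt{a6} stand for $\alpha_1,\ldots,\alpha_6$), a rule application is logged as the whiskered triple of left context, rule name and right context:

```python
# Sketch (ours): the six rules of the complete logged system, with each
# application logged as the whiskered triple (left, rule, right).
RULES = {'a1': ('ee', 'e'),   'a2': ('sss', 's'),   'a3': ('sse', 'e'),
         'a4': ('ess', 'e'),  'a5': ('sese', 'ese'), 'a6': ('eses', 'ese')}

def apply_rule(word, name, pos):
    """Apply one rule at position pos, returning the new word and the log."""
    lhs, rhs = RULES[name]
    assert word[pos:pos + len(lhs)] == lhs
    log = (word[:pos], name, word[pos + len(lhs):])  # e.g. ('s','a3','') = s.alpha_3
    return word[:pos] + rhs + word[pos + len(lhs):], log

w = 'ssse'                          # the string s^3 e
w1, log1 = apply_rule(w, 'a2', 0)   # alpha_2 e : s^3 e -> se
w2, log2 = apply_rule(w, 'a3', 1)   # s alpha_3 : s^3 e -> se
print(w1, w2, log1, log2)
# se se ('', 'a2', 'e') ('s', 'a3', '')
```

The endorewrite $\alpha_2 e \cdot s \alpha_3^{-1}$ is then \texttt{log1} followed by the formal inverse of \texttt{log2}.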
Continuing in this way, considering all the overlaps of the logged
system, the following twenty-six endorewrites can be computed:
Endorewrites of $e$: \;
$\alpha_2 se \cdot s^2 \alpha_3^{-1}$, \;
$\alpha_1 s^2 \cdot \alpha_4 \cdot \alpha_1^{-1} \cdot e \alpha_4^{-1}$, \;
$\alpha_1 e \cdot e \alpha_1^{-1}$, \;
$\alpha_3 e \cdot \alpha_1 \cdot \alpha_3^{-1} \cdot s^2
\alpha_1^{-1}$, \;
$\alpha_3 s^2 \cdot \alpha_4 \cdot \alpha_3^{-1} \cdot s^2
\alpha_4^{-1}$, \;
$\alpha_4 s^2 \cdot es \alpha_2^{-1}$ and
$\alpha_4 e \cdot e \alpha_3^{-1}$.
Endorewrites of $s$: \;
$\alpha_2 s^2 \cdot s^2 \alpha_2^{-1}$.
Endorewrites of $s^2$: \;
$\alpha_2 s \cdot s \alpha_2^{-1}$.
Endorewrites of $es$: \;
$\alpha_4 s \cdot e \alpha_2^{-1}$.
Endorewrites of $se$: \;
$\alpha_2 e \cdot s \alpha_3^{-1}$.
Endorewrites of $ese$: \;
$\alpha_1 ses \cdot \alpha_6 \cdot \alpha_1^{-1} se \cdot e \alpha_6^{-1}$, \;
$s \alpha_5 \cdot \alpha_5 \cdot \alpha_3^{-1}se$, \;
$\alpha_3 ses \cdot s^2 \alpha_6^{-1}$, \;
$\alpha_4 se \cdot es \alpha_3^{-1}$, \;
$\alpha_5 e \cdot ses \alpha_1^{-1}$, \;
$\alpha_5 s^2 \cdot es \alpha_4 \cdot \alpha_5^{-1} \cdot ses \alpha_4^{-1}$, \;
$\alpha_5 s \cdot \alpha_6 \cdot \alpha_5^{-1} \cdot s \alpha_6^{-1}$, \;
$\alpha_5 ses \cdot \alpha_6 es \cdot es \alpha_1 s \cdot \alpha_6
\cdot \alpha_5^{-1} \cdot s \alpha_1^{-1} se \cdot se \alpha_5^{-1}
\cdot ses \alpha_6^{-1}$, \;
$\alpha_6 se \cdot \alpha_6 e \cdot ese \alpha_3^{-1}$, \;
$\alpha_6 s^2 \cdot es \alpha_4 \cdot \alpha_6^{-1} \cdot ese \alpha_2^{-1}$, \;
$\alpha_6 s \cdot \alpha_6 \cdot es \alpha_4^{-1}$, \;
$\alpha_6 e \cdot es \alpha_1 \cdot \alpha_1^{-1}se \cdot e \alpha_5^{-1}$, \;
$\alpha_6 ese \cdot ese \alpha_5^{-1}$, \;
$\alpha_6 es \cdot es\alpha_1 s \cdot \alpha_6 \cdot es
\alpha_1^{-1} \cdot \alpha_6^{-1} e \cdot es \alpha_6^{-1}$ and
$\alpha_5 se \cdot e \alpha_5 \cdot \alpha_1 se \cdot \alpha_5^{-1}
\cdot s \alpha_1^{-1} se \cdot se \alpha_5^{-1}$.
These endorewrites generate all possible endorewrites of the system, but
we note that generating sets obtained in this way are unlikely to be
minimal generating sets.
For example, in this case there is a relation between the three
endorewrites $\alpha_2 s \cdot s \alpha_2^{-1}$, \, $\alpha_2 e \cdot s
\alpha_3^{-1}$, and $\alpha_2 se \cdot s^2 \alpha_3^{-1}$,
in that the third can be obtained from the first two in the following way:
$$ (\alpha_2 s \cdot s \alpha_2^{-1})e \cdot s(\alpha_2 e \cdot s
\alpha_3^{-1}) = \alpha_2 se \cdot s^2 \alpha_3^{-1}.$$
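This relation can itself be verified mechanically at the level of whiskered step sequences. In the sketch below (the encoding is ours), a 2-cell is a list of signed whiskered steps, whiskering pads the outer strings, and composition is concatenation with free cancellation of adjacent inverse pairs:

```python
# Hedged sketch (ours): a 2-cell is a list of signed whiskered steps
# ((left, rule, right), sign); whiskering pads the outer strings, and
# composition concatenates, cancelling a step against its inverse.

def whisker(cell, left, right):
    """Whisker a 2-cell on the left and right by strings."""
    return [((left + l, r, rr + right), s) for (l, r, rr), s in cell]

def compose(*cells):
    """Vertical composition with free cancellation of x . x^{-1}."""
    out = []
    for cell in cells:
        for step in cell:
            if out and out[-1] == (step[0], -step[1]):
                out.pop()                      # cancel an adjacent inverse pair
            else:
                out.append(step)
    return out

e1 = [(('', 'a2', 's'), +1), (('s', 'a2', ''), -1)]    # alpha_2 s . s alpha_2^{-1}
e2 = [(('', 'a2', 'e'), +1), (('s', 'a3', ''), -1)]    # alpha_2 e . s alpha_3^{-1}
e3 = [(('', 'a2', 'se'), +1), (('ss', 'a3', ''), -1)]  # alpha_2 se . s^2 alpha_3^{-1}

print(compose(whisker(e1, '', 'e'), whisker(e2, 's', '')) == e3)  # True
```

The middle pair $s\alpha_2 e$ cancels against its inverse, leaving exactly the third endorewrite.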
Unfortunately, the fact that this problem generalises the word problem
for crossed modules means that reducing the generating set can be rather
ad-hoc since there are no normal forms for the 2-cells.
\section{Homotopical and Homological Interpretations}
We promised, in the introduction, that our results would enable
homotopical and homological specifications of the monoid. It is well
known that the existence of a finite complete rewriting system for a
monoid presentation implies the homological finiteness conditions
FP${}_3$~\citep{Squier87} and the stronger condition
FP${}_\infty$~\citep{Anick,Kobayashi} as well as the homotopical
condition of having finite derivation type
(FDT)~\citep{Cremanns,Squier94}. The addition made by this paper, in
considering logged rewriting systems, is that our algorithms enable the
specification of the structures which the properties are based upon.
In the homotopical case, it is immediate to observe that the set $E$ of
generating endorewrites suffices as a set of homotopy generators in the
sense of~\citep{Cremanns}. In detail: if $\alpha$ is any cycle of the
graph whose objects are all strings and whose invertible edges are all
rewrites, then $\alpha$ corresponds to the digraph of an endorewrite
and it turns out that the product of the subdigraphs is homotopically
equivalent to $\alpha$ for the same reasons as the associated
endorewrite is equivalent to the composite of the endorewrites of the
subdigraphs.
In terms of homology, the specification of $E$, similar to the analogous
case of $\Pi_2$ for groups, enables us to construct a resolution.
Specifically, we have an exact sequence of free, finitely generated
$\mathbb{Z} M$-modules:
$$
\xymatrix{
C_2 \ar[r]^{\delta_2}
& C_1 \ar[r]^{\delta_1}
& C_0 \ar[r]^{\delta_0}
& \mathbb{Z} \ar[r]
& 0.
}
$$
Given our specification of a finite set of homotopy generators, further
details of the resolution can be found in~\citep{Cremanns} in the proof
of the fact that FDT implies FP${}_3$.
For lower dimensional topology and cohomological dimensions for monoids,
Pride~\citep{Pride93,Pride95,Pride99} has developed geometric methods;
using a calculus of
pictures, with spherical pictures representing the relations between the
relations, which may be identified with our endorewrites. His method for
determining a generating set differs significantly from ours: it involves
picking an `obvious' set of pictures and then using picture operations to
prove that they generate all spherical pictures for the presentation.
The key word here is `obvious' -- whether an obvious set of pictures can
be identified depends upon the shape of the presentation and its relation
to presentations for which generating sets of pictures are known. In the
case of groups substantial research means that many shapes of presentation
can be recognised, but in the case of monoids, presentations are less
recognisable.
Our generating set of endorewrites is determined algorithmically,
dependent on the successful completion of the presentation. The rewriting
method has the clear advantage that it can be applied by brute force in
cases where the pictures are not obvious, or in complex examples where
the pictures may be too intricate to be identified by eye. More
interesting than comparing the two methods, however, is to consider
using them in combination -- rewriting can provide an initial set of
pictures for unrecognisable monoid presentations, and picture calculus
can then operate on the result to refine and reduce the set and present
it as something more aesthetically pleasing and expressive than the
strings of letters representing whiskered 2-cells.
An alternative to looking at standard resolutions of a group by $\mathbb{Z}
G$-modules as in \citep{Pride99} is to consider crossed resolutions. One
reason for interest in these is because their stronger invariance with
respect to the presentation makes them potentially more useful in the
classification of topological structures such as knots via crossed
resolutions of their intertwining monoids.
Recall the group case: a {\em crossed complex (over groupoids)} is a
sequence $C$
$$
\xymatrix{
& \cdots \ar[r]^{\delta_{n+1}}
& C_n \ar[r]^{\delta_n}
& C_{n-1} \ar[r]^{\delta_{n-1}}
& \cdots \ar[r]^{\delta_3}
& C_2 \ar[r]^{\delta_2}
& C_1 \ar[r]^{\delta_1}
& C_0
}
$$
such that
\begin{enumerate}[i)]
\item
$C_1$ is a groupoid with $C_0$ as its set of vertices and
$\delta^1, \delta^0$ as its source and target maps.
\item
For $n \geqslant 2$, $C_n$ is a totally disconnected groupoid over $C_0$
and for $n \geqslant 3$, the groups at the vertices of $C_n$ are abelian.
\item
The groupoid $C_1$ operates on the right of each $C_n$ for $n \geqslant 2$
by an action denoted $(x,a) \mapsto x^a$.
\item
For $n \geqslant 2$, $\delta_n: C_n \to C_{n-1}$ is a morphism of
groupoids over $C_0$ and $C_1$ acts on itself by conjugation.
\item
$\delta_{n}\delta_{n-1} = 0:C_n \to C_{n-2}$ for $n \geqslant 3$ and
$\delta_2 \delta^0
= \delta_2 \delta^1 : C_2 \to C_0$.
\item
If $c \in C_2$ then $\delta_2(c)$ operates trivially on $C_n$ for $n
\geqslant 3$ and operates on $C_2$ by conjugation by $c$.
\end{enumerate}
A crossed complex $C$ is {\em free} if $C_1$ is a free groupoid (on some
graph $\Gamma_1$) and $C_2$ is a free crossed $C_1$-module (for some
$\lambda: \Gamma_2 \to C_1$) and for $n \geqslant 3$, $C_n$ is a free
$\pi_1C$-module on some $\Gamma_n$ where $\pi_1 C$ is the fundamental
groupoid of the crossed complex; i.e. the quotient of the groupoid $C_1$
by the normal, totally disconnected subgroupoid $\delta_2(C_2)$.
A crossed complex $C$ is {\em exact} if for $n \geqslant 2$
$$ Ker(\delta_n: C_n \to C_{n-1}) = Im(\delta_{n+1}:C_{n+1} \to C_n).$$
If $C$ is a free exact crossed complex and $G$ is a groupoid then $C$
together with an isomorphism $\pi_1C \to G$ (or, equivalently, C with a
quotient morphism $C_1 \to G$ whose kernel is $\delta_2(C_2)$) is called a
{\em crossed resolution of $G$}. It is a {\em free crossed resolution}
if $C$ is also free.
In the case of monoids, we propose a similar structure.
Let $\cP = mon\langle X, R \rangle$ be a monoid presentation.
If we can find a complete rewriting system for $R$ then we can construct
the following sequence:
$$
\xymatrix{
& \cdots \ar[r]^{\delta_{n+1}}
& C_n \ar[r]^{\delta_n}
& C_{n-1} \ar[r]^{\delta_{n-1}}
& \cdots \ar[r]^{\delta_3}
& C_2 \ar@<1ex>[r]^{\tgt} \ar@<-1ex>[r]^{\src}
& C_1 \ar[r]^{\delta_1}
& C_0
}
$$
Define $C_0$ to be the monoid $M$ which is presented by $\cP$.
Define $C_1$ to be the free monoid $X^*$
and let $\delta_1:C_1 \to C_0$ be the quotient morphism.
Then let $\src,\tgt: C_2 \to C_1$ be the
2-category of rewrites, but instead of
a right action of $C_1$ we have a two-sided action; instead of a crossed
module $\delta_2: C_2 \to C_1$ we have a 2-category $\src,\tgt:C_2 \to
C_1$ and instead of $C_1$ being a groupoid, it is a category.
Then let $C_3$ be a family of free $\mathbb{Z} M$-bimodules:
its objects are the elements of $M$ and its arrows are of
the form $\ep_1(m_1e_1n_1) + \ep_2(m_2e_2n_2) + \cdots + \ep_k(m_ke_kn_k):m \to m$
when $m_1e_1n_1 \cdot m_2e_2n_2 \cdot \cdots \cdot m_ke_kn_k$ is an endorewrite
in $EQ_w$ for some $\theta(w)=m$.
For higher levels $n > 3$ we can define $C_n$ to be the free
$\mathbb{Z} M$-bimodule on a set of generators for $Ker(\delta_{n-1})$.
We find that $C$ is a crossed complex and we have maps $b_{i,j}: C_i \times C_j
\to C_{i+j}$
-- whiskering in the case of $C_0$ operating on the left and right of
$C_i$ for $i>0$. Then $C_1$ has two multiplications under the operations
of $C_0$ which coincide only if $C_1$ is a monoid in the category of
groupoids (interchange law).
There are no inverses in dimension 0, but inverses at all higher levels.
From the definitions we deduce exactness: $Ker(\delta_n) =
Im(\delta_{n+1})$.
This appears to be identifiable with the structure of a crossed
differential algebra, that is a crossed complex $C$ with a
morphism $C \otimes C \to C$ which gives a monoid structure on $C$
(these are defined in detail in~\citep{Tonks}).
We are still investigating how useful this enhanced style of resolution
may be in the monoid case, so we won't pursue the details of the
construction further in this paper.
\section{Generalised Logged String Rewriting}
In~\citep{paper2} it was shown that the familiar string rewriting methods
can be applied to problems of computing left Kan extensions over the
category of sets. Structures such as monoid and category presentations,
induced actions of groups and monoids, equivalence and conjugacy classes,
equalisers and pushouts all turn out to be special cases of left Kan
extensions over $\sets$ and thus string rewriting methods can be
applied to all these variations on the word problem.
Since string rewriting for Kan extensions can be achieved by embedding
in a monoid, it is unnecessary to go through the detail of the
sesquigroupoid whose 2-cells possess the structure for the logged
rules. However, since we don't need to embed in a monoid in order for
the string rewriting methods to work, we briefly outline the alternative
sesquigroupoid.
Let $(E, \ep)$ be the left Kan extension of the category action $X: \bA
\to \sets$ along the functor $F : \bA \to \bB$.
We assume that the data for the Kan extension is given as a finite
presentation $\cP$, consisting of generating graphs for $\bA$ and $\bB$, a
set of relations for $\bB$ and the action of functors $F$ and $X$ being
defined for every object and arrow of the generating graph of $\bA$.
The 2-category $C_2$ associated with the presentation of the Kan
extension has 0-cells
$(\bigsqcup_{A \in \ob \bA} XA) \sqcup \ob \bB$ and 1-cells
$\{ (s_x:x \to FA)\, | \, x \in XA, A \in \ob \bA \} \sqcup \arr \bB$.
The 2-cells are the rewrites and inverse rewrites, with vertical
composition as before, but clearly, whiskering and horizontal
compositions are partial operations dependent on whether paths can be
composed.
In conjunction with~\citep{paper2}, this observation enables logged
rewriting techniques to be applied to a wide range of problems, including
category presentations, equivalence relations, induced actions, pushouts
and coset systems. In each case, interpretations and potential
applications of the endorewrites require further investigation.
\section{Implementations and Further Applications}
Techniques of logged rewriting have been implemented by the first author
as \GAP \, functions which will eventually be submitted as a package.
Applications of logged rewriting were explored in \citep{paper5} where the
group version was implemented, providing a new algorithmic method for the
construction of crossed resolutions of groups; in \citep{RSAlt} where the
logged completion methods give an alternative to the Reidemeister-Schreier
method of computing a subgroup presentation; and in \citep{DCosets} we
show how endorewrites for double coset rewriting systems reveal
information about the subgroups.
Further work could pursue other potential applications, including in Petri
nets, concurrency and the analysis of knot quandles; as well as
generalising the techniques to Gr\"obner bases where the endorewrites can
be identified with syzygies.
\section{Introduction}
The dynamics and deformations of immiscible liquid droplets suspended in another fluid medium and subject to an electric field find a wide range of applications in industrial processes, including ink-jet printing \citep{basaran2013}, electrospinning \citep{huang2003}, oil extraction from oil-water emulsions \citep{schramm1992,
eow2002}, electrospraying and atomization of liquids \citep{taylor1964,taylor1969,castellanos2014} and microfluidic devices and pumps \citep{stone2004,laser2004}. Their study is also important in understanding natural phenomena such as electrification of rain, bursting of rain drops in thunderstorms and electrification of the atmosphere \citep{simpson1909, blanchard1963}. Of interest to us in this work is the case of dielectric liquids such as oils, which are poor conductors. Unlike aqueous electrolytes, where the dynamics arises from the action of the electric field on diffuse Debye layers extending into the liquid bulk, these so-called leaky dielectric liquids are typically characterized by the absence of bulk charges; any net charge in the system instead concentrates at interfaces between liquid phases as a result of the mismatch in material properties. Dynamics and deformations then result from the action of the field on this surface charge, which induces interfacial stresses and can drive fluid flows.
\begin{table}
\begin{center}\vspace{-0.4cm}
\begin{tabular}{ccc}
\textit{Experimental work}: & & \\[2pt]
\citet{allan1962,torza1971,vizika1992,tsukada93}; & & \\[2pt]
\citet{krause1998,ha2000a,ha2000b,sato2006}; & & \\[2pt]
\citet{salipante2010,salipante2013,karyappa2014,lanauze2015}. & & \\[5pt]
\textit{Theoretical modeling} (EHS): & & \\[2pt]
\citet{konski1953,harris1957}; \\[2pt]
\citet{allan1962,taylor1964}. \\[5pt]
\textit{Numerical simulation} (EHS): & & \\[2pt] \citet{brazier1971a,brazier1971b,miksis1981}; & & \\[2pt]
\citet{haywood1991,dubash2007a,dubash2007b}. & &\\[5pt]
\textit{Theoretical modeling} (EHD): & & \\[2pt]
\citet{taylor1966,torza1971,ajayi1978,esmaeeli2011}; \\[2pt]
\citet{zhang2013,lanauze2013,he2013,yariv2016}; & & \\[2pt]
\citet{bandopadhyay2016,yarivalmog16,das2016}. & & \\[5pt]
\textit{Numerical simulation} (EHD): & & \\[2pt] \citet{sherwood1988,feng1996,baygents1998,feng1999}; \\[2pt]
\citet{hirata2000,lac2007,supeene2008,bjorklund2009}; \\[2pt]
\citet{lopez2011,karyappa2014,hu2015,lanauze2015}. \\[5pt]
\textit{Reviews}: & & \\[2pt]
\citet{melcher1969,saville1997,vlahovska2016}. \\[2pt]
\end{tabular}
\caption{Non-exhaustive summary of the literature on the deformations and dynamics of uncharged liquid drops subject to a uniform DC electric field. We distinguish electrohydrostatic models (EHS), which neglect fluid flow, from electrohydrodynamic models (EHD), where fluid flow is taken into account. } \label{summary}
\end{center}
\end{table}
We focus in this work on the simple case of an isolated leaky dielectric drop suspended in a weakly conducting liquid subject to a uniform DC electric field. This prototypical problem has fascinated scientists for decades and a summary of the existing literature on this problem is presented in table~\ref{summary}. Early studies in the field primarily focused on the specific cases of an either insulating or perfectly conducting drop suspended in an insulating fluid medium. In these cases, the drop-fluid interface does not experience any tangential electric stresses, and as a consequence fluid motions are absent and the drop can only attain a steady prolate shape as a result of a jump in electric pressure across the interface \citep{konski1953,harris1957}. Oblately deformed drops were first observed in experiments by \citet{allan1962}, suggesting an inconsistency in the existing electrohydrostatic models. In his pioneering work, \citet{taylor1966} realized that dielectric liquids, while poor conductors, still have a weak conductivity and can therefore carry free charges to the drop-fluid interface. The action of the electric field on these surface charges then gives rise to tangential electric stresses that generate toroidal circulatory currents now known as Taylor vortices. By incorporating this effect into a small-deformation theory, Taylor was able to predict both prolate and oblate shapes depending on material properties, and his results compared favorably with experiments.
The discovery of these surface charges and their role in generating fluid motions motivated \citet{melcher1969} to develop a more complete framework for studying the electrohydrodynamics of leaky dielectric drops. The cornerstone of their work is a surface charge conservation equation that prescribes a balance between transient charge relaxation, the jump in Ohmic currents from both bulk fluids and charge convection along the drop surface due to the interfacial fluid flow. Taylor's original theory based on this model accounted for first-order deformations in the limit of vanishing electric capillary number $Ca_E$, denoting the ratio of electric to capillary forces. While predicted deformation values showed good agreement with experimental results \citep{torza1971} in weak fields where deformations are small, significant departures were observed with increasing field strength. In an attempt to resolve this discrepancy, \citet{ajayi1978} calculated drop deformations to second order in $Ca_E$, yet his results did not improve upon Taylor's solution in the case of oblate drops when compared with experiments. This systematic mismatch was a consequence of the neglect of nonlinear interfacial charge convection in these models. There have since then been numerous attempts to extend these original predictions by including additional effects such as transient shape deformation \citep{haywood1991,esmaeeli2011}, transient charge relaxation \citep{zhang2013}, fluid acceleration \citep{lanauze2013}, interfacial charge convection \citep{feng2002,shkadov02,he2013,das2016}, and sedimentation \citep{bandopadhyay2016,yarivalmog16}.
Various numerical schemes have also been developed over the years to address this problem computationally. \citet{brazier1971a}, \citet{brazier1971b} and \citet{miksis1981} used the boundary element method to solve the electrohydrostatics problem, wherein the shape of the drop is evolved quasi-statically so as to balance normal stresses on the interface. In a more comprehensive study, \citet{sherwood1988} solved the coupled electrohydrodynamic problem assuming creeping flow conditions, which allowed him to use the boundary element method for both the electric and flow problems. His pioneering work was extended by \citet{baygents1998} to study axisymmetric drop pair interactions and by \citet{lac2007} to investigate a much wider range of electric and fluid parameters. Very recently, \citet{lanauze2015} extended these models by formulating an axisymmetric boundary element method for the complete Melcher--Taylor leaky dielectric model. Other methods based on finite elements \citep{feng1996,feng1999,hirata2000,supeene2008}, level sets \citep{bjorklund2009}, the immersed boundary method \citep{hu2015} and the volume-of-fluid method \citep{lopez2011} have also been employed to investigate drop dynamics.
Recent experiments, however, have uncovered another dynamical regime in strong electric fields \citep{krause1998,ha2000b, sato2006,salipante2010}. Upon increasing field strength, a symmetry-breaking bifurcation has been reported in the case of weakly conducting drops, by which the axisymmetric shape predicted by the aforementioned models becomes unstable and gives rise to a non-axisymmetric tilted drop configuration accompanied by a rotational flow. In yet stronger fields, chaotic dynamics have also been reported, with unsteady stretching and tumbling of the drop \citep{salipante2013}, sometimes leading to breakup \citep{ha2000b}. This curious transition, most recently described in the work of \citet{salipante2010,salipante2013}, shares similarities with the electrorotation of weakly conducting rigid particles in strong electric fields, which is well known since the work of \citet{quincke1896} and has been explained in detail theoretically \citep{jones1984,das2013}. The case of a deformable drop, however, is significantly more challenging than that of a rigid particle, due to the deformations of the interface and to the complexity of the interfacial flow, which does not follow rigid body dynamics. Theoretical models for Quincke electrorotation of droplets are scarce and have all assumed a spherical shape as well as weak \citep{he2013} or strong \citep{yariv2016} charge convection by the flow. Computational models are non-existent to our knowledge, as nearly all simulation methods developed in the past have only allowed for axisymmetric shapes, which is sufficient to describe the oblate and prolate deformations arising in weak fields but is inadequate to capture symmetry breaking. A notable exception is the work of \citet{lopez2011}, who simulated the electrohydrodynamics of three-dimensional drops using the volume-of-fluid approach but did not address the Quincke regime.
In this work, we develop three-dimensional boundary element simulations of the electrohydrodynamics of a liquid droplet based on a formulation for the complete Melcher--Taylor leaky dielectric model. This enables us to investigate dynamics both in the axisymmetric Taylor regime of weak fields as well as in the Quincke regime of strong fields; to our knowledge, these are the first numerical simulations to capture Quincke electrorotation of drops in three dimensions. Our numerical results show excellent agreement with both existing experimental data and small-deformation theories. Details of the boundary integral formulations for the electric and flow problems and their numerical implementations are described in \S \ref{sec:BIF} as well as in the appendices. Simulation results and comparisons with previous experiments and theories are discussed in \S \ref{sec:results}. We conclude by summarizing our work and discussing possible extensions in \S \ref{sec:conclusion}.
\section{Problem definition}\label{sec:probdef}
\subsection{Governing equations}\label{sec:govern}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{figure1.pdf}
\end{center}
\caption{Problem definition: A liquid droplet with surface $S$ and outward unit normal $\boldsymbol{n}$ is suspended in an unbounded domain and placed in a uniform electric field $\boldsymbol{E}_{0}$ pointing in the vertical direction. $V^{\pm}$ denote the exterior and interior domains, respectively, and $(\epsilon^\pm, \sigma^\pm,\mu^\pm)$ are the corresponding dielectric permittivities, electric conductivities and dynamic viscosities. The drop's major and minor axis lengths are denoted by $L$ and $B$, and the major axis is tilted at an angle $\alpha$ with respect to the horizontal direction.}
\label{fig:figure1}
\end{figure}
We consider an uncharged neutrally buoyant liquid droplet with undeformed radius $a$ occupying volume $V^{-}$ in an infinite fluid medium $V^{+}$ and subject to a uniform electric field $\boldsymbol{E}_{0}$ as depicted in figure~\ref{fig:figure1}. The drop surface is denoted as $S$ and has an outward unit normal $\boldsymbol{n}$. Let $(\epsilon^{\pm},\sigma^{\pm},\mu^{\pm})$ be the dielectric permittivities, electric conductivities, and dynamic viscosities of the exterior and interior fluids, respectively. In the Melcher--Taylor leaky dielectric model \citep{melcher1969}, all charges in the system are concentrated on the drop surface, so that the electric potential in both fluid domains is harmonic:
\begin{equation}
\nabla^{2}\varphi^{\pm}(\boldsymbol{x})=0 \qquad \mbox{for}\,\,\,\boldsymbol{x}\in V^{\pm}. \label{eq:laplace}
\end{equation}
On the drop surface, the electric potential is continuous, as is the tangential component of the local electric field:
\begin{align}
\llbracket \varphi (\boldsymbol{x})\rrbracket =0 \quad \mbox{and} \quad \llbracket \boldsymbol{E}_t(\boldsymbol{x})\rrbracket =\boldsymbol{0} \qquad \mbox{for}\,\,\, \boldsymbol{x}\in S,
\end{align}
where $\boldsymbol{E}^{\pm}_{t}=({\mathsfbi{I}}-\boldsymbol{nn})\bcdot \boldsymbol{E}^{\pm}$ and $\boldsymbol{E}^{\pm}=-\bnabla \varphi^{\pm}$. We have introduced the notation $\llbracket f(\boldsymbol{x})\rrbracket \equiv f^+(\boldsymbol{x}) - f^-(\boldsymbol{x})$ for any field variable $f(\boldsymbol{x})$ defined on both sides of the interface. Unlike $\boldsymbol{E}_t$, the normal component of the electric field $E_{n}^{\pm}=\boldsymbol{n}\bcdot\boldsymbol{E}^{\pm}$ undergoes a jump due to the mismatch in electrical properties between the two media \citep{landau1984}, which results in a surface charge distribution $q(\boldsymbol{x})$ related to the normal displacement field by Gauss's law:
\begin{equation}
q(\boldsymbol{x})=\llbracket \epsilon {E}_n(\boldsymbol{x})\rrbracket \qquad \mbox{for}\,\,\, \boldsymbol{x}\in S.
\end{equation}
The surface charge density $q$ evolves due to two distinct mechanisms: Ohmic currents from the bulk and advection by the fluid flow with velocity $\boldsymbol{v}(\boldsymbol{x})$ on the drop surface. Accordingly, it satisfies the conservation equation:
\begin{equation}
\partial_{t}q + \llbracket \sigma {E}_n\rrbracket +\bnabla_{s}\bcdot (q\boldsymbol{v})=0 \qquad \mbox{for}\,\,\, \boldsymbol{x}\in S, \label{eq:chargeeq0}
\end{equation}
where $\bnabla_{s}\equiv ({\mathsfbi{I}}-\boldsymbol{nn})\bcdot\bnabla$ is the surface gradient operator. On neglecting unsteady terms and surface charge convection, equation \eqref{eq:chargeeq0} reduces to the simpler boundary condition $\llbracket \sigma {E}_n\rrbracket=0$ used in a number of previous studies \citep{sherwood1988,baygents1998,lac2007}.
The fluid velocity field $\boldsymbol{v}^{\pm}(\boldsymbol{x})$ and corresponding pressure field $p^{H\pm}(\boldsymbol{x})$ satisfy the Stokes equations in both fluid domains:
\begin{equation}
-\mu^{\pm}\nabla^{2}\boldsymbol{v}^{\pm}+\bnabla p^{H\pm}=\boldsymbol{0}\quad \mbox{and}\quad \bnabla\bcdot\boldsymbol{v}^{\pm}=0 \qquad \mbox{for}\,\,\,\boldsymbol{x}\in V^{\pm}.
\end{equation}
The velocity is continuous on the drop surface:
\begin{equation}
\llbracket \boldsymbol{v}(\boldsymbol{x})\rrbracket =\boldsymbol{0} \qquad \mbox{for}\,\,\, \boldsymbol{x}\in S,
\label{eq:kinematic}
\end{equation}
and, in the absence of Marangoni effects, the jumps in electric and hydrodynamic tractions across the interface balance interfacial tension forces:
\begin{equation}
\llbracket \boldsymbol{f}^{E}\rrbracket + \llbracket \boldsymbol{f}^{H}\rrbracket =\gamma (\bnabla_{s}\bcdot\boldsymbol{n})\boldsymbol{n} \qquad \mbox{for}\,\,\, \boldsymbol{x}\in S.\label{eq:stressbalance}
\end{equation}
Here, $\gamma$ is the constant surface tension and $\bnabla_{s} \bcdot\boldsymbol{n}=2\kappa_{m}$ is twice the mean surface curvature. The jumps in tractions are expressed in terms of the Maxwell stress tensor ${\mathsfbi{T}}^{E}$ and hydrodynamic stress tensor ${\mathsfbi{T}}^{H}$ as
\begin{align}
\llbracket \boldsymbol{f}^{E}\rrbracket &=\boldsymbol{n}\bcdot\llbracket{\mathsfbi{T}}^{E}\rrbracket=\boldsymbol{n}\bcdot\llbracket \epsilon (\boldsymbol{EE}-\tfrac{1}{2}E^{2}{\mathsfbi{I}})\rrbracket, \\
\llbracket \boldsymbol{f}^{H}\rrbracket &=\boldsymbol{n}\bcdot\llbracket{\mathsfbi{T}}^{H}\rrbracket = \boldsymbol{n}\bcdot \llbracket-p^H{\mathsfbi{I}}+\mu\left(\bnabla \boldsymbol{v}+\bnabla\boldsymbol{v}^{T}\right)\rrbracket.
\end{align}
The jump in electric tractions can also be expressed as
\begin{align}
\begin{split}
\llbracket \boldsymbol{f}^{E}\rrbracket =\llbracket \epsilon E_{n} \rrbracket \boldsymbol{E}_{t}+\tfrac{1}{2}\llbracket \epsilon(E_{n}^{2}-E_{t}^{2})\rrbracket \boldsymbol{n}
=q\boldsymbol{E}_{t}+ \llbracket p^{E}\rrbracket \boldsymbol{n}.\label{eq:electraction}
\end{split}
\end{align}
The first term on the right hand side captures the tangential electric force on the interface arising from the action of the tangential field on the interfacial charge distribution. The second term captures normal electric stresses and can be interpreted as the jump in an electric pressure $p^E=\frac{1}{2}\epsilon (E_n^2-E_t^2)$ \citep{lac2007}.
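The equivalence between the Maxwell-stress form and the decomposition \eqref{eq:electraction} follows from splitting $\boldsymbol{E}^{\pm}$ into tangential and normal parts, with $\boldsymbol{E}_t$ continuous across $S$. A quick symbolic check of this identity (an illustrative sketch, not part of the solver; the local normal is taken along $z$ without loss of generality):

```python
import sympy as sp

# tangential field (continuous across S) and one-sided normal components
Et1, Et2, eps_p, eps_m, En_p, En_m = sp.symbols('Et1 Et2 eps_p eps_m En_p En_m')
n = sp.Matrix([0, 0, 1])           # local unit normal
Et = sp.Matrix([Et1, Et2, 0])      # tangential field, n . Et = 0

def traction(eps, En):
    """n . [eps (E E - |E|^2 I / 2)] evaluated on one side of the interface."""
    E = Et + En * n
    T = eps * (E * E.T - sp.Rational(1, 2) * E.dot(E) * sp.eye(3))
    return T * n                   # T is symmetric, so T n = n . T

jump = traction(eps_p, En_p) - traction(eps_m, En_m)
q = eps_p * En_p - eps_m * En_m                          # Gauss's law
pE = sp.Rational(1, 2) * (eps_p * (En_p**2 - Et.dot(Et))
                          - eps_m * (En_m**2 - Et.dot(Et)))
resid = sp.expand(jump - (q * Et + pE * n))              # should vanish identically
```

The residual expands to zero componentwise, confirming $\llbracket \boldsymbol{f}^{E}\rrbracket = q\boldsymbol{E}_t + \llbracket p^E\rrbracket\boldsymbol{n}$.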
\subsection{Non-dimensionalization}\label{sec:nondim}
Non-dimensionalization of the governing equations yields five dimensionless groups, three of which are ratios of material properties typically defined as:
\begin{equation}
R=\frac{\sigma^+}{\sigma^-}, \qquad Q= \frac{\epsilon^-}{\epsilon^+},\qquad \lambda =\frac{\mu^-}{\mu^+}.
\end{equation}
The low-drop-viscosity limit $\lambda\rightarrow 0$ describes a bubble, whereas $\lambda\rightarrow \infty$ describes a rigid particle. The product $RQ$ can also be interpreted as the ratio of the inner to outer charge relaxation times:
\begin{equation}
RQ=\frac{\tau^-}{\tau^+}\qquad \mbox{where}\qquad \tau^+=\frac{\epsilon^+}{\sigma^+}, \quad \tau^-=\frac{\epsilon^-}{\sigma^-}.
\end{equation}
A possible choice for the two remaining dimensionless numbers consists of the electric capillary number $Ca_E$ and electric Mason number $Ma$ defined as
\begin{equation}
Ca_{E}=\frac{a\epsilon^+ E_{0}^{2}}{\gamma}, \qquad Ma = \frac{\mu^+}{\epsilon^+ \tau_{MW}E_{0}^2}. \label{eq:MaCa}
\end{equation}
The electric capillary number $Ca_E$ compares the characteristic time $\tau_{\gamma}$ for a deformed drop to relax to its equilibrium shape as a result of surface tension to the electro-viscous timescale $\tau_{EHD}$ \citep{salipante2010}, each defined as
\begin{equation}
\tau_{\gamma}=\frac{\mu^+(1+\lambda)a}{\gamma}, \qquad \tau_{EHD}=\frac{\mu^+(1+\lambda)}{\epsilon^+ E_{0}^{2}}.
\end{equation}
On the other hand, the Mason number $Ma$ is the ratio of $\tau_{EHD}$, multiplied by a factor of $(1+\lambda)^{-1}$, to the Maxwell-Wagner relaxation time
\begin{equation}
\tau_{MW}=\frac{\epsilon^- +2\epsilon^+}{\sigma^- +2\sigma^+},
\end{equation}
which is the characteristic timescale for polarization of the drop surface upon application of the field \citep{das2013}. $Ma$ is also directly related to the ratio of the electric field magnitude $E_0$ to the critical electric field $E_{c}$ for onset of Quincke rotation of a rigid sphere as
\begin{equation}
Ma=\frac{\overline{\epsilon}-\overline{\sigma}}{2}\left(\frac{E_{c}}{E_{0}}\right)^{2},
\end{equation}
where
\begin{equation}
\overline{\epsilon}=\frac{\epsilon^- -\epsilon^+}{\epsilon^- + 2\epsilon^+}, \quad \overline{\sigma}=\frac{\sigma^- -\sigma^+}{\sigma^- + 2\sigma^+}, \quad E_{c}=\sqrt{\frac{2\mu^+}{\epsilon^+ \tau_{MW}(\overline{\epsilon}-\overline{\sigma})}}. \label{eq:Ec}
\end{equation}
For a rigid sphere, Quincke rotation occurs when $E_0>E_c$, or $Ma< (\overline{\epsilon}-\overline{\sigma})/2$, thus necessitating the application of a strong electric field. For the critical electric field $E_c$ to take on a real value, the condition $\overline{\epsilon}>\overline{\sigma}$, which is equivalent to $RQ>1$ or $\tau^->\tau^+$, needs to be satisfied; this generally implies that the drop is less conducting than the suspending fluid. It is useful to note the direct correspondence between $Ma$ and the electric Reynolds number $Re_E$ defined by other authors \citep{lanauze2015,schnitzer2015}:
\begin{equation}
Ma =\frac{\tau^+/\tau_{MW}}{Re_{E} } \qquad \mbox{where} \qquad Re_E= \frac{\epsilon^{+}E_{0}^{2}}{\sigma^+\mu^+}.
\end{equation}
Finally, an additional dimensionless group can also be constructed by taking the ratio of the capillary time $\tau_\gamma$ and Maxwell-Wagner relaxation time $\tau_{MW}$ and is independent of field strength \citep{salipante2010}:
\begin{equation}
Ca_{MW}=\frac{\tau_{\gamma}}{\tau_{MW}}=\frac{\mu^+ (1+\lambda)a}{\gamma \tau_{MW}}=(1+\lambda)Ca_{E}Ma. \label{eq:CaMW}
\end{equation}
For a fixed set of material properties, varying $Ca_{MW}$ is equivalent to varying drop size $a$. In the remainder of the paper, we exclusively use dimensionless variables by scaling lengths with $a$, electric fields with $E_{0}$, and times with $\tau_{MW}$. In addition to $R$, $Q$ and $\lambda$, we primarily use $Ca_E$ and $Ma$ as dimensionless groups, though some of the results in \S \ref{sec:results} will also be shown in terms of $E_{0}/E_{c}$ and $Ca_{MW}$.
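As an illustration, the dimensionless groups defined above can be evaluated directly from measured material properties; a minimal sketch for system 1a of tables 1 and 2 (values quoted there) is:

```python
import math

eps0 = 8.8542e-12                              # vacuum permittivity (F/m)
# material properties of system 1a (table 1)
eps_p, eps_m = 4.9 * eps0, 2.8 * eps0          # permittivities (F/m)
sig_p, sig_m = 5.8e-11, 0.2e-11                # conductivities (S/m)
mu_p, mu_m = 0.68, 0.05                        # viscosities (Pa s)
gamma, a, E0 = 4.5e-3, 2.0e-3, 1.6e5           # N/m, m, V/m

R, Q, lam = sig_p / sig_m, eps_m / eps_p, mu_m / mu_p
tau_MW = (eps_m + 2 * eps_p) / (sig_m + 2 * sig_p)   # Maxwell-Wagner time (s)
Ca_E = a * eps_p * E0**2 / gamma                     # electric capillary number
Ma = mu_p / (eps_p * tau_MW * E0**2)                 # Mason number
eps_bar = (eps_m - eps_p) / (eps_m + 2 * eps_p)
sig_bar = (sig_m - sig_p) / (sig_m + 2 * sig_p)
Ec = math.sqrt(2 * mu_p / (eps_p * tau_MW * (eps_bar - sig_bar)))
# R ~ 29, Q ~ 0.57, Ca_E ~ 0.49, Ma ~ 0.65, E0/Ec ~ 0.49
```

These reproduce the entries of table 2 and the field-strength ratio $E_0/E_c=0.49$ quoted in \S \ref{sec:results}.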
\section{Boundary integral formulation}\label{sec:BIF}
\subsection{Electric problem}\label{sec:electric}
The solution of Laplace's equation \eqref{eq:laplace} is best formulated using boundary integral equations \citep{jaswon1963,symm1963,pozrikidis2002}. Following previous studies in the field \citep{sherwood1988,baygents1998,lac2007, lanauze2015} we represent the potential in terms of the single-layer density $\llbracket {E}_{n}(\boldsymbol{x})\rrbracket$ as
\begin{equation}
\varphi(\boldsymbol{x}_{0})=-\boldsymbol{x}_0\bcdot \boldsymbol{E}_{0}+\oint_{S} \llbracket {E}_{n}(\boldsymbol{x})\rrbracket \, \mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\,\mathrm{d}S(\boldsymbol{x}) \qquad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in V^{\pm}, S. \label{eq:intpotential}
\end{equation}
Here, $\boldsymbol{x}_{0}$ is the evaluation point for the potential and can be anywhere in space, whereas $\boldsymbol{x}$ denotes the integration point which is located on the drop surface. The Green's function or fundamental solution of Laplace's equation in an unbounded domain is given by
\begin{equation}
\mathcal{G}(\boldsymbol{x}_{0};\boldsymbol{x})=\frac{1}{4\uppi r}\quad \mbox{where} \quad \boldsymbol{r}=\boldsymbol{x}_{0}-\boldsymbol{x}, \,\,\, r=|\boldsymbol{r}|.
\end{equation}
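Before discretizing on a mesh, the representation \eqref{eq:intpotential} can be sanity-checked against a classical result: for the density $\cos\theta$ on the unit sphere, the single-layer potential at an exterior point equals $\cos\theta_0/3r_0^{2}$. A small quadrature sketch (plain tensor-product quadrature at an off-surface point, not the solver's singularity-removed scheme):

```python
import numpy as np

def single_layer_sphere(density, x0, n_mu=64, n_phi=128):
    """Single-layer potential of a surface density on the unit sphere,
    evaluated at x0 by Gauss-Legendre (in cos theta) x trapezoid (in phi)."""
    mu, w = np.polynomial.legendre.leggauss(n_mu)        # mu = cos(theta)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    mu_g, phi_g = np.meshgrid(mu, phi, indexing='ij')
    st = np.sqrt(1 - mu_g**2)
    pts = np.stack([st * np.cos(phi_g), st * np.sin(phi_g), mu_g], axis=-1)
    r = np.linalg.norm(x0 - pts, axis=-1)
    integrand = density(mu_g) / (4 * np.pi * r)          # density x Green's function
    # dS = d(mu) d(phi) on the unit sphere
    return (integrand * w[:, None]).sum() * (2 * np.pi / n_phi)

# exterior point on the z-axis at r0 = 2: exact value is cos(0)/(3*2^2) = 1/12
val = single_layer_sphere(lambda mu: mu, np.array([0.0, 0.0, 2.0]))
```

Since the evaluation point lies off the surface, the integrand is smooth and the quadrature converges to the exact value $1/12$ essentially to machine precision.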
Note that equation (\ref{eq:intpotential}) is valid in both fluid phases as well as on the interface since the Green's function is continuous across $S$. The integrand is weakly singular, however, when $\boldsymbol{x}=\boldsymbol{x}_{0}$, though the singularity can be removed analytically by introducing plane polar coordinates in the parametric plane defining the local surface \citep{pozrikidis2002}. Knowledge of the single-layer potential density $ \llbracket {E}_{n}(\boldsymbol{x})\rrbracket$ on the interface therefore allows one to determine the electric potential anywhere in space by simple integration, which prompts us to seek an equation for $ \llbracket {E}_{n}(\boldsymbol{x})\rrbracket$ in terms of the surface charge density $q$. We first take the gradient of equation \eqref{eq:intpotential} to obtain an integral equation for the electric field in the fluid:
\begin{equation}
\boldsymbol{E}^{\pm}(\boldsymbol{x}_0)=\boldsymbol{E}_{0}-\oint_{S} \llbracket {E}_{n}(\boldsymbol{x})\rrbracket \bnabla_{0}\mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\,\mathrm{d}S(\boldsymbol{x}) \quad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in V^{\pm}.
\end{equation}
The derivative of the Green's function undergoes a discontinuity at the interface, which needs to be accounted for when the evaluation point is on the boundary \citep{pozrikidis2011}:
\begin{equation}
\boldsymbol{E}^\pm(\boldsymbol{x}_0)=\boldsymbol{E}_{0}-\oint_{S} \llbracket {E}_{n}(\boldsymbol{x})\rrbracket \bnabla_{0}\mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\,\mathrm{d}S(\boldsymbol{x})\pm \tfrac{1}{2}\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket \boldsymbol{n}(\boldsymbol{x}_0) \quad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in S.
\label{eq:intelectric}
\end{equation}
The kernel of this integral equation for the electric field is strongly singular. However, taking the dot product of both sides with the unit normal $\boldsymbol{n}(\boldsymbol{x}_0)$ reduces the order of the singularity by one. Averaging the normal components of the field outside and inside the drop then yields
\begin{equation}
\tfrac{1}{2}[E_n^+(\boldsymbol{x}_0)+E_n^-(\boldsymbol{x}_0)]=E_{n0}-\oint_{S} \llbracket {E}_{n}(\boldsymbol{x})\rrbracket \{\boldsymbol{n}(\boldsymbol{x}_0)\boldsymbol{\cdot}\bnabla_{0}\mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\}\,\mathrm{d}S(\boldsymbol{x}) \quad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in S,
\label{eq:intnormalsing}
\end{equation}
where the weak singularity can now be removed analytically following \citet{sellier2006} by subtracting $\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket$ from the single-layer density:
\begin{align}
\begin{split}
&\tfrac{1}{2}[E_n^+(\boldsymbol{x}_0)+E_n^-(\boldsymbol{x}_0)]+\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket \left[\tfrac{1}{2}-L(\boldsymbol{x}_0)\right] \\
&=E_{n0}-\oint_{S} \{\llbracket {E}_{n}(\boldsymbol{x})\rrbracket-\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket\}\{\boldsymbol{n}(\boldsymbol{x}_0)\boldsymbol{\cdot}\bnabla_{0}\mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\}\,\mathrm{d}S(\boldsymbol{x}) \quad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in S.
\label{eq:intnormalreg}
\end{split}
\end{align}
The scalar function ${L}(\boldsymbol{x}_0)$ is a purely geometrical quantity depending on drop shape and expressed as \citep{sellier2006}
\begin{align}
L(\boldsymbol{x}_0) = \boldsymbol{n}(\boldsymbol{x}_0)\bcdot \oint_S \Big\{ [\boldsymbol{\nabla} \mathcal{G} \bcdot \boldsymbol{n}(\boldsymbol{x})] [\boldsymbol{n}(\boldsymbol{x})-\boldsymbol{n}(\boldsymbol{x}_0)] + \mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x}) [\boldsymbol{\nabla}\bcdot \boldsymbol{n}] (\boldsymbol{x}) \boldsymbol{n}(\boldsymbol{x})\Big\} \,\mathrm{d}S(\boldsymbol{x}). \label{eq:integralL}
\end{align}
Gauss's law also allows us to express $E_n^+$ and $E_n^-$ on each side of the interface in terms of the jump in normal electric field,
\refstepcounter{equation}
$$
E_n^+=\frac{q-Q\llbracket {E}_{n}\rrbracket}{1-Q}, \qquad \qquad {E}_n^-=\frac{q-\llbracket {E}_{n}\rrbracket}{1-Q}, \eqno{(\theequation{\mathit{a},\mathit{b}})} \label{eq:normalelectric}
$$
which, after substitution into equation (\ref{eq:intnormalreg}), provides a regular integral equation for the jump $\llbracket {E}_{n}\rrbracket$: \vspace{-0.2cm}
\begin{align}
\begin{split}
&\oint_{S} \{\llbracket {E}_{n}(\boldsymbol{x})\rrbracket-\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket\}\{\boldsymbol{n}(\boldsymbol{x}_0)\boldsymbol{\cdot}\bnabla_{0}\mathcal{G}(\boldsymbol{x}_0;\boldsymbol{x})\}\,\mathrm{d}S(\boldsymbol{x})\\
&\quad \quad +\llbracket {E}_{n}(\boldsymbol{x}_0)\rrbracket \left[\frac{Q}{Q-1}-L(\boldsymbol{x}_0) \right]=E_{n0}+\frac{q(\boldsymbol{x}_0)}{Q-1}, \qquad \mbox{for}\,\,\,\boldsymbol{x}_{0}\in S.
\label{eq:intjump}
\end{split}
\end{align}
The jump $\llbracket {E}_{n}\rrbracket$ can therefore be computed from \eqref{eq:intjump} for a given surface charge density after discretization of the integral on a mesh, yielding a large linear system that is solved iteratively. Further details of the numerical implementation are given in \S \ref{sec:numerical} and in appendix~A. Having obtained $\llbracket {E}_{n}\rrbracket$, the normal components $E_n^+$ and $E_n^-$ are easily obtained using equation (\ref{eq:normalelectric}).
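The structure of this iterative solve can be sketched with a compact unrestarted GMRES; the dense matrix below is a generic stand-in for the discretized integral operator (identity plus a moderate full perturbation, typical of a second-kind equation), not the actual BEM matrix:

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """Minimal unrestarted GMRES: Arnoldi process + small least-squares solve."""
    n = len(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(n):
        v = A @ Q[:, k]                         # expand the Krylov space
        for j in range(k + 1):                  # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        # least-squares problem min || beta e1 - H y || in the Krylov basis
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol * beta:
            break
    return x

# stand-in for the dense discretized system
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = gmres(A, b)
```

For well-conditioned operators of this second-kind form, convergence is typically reached in far fewer than $n$ iterations.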
The tangential component of the electric field can then be evaluated using \eqref{eq:intelectric}; however, care must be taken to remove the strong singularity in the kernel. Here, we adopt instead an indirect method in which we first compute the electric potential $\varphi$ using equation \eqref{eq:intpotential} then differentiate it numerically on the drop surface to obtain $\boldsymbol{E}_t$. Once the normal and tangential components of the electric field are known, we can determine the jump in the normal component of Ohmic currents $\llbracket \sigma E_n\rrbracket$ as well as the jump in electric tractions $\llbracket \boldsymbol{f}^E\rrbracket $ using equation \eqref{eq:electraction}.
\subsection{Flow problem}\label{sec:flow}
The applied electric field induces fluid motion inside and outside the drop. The need to solve for the fluid flow is twofold, as it affects the surface charge distribution according to equation \eqref{eq:chargeeq0} and causes deformations of the interface, which is a material surface advected by the flow. The flow problem is solved after application of the dynamic boundary condition \eqref{eq:stressbalance} to obtain the hydrodynamic traction jump $\llbracket \boldsymbol{f}^H\rrbracket $ on the drop-fluid interface. Assuming creeping flow, we use the Stokes boundary integral equation to represent the fluid velocity as \citep{rallison1978,pozrikidis2002}
\begin{align}
\begin{split}
\boldsymbol{v}(\boldsymbol{x}_0)=&-\frac{1}{4\uppi \mu (1+\lambda)} \oint_S \llbracket \boldsymbol{f}^H(\boldsymbol{x})\rrbracket \boldsymbol{\cdot} \mathsfbi{G} (\boldsymbol{x}_0;\boldsymbol{x}) \,\mathrm{d}S(\boldsymbol{x})\\
&+\frac{\kappa}{4 \uppi} \oint_S \boldsymbol{v}(\boldsymbol{x}) \boldsymbol{\cdot} \mathsfbi{T}(\boldsymbol{x}_0;\boldsymbol{x})\boldsymbol{\cdot} \boldsymbol{n}(\boldsymbol{x}) \,\mathrm{d}S(\boldsymbol{x}), \quad \mbox{for}\,\,\, \boldsymbol{x}_0 \in V^\pm, S, \label{eq:stokesbie}
\end{split}
\end{align}
where $\kappa=(1-\lambda)/(1+\lambda)$ and $\mathsfbi{G}(\boldsymbol{x}_0;\boldsymbol{x})$ and $\mathsfbi{T}(\boldsymbol{x}_0;\boldsymbol{x})$ denote the free-space Green's functions for the Stokeslet and stresslet, respectively:
\refstepcounter{equation}
$$
\mathsfbi{G}(\boldsymbol{x}_0;\boldsymbol{x})=\frac{\mathsfbi{I}}{r} + \frac{\boldsymbol{r}\boldsymbol{r}}{r^3}, \qquad \mathsfbi{T}(\boldsymbol{x}_0;\boldsymbol{x})=6\frac{\boldsymbol{r}\boldsymbol{r}\boldsymbol{r}}{r^5}. \eqno{(\theequation{\mathit{a},\mathit{b}})}
\label{eq:stokeslet}
$$
The usual negative sign in the definition of the stresslet appears if $\boldsymbol{r}$ is defined as $\boldsymbol{x}-\boldsymbol{x}_0$. Note that $\kappa=+1$ and $\kappa=-1$ correspond to the limits of a bubble ($\lambda \rightarrow 0$) and of a solid particle ($\lambda \rightarrow \infty$), respectively. The interfacial velocity appearing in the double-layer potential is as yet unknown, but an integral equation for $\boldsymbol{v}$ on the surface can be obtained by moving the evaluation point $\boldsymbol{x}_0$ to the boundary $S$. In dimensionless form, it reads:
\begin{align}
\begin{split}
\boldsymbol{v}(\boldsymbol{x}_0)+&\frac{\lambda-1}{8\uppi} \oint_S [\boldsymbol{v}(\boldsymbol{x}) - \boldsymbol{v}(\boldsymbol{x}_0)] \boldsymbol{\cdot} \mathsfbi{T}(\boldsymbol{x}_0;\boldsymbol{x})\boldsymbol{\cdot} \boldsymbol{n}(\boldsymbol{x}) \,\mathrm{d}S(\boldsymbol{x})\\
=&-\frac{1}{8\uppi Ma} \oint_S \llbracket \boldsymbol{f}^H(\boldsymbol{x})\rrbracket \boldsymbol{\cdot} \mathsfbi{G} (\boldsymbol{x}_0;\boldsymbol{x}) \,\mathrm{d}S(\boldsymbol{x}), \qquad \mbox{for}\,\,\, \boldsymbol{x}_0 \in S. \label{eq:stokessurface}
\end{split}
\end{align}
The forcing term in this equation is contained in the hydrodynamic traction jump $\llbracket \boldsymbol{f}^H \rrbracket$. After discretization of the integral, equation \eqref{eq:stokessurface} yields a dense linear system that is again solved iteratively. The weak singularity appearing in the double-layer potential in the original equation \eqref{eq:stokesbie} has been removed by using appropriate integral identities; the weak singularity of the single-layer potential, on the other hand, disappears after introducing plane polar coordinates \citep{pozrikidis1992}. It is well known that the integral equation \eqref{eq:stokessurface} admits arbitrary rigid body motions and uniform expansion as eigensolutions, resulting in the ill-conditioning of the linear system for $\lambda \gg 1$ or $\lambda \ll 1$ and leading to poor convergence of the solution \citep{zinchenko1997}. To cure this ill-conditioning, we employ Wielandt's deflation technique to eliminate the eigenvalues $\kappa=\pm 1$ from the spectrum of the integral equation \citep{kim2013}; see appendix B for details. Once the interfacial velocity is known, the nodes are advected with the normal component of the fluid velocity; the heuristic mesh relaxation algorithm of \cite{loewenberg1996} is applied in the tangential direction so as to reduce mesh distortion.
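The free-space kernels defined above translate directly into code; a minimal sketch (with $\boldsymbol{r}=\boldsymbol{x}_0-\boldsymbol{x}$, as in the text) is:

```python
import numpy as np

def stokeslet(x0, x):
    """Free-space Stokeslet G(x0; x) = I/r + r r / r^3 (rank-2 tensor)."""
    r = x0 - x
    d = np.linalg.norm(r)
    return np.eye(3) / d + np.outer(r, r) / d**3

def stresslet(x0, x):
    """Free-space stresslet kernel T(x0; x) = 6 r r r / r^5 (rank-3 tensor)."""
    r = x0 - x
    d = np.linalg.norm(r)
    return 6 * np.einsum('i,j,k->ijk', r, r, r) / d**5
```

For $\boldsymbol{r}=(1,0,0)$, for instance, the Stokeslet reduces to $\mathrm{diag}(2,1,1)$ and the only non-zero stresslet entry is $T_{xxx}=6$, which provides a quick correctness check.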
\subsection{Summary of the numerical method} \label{sec:numerical}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure2.pdf}
\caption{Discretized mesh: $N_\triangle = 1280$ six-node curved elements. (\textit{a}) An initially spherical mesh at time $t=0$, (\textit{b}) a deformed mesh for a tilted drop in the Quincke regime corresponding to the case of figure \ref{fig:snapquincke}, and (\textit{c}) a deformed mesh of a prolate drop in the Taylor regime (system 3), where we applied the mesh relaxation algorithm of \citet{loewenberg1996}.}
\label{fig:mesh}
\end{figure}
We solve integral equations \eqref{eq:intpotential}, \eqref{eq:intjump} and \eqref{eq:stokessurface} numerically using the boundary element method on a discrete representation of the drop surface \citep{pozrikidis2002}. The initially spherical surface is first discretized by successive subdivision of an icosahedron, by which each triangular element is subdivided into four new triangles whose nodes are projected onto the sphere \citep{loewenberg1996}. This leads to a highly uniform triangular mesh, in which we treat each element as a six-node curved element thus allowing for computation of the local curvature. Most of the results we present here are on a surface with $N_\triangle =320$ elements and 642 nodes obtained after $N_d=2$ successive subdivisions, though a few results are also shown with $N_\triangle =1280$ elements and 2562 nodes, corresponding to $N_d=3$ subdivisions. Typical meshes with $N_d=3$ are shown in figure~\ref{fig:mesh} for different levels of deformation. The evaluation of integrals and the calculation of geometrical properties such as the unit normal and curvature on the discretized surface are standard and are outlined in appendix~A.
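The mesh construction by successive icosahedron subdivision can be sketched as follows (standard icosphere construction; each subdivision splits every triangle into four and projects new vertices onto the sphere):

```python
import numpy as np

def icosphere(n_sub):
    """Triangulated unit sphere by recursive subdivision of an icosahedron."""
    t = (1 + np.sqrt(5)) / 2
    verts = np.array([[-1, t, 0], [1, t, 0], [-1, -t, 0], [1, -t, 0],
                      [0, -1, t], [0, 1, t], [0, -1, -t], [0, 1, -t],
                      [t, 0, -1], [t, 0, 1], [-t, 0, -1], [-t, 0, 1]], float)
    verts /= np.linalg.norm(verts, axis=1, keepdims=True)
    verts = [v for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(n_sub):
        cache, new_faces = {}, []
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in cache:
                m = verts[i] + verts[j]
                verts.append(m / np.linalg.norm(m))   # project onto the sphere
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), faces
```

Two subdivisions give the 320 triangles quoted above, with 162 corner vertices and 480 edges; the 642 nodes cited in the text then correspond to the corner vertices plus the 480 midside nodes of the six-node curved elements ($162+480=642$).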
The numerical algorithm during one integration step can be summarized as follows:
\begin{itemize}
\item Given an interfacial charge distribution $q$ (which is taken to be uniformly zero at $t=0$), solve for $\llbracket E_n\rrbracket$, $E_n^+$ and $E_n^-$ by inverting equation (\ref{eq:intjump}) numerically, together with equation (\ref{eq:normalelectric}). Discretization of the integrals in (\ref{eq:intjump}) yields a large algebraic system which we solve iteratively using GMRES \citep{saad1986}.
\item Evaluate the electric potential $\varphi$ on the drop surface using equation (\ref{eq:intpotential}), where the single-layer density $\llbracket E_n\rrbracket$ is known.
\item Differentiate $\varphi$ on the drop surface using the method outlined in appendix~A to obtain the tangential component $\boldsymbol{E}_{t}=-(\mathsfbi{I}-\boldsymbol{nn})\bcdot\bnabla \varphi$ of the electric field.
\item Calculate the jump in hydrodynamic tractions $\llbracket \boldsymbol{f}^{H}\rrbracket$ using the dynamic boundary condition (\ref{eq:stressbalance}), where electric tractions and surface tension forces are known from the solution of the electric problem and from the current geometry.
\item Solve for the interfacial velocity $\boldsymbol{v}$ by inverting the boundary integral equation (\ref{eq:stokessurface}), which again yields an algebraic system after discretization of the integrals.
\item Update the surface charge density $q$ and advance the position of the surface nodes $\boldsymbol{x}_{i}$ by numerical integration of the charge conservation equation and kinematic boundary condition,
\begin{align}
&\frac{\partial q}{\partial t}= \frac{Q+2}{1+2R}(E_n^--RE_n^+)- \boldsymbol{\nabla}_s\boldsymbol{\cdot}(q\boldsymbol{v}_t)+ \boldsymbol{v}_m \boldsymbol{\cdot}\boldsymbol{\nabla}_s q, \label{eq:chargeeq} \\
&\frac{\mathrm{d} \boldsymbol{x}_{i}}{\mathrm{d}t} = [\boldsymbol{v}(\boldsymbol{x}_{i})\boldsymbol{\cdot}\boldsymbol{n}(\boldsymbol{x}_{i})]\,\boldsymbol{n}(\boldsymbol{x}_{i})+ \boldsymbol{v}_m(\boldsymbol{x}_{i}),
\label{eq:nodeadvect}
\end{align}
where $\boldsymbol{v}_{m}$ denotes the tangential mesh relaxation velocity and is determined using the method proposed by \cite{loewenberg1996}. Numerical integration of these equations is performed explicitly in time using a second-order Runge-Kutta scheme.
\end{itemize}
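The explicit second-order Runge-Kutta update has a generic form; a minimal sketch using the midpoint variant (the text does not specify which second-order variant is used) with a scalar relaxation equation standing in for the coupled charge/position system:

```python
import math

def rk2_step(f, t, y, dt):
    """One explicit midpoint (second-order Runge-Kutta) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    return y + dt * k2

# usage: charge-relaxation-like test problem dq/dt = -q, q(0) = 1
t, y, dt = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk2_step(lambda t, y: -y, t, y, dt)
    t += dt
```

After integrating to $t=1$ the numerical solution agrees with $e^{-1}$ to within the expected $O(\Delta t^2)$ global error.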
The charge conservation equation (\ref{eq:chargeeq}) requires numerical evaluation of the surface divergence and gradient appearing on the right-hand side. These quantities are obtained by analytical differentiation based on the parametrization discussed in appendix~A; an alternate method based on finite volumes \citep{yon1998}, and a semi-implicit scheme wherein the linear $\llbracket \sigma E_n\rrbracket$ and nonlinear $\boldsymbol{\nabla}_s\boldsymbol{\cdot}(q\boldsymbol{v})$ terms are treated implicitly and explicitly, respectively, were also attempted but did not produce significant differences in the results. The numerical method was tested extensively by first considering the case of a solid spherical particle under Quincke rotation, for which an exact analytical solution based on spherical harmonics is available \citep{das2013}, and by comparison with previous numerical studies of drop dynamics in simple shear flow \citep{kennedy1994} and under electric fields in the absence of charge convection \citep{lac2007}.
\section{Results and discussion} \label{sec:results}
We now turn to simulation results, which we compare with existing experimental data. Following prior studies, we characterize deviations from the spherical shape using Taylor's deformation parameter $\mathcal{D}$, which we define as
\begin{equation}
\mathcal{D}=\frac{L-B}{L+B}.
\end{equation}
In axisymmetric configurations (Taylor regime), $L$ and $B$ denote the lengths of the drop axes in directions parallel and perpendicular to the electric field, respectively, so that the sign of $\mathcal{D}$ distinguishes between oblate ($\mathcal{D}<0$) and prolate ($\mathcal{D}>0$) shapes. When electrorotation takes place (Quincke regime), $L$ and $B$ are defined as in figure~\ref{fig:figure1} as the lengths of the major and minor axes of the drop, respectively, so that $\mathcal{D}>0$ at all times. We also introduce the tilt angle $\alpha$ as the angle between the major axis of the drop and the plane normal to the applied field, where $\alpha=0$ in the Taylor regime and $\alpha>0$ in the Quincke regime. The determination of these geometric quantities is performed by fitting an ellipsoid to the drop surface using a least-squares algorithm.
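A least-squares ellipsoid fit of this kind can be sketched as follows (a centered-quadric fit; the synthetic test shape and all names are illustrative, not taken from the paper):

```python
import numpy as np

def fit_ellipsoid(pts):
    """Least-squares fit of a centered ellipsoid x^T M x = 1 to surface points.
    Returns semi-axis lengths (descending) and the principal directions."""
    x, y, z = pts.T
    # unknowns: the 6 independent entries of the symmetric matrix M
    A = np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * y * z, 2 * z * x])
    m, *_ = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)
    M = np.array([[m[0], m[3], m[5]],
                  [m[3], m[1], m[4]],
                  [m[5], m[4], m[2]]])
    evals, evecs = np.linalg.eigh(M)       # ascending eigenvalues
    return 1.0 / np.sqrt(evals), evecs     # largest semi-axis first

# synthetic tilted drop: semi-axes (1.2, 1.0, 0.8), major axis tilted by 0.3 rad
rng = np.random.default_rng(1)
u = rng.standard_normal((400, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
c, s = np.cos(0.3), np.sin(0.3)
Ry = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # rotation about y
pts = (u * np.array([1.2, 1.0, 0.8])) @ Ry.T

axes, dirs = fit_ellipsoid(pts)
L, B = 2 * axes[0], 2 * axes[2]
D = (L - B) / (L + B)                      # Taylor deformation parameter
alpha = np.arcsin(abs(dirs[:, 0] @ np.array([0.0, 0.0, 1.0])))  # tilt angle
```

For this exactly ellipsoidal point cloud the fit recovers the semi-axes, $\mathcal{D}=0.2$ and $\alpha=0.3$ to machine precision.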
\subsection{Taylor regime}\label{sec:taylorregime}
\begin{table}
\vspace{-0.3cm}
\begin{center}
\begin{tabular}{cccccccccc}
System & $\epsilon^+/\epsilon_0$ & $\epsilon^-/\epsilon_0$ & $\sigma^+$ & $\sigma^-$ & $\mu^+$ & $\mu^-$ & $\gamma$ & $a$ &$E_0$ \\[1pt]
& & & (S $\text{m}^{-1}$) & (S $\text{m}^{-1}$)& (Pa s)& (Pa s) & (mN $\text{m}^{-1}$) & (mm) & (kV $\text{cm}^{-1}$) \\ [5pt]
1a & 4.9 & 2.8 & $5.8 \times 10^{-11}$ & $0.2 \times 10^{-11}$ & 0.68 & 0.05 & 4.5 & 2.0 & 1.6 \\[0pt]
1b & 4.9 & 2.8 & $5.8 \times 10^{-11}$ & $0.2 \times 10^{-11}$ & 0.68 & 0.05 & 4.5 & 2.0 & 2.1 \\[0pt]
1c & 4.9 & 2.8 & $5.8 \times 10^{-11}$ & $0.2 \times 10^{-11}$ & 0.68 & 0.05 & 4.5 & 2.0 & 6.1 \\[0pt]
2a & 5.3 & 3.0 & $4.5 \times 10^{-11}$ & $0.12 \times 10^{-11}$ & 0.69 & 0.97 & 4.5 & 0.7 & 0.45--2.0 \\[0pt]
2b & 5.3 & 3.0 & $4.5 \times 10^{-11}$ & $0.12 \times 10^{-11}$ & 0.69 & 0.97 & 4.5 & 2.1 & 0.26--1.2 \\ \hline
\end{tabular}
\caption{Material properties: systems 1 and 2 correspond to the experiments of \citet{lanauze2015} and \citet{salipante2010}, respectively. $\epsilon_0=8.8542 \times 10^{-12}\,\text{F m}^{-1}$ denotes the permittivity of vacuum.} \label{table:dimensionaltaylor}
\end{center}
\end{table}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{cccccc}
System & $R$ & $Q$ & $\lambda$ & $Ca_E$ & $Ma$ \\ [5pt]
1a & 29.0 & 0.57 & 0.074 & 0.49 & 0.65 \\[0pt]
1b & 29.0 & 0.57 & 0.074 & 0.85 & 0.375 \\[0pt]
1c & 29.0 & 0.57 & 0.074 & 7.18 & 0.045 \\[0pt]
2a & 36.6 & 0.57 & 1.41 & 0.03--0.6 & 0.27--5.4 \\[0pt]
2b & 36.6 & 0.57 & 1.41 & 0.03--0.6 & 0.8--16\\[0pt]
3 & 0.1 & 1.37 & 1 & 0.3 & 0.5 \\ \hline
\end{tabular}
\caption{Dimensionless parameters corresponding to the material properties of table 1: systems 1, 2 and 3 correspond to the experiments of \citet{lanauze2015}, \citet{salipante2010} and \citet{ha2000a}, respectively.}
\label{table:dimensionlesstaylor}
\end{center}
\end{table}
We first investigate drop dynamics in the Taylor regime, where the drops attain either a steady oblate or prolate shape depending on material properties. The Taylor regime was addressed in our recent work using both a small-deformation theory and axisymmetric boundary element simulations \citep{das2016}, and is primarily used here as a benchmark for our three-dimensional algorithm. Material properties in our simulations are chosen based on the experiments of \citet{lanauze2015} for transient (system 1) and \citet{salipante2010} for steady drop deformations (system 2) and are provided in table~\ref{table:dimensionaltaylor}; corresponding dimensionless parameters are presented in table~\ref{table:dimensionlesstaylor}. Both of these experiments focused on oblate drops. We also consider the case of prolate deformations using one set of parameters from the experiments of \citet{ha2000a} (system 3); their study, however, did not report all the material properties necessary to construct the five dimensionless groups required in our model, so we arbitrarily set the electric capillary number and Mason number values to $Ca_E=0.3$ and $Ma=0.5$, respectively.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure3.pdf}
\caption{(Color online) Deformation parameter $\mathcal{D}$ as a function of time for the parameters of: ($a$) system 1a, ($b$) system 1b, ($c$) system 1c, and ($d$) system 3. Boundary element (BEM) results are compared to the experiments of \cite{lanauze2015} in the case of oblate drops, and to various small-deformation theories (SDT). The steady deformation values predicted by the models of \citet{taylor1966} and \citet{ajayi1978} in the case of system 1c are $-0.75$ and $-1.40$, respectively, and lie outside the frame of the figure. The effect of the mesh relaxation (MR) algorithm is also shown and found to be greater when large deformations arise (system 3).}
\label{fig:transienttaylor}
\end{figure}
Figure \ref{fig:transienttaylor}($a$) shows the transient deformation of an oblate drop corresponding to system 1a for an electric field strength of $E_0/E_c=0.49$. Unsurprisingly, the axisymmetric boundary element method performs best in predicting the drop deformation when compared with experiments. Results from our three-dimensional simulations are shown for two different mesh resolutions ($N_d=2$ and $3$) as a convergence test; we find as expected that the accuracy improves with increasing $N_d$, and the results with $N_d=3$ are nearly identical to the predictions of the axisymmetric code. The classic small-deformation theories of \cite{taylor1966} and \cite{ajayi1978} that neglect interfacial charge convection perform rather poorly; however, inclusion of charge convection in the theoretical model improves the results considerably \citep{das2016}.
The case of system 1b, corresponding to a stronger applied field ($E_0/E_c=0.64$), shows the same trends albeit with larger deformations in figure~\ref{fig:transienttaylor}($b$). While the boundary element simulations capture the transient and steady state accurately, the performance of the small-deformation theories degrades owing to the larger deformations. The surface charge distribution and fluid velocity obtained from the three-dimensional simulation for this case are illustrated at three different times in figure~\ref{fig:snaptaylor}. As revealed by these snapshots, the interfacial velocity, which is directed from the poles towards the equator, causes transport of negative and positive charges towards the equatorial circumference of the drop, thereby inducing a sharp charge gradient across it. This gradient cannot be captured by small-deformation theories, as these employ truncated spherical harmonic expansions to represent variables; it is also challenging to capture numerically, especially as $E_0/E_c$ is increased further.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{figure4.pdf}
\caption{(Color online) Time evolution profiles of the surface charge density (top row) and interfacial fluid velocity (bottom row) in the case of system 1b in the Taylor regime at $t/\tau_{MW}=1.0$, $2.5$ and $4.0$ (left to right). See supplementary online materials for a movie showing the dynamics and flow field in this case.}
\label{fig:snaptaylor}
\end{figure}
This is illustrated in figure~\ref{fig:transienttaylor}($c$), showing the case of system 1c with an even higher electric field of $E_0/E_c=1.86$. There, the charge gradient across the interface becomes sharper and an actual discontinuity appears that triggers instabilities, reminiscent of the Gibbs phenomenon, leading to the termination of the simulations. \citet{lanauze2015} were the first to discover this charge shock in their numerical work, and suggested that it might be an artefact of the axisymmetric nature of their boundary element simulations, which prevents transition to Quincke electrorotation. As we demonstrate here, the charge shock can in fact develop in the Taylor regime, where it results from the quadrupolar Taylor flow, which in the case of oblate drops sweeps positive and negative charges towards the equator. The strength of this flow increases with electric field and is more pronounced for low-viscosity drops, leading to stronger shocks in these cases. While more analysis is required to understand the detailed structure of these shocks, we note that the Melcher--Taylor leaky dielectric model does not account for charge diffusion, which may have a regularizing effect in experiments. As expected, figure~\ref{fig:transienttaylor}($c$) shows a very poor performance of small-deformation theories in this regime, which are slightly improved by inclusion of charge convection but are unable to capture the charge discontinuity.
The case of prolate drop deformations corresponding to system 3 is shown in figure~\ref{fig:transienttaylor}($d$), where larger deformations are observed. The steady state deformation value reported in the experiments of \citet{ha2000a}, which did not specify the value of $Ma$, is $\mathcal{D}=0.25$; the simulations of \citet{lac2007} with $Ma \rightarrow \infty$ reported $\mathcal{D}=0.22$, while our simulations with $Ma=0.5$ predict $\mathcal{D}=0.27$. No experimental data exist for the transient deformation, so we use axisymmetric simulations as the benchmark in this case. We find as expected that the three-dimensional simulations with $N_d=3$ perform best, especially when the mesh relaxation algorithm is used as deformations are significant. Unsurprisingly, the large drop deformation is poorly captured and underpredicted by the various small-deformation theories.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure5.pdf}
\caption{(Color online) Steady drop deformation $\mathcal{D}$ as a function of electric capillary number $Ca_E$ for the parameters of: ($a$) system 2a, and ($b$) system 2b. Boundary element (BEM) results are compared to the experiments of \citet{salipante2010} and to various small-deformation theories (SDT).}
\label{fig:steadytaylor}
\end{figure}
We conclude the discussion of the Taylor regime by considering steady state drop deformations corresponding to system 2, for which we compare our simulations with theoretical and experimental data in figure~\ref{fig:steadytaylor}. Steady deformation values are shown for increasing values of electric capillary number $Ca_E$ for two different drop sizes of $a=0.7\,\mathrm{mm}$ and $a=2.1\,\mathrm{mm}$. For a given value of $Ca_E$, the smaller drop experiences a stronger electric field corresponding to a lower value of $Ma$ when compared to the larger drop. As a consequence, the small drop experiences stronger charge convection on its surface, which tends to reduce deformations as previously shown by other authors \citep{feng1999,lanauze2015}. Consistent with previous results, the axisymmetric and three-dimensional simulations perform best, followed by the small-deformation theory with convection \citep{das2016}. Since the effect of convection is weaker in the case of the larger drop, the small-deformation theories without convection do not deviate as much from the experimental data and simulation results as for the smaller drop.
\subsection{Quincke regime}
We now turn our attention to the electrorotation of drops in the Quincke regime, which is seen to occur when the applied field exceeds a certain critical value. For comparison with experiments, we use the parameter values provided by \citet{salipante2010} but restrict ourselves to small drop sizes. We consider two different sets of material properties which are summarized in tables~\ref{table:dimensionalquincke} and \ref{table:dimensionlessquincke} and correspond to different viscosity ratios. The heuristic mesh relaxation algorithm of \citet{loewenberg1996} is not included in the simulations in the Quincke regime, as we found that it caused numerical instabilities preventing the simulations from reaching steady state; as deformations tend to be fairly moderate when electrorotation takes place ($\mathcal{D} \lesssim 0.1$ in the simulations shown below), we do not expect significant errors due to mesh distortion.
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{cccccc}
System & $\mu^+$ & $\mu^-$ & $\gamma$ & $a$ &$E_0$ \\[0pt]
& (Pa.s)& (Pa.s) & (mN.$\text{m}^{-1}$) & (mm) & (kV.$\text{cm}^{-1}$) \\ [5pt]
2c & 0.69 & 9.74 & 4.5 & 0.25, 0.75, 1.25, 1.75 & 0.67--5.36 \\[0pt]
2d & 0.69 & 4.87 & 4.5 & 0.25, 0.75, 1.25, 1.75 & 0.67--5.36 \\[0pt]
\hline
\end{tabular}
\caption{Material properties for system 2, corresponding to the experiments of \citet{salipante2010} with a critical electric field of $E_c=2.68$ kV.$\text{cm}^{-1}$. The permittivity and conductivity values for this system are given in table \ref{table:dimensionaltaylor}.}
\label{table:dimensionalquincke}
\end{center}
\end{table}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{cccccc}
System & $R$ & $Q$ & $\lambda$ & $Ca_{MW}$ & $E_0/E_c$ \\ [5pt]
2c & 36.6 & 0.57 & 14.1 & 0.44, 1.32, 2.20, 3.08 & 0.25--2.0 \\[0pt]
2d & 36.6 & 0.57 & 7.05 & 0.23, 0.69, 1.15, 1.61 & 0.25--2.0 \\[0pt]
\hline
\end{tabular}
\caption{Dimensionless parameters corresponding to the material properties shown in table \ref{table:dimensionalquincke} for system 2, obtained from the experiments of \citet{salipante2010}.}
\label{table:dimensionlessquincke}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{figure6.pdf}
\caption{(Color online) Time evolution profiles of the surface charge density (top row) and interfacial fluid velocity (bottom row) in the case of system 2c in the Quincke regime at $t/\tau_{MW}=3.75$, $5.25$ and $10.5$ (left to right). See supplementary online materials for a movie showing the dynamics and flow field in this case.}
\label{fig:snapquincke}
\end{figure}
A typical simulation exhibiting Quincke rotation is illustrated in figure~\ref{fig:snapquincke} in the case of system 2c for an initial drop radius of $a=1.25\,\mathrm{mm}$ and electric field $E_0/E_c=1.5$, where $E_c$ is the critical electric field for the onset of rotation of a rigid sphere given in equation~(\ref{eq:Ec}). The figure shows both the interfacial charge profile and interfacial velocity field at different times during the transient. Upon application of the field, the drop deforms towards an oblate shape similar to that found in the Taylor regime. This configuration, however, becomes unstable and leads to the rotation of the drop with respect to an arbitrary axis perpendicular to the field direction. As it rotates, the drop relaxes towards a more spherical shape as we characterize in more detail below, and ultimately reaches a steady shape with a tilt angle $\alpha$ with respect to the horizontal plane. As is visible in figure~\ref{fig:snapquincke}, the charge profile is smoother than in the Taylor regime and is no longer axisymmetric, leading to a net electrostatic dipole that forms an angle with the field direction; the nature of the flow is also significantly different from the classic Taylor flow and appears to be primarily rotational. The transient dynamics are illustrated in more detail in figure~\ref{fig:transientquincke}, showing the tilt angle $\alpha$ and deformation parameter $\mathcal{D}$ as functions of time for different electric field strengths. Oscillations in both $\alpha$ and $\mathcal{D}$ are observed during the transient and are more significant in stronger fields, where the drop can undergo actual tumbling before its orientation stabilizes; similar time dynamics have also been reported in experiments \citep{salipante2010} and theory \citep{he2013}. 
In yet stronger fields, experiments have shown that the dynamics in some cases do not reach a steady state but instead exhibit chaotic tumbling and stretching of the drop \citep{salipante2013}; this regime was not captured in our simulations, which became unstable in very strong fields.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure7.pdf}
\caption{(Color online) (\textit{a}) Tilt angle $\alpha$ and (\textit{b}) drop deformation parameter $\mathcal{D}$ as functions of time $t/\tau_{MW}$ for system 2d with drop size $a=0.75$ mm and $Ca_{MW}=0.69$. Stronger electric fields cause faster and more pronounced oscillations in the tilt angle and drop deformation.}
\label{fig:transientquincke}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure8.pdf}
\caption{(Color online) Phase diagram distinguishing the axisymmetric Taylor regime (empty symbols) from the Quincke electrorotation regime (filled symbols) for two different viscosity ratios: (\textit{a}) $\lambda=14.1$, and (\textit{b}) $\lambda=7.05$.}
\label{fig:phasequincke}
\end{figure}
The transition from the Taylor regime to the Quincke regime is characterized in more detail in figure \ref{fig:phasequincke} showing phase diagrams for systems 2c and 2d in the $(E_0/E_c,Ca_{MW})$ plane, where we recall that for fixed material properties $Ca_{MW}$ is a measure of drop size. The case of a very viscous drop ($\lambda = 14.1$) is shown in figure \ref{fig:phasequincke}($a$), where the critical electric field for the transition to electrorotation is found to be close to the value of $E_c$ for a rigid sphere, yet decreases slightly with increasing $Ca_{MW}$. A small highly viscous drop is indeed expected to behave in the same way as a rigid particle. Increasing $Ca_{MW}$ (or equivalently, drop size) at a fixed value of $E_0/E_c$ leads to larger deformations in the Taylor regime, which causes an increase in the effective dipole induced inside the drop and thus has a destabilizing effect as demonstrated by the decrease in the critical electric field. A similar phase diagram is obtained at the lower viscosity ratio of $\lambda=7.05$ in figure \ref{fig:phasequincke}($b$); decreasing $\lambda$, however, is found to slightly increase the threshold for Quincke rotation. All of these trends are consistent with the experimental data of \cite{salipante2010}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure9.pdf}
\caption{(Color online) (\textit{a}) Steady tilt angle $\alpha$ and (\textit{b}) drop deformation parameter $\mathcal{D}$ as functions of applied electric field strength $E/E_c$ for system 2c for different values of $Ca_{MW}$. Boundary element (BEM) simulation results are compared with the experiments of \citet{salipante2010}.}
\label{fig:tiltlambda14}
\end{figure}
The steady-state tilt angle $\alpha$ is shown as a function of electric field strength in figure~\ref{fig:tiltlambda14}($a$) for system 2c, where it is also compared with the complementary of the angle between the steady dipole and applied electric field in the case of a rigid sphere, which we denote by $\beta$ \citep{salipante2010}:
\begin{equation}
\beta = \frac{\upi}{2} - \arctan{\left[ \left(\frac{E_0^2}{E_c^2}-1\right)^{-1/2} \right]}.
\end{equation}
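As a quick numerical illustration of this expression (a sketch of ours, not part of the original analysis; the function name is arbitrary), $\beta$ indeed vanishes at the onset $E_0 = E_c$ and tends to $\upi/2$ in strong fields:

```python
import math

def beta_rigid(E_ratio):
    """Complementary angle between the steady dipole and the applied field
    for a rigid sphere in Quincke rotation, as a function of E0/Ec.
    Only defined above the rotation threshold, E0/Ec > 1."""
    if E_ratio <= 1.0:
        raise ValueError("formula only applies above the rotation threshold")
    return math.pi / 2 - math.atan((E_ratio**2 - 1.0) ** -0.5)

# beta grows from 0 at onset towards pi/2 in strong fields
for E in (1.01, 1.5, 2.0, 5.0):
    print(f"E0/Ec = {E:4.2f}  ->  beta = {beta_rigid(E):.4f} rad")
```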
In the Taylor regime, the tilt angle is zero as the drop shape is axisymmetric. As field strength increases, a supercritical pitchfork bifurcation is observed at the onset of rotation, with a value of $\alpha$ that increases with $E_0/E_c$ and asymptotes towards $\upi/2$ in strong fields. Both angles $\alpha$ and $\beta$ show similar trends as expected, especially in the case of weakly deformed drops ($Ca_{MW}=0.44$) that behave like rigid spheres. Increasing drop size (or equivalently $Ca_{MW}$) causes the bifurcation to occur at lower field strengths in agreement with the phase diagram of figure~\ref{fig:phasequincke}. These trends once again agree with the experimental results of \citet{salipante2010} at similar values of $Ca_{MW}$.
Corresponding values of the steady drop deformation $\mathcal{D}$ are shown in figure~\ref{fig:tiltlambda14}($b$). Increasing field strength in the Taylor regime leads to stronger deformations in agreement with figure~\ref{fig:steadytaylor}. Interestingly, the transition to electrorotation breaks this trend and leads to a relaxation of the drop towards a more spherical shape. This decrease in $\mathcal{D}$ with the onset of rotation can be rationalized as a result of a change in the nature of the flow. In the Taylor regime, the axisymmetric toroidal vortex flow illustrated in figure~\ref{fig:snaptaylor} is dominated by straining and causes the elongation of the drop in the equatorial plane; under Quincke rotation, the flow becomes primarily rotational and therefore has a weaker effect on drop shape. This qualitative change also has an impact on the charge distribution, which is much smoother in the Quincke regime than in the Taylor regime, thus reducing the effective dipole and the magnitude of electric stresses at a given field magnitude.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure10.pdf}
\caption{(Color online) Parameter $\zeta$, defined in equation~(\ref{eq:zeta}) and calculated at the position of the drop centroid for system 2c, as a function of electric field strength for (\textit{a}) $\lambda=14.1$ and (\textit{b}) $\lambda = 7.05$. Values of $\zeta$ close to $1$ or $-1$ describe flows dominated by either strain or rotation, respectively. }
\label{fig:zeta}
\end{figure}
In order to quantify more precisely the nature of the flow inside the drop, we introduce a parameter $\zeta$ as \vspace{-0.2cm}
\begin{align}
\zeta = \frac{\mathrm{tr}({\mathsfbi{S}^2}) - \mathrm{tr}({\mathsfbi{W}^2})}{\mathrm{tr}({\mathsfbi{S}^2}) + \mathrm{tr}({\mathsfbi{W}^2})}, \label{eq:zeta}
\end{align}
where $\mathsfbi{S}=\tfrac{1}{2}(\boldsymbol{\nabla}\boldsymbol{v} + \boldsymbol{\nabla}\boldsymbol{v}^T)$ and $\mathsfbi{W}=\tfrac{1}{2}(\boldsymbol{\nabla}\boldsymbol{v} - \boldsymbol{\nabla}\boldsymbol{v}^T)$ denote the rate-of-strain and rate-of-rotation tensors, respectively, which we evaluate at the centroid of the drop. With this definition, values of $\zeta$ close to $+1$ and $-1$ describe flows dominated by strain and rotation, respectively. The dependence of $\zeta$ on electric field strength in the steady state is shown in figure~\ref{fig:zeta} for different values of $Ca_{MW}$ and for two viscosity ratios. In the Taylor regime, $\zeta=1$ at the center of the drop, which is to be expected for the axisymmetric Taylor flow. As the transition to electrorotation takes place, $\zeta$ rapidly jumps to a value close to $-1$, which indicates a drastic change in the nature of the flow. Note, however, that $\zeta$ is not strictly $-1$ in the Quincke regime, implying that the flow retains a straining component; nonetheless, we find that $\zeta\rightarrow -1$ as $E_0/E_c$ keeps increasing and the rotational component of the flow becomes more dominant.
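The evaluation of $\zeta$ from a velocity gradient is straightforward. The sketch below is our own illustration (not code from the present solver), with the traces evaluated as the Frobenius magnitudes $\mathrm{tr}(\mathsfbi{S}\mathsfbi{S}^T)$ and $\mathrm{tr}(\mathsfbi{W}\mathsfbi{W}^T)$, which for the symmetric tensor $\mathsfbi{S}$ coincides with $\mathrm{tr}(\mathsfbi{S}^2)$ and keeps $\zeta$ within $[-1,1]$; it recovers the two limiting cases directly:

```python
import numpy as np

def zeta(grad_v):
    """Flow-type parameter at a point, from the 3x3 velocity gradient.
    zeta = +1 for pure strain, -1 for pure rotation, 0 for simple shear."""
    grad_v = np.asarray(grad_v, dtype=float)
    S = 0.5 * (grad_v + grad_v.T)   # rate-of-strain tensor
    W = 0.5 * (grad_v - grad_v.T)   # rate-of-rotation tensor
    s2 = np.sum(S * S)              # tr(S S^T) = tr(S^2) since S is symmetric
    w2 = np.sum(W * W)              # tr(W W^T), i.e. the magnitude of W
    return (s2 - w2) / (s2 + w2)

# pure planar extension: strain only -> zeta = +1
print(zeta(np.diag([1.0, -1.0, 0.0])))
# rigid-body rotation about z: rotation only -> zeta = -1
print(zeta([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]))
```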
\section{Concluding remarks}\label{sec:conclusion}
In this work, we have developed a three-dimensional boundary element method for the unsteady electrohydrodynamics of a deformable viscous drop based on the complete Melcher--Taylor leaky dielectric model including nonlinear charge convection. Our method extends previous numerical studies in this field \citep{sherwood1988,baygents1998,lac2007,lanauze2015}, which either were restricted to axisymmetric shapes or neglected charge convection. Our results were first shown to reproduce the steady oblate and prolate shapes known to arise in the Taylor regime of weak fields and compared favorably with previous models and experiments. In stronger fields, the experimentally observed symmetry-breaking bifurcation and transition to Quincke electrorotation were also captured for the first time in simulations. A phase diagram for the transition between the two regimes was constructed, and the evolution of drop shape and tilt angle with increasing field strength was discussed and shown to agree well with experiments. Our numerical simulations also allowed us to characterize the nature of the flow, which is not easily visualized experimentally, and demonstrated a transition from a strain-dominated flow in the Taylor regime to a primarily rotational flow in the Quincke regime.
Our simulations, which were limited to isolated viscous drops in moderate electric fields, open the way for the study of more complex situations. The cases of very strong fields and low-viscosity drops remain challenging numerically: our numerical method was found to become unstable in these limits, thus preventing us from investigating the unsteady chaotic dynamics observed in the experiments of \cite{salipante2013}. Another difficulty arising in this case is the formation of charge shocks as shown by previous studies \citep{lanauze2013,das2016} and illustrated in figure~\ref{fig:snaptaylor}. The accurate treatment of these sharp charge discontinuities should require the implementation of a shock capturing scheme for the solution of the charge conservation equation. High-order weighted essentially non-oscillatory (WENO) schemes \citep{hu1999} within a finite-volume formulation could prove useful towards this purpose, though their implementation on unstructured meshes is non-trivial.
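To make the WENO suggestion above concrete, the sketch below shows the core of a fifth-order WENO (WENO5-JS) reconstruction of a left-biased interface value on a uniform one-dimensional grid. This is a generic textbook scheme offered purely as an illustration of how nonlinear smoothness weights bias the reconstruction away from stencils containing a discontinuity; it is not the method implemented in our solver, and the extension to the unstructured surface meshes used here is precisely the non-trivial step:

```python
import numpy as np

def weno5_left(v, i, eps=1e-6):
    """WENO5-JS reconstruction of the left-biased value v_{i+1/2}^-
    on a uniform grid, using cell averages v[i-2..i+2]."""
    # candidate third-order reconstructions on the three sub-stencils
    p0 = (2*v[i-2] - 7*v[i-1] + 11*v[i]) / 6.0
    p1 = (-v[i-1] + 5*v[i] + 2*v[i+1]) / 6.0
    p2 = (2*v[i] + 5*v[i+1] - v[i+2]) / 6.0
    # Jiang--Shu smoothness indicators
    b0 = 13/12*(v[i-2] - 2*v[i-1] + v[i])**2 + 0.25*(v[i-2] - 4*v[i-1] + 3*v[i])**2
    b1 = 13/12*(v[i-1] - 2*v[i] + v[i+1])**2 + 0.25*(v[i-1] - v[i+1])**2
    b2 = 13/12*(v[i] - 2*v[i+1] + v[i+2])**2 + 0.25*(3*v[i] - 4*v[i+1] + v[i+2])**2
    # nonlinear weights: collapse the weight of any non-smooth stencil
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

For smooth data the weights approach their optimal values $(0.1, 0.6, 0.3)$ and fifth-order accuracy is recovered; near a jump the weight of the offending stencil collapses, suppressing Gibbs-type oscillations of the kind seen in figure~\ref{fig:transienttaylor}($c$).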
Extensions of the present work could also include the consideration of sedimentation, which couples nonlinearly with the electrohydrodynamic problem as a result of charge convection and was recently discussed theoretically in the limit of small deformations and weak fields \citep{bandopadhyay2016,yarivalmog16}. Droplet-droplet and droplet-wall interactions, either pairwise or in collections of multiple drops, would also be interesting to analyze in the light of recent experiments on droplet pairs \citep{dommer16} and emulsions \citep{varshney2012,varshney2016}. Such interactions also have yet to be studied numerically, which would likely require the use of an accelerated algorithm such as the fast multipole method \citep{zinchenko2000}.
\section*{Acknowledgements}
The authors thank P.~Vlahovska and P.~Salipante for helpful discussions and suggestions, A.~Khair and J.~Lanauze for useful comments and sharing their experimental data, A.~Spann and M.~Theillard for discussions on the implementation of the numerical scheme. Acknowledgement is made to the Donors of the American Chemical Society Petroleum Research Fund for support of this research through grant 53240-ND9.
\section{Introduction}
\label{sec:intro}
Models with extra spatial dimensions allow us to confront some of the
outstanding issues of the Standard Model (SM) (see \cite{ADD, RS,ED6}).
In particular, the Universal Extra Dimensions (UED) scenario \cite{Appelquist:2000nn}
leads to an interesting dark
matter candidate \cite{Servant:2002aq,Cheng:2002ej,Kong:2005hn,Burnell:2005hm,Arrenberg:2008wy}
as well as a foil for searches for Supersymmetry
at colliders \cite{Rizzo:2001sd,Cheng:2002ab,Datta:2005zs,Battaglia:2005zf}. In the original minimal UED picture (MUED), all of the
SM fields live in a 5-dimensional $S^1/Z_2$ orbifolded bulk with a
compactification radius $R$. Due to the breaking of 5D Lorentz invariance,
Kaluza-Klein (KK) number is no longer conserved although a $Z_2$ symmetry,
KK-parity, remains. This being the case, the tree-level wave functions for the
various KK states are either sines or cosines in the coordinate
of the extra dimension. Allowing for radiative loop
corrections to the tree-level particle masses, the physics of MUED is then
described by only two parameters beyond those of the SM \cite{Cheng:2002iz}:
$R$ and a cutoff
scale, $\Lambda$, used to define these loop corrections, which is usually
taken such that $\Lambda R \sim {\cal O}(10-100)$ but with only logarithmic sensitivity
to this particular choice.
In MUED and its extension to higher dimensions
\cite{Burdman:2006gy,Dobrescu:2007xf,Freitas:2007rh,Dobrescu:2007ec},
the bulk masses of the
SM fermions are taken to be zero. However, this is no longer the case in
Split-UED (SUED) \cite{sued1,sued2,sued3,sued4}. Indeed, this `bulk mass' term is naturally included in the effective Lagrangian, as it is compatible with both 5D Lorentz invariance and the gauge invariance of the model. Note that, in order to maintain KK parity, the coefficient of the $\bar \Psi \Psi$ fermion bilinear in the action must be an odd function of the 5D coordinate,
$y$, defined on an interval, $y \in (-L, L)$ where $L=\frac{\pi R}{2}$.
The simplest choice to make in
this case, as is similarly done in the Randall-Sundrum (RS) model,
is to write this coefficient as $\mu \theta(y)$, where $\mu$ is
a dimensionful parameter whose value is, in general, dependent upon which
SM field is being considered and $\theta(y)=1(-1)$ for $y>(<)0$. Naturally,
one might expect that the values of $\mu$ can be of either sign and be of
order $\sim 1/R$. The effects of including a non-zero value for $\mu$ are
two-fold: First, depending upon its sign, the fermion zero-modes, which are
identified with the known SM fermions, no longer have flat wave functions in
the extra dimension. These are now found to be either peaked near $y=0$ or at
the orbifold boundaries; this leads to potentially large differences in the
various couplings of these fermions to the KK gauge fields from those expected
in MUED. In particular, the zero mode fermions now have tree-level couplings
to the KK-number even gauge modes. Second, the KK fermion wave functions and
masses (which are given by $\sim n/R$ at tree-level) are now somewhat more
complicated and are explicitly dependent upon the specific value
of the $\mu$ parameter. In particular, the expressions for the KK fermion
masses are {\it different} depending upon whether the relevant
KK-number is even or odd.
The purpose of this paper is ($i$) to explore in some detail the
implications of non-zero values of the fermion bulk mass parameter $\mu$,
which lead to alterations from the conventional MUED
phenomenology, and ($ii$) to investigate the regions of the $R-\mu$ plane
which are accessible to current and future collider experiments.
To these ends, in Section \ref{sec:SUED}, we provide a basic overview of the masses,
wave functions and couplings of the fermion KK states in
split-UED model and display their
explicit dependence upon the parameter $\mu$ pointing out important
differences with the MUED case. Here we will assume that the $\mu$ parameter
takes on a universal value for all fermions for simplicity of the analysis so
that there is only one new parameter to consider beyond that of MUED. In
Section \ref{sec:collider} we will discuss the collider phenomenology of split-UED
and, in particular, the properties of the KK states and the potential for their
discovery at the LHC. Further, we obtain the regions of the $R-\mu$ plane
which are allowed by current experimental data and show the regions which will
be made accessible by searches at the LHC. Our conclusions can be found in
Section \ref{sec:conclusion}.
Appendix \ref{app:spectrum} contains detailed information on the KK decomposition and mass spectrum.
\section{Split Universal Extra Dimensions}
\label{sec:SUED}
\subsection{Model}
\label{sec:model}
Universal extra dimensions postulates that all of the Standard Model particles propagate in small extra dimension(s). Orbifold compactification makes it possible to construct a chiral four-dimensional effective theory.
In contrast to the brane world scenarios \cite{ADD,RS}, the translational symmetry along the extra dimension leads to a remnant discrete symmetry, dubbed KK parity, so that the lightest Kaluza-Klein particle can be a good dark matter candidate. This parity also mimics R-parity in supersymmetric theories, so that UED phenomenology shares several common features with the MSSM \cite{Cheng:2002ab}. On the other hand, it has often been overlooked in UED models that bulk Dirac masses are generically allowed and are not in conflict with higher-dimensional
Lorentz symmetry or gauge invariance. In this section, we review the split-UED model, where these bulk Dirac masses are included in a way that keeps KK parity intact.
In split-UED, quarks $(Q, U^c, D^c)$ and leptons $(L, E^c)$ are all promoted to fields in five-dimensional spacetime on the $S^1/Z_2 \times M^4$ orbifold with two fixed points at $y=- L$ and $y=L$, where $y$ is the coordinate along the extra dimension with half-length $L=\pi R/2$. In the minimal setup, the gauge group is the same as in the Standard Model: $SU(3)_c\times SU(2)_W \times U(1)_Y$, under which charges are assigned as follows
\begin{eqnarray}
\Psi_i(x,y)=(Q_i, U^c_i, D^c_i, L_i, E^c_i)^c=((3,2)_{1/6},( \bar{3},1)_{-2/3},(\bar{3},1)_{1/6},(1,2)_{-1/2},(1,1)_{1})^c \, ,
\end{eqnarray}
where the index $i$ runs for three generations of fermions.
Allowing for bulk mass terms in split-UED,
the generic action $S=\int d^4 x \int_{-L}^L dy {\cal L}_5$ is given by
\begin{eqnarray}
{\cal L}_5= \sum_{i,j=1}^3 \frac{i}{2} (D_M \bar{\Psi}_i \Gamma^M \Psi_j -
\bar{\Psi}_i \Gamma^M D_M \Psi_j) -m_{ij}(y) \bar{\Psi}_i \Psi_j \, ,
\label{Eq:action}\end{eqnarray}
where the covariant derivative is $D_M =\partial_M + i g_3 \frac{\lambda^\alpha}{2} G^\alpha_M + i g_2 \frac{T^a}{2} W^a_M + i g_1Y B_M $ with the usual Gell-Mann and Pauli matrices $\lambda$ and $T$.
The $g_1$, $g_2$, $g_3$ and $G$, $W$, $B$ are the gauge coupling constants and
the gauge fields of the corresponding gauge groups, respectively.
Without loss of generality we can diagonalize the action in Eq. (\ref{Eq:action})
by unitary transformations. Therefore the mass term $m_{ij}$ can be taken as
$ m_{ij} = m_i \delta_{ij}$ and there is no kinetic mixing between different flavors (for $i\neq j$).
In general, we may have dimensionful parameters $(m_Q, m_{U^c}, m_{D^c}, m_L, m_{E^c})$ for each generation. Now, imposing Dirichlet boundary conditions on the unwanted chiral components of the fermions,
we recover exactly the SM spectrum for the lowest Kaluza-Klein modes.
All details of the derivation of the Kaluza-Klein spectra are given in the Appendix.
The most prominent feature of split-UED is that the fermion profile in the extra dimension is either localized near the origin or at the boundaries, depending on the sign of the bulk mass parameter $m_i(y)=\mu_i \theta(y)$, in a way that Kaluza-Klein parity is respected. With a non-zero bulk mass, a field still has a massless zero mode which satisfies Neumann boundary conditions. Its Kaluza-Klein excitations, however, get additional contributions and the mass is given by
$m_n = \sqrt{k_n^2 + \mu^2}$, where $k_n$ is the momentum along the extra dimension, determined by $\mu= \pm k_n \cot k_n L$ for $n\in Z_{\rm odd}$ or $k_n = n\pi/(2L)$ $(=n/R)$ for $n\in Z_{\rm even}$. Here we impose Dirichlet boundary conditions on the $\Psi_L$ modes so that $\Psi_R$ contains the SM fermions in our
convention. We assume that the gauge and Higgs sectors remain the same as in the conventional UED models. Therefore the zero modes have flat profiles and the Kaluza-Klein modes have cosine wave functions satisfying Neumann boundary conditions.
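As an illustration of this quantization condition (a sketch of ours, not from the original text), the lowest KK-odd momentum can be found by simple bisection. We pick the minus branch, $\mu = -k_n \cot(k_n L)$, which reduces to the MUED value $k_1 = \pi/(2L) = 1/R$ as $\mu \to 0$:

```python
import math

def k_odd_lowest(mu, L, tol=1e-12):
    """Lowest KK-odd root of mu = -k cot(kL), found by bisection on
    kL in (0, pi).  g(k) = k cot(kL) + mu runs from 1/L + mu down to
    -infinity on this interval, so a root exists whenever mu*L > -1
    (the region studied in the text)."""
    g = lambda k: k / math.tan(k * L) + mu
    lo, hi = 1e-9 / L, (math.pi - 1e-9) / L
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def m_kk(k, mu):
    """KK mass from extra-dimensional momentum and bulk mass."""
    return math.hypot(k, mu)
```

For $\mu=0$ this returns $k_1 L = \pi/2$, i.e. the usual MUED first-level mass $1/R$, while a positive (negative) bulk mass pushes the root and the resulting $m_1=\sqrt{k_1^2+\mu^2}$ up (down) relative to it.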
In summary, in split-UED there are $15$ new dimensionful parameters $\mu_\Psi$ (five fields for each of the three generations),
the cutoff scale ($\Lambda$) and one length parameter $L$ ($=\pi R/2$) given by the size of the extra dimension, in addition to the SM parameters.
In this study we take all bulk mass parameters to be the same for simplicity,
and study the region $\mu L \geq -1$
\footnote{For $\mu L < -1$, the KK spectra contain unacceptably light modes
below the KK scale $\sim$ TeV.}.
\subsection{Behavior of couplings}
\label{sec:couplings}
Having the explicit wave functions of the fields as given in the Appendix, we can calculate the explicit Lagrangian for the interactions among those fields. Essentially, the overlap integral of the wave functions gives the effective couplings. For a gauge boson $V=(G, W, B)$, after choosing a gauge that removes the fifth component $V_5$ of the
gauge multiplet via the orbifold condition, we find
\begin{eqnarray}
-{\cal L}_{\rm int} &\ni& g_V \int_{-L}^L d y \, \bar{\Psi}\Gamma^\mu \Psi V_\mu \\
&=& g_V \sum_{\ell mn}\int_{-L}^L d y \,
\Big [ \bar{\psi}^\ell {f^\ell_\Psi}^* (y) \Big ] \gamma^\mu
\Big [ \psi^m f_\Psi^m(y) \Big ]
\Big [ V_\mu^n f_V^n(y) \Big ] \\
&=& \sum_{\ell mn} g^{eff}_{\ell mn} \bar{\psi}^\ell \gamma^\mu \psi^m V_\mu^n \, ,
\end{eqnarray}
where the effective coupling is obtained by the integration of the wave function overlap
with a convenient dimensionless variable $x_\Psi=\mu_\Psi L$:
\begin{eqnarray}
g^{eff}_{\ell mn} &&\equiv g_V \int_{-L}^L d y \, {f^\ell _\Psi}^*(y) f_\Psi^m(y) f_V^n(y) \\
&&\equiv g_V {\cal F}_{\ell mn}(x_\Psi) \, .
\end{eqnarray}
As the profiles of the gauge bosons are universal while the profiles of the fermions depend on
the bulk mass parameter $\mu_\Psi$, the overlap integral ${\cal F}_{\ell mn}$ is the same for all gauge bosons
but depends on $\mu_\Psi$. Suppressed gauge group indices are understood.
Let us now consider the coupling between the KK bosons ($G, W, B$) of $SU(3)_c$, $SU(2)_W$ and $U(1)_Y$ and a zero mode SM fermion pair, for definiteness \footnote{As is clear in minimal UED, the weak mixing angles for KK gauge bosons are suppressed by $m_W/m_{\rm KK} \ll 1$. Thus the gauge eigenstates are essentially well aligned with the mass eigenstates.}.
The zero mode wave function profile for SM fermion $\Psi_i=(Q,U^c, D^c, L,E^c)^c$ is given by
\begin{eqnarray}
f_i^{(0)}(y) = \sqrt{\frac{\mu_i}{1-e^{-2\mu_i L}}} e^{-\mu_i |y|} \, .
\label{Eq:zeromode}
\end{eqnarray}
If $\mu_i>0 (<0)$, the profile is exponentially localized near the center (at the boundaries). The zero mode is massless in the absence of the electroweak symmetry breaking even though its KK modes get additional mass from the bulk mass $\mu_i$.
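The normalization in equation (\ref{Eq:zeromode}) can be checked directly. The sketch below (our own, not from the original text) verifies $\int_{-L}^{L} |f_i^{(0)}|^2\, dy = 1$ for bulk masses of either sign using a midpoint-rule quadrature:

```python
import math

def f0(y, mu, L):
    """Zero-mode profile: localized at the center for mu > 0, at the
    boundaries for mu < 0; KK parity is manifest through |y|."""
    norm = math.sqrt(mu / (1.0 - math.exp(-2.0 * mu * L)))
    return norm * math.exp(-mu * abs(y))

def norm_sq(mu, L, N=20000):
    """Midpoint-rule estimate of the L2 norm of the zero mode on (-L, L).
    The kink at y = 0 sits on a cell boundary for even N, so the rule
    retains its accuracy."""
    h = 2.0 * L / N
    return sum(f0(-L + (i + 0.5) * h, mu, L) ** 2 for i in range(N)) * h
```

Note that the prefactor $\mu/(1-e^{-2\mu L})$ stays positive for either sign of $\mu$, so the square root is always well defined (away from $\mu = 0$, where the profile degenerates to the flat MUED one).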
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/fermi_new.ps, width=7.6cm} \hspace{0.0cm}
\epsfig{file=FIGURES/vec_new.ps, width=7.6cm}}
\caption{\sl Profiles of the zero mode fermion for various $\mu$'s in (a)
and of the first three KK gauge bosons $n=0,1,2$ in (b). Kaluza-Klein parity
is manifestly respected by the zero mode profile, and the localization depends on
the sign of the bulk mass $\mu$. The KK modes with $n=1, 3, 5, \cdots$ are odd and those with $n=0,2,\cdots$ are even under KK parity.
}
\label{fig:profile}}
The KK gauge bosons all have the same profiles as in MUED,
\begin{eqnarray}
f_{V=G, W, B}^{(n>0)}(y)=\frac{1}{\sqrt{L}} \cos \frac{ n \pi (y+L) }{2L} \, ,
\label{eq:vec}
\end{eqnarray}
and the zero mode profile, $f_V^{(0)}=1/\sqrt{2L}$, is flat, as shown in Fig.~\ref{fig:profile}.
Note that $\int_{-L}^L dy \, \big (f_V^{(n)} \big )^2=1$.
The coupling of the level-$n$ bosons to an SM fermion pair is now written as
\begin{equation}
{\cal L}_{\rm eff}\ni -\sum_\psi \sum_n \frac{C_{n}(\mu_\psi)}{\sqrt{2L}} \left[\bar{\psi_0}\gamma^\mu \left(g^{5D}_3\frac{\lambda^a}{2} G^{a,(n)}_\mu +g^{5D}_2\frac{T^i_\psi}{2} W_\mu^{i,(n)} + g^{5D}_1 Y_\psi B_\mu^{(n)}\right)
\psi_0\right] \, , \,
\label{Eq:int}
\end{equation}
where $C_n$ is a dimensionless parameter measuring the overlap of the wave functions
of the two SM fermions and the KK gauge boson, defined as
\begin{eqnarray}
C_{n}(\mu_\psi L)&\equiv& \sqrt{2L}\int_{-L}^{L} dy (f_\psi^{(0)})^2 f_V^{(n)} \\
&=& \left\{ \begin{array}{ll}
0 , & \hspace{1cm}\mbox{$n =1, 3, 5, 7, \cdots$};\\
{\cal F}_{00n}(x_\psi) , & \hspace{1cm}\mbox{$n = 0, 2, 4, 6, 8, \cdots$} ,\end{array} \right.
\end{eqnarray}
where $x_\psi = \mu_\psi L$ and ${\cal F}$ is explicitly calculated to be
\begin{equation}
{\cal F}_{002m} (x)
=\frac{x^2 (-1+(-1)^m e^{2x})(\coth x-1)}{\sqrt{2(1+\delta_{m0})}(x^2 + m^2\pi^2/4)} \, ,
~~~ m=0, \, 1, \, 2, \,3 , \, \cdots \, .
\end{equation}
KK-parity conservation immediately implies $C_{\rm odd}=0$. The Standard Model coupling constants are obtained as
\begin{eqnarray}
g^{SM}=g_{000}^{eff}=\frac{g^{5D}}{\sqrt{2L}} C_0 (x_\psi) = \frac{g^{5D}}{\sqrt{2L}} \, ,
\label{eq:gsm}
\end{eqnarray}
as $C_0(x_\psi)={\cal F}_{000}(x_\psi)=1$ independently of $x_\psi$. Here $g_{\ell mn}^{eff}$
denotes the effective coupling constant for the $\psi_\ell$-$\psi_m$-$V_n$ interaction.
Finally, for even $n$ we obtain the coupling between the Standard Model fermions and the even KK excitations of the gauge bosons:
\begin{eqnarray}
g_{002n}^{eff}=g^{SM} {\cal F}_{002n} (x_\psi) \, .
\end{eqnarray}
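The closed-form expression for ${\cal F}_{002m}$ is easy to check numerically. The following Python sketch (our own check, rewriting $\coth x - 1 = 2/(e^{2x}-1)$ for numerical stability at large $x$) verifies that ${\cal F}_{000}=1$ for any $x$ and that ${\cal F}_{002m} \to (-1)^m\sqrt{2}$ as $x \to +\infty$:

```python
import math

def F(x, m):
    """Overlap integral F_{002m}(x) in closed form."""
    cothm1 = 2.0 / math.expm1(2.0 * x)   # coth(x) - 1, stable for large |x|
    num = x * x * (-1.0 + (-1.0) ** m * math.exp(2.0 * x)) * cothm1
    den = math.sqrt(2.0 * (2.0 if m == 0 else 1.0)) * (x * x + (m * math.pi / 2.0) ** 2)
    return num / den

print(F(0.7, 0), F(-3.0, 0))     # both = 1: the SM coupling is mu-independent
print(F(100.0, 1), F(100.0, 2))  # approach -sqrt(2) and +sqrt(2), respectively
print(F(-1.0, 1))                # ≈ 0.536
```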
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/couplings.ps, width=7.5cm}
\epsfig{file=FIGURES/F2n.ps, width=7.5cm}}
\caption{\sl The ratio of tree level couplings in SUED to the corresponding SM couplings.
Couplings involving level-2 (level-1) KK bosons are shown in red (blue) in (a); (b) shows the zero mode fermion couplings to the KK-even gauge bosons, $f_0-f_0-V_{2n}$.
The MUED limit ($\mu=0$) is denoted by the vertical solid line (in magenta).
}
\label{fig:couplings}}
The various couplings associated with one vector boson and two fermions are
shown in Fig.~\ref{fig:couplings}.
We find that there are two interesting regions.
One is the MUED limit, {\it i.e.,} $\mu \to 0$, which is
shown as the vertical solid line (in magenta) and the other is large positive $\mu$ limit.
For $\mu \to \infty$, the zero mode fermions are sharply localized near the center ($y=0$),
so their couplings to the KK gauge bosons asymptotically approach the value $(-1)^n \sqrt{2}$, as can be seen from the red
curve for $f_0-f_0-V_2$ (Fig.~\ref{fig:couplings}(a))
as well as from the curves for $f_0-f_0-V_{2n}$ (Fig.~\ref{fig:couplings}(b)).
The alternating sign arises because the $2n$-th KK gauge boson wave function in Eq.~(\ref{eq:vec}) is proportional to $\cos n\pi = (-1)^n $ at $y=0$, where the fermion wave function is mostly localized; the $\sqrt{2}$ comes from the zero mode normalization in Eq.~(\ref{eq:gsm}). These vertices all vanish in the limit $\mu\to 0$ because of KK number conservation in MUED.
For collider phenomenology we are mostly interested in the interactions of the
low-lying Kaluza-Klein modes, $n=0,1,2$, as the heavier modes are too massive and
easily decouple from the low energy phenomenology.
The most relevant couplings in our study are the interactions
and decays of the second Kaluza-Klein gauge bosons.
The coupling $f_n$-$f_n$-$V_0$ remains the same for all $\mu$ due to the normalization condition of
the $n$-th fermion wave function, while all other couplings change
for non-vanishing bulk masses.
The $f_2$-$f_0$-$V_0$ coupling remains zero in SUED,
but in principle such a coupling can be generated
by unknown physics at the cutoff scale ($\Lambda$), and the lowest order operator may take
the form \cite{Cheng:2002iz}
\begin{equation}
\bar{f}_2 \sigma^{\mu\nu} T^a P_{L/R} f_0 F_{0\mu\nu}^a \, .
\end{equation}
However, being of higher dimension, this operator is suppressed at least
by one power of $1/\Lambda$, and we shall neglect it in the discussion that follows.
It is interesting to note that the $SU(3)_c$ coupling of the KK gluon can be {\it chiral}.
Let us examine the level-$2n$ gluon couplings to the quarks:
\begin{eqnarray}
-{\cal L}_{\rm eff}=g_s\sum_{n \geq 0} [
\bar{u}\gamma^\mu \left({\cal F}_{002n}(x_Q)P_L + {\cal F}_{002n}(x_U)P_R \right)u\nonumber \\
+\bar{d}\gamma^\mu \left({\cal F}_{002n}(x_Q)P_L + {\cal F}_{002n}(x_D)P_R \right)d]G_\mu^{(2n)} \, .
\end{eqnarray}
All KK-parity violating interactions are forbidden.
It is now obvious that the KK gluon has chiral interactions with the SM quarks
whenever $\mu_Q \neq \mu_U$ or $\mu_Q \neq \mu_D$.
Finally, the vector ($V$) and axial-vector ($A$) couplings of the KK gluons
to an up-type or a down-type quark,
\begin{eqnarray}
-{\cal L}_{\rm eff}=g_s \sum_{q=u,d}\sum_{n\geq 0}\bar{q}\gamma^\mu (V^q_{2n} -A^q_{2n}\gamma_5) q G_\mu^{(2n)} \, ,
\end{eqnarray}
are determined as
\begin{eqnarray}
V_{2n}^{u/d} =\frac{1}{2}\left({\cal F}_{002n}(x_Q)+{\cal F}_{002n}(x_{U/D})\right) \\
A_{2n}^{u/d} =\frac{1}{2}\left({\cal F}_{002n}(x_Q)-{\cal F}_{002n}(x_{U/D})\right) \, .
\end{eqnarray}
The same holds for all the other gauge bosons. When $x_Q=x_{U/D}$, only the vector coupling is non-vanishing. In general, however, $x_Q \neq x_{U/D}$ and non-vanishing axial couplings are allowed. For instance, if $x_Q=0$ and $x_{U/D}\neq 0$, the vector and axial couplings have opposite signs but equal magnitude: $V_{2n}^{u/d}=-A_{2n}^{u/d}=\frac{1}{2}{\cal F}_{002n}(x_{U/D})$. With non-vanishing axial couplings in the even KK gauge boson interactions, one might expect, for instance, an additional contribution to the forward-backward asymmetry of top quark pair production ($A_{FB}^t$) via the quark pair annihilation channel. The cross section for $q\bar{q}$ annihilation into top quarks of mass $m_t$
through the $2n$-th KK gluons reads
\begin{eqnarray}
\frac{d\sigma (q\bar{q}\rightarrow g_{2n}^* \to t \bar{t})}{d\cos \hat{\theta}} &=&
\frac{\pi \beta \alpha_S^2}{9 \hat{s}} \Big \{ 1+c^2 \beta^2+ \frac{4 m_t^2}{\hat s} \nonumber \\
&& \hspace*{-3cm} +
\sum_{n \geq 1} \frac{2 \hat{s} (\hat{s}-m_{2n}^2)} {(\hat{s}-m_{2n}^2)^2+m_{2n}^2 \Gamma_{2n}^2}
\Big [ V_{2n}^q \, V_{2n}^t \, \big (1+c^2 \beta^2+ \frac{4 m_t^2}{\hat s} \big )
+ 2 \, A_{2n}^q \, A_{2n}^t \, c \beta \Big ] \nonumber \\
&& \hspace*{-3cm}+
\sum_{n,\ell \geq 1} \hat{s}^2 \frac{(\hat{s}-m_{2n}^2)(\hat{s}-m_{2\ell}^2) + m_{2n} m_{2\ell} \Gamma_{2n} \Gamma_{2\ell}}
{ [ (\hat{s}-m_{2n}^2)^2+m_{2n}^2 \Gamma_{2n}^2 ] [ (\hat{s}-m_{2\ell}^2)^2+m_{2\ell}^2 \Gamma_{2\ell}^2 ] } \nonumber \\
&& \hspace{-2.5cm}\times \Big [ \Big ( V_{2n}^q V_{2\ell}^q + A_{2n}^q A_{2\ell}^q \Big )
\Big ( V_{2n}^t V_{2\ell}^t \big (1+c^2 \beta^2+ \frac{4 m_t^2}{\hat s} \big )
+ A_{2n}^t A_{2\ell}^t \beta^2 (1+c^2) \Big ) \nonumber \\
&& \hspace{-1cm} + 2 \, c \beta \, \Big ( V_{2n}^q A_{2\ell}^q + V_{2\ell}^q A_{2n}^q \Big )
\Big ( V_{2n}^t A_{2\ell}^t + V_{2\ell}^t A_{2n}^t \Big ) \Big ]
\Big\} \, ,
\label{eq:bornqq}
\end{eqnarray}
where $\hat{\theta}$ is the polar angle of the top quark with respect
to the incoming quark in the partonic center-of-mass frame,
$\hat{s}$ is the squared partonic invariant mass,
$\beta = \sqrt{1-\frac{4 m_t^2}{\hat s}}$ is the velocity of the top quark,
and $c = \cos \hat{\theta}$.
The parameters $V_{2n}^q (V_{2n}^t)$ and $A_{2n}^q(A_{2n}^t)$ represent,
respectively, the vector and axial-vector couplings of the
KK gluons to the light quarks (top quarks).
At the Tevatron, the partonic center-of-mass energy $\sqrt{\hat s}$ is typically
much smaller than the KK gluon mass, so the interference term (the second term) is
dominant over the pure new physics term (the third term).
The leading contribution to the second term is
the interference between the SM gluon and the level-2 KK gluon diagrams.
As the tree level SM contribution (the first term) does not produce
a forward-backward asymmetry after integrating over
$-1< \cos \hat{\theta}<1$, the main contribution comes from the term linear in the cosine
in the second term with $n=1$:
\begin{eqnarray}
A_{FB}^t
&\propto & - \frac{A_{2}^q A_{2}^t}{m_{2}^2}.
\end{eqnarray}
When $x_Q=0$, $x_{U/D} \to \infty$ for the light quarks and $x_t = -1$ for the singlet top,
\begin{eqnarray}
A_{2}^q \to \frac{1}{\sqrt{2}}, \,\, A_{2}^t \to -\frac{1}{4},
\end{eqnarray}
thus the forward-backward asymmetry is positive, which is consistent with the recent measurements at the Tevatron
\cite{newcdf, cdf, d0}; we find, however, that its size is not large enough to explain the current anomaly
for $R^{-1} \sim 1$ TeV.
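These limiting values can be checked directly from the closed form of ${\cal F}_{002m}$. In the Python sketch below (our own numerical check; reading $x_t=-1$ as the bulk parameter of the singlet top is an assumption on our part), $x_Q\to 0$ and $x_{U/D}\to\infty$ are approximated by small and large finite values:

```python
import math

def F(x, m):   # overlap F_{002m}(x) in closed form (see Sec. on couplings)
    cothm1 = 2.0 / math.expm1(2.0 * x)
    num = x * x * (-1.0 + (-1.0) ** m * math.exp(2.0 * x)) * cothm1
    den = math.sqrt(2.0 * (2.0 if m == 0 else 1.0)) * (x * x + (m * math.pi / 2.0) ** 2)
    return num / den

x_Q, x_light, x_top = 1e-6, 60.0, -1.0    # x_Q -> 0, x_{U/D} -> infinity, x_t = -1
A2_q = 0.5 * (F(x_Q, 1) - F(x_light, 1))  # light-quark axial coupling to g_2
A2_t = 0.5 * (F(x_Q, 1) - F(x_top, 1))    # top-quark axial coupling to g_2
print(A2_q, A2_t)   # ≈ +0.707 ≈ 1/sqrt(2) and ≈ -0.268 ≈ -1/4
```

The product $A_2^q A_2^t$ is negative, so $A_{FB}^t \propto -A_2^q A_2^t/m_2^2$ is indeed positive in this corner of parameter space.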
\subsection{Mass spectrum}
\label{sec:masses}
The mass spectrum of the fermions receives tree level modifications from the bulk parameters $\mu_\Psi$, as well as loop-induced corrections from the RG running for a given boundary condition at some high scale $\Lambda$, just as in conventional UED. Taking vanishing boundary conditions at $\Lambda$, the one-loop mass correction is known to be minor (at the percent level for electroweak particles). This is due to the short RG running from $\Lambda$, which is argued to be below $\sim 100$ TeV based on naive dimensional analysis (see e.g.\ \cite{Cacciapaglia:2005da}). Thus we may neglect the loop-induced mass corrections for the fermions as long as the bulk mass parameter is sufficiently large, $\mu_\Psi > 0.1/L$.
The mass of the $n$-th KK fermion ($M_n$) receives a tree level contribution from the bulk mass
as follows:
\begin{eqnarray}
M^2_{n} = k_n^2 + \mu^2 \, ~~~ {~\rm for ~} n\ge 1 \, ,
\end{eqnarray}
where
\begin{eqnarray}
&& k_n {\rm ~is~the~}\frac{n+1}{2}{\rm -th~solution~of~} \mu=-k \cot(k L) \, ,
{\rm ~if~} n=2m-1 \, ,\label{eq:oddmodes}\\
&& k_n = \frac{n}{R}\, , \hspace*{6.75cm}{\rm ~if~} n=2m \, . \label{eq:evenmodes}
\end{eqnarray}
In the MUED limit, $\mu \to 0$, Eqs.~(\ref{eq:oddmodes}-\ref{eq:evenmodes}) both
reduce to $k_n = \frac{n}{R}$. On the other hand, all KK boson masses remain the same,
$\frac{n}{R}$, and show no $\mu$ dependence.
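The transcendental condition for the odd modes is easy to solve numerically. The Python sketch below (our own illustration, a simple bisection in units of $1/L$, using $L=\pi R/2$ so that $1/R = \pi/(2L)$) reproduces the MUED limit $k_1 \to 1/R$ as $\mu \to 0$ and shows how the level-1 fermion mass grows with $\mu$:

```python
import math

def k_odd(m, mu, L=1.0):
    """m-th root of mu = -k cot(kL), i.e. the KK mode n = 2m-1, by bisection."""
    g = lambda k: mu + k * math.cos(k * L) / math.sin(k * L)
    lo = (2 * m - 1) * math.pi / (2.0 * L) + 1e-12   # g(lo) ≈ mu >= 0 here
    hi = m * math.pi / L - 1e-12                     # g -> -infinity here
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# MUED limit mu -> 0: k_1 -> pi/(2L) = 1/R
print(k_odd(1, 1e-9))                   # ≈ pi/2 (in units of 1/L, with L = 1)
mu = 0.6                                # i.e. mu*L = 0.6
k1 = k_odd(1, mu)
print(k1, math.sqrt(k1**2 + mu**2))     # k_1 ≈ 1.88, M_1 = sqrt(k_1^2 + mu^2) > 1/R
```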
Including EW symmetry breaking and the radiative corrections, a naive estimate gives
\begin{eqnarray}
M_n &\approx& M_n^{tree} \left ( 1 + {\rm ~ loop ~corrections} \right ) \, , \\
M_n^{tree} &=& \sqrt{ k_n^2 + \mu^2 + m_0^2} \, ,
\end{eqnarray}
where $m_0$ is the contribution expected from the electroweak symmetry breaking.
\subsection{Constraints from contact interactions}
\label{sec:constraints}
One of the most prominent features of SUED with non-vanishing bulk mass parameters is
the existence of tree level KK number violating interactions.
From the $W^3_{2n}$ and $B_{2n}$ exchange diagrams we obtain an effective contact interaction Lagrangian ${\cal L}_{\rm eff}$, which is stringently constrained by electroweak precision measurements
\cite{Alcaraz:2006mx,Amsler:2008zzb}:
\begin{eqnarray}
{\cal L}_{\rm eff}=\sum_{i,j=L,R}\sum_{f}\frac{4\pi}{(\Lambda_{ij}^{ef})^2}
[\bar{e}_i\gamma_\mu e_i][ \bar{f}_j \gamma^\mu f_j] \, .
\label{Eq:eff}
\end{eqnarray}
\TABLE[t]{
\centerline{
\begin{tabular}{c|c|c|c|c}
{}&u&d&$\mu^+\mu^-$ &$\tau^+\tau^-$\\
\hline
$LL$ (TeV) & 10.2 & 6.0 & 12.5 & 8.6 \\
$RR$ (TeV) & 8.3 & 4.3 & 11.9 & 8.2
\end{tabular}
}
\label{table:contact}
\caption{Bounds on the contact interaction scales \cite{Alcaraz:2006mx,Amsler:2008zzb}.}}
Assuming a universal bulk mass $\mu$, the effective Lagrangian for the $B_{2n}$- and $W^3_{2n}$-mediated
interactions is readily obtained.
The most stringent bound comes from the $ee\mu\mu$ contact interaction:
\begin{equation}
\bar{e}_L \gamma_\mu e_L \sum_{n} \frac{({\cal F}_{002n})^2}{4}
\left(\frac{g_1^2}{m_{B_{2n}}^2} +\frac{g_2^2}{m_{W^3_{2n}}^2}\right)\bar{\mu}_L \gamma^\mu \mu_L
+
\bar{e}_R \gamma_\mu e_R \sum_{n} ({\cal F}_{002n})^2
\left(\frac{g_1^2}{m_{B_{2n}}^2}\right)\bar{\mu}_R \gamma^\mu \mu_R \, .
\label{Eq:eff2}
\end{equation}
Taking Eqs. (\ref{Eq:eff}-\ref{Eq:eff2}) into account with $m_{B_{2n}}\simeq m_{W^3_{2n}}\simeq (2n)/R$, we obtain the following relations:
\begin{eqnarray}
\frac{1}{\Lambda_{LL}^2}&=&\frac{g_1^2+g_2^2}{64\pi} R^2 \sum_n \frac{({\cal F}_{002n}(\mu L))^2}{n^2},
\\
\frac{1}{\Lambda_{RR}^2}&=&\frac{g_1^2}{16\pi} R^2 \sum_n \frac{({\cal F}_{002n}(\mu L))^2}{n^2},
\end{eqnarray}
where the bounds for $\Lambda_{LL}$ and $\Lambda_{RR}$ are given in Table \ref{table:contact}.
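To get a feeling for the numbers, the sums above can be evaluated directly. The Python sketch below is our own estimate; the electroweak inputs $\alpha \simeq 1/128$ and $\sin^2\theta_W \simeq 0.231$ are assumptions, not taken from the text. It evaluates $\Lambda_{LL}$ and $\Lambda_{RR}$ in units of $R^{-1}$ in the large-$\mu L$ limit, where ${\cal F}^2_{002n} \to 2$:

```python
import math

def F2n(x, n):   # overlap F_{002n}(x), closed form
    cothm1 = 2.0 / math.expm1(2.0 * x)
    num = x * x * (-1.0 + (-1.0) ** n * math.exp(2.0 * x)) * cothm1
    return num / (math.sqrt(2.0) * (x * x + (n * math.pi / 2.0) ** 2))

# assumed EW inputs: alpha(m_Z) = 1/128, sin^2(theta_W) = 0.231
e2 = 4.0 * math.pi / 128.0
g1sq, g2sq = e2 / (1.0 - 0.231), e2 / 0.231

muL = 100.0    # deep in the large bulk mass regime
S = sum(F2n(muL, n) ** 2 / n ** 2 for n in range(1, 2001))
Lam_LL = math.sqrt(64.0 * math.pi / ((g1sq + g2sq) * S))   # in units of R^{-1}
Lam_RR = math.sqrt(16.0 * math.pi / (g1sq * S))
print(Lam_LL, Lam_RR)   # ≈ 10.6 and ≈ 11.1 in units of R^{-1}
```

With the Table bound $\Lambda_{LL} > 12.5$ TeV, this would translate into roughly $R^{-1} \gtrsim 1.2$ TeV in the large-$\mu L$ limit, consistent in magnitude with the excluded region discussed in the collider section.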
We also consider the constraints arising from the dilepton resonance searches at the Tevatron \cite{CDFdimuon}
and find that those for $\gamma_2$ give a slightly better constraint on $R^{-1}$
than those for $Z_2$, while $W_2^\pm$ gives a limit similar to that for $Z_2$ \cite{:2007bs}.
In the next Section we include these as well as the constraints from the contact interactions.
\section{Collider phenomenology}
\label{sec:collider}
A large amount of effort has gone into examining the collider aspects of
Universal Extra Dimensions \cite{Appelquist:2000nn}
at the LHC \cite{Rizzo:2001sd,Cheng:2002ab,Datta:2005zs,Burdman:2006gy,Dobrescu:2007xf} and
the ILC \cite{Battaglia:2005zf,Freitas:2007rh},
as well as its astrophysical implications \cite{Servant:2002aq,Cheng:2002ej,Kong:2005hn,Burnell:2005hm,Dobrescu:2007ec,Arrenberg:2008wy}.
In this Section we would like to investigate the implications of non-vanishing bulk mass in SUED.
\subsection{Level-1 modes}
\label{sec:level1}
We start our discussion with the level-1 KK modes.
Their phenomenology depends on the precise value of the bulk mass and
on the radiative corrections to the KK masses; therefore we discuss only
generic features here.
A small value of the bulk mass ($0 \leq|\mu L| \ll 1$) gives decay patterns similar
to the MUED case. The dominant production at a hadron collider proceeds through the strong interaction,
{\it i.e.,} KK quark production ($Q_1Q_1$, $q_1q_1$ and $Q_1 q_1$),
KK gluon production ($g_1g_1$) and associated production ($g_1Q_1$, $g_1q_1$).
The $SU(2)_W$-doublet KK quarks ($Q_1$) dominantly decay into the
$SU(2)_W$ KK gauge bosons ($Z_1$ and $W_1^\pm$), while
the $SU(2)_W$-singlet KK quarks ($q_1$) decay into the KK photon.
SM leptons are obtained from the decays of the EW KK gauge bosons ($Z_1$ and $W_1^\pm$)
to KK leptons.
The difference with respect to MUED lies in the mass splittings between the modes.
A bulk mass term increases the mass of a KK fermion,
making the decay products of the KK bosons softer than in MUED,
while the decay products of the KK fermions decaying to KK bosons become more energetic
due to the increased splitting.
For instance, a KK quark can have a mass just below that of the KK gluon, so that
the jet from the decay of the KK gluon ($g_1 \to Q_1 q$) is softer
while the jet from the decay of the KK quark ($Q_1 \to q Z_1$ or $Q_1 \to q' W_1^\pm$)
becomes harder than in MUED. The same holds for KK leptons and KK gauge bosons.
However, we do not expect a dramatic change in the reach
for this model, as long as the decay patterns are the same and the mass splittings are not too small.
The other extreme limit is the case of very large $\mu$, $\mu L \gg 1$
\footnote{The opposite limit (a large negative $\mu L$)
is also interesting, as shown in the Appendix.
In this case, taking a very large $R^{-1}$, the masses of the level-1 KK fermions remain at the EW scale
while all higher-level KK fermions decouple from the theory due to the large mass splitting
between the level-1 and the level-2 KK fermions. All KK bosons are also very heavy due to the large $R^{-1}$.
Therefore in this limit {\it the only} available KK modes are the level-1 KK leptons and quarks.
This study will appear elsewhere \cite{ongoing}.}.
In this case, all KK fermions become much heavier than the KK bosons and
may lie beyond the reach of the LHC. The KK gauge bosons undergo three-body decays
to the KK photon ($g_1 \to j j \gamma_1$ and $Z_1,W_1^\pm \to f\bar{f}' \gamma_1$), and
production proceeds via KK gluon pairs ($g_1g_1$) and EW gauge boson pairs
($Z_1 W_1^\pm$)
\footnote{$Z_1Z_1$, $\gamma_1\gamma_1$ and $Z_1\gamma_1$ involve KK fermions
in the $t$- and $u$-channels and their production cross sections are negligible
for heavy KK quarks.}.
It is interesting to notice that this situation is similar to
{\it the focus point region of supersymmetry}.
For moderate values of the bulk mass ($|\mu L| \sim 1$),
the gauge bosons may still undergo three-body decays
while the LHC will be able to produce KK quarks and KK leptons.
Unlike in MUED, all KK quarks now dominantly decay to the KK gluon, since it has the largest coupling.
The collider signature would therefore be quite jetty.
An interesting possibility is that all KK fermions are very heavy, with a large $\mu$,
so that they are unobservable while the KK bosons are still within the reach of the LHC. Even in this case, we may expect to observe e.g.\ the level-2 gluon through its interaction with the level zero Standard Model quarks with a sizable coupling ($g\simeq \sqrt{2}g_s$). Dilepton production through $Z_2$ and $\gamma_2$ is also sizable and provides a golden search channel for split-UED. Detection of the dark matter (DM) particle, on the other hand, is quite challenging because the DM-SM coupling through the level-1 fermion is highly suppressed by the large KK fermion mass.
We plot cross sections for gauge boson production at the LHC, as a function of mass,
in Fig.~\ref{fig:xsection1}, assuming $\mu L \gg 1$; the curves are
dotted for $g_1 g_1$ (in black), solid for $Z_1W_1^+$ (in red) and dashed for $Z_1W_1^-$ (in blue).
For KK gluon pair production from the $gg$ initial state
there are $s$-, $t$-, $u$-channel and four-point interaction diagrams, and
all couplings are fixed by $SU(3)_c$ gauge invariance.
There is also a contribution from the $q\bar{q}$ initial state, but it is smaller than that from $gg$ at the LHC
over the mass range shown.
$Z_1 W_1^\pm$ is produced by $W^\pm$ exchange in the $s$-channel and
by KK quark exchange in the $t$- and $u$-channels.
Keeping all three diagrams is required by $SU(2)_W$ gauge invariance;
neglecting any of them gives inconsistent results.
In our case, however, where the KK quarks are much heavier than the KK gauge bosons,
the exchange of KK quarks with masses of a few TeV barely affects the production cross sections.
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/xsection.ps, width=8.5cm} }
\caption{\sl Cross section for gauge boson pair production as a function of mass at the LHC (for $\mu L \gg 1$).}
\label{fig:xsection1}}
From Fig.~\ref{fig:couplings} we can immediately read off the patterns of the KK particle decay branching fractions.
The level-1 gauge boson couples only to $f_0-f_1$, thanks to KK parity. This coupling becomes less significant as $\mu$ becomes larger, in which case the level-1 fermion becomes significantly heavier than
the level-1 gauge boson. Due to the large mass gap, the decay products of the level-1 fermion are
reasonably energetic.
\subsection{Level-2 modes}
\label{sec:level2}
We now turn to the discussion of the level-2 KK modes.
In general, a level-2 KK fermion ($f_2$) can decay
either into two level-1 KK states, $f_1V_1$,
or into one level-2 and one SM mode, $f_0 V_2$
(the branching fraction of $f_2$ to $f_2^\prime V_0$ is suppressed by the small mass splitting between
$f_2$ and $f_2^\prime$).
In the limit $\mu L \gg 1$ of split-UED, all KK fermion masses are raised, and the
level-2 KK quarks ($Q_2$ and $q_2$) decay to $q g_2 $ and $Q_1g_1$.
The $g_1$ then gives rise to a missing energy signature through a three-body decay,
while $g_2$ can decay directly into two jets and may appear as a dijet resonance.
In MUED, the coupling of the level-2 resonances to the SM fermions is loop-suppressed.
The branching fractions of the electroweak level-2 gauge bosons
into dilepton final states are small, partly due to the competing decay modes into
other level-2 and level-1 KK states, and partly due to the difference between
the strong and electroweak couplings.
Therefore one has to rely on indirect production of the level-2 KK gauge bosons from
the KK gluon and KK quarks to enhance the production cross sections.
The corresponding reach has been estimated in Ref.~\cite{Datta:2005zs}.
In SUED, however, this coupling exists at tree level
due to the fermion bulk mass term, and
it can be as large as $\sqrt{2}$ times the corresponding SM coupling strength, which makes
dilepton searches in this model promising.
At the same time, the bulk mass increases the masses of the KK fermions,
thus reducing the branching fractions of the level-2 bosons into other KK states.
The decay width of level-2 KK bosons into SM fermion final states is given by
\begin{eqnarray}
\label{eq:width1} \Gamma &=& \frac{N_c M}{24 \pi} \left [
\Big ( g_L^2 + g_R^2 \Big ) \Big ( 1- \frac{m^2}{M^2} \Big )
+ 6 g_L g_R \frac{m^2}{M^2} \right ] \sqrt{1 - \frac{4m^2}{M^2}} \\
&=& \label{eq:width2}
\frac{N_c M }{ 24 \pi} \Big ( g_L^2 + g_R^2 \Big ) ~~~~{\rm for ~ M \gg m} \, .
\end{eqnarray}
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/widths.ps, width=7.5cm} \hspace*{0.1cm}
\epsfig{file=FIGURES/BRZ2.ps,width=7.5cm} }
\caption{\sl (a) The ratio of widths of level-2 KK bosons to their masses and
(b) branching fractions of 1 TeV $Z_2$, as a function of $\mu L$.}
\label{fig:width_br}}
The 1-loop correction is expected to be the smallest for $\gamma_n$
among all KK states at the same level; in fact, the KK photons receive a
negligible correction from the RG running, making the lightest KK photon
a viable dark matter candidate.
Therefore the decay channels of $\gamma_2$ into $f_1$-$f_1$ or $f_0$-$f_2$ are closed,
and $\gamma_2$ can always appear as a resonance.
As Eqs.~(\ref{eq:width1})-(\ref{eq:width2}) show, the dependence of the width on the SM fermion mass
is negligible, even for the top quark, if the resonance is heavy enough.
In this case, the ratio of the total width to the mass becomes
mass-independent.
The total width of $Z_2$ ($\gamma_2$) then approaches $\sim$ 7\% (3.5\%) of its mass
as $\mu$ increases, as shown in Fig.~\ref{fig:width_br}(a),
while in MUED the widths of the level-2 KK bosons are
much less than 1\% \cite{Datta:2005zs}.
This makes it challenging to resolve the double resonances, which are separated from each other in mass
only by the small 1-loop corrections (7\% or so).
The branching fractions of the $\gamma_2$ are $\mu$-independent for a universal bulk mass,
which is the case we consider. They are 25\% into dileptons, 36.7\% into dijets,
4.2\% into $b\bar{b}$, 14\% into $t\bar{t}$ and 12.5\% into $\tau \bar\tau$.
The $\gamma_2$ decays invisibly, through SM neutrinos, 7.5\% of the time.
Notice that the branching fraction into the dilepton channel is about 20 times larger than
in MUED.
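These numbers follow from a simple hypercharge-weighted count, as the short Python check below illustrates (our own sketch, treating all SM fermions as massless and taking $\gamma_2 \simeq B_2$):

```python
# weight per flavor: N_c * (Y_L^2 + Y_R^2); gamma_2 ~ B_2 couples via hypercharge
lep = 0.5 ** 2 + 1.0 ** 2       # charged lepton: Y_L = -1/2, Y_R = -1
nu = 0.5 ** 2                   # neutrino (left-handed only)
up = 3 * ((1 / 6) ** 2 + (2 / 3) ** 2)
dn = 3 * ((1 / 6) ** 2 + (1 / 3) ** 2)
total = 3 * (lep + nu) + 3 * (up + dn)          # sum over three generations

br = {"ee+mumu": 2 * lep / total, "tautau": lep / total,
      "jj (u,d,c,s)": (2 * up + 2 * dn) / total,
      "bb": dn / total, "tt": up / total, "invisible": 3 * nu / total}
print(br)   # ≈ 25%, 12.5%, 36.7%, 4.2%, 14.2%, 7.5%
```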
The decay of $Z_2$ is somewhat more complicated than that of $\gamma_2$ due to the slightly larger
1-loop correction, which we assume to be about 7\%, as in MUED.
In this case the decay modes into other KK states remain open.
Without knowing the exact 1-loop mass corrections for all KK particles,
it is impossible to compute its total width and branching fractions.
For a rough estimate (and only for this purpose), we assume that the
KK fermions get corrections only from the bulk mass,
while $Z_2$ becomes heavier by 7\% from the RG running.
This is, strictly speaking, an inconsistent setup.
However, the 1-loop corrections to the KK fermion masses are known to be merely a few percent
(1\% for singlet KK fermions and 3\% for doublet KK fermions), and
for a large $\mu$ the bulk mass raises the fermion masses by a large amount, so that
the 1-loop contribution becomes negligible.
This approximation is therefore valid over a broad range of $\mu$.
Given that, one can compute the partial widths of $Z_2$ in the three different channels;
the results are shown in Fig.~\ref{fig:width_br}(b).
The level-2 KK fermion does not get a correction from the bulk mass, but the
$f_0$-$f_2$-$Z_2$ coupling becomes smaller, as shown in Fig.~\ref{fig:couplings},
making the corresponding branching fraction smaller for large $\mu$.
The same is true for $f_1$-$f_1$-$Z_2$, while the $f_0$-$f_0$-$Z_2$ coupling behaves
in the opposite manner.
Moreover, unlike $f_2$, the level-1 KK modes get heavier as $\mu L$ increases, and
at some value of $\mu$ the $Z_2$ decay to $f_1 f_1$ closes.
In Fig.~\ref{fig:width_br}(b), this transition value of $\mu L$ is about 0.6 for a 1 TeV $Z_2$.
With $f_0$-$f_0$-$Z_2$ as the dominant channel, it is straightforward to compute
the relevant branching fractions. Since $Z_2$ ($W_2^3$) couples to every left-handed SM fermion pair with the same strength,
one only needs to count the relevant degrees of freedom.
The branching fractions are 1/24 into $\tau\bar\tau$, 1/12 into dileptons,
1/2 into dijets and 1/8 into each of $b\bar{b}$ and $t \bar{t}$.
$Z_2$ also decays invisibly 1/8 of the time, through the three neutrino flavors.
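This counting is quickly verified (our own check): $W^3_2$ couples only to the left-handed doublet components, each with $|T^3|=1/2$, so every component carries equal weight once color is counted:

```python
# per generation: nu_L, e_L, and 3 colors each of u_L and d_L -> 8 components
total = 3 * (1 + 1 + 3 + 3)          # = 24 over three generations
br = {"ee+mumu": 2 / total, "tautau": 1 / total, "jj (u,d,c,s)": 12 / total,
      "bb": 3 / total, "tt": 3 / total, "invisible": 3 / total}
print(total, br)   # 24; fractions 1/12, 1/24, 1/2, 1/8, 1/8, 1/8 sum to 1
```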
\subsection{The LHC reach for $\gamma_2$ and $Z_2$ in dilepton channel}
\label{sec:resonances}
We simulate dilepton resonances in split-UED at the LHC with
$\sqrt{s}$=10 TeV, using a private Monte Carlo generator.
We assume that the mass splitting between the two bosons ($\gamma_2$ and $Z_2$) is given by
$M_{Z_2}=1.07 M_{\gamma_2}$ with $M_{\gamma_2}\approx \frac{2}{R}$, as in MUED.
We include both $\gamma_2$ and $Z_2$ in the dilepton signal and
use the CTEQ6.6 PDFs with an NLO K-factor.
The leptons from the decay of these KK bosons are highly energetic and
can easily pass triggers.
For heavy resonances, the energy resolution is better in electron final states
than in muon final states, and hence we consider the electron final state
with a 1\% mass resolution smearing.
$|\eta| < 2.5$ and $M_{\ell\ell} > M_{\gamma_2} - 500$ GeV are imposed as cuts, and
we only count events with dielectron masses greater than $0.8\,M_{\gamma_2}$.
The dominant background is Drell-Yan, and $t\bar{t}$ and fakes are expected to
be significantly smaller.
In all cases the background is smaller than the signal by a factor of $\sim$100.
Fig.~\ref{fig:lhc1}(a) shows the luminosity required to observe at least 10
signal events as a function of $\mu L$ for several values of the masses.
The LHC should be able to cover a large part of the parameter space
(up to $M_{V_2} \sim 1.5$ TeV for $\mu L \ge 1$)
even with the early data at the level of $\sim$100 pb$^{-1}$ or less.
With an integrated luminosity of $\sim$100 fb$^{-1}$, most of the parameter space
would be probed, setting limits on the bulk mass and the mass of the KK gauge boson.
The expected number of signal events is plotted in the $\mu L$ versus $R^{-1}$ plane
in Fig.~\ref{fig:lhc1}(b), for ${\cal L}=1$ fb$^{-1}$.
The shaded region on the left side (in yellow) is the projected Tevatron exclusion
at 95\% C.L. assuming 10 fb$^{-1}$ \cite{CDFdimuon}.
The limit on $R^{-1}$ from $\gamma_2$ gives the best exclusion, since $\gamma_2$ is lighter
than $Z_2$ and $W_2^\pm$ by 7\%; the constraints from $Z_2$ and $W_2^\pm$ are comparable to each other
and hidden below that from $\gamma_2$.
The other shaded area in the upper left corner (in green) is the EW constraint from LEP II,
obtained from the contact interactions in SUED, as discussed in Section~\ref{sec:constraints}.
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/Lumi10.ps, width=7.6cm} \hspace*{-0.2cm}
\epsfig{file=FIGURES/muL_Rinv.ps,width=7.6cm} }
\caption{\sl The luminosity required to obtain 10 events as a function of $\mu L$
for several values of masses in (a) and the number of signal events
in the $\mu L$ versus $R^{-1}$ plane in (b),
for $\sqrt{s}$=10 TeV, ${\cal L}=1$ fb$^{-1}$, $M_{Z_2}=1.07 M_{\gamma_2}$.
In all cases the background is smaller by a factor of $\sim$100.
We used the CTEQ6.6 with NLO K-factor and 1\% mass resolution smearing.
In obtaining the result we only count events with dilepton masses greater than 0.8 $\times M_{\gamma_2}$.
}
\label{fig:lhc1}}
In Fig.~\ref{fig:invmass} invariant mass distributions are shown for (a)
$R^{-1}=1$ TeV, $\sqrt{s}= 14$ TeV and ${\cal L}=100$ fb$^{-1}$
and (b) $R^{-1}=0.75$ TeV, $\sqrt{s}= 10$ TeV and ${\cal L}=1$ fb$^{-1}$.
For both cases, we assume $\mu L \gg 1$.
The yellow histogram is the SM background while
the red histogram includes both signal and backgrounds.
In the early phase of the LHC one may be able to see a bump, and then resolve it into a
double-resonance structure as more data is accumulated.
Notice the negative interference between the SM background and the KK signal,
which implies a relative sign difference between the couplings.
\FIGURE[t]{
\centerline{
\epsfig{file=FIGURES/split.ps, width=7.5cm}\hspace*{0.2cm}
\epsfig{file=FIGURES/split4.ps,width=7.5cm} }
\caption{\sl Invariant mass distributions at the LHC for (a)
$R^{-1}=1$ TeV, $\sqrt{s}=14$ TeV and ${\cal L}=100$ fb$^{-1}$
and (b) $R^{-1}=0.75$ TeV, $\sqrt{s}=10$ TeV and ${\cal L}=1$ fb$^{-1}$.
The yellow histogram is the SM background while
the red histogram includes both signal and backgrounds.}
\label{fig:invmass}}
\section{Conclusions}
\label{sec:conclusion}
The Minimal Universal Extra Dimensions scenario has received great attention.
Recently, non-vanishing bulk fermion masses have been introduced
without spoiling the virtues of KK parity.
The fermion profiles are no longer simple sine/cosine functions and depend upon
the specific values of the bulk parameters.
The profiles of the fermions are split along the extra dimension,
while the wave functions of the bosons remain the same as in UED.
A simple introduction of a KK-parity conserving bulk fermion mass has a significant influence
on the collider aspects as well as on the astrophysical implications of UED.
For instance, the DM annihilation fraction into certain SM fermion pairs is
either enhanced or reduced (compared to the MUED case), so that one can perhaps explain
the PAMELA positron excess while suppressing the anti-proton flux.
In this paper, we have concentrated on collider phenomenology of
Split Universal Extra Dimensions.
We have revisited the KK decomposition in detail and
analyzed wave function overlaps to compute relevant couplings for collider studies.
We have discussed the general collider implications of the level-1 and
level-2 KK modes with a non-zero bulk mass and have computed the LHC reach for the
EW level-2 KK bosons, $\gamma_2$ and $Z_2$, in the dilepton channel.
The LHC should be able to cover a large part of the parameter space
(up to $M_{V_2} \sim 1.5$ TeV for $\mu L \ge 1$)
even with early data of $\sim$100 pb$^{-1}$ or less.
The existence of double resonances is an essential feature of extra dimensional models.
Whether or not one can resolve the double resonances depends
both on how degenerate the two resonances are and on the mass resolution of the detector.
The very high $P_T$ of the decay products makes the resolution in the dimuon channel worse than in the
dielectron final state: electrons can be reconstructed in the ECAL, while
the muon momentum reconstruction relies on the track, which is barely curved at such high momenta.
A further indication for SUED might be the discovery of a
$W^\prime$-like signature with a mass close to that of $Z_2$.
MUED predicts a somewhat lower event rate due to the loop-suppressed
coupling of the level-2 bosons to SM fermion pairs, a coupling which exists at tree level in SUED.
Therefore in MUED one has to rely on indirect production of the level-2 bosons, whose
collider study requires complete knowledge of the model:
{\it the mass spectrum and all the couplings}.
On the other hand, in the large $\mu$ limit of SUED,
the dependence on the mass spectrum is diminished, since the level-2 KK bosons decay
only into SM fermion pairs.
This allows us to estimate the signal rate from their direct production,
so that they can be discovered at the early phase of the LHC.
The indirect production mechanism only increases production cross sections,
improving our results.
Once a discovery has been made, one should try to reconstruct events and
do further measurements such as spin and coupling determination,
with more accumulated data \cite{Li:2009xh,Petriello:2008zr,Rizzo:2009pu},
which might discriminate KK resonances from other $Z^\prime$ models.
The coupling measurement is directly related to the determination of the bulk masses.
A challenging issue might be the existence of two resonances which are rather close to each other.
\bigskip
\acknowledgments
We thank J. Shu and K. Wang for discussions on the forward-backward asymmetry and also
thank C. Cs\'aki, J. Heinonen and J. Hubisz for helpful discussions.
S. Park is supported by the World Premier International Research Center Initiative
(WPI initiative) by MEXT and also supported by the Grant-in-Aid for scientific
research (Young Scientists (B) 21740172) from JSPS, Japan.
K. Kong and T. G. Rizzo are supported in part by the DOE under contract DE-AC02-76SF00515.
\section{Introduction}
Building on the preparations in \cite{Asa-a}, we complete the project of the title in this paper.
We fix a small category $I$ and a commutative ring $\Bbbk$ and
denote by $\Bbbk\text{-}\mathbf{Cat}$ (resp.\ $\Bbbk\text{-}\mathbf{Ab}$, $\Bbbk\text{-}\mathbf{Tri}$)
the 2-category of small $\Bbbk$-categories
(resp.\ abelian $\Bbbk$-categories, triangulated $\Bbbk$-categories).
For a $\Bbbk$-category ${\mathcal C}$ a (right) ${\mathcal C}$-{\em module} is a contravariant functor from
${\mathcal C}$ to the category $\operatorname{Mod} \Bbbk$ of $\Bbbk$-modules,
and we denote by $\operatorname{Mod} {\mathcal C}$ (resp.\ $\operatorname{Prj} {\mathcal C}$, $\operatorname{prj} {\mathcal C}$) the category of ${\mathcal C}$-modules
(resp.\ projective ${\mathcal C}$-modules, finitely generated projective ${\mathcal C}$-modules).
When we deal with derived equivalences, we usually assume that
$\Bbbk$ is a field because Keller's theorem in \cite{Ke1} or \cite{Ke2}
on derived equivalences of categories
requires that the $\Bbbk$-categories in consideration are $\Bbbk$-flat or $\Bbbk$-projective.
A $\Bbbk$-category ${\mathcal C}$ with an action of a group $G$ has been well investigated
in connection with a so-called covering technique in representation theory of algebras
(see e.g., \cite{Gab}).
The orbit category ${\mathcal C}/G$ and the canonical functor ${\mathcal C} \to {\mathcal C}/G$
are naturally constructed from these data, and relationships
between $\operatorname{Mod} {\mathcal C}$ and $\operatorname{Mod} {\mathcal C}/G$ have been studied.
We brought this point of view to the derived equivalence classification problem
of algebras in \cite{Asa97}, and a main tool obtained there
was fully used in the derived equivalence classifications
in \cite{Asa99, Asa02}. The main tool was extended in \cite{Asa11} in
the following form:
\begin{thm}
Let $G$ be a group acting on categories ${\mathcal C}$ and ${\mathcal C}'$.
Assume that ${\mathcal C}$ is $\Bbbk$-flat and that the following condition is satisfied:
\begin{itemize}
\item[$(*)$]
There exists a $G$-stable tilting subcategory $E$ of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} {\mathcal C})$
such that there is a $G$-equivariant equivalence ${\mathcal C}' \to E$.
\end{itemize}
Then the orbit categories ${\mathcal C}/G$ and ${\mathcal C}'/G$ are derived equivalent.
\end{thm}
(In the above, ${\mathcal C}$ is called $\Bbbk$-{\em flat} if
all morphism spaces are flat $\Bbbk$-modules, and
$E$ is said to be $G$-{\em stable} if the set of objects in $E$ is stable
under the $G$-action on ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} {\mathcal C})$ induced from that on ${\mathcal C}$.)
Observe that if we regard $G$ as a category with a single
object $*$, then a $G$-action on a category ${\mathcal C}$ is nothing
but a functor $X : G \to \Bbbk\text{-}\mathbf{Cat}$ with $X(*)={\mathcal C}$; and
the orbit category ${\mathcal C}/G$ coincides with (the $\Bbbk$-linear version of) the Grothendieck
construction $\operatorname{Gr}(X)$ of $X$ defined in \cite{Groth}.
The purpose of this paper is to generalize this theorem to
an arbitrary category $I$ and to any {\em colax functors}\footnote{
In \cite{Asa-a} we called them {\em oplax} functors.
There are two versions of Grothendieck constructions: (1) for contravariant lax functors
and (2) for covariant colax functors. Since skew group algebras are formulated as the second
version we deal with colax functors here. See \cite[Example 2.12]{Asa11}.
}
$X, X' \colon I \to \Bbbk\text{-}\mathbf{Cat}$
(roughly speaking, a colax functor $X$ is a family $(X(i))_{i\in I_0}$ of $\Bbbk$-categories indexed by
the objects $i$ of $I$ together with an action of $I$;
the precise definition is given in Definition \ref{dfn:colax-fun}).
Recall that if ${\mathcal C}$ is a category with an action of a group $G$, then
the module category $\operatorname{Mod} {\mathcal C}$ (resp.\ the derived category ${\mathcal D}(\operatorname{Mod} {\mathcal C})$)
has the induced $G$-action; thus both of them are again categories with $G$-actions.
Hence for a colax functor $X$ the ``module category'' $\operatorname{Mod} X$
(resp.\ the ``derived category'' ${\mathcal D}(\operatorname{Mod} X)$) should again be a family of categories with
an $I$-action, i.e., a colax functor from $I$ to $\Bbbk\text{-}\mathbf{Ab}$ (resp.\ to $\Bbbk\text{-}\mathbf{Tri}$).
In addition, we need a notion of equivalences between colax functors for two purposes:
\begin{enumerate}
\item[(a)] to generalize the statement $(*)$; and
\item[(b)] to define a derived equivalence of colax functors $X$, $X'$ by the existence of
an equivalence between the colax functors ${\mathcal D}(\operatorname{Mod} X)$ and ${\mathcal D}(\operatorname{Mod} X')$.
\end{enumerate}
To define equivalences of objects we need notions of 1-morphisms and 2-morphisms;
thus we need a 2-categorical structure on the collection of colax functors, i.e.,
we need to define a 2-category $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$ of all colax functors
from $I$ to a 2-category $\mathbf{C}$, which can be used for both (a) and (b) above.
Having these things in mind we see that
to generalize the theorem above we have to solve the following problems:
\begin{enumerate}
\item[(1)] Define the ``module category'' of a colax functor again as
a colax functor.
\item[(2)] Define the ``derived category'' of a colax functor as
a colax functor.
\item[(3)] Give a natural definition of an equivalence between colax
functors using 2-morphisms of the 2-category of colax functors.
\item[(4)] Give a condition on a 1-morphism between colax functors
to be an equivalence.
\item[(5)] Give a natural definition of a derived equivalence between
colax functors by the equivalence (defined in (3)) of
their ``derived categories'' defined in (2).
\item[(6)] Characterize the existence of derived equivalences of colax functors
by tilting subcategories, which turns out to be
a generalization of Rickard's Morita theorem for colax functors.
\item[(7)] Induce a derived equivalence of Grothendieck constructions
of colax functors from the existence of tilting subcategories,
which will be a generalization of the theorem above.
\end{enumerate}
In our previous paper \cite{Asa-a}
we solved problems (1) -- (6) and clarified
the meaning of the condition $(*)$ in the setting of colax functors.
In this paper we solve the problem (7), and in addition we give
a unified way to solve (1) and (2) using the following general statement
on compositions with pseudofunctors (cf.\ Gordon--Power--Street \cite[Subsection 5.6]{GPS95}):
\begin{thm-nn}[Theorem \ref{comp-pseudofun}]
Let $\mathbf{B}, \mathbf{C}$ and $\mathbf{D}$ be $2$-categories and $V \colon \mathbf{C} \to \mathbf{D}$ a
pseudofunctor.
Then the obvious correspondence $($see subsection \ref{dfn-corr} for details$)$
$$
\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V) \colon \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}) \to \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})
$$
turns out to be a pseudofunctor.
\end{thm-nn}
The solutions of (1) and (2) use the correspondence on objects
given by the pseudofunctor $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)$.
The correspondence on 1-morphisms is needed also to solve (7).
The following is our main result (see Definition \ref{dfn:tilting-colax} for definitions):
\begin{thm-nn}[Theorem \ref{mainthm2}]
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Assume that $X$ is $\Bbbk$-flat and that there exists a tilting colax functor ${\mathcal T}$ for $X$
such that ${\mathcal T}$ and $X'$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$ are derived equivalent.
\end{thm-nn}
Note that there is an easier way (Lemma \ref{colax-eq}, a solution of (4))
to verify that ${\mathcal T}$ and $X'$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ in the above.
As an easy application, the theorem above gives a unified proof of the
following.
\begin{thm-nn}[Theorem \ref{thm:unified-proof}]
Assume that $\Bbbk$ is a field and that
$\Bbbk$-algebras $A$ and $A'$ are derived equivalent.
Then the following pairs are derived equivalent as well:
\begin{enumerate}
\item
path-categories $AQ$ and $A'Q$ for any quiver $Q$;
\item
incidence categories $AS$ and $A'S$ for any poset $S$; and
\item
monoid algebras $AG$ and $A'G$ for any monoid $G$.
\end{enumerate}
\end{thm-nn}
Theorem \ref{mainthm2} can be used to glue many
derived equivalences together as shown in Example \ref{exm:gluing}.
The paper is organized as follows.
In section 2 we recall the definition of the 2-category $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$
of colax functors from a category $I$ to a 2-category $\mathbf{C}$.
In section 3 we first define a diagonal 2-functor
$\Delta \colon \Bbbk\text{-}\mathbf{Cat} \to \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ in an obvious way,
and introduce a notion of $I$-coverings $(F, \psi) \colon X \to \Delta({\mathcal C})$
for a colax functor $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0$ and ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$
(the subscript 0 stands for objects) as a generalization of
$G$-coverings for a group $G$.
In section 4 we define a $\Bbbk$-linear version of Grothendieck construction
as a 2-functor $\operatorname{Gr} \colon \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat}) \to \Bbbk\text{-}\mathbf{Cat}$ and introduce
the canonical morphism $(P, \phi) \colon X \to \Delta(\operatorname{Gr}(X))$.
In section 5 we show that the Grothendieck construction is a
strict left adjoint to the diagonal 2-functor
with unit given by the family of canonical morphisms; in particular,
this shows that the canonical morphism $(P, \phi) \colon X \to \Delta(\operatorname{Gr}(X))$
is an $I$-covering and that any other $I$-covering $X \to \Delta({\mathcal C})$ is
given as the composite of this morphism followed by $\Delta(H)$ for an equivalence
$H \colon \operatorname{Gr}(X) \to {\mathcal C}$. This will be used in the proof of the main result.
In section 6 we redefine the module colax functor $\operatorname{Mod} X \colon I \to \Bbbk\text{-}\mathbf{Ab}$
and its derived colax functor ${\mathcal D}(\operatorname{Mod} X) \colon I \to \Bbbk\text{-}\mathbf{Tri}$
for a colax functor $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0$ by using Theorem \ref{comp-pseudofun}.
In addition, we also define ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$ for $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0$ and
show that this construction preserves $I$-precoverings, which is also used in the proof
of the main result.
It is obvious that the definitions given here coincide
with those given in our previous paper \cite{Asa-a}.
In section 7 we recall the definition of derived equivalences of colax functors in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$
and the theorem characterizing the derived equivalence by tilting colax functors
(Theorem \ref{mainthm1}).
In section 8 we give a proof of Theorem \ref{mainthm2},
and give some applications including an example
of gluing of pieces of derived equivalences together to have a larger one.
In the last section we give a proof of Theorem \ref{comp-pseudofun}.
\section*{Acknowledgements}
Most of this work was done during my stay in Bielefeld
in February and September 2010; a final part (Theorem 6.4) was done
in September 2011.
I would like to thank Claus M.\ Ringel and Henning Krause
for their hospitality and nice discussions.
The results were announced at the seminars in the Universities
of Bielefeld, Bonn, Paris 7, and in Beijing Normal University.
I would like to thank Jan Schr\"oer, Bernhard Keller and Changchang Xi
for their kind invitations.
The results were also announced at conferences:
ICRA XIV held in August 2010 in Tokyo (functor version),
the 6-th China-Japan-Korea International Conference on Ring and Module Theory
held in June 2011 at Kyung Hee University at Suwon,
and Shanghai International Conference on Representation
Theory of Algebras held in October 2011 at Shanghai Jiao Tong University.
I would like to thank the organizers for their kind invitations and hospitality.
Finally, I would like to thank D.\ Tamaki for useful discussions with him
on Grothendieck constructions and for his expositions on 2-categorical notions
through his preprints \cite{Tam, Tam2} that aimed at a generalization of \cite{Asa12}.
In addition I would also like to thank the referee for his/her careful reading,
suggestions and questions, which made the paper easier to read.
They also made me notice that I had forgotten to consider the naturality property (0)
of 1-morphisms in Definition \ref{dfn:colax-fun-2cat},
and I added the verification of this property
in the proof of Lemma \ref{lem:1-morphisms-colax};
I also changed the terminology ``oplax'' to ``colax''.
\section{Preliminaries}
In this section we recall the definition of the 2-category of colax functors
from $I$ to a 2-category from \cite{Asa-a} (see also Tamaki \cite{Tam}).
\begin{dfn}
\label{dfn:colax-fun}
Let $\mathbf{C}$ be a 2-category.
A {\em colax functor}
(or an {\em oplax} functor)
from $I$ to $\mathbf{C}$ is a triple
$(X, \eta, \theta)$ of data:
\begin{itemize}
\item
a quiver morphism $X\colon I \to \mathbf{C}$, where $I$ and $\mathbf{C}$ are regarded as quivers
by forgetting additional data such as 2-morphisms or compositions;
\item
a family $\eta:=(\eta_i)_{i\in I_0}$ of 2-morphisms $\eta_i\colon X(\id_i) \Rightarrow \id_{X(i)}$ in $\mathbf{C}$
indexed by $i\in I_0$; and
\item
a family $\theta:=(\theta_{b,a})_{(b,a)}$ of 2-morphisms
$\theta_{b,a} \colon X(ba) \Rightarrow X(b)X(a)$
in $\mathbf{C}$ indexed by $(b,a) \in \operatorname{com}(I):=
\{(b,a)\in I_1 \times I_1 \mid ba \text{ is defined}\}$
\end{itemize}
satisfying the axioms:
\begin{enumerate}
\item[(a)]
For each $a\colon i \to j$ in $I$ the following are commutative:
$$
\vcenter{
\xymatrix{
X(a\id_i) \ar@{=>}[r]^(.43){\theta_{a,\id_i}} \ar@{=}[rd]& X(a)X(\id_i)
\ar@{=>}[d]^{X(a)\eta_i}\\
& X(a)\id_{X(i)}
}}
\qquad\text{and}\qquad
\vcenter{
\xymatrix{
X(\id_j a) \ar@{=>}[r]^(.43){\theta_{\id_j,a}} \ar@{=}[rd]& X(\id_j)X(a)
\ar@{=>}[d]^{\eta_jX(a)}\\
& \id_{X(j)}X(a)
}}\quad;\text{ and}
$$
\item[(b)]
For each $i \ya{a}j \ya{b} k \ya{c} l$ in $I$ the following is commutative:
$$
\xymatrix@C=3em{
X(cba) \ar@{=>}[r]^(.43){\theta_{c,ba}} \ar@{=>}[d]_{\theta_{cb,a}}& X(c)X(ba)
\ar@{=>}[d]^{X(c)\theta_{b,a}}\\
X(cb)X(a) \ar@{=>}[r]_(.45){\theta_{c,b}X(a)}& X(c)X(b)X(a).
}
$$
\end{enumerate}
\end{dfn}
\begin{dfn}
Let $\mathbf{C}$ be a 2-category and $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$
be colax functors from $I$ to $\mathbf{C}$.
A {\em $1$-morphism} (called a {\em left transformation}) from $X$ to $X'$
is a pair $(F, \psi)$ of data
\begin{itemize}
\item
a family $F:=(F(i))_{i\in I_0}$ of 1-morphisms $F(i)\colon X(i) \to X'(i)$
in $\mathbf{C}$; and
\item
a family $\psi:=(\psi(a))_{a\in I_1}$ of 2-morphisms
$\psi(a)\colon X'(a)F(i) \Rightarrow F(j)X(a)$
$$
\xymatrix{
X(i) & X'(i)\\
X(j) & X'(j)
\ar_{X(a)} "1,1"; "2,1"
\ar^{X'(a)} "1,2"; "2,2"
\ar^{F(i)} "1,1"; "1,2"
\ar_{F(j)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,2"; "2,1"
}
$$
in $\mathbf{C}$ indexed by $a\colon i \to j$ in $I_1$
\end{itemize}
satisfying the axioms
\begin{enumerate}
\item[(a)]
For each $i \in I_0$ the following is commutative:
$$
\vcenter{
\xymatrix{
X'(\id_i)F(i) & F(i)X(\id_i)\\
\id_{X'(i)}F(i) & F(i)\id_{X(i)}
\ar@{=>}^{\psi(\id_i)} "1,1"; "1,2"
\ar@{=} "2,1"; "2,2"
\ar@{=>}_{\eta'_iF(i)} "1,1"; "2,1"
\ar@{=>}^{F(i)\eta_i} "1,2"; "2,2"
}}\quad;\text{ and}
$$
\item[(b)]
For each $i \ya{a} j \ya{b} k$ in $I$ the following is commutative:
$$
\xymatrix@C=4pc{
X'(ba)F(i) & X'(b)X'(a)F(i) & X'(b)F(j)X(a)\\
F(k)X(ba) & & F(k)X(b)X(a).
\ar@{=>}^{\theta'_{b,a}F(i)} "1,1"; "1,2"
\ar@{=>}^{X'(b)\psi(a)} "1,2"; "1,3"
\ar@{=>}_{F(k)\,\theta_{b,a}} "2,1"; "2,3"
\ar@{=>}_{\psi(ba)} "1,1"; "2,1"
\ar@{=>}^{\psi(b)X(a)} "1,3"; "2,3"
}
$$
\end{enumerate}
A $1$-morphism $(F, \psi) \colon X \to X'$ is said to be
$I$-{\em equivariant} if $\psi(a)$ is a 2-isomorphism in $\mathbf{C}$
for all $a \in I_1$.
\end{dfn}
\begin{dfn}
Let $\mathbf{C}$ be a 2-category, $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$
be colax functors from $I$ to $\mathbf{C}$, and
$(F, \psi)$, $(F', \psi')$ 1-morphisms from $X$ to $X'$.
A {\em $2$-morphism} from $(F, \psi)$ to $(F', \psi')$ is a
family $\zeta= (\zeta(i))_{i\in I_0}$ of 2-morphisms
$\zeta(i)\colon F(i) \Rightarrow F'(i)$ in $\mathbf{C}$
indexed by $i \in I_0$
such that the following is commutative for all $a\colon i \to j$ in $I$:
$$
\xymatrix@C=4pc{
X'(a)F(i) & X'(a)F'(i)\\
F(j)X(a) & F'(j)X(a).
\ar@{=>}^{X'(a)\zeta(i)} "1,1"; "1,2"
\ar@{=>}^{\zeta(j)X(a)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,1"; "2,1"
\ar@{=>}^{\psi'(a)} "1,2"; "2,2"
}
$$
\end{dfn}
\begin{dfn}
Let $\mathbf{C}$ be a 2-category, $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$
and $X''= (X'', \eta'', \theta'')$
be colax functors from $I$ to $\mathbf{C}$, and
let $(F, \psi)\colon X \to X'$, $(F', \psi')\colon X' \to X''$
be 1-morphisms.
Then the composite $(F', \psi')(F, \psi)$ of $(F, \psi)$ and
$(F', \psi')$ is a 1-morphism from $X$ to $X''$ defined by
$$
(F', \psi')(F, \psi):= (F'F, \psi'\circ\psi),
$$
where $F'F:=(F'(i)F(i))_{i\in I_0}$ and for each $a\colon i \to j$ in $I$,
$
(\psi'\circ\psi)(a):= F'(j)\psi(a)\circ \psi'(a)F(i)
$
is the pasting of the diagram
$$
\xymatrix@C=4pc{
X(i) & X'(i) & X''(i)\\
X(j) & X'(j) & X''(j).
\ar_{X(a)} "1,1"; "2,1"
\ar_{X'(a)} "1,2"; "2,2"
\ar^{F(i)} "1,1"; "1,2"
\ar_{F(j)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,2"; "2,1"
\ar^{X''(a)} "1,3"; "2,3"
\ar^{F'(i)} "1,2"; "1,3"
\ar_{F'(j)} "2,2"; "2,3"
\ar@{=>}_{\psi'(a)} "1,3"; "2,2"
}
$$
\end{dfn}
The following is straightforward to verify.
\begin{prp}
Let $\mathbf{C}$ be a $2$-category.
Then colax functors $I \to \mathbf{C}$,
$1$-morphisms between them, and $2$-morphisms between
$1$-morphisms $($defined above$)$ define a $2$-category,
which we denote by $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$.
\end{prp}
\begin{ntn}\label{ntn:co-op}
Let $\mathbf{C}$ be a 2-category.
Then we denote by $\mathbf{C}^{\text{op}}$
(resp.\ $\mathbf{C}^{\text{co}}$) the 2-category
obtained from $\mathbf{C}$ by reversing the 1-morphisms
(resp.\ the 2-morphisms), and we set
$\mathbf{C}^{\text{coop}}:=(\mathbf{C}^{\text{co}})^{\text{op}}=(\mathbf{C}^{\text{op}})^{\text{co}}$.
\end{ntn}
\section{$I$-coverings}
In this section we introduce the notion of $I$-coverings, a generalization of
the notion of $G$-coverings for a group $G$ introduced in \cite{Asa11}, which in turn
generalized the Galois coverings introduced by Gabriel in \cite{Gab}.
This will be used in the proof of our main theorem.
\begin{dfn}
We define a 2-functor $\Delta\colon \Bbbk\text{-}\mathbf{Cat} \to \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ as follows,
which is called the {\em diagonal} 2-functor:
\begin{itemize}
\item
Let ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$. Then $\Delta({\mathcal C})$ is defined to be the constant functor
sending each object of $I$ to ${\mathcal C}$ and each morphism $a\colon i \to j$ in $I$ to
$\id_{{\mathcal C}}\colon {\mathcal C} \to {\mathcal C}$.
\item
Let $E \colon {\mathcal C} \to {\mathcal C}'$ be a 1-morphism in $\Bbbk\text{-}\mathbf{Cat}$.
Then $\Delta(E)\colon \Delta({\mathcal C}) \to \Delta({\mathcal C}')$ is a 1-morphism
$(F,\psi)$ in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$
defined by $F(i):=E$ and $\psi(a):= \id_E$ for all $i \in I_0$ and all $a \in I_1$:
$$
\xymatrix{
{\mathcal C} & {\mathcal C}'\\
{\mathcal C} & {\mathcal C}'.
\ar^E "1,1"; "1,2"
\ar^E "2,1"; "2,2"
\ar_{\id_{{\mathcal C}}} "1,1"; "2,1"
\ar^{\id_{{\mathcal C}'}} "1,2"; "2,2"
\ar@{=>}_{\id_E}"1,2";"2,1"
}
$$
\item
Let $E, E'\colon {\mathcal C} \to {\mathcal C}'$ be 1-morphisms in $\Bbbk\text{-}\mathbf{Cat}$, and
$\alpha \colon E \Rightarrow E'$ a 2-morphism in $\Bbbk\text{-}\mathbf{Cat}$.
Then $\Delta(\alpha)\colon \Delta(E) \Rightarrow \Delta(E')$ is a 2-morphism in
$\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ defined by $\Delta(\alpha):= (\alpha)_{i\in I_0}$.
\end{itemize}
\end{dfn}
\begin{rmk}\label{rmk:1-mor-to-De}
Let $\mathbf{C}$ be a 2-category,
$X=(X, \eta, \theta) \in \overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$, and $C \in \mathbf{C}_0$.
Further let
\begin{itemize}
\item $F$ be a family of 1-morphisms $F(i)\colon X(i) \to C$ in $\mathbf{C}$
indexed by $i\in I_0$; and
\item $\psi$ be a family of 2-morphisms $\psi(a)\colon F(i) \Rightarrow F(j)X(a)$
indexed by $a\colon i \to j$ in $I$:
$$
\xymatrix{
X(i) & C\\
X(j) & C
\ar^{F(i)} "1,1"; "1,2"
\ar_{F(j)} "2,1"; "2,2"
\ar_{X(a)} "1,1";"2,1"
\ar@{=} "1,2";"2,2"
\ar@{=>}_{\psi(a)} "1,2";"2,1"
}
$$
\end{itemize}
Then $(F, \psi)$ is in $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})(X, \Delta(C))$
if and only if the following hold.
\begin{enumerate}
\item[(a)]
For each $i \in I_0$ the following is commutative:
$$
\vcenter{
\xymatrix{
F(i) & F(i)X(\id_i)\\
& F(i)\id_{X(i)}
\ar@{=>}^{\psi(\id_i)} "1,1"; "1,2"
\ar@{=} "1,1"; "2,2"
\ar@{=>}^{F(i)\eta_i} "1,2"; "2,2"
}}\quad;\text{ and}
$$
\item[(b)]
For each $i \ya{a} j \ya{b} k$ in $I$ the following is commutative:
$$
\xymatrix@C=4pc{
F(i) & F(j)X(a)\\
F(k)X(ba) & F(k)X(b)X(a).
\ar@{=>}^{\psi(a)} "1,1"; "1,2"
\ar@{=>}_{F(k)\,\theta_{b,a}} "2,1"; "2,2"
\ar@{=>}_{\psi(ba)} "1,1"; "2,1"
\ar@{=>}^{\psi(b)X(a)} "1,2"; "2,2"
}
$$
\end{enumerate}
\end{rmk}
\begin{dfn}
Let ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$ and $(F, \psi) \colon X \to \Delta({\mathcal C}) $ be in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then
\begin{enumerate}
\item
$(F, \psi)$ is called an $I$-{\em precovering} (of ${\mathcal C}$) if the homomorphism
$$
(F,\psi)_{x,y}^{(1)}\colon \bigoplus_{a\in I(i,j)}X(j)(X(a)x, y) \to {\mathcal C}(F(i)x, F(j)y)
$$
of $\Bbbk$-modules defined by
$(f_a\colon X(a)x \to y)_{a\in I(i,j)} \mapsto \sum_{a\in I(i,j)} F(j)(f_a) \circ\psi(a)(x)$
is an isomorphism for all $i, j \in I_0$ and
all $x \in X(i)_0$, $y \in X(j)_0$.
\item
$(F, \psi)$ is called an $I$-{\em covering} if it is an $I$-precovering and is {\em dense},
i.e., for each $c \in {\mathcal C}_0$ there exists an $i \in I_0$ and $x \in X(i)_0$
such that $F(i)(x)$ is isomorphic to $c$ in ${\mathcal C}$.
\end{enumerate}
\end{dfn}
\section{Grothendieck constructions}
In this section we define a 2-functor $\operatorname{Gr}\colon \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat}) \to \Bbbk\text{-}\mathbf{Cat}$ whose
correspondence on objects is a $\Bbbk$-linear version of (the opposite version of)
the original Grothendieck construction (cf. \cite{Tam}).
\begin{dfn}
We define a 2-functor $\operatorname{Gr}\colon \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat}) \to \Bbbk\text{-}\mathbf{Cat}$,
which is called the {\em Grothendieck construction}.
{\bf On objects.} Let $X=(X, \eta, \theta) \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0$.
Then $\operatorname{Gr}(X) \in \Bbbk\text{-}\mathbf{Cat}_0$ is defined as follows.
\begin{itemize}
\item $\operatorname{Gr}(X)_0:= \bigcup_{i\in I_0} \{ i \} \times X(i)_0
= \{{}_ix:= (i,x) \mid i \in I_0, x \in X(i)_0\}$.
\item For each ${}_ix, {}_jy \in \operatorname{Gr}(X)_0$, we set
$$
\operatorname{Gr}(X)({}_ix, {}_jy) := \bigoplus_{a\in I(i,j)} X(j)(X(a)x, y).
$$
\item For each ${}_ix, {}_jy, {}_kz \in \operatorname{Gr}(X)_0$ and
each $f=(f_a)_{a\in I(i,j)}\in \operatorname{Gr}(X)({}_ix, {}_jy)$,
$g=(g_b)_{b\in I(j,k)}\in \operatorname{Gr}(X)({}_jy, {}_kz)$, we set
$$
g\circ f:= \left(\sum_{\begin{smallmatrix}a\, \in\, I(i,j)\\b\, \in\, I(j,k)\\c\, =\, ba\end{smallmatrix}}
g_b\circ X(b)f_a
\circ \theta_{b,a}x \right)_{c\,\in\, I(i,k)},
$$
where each summand is the composite of
$$
X(ba)x \ya{\theta_{b,a}x} X(b)X(a)x \ya{X(b)f_a}
X(b)y \ya{g_b} z.$$
\item For each ${}_ix \in \operatorname{Gr}(X)_0$ the identity $\id_{{}_ix}$ is given by
$$
\id_{{}_ix} = (\delta_{a,\id_i}\eta_i\,x)_{a\in I(i,i)} \in \bigoplus_{a\in I(i,i)}X(i)(X(a)x,x),
$$
where $\delta$ is the Kronecker delta\footnote{
This is used to mean that the $a$-th component is $\eta_i\,x$ if $a=\id_i$,
and 0 otherwise.
}.
\end{itemize}
{\bf On 1-morphisms.}
Let $X=(X, \eta, \theta), X'=(X', \eta', \theta')$ be objects of $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ and
$(F, \psi)\colon X \to X'$ a 1-morphism in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then a 1-morphism
$$
\operatorname{Gr}(F, \psi) \colon \operatorname{Gr}(X) \to \operatorname{Gr}(X')
$$
in $\Bbbk\text{-}\mathbf{Cat}$ is defined as follows.
\begin{itemize}
\item For each ${}_ix \in \operatorname{Gr}(X)_0$, $\operatorname{Gr}(F, \psi)({}_ix):={}_i(F(i)x)$.
\item For each ${}_ix, {}_jy \in \operatorname{Gr}(X)_0$ and
each $f=(f_a)_{a\in I(i,j)} \in \operatorname{Gr}(X)({}_ix, {}_jy)$,
we set
$\operatorname{Gr}(F,\psi)(f):= (F(j)f_a\circ \psi(a)x)_{a\in I(i,j)}$, where each entry is the composite of
$$
X'(a)F(i)x \xrightarrow{\psi(a)x} F(j)X(a)x \xrightarrow{F(j)f_a} F(j)y.
$$
\end{itemize}
{\bf On 2-morphisms.}
Let $X=(X, \eta, \theta), X'=(X', \eta', \theta')$ be objects of $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ and
$(F, \psi), (F', \psi') \colon X \to X'$ 1-morphisms in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$,
and let $\zeta\colon (F,\psi) \Rightarrow (F', \psi')$ be a 2-morphism in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then a 2-morphism
$$
\operatorname{Gr}(\zeta) \colon \operatorname{Gr}(F,\psi) \Rightarrow \operatorname{Gr}(F', \psi')
$$
in $\Bbbk\text{-}\mathbf{Cat}$ is defined by
$$
\operatorname{Gr}(\zeta){}_ix := (\delta_{a, \id_i} \zeta(i)x)_{a\in I(i,i)}\colon {}_i(F(i)x)
\to {}_i(F'(i)x)$$
in $\operatorname{Gr}(X')$ for each ${}_ix \in \operatorname{Gr}(X)_0$.
\end{dfn}
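To make the composition and identity formulas of the definition concrete, here is a small sanity check of our own (not from the paper): take $X = \Delta(\Bbbk)$ with $\Bbbk$ the ground field viewed as a one-object $\Bbbk$-category. Then all $\eta_i$ and $\theta_{b,a}$ are identities, so a morphism in $\operatorname{Gr}(X)({}_ix, {}_jy)$ is just a family of scalars indexed by $I(i,j)$ and composition reduces to a convolution product over the arrows of $I$. The category $I$ below (the free category on the quiver $1 \to 2$) and all variable names are illustrative choices.

```python
# Toy model of Gr(X) for X = Delta(k): morphisms are dicts {arrow: scalar},
# composition is the convolution (g.f)_c = sum_{c = b.a} g_b * f_a, and the
# identity at _i* is the Kronecker-delta family (eta_i is the identity here).

# the category I: hom-sets and the composition table c = b.a
hom = {(1, 1): ["id1"], (1, 2): ["a"], (2, 1): [], (2, 2): ["id2"]}
comp = {("id1", "id1"): "id1", ("a", "id1"): "a",
        ("id2", "a"): "a", ("id2", "id2"): "id2"}

def compose(g, f):
    """(g.f)_c = sum over factorizations c = b.a of g_b * f_a."""
    out = {}
    for b, gb in g.items():
        for a, fa in f.items():
            if (b, a) in comp:
                c = comp[(b, a)]
                out[c] = out.get(c, 0) + gb * fa
    return out

def identity(i):
    """The identity at _i* from the definition: delta_{a, id_i} * 1."""
    return {a: 1 for a in hom[(i, i)]}

f = {"a": 5}                              # a morphism _1* -> _2* in Gr(X)
assert compose(identity(2), f) == f       # unit law at the target
assert compose(f, identity(1)) == f       # unit law at the source
```

The unit laws checked here are exactly axiom (a) of the colax-functor definition in this degenerate case where $\eta$ and $\theta$ are identities.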
\begin{exm}\label{exm:Gr}
Let $A$ be a $\Bbbk$-algebra
regarded as a $\Bbbk$-category with a single object.
Then $A \in \Bbbk\text{-}\mathbf{Cat}_0$.
Consider the functor $X:= \Delta(A) \colon I \to \Bbbk\text{-}\mathbf{Cat}$.
Then it is straightforward to verify the following.
\begin{enumerate}
\item If $I$ is a free category defined by the quiver $1 \to 2$,
then $\operatorname{Gr}(X)$ is isomorphic to the triangular algebra
$\bmat{A&0\\A&A}$.
\item If $I$ is a free category defined by a quiver $Q$,
then $\operatorname{Gr}(X)$ is isomorphic to the path-category $AQ$ of $Q$ over $A$.
\item If $I$ is a poset $S$,
then $\operatorname{Gr}(X)$ is isomorphic to the incidence category $AS$ of $S$ over $A$.
\item If $I$ is a monoid $G$,
then $\operatorname{Gr}(X)$ is isomorphic to the monoid algebra\footnote{
Since $AG$ has the identity $1_A1_G$,
this is regarded as a category with a single object.
} $AG$ of $G$ over $A$.
\end{enumerate}
In (3) above, $AS$ is defined to be the factor category
of the path-category $AQ$ modulo the ideal
generated by the full commutativity relations in $Q$,
where $Q$ is the Hasse diagram of $S$ regarded as a quiver by
drawing an arrow $x \to y$ whenever $y$ covers $x$ in $S$.
If $S$ is a finite poset, then $AS$ is identified with the usual incidence algebra.
See \cite{Asa-Kim} for further examples of the Grothendieck constructions
of functors, in which the examples (2) and (3) above are unified and generalized.
\end{exm}
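As a quick numerical illustration of item (3) above (ours, not the paper's): for the chain $S = \{1 < 2 < 3\}$ viewed as the category $I$ and $X = \Delta(\Bbbk)$, each hom-space $\operatorname{Gr}(X)({}_i{*}, {}_j{*})$ is $\Bbbk$ if $i \le j$ and $0$ otherwise, so the total dimension is the number of pairs $i \le j$, matching the incidence algebra of $S$.

```python
# Dimension count for Gr(Delta(k)) over the poset S = {1 < 2 < 3}.
# In a poset, |I(i,j)| is 1 if i <= j and 0 otherwise, and each arrow
# contributes a one-dimensional summand X(j)(X(a)x, y) = k.

S = [1, 2, 3]

def hom_dim(i, j):
    """dim_k Gr(Delta(k))(_i*, _j*) = |I(i,j)|."""
    return 1 if i <= j else 0

total = sum(hom_dim(i, j) for i in S for j in S)
assert total == 6   # = #{(i,j) : i <= j}, the dim of the incidence algebra
```

For a finite poset this count agrees with the usual incidence algebra mentioned in the example.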
\begin{dfn}
Let $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
We define a left transformation $(P_X, \phi_X):= (P, \phi)\colon X \to \Delta(\operatorname{Gr}(X))$
(called the {\em canonical morphism}) as follows.
\begin{itemize}
\item For each $i \in I_0$, the functor $P(i)\colon X(i) \to \operatorname{Gr}(X)$ is defined by
$$
\left\{
\begin{aligned}
P(i)x&:= {}_ix\\
P(i)f &:=(\delta_{a,\id_i} f\circ (\eta_i\,x))_{a\in I(i,i)}\colon {}_ix \to {}_iy\text{\ in\ $\operatorname{Gr}(X)$}
\end{aligned}
\right.
$$
for all $f\colon x \to y$ in $X(i)$.
\item For each $a \colon i \to j$ in $I$, the natural transformation
$\phi(a)\colon P(i) \Rightarrow P(j)X(a)$
$$\xymatrix{
X(i) & \operatorname{Gr}(X)\\
X(j) & \operatorname{Gr}(X)
\ar^{P(i)} "1,1";"1,2"
\ar_{P(j)} "2,1";"2,2"
\ar_{X(a)} "1,1";"2,1"
\ar@{=} "1,2";"2,2"
\ar@{=>}_{\phi(a)}"1,2";"2,1"
}
$$
is defined by $\phi(a)x:= (\delta_{b,a} \id_{X(a)x})_{b \in I(i,j)}$ for all $x \in X(i)_0$.
\end{itemize}
\end{dfn}
\begin{lem}
The $(P, \phi)$ defined above is a $1$-morphism in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
\end{lem}
\begin{proof}
This is straightforward by using Remark \ref{rmk:1-mor-to-De}.
\end{proof}
\begin{prp}\label{prp:can-covering}
Let $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then the canonical morphism $(P, \phi)\colon X \to \Delta(\operatorname{Gr}(X))$
is an $I$-covering.
More precisely, the morphism
$$(P, \phi)_{x,y}^{(1)}\colon \bigoplus_{a\in I(i,j)}X(j)(X(a)x, y) \to \operatorname{Gr}(X)(P(i)x, P(j)y)$$
is the identity
for all $i, j \in I_0$ and all $x \in X(i)_0$, $y \in X(j)_0$.
\end{prp}
\begin{proof}
By the definitions of $\operatorname{Gr}(X)_0$ and of $P$ it is obvious
that $(P, \phi)$ is dense.
Let $i, j \in I_0$ and $x \in X(i)$, $y \in X(j)$.
We only have to show that
$$(P, \phi)_{x,y}^{(1)}\colon \bigoplus_{a\in I(i,j)}X(j)(X(a)x, y) \to \operatorname{Gr}(X)(P(i)x, P(j)y)$$
is the identity.
Let $f = (f_a)_{a\in I(i,j)}\in \bigoplus_{a\in I(i,j)}X(j)(X(a)x, y)$.
Then
$$
\begin{aligned}
(P, \phi)_{x,y}^{(1)}(f) &= \sum_{a\in I(i,j)}P(j)(f_a)\circ \phi(a)x\\
&= \sum_{a\in I(i,j)}(\delta_{b,\id_j}f_a\circ (\eta_j\,x))_{b\in I(j,j)}\circ
(\delta_{c,a}\id_{X(a)x})_{c\in I(i,j)}\\
&= \sum_{a\in I(i,j)}\left(\sum_{\smat{b\in I(j,j)\\c\in I(i,j)\\d=bc}}\delta_{b,\id_j}f_a \circ(\eta_j\,x)\circ \delta_{c,a}\id_{X(b)X(a)x}\circ \theta_{b,c}x \right)_{d\in I(i,j)}\\
&= \sum_{a\in I(i,j)}\left(\delta_{d,a}f_a \circ (\eta_j\,x)\circ \id_{X(\id_j)X(a)x}\circ \theta_{\id_j, a}x \right)_{d\in I(i,j)}\\
&= (f_a \circ (\eta_j\,x) \circ \theta_{\id_j,a}x)_{a\in I(i,j)}
= (f_a)_{a\in I(i,j)}\\
&= f,
\end{aligned}
$$
as required.
\end{proof}
\begin{lem}\label{covering-equivalence}
Let $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0$ and
$H\colon \operatorname{Gr}(X) \to {\mathcal C}$ be in $\Bbbk\text{-}\mathbf{Cat}$ and consider the composite $1$-morphism
$(F, \psi) \colon X \ya{(P, \phi)} \Delta(\operatorname{Gr}(X)) \ya{\Delta(H)} \Delta({\mathcal C})$.
Then $(F, \psi)$ is an $I$-covering if and only if $H$ is an equivalence.
\end{lem}
\begin{proof}
Obviously $(F, \psi)$ is dense if and only if so is $H$.
Further for each $i, j \in I_0$, $x \in X(i)$ and $y \in X(j)$,
$(F, \psi)^{(1)}_{x,y}$ is an isomorphism if and only if so is $H_{{}_ix, {}_jy}$
because
we have a commutative diagram
$$
\xymatrix{
\bigoplus_{a\in I(i,j)}X(j)(X(a)x, y) & {\mathcal C}(F(i)x, F(j)y)\\
\operatorname{Gr}(X)({}_ix, {}_jy)
\ar^(.58){(F, \psi)^{(1)}_{x,y}}"1,1";"1,2"
\ar@{=}_{(P,\phi)^{(1)}_{x,y}}"1,1";"2,1"
\ar_{H_{{}_ix, {}_jy}}"2,1";"1,2"
}
$$
by Proposition \ref{prp:can-covering}.
\end{proof}
\section{Adjoints}
In this section we will show that the Grothendieck construction is a strict left 2-adjoint to
the diagonal 2-functor $\Delta$, and that $I$-coverings are essentially given
by the unit of the adjunction.
\begin{dfn}
Let ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$.
We define a functor $Q_{{\mathcal C}} \colon \operatorname{Gr}(\Delta({\mathcal C})) \to {\mathcal C}$ by
\begin{itemize}
\item $Q_{{\mathcal C}}({}_ix) := x$ for all ${}_ix \in \operatorname{Gr}(\Delta({\mathcal C}))_0$; and
\item $Q_{{\mathcal C}}((f_a)_{a \in I(i,j)}):= \sum_{a\in I(i,j)}f_a$
for all $(f_a)_{a \in I(i,j)} \in \operatorname{Gr}(\Delta({\mathcal C}))({}_ix, {}_jy)$
and for all ${}_ix, {}_jy \in \operatorname{Gr}(\Delta({\mathcal C}))_0$.
\end{itemize}
It is easy to verify that $Q_{{\mathcal C}}$ is a $\Bbbk$-functor.
\end{dfn}
\begin{thm}\label{Gr-De-adjoint}
The $2$-functor $\operatorname{Gr}\colon \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat}) \to \Bbbk\text{-}\mathbf{Cat}$ is a strict left $2$-adjoint to
the $2$-functor $\Delta\colon \Bbbk\text{-}\mathbf{Cat} \to \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
The unit is given by the family
of canonical morphisms $(P_X, \phi_X) \colon X \to \Delta(\operatorname{Gr}(X))$
indexed by $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$, and the counit is given by the family of
$Q_{{\mathcal C}} \colon \operatorname{Gr}(\Delta({\mathcal C})) \to {\mathcal C}$
defined as above indexed by ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$.
In particular, $(P_X, \phi_X)$ has a strict universality in the comma category
$(X \downarrow \Delta)$, i.e., for each $(F, \psi) \colon X \to \Delta({\mathcal C})$ in
$\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ with ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$,
there exists a unique $H \colon \operatorname{Gr}(X) \to {\mathcal C} $ in $\Bbbk\text{-}\mathbf{Cat}$ such that
the following is a commutative diagram in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$:
$$
\xymatrix{
X & \Delta({\mathcal C}).\\
\Delta(\operatorname{Gr}(X))
\ar^{(F,\psi)}"1,1";"1,2"
\ar_{(P_X,\phi_X)} "1,1"; "2,1"
\ar@{-->}_{\Delta(H)} "2,1";"1,2"
}
$$
\end{thm}
\begin{proof}
For simplicity set $\eta:=((P_X, \phi_X))_{X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})_0}$ and
$\varepsilon:= (Q_{{\mathcal C}})_{{\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0}$.
\begin{clm}
$\Delta \varepsilon \cdot \eta \Delta = \id_{\Delta}$.
\end{clm}
Indeed, let ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$.
It is enough to show that
$\Delta(Q_{{\mathcal C}}) \cdot (P_{\Delta({\mathcal C})}, \phi_{\Delta({\mathcal C})}) = \id_{\Delta({\mathcal C})}$.
Now
$$
\begin{aligned}
\mathrm{LHS}
&=\left((Q_{{\mathcal C}}P_{\Delta({\mathcal C})}(i))_{i\in I_0}, (Q_{{\mathcal C}}\phi_{\Delta({\mathcal C})}(a))_{a\in I_1}\right)
, \text{ and}\\
\mathrm{RHS}
&=\left((\id_{{\mathcal C}})_{i\in I_0}, (\id_{\id_{{\mathcal C}}})_{a\in I_1}\right).
\end{aligned}
$$
{\it First entry$\colon$} Let $i \in I_0$.
Then $Q_{{\mathcal C}}P_{\Delta({\mathcal C})}(i) = \id_{{\mathcal C}}$
because for each $x, y \in {\mathcal C}_0$ and each $f\in {\mathcal C}(x, y)$ we have
$(Q_{{\mathcal C}}P_{\Delta({\mathcal C})}(i))(x) = Q_{{\mathcal C}}({}_ix) = x$; and
$(Q_{{\mathcal C}}P_{\Delta({\mathcal C})}(i))(f) = (\delta_{a, \id_i}f \cdot ((\eta_{\Delta({\mathcal C})})_i\, x))_{a\in I(i,i)} =
\sum_{a \in I(i, i)}\delta_{a, \id_i}f = f$.
{\it Second entry$\colon$} Let $a \colon i \to j$ in $I$.
Then $Q_{{\mathcal C}}\phi_{\Delta({\mathcal C})}(a) = \id_{\id_{{\mathcal C}}}$ because
for each $x \in {\mathcal C}_0$ we have
$Q_{{\mathcal C}}\left(\phi_{\Delta({\mathcal C})}(a)x\right)
= Q_{{\mathcal C}}\left((\delta_{b,a}\id_{\Delta({\mathcal C})(a)x})_{b\in I(i,j)}\right)
=\sum_{b\in I(i,j)}\delta_{b,a}\id_x = \id_x = \id_{\id_{{\mathcal C}}x}$.
This shows that $\mathrm{LHS} = \mathrm{RHS}$.
\begin{clm}
$\varepsilon \operatorname{Gr} \cdot \operatorname{Gr} \eta = \id_{\operatorname{Gr}}$.
\end{clm}
Indeed, let $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
It is enough to show that
$Q_{\operatorname{Gr}(X)}\cdot \operatorname{Gr}(P_X, \phi_X) = \id_{\operatorname{Gr}(X)}$.
{\it On objects$\colon$} Let ${}_ix \in \operatorname{Gr}(X)_0$.
Then $Q_{\operatorname{Gr}(X)}\left(\operatorname{Gr}(P_X, \phi_X)({}_ix)\right)
= Q_{\operatorname{Gr}(X)}({}_i(P_X(i)x))
= {}_ix$.
{\it On morphisms$\colon$} Let $f = (f_a)_{a\in I(i,j)} \colon {}_ix \to {}_jy$ be in $\operatorname{Gr}(X)$.
Then
$Q_{\operatorname{Gr}(X)}\operatorname{Gr}(P_X, \phi_X)(f)
= Q_{\operatorname{Gr}(X)}((P_X(j)(f_a)\circ\phi_X(a)x)_{a\in I(i,j)})
= \sum_{a\in I(i,j)}P_X(j)(f_a)\circ\phi_X(a)x
= (P_X, \phi_X)^{(1)}_{x,y}(f) = f$.
Thus the claim holds.
The two claims above prove the assertion.
\end{proof}
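It may help to record the explicit form of the functor $H$ produced by this universality. The following formulas are our own unwinding of the proof (using $H = Q_{{\mathcal C}} \circ \operatorname{Gr}(F, \psi)$), not a statement taken from the text:

```latex
% Unwinding the definitions of Gr and Q_C for a 1-morphism
% (F, psi) : X -> Delta(C), the unique H : Gr(X) -> C is given by
\[
H({}_ix) \;=\; F(i)x, \qquad
H\bigl((f_a)_{a\in I(i,j)}\bigr)
\;=\; \sum_{a\in I(i,j)} F(j)(f_a)\circ \psi(a)x
\;=\; (F,\psi)^{(1)}_{x,y}\bigl((f_a)_{a}\bigr)
\]
% for all objects _ix, _jy of Gr(X) and all (f_a)_a in Gr(X)(_ix, _jy).
```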
\begin{cor}\label{covering-Gr}
Let $(F, \psi) \colon X \to \Delta({\mathcal C})$ be in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then the following are equivalent.
\begin{enumerate}
\item $(F, \psi)$ is an $I$-covering;
\item There exists an equivalence $H \colon \operatorname{Gr}(X) \to {\mathcal C}$ such that
the diagram
$$
\xymatrix{
X & \Delta({\mathcal C})\\
\Delta(\operatorname{Gr}(X))
\ar^{(F, \psi)}"1,1";"1,2"
\ar_{(P_X, \phi_X)}"1,1";"2,1"
\ar_{\Delta(H)}"2,1";"1,2"
}
$$
is strictly commutative.
\end{enumerate}
\end{cor}
\begin{proof}
This immediately follows by Theorem \ref{Gr-De-adjoint} and Lemma \ref{covering-equivalence}.
\end{proof}
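As an orientation, here is a sketch of the classical special case (our own illustration under the stated assumptions, not part of the text): take $I$ to be a group $G$ regarded as a category with one object $*$, and let $X \colon G \to \Bbbk\text{-}\mathbf{Cat}$ be a functor, i.e.\ a $G$-action on ${\mathcal C}:= X(*)$. Then the definition of $\operatorname{Gr}(X)$ specializes to

```latex
\[
\operatorname{Gr}(X)_0 = \{\,{}_*x \mid x \in {\mathcal C}_0\,\},
\qquad
\operatorname{Gr}(X)({}_*x, {}_*y) \;=\; \bigoplus_{g\in G} {\mathcal C}\bigl(X(g)x,\, y\bigr),
\]
```

which, up to the choice of conventions, is the skew group category ${\mathcal C} * G$, and the canonical morphism $(P, \phi)$ recovers the classical $G$-covering ${\mathcal C} \to {\mathcal C} * G$.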
\section{The Module colax functor}
Let $X\colon I \to \Bbbk\text{-}\mathbf{Cat}$ be a colax functor.
In this section we simplify the definition of
the ``module category'' $\operatorname{Mod} X$ of $X$ as a colax functor $I \to \Bbbk\text{-}\mathbf{Cat}$
given in our previous paper \cite{Asa-a}.
Recall that the {\em module category} $\operatorname{Mod} {\mathcal C}$ of
a category ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$ is defined to be the functor category
$\Bbbk\text{-}\mathbf{Cat}({\mathcal C}\op, \operatorname{Mod} \Bbbk)$, where $\operatorname{Mod} \Bbbk$ denotes the
category of $\Bbbk$-modules.
Since $\Bbbk\text{-}\mathbf{Cat}$ is a 2-category, this is extended to a representable 2-functor
$$
\operatorname{Mod}':= \Bbbk\text{-}\mathbf{Cat}((\operatorname{-})\op, \operatorname{Mod} \Bbbk) \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}^{\mathrm{coop}}
$$
(see Notation \ref{ntn:co-op}).
As is easily seen, the composite $\operatorname{Mod}' \circ X$ turns out to be a colax functor
$I \to \Bbbk\text{-}\mathbf{Ab}^{\mathrm{coop}}$, i.e., a contravariant lax functor $I \to \Bbbk\text{-}\mathbf{Ab}$.
When $X$ is a group action, namely when $I$ is a group $G$ and $X \colon G \to \Bbbk\text{-}\mathbf{Cat}$
is a functor, the usual module category $\operatorname{Mod} X$ with its induced $G$-action
was defined to be the composite functor
$\operatorname{Mod} X:= \operatorname{Mod}' \circ X \circ i$, where $i \colon G \to G$ is the group anti-isomorphism
defined by $x \mapsto x^{-1}$ for all $x \in G$.
In this way $\operatorname{Mod}' \circ X$ is turned into a covariant functor.
In general, however, no such anti-isomorphism $i$ is available.
Instead, in this paper we will use a covariant ``pseudofunctor'' $\operatorname{Mod} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}$
defined below and will define $\operatorname{Mod} X$ as the composite $\operatorname{Mod} \circ X$,
which can be seen as a ``lax'' extension of the module category construction for a category with a $G$-action described above.
We start with the notion of a colax functor between 2-categories.
Compare our definitions of colax functors,
left transformations (1-morphisms) and 2-morphisms in the setting of 2-categories given below
with definitions of morphisms, transformations and modifications in the setting of bicategories
(see Leinster \cite{Lei} for instance).
\begin{dfn}
\label{dfn:colax-fun-2cat}
Let $\mathbf{B}$ and $\mathbf{C}$ be 2-categories.
(1) A {\em colax functor} from $\mathbf{B}$ to $\mathbf{C}$ is a triple
$(X, \eta, \theta)$ of data:
\begin{itemize}
\item
a triple $X=(X_0, X_1, X_2)$ of maps $X_i\colon \mathbf{B}_i \to \mathbf{C}_i$ ($\mathbf{B}_i$ denotes the
collection of $i$-morphisms of $\mathbf{B}$ for each $i=0,1,2$) preserving domains and codomains of all 1-morphisms and 2-morphisms
(i.e.\ $X_1(\mathbf{B}_1(i,j)) \subseteq \mathbf{C}_1(X_0i, X_0j)$
for all $i, j \in \mathbf{B}_0$ and $X_2(\mathbf{B}_2(a,b)) \subseteq \mathbf{C}_2(X_1a, X_1b)$
for all $a, b \in \mathbf{B}_1$ (we omit the subscripts of $X$ below));
\item
a family $\eta:=(\eta_i)_{i\in \mathbf{B}_0}$ of 2-morphisms $\eta_i\colon X(\id_i) \Rightarrow \id_{X(i)}$ in $\mathbf{C}$
indexed by $i\in \mathbf{B}_0$; and
\item
a family $\theta:=(\theta_{b,a})_{(b,a)}$ of 2-morphisms
$\theta_{b,a} \colon X(ba) \Rightarrow X(b)X(a)$
in $\mathbf{C}$ indexed by $(b,a) \in \operatorname{com}(\mathbf{B}):=
\{(b,a)\in \mathbf{B}_1 \times \mathbf{B}_1 \mid ba \text{ is defined}\}$
\end{itemize}
satisfying the axioms:
\begin{enumerate}
\item[(i)]
$(X_1, X_2) \colon \mathbf{B}(i,j) \to \mathbf{C}(X_0i,X_0j)$ is a functor
for all $i, j \in \mathbf{B}_0$;
\item[(ii)]
For each $a\colon i \to j$ in $\mathbf{B}_1$ the following are commutative:
$$
\vcenter{
\xymatrix{
X(a\id_i) \ar@{=>}[r]^(.43){\theta_{a,\id_i}} \ar@{=}[rd]& X(a)X(\id_i)
\ar@{=>}[d]^{X(a)\eta_i}\\
& X(a)\id_{X(i)}
}}
\qquad\text{and}\qquad
\vcenter{
\xymatrix{
X(\id_j a) \ar@{=>}[r]^(.43){\theta_{\id_j,a}} \ar@{=}[rd]& X(\id_j)X(a)
\ar@{=>}[d]^{\eta_jX(a)}\\
& \id_{X(j)}X(a)
}}\quad;
$$
\item[(iii)]
For each $i \ya{a}j \ya{b} k \ya{c} l$ in $\mathbf{B}_1$ the following is commutative:
$$
\vcenter{
\xymatrix@C=3em{
X(cba) \ar@{=>}[r]^(.43){\theta_{c,ba}} \ar@{=>}[d]_{\theta_{cb,a}}& X(c)X(ba)
\ar@{=>}[d]^{X(c)\theta_{b,a}}\\
X(cb)X(a) \ar@{=>}[r]_(.45){\theta_{c,b}X(a)}& X(c)X(b)X(a)
}}\quad;\text{ and}
$$
\item[(iv)]
For each $a, a' \colon i \to j$ and $b, b' \colon j \to k$ in $\mathbf{B}_1$
and each $\alpha \colon a \to a'$, $\beta \colon b \to b'$ in $\mathbf{B}_2$
the following is commutative:
$$
\xymatrix{
X(ba) & X(b)X(a)\\
X(b'a') & X(b')X(a').
\ar@{=>}^{\theta_{b,a}}"1,1";"1,2"
\ar@{=>}^{\theta_{b',a'}}"2,1";"2,2"
\ar@{=>}_{X(\beta*\alpha)}"1,1";"2,1"
\ar@{=>}^{X(\beta)*X(\alpha)}"1,2";"2,2"
}
$$
\end{enumerate}
(2) A {\em lax functor} from $\mathbf{B}$ to $\mathbf{C}$ is a colax functor
from $\mathbf{B}$ to $\mathbf{C}^{\text{co}}$ (see Notation \ref{ntn:co-op}).
(3) A {\em pseudofunctor} from $\mathbf{B}$ to $\mathbf{C}$ is a colax functor with
all $\eta_i$ and $\theta_{b,a}$ 2-isomorphisms.
(4) We define a 2-category $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})$ having all the colax functors
$\mathbf{B} \to \mathbf{C}$ as the objects as follows.
{\bf 1-morphisms.}
Let $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$
be colax functors from $\mathbf{B}$ to $\mathbf{C}$.
A {\em $1$-morphism} (called a {\em left transformation}) from $X$ to $X'$
is a pair $(F, \psi)$ of data
\begin{itemize}
\item
a family $F:=(F(i))_{i\in \mathbf{B}_0}$ of 1-morphisms $F(i)\colon X(i) \to X'(i)$
in $\mathbf{C}$; and
\item
a family $\psi:=(\psi(a))_{a\in \mathbf{B}_1}$ of 2-morphisms
$\psi(a)\colon X'(a)F(i) \Rightarrow F(j)X(a)$
$$\vcenter{\xymatrix{
X(i) & X'(i)\\
X(j) & X'(j)
\ar_{X(a)} "1,1"; "2,1"
\ar^{X'(a)} "1,2"; "2,2"
\ar^{F(i)} "1,1"; "1,2"
\ar_{F(j)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,2"; "2,1"
}}
$$
in $\mathbf{C}$ indexed by $a\colon i \to j \text{ in }\mathbf{B}_1$
with the property that
\item[(0)]
for each $\alpha \colon a \Rightarrow b$ in $\mathbf{B}(i,j)$ the following
is commutative:
\begin{equation}\label{eq:naturality-psi}
\vcenter{\xymatrix@C=10ex{
X'(a)F(i) & X'(b)F(i)\\
F(j)X(a) & F(j)X(b),
\ar@{=>}^{X'(\alpha)F(i)}"1,1";"1,2"
\ar@{=>}_{F(j)X(\alpha)}"2,1";"2,2"
\ar@{=>}_{\psi(a)}"1,1";"2,1"
\ar@{=>}^{\psi(b)}"1,2";"2,2"
}}
\end{equation}
thus $\psi$ amounts to a family of natural transformations of functors
$$
\vcenter{\xymatrix@C=15ex{
\mathbf{B}(i,j) & \mathbf{C}(X'(i), X'(j))\\
\mathbf{C}(X(i), X(j)) & \mathbf{C}(X(i), X'(j))
\ar"1,1";"1,2"^{X'}
\ar"1,1";"2,1"_{X}
\ar"1,2";"2,2"^{\mathbf{C}(F(i), X'(j))}
\ar"2,1";"2,2"_{\mathbf{C}(X(i), F(j))}
\ar@{=>}"1,2";"2,1"_{\psi_{ij}}
}}\quad(i,j\in \mathbf{B}_0)
$$
\end{itemize}
satisfying the axioms
\begin{enumerate}
\item[(a)]
For each $i \in \mathbf{B}_0$ the following is commutative:
$$
\vcenter{
\xymatrix{
X'(\id_i)F(i) & F(i)X(\id_i)\\
\id_{X'(i)}F(i) & F(i)\id_{X(i)}
\ar@{=>}^{\psi(\id_i)} "1,1"; "1,2"
\ar@{=} "2,1"; "2,2"
\ar@{=>}_{\eta'_iF(i)} "1,1"; "2,1"
\ar@{=>}^{F(i)\eta_i} "1,2"; "2,2"
}}\quad;\text{ and}
$$
\item[(b)]
For each $i \ya{a} j \ya{b} k$ in $\mathbf{B}_1$ the following is commutative:
$$
\xymatrix@C=4pc{
X'(ba)F(i) & X'(b)X'(a)F(i) & X'(b)F(j)X(a)\\
F(k)X(ba) & & F(k)X(b)X(a).
\ar@{=>}^{\theta'_{b,a}F(i)} "1,1"; "1,2"
\ar@{=>}^{X'(b)\psi(a)} "1,2"; "1,3"
\ar@{=>}_{F(k)\,\theta_{b,a}} "2,1"; "2,3"
\ar@{=>}_{\psi(ba)} "1,1"; "2,1"
\ar@{=>}^{\psi(b)X(a)} "1,3"; "2,3"
}
$$
\end{enumerate}
{\bf 2-morphisms.}
Let $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$
be colax functors from $\mathbf{B}$ to $\mathbf{C}$, and
$(F, \psi)$, $(F', \psi')$ 1-morphisms from $X$ to $X'$.
A {\em $2$-morphism} from $(F, \psi)$ to $(F', \psi')$ is a
family $\zeta= (\zeta(i))_{i\in \mathbf{B}_0}$ of 2-morphisms
$\zeta(i)\colon F(i) \Rightarrow F'(i)$ in $\mathbf{C}$
indexed by $i \in \mathbf{B}_0$
such that the following is commutative for all $a\colon i \to j$ in $\mathbf{B}_1$:
$$
\xymatrix@C=4pc{
X'(a)F(i) & X'(a)F'(i)\\
F(j)X(a) & F'(j)X(a).
\ar@{=>}^{X'(a)\zeta(i)} "1,1"; "1,2"
\ar@{=>}^{\zeta(j)X(a)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,1"; "2,1"
\ar@{=>}^{\psi'(a)} "1,2"; "2,2"
}
$$
{\bf Composition of 1-morphisms.}
Let $X = (X, \eta, \theta)$, $X'= (X', \eta', \theta')$ and $X''= (X'', \eta'', \theta'')$
be colax functors from $\mathbf{B}$ to $\mathbf{C}$, and
let $(F, \psi)\colon X \to X'$, $(F', \psi')\colon X' \to X''$
be 1-morphisms.
Then the composite $(F', \psi')(F, \psi)$ of $(F, \psi)$ and
$(F', \psi')$ is a 1-morphism from $X$ to $X''$ defined by
$$
(F', \psi')(F, \psi):= (F'F, \psi'\circ\psi),
$$
where $F'F:=(F'(i)F(i))_{i\in \mathbf{B}_0}$ and for each $a\colon i \to j$ in $\mathbf{B}$,
$
(\psi'\circ\psi)(a):= F'(j)\psi(a)\circ \psi'(a)F(i)
$
is the pasting of the diagram
$$
\xymatrix@C=4pc{
X(i) & X'(i) & X''(i)\\
X(j) & X'(j) & X''(j).
\ar_{X(a)} "1,1"; "2,1"
\ar_{X'(a)} "1,2"; "2,2"
\ar^{F(i)} "1,1"; "1,2"
\ar_{F(j)} "2,1"; "2,2"
\ar@{=>}_{\psi(a)} "1,2"; "2,1"
\ar^{X''(a)} "1,3"; "2,3"
\ar^{F'(i)} "1,2"; "1,3"
\ar_{F'(j)} "2,2"; "2,3"
\ar@{=>}_{\psi'(a)} "1,3"; "2,2"
}
$$
\end{dfn}
\begin{rmk}
(1) Note that a (strict) 2-functor from $\mathbf{B}$ to $\mathbf{C}$ is a pseudofunctor with
all $\eta_i$ and $\theta_{b,a}$ identities.
(2) By regarding the category $I$ as a 2-category with all 2-morphisms identities,
the definition (1) of colax functors above coincides
with Definition \ref{dfn:colax-fun}.
(3) When $\mathbf{B} = I$, the definition (4) of $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})$
above coincides with that of $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$ given before.
\end{rmk}
\begin{exm}
\label{exm:Mod-D}
(1) Since $\Bbbk\text{-}\mathbf{Cat}$ is a 2-category,
$
\operatorname{Mod}':= \Bbbk\text{-}\mathbf{Cat}((\operatorname{-})\op, \operatorname{Mod} \Bbbk) \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}^{\mathrm{coop}}
$
is a 2-functor, which we can regard as a contravariant lax functor
$$
\operatorname{Mod}':= \Bbbk\text{-}\mathbf{Cat}((\operatorname{-})\op, \operatorname{Mod} \Bbbk) \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}.
$$
(2) We define a pseudofunctor $\operatorname{Mod} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}$ as follows.
\begin{itemize}
\item For each ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$ we set $\operatorname{Mod} {\mathcal C}:= \operatorname{Mod}' {\mathcal C}$.
\item For each $F \colon {\mathcal C} \to {\mathcal C}'$ in $\Bbbk\text{-}\mathbf{Cat}_1$ we set
$\operatorname{Mod} F:= \operatorname{-}\otimes_{{\mathcal C}}\overline{F} \colon \operatorname{Mod}{\mathcal C} \to \operatorname{Mod}{\mathcal C}'$, where $\overline{F}$ is the ${\mathcal C}$-${\mathcal C}'$-bimodule
defined by $\overline{F}(x, y):= {\mathcal C}'(y, F(x))$ for all $x \in {\mathcal C}_0$, $y \in {\mathcal C}'_0$,
which we sometimes write as $\overline{F}:= {\mathcal C}'(?, F(\operatorname{-}))$.
\item For each $\alpha \colon F \Rightarrow G$ in $\Bbbk\text{-}\mathbf{Cat}_2$ (with $F, G \colon {\mathcal C} \to {\mathcal C}'$ in $\Bbbk\text{-}\mathbf{Cat}_1$)
we define $\operatorname{Mod} \alpha \colon \operatorname{Mod} F \Rightarrow \operatorname{Mod} G$ by setting
$(\operatorname{Mod} \alpha)x := {\mathcal C}'(?, \alpha x) \colon {\mathcal C}'(?, Fx) \Rightarrow {\mathcal C}'(?, Gx)$ for all $x \in {\mathcal C}_0$.
\item For each ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$ we define
$\eta_{\mathcal C} \colon \operatorname{Mod} \id_{\mathcal C} \Rightarrow \id_{\operatorname{Mod}{\mathcal C}}$
by setting $\eta_{\mathcal C} M \colon M \otimes_{\mathcal C} {\mathcal C}(?,\operatorname{-}) \to M$
to be the canonical isomorphism for all $M \in \operatorname{Mod}{\mathcal C}$.
\item For each pair of functors ${\mathcal C} \ya{F} {\mathcal C}' \ya{G} {\mathcal C}''$ in $\Bbbk\text{-}\mathbf{Cat}$
we define $\theta_{G, F} \colon \operatorname{Mod} GF \Rightarrow \operatorname{Mod} G \circ \operatorname{Mod} F$ as the inverse
of the canonical isomorphism
$$
\operatorname{-}\otimes_{\mathcal C} {\mathcal C}'(?,F(\operatorname{-}))\otimes_{{\mathcal C}'} {\mathcal C}''(?, G(\operatorname{-}))
\Rightarrow \operatorname{-}\otimes_{\mathcal C} {\mathcal C}''(?, GF(\operatorname{-})).
$$
\end{itemize}
It is straightforward to check that this defines a pseudofunctor.
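To make the bimodule $\overline{F}$ concrete, consider the one-object case (a sketch we add for illustration; it is not taken from the text): if ${\mathcal C}$ and ${\mathcal C}'$ have single objects, i.e.\ are $\Bbbk$-algebras $A$ and $B$, then a $\Bbbk$-functor $F$ is an algebra homomorphism $F \colon A \to B$, the bimodule $\overline{F} = {\mathcal C}'(?, F(\operatorname{-}))$ is $B$ itself with left $A$-action through $F$, and $\operatorname{Mod} F$ is extension of scalars:

```latex
\[
\overline{F} \;=\; {}_{A}B_{B}, \quad a \cdot b \cdot b' := F(a)\,b\,b',
\qquad
\operatorname{Mod} F \;=\; \operatorname{-}\otimes_{A} B \colon \operatorname{Mod} A \to \operatorname{Mod} B .
\]
```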
(3) Denote by $\Bbbk\text{-}\mathbf{ModCat}$ the
2-subcategory of $\Bbbk\text{-}\mathbf{Ab}$ consisting of the following:
\begin{itemize}
\item objects: $\operatorname{Mod} {\mathcal C}$ with ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$,
\item 1-morphisms: functors between objects having exact right adjoints, and
\item 2-morphisms: all natural transformations between 1-morphisms.
\end{itemize}
Then note that the pseudofunctor $\operatorname{Mod} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab}$ defined above can be seen
as a pseudofunctor $\Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{ModCat}$.
For each $\operatorname{Mod} {\mathcal C}$ with ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$ we denote by ${\mathcal K}_p(\operatorname{Mod}{\mathcal C})$
the full subcategory of the homotopy category ${\mathcal K}(\operatorname{Mod}{\mathcal C})$ of $\operatorname{Mod}{\mathcal C}$ consisting of
{\em homotopically projective} objects $M$, i.e., objects $M$ such that ${\mathcal K}(\operatorname{Mod}{\mathcal C})(M, A) =0$
for all acyclic objects $A$.
Recall that there is a natural embedding
$\mathbf{j}_{{\mathcal C}} \colon {\mathcal K}_p(\operatorname{Mod}{\mathcal C}) \to {\mathcal D}(\operatorname{Mod}{\mathcal C})$
having a left adjoint $\mathbf{p}_{{\mathcal C}}$ such that there exists a quasi-isomorphism
$\eta_{{\mathcal C}}M \colon \mathbf{j}_{{\mathcal C}}\mathbf{p}_{{\mathcal C}}M \to M$ for each $M \in {\mathcal D}(\operatorname{Mod} {\mathcal C})$
and that $\mathbf{p}_{{\mathcal C}}\mathbf{j}_{{\mathcal C}} = \id_{{\mathcal K}_p(\operatorname{Mod}{\mathcal C})}$.
Then we can define a pseudofunctor ${\mathcal D} \colon \Bbbk\text{-}\mathbf{ModCat} \to \Bbbk\text{-}\mathbf{Tri}$ as follows.
\begin{itemize}
\item For each $\operatorname{Mod}{\mathcal C}$ in $\Bbbk\text{-}\mathbf{ModCat}_0$ with ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}$ we set
${\mathcal D}(\operatorname{Mod}{\mathcal C})$ to be the derived category of $\operatorname{Mod}{\mathcal C}$.
\item For each $F \colon \operatorname{Mod}{\mathcal C} \to \operatorname{Mod}{\mathcal C}'$ in $\Bbbk\text{-}\mathbf{ModCat}_1$,
$F$ naturally induces a functor ${\mathcal K} F \colon {\mathcal K}(\operatorname{Mod}{\mathcal C}) \to {\mathcal K}(\operatorname{Mod}{\mathcal C}')$,
which restricts to a functor ${\mathcal K}_p F \colon {\mathcal K}_p(\operatorname{Mod}{\mathcal C}) \to {\mathcal K}_p(\operatorname{Mod}{\mathcal C}')$
because $F$ has an exact right adjoint.
Then we set ${\mathcal D} F$ to be the left derived functor
$\mathbf{L} F\colon {\mathcal D}(\operatorname{Mod}{\mathcal C}) \to {\mathcal D}(\operatorname{Mod}{\mathcal C}')$
of $F$, which is defined as the composite $\mathbf{L} F:= \mathbf{j}_{{\mathcal C}'} ({\mathcal K}_p F)\mathbf{p}_{{\mathcal C}}$.
\item For each $\alpha \colon F \Rightarrow F'$ in $\Bbbk\text{-}\mathbf{ModCat}_2$
with $F, F' \colon \operatorname{Mod} {\mathcal C} \to \operatorname{Mod} {\mathcal C}'$ in $\Bbbk\text{-}\mathbf{ModCat}_1$,
$\alpha$ naturally induces a natural transformation ${\mathcal K}_p \alpha \colon {\mathcal K}_p F \Rightarrow {\mathcal K}_p F'$.
Then we define ${\mathcal D} \alpha:= \mathbf{j}_{{\mathcal C}'} ({\mathcal K}_p \alpha) \mathbf{p}_{{\mathcal C}}$.
\item We define $\eta_{\operatorname{Mod} {\mathcal C}} \colon {\mathcal D}(\id_{\operatorname{Mod} {\mathcal C}}) (=\mathbf{j}_{{\mathcal C}}\mathbf{p}_{{\mathcal C}}) \Rightarrow \id_{{\mathcal D}(\operatorname{Mod} {\mathcal C})}$
by $\eta_{\operatorname{Mod}{\mathcal C}}:= (\eta_{{\mathcal C}}M)_{M \in {\mathcal D}(\operatorname{Mod}{\mathcal C})}$.
\item Note that for each $\operatorname{Mod} {\mathcal C} \ya{F} \operatorname{Mod} {\mathcal C}' \ya{F'} \operatorname{Mod} {\mathcal C}''$
in $\Bbbk\text{-}\mathbf{ModCat}_1$
we have $\mathbf{L}(F'\circ F) = \mathbf{L} F' \circ \mathbf{L} F$
because $\mathbf{p}_{{\mathcal C}'}\mathbf{j}_{{\mathcal C}'}= \id_{{\mathcal K}_p(\operatorname{Mod} {\mathcal C}')}$.
We define $\theta_{F',F} \colon \mathbf{L}(F'\circ F) \Rightarrow \mathbf{L} F' \circ \mathbf{L} F$ as the identity
$\id_{\mathbf{L}(F'\circ F)}$.
\end{itemize}
It is straightforward to check that this defines a pseudofunctor.
\end{exm}
\begin{exm}
\label{exm:prj-Kb}
(1) We define a pseudofunctor $\operatorname{prj} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{add}$
as the subpseudofunctor of $\operatorname{Mod} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Ab} \hookrightarrow \Bbbk\text{-}\mathbf{add}$
by setting $\operatorname{prj} {\mathcal C}$ to be the full subcategory of $\operatorname{Mod} {\mathcal C}$
consisting of finitely generated projective ${\mathcal C}$-modules
for all ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$,
where $\Bbbk\text{-}\mathbf{add}$ is the full 2-subcategory of $\Bbbk\text{-}\mathbf{Cat}$ consisting of additive $\Bbbk$-categories.
Then for each $F \colon {\mathcal C} \to {\mathcal C}'$ in $\Bbbk\text{-}\mathbf{Cat}_1$
and each $x \in {\mathcal C}_0$ we have
\begin{equation}\label{prj-representables}
(\operatorname{prj} F)({\mathcal C}(\operatorname{-}, x)) = {\mathcal C}(\operatorname{-}, x) \otimes_{{\mathcal C}}\overline{F} \cong {\mathcal C}'(\operatorname{-}, F(x)).
\end{equation}
Note that we can define two 2-functors
$\oplus \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{add}$ and $\operatorname{sic} \colon \Bbbk\text{-}\mathbf{add} \to \Bbbk\text{-}\mathbf{add}$
by forming formal additive hulls (see e.g., \cite[Subsection 4.1]{Asa99})
and by taking split idempotent completions (see e.g., \cite[Definition 3.1]{Asa11}),
respectively.
Then the Yoneda embeddings $Y_{{\mathcal C}}\colon {\mathcal C} \to \operatorname{prj}{\mathcal C}$,
$x \mapsto {\mathcal C}(\operatorname{-}, x)$ (${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$) induce
a natural 2-isomorphism $Y \colon \operatorname{sic}\circ\oplus \Rightarrow \operatorname{prj}$:
$$
\xymatrix{
\Bbbk\text{-}\mathbf{Cat} && \Bbbk\text{-}\mathbf{add}.\\
&\Bbbk\text{-}\mathbf{add}
\ar^{\operatorname{prj}}"1,1";"1,3"
\ar_{\oplus}"1,1";"2,2"
\ar_{\operatorname{sic}}"2,2";"1,3"
\ar@{=>}_Y^{\cong}"2,2";"1,2"
}
$$
(2) A 2-functor ${\mathcal K}^{\text{\rm b}} \colon \Bbbk\text{-}\mathbf{add} \to \Bbbk\text{-}\mathbf{Tri}$ is canonically defined by setting
${\mathcal K}^{\text{\rm b}}({\mathcal C})$ to be the homotopy category of bounded complexes in ${\mathcal C}$
for all ${\mathcal C} \in \Bbbk\text{-}\mathbf{add}$.
Then the composite pseudofunctor ${\mathcal K}^{\text{\rm b}}\circ\operatorname{prj} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Tri}$
turns out to be a subpseudofunctor of ${\mathcal D}\circ\operatorname{Mod} \colon \Bbbk\text{-}\mathbf{Cat} \to \Bbbk\text{-}\mathbf{Tri}$.
\end{exm}
The following is a useful tool to define new colax functors from an old one by composing with
pseudofunctors.
The proof will be given in the last section.
\begin{thm}\label{comp-pseudofun}
Let $\mathbf{B}, \mathbf{C}$ and $\mathbf{D}$ be $2$-categories and $V \colon \mathbf{C} \to \mathbf{D}$ a
pseudofunctor.
Then the obvious correspondence $($see subsection \ref{dfn-corr} for details$)$
$$
\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V) \colon \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}) \to \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})
$$
turns out to be a pseudofunctor.
\end{thm}
\begin{dfn}
Let $X = (X, \eta, \theta) \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
(1) We define the {\em module colax functor}
$\operatorname{Mod} X = (\operatorname{Mod} X, \operatorname{Mod} \eta, \operatorname{Mod} \theta) \colon I \to \Bbbk\text{-}\mathbf{ModCat}$ of $X$
as the composite $\operatorname{Mod} X:= \operatorname{Mod} \circ X = \overleftarrow{\operatorname{Colax}}(I, \operatorname{Mod})(X) \colon I \ya{X} \Bbbk\text{-}\mathbf{Cat} \ya{\operatorname{Mod}} \Bbbk\text{-}\mathbf{ModCat}$.
By applying Theorem \ref{comp-pseudofun} to $\mathbf{B}:= I$, $\mathbf{C}:= \Bbbk\text{-}\mathbf{Cat}$, $\mathbf{D}:= \Bbbk\text{-}\mathbf{ModCat}$
and $V:= \operatorname{Mod}$ (Example \ref{exm:Mod-D}(2)) we see that $\operatorname{Mod} X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{ModCat})$.
Then we have
\begin{itemize}
\item
for each $i \in I_0$,
$(\operatorname{Mod} X)(i) = \operatorname{Mod} (X(i))$; and
\item
for each $a \colon i \to j$ in $I$ the functor
$(\operatorname{Mod} X)(a) \colon (\operatorname{Mod} X)(i) \to (\operatorname{Mod} X)(j)$
is given by $(\operatorname{Mod} X)(a) = \text{-}\otimes_{X(i)}\overline{X(a)}$,
where $\overline{X(a)}
$
is an $X(i)$-$X(j)$-bimodule defined by
$$
\overline{X(a)}(x, y):= X(j)(y, X(a)(x))
$$
for all $x \in X(i)_0$ and $y \in X(j)_0$.
\end{itemize}
(2) By Theorem \ref{comp-pseudofun} and Example \ref{exm:Mod-D} we can define a colax functor
${\mathcal D}(\operatorname{Mod} X) \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})$ as the composite ${\mathcal D}(\operatorname{Mod} X):= {\mathcal D} \circ \operatorname{Mod} X$,
which we call the {\em derived module colax functor} of $X$.
Then for each $a\colon i \to j$ in $I$,
${\mathcal D}(\operatorname{Mod} X)(i) \xrightarrow{{\mathcal D}(\operatorname{Mod} X)(a)} {\mathcal D}(\operatorname{Mod} X)(j)$
is equal to
$${\mathcal D}(\operatorname{Mod} X(i)) \xrightarrow{\text{-}\overset{\mathbf{L}}{\otimes}_{X(i)}\overline{X(a)}}{\mathcal D}(\operatorname{Mod} X(j)).$$
(3) By Theorem \ref{comp-pseudofun} and Example \ref{exm:prj-Kb}
we can define a pseudofunctor
$$
\overleftarrow{\operatorname{Colax}}(I, {\mathcal K}^{\text{\rm b}}\circ\operatorname{prj}) \colon \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat}) \to \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})
$$
sending each $X \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$ to ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$.
By the remark in Example \ref{exm:prj-Kb}(2) ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$ is
a colax subfunctor of ${\mathcal D}(\operatorname{Mod} X)$.
\end{dfn}
\begin{rmk}
Let ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$.
Then it is obvious by definitions that
$$
\Delta({\mathcal K}^{\text{\rm b}}(\operatorname{prj} {\mathcal C})) = {\mathcal K}^{\text{\rm b}}(\operatorname{prj} \Delta({\mathcal C})).
$$
\end{rmk}
\begin{prp}
\label{precovering-preserved}
The pseudofunctor ${\mathcal K}^{\text{\rm b}}\circ\operatorname{prj}$ preserves $I$-precoverings, that is,
if $(F, \psi) \colon X \to \Delta({\mathcal C})$ is an $I$-precovering in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$
with ${\mathcal C} \in \Bbbk\text{-}\mathbf{Cat}_0$, then
so is ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} (F, \psi)) \colon {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X) \to \Delta({\mathcal K}^{\text{\rm b}}(\operatorname{prj} {\mathcal C}))$ in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})$.
\end{prp}
\begin{proof}
It is straightforward to verify that the 2-functors $\oplus$, $\operatorname{sic}$ and ${\mathcal K}^{\text{\rm b}}$
defined in Example \ref{exm:prj-Kb} preserve $I$-precoverings.
Then the assertion follows from the natural 2-isomorphism $Y \colon \operatorname{sic}\circ\oplus \Rightarrow \operatorname{prj}$.
\end{proof}
\section{Derived equivalences of colax functors}
In this section we recall necessary terminologies and the main theorem
in our previous paper \cite{Asa-a}.
First we cite the following. See \cite{Asa-a} for the proof.
\begin{lem}
\label{colax-eq}
Let $\mathbf{C}$ be a $2$-category and $(F, \psi) \colon X \to X'$
a $1$-morphism in the $2$-category $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$.
Then $(F, \psi)$ is an equivalence in $\overleftarrow{\operatorname{Colax}}(I, \mathbf{C})$
if and only if
\begin{enumerate}
\item
For each $i \in I_0$, $F(i)$
is an equivalence in $\mathbf{C}$; and
\item
For each $a \in I_1$, $\psi(a)$ is a $2$-isomorphism in $\mathbf{C}$
$($namely, $(F,\psi)$ is $I$-equivariant$)$.
\end{enumerate}
\end{lem}
\begin{dfn}
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then $X$ and $X'$ are said to be {\em derived equivalent} if
${\mathcal D}(\operatorname{Mod} X)$ and ${\mathcal D}(\operatorname{Mod} X')$ are equivalent
in the 2-category $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})$.
\end{dfn}
By Lemma \ref{colax-eq} we obtain the following.
\begin{prp}
\label{der-eq-criterion}
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Then $X$ and $X'$ are derived equivalent if and only if
there exists a $1$-morphism
$(F, \psi) \colon {\mathcal D}(\operatorname{Mod} X) \to {\mathcal D}(\operatorname{Mod} X')$ in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})$ such that
\begin{enumerate}
\item
For each $i \in I_0$, $F(i)$
is a triangle equivalence; and
\item
For each $a \in I_1$, $\psi(a)$ is a natural isomorphism
$($i.e., $(F,\psi)$ is $I$-equivariant$)$.
\end{enumerate}
\end{prp}
A $\Bbbk$-category ${\mathcal A}$ is called $\Bbbk$-{\em projective}
(resp.\ $\Bbbk$-{\em flat}) if
${\mathcal A}(x,y)$ are projective (resp.\ flat) $\Bbbk$-modules for all $x,y \in {\mathcal A}_0$.
\begin{dfn}\label{dfn:tilting-colax}
Let $X\colon I \to \Bbbk\text{-}\mathbf{Cat}$ be a colax functor.
\begin{enumerate}
\item
$X$ is called $\Bbbk$-{\em projective} (resp.\ $\Bbbk$-{\em flat})
if $X(i)$ are $\Bbbk$-projective (resp.\ $\Bbbk$-flat) for all $i \in I_0$.
\item
A colax subfunctor ${\mathcal T}$ of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$ is called
{\em tilting} if for each $i \in I_0$,
${\mathcal T}(i)$ is a tilting subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))$, namely,
\begin{itemize}
\item
${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))(U, V[n]) = 0$ for all $U, V \in {\mathcal T}(i)_0$
and $0 \ne n \in {\mathbb Z}$; and
\item
the smallest thick subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))$
containing ${\mathcal T}(i)$ is equal to ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))$.
\end{itemize}
\item A tilting colax subfunctor ${\mathcal T}$ of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$
with an $I$-equivariant inclusion
$(\sigma, \rho)\colon$ ${\mathcal T} \hookrightarrow {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$
is called a {\em tilting colax functor} for $X$.
\end{enumerate}
\end{dfn}
The following was the main theorem in \cite{Asa-a}; it generalizes to our setting the Morita-type theorem characterizing derived equivalences of categories due to Rickard \cite{Rick} and Keller \cite{Ke1}.
\begin{thm}
\label{mainthm1}
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Consider the following conditions.
\begin{enumerate}
\item
$X$ and $X'$ are derived equivalent.
\item
${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$ and ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X')$ are equivalent
in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Tri})$.
\item
There exists a tilting colax functor ${\mathcal T}$ for $X$
such that ${\mathcal T}$ and $X'$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
\end{enumerate}
Then
\begin{enumerate}
\item[(a)]
$(1)$ implies $(2)$.
\item[(b)]
$(2)$ implies $(3)$.
\item[(c)]
If $X'$ is $\Bbbk$-projective, then $(3)$ implies $(1)$.
\end{enumerate}
\end{thm}
\section{Derived equivalences of Grothendieck constructions}
First we recall the statement of \cite[Corollary 9.2]{Ke1} in the $\Bbbk$-category case.
\begin{thm}[Keller]
Let ${\mathcal A}$ and ${\mathcal B}$ be $\Bbbk$-categories and assume that
${\mathcal A}$ is $\Bbbk$-flat.
Then the following are equivalent.
\begin{enumerate}
\item
${\mathcal A}$ and ${\mathcal B}$ are derived equivalent.
\item
${\mathcal B}$ is equivalent to a tilting subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} {\mathcal A})$.
\end{enumerate}
\end{thm}
The following is our main result in this paper.
\begin{thm}
\label{mainthm2}
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Assume that $X$ is $\Bbbk$-flat and that there exists a tilting colax functor ${\mathcal T}$ for $X$
such that ${\mathcal T}$ and $X'$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$
$($the condition $(3)$ in {\em Theorem \ref{mainthm1}}$)$.
Then $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$ are derived equivalent.
\end{thm}
\begin{proof}
Note that $\operatorname{Gr}(X)$ is also $\Bbbk$-flat by definition of $\operatorname{Gr}(X)$.
Let ${\mathcal T}$ be a tilting colax subfunctor of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$
with an $I$-equivariant inclusion
$(\sigma, \rho)\colon {\mathcal T} \hookrightarrow {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$.
Put $(P, \phi):= (P_X, \phi_X)$ for short.
Let ${\mathcal T}'$ be the full subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} \operatorname{Gr}(X))$
consisting of the objects ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(U)$ with $i \in I_0$ and $U \in {\mathcal T}(i)_0$.
Then ${\mathcal T}'$ is a tilting subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj}\operatorname{Gr}(X))$.
Indeed, let $L, M \in {\mathcal T}'_0$ and $0 \ne p \in {\mathbb Z}$.
Then $L = {\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(U)$ and $M = {\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(j))(V)$
for some $i, j \in I_0$ and some $U \in {\mathcal T}(i)_0$, $V \in {\mathcal T}(j)_0$.
Since
$$
{\mathcal K}^{\text{\rm b}}(\operatorname{prj}(P, \phi)) \colon {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X) \to \Delta({\mathcal K}^{\text{\rm b}}(\operatorname{prj}\operatorname{Gr}(X)))
$$
is an $I$-precovering by Proposition \ref{precovering-preserved},
we have
$$
\begin{aligned}
{\mathcal K}^{\text{\rm b}}(\operatorname{prj}\operatorname{Gr}(X))(L, M[p]) &\cong
{\mathcal K}^{\text{\rm b}}(\operatorname{prj}\operatorname{Gr}(X))({\mathcal K}^{\text{\rm b}}(\operatorname{prj}(P,\phi))(U), {\mathcal K}^{\text{\rm b}}(\operatorname{prj}(P,\phi))(V[p]))\\
&\cong
\bigoplus_{a\in I(i,j)}{\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(j))({\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)(a)(U), V[p])\\
&\overset{\rm (a)}{\cong}
\bigoplus_{a\in I(i,j)}{\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(j))({\mathcal T}(a)U, V[p]) \overset{\rm (b)}{=} 0,
\end{aligned}
$$
where the isomorphism (a) follows from the natural isomorphism $\rho(a)$:
$$
\xymatrix{
{\mathcal T}(i) & {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))\\
{\mathcal T}(j) & {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(j))
\ar@{^{(}->}"1,1";"1,2"
\ar@{^{(}->}"2,1";"2,2"
\ar_{{\mathcal T}(a)}"1,1";"2,1"
\ar^{{\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)(a)}"1,2";"2,2"
\ar@{=>}_{\rho(a)}^{\cong}"1,2";"2,1"
\save "1,1"+<-0.9cm,0.1cm>*\txt{$U\in$} \restore
}
$$
and the equality (b) follows from the tilting condition since ${\mathcal T}(a)U, V \in {\mathcal T}(j)_0$.
Now for a triangulated category ${\mathcal U}$ and a class of objects ${\mathcal V}$ in ${\mathcal U}$
we denote by $\thick {\mathcal V}$ the smallest thick subcategory of ${\mathcal U}$ containing ${\mathcal V}$.
Then for each $i \in I_0$ and $x \in X(i)$
we have
${\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(X(i)(\operatorname{-}, x)) \cong (\operatorname{prj} P(i))(X(i)(\operatorname{-}, x)) \cong \operatorname{Gr}(X)(\operatorname{-}, P(i)(x))
= \operatorname{Gr}(X)(\operatorname{-}, {}_ix)$
by the formula \eqref{prj-representables}, and hence
$$
\begin{aligned}
\operatorname{Gr}(X)(\operatorname{-}, {}_ix) &\cong
{\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(X(i)(\operatorname{-}, x))\\
&\in {\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(\thick {\mathcal T}(i))\\
&\subseteq \thick\{{\mathcal K}^{\text{\rm b}}(\operatorname{prj} P(i))(U) \mid U \in {\mathcal T}(i)\}\\
&\subseteq \thick {\mathcal T}'.
\end{aligned}
$$
Therefore $\thick {\mathcal T}' = {\mathcal K}^{\text{\rm b}}(\operatorname{prj} \operatorname{Gr}(X))$, and hence
${\mathcal T}'$ is a tilting subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} \operatorname{Gr}(X))$, as desired.
Hence $\operatorname{Gr}(X)$ and ${\mathcal T}'$ are derived equivalent because $\operatorname{Gr}(X)$
is $\Bbbk$-flat.
Let $(F, \psi)$ be the restriction of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} (P, \phi))$ to ${\mathcal T}$.
Then $(F, \psi) \colon {\mathcal T} \to \Delta({\mathcal T}')$ is a dense functor and an $I$-precovering,
thus it is an $I$-covering, which shows that
${\mathcal T}' \simeq \operatorname{Gr}({\mathcal T})$ by Corollary \ref{covering-Gr}.
Since ${\mathcal T}$ and $X'$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$, we have
$\operatorname{Gr}({\mathcal T}) \simeq \operatorname{Gr}(X')$.
As a consequence, $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$ are derived equivalent.
\end{proof}
\begin{cor}
\label{der-eq-Gr}
Let $X, X' \in \overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
If $X$ and $X'$ are derived equivalent, then
so are $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$.
\end{cor}
\begin{proof}
Assume that $X$ and $X'$ are derived equivalent, namely that
the condition (1) in Theorem \ref{mainthm1} is satisfied.
Then the condition (3) in Theorem \ref{mainthm1} holds
by Theorem \ref{mainthm1} (a) and (b).
Hence $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$ are derived equivalent
by the theorem above.
\end{proof}
The following is easy to verify.
\begin{lem}
Let $C, C'$ be in $\Bbbk\text{-}\mathbf{Cat}$.
If $C$ and $C'$ are derived equivalent,
then so are $\Delta(C)$ and $\Delta(C')$.\quad\qed
\end{lem}
Corollary \ref{der-eq-Gr} together with the lemma above
and Example \ref{exm:Gr} gives us
a unified proof of the following fact.
\begin{thm}\label{thm:unified-proof}
Assume that $\Bbbk$ is a field and that $\Bbbk$-algebras $A$ and $A'$ are derived equivalent.
Then the following pairs are derived equivalent as well:
\begin{enumerate}
\item
path-categories $AQ$ and $A'Q$ for any quiver $Q$;
\item
incidence categories $AS$ and $A'S$ for any poset $S$; and
\item
monoid algebras $AG$ and $A'G$ for any monoid $G$.
\end{enumerate}
\qed
\end{thm}
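As an illustration of item (3), consider the following standard special case (added here as a sketch, using the \verb|exm| environment of this paper):

```latex
% Added example (sketch): the free monoid on one generator.
\begin{exm}
Let $G$ be the free monoid on one generator. Then $AG \cong A[x]$,
so item $(3)$ of Theorem \ref{thm:unified-proof} recovers the
classical fact that a derived equivalence between $A$ and $A'$
induces a derived equivalence between the polynomial algebras
$A[x]$ and $A'[x]$.
\end{exm}
```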
\begin{exm}
\label{exm:gluing}
Assume that $\Bbbk$ is a field.
Let $n$ be a natural number $\ge 3$, and $I$ the free category defined by the quiver $Q$:
$2 \ya{a_2} 3 \ya{a_3} \cdots \ya{a_{n-1}} n$.
Define functors $X, X' \colon I \to \Bbbk\text{-}\mathbf{Cat}$ as follows.
For each $i \in I_0 =\{2, \dots, n\}$ let $X(i)$ be the $\Bbbk$-category defined by the quiver
$$
\xymatrix{
1 & 2 & 3 & \cdots & i
\ar@/^/^{\alpha_1}"1,1";"1,2"
\ar@/^/^{\alpha_2}"1,2";"1,3"
\ar@/^/^{\alpha_3}"1,3";"1,4"
\ar@/^/^{\alpha_{i-1}}"1,4";"1,5"
\ar@/^/^{\beta_1}"1,2";"1,1"
\ar@/^/^{\beta_2}"1,3";"1,2"
\ar@/^/^{\beta_3}"1,4";"1,3"
\ar@/^/^{\beta_{i-1}}"1,5";"1,4"
}
$$
with relations
$\alpha_{j+1}\alpha_{j}=0$, $\beta_j\beta_{j+1}=0$, $\alpha_j\beta_j = \beta_{j+1}\alpha_{j+1}$ for all $j = 1,\dots, i-2$
and $\alpha_1\beta_1\alpha_1 = 0$, $\beta_{i-1}\alpha_{i-1}\beta_{i-1}=0$.
For each $a_i\colon i \to i+1$ in $I_1$ let $X(a_i) \colon X(i) \to X(i+1)$ be the
inclusion functor. This defines a functor $X\colon I \to \Bbbk\text{-}\mathbf{Cat}$.
For each $i \in I_0 =\{2, \dots, n\}$ let $X'(i)$ be the $\Bbbk$-category defined by the quiver
$$
\xymatrix{
1 & 2 & 3 & \cdots & i
\ar^{\gamma_1}"1,1";"1,2"
\ar^{\gamma_2}"1,2";"1,3"
\ar^{\gamma_3}"1,3";"1,4"
\ar^{\gamma_{i-1}}"1,4";"1,5"
\ar@/^15pt/^{\gamma_i}"1,5";"1,1"
}
$$
with relations
$\gamma_{j+i}\cdots\gamma_{j+1}\gamma_{j} =0$ for all $j \in {\mathbb Z}/i{\mathbb Z}$.
For each $a_i\colon i \to i+1$ in $I_1$ let $X'(a_i) \colon X'(i) \to X'(i+1)$ be the
functor defined by the correspondence $1 \mapsto 1$, $j \mapsto j+1$ and $\gamma_1 \mapsto \gamma_2\gamma_1$, $\gamma_j \mapsto \gamma_{j+1}$ for all $j=2, \dots, i$.
This defines a functor $X'\colon I \to \Bbbk\text{-}\mathbf{Cat}$.
As is explained in \cite{Asa97} we have a tilting spectroid ${\mathcal T}(i)$ for $X(i)$
that is a full subcategory of ${\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))$ consisting of the following $i$ objects
$$
\begin{aligned}
T(i)_1&:= (\underline{P_1}),\\
T(i)_2&:= (\underline{P_2}\ya{P(\alpha_2)}P_3\ya{P(\alpha_3)}\cdots \ya{P(\alpha_{i-1})}P_i),\\
T(i)_3&:= (\underline{P_2}\ya{P(\alpha_2)}P_3\ya{P(\alpha_3)}\cdots \ya{P(\alpha_{i-2})}P_{i-1}),\\
&\vdots\\
T(i)_i&:=(\underline{P_2}),
\end{aligned}
$$
where $P_j:= X(i)(\operatorname{-}, j) \in \operatorname{prj} X(i)$ for all $j \in X(i)_0$, $P(\alpha):= X(i)(\operatorname{-}, \alpha)$
for all $\alpha \in X(i)_1$ and the underline indicates the place of degree zero.
Again by \cite{Asa97}, ${\mathcal T}(i)$ is presented by the same quiver with relations
as $X'(i)$ and we have an isomorphism
$F(i)\colon X'(i) \to {\mathcal T}(i)$ sending $j$ to $T(i)_j$ for all $j=1,\dots, i$ and
$\gamma_j$ to a morphism $\delta(i)_j\colon T(i)_j \to T(i)_{j+1}$ for all $j \in {\mathbb Z}/i{\mathbb Z}$,
where $\delta(i)_1:=(\underline{P(\alpha_1)})$, $\delta(i)_j:=(\underline{\id_{P_2}}, \dots, \id_{P_{i-j+1}},0)$
for all $j=2,\dots,i-1$ and $\delta(i)_{i}:=(\underline{P(\beta_1)})$.
Thus ${\mathcal T}(i)$ gives a derived equivalence between $X(i)$ and $X'(i)$.
For each $a_i \colon i \to i+1$ in $I_1$ define a functor ${\mathcal T}(a_i) \colon {\mathcal T}(i) \to {\mathcal T}(i+1)$
by the correspondence $T(i)_1 \mapsto T(i+1)_1$,
$T(i)_j \mapsto T(i+1)_{j+1}$ and $\delta(i)_1 \mapsto \delta(i+1)_2\delta(i+1)_1$, $\delta(i)_j \mapsto \delta(i+1)_{j+1}$ for all $j =2,\dots, i$.
This defines a functor ${\mathcal T} \colon I \to \Bbbk\text{-}\mathbf{Cat}$.
Then we have a strict commutative diagram
$$
\xymatrix{
X'(i) & {\mathcal T}(i)\\
X'(i+1) & {\mathcal T}(i+1)
\ar^{F(i)}"1,1";"1,2"
\ar_{F(i+1)}"2,1";"2,2"
\ar_{X'(a_i)}"1,1";"2,1"
\ar^{{\mathcal T}(a_i)}"1,2";"2,2"
}
$$
in $\Bbbk\text{-}\mathbf{Cat}$ for all $i \in I_0$, which shows that
$X'$ and ${\mathcal T}$ are equivalent in $\overleftarrow{\operatorname{Colax}}(I, \Bbbk\text{-}\mathbf{Cat})$.
Finally by definition of ${\mathcal T}(a_i)$'s it is easy to see that we have an $I$-equivariant inclusion
$(\sigma, \rho)\colon {\mathcal T} \hookrightarrow {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X)$:
$$
\xymatrix{
{\mathcal T}(i) & {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i))\\
{\mathcal T}(i+1) & {\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(i+1)).
\ar@{^{(}->}^-{\sigma(i)}"1,1";"1,2"
\ar@{^{(}->}_-{\sigma(i+1)}"2,1";"2,2"
\ar_{{\mathcal T}(a_i)}"1,1";"2,1"
\ar^{{\mathcal K}^{\text{\rm b}}(\operatorname{prj} X(a_i))}"1,2";"2,2"
\ar@{=>}_{\rho(a_i)}^{\cong}"1,2";"2,1"
}
$$
Hence by Theorem \ref{mainthm2} we can glue the derived equivalences between
the $X(i)$ and the $X'(i)$ together to obtain a derived equivalence
between $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$.
For example when $n = 5$, these are presented by the following quivers
$$
\operatorname{Gr}(X)=
\vcenter{
\xymatrix{
1 & 2\\
1 & 2 & 3\\
1 & 2 & 3 & 4\\
1 & 2 & 3 & 4 & 5
\ar@/^/^{\alpha_1}"1,1";"1,2"
\ar@/^/^{\beta_1}"1,2";"1,1"
\ar@/^/^{\alpha_1}"2,1";"2,2"
\ar@/^/^{\alpha_2}"2,2";"2,3"
\ar@/^/^{\beta_1}"2,2";"2,1"
\ar@/^/^{\beta_2}"2,3";"2,2"
\ar@/^/^{\alpha_1}"3,1";"3,2"
\ar@/^/^{\alpha_2}"3,2";"3,3"
\ar@/^/^{\alpha_3}"3,3";"3,4"
\ar@/^/^{\beta_1}"3,2";"3,1"
\ar@/^/^{\beta_2}"3,3";"3,2"
\ar@/^/^{\beta_3}"3,4";"3,3"
\ar@/^/^{\alpha_1}"4,1";"4,2"
\ar@/^/^{\alpha_2}"4,2";"4,3"
\ar@/^/^{\alpha_3}"4,3";"4,4"
\ar@/^/^{\alpha_{4}}"4,4";"4,5"
\ar@/^/^{\beta_1}"4,2";"4,1"
\ar@/^/^{\beta_2}"4,3";"4,2"
\ar@/^/^{\beta_3}"4,4";"4,3"
\ar@/^/^{\beta_{4}}"4,5";"4,4"
\ar"1,1";"2,1"
\ar"1,2";"2,2"
\ar"2,1";"3,1"
\ar"2,2";"3,2"
\ar"2,3";"3,3"
\ar"3,1";"4,1"
\ar"3,2";"4,2"
\ar"3,3";"4,3"
\ar"3,4";"4,4"
}
},
\quad
\operatorname{Gr}(X') =
\vcenter{
\xymatrix{
1 & 2\\
1 & 2 & 3\\
1 & 2 & 3 & 4\\
1 & 2 & 3 & 4 & 5
\ar^{\gamma_1}"1,1";"1,2"
\ar@/^15pt/^{\gamma_2}"1,2";"1,1"
\ar^{\gamma_1}"2,1";"2,2"
\ar^{\gamma_2}"2,2";"2,3"
\ar@/^15pt/^{\gamma_3}"2,3";"2,1"
\ar^{\gamma_1}"3,1";"3,2"
\ar^{\gamma_2}"3,2";"3,3"
\ar^{\gamma_3}"3,3";"3,4"
\ar@/^15pt/^(.7){\gamma_4}"3,4";"3,1"
\ar^{\gamma_1}"4,1";"4,2"
\ar^{\gamma_2}"4,2";"4,3"
\ar^{\gamma_3}"4,3";"4,4"
\ar^{\gamma_{4}}"4,4";"4,5"
\ar@/^15pt/^{\gamma_5}"4,5";"4,1"
\ar"1,1";"2,1"
\ar"1,2";"2,3"
\ar"2,1";"3,1"
\ar"2,2";"3,3"
\ar"2,3";"3,4"
\ar"3,1";"4,1"
\ar"3,2";"4,3"
\ar"3,3";"4,4"
\ar"3,4";"4,5"
}
}
$$
with suitable relations as calculated in \cite{Asa-Kim}.
Note that if we start with $I$ presented by the same quiver $Q$ as above with
relations $a_{i+1}a_i=0$ for all $i=2,\dots, n-2$, then
both $\operatorname{Gr}(X)$ and $\operatorname{Gr}(X')$ are presented by the same quivers
as before, with relations consisting of the previous ones together with
the additional relations that all vertical paths of length $2$ are zero.
\end{exm}
\section{The composite of colax functors and pseudofunctors}
In this section we prove Theorem \ref{comp-pseudofun}.
Throughout this section $\mathbf{B}, \mathbf{C}$ and $\mathbf{D}$ are $2$-categories.
\begin{ntn}
When we denote a colax functor by a letter $X$, the first (resp.\ second, third) entry of $X$
is denoted by $X_{012}:=(X_0, X_1, X_2)$ (resp.\ $\eta^X$, $\theta^X$); thus we set
$X = (X_{012}, \eta^X, \theta^X)$. We sometimes simply write $X$ for $X_{d}$ ($d = 0,1,2$)
when no confusion is likely.
\end{ntn}
\subsection{Correspondences on cells}
\label{dfn-corr}
\begin{lem}
Let $X\colon \mathbf{B} \to \mathbf{C}$ and $V \colon \mathbf{C} \to \mathbf{D}$ be colax functors.
We define the composite $VX \colon \mathbf{B} \to \mathbf{D}$ as follows.
\begin{itemize}
\item $(VX)_d:= V_d X_d \colon \mathbf{B}_d \ya{X_d} \mathbf{C}_d \ya{V_d} \mathbf{D}_d$ for all $d = 0,1,2$.
\item $\eta^{VX}_i:= \eta^V_{X(i)} \circ V\eta^X_i
\colon
\xymatrix{
VX(\id_i) &V(\id_{X(i)}) &\id_{(VX)(i)}
\ar@{=>}^{V\eta^X_i}"1,1";"1,2"
\ar@{=>}^{\eta^V_{X(i)}}"1,2";"1,3"
}$
for all $i \in \mathbf{B}_0$.
\item $\theta^{VX}_{b,a}:= \theta^V_{X(b), X(a)}\circ V\theta^X_{b,a} \colon
\xymatrix@C=35pt{VX(ba) & V(X(b)\circ X(a)) &VX(b) \circ VX(a)
\ar@{=>}^-{V\theta^X_{b,a}}"1,1";"1,2"
\ar@{=>}^{\theta^V_{X(b), X(a)}}"1,2";"1,3"
}
$
for all $(b,a) \in \operatorname{com}(\mathbf{B})$.
\end{itemize}
Namely, $VX:=((V_0X_0, V_1X_1, V_2X_2), (\eta^V_{X(i)} \circ V\eta^X_i)_{i\in \mathbf{B}_0},
(\theta^V_{X(b), X(a)}\circ V\theta^X_{b,a})_{(b,a)\in \operatorname{com}(\mathbf{B})})$.
Then the composite $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)(X):= VX \colon \mathbf{B} \to \mathbf{D}$ is again a colax functor.
\end{lem}
\begin{proof}
It is enough to verify the axioms (i) -- (iv) in Definition \ref{dfn:colax-fun-2cat}.
(i) $((VX)_1, (VX)_2)\colon \mathbf{B}(i,j) \ya{(X_1, X_2)}\mathbf{C}(X(i),X(j))
\ya{(V_1, V_2)} \mathbf{D}(VX(i), VX(j))$ is a functor for all $i, j \in \mathbf{B}_0$
as a composite of the functors
$(X_1, X_2)$ and $(V_1, V_2)$.
(ii) For each $a \colon i \to j$ in $\mathbf{B}$ we have the following commutative diagram:
$$
\xymatrix@C=45pt{
VX(a)\id_{VX(i)} & VX(a) V(\id_{X(i)}) & VX(a) VX(\id_i)\\
& V(X(a)\id_{X(i)}) & V(X(a)X(\id_i))\\
&& VX(a\id_i).
\ar@{=>}_{VX(a)\eta^V_{X(i)}}"1,2";"1,1"
\ar@{=>}_{VX(a)V(\eta^X_{i})}"1,3";"1,2"
\ar@{=>}_{V(X(a)\eta^X_{i})}"2,3";"2,2"
\ar@{=>}_{\theta^V_{X(a), \id_{X(i)}}}"2,2";"1,2"
\ar@{=>}_{\theta^V_{X(a), X(\id_{i})}}"2,3";"1,3"
\ar@{=>}_{V(\theta^X_{a, \id_{i}})}"3,3";"2,3"
\ar@{=}"2,2";"1,1"
\ar@{=}"3,3";"2,2"
}
$$
The commutativity of the square follows from the axiom (iv) for $\theta^V$.
The remaining commutative diagram is obtained similarly.
These two commutative diagrams verify the axiom (ii) of colax functors.
(iii) For each $i \ya{a} j \ya{b} k \ya{c} l$ in $\mathbf{B}$ we have the following commutative diagram:
$$
\xymatrix@C=45pt{
VX(cba) & V(X(c)X(ba)) & VX(c)\cdot VX(ba)\\
V(X(cb)X(a)) & V(X(c)X(b)X(a)) & VX(c) V(X(b)X(a))\\
VX(cb)\cdot VX(a) & V(X(c)X(b)) VX(a) & VX(c)\cdot VX(b) \cdot VX(a),
\ar@{=>}^{V\theta^X_{c, ba}}"1,1";"1,2"
\ar@{=>}^{\theta^V_{X(c),X(ba)}}"1,2";"1,3"
\ar@{=>}^{V(\theta^X_{c,b}\id_{X(a)})}"2,1";"2,2"
\ar@{=>}^{\theta^V_{X(c),X(b)X(a)}}"2,2";"2,3"
\ar@{=>}_{V(\theta^X_{c,b})VX(a)}"3,1";"3,2"
\ar@{=>}_{\theta^V_{X(c), X(b)} VX(a)}"3,2";"3,3"
\ar@{=>}_{V\theta^X_{cb,a}}"1,1";"2,1"
\ar@{=>}^{V(\id_{X(c)}\theta^X_{b,a})}"1,2";"2,2"
\ar@{=>}^{VX(c)\cdot V\theta^X_{b,a}}"1,3";"2,3"
\ar@{=>}_{\theta^V_{X(cb),X(a)}}"2,1";"3,1"
\ar@{=>}^{\theta^V_{X(c)X(b), X(a)}}"2,2";"3,2"
\ar@{=>}^{VX(c)\theta^V_{X(b),X(a)}}"2,3";"3,3"
}
$$
which verifies the axiom (iii) of colax functors.
(iv) Let $a, a' \colon i \to j$; $b, b' \colon j \to k$;
$\alpha \colon a \Rightarrow a'$ and $\beta \colon b \Rightarrow b'$ be in $\mathbf{B}$.
Then we have the following commutative diagram:
$$
\xymatrix@C=40pt{
VX(ba) & V(X(b)\cdot X(a)) & VX(b)\cdot VX(a)\\
VX(b'a') & V(X(b')\cdot V(a')) & VX(b')\cdot VX(a'),
\ar@{=>}^-{V(\theta^X_{b,a})}"1,1";"1,2"
\ar@{=>}^{\theta^V_{X(b),X(a)}}"1,2";"1,3"
\ar@{=>}_-{V(\theta^X_{b',a'})}"2,1";"2,2"
\ar@{=>}_{\theta^V_{X(b'),X(a')}}"2,2";"2,3"
\ar@{=>}_{VX(\beta\cdot \alpha)}"1,1";"2,1"
\ar@{=>}^{V(X\beta \cdot X\alpha)}"1,2";"2,2"
\ar@{=>}^{VX\beta \cdot VX\alpha}"1,3";"2,3"
}
$$
which verifies the axiom (iv) of colax functors.
\end{proof}
\begin{lem}\label{lem:1-morphisms-colax}
Let $X, X' \colon \mathbf{B} \to \mathbf{C}$ and $V \colon \mathbf{C} \to \mathbf{D}$ be colax functors and
$(F,\psi) \colon X \to X'$ a $1$-morphism in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})$, and
consider the diagram
\begin{equation}
\label{induced-1-morphism}
\vcenter{
\xymatrix@C=9em@R=9em{
VX(i) & VX'(i)\\
VX(j) & VX'(j).
\ar^{VF(i)} "1,1";"1,2"
\ar_{VF(j)} "2,1";"2,2"
\ar_{VX(a)} "1,1";"2,1"
\ar^{VX'(a)} "1,2";"2,2"
\ar@/^2.5pc/|{V(X'(a)F(i))} "1,1";"2,2" \ar@/_2.5pc/|{V(F(j)X(a))} "1,1";"2,2"
\ar@{=>}^-{\theta^V_{X'(a),F(i)}} "1,2"+<-2.5em,-2.5em>;"1,2"
\ar@{=>}^{V\psi(a)} "1,2"+<-6em,-4.7em>;"2,1"+<5.2em,4.5em>
\ar@{=>}^-{\theta^V_{F(j),X(a)}} "2,1"+<2.5em,2.5em> ;"2,1"
}}
\end{equation}
Assume that $\theta^V_{d,c}$ are isomorphisms for all $(d,c) \in \operatorname{com}(\mathbf{C})$
$($e.g., that $V$ is a pseudofunctor$)$.
Then we can define a $1$-morphism $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)(F, \psi):= V(F,\psi) \colon VX \to VX'$
in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})$ by
$$\begin{aligned}
V(F,\psi):&=((V(F(i)))_{i\in \mathbf{B}_0}, (\psi_V(a))_{a \in \mathbf{B}_1}),
\text{ where for }a:i \to j\\
\psi_V(a):&= \theta^V_{F(j), X(a)}\cdot V(\psi(a)) \cdot \theta^V_{X'(a), F(i)}{}^{-1}.
\end{aligned}
$$
\end{lem}
\begin{proof}
We set $X = (X, \eta, \theta)$ and $X' = (X', \eta', \theta')$ for short.
First, the functor $V_{12} \colon \mathbf{C}(X(i), X'(j)) \to \mathbf{D}(VX(i), VX'(j))$
sends the commutative square \eqref{eq:naturality-psi}
to the commutative square $(*)$ below
{\footnotesize
$$\vcenter{\xymatrix@R=4.5ex@C=8ex{
VX'(a)\cdot VF(i) &&&VX'(b)\cdot VF(i)\\
&V(X'(a)F(i) )& V(X'(b)F(i))&\\
&V(F(j)X(a)) & V(F(j)X(b))\\
VF(j)\cdot VX(a) &&& VF(j)\cdot VX(b),
\ar@{=>}^{V(X'(\alpha)F(i))}"2,2";"2,3"
\ar@{=>}_{V(F(j)X(\alpha))}"3,2";"3,3"
\ar@{=>}_{V(\psi(a))}"2,2";"3,2"
\ar@{=>}^{V(\psi(b))}"2,3";"3,3"
\ar@{}|{(*)}"2,2";"3,3"
\ar@{=>}^{VX'(\alpha)*VF(i)}"1,1";"1,4"
\ar@{=>}_{VF(j)*VX(\alpha)}"4,1";"4,4"
\ar@{=>}_{\psi_V(a)}"1,1";"4,1"
\ar@{=>}^{\psi_V(b)}"1,4";"4,4"
\ar@{=>}_{\theta^V_{X'(a),F(i)}}"2,2";"1,1"
\ar@{=>}^(0.4){\theta^V_{F(j),X(a)}}"3,2";"4,1"
\ar@{=>}^{\theta^V_{X'(b),F(i)}}"2,3";"1,4"
\ar@{=>}^(0.4){\theta^V_{F(j),X(b)}}"3,3";"4,4"
\save "1,2"+<11.5ex,-2.5ex>*{\text{\tiny (iv)}} \restore
\save "4,2"+<11.5ex,2.5ex>*{\text{\tiny (iv)}} \restore
\save "2,1"+<9ex,-4ex>*{\text{\tiny (definition)}} \restore
\save "2,4"+<-9ex,-4ex>*{\text{\tiny (definition)}} \restore
}}
$$
}\noindent
which can be completed to the full commutative diagram above.
Hence the family $(\psi_V(a))_{a \in \mathbf{B}_1}$ has the property (0)
of 1-morphisms in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})$ (Definition \ref{dfn:colax-fun-2cat}(4)).
(a) For each $i \in \mathbf{B}_0$ we have the following commutative diagram:
{\footnotesize
$$
\xymatrix@C=3.5em{
VX'(\id_i)\cdot VF(i) & V(X'(\id_i)\cdot F(i)) & V(F(i)\cdot X(\id_i)) & VF(i)\cdot VX(\id_i)\\
V(\id_{X'(i)})\cdot VF(i) &V(\id_{X'(i)}\cdot F(i)) & V(F(i)\cdot \id_{X(i)}) & VF(i)\cdot V(\id_{X(i)})\\
\id_{VX'(i)}\cdot VF(i) &&& VF(i)\cdot \id_{VX(i)},
\ar@{=>}_{\theta^V_{X'(\id_i),F(i)}}"1,2";"1,1"
\ar@{=>}^{V\psi(\id_i)}"1,2";"1,3"
\ar@{=>}^{\theta^V_{F(i),X(\id_i)}}"1,3";"1,4"
\ar@{=>}^-{\theta^V_{\id_{X'(i)},F(i)}}"2,2";"2,1"
\ar@{=}"2,2";"2,3"
\ar@{=>}_-{\theta^V_{F(i),\id_{X(i)}}}"2,3";"2,4"
\ar@{=}"3,1";"3,4"
\ar@{=>}^{V\eta'_i\cdot VF(i)}"1,1";"2,1"
\ar@{=>}^{V(\eta'_i\id_{F(i)})}"1,2";"2,2"
\ar@{=>}^{V(\id_{F(i)}\eta_i)}"1,3";"2,3"
\ar@{=>}_{VF(i)\cdot V\eta_i}"1,4";"2,4"
\ar@{=>}^{\eta^V_{X'(i)}\cdot VF(i)}"2,1";"3,1"
\ar@{=>}_{VF(i)\cdot \eta^V_{X(i)}}"2,4";"3,4"
}
$$
}
which verifies the axiom (a) of 1-morphisms.
(b) For each $i \ya{a} j \ya{b} k$ in $\mathbf{B}$ we have the following commutative diagrams:
{\tiny
$$
\xymatrix{
VX'(ba)\cdot VF(i) & V(X'(b)X'(a))\cdot VF(i) & VX'(b)\cdot VX'(a)\cdot VF(i)\\
&& VX'(b)V(X'(a)F(i)) &VX'(b)\cdot V(F(j)X(a))\\
&&& VX'(b)\cdot VF(j)\cdot VX(a)\\
V(X'(ba)F(i)) & V(X'(b)X'(a)F(i)) & V(X'(b)F(j)X(a)) & V(X'(b)F(j))VX(a)
\ar@{=>}^{V(\theta'_{b,a})VF(i)}"1,1";"1,2"
\ar@{=>}^{\theta^V_{X'(b),X'(a)}VF(i)}"1,2";"1,3"
\ar@{=>}^{VX'(b)\cdot V(\psi(a))}"2,3";"2,4"
\ar@{=>}^-{V(\theta'_{b,a}F(i))}"4,1";"4,2"
\ar@{=>}^{V(X'(b)\psi(a))}"4,2";"4,3"
\ar@{=>}^{\theta^V_{X'(b)F(j),X(a)}}"4,3";"4,4"
\ar@{=>}_{VX'(b)\theta^V_{X'(a),F(i)}{}^{-1}}"1,3";"2,3"
\ar@{=>}^{VX'(b)\cdot \theta^V_{F(j),X(a)}}"2,4";"3,4"
\ar@{=>}^{\theta^V_{X'(b),F(j)}{}^{-1}\cdot VX(a)}"3,4";"4,4"
\ar@{=>}^{\theta^V_{X'(ba),F(i)}}"4,1";"1,1"
\ar@{=>}^{\theta^V_{X'(b)X'(a),F(i)}}"4,2";"1,2"
\ar@{=>}^{\theta^V_{X'(b),X'(a)F(i)}}"4,2";"2,3"
\ar@{=>}^{\theta^V_{X'(b),F(j)X(a)}}"4,3";"2,4"
}
$$
}
and
{\tiny
$$
\xymatrix{
V(X'(ba)F(i)) & V(X'(b)X'(a)F(i)) & V(X'(b)F(j)X(a)) & V(X'(b)F(j))VX(a)\\
V(F(k)X(ba)) & & V(F(k)X(b)X(a)) & V(F(k)X(b))VX(a)\\
VF(k)\cdot VX(ba) && VF(k)\cdot V(X(b)X(a)) & VF(k)\cdot VX(b) \cdot VX(a).
\ar@{=>}^-{V(\theta'_{b,a}F(i))}"1,1";"1,2"
\ar@{=>}^{V(X'(b)\psi(a))}"1,2";"1,3"
\ar@{=>}^{\theta^V_{X'(b)F(j),X(a)}}"1,3";"1,4"
\ar@{=>}^{V(F(k)\theta_{b,a})}"2,1";"2,3"
\ar@{=>}^{\theta^V_{F(k)X(b),X(a)}}"2,3";"2,4"
\ar@{=>}_{VF(k)\cdot V\theta_{b,a}}"3,1";"3,3"
\ar@{=>}_{VF(k)\cdot \theta^V_{X(b),X(a)}}"3,3";"3,4"
\ar@{=>}_{V\psi(ba)}"1,1";"2,1"
\ar@{=>}_{V(\psi(b)X(a))}"1,3";"2,3"
\ar@{=>}^{V(\psi(b))VX(a)}"1,4";"2,4"
\ar@{=>}_{\theta^V_{F(k),X(ba)}}"2,1";"3,1"
\ar@{=>}_{\theta^V_{F(k),X(b)X(a)}}"2,3";"3,3"
\ar@{=>}^{\theta^V_{F(k),X(b)}VX(a)}"2,4";"3,4"
}
$$
}
Glue these two diagrams together along the common row to get a large diagram,
which verifies the axiom (b) of 1-morphisms.
\end{proof}
\begin{lem}
Let $X, X' \colon \mathbf{B} \to \mathbf{C}$ and $V \colon \mathbf{C} \to \mathbf{D}$ be colax functors,
$(F,\psi), (F', \psi') \colon X \to X'$ $1$-morphisms, and
$\alpha \colon (F, \psi) \Rightarrow (F', \psi')$ a $2$-morphism in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})$.
Assume that all $\theta^V_{d,c}$ are isomorphisms $($e.g., that $V$ is a pseudofunctor$)$.
Then we can define a $2$-morphism $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)(\alpha):= V\alpha \colon V(F,\psi) \Rightarrow V(F', \psi')$
in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})$ by
$$
V\alpha := (V\alpha_i)_{i\in \mathbf{B}_0}.
$$
\end{lem}
\begin{proof}
Let $a\colon i \to j$ be in $\mathbf{B}$.
It is enough to show the commutativity of the following diagram:
$$
\xymatrix@C=3em{
VX'(a)\cdot VF(i) & V(X'(a)F(i)) & V(F(j)X(a)) & VF(j)\cdot VX(a)\\
VX'(a)\cdot VF'(i) & V(X'(a)F'(i)) & V(F'(j)X(a)) & VF'(j)\cdot VX(a).
\ar@{=>}_{\theta^V_{X'(a),F(i)}}"1,2";"1,1"
\ar@{=>}^{V(\psi(a))}"1,2";"1,3"
\ar@{=>}^{\theta^V_{F(j),X(a)}}"1,3";"1,4"
\ar@{=>}^{\theta^V_{X'(a),F'(i)}}"2,2";"2,1"
\ar@{=>}_{V(\psi'(a))}"2,2";"2,3"
\ar@{=>}_{\theta^V_{F'(j),X(a)}}"2,3";"2,4"
\ar@{=>}_{VX'(a)\cdot V\alpha_i}"1,1";"2,1"
\ar@{=>}_{V(X'(a)\alpha_i)}"1,2";"2,2"
\ar@{=>}^{V(\alpha_jX(a))}"1,3";"2,3"
\ar@{=>}^{V\alpha_j\cdot VX(a)}"1,4";"2,4"
}
$$
Since $\alpha = (\alpha_i \colon F(i) \Rightarrow F'(i))_{i\in \mathbf{B}_0}$ is a 2-morphism in $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})$,
we have the commutative diagram
$$
\xymatrix{
X'(a)F(i) & F(j)X(a)\\
X'(a)F'(i) & F'(j)X(a).
\ar@{=>}^{\psi(a)}"1,1";"1,2"
\ar@{=>}_{\psi'(a)}"2,1";"2,2"
\ar@{=>}_{X'(a)\alpha_i}"1,1";"2,1"
\ar@{=>}^{\alpha_jX(a)}"1,2";"2,2"
}
$$
This gives the commutativity of the central square of the diagram above
by applying the functor $(V_1, V_2)$ to it.
The axiom (iv) of colax functors for $V$ shows the commutativity of the remaining squares.
\end{proof}
\subsection{Proof of Theorem \ref{comp-pseudofun}}
By the three lemmas above we can define a correspondence
$$
\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)_{012} \colon \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}) \to \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})
$$
sending $i$-cells to $i$-cells for all $i = 0, 1, 2$ preserving domains and codomains.
It remains to define families $H = (H_X)_{X\in \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})_0}$ and
$\Theta = (\Theta_{F',F})_{(F',F) \in \operatorname{com}(\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}))}$
and to show that $\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V):= (\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)_{012}, H, \Theta)$ becomes a pseudofunctor
$\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}) \to \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{D})$.
For each $X\in \overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C})_0$ we define $H_X \colon V(\id_X) \Rightarrow \id_{VX}$ by setting
$$H_X:= (\eta_{X(i)}^V \colon V(\id_{X(i)}) \to \id_{VX(i)})_{i\in \mathbf{B}_0}.$$
Then $H_X$ turns out to be a 2-morphism
because by definitions of $\theta^V$ and $\eta^V$ we have a commutative diagram
$$
\xymatrix{
VX(a)\cdot V(\id_{X(i)}) & V(X(a)\cdot \id_{X(i)}) & V(\id_{X(j)}X(a)) & V(\id_{X(j)})\cdot VX(a)\\
VX(a)\cdot \id_{VX(i)} &&& \id_{VX(j)}VX(a)
\ar@{=>}^{(\theta_{X(a),\id_{X(i)}}^V)^{-1}}"1,1";"1,2"
\ar@{=}"1,2";"1,3"
\ar@{=>}^{\theta_{\id_{X(j)},X(a)}^V}"1,3";"1,4"
\ar@{=}"2,1";"2,4"
\ar@{=>}^{VX(a)\cdot \eta_{X(i)}^V}"1,1";"2,1"
\ar@{=>}_{\eta_{X(j)}^V\cdot VX(a)}"1,4";"2,4"
}
$$
for all $a \colon i \to j$ in $\mathbf{B}$.
Note that $H_X$ are isomorphisms because the $\eta_k^V$ are isomorphisms for all $k \in \mathbf{C}_0$.
For each $(F',F) \in \operatorname{com}(\overleftarrow{\operatorname{Colax}}(\mathbf{B}, \mathbf{C}))$, say
$F \colon X \Rightarrow X'$ and $F' \colon X' \Rightarrow X''$,
we define $\Theta_{F',F} \colon V(F'F) \Rightarrow VF'\circ VF$ by setting
$$
\Theta_{F',F}:= (\theta_{F'(i), F(i)}^V \colon V(F'(i)F(i)) \to VF'(i)\cdot VF(i))_{i\in \mathbf{B}_0}.
$$
Then $\Theta_{F',F}$ turns out to be a 2-morphism.
Indeed, it is enough to show the commutativity of the diagram
$$
\xymatrix{
VX''(a)\cdot V(F'(i)F(i)) & V(F'(j)F(j))\cdot VX(a)\\
VX''(a)\cdot VF'(i) \cdot VF(i) & VF'(j)VF(j) VX(a)
\ar@{=>}^{\Psi(a)}"1,1";"1,2"
\ar@{=>}_{\Psi'(a)}"2,1";"2,2"
\ar@{=>}_{VX''(a)\cdot \theta_{F'(i), F(i)}^V}"1,1";"2,1"
\ar@{=>}^{\theta_{F'(j),F(j)}^V\cdot VX(a)}"1,2";"2,2"
}
$$
for all $a \colon i \to j$ in $\mathbf{B}$, where we set
$V(F'F) = ((V(F'(i)F(i))_{i\in \mathbf{B}_0}, (\Psi(a))_{a\in \mathbf{B}_1})$
and $VF'\cdot VF = ((VF'(i)\cdot VF(i))_{i\in \mathbf{B}_0}, (\Psi'(a))_{a\in \mathbf{B}_1})$,
namely
$$
{\footnotesize
\begin{aligned}
\Psi(a) &:= \theta_{F'(j)F(j),X(a)}^V\cdot V(F'(j)\cdot \psi(a)) \cdot V(\psi'(a)\cdot F(i))
\cdot (\theta_{X''(a), F'(i)F(i)}^V)^{-1}\\
\Psi'(a) &:= (VF'(j) \cdot (\theta_{F(j),X(a)}^V\cdot V\psi(a)\cdot (\theta_{X'(a),F(i)}^V)^{-1})) \circ
((\theta_{F'(j),X'(a)}^V\cdot V\psi'(a)\cdot (\theta_{X''(a),F'(i)}^V)^{-1})\cdot VF(i))
\end{aligned}
}$$
for all $a \colon i \to j$ in $\mathbf{B}$.
This follows from the coassociativity of $V$ and the naturality of $\theta^V$.
Note that $\Theta_{F',F}$ are isomorphisms because the $\theta_{d,c}^V$ are isomorphisms for all $(d,c) \in \operatorname{com}(\mathbf{C})$.
Now the defining conditions of $\theta^V$ and $\eta^V$ directly show
that $(\overleftarrow{\operatorname{Colax}}(\mathbf{B}, V)_{012}, H, \Theta)$ is a colax functor, hence
a pseudofunctor because all $H_X$ and $\Theta_{F',F}$ are isomorphisms.
\qed
\section{Introduction} \label{sec:intro}
The age-metallicity relation (AMR) in the solar neighbourhood has been a fundamental diagram to understand how the Milky Way disk has formed. Considering that stars pollute star forming regions with metals after exploding, younger stars need to be more metal-rich than older stars. Hence, in a closed-box scenario, it is expected that the AMR is tight, with a trend of metallicity increasing with time. However, even early works on the AMR, either using smaller spectroscopic samples \citep{1993A&A...275..101E} or larger photometric samples \citep{2011A&A...530A.138C}, have shown the contrary: the relation has a scatter much larger than the uncertainties in determined metallicities or ages.
Explanations of the scatter in the AMR have relied on the fact that stars migrate and that chemical enrichment is not constant across the disk \citep[e.g.][]{2018MNRAS.475.5487S}. Therefore, the variety of ages and metallicities with apparently no relation in the solar neighbourhood can be attributed to stars tracing the chemical enrichment histories of different birth places \citep{2010ApJ...722..112M}. Recently, \cite{Feuillet_2019MNRAS.489.1742F} attempted to unveil the true structure of the AMR in an extended region of the disk, taking advantage of large spectroscopic datasets. By calculating the mean of the age distribution at a given metallicity, they found that the relation is not flat but has a turn-around at solar metallicity, i.e., both metal-poor and metal-rich stars are older than solar-metallicity stars. This can be interpreted as the effect of stellar migration, that is, older stars born in inner regions of the Galaxy having moved outwards \citep[e.g.][]{Miglio_2021A&A...645A..85M}.
Yet, quantifying the importance of migration and the net difference in chemical enrichment rates across the disk has remained an open question. The recent work of \cite{Johnson_2021arXiv210309838J} used the observed AMR of \cite{Feuillet_2019MNRAS.489.1742F} to constrain not only the amount of migration of the fossil stars, but also the migration of the intermediate-mass stars which will become white dwarfs and then explode as Supernovae Type Ia (SNIa). Since the latter are long lived, they are able to explode far from their birthplaces, enriching a different region of the Galaxy with metals. A precise AMR is thus of paramount importance to constrain the different processes that affect the disk formation and structure.
If the precision of measured ages and metallicities improves, we could unveil the details of the AMR and thus the formation of the disk. \cite{Nissen_2018A&ARv..26....6N} extensively illustrated the power of using high-precision abundances of stars in several astrophysical applications. In particular, using high-precision solar-twin abundances it has been possible to study tight relationships between chemical abundance ratios and ages, such as [Y/Mg] and [Ba/Mg] \citep[e.g.][]{Nissen_2015A&A...579A..52N, Spina_2018MNRAS.474.2580S, Jofre_2020A&A...633L...9J}. These relations are attributed to a strong dependency of chemical enrichment on time. Such abundances have been dubbed ``chemical clocks'' and offer interesting perspectives for using certain abundance ratios as an alternative to stellar ages.
With the intention of testing the applicability of these chemical clocks, which have been shown to vary with metallicity \citep{delgado_2019A&A...624A..78D} and Galactic region, since the star formation efficiency is not constant across the Galaxy \citep[e.g.][]{Casali_2020A&A...639A.127C, 2021arXiv210314692C}, \citet[hereafter N20]{Nissen_2020A&A...640A..81N} extended the metallicity range of solar-type stars from the solar-twin regime ($-0.1 < \mathrm{[Fe/H]} < 0.1$) to $-0.3 < \mathrm{[Fe/H]} < 0.3$. To their surprise, they found two clearly separated sequences in the AMR of their sample. The stars of the two sequences showed different abundance ratios of a few other elements, as well as different overall kinematic distributions. They concluded that if the AMR of the solar neighbourhood is truly composed of two sequences, it should be seen in larger samples as well. They stressed, however, that such a study is not straightforward due to the typically large uncertainties in the ages of stars very different from the Sun.
Fortunately, there is a way to avoid using ages and still test N20's results in larger datasets. If ages do not reach the needed precision, we need to consider an alternative proxy for age that is precise. \cite{2015MNRAS.453.1855M} demonstrated the power of using [C/N] abundances of red giants as a proxy for age, showing that the thick disk and the thin disk had very different evolutionary histories. This abundance ratio differs from the chemical clocks since it does not follow a chemical enrichment rate but is the product of processes happening inside red giant stars. \cite{2015MNRAS.453.1855M} revived the fundamental principle that red giants change their atmospheric C/N abundance ratio after experiencing their first dredge-up, which brings material newly synthesised in their cores through the CNO cycle of hydrogen burning up to the atmosphere, and applied it to Galactic archaeology. Since the change in the C/N ratio largely depends on the stellar mass, the distribution of [C/N] abundances of a large sample of red giants of similar evolutionary stages can be related to their masses, hence their ages.
APOGEE has very precise atmospheric parameters and stellar abundances, in particular, metallicities and [C/N] abundance ratios, for a very large number of red giants distributed in the entire Galactic disk \citep{Jonsson_2020AJ....160..120J}.
Indeed, \cite{Hasselquist_2019ApJ...871..181H} studied the [C/N]-[Fe/H] relation in the disk as an alternative for the AMR. They used large samples of stars in APOGEE, spanning a wide range in parameter space and abundances. That allowed them to include the thin and the thick disk in order to study the difference in the [C/N]-[Fe/H] relations and distributions at different Galactic radii and heights. They commented that the solar neighbourhood had a larger scatter in the [C/N]-[Fe/H] relation than the majority of the Galactic regions, but no two separate sequences were identified.
The present work attempts to focus the discussion on a possible dual AMR in the solar neighbourhood by selecting from APOGEE only stars that span a restricted range in parameter space, to avoid scatter in the [C/N]-age relation as much as possible. The main motivation is to focus on the Galactic plane and on stars close to solar metallicity, following N20's results. Such stars are then further studied taking advantage of the multidimensional information available for APOGEE stars today, namely kinematics from Gaia and several elemental abundances beyond $\alpha$, C and N. With this focused analysis two separate [C/N]-[Fe/H] relations are found, sharing similar properties with those of N20.
\section{Data and Methods}\label{data}
Two value-added catalogues from the 16th Data Release of SDSS \citep{2020ApJS..249....3A} were considered. The first one is the {\tt APOGEE Red-Clump (RC) Catalogue}\footnote{\url{https://www.sdss.org/dr16/data\_access/value-added-catalogs/?vac\_id=apogee-red-clump-(rc)-catalog}}, which identifies RC stars by applying spectrophotometric criteria to APOGEE; each star in it is an RC star with 95\% certainty. Details of this selection can be found in \citet[]{Bovy_2014ApJ...790..127B}. The second catalogue is the {\tt astroNN catalog of abundances, distances, and ages for APOGEE DR16 stars}\footnote{\url{https://www.sdss.org/dr16/data\_access/value-added-catalogs/?vac\_id=the-astronn-catalog-of-abundances,-distances,-and-ages-for-apogee-dr16-stars}}, which consists of applying a deep-learning neural network to APOGEE spectra in order to derive, among other properties, distances as described in \cite{Leung_2019MNRAS.489.2079L}. That method is model-free because it is trained on the spectra and parallaxes of stars and then applied to spectra of stars with unknown parallax \citep[see][for further discussions]{Jofre_2015MNRAS.453.1428J, Jofre_2017MNRAS.472.2517J}. The catalogue also includes ages and dynamical and kinematical properties such as actions, eccentricities and velocities, following the description of \cite{Mackereth_2019MNRAS.489..176M}. Cross-matching both catalogues gives approximately 40,000 RC stars with abundances, distances, ages and dynamical parameters.
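The cross-match of the two catalogues amounts to a join on the unique star identifier. A minimal sketch, assuming each catalogue is a list of dicts sharing an {\tt APOGEE\_ID} key (the field names here are illustrative stand-ins, not the actual FITS column names):

```python
# Hypothetical sketch of the catalogue cross-match: each catalogue is assumed
# to be a list of dicts sharing a unique star identifier ('APOGEE_ID').
def crossmatch(rc_catalogue, astronn_catalogue, key="APOGEE_ID"):
    # index the astroNN rows by identifier for O(1) lookups
    astronn = {row[key]: row for row in astronn_catalogue}
    matched = []
    for row in rc_catalogue:
        extra = astronn.get(row[key])
        if extra is not None:
            # merged row carries RC fields plus the astroNN ones
            matched.append({**row, **extra})
    return matched
```

Only stars present in both catalogues survive, which is what reduces the joint sample to roughly 40,000 RC stars.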
The atmospheric parameters and stellar abundances considered are those of APOGEE DR16, that is, from the ASPCAP pipeline \citep{Garcia_2016AJ....151..144G}. They are based on fitting synthetic spectra, computed under 1D LTE, to the observed spectra using the FERRE code \citep{2006ApJ...636..804A}. The spectra from which abundances are derived are in the infrared and have a resolution of about 20,000. They typically have a signal-to-noise ratio of 100, which is sufficient to reach abundance precisions normally below 0.05 dex \citep{Jonsson_2020AJ....160..120J}. An extended discussion of APOGEE accuracy and precision in the context of other optical spectroscopic surveys can be found in \cite{Jofre_2019ARA&A..57..571J}.
Further cuts were applied to the data. As a quality cut, only stars whose spectral parameters were determined with confidence were considered, i.e., stars with {\tt ASPCAPFLAG = 0}.
In addition, only stars in the metallicity range $-0.35 < \mathrm{[Fe/H]} < 0.35$ were selected, with the intention of matching the metallicity range of N20. In order to remove thick-disk (high-$\alpha$) stars from the sample, further chemical and spatial cuts were applied by requiring $[\alpha/\mathrm{M}] < 0.1$, Galactic height $|\mathrm{z}| < 0.8$ kpc, and a positive parallax measured to better than 20\%. All these cuts reduce the sample to about 18,000 stars. The atmospheric parameters of the sample have a mean temperature of 4800~K with a standard deviation of 130~K, and a mean surface gravity of 2.44 with a standard deviation of 0.1~dex. The abundance-ratio precisions have means and standard deviations as indicated in Table~\ref{tab:xfe_errors}.
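The sample selection above can be expressed as a single boolean mask. A sketch with numpy, where the array names are hypothetical stand-ins for the actual catalogue columns:

```python
import numpy as np

def select_sample(aspcapflag, feh, alpha_m, z_kpc, parallax, parallax_err):
    """Boolean mask mirroring the cuts described in the text
    (illustrative column names, not the catalogue's own)."""
    return (
        (aspcapflag == 0)                    # reliable spectral parameters
        & (feh > -0.35) & (feh < 0.35)       # near-solar metallicity range
        & (alpha_m < 0.1)                    # remove high-alpha (thick disk)
        & (np.abs(z_kpc) < 0.8)              # stay close to the Galactic plane
        & (parallax > 0)                     # positive parallax
        & (parallax_err < 0.2 * parallax)    # parallax better than 20%
    )
```

Applying such a mask to the cross-matched catalogue yields the final sample of about 18,000 stars.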
\begin{table}[t]
\caption{Mean and standard deviation of the error distributions in the abundance ratios considered in this work.}
\begin{center}
\begin{tabular}{c|cc}
\hline
$\sigma$[X/Fe] & mean & standard deviation \\
\hline
C & 0.012 & 0.004 \\
N & 0.018 & 0.005 \\
O & 0.015 & 0.005 \\
Mg & 0.011 & 0.002 \\
Al & 0.020 & 0.005 \\
Si & 0.011 & 0.002 \\
Ca & 0.013 & 0.004 \\
Ti & 0.018 & 0.006 \\
Cr & 0.035 & 0.010 \\
Mn & 0.016 & 0.005 \\
Fe & 0.008 & 0.0002 \\
Ni & 0.013 & 0.007 \\
\hline
\end{tabular}
\end{center}
\label{tab:xfe_errors}
\end{table}%
\subsection{[C/N] as a proxy for age}
\cite{2015MNRAS.453.1855M} built upon the work of \cite{1965ApJ...142.1447I} to demonstrate that [C/N] abundances could indeed be used to study the mass, hence age, distribution of red giant populations. Since then, a rich literature has developed on the relation between [C/N] abundances and ages for red giant stars. Some works derive empirical relations between [C/N] abundances measured from the spectra and independent measurements of ages from e.g. asteroseismology \citep{2016MNRAS.456.3655M} or open clusters \citep{2019A&A...629A..62C}. Other works derive ages by applying {\it The Cannon} directly to the spectra \citep{2016ApJ...823..114N}. The latter, however, indirectly uses [C/N] as an important input for ages \citep[see also discussions in ][]{2019MNRAS.484..294D}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{CN_AGE_Casali.png}
\caption{Correlation of [C/N] with age for the sampled stars. For reference, the empirical relations between age and [C/N] determined using open clusters by \cite{2019A&A...629A..62C} are shown with red and blue lines, corresponding to $ \log{\mathrm{Age (yr)}} = 10.64 + 2.61 \mathrm{[C/N]}$ and $\log{\mathrm{Age (yr)}} = 11.20 + 2.51 \mathrm{ [C/N]}$, respectively. }
\label{fig:casali}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{CN_FEH_R.png}
\caption{Distribution of [C/N] and [Fe/H] at different galactocentric radii. }
\label{fig:cn_feh_maps_r}
\end{figure*}
Figure~\ref{fig:casali} shows the relation between the [C/N] abundance ratios and ages of the stars used here. The ages considered for this plot are those from {\tt AstroNN}, which were derived by \cite{Mackereth_2019MNRAS.489..176M}. They discuss the dependency of the derived ages on [C/N], so this relation is expected. To guide the eye, the empirical relationship determined by \cite{2019A&A...629A..62C} is plotted with lines. They used [C/N] abundances of red giants in open clusters observed with the APOGEE and Gaia-ESO \citep{2012Msngr.147...25G, 2013Msngr.154...47R} surveys, together with independent ages derived from isochrone fitting to the CMDs of the clusters, to find an empirical relation. Their results have sizeable uncertainties in the zero point and slope of the relation, allowing for a range of possible mappings between [C/N] abundances and ages. Here, the relations
$ \log{\mathrm{Age (yr)}} = 10.64 + 2.61 \mathrm{[C/N]}$ and $\log{\mathrm{Age (yr)}} = 11.20 + 2.51 \mathrm{ [C/N]}$
are plotted with red and blue, respectively.
The figure intends to show that the [C/N] abundances in this sample can be used as an age proxy. This is an important step considering that RC stars are avoided in some studies of [C/N] abundances with age \citep[e.g.][]{2015MNRAS.453.1855M, Hasselquist_2019ApJ...871..181H}. RC stars might have experienced extra mixing processes on the red giant branch, through the helium flash or thermohaline mixing, which might have altered the [C/N] abundances after the first dredge-up \citep{Masseron_2017MNRAS.464.3021M, Lagarde_2017A&A...601A..27L}. The figure shows that this sample, which is restricted to a tight range in stellar parameters, still yields a correlation with age, although transforming [C/N] abundances into ages can imply uncertain results, especially because these mixing processes are poorly understood.
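For illustration, inverting one of the empirical relations of \cite{2019A&A...629A..62C} plotted above is a one-liner; given the caveats just discussed, the resulting ages should be treated as indicative only:

```python
def cn_to_age_gyr(cn, zero_point=10.64, slope=2.61):
    """Age in Gyr from log Age(yr) = zero_point + slope * [C/N].

    Defaults use one of the two relations quoted in the text;
    pass zero_point=11.20, slope=2.51 for the other.
    """
    return 10 ** (zero_point + slope * cn) / 1e9
```

For example, $\mathrm{[C/N]} = -0.4$ maps to roughly 4 Gyr with the first relation; the spread between the two relations at fixed [C/N] gives a feel for the systematic uncertainty.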
\section{Results}
\subsection{[C/N]-metallicity as a function of Galactic radius}
Figure~\ref{fig:cn_feh_maps_r} shows density maps in the [C/N]-[Fe/H] plane for stars located in different Galactocentric radii $R$.
The panels show the [C/N]-[Fe/H] for stars located from the inner to the outer disk, in $R$ bins of 2 kpc starting at 3 kpc on the left hand panel and finishing at 13 kpc on the right hand panel.
It is possible to notice the stark difference in the [C/N]-[Fe/H] relations between the different panels. The first panel ($3<R<5$ kpc), containing only 62 stars, shows two separated groups of stars with different metallicities and [C/N], which could be attributed to the overlap between the inner disk and the bulge. The second panel ($5<R<7$ kpc), containing 1488 stars, shows essentially one sequence of stars, relatively metal-rich and high in [C/N]. The stars follow a tight relation between [C/N] and [Fe/H], suggesting they experienced a rapid chemical enrichment in the past, probably close to a closed box. The panel further shows a large number of background stars which are not part of this relation and give an indication of a secondary sequence at lower metallicities, but there are not enough stars to confirm this.
The third panel ($7<R<9$ kpc), containing 7016 stars, corresponds to the solar neighbourhood. This region shows two separated groups of stars. The first one is a sequence with properties similar to the sequence seen in the adjacent left panel, namely its stars are rather metal-rich and [C/N] enhanced, and follow a steep relation. The second sequence, which could be a continuation of the background population of the previous panel, shows a tight relation between [C/N] and [Fe/H], reaching more metal-poor stars and higher [C/N] values.
The groups are separated, agreeing with N20's results about the two sequences in the solar neighbourhood. These sequences were not seen by \cite{Hasselquist_2019ApJ...871..181H}, probably because they used stars covering a wider range in stellar parameters, which might have blurred the distributions. In fact, they noted that this same panel showed a larger scatter in the [C/N]-[Fe/H] relation with respect to the rest of the panels.
The fourth panel ($9<R<11$ kpc) contains 7322 stars and presents one main sequence of stars that is rather metal-poor as well as [C/N]-poor. This suggests that the outer disk has had a slower star formation history, with stars being rather young. There is however a tight relation between [C/N] and [Fe/H], similar to the lower sequence of the solar neighbourhood. As in the $5<R<7$ kpc panel, there is a significant background of stars which do not follow the relation. Here, however, they are rather metal-rich, and could be attributed to the tail of the upper sequence seen in the solar neighbourhood panel. The last panel ($11<R<13$ kpc) contains 1764 stars, almost all metal-poor with low [C/N], suggesting these stars are probably rather young. This is in agreement with previous studies, which have found a dominance of younger stars in the outer disk \citep{2017ApJS..232....2X, 2020ApJS..249...29H}. The relation between [C/N] and metallicity in this panel is rather loose.
\cite{Feuillet_2019MNRAS.489.1742F} also plotted the AMR at different Galactic radii and heights. Comparing with their results for the Galactic plane, it is possible that the turn-around in their AMR leading to a C-shape is due to the appearance of the second sequence, which, as extensively discussed by N20, could produce a large scatter for typical uncertainties in stellar ages. The C-shape in the AMR appears in the outer-disk panels as well, but in Fig.~\ref{fig:cn_feh_maps_r} the secondary sequence tends to disappear. This might be a selection effect, since here only RC stars are considered, while \cite{Feuillet_2019MNRAS.489.1742F} include a wider range of surface gravities, hence luminosities.
It is worth commenting on the selection effects of these distributions due to the bias induced by selecting only metallicities above $-0.3$ dex. Because the star formation efficiency is different across the disk, stars of the same metallicity in the inner and outer parts of the disk do not necessarily reflect the same timescales and epochs of formation and hence cannot be interpreted within a single evolutionary framework. Allowing a wider range in metallicity, however, might induce further scatter in the [C/N]-age relation \citep{Das_2020MNRAS.493.5195D} and other systematics which this work is trying to avoid.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{clusters_sn.png}
\caption{Density map of [C/N] and [Fe/H] in the solar neighbourhood, together with the three main clusters found using the mean-shift algorithm. }
\label{fig:clusters}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{Toomre.png}
\includegraphics[scale=0.35]{Lz_e.png}
\includegraphics[scale=0.35]{jr_jz.png}
\caption{Kinematics of selected groups. The colors follow the classification from Fig.~\ref{fig:clusters}. }
\label{fig:kinematics}
\end{figure*}
\subsection{Two populations in the solar neighbourhood}
In order to see if the two AMR sequences found by N20 are also present in this larger and independent dataset, the density map of [C/N] and [Fe/H] for the stars located in the solar neighbourhood is shown again in Fig.~\ref{fig:clusters}. If these sequences are produced by two different populations tracing different chemical enrichment histories, investigating further chemical patterns as well as kinematics is useful. Therefore, only stars located at the maxima of the distributions were selected using the mean-shift clustering algorithm, which is designed to find local bumps in a density estimate of the data \citep[see Chapter 6.4 of][]{astroMLText}. The data of the solar neighbourhood support three main groups, which are located in the two main populations. One population is more metal-rich (reaching [Fe/H] of 0.3 dex) and can be considered older because its [C/N] ratios are higher (see e.g. Fig.~\ref{fig:casali}); the other spans a more extended range in metallicity, reaching [Fe/H] values as low as $-0.2$, as well as a more extended range in [C/N], suggesting a wider range in ages that reaches younger ages than the more metal-rich, older population.
The groups are shown as coloured contours in Fig.~\ref{fig:clusters}. The stars from the upper population are given a red colour, while the stars from the lower population groups are given blue colours. These are studied in terms of their kinematics and other elemental abundances.
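Mean shift iteratively moves each point towards the kernel-weighted mean of its neighbourhood until it settles on a density mode, and points converging to the same mode form one cluster. A self-contained numpy sketch of the idea (the Gaussian kernel and the bandwidth value below are illustrative choices, not necessarily those used for Fig.~\ref{fig:clusters}):

```python
import numpy as np

def mean_shift(X, bandwidth, n_iter=100, tol=1e-6):
    """Minimal mean-shift mode finder with a Gaussian kernel."""
    modes = X.astype(float).copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, p in enumerate(modes):
            # Gaussian weights of all data points around the current position
            w = np.exp(-np.sum((X - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            shifted[i] = w @ X / w.sum()
        converged = np.max(np.abs(shifted - modes)) < tol
        modes = shifted
        if converged:
            break
    # merge modes closer than the bandwidth into one cluster each
    centers, labels = [], np.empty(len(X), dtype=int)
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return np.array(centers), labels
```

Unlike k-means, the number of clusters is not fixed in advance; it emerges from the bandwidth, which makes the method well suited to finding the local maxima of the [C/N]-[Fe/H] density map.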
\subsubsection{Kinematics}
Figure~\ref{fig:kinematics} shows the distributions of different dynamical quantities for the stars enclosed in the red and blue groups of Fig.~\ref{fig:clusters}.
The left-hand panels show the distributions of the stars in the classical Toomre diagram, which was also plotted by N20. While there is a large overlap between both groups, the red group has a tendency of having lower tangential velocities but slightly higher vertical and radial velocities than the blue group. This result agrees with N20, who also found that their red (old) group had a larger velocity dispersion and a larger rotational lag than the blue (younger) group, which is encouraging. Differences in kinematics are in any case minimal, here and in N20, making it difficult to add more about the origin of these two groups from the velocities alone.
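The Toomre diagram simply plots the quadrature sum of the radial and vertical velocity components against the tangential one. A minimal helper, assuming $(U, V, W)$ are already expressed in km/s relative to the Local Standard of Rest:

```python
import math

def toomre_point(u, v, w):
    """Coordinates (V, sqrt(U^2 + W^2)) of a star in the classical
    Toomre diagram; (u, v, w) in km/s relative to the LSR."""
    return v, math.hypot(u, w)
```

Stars with large $\sqrt{U^2 + W^2}$ and a rotational lag (low $V$) sit towards the upper left of the diagram, which is the tendency described above for the red group.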
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{abundances.png}
\caption{Abundance ratios as a function of metallicity of selected groups from the [C/N]-[Fe/H] diagram. Linear regression fits to the data of the corresponding groups are displayed for guidance. }
\label{fig:abundances1}
\end{figure*}
The middle panels plot the relation between the angular momenta $L_z$ of the stellar orbits and their eccentricities $e$. As in the left-hand panel, the overlap between both groups is large, and only slight differences can be identified from the histograms. Both groups have relatively circular orbits, with comparable eccentricity ranges below $e=0.4$. The red group shows, however, a tendency towards slightly more eccentric stars at $0.3<e<0.4$ than the blue group, indicating that more stars in the red group have been kinematically heated. This makes sense since the red group has overall higher [C/N] abundance ratios than the blue group, hence the red group has stars that are generally older, and old stars are kinematically hotter than young stars \citep[e.g.][]{2008A&A...480...91S, Mackereth_2019MNRAS.489..176M, 2020arXiv200406556S, 2021MNRAS.503.1815B}.
While the overlap in $L_z$ is large, the red group has overall lower angular momenta than the blue group. If the red group is associated with a sequence of stars tracing the chemical enrichment history of the inner disk, then they could have radially migrated. The `churning' effect of radial migration moves stars outwards, keeping eccentricities fixed while losing angular momentum \citep{2020MNRAS.493.1419F}, which is consistent with the behaviour of the distributions of the red group compared to the blue group. However, two distributions with different $L_z$ do not necessarily mean that one population has lost angular momentum, since eccentric orbits might also indicate stars that are merely visiting the solar neighbourhood. In fact, the right-hand panels show the radial and vertical actions $j_r$ and $j_z$, respectively. Again, both groups heavily overlap in the scatter plot, but the distributions allow for a better inspection of possible differences. While the radial action distribution is very similar for both groups, there is a difference in the vertical actions $j_z$.
The difference in $L_z$ and $j_z$ between the red and the blue sequences might thus be interpreted as the stars having been born in different regions. More specifically, the red sequence might contain stars formed at smaller Galactic radii that, thanks to their more eccentric orbits compared to the blue sequence, reach the solar neighbourhood.
\subsubsection{Elemental abundances as a function of metallicity}
The red and the blue groups are also studied in other abundance planes, which are shown in Fig.~\ref{fig:abundances1}.
Each panel shows a scatter plot, coloured red and blue, of a different abundance ratio as a function of metallicity. Linear regression fits to the corresponding distributions of each group as a function of metallicity have been performed and are indicated with solid lines of the same colour. The slope and its error are indicated in each panel.
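The slope and its error quoted in each panel follow from an ordinary least-squares fit; a sketch of the standard formulae (a generic OLS estimate, not necessarily the exact regression routine used for the figure):

```python
import numpy as np

def slope_with_error(x, y):
    """Least-squares slope of y vs x and its standard error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    slope = np.sum((x - xm) * (y - ym)) / sxx
    resid = y - (ym + slope * (x - xm))
    # residual variance with n - 2 degrees of freedom
    s2 = np.sum(resid ** 2) / (len(x) - 2)
    return slope, np.sqrt(s2 / sxx)
```

The slope error scales with the residual scatter and inversely with the metallicity baseline, which is why the narrow [Fe/H] range of the red group yields larger slope uncertainties than the blue group.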
In all panels the abundance trends show a smooth transition between the two sequences. For most of the elements both sequences merge into one, behaving as the classical {\it low-$\alpha$} chemical sequence found in the APOGEE data \citep{Jonsson_2020AJ....160..120J}.
The slopes of the regression fits of the abundances as a function of metallicity, however, differ between the red and blue groups in some cases, such as [C+N/Fe], [O/Fe], [Mg/Fe], [Si/Fe] and [Ni/Fe].
As discussed in \cite{2015MNRAS.453.1855M}, while the [C/N] abundance changes in giants after they experience dredge-up, the total amount of C and N stays the same, reflecting the composition of the birth cloud, which, like many other chemical abundances, will change as stars die \citep[see also Fig. 2 of][]{2016MNRAS.456.3655M}.
Hence, [C+N/Fe] ratios have been used to study differences in the star formation histories across the disk \citep{2015MNRAS.453.1855M, Hasselquist_2019ApJ...871..181H} as well as metal-poor stars which might have formed in dwarf galaxies and been accreted later onto the Milky Way \citep{Hawkins_2015MNRAS.453..758H, Das_2020MNRAS.493.5195D, Horta_2021MNRAS.500.1385H}. Around solar metallicities, C is made both by He-burning in the cores of stars and in AGB stars, while N is made mostly in AGB stars \citep{Kobayashi_2020ApJ...900..179K}; therefore [C+N/Fe] traces the contribution of AGB stars. The fact that the red group has overall higher [C+N/Fe] abundances than the blue group might be attributed to the difference in metallicity, since both C and N have a metallicity dependency, and the red group has mostly metal-rich stars while the blue group covers a wider range in metallicity. It is noted that the [C+N/Fe] ratios are above zero even for the blue sequence. This might be related to a systematic uncertainty in the N abundances of APOGEE, which yield a value of 0.2 for the solar N abundance \citep{Jonsson_2020AJ....160..120J}. Indeed, the recent study of nitrogen abundances in the Sun by \cite{2020A&A...636A.120A} points towards a systematic difference between N measured from molecular and from atomic features. It is not the aim of this paper to correct for systematic effects but to illustrate the trends in both populations and look for differences. This is another reason why this work does not attempt to relate [C/N] directly to an age value.
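Since [C/Fe] and [N/Fe] are logarithmic ratios, combining them into [(C+N)/Fe] requires summing number densities rather than the bracket values themselves. A sketch of the conversion, where the solar abundances $A(\mathrm{C})$ and $A(\mathrm{N})$ are illustrative values, not necessarily the APOGEE zero points:

```python
import math

# illustrative solar abundances A(X) = log10(N_X / N_H) + 12 (assumed values)
A_C_SUN, A_N_SUN = 8.43, 7.83

def c_plus_n_over_fe(c_fe, n_fe):
    """[(C+N)/Fe] from [C/Fe] and [N/Fe], summing number densities."""
    nc, nn = 10 ** (A_C_SUN + c_fe), 10 ** (A_N_SUN + n_fe)
    nc_sun, nn_sun = 10 ** A_C_SUN, 10 ** A_N_SUN
    return math.log10((nc + nn) / (nc_sun + nn_sun))
```

The combined ratio is dominated by carbon, the more abundant of the two, so a nitrogen enhancement alone moves [(C+N)/Fe] by less than the same enhancement in [N/Fe].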
Oxygen and magnesium are $\alpha$-capture elements that are produced inside massive stars and ejected into the interstellar medium via core-collapse supernovae (SNII). Both oxygen and magnesium belong to the few primary elements, i.e., their yields are not affected by the metal content of the progenitor star. Iron, on the other hand, is mostly produced by thermonuclear supernovae (SNIa), whose progenitors are lower-mass stars. Therefore there is a time delay between the enrichment in oxygen (or magnesium) and in iron \citep{1986A&A...154..279M}. There is an extensive literature on using in particular the [O/Fe]-[Fe/H] plane to constrain chemical evolution models, both of the Milky Way \citep{Johnson_2021arXiv210309838J} and of other galaxies \citep{2019A&ARv..27....3M}. The mass of the progenitor galaxy as well as the star formation history can be addressed from the relation between [$\alpha$/Fe] and [Fe/H] for a given stellar population \citep{1979ApJ...229.1046T}. The [$\alpha$/Fe]-[Fe/H] diagram -- hereafter Tinsley-Wallerstein (TW)\footnote{This is motivated by discussions among astronomers on social media in 2020 about naming this important diagram after Wallerstein and Tinsley, who first used and explained it. Many fundamental diagrams are named after the scientists who first explained them.} -- on the top right-hand panel of Fig.~\ref{fig:abundances1} shows that the blue group has lower [O/Fe] ratios than the red group at the same metallicity. It further shows that while the red stars seem to have reached a plateau at solar metallicities, the blue stars are still in the decreasing part of the TW diagram, suggesting that the star formation efficiency is lower for the blue group. A similar behaviour is seen for Mg, shown in the left-hand panel of the middle row of Fig.~\ref{fig:abundances1}.
It is interesting to note that these abundance planes in the APOGEE data show a turn-around in the abundances around solar metallicities \citep{Jonsson_2020AJ....160..120J}, which might also be seen in the [Mg/Fe] - [Fe/H] trends with optical spectroscopy \citep{Adibekyan_2012A&A...545A..32A}.
Silicon is another $\alpha$-capture element, and it presents a slight change of slope in the regression fits between the blue and the red populations. This element is believed to be produced by both SNII and SNIa, hence it does not correlate directly with Mg, which is mostly produced by SNII. Furthermore, its production mechanism in SNII differs from that of Mg, which translates into a dependency on the progenitor's mass \citep{2019ApJ...883...34B}. In the populations of this study, the Si abundance is a combination of contributions from many lower-mass SNII, in addition to SNIa. The fact that the red and the blue groups differ slightly can be an effect of the different metallicities of the groups (SNIa contribution) as well as of age (SNII contribution).
Nickel is an iron-peak element which is synthesised in SNIa as well as inside massive stars, and is expelled into the interstellar medium via explosions in essentially the same way as Fe. It is therefore expected that [Ni/Fe] is overall flat and solar over an extended range in metallicity. But as recently discussed by \cite{Kobayashi_2020ApJ...895..138K}, the large variety of possible SNIa progenitors makes it hard to reproduce and interpret the observed [Ni/Fe] abundance ratios, especially at high metallicities \citep[see also][for the effect of the yields of different SNIa prescriptions]{2021MNRAS.503.3216P}. Since the yields differ between the different white-dwarf progenitors of the supernovae, the elemental abundance ratios will be affected when changing the contribution of such progenitors. While the metal production of SNIa is independent of metallicity, their rate can be affected by metallicity, since the lifetime of the secondary star of the binary depends on the progenitor metallicity.
The red sequence shows a steeper trend of [Ni/Fe] with metallicity than the blue sequence. This could be attributed to a different metallicity effect on the progenitors (hence timescales) of the SNIa, affecting the [Ni/Fe] abundance ratios in a different way. The different environments would lead to different contributions from SNIa subclasses \citep{2021MNRAS.503.3216P}. The slight increase at metallicities above solar has been seen in the literature \citep{Adibekyan_2012A&A...545A..32A, Bensby_2014A&A...562A..71B} as well as in the bulk of the APOGEE data \citep{Jonsson_2020AJ....160..120J}. Considering the break in the slope between both sequences, it is interesting to associate this increase with an overlap between two stellar populations coexisting in the solar neighbourhood which trace different star formation histories.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{abundances_lines.png}
\caption{Abundance ratios as a function of [C/N] for the same sample as in Figure~\ref{fig:abundances1}. }
\label{fig:abundances}
\end{figure*}
\subsubsection{Elemental Abundances as a function of [C/N]}
Figure~\ref{fig:abundances} presents the same abundance planes as Fig.~\ref{fig:abundances1} but as a function of [C/N]. This allows for an alternative way to study the temporal evolution of the elements in each sequence, and thus chemical enrichment rates. Following the previous section, linear regression fits to the data have been performed to help interpret the results. When studying the abundance ratios in the [C/N] plane, most of the merged sequences seen in Fig.~\ref{fig:abundances1} break into two separate sequences.
Interestingly, the $\alpha$-capture elements O and Mg, which showed sequences with different slopes in the TW diagram of Fig.~\ref{fig:abundances1}, merge into one sequence in Fig.~\ref{fig:abundances}. This might be due to the primary nature of these elements, which are produced by massive stars regardless of their initial metallicity. Silicon, on the other hand, has different slopes for the blue and the red sequences, providing evidence for the SNIa production of this element, in addition to SNII.
Tight relations of [O/Fe] or [Mg/Fe] with age have been discussed in the literature \citep{2018MNRAS.475.5487S, delgado_2019A&A...624A..78D, Haywood_2019A&A...625A.105H, Hayden_2020arXiv201113745H}. That the relation of [O/Fe] or [Mg/Fe] with [C/N] is tight reinforces the case for using [C/N] as a proxy for precise ages in RC stars.
The [C+N/Fe] abundances present a large scatter, with a slight negative trend as a function of [C/N] for both groups. Since at these metallicities AGB stars produce C and N, this abundance ratio is expected to increase with time, hence showing a negative trend with [C/N]. The two sequences are offset, suggesting that the red stars have received more pollution from AGB stars than the blue sequence at a given time. That could happen if star formation is fast, reaching higher metallicities at a given time; the higher metallicities serve as seeds for a higher C and N production in AGB stars.
While no significant offset between the [Al/Fe] mean abundances of the blue and the red groups is found, the overall trends show opposite slopes, with the blue sequence increasing and the red sequence decreasing with [C/N]. The regression fits are however uncertain, since the abundances have high scatter. Aluminum is an odd-Z element which is produced inside stars and expelled into the interstellar medium via core-collapse supernovae. More specifically, it is produced by nuclear reactions which use Ne and Na as seeds. While these are also generally produced inside stars, higher metallicity stars have a larger Al production because some seeds are already available \citep{Kobayashi_2020ApJ...900..179K}. This metallicity dependence may show up as a stronger effect in the different sequences.
Calcium shows a stark discontinuity between the red and the blue sequence in the [Ca/Fe]-[C/N] plane, in contrast to the TW diagram shown in Fig.~\ref{fig:abundances1}. The abundances of each sequence follow tight relations with opposite trends: positive for the blue sequence and negative for the red one. This is very similar to the case of [Al/Fe], and can be understood from the same arguments. Calcium is an $\alpha$-capture element produced in massive stars, but it is a secondary element whose yields depend on metallicity. Abundances of Ca are in general well measured, since there are many clean lines with accurate atomic data for calcium in the APOGEE spectra, as well as in optical spectra \citep{Jofre_2019ARA&A..57..571J}; the trends here are therefore more accurate than in the case of aluminum.
The bottom panels of Fig.~\ref{fig:abundances} show the abundances of titanium, manganese and nickel. All of the abundance ratios show similar behaviour in their differences between the blue and the red stars: the trends as a function of [C/N] have a similar direction but a systematic offset, with the red stars being overall more enhanced. Both [Ti/Fe] and [Mn/Fe] have large scatter in their abundances, but [Ni/Fe] is tight. Titanium is a difficult element: it behaves like an $\alpha$-capture element and is therefore commonly associated with that family, even if its production mechanism is not like that of the rest of the $\alpha$-capture elements. Modern theoretical yields of Ti do not match the observations \citep{Kobayashi_2020ApJ...900..179K}, making it difficult to interpret why both sequences have an offset in the [Ti/Fe]-[C/N] plane. This offset is also seen in N20.
Mn and Ni, on the other hand, are produced in SNIa, and given the variety of progenitors for SNIa, including the binary companion of the exploding white dwarf, both elements depend on metallicity \citep{Kobayashi_2020ApJ...895..138K} but also on the explosion mechanism \citep{2021MNRAS.503.3216P}. Manganese, while difficult to measure because of the strong hyperfine structure splitting in its lines, is one of the elements best suited to test chemical evolution driven by SNIa. The stark differences seen between the sequences in the [Mn/Fe]-[C/N] plane hint towards two populations having formed in different environments, leading to different types of SNIa: two stars of the same [C/N] (e.g. age) have quite different [Mn/Fe]. This is reinforced by the [Ni/Fe]-[C/N] panel. The differences in [Ni/Fe] for coeval stars are smaller than in the case of manganese, but still significant given that the overall scatter in the [Ni/Fe] abundances is much tighter.
It is worth commenting on the abundance planes that are in tension with N20, namely [O/Fe], [Mg/Fe], [Al/Fe] and [Ca/Fe].
N20 found that [O/Fe] had a large difference between the sequences, while [Mg/Fe] showed a change of trend. In addition, the red sequences of N20 for [Al/Fe] and [Ca/Fe] were positive and steeper compared to the blue sequences, whereas here negative trends are obtained. There are some possible explanations that can help understand this difference. First, there is a selection effect: an [$\alpha/$M] cut has been applied to select the thin disk only, which means that it is not possible here to reach the high Mg, O and Ca values of N20. Second, the abundances might have significant offsets between the two samples, since they are determined with very different methods and spectral signatures (fitting molecules in the case of APOGEE and measuring equivalent widths of atomic lines in the case of N20), which can lead to large differences in the final abundances \citep{2017ASInC..14...37J}. Third, the ages of the stars in the N20 red sequence can be up to 10 Gyr, whereas here the sequence might reach 6 Gyr at most (see Fig.~\ref{fig:casali}). Furthermore, the [C/N] range of the red sequence ($\sim 0.1$ dex) implies an age range of only a few Gyr, which makes the ``age trend'' for the red group based on the [C/N] abundances not directly comparable with N20's age sequences.
\subsection{Uncertainties}
\subsubsection{3D and non-LTE effects}
While the different trends might be interpreted as an overlap of populations tracing different star formation histories, it is important to consider possible systematics caused by 3D and non-LTE effects. For example, \cite{2019A&A...630A.104A} discussed how O abundances are affected by 3D and non-LTE effects at high metallicities. When an accurate prescription of the O triplet lines is considered for the determination of O, the apparent plateau of [O/Fe] observed at high metallicities is not seen. APOGEE results are obtained from molecular features, under the prescription of 1D and LTE, and do present a plateau.
The separation between the sequences in the [C/N]-[Fe/H] relation occurs in that metallicity range, so such effects might be responsible for part of the differences found for the two sequences in the TW diagram of Fig.~\ref{fig:abundances1}.
Other elements might also be affected. The recent work of \cite{2020A&A...642A..62A} illustrates the differences between LTE and non-LTE abundances for dwarfs and giants in the GALAH survey \citep{Buder_2020arXiv201102505B}. While it is not possible to directly compare the effects for giants in optical and IR spectra, one can expect differences of the same order of magnitude. In that work, the abundances that show a metallicity-dependent effect at solar metallicities in giants are C, Mg, Al and Mn. Indeed, \cite{Jonsson_2020AJ....160..120J} attributed some of the Mg features at solar and super-solar metallicities in the APOGEE data to this effect. As commented in \cite{Jonsson_2020AJ....160..120J}, the next data releases of APOGEE will include non-LTE corrections. It will be interesting to see if the breaks found between the red and blue sequences remain after these effects have been considered.
\subsubsection{Contaminations and biases in RC stars}\label{sect:biases}
The results presented here are based on the assumption that [C/N] correlates with age. In order to minimise systematic effects in this correlation, only RC stars have been considered. This yields a sample with well-determined distances \citep{Leung_2019MNRAS.489.2079L} as well as stars with very similar stellar parameters. The latter is necessary to attribute differences in abundances to an astrophysical reason and not to systematic uncertainties in the spectral analysis \citep{Nissen_2018A&ARv..26....6N}.
Still, restricting the parameter space does not ensure that [C/N] is unaffected by other internal processes in red giants, for example rotation \citep{2015A&A...583A..87S} or thermohaline mixing \citep{Lagarde_2017A&A...601A..27L}. Other fundamental problems in our understanding of stellar evolution theory still imply uncertainties in ages, which makes a direct estimate of the [C/N]-age relation in red giants difficult, such as possible dependencies of the mixing length theory on metallicity \citep{2017ApJ...840...17T} or simply the stellar evolution code employed \citep{2020arXiv201207957T, 2020A&A...635A.164S}. Binary evolution and mass transfer can also induce scatter in the [C/N]-age relation. If a star has accreted mass from a binary companion, its [C/N] could be affected and hence cannot be directly used as a mass (or age) proxy. Some such stars might not show signatures of binary evolution in their spectra, because the binaries could have merged and be now a single star \citep{Jofre_2016A&A...595A..60J, 2018MNRAS.473.2984I}. Distinguishing a star that is younger from one that has experienced mass transfer from the [C/N] abundances alone is still not obvious \citep{2019MNRAS.487.4343H}.
Using RC stars can further induce biases in the [C/N]-age correlation. In fact, \cite{2015MNRAS.453.1855M} avoided using RC stars in their discussions, because from the spectral parameters alone it is difficult to disentangle them from lower red giant branch (RGB) stars, which are at an earlier evolutionary phase \citep[see also][]{2017A&A...597L...3M}. \cite{Hasselquist_2019ApJ...871..181H} also avoided the parameter space of the RC in their [C/N]-[Fe/H] relations. Using RC stars can be a problem because low-mass stars experience further mixing and mass loss at the tip of the RGB, i.e. before settling onto the red clump. Stars of higher mass do not experience this, since their phase at the tip of the RGB is very short. This issue was investigated by \cite{2016MNRAS.456.3655M}, who studied an empirical relation between ages and [C/N] abundances using the first APOKASC catalogue \citep{2014ApJS..215...19P}. That sample has stars with APOGEE spectra (providing the same C and N abundances as used here) but also with asteroseismic observations from {\it Kepler}. The latter allow the evolutionary phase of the stars to be determined from the power spectrum. \cite{2016MNRAS.456.3655M} could thus benefit from the seismic analysis to distinguish stars in the RC and on the RGB, deriving two different age-[C/N] relations, both with comparable accuracies \citep[see also ][]{Lagarde_2017A&A...601A..27L}. They further applied the age-[C/N] relation derived for the RC in APOKASC to the entire RC catalogue of \cite{Bovy_2014ApJ...790..127B} and found results consistent with those obtained for APOKASC only.
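To make concrete how such an empirical age-[C/N] relation is applied once calibrated, here is a hedged sketch; the linear form and the coefficients $a$ and $b$ are hypothetical placeholders, not the actual calibration of \cite{2016MNRAS.456.3655M}:

```python
import numpy as np

def log_age_gyr(cn, a=1.0, b=1.5):
    """Map [C/N] (dex) to log10(age/Gyr) with a hypothetical linear
    relation; real calibrations differ between RC and RGB stars."""
    return a + b * np.asarray(cn, dtype=float)

# Higher [C/N] corresponds to lower stellar mass, hence older RC stars.
ages = 10.0 ** log_age_gyr([-0.5, -0.3, -0.1])
```

Applied to a catalogue, each star's [C/N] would be mapped to an age in this way, with RC and RGB samples requiring separately calibrated coefficients.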
\cite{Masseron_2017MNRAS.464.3021M} also studied the effect of extra mixing in RC stars, by comparing nitrogen abundances of APOGEE (and APOKASC) stars with different RGB models that consider extra mixing such as thermohaline mixing. While theoretical predictions hinted towards an increase in N along the RGB because of extra mixing, the observations of thin disk solar-metallicity stars did not support the predictions; in fact, only observations of metal-poor stars showed an increase of N along the RGB. They explained this by a mass effect, namely that their thin disk solar-metallicity sample was in general more massive than the metal-poor sample. The higher the mass of the star, the shorter its RGB phase, and hence the smaller the effects of extra mixing. The stars used here are selected to be solar-metallicity thin disk stars, so the effects of extra mixing altering the [C/N] abundances as a proxy for age should be small.
Finally, it is worth commenting that by selecting RC stars in the thin disk at solar metallicities, a bias towards young stars is induced. \cite{Bovy_2014ApJ...790..127B} discussed how these biases are increased in their RC catalogue because of the selection cuts, stressing that the RC does not randomly sample the underlying age distribution of stars, but is instead skewed towards younger ages.
Further noise can be due to contamination in the RC selection, since the RC catalogue has a 95\% purity \citep{Bovy_2014ApJ...790..127B}. These outliers, while present, still allow the overall trend of [C/N] with [Fe/H] to be seen, which reflects the different chemical enrichment histories in the disk that are the main focus of this work.
\section{Discussion}
\subsection{The evolutionary history of the Galactic disk}
An interesting prospect of using chemical abundances to study the evolution of stellar populations is interpreting phylogenetic trees \citep{Jofre_2017MNRAS.467.1140J, Jackson_2021MNRAS.502...32J}. As explained in these papers, using phylogenetic trees in this context is possible because there is heredity between stars through the chemical abundances, in addition to descent with modification, i.e., each stellar generation is more metal-rich than the previous one, hence modifying the chemistry of the ISM. Because understanding and quantifying these two processes is at the core of phylogenetic studies, trees are a suitable tool that can be applied in Galactic chemical evolution studies.
\cite{Jackson_2021MNRAS.502...32J} used abundances of solar twins very similar to those used by N20 to build such a tree. The tree showed two main stellar populations (branches), which were attributed to the thin and the thick disk. \cite{Jackson_2021MNRAS.502...32J} found that the root of the thin disk branch contained a group of stars whose branching pattern was poorly supported. This group had stars of very similar abundances and ages, and their kinematics suggested origins spread across the disk. The authors concluded this population could be part of the old thin disk and perhaps the product of a star formation burst. The blue sequence here in fact has two bumps (see Fig.~\ref{fig:clusters}), in particular one at lower metallicities and higher [C/N]. It could be that this bump is the product of the star formation burst seen in the phylogenetic tree of \cite{Jackson_2021MNRAS.502...32J}. { This bump could, however, also simply be a selection effect; concluding on its nature therefore requires addressing the selection function of this sample. }
The two main branches found in the tree might be attributed to the two AMRs, which trace two different stellar populations. Perhaps the red group here is in fact the tail of the thick disk, since it traces a faster chemical enrichment history, an older population and hotter dynamics overall. The ages of the red group here and in \cite{Jackson_2021MNRAS.502...32J} are not comparable, as already discussed above. It is however difficult to be certain about this interpretation when looking at the sequences in the panels outside the solar neighbourhood (Fig.~\ref{fig:cn_feh_maps_r}), in which one of the two sequences seems to disappear. It would be interesting to see the branching pattern of phylogenetic trees outside the solar neighbourhood, but that is the subject of another study.
As extensively discussed by N20, two sequences of age and metallicity in the solar neighbourhood can be explained with the {\it two-infall} model for chemical evolution \citep{1997ApJ...477..765C}. In this model there are two main episodes of star formation in the disk, which differ from each other. The first episode was fast, quickly enriching the interstellar medium with metals and forming the stars now belonging to the thick disk. Later on, a second episode, which lasts until today, has been forming the thin disk stars. Both episodes are driven by some infall of metal-poor gas from outside. In the thin disk, however, the enrichment of the gas is higher in the inner regions, which translates into a negative metallicity gradient with Galactic radius but not necessarily into two separate populations, unless migration of stars is considered.
Alternatively, \cite{Haywood_2019A&A...625A.105H} explain the two main episodes of disk evolution with a quenching of star formation in the thick disk. The rapid star formation in the thick disk might have driven strong stellar feedback, which could have enriched the outer disk through galactic fountains while exhausting the gas reservoir in the disk for making new stars. The creation of the bar could then have caused turbulence in the outer disk ($R>6$ kpc) through the outer Lindblad resonances, which helped the ISM to mix and form stars again \citep[see also a recent summary description of this scenario in][]{Katz_2021arXiv210202082K}. New gas containing some metals can also be brought into the disk by a merger event. While the stars from the merger will likely be deposited in the halo, the gas can mix with the disk, which can lead to a new episode of star formation \citep{2019ApJ...883L...5B}. To further constrain the details of such scenarios, a more accurate value of the ages that the [C/N] abundances reflect is essential.
\subsection{Age-metallicity relation and radial migration}
As more accurate and precise data become available for stars in the Galaxy, the AMR remains a fundamental tool to study the formation of the disk \citep{2018MNRAS.475.5487S}. It particularly helps to constrain the effects of radial migration. \cite{Johnson_2021arXiv210309838J} presents an interesting recent discussion of the effect of a dual AMR.
The chemical evolution models discussed by \cite{Johnson_2021arXiv210309838J} study the effect of different star formation rates on the AMR. In particular, a star formation rate which they call ``Late-Burst'', accounting for a late star formation burst produced perhaps by a galaxy merger, can create a bimodal AMR, especially at inner Galactic radii. They were however unable to confirm the duality of the AMR when comparing with the data of \cite{Feuillet_2019MNRAS.489.1742F}, their observational benchmark sample, but perhaps considering a subset of stars would allow for higher precision in the data.
The dual AMR here can also be the product of radial migration caused by resonances in the disk \citep{2010ApJ...722..112M}. As shown in that work, moving perturbers such as the bar or the spiral structure in the disk can change the motions of the stars, inducing radial migration inwards and outwards in the disk. As long as two perturbers act simultaneously, it is possible to obtain a bimodality in the change of angular momentum of the stars, regardless of the details of the past history of the Milky Way and of the pattern speeds or strengths of such perturbers. This bimodality might cause an effect in the metallicity distribution of the stars. The dual AMR seen only in the solar neighbourhood panel of Fig.~\ref{fig:cn_feh_maps_r} could thus be a signature of a spiral-bar resonance overlap, since such an overlap does not need to have the same effect everywhere in the disk.
\subsection{The Sun's birthplace}
It is interesting to comment further on the statement made by \cite{Haywood_2019A&A...625A.105H} about the solar birthplace. From radial migration arguments it is expected that the Sun formed at inner Galactic radii, because its metallicity is higher than the metallicity of the ISM in the solar neighbourhood 4.5 Gyr ago. They argue, however, that when considering the TW diagrams at different Galactic radii, such as those displayed by \cite{2015ApJ...808..132H}, the solar [$\alpha$/Fe] abundance ratio does not fall within the bulk of the abundance distributions of stars in the inner regions, which have generally higher [$\alpha$/Fe] abundances, but rather within the bulk of stars in outer regions.
In Fig.~\ref{fig:abundances1}, the solar abundances lie in the blue groups for all panels. If the red group in Fig.~\ref{fig:abundances1} corresponds to the tail of the inner disk population and the blue group corresponds to the tail of the outer disk population, then it is possible that the Sun indeed migrated inwards from the outer regions of the disk, as commented by \cite{Haywood_2019A&A...625A.105H}.
\section{Conclusion}
Inspired by the latest results of \cite{Nissen_2020A&A...640A..81N} on the two age-metallicity sequences in the solar neighbourhood, obtained from a high-precision analysis of solar-type stars, this work presented an alternative relation of [C/N] and metallicity for RC stars, to test whether the two sequences are present in larger and independent datasets. In red giants, [C/N] abundances have a direct dependency on stellar mass, hence age. The advantage of using [C/N] instead of ages lies in the fact that measuring precise C and N abundances is more feasible than deriving precise ages. Indeed, as concluded by N20, the two sequences had never been seen before because stellar ages are still too uncertain.
For this work, about 18,000 red clump stars were selected from the APOGEE DR16 catalogue, with precise measurements of [Fe/H], C and N, Galactic heights below 800 pc and solar-scaled $\alpha$ abundances. The density maps of [C/N] versus [Fe/H] for different Galactic radii revealed that in the solar neighbourhood two separate sequences of [C/N] and age are indeed present. Inner and outer regions of the disk, however, show just one dominant group.
By analysing other elemental abundance ratios and the dynamical distributions of stars belonging to each sequence, it was possible to see that the sequences are different. { One is composed of stars that are more metal-rich and [C/N]-rich, suggesting an old population. The other is composed of stars that cover a wider range in both metallicity and [C/N], reaching lower values and suggesting it contains younger stars. } The old and metal-rich population was shown to be kinematically hotter, with more eccentric orbits than the younger population. This is consistent with current expectations of metal-rich and old stars formed in the inner parts of the disk moving to the solar vicinity through radial migration, or simply passing by on their eccentric orbits. A dual AMR might also result from discrete episodes of star formation, as predicted in some models.
{ The [C/N]-[Fe/H] sequence that covers a wider range in [Fe/H] showed two bumps of stars at slightly different [C/N] abundances (hence ages). This bump could be related to the group found in the phylogenetic tree by \citet{Jackson_2021MNRAS.502...32J}, which was attributed to the product of a star formation burst. To conclude on this hypothesis, however, the star counts should take selection effects into account}.
Attributing the two populations to the tails of the inner and the outer disk, each having experienced a different chemical enrichment history, the solar abundances match the outer sequence better than the inner sequence. This supports the claim of \cite{Haywood_2019A&A...625A.105H} that the Sun formed in the outer regions and migrated towards the inner regions, and not the other way around, as believed in most studies.
Modern survey data enable us to find structure in the disk in several ways, allowing us to make progress in understanding how our home galaxy formed. This work shows the power of having high resolution spectra for large numbers of stars, which give us not only the metallicity and the classical [$\alpha$/Fe] ratio, but other abundances too. This increases our chances of finding structure in the Milky Way. Here [C/N] was used as a key alternative to age, and other abundances such as Mn, Al, Ni and Ca were used to show that the formation histories of the structures could be different, provided the systematic uncertainties in elemental abundance measurements are properly addressed. The revolution of combining multidimensional chemodynamical information to reveal the shared history of the stars in our Milky Way is just starting.
\begin{acknowledgements}
The author acknowledges Danielle de Brito Silva for vibrant discussions, as well as Thomas M\"adler, Payel Das and Poul Erik Nissen for important feedback on early versions of this paper. Sven Buder is further acknowledged for promoting and carrying out research on the Tinsley-Wallerstein diagram, and for generating a Wikipedia page with the details. The author finally warmly thanks the referee for their careful and friendly report, which improved this manuscript.
The figures resulted from studying and following the example figures of the book by \cite{astroMLText}, adapting the routines to this dataset and its purposes. This work was funded by FONDECYT Regular 1200703 and FONDECYT Iniciaci\'on 11170174.
\end{acknowledgements}
\section{Modeling Core Collapse Supernova Nucleosynthesis}\label{sect:current}
The complexity of neutrino transport and the frequent failure of self-consistent models for core collapse supernovae to produce explosions have generally divorced modeling of core collapse supernova nucleosynthesis from modeling of the central engine. Nucleosynthesis simulations commonly replace the central engine of the supernova with a parameterized kinetic energy \emph{piston} \citep{WoWe95,RHHW02,LiCh03} or a thermal energy \emph{bomb} \citep{ThNH96,NaSS98}. The energy of this blast wave, together with the placement of the \emph{mass cut} that demarcates ejecta from matter that is assumed to fall back onto the neutron star, are tuned to recover the desired explosion energy and ejected \nuc{56}{Ni} mass. These two methods are largely compatible with the largest differences coming in the inner regions of the ejecta \citep{AuBT91}. It is this inner region, where much of the iron, nickel and neighboring nuclei are produced, that is also most affected by the details of the explosion mechanism, including the effects of interactions between nuclei and the tremendous neutrino flux.
While the importance of neutrino interactions is manifest in the name of the $\nu$\emph{-process} and well documented for the r-process, neutrinos potentially impact all stages of supernova nucleosynthesis. During explosive nucleosynthesis, in the inner layers of the ejecta, where iron group nuclei result from $\alpha$-rich freezeout, interactions with neutrinos alter the neutronization, changing the ultimate composition.
Galactic chemical evolution calculations and the relative neutron-poverty of terrestrial iron and neighboring elements place strong limits on the amount of neutronized material that may be ejected into the interstellar medium by core collapse supernovae \citep{Trim91}. \citet{HWFM96} placed a limit of $10^{-4} \ensuremath{M_{\odot}}$ on the typical amount of neutron-rich $(\ensuremath{\mathrm {Y}_{e}}\lesssim0.47)$ ejecta allowed from a core collapse supernova. Those multi-dimensional simulations of the central engine that produce explosions \citep[see, e.g.,][]{HBHF94,JaMu96} predict the ejection of much larger quantities of neutron-rich iron group elements than this limit allows. In an effort to compensate, modelers have been forced to invoke the fallback of a considerable amount of matter onto the neutron star, occurring on a timescale longer than was simulated. One common property exhibited by recent multi-group simulations \citep{LMTM01,RaJa02,ThBP03,BRJK03} is a decrease in the neutronization of the inner layers of the ejecta due to neutrino interactions. This is a feature that current parameterized nucleosynthesis models cannot replicate, because they ignore the neutrino transport. While the decreased neutronization seen in multi-group transport models would reduce the need to invoke fallback, it also makes any fallback scenario more complicated, since the most neutron-rich material may no longer be the innermost.
\section{The Nucleosynthesis Implications of Neutrino Interactions}\label{sect:nu}
Because of the impact of the neutrinos on the nucleosynthesis, the nucleosynthesis products of future explosion simulations (utilizing multi-group neutrino transport) will be qualitatively different from those of parameterized bomb or piston models. This was demonstrated in exploratory calculations by \citet{McFW96}. The dominant processes are $\nu/\bar{\nu}$ and $\rm e^{\pm}$ captures on shock dissociated free nucleons, though at later times (and in regions with cooler peak temperatures) the more poorly known $\nu/\bar{\nu}$ and $\rm e^{\pm}$ captures on heavy nuclei may contribute significantly. In addition to their impact on the electron fraction, these interactions, as well as neutral current inelastic neutrino scattering off these nuclei \citep{BrHa91}, are also important to the thermal balance, potentially affecting the $\alpha$-richness of the ejecta, thereby altering the abundance of important nuclei like \nuc{44}{Ti}, \nuc{57}{Fe}, \nuc{58}{Ni} and \nuc{60}{Zn} \citep{WoWe95}. \citet{TrKM03} used a parameterized neutrino luminosity and spectrum \citep{JaMu96} to drive a supernova explosion and a tracer particle approach with a large nuclear network to examine the nucleosynthesis. They found a significant impact of nuclear electron capture, since some ejected zones reached densities $> 10^8 \ensuremath{\mathrm{\thinspace g \thinspace cm^{-3}}}$. However, since these authors ignored neutrino captures, their simulations tell only half of the story.
We have examined the effects of both electron and neutrino captures in the context of recent multi-group supernova simulations. These models \citep[see][for more details]{FHLM05} are based on fully general relativistic, spherically symmetric simulations \citep{LMTM01}. \cite{PWBJ05} have performed similar simulations using tracer particles from two dimensional simulations \citep{BRJK03}. In both cases, artificial adjustments to the simulations were needed to remedy the failure of the underlying models of the central engine to produce explosions. Also in both cases, the simulations were mapped onto simplified models at late times, because the neutrino transport simulations could not be run to sufficiently late times. While both of these shortcomings need to be addressed, these simulations nonetheless reveal the significant impact that neutrino interactions have on the composition of the ejecta.
We observe three distinct phases in the evolution of the electron fraction of the matter that will become the innermost ejecta as it collapses, passes through the stalled shock and is driven off by neutrino heating. During core collapse, the electron fraction in these lower-density, silicon-rich regions is little changed either by the electron capture that is deleptonizing denser regions or by the relatively weak neutrino flux. However, the combination of the larger neutrino flux after core bounce and the burning of silicon to iron in the still infalling matter greatly enhances the neutrino capture rates. With most of the matter still tied up in relatively inert heavy nuclei, the greater abundance of free protons over free neutrons allows antineutrino captures to dominate, lowering \ensuremath{\mathrm {Y}_{e}}. The passage of the matter through the stalled shock raises the temperature, dissociating the nuclei, but the concomitant increase in density prevents the lifting of the electron degeneracy. As a result of the high electron chemical potential, the balance of electron and positron captures strongly favors a lower electron fraction. However, the combination of neutrino and antineutrino captures favors higher \ensuremath{\mathrm {Y}_{e}}\ because of the slight dominance of neutrinos over antineutrinos as well as the slightly higher abundance of neutrons compared to protons in the fully dissociated, mildly neutron-rich matter. As a result, the electron fraction undergoes only mild excursions in this phase. Eventually, continued neutrino heating (or perhaps some other mechanism) is sufficient to reenergize the shock, in the process lifting the electron degeneracy in this innermost ejecta. As a result, the rate of electron captures drops while the rate of positron captures increases, causing \ensuremath{\mathrm {Y}_{e}}\ to rise.
While the dominance of neutrino captures over antineutrino captures drops as the matter becomes neutron-poor, their sum continues to favor higher \ensuremath{\mathrm {Y}_{e}}. Eventually, the electron chemical potential drops below half the mass difference between the neutron and proton, allowing positron and neutrino captures to dominate electron and antineutrino captures \citep{Belo03}. With both neutrino emission and absorption processes favoring a higher electron fraction, \ensuremath{\mathrm {Y}_{e}}\ rises markedly in this phase, reaching values as high as 0.55.
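The competition just described admits a standard estimate, not written out explicitly in the text, for the asymptotic electron fraction once neutrino and antineutrino captures on free nucleons dominate over electron and positron captures:
\begin{equation}
Y_e^{\rm eq} \simeq \frac{\lambda_{\nu_e}}{\lambda_{\nu_e} + \lambda_{\bar{\nu}_e}},
\end{equation}
where $\lambda_{\nu_e}$ and $\lambda_{\bar{\nu}_e}$ are the capture rates per free nucleon, set by the luminosities and mean energies of the two neutrino species. A modest excess of the $\nu_e$ capture rate over the $\bar{\nu}_e$ rate in this final phase is thus sufficient to drive the matter proton-rich, consistent with the \ensuremath{\mathrm {Y}_{e}}\ values as high as 0.55 quoted above.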
\begin{figure}[tb]
\begin{center}
\includegraphics[width=.9\columnwidth]{elements.eps}
\caption{Comparison of elemental abundances for Ca to Zn between models by \citet{FHLM05} (circles) and \citet{ThNH96} (squares) and observation determinations for metal-poor \citep{GrSn91} (upward pointing triangles) and extremely metal-poor \citep{CDSH04} (downward pointing triangle) stars.}
\label{fig:nu_elem}
\end{center}
\end{figure}
The global effect of this proton-rich ejecta is the replacement of previously documented overabundances of neutron rich iron peak nuclei (near the N=50 closed shell) \citep{WoWe95,ThNH96} with a mix of \nuc{56}{Ni} and \ensuremath{\alpha}-particles.
Production of \nuc{58,62}{Ni} is suppressed while \nuc{45}{Sc} and \nuc{49}{Ti} are enhanced. Elemental abundances of scandium, cobalt, copper and zinc are significantly closer to those observed (see Fig.~\ref{fig:nu_elem}). The results are, however, sensitive to the details of the simulations. \citeauthor{PWBJ05} found a significant sensitivity in the nuclear production to the expansion rate of the matter, which was a parameter in their late-time extrapolation. In addition to the global effects on the neutronization and entropy of the matter, our simulations, which include neutrino and antineutrino capture rates on heavy nuclei \citep{ZiLa05}, find that these reactions have a direct impact on the abundances of species like \nuc{53,54}{Fe}, \nuc{55,56,57}{Co}, \nuc{59}{Ni} and \nuc{59}{Cu} at the 10--20\% level.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{a40-cf.eps}\\
\caption{Isotopic abundances including the effects of neutrino interactions\citep{FMLT05} relative to solar abundances (filled circles) compared with earlier predictions \citep{ThNH96} (open circles), which neglected neutrino interactions. The effect of neutrino interactions is clearly seen for nuclei above $A>64$ where enhanced abundances are obtained. The inner panel presents the elemental abundances obtained from the isotopic abundances.\label{fig:nup}}
\end{center}
\end{figure}
A mild rp-process is also seen in this proton-rich ejecta (see Fig.~\ref{fig:nup}). \citeauthor{PWBJ05}, who neglected neutrino captures in their network, find this effect limited to zinc (A=64) by $\beta$-decay lifetimes of waiting-point nuclei that are longer than the expansion timescale. We have found that transformation of protons into neutrons by neutrino captures allows (n,p) reactions to take the place of $\beta$-decays, permitting significant flow to much higher A. We term this the $\nu$p process because of the essential role the neutrinos play in producing these light p-process nuclei. The quantity of p-process nuclei produced and the upper mass limit of this production are quite sensitive to the strength of the neutrino interactions and therefore to the details of the neutrino source as well as the proximity and duration of the neutrino exposure.
\section{Conclusion}
Our results, and those of \cite{PWBJ05}, clearly illustrate the need to include the full effect of the supernova neutrino flux on the nucleosynthesis if we are to accurately calculate the iron-peak nucleosynthesis from core collapse supernovae. The sensitivities displayed in these models point strongly to the need to couple simulations of core collapse supernova nucleosynthesis to models of the explosion mechanism. Not only will this foster a better understanding of the contribution of supernovae to our cosmic origins, but comparison of nucleosynthesis estimates with observations will also improve our understanding of the central engine.
\ack The authors wish to thank H.-Th. Janka and A. Mezzacappa for fruitful discussions. The work has been partly supported by the U.S. National Science Foundation under contract PHY-0244783, by the U.S. Department of Energy, through the Scientific Discovery through Advanced Computing Program, by the Swiss SNF grant 200020-105328 and by the Spanish MCyT and European Union ERDF under contracts AYA2002-04094-C03-02 and AYA2003-06128. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
\section{Introduction}
\label{sec:intro}
The 5.8 Gyr-old \citep{mam08}
main-sequence G8.5V star $\tau$ Ceti is the second closest \cite[3.65~pc,][]{vanL07} Solar-type star
reported to harbor both a tentative planetary system and a debris disk
\cite[after $\epsilon$ Eridani, e.g.][]{gre98,hat00}. The $\tau$ Ceti debris disk was
first identified as an infrared excess by IRAS \citep{aum85} and
confirmed by ISO \citep{hab01}. \cite{gre04} marginally resolved 850 $\mu$m
emission from the system with the James Clerk Maxwell Telescope (JCMT)/SCUBA,
revealing a massive (1.2 $M_\oplus$) disk extending to 55 AU from the star. Recent \emph{Herschel}
observations at 70, 160, and 250~$\mu$m resolve the disk well and are best fit
by a broad dust belt with an inner edge between $1-10$ AU and an outer edge at
$\sim55$~AU \citep{law14}. Due to its proximity and similarity to our Sun in age and spectral type, $\tau$ Ceti has been the
object of numerous searches for planets using the radial velocity technique
\cite[e.g.][]{pepe11}, most of which have proved unsuccessful.
Using extensive modeling and Bayesian analysis of radial velocity data from the High Accuracy Radial Velocity Planet Searcher (HARPS) spectrograph \citep{may03,pepe11}, the Anglo-Australian Planet Search (AAPS) on the Anglo Australian Telescope (AAT),
and the High Resolution Echelle Spectrograph (HIRES) on the Keck telescope \citep{vogt94}, \cite{tuo13} report evidence for a tightly-packed five planet system.
This purported planetary system consists of five super-Earths
with masses of $4.0-13.2$~$M_\oplus$ (for orbits co-planar with the disk), semi-major axes
ranging over $0.105-1.35$ AU, and small eccentricities, $e\sim0-0.2$.
The veracity of these planet candidates, however, remains controversial. \cite{tuo13} acknowledge
that the detected signals could also result from a combination of instrumental bias
and stellar activity, although no further evidence is given to support these alternative
interpretations. Also of note is the sub-Solar metallicity of $\tau$ Ceti, [Fe/H] $= -0.55\pm0.05$ dex \citep{pav12}, which makes it an interesting target for exoplanet searches due to the observed higher frequency of low-mass planets around low-metallicity stars \citep{jen13}.
We present interferometric observations of the $\tau$ Ceti system at
1.3~mm using the Atacama Large Millimeter/submillimeter Array (ALMA).
Millimeter imaging of this debris disk opens a unique window on the location
and morphology of the underlying population of dust-producing planetesimals
orbiting the star. While these large, kilometer-sized bodies cannot be
detected directly, millimeter observations probe emission from the large dust
grains produced through collisions that are not rapidly redistributed by stellar radiation and winds \citep{wya06}.
These new ALMA observations provide limits on the disk location and width,
which bear on the proposed planetary system within the disk.
In Section~\ref{sec:obs}, we present the ALMA observations of the $\tau$ Ceti
system. In Section~\ref{sec:results}, we describe the analysis technique
and disk model results. In Section~\ref{sec:disc}, we discuss the significance of the best-fit model parameters for
the dust belt inner edge, width, proposed planetary
system, and the origin of a bright, unresolved central emission source.
\section{Observations}
\label{sec:obs}
The $\tau$ Ceti system was observed using Band 6 (1.3~mm) in December 2014 with the ALMA 12-m array. We obtained one scheduling block (SB) in good weather (PWV = 1.76~mm) with 34 antennas, with the longest baselines sampling to $1\arcsec$ ($4$ AU) resolution.
These observations were complemented by two SBs taken with the Atacama Compact
Array (ACA) in July 2014 to provide shorter baselines and sensitivity to
emission at larger scales. For these ACA SBs, 11 operational antennas were
available. The observation dates, baseline lengths, and total time on-source are summarized in Table~\ref{tab:obs}.
For maximum continuum sensitivity, the correlator was configured to process two
polarizations in four 2 GHz-wide basebands
centered at 226, 228, 242, and 244~GHz, each with 256 spectral channels.
For the July SBs, the phase center was
$\alpha = 01^\text{h}44^\text{m}02\fs348$,
$\delta = -15\degr56\arcmin02\farcs509$ (J2000, ICRS reference frame). The phase center for the December SB
was $\alpha = 01^\text{h}44^\text{m}02\fs299$,
$\delta = -15\degr56\arcmin02\farcs154$ (J2000, ICRS reference frame). Both phase centers were chosen to be the
position of $\tau$ Ceti at the time of the observations given its proper motion of ($-1721.05$, $854.16$)
mas yr$^{-1}$ \citep{vanL07}.
The field of view is $\sim26\arcsec$, given by the FWHM size of the primary beam of the ALMA 12-m antennas at the mean frequency of 234 GHz.
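As a rough consistency check (our own, not from the data reduction), the primary-beam FWHM follows from the standard $\sim1.13\,\lambda/D$ approximation for ALMA antennas; at 234 GHz for a 12-m dish this gives $\approx25\arcsec$, consistent with the quoted $\sim26\arcsec$:

```python
import math

C = 2.998e8    # speed of light, m/s
NU = 234e9     # mean observing frequency, Hz
D = 12.0       # antenna diameter, m
ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

lam = C / NU                               # wavelength, ~1.28 mm
# 1.13 lambda/D is an assumed rule of thumb for ALMA antenna illumination
fwhm = 1.13 * lam / D * ARCSEC_PER_RAD     # ~25 arcsec
print(f"primary beam FWHM ~ {fwhm:.1f} arcsec")
```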
The data from all three SBs were calibrated separately using the \texttt{CASA}
software package (version 4.2.2). We corrected for time-dependent complex gain variations using
interleaved observations of the calibrator J0132-1654. Observations of J0137-2430 were used to determine the spectral response
of the system. The absolute flux calibration scale was derived from observations of Neptune, and a mean calibration
was applied to all four basebands, with a systematic uncertainty of $\sim10\%$ \cite[see][for a complete discussion of flux density models of Solar System bodies]{but12}.
To generate a first image at the mean frequency, 234~GHz (1.3~mm), we Fourier inverted the calibrated visibilities with natural weighting and a multi-frequency synthesis with the \texttt{CLEAN} algorithm. To improve surface brightness sensitivity, we included a modest taper using the \texttt{uvtaper} parameter in \texttt{CLEAN}, which controls the radial weighting of visibilities in the $(u,v)$-plane through the multiplication of the visibilities by the Fourier transform of a circular Gaussian (on-sky FWHM $= 6\arcsec$). With the added taper, however, it became difficult to resolve the outer disk and the central stellar emission. For clarity, we chose to image the disk and the star separately. We isolate the disk emission by subtracting a point source model from these data using the CASA task \texttt{uvsub} to account for the stellar emission. To isolate the stellar component, we image with \texttt{CLEAN} and no taper, only including baselines longer than 40 k$\lambda$, where we expect the star to dominate the emission (see Section~\ref{sec:results}). We choose to account for the primary beam in our modeling (see Section~\ref{subsec:modeling}) and thus do not apply a primary beam correction to any of these images.
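The Gaussian taper described above can be expressed as a radial weighting function: the Fourier transform of an on-sky circular Gaussian of FWHM $\theta$ is itself a Gaussian in the $(u,v)$-plane. A minimal sketch of that weighting (our own parameterization of the taper, not the \texttt{CASA} internals):

```python
import math

def taper_weight(uv_dist_klambda, fwhm_arcsec=6.0):
    """Gaussian uv-taper weight: the Fourier transform of an on-sky circular
    Gaussian of the given FWHM, normalized to 1 at zero baseline.
    (Our own parameterization, not the CASA implementation.)"""
    theta = fwhm_arcsec / 206265.0   # on-sky FWHM in radians
    q = uv_dist_klambda * 1e3        # baseline length in wavelengths
    return math.exp(-(math.pi * theta * q) ** 2 / (4.0 * math.log(2)))

# A 6" taper strongly down-weights long baselines, trading resolution for
# surface-brightness sensitivity:
for q in (0.0, 20.0, 40.0):
    print(q, taper_weight(q))
```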
\section{Results and Analysis}
\label{sec:results}
\subsection{Continuum Emission}
\label{subsec:continuum}
Figure~\ref{fig:images} shows an ALMA 1.3 mm image of the $\tau$ Ceti disk made
with the central star subtracted (middle panel) along with an image including
only baselines longer than 40 k$\lambda$ showing emission from the star and
not the disk (right panel). The \emph{Herschel/PACS} 70 $\mu$m star-subtracted
image (left panel) is shown for reference \citep{law14}.
The natural weight rms noise is 30 $\mu$Jy and 180 $\mu$Jy for the 12-m and ACA observations, respectively.
For the image showing only the stellar emission, the natural weight rms is
higher, 35 $\mu$Jy, since we exclude some baselines.
The belt is not detected in the ACA observations given the low signal-to-noise ratio, and
we only consider the 12-m data for imaging and modeling (see Section~\ref{subsec:modeling}).
For the 1.3~mm image of the star, the synthesized beam with natural weighting
is $1\farcs9\times1\farcs0$ ($7\times4$~AU), and position angle $= -87\degr$.
To improve surface brightness sensitivity, the image of the disk makes use of
a modest taper and has a synthesized beam size of $6\farcs5\times6\farcs1$
($24\times22$~AU), and position angle $= 55\degr$.
These 1.3~mm images reveal (1) patchy emission ($\sim6\sigma$) from a nearly face-on (low inclination) dust disk,
and (2) a bright ($23\sigma$), unresolved central peak coincident with the expected stellar position. The disk is located $\sim12\arcsec$ ($\sim44$~AU) from the star with a position angle of $\sim90\degr$ (E of N). \cite{rei88} quantify the position uncertainty, $\sigma$, of a point source given the signal-to-noise ratio, $S/N$, and the synthesized beam size, $\theta$: $\sigma \sim 0.5\theta/(S/N)\approx 0\farcs14$ for our observations. The position of the observed central source is coincident with the expected stellar position within this uncertainty.
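The quoted numbers follow directly from the 3.65 pc distance and the astrometric rule of thumb above; a quick sketch (values taken from the text, with the beam/signal-to-noise pairing being our reading of the calculation):

```python
DIST_PC = 3.65   # distance to tau Ceti (van Leeuwen 2007)

def arcsec_to_au(theta_arcsec, d_pc=DIST_PC):
    """Small-angle conversion: 1 arcsec at 1 pc subtends 1 AU."""
    return theta_arcsec * d_pc

def position_uncertainty(beam_fwhm_arcsec, snr):
    """Point-source astrometric uncertainty, sigma ~ 0.5 * theta / (S/N)
    (Reid et al. 1988)."""
    return 0.5 * beam_fwhm_arcsec / snr

print(arcsec_to_au(12.0))              # disk radius, ~44 AU
print(position_uncertainty(6.5, 23))   # ~0.14 arcsec
```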
\begin{figure}[ht]
\begin{minipage}[h]{0.37\textwidth}
\begin{center}
\includegraphics[scale=0.5]{fig1a}
\end{center}
\end{minipage}
\begin{minipage}[h]{0.3\textwidth}
\begin{center}
\includegraphics[scale=0.5]{fig1b}
\end{center}
\end{minipage}
\begin{minipage}[h]{0.3\textwidth}
\begin{center}
\includegraphics[scale=0.5]{fig1c}
\end{center}
\end{minipage}
\caption{\small \emph{(left)} \emph{Herschel/PACS} map of the 70~$\mu$m emission from the $\tau$ Ceti debris disk with the stellar contribution subtracted \cite[see][]{law14}. The \emph{Herschel} $5\farcs6$ beam size is shown by the ellipse in the lower left corner.
\emph{(center)} The $\tau$ Ceti debris disk imaged by ALMA at 1.3~mm with contours in steps of $2\sigma$, where $\sigma$ is the rms noise level in the image $\sim30$~$\mu$Jy. To isolate the disk emission, a point source model has been subtracted to account for the central stellar emission. Using natural weighting along with a $6\arcsec$ Gaussian taper, the resulting FWHM synthesized beam size is $6\farcs5\times6\farcs1$.
\emph{(right)} ALMA image of the 1.3~mm continuum emission for baselines longer than 40~k$\lambda$ showing only the central point source with contours in steps of $5\sigma$. Imaging with natural weighting and no taper yields a FWHM synthesized beam size of $1\farcs9\times1\farcs0$.
The position of the stellar photosphere is indicated in the left two panels by the blue star symbol. The primary beam of the ALMA antennas at 1.3~mm (FWHM $\sim26\arcsec$) is shown by the dashed blue circle in the right two panels.
}
\label{fig:images}
\end{figure}
\subsection{Emission Modeling Procedure}
\label{subsec:modeling}
We make use of the modeling scheme described in \cite{mac13,mac15b}. In this approach, we construct parametric models of the 1.3~mm disk emission and then compute corresponding model visibilities using a Python implementation\footnote{The code used to perform this part of the analysis is open source and freely available at \texttt{https:$//$github.com$/$AstroChem$/$vis\_sample}.} of the Miriad \texttt{uvmodel} task (Loomis et al. in prep). To determine the best-fit parameter values and their uncertainties, we employ the \texttt{emcee} Markov Chain Monte Carlo (MCMC) package \citep{for13}. This affine-invariant ensemble sampler for MCMC enables us to accurately sample the posterior probability functions of all model parameters with minimal fine-tuning. Due to the much higher rms noise of the ACA data, we choose to only fit models to the visibilities from the full 12-m ALMA array.
We model the millimeter emission of the $\tau$ Ceti debris disk as an axisymmetric, geometrically thin belt with an inner radius, $R_\text{in}$, an outer radius, $R_\text{out}$, and a radial surface brightness distribution described by a simple power law, $I_\nu \propto r^{\gamma-0.5}$.
Here, $\gamma$ describes the power law in radial surface density,
$\Sigma \propto r^\gamma$, and temperature is assumed to follow a power law,
$T\propto r^{-0.5}$, approximating radiative equilibrium for blackbody grains. To first order, the dust temperature also depends on the grain opacity, $T\propto r^{-2/(4+\beta)}$, where $\beta$ is the power law index of the grain opacity as a function of frequency, $\kappa_\nu\propto\nu^\beta$. \cite{gas12} measure $\beta = 0.58$, from observations of debris disks, which implies a temperature power law index of $\sim-0.44$. Thus, the expected change in the temperature profile due to $\beta$ is much smaller than the uncertainty in our resulting model fits and we choose to ignore this effect. Furthermore, the surface density and temperature profiles are degenerate, so we assume a blackbody profile and fit only for $\gamma$.
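The quoted index follows directly: with $\beta = 0.58$, the exponent of $T\propto r^{-2/(4+\beta)}$ is about $-0.44$, close to the blackbody value of $-0.5$. A one-line check:

```python
beta = 0.58                        # dust opacity spectral index (Gaspar et al. 2012)
temp_index = -2.0 / (4.0 + beta)   # exponent of T ~ r^index for opacity-modified grains
print(f"T ~ r^{temp_index:.2f}")   # ~ r^-0.44, vs r^-0.50 for blackbody grains
```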
We constrain the outer disk radius using previous JCMT/SCUBA observations \citep{gre04}, since the parent body disk may have a different size relative to the smaller grains imaged with \emph{Herschel}. While \cite{gre04} suggested that the disk was highly inclined, the \emph{Herschel} image (Figure~\ref{fig:images}, left panel) indicates that it is closer to face-on. The SCUBA image is therefore marginally resolved at best, so we take their derived disk radius of 55~AU as an upper limit on $R_\text{out}$ and allow the inner radius, $R_\text{in}$, to vary. We fit for the surface density radial power law index, $\gamma$, within a range of $-4$ to $4$. The unresolved central peak seen in images is modeled by a central point source with flux, $F_\text{cen}$. We do not fit for any relative offsets of the belt center, point source position, and phase center.
Models of the \emph{Herschel} images derive an inclination of $i = 35\degr\pm10\degr$ and position angle, $PA = 105\degr\pm10\degr$ \citep{law14}, and we assume that the millimeter belt emission is described by the same geometry. For all parameters, we assume uniform priors and require that the model be physically plausible: $F_\text{cen} \geq 0$, and $0 \leq R_\text{in} < R_\text{out} \leq 55$~AU.
A total flux density, $F_\text{belt} = \int I_\nu d\Omega$, provides the normalization for the belt emission. Using SCUBA on the JCMT, \cite{gre04} obtain a total flux density at 850 $\mu$m for the disk of $5.8\pm0.6$~mJy, including both the central star and likely contamination from background sources. Recent SCUBA-2 observations at 850 $\mu$m yield a total flux density of $4.5\pm0.9$~mJy, including a contribution from the star of $\sim1$ mJy (Holland et al., in prep.). An extrapolation of this measurement using the typical spectral index of 2.58 for debris disks at (sub)millimeter wavelengths \citep{gas12}, yields an expected flux density of the disk at 1.3 mm of $1.2\pm0.2$~mJy. This more robust single-dish flux measurement allows us to constrain the total flux density of our models with a Gaussian prior, $0.6\text{ mJy}\leq F_\text{belt}\leq1.6\text{ mJy}$, accounting for uncertainty in both the single-dish 850~$\mu$m flux measurement and the extrapolation to 1.3~mm.
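The 1.2 mJy figure can be reproduced from the numbers above: subtracting the $\sim1$ mJy stellar contribution from the 4.5 mJy SCUBA-2 total and scaling from 850 $\mu$m to 1.3 mm with $F_\nu\propto\nu^{2.58}$. A sketch (all values as quoted in the text):

```python
F_850_TOTAL = 4.5   # SCUBA-2 flux at 850 um, mJy (Holland et al., in prep.)
F_850_STAR = 1.0    # estimated stellar contribution at 850 um, mJy
ALPHA = 2.58        # typical debris-disk (sub)mm spectral index (Gaspar et al. 2012)

f_disk_850 = F_850_TOTAL - F_850_STAR             # ~3.5 mJy
# F_nu ~ nu^alpha, so scaling to longer wavelength goes as (lam_0/lam)^alpha
f_disk_1300 = f_disk_850 * (850.0 / 1300.0) ** ALPHA
print(f"predicted disk flux at 1.3 mm ~ {f_disk_1300:.2f} mJy")  # ~1.2 mJy
```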
The angular scale of the $\tau$ Ceti debris disk is $\sim25\arcsec$ in diameter. At 1.3~mm, the half power field of view of the 12-m ALMA antennas is comparable, FWHM$\sim26\arcsec$. Given this, we must account for the effect of the primary beam response on our model parameters. To do this, we model the ALMA primary beam as a Gaussian normalized to unity at the beam center and multiply each parametric disk model by this Gaussian beam model. Since we account for the effect of the primary beam in our modeling scheme, we choose not to apply a primary beam correction to the images shown in Figure~\ref{fig:images} (right panels).
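The size of this correction at the disk radius is substantial: a Gaussian beam of FWHM $26\arcsec$ attenuates emission $12\arcsec$ off axis to roughly half strength. A minimal sketch of the beam model described above (a circular Gaussian normalized to unity on axis):

```python
import math

def primary_beam(r_arcsec, fwhm_arcsec=26.0):
    """Gaussian primary-beam response, normalized to 1 on axis."""
    return math.exp(-4.0 * math.log(2) * (r_arcsec / fwhm_arcsec) ** 2)

# At the ~12" disk radius the belt is seen at only ~55% strength, so neglecting
# the beam would bias the fitted surface-brightness profile and total flux.
print(primary_beam(12.0))
```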
\subsection{Results of Model Fits}
\label{subsec:model_fits}
Modeling the ALMA 1.3~mm visibilities yields a final best-fit model with a reduced $\chi^2$ value of 1.1. Table~\ref{tab:mcmc} lists the best-fit values for each of the 5 free parameters along with their corresponding $1\sigma$ ($68\%$) uncertainties. The 1D (diagonal panels) and 2D (off-diagonal panels) projections of the posterior probability distributions for all parameters except the total belt flux, $F_\text{belt}$, are shown in Figure~\ref{fig:mcmc}. A full resolution image of this best-fit model (with a flat surface density profile, $\gamma=0$, and the central star excluded) is shown in the leftmost panel of Figure~\ref{fig:model}. The same model convolved with the $\sim6\arcsec$ ALMA synthesized beam and imaged like the observations in Figure~\ref{fig:images} is shown in the next two panels both without (left) and with (right) simulated random noise with an rms of 30~$\mu$Jy. Including the simulated noise results in a patchy image with emission structure similar to the ALMA 1.3~mm image shown in Figure~\ref{fig:images}. In both the ALMA and model images, the most significant peaks of emission are consistent with the expectation for a slightly inclined disk with PA near $90\degr$. The rightmost panel of Figure~\ref{fig:model} shows the residuals resulting from subtracting this best-fit model from the observed visibilities, again imaged with the same parameters. No significant features are evident.
\begin{figure}[ht]
\centerline{\psfig{file=fig2,width=10cm,angle=0}}
\caption[]{\small The 1D (diagonal panels) and 2D (off-diagonal panels) projections of the posterior probability distributions for the best-fit model parameters ($R_\text{in}$, $R_\text{out}$, $F_\text{cen}$, and $\gamma$) resulting from $\sim10^4$ MCMC trials. For a given parameter, the 1D distribution is determined by marginalizing over all other model parameters. The best-fit parameter value is indicated by the vertical blue dashed line. The 2D joint probability distributions show the $1\sigma$ (red) and $2\sigma$ (gray) regions for all parameter pairs, with the best-fit parameter values marked by the blue cross symbol.
}
\label{fig:mcmc}
\end{figure}
The best-fit total belt flux density is $F_\text{belt} = 1.0^{+0.6}_{-0.4}$~mJy, constrained by the Gaussian prior taken from previous single dish flux measurements. \cite{law14} note that the SCUBA and SCUBA-2 flux densities are higher than expected given an extrapolation from the \emph{Herschel} flux density measurements. This difference suggests that these earlier observations could be contaminated by the extragalactic background or that the disk could have an additional warm component. Given the limits in sensitivity of our ALMA data, the total flux density we measure is consistent with both the \emph{Herschel} and SCUBA/SCUBA-2 values and we cannot distinguish between these two scenarios.
Not surprisingly, given the sensitivity limits of the ALMA data, model fitting does not provide a strong constraint on the power law index of the surface density radial profile, $\gamma=-0.3^{+1.9}_{-1.3}$. With large uncertainty, this result implies a shallow surface density profile. In addition, we see a clear degeneracy between the surface density gradient, $\gamma$, and the disk outer radius, $R_\text{out}$ \cite[e.g.][]{mun96}. For very negative values of $\gamma$, the outer regions of the resulting belt model have low surface brightness, making it difficult to constrain the position of the outer edge. Thus, the contours shown in Figure~\ref{fig:mcmc} for that pair of parameters exhibit a slope, spreading out to span a wide range of possible outer radii for increasingly negative values of $\gamma$.
\begin{figure}[ht]
\centerline{\psfig{file=fig3,width=16cm,angle=0}}
\caption[]{\small \emph{(left)} A full resolution (pixel scale $\sim0\farcs05\sim0.2$~AU) image of the best-fit model to the 1.3~mm ALMA continuum emission. For simplicity, we have chosen a flat surface density profile with $\gamma=0$ and excluded the central stellar component. \emph{(center left)} The same best-fit model convolved with the $\sim6\arcsec$ ALMA synthesized beam and imaged as in Figure~\ref{fig:images}, but with no noise added. \emph{(center right)} The convolved best-fit model (same as shown in center left) with added simulated random noise at the same level as the ALMA 1.3~mm image, rms $\sim30$~$\mu$Jy. \emph{(right)} The residuals of the full best-fit model including the star and imaged with the same parameters as in Figure~\ref{fig:images}. The ellipse in the lower left corner shows the $6\farcs5\times6\farcs1$ (FWHM) synthesized beam size.
}
\label{fig:model}
\end{figure}
Another helpful way to visualize and compare the ALMA observations and the best-fit model is by deprojecting the real and imaginary visibilities based on the inclination, $i$, and position angles, $PA$, of the disk major axis, as is shown in Figure~\ref{fig:vis} \cite[see][for a detailed description of deprojection]{lay97}. Essentially, the coordinates for each visibility point are defined by a distance from the origin of the $(u,v)$ plane, $\mathcal{R} = \sqrt{u^2+v^2}$. To change to a deprojected, rotated coordinate system, we define an angle $\phi = \frac{\pi}{2}-PA$, where $PA$ is the position angle of the disk measured east of north. The new coordinates are defined as $u' = u\text{ cos}\phi+v\text{ sin}\phi$ and $v' = (-u\text{ sin}\phi+v\text{ cos}\phi)\text{ cos}i$, where $i$ is the inclination angle of the disk. Then, the new deprojected $(u,v)$ distance is $\mathcal{R}_{uv} = \sqrt{u'^2+v'^2}$. Assuming that the disk is axisymmetric, we average the visibilities azimuthally in annuli of $\mathcal{R}_{uv}$. For our ALMA $\tau$ Ceti observations, the real part of the deprojected visibilities is reasonably consistent with the prediction for a broad belt of emission, showing a central peak and several oscillations of decreasing amplitude. The constant offset from zero is the visibility signature of the unresolved central peak we see clearly in the images. The imaginary visibilities are essentially zero, indicating that there is no asymmetric structure in the disk, which is consistent with the absence of any significant residuals in Figure~\ref{fig:model} (rightmost panel). Note that we are lacking $(u,v)$ coverage on baselines shorter than $\lesssim20$ k$\lambda$, the region of the visibility curve with the most structure.
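The deprojection above amounts to a few lines of \texttt{numpy}; the sketch below implements the equations in the text, with the \emph{Herschel}-derived geometry ($i = 35\degr$, $PA = 105\degr$) as defaults:

```python
import numpy as np

def deproject(u, v, inc_deg=35.0, pa_deg=105.0):
    """Rotate (u, v) by the disk position angle and compress the minor-axis
    coordinate by cos(i), following Lay et al. (1997); returns the deprojected
    baseline length R_uv = sqrt(u'^2 + v'^2)."""
    phi = np.radians(90.0 - pa_deg)   # rotation angle; PA measured E of N
    inc = np.radians(inc_deg)
    up = u * np.cos(phi) + v * np.sin(phi)
    vp = (-u * np.sin(phi) + v * np.cos(phi)) * np.cos(inc)
    return np.hypot(up, vp)

# A face-on disk (i = 0) leaves baseline lengths unchanged, regardless of PA:
u, v = np.array([30.0]), np.array([40.0])
print(deproject(u, v, inc_deg=0.0))   # 50 klambda
```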
\begin{figure}[ht]
\centerline{\psfig{file=fig4,width=10cm,angle=0}}
\caption[]{\small The deprojected real (filled symbols) and imaginary (open symbols) visibilities for the ACA (blue diamonds) and 12-m array (black circles), compared to the best-fit belt model (red solid line). The single dish SCUBA-2 flux (Holland et al., in prep.) extrapolated from 850 $\mu$m to 1.3 mm is also plotted at $\mathcal{R}_{uv} = 0$ k$\lambda$.
}
\label{fig:vis}
\end{figure}
\section{Discussion}
\label{sec:disc}
We have obtained ALMA 1.3~mm observations of the $\tau$ Ceti system using both the ACA and the full 12-m array with baselines corresponding to scales of $1\arcsec$ ($4$ AU). The resulting image shows emission from an outer dust disk located $\sim12\arcsec$ ($\sim44$ AU) from the star surrounding an unresolved central peak. We fit parametric models to the millimeter visibilities, which included two components: (1) an outer disk with a radial surface density profile described by a power law with index $\gamma$, and (2) a point source at the stellar position. In the context of our simple model, this analysis provides tentative constraints on the location of the disk inner edge and the width of the disk. We now compare the model fits to previous \emph{Herschel} observations and discuss implications for the geometry of the proposed inner planetary system located within the dust belt.
\subsection{Location of the Disk Inner Edge and Belt Width}
\label{subsec:inner_edge}
Our best-fit model yields an inner radius for the disk of $6.2^{+9.8}_{-4.6}$~AU, consistent with the analysis of \emph{Herschel} observations that constrained the inner edge of the disk to be between 1 and 10~AU from the star \citep{law14}. For comparison, the planetary system proposed by \cite{tuo13} consists of five super-Earths in a tightly-packed configuration with semi-major axes ranging over $0.105-1.35$~AU. Given the uncertainties on $R_\text{in}$ from our best-fit model, the disk could extend well into this inner planetary system ($R_\text{in} < 1$ AU) or end far beyond the outermost planet ($R_\text{in} > 2$ AU). None of the proposed planets has a large enough orbital radius or mass to cause significant perturbations or clear the disk beyond 3 AU (within the range of $R_\text{in}$ allowed by our models). \cite{law14} use numerical simulations to show that the system would be stable with an additional Neptune-mass planet on an orbit of $5-10$ AU, the largest-mass planet at such separations that cannot be ruled out by the radial velocity data.
\begin{figure}[ht]
\centerline{\psfig{file=fig5,width=14cm,angle=0}}
\caption[]{\small \emph{(left)}~The deprojected real component of the expected complex visibilities for belt models with our best-fit $R_\text{in}=6.2$~AU and $\gamma=-1,0,+1$ (dot-dash green line, solid red line, and dotted purple line, respectively), and a model with $R_\text{in} = 20$~AU and $\gamma=0$ (dashed blue line). The real visibilities from our ACA observations presented here are shown by the black points and are consistent with all four models.
\emph{(center)}~The real visibilities of simulated ACA 1.3~mm emission for models with $\gamma=0$ and $R_\text{in}=6.2$ and $20$~AU (red and blue points, respectively). With 10 antennas and 10 hours on source, these models are easily distinguishable.
\emph{(right)}~The real visibilities of simulated ACA 1.3~mm emission for models with $R_\text{in}=6.2$~AU and $\gamma=+1$ and $-1$ (purple and green points, respectively). Again, these profiles are clearly different in shape, with the zero-crossing null locations shifted by $>10$ k$\lambda$.
}
\label{fig:sim}
\end{figure}
The belt position and width are strongly constrained by the location of the first null in the deprojected real visibilities \cite[see Figure~\ref{fig:vis},][]{mac15b}. Although we obtained some ACA data, the integration time was short, and the resulting sensitivity (rms $\sim180$~$\mu$Jy) at short baselines ($< 20$~k$\lambda$) was insufficient to discriminate between disk models with inner radii of $1-10$~AU, the parameter space with significant implications for the proposed planetary system. New observations with shorter baselines are needed to better determine the location of the dust belt, as well as its radial surface density gradient. To demonstrate the contribution that such observations would make to our analysis, we carried out simulations of ALMA ACA observations (rms 60~$\mu$Jy, using 10 antennas in the Cycle 4 setup) at 1.3~mm for a model with our best-fit $R_\text{in} = 6.2$ AU and $\gamma=-1,0,+1$, and a model with $R_\text{in} = 20$ AU and $\gamma=0$, all consistent with the ALMA observations presented here. Figure~\ref{fig:sim} (left panel) shows the real component of the expected complex visibilities for all four models, along with our current ACA observations. The center and right panels show the real part of simulated ACA visibilities for all four belt models compared to the expected theoretical visibility curves. These profiles are clearly different in shape, with the zero-crossing locations shifted by $>10$ k$\lambda$ and the amplitude of the oscillations differing by more than a factor of 2.
Although the ALMA observations allow for broad disk models that extend in toward the
central star, they are not consistent with a narrow ring model located far
from the star. The contours for the inner and outer radius in Figure~\ref{fig:mcmc} show the absence of any models with large $R_\text{in}$ and small $R_\text{out}$, indicating that the disk must be broad. Indeed, we can place a strong upper limit, $R_\text{in}<25$ AU with $99\%$ ($3\sigma$)
confidence. Given the values of $R_\text{in}$ and $R_\text{out}$ from our best-fit model, the
fractional width of the $\tau$ Ceti disk is $\Delta R/R = 1.6^{+0.3}_{-0.6}$.
If we assume that the outer belt edge at millimeter wavelengths aligns
with the edge found at far-infrared wavelengths ($R_\text{out} = 55$~AU), we can place
a lower limit on the belt width, $\Delta R > 30$~AU.
At $99\%$ confidence, $\Delta R/R > 0.75$.
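These width figures follow from the best-fit radii: with $R_\text{in} = 6.2$ AU and $R_\text{out} = 55$ AU, $\Delta R \approx 49$ AU and $\Delta R/R \approx 1.6$ when $R$ is taken as the belt midpoint (our reading of the definition). A quick check:

```python
R_IN, R_OUT = 6.2, 55.0        # best-fit belt radii, AU

dr = R_OUT - R_IN              # belt width, ~49 AU (consistent with dR > 30 AU)
r_mid = 0.5 * (R_IN + R_OUT)   # belt midpoint radius
frac_width = dr / r_mid        # ~1.6, vs ~0.18 for the classical Kuiper Belt
print(f"dR = {dr:.1f} AU, dR/R = {frac_width:.2f}")
```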
For comparison, our Solar System's classical Kuiper Belt has a fractional width of $\Delta R/R \sim 0.18$ \cite[e.g.][]{hah05,ban15}, significantly narrower. In fact, the Kuiper Belt
appears to be confined between Neptune's 3:2 and 2:1 resonances. Similarly, the Fomalhaut
debris disk appears narrow with $\Delta R /R \sim0.1$, possibly attributable to planets orbiting both interior to
and exterior to the ring \citep{bol12}. In contrast, recent ALMA observations of
the HD~107146 debris disk \citep{ric15} indicate that its belt extends from
30~AU to 150~AU ($\Delta R/R \sim 1.3$), with a break at $\sim70$~AU.
The $\epsilon$ Eridani debris disk also appears to be somewhat broader, with a
fractional width of $\Delta R/R = 0.3$ determined from resolved SMA observations \citep{mac15b}.
The fractional width of the $\tau$ Ceti belt is
substantially larger than both the classical Kuiper Belt and Fomalhaut.
However, the $\tau$ Ceti belt is comparable in width to the HD 107146 disk,
suggesting that it might also have a more complicated radial structure,
which we are unable to resolve with these observations.
\cite{kal06} discuss the implications of the observed diversity in debris disk structures
in the context of scattered light observations. For a narrow belt structure, both
the inner and outer edges of the disk must be maintained by gravitational perturbers
such as stellar or substellar companions, or be confined by mean-motion resonances with an interior
planet as is the case for our own Kuiper Belt. Without any such confinement mechanism
for the outer disk edge, and since more massive planets have been ruled out around
$\tau$ Ceti at distances approaching $\sim10$ AU \citep{law14}, the expected structure is indeed a wide belt.
\subsection{Central Component}
\label{subsec:star}
In addition to the extended emission from an outer belt, the ALMA 1.3~mm image shows a bright, unresolved point source (see the constant positive offset on long baselines in Figure~\ref{fig:vis}) at the expected position of the star with a flux density of $0.69^{+0.02}_{-0.05}$ mJy. For a G8.5V star with an effective temperature of $5344\pm50$~K, an extrapolation of a PHOENIX stellar atmosphere model \citep{hus13} predicts a 1.3 mm flux density of 0.60~mJy (with 5\% uncertainty). Thus, the flux density of this central source is marginally higher than the expectation for the stellar photosphere at this millimeter wavelength. We note, however, that an extrapolation of the mid-infrared flux of the star, as measured by WISE at 22~$\mu$m \citep{wri10} and AKARI at 9 and 18~$\mu$m \citep{ish10}, yields a prediction for the flux of the stellar photosphere at 1.3~mm of $\sim0.5$~mJy, substantially lower than the measured 1.3~mm flux density. Our ALMA measurement is complemented by previous observations by \cite{vill14} with the Karl G. Jansky Very Large Array (VLA) at 34.5~GHz (8.7~mm) and 15.0~GHz (2.0~cm). At 8.7~mm, the measured flux density is $25.3\pm3.9$~$\mu$Jy, significantly higher than the predicted photospheric flux density of 14~$\mu$Jy. While the star is not detected at 2.0~cm, a robust 99\% confidence upper limit of $<11.7$~$\mu$Jy is determined (model photospheric prediction of 2.5~$\mu$Jy).
\begin{figure}[ht]
\centerline{\psfig{file=fig6,width=14cm,angle=0}}
\caption[]{\small \emph{(left)} Flux density spectrum of $\tau$ Ceti from ALMA and VLA observations. The dashed line indicates the expected spectral index of 2.0 for a classical photosphere. \emph{(right)} Brightness temperature spectrum calculated assuming the photospheric radius of the star. For both plots, our ALMA measurements are shown as blue circles and the VLA measurements \citep{vill14} are shown as black diamonds. Detections are indicated by points with $1\sigma$ error bars. The 99\% upper confidence limit at 2.0 cm is indicated by the downwards arrow. Again, the dashed line indicates the expected brightness temperature for a classical photosphere, normalized to the brightness temperature determined from our 1.3~mm ALMA measurement.
}
\label{fig:star}
\end{figure}
As \cite{vill14} discuss, the observed unresolved emission from $\tau$ Ceti at both millimeter and centimeter wavelengths plausibly arises from a hot stellar chromosphere. Similar excess emission at long wavelengths has been noted for several neighboring Sun-like stars, including $\alpha$ Cen A and B (spectral types G2V and K2V, respectively) observed with ALMA by \cite{lis15} and $\epsilon$ Eridani (spectral type K2V) observed with the Submillimeter Array (SMA) and Australia Telescope Compact Array (ATCA) by \cite{mac15b}. We combine our new ALMA 1.3~mm flux density with the previous VLA 8.7~mm measurement and 2~cm upper limit, and determine the Planck brightness temperature at all three wavelengths \cite[following][]{lis13}. Figure~\ref{fig:star} shows the resulting ALMA and VLA constraints on both the flux density and the brightness temperature spectra of $\tau$ Ceti. We assume that the photospheric radius is comparable at optical and millimeter/centimeter wavelengths, and adopt a value of $0.793\pm0.004$ $R_\odot$, obtained from interferometric measurements using the FLUOR instrument on the CHARA array \citep{dif07}. At 1.3~mm this analysis yields $T_B = 5,800\pm200$ K, modestly hotter than the effective temperature of $5344\pm50$~K. However, at longer wavelengths, the brightness temperature diverges significantly from the photospheric prediction with $T_B = 9,300\pm1400$~K and $<23,000$~K at 8.7~mm and 2~cm, respectively.
Additionally, the spectral index at long wavelengths of the central emission from $\tau$ Ceti shows the same deviation from an optically thick photosphere (spectral index of $\sim2$) as is seen for $\alpha$ Cen A and B and $\epsilon$ Eridani. Between 1.3 and 8.7~mm, the spectral index of the central peak in our observations of $\tau$ Ceti is $1.74\pm0.15$ (with the $\sim10\%$ uncertainty in the flux scale and the $1\sigma$ modeling errors added in quadrature). For comparison, the measured spectral indices between 0.87 and 3.2~mm are 1.62 and 1.61 for $\alpha$ Cen A and B, respectively \citep{lis15}.
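The quoted spectral index can be reproduced directly from the two central flux densities. The following minimal sketch (not part of the original analysis, which additionally propagates the flux-scale and modeling errors into the quoted uncertainty) checks the arithmetic:

```python
import math

# Central flux densities quoted in the text
f_1p3mm = 0.69e-3   # Jy, ALMA at 1.3 mm
f_8p7mm = 25.3e-6   # Jy, VLA at 8.7 mm (Villadsen et al. 2014)

# Spectral index alpha defined by S_nu ~ nu^alpha; nu is proportional to 1/lambda
alpha = math.log(f_1p3mm / f_8p7mm) / math.log(8.7 / 1.3)
print(round(alpha, 2))  # 1.74, shallower than the alpha ~ 2 of an optically thick photosphere
```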
\section{Conclusions}
\label{sec:conclusions}
We observed the $\tau$ Ceti debris disk with ALMA at 1.3~mm with baselines that probe $1\arcsec$ (4~AU) scales. These are the first observations of this nearby system with a millimeter interferometer and reveal somewhat patchy emission from a dust disk surrounding an unresolved central emission peak. In order to characterize these two emission components, we fit simple parametric models directly to the visibility data within an MCMC framework.
Our best-fit model yields an inner belt edge of $6.2^{+9.8}_{-4.6}$ AU,
consistent with the analysis of previous far-infrared \emph{Herschel}
observations. Given the relatively low sensitivity at short baselines in
the ALMA observations, we are unable to place a tighter constraint on the
inner edge and its position relative to the proposed five planet system.
These data, however, provide a strong lower limit on the fractional width of
the belt, $\Delta R/R > 0.75$ with $99\%$ confidence. This result implies that
the $\tau$ Ceti debris disk is broad, much wider than the classical Kuiper Belt
in our Solar System and more comparable to the HD 107146 debris disk \citep{ric15}.
The bright central peak at the stellar position has a flux density of $F_\text{1.3mm}=0.69^{+0.02}_{-0.05}$~mJy, somewhat higher than the predicted flux of the stellar photosphere at 1.3~mm. At longer centimeter wavelengths, this excess is more significant, increasing to $\sim2\times$ the photospheric prediction in VLA observations at 8.7~mm \citep{vill14}. The spectral index between these two measurements is $1.74\pm0.15$, shallower than the expectation for an optically thick photosphere. Given the high brightness temperatures at both 1.3 and 8.7~mm, this excess emission is likely due to a hot stellar chromosphere. Similar spectra have been observed for other nearby Sun-like stars, e.g. $\alpha$ Cen A/B and $\epsilon$ Eridani.
These first ALMA observations of the $\tau$ Ceti system allow us to probe the structure of the debris disk with higher resolution than previous work. However, higher sensitivity observations at shorter baselines are still needed to constrain
the location of the inner edge of the dust belt more precisely. If the disk extends in towards the star, within the orbit of the outermost proposed planet, this would provide strong evidence against the posited five planet system.
However, if the disk inner edge is located well outside the proposed planetary system, an additional massive planet on a wide orbit may be required to clear out the central hole in the belt. Additional observations with the ACA could provide the necessary sensitivity to determine the position of the inner disk edge and its implications for an interior planetary system.
\acknowledgements
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00588.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and
NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI
(Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA
Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy
Observatory is a facility of the National Science Foundation operated under
cooperative agreement by Associated Universities, Inc. M.A.M. acknowledges
support from a National Science Foundation Graduate Research Fellowship
(DGE1144152). S.M.L. gratefully acknowledges support from the NRC Canada Plaskett Fellowship.
B.C.M. acknowledges support from a Natural Science and
Engineering Research Council (NSERC) Discovery Accelerator Supplement grant.
G.M.K. is supported by the Royal Society as a Royal Society University Research Fellow.
M.B. acknowledges support from a FONDECYT Postdoctoral Fellowship, project no. 3140479, and the Millennium Science Initiative (Chilean Ministry of Economy), through grant RC130007.
\section{Introduction}
Classically the theory of Witt vectors comes in two flavors, the $p$-typical Witt vectors $W(k;p)$ and the big Witt vectors $\bW(k)$. Those are special cases of Witt vectors defined using a truncation set $S$, and the extra flexibility coming from varying the truncation set has proven quite useful.
In this paper we take the use of truncation sets one step further by introducing \emph{truncation posets}, and redevelop the foundations of Witt vectors from this point of view. The existence of all the usual structure maps of Witt vectors is easy to establish using this formalism. We give explicit formulas on ghost coordinates and isolate all the necessary congruences in a single lemma due to Dwork (Lemma \ref{l:Dwork}).
Recall that a \emph{truncation set} is a set $S \subset \bN=\{1,2,\ldots\}$ which is closed under division. Given a truncation set $S$ and a commutative ring $k$, one can define the ring $\bW_S(k)$ of Witt vectors. As a set this is $k^S$, and the addition and multiplication maps are determined by requiring that the ghost map $w : \bW_S(k) \to k^S$ is a ring map, functorially in the ring $k$. With $S=\{1,p,p^2,\ldots\}$ this recovers the $p$-typical Witt vectors and with $S=\bN$ it recovers the big Witt vectors.
In recent work related to various algebraic $K$-theory calculations more general truncation sets have come up. For example, when studying the algebraic $K$-theory of $k[x_1,\ldots,x_n]/(x_1^{a_1},\ldots,x_n^{a_n})$ in \cite{AGHL14} it turned out to be natural to consider certain subsets of $\bN^n$, and in \cite{An15} where we calculate the algebraic $K$-theory of $k \langle x_1,\ldots,x_n \rangle/m^a$, the polynomial ring in $n$ non-commuting variables modulo the $a$'th power of the ideal $m=(x_1,\ldots,x_n)$, we were led to consider certain subsets of the set of words in $n$ letters.
In each of the above cases it is possible to unpack the truncation poset Witt vectors that show up and write them as a product of ordinary Witt vectors, and to describe the maps in terms of the classical structure maps of Witt vectors. But this unpacking is messy, and naturally defined maps of truncation poset Witt vectors have to be divided into cases when considering only the classical Witt vectors. We claim that by considering truncation posets the above-mentioned $K$-theory calculations become somewhat easier to carry out, and the results become significantly easier to state.
Given a truncation poset $S$ as in Definition \ref{d:gentrun} below and a commutative ring $k$, we will define the $S$-Witt vectors $\bW_S(k)$ to be $k^S$ as a set. We will then make the collection of truncation posets into a category, and $S \mapsto \bW_S(k)$ into a functor, in a number of different ways by considering three types of maps.
The first type of map, which we call an $R$-map, is most general. Given an $R$-map $f : S \to T$ of truncation posets we get an induced map $f^* : \bW_T(k) \to \bW_S(k)$. By varying $S$, $T$ and $f$ this recovers all composites of the classical restriction and Frobenius maps, as well as diagonal maps. Classically the restriction and Frobenius maps are defined in rather different ways, so it is perhaps surprising that the two definitions can be unified in this way.
The second type of map, which we call a $T$-map, is an $R$-map satisfying certain additional conditions. Given a $T$-map $f : S \to T$ of truncation posets we get an induced map $f_\oplus : \bW_S(k) \to \bW_T(k)$, and by varying $S$, $T$ and $f$ this recovers all composites of the addition map and Verschiebung maps on the classical Witt vectors.
We can combine these two kinds of maps and define a category $\mathcal{TP}^{TR}$. An object of $\mathcal{TP}^{TR}$ is a truncation poset, and a morphism is an equivalence class of spans
\[
S \xleftarrow{f} A \xrightarrow{g} T
\]
where $f$ is an $R$-map and $g$ is a $T$-map. This is similar to the definition of a $G$-Mackey functor in terms of bispans of finite $G$-sets for a finite group $G$. See Theorem \ref{t:TRfunctor} in the body of the paper.
Finally, we define a third type of map of truncation posets that we call an $N$-map. This is an $R$-map satisfying certain (stronger) additional conditions. Given an $N$-map $f : S \to T$ we get an induced map $f_\otimes : \bW_S(k) \to \bW_T(k)$ which encodes all composites of the multiplication map and norm maps on the classical Witt vectors.
We can combine all three kinds of maps to define a category $\mathcal{TP}^{TNR}$ of truncation posets with transfer, norm and restriction. We then have the following result, which we also restate as Theorem \ref{t:mainbody}.
\begin{thm} \label{t:main}
Let $k$ be a commutative ring. There is a functor
\[
\bW(k) : \mathcal{TP}^{TNR} \to Set
\]
given on objects by $S \mapsto \bW_S(k)$ which encodes all addition, multiplication, restriction, Frobenius, Verschiebung and norm maps of ordinary Witt vectors.
\end{thm}
While one can argue that some category encoding all of this information must exist for formal reasons, our category $\mathcal{TP}^{TNR}$ has a very concrete description in terms of generators and relations, and it is easy to perform calculations on ghost coordinates.
We make some remarks.
\begin{remark}
The norm map is perhaps less classical than the other maps encoded by $\mathcal{TP}^{TNR}$. It can be thought of as a multiplicative version of the Verschiebung. Its existence can be deduced from Brun's paper \cite{Br05}, but see \cite{An_norm} for a concrete definition with explicit formulas.
\end{remark}
\begin{remark} \label{r:Tambarabispans}
The machinery developed in this paper is similar in flavor to that of \emph{Tambara functors} (see \cite{Ta93} or \cite{St}). In fact, Tambara called what has become known as a Tambara functor a $TNR$-functor. But there are some differences.
First, there is no analogue of the restriction map in the context of equivariant stable homotopy theory unless one is willing to consider \emph{cyclotomic spectra}. What topologists usually refer to as a restriction map corresponds to the Frobenius map of Witt vectors. To prevent confusion we will avoid the conflicting terminology from algebraic topology in this paper, although we do borrow the acronym $TNR$.
And second, a Tambara functor can be defined as a functor from the category of \emph{bispans}
\[
X \leftarrow A \to B \to Y,
\]
of finite $G$-sets, and while the definition of composition of two bispans is somewhat complicated it is possible to represent any composite of restrictions (which we should call Frobenius), norms and transfers, in any order, as a bispan. In our case $\mathcal{TP}^{TNR}$ is also built from three types of maps, but it is not true that any map in $\mathcal{TP}^{TNR}$ can be represented by a bispan.
One might argue that this indicates that our definition of a truncation poset is too general. We remedy this by defining a subcategory $\mathcal{TP}^{TNR}_\textnormal{join}$ containing only certain especially nice truncation posets, and show that any map in $\mathcal{TP}^{TNR}_\textnormal{join}$ can indeed be represented by a bispan. But note that the truncation posets that show up in $K$-theory calculations are not usually in $\mathcal{TP}^{TNR}_\textnormal{join}$.
\end{remark}
\subsection{Outline}
We start in Section \ref{s:gentrun} by defining the main new player, the truncation poset. In Section \ref{s:genWitt} we describe how to generalize Witt vectors from ordinary truncation sets to truncation posets, and explain how $S \mapsto \bW_S(k)$ defines a functor out of each of the three categories $\mathcal{TP}^T$, $\mathcal{TP}^N$ and $(\mathcal{TP}^R)^{\mathrm{op}}$.
In Section \ref{s:Mackey} we combine the category $\mathcal{TP}^T$ with $(\mathcal{TP}^R)^{\mathrm{op}}$ by considering the category freely generated by maps in $\mathcal{TP}^T$ and $(\mathcal{TP}^R)^{\mathrm{op}}$, modulo certain explicit relations. Any map in $\mathcal{TP}^{TR}$ can be described by a \emph{span} of truncation posets where the first leg is in $\mathcal{TP}^R$ and the second leg is in $\mathcal{TP}^T$. Then $S \mapsto \bW_S(k)$ becomes a functor from $\mathcal{TP}^{TR}$ to sets. This is similar in flavor to the definition of a Mackey functor.
In Section \ref{s:Tambara}, which is significantly harder both because of the difficulty of commuting an $R$-map past an $N$-map and because of the combinatorics involved in defining an exponential diagram, we combine all three of the categories $\mathcal{TP}^T$, $\mathcal{TP}^N$ and $(\mathcal{TP}^R)^{\mathrm{op}}$ and show that $S \mapsto \bW_S(k)$ is a functor from $\mathcal{TP}^{TNR}$ to sets. This is similar in flavor to the definition of a Tambara functor, but see Remark \ref{r:Tambarabispans} above.
Finally, in Section \ref{s:bispans} we show that if we restrict our attention to certain especially nice truncation posets we can define a category $\mathcal{TP}^{TNR}_\textnormal{join}$ where every morphism can in fact be represented, in an essentially unique way, by a bispan of truncation posets. We finish by comparing functors out of a particular subcategory of $\mathcal{TP}^{TNR}_\textnormal{join}$ to Tambara functors for a finite cyclic group.
\subsection{Acknowledgements}
This paper was inspired by the author's joint work with Anna Marie Bohmann on graded Tambara functors \cite{AnBo}, and by conversations with Ayelet Lindenstrauss and Lars Hesselholt about algebraic $K$-theory calculations. We have also borrowed some of the Witt vector formalism from Hesselholt's survey article \cite{He}. The author would also like to thank James Borger and Arnab Saha for interesting conversations about Witt vectors, and Chuck Weibel for suggesting the name truncation poset.
\section{Truncation posets and maps between them} \label{s:gentrun}
If $S$ is a partially ordered set we write $s \mid t$ rather than $s \leq t$ for the partial order. We will consider $\bN$ as a partially ordered set ordered by division.
\subsection{Ordinary truncation sets and classical Witt vectors}
We will refer to anything defined in terms of ordinary truncation sets as ``classical''. Recall that $S \subset \bN$ is a truncation set if $s \in S$ and $t \mid s$ implies $t \in S$. For a commutative ring $k$, the ring of $S$-Witt vectors $\bW_S(k)$ is defined to be $k^S$ as a set. The addition and multiplication maps are defined by the requirement that the ghost map
\[
w : \bW_S(k) \to k^S
\]
defined by
\[
(a_s) \mapsto \langle x_s \rangle \qquad x_s = \sum_{d \mid s} da_d^{s/d}
\]
is a ring map, functorially in the ring $k$. We make the standing assumption that everything in this paper is required to be functorial in $k$.
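As a concrete illustration (not from the paper), the ghost map on an ordinary truncation set can be evaluated directly from this formula; here a Witt vector is represented as a dict indexed by $S$:

```python
def ghost(a, S):
    """Ghost map w : W_S(k) -> k^S for an ordinary truncation set S.

    a is a dict {s: a_s} of Witt coordinates; the result is {s: x_s}
    with x_s = sum over d | s of d * a_d^(s/d)."""
    return {s: sum(d * a[d] ** (s // d) for d in S if s % d == 0)
            for s in S}

S = [1, 2, 4]                     # the truncation set of divisors of 4
x = ghost({1: 2, 2: 3, 4: 1}, S)
print(x)  # {1: 2, 2: 10, 4: 38}
```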
We will need the following constructions. If $n \in \bN$ and $S$ is a truncation set, let
\[
S/n = \{t \in \bN \quad | \quad nt \in S\}.
\]
This is another truncation set. The classical Frobenius map is defined as the map
\[
F_n : \bW_S(k) \to \bW_{S/n}(k)
\]
which is given on ghost coordinates by $\langle x_s \rangle \mapsto \langle y_t \rangle$ with $y_t = x_{nt}$.
There is also a map going the other way. The classical Verschiebung map is defined as the map
\[
V_n : \bW_{S/n}(k) \to \bW_S(k)
\]
which is given on Witt coordinates by $(b_t) \mapsto (a_s)$ with $a_s = b_{s/n}$ if $n \mid s$ and $0$ if $n \nmid s$. Alternatively it can be defined on ghost coordinates by $\langle y_t \rangle \mapsto \langle x_s \rangle$ with $x_s = ny_{s/n}$ if $n \mid s$ and $0$ if $n \nmid s$.
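The two descriptions of $V_n$ agree, and this is easy to check numerically. The sketch below (illustrative only) applies the Witt-coordinate formula for $V_n$ and verifies that the ghost coordinates of the result satisfy $x_s = ny_{s/n}$ when $n \mid s$ and $x_s = 0$ otherwise:

```python
def ghost(a, S):
    # x_s = sum over d | s of d * a_d^(s/d)
    return {s: sum(d * a[d] ** (s // d) for d in S if s % d == 0) for s in S}

def verschiebung(b, S, n):
    # V_n : W_{S/n}(k) -> W_S(k) on Witt coordinates: pad with zeroes
    return {s: b[s // n] if s % n == 0 else 0 for s in S}

S, n = [1, 2, 3, 6], 2            # divisors of 6, so S/2 = {1, 3}
b = {1: 5, 3: 7}
x = ghost(verschiebung(b, S, n), S)
y = ghost(b, [1, 3])
# Ghost-coordinate description: x_s = n * y_{s/n} if n | s, and 0 otherwise
assert all(x[s] == (n * y[s // n] if s % n == 0 else 0) for s in S)
print(x)  # {1: 0, 2: 10, 3: 0, 6: 292}
```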
For $n \in \bN$ we let $\langle n \rangle$ denote the truncation set of divisors of $n$. Given a truncation set $S$ we get another truncation set
\[
\langle n \rangle S = \{t \in \bN \quad | \quad t=es \textnormal{ for some $e \mid n$, $s \in S$}\}.
\]
It follows immediately that $(\langle n \rangle S)/n = S$, so in particular we have a Verschiebung map $V_n : \bW_S(k) \to \bW_{\langle n \rangle S}(k)$. But by \cite{An_norm} we also have a norm map (the ``classical'' norm)
\[
N_n : \bW_S(k) \to \bW_{\langle n \rangle S}(k).
\]
This can be defined on ghost coordinates by $\langle x_s \rangle \mapsto \langle y_t \rangle$ where
\[
y_t = x_{t/g}^g, \quad g=\gcd(n,t).
\]
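On ghost coordinates the norm is straightforward to evaluate. The following sketch (illustrative; the construction of $\langle n \rangle S$ is inlined) computes $N_n$ for a small truncation set:

```python
from math import gcd

def norm_ghost(x, S, n):
    # N_n : W_S(k) -> W_{<n>S}(k) on ghost coordinates: y_t = x_{t/g}^g, g = gcd(n, t)
    nS = sorted({e * s for s in S for e in range(1, n + 1) if n % e == 0})
    return {t: x[t // gcd(n, t)] ** gcd(n, t) for t in nS}

S = [1, 2]
print(norm_ghost({1: 3, 2: 5}, S, 2))  # {1: 3, 2: 9, 4: 25}
```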
Finally, if $T \subset S$ is another truncation set there is a classical restriction map
\[
R^S_T : \bW_S(k) \to \bW_T(k).
\]
This can be defined either on Witt coordinates by $(R^S_T(a_s))_t = a_t$ or on ghost coordinates by $(R^S_T \langle x_s \rangle)_t = x_t$.
Note that specifying $n \in \bN$ and the source $\bW_S(k)$ uniquely determines the target of the norm map $N_n$ but not the target of the Verschiebung map $V_n$. However, given a truncation set $T$ with $T/n=S$ it is true that $\langle n \rangle S \subset T$, and that the diagram
\[ \xymatrix{
\bW_S(k) \ar[r]^-{V_n} \ar[rd]_-{V_n} & \bW_T(k) \ar[d]^{R^T_{\langle n \rangle S}} \\
& \bW_{\langle n \rangle S}(k)
} \]
commutes. Given $(a_s) \in \bW_S(k)$, the image $V_n((a_s)) \in \bW_T(k)$ is padded with zeroes. But because the formula for $N_n((a_s))$ is more complicated, padding with zeroes does not work in this case. See Example \ref{ex:notcommutingRpastN} below for a concrete example of this problem with the norm map.
\subsection{Truncation posets}
We make the following definition.
\begin{defn} \label{d:gentrun}
A \emph{truncation poset} is a partially ordered set $S$ together with a function
\[
| - | : S \to \bN
\]
satisfying the following properties.
\begin{enumerate}
\item If $s \mid t$ then $|s| \mid |t|$.
\item If $s \mid t \mid u$ then $\frac{|u|}{|s|} = \frac{|u|}{|t|} \cdot \frac{|t|}{|s|}$.
\item If $d \mid |s|$ then there is a unique $t \in S$ with $t \mid s$ and $|t| = |s|/d$. In particular there is a unique $t \in S$ with $t \mid s$ and $|t|=1$.
\item If $s \in S$ and $d \in \bN$ there is at most one $t \in S$ with $s \mid t$ and $|t|=d|s|$.
\end{enumerate}
\end{defn}
For ease of notation we will sometimes write $t/s$ for the natural number $\frac{|t|}{|s|}$ and $s/d$ for the unique $t \in S$ with $t \mid s$ and $|t|=|s|/d$. If there is a possibility for confusion we will write $|s|_S$ for $|s|$.
\begin{remark}
Suppose we are given a poset $S$ and a natural number $t/s$ for each $s \mid t$ in $S$. Then there is at most one way to define $|-|$ in such a way that $S$ becomes a truncation poset. Indeed, we must have
\[
|s|=\max\{s/t \quad | \quad t \mid s\}.
\]
\end{remark}
\begin{defn}
Let $S$ be a truncation poset and let $k$ be a commutative ring. The $S$-Witt vectors of $k$, denoted $\bW_S(k)$, is the set $k^S$. The ghost map is the map
\[
w : \bW_S(k) \to k^S
\]
sending the vector $(a_s)$ to the vector $\langle x_s \rangle$ with
\[
x_s = \sum_{t \mid s} |t| a_t^{s/t}.
\]
\end{defn}
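This ghost map is directly computable once a truncation poset is specified by its divisibility relation and the function $|-|$. The sketch below (an illustration, not part of the paper) uses a divisor-closed subset of $\bN^2$ with $|(s_1,s_2)| = \gcd(s_1,s_2)$, a truncation poset of the kind discussed in the examples that follow:

```python
from math import gcd

def ghost_poset(a, S, divides, norm):
    # w : W_S(k) -> k^S, x_s = sum over t | s of |t| * a_t^(|s|/|t|)
    return {s: sum(norm(t) * a[t] ** (norm(s) // norm(t))
                   for t in S if divides(t, s))
            for s in S}

# Divisor-closed subset of N^2: t | s iff s = d*t componentwise; |s| = gcd(s_1, s_2)
S = [(1, 2), (2, 4), (1, 3)]
divides = lambda t, s: (s[0] % t[0] == 0 and s[1] % t[1] == 0
                        and s[0] // t[0] == s[1] // t[1])
norm = lambda s: gcd(s[0], s[1])
x = ghost_poset({(1, 2): 2, (2, 4): 3, (1, 3): 5}, S, divides, norm)
print(x)  # {(1, 2): 2, (2, 4): 10, (1, 3): 5}
```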
We will describe the various structure maps that exist in Section \ref{s:genWitt} below. But first we present a series of examples of truncation posets and define the maps of truncation posets we will need.
\begin{example}
An ordinary truncation set $S \subset \bN$ is a truncation poset with $|-|$ defined to be the identity map.
\end{example}
\begin{example} \label{ex:nN}
Let $n \in \bN$. The subset $n \bN \subset \bN$ is a truncation poset with $|s|_{n\bN}=s/n$. The multiplication by $n$ map $\bN \to n \bN$ is an isomorphism of truncation posets.
\end{example}
The next two examples appeared in \cite{AGHL14}, the first one explicitly and the second one implicitly.
\begin{example} \label{ex:N_to_the_n}
A subset $S \subset \bN^n$ which is closed under division in the sense that if $(s_1,\ldots,s_n) \in S$ and $d \mid s_i$ for all $1 \leq i \leq n$ then $(s_1/d,\ldots,s_n/d) \in S$, is a truncation poset with $|(s_1,\ldots,s_n)| = \gcd(s_1,\ldots,s_n)$.
\end{example}
\begin{example} \label{ex:N_to_the_n2}
Fix positive integers $a_1,\ldots,a_n$. A subset $S \subset \bN^n$ which satisfies $a_i \mid s_i$ for all $(s_1,\ldots,s_n) \in S$ and $1 \leq i \leq n$, and which is closed under division by $n$-tuples satisfying the same condition, is a truncation poset with $|(s_1,\ldots,s_n)|=\gcd(\frac{s_1}{a_1},\ldots,\frac{s_n}{a_n})$.
\end{example}
\begin{example} \label{ex:words}
Put a partial order on the set of words in $n$ letters by saying $w_1 \mid w_2$ if $w_2 = w_1^d$ for some $d \in \bN$. Define $|w_2|=d$ if $w_2 = w_1^d$ with $w_1$ irreducible (meaning $w_1$ is not a power of a shorter word). For example, $|x_1 x_2 x_1 x_2|=2$. Then a set $S$ of words which is closed under division is a truncation poset.
\end{example}
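The value $|w|$ can be computed by locating the irreducible root of $w$, i.e.\ its shortest period; a minimal sketch (words represented as Python strings over any alphabet):

```python
def word_norm(w):
    # |w| = d where w = u^d with u irreducible (not itself a power of a shorter word)
    n = len(w)
    for p in range(1, n + 1):
        if n % p == 0 and w == w[:p] * (n // p):
            return n // p

print(word_norm("xyxy"))  # 2, matching |x1 x2 x1 x2| = 2
print(word_norm("xyx"))   # 1, since "xyx" is irreducible
```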
The next example is central to the calculation of the algebraic $K$-theory of a truncated polynomial ring in non-commuting variables, see \cite{An15}.
\begin{example} \label{ex:words2}
Fix a positive integer $a$ and consider words in $n$ letters of length divisible by $a$, modulo the equivalence relation given by cyclically permuting blocks of $a$ letters. A set of such words which is closed under division in the same sense as in the previous example, and with $|-|$ defined in the same way, is a truncation poset. For example, if $a=1$ the words $x_1x_2x_1x_2$ and $x_2x_1x_2x_1$ are equivalent but if $a=2$ or $a=4$ they are not. If $a=1$ or $a=2$ then $|x_1x_2x_1x_2|=2$ but if $a=4$ then $|x_1x_2x_1x_2|=1$.
\end{example}
We could go on, but we hope the above examples have convinced the reader that truncation posets are in rich supply.
\begin{lemma}
If $S$ and $T$ are truncation posets then so is $S \coprod T$, and there is a canonical isomorphism
\[
\bW_{S \coprod T}(k) \cong \bW_S(k) \times \bW_T(k).
\]
Moreover, this isomorphism is compatible with the canonical isomorphism $k^{S \coprod T} \cong k^S \times k^T$ under the ghost map.
\end{lemma}
\begin{proof}
This is clear by inspection of the definitions.
\end{proof}
In fact all truncation posets split up as a disjoint union.
\begin{lemma} \label{l:splittingofS}
Let $S$ be a truncation poset. Then there is a splitting $S = \coprod S_i$ with each $S_i$ isomorphic to an ordinary truncation set via $|-|$.
\end{lemma}
\begin{proof}
Let $I = \{ i \in S \quad | \quad |i|=1\}$. Then it is clear from the definition that $S = \coprod\limits_{i \in I} S_i$ with $S_i = \{s \in S \quad | \quad i \mid s\}$.
\end{proof}
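The splitting of Lemma \ref{l:splittingofS} is effective: each $s$ belongs to the component of the unique $i \mid s$ with $|i|=1$. A sketch (illustrative; the poset is given by its divisibility relation and $|-|$ as data):

```python
from math import gcd

def components(S, divides, norm):
    # Group each s in S with the unique i | s satisfying |i| = 1
    comps = {}
    for s in S:
        i = next(t for t in S if divides(t, s) and norm(t) == 1)
        comps.setdefault(i, []).append(s)
    return comps

# Divisor-closed subset of N^2: t | s iff s = d*t componentwise; |s| = gcd(s_1, s_2)
S = [(1, 2), (2, 4), (1, 3)]
divides = lambda t, s: (s[0] % t[0] == 0 and s[1] % t[1] == 0
                        and s[0] // t[0] == s[1] // t[1])
norm = lambda s: gcd(s[0], s[1])
print(components(S, divides, norm))  # {(1, 2): [(1, 2), (2, 4)], (1, 3): [(1, 3)]}
```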
We call each $S_i \subset S$ as in the above lemma a \emph{connected component} of $S$.
\subsection{Maps of truncation posets}
The structure maps for Witt vectors will come from maps of truncation posets. We make the following definition.
\begin{defn}
A \emph{map} $f : S \to T$ of truncation posets is a map of sets such that if $s_1 \mid s_2$ then $f(s_1) \mid f(s_2)$ and $\frac{|f(s_2)|}{|f(s_1)|} = \frac{|s_2|}{|s_1|}$.
\end{defn}
For maximal similarity with the later definitions we will sometimes call a map of truncation posets an $R$-map. It is clear that such maps compose and that we get a category $\mathcal{TP}^R$ of truncation posets and $R$-maps.
\begin{example}
Let $f : T \subset S$ be an inclusion of ordinary truncation sets. Then $f$ is a map of truncation posets.
\end{example}
\begin{example} \label{ex:multn}
Let $S$ be an ordinary truncation set and let $T = nS \cap S \subset S$. Then $T$ is a truncation poset as in Example \ref{ex:nN} above and the inclusion $f : T \to S$ is a map of truncation posets. Moreover, the map
\[
\frac{1}{n} : T \to S/n
\]
is an isomorphism of truncation posets. We will switch back and forth between thinking about the inclusion $nS \cap S \subset S$ and the multiplication by $n$ map $S/n \to S$.
\end{example}
\begin{example} \label{ex:multn2}
As a special case of the previous example, let $S$ be an ordinary truncation set. Then the multiplication by $n$ map $S \xrightarrow{n} \langle n \rangle S$ is a map of truncation posets.
\end{example}
\begin{example}
Let $S$ be any truncation poset. Then the fold map $\nabla : S \coprod S \to S$ is a map of truncation posets.
\end{example}
Any map of truncation posets will induce a map between Witt vectors, and some maps will induce two or three different maps. For the extra maps we need additional conditions.
\begin{defn} \label{d:Tmap}
A map $f : S \to T$ of truncation posets is a \emph{$T$-map} ($T$ for \emph{transfer}, not for the target of the map) if it satisfies the following additional conditions.
\begin{enumerate}
\item For every $s \in S$ and $t' \in T$ with $f(s) \mid t'$ there exists an $s' \in S$ with $s \mid s'$ and $f(s')=t'$. \label{cond:fib}
\item For every $t \in T$ the set $f^{-1}(t)$ is finite.
\end{enumerate}
If $f$ satisfies the first condition (but not necessarily the second) we say that $f$ is a \emph{fibration}.
\end{defn}
It is clear that we get a category $\mathcal{TP}^T$ of truncation posets and $T$-maps.
\begin{lemma} \label{l:decomposeTmap}
Suppose $f : S \to T$ is a $T$-map. Decompose $S$ and $T$ as $S = \coprod_i S_i$ and $T = \coprod_j T_j$ with each $S_i$ and $T_j$ isomorphic to an ordinary truncation set as in Lemma \ref{l:splittingofS}. Then $f$ is the coproduct of maps
\[
S_i \xrightarrow{f_i} T_j \subset T,
\]
and each $f_i$ is isomorphic to a map of the form $V/n \xrightarrow{n} V$ as in Example \ref{ex:multn}.
\end{lemma}
\begin{proof}
Suppose $S_i$ is isomorphic to the ordinary truncation set $U$ and $T_j$ is isomorphic to the ordinary truncation set $V$, with $f_i$ corresponding under these isomorphisms to $g : U \to V$. Note that $g$ is injective, and let $n=g(1)$. We can define a map $g' : U \to V/n$ by $g'(u) = \frac{g(u)}{n}$, and then $g$ factors as $U \xrightarrow{g'} V/n \xrightarrow{n} V$. The map $g'$ is injective and satisfies $g'(1)=1$. Now condition (\ref{cond:fib}) in Definition \ref{d:Tmap} implies that $g'$ is also surjective.
\end{proof}
We need one more version of a map of truncation posets.
\begin{defn} \label{d:Nmap}
A map $f : S \to T$ of truncation posets is an \emph{$N$-map} ($N$ for \emph{norm}) if it satisfies the following additional conditions.
\begin{enumerate}
\item For every $s \in S$ and $t' \in T$ with $t'$ in the same connected component as $f(s)$ there exists an $s' \in S$ in the same connected component as $s$ with $t' \mid f(s')$. \label{cond:strongfib}
\item For every $t \in T$ the set
\[
\widehat{f^{-1}}(t) = \textnormal{minimal elements of } \{s \in S \quad | \quad t \mid f(s)\}
\]
is finite.
\end{enumerate}
\end{defn}
Here an $s \in S$ with $t \mid f(s)$ is \emph{minimal} if there is no $s' \in S$ with $s' \mid s$, $s' \neq s$, and $t \mid f(s')$.
We note that being an $N$-map is a stronger condition than being a $T$-map. Indeed, if $f : S \to T$ is an $N$-map and $f(s) \mid t'$ we get an $s''$ in the same connected component as $s$ with $t' \mid f(s'')$. But then $f(s) \mid t' \mid f(s'')$ implies that $s \mid s''$ and that $d = \frac{|f(s'')|}{|t'|} \mid |s''|$. Hence we can let $s' = \frac{s''}{d}$. It follows that $s \mid s'$ and that $f(s')=t'$.
The generalization $\widehat{f^{-1}}(t)$ of inverse image is compatible with composition in the following sense:
\begin{lemma}
Suppose we have a composite of $N$-maps $S \xrightarrow{f} T \xrightarrow{g} U$. Then the composite
\[
\textnormal{subsets of $U$} \xrightarrow{\widehat{g^{-1}}} \textnormal{subsets of $T$} \xrightarrow{\widehat{f^{-1}}} \textnormal{subsets of $S$}
\]
agrees with
\[
\textnormal{subsets of $U$} \xrightarrow{\widehat{(g \circ f)^{-1}}} \textnormal{subsets of $S$}.
\]
\end{lemma}
\begin{proof}
This is a straightforward verification. To show that $\widehat{(g \circ f)^{-1}}(u)$ is contained in $\widehat{f^{-1}}(\widehat{g^{-1}}(u))$, take $s \in \widehat{(g \circ f)^{-1}}(u)$. Then $\gcd(|s|, \frac{|g(f(s))|}{|u|})=1$. Now let $t = \frac{f(s)}{d}$, where $d = \gcd(|f(s)|, \frac{|g(f(s))|}{|u|})$. It follows that $t \in \widehat{g^{-1}}(u)$ and that $s \in \widehat{f^{-1}}(t)$, so $s \in \widehat{f^{-1}}(\widehat{g^{-1}}(u))$. The opposite inclusion is similar.
\end{proof}
We get a category $\mathcal{TP}^N$ of truncation posets and $N$-maps. We note the following consequence of the definition of an $N$-map.
\begin{lemma} \label{l:minimaldiv}
Suppose $f : S \to T$ is an $N$-map. Given $t, t' \in T$ with $t \mid t'$ and $s \in \widehat{f^{-1}}(t)$ there is a unique $s' \in \widehat{f^{-1}}(t')$ with $s \mid s'$. Conversely, given $s' \in \widehat{f^{-1}}(t')$ there is a unique $s \in \widehat{f^{-1}}(t)$ with $s \mid s'$.
\end{lemma}
\begin{proof}
Given $t' \in T$ with $t \mid t'$, the definition of an $N$-map gives us some $s'$ in the same connected component as $s$ with $t' \mid f(s')$. If we require $s'$ to be minimal then $s'$ is unique.
But then $t \mid f(s')$ and we get an element $s'' \in \widehat{f^{-1}}(t)$ defined by $s'' = \frac{s'}{d}$ with $d = \gcd(|s'|, \frac{|f(s')|}{|t|})$. It follows that $s=s''$ since they are in the same connected component and satisfy the same minimality condition. Hence $s=s'' \mid s'$.
For the converse, note that $s' \in \widehat{f^{-1}}(t')$ also satisfies $t \mid f(s')$. We can then define $s$ by requiring that $s \mid s'$ and $t \mid f(s)$, and that $s$ is minimal.
\end{proof}
Next we note that an $N$-map decomposes as follows:
\begin{lemma} \label{l:decomposeNmap}
Suppose $f : S \to T$ is an $N$-map. Decompose $S$ and $T$ as $S = \coprod_i S_i$ and $T=\coprod_j T_j$ with each $S_i$ and $T_j$ isomorphic to an ordinary truncation set as in Lemma \ref{l:splittingofS}. Then $f$ is the coproduct of maps
\[
S_i \xrightarrow{f_i} T_j \subset T,
\]
and each $f_i$ is isomorphic to a map of the form $U \xrightarrow{n} \langle n \rangle U$ as in Example \ref{ex:multn2}.
Moreover, only finitely many $S_i$ map to each $T_j$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{l:decomposeTmap}, suppose $f_i$ corresponds to a map $g : U \to V$ of ordinary truncation sets and let $n=g(1)$. Then $g$ factors as $U \xrightarrow{n} \langle n \rangle U \xrightarrow{g'} V$. Now $g'$ is automatically injective, and condition (\ref{cond:strongfib}) in Definition \ref{d:Nmap} implies that $g'$ is also surjective.
\end{proof}
\begin{example}
Fix positive integers $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$ with each $a_i \mid b_i$. Also fix $N \in \bN \cup \{\infty\}$. Let $S \subset \bN^n$ be the following truncation poset:
\[
S = \{(s_1,\ldots,s_n) \in \bN^n \quad | \quad a_i \mid s_i \textnormal{ and } s_1+\ldots+s_n \leq N \}
\]
with
\[
|(s_1,\ldots,s_n)| = \gcd \Big( \frac{s_1}{a_1},\ldots,\frac{s_n}{a_n} \Big)
\]
as in Example \ref{ex:N_to_the_n2}.
Similarly, let $T \subset \bN^n$ be the truncation poset
\[
T = \{(t_1,\ldots,t_n) \in \bN^n \quad | \quad b_i \mid t_i \textnormal{ and } t_1+\ldots+t_n \leq N \}
\]
with
\[
|(t_1,\ldots,t_n)| = \gcd \Big( \frac{t_1}{b_1},\ldots,\frac{t_n}{b_n} \Big).
\]
Then the inclusion $T \subset S$ is a $T$-map but not generally an $N$-map.
\end{example}
\begin{example}
Fix a positive integer $n$ and positive integers $a$ and $b$ with $a \mid b$. Also fix $N \in \bN \cup \{\infty\}$. Let $S$ be the following truncation poset:
\[
S = \{ \textnormal{words in $x_1,\ldots,x_n$ of length $\leq N$ and word length divisible by $a$} \}/\sim_a
\]
where $\sim_a$ is the equivalence relation given by cyclically permuting blocks of $a$ letters as in Example \ref{ex:words2}.
Similarly, let $T$ be the truncation poset
\[
T = \{ \textnormal{words in $x_1,\ldots,x_n$ of length $\leq N$ and word length divisible by $b$} \}/\sim_b.
\]
Then there is a natural map $T \to S$ sending the equivalence class $[w]_{\sim_b}$ of a word to $[w]_{\sim_a}$, and this map is a $T$-map but not generally an $N$-map.
\end{example}
\begin{remark}
Note that we never require $|f(s)|=|s|$, and that in the interesting examples $|s|$ is not defined in the most obvious way.
\end{remark}
\section{Witt vectors as functors from $\mathcal{TP}$} \label{s:genWitt}
Before we say anything about how to combine the categories $\mathcal{TP}^T$, $\mathcal{TP}^N$ and $(\mathcal{TP}^R)^{\mathrm{op}}$ we describe the Witt vectors as a functor from each individual category. We will use the following result, which Hesselholt \cite{He} attributes to Dwork in the case of an ordinary truncation set.
\begin{lemma} \label{l:Dwork}
Let $S$ be a truncation poset and let $k$ be a commutative ring. For every prime $p$, choose a ring homomorphism $\phi_p : k \to k$ such that $\phi_p(a) \equiv a^p \mod p$. Then $\langle x_s \rangle$ is in the image of the ghost map if and only if $x_s \equiv \phi_p(x_{s/p}) \mod p^{\nu_p(|s|)}$ for every $p$ and every $s \in S$ with $p \mid |s|$.
\end{lemma}
The proof is identical to the proof for ordinary truncation sets and will be omitted.
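For example, take $S = \{1,2\}$, $k = \bZ$ and $\phi_2 = \mathrm{id}$ (which is allowed since $a^2 \equiv a \mod 2$). The lemma then says that $\langle x_1, x_2 \rangle$ is in the image of the ghost map if and only if $x_2 \equiv x_1 \mod 2$, and indeed
\[
w(a_1,a_2) = \langle a_1, a_1^2 + 2a_2 \rangle
\]
with $a_1^2 + 2a_2 \equiv a_1 \mod 2$.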
\subsection{Restriction and Frobenius maps}
The maps in $\mathcal{TP}^R$ will do triple duty, encoding restriction and Frobenius maps as well as diagonal maps. We start with the following definition, which we justify below.
\begin{defn} \label{d:RFdiagmap}
Let $f : S \to T$ be a map of truncation posets. Then
\[
f^* : \bW_T(k) \to \bW_S(k)
\]
is defined to be the unique map, functorial in $k$, such that the diagram
\[ \xymatrix{
\bW_T(k) \ar[r]^-w \ar[d]_{f^*} & k^T \ar[d]^{f^*_w} \\
\bW_S(k) \ar[r]^-w & k^S
} \]
commutes. Here $f^*_w$ is defined by $(f^*_w \langle x_t \rangle)_s = x_{f(s)}$.
\end{defn}
It is clear that if this defines a map on Witt vectors then $(g \circ f)^* = f^* \circ g^*$ because this holds on ghost coordinates, and that we get a functor $\bW(k) : (\mathcal{TP}^R)^{\mathrm{op}} \to Set$.
\begin{lemma} \label{l:Rmapwelldef}
Given $f : S \to T$, the composite $f^*_w \circ w$ is contained in the image of the ghost map.
\end{lemma}
\begin{proof}
We use Dwork's Lemma. For this proof we can use any $\phi_p$.
Let $(a_t) \in \bW_T(k)$ and define $\langle x_t \rangle = w(a_t) \in k^T$ and $\langle y_s \rangle = f^*_w \langle x_t \rangle \in k^S$. Then we need to check that
\[
y_s \equiv \phi_p(y_{s/p}) \mod p^{\nu_p(|s|)}
\]
whenever $p \mid |s|$. We have $y_s=x_{f(s)}$ and $y_{s/p}=x_{f(s/p)}=x_{f(s)/p}$, so because $\langle x_t \rangle$ is in the image of the ghost map we can conclude that $y_s \equiv \phi_p(y_{s/p}) \mod p^{\nu_p(|f(s)|)}$. But then the result follows, because $|s| \mid |f(s)|$ and so $\nu_p(|f(s)|) \geq \nu_p(|s|)$.
\end{proof}
It follows that Definition \ref{d:RFdiagmap} does indeed define a map $f^* : \bW_T(k) \to \bW_S(k)$. It is unique because it is unique on the ``universal Witt vector'' $(a_t)$ in the ``representing ring'' $k=\bZ[a_t]_{t \in T}$. Next we discuss how $f^*$ generalizes the diagonal map, the classical restriction map, and the classical Frobenius map.
\begin{lemma}
Let $S$ be any truncation poset and let $\nabla : S \coprod S \to S$ be the fold map. Then
\[
\nabla^* : \bW_S(k) \to \bW_{S \coprod S}(k) \cong \bW_S(k) \times \bW_S(k)
\]
is the diagonal map.
\end{lemma}
\begin{proof}
This is immediate from the definition.
\end{proof}
\begin{lemma}
Suppose $S \subset T$ is an inclusion of ordinary truncation sets and let $i : S \to T$ denote the inclusion. Then
\[
i^* : \bW_T(k) \to \bW_S(k)
\]
is the classical restriction map $R^T_S$.
\end{lemma}
\begin{proof}
This is immediate from the definition, using that the restriction map can be defined on either Witt coordinates or ghost coordinates.
\end{proof}
\begin{lemma}
Let $n \in \bN$ and let $S$ be an ordinary truncation set. Then
\[
f^* : \bW_S(k) \to \bW_{S/n}(k)
\]
induced by the multiplication by $n$ map $f : S/n \to S$ is the classical Frobenius map $F_n$.
\end{lemma}
\begin{proof}
In this case Definition \ref{d:RFdiagmap} reduces to the usual definition of the Frobenius.
\end{proof}
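For example, with $S = \{1,2,4\}$ and $n = 2$ we have $S/2 = \{1,2\}$, and on ghost coordinates
\[
F_2 \langle x_1, x_2, x_4 \rangle = \langle x_2, x_4 \rangle,
\]
the coordinate at $s \in S/2$ being the coordinate at $2s \in S$.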
Any map $f : S \to T$ factors as a composite of an iterated diagonal map, a classical Frobenius map on each connected component, and a classical restriction map on each connected component. Because each of these maps is well defined on Witt coordinates by the classical theory of Witt vectors, it is possible to prove Lemma \ref{l:Rmapwelldef} by piecing together these classical results.
\subsection{Addition and Verschiebung maps}
The maps in $\mathcal{TP}^T$ will encode addition and Verschiebung maps. Again we make the definition first and justify it later.
\begin{defn} \label{d:Vplusmap}
Let $f : S \to T$ be a $T$-map of truncation posets. Then
\[
f_\oplus : \bW_S(k) \to \bW_T(k)
\]
is defined to be the unique map, functorial in $k$, such that the diagram
\[ \xymatrix{
\bW_S(k) \ar[r]^-w \ar[d]_{f_\oplus} & k^S \ar[d]^{f^w_\oplus} \\
\bW_T(k) \ar[r]^-w & k^T
} \]
commutes. Here $f^w_\oplus$ is the map defined by
\[
(f^w_\oplus \langle x_s \rangle)_t = \sum_{s \in f^{-1}(t)} \frac{|t|}{|s|} x_s.
\]
\end{defn}
Note that we needed the finiteness condition in Definition \ref{d:Tmap} to define $f_\oplus^w$.
\begin{lemma} \label{l:Vmapwelldef}
Given a $T$-map $f : S \to T$, the composite $f_\oplus^w \circ w$ is contained in the image of the ghost map.
\end{lemma}
\begin{proof}
Here it is convenient to let $k=\bZ[a_s]_{s \in S}$ and to let $(a_s) \in \bW_S(k)$ be the ``canonical Witt vector''. As in the proof of Lemma \ref{l:Rmapwelldef}, let $\langle x_s \rangle = w(a_s)$ and $\langle y_t \rangle = f_\oplus^w \langle x_s \rangle$. Also, let $\phi_p : k \to k$ be the ring map defined by mapping $a_s$ to $a_s^p$.
Suppose $\nu_p(|t|) \geq 1$. We need to verify that $y_t \equiv \phi_p(y_{t/p}) \mod p^{\nu_p(|t|)}$. We have
\[
y_t = \sum_{f(s)=t} \frac{|t|}{|s|} x_s = \sum_{f(s)=t} \frac{|t|}{|s|} \sum_{u \mid s} |u| a_u^{s/u}
\]
and
\[
y_{t/p} = \sum_{f(s')=t/p} \frac{|t|/p}{|s'|} x_{s'} = \sum_{f(s')=t/p} \frac{|t|/p}{|s'|} \sum_{v \mid s'} |v| a_v^{s'/v}.
\]
Hence
\[
\phi_p(y_{t/p}) = \sum_{f(s')=t/p} \frac{|t|/p}{|s'|} \sum_{v \mid s'} |v| a_v^{ps'/v}.
\]
Each term in $\phi_p(y_{t/p})$ labelled by $s'$ and $v$ corresponds to a term in $y_t$ labelled by $s=ps'$ and $u=v$. Note that here we used the fibration condition in Definition \ref{d:Tmap}. The terms of $y_t$ that do not correspond to a term of $\phi_p(y_{t/p})$ are precisely the terms labelled by $(s,u)$ with $\nu_p(|u|) = \nu_p(|s|)$. But in those cases the coefficient $\frac{|t|}{|s|} \cdot |u|$ of $a_u^{s/u}$ is divisible by $p^{\nu_p(|t|)}$, so the result follows.
\end{proof}
It follows that Definition \ref{d:Vplusmap} does indeed define a unique map $f_\oplus : \bW_S(k) \to \bW_T(k)$.
\begin{lemma}
Let $S$ be an ordinary truncation set and let $\nabla : S \coprod S \to S$ be the fold map. Then
\[
\bW_S(k) \times \bW_S(k) \cong \bW_{S \coprod S}(k) \xrightarrow{\nabla_\oplus} \bW_S(k)
\]
is the classical addition map on $\bW_S(k)$.
\end{lemma}
\begin{proof}
This follows immediately from Definition \ref{d:Vplusmap} because in this case each $\frac{|t|}{|s|} = 1$ and the classical addition map on Witt vectors is defined by using the addition on ghost coordinates.
\end{proof}
Of course the fold map also furnishes $\bW_S(k)$ with an addition map when $S$ is a truncation poset, in the same way.
\begin{lemma}
Let $S$ be an ordinary truncation set, let $n \in \bN$, and let $f : S/n \to S$ be the multiplication by $n$ map. Then
\[
\bW_{S/n}(k) \xrightarrow{f_\oplus} \bW_S(k)
\]
is the classical Verschiebung map $V_n$.
\end{lemma}
\begin{proof}
In this case $f$ is injective and each $\frac{|t|}{|s|} = n$, so Definition \ref{d:Vplusmap} says that on ghost coordinates we have
\[
(f^w_\oplus \langle x_s \rangle)_t = \begin{cases} nx_{t/n} \quad & \textnormal{if $n \mid t$} \\ 0 \quad & \textnormal{if $n \nmid t$} \end{cases}.
\]
But this is one equivalent definition of $V_n$.
\end{proof}
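For example, with $S = \{1,2\}$ and $n = 2$ we have $S/2 = \{1\}$, and
\[
V_2 : \bW_{\{1\}}(k) \to \bW_{\{1,2\}}(k)
\]
is given by $\langle x_1 \rangle \mapsto \langle 0, 2x_1 \rangle$ on ghost coordinates and by $a \mapsto (0,a)$ on Witt coordinates.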
It follows immediately that the Verschiebung map $V_n$ is additive, because the diagram
\[ \xymatrix{
S/n \coprod S/n \ar[r]^-{n \coprod n} \ar[d]_\nabla & S \coprod S \ar[d]^\nabla \\
S/n \ar[r]^-n & S
} \]
commutes in $\mathcal{TP}^T$. We will discuss the relations between maps in $\mathcal{TP}^T$ and $\mathcal{TP}^R$ in Section \ref{s:Mackey} below.
Any $T$-map factors as a composite of addition maps and Verschiebung maps, so it is possible to combine the existence of addition and Verschiebung maps on classical Witt vectors to prove Lemma \ref{l:Vmapwelldef}.
\subsection{Multiplication and Norm maps}
Finally, the maps in $\mathcal{TP}^N$ will encode multiplication and norm maps. Once again we start with the definition. Recall the definition of $\widehat{f^{-1}}(t)$ from Definition \ref{d:Nmap}.
\begin{defn} \label{d:Ntimesmap}
Let $f : S \to T$ be an $N$-map. Then
\[
f_\otimes : \bW_S(k) \to \bW_T(k)
\]
is defined to be the unique map, functorial in $k$, such that the diagram
\[ \xymatrix{
\bW_S(k) \ar[r]^-w \ar[d]_{f_\otimes} & k^S \ar[d]^{f^w_\otimes} \\
\bW_T(k) \ar[r]^-w & k^T
} \]
commutes. Here $f^w_\otimes$ is the map defined by
\[
(f^w_\otimes \langle x_s \rangle)_t = \prod_{s \in \widehat{f^{-1}}(t)} x_s^{|t|/|s|}.
\]
\end{defn}
Note that we needed the strong finiteness condition in Definition \ref{d:Nmap} to make sense of the map $f^w_\otimes$.
\begin{lemma} \label{l:Nmapwelldef}
Given an $N$-map $f : S \to T$, the composite $f^w_\otimes \circ w$ is contained in the image of the ghost map.
\end{lemma}
\begin{proof}
Let $k=\bZ[a_s]_{s \in S}$, $(a_s) \in \bW_S(k)$, $\langle x_s \rangle = w(a_s)$ and $\langle y_t \rangle = f_\otimes^w \langle x_s \rangle$ as in the proof of Lemma \ref{l:Vmapwelldef} above, and let $\phi_p$ be the ring map defined by $a_s \mapsto a_s^p$. To unclutter the notation we write $t/s$ for $|t|/|s|$.
Suppose $\nu_p(|t|) \geq 1$. We need to verify that $y_t \equiv \phi_p(y_{t/p}) \mod p^{\nu_p(|t|)}$. We have
\[
y_t = \prod_{s \in \widehat{f^{-1}}(t)} x_s^{t/s} = \prod_{s \in \widehat{f^{-1}}(t)} \Big( \sum_{u \mid s} |u|a_u^{s/u} \Big)^{t/s}
\]
and
\[
y_{t/p} = \prod_{s' \in \widehat{f^{-1}}(t/p)} x_{s'}^{t/ps'} = \prod_{s' \in \widehat{f^{-1}}(t/p)} \Big( \sum_{v \mid s'} |v|a_v^{s'/v} \Big)^{t/ps'}.
\]
It follows that
\[
\phi_p(y_{t/p}) = \prod_{s' \in \widehat{f^{-1}}(t/p)} \Big( \sum_{v \mid s'} |v|a_v^{ps'/v} \Big)^{t/ps'}.
\]
To proceed we need to understand the relationship between $\widehat{f^{-1}}(t)$ and $\widehat{f^{-1}}(t/p)$. Consider $s' \in \widehat{f^{-1}}(t/p)$. Then we get an $s \in \widehat{f^{-1}}(t)$ as in the following two cases.
\begin{enumerate}
\item $p \nmid \frac{|f(s')|}{|t/p|}$. Then $s = ps' \in \widehat{f^{-1}}(t)$.
\item $p \mid \frac{|f(s')|}{|t/p|}$. Then $s=s' \in \widehat{f^{-1}}(t)$.
\end{enumerate}
Note that for this we had to use the strong fibration condition in Definition \ref{d:Nmap}.
In each case it is straightforward to verify that the factor corresponding to $s'$ in $\phi_p(y_{t/p})$ and the factor corresponding to $s$ in $y_t$ are congruent $\mod p^{\nu_p(|t|)}$.
\end{proof}
As for the other types of maps it follows that Definition \ref{d:Ntimesmap} defines a unique map $f_\otimes : \bW_S(k) \to \bW_T(k)$.
\begin{lemma}
Let $S$ be an ordinary truncation set and let $\nabla : S \coprod S \to S$ be the fold map. Then
\[
\bW_S(k) \times \bW_S(k) \cong \bW_{S \coprod S}(k) \xrightarrow{\nabla_\otimes} \bW_S(k)
\]
is the classical multiplication map on $\bW_S(k)$.
\end{lemma}
\begin{proof}
This is clear because the classical multiplication map is defined via the multiplication map on ghost coordinates.
\end{proof}
Of course the fold map also furnishes $\bW_S(k)$ with a multiplication map when $S$ is a truncation poset, in the same way.
\begin{lemma}
Let $S$ be an ordinary truncation set, let $n \in \bN$, and let $f : S \to \langle n \rangle S$ be the multiplication by $n$ map. Then
\[
\bW_S(k) \xrightarrow{f_\otimes} \bW_{\langle n \rangle S}(k)
\]
is the ``classical'' norm map $N_n$.
\end{lemma}
\begin{proof}
A comparison of the map $f_\otimes^w$ with the formula for $N_n$ on ghost coordinates from \cite{An_norm} shows that they agree.
\end{proof}
\section{Combining $T$-maps and $R$-maps} \label{s:Mackey}
In this section we define a category $\mathcal{TP}^{TR}$ by combining the categories $\mathcal{TP}^T$ and $(\mathcal{TP}^R)^{\mathrm{op}}$. The following definition is more complicated than it needs to be; we present it this way in anticipation of the category $\mathcal{TP}^{TNR}$.
\begin{defn}
The category $\mathcal{TP}^{TR}$ has objects the truncation posets, and a morphism $S \to T$ in $\mathcal{TP}^{TR}$ is an equivalence class of diagrams
\[
S \overset{f_1}{\longrightarrow} A_1 \overset{f_2}{\longrightarrow} A_2 \overset{f_3}{\longrightarrow} \ldots \overset{f_n}{\longrightarrow} A_n \overset{f_{n+1}}{\longrightarrow} T
\]
where each $f_i$ is a map in one of $\mathcal{TP}^T$ and $(\mathcal{TP}^R)^{\mathrm{op}}$. The equivalence relation on such diagrams is generated by the following types of relations:
\begin{enumerate}
\item Isomorphism of diagrams.
\item Insertion of an identity morphism.
\item Composition if $f_i$ and $f_{i+1}$ are in the same category $\mathcal{TP}^T$ or $(\mathcal{TP}^R)^{\mathrm{op}}$.
\item Commuting an $R$-map past a $T$-map as in Definition \ref{d:commuteRT} below.
\end{enumerate}
\end{defn}
Given an $R$-map $f : S \to T$ we abuse notation and write $f^* : T \to S$ for the corresponding map in $\mathcal{TP}^{TR}$, and given a $T$-map $f : S \to T$ we write $f_\oplus : S \to T$ for the corresponding map in $\mathcal{TP}^{TR}$. (Hence with our notation the functor $\bW(k)$ takes $f^*$ to $f^*$ and $f_\oplus$ to $f_\oplus$.) We need to explain how to commute an $R$-map past a $T$-map.
\begin{defn} \label{d:commuteRT}
Given a diagram
\[
S \xrightarrow{f} A \xleftarrow{g} T
\]
with $f \in \mathcal{TP}^T$ and $g \in \mathcal{TP}^R$, we declare the composite $g^* \circ f_\oplus$ to be equal to the composite $f'_\oplus \circ (g')^*$, where
\[
S \xleftarrow{g'} f_\oplus^* T \xrightarrow{f'} T
\]
and $f_\oplus^* T$ is defined by
\[
f_\oplus^* T = \Big\{ (s,t, \xi) \quad | \quad f(s)=g(t) \textnormal{ and } \xi \in C_m, \,\, m=\gcd\Big(\frac{|f(s)|}{|s|}, \frac{|g(t)|}{|t|}\Big) \Big\}.
\]
Here $f' : f_\oplus^* T \to T$ and $g' : f_\oplus^* T \to S$ are the obvious maps, sending $(s,t,\xi)$ to $t$ and $s$, respectively. The norm on $f_\oplus^* T$ is defined by
\[
|(s,t,\xi)| = \gcd(|s|,|t|),
\]
and we say $(s_1,t_1, \xi_1) \mid (s_2,t_2,\xi_2)$ if $s_1 \mid s_2$, $t_1 \mid t_2$, and $\xi_1=\xi_2$. Here we identify $C_{m_1}$ and $C_{m_2}$, using that $|f(s_2)|/|s_2|=|f(s_1)|/|s_1|$ and $|g(t_2)|/|t_2|=|g(t_1)|/|t_1|$.
\end{defn}
In other words, we take the usual pullback $S \times_A T = f^* T$ but count each $(s,t)$ with multiplicity to account for the fact that $|(s,t)|=\gcd(|s|,|t|)$ rather than the expected $|f(s)|=|g(t)|$. The cyclic group can be thought of as a bookkeeping device.
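As a concrete illustration of the multiplicity, take $S = T = \{1\}$ and $A = \{1,2\}$, with both $f : S \to A$ and $g : T \to A$ the multiplication-by-$2$ map. Then $m = \gcd(2,2) = 2$, so
\[
f_\oplus^* T = \{(1,1,\xi) \quad | \quad \xi \in C_2 \}
\]
has two elements, while the ordinary pullback $S \times_A T$ has only one. On ghost coordinates both composites send $x_1$ to $2x_1$: the composite $g^* \circ f_\oplus$ first produces the ghost coordinate $(f^w_\oplus \langle x_1 \rangle)_2 = 2x_1$ and then restricts along $g$, while $f'_\oplus \circ (g')^*$ first duplicates $x_1$ over the two elements of $f_\oplus^* T$ and then adds the two copies.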
\begin{lemma}
With notation as above, $f_\oplus^* T$ is a truncation poset, $g' : f_\oplus^* T \to S$ is an $R$-map, and $f' : f_\oplus^* T \to T$ is a $T$-map.
\end{lemma}
\begin{proof}
This is straightforward. For example, given $(s,t,\xi) \in f_\oplus^* T$ and $t' \in T$ with $t \mid t'$ we need to find $(s',t',\xi') \in f_\oplus^* T$ with $(s,t,\xi) \mid (s',t',\xi')$. Because $f$ is a $T$-map, so in particular a fibration, there is some $s' \in S$ with $s \mid s'$ and $f(s')=g(t')$. Upon identifying $C_m$ with $C_{m'}$, where $m=\gcd\big( \frac{|f(s)|}{|s|}, \frac{|g(t)|}{|t|} \big)$ as before and $m'=\gcd\big( \frac{|f(s')|}{|s'|}, \frac{|g(t')|}{|t'|} \big)$, we can take $\xi'=\xi$, and $(s',t',\xi')$ is the required element of $f_\oplus^* T$.
\end{proof}
Because we can always commute an $R$-map past a $T$-map it follows that the category $\mathcal{TP}^{TR}$ has a much simpler description:
\begin{prop}
Any map in the category $\mathcal{TP}^{TR}$ defined above can be written uniquely, up to isomorphism of spans, as a composite $g_\oplus \circ f^*$ for a diagram
\[
S \xleftarrow{f} A \xrightarrow{g} T.
\]
\end{prop}
Definition \ref{d:commuteRT} is justified by the following result.
\begin{lemma}
With notation as in Definition \ref{d:commuteRT} the composite
\[
\bW_S(k) \xrightarrow{f_\oplus} \bW_A(k) \xrightarrow{g^*} \bW_T(k)
\]
is equal to the composite
\[
\bW_S(k) \xrightarrow{(g')^*} \bW_{f_\oplus^* T}(k) \xrightarrow{f'_\oplus} \bW_T(k).
\]
\end{lemma}
\begin{proof}
It suffices to prove this on ghost coordinates. Take $\langle x_s \rangle \in k^S$, and suppose the first composite maps this to $\langle y_t \rangle$ and the second composite maps it to $\langle y_t' \rangle$. Then we get
\begin{eqnarray*}
y_t & = & \sum_{s \in f^{-1}(g(t))} \frac{|g(t)|}{|s|} x_s \\
& = & \sum_{s \in f^{-1}(g(t))} \frac{|t|}{\gcd(|s|,|t|)} \cdot \gcd\Big(\frac{|f(s)|}{|s|}, \frac{|g(t)|}{|t|}\Big) x_s \\
& = & \sum_{(s,t,\xi) \in f_\oplus^* T} \frac{|t|}{\gcd(|s|,|t|)} x_s \\
& = & y_t',
\end{eqnarray*}
which proves the result.
\end{proof}
We have now incorporated restriction maps, Frobenius maps, addition maps and Verschiebung maps in one category of truncation posets. Putting it all together we have proved the following:
\begin{thm} \label{t:TRfunctor}
Let $k$ be a commutative ring. There is a functor
\[
\bW(k) : \mathcal{TP}^{TR} \to Set
\]
sending $S$ to $\bW_S(k)$, such that the composite $(\mathcal{TP}^R)^{\mathrm{op}} \to \mathcal{TP}^{TR} \to Set$ agrees with the functor in Definition \ref{d:RFdiagmap} and the composite $\mathcal{TP}^T \to \mathcal{TP}^{TR} \to Set$ agrees with the functor in Definition \ref{d:Vplusmap}.
\end{thm}
\section{Combining $T$-maps, $N$-maps and $R$-maps} \label{s:Tambara}
Finally we define a category $\mathcal{TP}^{TNR}$ by combining $\mathcal{TP}^T$, $\mathcal{TP}^N$ and $(\mathcal{TP}^R)^{\mathrm{op}}$.
\begin{defn} \label{d:TrunTNR}
The category $\mathcal{TP}^{TNR}$ has objects the truncation posets, and a morphism $S \to T$ in $\mathcal{TP}^{TNR}$ is an equivalence class of diagrams
\[
S \overset{f_1}{\longrightarrow} A_1 \overset{f_2}{\longrightarrow} A_2 \overset{f_3}{\longrightarrow} \ldots \overset{f_n}{\longrightarrow} A_n \overset{f_{n+1}}{\longrightarrow} T
\]
where each $f_i$ is a map in one of $\mathcal{TP}^T$, $\mathcal{TP}^N$ and $(\mathcal{TP}^R)^{\mathrm{op}}$. The equivalence relation on such diagrams is generated by the following:
\begin{enumerate}
\item Isomorphism of diagrams.
\item Insertion of an identity morphism.
\item Composition if $f_i$ and $f_{i+1}$ are in the same category $\mathcal{TP}^T$, $\mathcal{TP}^N$ or $(\mathcal{TP}^R)^{\mathrm{op}}$.
\item Commuting an $R$-map past a $T$-map as in Definition \ref{d:commuteRT} above.
\item Commuting an $R$-map past an $N$-map as in Definition \ref{d:commuteNR} below if the pullback $f_\otimes^* T$ exists as in Definition \ref{d:multpullback2} below.
\item Commuting an $N$-map past a $T$-map as in Definition \ref{d:commuteTN} below.
\end{enumerate}
\end{defn}
Given an $N$-map $f : S \to T$ we write $f_\otimes : S \to T$ for the corresponding map in $\mathcal{TP}^{TNR}$. Note that it is not always possible to commute an $R$-map past an $N$-map, as the following example shows.
\begin{example} \label{ex:notcommutingRpastN}
Let $f : \{1,3\} \to \{1,2,3,6\}$ be the multiplication by $2$ map and let $g : \{1,2,3\} \to \{1,2,3,6\}$ be the inclusion. Then the composite
\[
\bW_{\{1,3\}}(k) \xrightarrow{f_\otimes} \bW_{\{1,2,3,6\}}(k) \xrightarrow{g^*} \bW_{\{1,2,3\}}(k)
\]
is given on ghost coordinates by
\[
\langle x_1, x_3 \rangle \mapsto \langle x_1,x_1^2, x_3,x_3^2 \rangle \mapsto \langle x_1,x_1^2, x_3 \rangle.
\]
But it is impossible to define this map as a composite of an $R$-map followed by an $N$-map. With an $R$-map we can make as many copies as we want of $\langle x_1,x_3 \rangle$, $\langle x_1 \rangle$ and $\langle x_3 \rangle$, using the truncation set $\coprod_{i_1} \{1,3\} \coprod_{i_2} \{1\} \coprod_{i_3} \{3\}$. But this truncation set only maps to $\{1,2,3\}$ via an $N$-map if $i_1=i_2=i_3=0$.
\end{example}
The analogue of the additive pullback $f^*_\oplus T$ considered in Definition \ref{d:commuteRT} above is the following:
\begin{defn} \label{d:multpullback1}
Given a diagram
\[
S \xrightarrow{f} A \xleftarrow{g} T,
\]
with $f \in \mathcal{TP}^N$ and $g \in \mathcal{TP}^R$, let
\[
f_\otimes^* T = \Big\{(s,t,\xi) \quad | \quad g(t) \mid f(s),\,\, s \textnormal{ minimal, } t \textnormal{ maximal, } \xi \in C_m \Big\}.
\]
Here $m = \gcd \big( \frac{|f(s)|}{|s|}, \frac{|g(t)|}{|t|} \big)$ as before, $s$ minimal means that there is no $s' \mid s$ with $s' \neq s$ and $g(t) \mid f(s')$ (for $t$ fixed), and $t$ maximal means that there is no $t'$ with $t \mid t'$, $t \neq t'$, and $g(t') \mid f(s)$ (for $s$ fixed).
\end{defn}
This is not necessarily a good definition, because if we try to carry this out in the situation in Example \ref{ex:notcommutingRpastN} we find that the obvious maps $f' : f^*_\otimes T \to T$ and $g' : f^*_\otimes T \to S$ are not maps of truncation posets.
\begin{defn} \label{d:multpullback2}
We say the pullback $f^*_\otimes T$ from Definition \ref{d:multpullback1} exists (as a truncation poset) if the following additional condition is satisfied. For any $(s_1,t_1,\xi_1)$ and $(s_2,t_2,\xi_2)$ in $f^*_\otimes T$ with $s_1$ and $s_2$ in the same connected component of $S$ and $t_1$ and $t_2$ in the same connected component of $T$ we have $|s_1||t_2|=|s_2||t_1|$. We then define
\[
|(s,t,\xi)| = \gcd(|s|,|t|)
\]
and
\[
(s_1,t_1,\xi_1) \mid (s_2, t_2,\xi_2) \qquad \textnormal{if} \qquad s_1 \mid s_2, t_1 \mid t_2 \textnormal{ and } \xi_1=\xi_2,
\]
where as usual we have identified $C_{m_1}$ and $C_{m_2}$.
\end{defn}
The pullback $f_\otimes^* T$ often exists. For example, the following gives a sufficient condition.
\begin{defn}
We say a truncation poset $T$ \emph{has joins} if $t \mid t_1$ and $t \mid t_2$ implies that there exists $t'$ with $t_1 \mid t'$ and $t_2 \mid t'$.
\end{defn}
For example, for any $n \in \bN$ the truncation set $\langle n \rangle$ has joins. The truncation set $\{1,2,3\}$ does not have joins.
\begin{lemma} \label{l:joinsimpliespullback}
Suppose $g : T \to A$ is an $R$-map and suppose $T$ has joins. Then the pullback $f_\otimes^* T$ exists for any $N$-map $f : S \to A$.
\end{lemma}
\begin{proof}
Suppose we have $(s_1,t_1,\xi_1)$ and $(s_2,t_2,\xi_2)$ in $f_\otimes^* T$ with $s_1$ and $s_2$ in the same connected component of $S$ and $t_1$ and $t_2$ in the same connected component of $T$. Then we need to show that $|s_1||t_2| = |s_2||t_1|$. Let $t'$ be the join of $t_1$ and $t_2$.
Because $f$ is an $N$-map, there are $s_i' \in S$ for $i=1,2$ with $s_i \mid s_i'$ and $g(t') \mid f(s_i')$, and with $s_i'$ minimal. But because $s_1'$ and $s_2'$ are in the same connected component of $S$ and satisfy the same minimality condition we must have $s_1'=s_2'$. It follows that
\[
\frac{|f(s_1)|}{|g(t_1)|} = \frac{|f(s_1')|}{|g(t')|} = \frac{|f(s_2')|}{|g(t')|} = \frac{|f(s_2)|}{|g(t_2)|}
\]
and we are done.
\end{proof}
\begin{defn} \label{d:commuteNR}
Let
\[
S \xrightarrow{f} A \xleftarrow{g} T
\]
be a diagram with $f \in \mathcal{TP}^N$ and $g \in \mathcal{TP}^R$, and suppose the pullback $f_\otimes^* T$ exists. Then we declare the composite $g^* \circ f_\otimes$ to be equal to the composite $f'_\otimes \circ (g')^*$.
\end{defn}
We justify the above definition with the following two results.
\begin{lemma} \label{l:pullbackofNmapisnmap}
Let $f : S \to A$ be an $N$-map, let $g : T \to A$ be an $R$-map, and suppose the pullback $f_\otimes^* T$ exists. Then $f' : f_\otimes^* T \to T$ is an $N$-map.
\end{lemma}
\begin{proof}
Suppose $(s,t,\xi) \in f^*_\otimes T$ and that $\bar{t} \in T$ is in the same connected component as $t$. We need to find $(s',t',\xi') \in f^*_\otimes T$ in the same connected component as $(s,t,\xi)$ with $\bar{t} \mid t'$.
Because $f$ is an $N$-map, we get an $s' \in S$ in the same connected component as $s$ with $g(\bar{t}) \mid f(s')$. We can assume that $s'$ is minimal. Then we define $t'$ to be maximal with the property that $\bar{t} \mid t'$ and $g(t') \mid f(s')$. Then (after identifying $C_m$ and $C_{m'}$), $(s',t',\xi) \in f^*_\otimes T$ is the desired element.
Verifying the finiteness condition is straightforward and we omit it.
\end{proof}
\begin{lemma}
With assumptions as in Lemma \ref{l:pullbackofNmapisnmap}, the composite
\[
\bW_S(k) \xrightarrow{f_\otimes} \bW_A(k) \xrightarrow{g^*} \bW_T(k)
\]
is equal to the composite
\[
\bW_S(k) \xrightarrow{(g')^*} \bW_{f_\otimes^* T}(k) \xrightarrow{f'_\otimes} \bW_T(k).
\]
\end{lemma}
\begin{proof}
It suffices to show that the two maps agree on ghost coordinates. Suppose the first composite sends $\langle x_s \rangle$ to $\langle y_t \rangle$ and the second composite sends it to $\langle y_t' \rangle$. We find that
\[
y_t = \prod_{s \in \widehat{f^{-1}}(g(t))} x_s^{|g(t)|/|s|},
\]
while
\[
y_t' = \prod_{(s,t',\xi) \in \widehat{(f')^{-1}}(t)} x_s^{|t|/|(s,t',\xi)|}.
\]
The point is that if $(s,t',\xi)$ is in $\widehat{(f')^{-1}}(t)$ then $s \in \widehat{f^{-1}}(g(t))$, and conversely, if $s \in \widehat{f^{-1}}(g(t))$ then there is a unique $t' \in T$ with $(s,t', \xi) \in \widehat{(f')^{-1}}(t)$ for any $\xi \in C_m$. Hence
\[
y_t' = \prod_{s \in \widehat{f^{-1}}(g(t))} \Big[ x_s^{|t|/\gcd(|s|,|t'|)} \Big]^{\gcd(|f(s)|/|s|,|g(t)|/|t|)}.
\]
Hence it suffices to show that
\[
|s||t|\gcd\Big(\frac{|f(s)|}{|s|}, \frac{|g(t)|}{|t|}\Big) = |g(t)| \gcd(|s|,|t'|).
\]
But this is easily verified using that $\gcd(|f(s)|/|g(t)|, |s|)=1$ and $\gcd(|s|,|t'|)=\gcd(|s|,|t|)$.
\end{proof}
This finishes our discussion of the composite of an $R$-map followed by an $N$-map. There is one more thing to do, namely to describe the composite of a $T$-map followed by an $N$-map. If we think of a $T$-map as addition and an $N$-map as multiplication this can be thought of as a distributivity law. This is somewhat combinatorial.
\begin{defn} \label{d:commuteTN}
Suppose $f : S \to A$ is a $T$-map and $g : A \to T$ is an $N$-map. Then we declare the composite $g_\otimes \circ f_\oplus$ to be equal to the composite $t_\oplus \circ n_\otimes \circ r^*$, where the maps $t$, $n$ and $r$ are defined by the exponential diagram
\[ \xymatrix{
S \ar[d]_f & E \ar[l]_-r \ar[r]^-n & D \ar[d]^t \\
A \ar[rr]^-g & & T
} \]
\end{defn}
It remains to say what we mean by an exponential diagram. Our definition is dictated by the proof of Lemma \ref{l:Wittofexp} below. We define $D$ as follows. First, let
\[
D_t' = \Big\{(s_{a,\xi}, \zeta_{a,\xi})_{a \in \widehat{g^{-1}}(t),\, \xi \in C_{t/a}} \quad | \quad s_{a,\xi} \in f^{-1}(a),\,\, \zeta_{a,\xi} \in C_{a/s_{a,\xi}} \Big\},
\]
where $C_{t/a}$ denotes a cyclic group of order $|t|/|a|$ and $C_{a/s_{a,\xi}}$ denotes a cyclic group of order $|a|/|s_{a,\xi}|$. Note that on ghost coordinates the composite of $f_\oplus$ and $g_\otimes$ is given on the $t$'th coordinate by a sum indexed over $D_t'$, as in the proof of Lemma \ref{l:Wittofexp} below.
But this sum is too large, because $t_\oplus : \bW_D(k) \to \bW_T(k)$ also has a coefficient $\frac{|t|}{|d|}$. To account for that we define $D_t = D_t'/C_t$ and let $D = \coprod_{t \in T} D_t$. Here $C_t$ is the cyclic group of order $|t|$, and the canonical generator of $C_t$ acts on $D_t'$ by cyclically permuting each tuple $(s_{a,\xi},\zeta_{a,\xi})$ for fixed $a$ as well as multiplying the last $\zeta_{a,\xi}$ (which is brought around to the front) by the canonical generator of $C_{a/s_{a,\xi}}$.
We say that $(t, (s_{a,\xi},\zeta_{a,\xi}))$ divides $(t', (s'_{a',\xi'},\zeta'_{a',\xi'}))$ if $t \mid t'$ and in addition the following conditions hold. For each $a \in \widehat{g^{-1}}(t)$ we use Lemma \ref{l:minimaldiv} to find $a' \in \widehat{g^{-1}}(t')$ with $a \mid a'$. We then have a canonical map $C_{t'/a'} \to C_{t/a}$. We require that $s_{a,\xi} \mid s'_{a',\xi'}$ for each $\xi' \mapsto \xi$. We also require that $\zeta'_{a',\xi'} = \zeta_{a,\xi}$, using the identification $C_{a/s_{a,\xi}} \cong C_{a'/s'_{a',\xi'}}$. We define $|(t,(s_{a,\xi}, \zeta_{a,\xi}))|$ to be the maximal $d$ such that there exists $(t', (s'_{a',\xi'}, \zeta'_{a',\xi'}))$ dividing it with $t'=t/d$.
The map $D \to T$ is the obvious map, sending $(t,(s_{a,\xi}, \zeta_{a,\xi}))$ to $t$. We need the following result in the proof of Lemma \ref{l:Wittofexp}.
\begin{lemma}
Regard $D$ as a set of equivalence classes of elements in $D'$. The cardinality of the equivalence class of $(t,(s_{a,\xi}, \zeta_{a,\xi})) \in D$ is equal to $\frac{|t|}{|(t,(s_{a,\xi}, \zeta_{a,\xi}))|}$.
\end{lemma}
\begin{proof}
This is equivalent to saying that $C_{|t|/e} \subset C_t$ acts trivially on $(t,(s_{a,\xi}, \zeta_{a,\xi}))$ if and only if $(t,(s_{a,\xi}, \zeta_{a,\xi}))$ is divisible by $e$.
Suppose $C_{|t|/e}$ acts trivially on $(t,(s_{a,\xi}, \zeta_{a,\xi}))$ and let $\xi_t$ be the canonical generator of $C_t$. Then $\xi_t^e$ is the canonical generator of $C_{|t|/e}$. It follows that we must have $s_{a,\xi_t^i} = s_{a,\xi_t^{e+i}}$ for all $a \in \widehat{g^{-1}}(t)$ and $\xi_t^i \in C_{t/a}$. Hence $(s_{a,\xi})$ is determined by $s_{a,1},\ldots,s_{a,\xi_t^{\bar{g}-1}}$ where $\bar{g} = \gcd \big( e, \frac{|t|}{|a|} \big)$.
It also follows that $(\zeta_{a,\xi})$ is determined by $\zeta_{a,1},\ldots,\zeta_{a,\xi_t^{\bar{g}-1}}$ and that $\frac{|a|}{\gcd(|a|,e)} \mid |s_{a,\xi}|$ for each $(a,\xi)$.
Then we can define $(t', (s_{a',\xi'}, \zeta_{a',\xi'}))$ as follows. First, let $t'=t/e$. For each $a \in \widehat{g^{-1}}(t)$ there is a corresponding $a' \in \widehat{g^{-1}}(t')$, and we have $\frac{|t'|}{|a'|}=\gcd \big( e, \frac{|t|}{|a|} \big)$. Then we can take $s_{a',\xi'}=\frac{s_{a,\xi}}{|a|/\gcd(|a|,e)}$ for any $\xi \in C_{t/a}$ mapping to $\xi' \in C_{t'/a'}$ and $\zeta_{a',\xi'}=\zeta_{a,\xi}$ (with the usual identification of cyclic groups).
The other implication is essentially the above argument in reverse.
\end{proof}
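The coefficient $|t|/|(t,(s_{a,\xi},\zeta_{a,\xi}))|$ in the lemma above is an orbit size for the twisted cyclic action on $D_t'$. As an informal illustration (not part of the proof), the following Python sketch brute-forces the simplest instance, with a single $a$ satisfying $|t|/|a| = n$ and a single $s$ satisfying $|a|/|s| = m$, so that $D_t' = (\mathbb{Z}/m)^n$ and $|t| = nm$; the toy parameters $n=3$, $m=4$ are our choice. It checks that every orbit size divides $|t|$ and that the orbits partition $D_t'$.

```python
from itertools import product

def orbit(t0, n, m):
    """Orbit of a tuple in (Z/m)^n under the generator of C_{n*m}:
    cyclic shift, with the wrapped-around entry multiplied by the
    canonical generator of C_m (written additively: +1 mod m)."""
    seen, cur = [], t0
    while cur not in seen:
        seen.append(cur)
        cur = ((cur[-1] + 1) % m,) + cur[:-1]
    return seen

n, m = 3, 4                      # |t|/|a| = n and |a|/|s| = m, so |t| = n*m
remaining = set(product(range(m), repeat=n))
orbit_sizes = []
while remaining:
    o = orbit(next(iter(remaining)), n, m)
    orbit_sizes.append(len(o))
    remaining -= set(o)

assert all((n * m) % size == 0 for size in orbit_sizes)  # sizes divide |t|
assert sum(orbit_sizes) == m ** n                        # orbits partition D_t'
```

Note that applying the generator $n$ times adds $1$ to every entry, so the action indeed factors through a cyclic group of order $nm$.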
The definition of $E$ is similar, with
\[
E = \coprod_{b \in A, \, \xi \in C_{g(b)/b}} \Big\{(s_{a,\xi}, \zeta_{a,\xi})_{a \in \widehat{g^{-1}}(g(b)),\, \xi \in C_{g(b)/a}} \Big\}\big/C_{g(b)}.
\]
The map $r : E \to S$ sends $(b, \xi,(s_{a,\xi}, \zeta_{a,\xi}))$ to $s_{b,\xi}$ and the map $n : E \to D$ sends $(b,\xi,(s_{a,\xi},\zeta_{a,\xi}))$ to $(g(b),(s_{a,\xi}, \zeta_{a,\xi}))$.
Now it is tedious but straightforward to verify the following.
\begin{lemma}
With the above definitions, $D$ and $E$ are truncation posets, $r : E \to S$ is an $R$-map, $n : E \to D$ is an $N$-map, and $t : D \to T$ is a $T$-map.
\end{lemma}
The motivation for the definition of the exponential diagram lies in the next result.
\begin{lemma} \label{l:Wittofexp}
Suppose we are given an exponential diagram as above. Then the composite
\[
\bW_S(k) \xrightarrow{f_\oplus} \bW_A(k) \xrightarrow{g_\otimes} \bW_T(k)
\]
is equal to the composite
\[
\bW_S(k) \xrightarrow{r^*} \bW_E(k) \xrightarrow{n_\otimes} \bW_D(k) \xrightarrow{t_\oplus} \bW_T(k).
\]
\end{lemma}
\begin{proof}
We can compute using ghost coordinates. Suppose the first composite sends $\langle x_s \rangle$ to $\langle y_t \rangle$ and the second composite sends $\langle x_s \rangle$ to $\langle y_t' \rangle$. We compute
\begin{eqnarray*}
y_t & = & \prod_{a \in \widehat{g^{-1}}(t)} \Big( \sum_{s \in f^{-1}(a)} \frac{|a|}{|s|} x_s \Big)^{|t|/|a|} \\
& = & \prod_{a \in \widehat{g^{-1}}(t), \xi \in C_{t/a}} \sum_{s \in f^{-1}(a), \zeta \in C_{a/s}} x_s \\
& = & \sum_{(s_{a,\xi}, \zeta_{a,\xi}) \in D_t'} \prod_{(a,\xi)} x_{s_{a,\xi}} \\
& = & \sum_{(s_{a,\xi}, \zeta_{a,\xi}) \in D_t} \frac{|t|}{|(t,(s_{a,\xi},\zeta_{a,\xi}))|} \prod_{(a,\xi)} x_{s_{a,\xi}},
\end{eqnarray*}
and this last expression is equal to $y_t'$ by inspection.
\end{proof}
Putting all of this together we have proved the following, which is a restatement of Theorem \ref{t:main}:
\begin{thm} \label{t:mainbody}
Let $k$ be a commutative ring. There is a functor
\[
\bW(k) : \mathcal{TP}^{TNR} \to Set
\]
sending $S$ to $\bW_S(k)$, such that the composite $\mathcal{TP}^{TR} \to \mathcal{TP}^{TNR} \to Set$ agrees with the functor in Theorem \ref{t:TRfunctor} and the composite $\mathcal{TP}^N \to \mathcal{TP}^{TNR} \to Set$ agrees with the functor in Definition \ref{d:Ntimesmap}.
\end{thm}
\section{The subcategory $\mathcal{TP}^{TNR}_\textnormal{join}$ and bispans} \label{s:bispans}
Motivated by Lemma \ref{l:joinsimpliespullback} we define a subcategory of $\mathcal{TP}^{TNR}$ as follows.
\begin{defn}
Let $\mathcal{TP}^{TNR}_\textnormal{join}$ be the category whose objects are truncation posets with join, and whose morphisms are generated by equivalence classes of morphisms of truncation posets with join in the same way as in Definition \ref{d:TrunTNR}.
\end{defn}
To make sense of this we should verify that the composite of two morphisms in $\mathcal{TP}^{TNR}_\textnormal{join}$ is still in $\mathcal{TP}^{TNR}_\textnormal{join}$. In other words, we should check that starting with truncation posets with join, the truncation posets $f_\oplus^* T$ and $f_\otimes^* T$, as well as the truncation posets in the definition of an exponential diagram, all have join as well. This is straightforward and we omit it.
The following result is then immediate.
\begin{prop}
Any map in $\mathcal{TP}^{TNR}_\textnormal{join}$ can be written uniquely, up to isomorphism of bispans, as a composite $h_\oplus \circ g_\otimes \circ f^*$ for a diagram
\[
S \xleftarrow{f} A \xrightarrow{g} B \xrightarrow{h} T,
\]
where $f \in \mathcal{TP}^R$, $g \in \mathcal{TP}^N$ and $h \in \mathcal{TP}^T$.
\end{prop}
Finally, we compare our construction to ``classical'' Tambara functors for cyclic groups.
\begin{defn} \label{d:TNF}
Let $\mathcal{TP}^{TNF}_{\langle n \rangle}$ be the subcategory of $\mathcal{TP}^{TNR}_\textnormal{join}$ whose objects are finite disjoint unions of $\langle m \rangle$ for $m \mid n$ and whose morphisms are given by equivalence classes of bispans
\[
S \xleftarrow{f} A \xrightarrow{g} B \xrightarrow{h} T
\]
of such, with the following additional requirement: Suppose we decompose $A$ and $S$ as $\coprod A_i$ and $\coprod S_j$, respectively, and write $f$ as a coproduct of maps $A_i \xrightarrow{f_i} S_j \subset S$. If $A_i = \langle m_i \rangle$ and $S_j = \langle m_j \rangle$ then $m_i \mid m_j$ and $f_i$ is multiplication by $m_j/m_i$.
\end{defn}
Note that the maps $g : A \to B$ and $h : B \to T$ automatically satisfy the condition in Definition \ref{d:TNF}.
We obviously have a functor
\[
\bW(k) : \mathcal{TP}^{TNF}_{\langle n \rangle} \to Set
\]
obtained by restricting the functor $\bW(k)$ from Theorem \ref{t:mainbody}. Now we can compare this to Tambara functors because of the following.
\begin{prop}
There is an equivalence of categories between $\mathcal{TP}^{TNF}_{\langle n \rangle}$ and the category of bispans of finite $C_n$-sets.
\end{prop}
\begin{proof}
The equivalence is given on objects by sending the truncation poset $\langle m \rangle$ to the finite $C_n$-set $C_n/C_m$ and sending disjoint unions to disjoint unions. We send a map $\frac{m_2}{m_1} : \langle m_1 \rangle \to \langle m_2 \rangle$ to the quotient map $C_n/C_{m_1} \to C_n/C_{m_2}$.
To finish the proof we should verify that the composition laws for the two types of bispans agree. This is straightforward and we omit it.
\end{proof}
\bibliographystyle{plain}
\section*{Acknowledgements}
We acknowledge fruitful discussions with M. Stoitsov and N. Schunck.
This work was supported by the U.S. Department of Energy
under Contract Nos. DE-FG02-96ER40963, DE-FC02-07ER41457, DE-FC02-09ER41583
(UNEDF SciDAC Collaboration), and DE-FG02-07ER41529
(University of Tennessee).
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Since 2008, the measurements of cosmic-ray (CR) positrons by PAMELA~\cite{PAMELA:2008gwm}, Fermi-LAT~\cite{Fermi-LAT:2011baq}, and AMS-02~\cite{AMS:2013fma, AMS:2019iwo} have shown an unexpected excess at energies $\gtrsim 10~\si{GeV}$.
Possible interpretations for this excess include annihilating/decaying dark matter~\cite{Bergstrom:2008gr, Cholis:2008hb, Yin:2008bs, Yuan:2013eja} and astrophysical sources like nearby pulsars within kpc~\cite{Hooper:2008kg, Yuksel:2008rf}.
In particular, the middle-aged pulsar Geminga with a distance of $\sim 250~\si{pc}$ is widely assumed to produce high energy positrons that could propagate to the Earth~\cite{Hooper:2008kg, Yuksel:2008rf, Yin:2013vaa, Feng:2015uta, Hooper:2017tkg, Cholis:2017ccs, Fang:2018qco, Profumo:2018fmz, Cholis:2018izy, Tang:2018wyr, Xi:2018mii, Johannesson:2019jlk, DiMauro:2019yvh, Fang:2019ayz, Manconi:2020ipm, Wang:2021xph, Fang:2022mdg}.
In 2017, the HAWC experiment observed $\sim 10~\si{TeV}$ $\gamma$ rays spatially extended about 2 degrees around Geminga, which would be produced by positrons and electrons of energies $\sim 100~\si{TeV}$ via inverse Compton scattering (ICS) off low energy photons~\cite{HAWC:2017kbo}.
Therefore, this observation confirms that Geminga is a source of high energy positrons and electrons.
But the surface brightness profile (SBP) measured by HAWC implies a diffusion coefficient smaller than the conventional value by at least two orders of magnitude.
The recent observation of another extended halo around the middle-aged pulsar J0621+3749 by
LHAASO further established the general conclusion of slow diffusion around pulsars
\cite{LHAASO:2021crt}.
Such slow diffusion results in far fewer CR positrons arriving at the Earth, making it unlikely to explain the positron excess.
Nonetheless, by assuming a two-zone diffusion model with slow diffusion in a small zone around the source but normal diffusion outside the zone, positrons originating from Geminga can still contribute sufficiently to the positron excess~\cite{Fang:2018qco,Profumo:2018fmz,Bao:2021hey}.
In addition, positrons and electrons from Geminga are also expected to induce extended ICS $\gamma$ rays in the energy range of Fermi-LAT.
Based on two-zone diffusion templates, an analysis of 10-yr Fermi-LAT $\gamma$-ray data by Xi et al.~\cite{Xi:2018mii} (denoted as X19 below) did not find such extended emission and derived a stringent constraint on the $\gamma$-ray flux in the $\sim 5\text{--}100$~\si{GeV} energy range.
According to this constraint and the HAWC data, $e^\pm$ from Geminga with a single power-law injection spectrum can only contribute a small fraction to the CR positron spectrum observed by AMS-02.
On the other hand, taking into account both a larger region of interest and the proper motion of the Geminga pulsar, another analysis of Fermi-LAT data by Di Mauro et al.~\cite{DiMauro:2019yvh} (denoted as D19 hereafter) claimed a discovery of extended $\gamma$-ray emissions around Geminga in the energy range of $\sim 10$--$100$~\si{GeV}.
However, considering both the corresponding $\gamma$-ray flux and the HAWC data, the Geminga contribution to the positron flux they obtained is not enough for the AMS-02 excess.
Both the X19 and D19 analyses assumed a single power-law Geminga $e^\pm$ injection spectrum with a high energy cutoff.
The inconsistency with the AMS-02 data may indicate that there are fewer low energy positrons and electrons producing GeV $\gamma$ rays.
Therefore, we will attempt to modify the injection spectrum by adding a low energy cutoff, in order to simultaneously explain the HAWC, Fermi-LAT, and AMS-02 data.
The results of the $\gamma$-ray flux from the X19 and D19 analyses will be considered separately.
This paper is organized as follows.
In Section~\ref{sec:Geminga}, we describe the propagation of positrons and electrons produced by Geminga and the $\gamma$-ray flux induced by ICS.
In Section~\ref{sec:D19}, we simultaneously interpret the HAWC data, the Fermi-LAT $\gamma$-ray observation given by D19, and the AMS-02 positron spectrum assuming an $e^\pm$ injection spectrum with a low energy cutoff.
In Section~\ref{sec:X19}, we use the Fermi-LAT $\gamma$-ray constraint given by X19 to explore how much contribution Geminga can supply to the AMS-02 positron excess.
Section~\ref{sec:sum} gives the summary and discussion.
\section{Positrons and electrons from Geminga}
\label{sec:Geminga}
The Geminga pulsar is a $\gamma$-ray source discovered by SAS-2~\cite{Fichtel:1975}.
Its age is about $342~\si{kyr}$~\cite{Manchester:2004bp}, and its distance from the Earth is $250^{+120}_{-62}~\si{pc}$~\cite{Faherty:2007}.
Geminga is expected to emit copious positrons and electrons, which diffuse away from Geminga and lose energy by upscattering low energy photons of the cosmic microwave background (CMB) and interstellar radiation backgrounds through ICS processes.
The propagation of CR $e^\pm$ is described by the diffusion-cooling
equation
\begin{equation}
\frac{\partial N}{\partial t} - \nabla\cdot(D\nabla N) -
\frac{\partial}{\partial
E}(bN) = Q \,,
\label{eq:prop}
\end{equation}
where $N$ is the $e^\pm$ differential density, $E$ is the $e^\pm$ energy, $D$ is the diffusion coefficient, and $Q$ is the source term.
The energy loss rate $b$ includes both contributions from synchrotron radiation and ICS.
The synchrotron energy loss rate in a magnetic field $B$ is given by~\cite{Crusius:1988}
\begin{equation}
b_\mathrm{syn} = \frac{4 \sigma_\mathrm{T} \gamma_e^2 U_B}{3 m_e c},
\end{equation}
where $\sigma_\mathrm{T}$ is the Thomson cross section, $\gamma_e = E/(m_e c^2)$ is the $e^\pm$ Lorentz factor, and $U_B = B^2/(8\pi)$ is the energy density of the magnetic field.
The energy loss rate due to ICS is estimated following Ref.~\cite{Fang:2020dmi}.
We convert the propagation equation to a difference equation, which is solved using the numerical method described in Ref.~\cite{Fang:2018qco}.
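As an illustration of the energy-loss term, the sketch below evaluates the synchrotron loss rate in CGS units. The constants are standard values; the rate is written as $(4/3)\sigma_\mathrm{T} c \gamma_e^2 U_B$ in erg/s, which equals the text's $b_\mathrm{syn}$ multiplied by $m_e c^2$.

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section [cm^2]
M_E = 9.1094e-28       # electron mass [g]
C = 2.9979e10          # speed of light [cm/s]
ERG_PER_TEV = 1.6022   # 1 TeV in erg

def b_syn(E_TeV, B_gauss):
    """Synchrotron loss rate -dE/dt = (4/3) sigma_T c gamma_e^2 U_B [erg/s]
    (the text's b_syn multiplied by m_e c^2)."""
    gamma_e = E_TeV * ERG_PER_TEV / (M_E * C**2)
    U_B = B_gauss**2 / (8.0 * math.pi)   # magnetic energy density [erg/cm^3]
    return 4.0 * SIGMA_T * C * gamma_e**2 * U_B / 3.0

# Synchrotron cooling time of a 100 TeV electron in a 3 micro-Gauss field:
t_cool = 100.0 * ERG_PER_TEV / b_syn(100.0, 3e-6)   # [s], of order 10 kyr
```

The resulting cooling time, of order $10~\si{kyr}$, is much shorter than the Geminga age, which is why the $e^\pm$ producing the HAWC halo must be freshly injected.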
We assume a spherically symmetrical two-zone diffusion scenario with the diffusion coefficient given by
\begin{equation}
D(E, r)=\left\{
\begin{aligned}
D_1(E), & & r< r_\star, \\
D_2(E), & & r\geq r_\star.
\end{aligned}
\right.
\label{eq:diff}
\end{equation}
Here $r$ is the distance from Geminga, and $r_\star$ denotes the boundary of the two diffusion zones.
Both $D_1(E)$ and $D_2(E)$ are assumed to have a form of $D_{100} (E/100~\si{TeV})^\delta$, where $D_{100}$ is the diffusion coefficient at $E = 100~\si{TeV}$, and $\delta = 0.33$ is adopted for a Kolmogorov-type diffusion~\cite{Kolmogorov:1941}.
The morphological SBP study of the extended TeV $\gamma$-ray emissions around Geminga by HAWC gives a diffusion coefficient $D_{100} = (3.2^{+1.4}_{-1.0})\times10^{27}~\si{cm^{2}~s^{-1}}$ for $100~\si{TeV}$ $e^\pm$ around Geminga, while a similar study of another nearby pulsar Monogem (PSR B0656+14) leads to $D_{100} = (15^{+49}_{-9})\times10^{27}~\si{cm^{2}~s^{-1}}$~\cite{HAWC:2017kbo}.
A joint fit to both gives $D_{100} = (4.5\pm 1.2)\times10^{27}~\si{cm^{2}~s^{-1}}$.
Thus, $D_{100}$ for the inner zone with $r< r_\star$ is at the order of $10^{27}~\si{cm^{2}~s^{-1}}$.
For the outer zone with $r \geq r_\star$, positrons and electrons propagate through the ordinary interstellar medium (ISM), and we take the GALPROP~\cite{Moskalenko:1997gh} default value $D_{100} = \num{1.7e30}~\si{cm^{2}~s^{-1}}$,
which is consistent with the measurements of CR secondary-to-primary ratios, particularly the $\mathrm{B}/\mathrm{C}$ ratio.
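The two-zone form above, with the inner and outer values quoted in the text as defaults, can be sketched as a simple piecewise function (energies in TeV, distances in pc):

```python
def D_coeff(E_TeV, r_pc, r_star=50.0, D1_100=4.5e27, D2_100=1.7e30, delta=0.33):
    """Two-zone diffusion coefficient [cm^2/s]: slow diffusion inside
    r_star, ISM-like (GALPROP default) outside, both scaling as E^delta."""
    D_100 = D1_100 if r_pc < r_star else D2_100
    return D_100 * (E_TeV / 100.0) ** delta
```

With these defaults the outer-to-inner ratio is about $380$, consistent with the "at least two orders of magnitude" suppression discussed in the introduction.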
The source term for high energy $e^\pm$ injected by Geminga is assumed as
\begin{equation}
Q(t,E,r) = q(t,E)\delta(r)\,,
\label{eq:source}
\end{equation}
where
\begin{equation}
q(t,E)=q_0\left(1+\frac{t}{\tau}\right)^{-2}E^{-\gamma} \exp\left(-\frac{E}{E_\mathrm{hc}}\right) \exp\left(-\frac{E_\mathrm{lc}}{E}\right).
\label{eq:time_profile}
\end{equation}
$\tau$ is the characteristic initial spin-down time scale of the Geminga pulsar, taken to be $12~\si{kyr}$ following Ref.~\cite{HAWC:2017kbo}.
$\gamma$ is the injection spectral index for $e^\pm$.
$E_\mathrm{hc}$ and $E_\mathrm{lc}$ are the high and low energy cutoffs, respectively.
$q_0$ is a constant determined by the normalization relation
\begin{equation}
\int_{E_{\rm min}}^{E_{\rm max}}q(t_\mathrm{s},E)EdE=\eta\dot{E}_s\,,
\label{eq:norm}
\end{equation}
with the Geminga age $t_\mathrm{s}=342~\si{kyr}$ and the spin-down luminosity $\dot{E}_\mathrm{s}=\num{3.2e34}~\si{erg~s^{-1}}$~\cite{Manchester:2004bp}.
Here $\eta$ is the conversion efficiency for the spin-down energy converted to $e^\pm$ energies.
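The normalization constant $q_0$ can be obtained by numerically evaluating the integral above. A minimal sketch (Python, energies in GeV) is given below; the integration bounds $E_{\rm min}=1~\si{GeV}$ and $E_{\rm max}=10^6~\si{GeV}$ are illustrative assumptions, not values quoted in the text.

```python
import math

def q_shape(E_GeV, gamma=2.25, E_hc=511e3, E_lc=20.0):
    """Shape of the injection spectrum at fixed time (energies in GeV),
    i.e. q(t, E) without the q0 (1 + t/tau)^(-2) prefactor."""
    return E_GeV ** (-gamma) * math.exp(-E_GeV / E_hc - E_lc / E_GeV)

def q0_from_norm(eta=0.4, Edot_erg=3.2e34, t_s=342.0, tau=12.0,
                 E_min=1.0, E_max=1.0e6, n=4000):
    """Solve the normalization integral for q0 by the trapezoid rule
    on a log grid; E_min and E_max (GeV) are illustrative bounds."""
    ERG_PER_GEV = 1.6022e-3
    h = math.log(E_max / E_min) / n
    f = []
    for i in range(n + 1):
        E = E_min * math.exp(i * h)
        f.append(q_shape(E) * E * E)        # q(E) E dE = q(E) E^2 dlnE
    integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    time_factor = (1.0 + t_s / tau) ** (-2)
    return eta * Edot_erg / ERG_PER_GEV / (time_factor * integral)

q0 = q0_from_norm()   # normalization for the fiducial eta = 0.4 case
```

By construction $q_0$ scales linearly with $\eta$, so halving the conversion efficiency simply halves the injected $e^\pm$ flux at all energies.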
We do not distinguish positrons from electrons when calculating the $\gamma$-ray flux, and the positron flux $\Phi_{e^+}$ is simply half of the total $e^\pm$ flux.
The photon emissivity due to $e^\pm$ ICS based on the Klein-Nishina cross section is given by~\cite{Fang:2007sc}
\begin{equation}
Q_\mathrm{ICS} (t, E_\gamma, r) = 4\pi \sum_j \int^{\infty}_{0}d\epsilon
\, n_j(\epsilon)\int^{E_{\mathrm{max}}}_{E_{\mathrm{min}}}dE \,
J(t, E, r)F(\epsilon, E_{\gamma}, E),
\end{equation}
where $n_j(\epsilon)$ is the number density of a background photon component $j$ with energy $\epsilon$, temperature $T_j$, and energy density $U_j$, expressed as
\begin{equation}
n_j(\epsilon)=\frac {15U_j}{(\pi k
T_j)^4}\frac{\epsilon^2}{\exp(\epsilon/kT_j)-1},
\end{equation}
where $k$ is the Boltzmann constant.
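The prefactor $15 U_j/(\pi k T_j)^4$ ensures that the energy density integral of $n_j$ returns $U_j$. This can be checked numerically; in the sketch below (Python, energies in eV, with the CMB parameters from Table~\ref{tab:ISRF}) a midpoint rule recovers the input $U_j$ to better than $0.1\%$.

```python
import math

K_B = 8.6173e-5   # Boltzmann constant [eV/K]

def n_photon(eps_eV, T, U):
    """Graybody photon number density n_j(eps) [eV^-1 cm^-3]."""
    kT = K_B * T
    return 15.0 * U / (math.pi * kT) ** 4 * eps_eV**2 / math.expm1(eps_eV / kT)

# Sanity check: int n_j(eps) eps d(eps) should return U_j.
T, U = 2.7, 0.26                      # CMB values from Table 1
n_steps, eps_max = 20000, 50 * K_B * T
d = eps_max / n_steps
total = sum(n_photon((i + 0.5) * d, T, U) * (i + 0.5) * d
            for i in range(n_steps)) * d
assert abs(total / U - 1.0) < 1e-3
```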
The $e^\pm$ energy threshold for upscattering a target photon with energy $\epsilon$ to a photon with energy $E_\gamma$ is
\begin{equation}
E_\mathrm{min} = \frac{1}{2} \left( E_\gamma + \sqrt{E_\gamma^2 + \frac{E_\gamma m_e^2 c^4 }{\epsilon}} \right).
\end{equation}
$J(t, E, r) = v_e N(t, E, r)/(4\pi)$ is the $e^\pm$ intensity, with $v_e$ denoting the $e^\pm$ speed, which approaches the light speed $c$ for high energy $e^\pm$.
The function $F$ is given by
\begin{equation}
F(\epsilon, E_{\gamma}, E) = \frac{3\sigma_\mathrm{T}}{4\gamma_e^2 \epsilon}
\left[ 2q\ln q + (1+2q)(1-q) + \frac{\Gamma^2 q^2(1-q)}{2(1+\Gamma q)}
\right],
\end{equation}
with
\begin{equation}
\Gamma = \frac{4\epsilon\gamma_e}{m_e c^2},\quad
q = \frac{E_\gamma}{\Gamma (E - E_\gamma)}.
\end{equation}
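The kernel $F$ and the threshold $E_\mathrm{min}$ fit together consistently: at $E = E_\mathrm{min}$ the kinematic variable $q$ equals $1$. A minimal Python sketch (energies in GeV; the sample photon energy $6.3\times10^{-13}~\si{GeV} \approx 0.63~\si{meV}$ is a typical CMB photon, our choice) verifies this and shows that $10~\si{GeV}$ photons require parent $e^\pm$ of about a TeV:

```python
import math

ME_C2 = 0.511e-3        # electron rest energy [GeV]
SIGMA_T = 6.6524e-25    # Thomson cross section [cm^2]

def E_min_thr(eps, E_gamma):
    """e± energy threshold for upscattering eps -> E_gamma (GeV)."""
    return 0.5 * (E_gamma + math.sqrt(E_gamma**2 + E_gamma * ME_C2**2 / eps))

def F_KN(eps, E_gamma, E):
    """Klein-Nishina kernel F(eps, E_gamma, E); zero outside 0 < q <= 1."""
    gamma_e = E / ME_C2
    Gamma = 4.0 * eps * gamma_e / ME_C2
    q = E_gamma / (Gamma * (E - E_gamma))
    if not 0.0 < q <= 1.0:
        return 0.0
    return (3.0 * SIGMA_T / (4.0 * gamma_e**2 * eps)) * (
        2.0 * q * math.log(q) + (1.0 + 2.0 * q) * (1.0 - q)
        + Gamma**2 * q**2 * (1.0 - q) / (2.0 * (1.0 + Gamma * q))
    )

eps, E_gamma = 6.3e-13, 10.0      # ~CMB photon up to a 10 GeV gamma ray
E_th = E_min_thr(eps, E_gamma)    # ~1 TeV parent electron
Gamma = 4.0 * eps * (E_th / ME_C2) / ME_C2
q_at_threshold = E_gamma / (Gamma * (E_th - E_gamma))
assert abs(q_at_threshold - 1.0) < 1e-9
```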
Following Ref.~\cite{HAWC:2017kbo}, we consider three background photon components, including the CMB, the IR background, and the optical background, for the ICS processes.
The temperatures and energy densities are presented in Table~\ref{tab:ISRF}.
Integrating $Q_\mathrm{ICS} (t_s, E_\gamma, r)$ along the line of sight~\cite{Liu:2019sfl}, we obtain the $\gamma$-ray flux for specific energy $E_\gamma$ and angular separation $\theta$,
\begin{equation}
I(E_\gamma,\theta)=\frac{1}{4\pi}\int_{l_\mathrm{min}}^{l_\mathrm{max}}dl \,Q_\mathrm{ICS}(t_s, E_\gamma, r).
\end{equation}
Then we integrate out $\theta$ to get the energy spectrum of the $\gamma$-ray flux $\Phi_\gamma$, or integrate out $E_\gamma$ to derive the SBP as a function of $\theta$.
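The geometry of the line-of-sight integral can be sketched numerically: for an angular separation $\theta$ from Geminga (at distance $d \simeq 250~\si{pc}$), a point at depth $l$ along the line of sight lies at $r = \sqrt{d^2 + l^2 - 2dl\cos\theta}$ from the pulsar. The Python sketch below integrates a toy Gaussian emissivity (a hypothetical stand-in for $Q_\mathrm{ICS}$) from $l=0$ to a cutoff, a simplification of the $l_\mathrm{min}$-to-$l_\mathrm{max}$ bounds in the text:

```python
import math

D_GEM = 250.0   # distance to Geminga [pc] (central value)

def los_integral(Q, theta_deg, l_max=500.0, n=2000):
    """(1/4pi) int Q(r) dl along the line of sight at angle theta,
    with r the distance from the pulsar (lengths in pc)."""
    th = math.radians(theta_deg)
    h = l_max / n
    s = 0.0
    for i in range(n + 1):
        l = i * h
        r = math.sqrt(D_GEM**2 + l**2 - 2.0 * D_GEM * l * math.cos(th))
        w = 0.5 if i in (0, n) else 1.0     # trapezoid weights
        s += w * Q(r)
    return s * h / (4.0 * math.pi)

# Toy emissivity concentrated within ~25 pc of the pulsar:
Q = lambda r: math.exp(-(r / 25.0) ** 2)
assert los_integral(Q, 0.0) > los_integral(Q, 5.0) > los_integral(Q, 20.0)
```

The surface brightness falls with $\theta$ because larger angular separations correspond to larger impact parameters $d\sin\theta$ from the emission region, which is the behavior seen in the HAWC SBP.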
\begin{table}[!t]
\begin{center}
\setlength{\tabcolsep}{1em}
\renewcommand\arraystretch{1.3}
\caption{Temperature $T_j$ and energy density $U_j$ of three background photon components~\cite{HAWC:2017kbo}.}
\label{tab:ISRF}
\vspace{1em}
\begin{tabular}{ccc}
\hline\hline
Component $j$ & $T_j$ (K) & $U_j$ ($\si{eV/cm^3}$)\\
\hline
CMB& 2.7 & 0.26\\
IR & 20 & 0.3\\
Optical & 5000 & 0.3\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Result according to the D19 gamma-ray observation}
\label{sec:D19}
In this section, we try to interpret the HAWC and AMS-02 data according to the Fermi-LAT $\gamma$-ray observation from the D19 analysis~\cite{DiMauro:2019yvh}.
Both the results without and with the low energy cutoff $E_\mathrm{lc}$ in the $e^\pm$ injection spectrum are calculated for comparison.
\begin{figure}[!t]
\centering
\subfigure[~$\gamma$-ray spectrum \label{fig:DM:gamma}]{\includegraphics[width=0.48\textwidth]{DM_flux.pdf}}
\subfigure[~Surface brightness profile~\label{fig:DM:SPB}]{\includegraphics[width=0.465\textwidth]{DM_sbp.pdf}}
\subfigure[~CR positron spectrum\label{fig:DM:posi}]{\includegraphics[width=0.48\textwidth]{DM_positron.pdf}}
\caption{The $\gamma$-ray spectrum around Geminga (a), the Geminga SBP (b), and the CR positron spectrum (c) assuming $e^\pm$ injection spectra without a low energy cutoff for $\eta = 0.8$ (blue solid lines) and with $E_\mathrm{lc} = 20~\si{GeV}$ for $\eta=0.4$ (red dashed lines).
In the upper left panel, the green region denotes the $\sim 10~\si{TeV}$ spectral data measured by HAWC~\cite{HAWC:2017kbo}, and the data points and upper limits in $10~\si{GeV} \lesssim E_\gamma \lesssim \si{TeV}$ are given by the D19 analysis of Fermi-LAT data~\cite{DiMauro:2019yvh}.
The data points in the upper right panel shows the HAWC observation of the Geminga SBP~\cite{HAWC:2017kbo}.
The lower panel displays the positron spectrum measured by AMS-02~\cite{AMS:2019iwo}.}
\label{fig:DM}
\end{figure}
Firstly, we consider an $e^\pm$ injection spectrum without $E_\mathrm{lc}$, and adjust the energy conversion efficiency $\eta$ to fit the data.
Setting the boundary radius $r_\star = 50~\si{pc}$, the diffusion coefficient at $E = 100~\si{TeV}$ in the inner diffusion zone $D_{100} = 4.5\times10^{27}~\si{cm^{2}~s^{-1}}$, the ISM magnetic field $B = 3~\si{\mu G}$, the $e^\pm$ injection spectral index $\gamma = 2.25$, and the high energy cutoff $E_\mathrm{hc} = 511~\si{TeV}$, we derive the $\gamma$-ray spectrum around Geminga, the Geminga SBP, and the CR positron spectrum at the Earth for $\eta = 0.8$, shown as the blue solid lines in Fig.~\ref{fig:DM}.
In order to compare the predictions and the observations, we show the $\sim 10~\si{TeV}$ spectral data measured by HAWC~\cite{HAWC:2017kbo} and the Fermi-LAT data points and upper limits from $\sim 10~\si{GeV}$ to $\sim \si{TeV}$ given by the D19 analysis~\cite{DiMauro:2019yvh} in Fig.~\ref{fig:DM:gamma}.
The HAWC observation of the Geminga SBP~\cite{HAWC:2017kbo} is demonstrated in Fig.~\ref{fig:DM:SPB}, while the positron spectrum measured by AMS-02~\cite{AMS:2019iwo} is displayed in Fig.~\ref{fig:DM:posi}.
For the above setup with $\eta = 0.8$, we find that the prediction reproduces the observed $\gamma$-ray spectrum and SBP well, and the predicted $e^+$ spectrum can explain the AMS-02 data at $E \gtrsim 100~\si{GeV}$.
However, an $80\%$ efficiency for converting the spin-down energy into $e^\pm$ energies looks unrealistic.
Secondly, we introduce a low energy cutoff $E_\mathrm{lc} = 20~\si{GeV}$ in the $e^\pm$ injection spectrum with other parameters unchanged, and find that the observational data can be explained for $\eta = 0.4$, as illustrated by the red dashed lines in Fig.~\ref{fig:DM}.
Such a $40\%$ conversion efficiency is much more reasonable than the previous one.
Now the predicted positron flux at $E \lesssim 100~\si{GeV}$ is slightly lower than that of the blue solid line, but the AMS-02 data at $E \gtrsim 100~\si{GeV}$ are still interpreted very well.
\section{Result according to the X19 gamma-ray constraint}
\label{sec:X19}
In contrast to the D19 analysis~\cite{DiMauro:2019yvh}, the X19 analysis of the Fermi-LAT data did not find any extended $\gamma$-ray emission around Geminga, deriving a rather stringent constraint on the $\gamma$-ray flux at $\sim 5\text{--}100~\si{GeV}$~\cite{Xi:2018mii}.
In this section, we consider this constraint to see how it affects the Geminga contribution to the CR positron spectrum, assuming a low energy cutoff in the $e^\pm$ injection spectrum.
However, we find it impossible to simultaneously explain the HAWC, Fermi-LAT, and AMS-02 data, because the X19 constraint is too strict.
Instead, we would like to know how much contribution Geminga can provide to the AMS-02 positron excess.
For this purpose, we treat $\gamma$, $E_\mathrm{hc}$, $E_\mathrm{lc}$, $\eta$, $B$, and $D_{100}$ in the inner diffusion zone as free parameters and perform a scan in the parameter space with fixed $r_\star$, utilizing the \texttt{MultiNest} algorithm~\cite{Feroz:2008xx} to improve the fitting efficiency.
The ranges for the free parameters in the scan are chosen to be
\begin{eqnarray}
&& 1.8 < \gamma < 2.2,\quad
200~\si{TeV} < E_\mathrm{hc} < 600~\si{TeV},\quad
100~\si{GeV} < E_\mathrm{lc} < 900~\si{GeV},
\nonumber\\
&& 0.1 < \eta < 0.4,\quad
3~\si{\mu G} < B < 8~\si{\mu G},\quad
10^{26}~\si{cm^2~s^{-1}} < D_{100} < 10^{27}~\si{cm^2~s^{-1}}.
\end{eqnarray}
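Nested-sampling codes such as \texttt{MultiNest} explore the parameter space through a map from the unit hypercube to physical parameters. A sketch of such a prior transform for the ranges above is given below (pure Python; the log-uniform prior on $D_{100}$ is our assumption, and the function name is illustrative):

```python
def prior_transform(u):
    """Map a 6-dim unit cube to the scan ranges quoted in the text
    (log-uniform in D_100, uniform in the rest)."""
    g, ehc, elc, eta, b, d = u
    return (
        1.8 + 0.4 * g,          # gamma in (1.8, 2.2)
        200.0 + 400.0 * ehc,    # E_hc [TeV] in (200, 600)
        100.0 + 800.0 * elc,    # E_lc [GeV] in (100, 900)
        0.1 + 0.3 * eta,        # eta in (0.1, 0.4)
        3.0 + 5.0 * b,          # B [muG] in (3, 8)
        10.0 ** (26.0 + d),     # D_100 [cm^2/s] in (1e26, 1e27)
    )
```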
\begin{table}[!t]
\begin{center}
\setlength{\tabcolsep}{1em}
\renewcommand\arraystretch{1.3}
\caption{Parameters in the best results for fixed $r_\star$.}
\label{tab:bestpara}
\vspace{1em}
\begin{tabular}{cccc}
\hline\hline
$r_\star$ (pc) & $50$ & $70$ & $100$ \\
\hline
$\gamma$ & $1.94$ & $1.93$ & $1.87$\\
$E_\mathrm{hc}$ (TeV) & $520$ & $537$ & $463$\\
$E_\mathrm{lc}$ (GeV) & $870$ & $302$ & $547$ \\
$\eta$ & $0.12$ & $0.21$ & $0.20$ \\
$B$ ($\si{\mu G}$) & $5.0$ & $6.9$ & $7.2$\\
$D_{100}$ ($10^{27}~\si{cm^2~s^{-1}}$) & $4.8$ & $7.8$ & $8.5$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In order to obtain optimistic results, we adopt the loosest upper limits on the $\gamma$-ray flux in $10\text{--}500~\si{GeV}$ derived by the X19 analysis, i.e., the upper limits in the upper panel of Fig.~6 in the X19 paper~\cite{Xi:2018mii}.
The parameters of the best results we obtain for $r_\star = 50,~70,~100~\si{pc}$ are listed in Table~\ref{tab:bestpara}.
The corresponding predictions for the $\gamma$-ray spectrum, the SBP, and the positron spectrum are demonstrated in Fig.~\ref{fig:Xi}.
While the HAWC data are properly fitted and the $\gamma$-ray flux in $5~\si{GeV} \lesssim E_\gamma \lesssim 100~\si{GeV}$ lies below the X19 upper limits, we find that Geminga can only supply less than $36\%$ of the AMS-02 positron flux at $E \sim 300\text{--}600~\si{GeV}$.
\begin{figure}[!t]
\centering
\subfigure[~$\gamma$-ray spectrum \label{fig:Xi:gamma}]{\includegraphics[width=0.48\textwidth]{Xi_flux.pdf}}
\subfigure[~Surface brightness profile~\label{fig:Xi:SPB}]{\includegraphics[width=0.465\textwidth]{Xi_sbp.pdf}}
\subfigure[~CR positron spectrum\label{fig:Xi:posi}]{\includegraphics[width=0.48\textwidth]{Xi_positron.pdf}}
\caption{The best results for the $\gamma$-ray spectrum around Geminga (a), the Geminga SBP (b), and the CR positron spectrum (c) assuming $r_\star = 50~\si{pc}$ (blue solid lines), $r_\star = 70~\si{pc}$ (red dashed lines), and $r_\star = 100~\si{pc}$ (purple dotted lines).
In the upper left panel, the upper limits in $5~\si{GeV} \lesssim E_\gamma \lesssim 100~\si{GeV}$ are given by the X19 analysis of Fermi-LAT data~\cite{Xi:2018mii}.
The other experimental data are the same as in Fig.~\ref{fig:DM}.}
\label{fig:Xi}
\end{figure}
These results show that the X19 constraint favors $\gamma < 2$, $\eta \lesssim 0.21$, and $E_\mathrm{lc}$ of several hundred GeV, which suppress the $\gamma$-ray flux at $\sim \mathcal{O}(10)~\si{GeV}$.
According to an approximate relation~\cite{Xi:2018mii}
\begin{equation}
E_\gamma = 20 \left(\frac{E}{100~\si{TeV}}\right)^2 \si{TeV}
\end{equation}
for $e^\pm$ ICS off CMB photons, $\mathcal{O}(10)~\si{GeV}$ $\gamma$ rays are induced by $\mathcal{O}(\si{TeV})$ $e^\pm$.
Thus, the X19 constraint implies less $\mathcal{O}(\si{TeV})$ positrons and electrons from Geminga, resulting in a lower CR positron flux for $E \sim \mathcal{O}(100)~\si{GeV}$ at the Earth, which is insufficient to explain the AMS-02 excess.
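Inverting the approximate relation makes this mapping explicit; the short sketch below (Python, photon energies in GeV, electron energies in TeV) confirms that $\mathcal{O}(10)~\si{GeV}$ photons trace parent $e^\pm$ of a few TeV:

```python
def E_e_TeV(E_gamma_GeV):
    """Invert E_gamma = 20 (E / 100 TeV)^2 TeV for the parent e± energy."""
    return 100.0 * (E_gamma_GeV / 20e3) ** 0.5

# O(10) GeV gamma rays come from parent e± of a few TeV:
assert 1.0 < E_e_TeV(10.0) < 5.0
```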
\section{Summary and discussion}
\label{sec:sum}
In this work, we attempt to explain the AMS-02 positron excess by the nearby pulsar Geminga assuming a two-zone diffusion scenario and an $e^\pm$ injection spectrum with a low energy cutoff, taking into account the $\gamma$-ray data from HAWC and Fermi-LAT.
The analyses of Fermi-LAT data for extended $\gamma$-ray emissions around Geminga by two groups have obtained different results.
While the X19 analysis found no such emission and derived upper limits on the $\gamma$-ray flux, the D19 analysis claimed an observation of the extended $\gamma$ rays.
We have considered both results separately.
By fitting the D19 observation and the HAWC data assuming no low energy cutoff in the injection spectrum, we find that the conversion efficiency $\eta$ should be as large as $80\%$ to account for the AMS-02 positron excess.
Nonetheless, if a low energy cutoff $E_\mathrm{lc} = 20~\si{GeV}$ is introduced, we would only need a $40\%$ conversion efficiency, which is much more realistic.
Therefore, it is plausible to interpret the positron excess by Geminga, according to the D19 analysis.
On the other hand, if the stringent constraint from the X19 analysis is considered, we find that Geminga cannot account for the total positron excess.
We carry out a scan in the parameter space for the boundary radius $r_\star = 50,~70,~100~\si{pc}$, requiring the fits to reproduce the HAWC data and satisfy the X19 constraint.
The best results we obtain can only explain a fraction of the AMS-02 positron flux, less than $36\%$, at $E\sim \mathcal{O}(100)~\si{GeV}$.
This may imply that more nearby pulsars or other sources are needed to interpret the positron excess.
Since the different conclusions obtained above stem from the contradictory results of the two Fermi-LAT analyses, it is crucial to know which result is correct.
This may require a more careful data analysis and more Fermi-LAT data.
\begin{acknowledgments}
We thank Kun Fang for providing the code to solve the two-zone propagation equation.
This work is supported in part by the National Natural Science Foundation of China under Grants No.~11875327 and No.~11805288,
the Fundamental Research Funds for the Central Universities,
and the Sun Yat-Sen University Science Foundation.
Q.Y. is supported by the Program for Innovative Talents and Entrepreneur in Jiangsu.
\end{acknowledgments}
\bibliographystyle{utphys}
\section{Introduction}\label{sec:intro}
Two-dimensional honeycomb lattices, such as graphene or monolayers of transition metal dichalcogenides (TMDs), are considered to be promising candidates for valley-based electronics~\cite{Schaibley2016,Vitale2018} owing to their particular bandstructure, which features two non-equivalent Dirac valleys at the corners of the Brillouin zone. Due to their large separation in momentum space, these constitute a novel discrete orbital degree of freedom for low-energy carriers, which could be used to store, filter and transport information~\cite{Rycerz2007} much in the same way the electronic spin is used for spintronics applications. When spatial inversion symmetry is explicitly broken in these materials while preserving time-reversal symmetry, finite but opposite Berry curvatures develop at each valley~\cite{Xiao2007}, endowing the system's carriers with an anomalous velocity. Indeed, in sufficiently clean samples with suppressed intervalley scattering, applying an in-plane electric field generates non-dissipative counterpropagating pure valley currents along its transverse direction without a net charge flow, giving rise to what is known as the valley Hall effect (VHE). Indirect measurements of this effect were reported for graphene on top of a hexagonal boron nitride substrate~\cite{Gorbachev2014}, biased graphene bilayers~\cite{Sui2015,Shimazaki2015} and TMDs~\cite{Mak2014}. Typical transport probes of the valley Hall conductivity are based on non-local resistance measurements in Hall bar geometries with several terminals, in a scheme similar to the one used to detect spin Hall currents~\cite{Abanin2011}. Other alternatives require the use of spatially-resolved optical Kerr signals, which in the case of TMDs made it possible to demonstrate an accumulation of carriers with different valley polarization along the sample edges~\cite{Lee2016}.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{density_s_zoom.pdf}
\caption{Measuring the variation of the particle density, upon varying the strength of the applied strain $\tau$, leads to a quantized bulk response, $\mathfrak{S}(\bm{r})$, reflecting the underlying valley Hall effect. Left panel: Number of particles within a unit cell at position $\bm{r}$, $\tilde n(\bm{r})$, when applying trigonal strain to the sample (arrows). Right panel: Local valley Hall marker $\mathfrak{S}(\bm{r})$ [Eq.~\eqref{local_marker}] in units of the conductivity quantum $\sigma_0=e^2/h$, displaying a plateau at a quantized value $\mathfrak{S}(\boldsymbol{r})/\sigma_0^{} \in \mathbb{Z}$ deep in the bulk. In the present case, $\mathfrak S(\bm{r})/\sigma_0^{} \simeq 1$, as emphasized in the zoomed region. This local bulk response exists irrespective of the sample's edge termination. For more details on the system parameters used, see Section~\ref{IIIc} and Fig.~\ref{hexagonal_flakes}\textbf{d}.}
\label{Fig_1_story}
\end{figure}
Strained honeycomb lattices provide yet another platform where valley Hall related phenomena can take place. Interestingly, the coupling between the electronic degrees of freedom and the mechanical deformation in these samples can be well described by the emergence of strong artificial gauge fields, which drastically alter their low-energy properties~\cite{CastroNeto2009,Guinea2010a}. When properly engineering the strain, effective pseudomagnetic fields pointing in opposite directions develop at inequivalent Dirac cones, resulting in the formation of relativistic pseudo-Landau levels (pLLs) in the vicinity of the valleys. This characteristic energy spectrum has been successfully observed in a broad variety of devices: starting from strained graphene nanobubbles~\cite{Levy2010} and molecular graphene~\cite{Gardenier2020,Gomes2012}, all the way up to polaritonic lattices~\cite{Jamadi2020}, honeycomb arrays of microwave resonators~\cite{Bellec2020} and, more recently, photonic Fock-state lattices~\cite{Deng2022}. In finite-size samples, strain may give rise to valley polarized counterpropagating edge states, which could eventually lead to the detection of quantized valley Hall conductivities. Nevertheless, the lack of topological protection of these helical boundary modes makes their existence strongly dependent on the proper design of edge terminations and on the type of strain~\cite{Salerno2017}. Moreover, short-range scattering can couple both valleys and lead to backscattering between the edge channels~\cite{Low2010}, making usual transport probes very fragile to disorder and matching conditions.
In this work, we demonstrate that a quantized valley Hall response can be directly measured by monitoring the variation of the particle density in the bulk, upon small variations of the applied strain; see Figure~\ref{Fig_1_story}. This method builds on the Widom-St\v{r}eda formula~\cite{Streda1982,Widom1982}, which relates (in its original form) the electrical Hall conductivity to a bulk density response: $\sigma_H=e \, \partial n_{\text{bulk}}/\partial B$, where $B$ denotes an applied magnetic field, $n_{\text{bulk}}$ is the particle density evaluated locally in the bulk and $e$ is the charge of the carriers. In the present framework of strained systems, we consider a \emph{pseudo-magnetic} field perturbation, as obtained by modifying the strength of the applied strain, so that the resulting density response directly reflects the underlying valley Hall conductivity. Importantly, this approach suggests that the quantized valley Hall response can be extracted from equilibrium bulk properties, in sharp contrast with more standard non-local transport measurements.
This makes our proposal particularly appealing for synthetic lattice systems, such as engineered molecular lattices, where the local density of states (LDoS) can be extracted via STM imaging~\cite{Gomes2012, Swart2017,Drost2017, Khajetoorians2019,Polini2013,Marino2015}, or ultracold atoms in optical lattices, where the local particle density can be finely measured using the quantum gas microscope technique~\cite{Greiner2009, Kuhr2010, Greiner2015, Zwierlein2015, Gross2015, Kuhr2015,leonard2022realization}.
The article is structured as follows. In Section~\ref{II}, we review the key concepts underlying the valley Hall effect in strained honeycomb lattices, and present our measurement proposal based on locally probing a density response to strain variations; in this Section~\ref{II} we also introduce the model used to validate this approach. In Section~\ref{III}, we present our numerical results for different strain configurations. In subsections \ref{IIIa} and \ref{IIIb}, we focus on uniaxially stretched lattices, which will allow us to address large system sizes, where our findings can be better illustrated. This restriction will be lifted in subsection \ref{IIIc}, where we analyze the particle density variations with respect to strain in a triaxially strained hexagonal flake. Importantly, these results highlight the main message of our work:~the density response to strain variation is quantized deep in the bulk, irrespective of the sample's edge termination [Fig.~\ref{Fig_1_story}]. Finally, in Section~\ref{IV}, we present our concluding remarks and discuss the requirements needed to observe a quantized valley Hall response in the bulk of an experimentally realistic setting.
\section{Valley Hall Effect in a strained honeycomb lattice}\label{II}
We start by presenting a general tight-binding model for strained honeycomb lattices in Section~\ref{IIa}, and we recall how the concept of pseudo-magnetic field emerges from its low-energy effective description. In Section~\ref{IIb}, we adapt the Widom-St\v{r}eda formula to the system under consideration and establish the existence of a local bulk marker for the valley Hall coefficient. The conditions needed to observe a quantized response through density variations in the bulk are discussed. In order to better illustrate our results, our studies are performed for the case of uniaxial strain, which allows for the use of periodic boundary conditions along the direction perpendicular to the deformation. By doing so, large system sizes with a clear separation between the bulk and the boundary can be handled. A more realistic flake geometry will be studied at the end of Section~\ref{III}, where we additionally address the case of trigonal strain.
\subsection{Strained honeycomb lattices}\label{IIa}
\begin{figure}[!b]
\center
\includegraphics[width=\columnwidth]{Lattice.pdf}
\caption{Honeycomb lattice with zig-zag terminations along the $x$-direction and $N_x=3$ primitive cells of area $A_c=3\sqrt{3}a^2/2$. The vectors connecting neighbouring sites are denoted as $\bm{\delta}_j$. (\textbf{a}) Strain is applied by mechanically stretching the lattice. (\textbf{b}) Strain is directly imprinted on the tunneling amplitudes $t_j^{}(\bm r)$ without modifying the underlying crystalline structure of the lattice.}
\label{fig:lattice}
\end{figure}
The tight-binding Hamiltonian of a strained honeycomb lattice can be generically written as
\begin{equation}\label{H_strain}
\hat H^{} = - \sum_{\bm{r} \in \mathcal A,j} t_j(\bm{r})(\hat a_{\bm{r}}^\dagger \hat b^{}_{\bm{r}+\boldsymbol \delta_j(\bm{r})} + \text{h.c.}), \quad j \in \{1,2,3\},
\end{equation}
where the operators $\hat a^{}_{\bm{r}} (\hat a^\dagger_{\bm{r}})$ and $\hat b^{}_{\bm{r}+\bm{\delta}_j(\bm{r})} (\hat b^\dagger_{\bm{r}+\bm{\delta}_j(\bm{r})})$ are annihilation (creation) operators on A and B sublattices at position $\bm{r}$ and $\bm{r}+\bm{\delta}_j(\bm{r})$, respectively. In solid-state devices, such as mechanically stretched graphene sheets~\cite{Guinea2010a,Levy2010}, the hopping elements are essentially modified by distorting the lattice geometry, which is here encoded in the position of the atoms and in the set of space-dependent nearest-neighbour vectors, which we define as $\bm{\delta}_j(\bm{r})$ [see Fig.~\ref{fig:lattice}\textbf{a}]. This same strategy has been used in synthetically built systems, such as molecular graphene~\cite{Gomes2012}, photonic~\cite{Rechtsman2013, Bellec2020, Jamadi2020} and acoustic~\cite{Abbaszadeh2017,Brendel2017,Wen2019} meta-materials. One of the main advantages of these setups is that stress configurations can be designed at will by simply engineering different lattice patterns. Other theoretical proposals have also analyzed the possibility of mimicking the physics of strained honeycomb lattices with ultracold atoms trapped in optical lattices~\cite{Alba2013,Tian2015,Jamotte2022,DiLiberto2022}. Interestingly, in some of those platforms, strain can be directly imprinted on the tunneling amplitudes without modifying the underlying crystalline structure of the lattice. In that case, the vectors $\boldsymbol{\delta}_j(\bm{r})=\bm{\delta}_j$ are simply the pristine ones, namely $\boldsymbol{\delta}_1^{} = (-a,0)$, $\boldsymbol{\delta}_2^{} = (a/2,\sqrt 3a/2)$ and $\boldsymbol{\delta}_3^{} = (a/2,-\sqrt 3a/2)$ [see Fig.~\ref{fig:lattice}\textbf{b}], where $a$ is the lattice spacing. In the following, we present results for this simpler scenario, keeping in mind that the discussion can be easily generalized to the geometrically deformed lattice.
We specifically work with uniaxial strain along the $x$-direction, which is modeled with space-dependent tunneling amplitudes as
\begin{equation}\label{t_j}
\begin{split}
t_j(x) = t\left(1+\tau \frac{x-x_c}{3a^2} |\hat{\mathbf x} \cdot \boldsymbol{\delta}_j |\right), \quad \hat{\mathbf x} = (1,0).
\end{split}
\end{equation}
Here, $x$ runs over discretized positions $x_l^{}$ located in the middle of a $\boldsymbol \delta_j^{}$-link between an A and a B site, as indicated in Fig.~\ref{fig:lattice}\textbf{b}. The position of the system's center is denoted by $x_\text{c}^{}$. The parameters are chosen such that $\tau L_x/6a <1$, so that the hopping terms do not vanish across the entire sample. For this stress configuration, the Hamiltonian defined by Eq.~\eqref{H_strain} can be diagonalized for open boundary conditions along $x$ and periodic boundary conditions along $y$. The energy spectrum is shown in Fig.~\ref{fig:spectrum_strain}, where we present results for both the unstrained ($\tau=0$) and the strained ($\tau=70.84\times 10^{-4}$) case. We have used zig-zag terminations and $N_x=301$ cells along the $x$-direction, corresponding to a system size of $L_x=450.5\,a$. In the unstrained case, the spectrum (grey lines) shows two well-defined Dirac valleys, with the gap-closing points denoted as $\textbf{K}$ and $\textbf{K}'$. For $\tau>0$, (tilted) pseudo-Landau levels are generated around each Dirac cone as well as edge states localized along the boundaries of the system. The lines of this spectrum have been colored based on the mean position $\left\langle x \right\rangle$ of each eigenstate. Since the spatial inhomogeneity in the hopping terms is not too strong ($\tau \ll 1$), the valley index is still preserved and it is possible to derive an effective Hamiltonian linearised around the Dirac points that incorporates the effect of strain via a minimal coupling term~\cite{Goerbig2011,Salerno2015a},
\begin{equation}\label{H_Dirac_A}
h^{\xi}_{\text A}(\bm{q}) = \hbar v_\text{F}^{} \left[(\xi q_y^{} - e A^{\xi}_y) \sigma_x^{} - (q_x^{}-eA^\xi_x) \sigma_y\right],
\end{equation}
where $v_F = 3ta/2\hbar$, $e\mathbf A^{\xi}(x)=\xi \frac{\hbar\tau}{9a^2}(x-x_c)\hat{\mathbf{y}}$ is the pseudo-vector potential, $\bm{q} \equiv \bm{k} - \xi \mathbf K$ and $\xi$ indicates the valley, taking the value $ +1$ for $\mathbf K$ and $-1$ for $\mathbf K'$. The effective magnetic field at each valley is given by
\begin{equation}
\label{Buni}
\boldsymbol{B}_\tau^{\xi} = \boldsymbol{\nabla} \times \mathbf A^{\xi} =\frac{\xi \hbar\tau}{9e a^2} \hat{\mathbf{z}} = B_{\tau}^{\xi}\hat{\mathbf{z}}.
\end{equation}
Note that the field has opposite signs at the two valleys and is hence not a proper magnetic field but a pseudo-magnetic field, which does not break time-reversal symmetry. Consequently, the edge states in the sample are helical instead of chiral: counterpropagating modes emerge at each boundary, with the valley index determining the sign of their velocity, as clearly seen in the color code of Fig.~\ref{fig:spectrum_strain}.
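As a sanity check on Eqs.~\eqref{t_j} and \eqref{Buni}, the strained hoppings and the resulting pseudo-magnetic field can be tabulated numerically. The following sketch (our own illustration in units $t=a=\hbar=e=1$, not the code used to produce the figures) verifies that the hoppings remain positive across the sample whenever $\tau L_x/6a<1$, and that the pseudo-field has opposite signs at the two valleys:

```python
import numpy as np

a = 1.0                # lattice spacing (units of a)
t = 1.0                # unstrained hopping (units of t)
tau = 70.84e-4         # strain strength used in the text
Lx = 450.5 * a         # sample length along x
xc = 0.0               # center of the sample

# pristine nearest-neighbour vectors (Fig. 2b)
deltas = np.array([(-a, 0.0),
                   (a / 2, np.sqrt(3) * a / 2),
                   (a / 2, -np.sqrt(3) * a / 2)])

def t_j(x, j):
    """Strained hopping of Eq. (2): t_j(x) = t [1 + tau (x - xc)/(3 a^2) |x_hat . delta_j|]."""
    return t * (1.0 + tau * (x - xc) / (3 * a**2) * abs(deltas[j, 0]))

# hoppings must not vanish anywhere, which requires tau * Lx / (6 a) < 1
assert tau * Lx / (6 * a) < 1.0
xs = np.linspace(-Lx / 2, Lx / 2, 1001)
assert all(t_j(x, j) > 0 for x in xs for j in range(3))

# valley-dependent pseudo-magnetic field of Eq. (4), opposite at K and K'
B_tau = lambda xi: xi * tau / (9 * a**2)
assert np.isclose(B_tau(+1), -B_tau(-1))
```

For the value $\tau = 70.84\times 10^{-4}$ used in Fig.~\ref{fig:spectrum_strain}, $\tau L_x/6a \simeq 0.53$, comfortably below unity.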
\begin{figure}[!t]
\center
\includegraphics[width=1.0\columnwidth]{spectrum_strain_mean_positions.pdf}
\caption{Energy spectrum of the Hamiltonian in Eq.~\eqref{H_strain} for $\tau=0$ (grey lines) and $\tau = 70.84\times 10^{-4}$ (colored lines). We have used $N_x=301$ cells along the $x$-direction and zigzag terminations. For finite strain, the color scale indicates the mean position $\langle x \rangle$ of each state. The black dashed lines represent the pseudo-Landau levels (pLLs) whose approximate dispersion is given by Eq.~\eqref{E_pLL}.}
\label{fig:spectrum_strain}
\end{figure}
The bulk energy spectrum consists of a set of discretized energy levels, which, contrary to what Eq.~\eqref{H_Dirac_A} would predict, are not strictly flat. Indeed, higher-order terms neglected in the linear approximation complicate the picture away from the Dirac points. The first correction to the constant pseudo-magnetic field model can be incorporated by considering that strain also induces a spatial dependence of the Fermi velocity. Including this inhomogeneity, the pseudo-Landau levels acquire a dispersion in momentum space, which is approximately described by~\cite{Salerno2015a}
\begin{equation}\label{E_pLL}
E_\nu(q_y) = \pm t \sqrt{\frac{\tau \nu}{2} (1-\xi q_y a)}, \quad \nu \in \mathbb N.
\end{equation}
This analytical prediction is highlighted with black dashed lines in Fig.~\ref{fig:spectrum_strain}.
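The dispersion of Eq.~\eqref{E_pLL} is simple enough to evaluate directly; the snippet below (an illustrative check of our own, not part of the paper's numerics) confirms that the $\nu=0$ level is exactly flat and that the levels at the Dirac point follow the relativistic $\sqrt{\nu}$ scaling:

```python
import numpy as np

t = 1.0            # hopping (units of t)
tau = 70.84e-4     # strain strength used in the text
a = 1.0            # lattice spacing

def E_pLL(nu, qy, xi=+1, sign=+1):
    """Approximate pLL dispersion of Eq. (5): E = +/- t sqrt(tau nu (1 - xi qy a) / 2)."""
    return sign * t * np.sqrt(tau * nu * (1.0 - xi * qy * a) / 2.0)

# the nu = 0 level is exactly flat
qys = np.linspace(-0.5, 0.5, 11)
assert np.allclose(E_pLL(0, qys), 0.0)

# sqrt(nu) spacing at the Dirac point (qy = 0)
E1, E4 = E_pLL(1, 0.0), E_pLL(4, 0.0)
assert np.isclose(E4 / E1, 2.0)
```

At $q_y=0$ this gives $E_1 \simeq 0.06\,t$ and $E_2 \simeq 0.08\,t$ for $\tau = 70.84\times 10^{-4}$, consistent with the gaps in which the chemical potentials of the following figures are placed.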
\subsection{The valley Hall response}\label{IIb}
The Widom-St\v{r}eda formula provides an insightful connection between the Hall conductivity of a two-dimensional gas $\sigma_H$ and the variation of its bulk particle density $n_{\textrm{bulk}}$ in response to the modification of an external magnetic field $B$. This relation states that, whenever the Fermi energy $\mu_F$ lies within a spectral gap, this transport coefficient can be obtained as
\begin{equation}\label{Streda}
\sigma_H^{} = e \frac{\partial n_{\textrm{bulk}}}{\partial B}\Bigg\rvert_{\mu_F^{}}.
\end{equation}
This formula, originally derived by St\v{r}eda within linear response theory~\cite{Streda1982}, was obtained independently by Widom using very general thermodynamic relations~\cite{Widom1982}. Interestingly, its validity holds for any insulating state of matter, including strongly-correlated ones~\cite{Repellin2020}. In the case of Chern insulators, such as quantum Hall states, Eq.~\eqref{Streda} can be used to predict the emergence of a quantized Hall response~\cite{Xiao2005,Umucalilar2008}.
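As a minimal illustration of Eq.~\eqref{Streda}, consider a two-dimensional gas with $\nu$ completely filled Landau levels, for which $n_{\textrm{bulk}} = \nu B/\phi_0$; a finite-difference evaluation (a toy sketch of our own in natural units $e=h=1$, not specific to the strained lattice) then recovers the quantized Hall coefficient $\sigma_H = \nu\, e^2/h$:

```python
import numpy as np

# natural units: e = h = 1, so phi_0 = h/e = 1 and sigma_0 = e^2/h = 1
def n_bulk(B, nu_filled):
    """Bulk density of a 2D gas with nu_filled completely filled Landau levels."""
    return nu_filled * B / 1.0   # n = nu * B / phi_0

def sigma_H(B, nu_filled, dB=1e-6):
    """Streda formula, Eq. (6), evaluated by central finite differences."""
    return (n_bulk(B + dB, nu_filled) - n_bulk(B - dB, nu_filled)) / (2 * dB)

for nu in range(1, 5):
    assert np.isclose(sigma_H(0.1, nu), nu)   # sigma_H = nu e^2/h
```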
Strained honeycomb lattices preserve time-reversal symmetry, so the Hall conductivity in these systems remains trivially equal to zero. Nevertheless, due to the explicit breaking of space-inversion symmetry, the quantum valley Hall effect can take place~\cite{Settnes2017}. When the Fermi energy is taken to be near the charge neutrality point, the valley Hall response $\sigma_\text{V}^{}$ can be defined as the difference of the contributions to the Hall conductivity at $\mathbf{K}$ and $\mathbf{K'}$, denoted $\sigma^K_H$ and $\sigma_H^{K'}$ respectively, i.e.
\begin{equation}
\sigma_\text{V}^{} \equiv \sigma_H^{K}-\sigma_H^{K'}.
\end{equation}
Our first goal is to adapt Eq.~\eqref{Streda} to the system under consideration in order to probe the VHE via density response to strain variations. For $\tau\ll 1$, we can rely on the constant pseudo-magnetic field approximation discussed in the previous section to obtain the conductivity for each valley as
\begin{equation}\label{Streda_strain_valley}
\sigma^{\xi}_H \simeq e\frac{\partial n^{\xi}_{}}{\partial B^{\xi}_\tau}\Bigg\rvert_{\mu_F} = \xi \left(\nu+\frac 12 \right)\sigma_0,
\end{equation}
where $n^{\xi}$ stands for the contribution of the $K$ ($\xi=+1$) or the $K'$ ($\xi = -1$) valley to the bulk particle density, $\nu$ is the index of the last occupied pLL and $\sigma_0 = e^2/h$ is the conductivity quantum. The last equality has been derived by performing an explicit calculation of $n^{\xi}$ with the analytical eigenstates obtained from Eq.~\eqref{H_Dirac_A} (see Appendix~\ref{AppendixA}). For uniaxial strain such as that of Eq.~\eqref{t_j}, these wavefunctions remain fairly close to the exact eigenstates near the center of the sample~\cite{Salerno2015a,Jamotte2022} and can then be used to provide a good approximation to the particle density around the bulk of the system. Based on Eq.~\eqref{Streda_strain_valley}, the valley Hall response can be directly obtained in terms of the variation of the total bulk density $n_\textrm{bulk}^{} = n^{K}_{} + n^{K'}_{}$ with the strain intensity, as
\begin{eqnarray}
\notag
\sigma_\text{V}^{}&\simeq&e \left(\frac{\partial n^K}{\partial B^{K}_\tau} - \frac{\partial n^{K'}}{\partial B^{K'}_\tau}\right)\Bigg\rvert_{\mu_F}\\
\label{Streda_strain}
&=& e \frac{\partial n_{\textrm{bulk}}}{\partial B_{\tau}^{K}}\Bigg\rvert_{\mu_F}\\
\notag
&=& \frac{\partial \tilde{n}_{\textrm{bulk}}}{\partial \alpha_\tau^{}}\Bigg\rvert_{\mu_F} \sigma_0 = (2\nu+1)\sigma_0.
\end{eqnarray}
Here, we defined
\begin{equation}
\alpha_\tau^{} \equiv B_\tau^{K} A_c/\phi_0 ,
\end{equation}
as the flux in a primitive cell of area $A_c = 3\sqrt 3a^2/2$ in units of the flux quantum $\phi_0 = h/e$ in the presence of a magnetic field $B_{\tau}^{K}$. In the last equality of Eq.~\eqref{Streda_strain}, $\tilde{n}_{\textrm{bulk}} = n_{\textrm{bulk}}A_c$ stands for the dimensionless particle density per cell in the bulk (i.e.~the number of particles within a unit cell illustrated in Fig.~\ref{fig:lattice}\textbf{b}).
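Combining Eq.~\eqref{Buni} with the definition above gives the closed form $\alpha_\tau^{} = \sqrt{3}\,\tau/12\pi$. The short sketch below (an illustrative consistency check, not part of our numerical code) confirms that the strain $\tau = 70.84\times 10^{-4}$ used throughout corresponds to $\alpha_\tau^{} \simeq 3.255\times 10^{-4}$:

```python
import numpy as np

def alpha_tau(tau, a=1.0, hbar=1.0, e=1.0):
    """Flux per plaquette alpha_tau = B_tau^K * A_c / phi_0, from Eqs. (4) and (10)."""
    B = hbar * tau / (9 * e * a**2)       # pseudo-field at K, Eq. (4)
    A_c = 3 * np.sqrt(3) * a**2 / 2       # primitive-cell area
    phi_0 = 2 * np.pi * hbar / e          # flux quantum h/e
    return B * A_c / phi_0

# closed form: alpha_tau = sqrt(3) tau / (12 pi)
tau = 70.84e-4
assert np.isclose(alpha_tau(tau), np.sqrt(3) * tau / (12 * np.pi))
assert np.isclose(alpha_tau(tau), 3.255e-4, rtol=1e-3)
```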
Note that while usual experimental probes of the valley Hall conductivity rely on non-local (out-of-equilibrium) transport measurements, Eq.~\eqref{Streda_strain} provides an alternative approach, which only relies on locally probing the equilibrium particle density variations upon modifying the strain strength. We stress that this approach relies on the low-energy Dirac model introduced in Eq.~\eqref{H_Dirac_A}, and hence it is valid in the regime $\tau \ll 1$.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth]{density_Btau_B.pdf}
\caption{Particle density per cell for the same parameters as in Fig.~\ref{fig:spectrum_strain} ($\alpha_\tau^{} = 3.255 \times 10^{-4}$) for a strained sample (solid line) and for a lattice with a homogeneous external magnetic field $B = B_\tau^{K}$ (dashed line). Green and blue colors correspond to chemical potentials $\mu_1^{} = 0.02\,t$ (first gap) and $\mu_2^{} = 0.068\,t$ (second gap), respectively. Note that we plot deviations of $\tilde{n}$ from unity, which are of the order of $10^{-3}$.}
\label{fig:density_tot}
\end{figure}
In the case of uniaxial strain, the explicit position dependence of the hopping amplitudes brings a caveat to the problem: it makes the particle density position-dependent even deep in the bulk of the sample. This behaviour is explicitly shown in Fig.~\ref{fig:density_tot}, where we plot the dimensionless density of particles per cell
\begin{equation}
\tilde{n}(x) = \sum_{\alpha=A,B} \tilde{n}(x_{\alpha}) = A_c\sum_{\alpha=A,B} n(x_{\alpha}) ,
\label{ntildex}
\end{equation}
as a function of the cell position, which is here denoted as $x\equiv x_{2l}$ for $l=0,1,2\hdots$ (as defined in Fig.~\ref{fig:lattice}\textbf{b}). In Eq.~\eqref{ntildex}, the position of an $\alpha$-site within the unit cell at $x$ has been denoted as $x_{\alpha}$.
We show the behavior of this quantity for two different values of the chemical potential and the same parameters as in Fig.~\ref{fig:spectrum_strain}. We also compare these densities with those of a honeycomb lattice in the presence of a homogeneous external magnetic field of strength $B=B_{\tau}^{K}$. Note that the difference between both models tends to zero at the center of the sample. Characteristic Friedel oscillations are clearly visible near the edges of the sample, until the density peaks due to the presence of the edge modes. As opposed to the constant magnetic field case, the strained lattice presents a clear asymmetry between the right and left boundaries. On the left side of the sample, the hopping amplitude decreases with respect to its unstrained value [see Eq.~\eqref{t_j}], making the wavefunctions more localized and the density slightly higher than the one at $x=x_c$. The opposite behavior takes place on the right side of the sample, where the density decreases with respect to its value at the center. Since the deviation is symmetric with respect to this point, we can define a bulk particle density by averaging the densities $\tilde{n}(x)$ over a certain radius $r_{\textrm{bulk}}=L_{\textrm{bulk}}/2$ around $x=x_c$. In this way,
\begin{equation}
\tilde{n}_{\textrm{bulk}} = \frac{1}{N_{\textrm{bulk}}}\sum_{x\in\textrm{bulk}}\tilde{n}(x),
\label{n_tilde_bulk}
\end{equation}
where the bulk region corresponds to $x \in [x_c - r_{\textrm{bulk}}, x_c + r_{\textrm{bulk}}]$ and $N_{\textrm{bulk}}$ is the number of cells considered in the sum. If the bulk radius is small compared to the size of the system, the bulk density as defined above will remain quite close to the one that would be obtained from a constant magnetic field model, making the St\v{r}eda formulation of Eqs.~\eqref{Streda_strain_valley} and~\eqref{Streda_strain} still adequate. In particular, the valley Hall response may be obtained by averaging the density variations in the central bulk region as
\begin{equation}
\label{sigma_averaged}
\sigma_V = \frac{1}{N_{\textrm{bulk}}}\sum_{x\in \textrm{bulk}}\mathfrak{S}(x),
\end{equation}
where
\begin{equation}
\label{local_marker}
\mathfrak{S}(x)=\sigma_0\frac{\partial \tilde{n}(x)}{\partial \alpha_{\tau}}\Bigg\rvert_{\mu_F}.
\end{equation}
Note that $\mathfrak{S}(x)$ plays the role of a local marker in the problem: it provides a way to locally probe the valley Hall coefficient when properly averaged over $L_{\textrm{bulk}}$.
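Experimentally, the marker of Eq.~\eqref{local_marker} would be estimated from cell densities measured at two nearby strain values. The sketch below illustrates the corresponding central finite difference on a placeholder density profile (a linear model of our own, mimicking a $\nu=0$ bulk plateau), for which $\mathfrak{S}(x)/\sigma_0^{}=1$ everywhere:

```python
import numpy as np

def local_marker(n_tilde, x, alpha, dalpha=1e-6):
    """Central-difference estimate of Eq. (14), in units of sigma_0 = e^2/h:
    S(x) = d n_tilde(x) / d alpha_tau at fixed Fermi energy."""
    return (n_tilde(x, alpha + dalpha) - n_tilde(x, alpha - dalpha)) / (2 * dalpha)

# placeholder density profile: n_tilde = 1 + (2 nu + 1) alpha with nu = 0 (first
# gap), so the marker should equal 1 at every cell position
n_model = lambda x, alpha: 1.0 + 1.0 * alpha
xs = np.linspace(-10, 10, 21)
assert np.allclose([local_marker(n_model, x, 3.255e-4) for x in xs], 1.0)
```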
One must keep in mind that the valley Hall quantization is a property of an insulating bulk. In order to measure it, the Fermi energy has to lie in a region between two pLLs. A word of caution is in order here: since we are dealing with a finite-size sample in the presence of edge states, there are no true spectral gaps in the system. Nevertheless, the bulk particle density for sufficiently large systems ($L_{\textrm{bulk}}\ll L_x$) is expected to depend very weakly on the filling of the edge modes, and therefore the St\v{r}eda formulation should remain reasonably accurate, as discussed in the next section.
\section{Results and discussion}\label{III}
\subsection{Spectral properties}\label{IIIa}
With the aim of determining the energy regions where the Widom-St\v{r}eda formula can be applied, it is instructive to study the spectral properties of the strained honeycomb lattice. We show in Fig.~\ref{fig:Strain_Energy_alpha_sigma_spectra_alphas}\textbf{a} the density of states (DoS) of the sample as a function of energy and $\alpha_{\tau}$, calculated as
\begin{equation}
\rho(\varepsilon) = -\frac{1}{\pi}\textrm{Im}\textrm{Tr}[\hat{G}^r(\varepsilon)],
\end{equation}
with $\hat{G}^{r}(\varepsilon)=(\varepsilon + i\eta - \hat{H})^{-1}$ the retarded Green's function of the system. For reference purposes, the particular DoS for $\alpha_{\tau}^{}=3.255\,\times 10^{-4}$ is shown in the right panel of Fig.~\ref{fig:Strain_Energy_alpha_sigma_spectra_alphas}\textbf{a}, which corresponds to the value of strain ($\tau^{}=70.84\times 10^{-4}$) used to produce the spectrum of Fig.~\ref{fig:spectrum_strain}.
One clearly identifies a continuum of states representing the pseudo-Landau levels $\nu=0,1,2$, as well as a set of discrete modes, which stem from the edge states of the system. As opposed to the case of a non-strained honeycomb lattice in a real magnetic field, the pseudo-Landau levels for $|\nu|\geq 1$ have a certain width in energy due to their finite drift velocity. As a visual aid, we have included their analytical energy at the Dirac points as solid black lines -- see Eq.~\eqref{E_pLL} for $q_y^{} = 0$. The dependence of the DoS on strain nicely reflects the spectral flow: when $\alpha_{\tau}$ increases, the discretized edge modes decrease in energy until they merge with the continuum of bulk states. This behavior is consistent with their wavefunction moving away from the hard-wall potential, while, at the same time, becoming more localized in space (recall that the effective magnetic length $\ell_B^{} = \sqrt{\hbar/e B_{\tau}^{K}} = \sqrt{A_c^{}/2\pi \alpha_\tau^{}}$).
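For reference, the effective magnetic length quoted above can be evaluated for the strain used in our figures; the one-line sketch below (an illustrative consistency check of our own) shows that $\ell_B^{} \simeq 35.6\,a$ for $\alpha_\tau^{} = 3.255\times 10^{-4}$, close to the bulk window $L_{\textrm{bulk}} = 36\,a$ employed below:

```python
import numpy as np

def magnetic_length(alpha_tau, a=1.0):
    """Effective magnetic length l_B = sqrt(A_c / (2 pi alpha_tau)), in units of a."""
    A_c = 3 * np.sqrt(3) * a**2 / 2   # primitive-cell area
    return np.sqrt(A_c / (2 * np.pi * alpha_tau))

l_B = magnetic_length(3.255e-4)
assert np.isclose(l_B, 35.6, rtol=1e-2)
```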
\begin{figure}[!t]
\center
\includegraphics[width=1.0\columnwidth]{DoS.pdf}
\caption{\textbf{a.} Total density of states $\rho(\varepsilon)$ in logarithmic scale as a function of energy $\varepsilon$ and pseudomagnetic flux $\alpha_\tau^{}$. The size of the sample is the same as in Fig.~\ref{fig:spectrum_strain}, namely $L_x = 450.5\,a$. The black dashed and dashed-dotted lines represent the chemical potentials $\mu_1^{} = 0.02t$ and $\mu_2^{} = 0.068 t$, respectively. \textbf{b.} Bulk density of states $\rho_{\textrm{bulk}}(\varepsilon)$ in logarithmic scale for $L_\textrm{bulk} = 36\,a$. In both panels, black solid lines identify the energy of the analytical pseudo-Landau levels at the Dirac points. In the right panels, we show the corresponding DoS (or LDoS) for $\alpha^{}_\tau = 3.255 \times 10^{-4}$ indicated by the vertical dashed lines in the left panels.}
\label{fig:Strain_Energy_alpha_sigma_spectra_alphas}
\end{figure}
The key quantity to evaluate the density variations through the sample [Eqs.~\eqref{sigma_averaged} and~\eqref{local_marker}] is the local density of states per cell (LDoS), which may be obtained in terms of the retarded Green's function as
\begin{equation}
\rho(\varepsilon,x) = -\frac{1}{\pi}\sum_{\alpha=A,B}\textrm{Im}\langle x_\alpha | \hat{G}^{r}(\varepsilon)|x_\alpha\rangle,
\end{equation}
where the sum runs over the two sublattice sites belonging to the cell at $x = x_{2l}^{}$.
Note that in order to obtain the local density of particles at each cell, this quantity must be integrated in energy up to the Fermi level,
\begin{equation}
\tilde{n}(x) = \int_{-\infty}^{\mu_F}\rho(\varepsilon,x)d\varepsilon.
\end{equation}
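The Green's-function machinery above can be illustrated on a minimal example; the snippet below (a toy two-site model of our own, not the strained lattice itself) computes $\rho(\varepsilon)$ with a finite broadening $\eta$ and verifies both the DoS sum rule and the Fermi-sea integration defining the particle number:

```python
import numpy as np

def dos(H, energies, eta=0.05):
    """rho(eps) = -(1/pi) Im Tr G^r(eps), with G^r(eps) = (eps + i eta - H)^(-1)."""
    dim = H.shape[0]
    return np.array([
        -np.imag(np.trace(np.linalg.inv((e + 1j * eta) * np.eye(dim) - H))) / np.pi
        for e in energies])

# toy two-site "cell": eigenvalues at -t and +t
t = 1.0
H = np.array([[0.0, -t], [-t, 0.0]])
eps = np.linspace(-20.0, 20.0, 4001)
de = eps[1] - eps[0]
rho = dos(H, eps)

# sum rule: the energy-integrated DoS counts the two states
assert np.isclose((rho * de).sum(), 2.0, atol=2e-2)

# particle number at half filling (mu_F = 0): only the eps = -t level is occupied
n_filled = (rho[eps <= 0.0] * de).sum()
assert np.isclose(n_filled, 1.0, atol=2e-2)
```

The small deviations from the exact integers stem from the Lorentzian tails introduced by the broadening $\eta$, and vanish as $\eta \to 0$.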
The bulk particle density previously defined in Eq.~\eqref{n_tilde_bulk} can then be obtained as the average
\begin{equation}
\tilde{n}_{\textrm{bulk}} = \frac{1}{N_{\textrm{bulk}}}\sum_{x \in \textrm{bulk}}\int_{-\infty}^{\mu_F}\rho(\varepsilon,x)d\varepsilon = \int_{-\infty}^{\mu_F}\rho_{\textrm{bulk}}(\varepsilon)d\varepsilon.
\end{equation}
Here, $\rho_{\textrm{bulk}}(\varepsilon) = \sum_{x\in \textrm{bulk}} \rho(\varepsilon,x)/N_\textrm{bulk}$ is nothing but the density of states projected onto this particular region. The valley Hall coefficient in Eq.~\eqref{Streda_strain} can be re-written in terms of this quantity as an integral over the Fermi sea
\begin{equation}
\sigma_V = \sigma_0 \int_{-\infty}^{\mu_F}\frac{\partial\rho_{\textrm{bulk}}(\varepsilon)}{\partial \alpha_{\tau}}d\varepsilon.
\label{fixed_mu}
\end{equation}
In a finite-size sample, we thus expect to have a quantized result from Eq.~\eqref{fixed_mu} whenever $\rho_{\textrm{bulk}}(\mu_F)\simeq 0$.
We show in Fig.~\ref{fig:Strain_Energy_alpha_sigma_spectra_alphas}\textbf{b} the bulk density of states $\rho_{\textrm{bulk}}(\varepsilon)$ for a bulk region of width equal to $L_{\textrm{bulk}}=36\,a$ (centered around $x_c^{}$). This size of $L_{\textrm{bulk}}$ is of the order of the magnetic length for $\alpha_\tau^{} = 3.255 \times 10^{-4}$. We can clearly see that, for this particular bulk area, the contribution of the edge modes is negligible in the first two gaps between pLLs and slightly more relevant in the third gap (note the logarithmic color scale). This is in agreement with the edge states arising from the $\nu=2$ pLL being appreciably more delocalized than the ones originating from the $\nu=0$ and $\nu=1$ pLL. To avoid the presence of spurious finite-size effects, we will then focus on the results for the first two gaps.
\subsection{A local probe for the valley Hall response}\label{IIIb}
\begin{figure}[!b]
\center
\includegraphics[width=0.9\columnwidth]{cond_space.pdf}
\caption{Spatial dependence of the kernel $\mathfrak{S}(x)$ for the same parameters as in Fig.~\ref{fig:spectrum_strain} ($\alpha_\tau^{} = 3.255 \times 10^{-4}$) for a strained sample (solid lines) and for a lattice with a homogeneous external magnetic field $B = B_\tau^{K}$ (dashed lines). Green and blue colors correspond to chemical potentials $\mu_1^{} = 0.02\,t$ (first gap) and $\mu_2^{} = 0.068\,t$ (second gap), respectively.}
\label{fig:cond_space}
\end{figure}
Figure \ref{fig:cond_space} shows the local response $\mathfrak{S}(x)$ as a function of the position in the sample -- see Eq.~\eqref{local_marker} -- for two different chemical potentials: $\mu_1=0.02\,t$ in the first gap (green) and $\mu_2=0.068\,t$ in the second gap (blue). We compare the local response as obtained for a strained lattice (solid lines) and an unstrained lattice in a uniform magnetic field of magnitude $B=B_{\tau}^{K}$ (dashed lines). In this latter case, the local marker (also obtained with Eq.~\eqref{local_marker}) remains uniform in the bulk of the system and equal to the quantized integer value expected from the theory. This could be anticipated from Fig.~\ref{fig:density_tot}: a real magnetic field leads to plateaus in the bulk density profiles and, accordingly, in their variation with respect to the external flux. On the other hand, for the strained case, $\mathfrak{S}(x)$ presents a linear drift around the center of the system, naturally inherited from the density asymmetry due to uniaxial strain previously discussed. Deviations from this linearity arise as soon as edge effects become significant. These are more pronounced for the second than for the first gap, as the edge states from the former are more delocalized. Note that these local markers are equal at $x_\text{c}^{}$, confirming that the constant pseudo-magnetic field picture is accurate at the center of the sample. One also deduces from the linearity that discrepancies can be filtered away by simply averaging over an adequate radius $r_{\textrm{bulk}}=L_{\textrm{bulk}}/2$, as prescribed by Eq.~\eqref{sigma_averaged}.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{cond_Lbulk_tau_tot.pdf}
\caption{\textbf a-\textbf b. Valley Hall response obtained from Eq.~\eqref{sigma_averaged} as a function of the flux per plaquette $\alpha_{\tau}$ and the size of the bulk $L_{\textrm{bulk}}$ in a system of size $L_x^{} = 450.5 a$. The chemical potential has been fixed to $\mu_1=0.02\,t$ in the first and $\mu_2=0.068\,t$ in the second gap, respectively. The solid black lines represent the magnitude of the magnetic length $\ell_B^{}$ for each strain intensity. \textbf{c}-\textbf{d} Projected density of states $\rho_\textrm{bulk}(\mu_F)$ as a function of $\alpha_{\tau}$ and $L_{\textrm{bulk}}$ at the corresponding chemical potentials $\mu_\text{F}^{}=\mu_1$ and $\mu_\text{F}^{}=\mu_2$.}
\label{fig:cond_Lbulk_tau_tot}
\end{figure}
\begin{figure}[!b]
\center
\includegraphics[width=1\columnwidth]{cond_mu_tau_tot.pdf}
\caption{\textbf{a.} Valley Hall response obtained from Eq.~\eqref{sigma_averaged} as a function of $\alpha_{\tau}$ and $\mu_F$. Dashed white lines represent $\mu_F/t = \sqrt{\tau\nu/2}$ for $\nu=1$ and $\nu=2$. The size of the bulk region where we averaged the local marker has been taken to be $L_{\textrm{bulk}}=36\, a$. \textbf{b.} (Blue dots) Cut of \textbf{a} for $\alpha_{\tau}^{} =3.255 \times 10^{-4}$ (indicated by a solid black line in the upper panel). (Grey area) Local density of states $\rho_\text{bulk}^{}$ from Fig.~\ref{fig:Strain_Energy_alpha_sigma_spectra_alphas}\textbf{b}.}
\label{fig:cond_mu}
\end{figure}
In order to properly determine a reasonable bulk size to perform the average, we show in Fig.~\ref{fig:cond_Lbulk_tau_tot} the valley Hall response as obtained from Eq.~\eqref{sigma_averaged} as a function of $\alpha_{\tau}$ and $L_{\textrm{bulk}}^{}$ for $\mu_\textrm{F}^{} = \mu_1$ (panel $\textbf{a}$) and $\mu_\textrm{F}=\mu_2^{}$ (panel $\textbf{b}$). In both cases, the bulk density response to strain variations shows a remarkably quantized value ($\sigma_V^{} \simeq 1$ and $\sigma_V^{} \simeq 3$, respectively) when averaging over $L_{\textrm{bulk}}\lesssim 100 a$. Noticeable deviations from these integer values occur at very specific values of $\alpha_{\tau}$, appearing as horizontal stripes. For these values of strain, one of the discretized modes that stem from the edge states crosses the Fermi level, as can be seen in Fig.~\ref{fig:Strain_Energy_alpha_sigma_spectra_alphas}\textbf{a}. This is evidenced in Fig.~\ref{fig:cond_Lbulk_tau_tot}$\,$\textbf{c},\textbf{d} where we show the bulk density of states at the Fermi energy $\rho_\text{bulk}^{}(\mu_\text{F}^{})$ as a function of $\alpha_\tau^{}$ and $L_\text{bulk}^{}$ for the same chemical potentials as the ones used for the upper panels.
If the size of $L_{\textrm{bulk}}$ is sufficiently large, the density of states projected onto the selected region becomes finite, breaking down the insulating character of the portion of the system that is being probed. Indeed, $\rho_\text{bulk}^{}(\mu_\text{F}^{})$ also presents horizontal stripes at the same values of $\alpha_\tau^{}$ as in Fig.~\ref{fig:cond_Lbulk_tau_tot}\textbf{a} and Fig.~\ref{fig:cond_Lbulk_tau_tot}\textbf{b}, reflecting an increase of the contribution from the edge states. More generally, as expected from Section~\ref{IIIa}, a quantized valley Hall coefficient can be measured via bulk density response to strain variations as long as $\rho_\text{bulk}^{}(\mu_\text{F}^{}) \simeq 0$. When finite-size effects become appreciable, $\rho_\text{bulk}^{}(\mu_\text{F}^{})$ increases, leading to deviations of the response from the quantized integer values predicted by the theory. We thus conclude from Fig.~\ref{fig:cond_Lbulk_tau_tot} that, within our range of parameters, any $L_\text{bulk}^{} \lesssim 100a$ is a suitable bulk size for probing the quantum valley Hall effect in the first couple of gaps for a sample of size $L_x^{} = 450.5 \, a$.
In Fig.~\ref{fig:cond_mu}\textbf{a} we plot the valley Hall response as obtained from Eq.~\eqref{sigma_averaged} as a function of both the chemical potential $\mu_F$ and $\alpha_{\tau}$, so as to display a full scan of the valley Hall fan diagram. Here, we have chosen an average region of size $L_{\textrm{bulk}}=36\,a$. As a guide to the eye, we include dashed white lines to identify whenever the Fermi energy is equal to the analytical $\nu$-th pLL energy, i.e. $\mu_F/t = \sqrt{\tau\nu/2}$. We can clearly see the formation of plateaus in all the regions where the filling fraction of the pseudo-Landau levels remains constant. Quantization breaks down as soon as the bulk becomes metallic. This is best illustrated in Fig.~\ref{fig:cond_mu}\textbf{b}, where we show a specific cut of the upper panel for $\alpha_{\tau}= 3.255 \times 10^{-4}$. The shaded grey area represents the bulk density of states $\rho_{\textrm{bulk}}(\mu_F)$. Whenever the region being probed becomes incompressible ($\partial\rho_{\textrm{bulk}}/\partial \mu_F^{}\simeq 0$), a robust QVH plateau occurs. On the other hand, as soon as $\rho_{\textrm{bulk}}(\mu_F)$ becomes finite, the density response to strain variations becomes erratic.
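The filtering role of the bulk average can be illustrated with a minimal Python sketch. The drift slope used below is synthetic (purely illustrative, not the actual data of Fig.~\ref{fig:cond_space}); the point is that averaging a marker with an odd, linear component over a window centered at $x_\text{c}^{}$ removes that component and recovers the quantized central value:

```python
import numpy as np

# Illustrative sketch (synthetic values, not the paper's actual data):
# a local marker S(x) with a linear drift about the sample center, as
# described for the uniaxially strained lattice in the first gap.
L = 450.5                      # sample size in units of a (from the text)
x = np.linspace(0.0, L, 4501)
x_c = L / 2.0
S = 1.0 + 0.004 * (x - x_c)    # assumed drift slope, purely illustrative

# Averaging over a window of size L_bulk centered at x_c removes the odd
# (linear) component and recovers the quantized value at the center.
L_bulk = 36.0
mask = np.abs(x - x_c) <= L_bulk / 2.0
sigma_V = S[mask].mean()
print(round(sigma_V, 6))       # -> 1.0
```

Any symmetric window about $x_\text{c}^{}$ works equally well as long as it stays clear of the edge regions, where the drift is no longer linear.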
\subsection{Trigonal strain in a finite hexagonal flake}\label{IIIc}
Up to this point, we have restricted our analysis to large system sizes, as made possible by applying uniaxial strain along one direction and periodic boundary conditions along the other. It is worth asking whether these results hold for smaller samples with different strain configurations or edge terminations. In this section, we explore a more realistic geometry in which open boundary conditions are imposed along the entire perimeter of the system. In this case, trigonal strain can be implemented, which we model with a space-dependent tunneling amplitude of the form
\begin{equation}
t_j(\bm{r}) = t\left(1 + \tau \frac{(\bm{r}-\bm{r}_c)\cdot\bm{\delta}_j}{3 a^2}\right),
\end{equation}
where $\bm{r}_c$ denotes the position of the system's center. This particular strain leads to an effective gauge potential of the form $e\bm{A}^{\xi}(\bm{r}) = \xi \frac{\hbar \tau}{3 a^2}\left((y-y_c)\hat{\mathbf{x}} - (x - x_c)\hat{\mathbf{y}}\right)$ \cite{Salerno2017}. The corresponding pseudo-magnetic field at each valley is consequently given by
\begin{equation}
\bm{B}_{\tau}^{\xi}= \bm{\nabla}\times \bm{A}^{\xi} = -\xi\frac{2 \hbar \tau}{3 e a^2}\hat{\mathbf{z}}.
\end{equation}
Note that, with this convention, aside from a change in magnitude, the sign of the pseudo-magnetic field at each valley is opposite to that of the previously analyzed case [see Eq.~\eqref{Buni}]. We therefore anticipate a bulk density response to strain variations that is negative (positive) for $\tau>0$ ($\tau<0$) when the chemical potential $\mu_F >0$ lies within a gap between pLLs.
Trigonal deformations are known to produce non-dispersive pseudo-Landau levels, as opposed to the tilted ones obtained for the uniaxially strained lattices considered previously. We thus expect particle densities with a plateau-like behavior near the bulk of the system, which could improve the spatial homogeneity of the local valley marker $\mathfrak{S}(\bm r)$. Furthermore, these highly degenerate pLLs lead to larger bulk spectral gaps that could help in detecting a precise quantized response at higher filling fractions.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{DoS_flake.png}
\caption{Total density of states $\rho(\varepsilon)$ in the hexagonal flake for (\textbf{a}) $\tau >0$ and (\textbf{b}) $\tau<0$. Dashed grey lines indicate the energy of the analytical pseudo-Landau levels $E_{\nu} = \pm v_F \sqrt{2\hbar e|\bm{B}_{\tau}^{\xi}|\nu}=\pm t \sqrt{3\tau\nu}$ for $\nu=0,1,2$. In the right panels we show a specific cut of the left panels (marked with a vertical dashed blue line) for $|\tau| = 0.07$.}
\label{dosflake}
\end{figure}
\begin{figure*}[!t]
\center
\includegraphics[width=\textwidth]{s_LocDoS_cut_new.png}
\caption{Panels \textbf{a} and \textbf{c} show the local density of states discriminated by sublattice $\rho_{\alpha}(\varepsilon,\bm{r})$ [see Eq.~\eqref{rho_sublattice}] for $\tau>0$ and $\tau<0$, respectively. The energy has been taken to be $\varepsilon=\mu_F=0.1\,t$ and $|\tau|=0.07$. We exploit the mirror symmetry of the flake with respect to the $y=y_c$ axis to plot separately half of the A-sublattice (the upper part) and half of the B-sublattice (the lower part). A corresponding cut of the panels along $\bm{r}=x_c\hat{\mathbf{x}}+y\hat{\mathbf{y}}$ is shown to the left, where we have restored the entire spatial distribution. Panels \textbf{b} and \textbf{d} show the corresponding local marker $\mathfrak{S}_{\alpha}(\bm{r})$ differentiated by sublattice [see Eq.~\eqref{salpha}].}
\label{hexagonal_flakes}
\end{figure*}
We numerically study a triaxially-stretched honeycomb-lattice flake shaped as a hexagon, hence preserving the trigonal symmetry. The terminations have been chosen so as to alternate between bearded and zigzag-like edges. In this way, the perimeter of the hexagon can be built from sites belonging to a single sublattice (without loss of generality, we choose them to be A sites). Interestingly, depending on the sign of the hopping variations (namely, $\tau$ being positive or negative), these flakes either do or do not support helical edge states in the gap between the 0-th and 1-st pLLs. As thoroughly described in Ref.~\cite{Salerno2017}, this essentially depends on whether the wavefunction of the $\nu=0$ pLL is localized on the B-sublattice ($\tau>0$ within our convention) or on the A-sublattice ($\tau<0$). In the former case, the zero-energy pLL can mix with the non-propagating edge modes that live on the A-sublattice, generating dispersive helical edge states, while in the latter case the chiral symmetry forbids this from happening. We will present results for both cases and show that the density response $\mathfrak{S} (\bm r)$ that we propose as a valley Hall marker is quantized regardless of whether edge states are present.
We show in Fig.~\ref{dosflake} the flake's total density of states $\rho(\varepsilon)$ as a function of energy and the strain strength for the case $\tau>0$ (upper panel) and $\tau<0$ (lower panel). A pseudo-Landau level structure is revealed for both cases. The bulk spectrum is reasonably well described by the low-energy approximate model, namely $E_{\nu} = \pm v_F \sqrt{2\hbar e|\bm{B}_{\tau}^{\xi}|\nu}=\pm t \sqrt{3\tau\nu}$, which is displayed by grey dashed lines. As already anticipated, for positive $\tau$, edge states emerge in the first gap while for negative $\tau$ the density of states remains exactly zero within that range of energies, indicating the absence of propagating helical modes. We also note that boundary modes are present in all gaps between higher pseudo-Landau levels ($\nu>0$), in accordance with the criterion developed in Ref.~\cite{Salerno2017}. This behavior can be better captured by analyzing the local density of states at a given energy, which we here define for each sublattice site as
\begin{equation}
\rho_{\alpha}(\varepsilon,\bm{r}) = -\frac{1}{\pi}\textrm{Im}\langle \bm{r}_{\alpha} | \hat{G}^{r}(\varepsilon)|\bm{r}_{\alpha}\rangle,
\label{rho_sublattice}
\end{equation}
where $\alpha=A,B$. In Fig.~\ref{hexagonal_flakes}~\textbf{a} and \textbf{c} this quantity is plotted for both positive and negative strain variations, respectively. We have chosen an energy within the first gap $\varepsilon=\mu_F^{} = 0.1\,t$ and $|\tau|=0.07$. For this large pseudo-magnetic field, the magnetic length is equal to $\ell_B\simeq 4.63\,a$. We exploit the mirror symmetry of the flake with respect to the $y=y_c$ axis to plot separately half of the A-sublattice (the upper part) and half of the B-sublattice (the lower part). The missing pieces can be simply obtained by reflecting the corresponding portions with respect to the horizontal line that divides the sample into two halves.
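For reference, the quoted numerical values follow from the standard relations $\ell_B^{}=\sqrt{\hbar/(e|\bm{B}_{\tau}^{\xi}|)}$ and $E_{\nu}=\pm t\sqrt{3\tau\nu}$; the short Python sketch below (with the lattice constant set to $a=1$) reproduces them:

```python
import math

tau = 0.07   # strain strength used in the figures (|tau| = 0.07)
a = 1.0      # lattice constant, setting the unit of length

# Magnetic length l_B = sqrt(hbar/(e|B|)) with |B| = 2*hbar*tau/(3*e*a^2),
# which reduces to l_B = a*sqrt(3/(2*tau)).
l_B = a * math.sqrt(3.0 / (2.0 * tau))
print(round(l_B, 2))             # -> 4.63, the value quoted in the text

# Pseudo-Landau level energies E_nu = t*sqrt(3*tau*nu), in units of t
E = [math.sqrt(3.0 * tau * nu) for nu in range(3)]
print([round(e, 3) for e in E])  # -> [0.0, 0.458, 0.648]
# mu_F = 0.1 t indeed lies inside the first gap: 0 < 0.1 < 0.458
```

This confirms both that $\ell_B^{}\simeq 4.63\,a$ for $|\tau|=0.07$ and that the chosen $\mu_F^{}=0.1\,t$ sits well inside the gap between the 0-th and 1-st pLLs.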
In Fig.~\ref{hexagonal_flakes}\textbf{a}, delocalized edge modes surrounding the entire perimeter of the flake are clearly visible. A corresponding cut along $\bm{r}=x_c\hat{\mathbf{x}}+y\hat{\mathbf{y}}$ is shown to the left of the figure, where we have restored the entire spatial distribution. At energy $\mu_F^{} = 0.1\,t$, the edge modes spread over a few magnetic lengths, being more delocalized on the B than on the A sublattice. In Fig.~\ref{hexagonal_flakes}\textbf{c}, the LDoS remains strictly zero along the entire flake, consistent with the absence of edge states for this sign of $\tau$.
In the right-hand side of Fig.~\ref{hexagonal_flakes} we present the local valley Hall marker for $\tau>0$ (\textbf{b}) and $\tau<0$ (\textbf{d}). The chemical potential has been chosen to be $\mu_F = 0.1\,t$. The density response to strain variations is discriminated by sublattice, that is to say, we plot individually
\begin{equation}
\mathfrak{S}_{\alpha}(\bm{r}) = \sigma_0 \frac{\partial \tilde{n}_{\alpha}(\bm{r})}{\partial \alpha_{\tau}}\Bigg\rvert_{\mu_F}=\sigma_0 \int_{-\infty}^{\mu_F}\frac{\partial \rho_{\alpha}(\varepsilon,\bm{r})}{\partial \alpha_{\tau}}d\varepsilon,
\label{salpha}
\end{equation}
where $\tilde{n}_{\alpha}(\bm{r})$ is the dimensionless particle density at position $\bm{r}_{\alpha}$ and $\alpha=A,B$. We can clearly see that the response near the bulk of the flake is quantized in both scenarios, i.e.~whether the system supports or lacks edge states. Moreover, the response is quantized separately on each sublattice (see Appendix~\ref{AppendixA}). Due to the particle-hole symmetry of the problem, the contribution of the half-filled system to the total particle density must remain inert to strain variations. In this sense, the bulk valley marker for chemical potentials within the first gap can be analyzed by only considering the modifications of the particle density of the 0-th pLL, which is localized either on the B-sublattice for $\tau>0$ or on the A-sublattice for $\tau<0$. The sublattice polarization of the bulk pLL leads to a sublattice polarization of the marker and explains why $\mathfrak{S}_{A}(\bm{r})\simeq 0$ and $\mathfrak{S}_{B}(\bm{r})\simeq -1$ for $\bm{r}$ around $\bm{r}_c$ in Fig.~\ref{hexagonal_flakes}\textbf{b}, while in Fig.~\ref{hexagonal_flakes}\textbf{d} the opposite behavior takes place, namely $\mathfrak{S}_{A}(\bm{r})\simeq 1$ and $\mathfrak{S}_{B}(\bm{r})=0$. Note that, in this last case, the response of the B sites is strictly zero in the entire sample: due to the absence of edge modes and the localization of the 0-th pLL on the A sites, the B sublattice remains completely half-filled for a chemical potential within the first gap, meaning that $\tilde{n}_B(\bm{r})=1/2$ for every $\bm{r}$ and hence $\partial\tilde{n}_B(\bm{r})/\partial \alpha_{\tau}\rvert_{\mu_F}=0$. We stress that the plateau-like behavior of the local marker in Fig.~\ref{hexagonal_flakes} is here inherited from the fact that the particle densities of triaxially strained lattices are more uniform around the bulk than the tilted ones previously obtained for the uniaxial configuration. 
This can be seen in the left panel of Fig.~\ref{Fig_1_story}, where we plot the dimensionless particle density per cell $\tilde n(\bm r)$ for the same parameters as in Fig.~\ref{hexagonal_flakes}$\mathbf{d}$.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{cond_mu_LocDoS_xy.png}
\caption{\textbf{a.} Valley Hall response $\sigma_V$ (blue points) obtained from averaging the local marker per cell $\mathfrak{S}(\bm{r}) = \mathfrak{S}_{A}(\bm{r})+\mathfrak{S}_{B}(\bm{r})$ in a bulk region of size $L_{\textrm{bulk}}=5\,a$ as a function of $\mu_F$. The grey shaded area shows the density of states projected onto the bulk region at the Fermi level $\rho_\text{bulk}^{}(\mu_F)$. Panels \textbf{b} and \textbf{c} show the local density of states $\rho(\mu_F,\bm{r})$ along $\bm{r} = x_c\hat{\mathbf{x}} + y\hat{\mathbf{y}}$ for chemical potentials $\mu_F=0.52\,t$ and $\mu_F = 0.7\,t$, respectively, indicated by black dashed lines in \textbf{a}.}
\label{cond_mu_rhobulk}
\end{figure}
The valley Hall coefficient $\sigma_V^{}$ may be obtained by following the same procedure as before, namely, averaging the local marker per cell $\mathfrak{S}(\bm{r})=\mathfrak{S}_{A}^{}(\bm{r})+\mathfrak{S}_B^{}(\bm{r})$ over a reasonable bulk radius $r_{\textrm{bulk}}=L_{\textrm{bulk}}/2$ [see Eq.~\eqref{sigma_averaged}]. Since the bulk response in Fig.~\ref{hexagonal_flakes} remains fairly homogeneous and quantized over a magnetic length around the center of the system, we take $L_{\textrm{bulk}}=5\,a$, which is of the order of the magnetic length $\ell_B^{}$ for $\tau = 0.07$. The averaged response as a function of chemical potential is shown in Fig.~\ref{cond_mu_rhobulk}$\mathbf{a}$ for the case of $\tau = 0.07 > 0$. We also include, as a shaded grey area, the density of states projected onto the bulk region being probed. The first couple of Lorentzian-shaped peaks represent the bulk pLL states which, in contrast to the uniaxially strained case, are here well defined in energy. The valley Hall coefficient remains quantized to a good degree for chemical potentials within the first gap. Nevertheless, finite-size effects already become appreciable in the gaps between higher pseudo-Landau levels. In Fig.~\ref{cond_mu_rhobulk}~$\mathbf{b}$ and $\mathbf{c}$ we show the behavior of the local density of states at chemical potentials within the second and third gaps ($\mu_F = 0.52\,t$ and $\mu_F^{} = 0.7\,t$), indicated by black dashed lines in Fig.~\ref{cond_mu_rhobulk}\textbf{a}. Note that the number of nodes of the LDoS differentiated by sublattice nicely reflects the polynomial hierarchy of the Dirac eigenspinors subject to a pseudo-magnetic field. In contrast to the case of $\mu_F = 0.1\,t$ (Fig.~\ref{hexagonal_flakes}$\mathbf{a}$), the edge modes at these energies are much more delocalized and hence break down the insulating character of the bulk portion of the system being probed. 
As already discussed in the previous section, the equivalence between the valley Hall coefficient and the density variations strongly relies on probing an incompressible region. In this sense, the deviations from the expected quantized result taking place for chemical potentials within the second and third gaps can be partly attributed to a finite $\rho_{\textrm{bulk}}$ at the Fermi level. On the other hand, for these large values of strain, the low-energy model fails to describe correctly the higher order bulk pLLs, which leads to another natural source of discrepancy with the analytical prediction.
\section{Concluding remarks}
\label{IV}
This work introduces an alternative approach for measuring the valley Hall response in strained honeycomb lattices, which relies on probing an equilibrium property of these systems locally in the bulk. Specifically, we have demonstrated that quantized valley Hall coefficients can be obtained by measuring the variation of the particle density, deep within the bulk of a sample, upon small variations of the applied strain. This bulk approach to valley Hall physics, which is based on the Widom-St\v{r}eda formula, leads to the introduction of a local valley Hall marker $\mathfrak{S}(\bm{r})$, which is particularly relevant for realistic lattices with open boundary conditions. When properly averaged over a central insulating region, this marker (determined here as a local density response function) remains quantized and robust as a function of both the chemical potential and the pseudo-magnetic field strength. Such a plateau-like behavior takes place whenever the probed region is genuinely incompressible, hence requiring the existence of sufficiently large spectral gaps between pseudo-Landau levels. We have compared our numerical findings with the results expected from a low-energy analytical model that incorporates the effect of strain at lowest order, finding a good agreement for sufficiently large samples and moderate values of strain. In order to obtain strictly quantized (integer) values from this local response, the magnitude of the applied strain should be chosen appropriately:~a strong strain can potentially lead to large discrepancies with the analytical predictions, while a very weak strain would lead to tiny spectral gaps between the pLLs. As a general rule, a clear separation between the bulk and the boundary states of the sample is needed to obtain a satisfactory quantized result, which occurs whenever the magnetic length remains much smaller than the system size. 
We have investigated different strain configurations and edge terminations, and we have found that the quantization of our proposed marker remains independent of the edge physics, i.e.~of the existence of helical edge states living at the boundaries of a finite-size sample. This behavior is rooted in the fact that, within our framework, the valley Hall coefficient is directly extracted from a Fermi sea response. This stands in sharp contrast with usual transport measurements within the linear regime, which only have access to Fermi surface properties~\cite{Marmolejo-Tejada_2018}.
Synthetic molecular lattices~\cite{Gomes2012,Polini2013,Swart2017,Drost2017,Khajetoorians2019}, where a two-dimensional electron gas is confined to move in a properly designed array of carbon monoxide (CO) molecules, present themselves as an appealing experimental platform where our approach can be tested. Indeed, in these tailored nanostructures, the pseudo-magnetic field can be finely tuned by simply changing the position of the CO molecules in a honeycomb lattice pattern~\cite{Gomes2012,Polini2013}. The spectral properties of these systems are usually accessed via STM probes, which yield valuable information on the local density of states. The filling fraction of the pseudo-Landau levels could be controlled with an external gate voltage and, in principle, a tomography of the particle density could be built by integrating the LDoS in energy up to the desired Fermi level at each lattice position. In this sense, quantized bulk density responses to strain variations should be experimentally accessible with current technologies. Furthermore, we note that STM techniques might also resolve the sublattice polarization of the valley Hall response.
A possible alternative is offered by ultracold Fermi gases in optical lattices, where strain can be finely adjusted through well-designed atom-light couplings~\cite{Alba2013,Tian2015,Jamotte2022}, and where the local particle density can be directly measured in-situ~\cite{Greiner2009, Kuhr2010, Greiner2015, Zwierlein2015, Gross2015, Kuhr2015,leonard2022realization}. Last but not least, recent advances in engineering arbitrary two-dimensional optical tweezer arrays~\cite{Barredo2016,Wang2020,Schymik2020,Ebadi2021, Bakr2022} open yet another route for the study of quantum gases in strained lattices, within a highly controllable and scalable environment. \\
\paragraph*{Contribution statement} M. Jamotte and L. Peralta Gavensky contributed equally to this work. \\
\paragraph*{Acknowledgements}
The authors acknowledge fruitful discussions with Ingmar Swart. Work in Brussels is supported by the ERC Starting Grants TopoCold and LATIS, the Fonds De La Recherche Scientifique (FRS-FNRS, Belgium), the FRIA grant FC 38756 and the EOS grant CHEQS. Work in Padova is supported by the Rita Levi Montalcini Program through the fellowship DI\_L\_LEVI22\_01.
\vspace{1cm}
\chapter{Introduction}
It is a controversial issue whether or not quantum coherence can be
maintained
during the formation and subsequent evaporation of a black hole. At
one end of
the spectrum of opinion is Hawking's suggestion that this process
indicates a
new level of unpredictability introduced into quantum mechanics by
gravity [\hawking]. Another proposal, which is also radical from the
point of view of quantum mechanics, is that information about the
initial quantum state of the
system is carried by a Planck scale stable remnant [\ac,\steve].
Perhaps the most conservative position has been advocated by 't~Hooft, who
argues that this process should be thought of as a conventional
scattering event in which the black hole is an intermediate state
somewhat analogous to a complex intermediate nucleus formed in a
nuclear collision [\thooft]. 't~Hooft\ has taken some tentative steps
toward an $S$-matrix\ description of such events but the precise
meaning of the resulting $S$-matrix\ remains unclear.
We feel that this approach deserves attention and should be explored.
It may indeed provide a resolution of the above paradox, or else one
would like to see this logical possibility ruled out.
In order to avoid some of the formidable technical obstacles posed by
quantum gravity in $3+1$ dimensions one can instead consider black
hole evolution in $1+1$ dimensions. Of course this simplified
setting does not capture all the physics of real black holes but it
does contain an information paradox analogous to the one originally
posed by Hawking. We begin in section~2 by outlining the arguments
leading to 't~Hooft's $S$-matrix\ in $1+1$ dimensions. For this discussion
we use a simple model recently proposed by Callan {\it et al.}
[\cghs] and subsequently discussed in [\bddo -\bilcal]. It turns out
that one obtains some exact expressions where approximations had to
be made in the higher dimensional theory. The physical
interpretation of our $S$-matrix\ is nevertheless every bit as obscure
as 't~Hooft's. The main purpose of this paper is to clarify some of the
issues involved by considering a simpler system, which shares many
features with two-dimensional black holes, but can be solved
explicitly. The system in question is the $1+1$ dimensional
Schwinger model with the unusual feature that the electrodynamic
coupling strength depends on position. It varies from vanishing
coupling at one end of space to infinite coupling at the other. The
two ends correspond to spatial infinity (weak coupling) and the deep
interior of the black hole (strong coupling). This is also the
appropriate coupling dependence to describe $s$-wave fermion
scattering off a $3+1$ dimensional extreme magnetic dilaton black
hole [\alstrom]. A similar model arose in the analysis of monopole
catalysis in [\cgc,\rubakov]. The methods we use in this paper may
find application in that context also.
In section~3 we describe the analogy between black hole physics and
$1+1$ dimensional electrodynamics. The question of the existence of
a unitary $S$-matrix\ is shown to be similar in the two cases. In
section~4 we set up and
solve the classical equations for the formation of an object called a
``charge-hole'' by an incoming electric charge. We then discuss the
electromagnetic analogue of Hawking radiation. In this section no
attempt is
made to include back-reaction on the electromagnetic field of the
charge-hole due to the emitted radiation. Section~5 is devoted to
describing 't~Hooft's method as applied to our model and an expression is
derived for an $S$-matrix. Section~6 uses the method of bosonization
to account for back-reaction and gives an exact expression for the
single-particle elastic $S$-matrix\ between one-fermion states. Then
we construct the generalization to arbitrary states and show that the
exact $S$-matrix\ is a generalization of 't~Hooft 's, with well-defined
procedures for extracting amplitudes in Fock space. Finally, in
section~7, the information problem is briefly discussed for the
electrodynamic and gravitational systems.
\chapter{'t~Hooft 's $S$-matrix\ for $1+1$ dimensional gravity}
In this section we will repeat 't~Hooft's argument for the form of the
quantum
$S$-matrix\ for black hole physics in a simplified $1+1$ dimensional
context. We will make no attempt in this section to clarify or
interpret 't~Hooft's
theory. The reader is advised to skim this section lightly and
return
to it after reading the subsequent material.
Consider the following action for $1+1$ dimensional dilaton gravity
[\cghs]:
$$
I = \int d^2 x \bigl[ e^{-2\phi} \bigl( R + 4 (\nabla \phi)^2 + 4\lambda^2
\bigr)
- {1\over 2} \sum_{i=1}^N \, (\nabla f_i)^2 \bigr] \ .
\eqn\first
$$
This theory has received considerable attention as a toy model for
black hole physics [\cghs -\bilcal] and we will be brief here. We
use the conformal gauge
$$
g_{++} = g_{--} = 0, \quad g_{+-} = - {1\over 2} e^{2 \rho} \ .
\eqn\second
$$
The linear dilaton vacuum is given in light-cone ``Kruskal"
coordinates by
$$
e^{-2 \rho} = e^{-2\phi} = -\lambda^2 x^+ x^- \ ,
\eqn\third
$$
and the classical static black hole solution is
$$
e^{-2 \rho} = e^{-2\phi} = - \lambda^2 x^+ x^- + {M \over \lambda} \ .
\eqn\fourth
$$
Let us consider a geometry describing infalling massless matter, in
the form of a shock wave, with energy-momentum tensor
$$
T_{++}^f = {M \over \lambda x^+_0} \delta ( x^+ - x^+_0 )
\eqn\fifth
$$
where $x^+_0$ is the coordinate of the null trajectory of the shock
and $M$ is the total energy carried by it. The
gravitational and dilaton fields are constructed by patching together
the vacuum solution for $x^+ < x^+_0$ and a black hole solution with
mass $M$ for $x^+ > x^+_0$. In order to keep the dilaton and metric
continuous at $x^+ = x^+_0$, it is necessary to translate the black
hole solution along the $x^-$ axis by $- {M \over \lambda^3 x^+_0}$.
The full solution is
$$
e^{-2 \rho} = e^{-2\phi} = - \lambda^2 x^+ x^- - {M \over \lambda x^+_0}(x^+
- x^+_0) \, \theta(x^+ - x^+_0) \ .
\eqn\sixth
$$
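One can verify directly that \sixth\ is indeed the translated black hole: for $x^+ > x^+_0$ it can be rewritten as
$$
e^{-2 \rho} = e^{-2\phi} = - \lambda^2 x^+ \Bigl( x^- + {M \over \lambda^3 x^+_0} \Bigr) + {M \over \lambda} \ ,
$$
which is precisely the static solution \fourth\ with $x^-$ shifted by $- {M \over \lambda^3 x^+_0}$.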
If further energy $\delta M$, in the form of another incoming shock-wave,
is added to the black hole, the result is simply another shift $-{M
\over \lambda^3 x^+_1}$ in $x^-$ on the null trajectory $x^+ =
x^+_1$. This is illustrated in figure~1. In four dimensions the
corresponding coordinate transformation across the shock front is
more complicated [\thooft]. However, near the horizon and for $\delta M
\ll M$ it can be approximated by a simple shift.\foot{That a uniform
shift is the full answer in two dimensions has also been noted by
E.~Verlinde and H.~Verlinde [\priv].}
For a continuous incoming flux $T_{++}(x^+)$ the solution is
$$\eqalign{
e^{-2 \rho} = e^{-2\phi} =& - \lambda^2 x^+ x^- - \int_0^{x^+} d x^+_0 \,
T_{++}(x^+_0) \, (x^+ - x^+_0) \cr
=& - \lambda^2 x^+ x^- - P_{+}(x^+) \bigl[ x^+ - {1\over P_{+}(x^+)}
\int_0^{x^+} dx^+_0 \, x^+_0 \, T_{++}(x^+_0) \bigr]\> , \cr}
\eqn\seventh
$$
where $P_{+}(x^+) = \int_0^{x^+} dx^+_0 \, T_{++}(x^+_0)$ is the
total incoming
Kruskal momentum conjugate to $x^+$. From this expression it is
clear that the final black hole geometry is indistinguishable at the
classical level from a black hole formed by a single incoming shock
wave carrying energy $\bar M =\lambda \int_0^\infty dx^+_0 \, x^+_0
\,T_{++}(x^+_0)$ in along
$\bar x_0^+ = {\bar M\over \lambda P_{+}(\infty)}$.
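Indeed, inserting these values of $\bar M$ and $\bar x^+_0$ into a single-shock solution of the form \sixth\ gives, for $x^+$ beyond the support of $T_{++}$,
$$
- \lambda^2 x^+ x^- - {\bar M \over \lambda \bar x^+_0} (x^+ - \bar x^+_0)
= - \lambda^2 x^+ x^- - P_{+}(\infty)\, x^+ + \int_0^{\infty} dx^+_0 \, x^+_0 \, T_{++}(x^+_0) \ ,
$$
in agreement with the large-$x^+$ limit of \seventh .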
In [\thooft] 't~Hooft\ argues that such coordinate shifts influence the
quantum vacuum of the matter fields. In particular, infalling matter
will induce a unitary transformation on the outgoing modes,
$$
U = \exp(i \delta x^- P_{-}) \ ,
\eqn\eighth
$$
where $P_{-} = \int_{-\infty}^{0} d x^-_0 \, T_{--}(x^-_0)$ generates
$x^-$ translations of the Kruskal coordinates and $\delta x^- =
P_{+}(\infty)/\lambda^2$ is the coordinate shift calculated above. Written in
a more symmetric form, 't~Hooft's $S$-matrix\ is
$$
S = \exp({i\over \lambda^2} P_+ P_-) \ .
\eqn\ninth
$$
The proper interpretation of this expression is elusive. It should
be pointed
out that \ninth\ cannot be the final answer. Indeed, the final state
obtained in this way does not reflect any properties of the initial
state except the total incoming Kruskal momentum, so this $S$-matrix\
cannot keep track of the full structure of quantum states.
In section~5 a similar line of reasoning will lead to an analogous
expression for an $S$-matrix\ (with the same shortcomings) in our $1+1$
dimensional electrodynamics. In section~6 we go on to derive a fully
unitary $S$-matrix\ and show how the 't~Hooft-like result is the leading
term in a systematic expansion.
\chapter{The electrodynamic analogy}
Consider $1+1$ dimensional quantum electrodynamics coupled to a
background
dilaton field $\phi$. The gauge invariant action is
$$
I = \int d^2 x \>\bigl[ i \overline{\psi} \gamma^\mu ( \partial_\mu + i A_\mu) \psi
-{1\over 4} e^{-2\phi(x)} F_{\mu\nu} F^{\mu\nu} \bigr] \ .
\eqn\tenth
$$
The dilaton field is a static non-dynamical background and its only
role in our model is to define a position-dependent coupling
constant,
$$
g^2 (x) = e^{2 \phi (x)} \ .
\eqn\eleventh
$$
We will choose a particular dilaton background motivated by the
``linear dilaton vacuum" of $1+1$ dimensional gravity,
$$
\phi (x) = - x^1 \ ,
\eqn\twelfth
$$
where $x^1$ is the space-like coordinate in Minkowski space. By
analogy with the black hole case we shall consider the region $x^1
\rightarrow + \infty$ as asymptotic exterior space. In this region
the coupling $g^2 (\phi )$ vanishes exponentially and free fermions can
propagate. The region $x^1 \rightarrow
- \infty$, where the coupling diverges, is analogous to the infinite
throat deep in the interior of certain extreme magnetically charged
black holes [\gidstro]. The question we want to address is whether
or not quantum information is ever lost to an observer at $x^1
\rightarrow + \infty$. More specifically: is the $S$-matrix\ for the
asymptotic states at $x^1 \rightarrow + \infty$ unitary?
Consider the Penrose diagram in figure~2 for flat $1+1$ dimensional
space-time with a linear dilaton background. An incoming particle
originating on ${\cal I}_R^-$ can either propagate to ${\cal I}_R^+$, thereby
escaping the region of strong coupling, or it can continue
propagating toward ${\cal I}_L^+$, in which case it is ``lost'' to the
outside observer. The unitarity of the $S$-matrix\ will therefore in
general require asymptotic states to be defined on both ${\cal I}^{\pm}_L$
and ${\cal I}^{\pm}_R$.
In both linear dilaton electrodynamics and $1+1$ dimensional dilaton
gravity, left- and right-moving modes of matter fields are uncoupled
at the classical level and in perturbation theory. In dilaton
gravity this is apparent in the conformal gauge \second\ where the
matter fields $f_i$ satisfy free wave equations. Incoming
(left-moving) perturbations experience no scattering and the same is
true of right-moving perturbations. In linear dilaton
electrodynamics the analogous gauge choice is light-cone gauge
$A_-=0$ (or $A_+=0)$, where the Dirac equation,
$$
\gamma^\mu \bigl( \partial_\mu + i A_\mu \bigr) \, \psi = 0 \ ,
\eqn\thirteenth
$$
separates into a pair of uncoupled equations,
$$\eqalign{
\partial_- \psi_L =& 0 \ , \cr
\bigl( \partial_+ + i A_+ \bigr) \, \psi_R =& 0 \ . \cr}
\eqn\fourteenth
$$
The left-moving component appears to be completely decoupled (or the
right-moving component in $A_+=0$ gauge). In perturbation theory the
asymptotic final states will have particles on both ${\cal I}_L^+$ and
${\cal I}_R^+$ and it seems that information is inevitably lost to an
observer at $x^1 \rightarrow + \infty$.
In both theories, non-perturbative effects associated with quantum
anomalies invalidate the above reasoning. In dilaton gravity, the
conformal anomaly is responsible for the emission of right-moving
Hawking radiation when a left-moving particle creates a black hole
[\chrful,\cghs]. In linear dilaton electrodynamics the axial anomaly
causes a very similar phenomenon, in which an outgoing current
discharges the field caused by an incoming charged particle, and in
this case one can show that the outgoing radiation carries all the
initial quantum information.
\chapter{Charge hole physics}
\section{Classical solution}
Let us begin with classical $1+1$ dimensional electromagnetism.
Maxwell's equations take the form
$$
\partial_\mu \Bigl( {F^{\mu\nu}\over g^2(x)} \Bigr) = j^\nu \ .
\eqn\fifteenth
$$
The source-free equations are
$$
\partial_\mu \Bigl( {F^{\mu\nu}\over g^2(x)} \Bigr) = 0 \ .
\eqn\sixteenth
$$
In two space-time dimensions the field strength tensor only has one
independent component,\foot{Our conventions are $\epsilon^{01}=+1$ and
metric signature $(-,+)$.}
$$
F^{\mu\nu} = F \epsilon^{\mu\nu} \ ,
\eqn\seventeenth
$$
and we see from \sixteenth\ that ${F\over g^2}$ is constant. Thus
the general source-free solution is described in terms of one free
parameter $q$,
$$\eqalign{
F^{\mu\nu} =& q\, g^2(x)\,\epsilon^{\mu\nu} \cr
=& q\, e^{-2 x^1}\, \epsilon^{\mu\nu} \cr
=& q\, e^{(x^- -x^+)}\, \epsilon^{\mu\nu} \ , \cr}
\eqn\eighteenth
$$
where we have introduced the light-cone coordinates $x^{\pm} = x^0
\pm x^1$.
We will refer to the classical object described by \eighteenth\ as a
``charge-hole''. It corresponds to a static black hole in dilaton
gravity. The parameter $q$ which replaces the mass of a black hole
is of course the charge carried by the charge hole. The analog of
the gravitational collapse solution \sixth\ is a charge hole formed
by an incoming charged particle. Let the trajectory be $x^+ =
x^+_0$, where $x^+_0$ is a constant. The resulting field is given by
$$
F^{\mu\nu} = q \, \theta (x^+ - x^+_0) \, e^{x^- -x^+} \, \epsilon^{\mu\nu} \ .
\eqn\twentyfirst
$$
From Maxwell's equations \fifteenth\ we see that the field in
\twentyfirst\ corresponds to a current
$$
j_+ = q \, \delta \bigl( x^+ - x_0 ^+ \bigr) \ .
\eqn\twentysecond
$$
The charge hole vector potential is easily computed in the light-cone
gauge $A_- = 0$. It is given by
$$
A_+(x) = -{q\over 2}
\bigl[\theta \bigl(x^+ -x^+_0 \bigr) e^{(x^- -x^+)} +\alpha(x^+)\bigr] \ ,
\eqn\twentythird
$$
where $\alpha(x^+)$ is arbitrary.
\section{Analogue of Hawking Radiation}
It has been remarked that Hawking radiation can be viewed as pair
production near the event horizon with one particle escaping to
infinity and its partner
falling into the black hole. This phenomenon also occurs in the
field of a charge hole, where one member of the pair is attracted and
the other is repelled. The radiation is in the form of charged
particles and, much as in the black hole case, it persists
indefinitely unless back-reaction on the charge-hole is accounted
for. Apparently, an outside observer only detects the outgoing
particles and must use a density matrix description of the
evaporation process.
The Hawking effect appears in the quantum theory of matter in the
curved, but classical, geometry of a black hole. Let us therefore
consider the behavior of the quantized fermion field in the
background of a charge-hole. The gauge field has an effect on the
fermion system through the axial anomaly. The
most efficient way to account for the anomaly is to bosonize the
fermion field.
We therefore begin by reviewing the standard bosonization rules.
One makes the following identifications between fermion variables and
composite operators of a real boson field $Z$:
$$\eqalign{
\overline{\psi} \gamma^\mu \psi = j^\mu &\leftrightarrow {1\over{\sqrt{\pi}}} \epsilon^{\mu\nu}\partial_\nu
Z\ , \cr \psi_L &\leftrightarrow :\exp(i\sqrt{4\pi} Z_L ): \ , \cr
\psi_R &\leftrightarrow :\exp(i\sqrt{4\pi} Z_R ): \ , \cr}
\eqn\twentyfive
$$
where we have divided $Z$ into left- and right-moving parts,
$$
Z_{L,R} = {1\over 2} \bigl[Z \mp \int_{x^1}^{\infty} dx^1 \, (\partial_0
Z)\bigr]\ .
\eqn\twentysix
$$
Written in terms of the bosonic field the action \tenth\ becomes
$$
I = \int d^2 x \bigl[-{1\over 2} \partial^\mu Z \partial_\mu Z - {1\over \sqrt{4\pi}}
\epsilon^{\mu\nu} F_{\mu\nu} Z - {1\over 4g^2(x)} F^{\mu\nu} F_{\mu\nu}\bigr] \ .
\eqn\twentyeight
$$
The equation of motion for $Z$ is
$$
\nabla^2 Z = {1\over \sqrt{4\pi}} \, \epsilon^{\mu\nu} F_{\mu\nu} \ ,
\eqn\thirty
$$
which in the background of \twentyfirst\ becomes
$$
\partial_+ \partial_- Z ={q\over 2\sqrt{4\pi}}\, \theta (x^+ -x^+_0)\, e^{x^- -
x^+} \ .
\eqn\thirtyone
$$
The solution with appropriate boundary conditions corresponding to no
incoming radiation is
$$
Z = - {q\over 2\sqrt{4\pi}} \bigl[ e^{(x^- - x^+)} - e^{(x^- -
x^+_0)} \bigr] \, \theta \bigl( x^+ - x^+_0 \bigr) \ .
\eqn\thirtytwo
$$
To examine the outgoing radiation we go to the limit $x^+ \rightarrow
+ \infty$
$$
Z \rightarrow {q\over 2\sqrt{4\pi}}\, e^{(x^- - x^+_0)} \ .
\eqn\thirtythree
$$
Using \twentyfive\ we see that an outgoing flux of charge is produced
$$
j_- = {q\over 4\pi} e^{x^- } e^{-x^+_0} \ .
\eqn\thirtyfour
$$
This flux is the analogue of the outgoing Hawking radiation which is
produced by a gravitational collapse. According to \thirtyfour\ the
radiation persists forever, eventually radiating an infinite charge,
just as the black hole radiates an infinite mass unless back-reaction
is accounted for.
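As a check on the analogy, the solution \thirtytwo\ and the asymptotic flux \thirtyfour\ can be verified symbolically. The sketch below is illustrative only: it assumes sympy is available, works in the region $x^+ > x^+_0$ where the step function equals one, and uses the light-cone component $j_- = \partial_- Z/\sqrt{\pi}$ implied by the bosonization rule \twentyfive .

```python
import sympy as sp

x_p, x_m, x_0 = sp.symbols('x_p x_m x_0', real=True)
q = sp.symbols('q', positive=True)

# solution \thirtytwo\ in the region x^+ > x^+_0, where theta = 1
Z = -q / (2 * sp.sqrt(4 * sp.pi)) * (sp.exp(x_m - x_p) - sp.exp(x_m - x_0))

# residual of the field equation \thirtyone\ vanishes identically
residual = sp.diff(Z, x_p, x_m) - q / (2 * sp.sqrt(4 * sp.pi)) * sp.exp(x_m - x_p)
assert sp.simplify(residual) == 0

# the outgoing current at x^+ -> infinity reproduces \thirtyfour
j_minus = sp.limit(sp.diff(Z, x_m), x_p, sp.oo) / sp.sqrt(sp.pi)
assert sp.simplify(j_minus - q / (4 * sp.pi) * sp.exp(x_m - x_0)) == 0
```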
\chapter{'t~Hooft-type $S$-matrix\ for linear dilaton electrodynamics}
In this section we will derive an approximate expression for the
$S$-matrix. The arguments parallel 't~Hooft's construction for black
hole physics in section~2.
Let us consider the theory in the gauge $A_- = 0$. The vector
potential describing the field of an infalling charge is given by
\twentythird . The
right-moving field
$\psi_R$ satisfies
$$
\bigl( \partial_+ + i A_+ \bigr) \, \psi_R = 0 \ ,
\eqn\thirtyfive
$$
with the general solution
$$
\psi_R = \exp [ i S(x) ] \, \chi_R \ ,
\eqn\thirtysix
$$
where $S(x) = \int_{-\infty}^{x^+} dx^+ \, A_+ $
and $\chi_R$ is a free field. Thus the effect of the gauge field is to
multiply the outgoing fermion field by a position-dependent phase
factor $e^{i S(x)}$. Inserting the charge hole vector potential
\twentythird\ gives
$$
S(x) = {q\over 2} \theta (x^+ - x^+_0) \, \bigl[ e^{(x^- - x^+)} -
e^{(x^- - x^+_0)} \bigr] - {q\over 2} \int dx^+ \, \alpha(x^+) \ .
\eqn\thirtyeight
$$
The second term is an arbitrary constant $c$ times $-q/2$. To compute
the $S$-matrix\ we consider the limit $x^+ \rightarrow + \infty$ ,
where
$$
S(x) \rightarrow - {q\over 2} \bigl[ e^{(x^- - x^+_0)} + c \bigr] \ .
\eqn\thirtynine
$$
Thus the effect of the charge hole gauge field on the fermion system
is a canonical transformation which multiplies $\psi_R$ by a phase
$$
\psi_R (x^-) \rightarrow \exp[-i {q\over 2} ( e^{x^- - x^+_0} + c ) ]
\, \psi_R\ .
\eqn\forty
$$
The transformation \forty\ is equivalent to the action of the unitary
operator
$$\eqalign{
U =& \exp \bigl[i \int dx^- \, S(x^-) \, j_R (x^-) \bigr] \cr
=& \exp \bigl[ {-i \int dx^- \, {q\over 2} \bigl( e^{(x^- - x^+_0)}
+ c \bigr) \, j_R (x^-)}\bigr] \ , \cr}
\eqn\fortyone
$$
where
$$
j_R = \psi_R^{\dagger} \psi_R \ .
\eqn\fortytwo
$$
Let us next suppose that instead of a single delta-function the
incoming charge
is described by a continuous classical flux $j_L(x^+)$. The
resulting unitary
operator is easily computed to be
$$
U = \exp \bigl[ -{i\over 2} \int d x^+_0 d x^- \, j_L (x^+_0) \,
\bigl( e^{x^- - x^+_0} + c \bigr)
\, j_R ( x^- )
\bigr] \ .
\eqn\fortythree
$$
At this point $j_L (x^+) $ is the classical incoming current and
$j_R(x^-)$ is the quantum operator $\psi_R^{\dagger} \psi_R$. The
symmetry of the expression, however, suggests that $j_L$ and $j_R$
can be treated on an equal footing as operators in the incoming and
outgoing Fock spaces.
The $S$-matrix\ \fortythree\ is quite similar to 't~Hooft's gravitational
$S$-matrix\ \ninth . In particular, it cannot be a fully correct
description of the scattering any more than \ninth\ is. To see this,
consider an incoming current $j_L(x^+)$. According to \fortythree\
the resulting final state is given by
$$
U \ket{0} = \exp \bigl[-{i\over 2} \int d x^- \, \bigl( A e^{x^-} +
B c \bigr) \, j_R
(x^-)
\bigr] \ket{0} \ ,
\eqn\fortyfour
$$
where $A$ and $B$ are two moments of $j_L(x^+)$
$$\eqalign{
A =& \int dx^+ \, j_L(x^+) e^{-x^+} \cr
B =& \int dx^+ \, j_L(x^+) \ . \cr}
\eqn\fortyfive
$$
Evidently, the final state depends on only two parameters describing
the incident particles. There is clearly no way that such a final
state can keep track of the full complexity of the incident state and
thus \fortythree\ cannot define a unitary $S$-matrix\ in the Fock
spaces of in and out particles.
\chapter{Exact $S$-matrix\ for linear dilaton electrodynamics}
\section{One-particle $S$-matrix }
Using the bosonization rules of section~4, the action for linear
dilaton electrodynamics can be written
$$
I = \int d^2 x \bigl[ -{1\over 2} \partial_\mu Z \partial^\mu Z - {1\over
\sqrt{4\pi}}\, Z\, \epsilon^{\mu\nu}F_{\mu\nu} - {1\over 4g^2(x)} F^{\mu\nu}
F_{\mu\nu} \bigr] \ .
\eqn\fortysix
$$
The vector potential can be integrated out to give the following
effective action for the boson field $Z$:
$$
I = \int d^2 x \bigl[ - {1\over 2} \partial_\mu Z \partial^\mu Z - {g^2 (x) \over 2
\pi} Z^2
\bigr] \ .
\eqn\fortyseven
$$
This procedure is analogous to the one used in [\lslt] to render the
conformal anomaly term in dilaton gravity local.
The $Z$ field now has a mass which increases indefinitely in the
negative $x^1$-direction. Thus it is evident that any finite-energy
configuration must be totally reflected. An observer at $x^1
\rightarrow + \infty$ will recover all information. This fact is not
at all apparent in the original fermionic formulation. Nevertheless,
one can construct a unitary $S$-matrix\ for fermions. We will first
illustrate this by computing the amplitude for a single fermion to be
elastically reflected.
An initial state of definite energy is described on ${\cal I}_R^-$ by
$$
\ket{in} = \int dx^+ \, e^{-i p_+ x^+} \psi_L(x^+) \, \ket{0} \> ,
\eqn\fortyeight
$$
where $\ket{0}$ is the in-vacuum. Using the bosonization
prescription
\twentyfive\ this can be written as
$$
\ket{in} = \int dx^+_0 \, e^{-i p_+ x^+_0} : e^{ i \sqrt{4\pi}
Z_L(x^+_0) } : \ket{0} \> .
\eqn\fortynine
$$
From the boson point of view this is a linear superposition of
coherent states
$$: e^{i \sqrt{4\pi} Z_L(x^+_0) } : \ket{0} \> .
\eqn\fifty
$$
Each such coherent state is identified with a classical configuration
$Z_C(x)$
and evolves in time into another coherent state according to the
classical
equations of motion. The initial configuration corresponding to
\fifty\ is a
left-moving step function
$$
Z_C = \sqrt{\pi} \theta ( x^+ - x^+_0 ) \ .
\eqn\fiftyone
$$
Note that the charge carried by a configuration is given by
$$
Q = \int_{-\infty}^{+\infty} dx \, j^0
= \int_{-\infty}^{+\infty} dx \, {1\over{\sqrt{\pi}}} {\partial Z\over \partial x}
= {1\over{\sqrt{\pi}}} \bigl[Z_C(+\infty) - Z_C(-\infty)\bigr] \ .
\eqn\fiftytwo
$$
Thus the net incoming charge is proportional to the height of the
step function.
The incoming state has the form \fiftyone\ on ${\cal I}_R^-$, i.e.\ at $x^-
\rightarrow -\infty$. To find the subsequent evolution we need to
solve the classical equations for $Z_C$,
$$
\partial_+ \partial_- Z_C = - {1\over 4\pi} g^2(x) Z_C \> = - {1\over 4\pi}
e^{(x^- - x^+)} Z_C \ ,
\eqn\fiftythree
$$
subject to the boundary conditions \fiftyone\ at ${\cal I}_R^-$. Note
that we do not need to impose boundary conditions on ${\cal I}_L^-$
because the mass term in \fiftythree\ diverges there, forcing $Z$ to
vanish. The appropriate solution can, for example, be found by using
a coordinate system which turns \fiftythree\ into a Klein-Gordon
equation with a uniform tachyonic mass~[\alstrom]. It is given by
$$
Z_C =\sqrt{\pi} \, \theta (x^+ - x^+_0)\, J_0 \bigl[ {1\over{\sqrt{\pi}}} e^{{1\over 2} x^-} \sqrt{
e^{-x^+_0} - e^{-x^+}} \bigr] \ .
\eqn\fiftyfour
$$
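That \fiftyfour\ indeed solves \fiftythree\ can be confirmed numerically. The sketch below (assuming sympy and its bundled mpmath backend are available) evaluates the residual of the field equation at sample points in the region $x^+ > x^+_0$, where the step function equals one:

```python
import sympy as sp

x_p, x_m, x_0 = sp.symbols('x_p x_m x_0', real=True)

# Bessel-function solution \fiftyfour\ for x^+ > x^+_0
u = sp.exp(x_m / 2) * sp.sqrt(sp.exp(-x_0) - sp.exp(-x_p)) / sp.sqrt(sp.pi)
Z_C = sp.sqrt(sp.pi) * sp.besselj(0, u)

# residual of the classical field equation \fiftythree
residual = sp.diff(Z_C, x_p, x_m) + sp.exp(x_m - x_p) / (4 * sp.pi) * Z_C

# numerical spot checks at points with x^+ > x^+_0 (square root real)
f = sp.lambdify((x_p, x_m, x_0), residual, 'mpmath')
assert all(abs(f(*pt)) < 1e-12 for pt in [(1.3, -0.7, 0.2), (2.0, 0.5, -1.0)])
```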
It is instructive to examine \fiftyfour\ on a series of time slices,
showing how the field evolves. This is illustrated in figure~3. We
see that the point charge continues to penetrate toward $x^1
\rightarrow - \infty$ but becomes more and more tightly screened as
time evolves. Asymptotically it becomes totally screened. A
reflected charge of equal magnitude moves off to the right towards
the asymptotic weak coupling region and it is followed by a series of
pairs with ever higher frequency but lower charge. The degenerate
left-moving ``blip'' is an artefact of having arbitrarily high-energy
components in the localized state $\psi(x_0) \ket{0}$. The actual
initial state \fortyeight\ is a superposition of such
localized states and has finite energy.
The asymptotic out-state on ${\cal I}_R^+$ is obtained by taking the limit
$x^+ \rightarrow + \infty$ in \fiftyfour\
$$
Z_C \rightarrow \sqrt{\pi} J_0 \bigl[ {1\over{\sqrt{\pi}}} e^{{1\over 2} (x^- -x^+_0)}\bigr]\ .
\eqn\fiftyfive
$$
The corresponding coherent quantum state is given by
$$
: \exp \bigl[ i\int dx^- \, 2\partial_- Z_C(x^-, x^+_0) \, Z_R(x^-)
\bigr] : \ket{0} \> .
\eqn\fiftysix
$$
Thus the final state is
$$
\int dx^+_0 \, e^{-i p_+ x^+_0} : \exp \bigl[ i\int dx^- \, 2\partial_-
Z_C(x^-, x^+_0) \, Z_R(x^-) \bigr] : \ket{0} \> .
\eqn\fiftyseven
$$
The elastic scattering amplitude is the overlap of this state with an
outgoing fermion,
$$
\int dx_0^- e^{ i q_- x^-_0} \bra{0} : \exp[-i \sqrt{4\pi}
Z_R(x_0^-)] : \ .
\eqn\fiftyeight
$$
A standard coherent state calculation yields an amplitude,
$$\eqalign{
A(q_-, p_+) =& \int dx_0^+ dx_0^- e^{i(q_- x^-_0 - p_+ x^+_0)} \,
\exp \bigl(\int {dv \over v + i\epsilon} J_0 \bigl[ {1\over{\sqrt{\pi}}} e^{{1\over 2} (v
+ x^-_0 -x^+_0)} \bigr] \, \bigr) \cr
=& 2\pi\, \delta (p_+{-}q_-) \int dx e^{-i p_+ x} \exp \bigl(\int {dv
\over v + i\epsilon} J_0 \bigl[ {1\over{\sqrt{\pi}}} e^{{1\over 2} (v - x)} \bigr] \,
\bigr) \ . \cr}
\eqn\fiftynine
$$
The $i\epsilon$ prescription takes care of the ultra-violet
divergences but, as it stands, this expression is still infra-red
divergent. This is because we have used a simple logarithm for the
boson propagator in the coherent state calculation, whereas a more
careful evaluation, using a regularized propagator, would give a
finite result. An alternative, if somewhat crude, subtraction
procedure is simply to subtract from the Bessel function in the $v$
integral in \fiftynine\ a step function $\theta (v_0 - v)$, which
cancels the $v\rightarrow -\infty$ infrared divergence. The
dependence on the subtraction point, $v_0$, can be absorbed into the
overall normalization of the amplitude, which we have not kept track
of here. If desired, the normalization can be determined by the
physical requirement that the probability for elastic reflection of a
fermion approaches unity as the energy tends to zero.
\section{The full $S$-matrix}
Now we want to construct the full operator $S$-matrix\ for the
scattering of arbitrary fermion states. The best way to achieve this
is to first obtain the exact $S$-matrix\ for bosons and then appeal to
the equivalence between the Hilbert spaces of the bosons and fermions
to read off the fermion $S$-matrix .
The boson amplitudes are easy to obtain because \fortyseven\ defines
a free field theory. Let us start with the LSZ-reduced expression
for a one-particle $S$-matrix\ element for bosons, which is obtained by
sandwiching the operator
$$
S_{1\rightarrow 1} = i \int d^2x_1 \,d^2x_2 \, Z_R(x^-_1) \,
\overrightarrow \nabla^2_1 G(x_1,x_2) \overleftarrow \nabla^2_2 \,
Z_L(x^+_2)
\eqn\sixty
$$
between asymptotic single boson Fock states. By using the coordinate
system in which the equation of motion for $Z$ becomes a tachyonic
Klein-Gordon equation, and demanding that the propagator vanishes in
the strong coupling region, one is led to
$$
G(x_1,x_2) = \sqrt{\pi} J_0\bigl[{1\over{\sqrt{\pi}}} \sqrt{\vert e^{-x^+_1}-
e^{-x^+_2}\vert \> \vert e^{x^-_1}-e^{x^-_2}\vert}\bigr]\ .
\eqn\sixtyone
$$
After inserting this propagator into \sixty\ and performing the
integrations, we find
$$
S_{1\rightarrow 1} = i \int dx^-_1 dx^+_2 \, \partial_- Z_R(x^-_1) \,
J_0 \bigl[{1\over{\sqrt{\pi}}} e^{{1\over 2}(x^-_1 - x^+_2)} \bigr] \, \partial_+
Z_L(x^+_2) \ .
\eqn\sixtytwo
$$
For a free field theory the full $S$-matrix\ is obtained by
exponentiating the single particle expression
$$
S = \exp \Bigl[i \int dx^-_1 dx^+_2 \, \partial_- Z_R(x^-_1)
\, J_0 \bigl[{1\over{\sqrt{\pi}}} e^{{1\over 2}(x^-_1 - x^+_2)} \bigr] \,
\partial_+ Z_L(x^+_2)\Bigr] \ .
\eqn\sixtythree
$$
Exactly the same operator expression can now be used to compute
$S$-matrix\
elements in the fermion basis. For example, the single-particle
matrix element
\fiftynine\ is given by
$$
\int dx^+ dx^- \, e^{i(q_- x^- - p_+ x^+)}\, \bra{0} : e^{-i
\sqrt{4\pi} Z_R(x^-)} : S :e^{+i \sqrt{4\pi} Z_L(x^+)} : \ket{0} \ .
\eqn\sixtyfour
$$
The general expression \sixtythree\ can be written directly in
fermion language by using the fermion-boson correspondence,
$$
{1\over{\sqrt{\pi}}} \epsilon^{\mu\nu} \partial_\nu Z = j^\mu \ ,
\eqn\sixtyfive
$$
giving
$$
S = \exp \bigl(i\pi \int dx^+ \, d x^- \, j_R(x^-) \,
J_0 \bigl[ {1\over{\sqrt{\pi}}} e^{{1\over 2}
(x^- -x^+)} \, \bigr] j_L(x^+) \, \bigr) \ .
\eqn\sixtysix
$$
Evidently the exact $S$-matrix\ is of the form advocated by 't~Hooft\ but
with a more
complicated kernel than \fortyfour . In fact, the correspondence can
be seen directly by expanding the Bessel function in a power series
in $e^{x^- - x^+}$. The first two terms of the expansion pick up the
moments in \fortyfour \foot{In fact there is a factor of two
discrepancy between the coefficients in \fortyfour\ and \sixtysix .
This factor can be traced to the asymmetric treatment of incoming and
outgoing currents in section~5 and does not appear in a more
symmetric calculation.}. The full series expansion involves all the
moments, making it possible for unitarity to be restored.
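The expansion is easy to make explicit. Writing $t \equiv e^{x^- - x^+}$, the kernel of \sixtysix\ is $\pi J_0(\sqrt{t/\pi}) = \pi - t/4 + t^2/(64\pi) - \dots$, whose first-order term has the exponential form of the kernel in \fortythree\ (up to the factor of two noted in the footnote). A short symbolic check, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t', positive=True)   # t stands for exp(x^- - x^+)
z = sp.sqrt(t / sp.pi)               # Bessel argument in \sixtysix

# first three terms of J_0(z) = sum_k (-1)^k (z/2)^(2k) / (k!)^2
j0_trunc = sum((-1)**k * (z / 2)**(2 * k) / sp.factorial(k)**2 for k in range(3))
kernel = sp.pi * j0_trunc

expected = sp.pi - t / 4 + t**2 / (64 * sp.pi)
assert sp.expand(kernel - expected) == 0
```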
The meaning of the higher terms in the series expansion can be given
a graphical interpretation. Each successive power of $e^{x^- - x^+}$
corresponds to a closed loop of fermions in the gauge field
propagator, which enters into the calculation of the phase shift of
the outgoing fermions.
\chapter{Information retrieval}
Having established the existence of a unitary $S$-matrix\ for linear
dilaton electrodynamics, it is interesting to ask how the information
in a complex initial state is radiated back.
For example, suppose an initial state of given total charge $Q$ is
described by a coherent state with some modulations on the $Z$-field.
Now consider boosting the configuration to higher energy so that the
information carrying modulations are squeezed into a smaller volume.
At extremely high energy it will become indistinguishable from a step
function whatever its initial profile. However, boosting a
configuration cannot change its information content. How, then, does
the final state remember the incident structure?
The answer is in the very high-frequency exponentially attenuating
tail in figure~3. In the limit of infinite boost, the tail extends
to $x^1 \rightarrow - \infty$, and because of the increasing
frequency in this region it carries infinite energy. In a finite
energy configuration, the tail is bounded. The details of the
initial configuration are coded in the details of the high-frequency
low-amplitude tail. In other words, an energetic collection of low
charge fermion pairs trails the main bulk of the outgoing charged
radiation, and information about all the details of the boosted
initial state is coded into modulations on that tail.
We do not know to what extent the mechanism for information retrieval
carries over to two-dimensional gravity, let alone the real world.
Obviously we cannot expect the information in a black hole to be
radiated in a late tail of high energy quanta since most of the
energy of the black hole will already have been radiated. Note,
however, that in the analogy between two-dimensional gravity and
linear dilaton electrodynamics, gravitational energy is replaced by
electric charge. The information-carrying tail in linear dilaton
electrodynamics carries very little charge, which should perhaps be
interpreted in gravity as information escaping from a black hole
remnant in a long tail of very soft radiation, containing a large
number of quanta. Since the coding of the information into long
wavelength quanta would have to be a very slow process [\hawking,\ac],
such a proposal would probably suffer from the drawbacks of stable
remnant theories. We hope to return to these points.
Another point worth noting is that the unitarity of the $S$-matrix\
depends on the field content of the theory. For example, if two
species of fermions were coupled to the electromagnetic field the
difference of their charge densities would not be expelled from the
strongly coupled region. In this case one linear combination of the
bosonizing fields would carry information to $x^1\rightarrow -\infty$
where it would be lost to an outside observer. Perhaps information
can only be conserved in some theories.
\vskip 2cm
\noindent
{\undertext{Acknowledgements:}} The authors would like to thank
S.~Giddings, J.~Russo and A.~Strominger for useful discussions.
\FIG\figone{The effect of an infalling shock wave on a black hole
geometry. The event horizon shifts outward.}
\FIG\figtwo{``Penrose diagram'' for a charge-hole.}
\FIG\figthree{Evolution in time of the bosonizing field, for an
incoming fermion.}
\refout
\figout
\end
\section{Introduction}
\label{sec:intro}
Optical nanofibers (ONF) have emerged as a noninvasive probe for spectroscopy, sensing, and cold atom physics~\cite{Brambilla2010,Zhang2010,Garcia-Fernandez2011,Morrissey2013,Sague2007}.
In the case of cold atomic gases, the sub-$\mu$m diameter of the ONF allows for insertion into the atomic cloud with minimal disturbance to the trapping beams.
Moreover, the small mode area of the evanescent field around the ONF waist leads to strong coupling between the ONF guided mode and atoms near its surface~\cite{LeKien2006}.
Optical nanofibers have been used to study atomic spectra near surfaces~\cite{Sague2007,Russell2009,Nayak2012a} and magneto-optical trap (MOT) size, lifetime~\cite{Morrissey2009}, and temperature (by completely different methods than reported here)~\cite{Russell2012,Russell2013}.
Having a good temperature diagnostic is important, for example, in optimizing nanofiber trapping from a cold thermal cloud~\cite{LeKien2004,Vetsch2010,Goban2012,Beguin2014a,Lee2014}.
Standard techniques for measuring MOT temperature, such as time-of-flight (TOF) absorption imaging or dithering the magnetic field gradient~\cite{Russell2012}, are effective, but constrained experimental environments can prevent their use.
Hybrid quantum systems composed of superconductors and neutral atoms require cryogenic environments that are incompatible with many of the diagnostic tools used in laser cooling and trapping.
To successfully interface cold atoms and superconducting qubits~\cite{Verdu2009,Hoffman2011,Xiang2013,Patton2013,Bernon2013,Jessen2014,Weiss2015} it is necessary to develop tests that do not perturb the cryogenic environment.
Superconductors are perilously sensitive to optical power and DC and AC magnetic fields.
The standard temperature measurements of the atomic cloud mentioned above are inaccessible in these setups due to a lack of optical access or sensitivity to changing magnetic fields.
Here we present, as part of our program to magnetically couple atoms trapped around an optical nanofiber to a superconducting resonator~\cite{Hoffman2011}, a new way to monitor the temperature of a cloud of cold atoms near a nanofiber, using the correlations of fluorescent photons emitted into the nanofiber guided mode.
When the emitters are not stationary, the intensity-intensity correlation function depends on their motion as well as the geometry of the mode into which they emit~\cite{Carmichael1978,Kimble1978}.
The intensity-intensity correlation function, $g^{(2)}(\tau)$, measures correlations in the fluctuations of light intensity, e.g. the photon statistics~\cite{Loudon2000}, and can reveal both classical and quantum aspects of the light and its sources.
Among the classical effects is the transit time of atoms through an optical mode.
Systems such as atomic beams~\cite{Hennrich2005,Norris2009}, single atoms in a MOT~\cite{Gomer1998,Gomer1998a}, and a single trapped ion~\cite{Rotter2008} were used to measure these transit-time effects.
While bunched and antibunched photon statistics have been observed in the light emitted into the ONF guided mode~\cite{LeKien2008,Nayak2008,Nayak2009,Das2010}, the correlations related to atomic trajectories near the ONF have been studied only tangentially~\cite{Nayak2008}.
Intensity correlations decay with a characteristic time that depends on atomic transit.
We measure this time for different atomic temperatures.
Its dependence on temperature allows for a simple model to infer the MOT temperature directly from the correlations.
This paper is organized as follows.
Section~\ref{sec:system} outlines the nanofiber mode structure, potential shifts, and the coupling efficiency of fluorescence photons into the nanofiber.
Section~\ref{sec:theory} provides a general overview of intensity-intensity correlations and briefly discusses the theoretical considerations for calculating and simulating them.
Finally Section~\ref{sec:expt} presents the experimental results and compares them to simulations.
\section{The system}
\label{sec:system}
The experiment relies on two main parts: a source of cold $^{87}$Rb atoms and an ONF.
A MOT provides a constant source of slowly moving atoms whose fluorescent light can couple evanescently into the guided mode of the optical nanofiber.
The nanofiber collects the light from the atoms and also modifies the local potential for the atoms, which move with typical velocities on the order of 10 $\mathrm{cm}\cdot\mathrm{s}^{-1}$.
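These two numbers already set the time scale of the effect exploited below: an atom moving at $\sim$10 $\mathrm{cm}\cdot\mathrm{s}^{-1}$ crosses the evanescent decay length $q^{-1} \approx 188$ nm quoted later in roughly 2 $\mu$s. A one-line order-of-magnitude estimate:

```python
# order-of-magnitude transit time across the evanescent field:
# decay length q^{-1} ~ 188 nm, typical MOT atom speed ~ 10 cm/s
decay_length = 188e-9   # m
speed = 0.10            # m/s
transit_time = decay_length / speed
assert 1e-6 < transit_time < 3e-6   # roughly 2 microseconds
```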
\subsection{Nanofiber mode structure}
\label{subsec:mode}
Our single-mode nanofiber is a fiber pulled to a small enough diameter that all higher-order modes are cut off.
The mode (HE$_{11}$) of such an ONF has an intensity profile outside of the fiber given by~\cite{LeKien2004}
\begin{equation}
\lvert \mathbf{E}(r) \lvert^2 = \mathcal{E}^2\left[ K^2_0(qr) + u K^2_1(qr) + w K^2_2(qr)\right]\,,
\label{eq:intensity}
\end{equation}
where $\mathcal{E}^2$ is proportional to the intensity at the fiber surface; $K_i$ is the modified Bessel function of the second kind of order $i$; $u$ and $w$ are constants obtained from Maxwell's equations; $r$ is the distance from the center of the fiber; and $q = \sqrt{\beta^2-k^2}$ is the transverse component of the wavevector, where $\beta$ is the field propagation constant in the nanofiber, and $k = 2\pi/\lambda$ is the free-space wavevector.
The parameter $q$ describes the decay of the field in the radial direction.
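Eq.~\ref{eq:intensity} can be evaluated with nothing more than the standard integral representation of the modified Bessel functions. In the sketch below the weights $u$ and $w$, the overall normalization, and the fiber radius are placeholder values rather than those of a real fiber; only the decay parameter $q^{-1} \approx 188$ nm is taken from the text:

```python
import math

def bessel_k(nu, x, tmax=20.0, n=4000):
    """Modified Bessel function of the second kind, K_nu(x), computed from
    the integral of exp(-x cosh t) cosh(nu t) over t from 0 to infinity."""
    h = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid-rule weights
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def intensity(r, q, u=1.0, w=1.0, E2=1.0):
    """Evanescent intensity profile above; u, w, E2 are placeholders."""
    return E2 * (bessel_k(0, q * r)**2 + u * bessel_k(1, q * r)**2
                 + w * bessel_k(2, q * r)**2)

q = 1.0 / 0.188          # um^-1, from q^{-1} = 188 nm
r_fiber = 0.25           # um, assumed fiber radius for illustration
samples = [intensity(r_fiber + 0.05 * k, q) for k in range(5)]
assert all(a > b for a, b in zip(samples, samples[1:]))   # monotone decay
```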
\subsection{Atom-surface potential}
\label{subsec:potentials}
We approximate the nanofiber as an infinite dielectric plane when calculating the van der Waals potential~\cite{Alton2010,Stern2011,Frawley2012}, so that $U_{\mathrm{vdW}}(r) = C_3 (r-r_0)^{-3}$ for $r>r_0$, where $r_0$ is the fiber radius.
The $C_3$ coefficient is equal to $4.94\times10^{-49}\,\mathrm{J}\cdot\mathrm{m}^{-3}$ and $7.05\times10^{-49}\,\mathrm{J}\cdot\mathrm{m}^{-3}$ for the $5 S_{1/2}$ and $5 P_{3/2}$ levels, respectively, of $^{87}$Rb near fused silica~\cite{LeKien2004,Lin2004,Sague2008a,Markle2014}.
The infinite-plane approximation is accurate to within 20\% for atom-fiber distances less than 200 nm~\cite{LeKien2004}, a distance comparable to the decay length of the evanescent field ($q^{-1} \approx 188$ nm, see Sec.~\ref{subsec:data}).
To include the effect of retardation, which causes the atom--surface interaction to scale as $(r-r_0)^{-4}$ for $r\gg r_0$, we use a phenomenological model for the potential that smoothly connects the non-retarded (van der Waals) and retarded (Casimir-Polder) regimes~\cite{LeKien2008b,Russell2009}:
\begin{equation}
U(r) = - \frac{C_4}{(r-r_0)^3 \left((r-r_0)+C_4/C_3 \right) }\,,
\label{eq:vdWCP}
\end{equation}
where $C_4$ is the Casimir-Polder coefficient.
This $C_4$ coefficient is equal to $4.47\times10^{-56}\,\mathrm{J}\cdot\mathrm{m}^{-4}$ and $12.2\times10^{-56}\,\mathrm{J}\cdot\mathrm{m}^{-4}$ for the $5 S_{1/2}$ and $5 P_{3/2}$ levels, respectively, of $^{87}$Rb near fused silica~\cite{Spruch1993,Sague2008a}.
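Eq.~\ref{eq:vdWCP} interpolates between the two regimes with crossover length $C_4/C_3 \approx 90$ nm. A minimal numerical sketch using the ground-state coefficients quoted above:

```python
# 5S_{1/2} coefficients for 87Rb near fused silica (values from the text)
C3 = 4.94e-49   # J m^3
C4 = 4.47e-56   # J m^4

def U(d):
    """Phenomenological surface potential vs. separation d = r - r0 (m)."""
    return -C4 / (d**3 * (d + C4 / C3))

crossover = C4 / C3                 # ~9.0e-8 m: vdW/Casimir-Polder crossover
assert 8e-8 < crossover < 1e-7

# limits: van der Waals (-C3/d^3) close in, Casimir-Polder (-C4/d^4) far out
assert abs(U(1e-9) / (-C3 / (1e-9)**3) - 1) < 0.05
assert abs(U(1e-5) / (-C4 / (1e-5)**4) - 1) < 0.05
```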
\subsection{Potential shifts}
\label{subsec:shifts}
The atom-surface potential shifts the atomic levels in a position-dependent way.
The shifts produce a spatially-varying absorption (emission) rate~\cite{Foot2005}:
\begin{equation}
p_{\mathrm{abs}}\left( r \right) = \frac{\Gamma}{2} \frac{s}{1+s+4\left( \frac{d\omega \left( r \right) +\delta}{\Gamma} \right)^2}\,,
\label{eq:shift}
\end{equation}
where $r$ is the position of the atom, $s = I/I_{\mathrm{sat}}$ is the saturation parameter ($I_{\mathrm{sat}} = 3.58\, \mathrm{mW}\cdot\mathrm{cm}^{-2}$ for a uniform sublevel population distribution~\cite{Steck2001}), $\delta = \omega_{\mathrm{L}} - \omega_0$ is the detuning of the driving (i.e. MOT) beams from atomic resonance, and $d\omega \left( r \right) = \left( U_e(r) - U_g (r)\right ) / \hbar$ is the atom-surface shift assuming a two-level atom.
Note that we neglect effects due to interference of the MOT beams with each other or due to scattering of the MOT beams off the nanofiber in the near field.
This is justified because the fiber is long compared to the length scale of these effects along its axis, so the atoms see a landscape that is, on average, uniform.
\subsection{Coupling efficiency}
\label{subsec:coupling}
The coupling efficiency of an atom to the ONF is the rate of spontaneous emission that couples into the one-dimensional mode of the fiber divided by the total spontaneous emission rate~\cite{LeKien2006,Masalov2013},
\begin{equation}
\beta \left ( r \right ) = \Gamma_{1\mathrm{D} }\left( r \right)/\Gamma_{ \mathrm{tot} } \left( r \right)\,.
\label{eq:coupling}
\end{equation}
Fermi's golden rule determines the form of $\beta \left ( r \right ) $, which follows the spatial variation of Eq.~\ref{eq:intensity}.
Photon detection in the experiment is a joint process of absorbing a photon from the MOT beams and emitting it into the nanofiber mode, which is mathematically described by the product of the photon emission rate in Eq.~\ref{eq:shift} and the coupling efficiency in Eq.~\ref{eq:coupling}.
It is the position-dependence of this joint probability that allows us to obtain information about the atomic motion.
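The chain from atom position to detected photon can be sketched numerically. The snippet below is a simplified two-level model, not the full simulation of Sec.~\ref{subsec:sim}: it evaluates Eq.~\ref{eq:vdWCP} for both levels, the shifted scattering rate of Eq.~\ref{eq:shift}, and a coupling efficiency assumed simply proportional to the evanescent intensity. The saturation parameter, detuning, and overall coupling scale $\beta_0$ are illustrative assumptions, not the measured values.

```python
import math

HBAR = 1.054571817e-34               # J*s
GAMMA = 2 * math.pi * 6.07e6         # 87Rb D2 natural linewidth (rad/s)
R0 = 265e-9                          # fiber radius (m)
Q = 0.66 * 2 * math.pi / 780.24e-9   # evanescent decay constant (1/m)
# C3 (J*m^3) and C4 (J*m^4) for the 5S_1/2 and 5P_3/2 levels near fused silica
C3_G, C4_G = 4.94e-49, 4.47e-56
C3_E, C4_E = 7.05e-49, 12.2e-56

def U(r, C3, C4):
    """Eq. (vdWCP): smooth interpolation between vdW and Casimir-Polder regimes."""
    d = r - R0
    return -C4 / (d**3 * (d + C4 / C3))

def p_abs(r, s=1.0, delta=-2 * math.pi * 12e6):
    """Eq. (shift): scattering rate with the position-dependent level shift.
    s and delta here are illustrative MOT values, not the experimental ones."""
    dw = (U(r, C3_E, C4_E) - U(r, C3_G, C4_G)) / HBAR
    return 0.5 * GAMMA * s / (1 + s + 4 * ((dw + delta) / GAMMA) ** 2)

def beta(r, beta0=0.01):
    """Coupling efficiency, assumed proportional to the evanescent intensity."""
    return beta0 * math.exp(-2 * Q * (r - R0))

def p_detect(r):
    """Joint probability: absorb from the MOT beams, emit into the fiber mode."""
    return p_abs(r) * beta(r)
```

Very close to the surface the differential shift pushes the atom out of resonance and suppresses scattering, while far away the coupling efficiency decays; the joint probability therefore peaks within a fraction of the decay length of the surface.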
\section{Correlations}
\label{sec:theory}
The intensity-intensity correlation function
\begin{equation}
g^{(2)}(\tau) = \frac{\left\langle I(t)\,I(t+\tau) \right\rangle}{\left\langle I(t) \right\rangle^2}\,,
\label{eq:corr}
\end{equation}
measures the conditional probability of detecting a photon at a time $\tau$ after recording another photon.
Here $\langle \cdot \rangle$ denotes an average over all $t$, and, in this discussion, $I(t)$ is the photocurrent or, equivalently, the photon counting rate at time $t$.
At its core, $g^{(2)}(\tau)$ characterizes the fluctuations in $I(t)$.
When measuring fluorescence from an atomic ensemble, the function contains contributions from different sources of fluctuations including single-atom field-field correlations, single-atom intensity-intensity correlations, different-atom field-field correlations, and different-atom intensity-intensity correlations.
Neglecting correlations between different atoms and assuming that they are motionless, we can write $g^{(2)}(\tau)$ as~\cite{Carmichael1978}
\begin{equation}
g^{(2)}(\tau) = 1+\left\lvert g_A^{(1)}(\tau) \right\rvert^2+\frac{1}{\bar{N}}g_A^{(2)}(\tau)\,,
\label{eq:g2}
\end{equation}
where $\bar{N}$ is the average atom number in a particular time window, and $g_A^{(2)}(\tau)$ and $g_A^{(1)}(\tau)$ are the single-atom intensity-intensity and field-field correlations, respectively.
For small atom number $\bar{N}$, we can observe the ``antibunching term'' $g_A^{(2)}(\tau)$.
Laser-cooled atoms are not stationary emitters.
The resonance fluorescence emitted into the fundamental mode exhibits correlations due to transit-time effects related to the geometry of that mode.
The atoms act as beacons signaling their position while passing near the nanofiber.
Accounting for the motion of atoms amounts to adding a temporal envelope $f(\tau)$ to Eq.~\ref{eq:g2}~\cite{Hennrich2005},
\begin{equation}
\label{eq:transitcorr}
g^{(2)}(\tau) = 1+ f(\tau) \lvert g_A^{(1)}(\tau) \rvert^2+ \frac{1}{ \bar{N} } f(\tau)g_A^{(2)}(\tau)\,.
\end{equation}
The function $f(\tau)$ generally depends on the environment and how the emitted light couples to the detection apparatus -- it is the shape of this temporal envelope that will allow us to extract information about the trajectories of atoms moving near an ONF.
We can relate the width of the correlation function $g^{(2)}(\tau)$ to the temperature of the atomic cloud by noting that the temperature determines the velocity distribution of the atoms and the speed of the atoms determines the timescale of the interaction with the nanofiber.
The ONF mode described by Eq.~\ref{eq:intensity} possesses a characteristic length scale of $1/q$.
Dividing this length by a characteristic speed of a Maxwell-Boltzmann distribution of atoms at a temperature $T$, which we take to be the most probable speed, $v_p = \sqrt{2 k_B T/m}$, yields a simple relationship between transit time and temperature:
\begin{equation}
\tau_0 = \frac{a}{q}\sqrt{\frac{m}{2 k_B T}}\,,
\label{eq:temp}
\end{equation}
where $a$ is an overall scale factor determined by the geometry of the problem and by our choice of characteristic speed. We were not able to find an analytical form for $a$ from simple physical considerations, but we used simulations to understand how geometric details affect it (see Sec.~\ref{subsec:sim}).
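For orientation, evaluating Eq.~\ref{eq:temp} with the decay constant $q = 0.66k$ (derived in Sec.~\ref{subsec:data}) and the best-fit scale factor $a = 1.46$ (Sec.~\ref{subsec:temp2}) gives transit times on the microsecond scale:

```python
import math

KB = 1.380649e-23        # Boltzmann constant (J/K)
M_RB87 = 1.44316060e-25  # 87Rb mass (kg)
K = 2 * math.pi / 780.24e-9
Q = 0.66 * K             # radial decay constant of the guided mode
A_SCALE = 1.46           # best-fit geometric scale factor (Sec. tau0 vs T)

def tau0(T):
    """Characteristic transit time, Eq. (temp)."""
    return (A_SCALE / Q) * math.sqrt(M_RB87 / (2 * KB * T))
```

For $T = 460\,\mu$K this gives roughly $0.9\,\mu$s, consistent with the scale of the fitted decay times in Fig.~\ref{fig:temp}.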
\section{Experiment and results}
\label{sec:expt}
\subsection{Apparatus}
\label{subsec:app}
We load the MOT from the low-velocity tail of a background vapor of \Rb{87} atoms produced by a dispenser (see details in Ref.~\cite{Lee2013}).
We change the intensity and detuning of the cooling beams in order to controllably vary the temperature of the atomic cloud over the range $\sim\!160$--$840\,\mu\mathrm{K}$, as measured by time-of-flight expansion through fluorescence imaging.
Our ability to determine the atomic cloud temperatures is limited by the time-of-flight (TOF) imaging system in our setup combined with the low atom numbers for colder MOTs in steady state.
While we collected correlation data for ostensibly colder MOTs, we only present data for temperatures for which we could provide calibration to a known technique.
This is an indication that in certain circumstances the signal-to-noise ratio of the correlation measurement technique can be better than that of TOF.
The optical nanofiber (ONF) is produced via the flame brushing technique~\cite{Hoffman2014a,Birks1992}.
A hydrogen-oxygen flame acts as a local heat source to soften $125\,\mu \mathrm{m}$-diameter, single-mode fiber (Fibercore SM800) whose ends are pulled with computer-controlled linear motors.
This method reliably produces fibers of subwavelength diameters with transmission of the fundamental mode above 99\% and as high as 99.95\%, allowing them to sustain powers of hundreds of milliwatts in high vacuum~\cite{Hoffman2014a}.
Based on our fiber-pulling reproducibility, we know the transmission is greater than 95\%.
Relying on repeated, destructive measurements of the nanofiber diameter using a scanning electron microscope (SEM), we estimate the diameter of our ONF to be $530\pm 50$ nm, with a 1\% uniformity over a length of 7 mm.
This fiber diameter with the stated uncertainty accepts only one guided mode, described by Eq.~\ref{eq:intensity} above, at the experimentally relevant wavelength of 780 nm.
This same fiber has been in our apparatus for over two years with no noticeable degradation in quality. Rubidium atoms can coat the fiber surface and reduce transmission under operating pressures, but application of a through-fiber heating beam with a power of more than a few $\mu$W is sufficient to desorb the atoms within a few ms.
We glue (EPO-TEK OG116-31) the fiber to a titanium u-shaped mount for stability, and attach the mount to a UHV-compatible manipulator system (VG Scienta Transax).
The manipulator consists of a motorized stepper motor along one axis and manual 2D translation stages along the other axes.
This manipulator works in conjunction with three pairs of magnetic shim coils to optimally overlap the nanofiber waist with the region of highest atomic density in the cloud.
Light that couples into the guided mode is filtered at the output of the fiber by a volume Bragg grating (VBG, OptiGrate BP-785), a narrow-line interference filter (Semrock LL01-780-12.5), and a long-pass color filter (Thorlabs FGL645) before being sent to the two fiber-coupled single-photon counting modules (SPCMs) (see Fig.~\ref{fig:schematic}).
A field-programmable gate array (FPGA)~\cite{Peters2015} stores and time-tags photon output TTL pulses from the SPCMs, which are then post-processed and correlated.
An internal clock of 48 MHz sets the minimal time resolution to 20.83 ns.
The use of two SPCMs circumvents problems near zero time delay related to detector dead time, typically 50 ns.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{correlation_schematic.pdf}
\caption[Experimental schematic for correlation measurements]{\label{fig:schematic}(Color online) Experimental schematic. A MOT is spatially-overlapped with a nanofiber, and the MOT beams drive resonance fluorescence that couples into the guided mode. This signal is filtered by a volume Bragg grating (VBG), bandpass (BP) filter, and long-pass filter before being split by a 50:50 beamsplitter (BS) and sent to two SPCMs. TTL pulses from the SPCMs are time-tagged by an FPGA and correlated in software.}
\end{figure}
\subsection{Data and fitting}
\label{subsec:data}
For this experiment, the MOT beams are on continuously during data acquisition and drive fluorescent transitions in the atoms.
We collect $\sim 2.5\times 10^{7}$ photon counts for each experimental run, corresponding to about 45 min of averaging per data point.
Time-of-flight imaging measures the temperature of the atomic cloud before and after photon collection.
Our data is a list of times corresponding to photon detection events, which we use to find $g^{(2)}(\tau)$.
We do not do any further binning of the data, so that the timing resolution of 20.83 ns is set by the internal clock in the FPGA.
While this time resolution obscures details on atomic spontaneous emission timescales (tens of nanoseconds), it provides good resolution on the timescale of a few microseconds where the atomic trajectories produce signatures in the correlation function.
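From the list of time tags, $g^{(2)}(\tau)$ can be estimated by histogramming arrival-time differences between the two SPCM streams and normalizing by the count expected for uncorrelated streams. The following is a generic sketch of such an estimator, not our exact analysis code; times are in arbitrary units, and the Poisson-stream sanity check at the end simply verifies that uncorrelated counts give $g^{(2)} \approx 1$:

```python
import bisect
import random

def g2_estimate(t_a, t_b, bin_width, max_lag):
    """Histogram pairwise delays t_b - t_a within +/- max_lag, normalized by
    the count expected for two uncorrelated streams (so g2 = 1 there)."""
    n_bins = int(round(2 * max_lag / bin_width))
    counts = [0] * n_bins
    for t in t_a:
        lo = bisect.bisect_left(t_b, t - max_lag)
        hi = bisect.bisect_right(t_b, t + max_lag)
        for tb in t_b[lo:hi]:
            k = int((tb - t + max_lag) / bin_width)
            if 0 <= k < n_bins:
                counts[k] += 1
    span = max(t_a[-1], t_b[-1]) - min(t_a[0], t_b[0])
    norm = (len(t_a) / span) * (len(t_b) / span) * bin_width * span
    taus = [(k + 0.5) * bin_width - max_lag for k in range(n_bins)]
    return taus, [c / norm for c in counts]

# sanity check on two uncorrelated Poisson streams: g2 should be ~1 everywhere
rng = random.Random(0)
def poisson_stream(n, rate):
    t, out = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        out.append(t)
    return out

taus, g2 = g2_estimate(poisson_stream(20000, 1.0),
                       poisson_stream(20000, 1.0), 0.5, 10.0)
mean_g2 = sum(g2) / len(g2)
```

The sorted-search window keeps the pair count linear in the number of events for a fixed lag range, which matters for the $\sim 2.5\times10^{7}$ detection events per run.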
Measurements using an oscilloscope (Tektronix DPO 7054) with finer time resolution allowed us to observe antibunching for low atom number.
Varying the Rb dispenser current allows us to change the number of atoms in the MOT, so that we can change the average number of atoms interacting with the nanofiber mode.
Fig.~\ref{fig:antibunch} shows the transition from antibunched (positive slope after $\tau=0$, estimated atom number is $\sim1.4$) to bunched (negative slope after $\tau=0$, estimated atom number is $\sim6$) correlations as we increase the number of atoms fluorescing into the mode of the ONF.
The timescale of the bunched or antibunched feature is set by the internal degrees of freedom of the atom and is much shorter than the temporal envelope due to atomic motion, which we discuss next.
Similar results were observed in Ref.~\cite{Nayak2009}.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{bunch_antibunch_moving_avg_6.pdf}
\caption[Short timescale autocorrelation function of resonance fluorescence emitted into the nanofiber guided mode, illustrating a transition from bunching to antibunching]{\label{fig:antibunch}(Color online) Second-order correlation function $g^{(2)}(\tau)$ for light scattered into the fiber as a function of delay time $\tau$.
The curves show data for low (blue) and high (red) Rb dispenser currents, illustrating antibunching and bunching, respectively.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{723uK_0mW_plot500points_fit1000points_fit_and_residuals_rescale.pdf}
\caption[Autocorrelation function that displays transit-time effects of atoms passing through the nanofiber guided mode, with fit and residuals]{\label{fig:fit}(Color online) Second-order correlation function $\gcorr{2}{\tau}$ as a function of delay time $\tau$ for an atom temperature of 460 $\mu$K. The data (blue dots) are fit (solid red line) to $1+f(\tau)$ using Eq.~\ref{eq:fit}, with the residuals displayed in the lower plot.}
\end{figure}
Figure~\ref{fig:fit} displays an example of $\gcorr{2}{\tau}$ extracted from data for an atom temperature of 460 $\mu$K.
Note the very different timescale compared to Fig.~\ref{fig:antibunch}: the atom-number-dependent peak or dip corresponds to only one data point in Fig.~\ref{fig:fit}.
Because this point at zero time delay is the only one that depends on atom number, we neglect it when fitting the data.
The signal has a characteristic width due to transit-time effects, which is the result of a position-dependent atom-fiber coupling efficiency combined with moving atoms.
An atom at a particular location will emit into the mode with probability proportional to its intensity at that position, and averaging over many atomic trajectories will sample the entire mode.
In this way, the autocorrelation function contains information about the mode in question (the shape of $\gcorr{2}{\tau}$) and about the motion of the atoms (the decay time of $\gcorr{2}{\tau}$).
We make a series of approximations to the model of the mode structure before comparing the observed transit-time broadening to theory.
The factors $u$ and $w$ in Eq.~\ref{eq:intensity} are small for a fiber radius of 265 nm and wavelength of 780.24 nm (calculated to be 0.166 and 0.00875, respectively), so we neglect them and keep only the first term, which is proportional to $K^2_0$.
As a further simplifying approximation we also take the asymptotic form of $K_{0}$~\cite{Olver2010},
\begin{equation}
K_{0} (z)\sim \sqrt{\frac{\pi}{2 z}}e^{-z} \,,
\label{eq:asymptote}
\end{equation}
which is valid in our case.
This yields an intensity around the nanofiber proportional to $\exp [-2qr]/2qr$.
Defining an effective index of refraction, $n_{\mathrm{eff}} = \beta/k$, we can rewrite the propagation constant so that the radial decay parameter becomes $q = k\sqrt{n^2_\mathrm{eff}-1}$, which is $0.66 k$ for our nanofiber.
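As a quick numerical check of this approximation, one can compare Eq.~\ref{eq:asymptote} against the integral representation $K_0(z) = \int_0^\infty e^{-z \cosh t}\,dt$ (evaluated here by a simple quadrature rather than a special-function library). At the smallest relevant argument, $z = q r_0 \approx 1.4$, the leading asymptotic form is accurate to roughly 10\% and improves rapidly with $z$:

```python
import math

def k0_integral(z, t_max=20.0, n=4000):
    """K0(z) via its integral representation, trapezoidal rule (stdlib only)."""
    h = t_max / n
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(t_max)))
    for i in range(1, n):
        s += math.exp(-z * math.cosh(i * h))
    return h * s

def k0_asym(z):
    """Leading asymptotic form, Eq. (asymptote)."""
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z)

z0 = 0.66 * (2 * math.pi / 780.24e-9) * 265e-9   # q * r0 for our fiber, ~1.41
rel_err = abs(k0_asym(z0) - k0_integral(z0)) / k0_integral(z0)
```

Since the argument of $K_0$ only grows with distance from the fiber, this bounds the error of the approximation over the whole evanescent region.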
We recast the spatial dependence of the intensity into the temporal envelope in Eq.~\ref{eq:transitcorr}~\cite{Hennrich2005,Norris2009}:
\begin{equation}
f(\tau) = A\, \frac{e^{-2\left ( \lvert \tau \rvert /\tau_0+ 0.66\, k\, r_0 \right )}}{\left ( \lvert \tau \rvert /\tau_0+ 0.66\,k\,r_0 \right )}\,,
\label{eq:fit}
\end{equation}
where $r_0 = 265$ nm is the fiber radius, $A$ is a fitting parameter for the overall amplitude, and the absolute value reflects the time-symmetric nature of the autocorrelation function for stationary processes.
The parameter $\tau_0$ represents a characteristic correlation time (see Eq.~\ref{eq:temp}).
The red curve in Fig.~\ref{fig:fit} shows the best fit to $g^{(2)}(\tau) = 1 + f(\tau)$ because $g^{(2)}_A (\tau )$ is flat in our experiment over timescales longer than the atomic lifetime.
For a measured atomic temperature of 460 $\mu$K, the fit achieves a reduced $\chi^2$ of 1.02 (and a range of approximately $1-1.5$ across all datasets).
We note that using Eq.~\ref{eq:fit} for the temporal envelope $f(\tau)$ results in statistically better fits than an exponential or Gaussian decay.
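Because the amplitude $A$ enters Eq.~\ref{eq:fit} linearly, the fit can be reduced to a one-dimensional scan over $\tau_0$ with a linear least-squares solve for $A$ at each trial value. The sketch below is not our actual fitting code; it recovers assumed parameters from synthetic, noise-free data to illustrate the procedure:

```python
import math

X0 = 0.66 * (2 * math.pi / 780.24e-9) * 265e-9   # dimensionless offset 0.66*k*r0

def envelope(tau, amp, tau0):
    """Temporal envelope f(tau) of Eq. (fit)."""
    x = abs(tau) / tau0 + X0
    return amp * math.exp(-2 * x) / x

def fit_tau0(taus, g2_vals):
    """Scan tau0; for each trial, the best amplitude is a linear solve."""
    best = None
    for i in range(50, 201):                 # trial tau0 from 0.5 to 2.0 us
        tau0 = i * 0.01e-6
        h = [envelope(t, 1.0, tau0) for t in taus]
        amp = sum((g - 1.0) * v for g, v in zip(g2_vals, h)) / sum(v * v for v in h)
        sse = sum((g - 1.0 - amp * v) ** 2 for g, v in zip(g2_vals, h))
        if best is None or sse < best[0]:
            best = (sse, tau0, amp)
    return best[1], best[2]

# synthetic correlation data generated from assumed parameters (no noise)
taus = [i * 0.05e-6 for i in range(1, 101)]
data = [1.0 + envelope(t, 0.8, 1.0e-6) for t in taus]
tau0_fit, amp_fit = fit_tau0(taus, data)
```

In practice a standard nonlinear least-squares routine serves the same purpose; the scan above just makes the structure of the two-parameter fit explicit.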
The overall height of the bunched peak depends on the absolute knowledge of the background at very long time (seconds), which we know can depend on mechanical vibrations of the fiber.
The environment acoustically and thermally drives these vibrations so that we do not have an exact measure of unity.
This, combined with the signal-to-background ratio of the photon counting rates, can explain why the amplitude $A$ does not reach the expected value of 2 for chaotic light from independent emitters.
\subsection{$\tau_0$ vs. temperature}
\label{subsec:temp2}
We extract best-fit values for $\tau_0$ at different MOT atomic temperatures, with each temperature also measured by standard TOF imaging.
Fig.~\ref{fig:temp} shows a plot of the resulting best-fit values $\tau_0$, where the vertical error bars are the standard errors from the fit, and the horizontal error bars originate from uncertainty in the knowledge of the magnification of the imaging system.
The average atom number for all points in Fig.~\ref{fig:temp} falls in the same range as for the data presented in Fig.~\ref{fig:antibunch}, i.e. $1 < \bar{N} < 10$.
The purple line in Fig.~\ref{fig:temp} is a fit to Eq.~\ref{eq:temp} with the single fit parameter $a$, and the shaded area represents the 95\% ($2\sigma$) confidence band, accounting for both the vertical and horizontal error bars.
We observe good agreement between the model and the data; the fit has a reduced $\chi^2$ of 1.67, and the overall scale parameter is $a=1.46\pm0.04$.
The exact value of this scale factor is discussed further in Sec.~\ref{subsec:sim}.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{TOF_data_20141021_sim_530nm.pdf}
\caption{\label{fig:temp}(Color online) Extracted $\tau_0$ vs. temperature $T$, measured via TOF.
The blue circles are experimental data.
The vertical error bars indicate standard error in the fit of Eq.~\ref{eq:fit}, and the horizontal error bars arise from systematic uncertainty in the magnification of the imaging system.
The purple line is a fit to Eq.~\ref{eq:temp}, and the shaded region is the 95\% ($2\sigma$) confidence band, accounting for both the vertical and horizontal error bars.
The reduced $\chi^2$ is 1.67.
The red open squares are the results of the trajectory simulation, with a single scale parameter of 0.77.}
\end{figure}
\subsection{Simulations}
\label{subsec:sim}
To better understand the physical situation, we perform simulations of atomic trajectories subject to Newton's equations of motion~\cite{Sague2007}.
These simulations include the atom-surface potential and its resultant shift discussed in Sections~\ref{subsec:potentials} and~\ref{subsec:shifts}.
The classical nature of the simulations is justified because the smallest angular momenta present in the system are still $\sim100$ times larger than $\hbar$.
The atoms are started at a radial distance $r=1500$ nm away from the fiber surface.
At this distance, the coupling is weak due to the rapid decay of the mode with length scale $1/q$.
The axial and radial symmetry of the problem allows us to restrict trajectories to the x-y plane with initial velocities pointing in one quadrant.
We sample the speeds from a 3D Maxwell-Boltzmann distribution before projecting onto this plane.
Trajectories evolve until either 50 $\mu$s have elapsed or the atom strikes the fiber surface, whichever happens first.
The coupling efficiency in Eq.~\ref{eq:coupling} is a fit to the complete solution for a two-level atom~\cite{Masalov2013}.
We also assume that the orientation of the atomic dipoles relative to the fiber surface is random, so that the coupling efficiency is an effective ensemble average. Independent measurements confirm that this assumption of random orientations is valid for our MOT.
Photon scattering events are infrequent on microsecond timescales, and when they do occur their effect on atomic velocity is negligible.
As a result, we assume ballistic trajectories.
The position-dependent coupling efficiency in Eq.~\ref{eq:coupling} and the position-dependent emission rate in Eq.~\ref{eq:shift} are calculated at each instant of time along a trajectory and multiplied together.
This yields a time-dependent detection probability for each trajectory.
Time-correlating the detection probability of a trajectory with itself produces a signal proportional to the intensity-intensity correlation for a single atom.
We discretize these time-dependent probabilities onto a mesh of 50-ns resolution so that calculating the correlation function becomes a simple array operation.
Experimentally-measured values for atom temperature are fed into the simulation, which is averaged over $5,000$ randomly sampled speeds and directions.
The resulting correlation function is fit to Eq.~\ref{eq:fit} in order to extract the decay time $\tau_0$.
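The essential ingredients of the simulation can be sketched as follows. This toy version uses ballistic 2D trajectories, takes the detection probability simply proportional to the evanescent intensity (the full model multiplies Eqs.~\ref{eq:shift} and~\ref{eq:coupling}), discretizes on the same 50-ns mesh, and extracts a $1/e$ correlation width; all numerical choices here are illustrative:

```python
import math
import random

Q = 0.66 * 2 * math.pi / 780.24e-9    # evanescent decay constant (1/m)
R_FIBER = 265e-9                      # fiber radius (m)
KB, M = 1.380649e-23, 1.44316060e-25  # Boltzmann const (J/K), 87Rb mass (kg)

def detection_trace(v, theta, dt=50e-9, t_max=20e-6, r_start=1500e-9):
    """Ballistic trajectory launched r_start above the surface at angle theta
    from the inward radial direction; returns the detection probability
    (here simply the evanescent intensity) sampled on a 50-ns mesh."""
    x, y = R_FIBER + r_start, 0.0
    vx, vy = -v * math.cos(theta), v * math.sin(theta)
    trace = []
    for _ in range(int(t_max / dt)):
        r = math.hypot(x, y)
        if r <= R_FIBER:              # atom strikes the fiber: trajectory ends
            break
        trace.append(math.exp(-2 * Q * (r - R_FIBER)))
        x += vx * dt
        y += vy * dt
    return trace

def corr_width(T, n_traj=100, n_lags=100, dt=50e-9, seed=1):
    """Average the single-trajectory autocorrelation of the detection
    probability and return its 1/e half-width."""
    rng = random.Random(seed)
    sigma = math.sqrt(KB * T / M)     # 1D thermal velocity spread
    corr = [0.0] * n_lags
    for _ in range(n_traj):
        # speed from a 3D Maxwell-Boltzmann sample, angle within +/- pi/6
        v = math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma),
                       rng.gauss(0, sigma))
        theta = rng.uniform(-math.pi / 6, math.pi / 6)
        p = detection_trace(v, theta, dt=dt)
        for k in range(min(n_lags, len(p))):
            corr[k] += sum(a * b for a, b in zip(p, p[k:]))
    for k in range(n_lags):
        if corr[k] < corr[0] / math.e:
            return k * dt
    return n_lags * dt
```

Even this stripped-down version reproduces the qualitative trend of Fig.~\ref{fig:temp}: hotter, faster atoms traverse the evanescent region more quickly and produce a narrower correlation envelope.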
We first utilize the simulations to address the scale factor in Eq.~\ref{eq:temp}.
Fig.~\ref{fig:simangle} displays the dependence of the transit time on the angular spread of the atomic trajectories for a distribution with temperature 90 $\mu$K.
For an atomic beam aimed directly at the fiber, we extract a transit time of 1.49 $\mu$s, which matches well the calculated time of 1.43 $\mu$s using Eq.~\ref{eq:temp} with $a=1$.
The transit time increases slowly as we increase the angular distribution of trajectories, until it reaches a critical value of $\arctan(53/300)$, corresponding to the point beyond which not all paths intersect the nanofiber.
Beyond this angle, atoms then interact with the fiber over distances longer than $1/q$, and the transit time consequently increases further.
Fig.~\ref{fig:simangle} illustrates that an angular spread of at least $\pi/6$ is required for the simulation to fully sample the interaction region and give reasonable results.
Moreover, we note that the ratio of the transit time for the fully-sampled simulation to the effective one-dimensional simulation with no angular spread is 1.7.
These results suggest that our observed scale parameter of $a=1.46\,\pm\,0.04$ is partially due to angular spread in the trajectories.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{taus_vs_angle_90uK_530nm.pdf}
\caption[Simulated correlation function width vs. sampling angle range for an atomic temperature of 90 $\mu$K]{\label{fig:simangle}(Color online) Simulated correlation time $\tau_0$ vs. sampling angle range $\Delta \theta$ for an atomic temperature of 90 $\mu$K. The dashed line indicates the critical angle $\arctan(53/300)$ in the simulation at which not all atoms hit the fiber.}
\end{figure}
We performed simulations for the same temperatures measured in the experiment and one additional atomic temperature of 100 $\mu$K.
The red open squares in Fig.~\ref{fig:temp} display transit times extracted from simulated data that were fit to the temporal envelope function, Eq.~\ref{eq:fit}.
The simulated data are multiplied by a single, overall scale parameter equal to 0.77 in order to minimize the least-squares distance to the experimental fit.
The simulations follow the expected trend with temperature and differ only by a scale factor of order unity.
The discrepancies might be explained by the various simplifications made in our model, which neglects, for instance, the stochastic nature of the photon absorption/emission process.
We note, however, that the difference between the experimental and simulated data is comparable to other temperature measurement methods using optical nanofibers~\cite{Russell2013}.
\section{Conclusions}
\label{sec:concs}
We have presented a technique to measure the temperature of a laser-cooled atomic cloud that is applicable to experiments with restrictive environments, such as hybrid quantum systems using superconducting circuits.
The method uses the intensity-intensity correlation function to extract motion of atoms as they pass through the ONF mode and is easily extendable to other photonic devices with different optical mode geometries.
This technique allows mapping of mode structures, which could be useful when using the next family of higher-order modes to trap atoms near an optical nanofiber~\cite{Fu2007,Fu2008,Sague2008,Ravets2013,Kumar2015}.
\section{Acknowledgements}
This work was supported by the National Science Foundation through the PFC at JQI and the ARO Atomtronics MURI.
We gratefully acknowledge J. E. Hoffman for nanofiber fabrication, J. K. Peters for help with the FPGA for data acquisition, A. D. Cimmarusti for assistance with data processing software, H. J. Carmichael for discussions regarding simulations, and W. D. Phillips for a thoughtful reading of the manuscript.
\section{Introduction}
\par
RICH-1~\cite{Albrecht} is a large gaseous Ring Imaging Cherenkov Counter (RICH) providing Particle Identification (PID) for hadrons within the momentum range 3 to 55 GeV/c for the COMPASS Experiment at CERN SPS~\cite{compass}. It consists of a 3 m long $C_{4}F_{10}$ gaseous radiator, where charged particles with velocity above the Cherenkov threshold emit photons; 21 m$^2$ of VUV mirror surface, where the photons are reflected and focused onto 5.5 m$^2$ of photo-detection surface sensitive to single photons (Fig.~\ref{fig:richPrinciple}-A).
Three photo detection technologies are used in RICH-1: Multi Wire Proportional Chambers (MWPCs) with CsI photo-cathodes, Multi Anode Photo-Multipliers Tubes (MAPMTs) and Micro Pattern Gaseous Detectors (MPGDs) based Photon Detectors (PDs) (Fig.\ref{fig:richPrinciple}-B).
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/RichLayoutNew1}
\caption{A. The Cherenkov photon propagation and focusing. B. Photon detectors (not to scale).}
\label{fig:richPrinciple}
\end{figure}
\par
RICH-1 was designed and built in 1996-2000, commissioned in 2001-2002, and has been in operation since 2002. The whole photo-detection surface was originally equipped with 16 MWPCs with CsI photo-cathodes of $\sim 600\times600$ mm$^2$ active area. In spite of their good performance, MWPCs have limitations in terms of maximum effective gain ($\sim 10^4$), time response ($\sim \mu$s), rate capability, and aging of the CsI photo-cathodes. In 2006, 4 central chambers were replaced with detectors consisting of MAPMTs coupled to individual fused silica lens telescopes to cope with the high particle rates of the central region. In parallel, an extensive R\&D program\cite{THGEM_rd} aimed to develop MPGD based large area PDs established a novel hybrid technology combining MicroMegas \cite{MM} and THick Gas Electron Multipliers (THGEMs) \cite{Alexeev}. In 2016 COMPASS RICH-1 was upgraded by replacing 4 of the remaining 12 MWPCs with CsI photo-cathodes with new detectors based on the novel MPGD hybrid technology \cite{upgradeHybrid}. The new detectors have been successfully commissioned and operated during the 2016 and 2017 COMPASS data taking periods.
\section{The Hybrid Architecture}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/hybridschemefinal}
\caption{The Hybrid architecture}
\label{fig:hybrid}
\end{figure}
\par
The basic structure of the hybrid module (Fig.\ref{fig:hybrid}) consists of two layers of THGEMs, one MicroMegas, and two planes of wires. UV light sensitivity is obtained via the deposit of a thin (300 nm) CsI layer on the top of the first THGEM electrode which acts as a reflective photo-cathode.
The Drift wire plane is installed at 4 mm from the CsI coated THGEM and is biased to a suitable voltage in order to maximize the extraction and collection efficiency of the converted photo-electrons. The other wire plane guarantees the correct closure of the drift field lines and is positioned 4.5 mm away from the quartz window which separates the radiator gas volume from the $Ar:CH_{4}$ $50:50$ gas mixture of the photon detector.
The photo-electron generated by the conversion of a Cherenkov photon at the CsI surface is guided into one of the first THGEM's holes, where the avalanche process takes place due to the electric field generated by the biasing voltage applied between the top and bottom THGEM electrodes. The electron cloud generated in the first multiplication stage is then driven by the 1.5 kV/cm electric field across the 3 mm transfer region to the second THGEM where, thanks to the complete misalignment of the holes with respect to the first THGEM layer ($\sim$462 $\mu$m displacement along the THGEM length coordinate), the charge is spread and undergoes a second multiplication process.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Figures/PadFinalSchemeHybrid}
\caption{A: Exploded view of one single readout pad structure. B: The schematic of the circuit diagram of the \enquote{capacitive anode} idea. C and D: Metallographic section of the PCB: the detail of the through-via contacting the external pad via the hole of the buried pad. \cite{upgradeHybrid}}
\label{fig:padscheme}
\end{figure}
Finally the charge is guided by the 0.8 kV/cm field across the 5 mm gap to the bulk MicroMegas, where the last multiplication occurs. The MicroMegas mesh, which is the only non-segmented electrode, is kept at ground potential, while the anode, segmented in square pads of 7.5$\times$7.5 $mm^{2}$ (with 0.5 mm inter-pad gaps), is biased at positive voltage (Fig.~\ref{fig:padscheme}-A and Fig.~\ref{fig:padscheme}-B). The MicroMegas PCBs are based on the capacitive/resistive concept: the anodic pads are powered through individual resistors, and the signal induced on the anodic pads is read out by the front-end APV-25 chips\cite{APV25} via capacitively coupled buried pads embedded 70 $\mu$m below the anodic ones (Fig.~\ref{fig:padscheme}-C). The high voltage is provided to the anodic pads by vias passing through the readout pads. Special attention was paid to obtaining a very flat surface for the anodic pad via connections (as shown in Fig.~\ref{fig:padscheme}-D). The intrinsic ion-blocking capability of the MicroMegas, together with the THGEM geometry and field arrangements, keeps the ion backflow to the photo-cathode surface at or below 3\% \cite{PDreview}.
\section{Building and commissioning of the final detectors}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figures/THGEMDrawing}
\caption{The final THGEM design}
\label{fig:thgemdrawing}
\end{figure}
\par
All the THGEMs have the same geometrical parameters: thickness of 470 $\mu$m, total length of 581 mm and width of 287 mm (Fig.~\ref{fig:thgemdrawing}). The hole diameter is 400 $\mu$m and the pitch is 800 $\mu$m. The holes were produced by mechanical drilling and have no rim. To obtain a symmetric field-line configuration near the edges of the THGEM, the diameter of the holes located along the external borders has been enlarged to 500 $\mu$m, which results in an improved electrical stability of the whole system. The top and bottom electrodes of each THGEM are segmented in 12 sectors (10 are 564 mm long and 23.3 mm wide; the most external ones are 17.9 mm wide); all sectors are separated by a 0.7 mm clearance area. The biasing voltage is individually provided to each sector of each THGEM.
\par
The THGEMs for the RICH-1 upgrade were produced following specific procedures (result of 8 year long dedicated R\&D) concerning raw material selection, THGEM production, quality assessment, characterization, storage and installation.
\par
Achieving an effective gain uniformity of $\sim 7\%$ over a large surface area ($\sim 0.2$ $m^{2}$) is challenging due to the thickness tolerance of the raw PCB sheets available on the market. To avoid wasting produced THGEMs, the selection was performed on the raw material before drilling, using a setup based on a MITUTOYO EURO CA776 coordinate measuring machine at the INFN Trieste mechanical workshop. In total 50 foils were measured, from which 100 THGEMs (2 THGEMs per foil) could be produced. Foils with a thickness tolerance of $\pm15$ $\mu$m ptp were sent for transfer of the mask image, etching and drilling of the holes and the other procedures needed to prepare raw THGEMs. Of the 100 measured PCB pieces, 60 passed the threshold and became raw THGEMs.
\par
A post-production treatment was then applied in Trieste: it consists of polishing the raw THGEMs with pumice powder, cleaning with high-pressure water and in an ultrasonic bath with a high-pH solution ($pH \sim 11$), rinsing with demineralized water and drying in an oven at 50 $^{\circ}$C for 24 hours \cite{Polishing}. A measurement of the discharge rate was performed for the treated THGEMs using an automated test setup: in an $Ar:CO_{2}$ $70:30$ gas mixture the bias voltage of the THGEM was increased in 10 V steps and the number of sparks (events with more than 50 nA of current) was counted for 30 minutes before the bias voltage was increased again. The THGEMs with the lowest discharge rates were chosen for characterization. An effective-gain uniformity study was performed using a dedicated test setup consisting of a MINI-X X-ray generator and an APV-25 based SRS DAQ\cite{APV25}\cite{SRS}. The best pieces were selected for the upgrade.
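The spark-counting criterion of the automated test (events with more than 50 nA of current) amounts to counting upward threshold crossings in the monitored current. The helper below is a hypothetical offline sketch of that criterion, not the actual test-bench software:

```python
def count_discharges(current_nA, threshold=50.0):
    """Count discharge events as upward crossings of the 50 nA current
    threshold used in the automated THGEM discharge-rate test."""
    count, above = 0, False
    for i in current_nA:
        if i > threshold and not above:
            count += 1
        above = i > threshold
    return count
```

A sustained excursion above threshold is counted once, so a single long discharge is not over-counted by consecutive samples.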
\par
The MicroMegas were produced by bulk technology at the CERN EP/DT/EF/MPT workshop on the pad-segmented multilayer PCBs produced by TvR SrL SpA in Schio, Vicenza, Italy. The $600\times600$ $mm^{2}$ PDs were built by mounting two $300\times600$ $mm^{2}$ modules side by side in the same aluminum frames, coupled to single wire frames holding drift and field wires. The gluing of the MicroMegas PCBs to the final frames was done with the help of a volumetric dispenser coupled to a CNC machine. Each PD was assembled in a clean room and its response was studied to validate the detector. The effects of variations of the environmental conditions (pressure and temperature) were studied and an automated high-voltage correction system was implemented to stabilize the PD gain response.
\par
A special box to transport validated THGEMs under controlled atmosphere was used before and after their Au-Ni coating. The deposition of the solid photo-converter for the hybrid photo-cathodes was performed at the CERN Thin Film Laboratory following the procedure described in Ref.~\cite{CsI}. The photo-cathodes (THGEMs with CsI coating on one side) were mounted inside a dedicated glove box. The old PDs with MAPMTs and MWPCs with CsI photo-cathodes were dismounted from the RICH-1 vessel, the MAPMTs with their individual fused-silica lenses were taken out from the old frames and mounted onto the frame of the new hybrid detector. The PDs were then installed on COMPASS RICH-1 and equipped with front-end electronics, low-voltage, high-voltage and cooling services.
\section{Results and Conclusion}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/Spectra_With_without_Beam}
\caption{A: amplitude spectrum from hybrid PD13 taken without beam (random trigger); B: amplitude spectrum from hybrid PD13 taken with beam (physics trigger).}
\label{fig:spectrawithwithoutbeam}
\end{figure}
\par
The new hybrid detectors were commissioned during the 2016 COMPASS data taking period from May to October. The average equivalent electronic noise is $\sim 900 e^{-}$ and a zero suppression procedure with a $3\sigma$ threshold cut is applied for the standard data taking. After ensuring accurate timing the amplitude spectra for noise and signals were obtained. With no beam and random trigger, the noise part of the amplitude spectrum is observed (Fig.\ref{fig:spectrawithwithoutbeam}-A). With physics triggers the amplitude spectrum shows the noise part, a prominent single photon exponential part and a tail due to charged particle signals (Fig.\ref{fig:spectrawithwithoutbeam}-B).
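As a rough illustration of the zero-suppression step, the sketch below applies a $3\sigma$ cut to pedestal-subtracted channel amplitudes; the pedestal handling and the sample values are assumptions, not the actual APV-25 processing chain:

```python
import numpy as np

def zero_suppress(samples, noise_sigma, pedestal=0.0, nsigma=3.0):
    """Keep only channels whose pedestal-subtracted amplitude exceeds
    nsigma times the equivalent electronic noise; all other channels
    are set to zero and dropped from the data stream."""
    amp = np.asarray(samples, dtype=float) - pedestal
    keep = amp > nsigma * noise_sigma
    return np.where(keep, amp, 0.0), keep

# with ~900 e- of equivalent noise, a 3-sigma cut keeps amplitudes above 2700 e-
suppressed, mask = zero_suppress([500.0, 3000.0, 2600.0, 100.0], 900.0)
```

Only the second (hypothetical) channel survives the cut in this example, since 2600 e$^-$ falls just below the 2700 e$^-$ threshold.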
\begin{figure}[!htb]
\centering
\includegraphics[scale=.65]{Figures/performance.png}
\caption{Voltage and current monitored for one of the hybrid bulk MicroMegas by the hybrid HV control system over a time range of one week. \cite{upgradeHybrid}}
\label{fig:performance}
\end{figure}
The performance of the PDs, in terms of voltage and gain stability, was studied. In Fig.\ref{fig:performance} the current and voltage values of a MicroMegas sector are plotted for a time range of one week. The typical discharge rate is a few events per day; no sizable voltage drop is observed when a discharge occurs, confirming the validity of the detector optimization. In Fig.\ref{fig:performance} a zoomed view of the voltage curve is shown: the slow continuous voltage modulation is due to the environmental pressure and temperature correction by the high-voltage control system.
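The correction function used by the automated high-voltage system is not specified in the text; a common first-order scheme for gaseous detectors scales the bias voltage with the gas density $P/T$ so that the effective gain stays constant. A hypothetical sketch of such a correction:

```python
def corrected_hv(v_ref, p_ref, t_ref, p_now, t_now):
    """First-order gas-gain stabilization: scale the bias voltage with
    the gas density P/T so that the effective gain stays constant.
    (The actual COMPASS correction function is not given in the text;
    this linear P/T scaling is an assumption.)"""
    return v_ref * (p_now / t_now) / (p_ref / t_ref)
```

With this convention a pressure increase (denser gas, lower gain at fixed voltage) raises the applied voltage, and a temperature increase lowers it, producing the slow modulation visible in the zoomed voltage curve.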
\par
The single-photon amplitude spectra collected by changing only the biasing voltage of the second
THGEM (1250, 1275 and 1300 V) are shown in Fig.~\ref{fig:spectra}-A. Similarly, Fig.~\ref{fig:spectra}-B shows the spectra collected by changing only the biasing voltage of the MicroMegas (588, 600, 612 and 624 V). The different slopes of the exponential distributions are in agreement with the values expected from the laboratory exercises\cite{Alexeev} and confirm the good detector response.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.40]{Figures/spectra_TM.png}
\caption{A. Amplitude distribution of the single photon signal collected for different biasing voltages of the second THGEM electrode: 1250, 1275,1300 V.
B. Amplitude distribution of the single photon signal collected for different biasing voltages of the MicroMegas electrode: 588, 600, 612, 624 V. \cite{upgradeHybrid}}
\label{fig:spectra}
\end{figure}
Cherenkov rings are observed in the new hybrid PDs: one example is presented in Fig.\ref{fig:rings}-A, where the Cherenkov photon hits are shown in blue and the red point corresponds to the extrapolated center of the expected ring, obtained by reconstructing a track (with momentum of 5.19 GeV/c in this case) and reflecting its trajectory on the mirror surface to the PDs.
\par
The four new PDs have been stably operated during 2017 COMPASS data taking periods with a higher average effective gain with respect to the MWPC based PDs. The hybrid MicroMegas + THGEMs photon-detection technology has proven to be successful and solid.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/Rings}
\caption{Cherenkov rings from RICH-1 during a typical event of the COMPASS 2017 data taking}
\label{fig:rings}
\end{figure}
\section{Acknowledgment}
\par
The activity is partially supported by the H2020 project AIDA2020 GA no. 654168. It is supported in part by CERN/FIS-PAR/0007/2017 through COMPETE, FEDER and FCT (Lisbon). One author (J.~Agarwala) is supported by the ICTP UNESCO TRIL fellowship program. The authors are members of the COMPASS Collaboration and part of them are members of the RD51 Collaboration; they are grateful to both Collaborations for the effective support and the precious encouragement.
\section*{\Large \bf Introduction}
In a recent paper [1] Westmoreland and Schumacher made an
attempt to show that ``ordinary quantum mechanics is not
consistent with the superluminal transmission of classical
information''. Their proof was constructed from three
elements: the no-cloning theorem, quantum teleportation,
and the relativity of simultaneity.
In a comment [2] we argued that the claim given in [1] was
untenable, since the formulation of the no-cloning theorem
did not allow quantum jumps.
Now Westmoreland and Schumacher have replaced [1] with a
revised version. Although the proof of the no-cloning
theorem has remained the same, the new version includes
an essential note: ``...the `no-cloning' theorem in fact
holds for the most general sort of quantum evolution
described by a completely positive map on density
operators... In particular, cloning is still impossible
even if we allow measurements and manipulations of the systems
based on the outcomes of measurements.''
But using the extended version of the no-cloning theorem does
not save the proof given in [1]. The fallacy is a
misinterpretation of the no-cloning and teleportation theorems,
which do not involve time and reference frames. Indeed,
signals---superluminal or not---imply no contradiction between
teleportation and the no-cloning theorem: the former is possible
and the latter is valid.
\section{The no-cloning theorem}
Let $T$ be a completely positive map on the set of the states,
i.e., statistical operators $\rho$ of a system which is of the
following form:
\begin{equation}
T\rho=\sum_{i}V_{i}\rho V^{\dagger}_{i},\qquad V_{i}=
U_{i}P_{i}\;\;\;{\rm or}\;\;\;P_{i}U_{i},
\label{1.1}
\end{equation}
where $U_{i}$ is a unitary operator, $P_{i}$ is a projector, and
\begin{equation}
\sum_{i}P_{i}=I.
\label{1.2}
\end{equation}
Let us consider a composite $CBA$ system, the Hilbert spaces of
the $B$ and $C$ systems being identified. We use the notations:
\begin{equation}
\rho=\rho_{CBA},\;\;\rho_{C}={\rm Tr}_{BA}\rho,\;\;
\rho_{BA}={\rm Tr}_{C}\rho,\;\;(T\rho)_{C}={\rm Tr}_{BA}(T\rho),
\;\;{\rm etc}.
\label{1.3}
\end{equation}
{\it The no-cloning theorem\/}:
\begin{equation}
(\forall(T,\rho_{BA}))(\exists\rho_{C}):(T\rho)_{B}\ne\rho_{C}
\;\;{\rm or}\;\;(T\rho)_{C}\ne\rho_{C},
\label{1.4}
\end{equation}
or, equivalently,
\begin{equation}
(\neg\exists(T,\rho_{BA}))(\forall\rho_{C}):(T\rho)_{B}=\rho_{C}
\;\;{\rm and}\;\;(T\rho)_{C}=\rho_{C}.
\label{1.5}
\end{equation}
In words: There do not exist $T$ and $\rho_{BA}$, such that for
every $\rho_{C}$
\begin{equation}
(T\rho)_{B}=(T\rho)_{C}=\rho_{C}
\label{1.6}
\end{equation}
holds.
\section{Teleportation}
The essence of teleportation may be formulated as
{\it The teleportation theorem\/}:
\begin{equation}
(\exists\;CBA\;{\rm system})(\exists(T,\rho_{BA}))
(\forall\rho_{C}):(T\rho)_{B}=\rho_{C}.
\label{2.1}
\end{equation}
In words: There exist a $CBA$ system, $T$, and $\rho_{BA}$, such
that for every $\rho_{C}$
\begin{equation}
(T\rho)_{B}=\rho_{C}
\label{2.2}
\end{equation}
holds.
{\it Corollary\/}: For every teleportation
\begin{equation}
(T\rho)_{C}\ne\rho_{C}
\label{2.3}
\end{equation}
holds (no cloning [3]).
\section{What is the proof}
Let $P_{\psi}$ be a projector corresponding to a vector $\psi$,
$\|\psi\|=1$. In [1]
\begin{equation}
\rho=\rho_{C}\otimes\rho_{BA},\quad \rho_{C}=P_{\phi_{C}},\quad
\rho_{BA}=P_{\Psi^{-}_{AB}},
\label{3.1}
\end{equation}
the teleportation map is
\begin{equation}
T\rho=\sum_{i=1}^{4}V_{i}\rho V_{i}^{\dagger},\quad
V_{i}=U_{Bi}\otimes P_{ACi},\quad
P_{ACi}=P_{\chi_{ACi}},
\label{3.2}
\end{equation}
where
\begin{equation}
\chi_{AC1}=\Psi^{+}_{AC},\quad \chi_{AC2}=\Psi^{-}_{AC},
\quad \chi_{AC3}=\Phi^{+}_{AC},\quad \chi_{AC4}=\Phi^{-}_{AC}.
\label{3.3}
\end{equation}
The proof is as follows. Due to superluminal signals, there
exists a frame of reference in which for any time $t$ such that
\begin{equation}
t_{{\rm II}}<t<t_{{\rm I}}
\label{3.4}
\end{equation}
the states of $B$ and $C$ systems are
\begin{equation}
\rho^{B}_{t}=(T\rho)_{B}=\rho_{C}
\label{3.5}
\end{equation}
and
\begin{equation}
\rho^{C}_{t}=\rho_{C}
\label{3.6}
\end{equation}
respectively, so that
\begin{equation}
\rho^{B}_{t}=\rho^{C}_{t}=\rho_{C}\qquad {\rm for}\quad t\in
(t_{{\rm II}},t_{{\rm I}}).
\label{3.7}
\end{equation}
Eq.(\ref{3.7}), the authors conclude, contradicts the no-cloning
theorem, which completes the proof.
\section{What is wrong}
The no-cloning and teleportation theorems do not involve
time and reference frames. Therefore the only relation
in the proof which is connected with the theorems is
\begin{equation}
(T\rho)_{B}=\rho_{C}.
\label{4.1}
\end{equation}
A contradiction would be the equality
\begin{equation}
(T\rho)_{C}=\rho_{C},
\label{4.2}
\end{equation}
which does not hold.
As for signals---superluminal or not---their only function
is to coordinate sets
\begin{equation}
\{P_{ACi}\}_{i=1}^{4}\quad {\rm and}\quad \{U_{Bi}\}_{i=1}^{4}.
\label{4.3}
\end{equation}
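As a consistency check of the above discussion, the teleportation map of Eqs.~(\ref{3.1})--(\ref{3.3}) can be evaluated numerically: one finds $(T\rho)_{B}=\rho_{C}$, Eq.~(\ref{4.1}), while $(T\rho)_{C}$ is maximally mixed, so that Eq.~(\ref{4.2}) indeed fails. The sketch below, with the conventional correction unitaries for the $\Psi^{-}_{AB}$ resource, is an illustration added here and not part of the original argument:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bell(name):
    vecs = {'phi+': [1, 0, 0, 1], 'phi-': [1, 0, 0, -1],
            'psi+': [0, 1, 1, 0], 'psi-': [0, 1, -1, 0]}
    return np.array(vecs[name], dtype=complex) / np.sqrt(2)

# qubit ordering (C, A, B): C carries the unknown state, (A, B) the resource
psi_minus = bell('psi-')
rho_AB = np.outer(psi_minus, psi_minus.conj())
phi = np.array([0.6, 0.8j])            # arbitrary normalized input state
rho_C = np.outer(phi, phi.conj())
rho = np.kron(rho_C, rho_AB)

# Kraus operators V_i = P_{ACi} (x) U_{Bi}; Bell projectors are invariant
# under qubit swap, so P_{AC} and P_{CA} coincide as operators
corrections = {'psi-': I2, 'psi+': Z, 'phi-': X, 'phi+': Z @ X}
T_rho = np.zeros((8, 8), dtype=complex)
for name, U in corrections.items():
    b = bell(name)
    V = np.kron(np.outer(b, b.conj()), U)
    T_rho += V @ rho @ V.conj().T

# (T rho)_B = rho_C: trace out the first two qubits (C and A)
T_rho_B = T_rho.reshape(4, 2, 4, 2).trace(axis1=0, axis2=2)
assert np.allclose(T_rho_B, rho_C)

# (T rho)_C != rho_C: the input register ends up maximally mixed
T_rho_C = T_rho.reshape(2, 4, 2, 4).trace(axis1=1, axis2=3)
assert np.allclose(T_rho_C, I2 / 2)
```

The input $\rho_{C}$ is arbitrary: changing `phi` leaves both assertions unchanged, in line with the universal quantifier in Eq.~(\ref{2.1}).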
\section*{Acknowledgment}
I would like to thank Stefan V. Mashkevich for helpful
discussions.
\section{Introduction}
{\it Introduction.}
Topological order (TO) has been rationalized in the last few decades~\cite{Wen1990,Wen2013} as a new type of order in two dimensions (2D),
beyond the well-known
Ginzburg-Landau paradigm. Importantly, it is at the heart of the rapidly expanding field of quantum computing~\cite{Kitaev2003}.
The fractional quantum Hall (FQH) state of the 2D electron gas~\cite{Stormer1983} is the first topologically ordered state discovered. The simple Laughlin wave function provides a beautiful qualitative understanding of the physics of the Abelian FQH state at filling fraction $\nu=1/m$ as an incompressible fluid~\cite{Laughlin1983},
while more involved wave functions can also describe non-Abelian FQH states~\cite{Moore1991,Read1999,Repellin2015}.
It revealed
the emergence of fractional excitations, the anyons, a key feature of TO~\cite{Wen1990}. Anyons carry fractional charge~\cite{Laughlin1983}
as well as Abelian~\cite{Halperin1984} or non-Abelian statistics~\cite{Wen1991b,Moore1991}.
An important feature of FQH states is the existence of a bulk gap and chiral modes providing
unidirectional transport on the edge~\cite{Wen1991a,Wen1992}.
More precisely, their edge physics
can be described by chiral $SU(2)_k$ Wess-Zumino-Witten
(WZW) Conformal Field Theory (CFT)~\cite{WZWreview1988}.
Recently, a matrix product state (MPS) representation of the FQH states~\cite{Estienne2013a,Estienne2015} has made it possible to
probe their physical properties with unprecedented numerical accuracy.
In a pioneering work~\cite{Kalmeyer1987}, Kalmeyer and Laughlin (KL) have extended the notion of FQH state to the lattice.
When localized on the lattice, the bosonic $\nu=1/2$ Laughlin state gives rise to a spin-$1/2$ chiral spin liquid
(CSL)~\cite{WWZ1989},
closely related to the resonating valence bond (RVB) state of high-Tc superconductivity~\cite{Anderson1973}.
Recently, fractional Chern insulators~\cite{Levin2009,Repellin2014,Maciejko2015} have set up a new route to realize FQH physics on the lattice.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{Figures/PhaseDiag.pdf}
\caption{[Color online] Schematic phase diagram of the chiral AFHM drawn from Ref.~\protect\cite{Nielsen2013} as a function of magnetic frustration $J_2/J_1$ and (relative) amplitude of the chiral interaction $\lambda_c/J_1$.
The KL nature and the boundary of the chiral spin liquid phase (blue region) were only accessed from small-cluster
calculations. The locations in parameter space of the two models studied here are shown by (blue) dots.}
\label{FIG:PhaseDiag}
\end{center}
\end{figure}
Whether simple local lattice Hamiltonians can host chiral spin liquid ground states~\cite{WWZ1989} is one of the key issues
that determine whether or not such topological phases could be realized experimentally.
The original innovative proposal by KL that the GS of the frustrated triangular spin-$1/2$ antiferromagnetic Heisenberg model (AFHM)
is a CSL turned out not to be correct, the GS of this model being
magnetically ordered. However, Bauer et al.~\cite{Bauer2014} showed recently that, on the kagome lattice (2D lattice of corner-sharing triangles), the GS of the Hamiltonian $H=\sum_{\triangle(ijk)} {\bf S}_i \cdot ({\bf S_j}\times {\bf S_k})$, sum of the chiral spin interaction
over all triangles $\triangle(ijk)$, has the universal properties of the $\nu=1/2$ Laughlin state.
This CSL was shown to be exceptionally robust under the addition of an extra nearest-neighbor Heisenberg-like interaction (defining a generic ``chiral AFHM"), even of large magnitude.
\begin{table}[htbp]
\caption{Numbers of independent $SU(2)$-symmetric tensors for the four different
virtual spaces we consider, $D\le 5$. The third (fourth) column gives the number of $A_1$ ($A_2$) tensors and the last column the total number of tensors in the $A$ ansatz. Note that all four types of ans\"atze exhibit a gauge-$\mathbb Z_2$ symmetry associated to the conserved parity of the number of spin-$\frac{1}{2}$ on the $z=4$ bonds. }
\begin{center}
\begin{tabular}{@{} ccccc@{}}
\hline
\hline
${\cal V}$ & $D$ &$A_R^{(A_1)}$ & $A_I^{(A_2)}$ & Total \# \\
\hline
$\frac{1}{2}\oplus 0$&3& 2& 1& 3 \\
$\frac{1}{2}\oplus 0\oplus 0$&4& 8& 4& 12 \\
$\frac{1}{2}\oplus \frac{1}{2}\oplus 0$&5& 10& 8& 18 \\
$\frac{1}{2}\oplus 0\oplus 0\oplus 0$&5& 21& 12& 33 \\
\hline
\hline
\end{tabular}
\end{center}
\label{TABLE:numbers}
\end{table}%
Another alternative approach has been pursued, trying to construct ``parent Hamiltonians" for the Abelian~\cite{Schroeter2007,Thomale2009}
and non-Abelian~\cite{Greiter2014,Glasser2015} CSL.
Using
a re-writing of the wave function as a correlator of a $1+1$ chiral CFT~\cite{Nielsen2011,Nielsen2012},
the simplest \hbox{spin-$\frac{1}{2}$} parent Hamiltonian on the square lattice obtained by Nielsen et al.~\cite{Nielsen2013} consists of interactions between all pairs and triples of spins in the system.
Since long-range interactions might be hard to achieve experimentally in e.g. cold atom systems~\cite{Nielsen2014}, the authors argue that
a similar (Abelian) CSL phase is also hosted in a simplified {\it local} Hamiltonian where all the long-range parts of the interaction
have been set to zero~\cite{Nielsen2013}. We shall adopt here their local chiral AFHM which, introducing a slightly different parametrization, reads:
\begin{eqnarray}
H&=&J_1\sum_{\big< i,j\big>} {\bf S}_i\cdot{\bf S}_j
+J_2\sum_{\big<\big<k,l\big>\big>} {\bf S}_k\cdot{\bf S}_l\nonumber \\
&+& \lambda_c \sum_{\square(ijkl)}i(P_{ijkl}-P_{ijkl}^{\,\,\,\,\,\,\,-1}) \, ,
\label{EQ:model}
\end{eqnarray}
where the first (second) sum is taken over nearest-neighbor (next-nearest-neighbor) bonds and
the last sum over all plaquettes of the square lattice. $P_{ijkl}$ performs a cyclic permutation
of the four spins of every plaquette in e.g. the clockwise direction. $H$ breaks time reversal symmetry but preserves the global spin
$SU(2)$ symmetry.
It is the analog for the square lattice of the chiral AFHM on the kagome lattice studied by Bauer et al.~\cite{Bauer2014}~:
The chiral interaction ${\bf S}_i \cdot ({\bf S_j}\times {\bf S_k})$ on the triangle is replaced here by its generalization on the plaquette
and magnetic frustration is introduced via competing $J_1$ and $J_2$ antiferromagnetic couplings.
A schematic phase diagram showing the (approximate) extension of the KL chiral spin liquid
is provided for convenience in Fig.~\ref{FIG:PhaseDiag}. We shall here
focus on the two special points studied by Nielsen et al.~\cite{Nielsen2013} and located in Fig.~\ref{FIG:PhaseDiag}, supposedly in the CSL phase; $J_1=2$, $J_2=0$, $\lambda_c=1$
and $J_1=2\cos{(0.06\pi)}\cos{(0.14\pi)}\simeq 1.78$, $J_2=2\cos{(0.06\pi)}\sin{(0.14\pi)}\simeq 0.84$,
$\lambda_c=2\sin{(0.06\pi)}\simeq 0.375$. Hereafter, we refer to the latter as the ``$J_1-\lambda_c$ model'' and the ``$J_1-J_2-\lambda_c$ model'', respectively.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{Figures/Ener.pdf}
\caption{
[Color online] Scaling of the iPEPS variational energies (per site) vs $D^2/\chi$
for the two local chiral Hamiltonians studied here;
(a) $J_1-\lambda_c$ model; (b) $J_1-J_2-\lambda_c$ model.
The filled (open) symbols correspond to fully optimized (fixed) tensors as
explained in the text.
A comparison
with the exact energy (per site) of a
$5\times 6$ torus~\cite{Nielsen2013} is shown. In (b) the variational energy
of the Kalmeyer-Laughlin (KL) spin liquid obtained by Monte Carlo~\cite{Nielsen2013} is also shown.
}
\label{FIG:Ener}
\end{center}
\end{figure}
Our strategy to explore the physics of the above model is to use the
tensor network framework~\cite{Cirac2009b,Cirac2012a,Orus2013,Schuch2013b,Orus2014}.
One of the motivations is to test whether some fundamental obstruction is at play that prevents describing
a gapped CSL phase with 2D tensor networks~\cite{Dubail2015}. Previous attempts using projected entangled pair states (PEPS) led
to the discovery of {\it critical} CSL exhibiting chiral edge modes~\cite{Shuo2015,Poilblanc2015,Poilblanc2016}.
PEPS are ans\"atze
that approximate GS wave functions in terms of a unique site tensor $A_{\alpha\beta\gamma\delta}^{s}$,
where the greek indices label the states of the $D$-dimensional {\it virtual} spaces $\cal V$ attached to each site in the
$z=4$ directions of the lattice,
and $s=\pm\frac{1}{2}$ is the $S_z$ component of the physical spin. The site tensors are then
entangled together (i.e. contracted w.r.t. their virtual indices) to form a 2D tensor network. A priori, all the
$2D^4$ coefficients of the site tensor can serve as parameters to optimize the variational GS energy.
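As a concrete illustration of this tensor structure (random coefficients, no $SU(2)$ or point-group symmetry imposed), the sketch below builds a site tensor with the $2D^4$ complex coefficients mentioned above, together with the double-layer tensor obtained by contracting the physical index with the conjugate tensor, the basic building block of CTM contractions of $\langle\psi|\psi\rangle$:

```python
import numpy as np

D = 3  # virtual bond dimension; legs ordered (s, up, left, down, right)
rng = np.random.default_rng(0)
A = rng.normal(size=(2, D, D, D, D)) + 1j * rng.normal(size=(2, D, D, D, D))
assert A.size == 2 * D**4   # the 2 D^4 complex coefficients of the site tensor

# double-layer tensor: contract the physical index of A with its conjugate;
# each pair of virtual legs (ket, bra) is fused into one leg of dimension D^2
E = np.einsum('suldr,sULDR->uUlLdDrR', A, A.conj())
E = E.reshape(D * D, D * D, D * D, D * D)
assert E.shape == (D * D,) * 4
```

By construction $E$ is self-adjoint under exchanging ket and bra legs, which is what makes the CTM environment Hermitian.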
However, the CSL bears a number of symmetry properties that greatly constrain the PEPS ansatz.
Recently, a classification of fully $SU(2)$-symmetric (singlet) PEPS was proposed~\cite{Mambrini2016}
in terms of the irreducible representations (IRREP)
of the lattice point group ($C_{4v}$ in the case of the 2D square lattice). Since the CSL should be
invariant under the combination of a reflection $\cal R$ w.r.t. to any crystalline direction ($x$, $y$, $x\pm y$) and time reversal
symmetry (i.e. complex conjugation), the simplest adequate PEPS site tensors
have the form $A=A_R^{(A_1)} + i A_I^{(A_2)}$,
where the two real tensors $A_R^{(A_1)}$ and $A_I^{(A_2)}$ transform according to
the $A_1$ (symmetric w.r.t. $\cal R$) and $A_2$
(antisymmetric w.r.t. $\cal R$) IRREP~\cite{Poilblanc2015,Poilblanc2016}.
These tensors have been tabulated in Ref.~\onlinecite{Mambrini2016} for $D\le 6$,
and their numbers for all virtual spaces $\cal V$ considered
in this work are listed in Table~\ref{TABLE:numbers}.
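The $A_1$/$A_2$ projection can be illustrated by letting $C_{4v}$ act by permutations of the four virtual legs only, ignoring any action on the internal multiplet structure (which the full $SU(2)$-symmetric classification of Ref.~\onlinecite{Mambrini2016} does take into account); the sketch below uses the standard one-dimensional-irrep character projectors:

```python
import numpy as np

D = 3  # virtual bond dimension; tensor legs ordered (s, up, left, down, right)

def rot90(T):    # 90-degree rotation: cyclic permutation of the virtual legs
    return np.transpose(T, (0, 2, 3, 4, 1))

def reflect(T):  # reflection swapping the left and right legs
    return np.transpose(T, (0, 1, 4, 3, 2))

def c4v_orbit(T):
    # the 8 elements of C4v: 4 rotations followed by 4 reflections
    rots = [T]
    for _ in range(3):
        rots.append(rot90(rots[-1]))
    return rots + [reflect(R) for R in rots]

def project(T, irrep):
    # character projector for the 1D IRREPs: A1 is +1 on all elements,
    # A2 is +1 on rotations and -1 on reflections
    chi = [1] * 8 if irrep == 'A1' else [1] * 4 + [-1] * 4
    return sum(c * g for c, g in zip(chi, c4v_orbit(T))) / 8.0

rng = np.random.default_rng(0)
T = rng.normal(size=(2, D, D, D, D))
A1, A2 = project(T, 'A1'), project(T, 'A2')

# A1 is fully symmetric; A2 changes sign under reflections only
assert np.allclose(rot90(A1), A1) and np.allclose(reflect(A1), A1)
assert np.allclose(rot90(A2), A2) and np.allclose(reflect(A2), -A2)
```

A complex combination $A_R^{(A_1)} + i A_I^{(A_2)}$ of two such projected real tensors then transforms into its complex conjugate under any reflection, which is the invariance under reflection combined with time reversal used in the text.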
Following a previous study of the non-chiral frustrated AFHM~\cite{Poilblanc2017}, we consider a general superposition of all tensors of each class, the weights in the sum being considered as
variational parameters. As in the non-chiral case, the energy or observables can be computed directly in the thermodynamic limit using infinite-PEPS (iPEPS)
corner transfer matrix (CTM) renormalization group (RG) techniques~\cite{Nishino1996,Nishino2001,Orus2009,Orus2012}, taking advantage of simplifications
introduced by the use of point-group symmetric tensors~\cite{Poilblanc2017}.
At each RG step a truncation of the (hermitian) CTM is done
keeping (at most) $\chi$ eigenvalues and preserving exactly the $SU(2)$ multiplet structure.
Energy optimization~\cite{Corboz2016,Vanderstraeten2016,Liu2017}
is performed using a conjugate gradient (CG)
method~\cite{Numerical2007} up to a maximum
$\chi=\chi_{\rm opt}$ and then, eventually, one takes the limit $\chi\rightarrow\infty$ (using a ``rigid" ansatz) by extrapolating the data~\cite{Poilblanc2017}.
We now turn to the results. In Fig.~\ref{FIG:Ener} we show the scaling of the iPEPS energies vs $D^2/\chi$
for the two local chiral Hamiltonians studied here, and different choices of the virtual space
$\cal V$ up to $D\le 5$. Using linear fits, one obtains accurate variational energies
in the $\chi\rightarrow\infty$ limit, apart from $D=5$ for which the
CTM RG converges to unphysical (pairs of) solutions beyond $\chi=2D^2$.
The exact GS energies obtained on a small periodic
30-site cluster~\cite{Nielsen2013} (expected to give a lower bound of the true thermodynamic values) provide a first reference,
showing that the iPEPS energies are remarkably accurate. For the second model in Fig.~\ref{FIG:Ener}(b),
we have compared our results to the variational energy of the KL ansatz computed with Monte Carlo~\cite{Nielsen2013}.
We find that, even for the smallest bond dimensions $D=3$ (${\cal V}=\frac{1}{2}\oplus 0$) and $D=4$ (${\cal V}=\frac{1}{2}\oplus 0\oplus 0$),
the iPEPS energy is lower than the energy of the KL CSL.
This provides solid arguments that these chiral $SU(2)$-invariant PEPS are very good variational states.
Hereafter we investigate further their edge and bulk properties and point out similarities and differences with the KL wave function.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{Figures/ES.pdf}
\caption{[Color online] Chiral entanglement spectra of the $D=3$ PEPS optimized for the $J_1-J_2-\lambda_c$ model (subtracting the GS energy for convenience) for $N_v=8$. The
edge momentum $K$ is defined mod-$\pi$ since the $SU(2)$ generators are invariant
only under sublattice translations.
Even (a) and odd (b) $\mathbb Z_2$ sectors are shown. The correct $SU(2)_1$
counting obtained for each quasi-degenerate group of levels at low energy (outlined by boxes when necessary) is indicated in red.}
\label{FIG:ES}
\end{center}
\end{figure}
{\it Chiral edge modes.} First, we have computed the entanglement spectrum (ES) of the optimized $D=3$ PEPS on an infinitely-long cylinder $\cal C$,
bi-partitioned into two semi-infinite half-cylinders ${\cal C}_{\rm L}$ and ${\cal C}_{\rm R}$,
${\cal C}={\cal C}_{\rm L}\cup {\cal C}_{\rm R}$.
This can be done exactly~\cite{Cirac2011} on cylinders with up to $N_v=8$ sites of circumference. Li and Haldane~\cite{Li2008} have conjectured
that, in chiral topological states, there is a deep one-to-one correspondence between the true physical edge spectrum and the ES~\cite{Dubail2012a,Dubail2012b}.
The ES is obtained
from the leading eigenvector of the finite-dimensional $D^{2N_v}\times D^{2N_v}$ transfer matrix of the cylinder, as
originally proposed
in Ref.~\onlinecite{Cirac2011}, and already applied to chiral spin liquids~\cite{Poilblanc2015,Poilblanc2016}.
The ES shown in Fig.~\ref{FIG:ES} as a function of the momentum $K$ along the cut clearly reveal the existence of well-defined chiral branches linearly dispersing as $E_K\sim v K$. One also sees quasi-degenerate groups of levels whose counting (in terms of $SU(2)$ multiplets)
matches exactly the one of the $SU(2)_1$ WZW CFT~\cite{WZWreview1988}, as expected in a KL CSL phase~\cite{Herwerth2015}. Note that the ES of the optimized
PEPS is remarkably similar to the one obtained
for another studied chiral PEPS~\cite{Poilblanc2015,Poilblanc2016}, certainly belonging to the same $D=3$ chiral PEPS family, but far away in parameter space.
Although the same exact calculation cannot be performed for $N_v=8$ beyond $D=3$, we conjecture that the
$SU(2)_1$ chiral edge modes are genuine features of our chiral PEPS optimized for Hamiltonian (\ref{EQ:model}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{Figures/SS_v2.pdf}
\caption{[Color online] (a) Absolute value of the spin-spin correlations vs distance (along some crystal axis direction $x$ or $y$)
for the $D=3$ and $D=4$ (optimized) chiral PEPS and different environment dimension $\chi$
(as shown in legends) on a semi-log plot. The dashed lines are fits according to exponential behaviors
of the short and long distance correlations.
(b) Largest correlation length $\xi_{\rm max}$ (obtained from the linear fits in (a)) vs $\chi/D^2$, for both model parameter sets.
(c) ${\rm w} (\xi_{\rm max})$ versus $\xi_{\rm max}$ using the same symbols as in (b).}
\label{FIG:SS}
\end{center}
\end{figure}
{\it Bulk properties.} The KL CSL is expected to have short-range (spin-spin) correlations~\cite{Kalmeyer1987} as the bosonic $\nu=1/2$ FQH state it derives from. We investigate now
the correlation functions of the PEPS ans\"atze, and establish important differences. We use the same definitions and
CTM RG procedure as described in the study of the frustrated AFHM and focus on the two cases $D=3$ (${\cal V}=\frac{1}{2}\oplus 0$) and $D=4$ (${\cal V}=\frac{1}{2}\oplus 0\oplus 0$).
Fig.~\ref{FIG:SS}(a) shows the spin-spin correlations vs distance on a semi-log plot.
At short distance, we observe a rapid exponential fall-off characteristic of the KL CSL. However our data clearly show additional exponential tails
with much larger characteristic length but with much smaller weight. In other words, we can parametrize the correlation function vs distance as
\begin{equation}
C_S(d)=\sum_{\xi_{\rm min}\le\xi\le\xi_{\rm max}} {\rm w}(\xi)\exp{(-d/\xi)}\, ,
\end{equation}
where the short distance decay is characterized by ${\rm w}(\xi_{\rm min})\simeq 1$ while, at long distance,
the slower decay $\exp{(-d/\xi_{\rm max})}$
takes over with $\xi_{\rm max}\gg \xi_{\rm min}$ and ${\rm w}(\xi_{\rm max})\ll 1$. In fact, we think that $\xi_{\rm max}\rightarrow\infty$
when $\chi\rightarrow \infty$ (see Fig.~\ref{FIG:SS}(b)) while, simultaneously, ${\rm w}(\xi_{\rm max})$ goes very rapidly to zero.
If, as suggested in Fig.~\ref{FIG:SS}(c), ${\rm w}(\xi)\propto \exp(-\xi/\lambda)$, where
$\lambda\simeq 0.7\sim \xi_{\rm min}$, $C_S(d)$ will show a typical stretched exponential form at long distance,
$C_S(d)\sim (d/\lambda)^\frac{1}{4}\exp{\{-(d/\lambda)^\frac{1}{2}\}}$.
In any case, $C_S(d)$ should exhibit a ``gossamer tail'' which decays slower than any single exponential function.
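The claim that exponentially suppressed weights ${\rm w}(\xi)\propto\exp(-\xi/\lambda)$ produce a tail decaying slower than any single exponential can be checked numerically on the continuum version of $C_S(d)$; the sketch below (illustrative parameters) verifies that the local decay rate $-d\log C_S/dd$ shrinks with distance, as expected for a stretched exponential:

```python
import numpy as np

lam = 0.7  # decay scale of the weights w(xi), of order xi_min

def C(d, xi_max=120.0, n=60001):
    # continuum version of C_S(d): integral over xi of exp(-xi/lam - d/xi),
    # evaluated by trapezoidal quadrature on a fine grid
    xi = np.linspace(1e-3, xi_max, n)
    f = np.exp(-xi / lam - d / xi)
    return float(((f[:-1] + f[1:]) * np.diff(xi)).sum() / 2.0)

ds = np.array([5.0, 10.0, 20.0, 40.0])
logC = np.log([C(d) for d in ds])

# the local decay rate decreases with d: slower than any single exponential
# (saddle point: C(d) ~ d^{1/4} exp(-2 sqrt(d/lam)), a stretched exponential)
rates = -np.diff(logC) / np.diff(ds)
assert np.all(np.diff(rates) < 0)
```

The saddle point of the integrand sits at $\xi^{*}=\sqrt{\lambda d}$, which reproduces the $d^{1/4}$ prefactor and the $\sqrt{d}$ exponent quoted above.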
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth,angle=0]{Figures/DD_v2.pdf}
\caption{[Color online] (a) Absolute value of the dimer-dimer correlations vs distance $d$ (along some crystal axis direction)
for the $D=3$ and $D=4$ (optimized) chiral PEPS and different environment dimension $\chi$
(as shown in legends) on semi-log and log-log (inset) plots. The dashed (red) curve is a power-law $d^{-\alpha}$ fit.
(b) Correlation lengths obtained from the fits of the long distance correlations, shown vs $\chi/D^2$, for both model parameter sets. }
\label{FIG:DD}
\end{center}
\end{figure}
The dimer-dimer correlations are shown in Fig.~\ref{FIG:DD}(a). The asymptotic long-distance behaviors can always
be fitted as exponential decays.
The correlation lengths extracted from the fits are found to diverge linearly with $\chi$, for both models studied, as shown
in Fig.~\ref{FIG:DD}(b). At short distance, the data are better fitted as a power law $d^{-\alpha}$, although
with a large exponent $\alpha\simeq 4.5$,
rather than as an exponential. Thus, the power-law behavior takes over at all distances when $\chi\rightarrow\infty$. This suggests a
form of emerging $U(1)$-gauge symmetry typical of dimer liquids~\cite{Rokhsar1988} or RVB states~\cite{Albuquerque2010,Tang2011,Poilblanc2012,Schuch2012} on bipartite lattices.
{\it Summary and outlook.} Using a previous symmetry classification of $SU(2)$-invariant PEPS we have constructed simple families of
chiral PEPS on the square lattice.
Using iPEPS supplemented by a CG algorithm, we have optimized these PEPS w.r.t. the local chiral (frustrated) AFHM, believed to host
a CSL phase of the same class as the $\nu=1/2$ bosonic FQH liquid. The energy optimizations reveal very competitive ans\"atze
(better than the KL ansatz) even for small bond dimensions $D=3$ or $D=4$. As expected in such a CSL phase, we find clear evidence of
$SU(2)_1$ chiral edge modes. However, the bulk properties turned out to have fundamental differences compared to a gapped FQH liquid~:
although the spin-spin and dimer-dimer correlations seem qualitatively different, both reveal long-range behavior.
Although detailed data have been provided for two particular points in parameter space, a similar behavior
has also been found between those two points.
We conjecture that these may well be realistic features of the GS of (\ref{EQ:model}), which would host in fact a {\it critical} CSL.
Certainly, this does not contradict the results of Ref.~\onlinecite{Nielsen2013} showing that, on small clusters, the KL state is an extremely
good ansatz for (\ref{EQ:model}).
Indeed, the short-range properties of our critical chiral PEPS are also likely to be extremely close to those of the KL state so that
only the long-distance properties can distinguish them. Interestingly, it was proved
that any strictly short-range quadratic parent Hamiltonian for chiral {\it free} fermions is gapless~\cite{Dubail2015}. It may well be that this extends to interacting local Hamiltonians, in agreement with our findings.
This would also agree with the fact that the CFT wave function derived
using the null vectors of $SU(2)_1$~\cite{Nielsen2012,Nielsen2013},
i.e. the KL state, has a parent $H$ that is long range.
\begin{acknowledgements}
This project is supported by the TNSTRONG
ANR grant (French Research Council). This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2017-P1231. I acknowledge inspiring conversations with Fabien Alet, Sylvain Capponi, Ignacio Cirac, Matthieu Mambrini, Anne Nielsen, Pierre Pujol, German Sierra and Norbert Schuch. I am also grateful to Anne Nielsen for providing the variational energy of the KL state to compare to.
\end{acknowledgements}
\section{Conclusions}
\label{sec:conclusions}
Inspired by $k$-core analysis and density-based graph mining,
we propose density-friendly graph decomposition,
a new tool for analyzing graphs.
Like $k$-core decomposition,
our approach decomposes a given graph into a nested sequence of
subgraphs.
These subgraphs have the property that the inner subgraphs are always denser than the outer ones;
additionally, the innermost subgraph is the densest one---properties that the $k$-cores do not satisfy.
We provide two efficient algorithms to discover such a decomposition.
The first algorithm is based on minimum cut and it extends the exact
algorithm of Goldberg for the densest-subgraph problem.
The second algorithm extends a linear-time algorithm by Charikar for
approximating the same problem.
The second algorithm runs in linear time, and thus, in addition to
finding subgraphs that better respect the density structure of the
graph, it is as efficient as the $k$-core decomposition algorithm.
In addition to offering a new alternative for decomposing a graph into
dense subgraphs,
we significantly extend the analysis, the understanding, and the
applicability of previous well-known graph algorithms:
Goldberg's exact algorithm and
Charikar's approximation algorithm for
finding the densest subgraph,
as well as the $k$-core decomposition algorithm itself.
\section{Locally-dense subgraphs and \\ core decomposition}
\label{sec:core}
Here we study the connection between graph cores,
obtained with the well-known $k$-core decomposition algorithm,
and the local density studied in this paper.
We are able to show that, from a theoretical point of view,
graph cores are as good an approximation to the optimal locally-dense
graph decomposition as the subgraphs obtained by the \textsc{GreedyLD}\xspace algorithm.
In particular we show a similar result to
Proposition~\ref{prop:peel-approximation},
namely, a factor-$2$ approximation on the profile function of the core
decomposition.
However, as we will see in our empirical evaluation,
the behavior of the two algorithms,
\textsc{GreedyLD}\xspace and $k$-core decomposition, differs in practice,
with \textsc{GreedyLD}\xspace in general giving denser subgraphs, closer to the
ones given by the exact locally-dense decomposition.
Before stating and proving the result regarding $k$-cores,
recall that a set of vertices $X\subseteq V$ is a $k$-core if
every vertex in the subgraph induced by $X$ has degree at least $k$,
and $X$ is maximal with respect to this property.
A linear-time algorithm for obtaining all $k$-cores is illustrated in
Algorithm~\ref{algorithm:k-core}.
\begin{algorithm}[t]
\caption{\label{algorithm:k-core}$\textsc{Core}\xspace(G)$}
\Input{Graph $G = (V, E)$}
\Output{Collection $\mathcal{C}$ of $k$-cores}
$\col{C} \leftarrow \set{V}$\;
$k \leftarrow \min_w \dg{w}$\;
\For {$i = \abs{V}, \ldots, 1$} {
$w_i \leftarrow$ the vertex with the smallest degree\;
\If {$\dg{w_i} > k$} {
add $V$ to $\col{C}$\;
$k \leftarrow \dg{w_i}$\;
}
delete $w_i$ from $V$\;
}
\Return $\mathcal{C}$\;
\end{algorithm}
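For concreteness, Algorithm~\ref{algorithm:k-core} can be sketched in Python as follows (an illustration with names of our own choosing; a lazy-deletion heap stands in for the ``vertex with the smallest degree'' step):

```python
import heapq

def k_core_decomposition(adj):
    """adj: dict vertex -> set of neighbours (undirected, no self-loops).
    Returns the nested chain of k-cores, from the whole graph inward,
    as (k, frozenset_of_vertices) pairs."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    cores = [(min(deg.values()), frozenset(alive))]
    k = cores[0][0]
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    while alive:
        d, v = heapq.heappop(heap)
        if v not in alive or d != deg[v]:
            continue                      # stale heap entry, skip it
        if deg[v] > k:                    # minimum degree rose: new, denser core
            k = deg[v]
            cores.append((k, frozenset(alive)))
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return cores
```

On a triangle with one pendant vertex this returns the $1$-core (the whole graph) followed by the $2$-core (the triangle), a properly nested chain.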
It is a well-known fact that the set of all $k$-cores of a graph forms
a nested chain of subgraphs, in the same way that locally-dense
subgraphs do.
\begin{proposition}
\label{prop:core-approximation}
Let $\set{C_i}$ be the set of all $k$-cores of a graph $G=(V,E)$.
Then $\set{C_i}$ forms a nested chain,
\[
\emptyset = C_0\subsetneq C_1 \subsetneq \cdots \subsetneq C_l = V.
\]
\end{proposition}
Similar to
Proposition~\ref{prop:peel-approximation},
$k$-cores provide a factor-$2$ approximation
with respect to the optimal locally-dense subgraphs.
The proof is in fact quite similar to that of
Proposition~\ref{prop:peel-approximation}.
\begin{proposition}
Let $\col{B} = \set{B_i}$ be the set of optimal locally-dense subgraphs.
Let $\col{C} = \set{C_i}$ be the set of $k$-cores.
Then
\[
\prof{i;\,\col{C}} \geq \prof{i;\,\col{B}} / 2.
\]
\end{proposition}
\begin{proof}
Sort $V$ according to the reverse visiting order of \textsc{Core}\xspace
and let $\din{v}$ be the number of edges of $v$
from earlier neighbors.
Fix $k$ to be an integer, $1 \leq k \leq n$ and let $B_i$ be the smallest subgraph
such that $\abs{B_i} \geq k$. Let $v_j$ be the last vertex occurring in
$B_i$. We must have $\din{v_j} \geq \density{v_j, B_i}$, and $\density{v_j,
B_i} \geq \density{B_i , B_{i - 1}}$ as otherwise we can
remove $v_j$ from $B_i$ and improve the density due to Lemma~\ref{lem:delete}. We have
\[
\prof{k;\,\mathcal{B}} = \density{B_i, B_{i - 1}} \leq \din{v_j}.
\]
Let $C_x$ be the smallest core such that $\abs{C_x} \geq k$.
Let $v_s$ be the vertex with the smallest index that is still in $C_x \setminus C_{x - 1}$.
Let $v_l$ be the vertex with the largest index that is still in $C_x \setminus C_{x - 1}$.
If $j > l$, then $\din{v_j} < \din{v_l}$, otherwise $C_x$ is not a core.
If $j < l$, then $\din{v_j} \leq \din{v_l}$, otherwise $C_x$ would not be the smallest core as $j \geq k$.
Hence, $\din{v_j} \leq \din{v_l}$.
Let $g_y$ be the degree of $v_y$ right before $v_l$ is removed during \textsc{Core}\xspace.
We now have
\begin{eqnarray*}
\prof{k;\,\mathcal{C}} & = & \density{C_x, C_{x - 1}} \\
& = & \frac{1}{l - s + 1} \sum_{y = s}^l \din{v_y} \\
& \geq & \frac{1}{2(l - s + 1)} \sum_{y = s}^l g_y \\
& \geq & \frac{\din{v_l}}{2} \\
& \geq & \frac{\din{v_j}}{2},
\end{eqnarray*}
which proves the proposition.
\end{proof}
\section{Decomposition algorithms}
\label{sec:discovery}
In this section we propose two algorithms for the problem of
locally-dense graph decomposition (Problem~\ref{problem:LDGD}).
The first algorithm gives an {\em exact solution}, and runs in
worst-case time ${\cal O}(|V|^2|E|)$,
but it is significantly faster in practice.
The second algorithm is a linear-time algorithm that provides a
factor-$2$ approximation guarantee.
Both algorithms are inspired by corresponding algorithms for the
densest-subgraph problem.
The first is inspired by the exact algorithm of
Goldberg~\cite{Goldberg:1984up},
and the second by the greedy linear-time algorithm of
Charikar~\cite{Charikar:2000tg}.
\subsection{Exact algorithm}
We start our discussion on the exact algorithm for locally-dense graph
decomposition by reviewing Goldberg's algorithm~\cite{Goldberg:1984up}
for the densest-subgraph problem.
Recall that the densest-subgraph problem asks to find the subset of
vertices $W$ that maximizes
$\density{W} = \abs{E(W)}/\abs{W}$.
Given a graph $G=(V,E)$ and a positive number $\alpha \geq 0$
define a function
\[
\compact{\alpha} =
\max_{W\subseteq V}
\left\{ \abs{E(W)} - \alpha \abs{W}\right\},
\]
and the maximizer
\[
\compactgraph{\alpha} = \arg \max_{W\subseteq V} \left\{ \abs{E(W)} - \alpha \abs{W} \right\},
\]
where ties are resolved by picking the largest $W$.
Note that the value of the function $\compact{}$
decreases as $\alpha$ increases, and as $\alpha$ exceeds a certain
value the function becomes $0$ by taking $W=\emptyset$.
Goldberg observed that the densest-subgraph problem is equivalent to
the problem of finding the largest value $\alpha^*$ for which
$\compact{\alpha^*}\ge 0$ and the maximizer set
$\compactgraph{\alpha^*}=W^*$ is non-empty.\footnote{This
observation is an instance of
{\em fractional programming}~\cite{dinkelbach67fractional}.}
The densest subgraph is precisely this maximizer set $W^*$.
Furthermore, Goldberg showed how to find the vertex set
$W=\compactgraph{\alpha}$, for a given value of $\alpha$.
This is done by
mapping the problem to an instance of
the {\em min-cut problem},
which can be solved in time~${\cal O}(|V||E|)$
thanks to a recent breakthrough by Orlin~\cite{Orlin:2013wu}.
We will present an extension of this transformation in the next section, where
we discuss how to speed-up the algorithm.
Thus, Goldberg's algorithm uses binary search over $\alpha$ and finds
the largest value of $\alpha^*$ for which $\compact{\alpha^*}\ge 0$
and the maximizer set $W^*$ is non-empty.
Each iteration of the binary search involves a call to a
min-cut instance for the current value of $\alpha$.
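Goldberg's observation is easy to check by brute force on a toy graph (an exponential-time illustration only, with hypothetical function names; it is not how the min-cut-based algorithm operates):

```python
from itertools import combinations

def f_and_maximizer(vertices, edges, alpha):
    """Brute-force f(alpha) = max_W |E(W)| - alpha*|W| over all subsets W,
    breaking ties in favour of the largest W (toy graphs only)."""
    best_val, best_W = 0.0, frozenset()     # W = empty set gives value 0
    for r in range(1, len(vertices) + 1):
        for W in combinations(vertices, r):
            Ws = frozenset(W)
            val = sum(1 for u, v in edges if u in Ws and v in Ws) - alpha * r
            if val > best_val or (val == best_val and r > len(best_W)):
                best_val, best_W = val, Ws
    return best_val, best_W
```

For a $4$-clique with one pendant vertex the densest subgraph is the clique, with density $6/4 = 1.5$: every $\alpha \le 1.5$ yields a non-empty maximizer (the clique itself), whereas $\alpha > 1.5$ yields the empty set.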
Our algorithm for finding the locally-dense decomposition of a graph
builds on Goldberg's algorithm~\cite{Goldberg:1984up}.
We show that Goldberg's construction has the following,
rather remarkable, property:
there is a sequence of values
$\alpha^* =\alpha_1\ge\ldots\ge\alpha_k$, for $k\le n$,
which gives all the distinct values of the function $\compact{}$.
Furthermore,
the corresponding set of subgraphs
$\{\compactgraph{\alpha_1},\ldots,\compactgraph{\alpha_k}\}$
is exactly the set of all locally-dense subgraphs of $G$,
and thus the solution to our decomposition problem.
Therefore, our algorithm is a simple extension of Goldberg's
algorithm: instead of searching only for the optimal value
$\alpha_1=\alpha^*$,
we find the whole sequence of $\alpha_i$'s
and the corresponding subgraphs.
Next we prove the claimed
properties and discuss the algorithm in more detail.
We first show that the distinct maximizers of the
function~$\compactgraph{}$ correspond to the set of locally-dense
subgraphs.
\begin{proposition}
\label{prop:compact}
Let $\set{B_i}$ be the set of locally-dense subgraphs.
Then
\[
B_i = \compactgraph{\alpha},
\text{ for }
\density{B_{i + 1}, B_{i}} < \alpha \leq \density{B_i, B_{i - 1}}.
\]
\end{proposition}
\begin{proof}
We first show that $U = \compactgraph{\beta}$ is a locally-dense
subgraph, for any $\beta$.
Note that for any $X \subseteq U$, we must have
$\abs{\marginaledges{X, U \setminus X}} - \beta\abs{X} \geq 0$, otherwise we can delete
$X$ from $U$ and obtain a better solution which violates the optimality of $U = \compactgraph{\beta}$.
This implies that $\density{X, U \setminus X} = \marginaledges{X, U \setminus X} / \abs{X} \geq \beta$.
Similarly, for any $Y$ such that $Y \cap U = \emptyset$, we have
$\abs{\marginaledges{Y, U}} - \beta \abs{Y} < 0$ or, equivalently,
$\density{Y, U} < \beta$.
Thus, $U$ is locally dense.
Fix $i$ and select $\alpha$ s.t. $\density{B_{i + 1}, B_{i}} < \alpha \leq \density{B_i, B_{i - 1}}$.
Let $B_j = \compactgraph{\alpha}$. If $j > i$, then, due to Corollary~\ref{corollary:chain}, $\density{B_j, B_{j - 1}} < \alpha$
which we can rephrase as
\[
c = \abs{\marginaledges{B_j \setminus B_{j - 1}, B_{j - 1}}} - \alpha|B_j \setminus B_{j - 1}| < 0.
\]
If we delete $B_j \setminus B_{j - 1}$ from $B_j$, then we increase the objective exactly by $-c > 0$, that is,
we obtain a better solution, which violates the optimality of $B_j = \compactgraph{\alpha}$. If $j < i$,
then Corollary~\ref{corollary:chain} implies that $\density{B_{j + 1}, B_{j}} \geq \alpha$, so
we can add $B_{j + 1} \setminus B_j$ to obtain a better solution.
It follows that $B_i = \compactgraph{\alpha}$.
\end{proof}
\begin{algorithm}[t]
\caption{\label{algorithm:exact}$\textsc{ExactLD}\xspace(G,X,Y)$}
\Input{Graph $G = (V, E)$ \\ locally-dense subgraphs $X$ and $Y$ with $X \subsetneq Y$}
$\alpha \leftarrow \density{Y, X} + \abs{V}^{-2}$\;
$Z \leftarrow \compactgraph{\alpha}$\;
\If {$Z \neq X$} {
\Out $Z$\;
$\textsc{ExactLD}\xspace(G,X, Z)$\;
$\textsc{ExactLD}\xspace(G,Z, Y)$\;
}
\end{algorithm}
Next we need to show that it is possible to search efficiently for the
sequence of $\alpha$'s that give the set of locally-dense
subgraphs.
To that end we will show that if we have obtained two subgraphs
$B_x \subsetneq B_y$ of the decomposition
(corresponding to values $\alpha_x\ge\alpha_y$),
it is possible to pick a new value $\alpha$ so that computing
$\compactgraph{\alpha}$ allows us to make progress in
the search process:
we either find a new locally-dense subgraph
$B_x \subsetneq B_z \subsetneq B_y$ or we establish that no such
subgraph exists between $B_x$ and $B_y$, in other words,
$B_x$ and $B_y$ are consecutive subgraphs in our decomposition.
\begin{proposition}
\label{prop:rich}
Let $\set{B_i}$ be the set of locally-dense subgraphs.
Let $B_x \subsetneq B_y$ be two subgraphs.
Set $\alpha = \density{B_y, B_x} + \abs{V}^{-2}$ and let
$B_z = \compactgraph{\alpha}$.
If $x + 1 < y$, then $x < z < y$.
If $x + 1 = y$, then $z = x$.
\end{proposition}
\begin{proof}
A simple calculation shows that $\alpha > \density{B_y, B_x} \geq \density{B_y, B_{y - 1}}$.
Proposition~\ref{prop:compact} now implies that $z < y$.
Assume that $x + 1 < y$. This implies that $\density{B_y, B_{y - 1}} < \density{B_{x + 1}, B_{x}}$
which implies that $\density{B_y, B_{x}} < \density{B_{x + 1}, B_{x}}$. Let us write
\begin{align*}
a & = \marginaledges{B_y \setminus B_x, B_x}, & b & = \abs{B_y} - \abs{B_x}, \\
c & = \marginaledges{B_{x + 1} \setminus B_x, B_x}, \text{ and } & d & = \abs{B_{x + 1}} - \abs{B_x}.
\end{align*}
Let us now bound the difference between the densities as
\begin{eqnarray*}
\density{B_{x + 1}, B_{x}} - \density{B_y, B_{x}} & = & \frac{c}{d} - \frac{a}{b} \\
& = & \frac{bc - ad}{bd} \\
& \geq & \frac{1}{bd} \\
& > & \frac{1}{\abs{V}^2}.
\end{eqnarray*}
This implies that $\alpha < \density{B_{x + 1}, B_{x}}$.
Proposition~\ref{prop:compact} now implies that $z \geq x + 1 > x$.
Assume that $x + 1 = y$. Since $\density{B_y, B_{y - 1}} < \density{B_{x}, B_{x - 1}}$,
the same argument as above shows that $z \geq x$, which guarantees that $x = z$.
\end{proof}
The exact decomposition algorithm uses Proposition~\ref{prop:rich} to
guide the search process.
Starting from the two extreme subgraphs of the decomposition,
$\emptyset$ and $V$, the algorithm maintains a sequence of
locally-dense subgraphs.
Recursively, for any two currently-adjacent subgraphs in the sequence,
we use Proposition~\ref{prop:rich} to check whether the two subgraphs
are consecutive or not in the decomposition.
If they are consecutive, the recursion at that branch of the search
terminates.
If they are not, a new subgraph between the two is discovered and it
is added in the decomposition.
The algorithm is named \textsc{ExactLD}\xspace\ and it is illustrated as
Algorithm~\ref{algorithm:exact}.
With the next propositions we prove the correctness of the algorithm
and we bound its running time.
\begin{proposition}
The algorithm \textsc{ExactLD}\xspace\ initiated with input
$(G, \emptyset, V)$
visits all locally-dense subgraphs of $G$.
\end{proposition}
\begin{proof}
Let $\set{B_i}$ be the set of locally-dense subgraphs.
We will prove the proposition by showing that for $i < j$, the algorithm
$\textsc{ExactLD}\xspace(G, B_i, B_j)$ visits all locally-dense subgraphs that are between $B_i$ and $B_j$.
We will prove this by induction over $j - i$. The first step $j = i + 1$ is trivial.
Assume that $j > i + 1$. Then Proposition~\ref{prop:rich} implies that
$B_k = \compactgraph{\alpha}$, where $i < k < j$.
The inductive assumption now guarantees that $\textsc{ExactLD}\xspace(G, B_i, B_k)$
and $\textsc{ExactLD}\xspace(G, B_k, B_j)$ will visit all locally-dense subgraphs
between $B_i$ and $B_j$.
\end{proof}
\begin{proposition}
The worst-case running time of algorithm \textsc{ExactLD}\xspace\ is
$\bigo{|V|^2|E|}$.
\end{proposition}
\begin{proof}
We will show that the algorithm \textsc{ExactLD}\xspace, initiated with input
$(G, \emptyset, V)$ makes $2k - 3$ calls to the function
$\compactgraph{}$,
where $k$ is the number of locally-dense subgraphs.
Let $k_i$ be the number of calls of $\compactgraph{}$ when the
input parameter $Y = B_i$.
Out of these $k_i$ calls one call will result in $\compactgraph{\alpha} = X$.
There are $k - 1$ such calls, since $Y = \emptyset$ is never tested.
Each of the remaining calls will discover a new locally-dense subgraph.
Since there are $k - 2$ new subgraphs to discover, it follows that
$2k-3$ calls to $\compactgraph{}$ are needed.
Since a call to $\compactgraph{}$ corresponds to a min-cut
computation, which has running time
$\bigo{|V||E|}$~\cite{Orlin:2013wu}, and since $k\in\bigo{|V|}$,
the claimed running-time bound follows.
\end{proof}
\subsection{Speeding up the exact algorithm}
Our next step is to speed up \textsc{ExactLD}\xspace. This speed-up does not
improve the theoretical bound on the running time but, in
practice, it improves the performance of the algorithm dramatically.
The speed-up is based on the following observation. We know from
Proposition~\ref{prop:rich} that $\textsc{ExactLD}\xspace(G, X, Y)$ visits only
subgraphs $Z$ with the property $X \subseteq Z \subseteq Y$. This gives us
immediately the first speed-up: we can safely ignore any vertex outside $Y$, that
is, $\textsc{ExactLD}\xspace(G(Y), X, Y)$ will yield the same output.
Our second observation is that any subgraph $Z$ visited by $\textsc{ExactLD}\xspace(G, X,
Y)$ must contain the vertices of $X$. However, we cannot simply delete them because we
need to take into account the edges between $X$ and $Z$. To address this
let us consider the following maximizer
\[
\compactgraph{\alpha; X} = \arg \max_{X \subseteq W\subseteq V} \set{ \abs{E(W)} - \alpha \abs{W}}.
\]
We can replace the original $\compactgraph{\alpha}$ in
Algorithm~\ref{algorithm:exact} with $\compactgraph{\alpha; X}$. To compute
$\compactgraph{\alpha; X}$ we will use a straightforward extension of
Goldberg's algorithm~\cite{Goldberg:1984up} and transform this problem into a
problem of finding a minimum cut.
In order to do this, given a graph $G = (V, E)$, let us define a weighted graph $H$
that consists of vertices $V \setminus X$ and edges $E(V \setminus X)$ with weights of 1.
Add two auxiliary vertices $s$ and $t$ into $H$ and connect these vertices to every vertex in
$V \setminus X$. Given a vertex $y \in V \setminus X$,
assign a weight of $2\alpha$ to the edge $(y, t)$ and a weight of
\[
w(y) = \dg{y; V \setminus X} + 2\dg{y; X}
\]
to the edge $(s, y)$, where $\dg{y; U}$ stands for the number of neighbors of $y$ in $U$. We claim that solving a minimum cut separating $s$ from $t$
will solve $\compactgraph{\alpha; X}$. This cut can be obtained by computing a maximum flow from $s$ to $t$.
To prove this claim let $C \subsetneq V(H)$ be a subset of vertices containing
$s$ and not containing $t$. Let $Z = C \setminus \set{s}$ and also let $W = V \setminus (Z \cup X)$.
There are three types of cross-edges from $C$ to $V(H) \setminus C$:
\emph{(i)} edges from $x \in Z$ to $t$,
\emph{(ii)} edges from $s$ to $x \in W$, and
\emph{(iii)} edges from $x \in Z$ to $y \in W$. The total cost of $C$ is then
\[
2\abs{Z}\alpha + \sum_{y \in W} w(y) + \abs{\crossedges{Z, W}}.
\]
We claim that the last two terms of the cost are equal to $2\abs{E} - 2\abs{E(X \cup Z)}$.
To see this, consider an edge $e = (x, y)$ in $E \setminus E(X \cup Z)$. This implies that
one of the end points, assume it is $y$, has to be in $W$. There are three different cases for $x$:
\emph{(i)} if $x\in W$, then $e$ contributes 2 to the cost: 1 to $w(x)$ and 1 to $w(y)$,
\emph{(ii)} if $x \in X$, then $e$ contributes $2$ to $w(y)$, and
\emph{(iii)} if $x \in Z$, then $e$ contributes $1$ to $w(y)$ and $1$ to the third term. Thus, we
can write the cut as
\[
\begin{split}
&2\abs{Z}\alpha + 2\abs{E} - 2\abs{E(X \cup Z)} \\
&\quad= 2\abs{E} - 2\abs{X}\alpha - 2(\abs{E(X \cup Z)} - \alpha\abs{Z \cup X}).
\end{split}
\]
The first two terms on the right-hand side are constant, which implies
that finding the minimum cut is equivalent to maximizing $\abs{E(X \cup Z)} - \alpha\abs{Z \cup X}$.
Consequently, if $Z^*$ is the min-cut solution, then $\compactgraph{\alpha; X} = X \cup Z^*$.
Note that the graph $H$ does not have vertices included in $X$.
By combining both speed-ups we are able to reduce the running time of $\textsc{ExactLD}\xspace(X, Y)$ by considering
only the vertices that are in $Y \setminus X$.
\subsection{Linear approximation algorithm}
As we saw in the last section, the exact algorithm can be
significantly accelerated, and indeed, our experimental evaluation
shows that it is possible to run the exact algorithm for a graph of
millions of vertices and edges within 2 minutes.
Nevertheless, the worst-case complexity of the algorithm is cubic, and
thus, it is not truly scalable for massive graphs.
Here we present a more lightweight algorithm for performing a
locally-dense decomposition of a graph.
The algorithm runs in linear time and offers a factor-$2$
approximation guarantee.
As the exact algorithm builds on Goldberg's algorithm for the
densest-subgraph problem,
the linear-time algorithm builds on Charikar's approximation algorithm
for the same problem~\cite{Charikar:2000tg}.
As already explained in Section~\ref{sec:prel},
Charikar's approximation algorithm iteratively removes the vertex with
the lowest degree, until left with an empty graph, and returns the
densest graph among all subgraphs considered during this process.
Our extension to this algorithm, called \textsc{GreedyLD}\xspace, is illustrated in
Algorithm~\ref{algorithm:peel}, and it operates in two phases.
The first phase is identical to the one in Charikar's algorithm:
all vertices of the graph are iteratively removed,
in increasing order of their degree in the current graph.
In the second phase, the algorithm proceeds to discover approximate
locally-dense subgraphs, in an iterative manner, from $B_1$ to $B_k$.
The first subgraph $B_1$ is the approximate densest subgraph, the same
one returned by Charikar's algorithm.
In the $j$-th step of the iteration, having discovered subgraphs
$B_1,\ldots, B_{j-1}$
the algorithm selects the subgraph $B_j$ that maximizes the density
$\density{B_j,B_{j-1}}$.
To select $B_j$ the algorithm considers only prefixes of
the degree-based vertex order that was produced in the first phase.
\begin{algorithm}[t]
\caption{\label{algorithm:peel}$\textsc{GreedyLD}\xspace(G)$}
\Input{Graph $G = (V, E)$}
\Output{Collection $\col{C}$ of approximate locally-dense subgraphs}
\For {$i = \abs{V}, \ldots, 1$} {
$w_i \leftarrow$ the vertex with the smallest degree\;
delete $w_i$ from $V$\;
}
$\col{C} \leftarrow \set{\emptyset}$\;
$j \leftarrow 0$\;
\While {$j < \abs{V}$} {
$i \leftarrow \arg \max_{i > j} \density{\enset{w_1}{w_i}, \enset{w_1}{w_j}}$\;
add $\enset{w_1}{w_i}$ to $\col{C}$\;
$j \leftarrow i$\;
}
\Return $\col{C}$\;
\end{algorithm}
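A compact Python rendering of \textsc{GreedyLD}\xspace may look as follows (our own sketch, with the second phase implemented naively in $\bigo{n^2}$ time):

```python
def greedy_ld(adj):
    """adj: dict vertex -> set of neighbours.  Returns the chain of
    approximate locally-dense subgraphs as (vertex_set, density) pairs,
    from the (approximately) densest subgraph outward."""
    # Phase 1: Charikar's peeling -- repeatedly remove a minimum-degree vertex.
    deg = {v: len(ns) for v, ns in adj.items()}
    alive, removal = set(adj), []
    while alive:
        v = min(alive, key=deg.get)
        removal.append(v)
        alive.remove(v)
        for u in adj[v] & alive:
            deg[u] -= 1
    order = removal[::-1]                      # w_1, ..., w_n: reverse removal order
    pos = {v: i for i, v in enumerate(order)}
    # d_in[i]: number of neighbours of order[i] appearing earlier in the order
    d_in = [sum(1 for u in adj[v] if pos[u] < i) for i, v in enumerate(order)]
    # Phase 2: repeatedly extend to the prefix maximizing the marginal density.
    chain, j, n = [], 0, len(order)
    while j < n:
        best_i, best_d = j + 1, float('-inf')
        for i in range(j + 1, n + 1):
            d = sum(d_in[j:i]) / (i - j)       # density(w_1..w_i, w_1..w_j)
            if d >= best_d:                    # ties: take the larger prefix
                best_i, best_d = i, d
        chain.append((set(order[:best_i]), best_d))
        j = best_i
    return chain
```

On a $4$-clique with one pendant vertex, the chain is the clique (density $1.5$) followed by the whole graph (marginal density $1$), with strictly decreasing densities.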
Discovering $\col{C}$ from the ordered vertices takes $\bigo{n^2}$ time, if
done naively. However, it is possible to implement this step in $\bigo{n}$
time. In order to do this, sort vertices in the reverse visit order,
and define $\din{v}$ to be the number of edges of
$v$ from earlier neighbors. Then, we can express the density as an average,
\[
\density{\enset{w_1}{w_i}, \enset{w_1}{w_j}} = \frac{1}{i - j}\sum_{k = j + 1}^i \din{v_k}.
\]
Consequently, we can see that recovering $\col{C}$ is an instance of the following problem,
\begin{problem}
Given a sequence $y_1, \ldots, y_n$, compute the maximal interval
\[
m(j) = \arg \max_{j \leq i \leq n} \frac{1}{i - j + 1}\sum_{k = j}^i y_k,
\]
for every $1 \leq j \leq n$.
\end{problem}
Luckily, Calders et al.~\cite{DBLP:journals/is/CaldersDGG14} demonstrated that we can use the classic PAVA
algorithm~\cite{PAV} to solve this problem for \emph{every} value of $j$ in total $\bigo{n}$ time.
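As a baseline illustration of this subproblem (a quadratic-time sketch of our own, not the linear-time PAVA-based solution of Calders et al.):

```python
def maximal_intervals(y):
    """Naive O(n^2) computation of m(j) = argmax_{i >= j} avg(y[j..i])
    for every j (0-indexed; ties broken towards the larger i)."""
    n = len(y)
    m = []
    for j in range(n):
        best_i, best_avg, total = j, float('-inf'), 0.0
        for i in range(j, n):
            total += y[i]
            avg = total / (i - j + 1)
            if avg >= best_avg:
                best_i, best_avg = i, avg
        m.append(best_i)
    return m
```

For instance, on the sequence $0, 1, 2, 3, 1$ every start $j \le 3$ maps to $i = 3$, i.e., the first block of the decomposition ends at position $3$ and the last element forms its own block.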
To quantify the approximation guarantee of \textsc{GreedyLD}\xspace,
note that the sequence of approximate locally-dense subgraphs produced
by the algorithm is not necessarily aligned with the locally-dense
subgraphs of the optimal decomposition.
In other words, to assess the quality of the density of an approximate
locally-dense subgraph $B_j$ produced by \textsc{GreedyLD}\xspace,
there is no direct counterpart in the optimal decomposition to compare against.
To overcome this difficulty we develop a scheme of ``vertex-wise''
comparison, where for any $1\le i\le n$,
the density of the smallest approximate locally-dense subgraph of
size at least $i$
is compared with the density of the smallest optimal locally-dense
subgraph of size at least $i$.
This is defined below via the concept of {\em profile}.
\begin{definition}
\label{definition:profile}
Let $\mathcal{B} = (\emptyset = B_0 \subsetneq B_1 \subsetneq \cdots
\subsetneq B_k = V)$ be a nested chain of subgraphs, the first
subgraph being the empty graph and the
last subgraph being the full graph.
For an integer $i$, $1 \leq i \leq n$ define
\[
j = \min \set{x \mid \abs{B_x} \geq i}
\]
to be the index of the smallest subgraph in $\mathcal{B}$ whose size
is at least $i$. We define a {\em profile function}
$\funcdef{\prof{}}{\enset{1}{n}}{\mathbb{R}}$ to be
\[
\prof{i;\,\mathcal{B}} = \density{B_j, B_{j - 1}}.
\]
\end{definition}
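Definition~\ref{definition:profile} translates directly into code (a sketch with helper names of our own choosing):

```python
def profile(chain, edges, i):
    """prof(i; B) for a nested chain [B_0 = set(), B_1, ..., B_k = V]:
    the marginal density d(B_j, B_{j-1}) of the smallest B_j with |B_j| >= i."""
    def internal_edges(S):
        return sum(1 for u, v in edges if u in S and v in S)
    j = next(x for x in range(len(chain)) if len(chain[x]) >= i)
    return ((internal_edges(chain[j]) - internal_edges(chain[j - 1]))
            / (len(chain[j]) - len(chain[j - 1])))
```

For the chain $\emptyset \subsetneq \{1,2,3,4\} \subsetneq \{1,\ldots,5\}$ on a $4$-clique with a pendant vertex, the profile is $1.5$ for $i \le 4$ and $1$ for $i = 5$.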
Our approximation guarantee is now expressed as a guarantee of the
profile function of the approximate decomposition with respect to the
optimal decomposition.
\begin{proposition}
\label{prop:peel-approximation}
Let $\col{B} = \set{B_i}$ be the set of optimal locally-dense subgraphs.
Let $\col{C} = \set{C_i}$ be the subgraphs obtained by \textsc{GreedyLD}\xspace.
Then
\[
\prof{i;\,\col{C}} \geq \prof{i;\,\col{B}} / 2.
\]
\end{proposition}
\begin{proof}
Sort the set of vertices $V$ according to the reverse visiting order
of \textsc{GreedyLD}\xspace and let $\din{v}$ be the number of edges of $v$
from earlier neighbors.
Fix $k$ to be an integer, $1 \leq k \leq n$ and let $B_i$ be the smallest subgraph
such that $\abs{B_i} \geq k$. Let $v_j$ be the last vertex occurring in
$B_i$. We must have $\din{v_j} \geq \density{v_j, B_i}$, and $\density{v_j,
B_i} \geq \density{B_i , B_{i - 1}}$ as otherwise we can
remove $v_j$ from $B_i$ and improve the density due to Lemma~\ref{lem:delete}. We have
\[
\prof{k;\,\mathcal{B}} = \density{B_i, B_{i - 1}} \leq \din{v_j}.
\]
Let $C_x$ be the smallest subgraph such that $\abs{C_x} \geq k$.
Let $v_z$ be the vertex with the smallest index that is still in $C_x \setminus C_{x - 1}$.
Let $g_y$ be the degree of $v_y$ right before $v_j$ is removed during \textsc{GreedyLD}\xspace.
Note that $\din{v_j} \leq g_y$, and we can easily show that $\sum_{y = z}^j g_y \leq 2\sum_{y = z}^j \din{v_y}$.
We now have
\begin{eqnarray*}
\prof{k;\,\mathcal{C}} & = & \density{C_x, C_{x - 1}} \\
& \geq & \frac{1}{j - z + 1} \sum_{y = z}^j \din{v_y} \\
& \geq & \frac{1}{2(j - z + 1)} \sum_{y = z}^j g_y \\
& \geq & \frac{\din{v_j}}{2},
\end{eqnarray*}
where the optimality of $C_x$ implies the first inequality.
\end{proof}
We should point out that $\prof{1;\,\col{B}}$ is equal to the density of the
densest subgraph, while $\prof{1;\,\col{C}}$ is equal to the density of the
subgraph discovered by Charikar's algorithm. Consequently,
Proposition~\ref{prop:peel-approximation} automatically provides the
2-approximation guarantee of Charikar's algorithm.
We should also point out that $\prof{i;\,\col{C}}$ can be larger than $\prof{i;\,
\col{B}}$, but for the first index, say $j$, for which $\prof{j;\,\col{C}} \neq
\prof{j;\,\col{B}}$, Proposition~\ref{prop:maximal} guarantees that $\prof{j;\,
\col{C}} < \prof{j;\,\col{B}}$.
\section{Experimental evaluation}
\label{sec:experiments}
We will now present our experimental evaluation.
We test the two proposed algorithms, \textsc{ExactLD}\xspace and \textsc{GreedyLD}\xspace,
for decomposing a graph into locally-dense subgraphs,
and we contrast the resulting decompositions against $k$-cores,
obtained with the \textsc{Core}\xspace algorithm.
We compare the three algorithms in terms of running time,
decomposition size
(number of subgraphs they provide),
and relative density of the subgraphs they return.
We also use the Kendall-$\tau$ distance to measure how similar the
decompositions are in terms of the order they induce on the graph
vertices.
\subsection{Experimental setup}
We perform our evaluation on 11 graphs of different sizes and
densities.
A short description of the graphs is given below, and their basic
characteristics can be found in Table~\ref{tab:basic}.
\begin{itemize}[itemsep=-1pt]
\item
{\texttt{dolphins}:}
an undirected social network of frequent associations between dolphins
in a community living off Doubtful Sound in New Zealand.
\item
{\texttt{karate}:}
the social network of friendships between members of a karate club at a US university in the 1970s.
\item
{\texttt{lesmis}:}
co-appearance of characters in the novel Les Mis\'erables by Victor Hugo.
\item
{\texttt{astro}:}
a co-authorship network among arXiv Astrophysics publications.
\item
{\texttt{enron}:}
an e-mail communication network by Enron employees.
\item
{\texttt{fb1912}:}
an ego-network obtained from Facebook.
\item
{\texttt{hepph}:}
a co-authorship network among arXiv High Energy Physics publications.
\item
{\texttt{dblp}:}
a co-authorship network among computer science researchers.
\item
{\texttt{gowalla}:} a friendship network of \url{gowalla.com}.
\item
{\texttt{roadnet}:} a road network of California, where
vertices represent intersections and edges represent road segments.
\item
{\texttt{skitter}:} an internet topology graph, obtained from traceroutes run daily in 2005.
\end{itemize}
The first three datasets are obtained from the UC Irvine
Network Data Repository,\footnote{\url{http://networkdata.ics.uci.edu/index.php}}
and the remaining datasets are obtained from the Stanford SNAP repository.\!\footnote{\url{http://snap.stanford.edu/data}}
We apply \textsc{Core}\xspace, \textsc{GreedyLD}\xspace, and \textsc{ExactLD}\xspace to every dataset.
We use a computer equipped with 3GHz Intel Core i7 and 8GB of RAM.\!\footnote{The implementation is available at\\ \url{http://research.ics.aalto.fi/dmg/software.shtml}}
\begin{table}[t]
\caption{Basic characteristics of the datasets and the running times of the algorithms.
\textsc{E}~stands for \textsc{ExactLD}\xspace,
\textsc{G} for \textsc{GreedyLD}\xspace, and
\textsc{C} for \textsc{Core}\xspace.}
\label{tab:basic}
\begin{tabular*}{\columnwidth}{l@{\hspace{2mm}}rr rrr}
\toprule
& & & \multicolumn{3}{c}{running time} \\
\cmidrule{4-6}
Name & $\abs{V}$ & $\abs{E}$ & \textsc{c} & \textsc{g} & \textsc{e} \\
\midrule
{\texttt{dolphins}} &
62 & 159 &
1ms & 1ms & 2ms
\\
{\texttt{karate}} &
34 & 78 &
1ms & 1ms & 2ms
\\
{\texttt{lesmis}} &
77 & 254 &
2ms & 2ms & 3ms
\\[1mm]
{\texttt{astro}} &
18\,772 & 396\,160 &
0.4s & 0.4s & 2s
\\
{\texttt{enron}} &
36\,692 & 183\,831 &
0.3s & 0.3s & 2s
\\
{\texttt{fb1912}} &
747 & 30\,025 &
44ms & 44ms & 0.2s
\\
{\texttt{hepph}} &
12\,008 & 237\,010 &
0.2s & 0.2s & 0.9s
\\[1mm]
{\texttt{dblp}} &
317\,080 & 1\,049\,866 &
2s & 2s & 14s
\\
{\texttt{gowalla}} &
196\,591 & 950\,327 &
2s & 2s & 9s
\\
{\texttt{roadnet}} &
1\,965\,206 & 5\,533\,214 &
7s & 8s & 1m6s
\\
{\texttt{skitter}} &
1\,696\,415 & 11\,095\,298 &
21s & 21s & 1m46s
\\
\bottomrule
\end{tabular*}
\end{table}
\subsection{Results}
We begin by reporting the running times of the three algorithms for
all of our datasets.
They are shown in Table~\ref{tab:basic}.
As expected, the linear-time algorithms \textsc{Core}\xspace and \textsc{GreedyLD}\xspace are both very fast;
the largest graph with 11 million edges and 1.7 million vertices is
processed in 21 seconds.
However, we are also able to run the exact decomposition for all the graphs in reasonable
time, despite its running-time complexity of
$\bigo{\abs{V}^2\abs{E}}$.
It takes less than 2 minutes for \textsc{ExactLD}\xspace to process the largest
graph.
There are three reasons for this performance.
First, we need to compute the minimum cut only $\bigo{k}$ times, where
$k$ is the number of locally-dense subgraphs.
In practice, $k$ is much smaller than the number of vertices.
Second, computing a minimum cut is in practice faster than the theoretical $\bigo{\abs{V}\abs{E}}$ bound suggests.
Third, as described in Section~\ref{sec:discovery}, most of the minimum cuts
are computed on subgraphs. While in theory these subgraphs can be
as large as the original graph, in practice they are
significantly smaller.
\begin{table}[t]
\caption{Smallest ratio of the profile function of the discovered decomposition to the profile function of the exact solution, as defined in Equation~(\ref{eq:ratio}), and the ratio of
the density of the innermost discovered subgraph to the density of the actual densest subgraph.}
\label{tab:ratio}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lrrrrr}
\toprule
& \multicolumn{2}{c}{$r(\col{C}, \col{B})$} && \multicolumn{2}{c}{$\density{C_1} / \density{B_1}$}\\
\cmidrule{2-3} \cmidrule{5-6}
Name & \textsc{Core}\xspace & \textsc{GreedyLD}\xspace && \textsc{Core}\xspace & \textsc{GreedyLD}\xspace\\
\midrule
{\texttt{dolphins}} &
0.94 & 0.83 && 0.98 & 0.98
\\
{\texttt{karate}} &
0.95 & 0.99 && 0.95 & 0.99
\\
{\texttt{lesmis}} &
0.86 & 0.87 && 0.96 & 1.00
\\[1mm]
{\texttt{astro}} &
0.85 & 0.85 && 0.87 & 0.92
\\
{\texttt{enron}} &
0.83 & 0.82 && 0.94 & 1.00
\\
{\texttt{fb1912}} &
0.69 & 0.74 && 0.91 & 1.00
\\
{\texttt{hepph}} &
0.74 & 0.75 && 1.00 & 1.00
\\
{\texttt{dblp}} &
0.80 & 0.86 && 1.00 & 1.00
\\
{\texttt{gowalla}} &
0.89 & 0.92 && 0.87 & 1.00
\\
{\texttt{roadnet}} &
0.81 & 0.87 && 0.84 & 0.87
\\
{\texttt{skitter}} &
0.73 & 0.84 && 0.84 & 1.00
\\
\bottomrule
\end{tabular*}
\end{table}
\begin{figure*}[t]
\begin{tikzpicture}
\begin{axis}[xlabel={index $i$},ylabel= {$\prof{i}$},
width = 3.4cm,
cycle list name=yaf,
scale only axis,
title = {{\texttt{lesmis}}},
ymin = 0,
no markers
]
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/lesmis_c.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/lesmis_g.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/lesmis_e.plot};
\pgfplotsextra{\yafdrawaxis{0}{77}{0}{5.4}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[xlabel={index $i$},
width = 3.4cm,
cycle list name=yaf,
scale only axis,
title = {{\texttt{fb1912}}},
ymin = 0,
no markers
]
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/f1912_c.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/f1912_g.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/f1912_e.plot};
\pgfplotsextra{\yafdrawaxis{0}{747}{0}{108}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[xlabel={index $i$},
width = 3.4cm,
cycle list name=yaf,
scale only axis,
title = {{\texttt{astro}}},
ymin = 0,
scaled ticks = false,
x tick label style={/pgf/number format/1000 sep={\,}},
no markers,
]
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-astro_c.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-astro_g.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-astro_e.plot};
\pgfplotsextra{\yafdrawaxis{0}{18772}{0}{47}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[xlabel={index $i$},
width = 3.4cm,
cycle list name=yaf,
scale only axis,
title = {{\texttt{hepph}}},
ymin = 0,
scaled ticks = false,
x tick label style={/pgf/number format/1000 sep={\,}},
no markers,
legend entries = {\textsc{Core}\xspace, \textsc{GreedyLD}\xspace, \textsc{ExactLD}\xspace}
]
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-hepph_c.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-hepph_g.plot};
\addplot+[const plot] table[x index = 2, y index = 0, header = false] {results/ca-hepph_e.plot};
\pgfplotsextra{\yafdrawaxis{0}{12008}{0}{120}}
\end{axis}
\end{tikzpicture}
\caption{\label{fig:profile}Profile functions for {\texttt{lesmis}}, {\texttt{fb1912}}, {\texttt{astro}}, and {\texttt{hepph}}.}
\end{figure*}
Next, we compare how well \textsc{Core}\xspace and \textsc{GreedyLD}\xspace approximate the exact
locally-dense decomposition. In order to do that we compute the ratio
\begin{equation}
\label{eq:ratio}
r( \col{C}, \col{B}) = \min_i \frac{\prof{i; \col{C}}}{\prof{i; \col{B}}},
\end{equation}
where $\col{B}$ is the locally-dense decomposition and $\col{C}$ is
obtained from either \textsc{GreedyLD}\xspace or \textsc{Core}\xspace.
These ratios are shown in Table~\ref{tab:ratio}.
We also compare $\prof{1; \col{C}}/\prof{1; \col{B}}$, that is, the ratio
of the density of the innermost subgraph in $\col{C}$ to the density of
$\col{B}_1$, the densest subgraph.
Propositions~\ref{prop:peel-approximation}
and~\ref{prop:core-approximation} guarantee that these ratios are at
least~$1/2$. In practice, the ratios are larger, typically over $0.8$.
In most cases, but not always, \textsc{GreedyLD}\xspace obtains better ratios than
\textsc{Core}\xspace.
When comparing the ratio for the innermost subgraph, \textsc{GreedyLD}\xspace will, by
design, always be better than or equal to \textsc{Core}\xspace.
We see that in only three
datasets \textsc{Core}\xspace is able to find the same subgraph as \textsc{GreedyLD}\xspace.
\begin{table}[t]
\caption{Sizes of the discovered decompositions and Kendall-$\tau$
statistics between the decompositions.
\textsc{E}~stands for \textsc{ExactLD}\xspace,
\textsc{G} for \textsc{GreedyLD}\xspace, and
\textsc{C} for \textsc{Core}\xspace.
}
\label{tab:kendall}
\begin{tabular}{lrrr rrr}
\toprule
Name & \textsc{c} & \textsc{g} & \textsc{e} & \textsc{c}-vs-\textsc{e} & \textsc{g}-vs-\textsc{e} & \textsc{c}-vs-\textsc{g} \\
\midrule
{\texttt{dolphins}} &
4 & 6 & 7 &
0.76 & 0.77 & 0.99
\\
{\texttt{karate}} &
4 & 3 & 4 &
0.80 & 0.95 & 0.78
\\
{\texttt{lesmis}} &
8 & 8 & 9 &
0.94 & 0.99 & 0.95
\\[1mm]
{\texttt{astro}} &
52 & 83 & 435 &
0.93 & 0.93 & 0.99
\\
{\texttt{enron}} &
43 & 162 & 357 &
0.92 & 0.92 & 0.99
\\
{\texttt{fb1912}} &
87 & 55 & 75 &
0.95 & 0.98 & 0.97
\\
{\texttt{hepph}} &
64 & 63 & 283 &
0.93 & 0.93 & 0.98
\\[1mm]
{\texttt{dblp}} &
47 & 97 & 1087 &
0.88 & 0.89 & 0.97
\\
{\texttt{gowalla}} &
51 & 161 & 899 &
0.97 & 0.96 & 0.98
\\
{\texttt{roadnet}} &
3 & 43 & 2710&
0.57 & 0.80 & 0.68
\\
{\texttt{skitter}} &
111 & 266 & 3501 &
0.98 & 0.97 & 0.99
\\
\bottomrule
\end{tabular}
\end{table}
Let us now compare the different solutions found by the three
algorithms.
In Table~\ref{tab:kendall} we report the sizes of discovered communities and their
Kendall-$\tau$ statistics, which compares the ordering of the vertices
induced by the decompositions.
In particular, the Kendall-$\tau$ statistic is computed by assigning each
vertex an index based on the subgraph to which the vertex belongs.
To handle ties, we use the $b$-version of Kendall-$\tau$,
as given by Agresti~\cite{agresti10ordinal}.
If the statistic is 1, the decompositions are equal.
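For concreteness, the $b$-version of the Kendall-$\tau$ statistic over two such index assignments can be computed as follows. This is a quadratic-time sketch with our own variable names, not the paper's implementation:

```python
import math

def kendall_tau_b(x, y):
    """Kendall tau-b between two equal-length index vectors, handling ties."""
    assert len(x) == len(y)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue                    # tied in both vectors: ignored
            elif dx == 0:
                ties_x += 1                 # tied only in x
            elif dy == 0:
                ties_y += 1                 # tied only in y
            elif dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    n0 = concordant + discordant
    return (concordant - discordant) / math.sqrt((n0 + ties_x) * (n0 + ties_y))
```

Identical index assignments yield a statistic of $1$, and fully reversed assignments yield $-1$, as expected.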
Our first observation is that
typically the locally-dense decomposition algorithms return more
subgraphs than the $k$-core decomposition.
As an extreme example, {\texttt{roadnet}} contains only 3 $k$-cores while \textsc{GreedyLD}\xspace finds 43
subgraphs and \textsc{ExactLD}\xspace finds 2710.
This can be explained by the fact that the vertices in the graph have
low degrees, which results in a very coarse $k$-core decomposition.
On the other hand, \textsc{ExactLD}\xspace and \textsc{GreedyLD}\xspace exploit density to discover
more fine-grained decompositions.
This result is similar to what we presented in the
Example~\ref{ex:toy} in the introduction.
The Kendall-$\tau$ statistics are typically close to $1$, especially for large
datasets, suggesting that all three methods produce similar decompositions.
The statistic between \textsc{Core}\xspace and \textsc{GreedyLD}\xspace is typically larger than
either statistic against the exact solution. This is expected since \textsc{Core}\xspace and \textsc{GreedyLD}\xspace use the exact same
order for vertices---the only difference between these two methods is how they
partition the vertex order. In addition, decompositions produced by \textsc{GreedyLD}\xspace
are closer to the exact solution than the decompositions produced by \textsc{Core}\xspace,
which is also a natural result.
Let us now compare the solutions in terms of profile functions as
defined in Definition~\ref{definition:profile}.
We illustrate several prototypical examples of such profile functions
in Figure~\ref{fig:profile}.
From the figure we see that \textsc{GreedyLD}\xspace produces similar profiles as the
exact locally-dense decomposition.
We also see that \textsc{Core}\xspace does not respect the local density constraint.
In {\texttt{fb1912}}, {\texttt{astro}}, and {\texttt{hepph}} there exist $k$-shells that are
denser than their inner shells, that is, joining these shells would increase
the density of the inner shell.
\textsc{GreedyLD}\xspace does not have this problem since by
definition it will have a monotonically decreasing profile.
Finally, in Table~\ref{tab:lesmis} we present the
decompositions obtained by the three algorithms for the {\texttt{lesmis}} graph.
We see that \textsc{GreedyLD}\xspace obtains a result very similar to the exact solution;
the only differences are that the second and third subgraphs are merged, and
the third-to-last subgraph lends vertices to the second-to-last subgraph.
While \textsc{GreedyLD}\xspace has the same first subgraph as the exact solution, which is the densest
subgraph, \textsc{Core}\xspace breaks this subgraph into 3 subgraphs.
Interestingly enough, the main character of the book, Jean Valjean, is
not placed into the first shell by \textsc{Core}\xspace.
\begin{table}[ht!]
\caption{Decompositions of the {\texttt{lesmis}} dataset. The numbers represent
the greedy peeling and
the $k$-core
decomposition, respectively. The names are ordered according to the locally-dense decomposition,
and the groups are marked with alternating colors.}
\label{tab:lesmis}
\scriptsize
\colorlet{tabalt1}{yafcolor5!25}
\colorlet{tabalt2}{yafcolor4!10}
\colorlet{tabcolor1}{tabalt1}
\colorlet{tabcolor2}{tabalt2}
\colorlet{tabcolor3}{tabalt1}
\colorlet{tabcolor4}{tabalt2}
\colorlet{tabcolor5}{tabalt1}
\colorlet{tabcolor6}{tabalt2}
\colorlet{tabcolor7}{tabalt1}
\colorlet{tabcolor8}{tabalt2}
\colorlet{tabcolor9}{tabalt1}
\setlength{\tabcolsep}{0cm}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lll@{\hspace{4pt}}lll@{\hspace{4pt}}lll l}
\toprule
\cellcolor{tabcolor1}Valjean & 1 & 2 & \cellcolor{tabcolor3}Fameuil & 2 & 3 & \cellcolor{tabcolor7}Pontmercy & 6 & 7 & \\
\cellcolor{tabcolor1}MmeThenardier & 1 & 3 & \cellcolor{tabcolor3}Blacheville & 2 & 3 & \cellcolor{tabcolor7}MotherInnocent & 7 & 7 & \\
\cellcolor{tabcolor1}Thenardier & 1 & 2 & \cellcolor{tabcolor3}Favourite & 2 & 3 & \cellcolor{tabcolor7}Magnon & 7 & 7 & \\
\cellcolor{tabcolor1}Javert & 1 & 2 & \cellcolor{tabcolor3}Dahlia & 2 & 3 & \cellcolor{tabcolor7}MmePontmercy & 7 & 7 & \\
\cellcolor{tabcolor1}Eponine & 1 & 2 & \cellcolor{tabcolor3}Zephine & 2 & 3 & \cellcolor{tabcolor7}BaronessT & 6 & 7 & \\
\cellcolor{tabcolor1}Gavroche & 1 & 1 & \cellcolor{tabcolor3}Fantine & 2 & 3 & \cellcolor{tabcolor8}Child1 & 7 & 7 & \\
\cellcolor{tabcolor1}Marius & 1 & 1 & \cellcolor{tabcolor4}Bamatabois & 3 & 4 & \cellcolor{tabcolor8}Child2 & 7 & 7 & \\
\cellcolor{tabcolor1}Mabeuf & 1 & 1 & \cellcolor{tabcolor4}Judge & 3 & 4 & \cellcolor{tabcolor9}Napoleon & 8 & 8 & \\
\cellcolor{tabcolor1}Enjolras & 1 & 1 & \cellcolor{tabcolor4}Champmathieu & 3 & 4 & \cellcolor{tabcolor9}CountessDeLo & 8 & 8 & \\
\cellcolor{tabcolor1}Combeferre & 1 & 1 & \cellcolor{tabcolor4}Brevet & 3 & 4 & \cellcolor{tabcolor9}Geborand & 8 & 8 & \\
\cellcolor{tabcolor1}Prouvaire & 1 & 1 & \cellcolor{tabcolor4}Chenildieu & 3 & 4 & \cellcolor{tabcolor9}Champtercier & 8 & 8 & \\
\cellcolor{tabcolor1}Feuilly & 1 & 1 & \cellcolor{tabcolor4}Cochepaille & 3 & 4 & \cellcolor{tabcolor9}Cravatte & 8 & 8 & \\
\cellcolor{tabcolor1}Courfeyrac & 1 & 1 & \cellcolor{tabcolor5}Gillenormand & 4 & 5 & \cellcolor{tabcolor9}Count & 8 & 8 & \\
\cellcolor{tabcolor1}Bahorel & 1 & 1 & \cellcolor{tabcolor5}MlleGillenormand & 4 & 5 & \cellcolor{tabcolor9}OldMan & 8 & 8 & \\
\cellcolor{tabcolor1}Bossuet & 1 & 1 & \cellcolor{tabcolor5}LtGillenormand & 4 & 5 & \cellcolor{tabcolor9}Labarre & 8 & 8 & \\
\cellcolor{tabcolor1}Joly & 1 & 1 & \cellcolor{tabcolor6}Simplice & 5 & 6 & \cellcolor{tabcolor9}MmeDeR & 8 & 8 & \\
\cellcolor{tabcolor1}Grantaire & 1 & 1 & \cellcolor{tabcolor6}Anzelma & 5 & 6 & \cellcolor{tabcolor9}Isabeau & 8 & 8 & \\
\cellcolor{tabcolor1}Gueulemer & 1 & 2 & \cellcolor{tabcolor6}Woman2 & 5 & 6 & \cellcolor{tabcolor9}Gervais & 8 & 8 & \\
\cellcolor{tabcolor1}Babet & 1 & 2 & \cellcolor{tabcolor6}Toussaint & 5 & 6 & \cellcolor{tabcolor9}Scaufflaire & 8 & 8 & \\
\cellcolor{tabcolor1}Claquesous & 1 & 2 & \cellcolor{tabcolor7}Myriel & 6 & 6 & \cellcolor{tabcolor9}Boulatruelle & 8 & 8 & \\
\cellcolor{tabcolor1}Montparnasse & 1 & 2 & \cellcolor{tabcolor7}MlleBaptistine & 6 & 6 & \cellcolor{tabcolor9}Gribier & 8 & 8 & \\
\cellcolor{tabcolor1}Brujon & 1 & 3 & \cellcolor{tabcolor7}MmeMagloire & 6 & 6 & \cellcolor{tabcolor9}Jondrette & 8 & 8 & \\
\cellcolor{tabcolor1}MmeHucheloup & 1 & 3 & \cellcolor{tabcolor7}Marguerite & 7 & 7 & \cellcolor{tabcolor9}MmeBurgon & 8 & 8 & \\
\cellcolor{tabcolor2}Cosette & 2 & 4 & \cellcolor{tabcolor7}Fauchelevent & 6 & 7 & \cellcolor{tabcolor9}MlleVaubois & 8 & 8 & \\
\cellcolor{tabcolor3}Tholomyes & 2 & 3 & \cellcolor{tabcolor7}Perpetue & 7 & 7 & \cellcolor{tabcolor9}MotherPlutarch & 8 & 8 & \\
\cellcolor{tabcolor3}Listolier & 2 & 3 & \cellcolor{tabcolor7}Woman1 & 7 & 7 & \\
\bottomrule
\end{tabular*}
\end{table}
\section{Introduction}
\label{sec:intro}
Finding dense subgraphs and communities is one of the most
well-studied problems in graph mining.
Techniques for identifying dense subgraphs are used in a large number
of application domains, from biology, to web mining, to analysis of
social and information networks.
Among the many concepts that have been proposed for discovering dense
subgraphs, $k$-{\em cores} are particularly attractive for the
simplicity of their definition and the fact that they can be identified in
linear time.
The $k$-core of a graph is defined as a maximal subgraph in which every
vertex is connected to at least $k$ other vertices within that
subgraph.
A $k$-{\em core decomposition} of a graph consists of finding the set
of all $k$-cores.
A nice property is that the set of all $k$-cores
forms a nested sequence of subgraphs, one included in the next.
This makes the $k$-core decomposition a useful tool for analyzing a
graph, as it identifies areas of increasing centrality and
connectedness, and reveals the structural organization of the
graph.
As a result, $k$-core decomposition has been applied to a number of
different applications, such as
modeling of random graphs~\cite{bollobas1984evolution},
analysis of the internet topology~\cite{Carmi03072007},
social-network analysis~\cite{Seidman:1983tv},
bioinformatics~\cite{bader2003automated},
analysis of connection matrices of the human brain~\cite{hagmann08brain},
graph visualization~\cite{DBLP:journals/corr/abs-cs-0504107},
as well as
influence analysis~\cite{kitsak10influence,Ugander17042012} and team formation~\cite{Bonchi:2014kh}.
The fact that the $k$-core decomposition of a graph gives a chain of
subgraphs in which vertex degrees are higher in the inner cores suggests
that the inner cores are, in a certain sense, denser
or more connected than the outer cores.
As we will show shortly, this statement is not true.
Furthermore, in this paper we show how to obtain a graph decomposition
for which the statement is true, namely, the inner subgraphs of the
decomposition are denser than the outer ones.
To quantify density, we adopt a classic notion
used in the densest-subgraph problem~\cite{Charikar:2000tg, Goldberg:1984up},
where density is defined as the ratio between the edges and the
vertices of a subgraph.
This density definition can also be viewed as the average degree
divided by~2.
Our motivating observation is that $k$-cores are not ordered according
to this density definition.
The next example demonstrates that the innermost core is \emph{not}
necessarily the densest subgraph; in fact, we can increase the
density by either adding or removing vertices.
\begin{example}\em
\label{ex:toy}
Consider the graph $G_1$ shown in Figure~\ref{fig:toy},
consisting of 6 vertices and 9 edges.
The density of the whole graph is $9/6 = 1.5$.
The graph has three $k$-cores:
a $3$-core marked as $C_1$, a
$2$-core marked as $C_2$, and
a $1$-core, corresponding to the whole graph and marked as $C_3$.
The core $C_1$ has density $6/4 = 1.5$
(it contains $6$ edges and $4$ vertices),
while the core $C_2$ has density $8/5 = 1.6$
(it contains $8$ edges and $5$ vertices).
In other words, $C_1$ has lower density than $C_2$,
despite being an inner core.
Let us now consider $G_2$ shown in Figure~\ref{fig:toy}.
This graph has a single core,
namely a $2$-core, containing the whole graph.
The density of this core is equal to $11/8 = 1.375$.
However, the subgraph $B_1$ contains $7$ edges and $5$ vertices,
giving density $7/5 = 1.4$,
which is higher than the density of the only core.
\end{example}
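The figure referenced above is not reproduced here, but the stated counts pin down one consistent instance of $G_1$: a $K_4$ (the $3$-core $C_1$), a fifth vertex with two edges into it, and a sixth pendant vertex. The following sketch verifies the densities on this assumed edge list:

```python
from itertools import combinations

def density(edges, X):
    """d(X) = |E(X)| / |X|: edges with both endpoints in X, divided by |X|."""
    X = set(X)
    return sum(1 for u, v in edges if u in X and v in X) / len(X)

# One graph consistent with G_1 (assumed layout; the figure is not shown):
# K4 on {0,1,2,3}, vertex 4 joined to 0 and 1, vertex 5 joined to 0.
edges = list(combinations(range(4), 2)) + [(0, 4), (1, 4), (0, 5)]

print(density(edges, range(6)))   # C3, the whole graph: 9/6 = 1.5
print(density(edges, range(4)))   # C1, the 3-core:      6/4 = 1.5
print(density(edges, range(5)))   # C2, the 2-core:      8/5 = 1.6
```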
This example motivates us to define an alternative,
more density-friendly, graph decomposition, which we call
\emph{locally-dense decomposition}.
We are interested in a decomposition such that
($i$) the density of the inner subgraphs is higher than the density of
the outer subgraphs,
($ii$) the innermost subgraph corresponds to the densest subgraph,
and
($iii$) we can compute or approximate the decomposition efficiently.
We achieve our goals by first defining a \emph{locally-dense} subgraph,
essentially a subgraph whose density cannot be improved by adding or
deleting vertices.
We show that these subgraphs are arranged into a hierarchy such
that the density decreases as we go towards outer subgraphs and that
the innermost subgraph is in fact the densest subgraph.
We provide two efficient algorithms to discover this hierarchy.
The first algorithm extends the exact algorithm for discovering the
densest subgraph given by Goldberg~\cite{Goldberg:1984up}.
This algorithm is based on solving a minimum cut
problem on a certain graph that depends on a parameter $\alpha$.
Goldberg showed that for a certain value $\alpha$
(which can be found by binary search),
the minimum cut recovers the densest subgraph.
One of our contributions is to shed more light on Goldberg's
algorithm and show that the same construction allows us to discover
\emph{all} locally-dense subgraphs by varying $\alpha$.
Our second algorithm extends the linear-time algorithm by Charikar for
approximating dense subgraphs~\cite{Charikar:2000tg}.
This algorithm first orders vertices by deleting iteratively a vertex
with the smallest degree, and then selects the densest subgraph
respecting the order.
We extend this idea by using the same order, and finding first the
densest subgraph respecting the order, and then iteratively finding
the second densest subgraph containing the first subgraph, and so on.
We show that this algorithm can be executed in linear time and it
achieves a factor-$2$ approximation guarantee.
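The iteration just described can be sketched as follows. This is a quadratic-time illustration with hypothetical names, not the linear-time implementation, and the toy edge list is our own assumption, consistent with the counts of Example~\ref{ex:toy}:

```python
# Toy graph: K4 on {0,1,2,3}, vertex 4 joined to 0 and 1, vertex 5 joined to 0.
edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3), (0,4), (1,4), (0,5)]

def outer_density(X, Y):
    """Edges inside X plus edges between X and Y, divided by |X|."""
    X, Y = set(X), set(Y)
    m = sum(1 for u, v in edges
            if (u in X and v in X)
            or (u in X and v in Y) or (v in X and u in Y))
    return m / len(X)

def greedy_decomposition(order):
    """Given a vertex order (densest end first, e.g. the reverse of the
    peeling order), repeatedly extend the current base with the prefix of
    the remaining order that maximizes outer density w.r.t. the base."""
    bounds = [0]
    while bounds[-1] < len(order):
        base_end = bounds[-1]
        best_j, best_d = None, -1.0
        for j in range(base_end + 1, len(order) + 1):
            d = outer_density(order[base_end:j], order[:base_end])
            if d > best_d:
                best_j, best_d = j, d
        bounds.append(best_j)
    return [set(order[:b]) for b in bounds[1:]]

# Reverse of a minimum-degree removal order for the toy graph.
parts = greedy_decomposition([3, 1, 0, 2, 4, 5])
```

On this graph the first subgraph found is the densest prefix $\{0,\ldots,4\}$ with density $8/5$, followed by the whole vertex set.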
Charikar's algorithm and the algorithm for discovering a $k$-core
decomposition are very similar: they both order vertices by deleting vertices with the
smallest degree.
We show that this connection runs deep and
we demonstrate that a $k$-core decomposition provides a
\mbox{factor-$2$} approximation for the locally-dense decomposition.
On the other hand, our experimental evaluation shows that
in practice $k$-cores have different structure than locally-dense
subgraphs, and as predicted by the theory,
$k$-cores are not always well-aligned with graph density.
The remainder of the paper is organized as follows. We give preliminary notation
in Section~\ref{sec:prel}. We introduce the locally-dense subgraphs in
Section~\ref{sec:monotonic}, present algorithms for discovering the subgraphs
in Section~\ref{sec:discovery}, and describe the connection to $k$-core
decomposition in Section~\ref{sec:core}. We present the related work in
Section~\ref{sec:related} and present the experiments in
Section~\ref{sec:experiments}.
Finally, we conclude the paper with discussion in Section~\ref{sec:conclusions}.
\section{Locally-dense graph \\ decomposition}
\label{sec:monotonic}
In this section we present the main concept introduced in this paper,
the {\em locally-dense decomposition} of a graph.
We also discuss the properties of this decomposition.
We start by defining the concept of a {\em locally-dense subgraph}.
\begin{definition}
A set of vertices $W$ is \emph{locally dense} if there are no
$X \subseteq W$ and $Y$ satisfying
$Y \cap W = \emptyset$ such that
\[
\density{X, W \setminus X} \le \density{Y, W}.
\]
\end{definition}
In other words,
for $W$ to be locally dense
there should not be an $X$ ``inside'' $W$ and a $Y$ ``outside'' $W$
so that the density that $Y$ brings to $W$ is larger than the density
that $X$ brings.
For notational simplicity, we will often refer to these sets of vertices as
subgraphs.
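For intuition, the definition can be checked directly by brute force on a tiny graph. The following is an exponential-time sketch for illustration only; \texttt{outer\_density} implements the quantity $\density{X,Y}$, and the demo edge list is our own assumption, consistent with Example~\ref{ex:toy}:

```python
from itertools import chain, combinations

def outer_density(edges, X, Y):
    """d(X, Y): edges inside X plus edges between X and Y, divided by |X|."""
    X, Y = set(X), set(Y)
    m = sum(1 for u, v in edges
            if (u in X and v in X)
            or (u in X and v in Y) or (v in X and u in Y))
    return m / len(X)

def nonempty_subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(1, len(S) + 1))

def is_locally_dense(V, edges, W):
    """Check the definition directly: there must be no X inside W and Y
    outside W with d(X, W minus X) <= d(Y, W).  Exponential time."""
    W, outside = set(W), set(V) - set(W)
    inner = min(outer_density(edges, X, W - set(X))
                for X in nonempty_subsets(W))
    if not outside:
        return True                  # no candidate Y exists
    outer = max(outer_density(edges, Y, W)
                for Y in nonempty_subsets(outside))
    return inner > outer

# Toy graph: K4 on {0,1,2,3}, vertex 4 joined to 0 and 1, vertex 5 joined to 0.
demo_edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3), (0,4), (1,4), (0,5)]
```

On this graph the check confirms the behavior of Example~\ref{ex:toy}: the $2$-core $\{0,\ldots,4\}$ is locally dense, while the $3$-core $\{0,1,2,3\}$ is not.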
Interestingly, the property of being locally dense induces a nested chain of
subgraphs in $G$.
\begin{proposition}
Let $U$ and $W$ be locally-dense subgraphs.
Then either $U \subseteq W$ or $W \subseteq U$.
\end{proposition}
\begin{proof}
Assume otherwise.
Define $X = U \setminus W$ and
$Y = W \setminus U$.
Both $X$ and $Y$ should be non-empty sets.
Then either
$\density{X, U \cap W} \leq \density{Y, U \cap W}$ or
$\density{X, U \cap W} > \density{Y, U \cap W}$.
Assume the former. This implies
\begin{eqnarray*}
\density{X, U \setminus X} & = & \density{X, U \cap W} \\
& \leq & \density{Y, U \cap W} \\
& \leq & \density{Y, U},
\end{eqnarray*}
which contradicts the fact that $U$ is locally dense.
For the first equality we used the fact that
$U \setminus X = U \cap W$,
while for the last inequality we used the fact that
$\crossedges{Y, U \cap W} \leq \crossedges{Y, U}$.
The case
$\density{X, U \cap W} > \density{Y, U \cap W}$
is similar.
\end{proof}
The proposition implies that the set of locally-dense subgraphs of a
graph forms a nested chain,
in the same way that the set of $k$-cores does.
\begin{corollary}
\label{corollary:chain}
A set of locally-dense subgraphs can be arranged into a sequence
$B_0 \subsetneq B_1 \subsetneq \cdots \subsetneq B_k$,
where $k \leq \abs{V}$. Moreover, $\density{B_{i}, B_{i - 1}} > \density{B_{i + 1}, B_{i}}$ for $1 \leq i < k$.
\end{corollary}
The chain of locally-dense subgraphs of a graph $G$,
as specified by Corollary~\ref{corollary:chain},
defines the {\em locally-dense decomposition} of~$G$.
We proceed to characterize the locally-dense subgraphs of the
decomposition with respect to their {\em global} density in the
whole graph $G$.
We want to characterize the global density of subgraph $B_i$ of the
decomposition.
$B_i$ cannot be denser than the previous subgraph $B_{i-1}$ in the
decomposition; instead, we want to measure the density that the
additional vertices $S_i=B_i\setminus B_{i-1}$ bring.
This density involves edges among vertices of $S_i$ and edges from
$S_i$ to the previous subgraph $B_{i-1}$.
This is captured precisely by the concept of {\em outer density}
$\density{B_i,B_{i-1}}$ defined in the previous section.
As the following proposition shows,
the outer density of $B_i$ with respect to $B_{i-1}$ is maximized over
all subgraphs that contain $B_{i-1}$.
In other words,
$B_i$ is the densest subgraph we can choose after $B_{i-1}$,
given the containment constraint.
\begin{proposition}
\label{prop:maximal}
Let $\set{B_i}$ be the chain of locally-dense subgraphs.
Then $B_0 = \emptyset$, $B_k = V$, and $B_{i}$ is the densest subgraph properly containing $B_{i - 1}$,
\[
B_{i} = \arg \max_{W \supsetneq B_{i - 1}} \density{W, B_{i - 1}}.
\]
\end{proposition}
To prove the proposition we will use the following two lemmas, which we state without proof.
\begin{lemma}
\label{lem:add}
Let $X \subseteq Y$ be two sets of vertices. Assume $Z \cap Y = \emptyset$.
If $\density{Z, Y} \geq \density{Y, X}$, then $\density{Y \cup Z, X} \geq \density{Y, X}$.
\end{lemma}
\begin{lemma}
\label{lem:delete}
Let $X \subseteq Y$ be two sets of vertices. Assume $Z \subseteq Y \setminus X$.
If $\density{Z, Y \setminus Z} < \density{Y, X}$, then $\density{Y \setminus Z, X} > \density{Y, X}$.
\end{lemma}
\begin{proof}[of Proposition~\ref{prop:maximal}]
Assume inductively that the proposition holds for all $j < i$.
Let $U = \arg \max_{W \supsetneq B_{i - 1}} \density{W, B_{i - 1}}$.
We will first show that $U$ is locally dense.
We argue that there are no sets $X$ and $Y$ with
$X\subseteq U$ and $Y\cap U=\emptyset$ that can serve as
certificates for $U$ not being locally dense.
Fix any $X \subseteq U$.
Define $X_j = X \cap (B_j \setminus B_{j - 1})$ for $j < i$, and
$X_i = X \cap (U \setminus B_{i - 1})$.
Define also $U_j = (U \setminus X) \cup B_{j - 1}$ for $j \leq i$.
Note that $B_j \subseteq U_j \cup X_j$.
If $X_i \neq \emptyset$, we have $\density{X_i, U \setminus X_i} \geq \density{U, B_{i - 1}}$; otherwise
Lemma~\ref{lem:delete} implies that we can delete $X_i$ from $U$ and obtain a denser subgraph.
Similarly, for $j < i$ with $X_j \neq \emptyset$, we have
\[
\begin{split}
\density{X_j, U_j \setminus X_j} & \geq \density{X_j, B_j \setminus X_j} \\
& \geq \density{B_j, B_{j - 1}} \\
& > \density{U, B_{i - 1}},
\end{split}
\]
where the first inequality follows from $B_j \setminus X_j \subseteq U_j \setminus X_j$,
the second inequality is implied by Lemma~\ref{lem:delete} and the induction assumption on $j$, and
the last inequality is implied by Lemma~\ref{lem:add} and the induction assumption on $j + 1, \ldots, i - 1$.
These inequalities imply
\begin{eqnarray*}
\density{X, U \setminus X} & = & \sum_{j = 1, X_j \neq \emptyset}^i \frac{\abs{X_j}}{\abs{X}} \density{X_j, U_j \setminus X_j} \\
& \geq & \sum_{j = 1}^i \frac{\abs{X_j}}{\abs{X}} \density{U, B_{i - 1}} \\
& = & \density{U, B_{i - 1}}.
\end{eqnarray*}
Consider also any set $Y$ with $Y \cap U = \emptyset$.
Due to the optimality of $U$ and Lemma~\ref{lem:add} we must have $\density{Y, U} < \density{U, B_{i - 1}}$.
We conclude that for any $X$ and $Y$ with
$X\subseteq U$ and $Y\cap U=\emptyset$
it is $\density{X, U \setminus X}>\density{Y, U}$,
which shows that $U$ is locally dense.
Assume now $U = B_j$ for $j \geq i$.
We need to show that $j=i$.
Assume otherwise.
Since $B_i$ is locally dense, we have $\density{B_j \setminus B_i, B_i}<\density{B_i, B_{i-1}}$.
With a simple calculation we can show that $\density{B_j \setminus B_i, B_i}<\density{B_j, B_{i-1}}$.
Lemma~\ref{lem:delete} implies that removing
$B_j \setminus B_i$ produces a denser subgraph, which contradicts
the optimality of $U$.
\end{proof}
As a consequence of the previous proposition we can characterize the first
subgraph in the decomposition.
\begin{corollary}
Let $\set{B_i}$ be a locally-dense decomposition of a graph $G$.
Then $B_1$ is the densest subgraph of $G$.
\end{corollary}
The above discussion motivates the problem of
locally-dense graph decomposition,
which is the focus of this paper.
\begin{problem}
\label{problem:LDGD}
Given a graph $G=(V,E)$
find a maximal sequence of locally-dense subgraphs
\[
\emptyset = B_0 \subsetneq B_1 \subsetneq \cdots \subsetneq B_k=V.
\]
\end{problem}
\section{Preliminaries}
\label{sec:prel}
\spara{Graph density.}
Let $G=(V,E)$ be a graph with $|V|=n$ vertices and $|E|=m$ edges.
Given a subset of vertices $X\subseteq V$,
it is common to define $\edges{X}=\set{(x,y)\in E \mid x,y \in X}$, i.e.,
the edges of $G$ that have both end-points in $X$.
The {\em density} of the vertex set $X$ is then defined to be
\[
\density{X} = \frac{\abs{E(X)}}{\abs{X}},
\]
that is, half of the {\em average degree} of the subgraph induced
by~$X$.
The set of vertices $X\subseteq V$ that maximizes the density measure
$\density{X}$ is the {\em densest subgraph} of $G$.\!\footnote{We should point out that density is also often defined as $\abs{E(X)} / {\abs{X} \choose 2}$. This is not the case for this paper.}
The problem of finding the densest subgraph can be solved in
polynomial time.
A very elegant solution that involves a mapping to a series of
minimum-cut problems was given by Goldberg~\cite{Goldberg:1984up}.
As the fastest algorithm to solve the minimum-cut problem runs in
${\cal O}(mn)$ time, this approach is not scalable to very large graphs.
On the other hand, there exists a linear-time algorithm that provides a
factor-$2$ approximation to the densest-subgraph problem~\cite{Asahiro:1996uq,Charikar:2000tg}.
This is a greedy algorithm, which starts with the input graph, and
iteratively removes the vertex with the lowest degree, until left
with an empty graph. Among all subgraphs considered during this
vertex-removal process, the algorithm returns the densest.
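A compact sketch of this peeling procedure follows, using a lazy-deletion heap; the names are ours, and the demo edge list is an assumption matching the counts of Example~\ref{ex:toy}:

```python
import heapq
from collections import defaultdict

def densest_subgraph_greedy(V, edges):
    """Charikar's peeling: repeatedly delete a minimum-degree vertex and
    return the densest subgraph seen along the way (2-approximation)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, m = set(V), len(edges)
    heap = [(len(adj[v]), v) for v in alive]
    heapq.heapify(heap)
    best, best_density = set(alive), m / len(alive)
    while len(alive) > 1:
        d, v = heapq.heappop(heap)
        if v not in alive or d != len(adj[v]):
            continue                       # stale heap entry; skip
        alive.discard(v)
        m -= len(adj[v])                   # edges lost with v
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        del adj[v]
        if m / len(alive) > best_density:
            best, best_density = set(alive), m / len(alive)
    return best, best_density

# Demo: K4 on {0,1,2,3}, vertex 4 joined to 0 and 1, vertex 5 joined to 0.
demo_edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3), (0,4), (1,4), (0,5)]
best, dens = densest_subgraph_greedy(range(6), demo_edges)
```

On the demo graph the peeling removes the pendant vertex first, and the densest intermediate subgraph $\{0,\ldots,4\}$ (density $8/5$) is returned.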
Next we will provide graph-density definitions that relate pairs of vertex
sets. Given two non-overlapping sets of vertices $X$ and $Y$ we first define the
{\em cross edges} between $X$ and $Y$ as
\[
\crossedges{X, Y} = \set{(x, y) \in E \mid x \in X, y \in Y}.
\]
We then define the {\em marginal edges} from $X$ with respect to $Y$.
Those are the edges that have one end-point in $X$ and the other
end-point in either $X$ or~$Y$, that is,
\[
\marginaledges{X,Y}=\edges{X} \cup \crossedges{X,Y}.
\]
The set $\marginaledges{X,Y}$ represents the additional edges that
will be included in the induced subgraph of $Y$ if we expand $Y$ by
adding~$X$.
Assume that $X$ and $Y$ are non-overlapping. Then, we define the \emph{outer density} of $X$ with
respect to $Y$ as
\[
\density{X, Y} = \frac{\abs{\marginaledges{X, Y}}}{\abs{X}}.
\]
Again, this is the ``extra density'' that we bring to $Y$ if we expand
it by appending $X$ to it.
We will often be dealing with the case where $X$ and $Y$ overlap, and
we are then interested in the outer density of the vertices of $X$ that are not already
included in~$Y$. Hence, we extend the definition of outer density to this more general case by defining
\[
\density{X, Y} = \density{X \setminus Y, Y}.
\]
\spara{$\mathbf{k}$-cores.}
We briefly review the basic background regarding $k$-cores.
The concept
was introduced by Seidman~\cite{Seidman:1983tv}.
Given a graph $G=(V,E)$,
a set of vertices $X\subseteq V$ is a $k$-core if
every vertex in the subgraph induced by $X$ has degree at least $k$,
and $X$ is maximal with respect to this property.
A $k$-core of $G$ can be obtained by recursively removing all the vertices
of degree less than $k$, until all vertices in the remaining graph
have degree at least $k$.
It is not hard to see that
if $\set{C_i}$ is the set of all distinct $k$-cores of $G$
then $\set{C_i}$ forms a nested chain
\[
\emptyset = C_0\subsetneq C_1 \subsetneq \cdots \subsetneq C_\ell = V.
\]
Furthermore, the set of vertices $S_k$ that belong in a $k$-core but
not in a $(k-1)$-core is called $k$-\emph{shell}.
The $k$-\emph{core decomposition} of $G$ is the process of identifying
all $k$-cores (and all $k$-shells).
Therefore, the $k$-core decomposition of a graph identifies
progressively the internal cores and decomposes the graph shell by
shell.
A linear-time algorithm to obtain the $k$-core decomposition
was given by Matula and Beck~\cite{Matula1983smallest}.
The algorithm starts by provisionally assigning each vertex $v$ to a
core of index $\degree{v}$, an upper bound on the correct core of a vertex.
It then repeatedly removes the vertex with the smallest degree, and
updates the core index of the neighbors of the removed vertex.
Note the similarity of this algorithm with the $2$-approximation
algorithm for the densest-subgraph problem~\cite{Charikar:2000tg}.
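The peeling procedure just described can be sketched compactly; the version below uses naive minimum-degree selection, which is quadratic in the worst case, rather than the bucket structure that makes the algorithm of Matula and Beck run in linear time.

```python
def core_numbers(adj):
    """Coreness of every vertex by repeatedly peeling a minimum-degree vertex.

    `adj` maps each vertex to the set of its neighbours.  The peeled vertex
    gets core index max(k so far, its current degree); peeling order does
    not affect the resulting core numbers.
    """
    degree = {v: len(adj[v]) for v in adj}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=degree.get)   # next vertex to peel
        k = max(k, degree[v])                # core index can only grow
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return core

# Triangle {0,1,2} with a pendant vertex 3: the 2-core is the triangle,
# and vertex 3 sits in the 1-shell.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(core_numbers(adj))
```

On this graph the triangle vertices get core number $2$ and the pendant vertex gets $1$, matching the nested-chain picture above.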
\section{Related work}
\label{sec:related}
Our paper is related to previous work on discovering dense subgraphs,
clique-like structures, and hierarchical communities.
We review some representative work on these topics.
\spara{Clique relaxations.}
The densest possible subgraph is a clique.
Unfortunately, finding large cliques is computationally
intractable~\cite{DBLP:conf/focs/Hastad96}.
Additionally, the notion of clique does not provide a robust
definition for practical situations, as a few absent edges may
completely destroy the clique.
To address these issues, researchers have come up with relaxed
clique definitions.
One relaxation, the $k$-plex, was suggested by Seidman and
Foster~\cite{seidman10kflex}.
In a $k$-plex, each vertex can have at most $k - 1$ absent edges.
Unfortunately, discovering maximal $k$-plexes is also an \textbf{NP}-hard
problem~\cite{balasundaram:2011:kflex}.
An alternative relaxation of a clique is the $n$-clique,
a maximal subgraph in which every pair of vertices is connected
by a path of length at most $n$, possibly passing outside of the
subgraph~\cite{Bron:1973:AFC:362342.362367}.
According to this definition, a clique is a $1$-clique.
As maximal $n$-cliques may induce sparse subgraphs,
the concept of $n$-clans was also proposed, which additionally requires the diameter of
the subgraph to be at most $n$~\cite{mokken:79:clans}. Since a $1$-clan corresponds
to a maximal clique, discovering $n$-clans is also a computationally intractable problem.
\spara{Quasi-cliques.}
For the definition of graph density we have chosen to
work with $\density{X}$, the average degree of the subgraph induced by
$X$. While this is a popular density definition, there are other alternatives.
One such alternative is to divide the number of edges present in the
subgraph by the total number of possible edges, that is,
to divide by ${n \choose 2}$.
This would give us a normalized density score that is
between $0$ and $1$.
Subgraphs that maximize this density definition are called
{\em quasi-cliques}, and algorithms for enumerating all
quasi-cliques, which can be exponentially many,
have been proposed by Abello et al.~\cite{abello02clique} and Uno~\cite{Uno:2010:EAS:1712671.1712672}.
However, the definition of quasi-cliques is problematic on its own:
a single edge already achieves the maximal density of $1$.
Consequently, additional objectives are needed.
One natural objective is to maximize the size of a subgraph with density
$1$; however, this makes the problem equivalent to
finding a maximum clique, which, as mentioned above, is a
computationally intractable problem~\cite{DBLP:conf/focs/Hastad96}.
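As a concrete illustration of the degeneracy just noted, the snippet below contrasts the two density measures on a single edge and on a $10$-vertex clique; it assumes the average-degree density is computed as $\abs{E}/\abs{V}$, since the excerpt does not pin down the exact normalization of $\density{X}$.

```python
from math import comb

def avg_degree_density(n_vertices, n_edges):
    # density used for the densest-subgraph problem: edges per vertex
    # (assumed normalization; the paper's exact convention is not shown here)
    return n_edges / n_vertices

def normalized_density(n_vertices, n_edges):
    # quasi-clique style density: fraction of possible edges present
    return n_edges / comb(n_vertices, 2)

# A single edge is already a "perfect" quasi-clique ...
print(normalized_density(2, 1))                                       # 1.0
# ... while a 10-vertex clique dominates it under the average-degree measure.
print(avg_degree_density(2, 1), avg_degree_density(10, comb(10, 2)))  # 0.5 4.5
```

This is exactly why maximizing normalized density alone is not a meaningful objective without a size constraint.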
\spara{Alternative definitions for density.}
Other definitions of graph density have been proposed.
Recently, Tsourakakis proposed to measure density by counting
triangles, instead of counting edges~\cite{DBLP:journals/corr/Tsourakakis14}.
Interestingly enough, it is possible to find an approximate densest
subgraph under this definition.
An interesting future direction for our work is to study if the
decomposition proposed in this paper can be extended for the
triangle-density definition.
Density definitions of the form $g(\abs{E}) - \alpha h(\abs{V})$, where
$g$ and $h$ are some increasing functions, were studied by Tsourakakis
et al.~\cite{DBLP:conf/kdd/TsourakakisBGGT13},
with specific focus on $h(x) = {x \choose 2}$.
It is not known whether the densest-subgraph problem according to this
definition is polynomial-time solvable or \textbf{NP}-hard.
Finally, a variant of $\density{X}$ adapted to directed graphs,
along with a polynomial-time discovery algorithm, was suggested by
Khuller and Saha~\cite{khuller09dense}.
Such a definition could serve for defining decompositions of directed
graphs, which is also left for future work.
\spara{Hierarchical communities.}
The problem of discovering a hierarchy of $k$ nested communities, with shells as
homogeneous as possible and under the constraint that inner communities are denser, was
studied by the authors of this paper~\cite{DBLP:conf/pkdd/TattiG13}.
Here, $\abs{E} / {\abs{V} \choose 2}$ was used as density definition,
and a heuristic algorithm was proposed.
Unfortunately, no exact polynomial-time algorithm is known for this problem.
As a potential future
work it would be interesting to see whether the ideas presented in this paper
can be merged with the idea of discovering $k$-homogeneous communities.
\section{Segmentation problem}
\begin{proposition}
\end{proposition}
\begin{proof}
Let $C_1, \ldots, C_k$ be the optimal solution, and
assume that there is $C_i$ that is not locally-dense
and let $X$ and $Y$ be the violating sets.
Next we argue that we can safely assume that $Y \subseteq C_{i + 1}$ and
$X \cap C_{i - 1} = \emptyset$. We will split the argument in two cases:
Case ($i$): $Y \nsubseteq C_{i + 1}$ and
Case ($ii$): $Y \subseteq C_{i + 1}$.
Assume the first case. If $\density{C_{i + 1} \setminus C_i, C_i} \geq \density{X, C_i
\setminus X}$, then redefine $Y$ as $C_{i + 1} \setminus C_i$. In such case,
$X$ and $Y$ are still violating the local density but now we can use Case
($ii$). Assume that $\density{C_{i + 1} \setminus C_i, C_i} < \density{X, C_i
\setminus X}$.
Define $Y_1 = Y \cap C_{i + 1}$ and $Y_2 = Y \setminus Y_1$.
Assume that $Y_1 \neq \emptyset$ and $\density{Y_1, C_i} \geq \density{Y, C_i}$.
Note that
\[
\density{Y_1, C_{i + 1}} \geq \density{Y_1, C_{i}} \geq \density{Y, C_i} \geq \density{X, C_i
\setminus X} > \density{C_{i + 1} \setminus C_i, C_i}.
\]
We can now redefine $Y$ as $Y_1$, $X$ as $C_{i + 1} \setminus C_i$, and increase $i$ by 1,
and repeat the argument.
\end{proof}
\section{Introduction}
The importance of existence and uniqueness theorems for initial value and boundary value problems (IVPs and BVPs) involving the classical derivative operator is indisputable because, without them, one cannot understand modelled systems correctly or make predictions about how they behave. Recently, with the popularity of fractional derivative operators such as the Riemann-Liouville (R-L) and Caputo (C) derivatives, equations involving these operators have begun to be studied in detail (see \cite{Del}-\cite{Yoruk}). However, such a generalization leads to some difficulties and differences. For instance, unlike initial value problems involving the classical derivative, the existence of a continuous solution to some IVPs in the sense of
the R-L derivative strictly depends on the initial values and on smoothness conditions on the functions on the right-hand side of the equations in the IVPs. To support this claim, one can refer to \cite{San3}, where it is shown that an initial value problem including a non-linear fractional differential equation of order $\sigma\in (0,1)$ has no continuous solution when the problem has a non-homogeneous initial value and the right-hand side of the equation is continuous on $[0,T]\times\mathbb{R}$. A similar issue arises in our investigation of the existence and uniqueness of solutions to the following problem
\begin{equation} \label{intvalue}
\begin{cases}
&D^{\sigma}\omega(x) = f\big(x,\omega(x),D^{\sigma-1}\omega(x)\big),\quad x>0 \\
&\omega(0)=0, \quad D^{\sigma-1}\omega\left(x\right)|_{x=0} =b,
\end{cases}
\end{equation}
where $\sigma\in(1,2),$ $b\neq 0,$ $f$ will be specified later and $D^\sigma$ represents the Riemann-Liouville fractional derivative of order $\sigma,$ which is given by
$$D^{\sigma}\omega(x) =\frac{1}{\Gamma(2-\sigma)}\frac{d^{2}}{dx^{2}}\int_{0}^{x}\frac{\omega(t)}{(x-t)^{\sigma-1}}dt.
$$
The equation in \eqref{intvalue} was first considered by Yoruk et al.~\cite{Yoruk} in the case when the second initial value is also homogeneous ($b=0$) and the right-hand side function is continuous on $[0,T]\times \mathbb{R}\times\mathbb{R}.$ They gave Krasnoselskii-Krein, Roger and Kooi-type uniqueness results. Since problem \eqref{intvalue} considered here has a non-homogeneous initial condition, we investigate the existence of its solutions in a convenient space of functions under the following condition:
\begin{itemize}
\item[(C1)]
\quad Let $f\left(x,t_{1},t_{2}\right)\in C \big(\left(0,T\right]\times\mathbb{R}\times\mathbb{R}\big)$ and $x^{\sigma-1} f\left(x,t_{1},t_{2}\right) \in C\big(\left[0,T\right]\times\mathbb{R}\times\mathbb{R}\big),$
\end{itemize}
where $C(X)$ represents the class of continuous functions defined on $X.$
Moreover, under some appropriate conditions we establish Nagumo-type, Krasnoselskii-Krein-type and Osgood-type uniqueness results for the problem. To prove them, we follow the approaches introduced in \cite{Agarwal},\cite{San3},\cite{Yoruk}, generalizing some of the definitions made there; in addition, we use tools of Lebesgue spaces such as the Hölder inequality.
\section{Preliminaries}
We begin with definitions of R-L integral and R-L derivative of higher order. The lower terminal points of integrals in their formulas will be taken as zero.
\begin{definition}
\label{def1} The Riemann-Liouville integral of order $\sigma>0$ of a function $\omega\left(x\right)$ is defined by
\begin{equation}
I^{\sigma}\omega\left(x\right) :=\frac{1}{\Gamma\left(\sigma\right)}\int_{0}^{x}\frac{\omega\left(t\right)}{\left(x-t\right)^{1-\sigma}}dt
\end{equation}
provided that the integral is pointwise defined on $\mathbb{R}_{0}^{+}.$
\end{definition}
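As a numerical sanity check of this definition, one can compare plain quadrature against the standard power rule $I^{\sigma}t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\sigma+1)}x^{\mu+\sigma}$ (a classical identity, not stated in the text). For $\sigma>1$ the kernel $(x-t)^{\sigma-1}$ is bounded, so a midpoint rule suffices.

```python
from math import gamma

def rl_integral(omega, sigma, x, n=200_000):
    """Midpoint-rule approximation of I^sigma omega(x) from Definition 1.

    For sigma > 1 the kernel (x - t)^(sigma - 1) is bounded on [0, x], so
    plain quadrature is adequate; for sigma < 1 the kernel is singular at
    t = x and a smarter rule would be needed.
    """
    h = x / n
    total = sum(omega((i + 0.5) * h) * (x - (i + 0.5) * h) ** (sigma - 1.0)
                for i in range(n))
    return total * h / gamma(sigma)

# Power-rule check with omega(t) = t (mu = 1) at sigma = 1.5, x = 1:
sigma, x = 1.5, 1.0
numeric = rl_integral(lambda t: t, sigma, x)
exact = gamma(2.0) / gamma(2.0 + sigma) * x ** (1.0 + sigma)
print(numeric, exact)   # both approximately 0.3009
```

The two values agree to several digits, confirming the normalization $1/\Gamma(\sigma)$ in the definition.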
\begin{definition}
The Riemann-Liouville derivative of order $n-1<\sigma<n$ ($n\in\mathbb{N}$) of a function $\omega\left(x\right)$ is given by
\begin{equation} \label{def2}
D^{\sigma}\omega\left(x\right) =\frac{1}{\Gamma \left( n-\sigma\right)
}\frac{d^{n}}{dx^{n}}\int_{0}^{x}\frac{\omega\left(t \right) }{\left(x-t
\right) ^{\sigma-n+1}}dt,
\end{equation}
provided that the right side is pointwise defined on $\mathbb{R}_{0}^{+}.$
\end{definition}
Compositional relations are frequently used in the literature (see \cite{Kilbas}, \cite{Podlubny}) for converting an initial value or boundary value problem into a corresponding integral equation. The following two lemmas \cite{Podlubny} are related to that.
\begin{lemma}\label{lemma}
For a function $\omega(x)$ such that $D^{\sigma}\omega(x)$ with $n-1<\sigma<n$ is integrable, the compositional relation
\begin{equation}
I^{\sigma}D^{\sigma}\omega(x)=\omega(x)-\sum_{m=1}^{n}D^{\sigma-m}\omega(x)|_{x=0} \frac{x^{\sigma-m}}{\Gamma(\sigma-m+1)}
\end{equation}
is satisfied, where it is assumed that $D^{\sigma-n}\omega(x)=I^{n-\sigma}\omega(x)$ when $\sigma<n.$
In the case $n=2,$ the above formula turns into
\begin{equation}
I^{\sigma}D^{\sigma}\omega(x)=\omega(x)-D^{\sigma-1}\omega(x)|_{x=0} \frac{x^{\sigma-1}}{\Gamma(\sigma)}-D^{\sigma-2}\omega(x)|_{x=0} \frac{x^{\sigma-2}}{\Gamma(\sigma-1)}.
\end{equation}
Moreover, if $\omega$ is continuous on $[0,T],$ then $D^{\sigma-2}\omega(x)|_{x=0}=I^{2-\sigma}\omega(x)|_{x=0}=0,$ so that
\begin{equation}
I^{\sigma}D^{\sigma}\omega(x)=\omega(x)-D^{\sigma-1}\omega(0)\frac{x^{\sigma-1}}{\Gamma(\sigma)}
\end{equation}
holds.
\end{lemma}
\begin{lemma}\label{lemma1.1}
For an $\sigma$ R-L integrable function $\omega(x),$ the well-known rule
$$D^{\sigma}I^{\sigma}\omega(x)=D^{2}I^{2-\sigma}I^{\sigma}\omega(x)=D^{2}I^{2}\omega(x)=\omega(x)$$
holds.
\end{lemma}
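On power functions, both operators act through gamma-function factors via the standard power rules (classical identities, not stated in the excerpt): $I^{\sigma}x^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\sigma+1)}x^{\mu+\sigma}$ and $D^{\sigma}x^{\nu}=\frac{\Gamma(\nu+1)}{\Gamma(\nu-\sigma+1)}x^{\nu-\sigma}.$ Composing the two coefficients gives a quick check of Lemma \ref{lemma1.1} on monomials:

```python
from math import gamma, isclose

def rl_int_coeff(mu, sigma):
    # I^sigma x^mu = coeff * x^(mu + sigma)   (standard power rule)
    return gamma(mu + 1.0) / gamma(mu + sigma + 1.0)

def rl_der_coeff(nu, sigma):
    # D^sigma x^nu = coeff * x^(nu - sigma)   (standard power rule)
    return gamma(nu + 1.0) / gamma(nu - sigma + 1.0)

sigma, mu = 1.7, 0.3
# Apply I^sigma to x^mu, then D^sigma to the resulting power x^(mu + sigma):
roundtrip = rl_int_coeff(mu, sigma) * rl_der_coeff(mu + sigma, sigma)
print(roundtrip)   # 1.0 up to rounding: D^sigma I^sigma x^mu = x^mu
```

The coefficients cancel exactly, as the rule $D^{\sigma}I^{\sigma}\omega=\omega$ predicts.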
Solutions to problem \eqref{intvalue} will be investigated in the space defined below~\cite{Del}:
\begin{theorem}\label{Th1} The space of continuous functions defined on $[0,T]$ whose R-L fractional derivative of order $\beta,$ $0<\beta<1,$ is also continuous on $[0,T]$ is a Banach space when endowed with the following norm:
\begin{align*}
||\omega||_{\beta}=||\omega||_{\infty}+||D^{\beta}\omega||_{\infty},
\end{align*}
where $||.||_{\infty}$ is the supremum norm defined on the class of continuous functions. This space will be denoted by $C^{\beta}([0,T]).$
\end{theorem}
The local existence of solutions to problem \eqref{intvalue} will be proved with the aid of Schauder fixed point theorem \cite{Zeidler}:
\begin{theorem}\label{Th2} Let $\mathcal{C}$ be a closed, bounded, convex subset of a Banach space $X:=\big\{u:I\to\mathbb{R} \ \text{continuous} : I\subset\mathbb{R} \ \text{a closed and bounded interval}\big\}.$ If the operator $\mathcal{S}:\mathcal{C}\rightarrow \mathcal{C}$ is continuous and $\mathcal{S}(\mathcal{C})$ is an equicontinuous set on $I,$
then $\mathcal{S}$ has at least one fixed point in $\mathcal{C}.$
\end{theorem}
\section{Main Results}
One of the mathematical tools used for showing the existence and uniqueness of the desired type of solution to a given initial or boundary value problem is to first convert the problem into an integral equation; one then investigates the existence and uniqueness of solutions of the integral equation instead of the associated problem. Here, we follow this approach with the aid of the lemma given below:
\begin{lemma} \label{lemma1} Under condition (C1),
if $\omega \in C^\sigma [0,T]$ is a solution of problem (\ref{intvalue}), then $\omega\in C^{\sigma-1}[0,T]$ is a solution of the following integral equation
\begin{align}
\omega(x)&=\frac{b}{\Gamma(\sigma)}x^{\sigma-1}+\frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{f(t,\omega(t),D^{\sigma-1}\omega(t))}{(x-t)^{1-\sigma}}dt \label{intequ}
\end{align}
and vice versa.
\end{lemma}
\begin{proof} We assume that $\omega \in C^\sigma [0,T]$ is a solution of problem (\ref{intvalue}). If we apply $I^{\sigma}$ to both sides of the equation in the problem and use
$$I^{\sigma}D^{\sigma}\omega(x)=\omega(x)-\frac{D^{\sigma-1}\omega(0)}{\Gamma(\sigma)}x^{\sigma-1} \quad \text{for all} \quad \omega \in C^\sigma [0,T],$$
then integral equation (\ref{intequ}) is obtained.
Now we suppose that $\omega\in C^\sigma [0,T]$ is a solution of integral equation (\ref{intequ}), and let us show that $\omega$ is a solution of problem (\ref{intvalue}). If $D^{\sigma}$ is applied to both sides of (\ref{intequ}), and the relation
$$D^{\sigma}I^{\sigma}\omega(x)=\omega(x) \ \ \text{for all} \ \ \omega\in C^\sigma [0,T]$$
is used, then one can observe that $\omega \in C[0,T]$ satisfies the equation in (\ref{intvalue}). Moreover, let us prove that $\omega \in C^\sigma [0,T]$ also fulfills the initial conditions. By a change of variables and condition (C1) we have
\begin{align}
\omega(0)=\lim_{x\rightarrow 0^+}\omega(x)&=\lim_{x\rightarrow0^+}\left[\frac{b}{\Gamma(\sigma)}x^{\sigma-1}+\frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{f(t,\omega(t),D^{\sigma-1}\omega(t))}{(x-t)^{1-\sigma}}dt\right]\nonumber \\
&=\frac{1}{\Gamma(\sigma)}\lim_{x\rightarrow 0^+}\int_{0}^{x}\frac{t^{\sigma-1}f(t,\omega(t),D^{\sigma-1}\omega(t))}{t^{\sigma-1}(x-t)^{1-\sigma}}dt \nonumber \\
&=\frac{1}{\Gamma(\sigma)}\lim_{x\rightarrow 0^+}x\int_{0}^{1}\frac{(x\tau)^{\sigma-1} f(x\tau,\omega(x\tau),D^{\sigma-1}\omega(x\tau))}{\tau^{\sigma-1} (1-\tau)^{1-\sigma}}d\tau =0,\label{lem2}
\end{align}
showing that $\omega$ satisfies the first initial condition in (\ref{intvalue}).
Now let us show that $\omega$ satisfies the second initial condition in (\ref{intvalue}). If $D^{\sigma-1}$ is applied to both sides of (\ref{intequ}), and the relations $D^{\sigma-1}I^{\sigma}h(x)=Ih(x)$ and $D^{\sigma-1}\big(x^{\sigma-1}/\Gamma(\sigma)\big)=1$ are used, then we can first get
\begin{equation}
D^{\sigma-1}\omega(x)=b+\int_{0}^{x}f(t,\omega(t),D^{\sigma-1}\omega(t))dt.
\end{equation}
From here, we then obtain
\begin{align}
D^{\sigma-1}\omega\left(0\right)&= b+\lim_{x\rightarrow 0^+}\int_{0}^{x}f(t,\omega(t),D^{\sigma-1}\omega(t))dt\nonumber \\
&=b+\lim_{x\rightarrow 0^+} \int_{0}^{x}\frac{1}{t^{\sigma-1}}t^{\sigma-1}f(t,\omega(t),D^{\sigma-1}\omega(t))dt \nonumber\\
&=b+\lim_{x\rightarrow 0^+} \int_{0}^{1}\frac{x^{2-\sigma}}{\tau^{\sigma-1}}(x\tau)^{\sigma-1}f(x\tau,\omega(x\tau),D^{\sigma-1}\omega(x\tau))d\tau=b, \label{lem3}
\end{align}
since $2-\sigma>0$ and $t^{\sigma-1}f(t,\omega(t),D^{\sigma-1}\omega(t))$ is continuous on $[0,T].$
Consequently, it has been shown that a solution of (\ref{intequ}) solves problem (\ref{intvalue}) whenever condition (C1) is satisfied.
\end{proof}
\begin{theorem} [Existence] Let condition (C1) be satisfied, and assume that there exist positive real numbers $r_{1},$ $r_{2},$ $r$ and $\mathcal{M}$ with $r_{1}+r_{2}\leq r$ such that
$\left|x^{\sigma-1}f(x,\omega,v)\right|\leq \mathcal{M} \quad \text{for all} \quad (x,\omega,v)\in I=\left[0,T\right]\times\left[-r_{1},r_{1}\right] \times \left[ b-r_{2},b+r_{2}\right].$
Then problem (\ref{intvalue}) admits at least one solution in $C^\sigma [0,T_{0}],$ where
\begin{equation}\label{T0}
T_{0}= \begin{cases}
\quad T \quad &\text{if} \quad T<\frac{r}{C(b,\sigma,\mathcal{M})}, \\
\quad \frac{r}{C(b,\sigma,\mathcal{M})} \quad &\text{if} \quad T\geq \frac{r}{C(b,\sigma,\mathcal{M})}\geq 1, \\
\quad \left[ \frac{r}{C(b,\sigma,\mathcal{M})}\right]^{1/(\sigma-1)} \quad &\text{if} \quad T\geq \frac{r}{C(b,\sigma,\mathcal{M})}, \quad \frac{r}{C(b,\sigma,\mathcal{M})}<1 \quad\text{and} \quad 1<\sigma\leq 1.5, \\
\quad \left[ \frac{r}{C(b,\sigma,\mathcal{M})}\right]^{1/(2-\sigma)} \quad &\text{if} \quad T\geq \frac{r}{C(b,\sigma,\mathcal{M})}, \quad \frac{r}{C(b,\sigma,\mathcal{M})}<1 \quad\text{and} \quad 1.5<\sigma<2,
\end{cases}
\end{equation}
and
\begin{equation}\label{CbsigmaM}
C(b,\sigma,\mathcal{M})=
\left[ \frac{\left| b\right| }{\Gamma(\sigma)}+\mathcal{M}\left(\frac{1+\Gamma(3-\sigma)}{2-\sigma}\right) \right].
\end{equation}
\end{theorem}
\begin{proof}
As is known from Lemma \ref{lemma1}, solutions of problem \eqref{intvalue} are exactly the solutions of integral equation \eqref{intequ}. Moreover, the fixed points of the operator $\mathcal{S}:C^{\sigma-1} [0,T_{0}]\to C^{\sigma-1} [0,T_{0}]$ defined by
\begin{align} \label{soperator}
\mathcal{S}\omega(x)=\frac{b}{\Gamma(\sigma)}x^{\sigma-1}+\frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{f(t,\omega(t),D^{\sigma-1}\omega(t))}{(x-t)^{1-\sigma }}dt
\end{align}
coincide with the solutions of the integral equation. For this reason, it is sufficient to prove that the operator $\mathcal{S}$ admits at least one fixed point. To this end, it will be shown that $\mathcal{S}$ satisfies the hypotheses of the Schauder fixed-point theorem. Let us start by showing the following inclusion to be valid:
$$\mathcal{S}(B_r)\subset B_r$$
where
\begin{align*}
B_r=\{\omega \in C^{\sigma-1}[0,T_{0}]: ||\omega||_{\infty}+||D^{\sigma-1}\omega-b||_{\infty} \leq r \}
\end{align*}
is a closed, bounded and convex subset of $C^{\sigma-1}[0,T_{0}].$ According to the norm on $C^{\sigma-1} [0,T_{0}],$ upper bounds for $\left\|\mathcal{S}\omega \right\|_{\infty}$ and $\left\| D^{\sigma-1}\mathcal{S}\omega-b\right\|_{\infty}$ can be determined as follows:
\begin{align}
\left|\mathcal{S}\omega(x)\right| &\leq \frac{\left|b\right| }{\Gamma(\sigma)}x^{\sigma-1}+\frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{\left|t^{\sigma-1}f(t,\omega(t),D^{\sigma-1}\omega(t))\right|}{t^{\sigma-1}(x-t)^{1-\sigma }}dt \nonumber \\
&\leq \frac{\left|b\right| }{\Gamma(\sigma)}x^{\sigma-1}+\frac{\mathcal{M}}{\Gamma(\sigma)}\int_{0}^{1}\frac{x}{\tau^{\sigma-1}(1-\tau)^{1-\sigma }}d\tau
\leq \frac{\left|b\right| }{\Gamma(\sigma)}x^{\sigma-1}+\Gamma(2-\sigma)\mathcal{M}x \label{soperator1}
\end{align}
and
\begin{align}
\left|D^{\sigma-1}\mathcal{S}\omega(x)-b\right| &\leq \int_{0}^{x}\frac{\left|t^{\sigma-1}f(t,\omega(t),D^{\sigma-1}\omega(t))\right|}{t^{\sigma-1}}dt \leq \mathcal{M}x^{2-\sigma} \int_{0}^{1}\tau^{1-\sigma}d\tau= \frac{\mathcal{M}x^{2-\sigma}}{2-\sigma}. \label{soperator2}
\end{align}
From (\ref{soperator1}) and (\ref{soperator2}),
\begin{align}
\left|\mathcal{S}\omega(x)\right|+\left|D^{\sigma-1}\mathcal{S}\omega(x)-b\right| \leq \frac{\left|b\right| }{\Gamma(\sigma)}x^{\sigma-1}+\Gamma(2-\sigma)\mathcal{M}x+ \frac{\mathcal{M}x^{2-\sigma}}{2-\sigma}
\end{align}
is obtained. Taking the supremum of the right-hand side of the above inequality over $[0,T_0]$ for a $T_0 >0$,
\begin{align}
\left|\mathcal{S}\omega(x)\right|+\left|D^{\sigma-1}\mathcal{S}\omega(x)-b\right| \leq C(b,\sigma,\mathcal{M}) T_0 ^{\alpha}
\end{align}
can be written, where $\alpha\in\Omega=\left\lbrace \sigma-1,1,2-\sigma\right\rbrace$ and $\alpha$ depends on the values of $b,\mathcal{M},\sigma$ and $r.$ To determine $T_0$ and $\alpha$, let $$C(b,\sigma,\mathcal{M}) T_0 ^{\alpha}=r.$$
If $\frac{r}{C(b,\sigma,\mathcal{M})}<1,$ then $T_0 <1$ for any $\alpha\in \Omega;$ if $\frac{r}{C(b,\sigma,\mathcal{M})}\geq1,$ then $T_0 \geq 1$
for any $\alpha\in \Omega.$ Thus,
\begin{align}
\sup_{x\in [0,T_0]}\left[ \left|\mathcal{S}\omega(x)\right|+\left|D^{\sigma-1}\mathcal{S}\omega(x)-b\right|\right] \leq C(b,\sigma,\mathcal{M}) T_0 ^{\alpha}=r,
\end{align}
where $$T_0:=\left[\frac{r}{C(b,\sigma,\mathcal{M})}\right]^{1/\alpha} $$
and
\begin{equation}\label{alpha}
\alpha= \begin{cases}
\quad 1 \quad &\text{if} \quad \frac{r}{C(b,\sigma,\mathcal{M})}\geq 1 \\
\quad \sigma-1\quad &\text{if} \quad \frac{r}{C(b,\sigma,\mathcal{M})}<1 \quad \text{and} \quad 1<\sigma\leq1.5\\
\quad 2-\sigma \quad &\text{if} \quad \frac{r}{C(b,\sigma,\mathcal{M})}<1 \quad \text{and} \quad 1.5\leq\sigma<2.\\
\end{cases}
\end{equation}
Consequently, for all cases we obtain $$||\mathcal{S}\omega||_{\infty}+||D^{\sigma-1}\mathcal{S}\omega-b||_{\infty}\leq r,$$ which is the desired result.
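The case analysis above can be condensed into a small helper; `existence_interval` is a hypothetical name, the numeric values below are only for illustration, and the computation follows the relation $T_{0}=\left[r/C(b,\sigma,\mathcal{M})\right]^{1/\alpha}$ derived in the proof.

```python
from math import gamma

def existence_interval(b, sigma, M, r, T):
    """T0 and the exponent alpha from the case analysis above (a sketch).

    Returns (T, None) when T itself is already small enough.  Note that
    Gamma(3 - sigma) = (2 - sigma) * Gamma(2 - sigma) is what merges the
    two estimates into the single constant C(b, sigma, M).
    """
    C = abs(b) / gamma(sigma) + M * (1.0 + gamma(3.0 - sigma)) / (2.0 - sigma)
    ratio = r / C
    if T < ratio:
        return T, None
    if ratio >= 1.0:
        alpha = 1.0
    elif sigma <= 1.5:
        alpha = sigma - 1.0
    else:
        alpha = 2.0 - sigma
    return ratio ** (1.0 / alpha), alpha

print(existence_interval(b=1.0, sigma=1.2, M=1.0, r=0.5, T=10.0))
```

For $\sigma=1.2$ with $r/C<1$ the exponent is $\alpha=\sigma-1=0.2$, so the guaranteed interval $[0,T_0]$ shrinks rapidly as $r/C$ decreases.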
Now, let us prove the equicontinuity of $\mathcal{S}(B_{r})\subset C^{\sigma-1} [0,T_{0}].$ For any $\omega\in B_{r},$ the functions $\omega(x)$ and $D^{\sigma-1}\omega(x)$ are uniformly continuous on $[0,T_{0}]$ and $x^{\sigma-1}f(x,\omega,v)$ is uniformly continuous on $I;$ since compositions of uniformly continuous functions are uniformly continuous as well, the function $x^{\sigma-1}f(x,\omega(x),D^{\sigma-1}\omega(x))$ is uniformly continuous on $[0,T_{0}].$
Therefore, for any given $\epsilon>0,$ one can find a
$\delta=\delta(\epsilon)>0$ so that for all $x_{1},x_{2}\in [0,T_{0}]$ with $\left|x_{1}-x_{2}\right|<\delta$ it holds that
$$\left \vert x_{1}^{\sigma-1}f(x_{1},\omega(x_{1}),D^{\sigma-1}\omega(x_{1}))-x_{2}^{\sigma-1}f(x_{2},\omega(x_{2}),D^{\sigma-1}\omega(x_{2}))\right \vert <K\epsilon ,$$
where $K=\left(T_{0}\Gamma(2-\sigma)+\frac{T_{0}^{2-\sigma}}{2-\sigma}\right)^{-1}.$
It follows that
\begin{align*}
\big|\mathcal{S}&\omega\left(x_{1}\right)-\mathcal{S}\omega\left(x_{2}\right) \big|+\big|D^{\sigma-1}\mathcal{S}\omega\left(x_{1}\right)-D^{\sigma-1}\mathcal{S}\omega\left(x_{2}\right) \big| \\
&\leq \int_{0}^{1} \frac{\left \vert h\left( \eta x_{1}\right) -h\left( \eta x_{2}\right) \right\vert}{\Gamma \left(\sigma\right)\eta^{\sigma-1} \left( 1-\eta \right) ^{1-\sigma}}x_{1}\, d\eta+\int_{0}^{1} \frac{\left \vert h\left( \eta x_{1}\right) -h\left( \eta x_{2}\right) \right\vert}{\eta^{\sigma-1} }x_{1}^{2-\sigma}d\eta\\
&<T_{0}\Gamma(2-\sigma)K\epsilon+\frac{T_{0}^{2-\sigma}}{2-\sigma}K\epsilon=\epsilon,
\end{align*}
where $h(x)=x^{\sigma-1}f\left(x,\omega\left(x\right),D^{\sigma-1}\omega(x) \right).$ This implies that $\mathcal{S}(B_{r})$ is an equicontinuous set in $C^{\sigma-1} [0,T_{0}].$ \\
Finally, the continuity of $\mathcal{S}$ on $B_{r}$ will be proven. Assume that $\left \{\omega_{k}\right \}_{k=1}^{\infty}\subset B_{r}$ is a sequence with $\omega_{k}\stackrel{C^{\sigma-1} [0,T_{0}]}{\longrightarrow} \omega$ as $k\rightarrow \infty.$ Then, one can easily conclude that $\omega_{k}$ and $D^{\sigma-1}\omega_{k}$
converge uniformly to $\omega$ and $D^{\sigma-1}\omega,$ respectively. Together with the uniform continuity of $x^{\sigma-1}f(x,\omega,v)$ on $I=\left[0,T\right]\times\left[-r_{1},r_{1}\right] \times \left[ b-r_{2},b+r_{2}\right],$ this leads to
\begin{align*}
&\left \|\mathcal{S}\omega_{k}-\mathcal{S}\omega\right \|_{\sigma-1}=\sup_{x\in [0,T_{0}]} \left \vert \frac{1}{\Gamma \left(\sigma\right)}\int_{0}^{x}\frac{ f\left(t,\omega_{k}(t),D^{\sigma-1}\omega_{k}(t)\right) -f\left(t ,\omega(t),D^{\sigma-1}\omega(t)\right) }{\left(x-t\right)^{1-\sigma}}dt \right \vert\\
&\qquad+\sup_{x\in [0,T_{0}]} \left \vert \int_{0}^{x} \left[f\left(t,\omega_{k}(t),D^{\sigma-1}\omega_{k}(t)\right) -f\left(t ,\omega(t),D^{\sigma-1}\omega(t)\right)\right] dt \right \vert \\
&\leq \sup_{x\in [0,T_{0}]} \int_{0}^{1}\frac{(\eta x)^{\sigma-1} \left|f\left(\eta x,\omega_{k}(\eta x),D^{\sigma-1}\omega_{k}(\eta x)\right)-f\left( \eta x ,\omega(\eta x),D^{\sigma-1}\omega(\eta x)\right)\right|}{\Gamma \left(\sigma\right)\eta^{\sigma-1}\left( 1-\eta \right) ^{1-\sigma}}\, x\,d\eta \\
&\qquad+\sup_{x\in [0,T_{0}]} \int_{0}^{1}\frac{(\eta x)^{\sigma-1} \left|f\left(\eta x,\omega_{k}(\eta x),D^{\sigma-1}\omega_{k}(\eta x)\right)-f\left( \eta x ,\omega(\eta x),D^{\sigma-1}\omega(\eta x)\right)\right|}{\eta^{\sigma-1}}\, x^{2-\sigma}d\eta \\
&\rightarrow 0 \quad \text{as} \quad k\rightarrow \infty.
\end{align*}
In conclusion, since the hypotheses of Theorem \ref{Th2} are fulfilled, the operator $\mathcal{S}$ admits at least one fixed point in $C^{\sigma-1} [0,T_{0}],$ which, by Lemma \ref{lemma1}, is a solution of problem (\ref{intvalue}) as well.
\end{proof}
The mean value theorem for the R-L derivative of order $\sigma\in (0,1)$ was correctly given in \cite{San3}. Its counterpart for order $\sigma\in (1,2)$ reads as follows:
\begin{lemma}\label{lemma1.2}
Let $\sigma \in (1,2)$ and $\omega\in C^{\sigma-1}\left( \left[ 0,T\right]\right) .$ Then, for each $x\in(0,T]$ there is a point $\mu(x)$ with $0<\mu(x)<x $ so that
$$\omega(x)=D^{\sigma-1}\omega(0)\frac{x^{\sigma-1}}{\Gamma(\sigma)}+\Gamma(2-\sigma)\,x\,(\mu(x))^{\sigma-1}D^{\sigma}\omega(\mu(x))$$
is satisfied.
\end{lemma}
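For orientation, here is a brief sketch of where such a formula comes from, in the form it is applied later in \eqref{unique5}; we assume in addition that $t^{\sigma-1}D^{\sigma}\omega(t)$ extends continuously to $[0,T]$ so that the integral mean value theorem applies:

```latex
\begin{align*}
\omega(x)-D^{\sigma-1}\omega(0)\frac{x^{\sigma-1}}{\Gamma(\sigma)}
  &= I^{\sigma}D^{\sigma}\omega(x)
   = \frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{t^{\sigma-1}D^{\sigma}\omega(t)}{t^{\sigma-1}(x-t)^{1-\sigma}}\,dt \\
  &= (\mu(x))^{\sigma-1}D^{\sigma}\omega(\mu(x))\cdot
     \frac{1}{\Gamma(\sigma)}\int_{0}^{x}t^{1-\sigma}(x-t)^{\sigma-1}\,dt
   = \Gamma(2-\sigma)\,x\,(\mu(x))^{\sigma-1}D^{\sigma}\omega(\mu(x)),
\end{align*}
```

using $\frac{1}{\Gamma(\sigma)}\int_{0}^{x}t^{1-\sigma}(x-t)^{\sigma-1}\,dt=\frac{\Gamma(2-\sigma)\Gamma(\sigma)}{\Gamma(2)\,\Gamma(\sigma)}\,x=\Gamma(2-\sigma)\,x$ and an intermediate point $\mu(x)\in(0,x)$ from the mean value theorem for integrals.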
The lemma can be proved by following the approach used in \cite{San3}, so we omit the proof here. With the help of this lemma we can obtain the Nagumo-type uniqueness result:
\begin{theorem} (\textit{Nagumo type uniqueness}) Let $1<\sigma<2,$ $0<T<\infty$ and let condition {\rm \textbf{(C1)}} be satisfied. Moreover, assume that there exists a positive real number $L\leq \frac{2-\sigma}{T(1+\Gamma(3-\sigma))}$ such that the inequality
\begin{align}\label{unique1}
x^{\sigma-1}\left|f(x,t_{1,1},t_{2,1})-f(x,t_{1,2},t_{2,2})\right|\leq L\left( \left|t_{1,1}-t_{1,2}\right|+\left|t_{2,1}-t_{2,2}\right|\right)
\end{align}
is fulfilled for all $x\in[0,T]$ and for all $t_{1,i},t_{2,i}\in\mathbb{R}$ with $i=1,2.$ Then, \eqref{intvalue} has at most one solution in the space of $C^{\sigma-1}(\left[0,T_0\right]).$
\begin{proof}
We have just shown the existence of a solution to problem \eqref{intvalue} in the previous theorem. For the uniqueness, suppose to the contrary that \eqref{intvalue} admits two different
solutions $\omega_{1}$ and $\omega_{2}$ in the space $C^{\sigma-1}(\left[0,T_0\right]).$ Let us define a function $\Phi(x)$ of the form
$$
\Phi(x):=\begin{cases}
\left|\omega_{1}(x)-\omega_{2}(x)\right|+\left|D^{\sigma-1}\omega_{1}(x)-D^{\sigma-1}\omega_{2}(x)\right|,& x>0, \\
0,& x=0.
\end{cases}
$$
Since $\omega_{1},\omega_{2}\in C^{\sigma-1}(\left[0,T_0\right]),$ the continuity of $\Phi(x)$ on $(0,T_0]$ is obvious. For its continuity at $x=0,$
\begin{align*}
0\leq\lim_{x\to 0^{+}}\Phi(x)&=\lim_{x\to 0^{+}} \frac{1}{\Gamma(\sigma)}\left|\int_{0}^{x}\frac{f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)}{\left(x-t\right) ^{1-\sigma}}
dt\right| \\
&+\lim_{x\to 0^{+}}\left|\int_{0}^{x} f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)
dt\right| \\
&\leq\int_{0}^{1}\frac{\lim_{x\to 0^{+}}x\left| H\left(x\eta,\omega_{1}\left( x\eta\right)\right)-H\left(x\eta,\omega_{2}\left( x\eta\right)\right) \right| }{\eta^{\sigma-1}\left(1-\eta\right)^{1-\sigma}}d\eta \\
&+\int_{0}^{1} \frac{\lim_{x\to 0^{+}}x^{2-\sigma} \left| H\left(x\eta,\omega_{1}\left( x\eta\right)\right)-H\left(x\eta,\omega_{2}\left( x\eta\right)\right) \right|}{\eta^{\sigma-1}}d\eta=0,
\end{align*}
where $H(x,\omega(x))=x^{\sigma-1}f\left(x,\omega\left(x\right),D^{\sigma-1}\omega(x) \right)$ and we made the change of variable $t=x\eta$ and used condition (C1), respectively. Consequently, $\lim_{x\to 0^{+}}\Phi(x)=0=\Phi(0).$ \\
Since the solutions $\omega_{1}$ and $\omega_{2}$ are assumed to be distinct and $\Phi(x)\geq 0$ on $[0,T_0],$ we can choose a point $x_{0}\in (0,T_0]$ so that
\begin{align*}
0<\Phi(x_{0})&=\left|\omega_{1}(x_{0})-\omega_{2}(x_{0})\right| +\left|D^{\sigma-1}\omega_{1}(x_{0})-D^{\sigma-1}\omega_{2}(x_{0})\right|.
\end{align*}
By applying the mean value theorem in Lemma \ref{lemma1.2} to $\omega_{1}-\omega_{2}$ (note that $D^{\sigma-1}(\omega_{1}-\omega_{2})(0)=0$),
\begin{align}\label{unique5}
\left|\omega_{1}(x_{0})-\omega_{2}(x_{0})\right|&=\Gamma(2-\sigma)x_{0}\left|x_{0,1}^{\sigma-1}D^{\sigma}(\omega_{1}-\omega_{2})(x_{0,1})\right|\nonumber \\
&=\Gamma(2-\sigma)x_{0}\,x_{0,1}^{\sigma-1}\left|f\left(x_{0,1},\omega_{1}(x_{0,1}),D^{\sigma-1}\omega_{1}(x_{0,1})\right)-f\left(x_{0,1},\omega_{2}(x_{0,1}),D^{\sigma-1}\omega_{2}(x_{0,1})\right)\right|
\end{align}
is obtained for some $x_{0,1}\in (0,x_{0}).$
Secondly, for the estimation of
$\left|D^{\sigma-1}\omega_{1}(x_{0})-D^{\sigma-1}\omega_{2}(x_{0})\right|,$ we have from the mean value theorem for integrals of classical calculus
\begin{align}\label{unique6}
\left|D^{\sigma-1}\omega_{1}(x_{0})-D^{\sigma-1}\omega_{2}(x_{0})\right|&\leq
\int_{0}^{x_0}\frac{t^{\sigma-1}\left|f(t,\omega_{1}(t),D^{\sigma-1}\omega_{1}(t))-f(t,\omega_{2}(t),D^{\sigma-1}\omega_{2}(t))\right|}{t^{\sigma-1}}dt \nonumber\\
&=\frac{x^{2-\sigma}_{0}}{2-\sigma}x_{0,2}^{\sigma-1}\left|f\left(x_{0,2},\omega_{1}(x_{0,2}),D^{\sigma-1}\omega_{1}(x_{0,2})\right)-f\left(x_{0,2},\omega_{2}(x_{0,2}),D^{\sigma-1}\omega_{2}(x_{0,2})\right)\right|,
\end{align}
where $x_{0,2}\in (0,x_{0}).$
We specify $x_{1}$ as one of the points $x_{0,1}$ and $x_{0,2}$ so that $\left| H(x_{1},\omega_{1}(x_{1}))-H(x_{1},\omega_{2}(x_{1}))\right|$ $ :=\max\left(\left| H(x_{0,1},\omega_{1}(x_{0,1}))-H(x_{0,1},\omega_{2}(x_{0,1}))\right|,\left| H(x_{0,2},\omega_{1}(x_{0,2}))-H(x_{0,2},\omega_{2}(x_{0,2}))\right| \right) . $
Thus, from \eqref{unique5} and \eqref{unique6}, we have
\begin{align} \label{unique7}
0&<\Phi(x_{0})\leq\left(\Gamma(2-\sigma)x_{0}+\frac{x^{2-\sigma}_{0}}{2-\sigma}\right) \left| H(x_{1},\omega_{1}(x_{1}))-H(x_{1},\omega_{2}(x_{1}))\right|\nonumber\\
&\leq T \left( \frac{1+\Gamma(3-\sigma)}{2-\sigma}\right)x_{1}^{\sigma-1} \left|f\left(x_{1},\omega_{1}(x_{1}),D^{\sigma-1}\omega_{1}(x_{1})\right)-f\left(x_{1},\omega_{2}(x_{1}),D^{\sigma-1}\omega_{2}(x_{1})\right)\right|\nonumber\\
&\leq TL \left( \frac{1+\Gamma(3-\sigma)}{2-\sigma}\right) \left( \left|\omega_{1}(x_{1})-\omega_{2}(x_{1})\right| +\left|D^{\sigma-1}\omega_{1}(x_{1})-D^{\sigma-1}\omega_{2}(x_{1})\right|\right) \leq\Phi(x_{1})\nonumber
\end{align}
since $L\leq \frac{2-\sigma}{T(1+\Gamma(3-\sigma))}.$ Repeating the same procedure for the point $x_{1}$ enables us to find a point $x_{2}\in(0,x_{1})$ so that
$0<\Phi(x_{0})\leq\Phi(x_{1})\leq \Phi(x_{2}).$
Continuing in the same way, the sequence
$\left\{x_{n}\right\}_{n=1}^{\infty}\subset [0,x_{0})$
can be constructed so that $x_{n}\to 0$ and
\begin{equation}\label{uniq}
0<\Phi(x_{0})\leq\Phi(x_{1})\leq\Phi(x_{2})\leq\cdots\leq\Phi(x_{n})\leq\cdots
\end{equation}
However, the continuity of $\Phi(x)$ at $x=0$ together with
$x_{n}\to 0$ leads to $\Phi(x_{n})\to \Phi(0)=0,$ which
contradicts \eqref{uniq}. Consequently, IVP
\eqref{intvalue} possesses a unique solution.
\end{proof}
\end{theorem}
\begin{theorem} (\textit{Krasnoselskii-Krein type uniqueness}) Let $1<\sigma<2$ and $ T^{*}_{0}=\min\left\lbrace T_{0},1 \right\rbrace,$ where $T_0$ is defined by \eqref{T0}. Let condition (C1) be fulfilled. Furthermore, suppose that there exists an $L>0$ so that
\begin{align}\label{unique2}
x^{\sigma-1}\left|f(x,t_{1,1},t_{2,1})-f(x,t_{1,2},t_{2,2})\right|\leq \frac{L}{2}\left( \left|t_{1,1}-t_{1,2}\right|+\left|t_{2,1}-t_{2,2}\right|\right)
\end{align}
holds for all $x\in[0,T]$ and for all $t_{1,i},t_{2,i}\in\mathbb{R}$ with $i=1,2,$ and that there exist $C>0$ and $\alpha \in (0,1)$ satisfying $(1-\sigma)(1-\alpha)-L(1-\alpha)+1>0$ such that
\begin{align}\label{unique3}
x^{\sigma-1}\left|f(x,t_{1,1},t_{2,1})-f(x,t_{1,2},t_{2,2})\right|\leq C\left( \left|t_{1,1}-t_{1,2}\right|^{\alpha}+x^{\alpha(\sigma-1)}\left|t_{2,1}-t_{2,2}\right|^{\alpha}\right) \end{align}
holds for all $x\in[0,T]$ and for all $t_{1,i},t_{2,i}\in\mathbb{R}$ with $i=1,2.$
Then, problem \eqref{intvalue} has a unique solution in the space $C^{\sigma-1}\left(\left[0,T^{*}_{0}\right]\right).$
\begin{proof}
As in the previous theorem, we first assume that problem \eqref{intvalue} has two distinct solutions $\omega_{1}(x)$ and $\omega_{2}(x)$ in $C^{\sigma-1}\left(\left[0,T^{*}_{0}\right]\right);$ we will show by contradiction that this cannot happen. To this end, define $\Phi_{1}(x)=\left|\omega_{1}(x)-\omega_{2}(x)\right|$ and $\Phi_{2}(x)=\left|D^{\sigma-1}\omega_{1}(x)-D^{\sigma-1}\omega_{2}(x)\right|,$ and estimate each function by using condition {\rm \textbf{(C1)}} and inequality \eqref{unique3}. We first have
\begin{align*}
\Phi_{1}(x)&\leq \frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{\left(x-t\right) ^{1-\sigma}}
dt \\
&\leq \frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}
dt \\
&\leq \frac{C}{\Gamma(\sigma)} \int_{0}^{x}\frac{\left[\Phi^{\alpha}_{1}(t)+t^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(t) \right] }{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}
dt \\
&\leq \frac{C}{\Gamma(\sigma)}\left( \int_{0}^{x}\left(\frac{1}{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}\right) ^{q}
dt\right)^{1/q}\left( \int_{0}^{x}\left[\Phi^{\alpha}_{1}(t)+t^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(t) \right]^{p}
dt\right)^{1/p} \\
&\leq \frac{C}{\Gamma(\sigma)}\left[\Gamma(1+(1-\sigma)q)\Gamma(1+(\sigma-1)q)\right]^{1/q}x^{1/q}\Omega^{1/p}(x)
\end{align*}
where we used the Hölder inequality with $q>1$ satisfying $(1-\sigma)q+1>0$ and $p=q/(q-1),$ and where $\Omega(x)$ is defined by
$$\Omega(x)=\int_{0}^{x}\left[\Phi^{\alpha}_{1}(t)+t^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(t) \right]^{p}dt.$$
From here, we obtain the estimate
\begin{align}\label{estimation1}
\Phi^{p}_{1}(x)\leq Cx^{p/q}\Omega(x),
\end{align}
where the constant $C$ is not specified and may change from line to line throughout the proof. In addition, an upper bound for $\Phi_{2}(x)$ can be found as follows:
\begin{align*}
\Phi_{2}(x)&\leq \int_{0}^{x} \left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|
dt \\
&\leq \int_{0}^{x}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1} }
dt \\
&\leq C \int_{0}^{x}\frac{\left[\Phi^{\alpha}_{1}(t)+t^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(t) \right] }{t^{\sigma-1}}
dt \\
&\leq C \left( \int_{0}^{x}\left(\frac{1}{t^{\sigma-1}}\right)^{q}
dt\right)^{1/q}\left( \int_{0}^{x}\left[\Phi^{\alpha}_{1}(t)+t^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(t) \right]^{p}
dt\right)^{1/p} \\
&\leq \frac{C}{(1+(1-\sigma)q)^{1/q}}x^{(1+q(1-\sigma))/q}\Omega^{1/p}(x).
\end{align*}
From here,
\begin{align}\label{estimation2}
\Phi^{p}_{2}(x)\leq Cx^{(1+q(1-\sigma))p/q}\Omega(x)
\end{align}
is then obtained. Using the estimates \eqref{estimation1} and \eqref{estimation2} in the derivative of $\Omega(x),$ we have
\begin{align}\label{estimation3}
\Omega'(x)&=\left[\Phi^{\alpha}_{1}(x)+x^{\alpha(\sigma-1)}\Phi^{\alpha}_{2}(x) \right]^{p}\leq 2^{p-1}\left[\left( \Phi^{\alpha}_{1}(x)\right) ^{p}+x^{p\alpha(\sigma-1)}\left( \Phi^{\alpha}_{2}(x)\right) ^{p}\right] \nonumber\\
&= 2^{p-1}\left[\left( \Phi^{p}_{1}(x)\right)^{\alpha} +x^{p\alpha(\sigma-1)}\left( \Phi^{p}_{2}(x)\right)^{\alpha} \right]\nonumber\\
&\leq 2^{p-1}\left[C^{\alpha}x^{\alpha p/q}\Omega^{\alpha}(x)+ C^{\alpha}x^{p\alpha(\sigma-1)}x^{(1+q(1-\sigma))\alpha p/q}\Omega^{\alpha}(x)\right]\leq Cx^{\alpha p/q}\Omega^{\alpha}(x).
\end{align}
If we multiply both sides of the above inequality by $(1-\alpha)\Omega^{-\alpha}(x),$
\begin{align*}
(1-\alpha)\Omega^{-\alpha}(x)\Omega'(x)=\frac{d}{dx}\left[\Omega^{1-\alpha}(x)\right] \leq Cx^{\alpha p/q}.
\end{align*}
is then obtained. Integrating both sides of this inequality over $[0,x],$ we get
\begin{align*}
\Omega^{1-\alpha}(x)\leq Cx^{(\alpha p+q)/q},
\end{align*}
since $\Omega(0)=0.$ Consequently, this leads to the following estimate on $\Omega(x)$:
\begin{align}\label{estimation7}
\Omega(x)\leq Cx^{(\alpha p+q)/((1-\alpha)q)}.
\end{align}
By considering \eqref{estimation1} and \eqref{estimation2} together with \eqref{estimation7}, one can conclude that
\begin{align*}
\Phi^{p}_{1}(x)\leq Cx^{p/q}\Omega(x)\leq Cx^{p/q}x^{(\alpha p+q)/((1-\alpha)q)}=Cx^{(p+q)/((1-\alpha)q)},
\end{align*}
or
\begin{align} \label{est4}
\Phi_{1}(x)\leq Cx^{(p+q)/((1-\alpha)pq)}=Cx^{1/(1-\alpha)},
\end{align}
and
\begin{align*}
\Phi^{p}_{2}(x)\leq Cx^{p(1+q(1-\sigma))/q}\Omega(x)\leq Cx^{p(1+q(1-\sigma))/q}x^{(\alpha p+q)/((1-\alpha)q)}=Cx^{\frac{(1-\alpha)(1-\sigma)pq+p+q}{(1-\alpha)q}},
\end{align*}
or
\begin{align} \label{est5}
\Phi_{2}(x)\leq Cx^{(1-\sigma)+\frac{p+q}{(1-\alpha)pq}}=Cx^{(1-\sigma)+\frac{1}{1-\alpha}},
\end{align}
since $\frac{p+q}{pq}=\frac{1}{p}+\frac{1}{q}=1.$ \\
Let us now define $$\Psi(x)=x^{-L}\max\left\lbrace \Phi_{1}(x),\Phi_{2}(x)\right\rbrace,$$
where $L(1-\alpha)<1+(1-\sigma)(1-\alpha).$
If $\Phi_{1}(x)=\max\left\lbrace \Phi_{1}(x),\Phi_{2}(x)\right\rbrace,$ then from \eqref{est4} we get
$$0\leq \Psi(x)\leq x^{\frac{1}{1-\alpha}-L},$$
or in the case of $\Phi_{2}(x)=\max\left\lbrace \Phi_{1}(x),\Phi_{2}(x)\right\rbrace,$ by the inequality \eqref{est5} we have the following
$$0\leq \Psi(x)\leq x^{(1-\sigma)+\frac{1}{1-\alpha}-L}=x^{\frac{(1-\sigma)(1-\alpha)-L(1-\alpha)+1}{1-\alpha}}.$$
In both cases, $\Psi(x)$ is continuous on $[0,T^{*}_{0}]$ and $\Psi(0)=0.$ Let us now show that $\Psi(x)\equiv 0$ on $[0,T^{*}_{0}].$ Suppose, to the contrary, that $\Psi(x)\not\equiv 0.$ Then $\Psi(x)>0$ for some $x,$ and by continuity there exists a point $x_{1}\in [0,T^{*}_{0}]$ at which $\Psi$ attains its maximum value. Thus, let $$M=\Psi(x_{1})=\max_{x\in [0,T^{*}_{0}]}\Psi(x).$$
Assuming first that $\Psi(x_{1})=x_{1}^{-L}\Phi_{1}(x_{1}),$ we have
\begin{align}
M&=\Psi(x_{1})=x_{1}^{-L}\Phi_{1}(x_{1}) \\
&\leq\frac{x_{1}^{-L}}{\Gamma(\sigma)}\int_{0}^{x_{1}}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1}\left(x_{1}-t\right) ^{1-\sigma}}dt \nonumber
\\
&\leq\frac{Lx_{1}^{-L}}{2\Gamma(\sigma)}\int_{0}^{x_{1}}\left(x_{1}-t\right) ^{\sigma-1}t^{1-\sigma}\left[\Phi_{1}(t)+\Phi_{2}(t) \right] dt \leq\frac{Lx_{1}^{-L}}{\Gamma(\sigma)}\int_{0}^{x_{1}}\left(x_{1}-t\right) ^{\sigma-1}t^{1-\sigma+L}\Psi(t) dt \nonumber \\
&\leq M\frac{L\Gamma(2-\sigma+L)}{\Gamma(2+L)}x_{1}< M \nonumber
\end{align}
is obtained, since $x_{1}\leq T^{*}_{0}\leq 1$ and $\frac{L\Gamma(2-\sigma+L)}{\Gamma(2+L)}<1$ for $1<\sigma<2.$ This is a contradiction.
On the other hand, when $\Psi(x)=x^{-L}\Phi_{2}(x),$ we get
\begin{align}
M&=\Psi(x_{1})=x_{1}^{-L}\Phi_{2}(x_{1})\\
&\leq x_{1}^{-L} \int_{0}^{x_{1}}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1}}dt \nonumber
\\
&\leq\frac{Lx_{1}^{-L}}{2}\int_{0}^{x_{1}}t^{1-\sigma}\left[\Phi_{1}(t)+\Phi_{2}(t) \right] dt \leq L x_{1}^{-L} \int_{0}^{x_{1}}t^{1-\sigma+L}\Psi(t) dt \nonumber \\
&\leq M\frac{L}{L+2-\sigma}x^{2-\sigma}_{1}< M, \nonumber
\end{align}
which is a contradiction as well.
Consequently, $\Psi(x)\equiv 0$ on $[0,T^{*}_{0}],$ which gives the uniqueness of solutions of the considered problem.
\end{proof}
\end{theorem}
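The contradiction step of the proof above relies on the bound $\frac{L\Gamma(2-\sigma+L)}{\Gamma(2+L)}<1$ for $1<\sigma<2.$ As a quick hedged numerical spot-check (illustrative values only, not a proof), this can be verified with the standard library:

```python
import math

def kk_ratio(sigma: float, L: float) -> float:
    # The constant L * Gamma(2 - sigma + L) / Gamma(2 + L) appearing in
    # the contradiction step of the Krasnoselskii--Krein argument.
    return L * math.gamma(2 - sigma + L) / math.gamma(2 + L)

# Spot-check that the ratio stays below 1 for a few sample values of
# 1 < sigma < 2 and L > 0.
for sigma in (1.2, 1.5, 1.9):
    for L in (0.5, 1.0, 2.0):
        assert kk_ratio(sigma, L) < 1, (sigma, L)
```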
\begin{theorem} [Osgood-type uniqueness] Let $1<\sigma<2,$ let $T_0$ be defined by \eqref{T0}, and let condition (C1) be satisfied. Furthermore, suppose that for all $x\in[0,T]$ and for all $t_{1,i},t_{2,i}\in\mathbb{R}$ with $i=1,2,$ the inequality
\begin{align}\label{osgood}
x^{\sigma-1}\left|f(x,t_{1,1},t_{2,1})-f(x,t_{1,2},t_{2,2})\right|\leq C \left( g\left(\left|t_{1,1}-t_{1,2}\right|^{p}+\left|t_{2,1}-t_{2,2}\right|^{p}\right)\right) ^{1/p}
\end{align}
is fulfilled, where $p>1$ is the conjugate exponent of $q>1$ satisfying $1+(1-\sigma)q>0,$
$$C^{q}\geq 2\max\left( b\Gamma^{q}(\sigma)[T_0\Gamma(1+(1-\sigma)q)\Gamma(1+(\sigma-1)q)]^{-1} , (1+(1-\sigma)q) T_{0}^{-1-q(1-\sigma)} \right) ,$$
and $g$ is a continuous, non-negative and non-decreasing function on $[0,T_0]$ with $g(0)=0$ satisfying
\begin{align}\label{osgood3}
\lim_{\epsilon\to 0^{+}}\int_{\epsilon}^{\gamma}\frac{du}{g(u)}=\infty
\end{align}
for any $\gamma>0.$
Then, \eqref{intvalue} has a unique solution in the space $C^{\sigma-1}\left(\left[0,T^{*}_{0}\right]\right).$
\begin{proof} As in the previously given uniqueness theorems, we assume that there exist two distinct solutions $\omega_{1}(x)$ and $\omega_{2}(x)$ of problem \eqref{intvalue} in $C^{\sigma-1}\left(\left[0,T^{*}_{0}\right]\right).$ Moreover, let
$\Phi_{1}(x)=\left| \omega_{1}(x)-\omega_{2}(x)\right| $ and $\Phi_{2}(x)=\left| D^{\sigma-1}\omega_{1}(x)-D^{\sigma-1}\omega_{2}(x)\right| .$ At first, we obtain the following estimate on $\Phi_{1}(x)$:
\begin{align}
\Phi_{1}(x)&\leq \frac{1}{\Gamma(\sigma)}\int_{0}^{x}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}
dt \nonumber\\
&\leq \frac{C}{\Gamma(\sigma)} \int_{0}^{x}\frac{\left[ g\left(\left|\omega_{1}\left(t\right)-\omega_{2}\left(t\right)\right|^{p}+\left|D^{\sigma-1}\omega_{1}\left(t\right)-D^{\sigma-1}\omega_{2}\left(t\right)\right|^{p}\right)\right] ^{1/p}}{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}
dt \nonumber\\
&\leq \frac{C}{\Gamma(\sigma)}\left( \int_{0}^{x}\left(\frac{1}{t^{\sigma-1}\left(x-t\right) ^{1-\sigma}}\right) ^{q}
dt\right)^{1/q}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p} \nonumber\\
&\leq C\left[ \frac{\Gamma(1+(1-\sigma)q)\Gamma(1+(\sigma-1)q)}{\Gamma^{q}(\sigma)}\right]^{1/q}x^{1/q}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p} \nonumber\\
&\leq \frac{1}{2^{1/p}}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p}
\end{align}
where we used inequality \eqref{osgood}, the Hölder inequality, and the assumption on $C,$ respectively. From here, it follows that
\begin{align}\label{estim1}
\Phi^{p}_{1}(x)\leq \frac{1}{2}\int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt .
\end{align}
Similarly to above, we have
\begin{align}
\Phi_{2}(x)&\leq \int_{0}^{x}\frac{t^{\sigma-1}\left|f\left(t ,\omega_{1}\left(t\right),D^{\sigma-1}\omega_{1}\left(t\right)\right)-f\left(t ,\omega_{2}\left(t\right),D^{\sigma-1}\omega_{2}\left(t\right)\right)\right|}{t^{\sigma-1}}
dt \nonumber\\
&\leq C \int_{0}^{x}\frac{\left[ g\left(\left|\omega_{1}\left(t\right)-\omega_{2}\left(t\right)\right|^{p}+\left|D^{\sigma-1}\omega_{1}\left(t\right)-D^{\sigma-1}\omega_{2}\left(t\right)\right|^{p}\right)\right] ^{1/p}}{t^{\sigma-1}}
dt \nonumber\\
&\leq C\left( \int_{0}^{x}\left(\frac{1}{t^{\sigma-1}}\right) ^{q}
dt\right)^{1/q}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p} \nonumber\\
&\leq C\left[ \frac{1}{(1+(1-\sigma)q)}\right]^{1/q} x^{(1+q(1-\sigma))/q}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p} \nonumber\\
&\leq \frac{1}{2^{1/p}}\left( \int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt\right)^{1/p}.
\end{align}
This leads to
\begin{align} \label{estim2}
\Phi^{p}_{2}(x)\leq \frac{1}{2}\int_{0}^{x} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)
dt .
\end{align}
Now, set
$$ \Psi(x):=\max_{0\leq t\leq x} \left[ \Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right],$$
and assume that $\Psi(x)>0$ for $x\in (0,T_{0}].$ We will show that this is impossible under the given assumptions.
From the definition of $\Psi(x)$ one can easily conclude that for each $x\in [0,T_0],$
we have $\Phi^{p}_{1}(x)+\Phi^{p}_{2}(x)\leq \Psi(x),$ and there exists an $x_{1}\leq x$ so that $ \Psi(x)=\Phi^{p}_{1}(x_{1})+\Phi^{p}_{2}(x_{1}).$ Then, from the estimates \eqref{estim1}--\eqref{estim2} and the fact that $g$ is a non-decreasing function,
\begin{align}
\Psi(x)=\Phi^{p}_{1}(x_{1})+\Phi^{p}_{2}(x_{1})\leq \int_{0}^{x_{1}} g\left(\Phi^{p}_{1}(t)+\Phi^{p}_{2}(t)\right)dt\leq \int_{0}^{x} g\left(\Psi(t)\right)dt:=\Psi_{*}(x)
\end{align}
is then obtained, so that $\Psi(x)\leq \Psi_{*}(x).$ Moreover, we have
$$\frac{d}{dx}\Psi_{*}(x)=g\left(\Psi(x)\right)\leq g\left(\Psi_{*}(x)\right)$$
for all $x\in [0,T_0].$ From this fact, for sufficiently small $\delta>0,$ we have
$$\int_{\delta}^{T_{0}}\frac{\Psi^{'}_{*}(x)}{g\left(\Psi_{*}(x)\right)}dx\leq T_{0}-\delta.$$
Furthermore, changing variables $u=\Psi_{*}(x)$ in the above integral and using the continuity of $\Psi_{*}(x)$ together with $\Psi_{*}(0)=0,$ we have
$$\int_{\epsilon}^{\gamma}\frac{du}{g\left(u\right)} \leq T_{0}-\delta$$
for sufficiently small $\epsilon>0$ with $\epsilon=\Psi_{*}(\delta)$ and for $\gamma=\Psi_{*}(T_{0}).$ However, this contradicts the assumption on $g$ given in \eqref{osgood3}. Consequently, $\Psi(x)=0$ for $x\in [0,T_{0}],$ i.e., $\omega_{1}(x)=\omega_{2}(x).$
\end{proof}
\end{theorem}
\textit{Remark.} It should be pointed out that, as noted in Theorem 1.4.3 in \cite{Agarwal}, the condition that the function $g(u)$ is non-decreasing can be dropped.
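To illustrate the Osgood condition \eqref{osgood3} numerically (our own illustration, not part of the proofs), the following sketch approximates $\int_{\epsilon}^{\gamma}du/g(u)$ for the Lipschitz-type choice $g(u)=u$, where the integral grows without bound as $\epsilon\to 0^{+}$, and for $g(u)=\sqrt{u}$, where it stays bounded and the theorem does not apply:

```python
import math

def osgood_integral(g, gamma, eps, n=200_000):
    # Midpoint-rule approximation of the Osgood integral
    # from eps to gamma of du / g(u).
    h = (gamma - eps) / n
    return sum(h / g(eps + (i + 0.5) * h) for i in range(n))

# g(u) = u (Lipschitz case): the integral grows like log(1/eps),
# i.e. it diverges as eps -> 0+, so the Osgood condition holds.
grow = [osgood_integral(lambda u: u, 1.0, e) for e in (1e-2, 1e-3, 1e-4)]
assert grow[0] < grow[1] < grow[2]

# g(u) = sqrt(u): the integral stays bounded by 2*sqrt(gamma), so the
# Osgood condition fails and the theorem gives no conclusion.
bounded = osgood_integral(lambda u: math.sqrt(u), 1.0, 1e-6)
assert bounded < 2.0
```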
\section{Conclusions}
In this work, we established sufficient conditions for the existence and uniqueness of solutions of a problem involving a nonlinear differential equation with the Riemann--Liouville derivative, where the right-hand-side function has a discontinuity at zero. In view of the literature, these results can be generalized and improved, and further uniqueness results for this problem can be obtained as well.
\bigskip
\section{Introduction}
The last year has seen rapid progress in neural factoid open-domain question answering based on the \emph{retriever-reader} architecture (Open-QA).
Such Open-QA systems \cite{chen2017reading} seek evidence for answering the questions inside the knowledge source using the \emph{retriever} and then extract the answer from the retrieved knowledge using the \emph{reader}. The knowledge source is often a large corpus of short snippets of natural language, so-called passages (e.g., taken from an encyclopedia).
The progress can be attributed to advances in neural retrieval methods \cite[\textit{inter alia}]{karpukhin2020dense, izacard2020distilling, khattab2020relevance, luan2020sparse, xiong2020approximate} that benefit from smarter negative sampling strategies or a better trade-off between complex question-passage interaction and efficiency. It can also be attributed to reading methods that enable processing large quantities of retrieved passages \cite{izacard2020leveraging}. These compensate for a certain amount of retrieval error and enable early aggregation of answer evidence across passages.
This work demonstrates a relative improvement of 23--32\% over last year's state-of-the-art DPR system \cite{karpukhin2020dense}, while using the same knowledge source and retriever. We propose a state-of-the-art Open-QA baseline composed of a retriever, a passage reranker, an extractive reader, a generative reader, and a novel component fusion approach. Following practice from information retrieval, we show that our moderately sized reranker reduces the passage count needed at the input of large reader models by a factor of about four. Our readers then take the best from both worlds: the extractive reader proposes a list of salient answer spans, and the generative reader reranks these spans, seeing all the passages at once, or generates its own answer.
The proposed pipeline is heterogeneous and modular, making it an ideal benchmark.
To sum up, our contributions are three-fold:
\begin{enumerate}
\item We present a simple novel approach to aggregate scores from all system components and show that combining extractive and generative approaches is superior to a posterior averaging ensemble of homogeneous models.
\item We show that the extractive reader can sometimes match the performance of the generative approaches without taking advantage of the fusion between retrieved passages. This indicates that the evidence aggregation from multiple passages in the generative approaches is either not learned or not necessary to perform well on these datasets.
\item We push the state-of-the-art for two large and popular datasets, demonstrating what is achievable with the proposed approach, having the same knowledge source and the retriever as in the previous works \cite{karpukhin2020dense,izacard2020leveraging}.
\end{enumerate}
\section{Open-QA Pipeline}
We propose R2-D2 (\textsc{Rank twice}, \textsc{reaD twice}), a 4-stage pipelined system that can choose whether to generate or to extract an answer. The parameters of each component in the pipeline are estimated separately. It is composed of the DPR passage retriever \cite{karpukhin2020dense}, a passage reranker (see subsection \ref{ss:reranker}), and two readers. Figure \ref{fig:r2d2_pipeline} shows the diagram of our system. The first reader performs extractive span selection similar to \citet{fajcik2020rethinking}. The second reader is based on Fusion-In-Decoder (FiD) \cite{izacard2020leveraging}.
Formally, given a question $q \in \mathcal{Q}$ from the set of all possible questions $\mathcal{Q}$ and the corpus $\mathcal{C}=\{p_1, p_2, ... , p_n\}$ composed of passages $p_i$, the retriever learns a ranking function $\operatorname{rank}:\mathcal{Q} \times \mathcal{C} \rightarrow \mathbb{R} $ that assigns a score to each passage. We assume each passage contains its passage title (e.g., title from the Wikipedia article).
Taking the top-$K$ scoring passages $\mathcal{C}_{r}\subset\mathcal{C}$, the reranker rescores $\mathcal{C}_{r}$ by learning a reranking function $\operatorname{rerank}:\mathcal{Q} \times \mathcal{C}_{r} \rightarrow \mathbb{R}$. Note that while $\operatorname{rank}$ and $\operatorname{rerank}$ have similar signatures, the computational cost of $\operatorname{rerank}$ over the same amount of passages is drastically higher, as it computes fine-grained interaction between the tokens of the question and the passage.
Next, the rescored passages are passed to two readers: the extractive reader reads top-$V$ passages $\mathcal{C}_{rr}\subset\mathcal{C}_{r}$ independently of each other and assigns the probability $\boldsymbol{P}_{e}(a_e|q, \mathcal{C}_{rr})$ to each span $a_e$ in the passages (see subsection \ref{ss:ext_reader}).
The FiD generative reader reads top-$V_2$ passages $\mathcal{C}_{rr}'\subset\mathcal{C}_{r}$ jointly and generates an answer from probability space $\boldsymbol{P}_g(a_g|q,\mathcal{C}_{rr}')$ via greedy search.
Finally, R2-D2 aggregates the outputs from all components using two fusions (described in subsection \ref{ss:fusions}).
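The control flow of the four stages can be sketched schematically as follows; all the component callables here are hypothetical placeholders for the models described in the following subsections, not our actual implementation:

```python
# Schematic sketch of the 4-stage R2-D2 flow described above. All the
# component callables are hypothetical placeholders.
from typing import Callable, List, Tuple

def r2d2_answer(
    q: str,
    retrieve: Callable[[str, int], List[str]],                     # rank
    rerank: Callable[[str, List[str]], List[str]],                 # rerank
    extract: Callable[[str, List[str]], List[Tuple[str, float]]],  # spans, log P_e
    generate: Callable[[str, List[str]], Tuple[str, float]],       # answer, log P_g
    fuse: Callable[[List[Tuple[str, float]], Tuple[str, float]], str],
    K: int = 200, V: int = 24, V2: int = 25,
) -> str:
    c_r = retrieve(q, K)           # stage 1: dense retrieval of top-K passages
    c_rr = rerank(q, c_r)          # stage 2: cross-encoder reranking
    spans = extract(q, c_rr[:V])   # stage 3a: extractive reader on top-V
    gen = generate(q, c_rr[:V2])   # stage 3b: generative (FiD) reader on top-V2
    return fuse(spans, gen)        # stage 4: component fusion
```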
\begin{figure}[t!]
\centering
\include{figures/pipeline-v3}
\caption{R2-D2 pipeline.}
\label{fig:r2d2_pipeline}
\end{figure}
\subsection{Passage Reranker}\label{ss:reranker}
The proposed passage reranker is based on a transformer cross-encoder, similar to \citet{nogueira2019passage, luan2020sparse}.
The input is the concatenation of the question $q\in\mathcal{Q}$ and a passage $p\in\mathcal{C}_r$ with a special \texttt{SEP} token between them. The passage consists of a title and a context, each prepended with a special start token and concatenated together.
We denote the contextual representation of input token $w$ obtained by the cross-encoder as $\operatorname{En}(p, q)[w]\in\mathbb{R}^d$.
Now we can define the reranking function for passage rescoring as
\begin{equation}
\operatorname{rerank}(q,p) = \operatorname{En}(p,q)[\texttt{CLS}]^\top w
\end{equation}
where $w\in\mathbb{R}^{d}$ is a trainable vector and \texttt{CLS} is the special token added at the start of an input sequence.
Finally, we define the following formula\footnote{The formal definition of softmax over a set is described in Appendix \ref{app:softmax_not}.}
\begin{equation}
\boldsymbol{P}_{rr}\left(p | q, \mathcal{C}_r\right) =\operatorname*{softmax}\limits_{p\in \mathcal{C}_r}\left(\operatorname{rerank}\left(q, p\right)\right)_{p}
\end{equation}
to assign a probability to the case that passage $p$ contains answer to the question $q$.
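The two formulas above amount to a dot product of the \texttt{CLS} representation with a trainable vector, followed by a softmax over the whole candidate set $\mathcal{C}_r$. A minimal self-contained sketch follows; plain Python lists stand in for the cross-encoder's \texttt{CLS} vectors, which is an assumption of this illustration:

```python
import math
from typing import List

def softmax(scores: List[float]) -> List[float]:
    # Numerically stable softmax over the candidate set C_r.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def rerank_probs(cls_embeddings: List[List[float]], w: List[float]) -> List[float]:
    # rerank(q, p) = En(p, q)[CLS]^T w, then normalize across all
    # passages in C_r. cls_embeddings[i] stands in for the cross-encoder
    # [CLS] vector of the i-th (question, passage) pair.
    scores = [sum(c * wi for c, wi in zip(cls, w)) for cls in cls_embeddings]
    return softmax(scores)
```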
\begin{description}[style=unboxed,leftmargin=0em,listparindent=\parindent]
\setlength\parskip{0em}
\item[Training.] The model input for each question is exactly one positive sample supplemented with hard negatives from the retriever. The ground-truth passage, annotated in the same way as in \citet{karpukhin2020dense}, is primarily used as the positive sample. If the ground truth is unknown, the positive sample is the best retrieved passage containing the answer.
The hard negatives are uniformly sampled from the retriever's top-$K$ results that do not contain the answer.
The loss function is cross-entropy.
\end{description}
\subsection{Extractive Reader}
\label{ss:ext_reader}
The extractive reader estimates the probability $\boldsymbol{P}_{e}(a_e|q, \mathcal{C}_{rr})$
of a span $a_e$ from the top-$V$ passages $p \in \mathcal{C}_{rr}$ being an answer to the question $q$.
We decompose the $\boldsymbol{P}_{e}(a_e|q, \mathcal{C}_{rr})$ into four probabilities of:
\begin{itemize}
\setlength\parskip{0em}
\item token $s$ being starting token of an answer span,
\item token $e$ being ending token of an answer span,
\item tokens $s$ and $e$ being boundary tokens of an answer span \cite{fajcik2020rethinking},
\item passage $p$ containing an answer for the question $q$ (inner reranker) as in \citet{karpukhin2020dense}.
\end{itemize}
To obtain the final probability used at test time, we compute their product\footnote{We tried decoding from the subsets of these probabilities in Appendix \ref{app:decoding_ext_probs}, not observing a significant difference.}. These probabilities are defined as:
\begin{equation}
\boldsymbol{P}_{*}(*|q, \mathcal{C}_{rr}) = \operatorname{softmax}(s_*)_i \: ,
\end{equation}
where $*$ stands for \emph{start}, \emph{end}, \emph{joint}, or \emph{passage}, $i$ is the index of a given element, and $s_*$ is the vector of scores for each element among all passages in $\mathcal{C}_{rr}$, so the \emph{softmax} normalization sum runs over all the passages. On the other hand, the $s_*$ scores are estimated by the model with just a single passage at its input \cite{clark-gardner-2018-simple}. The scores are as follows:
\setlength{\jot}{1ex}
\begin{gather}%
s^{i}_{start} = \operatorname{En}(p,q)[s]^\top w_{start} \\
s^{i}_{end} = \operatorname{En}(p,q)[e]^\top w_{end} \\
s^{i}_{joint} = (W_j \operatorname{En}(p,q)[s] + b_j)^\top \operatorname{En}(p,q)[e] \\
s^{i}_{passage} = \operatorname{En}(p,q)[\texttt{CLS}]^\top w_{p} \:.
\end{gather}%
where $w_*,b_j \in \mathbb{R}^h$, $\operatorname{En}(p, q)[\cdot] \in \mathbb{R}^h$, and $W_j \in \mathbb{R}^{h \times h}$ are trainable parameters.
We omit the spans of a title and question for answer span selection. Therefore the final answer can be selected only from the context.
The following training objective with independently marginalized components is used:
\begin{equation} \label{eq:extReaderIndLoss}
\begin{split}
-\log \sum_{s \in starts(C_{rr})} \boldsymbol{P}_{start}(s|q, \mathcal{C}_{rr}) \\
-\log \sum_{e \in ends(C_{rr})} \boldsymbol{P}_{end}(e|q, \mathcal{C}_{rr}) \\
-\log \sum_{j \in boundaries(C_{rr})} \boldsymbol{P}_{joint}(j|q, \mathcal{C}_{rr}) \\
-\log \sum_{p \in C_{rr}} \boldsymbol{P}_{passage}(p|q, \mathcal{C}_{rr}) \: .
\end{split}
\end{equation}
The sums run over the target annotations (starts, ends, etc.) obtained by the distant supervision approach.
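Each term of the objective above is the negative log of a \emph{sum} of probabilities over all distantly supervised annotations, rather than a per-annotation cross-entropy. A minimal sketch of one such marginalized term (the globally normalized distribution over candidate positions is assumed to be given):

```python
import math
from typing import List

def marginal_nll(log_probs: List[float], target_idx: List[int]) -> float:
    # One term of the objective: negative log of the SUM of probabilities
    # assigned to all distantly supervised target positions. log_probs is
    # assumed to be a globally normalized distribution over all candidate
    # positions across the top-V passages.
    return -math.log(sum(math.exp(log_probs[i]) for i in target_idx))
```

Marginalizing over several possibly noisy annotations penalizes the model less than forcing any single annotation, which suits the distant-supervision setting.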
\subsection{Component Fusion}
\label{ss:fusions}
To produce the final answer, R2-D2 aggregates the log-probabilities of all system components via linear combinations tuned on validation data.
Firstly, the log-probabilities of all system components for the top-$M$ answer spans proposed by the extractive reader are aggregated. Formally, assume $\mathcal{A}_q$ is the set of top-$M$ answer spans from $\boldsymbol{P}_{e}(a|q,\mathcal{C}_{rr})$ for question $q$.
The generative model performs the \textbf{answer reranking} evaluating the log-probability of the answer spans
\begin{equation}
\label{eq:genrerank}
\{\log\boldsymbol{P}_g(a|q,\mathcal{C}_{rr}'): a\in \mathcal{A}_q\}.
\end{equation}
Next, a logistic regression loss \eqref{eq:aggloss} is minimized to perform \textbf{score aggregation}.
It combines the scores across the R2-D2 components to maximize the probability of the correct answer span over the dataset~$\mathcal{D}$, which is composed of the top-$M$ outputs of the extractive reader that contain the correct answer.
\begin{gather}%
x(a) =[\boldsymbol{P}_{e}(a) \; \boldsymbol{P}_g(a) \; \boldsymbol{P}_r(p_a) \; \boldsymbol{P}_{rr}(p_a)] \\
\label{eq:aggloss}
-\sum_{\mathclap{(\mathcal{A}_q,gt) \in \mathcal{D}}} \log\operatorname*{softmax}\limits_{a \in \mathcal{A}_q} \big({ w^\top \log x(a) + b}\big)_{gt}
\end{gather}%
Here $p_a$ denotes the passage containing the answer span $a$, $\mathcal{A}_q$ is a set of proposed answer spans, $gt$ is the correct answer span, distribution dependencies are dropped for clarity and only the logistic regression parameters $w, b$ are tuned in this step.
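A minimal sketch of this score aggregation step follows; the weights $w, b$ here are illustrative stand-ins for the logistic-regression parameters fit on validation data:

```python
from typing import List

def aggregated_scores(log_x: List[List[float]], w: List[float], b: float) -> List[float]:
    # Linear combination w^T log x(a) + b over the component
    # log-probabilities [log P_e, log P_g, log P_r, log P_rr] of each
    # candidate span a; w and b would be fit by logistic regression.
    return [sum(wi * f for wi, f in zip(w, feats)) + b for feats in log_x]

def best_span(spans: List[str], log_x: List[List[float]],
              w: List[float], b: float) -> str:
    # Pick the span with the highest aggregated score.
    scores = aggregated_scores(log_x, w, b)
    return spans[max(range(len(spans)), key=scores.__getitem__)]
```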
Finally, we hypothesized that the correct answer span might not always be available in the passage set $\mathcal{C}_{rr}$, but the generative reader might be able to generate the answer from its parameters and the evidence given in the passages. We therefore introduce a binary classifier, which decides whether to select the best span answer from the score aggregation step or a free-form answer generated via FiD. Given that $s_{agg}(q)=\max_{a\in\mathcal{A}_q} w^\top x(a)+b$ is the best span score and $s^*_g(q)=\log\boldsymbol{P}_g(a_q^*|q,\mathcal{C}_{rr}')$ is the log-probability of the answer $a_q^*$ obtained via greedy decoding for question $q$, a classifier is trained via binary cross-entropy $BCE(l,t)$ with log-odds ratio $l$ and target $t$ to make the \textbf{binary decision}
\begin{equation}
\label{eq:bdformula}
\sum_{(e,t) \in \mathcal{D}} BCE(w^\top [s_{agg}(e);s^*_g(e)]+b, t ).
\end{equation}
Here, the training dataset $\mathcal{D}$ contains only cases where either the extractive or the abstractive prediction is correct (but not both).
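The binary decision above can be sketched as a two-feature logistic classifier; the weights below are illustrative placeholders, not the fitted values:

```python
import math

def choose_answer(span_answer: str, gen_answer: str,
                  s_agg: float, s_gen: float,
                  w1: float, w2: float, b: float) -> str:
    # Logistic classifier over the best aggregated span score and the
    # greedy-decoded answer's log-probability: it decides whether to
    # output the extracted span or the generated answer. The weights
    # w1, w2, b are illustrative; they would be fit with binary
    # cross-entropy on validation examples.
    logit = w1 * s_agg + w2 * s_gen + b
    p_generative = 1.0 / (1.0 + math.exp(-logit))
    return gen_answer if p_generative > 0.5 else span_answer
```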
\section{Experimental Setup}
Our models are implemented in PyTorch \cite{paszke2019pytorch} using Transformers \cite{wolf-etal-2020-transformers}. We use a 12 GB GPU to train the passage reranker, a 48 GB GPU for the generative reader, and 16$\times$32 GB GPUs to train the extractive reader with $V=128$ passages at its input. Inference runs on a 12 GB GPU. In all experiments, we used the Adam optimizer with decoupled weight decay \cite{loshchilov2017decoupled}. Our models are evaluated by two metrics:
\begin{description}[style=unboxed,leftmargin=0em,listparindent=\parindent] \setlength\parskip{0em}
\item [Exact match (EM)] measures the proportion of examples, for which the system prediction matched at least one annotated ground-truth answer. We use the script from \citet{lee-etal-2019-latent}\footnote{\url{https://cutt.ly/rkZNIer}}.
\item [Accuracy@K] measures the proportion of examples, for which the ground-truth answer string is present in top-K retrieved passages. We match the string exactly as \citet{karpukhin2020dense}\footnote{\url{https://cutt.ly/0luNhx4}}.
\end{description}
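For illustration, a simplified SQuAD-style exact-match check might look as follows; this is a sketch only, and the official scripts linked above should be used for reported numbers:

```python
import re
import string
from typing import List

def normalize(text: str) -> str:
    # Simplified SQuAD-style normalization: lowercase, drop punctuation,
    # English articles, and extra whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: List[str]) -> bool:
    # EM: the normalized prediction matches at least one gold answer.
    return any(normalize(prediction) == normalize(g) for g in gold_answers)
```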
\subsection{Datasets and Data Pre-processing}
We evaluate our models on three datasets; their statistics are available in Table \ref{fig:datatsets}. To train the reranker, we filter out examples that contain neither the golden passage nor an exact match in the top-$K$ retrieved passages.
To train the extractive reader, only examples with an exact match in the golden passage or the top-1 retrieved passage are kept. Both filtering strategies are described in detail in Appendix~\ref{app:data_preprocessing}.
\begin{description}[style=unboxed,leftmargin=0em,listparindent=\parindent]
\setlength\parskip{0em}
\item[NQ-Open] \cite{kwiatkowski2019natural, lee-etal-2019-latent} or NaturalQuestions-Open consists of real user queries obtained from the Google search engine. The length of each answer is at most 5 tokens. Each training and development sample contains 1 annotated answer, while the test data contain 5-way answer annotations.
\item[TQ-Open] \cite{joshi-etal-2017-triviaqa} or TriviaQA-Open consists of question-answer pairs from 14 different trivia quiz websites. Each question has a human-annotated answer and a set of answer aliases gathered from Wikipedia. We use the unfiltered version.
\item[EfficientQA] \cite{min2021neurips} is a dataset collected in the same way as NQ-Open throughout 2019, and thus it may contain more questions without evidence in our corpus than NQ-Open. We use the officially released dev set for testing\footnote{The test set was not released during our experiments.} models trained on the NQ-Open training data.
\end{description}
Additionally, we also report results according to train-test set overlaps discovered by \citet{lewis-etal-2021-question} in Appendix \ref{app:train_test_overlap}.
\begin{table}
\centering
\scalebox{0.92}{\input{tables/dataset-statistics.tex}}
\caption{Dataset statistics. The filt. lines report how many examples are kept for training the reranker (filt. reranker) and extractive reader (filt. ext. reader). The lines w/ golden passage denote how many examples from the set contain golden passage annotation. }
\label{fig:datatsets}
\end{table}
\subsection{Models and Pipeline}
\label{ss:models_and_pipeline}
\begin{description}[style=unboxed,leftmargin=0em,listparindent=\parindent]
\setlength\parskip{0em}
\item[Retriever.]
We use the BERT-based DPR from the official checkpoint\footnote{\url{https://github.com/facebookresearch/DPR}}. Each passage is represented by a 768-dimensional embedding. We use the multiset checkpoint for TQ-Open, as a TQ-specific checkpoint is not officially released.
We use the same knowledge corpus containing 21,015,320 passages, based on the 12-20-2018 Wikipedia snapshot, as \citet{karpukhin2020dense}. At inference time, the retriever passes $K=200$ passages $\mathcal{C}_r$ to the reranker.
\item[Passage reranker.]
We use RoBERTa-base \cite{liu2019roberta} and truncate the inputs to a maximum length of 256 tokens. A linear scheduler with a 0.1 warmup proportion is used, the number of epochs is 5, and the model is validated every 40,000 optimization steps.
We use a learning rate of $1.6\cdot10^{-4}$ and a batch size of 8. During training, the model reranks 24 passages per question, with negatives uniformly sampled from the top-400 passages retrieved by DPR.
During inference, the top-$K$ ($K=200$) retrieved passages are rescored and passed to the readers.
\item[Extractive reader.] The extractive reader encoder is based on pre-trained ELECTRA-large.
Its inputs are truncated if they are longer than the allowed maximum size (512 tokens).
During the training phase, all spans from all $p \in \mathcal{C}_{r}$\footnote{Note that we train on data from retriever, not reranker.} that match\footnote{Matching strategies are described in Appendix \ref{app:data_preprocessing}.} with at least one of the known answers are selected as target annotations.
Therefore the annotations might appear in the wrong context.
The extractive reader reads the top-$V = 128$ passages during the training phase and when it is used without the reranker.
To demonstrate the effect of reranker, the reader reads only the top-$V = 24$ passages if the reranker is used.
We use a linear scheduler with a warmup for the first 20,000 steps for all models.
The maximum number of training steps is 200,000.
The model is validated every 20,000 steps, and the best checkpoint among validations is selected.
The learning rate is $2 \cdot 10^{-5}$, and an optimization step is done after each training example.
\item[Generative reader.]
We utilize T5-large \cite{raffel2020exploring} and use a concatenation of the question, the passages and their respective titles at the Fusion-in-Decoder's input, in the same way as \citet{izacard2020distilling}.
We truncate each passage to a length of 250 tokens for NQ.
For TQ, as questions are significantly longer, we truncate whole inputs to the same size.
Following FiD for TQ, we use only human-generated answers.
In training, the golden passage always comes first, if available, and we take the rest of the passages as ranked by the retriever, up to $V_2$ passages.
\citet{izacard2020leveraging} trained FiD with $V_2 = 100$ passages at its input. However, such an approach requires a tremendous amount of GPU memory, and thus requires employing speed-memory trade-offs such as gradient checkpointing \cite{Chen2016TrainingDN}. Unlike the original approach, we use only $V_2 = 25$ passages in our FiD. We note that in practice, combining the reranker with a shorter-context FiD yields results similar to the original implementation, with much lower memory consumption and better throughput in the R2-D2 setting\footnote{Due to the numerous decoder computations in answer re-ranking.}. We analyze the speed of our implementation in Appendix \ref{app:inference time}.
Other hyperparameters are similar to the original work---batch size 64, learning rate $5 \cdot 10^{-5}$ but no learning rate schedule.
At test time, we decode the answer via greedy decoding.
\end{description}
\section{Results and Analysis}
\begin{table}[t]
\scalebox{0.63}{\input{tables/systems}}
\caption{Comparison with the state-of-the-art in EM. \#$\theta$ denotes the estimated number of model parameters.
The symbol $^{-}$ reports the result only for the smaller system with $220M$ parameters.
}
\label{tab:systems}
\end{table}
\begin{table*}[ht]
\centering
\scalebox{1.00}{\input{tables/ablation-study-3-wo-dev}}%
\caption{Ablation study. We report results for extractive (ext), generative (gen) and both readers (ext+gen) without (ret.) and with reranking (+rr). The $\Delta$ column shows the exact match difference caused by passage reranking.}
\label{tab:ablation_study}
\end{table*}
The effectiveness of our approach is compared with the state-of-the-art in Table \ref{tab:systems}.
Our system composed of just the retriever and the FiD reader, R1-D1 (Generative), shows inferior performance compared to FiD-large.
This is most likely caused by having 4 times fewer passages at its input than \citet{izacard2020leveraging}.
In contrast, our ELECTRA-based extractive reader R1-D1 (Extractive) shows large gains compared to the extractive state-of-the-art, while having the same retriever as DPR.
We hypothesize this may be caused by the ELECTRA pre-training method, which shows strong performance across a variety of tasks, and we further show that it is also due to training and inference with a large input size of 128 passages and a better objective (discussed in Section \ref{sec:ext_reader_perf} and Appendix \ref{app:ext_r_ablations}).
The only system that matches the performance of our extractive reader is the concurrent work on UnitedQA-E \cite{cheng2021unitedqa}, which uses advanced regularization and HardEM techniques. We note that these are orthogonal to our approach and could potentially lead to further improvements.
Finally, we find that our R2-D2 system with a 21M-passage corpus is competitive even with FiD++, which uses a DPR retriever improved via knowledge distillation and a 26M-passage corpus that also includes lists.
Additionally, we evaluate our model with a better retrieval model (HN-DPR) based on the DPR checkpoint where hard negatives are mined using the retrieval model itself\footnote{\url{https://cutt.ly/Ux5Yt4h}}.
Note that we do not compare EfficientQA with the state-of-the-art, as previous works did not report results on the dev set we use for testing.
\subsection{Reranker Performance}
Next, we compare the performance of our retriever, reranker and reader with Accuracy@K in Figure~\ref{fig:acc_k_nq}.
The passage reranker improves the accuracy consistently and we observe the same trend on other datasets (Appendix \ref{app:accuracy_at_k}).
We also include an analysis where we rerank each passage $p_i$ according to its $s^i_{passage}$ score from the extractive reader.
We observe results similar or even superior to the reranker for $K<10$, indicating the extractive reader reranks well on its own.
However, in subsequent experiments we do not replace the reranker with the reader because: (i) the passage reranker has fewer parameters, (ii) extractive reading can run in parallel with reranking and generative reading, as the extractive reader does not benefit from reranking, and (iii) passage reranking scores often improve results during score aggregation (see Section \ref{sec:component_fusion}).
\begin{figure}[t]
\centering
\input{charts/accuracy-at-k_nq-open-test}%
\caption{Accuracy@K on test-data of NQ-Open.}%
\label{fig:acc_k_nq}
\end{figure}%
\subsection{Extractive Reader Performance}
\label{sec:ext_reader_perf}
\begin{figure}[t]
\centering
\input{charts/extractive-reader-batch-sizes}%
\caption{Influence of test input size on extractive reader's performance for various train input sizes (different curves) on NQ-Open test dataset.}%
\label{fig:ext_reader_batch_analysis}
\end{figure}%
In order to investigate the influence of the number of input passages on the extractive reader's performance, we trained multiple ELECTRA-base models, each with a different input size. At test time, we evaluate each of them on various input sizes. Figure \ref{fig:ext_reader_batch_analysis} shows that increasing the train/test input size has a positive influence on the extractive reader's performance. However, an input size of 128 does not seem to increase performance any further.
\subsection{Ablations}
The ablations are listed in Table~\ref{tab:ablation_study}.
We ablate results without using passage reranker, with separate readers and their combination and with different stages of component fusion.
Namely, performing a \emph{naive} answer re-ranking by the generative reader means the system chooses the most probable answer span among the top-$M$ spans provided by the extractive reader, according to the generative reader's log-probabilities, as shown in equation \eqref{eq:genrerank}.
Analogously, the \emph{aggr} fusion denotes that the system chooses the most probable answer span according to the aggregated scores, as in equation \eqref{eq:aggloss}.
Finally, the \emph{aggr+bd} fusion denotes the binary decision, as shown in equation \eqref{eq:bdformula}.
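To make the \emph{aggr} fusion concrete, the following sketch treats each candidate span as a row of per-component log-scores and selects the span with the highest weighted sum. The numbers are illustrative; the exact parametrization of the aggregated score is the one given by equation \eqref{eq:aggloss}:

```python
import numpy as np

def aggregate_spans(component_logscores, weights):
    """component_logscores: (n_spans, n_components) matrix whose columns
    hold log-scores from, e.g., retriever r, reranker rr,
    extractive reader e and generative reader g.
    Returns the aggregated scores and the index of the best span."""
    agg = component_logscores @ weights
    return agg, int(np.argmax(agg))

# Three candidate spans scored by four components (illustrative numbers).
logscores = np.array([[-1.2, -0.7, -0.9, -1.1],
                      [-1.0, -0.5, -1.4, -0.6],
                      [-2.0, -1.9, -0.3, -2.5]])
weights = np.ones(4)        # in practice tuned on validation data
agg, best = aggregate_spans(logscores, weights)
```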
\begin{table}[t]
\centering
\scalebox{1.0}{\input{tables/fusion-analysis_score-aggr}}
\caption{Results for different pipeline components used for score aggregation on NQ-Open and TQ-Open. See text for details.}
\label{tab:score-aggr}
\end{table}
As expected, we observe that the reranker consistently improves the results for the generative model in all cases.
The gains are especially large for TQ-Open (over 3.7 EM, underscored in Table~\ref{tab:ablation_study}).
In fact, the results are comparable to \citet{izacard2020leveraging}, suggesting that using the FiD reader with smaller context window and reranker is a reasonable alternative to memory inefficient FiD with large input size.
Furthermore, as expected, the extractive reader without the reranker already has the top-128 passages at its input, and improvements from passage reranking are negligible if any (less than 1 EM).
Finally, the results on NQ-Open and EfficientQA suggest applying the binary decision does not bring large improvements over the score aggregation if any.
However, notice that this is not the case for TQ-Open, where the generative reader performs significantly better than the extractive reader, suggesting both component fusions play an important role in the system.
\begin{table}[t]
\centering
\scalebox{1.0}{\input{tables/fusion-analysis_binary-decision}}
\caption{Results for binary decision on NQ-Open and TQ-Open for different aggregated pipeline components from Table \ref{tab:score-aggr}.}
\label{tab:binary-decision}
\end{table}
\subsection{Component Fusion}
\label{sec:component_fusion}
Furthermore, we analyze the performance of each component combination in the score aggregation and its impact on the component fusion via binary decision.
Both fusions are tuned on validation data and reported on test data of the NQ-Open and TQ-Open datasets. See Appendix \ref{app:additional_comp_fusion} for analysis on additional datasets.
Table \ref{tab:score-aggr} shows all relevant combinations of ranker \emph{r}, reranker \emph{rr}, extractive reader \emph{e} and generative reader \emph{g} probabilities used in score aggregation.
Overall, we observe that combining retriever and reranker scores with the reader leads to better or equal performance. On NQ-Open, we observe minor improvements of up to \textasciitilde1 EM. However, there is no difference on TQ-Open.
The impact of adding a binary decision after the score aggregation is shown in Table~\ref{tab:binary-decision}.
Interestingly, the binary decision component significantly improves the performance only without reranked answer scores ($\{e\}$ rows in both tables).
However, fusing the generative and extractive reader via the binary decision performs significantly worse on NQ-Open than fusing both readers together with score aggregation ($\{e\}$ row in Table~\ref{tab:binary-decision} vs. $\{e,g\}$ row in Table~\ref{tab:score-aggr}). As already noted in the ablations, we find this to be quite the opposite for TQ-Open. We hypothesize that the binary decision is strong in cases where the generative reader performs better than the extractive reader (the case of TQ-Open). We argue that if the generative reader is better, the abstractive answer should be used far more often than when it is not.
We support this hypothesis by analyzing the proportion of test samples on which the binary decision component activated (i.e., an abstractive prediction was selected). On NQ-Open, the component almost never activated (only on 3.5\% of samples), but this proportion is much higher (26.6\%) on TQ-Open.
\begin{table}[t]
\centering
\scalebox{1.0}{\input{tables/ensemble}}%
\caption{Comparison between ensembling via posterior averaging and score aggregation on NQ-Open.}
\label{fig:reader_ensemble}
\end{table}
\subsection{Comparison with Posterior Averaging}
Finally, we compare our score aggregation with the ensemble computed via posterior probability averaging. In particular, we train three extractive and generative base-sized models initialized with different random seed. We do not use reranker in this experiment, and set train/test input size of extractive reader to 32. We assess the predictions using the averaged posterior probabilities and compare their average performance with score aggregation in Table \ref{fig:reader_ensemble}. Concretely, we compare with average of all 2 model ensembles (2 models) and with an ensemble of all 3 checkpoints (3 models). We observe two to three times improvement of score aggregation over the posterior probability averaging on NQ-Open test data.
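The posterior-averaging baseline we compare against can be sketched as follows; the probability vectors are illustrative, while in the experiment they come from three differently seeded base-sized readers:

```python
import numpy as np

def ensemble_average(posteriors):
    """Average the per-answer posterior probabilities across models and
    return the mean distribution and the highest-scoring answer index."""
    mean = np.asarray(posteriors).mean(axis=0)
    return mean, int(np.argmax(mean))

# Three models, four candidate answers (each row sums to 1).
posteriors = [[0.60, 0.20, 0.15, 0.05],
              [0.30, 0.40, 0.20, 0.10],
              [0.50, 0.25, 0.15, 0.10]]
mean, best = ensemble_average(posteriors)
```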
\section{Related Work}
\begin{description}[style=unboxed,leftmargin=0em,listparindent=\parindent,parsep=0pt,]
\item[Passage reranking.]
Previous work in QA based on neural nets used Bi-LSTM encoders \cite{wang2018r3,lee2018ranking} that score each document independently. Over time, Bi-LSTMs were replaced by BERT-like transformer encoders \cite{qiao2019understanding,wang2019multi}.
For document ranking, \citet{nogueira2019multistage} proposed a multi-stage architecture. The first stage scores each document independently, and the second estimates the more relevant document from all document pairs. Another document ranking approach uses the seq2seq model to generate a true or false answer to the document's relevance to the query \cite{nogueira2020document}.
Recent works have often focused on effective reranking. \citet{xin2020early} achieved inference speedup using early exiting, \citet{jang2020document} proposed a smaller and faster model, and \citet{mao2021reader} came up with a method which uses reader's predictions to rerank the passages.
Our reranker is most similar to \citet{nogueira2019passage, luan2020sparse}, except that unlike in IR, we assume there is just one correct passage and thus train our model via categorical cross-entropy.
\item[Multipassage reading comprehension.]
Related work considers generative and extractive approaches towards modeling the reader. The generative reader generates an answer while conditioned on question alone \cite{roberts2020much}, or question with relevant passages \cite{lewis2020retrieval,min2020ambigqa}. \citet{izacard2020leveraging} showed it suffices to concatenate the passages in the decoder of seq2seq model, increasing the amount of top-passages the model can depend on dramatically.
The extractive reader used in Open-QA assumes that the answer is a contiguous span located in the retrieved paragraphs \cite{chen2017reading}. \citet{clark-gardner-2018-simple} proposed to aggregate the probabilities of distantly supervised answer matches via maximum marginal likelihood (MML). \citet{lin2018denoising} proposed to denoise distantly supervised answer string matches in MML via a paragraph ranker.
\citet{cheng2020probabilistic} experimented with different assumptions for MML, showing improvement when marginalizing over components of span probability independently. \citet{fajcik2020rethinking} proposed to model the joint span probability directly via a compound objective, instead of modeling the probability of a span's start and end independently. \citet{karpukhin2020dense} incorporated an independent passage classifier loss into the MML objective. Our objective is similar to the last work, except that it uses a joint component and also optimizes MML over the relevant passages' probabilities.
\item[Component Fusion.]
\citet{yang-etal-2019-end} also combined BM25 ranker and reader scores via linear combination. Our work can be seen as an extension of this idea to combining the scores of all pipeline's components.
\citet{iyerreconsider2020} proposed a system which directly learns to rerank question-passage-answer triplets proposed by an extractive model. However, reranking answers from their large extractive model via a large reranker leads to \textasciitilde1 EM absolute improvement, whereas R2-D2's score aggregation improves 4 to 5 EM w.r.t. the extractive reader. Concurrently with our work, \citet{cheng2021unitedqa} proposed a hard-voting ensembling scheme to combine the reader predictions. First, each model from an ensemble produces its best prediction; then the votes for identical predictions are combined, omitting the scores produced by the individual models. The authors obtained the best results using two FiD readers and a single extractive reader, leading to 1.6 and 2.4 EM improvements on TQ-Open and NQ-Open, compared to their best single extractive or generative model.
\end{description}
\section{Conclusion}
This work proposed R2-D2, a novel state-of-the-art pipeline for open-domain QA based on 4 components: retriever, reranker, generative reader and extractive reader.
We showed that employing a reranker is a reasonable alternative to using large passage counts at the input of both the extractive and the generative reader.
Our results on NQ-Open and EfficientQA showed that the extractive and the generative reader could perform equally in Open-QA, although the generative reader is twice the size of the extractive reader.
On the other hand, we observe the extractive reader underperforms on TQ-Open. We hypothesize that the cause is (1) the complexity of trivia questions with many entities, which often require combining evidence from multiple passages --- these are impossible to answer for the extractive reader by design --- and (2) the expensive hyperparameter search, as we used the NQ-Open hyperparameters also for TQ-Open.
Contrary to the belief based on results on different datasets \cite{yang-etal-2019-end,wang-etal-2019-multi,izacard2020leveraging}, we found that the extractive reader can also benefit from larger input sizes, both at training and test time.
Finally, we proposed a component fusion, which allows merging the complementary behavior of generative and extractive approaches along with the ranking components and found it improves the results significantly.
Due to its heterogeneous and modular nature, our pipeline forms an ideal base for future research of component integration in modern Open-QA.
\section*{Acknowledgments}
We would like to thank Jan Doležal for implementing an R2-D2 demo.
This work was supported by the Czech Ministry of Education, Youth and Sports, subprogram INTERCOST, project code: LTC18054.
The computation used the infrastructure supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).
\section{Introduction}
At present there is a general consensus that the $\Lambda$CDM model of cosmology \cite{Peebles1984} is most consistent with the cosmological observations and is considered to be the standard model of cosmology. The model describes a spatially flat universe predominantly filled with dark energy in the form of a cosmological constant $\Lambda$ and cold dark matter. These two comprise about $95\%$ of the total energy of the universe. Fundamental cosmological parameters in the standard model are constrained using a host of observational data. Recently, different independent data sets used to constrain fundamental cosmological parameters, for example the Hubble constant $H_{0}$ (a parameter that describes the present expansion rate of the universe) and the cosmic curvature parameter $\Omega_{k}$ (a parameter that determines the geometry of the universe), have shown remarkable inconsistencies in the estimates of these parameters. Measurements of cosmic microwave background radiation (CMBR) temperature and anisotropies from the \textit{Planck} satellite give $H_{0} = 67.4 \pm 0.5$ $km$ $s^{-1}$ $Mpc^{-1}$ in the $\Lambda$CDM model \cite{Aghanim}, whereas type Ia supernovae (SNe Ia) probes predict
$H_{0} = 74.03 \pm 1.42$ $km$ $s^{-1}$ $Mpc^{-1}$ via the Hubble--Lema\^{\i}tre law without assuming any cosmological model \cite{Riess}. The $4.4\sigma$ tension between the two independent estimates cannot be attributed to systematic error and reveals an inconsistency between the early universe and the late universe \cite{Riess,Vale,Feeney,Verde}.
Similarly, the estimates of the cosmic curvature parameter $\Omega_{k}$ from CMBR temperature and polarization measurements from \textit{Planck} suggest a closed universe ($\Omega_{k} < 0$) at $99\%$ confidence level \cite{Aghanim,Vale19}, whereas the combination of CMBR data and baryon acoustic oscillation (BAO) measurements predicts a spatially flat universe ($\Omega_{k}=0$) with a remarkable $0.2\%$ precision \cite{Handley}. These inconsistencies indicate that either some new physical phenomena are at play which are not well understood, or the standard model is flawed and other cosmological models of the universe are worth exploring.
Power law cosmology, with the scale factor $a(t) \propto t^{\beta}$, where $\beta$ is a constant, is found to be an excellent fit to a host of observations with $\beta \approx 1$. Linearly coasting cosmology (power law cosmology with $\beta \approx 1$) is found to be comfortably concordant with various low redshift probes such as SN-Ia \cite{SN1,SN2,SN3}, quasar angular sizes (QSO) \cite{qso}, cosmic chronometer H(z) \cite{hz}, gravitational lensing statistics \cite{lensing}, BAO \cite{bao,bao1}, galaxy cluster gas mass fractions \cite{massfraction} and the combination of H(z) + BAO + SN-Ia + gamma-ray burst distance moduli (GRB) \cite{GRB}. Such a model of the universe is free from the horizon and flatness problems. The age of the universe in such a model is consistent with the age estimates for old stars and globular clusters \cite{age,SN1,hz}. Further, nucleosynthesis in a linearly coasting universe produces the desired amount of primordial helium along with the metallicity observed in the lowest metallicity objects \cite{Nucleosynthesis,nuc}. Further, linear evolution of the scale factor is supported in alternative, non-minimally coupled gravity theories, where it turns out to be independent of the equation of state of matter \cite{nonmin,nonmin1,nonmin2}. The coupling of the large scale scalar curvature of the universe to the scalar field in non-minimally coupled theories gives rise to a characteristic evolution: the non-minimal coupling diverges, the scale factor approaches linearity, and the non-minimally coupled field acquires a stress energy that cancels the vacuum energy in the theory. This aspect has been widely explored in attempts to solve the cosmological constant problem.
Linear coasting cosmology presents a simpler alternative to the expansion history of the universe predicted by the standard model, which can be falsified if it fails to provide a good fit to the available observational data. In recent years strong gravitational lensing has become a very important astrophysical tool to estimate cosmological parameters. Earlier attempts to use strong lensing to constrain cosmological parameters were based on two approaches.
The first was statistical, using the CLASS or SQLS samples \cite{chae,oguri}, based on a comparison between the empirical distribution of image separations in observed samples of lenses and the theoretical one. The other method used galaxy clusters as lenses \cite{pac,sereno,gilmore,jullo}, with each lens generating multiple images of various sources, i.e., background galaxies. With a better understanding of the structure and evolution of early type galaxies to assess the mass density profile, and with the availability of reasonable catalogs of strong lenses with spectroscopic and astrometric data (obtained with well defined selection criteria), the ratio of the angular diameter distance between lens and source to that between observer and source can be estimated and used to constrain cosmological parameters \cite{bies,cao}.
In this paper, I test the viability of linearly coasting cosmology by constraining the power law cosmology using the strong gravitational lensing data of 118 lenses from the SLACS, BELLS, LSD and SL2S surveys compiled by Cao et al. \cite{Cao}.
The paper is organized as follows. In Section 2, I describe the power law cosmology ansatz and derive the angular diameter distance relations.
In Section 3, I briefly describe the method to estimate angular diameter distances using strong gravitational lensing and how it is used to constrain the power index of the model. In Section 4 the methodology and data are discussed. Finally, the results are summarized in Section 5.
\section{Power Law Cosmology}
The large scale homogeneity and isotropy observed in the universe suggest that the geometry of the universe can be described by the Friedmann-Robertson-Walker (FRW) metric:
\begin{equation}
ds^{2} = c^{2}dt^{2}-a(t)^{2}[\frac{dr^{2}}{1-K r^{2}} + r^{2}(d\theta^{2}+ \sin^{2}{\theta} d\phi^{2})]
\end{equation}
Here $c$ is the speed of light, $t$ is the proper time, and $r$, $\theta$ and $\phi$ are the spherical polar comoving coordinates. $K$ is the curvature constant and is $\pm 1, 0$ for a suitable choice of units for $r$. $a(t)$ is an unknown function of $t$ and is called the cosmic scale factor or expansion parameter. In the standard model, the scale factor and curvature constant, which specify the dynamics of the universe, are determined from Einstein's equations of the general theory of relativity (GTR) with a homogeneous and isotropic fluid as the source.
In power law cosmology, the scale factor $a(t)$ takes the form:
\begin{equation}
a(t) = \alpha t^{\beta}
\end{equation}
where $\alpha$ and $\beta$ are constants. The Hubble parameter (defined as $H(t) = \frac{\dot{a}}{a}$) is:
\begin{equation}
H(t) = \frac{\beta}{t}
\end{equation}
From the definition of redshift, $\frac{a_{0}}{a(t)} \equiv 1+ z$, where $a_{0}$ is the current value of the scale factor and $z$ is the redshift, we have
$$a(t) = \alpha t^{\beta} = \frac{a_{0}}{1+z}$$
$$\Rightarrow \frac{1}{t} = [\frac{\alpha}{a_{0}}(1+z)]^{\frac{1}{\beta}}$$ and
\begin{equation}
H(z) = H_{0}(1+z)^{\frac{1}{\beta}}
\end{equation}
where $H_{0} = \beta(\frac{\alpha}{a_{0}})^{\frac{1}{\beta}}$ is the present value of the Hubble constant.
The dimensionless comoving distance $d(z)$ in FRW cosmology is:
\begin{equation}
d(z) = \left\lbrace\begin{array}{cc} D_{c},&K=0\\ &\\
\frac{a_{0}H_{0}}{c}\sinh{(\frac{c}{a_{0}H_{0}}D_{c})},&K=-1\\ &\\ \frac{a_{0}H_{0}}{c}\sin{(\frac{c}{a_{0}H_{0}}D_{c})},&K=1
\end{array}\right.
\end{equation}
where $D_{c} = \int_{0}^{z}\frac{H_{0}}{H(z')}dz'$ and angular diameter distance is
\begin{equation}
D_{A}(z) = \frac{c}{H_{0}(1+z)} d(z)
\end{equation}
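For the flat ($K=0$) case, the integral $D_{c}$ has a simple closed form, $D_{c}=\frac{\beta}{\beta-1}\left[(1+z)^{1-1/\beta}-1\right]$ for $\beta\neq 1$ and $D_{c}=\ln(1+z)$ for $\beta=1$. The sketch below evaluates it and cross-checks the closed form against direct numerical quadrature (the function names and the fiducial $H_{0}$ are illustrative):

```python
import numpy as np

def D_c(z, beta):
    """Dimensionless comoving distance D_c = int_0^z H0/H(z') dz'
    for H(z) = H0 (1+z)^(1/beta), in closed form."""
    if beta == 1.0:
        return np.log(1.0 + z)
    return beta / (beta - 1.0) * ((1.0 + z)**(1.0 - 1.0/beta) - 1.0)

def D_A(z, beta, H0=70.0, c=299792.458):
    """Angular diameter distance (Mpc) in the flat (K=0) case,
    D_A = c / (H0 (1+z)) * d(z) with d(z) = D_c."""
    return c / (H0 * (1.0 + z)) * D_c(z, beta)

# cross-check the closed form against trapezoidal quadrature at beta = 0.9
z = np.linspace(0.0, 1.0, 100001)
f = (1.0 + z)**(-1.0/0.9)
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
assert abs(numeric - D_c(1.0, 0.9)) < 1e-6
```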
\section{Strong Gravitational Lensing}
Strong gravitational lensing (SGL) occurs when multiple images of a background galaxy (the source) appear due to the lensing effect of a galaxy or cluster of galaxies (the lens) lying along its line of sight. The image separation of the source in a specific strong lensing system depends only on the angular diameter distances to the lens and to the source, provided a reliable model for the mass distribution within the lens is known.
A perfect alignment of the source, lens and observer along the same line of sight results in symmetry around the lens, causing a ring-like structure called an Einstein ring.
The Einstein ring radius $\theta_{E}$ depends on the mass of the lensing object $M_{lens}$; the more massive it is, the larger the radius. It also depends on the angular diameter distances from observer to lens $D_{l}$, from observer to source $D_{s}$ and from lens to source $D_{ls}$, and is given by \cite{Sch}
\begin{equation}
\theta_{E} = \left[\frac{4GM_{lens}D_{ls}}{c^{2}D_{l}D_{s}}\right]^{1/2}
\end{equation}
From stellar kinematic measurements, the dynamical mass $M_{E}$ inside the Einstein radius $R_{E} \equiv \theta_{E} D_{l}$ for lenses having a singular isothermal sphere (SIS) mass distribution is
\begin{equation}
M_{E} = \frac{\pi }{G}\sigma_{SIS}^{2}R_{E}
\end{equation}
where $\sigma_{SIS}$ is the stellar velocity dispersion of the lens mass distribution.
Therefore, for lenses having a singular isothermal sphere (SIS) mass distribution, the Einstein radius is
\begin{equation}
\theta_{E} = 4\pi \frac{\sigma_{SIS}^{2}}{c^{2}} \frac{D_{ls}}{D_{s}}
\end{equation}
$\theta_{E}$ depends on the cosmological model through the ratio $D_{ls}/D_{s}$ of the angular diameter distance between lens and source to that between observer and source. The cosmological model of the universe can be tested using equation (9)
if one has reliable data for the lensing system, i.e., the Einstein radius $\theta_{E}$ from imaging and $\sigma_{SIS}$ from the central velocity dispersion obtained from spectroscopy \cite{bies,cao}.
With new and powerful upcoming sky surveys, along with the ongoing surveys with better precision, SGL is going to be one of the most effective probes to constrain cosmological parameters in a model independent way.
\section{Methodology and Data}
To constrain a cosmological model using SGL data, the quantity of interest is
\begin{equation}
D_{th}(z_{l},z_{s}) = \frac{D_{ls}(z_{l},z_{s})}{D_{s}(z_{s})}
\end{equation}
and the corresponding observable is
\begin{equation}
D_{ob} = \frac{c^{2}}{4\pi}\frac{\theta_{E}}{\sigma_{0}^{2}}
\end{equation}
As the dependence on the cosmological model is through a distance ratio, the method is independent of the value of the Hubble constant $H_{0}$ and is not affected by dust absorption or source evolutionary effects. However, it depends upon the reliability of the lens modeling and the measurements of $\sigma_{0}$.
In a flat power law cosmology ($K = 0$), the ratio $D_{th}(z_{l},z_{s})$ is
\begin{equation}
D_{th}(z_{l},z_{s}) = \frac{[(1+z_{s})^{1-\frac{1}{\beta}}-(1+z_{l})^{1-\frac{1}{\beta}}]}{[(1+z_{s})^{1-\frac{1}{\beta}}-1]}
\end{equation}
and in the limiting case $\beta \rightarrow 1$ this reduces to
\begin{equation}
D_{th}(z_{l},z_{s}) = \frac{\log(\frac{1+z_{s}}{1+z_{l}})}{\log(1+z_{s})}
\end{equation}
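A quick numerical sanity check (a sketch; the function names are mine) confirms that equation (12) reduces to equation (13) as $\beta \rightarrow 1$:

```python
import numpy as np

def D_th_flat(zl, zs, beta):
    # distance ratio D_ls / D_s in flat power law cosmology, eq. (12)
    e = 1.0 - 1.0/beta
    return ((1 + zs)**e - (1 + zl)**e) / ((1 + zs)**e - 1.0)

def D_th_linear(zl, zs):
    # beta -> 1 limiting case, eq. (13)
    return np.log((1 + zs) / (1 + zl)) / np.log(1 + zs)

# the general expression approaches the linear-coasting limit
zl, zs = 0.3, 1.5
assert abs(D_th_flat(zl, zs, 1.000001) - D_th_linear(zl, zs)) < 1e-4
```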
The corresponding quantity in open power law cosmology (with $K=-1$) is
\begin{equation}
D_{th}(z_{l},z_{s}) =\frac{\sinh[k\frac{\beta}{\beta-1}[(1+z_{s})^{1-\frac{1}{\beta}}-(1+z_{l})^{1-\frac{1}{\beta}}]]} {\sinh[k\frac{\beta}{\beta-1}[(1+z_{s})^{1-\frac{1}{\beta}}-1)]]}
\end{equation}
where $k = \frac{c}{a_{0}H_{0}}$, and in the limiting case $\beta \rightarrow 1$ the ratio is
\begin{equation}
D_{th}(z_{l},z_{s}) = \frac{(\frac{1+z_{s}}{1+z_{l}})^{k}-(\frac{1+z_{l}}{1+z_{s}})^{k}}{(1+z_{s})^{k}-(1+z_{s})^{-k}}
\end{equation}
The data set used in this paper consists of 118 strong lensing systems from the Sloan Lens ACS Survey (SLACS), the BOSS Emission-Line Lens Survey (BELLS), the Lenses Structure and Dynamics Survey (LSD) and the Strong Lensing Legacy Survey (SL2S). The data set was compiled by Cao et al. (2015) [Table 1 in \cite{Cao}]. For each lens, the source redshift ($z_{s}$), lens redshift ($z_{l}$) and luminosity averaged central velocity dispersion $\sigma_{0}$ are determined using spectroscopy from the Sloan Digital Sky Survey (SDSS), the Baryon Oscillation Spectroscopic Survey (BOSS), LSD and the Canada France Hawaii Telescope Legacy Survey (CFHTLS). Most of the lensing galaxies in the data set are early type elliptical galaxies, and a SIS mass profile for such galaxies is strongly supported in various independent studies \cite{Fuk,Hel,Laga,Ruff}.
As the velocity dispersion for a SIS lens, $\sigma_{SIS}$, may not be the same as the central velocity dispersion $\sigma_{0}$, a new parameter $f_{E}$ was introduced by Kochanek (1992) \cite{Koch} such that $\sigma_{SIS} = f_{E}\sigma_{0}$. The parameter $f_{E}$ compensates for the contribution of dark matter halos to the velocity dispersion, as well as for systematic errors in the measurement of image separation and any possible effect of background matter over the lensing systems. All these factors can affect the image separation by up to $20\%$, which limits $\sqrt{0.8} < f_{E} < \sqrt{1.2}$ \cite{Ofek,cao}. In this work $f_{E}$ is taken as a free parameter fitted together with the cosmological parameters $\beta$ and $k$.
The Einstein radius $\theta_{E}$ is determined from Hubble Space Telescope (HST) images. Across the surveys considered here, the method to determine the Einstein radius is more or less consistent: the Einstein radii were measured by fitting mass distribution models to the lensed images, after subtracting the de Vaucouleurs profile of the lens. The fractional uncertainty of $\theta_{E}$ is estimated to be $5\%$ across all surveys \cite{Cao}.
Constraints on the power law cosmology are obtained by performing a Markov chain Monte Carlo (MCMC) analysis using the Python module \textit{emcee} \cite{emcee} and maximizing the likelihood function $\textit{L} = e^{-\chi^{2}/2}$ to determine the best-fit values of the parameters of the model. The likelihood function for the data set used is evaluated using the expression
\begin{equation}
\chi^{2} = \sum_{i=1}^{N} \left(\frac{D_{th}(z_{l,i},z_{s,i},\textbf{p})-D_{ob}(\sigma_{0,i},\theta_{E,i})}{D_{ob,i}\,\Delta D_{ob,i}}\right)^{2}
\end{equation}
where $\textbf{p}$ denotes the cosmological model parameters, $N$ is the number of data points and $\Delta D_{ob}$ is the fractional uncertainty in the value of $D_{ob}$, given by
\begin{equation}
\Delta D_{ob} = \sqrt{\left(\frac{\Delta\theta_{E}}{\theta_{E}}\right)^{2}+ 4\left(\frac{\Delta\sigma_{0}}{\sigma_{0}}\right)^{2}}
\end{equation}
where $\Delta\theta_{E}$ and $\Delta\sigma_{0}$ are the uncertainties in the measurement of Einstein radius and velocity dispersion respectively.
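Putting equations (11), (16) and (17) together for a single lens, the computation looks as follows (a sketch with illustrative measurement values, not values from the actual catalog):

```python
import numpy as np

c_kms = 299792.458                     # speed of light in km/s

# Illustrative measurements for one lens: Einstein radius in radians
# (with the 5% fractional uncertainty quoted for theta_E) and central
# velocity dispersion in km/s.
theta_E, d_theta = 5.0e-6, 0.05 * 5.0e-6
sigma0, d_sigma = 250.0, 15.0

# observable distance ratio, eq. (11): D_ob = c^2 theta_E / (4 pi sigma0^2)
D_ob = c_kms**2 * theta_E / (4.0 * np.pi * sigma0**2)

# fractional uncertainty, eq. (17)
dD_ob = np.sqrt((d_theta / theta_E)**2 + 4.0 * (d_sigma / sigma0)**2)

def chi2_term(D_th, D_ob, dD_ob):
    # one lens's contribution to the chi^2 sum of eq. (16)
    return ((D_th - D_ob) / (D_ob * dD_ob))**2
```

The full $\chi^{2}$ is the sum of such terms over all 118 lenses, with $D_{th}$ evaluated from equation (12) or (14) at each lens's redshift pair.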
\section{Results and Discussion}
The constraints obtained in this work are summarized in Table 1. First, the best fit values and the 1-dimensional marginalized best fit values and uncertainties of $\beta$ and $f_{E}$ were obtained for a flat ($K = 0$) power law universe. The priors used for both parameters are flat, with the $\beta$ prior non-zero over the range $0.1\leq \beta \leq 4$ and the $f_{E}$ prior non-zero over the range $0.8\leq f_{E} \leq 1.2$. For the open ($K=-1$) power law cosmology, the dataset does not constrain the parameter $k= \frac{c}{a_{0}H_{0}}$, and it is found that for $k\ll 1$ the constraints on $\beta$ and $f_{E}$ are the same as those for the flat case. The same conclusion can also be drawn from equation (14), which shows that for $k=\frac{c}{a_{0}H_{0}} \ll 1$ the ratio $D_{th}(z_{l},z_{s})$ for the open case is the same as for the flat case. The 1-D marginalized best fit values of $\beta$ and $f_{E}$ are then obtained with a non-zero flat prior for $k$ over the range $0.01\leq k \leq 0.3$ (considering the constraints on $|\Omega_{k}|$ obtained in various independent studies \cite{Ryan1,Ryan2,Ryan3,Ryan4,holi}). The posterior 1-dimensional probability distributions and two-dimensional confidence regions of the parameters for the flat and open power law models are presented in Figs. 1-2.
The aim of this paper was to test the viability of a linearly coasting cosmology with $a(t) \sim t$, which presents itself as a falsifiable model. The constraints obtained on the general power law cosmology using SGL data show that a linear coasting is accommodated well within $1\sigma$. Various independent works using different datasets have reported that $a(t) \sim t$ is in concordance with the observational tests. For instance, in \cite{Ryan} constraints on power law cosmology are investigated using an extensive dataset consisting of cosmic chronometer, standard ruler and standard candle measurements. The results reported in \cite{Ryan} (Table 2) show that the best fitting values of $\beta$ from the $H(z)$, QSO, GRB and HIIG data are consistent with $\beta = 1$ within $1$-$2\sigma$. However, the best fit value of $\beta$ from the BAO data ($0.9211^{+0.01653}_{-0.01652}$) is not consistent with the linearly coasting universe. It is to be noted, though, that when analyzing the BAO data, the size of the sound horizon $r_{s}$ is needed to calibrate the BAO scale. The sound horizon is defined as
\begin{equation}
r_{s} = \int^{\infty}_{z_{d}} \frac{c_{s}(z)}{H(z)}dz
\end{equation} where $c_{s}$ is the sound speed in the photon-baryon fluid, given by
\begin{equation}
c_{s} = \frac{1}{\sqrt{3}}c[1+\frac{3}{4}\rho_{b}(z)/\rho_{\gamma}(z)]^{-1/2}
\end{equation}
The expression used in \cite{Ryan} to compute $r_{s}$ is a numerically calibrated approximation of the value reported by the linear perturbation code CAMB \cite{Auborg}, which is evaluated for a $\Lambda$CDM model. As $r_{s}$ depends on the expansion history of the universe, a careful evaluation of $r_{s}$ is needed for a power law cosmology when using BAO data.
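To make this point concrete, the sketch below evaluates Eqs. (17)-(18) directly under a power-law expansion $H(z)=H_{0}(1+z)^{1/\beta}$, using $\tfrac{3}{4}\rho_{b}/\rho_{\gamma}\propto (1+z)^{-1}$ with illustrative density parameters (assumptions, not values from this work). Note that since $c_{s}\to c/\sqrt{3}$ at high redshift, the integrand scales as $(1+z)^{-1/\beta}$, so the integral converges at the upper limit only for $\beta<1$; this is one concrete sense in which $r_{s}$ must be re-derived carefully for power-law models rather than imported from $\Lambda$CDM fits.

```python
import numpy as np

def sound_horizon(beta, H0=70.0, z_d=1060.0, Ob_h2=0.0224, Og_h2=2.47e-5,
                  z_max=1e8, n=200000):
    """Numerical evaluation of r_s = int_{z_d}^inf c_s(z)/H(z) dz (Eq. 17)
    for a power-law expansion H(z) = H0 (1+z)^(1/beta), in Mpc.

    The baryon-to-photon term (3/4) rho_b/rho_gamma scales as 1/(1+z); the
    prefactor uses illustrative density parameters Ob_h2, Og_h2.  The
    truncation at z_max stands in for the infinite upper limit (adequate
    only for beta < 1, where the integral converges)."""
    c = 299792.458                                   # km/s
    z = np.logspace(np.log10(z_d), np.log10(z_max), n)
    R = 0.75 * (Ob_h2 / Og_h2) / (1.0 + z)           # (3/4) rho_b / rho_gamma
    cs = (c / np.sqrt(3.0)) / np.sqrt(1.0 + R)       # Eq. (18)
    f = cs / (H0 * (1.0 + z) ** (1.0 / beta))        # integrand of Eq. (17)
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(z))   # trapezoidal rule
```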
I conclude that this class of power law cosmologies is an excellent fit to a host of independent observations and can be considered an interesting alternative to the standard cosmology.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline\hline
Curvature Constant & \multicolumn{2}{c|}{Best Fit Values} & \multicolumn{2}{c|}{1-D Marginalized Values} \\ \hline
K & $\beta$ & $f_{E}$ & $\beta$ & $f_{E}$ \\
\hline & & & & \\
0 & 0.8443 & 1.0160 & $1.0046^{+0.5465}_{-0.2311}$ & $1.0043^{+0.0199}_{-0.0213}$ \\
\hline & & & & \\
-1 & 0.9489 & 1.0103 & $1.0478^{+0.4511}_{-0.2510}$ & $1.0033^{+0.0194}_{-0.0172}$\\ \hline \hline
\end{tabular}
\caption{\small Unmarginalized and 1-dimensional marginalized best fit parameter values and uncertainties.}
\end{table}
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{GL-fig1.pdf}
\caption{ {\small{ 1-dimensional posterior probability distributions and 2-dimensional confidence regions of $\beta$ and $f_{E}$ in the flat power law cosmology}}}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\linewidth]{GL-fig22.pdf}
\caption{ {\small{ 1-dimensional posterior probability distributions and 2-dimensional confidence regions of the parameters in the open power law cosmology}}}
\end{figure}
\section{Introduction}
The data volumes from the ongoing and next generation multi-band and multi-epoch surveys are expected to be so huge that analyzing, cross-correlating and extracting knowledge from such data will represent a challenge for scientists and computer engineers. Therefore, efficient techniques and software solutions, to be directly integrated into the reduction pipelines, will be required to cross-correlate in real time a large variety of parameters for billions of sky objects. One of the most common techniques used in astrophysics, and a core step of any standard modern data reduction/analysis pipeline that is particularly sensitive to the growth of dataset dimensions, is the cross-match among heterogeneous catalogues, which consists in identifying and comparing sources belonging to different observations, performed at different wavelengths or under different conditions.
In this work we present $C^{3}$ (\textit{Command-line Catalogue Cross-match}), a tool to perform efficient catalogue cross-matching, based on the multi-thread paradigm, which can be easily integrated into an automatic data analysis pipeline. Furthermore, one of the major features of this tool is the possibility to choose the shape, orientation and size of the cross-matching area, making $C^{3}$ easily tailorable to specific user needs.
\section{$C^3$ Design and Features}
$C^3$ is a command-line software tool, designed and developed to perform general cross-matching among astrophysical catalogues, meeting the need to work on large datasets produced by independent surveys, to combine data to extract new information and to increase astrophysical knowledge. Based on a specialized sky partitioning function, its high-performance capability is ensured by making use of the multi-core parallel processing paradigm. It works with the most common catalogue formats (FITS, ASCII, CSV, VOTable) and with both equatorial and galactic coordinate systems.
In order to be a general purpose tool, different functional cases have been implemented (see Sect.~\ref{sect:usecases} for further details), as well as the most used join function types (Sect.~\ref{sect:join}). However, $C^{3}$ is easy to use and to configure through few lines in a single configuration file. Main features of $C^{3}$ are the following:
\begin{itemize}
\item \textit{Command line}: it can be used as stand-alone process or integrated within more complex pipelines;
\item \textit{Python compatibility}: up to version 3.4;
\item \textit{Multi-platform}: $C^{3}$ has been tested on Ubuntu Linux 14.04, Windows 7/10, Mac OS and Fedora;
\item \textit{Multi-process}: the cross-matching process has been developed to run by using a multi-core parallel processing paradigm;
\item \textit{Sky partitioning}: a simple sky partitioning algorithm is used to reduce computational time;
\item \textit{User-friendliness}: the tool is very simple to configure and to use; only a simple configuration file is required.
\end{itemize}
\subsection{Functional cases}\label{sect:usecases}
$C^{3}$ is able to match two input catalogues by applying three different matching criteria, corresponding to different functional cases: \textit{Sky}, \textit{Exact Value} and \textit{Row-by-Row}. In the \textit{Sky} functional case, $C^{3}$ performs a positional cross-match between two catalogues, based on the same concept as the Q-FULLTREE approach, a tool of ours introduced in \citep{becciani2015}: for each object of the first input catalogue, it is possible to define an elliptical, circular (as a special ellipse) or rectangular region centered on its coordinates, whose dimensions are limited by a fixed value or defined by specific catalogue parameters (for instance, the FWHM at a specific wavelength). The orientation in the sky of the region is defined by a position angle, characterized also by two additional parameters, to opportunely set the zero-point and the direction of rotation. Once the region of interest has been defined, the next step is to search for sources of the second catalogue within such region, by comparing their distance from the central object with the limits of the area.
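The elementary geometric test behind the \textit{Sky} case can be sketched as a point-in-rotated-ellipse check. The sketch below works in flat, small-angle tangent-plane offsets; names and conventions are illustrative, and the real tool additionally handles the position-angle zero-point and rotation direction, circular and rectangular regions, and spherical-geometry corrections.

```python
import math

def in_matching_region(d_ra, d_dec, a, b, pa_deg):
    """True if a second-catalogue source, offset (d_ra, d_dec) from the
    first-catalogue object (same units as the semi-axes a, b), falls inside
    an elliptical matching region rotated by the position angle pa_deg
    (here measured from the d_ra axis; an illustrative convention)."""
    t = math.radians(pa_deg)
    # rotate the offset into the ellipse frame
    u = d_ra * math.cos(t) + d_dec * math.sin(t)
    v = -d_ra * math.sin(t) + d_dec * math.cos(t)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```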
In the \textit{Exact Value} case, two objects are matched if they have the same value for a pair of columns (one for each catalogue) defined in the configuration file. Finally the \textit{Row-by-Row} case consists in matching objects with the same row-ID of the two catalogues.
\subsection{Match selection and join types}\label{sect:join}
The results of the cross-match are stored as a series of rows, corresponding to the matching objects. In the \textit{Exact value} and \textit{Sky} cases, the conditions that matched rows have to satisfy to be stored can be chosen between two match selection criteria (\textit{all} the matches or only the \textit{best} pairs) and a number of different join possibilities: \textit{i)} \textbf{$1$ and $2$}, only rows having an entry in both input catalogues; \textit{ii)} \textbf{$1$ or $2$}, all rows, matched and unmatched, from both input catalogues; \textit{iii)} \textbf{All from $1$ (All from $2$)}, all matched rows from catalogue $1$ (or $2$), together with the unmatched rows from catalogue $1$ (or $2$); \textit{iv)} \textbf{$1$ not $2$ ($2$ not $1$)}, all the rows of catalogue $1$ (or $2$) without matches in the catalogue $2$ (or $1$); \textit{v)} \textbf{$1$ xor $2$}, only unmatched rows from the catalogue $1$ and $2$.
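The join types above reduce to set operations on the matched row indices of the two catalogues. The following sketch (illustrative names; the actual tool streams catalogue rows rather than materializing index sets) returns, for each join type, which rows of each catalogue are retained.

```python
def join_selection(matches, n1, n2, mode):
    """Rows retained from each catalogue for the C^3 join types, given
    `matches` as (i, j) index pairs and catalogue sizes n1, n2.
    Returns (rows_from_1, rows_from_2) as sets of row indices."""
    m1 = {i for i, _ in matches}
    m2 = {j for _, j in matches}
    all1, all2 = set(range(n1)), set(range(n2))
    if mode == "1 and 2":        # rows with an entry in both catalogues
        return m1, m2
    if mode == "1 or 2":         # matched and unmatched rows from both
        return all1, all2
    if mode == "all from 1":     # matched rows plus unmatched rows of 1
        return all1, m2
    if mode == "all from 2":
        return m1, all2
    if mode == "1 not 2":        # rows of 1 without a match in 2
        return all1 - m1, set()
    if mode == "2 not 1":
        return set(), all2 - m2
    if mode == "1 xor 2":        # unmatched rows from both catalogues
        return all1 - m1, all2 - m2
    raise ValueError(f"unknown join type: {mode}")
```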
\subsection{Performance boosters}
In order to increase performance in terms of computational time, $C^3$ makes use of two different methods: \textit{i)} a massive application of the multi-core parallel processing paradigm, managed by defining, in the configuration file, the number of concurrent parallel processes; \textit{ii)} a sky partitioning algorithm, which reduces the number of checks between the two catalogues. A subsample of the first input catalogue is assigned to each concurrent process, while the objects of the second catalogue are assigned to one of the \textit{cells} defined by the partitioning procedure, according to their coordinates. The size of the unit cell is defined by the maximum dimension that the matching regions can assume (Fig.~\ref{fig:partitioning}a). This choice of cell dimensions ensures that a match between an object of the first catalogue and a source of the second can happen only if the source lies in the nine cells surrounding the object (Fig.~\ref{fig:partitioning}b).
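A minimal sketch of this partitioning scheme, in flat tangent-plane coordinates (the real tool must also handle RA wrap-around and the declination-dependent scale), could look as follows:

```python
from collections import defaultdict

def build_cells(coords2, cell_size):
    """Assign each source of the second catalogue to a cell whose side is
    the maximum dimension the matching regions can assume."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(coords2):
        cells[(int(x // cell_size), int(y // cell_size))].append(idx)
    return cells

def candidate_sources(cells, x, y, cell_size):
    """Second-catalogue sources in the 3x3 block of cells around a
    first-catalogue object at (x, y): by construction of the cell size,
    only these sources can possibly match the object."""
    ci, cj = int(x // cell_size), int(y // cell_size)
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out.extend(cells.get((ci + di, cj + dj), []))
    return out
```

Only the candidates returned by `candidate_sources` need the full (and more expensive) region-membership test, which is where the reduction in the number of checks comes from.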
\articlefigure[width=.7\textwidth]{P1-25_f1.pdf}{fig:partitioning}{The $C^{3}$ sky partitioning method. The sky is partitioned in cells whose dimensions are determined by the maximum dimension that the matching regions can assume (panel \emph{a}). Each source of the second catalogue is assigned to a cell; a source and an ellipse referred to a first-catalogue object can match only if the source lies within the nine cells surrounding the object (panel \emph{b}).}
\section{A $C^3$ customized version: ClumpPopulator}
\articlefigure[width=.48\textwidth]{P1-25_f2.pdf}{fig:cp}{ClumpPopulator is able to perform positional association also for a number of additional ellipses, concentric to the basic clump ellipse and with gradually increasing and/or decreasing dimensions.}
In the FP7 project ViaLactea \citep{molinaro2016}, a customized $C^3$ version, named \textit{ClumpPopulator}, has been designed to find positional associations between sources detected at different resolutions. This software has been calibrated to be applied to the catalogue of clumps produced by Hi-GAL \citep{molinari2016} and a cross-matched catalogue of sources from the UKIDSS \citep{lucas2008} and GLIMPSE \citep{churchwell2009} surveys, although it can work on arbitrary source catalogues. In addition to the cross-match based on the catalogue parameters (FWHMs and position angles at a given resolution), the association is extended to a user-defined number of additional ellipses, concentric with the basic clump ellipse and with gradually increasing and/or decreasing dimensions. Moreover, a set of routines has been provided to remove intersecting clumps from the results, to compare stellar surface density inside and outside the clumps and to produce a catalogue containing only the sources associated with clumps.
\section{Conclusions}\label{sect:conclusion}
In this paper we have introduced $C^{3}$, a new scalable tool to cross-match astronomical datasets. It is a Python script, based on the multi-core parallel processing paradigm and on a sky partitioning algorithm to ensure high-performance capability, and designed to provide maximum flexibility to the end users in terms of choices about catalogue properties (I/O formats and coordinate systems), shape and size of the matching area and cross-matching type. Despite the general purpose of the tool, it is easy to configure, by compiling a single configuration file, and to execute as a stand-alone process or integrated within any generic data reduction/analysis pipeline. It is also possible to tailor the tool configuration to the features of the hosting machine, by properly setting the number of concurrent processes and the resolution of the sky partitioning.
A test campaign on real public data has been performed both to scientifically validate the $C^{3}$ tool, showing perfect agreement with other publicly available tools, and to compare its computing-time efficiency with other cross-matching applications, revealing fully comparable performance, in particular as the input catalogues grow in size and dimensions.
The $C^{3}$ tool and the user guide are available at the page \url{http://dame.dsf.unina.it/c3.html}.
\acknowledgements This work was financed by the 7th European Framework Programme for Research Grant FP7-SPACE-2013-1, \textit{ViaLactea - The Milky Way as a Star Formation Engine}.
\section*{Acknowledgements}
\noindent The authors wish to thank Professor Philip K. Maini for helpful comments and feedback on the manuscript. GC is supported by the EPSRC and MRC Centre for Doctoral Training in Systems Approaches to Biomedical Science and by Cancer Research UK. C.Z. acknowledges the Breast Cancer Research Foundation (BCRF).
P.G.K. acknowledges support from the Leverhulme
Trust via a Visiting Fellowship and thanks the Mathematical
Institute of the University of Oxford for its hospitality
during part of this work.
\bibliographystyle{elsarticle-num}
\section{Spatial Model}
\label{AppendixA}
\noindent We outline here the set up for the 1D simulations presented in Section \ref{conclusion}. As a full description of the spatial model goes beyond the scope of the present work, we focus on the main changes to~(\ref{mixedmodel})-(\ref{eq_ad}).
We now view the oxygen concentration $c$ as a dependent variable, rather than a prescribed function. We suppose that oxygen is supplied to the region by blood vessels on the domain boundary $\partial \Omega_2$ (see Figure~\ref{schematicspatial}). Oxygen diffuses from the boundary into the tissue where it is consumed by the tumour cells at rates which depend on their phenotype and the local oxygen concentration. The evolution of the dimensionless cell density, $n=n(\vec{x},z,t)$, is driven by a phenotypic flux of the same form as in Equation~(\ref{mixedmodel}) but a spatial flux is included to account for random motion in the spatial dimension.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{figures/model/modeldiagram2.pdf}
\caption{Schematic representation of the phenotypic and spatial model.}
\label{schematicspatial}
\end{figure}
As shown in Figure \ref{schematicspatial}, we consider a fixed tissue slice where the oxygen supply (i.e. vasculature) is confined to one of the tissue boundaries. Given the assumed symmetry of the problem, we can consider a 1D Cartesian geometry with $x\in[0,L]$. The spatial model is defined by the following system of coupled PDEs:
\begin{subequations}
\begin{align}
\hspace{-10mm}
\frac{\partial n}{\partial t}=\underbrace{ D_N \frac{\partial^2 n}{\partial x^2}}_{spatial \hspace{1mm} flux}+ \frac{\partial }{\partial z} \left(\theta \frac{\partial n}{\partial z}-n v_z(z,c)\right)+ F(z,c,\phi,t) n,\\
\frac{\partial c}{\partial t} = D_C\frac{\partial^2 c}{\partial x^2}-\Gamma(t,x,c),\label{ox}\\
\theta \frac{\partial n}{\partial z}-n v_z = 0, \qquad z\in\left\{0,1\right\},\, x\in [0,L],\, t>0,\\
\left.\frac{\partial n}{\partial x}\right|_{x=0}=\left.\frac{\partial n}{\partial x}\right|_{x=L}=0, \quad z\in(0,1),\, t>0,\\
\left.\frac{\partial c}{\partial x}\right|_{x=L}=0, \quad c(0,t)=c_{\infty}, \quad t>0,\\[2pt]
n(x,z,0)= n_0(x,z) \quad x\in[0,L],\, z\in(0,1),\\[2mm]
c(x,0)= c_0(x) \quad x\in[0,L],\\
\phi(x,t)=\int_0^1 n(x,z,t) \, dz,\\
\Gamma(t,x,c)=\int_0^1 \gamma(z,c) n(x,z,t) \, dz,\\
\begin{aligned}
F(z,c,\phi,t)= p(z,c)\left(1-\phi\right) -f(z) - \underbrace{g H(c_N-c)}_{necrosis}\\-\sum^{N}_{i} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i).
\end{aligned}
\end{align}\label{spatial_mod}
\end{subequations}
In Equation~(\ref{spatial_mod}), $D_N$ and $D_C$ are the assumed constant spatial diffusion coefficient for the cells and oxygen, respectively, while $\gamma$ denotes the rate at which cells of phenotype $z$ consume oxygen and $\Gamma$ the net rate of oxygen consumption at position $x$ and time $t$. The advection velocity $v_z$ is as defined by Eq.~(\ref{eq_ad}), while the fitness function $F$ is analogous to that defined in Section \ref{fit}, with an additional term to account for necrosis. The latter is assumed to occur at a constant rate $g \geq 0$, independent of cell phenotype, when the oxygen concentration falls below a threshold value, $c_N \geq 0$. We also modify the definition of the survival fraction $SF$ given in \S\ref{fit} (see Equation~(\ref{lq_model})) to account for the \textit{oxygen-enhancement ratio} (OER)~\cite{lewin,lewin2}. According to the \textit{oxygen fixation hypothesis} \cite{HallEricJ2012Rftr}, part of the biological damage induced by radiation is indirect, being mediated by the presence of free radicals. Thus, when oxygen is limited, radio-sensitivity is accordingly reduced. Based on experiments, the range of oxygen concentrations at which this effect is relevant corresponds to more severe levels of hypoxia (where $c\sim 0.5\%$ or lower). We do not consider such situations for the well-mixed model, where we consider mild hypoxia. However, accounting for the OER will be important for the spatially extended model. Recall from Section \ref{hyp cond} that hypoxia is a favourable \textit{niche} for CSCs. Therefore the OER will endow them with additional protection from radiation.
Denoting by $c^R_H$ the oxygen threshold at which the OER becomes active, we use the following functional form for the survival fraction when simulating the spatially-extended model:
\begin{align}
SF(z,c)=\begin{cases}
\exp\left[-\alpha(z)d-\beta(z) d^2\right] \quad c>c^R_H\\[2mm]
\exp\left[-\myfrac[2pt]{\alpha(z)}{OER}d-\myfrac[2pt]{\beta(z)}{OER^2} d^2\right] \quad c<c^R_H.
\end{cases}\label{SF_space}
\end{align}
In Equation (\ref{SF_space}), $\alpha$ and $\beta$ are defined by~(\ref{radiosensitivity}).
We note that in the main text, we consider $c=1$ (normoxia) and $c=0.2$ (hypoxia), so that the OER does not impact cell responses to RT.
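As a minimal sketch, Equation~(\ref{SF_space}) can be coded as below. The radiosensitivity values used in the test are illustrative placeholders; in the model proper, $\alpha$ and $\beta$ are phenotype-dependent and given by~(\ref{radiosensitivity}).

```python
import math

def survival_fraction(alpha, beta, d, c, c_RH, oer=3.0):
    """LQ survival fraction with the oxygen-enhancement ratio, Eq. (SF_space):
    below the oxygen threshold c_RH the radiosensitivities are reduced to
    alpha/OER and beta/OER^2, protecting hypoxic cells from a dose d."""
    if c < c_RH:
        alpha, beta = alpha / oer, beta / oer**2
    return math.exp(-alpha * d - beta * d**2)
```

As expected, for the same dose the survival fraction is higher below the threshold, so cells sitting in severely hypoxic regions (including the stem-like cells occupying the hypoxic niche) are preferentially spared.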
For the well-mixed model, the oxygen concentration is typically maintained at a prescribed, constant value. By contrast, for the spatially extended model, we suppose that the tumour cells consume oxygen at a rate $\gamma$ which depends on their phenotype, $z$. As mentioned previously, stem cells are known to have a glycolytic metabolism and, thus, we assume that they consume less oxygen than cancer cells. Consequently, we consider $\gamma$ to be a monotonically increasing function of the phenotypic variable $z$ which asymptotes to its maximum value for $z>0.5$:
\begin{equation}
\gamma(z,c)= H(c-c_N)\left[\gamma_{max} -\frac{\gamma_{max}}{2} e^{-k_{\gamma} z}\right].\label{gamma}
\end{equation}
In Equation (\ref{gamma}), $H=H(x)$ is the Heaviside function
(i.e. $H(x)=1$ if $x>0$ and $H(x)=0$ if $x\leq 0$).
In order to continue their normal function, glycolytic cells consume oxygen, albeit at a lower rate. Motivated by results presented in \cite{consumption}, we assume that glycolytic CSCs consume oxygen at approximately half the rate of terminally differentiated cancer cells.
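To illustrate how the consumption law~(\ref{gamma}) feeds back on the oxygen field, the sketch below solves a steady-state version of Equation~(\ref{ox}), $D_C c'' = \Gamma(x,c)$, with vessels at $x=0$ and no flux at $x=L$, using parameter values from Table~\ref{param_set}. It is an illustrative finite-difference sketch, not the scheme used for the simulations; the caller supplies $\Gamma(x,c)$, e.g. $\gamma(\bar z,c)$ times an assumed uniform cell density for a single phenotype $\bar z$.

```python
import numpy as np

def gamma_z(z, c, g_max=3.11e-12, k_gamma=10.0, c_N=0.0125):
    """Phenotype-dependent consumption rate, Eq. (gamma): stem-like cells
    (z ~ 0) consume at about half the maximal rate; consumption stops
    below the necrotic threshold c_N."""
    return np.where(c > c_N, g_max * (1.0 - 0.5 * np.exp(-k_gamma * z)), 0.0)

def oxygen_profile(Gamma, L=0.45, Dc=0.684, c_inf=1.0, n=201, iters=100):
    """Steady state of D_C c'' = Gamma(x, c), with c(0) = c_inf (vessels)
    and no-flux at x = L, by finite differences + fixed-point iteration."""
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    c = np.full(n, c_inf)
    for _ in range(iters):
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = 1.0
        b[0] = c_inf                          # Dirichlet: vessels at x = 0
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = Dc / h**2
            A[i, i] = -2.0 * Dc / h**2
            b[i] = Gamma(x[i], c[i])
        A[-1, -1] = -2.0 * Dc / h**2          # Neumann at x = L (ghost point)
        A[-1, -2] = 2.0 * Dc / h**2
        b[-1] = Gamma(x[-1], c[-1])
        c_new = np.linalg.solve(A, b)
        if np.max(np.abs(c_new - c)) < 1e-12:
            return x, c_new
        c = c_new
    return x, c
```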
\subsection{Parameters}
\label{App_param}
\begin{table}[h!]
\centering\footnotesize
\begin{tabular}{l c@{\hspace{0.5cm}} l@{\hspace{0.5cm}} l@{\hspace{0.5cm}} c c}
\toprule[2pt]\addlinespace[2pt]
&Parameter & Value & Units & Reference & Label\\[2pt]
\toprule\addlinespace[4pt]
Phenotypic Diffusion&$\theta$ & $5\times 10^{-6}$& $hr^{-1}$ & -& \\[3pt]
\hline\addlinespace[3pt]
\multirow{3}{20mm}{\centering Advection velocity $v_z$ Eq~(\ref{eq_ad})}& $V_\pm$ & $\left\{2,4,8\right\}\times 10^{-4}$& $hr^{-1}$ & -& \\
& $\xi_\pm$ & $\left\{0.05,0.1,0.5\right\}$& - & -& \\
& $\omega_\pm$ & $\left\{1,2\right\}$& - & -& \\
\toprule\addlinespace[2pt]
\multirow{9}{25mm}{\centering Fitness $F$ Eq~(\ref{eqnetprol})-(\ref{apoptosis})} &$p^{max}_0$ & 0.005& $hr^{-1}$&\cite{Sweeney1998} &\\[2pt]
&$K_{H,0}$ & $0.05$ & -&-&\\[2pt]
&$g_0$ &0.01&-&-&\\[2pt]
&$p^{max}_1$ & 0.02& $hr^{-1}$&\cite{Sweeney1998}&\\[2pt]
&$K_{H,1}$ & $0.3$ & -&-&\\[2pt]
&$g_1$ &0.04&-&-&\\[2pt]
&$d_f$ & $\left\{0.001,0.015\right\}$ &$hr^{-1}$ &-&\\[2pt]
&$k_f$ & $10$ & -&-&\\[2pt]
& $\Phi_{max}$ & $10^8$ & cell/cm$^3$&\cite{DelMonte2009}&\\[5pt]
\multirow{4}{25mm}{\centering Survival Fraction $SF$ Eq~(\ref{radiosensitivity})/Eq~(\ref{SF_space})}& $\alpha_{min,max}$& Table \ref{rad_tab2}& Gy&\cite{Saga}&\\
& $\beta_{min,max}$& Table \ref{rad_tab2}& Gy$^{-2}$&\cite{Saga}&\\[2pt]
& $\xi_R$ & 0.2& - & -&\\[3pt]
& OER & 3 &- & \cite{lewin2}& S\\[2pt]
\toprule\addlinespace[2pt]
\multirow{2}{30mm}{\centering Initial phenotypic distribution $n_0$}&$\phi_0$ & 0.4& $hr^{-1}$&-&\\[2pt]
&$\sigma$& 0.1 & -&- & \\[2pt]
\toprule[1.5pt]\addlinespace[3pt]
\centering Spatial Diffusion &$D_N$ &$1.25\times 10^{-4}$& mm$^2$hr$^{-1}$&&S\\[2pt]
\centering Domain Size & L & 0.45 & mm&-&S\\[3pt]
\toprule
Oxygen Diffusion & $D_c$ & $6.84\times 10^{-1}$&mm$^2$hr$^{-1}$&-&S\\[3pt]
\multirow{2}{25mm}{\centering Consumption $\gamma$ Eq~(\ref{gamma})}&$\gamma_{max}$ &$3.11\times 10^{-12}$&g(cell hr)$^{-1}$&\cite{Boag1970}&S\\
&$k_\gamma$ & $10$ & -&-&S\\[3pt]
\multirow{3}{25mm}{\centering Oxygen thresholds}&$c_\infty$ & 1& -&\cite{Lewin2018}&S\\[2pt]
&$c_H$ & 0.3& -& \cite{lewin,ester2}&S\\[2pt]
&$c_N$ & 0.0125 & -&\cite{ester2}&S\\[2pt]
\bottomrule[2pt]\addlinespace[2mm]
\end{tabular}
\caption{\small List of the parameter values in model~(\ref{mixedmodel})-(\ref{eq_ad}) and/or its spatial extension~(\ref{spatial_mod})-(\ref{gamma}). Where the parameters are free, we list the set of values considered in the paper. We further label with (S) those parameters that are only present in the spatial model.}
\label{param_set}
\end{table}
The model contains a large number of parameters, most of which will vary in value between tumours and patients. The main focus of this work is to study the role played by phenotypic advection (as it interacts with cell proliferation and apoptosis, as well as competition mechanisms). On this basis, we decided to perform a parameter sweep for parameters associated with the advection velocity, while holding all other model parameters fixed at values previously reported in the literature, where such values exist.
The main challenge is to identify the phenotypically dependent parameters, such as the growth rate in Equation (\ref{pO2}). As most data reported in the literature refer to processes, such as cell proliferation, at the population/cell-colony level and do not account for phenotypic variation, it was difficult to estimate parameters that characterise the phenotypic variation in these processes.
We based our estimates of the proliferation rate on the doubling times reported by \cite{Sweeney1998} for two breast cancer cell lines, MCF-7 and BT-549. The former belongs to the class of \textit{luminal}-like cells, which are characterised by low stemness levels \cite{Ricardo2011} and high proliferation rates (doubling time $1.8$ days, i.e., growth rate $0.016$ hr$^{-1}$). On the other hand, BT-549 belongs to the class of \textit{triple-negative} cells, whose population is dominated by highly aggressive but slowly proliferating stem-like cells \cite{Ricardo2011} (doubling time $3.7$ days \cite{Sweeney1998}, i.e., growth rate $0.008$ hr$^{-1}$). Given the variability in the phenotypic distribution of these cell lines, we have rounded the values to those presented in Table \ref{param_set}.
As is common in the literature, we have chosen the source of oxygen (i.e. $c_\infty$) to be at a pressure of $100$ mmHg \cite{Lewin2018}. Given that atmospheric pressure corresponds to $760$ mmHg with $21\%$ O$_2$, the oxygen tension corresponding to $c_\infty$ is about $8\%$\, O$_2$. The hypoxic and necrotic thresholds ($c_H$ and $c_N$) are then equivalent to oxygen pressures of $2.5\% \, O_2$ and $0.1\% \, O_2$ in line with \cite{ester,ester2}. These values can be converted into oxygen concentrations by use of Henry's law \cite{Lewin2018}, see Table \ref{param_set}.
\section{Linear Stability Analysis.}
\label{AppendixB}
\noindent As mentioned in Section \ref{LSA}, in order to compute the largest eigenvalue $\lambda_0$ numerically we rely on the \textit{Chebfun} package for MATLAB \cite{Driscoll2014}. In order to solve the eigenvalue problem we first
make the following substitution in Equation~(\ref{neu}):
\begin{equation}
\delta{n}=y(z) \exp\left[\frac{1}{2\theta}\int^z v_z(s) \,ds\right].
\end{equation}
It is straightforward to show that the function $y$ satisfies the following eigenvalue problem:
\begin{eqnarray}
\begin{aligned}
\theta \frac{d^2 y}{d z^2}+q(z;c,\bar{\phi})y-p(z;c)\bar{n}\int_0^1 y(s)k(s,z) \,ds=\lambda y
\end{aligned}\label{eq:ApBeig}\\
\mbox{where} \quad q(z;c,\bar{\phi})=p(z;c)(1-\bar{\phi})-f(z)-\frac{1}{2}\frac{d v_z}{d z}-\frac{1}{4}\frac{v^2_z}{\theta},\label{eq:q}\\
\mbox{and} \quad k(s,z)=\exp\left[\frac{1}{2\theta}\int_s^z v_z(p) \,dp\right],\\
\frac{d y}{dz}=0\quad \mbox{at} \; z=0,1.
\end{eqnarray}
Note that the integral in Equation~(\ref{eq:ApBeig}) is of Fredholm form, which is built into the \textit{Chebfun} package \cite{Driscoll2014}. For $\bar{n}=0$, the above differential equation
corresponds to the standard form of a
Schr{\"o}dinger-type, \textit{Sturm-Liouville} eigenvalue problem, where the \textit{Hermiticity} of the differential operator
implies the existence of purely real eigenvalues.
In the case of the null steady state, the eigenvalue problem simplifies to:
\begin{subequations}
\begin{align}
\begin{aligned}
\theta \frac{d^2 y}{d z^2}+\tilde{q}(z;c)y=\lambda y
\end{aligned}
\label{neu_cond0}
\\
\frac{d y}{dz}=0 \quad \mbox{at} \; z=0,1. \label{neu_cond}
\end{align}\label{eq:eigenSL}%
\end{subequations}
where $\tilde{q}(z;c)=q(z;c,0)$ as defined in~(\ref{eq:q}).
Therefore, applying the \textit{Sturm Oscillation Theorem}~\cite{coddingtonlevinson}
to~(\ref{eq:eigenSL})
we deduce that $\sigma(\mathcal{M})$ has infinitely many simple and real eigenvalues which can be enumerated in strictly decreasing order:
\begin{equation}
\lambda_0>\lambda_1>\ldots, \, \lim\limits_{n\rightarrow \infty} \lambda_n=-\infty.
\end{equation}
We conclude that the trivial steady state is either a stable node (if $\lambda_0<0$) or a saddle (if $\lambda_0>0$).
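A rough alternative to the Chebfun computation is a direct finite-difference discretization of problem~(\ref{eq:eigenSL}): the operator $\theta\, d^2/dz^2 + \tilde{q}(z)$ with Neumann boundary conditions becomes a matrix whose largest real eigenvalue approximates $\lambda_0$. The sketch below is illustrative (second-order differences with ghost points) and far less accurate than the spectral method used in the text.

```python
import numpy as np

def principal_eigenvalue(q, theta, n=400):
    """Largest eigenvalue of  theta y'' + q(z) y = lambda y  on (0, 1)
    with Neumann boundary conditions y'(0) = y'(1) = 0, estimated by
    second-order finite differences.  q must accept a numpy array of z."""
    z = np.linspace(0.0, 1.0, n)
    h = z[1] - z[0]
    off = theta / h**2
    A = np.diag(-2.0 * off + q(z))
    A += np.diag(np.full(n - 1, off), 1) + np.diag(np.full(n - 1, off), -1)
    A[0, 1] = 2.0 * off       # ghost points enforce the Neumann conditions
    A[-1, -2] = 2.0 * off
    return np.linalg.eigvals(A).real.max()
```

For constant $\tilde{q}\equiv q_0$ the spectrum is $q_0-\theta (k\pi)^2$, so the constant mode gives $\lambda_0=q_0$ and the sign of $\tilde{q}$ alone decides stability, consistent with Lemmas~\ref{lemma1}-\ref{lemma2}.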
In addition to numerical estimation of $\lambda_0$,
analytical approximations and bounds can be obtained via the so-called \textit{Rayleigh quotient} $R(y)$.
If we multiply Equation~(\ref{neu_cond0}) by $y$ and integrate by parts, then we obtain:
\begin{subequations}
\begin{align}
R(y)=\frac{1}{\|y\|^2_{L^2}} \:
\int_0^1 \left \{ \theta y\frac{d^2 y}{d z^2}+\tilde{q}(z;c)y^2 \right \} dz,
\end{align}
where $y$ also satisfies the Neumann boundary conditions~(\ref{neu_cond}).
It therefore follows that:
\begin{align}
\lambda_0 =\sup_{y\in E, \ y\neq 0} R(y)
\end{align}
\end{subequations}
where $E$ is the set of twice differentiable functions that satisfy condition~(\ref{neu_cond}).
\begin{lemma}
If the function $\tilde{q}$ is such that $\max\limits_{z\in(0,1)} \tilde{q}<0$ then the null steady state is stable.\label{lemma1}
\end{lemma}
\begin{proof}
Consider the numerator of the quotient defining $R(y)$:
\begin{subequations}
\begin{align}
\begin{aligned}
\int_0^1\left \{ \theta y\frac{d^2 y}{d z^2}+\tilde{q}(z;c)y^2 \right \} dz = \cancelto{0}{\theta\left[y\frac{d y}{d z}\right]_{0}^1} + \int_0^1\left\{\tilde{q}(z;c)y^2-\theta \left(\frac{d y}{d z}\right)^2\right\} dz \\
\leq \int_0^1\tilde{q}(z;c)y^2 dz. \qquad \qquad
\end{aligned}
\end{align}
We deduce that
\begin{align}
R(y) \leq \int_0^1 \tilde{q}(z;c) \frac{y^2}{\|y\|_2^2} dz=R_{up}(y).
\end{align}
It is therefore apparent that if the function $\tilde{q}$ is negative throughout the domain, then $R_{up}$ is negative for any choice of $y\in E$. In such a case, we have that:
\begin{align}
\lambda_0=\sup_{y\in E, \ y\neq 0} R(y)<\sup_{y\in E, \ y\neq 0} R_{up}(y)<0.
\end{align}
\end{subequations}
\end{proof}
We now show that, under normoxia, the trivial steady state is unstable provided the apoptosis rate and the magnitude of the phenotypic advection velocity are sufficiently small.
\begin{lemma}
If the model proliferation rate, apoptosis rate and phenotypic
advection velocity and diffusion coefficient are chosen such that:
\begin{equation}
\int_0^1 \left \{ p(z,c)-f(z)- \frac{v_z^2}{4\theta} \right \} dz>0,\label{cond_ins}
\end{equation}
then the trivial steady state is unstable. \label{lemma2}
\end{lemma}
\begin{proof}
Consider $y_0\equiv 1$; then $y_0\in E$ and $\lambda_0\geq R(y_0)$, where:
\begin{equation*}
R(y_0)=\int_0^1 \left \{ p(z,c)-f(z)- \frac{v_z^2}{4\theta} \right \} dz>0.
\end{equation*}
Consequently, $\lambda_0=\sup_{y\in E, \ y\neq 0} R(y)\geq R(y_0)>0$, and the trivial steady state is unstable.
\end{proof}
\begin{remark}
Note that for~(\ref{cond_ins}) to hold we require $\int_0^1 (p-f) dz>0$ so that cell proliferation dominates apoptosis.
Based on the functional form defined in Section~\ref{fit}, we have that:
\begin{equation}
\begin{aligned}
\hspace{-8mm}I(c;d_f)&=\int_0^1 (p-f ) dz\\
&= \sqrt{g_1} p_1(c) \left[\mathcal{Z}\left(\frac{z-0.55}{\sqrt{g_1}}\right)
+\sqrt{g_0} p_0(c) \mathcal{Z}\left(\frac{z}{\sqrt{g_0}}\right)+\frac{d_f}{k_f}e^{-k_f z}\right]_{z=0}^{z=1}\\
&\sim \frac{\sqrt{g_0} p_0(c)}{2}+\sqrt{g_1} p_1(c)-\frac{d_f}{k_f}
\end{aligned}
\end{equation}
where $\mathcal{Z}$ denotes the cumulative distribution function for the normal distribution. We note that $I(1;d_f)>0$ while $I(0.2;d_f)<0$ for all values of the parameters listed in Table \ref{par_fitness}. We conclude that under normoxia there is a threshold $\mathcal{V}_+(\xi_+,\omega_+)$ such that the system is unstable for all choices of $V_+<\mathcal{V}_+(\xi_+,\omega_+)$:
\begin{subequations}
\begin{align}
\mathcal{V}_+=\sqrt{\frac{2I(1;d_f)\theta}{I_v(\xi_+,\omega_+)}},\\
\mbox{where} \; I_v(\xi_+,\omega_+)=\int_0^1 \left(\frac{1}{V^*_+}\tanh\left(\myfrac[3pt]{z^{\omega_+}}{\xi_+}\right)\tanh\left(\myfrac[2pt]{(1-z)}{\xi_+}\right)\right)^2 dz.
\end{align}
We note also that higher values of $\theta$ favour instability of the trivial solution as $\mathcal{V}_+$ increases with $\theta$. By inspecting Figure \ref{vel_prof}, we note qualitatively that $I_v$ is expected to decrease for increasing values of $\xi_+$ and $\omega_+$.
\end{subequations}
\end{remark}
\begin{figure}[h!]
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figures/nospace/linearstab/eigen6.pdf}
\caption{}
\label{lsa1_a}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figures/nospace/linearstab/eigen.pdf}
\caption{}
\label{lsa1_b}
\end{subfigure}
\caption{Linear stability analysis
of the trivial solution: plot of the two largest
eigenvalues $\lambda_0(\xi)$ and $\lambda_1(\xi)$ for (a) $V_+=6\times 10^{-4}$, $\omega_+=1$ and $d_f=0.001$, and (b) $V_+=8\times 10^{-4}$, $\omega_+=1$, $d_f=0.001$. In (a), $\lambda_0 > 0$ for all values of $\xi$. In (b), $\lambda_0$ changes sign as $\xi$ increases and we can
identify a critical value of $\xi$ at which
the trivial solution loses stability, favouring the emergence of a nontrivial, phenotypic cell distribution.}
\label{lsa1}
\end{figure}
To analyse other regions of parameter space, where neither of the sufficient conditions holds, we rely on numerical estimates of the eigenvalue $\lambda_0$. As shown in Figure \ref{lsa1}, and as expected based on the above findings, when the magnitude of the velocity $V_+$ is small,
$\lambda_0>0$ for all $\xi$ and the trivial solution is unstable.
By contrast, as the magnitude of the advection velocity
increases, its steepness, $\xi$, determines
the stability of the trivial solution. Using this estimate, we can identify the region of stability of the trivial steady state (see Figure \ref{xi_crit} in Section \ref{LSA}).
We remark that the boundary between the regions of stability is non-smooth. This is because $\lambda_0 = \lambda_0(\xi)$ plateaus as $\xi\ll1$ (see Figure \ref{lsa1}). By computing the second largest eigenvalue, $\lambda_1(\xi)$, we observe that the sharp change in the profile of $\lambda_0$ as $\xi$ decreases
occurs where $|\lambda_0-\lambda_1|$ attains its minimum value.
It is possible to show that the two eigenvalues do not cross, as expected from the Sturm oscillation theorem. A similar phenomenon occurs in quantum physics~\cite{cohent}, where it is known as an \textit{avoided crossing}.
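The avoided crossing is easy to reproduce with a minimal finite-difference computation. In the sketch below, the potential $q(z)$ is a hypothetical stand-in for $p-f-v_z^2/(4\theta)$ (a bump of tunable height on a linear background), not one of the fitted forms used in the paper; the point is only that the two leading eigenvalues of a Sturm--Liouville discretisation approach each other but never touch:

```python
import numpy as np

def two_largest(q_vals, theta, h):
    """Two largest eigenvalues of the symmetric finite-difference matrix for
    theta*y'' + q(z)*y on [0, 1] with Neumann boundary conditions."""
    n = q_vals.size
    main = q_vals - 2.0 * theta / h**2
    main[0] += theta / h**2    # one-sided Neumann closure at z = 0
    main[-1] += theta / h**2   # ... and at z = 1
    off = (theta / h**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    vals = np.linalg.eigvalsh(A)       # returned in ascending order
    return vals[-1], vals[-2]

z = np.linspace(0.0, 1.0, 300)
h = z[1] - z[0]
# Hypothetical potential: bump of height a at z = 0.5 on a linear background.
pairs = [two_largest(a * np.exp(-((z - 0.5) / 0.05) ** 2) - z, 1e-3, h)
         for a in np.linspace(0.0, 4.0, 21)]
gaps = [l0 - l1 for (l0, l1) in pairs]
print(min(gaps))   # positive: the eigenvalues never cross
```

The gap $\lambda_0-\lambda_1$ shrinks near the crossing of the two unperturbed branches but remains strictly positive, as the off-diagonal entries of the discretised operator never vanish.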
Finally, we consider the stability of the trivial solution in a hypoxic environment. We confirm the numerical simulations from \S\ref{hyp cond} by showing that, under hypoxia, the trivial solution is always unstable.
\begin{lemma}
Under hypoxia (i.e. when $c=0.2$), and for the parameter values listed in Table \ref{param_set}, the trivial steady state is always unstable. \label{lemma3}
\end{lemma}
\begin{proof}
Let us consider as a trial function:
\begin{equation}
y=\frac{1}{(\pi \kappa^2)^{1/4}}\exp\left(-\frac{z^2}{2\kappa^{2}}\right) +Az^2,
\end{equation}
where a small parabolic correction is added to the standard Gaussian, the constant $A$ being chosen to ensure
that the boundary condition at $z=1$ is satisfied:
\begin{equation}
A=\frac{e^{-\frac{1}{2\kappa^2}}}{2\pi^{1/4}\kappa^{5/2}};
\end{equation}
the derivative $y'$ at $z=0$ vanishes by construction. We now show that the Rayleigh quotient is positive for this choice of the test function $y$, which implies that the trivial steady state is unstable.
Given that the denominator of $R(y)$ is always positive, its sign is determined by the numerator, $R_n(y)$, that is:
\begin{equation}
R_n(y)=\int_0^1 \left(p-f-\frac{v_z^2}{4\theta}\right) y^2 dz -\cancelto{0}{\left.\frac{v_z y^2}{2}\right|_0^1} +\int_0^1 \left[v_z y\frac{d y}{d z} - \theta \left(\frac{d y}{d z}\right)^2\right] dz.
\end{equation}
Computing the derivative of $y$ and denoting the Gaussian by $y_0$, we obtain:
\begin{subequations}
\begin{align}
y^2= y_0^2 +2Az^2y_0+A^2z^4,\\
y'^2= \frac{z^2}{\kappa^4} y_0^2 -\frac{4Az^2}{\kappa^2}y_0+4z^2A^2,\\
yy'= -\frac{z}{\kappa^2}y_0^2+Az\left(2-\frac{z^2}{\kappa^2}\right)y_0+2A^2z^3.
\end{align}
Recalling that the constant $A$ is exponentially small in $\kappa$, while $y_0$ grows only as a power law of $\kappa^{-1}$, the terms multiplied by $A$ are negligible and the sign of $R_{n}(y)$ is determined by the leading term:
\begin{align}
R_n(y)= I_0 + \mathcal{O}(A),\label{eq:Rnexp}\\
I_0=\int_0^1 \left(p-f\right)y_0^2 dz-\theta\int_0^1 m^2(z) y_0^2 dz,\label{eq:I0first}\\
\quad \mbox{where} \quad m(z)=\frac{v_z}{2\theta}+ \frac{ z}{\kappa^2}.
\end{align}
\end{subequations}
Proving instability therefore reduces to showing that $I_0$ is positive for the range of parameters and functional forms considered under hypoxic conditions. We do so by finding a lower bound on the value of $I_0$, exploiting the rapid decay of the function $y_0$, whose mass is concentrated in a neighbourhood of $z=0$. Given that $p(0)-f(0)>0$ and $m(0)=0$, provided that $m$ does not grow too quickly near $z=0$, we expect the dominant contribution to the integral $I_0$ to be positive. We now make this intuitive argument rigorous.
We first focus on $I_0^{(1)}=\int_0^1(p-f)y_0^2 dz$, the contribution in~(\ref{eq:I0first}) due to cell proliferation.
We can compute this integral exactly, as the integrand comprises products of exponentials that can be re-written as integrals of Gaussian distributions:
\begin{subequations}
\begin{align}
I_0^{(1)}= &\left[\frac{p_1(c)\sqrt{2}\zeta_1 }{\kappa}e^{-\frac{0.55^2}{g_1}+\frac{c_1^2}{2\zeta_1^2}}\mathcal{Z}\left(\frac{z-c_1}{\zeta_1}\right)\right.\\
&\left.+\frac{p_0(c)\sqrt{2}\zeta_0}{\kappa}\mathcal{Z}\left(\frac{z}{\zeta_0}\right)-d_f e^{-k_f+\frac{c_f^2}{\kappa^2}}\mathcal{Z}\left(\frac{\sqrt{2}(z-c_f)}{\kappa}\right)\right]_{0}^1,
\end{align}\label{eq:I_0(1)}%
\end{subequations}
where $2\zeta_{0,1}^2=(\kappa^2 g_{0,1})/ (g_{0,1}+\kappa^2)$, $c_1=0.55(2\zeta^2_1)/g_1$ and $c_f=k_f\kappa^2/2$, while $\mathcal{Z}$ is again the normal cumulative distribution function as in Lemma \ref{lemma2}.
We now focus on the term in~(\ref{eq:I0first}) which depends on $m$. In this case the integral cannot be computed exactly, and we therefore construct an upper bound for it instead (equivalently, a lower bound on its contribution to $I_0$). This is achieved by decomposing the full domain $[0,1]$ into three sub-domains, which allows us to balance the rapid growth of the function $m$ with the quicker decay of $y_0$ away from $z=0$:
\begin{align}
\int_0^1 m^2y^2_0 dz =\int_0^{z_0\kappa} m^2y^2_0dz + \int_{z_0\kappa}^{z_1\kappa}m^2 y_0^2dz+\int_{z_1\kappa}^1 m^2y_0^2 dz.\label{eq:mint1}
\end{align}
where $z_{0,1}$ are positive constants such that $0 < z_0 < z_1 < \kappa^{-1}$. Note that we are free to choose their values, with the aim of making the resulting bound on the quantity in~(\ref{eq:mint1}) as small as possible.
It is straightforward to see that $m$ attains its maximum value at $z=1$ as both $v_z$ and $z/\kappa^2$ attain maxima there. We now choose the value of $\kappa$ so that the derivative of $m$ at $z=0$ vanishes:
\begin{subequations}
\begin{align}
m'(z)= \left(\frac{v'_z(z)}{2\theta}+\frac{1}{\kappa^2}\right) \quad \Rightarrow \quad \kappa = \sqrt{\frac{2\theta}{|v'_z(0)|}}.\label{eq:kappa}
\end{align}
However, by definition (see Equation~(\ref{eq_ad_hyp})), under hypoxia, the advection velocity $v_z(z)=v_z^-(z)$ is such that $|v'_z(z)|\leq |v'_z(0)|$ for all $z\in(0,1]$, with equality only if $\omega_-=2$. Consequently, we have that $m(z)$ is a non-decreasing function of $z$, i.e. $m'(z)\geq 0$.
Given the above, we can now construct an upper bound for the integral in~(\ref{eq:mint1}):
\begin{align}
\begin{aligned}
\hspace{-8mm} \int_0^1 m^2 y^2_0 dz &\leq m^2(z_0 \kappa)\int_0^{z_0 \kappa} y^2_0 \: dz + m^2(z_1 \kappa)\int_{z_0 \kappa}^{z_1 \kappa} y^2_0 \: dz + m^2(1)\int_{z_1 \kappa}^1 y_0^2 \: dz\\
&=m^2(z_0 \kappa) \left[\mathcal{Z}\right]_0^{\sqrt{2}z_0}+m^2(z_1 \kappa) \left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}+ \frac{1}{\kappa^4} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\frac{\sqrt{2}}{\kappa}}\\
&\leq \frac{m^2(z_0 \kappa)}{2}+m^2(z_1 \kappa) \left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}+ \frac{1}{\kappa^4} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty}.
\end{aligned}
\end{align}
Let us reiterate that we want $z_0$ and $z_1$ to be such that $m^2(z_0\kappa)$ and $m^2(z_1\kappa)$ are not too large while $\left[\mathcal{Z}\right]_0^{\sqrt{2}z_0}$ and $\left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty}$ are sufficiently small. In this way, the growth of $m$ is balanced by the exponential decay of the Gaussian function $y^2_0$. In particular, we choose $z_0=\sqrt{2}$ and $z_1=5/\sqrt{2}$.
\end{subequations}
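The monotonicity of $m$ under the choice~(\ref{eq:kappa}) is simple to verify numerically. The sketch below uses a hypothetical hypoxic velocity profile, $v(z)=-V\tanh(z/\xi)\tanh((1-z)/\xi)$, and illustrative parameter values, not the fitted forms or the values of Table~\ref{param_set}:

```python
import numpy as np

# Hypothetical hypoxic profile and illustrative parameters.
V, xi, theta = 4e-4, 0.1, 1e-3
z = np.linspace(0.0, 1.0, 10001)
v = -V * np.tanh(z / xi) * np.tanh((1.0 - z) / xi)

dv0 = (v[1] - v[0]) / (z[1] - z[0])          # v'(0) < 0 for this profile
inv_kappa2 = abs(dv0) / (2.0 * theta)        # 1/kappa^2 for the choice of kappa
m = v / (2.0 * theta) + z * inv_kappa2

print(np.all(np.diff(m) >= -1e-12))          # True: m is non-decreasing
```

With $\kappa$ chosen this way, $m'(0)=0$ and $m'(z)\geq 0$ elsewhere, exactly as used in the bound above.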
Combining the above with the estimate from Equation~(\ref{eq:I_0(1)}), we obtain:
\begin{eqnarray}
I_0 > I_0^{(1)}- \frac{\theta m^2(z_0\kappa)}{2} -\theta m^2(z_1\kappa)\left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{ \sqrt{2}z_1}-\frac{v'_z(0)^2}{4\theta} \left[\mathcal{Z}\right]_{\sqrt{2} z_1}^{\infty}=I^{low}_0.\label{eq:I0low}
\end{eqnarray}
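The choice $z_0=\sqrt{2}$ and $z_1=5/\sqrt{2}$ can be checked directly, since the bracketed terms are increments of the normal CDF, $\mathcal{Z}(x)=\tfrac{1}{2}\left(1+\operatorname{erf}(x/\sqrt{2})\right)$. A short standard-library sketch:

```python
from math import erf, sqrt

def Z(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z0, z1 = sqrt(2.0), 5.0 / sqrt(2.0)    # so sqrt(2)*z0 = 2 and sqrt(2)*z1 = 5
mid_mass = Z(sqrt(2.0) * z1) - Z(sqrt(2.0) * z0)   # [Z] from 2 to 5
tail_mass = 1.0 - Z(sqrt(2.0) * z1)                # [Z] from 5 to infinity
print(mid_mass, tail_mass)
```

With these values, $\left[\mathcal{Z}\right]_{\sqrt{2}z_0}^{\sqrt{2}z_1}\approx 0.023$ while the final tail term is below $10^{-6}$, so the last two contributions to the bound are strongly suppressed.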
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\textwidth]{figures/nospace/linearstab/I0max.pdf}
\caption{Plot of the lower bound $I_0^{low}$ and the standard deviation $\kappa$, as defined by~(\ref{eq:I0low}) and~(\ref{eq:kappa}) respectively, for the parameter regimes considered in the paper (note that $d_f$ is fixed at its maximum value $0.015$, as this gives the smallest bound $I_0^{low}$).}
\label{fig:my_label}
\end{figure}
We can compute the values of $\kappa$ and $I_0^{low}$ associated with the values of the magnitude $V_-$ and steepness $\xi_-$ considered in the paper (without loss of generality, we only consider $d_f=0.015$, as $I^{low}_0$ decreases with $d_f$). As shown in Figure \ref{fig:my_label}, for all such values we have that $I_0^{low}>0$. Since $I_0>I_0^{low}$, it follows that $I_0$
is also positive. We estimate $A \leq \mathcal{O}(10^{-13})$, which justifies dropping the $\mathcal{O}(A)$ term in~(\ref{eq:Rnexp}). Consequently, we conclude that $R_n(y)$ is positive, and so is the quotient $R$. Hence, in hypoxia, the trivial steady state is always unstable.
\end{proof}
\section{Conclusion and Future Challenges}
\label{conclusion}
\noindent We have developed a structured model to investigate how clonogenic heterogeneity affects the growth and treatment response of a population of tumour cells. Cell heterogeneity is incorporated via an
independent and continuous structural variable which represents \emph{stemness}. As proposed by \cite{Scott169615,Chisholm2016}, we view stemness as a plastic trait, with cells becoming more, or less, stem-like depending on their environmental conditions.
Our mathematical model accounts for cell proliferation and apoptosis,
inter-cell competition, and phenotypic movement along the stemness axis, via diffusion and advection.
Studies of the population dynamics in the absence of treatment revealed that, under normoxia, a variety of qualitative behaviours may arise
depending on the functional forms used to represent the structural flux and fitness landscape. When advection dominates movement along the stemness axis, its magnitude, relative to the rates of proliferation and cell death, determines whether the population is driven to extinction. Multimodal distributions, which allow for the formation and maintenance of CSCs pools, are observed for asymmetric velocity profiles. Under hypoxia, the population distribution is unimodal and skewed toward stem-like phenotypes, with little intra-population variability.
The resulting cell distribution is highly
resistant to radiotherapy, and the tumour will typically
regrow following treatment. By contrast, under
normoxia (or re-oxygenated hypoxia), and
for suitable parameter values, the tumour may become extinct following radiotherapy.
There are many ways in which the work presented in this paper could be extended.
A first, natural extension would be to incorporate structural and spatial heterogeneity (i.e., both phenotypic and spatial dimensions) \cite{hodgkinson}.
This would enable us to consider \textit{in vivo} situations, where spatial gradients in oxygen levels emerge naturally, due to consumption by the cells as oxygen diffuses away from blood vessels. As outlined in~\ref{AppendixA}, in such a model oxygen consumption rates may vary with cell phenotype, and spatial fluxes may account for random movement of the cells.
Preliminary results for such a model are presented in Figure~\ref{space1}.
We consider a 1D Cartesian geometry and focus on a tumour region of
width $L$, in which a blood vessel located at $x=0$ provides a continuous
supply of oxygen to the tissue. If the tumour initially comprises a spatially homogeneous distribution of terminally differentiated cells (see Equation~(\ref{initial_cond})), then the oxygen rapidly relaxes to a steady state and a hypoxic region forms at a distance from $x=0$. In contrast to the well-mixed model, cells are now able to move, by random motion, between normoxic and hypoxic regions.
While terminally differentiated cancer cells are dominant in the well-oxygenated region, a small fraction persists in the hypoxic region (in particular, near the boundary of the hypoxic region, orange line in the plots in Figure~\ref{space1}).
This is due to the influx of
cells from the well-oxygenated portion of the domain. Similarly, CSCs are dominant in the hypoxic region, but a small fraction of hypoxic CSCs migrate towards $x=0$, where re-oxygenation induces their maturation, creating a differentiated and highly proliferative cell phenotype, alongside terminally differentiated cancer cells.
These results illustrate how the interplay between space, resources
and phenotypic adaptation may give rise to complex behaviours; their investigation is the focus of ongoing work.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figures/space/simnotreatment}
\caption{Series of plots showing how, in the absence of treatment, the cancer cell population $n(x,z,t)$ and the oxygen concentration $c(x,t)$ change over time $t$ when we account for spatial and phenotypic variation (see Equations~(\ref{spatial_mod})).
We indicate the threshold $c=c_H$ which defines the boundary of the hypoxic region with a horizontal red line in the upper plots and with a vertical orange line in the lower plots. We fix $V_\pm=4\times10^{-4}$, $\xi_\pm=0.1$, $\omega_+=1$, $\omega_-=2$ and $d_f=0.001$, while the remaining model
parameters are fixed at the values stated in Table \ref{param_set}.}
\label{space1}
\end{figure}
A significant challenge of the modelling approach presented in this paper is the determination of model parameters and functional forms.
In the longer term, techniques such as \textit{single-cell RNA sequencing}~\cite{tirosh,venteicher} will make it possible to
quantify specific aspects of
our model, such as the dependence of the proliferation and apoptosis rates on cell stemness, and the dependence of the (phenotypic) advection velocity
associated with cell maturation and de-differentiation on
the tumour micro-environment.
In spite of their current limitations, we believe that studies of
such models can increase understanding of the ways in which specific
physical processes may influence
the phenotypic distribution of cell populations in different environments.
At the same time, we acknowledge that
it remains a matter of debate as to whether asymmetric cell distributions are driven by micro-environmental signals (as in the model presented here), asymmetric division, or a combination of the two~\cite{Roeder2006}. By using a non-local proliferation kernel to account for asymmetric division, we could investigate these alternative hypotheses and identify conditions under which they lead to different outcomes.
An important feature of our model is the way in which the response to radiotherapy (RT) varies with cell stemness (i.e., $z$).
Our analysis shows how the functional forms used to describe the advection velocity and fitness functions can affect the system dynamics post-RT.
While unimodal phenotypic distributions lead to monotonic growth curves post-treatment, more complex behaviour is observed when heterogeneous populations, with a pool of CSCs, are considered.
For example, under normoxia, the presence of radio-resistant CSCs can drive recurrence, despite an initial phase of tumour regression. As the CSCs mature into highly-proliferating cancer cells, rapid re-growth is accompanied by re-sensitisation of the population to RT.
Under hypoxia, CSCs maintain their stemness, leading to a slowly growing,
radio-resistant cell population.
More complex outcomes arise when we consider the effect that treatment might have on the environment.
As noted in~\ref{radio+change}, changes in the vasculature induced by radiotherapy can result in either post-treatment re-oxygenation or hypoxia. While re-oxygenation increases the radio-sensitivity of the population, hypoxia increases their radio-resistance. In practice, such environmental changes are likely to be transient. Even in an untreated tumour,
fluctuations in oxygen levels can occur. Consider, for example,
cells in a neighbourhood of immature blood vessels. As the cells proliferate, they exert mechanical pressure on the vessels, causing them to collapse and local oxygen levels to fall. Under hypoxia, the tumour cells stimulate the growth of new blood vessels from pre-existing ones, via angiogenesis. In this way, tumour regions may cycle between periods of hypoxia and normoxia.
It would be of interest to extend the model to account explicitly for the tumour vasculature and its interaction with tumour cells. This could be achieved at a ``high level'' of description, via
simple ODE models such as \cite{Hahnfeldt1999, Stamper2010}, or via more complex, multi-phase \cite{Hubbard2013} or multi-scale approaches \cite{Byrne2010,Macklin2009,Vavourakis2017,Walpole2013}.
This would enable us to better capture the different time-scales on which the oxygen dynamics and cell adaptation velocity change. As shown in
Figure \ref{space2}, variations in oxygen levels emerge naturally within spatially-resolved models. Here, cell killing leads to tissue re-oxygenation which, in turn, disrupts the CSC niche. Depending on the time scale over which the cells adapt to their new environmental conditions, this may increase the overall radio-sensitivity. Understanding and accounting for such phenomena is particularly relevant for predicting responses to RT and comparing alternative treatment protocols.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figures/space/simtreatment}
\caption{Evolution of the population $n(x,z,t)$ in the spatial and phenotypic dimensions following a cycle of fractionated radiotherapy
(5 $\times$ $2$ Gy). The parameter values are the same as those used in Figure \ref{space1} and the initial cell distribution is the same as the final distribution in Figure \ref{space1}. For the LQ-model we used parameter set $R3$ in Table \ref{rad_tab2}.}
\label{space2}
\end{figure}
In the extinction scenario, or post administration of high RT doses, the number of cells in the population can become low and our continuum model may cease to be valid. In such conditions, stochastic effects which are neglected herein may become important. As in \cite{Ardaseva2020_2,Franz2013,Spill2015}, stochastic and mean field approaches may be combined with hybrid discrete-continuum techniques to account for small population effects and to study their impact on the probability of tumour extinction.
In this paper, we considered only single dose and fractionated treatment protocols. In future work, we could investigate alternative strategies, such as \textit{adaptive therapeutic} protocols \cite{Gatenby2009} and/or multi-drug treatments, which have been proposed as an effective way to overcome radio-resistance. From this point of view, considerable efforts have been invested in designing treatments that exploit features of CSCs, such as their
metabolic plasticity~\cite{frontiers}.
Motivated by recent metabolically-structured models~\cite{Ardaseva2019,hodgkinson, Villa2019}, a natural extension of our model would be to include a
``metabolic dimension'' in order to investigate the interplay between stemness, metabolic switching and
resistance. A biologically informed model that incorporates
metabolic and phenotypic effects, together with the tumour micro-environment
and vascular remodelling lies at the heart of a
mathematical program that would enable systematic comparison with
{\it in vivo} observations. The framework and results outlined in this work represent a first step towards achieving this long-term goal.
\section{Introduction}
\noindent Understanding of the mechanisms by which cancer is initiated and progresses continues to increase; yet cancer remains one of the leading causes of premature mortality worldwide and a major barrier to increasing average life-expectancy. For example, in 2018, 9.6 million people are estimated to have died of cancer \cite{Bray2018}. Furthermore, treatment outcomes can differ markedly between patients with the same cancer type, with the emergence of resistance being one of the major causes of treatment failure.
Over the past twenty years, there has been a major shift in our perception of solid tumours; they are now regarded as heterogeneous tissues in which malignant cells interact with
normal cells and shape their environment in ways that favour malignant growth
\cite{Hanahan2000}. Cancer stem cells (CSCs) were introduced to explain intra-tumour heterogeneity via the \emph{CSC hypothesis} \cite{Reya2001}.
This hypothesis proposes that, while CSCs may comprise only a small fraction of the total cell population, their high clonogenic potential and their ability to produce more mature, or specialised, cancer cells enables them to create an entire tumour \cite{Rycaj2014}. As CSCs are found to be resistant to standard treatments,
they are recognised as a major cause of disease recurrence and treatment failure \cite{Baumann2008,Rycaj2014,cancersreview}. These observations have stimulated the development of novel therapeutic strategies which aim to eradicate CSCs \cite{ende2,Kong2020,Shibata2019,frontiers}.
In practice, the plasticity of CSCs represents a major obstacle to such treatments. Additionally, CSCs can adapt to their local micro-environment, and remodel it to create and maintain a niche which supports their survival~\cite{architects}.
Increasingly, researchers are turning to mathematical models in order to understand how CSCs affect the growth and composition of tumours, particularly their heterogeneity and response to treatment. These models often decompose
the tumour into a series of compartments, each representing a particular
cell subtype. For example, in~\cite{ende2}, Enderling distinguishes cancer stem cells (CSCs) and cancer
cells, whereas Saga and coworkers distinguish radio-resistant and radio-sensitive cells~\cite{Saga}, and Scott and colleagues distinguish
tumour-initiating cells (or CSCs), transit-amplifying cells and terminally differentiated cells (TDCs)~\cite{Scott169615}.
Thus, most compartmental models are based on the CSC hypothesis which assumes that it is possible to distinguish between cancer stem cells and the tumour bulk.
However, this paradigm has been challenged by recent experimental
studies~\cite{dirkse, Soleymani2018} that highlight the phenotypic heterogeneity and plasticity of cancer cells, whose clonogenic (or \emph{stemness}) potential can be altered by the surrounding micro-environment (extrinsic forces). These findings have
led to a new hypothesis for intra-tumoural heterogeneity, based on \emph{adaptive CSC plasticity} \cite{Fanelli2020}. Under this hypothesis, cancer cells move between stem-like and terminally differentiated states in response to extrinsic (environmental) and/or intrinsic (random epigenetic mutation) forces. Remarkably, the development of state-of-the-art
experimental tools, such as single-cell RNA-seq, means that it is now possible to track the evolution of stemness traits~\cite{tirosh,venteicher},
rendering this an ideal time to develop mathematical models that can explore these concepts.
Compartmental models can be used to study adaptive CSC plasticity, by allowing transitions between different compartments. However, since they assume that the tumour comprises distinct cell populations, with distinct properties, they are unable to account for continuous variation in cell properties. An increasingly popular mathematical approach for describing population heterogeneity and plasticity
characterises tumour cells by their position on a continuous phenotypic axis. Position on the phenotypic axis determines cell properties such as resistance to treatment~\cite{Chisholm2016,Clairambault2020,hodgkinson,Lorenzi2016,Lorz2014} and/or metabolic state \cite{Ardaseva2019,Villa2019}. This approach is motivated by concepts from evolutionary ecology, such as risk-spreading
through spontaneous (epigenetic or genetic) variations and evolutionary pressure \cite{Thomas2013}. The resulting models are typically formulated as systems of reaction-diffusion equations~\cite{Ardaseva2019,Lorenzi2016,Villa2019}, with an advective transport term sometimes included to account for biased mutation dynamics \cite{hodgkinson} or adaptive phenotypic switches \cite{Chisholm2016,Lorenzi2015,Stace2020}.
In this paper, we formulate a mathematical model that accounts for the evolution of a cancer cell population along such a stemness axis in response to extrinsic and intrinsic stimuli. Initially, we focus on the plastic response of cells to changes in nutrient levels, in particular oxygen. This is motivated by recent experimental studies~\cite{Garnier2019,Liu2014,Pistollato2010,Pistollato2009} suggesting that hypoxia (i.e. low oxygen levels) is a key driver of cell de-differentiation.
From this point of view, spatial heterogeneity
may introduce significant additional complications: as oxygen diffuses into a tumour and is consumed by cells, spatial gradients in the oxygen levels are established.
In this way, local
micro-environments characterised by normoxia, hypoxia and necrosis form as the distance to the nearest nutrient supply (i.e., blood vessels) increases~\cite{hodgkinson,Lorz2014,Villa2019}. For simplicity, we postpone consideration of such spatial complexity to future work and focus, instead, on a \emph{well-mixed} setting where oxygen levels are homogeneous and prescribed. This idealised scenario allows us to investigate how cell properties, such as proliferation, apoptosis
and adaptive response to environmental signals, contribute to the emergence of heterogeneous stemness levels in the population and the long term tumour composition. In this regard, we are interested in identifying conditions under which CSCs are favoured. We then extend the model to account for treatment via a phenotypically-modulated linear-quadratic model
of radiotherapy (see, e.g.,~\cite{lewin2,lewin,Saga} for recent
discussions) which accounts for differential radio-sensitivity of CSCs~\cite{frontiers}.
This allows us to investigate how different radiotherapy protocols perturb the phenotypic distribution and subsequent regrowth of the tumour.
In practice, stemness is just one of multiple traits that regulate cell behaviour and heterogeneity. We, therefore, anticipate that future models will combine multiple phenotypic axes or \emph{synthetic dimensions}, such as stemness and metabolic state \cite{Ardaseva2019,hodgkinson}. Given the complexity of such multi-dimensional models, it is important first to understand these aspects separately. Noting that considerable mathematical effort has been devoted to investigating cancer
metabolism~\cite{cancermetab}, we choose here to focus on population heterogeneity with respect to a continuously varying stemness axis. We hope that in the long term this work will help motivate a systematic experimental characterization of cell plasticity and phenotype.
The remainder of the article is organised as follows.
In Section~\ref{model presentation},
we present a well-mixed, spatially homogeneous, model of solid tumour growth in response to a prescribed
oxygen concentration. We first investigate the population dynamics in the absence of treatment, considering both normoxic and hypoxic conditions. Numerical results are presented in Section~\ref{notreatment}. As a partial validation of the numerical results, we use spectral stability analysis to characterise the long-time behaviour of the solutions. Section~\ref{radio_result} focuses on tumour cell responses to different radiotherapy protocols. As in Section~\ref{notreatment}, we simulate responses under normoxia and hypoxia, but we also consider situations in which the environment alternates between periods of hypoxia and normoxia in order to explore the different ways that radiotherapy can alter tissue oxygenation. Finally, in Section~\ref{conclusion}, we summarize our key findings and propose possible
directions for future work. We also present preliminary results showing how accounting for spatial and phenotypic variation may affect a tumour's growth and response to radiotherapy.
\subsection{Linear Stability Analysis}
\label{LSA}
\noindent We now validate some of the above numerical results by performing a linear stability analysis which enables us to characterise the equilibrium states. We denote by $\bar{n}=\bar{n}(z)$ a steady state for the (untreated) system~(\ref{mixedmodel})-(\ref{eq_ad}), with a total cell density $\bar{\phi}=\int_0^1 \bar{n}(z) dz$ and let $\delta n$ represent a small perturbation to this solution. Then we can approximate the solution $n$ in a neighbourhood of $\bar{n}$ as:
\begin{equation}
n(z,t)= \bar{n} + \delta n(z,t),\quad \|\delta n\| \ll 1
\quad \,\forall t>0.
\end{equation}
Substituting this ansatz into~(\ref{mixedmodel}) and retaining linear terms, we obtain the following equation for $\delta n$:
\begin{subequations}
\begin{align}
\begin{aligned}
\frac{\partial \delta n}{\partial t}=
\mathcal{M}\delta n,
\end{aligned}
\label{defM}
\\[2pt]
\frac{\partial \delta n}{\partial z} = 0, \qquad z = 0, 1,\\
\delta n(z,0) = \delta n_0(z) \not\equiv 0,\label{neu}
\end{align}
\end{subequations}
where $\mathcal{M}$ is the following integro-differential operator
\begin{equation}
\mathcal{M}\delta n \equiv \frac{\partial }{\partial z} \left(\theta \frac{\partial \delta n}{\partial z}- v_z\delta n\right)+ \left[p\left(1-\bar{\phi}\right) - f \right] \delta n - p\bar{n}\int_0^1 \delta n dz. \label{Mdef}
\end{equation}
The solution $\bar{n}$ is \textit{spectrally}
stable if the spectrum of the operator, $\sigma(\mathcal{M})$, does
not contain eigenvalues with positive real part, i.e.,
\begin{equation}
\sigma(\mathcal{M}) \bigcap \left\{\lambda \in \mathbb{C}: \Re(\lambda)>0\right\}=\emptyset.
\end{equation}
Moreover, the dynamics of the system will be dominated by the fastest growing mode (i.e., the eigenfunction corresponding to the eigenvalue with the largest real part, $\lambda_0$).
In \ref{AppendixB} we transform the above eigenvalue problem so that it does not include any first-order derivatives. For a non-zero steady state, we retain a non-local term in the eigenvalue problem and this can give rise to a spectrum with a pair of complex eigenvalues. Recalling case A.1 from Section \ref{oxygen_env} (see Figure \ref{norm_sym}), the numerically estimated value of $\lambda_0$ is indeed complex ($\lambda_0=-1.535 \times 10^{-4} \pm i \, 2.24\times 10^{-3}$, where $i^2 = -1$). This result, in turn, explains why damped fluctuations are observed in the numerical simulations.
By contrast, when considering the trivial steady state, $\bar{n}\equiv 0$, which is always a fixed point for the system, the non-local term vanishes and we obtain the standard form analysed by
Sturm-Liouville theory. Using well-known results, we can identify sufficient conditions for the stability/instability of the trivial steady state (see Lemmas \ref{lemma1}-\ref{lemma3} in \ref{AppendixB}). Under hypoxia, where $v_z<0$,
we find that the trivial steady state is unstable (for the parameter sets in Table \ref{param_set}) and the system evolves to a non-zero distribution, which is consistent with the numerical results from Section \ref{hyp cond}. We note that the results relate only to the behaviour of the fitness function and advection velocity near the boundary $z=0$, suggesting that the most relevant parameters are $p_0^{max}$, $V_-$, $\theta$ and $\xi_-$.
By contrast, under normoxia, and for the range of parameters considered here, the system undergoes a bifurcation. For sufficiently small $V_+$, the trivial steady state is unstable; for sufficiently large $V_+$ and for large values of the death rate, $d_f$, the trivial steady state is stable (see, for example, case C2 in Section~\ref{oxygen_env}). To investigate other parameter regimes that we cannot tackle analytically, we rely on numerical estimation of the largest eigenvalue, $\lambda_0$. As shown in Figure \ref{xi_crit}, it is possible to identify the boundary of the region of stability in $(\xi_+,V_+)$ space. This
diagram does not change significantly as the death rate varies in the range from $d_f=0.001$ to $d_f=0.015$ (results not shown).
However, the results are highly sensitive to the value of $\omega_+$. Comparing Figures \ref{xi_crit1} and \ref{xi_crit2}, we see that setting $\omega_+=2$ favours the formation of a non-trivial equilibrium distribution, with the curve shifting to the far right of the parameter space (i.e., small values of $\xi_+$ and large
values of $V_+$). In the latter case, this implies that even higher velocities $V_+$ are needed to stabilise the tumour-elimination solution. This is consistent with the numerical results in Section~\ref{oxygen_env}, where setting $\omega_+=2$ (see scenario B in Section~\ref{oxygen_env}) favoured the accumulation of CSCs which acted as a reservoir for tumour cells.
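By way of illustration, the leading eigenvalue about the trivial steady state (for which the non-local term in $\mathcal{M}$ vanishes) can be estimated with a simple matrix discretisation. The Python sketch below is illustrative rather than the authors' code: the value of $\theta$ and the advection profile are placeholders (the form of $v_z$ in Equation~(\ref{eq_ad}) is not reproduced here), while $p$ and $f$ follow Equations~(\ref{eqnetprol})-(\ref{apoptosis}) with the values in Table~\ref{par_fitness}.

```python
import numpy as np

# Illustrative values: theta and the advection profile are assumptions
# (Eq. (eq_ad) is not reproduced in this sketch); p and f follow
# Eqs. (eqnetprol)-(apoptosis) with the values in Table (par_fitness).
theta = 1e-5
V_plus, xi_plus = 4e-4, 0.05

def p(z, c=1.0):
    p0 = 0.005 * c**4 / (0.05**4 + c**4)
    p1 = 0.02 * c**4 / (0.3**4 + c**4)
    return p0 * np.exp(-z**2 / 0.01) + p1 * np.exp(-(z - 0.55)**2 / 0.04)

def f(z, d_f=0.001, k_f=10.0):
    return d_f * np.exp(-k_f * (1.0 - z))

def v(z):
    # hypothetical smooth profile, positive in the interior (normoxia)
    return V_plus * np.tanh(z / xi_plus) * np.tanh((1.0 - z) / xi_plus)

def leading_eigenvalue(N=200):
    """Discretise M about n = 0 (the non-local term vanishes) with a
    conservative, zero-flux finite-volume scheme and return the
    eigenvalue of largest real part."""
    dz = 1.0 / N
    z = (np.arange(N) + 0.5) * dz            # cell centres
    A = np.diag(p(z) - f(z))
    for i in range(N - 1):                   # interior face i + 1/2
        vf = v((i + 1) * dz)
        # diffusive part of the face flux, theta * du/dz
        A[i, i] -= theta / dz**2
        A[i, i + 1] += theta / dz**2
        A[i + 1, i + 1] -= theta / dz**2
        A[i + 1, i] += theta / dz**2
        # advective part, -v * u, with a centred face average
        A[i, i] -= 0.5 * vf / dz
        A[i, i + 1] -= 0.5 * vf / dz
        A[i + 1, i] += 0.5 * vf / dz
        A[i + 1, i + 1] += 0.5 * vf / dz
    lam = np.linalg.eigvals(A)
    return lam[np.argmax(lam.real)]

lam0 = leading_eigenvalue()
```

For these placeholder parameters the proliferative niche at $z=0.55$ dominates the weak transport and diffusion losses, so $\lambda_0$ is real and positive and the trivial state is unstable, in line with the bifurcation picture described above.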
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/nospace/linearstab/xi_crit_om1}
\caption{$\omega_+=1$}
\label{xi_crit1}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/nospace/linearstab/xi_crit_om2}
\caption{$\omega_+=2$}
\label{xi_crit2}
\end{subfigure}
\caption{Series of phase diagrams partitioning the $(V_+,\xi_+)$ parameter space into regions where the trivial steady state is linearly stable (green regions) and unstable (white regions).
The diagrams are obtained for $d_f=0.001$.
We note that changing $\omega_+$ has a significant impact on the size of the region of $(V_+,\xi_+)$ parameter space in which the non-trivial steady state is stable (compare (a) and (b)).}
\label{xi_crit}
\end{figure}
\section{Model Formulation}
\label{model presentation}
\noindent We consider the temporal evolution of a heterogeneous population of tumour cells, $N(z,t)$, where $t \geq 0$ denotes time and $z$ ($0 \leq z \leq 1$) represents their stemness or \emph{clonogenic capacity}.
As shown in Figure \ref{schematic}, $z=0$ corresponds to cancer stem cells (CSCs) which have the maximum level of stemness, and $z=1$ corresponds to terminally differentiated cells (TDCs), which have lost their proliferative capacity
and which can either enter replicative senescence or undergo cell death \cite{Lee2014}.
We assume that the population dynamics may be described by a reaction-advection-diffusion equation~(see Eq.~\ref{full1} below) which accounts for two essential physical/ecological processes.
First, cells \emph{move} along the stemness axis (i.e., in the $z$-direction) in response to extrinsic (micro-environment) and intrinsic (random epimutation) \emph{forces} \cite{Scott169615}, which give rise to advective and diffusive fluxes respectively.
Second, the effect of natural selection on the population is represented by the fitness function $F$, which models the net growth rate of the cells.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{figures/model/modeldiagram_well_mix}
\caption{Schematic representation of the well-mixed, phenotypic model. We associate with each cell a stemness level $z$, which varies continuously between the cancer stem cell state (CSCs, with $z\sim0$), the differentiated cell state (with $z\sim 0.5$) and the terminally differentiated cell state (TDCs, with $z\sim 1$).}
\label{schematic}
\end{figure}
While multiple nutrients and growth factors regulate the growth rate (or fitness function $F$) and phenotypic adaptation (i.e., the advective velocity $v_z$) of the tumour cells,
here, for simplicity, we focus on a single nutrient, specifically oxygen.
The critical role of low oxygen levels, or \textit{hypoxia}, in cancer has long been recognised due to its association with
cell quiescence and poor therapeutic outcomes~\cite{hodgkinson,lewin,Saga}.
Recent experimental results \cite{Soleymani2018} have shown that hypoxia also plays a role in de-differentiation by regulating pathways associated with a stem-like phenotype.
We account for these phenomena in our model by assuming
that all cells are exposed to the same level of oxygen, $c=c(t)$,
which mediates the values of the fitness function, $F$, and the advection velocity, $v_z$; the
latter feature distinguishes our work from existing theoretical models in which intrinsic forces are assumed to dominate phenotypic variation (i.e., $v_z = 0$) \cite{Ardaseva2019, Villa2019}.
By combining the processes mentioned above, we deduce that the evolution over time $t$ and along the phenotypic axis $z$
of the cell concentration, $N(z,t)$, is governed by the following non-local partial differential equation (PDE) and associated boundary and initial conditions:
\begin{subequations}
\begin{align}
\frac{\partial N}{\partial t}= \frac{\partial }{\partial z} \underbrace{\left(\theta \frac{\partial N}{\partial z}-N v_z(z,c)\right)}_{\text{structural flux}}+ \underbrace{F(z,\Phi,t;c)}_{\text{fitness}}N,\label{full1}\\
\theta \frac{\partial N}{\partial z}-N v_z = 0, \quad z\in\left\{0,1\right\}, \, t>0,\\[1pt]
N(z,0)= N_0(z) \quad z\in(0,1),\\[1pt]
\Phi(t)=\int_0^1 N(z,t) \, dz.
\end{align}%
In Equation~(\ref{full}), the non-negative constant $\theta$ represents the rate at which cells diffuse along the phenotypic axis, due to random epigenetic mutations, $\Phi(t)$ denotes the
density of cells in the domain at time $t$, and $N_0(z)$ is the initial distribution of cells along the phenotypic axis.
In ecology, the function $F$ is referred to as the fitness landscape, a mathematical representation of natural, or \textit{Darwinian}, selection \cite{Pisco2015}. We suppose it has the following form:
\begin{align}
\begin{aligned}
F(z,\Phi,t;c)= \underbrace{p(z,c)\left(1-\frac{\Phi}{\Phi_{max}}\right)}_{\text{proliferation}}-\overbrace{f(z)}^{\text{\shortstack{natural cell\\ death}}}-\underbrace{\sum^{M}_{i=1} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i)}_{\text{radiotherapy}}.\label{F}
\end{aligned}
\end{align}\label{full}
\end{subequations}
In Equation~(\ref{F}), $p=p(z,c)$ denotes the phenotype-dependent growth rate of the cells (see Section~\ref{fit} for details). It is multiplied by a non-local (in the phenotypic sense) logistic term, with constant carrying capacity $\Phi_{max}$, to capture intra-population competition for space and other resources. We assume that oxygen levels remain sufficiently high so that necrosis can be neglected. Hence, the death rate, $f$, accounts only for natural cell death, or apoptosis, which is assumed to occur at a rate which is independent of the oxygen concentration, $c(t)$. Radiotherapy (RT) also contributes to cell death and, in so doing, reduces cell fitness. We suppose that $M$ rounds of RT are administered at discrete times $t_i$ ($i=1,2,\ldots, M$). After each treatment dose, the proportion of cells of phenotype $z$ that survive is denoted by the survival fraction $SF(z,c)$. By allowing $SF$ to depend on $z$, we can account for phenotype-dependent radio-sensitivity and, for example, view the CSCs (i.e., $z=0$) as the most radio-resistant tumour subpopulation \cite{Rycaj2014}. Additionally, the dependence of $SF(z,c)$ on $c(t)$ enables us to account for differential radio-sensitivity under normoxia and hypoxia \cite{Hockel1996,Sorensen2020}.
In contrast to~\cite{lewin2}, where the
term $(1-SF)$ is used to capture cell death due to radiotherapy, here we use the term
$\log(1/SF)$, to ensure that the jump in tumour cells following each dose of radiotherapy is consistent with the Linear-Quadratic (LQ) model.
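To see why, note that, on the treatment timescale, the delta function dominates the right-hand side of Equation~(\ref{full1}); integrating across the $i$-th dose then gives
\begin{equation*}
\frac{\partial N}{\partial t}\simeq -\log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i)\,N
\quad\Longrightarrow\quad
N(z,t_i^+)=e^{-\log\left(1/SF\right)}\,N(z,t_i^-)=SF(z,c)\,N(z,t_i^-),
\end{equation*}
so that each dose reduces the population of phenotype $z$ by exactly the survival fraction prescribed by the LQ model.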
We now partially rescale our model by recasting the dependent variables $N$ and $\Phi$ in the following way:
\begin{equation}
n = \frac{N}{\Phi_{max}}, \qquad \phi = \frac{\Phi}{\Phi_{max}},
\end{equation}
where time, $t$ [hr], retains its dimensional units to facilitate interpretation of the results. Under this rescaling, Equations~(\ref{full}) become
\begin{subequations}
\begin{align}
\hspace{-10mm}
\frac{\partial n}{\partial t}= \frac{\partial }{\partial z} \left(\theta \frac{\partial n}{\partial z}-n v_z(z,c)\right)+ F(z,\phi,t;c) n,\\
\theta \frac{\partial n}{\partial z}-n v_z = 0, \qquad z\in\left\{0,1\right\}, \, t>0,\\
n(z,0)= n_0(z) \quad z\in(0,1),\\
\phi(t)=\int_0^1 n(z,t) \, dz,\\
\begin{aligned}
F(z,\phi,t;c)= p(z,c)\left(1-\phi\right) -f(z)-\sum^{M}_{i=1} \log\left(\frac{1}{SF(z,c)}\right)\delta(t-t_i).\label{Fitness}
\end{aligned}
\end{align}
In order to complete the model, it remains to specify several functional forms; this will be done in Sections \ref{fit} and \ref{vz_sec}.
Extending the model to account for spatial variation is presented in~\ref{AppendixA}, and preliminary results are included in Section~\ref{conclusion} (a full investigation of the spatially-extended model is postponed to future work).
In what follows, we assume that oxygen concentration $c$ has been rescaled so that $c=1$ corresponds to physiological oxygen levels, namely \textit{physoxia}, which is about $8\%$ oxygen \cite{McKeown2014}.
When considering hypoxia, we focus on mild hypoxia, fixing $c=0.2$ which corresponds to $1.6\%$ oxygen in standard units
(see~\ref{App_param} for details).
At this oxygen concentration, necrosis can be neglected; it typically occurs at lower oxygen tensions (approximately $0.1\%$ oxygen in standard units).
Unless otherwise stated, we assume that the tumour initially comprises a small population of differentiated cells so that
\begin{align}
n_0(z)=\frac{\phi_0}{\sqrt{2\pi \sigma^2}} e^{-\frac{\left(z-0.5\right)^2}{2\sigma^2}}\label{initial_cond},
\end{align}\label{mixedmodel}%
where the positive constants $\phi_0$ and $\sigma$ specify the initial size and phenotypic variance of the population.
\end{subequations}
The proportion of CSCs is often
used to characterise heterogeneous populations of cancer cells.
CSCs are typically identified by their expression of specific markers (such as CD44/CD24 and ALDH1, depending on the tumour type \cite{frontiers}); thresholds in these markers are used to distinguish stem from differentiated cancer cells.
Since our model treats stemness as a continuously varying cell property, we introduce a threshold $z^* \in (0,1)$ in our simulations, and classify cells with $0<z<z^*$ as CSCs.
We therefore define the proportion of stem cells at time $t$ to be:
\begin{equation}
\phi_{CSC}(t,z^*)=\frac{\int_0^{z^*} n(s,t) \,ds}{\phi(t)}.
\label{cum_dist}
\end{equation}
As a further statistical feature of the cell population, we introduce the phenotypic mean, $\mu(t)$, which is defined as follows:
\begin{equation}
\mu(t)=\frac{1}{\phi(t)} \int_0^1 z n(z,t) dz. \label{mean}
\end{equation}
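Both summary statistics can be evaluated by simple quadrature of a discretised distribution. In the Python sketch below (illustrative, not the authors' code), the Gaussian test profile mimics the initial condition~(\ref{initial_cond}), with the values of $\phi_0$ and $\sigma$ chosen purely for illustration.

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def summary_stats(z, n, z_star=0.3):
    """Proportion of CSCs, Eq. (cum_dist), and phenotypic mean, Eq. (mean),
    for a distribution n sampled on the grid z."""
    phi = trapz(n, z)                         # total density, phi(t)
    mask = z <= z_star
    phi_csc = trapz(n[mask], z[mask]) / phi   # mass below the CSC threshold
    mu = trapz(z * n, z) / phi                # phenotypic mean
    return phi_csc, mu

# illustrative distribution: the Gaussian of Eq. (initial_cond)
z = np.linspace(0.0, 1.0, 1001)
sigma, phi0 = 0.05, 0.01                      # hypothetical values
n0 = phi0 / np.sqrt(2 * np.pi * sigma**2) * np.exp(-(z - 0.5)**2 / (2 * sigma**2))
phi_csc, mu = summary_stats(z, n0)
```

For this symmetric initial profile the phenotypic mean is $\mu=0.5$ and, with $z^*=0.3$ lying four standard deviations below the peak, the CSC fraction is negligible, as expected.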
In the absence of suitable experimental data, it is difficult to specify many of the parameters and functional forms in Equations~(\ref{mixedmodel}).
For this reason, we focus on identifying the qualitative behaviours that the model exhibits across a range of `biologically-reasonable' situations.
\subsection{Fitness Landscape}
\label{fit}
\noindent When considering the fitness landscape, we assume that, for fixed values of $c$, the proliferation rate, $p(z,c)$, has a multi-peaked profile, with local maxima centred around $z=0$ and $z=0.55$, representing respectively cells with stem-like ($z=0$) and intermediate phenotypes ($z=0.55$, this value being arbitrary). As shown in Figure \ref{fitness landscape}, this choice reduces the overlap of the two Gaussian profiles while maintaining the proliferation rate at $z=1$ close to zero. This asymmetry also emphasises that, under normoxia, more stem-like cells (i.e. $z<0.5$) proliferate at lower rates than more differentiated cells (i.e. $z>0.5$).
Different environmental conditions (i.e., oxygen concentrations) will create distinct ecological \textit{niches}, each of which will favour a particular phenotype. We account for this effect by assuming that the amplitudes of the peaks in the proliferation rate are oxygen-dependent. Accordingly, we write:
\begin{subequations}
\begin{align}
p(z;c)=p_0(c)\exp\left[-\frac{z^2}{g_0}\right]+p_1(c)\exp\left[-\frac{(z-0.55)^2}{g_1}\right],\\
p_i(c)=p_i^{max}\myfrac[2pt]{c^4}{K_{i}^4+c^4}, \quad i=0,1,\label{pO2}
\end{align}
\label{eqnetprol}%
\end{subequations}
where $p_0(c)$ and $p_1(c)$ are Hill–Langmuir type equations with fourth-order exponents, so that the growth rate decays rapidly as $c$ falls below $K_{i}$. We assume that differentiated cells are fitter than CSCs under normoxia and, therefore, choose $p_1^{max}>p_0^{max}$. At the same time, we note that chronic hypoxia is widely considered to favour CSCs \cite{Ayob2018, Conley2012, Lan2018}. The plasticity of CSCs enables them to adapt their metabolism to changing nutrient levels more readily than differentiated cells \cite{Garnier2019,Snyder2018} and, therefore, to survive and proliferate in challenging conditions. This behaviour contrasts with that of differentiated cancer cells, which tend to become quiescent when exposed to hypoxia. We account for these effects by assuming $K_0\ll K_1$.
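The resulting switch in the dominant niche can be checked directly from Equations~(\ref{eqnetprol}) with the parameter values of Table~\ref{par_fitness}. The short Python sketch below (illustrative, not the authors' code) confirms that the intermediate phenotype is favoured at $c=1$ while the stem peak dominates at $c=0.2$.

```python
import numpy as np

# Table (par_fitness) values: (i = 0, i = 1)
P_MAX, K, G = (0.005, 0.02), (0.05, 0.3), (0.01, 0.04)
Z_PEAK = (0.0, 0.55)

def p(z, c):
    """Proliferation rate of Eq. (eqnetprol): two Gaussian niches whose
    amplitudes p_i(c) are fourth-order Hill functions of oxygen, Eq. (pO2)."""
    total = 0.0
    for pmax, k, g, zp in zip(P_MAX, K, G, Z_PEAK):
        amp = pmax * c**4 / (k**4 + c**4)
        total += amp * np.exp(-(z - zp)**2 / g)
    return total

# normoxia (c = 1): intermediate phenotype out-competes the CSCs,
# p(0, 1) ~ 0.005 < p(0.55, 1) ~ 0.020
print(p(0.0, 1.0), p(0.55, 1.0))
# mild hypoxia (c = 0.2): the ordering is reversed,
# p(0, 0.2) ~ 0.005 > p(0.55, 0.2) ~ 0.003
print(p(0.0, 0.2), p(0.55, 0.2))
```

The reversal is driven entirely by $K_0\ll K_1$: at $c=0.2$ the stem amplitude $p_0(c)$ is still near its maximum while $p_1(c)$ has collapsed.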
When we consider the rate of cell death due to apoptosis, $f(z)$,
we note that apoptosis occurs predominantly when cells lose their clonogenic capacity.
As such, it predominantly affects TDCs with $z\sim 1$.
Motivated by the mathematical models developed in \cite{ende2,Scott169615}, we propose
the following monotonically increasing function for $f(z)$:
\begin{equation}
f(z)=d_f \,e^{-k_f(1-z)}.\label{apoptosis}
\end{equation}
Even though they may not proliferate, TDCs compete for space and resources and, thus, impact the tumour dynamics.
In what follows, we consider two different cases. First, guided by experimental results reported by Driessens et al.~\cite{Driessens2012}, we assume that apoptosis of TDCs occurs on a much longer timescale than that on which
cells proliferate so that $d_f\ll\max_{z} p(z;1)$. In the second case, the rates of cell proliferation and apoptosis are assumed to be comparable. This situation represents a tumour with high cell turnover and, as we will see, gives rise to a tumour population with higher clonogenic capacity.
In Figure \ref{fitness landscape}, we sketch the fitness landscape $F(z,0,t;c)$ for different environmental conditions, neglecting competition and radiotherapy in Equations~(\ref{Fitness}); here $p$ and $f$ are defined by Equations~(\ref{eqnetprol})-(\ref{apoptosis}).
\begin{figure}[h!]
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land2}
\caption{high $d_f$}
\label{landnorm2}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land1}
\caption{low $d_f$}
\label{landnorm1}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/notreatment/fitness_land3}
\caption{low $d_f$}
\label{landhyp}
\end{subfigure}
\caption{Series of sketches showing how the maximum growth rate $p(z,c)-f(z)$, as defined by Equations~(\ref{eqnetprol})-(\ref{apoptosis}), changes in different micro-environments:
(a)-(b) under normoxia ($c=1$), the progenitor cells ($z=0.55$) are the fittest phenotype, and the death rate may be either high (a) or low (b); (c) under hypoxia ($c=0.2$), the CSCs ($z=0$) are the fittest phenotype. The parameter values used to produce the sketches
are listed in Table~\ref{par_fitness}. Regions of positive and negative fitness are highlighted in green and red, respectively.}
\label{fitness landscape}
\end{figure}
\begin{table}[h!]
\centering
\subfloat[proliferation]{
\begin{tabular}{c|c c c }
\toprule[1.5pt]\addlinespace[2pt]
& $p^{max}_i\, (hr^{-1})$ &$K_i$&$g_i$ \\
\hline\addlinespace[2pt]
$i$=0&$0.005$&$0.05$&$0.01$\\
$i$=1& $0.02$ &$0.3$&$0.04$\\
\bottomrule[1.5pt]
\end{tabular}}
\hspace{10mm}
\subfloat[apoptosis]{
\begin{tabular}{ c c}
\toprule[1.5pt]\addlinespace[2pt]
$d_f\, (hr^{-1})$ & $k_f$\\
\hline\addlinespace[2pt]
$\left\{0.001,0.015\right\} $& $10$\\
\bottomrule[1.5pt]
\end{tabular}}
\caption{Range of parameter values used in the sensitivity analysis. More information on the specific parameter choices can be found
in~\ref{AppendixA}.}
\label{par_fitness}
\end{table}
We now consider the impact of radiotherapy on cell fitness.
As mentioned above, CSCs possess protective mechanisms
that enable them to withstand damage caused by
radiation and oxidative stresses~\cite{Radioresistance,Clark2016, Diehn2009,Rycaj2014,cancersreview,frontiers,Vassalli2019}. They are, therefore, more resistant to treatment than their differentiated counterparts. It is well known that local oxygen
concentration levels also affect treatment outcomes~\cite{Horsman2012,Moulder1987}. While we account for this effect in the full spatial model (see~\ref{AppendixA}), here we focus on the role of phenotype-dependent radio-sensitivity.
In particular, we adapt the standard Linear-Quadratic (LQ) model so that the tissue specific coefficients, $\alpha (Gy^{-1})$ and $\beta(Gy^{-2})$, are phenotype dependent:
\begin{subequations}
\begin{align}
-\log(SF)=\alpha(z) d + \beta(z) d^2,\label{lq_model}
\end{align}%
where $d$ is the radiation dose in Grays (Gy). Equation~(\ref{lq_model}) is the natural, continuum extension of previous works \cite{Leder2014,Saga}, in which two-compartment models are used to describe the time-evolution of cancer cells and cancer stem cells exposed to radiotherapy, and CSCs are assumed to be radio-resistant. Accordingly, here, we assume $\alpha$ and $\beta$ are increasing functions of the phenotype $z$~\cite{Saga,cancersreview,frontiers} of the following form:
\begin{align}
\alpha(z)= \alpha_{min}+(\alpha_{max}-\alpha_{min})\tanh\left(\frac{z}{\xi_R}\right),\label{rad_alpha}\\[2pt]
\beta(z)=\beta_{min}+(\beta_{max}-\beta_{min})\tanh\left(\frac{z}{\xi_R}\right).\label{rad_beta}
\end{align}\label{radiosensitivity}
\end{subequations}
In Equations~(\ref{rad_alpha})-(\ref{rad_beta}), $\xi_R$, $\alpha_{min,max}$ and $\beta_{min,max}$ are non-negative constants with $\alpha_{min}<\alpha_{max}$ and $\beta_{min}<\beta_{max}$.
Where possible, parameter estimates are taken from the literature (see~\cite{Saga} for estimates of $\alpha_{min,max}$ and $\beta_{min,max}$); the value of $\xi_R=0.2$ is instead chosen so that differentiated cells (i.e. $z>0.5$) have maximum sensitivity to treatment (i.e., $\alpha(z)\sim \alpha_{max}$ for $z>0.5$).
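As an illustration of Equations~(\ref{lq_model})-(\ref{radiosensitivity}), the Python sketch below (illustrative, not the authors' code) evaluates the cumulative survival fraction for parameter set R1 of Table~\ref{rad_tab2} under the two protocols considered later, applying the LQ model once per fraction and assuming complete repair between fractions.

```python
import numpy as np

XI_R = 0.2
ALPHA = (0.005, 0.15)   # (alpha_min, alpha_max), set R1 of Table (rad_tab2)
BETA = (0.002, 0.10)    # (beta_min, beta_max)

def alpha(z):
    """Eq. (rad_alpha): radio-sensitivity increases with phenotype z."""
    return ALPHA[0] + (ALPHA[1] - ALPHA[0]) * np.tanh(z / XI_R)

def beta(z):
    """Eq. (rad_beta)."""
    return BETA[0] + (BETA[1] - BETA[0]) * np.tanh(z / XI_R)

def survival(z, doses):
    """Cumulative survival fraction after a sequence of doses (Gy),
    Eq. (lq_model) applied once per fraction (full repair assumed)."""
    return float(np.exp(-sum(alpha(z) * d + beta(z) * d**2 for d in doses)))

single, fractionated = [10.0], [2.0] * 5
for z, label in [(0.0, "CSC (z=0)"), (1.0, "TDC (z=1)")]:
    print(label, survival(z, single), survival(z, fractionated))
```

For these values, CSCs ($z=0$) survive both protocols far better than TDCs ($z=1$), and fractionation spares the late-responding $z=1$ population relative to a single $10\,Gy$ dose, consistent with the discussion of selective pressure below.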
\begin{table}[h!]
\centering
\begin{tabular}{c|l l | l l }
\toprule[1.5pt]\addlinespace[3pt]
& \multirow{2}{*}{$[\alpha_{min},\alpha_{max}](Gy^{-1})$}&\multirow{2}{*}{$[\beta_{min},\beta_{max}](Gy^{-2})$}&\multirow{2}{*}{$\displaystyle\frac{\alpha_{min}}{\beta_{min}}(Gy)$}&\multirow{2}{*}{$\displaystyle\frac{\alpha_{max}}{\beta_{max}}(Gy)$}\\
&&&&\\
\hline\addlinespace[2pt]
R1& $[0.005,0.15]$ & $[0.002,0.10]$ &2.5&1.5\\[2pt]
R2& $[0.050,0.20]$ & $[0.020,0.05]$ &2.5&4\\[2pt]
R3& $[0.005,0.40]$ & $[0.002,0.05]$ &2.5&8\\[1pt]
\bottomrule[1.5pt]
\end{tabular}
\vspace{3mm}
\caption{Summary of the parameter values used in Equation~(\ref{radiosensitivity}) to describe the three different RT responses used in model simulations. In all cases, we fix $\xi_R=0.2$.}
\label{rad_tab2}
\end{table}
We consider three different parameter sets (see Table \ref{rad_tab2}); they may represent three cell populations which differ in their sensitivity to radiotherapy (RT).
For cases R1 and R3, CSCs (with $z\sim 0$) respond in the same way to RT, whereas differentiated cancer cells (with $z> 0.5$) respond differently. For case R1, the small value of $\alpha_{max}/\beta_{max}$ for the sensitive cells ($z=1$) corresponds to a late-responding tissue, whereas for case R3, the large value of $\alpha_{max}/\beta_{max}$ corresponds to an early-responding tissue, with a low repair capacity, for which fractionation is known to be beneficial \cite{McMahon_2018}. Finally, case R2 is intermediate between cases R1 and R3. By assuming heterogeneity in the cell response to RT, we can investigate the selective pressure that RT exerts on the population. For a given dosage and LQ model, differences in the radio-sensitivity of CSCs and differentiated cells are determined by the ratios $\alpha_{min}/\alpha_{max}\in (0,1)$ and $\beta_{min}/\beta_{max}\in(0,1)$. When both ratios are small, CSCs are more likely to survive RT than their differentiated counterparts and, therefore, the selective pressure of RT on the population is high. By contrast, as $\alpha_{min}/\alpha_{max}$ and $\beta_{min}/\beta_{max}$ approach unity, RT offers no selective advantage to CSCs as, at leading order, the response is independent of phenotype. The selective pressure also depends on the specific dose applied. For example, for high doses the quadratic term in Equation~(\ref{lq_model}) is dominant and the selective pressure is associated only with the value of $\beta_{min}/\beta_{max}$. By contrast, for lower doses, both the linear and quadratic terms contribute to cell killing and, so, the selective pressure of RT is associated with both $\alpha_{min}/\alpha_{max}$ and $\beta_{min}/\beta_{max}$. For these reasons, we will consider two different RT protocols: either a single dose of $10\,Gy$ is delivered or a fractionated schedule is used (here five doses of $2\,Gy$ are delivered over five consecutive days \cite{Brenner1991,Dale1985}).
While R2 is expected to exert the weakest selective pressure under both protocols, whether R1 or R3 exerts the strongest depends on the treatment protocol considered.
\section{Population Dynamics in the Absence of Treatment}
\label{notreatment}
\noindent In this section, we present numerical solutions of Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}) showing how, in the absence of treatment, the tumour cell distribution along the stemness axis evolves under normoxia and hypoxia. Our numerical solutions are generated using the method of lines, with discretisation performed in the $z$-direction. In more detail and following \cite{Gerisch2006}, we use a finite volume scheme, opting for a Koren limiter to control the advection component of the structural flux. In this way, we reduce~(\ref{mixedmodel}) to a system of time-dependent, ordinary differential equations which can be solved in MATLAB using \textit{ode15s}, an adaptive solver for stiff equations.
The numerical simulations are validated in Section \ref{LSA} where we perform a linear stability analysis. The associated eigenvalue problem is solved numerically using MATLAB's \textit{chebfun} package \cite{Driscoll2014}.
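By way of illustration, the discretisation described above can be sketched in a few lines of Python using \textit{scipy}'s BDF integrator in place of \textit{ode15s}. The sketch is a simplification, not the authors' code: a first-order upwind flux replaces the Koren limiter, and the values of $\theta$, $\sigma$, $\phi_0$ and the advection profile (Equation~(\ref{eq_ad}) is not reproduced here) are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: theta, sigma, phi0 and the advection profile are
# assumptions; p and f follow Eqs. (eqnetprol)-(apoptosis) with the values
# in Table (par_fitness).
theta, V, xi, d_f, k_f = 1e-5, 4e-4, 0.05, 0.001, 10.0
N = 200
dz = 1.0 / N
z = (np.arange(N) + 0.5) * dz                  # cell centres
zf = np.arange(1, N) * dz                      # interior faces

def p(zz, c=1.0):
    p0 = 0.005 * c**4 / (0.05**4 + c**4)
    p1 = 0.02 * c**4 / (0.3**4 + c**4)
    return p0 * np.exp(-zz**2 / 0.01) + p1 * np.exp(-(zz - 0.55)**2 / 0.04)

f = d_f * np.exp(-k_f * (1.0 - z))
vf = V * np.tanh(zf / xi) * np.tanh((1.0 - zf) / xi)   # placeholder velocity

def rhs(t, n):
    # Interior face fluxes J = theta*dn/dz - v*n: centred diffusion plus
    # first-order upwind advection (in place of the paper's Koren limiter).
    J = theta * np.diff(n) / dz - vf * np.where(vf > 0, n[:-1], n[1:])
    J = np.concatenate(([0.0], J, [0.0]))      # zero-flux boundary conditions
    phi = n.sum() * dz                          # total density, phi(t)
    F = p(z) * (1.0 - phi) - f                  # fitness, Eq. (Fitness), no RT
    return np.diff(J) / dz + F * n

# initial condition, Eq. (initial_cond), with illustrative sigma and phi0
sigma, phi0 = 0.05, 0.01
n0 = phi0 * np.exp(-(z - 0.5)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

sol = solve_ivp(rhs, (0.0, 2000.0), n0, method="BDF", rtol=1e-6, atol=1e-10)
phi_end = sol.y[:, -1].sum() * dz               # long-time cell density
```

With these placeholder values the population grows from $\phi_0=0.01$ and settles to a non-zero density bounded by the carrying capacity; the limiter, grid resolution and tolerances would all need refinement for quantitative work.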
\subsection{Normoxic Conditions}
\label{oxygen_env}
\noindent In well-oxygenated environments, the advection velocity is positive and cells are driven towards a terminally differentiated phenotype, with $z=1$. Depending on the balance between the advective flux and cell renewal (i.e., Darwinian selection and Lamarckian induction), the model predicts a variety of long-time behaviours: the system relaxes to its steady state either monotonically or via damped fluctuations. We start by considering symmetric velocity profiles (see Figure \ref{vel_norm1}).
As summarised in Figures \ref{norm_sym} and \ref{steady_state}, as the magnitude of the advection velocity, $V_+$, and its steepness, $\xi_+$, are varied, the system exhibits different long-time behaviours, even though the dynamics at early times are similar for all parameter sets considered (see Figure \ref{norm_sym}). If simulations are initialised with a small population of cells with $z\sim 0.5$, then the dynamics are initially dominated by proliferation. Over time, as $\phi$ increases, competition slows the cell proliferation rate and phenotypic advection becomes more important. As the cells mature, they accumulate near $z=1$, and the rate of natural cell death exceeds the rate of cell proliferation. From this time onwards, the growth curves corresponding to different parameter sets start to deviate.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figures/notreatment/normoxia/new_evol_sym}
\caption{
Results from a series of numerical simulations of
Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}), showing how the cell distribution, $n(z,t)$, the phenotypic mean, $\mu(t)$, and the cell density, $\phi(t)$, change over time when we use a symmetric velocity profile (i.e., $\omega_+=1$ in Equation~(\ref{eq_ad})).
As $V_+$ increases and $\xi_+$ decreases, the system can be driven to extinction.
See Figure \ref{steady_state} for the values of the other model parameters.}
\label{norm_sym}
\end{figure}
For example, in case A.2, the system rapidly relaxes to a non-zero steady state distribution characterised by cells with medium clonogenic capacity (i.e., a mix of highly proliferating and terminally differentiated cells or TDCs).
By contrast, for cases C.1 and C.2, the cell density, $\phi(t)$, decays exponentially to extinction at a rate dictated by $d_f$. In other parameter regimes, the relaxation phase is characterised by damped fluctuations. In case A.1, for example, fluctuations are driven by the interplay between apoptosis, competition and advection. As TDCs are eliminated, the reduction in competition allows re-growth of highly proliferative cancer cells (i.e., $z\sim0.55$).
As these cells proliferate, competition slows growth and advection becomes dominant, resulting in the alternating pattern of red and white stripes observed in the surface plot for $n(z,t)$ shown in Figure \ref{norm_sym} for case A.1. Over time, the fluctuations decay and the system relaxes to its steady state distribution.
In Section~\ref{LSA}, we present a complementary
investigation of this behaviour, relating the damped oscillations
to a complex eigenvalue in the linearisation about the
equilibrium solution.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figures/notreatment/normoxia/phase_diagrams_new}
\caption{Series of phase diagrams characterising the steady state distribution predicted by the model as properties of the advection velocity, $v_z$, vary
(i.e., for different values of the parameters $V_+$, $\xi_+$ and $\omega_+$), and the rate of apoptosis, $d_f$. At each point in $(V_+,\xi_+)$ parameter space,
we characterise the equilibrium distribution based on the number of peaks and the dominant phenotype (i.e., the $z$-locations of the local maxima) for different values of the parameters $\omega_+$ and $d_f$. For parameter sets that give rise to a significant fraction of CSCs
(i.e., $\%$ CSCs $\geq 1\%$), the value of $\phi_{CSC}(t_{\infty},0.3)$, as defined by Equation~(\ref{cum_dist}), is also indicated.}
\label{steady_state}
\end{figure}
Focusing on the long-time behaviour, the symmetric advective profile gives rise to a population with a unimodal equilibrium distribution where the location of the peak is dictated by the values of the other parameters. For example, for small values of the maximum death rate, $d_f$ (see case A.1), the distribution is skewed towards $z=1$, while for higher values of $d_f$ the peak is shifted towards the centre of the domain. These observations are summarised in Figure \ref{steady_state}, where we have further analysed how the properties of the equilibrium distribution depend on other parameters in the model. We note that as the advective velocity increases (i.e., larger $V_+$) the value of $\xi_+$ determines whether total extinction occurs. This suggests that there is a bifurcation
as $V_+$ and $\xi_+$ vary, with the system transitioning from a trivial to a non-zero steady state (this behaviour will be
investigated in Section~\ref{LSA}).
By contrast, the equilibrium distribution for an asymmetric velocity profile (i.e., $\omega_+=2$, as in Figure \ref{vel_norm2}) is multimodal, typically with two peaks. In this case, since the CSCs have a lower propensity to mature, they accumulate and persist in the population, even under normoxia. The second column of Figure \ref{steady_state} shows that the proportion of CSCs at long times increases as the death rate, $d_f$, the steepness parameter, $\xi_+$, and the maturation velocity, $V_+$, increase, until the CSCs become the dominant subpopulation (see, for example, Case B.3 in Figure \ref{norm_asym}).
Varying the death rate, $d_f$, does not significantly affect whether extinction occurs; rather, it determines the location of the maximum peak in the equilibrium distribution (see, for example, case B.2 in Figure \ref{norm_asym}). For low death rates, cells are predominantly
in a terminally differentiated state. As the death rate increases, the peak moves to the left, producing an equilibrium distribution in which a higher proportion of rapidly proliferating cells balances the high death rate.
Figure \ref{norm_asym} shows how the system relaxes to its steady state when $\omega_+=2$. Comparison with Figure \ref{norm_sym} reveals that in this case the dynamics are characterised by secondary regrowth, driven by the accumulation of CSCs. For example, in case B.1, phenotypic diffusion enables the cancer cells to de-differentiate, acquire a stem-like phenotype and, therefore, contribute to population growth.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figures/notreatment/normoxia/new_evol_asym2}
\caption{Results from a series of numerical simulations of
Equations~(\ref{mixedmodel})-(\ref{apoptosis}) and~(\ref{eq_ad}), showing how the cell distribution, $n(z,t)$, the phenotypic mean, $\mu(t)$, and the cell density, $\phi(t)$, change over time. For these results, we use an asymmetric velocity profile (i.e., $\omega_+=2$ in Equation~(\ref{eq_ad})). See Figure \ref{steady_state} for the values of the other model parameters.}
\label{norm_asym}
\end{figure}
To summarise, the properties of the advection velocity, $v_z$, determine whether the model predicts extinction or persistence of CSCs, regardless of whether they are present initially. When $\omega_+=2$, random mutations (i.e., diffusion) may dominate the advective force near $z=0$, allowing CSCs first to form, then to proliferate and ultimately to comprise a significant proportion of the equilibrium population. CSCs have been observed in normoxic regions; for example, they have been found in perivascular tumour regions, where endothelial cells secrete factors that inhibit CSC maturation \cite{Calabrese2007}. By contrast, when $\omega_+=1$
(i.e., for symmetric velocity profiles), all cells mature over time, leading to the eventual extinction of CSCs. This behaviour could describe that of tumours which lack CSCs, or the effect of drugs which induce stem cell differentiation and, thereby, reduce the incidence of resistance to other treatments, such as radiotherapy.
We conclude that targeting $V_+$ and $\xi_+$ may be effective for eliminating CSCs, increasing tumour sensitivity to treatment and, in certain scenarios, driving tumour extinction.
\subsection{Hypoxic Conditions}
\label{hyp cond}
\noindent Under hypoxia, the advection velocity in our model is negative and cells are driven to de-differentiate. In this case, the equilibrium distribution is unimodal, with the dominant phenotype at $z=0$. Although varying the death rate $d_f$ does not affect the equilibrium distribution (compare cases H3 and H4 in Figure \ref{hyp1}), the values of $\omega_-$ and $\xi_-$ influence the width of the peak
(compare cases H1 and H2 in Figure \ref{hyp1}) and, therefore, the variability in the population.
\begin{figure}[h!]
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{figures/notreatment/hypoxia/evol_cond1}
\caption{}
\label{hyp1_cond1}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/notreatment/hypoxia/evol_cond2}
\caption{}
\label{hyp1_cond2}
\end{subfigure}
\caption{Numerical results under hypoxic conditions for four parameter sets, all with $V_-=4\times 10^{-4}$.
In (a) we use the standard initial condition defined by Equation~(\ref{initial_cond}) while in (b) the population is centred around $z=1$. The other parameter values are as follows:
(H1) $\xi_-=0.05$, $\omega_-=1$ and $d_f=0.001$;
(H2) $\xi_-=0.5$, $\omega_-=1$ and $d_f=0.001$;
(H3) $\xi_-=0.5$, $\omega_-=2$ and $d_f=0.001$;
(H4) $\xi_-=0.5$, $\omega_-=2$ and $d_f=0.015$.}
\label{hyp1}
\end{figure}
Differences in the system dynamics also arise as the initial conditions $n_0(z)$ vary.
The results in Figure \ref{hyp1_cond1} indicate little variation in the system dynamics
when the initial conditions from Section \ref{oxygen_env} are used. By contrast, in
Figure \ref{hyp1_cond2} we observe marked differences when the initial conditions are centred around the TDCs.
In this case, population regrowth is delayed, the delay depending on the choice of parameter values.
For example, when $\omega_-=2$, the velocity in a neighbourhood of $z=1$ is smaller than when
$\omega_-=1$. Consequently, cells de-differentiate more slowly, delaying tumour regrowth.
Similarly, increasing the death rate, $d_f$, reduces the number of cells that can de-differentiate and, subsequently, delays regrowth. Therefore, while $d_f$ does not affect the equilibrium distribution, it influences the system dynamics.
These results show how the formation of hypoxic regions can shape the development of a tumour. In particular, the emergence of hypoxia maintains and enhances the
pool of CSCs, preventing population extinction (see, for example, scenario D in Section \ref{oxygen_env}).
\section{Population Dynamics in the Presence of Treatment}
\label{radio_result}
\noindent In the previous section, we found that the system possesses a
stable steady state to which the dynamics converge
for the range of parameter values considered.
Therefore, we anticipate that, while treatment can perturb the system from its equilibrium, it will eventually relax to its stable steady state once treatment ends. Thus we expect extinction to occur for parameter values lying in the stability region of the trivial steady state (see Figure \ref{xi_crit}).
From this point of view, we are interested in understanding how different environmental conditions (i.e. normoxia and hypoxia), different treatment protocols and different tumour compositions affect the relaxation phase and, in particular, the time to recurrence.
To account for variability in tumour responses, we consider the different advection velocities used in our earlier analysis (see Table \ref{tab_rad3}).
Starting from the initial condition~(\ref{initial_cond}), cells follow different pre-treatment protocols as specified in Table \ref{tab_rad3}. Without loss of generality, we shift time so that $t=0$ corresponds to $24$ hours before treatment begins. While attention will focus on tumour responses in constant environmental conditions, we also consider briefly treatment responses in changing environments.
For each scenario, we simulate the response to treatment for the range of values of the radiation parameters listed in Table \ref{rad_tab2}.
We denote by $n^{(S1,R1)}(z,t)$ the solutions corresponding to scenario $S1$ from Table \ref{tab_rad3} and radio-sensitivity parameter set $R1$ from Table \ref{rad_tab2}.
\begin{table}
\hspace{-4mm}
\begin{tabular}{c c c c}
\toprule[1.5pt]\addlinespace[2pt]
Scenario & Protocol &Parameters& Subsection\\[2pt]
\hline\addlinespace[6pt]
\multirow{4}{*}{$S1$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols1}}} &&\multirow{12}{*}{\ref{radio_norm}} \\
&&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\
& & =$\left(4,0.05,2,0.015\right)$&\\
&&&\\[2pt]
\multirow{4}{*}{$S2$}&\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols2}}}&&\\
&&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\
&&=$\left(8,0.05,1,0.001\right)$&\\
&&&\\[2pt]
\multirow{4}{*}{$S3$}& \multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols2}}}&&\\
&&$(V_+[10^{-4}],\xi_+,\omega_+,d_f)$&\\
&&=$\left(8,0.05,2,0.001\right)$&\\
&&&\\[2pt]
\hline\addlinespace[4pt]
\multirow{4}{*}{$S4$}&\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols3}}}&&\multirow{4}{*}{\ref{radio_hyp}}\\
&&$(V_-[10^{-4}],\xi_-,\omega_-,d_f)$&\\
&&=$(2,0.5,2,0.001)$&\\
&&&\\[2pt]
\hline\addlinespace[8pt]
\multirow{4}{*}{$S5$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols4}}}&&\multirow{8}{*}{\ref{radio+change}}\\
&&$(V_\pm[10^{-4}],\xi_+,\xi_-,\omega_+,\omega_-,d_f)$&\\
&&=$(8,0.05,0.5,1,2,0.001)$&\\
&&&\\[2pt]
\multirow{4}{*}{$S6$} &\multirow{2}{*}{\parbox[c]{120pt}{\includegraphics[width=0.3\textwidth]{figures/radio/growthcurves/protocols/protocols5}}}&&\\
&&$(V_\pm[10^{-4}],\xi_+,\xi_-,\omega_+,\omega_-,d_f)$&\\
&&=$(8,0.05,0.5,1,2,0.001)$&\\[2pt]
&&&\\
\bottomrule[1.5pt]
\end{tabular}
\vspace{2mm}
\caption{Parameter sets used to generate the numerical simulations in Section~\ref{fit}, together with the corresponding environmental conditions pre- and post-treatment (blue: normoxia, red: hypoxia). Simulations are initialised using Equation~(\ref{initial_cond}) at different times $t=-t_s$ as indicated in the second column. Radiotherapy is administered at time $t=24$ hours. The parameter values have been chosen to illustrate the range of qualitative behaviours that the model exhibits.}
\label{tab_rad3}
\end{table}
\subsection{Treatment Response in Normoxic Conditions}
\label{radio_norm}
\noindent The simulation results presented in Figure \ref{radio_norm_growth} illustrate the different regrowth dynamics that can arise when well-oxygenated tumour cells are exposed to a single dose of RT. We identify three distinct behaviours: instantaneous regrowth (S1), decay and extinction (S2) and initial remission with subsequent regrowth (S3). While the cell survival fraction immediately post-treatment depends on the parameter values used in the LQ-model (see Equation~(\ref{radiosensitivity})), the qualitative population regrowth dynamics post-treatment do not depend on these values.
In more detail, for scenario S1, the cell density increases rapidly after treatment, driving the system towards its (asymptotic) equilibrium. By contrast, for scenarios S2 and S3, the growth curves initially decrease at similar rates until about $40$ days after treatment. Thereafter, for scenario $S3$ the tumour exhibits rapid regrowth to the equilibrium distribution, whereas for scenario $S2$, the tumour continues to shrink, until it is eventually eliminated.
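Although the LQ-model equation itself is reproduced elsewhere in the paper, the standard linear-quadratic survival fraction, $S(d)=\exp(-(\alpha d + \beta d^2))$, suffices to sketch the comparison between single and fractionated dosing; the parameter values below are hypothetical illustrations, not the paper's fitted values:

```python
import math

def lq_survival_fraction(dose_gy, alpha, beta):
    """Standard linear-quadratic (LQ) survival fraction S = exp(-(a*d + b*d^2)).

    alpha [Gy^-1] and beta [Gy^-2] are the phenotype-dependent
    radio-sensitivity parameters; the values used below are illustrative.
    """
    return math.exp(-(alpha * dose_gy + beta * dose_gy**2))

# Illustrative comparison: a single 10 Gy dose vs five 2 Gy fractions
# (hypothetical alpha, beta; not taken from the paper).
alpha, beta = 0.14, 0.048
single = lq_survival_fraction(10.0, alpha, beta)
fractionated = lq_survival_fraction(2.0, alpha, beta) ** 5

# The quadratic term makes a single large dose far more lethal than the
# same total dose delivered in small fractions.
assert fractionated > single
```

With these illustrative values the fractionated schedule spares substantially more cells than the single dose, in line with the fractionation behaviour discussed later in this section.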
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{figures/radio/growthcurves/normoxia}
\caption{Different treatment outcomes under normoxia. For each scenario S1, S2 and S3 (see Table \ref{tab_rad3}) we consider the dynamics of the total cell number, $\phi(t)$, and compare the responses for the radio-sensitivity parameter sets R1, R2 and R3 (see Table \ref{rad_tab2}) to the control, untreated case. For each scenario we also present plots of the phenotypic cell distribution, $n(z,t)$, at different times for radiotherapy protocol R1. The vertical line indicates the time of irradiation, while a line is also shown that follows the evolution of the control (i.e., in the absence of treatment).}
\label{radio_norm_growth}
\end{figure}
The origin of such differences can be understood from the time evolution of $n(z,t)$ post-radiotherapy. Figure \ref{radio_norm_growth} shows that for
case R1 of Table~\ref{rad_tab2}, the balance between cell proliferation and advection drives the system dynamics.
The reduction in the cell density $\phi(t)$ post-radiotherapy reduces intra-population competition and allows the cells to resume
proliferation. Depending on the magnitude of the advection velocity (which is positive),
the cells either regrow ($S3$) or they are driven
to a terminally differentiated state and, thereafter, become extinct
($S2$). For scenario $S3$, the
presence of radioresistant CSCs post treatment and a small positive velocity at $z=0$
together drive regrowth. As the CSCs start to
mature, there is a continuous source of highly
proliferative cells which, in turn, drive rapid regrowth of the
tumour. As the total cell number increases, intra-population competition slows cell proliferation until eventually advection becomes dominant, driving the cells to de-differentiate. By contrast, for scenario $S2$, advection dominates proliferation along the entire phenotypic axis. Additionally, CSCs are absent so that all cells are rapidly terminally differentiated and, thereafter, undergo cell death.
Comparison of scenarios S2 and S3 reveals how different phenotypic compositions can generate treatment responses which are initially qualitatively similar, but differ markedly at long times.
This finding is reinforced in Figure \ref{radio_mean_norm} where we plot the mean phenotypes, $\mu = \mu(t)$, as defined by Equation~(\ref{mean}).
For scenarios S2 and S3, the dynamics of the mean phenotype are indistinguishable at short times and do not start to diverge until approximately 20 days after treatment.
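To illustrate how the phenotypic mean separates CSC- and TDC-dominated populations, the following sketch computes $\mu$ by trapezoidal quadrature, assuming the definition $\mu=\int_0^1 z\,n\,dz \big/ \int_0^1 n\,dz$ (the precise form of Equation~(\ref{mean}) may differ); both distributions are hypothetical:

```python
import numpy as np

def _trapz(f, z):
    """Trapezoidal rule (avoids version differences in np.trapz/np.trapezoid)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def phenotypic_mean(z, n):
    """Phenotypic mean mu = int z*n dz / int n dz at a fixed time.

    Assumes this standard definition; the paper's Equation (mean) may differ.
    """
    return _trapz(z * n, z) / _trapz(n, z)

z = np.linspace(0.0, 1.0, 201)
n_csc = np.exp(-(z / 0.1) ** 2)          # hypothetical CSC-dominated, peaked at z=0
n_tdc = np.exp(-((1.0 - z) / 0.1) ** 2)  # hypothetical TDC-dominated, peaked at z=1

assert phenotypic_mean(z, n_csc) < 0.2   # mean near the stem end
assert phenotypic_mean(z, n_tdc) > 0.8   # mean near the differentiated end
```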
\begin{figure}[h]
\includegraphics[width=0.95\textwidth]{figures/radio/growthcurves/mean_normoxia}
\caption{Series of plots showing the evolution of the phenotypic mean, $\mu(t)$, for scenarios S1, S2 and S3 (see Figure \ref{radio_norm_growth}). We note that the scales used on the vertical axes are different.}
\label{radio_mean_norm}
\end{figure}
More generally, the results presented in Figure \ref{radio_mean_norm} reveal three characteristic behaviours for the evolution of the phenotypic mean following radiotherapy. The dynamics of $\mu$ may be the same as those prior to treatment, with negligible deviation from the control (see scenario S2). A discontinuity in $\mu$ may be induced by radiotherapy (see scenario $S1$). In this case, CSCs comprise a significant proportion of the population prior to RT and the effect of radioresistance is pronounced (see Figure \ref{radio_norm_growth}). As CSCs
are more likely to survive radiotherapy than more mature cells, we observe an ``instantaneous'' shift in $\mu$ towards
less mature phenotypes.
The size of the discontinuity depends on the relative sensitivity of CSCs and TDCs to RT, or, using the terminology introduced in Section \ref{fit}, the selective power of RT. Since we are considering high radiation dosages, the discontinuity is determined by the ratio $\beta_{min}/\beta_{max}$. In order for the selective pressure of treatment to be apparent, CSCs must comprise a significant fraction of the population prior to treatment. This explains why, for scenario S3,
there is an initial transient period during which, as for
scenario S2, there is no discernible deviation from the control. Only at later times does the difference in the evolution of $\mu(t)$ for the different parameter sets become apparent.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figures/newradioprof/radio_evol_2.pdf}
\caption{Series of numerical results showing how the growth dynamics and the phenotypic mean evolve following exposure to a single dose of radiotherapy when cell radio-sensitivity is a non-monotonic function of cell phenotype. The simulations are analogous to those presented in Figures~\ref{radio_norm_growth} and~\ref{radio_mean_norm}, except that Equations~(\ref{eq:new_al_bet}) are used in place of Equations~(\ref{rad_alpha})-(\ref{rad_beta}). }
\label{fig:App_radio1}
\end{figure}
We note that other factors, in addition to stemness, influence cell radio-sensitivity.
It is natural to expect that cells which have permanently exited the cell-cycle will be less radio-sensitive than cycling cells, as the DNA damage response may already be active in such cells \cite{Lee2014}.
The functional forms for $\alpha$ and $\beta$ defined by
Equations~(\ref{rad_alpha})-(\ref{rad_beta}) assume that radio-sensitivity increases monotonically with cell phenotype, $z$.
In order to investigate situations in which TDCs have lower radio-sensitivity than proliferating cancer cells, we now consider
the following, non-monotonic functional forms:
\begin{subequations}
\begin{align}
\alpha(z)=\alpha_{min}+(\alpha_{max}-\alpha_{min})\tanh\left(\frac{z}{\xi_R}\right)H_{0.075}(1-z),\\[2pt]
\beta(z)=\beta_{min}+(\beta_{max}-\beta_{min})\tanh\left(\frac{z}{\xi_R}\right)H_{0.075}(1-z),
\end{align}\label{eq:new_al_bet}
\end{subequations}
where $H_\epsilon$ is defined in \S\ref{vz_sec}, and we arbitrarily fix $\epsilon=0.075$ (all other parameters are as defined in \S\ref{fit}).
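The non-monotonic radio-sensitivity functions defined above can be evaluated directly; in the sketch below, $\alpha_{min}$, $\alpha_{max}$ and $\xi_R$ take illustrative placeholder values (the paper's values are those of Section~\ref{fit}):

```python
import math

def H_eps(x, eps=0.075):
    """Smooth Heaviside H_eps(x) = (1 + tanh(x/eps)) / 2, as in the text."""
    return 0.5 * (1.0 + math.tanh(x / eps))

def alpha_nonmono(z, a_min=0.05, a_max=0.35, xi_r=0.3):
    """Non-monotonic radio-sensitivity alpha(z) of the equations above.

    a_min, a_max and xi_r are placeholder values for illustration only;
    beta(z) has the identical functional form with its own bounds.
    """
    return a_min + (a_max - a_min) * math.tanh(z / xi_r) * H_eps(1.0 - z)

# Sensitivity rises with z but falls back near z = 1, so TDCs are less
# radio-sensitive than mid-axis, proliferating phenotypes.
mid, end = alpha_nonmono(0.5), alpha_nonmono(1.0)
assert end < mid
assert alpha_nonmono(0.0) == 0.05  # CSCs retain the minimal sensitivity
```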
When the single dose experiment is repeated with the new radio-sensitivity profile, we observe an overall increase in the population survival fraction (compare Figures \ref{fig:App_radio1} and \ref{radio_norm_growth}) and changes in the dynamics of the population mean $\mu(t)$ (compare Figures \ref{fig:App_radio1} and \ref{radio_mean_norm}). The differences are most pronounced for scenarios $S2$ and $S3$ where TDCs, localised near $z=1$, are dominant in the population prior to treatment. The qualitative growth dynamics (i.e., $\phi(t)$) is similar for both cases.
Further investigation of these differences is beyond the scope of the current study and is postponed for future work.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{figures/radio/growthcurves/fractioned}
\caption{Simulation results for fractionated radiotherapy protocols, showing how the total cell number $\phi(t)$ and the phenotypic mean
$\mu(t)$ evolve for scenarios S1 and S3 (see Figure \ref{radio_norm_growth} for details). In all plots, the light purple shaded area indicates the variability in responses when a single dose of 10 Gy is administered and is included for comparison with the fractionated treatments (see Figure \ref{radio_mean_norm}). The yellow shaded area indicates the duration of the treatment for the fractionated case.}
\label{frac_growth}
\end{figure}
In practice, delivery of a single (high) dose of 10 Gy may not be practical for treating patients, due to adverse side effects \cite{Taylor2011}. Therefore, we now consider tumour responses to fractionated RT protocols.
The trends for fractionated RT are similar to those for single doses for all scenarios in Table \ref{tab_rad3}. Typically, the proportion of cells that survive fractionated therapy is larger than for the single-dose case, by a factor of about 100.
Consequently, for scenarios $S1$ and $S3$, the time to return to the equilibrium population distributions is reduced. For S2, while treatment causes a monotonic decrease in the cell density $\phi$, since more cells survive fractionated RT, it takes longer for the cell population to become extinct.
For scenarios $S1$ and $S3$, we recall that for high doses of RT, the phenotypic mean was markedly affected by the specific LQ model parameters considered; this is not the case when lower doses are applied (see Figure \ref{frac_growth}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\textwidth]{figures/radio/Ff2}
\caption{Phenotypic distribution $n^{(S1,R1)}(z,t)$ for the control (light blue), the colony exposed to a single dose (dark blue) and the one treated with fractionated dose $2$ Gy
$\times5$ (green). The orange and yellow lines indicate the phenotypic mean for the single dose (orange) and fractionated (yellow) therapy respectively. Note that the first panel corresponds to the end of the treatment so that $t_{pt}$ is 24 hr and 120 hr for the 10 Gy and fractionated protocol respectively. On the other hand, the remaining panels are measured relative to the beginning of the treatment, which is at $t=24$ hr for both protocols.}
\label{distFd}
\end{figure}
The variability in responses for scenarios S1 and S3 following a single dose of radiotherapy can be attributed to the temporary advantage CSCs have post treatment. When using a fractionated protocol, intra-population competition is maintained at the cost of fewer cells being killed. This is apparent when we compare the phenotypic distribution at different times for the two treatment protocols (see Figure \ref{distFd}). When $10$ Gy is administered in one dose (first panel, dark blue region), the peak of the distribution is at $z=0$. On the other hand, after 5 doses of $2$ Gy per day (first panel, green region), the proportions of differentiated and cancer stem cells are approximately equal.
Given that the former proliferate faster than the latter, the differentiated cells quickly become the dominant phenotype. Consequently, one month after treatment ends (third panel in Figure \ref{distFd}), the proportion of CSCs in the population is the same for both protocols.
We conclude further that the single dose protocol outperforms the fractionated protocol when we compare the total number of cells (the blue curve is below the green one for all values of $z$).
\subsection{Treatment Response in Hypoxic Conditions}
\label{radio_hyp}
\noindent Cell populations that are continuously exposed to hypoxia exhibit instantaneous re-growth following RT, as shown in Figure \ref{rad_hypoxia_growth}. Compared with the treatment outcome under normoxia, a higher percentage of cells survive radiation, because there is a larger proportion of radio-resistant cells in the population under hypoxia. Even though a smaller fraction of cells are killed, re-growth is usually slower under hypoxia than under normoxia.
We note also that, following exposure to the single and fractionated protocols, the phenotypic mean $\mu(t)$ shifts toward $z=0$ under hypoxia, favouring CSCs as the dominant phenotype (see Figure \ref{rad_hypoxia_growth}). The drift in $\mu$ is less pronounced for the fractionated case, suggesting that the latter protocol is less favourable than the single dose for the immediate accumulation of a resistant subpopulation of CSCs.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figures/radio/growthcurves/hypoxianew}
\caption{Comparison of the tumour cell responses to single and fractionated radiotherapy protocols under hypoxia for scenario S4
(see Table \ref{tab_rad3}). Simulation results showing the time evolution of the cell density, $\phi(t)$, and phenotypic mean, $\mu(t)$, are presented. For comparison, the light purple shaded areas in the fractionated plots indicate the variability in the response when a single dose of $10$ Gy is administered. The yellow shaded areas indicate the duration of treatment for the fractionated case.}
\label{rad_hypoxia_growth}
\end{figure}
Taken together, our simulation results suggest that, under hypoxia, RT may accelerate the accumulation of resistant cells, while significantly reducing the overall growth rate of the population.
\subsection{Treatment Response in a Changing Environment}
\label{radio+change}
\noindent Thus far we have assumed that the oxygen concentration remains constant throughout treatment.
While this may accurately describe RT responses for cells cultured \textit{in vitro}, such control is likely to be absent \textit{in vivo} \cite{Arnold2018,Fenton2001,Kempf2015}. There is currently no consensus about the impact of RT on tumour vasculature and, hence, tissue re-oxygenation. On the one hand, high doses of radiotherapy may damage the vasculature~\cite{Hormuth}, and decrease nutrient availability post radiotherapy.
Conversely, moderate RT may transiently increase tissue oxygenation
by \emph{normalising} the tumour vasculature
(vessel normalisation is a phenomenon that has been observed when tumours are exposed to vascular-targeting agents which destroy some of the blood vessels in a way that increases blood flow through the network and, thereby, tissue oxygen levels \cite{Carmeliet2011,Jain2014}).
Moreover, as tumour cells are killed, the pressure on immature vessels, not damaged by the radiation, decreases, and oxygen supply to the surviving cells may increase. Equally, hypoxic regions may form at later times as the tumour regrows. From this point of view, radiotherapy may impact both the phenotypic distribution of the cell population (and, thereby, its radio sensitivity), and oxygen levels post-treatment.
We can use our mathematical model to investigate these scenarios, by assuming that oxygen levels change post radiotherapy.
Based on the results presented in Sections \ref{radio_norm} and \ref{radio_hyp}, we anticipate that reoxygenation of a hypoxic tumour will be beneficial in certain cases, driving CSC maturation, and even leading to tumour eradication. The results presented in Figure \ref{posthypo_2} show that the long-term tumour regression is preceded by an initial phase of regrowth during which CSCs that survive treatment de-differentiate and proliferate. Such a treatment might initially be considered unsuccessful, although
the stability of the trivial steady state upon
re-oxygenation leads to extinction at longer times.
\begin{figure}[h]
\begin{subfigure}{0.4\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/reox2.pdf}
\caption{S5}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/post_hyp}
\caption{}
\label{posthypo_2b}
\end{subfigure}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=0.9\textwidth]{figures/radio/growthcurves/differentreox2.pdf}
\caption{S6}
\label{posthypo_2c}
\end{subfigure}
\caption{Growth curves for changing environmental conditions: (a) re-oxygenation and (b) post-radiation hypoxia for the parameter values S5 and S6 in Table \ref{tab_rad3}, respectively. Different responses to treatment are compared based on parameter values from Table~\ref{rad_tab2}. (c) Growth curve $\phi(t)$ and phenotypic mean $\mu(t)$ evolution for model $R1$ from Table~\ref{rad_tab2}, when exposed to transient post-treatment hypoxia. We denote by $T_R$ the time at which re-oxygenation occurs (indicated by the arrows in the plot). If $T_R$ is sufficiently small then re-oxygenation does not drive re-growth of the cell population. If we waited for a sufficiently long time (as in case $T_R=1000$) then re-oxygenation would first drive regrowth. Areas in blue and pink correspond to intervals of normoxia and hypoxia respectively.}
\label{posthypo_2}
\end{figure}
As mentioned previously, when high radiation doses are applied \textit{in vivo}, it is likely that the vessel network is also damaged, potentially inducing hypoxia \cite{Arnold2018}. Figure \ref{posthypo_2b} shows that
such environmental changes may negatively impact the outcome.
The formation of a hypoxic region favours the development and
maintenance of radioresistant CSCs, reducing the treatment efficacy and making it more difficult to eradicate the tumour. At the same time, environmental changes may be transient: damaged blood vessels are likely to be replaced by new vessels which form via angiogenesis and re-oxygenate
the damaged regions. As shown in Figure \ref{posthypo_2c}, depending on the time-scale required for vessel regrowth (indicated by $T_R$), different behaviours may arise. If the duration of RT-induced periods of hypoxia is sufficiently short, then the size of the cell population remains low.
By contrast, if there is sufficient time for cells to de-differentiate (see $T_R=1000$), then re-oxygenation leads to a rapid increase in cell number, although
eventually the cells die out. These results highlight the complex interplay between tumour growth and treatment response \textit{in vivo} and the importance of environmental factors in
determining the eventual outcome of radiotherapy treatment.
\subsection{Structural Flux}
\label{vz_sec}
Plasticity is an essential feature of phenotypic adaptation to changing environmental conditions~\cite{dirkse,Pisco2015}. It assumes that cells with the same genome can acquire distinct phenotypes depending on their epigenetic status, which is also inheritable. Phenotypic variation may be mediated by random (spontaneous) \textit{epigenetic mutations} \cite{Lorenzi2016}, which we assume to be rare. We account for this effect by including in the structural flux a diffusion term with a constant diffusion coefficient $\theta=5\times 10^{-6}$ hr$^{-1}$ (see Equation~(\ref{full1})). Such random mutations should not favour any specific phenotype, and \textit{Darwinian} selection (i.e. the fitness function $F$) drives phenotypic evolution of the population. This aspect has been widely studied in previous work in order to investigate how cells adapt to different environments \cite{Ardaseva2019,Lorenzi2016,Villa2019}. At the same time, there is evidence that phenotypic switching may be mediated by environmental factors via \textit{Lamarckian} selection (or induction) \cite{Pisco2015}. In this framework, cells adapt to their environment~\cite{dirkse,schaider} by following a preferential (\emph{biased}) trajectory in phenotypic space. We can, therefore, envisage situations in which a subpopulation may be prevalent in a population without being the fittest subpopulation (i.e. the population with the highest proliferation rate).
For example, recent studies have identified cell de-differentiation and CSC maintenance as stress responses to harsh environmental conditions \cite{Pisco2015}, including hypoxia.
More specifically, cells respond to hypoxic stress by up-regulating Hypoxia Inducible Factors (HIFs) which, in turn, promote the expression of stem-related genes~\cite{Garnier2019,Liu2014,Pistollato2010,Pistollato2009}. HIF suppression has also been linked to cell differentiation and reduced levels of stemness~\cite{Shiraishi2017}.
We account for such micro-environment mediated adaptation by incorporating an advective term in the structural flux. Cells
are assumed to evolve along the stemness axis with a velocity $v_z=v_z(z,c)$, that depends on the oxygen concentration $c$ and cell phenotype $z$. Under normoxia, cells tend to differentiate, and $v_z>0$. From this point of view, the model is similar to classical age-structured models \cite{Perthame2007,Webb2008}, with $v_z$ being analogous to a \emph{maturation} velocity. In our model, however, \emph{ageing} (i.e. differentiation or loss of clonogenic potential \cite{Scott169615}) may be reversible. For example, under hypoxia (i.e. $c \leq c_H$), we assume $v_z<0$ (see Figure \ref{vel_prof}) and a more stem-like character is promoted.
\begin{figure}[h!]
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/normxi0_05}
\caption{ $\xi_+=0.05$}
\label{vel_norm1}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/normxi0_5}
\caption{ $\xi_+=0.5$}
\label{vel_norm2}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/model/velocity_prof/hypxi0_5}
\caption{$\xi_-=0.5$}
\end{subfigure}
\caption{Series of sketches showing how $v_z^+$ and $v_z^-$, as defined by Equations~(\ref{eq_ad_norm}) and~(\ref{eq_ad_hyp}) respectively, change as
the parameters $\xi_\pm$ and $\omega_\pm$ vary.}
\label{vel_prof}
\end{figure}
Combining the above observations, and motivated in part by recent, similar considerations~\cite{hodgkinson}, we propose
the following functional forms for the phenotypic drift term, $v_z$:
\begin{subequations}
\begin{align}
v_z(z;c)=v^+_z(z) H_{\epsilon}(c-c_H)-v^-_z(z)H_{\epsilon}(c_H-c),\\[3pt]
v^+_z(z)= \myfrac[2pt]{V_+}{V^*_+} \tanh\left(\myfrac[3pt]{z^{\omega_+}}{\xi_+}\right)\tanh\left(\myfrac[2pt]{(1-z)}{\xi_+}\right),\label{eq_ad_norm}\\[3pt]
v^-_z(z)=\myfrac[2pt]{V_-}{V^*_-} \tanh\left(\myfrac[2pt]{z}{\xi_-}\right)\tanh\left(\myfrac[3pt]{(1-z)^{\omega_-}}{\xi_-}\right).\label{eq_ad_hyp}
\end{align}\label{eq_ad}
\end{subequations}
Here, $H_\epsilon$ is a smooth variant of the Heaviside function, approaching the latter in the limit $\epsilon \rightarrow 0$ (i.e., $H_\epsilon(x)= {(1+\tanh(\epsilon^{-1}x))}/{2}$). In Equations~(\ref{eq_ad}), the normalising factors $V_\pm^*$ ensure that $\left(\max_z v^\pm_z\right)/V_\pm=1$ and $V_\pm\,(hr^{-1})$ corresponds to the magnitude of the velocity. Further, by controlling
the advection speed along the stemness axis, $V^{-1}_\pm$ determines the timescales for maturation and de-differentiation.
The parameters $\xi_\pm$ regulate the slopes of $v_z$ at the boundaries $z=0,1$.
As shown in Figure \ref{vel_norm1}, when $\xi_\pm \ll 1$, the advection velocity is steep when $z\sim 0,1$ and flatter elsewhere.
This functional form is similar to that proposed in~\cite{hodgkinson}.
For larger values of $\xi_\pm$, the variation is more gradual, with a single maximum (or minimum)
near $z\sim0.5$ (see Figure \ref{vel_norm2}). The exponents $\omega_\pm$ allow us to tune the symmetry/asymmetry in $v_z$ and also to modulate the flux at the boundaries (see Figure \ref{vel_prof}). For example, if $\omega_+=2$, then $v(0)=\partial_z v(0)=0$ which means that CSCs will be less likely to differentiate compared to the case $\omega_+=1$.
In the absence of experimental data with which to specify the parameters in the phenotypic drift velocity, we consider combinations of the following parameter sets:
\begin{itemize}
\item $V_\pm \in \left\{2,4,8\right\} \times 10^{-4} \left[hr^{-1}\right]$, \item $\xi_\pm\in\left\{0.05,0.1,0.5\right\}$, and
\item $\omega_\pm\in\left\{1,2\right\}$.
\end{itemize}
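For concreteness, the normoxic profile $v_z^+$ of Equation~(\ref{eq_ad_norm}) can be evaluated numerically, with the normalising factor $V_+^*$ computed on a grid so that $\max_z v_z^+ = V_+$; the parameter values below are one combination from the sets listed above:

```python
import numpy as np

def v_plus(z, V=4e-4, xi=0.05, omega=2):
    """Normoxic drift velocity v_z^+ of Equation (eq_ad_norm), vectorised in z.

    V [hr^-1] sets the magnitude; the normalising factor V* is computed
    numerically on a fine grid so that max_z v_plus = V, as stated in the text.
    """
    z_grid = np.linspace(0.0, 1.0, 2001)
    v_star = np.max(np.tanh(z_grid**omega / xi) * np.tanh((1.0 - z_grid) / xi))
    return (V / v_star) * np.tanh(z**omega / xi) * np.tanh((1.0 - z) / xi)

z = np.linspace(0.0, 1.0, 2001)
v = v_plus(z)

assert np.isclose(v.max(), 4e-4)   # normalisation: max_z v_z^+ = V_+
assert v[0] == 0.0 and v[-1] == 0.0  # no phenotypic flux through z = 0, 1
```

With $\omega_+=2$ the profile is additionally flat near $z=0$, which is the property that allows CSCs to persist under normoxia in the simulations above.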
In summary, our phenotype-structured model for the growth and response to radiotherapy of a solid tumour is defined by Equations~(\ref{mixedmodel})-(\ref{eq_ad}). A list of the model parameters and estimates of their values can be found in Table~\ref{param_set} in~\ref{AppendixA}.
\section{Introduction}
Self-organization is an emerging field in biophysics with important implications in tissue engineering, biomechanics, and regenerative medicine. Embryo growth and muscle structure result from such self-organization processes. We develop biophysical models to describe this non-equilibrium process that leads to an ordered biomaterial. Our approach provides deeper understanding of how tissue structure arises from the individual cells as they proliferate, change morphology, and increase in density.
Histology samples and time-lapse imaging indicate that certain cells (\textit{nematoid} cells) tend to align their major axes with those of neighboring cells. The direction of the orientation vector for each cell (which lies along the major axis of an ellipse fit to the cell contour) is usually determined by tensile mechanical force dipole interactions along cell cytoskeletons \cite{gjorevski2015dynamic,ramaswamy2010activematter}. The interactions among the actin fibers and cadherin anchoring junctions of the cells are responsible for the cell intercalation, adhesion, and axially-aligned migration that describe the motion and behavior of a large variety of cell types including fibroblasts, smooth muscle cells (SMCs), and osteoblasts \cite{gruler1999nematic,gjorevski2015dynamic}. The biomechanical organization of cells in a petri dish resembles 2D liquid crystals at a quasi-steady state, and has been previously studied using a Langevin dynamics model \cite{kemkemer2000elastic} and pairwise elastic interaction theories applied to microtubule ensembles \cite{decamp2015orientational}, mouse fibroblast monolayers \cite{duclos2014perfect}, and bioengineered nematic vesicles \cite{keber2014topology}.
We extend previous theoretical and experiment-based studies employing elastic continuum theory \cite{gruler1999nematic,kemkemer2000elastic} and active nematics spin glass \cite{bischofs2005effect,duclos2014perfect,decamp2015orientational,keber2014topology} to understand how order emerges in such non-equilibrium systems. Our model predicts, and our time-lapse experiments confirm, vortex structures in cell monolayers which evolve and interact amongst each other during the phase transition from a disordered to ordered state, typical of active nematics \cite{menzel2014active,ramaswamy2010activematter,shi2013topological}.
To extend existing active nematics studies, we develop a nematic cell alignment theory (NCAT) which relates cell elongation to cell alignment at maximal cell packing density, at which point the active nematic reaches a quasi-steady state. Incorporating the analogy between nematoid cell arrangements \textit{in vitro} and 2D nematic crystals, we model the cells as mobile, interacting ellipses with coupling interactions that increase as they proliferate to higher densities. Specifically, this increased probability of interaction among cells leads to higher alignment correlation. This fundamental self-organization phenomenon converges to statistically distinct final energies at high density for different cell phenotypes due to variations in cytoskeletal interactions that affect cell elongation. In this paper, we show how NCAT combines statistical mechanics and a hard ellipse model in a new way to relate cell shape to cell-to-cell interactions and ultimately explain \textit{in vitro} cell monolayer self-organization.
\section{Theory} \label{morphotypes}
\subsection{Nematic Cell Alignment Theory}
We propose a nematic cell alignment theory (NCAT) that describes the evolution of cell alignment via pairwise cell-cell interactions, which vary with cell density and cell elongation.
We represent each cell as an ellipse whose orientation $ \theta_i \in [0,\pi) $ is the direction of its major axis. The anisotropic interaction term between two cells $ i $ and $ j $ is proportional to $ \cos(2(\theta_i- \theta_j)) $ and acts primarily between neighboring cells \cite{schwarz2002elastic}. To simplify our model, we assume that cell migration is slow enough that the cell interactions are minimally affected by migration trajectories, an assumption made in previous literature for spin lattice studies \cite{toner1998flocks}. Finally, the model does not include any other repulsive or attractive potentials among the cells, instead assuming that cell-cell interactions are sufficiently absorbed in the NCAT Hamiltonian.
Our simplified NCAT Hamiltonian is the Lebwohl-Lasher model \cite{lebwohl1972nematic}:
\begin{align} \label{eq:hamiltonian}
\mathcal{H} &= -A \sum_{\mean{i,j}} \cos 2(\theta(\mathbf{r}_i) - \theta(\mathbf{r}_j))
\end{align}
where $ A $ is the coupling term influenced by cell alignment interaction strength (which correlates with cell eccentricity) and cell density. $ \mathbf{r}_i $ represents the position of cell $ i $. $ \mean{i,j} $ is the indicator for a valid cell neighbor pair. $ \theta(\mathbf{r}) $ is the cell's major axis orientation at position $ \mathbf{r} $.
This Hamiltonian employs the same order parameter as that used in the literature \cite{gruler1999nematic}, $ \cos 2 \theta $, which accounts for the degeneracy between $ \theta $ and $ \theta + \pi $ orientation. A key difference is that Equation \ref{eq:hamiltonian} employs a pairwise (bond) energy rather than a per-cell energy very much in the spirit of \cite{friedrich2011nematic}, and this avoids the pitfall of defining an ad hoc director $ \mathbf{n} $ for the cell monolayer (i.e. a set of predefined angles for all cells to align to).
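As a concrete illustration, the bond sum in the Hamiltonian can be evaluated directly; the following minimal Python sketch uses a hypothetical toy lattice, pair list, and value of $ A $ of our choosing:

```python
import math

def bond_energy(theta_i, theta_j, A=1.0):
    """Pairwise NCAT bond energy, -A cos 2(theta_i - theta_j)."""
    return -A * math.cos(2.0 * (theta_i - theta_j))

def total_energy(thetas, neighbor_pairs, A=1.0):
    """Sum the Hamiltonian over the list of neighbor pairs <i,j>."""
    return sum(bond_energy(thetas[i], thetas[j], A) for i, j in neighbor_pairs)

# Toy 2x2 patch of cells with a hypothetical neighbor pair list.
thetas = [0.0, math.pi / 4, math.pi / 2, 0.0]
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]
E = total_energy(thetas, pairs, A=1.0)  # perpendicular bonds cost +A each
```

Note that the energy is invariant under $ \theta \to \theta + \pi $, reflecting the nematic (head-tail) degeneracy of the order parameter.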
The partition function for the cell orientation is defined as (letting $ 1/kT = 1 $):
\begin{align}
Z &= \sum_{\Theta} \exp\left(A \sum_{\mean{i,j}} \cos 2(\theta(\mathbf{r}_i) - \theta(\mathbf{r}_j))\right)
\end{align}
We define the cell locations as $ \{\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N\} $ and $ \Theta $ as the set of valid microstates comprised of the cell orientations at the lattice points $ \theta(\mathbf{r}_i) $.
In NCAT, we model cells as a network ensemble of \textit{interacting hard ellipses} with eccentricity $ \epsilon $ and cell density $ \rho $, both measurable experimental parameters that determine the magnitude of $ A $. We assume that all neighboring cells may contact each other at any point along their perimeters with equal probability. We calculate the energy $ \hat u(\epsilon) $ for two interacting ellipses by averaging $\cos 2(\theta - \theta')$ over all possible tangent cell contact points (see Figure \ref{fig:disclination_exp}(c)). Next, we calculate the same bond energy $ u(A) $ for two cells using $ \mathcal{H} $ from Eq. \ref{eq:hamiltonian} (ignoring the summation). The resulting expressions for $ \hat u $ and $ u $ (derived in Appendix C of Supplemental Material) are:
\begin{align}
\hat u(\epsilon) = \left(\frac{\sqrt{1 - \epsilon^2} - 1}{\sqrt{1 - \epsilon^2} + 1}\right)^2 & & u(A) = \frac{I_1(A)}{I_0(A)}
\end{align}
where $ I_n $ is the modified Bessel function of the first kind. We numerically evaluate $ A(\epsilon) = u^{-1}(\hat u (\epsilon)) $ to determine the coupling parameter $ A $ in terms of the average eccentricity of the cells in the time-lapse experiments for comparison with simulation results.
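One way to carry out this inversion numerically, assuming nothing beyond the two expressions above (the function names and the quadrature/bisection choices are ours), is:

```python
import math

def u_hat(eps):
    """Two-ellipse mean energy u-hat(eps) from the hard-ellipse average."""
    s = math.sqrt(1.0 - eps * eps)
    return ((s - 1.0) / (s + 1.0)) ** 2

def u_of_A(A, n=2000):
    """u(A) = I1(A)/I0(A), via trapezoidal quadrature of the integral
    representations I_k(A) = (1/pi) * int_0^pi exp(A cos t) cos(kt) dt."""
    h = math.pi / n
    num = den = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        e = math.exp(A * math.cos(t))
        num += w * e * math.cos(t)
        den += w * e
    return num / den

def A_of_eps(eps, lo=0.0, hi=50.0, tol=1e-10):
    """Invert u(A) = u_hat(eps) by bisection (u is monotonic in A)."""
    target = u_hat(eps)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if u_of_A(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Since $ u(A) $ increases monotonically with $ A $, the bisection is guaranteed to converge.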
\subsection{Simulation Design and Results}
Using Monte Carlo simulation, we model critical behavior allowing cells to transition from an unordered, isotropic phase to an ordered, nematic phase as they proliferate and increase their contact area. We assume that the cells are self-propelled particles that reach a linearly stable configuration as predicted by \cite{menzel2014active}. The primary purpose of the simulation is not to capture dynamics, but rather to run to quasi-steady state (at confluent density) over a range of $ A $ and plot the average energy per lattice bond while varying the cell eccentricity. Our simulation uses 1024 lattice sites with periodic boundary conditions and starts with randomly aligned cells (Figure \ref{simul_demo}(a)); at quasi-steady state it reproduces localized vortices typical for apolar nematic media (Figure \ref{simul_demo}(b)).
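A stripped-down version of such a Metropolis simulation is sketched below; for brevity it uses 4 nearest neighbors on a small fixed periodic square lattice rather than the 5-nearest-neighbor rule used for the figures, and all names and parameter values are illustrative:

```python
import math, random

def metropolis_ncat(L=16, A=1.0, sweeps=500, seed=0):
    """Minimal Metropolis sampler for an NCAT-like lattice: LxL periodic
    grid, 4 nearest neighbors, bond energy -A cos 2(theta_i - theta_j),
    kT = 1. Returns the average energy per bond."""
    rng = random.Random(seed)
    theta = [[rng.uniform(0, math.pi) for _ in range(L)] for _ in range(L)]

    def local_E(i, j, t):
        s = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s += -A * math.cos(2 * (t - theta[(i + di) % L][(j + dj) % L]))
        return s

    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            new = rng.uniform(0, math.pi)
            dE = local_E(i, j, new) - local_E(i, j, theta[i][j])
            if dE <= 0 or rng.random() < math.exp(-dE):
                theta[i][j] = new

    # Average bond energy, counting each bond once (right and down links).
    E = 0.0
    for i in range(L):
        for j in range(L):
            E += -A * math.cos(2 * (theta[i][j] - theta[(i + 1) % L][j]))
            E += -A * math.cos(2 * (theta[i][j] - theta[i][(j + 1) % L]))
    return E / (2 * L * L)
```

Scanning $ A $ with this routine gives the average bond energy as a function of coupling, the quantity plotted in the figures.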
\begin{figure}
\includegraphics[width=0.45\textwidth]{pictures/sim_demo_examples_labelled.png}
\includegraphics[width=0.45\textwidth]{pictures/sim_demo_plts_labelled.png}
\caption{\label{simul_demo} Monte Carlo simulation of the NCAT model in the nematic, quasi-steady-state regime, showing lattice orientations and correlation-versus-distance plots at isotropic, low $ A $ (a,c) and at nematic, high $ A $ (b,d). Note the $ m = \pm 1/2 $ disclinations in (b). A continuous phase transition occurs when varying $ A $, as seen in (e) the average bond energy and (f) the time-averaged heat capacity ($ C \propto A^2 (\mean{E^2} - \mean{E}^2) $) at quasi-steady state. Both plots are smoothed with a low-pass filter and correspond to simulations with a 5-nearest-neighbor connection rule on fixed lattice points.}
\end{figure}
The phase transition in Figure \ref{simul_demo} (e,f) is similar in nature to the Kosterlitz-Thouless (KT) transition observed for the $ XY $ lattice model \cite{kemkemer2000elastic,chate2006simple}. This transition is manifested by the spin-spin correlation dependence, which shifts from exponential to power law \cite{kosterlitz1974critical,chate2006simple} in Figure \ref{simul_demo}(c,d), and by the peak of the time-averaged heat capacity in Figure \ref{simul_demo}(f). This approach to long range order may be observed both experimentally in cell culture and theoretically in simulation. At the critical $ A $, vortices stop being generated and the gyrating structures typical of nematic cell cultures begin to appear.
To expose the $ m = \pm 1/2 $ vortex structures typically observed in elastic continuum theory (see Appendix B of Supplemental Material) \cite{kemkemer2000elastic}, we include a velocity term which allows the lattice sites to move. These vortex patterns, observed experimentally and in simulation (see Figure \ref{fig:disclination_exp}), demonstrate that while cell monolayers typically do not reach equilibrium, the macrostate energy with the vortices is close to a non-equilibrium steady state. Note that an $ m = +1/2 $ vortex usually interacts with an $ m = -1/2 $ vortex in both simulation (when lattice sites are allowed to move along their orientations) and experiment. This interaction can be seen in simulation in Figure \ref{simul_demo}(b) and \textit{in vitro} in Figure \ref{fig:energies_comp}(b).
\begin{figure}
\begin{center}
\includegraphics[width=0.23\textwidth]{pictures/exp_disclination_labelled}
\includegraphics[width=0.215\textwidth]{pictures/disclination_cropped_labelled}\\
\includegraphics[width=0.48\textwidth]{pictures/rollingellipse.png}
\end{center}
\caption{The presence of $ m = 1/2 $ disclinations in experiment (fibroblast/SMC mixture) (a) and in simulation (b). (c) Hard ellipse interaction as it relates to alignment differences between the major axes. Note the role eccentricity plays in the alignment. See appendix C of Supplemental Material for derivations.}
\label{fig:disclination_exp}
\end{figure}
\section{Materials and Methods}
In the \textit{in vitro} experiments, cells were plated at approximately equal density and under \textit{highly controlled incubation conditions} on an incubated 12-well culture system (see Appendix A of Supplemental Material). We studied mixtures of induced pluripotent stem cell (iPSC), smooth muscle cell (SMC), and Huf3 fibroblast (FB) populations, which are important cell populations involved in the differentiation of stem cells to smooth muscle cells. Varying the mixtures allows us to correlate the NCAT model parameters with the different cell phenotypes that may appear during differentiation, and to show that there are significant differences in the long range order of cell alignment, which may help clarify the biomechanics involved in the differentiation process.
To image the cells, we developed an automated quantitative phase time-lapse microscope to quantify the dry mass profile in the image (which increases the index of refraction relative to water) \cite{popescu2006diffraction}. The images were taken every 10 minutes over the course of 20-30 hours. We then processed the images to measure the eccentricity and orientation of each cell over the course of the experiment.
We find the cell locations $\mathbf{r_i}$ using local regional maxima. Average eccentricity data was computed by using automatic cell segmentation and contour analysis techniques. Finally, we employed \textit{histogram of oriented gradients} (HOG) \cite{dalal2005histograms} to compute the cell orientation direction at each cell location. All image post-processing was performed using the standard \texttt{skimage} implementation in Python. All Monte Carlo simulations were run in Python on 16-core computers to parallelize the simulation for different values of $ A $.
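To illustrate the orientation step (our pipeline used the standard \texttt{skimage} HOG implementation; the toy function below is ours), a gradient-orientation histogram can be built in plain Python. Gradient directions are folded into $ [0,\pi) $, matching the nematic degeneracy; note that the dominant gradient axis is perpendicular to the stripe or cell axis:

```python
import math

def dominant_orientation(img, bins=18):
    """Toy histogram-of-oriented-gradients estimate of the dominant gradient
    direction in a 2D intensity array (list of lists). Gradients by central
    differences; angles folded into [0, pi) as for a nematic axis. The
    texture (cell) axis is this angle plus pi/2."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i][j + 1] - img[i][j - 1]) / 2.0
            gy = (img[i + 1][j] - img[i - 1][j]) / 2.0
            mag = math.hypot(gx, gy)
            if mag == 0.0:
                continue
            ang = math.atan2(gy, gx) % math.pi   # axis, not direction
            hist[int(ang / math.pi * bins) % bins] += mag
    k = max(range(bins), key=lambda b: hist[b])
    return (k + 0.5) * math.pi / bins            # bin-center angle
```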
\section{Results and Discussion}
We find distinguishable alignment patterns in timelapse images that result from variations in cell eccentricity for different phenotypes. Figure \ref{fig:energies_comp} shows the qualitative differences in our analysis of fibroblasts and pluripotent cells.
To verify that the distribution of alignment energies in our \textit{in vitro} experiments can be compared to the NCAT simulation, we first analyze the energy histogram on a per-cell basis. A comparison of simulated and experimental histograms for SMC/iPSC cell combinations are shown in Figure \ref{fig:energies_comp} (c) and (d) respectively.
\begin{figure}
\centering
\includegraphics[width=0.22\textwidth]{pictures/ipsc_demo_labelled.png}
\includegraphics[width=0.22\textwidth]{pictures/fbsmc_demo_labelled.png}\\
\includegraphics[width=0.23\textwidth]{pictures/energy_dist_simul_labelled.png}
\includegraphics[width=0.23\textwidth]{pictures/smcipsc_dists_labelled.png}
\caption{\label{angular_dist} There are differences in alignment between dermal iPSC (a) and FB (b), shown using time-lapse images overlaid with cell orientation directions. We also analyze average cell energy (average correlation of cells with neighbors) using simulation in the range $ A = 0.05 $ (violet) to $ A = 0.7 $ (red) in (c). An experimental histogram of average energy for mixtures of SMC/iPSC cells at quasi-steady state in (d) shows a distribution similar to simulation.}
\label{fig:energies_comp}
\end{figure}
The NCAT analysis allows us to distinguish differences in mixtures of binary cell populations and agrees with our simulation results.
Figure \ref{fig:analyzed_smc} (c) and (d) shows the experimental transition from uncorrelated to correlated cell orientations for nematoid cells, which compares well with our simulation results in Figure \ref{simul_demo} (c) and (d).\footnote{The Supplemental Material contains time-lapse videos of FB, SMC, and iPSC prepared using the procedure in Materials and Methods. Also plotted is the total angular correlation of the cells as a function of distance, calculated for each video frame. As the FB and SMC cells become more confluent, the angular correlation function changes from an exponential to a linear dependence on distance; the iPSC cells, however, maintain their original exponential dependence on cell-to-cell distance even at high densities.}
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth]{pictures/fbsmc_demo_ld_labelled}
\includegraphics[width=0.24\textwidth]{pictures/fbsmc_demo_hd_labelled}\\
\includegraphics[width=0.24\textwidth]{pictures/ld_corr_labelled}%
\includegraphics[width=0.24\textwidth]{pictures/hd_corr_labelled}%
\caption{\label{fig:analyzed_smc} Nematic ordering in a proliferating smooth muscle cell-fibroblast mixture from (a) unordered, low-density to (b) ordered, high density with corresponding spin-spin correlation function behavior in (c) and (d) respectively. Note the evolution from exponential to power law behavior (in this case, linear due to presence of vortices).}
\end{figure}
Using energy and correlation data from experiment and simulation, we can approximate the critical eccentricity and density at which nematic ordering occurs for nematoid cell mixtures.
From Figure \ref{fig:phasethryvsexp} (a), we note that smooth muscle cells undergo a phase transition at lower densities than the fibroblasts, but reach a higher quasi-steady state energy.
To determine the density at which a phase transition occurs, we track the transition from exponential to power law dependence (shown in Figure \ref{fig:phasethryvsexp} (b)) by plotting the square of the Pearson coefficient $ R^2 $ against the correlation data as a function of density.
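This exponential-versus-power-law discrimination can be sketched by comparing the $ R^2 $ of least-squares fits on semi-log and log-log axes (a simple stand-in for our analysis pipeline; the function names are ours):

```python
import math

def pearson_r2(x, y):
    """Square of the Pearson correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def correlation_form(r, c):
    """Compare R^2 of an exponential fit (log c linear in r) against a
    power-law fit (log c linear in log r); return the better form."""
    logc = [math.log(v) for v in c]
    r2_exp = pearson_r2(r, logc)
    r2_pow = pearson_r2([math.log(v) for v in r], logc)
    return ("power", r2_pow) if r2_pow > r2_exp else ("exponential", r2_exp)
```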
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{pictures/fbsmc_energyvsdensity_labelled.png}\\
\includegraphics[width=0.48\textwidth]{pictures/fbsmc_phasetransition_labelled.png}\\
\includegraphics[width=0.48\textwidth]{pictures/confluent_energyvscoupling_all_labelled.png}
\caption{(a) Comparison of energies as a function of cell density $ \rho $ for FB/SMC mixtures compared to iPSC, where $ \rho = 1 $ is max density. (b) Phase transition from the same experiment shows an approach to linear correlation behavior (quasi-steady state configuration) with Pearson coefficient $ R^2 $. (c) A comparison of energies at confluency (red points) using $ A(\epsilon) $ (where $ \epsilon $ is the average cell eccentricity in the time-lapse image) and energy predictions from simulation (blue curve). Note: For (a) and (b), solid lines are fits to sigmoid curves to demonstrate overall trends, and do not have mathematical significance.}
\label{fig:phasethryvsexp}
\end{figure}
The dependence of energy versus coupling constant $ A(\epsilon) $ in the confluent density regime is shown in Figure \ref{fig:phasethryvsexp} (c). We plot the average energy over the last 30 frames of time-lapse data for all datasets used in this study (SMC/iPSC and 2 independent FB/SMC experiments), shown in red in Figure \ref{fig:phasethryvsexp} (c). We then ran our Monte Carlo simulations to quasi-steady state and computed the average energy for a range of the coupling parameter $A$, shown as the solid blue curve in Figure \ref{fig:phasethryvsexp} (c); the NCAT simulation and experiment agree.
\section{Conclusions}
Based on energy distribution and energy dynamics signatures over the course of a multiwell time-lapse imaging experiment, we observe several trends consistent with NCAT theory and with recent observations of biological active nematic systems \cite{bischofs2005effect,duclos2014perfect,decamp2015orientational,keber2014topology}. First, as nematic cells proliferate from mid to high density, there is a KT-like phase transition predicted by NCAT and nonequilibrium studies of proliferating active crystals \cite{chate2006simple}. Second, cell cultures typically reach nematic, quasi-steady state configurations with the appearance of $ \pm 1/2 $ vortices, which agrees with our simulations. Finally, mixed cell populations have characteristic quasi-steady state configurations at confluent density that fit to simulation predictions of average energy vs $ A(\epsilon) $.
Our findings introduce eccentricity as a key variable that differentiates the nematic ordering process of fibroblasts, smooth muscle cells, and pluripotent stem cells and correlates with the order parameter. In future investigations, we plan to study the biological basis of cell elongation at confluency and its role in determining $ A $ to further explore the observed correlation between eccentricity and nematic ordering.
Measuring the phenotypic concentration of different cell types in stem cell derived transplants is an important quality control method. This work may provide non-invasive quality assurance measures to correlate model parameters with clinical outcomes for pluripotent stem cell-derived smooth muscle cell-based transplant therapies, where it is important to consider the biomechanical integrity, macro-structure, and phenotypic make-up of cultured cells.
\begin{acknowledgments}
The contributions for this work are as follows: SP developed imaging software, image post-processing, NCAT theory formulation with comparison with experiment, and wrote the paper. TB developed imaging software, oversaw the work, and edited the paper. NL developed image processing routines for quantitative phase imaging. CC optimized the optics for imaging platform. MG and EC performed all biological preparations and time-lapse experiments on imaging platform. BC oversaw the biological preparations and edited the paper.
\end{acknowledgments}
\section{Multiwell Experiments}
For multiwell experiments, we used 1:100 diluted Matrigel in DMEM culture medium. The different cell samples were plated at mid-density (about 50\% confluency, 50000 cells per well). Timelapse images were taken at 10-minute intervals over a period of 3 days. The cells were maintained at standard incubation conditions with a supply of 5\% CO${}_2 $ gas mixture at 37$ {}^\circ $C.
\section{Vortices} \label{App:AppendixB}
The amorphous lattice vortices $m = \pm 1/2 $ can be derived as solutions to the Euler Lagrange equation for the local orientation director $ \mathbf{n}(\mathbf{r}) = (\cos\Phi(x,y),\sin\Phi(x,y)) $, where $ \Phi(x,y) $ is the angle for the local direction in the elastic continuum. In cylindrical coordinates, the equation and solution for $ \Phi(x,y) $ are \cite{kemkemer2000elastic}:
\begin{align}
0 &= \frac{\partial^2 \Phi}{\partial r^2} + \frac{1}{r}\frac{\partial \Phi}{\partial r} + \frac{1}{r^2}\frac{\partial^2\Phi}{\partial\phi^2}\\
\Phi &= m\phi + \Phi_0
\end{align}
We observe the vortices for $m = \pm 1/2 $ in both cell culture and simulations.
\section{Eccentricity and Coupling Parameter $ A $} \label{App:AppendixC}
We now formalize the relationship between cell eccentricity and the coupling parameter $ A(\epsilon) $ by matching the mean energy of a simplified two-body hard ellipse interaction to that given by the Boltzmann statistics of the NCAT model. Using the two-body interaction keeps the computation simple: for the purpose of approximating $ A(\epsilon) $, we treat the interactions among the hard ellipses in a lattice network as a sum of two-body interactions.
The hard ellipse problem consists of two ellipses that are tangent at uniformly random points along their perimeter. We define the polar angle with respect to the minor axis of the ellipses as $ \phi $ and $ \phi' $. By aligning the tangent line slope along the $ y $-axis of the frame of reference, we can calculate the angle of the orientation vector with respect to the horizontal using trigonometry: $ \theta(\phi;\epsilon) = \tan^{-1}(\sqrt{1 - \epsilon^2} \cot \phi) $. Subsequently, we use the difference in orientation angles to find the energy $ \hat u(\epsilon) $ for the two-cell interaction as a function of the eccentricity $ \epsilon $.
\begin{align}
f(\phi,\phi';\epsilon) &= \cos(2\theta(\phi;\epsilon)-2\theta(\phi';\epsilon))\\
\hat u(\epsilon) &= \langle f(\phi,\phi';\epsilon)\rangle\\
&= \frac{1}{\pi^2}\int_0^\pi \int_0^\pi f(\phi,\phi';\epsilon) d\phi d \phi' \\
&= \left(\frac{\sqrt{1 - \epsilon^2} - 1}{\sqrt{1 - \epsilon^2} + 1}\right)^2
\end{align}
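The closed form can be checked against the defining double integral by direct quadrature; a midpoint rule conveniently avoids the endpoints $ \phi = 0, \pi $ where $ \cot\phi $ diverges (the function names are ours):

```python
import math

def theta_of_phi(phi, eps):
    """Major-axis angle theta(phi; eps) = atan(sqrt(1 - eps^2) * cot(phi))
    when the contact tangent is aligned with the y-axis."""
    return math.atan(math.sqrt(1.0 - eps * eps) / math.tan(phi))

def u_hat_numeric(eps, n=400):
    """Midpoint-rule estimate of (1/pi^2) * int int cos 2(theta - theta')."""
    h = math.pi / n
    phis = [(k + 0.5) * h for k in range(n)]
    thetas = [theta_of_phi(p, eps) for p in phis]
    total = sum(math.cos(2 * (t1 - t2)) for t1 in thetas for t2 in thetas)
    return total * h * h / math.pi ** 2

def u_hat_closed(eps):
    """Closed-form result derived in the text."""
    s = math.sqrt(1.0 - eps * eps)
    return ((s - 1.0) / (s + 1.0)) ** 2
```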
To find how this relates to $ A $, we compute the mean energy again, but this time using the Boltzmann distribution from the NCAT Hamiltonian:
\begin{align}
Z(A) &= \int_0^\pi \int_0^\pi \exp(A\cos2(\theta - \theta')) d\theta d\theta' = \pi^2 I_0(A)\\
u(A) &= \frac{\int_0^\pi \int_0^\pi \exp(A\cos 2(\theta - \theta'))\cos 2(\theta - \theta') d\theta d\theta'}{Z(A)}\\
&= \frac{I_1(A)}{I_0(A)}
\end{align}
Setting $ u(A) = \hat u(\epsilon) $ leads to a first order approximation for $ A $ in terms of $ u(\epsilon) $ based solely on the shape of the interacting cells. Since both functions are monotonically increasing, we get $ A \approx (u^{-1} \circ \hat u)(\epsilon) $ and this numerical problem has been solved using piecewise Taylor approximation methods and continued fraction methods. We use the expression from \cite{hill1981evaluation}, evaluated at three separate ranges of $ \hat u(\epsilon) $:
\begin{align}
\alpha(\epsilon) &= \frac{2}{1 - \hat u (\epsilon)}\\
\beta(\epsilon) &= \frac{32}{\alpha(\epsilon) - 131.5 + 120 \hat u (\epsilon)}\\
\gamma(\epsilon) &= 2001 + 4317 \hat u(\epsilon) - 2326 \hat u(\epsilon)^2\\
A &= (u^{-1} \circ \hat u)(\epsilon) \\&\approx
\begin{cases}
\frac{2\hat u - \hat u^3 - \frac{\hat u^5}{6} - \frac{\hat u^7}{24}+\frac{\hat u^9}{360}+\frac{53\hat u^{11}}{2160}}{(1 - \hat u^2)^{-1}} & \hat u(\epsilon) < 0.85 \\
\frac{1}{4}\left(\alpha + 1 + \frac{3}{\alpha - 5 - \frac{12}{\alpha - 10 - \gamma}}\right) & \hat u(\epsilon) >0.95 \\
\frac{1}{4}\left(\alpha + 1 + \frac{3}{\alpha - 5 - \frac{12}{\alpha - 10 - \beta}}\right) & \mathrm{otherwise}
\end{cases}
\end{align}
Note that the range of the theoretical \textit{average} energy functions $ u $ and $ \hat u $ is $ [0,1] $. This approximation is accurate to three decimal places. Finally, we can obtain an exact analytical expression for $ \epsilon $ in terms of $ A $: $ \epsilon(A) = (\hat u^{-1} \circ u)(A) = 2 \frac{\sqrt{\sqrt{u}(1 + u)-2u}}{1-u}$.
\section{introduction}
Ultracold atomic Fermi gases provide unique opportunities to investigate
the crossover from the Bardeen-Cooper-Schrieffer (BCS) type
superfluids to the Bose-Einstein condensation (BEC) of tightly bound
molecules\cite{Eagles,Leggett,Nozieres,SadeMelo} in a unified
manner\cite{Regal,Zwierlein,Bartenstein,Kinast,Ketterle}.
One of the key ingredients to achieve this BCS-BEC crossover
in Fermi gases is a Feshbach resonance\cite{Ketterle}, which allows one
to tune the pairing interaction from the weak-coupling BCS limit to the
strong coupling BEC limit\cite{Timmermans,Holland,Ohashi,Bloch,Giorgini}.
Since the BCS-BEC crossover is a fundamental many-body problem,
it has recently attracted much
attention, not only in cold atom physics, but also in various research
fields, such as condensed matter physics and high energy physics.
In particular, this system is expected to be helpful for further
understanding of high-$T_{\rm c}$ cuprates, one of the
most challenging problems in condensed matter physics\cite{Lee}.
\par
In the under-doped regime of high-$T_{\rm c}$ cuprates, the so-called
pseudogap phenomenon has been extensively studied\cite{Lee,Fischer}. In
this phenomenon, the single-particle density of states (DOS) in the
normal state exhibits a dip structure around the Fermi energy.
The temperature at which the pseudogap appears is referred to as the
pseudogap temperature $T^*$, which is higher than the superconducting phase transition temperature
$T_{\rm c}$. In the region between $T^*$ and $T_{\rm c}$, various
anomalies have been observed in physical quantities, such as nuclear
spin-lattice relaxation rate (NMR-$T_1^{-1}$)\cite{Yasuoka}, and
angle-resolved photoemission spectroscopy (ARPES)\cite{Damascelli}.
As the origin of the pseudogap,
possibility of preformed pairs due to strong pairing fluctuations has been
proposed\cite{Randeria,Singer,Janko,Rohe,Yanase,Perali}.
However, because of the complexity of high-$T_{\rm c}$ cuprates, other
scenarios have also been discussed, such as antiferromagnetic spin
fluctuations\cite{Pines,Kampf} and a hidden order\cite{Chakravarty}.
Thus, a simple system only having strong pairing fluctuations would be
helpful to confirm whether or not preformed pairs are responsible
for the pseudogap formation in high-$T_{\rm c}$ cuprates.
\par
The cold Fermi gas system meets this demand. This
system is much cleaner and simpler than high-$T_{\rm c}$ cuprates, and
the pairing mechanism associated with a Feshbach resonance has been well
understood. The BCS-BEC crossover is dominated by strong pairing
fluctuations, so that one can focus on how they affect physical quantities.
Indeed, effects of pairing fluctuations on single-particle spectral
weight have been theoretically studied by many
researchers\cite{Rohe,Yanase,Janko,Perali,Pieri,Bruun,Massignan,Chen,Haussmann}.
They clarified that pairing fluctuations lead to a BCS-type double peak
structure in the spectral weight above $T_{\rm c}$, which is a signature
of pseudogap phenomenon.
They also found that the two peaks in the spectral weight merge into a
single peak at high temperatures. In Ref.~\cite{Perali}, detailed
analysis on the spectral weight above $T_{\rm c}$ has been carried out
over the entire BCS-BEC crossover, and, in the BEC regime, the deviation
from the BCS-type behaviors due to an asymmetric double peak structure
has been pointed out.
Since a photoemission-type experiment has recently become
possible in cold atom physics\cite{Stewart}, we can now examine
strong-coupling effects on single-particle excitations within the
current experimental technology.
Although cold Fermi gases are not exactly the same as
high-$T_{\rm c}$ cuprates (e.g., pairing symmetry), the study of
pseudogap phenomenon in cold Fermi gases is expected to be useful for
further understanding of the underdoped regime of high-$T_{\rm c}$
cuprates.
\par
In this paper, we investigate pseudogap behaviors of an ultracold Fermi
gas above $T_{\rm c}$. Including pairing fluctuations within the $T$-matrix
approximation developed in Refs.~\cite{Rohe,Perali}, we systematically
examine how the pseudogap develops in DOS, as well as the spectral
weight, over the entire BCS-BEC crossover region. We determine the
pseudogap temperature $T^*$ at which the dip structure in DOS vanishes. We
show that $T^*$ is quite different from the temperature $T^{**}$ where
the double peak structure in the spectral weight disappears. In
the BCS regime, we find that $T^*>T^{**}$. However, $T^{**}$ becomes
higher than $T^*$ in the crossover region and BEC regime.
Including this, we determine the pseudogap region in the BCS-BEC
crossover phase diagram in terms of temperature and the strength of pairing interaction.
\par
This paper is organized as follows. In Sec.~\ref{Model}, we explain our model and
formulation to study pseudogap in DOS and spectral weight. In
Sec.~\ref{results}, we examine the pseudogap structure in DOS. Here, we show how
the pseudogapped DOS continuously changes into fully gapped one, as
one passes through the BCS-BEC crossover region. We determine
the pseudogap temperature $T^*$ from the temperature dependence of
DOS. In Sec.~\ref{spweight}, we examine strong-coupling effects on the spectral
weight. We introduce another pseudogap temperature $T^{**}$ from the
temperature dependence of spectral weight. We also discuss difference
between $T^*$ and $T^{**}$. Throughout this paper, we take $\hbar=k_{\rm
B}=1$.
\section{Model and formalism}
\label{Model}
We consider a three-dimensional uniform Fermi gas, consisting of two
atomic hyperfine states described by pseudospin
$\sigma=\uparrow,\downarrow$. So far, all the experiments on cold Fermi
gases are using a broad Feshbach resonance to tune the strength of a
pairing
interaction\cite{Regal,Zwierlein,Bartenstein,Kinast,Ketterle}. In this
case, the detailed Feshbach-induced pairing mechanism is known not to be
crucial as long as we consider the interesting BCS-BEC crossover
regime, and one can safely use the ordinary single-channel BCS model,
described by the Hamiltonian,
\begin{equation}
H= \sum_{\bm p,\sigma}\xi_{\bm p}c_{\bm p\sigma}^\dagger c_{\bm p\sigma}
-U\sum_{\bm q}\sum_{\bm p,\bm p^\prime}c_{\bm p+\bm q/2\uparrow}^\dagger
c_{-\bm p+\bm q/2\downarrow}^\dagger c_{-\bm p^\prime+\bm
q/2\downarrow}c_{\bm p^\prime+\bm q/2\uparrow}.
\label{eq.1}
\end{equation}
Here, $c_{\bm p\sigma}$ is the annihilation operator of a Fermi atom
with the pseudospin $\sigma$ and the kinetic energy $\xi_{\bm p}=\varepsilon_{\bm
p}-\mu=p^2/2m-\mu$, measured from the chemical potential $\mu$ (where
$m$ is an atomic mass). $-U$ ($<0$) is an assumed tunable pairing
interaction associated with a Feshbach resonance. It is related to
the $s$-wave scattering length $a_s$ as\cite{Randeria2}
\begin{equation}
\frac{4\pi a_s}{m}=-\frac{U}{1-U\sum^{\omega_c}_{\bm p}\frac{1}{2\varepsilon_{\bm p}}},
\label{asU}
\end{equation}
where $\omega_c$ is a high-energy cutoff. Since the strength of an
interaction is usually measured in terms of the scattering length $a_s$ in cold
atom physics, Eq.~(\ref{asU}) is useful in comparing theoretical results
with experiments. In this scale, the weak-coupling BCS limit and
strong-coupling BEC limit are characterized as $(k_{\rm
F}a_s)^{-1}\ll -1$ and $(k_{\rm F}a_s)^{-1}\gg +1$, respectively (where
$k_{\rm F}$ is the Fermi momentum). The region $-1\ \raise.3ex\hbox{$<$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ (k_{\rm
F}a_s)^{-1}\ \raise.3ex\hbox{$<$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ +1$ is referred to as the crossover region. The center of the crossover
region ($(k_{\rm F}a_s)^{-1}=0$) is called the unitarity limit\cite{Ho}.
\par
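As a concrete illustration of Eq.~(\ref{asU}): with a sharp momentum cutoff $ p_c $, the continuum value of the sum per unit volume is $ \sum_{\bm p}^{\omega_c} 1/(2\varepsilon_{\bm p}) = m p_c/(2\pi^2) $, and the relation can then be inverted for the bare coupling $ U $. The following sketch (units $ \hbar = 1 $; the function names and cutoff value are our illustrative choices) checks the round trip:

```python
import math

def cutoff_sum(m, p_c):
    """Continuum value of sum_p^{omega_c} 1/(2*eps_p) per unit volume,
    with eps_p = p^2/(2m) and a sharp momentum cutoff p_c (hbar = 1)."""
    return m * p_c / (2.0 * math.pi ** 2)

def bare_U_from_as(a_s, m, p_c):
    """Invert 4*pi*a_s/m = -U / (1 - U*S) for the bare coupling U > 0."""
    g = 4.0 * math.pi * a_s / m          # low-energy coupling 4*pi*a_s/m
    S = cutoff_sum(m, p_c)
    return g / (g * S - 1.0)

def as_from_bare_U(U, m, p_c):
    """Forward relation: a_s = (m / 4*pi) * (-U) / (1 - U*S)."""
    S = cutoff_sum(m, p_c)
    return (m / (4.0 * math.pi)) * (-U) / (1.0 - U * S)
```

In the unitarity limit $ a_s \to \infty $, this gives $ U \to 1/S $, i.e. the bare attraction that places the two-body bound state exactly at threshold for the chosen cutoff.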
To discuss strong-coupling effects in the BCS-BEC crossover regime above
$T_{\rm c}$, we include pairing fluctuations within the $T$-matrix
approximation\cite{Rohe,Perali}. Namely, we consider the single-particle
thermal Green's function,
\begin{eqnarray}
G_{\bm p}(i\omega_n)=\frac{1}{i\omega_n-\xi_{\bm p}-\Sigma(\bm p,i\omega_n)},
\label{Green}
\end{eqnarray}
where $\omega_n$ is the fermion Matsubara frequency. The self-energy
correction $\Sigma({\bm p},i\omega_n)$ describes effects of pairing
fluctuations, which is diagrammatically given by Fig.~\ref{diagram}(a).
In Fig.~\ref{diagram}, the solid lines are the free fermion propagator,
\begin{equation}
G^0_{\bm p}(i\omega_n)=\frac{1}{i\omega_n-\xi_{\bm p}}.
\label{eq.free}
\end{equation}
Although this $T$-matrix theory does not treat the single-particle
Green's function self-consistently, Ref.~\cite{Perali} has shown that it
can correctly describe the smooth crossover from the BCS regime to the
BEC regime. We briefly note that the self-consistent $T$-matrix
approximation (where the full Green's function $G$ is used instead of
$G^0$ in evaluating the self-energy) has recently been employed to study the
spectral weight and rf-spectrum in the crossover
region\cite{Haussmann}.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{diagram.eps}}
\caption{(a) Self-energy correction $\Sigma({\bm p},i\omega_n)$, and (b)
particle-particle scattering matrix $\Gamma({\bm q},i\nu_n)$, in the
$T$-matrix approximation. The solid and wavy lines represent the
non-interacting Fermi Green's function $G^0_{\bm p}(i\omega_n)$ and
pairing interaction $-U$, respectively.}
\label{diagram}
\end{figure}
Summing up the diagrams in Fig.~\ref{diagram}(a), we obtain
\begin{eqnarray}
\Sigma(\bm p,i\omega_n)=T\sum_{\bm q,\nu_n}\Gamma(\bm q,i\nu_n)G_{\bm
q-\bm p}^0(i\nu_n-i\omega_n)e^{i(\nu_n-\omega_n)\delta},
\label{selfe}
\end{eqnarray}
where $\nu_n$ is the boson Matsubara frequency. The particle-particle
scattering matrix $\Gamma({\bm q},i\nu_n)$, which describes fluctuations in the Cooper channel, is diagrammatically given by Fig.~\ref{diagram}(b). The expression is given by
\begin{eqnarray}
\Gamma({\bm q},i\nu_n)
&=&\frac{-U}{1-U\Pi(\bm q,i\nu_n)}
\nonumber
\\
&=&
{4\pi a_s \over m}
{1 \over
\displaystyle
1+{4\pi a_s \over m}
\Bigl[
\Pi(\bm q,i\nu_n)-\sum_{\bm p}{1 \over 2\varepsilon_{\bm p}}
\Bigr]
}.
\label{Gamma}
\end{eqnarray}
In the last expression, the ultraviolet divergence coming
from the contact pairing interaction has been absorbed into
the scattering length $a_s$\cite{Randeria2}. $\Pi(\bm q,i\nu_n)$ is the
pair-propagator, given by
\begin{eqnarray}
\Pi({\bm q},i\nu_n)&=&T\sum_{\bm p,\omega_n}G^0_{{\bm p}+{\bm
q}/2}(i\nu_n+i\omega_n)G^0_{{-\bm p}+{\bm q/2}}(-i\omega_n)
\nonumber\\
&=&\sum_{\bm p}\frac{1-f(\xi_{\bm p+\bm q/2})-f(\xi_{\bm p-\bm q/2})}{\xi_{\bm p+\bm q/2}+\xi_{\bm p-\bm q/2}-i\nu_n},
\end{eqnarray}
where $f(\varepsilon)$ is the Fermi distribution function.
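The Matsubara frequency sum in the pair propagator can be checked numerically: for fixed dispersions $\xi_1$, $\xi_2$, the truncated sum over $\omega_n=(2n+1)\pi T$ converges to the closed form above. A minimal sketch (Python, with assumed parameter values):

```python
import numpy as np

T, xi1, xi2 = 0.3, 0.7, -0.4       # temperature and two dispersions (assumed values)
nu = 2.0 * np.pi * T * 1           # bosonic Matsubara frequency (n = 1)

def fermi(x, T):
    return 1.0 / (np.exp(x / T) + 1.0)

# truncated fermionic Matsubara sum  T * sum_w G0(i nu + i w) G0(-i w)
n = np.arange(-20000, 20000)
w = (2 * n + 1) * np.pi * T
summed = T * np.sum(1.0 / ((1j * (nu + w) - xi1) * (-1j * w - xi2)))

closed = (1.0 - fermi(xi1, T) - fermi(xi2, T)) / (xi1 + xi2 - 1j * nu)
print(abs(summed - closed) < 1e-3)   # True: the sum matches the closed form
```

The terms fall off as $1/\omega_n^2$, so the symmetric truncation converges without a convergence factor.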
\par
To examine the pseudogap region, one needs to determine $T_{\rm
c}$\cite{Nozieres,SadeMelo,Ohashi,Perali}. The equation for $T_{\rm c}$
is obtained from the Thouless criterion\cite{Thouless}, $\Gamma({\bm
q}=0,i\nu_n=0,T=T_{\rm c})^{-1}=0$, which gives
\begin{equation}
1=-\frac{4\pi a_s}{m}\sum_{\bm p}
\left[
\frac{1}{2(\varepsilon_{\bm p}-\mu)}\tanh{\xi_{\bm p} \over 2T}-\frac{1}{2\varepsilon_{\bm p}}
\right].
\label{Thouless}
\end{equation}
As pointed out by Nozi\`eres and Schmitt-Rink\cite{Nozieres}, the
chemical potential $\mu$ deviates from the Fermi energy
$\varepsilon_{\rm F}$ in the BCS-BEC crossover. This strong-coupling
effect can be conveniently included by solving Eq.~(\ref{Thouless}),
together with the equation for the number $N$ of Fermi atoms,
\begin{equation}
N=2T\sum_{\bm p,\omega_n}e^{i\omega_n\delta}G_{\bm p}(i\omega_n).
\label{number}
\end{equation}
We show the self-consistent solutions of the coupled equations
(\ref{Thouless}) and (\ref{number}) in Fig.~\ref{Tcmuc}.
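To illustrate how Eq.~(\ref{Thouless}) is solved in practice, the following sketch (Python, with assumed units $\hbar=k_{\rm F}=1$ and $m=1/2$, so that $\varepsilon_{\bm p}=p^2$ and $\varepsilon_{\rm F}=1$) finds the temperature at which the Thouless criterion is satisfied. For simplicity it fixes $\mu=\varepsilon_{\rm F}$ instead of solving Eq.~(\ref{number}) self-consistently, so it reproduces only the weak-coupling (mean-field-like) behavior:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def thouless_lhs(T, inv_kFas, mu=1.0):
    """RHS of Eq. (Thouless) in units hbar = k_F = 1, m = 1/2 (eps_p = p^2)."""
    a_s = 1.0 / inv_kFas
    def integrand(p):
        xi = p * p - mu
        # tanh(xi/2T)/(2 xi) has a removable singularity at xi = 0
        t = np.tanh(xi / (2 * T)) / (2 * xi) if abs(xi) > 1e-9 else 1.0 / (4 * T)
        return (p * p * t - 0.5) / (2 * np.pi**2)
    val, _ = quad(integrand, 0.0, 200.0, points=[np.sqrt(mu)], limit=500)
    return -8.0 * np.pi * a_s * val          # (4 pi a_s / m) = 8 pi a_s in these units

def Tc(inv_kFas):
    """Temperature where the Thouless criterion Gamma^{-1}(q=0, nu=0) = 0 holds."""
    return brentq(lambda T: thouless_lhs(T, inv_kFas) - 1.0, 1e-3, 1.0)

print(Tc(-1.0))   # of order 0.1 eps_F on the weak-coupling side
```

With $\mu$ held fixed this misses the strong-coupling suppression of $\mu$; the self-consistent results in Fig.~\ref{Tcmuc} require the coupled solution.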
\par
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{Tcmuc.eps}}
\caption{Self-consistent solutions of the coupled equations
(\ref{Thouless}) and (\ref{number}) in the BCS-BEC crossover (`TMA' in
the figure). (a) phase transition temperature $T_{\rm c}$. (b) chemical
potential $\mu(T=T_{\rm c}$). In panel (b), $\mu$ is negative when
$(k_{\rm F}a_s)^{-1}\ge 0.35$. `BCS' and `NSR' are the weak-coupling BCS
result and the NSR result, respectively.}
\label{Tcmuc}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{mu.eps}}
\caption{Calculated chemical potential $\mu$ above $T_{\rm c}$ in the BCS side
(a) and BEC side (b). Each line starts from $T_{\rm c}$. We will use these
results in calculating the density of states and spectral weight in
Secs. III and IV.}
\label{muT}
\end{figure}
In the normal phase above $T_{\rm c}$, we only solve the number equation
(\ref{number}) to determine the temperature dependence of $\mu(T> T_{\rm
c})$.
The resulting $\mu(T)$ in Fig.~\ref{muT} is used to calculate DOS $\rho(\omega)$, as well as the
spectral weight $A({\bm p},\omega)$. They are obtained from the analytically continued Green's function as, respectively,
\begin{eqnarray}
\rho(\omega)=-{1 \over \pi}\sum_{\bm p}{\rm Im}
[G(\bm p,i\omega\to \omega+i\delta)],
\label{DOS}
\end{eqnarray}
\begin{eqnarray}
A({\bm p},\omega)=-{1 \over \pi}
{\rm Im}[G(\bm p,i\omega\to \omega+i\delta)].
\label{SW}
\end{eqnarray}
The analytically continued self-energy in $G({\bm p},i\omega_n\to\omega+i\delta)$ has the form,
\begin{equation}
\Sigma(\bm p,\omega+i\delta)=\Sigma_{\rm H}+
\frac{1}{\pi}\sum_{\bm
q}\int_{-\infty}^{\infty}dz\ \frac{n_B(z)+f(\xi_{\bm q-\bm
p})}{z-(\omega+i\delta)-\xi_{\bm q-\bm p}}
{\rm Im}[\Gamma(\bm q,i\nu_n\to z+i\delta)],
\label{selfe2}
\end{equation}
where $n_B(\varepsilon)$ is the Bose distribution function. $\Sigma_{\rm
H}=-(U/2)\sum_{\bm p}f(\xi_{\bm p})$ is the Hartree term, and the last term
in Eq. (\ref{selfe2}) describes the fluctuation correction to
single-particle excitations.
\par
Before ending this section, we comment on the $T$-matrix theory used in
this paper. In the BCS-BEC crossover literature, the so-called Gaussian
fluctuation theory developed by Nozi\`eres and Schmitt-Rink
(NSR)\cite{Nozieres,SadeMelo} has also been used. The present $T$-matrix
theory is a natural extension of this to include higher order pairing
fluctuations. Indeed, the $T_{\rm c}$-equation (\ref{Thouless}) is
common to the two theories, and the NSR number equation is also obtained
from Eq. (\ref{number}), by expanding $G_{\bm p}(i\omega_n)$ in Eq.~(\ref{number}) up to
$O(\Sigma)$, as
\begin{equation}
G_{\bm p}^{\rm NSR}(i\omega_n)=G^0_{\bm p}(i\omega_n)+G^0_{\bm p}(i\omega_n)\Sigma(\bm p,i\omega_n)G^0_{\bm p}(i\omega_n).
\label{NSRG}
\end{equation}
The two theories essentially give the same BCS-BEC
crossover behaviors of $T_{\rm c}$ and $\mu(T=T_{\rm c})$, as shown in
Fig.~\ref{Tcmuc}. In particular, both theories correctly describe the
strong-coupling BEC limit, where the superfluid phase transition is
dominated by BEC of $N/2$ tightly bound molecules (which leads to
$T_{\rm c}=0.218T_{\rm F}$\cite{Nozieres}) and $2|\mu|$ equals the
binding energy of a two-body bound state $E_{\rm
bind}=1/ma_s^2$\cite{Leggett}. However, when one uses $G_{\bm p}^{\rm
NSR}(i\omega_n\to\omega+i\delta)$ in calculating Eq.~(\ref{DOS}), unphysical results are obtained. The NSR theory overestimates the suppression of DOS around
$\omega=0$, leading to a negative DOS around $\omega=0$
in the crossover region\cite{Tsuchiya}.
The NSR theory also gives an unphysical divergence of DOS at
$\omega=\mu$ (although we do not explicitly show this in this
paper)\cite{Tsuchiya}. Thus, although the NSR theory can describe the
BCS-BEC crossover behaviors of $T_{\rm c}$ and $\mu$, one needs to be
careful in considering single-particle properties in the BCS-BEC
crossover. Since this problem is absent in the present $T$-matrix
theory, we employ this framework to examine DOS and the spectral weight
in this paper.
\section{Pseudogap in single-particle density of states}
\label{results}
In this section, we discuss the pseudogap phenomenon in DOS.
Figure~\ref{dosTc} shows DOS in the BCS-BEC crossover at $T_{\rm
c}$. Starting from the weak-coupling BCS regime, a pseudogap
develops around $\omega=0$, as one increases the strength of the pairing
interaction. Since the superfluid order parameter vanishes at $T_{\rm
c}$, this dip structure purely originates from pairing fluctuations.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{dosTc.eps}}
\caption{Density of states at $T_{\rm c}$. (a) BCS side
($(k_{\rm F}a_s)^{-1}<0$). (b) BEC side ($(k_{\rm F}a_s)^{-1}>0$).}
\label{dosTc}
\end{figure}
\par
The reason why the fluctuation correction described by the self-energy
in Eq.~(\ref{Green}) causes the pseudogap in DOS can be easily understood
by noting similarity between Eq.~(\ref{Green}) and the Green's
function in the mean-field BCS theory\cite{Mahan},
\begin{eqnarray}
G_{\bm p}^{\rm BCS}(i\omega_n)=-{i\omega_n+\xi_{\bm p} \over \omega_n^2+\xi_{\bm p}^2+\Delta^2},
\label{BCS}
\end{eqnarray}
where $\Delta$ is the superfluid order parameter. Assuming that pairing
fluctuations are strong around ${\bm q}=\nu_n=0$ (Note that $\Gamma({\bm
q}=0,\nu_n=0)$ diverges at $T_{\rm c}$.), we may approximate Eq.~(\ref{selfe}) as
\begin{eqnarray}
\Sigma(\bm p,i\omega_n)\simeq \Sigma_{\rm H}
-G_{-{\bm p}}^0(-i\omega_n)\Delta^2_{\rm pg},
\label{selfe3}
\end{eqnarray}
where $\Delta^2_{\rm pg}\equiv -T\sum_{\bm q,\nu_n}[\Gamma(\bm
q,i\nu_n)+U]$.
Although $G^0_{-{\bm p}}$ in Eq.~(\ref{selfe3}) does not include the
Hartree term $\Sigma_{\rm H}$ in the present $T$-matrix approximation, a
better approximation would include it in evaluating $\Sigma$. In this
case, substituting Eq.~(\ref{selfe3}) into Eq.~(\ref{Green}), we obtain
\begin{eqnarray}
G_{\bm p}(i\omega_n)
=
{1 \over i\omega_n-\xi_{\bm p}+\Delta_{\rm pg}^2G_{-{\bm p}}^0(-i\omega_n)}
=
-{i\omega_n+\xi_{\bm p} \over \omega_n^2+\xi_{\bm p}^2+\Delta_{\rm pg}^2},
\label{BCS2}
\end{eqnarray}
where $\mu$ in $\xi_{\bm p}$ is replaced by $\mu+\Sigma_{\rm H}$. Since
$G^0_{-{\bm p}}(-i\omega_n)$ may be regarded as the hole Green's
function, Eq.~(\ref{BCS2}) means that pairing fluctuations induce a
particle-hole coupling.
Comparing Eq.~(\ref{BCS2}) with Eq.~(\ref{BCS}), we find that
$\Delta_{\rm pg}$ (which describes effects of pairing fluctuations)
plays the same role as the BCS gap parameter $\Delta$.
Actually, dynamical effects of
pairing fluctuations with ${\bm q}\ne 0$ and $\nu_n\ne 0$ smear
the clear gap structure and coherence peak known in the mean-field BCS theory. However, in Fig.~\ref{dosTc}(a), one can still
see broad peaks around $\omega/\varepsilon_{\rm F}\simeq \pm 0.2$ (which
correspond to the diverging coherence peaks at $\omega=\pm\Delta$ in the BCS theory) when
$(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$<$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ -0.4$. Although the above discussion simplifies
the treatment of pairing fluctuations, it would be helpful in
understanding the reason why pairing fluctuations give the pseudogap
structure above $T_{\rm c}$.
\par
While the pseudogapped DOS is most pronounced in the unitarity limit, it
continuously changes into a {\it fully} gapped one in the
strong-coupling BEC regime, as shown in Fig.~\ref{dosTc}(b). In the BEC
regime where $\mu$ is negative ($(k_{\rm F}a_s)^{-1}>0.35$), if we
retain only the negative $\mu$ and ignore all other strong-coupling effects, the
DOS has a finite energy gap $|\mu|$ as
\begin{eqnarray}
\rho(\omega)=
\left\{
\begin{array}{ll}
0& (\omega<|\mu|),\\
{m^{3/2} \over \sqrt{2}\pi^2}
\sqrt{\omega-|\mu|}&(\omega\ge|\mu|).
\end{array}
\right.
\label{eq.becdos}
\end{eqnarray}
In the BEC limit, $2|\mu|$ equals the binding energy $E_{\rm
bind}=1/ma_s^2$ of a two-body bound state, which means that the energy gap in
Eq.~(\ref{eq.becdos}) is directly related to the molecular dissociation
energy. Since the intensity of DOS is almost absent below
$\omega/\varepsilon_{\rm F}\sim 1.4$ when $(k_{\rm F}a_s)^{-1}=+0.8$ in
Fig.~\ref{dosTc}(b), the region of $(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$>$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ 0.8$ is considered to be close to
an $N/2$ molecular gas, rather than an $N$ atomic Fermi gas.
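The gapped form in Eq.~(\ref{eq.becdos}) is straightforward to evaluate. A sketch (Python, assumed units $\hbar=m=1$ and an assumed value of $\mu$):

```python
import numpy as np

m = 1.0  # atomic mass (assumed units, hbar = 1)

def dos_gapped(w, mu):
    """Eq. (eq.becdos): DOS with energy gap |mu| in the BEC regime (mu < 0)."""
    gap = abs(mu)
    w = np.atleast_1d(np.asarray(w, dtype=float))
    out = np.zeros_like(w)
    above = w >= gap
    out[above] = m**1.5 / (np.sqrt(2.0) * np.pi**2) * np.sqrt(w[above] - gap)
    return out

mu = -0.8                                   # negative chemical potential (assumed)
print(dos_gapped([0.0, 0.5, 0.79], mu))     # all zero inside the gap |mu| = 0.8
print(dos_gapped([1.0], mu)[0] > 0)         # True: finite weight above the gap
```

In this approximation $\rho(\omega)$ vanishes identically below $|\mu|$; the finite negative-energy weight discussed next is entirely a fluctuation effect.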
\par
However, we note that $\rho(\omega<0)$ still has small but
{\it finite} intensity even when $(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$>$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ 1.0$, as shown in Fig.~\ref{dosTc}(b), which means the existence of hole-type excitations. The finite DOS in the
negative energy region is absent when we ignore all fluctuation effects
except for the negative $\mu$ (See Eq.~(\ref{eq.becdos}).). Since the concept of a hole is characteristic of a many-fermion system, one finds
that, although the BEC region around $(k_{\rm F}a_s)^{-1}\simeq +1$ is
dominated by two-body molecular {\it bosons}, the character of a
many-fermion system still remains there to some extent, leading to the
finite $\rho(\omega<0)$. We can also see this by simply employing
Eq.~(\ref{BCS2}) to calculate DOS in the BEC regime ($\mu<0$), which
gives
\begin{eqnarray}
\rho(\omega)=
\left\{
\begin{array}{ll}
\displaystyle
{m^{3/2} \over 2\sqrt{2}\pi^2}
{\omega \over \sqrt{\omega^2-\Delta_{\rm pg}^2}}
\left[
1+{\sqrt{\omega^2-\Delta_{\rm pg}^2} \over \omega}
\right]
\sqrt{\sqrt{\omega^2-\Delta_{\rm pg}^2}-|\mu|}
&
~~~(\omega\ge\sqrt{\Delta_{\rm pg}^2+|\mu|^2}),
\\
\displaystyle
{m^{3/2} \over 2\sqrt{2}\pi^2}
{|\omega| \over \sqrt{\omega^2-\Delta_{\rm pg}^2}}
\left[
1-{\sqrt{\omega^2-\Delta_{\rm pg}^2} \over |\omega|}
\right]
\sqrt{\sqrt{\omega^2-\Delta_{\rm pg}^2}-|\mu|}
&
~~~(\omega\le-\sqrt{\Delta_{\rm pg}^2+|\mu|^2}).
\end{array}
\right.
\nonumber
\\
\label{DOS2}
\end{eqnarray}
When the two-body binding energy $E_{\rm bind}=1/ma_s^2~(\simeq 2|\mu|)$
is much larger than the `characteristic energy' $\Delta_{\rm pg}$, one
may ignore $\Delta_{\rm pg}$ in Eq.~(\ref{DOS2}). In this extreme BEC
limit, the upper branch in Eq.~(\ref{DOS2}) reduces to
Eq.~(\ref{eq.becdos}), and the lower one vanishes, as expected.
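This limiting behavior can be verified numerically. A sketch (Python, assumed units $\hbar=m=1$ and assumed parameter values) evaluating both branches of Eq.~(\ref{DOS2}) for small $\Delta_{\rm pg}$:

```python
import numpy as np

m = 1.0  # assumed units, hbar = 1

def dos_upper(w, mu, dpg):
    """Upper branch of Eq. (DOS2), valid for w >= sqrt(dpg^2 + mu^2)."""
    s = np.sqrt(w * w - dpg * dpg)
    pref = m**1.5 / (2.0 * np.sqrt(2.0) * np.pi**2)
    return pref * (w / s) * (1.0 + s / w) * np.sqrt(s - abs(mu))

def dos_lower(w_abs, mu, dpg):
    """Lower branch of Eq. (DOS2), evaluated at |omega| = w_abs."""
    s = np.sqrt(w_abs * w_abs - dpg * dpg)
    pref = m**1.5 / (2.0 * np.sqrt(2.0) * np.pi**2)
    return pref * (w_abs / s) * (1.0 - s / w_abs) * np.sqrt(s - abs(mu))

def dos_bec_limit(w, mu):
    """Eq. (eq.becdos): the Delta_pg -> 0 limit of the upper branch."""
    return m**1.5 / (np.sqrt(2.0) * np.pi**2) * np.sqrt(w - abs(mu))

mu, w = -0.8, 2.0   # assumed values
print(abs(dos_upper(w, mu, 1e-5) - dos_bec_limit(w, mu)) < 1e-6)  # True
print(dos_lower(w, mu, 1e-5) < 1e-6)  # True: hole branch vanishes as Delta_pg -> 0
```

The hole-branch weight is thus controlled entirely by $\Delta_{\rm pg}$, consistent with the discussion of $\rho(\omega<0)$ above.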
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{dosBCS.eps}}
\caption{Temperature dependence of the density of states $\rho(\omega)$
in the BCS side. $T_{\rm c}$ in each panel equals (a)
0.112$\varepsilon_{\rm F}$, (b) 0.146$\varepsilon_{\rm F}$, (c)
0.183$\varepsilon_{\rm F}$, and (d) 0.217$\varepsilon_{\rm F}$. In this
figure and Fig.\ref{dosBEC}, we have offset the results for $T>T_{\rm
c}$. The short horizontal line near each result is at
$\rho(\omega)=0$.}
\label{dosBCS}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{dosBEC.eps}}
\caption{Temperature dependence of DOS in the BEC side.
$T_{\rm c}$ in each panel equals (a) 0.244$\varepsilon_{\rm F}$, (b)
0.259$\varepsilon_{\rm F}$, (c) 0.262$\varepsilon_{\rm F}$, and (d)
0.255$\varepsilon_{\rm F}$.
}
\label{dosBEC}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{phdgm.eps}}
\caption{Pseudogap temperature $T^*$ determined from DOS in the BCS-BEC
crossover. We also plot another pseudogap temperature $T^{**}$ where
the double peak structure in the spectral weight vanishes. `BCS' is
$T_{\rm c}=(8\gamma/\pi e^2)\varepsilon_{\rm F} e^{\pi/(2k_{\rm F}a_s)}$
in the mean-field BCS theory (where $\gamma=1.78$)\cite{Pethick}. $T^*$
or $T^{**}$ gives the boundary between the pseudogap regime (PG) and
normal Fermi gas regime (NF). $2|\mu|$ $(\simeq E_{\rm bind})$ in the
BEC regime gives the characteristic temperature below which thermal
dissociation of bound molecules is suppressed. Namely, $T\simeq
2|\mu|$ physically describes the boundary between PG and molecular Bose
gas regime (MB).}
\label{phdgm}
\end{figure}
\par
Figures~\ref{dosBCS} and \ref{dosBEC} show DOS above $T_{\rm c}$. The
pseudogap structure in DOS becomes obscure at high temperatures due to
weak pairing fluctuations. The dip structure eventually vanishes at a
certain temperature, which we define as the pseudogap temperature $T^*$\cite{noteTT}.
\par
Figure~\ref{phdgm} shows the resulting pseudogap temperature $T^*$ in
the BCS-BEC crossover. Starting from the weak-coupling BCS regime, $T^*$
monotonically increases. However, $T^*$ is still lower than $T_{\rm c}$
calculated in the mean-field BCS theory (`BCS' in
Fig.~\ref{phdgm}). Although the mean-field $T_{\rm c}$ is sometimes
considered as a characteristic temperature where preformed pairs are
formed, our result shows that the pseudogap actually starts to develop
in DOS from a lower temperature.
\par
We note that, although the fact that the pseudogap disappears at $T^*$
is common to the entire BCS-BEC crossover region, the detailed manner of
its disappearance differs somewhat between the BCS regime and the
crossover-BEC regime. In Fig.~\ref{dosBCS}(a), the pseudogap around
$\omega=0$ is simply filled up at high temperatures. The shape of DOS
then becomes close to DOS of a free Fermi gas,
\begin{eqnarray}
\rho(\omega)={m^{3/2} \over \sqrt{2}\pi^2}
\sqrt{\omega+\mu}~~~~~(\omega\ge-\mu).
\label{eq.becdos2}
\end{eqnarray}
Namely, as far as we consider DOS, the system may be regarded as a
(weakly interacting) normal Fermi gas above $T^*$. On the other
hand, in the BEC side shown in Fig.~\ref{dosBEC}, in addition to the
enhancement of DOS around $\omega=0$, the lower peak is suppressed at
high temperatures. In the unitarity limit (Fig.~\ref{dosBEC}(a)), when
the pseudogap is completely filled up, DOS still has a different
shape from DOS of a free Fermi gas. In the BEC regime where $\mu<0$,
Figs.~\ref{dosBEC}(c) and (d) show that DOS above $T^*$ has a
finite intensity in the negative energy region, in contrast to
Eq.~(\ref{eq.becdos}). These results indicate that pairing fluctuations
still affect single-particle excitations above $T^*$ in the BEC side,
although the depression of DOS around $\omega=0$ is absent. Indeed, in
Sec.~\ref{spweight}, we will show evidence of such fluctuation effects
in the spectral weight in this regime.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{swTc.eps}}
\caption{Calculated intensity of the spectral weight $A(\bm p,\omega)$ at $T_{\rm c}$ in the energy-momentum plane. (a) BCS side ($(k_{\rm
F}a_s)^{-1}=-0.6$). (b) Unitarity limit ($(k_{\rm F}a_s)^{-1}=0.01$). (c) BEC
side ($(k_{\rm F}a_s)^{-1}=0.6$).}
\label{swTc}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{sw.eps}}
\caption{Spectral weight $A(\bm p,\omega)$ as a function of $\omega$. In
each panel, we take the momentum where the peak-to-peak energy becomes
minimum: (a) $p/k_{\rm F}=0.91$, (b) 0.83, and (c) 0.01.}
\label{swTc2}
\end{figure}
\section{Pseudogap in spectral weight}
\label{spweight}
It has been pointed out\cite{Janko,Rohe,Yanase,Perali} that pairing
fluctuations cause a BCS-type double peak structure in the
single-particle spectral weight $A({\bm p},\omega)$. In
this section, we examine how this strong-coupling effect is related to
the pseudogap in DOS discussed in Sec.~\ref{results}.
\par
Figure~\ref{swTc} shows the intensity of spectral weight $A({\bm
p},\omega)$ at $T_{\rm c}$ in the energy-momentum plane. In the BCS side
(panel (a)), in addition to the particle branch at $\omega\simeq\xi_{\bm
p}$, we can see a weak peak line of a hole branch at
$\omega\simeq-\xi_{\bm p}$. The intensity of the particle branch is
suppressed around $\omega=0$, where it intersects with
the hole branch and the level repulsion between them occurs.
The resulting structure is similar to the BCS spectral
weight\cite{Janko,Rohe,Yanase,Perali,note}, given by\cite{Mahan}
\begin{equation}
A_{\rm BCS}(\bm p,\omega)=u_{\bm p}^2\delta(\omega-E_{\bm p})+v_{\bm
p}^2\delta(\omega+E_{\bm p}),
\label{sfBCS}
\end{equation}
where $u_{\bm p}^2=(1+\xi_{\bm p}/E_{\bm p})/2$, $v_{\bm
p}^2=(1-\xi_{\bm p}/E_{\bm p})/2$, and $E_{\bm p}=\sqrt{\xi_{\bm
p}^2+\Delta^2}$ is the Bogoliubov quasiparticle excitation spectrum.
For a given momentum $p$, $A_{\rm BCS}(\bm p,\omega)$ has two peaks at
$\omega=\pm E_{\bm p}$.
The negative energy branch at $\omega=-E_{\bm p}$ given by the second term
in Eq.~(\ref{sfBCS}) is dominant in the low momentum region $p\ll p_{\rm
F}$ (where $u_{\bm p}\ll v_{\bm p}$).
On the other hand, the positive energy branch ($\omega=+E_{\bm p}$) becomes
crucial when $p\gg k_{\rm F}$ (where $u_{\bm p}\gg v_{\bm p}$).
The existence of two branches can be understood from the Bogoliubov
transformation $c_{\bm p\uparrow}=u_{\bm p}\gamma_{\bm p\uparrow}+v_{\bm
p}\gamma_{-\bm p\downarrow}^\dagger$ ($\gamma_{\bm p\sigma}$ is an
annihilation operator of a quasiparticle with momentum $\bm p$ and spin
$\sigma$), which indicates that the annihilation of an atom is accompanied by creation and annihilation of Bogoliubov excitations\cite{Griffin}.
The minimum energy gap $2\Delta$ between the two branches $\omega=\pm
E_{\bm p}$ is obtained at the Fermi level $p=k_{\rm F}$.
Since the simplified Green's function in
Eq.~(\ref{BCS2}) has the same form as Eq.~(\ref{BCS}), Eq.~(\ref{BCS2})
gives rise to the spectral weight similar to the BCS type in
Eq.~(\ref{sfBCS}), where the superfluid gap $\Delta$ is now replaced by
the pseudogap $\Delta_{\rm pg}$, describing effects of pairing
fluctuation. The minimum value $2\Delta_{\rm pg}$ of the pseudogap
energy is obtained at $p\simeq k_{\rm F}$ in this case.
For this reason, the double peak
structure in Fig.~\ref{swTc}(a) is found to come from the particle-hole
coupling due to strong pairing fluctuations\cite{Perali}. In addition,
they also induce a finite lifetime of quasiparticle excitations, leading
to finite widths of the two peaks in $A({\bm p},\omega)$\cite{Perali}.
This feature is absent in the BCS spectral weight in Eq.~(\ref{sfBCS}), which has two $\delta$-functional peaks at $\omega=\pm E_{\bm p}$.
As a result, $A({\bm p},\omega)$ at the momentum where the minimum peak-to-peak energy is obtained has finite spectral weight between the two peaks, as shown
in Fig.~\ref{swTc2}, giving finite intensity of DOS inside the pseudogap.
This {\it gapless} double peak structure is referred
to as the pseudogap in the spectral weight in the
literature\cite{Janko,Rohe,Yanase,Perali}.
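The coherence-factor structure of Eq.~(\ref{sfBCS}) discussed above can be tabulated directly. A sketch (Python, assumed dimensionless units and an assumed gap value), illustrating $u_{\bm p}^2+v_{\bm p}^2=1$ and the minimum peak-to-peak gap $2\Delta$ at the Fermi level ($\xi_{\bm p}=0$):

```python
import numpy as np

def bcs_weights(xi, delta):
    """Coherence factors and Bogoliubov spectrum entering Eq. (sfBCS)."""
    E = np.hypot(xi, delta)             # E_p = sqrt(xi_p^2 + Delta^2)
    u2 = 0.5 * (1.0 + xi / E)           # weight of the omega = +E_p peak
    v2 = 0.5 * (1.0 - xi / E)           # weight of the omega = -E_p peak
    return u2, v2, E

xi = np.linspace(-3, 3, 301)            # xi_p grid across the Fermi surface
u2, v2, E = bcs_weights(xi, delta=0.5)  # assumed gap Delta = 0.5

print(np.allclose(u2 + v2, 1.0))        # True: sum rule at each momentum
print(np.isclose(2 * E.min(), 1.0))     # True: minimum peak-to-peak gap = 2*Delta
```

The crossover of spectral weight between the two branches ($v^2$ dominant below, $u^2$ above the Fermi surface) is what the level repulsion in Fig.~\ref{swTc}(a) mimics, with $\Delta$ replaced by $\Delta_{\rm pg}$.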
\par
This pseudogap structure in the spectral weight becomes remarkable, as one approaches the
unitarity limit. In this limit, strong pairing fluctuations also broaden
the spectral peaks, as shown in Fig.~\ref{swTc}(b). In
the BEC regime (Fig.~\ref{swTc}(c)), the peak width of the upper branch
shrinks. This is because the BEC regime is well described by a gas of
tightly bound molecules, so that the upper branch simply describes their
dissociation. Since the molecular formation simply occurs within
two-body physics in the BEC limit, the peak of the lower branch
(which is evidence of many-body physics) is low and broad in
Fig.~\ref{swTc}(c).
\par
These different behaviors of upper and lower peaks in the BEC regime can be directly understood from the imaginary part of the self-energy correction. Using the fact that the particle-particle scattering matrix $\Gamma$ reduces to the Bose Green's function in the BEC limit as\cite{Perali}
\begin{equation}
\Gamma(\bm q,i\nu_n)=\frac{8\pi}{m^2a_s}\frac{1}{i\nu_n-E_{\bm q}^B}
\label{bGreen}
\end{equation}
(where $E_{\bm q}^B=q^2/4m-\mu_B$ is the energy of a molecule measured from the molecular chemical potential $\mu_B\simeq 2\mu+1/(ma_s^2)\simeq 0$), we can approximately evaluate the imaginary part of the analytically continued self-energy in Eq.~(\ref{selfe2}) as
\begin{eqnarray}
{\rm Im}\Sigma(\bm p,\omega+i\delta)&=&-\frac{8\pi^2}{m^2a_s}\sum_{\bm
q}n_B(E_{\bm q}^B)\delta\left(\omega-(E_{\bm q}^B-\xi_{\bm q-\bm p})\right),
\nonumber\\
&=&-\frac{4T}{a_s
p}\ln\left[\frac{1-\exp\left\{-\beta\left(\frac{3p^2}{2m}+\Delta\omega+\frac{2p}{\sqrt{m}}\sqrt{\frac{p^2}{2m}+\Delta\omega}-\mu_B\right)\right\}}{1-\exp\left\{-\beta\left(\frac{3p^2}{2m}+\Delta\omega-\frac{2p}{\sqrt{m}}\sqrt{\frac{p^2}{2m}+\Delta\omega}-\mu_B\right)\right\}}\right]
\nonumber
\\
&\times&\theta({p^2 \over 2m}+\omega_{\rm th}-\omega),
\label{eq22}
\end{eqnarray}
where $\Delta\omega=\omega_{\rm th}-\omega$, and $\omega_{\rm th}=\mu-\mu_B\simeq -1/2ma_s^2$. Since ${\rm Im}\Sigma(\bm p,\omega+i\delta)$ directly gives the peak width of the spectral weight, the first line in Eq.~(\ref{eq22}) indicates that, in the BEC regime, the peak widths are dominated by molecules excited thermally with finite center of mass momentum $\bm q\neq 0$. Since Eq. (\ref{eq22}) vanishes when $\omega>p^2/2m+\omega_{\rm th}\simeq p^2/2m-1/2ma_s^2$, the upper branch around $\omega=\xi_{\bm p}$ ($>0$) appears as a sharp delta-function peak in the spectral weight in the BEC limit. This is consistent with the sharp upper peak in Fig.~\ref{swTc2}(c).
\par
On the other hand, expanding Eq. (\ref{eq22}) around the lower branch, $\omega=\xi_{\bm p}$, one obtains
\begin{equation}
{\rm Im}\Sigma(\bm
p,\omega+i\delta)\simeq\frac{4T}{a_sp}\ln\left(\frac{m}{4Tp^2}\delta\omega^2\right),
\label{selfBEC2}
\end{equation}
where $\delta\omega=\omega-(-\xi_{\bm p})$.
Equation~(\ref{selfBEC2}) shows that the imaginary part of the
self-energy logarithmically diverges along the lower branch $\omega=-\xi_{\bm p}$. Thus, the lower peak is smeared out in the BEC limit.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{swBCS.eps}}
\caption{(a) Intensity of the spectral weight $A({\bm p},\omega)$ in the BCS
side ($(k_{\rm F}a_s)^{-1}=-0.6$). We set $T/T_{\rm c}=1.03$, at which the dip
structure can be clearly seen in DOS. (b) $A({\bm p},\omega)$ as a function
of $\omega$. The momentum $p$ is taken to be $p/k_{\rm F}=0.91$ (solid
line), 0.83 (dashed line), and 0.97 (dotted line).}
\label{swBCS}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{swBEC.eps}}
\caption{(a) Intensity of spectral weight $A({\bm p},\omega)$ in the BEC side
($(k_{\rm F}a_s)^{-1}=+0.4$). We take $T/T_{\rm c}=1.53$, at which the
pseudogap structure is absent in DOS. (b) $A({\bm p},\omega)$ as a function
of $\omega$ at $p=0.01k_{\rm F}$.}
\label{swBEC}
\end{figure}
As one increases the temperature, Fig.~\ref{swTc2} shows that the double
peak structure gradually becomes obscure to eventually vanish at a
certain temperature $(\equiv T^{**})$. Regarding $T^{**}$ as another
pseudogap temperature\cite{noteTTT},
one might expect that it is closely related to
$T^*$ defined from DOS, because DOS is given by the momentum summation
of the spectral weight. However, when we compare $T^{**}$ with $T^*$ in
the BCS-BEC crossover, they are very different from each other, as shown
in Fig.~\ref{phdgm}. While one sees $T^*>T^{**}$ in the BCS
side\cite{Janko}, $T^{**}$ becomes higher than $T^*$ in the BEC side
($(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$>$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ -0.07$).
\par
In the BCS side, when $T\ \raise.3ex\hbox{$>$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ T^{**}$, since pairing fluctuations are
still strong near the Fermi surface, the single peak in the spectral
weight at $p\simeq\sqrt{2m\mu}$ is broad and the peak height is low,
compared with the cases of higher and lower momenta, as shown in
Fig.~\ref{swBCS}. This low peak height at $p\simeq\sqrt{2m\mu}$ directly
affects the density of states around $\omega=0$, leading to the dip or pseudogap structure in $\rho(\omega)$ in the region $T^{**}\le T\le T^*$. We briefly note that
the result of $T^*>T^{**}$ in the BCS side agrees with the previous
work\cite{Janko}.
\par
On the other hand, although the double peak structure still exists when
$T> T^*$ in the BEC side, the lower peak is
very weak and broad (See
Fig.~\ref{swBEC}.), because the system is close to a gas of two-body
bound molecules. Thus, the lower peak is easily smeared out
in the momentum summation in calculating DOS, $\rho(\omega)=\sum_{\bm p}A({\bm p},\omega)$.
\par
To see the physical background of $T^*$ and $T^{**}$, it is convenient
to recall that, when pairs are formed above $T_{\rm c}$, the
lifetime of Fermi excitations becomes short
due to the strong tendency to form pairs, leading to a broad
quasi-particle peak in the spectral weight $A({\bm p},\omega)$. In
addition, preformed pairs also induce the particle-hole coupling, which
gives the double peak structure in $A({\bm p},\omega)$. Between the two
effects associated with pair formation, while $T^{**}$ is directly related to the latter by
definition, the former is crucial for $T^*$: In the BCS regime, since
the peak-to-peak energy in $A({\bm p},\omega)$ is small, the double-peak
pseudogap structure is easily smeared out by the lifetime effect,
namely, the broadening of two peaks. On the other hand, DOS around
$\omega=0$ is suppressed, when the height of quasiparticle peak at
$\omega\simeq 0$ is lowered by the broadening effect. As a result, one
obtains $T^*>T^{**}$ in the BCS regime, and one may use $T^*$ as the
characteristic temperature where preformed pairs are formed. The double
peak structure can be clearly seen in $A({\bm p},\omega)$ in the
crossover-BEC regime, because the peak-to-peak energy becomes larger
than the peak widths. However, as discussed previously, the lower peak
becomes very broad and the weight becomes small in the BEC regime,
reflecting that the system is close to a gas of two-body bound
molecules. Thus, one cannot see the dip structure in DOS even when the
particle-hole coupling induces the double peak structure in $A({\bm
p},\omega)$ below $T^{**}$. As one further decreases the temperature,
the lower peak in $A({\bm p},\omega)$ shrinks and the peak height
increases, because the system approaches the superfluid phase. This
clearly enhances the intensity of DOS in the negative energy region,
leading to the dip structure below $T^* (<T^{**})$.
\par
The different behaviors of two pseudogap temperatures
$T^*$ and $T^{**}$ imply that the pseudogap region may depend on what we
measure. When we consider a quantity where DOS is crucial, $T^*$ would
give the boundary between the pseudogap region and normal Fermi gas
regime. On the other hand, when we consider a quantity dominated by the
spectral weight, $T^{**}$ would be observed as the boundary between the
two regions. While the specific heat is an example of the former
quantity, the recent photoemission-type experiment\cite{Stewart} is
considered to be an example of the latter.
\par
We note that, when the temperature is lower than the binding energy
$E_{\rm bind}\simeq 2|\mu|$ of a two-body bound molecule in the BEC
regime, thermal dissociation of molecules is suppressed. In this sense,
one may regard this regime as a molecular Bose gas, rather than a
(strongly-correlated) Fermi gas. Including this, we obtain the phase
diagram in Fig.~\ref{phdgm}. In this figure, the pseudogap regime is the
region surrounded by $T^*$ or $T^{**}$, $T_{\rm c}$ and $2|\mu|$. We
briefly note that, except for $T_{\rm c}$, the other temperatures $T^*$,
$T^{**}$, and $T=2|\mu|$ are all crossover temperatures, not
accompanied by any phase transition.
\par
\section{Summary}
\par
To summarize, we have investigated the pseudogap behaviors of an
ultracold Fermi gas in the BCS-BEC crossover above $T_{\rm c}$. We have
calculated the single-particle density of states (DOS), as well as
the single-particle spectral weight, including pairing fluctuations within
the framework of the $T$-matrix approximation. We showed how the pseudogap
structure appears/disappears in DOS above $T_{\rm c}$ in the BCS-BEC crossover
region. Starting from the weak-coupling BCS regime, while the pseudogap in
DOS becomes remarkable near the unitarity limit, it continuously changes into a
fully gapped DOS in the BEC regime.
\par
We determined the pseudogap temperature $T^*$ as the temperature when
the dip structure in DOS disappears. We also introduced another pseudogap
temperature $T^{**}$ at which the double peak structure in the spectral
weight vanishes. We showed that, although both the dip structure in DOS
and the double peak structure in the spectral weight originate from
pairing fluctuations, the two temperatures are very different from each other in
the BCS-BEC crossover. While one finds $T^*>T^{**}$ in the BCS
side ($(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$<$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ 0$), $T^{**}$ becomes much higher
than $T^{*}$ in the BEC side ($(k_{\rm F}a_s)^{-1}\ \raise.3ex\hbox{$>$}\kern-0.8em\lower.7ex\hbox{$\sim$}\ 0$). This means
that the pseudogap region may depend on the physical quantities which we
measure. In particular, since the recent photoemission-type
experiment\cite{Stewart} is related to the spectral weight, one expects
that $T^{**}$ would work as the pseudogap temperature in this
experiment. Including $T^*$ and $T^{**}$, we determined the pseudogap
region in the BCS-BEC phase diagram with respect to temperature and the
strength of pairing interaction. Since the pseudogap effects are crucial
in understanding strong-coupling Fermi superfluids, our results would be
useful in the search for the pseudogap region in the BCS-BEC crossover
regime of ultracold Fermi gases.
\acknowledgments
\par
We would like to thank A. Griffin for valuable discussions and
comments. This work was supported by a Grant-in-Aid for Scientific
research from MEXT in Japan (18043005,20500044).
\par
\section{Introduction}
This paper is concerned with the mechanism by which a holographic
boundary theory can describe bulk physics. As emphasized in
\cite{1}\cite{2}\cite{3} a holographic description entails a vast
reduction of the number of degrees of freedom needed to describe a
region of bulk spacetime. Despite the large amount of
circumstantial evidence for the holographic principle it is still
very mysterious how such a sparse set of degrees of freedom can
describe all local bulk physics. A particular challenge is to
understand how events deep in the interior of the bulk space are
recorded in the instantaneous (Schroedinger Picture) state vector
of the boundary theory long before a signal can propagate from the
event to the boundary \cite{4}.

Let us consider an example. For definiteness we will consider the
3+1 dimensional super Yang Mills description of bulk physics in 5
dimensional AdS space \footnote{The usual $S_5$ factor in the
correspondence plays no role in the present paper and will be
ignored. }\cite{5}\cite{6}\cite{7}.
We will be interested in the limit of large radius of curvature compared to
the
string scale. In this limit stringy excitations are negligible and the
low energy supergravity approximation to bulk physics is reliable.
On the SYM side we must take $N$ large keeping the 't Hooft coupling
constant $g^2N$ fixed and large.

Suppose as in \cite{4} an event takes place at the center
\footnote{Since AdS is a homogeneous space it has no preferred
points. Center here means the origin of cavity coordinates. } of AdS
which radiates a gravitational wave toward the boundary.
No signal including the wave itself can arrive at the boundary until a
certain time elapses.
If the original event is well localized near the center of a very
large AdS space, the bulk fields near the boundary will initially be
almost exactly spherically symmetric and time independent.
In fact the only bulk field of importance is the time--time component
of the metric whose behavior near the boundary records the presence of
a certain amount of energy in the interior.
On the SYM side this means that the energy momentum tensor is almost
exactly homogeneous, consisting of a homogeneous energy density and
the pressure needed to make $\langle{T^{\mu}_{\mu}}\rangle = 0$.
However, this effect is featureless and provides no information about the
profile of the gravitational wave.
In addition, it is vanishingly small in the large $N$ limit since
corrections to
the metric due to the energy of the wave are smaller than the wave
itself by factors of $\sqrt{G_5} \sim N^{-1}$.
We refer the reader to \cite{4} for notation and conventions.
Thus, within a neighborhood of the boundary, all supergravity field
functionals retain their original, vacuum--like expectation
values, at least until light has had a chance to propagate from $r
\sim 0$ to the boundary.
The implication for the SYM theory is that all expectation values of local
gauge invariant operators corresponding to the bulk fields, as well
as expectation values of products of such operators, should initially be
identical
to their vacuum values and contain no information about the propagating wave.
This situation continues until the outgoing wave arrives at the
boundary.
At that time the perturbation on the boundary
becomes nonzero and begins oscillating over the whole 3-sphere.
From the SYM point of view, the energy momentum tensor and its
products suddenly begin to coherently oscillate.
The features of the gravitational wave can then be recovered from
expectation values of the energy momentum tensor and its products.
Thus during the time when the wave vanishes within a
neighborhood of the boundary, the SYM theory is excited to a
non-vacuum-like state which we cannot distinguish from the
vacuum by taking expectation values of local gauge invariant operators or
any of their products\footnote{ We are assuming that local gauge
invariant operators are in
one--to--one correspondence with local observables of the bulk theory
evaluated near the boundary.}.
In \cite{4}, it was argued that the holographic boundary theory must contain
special non-local operators, called precursors, that distinguish such
states and code in detail local bulk information.
The precursors should become increasingly non-local the further the
corresponding bulk process is from the boundary in accordance with the UV--IR
connection \cite{3}.
In those cases in which the boundary theory has a gauge symmetry, the
precursors must also be gauge invariant since they contain
physical information.
In the case of ${\cal{N}} = 4$ SYM theory, this suggests that the
precursors are Wilson loops whose size is dictated by the UV--IR
relation.
We remark that there exists a rich class of generalized, equal--time
Wilson loops that are candidates for the precursors.
Apart from conventional spatial Wilson loops, we may consider
spatial Wilson loops with insertions of local gauge covariant operators.
For example, we can consider the operator
\begin{equation}
Tr P F_{\mu\nu}(x_1) F^{\mu\nu}(x_2) W ,
\end{equation}
where $W$ is a Wilson loop passing through the
points $x_1$ and $x_2$ and $P$ denotes path ordering.
Presumably, such operators and their products form a complete set of
observables in the boundary theory.
In \cite{4}, it was shown how a plane gravitational wave can be modeled
by ``squeezed'' states constructed in free field theory.
In particular, it was shown how to account for the oscillating energy
density and the apparent acausality in the behavior of the energy
momentum tensor required by the correspondence.
It was found that this behavior is consistent with bounds required by
general principles of quantum mechanics.
In addition, apart from possible numerical coefficients, the free
field theory model reproduces corrections to the linearized
solution induced by non-linear terms in Einstein's equations involving
the energy density of the wave.
In this note we model bulk waves with ``squeezed states'' constructed
in the interacting SYM theory.
We compute expectation values of local gauge invariant operators
in the ``squeezed states'' and match them with the boundary data of
the wave.
We show that expectation values of products of local gauge
invariant operators contain no additional information about the
profile of the wave in agreement with bulk causality.
Our computations are done in the 't Hooft limit keeping only the
leading terms in the $1/N$ expansion.
Finally, using the correspondence, we calculate expectation values of Wilson
loops in the ``squeezed states'' and show how they carry non-trivial
information if their size is as dictated by the UV--IR
connection.
We discuss the implications of our results for holography at the end.

Before concluding the introduction we will review some facts and
conventions about the AdS--CFT correspondence. The metric of AdS in
cavity coordinates is
\begin{eqnarray}
ds^2 &=& R^2\left[{(1+r^2)^2 \over (1-r^2)^2}dt^2 - {4 \over
(1-r^2)^2}(dr^2 +r^2 d\Omega^2) \right]
\cr &=& R^2 dS^2,
\end{eqnarray}
where the coordinates and $dS^2$ are dimensionless and $d\Omega^2$
is the metric of the unit 3-sphere. The center of AdS means
the point $r=0$. Near a point of the boundary at $r=1$ the metric
has the form
\begin{equation}
ds^2=R^2\left[{1\over z^2}(dt^2 -dz^2 - dx^i dx^i) \right]
\end{equation}
where $z=1-r$ and $x^1,x^2, x^3$ replace the coordinates of the
3-sphere.
For our purposes the metric (1.3) is to be regarded as a local
approximation to the cavity metric. It is true, but irrelevant to
our purposes, that the same metric also gives an exact description
of a patch of AdS space. In any case we will call these the
half--plane coordinates. The two dimensionless parameters $R/l_s$
and $g_s$ of the bulk theory -- $l_s$ is the string length scale --
are related to the SYM quantities $N$ and $g$ by
\begin{eqnarray}
R/l_s&=&(g^2 N)^{1/4} \cr g_s &=& g^2.
\end{eqnarray}
The 5 and 10 dimensional Newton constants are given by
\begin{eqnarray}
G_5 &=& G_{10}/R^5 \cr G_{10}&=& g_s^2 l_s^8.
\end{eqnarray}
We set $R = 1$ for simplicity.
The string length scale is then given by $l_s = 1/(g^2 N)^{1/4}$.
Throughout we neglect numerical factors of order unity.
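As a quick numerical illustration of this dictionary (a sketch with illustrative values, not part of the derivation), the relations above with $R = 1$ give $\sqrt{G_5} \sim 1/N$ for any value of the 't Hooft coupling:

```python
# Sketch (ours): the parameter dictionary above in units R = 1,
# dropping numerical factors of order unity.
def bulk_params(N, g):
    ls = (g**2 * N) ** -0.25   # string length:   R/l_s = (g^2 N)^{1/4}
    gs = g**2                  # string coupling: g_s = g^2
    G10 = gs**2 * ls**8        # 10d Newton constant: G_10 = g_s^2 l_s^8
    G5 = G10 / 1.0**5          # G_5 = G_10 / R^5 with R = 1
    return ls, gs, G5

# sqrt(G_5) = 1/N for any 't Hooft coupling: corrections to the metric
# from the wave's energy are 1/N-suppressed, as stated in the text.
for N, g in [(10, 1.0), (100, 2.0), (1000, 0.5)]:
    ls, gs, G5 = bulk_params(N, g)
    print(N, G5**0.5 * N)      # -> 1.0 in each case
```

Note that the $g$-dependence cancels: $G_5 = g^4 (g^2N)^{-2} = 1/N^2$ exactly, up to the order-one factors we neglect.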
\setcounter{equation}{0}
\section{Bulk Waves}
As in \cite{4}, we model bulk waves with ``squeezed states'' in the
boundary theory.
Our goal is to study expectation values of various
operators in the ``squeezed states'' and
identify the precursors that store local bulk information.
For definiteness, let us consider a gravitational wave propagating
radially outward from $r \sim 0$.
In the next section, we will be interested in the case of a dilaton wave.
Assume that the wave is in one of the lowest spherical harmonics on
the 3--sphere.
In half--plane coordinates the plane fronted wave has the form
\begin{equation}
\gamma_{\mu \nu}(z,x,t) = \xi_{\mu\nu} \sqrt{G_5} {\Phi(z,t)\over z^2},
\end{equation}
where $\gamma_{\mu \nu}(z,x,t)$ is defined by
\begin{equation}
ds^2 = \left[{1\over z^2}(dt^2 -dz^2 - dx^i dx^i) \right] + \gamma_{\mu
\nu}(z,x,t)dx^{\mu}dx^{\nu}
\end{equation}
and $\xi_{\mu\nu}$ is a transverse traceless polarization tensor
with non--vanishing components in the $x$ directions. The
polarization tensor is assumed normalized to unity.
Far away from the original sources, $\Phi(z,t)$ satisfies the same wave
equation as a free, minimally coupled, massless scalar field in AdS.
We use normalization conventions so that $\Phi(z,t)$ is canonically
normalized.
Thus we keep the amplitude $|\Phi(z,t)|$ finite and independent of $N$,
so that the energy of the wave is finite.
The corresponding operator in the SYM theory is $\xi_{ij}T_{ij}/N$.
The 2-point function of this operator is of order $N^0$.
Non-linear terms in the gravitational field equations are
suppressed by additional factors of $\sqrt{G_5} \sim N^{-1}$ and will
be ignored in this
paper \footnote { In \cite {4}, these effects were included and it was
shown how they can be reproduced in free field theory up to possible
numerical coefficients.
In the large N limit they are suppressed. However, they are important
to recover consistency in the behavior of the energy momentum tensor
required by general principles of quantum mechanics.
We refer the reader to Refs.~\cite{4}\cite{10} for a discussion of this
point.}.
Near the boundary, normalizable solutions to the wave equation behave as
follows
\begin{equation}
\Phi (z, t) \sim z^4 \int{d\omega |\omega|^3 \phi(\omega) e^{-i\omega
t}},
\end{equation}
with $\phi(\omega) = \phi^*(-\omega)$ since the field is real.
According to the AdS--CFT correspondence, the wave makes a
contribution to the SYM energy momentum tensor given by \cite{11}
\begin{equation}
\langle{T_{ij}\over N}\rangle \sim - \xi_{ij} z^{-4} \Phi(z,t)|_{z=0}.
\end{equation}
We are interested in describing a wave emitted at a particular time $t_0$ in
the past, near $r \sim 0$, so that, when $t < 0$, the perturbation
vanishes within a neighborhood of the boundary.
This can be achieved if we choose the function $|\omega|^3
\phi(\omega)$ to be analytic in
the upper half $\omega$--plane and have the right asymptotic behavior as
$\omega
\rightarrow i \infty$. Then the boundary data vanish for $t < 0$ and so does
the contribution to $\langle T/N \rangle$.
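The contour argument can be checked numerically with a toy profile of our own choosing, $|\omega|^3 \phi(\omega) = 1/(\omega + i)^4$, which is analytic in the upper half plane with the required fall-off; its Fourier transform indeed vanishes for $t < 0$:

```python
import numpy as np

# Toy check (our choice of profile, not the paper's): if |w|^3 phi(w) is
# analytic in the upper half plane, the boundary data
# int dw |w|^3 phi(w) e^{-iwt} vanish for t < 0.  Take
# F(w) = 1/(w + i)^4, whose only pole, w = -i, lies in the lower half plane.
dw = 5e-4
omega = np.arange(-200.0, 200.0, dw)
F = 1.0 / (omega + 1j) ** 4

def boundary_data(t):
    # Riemann-sum approximation to the Fourier integral
    return np.sum(F * np.exp(-1j * omega * t)) * dw

# Residue calculus gives int dw e^{-iwt}/(w+i)^4 = (pi t^3/3) e^{-t}, t > 0.
print(abs(boundary_data(-2.0)))                                    # ~ 0
print(abs(boundary_data(2.0) - np.pi * 8.0 / 3.0 * np.exp(-2.0)))  # ~ 0
```

For $t < 0$ the contour closes in the upper half plane, where the integrand is analytic, so the signal is identically zero before $t = 0$.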
In general, the boundary data will be non-vanishing when $t > 0$ since
$\phi(\omega)$ will have singularities in the lower half-plane.
Also, causality of the bulk theory insures that the function
$\Phi(z,t)$ describes a wave which, at any $ t_0 < t < 0$, exactly vanishes
for $z < |t|$.
In addition, bulk causality requires that all local bulk fields
evaluated in a neighborhood of the boundary, as
well as products of such fields, retain their vacuum expectation
values until $t = 0$.
Therefore, on the SYM side, expectation values of local gauge
invariant operators and their products must be identical to their
vacuum values until $t = 0$.
\bigskip
\noindent{ \bf Squeezed States in Yang--Mills Theory }
We propose that during the propagation of the wave, the SYM theory is
excited in the ``squeezed state'' defined by
\begin{equation}
\left| \Psi \right> = \exp { \left[{ i \xi_{ij} \over N} \int{d^3 \vec{x}
dt f(t) T_{ij} (\vec{x},t)} \right] } \left| \Omega \right>,
\end{equation}
where $|\Omega\rangle$ is the vacuum of the interacting theory and $f(t)$ is
some real function related to the boundary data of the
wave.
The polarization $\xi_{ij}$ is taken to be traceless and symmetric.
It will turn out to be the polarization of the wave.
The state thus defined is unit--normalized \footnote{In the free theory, to
leading order in $1/N$, it reduces to the
``squeezed state'' considered in \cite{4}, up to some normalization
factor. Also, the energy momentum tensor is normal--ordered so
that the vacuum energy density is zero.}.
Our motivation in writing Eq(2.5) is as follows.
In the large N limit with the 't Hooft coupling held fixed and large,
$|\Psi\rangle$ corresponds to a coherent state in the bulk.
To see this note that if we Fourier expand any local gauge invariant operator
$O(\vec{x},t)$
\begin{equation}
O(\vec{x},t) = \int_{\omega > 0}{d\omega d^3 \vec{k}
{\cal{O}}(\omega,\vec{k}) e^{-i
\omega t + i \vec{x}\cdot \vec{k}} +
h.c.},
\end{equation}
then, to leading order in $1/N$, the positive frequency modes
${\cal{O}}(\omega,\vec{k})$ behave like annihilation operators and the
negative frequency modes
${\cal{O}}^{\dagger}(\omega,\vec{k})$ behave like creation
operators \cite{13}.
In particular, their commutator is a c--number function.
Thus, up to some irrelevant normalization factor, the state $|\Psi
\rangle$ takes the form
\begin{equation}
\left| \Psi \right> \sim \exp { \left[ \int{
d\omega f(\omega) {\cal{O}}^{\dagger}(\omega,0) } \right] } \left|
\Omega \right>.
\end{equation}
We see that if we identify the SYM
vacuum with the bulk vacuum and the modes ${\cal{O}}(\omega,\vec{k})$
with the Fourier modes of the bulk field corresponding to $O$,
$\left| \Psi \right>$ becomes a coherent bulk state.
This can always be done in the limit we are considering since in this
limit, the relation
\begin{equation}
O(x) = z^{-4}\Phi(z,x)|_{z \rightarrow 0}
\end{equation}
holds as an exact operator relation \cite{12}\cite{13}.
Coherent states in the bulk describe classical waves.
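A single-oscillator toy model (ours, standing in for one mode ${\cal{O}}(\omega,\vec{k})$) makes this concrete: when the mode commutator is a c-number, the exponential acts as a displacement operator, and expectation values shift by a classical amplitude linear in the source:

```python
import numpy as np

# Toy model (ours): with x = a + a^dag and p = -i(a - a^dag), the
# commutator [x, p] = 2i is a c-number, so exp(i*lam*x)|0> is a coherent
# state and <p> shifts by the classical amplitude 2*lam -- linear in the
# source, like the commutator term that produces <T> in the text.
N = 40                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)    # annihilation operator
x = a + a.conj().T
p = -1j * (a - a.conj().T)

lam = 0.3
w, V = np.linalg.eigh(x)                      # x is hermitian
U = V @ np.diag(np.exp(1j * lam * w)) @ V.conj().T   # exp(i*lam*x)
psi = U[:, 0]                                 # exp(i*lam*x)|0>

print((psi.conj() @ p @ psi).real)            # -> 2*lam = 0.6
print((psi.conj() @ x @ psi).real)            # -> 0
```

The shift appears only in the quadrature conjugate to the one exponentiated, exactly as the expectation value below is produced entirely by the c-number commutator.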
Next we calculate the expectation value
\begin{equation}
\left<\Psi\right| {T_{ij}(\vec{y},\tau)\over N} \left| \Psi \right>
= {1 \over N}\left < \Omega \right| e^ {-{ i \xi_{mn} \over N} \int{ f(t)
T_{mn}(x)} }
{T_{ij}(y)} e^ { { i \xi_{mn} \over N} \int{ f(t) T_{mn}(x)}
} \left| \Omega \right >.
\end{equation}
In the 't Hooft limit and to leading order in $1/N$, the commutator
$\left[ T_{ij}(y),T_{mn}(x) \right]$ is a c--number function
proportional to the central charge of the theory.
Therefore, it is of order $N^2$.
In fact, it is independent of the 't Hooft coupling and can be
calculated in the free theory.
This function vanishes both inside and outside the light-cone; it
receives contributions only when the points $x$ and $y$ are at
light--like separation.
Hence, we can commute $T(y)$ past the exponential picking a factor
proportional to this commutator.
Recall also that the energy momentum tensor has zero expectation value
in the vacuum.
Then, to leading order in $1/N$, the expectation value is given by
\begin{equation}
\left<\Psi\right| {T_{ij}(\vec{y},\tau) \over N} \left| \Psi \right>
= {i\xi_{mn} \over N^2}\int {dtd^3\vec{x}f(t) \left[
T_{ij}(\vec{y},\tau),T_{mn}(\vec{x},t) \right]}.
\end{equation}
The commutator is determined by the imaginary part of the time-ordered
2-point function of the energy momentum tensor, and so
\begin{equation}
\left<\Psi\right| {T_{ij}(\vec{y},\tau) \over N} \left| \Psi \right>
= -\xi_{mn}{2 \over N^2}\int {dtf(t)\epsilon(\tau-t)
Im\int d^3\vec{x}\left< T_{ij}(\vec{y},\tau)T_{mn}(\vec{x},t) \right>} +
O({1 \over N}).
\end{equation}
The expectation value inside the integral is in the vacuum.
All other components of the energy momentum tensor have
expectation values of order $1/N$ in this state.
The details of the calculation can be found in the appendix.
Here, we write down the results.
The spatial integral is imaginary and independent of $\vec{y}$.
Simple dimensional analysis shows that it behaves like
\begin{equation}
{1 \over |\tau - t|^5}.
\end{equation}
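This power follows from dimensional analysis (a step we spell out here): the energy momentum tensor has dimension four in a four--dimensional CFT, so its 2-point function falls off as the eighth power of the invariant separation,
\[
\int d^3\vec{x}\, \left< T_{ij}(\vec{y},\tau)T_{mn}(\vec{x},t) \right>
\;\sim\; \int d^3\vec{x}\, \frac{1}{\left[(\vec{x}-\vec{y})^2 - (\tau-t)^2\right]^{4}}
\;\sim\; \frac{1}{|\tau-t|^{5}},
\]
the spatial integral removing three of the eight inverse powers of length, and $|\tau - t|$ being the only remaining scale.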
Thus the expectation value of the energy momentum tensor in the
``squeezed state'' is given by
\begin{equation}
\left<\Psi\right| {T_{ij}(\vec{y},\tau) \over N} \left| \Psi \right>
\sim \xi_{ij} \int {dtf(t) {1 \over (\tau - t)^5}}.
\end{equation}
If we Fourier transform $f(t)$
\begin{equation}
f(t) = \int{d\omega {f(\omega) \over \omega} e^{-i \omega t}},
\end{equation}
with $f(\omega) = -f^{*}(- \omega)$, we obtain
\begin{equation}
\left<\Psi\right| {T_{ij}(\vec{y},\tau) \over N} \left| \Psi \right>
\sim i\xi_{ij} \int {d\omega f(\omega) |\omega|^3 e^{-i \omega \tau}}.
\end{equation}
Comparing with Eq(2.3), we must set
\begin{equation}
\phi(\omega) \sim i f(\omega).
\end{equation}
Note that we have chosen $\phi(\omega)$ so that the boundary data
vanish for $t < 0$.
This does not imply that $f(t)$ is zero for $t < 0$.
Finally, consider expectation values of products of the energy
momentum tensor.
Using the same method as before, we can easily see that these will
differ from their vacuum values only by products of commutators.
Schematically, we have
\begin{equation}
{1\over N^2}\left<\Psi\right| {T_1 }{T_2 } \left| \Psi \right>
= {1 \over N^2}\left< \Omega \right|{T_1 }{T_2 }\left|\Omega\right> -
{1 \over N^2}\int{ \left[T_2,T\right]} \int{\left[T_1,T\right]}.
\end{equation}
The non-trivial component is just the product $\left<\Psi\right| {T_1
} \left| \Psi \right>\left<\Psi\right| {T_2 } \left| \Psi
\right>$. Therefore, for $t_1, t_2 < 0$, the expectation value is identical to
its vacuum value since each factor vanishes by construction.
In any case, products of local gauge invariant operators contain no additional
information about the profile of the wave.
This is of course a consequence of large $N$ factorization.
\setcounter{equation}{0}
\section{Wilson Loops}
In this section we show how the expectation value of a Wilson loop
in a squeezed state carries non-trivial information about a
dilaton wave. Since we are interested in the instantaneous state
vector, a Wilson loop will typically mean a spatial loop with no
extension in the time direction.
To model a dilaton wave in the SYM theory, we replace
$\xi_{ij}T_{ij}/N$ with $O = TrF^2/N$ in Eq(2.5).
We consider a conventional Wilson loop
\begin{equation}
W({\cal{C}}) = Tr P e^{i \oint {A_{\mu}dx^{\mu}}}
\end{equation}
for simplicity.
In the 't Hooft limit and large 't Hooft coupling, the
vacuum expectation value of this loop can be obtained from the area of
a minimal world-sheet in AdS that ends on the loop at the boundary
\cite{8}.
We consider a spatial Wilson loop evaluated at $ \tau
< 0$ and oriented in the $x_1 - x_2$ plane.
We take the loop to be circular with size $a$. We regularize the VEV
of this loop by dividing out the divergent term proportional to the
circumference.
We wish to calculate the expectation value $\left<\Psi\right| {W}
\left| \Psi \right>$ in the case when $f$ is small.
In this case, we can expand the exponential keeping only linear terms
in $f$.
We do not expect higher order terms to modify our conclusions
significantly, since in the 't Hooft limit their expectation values
should factorize into products involving the linear term together with
featureless (independent of $\tau$ and $a$) factors such as the VEV of
products of $O$.
Then the expectation value reduces to the following expression
involving the commutator of the loop with the operator $O$
\begin{equation}
\left<\Psi\right| {W}\left| \Psi \right> =
\left<W\right> + i \int {dtd^3\vec{x}f(t)\left< \left[ W(\tau),O(\vec{x},t)
\right]\right>}.
\end{equation}
All expectation values in the RHS of the equation are vacuum
expectation values.
The first term is irrelevant to us since, by conformal invariance, it
should be independent of the size of the loop $a$ (and $\tau$).
In terms of time--ordered vacuum expectation values the second term
takes the form
\begin{equation}
i \int {dtd^3\vec{x}f(t) \epsilon(\tau-t) \left[\left<
W(\tau)O(\vec{x},t)\right> - \left<
W^{\dagger}(\tau)O(\vec{x},t)\right>^{*}\right] }.
\end{equation}
The hermitian conjugate of the loop operator is obtained by reversing
the orientation of the loop in the $x_1-x_2$ plane.
The Euclidean version of the ``2-point functions'' appearing in Eq(3.3) has
been computed in \cite{9} using the correspondence.
One first finds a minimal world-sheet with the loop as its boundary.
The world-sheet in turn induces a source term in the dilaton field
equations through the coupling
\begin{equation}
{1 \over 2 \pi \alpha'} \int{d^2\sigma \sqrt{h}} e^{\phi \over 2}.
\end{equation}
Here, $h_{ij}$ is the metric induced on the world-sheet when the
background metric is in the Einstein frame.
The term in the world-sheet action involving the curvature of the
world-sheet is
suppressed when the 't Hooft coupling is large and can be ignored.
The 2-point function is given by the boundary data of the
dilaton profile obtained by solving the classical field equations in
the presence of the source.
It depends on only two parameters: $r$, the polar co-ordinate of the
operator $O$ in the plane defined by the loop, and its perpendicular
distance from that plane, $y = \sqrt{(t-\tau)^2 + x_3^2}$ \cite{9}:
\begin{equation}
\left< W(\tau)O(\vec{x},t)\right> \sim {\left< W\right> \over N}
{a^4 \over {\left[ {(y^2 + r^2 - a^2)}^2 + 4a^2y^2 \right]}^2}.
\end{equation}
We see that the 2-point function behaves like
$1/d^4$ when the operator approaches the loop, where $d = \sqrt{ y^2 +
(r-a)^2}$ is the distance of the operator from the loop.
To obtain the expression in Minkowski signature, we replace
$(\tau - t)^2 \rightarrow -(\tau - t)^2 + i\epsilon$.
Before we continue with our calculation, we make some remarks about
this correlation function.
First, we see that it is of order $N^0$ since the expectation value of
the loop itself is of order $N$.
In fact, we may think of the operator $O = TrF^2 / N$ as a small
Wilson loop.
The disconnected part of the 2-point function is zero since $O$ has
vanishing VEV.
The connected part of the 2-point function receives contributions from
world-sheets in the bulk that have the two loops as boundaries.
The topology of these surfaces implies that the 2-point
function is of order $N^0$ in the large $N$ expansion.
Second, reversing the orientation of the loop does not change the
result for the dilaton profile since the coupling of the
world-sheet in the bulk to the dilaton field, Eq(3.4), remains the same.
Hence, Eq(3.2) reduces to the following expression
\begin{equation}
\left<\Psi\right| {W}\left| \Psi \right> =
\left<W\right> - 2 \int {dtf(t)\epsilon(\tau - t)Im \int d^3\vec{x} \left<
W(\tau)O(\vec{x},t)\right>}.
\end{equation}
First we do the spatial integral over the 2-point function and obtain
the imaginary part as a function of the ratio
\begin{equation}
\lambda = {|\tau - t| \over a}.
\end{equation}
We also rescale $x_3$ and $r$ so that the variables of integration are
dimensionless.
In polar co-ordinates the integral takes the form
\begin{equation}
I = {2\pi \over a}\int{drdx_3{r \over \left[ x_3^2 - A^2 +
i\epsilon \right]^2 \left[x_3^2 - B^2 + i\epsilon\right]^2}},
\end{equation}
where
\begin{equation}
A^2 = {\lambda^2 - (r - 1)^2}
\end{equation}
and
\begin{equation}
B^2 = {\lambda^2 - (r + 1)^2}.
\end{equation}
The integrand has poles when $A^2$ and $B^2$ are positive.
Therefore the integral has non-vanishing imaginary part.
We explain the physical origin of these poles at the end of this section.
We integrate over $x_3$ first, closing the contour from below and
picking up the residues at the poles in the lower-half plane.
Only non-negative real poles contribute to the imaginary part as a result
of the $i\epsilon$ prescription.
In the appendix, we analyze the behavior of the imaginary part of the
integral for three cases.
When $\lambda \gg 1$, we find
\begin{equation}
Im(I) \sim {1 \over a\lambda^5} = {a^4 \over |\tau - t|^5}.
\end{equation}
This is identical to the behavior found in Eq(2.12) for the case of
local operators.
This is of course the behavior one should expect to see.
In this case, the temporal separation between the loop and the operator
O is much bigger than the size of the loop, and we should be able to
use the operator product expansion of the loop in terms of local gauge
invariant operators to calculate the 2-point function.
Note also that the 2-point function behaves like
\begin{equation}
\left<WO\right> \sim {a^4 \over \left[x^2 - (\tau - t)^2\right]^4}
\end{equation}
when $\lambda \gg 1$, as the 2-point function of $O$ with itself.
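This limit can also be checked directly from Eq(3.5) (a quick numerical sketch of ours): for $y \gg a$ the bracket tends to $(y^2 + r^2)^2$, reproducing the $1/(\mbox{separation})^8$ fall-off of a dimension-four operator:

```python
# Quick numerical check (ours) of the large-separation limit of Eq(3.5):
# for y >> a the bracket tends to (y^2 + r^2)^2, giving the
# 1/(separation)^8 fall-off of the <OO> correlator.
def wo(y, r, a=1.0):
    return a**4 / ((y**2 + r**2 - a**2)**2 + 4.0 * a**2 * y**2)**2

for y in (10.0, 100.0, 1000.0):
    # ratio of <WO> to a^4/(y^2 + r^2)^4 approaches 1 as y grows
    print(y, wo(y, 3.0) * (y**2 + 3.0**2)**4)
```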
As $\lambda \rightarrow 1$, the imaginary part increases; it is
largest when $\lambda \sim 1$, where it behaves like
\begin{equation}
Im(I) \sim {1 \over a( \lambda - 1 )^{3/ 2}}.
\end{equation}
When $\lambda \ll 1$, we find that the imaginary part tends
to zero like
\begin{equation}
Im(I) \sim {\lambda^2 \over a}.
\end{equation}
We can understand the result as follows.
As explained below, in this case, the imaginary part of the integral
receives contributions only when the operator is very close to the
loop at $r \sim 1$ and $x_3 \sim 0$.
Their temporal separation is also small.
Thus, using the Heisenberg equations of motion, we can approximate
$O(\vec{x},t)$ with
\begin{equation}
O(\vec{x},t) = O(\vec{x},\tau) - \partial_tO(\vec{x},t)|_{t=\tau}(\tau
-t).
\end{equation}
We see that the operator $O$ commutes with the Wilson loop unless the two are
in contact.
Essentially, only a single point of the loop contributes to the
commutator, a measure zero effect.
The commutator in turn determines the imaginary part of the integral
as we can see from Eq(3.2) and Eq(3.3). So we expect the imaginary part of the
integral to vanish like a power of $\lambda^2$ or faster.
Let us now see how the expectation value of the Wilson loop in the
``squeezed state''
\begin{equation}
\left<\Psi|W(\tau)|\Psi\right> = -2 \int dt f(t)\epsilon(\tau - t) Im(I)
\end{equation}
carries information about the corresponding dilaton wave.
The imaginary part of the integral is a function of $\lambda = |\tau -t|/a$.
As before, we choose $f(t)$ so that $ \left<\Psi|O(\tau)|\Psi\right>$ exactly
vanishes when $\tau < 0$. At any $\tau < 0$, the
corresponding bulk wave vanishes for $z < |\tau|$.
On the other hand, the expectation value of the Wilson loop has non-trivial
time dependence when $\tau < 0$.
Early in the remote past, when $|\tau| \gg a$, we can approximate
$Im(I) \sim 1/|\tau - t|^5$.
Therefore, the expectation value tends to zero since it behaves
exactly the same way as the expectation value of local gauge
invariant operators given in Eq(2.13).
When $|\tau| \ll a$, the imaginary part of $I$ is essentially
independent of $\tau$ over most of the domain of integration, except
for a small interval where $|t| \sim |\tau|$.
Thus the expectation value receives its time--dependence from this small
region of integration.
Within this region though, $\lambda \ll 1$ and so the imaginary
part of $I$ is tiny.
Hence, the expectation value is featureless having essentially no
time--dependence.
When $|\tau| \sim a$, the expectation value receives non-trivial
time--dependence due to competition effects between $f(t)$ and the imaginary
part of $I$.
It receives its biggest contribution from the region of integration
near $t \sim 0$ since then $\lambda \sim 1$ and the imaginary part of
$I$ diverges.
When $|\tau| \sim a$, the wave is at co-ordinate distance $\sim a$ from the
boundary.
Thus the Wilson loop ``detects'' the wave when its distance from the
boundary is comparable to the size of the loop, and reproduces
details that depend on the profile of the wave.
This is of course a manifestation of the UV--IR relation \cite{3}.
Another interesting example is the case when $f(t)$ is oscillatory
near $t = 0$ and exponentially small otherwise. The oscillations are
well concentrated near $t = 0$. For example, we may take $f(t)$ to be a
polynomial in $t$ times a gaussian. At any time $\tau$ other than
zero, the corresponding bulk wave should be oscillatory near $z = |\tau|$
and very small in a neighborhood of the boundary. In this case,
expectation values of local gauge invariant operators behave like
\begin{equation}
\left<\Psi|O(\tau)|\Psi\right> \sim {f(0)\delta t \over |\tau|^5}
\end{equation}
and so they remain small unless the wave is at the boundary at $\tau
= 0$. Here, $\delta t$ is the characteristic decay time of the
oscillations in $f(t)$. The expectation value of the Wilson loop
though has very different time--dependence. Again, using the results
for the behavior of the imaginary part of $I$ as a function of the
ratio $|\tau|/a$, one can see that the expectation value is
oscillatory when $|\tau| \sim a$ and suppressed when $|\tau| \gg a$ or
$|\tau| \ll a$.
In short, when the wave is very close to the boundary, only small Wilson loops
are excited. At that time, however, expectation values of local gauge
invariant
operators begin to oscillate. On the other hand, when the wave is far
from the boundary, only big Wilson loops are excited. This shows that the
precursors are in fact Wilson loops.
Finally, let us try to understand the physical origin of the poles in
the integrand in Eq(3.8).
When the denominators vanish, the 2-point function has non-vanishing
imaginary part since then the $i\epsilon$ prescription for
treating the poles becomes relevant.
As one can see from Eq(3.2) and Eq(3.3), the imaginary part of the 2-point
function is determined by the vacuum expectation value of the commutator
between the Wilson loop and the operator $O$.
Therefore, at the poles the commutator is non-vanishing.
Now, the commutator can be non-zero only when a part of the loop of
non-trivial measure lies on the light-cone of $O$. The vector
potential at any point whose separation from $O$ is light-like has a
non-vanishing commutator with $O$, and all such points contribute
to the commutator between the Wilson loop itself and $O$.
This is precisely what happens at the poles as we show below.
The imaginary part of the 2-point function vanishes when the loop is
not intersecting the light-cone, and the contribution to the integral
from this region of integration is real.
Then, the 2-point function is non-singular as well.
Suppose the operator is at $t=0$.
Then the loop can intersect with the past light-cone of $O$ only.
For $|x_3| < \lambda$, the light-cone intersects the $x_1 - x_2$ plane
at a circle of radius
\begin{equation}
\rho = \sqrt{\lambda^2 - x_3^2}.
\end{equation}
The polar co-ordinate $r$ of $O$ is the distance of the center of this
circle from the center of the loop.
The point of the loop closest to the center of the light-cone circle is
at distance $|r-1|$ from it, while the one that is the farthest is at
distance $r+1$.
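The values of $A$ and $B$ follow from elementary geometry (a step we make explicit): two coplanar circles whose centers are a distance $r$ apart, with radii $\rho$ and $1$, are tangent when $\rho = |r - 1|$ or $\rho = r + 1$. With $\rho^2 = \lambda^2 - x_3^2$ this gives
\[
x_3^2 = \lambda^2 - (r-1)^2 = A^2
\qquad \mbox{or} \qquad
x_3^2 = \lambda^2 - (r+1)^2 = B^2 ,
\]
precisely the locations of the poles of the integrand in Eq(3.8).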
Clearly, when $A^2$ is negative, the loop is outside the light-cone and
so no contributions to the imaginary part of the integral arise from
this region of integration for any $\lambda$.
When $A^2$ is positive the loop and the light-cone circle intersect.
We may choose, however, $|x_3| = A$ so that the two circles are
tangent to each other.
This is precisely when the integrand is singular.
When the two circles are tangent, the set of points on the
loop that lie close to the light-cone has larger measure, and we get a
contribution to the commutator and a pole in the 2-point function.
For $\lambda > 1$ the light-cone circle is tangent to the loop from the
outside. The opposite is true for $\lambda < 1$. In this case, the light-cone
circle becomes smaller and smaller as $\lambda \rightarrow 0$ and the
effect ceases to be important.
For $\lambda \geq 1$, we can choose $x_3$ small enough so that $\rho$ is
bigger than the radius of the loop.
If $B^2$ is positive, then, for $|x_3| < B$, the loop is inside the
light-cone.
For $|x_3| = B$ the two circles are tangent and again we have a pole
in the 2-point function.
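With the loop radius normalized to one, the two tangency conditions described above can be stated explicitly. The light-cone circle of radius $\rho = \sqrt{\lambda^2 - x_3^2}$ is tangent to the loop when $\rho = r + 1$ or $\rho = |r - 1|$, which, consistently with the poles at $|x_3| = A$ and $|x_3| = B$, suggests the identifications
\begin{equation}
A^2 = \lambda^2 - (r+1)^2 , \qquad B^2 = \lambda^2 - (r-1)^2 .
\end{equation}
In particular, at $r = 0$ these give $A = B$ and $\rho = 1$.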
For $\lambda \geq 1$, the 2-point function becomes even more singular when
$A = B$, at $r = 0$ and $\rho = 1$.
In this case, the whole loop is on the light-cone.
Therefore, we should expect a big contribution to the imaginary
part of the integral from the small $r$ region.
We expect this effect to be amplified when $\lambda \sim 1$, since then $x_3
\sim 0$ and the operator is closer to the plane of the loop.
Note also that the 2-point function is equally singular when $A$ or $B$
vanishes and $x_3 = 0$.
The two effects combine when $\lambda = 1$.
The 2-point function can be the most singular in this case and we expect
the imaginary part of the integral to be the biggest.
\setcounter{equation}{0}
\section{Discussion}
The main purpose of this paper is to identify the non-local precursor
fields of the SYM boundary theory that record information about
local processes occurring deep in the interior of the bulk AdS
spacetime.
Causality of the bulk theory requires that the precursors are
intrinsically non-local. They are not simple
products of local operators corresponding to the classical supergravity
fields in the standard AdS--CFT dictionary. Correlation functions of
such products essentially remain featureless until the
signal from the event arrives at the boundary. Yet, as in \cite{4}, we
argue that the precursors store the
information long before the signal can propagate to the boundary.
In this paper, we study a rather simple case involving the propagation of a
classical bulk wave toward the boundary. It is shown that when the
wave vanishes within a neighborhood of the boundary, products of local
gauge invariant operators retain their vacuum expectation values,
whereas Wilson loops are excited when their size is of the same order as
the co-ordinate distance of the wave from the boundary.
A detailed translation of all the configurations of the bulk theory to the SYM
theory is not yet available, but, as in the example of the wave, we
believe that the precursors will involve Wilson loops with size
dictated by the UV--IR connection.
The precise way Wilson loops would store information about
complicated processes in the bulk is very difficult to see. In particular,
it remains a challenge to understand what precursors describe small
Schwarzschild black holes at the center of AdS, or what configurations
of Wilson loops provide the signal that a black hole forms in a
head-on collision of two very energetic gravitons \cite{4}.
However, Wilson loops and their products form a
complete set of gauge invariant operators in the SYM theory. This
means that at any time one should be able to recover all the
information about the state of the theory from their expectation
values and expectation values of their products.
Our analysis has been carried out in the 't Hooft limit where we keep
the 't Hooft coupling fixed and large and take $N \rightarrow \infty$.
In this limit the bulk theory is manifestly local as it is well
described by linearized supergravity. We do not consider $1/N$
corrections in this paper since they are too small.
We think that their effect is to modify the original
expectation values of local gauge invariant operators by
featureless components that do not carry any interesting information
about the details of the relevant bulk process. For example, in the
case of the gravitational wave considered in section II, the
next--to--leading order $1/N$
corrections depend on the total energy in the bulk, which is
constant, but not on the detailed profile of the wave.
Any interesting effect of bulk interactions should be recovered from
such expectation values only when the signal of the event arrives at
the boundary.
We believe that the ``squeezed states'' constructed in the SYM theory
continue to accurately describe gravitational waves including the
effect of bulk interactions.
Evidence for this was found in \cite{4}, where the success of the free
field theory model considered was linked with the non-renormalization
theorem for the 3-point function of the energy momentum tensor.
It would be very interesting to study the exact description of a
gravitational wave in the flat space limit considered in
\cite{15,16}.
In this limit we take $N$ large and $g$ fixed. We also keep bulk
energies fixed in string units. This means that we have to consider
energies in the SYM theory that scale like $N^{1/4}$.
In flat space, plane gravitational waves are exact solutions of the
theory and do not receive any stringy corrections \cite{17}.
However, we do not have any computational control in the SYM theory
in this limit apart from conjectured non-renormalization theorems for
the 2-point and 3-point functions of chiral primaries.
What really distinguishes the precursors in the case of ${\cal{N}}=4$
SYM theory from other non-local observables in the theory is that
Wilson loops cannot be expressed in terms of finite polynomials of
local gauge invariant operators corresponding to the bulk fields.
Gauge invariance equips the boundary holographic theory with this rich
class of
intrinsically non-local observables so that it can reproduce traces of bulk
causality and locality. Thus gauge invariance is crucial in the way
this particular local conformal theory describes
bulk physics. It would be interesting to understand the precise nature
of the precursors in other AdS--CFT dualities in which the CFT is not
a conventional gauge theory, for example the $AdS_3$ case.\footnote
{Some interesting issues
concerning this particular case were recently studied in \cite{14}.}
In some of these examples the CFT is
obtained from a gauge theory through renormalization group flows; however
there is no remnant of the original gauge symmetry at the fixed point.
It is particularly challenging to find special non-local observables
in these examples as well so as to understand better the holographic
nature of gravity.
\
{\bf Acknowledgements }
We would like to thank Vijay Balasubramanian, Gary Felder, Jason
Prince Kumar, Maxim Perelstein, Joseph Polchinski, Simon Ross, Steve
Shenker and Eva Silverstein for useful discussions.
This work was supported in part by NSF grant 9870115 and by the US
Department of Energy under contract DE--AC03--76SF00515.
\subsection{Explanatory examples} \label{sec:exp}
This section discusses two real-world situations that developers must cope with during the TPL migration task, i.e.,\@\xspace code refactoring and vulnerable dependencies handling. In the first place, it is essential to consider
different TPL releases that conform to the semantic versioning format.\footnote{\url{https://semver.org/}} A standard version string follows the pattern \emph{X.Y}, in which \emph{X} represents the \emph{major} release and \emph{Y} represents the \emph{minor} one. Sometimes, releases can include a \emph{patch} version \emph{Z}, resulting in the final string \emph{X.Y.Z}.
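The version-string convention above can be captured in a few lines of Python; the following sketch (function names are illustrative, not part of EvoPlan\@\xspace) parses a version string and classifies an upgrade by the component that changes:

```python
def parse_version(s):
    """Split a semantic-version string 'X.Y' or 'X.Y.Z' into integers.

    A missing patch component defaults to zero.
    """
    parts = s.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    patch = int(parts[2]) if len(parts) > 2 else 0
    return (major, minor, patch)


def upgrade_kind(old, new):
    """Classify an upgrade as 'major', 'minor', or 'patch'."""
    o, n = parse_version(old), parse_version(new)
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"


print(upgrade_kind("1.2", "1.3"))  # the log4j example below: a minor upgrade
```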
We present an explanatory example related to
\emph{log4j},\footnote{\url{https://logging.apache.org/log4j/}} a widely used Java logging library. When it is upgraded from version \emph{1.2} to version \emph{1.3}, as shown in Listing \ref{lst:v12} and Listing \ref{lst:v13}, respectively, many internal changes occur that need to be carefully documented.\footnote{\url{http://articles.qos.ch/preparingFor13.html}} \revised{As can be noticed, the main change affects the \texttt{Category} class, which is replaced by the \texttt{Logger} class. Furthermore, all the former methods used by the deprecated class cause several failures at the source code level. For instance, the \texttt{setPriority} method is replaced by \texttt{setLevel} in the new version.}
\begin{lstlisting}[caption={log4j version 1.2.},
label=lst:v12,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Category root = Category.getRoot();
root.debug("hello");
Category cat = Category.getInstance(Some.class);
cat.debug("hello");
cat.setPriority(Priority.INFO);
\end{lstlisting}
\begin{lstlisting}[caption={log4j version 1.3.},
label=lst:v13,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Logger root = Logger.getRootLogger();
root.debug("hello");
Logger logger = Logger.getLogger(Some.class);
logger.debug("hello");
logger.setLevel(LEVEL.INFO);
\end{lstlisting}
Though this is a very limited use case, it suggests that the code refactoring that takes place during the migration is
an error-prone activity even for a single minor upgrade, i.e.,\@\xspace from
version \emph{1.2} to version \emph{1.3}. Additionally, the complexity
dramatically grows in the case of a major release, as it typically requires extra effort compared to a minor one, which is not welcomed by the majority of developers
\cite{kula_developers_2018}. In such a context, reducing the
time needed for a single migration step, even a minor one, is expected to
improve the overall development process.
Concerning vulnerable dependencies, GitHub\@\xspace Dependabot\footnote{\url{https://dependabot.com/blog/github-security-alerts/}} provides weekly security alert digests that highlight possible security issues for outdated dependencies of a repository,
which can be of different languages, e.g.,\@\xspace Python, Java, JavaScript.\footnote{\url{https://dependabot.com/\#languages}} An example of a Dependabot report is shown in Fig.~\ref{fig:digest}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{GHdigest_new.png}
\caption{GitHub\@\xspace Dependabot alert.}
\label{fig:digest}
\end{figure}
As shown in Fig. \ref{fig:digest}, Dependabot suggests possible TPL upgrades to solve vulnerabilities in the given project. For instance, the \textbf{guava} dependency seems to be outdated, and thus the system automatically suggests jumping to the latest version, i.e.,\@\xspace \emph{24.1.1}.
Though this alert can raise awareness of this evolution, it does not offer any concrete recommendation on how to perform the actual migration steps. In some cases, the bot does not provide any recommended version to update the project, e.g.,\@\xspace for the \textbf{log4j} dependency. In this respect, we see an urgent need for recommending the most suitable plan to upgrade a library, as this can significantly reduce the migration effort.
\subsection{Existing techniques} \label{sec:related}
This section reviews some relevant work that copes with the migration problem.
\begin{table}[h]
\centering
\footnotesize
\caption{\revised{Main features of TPL migration systems.}}
\begin{tabular}{|l | c| c | c| c| c | c | c | }
\hline
\textbf{System} & \rotatebox[origin=l]{90}{\textbf{Inferring migration}} & \rotatebox[origin=l]{90}{\textbf{Incremental plan}} & \rotatebox[origin=l]{90}{\textbf{Popularity}} & \rotatebox[origin=l]{90}{\textbf{GitHub\@\xspace issues}} & \rotatebox[origin=l]{90}{\textbf{\textbf{Upgrading}}} & \rotatebox[origin=l]{90}{\textbf{Replacement}} & \rotatebox[origin=l]{90}{\textbf{Applying Migration}} \\ \hline
Meditor~\cite{xu_meditor_2019} & \ding{51} & \ding{55} &\ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{51} \\ \hline
Apiwave~\cite{hora_apiwave_2015} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
Graph Mining~\cite{teyton_mining_2012} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
RAPIM~\cite{alrubaye2019learning} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
Diff-CatchUp~\cite{xing_api-evolution_2007} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{55} \\ \hline
M$^{3}$~\cite{collie_m3_2020} & \ding{51} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
\rowcolor{mygray}
\textbf{EvoPlan} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{55} & \ding{55} \color{black} \\ \hline
\end{tabular}
\label{tab:features}
\end{table}
Meditor \cite{xu_meditor_2019} is a tool aiming to identify migration-related (MR) changes within commits and map them at the source code level with a syntactic program differencing algorithm. To this end, the tool mines GitHub\@\xspace projects searching for MR updates in the \emph{pom.xml} file and checks their consistency with the WALA framework.\footnote{\url{https://github.com/wala/WALA}}
Hora and Valente propose Apiwave \cite{hora_apiwave_2015}, a system that extracts information about libraries' popularity directly from the mined GitHub\@\xspace projects' history. Afterwards, it can measure the popularity of a certain TPL by considering import statement removals and additions.
Teyton \emph{et~al.}\@\xspace~\cite{teyton_mining_2012} propose an approach that discovers migrations among different TPLs and stores them in a graph format. A token-based filter is applied on \emph{pom.xml} files to extract the name and the version of the library from the \emph{artifactId} tag. The approach eventually exhibits four different visual patterns that consider both ingoing and outgoing edges to highlight the most popular target.
RAPIM~\cite{alrubaye2019learning} employs a tailored machine learning model to identify and recommend API mappings learned from previous migration changes. Given two TPLs as input, RAPIM extracts valuable method descriptions from their documentation using text engineering techniques and encodes them in feature vectors to feed the underpinning machine learning model.
Diff-CatchUp \cite{xing_api-evolution_2007} has been conceived with the aim of proposing usage examples to support the migration of reusable software components. The tool makes use of the UMLDiff algorithm \cite{10.1145/1101908.1101919} to identify all relevant source code refactorings. Then, a heuristic approach is adopted to investigate the design-model of the evolved component and retrieve a customizable ranked list of suggestions.
Collie \emph{et~al.}\@\xspace recently proposed the M$^{3}$ tool \cite{collie_m3_2020} to support a semantic-based migration of C libraries. To this end, the system synthesizes a behavioral model of the input project by relying on the LLVM intermediate representation.\footnote{\url{https://llvm.org/}} Given a pair of source and target TLPs, the tool generates abstract patterns that are used to perform the actual migration.
Table \ref{tab:features} summarizes the features of the above-mentioned approaches by considering the different tasks involved in migration processes, from the discovery of possible migration changes up to embedding them directly into the source code, as explained below.
\begin{itemize}
\item \emph{Inferring migration}: To extract migration-related information, tools can analyze existing projects' artifacts, i.e.,\@\xspace commits, \emph{pom.xml} file, or tree diff. This is the first step of the whole migration process.
\item \emph{Incremental plan}: The majority of the existing approaches perform the migration just by considering the latest version of a TLP. This could increase the overall effort needed to perform the actual migration, i.e.,\@\xspace developers suffer from accumulated technical debt. In contrast, considering a sequence of intermediate migration steps before going to the final one can reduce such refactoring.
\item \emph{Popularity}: This is the number of client projects that make use of a certain library. In other words, if a TLP appears in the \emph{pom.xml} file or in the import statement, its popularity is increased.
\item \emph{GitHub\@\xspace issues}: As an additional criterion, the migration process can include data from \emph{GitHub\@\xspace issues} that may include relevant information about TLPs migration. Thus, we consider them as a possible source of migration-related knowledge.
\item \emph{Upgrading}: This feature means that the tool supports the upgrading of a TLP from an older version to a newer one. For instance, the migration described in Section \ref{sec:exp} falls under this class of migration.
\item \emph{Replacement}: Differently from upgrading, replacement involves the migration from a library to a different one that exposes the same functionalities.
\item \emph{Applying migration}: It represents the final step of the migration process in which the inferred migration changes are actually integrated into the project.
\end{itemize}
\subsection{Dimensions to be further explored}
Even though several approaches successfully cope with TPL migration, there are still some development dimensions that need to be further explored. Providing an exhaustive analysis is out of the scope of this section; thus, we limit ourselves to identifying some of them by carefully investigating the approaches summarized in Table \ref{tab:features}. The elicited dimensions are the following:
\begin{itemize}
\item \emph{D1: Upgrading the same library.} Almost all of the presented
approaches, apart from Meditor, focus on replacing libraries, and very few
support the upgrade of already included ones (see columns
\textit{Upgrading} and \textit{Replacement} in Table \ref{tab:features}).
\item \emph{D2: Varying the migration data sources.} During the inferring
migration phase, strategies to obtain migration-related data play a
fundamental role in the overall process. A crucial challenge is
investigating new sources of information besides the well-known ones,
e.g.,\@\xspace bug reports, Stack Overflow posts, and GitHub\@\xspace issues.
\item \emph{D3: Aggregating different concepts.} The entire migration
process is a complex task and involves notions belonging to different
domains. For instance, GitHub\@\xspace issues could play a relevant role in the
migration process. A recent work \cite{misra_is_2020} shows that
the more comments are included in the source code, the less time is needed
to solve an issue. Neil \emph{et~al.}\@\xspace \cite{neil_mining_2018} extracted
security vulnerabilities from issues and bug reports that could affect library dependencies.
\item \emph{D4: Identification of the upgrade plan.} Existing approaches
identify and apply migrations by taking as input the explicit specification
of the target version of the library that has to be upgraded. Providing developers with insights about candidate upgrade plans that might reduce
the migration effort can represent valuable support to the overall upgrade
process.
\end{itemize}
In the present work, we aim to explore and propose
solutions for the dimensions \textsc{D1} and \textsc{D4} by providing multiple
candidate upgrade plans given the request of upgrading a library to a
specific target version. Furthermore, we also perform an initial
investigation on the \textsc{D2} and \textsc{D3} dimensions, relying on GitHub\@\xspace
issues. As can be seen in Table~\ref{tab:features}, EvoPlan\@\xspace covers five out of
the seven considered features. In particular, our approach is able to
\emph{infer migrations} and to build an \emph{incremental plan} by considering the
\emph{popularity} and the \emph{GitHub\@\xspace issues}, so as to eventually recommend an
\emph{upgrade plan}. Compared to the existing tools, EvoPlan\@\xspace tackles most of the
issues previously presented.
\subsection{Crawler} \label{sec:tracker}
Migration-related information is mined from GitHub\@\xspace using the \emph{Crawler}
component. By means of the \texttt{JGit} library,\footnote{\url{https://www.eclipse.org/jgit/}} \emph{Crawler} downloads a set \textit{P} of GitHub\@\xspace projects that have at least one
\emph{pom.xml} file, which is
a project file containing the list of all adopted TPLs. In case there are
multiple \emph{pom.xml} files, they will be analyzed separately to avoid
information loss. Then, the \emph{Crawler} component analyzes all the
repository's commits that affect the \emph{pom.xml} to find added and removed
TPLs. Additionally, raw issue data is obtained and stored in separate files. In
particular, we count the number of opened and closed issues for each project
\textit{p} $\in$ \textit{P} in a specific time interval \textit{D}.
The starting point of this interval is the moment when a certain version
\textit{v} of a given library \textit{l} is added as a dependency in the
\emph{pom.xml} file of a client \textit{C}. A previous study
\cite{10.1007/978-3-319-26844-622} demonstrates that
the monthly rate of open issues tends to decrease over time.
Thus, the endpoint of \textit{D} is set after the first two months of
development, which suffices to extract relevant data concerning the considered library \textit{l} without loss of information. \revised{In such a way, the GitHub\@\xspace issues that have been opened and closed for each TPL added in \textit{p} are obtained for further processing phases.}
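The counting performed over the interval \textit{D} can be sketched as follows; the data and the 61-day approximation of two months are illustrative assumptions, not taken from the tool's implementation:

```python
from datetime import datetime, timedelta


def count_issues(issues, added_on, window_days=61):
    """Count issues opened and closed within the observation window
    that starts when the library version is added to the pom.xml.

    `issues` is a list of (opened_at, closed_at-or-None) pairs;
    the two-month window is approximated as 61 days.
    """
    end = added_on + timedelta(days=window_days)
    opened = sum(1 for o, _ in issues if added_on <= o <= end)
    closed = sum(1 for _, c in issues if c is not None and added_on <= c <= end)
    return opened, closed


added = datetime(2020, 1, 1)
issues = [
    (datetime(2020, 1, 10), datetime(2020, 1, 20)),  # opened and closed in window
    (datetime(2020, 2, 15), None),                   # still open
    (datetime(2020, 5, 1),  datetime(2020, 5, 2)),   # outside the window
]
print(count_issues(issues, added))  # → (2, 1)
```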
\subsection{Data Extractor} \label{sec:dataEx}
In this phase, \revised{data is gathered by means of \texttt{JGit}, and analyzed using different processing steps as follows.}
The first step makes use of the Git \emph{log} command to retrieve the list of every modification
saved in the repository for a specific file. Furthermore, the command provides the
\emph{SHA} identifier of every commit.
For instance, Fig. \ref{fig:diff-log}.a depicts a
commit related to a given \emph{pom.xml} file taken as input. The identifier of
the commit is used to retrieve the list of the corresponding changes,
as shown in Fig. \ref{fig:diff-log}.b. In particular, a commit carries a
large amount of useful information, such as what was added or removed and
when. The \emph{Data Extractor} component focuses on the lines which contain
evidence of library changes. In a commit, the added lines are marked with the
sign '+', whereas the removed ones are marked with '-' (see the green and red lines, respectively, shown in Fig.~\ref{fig:diff-log}.b).
In this way, the evolution of a library is obtained by analyzing the sequence
of added/removed lines. With this information, \revised{EvoPlan\@\xspace is also able to count} how many
clients have performed a specific migration. The information retrieved by the
\emph{Data Extractor} component is stored in a target CSV file, which is taken as input by the subsequent entity of the process as discussed below.
\begin{figure}
\small
\centering
\includegraphics[width=\linewidth]{log.png} \\
a) Example of \emph{log} \\
\includegraphics[width=\linewidth]{diff.png}\\
b) Example of \emph{diff}\\
\caption{Example of artifacts used by the \emph{Data Extractor} component.}
\label{fig:diff-log}
\end{figure}
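A minimal sketch of this parsing step is shown below; the diff fragment and the function name are illustrative, and a real implementation would also pair each version with its \emph{artifactId}:

```python
import re

# A fragment of a unified diff over a pom.xml, as retrieved via git log
# and git diff (the content is illustrative).
DIFF = """\
-      <artifactId>log4j</artifactId>
-      <version>1.2.17</version>
+      <artifactId>log4j</artifactId>
+      <version>1.3</version>
"""


def extract_versions(diff):
    """Return the library versions removed ('-') and added ('+') in a diff."""
    removed, added = [], []
    for line in diff.splitlines():
        m = re.search(r"<version>([^<]+)</version>", line)
        if not m:
            continue
        if line.startswith("-"):
            removed.append(m.group(1))
        elif line.startswith("+"):
            added.append(m.group(1))
    return removed, added


print(extract_versions(DIFF))  # → (['1.2.17'], ['1.3'])
```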
\subsection{Graph Builder}
This component creates nodes and
relationships by considering the date and library changes identified in the previous
phase. To this end, EvoPlan\@\xspace exploits the Cypher query language\footnote{\url{https://neo4j.com/developer/cypher-query-language/}} to store
data into a \textsc{Neo4j} graph. For instance, we extract from the CSV files two library-version pairs \emph{(l,v1)} and \emph{(l,v2)} with signs '-' and '+', respectively. In this way, the component creates an oriented edge from
\emph{(l,v1)} to \emph{(l,v2)}. Once the first edge is created, any further pair containing the same library upgrade increments the weight of the corresponding graph edge.
The date value contained in the CSV record is
used to avoid duplicated edges or loops. Furthermore, each edge is weighted
according to the number of clients, as described in the \textit{Data Extractor}
phase. That means that if we find the same pair \emph{(l,v1)} to \emph{(l,v2)} \emph{w} times (i.e.,\@\xspace \emph{w} projects have already migrated the library \emph{l} from \emph{v1} to \emph{v2}), the edge will have a weight of \emph{w}.
Thus, the final outcome of
this component is a migration graph that considers the community's interest as
the only weight. For instance, Fig.~\ref{fig:graph}
shows the extracted migration graph for the \emph{slf4j-api} library. The
graph contains all the mined versions of the library, and for each pair the
corresponding number of clients that have performed the considered upgrade is
shown. For instance, in Fig.~\ref{fig:graph} the edge from version
\emph{1.6.1} to \emph{1.6.4} is selected, and 14 clients (see the details at
the bottom) have performed such a migration.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{slf4j_graph.pdf}
\caption{\revised{Migration graph of the \emph{slf4j} library.}}
\label{fig:graph}
\end{figure}
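The weight-accumulation logic can be sketched with a plain dictionary; the actual tool stores the graph in \textsc{Neo4j} via Cypher, so the code below (with illustrative data) only mirrors the counting:

```python
from collections import Counter


def build_migration_graph(records):
    """Accumulate client counts on migration edges.

    `records` is a list of (library, from_version, to_version) tuples,
    one per client migration found by the Data Extractor; the weight of
    an edge is the number of clients that performed that upgrade.
    """
    edges = Counter()
    for lib, v_from, v_to in records:
        if v_from != v_to:            # ignore degenerate self-loops
            edges[(lib, v_from, v_to)] += 1
    return edges


records = [
    ("slf4j-api", "1.6.1", "1.6.4"),
    ("slf4j-api", "1.6.1", "1.6.4"),
    ("slf4j-api", "1.6.4", "1.7.5"),
]
graph = build_migration_graph(records)
print(graph[("slf4j-api", "1.6.1", "1.6.4")])  # → 2
```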
\subsection{Plan Calculator} \label{sec:plan}
This component plays a key role in the approach. Given a library to be
upgraded, the starting version, and the target one, \emph{Plan Calculator}
retrieves
the k shortest paths
by using the well-known \emph{Yen's K-shortest paths
algorithm} \cite{Yen2007FindingTK}, which is embedded in the
\textsc{Neo4j} library.
As the default heuristic implemented in EvoPlan\@\xspace, the component retrieves all the
possible paths that maximize the popularity of the steps that can be performed to achieve the wanted
upgrade. Thus, the \textit{Plan Calculator} component employs the aforementioned weights,
which represent the popularity, as the criterion for the shortest path algorithm.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{kpaths.png}
\caption{List of \emph{k}-shortest paths for \emph{slf4j}.}
\label{fig:path}
\end{figure*}
By considering the graph shown in Fig. \ref{fig:graph}, there are several possibilities to upgrade \textit{slf4j} from version \emph{1.5.8} to
\emph{1.7.25}. By taking into account the available weights, EvoPlan\@\xspace can recommend the ranked list depicted in Fig.~\ref{fig:path}. The first path in the list suggests following the steps \emph{1.6.1}, \emph{1.6.4}, and
\emph{1.7.5} to reach the final version considered in the example, i.e.,\@\xspace
\emph{1.7.25}.\footnote{\revised{It is worth noting that the popularity values are inversely related to the popularity of the corresponding upgrade plans. In the example shown in Fig. \ref{fig:path}, the most popular upgrade plan is the one with popularity value 0.898.}} Such a plan is the one performed most often by other projects that rely on \emph{slf4j} and have already operated the wanted library migration. Thus, such a path is more frequent than directly updating the library to the newest version.
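The path ranking can be sketched as follows. The edge cost is assumed here to be the reciprocal of the client count, so that heavily travelled steps are cheaper (an assumption: EvoPlan\@\xspace's actual weighting may differ), and a brute-force enumeration of simple paths stands in for Yen's algorithm, which would be used on larger graphs:

```python
def all_simple_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from src to dst in an adjacency dict."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path)


def k_best_plans(clients, src, dst, k=3):
    """Rank upgrade plans by total cost, where each step costs the
    reciprocal of the number of clients that performed it."""
    graph = {}
    for (v_from, v_to), n in clients.items():
        graph.setdefault(v_from, {})[v_to] = 1.0 / n
    paths = all_simple_paths(graph, src, dst)
    ranked = sorted(paths, key=lambda p: sum(graph[a][b] for a, b in zip(p, p[1:])))
    return ranked[:k]


# Toy slice of the slf4j migration graph (client counts are illustrative).
clients = {
    ("1.5.8", "1.6.1"): 10,
    ("1.5.8", "1.7.5"): 2,
    ("1.6.1", "1.7.5"): 8,
    ("1.7.5", "1.7.25"): 20,
}
print(k_best_plans(clients, "1.5.8", "1.7.25"))
```

On this toy graph the multi-step plan through \emph{1.6.1} outranks the direct jump, mirroring the behavior described above.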
\subsection{Issues Miner} \label{sec:IssuesCalc}
Issues play an important role in project development. For instance, by solving issues, developers contribute to the identification of bugs as well as the enhancement of software quality through feature requests~\cite{liao_exploring_2018}. In the scope of this work, we exploit issues as criteria for ordering upgrade plans. In particular, we rely on the availability of issues that have been opened and closed due to upgrades of given third-party libraries.
\begin{table}[t!]
\centering
\caption{Issues information extracted for \emph{commons-io}.}
\begin{tabular}{|l | c | c | c | }
\hline
\textbf{Version} & \textbf{Open Issues}& \textbf{Closed Issues} & \textbf{Delta} \\ \hline
1.0&14&33&19 \\ \hline
1.3.2&150&420&270 \\ \hline
1.4&87&408&321 \\ \hline
2.0&5&10&5 \\ \hline
2.0.1&133&457&324 \\ \hline
2.1&129&516&387 \\ \hline
2.2&67&999&932 \\ \hline
2.3&5&20&15 \\ \hline
2.4&939&3,283&2,344 \\ \hline
2.5&64&918&854 \\ \hline
2.6&64&548&484 \\ \hline
\end{tabular}
\vspace{-.2cm}
\label{tab:issues}
\end{table}
The \emph{Issue Miner} component is built to aggregate and filter raw issues
data gathered in the early stage of the process shown in Fig. \ref{fig:approach}. However, due to the internal construction of \textsc{Neo4j}, we cannot directly embed this data as a weight
on the migration graph's edges. Thus, as shown in Section \ref{sec:tracker}, we
collect the number of open and closed issues considering a specific time
window, i.e.,\@\xspace two months starting from the introduction of a certain TPL in the
project. Then, this component filters and aggregates the related issues data
using Pandas, a widely-used Python library for data analysis
\cite{pandas_pandas_2020}. For instance, Table \ref{tab:issues} shows the mined
issues related to the \emph{commons-io} library. In particular, for each
version of the library, the number of issues that have been opened and closed by
all the analysed clients since they have migrated to that library version is
shown. EvoPlan\@\xspace can employ the extracted data to enable a ranking function based on GitHub\@\xspace issues as discussed in the next section.
\emph{Issues Miner} works as a stand-alone component; thus, it does not impact the time required by the overall process. In this way, we have an additional source of information that can be used later in the process as a supplementary criterion to choose the ultimate upgrade plan from the ranked list produced by the \textit{Plan Calculator} component.
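The aggregation reduces to computing, for each version, the delta between closed and open issues; a sketch using three of the \emph{commons-io} figures from Table \ref{tab:issues}:

```python
def issue_deltas(counts):
    """Compute the delta (closed minus open issues) per library version."""
    return {v: closed - opened for v, (opened, closed) in counts.items()}


# (open, closed) figures for commons-io taken from the mined data.
counts = {"1.0": (14, 33), "2.2": (67, 999), "2.4": (939, 3283)}
print(issue_deltas(counts))  # → {'1.0': 19, '2.2': 932, '2.4': 2344}
```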
\subsection{Plan Ranker} \label{sec:PlanRank}
In the final phase, the k paths produced by the \textit{Plan Calculator} are
rearranged according to the information about issues. For every
path, we compute the average value of opened/closed issues. A large value means that a certain
path potentially requires less integration effort, since there are more closed issues than opened ones \cite{liao_exploring_2018}, i.e.,\@\xspace issues
have been tackled and solved rather than being left untouched.
Thus, the aim is to order the plans produced by the \textit{Plan Calculator} according to the retrieved issues: among the most popular plans, we propose those with the highest issue values.
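The reranking step itself is a simple sort; the plans below reuse three rows of the ranking example (the triple layout is an illustrative representation, not the tool's data model):

```python
def rerank_by_issues(plans):
    """Reorder candidate plans, preferring the highest issue value
    (more closed than open issues suggests lower integration effort).

    `plans` is a list of (path, popularity_value, issues_value) triples
    as produced by the Plan Calculator."""
    return sorted(plans, key=lambda p: p[2], reverse=True)


plans = [
    (["1.5.8", "1.6.1", "1.6.4", "1.7.5", "1.7.25"], 0.898, 58),
    (["1.5.8", "1.7.5", "1.7.25"], 1.0, 58),
    (["1.5.8", "1.6.1", "1.7.5", "1.7.25"], 1.0, 61),
]
best = rerank_by_issues(plans)[0]
print(best[0])  # → ['1.5.8', '1.6.1', '1.7.5', '1.7.25']
```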
\begin{table}[h!]
\caption{An example of the ranking results.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | }
\hline
\textbf{Proposed Path} & \rotatebox[origin=c]{90}{\textbf{Pop. Value}} & \rotatebox[origin=c]{90}{\textbf{Issues Value}} \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.6.6, 1.7.5, 1.7.25 & 1.446 & 57 \\ \hline
\rowcolor{lightgray}
1.5.8, 1.6.1, 1.6.4, 1.7.5, 1.7.25 & 0.898 & 58 \\ \hline
1.5.8, 1.7.5, 1.7.25 & 1.0 & 58 \\ \hline
\rowcolor{Gold}
1.5.8, 1.6.1, 1.7.5, 1.7.25 & 1.0 & 61 \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.7.2, 1.7.5, 1.7.25 & 1.238 & 58 \\ \hline
\end{tabular}
\label{tab:Ranking}
\end{table}
Table \ref{tab:Ranking} shows an example of the ranking process. Two paths are highlighted: the gray row corresponds to the best result according to plan popularity only, i.e.,\@\xspace the plan with the lowest popularity value. Meanwhile, the orange row is the one recommended according to the issues criterion (in this case, the higher the issue value, the better). The path that should be selected is the orange one, because it is characterized by the highest activity in terms of opened and closed issues among the most popular ones. In this way, EvoPlan\@\xspace is
able to recommend an upgrade plan to migrate from the initial version to the
desired one by learning from the experience of other projects which have
already performed similar migrations.
\subsection{Preliminary results} \label{sec:results}
We report and analyze the obtained results by answering the research questions introduced in the previous section.
\subsection{\rqfirst}
Table \ref{tab:metrics} reports the average results obtained from the cross-validation evaluation.
EvoPlan\@\xspace achieves the maximum precision for \emph{commons-io}, i.e.,\@\xspace 0.90, in all the rounds. The tool also gets a high precision for \emph{junit}, i.e.,\@\xspace 0.88. Meanwhile, the smallest precision, i.e.,\@\xspace 0.58, is observed for \emph{httpclient}. Concerning recall, EvoPlan\@\xspace obtains values of 0.94 and 0.96 for the \emph{junit} and \emph{commons-io} libraries, respectively. In contrast, the tool achieves the worst recall with \emph{httpclient}, i.e.,\@\xspace 0.64. Overall, considering the F-measure score, we see that EvoPlan\@\xspace gets the best and the worst performance on \emph{commons-io} and \emph{httpclient}, respectively.
\vspace{.1cm}
\begin{table}[h!]
\centering
\caption{Precision, Recall, and F-Measure considering popularity.}
\begin{tabular}{|l | p{1.6cm} | p{1.2cm} | p{1.8cm} |}
\hline
\textbf{Library} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \\ \hline
\emph{junit} & 0.88 & 0.94 & 0.91 \\ \hline
\emph{httpclient} & 0.58 & 0.64 & 0.61 \\ \hline
\emph{slf4j-api} & 0.65 & 0.74 & 0.69 \\ \hline
\emph{log4j} & 0.88& 0.93 & 0.91 \\ \hline
\emph{commons-io} & \textbf{0.90} & \textbf{0.96} & \textbf{0.94} \\ \hline
\emph{guava} & 0.60 & 0.73 & 0.65 \\ \hline
\emph{commons-lang3} & 0.66 & 0.67 & 0.65 \\ \hline
\end{tabular}
\label{tab:metrics}
\end{table}
Altogether, we see a substantial difference between the performance obtained by EvoPlan\@\xspace for different libraries. We suppose that this happens due to the availability of the training data. In particular, by carefully investigating each library used in the evaluation, we see that the libraries with the worst performance have, on average, only a few migrations that we can extract from the \emph{pom.xml} files (cf. Table \ref{tab:libs}). For instance, there are 162 and 209 migrations associated with \emph{commons-lang3} and \emph{slf4j-api}, respectively, and EvoPlan\@\xspace obtains low performance on these libraries. Meanwhile, there are
2,972 migrations for \emph{junit}, and EvoPlan\@\xspace achieves high precision, recall, and F$_1$ for this library. This suggests that scarce
data can negatively affect the final recommendations.
Another factor that can influence the conducted evaluation is the number of versions involved in an upgrade for each library, i.e.,\@\xspace the availability of fewer versions dramatically reduces the migration-related information. This hypothesis is confirmed by the values observed for \emph{log4j} and \emph{junit}, which yield better results with 39 and 40 analyzed versions, respectively. However, \emph{guava} is an exception, i.e.,\@\xspace EvoPlan\@\xspace yields a mediocre result for this library (F$_{1}$=0.65), even though we considered 627 migration paths and 49 different versions. By examining the library, we realized that it has many versions employed in the Android domain as well as abandoned versions. Thus, we attribute the reduction in performance to the lack of decent training data.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_1$.} EvoPlan\@\xspace is capable of predicting the correct upgrade plan given a real-world migration dataset. Although for some libraries we witness a reduction in the overall performances, the main reason can be found in the lack of migration paths in the original dataset.}
\end{tcolorbox}
\subsection{\textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?}}
To answer this question we measure the correlation among observed data, i.e.,\@\xspace the number of clients that perform a certain migration step and the issues delta considering the time interval described in Section \ref{sec:tracker}.
The number of clients performing a migration is referred to as the \textit{popularity}, as described in Section \ref{sec:plan}. Meanwhile, as its name suggests, the \textit{delta} is the difference between the number
of closed issues and the number of open ones. It assumes a positive value when the number of closed issues is greater than that of opened ones. In contrast, negative values are observed when open issues exceed closed ones. In other words, the delta characterizes migration steps in terms of closed issues.
\begin{table}[b!]
\centering
\vspace{-.4cm}
\caption{Correlation coefficients with a $p$-$value < 2.2\mathrm{e}{-16}$.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Metric} & \textbf{Value} \\ \hline
Kendall's ($\tau$) & 0.458 \\ \hline
Pearson (r) & 0.707 \\ \hline
Spearman ($\rho$) & 0.616 \\ \hline
\end{tabular}
\label{tab:corr}
\end{table}
The results of the three indexes are shown in Table \ref{tab:corr}. As we can see, all the metrics show a
positive correlation between the number of clients that perform a certain migration and the corresponding issues delta. In particular, Kendall's $\tau$ is equal to 0.458, while Spearman's $\rho$ reaches the value of 0.616. The maximum correlation is observed for Pearson's coefficient, i.e.,\@\xspace r = 0.707.
The strong correlation suggests that, given a library, the more clients perform a migration on its versions, the more issues are solved. As shown in a recent work~\cite{liao_exploring_2018},
the act of solving issues allows developers to identify bugs and improve code, as well as enhance software quality. Summing up, having a large number of migrated clients can be interpreted as a sign of maturity,
i.e.,\@\xspace the evolution among different versions attracts developers' attention.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_2$.}
There is a significant correlation between the upgrade plan popularity and
the number of closed issues. This implies that
plans to be given highest priority should be those that have the majority
of issues solved during the migration.}
\end{tcolorbox}
\subsection{\textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}}
We measured the average time required for running the experiments using a mainstream laptop with the following specifications: an i5-8250U 1.60GHz processor, 16GB of RAM, and Ubuntu 18.04 as the operating system. Table~\ref{tab:time} summarizes the time needed for executing the corresponding phases.
\begin{table}[h!]
\centering
\vspace{-.3cm}
\caption{Execution time.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Phase} & \textbf{Time (seconds)} \\ \hline
Graph building & 15,120 \\ \hline
Querying & 0.11 \\ \hline
Testing & 145.44 \\ \hline
\end{tabular}
\label{tab:time}
\vspace{-.2cm}
\end{table}
The most time-consuming phase is the creation of the graph, which takes 15,120 seconds, corresponding to 252 minutes. Meanwhile, the querying phase takes just 0.11 seconds to finish; the testing phase is a bit longer: 145.44 seconds. It is worth noting that the testing phase consists of the sub-operations that are performed in actual use, i.e.,\@\xspace opening CSV files, extracting the actual plan, and calculating the shortest path.
This means that we can get an upgrade plan in less than a second, which is acceptable considering the computational capability of the used laptop.
This suggests that EvoPlan\@\xspace can be deployed in practice to suggest upgrade plans.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_3$.} The creation of the migration graph is computationally expensive. However, it can be done offline, one time for the whole cycle. EvoPlan\@\xspace is able to deliver a single upgrade plan in a reasonable time window, making it usable in the field.}
\end{tcolorbox}
\subsection{Research questions} \label{sec:ResearchQuestions}
To study the performance of EvoPlan\@\xspace, we consider the following research questions:
\begin{itemize}
\item \rqfirst~To answer this question, we conduct experiments following
the ten-fold cross-validation methodology~\cite{10.5555/1643031.1643047} on
a dataset considering real migration data collected from GitHub\@\xspace. Moreover, we
compute \emph{Precision}, \emph{Recall}, and \emph{F-measure} by comparing the recommendation outcomes with real migrations as stored in GitHub\@\xspace;
\item \textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?} \newline We analyze how the number of opened and closed
issues could affect the migration process. To this end, we compute three
different statistical coefficients to detect if there exists any
correlation among the available data.
\item \textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}~Besides the recommended migration steps, we are interested in measuring the time of the overall process, including the graph building phase. This aims at ascertaining the feasibility of our approach in practice.
\end{itemize}
\subsection{Overall process} \label{sec:Process}
As depicted in Fig.~\ref{fig:eval}, we perform experiments using the ten-fold
cross-validation methodology on a well-founded dataset coming from an existing
work~\cite{kula_developers_2018}.
Given the whole list of $\approx$11,000 projects, we download the entire
dataset using the \emph{Crawler} component. Then, the dataset is split into testing and
ground truth projects, i.e.,\@\xspace 10\% and 90\% of the entire set, respectively, by each round of the
process. This means that in each round we generate a new migration graph by using the actual 90\% portion. Given a single testing project, the \emph{Analyzing commits} phase is conducted to capture the
actual upgrade path followed by the repository, as stated in Section \ref{sec:tracker}.
To build the ground-truth graph, i.e.,\@\xspace the real migrations in GitHub\@\xspace, we consider the projects not included in the testing set and calculate
every possible upgrade plan for each TPL.
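The round-based splitting described above can be sketched as follows. This is a simplified illustration: the sequential (non-shuffled) fold assignment and the helper name are assumptions.

```python
# Sketch of the ten-fold split used in the evaluation: in each round,
# 10% of the projects are used for testing and the remaining 90% are
# used to build the migration graph. Sequential folds are an assumption.

def ten_folds(projects):
    fold = len(projects) // 10
    rounds = []
    for k in range(10):
        test = projects[k * fold:(k + 1) * fold]
        graph_set = projects[:k * fold] + projects[(k + 1) * fold:]
        rounds.append((graph_set, test))
    return rounds

rounds = ten_folds([f"p{i}" for i in range(100)])
```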
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Eval.pdf}
\caption{The evaluation process.}
\label{fig:eval}
\end{figure}
To ensure a reliable evaluation, we select the starting and the ending version of a certain TPL from the actual plan of a testing project. The pair is used to feed the \emph{Plan Calculator} component, which in turn retrieves the proposed plan.
In this respect, by following the two paths we are able to compute the metrics
to assess the overall performance, namely precision, recall, and F-measure.
\subsection{Data collection} \label{sec:dataset}
We make use of an existing dataset that has been curated by a recent study and made available on GitHub\@\xspace.\footnote{\url{https://bit.ly/2Opd1GH}}
The rationale behind this selection is the quality of the repositories which were
collected by applying different filters, i.e.,\@\xspace removing duplicates, including
projects with at least one \emph{pom.xml} file, and crawling only well-main\-tained
and mature projects.
Table \ref{tab:dataset} summarizes the number of projects and \emph{pom.xml}
files. The dataset consists of 10,952 GitHub\@\xspace repositories; nevertheless, we were
able to download only 9,517 of them, as some have been deleted or moved.
Starting from these projects, we got a total number of 27,129 \emph{pom.xml}
files. Among them, we selected only those that did not induce the creation of
empty elements by the \emph{Data Extractor} component while analyzing
\textit{logs} and \textit{diffs} as shown in Fig. \ref{fig:diff-log}. The
filtering process resulted in 13,204 \emph{pom.xml} files. The training set is
used to create the migration graph, so as to avoid any possible bias. In each round, we
tested 420 projects, while 3,821 projects were used to build the graph.
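A minimal sketch of how upgrade steps can be extracted by diffing two revisions of a \emph{pom.xml} file is given below. The namespace-free XML, the toy file contents, and the helper names are illustrative assumptions, not the actual \emph{Data Extractor} implementation.

```python
import xml.etree.ElementTree as ET

def dependency_versions(pom_xml):
    """Map 'groupId:artifactId' -> version from a pom.xml string
    (Maven XML namespace omitted for simplicity)."""
    root = ET.fromstring(pom_xml)
    deps = {}
    for dep in root.iter("dependency"):
        gid = dep.findtext("groupId")
        aid = dep.findtext("artifactId")
        ver = dep.findtext("version")
        if gid and aid and ver:
            deps[f"{gid}:{aid}"] = ver
    return deps

def migrations(old_pom, new_pom):
    """Return (library, old_version, new_version) upgrade steps."""
    old, new = dependency_versions(old_pom), dependency_versions(new_pom)
    return [(lib, old[lib], v) for lib, v in new.items()
            if lib in old and old[lib] != v]

OLD = """<project><dependencies><dependency>
<groupId>junit</groupId><artifactId>junit</artifactId>
<version>4.11</version></dependency></dependencies></project>"""
NEW = OLD.replace("4.11", "4.12")
steps = migrations(OLD, NEW)
```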
\begin{table}[h!]
\centering
\caption{Statistics of the dataset.}
\begin{tabular}{|l | p{1.3cm}|}
\hline
Total number of projects & 10,952 \\ \hline
Number of downloaded projects& 9,517 \\ \hline
Total number of \emph{pom.xml} files & 27,129 \\ \hline
Number of screened \emph{pom.xml} files & 13,204 \\ \hline
\end{tabular}
\label{tab:dataset}
\end{table}
Table \ref{tab:libs} summarizes the set of libraries in the dataset, obtained by employing the \emph{Crawler} module (cf. Section \ref{sec:tracker}). There are seven popular libraries,\footnote{\url{https://mvnrepository.com/popular}} i.e.,\@\xspace \emph{junit}, \emph{httpclient}, \emph{slf4j}, \emph{log4j}, \emph{commons-io}, \emph{guava}, and \emph{commons-lang3}.
Among others, \emph{junit}
has the largest number of migrations, i.e.,\@\xspace 2,972. Concerning the number of versions, \emph{slf4j} has 71 different versions, being the densest library. Meanwhile, \emph{commons-lang3} is associated with the smallest number of migrations, i.e.,\@\xspace 162, and \emph{commons-io} is the sparsest library with only 16 versions. The last column shows the number of versions that we could exploit to get the issues. The difference indicates that issue data was not available for all the versions in the dataset.
\begin{table}[h]
\centering
\caption{Number of migrations and versions.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | p{0.80cm} |}
\hline
\textbf{Library} & \rotatebox[origin=l]{90}{\textbf{\# migrations}} & \rotatebox[origin=l]{90}{\textbf{\# versions}} & \rotatebox[origin=l]{90}{\textbf{\# issue vers.}} \\ \hline
\emph{junit} & 2,972 & 30 & 19 \\ \hline
\emph{httpclient} & 218 & 53 & 35 \\ \hline
\emph{slf4j} & 209 & 71 & 26 \\ \hline
\emph{log4j} & 229 & 42 & 19 \\ \hline
\emph{commons-io} & 186 & 16 & 11\\ \hline
\emph{guava} & 627 & 70 & 34 \\ \hline
\emph{commons-lang3} & 162 & 16 & 13\\ \hline
\end{tabular}
\vspace{-.4cm}
\label{tab:libs}
\end{table}
\subsection{Metrics} \label{sec:metrics}
Given a migration path retrieved by EvoPlan\@\xspace, we compare it with the real migration path extracted from a testing project. To this end, we employ
\emph{Precision}, \emph{Recall}, and \emph{F-measure} (or F$_1$-score) widely used in the Information Retrieval domain to assess the performance prediction of a system.
In the first place, we rely on the following definitions:
\begin{itemize}
\item A \textit{true positive} corresponds to the case when the recommended path matches the actual path extracted from the testing projects; \emph{TP} is the total number of true positives;
\item A \textit{false positive} means that the recommended upgrade plan is not present in the ground-truth paths; \emph{FP} is the total number of false positives;
\item A \textit{false negative} corresponds to migration steps that should be present in the suggested plan but are not; \emph{FN} is the total number of false negatives.
\end{itemize}
Considering such definitions, the aforementioned metrics are computed as follows:
\begin{equation} \label{eqn:Precision}
P = \frac{ TP }{TP+ FP}
\end{equation}
\begin{equation} \label{eqn:Recall}
R = \frac{ TP }{TP+FN}
\end{equation}
\begin{equation} \label{eqn:F-Measure}
F_1 = \frac{2 \times P \times R}{P + R}
\end{equation}
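Under the definitions above, the metrics can be computed as in the following sketch, where recommended and ground-truth plans are assumed to be aligned lists (an illustrative simplification, with hypothetical data):

```python
def evaluate(recommended, actual):
    """Exact-match evaluation of recommended upgrade plans against
    ground-truth plans, following the TP/FP/FN definitions above."""
    tp = sum(1 for r, a in zip(recommended, actual) if r == a)
    fp = len(recommended) - tp  # recommendations not in the ground truth
    fn = len(actual) - tp       # actual plans the tool failed to suggest
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

rec = [["1.2", "1.3"], ["4.11", "4.12"], ["2.4", "2.6"]]
act = [["1.2", "1.3"], ["4.11", "4.12"], ["2.4", "2.5", "2.6"]]
p, r, f1 = evaluate(rec, act)
```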
\textbf{Rank correlation}: We consider the following coefficients:
\begin{itemize}
\item \textit{Kendall's tau}
measures the strength of dependence between two variables. It is a non-parametric test, i.e.,\@\xspace it either makes no assumption about the underlying distribution of the data or assumes a distribution whose parameters are left unspecified.
\item \textit{Pearson's correlation}
is the most widely used correlation statistic to measure the degree of the relationship between linearly
related variables. In particular, this coefficient is suitable when it is possible to draw a regression line between the points of the available data.
\item \textit{Spearman's correlation}
is a non-parametric test that is used to measure the degree of association between two variables. Differently from Pearson's coefficient, Spearman's correlation index performs better in cases of monotonic relationships.
\end{itemize}
All the considered coefficients assume values in the range [-1,+1], i.e.,\@\xspace from perfect negative correlation to perfect positive correlation. The value 0 indicates that between two variables there is no correlation.
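For illustration, the three coefficients can be computed from scratch as in the following sketch. This is a simplification: the rank computation does not handle ties, and the toy data are invented for the example.

```python
def pearson(x, y):
    # Pearson's r: linear correlation between the raw values.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def _ranks(v):
    # Rank positions starting at 1 (no tie handling in this sketch).
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    # Spearman's rho: Pearson correlation computed on the ranks.
    return pearson(_ranks(x), _ranks(y))

def kendall(x, y):
    # Kendall's tau-a: (concordant - discordant) / number of pairs.
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

popularity = [1, 2, 3, 4, 5]   # clients performing each migration step
issue_delta = [2, 1, 4, 3, 6]  # closed minus opened issues (toy data)
rho = spearman(popularity, issue_delta)
tau = kendall(popularity, issue_delta)
```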
In the next section, we explain in detail the experimental results obtained through the evaluation.
\section{Introduction}
\label{sec:Introduction}
When dealing with certain coding tasks, developers usually
make use of third-party libraries (TPLs) that provide the
desired functionalities. Third-party libraries offer a wide range of
operations, e.g.,\@\xspace database management, file utilities, Website
connection, to name a few. Their reuse
allows developers to exploit a well-founded
infrastructure, without reinventing the wheel, which
eventually helps save time as well as increase productivity.
\revised{However, TPLs evolve over the course of time, and API functions can be added or removed, aiming to make the library more efficient and effective, as well as to fix security issues.}
\revised{Upgrading clients' code from a library release to a newer one can be a daunting and time-consuming task, especially when the APIs being upgraded introduce breaking changes that make the client fail to compile or introduce behavioral changes into it \cite{7884616}. Thus, managing TPLs and keeping them up-to-date becomes a critical practice to minimize the technical debt~\cite{avgeriou_et_al:DR:2016:6693}.}
\revised{In order to upgrade a client $C$ from a starting library version $l_{v_i}$ to a target one $l_{v_t}$, the developer needs to
understand both versions' documentation deeply, as well as to choose the right matching between corresponding methods.}
Things become even more complicated when several subsequent
versions of the library of interest $l$ have been released
from $v_i$ to $v_t$.
In such cases, developers who want to reduce the technical
debt accumulated due to libraries that have not been upgraded
yet must first decide the \textit{upgrade plan} to be applied,
i.e., how to go from $l_{v_i}$ to $l_{v_t}$, since many
possible paths might be followed. It is therefore essential to
have proper machinery to assist developers in choosing suitable
upgrade plans, so as to reduce the effort needed to migrate the
client project $C$ under development. Migration effort can be minimized by identifying upgrade plans that similar projects have already performed, and thus by relying on the experience of already upgraded clients. In this way, developers have supporting material available, e.g.,\@\xspace documentation and snippets of code examples, that can be exploited during the migration phases.
In the context of open-source software, developing new systems by reusing existing components raises relevant challenges in: \textit{(i)} searching for relevant modules; and \textit{(ii)} adapting the selected components to meet some pre-defined requirements. To this end, recommender systems in software engineering have been developed to support developers in their daily tasks~\cite{robillard_recommendation_2014,di_rocco_development_2021}. Such systems have gained traction in recent years as they are able to provide developers with a wide range of useful items, including code snippets~\cite{Nguyen:2019:FRS:3339505.3339636}, tags/topics \cite{10.1145/3382494.3410690,10.1145/3383219.3383227}, third-party libraries \cite{Nguyen:2019:JSS:CrossRec}, documentation~\cite{ponzanelli_prompter:_2016,RUBEI2020106367}, to mention but a few. \revised{In the CROSSMINER project~\cite{di_rocco_development_2021}, we conceptualized various techniques and tools for extracting knowledge from open source components to provide tailored recommendations to developers, helping them complete their current development tasks.}
\revised{In this work, we propose EvoPlan\@\xspace, a recommender system to provide
upgrade plans for TPLs.
By exploiting the experience of other projects that have already performed similar
upgrades and migrations, EvoPlan\@\xspace recommends the plan that should be considered to upgrade from the current library
version to the desired one. A graph-based representation is inferred by analyzing GitHub\@\xspace repositories and their \emph{pom.xml} files.} During this phase,
EvoPlan\@\xspace assigns weights representing the number of client projects that have already performed a specific upgrade.
Afterwards, the system employs a shortest-path algorithm to minimize the
number of upgrade steps considering such weights. It eventually returns
multiple upgrade plans to the user, with the target version as well as all the intermediate steps.
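The plan computation can be illustrated with a standard shortest-path search over a weighted version graph. The toy graph below and the inverse-popularity weighting (edge weight $= 1/\text{clients}$) are assumptions for illustration; in EvoPlan the graph is stored in Neo4j and queried there.

```python
import heapq

def shortest_plan(graph, start, target):
    """Dijkstra over a weighted migration graph.
    graph: {version: [(next_version, weight), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(v, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = v
                heapq.heappush(heap, (nd, nxt))
    # Reconstruct the plan from target back to start.
    path, v = [target], target
    while v != start:
        v = prev[v]
        path.append(v)
    return list(reversed(path))

# Toy graph: edge weight = 1 / (number of clients that performed the step),
# so heavily travelled upgrade steps are cheaper.
graph = {
    "1.5.8": [("1.6.1", 1 / 40), ("1.7.5", 1 / 10)],
    "1.6.1": [("1.7.5", 1 / 50)],
    "1.7.5": [("1.7.25", 1 / 60)],
}
plan = shortest_plan(graph, "1.5.8", "1.7.25")
```

With these weights, the popular route through \emph{1.6.1} is cheaper than the direct jump, so the recommended plan includes the intermediate version.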
To the best of our knowledge, there exist no tools that provide this type of recommendation. Thus, we cannot compare EvoPlan\@\xspace with any baseline, but we evaluate it by using metrics commonly adopted in information retrieval
applications, i.e.,\@\xspace precision, recall, and F-measure.
Furthermore, we also evaluate the correlation between GitHub\@\xspace\footnote{\url{https://github.com/}} issues data
and the suggested upgrade plans.
In this sense, our work has the following contributions:
\begin{itemize}
\item \emph{Gathering and storing of migration data}: Using \textsc{Neo4j} Java Driver,\footnote{\url{https://github.com/neo4j/neo4j-java-driver}} EvoPlan\@\xspace stores the extracted data in a persistent and flexible data structure;
\item \emph{Recommendation of an upgrade plan list}: Considering the number of clients, EvoPlan\@\xspace suggests the most common upgrade plans that are compliant with those that have been accepted by the developers community at large;
\item \emph{Modularity and flexible architecture}: The proposed system can be seen as both an external module integrable into other approaches and a completely stand-alone tool that can be customized by end users;
\item \emph{Automated evaluation and replication package availability}: \revised{The performance of EvoPlan\@\xspace has been evaluated by employing the widely used ten-fold cross-validation technique. Last but not least, we make the EvoPlan\@\xspace replication package available online to facilitate future research.}\footnote{\url{https://github.com/MDEGroup/EvoPlan}}
\end{itemize}
The paper is structured as follows. Section
\ref{sec:Background} presents a motivating example and existing migration tools
in the literature. Furthermore, in this section we also highlight the open
challenges in the domain. Section \ref{sec:ProposedApproach} introduces EvoPlan\@\xspace,
\revised{the proposed approach to the recommendation of third-party library upgrades.} In Section
\ref{sec:Study}, we present the performed evaluation process. The results obtained from the empirical
evaluation are presented in Section \ref{sec:Results} together with possible threats to validity. The related work is reviewed in Section \ref{sec:RelatedWorks}. Finally, we conclude the paper and envisage future work
in Section~\ref{sec:Conclusion}.
\section{Motivations and Background}
\label{sec:Background}
TPLs offer several tailored functionalities,
and invoking them allows developers to make use of a well-founded infrastructure, without needing to re-implement features from scratch~\cite{Nguyen:2019:JSS:CrossRec}. Eventually, this helps save time as well as increase productivity. However, as libraries evolve over the course of time, it is necessary to have a proper plan to migrate them once they have been updated.
So far, various attempts have been made to tackle this issue.
In this section, we introduce two motivating examples, and recall some notable relevant work as a base for further presentation.
\subsection{Explanatory examples} \label{sec:exp}
This section discusses two real-world situations that developers must cope with during the TPL migration task, i.e.,\@\xspace code refactoring and the handling of vulnerable dependencies. In the first place, it is essential to consider
different TPL releases that are conformed to the semantic versioning format.\footnote{\url{https://semver.org/}} A standard version string follows the pattern \emph{X.Y}, in which \emph{X} represents the \emph{major} release and \emph{Y} represents the \emph{minor} one. Sometimes, releases can include a \emph{patch} version \emph{Z}, resulting in the final string \emph{X.Y.Z}.
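The version format just described can be handled with a small parser; the following sketch (including the function names) is illustrative only.

```python
import re

def parse_version(s):
    """Parse an X.Y or X.Y.Z semantic-version string into a tuple;
    a missing patch component defaults to 0."""
    m = re.fullmatch(r"(\d+)\.(\d+)(?:\.(\d+))?", s)
    if not m:
        raise ValueError(f"not a semantic version: {s}")
    major, minor, patch = m.group(1), m.group(2), m.group(3) or "0"
    return int(major), int(minor), int(patch)

def upgrade_kind(old, new):
    """Classify an upgrade as major, minor, or patch."""
    o, n = parse_version(old), parse_version(new)
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"

kind = upgrade_kind("1.2", "1.3")  # the log4j example discussed next
```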
We present an explanatory example related to
\emph{log4j},\footnote{\url{https://logging.apache.org/log4j/}} a widely used Java logging library. When it is upgraded from version \emph{1.2} to version \emph{1.3}, as shown in Listing \ref{lst:v12} and Listing \ref{lst:v13}, respectively, many internal changes occur that need to be carefully documented.\footnote{\url{http://articles.qos.ch/preparingFor13.html}} \revised{As can be noticed, the main change affects the \texttt{Category} class, which is replaced by the \texttt{Logger} class. Furthermore, all the former methods used by the deprecated class cause several failures at the source code level. For instance, the \texttt{setPriority} method is replaced by \texttt{setLevel} in the new version.}
\begin{lstlisting}[caption={log4j version 1.2.},
label=lst:v12,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Category root = Category.getRoot();
root.debug("hello");
Category cat = Category.getInstance(Some.class);
cat.debug("hello");
cat.setPriority(Priority.INFO);
\end{lstlisting}
\begin{lstlisting}[caption={log4j version 1.3.},
label=lst:v13,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Logger root = Logger.getRootLogger();
root.debug("hello");
Logger logger = Logger.getLogger(Some.class);
logger.debug("hello");
logger.setLevel(Level.INFO);
\end{lstlisting}
Though this is a very limited use case, it suggests that the code refactoring that takes place during the migration is
an error-prone activity even for a single minor upgrade, i.e.,\@\xspace from
version \emph{1.2} to version \emph{1.3}. Additionally, the complexity
dramatically grows in the case of a major release, which typically requires extra effort compared to a minor one and is not welcomed by the majority of developers
\cite{kula_developers_2018}. In this context, reducing the
time needed for a single migration step, even a minor one, is expected to
improve the overall development process.
Concerning vulnerable dependencies, GitHub\@\xspace Dependabot\footnote{\url{https://dependabot.com/blog/github-security-alerts/}} provides weekly security alert digests that highlight possible security issues for outdated dependencies of a repository,
which can be of different languages, e.g.,\@\xspace Python, Java, JavaScript.\footnote{\url{https://dependabot.com/\#languages}} An example of a Dependabot report is shown in Fig.~\ref{fig:digest}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{GHdigest_new.png}
\caption{GitHub\@\xspace Dependabot alert.}
\label{fig:digest}
\end{figure}
As shown in Fig. \ref{fig:digest}, Dependabot suggests possible TPL upgrades to solve vulnerabilities in the given project. For instance, the \textbf{guava} dependency seems to be outdated, and thus the system automatically suggests jumping to the latest version, i.e.,\@\xspace \emph{24.1.1}.
Though this alert can raise awareness of the library's evolution,
it does not offer any concrete recommendation on how to perform the actual migration steps. In some cases, the bot does not provide any recommended version to update the project, e.g.,\@\xspace for the \textbf{log4j} dependency. In this respect, we see an urgent need for recommending the most suitable upgrade plan, as this can significantly reduce the migration effort.
\subsection{Existing techniques} \label{sec:related}
This section reviews some relevant work that copes with the migration problem.
\begin{table}[h]
\centering
\footnotesize
\caption{\revised{Main features of TPL migration systems.}}
\begin{tabular}{|l | c| c | c| c| c | c | c | }
\hline
\textbf{System} & \rotatebox[origin=l]{90}{\textbf{Inferring migration}} & \rotatebox[origin=l]{90}{\textbf{Incremental plan}} & \rotatebox[origin=l]{90}{\textbf{Popularity}} & \rotatebox[origin=l]{90}{\textbf{GitHub\@\xspace issues}} & \rotatebox[origin=l]{90}{\textbf{\textbf{Upgrading}}} & \rotatebox[origin=l]{90}{\textbf{Replacement}} & \rotatebox[origin=l]{90}{\textbf{Applying Migration}} \\ \hline
Meditor~\cite{xu_meditor_2019} & \ding{51} & \ding{55} &\ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{51} \\ \hline
Apiwave~\cite{hora_apiwave_2015} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
Graph Mining~\cite{teyton_mining_2012} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
RAPIM~\cite{alrubaye2019learning} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
Diff-CatchUp~\cite{xing_api-evolution_2007} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{55} \\ \hline
M$^{3}$~\cite{collie_m3_2020} & \ding{51} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
\rowcolor{mygray}
\textbf{EvoPlan} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{55} & \ding{55} \color{black} \\ \hline
\end{tabular}
\label{tab:features}
\end{table}
Meditor \cite{xu_meditor_2019} is a tool aiming to identify migration-related (MR) changes within commits and map them at the source code level with a syntactic program differencing algorithm. To this end, the tool mines GitHub\@\xspace projects searching for MR updates in the \emph{pom.xml} file and checks their consistency with the WALA framework.\footnote{\url{https://github.com/wala/WALA}}
Hora and Valente propose Apiwave \cite{hora_apiwave_2015}, a system that extracts information about libraries' popularity directly from the mined GitHub\@\xspace projects' history. Afterwards, it can measure the popularity of a certain TPL by considering the removal or addition of import statements.
Teyton \emph{et~al.}\@\xspace~\cite{teyton_mining_2012} propose an approach that discovers migrations among different TPLs and stores them in a graph format. A token-based filter is applied on \emph{pom.xml} files to extract the name and the version of the library from the \emph{artifactId} tag. The approach eventually exhibits four different visual patterns that consider both ingoing and outgoing edges to highlight the most popular target.
RAPIM~\cite{alrubaye2019learning} employs a tailored machine learning model to identify and recommend API mappings learned from previous migration changes. Given two TPLs as input, RAPIM extracts valuable method descriptions from their documentation using text engineering techniques and encodes them in feature vectors to feed the underpinning machine learning model.
Diff-CatchUp \cite{xing_api-evolution_2007} has been conceived with the aim of proposing usage examples to support the migration of reusable software components. The tool makes use of the UMLDiff algorithm \cite{10.1145/1101908.1101919} to identify all relevant source code refactorings. Then, a heuristic approach is adopted to investigate the design-model of the evolved component and retrieve a customizable ranked list of suggestions.
Collie \emph{et~al.}\@\xspace recently proposed the M$^{3}$ tool \cite{collie_m3_2020} to support a semantic-based migration of C libraries. To this end, the system synthesizes a behavioral model of the input project by relying on the LLVM intermediate representation.\footnote{\url{https://llvm.org/}} Given a pair of source and target TPLs, the tool generates abstract patterns that are used to perform the actual migration.
Table \ref{tab:features} summarizes the features of the above-mentioned approaches by considering the different tasks involved in migration processes, starting with the discovery of possible migration changes up to embedding them directly into the source code, as explained below.
\begin{itemize}
\item \emph{Inferring migration}: To extract migration-related information, tools can analyze existing projects' artifacts, i.e.,\@\xspace commits, \emph{pom.xml} file, or tree diff. This is the first step of the whole migration process.
\item \emph{Incremental plan}: The majority of the existing approaches perform the migration just by considering the latest version of a TLP. This could increase the overall effort needed to perform the actual migration, i.e.,\@\xspace developers suffer from accumulated technical debt. In contrast, considering a sequence of intermediate migration steps before going to the final one can reduce such refactoring.
\item \emph{Popularity}: This is the number of client projects that make use of a certain library. In other words, if a TLP appears in the \emph{pom.xml} file or in the import statement, its popularity is increased.
\item \emph{GitHub\@\xspace issues}: As an additional criterion, the migration process can include data from \emph{GitHub\@\xspace issues} that may contain relevant information about TPL migration. Thus, we consider them as a possible source of migration-related knowledge.
\item \emph{Upgrading}: This feature means that the tool supports the upgrading of a TPL from an older version to a newer one. For instance, the migration described in Section \ref{sec:exp} falls under this class of migration.
\item \emph{Replacement}: Differently from upgrading, replacement involves the migration from a library to a different one that exposes the same functionalities.
\item \emph{Applying migration}: It represents the final step of the migration process in which the inferred migration changes are actually integrated into the project.
\end{itemize}
\subsection{Dimensions to be further explored}
Even though several approaches successfully cope with TPL migration, there are still some development dimensions that need to be further explored. However, providing an exhaustive analysis is out of the scope of this section. Thus, we limit ourselves to identifying some of them by carefully investigating the approaches summarized in Table \ref{tab:features}. The elicited dimensions are the following:
\begin{itemize}
\item \emph{D1: Upgrading the same library.} Almost all of the presented
approaches, apart from Meditor, focus on replacing libraries, and very few
support upgrading already included ones (see columns
\textit{Upgrading} and \textit{Replacement} in Table \ref{tab:features}).
\item \emph{D2: Varying the migration data sources.} During the inferring
migration phase, strategies to obtain migration-related data play a
fundamental role in the overall process. A crucial challenge is
investigating new sources of information besides the well-known ones,
e.g.,\@\xspace bug reports, Stack Overflow posts, and GitHub\@\xspace issues.
\item \emph{D3: Aggregating different concepts.} The entire migration
process is a complex task and involves notions belonging to different
domains. For instance, GitHub\@\xspace issues could play a relevant role in the
migration process. A recent work \cite{misra_is_2020} shows that
the more comments are included in the source code, the less time is
needed to solve an issue. Neil \emph{et~al.}\@\xspace \cite{neil_mining_2018} extracted
security vulnerabilities from issues and bug reports that could affect library dependencies.
\item \emph{D4: Identification of the upgrade plan.} Existing approaches
identify and apply migrations by taking as input the explicit specification
of the target version of the library that has to be upgraded. Providing developers with insights about candidate upgrade plans that might reduce
the migration efforts can represent valuable support to the overall upgrade
process.
\end{itemize}
In the present work we aim to explore and propose
solutions for dimensions \textsc{D1} and \textsc{D4} by providing multiple
possible upgrade plans given the request of upgrading a library to
a specific target version. Furthermore, we also perform an initial
investigation on the \textsc{D2} and \textsc{D3} dimensions, relying on GitHub\@\xspace
issues. As can be seen in Table~\ref{tab:features}, EvoPlan\@\xspace covers five out of
the seven considered features. In particular, our approach is able to
\emph{infer migration} and make use of an \emph{incremental plan} by considering
\emph{popularity} and \emph{issues}, so as to eventually recommend an
\emph{upgrade plan}. Compared to the existing tools, EvoPlan\@\xspace tackles most of the
issues previously presented.
\section{Proposed approach}
\label{sec:ProposedApproach}
In this paper we propose an approach to support the first phase of the
migration process, i.e.,\@\xspace inferring the possible upgrade plans that can satisfy the request of a developer who wants to upgrade a given TPL used in the project under development.
Our approach aims at suggesting the most appropriate migration plan by
taking into consideration two key factors: the \textit{popularity}
of the upgrade plan and the \textit{availability of discussions} about it.
Popularity means how many clients have performed a given
upgrade plan, while discussions are GitHub\@\xspace issues
that have been opened and closed in projects during the migration phase.
By mining GitHub\@\xspace using the dedicated API,\footnote{\url{https://developer.github.com/v3/}} we are able to extract the information required as input for the recommendation
engine of EvoPlan\@\xspace.
\begin{figure}[t!]
\centering
\includegraphics[width=0.90\linewidth]{EvoPlanV2_new.pdf}
\caption{EvoPlan\@\xspace's architecture.}
\label{fig:approach}
\end{figure}
The conceived approach is depicted in Fig.~\ref{fig:approach} and consists of six components, i.e.,\@\xspace \emph{Crawler}, \emph{Data Extractor}, \emph{Graph Builder}, \emph{Issues Miner}, \emph{Plan Calculator}, and \emph{Plan Ranker}. With the \emph{Crawler} component, the system retrieves information about GitHub\@\xspace repositories and downloads them locally. These repositories are then analyzed by the \emph{Data Extractor} component to extract information about commits and version history. Once all the required information has been collected, \emph{Graph Builder} constructs a migration
graph with multiple weights, and \emph{Issues Miner} generates data related to GitHub\@\xspace issues. The \emph{Plan Calculator} component relies on the graph to calculate the k best
paths available. Finally, \emph{Plan Ranker} sorts these paths by considering the number of issues. In the succeeding subsections, we explain in detail the functionality of each component.
\subsection{Crawler} \label{sec:tracker}
Migration-related information is mined from GitHub\@\xspace using the \emph{Crawler}
component. By means of the \texttt{JGit} library,\footnote{\url{https://www.eclipse.org/jgit/}} \emph{Crawler} downloads a set \textit{P} of GitHub\@\xspace projects that have at least one
\emph{pom.xml} file, which is
a project file containing the list of all adopted TPLs. In case there are
multiple \emph{pom.xml} files, they will be analyzed separately to avoid
information loss. Then, the \emph{Crawler} component analyzes all the
repository's commits that affect the \emph{pom.xml} to find added and removed
TPLs. Additionally, raw issue data is obtained and stored in separate files. In
particular, we count the number of opened and closed issues for each project
\textit{p} $\in$ \textit{P} in a specific time interval \textit{D}.
The starting point of this interval is the moment when a certain version
\textit{v} of a given library \textit{l} is added as a dependency to the
\emph{pom.xml} file of client \textit{C}. A previous study
\cite{10.1007/978-3-319-26844-622} demonstrates that
the monthly rate of open issues tends to decrease over time.
Thus, the endpoint of \textit{D} is set two months after the starting point, which allows us to extract relevant data concerning the considered library \textit{l} without loss of information. \revised{In such a way, the GitHub\@\xspace issues that have been opened and closed for each TPL added to \textit{p} are obtained for further processing phases.}
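The two-month counting window described above can be sketched as follows. This is a minimal illustration rather than EvoPlan\@\xspace's actual implementation: the record layout (the \texttt{created\_at}/\texttt{closed\_at} keys) and the 30-day month approximation are our assumptions, and the real tool obtains the raw records through the GitHub\@\xspace REST API.

```python
from datetime import datetime, timedelta

def count_issues_in_window(issues, start, months=2):
    """Count issues opened and closed inside [start, start + months).

    `issues` is a list of dicts with hypothetical keys 'created_at' and
    'closed_at' (None while the issue is still open), holding ISO dates.
    """
    end = start + timedelta(days=30 * months)  # approximate month length
    opened = closed = 0
    for issue in issues:
        if start <= datetime.fromisoformat(issue["created_at"]) < end:
            opened += 1
        if issue["closed_at"] is not None and \
                start <= datetime.fromisoformat(issue["closed_at"]) < end:
            closed += 1
    return opened, closed

# Window starts when version v of library l appears in the client's pom.xml.
start = datetime(2020, 1, 15)
issues = [
    {"created_at": "2020-01-20", "closed_at": "2020-02-01"},
    {"created_at": "2020-02-10", "closed_at": None},
    {"created_at": "2020-05-01", "closed_at": "2020-05-02"},  # outside window
]
print(count_issues_in_window(issues, start))  # (2, 1)
```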
\subsection{Data Extractor} \label{sec:dataEx}
In this phase, \revised{data is gathered by means of \texttt{JGit}, and analyzed using different processing steps as follows.}
The first step makes use of the Git \emph{log} command to retrieve the list of every modification
recorded for a specific file. Furthermore, the command provides the
\emph{SHA} code of every commit, which allows us to identify it uniquely.
For instance, Fig. \ref{fig:diff-log}.a depicts a
commit related to a given \emph{pom.xml} file taken as input. The identifier of
the commit is used to retrieve the list of the corresponding operated changes
as shown in Fig. \ref{fig:diff-log}.b. In particular, inside a commit we can
find a large amount of useful information, such as what was added or removed and
when. The \emph{Data Extractor} component focuses on the lines that contain
evidence of library changes. In a commit, the added lines are marked with the
sign '+', whereas the removed ones are marked with '-' (see the green and red lines, respectively, shown in Fig.~\ref{fig:diff-log}.b).
In this way, the evolution of a library is obtained by analyzing the sequence
of added/removed lines. With this information, \revised{EvoPlan\@\xspace is also able to count} how many
clients have performed a specific migration. The information retrieved by the
\emph{Data Extractor} component is stored in a target CSV file, which is taken as input by the subsequent entity of the process as discussed below.
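The recognition of added/removed dependency lines in a diff hunk could be sketched as below. The regular expression and the simplified one-tag-per-line layout are assumptions made for brevity; in real \emph{pom.xml} diffs the \texttt{artifactId} and \texttt{version} tags of one dependency are spread over several lines and must be paired up.

```python
import re

# Hypothetical simplification: one <artifactId> or <version> tag per diff line.
DEP_LINE = re.compile(r"^([+-])\s*<(artifactId|version)>([^<]+)</\2>")

def extract_changes(diff_lines):
    """Collect removed ('-') and added ('+') artifactId/version values."""
    changes = {"+": [], "-": []}
    for line in diff_lines:
        m = DEP_LINE.match(line)
        if m:
            sign, tag, value = m.groups()
            changes[sign].append((tag, value))
    return changes

diff = [
    "-    <version>1.6.1</version>",
    "+    <version>1.6.4</version>",
    "     <artifactId>slf4j-api</artifactId>",  # context line, ignored
]
print(extract_changes(diff))
# {'+': [('version', '1.6.4')], '-': [('version', '1.6.1')]}
```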
\begin{figure}
\small
\centering
\includegraphics[width=\linewidth]{log.png} \\
a) Example of \emph{log} \\
\includegraphics[width=\linewidth]{diff.png}\\
b) Example of \emph{diff}\\
\caption{Example of artifacts used by the \emph{Data Extractor} component.}
\label{fig:diff-log}
\end{figure}
\subsection{Graph Builder}
This component creates nodes and
relationships by considering the date and library changes identified in the previous
phase. To this end, EvoPlan\@\xspace exploits the Cypher query language\footnote{\url{https://neo4j.com/developer/cypher-query-language/}} to store
data into a \textsc{Neo4j} graph. For instance, suppose we extract from the CSV files two library-version pairs \emph{(l,v1)} and \emph{(l,v2)} with signs '-' and '+', respectively. In this case, the component creates an oriented edge from
\emph{(l,v1)} to \emph{(l,v2)}. Once the first edge is created, any further pair describing the same library upgrade increments the weight of that edge.
The date value contained in the CSV record is
used to avoid duplicated edges or loops. Furthermore, each edge is weighted
according to the number of clients, as described in the \textit{Data Extractor}
phase. That means that if we find the same pair \emph{(l,v1)} to \emph{(l,v2)} \emph{w} times (i.e.,\@\xspace \emph{w} projects have already migrated the library \emph{l} from \emph{v1} to \emph{v2}), the edge will have a weight of \emph{w}.
Thus, the final outcome of
this component is a migration graph that considers the community's interests as
the only weight. For instance, Fig.~\ref{fig:graph}
represents the extracted migration graph for the \emph{slf4j-api} library. The
graph contains all the mined versions of the library, and each edge shows the
number of clients that have performed the corresponding upgrade. For instance,
in Fig.~\ref{fig:graph} the edge from version
\emph{1.6.1} to \emph{1.6.4} is selected, and 14 clients (see the details at
the bottom) have performed such a migration.
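The edge-weighting scheme just described can be sketched in memory without a running \textsc{Neo4j} instance. The counter below mirrors what a Cypher \texttt{MERGE} query could persist; the query in the comment is a hypothetical example, not EvoPlan\@\xspace's actual statement, and the weights used here are made up.

```python
from collections import Counter

# Edge weights: how many clients performed each (library, v1) -> (library, v2) upgrade.
graph = Counter()

def record_upgrade(lib, v_from, v_to):
    # The actual tool would run a Cypher query instead, e.g. (hypothetical):
    #   MERGE (a:Version {lib: $lib, v: $from}) MERGE (b:Version {lib: $lib, v: $to})
    #   MERGE (a)-[r:UPGRADE]->(b)
    #   ON CREATE SET r.weight = 1 ON MATCH SET r.weight = r.weight + 1
    graph[(lib, v_from, v_to)] += 1

# Rows as produced by the Data Extractor (library, removed version, added version).
for row in [("slf4j", "1.6.1", "1.6.4")] * 14 + [("slf4j", "1.6.4", "1.7.5")] * 9:
    record_upgrade(*row)

print(graph[("slf4j", "1.6.1", "1.6.4")])  # 14 clients performed this step
```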
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{slf4j_graph.pdf}
\caption{\revised{Migration graph of the \emph{slf4j} library.}}
\label{fig:graph}
\end{figure}
\subsection{Plan Calculator} \label{sec:plan}
This component plays a key role in the approach. Given a library to be
upgraded, the starting version, and the target one, \emph{Plan Calculator}
retrieves
the k shortest paths
by using the well-known \emph{Yen's K-shortest paths
algorithm} \cite{Yen2007FindingTK}, which is embedded into the
\textsc{Neo4j} library.
As a default heuristic implemented in EvoPlan\@\xspace, the component retrieves all the
possible paths that maximize the popularity of the steps that can be performed for the wanted
upgrade. Thus, the \textit{Plan Calculator} component employs the aforementioned weights,
which represent popularity, as the criterion for the shortest-path algorithm.
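A toy illustration of this idea: a popularity weight \emph{w} can be turned into a cost 1/\emph{w} so that a standard shortest-path search prefers popular edges. For brevity, the sketch below enumerates all simple paths with a DFS instead of implementing Yen's algorithm (which only pays off on larger graphs); the version numbers and weights are made up.

```python
def k_best_paths(edges, source, target, k):
    """edges: {(u, v): popularity}. Return the k paths with lowest total 1/w cost."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, 1.0 / w))
    paths = []

    def dfs(node, path, cost):
        if node == target:
            paths.append((cost, path))
            return
        for nxt, c in adj.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                dfs(nxt, path + [nxt], cost + c)

    dfs(source, [source], 0.0)
    return sorted(paths)[:k]  # cheapest (most popular) first

edges = {  # hypothetical popularity weights
    ("1.5.8", "1.6.1"): 10, ("1.6.1", "1.6.4"): 14,
    ("1.6.4", "1.7.5"): 9, ("1.5.8", "1.7.5"): 2,
    ("1.7.5", "1.7.25"): 20,
}
for cost, path in k_best_paths(edges, "1.5.8", "1.7.25", k=2):
    print(round(cost, 3), path)
```

With these weights the multi-step plan through \emph{1.6.1} and \emph{1.6.4} beats the direct jump, mirroring the behavior discussed in the text.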
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{kpaths.png}
\caption{List of \emph{k}-shortest paths for \emph{slf4j}.}
\label{fig:path}
\end{figure*}
By considering the graph shown in Fig. \ref{fig:graph}, there are several possibilities to upgrade \textit{slf4j} from version \emph{1.5.8} to
\emph{1.7.25}. By taking into account the available weights, EvoPlan\@\xspace can recommend the ranked list depicted in Fig.~\ref{fig:path}. The first path in the list suggests following the versions \emph{1.6.1}, \emph{1.6.4}, and
\emph{1.7.5} to reach the final version considered in the example, i.e.,\@\xspace
\emph{1.7.25}.\footnote{\revised{It is worth noting that the popularity values are inversely related to the popularity of the corresponding upgrade plans. In the example shown in Fig. \ref{fig:path}, the most popular upgrade plan is the one with popularity value 0.898.}} Such a plan is the one performed most often by other projects that rely on \emph{slf4j} and have already operated the wanted library migration. Thus, such a path is more frequent than directly updating the library to the newest version.
\subsection{Issues Miner} \label{sec:IssuesCalc}
Issues play an important role in project development. For instance, by solving issues, developers contribute to the identification of bugs as well as the enhancement of software quality through feature requests~\cite{liao_exploring_2018}. In the scope of this work, we exploit issues as a criterion for ordering upgrade plans. In particular, we rely on the availability of issues that have been opened and closed due to upgrades of given third-party libraries.
\begin{table}[t!]
\centering
\caption{Issues information extracted for \emph{commons-io}.}
\begin{tabular}{|l | c | c | c | }
\hline
\textbf{Version} & \textbf{Open Issues}& \textbf{Closed Issues} & \textbf{Delta} \\ \hline
1.0&14&33&19 \\ \hline
1.3.2&150&420&270 \\ \hline
1.4&87&408&321 \\ \hline
2.0&5&10&5 \\ \hline
2.0.1&133&457&324 \\ \hline
2.1&129&516&387 \\ \hline
2.2&67&999&932 \\ \hline
2.3&5&20&15 \\ \hline
2.4&939&3,283&2,344 \\ \hline
2.5&64&918&854 \\ \hline
2.6&64&548&484 \\ \hline
\end{tabular}
\vspace{-.2cm}
\label{tab:issues}
\end{table}
The \emph{Issues Miner} component is built to aggregate and filter the raw issue
data gathered in the early stage of the process shown in Fig. \ref{fig:approach}. However, due to the internal construction of \textsc{Neo4j}, we cannot directly embed this data as a weight
on the migration graph's edges. Thus, as shown in Section \ref{sec:tracker}, we
collect the number of open and closed issues considering a specific time
window, i.e.,\@\xspace two months starting from the introduction of a certain TPL in the
project. Then, this component filters and aggregates the related issue data by
using Pandas, a widely used Python library for data analysis
\cite{pandas_pandas_2020}. For instance, Table \ref{tab:issues} shows the mined
issues related to the \emph{commons-io} library. In particular, for each
version of the library, the number of issues that have been opened and closed by
all the analysed clients since they have migrated to that library version is
shown. EvoPlan\@\xspace can employ the extracted data to enable a ranking function based on GitHub\@\xspace issues as discussed in the next section.
\emph{Issues Miner} works as a stand-alone component, thus it does not impact the time required by the overall process. In this way, we have an additional source of information that can be used later in the process as a supplementary criterion to choose the ultimate upgrade plan from the ranked list produced by the \textit{Plan Calculator} component.
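The aggregation step can be sketched with the standard library alone (the actual component uses Pandas). The per-client record layout below is an assumption; the expected result format follows the Open/Closed/Delta columns of Table \ref{tab:issues}.

```python
from collections import defaultdict

def aggregate_issues(records):
    """records: (version, opened, closed) tuples, one per client project.
    Returns {version: (open_total, closed_total, delta)}."""
    totals = defaultdict(lambda: [0, 0])
    for version, opened, closed in records:
        totals[version][0] += opened
        totals[version][1] += closed
    # Delta is closed minus opened, as in the issues table.
    return {v: (o, c, c - o) for v, (o, c) in totals.items()}

# Hypothetical per-client counts for two versions of commons-io.
records = [("2.0", 3, 6), ("2.0", 2, 4), ("2.3", 5, 20)]
print(aggregate_issues(records))  # {'2.0': (5, 10, 5), '2.3': (5, 20, 15)}
```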
\subsection{Plan Ranker} \label{sec:PlanRank}
In the final phase, the k-paths produced by the \textit{Plan Calculator} are
rearranged according to the information about issues. For every
path, we compute the average issue delta, i.e.,\@\xspace closed minus opened issues, over its versions. A large value means that a certain
path potentially requires less integration effort, since more issues have been closed than opened \cite{liao_exploring_2018}, i.e.,\@\xspace issues
have been tackled and solved rather than being left untouched.
Thus, the aim is to order the plans produced by \textit{Plan Calculator} according to the retrieved issues: among the most popular plans we will propose those with the highest issue values.
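The re-ranking step could look like the sketch below; the per-version \texttt{issue\_delta} values are hypothetical, and averaging deltas over the versions of a path is our reading of the ranking criterion.

```python
def rank_plans(plans, issue_delta):
    """plans: version paths from the Plan Calculator (already the k most popular).
    Re-rank by the average issue delta over each path's versions, highest first."""
    def avg_delta(path):
        return sum(issue_delta.get(v, 0) for v in path) / len(path)
    return sorted(plans, key=avg_delta, reverse=True)

# Hypothetical deltas (closed minus opened issues) per version.
issue_delta = {"1.5.8": 10, "1.6.1": 80, "1.6.4": 20, "1.7.5": 60, "1.7.25": 70}
plans = [
    ["1.5.8", "1.7.5", "1.7.25"],
    ["1.5.8", "1.6.1", "1.7.5", "1.7.25"],
]
print(rank_plans(plans, issue_delta)[0])
# ['1.5.8', '1.6.1', '1.7.5', '1.7.25']
```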
\begin{table}[h!]
\caption{An example of the ranking results.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | }
\hline
\textbf{Proposed Path} & \rotatebox[origin=c]{90}{\textbf{Pop. Value}} & \rotatebox[origin=c]{90}{\textbf{Issues Value}} \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.6.6, 1.7.5, 1.7.25 & 1.446 & 57 \\ \hline
\rowcolor{lightgray}
1.5.8, 1.6.1, 1.6.4, 1.7.5, 1.7.25 & 0.898 & 58 \\ \hline
1.5.8, 1.7.5, 1.7.25 & 1.0 & 58 \\ \hline
\rowcolor{Gold}
1.5.8, 1.6.1, 1.7.5, 1.7.25 & 1.0 & 61 \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.7.2, 1.7.5, 1.7.25 & 1.238 & 58 \\ \hline
\end{tabular}
\label{tab:Ranking}
\end{table}
Table \ref{tab:Ranking} shows an example of the ranking process. There are two highlighted paths: the gray row corresponds to the best result according to plan popularity only, i.e.,\@\xspace the one with the lowest popularity value. Meanwhile, the orange row is recommended according to the issues criterion (in this case, the higher the issue value, the better). The orange path should be selected because, among the most popular ones, it is the one characterized by the highest activity in terms of opened and closed issues. In this way, EvoPlan\@\xspace is
able to recommend an upgrade plan to migrate from the initial version to the
desired one by learning from the experience of other projects which have
already performed similar migrations.
\section{Evaluation}
\label{sec:Study}
To the best of our knowledge, there are no replication packages or reusable tools related to the approaches outlined in Section~\ref{sec:Background}, making it impossible to compare EvoPlan\@\xspace with any baseline. Thus, we conduct an evaluation of \revised{the proposed approach on a real dataset collected from GitHub\@\xspace.}
Section~\ref{sec:ResearchQuestions} presents three research questions, while Section~\ref{sec:Process} describes the evaluation process. Section \ref{sec:dataset} gives a detailed description of the dataset used for the evaluation, and the employed metrics are specified in Section~\ref{sec:metrics}.
\subsection{Research questions} \label{sec:ResearchQuestions}
To study the performance of EvoPlan\@\xspace, we consider the following research questions:
\begin{itemize}
\item \rqfirst~To answer this question, we conduct experiments following
the ten-fold cross-validation methodology~\cite{10.5555/1643031.1643047} on
a dataset considering real migration data collected from GitHub\@\xspace. Moreover, we
compute \emph{Precision}, \emph{Recall}, and \emph{F-measure} by comparing the recommendation outcomes with real migrations as stored in GitHub\@\xspace;
\item \textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?} \newline We analyze how the number of opened and closed
issues could affect the migration process. To this end, we compute three
different statistical coefficients to detect if there exists any
correlation among the available data.
\item \textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}~Besides the recommended migration steps, we are interested in measuring the time of the overall process, including the graph building phase. This aims at ascertaining the feasibility of our approach in practice.
\end{itemize}
\subsection{Overall process} \label{sec:Process}
As depicted in Fig.~\ref{fig:eval}, we perform experiments using the ten-fold
cross-validation methodology on a well-founded dataset coming from an existing
work~\cite{kula_developers_2018}.
Given the whole list of $\approx$11,000 projects, we download the entire
dataset using the \emph{Crawler} component. Then, the dataset is split into testing and
ground truth projects, i.e.,\@\xspace 10\% and 90\% of the entire set, respectively, in each round of the
process. This means that in each round we generate a new migration graph by using the actual 90\% portion. Given a single testing project, the \emph{Analyzing commits} phase is conducted to capture the
actual upgrade path followed by the repository, as stated in Section \ref{sec:tracker}.
To build the ground-truth graph, i.e.,\@\xspace the real migration in GitHub\@\xspace, we consider projects not included in the testing ones and calculate
every possible upgrade plan for each TPL.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Eval.pdf}
\caption{The evaluation process.}
\label{fig:eval}
\end{figure}
To ensure a reliable evaluation, we select the starting and the ending version of a certain TPL from the actual plan of a testing project. The pair is used to feed the \emph{Plan Calculator} component, which in turn retrieves the proposed plan.
In this respect, by comparing the two paths we are able to compute the metrics
used to assess the overall performance, namely precision, recall, and F-measure.
\subsection{Data collection} \label{sec:dataset}
We make use of an existing dataset which has been curated by a recent study available on GitHub\@\xspace.\footnote{\url{https://bit.ly/2Opd1GH}}
The rationale behind this selection is the quality of the repositories which were
collected by applying different filters, i.e.,\@\xspace removing duplicates, including
projects with at least one \emph{pom.xml} file, and crawling only well-main\-tained
and mature projects.
Table \ref{tab:dataset} summarizes the number of projects and \emph{pom.xml}
files. The dataset consists of 10,952 GitHub\@\xspace repositories, nevertheless we were
able to download only 9,517 of them, as some have been deleted or moved.
Starting from these projects, we got a total number of 27,129 \emph{pom.xml}
files. Among them, we selected only those that did not induce the creation of
empty elements by the \emph{Data Extractor} component while analyzing
\textit{logs} and \textit{diffs} as shown in Fig. \ref{fig:diff-log}. The
filtering process resulted in 13,204 \emph{pom.xml} files. The training set is
used to create the migration graph, so as to avoid any possible bias. In each round, we
tested 420 projects, while 3,821 projects were used to build the graph.
\begin{table}[h!]
\centering
\caption{Statistics of the dataset.}
\begin{tabular}{|l | p{1.3cm}|}
\hline
Total number of projects & 10,952 \\ \hline
Number of downloaded projects& 9,517 \\ \hline
Total number of \emph{pom.xml} files & 27,129 \\ \hline
Number of screened \emph{pom.xml} files & 13,204 \\ \hline
\end{tabular}
\label{tab:dataset}
\end{table}
Table \ref{tab:libs} summarizes the set of libraries in the dataset, obtained by employing the \emph{Crawler} module (cf. Section \ref{sec:tracker}). There are seven popular libraries,\footnote{\url{https://mvnrepository.com/popular}} i.e.,\@\xspace \emph{junit}, \emph{httpclient}, \emph{slf4j}, \emph{log4j}, \emph{commons-io}, \emph{guava}, and \emph{commons-lang3}.
Among others, \emph{junit}
has the largest number of migrations, i.e.,\@\xspace 2,972. Concerning the number of versions, \emph{slf4j} has 71 different versions, being the densest library. Meanwhile, \emph{commons-lang3} is associated with the smallest number of migrations, i.e.,\@\xspace 162, and \emph{commons-io} is the sparsest library with only 16 versions. The last column shows the number of versions for which we could obtain issue data; the difference from the preceding column means that issue data was not available for all versions.
\begin{table}[h]
\centering
\caption{Number of migrations and versions.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | p{0.80cm} |}
\hline
\textbf{Library} & \rotatebox[origin=l]{90}{\textbf{\# migrations}} & \rotatebox[origin=l]{90}{\textbf{\# versions}} & \rotatebox[origin=l]{90}{\textbf{\# issue vers.}} \\ \hline
\emph{junit} & 2,972 & 30 & 19 \\ \hline
\emph{httpclient} & 218 & 53 & 35 \\ \hline
\emph{slf4j} & 209 & 71 & 26 \\ \hline
\emph{log4j} & 229 & 42 & 19 \\ \hline
\emph{commons-io} & 186 & 16 & 11\\ \hline
\emph{guava} & 627 & 70 & 34 \\ \hline
\emph{commons-lang3} & 162 & 16 & 13\\ \hline
\end{tabular}
\vspace{-.4cm}
\label{tab:libs}
\end{table}
\subsection{Metrics} \label{sec:metrics}
Given a migration path retrieved by EvoPlan\@\xspace, we compare it with the real migration path extracted from a testing project. To this end, we employ
\emph{Precision}, \emph{Recall}, and \emph{F-measure} (or F$_1$-score) widely used in the Information Retrieval domain to assess the performance prediction of a system.
In the first place, we rely on the following definitions:
\begin{itemize}
\item A \textit{true positive} corresponds to the case when the recommended path matches the actual path extracted from the testing projects; \emph{TP} is the total number of true positives;
\item A \textit{false positive} means that the recommended upgrade plan is not present in the ground-truth paths; \emph{FP} is the total number of false positives;
\item A \textit{false negative} is a migration step that should be present in the suggested plan but is not; \emph{FN} is the total number of false negatives.
\end{itemize}
Considering such definitions, the aforementioned metrics are computed as follows:
\begin{equation} \label{eqn:Precision}
P = \frac{ TP }{TP+ FP}
\end{equation}
\begin{equation} \label{eqn:Recall}
R = \frac{ TP }{TP+FN}
\end{equation}
\begin{equation} \label{eqn:F-Measure}
F\mbox{-}measure = \frac{2 \times P \times R}{P + R}
\end{equation}
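Under the definitions above, the metrics can be computed as follows. Treating each individual upgrade step of a path as an item is our assumed matching granularity; the paper's definitions mix whole-path and per-step matching, so this sketch fixes one interpretation.

```python
def step_set(path):
    """Decompose a version path into its individual upgrade steps."""
    return {(a, b) for a, b in zip(path, path[1:])}

def prf(recommended, actual):
    """Precision, recall, and F-measure over migration steps."""
    rec, act = step_set(recommended), step_set(actual)
    tp = len(rec & act)                      # steps present in both paths
    p = tp / len(rec) if rec else 0.0        # TP / (TP + FP)
    r = tp / len(act) if act else 0.0        # TP / (TP + FN)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical recommended vs. actual slf4j upgrade paths.
p, r, f = prf(["1.5.8", "1.6.1", "1.7.5"],
              ["1.5.8", "1.6.1", "1.6.4", "1.7.5"])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.33 0.4
```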
\textbf{Rank correlation}: We consider the following coefficients:
\begin{itemize}
\item \textit{Kendall's tau}
measures the strength of dependence between two variables. It is a non-parametric test, i.e.,\@\xspace it is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified.
\item \textit{Pearson's correlation}
is the most widely used correlation statistic to measure the degree of the relationship between linearly
related variables. In particular, this coefficient is suitable when it is possible to draw a regression line between the points of the available data.
\item \textit{Spearman's correlation}
is a non-parametric test that is used to measure the degree of association between two variables. Differently from Pearson's coefficient, Spearman's correlation index performs better in cases of monotonic relationships.
\end{itemize}
All the considered coefficients assume values in the range [-1,+1], i.e.,\@\xspace from perfect negative correlation to perfect positive correlation. The value 0 indicates that between two variables there is no correlation.
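The three coefficients can be computed with off-the-shelf routines such as \texttt{scipy.stats}; for illustration, the sketch below implements them directly (omitting tie corrections, which the sample data does not need). The popularity and delta values are made up, chosen to be monotonically related.

```python
from statistics import mean

def pearson(x, y):
    """Pearson's r: linear correlation between x and y."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(v):
    """Ranks 1..n; assumes distinct values (no tie handling)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, 1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho is Pearson's r computed on the ranks."""
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    """Kendall's tau: (concordant - discordant pairs) / total pairs."""
    n = len(x)
    s = sum((1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1)
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

popularity = [2, 5, 9, 14, 20]   # made-up client counts per migration step
delta =      [1, 4, 6, 15, 18]   # made-up issue deltas
print(round(pearson(popularity, delta), 3))
print(spearman(popularity, delta))  # 1.0 (perfectly monotone)
print(kendall(popularity, delta))   # 1.0 (all pairs concordant)
```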
In the next section, we explain in detail the experimental results obtained through the evaluation.
\section{Experimental results}
\label{sec:Results}
We report and analyze the obtained results by answering the research questions introduced in the previous section.
\subsection{\rqfirst}
Table \ref{tab:metrics} reports the average results obtained from the cross-validation evaluation.
EvoPlan\@\xspace achieves the maximum precision for \emph{commons-io}, i.e.,\@\xspace 0.90 in all the rounds. The tool also gets a high precision for \emph{junit}, i.e.,\@\xspace 0.88. Meanwhile, the smallest precision, i.e.,\@\xspace 0.58, is obtained for \emph{httpclient}. Concerning recall, EvoPlan\@\xspace obtains values of 0.94 and 0.96 for the \emph{junit} and \emph{commons-io} libraries, respectively. In contrast, the tool achieves the worst recall with \emph{httpclient}, i.e.,\@\xspace 0.64. Overall, considering the F-measure score, EvoPlan\@\xspace gets the best and the worst performance for \emph{commons-io} and \emph{httpclient}, respectively.
\vspace{.1cm}
\begin{table}[h!]
\centering
\caption{Precision, Recall, and F-Measure considering popularity.}
\begin{tabular}{|l | p{1.6cm} | p{1.2cm} | p{1.8cm} |}
\hline
\textbf{Library} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \\ \hline
\emph{junit} & 0.88 & 0.94 & 0.91 \\ \hline
\emph{httpclient} & 0.58 & 0.64 & 0.61 \\ \hline
\emph{slf4j-api} & 0.65 & 0.74 & 0.69 \\ \hline
\emph{log4j} & 0.88& 0.93 & 0.91 \\ \hline
\emph{commons-io} & \textbf{0.90} & \textbf{0.96} & \textbf{0.94} \\ \hline
\emph{guava} & 0.60 & 0.73 & 0.65 \\ \hline
\emph{commons-lang3} & 0.66 & 0.67 & 0.65 \\ \hline
\end{tabular}
\label{tab:metrics}
\end{table}
Altogether, we see that there is a substantial difference between the performance obtained by EvoPlan\@\xspace for different libraries. We suppose that this happens due to the availability of the training data. In particular, by carefully investigating each library used in the evaluation, we see that the libraries with the worst performance have, on average, few migrations that can be extracted from the \emph{pom.xml} files (cf. Table \ref{tab:libs}). For instance, there are 162 and 209 migrations associated with \emph{commons-lang3} and \emph{slf4j-api}, respectively, and EvoPlan\@\xspace obtains a low performance on these libraries. Meanwhile, there are
2,972 migrations for \emph{junit}, and EvoPlan\@\xspace gets high precision, recall, and F${_1}$ for this library. This means that scarce
data can negatively affect the final recommendations.
Another factor that can influence the conducted evaluation is the number of versions involved in an upgrade for each library, i.e.,\@\xspace the availability of fewer versions dramatically reduces the migration-related information. This hypothesis is confirmed by the values observed for \emph{log4j} and \emph{junit}, which bring better results with 39 and 40 analyzed versions, respectively. However, there is an exception with \emph{guava}, i.e.,\@\xspace EvoPlan\@\xspace yields a mediocre result for the library (F$_{1}$=0.65), even though we considered 627 migration paths and 49 different versions. By examining the library, we realized that it has many versions employed in the Android domain as well as abandoned versions. Thus, we attribute the reduction in performance to the lack of adequate training data.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_1$.} EvoPlan\@\xspace is capable of predicting the correct upgrade plan given a real-world migration dataset. Although for some libraries we witness a reduction in the overall performances, the main reason can be found in the lack of migration paths in the original dataset.}
\end{tcolorbox}
\subsection{\textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?}}
To answer this question we measure the correlation among observed data, i.e.,\@\xspace the number of clients that perform a certain migration step and the issues delta considering the time interval described in Section \ref{sec:tracker}.
The number of clients performing a migration is referred to as \textit{popularity}, as described in Section \ref{sec:plan}. Meanwhile, as its name suggests, the \textit{delta} is the difference between the number
of closed issues and the number of opened ones. It assumes a positive value when the number of closed issues is greater than the number of opened ones; in contrast, negative values are observed when opened issues exceed closed ones. In other words, the delta characterizes migration steps in terms of closed issues.
\begin{table}[b!]
\centering
\vspace{-.4cm}
\caption{Correlation coefficients with a $p$-$value < 2.2\mathrm{e}{-16}$.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Metric} & \textbf{Value} \\ \hline
Kendall's ($\tau$) & 0.458 \\ \hline
Pearson (r) & 0.707 \\ \hline
Spearman ($\rho$) & 0.616 \\ \hline
\end{tabular}
\label{tab:corr}
\end{table}
The results of the three indexes are shown in Table \ref{tab:corr}. As we can see, all the metrics show a
positive correlation between the number of clients that perform a certain migration and the corresponding issue delta. In particular, Kendall's $\tau$ is equal to 0.458, while Spearman's $\rho$ reaches the value of 0.616. The highest correlation is given by Pearson's coefficient, i.e.,\@\xspace r = 0.707.
The strong correlation suggests that given a library, the more clients perform a migration on its versions, the more issues are solved. As it has been shown in a recent work~\cite{liao_exploring_2018},
the act of solving issues allows developers to identify bugs and improve code, as well as enhance software quality. Summing up, having a large number of migrated clients can be interpreted as a sign of maturity,
i.e.,\@\xspace the evolution among different versions attracts attention
by developers.
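To illustrate how the three coefficients in Table \ref{tab:corr} can be computed, the following self-contained Python sketch implements Pearson's $r$, Spearman's $\rho$, and Kendall's $\tau$ (the plain $\tau_a$ variant, without tie correction) on illustrative popularity/delta pairs; the sample values are invented for illustration and are not taken from our dataset.

```python
# Hand-rolled correlation coefficients for (popularity, delta) pairs.
# Popularity = clients performing a migration step; delta = closed - open issues.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    # assign average ranks, handling ties
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    # tau_a: (concordant - discordant) / number of pairs
    conc = disc = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

popularity = [14, 3, 27, 8, 51, 5]   # illustrative values only
delta = [19, -4, 60, 10, 120, 2]
print(round(pearson(popularity, delta), 3))
```

In practice a statistics package (e.g.,\@\xspace \texttt{scipy.stats}) would also report the $p$-value shown in the table caption.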
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_2$.}
There is a significant correlation between the popularity of an upgrade plan and
the number of closed issues. This implies that
the plans to be given the highest priority should be those that have the majority
of issues solved during the migration.}
\end{tcolorbox}
\subsection{\textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}}
We measured the average time required for running the experiments using a mainstream laptop with the following configuration: an i5-8250U 1.60GHz processor, 16GB of RAM, and Ubuntu 18.04 as the operating system. Table~\ref{tab:time} summarizes the time needed to execute the corresponding phases.
\begin{table}[h!]
\centering
\vspace{-.3cm}
\caption{Execution time.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Phase} & \textbf{Time (seconds)} \\ \hline
Graph building & 15,120 \\ \hline
Querying & 0.11 \\ \hline
Testing & 145.44 \\ \hline
\end{tabular}
\label{tab:time}
\vspace{-.2cm}
\end{table}
The most time-consuming phase is the creation of the graph, with 15,120 seconds, corresponding to 252 minutes. Meanwhile, the querying phase takes just 0.11 seconds to finish; the testing phase is a bit longer: 145.44 seconds. It is worth noting that the testing consists of the sub-operations that are performed in actual use, i.e.,\@\xspace opening CSV files, extracting the actual plan, and calculating the shortest path.
This means that we can get an upgrade plan in less than a second, which is acceptable considering the computational capability of the laptop used.
This suggests that EvoPlan\@\xspace can be deployed in practice to suggest upgrade plans.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_3$.} The creation of the migration graph is computationally expensive. However, it can be done offline, one time for the whole cycle. EvoPlan\@\xspace is able to deliver a single upgrade plan in a reasonable time window, making it usable in the field.}
\end{tcolorbox}
\subsection{Threats to validity}
\label{sec:Threats}
This section discusses possible threats that may affect the proposed approach.
Threats to \emph{internal validity} could come from the graph building process.
In particular, the crawler can retrieve inaccurate information from
\emph{pom.xml} files or GitHub\@\xspace commits. To deal with this, we employed a
mining technique similar to those used in the related studies presented in Section
\ref{sec:related}, i.e.,\@\xspace Meditor and APIwave, aiming to minimize missing data.
Another possible pitfall lies in downgrade migrations, i.e.,\@\xspace a client that moves
from a newer version to an older one. We consider this issue
as future work.
Concerning \emph{external validity}, the main threat is related to the generalizability of the obtained results. We try to mitigate the threat by considering only popular Java libraries.
Nevertheless, EvoPlan\@\xspace relies on a flexible architecture that can be easily modified to incorporate more TPLs.
Concerning the employed GitHub\@\xspace issues data, they are coarse-grained, i.e.,\@\xspace we can have a huge number of issues that do not have a strong tie with the examined TPLs. We addressed this issue in the paper by considering the ratio of the delta instead of absolute numbers.
Concerning the supported data sources, EvoPlan\@\xspace employs \emph{Maven} and GitHub\@\xspace to mine migration histories and retrieve issues, respectively. Thus, currently, upgrade plans can be recommended for projects that rely on these two technologies. However, the architecture of EvoPlan\@\xspace has been designed in a way that supporting additional data sources would mean operating localized extensions in the \textit{Crawler}, \textit{Data Extractor}, and \textit{Issue Miner} components without compromising the validity of the whole architecture.
Finally, threats to \emph{construct validity} concern the ten-fold cross-validation shown in Section \ref{sec:Process}. Even though this technique is used mostly in the machine learning domain, we mitigate any possible incorrect values by considering a different ground-truth graph for each evaluation round. Additionally, the usage of GitHub\@\xspace issues could be seen as a possible threat. We mitigate this aspect by using such information as post-processing to reduce possible negative impacts on the recommended items, i.e.,\@\xspace ranking the retrieved upgrade plans according to the total amount of issues.
\section{Related work}
\label{sec:RelatedWorks}
A plethora of studies highlights different issues related to the TPL migration problem. Dig and Johnson \cite{dig_role_2005} demonstrate the role of code refactorings as the principal origin of \emph{breaking changes}, i.e.,\@\xspace failures caused by a library upgrade from an older version to a newer one.
\emph{Binary incompatibilities (BIs)} happen when the application is no longer compilable after migration~\cite{cossette_seeking_2012}.
The Clirr tool has been used to detect the entities that cause incompatibilities by analyzing the JAR files of the tested project.
By evaluating six different recommendation techniques that are typically used to fix BIs, this study shows that they were capable of resolving only 20\% of them.
A recent work \cite{kula_developers_2018} draws the community's attention to the \emph{migration awareness} problem. By conducting a user study, the two main migration awareness mechanisms have been evaluated, i.e.,\@\xspace security advisories and new release announcements. In this respect, the results show that the majority of software systems rarely update older but reliable libraries, and that security advisories provide incomplete solutions to developers.
Alrubaye \emph{et~al.}\@\xspace~\cite{alrubaye_how_nodate} conducted an empirical study to highlight the benefits of the migration process over software quality, measured by the three standard metrics used in the domain, i.e.,\@\xspace coupling, cohesion, and complexity. By relying on a dataset composed of nine different libraries and 57,447 Java projects, statistical tests have been carried out on relevant migration data. The results confirm that the migration process improves the code quality in terms of the mentioned metrics.
The problem of \emph{technical debt} has been studied in both
academia
and industry~\cite{avgeriou_et_al:DR:2016:6693},
and it is related to
``immature'' code
sent to production~\cite{10.1145/157710.157715}. Although this practice is used to achieve immediate results, it can lead to issues after a certain period. To address this, technical debt can be repaid through code refactorings by carrying out a cost-benefit analysis. Lavazza \emph{et~al.}\@\xspace \cite{lavazza_technical_2018} propose the usage of technical debt as an external software quality attribute of a project. Furthermore, technical debt can affect software evolution and maintainability by introducing defects that are difficult to fix.
Sawant and Bacchelli \cite{sawant_fine-grape_2017} investigate \emph{API feature usage} across different TPL releases by mining 20,263 projects and collecting 1,482,726 method invocations belonging to five different libraries. Using the proposed tool fine-GRAPE, two case studies have been conducted considering two aspects, i.e.,\@\xspace the number of migrations towards newer versions and the usage of API features. The results
confirm that developers tend not to update their libraries. More interestingly, the second study shows that a low percentage of API features are actually used in the examined projects.
\section{Conclusion and future work}
\label{sec:Conclusion}
The migration of TPLs during the development of a software project plays an
important role in the whole development cycle. Even though some tools are
already in place to address the issue, different challenges are still open, e.g.,\@\xspace
reducing the effort required during the migration steps or the need to consider heterogeneous data
sources, to name a few. We proposed EvoPlan\@\xspace, a novel approach to support the
upgrading of TPLs by considering miscellaneous software artifacts. By
envisioning different components, our tool is capable of extracting relevant
migration data and encoding it in a flexible graph-based representation. Such a
migration graph is used to retrieve multiple upgrade plans considering the
popularity as the main rationale. They are eventually ranked by exploiting the
GitHub\@\xspace issues data to possibly minimize the effort that is required by the
developer to select one of the candidate upgrade plans. A feasibility study
shows that the results are promising, with respect to both effectiveness and
efficiency.
As future work, we plan to incorporate additional concepts in the
migration graph, i.e.,\@\xspace TPL documentation, Stack Overflow posts, and issue sentiment
analysis. We believe that such additional data allows EvoPlan\@\xspace to better capture the
migration paths performed by clients. Moreover, we can consider a larger
testing dataset to improve the coverage of the recommended items, i.e.,\@\xspace
provide upgrade plans for more TPLs.
\begin{acknowledgements}
The research described in this paper has been partially supported by the AIDOaRT Project, which has received funding from
the European Union's H2020-ECSEL-2020, Federal Ministry of Education, Science and Research, Grant Agreement n$^{\circ}$101007350.
\end{acknowledgements}
%
%
\bibliographystyle{spmpsci}
\subsection{Explanatory examples} \label{sec:exp}
This section discusses two real-world situations that developers must cope with during the TPL migration task, i.e.,\@\xspace code refactoring and handling vulnerable dependencies. In the first place, it is essential to consider
different TPL releases that conform to the semantic versioning format.\footnote{\url{https://semver.org/}} A standard version string follows the pattern \emph{X.Y}, in which \emph{X} represents the \emph{major} release and \emph{Y} represents the \emph{minor} one. Sometimes, releases can include a \emph{patch} version \emph{Z}, resulting in the final string \emph{X.Y.Z}.
We present an explanatory example related to
\emph{log4j},\footnote{\url{https://logging.apache.org/log4j/}} a widely used Java logging library. When it is upgraded from version \emph{1.2} to version \emph{1.3}, as shown in Listing \ref{lst:v12} and Listing \ref{lst:v13}, respectively, many internal changes occur that need to be carefully documented.\footnote{\url{http://articles.qos.ch/preparingFor13.html}} \revised{As can be noticed, the main change affects the \texttt{Category} class, which is replaced by the \texttt{Logger} class. Furthermore, all the former methods used by the deprecated class cause several failures at the source code level. For instance, the \texttt{setPriority} method is replaced by \texttt{setLevel} in the new version.}
\begin{lstlisting}[caption={log4j version 1.2.},
label=lst:v12,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Category root = Category.getRoot();
root.debug("hello");
Category cat = Category.getInstance(Some.class);
cat.debug("hello");
cat.setPriority(Priority.INFO);
\end{lstlisting}
\begin{lstlisting}[caption={log4j version 1.3.},
label=lst:v13,captionpos=b,style=JavaStyle,numbers=left,xleftmargin=4em,frame=single,framexleftmargin=4.2em]
Logger root = Logger.getRootLogger();
root.debug("hello");
Logger logger = Logger.getLogger(Some.class);
logger.debug("hello");
logger.setLevel(Level.INFO);
\end{lstlisting}
Though this is a very limited use case, it suggests that the code refactoring that takes place during a migration is
an error-prone activity even for a single minor upgrade, i.e.,\@\xspace from
version \emph{1.2} to version \emph{1.3}. Additionally, the complexity
dramatically grows in the case of a major release, as it typically requires extra effort compared to a minor one, which is not welcomed by the majority of developers
\cite{kula_developers_2018}. Considering such a context, reducing the
time needed for a single migration step, even a minor one, is expected to
improve the overall development process.
Concerning vulnerable dependencies, GitHub\@\xspace Dependabot\footnote{\url{https://dependabot.com/blog/github-security-alerts/}} provides weekly security alert digests that highlight possible security issues for outdated dependencies of a repository,
which can be of different languages, e.g.,\@\xspace Python, Java, JavaScript.\footnote{\url{https://dependabot.com/\#languages}} An example of a Dependabot report is shown in Fig.~\ref{fig:digest}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{figs/GHdigest_new.png}
\caption{GitHub\@\xspace Dependabot alert.}
\label{fig:digest}
\end{figure}
As shown in Fig. \ref{fig:digest}, Dependabot suggests possible TPL upgrades to solve vulnerabilities in the given project. For instance, the \textbf{guava} dependency seems to be outdated, and thus the system automatically suggests jumping to the latest version, i.e.,\@\xspace \emph{24.1.1}.
Though this alert can raise awareness
of this evolution,
it does not offer any concrete recommendation on how to perform the actual migration steps. In some cases, the bot does not provide any recommended version to update the project, e.g.,\@\xspace for the \textbf{log4j} dependency. In this respect, we see an urgent need for recommending the most suitable plan to upgrade a library, as this can significantly reduce the migration effort.
\subsection{Existing techniques} \label{sec:related}
This section reviews some relevant work that copes with the migration problem.
\begin{table}[h]
\centering
\footnotesize
\caption{\revised{Main features of TPL migration systems.}}
\begin{tabular}{|l | c| c | c| c| c | c | c | }
\hline
\textbf{System} & \rotatebox[origin=l]{90}{\textbf{Inferring migration}} & \rotatebox[origin=l]{90}{\textbf{Incremental plan}} & \rotatebox[origin=l]{90}{\textbf{Popularity}} & \rotatebox[origin=l]{90}{\textbf{GitHub\@\xspace issues}} & \rotatebox[origin=l]{90}{\textbf{Upgrading}} & \rotatebox[origin=l]{90}{\textbf{Replacement}} & \rotatebox[origin=l]{90}{\textbf{Applying Migration}} \\ \hline
Meditor~\cite{xu_meditor_2019} & \ding{51} & \ding{55} &\ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{51} \\ \hline
Apiwave~\cite{hora_apiwave_2015} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
Graph Mining~\cite{teyton_mining_2012} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{55} \\ \hline
RAPIM~\cite{alrubaye2019learning} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
Diff-CatchUp~\cite{xing_api-evolution_2007} & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{51} & \ding{51} & \ding{55} \\ \hline
M$^{3}$~\cite{collie_m3_2020} & \ding{51} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ \hline
\rowcolor{mygray}
\textbf{EvoPlan} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{55} & \ding{55} \color{black} \\ \hline
\end{tabular}
\label{tab:features}
\end{table}
Meditor \cite{xu_meditor_2019} is a tool aiming to identify migration-related (MR) changes within commits and map them at the source code level with a syntactic program differencing algorithm. To this end, the tool mines GitHub\@\xspace projects searching for MR updates in the \emph{pom.xml} file and checks their consistency with the WALA framework.\footnote{\url{https://github.com/wala/WALA}}
Hora and Valente propose Apiwave \cite{hora_apiwave_2015}, a system that extracts information about library popularity directly from the mined history of GitHub\@\xspace projects. Afterwards, it can measure the popularity of a certain TPL by considering the removal or addition of import statements.
Teyton \emph{et~al.}\@\xspace~\cite{teyton_mining_2012} propose an approach that discovers migrations among different TPLs and stores them in a graph format. A token-based filter is applied on \emph{pom.xml} files to extract the name and the version of the library from the \texttt{artifactId} tag. The approach eventually exhibits four different visual patterns that consider both ingoing and outgoing edges to highlight the most popular target.
RAPIM~\cite{alrubaye2019learning} employs a tailored machine learning model to identify and recommend API mappings learned from previous migration changes. Given two TPLs as input, RAPIM extracts valuable method descriptions from their documentation using text engineering techniques and encodes them in feature vectors to enable the underpinning machine learning model.
Diff-CatchUp \cite{xing_api-evolution_2007} has been conceived with the aim of proposing usage examples to support the migration of reusable software components. The tool makes use of the UMLDiff algorithm \cite{10.1145/1101908.1101919} to identify all relevant source code refactorings. Then, a heuristic approach is adopted to investigate the design-model of the evolved component and retrieve a customizable ranked list of suggestions.
Collie \emph{et~al.}\@\xspace recently proposed the M$^{3}$ tool \cite{collie_m3_2020} to support a semantic-based migration of C libraries. To this end, the system synthesizes a behavioral model of the input project by relying on the LLVM intermediate representation.\footnote{\url{https://llvm.org/}} Given a pair of source and target TLPs, the tool generates abstract patterns that are used to perform the actual migration.
Table \ref{tab:features} summarizes the features of the above-mentioned approaches by considering the different tasks involved in migration processes, starting from the discovery of possible migration changes up to embedding them directly into the source code, as explained below.
\begin{itemize}
\item \emph{Inferring migration}: To extract migration-related information, tools can analyze existing projects' artifacts, i.e.,\@\xspace commits, \emph{pom.xml} file, or tree diff. This is the first step of the whole migration process.
\item \emph{Incremental plan}: The majority of the existing approaches perform the migration just by considering the latest version of a TPL. This could increase the overall effort needed to perform the actual migration, i.e.,\@\xspace developers suffer from accumulated technical debt. In contrast, considering a sequence of intermediate migration steps before going to the final one can reduce such refactoring.
\item \emph{Popularity}: This is the number of client projects that make use of a certain library. In other words, if a TLP appears in the \emph{pom.xml} file or in the import statement, its popularity is increased.
\item \emph{GitHub\@\xspace issues}: As an additional criterion, the migration process can include data from \emph{GitHub\@\xspace issues} that may include relevant information about TLPs migration. Thus, we consider them as a possible source of migration-related knowledge.
\item \emph{Upgrading}: This feature means that the tool supports the upgrading of a TLP from an older version to a newer one. For instance, the migration described in Section \ref{sec:exp} falls under this class of migration.
\item \emph{Replacement}: Differently from upgrading, replacement involves the migration from a library to a different one that exposes the same functionalities.
\item \emph{Applying migration}: It represents the final step of the migration process in which the inferred migration changes are actually integrated into the project.
\end{itemize}
\subsection{Dimensions to be further explored}
Even though several approaches successfully cope with TPL migration, there are still some development dimensions that need to be further explored. However, providing an exhaustive analysis is out of the scope of this section. Thus, we limit ourselves to identifying some of them by carefully investigating the approaches summarized in Table \ref{tab:features}. The elicited dimensions are the following:
\begin{itemize}
\item \emph{D1: Upgrading the same library.} Almost all of the presented
approaches, apart from Meditor, focus on replacing libraries, and very few
support the upgrade of already included ones (see columns
\textit{Upgrading} and \textit{Replacement} in Table \ref{tab:features}).
\item \emph{D2: Varying the migration data sources.} During the inferring
migration phase, strategies to obtain migration-related data play a
fundamental role in the overall process. A crucial challenge is
investigating new sources of information besides the well-known ones,
e.g.,\@\xspace bug reports, Stack Overflow posts, and GitHub\@\xspace issues.
\item \emph{D3: Aggregating different concepts.} The entire migration
process is a complex task and involves notions belonging to different
domains. For instance, GitHub\@\xspace issues could play a relevant role in the
migration process. A recent work \cite{misra_is_2020} shows that
the more comments are included in the source code, the less time is
needed to solve an issue. Neil \emph{et~al.}\@\xspace \cite{neil_mining_2018} extracted
security vulnerabilities from issues and bug reports that could affect library dependencies.
\item \emph{D4: Identification of the upgrade plan.} Existing approaches
identify and apply migrations by taking as input the explicit specification
of the target version of the library that has to be upgraded. Providing developers with insights about candidate upgrade plans that might reduce
the migration efforts can represent valuable support to the overall upgrade
process.
\end{itemize}
In the present work, we aim to explore and propose
solutions for dimensions \textsc{D1} and \textsc{D4} by providing multiple
possible upgrade plans given the request of upgrading a given library to
a specific target version. Furthermore, we also perform an initial
investigation on the \textsc{D2} and \textsc{D3} dimensions, relying on GitHub\@\xspace
issues. As can be seen in Table~\ref{tab:features}, EvoPlan\@\xspace covers five out of
the seven considered features. In particular, our approach is able to
\emph{infer migrations} and make use of an \emph{incremental plan} by considering the
\emph{popularity} and \emph{issues}, so as to eventually recommend an
\emph{upgrade plan}. Compared to the existing tools, EvoPlan\@\xspace tackles most of the
issues previously presented.
\subsection{Crawler} \label{sec:tracker}
Migration-related information is mined from GitHub\@\xspace using the \emph{Crawler}
component. By means of the \texttt{JGit} library,\footnote{\url{https://www.eclipse.org/jgit/}} \emph{Crawler} downloads a set \textit{P} of GitHub\@\xspace projects that have at least one
\emph{pom.xml} file, which is
a project file containing the list of all adopted TPLs. In case there are
multiple \emph{pom.xml} files, they will be analyzed separately to avoid
information loss. Then, the \emph{Crawler} component analyzes all the
repository's commits that affect the \emph{pom.xml} to find added and removed
TPLs. Additionally, raw issue data is obtained and stored in separate files. In
particular, we count the number of opened and closed issues for each project
\textit{p} $\in$ \textit{P} in a specific time interval \textit{D}.
The starting point of this interval is the moment when a certain version
\textit{v} of a given library \textit{l} is added as a dependency in the
\emph{pom.xml} file of client \textit{C}. A previous study
\cite{10.1007/978-3-319-26844-622} demonstrates that
the monthly rate of open issues tends to decrease over time.
Thus, the endpoint of \textit{D} is obtained by considering the first two months after
the library has been introduced, which allows relevant data concerning the considered library \textit{l} to be extracted without loss of information. \revised{In such a way, the GitHub\@\xspace issues that have been opened and closed for each TPL added in \textit{p} are obtained for further processing phases.}
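The windowed issue counting described above can be sketched in a few lines of Python; the issue record structure (dictionaries with \texttt{created}/\texttt{closed} timestamps) is an assumption for illustration, not EvoPlan\@\xspace's actual data format.

```python
from datetime import datetime, timedelta

# Count issues opened and closed within the two-month window D that starts
# when a library version is added to the client's pom.xml.
WINDOW = timedelta(days=61)  # roughly two months

def count_issues(issues, added_on):
    end = added_on + WINDOW
    opened = sum(1 for i in issues if added_on <= i["created"] <= end)
    closed = sum(1 for i in issues
                 if i["closed"] is not None and added_on <= i["closed"] <= end)
    return opened, closed

# hypothetical issue records for one client project
added = datetime(2020, 1, 1)
issues = [
    {"created": datetime(2020, 1, 10), "closed": datetime(2020, 1, 20)},
    {"created": datetime(2020, 2, 5), "closed": None},
    {"created": datetime(2019, 12, 1), "closed": datetime(2020, 1, 15)},
]
opened, closed = count_issues(issues, added)
print(opened, closed)  # 2 opened and 2 closed fall inside the window
```

Note that an issue opened before the window but closed inside it still contributes to the closed count, which is consistent with the delta defined for RQ$_2$.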
\subsection{Data Extractor} \label{sec:dataEx}
In this phase, \revised{data is gathered by means of \texttt{JGit} and analyzed using different processing steps, as follows.}
The first step makes use of the Git \emph{log} command to retrieve the list of every modification
saved on GitHub\@\xspace for a specific file. Furthermore, the command provides the
\emph{SHA} code of every commit, which allows us to identify it.
For instance, Fig. \ref{fig:diff-log}.a depicts a
commit related to a given \emph{pom.xml} file taken as input. The identifier of
the commit is used to retrieve the list of the corresponding changes,
as shown in Fig. \ref{fig:diff-log}.b. In particular, inside a commit we can
find a large amount of useful information, such as what was added or removed and
when. The \emph{Data Extractor} component focuses on the lines that contain
evidence of library changes. In a commit, the added lines are marked with the
sign '+', whereas the removed ones are marked with '-' (see the green and red lines, respectively, shown in Fig.~\ref{fig:diff-log}.b).
In this way, the evolution of a library is obtained by analyzing the sequence
of added/removed lines. With this information, \revised{EvoPlan\@\xspace is also able to count} how many
clients have performed a specific migration. The information retrieved by the
\emph{Data Extractor} component is stored in a target CSV file, which is taken as input by the subsequent entity of the process as discussed below.
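The extraction of added/removed library versions from a \emph{pom.xml} diff can be sketched as follows; the regular expression and the diff snippet are illustrative assumptions, not EvoPlan\@\xspace's actual implementation.

```python
import re

# A unified diff marks additions with '+' and removals with '-'; scanning
# those lines for <version> tags recovers the before/after library versions.
VERSION = re.compile(r"<version>([^<]+)</version>")

def version_changes(diff_lines):
    removed, added = [], []
    for line in diff_lines:
        m = VERSION.search(line)
        if not m:
            continue
        if line.startswith("+"):
            added.append(m.group(1))
        elif line.startswith("-"):
            removed.append(m.group(1))
    return removed, added

# hypothetical diff hunk of a pom.xml commit
diff = [
    "-    <version>1.5.8</version>",
    "+    <version>1.6.1</version>",
]
print(version_changes(diff))
```

Pairing each removed version with the added one in the same dependency block yields the migration records stored in the CSV file.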
\begin{figure}
\small
\centering
\includegraphics[width=\linewidth]{figs/log.png} \\
a) Example of \emph{log} \\
\includegraphics[width=\linewidth]{figs/diff.png}\\
b) Example of \emph{diff}\\
\caption{Example of artifacts used by the \emph{Data Extractor} component.}
\label{fig:diff-log}
\end{figure}
\subsection{Graph Builder}
This component creates nodes and
relationships by considering the date and library changes identified in the previous
phase. To this end, EvoPlan\@\xspace exploits the Cypher query language\footnote{\url{https://neo4j.com/developer/cypher-query-language/}} to store
data into a \textsc{Neo4j} graph. For instance, suppose we extract from the CSV files two library-version pairs \emph{(l,v1)} and \emph{(l,v2)} with signs '-' and '+', respectively. In this case, the component creates an oriented edge from
\emph{(l,v1)} to \emph{(l,v2)}. Once the first edge is created, any further pair containing the same library upgrade increments the weight of the corresponding graph edge.
The date value contained in the CSV record is
used to avoid duplicated edges or loops. Furthermore, each edge is weighted
according to the number of clients, as described in the \textit{Data Extractor}
phase. This means that if we find the same pair \emph{(l,v1)} to \emph{(l,v2)} \emph{w} times (i.e.,\@\xspace \emph{w} projects have already migrated the library \emph{l} from \emph{v1} to \emph{v2}), the edge will have a weight of \emph{w}.
Thus, the final outcome of
this component is a migration graph that considers the community's interest as
the only weight. For instance, Fig.~\ref{fig:graph}
represents the extracted migration graph for the \emph{slf4j-api} library. The
graph contains all the mined versions of the library and, for each pair, the
corresponding number of clients that have performed the considered upgrade is
shown. For instance, in Fig.~\ref{fig:graph} the edge from version
\emph{1.6.1} to \emph{1.6.4} is selected, and 14 clients (see the details at
the bottom) have performed such a migration.
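The edge-weighting logic can be sketched as follows; a plain in-memory dictionary stands in for the \textsc{Neo4j} graph that EvoPlan\@\xspace actually populates via Cypher, and the migration counts below are illustrative.

```python
from collections import defaultdict

# Each observed upgrade (library, v1) -> (library, v2) increments the weight
# of the corresponding edge: a weight w means w clients performed that step.
graph = defaultdict(int)

def add_migration(lib, v_from, v_to):
    graph[(lib, v_from, v_to)] += 1

# illustrative input: three clients migrate slf4j 1.6.1 -> 1.6.4,
# one client migrates 1.6.4 -> 1.7.5
for _ in range(3):
    add_migration("slf4j", "1.6.1", "1.6.4")
add_migration("slf4j", "1.6.4", "1.7.5")

print(graph[("slf4j", "1.6.1", "1.6.4")])  # 3
```

In the actual tool, the same upsert-and-increment behavior is obtained with a Cypher \texttt{MERGE} on the edge followed by an update of its weight property.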
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figs/slf4j_graph.pdf}
\caption{\revised{Migration graph of the \emph{slf4j} library.}}
\label{fig:graph}
\end{figure}
\subsection{Plan Calculator} \label{sec:plan}
This component plays a key role in the approach. Given a library to be
upgraded, the starting version, and the target one, \emph{Plan Calculator}
retrieves
the k shortest paths
by using the well-known \emph{Yen's K-shortest paths
algorithm} \cite{Yen2007FindingTK}, which has been embedded into the
\textsc{Neo4j} library.
As a default heuristic implemented in EvoPlan\@\xspace, the component retrieves all the
possible paths that maximize the popularity of the steps that can be performed to reach the wanted
upgrade. Thus, the \textit{Plan Calculator} component employs the aforementioned weights,
which represent the popularity, as the criterion for the shortest path algorithm.
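The retrieval of candidate plans can be sketched as follows. EvoPlan\@\xspace relies on Yen's algorithm inside \textsc{Neo4j}; the brute-force enumeration of simple paths below is a stand-in that is adequate only for small graphs. Taking each edge cost as the inverse of its client count is an assumption made here so that more popular steps yield lower path costs and popular plans rank first.

```python
# Enumerate loop-free upgrade paths and rank them by total cost, where
# cost(edge) = 1 / clients, i.e. popular steps are "shorter".

def k_best_plans(edges, source, target, k):
    # edges: {(v_from, v_to): number_of_clients}
    adj = {}
    for (a, b), clients in edges.items():
        adj.setdefault(a, []).append((b, 1.0 / clients))
    paths = []

    def dfs(node, path, cost):
        if node == target:
            paths.append((cost, path[:]))
            return
        for nxt, w in adj.get(node, []):
            if nxt not in path:  # keep paths simple (loop-free)
                path.append(nxt)
                dfs(nxt, path, cost + w)
                path.pop()

    dfs(source, [source], 0.0)
    paths.sort(key=lambda p: p[0])
    return paths[:k]

# illustrative slf4j-like migration graph (client counts are invented)
edges = {("1.5.8", "1.6.1"): 10, ("1.6.1", "1.7.5"): 20,
         ("1.5.8", "1.7.5"): 2, ("1.7.5", "1.7.25"): 30}
for cost, plan in k_best_plans(edges, "1.5.8", "1.7.25", 2):
    print(round(cost, 3), plan)
```

With these numbers, the path through \emph{1.6.1} wins over the direct jump, mirroring the observation below that intermediate, popular steps are often preferred to updating straight to the newest version.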
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figs/kpaths.png}
\caption{List of \emph{k}-shortest paths for \emph{slf4j}.}
\label{fig:path}
\end{figure*}
By considering the graph shown in Fig. \ref{fig:graph}, there are several possibilities to upgrade \textit{slf4j} from version \emph{1.5.8} to
\emph{1.7.25}. By taking into account the available weights, EvoPlan\@\xspace can recommend the ranked list depicted in Fig.~\ref{fig:path}. The first path in the list suggests following the steps \emph{1.6.1}, \emph{1.6.4}, and
\emph{1.7.5} to reach the final version considered in the example, i.e.,\@\xspace
\emph{1.7.25}.\footnote{\revised{It is worth noting that the popularity values are inversely proportional to the popularity of the corresponding upgrade plans: in the example shown in Fig. \ref{fig:path}, the most popular upgrade is the one with popularity value 0.898.}} Such a plan is the one performed most often by other projects that rely on \emph{slf4j} and that have already operated the wanted library migration. Thus, such a path is more frequent than directly updating the library to the newest version.
\subsection{Issues Miner} \label{sec:IssuesCalc}
Issues play an important role in project development. For instance, by solving issues, developers contribute to the identification of bugs as well as to the enhancement of software quality through feature requests~\cite{liao_exploring_2018}. In the scope of this work, we exploit issues as a criterion for ordering upgrade plans. In particular, we rely on the availability of issues that have been opened and closed due to upgrades of given third-party libraries.
\begin{table}[t!]
\centering
\caption{Issues information extracted for \emph{commons-io}.}
\begin{tabular}{|l | c | c | c | }
\hline
\textbf{Version} & \textbf{Open Issues}& \textbf{Closed Issues} & \textbf{Delta} \\ \hline
1.0&14&33&19 \\ \hline
1.3.2&150&420&270 \\ \hline
1.4&87&408&321 \\ \hline
2.0&5&10&5 \\ \hline
2.0.1&133&457&324 \\ \hline
2.1&129&516&387 \\ \hline
2.2&67&999&932 \\ \hline
2.3&5&20&15 \\ \hline
2.4&939&3,283&2,344 \\ \hline
2.5&64&918&854 \\ \hline
2.6&64&548&484 \\ \hline
\end{tabular}
\vspace{-.2cm}
\label{tab:issues}
\end{table}
The \emph{Issue Miner} component is built to aggregate and filter raw issues
data gathered in the early stage of the process shown in Fig. \ref{fig:approach}. However, due to the internal construction of \textsc{Neo4j}, we cannot directly embed this data as a weight
on the migration graph's edges. Thus, as shown in Section \ref{sec:tracker}, we
collect the number of open and closed issues considering a specific time
window, i.e.,\@\xspace two months starting from the introduction of a certain TLP in the
project. Then, this component filters and aggregates the issues data related by
using Pandas, a widely-used Python library for data mining
\cite{pandas_pandas_2020}. For instance, Table \ref{tab:issues} shows the mined
issues related to the \emph{commons-io} library. In particular, for each
version of the library, the table reports the number of issues opened and closed by
all the analysed clients after they migrated to that version.
EvoPlan\@\xspace can employ the extracted data to enable a ranking function based on GitHub\@\xspace issues, as discussed in the next section.
\emph{Issues Miner} works as a stand-alone component, so it does not affect the time required by the overall process. In this way, we obtain an additional source of information that can be used later in the process as a supplementary criterion for choosing the ultimate upgrade plan from the ranked list produced by the \textit{Plan Calculator} component.
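The aggregation performed by the \emph{Issues Miner} can be sketched with Pandas as follows; the column names and the toy data are illustrative assumptions, not the component's actual schema.

```python
import pandas as pd

def aggregate_issue_deltas(raw: pd.DataFrame) -> pd.DataFrame:
    """Aggregate per-client issue counts into per-version totals and
    compute the delta (closed - open), as in the table above."""
    agg = raw.groupby("version", as_index=False)[["open", "closed"]].sum()
    agg["delta"] = agg["closed"] - agg["open"]
    return agg

# Toy rows: one per (client, version) pair, counting the issues opened
# and closed in the two-month window after the client's migration.
raw = pd.DataFrame({
    "version": ["2.3", "2.3", "2.0"],
    "open":    [2, 3, 5],
    "closed":  [8, 12, 10],
})
print(aggregate_issue_deltas(raw))
```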
\subsection{Plan Ranker} \label{sec:PlanRank}
In the final phase, the k-paths produced by the \textit{Plan Calculator} are
rearranged according to the information about issues. For every
path, we compute the average delta of opened/closed issues. A large value means that a certain
path potentially requires less integration effort, since there are more closed issues than open ones \cite{liao_exploring_2018}, i.e.,\@\xspace issues
have been tackled and solved rather than being left untouched.
Thus, the aim is to order the plans produced by \textit{Plan Calculator} according to the retrieved issues: among the most popular plans we will propose those with the highest issue values.
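The re-ranking step can be sketched as follows. The shortlist tolerance is our own illustrative assumption, since the text does not specify how "the most popular plans" are delimited; the candidate plans reproduce the ranking table below.

```python
def rank_plans(plans, pop_tolerance=0.15):
    """Re-rank candidate upgrade plans.

    Each plan is a (path, popularity, issues) tuple; a lower popularity
    value is better, a higher issues value is better. Among the plans
    whose popularity lies within `pop_tolerance` of the best one, the
    plan with the highest issues value is ranked first."""
    best_pop = min(pop for _, pop, _ in plans)
    shortlist = [p for p in plans if p[1] <= best_pop + pop_tolerance]
    rest = [p for p in plans if p[1] > best_pop + pop_tolerance]
    shortlist.sort(key=lambda p: -p[2])
    rest.sort(key=lambda p: (p[1], -p[2]))
    return shortlist + rest

# Candidate plans as in the ranking example
plans = [
    ("1.5.8, 1.6.1, 1.6.4, 1.6.6, 1.7.5, 1.7.25", 1.446, 57),
    ("1.5.8, 1.6.1, 1.6.4, 1.7.5, 1.7.25", 0.898, 58),
    ("1.5.8, 1.7.5, 1.7.25", 1.0, 58),
    ("1.5.8, 1.6.1, 1.7.5, 1.7.25", 1.0, 61),
    ("1.5.8, 1.6.1, 1.6.4, 1.7.2, 1.7.5, 1.7.25", 1.238, 58),
]
print(rank_plans(plans)[0][0])  # → 1.5.8, 1.6.1, 1.7.5, 1.7.25
```

With these values the plan with the highest issues delta among the near-best ones by popularity comes out on top, matching the highlighted row of the table.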
\begin{table}[h!]
\caption{An example of the ranking results.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | }
\hline
\textbf{Proposed Path} & \rotatebox[origin=c]{90}{\textbf{Pop. Value}} & \rotatebox[origin=c]{90}{\textbf{Issues Value}} \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.6.6, 1.7.5, 1.7.25 & 1.446 & 57 \\ \hline
\rowcolor{lightgray}
1.5.8, 1.6.1, 1.6.4, 1.7.5, 1.7.25 & 0.898 & 58 \\ \hline
1.5.8, 1.7.5, 1.7.25 & 1.0 & 58 \\ \hline
\rowcolor{Gold}
1.5.8, 1.6.1, 1.7.5, 1.7.25 & 1.0 & 61 \\ \hline
1.5.8, 1.6.1, 1.6.4, 1.7.2, 1.7.5, 1.7.25 & 1.238 & 58 \\ \hline
\end{tabular}
\label{tab:Ranking}
\end{table}
Table \ref{tab:Ranking} shows an example of the ranking process. There are two highlighted paths: the gray row corresponds to the best result according to plan popularity only, i.e.,\@\xspace it is the plan with the lowest popularity value. Meanwhile, the orange row is the one recommended according to the issues criterion (in this case, the higher the issue value, the better). The orange path should be selected because, among the most popular ones, it is the one characterized by the highest activity in terms of opened and closed issues. In this way, EvoPlan\@\xspace is
able to recommend an upgrade plan to migrate from the initial version to the
desired one by learning from the experience of other projects which have
already performed similar migrations.
\subsection{Preliminary results} \label{sec:results}
We report and analyze the obtained results by answering the research questions introduced in the previous section.
\subsection{\rqfirst}
Table \ref{tab:metrics} reports the average results obtained from the cross-validation evaluation.
EvoPlan\@\xspace achieves the maximum precision for \emph{commons-io}, i.e.,\@\xspace 0.90 in all the rounds. The tool also obtains a high precision for \emph{junit}, i.e.,\@\xspace 0.88. Meanwhile, the lowest precision, i.e.,\@\xspace 0.58, is obtained for \emph{httpclient}. Concerning recall, EvoPlan\@\xspace obtains values of 0.94 and 0.96 for the \emph{junit} and \emph{commons-io} libraries, respectively. In contrast, the tool achieves its worst recall with \emph{httpclient}, i.e.,\@\xspace 0.64. Overall, considering the F-measure score, EvoPlan\@\xspace achieves its best and worst performance on \emph{commons-io} and \emph{httpclient}, respectively.
\vspace{.1cm}
\begin{table}[h!]
\centering
\caption{Precision, Recall, and F-Measure considering popularity.}
\begin{tabular}{|l | p{1.6cm} | p{1.2cm} | p{1.8cm} |}
\hline
\textbf{Library} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \\ \hline
\emph{junit} & 0.88 & 0.94 & 0.91 \\ \hline
\emph{httpclient} & 0.58 & 0.64 & 0.61 \\ \hline
\emph{slf4j-api} & 0.65 & 0.74 & 0.69 \\ \hline
\emph{log4j} & 0.88& 0.93 & 0.91 \\ \hline
\emph{commons-io} & \textbf{0.90} & \textbf{0.96} & \textbf{0.94} \\ \hline
\emph{guava} & 0.60 & 0.73 & 0.65 \\ \hline
\emph{commons-lang3} & 0.66 & 0.67 & 0.65 \\ \hline
\end{tabular}
\label{tab:metrics}
\end{table}
Altogether, we see that there is a substantial difference between the performance obtained by EvoPlan\@\xspace for different libraries. We suppose that this happens due to the availability of the training data. In particular, by carefully investigating each library used in the evaluation, we see that the libraries with the worst performance have, on average, only a few migrations that we can extract from the \emph{pom.xml} files (cf. Table \ref{tab:libs}). For instance, there are 162 and 209 migrations associated with \emph{commons-lang3} and \emph{slf4j-api}, respectively, and EvoPlan\@\xspace achieves low performance on these libraries. Meanwhile, there are
2,972 migrations for \emph{junit}, and EvoPlan\@\xspace obtains high precision, recall, and F$_1$ for this library. This suggests that scarce
data can negatively affect the final recommendations.
Another factor that can influence the conducted evaluation is the number of versions involved in an upgrade for each library, i.e.,\@\xspace the availability of fewer versions dramatically reduces the migration-related information. This hypothesis is confirmed by the values observed for \emph{log4j} and \emph{junit}, which yield better results with 39 and 40 analyzed versions, respectively. However, there is an exception with \emph{guava}, i.e.,\@\xspace EvoPlan\@\xspace yields a mediocre result for this library (F$_{1}$=0.65), even though we considered 627 migration paths and 49 different versions. By examining the library, we realized that it has many versions employed in the Android domain as well as abandoned versions. Thus, we attribute the reduction in performance to the lack of suitable training data.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_1$.} EvoPlan\@\xspace is capable of predicting the correct upgrade plan given a real-world migration dataset. Although for some libraries we witness a reduction in the overall performances, the main reason can be found in the lack of migration paths in the original dataset.}
\end{tcolorbox}
\subsection{\textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?}}
To answer this question, we measure the correlation between the observed data, i.e.,\@\xspace the number of clients that perform a certain migration step and the issues delta, considering the time interval described in Section \ref{sec:tracker}.
The number of clients performing a migration is referred to as \textit{popularity}, as described in Section \ref{sec:plan}. Meanwhile, as its name suggests, the \textit{delta} is the difference between the number
of closed issues and the number of open ones. It assumes a positive value when the number of closed issues is greater than that of open ones. In contrast, negative values are observed when open issues exceed closed ones. In other words, the delta characterizes migration steps in terms of closed issues.
\begin{table}[b!]
\centering
\vspace{-.4cm}
\caption{Correlation coefficients with a $p$-$value < 2.2\mathrm{e}{-16}$.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Metric} & \textbf{Value} \\ \hline
Kendall's ($\tau$) & 0.458 \\ \hline
Pearson (r) & 0.707 \\ \hline
Spearman ($\rho$) & 0.616 \\ \hline
\end{tabular}
\label{tab:corr}
\end{table}
The results of the three indexes are shown in Table \ref{tab:corr}. As we can see, all the metrics show a
positive correlation between the number of clients that perform a certain migration and the corresponding issues delta. In particular, Kendall's $\tau$ is equal to 0.458, while Spearman's $\rho$ reaches the value of 0.616. The highest correlation is obtained with Pearson's coefficient, i.e.,\@\xspace r = 0.707.
The strong correlation suggests that, given a library, the more clients perform a migration on its versions, the more issues are solved. As has been shown in a recent work~\cite{liao_exploring_2018},
the act of solving issues allows developers to identify bugs and improve code, as well as enhance software quality. Summing up, having a large number of migrated clients can be interpreted as a sign of maturity,
i.e.,\@\xspace the evolution among different versions attracts developers'
attention.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_2$.}
There is a significant correlation between the upgrade plan popularity and
the number of closed issues. This implies that
the plans to be given the highest priority should be those with the majority
of issues solved during the migration.}
\end{tcolorbox}
\subsection{\textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}}
We measured the average time required for running the experiments on a mainstream laptop with the following configuration: an i5-8250U 1.60GHz processor, 16GB of RAM, and Ubuntu 18.04 as the operating system. Table~\ref{tab:time} summarizes the time for executing the corresponding phases.
\begin{table}[h!]
\centering
\vspace{-.3cm}
\caption{Execution time.}
\begin{tabular}{|p{3cm}|p{3cm}|}
\hline
\textbf{Phase} & \textbf{Time (seconds)} \\ \hline
Graph building & 15,120 \\ \hline
Querying & 0.11 \\ \hline
Testing & 145.44 \\ \hline
\end{tabular}
\label{tab:time}
\vspace{-.2cm}
\end{table}
The most time-consuming phase is the creation of the graph, with 15,120 seconds, corresponding to 252 minutes. Meanwhile, the querying phase takes just 0.11 seconds to finish; the testing phase is a bit longer: 145.44 seconds. It is worth noting that the testing consists of the sub-operations that are performed in actual use, i.e.,\@\xspace opening CSV files, extracting the actual plan, and calculating the shortest path.
This means that we can get an upgrade plan in less than a second, which is acceptable considering the computational capability of the used laptop.
This suggests that EvoPlan\@\xspace can be deployed in practice to suggest upgrade plans.
\vspace{.2cm}
\begin{tcolorbox}[boxrule=0.86pt,left=0.3em, right=0.3em,top=0.1em, bottom=0.05em]
\small{\textbf{Answer to RQ$_3$.} The creation of the migration graph is computationally expensive. However, it can be done offline, one time for the whole cycle. EvoPlan\@\xspace is able to deliver a single upgrade plan in a reasonable time window, making it usable in the field.}
\end{tcolorbox}
\subsection{Research questions} \label{sec:ResearchQuestions}
To study the performance of EvoPlan\@\xspace, we consider the following research questions:
\begin{itemize}
\item \rqfirst~To answer this question, we conduct experiments following
the ten-fold cross-validation methodology~\cite{10.5555/1643031.1643047} on
a dataset considering real migration data collected from GitHub\@\xspace. Moreover, we
compute \emph{Precision}, \emph{Recall}, and \emph{F-measure} by comparing the recommendation outcomes with real migrations as stored in GitHub\@\xspace;
\item \textbf{RQ$_2$}: \emph{Is there any correlation between the \GH issues and the popularity of a certain migration path?} \newline We analyze how the number of opened and closed
issues could affect the migration process. To this end, we compute three
different statistical coefficients to detect if there exists any
correlation among the available data.
\item \textbf{RQ$_3$}: \emph{Is \EP able to provide consistent recommendations in reasonable time?}~Besides the recommended migration steps, we are interested in measuring the time of the overall process, including the graph building phase. This aims at ascertaining the feasibility of our approach in practice.
\end{itemize}
\subsection{Overall process} \label{sec:Process}
As depicted in Fig.~\ref{fig:eval}, we perform experiments using the ten-fold
cross-validation methodology on a well-founded dataset coming from an existing
work~\cite{kula_developers_2018}.
Given the whole list of $\approx$11,000 projects, we download the entire
dataset using the \emph{Crawler} component. Then, in each round of the process,
the dataset is split into testing and
ground-truth projects, i.e.,\@\xspace 10\% and 90\% of the entire set, respectively. This means that in each round we generate a new migration graph by using the current 90\% portion. Given a single testing project, the \emph{Analyzing commits} phase is conducted to capture the
actual upgrade path followed by the repository, as stated in Section \ref{sec:tracker}.
To build the ground-truth graph, i.e.,\@\xspace the real migrations in GitHub\@\xspace, we consider the projects not included in the testing set and calculate
every possible upgrade plan for each TPL.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figs/Eval.pdf}
\caption{The evaluation process.}
\label{fig:eval}
\end{figure}
To ensure a reliable evaluation, we select the starting and ending versions of a certain TPL from the actual plan of a testing project. The pair is used to feed the \emph{Plan Calculator} component, which in turn retrieves the proposed plan.
In this respect, by comparing the two paths we are able to compute the metrics
that assess the overall performance, namely precision, recall, and F-measure.
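The per-round split described above can be sketched as follows; the shuffling seed and the helper's name are our own illustrative choices, not part of the evaluation pipeline.

```python
import random

def ten_fold_rounds(projects, seed=0):
    """Yield (testing, ground_truth) pairs: in each of the ten rounds,
    10% of the projects are held out for testing and the remaining 90%
    are used to build the migration graph."""
    shuffled = list(projects)
    random.Random(seed).shuffle(shuffled)
    fold = len(shuffled) // 10
    for i in range(10):
        testing = shuffled[i * fold:(i + 1) * fold]
        held_out = set(testing)
        ground_truth = [p for p in shuffled if p not in held_out]
        yield testing, ground_truth
```

For example, with 100 projects each round uses 10 for testing and 90 to build the graph, and the ten testing folds jointly cover the whole set.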
\subsection{Data collection} \label{sec:dataset}
We make use of an existing dataset which has been curated by a recent study available on GitHub\@\xspace.\footnote{\url{https://bit.ly/2Opd1GH}}
The rationale behind this selection is the quality of the repositories which were
collected by applying different filters, i.e.,\@\xspace removing duplicates, including
projects with at least one \emph{pom.xml} file, and crawling only well-main\-tained
and mature projects.
Table \ref{tab:dataset} summarizes the number of projects and \emph{pom.xml}
files. The dataset consists of 10,952 GitHub\@\xspace repositories; nevertheless, we were
able to download only 9,517 of them, as some have been deleted or moved.
Starting from these projects, we got a total number of 27,129 \emph{pom.xml}
files. Among them, we selected only those that did not induce the creation of
empty elements by the \emph{Data Extractor} component while analyzing
\textit{logs} and \textit{diffs} as shown in Fig. \ref{fig:diff-log}. The
filtering process resulted in 13,204 \emph{pom.xml} files. The training set is
used to create a migration graph to avoid any possible bias. For each round, we
tested 420 projects, and 3,821 projects are used to build the graph.
\begin{table}[h!]
\centering
\caption{Statistics of the dataset.}
\begin{tabular}{|l | p{1.3cm}|}
\hline
Total number of projects & 10,952 \\ \hline
Number of downloaded projects& 9,517 \\ \hline
Total number of \emph{pom.xml} files & 27,129 \\ \hline
Number of screened \emph{pom.xml} files & 13,204 \\ \hline
\end{tabular}
\label{tab:dataset}
\end{table}
Table \ref{tab:libs} summarizes the set of libraries in the dataset, obtained by employing the \emph{Crawler} module (cf. Section \ref{sec:tracker}). There are seven popular libraries,\footnote{\url{https://mvnrepository.com/popular}} i.e.,\@\xspace \emph{junit}, \emph{httpclient}, \emph{slf4j}, \emph{log4j}, \emph{commons-io}, \emph{guava}, and \emph{commons-lang3}.
Among others, \emph{junit}
has the largest number of migrations, i.e.,\@\xspace 2,972. Concerning the number of versions, \emph{slf4j} has 71 different versions, being the densest library. Meanwhile, \emph{commons-lang3} is associated with the smallest number of migrations, i.e.,\@\xspace 162, and \emph{commons-io} is the sparsest library with only 16 versions. The last column shows the number of versions that we could exploit to obtain the issues. The difference indicates that issue data was not available for all versions.
\begin{table}[h]
\centering
\caption{Number of migrations and versions.}
\begin{tabular}{|l | p{0.80cm} | p{0.80cm} | p{0.80cm} |}
\hline
\textbf{Library} & \rotatebox[origin=l]{90}{\textbf{\# migrations}} & \rotatebox[origin=l]{90}{\textbf{\# versions}} & \rotatebox[origin=l]{90}{\textbf{\# issue vers.}} \\ \hline
\emph{junit} & 2,972 & 30 & 19 \\ \hline
\emph{httpclient} & 218 & 53 & 35 \\ \hline
\emph{slf4j} & 209 & 71 & 26 \\ \hline
\emph{log4j} & 229 & 42 & 19 \\ \hline
\emph{commons-io} & 186 & 16 & 11\\ \hline
\emph{guava} & 627 & 70 & 34 \\ \hline
\emph{commons-lang3} & 162 & 16 & 13\\ \hline
\end{tabular}
\vspace{-.4cm}
\label{tab:libs}
\end{table}
\subsection{Metrics} \label{sec:metrics}
Given a migration path retrieved by EvoPlan\@\xspace, we compare it with the real migration path extracted from a testing project. To this end, we employ
\emph{Precision}, \emph{Recall}, and \emph{F-measure} (or F$_1$-score), widely used in the Information Retrieval domain to assess the prediction performance of a system.
In the first place, we rely on the following definitions:
\begin{itemize}
\item A \textit{true positive} corresponds to the case when the recommended path matches with the actual path extracted from the testing projects; \emph{TP} is the total number of true positives;
\item A \textit{false positive} means that the recommended upgrade plan is not present in the ground-truth paths; \emph{FP} is the total number of false positives;
\item A \textit{false negative} corresponds to migration steps that should be present in the suggested plan but are not; \emph{FN} is the total number of false negatives.
\end{itemize}
Considering such definitions, the aforementioned metrics are computed as follows:
\begin{equation} \label{eqn:Precision}
P = \frac{ TP }{TP+ FP}
\end{equation}
\begin{equation} \label{eqn:Recall}
R = \frac{ TP }{TP+FN}
\end{equation}
\begin{equation} \label{eqn:F-Measure}
F\mbox{-}measure = \frac{ 2 \times P \times R}{P + R}
\end{equation}
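The metrics above can be illustrated with a small sketch. As a simplifying assumption, plans are compared step by step here (the evaluation counts a true positive when the whole recommended path matches the actual one); the helper names are our own.

```python
def plan_steps(plan):
    """Decompose an upgrade plan (a list of versions) into migration steps."""
    return {(a, b) for a, b in zip(plan, plan[1:])}

def evaluate(recommended, actual):
    """Precision, recall, and F-measure between a recommended and an
    actual plan, compared step by step."""
    rec, act = plan_steps(recommended), plan_steps(actual)
    tp, fp, fn = len(rec & act), len(rec - act), len(act - rec)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

p, r, f = evaluate(["1.5.8", "1.7.5", "1.7.25"],
                   ["1.5.8", "1.6.1", "1.7.5", "1.7.25"])
print(p, r, f)
```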
\textbf{Rank correlation}: We consider the following coefficients:
\begin{itemize}
\item \textit{Kendall's tau}
measures the strength of dependence between two variables. It is a non-parametric test, i.e.,\@\xspace it is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified.
\item \textit{Pearson's correlation}
is the most widely used correlation statistic to measure the degree of the relationship between linearly
related variables. In particular, this coefficient is suitable when it is possible to draw a regression line between the points of the available data.
\item \textit{Spearman's correlation}
is a non-parametric test that is used to measure the degree of association between two variables. Differently from Pearson's coefficient, Spearman's correlation index performs better in cases of monotonic relationships.
\end{itemize}
All the considered coefficients assume values in the range [-1,+1], i.e.,\@\xspace from perfect negative correlation to perfect positive correlation. The value 0 indicates that between two variables there is no correlation.
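The three coefficients follow their textbook definitions; a minimal standard-library sketch (Kendall's tau computed as $\tau_a$, i.e.,\@\xspace without tie correction, a simplification with respect to library implementations):

```python
from math import sqrt

def pearson(x, y):
    """Pearson's r: linear correlation between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    out = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        for k in range(i, j + 1):
            out[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return out

def spearman(x, y):
    """Spearman's rho: Pearson's r computed on the ranks."""
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    """Kendall's tau-a: concordant minus discordant pairs over all pairs."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

On a monotonic but non-linear relation, the rank-based coefficients reach 1 while Pearson's r stays below it, which is why all three are reported.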
In the next section, we explain in detail the experimental results obtained through the evaluation.
\section{Introduction}
\label{s_intro}
Little is known about the stellar properties, extinction, and
the expected intrinsic \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission of distant, high redshift
galaxies. Indeed, although it has in the recent past become possible
through various techniques to detect already sizeable numbers of
galaxies at $z \ga 5$
(see e.g.\ the reviews of Taniguchi et al.\ 2003 and Spinrad 2004)
the information available on these objects remains generally scant.
For example, in many cases the galaxies are just detected in
two photometric bands and \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ line emission, when present,
serves to determine the spectroscopic redshift (e.g.\ Bremer et al.\ 2004,
Dickinson et al.\ 2004, Bunker et al.\ 2004).
Then the photometry is basically used to estimate the star formation rate
(SFR) assuming standard conversion factors between the UV restframe
light and the SFR, and nothing is known about the extinction,
and the properties of the stellar population (such as age, detailed
star formation histories etc.)
At higher redshift ($z \ga 6$) even less information is generally available
(but see a recent study of Eyles et al.\ 2005 on two $z \sim 6$ galaxies
observed with HST and Spitzer).
Many objects are found by \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission, but remain weak or sometimes even
undetected in the continuum (e.g.\ Rhoads \& Malhotra 2001, Kodaira et al.\ 2003,
Cuby et al.\ 2003, Ajiki et al.\ 2003, Taniguchi et al.\ 2004).
In these cases the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ luminosity can be determined
and used to estimate a SFR using again standard conversion factors.
Also the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ equivalent width is estimated,
providing some possible clue on the nature of these sources.
However, this has led to puzzling results e.g.\ for the
sources from the LALA survey
which seem to show unusually large \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ equivalent widths
that are difficult to understand without invoking exceptional
conditions (PopIII stars?; Malhotra \& Rhoads 2002, Rhoads et al.\ 2003).
Given the few data available
for the LALA sources it is fair to say that the nature of these objects,
their stellar populations, extinction etc.\ remain currently
largely unknown (cf.\ Dawson et al.\ 2004).
When possible, a simple comparison between the UV and \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ SFR
is undertaken providing possibly information on the
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission, i.e.\ the partial absorption of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ photons
on their sight line through the intergalactic medium (e.g.\ Haiman
2002, Santos 2004) and/or on partial \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ ``destruction'' processes
close to the source
(e.g.\ due to dust or ISM geometry; Charlot \& Fall 1993, Valls-Gabaud 1993,
Tenorio-Tagle et al.\ 1999, Mas-Hesse et al.\ 2003).
Notable exceptions of $z \ga 4$ samples for which some estimate
of extinction is available from multi-band photometry include
work on the Subaru Deep Survey (Ouchi et al.\ 2004)
and {\em GOODS} data (e.g.\ Papovich et al.\ 2004).
Lehnert \& Bremer (2004) also discuss some preliminary information
on little extinction in their $z>5$ sources.
Interestingly, in their study of a $z=5.34$ galaxy discovered by Dey et al.\ (1998),
Armus et al.\ (1998) find indications for significant reddening
($A_V > 0.5$ mag) from analysis of the observed SED and from the presence
of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission.
In a similar manner we will here present a consistent study of the
stellar population properties, extinction, and \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission for
two galaxies at redshift $z \ga 6$.
For this aim we use two distant ($z \ga 6$) gravitationally lensed galaxies
for which multi-band photometry is available (detection in at least 3--4 bands).
Through a quantitative analysis of their SED, using a vast library
of empirical and theoretical template spectra, we aim to constrain
properties of the stellar populations, such as age and star formation (hereafter SF)
history (burst or constant SF?) and their extinction.
Furthermore by comparing the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission expected from the
stellar population constraint with the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ flux we
estimate consistently the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ ``transmission'' for the individual sources.
The \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission and SF properties derived here can in principle
be used to infer the ionisation fraction of hydrogen in the IGM at a
given redshift (cf.\ Haiman 2002, Santos 2004),
a key quantity of interest for the
study of the reionisation history of the Universe (cf.\ review from
Barkana \& Loeb 2001).
Obviously the present ``exploratory'' work will have to be extended to
larger galaxy samples, and sophisticated tools will probably be
needed to interpret such results in terms of IGM properties
(cf.\ Gnedin \& Prada 2004). However, this approach should be complementary to
other methods probing the reionisation history by
measuring the Gunn-Peterson optical depth observed in quasar spectra as
a function of redshift (e.g.\ Becker et al.\ 2001, Fan et al.\ 2003),
or by comparing \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ luminosity functions at different redshifts
(e.g.\ Malhotra \& Rhoads 2004).
The remainder of the paper is structured as follows.
In Sect.\ \ref{s_obs} we summarise the adopted observational constraints
from the literature.
Our modeling technique is described in Sect.\ \ref{s_models}.
The detailed results for each galaxy are presented in Sects.\ \ref{s_370}
and \ref{s_kesr}.
Our main conclusions are summarised in Sect.\ \ref{s_conclude}.
\section{Observational constraints}
\label{s_obs}
The two galaxies studied here are:
1) The probable $z \sim 7$ galaxy recently discovered
by Kneib et al.\ (2004, hereafter KESR), which presently lacks a spectroscopic
redshift but for which rather accurate multi-band HST observations are available,
allowing us in particular to derive a fairly reliable photometric redshift.
2) the $z=6.56$ \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emitter HCM 6A behind the lensing cluster Abell 370.
We now summarise the observational data,
taken from the literature.
The adopted redshift and gravitational magnification factors
are listed in Table \ref{tab_props}.
Before proceeding let us mention for clarity that these two objects
are generally considered to be star forming galaxies (starbursts), not
AGN (narrow line - type II - or others), as no contradicting information
is available so far. However, one must bear in mind that some of the
interpretations presented below (and in the literature) may need to be revised,
should this assumption be incorrect.
{\bf Triple arc in Abell 2218:}
The observational data for this object, named Abell 2218 KESR hereafter,
is taken from Kneib et al.\ (2004, hereafter KESR) and from Egami et al.\ (2005).
The photometry from KESR includes observations with HST (WFPC2, ACS, NICMOS) in
V$_{\rm 606W}$ (undetected), I$_{\rm 814W}$, z$_{\rm 850LP}$, and H$_{\rm 160W}$, and with NIRC/Keck
in $J$.
Subsequently, additional photometry was obtained with NICMOS/HST in the $J$ band
(F110W), and with IRAC/Spitzer at 3.6 and 4.5 $\mu$m\ (see Egami et al.)
For our computations (see below) we have used the appropriate filter transmission
curves. In particular,
updated transmission curves were used for the ACS and NICMOS filters
(M. Sirianni 2003, private communication; Sirianni et al.\ 2004;
R.\ Thompson 2003, private communication).
A few brief comments concerning the photometry are needed here.
First, KESR present photometry for two multiple images
(a and b).
Apparently sources a and b differ in the z$_{\rm 850LP}$\ flux
(with quoted errors of $\pm$ 0.05 mag) by 3.2 $\sigma$,
whereas the fluxes in the other filters agree well within 1 $\sigma$.
Differential lensing across the images together with sampling
effects could be responsible for this small discrepancy. As
we are interested in a global representative SED for this source,
we have chosen to use the averaged photometric SED, the magnification
factors being the same for the two images.
Finally, we have also noted some apparent discrepancies between
the measurements reported in KESR and Egami et al., the most
important one being the H$_{\rm 160W}$\ flux, which is $\sim$ 15--20 \%
(3--4 $\sigma$) higher in the latter publication.
These differences are mostly due to the use of different apertures
on different repixeled/rescaled images (J.\ Richard, 2004, private
communication).
Again, this illustrates the difficulty in deriving reliable colors for
extended arcs.
To account for these small discrepancies and for the possible error
underestimate we therefore adopt a minimum photometric
error of 0.15 mag in all filters.
It is worth noting that photometric errors translate into absolute
flux calibration errors for fitting purposes.
As we will see below, adopting the latter minimum errorbars
significantly improves the SED fits.
In addition to the photometry, the non-detection of the source with Keck LRIS spectroscopy
provides an upper limit on the continuum flux between 9000 and 9300 \AA\ (KESR).
This upper limit will be used as an additional constraint in our SED modeling.
KESR also indicate a possible drop of the continuum below $\sim$ 9800 \AA\ from
their Keck II NIRSPEC spectrum. For various reasons the reality of this
spectral break is questionable.
First a true neutral hydrogen break (``\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ break'') at $\sim$ 9800 \AA, far
in the red wing of the z$_{\rm 850LP}$\ filter, seems incompatible with the relatively strong
flux measured in this filter. Furthermore, test computations show that
such a break is difficult if not impossible to reconcile with
our spectral modeling. In any case the significance of this finding
appears questionable as the detected continuum is extremely
faint and noisy. The reality of this spectral feature is now also questioned
by Egami et al.\ (2005). For these reasons this information is
discarded from our spectral fitting.
In practice we have retained the following two variants to describe
the observed SED of this source:
{\em SED1)} The average fluxes (I$_{\rm 814W}$, z$_{\rm 850LP}$, H$_{\rm 110W}$, H$_{\rm 160W}$) of images a and b
from KESR plus the IRAC/Spitzer data of image b from Egami et al.
{\em SED2)} All fluxes from image b from Egami et al.
These are treated as SEDs from two different objects.
Furthermore, for each of these ``objects'' we have computed two cases
in our SED fitting described below:
{\em i)} The observed SEDs in I$_{\rm 814W}$, z$_{\rm 850LP}$, H$_{\rm 110W}$, H$_{\rm 160W}$, 3.6, and 4.5 $\mu$m.
{\em ii)} Same as (i) plus the flux limits from the V$_{\rm 606W}$\ and
Keck LRIS non-detections.
No emission line has so far been detected for Abell 2218 KESR.
Its spectroscopic redshift remains therefore presently unknown
but the well-constrained mass model for the cluster strongly suggests
a redshift $z \ga$ 6 for this source (KESR, Egami et al.\ 2005).
The magnification factor of both images a and b is $\mu=25 \pm 3$,
according to KESR.
{\bf Abell 370 HCM6A:}
The observational data of this $z=6.56$ galaxy is taken from Hu et al.\ (2002).
The photometry includes $VRIZJHK^\prime$ from Keck I and II
(LRIS and Echellette Spectrograph and Imager) and from Subaru
(CISCO/OHS).
The gravitational magnification of the source is $\mu=4.5$
according to Hu et al.\ (2002).
Photometric fluxes and errors were adopted from their Fig.\ 3.
Where possible the appropriate filter transmission curves
were used.
The ``$Z$'' band filter transmission is somewhat uncertain, as
these observations were undertaken using an RG850 filter,
which together with the LRIS optics and the CCD response,
yields a transmission similar to a $Z$ band filter (Hu et al.\ 1999).
Our approximate filter curve shows a blueward shift of $\lambda_{\rm eff}$
by $\sim$ 200 \AA\ compared to the information given by Hu et al.\ (1999).
However, since the redshift is known for this source and since
we adjust the observed flux (not magnitude) in this band,
this should not affect our conclusions.
\section{SED modeling}
\label{s_models}
\begin{figure}
\centerline{\psfig{figure=plot_beta_slope.eps,width=8.8cm}}
\caption{Temporal evolution of the UV slope $\beta$ measured between 1300 and 1800
\AA\ from synthesis models of different metallicities and for instantaneous
bursts (solid lines) and constant SF (long dashed lines).
Black lines show solar metallicity models, red lines metallicities between
$Z = 10^{-5}$ and zero (PopIII), blue lines intermediate cases of $Z=0.004$ and 0.0004.
The dotted lines show $\beta$ if nebular continuous emission is neglected,
i.e.\ assuming pure stellar emission.
Note especially the strong degeneracies of $\beta$ in age and metallicity for
bursts, the insensitivity of $\beta$ to $Z$ for constant SF, and
the rather red slope of young, very metal-poor bursts. Further discussion in the text.
}
\label{fig_beta}
\end{figure}
\subsection{Main restframe UV-optical SED features of high-z galaxies
and their ``information content''}
\label{s_uv}
Before proceeding to the fits of the individual SEDs, a brief comment
on the available SED features seems appropriate.
For obvious reasons the available SED (basically from broad-band photometry)
of high-z ($z \ga 6$) galaxies is primarily limited to the rest-frame UV
(when observed from the ground) or optical spectrum (when available e.g.
with Spitzer and future satellite missions).
The main information ``encoded'' in this SED is therefore:
{\em 1)} the neutral HI break shortward of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ (hereafter the ``\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi'' break)
due to the strong or complete Gunn-Peterson trough,
{\em 2)} the slope of the UV spectrum, and
{\em 3)} possibly a 4000 \AA\ break (hereafter denoted Balmer break),
if present and covered by the observations.
In addition the presence of the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ line, mostly used to determine
spectroscopically the redshift, provides clear evidence for ongoing massive
star formation (hereafter SF).
The position of the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ break depends essentially on redshift.
The UV slope depends on the intrinsic spectrum -- in turn depending
mostly on age and SF history -- and on the extinction, i.e.\ the extinction
law and the amount of reddening.
The Balmer break becomes visible (in absorption) in the continuum of
stellar populations after $\ga$ 10--30 Myr.
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission, if due to stellar photoionisation and not AGN activity,
indicates the presence of young ($\la$ 10 Myr) massive ionizing stars.
Concerning the UV slope, it is useful to recall that this quantity
\footnote{Various definitions of the UV slope exist. The most commonly
used ones, generally denoted $\beta$, are defined as the power-law
index of the SED in $F_\lambda$ versus $\lambda$ over a certain
wavelength interval.} does not lend itself to determining
the metallicity of a star-forming galaxy, at least from a theoretical
point of view and for individual objects.
The reasons are that intrinsically the UV slope shows only small variations
with metallicity, and that the slope depends strongly on the exact
SF history (see e.g.\ Leitherer \& Heckman 1995, Meurer et al.\ 1995).
This is illustrated in Fig.\ \ref{fig_beta} where $\beta$ (measured over
the interval 1300-1800 \AA) is plotted as a function of age for
populations of metallicities between solar and zero (PopIII) and for
the limiting cases of bursts and SFR=const.
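To make the footnote's definition concrete, here is a minimal sketch (with hypothetical flux values, not data from this paper) of how $\beta$ is measured as the power-law index of $F_\lambda$ between 1300 and 1800 \AA:

```python
import math

def uv_slope_beta(f1, f2, lam1=1300.0, lam2=1800.0):
    """Power-law index beta with F_lambda proportional to lambda**beta,
    estimated from fluxes at two rest-frame UV wavelengths (Angstrom)."""
    return math.log(f2 / f1) / math.log(lam2 / lam1)

# Hypothetical example: a spectrum falling by 50% between 1300 and 1800 A
beta = uv_slope_beta(1.0, 0.5)
print(round(beta, 2))  # approx -2.13
```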
Furthermore, as pointed out in Schaerer (2002, 2003) and also shown in this
figure, for very low metallicities ($Z\la 1/50\ifmmode Z_{\odot} \else Z$_{\odot}$\fi$) nebular continuous
emission becomes dominant even
down to UV wavelengths (longward of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi), such that the observed
integrated (stellar+nebular) spectrum has an even flatter UV slope
than that of high-metallicity starbursts.
In other words, even for bursts, there are strong intrinsic degeneracies
of $\beta$ between age and metallicity, to which the additional effect
of reddening must be added,
including the uncertainties on the a priori unknown extinction law.
It is therefore evident that on an individual object basis
there are in general degeneracies between age, metallicity, SF history,
and extinction.
However, this does not preclude the possible existence of
statistical correlations between quantities such as e.g.\ $\beta$
and metallicity in large samples of galaxies, as known to
hold e.g.\ for local UV selected starbursts (cf.\ Heckman et al.\ 1998).
Also, as we will see below, there are cases where the UV slope
and the mere fact of the presence of an emission line allow us nevertheless to
lift some degeneracies and therefore to determine interesting
constraints on the stellar population and on extinction.
The behaviour of the 4000 \AA\ break and its use as an age indicator has
extensively been discussed in the literature (e.g.\ Bruzual 1983, and
recently Kauffmann et al.\ 2003).
Since in simple stellar populations its amplitude is basically a monotonically
increasing function of age, an estimate of the break, e.g.\ obtained from
3.6--4.5 $\mu$m\ photometry with IRAC/Spitzer and $JHK$ photometry in $z\ga 6$
galaxies, provides information on the age of the light-emitting stellar population.
Since the exact SF history cannot be determined in these cases
(in contrast to studies at low $z$, cf.\ Kauffmann et al.) the amplitude
of the break provides a range of ages, the minimum age being given by
instantaneous bursts, the maximum age from models with constant SF.
For obvious reasons this maximum ``luminosity weighted'' age derived from a
measure of the Balmer break is also unaffected by considerations of
possible multiple stellar populations. The same is not true for the minimum age,
which is, however, of less cosmological interest.
\subsection{Spectral fitting}
\label{s_fit}
For the spectral fitting we use a slightly adapted version
of the photometric redshift code {\em Hyperz}\
of Bolzonella et al.\ (2000).
{\em Hyperz}\ does standard SED fitting using a number of modeling parameters.
The free parameters for the SED modeling are:
\begin{itemize}
\item [{\em 1)}] the spectral template,
\item [{\em 2)}] extinction and the reddening law,
\item [{\em 3)}] a parameter \ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi\ describing possible deviations from the
average Lyman forest attenuation from Madau (1995).
\end{itemize}
For Abell 2218 KESR the source redshift is also a free parameter.
For the spectral templates we use a large compilation of empirical
and theoretical SEDs, including starbursts, QSOs, and galaxies of all Hubble types,
and covering various star formation histories (bursts, exponentially decreasing,
constant SF) and various metallicities.
For most applications we group the templates in the following way:
\begin{itemize}
\item {\bf Starbursts and QSOs (hereafter SB+QSO):} this group includes the starburst
templates with $E(B-V)$ from $<0.1$ to 0.7 from the Calzetti et al.\ (1994) and
Kinney et al.\ (1996) atlas,
the HST QSO template of Zheng et al.\ (1997), as well as the UV-optical spectrum
of the metal-poor galaxy SBS 0335-052 with numerous strong optical emission lines
(and an extinction of $E(B-V) \sim 0.09$, Izotov \& Thuan 1998),
kindly communicated to us by Yuri Izotov (2002, private communication).
\item {\bf BCCWW+:} Bruzual \& Charlot (1998, private communication;
cf.\ Bruzual \& Charlot 1993) evolving synthesis models
assuming bursts, constant star formation, and exponentially decaying star
formation histories
reproducing present day spectra of galaxies of various types
(E, S0, Sa, Sb, Sc, Sd, and Im) plus the empirical
E, Sbc, Scd, and Im templates from Coleman et al.\ (1980),
as included in the public {\em Hyperz}\ version.
\item {\bf S03+:} Theoretical templates of starburst galaxies
from Schaerer (2003) covering metallicities of $Z=0.02$ (solar), 0.008,
0.004, 0.001, 1/50 \ifmmode Z_{\odot} \else Z$_{\odot}$\fi, $Z=10^{-5}$, $10^{-7}$, and zero metallicity
(PopIII). For low metallicities ($Z \le 10^{-5}$) these templates have
been computed for 3 different assumptions on the IMF. The spectral
library includes burst models and models with a constant star formation rate
(SFR). For more details see Schaerer (2003).
For the present work these computations were extended to cover
ages of up to 1 Gyr. These SEDs are available on request from the
first author and on the Web\footnote{{\tt http://obswww.unige.ch/sfr}}.
\end{itemize}
The standard extinction law adopted here is the one from
Calzetti et al.\ (2000) determined empirically from nearby starbursts.
We also explore the possible implications of other laws, such as
the Galactic law of Seaton (1979) including the 2200 \AA\ bump,
and the SMC law from Pr\'evot et al.\ (1984) and Bouchet et al.\ (1985)
showing no UV bump, but a steeper increase of the extinction
in the UV compared to the Calzetti et al.\ law.
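For reference, the Calzetti et al.\ (2000) law has a simple analytic parametrisation; the sketch below (coefficients from the published curve; the usage example is illustrative only) shows how a given $E(B-V)$ translates into a UV flux suppression:

```python
def calzetti_k(lam_um):
    """Calzetti et al. (2000) starburst attenuation curve k(lambda),
    with lambda in microns, valid over 0.12-2.2 um."""
    x = 1.0 / lam_um
    if lam_um < 0.63:  # UV-optical branch
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05
    return 2.659 * (-1.857 + 1.040 * x) + 4.05  # optical-NIR branch

def attenuation_factor(lam_um, ebv):
    """Multiplicative flux suppression 10**(-0.4 * E(B-V) * k(lambda))."""
    return 10.0 ** (-0.4 * ebv * calzetti_k(lam_um))

# e.g. at 1600 A restframe with E(B-V) = 0.1 the UV flux is suppressed
# by roughly a factor of 2.5
print(round(calzetti_k(0.16), 2), round(attenuation_factor(0.16, 0.1), 2))
```

This steep rise of $k(\lambda)$ towards the UV is why even modest reddening strongly affects the restframe UV SEDs discussed here.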
For the Lyman forest attenuation, {\em Hyperz}\ follows Madau (1995).
However, we allow for possible deviations from the mean attenuation
by varying the Lyman forest optical depths $\tau_{\rm eff}^{\alpha,\beta}$
by a multiplicative factor taking the values of (\ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi, 1., and $1/\ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi$).
Typically we adopted $\ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi=$ 2 or 3. Here $\tau_{\rm eff}^{\alpha,\beta}$
stands for the optical depths corresponding to the absorption between
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and Ly$\beta$, and between Ly$\beta$ and the Lyman limit respectively.
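The $f_{\rm Lyf}$ parameter amounts to a simple rescaling of the effective optical depths; a minimal numerical sketch (with a hypothetical value of $\tau_{\rm eff}$):

```python
import math

# The f_Lyf parameter simply rescales the effective Lyman-forest optical
# depths: the mean transmission exp(-tau_eff) becomes exp(-f_lyf * tau_eff).
def forest_transmission(tau_eff, f_lyf=1.0):
    """Mean transmitted flux fraction for a scaled effective optical depth."""
    return math.exp(-f_lyf * tau_eff)

# Hypothetical tau_eff = 1.5: doubling the opacity (f_Lyf = 2) lowers the
# transmitted fraction from exp(-1.5) to exp(-3.0)
print(round(forest_transmission(1.5, 1.0), 3), round(forest_transmission(1.5, 2.0), 3))
```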
The following other minor changes have been made in our version (1.3ds) of
{\em Hyperz}. The calculation of the synthetic photometry deals correctly
with templates including strong spectral lines (emission or absorption).
Furthermore we make sure to use the proper filter transmission curves,
usually given in photon units. Earlier versions of {\em Hyperz}\ and
other codes (e.g.\ evolutionary synthesis codes) sometimes assume
(for ``historical'' reasons) that transmission curves are given in
flux units. For wide filters, such as some ACS/HST
filters, this may lead to small differences.
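The photon- versus energy-unit distinction can be sketched numerically; a toy example (hypothetical box filter and toy spectrum, not the {\em Hyperz}\ implementation):

```python
# Toy illustration of the difference between photon-count and
# energy-weighted synthetic photometry: for a photon-counting detector
# the response T(lambda) carries an extra weight of lambda.
def synth_flux(wl, flux, trans, photon_units=True):
    """Filter-averaged F_lambda as a simple weighted mean."""
    w = [t * l if photon_units else t for t, l in zip(trans, wl)]
    return sum(wi * f for wi, f in zip(w, flux)) / sum(w)

# Hypothetical wide box filter (4000-6000 A) and a sloped toy spectrum
wl = [4000.0 + 50.0 * i for i in range(41)]
trans = [1.0] * len(wl)
flux = [1.0 / l for l in wl]
f_photon = synth_flux(wl, flux, trans, True)
f_energy = synth_flux(wl, flux, trans, False)
print((f_energy - f_photon) / f_energy)  # a percent-level difference
```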
Other modifications concern essentially features related to the
user interface (additional outputs etc.).
For given choices of the above parameters, the {\em Hyperz}\ code
performs a standard $\chi^2$ minimisation fit to the observed SEDs and
determines, for each point in the parameter space, the
corresponding $\chi^2$ value. Using these $\chi^2$ values, it is
possible to quantify the probabilities for the main free
parameters, namely extinction, age of the spectral template, SF
history, etc. When the SED fitting is based on theoretical
templates, the SFR value is easily obtained and allows us to
compare the expected values for the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ flux to the actual ones.
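The fitting step can be sketched as follows (an illustrative toy version, not the actual {\em Hyperz}\ code; the band fluxes and templates are hypothetical, and in practice redshift, extinction, and $f_{\rm Lyf}$ enumerate a much larger grid):

```python
# Toy chi^2 SED fit: for each template the best flux scaling is found
# analytically, then chi^2 is recorded and the minimum selected.
def chi2_of_template(f_obs, sigma, f_tmpl):
    w = [1.0 / s**2 for s in sigma]
    # analytic least-squares normalisation of the template
    a = (sum(wi * fo * ft for wi, fo, ft in zip(w, f_obs, f_tmpl))
         / sum(wi * ft**2 for wi, ft in zip(w, f_tmpl)))
    return sum(wi * (fo - a * ft)**2 for wi, fo, ft in zip(w, f_obs, f_tmpl))

# Hypothetical 4-band photometry and two toy templates
f_obs = [1.0, 1.2, 2.0, 2.1]
sigma = [0.1, 0.1, 0.2, 0.2]
templates = {"young burst": [1.0, 1.1, 2.1, 2.0],
             "flat": [1.0, 1.0, 1.0, 1.0]}
chi2 = {name: chi2_of_template(f_obs, sigma, t) for name, t in templates.items()}
best = min(chi2, key=chi2.get)
print(best)  # the "young burst" toy template fits far better than "flat"
```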
To convert the observed/adjusted quantities to absolute values
we adopt the following cosmological parameters:
$\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and
$H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$.
\section{Results for Abell 2218 KESR}
\label{s_kesr}
\begin{figure}
\centerline{\psfig{figure=plot_pz_7_rev1.eps,width=8.8cm}}
\caption{Photometric redshift probability
distributions \ifmmode P(z) \else $P(z)$\fi\ of Abell 2218 KESR using
three spectral template groups. Solid line: BCCWW+ template group,
dotted: SB+QSO, long dashed: S03+.
The three upper blue curves stand for the average SED of a and b (SED1), the lower red ones
for the SED of object b (SED2) from Egami et al.\ (2005).
In all cases a minimum photometric error of 0.15 mag was adopted.
The \ifmmode P(z) \else $P(z)$\fi\ shown here has been computed based on all filters in which the object
is detected (I$_{\rm 814W}$\ to 4.5 $\mu$m). }
\label{fig_pz_7}
\end{figure}
\subsection{Photometric redshift estimate}
As a spectroscopic redshift has not yet been obtained for this galaxy,
we examine here its photometric redshift estimate.
In Fig.\ \ref{fig_pz_7} we show the photometric redshift probability
distributions \ifmmode P(z) \else $P(z)$\fi\ for the two SEDs (SED1, SED2) of Abell 2218 KESR described above
using the three spectral template groups and adopting a minimum photometric error
of 0.15 mag.
For each redshift, \ifmmode P(z) \else $P(z)$\fi\ quantifies the quality of the best fit model
obtained by varying all other parameters (i.e.\ extinction, \ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi, and the
spectral template within the template group).
Given the excellent HST (WFPC2, ACS and NICMOS) photometry, \ifmmode P(z) \else $P(z)$\fi\
is quite well defined: the photometric redshift ranges typically between
$z_{\rm phot} \sim$ 5.5 and 7.3. Outside of the plotted redshift range
\ifmmode P(z) \else $P(z)$\fi\ is essentially zero.
If we assume the (smaller) quoted formal photometric errors (but note the discrepancies
discussed in Sect.\ \ref{s_obs}), \ifmmode P(z) \else $P(z)$\fi\ becomes more peaked, i.e.\ the photometric
redshift is better defined. This is driven by the error on the z$_{\rm 850LP}$\ flux, which determines
the red side of the ``\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi'' break. However, the resulting best fit value \ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi\ does not
change much. Furthermore, the fit quality is considerably decreased.
This demonstrates the interest of such high accuracy measurements and the need for
reliable error estimates.
The predicted redshift distribution is found to be quite insensitive
to the exact template (as shown in Fig.\ \ref{fig_pz_7}),
to the exact value of \ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi, and to the adopted extinction law
(variations of the latter two are not shown).
However, we note that for this object the fits (and \ifmmode P(z) \else $P(z)$\fi) are
improved when allowing for deviations from the average Madau (1995)
attenuation law. The curves shown here have been computed for $\ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi=2$.
More important in determining \ifmmode P(z) \else $P(z)$\fi\ is the exact SED.
As seen from Fig.\ \ref{fig_pz_7}, the use of SED1 or SED2 leads to somewhat different
\ifmmode P(z) \else $P(z)$\fi\ distributions. SED2 (cf.\ Sect.\ \ref{s_obs}) yields a somewhat larger redshift,
albeit with a somewhat reduced fit quality.
These differences illustrate how uncertainties and difficulties in the photometric
measurements of such faint sources, whose origin are briefly discussed in
Sect.\ \ref{s_obs}, propagate to the photometric redshift estimate.
All our best-fit solutions have redshift $\ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi \sim$ 6.25--6.63,
lower than the redshift range estimated by KESR,
but compatible with the more recent quantitative analysis of Egami et al.\ (2005).
Given the various free parameters, uncertainties on the intrinsic SED, etc.
we conclude that the redshift of Abell 2218 KESR
is likely $z \sim$ 6.0--7.2,
taking into account both our photometric determination
and the lensing considerations of KESR.
\subsection{SED fits and inferences on the stellar population and on \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi.}
A large number of models have been computed using the different variants
of the SEDs describing this object (SED1-2), the different
filter combinations (non-detections + spectroscopic constraint)
discussed in Sect.\ \ref{s_obs}, and varying the various model
parameters.
We first discuss briefly the main salient results
with the help of some illustrations. A more general discussion
of the results and their dependence on various assumptions follows.
\subsubsection{Age, star formation history, and extinction}
\begin{figure}
\centerline{\psfig{figure=plot_sed_7rev_all.eps,width=8.8cm}}
\caption{Best-fit SEDs to the observations of Abell 2218
(SED2 from Egami et al.\ 2005, including the flux limit from the
non-detections in V$_{\rm 606W}$\ and at 9000-9300 \AA\ from spectroscopy;
cf.\ Sect.\ \ref{s_obs}).
The red crosses indicate the corresponding model broad band fluxes.
The solid line shows the best fit for a template from the S03+ group,
and dotted from the SB+QSO group.
The redshifts of these solutions are $z \sim$ 6.63 and 6.54, respectively.
See text for more information.}
\label{fig_7rev_all}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=plot_sed_7rev_old.eps,width=8.8cm}}
\caption{Best-fit SEDs to the observations of Abell 2218
(Large symbols: SED2 from Egami et al.\ 2005; small symbols: SED1;
cf.\ Sect.\ \ref{s_obs}). Only true detections are taken into account.
The red crosses indicate the corresponding model broad band fluxes.
The solid (dotted) line shows the best fit for a template with constant
SFR from the S03+ group to the SED1 (SED2).
The best ``maximum age'' fits correspond
to ages of 500 and 400 Myr, no extinction, and redshifts \ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi\ $\sim$ 6.40
and 6.57, respectively, for SED1 and SED2.
See text for more information.}
\label{fig_7rev_old}
\end{figure}
Figure \ref{fig_7rev_all} shows the best-fit models
to SED2, including the upper limits from the V$_{\rm 606W}$\ imaging and the
LRIS spectroscopy, for the S03+ and SB+QSO template groups.
The best fit redshifts are \ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi\ $=6.63$ and 6.54 respectively.
These fits show in particular that the spectroscopic constraint can be
accommodated simultaneously with the observed z$_{\rm 850LP}$\ flux;
the resulting fits are within the 1 $\sigma$ errors in all bands.
The best fit from the S03+ group corresponds to a burst with
an age of 15 Myr at solar metallicity and no extinction (solid line).
Similarly good fits are also obtained for lower metallicity.
The best fit with empirical starburst and QSO templates (SB+QSO)
is obtained with the spectrum of the metal-poor H~{\sc ii}\ galaxy SBS 0335-052
(dotted line in Fig.\ \ref{fig_7rev_all}).
In this case the apparent Balmer break observed between the NICMOS/HST and
IRAC/Spitzer domain is simply explained by the presence of strong
emission lines in the 3.6 and 4.5 $\mu$m\ filters
\footnote{The main lines are H$\gamma$, \ifmmode {\rm H}\beta \else H$\beta$\fi, and [O~{\sc iii}] $\lambda\lambda$4959,5007\
in the 3.6 $\mu$m\ filter and He~{\sc i}\ $\lambda$5876 in the 4.5 $\mu$m\ filter.
E.g.\ for the emission lines between H$\gamma$ and [O~{\sc iii}] $\lambda\lambda$4959,5007\
the total observed equivalent width (boosted by the $(1+z)$ factor)
is $\sim$ 9130 \AA, as estimated from the data of Izotov \& Thuan (1998),
compared to an equivalent filter width of $\sim$ 6600 \AA.}
and some additional extinction to reduce the restframe UV flux.
The extinction needed is $A_V = 0.6$ for the Calzetti et al.\ law, or
$A_V = 0.2$ for the Pr\'evot et al.\ extinction law.
In terms of age the restframe UV to optical spectrum (continuum and lines) of
SBS 0335-052 corresponds to a young population of $\sim$ 3--5 Myr according to
the analysis of Papaderos et al.\ (1998) and Vanzi et al.\ (2000).
Of course, the presence of an older population in addition to the starburst cannot be
excluded on the present grounds.
In short, the observed SED of Abell 2218 KESR can be explained by a young population,
with or without emission lines.
Spectroscopy in the 3--4 $\mu$m\ range would be needed to distinguish the latter solution
from others.
Alternatively, good SED fits are also obtained with relatively ``old'' populations.
The oldest ages are obtained when invoking the longest SF timescale, i.e.\
constant SF. In this case the UV restframe flux remains high (due to the continuous
formation of massive stars) and older ages need to be attained to build up
a sufficient population of evolved stars with strong Balmer breaks,
in order to reproduce the observed break.
This case is shown in Fig.\ \ref{fig_7rev_old} with fits of the S03+ templates
to the observed SED1 and SED2. The best-fit solutions obtained here correspond
to ages of 500 and 400 Myr, no extinction, and redshifts \ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi\ $\sim$ 6.40
and 6.57 respectively. Similar, but somewhat older, ages are obtained for
metallicities $Z < 0.008$, below the ones shown here.
\begin{figure}
\centerline{\psfig{figure=contour_7rev.z655rev.eps,width=8.8cm}}
\caption{$\chi^2$ contour plot in extinction -- age for solutions
fitting the observed SED2 at redshift $z=6.55$ with templates from the S03+ group
assuming constant SF.
The best fit solution is indicated by the black dot.
Equidistant $\chi^2$ levels with a spacing of 1 are shown.
The (1D) 68, 90, and 99 \% confidence regions ($\Delta \chi^2 = 1.$, 2.71, 6.63)
are delimited by the thick black lines in long dashed, dotted and solid
respectively.
Corresponding formation redshifts $z_{\rm form}$ (assuming the cosmological
parameters given in Sect.\ \ref{s_fit}) are indicated by the arrows.
Discussion in text.}
\label{fig_7rev_contour}
\end{figure}
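The $\Delta \chi^2$ levels quoted in the figure caption can be checked against the standard one-parameter confidence levels, using the relation $P(\Delta\chi^2 < x) = {\rm erf}(\sqrt{x/2})$ for a single interesting parameter (stdlib-only check, not part of the analysis code):

```python
import math

# Check that the Delta chi^2 levels used for the contours (1, 2.71, 6.63)
# correspond to 1D confidence levels of ~68, 90, and 99%: for a single
# parameter, P(Delta chi^2 < x) = erf(sqrt(x/2)).
for dchi2 in (1.0, 2.71, 6.63):
    conf = 100.0 * math.erf(math.sqrt(dchi2 / 2.0))
    print(dchi2, round(conf, 1))
```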
A quantitative examination of the ``maximum age'' allowed by the observations
(here SED2) is presented in Fig.\ \ref{fig_7rev_contour}, showing $\chi^2$ contours in the
extinction--age plane for a given set of spectral templates (S03+ group with
constant SFR), the Calzetti et al.\ extinction law, and a fixed redshift of
$z=6.55$. For these conditions the best fit corresponds to
400 Myr, zero extinction, and $\ifmmode z_{\rm phot} \else $z_{\rm phot}$\fi=6.57$ (see Fig.\ \ref{fig_7rev_old}).
This Figure shows that a maximum age of $\sim$ (250--650) Myr (1 $\sigma$ interval)
is obtained, in good agreement with the modeling of Egami et al.\ (2005).
If true, this would correspond to a formation redshift of $z_{\rm form}
\sim$ 8.7--20 for our adopted cosmological parameters.
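The conversion from a stellar age to a formation redshift can be sketched numerically; the following stdlib-only illustration (not the code used in the analysis) reproduces approximately the quoted range for the adopted cosmology:

```python
import math

# Sketch of the z_form estimate: the cosmic age t(z) for Omega_m = 0.3,
# Omega_L = 0.7, H0 = 70 is integrated numerically, and z_form is the
# redshift at which the universe was younger by the stellar age.
OM, OL, H0 = 0.3, 0.7, 70.0
H0_INV_GYR = 977.8 / H0  # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)

def age_gyr(z, zmax=1000.0, n=10000):
    """Age of the universe at redshift z (trapezoidal integration)."""
    dz = (zmax - z) / n
    total = 0.0
    for i in range(n + 1):
        zz = z + i * dz
        f = 1.0 / ((1.0 + zz) * math.sqrt(OM * (1.0 + zz)**3 + OL))
        total += 0.5 * f if i in (0, n) else f
    return H0_INV_GYR * total * dz

def z_form(z_obs, stellar_age_gyr):
    """Redshift at which a population of the given age formed."""
    target = age_gyr(z_obs) - stellar_age_gyr
    lo, hi = z_obs, 100.0  # age(lo) > target > age(hi)
    for _ in range(40):    # bisection on the monotonic age(z)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if age_gyr(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Ages of 250 and 650 Myr at z = 6.55; compare with the quoted range
zf_min, zf_max = z_form(6.55, 0.25), z_form(6.55, 0.65)
print(round(zf_min, 1), round(zf_max, 1))
```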
As is clear from Fig.\ \ref{fig_7rev_contour}, even with constant SF models,
younger populations with some extinction can also fit, although less well,
the present observations.
However, solutions with low or zero extinction are generally preferred.
When varying the star formation history between these extreme cases (burst or SFR$=$const),
i.e.\ considering e.g.\ exponentially declining SF histories, any intermediate
age can be found for obvious reasons.
Such cases are e.g.\ obtained when fitting templates
from the Bruzual \& Charlot models (not shown here)
with exponentially decreasing SF histories and can be found in Egami et al.
As discussed in Sect.\ \ref{s_uv}, considering multiple stellar populations
(cf.\ Eyles et al.\ 2005) does not alter the above estimate of the maximum age
determined from constant SFR models.
In any case, the data available here does not allow us to constrain the SF history
and age further.
\subsubsection{\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission}
The observations obtained so far have not revealed any emission line
from this object (KESR). In particular \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission is lacking, which
could be puzzling for a source with intense star formation.
As we have just seen a variety of star formation histories and ages
are possible for Abell 2218 KESR. Therefore one may or may not expect
intrinsic \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission.
A simple explanation for the apparent absence
of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission could be an advanced age (a post-starburst phase).
However, even with a young age it is not necessary that \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission
be observed. E.g.\ the spectrum of the metal-poor H~{\sc ii}\ galaxy SBS 0335-052,
which provides an excellent fit to the observed SED and shows strong emission
lines (cf.\ Fig.\ \ref{fig_7rev_all}), does not show \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission
(Thuan et al.\ 1997, Kunth et al.\ 2003).
Alternatively, if intrinsically present, the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ non-detection could
be due to a variety of factors:
a redshift $z \la 6.4$ placing \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ below the spectral
range discussed in detail by KESR\footnote{This is probably excluded as,
according to J.-P.\ Kneib (2005, private communication),
no emission line was found in the blue part of the spectrum taken with LRIS.},
a flux below their strongly
varying detection threshold, or other factors depressing the
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission within the host galaxy (dust, ISM+HI geometry) and
in the intervening IGM.
In conclusion, from the available data the apparent lack of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission from
this source is not puzzling. However, it is not completely excluded
that the galaxy truly shows \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission, which has so far eluded detection.
\subsubsection{General comments on fits and discussion}
After these main findings we now briefly mention more ``technical''
results about the influence of various fit parameters.
Quite generally, the results on the age, SF history, extinction etc.\ depend little
on the different variants of the observed SEDs (SED1-2), on the inclusion or not of
the non-detections in the fits, and on the use of the published formal errors or a
minimum error of 0.15 mag (cf.\ Sect.\ \ref{s_obs}).
The results discussed above are therefore quite robust with respect to these
assumptions.
Small differences in the best fit values can, however, be obtained. E.g.\ the best fit
photometric redshift can vary by up to $\la$ 0.2 depending on adopting
SED1 or SED2.
In all cases we note that SED1 allows better fits (smaller $\chi^2$) than SED2.
Adopting $\sigma_{\rm min}=0.15$ mag also significantly increases the fit quality.
Finally, considering variations around the mean Lyman forest attenuation improves the fits
(especially as the HST photometry determining the ``\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ break'' is quite accurate).
In practice all best fits are found with an increased Lyman-forest opacity ($\ifmmode f_{\rm Lyf} \else $f_{\rm Lyf}$\fi=2$).
To summarise, given the absence of a spectroscopic redshift, a fair number of
good fits is found to the observations of Abell 2218 KESR
when considering all the free parameters.
The main conclusions from these ``best fits'' are:
\begin{itemize}
\item[{\em 1)}] Generally the determined extinction is negligible or zero
quite independently of the adopted extinction law.
The best fit with the empirical starburst spectrum of SBS 0335-052
represents an exception to this case, requiring an additional
$A_V \sim$ 0.2--0.6 mag, depending on the adopted extinction law.
\item[{\em 2)}] Although generally burst models fit somewhat better
than those with constant star formation among the theoretical
templates (BC, S03+), the data does not strongly constrain
the star formation history.
\item[{\em 3)}] Typical ages between $\sim$ 15 and 400 Myr are obtained.
A reasonable 1-$\sigma$ upper bound on the age of $\sim$ 650 Myr can be
obtained assuming constant star formation.
However, the data can also be well fit with
a very young ($\sim$
3--5 Myr) stellar population with strong emission lines (using e.g.\
the spectrum of the metal-poor galaxy SBS 0335-052). In this
case the apparent Balmer break observed between the HST and Spitzer
broad-band photometry is simply due to the presence of strong emission
lines affecting the red 3.6 and 4.5 $\mu$m\ filters.
\item[{\em 4)}] Given degeneracies of the restframe UV spectra between age
and metallicity (cf.\ above) no clear indication on the galaxian
metallicity can be derived, in contrast to the claim of KESR.
Good fits to the available data can even be
found with solar metallicity starburst templates.
\item[{\em 5)}] Depending on the star formation history and age
one may or may not expect intrinsic \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission, i.e.\
an important H~{\sc ii}\ region around the object.
The apparent absence of observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission does therefore
not provide much insight.
\end{itemize}
A more complete error analysis beyond the level presented here
is difficult to achieve for a variety of reasons and clearly beyond the
scope of this publication.
\subsubsection{SFR, stellar mass and luminosity}
The theoretical templates can also be used to estimate the
stellar mass involved in the starburst or the star formation
rate when constant star formation is assumed. For this aim we
use all the best fits to the SEDs (SED1-2)
with the S03+ templates, assume a typical redshift of
$z=6.6$, and adopt the magnification $\mu=25$ determined by KESR.
For the adopted cosmology the luminosity distance is then
$d_L = 64457.8$ Mpc.
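The quoted luminosity distance follows from the standard flat-universe integral; a quick stdlib-only numerical check:

```python
import math

# Check of the quoted d_L at z = 6.6 for the flat cosmology
# Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc:
# d_L = (1+z) * (c/H0) * Integral_0^z dz'/E(z'), E = sqrt(OM*(1+z)^3 + OL)
C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7

def lum_dist_mpc(z, n=20000):
    """Luminosity distance in Mpc (trapezoidal integration)."""
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zz = i * dz
        f = 1.0 / math.sqrt(OM * (1.0 + zz)**3 + OL)
        total += 0.5 * f if i in (0, n) else f
    return (1.0 + z) * (C_KMS / H0) * total * dz

print(round(lum_dist_mpc(6.6)))  # close to the quoted 64457.8 Mpc
```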
When constant SF is assumed one obtains the following star formation
rate: $SFR \sim (0.9-1.1)$ \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\
(for a Salpeter IMF from 1 to 100 \ifmmode M_{\odot} \else M$_{\odot}$\fi).
For the best fit ages of $\sim$ 400--570 Myr the total mass
of stars formed would then correspond to $\sim (3.6-6.3) \times 10^8$ \ifmmode M_{\odot} \else M$_{\odot}$\fi.
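The quoted mass range is simply ${\rm SFR} \times {\rm age}$ for the extreme combinations:

```python
# Arithmetic behind the quoted stellar masses for the constant-SF case:
# M_star = SFR x age, for SFR = 0.9-1.1 Msun/yr and ages of 400-570 Myr.
m_min = 0.9 * 400e6  # Msun
m_max = 1.1 * 570e6  # Msun
print(f"{m_min:.1e} {m_max:.1e}")  # 3.6e+08 6.3e+08
```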
The mass estimated from best fit burst models (of ages $\sim$ 6--20 Myr) is
slightly smaller, $M_\star \sim (0.3 - 1) \times 10^8$ \ifmmode M_{\odot} \else M$_{\odot}$\fi.
If we assume a Salpeter
IMF with \ifmmode M_{\rm low} \else M$_{\rm low}$\fi\ $=0.1$ \ifmmode M_{\odot} \else M$_{\odot}$\fi, the mass and SFR estimates would be higher
by a factor of 2.55, in good agreement with the values derived by KESR
and Egami et al.\ (2005).
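The factor 2.55 follows directly from the ratio of Salpeter mass integrals for the two lower mass cutoffs:

```python
# The factor 2.55 quoted above is the ratio of the total mass in a
# Salpeter IMF (dN/dm ~ m^-2.35) integrated from 0.1 versus 1 Msun
# (upper limit 100 Msun in both cases).
def salpeter_mass(m_low, m_up=100.0, alpha=2.35):
    """Total mass Integral m * m^-alpha dm between m_low and m_up."""
    p = 2.0 - alpha
    return (m_up**p - m_low**p) / p

factor = salpeter_mass(0.1) / salpeter_mass(1.0)
print(round(factor, 2))  # 2.55
```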
In all the above cases the total luminosity (unlensed) is typically $L_{\rm bol}
\sim 2 \times 10^{10}$ \ifmmode L_{\odot} \else L$_{\odot}$\fi.
\begin{table*}
\caption{Summary of the main adopted and estimated properties of the analysed high
redshift galaxies.
The adopted magnification $\mu$ is given in col.\ 2,
col.\ 3 gives the redshift,
col.\ 4 an indication on the most plausible star formation history (burst or constant SF),
col.\ 5 a plausible age of the stellar population,
col.\ 6 an estimate of the optical extinction $A_V$ (for the Calzetti et al.\ 2000 law),
col.\ 7 the estimated SFR (for a Salpeter IMF from 1--100 \ifmmode M_{\odot} \else M$_{\odot}$\fi),
col.\ 8 an estimated stellar mass (for same IMF),
col.\ 9 the estimated bolometric luminosity, and
col.\ 10 the estimated \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission (ratio of the observed to the intrinsically
emitted \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ flux).
See Figs.\ \ref{fig_contour_6a} and \ref{fig_7rev_contour} and the corresponding
text for an estimate of the confidence levels and range of parameters.}
\begin{tabular}{llllllllll}
\hline
Object & $\mu$ & redshift & SF history & age & $A_V$ & SFR & stellar mass & $L_{\rm bol}$ & \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi \\
& & & & [Myr] & [mag] & [\ifmmode M_{\odot} \else M$_{\odot}$\fi yr$^{-1}$] & [\ifmmode M_{\odot} \else M$_{\odot}$\fi] & [\ifmmode L_{\odot} \else L$_{\odot}$\fi] & transmission \\
\hline
Abell 2218 & 25. & $\sim$ 6.0--7.2 & ? & 3--400 & negligible? &
$\sim$ 1 & (0.3--6) $\times$ 10$^8$ & $\sim 2 \times 10^{10}$ \\
KESR \\
\\
Abell 370 & 4.5 & 6.56 & CSFR/ & ? & $\sim$ 1. & 11--41 & (1--4) $\times 10^{8}$ & (1--4) $\times 10^{11}$ & 23--90 \% \\
HCM6A & & & young burst \\
\\
& & & composite & young+``old''
& negligible& $>$ 0.4-0.8 & $\sim 2 \times 10^9$& $\sim 3 \times 10^{10}$
& $\ga$ 40 \% \\
\hline
\end{tabular}
\label{tab_props}
\end{table*}
\section{Results for Abell 370 HCM 6A}
\label{s_370}
\begin{figure*}
\centerline{\psfig{figure=plot_sed_6a.eps,width=8.8cm}
\psfig{figure=plot_sed_6a_csfr_sbs.eps,width=8.8cm}}
\caption{Best-fit SEDs to the observations of Abell 370 HCM 6A.
The red crosses indicate the corresponding model broad band fluxes.
Solid lines show the best fit for a template from the BCCWW+ group,
dotted from the SB+QSO group, and dashed from the S03+ group (see explanations
in Sect.\ \ref{s_models}).
{\bf Left:} Observed spectral range.
{\bf Right:} Predicted SED in Spitzer/IRAC domain for best fit models.
Dashed lines show the bursts from the BCCWW+ and S03+ template groups.
The dotted line is the spectrum of SBS 0335-052 from the SB+QSO group
with additional $A_V=1$. The solid lines show best fits for constant
star formation using different extinction/attenuation laws (Calzetti
starburst law versus SMC law). The solid triangles illustrate
the IRAC point-source sensitivity (1 $\sigma$) for low and medium
backgrounds excluding ``confusion noise''.}
\label{fig_sed_6a}
\end{figure*}
\begin{figure*}
\centerline{\psfig{figure=contour_6a_burst_rev1.eps,width=8.8cm}
\psfig{figure=contour_6a_csfr_rev1.eps,width=8.8cm}}
\caption{$\chi^2$ contour plots showing solutions in extinction -- age
diagrams. The best solutions are indicated by the black dot.
Equidistant $\chi^2$ levels with a spacing of 0.5 are shown. The 2D 68\%
confidence region (corresponding to $\Delta \chi^2 = 2.3$)
is delimited by the solid thick black line. The (1D) 68 \% confidence region
for $A_V$ ($\Delta \chi^2 = 1.$) at each given age is delimited by the dashed thick black line.
{\bf Left:} Plot for solutions using a solar metallicity
burst template from the S03+ template group and the Calzetti attenuation law.
Although providing a good fit to the photometry, the
region corresponding to ages $\protect\ga$ 15 Myr, right of the dotted vertical line, is excluded
as no emission line would be expected in this case.
{\bf Right:} Same as left panel for constant star formation models.
The solutions indicate a non-negligible extinction, but no constraint on age.
Discussion in text.
}
\label{fig_contour_6a}
\end{figure*}
\subsection{SED fits and inferences on the stellar population}
Overall the published SED of HCM 6A (see Fig.\ \ref{fig_sed_6a})
is ``reddish'', showing an increase of the flux from $Z$ to $H$
and even to $K^\prime$\footnote{The significance of a change of the SED slope
between $ZJH$ and $HK^\prime$ seems weak, and difficult to understand.}.
From this simple fact and the above explanations it is already clear
qualitatively that one is driven towards solutions with a) ``advanced'' age
and little extinction or b) constant or young star formation
plus extinction.
However, a) can be excluded as no \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission would be expected in this case.
Quantitatively, the best solutions obtained for each spectral template group
are shown in the left panel of Figure \ref{fig_sed_6a}.
Indeed, the solutions shown correspond to bursts of ages $\sim$ 50--130 Myr
(BCCWW+, S03 templates) and little or no extinction.
However, as just mentioned, solutions lacking young ($\la$ 10 Myr) massive stars can
be excluded since \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission is observed.
The best fit empirical SB+QSO template shown corresponds to the spectrum of
the H~{\sc ii}\ galaxy SBS 0335-052 with an additional extinction of $A_V=1.$
To reconcile the observed SED with \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi, a young population,
e.g.\ such as SBS 0335-052, or constant SF is required.
In any of these cases fitting the ``reddish'' SED requires
a non-negligible amount of reddening.
To illustrate the typical range of possible results we show in
Fig.\ \ref{fig_contour_6a} $\chi^2$ contour maps and corresponding confidence
intervals for solar metallicity models
(S03+ template group) and reddened with the Calzetti law.
The left panel (burst models) illustrates in particular the need
for progressively higher extinctions the younger the bursts.
All ages $\ga$ 10 Myr are excluded by the presence of the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission.
From the constant SF models (right panel) we see that for a given
age $A_V$ is typically $\sim$ 0.5--1.8 mag at the 68 \% confidence
level. For obvious reasons, no constraint can be set on the age
since the onset of (constant) SF.
Hence, from the photometry of HCM 6A and from the presence of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\
we are led to conclude that this object must suffer from
reddening with typical values of $A_V \sim 1.$ for a Calzetti
attenuation law.
A somewhat smaller extinction ($A_V \sim 0.4$) can be obtained if
the steeper SMC extinction law of Pr\'evot et al.\ (1984)
is adopted. From the present data it is not possible to distinguish
the different extinction/attenuation laws.
Also, it is not possible to draw any constraints on the metallicity of
HCM 6A from the available data (cf.\ Sect.\ \ref{s_uv}).
What if we are dealing with composite stellar populations?
Indeed it is conceivable that the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission originates from a
population of young stars, while the ``reddish'' restframe UV flux is due
to another, older population.
Assuming constant SF, no loss of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ and standard
SFR conversion factors, the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission implies a maximum
UV flux of the order of 0.1 $\mu$Jy (and approximately constant in $F_\nu$ over
the observed wavelength range: $\lambda_{\rm rest} \ga 3000$ \AA)
for an unreddened population.
The bulk of the observed flux could then be from an older population.
In this case the spectrum rising from the z$_{\rm 850LP}$\ band through the JH (and presumably
K) bands could even be due to an unreddened older population;
a strongly increasing flux and probably a significant ``Balmer'' break
is then expected, similar to the aged burst shown in the right panel of
Fig.\ \ref{fig_sed_6a}.
This explanation should in principle be testable with Spitzer observations
as discussed below.
How does our possible indication for a high extinction fit in with
other studies?
At redshift $z \la 4$ the extinction of Lyman break galaxies (LBGs)
has been estimated by various authors (e.g.\ Sawicki \& Yee 1998,
Meurer et al.\ 1999,
Adelberger \& Steidel 2000, Shapley et al.\ 2001, Ouchi et al.\ 2004).
Given their mean/median values (typically $<E(B-V)> \sim$ 0.15--0.2)
and the spread of the observed $E(B-V)$ distributions, our finding of a ``high'' extinction is not exceptional.
Furthermore, Armus et al.\ (1998) find indications for $A_V > 0.5$ mag
from their analysis of a $z=5.34$ galaxy.
Taken together, this suggests that
dust extinction is likely also present in starburst galaxies with redshifts above 6.
Large amounts of dust have already been observed in QSOs up to similar
redshift (Bertoldi et al.\ 2003, Walter et al.\ 2003).
\subsection{Properties of HCM6A: SFR, mass, \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission}
To estimate properties such as the mass, SFR, and the intrinsic
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission from HCM 6A we simply examine the predictions from the
best fit models,
scale them appropriately to the luminosity
distance\footnote{For the adopted cosmology and $z=6.56$ one
has $d_L=$ 64005.7 Mpc.},
and correct for the gravitational magnification (here $\mu=4.5$).
The derived quantities are summarised
in Table \ref{tab_props}.
First we consider single (non-composite) stellar populations.
From the best fit constant SF models (with variable ages)
we deduce an extinction corrected star formation rate of
the order of SFR(UV) $\sim$ 11 -- 41 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ for a Salpeter
IMF from 1 to 100 \ifmmode M_{\odot} \else M$_{\odot}$\fi. For a commonly adopted, although unjustified,
Salpeter IMF down to 0.1 \ifmmode M_{\odot} \else M$_{\odot}$\fi\ this would increase by a factor 2.55.
Actually this estimate is not very different from the one obtained
from standard SFR calibrations, provided the same assumptions on the
IMF are made. Indeed the observed restframe UV luminosity,
e.g.\ derived from the average J, H, and K$^\prime$ flux of
$F_{\rm UV} =(2.6 \pm 0.7) \times 10^{-30}$ erg s$^{-1}$ cm$^{-2}$\ Hz$^{-1}$,
is $L_{\rm rest UV} = 4 \pi d_L^2 F_{\rm UV} / (1+z) / \mu \approx 4\times 10^{28}$
erg s$^{-1}$ Hz$^{-1}$ (Hu et al.\ 2002), and translates to
$SFR_{\rm UV} = c \, L_{\rm rest UV} \, 10^{0.4 A_{\rm UV}}$, where
$c$ is the usual SFR conversion coefficient, and $A_{\rm UV}$ the UV extinction.
For the standard value $c=1.4 \times 10^{-28}$ from Kennicutt (1998), assuming
a Salpeter IMF down to 0.1 \ifmmode M_{\odot} \else M$_{\odot}$\fi, and
$A_{\rm UV} \sim$ 2.--3.,
one has
$SFR \sim$ 35--88 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi; for the IMF used in this work (Salpeter from 1-100 \ifmmode M_{\odot} \else M$_{\odot}$\fi)
this becomes $SFR \sim$ 14--34 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi.
This assumption and the absence of an extinction correction also explain
the difference with the $SFR$ estimate of Hu et al.\ (2002)
\footnote{Actually Hu et al.\ (2002) derive without further explanation
$SFR = 9$ \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ from $L_{\rm rest UV} = 4.\times 10^{28}$ erg s$^{-1}$ Hz$^{-1}$, whereas the
classical Kennicutt (1998) calibration would yield $SFR = 5.6$ \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\
without extinction correction.}.
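As an illustrative cross-check (not part of the original analysis), the SFR(UV) arithmetic above can be reproduced numerically; all input values are those quoted in the text:

```python
import math

MPC_TO_CM = 3.0857e24          # 1 Mpc in cm

# Quantities quoted in the text for Abell 370 HCM 6A
f_uv = 2.6e-30                 # observed restframe-UV flux [erg/s/cm^2/Hz]
z, mu = 6.56, 4.5              # redshift and gravitational magnification
d_l = 64005.7 * MPC_TO_CM      # luminosity distance at z = 6.56 [cm]
c_sfr = 1.4e-28                # Kennicutt (1998) conversion [Msun/yr per erg/s/Hz]

# Demagnified restframe-UV luminosity (~4e28 erg/s/Hz)
l_uv = 4 * math.pi * d_l**2 * f_uv / (1 + z) / mu

# Extinction-corrected SFR for A_UV = 2 and 3 mag (Salpeter down to 0.1 Msun)
sfr = [c_sfr * l_uv * 10**(0.4 * a_uv) for a_uv in (2.0, 3.0)]
```

With these inputs the sketch recovers a luminosity close to $4\times 10^{28}$ erg s$^{-1}$ Hz$^{-1}$ and SFRs in the $\sim$ 35--88 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ range quoted above (small differences arise from rounding the luminosity).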
For continuous SF over timescales $t_{\rm SF}$ longer than $\sim$ 10 Myr, the total
(bolometric) luminosity output is typically $\sim 10^{10}$ \ifmmode L_{\odot} \else L$_{\odot}$\fi\ per unit
SFR (in \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi) for a Salpeter IMF from 1-100 \ifmmode M_{\odot} \else M$_{\odot}$\fi, quite independently of metallicity.
The total luminosity associated with the observed SF is therefore
$L \sim (1-4) \times 10^{11} \ifmmode L_{\odot} \else L$_{\odot}$\fi$, close to or just above the limit to
possibly qualify as a luminous infrared galaxy ($L_{\rm IR} > 10^{11} \ifmmode L_{\odot} \else L$_{\odot}$\fi$;
cf.\ Sanders \& Mirabel 1996)
if a significant fraction of its bolometric flux emerges in the (restframe) IR.
For $t_{\rm SF} \sim$ 10 Myr the estimated stellar mass is
$M_\star \approx t_{\rm SF} \times SFR \sim (1-4) \times 10^8$ \ifmmode M_{\odot} \else M$_{\odot}$\fi.
From the data given by Hu et al.\ (2002), the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ flux is
$F(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi) = \mu \times 3.8 \times 10^{-18}$ erg s$^{-1}$ cm$^{-2}$, with the magnification
factor $\mu$.
The \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ luminosity per unit SFR from the same S03 models used above is
$L(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi)=(2.4-4.4) \times 10^{42}$ erg s$^{-1}$\ (\ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi)$^{-1}$ for metallicities
between solar and 1/50 \ifmmode Z_{\odot} \else Z$_{\odot}$\fi. The SFR deduced from \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ would then be
SFR$(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi) \sim$ 0.4--0.8 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ for HCM 6A.
Taking an extinction of $A_V=1.$ (for the Calzetti law) into account
implies a reddening corrected SFR$(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi) \sim$ 7--12 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi.
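The \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi-based SFR estimate follows the same route; a minimal numerical sketch using only values quoted in the text:

```python
import math

MPC_TO_CM = 3.0857e24
d_l = 64005.7 * MPC_TO_CM      # luminosity distance at z = 6.56 [cm]
f_lya = 3.8e-18                # demagnified Ly-alpha line flux [erg/s/cm^2]

# Line luminosity (no (1+z) factor for an integrated line flux)
l_lya = 4 * math.pi * d_l**2 * f_lya

# Ly-alpha luminosity per unit SFR from the S03 models, for metallicities
# between solar (4.4e42) and 1/50 solar (2.4e42) as quoted in the text
sfr_lya = [l_lya / l_per_sfr for l_per_sfr in (4.4e42, 2.4e42)]
```

This reproduces the uncorrected SFR$(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi) \sim$ 0.4--0.8 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ quoted above.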
The ratio SFR$(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi)$/SFR(UV) presumably reflects the incomplete
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission $t_{\rm \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi}$, which can be estimated in various ways.
The most consistent estimate is obtained from the comparison of the predicted
\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ luminosity of each best fit model (obtained from fitting the broad-band
SED) to the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ luminosity.
From the best fit models with $A_V \sim 1$ and the Calzetti law we obtain
$t_{\rm \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi} \sim$ 23--54 \%;
in the case of the fit with the Pr\'evot et al.\ extinction law we find
a higher transmission $t_{\rm \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi} \sim$ 90 \%.
For comparison Haiman (2002) assumed a \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission of 20 \% from
the data of Hu et al.
By definition this \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ ``transmission'' corresponds to the ratio
of the observed \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission over the expected/intrinsic emission from
the starburst. The physical causes for a partial (i.e.\ $<$ 100 \%) transmission are
of course open to various interpretations (e.g.\ physical processes
destroying \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ photons in the host galaxy, absorption in the intervening
IGM etc.).
In fact the relatively high \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission estimated here could be
somewhat surprising, given the Gunn-Peterson trough observations
in $z \ga 6$ quasars (cf.\ Becker et al.\ 2001, Fan et al.\ 2003)
and the possible presence of dust (this work).
Consider now the case of composite stellar populations.
In this case we retain as a rough estimate in Table \ref{tab_props} the
$SFR(\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi)$ (from the young population) as a lower limit,
and the mass and total luminosity are
derived from the best fit burst model of age $\sim$ 100 Myr assuming
that this ``older'' population dominates the observed continuum flux.
Formally we then have no handle on the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission,
except that it cannot be very low (say $\la$ 40 \% $\approx 0.1 / 0.26
= F_{\rm young}/<F_{\rm obs}>$)
since otherwise the associated UV flux from the young population $F_{\rm young}$
would dominate the observed continuum flux $<F_{\rm obs}>$.
\subsection{Spitzer Observatory predictions}
It is interesting to examine the SEDs predicted by the various
models at longer wavelengths, including the rest-frame optical
domain, which is potentially observable with the sensitive IRAC camera
onboard the Spitzer Observatory and other future missions.
In the right panel of Fig.\ \ref{fig_sed_6a} we plot
the 3 best fits to the observed data for the BCCWW+ and S03+
template groups (``burst'' solutions with no extinction) and the
SBS 0335-052 template (with additional $A_V=1$) showing
strong optical emission lines.
We see that these solutions have fluxes comparable to or above
the detection limit of IRAC/Spitzer
\footnote{The IRAC detection limits plotted here
correspond to the values given by the Spitzer Science Center on
{\tt http://ssc.spitzer.caltech.edu/irac/sens.html} as 1 $\sigma$ point-source sensitivity
for low and medium backgrounds for frame times of 200s and described by Fazio et al.\ (2004).
These values do not include ``confusion noise''.}.
On the other hand the strongly reddened constant SF or young burst solutions
do not exhibit a Balmer break and are hence expected to show fluxes
just below the IRAC sensitivity at 3.6 $\mu$m\ and significantly
lower at longer wavelengths.
As \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission is expected only for the reddened SEDs, the
latter solutions (low 3.6--4.5 $\mu$m\ flux) are predicted to apply to HCM 6A,
unless composite stellar populations are invoked.
Indeed, a high 3.6 and 4.5 $\mu$m\ flux could be a good indication
for a composite stellar population, as discussed above.
In addition it is important to secure higher accuracy photometry,
especially in the near-IR (JHK), to confirm the redward increasing
shape of the spectrum (in $F_\nu$), which drives one towards solutions
with non-negligible extinction.
\section{Conclusion}
\label{s_conclude}
Using SED fitting techniques considering a large
number of parameters (mainly a vast library of empirical and theoretical
template spectra, variable extinction and extinction laws)
we have attempted to constrain the properties of the stellar populations
and \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission of two strongly lensed galaxies with redshifts $z \ga$ 6
from their observed SED including various ground-based observations,
HST, and Spitzer observations.
The following main results
have been obtained for these objects
(see Sects.\ \ref{s_370} and \ref{s_kesr},
and the summary in Table \ref{tab_props}):
\begin{itemize}
\item {\bf Triple arc in Abell 2218} discovered by Kneib et al.\ (2004, KESR).
The most likely redshift of this source is $z \sim$ 6.0--7.2
taking into account both our photometric determination and lensing
considerations.
SED fits indicate generally a low extinction ($E(B-V) \la 0.05$)
but do not strongly constrain the SF history.
Best fits have typical ages of $\sim$ 3 to 400 Myr. A reasonable
maximum age of (250--650) Myr (1 $\sigma$ interval) can be estimated.
However, the apparent 4000 \AA\ break
observed recently from the combination of IRAC/Spitzer and HST observations
can equally well be reproduced with the template of a young
($\sim$ 3--5 Myr) burst where strong restframe optical emission lines
enhance the 3.6 and 4.5 $\mu$m\ fluxes.
The estimated SFR is typically $\sim$ 1 \ifmmode M_{\odot} {\rm yr}^{-1} \else M$_{\odot}$ yr$^{-1}$\fi\ for a Salpeter IMF
from 1-100 \ifmmode M_{\odot} \else M$_{\odot}$\fi, in agreement with previous estimates.
Given the poor constraint on age and SF history,
we conclude that intrinsic \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission
may or may not be present in this galaxy.
The apparent non-detection of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ by KESR can therefore
even be understood without invoking \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ destruction.
\item {\bf Abell 370 HCM 6A} discovered by Hu et al.\ (2002).
The relatively red SED and the presence of \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission indicate
basically two possible solutions:
1) a young burst or ongoing constant SF with non-negligible extinction
($A_V \sim$ 0.5--1.8 at a 1 $\sigma$ level)
or 2) a composite young + ``old'' stellar population.
For the first case,
best fits are obtained for constant SF with $E(B-V) \sim 0.25$.
In consequence, previous SFR estimates for this source likely need to be revised
upward.
If correct, the bolometric luminosity
of this galaxy is estimated to be $L \sim (1-4) \times 10^{11} \ifmmode L_{\odot} \else L$_{\odot}$\fi$,
comparable to the luminosity of infrared luminous galaxies.
Furthermore a \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmission of $\sim$ 23--90 \% is estimated from our best fit models.
Alternatively the observed 0.9-2.2 $\mu$m\ SED could also be fit
without extinction by a composite ``young'' and ``old'' stellar population,
where the former would be responsible for the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ emission and a fraction
of the restframe UV flux.
The SFR, stellar mass, and total luminosity are then lower than in case 1.
The two scenarios may be distinguishable with IRAC/Spitzer observations at
3.6 and 4.5 $\mu$m.
Given the limited observed spectral range, the present data do not allow
us to draw any firm constraints on the maximum
age of the stellar population.
\end{itemize}
In general it should also be noted that, given important degeneracies
(cf.\ Sect.\ \ref{s_uv}), broad-band SED fits or measurements
of the UV slope do not allow one to determine the metallicity of
an individual star forming galaxy.
The estimates of the \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ transmissions presented here can
in principle be used to constrain the intervening IGM properties,
and therefore probe the reionisation of the Universe.
Although the results obtained here from this exploratory study
of just two lensed galaxies, the highest known redshift galaxies
with photometric detections in at least 3--4 filters, cannot provide
a general view on the SF and IGM properties at $z \ga 6$,
there is good hope that the sample of such objects will
considerably increase in the near future with the availability
of large ground-based telescopes and sensitive space-borne
observatories such as Spitzer, and even more so with the planned
JWST.
\section*{Acknowledgements}
We thank an anonymous referee for critical comments which helped
to improve the paper.
We thank Eiichi Egami and Johan Richard for comments on the HST and
Spitzer photometry of the arc in Abell 2218 KESR, Jean-Paul Kneib for
information on Keck spectroscopy of this object, and Yuri Izotov for
communicating spectra of metal-poor galaxies.
Part of this work was supported by the Swiss National Science Foundation
and the CNRS.
\section{Introduction}
Over the past decade, reconfiguration problems have drawn a lot of attention of researchers in algorithms and combinatorics \cite{B14,BKW14,DDFHI15,FHOU15,HD05,IDHPSUU11,KMM12,MNRSS13,W14}.
In this framework, one asks the following question: Given two solutions $I$, $J$ of a fixed optimization problem, can $I$ be transformed into $J$ by a sequence of small steps that maintain feasibility for all intermediate solutions?
Such problems are practically motivated by the fact that it may be impossible to adopt a new production strategy instantaneously if it differs too much from the strategy that is currently in use; changes have to be made in small steps, but production has to keep running throughout. From a theoretical perspective, the study of reconfiguration problems provides deep insights into the structure of the solution space.
One of the well-studied examples is when the solution space consists of all the independent sets of a graph (optionally all having a prescribed size).
In this case, three types of reconfiguration rules have been considered. These are naturally explained using \emph{tokens} on vertices of the graph. In \emph{Token Addition Removal} (TAR)~\cite{IDHPSUU11, MNRSS13}, there is a token on every vertex of the initial independent set, and there is a buffer of tokens, initially empty. A step consists of removing a token from a vertex and placing it in the buffer, or placing a buffer token onto a vertex of the graph. The set of vertices with tokens must form an independent set at all times, and the goal is to move the tokens from the initial to the target independent set while ensuring the buffer size never exceeds a given threshold. In \emph{Token Sliding} (TS)~\cite{KMM12, HD05}, a step consists of replacing one vertex~$v$ in the independent set by a neighbor of~$v$ (the token slides along an edge). In \emph{Token Jumping} (TJ)~\cite{KMM12} a step also consists of replacing a single vertex, but the newly added vertex need not have any neighboring relation with the replaced vertex (the token jumps). Token jumping reconfiguration is equivalent to TAR reconfiguration with a buffer of size one.
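To make the TAR rule concrete (an illustrative aside, not part of the paper's formal development), TAR reachability on a small graph can be decided by exhaustive search over token configurations; the buffer load of a configuration $I$ is $|I_{\mathrm{start}}|-|I|$ and must never exceed the buffer size $k$. The helper below is hypothetical and written only for this example.

```python
def is_independent(vertices, edges):
    """True iff no edge has both endpoints in `vertices`."""
    return not any(frozenset(e) <= set(vertices) for e in edges)

def tar_reachable(n, edges, start, target, k):
    """Can `start` be TAR-reconfigured into `target` with buffer size k?"""
    start, target = frozenset(start), frozenset(target)
    s = len(start)  # total number of tokens; buffer load of I is s - |I|
    seen, stack = {start}, [start]
    while stack:
        cur = stack.pop()
        if cur == target:
            return True
        # Remove a token (buffer load after the move must stay <= k) ...
        moves = [cur - {v} for v in cur if s - len(cur) + 1 <= k]
        # ... or place a buffered token on a vertex, keeping independence.
        moves += [cur | {v} for v in range(n)
                  if v not in cur and len(cur) < s
                  and is_independent(cur | {v}, edges)]
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Six-cycle: swapping its two maximum independent sets needs buffer size 2,
# so token jumping (TAR with buffer size one) is not enough.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(tar_reachable(6, edges, {0, 2, 4}, {1, 3, 5}, 1))  # False
print(tar_reachable(6, edges, {0, 2, 4}, {1, 3, 5}, 2))  # True
```

The six-cycle example previews the even-cycle discussion below: with buffer size two one can clear tokens from one side before refilling the other.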
These models have been analyzed in detail in the recent literature on algorithms~\cite{B14,BKW14,DDFHI15,FHOU15,GH10,LokshtanovMPRS15}, complexity theory~\cite{HD05,IDHPSUU11,KMM12,MNRSS13}, combinatorics~\cite{CDP06,DP06}, and even statistical physics~\cite{JLNSW12,KR15,NZB16}. It is known that the reconfiguration problem under all the above three rules is PSPACE-complete for general graphs, perfect graphs, and planar graphs \cite{HD05,KMM12,IDHPSUU11}. The TJ and TAR reconfiguration problems are PSPACE-complete even for bounded bandwidth graphs~\cite{W14}. Further analyses on the complexity can be found in~\cite{B14,BKW14,DDFHI15,FHOU15,LokshtanovMPRS15}. The constrained token moving problems are related to pebbling games that have been studied in the literature, with applications to robot motion planning~\cite{ABHS15,CDP06, DP06, GH10}.
As mentioned, the goal in reconfiguring independent sets is to go from one given independent set $I$ to another one $J$ by a sequence of small steps. In the TS and TJ models, a step involves moving only a single token. This is ideal, but unfortunately reconfiguration is often impossible in the TS or TJ model. Reconfiguration in the TAR model is always possible if one makes the buffer size sufficiently large. However, having a large buffer size is undesirable. We are interested in determining the minimum buffer size that is sufficient to ensure any independent set in a given graph~$G$ can be reconfigured to any target independent set of the same size. We call this minimum the TAR \emph{reconfiguration threshold} (precise definitions in Section~\ref{sec:prelim}). Our aim is to bound the threshold in terms of properties of the graph, and to identify the structures contained in hereditary graph classes that cause the thresholds to be large. We also generalize the TJ model to \emph{Multiple Token Jumping} (MTJ), where in each step a prescribed number of tokens may be moved simultaneously. In the MTJ model, the question becomes: What is the minimum number of simultaneously jumping tokens needed to ensure any reconfiguration is possible? This quantity is called the MTJ \emph{reconfiguration threshold}.
\begin{figure}
\begin{subfigure}{4.6cm}
\begin{center}
\includegraphics[scale=0.35]{Pumpkin}
\end{center}
\caption{A pumpkin of size 18.}
\label{fig:pumpkin}
\end{subfigure}
\begin{subfigure}{10cm}
\begin{center}
\includegraphics[scale=0.35]{CompleteBinaryTree}
\end{center}
\caption{A graph of treewidth two with a complete binary tree~$T$ of depth two as a bipartite topological double minor.}
\label{fig:binary}
\end{subfigure}
\caption{The bipartite structures responsible for large MTJ and TAR reconfiguration thresholds, respectively. A \emph{pumpkin} consists of odd-length vertex-disjoint paths between two vertices. The special form of topological \emph{minor} represents each vertex of the tree~$T$ by an edge or even cycle in~$G$, and each edge of~$T$ by two odd-length paths connecting vertices in opposite partite sets in~$G$.}
\end{figure}
\subparagraph*{Our contribution.} We provide upper and lower bounds on the MTJ and TAR reconfiguration thresholds in terms of several graph parameters. In general, our bounds apply to the reconfiguration thresholds of hereditary \emph{graph classes}. The threshold of a graph class is the supremum of the threshold values of the graphs in that class: it is the smallest value~$k$ such that for any graph in the class, any source independent set~$I$ in that graph can be reconfigured into any target independent set~$J$ using steps of size~$k$ (for MTJ) or a buffer of size~$k$ (for TAR).
The MTJ reconfiguration threshold of graphs that are structurally very simple may nevertheless be very large. For example, an even cycle with~$2n$ vertices can be partitioned into two independent sets~$I$ and~$J$ of size~$n$ each. Any MTJ reconfiguration of~$I$ into~$J$ requires a jump of~$n$ vertices, and this is trivially sufficient. Since a cycle has a feedback vertex set (FVS, see Section~\ref{sec:prelim}) of size one, the MTJ threshold cannot be bounded in terms of the size of a minimum feedback vertex set. However, we prove that the threshold is upper-bounded by the size of a minimum vertex cover of~$G$. Although this bound is tight in the worst case, there are many graph classes with a small MTJ threshold even though they require a large vertex cover. Trees, for example, have MTJ threshold at most one. We therefore introduce the notion of \emph{pumpkin}, which consists of two nodes connected by at least two vertex-disjoint paths of odd length (Figure~\ref{fig:pumpkin}). The \emph{size} of a pumpkin is the total number of vertices in the structure. We characterize the MTJ reconfiguration threshold of a hereditary graph class~$\Pi$ in terms of the size of the largest pumpkin it contains: the MTJ reconfiguration threshold is upper- and lower-bounded in terms of the largest pumpkin contained in a bipartite graph in~$\Pi$.
TAR reconfiguration is more versatile than MTJ reconfiguration. In the concrete example of a $2n$-cycle discussed above, its MTJ threshold is~$n$ while any pair of independent sets can be reconfigured in the TAR model using a buffer of size two. Moreover, we show that any graph that has a feedback vertex set of size $k$ has TAR reconfiguration threshold at most $k+1$, and reconfiguring one side of the complete bipartite graph~$K_{n,n}$ to the other side shows that this is tight. Our main result concerning TAR reconfiguration states that the TAR reconfiguration threshold of any graph is upper-bounded by its pathwidth. Somewhat surprisingly, there are graphs of constant treewidth (treewidth 2 suffices) for which the TAR reconfiguration threshold is arbitrarily large. We also introduce the concept of \emph{bipartite topological double minor} (BTD-minor), see Figure~\ref{fig:binary}, and show using an isoperimetric inequality that any hereditary graph class containing a graph having a complete binary tree of depth $d$ as a BTD-minor, has TAR reconfiguration threshold $\Omega(d)$. We conjecture that the TAR reconfiguration threshold can also be upper-bounded in terms of the depth of the largest complete binary tree BTD-minor, but we have not been able to prove this (see Section~\ref{sec:conclusion}).
We require the restriction to hereditary graph classes in some of our statements to be able to develop meaningful lower bounds on reconfiguration thresholds, as explained next. Let~$G$ be the disjoint union of~$K_{n,n}$ and a graph~$H$, and let~$I$ and~$J$ be the two partite sets of~$K_{n,n}$. One can verify that~$I$ can be reconfigured to~$J$ by jumps of size at most one if and only if~$H$ has an independent set of size~$n-1$. Similarly,~$I$ can be TAR reconfigured to~$J$ using a buffer of size~$k$ if and only if~$H$ has an independent set of size~$n-k$. Since the size of a maximum independent set is NP-hard to determine, there are no good characterizations of this quantity. When developing lower bounds on the threshold of a hereditary graph class~$\Pi$, this issue disappears since the reconfiguration threshold of any class containing the graph~$G$ above is at least as high as the threshold of~$H$ (which must be contained in~$\Pi$ if~$G$ is), which is~$n$. The restriction to hereditary graph classes therefore enables us to focus our attention to reconfiguration problems where all vertices in the graph are contained in either the source or target independent set, thereby avoiding the obstacle that the reconfiguration threshold matches the size of a maximum independent set.
\subparagraph*{Applications.}
The MTJ and TAR reconfiguration thresholds play an important role in statistical physics and wireless communication networks. To understand the importance of the TAR reconfiguration threshold, consider the following process:
In a graph $G$, nodes are trying to become \emph{active} (transmit information) at some rate, independently of each other in a distributed manner.
When a potential activation occurs at a node, it can only become active if none of its neighboring nodes are active at that moment (as otherwise the transmissions would interfere). An active node deactivates at some rate independent of the other processes. At any point in time, the set of active nodes in this process forms an independent set of the graph. In statistical physics, this process is known as Glauber dynamics with \emph{hard-core interaction}. This activity process on graphs has many applications in different fields of study.
Loosely speaking, when the activation rate is large, in the long run the above process always tries to stay in a maximum independent set.
For the graphs with more than one maximum independent set, it is interesting to study the time this process takes to reach a target independent set, starting from some specific independent set. This time has been shown to depend crucially upon what we call the TAR reconfiguration threshold of the underlying graph \cite{NZB16}.
In particular, the mixing time of the Glauber dynamics on a graph increases exponentially with its TAR reconfiguration threshold, and hence the Glauber dynamics on the graph is fast mixing if and only if the TAR reconfiguration threshold is small.
The MTJ reconfiguration threshold of a graph~$G$ can be interpreted in the following way. Consider the auxiliary graph, whose vertices correspond to size-$s$ independent sets in $G$ for some fixed~$s$, with an edge between vertices representing sets $I$, $J$ if $|I \setminus J| \leq k$. Then the MTJ reconfiguration threshold is at most~$k$ if and only if this auxiliary graph is connected for all~$s$. The MTJ reconfiguration threshold therefore has applications in the parallel Glauber dynamics (PGD) \cite{JLNSW12, KR15}, where the MTJ reconfiguration threshold provides the jump size required to make the underlying Markov process ergodic.
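For small graphs, this auxiliary-graph characterization translates directly into a brute-force computation of the MTJ threshold (a sketch for illustration; `mtj_threshold` is a hypothetical helper, not from the paper):

```python
from itertools import combinations

def mtj_threshold(n, edges):
    """Smallest k such that, for every size s, the auxiliary graph on the
    size-s independent sets (I ~ J iff |I \\ J| <= k) is connected."""
    def independent_sets(s):
        return [frozenset(c) for c in combinations(range(n), s)
                if not any(frozenset(e) <= set(c) for e in edges)]

    def connected(nodes, k):
        if not nodes:
            return True
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            cur = stack.pop()
            for other in nodes:
                if other not in seen and len(cur - other) <= k:
                    seen.add(other)
                    stack.append(other)
        return len(seen) == len(nodes)

    for k in range(1, n + 1):
        if all(connected(independent_sets(s), k) for s in range(1, n + 1)):
            return k
    return n

# Even cycle C_6 (2n vertices with n = 3): the two maximum independent
# sets force a simultaneous jump of n = 3 tokens, as discussed above.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(mtj_threshold(6, edges))  # 3
```

For the even cycle this recovers the threshold~$n$ from the introduction; the search is exponential and serves only to illustrate the definition.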
\subparagraph*{Organization.} The succeeding sections are organized as follows. In Section~\ref{sec:prelim} we provide graph-theoretic preliminaries. In Section~\ref{sec:reconfig} we provide a formal description of the two types of reconfiguration. In Section~\ref{sec:mtj} we analyze MTJ reconfiguration. Section~\ref{sec:tar} deals with TAR reconfiguration.
\section{Preliminaries}\label{sec:prelim}
In this section we give the most important graph-theoretic definitions. Notions not defined here can be found in one of the textbooks~\cite{ParAlgo15,D2000}.
A graph is a pair $G=(V,E)$, where $V$ is the set of vertices, and $E$ is the set of edges. We also use~$V(G)$ and~$E(G)$ to refer to the vertex and edge set of~$G$, when convenient. All graphs we consider are finite, simple, and undirected. For~$U \subseteq V$ we denote by~$G-U$ the graph obtained from~$G$ by removing the vertices in~$U$ and their incident edges.
A set $U\subseteq V$ is called an \emph{independent set} of $G$ if $\{u,v\}\notin E$ for any two distinct $u,v\in U$.
The \emph{symmetric difference} of two sets~$U$ and~$U'$ is~$U \Delta U' := (U\setminus U')\cup (U'\setminus U)$. A set $U\subseteq V$ is a \emph{vertex cover} of $G$ if every edge in $E$ is incident with a vertex in $U$.
The minimum cardinality of a vertex cover of $G$ is denoted by $\mathrm{\textsc{vc}}(G)$.
A set~$U \subseteq V$ is a \emph{feedback vertex set} if~$G-U$ is acyclic (a \emph{forest}).
The minimum cardinality of a feedback vertex set of~$G$ is denoted~$\mathrm{\textsc{fvs}}(G)$. For a vertex $v$, denote by $N_G(v)$ the set of its neighbors (excluding $v$ itself).
The \emph{open} and \emph{closed} \emph{neighborhood} of a set~$U \subseteq V$ are~$N_G(U) := \bigcup _{s \in U} N_G(s) \setminus U$ and $N_G[U] := \bigcup _{s \in U} N_G(s) \cup U$, respectively. We omit the subscript when it is clear from the context. A graph $G'=(V',E')$ is said to be a \emph{subgraph} of $G$, if $V'\subseteq V$, and $E'\subseteq E$.
It is an \emph{induced subgraph} of $G$ if $V'\subseteq V$ and for any $u,v\in V'$ we have $\{u,v\} \in E$ if and only if $\{u,v\}\in E'$. The subgraph of~$G$ induced by~$U \subseteq V$ is denoted~$G[U]$. A \emph{graph class} is a (possibly infinite) collection of graphs.
A graph class $\Pi$ is said to be \emph{hereditary} if given any graph $G\in\Pi$, any induced subgraph of $G$ belongs to the class $\Pi$ as well.
A graph is \emph{bipartite} if its vertex set can be partitioned into two independent sets~$I$ and~$J$, which are also called the \emph{partite sets}. We sometimes denote such a bipartite graph by~$G = (I \cup J, E)$. A bipartite graph is \emph{balanced} if~$|I| = |J|$.
A \emph{matching} is a set of edges that do not share any endpoints. A matching \emph{covers} a vertex~$v$ if it contains an edge incident on~$v$. A matching is \emph{perfect} if it covers all vertices. We will utilize the following well-known consequence of K\H{o}nig's theorem.
\begin{fact}[{\cite[Corollary 16.7]{Schrijver03}}] \label{fact:konig}
Let~$G = (I \cup J, E)$ be a bipartite graph. Then~$G$ has a matching covering~$I$ if and only if~$|N(S)| \geq |S|$ for each~$S \subseteq I$.
\end{fact}
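As a computational sanity check on Fact~\ref{fact:konig}, the sketch below (an illustrative Python fragment of our own, adequate only for small graphs) compares Hall's condition against a standard augmenting-path matching computation; the two predicates agree on every bipartite input, as the fact asserts.

```python
from itertools import chain, combinations

def has_matching_covering(I, adj):
    """True iff the bipartite graph has a matching covering all of I.
    adj maps each vertex of I to its set of neighbours on the other side.
    Standard augmenting-path algorithm."""
    match = {}                                # right vertex -> matched vertex of I
    def augment(u, seen):
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                if w not in match or augment(match[w], seen):
                    match[w] = u
                    return True
        return False
    return all(augment(u, set()) for u in I)

def halls_condition(I, adj):
    """True iff |N(S)| >= |S| for every nonempty S subseteq I."""
    subsets = chain.from_iterable(combinations(I, r) for r in range(1, len(I) + 1))
    return all(len(set().union(*(adj[u] for u in S))) >= len(S) for S in subsets)
```

For example, two left vertices sharing a single neighbour fail both predicates, while giving one of them a second private neighbour makes both succeed.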
A vertex~$v$ is a \emph{cutvertex} in graph~$G$ if the removal of~$v$ increases the number of connected components. A graph is \emph{biconnected} if it does not contain a cutvertex. Under this definition, the graph~$K_2$ is biconnected. A \emph{biconnected component} of~$G$ is a maximal biconnected subgraph of~$G$.
\begin{definition}[{\cite[\S 7.2]{ParAlgo15}}]
A \emph{path decomposition} of a graph $G = (V,E)$ is a sequence $\mathcal{P} = (X_1, X_2,\ldots,X_r)$ of subsets of $V$ called \emph{bags}, satisfying the following conditions:
\begin{enumerate}[{\normalfont (P1)}]
\item $\bigcup_{i=1}^r X_i=V$. In other words, every vertex of $G$ is in at least one bag.
\item For every $\{u,v\}\in E$, there exists $l\in\{1,2,\ldots,r\}$ such that the bag $X_l$ contains both $u$ and $v$.
\item For every $u\in V$, if $u\in X_i\cap X_k$ for some $i\leq k$, then $u\in X_j$ also for each $j$ such that $i\leq j\leq k$. In other words, the indices of the bags containing $u$ form an interval in~$\{1,2,\ldots, r\}$.
\end{enumerate}
\end{definition}
The \emph{width} of a path decomposition $(X_1,\ldots, X_r)$ is $\max_{1\leq i\leq r}|X_i|-1$.
The \emph{pathwidth} of $G$, denoted by $\mathrm{\textsc{pw}}(G)$, is the minimum possible width of a path decomposition of $G$.
A path decomposition $(X_1,X_2,\ldots,X_r)$ of a graph $G$ is \emph{nice} if the following holds:
\begin{enumerate}[{\normalfont (i)}]
\item $X_1=X_r=\emptyset$, and
\item for every $i\in\{1,2,\ldots, r-1\}$, there is either a vertex $v\notin X_i$ such that $X_{i+1}=X_i\cup\{v\}$, or there is a vertex $w\in X_i$ such that $X_{i+1}=X_i\setminus\{w\}$.
\end{enumerate}
It is well-known (cf.~\cite[Lemma 7.2]{ParAlgo15}) that every graph admits a nice path decomposition of width~$\mathrm{\textsc{pw}}(G)$.
For any path decomposition $\mathcal{P}=(X_1,X_2,\ldots,X_r)$ of $G = (V,E)$, and any vertex $v\in V$, define $l_{\mathcal{P}}(v)=\min\{i:v\in X_i\}$ and $r_{\mathcal{P}}(v)=\max\{i:v\in X_i\}$, i.e.~$l_{\mathcal{P}}(v)$ and $r_{\mathcal{P}}(v)$ respectively denote the index of the first and last bag containing $v$. Note that if $\mathcal{P}$ is nice, then $l_{\mathcal{P}}(\cdot)$ and $r_{\mathcal{P}}(\cdot)$ are injective maps over the set of vertices.
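The conditions (P1)--(P3) and niceness are easy to verify mechanically. The sketch below (an illustrative Python fragment with helper names of our own choosing, not part of the paper) checks them for a width-$1$ nice path decomposition of the path on three vertices, and confirms that the maps $l_{\mathcal{P}}$ and $r_{\mathcal{P}}$ are injective for this nice decomposition.

```python
def is_path_decomposition(bags, vertices, edges):
    """Check conditions (P1)-(P3) for a sequence of bags."""
    if not all(any(v in X for X in bags) for v in vertices):             # (P1)
        return False
    if not all(any(u in X and v in X for X in bags) for u, v in edges):  # (P2)
        return False
    for v in vertices:                                                   # (P3)
        idx = [i for i, X in enumerate(bags) if v in X]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

def is_nice(bags):
    """First and last bag empty; consecutive bags differ in exactly one vertex."""
    if bags[0] or bags[-1]:
        return False
    return all(len(a ^ b) == 1 for a, b in zip(bags, bags[1:]))

def first_last(bags):
    """The maps l_P and r_P: first and last bag index containing each vertex."""
    verts = set().union(*bags)
    l = {v: min(i for i, X in enumerate(bags) if v in X) for v in verts}
    r = {v: max(i for i, X in enumerate(bags) if v in X) for v in verts}
    return l, r

# A nice width-1 path decomposition of the path 0-1-2.
nice = [set(), {0}, {0, 1}, {1}, {1, 2}, {2}, set()]
```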
\section{Definitions and Basic Facts for Reconfiguration} \label{sec:reconfig}
In this section we formally define the two notions of reconfiguration and establish some basic facts.
\subparagraph{Multiple Token Jump (MTJ).}
Given any two independent sets $I$ and $J$, with $|I|=|J|$, we say that $I$ can be \emph{$k$-MTJ reconfigured} to $J$, if there exists a finite sequence of independent sets $(I = W_0, W_1,W_2,\ldots,W_n, W_{n+1}=J)$ for some $n\geq 0$, such that the set~$W_i$ is independent in~$G$ with $|W_i|=|I|=|J|$ for all $i \in \{0, \ldots, n+1\}$, and $|W_{i+1}\setminus W_i|\leq k$ for all $i \in \{0, \ldots, n\}$.
A step $W_i\to W_{i+1}$ in the reconfiguration process with~$|W_{i}\setminus W_{i+1}|=k$ is called a $k$-TJ move.
Given a graph~$G = (V,E)$, define $\ensuremath{\mathrm{\textsc{mtj}}}(G,s)$ as the minimum value of $k$, such that any two independent sets of size $s$ in~$G$ can be $k$-MTJ reconfigured to each other. Now define $\ensuremath{\mathrm{\textsc{mtj}}}(G):=\max_{1\leq s\leq |V|} \ensuremath{\mathrm{\textsc{mtj}}}(G,s)$.
Our goal is to characterize the value of $\ensuremath{\mathrm{\textsc{mtj}}}(G)$ in terms of certain parameters of the graph $G$. We call $\ensuremath{\mathrm{\textsc{mtj}}}(G)$ the \emph{MTJ reconfiguration threshold} of the graph $G$. The MTJ reconfiguration threshold of a graph class $\Pi$ is defined as $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi):=\sup_{G\in\Pi} \ensuremath{\mathrm{\textsc{mtj}}}(G)$.
\subparagraph{Token Addition Removal (TAR).}
Given any two independent sets $I$ and $J$, with $|I|=|J|$, we say that $I$ can be \emph{$k$-TAR reconfigured} to $J$, if there exists a finite sequence of independent sets $(I = W_0, W_1,W_2,\ldots,W_n, W_{n+1}=J)$ for some $n\geq 0$, such that~$W_i$ is independent in~$G$ with $|I|-|W_i|\leq k$ for all $i \in \{0, \ldots, n+1\}$, and $|W_{i-1}\Delta W_{i}|\leq 1$ for all $i \in \{1, \ldots, n+1\}$.
We refer to the quantity $B_i:=|I|-|W_i|$ as the \emph{buffer size} at step $i$: the tokens that were on the initial independent set, and are not on the current independent set~$W_i$, are placed in the buffer.
Define $\ensuremath{\mathrm{\textsc{tar}}}(G,s)$ to be the smallest buffer size~$k$ such that any two independent sets of size $s$ can be $k$-TAR reconfigured to each other. Define $\ensuremath{\mathrm{\textsc{tar}}}(G):=\max_{1\leq s\leq |V|} \ensuremath{\mathrm{\textsc{tar}}}(G,s)$. As before, we call $\ensuremath{\mathrm{\textsc{tar}}}(G)$ the \emph{TAR reconfiguration threshold} of the graph $G$, and extend the same terminology to graph classes~$\Pi$ by defining $\ensuremath{\mathrm{\textsc{tar}}}(\Pi):=\sup_{G\in\Pi} \ensuremath{\mathrm{\textsc{tar}}}(G)$.
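For small graphs, $k$-TAR reconfigurability can be decided by an explicit search over independent sets. The sketch below (an illustrative brute-force Python fragment of our own, not part of the formal development) performs this search with single-vertex additions and removals, never letting the buffer $|I|-|W|$ exceed~$k$.

```python
def k_tar_reconfigurable(adj, I, J, k):
    """True iff independent set I can be k-TAR reconfigured to J in the graph
    given as {v: set_of_neighbours}: single-vertex additions and removals
    through independent sets W whose buffer |I| - |W| never exceeds k."""
    s = len(I)
    start, goal = frozenset(I), frozenset(J)
    seen, stack = {start}, [start]
    while stack:
        W = stack.pop()
        if W == goal:
            return True
        # additions of a non-adjacent vertex, then removals of a vertex
        moves = [W | {v} for v in adj if v not in W and not (adj[v] & W)]
        moves += [W - {v} for v in W]
        for X in moves:
            if s - len(X) <= k and X not in seen:
                seen.add(X)
                stack.append(X)
    return False

# The 4-cycle: moving between its two maximum independent sets requires
# emptying the set entirely, i.e. a buffer of size 2.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```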
\subparagraph{Facts on Reconfiguration.}
Observe that for any graph~$G$, it holds that $\ensuremath{\mathrm{\textsc{mtj}}}(G) = 1$ if and only if $\ensuremath{\mathrm{\textsc{tar}}}(G)= 1$. In general, the TAR reconfiguration threshold is at most the MTJ reconfiguration threshold. To see this, observe that each $k$-TJ move can be simulated by a sequence of $2k$ TAR steps with maximum buffer size $k$: first, sequentially remove the $k$ vertices that are jumping away, placing their tokens in the buffer; then sequentially place the buffer tokens on the $k$ new vertices of the independent set.
\begin{proposition} \label{prop:balancedbip}
Let~$G$ be a graph with independent sets~$I$ and~$J$ of equal size. If~$I \setminus J$ can be $k$-TAR reconfigured (resp.\,$k$-MTJ reconfigured) to $J \setminus I$ in the graph~$G[I \Delta J]$, then~$I$ can be $k$-TAR reconfigured (resp.\,$k$-MTJ reconfigured) to~$J$ in~$G$.
\end{proposition}
\begin{proof}
Consider a sequence of independent sets~$(I \setminus J = W_0, \ldots, W_{n+1} = J \setminus I)$ in $G[I\Delta J]$ that reconfigures~$I \setminus J$ to~$J \setminus I$. Since~$I$ and~$J$ are independent in~$G$, no vertex of~$I \Delta J$ is adjacent to a vertex of~$I \cap J$. Hence~$W'_i := W_i \cup (I \cap J)$ is an independent set in~$G$ for all~$i$, and the sequence~$(W'_0, \ldots, W'_{n+1})$ reconfigures~$(I \setminus J) \cup (I \cap J) = I$ to~$(J \setminus I) \cup (I \cap J) = J$ in~$G$. The step size and buffer size of this sequence in~$G$ are not greater than the corresponding values for the sequence in~$G[I \Delta J]$, which completes the proof.
\end{proof}
Proposition~\ref{prop:balancedbip} shows that to upper-bound the TAR or MTJ reconfiguration threshold, it suffices to do so in balanced bipartite graphs where the source and target configurations are disjoint; note that~$G[I \Delta J]$ is balanced bipartite and~$I \setminus J$ and~$J \setminus I$ are disjoint. We will frequently exploit this in our proofs. For any graph class $\Pi$, let $\Pi_\mathrm{bip}$ denote the set of bipartite graphs in $\Pi$. The following proposition shows that the reconfiguration threshold of a hereditary graph class is determined by the behavior of the bipartite graphs in the class. Note that for hereditary classes~$\Pi$, the class~$\Pi_\mathrm{bip}$ is also hereditary.
\begin{proposition} \label{prop:bip}
For any hereditary graph class $\Pi$, we have $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi)=\ensuremath{\mathrm{\textsc{mtj}}}(\Pi_\mathrm{bip})$ and
$\ensuremath{\mathrm{\textsc{tar}}}(\Pi)=\ensuremath{\mathrm{\textsc{tar}}}(\Pi_\mathrm{bip})$.
\end{proposition}
\begin{proof}
The definitions of the thresholds imply that $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi)\geq \ensuremath{\mathrm{\textsc{mtj}}}(\Pi_\mathrm{bip})$ and $\ensuremath{\mathrm{\textsc{tar}}}(\Pi)\geq \ensuremath{\mathrm{\textsc{tar}}}(\Pi_\mathrm{bip})$, since~$\Pi \supseteq \Pi_\mathrm{bip}$. For the reverse direction, assume that the reconfiguration threshold of~$\Pi_\mathrm{bip}$ (in one of the models) is at most~$k$ and consider any graph~$G \in \Pi$ with independent sets~$I$ and~$J$ of equal size. By Proposition~\ref{prop:balancedbip} the cost of reconfiguring~$I$ to~$J$ is bounded by the cost of reconfiguring~$I \setminus J$ to~$J \setminus I$ in~$G[I \Delta J]$. Since~$G[I \Delta J]$ is bipartite and~$\Pi$ is hereditary, we have~$G[I \Delta J] \in \Pi_\mathrm{bip}$, and hence the cost of reconfiguring in~$G[I \Delta J]$ is at most~$k$. So reconfiguring~$I$ to~$J$ can be done with cost at most~$k$ in this model.
\end{proof}
\section{Threshold for Multiple Token Jump Reconfiguration} \label{sec:mtj}
We start our discussion of token jump reconfiguration by recalling the following known result.
\begin{theorem}[{\cite[Theorem~7]{KMM12}}] \label{thm:tree}
Let the graph $G=(V,E)$ be a forest. Then $\ensuremath{\mathrm{\textsc{mtj}}}(G) \leq 1$.
\end{theorem}
The intuition behind this result is that since a forest does not contain any cycle, one can start reconfiguring from the leaf nodes or the isolated vertices, each of which has at most one neighbor from the target configuration. For arbitrary graphs, the above procedure does not work since there may not be any leaves or isolated vertices. But if a graph~$G$ has a small vertex cover, then its MTJ reconfiguration threshold is again small.
\begin{theorem} \label{thm:max-match}
Let $G=(V,E)$ be a graph. Then $\ensuremath{\mathrm{\textsc{mtj}}}(G)\leq \max(\mathrm{\textsc{vc}}(G),1)$.
\end{theorem}
\begin{proof}
We prove the theorem using induction on the number~$n$ of vertices in $G$. For~$n=1$ the claim is trivially true, so consider a graph~$G = (V,E)$ with source and target independent sets~$I$ and~$J$ of equal size and~$|V| > 1$. Our induction hypothesis is that any graph~$G'$ with less than~$|V|$ vertices has MTJ reconfiguration threshold upper-bounded by~$\max(\mathrm{\textsc{vc}}(G'),1)$.
By Proposition~\ref{prop:balancedbip} it is enough to show that in the graph $G[I\Delta J]$ induced by $I\Delta J$, starting from $I'=I\setminus J$, one can construct a sequence of MTJ moves to reach the configuration $J'=J\setminus I$ with step-size at most $\max(\mathrm{\textsc{vc}}(G),1)$. Let~$V' := I \Delta J$. Note that~$\mathrm{\textsc{vc}}(G[I \Delta J]) \leq \mathrm{\textsc{vc}}(G)$, and let $S\subseteq V'$ be a vertex cover of $G[I \Delta J]$ of cardinality at most $\mathrm{\textsc{vc}}(G)$. Since~$S$ is a vertex cover of~$G[I \Delta J]$, there is no edge between any two vertices of $V' \setminus S$. Assume without loss of generality that $|I'\cap S|\geq |J'\cap S|$ (otherwise swap the role of $I'$ and $J'$, which does not affect reconfigurability). We distinguish three cases.
\textbf{Case 1.} If the vertex cover is empty ($S = \emptyset$), then~$G[I \Delta J]$ has no edges. Consequently, all vertex subsets in the graph are independent, and we can reconfigure~$I'$ to~$J'$ by jumping one token at a time. By Proposition~\ref{prop:balancedbip}, this implies~$I$ can be reconfigured to~$J$ in~$G$ using jumps of size~$1$.
\textbf{Case 2.} Suppose that~$|I'| = |J'| \leq \mathrm{\textsc{vc}}(G)$. Then we can jump the tokens from~$I'$ onto~$J'$ in a single step of size at most~$\mathrm{\textsc{vc}}(G)$, and complete the argument using Proposition~\ref{prop:balancedbip}.
\textbf{Case 3.} If the previous cases do not apply, we claim that~$s := |I' \cap S| > 0$. Indeed, if~$s = 0$, then since~$|I' \cap S| \geq |J' \cap S|$ we would have~$I' \cap S = J' \cap S = \emptyset$, implying that~$S = \emptyset$ and that the first case applies. Moreover, we have~$|J'\setminus S|\geq s$, otherwise
\begin{equation}
|J'|=|J'\cap S|+|J'\setminus S|\leq (\mathrm{\textsc{vc}}(G)-s)+(s-1)=\mathrm{\textsc{vc}}(G)-1,
\end{equation}
and we are in the previous case. Now let~$Z$ be an arbitrary set of~$s$ vertices from~$J' \setminus S$.
Choose all the vertices in $I'\cap S$ and jump their tokens to~$Z$, i.e., remove the vertices $I'\cap S$ from the independent set~$I'$ and replace them by~$Z$ to obtain~$I''$. The set~$I''$ is independent because the fact that~$S$ is a vertex cover implies that the only neighbors of~$Z \subseteq J' \setminus S$ belong to the set~$S$, while~$I''$ contains no vertex from~$S$. Since~$|I' \cap S| \leq |S| \leq \mathrm{\textsc{vc}}(G)$, the step size of this move is at most~$\mathrm{\textsc{vc}}(G)$.
Consider the graph~$G'$, which is obtained from~$G[I \Delta J]$ by removing~$Z$ and~$I' \cap S$; it is again a balanced bipartite graph. Note that since the only neighbors of~$Z$ belong to~$I'$ (since~$G[I \Delta J]$ is bipartite and~$Z \subseteq J'$), and belong to~$S$ (since~$S$ is a vertex cover and~$Z \cap S = \emptyset$), it follows that~$G'$ contains no vertex that is a neighbor of~$Z$ in~$G[I \Delta J]$. Consequently, the union of~$Z$ with any independent set in~$G'$ is independent in~$G[I \Delta J]$. Since~$G'$ is smaller than~$G$, by induction one can reconfigure~$I' \setminus S$ to~$J' \setminus Z$ in the graph~$G'$ with steps of size at most~$\max(\mathrm{\textsc{vc}}(G'),1) \leq \mathrm{\textsc{vc}}(G)$, using that~$\mathrm{\textsc{vc}}(G') \leq \mathrm{\textsc{vc}}(G)$ and~$\mathrm{\textsc{vc}}(G) \geq 1$ in this case. Adding~$Z$ to each set in the corresponding reconfiguration sequence produces a sequence that reconfigures~$(I' \setminus S) \cup Z$ to~$J'$ in~$G[I \Delta J]$. By inserting the step from~$I'$ to~$(I' \setminus S) \cup Z$ at the front of this sequence, we obtain a reconfiguration from~$I'$ to~$J'$ with steps of size at most~$\mathrm{\textsc{vc}}(G)$ in~$G[I \Delta J]$. By Proposition~\ref{prop:balancedbip} this implies that~$I$ can be reconfigured to~$J$ with steps of size at most~$\mathrm{\textsc{vc}}(G)$ in~$G$, completing the proof for this case.
\end{proof}
An even cycle of length~$2n$ has MTJ reconfiguration threshold~$n$. Since its vertex cover number is~$n$, Theorem~\ref{thm:max-match} is best possible. Long cycles are not the only graphs whose MTJ reconfiguration threshold equals half the size of the vertex set. Bistable graphs (defined below), of which the pumpkin structure defined in the introduction is a special case, also have this property.
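The claim about even cycles can be checked exhaustively for small cases. The following illustrative Python sketch (brute force, names of our own choosing, feasible only for tiny graphs) computes both the vertex cover number and the MTJ reconfiguration threshold of the $6$-cycle and confirms that they coincide at~$n = 3$.

```python
from itertools import combinations

def cycle(m):
    """The cycle on vertices 0, ..., m-1."""
    return {i: {(i - 1) % m, (i + 1) % m} for i in range(m)}

def is_independent(adj, S):
    return all(u not in adj[v] for u, v in combinations(S, 2))

def vc(adj):
    """Minimum vertex cover size, by brute force."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for r in range(len(adj) + 1):
        if any(all(e & set(C) for e in edges) for C in combinations(adj, r)):
            return r

def connected_for_size(adj, s, k):
    """Connectivity of the auxiliary graph on size-s independent sets, jump size k."""
    sets_ = [frozenset(c) for c in combinations(adj, s) if is_independent(adj, c)]
    if len(sets_) <= 1:
        return True
    seen, stack = {sets_[0]}, [sets_[0]]
    while stack:
        A = stack.pop()
        for B in sets_:
            if B not in seen and len(A - B) <= k:
                seen.add(B)
                stack.append(B)
    return len(seen) == len(sets_)

def mtj(adj):
    """MTJ reconfiguration threshold, by brute force over jump sizes k."""
    for k in range(1, len(adj) + 1):
        if all(connected_for_size(adj, s, k) for s in range(1, len(adj) + 1)):
            return k
```

On the $6$-cycle the only size-$3$ independent sets are the two partite sets, so any reconfiguration between them needs a single jump of all three tokens.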
\subsection{MTJ Reconfiguration Threshold in Terms of Bistable Rank} \label{sect:mtj:bistable}
In this section we introduce the notion of \emph{bistable graph}, derive several properties of bistable graphs, and use these to bound the MTJ reconfiguration threshold in terms of the size of the largest induced bistable subgraph. The resulting bounds on the MTJ reconfiguration threshold are tight, but can be hard to apply to specific graph classes: it may be difficult to estimate the size of the largest induced bistable graph, or even to determine whether a given graph is bistable or not. In Section~\ref{sect:mtj:pumpkin} we will therefore relate the size of the largest induced bistable subgraph to the size of the largest pumpkin subgraph. This will result in upper- and lower bounds on the MTJ reconfiguration threshold in terms of the largest pumpkin structure contained in the graph (class), which is arguably a more insightful parameter. The resulting bound will not be best-possible, however.
\begin{definition}[Bistable graphs] \label{def:bistable}
A graph is called \emph{bistable} if it is connected, bipartite, and has exactly two distinct maximum independent sets formed by the two partite sets in its unique bipartition. The \emph{rank} of a bistable graph is defined as the size of its maximum independent sets.
Let~$\ensuremath{\mathrm{\textsc{bi}}}(G)$ denote the rank of the largest induced bistable subgraph of~$G$. If~$G$ contains no induced bistable subgraphs (which can only occur if~$G$ has no edges), then we define~$\ensuremath{\mathrm{\textsc{bi}}}(G)$ to be one. For a graph class $\Pi$ we define $\ensuremath{\mathrm{\textsc{bi}}}(\Pi):=\sup_{G\in\Pi}\ensuremath{\mathrm{\textsc{bi}}}(G)$.
\end{definition}
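Whether a small graph is bistable can be tested directly from Definition~\ref{def:bistable}. The sketch below (an illustrative, exponential-time Python fragment with names of our own choosing) checks connectivity and bipartiteness by two-colouring and then compares the maximum independent sets against the partite sets.

```python
from itertools import combinations

def max_independent_sets(adj):
    """All maximum independent sets, by brute force from the largest size down."""
    for r in range(len(adj), 0, -1):
        sets_ = [set(c) for c in combinations(adj, r)
                 if all(u not in adj[v] for u, v in combinations(c, 2))]
        if sets_:
            return sets_
    return [set()]

def bipartition(adj):
    """Two-colour by graph search; returns the partite sets, or None if the
    graph is disconnected or not bipartite."""
    start = next(iter(adj))
    colour, stack = {start: 0}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in colour:
                colour[w] = 1 - colour[u]
                stack.append(w)
            elif colour[w] == colour[u]:
                return None                  # odd cycle: not bipartite
    if len(colour) != len(adj):
        return None                          # disconnected
    return ({v for v in colour if colour[v] == 0},
            {v for v in colour if colour[v] == 1})

def is_bistable(adj):
    """Connected, bipartite, and the partite sets are the only maximum independent sets."""
    parts = bipartition(adj)
    if parts is None:
        return False
    mis = max_independent_sets(adj)
    return len(mis) == 2 and parts[0] in mis and parts[1] in mis
```

For instance, a single edge and the $4$-cycle are bistable, while the path on three vertices is not, since its maximum independent set is unique.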
The pumpkin shown in Figure~\ref{fig:pumpkin} forms an example of a bistable graph. Lemma~\ref{lemma:bistable} connects bistable graphs to independent set reconfiguration. Consider the task of reconfiguring the $J$-partite set to the $I$-partite set in a balanced bipartite graph~$G = (I \cup J, E)$. If we have a set~$S \subseteq I$ such that~$|S| \geq |N(S)|$, then one way to make progress in the reconfiguration is to select~$|S|$ vertices from~$N(S) \subseteq J$ and jump their tokens onto the vertices in~$S$, resulting in a new independent set of the same size. The following lemma shows that when we consider a set~$S$ that is \emph{minimal} with respect to being at least as large as its neighborhood, then the induced subgraph~$G[N[S]]$ is bistable. Hence the cost of such a jump of~$|S|$ vertices is bounded by~$\ensuremath{\mathrm{\textsc{bi}}}(G)$, which will allow us to bound the MTJ reconfiguration threshold.
\begin{lemma} \label{lemma:bistable}
Let~$G = (I \cup J, E)$ be a balanced bipartite graph without isolated vertices and let~$S \subseteq I$ be inclusion-wise minimal with the properties that~$|S| \geq |N(S)|$ and~$S$ is not empty. Then~$G[N[S]]$ is bistable.
\end{lemma}
\begin{proof}
We have to show that the graph~$G'$ induced by the vertices from~$S$ and their neighborhood satisfies all conditions for being bistable. Since~$G$ is bipartite,~$G'$ is as well. Before proving the remaining properties, we establish the following claim.
\begin{claim} \label{claim:bistable:matching}
The graph~$G'$ has a matching covering~$S$.
\end{claim}
\begin{claimproof}
Assume for a contradiction that no such matching exists. By Fact~\ref{fact:konig}, there is a set~$S' \subseteq S$ with~$|N_G(S')| = |N_{G'}(S')| < |S'|$. As~$G$ has no isolated vertices, we have~$|S'| > 1$. Removing an arbitrary vertex from~$S'$ to obtain~$S''$ then decreases the size of the set by at most one without increasing the neighborhood size. Hence~$|N_G(S'')| \leq |S''|$, a contradiction to the minimality of~$S$.
\end{claimproof}
We now show that~$G'$ has all properties of a bistable graph.
\subparagraph{Connectivity.} Assume for a contradiction that~$G'$ is not connected. If~$G'$ has a connected component with vertex set~$C$ that contains at least as many $I$-vertices as $J$-vertices, then~$S' := C \cap I$ is a strict subset of~$S$ with~$|S'| \geq |C \cap J| \geq |N_{G'}(S')| = |N_G(S')|$, contradicting minimality of~$S$. Otherwise, all connected components of~$G'$ have strictly more~$J$-vertices than~$I$-vertices. Since the $J$-vertices in~$G'$ form the neighborhood of~$S$, this implies that~$|N_G(S)| > |S|$, a contradiction to the choice of~$S$. Hence~$G'$ is connected.
\subparagraph{Balance.} Since~$G'$ is connected, it has a unique bipartition and it is easy to verify that~$S$ is one of the partite sets: $G' = (S \cup J', E')$. Since there is a matching covering~$S$ (Claim~\ref{claim:bistable:matching}) and all matching partners of vertices in~$S$ are distinct and belong to~$J'$, we therefore have~$|J'| \geq |S|$. We have~$|J'| = |N_G(S)| \leq |S|$ by assumption on~$S$, establishing that~$|J'| = |S|$ which proves~$G'$ is balanced. This implies that a matching in~$G'$ that saturates~$S$ (which exists by Claim~\ref{claim:bistable:matching}) is in fact a perfect matching in~$G'$.
\subparagraph{Two maximum independent sets.} Assume for a contradiction that~$G'$ has at least three maximum independent sets. Then there is a maximum independent set in~$G'$ that is not equal to either of the two partite sets~$J'$ or~$S$; let~$X$ be such a maximum independent set. Since~$G'$ is bipartite and has a perfect matching~$M$, the set~$X$ contains exactly one vertex from each matching edge in~$M$. Now let~$\hat{S} := X \cap I$. Since~$X \neq J'$ by assumption, it follows that~$\hat{S}$ is not empty; since~$X \neq S$, it is a proper subset of~$S$. We show that~$|N_G(\hat{S})| \leq |\hat{S}|$, contradicting minimality of~$S$.
Let~$M' \subseteq M$ denote the matching edges intersected by~$\hat{S}$. Since~$X$ contains one vertex from each edge of~$M$, for all edges in~$M \setminus M'$ the $J$-endpoint of the edge belongs to the independent set~$X$. So the $J$-endpoint of an edge in~$M \setminus M'$ is not in the neighborhood of~$\hat{S}$, as~$X$ is independent. Consequently, only the matching partners of~$\hat{S}$ can be in the neighborhood of~$\hat{S}$, implying there are at most~$|\hat{S}|$ such neighbors. Hence~$|N_G(\hat{S})| \leq |\hat{S}|$; a contradiction. It follows that~$G'$ has at most two maximum independent sets. To see that it has exactly two, it suffices to observe that since~$G'$ has a perfect matching, both its partite sets are maximum independent sets.
This establishes that~$G'$ satisfies all conditions for being bistable and concludes the proof of Lemma~\ref{lemma:bistable}.
\end{proof}
In the lemma below we establish two key properties of bistable graphs. They will later be used to relate the quantities~$\ensuremath{\mathrm{\textsc{pum}}}(G)$ and~$\ensuremath{\mathrm{\textsc{bi}}}(G)$.
\begin{lemma}\label{lem:bistable-property}
Let $G = (I\cup J,E)$ be a bistable graph. Then the following statements hold:
\begin{enumerate}
\item $G$ has a perfect matching covering $I$ (and hence $J$). \label{prop:bistable:pm}
\item $G$ is biconnected. \label{prop:bistable:biconnected}
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{prop:bistable:pm}) By K\H{o}nig's theorem (cf.~\cite[Thm. 16.2]{Schrijver03}), the size of a maximum matching in the bipartite graph~$G$ equals the size of a minimum vertex cover in~$G$. By Definition~\ref{def:bistable}, the partite sets~$I$ and~$J$ are maximum independent sets and therefore have equal size. Since the complement of a maximum independent set is a minimum vertex cover, it follows that~$V(G) \setminus I = J$ is a minimum vertex cover. Hence there is a matching of size~$|J| = |I|$ in~$G$, which is a perfect matching since it covers~$2|J| = |V(G)|$ vertices.
(\ref{prop:bistable:biconnected}) Assume for a contradiction that $G$ is not biconnected. Let $v$ be a cutvertex and let~$M$ be a perfect matching in~$G$, which exists by the previous property. Assume that~$v \in I$; the argument for~$v \in J$ is symmetric. Let~$u$ be the matching partner of~$v$ under~$M$. Since~$v$ is a cutvertex, the graph~$G - \{v\}$ consists of multiple connected components~$C_1, \ldots, C_\ell$. Without loss of generality, assume that~$u$ is contained in component~$C_1$. For all components~$C_i$ with~$i \geq 2$, the component contains the same number of~$I$ and~$J$-vertices: for each vertex its matching partner in the opposite partite set belongs to the same component. For component~$C_1$ the number of~$I$-vertices is one smaller than the number of $J$-vertices, since the matching partner of~$u$ does not belong to~$C_1$. Consider the set~$S$ consisting of the $J$-vertices from~$C_1$ along with the~$I$-vertices of all other components~$C_2, \ldots, C_\ell$ of~$G - \{v\}$. The set~$S$ is independent in~$G- \{v\}$ since it consists of entire partite sets of different components of the bipartite graph. Since~$v \not \in S$ it follows that~$S$ is also independent in~$G$. As~$S$ contains exactly one endpoint from each edge in~$M$ (the $I$-endpoint for matching edges intersecting a component~$C_i$ for~$i \geq 2$, and the $J$-endpoint for the remaining matching edges) it follows that~$S$ is a maximum independent set in~$G$ that differs from~$I$ and~$J$; a contradiction to Definition~\ref{def:bistable}.
\end{proof}
\begin{theorem} \label{thm:bistable}
For any graph~$G$ it holds that~$\ensuremath{\mathrm{\textsc{mtj}}}(G) \leq \ensuremath{\mathrm{\textsc{bi}}}(G)$. Moreover, if~$G \neq K_1$, then there exists an induced subgraph~$G'$ of~$G$ with~$\ensuremath{\mathrm{\textsc{mtj}}}(G') \geq \ensuremath{\mathrm{\textsc{bi}}}(G) \geq \ensuremath{\mathrm{\textsc{bi}}}(G')$.
\end{theorem}
\begin{proof}
We first prove the lower bound on~$\ensuremath{\mathrm{\textsc{mtj}}}(G)$.
Note that if~$G \neq K_1$ contains no induced bistable subgraphs, then $G$ is a collection of isolated vertices, and in that case $\ensuremath{\mathrm{\textsc{mtj}}}(G)=\ensuremath{\mathrm{\textsc{bi}}}(G)=1$.
Assume then that~$G$ contains a nonempty induced bistable subgraph
$G' = (I' \cup J', E')$ of rank~$\ensuremath{\mathrm{\textsc{bi}}}(G)$.
By Definition~\ref{def:bistable}, the sets~$I'$ and~$J'$ are the only independent sets of size~$\ensuremath{\mathrm{\textsc{bi}}}(G)$ in~$G'$. It follows that in any MTJ reconfiguration sequence from~$I'$ to~$J'$, the set~$I'$ is immediately followed by~$J'$ which requires a jump of~$|I'| = |J'| = \ensuremath{\mathrm{\textsc{bi}}}(G)$ tokens simultaneously. Hence~$\ensuremath{\mathrm{\textsc{mtj}}}(G') \geq \ensuremath{\mathrm{\textsc{bi}}}(G)$.
We prove the upper bound on~$\ensuremath{\mathrm{\textsc{mtj}}}(G)$ by induction on the size of the graph. If~$G$ consists of a single vertex, then there is a unique nonempty independent set, so~$\ensuremath{\mathrm{\textsc{mtj}}}(G) = 0$.
In the remainder, assume~$G$ has more than one vertex and let~$I$ and~$J$ be two independent sets in~$G$ of equal size.
By Proposition~\ref{prop:balancedbip} it suffices to prove that~$I' := I \setminus J$ can be MTJ-reconfigured to~$J \setminus I$ in the graph~$G' := G[I \Delta J]$ with jumps of size at most~$\ensuremath{\mathrm{\textsc{bi}}}(G)$.
Assume first that~$G'$ has no isolated vertices, and let~$S \subseteq I \setminus J$ be an inclusion-wise minimal nonempty subset of~$I \setminus J$ with the property that~$|S| \geq |N_{G'}(S)|$. Such a set exists since~$G'$ is a balanced bipartite graph with partite sets~$I \setminus J$ and~$J \setminus I$, so the set~$I \setminus J$ satisfies the stated condition (but may not yet be minimal).
It is easy to verify that since~$G'$ has no isolated vertices and~$S$ is minimal, we have~$|N_{G'}(S)| = |S|$. Now move all tokens from~$N_{G'}(S)$ onto~$S$ in a single jump of size~$|S|$. By Lemma~\ref{lemma:bistable}, the graph~$G'[N_{G'}[S]] = G[N_{G'}[S]]$ is a bistable induced subgraph of~$G$ of rank~$|S|$, and therefore~$\ensuremath{\mathrm{\textsc{bi}}}(G) \geq |S|$ which shows that the size of the jump is sufficiently small.
We can then invoke induction, as in the proof of Theorem~\ref{thm:max-match}, to complete the argument. If~$G'$ has an isolated vertex, then one can instead jump a token onto this isolated vertex and apply induction. This concludes the proof of Theorem~\ref{thm:bistable}.
\end{proof}
The following corollary characterizes the MTJ reconfiguration threshold of hereditary graph classes. It follows directly from Theorem~\ref{thm:bistable}. It applies to all graph classes except the one consisting only of the single graph~$K_1$ with a single vertex, for which the reconfiguration threshold is zero but~$\ensuremath{\mathrm{\textsc{bi}}}(K_1) = 1$ by definition.
\begin{corollary}
For any hereditary graph class~$\Pi \neq \{K_1\}$ it holds that~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) = \ensuremath{\mathrm{\textsc{bi}}}(\Pi)$.
\end{corollary}
\begin{proof}
By Theorem~\ref{thm:bistable} we have~$\ensuremath{\mathrm{\textsc{mtj}}}(G) \leq \ensuremath{\mathrm{\textsc{bi}}}(G) \leq \ensuremath{\mathrm{\textsc{bi}}}(\Pi)$ for all graphs~$G \in \Pi$, hence $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) \leq \ensuremath{\mathrm{\textsc{bi}}}(\Pi)$. To prove the converse, consider an arbitrary graph~$G \in \Pi$ having at least one edge. Then~$G$ contains an induced bistable subgraph~$H = (I \cup J, E)$ of rank~$\ensuremath{\mathrm{\textsc{bi}}}(G)$, and since~$\Pi$ is hereditary we have~$H \in \Pi$. Reconfiguring~$I$ to~$J$ in~$H$ requires a jump of size~$|I| = |J| = \ensuremath{\mathrm{\textsc{bi}}}(G)$ since those are the only two independent sets of that size. Hence~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) \geq \ensuremath{\mathrm{\textsc{bi}}}(G)$ for all~$G \in \Pi$ with at least one edge, showing that~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) \geq \ensuremath{\mathrm{\textsc{bi}}}(\Pi)$ if~$\Pi$ contains at least one bistable graph. In the exceptional setting that~$\Pi$ contains no bistable graph, all graphs in~$\Pi$ are edgeless causing~$\ensuremath{\mathrm{\textsc{bi}}}(\Pi)$ to be one. Since~$\Pi \neq \{K_1\}$, the graph consisting of two isolated vertices is contained in~$\Pi$, which has reconfiguration threshold one. Hence the lower bound also holds in this case.
\end{proof}
\subsection{MTJ Reconfiguration Threshold in Terms of Pumpkin Size} \label{sect:mtj:pumpkin}
In this section we formally introduce the pumpkin structure described in the introduction. We relate pumpkins to bistable graphs to obtain bounds on the MTJ reconfiguration threshold in terms of the size of the largest pumpkin subgraph.
\begin{definition}[{Pumpkin}]
A \emph{pumpkin} is a graph consisting of two terminal vertices~$u$ and~$v$ linked by two or more vertex-disjoint paths with an odd number of edges, having no edges or vertices other than those on the paths. A path can consist of the single edge~$\{u,v\}$. The \emph{size} of the pumpkin is the total number of vertices.
For a graph $G$ we denote by $\ensuremath{\mathrm{\textsc{pum}}}(G)$ the size of the largest (not necessarily induced) subgraph isomorphic to a pumpkin that is contained in $G$, or zero if~$G$ contains no pumpkin. For a graph class $\Pi$ we define $\ensuremath{\mathrm{\textsc{pum}}}(\Pi):=\sup_{G\in\Pi}\ensuremath{\mathrm{\textsc{pum}}}(G)$.
\end{definition}
An example of a pumpkin structure is shown in Figure~\ref{fig:pumpkin}. Observe that a pumpkin is a bipartite graph, since all cycles consist of two $uv$-paths of odd length and are therefore even. Furthermore, a pumpkin is a \emph{balanced} bipartite graph: vertices~$u$ and~$v$ belong to different partite sets since their distance is odd, and on every (odd-length) $uv$-path in the structure there is an even number of interior vertices, which alternate between the two partite sets. It is not difficult to verify that the two partite sets are the only maximum independent sets in a pumpkin, leading to the following observation.
\begin{observation} \label{obs:pumpkin:bistable}
Every pumpkin graph is bistable.
\end{observation}
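Observation~\ref{obs:pumpkin:bistable} is easy to spot-check by brute force on a small instance. The following Python sketch (illustrative only, outside the formal development; the vertex labelling is an arbitrary choice) builds a pumpkin whose terminals are joined by three paths with two interior vertices each, and verifies that its two partite sets are its only maximum independent sets:

```python
from itertools import combinations

# Small pumpkin: terminals 0 and 1 joined by three internally
# vertex-disjoint paths 0-a-b-1, each with two interior vertices.
edges, n = set(), 2
for _ in range(3):
    a, b = n, n + 1
    n += 2
    edges |= {(0, a), (a, b), (b, 1)}

def max_independent_sets(n, edges):
    """Return (size, list of maximum independent sets) by brute force."""
    for r in range(n, -1, -1):
        sets = [set(c) for c in combinations(range(n), r)
                if not any(u in c and v in c for u, v in edges)]
        if sets:
            return r, sets

size, maxsets = max_independent_sets(n, edges)
print(size, len(maxsets))        # 4 2: exactly two maximum independent sets
print(maxsets[0] & maxsets[1])   # set(): and they are disjoint
```

Being exponential, the brute force is only feasible for very small pumpkins; the observation itself of course covers all sizes.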
The next theorem shows that the rank of the largest bistable induced subgraph of~$G$ can be upper-bounded in terms of the size of~$G$'s largest pumpkin subgraph.
\begin{theorem}\label{thm:bistable-pumpkin}
For any bistable graph $G$, $\ensuremath{\mathrm{\textsc{bi}}}(G)\leq f(\ensuremath{\mathrm{\textsc{pum}}}(G))$, where $f(k)=(k^3+k^2)^{k^2+1}+1.$
\end{theorem}
\begin{proof}
Consider a bistable graph~$G = (I \cup J, E)$. If~$G$ is acyclic, then any biconnected subgraph of~$G$ contains at most two vertices. There is a unique bistable graph with at most two vertices, which consists of a single edge and has rank one. Since any bistable graph is biconnected by Lemma~\ref{lem:bistable-property}, we have $\ensuremath{\mathrm{\textsc{bi}}}(G) \leq 1 = f(0) \leq f(\ensuremath{\mathrm{\textsc{pum}}}(G))$ if~$G$ is acyclic. In the remainder we assume that~$G$ contains a cycle, which implies that~$\ensuremath{\mathrm{\textsc{pum}}}(G) \geq 1$ since any cycle in the bipartite graph~$G$ is even and forms a pumpkin.
For ease of notation, define~$L := \ensuremath{\mathrm{\textsc{pum}}}(G)$. Construct a depth-first search (DFS) tree~$T$ of $G$, starting at an arbitrary vertex~$r$ which becomes the root of the tree. The DFS process guarantees the following property: if~$u$ and~$v$ are adjacent in~$G$, then~$u$ is an ancestor of~$v$ in~$T$, or~$v$ is an ancestor of~$u$. For~$v \in T$, we use~$T_v$ to denote the subtree of~$T$ rooted at~$v$; we will often use~$T_v$ to refer to its vertex set as well.
\begin{claim} \label{claim:depth}
The depth of $T$ is at most $L^2$.
\end{claim}
\begin{claimproof}
Assume for a contradiction that there is a path from the root~$r$ of~$T$ to a leaf~$\ell$, consisting of more than~$L^2$ edges. By Lemma~\ref{lem:bistable-property}, graph~$G$ is biconnected. The existence of a path of more than~$L^2$ edges in a biconnected graph~$G$ is known~\cite[Theorem 1]{D52} to imply that~$G$ contains a simple cycle of length more than~$L$. Since~$G$ is bipartite the cycle is even and forms a pumpkin: it splits into two odd paths. So~$\ensuremath{\mathrm{\textsc{pum}}}(G) > L$, a contradiction.
\end{claimproof}
\begin{figure}
\begin{subfigure}{5cm}
\begin{center}
\includegraphics[scale=0.35]{DFS}
\caption{DFS tree of a biconnected bipartite graph.}
\label{fig:DFS}
\end{center}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}{8cm}
\begin{center}
\includegraphics[scale=0.35]{Treewidth}
\caption{Grid-like balanced bipartite graph with large treewidth and small TAR reconfiguration threshold.}
\label{fig:treewidth}
\end{center}
\end{subfigure}
\caption{(\ref{fig:DFS}) Depth-first search tree of a bipartite biconnected graph. Tree-edges are drawn solid, while the remaining edges of~$G$ are drawn with dotted lines. The three children~$u_1, u_2, u_3$ of~$v$ induce subtrees of types A, B, and~C, respectively. (\ref{fig:treewidth}) Template for constructing graphs of large treewidth that can be TAR reconfigured with a buffer of size two. The treewidth is large due to the presence of a large grid minor.}
\label{fig:DFS:treewidth}
\end{figure}
If each vertex in~$T$ has at most~$(L^3 + L^2)$ children, then the bound on the depth given by the previous claim implies that~$T$ (and therefore~$G$) has at most~$\sum _{i=0}^{L^2} (L^3 + L^2)^i \leq (L^3 + L^2)^{L^2+1} + 1$ vertices and therefore:
\begin{equation} \label{eq:pumpkinsize}
\ensuremath{\mathrm{\textsc{bi}}}(G) \leq |V(G)| \leq (L^3 + L^2)^{L^2+1} + 1 \leq f(L) = f(\ensuremath{\mathrm{\textsc{pum}}}(G)).
\end{equation}
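The counting step behind this inequality can be sanity-checked numerically. The following Python sketch (purely illustrative; the tested ranges for~$a$ and~$d$ are arbitrary small values) confirms that a rooted tree of depth~$d$ in which every vertex has at most~$a$ children has at most $\sum_{i=0}^{d} a^i \leq a^{d+1}+1$ vertices whenever $a \geq 2$:

```python
# Illustrative check of the geometric-sum bound: a rooted tree of depth d
# in which every vertex has at most a children has at most
# sum_{i=0}^{d} a^i vertices, and this sum is at most a^(d+1) + 1.
for a in range(2, 12):       # a plays the role of L^3 + L^2, so a >= 2 when L >= 1
    for d in range(0, 8):    # d plays the role of the depth bound L^2
        assert sum(a**i for i in range(d + 1)) <= a**(d + 1) + 1
print("bound holds on all tested (a, d) pairs")
```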
To complete the proof, it therefore suffices to show that no vertex of~$T$ has more than~$L^3 + L^2$ children. Assume for a contradiction that some vertex~$v$ exists with a larger number of children~$u_1, \ldots, u_m$, for some~$m > L^3 + L^2$. By switching the roles of~$I$ and~$J$ if needed, we may assume that~$v \in I$. For a vertex~$w$, let~$M_w$ denote the set of its proper ancestors in~$T$. We classify the children $u_i$ for~$i \in [m]$ into three types:
\begin{enumerate}[{\normalfont Type A:}]
\item Some vertex of $T_{u_i}$ has an edge in $G$ to a vertex in $M_{u_i} \cap J$.
\item Vertex~$u_i$ is not of type~A and $|I \cap T_{u_i}|\neq |J \cap T_{u_i}|$.
\item Vertex~$u_i$ is not of type~A and $|I \cap T_{u_i}| = |J\cap T_{u_i}|$.
\end{enumerate}
Observe that any child of~$v$ belongs to exactly one type.
\begin{claim}
There are fewer than $L^3$ type-A children of~$v$.
\end{claim}
\begin{claimproof}
Suppose there are at least~$L^3$ type-A children of~$v$, and assume these are numbered~$u_1, \ldots, u_{L^3}$. By the definition of the types, each subtree~$T_{u_i}$ for~$i \in [L^3]$ contains a vertex that has an edge in~$G$ to a proper ancestor of~$u_i$ lying in~$J$. Since~$v \in I$, this ancestor cannot be~$v$ itself, so it is in fact a proper ancestor of~$v$ and belongs to~$M_v$.
Since~$T$ has depth at most~$L^2$ by Claim~\ref{claim:depth}, by the pigeonhole principle there is a vertex $w\in M_v \cap J$ such that $L$ subtrees among $T_{u_1},\ldots, T_{u_{L^3}}$ contain a vertex that is adjacent to~$w$ in~$G$. From each such subtree~$T_{u_i}$, we obtain a path in~$G$ from~$v$ to~$w$ whose internal vertices belong to~$T_{u_i}$, by going from~$v$ to~$u_i$, then to a neighbor of~$w$ in the subtree~$T_{u_i}$ using the tree edges, and ending with the edge to~$w$. Applying this procedure to each of the~$L$ subtrees that connect to~$w$ yields at least $L$ internally vertex-disjoint paths from $v$ to $w$. Since $G$ is bipartite and $v$ and $w$ belong to different partite sets, each path connecting $v$ and $w$ is of odd length. Hence this collection of~$L$ internally vertex-disjoint paths between~$v$ and~$w$ forms a pumpkin of size more than~$L$: each of the~$L$ paths has at least one internal vertex, and together with~$v$ and~$w$ this gives size at least~$L+2$. This contradicts the choice of~$L = \ensuremath{\mathrm{\textsc{pum}}}(G)$.
\end{claimproof}
\begin{claim}
There are at most $\ensuremath{\mathrm{\textsc{depth}}}(v)+1$ type-B children of~$v$.
\end{claim}
\begin{claimproof}
Since $G$ is bistable, it has a perfect matching~$M$ by Lemma~\ref{lem:bistable-property}. By the properties of a DFS tree, for each vertex in~$T$ its neighbors in~$G$ are among its ancestors and descendants in~$T$. For a type-B child~$u_i$, the numbers of $I$-vertices and $J$-vertices in the subtree~$T_{u_i}$ differ. Since each vertex in~$T_{u_i}$ is assigned a unique neighbor in the other partite set by the perfect matching~$M$, it follows that the matching partner of some vertex in~$T_{u_i}$ does not belong to~$T_{u_i}$, and must therefore be a proper ancestor of~$u_i$ by the properties of DFS trees. There are at most~$\ensuremath{\mathrm{\textsc{depth}}}(v) + 1$ candidates for such partners, namely~$v$ and its proper ancestors, and each type-B child uses a different one of them as a matching partner for one of its vertices. Hence the number of type-B subtrees is at most~$\ensuremath{\mathrm{\textsc{depth}}}(v) + 1$.
\end{claimproof}
Since the depth of~$T$ is at most~$L^2$ by Claim~\ref{claim:depth}, any vertex~$v$ that is not a leaf has depth at most~$L^2 - 1$. Hence the number of type-B children of~$v$ is at most~$L^2$.
\begin{claim}
No child of~$v$ is of type~C.
\end{claim}
\begin{claimproof}
Suppose there exists a type-C child~$u_i$ of~$v$. Subtree~$T_{u_i}$ does not contain a vertex adjacent to a vertex of~$M_{u_i} \cap J$, as otherwise~$u_i$ would be of type~A. Any $G$-neighbor of a vertex in~$T_{u_i}$ that is not itself contained in~$T_{u_i}$ is a proper ancestor of~$u_i$, by the properties of DFS trees. Hence the set $J' = (J \setminus T_{u_i}) \cup (I \cap T_{u_i})$ forms an independent set in~$G$. By the definition of type-C vertices, $|J \cap T_{u_i}|=|I \cap T_{u_i}|$, so that~$|J'| = |J|$. This shows that~$J'$ is a maximum independent set distinct from~$I$ and~$J$, contradicting the assumption that~$G$ is bistable.
\end{claimproof}
The preceding claims show that no vertex of~$T$ has more than~$L^2 + L^3$ children, which completes the proof of Theorem~\ref{thm:bistable-pumpkin} using (\ref{eq:pumpkinsize}).
\end{proof}
The following theorem is our main result on the MTJ reconfiguration threshold. It bounds the MTJ reconfiguration threshold of a hereditary graph class~$\Pi$ in terms of the maximum size of a pumpkin subgraph of a graph in~$\Pi_\mathrm{bip}$. Recall that~$\Pi_\mathrm{bip}$ contains the bipartite graphs in~$\Pi$.
\begin{theorem}\label{th:pumpkin}
For any hereditary graph class $\Pi$, the following holds:
\begin{equation}
g_1(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip}))\leq \ensuremath{\mathrm{\textsc{mtj}}}(\Pi)\leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip})),
\end{equation}
where $g_1$, $g_2:\mathbbm{N}\to\mathbbm{N}$ are positive non-decreasing functions defined as
$g_1(k)=k/2$ and~$g_2(k)=(k^3+k^2)^{k^2+1}+1$.
Moreover, for every graph~$G$ we have~$\ensuremath{\mathrm{\textsc{mtj}}}(G)\leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(G))$.
\end{theorem}
\begin{proof}
We combine the bounds on the MTJ reconfiguration threshold of Theorem~\ref{thm:bistable}, with the relation between pumpkins and bistable graphs of Theorem~\ref{thm:bistable-pumpkin}.
\subparagraph{Lower bound on MTJ.} Consider a bipartite graph~$G \in \Pi_\mathrm{bip}$; we may assume~$\ensuremath{\mathrm{\textsc{pum}}}(G) \geq 1$, since otherwise the bound is trivial. Then~$G$ contains a pumpkin subgraph on a set of~$\ensuremath{\mathrm{\textsc{pum}}}(G)$ vertices~$S \subseteq V(G)$, and~$G[S] = (I \cup J, E)$ is a bipartite supergraph of this pumpkin, which is contained in~$\Pi_\mathrm{bip}$ since~$\Pi$ is hereditary. Since any pumpkin is bistable by Observation~\ref{obs:pumpkin:bistable}, reconfiguring~$I$ to~$J$ in the pumpkin subgraph requires a jump of size~$|I| = |J| = \ensuremath{\mathrm{\textsc{pum}}}(G) / 2$, and it is clearly no easier to reconfigure~$I$ to~$J$ in the supergraph~$G[S] \in \Pi_\mathrm{bip}$. Hence~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) \geq \ensuremath{\mathrm{\textsc{pum}}}(G) / 2$ for all graphs~$G \in \Pi_\mathrm{bip}$, giving the lower bound~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi) \geq \ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip}) / 2$.
\subparagraph{Upper bound on MTJ.} By Proposition~\ref{prop:bip} it suffices to prove that~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi_\mathrm{bip}) \leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip}))$. Consider an arbitrary graph~$G \in \Pi_\mathrm{bip}$. By Theorem~\ref{thm:bistable}, we have~$\ensuremath{\mathrm{\textsc{mtj}}}(G) \leq \ensuremath{\mathrm{\textsc{bi}}}(G) \leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(G)) \leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip}))$, where the second inequality follows from Theorem~\ref{thm:bistable-pumpkin}. Hence~$\ensuremath{\mathrm{\textsc{mtj}}}(\Pi_\mathrm{bip}) \leq g_2(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip}))$, concluding the proof.
\end{proof}
While the upper bound of Theorem~\ref{th:pumpkin} leaves room for improvement, the following proposition shows that an exponential dependency on the pumpkin size in the upper bound is unavoidable.
\begin{proposition} \label{prop:super:pumpkin}
Let $\Pi_{\ensuremath{\mathrm{\textsc{pum}}}}(k) := \{ G : \ensuremath{\mathrm{\textsc{pum}}}(G) \leq k\}$ be the class of all graphs $G$ whose largest
pumpkin subgraph has size at most~$k$. Then $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi_{\ensuremath{\mathrm{\textsc{pum}}}}(k)) = 2^{\Omega(k)}$.
\end{proposition}
\begin{proof}
For each~$k \geq 1$ we will construct a graph $G_k$ belonging to the class $\Pi_{\ensuremath{\mathrm{\textsc{pum}}}}(24k+6)$
with $\ensuremath{\mathrm{\textsc{mtj}}}(G_k) = 2^{\Omega(k)}$. We will call $G_k$ a \emph{super-pumpkin}. It is defined
recursively, as explained next.
Define a (4,2)-pumpkin, denoted $P_{4,2}$, to be a pumpkin whose two terminal
vertices are connected by four paths with two interior vertices each.
A super-pumpkin is now defined as follows. Like a regular
pumpkin, it has two designated terminal vertices.
The super-pumpkin $G_1$ consists of just a single edge, whose endpoints are its terminal vertices.
The super-pumpkin $G_k$ is obtained by gluing two copies of a
super-pumpkin~$G_{k-1}$---we will denote these copies by
$G_{k-1}^1$ and $G_{k-1}^2$---into a (4,2)-pumpkin~$P_{4,2}$.
This is done by identifying the terminal vertices of $G_{k-1}^1$ and $G_{k-1}^2$
with specific vertices of the (4,2)-pumpkin, as indicated in
Fig.~\ref{fi:super-pumpkin}.
\begin{figure}
\begin{center}
\includegraphics{super-pumpkin}
\end{center}
\caption{Construction of a super-pumpkin.}
\label{fi:super-pumpkin}
\end{figure}
Note that $|G_k|$, the number of vertices of~$G_k$, satisfies $|G_k| = 2 |G_{k-1}| + 6$
with $|G_1|=2$. Hence, $|G_k|=2^k + 6\sum_{i=0}^{k-2} 2^i = 2^k + 6(2^{k-1}-1)$.
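The closed form for~$|G_k|$ can be checked mechanically; a short Python sketch (illustrative only, not part of the proof):

```python
# Check |G_k| = 2|G_{k-1}| + 6 with |G_1| = 2 against the claimed
# closed form |G_k| = 2^k + 6(2^(k-1) - 1) for the first values of k.
size = 2                                  # |G_1|
for k in range(1, 16):
    assert size == 2**k + 6 * (2**(k - 1) - 1)
    size = 2 * size + 6                   # |G_{k+1}| = 2|G_k| + 6
print(size)  # |G_16| after the loop
```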
\begin{claim} \label{claim:super-size}
$G_k$ has exactly two independent sets of size~$|G_k|/2$, and these independent sets are disjoint.
\end{claim}
\begin{claimproof}
The proof is by induction on~$k$. It will be convenient to prove the following
stronger claim on $\mathcal{I}(G_k)$, the set of all independent sets of~$G_k$.
\begin{quotation}
\noindent $\mathcal{I}(G_k)$ contains no independent set of size more than~$|G_k|/2$ and exactly two independent sets
of size~$|G_k|/2$. These independent sets are disjoint, and one of them contains one
terminal vertex of~$G_k$ while the other contains the other terminal vertex.
\end{quotation}
This claim trivially holds for $\mathcal{I}(G_1)$, so now consider $\mathcal{I}(G_k)$ for~$k>1$.
Let $s,t$ be the two terminal vertices of~$G_k$,
and label the other vertices of the pumpkin~$P_{4,2}$ as $u_1,\ldots,u_4$
and $v_1,\ldots,v_4$---see Fig.~\ref{fi:super-pumpkin}.
We define $W := \{s,t,u_1,u_3,v_2,v_4\}$ to be the set
of vertices in $G_k$ that do not occur in $G_{k-1}^1$ or $G_{k-1}^2$.
We distinguish two types of independent sets in~$\mathcal{I}(G_k)$.
\medskip
\noindent \textbf{Type~1.}
\emph{Independent sets $I$ such that both $G_{k-1}^1$ and $G_{k-1}^2$ have $|G_{k-1}|/2$ vertices in~$I$.}
Note that by the induction hypothesis we have $|\{u_2,v_1\}\cap I|= |\{u_4,v_3\}\cap I| =1$
for any Type~1 independent set~$I$. Moreover, the total number of vertices from $I$
inside $G_{k-1}^1$ and $G_{k-1}^2$ is $2(|G_{k-1}|/2)=|G_k|/2-3$.
We will argue that $G_k$ has two Type~1 independent sets with $|G_k|/2$
vertices and with the required properties, and that all other Type~1 independent sets
have less than $|G_k|/2$ vertices. To this end
we distinguish three subtypes of Type~1.
\begin{itemize}
\item \emph{Type~1(i). Independent sets $I$ with $u_2\in I$ and $u_4\in I$.}
By the induction hypothesis such independent sets $I$ exist, and the choice of vertices of
$I$ inside $G^1_{k-1}$ and $G^2_{k-1}$ is fixed.
Moreover, there is only one way to obtain an independent set $I^*$ with $|G_k|/2$
vertices, namely by adding $\{u_1,u_3,t\}$ from~$W$---all other selections from~$W$
give smaller independent sets.
\item \emph{Type~1(ii). Independent sets $I$ with $v_1\in I$ and $v_3\in I$.}
Again, the choice of vertices
for~$I$ inside $G_{k-1}^1$ and $G_{k-1}^2$ is fixed, and there is
only one way to obtain an independent set of $|G_k|/2$ vertices, this time by adding $\{s,v_2,v_4\}$.
This independent set $I^{**}$ is disjoint from $I^*$---this follows from the induction hypothesis and the fact $\{u_1,u_3,t\}\cap\{s,v_2,v_4\}=\emptyset$---and it contains~$s$
while $I^*$ contains~$t$.
\item \emph{Type~1(iii). Independent sets $I$ with $u_2\in I$ and $v_3\in I$, or $v_1\in I$ and $u_4\in I$.}
Now at most two of the vertices from~$W$ can be in~$I$, and so $|I|<|G_k|/2$.
\end{itemize}
\textbf{Type 2.}
\emph{Independent sets $I$ such that at least one of $G_{k-1}^1$ and $G_{k-1}^2$ has less than
$|G_{k-1}|/2$ vertices in~$I$.}
We will argue that all such independent sets have less than $|G_k|/2$ vertices.
Assume without loss of generality that $G^1_{k-1}$
has less than $|G_{k-1}|/2$ vertices in~$I$. If $G^2_{k-1}$ also has less than
$|G_{k-1}|/2$ vertices in~$I$, then
the total number of vertices from $I$ in $G_{k-1}^1$ and $G_{k-1}^2$
is at most $2(|G_{k-1}|/2-1) = |G_{k}|/2 -5$, and since at most four vertices can be
selected from~$W$ we have $|I|<|G_{k}|/2$.
If $G^2_{k-1}$ has $|G_{k-1}|/2$ vertices in~$I$,
then $|\{u_4,v_3\}\cap I| =1$. Assume without loss of generality that $u_4\in I$.
Then $s$ and $v_4$ are not in~$I$. Since $v_2$ and $t$ cannot be both in $I$,
we conclude that we can select at most three vertices from~$W$ into~$I$.
This again implies that $|I|<|G_k|/2$.\\
Note that each $I\in\mathcal{I}(G_k)$ is of Type~1 or Type~2, since $G_{k-1}^1$ and $G_{k-1}^2$
cannot have more than $|G_{k-1}|/2$ vertices in~$I$ by the induction hypothesis.
This finishes the proof of the claim.
\end{claimproof}
Claim~\ref{claim:super-size} implies that $\ensuremath{\mathrm{\textsc{mtj}}}(G_k) = |G_k|/2 = 2^{\Omega(k)}$: since there are only two independent
sets of size~$|G_k|/2$, say $I$ and $J$, and these are disjoint, the only way to go from
$I$ to $J$ is to remove all tokens from $I$ and place them onto~$J$. Next we bound
the size of the largest pumpkin in $G_k$.
\begin{claim} \label{claim:super-size2}
$\ensuremath{\mathrm{\textsc{pum}}}(G_k) \leq 24k+6.$
\end{claim}
\begin{claimproof}
Define $d_{\max}$ to be the maximum degree in $G_k$ and $C_k$ to be the
maximum length of any simple cycle in~$G_k$.
Then $\ensuremath{\mathrm{\textsc{pum}}}(G_k) \leq d_{\max} \cdot C_k/2$.
The following statement is easy to prove by induction: the degree of the terminal vertices
in $G_k$ is four, and the maximum degree of any other vertex in $G_k$ is six.
Hence, $d_{\max}=6$ and so $\ensuremath{\mathrm{\textsc{pum}}}(G_k) \leq 3 C_k$.
Next we argue that $C_k\leq 8k+2$. To this end, define $L_k$ to be the length
(measured in number of vertices) of a longest simple path in $G_k$ that ends at
the two terminal vertices of~$G_k$. Then $L_k = L_{k-1}+4$ with $L_1=2$, and
thus $L_k = 4k-2$. We now prove that $C_k \leq 8k+2$ by induction on~$k$.
We have $C_1=0$, so the statement is true for $k=1$.
Now suppose~$k>1$. Let $\mathcal{C}$ be a simple cycle in $G_k$.
If $\mathcal{C}$ stays within one of the copies of $G_{k-1}$
we have $|\mathcal{C}| \leq C_{k-1}$ by induction. Otherwise the
maximum possible length for $\mathcal{C}$ is obtained by taking a longest
path from $u_2$ to $v_1$ in the first copy of $G_{k-1}$,
a longest path from $u_4$ to $v_3$ in the second copy, and connecting
them into a cycle using all six vertices in~$W$, where $W$ is defined as before. Hence,
\[
C_k \leq \max(C_{k-1}, 2 L_{k-1} +6) = \max(C_{k-1},8k-6).
\]
It follows that $C_k \leq 8k+2$. Hence, $\ensuremath{\mathrm{\textsc{pum}}}(G_k) \leq 3C_k \leq 24k+6$.
\end{claimproof}
This concludes the proof of Proposition~\ref{prop:super:pumpkin}.
\end{proof}
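The recurrences for~$L_k$ and~$C_k$ used in the proof above can be checked numerically; the following Python sketch (illustrative only) iterates them and verifies the stated bounds:

```python
# Illustrative check of the path/cycle recurrences from the proof:
# L_k = L_{k-1} + 4 with L_1 = 2 (so L_k = 4k - 2), and
# C_k <= max(C_{k-1}, 2 L_{k-1} + 6), which stays below 8k + 2.
L, C = 2, 0                      # L_1 and C_1
for k in range(2, 50):
    C = max(C, 2 * L + 6)        # upper bound on C_k
    L = L + 4                    # L_k
    assert L == 4 * k - 2
    assert C <= 8 * k + 2
print(L, C)  # values of the two bounds at the last tested k
```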
\begin{comment}
\subsection{MTJ Reconfiguration in Disk Graphs}
Despite the exponential dependency of the MTJ threshold on the pumpkin size
in general, for some specific graph classes the dependency can even be quadratic.
A graph is called unit disk graph if it is an intersection graph of equal-sized
circles in the plane, i.e.~each vertex corresponds to a circle, and an edge appears between
any two vertices when the corresponding circles intersect (tangent circles are assumed
to intersect)~\cite{CCJ90}.
In the proposition below we show that for the class of unit disk graphs
the dependency of the MTJ threshold on the pumpkin size is quadratic.
\begin{proposition}
Let $\Pi$ be the class of all unit disk graphs. Then $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi)$ is $O(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip})^2)$.
\end{proposition}
\begin{proof}
As before, by Proposition~\ref{prop:bip} it is enough to show that $\ensuremath{\mathrm{\textsc{mtj}}}(\Pi_\mathrm{bip})$ is $O(\ensuremath{\mathrm{\textsc{pum}}}(\Pi_\mathrm{bip})^2)$.
We show in the proof of Theorem~\ref{th:pumpkin}, Case 1, that $\ensuremath{\mathrm{\textsc{mtj}}}(G)$ is bounded by the size of the largest biconnected component of $G[I\Delta J]$.
Note that since $G[I\Delta J]$ is bipartite and hence contains no odd cycle, the maximum degree of any node in $G[I\Delta J]$ is five (the maximum number of pairwise disjoint disks that can be packed on the boundary of another disk of the same size). Therefore the maximum pumpkin size is a constant multiple of the maximum even-cycle length.
In the claim below we therefore show that the size of a biconnected component is at most quadratic in terms of the length of the maximum even-cycle.
\begin{claim}
Let $G$ be any bipartite unit disk graph with bipartition $I$ and $J$. Each biconnected component of G contains at most $O(L^2)$ vertices, where $L$ is the length of the longest even cycle in~$G[I\Delta J]$.
\end{claim}
\begin{claimproof}
Let $H$ be a biconnected component of a bipartite unit disk graph~$G$, and let $u,v\in H$ be two nodes whose corresponding disk centers are at maximum Euclidean distance from each other. Let $D$ denote this distance.
Since $H$ is biconnected we have two vertex-disjoint paths between $u$ and $v$, and because $G$ is a bipartite unit disk graph this means we have an even cycle of length $\Omega(D)$.
On the other hand, note that the nodes from $H$ are contained in a disk of radius $D$.
To see this, take any one of the disk centers, and construct a circle of radius $D$ around it.
By the definition of $D$, all other disk centers must lie within this circle.
Now, since the disks of nodes in $I$ are pairwise disjoint we have $|I\cap H|=O(D^2)$. Similarly, $|J\cap H|=O(D^2)$, and so $|H|= O(D^2)$.
\end{claimproof}
\end{proof}
\end{comment}
\section{Threshold for Token Addition Removal Reconfiguration} \label{sec:tar}
In this section we study the model of token addition removal. First observe that when~$G$ is a forest, we have $\ensuremath{\mathrm{\textsc{mtj}}}(G)\leq 1$ and therefore $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq 1$ as well.
Also, from Theorem~\ref{thm:max-match} we get $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq \max(\mathrm{\textsc{vc}}(G),1)$.
But the inequality $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq \ensuremath{\mathrm{\textsc{mtj}}}(G)$ tells us nothing about the behavior of the TAR reconfiguration threshold when the MTJ reconfiguration threshold is large.
The next simple proposition sheds some light on this situation.
Indeed, observe that a large pumpkin (which has a large MTJ reconfiguration threshold) can have a small feedback vertex set; this happens for even cycles, for example.
\begin{proposition} \label{prop:fvs}
Let $G=(V,E)$ be a graph. Then $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq \mathrm{\textsc{fvs}}(G)+1$.
\end{proposition}
\begin{proof}
Let~$G = (V,E)$ be a graph with a minimum feedback vertex set~$S \subseteq V$ of size~$k$, and let~$I, J \subseteq V$ be independent sets of equal size. By Proposition~\ref{prop:balancedbip} we can assume that~$V = I \cup J$ and~$I \cap J = \emptyset$. If~$|I| = |J| \leq k$, then it is trivial to reconfigure~$I$ to~$J$ with a buffer of size at most~$k$, by first moving all tokens from~$I$ into the buffer, and then onto~$J$. In the remainder we assume~$|I| = |J| > k$. Let~$S_I \subseteq I$ be a superset of~$I \cap S$ of size~$k$, and let~$S_J \subseteq J$ be a superset of~$J \cap S$ of size~$k$; these exist since~$|I| = |J| > k \geq \max(|I \cap S|, |J \cap S|)$. Then the graph~$G' := G - (S_I \cup S_J)$ is a subgraph of~$G - S$ and is therefore acyclic since~$S$ is a feedback vertex set. By Theorem~\ref{thm:tree} it follows that~$I' := I \setminus S_I$ can be MTJ reconfigured to~$J' := J \setminus S_J$ in~$G'$ by jumps of size~$1$, which easily implies that~$I'$ can be TAR reconfigured to~$J'$ in~$G'$ using a buffer of size at most~$1$; let~$\mathcal{S}$ be a corresponding reconfiguration sequence. To reconfigure~$I$ to~$J$ in~$G$, start by removing the tokens from the~$k$ vertices in~$S_I$ and place them in the buffer. Then apply the reconfiguration sequence~$\mathcal{S}$ to reconfigure~$I'$ to~$J'$, using at most~$1$ extra buffer token. Finish by moving the~$k$ buffer tokens onto~$S_J$ to arrive at the independent set~$J' \cup S_J = J$.
\end{proof}
The above bound is tight, as witnessed by the complete balanced bipartite graph~$K_{n,n}$: its minimum feedback vertex set has size $n-1$, while in order to place a token on any vertex of the target independent set, the reconfiguration must first pass through the empty set. Hence the TAR reconfiguration threshold of~$K_{n,n}$ is~$n$, matching the bound~$\mathrm{\textsc{fvs}}(K_{n,n})+1 = n$.
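The tightness claim for~$K_{n,n}$ can be verified exhaustively for small~$n$. The following Python sketch (illustrative only; here a buffer of size~$b$ is modelled by requiring every intermediate independent set to have at least $n-b$ vertices) searches for the smallest sufficient buffer by exhaustive search over independent sets:

```python
# Brute-force sketch: smallest TAR buffer that suffices to reconfigure one
# side of K_{n,n} into the other. Vertices 0..n-1 form side A, n..2n-1 side B;
# an intermediate independent set may drop to size n - buffer.
def tar_buffer(n):
    A, B = frozenset(range(n)), frozenset(range(n, 2 * n))
    def independent(s):                     # in K_{n,n}: no mixed pairs
        return not (s & A and s & B)
    for buf in range(n + 1):
        seen, stack = {A}, [A]
        while stack:
            s = stack.pop()
            if s == B:
                return buf
            for v in range(2 * n):          # add or remove a single token
                t = s - {v} if v in s else s | {v}
                if n - buf <= len(t) <= n and independent(t) and t not in seen:
                    seen.add(t)
                    stack.append(t)
    return None

print([tar_buffer(n) for n in (1, 2, 3)])   # [1, 2, 3]
```

As expected, the search only succeeds once the buffer is large enough to empty one side completely, i.e.~with a buffer of size~$n$.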
\subsection{TAR Reconfiguration Threshold in Terms of Pathwidth}
As the main result of this section, we will show that the TAR reconfiguration threshold of a graph is upper-bounded in terms of its \emph{pathwidth}. Before proving that statement, we present a structural lemma about path decompositions that will be useful in the proof.
\begin{lemma} \label{lem:pathwidthprefix}
Let~$G = (I \cup J, E)$ be a bipartite graph with a nice path decomposition~$\mathcal{P} = (X_1, \ldots, X_r)$ of width~$k$. Let~$S \subseteq J$ be a non-empty set such that~$|N(S)| \leq |S|$, while no non-empty proper subset of~$S$ has this property. If we order the vertices in~$S$ as~$i_1, \ldots, i_t$ such that~$r_\mathcal{P}(i_1) < r_\mathcal{P}(i_2) < \ldots < r_\mathcal{P}(i_t)$, then~$|N(\{i_1, \ldots, i_{t'}\})| < t' + k$ for all~$1 \leq t' \leq t$.
\end{lemma}
Intuitively, the lemma says the following. Suppose a set~$S \subseteq J$ is inclusion-wise minimal with respect to being no smaller than its neighborhood. Then ordering~$S$ according to the right endpoints of the intervals representing~$S$ in the path decomposition, we are guaranteed that every prefix of~$S$ has a fairly small neighborhood compared to its size: the neighborhood size exceeds the size of the prefix by less than the pathwidth. Note that since the lemma deals with bipartite graphs only, no vertex of~$S$ can belong to the neighborhood of any prefix of~$S$. The ordering of the vertices is uniquely defined since the path decomposition is nice. The bound of Lemma~\ref{lem:pathwidthprefix} is best-possible. Consider a complete bipartite graph~$K_{n,n}$, with pathwidth~$n$. In any optimal path decomposition, for~$t' = 1$ the first vertex in the ordering has a neighborhood of size~$n$ and so~$n < t' + n = 1 + n$, but a better bound is not possible.
\begin{proof}[Proof of Lemma \ref{lem:pathwidthprefix}]
First observe that in a graph with a path decomposition of width~$k=0$ there can be no edges. Then the only vertex-minimal set~$S$ satisfying the assumptions is an isolated vertex, for which the claim trivially holds. In the remainder we assume~$k \geq 1$. For $t'=t$ we have~$\{i_1,\ldots,i_{t'}\}=S$, and by assumption $|N(S)|\leq |S|$. So for~$t' = t$ the claim in the lemma holds trivially for any $k\geq 1$. Assume for a contradiction that there is some~$t' < t$ such that:
\begin{equation}\label{eq:cross}
|N(\{i_1,\ldots,i_{t'}\})|-t'\geq k.
\end{equation}
We partition~$T := S \cup N(S)$ into three disjoint subsets to derive some structural properties that will lead to a contradiction.
\begin{enumerate}[{\normalfont (i)}]
\item $T_1:= \{v\in S\cup N(S):r_\mathcal{P}(v)\leq r_\mathcal{P}(i_{t'})\}$, the set of all vertices in $S\cup N(S)$ that are not contained in any of the bags after the bag with index $r_\mathcal{P}(i_{t'})$.
\item $T_2:= \{v\in S\cup N(S):l_\mathcal{P}(v)> r_\mathcal{P}(i_{t'})\}$, the set of all vertices in $S\cup N(S)$ that are not contained in any of the bags before or including the bag with index $r_\mathcal{P}(i_{t'})$.
\item $T_3:= \{v\in S\cup N(S):l_\mathcal{P}(v)\leq r_\mathcal{P}(i_{t'})<r_\mathcal{P}(v)\}$, the set of all vertices in $S\cup N(S)$ that are contained in some bags before or including the bag with index $r_\mathcal{P}(i_{t'})$ and also in some bag after it.
\end{enumerate}
Observe that~$(T_1 \cap S) \cup (T_2 \cap S) \cup (T_3 \cap S)$ is a partition of~$S$, and that~$T_1 \cap S = \{i_1, \ldots, i_{t'}\}$.
\begin{claim} \label{claim:sthree}
$|T_3|\leq k$.
\end{claim}
\begin{claimproof}
From property (P3) in the definition of path decomposition we know that $T_3\subseteq X_{\ell}$ for $\ell = r_{\mathcal{P}}(i_{t'})$.
Now $|X_{\ell}|\leq k+1$ since the width of~$\mathcal{P}$ is at most~$k$, and we know that $i_{t'}\in X_\ell \setminus T_3$. Therefore we have $|T_3|\leq |X_\ell\setminus\{i_{t'}\}|\leq k$.
\end{claimproof}
For the remainder of the proof we distinguish two cases.
\subparagraph{Case 1: $T_2 \cap S = \emptyset$.} Then~$(T_1 \cap S) \cup (T_3 \cap S)$ is a partition of~$S$. By Claim~\ref{claim:sthree} we have~$|T_3 \cap S| \leq k$, and therefore
\begin{equation} \label{eq:caseone:ssize}
|S| = |T_1 \cap S| + |T_3 \cap S| \leq |T_1 \cap S| + k = t' + k.
\end{equation}
\begin{claim} \label{claim:prefixneighbors:beforetprime}
In Case 1 we have $N(\{i_{t'+1}, \ldots, i_t\}) \subseteq N(\{i_1, \ldots, i_{t'}\})$.
\end{claim}
\begin{claimproof}
Assume for a contradiction that~$v \in N(\{i_{t'+1}, \ldots, i_t\}) \setminus N(\{i_1, \ldots, i_{t'}\})$.
By~(\ref{eq:cross}) we have~$|N(\{i_1,\ldots,i_{t'}\})|\geq t' + k$, and the existence of~$v$ shows that
$$|N(S)| = |N(\{i_1, \ldots, i_t\})| > |N(\{i_1, \ldots, i_{t'}\})| \geq t' + k \geq |S|,$$ by (\ref{eq:caseone:ssize}). But this contradicts the starting assumption that~$|N(S)| \leq |S|$.
\end{claimproof}
\begin{claim} \label{claim:caseone:three}
In Case 1 we have~$|T_3 \cap S| \geq k$, implying that~$T_3 \subseteq S$ and~$|T_3 \cap S| = k$.
\end{claim}
\begin{claimproof}
Suppose that~$|T_3 \cap S| < k$. Then:
\begin{align*}
|N(S)| &\geq |N(\{i_1, \ldots, i_{t'}\})| & \text{since $S \supseteq \{i_1, \ldots, i_{t'}\}$ and~$S$ is independent,} \\
&\geq k + t' & \text{by (\ref{eq:cross}),} \\
&= |T_1 \cap S| + k & \text{since~$|T_1 \cap S| = t'$,} \\
&> |T_1 \cap S| + |T_3 \cap S| & \text{by the assumption~$k > |T_3 \cap S|$,} \\
&= |S| & \text{since~$T_2 \cap S = \emptyset$,}
\end{align*}
contradicting the precondition to the lemma. It follows that~$|T_3 \cap S| \geq k$. Since~$|T_3 \cap S| \leq |T_3| \leq k$ by Claim~\ref{claim:sthree}, it follows that~$|T_3 \cap S| = k$ and that all vertices of~$T_3$ belong to~$S$.
\end{claimproof}
Let~$\ell := r_\mathcal{P}(i_{t'})$. Since the path decomposition is nice there is only one vertex (i.e.,~$i_{t'}$) that occurs in~$X_\ell$ but not after~$X_\ell$. So~$X_\ell = \{i_{t'}\} \cup T_3$, and Claim~\ref{claim:caseone:three} implies that no vertex of~$N(S)$ occurs in~$X_\ell$ since~$X_\ell = \{i_{t'}\} \cup T_3 \subseteq S$. Claim~\ref{claim:prefixneighbors:beforetprime} shows that all neighbors of~$i_{t'+1}, \ldots, i_t$ are also neighbor to some vertex of the prefix~$i_1, \ldots, i_{t'}$. Since~$i_1, \ldots, i_{t'}$ are ordered by increasing right endpoint of the intervals representing them in the decomposition, all neighbors of~$i_{t'+1}, \ldots, i_t$ therefore have to occur in a bag with index at most~$r_\mathcal{P}(i_{t'})$, and since~$X_{\ell}$ contains no vertex of~$N(S)$, by (P3) it follows that no vertex of~$N(S)$ occurs in a bag with index~$\ell$ or later. Since~$X_\ell = \{i_{t'}\} \cup T_3$ and~$|T_3 \cap S| = k$ by the previous claim, there are~$k + 1$ vertices in~$X_\ell$. Since the size difference of consecutive bags in a nice path decomposition is exactly one, and no bag has size more than~$k+1$ since the width is~$k$, it follows that~$X_{\ell - 1} = X_\ell \setminus \{v\}$ for some vertex~$v \in \{i_{t'}\} \cup T_3 \subseteq S$. Since no vertex of~$N(S)$ occurs in bag~$X_\ell$ or after, and~$v$ does not occur in~$X_{\ell - 1}$ or earlier, it follows that~$v$ does not occur in a bag together with a vertex of~$N(S)$. By the definition of path decomposition, this implies that~$v$ has no neighbor in~$N(S)$; since~$v \in S$ and~$S$ is an independent set (it is a subset of a partite set of a bipartite graph), this implies that~$v$ is an isolated vertex in~$G$. But since~$1 \leq t' < t = |S|$, the set~$S' := \{v\}$ is a nonempty strict subset of~$S$ for which~$0 = |N(S')| \leq |S'| = 1$, contradicting the precondition to the lemma. This concludes the proof of Case 1.
\subparagraph{Case 2: $T_2 \cap S \neq \emptyset$.} We continue the proof of Lemma~\ref{lem:pathwidthprefix} for the case that~$T_2 \cap S \neq \emptyset$. We will show that $T_2\cap S$ is a nonempty strict subset of $S$ with $|T_2\cap S|\geq |N(T_2\cap S)|$. This will contradict our assumption that $S$ is inclusion-wise minimal with the property that $|S|\geq |N(S)|$. Now let us denote $|T_3\cap I|=k_I$ and $|T_3\cap J|=k_J$.
Note that~$T_3 \cap J = T_3 \cap S$, and observe from Claim~\ref{claim:sthree} that
\begin{equation}\label{eq:temp3}
k_I+k_J\leq k.
\end{equation}
Recall from the choice of $r_\mathcal{P}(i_{t'})$ that $|T_1\cap S|= |\{i_1,\ldots,i_{t'}\}|=t'$. Since $S=(T_1\cup T_2 \cup T_3)\cap S$, and the $T_i$'s are mutually disjoint, we have:
\begin{align*}
|S| &= |T_1\cap S| + |T_2\cap S| + |T_3\cap S|\\
&= t'+ |T_2\cap S| + k_J.
\end{align*}
Therefore,
\begin{equation}\label{eq:S2}
|T_2\cap S|=|S|-k_J-t'.
\end{equation}
Also note that
\begin{equation}\label{eq:temp}
\begin{split}
|N(S)| &= |N ((T_1\cup T_2 \cup T_3)\cap S)|\\
&\geq |N((T_1\cup T_2)\cap S)|\\
& = |N(T_1\cap S)|+|N(T_2\cap S)| - |N(T_1\cap S)\cap N(T_2\cap S)|.
\end{split}
\end{equation}
Now observe that any vertex which is a neighbor of some vertex in $T_1\cap S$ and some vertex in $T_2\cap S$, must be both in some bag with index at most~$r_{\mathcal{P}}(i_{t'})$ (to meet~$T_1 \cap S$) and in some bag with index strictly more than $r_{\mathcal{P}}(i_{t'})$ (to meet~$T_2 \cap S$).
This implies that
$
N(T_1\cap S)\cap N(T_2\cap S)\subseteq T_3\cap I.
$
Therefore
\begin{equation}\label{eq:temp2}
|N(T_1\cap S)\cap N(T_2\cap S)|\leq |T_3\cap I|=k_I.
\end{equation}
Hence, \eqref{eq:temp} and \eqref{eq:temp2} yield
\begin{equation}\label{eq:neighbor}
|N(T_2\cap S)|\leq |N(S)|-|N(\{i_1,\ldots,i_{t'}\})|+k_I.
\end{equation}
Therefore, combining \eqref{eq:S2} and \eqref{eq:neighbor} we get
\begin{align*}
|T_2\cap S|-|N(T_2\cap S)|&\geq |S|-k_J-t'-|N(S)|+|N(\{i_1,\ldots,i_{t'}\})|-k_I\\
&= (|S|-|N(S)|)+(|N(\{i_1,\ldots,i_{t'}\})|-t')-(k_I+k_J)\\
&\geq 0 \quad\mbox{from our hypothesis about $S$, and Equation }\eqref{eq:cross}\mbox{ and }\eqref{eq:temp3}.
\end{align*}
So~$T_2 \cap S$ is a nonempty strict subset of~$S$ satisfying the key property, contradicting that~$S$ is inclusion-wise minimal. This completes the proof of Lemma~\ref{lem:pathwidthprefix}.
\end{proof}
Using Lemma~\ref{lem:pathwidthprefix} we bound the TAR reconfiguration threshold in terms of pathwidth.
\begin{theorem}\label{th:pathwidth}
Let $G = (V,E)$ be a graph. Then $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq \max(\mathrm{\textsc{pw}}(G),1)$.
\end{theorem}
\begin{proof}
We prove this theorem using induction on the number of vertices.
As before, it is enough to consider $G=(V,E)$ and assume that the initial and target independent sets $I$ and $J$ respectively are such that $|I|=|J|$, $I\cup J=V$ and $I\cap J=\emptyset$.
We will show that~$\mathrm{\textsc{pw}}(G)\leq k$ implies that~$\ensuremath{\mathrm{\textsc{tar}}}(G)\leq k$, using induction on the number of vertices~$n$. For $n=1$, the statement is trivially true. Now fix any $k\geq 1$, and assume the induction hypothesis that any graph $G$ with $n$ vertices satisfying $\mathrm{\textsc{pw}}(G)\leq k$ has $\ensuremath{\mathrm{\textsc{tar}}}(G)\leq k$.
Assume $G$ is a graph of $n+1$ vertices having pathwidth at most $k$. Let $S$ be an inclusion-minimal subset of $J$ for which $|S|\geq |N(S)|$. Such a set exists since $|J| = |I| \geq |N(J)|$. We will show that if we reconfigure the set $S$ in a suitable order by moving tokens from~$N(S)$ onto~$S$, then the buffer size will not grow beyond $k$. There are enough vertices in~$S$ to accommodate all tokens on~$N(S)$, and afterward we will invoke induction.
We first deal with a special case. If~$S = \{v\}$ is a singleton set, then it has degree at most one since~$|S| \geq |N(S)|$. Move the token from the neighbor~$u$ of~$v$ (or from an arbitrary vertex~$u$, if~$v$ has no neighbors) into the buffer, and then onto~$v$. By induction there exists a TAR reconfiguration from~$I \setminus \{u\}$ to~$J \setminus \{v\}$ in~$G - \{u,v\}$ using a buffer of size at most~$\max(\mathrm{\textsc{pw}}(G - \{u,v\}), 1) \leq \max(\mathrm{\textsc{pw}}(G), 1)$. When inserting the token move from~$u$ onto~$v$ at the beginning of this sequence, we get a TAR reconfiguration from~$I$ to~$J$ with the desired buffer size. In the remainder of the proof we can therefore assume~$|S| \geq 2$. This implies that~$|S| = |N(S)|$: if~$|S| > |N(S)|$ and~$|S| \geq 2$, then we can remove a vertex~$v$ from~$S$ to obtain~$|S \setminus \{v\}| \geq |N(S \setminus \{v\})|$ for the nonempty set~$S \setminus \{v\}$, contradicting minimality.
Let $\mathcal{P}=(X_1,X_2,\ldots,X_r)$ be a nice path decomposition of width at most $k$. If~$G$ has no edges, then~$S$ is a singleton set containing an isolated vertex. Since we already covered that case, we know~$G$ has at least one edge, so any path decomposition has width~$k \geq 1$.
Enumerate the vertices of $S$ as~$i_1,\ldots, i_m$ such that $r_{\mathcal{P}}(i_1) < \ldots < r_{\mathcal{P}}(i_m)$. Hence the vertices are ordered by increasing rightmost endpoint of the interval of bags containing it.
In order to describe the reconfiguration procedure we suitably group several TAR reconfiguration steps together as one step in the algorithm.
In particular, one reconfiguration step in the algorithm described below will consist of a run of successive removals of nodes, followed by a single node addition.
We use the notion of a \emph{buffer set} $B_t$ at the $t^{th}$ step of the reconfiguration, such that $|B_t|$ will correspond to the number of tokens in the buffer at any particular time, and $\max_t |B_t|+1$ will correspond to the maximum buffer size of the corresponding TAR reconfiguration sequence. The buffer set is a subset of vertices, showing where the tokens in the buffer came from. At time step $t=0$, define $W_0=I$ to be the independent set of vertices with a token, and let the buffer set $B_0$ be empty. We will define intermediate independent sets~$W_i$ and buffer sets~$B_i$ representing the grouped reconfiguration steps. The algorithm stops when $W_m$ contains all vertices in $S$; we will then invoke the induction hypothesis to finish the sequence. From the sequence~$(W_0, W_1, \ldots, W_m)$ one obtains a formal reconfiguration sequence as defined in Section~\ref{sec:reconfig} by inserting ``transitioning independent sets'' in between~$W_i$ and~$W_{i+1}$ for all~$i$. From~$W_i$, repeatedly remove one vertex until arriving at~$W_i \cap W_{i+1}$, and then add the single vertex of~$W_{i+1} \setminus W_i$ to the resulting set.
For $t\geq 1$, the transition from~$t-1$ to~$t$ is obtained as follows. Let~$u_t$ be an arbitrary vertex from $B_{t-1}\cup (N(i_t)\cap W_{t-1})$. Intuitively, at step~$t$ we take the token from~$u_t$ (in the buffer set or on a neighbor of $i_t$) and move it onto vertex~$i_t$, causing~$u_t$ to disappear from the buffer and adding~$i_t$ to the independent set. To ensure the resulting set is independent, tokens on neighbors of~$i_t$ are moved into the buffer beforehand.
Observe that the above step is valid only if $B_{t-1}\cup (N(i_t)\cap W_{t-1})$ is nonempty. Below in Claim~\ref{claim:buffer} we show that due to the choice of $S$, this is indeed the case for all $t\leq m$. Formally, we obtain the following:
\begin{algorithm*}[{Reconfiguring graphs with small pathwidth}]
Initialize with~$B_0 = \emptyset$ and~$W_0 = I$. We recursively define~$B_t$ and~$W_t$ for~$t \geq 1$.
\begin{enumerate}
\item The neighbors of~$i_t$ that have tokens (i.e.~that are in the current independent set) are removed from the previous independent set~$W_{t-1}$, making room to add~$i_t$ to the new independent set: $W_t=(W_{t-1}\setminus N(i_t))\cup \{i_t\}$.
\item The neighbors of~$i_t$ belonging to the previous independent set~$W_{t-1}$ move to the buffer, while~$u_t$ is removed from the buffer since its token has moved onto~$i_t$:
\begin{equation}
B_t=(B_{t-1}\cup (N(i_t)\cap W_{t-1}))\setminus \{u_t\}.
\end{equation}
\end{enumerate}
\end{algorithm*}
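The grouped bookkeeping above can be simulated directly. The following sketch is our own illustration (not part of the proof): \texttt{adj} maps each vertex to its neighbour set, \texttt{initial} is the starting independent set~$I$, and \texttt{order} lists the vertices of~$S$ sorted by increasing right endpoint~$r_\mathcal{P}$.

```python
# Toy simulation of the grouped reconfiguration steps (illustration only,
# not part of the proof). `adj` maps each vertex to its neighbour set,
# `initial` is the starting independent set I, and `order` lists the
# vertices i_1, ..., i_m of S by increasing right endpoint r_P(i_t).
def reconfigure(adj, initial, order):
    W = set(initial)   # tokens currently on vertices (independent set W_t)
    B = set()          # buffer set B_t
    max_buffer = 0
    for i_t in order:
        moved = adj[i_t] & W            # N(i_t) ∩ W_{t-1}: tokens to lift off
        candidates = B | moved
        assert candidates, "guaranteed nonempty by the buffer claim"
        u_t = next(iter(candidates))    # token that will land on i_t
        # just before u_t lands, the buffer holds B_{t-1} ∪ (N(i_t) ∩ W_{t-1})
        max_buffer = max(max_buffer, len(candidates))
        W = (W - moved) | {i_t}
        B = candidates - {u_t}
    return W, B, max_buffer
```

Running it on~$K_{2,2}$ with~$S = J$, the buffer momentarily holds two tokens at the first step, in line with the tightness example~$\ensuremath{\mathrm{\textsc{tar}}}(K_{n,n}) = n = \mathrm{\textsc{pw}}(K_{n,n})$.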
As mentioned earlier, a step from $W_t$ to $W_{t+1}$ can be thought of as a sequence of successive removals of the nodes $N(i_{t+1})\cap W_{t}$, followed by the addition of the node $i_{t+1}$.
During the TAR reconfiguration sequence corresponding to the step from $W_t$ to $W_{t+1}$, the maximum buffer size is $|B_{t+1}|+1$, since the buffer size equals~$|B_{t}\cup (N(i_{t+1})\cap W_{t})|$ just before the buffer token from~$u_{t+1}$ is moved onto~$i_{t+1}$.
Therefore, the maximum buffer size in the entire TAR reconfiguration sequence starting from $W_0$ and ending at $W_m$ is given by $\max_{0\leq t\leq m} |B_t|+1$.
Also, at the end of the algorithm, all vertices of the set $S$ will be in the independent set, and no vertex will remain in the buffer set.
This can be seen by observing the following.
Initially all tokens were on the vertices belonging to the set $N(S) \subseteq I$, since~$S \subseteq J$.
At each step of the algorithm essentially one token is selected from $N(S)$ as long as the number of such tokens is positive, and is placed on some vertex in $S$.
Now since $|S|\geq |N(S)|$, the tokens on $N(S)$ are eventually exhausted, and the algorithm terminates having placed one token on each vertex of $S$.
For the validity of the above algorithm we claim the following, which in turn also characterizes the size of the buffer set at all intermediate time steps.
\begin{claim} \label{claim:buffer}
For all $1 \leq t\leq m$ we have that
$B_{t-1}\cup (N(i_t)\cap W_{t-1})$ is nonempty,
and that $|B_t|=|N(\{i_1,\ldots,i_{t}\})|-t$.
\end{claim}
\begin{claimproof}
Suppose on the contrary that there exists $t'\leq m$, such that $B_{t'-1}\cup (N(i_{t'})\cap W_{t'-1})$ is empty for the first time. If~$t' = 1$, then~$B_{t'-1} \cup (N(i_{t'}) \cap W_{t'-1})$ is empty, and in particular~$N(i_{t'}) = \emptyset$, so that~$i_{t'} = i_1$ is an isolated vertex. But since~$|S| \geq 2$ by our argument above, it follows that~$S' = \{i_1\}$ is a nonempty strict subset with~$|S'| \geq |N(S')|$; a contradiction. So in the remainder we consider~$t' > 1$. We show that, for all $t< t'$, $|B_t|=|N(\{i_1,\ldots,i_{t}\})|-t$.
Using this, we prove that $2 \leq t' \leq m$ leads to a contradiction.
Observe that for any $t< t'$, after the $t^{th}$ step of the algorithm, the total number of distinct vertices that have been added to the buffer set is given by $|N(\{i_1,\ldots,i_{t}\})|$. Furthermore, for all $t''\leq t < t'$, the set $B_{t''-1}\cup (N(i_{t''})\cap W_{t''-1})$ has always been nonempty.
This implies that at each step, precisely one token has been removed from the buffer, thus reducing the size of the buffer set by moving a buffer token onto a vertex that is added to the independent set.
Therefore, in total $t$ times the size of the buffer set reduces by one.
Since initially the buffer set was empty, for any $t<t'$
we have $|B_t|=|N(\{i_1,\ldots,i_{t}\})|-t$.
Since we have assumed that $B_{t'-1}\cup (N(i_{t'})\cap W_{t'-1})$ is empty, we know $B_{t'-1}$ is empty, and therefore from the above argument
$|B_{t'-1}|=|N(\{i_1,\ldots,i_{t'-1}\})|-(t'-1)=0$.
Defining $S':=\{i_1,\ldots,i_{t'-1}\}\subsetneq S$, we have $|N(S')|\leq |S'|$. Since~$t' \geq 2$ the set~$S'$ is nonempty, contradicting the minimality of~$S$. This proves the first part of the claim. Since~$B_{t-1}\cup (N(i_t)\cap W_{t-1})$ is therefore nonempty at every step~$t \leq m$, the counting argument above also proves the second part of the claim.
\end{claimproof}
Note that in particular $|B_m|=|N(\{i_1,\ldots,i_{m}\})|-m= |N(S)|-|S| = 0$; the buffer empties for the first time only after reconfiguring the whole set.
It remains to show that throughout the process the buffer size will not grow beyond~$k$, i.e.~$|B_t|\leq k-1$, for all $t\leq m$.
The second part of Claim~\ref{claim:buffer} implies that
$\max_{t\leq m}|B_t|\geq k$ if and only if $\exists\ t \leq m$ such that
$|N(\{i_1,\ldots,i_{t}\})|-t\geq k$,
which is not possible due to Lemma~\ref{lem:pathwidthprefix}.
This then ensures that throughout the algorithm, the buffer size will never exceed $k$.
Since the buffer set empties out after reconfiguring the set $S$,
after the execution of the algorithm, $W_m\cap J=S$ and $W_m\cap I\subset V\setminus (S\cup N(S))$.
Now define $G'=G- (S\cup N(S))$, $I'=I\cap W_m$, and $J'=J\setminus S$. Observe that $G'$ has pathwidth at most $k$, and
$|I'|=|I\cap W_m|=|I|-|S|=|J'|$.
Furthermore, since $S$ is non-empty, $|V(G')|\leq n$. By the induction hypothesis, there exists a TAR reconfiguration sequence from~$I'$ to~$J'$ in~$G'$ using a buffer of size at most~$k$. Since no vertex of $N(S)$ belongs to $G'$, any independent set in $G'$ remains an independent set in $G$ when augmented with the set $S$. Therefore we can first apply the reconfiguration that moves the tokens from~$N(S)$ onto~$S$, followed by the reconfiguration from~$I'$ to~$J'$, to reconfigure~$I$ to~$J$ with a buffer of size at most~$k$.
\end{proof}
The above bound is in general tight, as witnessed by the complete balanced bipartite graph $K_{n,n}$ on $2n$ vertices: from \cite{B98} we know that $K_{n,n}$ has pathwidth equal to $n$, and as explained earlier, its TAR reconfiguration threshold is also $n$.
\subsection{Obstructions to TAR Reconfigurability}
Having proved Theorem~\ref{th:pathwidth}, it is natural to ask whether pathwidth in some sense characterizes the TAR reconfiguration threshold: does large pathwidth of a graph imply that its TAR reconfiguration threshold is large? This is not the case: the pathwidth of a complete binary tree is proportional to its depth~\cite{KinnersleyL94}, but its reconfiguration threshold is one by Theorem~\ref{thm:tree}.
We now identify a graph structure which forces the TAR reconfiguration threshold to be large.
First we formally introduce the special type of minor illustrated in Figure~\ref{fig:binary}.
\begin{definition}[{Bipartite topological double minor}] \label{def:btd:minor}
Let $G = (I \cup J, E)$ be a bipartite graph and let~$H$ be an arbitrary graph. Then $H$ is a \emph{bipartite topological double minor} of $G$, if one can assign to every $v\in V(H)$ a subgraph $\varphi(v)$ of $G$, which is either an edge or an even cycle in $G$, and one can assign to each edge $e = \{u, v\} \in E(H)$ a pair of odd-length paths $\psi_1(e)$, $\psi_2(e)$ in $G$, such that the following holds:
\begin{itemize}
\item For any $u,v \in V(H)$ with $u\neq v$ the subgraphs $\varphi(u)$ and $\varphi(v)$ are vertex-disjoint.
\item For any $v \in V(H)$ no vertex of $\varphi(v)$ occurs as an interior vertex of a path $\psi_1(e)$ or $\psi_2(e)$, for any $e \in E(H)$.
\item For any $e, e' \in E(H)$ the paths $\psi_1(e)$ and $\psi_2(e')$ are internally vertex-disjoint.
\item For any $e = \{u,v\} \in E(H)$ the paths $\psi_1(e)$ and $\psi_2(e)$ both have one endpoint in $\varphi(v)$ and one endpoint in $\varphi(u)$.
\item For any $v \in V(H)$ and edge $e = \{u,v\}\in E(H)$, the attachment points of~$\psi_1(e)$ and~$\psi_2(e)$ in~$\varphi(v)$ belong to different partite sets.
\end{itemize}
The triple~$(\varphi, \psi_1, \psi_2)$ is a \emph{BTD-minor model} of~$H$ in~$G$. For an edge~$e \in E(H)$ we define $\psi'_1(e), \psi'_2(e) \subseteq V(G)$ as the \emph{interior} vertices of the paths~$\psi_1(e)$ and~$\psi_2(e)$, which may be~$\emptyset$ if the path consists of a single edge.
\end{definition}
Intuitively, $H$ occurs as a bipartite topological double minor (or \emph{BTD-minor}) if each vertex of $H$ can be realized by an edge or even cycle, and every edge of $H$ can be realized by two odd-length paths that connect an $I$-vertex of $\varphi(v)$ to a $J$-vertex of $\varphi(u)$ and the other way around, in such a way that these structures are vertex-disjoint except for the attachment of paths to cycles. The definition easily extends to bipartite graphs whose bipartition is not given, since a BTD-minor is contained within a single connected component of the graph, which has a unique bipartition.
\begin{proposition} \label{prop:btd:minor}
Let~$G = (I \cup J, E)$ be a bipartite graph having a connected graph~$H$ as a BTD-minor model~$(\varphi, \psi_1, \psi_2)$, such that each vertex of~$G$ is in the image of~$\varphi$,~$\psi_1$, or~$\psi_2$. Then~$G$ has a perfect matching with~$|I| = |J|$ edges, and for any independent set~$W$ in~$G$:
\begin{enumerate}
\item For each vertex~$v$ of~$H$ we have~$|W \cap \varphi(v)| \leq |\varphi(v)| / 2$.
\item For each edge~$e$ of~$H$ and~$i \in \{1,2\}$ we have~$|W \cap \psi_i'(e)| \leq |\psi_i'(e)| / 2$.
\end{enumerate}
For a \emph{maximum} independent set~$W$, equality holds in all cases.
\end{proposition}
\begin{proof}
To see that~$G$ has a perfect matching, observe that each~$\varphi(v)$ for~$v \in V(H)$ is either an edge or an even cycle, which can be covered completely by a matching consisting of edges from~$\varphi(v)$. For each~$e \in E(H)$ and~$i \in \{1,2\}$ there is an even number of interior vertices on the path~$\psi_i(e)$, since the path has odd length. The interior vertices~$\psi'_i(e)$ can therefore also be covered completely by a matching of edges among~$\psi'_i(e)$. Since each vertex of~$G$ is in the image of~$\varphi$,~$\psi_1$, or~$\psi_2$, the sets~$\varphi(v)$ together with the sets~$\psi'_i(e)$ for~$e \in E(H)$ and~$i \in \{1,2\}$ cover~$V(G)$. Since these sets are vertex-disjoint by Definition~\ref{def:btd:minor}, they form a partition of~$V(G)$. By the preceding argument, this implies~$G$ has a perfect matching~$M$ where no edge crosses the described partition of~$V(G)$. This matching has size~$|I| = |J|$ since~$G$ is bipartite.
Now we prove the two claimed properties. If~$W$ is an independent set, it contains at most one endpoint of each edge in~$M$ and therefore contains at most half the vertices of each~$\varphi(v)$ for~$v \in V(H)$ and of each~$\psi'_i(e)$ for~$e \in E(H)$. Any independent set~$W$ achieving equality for all these sets has size~$|I| = |J|$ and is therefore maximum.
\end{proof}
For a bipartite graph~$G$, let~$\ensuremath{\mathop{\mathrm{\textsc{treeminor}}}}(G)$ denote the largest integer~$k$ for which~$G$ contains a complete binary tree of depth~$k$ as a BTD-minor. For a class of bipartite graphs $\Pi$ we define $\ensuremath{\mathop{\mathrm{\textsc{treeminor}}}}(\Pi):=\sup_{G\in\Pi}\ensuremath{\mathop{\mathrm{\textsc{treeminor}}}}(G)$.
\begin{theorem}\label{th:bip minor}
There exists a real constant~$c > 0$ such that any hereditary graph class $\Pi$ satisfies~$\ensuremath{\mathrm{\textsc{tar}}}(\Pi) \geq c \cdot \ensuremath{\mathop{\mathrm{\textsc{treeminor}}}}(\Pi_\mathrm{bip})$.
\end{theorem}
\begin{proof}
As before, we consider a balanced bipartite graph $G\in\Pi_\mathrm{bip}$ with bipartition $V(G)=I\cup J$ that has a complete binary tree $T$ of depth $d$ as a BTD-minor.
Since the graph class is hereditary, for the lower bound we consider only the subgraph of $G$ induced by the vertices of $\bigcup_{v\in V(T)}\varphi(v) \cup \bigcup_{e\in E(T)}\left(\psi_1(e)\cup\psi_2(e)\right)$, and without loss of generality we refer to this subgraph as $G$ itself.
\begin{fact}[\cite{BC09}] \label{fact:ndb}
There is a universal constant $c_1>0$ such that if~$T$ is a complete binary tree of depth $d$, then
$\displaystyle \max_{1\leq i\leq |V(T)|}\min_{S\subseteq V(T);|S|=i}|N_T(S)|\geq c_1 \cdot d$.
\end{fact}
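For small depths the fact can be verified exhaustively. The sketch below is our own illustration: it builds the heap-numbered complete binary tree and evaluates $\max_{i}\min_{|S|=i}|N_T(S)|$ by brute force (feasible up to roughly depth~$3$, i.e.~$15$ vertices).

```python
from itertools import combinations

def complete_binary_tree(depth):
    # Heap-numbered complete binary tree: node v has children 2v and 2v+1.
    n = 2 ** (depth + 1) - 1
    adj = {v: set() for v in range(1, n + 1)}
    for v in range(1, n + 1):
        for c in (2 * v, 2 * v + 1):
            if c <= n:
                adj[v].add(c)
                adj[c].add(v)
    return adj

def max_min_neighborhood(adj):
    # Exhaustively evaluate  max_i  min_{|S| = i} |N(S)|.
    vertices = list(adj)
    best = 0
    for i in range(1, len(vertices) + 1):
        worst = min(len(set().union(*(adj[v] for v in S)) - set(S))
                    for S in combinations(vertices, i))
        best = max(best, worst)
    return best
```

Already at depth~$3$ the value is at least~$2$, while at depth~$1$ it equals~$1$, consistent with growth proportional to the depth.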
The above implies that there exists $i_0 \leq |V(T)|$, such that any size-$i_0$ subset of $V(T)$ has a neighborhood of size at least $c_1 \cdot d$. Let~$I \cup J$ be the unique bipartition of the connected graph~$G$, and consider an arbitrary TAR reconfiguration sequence from $I$ to $J$. In this sequence $(I = W_0, W_1, \ldots, W_t = J)$ of independent sets in $G$, look at the reconfiguration step when for the first time there exists $S\subseteq V(T)$ with $|S|=i_0$, such that the intermediate independent set $W$ at that step contains $\bigcup_{v\in S}(\varphi(v)\cap J)$, and for all $v\notin S$ it satisfies $(\varphi(v)\cap W\cap J)\subsetneq(\varphi(v)\cap J)$. We will prove that~$|J| - |W| \geq c_1 \cdot d$, implying that from the initial independent set of~$|I| = |J|$ tokens, at least~$c_1 \cdot d$ tokens must reside in the buffer.
To prove the theorem, consider the intermediate independent set $W$, and the set $S\subseteq V(T)$ with $|S|=i_0$ satisfying the above criteria. The following claim shows that for each vertex in~$N_T(S)$, the independent set~$W$ uses at least one vertex fewer than the maximum independent set~$J$ does.
\begin{claim}\label{claim:fewer}
Consider an edge $e=\{u,v\}\in E(T)$ with $u\in S$ and $v\notin S$, and let~$Q_{e,v} \subseteq V(G)$ denote the vertices in~$\varphi(v) \cup \psi'_1(e) \cup \psi'_2(e)$. The following holds:
\begin{equation}\label{eq:object}
|W\cap Q_{e,v}| < |J \cap Q_{e,v}|=\frac{|Q_{e,v}|}{2}.
\end{equation}
\end{claim}
\begin{claimproof}
By Proposition~\ref{prop:btd:minor}, the maximum independent set~$J$ contains exactly half the vertices of~$Q_{e,v}$.
If $|W\cap\psi'_i(e)|<|\psi'_i(e)|/2$ for some~$i \in \{1,2\}$, then we are done: by Proposition~\ref{prop:btd:minor} the set~$W$ contains fewer vertices from~$\psi'_i(e)$ than the maximum independent set~$J$ does, and this cannot be compensated within the other parts of the structure, since~$J$ contains half the vertices there and no independent set contains more.
In the remainder, we can assume that~$W$ contains exactly half the vertices from~$\psi'_1(e)$ and~$\psi'_2(e)$. Then the following are true:
\begin{enumerate}[{\normalfont (i)}]
\item All $J$-nodes of $\varphi(u)$ are in $W$ (by our choice of~$W$ and since~$u \in S$).
\item Some $J$-node of $\varphi(v)$ is not in $W$ (by our choice of~$W$ and since~$v \not \in S$).
\item Some $I$-node of $\varphi(v)$ is not in~$W$. To see this, let~$i \in \{1,2\}$ such that~$\psi_i(e)$ is an odd-length path from a $J$-node in~$\varphi(u)$ to an $I$-node in~$\varphi(v)$, which exists by Definition~\ref{def:btd:minor}, and orient it in that direction. Since the first vertex on the path is a $J$-node in~$\varphi(u)$, it is contained in~$W$ as shown above. Hence the second vertex on the path, the first interior vertex, is not in~$W$. Since exactly half the interior vertices from~$\psi_i(e)$ belong to~$W$, every other interior vertex from~$\psi_i(e)$ is in~$W$. Since the path has an even number of interior vertices and the first interior vertex is not in~$W$, the last interior vertex must be in~$W$. But this prevents its $I$-node neighbor in~$\varphi(v)$ from being in~$W$.
\end{enumerate}
Therefore, since $\varphi(v)$ is either an edge or an even cycle, we have $|W \cap \varphi(v)|<|\varphi (v)|/2$ by observing the following: the only independent sets in~$\varphi(v)$ of size~$|\varphi(v)| / 2$ are~$\varphi(v) \cap I$ and~$\varphi(v) \cap J$, but~$\varphi(v) \cap W$ is not equal to either of these sets since it avoids a $J$-node and an $I$-node. Hence $|W \cap \varphi(v)|<|\varphi (v)|/2 = |J \cap \varphi(v)|$, and Proposition~\ref{prop:btd:minor} shows that this cannot be compensated in other parts of the minor model, implying~$|W \cap Q_{e,v}| < |J \cap Q_{e,v}|$.
\end{claimproof}
Using Claim~\ref{claim:fewer} we finish the proof of Theorem~\ref{th:bip minor}. For each~$v \in N_T(S)$, pick an edge~$e = \{u,v\}$ such that~$u \in S$. By Claim~\ref{claim:fewer} the set~$W$ contains less than half the vertices of~$Q_{e,v}$, while the maximum independent set~$J$ contains exactly half. Since the sets~$Q_{e,v}$ considered for different vertices~$v \in N_T(S)$ are disjoint, while Proposition~\ref{prop:btd:minor} shows that from the other pieces of the minor model~$W$ cannot use more vertices than~$J$ does, it follows that~$|W| \leq |J| - |N_T(S)| \leq |J| - c_1 \cdot d$. Hence the buffer contains at least~$c_1 \cdot d$ tokens.
\end{proof}
\section{Conclusion} \label{sec:conclusion}
In this paper we considered two types of reconfiguration rules for independent set, involving simultaneously jumping tokens and reconfiguration with a buffer. For both models, we derived tight bounds on the corresponding reconfiguration thresholds in terms of several graph parameters like the minimum vertex cover size, the minimum feedback vertex set size, and the pathwidth.
Many results in the literature concerning the parameter pathwidth can be extended to hold for the parameter treewidth as well.
This is not the case here; the upper bound on the TAR reconfiguration threshold in terms of pathwidth (Theorem~\ref{th:pathwidth}) cannot be strengthened to treewidth, since one can make arbitrarily deep complete binary trees as BTD-minors in bipartite graphs of treewidth only two (see Figure~\ref{fig:binary}).
On the other hand, there are bipartite graphs of large treewidth with TAR reconfiguration threshold two (Figure~\ref{fig:treewidth}). To characterize the TAR reconfiguration threshold one therefore needs to combine graph connectivity (as measured by the width parameters) with notions that constrain the parity of the connections in the graph. This is precisely why we introduced BTD-minors.
We conjecture that the converse of Theorem~\ref{th:bip minor} holds, in the sense that any hereditary graph class having a large TAR reconfiguration threshold must contain a graph having a complete binary tree of large depth as a BTD-minor. Our belief is based partially on the fact that a BTD-minor model of a deep complete binary tree is arguably the simplest graph of large pathwidth and feedback vertex number. Resolving this conjecture is our main open problem.
{\small
\subparagraph*{Acknowledgments.}
This research was financially supported by The Netherlands Organization for Scientific Research (NWO) through TOP-GO grant 613.001.012, Gravitation Networks grant 024.002.003 and a Veni grant `Frontiers in Parameterized Preprocessing'. Debankur thanks Sem Borst for his comments on the motivation for the reconfiguration thresholds.
\bibliographystyle{plainurl}
\section{Introduction}
It is supposed that in ultra-relativistic heavy-ion collisions a new phase with free quarks and gluons (the quark-gluon plasma) can be generated. This phase transition is accompanied by the restoration of spontaneously broken chiral symmetry. Theoretically, a spontaneous breaking of symmetry leads to the existence of a massless Goldstone boson, which is supposed to be the pion in quantum chromodynamics (QCD). Being the lightest hadron with a Goldstone nature, the pion plays a special role in hadronic physics. Elastic $\pi\pi$-scattering is a fundamental process for QCD at low energies, as it provides a direct link between the theoretical formalism of chiral symmetry and experiment. Moreover, pions are quickly created in the early phase of a heavy-ion collision, as two or more pions form the final state of many hadronic interactions. The evolution of the pion gas at finite temperature and density, especially near the phase transition, is therefore an object of great interest.
The increasing temperature of the pion gas leads to an increase in density ($n\sim T^3$), and in-medium collisions occur more often. This can shorten the average lifetime of a particular pion state and thus increase its width. With a further increase in density, the enhanced inverse processes can restore the disturbed state and increase the average lifetime of the pion state again (i.e., decrease its width). The resulting width is related to the process by which a disturbed system returns to equilibrium (a damping width). The occurrence of a finite pion damping width in dense matter was predicted by Blaschke et al. \cite{Voskresensky:1995tx}. Later, the effect of the finite pion width on the in-medium $\rho$-meson behaviour was discussed in Ref.~\cite{vanHees:2000bp}.
In this article, we concentrate on calculations of the pion damping width and the pion spectral function in the hot interacting pion gas below the critical temperature in the framework of the Nambu-Jona-Lasinio (NJL) model. This model is often used for the investigation of in-medium meson properties \cite{Blaschke:2003zt,Yudichev:2005yz}.
The width of a particular state is calculated via the collision integral in the Kadanoff--Baym approach \cite{KadanoffBaym}, which includes the total scattering amplitude of the collision processes. The description of the $\pi\pi$-scattering amplitude in NJL-like models at the one-loop level includes two types of diagrams: the ``box'' diagram and the meson-exchange diagram \cite{Quack:1994vc,Fu:2009zs}, and is performed only in the simplest kinematics ($s=4m_\pi^2$, $t=u=0$), as the calculations at finite temperature require the Matsubara technique, which is laborious for arbitrary kinematics, especially for four-pole integrals~\cite{Rehberg:1995nr,Khvorostukhin:2020foa}. We show that the $\pi\pi$-scattering amplitude for arbitrary kinematics can be truncated by using the pole approximation for the meson propagators and by taking the 4-pion interaction to be a constant. The obtained relations are similar to the scattering amplitudes in the meson-exchange model \cite{Cotanch:2002vj}.
The spectral function of meson correlations can be a key to many real-time observables in strongly interacting systems, as it provides information on quasi-particle spectra and collective excitations of the system. For the pion gas, several stages should be distinguished: from the low-temperature non-interacting gas, through the high-temperature non-relativistic interacting gas, to the high-density stage where the quark structure of the pion becomes more significant. The first stage can be described via a simple $\delta$-function. The second one is discussed, for example, at $T<T_c$ in the framework of the O(4) linear $\sigma$-model \cite{Chiku:1997va}. In our work, the pion spectral function is calculated in the hot pion gas using the two-iteration method \cite{KadanoffBaym}; this elastic-scattering approach is likewise limited to temperatures below the critical temperature $T < T_c\sim 0.19$ GeV, where the pion is still a bound state. For the last stage, where $T>T_c$, the formalism relating the phase shift of quark-antiquark scattering to the meson spectral function can be applied in the framework of the NJL model \cite{Xia:2014bla}. The non-relativistic approach based on the Beth-Uhlenbeck formula, extended recently by D. Blaschke~\cite{Blaschke:2019col,Blaschke:2013zaa}, enables the authors to study meson spectral functions at high temperatures, where the chiral phase transition and the Mott transition have already occurred. These calculations show that there is still a chance for collective modes at $T>T_c$.
We will discuss the formalism of the NJL model and scattering amplitudes within the model in Section 2, and show numerical calculations
of the pion damping width and the pion spectral function in Section 3. Finally, we summarize in Section 4.
\section{Pion-pion scattering amplitude within the NJL model}
\label{sec:damping}
\subsection{The model formalism}
Generally, to consider pion-pion scattering in the medium, the $\sigma$- and $\rho$-mesons should be taken into account, as they appear as intermediate states in the scattering process \cite{Quack:1994vc}. Nevertheless, this work focuses on the SU(2) NJL model \cite{RevModPhys.64.649,Kalinovsky:2015kzf}, and the Lagrangian with scalar and pseudo-scalar interactions has the following form:
\begin{equation}
\mathcal{L}_{\rm NJL}=\bar{q}\left(i\gamma_\mu \partial^\mu - \hat{m}_0
\right) q+ G \left[\left(\bar{q}q\right)^2+\left(\bar{q}i\gamma_5
\vec{\tau} q \right)^2\right],
\label{njl}
\end{equation}
where $G$ is the scalar coupling constant, $\bar{q}, q$ are the quark
fields, $\hat{m}_0$ is the diagonal matrix of the current quark mass,
$\hat{m}_0 = {\rm diag}(m_u^0, m_d^0)$ with $m_u^0 = m_d^0 = m_0$, and
$\vec{\tau}$ are the Pauli matrices in SU(2) flavour space,
$\tau^a$ $(a = 1,2,3)$.
In the mean field approximation, the constituent quark mass is
provided by the gap equation:
\begin{equation}
m = m_0 + 2 i G \int \frac{d^4p}{(2\pi)^4} {\rm Tr}\{S(p)\},
\end{equation}
where $S(p) = (\hat{p}-m)^{-1}$ is the quark propagator and the trace is
taken over Dirac, flavour and colour indices.
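As an illustration, the vacuum gap equation can be solved numerically by fixed-point iteration. The reduced one-loop form below, with a sharp three-momentum cutoff $\Lambda$ and the parameter set quoted later in the text, is a standard NJL textbook sketch at $T=0$, not the exact expression used in this work:

```python
import math

# NJL parameters quoted in the text (three-momentum cutoff scheme)
LAMBDA = 0.639   # cutoff [GeV]
M0     = 0.0055  # current quark mass [GeV]
G      = 5.227   # scalar coupling [GeV^-2]
NC, NF = 3, 2    # colours, flavours

def vacuum_loop(m):
    # I(m) = int_0^Lambda dp p^2 / sqrt(p^2 + m^2), in closed form
    e_cut = math.sqrt(LAMBDA**2 + m**2)
    return 0.5 * (LAMBDA * e_cut - m**2 * math.asinh(LAMBDA / m))

def gap_rhs(m):
    # m = m0 + (2 G Nc Nf / pi^2) m I(m), the T = 0 mean-field gap equation
    return M0 + 2.0 * G * NC * NF / math.pi**2 * m * vacuum_loop(m)

m = 0.3                      # starting guess [GeV]
for _ in range(200):         # fixed-point iteration
    m = gap_rhs(m)

print(f"constituent quark mass: {1000 * m:.0f} MeV")
```

For this parameter set the iteration converges to a constituent mass of roughly 0.3 GeV, the scale behind the $2m_q(T)$ curve in the figure below.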
Considering mesons as quark-antiquark bound states, the meson propagator in the framework of the random-phase approximation can be written as a matrix in meson space:
\begin{equation}
D_M(k^2) = \frac{2 G}{1 - 2 G \ \Pi_{M}(k^2)},
\label{mesonProp}
\end{equation}
where $k$ is the meson four-momentum. Equations for the meson masses can be derived from the poles of the meson propagator (Eq.(\ref{mesonProp})) at vanishing three-momentum (the Bethe-Salpeter equation):
\begin{equation}
\det\{1 - 2G \ \Pi_{M}(M_M-i\Gamma_M/2, \vec{0})\} = 0.
\label{Bet-Salp}
\end{equation}
The equation is written in complex form as a reminder that once the meson mass exceeds the mass of its constituents, the equation no longer describes a stable bound state but rather a resonance. The complex form of Eq. (\ref{Bet-Salp}) is used to determine both the mass $M_M$ and the width $\Gamma_M$ of the meson. The polarization operator $\Pi_{M}(k^2)$ defines the meson
properties:
\begin{equation}
\Pi_{M} (k^2) = i \int \frac{d^4p}{(2\pi)^4} \ \mbox{Tr}\,
\left[ \Gamma_M S(p+k) \Gamma_M S(p)
\right],
\end{equation}
where the vertex factor $\Gamma_M$ depends on the type of meson: $\Gamma_M = i \gamma_5 \tau^a $ for the pseudo-scalar meson and $\Gamma_M = {\bf 1} \tau^a $ for the scalar meson. The pion-quark and $\sigma$-quark coupling strengths $g_{\pi qq}$, $g_{\sigma qq}$ are obtained from the polarization operator $\Pi_M$ as
\begin{equation}
g^{-2}_{Mqq} = \frac{\partial \Pi_M (k^2)}{\partial k^2}\bigg\vert_{k^2 = M^2}.
\end{equation}
\begin{figure}[!h]
\centerline{ \includegraphics[width=0.45\linewidth]{masses.pdf}
\includegraphics[width=0.45\linewidth]{couplings.pdf}}
\caption{Left panel: the doubled quark mass $2m_q(T)$, the meson masses $M_\pi(T)$, $M_\sigma(T)$ and their broadening $M_M\pm\Gamma_M(T)/2$ at $\mu_q = 0$. The vertical dash-dotted lines correspond to the critical temperature and the Mott temperature for the pion. Right panel: the coupling constants $g_{\pi qq}$, $g_{\sigma qq}$.}
\label{mesmass}
\end{figure}
To describe the mass spectra in the NJL model, a set of parameters is required: the cut-off parameter $\Lambda = 0.639$ GeV, the current quark mass $m_0 = 5.5$ MeV and the coupling constant $G = 5.227$ GeV$^{-2}$, which are fixed at zero temperature to reproduce phenomenological values such as the pion mass, the quark condensate and the pion weak decay constant.
The doubled quark mass $2m_q$, the meson masses $M_\pi\pm\Gamma_\pi/2$ and $M_\sigma\pm\Gamma_\sigma/2$ and the coupling constants $g_{\pi qq}$, $g_{\sigma qq}$ as functions of temperature at $\mu_q=0$ are shown in Fig. \ref{mesmass}. The quark mass (or the quark condensate) in the NJL model plays the role of the order parameter for the chiral phase transition, which occurs for the given parameter set at $T = 0.192$ GeV \cite{RevModPhys.64.649,Kalinovsky:2015kzf}. At higher temperatures, when $m_q \rightarrow m_0$, the spontaneously broken chiral symmetry is partially restored and the mass of the sigma meson approaches that of its chiral partner, the pion. The temperature at which the meson mass exceeds the total mass of its constituents is the Mott temperature; as can be seen in Fig.\ref{mesmass}, for the pion $T^\pi_{\rm{Mott}} = 0.22$ GeV.
\subsection{Scattering amplitude}
The scattering amplitude for the $\pi\pi$ scattering process in the framework of the NJL and PNJL models is described in detail in Refs. \cite{Quack:1994vc,Fu:2009zs}. At lowest order in $1/N_c$, two types of Feynman diagrams contribute to the $\pi\pi$-scattering amplitude: four-vertex ``box'' diagrams, describing the four-quark interaction, and meson-exchange diagrams.
The second type comprises processes with a meson as an intermediate state.
As the $SU(2)$ NJL model contains only scalar and pseudo-scalar quark-antiquark states, the role of the intermediate state is played solely by the scalar $\sigma$-meson, and the triangle vertex corresponds to the $\sigma \rightarrow \pi\pi$ decay. The amplitude can be written as
\begin{equation}
i\mathcal{T}^{\sigma} = i A^{\sigma \pi\pi}D_\sigma(p)i A^{\sigma\pi\pi},
\label{triagTot}
\end{equation}
where $D_\sigma$ is the meson propagator and
the triangle amplitude $A^{\sigma\pi\pi}$ defines the coupling strength $g_{\sigma\pi\pi}$= $2 g_{\sigma q q} g_{\pi qq}^2 A^{\sigma\pi\pi}$ \cite{Zhuang:2000tz,Friesen:2011ma}.
The contribution of this diagram plays a role as long as the decay $\sigma \rightarrow \pi\pi$ is possible, in other words as long as the $\sigma$-meson mass exceeds twice the pion mass (in our model this holds up to $T^\sigma_{diss} = 0.189$ GeV).
The use of the NJL and PNJL models for the description of the scattering amplitude $|\mathcal{T}|$ at finite temperature is limited to the simplest kinematics $p_1=p_2=p_3=p_4= p$ \cite{Quack:1994vc,Fu:2009zs}. Considering non-trivial kinematics is quite difficult and requires involved calculations \cite{Rehberg:1995nr,Khvorostukhin:2020foa}.
Nevertheless, a model truncation is possible. Assuming that the ``box'' diagrams depend only weakly on the kinematic conditions, their contribution can be replaced by the four-pion interaction constant $g_{4\pi}$. Using the pole approximation for the meson propagator Eq. (\ref{mesonProp}) as
\begin{equation}
D_\sigma(x) \approx \frac{g_{\sigma q q}^2}{M_\sigma^2- x- i\Gamma_\sigma M_\sigma},
\label{propPole}
\end{equation}
and treating the triangle vertex as the coupling constant $g_{\sigma\pi\pi}$, one can show that the scattering amplitudes $T_0, T_1, T_2$ can be written by analogy with the meson-exchange model \cite{Cotanch:2002vj}, where the quark-exchange diagrams correspond to contact terms. The amplitudes of total isospin $I$ ($I = 0,1,2$), denoted $T_I$, then have the form:
\begin{eqnarray}
T_0 &=& 5 g_{4\pi} + g^2_{\sigma\pi\pi}\left( 3 D_\sigma(s)+D_\sigma(t)+D_\sigma(u) \right),\label{amp_a0} \\
T_1 &=& g^2_{\sigma\pi\pi}\left( D_\sigma(t) - D_\sigma(u) \right), \label{amp_a1}\\
T_2 & =& 2 g_{4\pi} + g^2_{\sigma\pi\pi}\left(D_\sigma(t)+D_\sigma(u) \right),
\label{amp_a2}
\end{eqnarray}
where $g_{4\pi}$ is the four-pion interaction constant and $D_\sigma(s,t,u)$ is the meson propagator \cite{Bjorken:100769}.
The amplitudes $T_0, T_1, T_2$ obtained in the framework of the original NJL model for the case $t=u=0$ give the scattering lengths $a_0 = T_0/32\pi$, $a_1=0$, $a_2 = T_2/32\pi$. These results are shown with solid lines in the right panel of Fig.\ref{couplings}. By inserting the calculated values of $a_0, a_2$, the meson masses and the coupling constants into Eqs.(\ref{amp_a0})-(\ref{amp_a2}), the constant $g_{4\pi}$ can be obtained as a function of temperature. The temperature behaviour of the scattering lengths $a_0$, $a_2$ recalculated from Eqs.(\ref{amp_a0})-(\ref{amp_a2}) is shown with dashed lines in the right panel of Fig.\ref{couplings}, in comparison with the NJL results; for $a_0$ the two coincide. The decay constant $g_{\sigma\pi\pi}$ calculated in the NJL model and the four-pion interaction constant $g_{4\pi}$, normalized by a factor of $-20$, are shown in the left panel of Fig. \ref{couplings}.
Considering a pion-pion collision for a neutral pion, one should take into account the following contributions to the total cross section: $\pi^{0}\pi^{0} \rightarrow \pi^{0}\pi^{0}$, $\pi^{+}\pi^{-}\rightarrow \pi^{0}\pi^{0}$ and $\pi^{\pm}\pi^{0} \rightarrow \pi^{\pm}\pi^{0}$. The physical $\pi\pi$-scattering amplitudes are related to the isospin amplitudes by \cite{Cotanch:2002vj}
\begin{eqnarray}
\mathcal{T}_{\pi^0\pi^0\rightarrow\pi^0\pi^0} &=& \frac{2}{3} T_2+\frac{1}{3} T_0 \\
\mathcal{T}_{\pi^\pm\pi^0\rightarrow\pi^\pm\pi^0} &=& \frac{1}{2} T_2 + \frac{1}{2}T_1 \\
\mathcal{T}_{\pi^0\pi^0\rightarrow\pi^+\pi^-} &=& \frac{1}{3} T_2 - \frac{1}{3} T_0,
\end{eqnarray}
from which
\begin{eqnarray}
\mathcal{T}_{\pi^0\pi^0\rightarrow\pi^0\pi^0} &=& 3 g_{4\pi}+ g^2_{\sigma\pi\pi}\left(D_\sigma(s)+D_\sigma(t)+D_\sigma(u)\right) \\
\mathcal{T}_{\pi^\pm\pi^0\rightarrow\pi^\pm\pi^0} &=& g_{4\pi} + g^2_{\sigma\pi\pi} D_\sigma(u) \\
\mathcal{T}_{\pi^0\pi^0\rightarrow\pi^+\pi^-} &=& g_{4\pi} - g^2_{\sigma\pi\pi} D_\sigma(s).
\end{eqnarray}
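As a quick consistency check, the first of these relations can be verified symbolically from the isospin amplitudes of Eqs.~(\ref{amp_a0})-(\ref{amp_a2}); the sketch below uses sympy purely for illustration:

```python
import sympy as sp

# symbolic stand-ins: gs2 denotes g_{sigma pi pi}^2, Ds/Dt/Du the
# sigma propagator evaluated at s, t, u
g4pi, gs2, Ds, Dt, Du = sp.symbols('g4pi gs2 Ds Dt Du')

# isospin amplitudes of Eqs. (amp_a0) and (amp_a2)
T0 = 5 * g4pi + gs2 * (3 * Ds + Dt + Du)
T2 = 2 * g4pi + gs2 * (Dt + Du)

# pi0 pi0 -> pi0 pi0 from the isospin decomposition (2/3) T2 + (1/3) T0
T_pi0_elastic = sp.Rational(2, 3) * T2 + sp.Rational(1, 3) * T0

# must equal 3 g_{4pi} + g^2_{sigma pi pi} (D(s) + D(t) + D(u))
assert sp.expand(T_pi0_elastic - (3 * g4pi + gs2 * (Ds + Dt + Du))) == 0
print("pi0 pi0 -> pi0 pi0 amplitude reproduced")
```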
\begin{figure}[!h]
\centerline{ \includegraphics[width=0.45\linewidth]{gspipi_g4pi.pdf}
\includegraphics[width = 0.44\textwidth]{a0a2.pdf}}
\caption{Left panel: the decay constant $g_{\sigma\pi\pi}$ and the constant $g_{4\pi}$ as functions of temperature. Right panel: the scattering lengths $a_0$, $a_2$ calculated with Eqs.(\ref{amp_a0})-(\ref{amp_a2}) (dashed lines) as functions of temperature, in comparison with the original NJL results (solid lines).}
\label{couplings}
\end{figure}
\label{sec:preparation}
\section{Pion damping width and pion spectral function}
The description of the resonance properties at finite temperature and density was carried out by Kadanoff and Baym \cite{KadanoffBaym}. The width $\Gamma$ of a particular state (or lifetime $\tau$) can be calculated by the equation
\begin{equation}
\Gamma(p) = \tau^{-1}(p) =\Sigma^{>}(p)\pm\Sigma^{<}(p),
\end{equation}
where the sign ``$-$'' is used for bosons and ``$+$'' for fermions. The lifetime of a particular state in a dense medium depends on the probability of its decay into other states and on the probability of the inverse processes, which can restore the decayed state and thereby prolong its lifetime. The ``$+$'' for fermions appears due to the Pauli principle, which blocks inverse scattering into an already occupied state. For bosons, there is a competition between the processes which tend to increase the number of existing states (described by the function $\Sigma^<$) and the processes which tend to decrease it ($\Sigma^>$). The functions $\Sigma^<$, $\Sigma^>$ have the form \cite{KadanoffBaym}
\begin{eqnarray}
\Sigma^{<}(p) &=& \int_{p_1}\int_{p_3}\int_{p_4}(2\pi)^4\delta_{p_1,p_2;p_3,p_4}|T|^2G^>(p_1)G^<(p_3)G^<(p_4), \label{func_sigma1}\\
\Sigma^{>}(p)&=& \int_{p_1}\int_{p_3}\int_{p_4}(2\pi)^4\delta_{p_1,p_2;p_3,p_4}|T|^2G^<(p_1)G^>(p_3)G^>(p_4),
\label{func_sigma2}
\end{eqnarray}
where the functions $G_i^>=[1+n_i(\omega)]A_i(p^2)$, $G_i^<=n_i(\omega)A_i(p^2)$ define the average density of particles with momentum $\bar{p}$ and energy $\omega$, and $n_i = (\exp(\beta\omega)- 1)^{-1}$ is the bosonic occupation number. Some notation is introduced for brevity: $\int_{p_i} = \int\frac{dp_i}{(2\pi)^4}$, $\delta_{p_1,p_2;p_3,p_4} = \delta(p_1+p_2-p_3-p_4)$, with $p_2=p$, since $\Gamma$ is calculated in the rest frame of particle 2.
After integration over the zero-components of the momenta, the integrals (\ref{func_sigma1}), (\ref{func_sigma2}) take the form
\begin{eqnarray}
&& \Sigma^{<(>)}(m_2,\vec{0}) = \nonumber \\
&& = \frac{1}{64\pi^4}\int d \Omega \frac{d \vec{p_1}}{2 E_1}\int_{-1}^1 d (\cos\alpha)\frac{|\vec{p_3^*}|^2\cdot|\mathcal{T}|^2}{|\vec{p_3^*}|(E_1+m_2)-|\vec{p_1}||\vec{p_3^*}| \cos\alpha}F^{<(>)},
\label{func_sigma1_int}
\end{eqnarray}
where $d\Omega= ds_1 A(s_1)\, ds_3 A(s_3)\, ds_4 A(s_4)$, the factors $F^>= n_1 (n_3+1)(n_4+1)$ and $F^<=(n_1+1)n_3 n_4$ are introduced for brevity, and the momentum $\vec{p_3^*}$ is defined as:
\begin{eqnarray}
|\vec{p_3^*}|_{1,2} =
\frac{|\vec{p_1}|a b \pm \sqrt{|\vec{p_1}|^2 a^2 b^2 +\left((\sqrt{s_2}+E_1 )^2-|\vec{p_1} |^2 b^2\right)\left(a^2-4s_3 (\sqrt{s_2}+E_1 )^2 \right) }}{2\left((\sqrt{s_2}+E_1 )^2 -|\vec{p_1} |^2 b^2\right)}, \nonumber
\label{eq_p3}
\end{eqnarray}
with $a=(\sqrt{s_2}+E_1 )^2+s_3-s_4+s_1- E_1^2$, $s_i = m_i^2$ and $b = \cos\alpha$. The spectral function $A(s_i)$ is chosen in the Breit–Wigner form
\begin{equation}
A(s) =\alpha_s\frac{M\Gamma}{(s-M^2)^2+M^2\Gamma^2},
\label{Breit_Wigner}
\end{equation}
where $M$ is the meson pole mass, $\Gamma$ is the corresponding meson width, and $\alpha_s$ is a normalization factor, $\alpha_s=2$.
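This form is simple enough to check numerically. The sketch below (with illustrative, not fitted, values of $M$ and $\Gamma$) verifies the peak height $\alpha_s/(M\Gamma)$ at $s=M^2$ and the normalisation $\int ds\, A(s) = \alpha_s\pi$, i.e. $2\pi$ for $\alpha_s=2$:

```python
import math

def spectral_bw(s, M, Gamma, alpha=2.0):
    # Breit-Wigner spectral function A(s) of Eq. (Breit_Wigner)
    return alpha * M * Gamma / ((s - M**2) ** 2 + (M * Gamma) ** 2)

M, Gamma = 0.140, 0.050          # illustrative in-medium values [GeV]

peak = spectral_bw(M**2, M, Gamma)
assert abs(peak - 2.0 / (M * Gamma)) < 1e-12   # maximum sits at s = M^2

# midpoint-rule integral over +-200 half-widths around the peak
hw, n = M * Gamma, 20_000
ds = 400 * hw / n
norm = sum(spectral_bw(M**2 - 200 * hw + (k + 0.5) * ds, M, Gamma)
           for k in range(n)) * ds
print(f"int A(s) ds = {norm:.3f}, alpha*pi = {2 * math.pi:.3f}")
```

The truncated integral comes out slightly below $2\pi$ because of the Lorentzian tails outside the integration window.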
The left panel of Fig.\ref{Gamma_Aw} clearly shows that the pion state broadens in the hot pion gas above T$\sim 0.1$ GeV and that the width reaches its maximal value $\Gamma\sim 0.075$ GeV at $T\sim 0.145$ GeV, after which the curve turns down. This behaviour can be attributed to the properties of the scalar $\sigma$-resonance, which plays a significant role in pion-pion scattering. As noted above, the calculations in our model are limited to $T\sim 0.19$ GeV, where the chiral phase transition takes place, and the $\sigma\rightarrow\pi\pi$ decay is possible only up to $T\sim 0.189$ GeV, beyond which only the ``box'' diagram contributes to the scattering amplitude. The pion width increases again near the critical temperature, as the pion tends to melt at high temperature. The results obtained for the amplitude taking into account only ``box'' diagrams are shown in Fig. \ref{Gamma_Aw} with a dashed line.
\begin{figure}[!h]
\centerline{
\includegraphics[width = 0.5\textwidth]{Gamma_T.pdf}
\includegraphics[width = 0.5\textwidth]{spectral_function.pdf}}
\caption{Left panel: the pion width as a function of temperature. The red dotted line corresponds to the case when the amplitude contains only ``box'' diagrams. Right panel: the pion spectral function $A(\omega,\bar{0})$ at different temperatures. }
\label{Gamma_Aw}
\end{figure}
We now turn to the spectral function $A_i(\omega,\bar{p})$, which defines the possible energy spectrum $\omega$ for a particle with momentum $\bar{p}$. For a non-interacting gas the spectral function reduces to a $\delta$-function, but in the general case it should be taken as \cite{KadanoffBaym}
\begin{equation}
A(\omega, {\bar{p}}) = \frac{\Gamma(\omega, \bar{p})}{(\omega-E(\bar{p})-{\rm Re}\Sigma_c(\omega,\bar{p}))^2+(\frac{\Gamma(\omega, \bar{p})}{2})^2}.
\end{equation}
To find the spectral function $A(\omega,\bar{p})$, a self-consistent set of functional equations has to be solved. The set contains equations for $\Gamma(\omega,\bar{p})$, $A(\omega,\bar{p})$ and the self-energy $\Sigma_c(\omega,\bar{p})$,
\begin{equation}
{\rm Re}\Sigma_c(\omega, {\bar{p}}) =\mathcal{P}\int\frac{d\omega'}{2\pi}\frac{\Gamma(\omega',\bar{p})}{\omega-\omega'},
\end{equation}
which again depends on $\Gamma(p)$. All functions depend on both energy and momentum, which makes the set of functional equations difficult to solve.
For simplicity, the set of equations is solved by iteration. The first step starts from $A(\omega, {\bar{p}})$ in the small-$\Gamma$ approximation, for which the Breit-Wigner form (Eq. (\ref{Breit_Wigner})) is used. In the second step, the obtained $\Gamma$ and $A$ are inserted back into Eqs.(\ref{func_sigma1}, \ref{func_sigma2}) to get the second approximation for $\Gamma$ and the second iteration for the spectral function $A(\omega, {\bar{p}})$.
The spectral functions at several temperatures are shown in the right panel of Fig.\ref{Gamma_Aw}. Since at T=0 the pion is a bound state, its spectral function is almost a $\delta$-function, located at the pole corresponding to the pion mass. With increasing temperature the width grows and the pion becomes a resonant state. Since at $T>0.15$ GeV the width decreases again, the spectral function at $T\sim 0.17$ GeV
is higher and narrower than at $T=0.15$ and $T = 0.188$ GeV. As mentioned above, we are restricted to the elastic-scattering approximation and cannot calculate the spectral function at temperatures above the critical temperature. For such studies the scattering phase-shift approach is more appropriate, as it accounts for both the mesonic bound state and the quark background \cite{Xia:2014bla}.
\section{Conclusion and outlooks}
The in-medium modification of hadron properties at high temperature and density can affect the observables and should be taken into account in the analysis of experimental data. One of these properties is the finite meson width, which can appear in dense matter. For light particles an increase in temperature leads to an increase of density, and collisions of particles occur at a much higher rate. This increases the probability of both the direct processes, which tend to decrease the lifetime of a particular state (or increase its width), and the inverse processes, which tend to increase it. The resulting width is related to damping: the process by which a disturbed many-particle system returns to an equilibrium state.
In this work the pion damping width and the pion spectral function are considered in a self-consistent approach in the framework of the NJL model. As shown in Fig. \ref{Gamma_Aw}, the pion width at $T<0.17$ GeV depends on the inclusion of the $\sigma$-meson, namely on allowing the decay $\sigma\rightarrow\pi\pi$ and the induced process $\pi+\pi\rightarrow\sigma$. The dashed line shows that without the $\sigma$-exchange diagram in the total scattering amplitude, no broadening of the pion width appears in this region. At $T\rightarrow T_c$, the system approaches the deconfinement phase and the quark-antiquark structure of the pion becomes more significant: the pion is then considered rather a resonant state and its width rises. We find that above the critical temperature $T>0.19$ GeV our approach ceases to be valid, and for a more detailed study of the width and the spectral function one should use the quark-antiquark phase-shift formalism developed in Refs.\cite{Xia:2014bla,Blaschke:2019col,Blaschke:2013zaa}.
As a final step, we try to estimate the effect of the $\rho$-meson on the pion damping width. This is interesting because a significant effect of including a finite pion width in $\rho$-meson observables has been discussed in the literature \cite{vanHees:2000bp}. Evaluating the effect of the vector channel on the $\pi\pi$ scattering amplitude in the framework of the NJL model is difficult, as the inclusion of the $\rho$-meson depends strongly on the parameters and on the quark mass, which determines whether the $\rho$-meson is a bound state \cite{He:1997gn,Jaminon:2002dx}. For simplicity, we use the mass equation obtained in the low-momentum expansion of Ref.\cite{Ebert:1992ag}, assuming the width $\Gamma_{\rho qq} = 0$ GeV.
\begin{figure}[!h]
\centerline{
\includegraphics[width = 0.6\textwidth]{Gamma_rho.pdf}}
\caption{The pion width as a function of temperature, taking into account the $\rho$-meson channel (green dot-dashed line), in comparison with the scalar channel only (black solid line).}
\label{Gamma_rho}
\end{figure}
The partial amplitudes $T_0, T_1, T_2$ in Eqs.(\ref{amp_a0}-\ref{amp_a2}) should then be extended by additional $\rho$-channel terms \cite{Cotanch:2002vj}:
\begin{eqnarray}
T_0^\rho &=& 2 g^2_{\rho\pi\pi}\left( (u-s) D_\rho(t)+ (t-s) D_\rho(u) \right),\label{amp_a0_rho} \\
T_1^\rho &=& g^2_{\rho\pi\pi}\left(2 (u-t) D_\rho(s) + (u-s) D_\rho(t) +(s-t) D_\rho(u) \right), \label{amp_a1_rho}\\
T_2^\rho & =& g^2_{\rho\pi\pi}\left((s-u)D_\rho(t)+(s-t)D_\rho(u) \right),
\label{amp_a2_rho}
\end{eqnarray}
with the meson propagator $D_\rho(x) \approx g_{\rho q q}^2(M_\rho^2- x)^{-1}$ and $M_\rho$, $g_{\rho q q}$ calculated in accordance with Ref. \cite{Ebert:1992ag}, $g_{\rho\pi\pi}= 5.14$. The result of this simple estimate of the vector-channel contribution to the pion damping width is presented in Fig. \ref{Gamma_rho} with a green dot-dashed line. As was shown in Ref.\cite{Jaminon:2002dx}, the inclusion of the $\rho$-channel in the $\pi\pi$-scattering amplitude in the low-momentum approach gives a non-vanishing contribution only to the $a_1^1$ scattering length, which is defined by the $T_1$ isospin amplitude, and does not contribute to $a_0^0$ and $a_0^2$ (the $T_0, T_2$ amplitudes). Hence, in this simplest approach we should not expect a strong effect of the $\rho$-meson on the pion damping width. A more detailed description of the $\rho$-meson and of $\pi-a_1$ mixing may have a more significant effect on the behaviour of the pion damping width in the hot pion gas.
\section{Introduction} The physics of flat bands has generated considerable
excitement over the years \cite{Tasaki1998,Derzhko2015,Leykam2018}. In a flat
band, the kinetic energy is completely suppressed; thus, transport is hindered
by a vanishing group velocity, and any kind of interaction is non-perturbative
in nature and can mix the extensive number of degenerate states in the flat
band, with the potential to create complex many-body states and phenomena. One
well known example of this mechanism at work is the fractional quantum Hall
effect, where interactions induce highly non-trivial behaviour of the electrons
in the degenerate Landau levels of a magnetic field.
Thus, flat band systems are well-suited for producing unconventional phenomena
\cite{Parameswaran2013,BERGHOLTZ2013, Derzhko2015}. For both fermions and
bosons, they allow one to realise the fractional quantum Hall effect in the absence of a
magnetic field \cite{Sheng2011,Wang2011,Neupert2011,Sun2011}, i.e. fractional
Chern Insulators, and at potentially high temperatures \cite{Tang2011}. Other
contexts include high-temperature superconductivity \cite{Imada2000,Peotta2015},
Wigner crystallisation \cite{Wu2007,Jaworowski2018}, realising higher-spin
analogs of Weyl-fermions \cite{Dora2011}, bands with chiral character
\cite{Ramachandran2017}, lattice super-solids \cite{Huber2010}, fractal
geometries \cite{Pal2018}, magnets with dipolar-interactions
\cite{Maksymenko2017}, and Floquet physics \cite{Du2017,Roman-Taboada2017}. Flat
bands of magnons also play a crucial role in determining the behaviour of
quantum magnets in magnetic fields
\cite{Schulenburg2002,Zhitomirsky2005,Schmidt2006,Derzhko2007}.
Interest in flat band physics is not restricted to the presence of interactions,
but also extends to their response to disorder, as the flat band states can turn
out to be critical, displaying multifractality \cite{Chalker2010}, or
unconventional localisation behaviour \cite{Flach2014,Bodyfelt2014,Leykam2017}.
They also appear in purely classical mechanical systems \cite{Perchikov2017},
and in the field of photonics \cite{Zong2016,Leykam2018a}. Quite recently, flat
bands have been experimentally demonstrated in a realistic Kagome material
\cite{Lin2018} as well as in optical lattices \cite{Jo2012}.
\begin{figure}
\begin{minipage}{0.99\columnwidth}
\includegraphics[width=.99\columnwidth]{{./figures/illustration_kagome_hopping}.pdf}
\end{minipage}
\caption{Kagome lattice with lattice vectors $a_1$ and $a_2$, shown is a
finite-size lattice with $L_x=L_y=3$, opposite edges are identified for
periodic boundary conditions. The model contains site-dependent nearest-neighbour tunnelings
$t_{ij}$ and chemical potentials $\mu_k$.
The highlighted sites correspond to a zero-energy flat band-state of the MCM, hexagon
and system-spanning loop (dark gray) or the BDM, double hexagon (black).
\label{fig:illustration_lattice}}
\end{figure}
In this work we consider non-interacting nearest-neighbour hopping models on the
Kagome lattice with correlated bond- and site-disorder, as illustrated in
Fig.~\ref{fig:illustration_lattice}. The simple nearest neighbour hopping model
on the Kagome lattice is known to host a degenerate flat band
\cite{Mielke1991,Mielke1991a,Mielke1992,Tasaki1992,Mielke1993} with a quadratic
band touching point believed to be topologically protected\cite{Bergman2008}.
However, in interacting many-body physics it is often preferable to work with a
gapped flat band to protect it from ``Landau-level mixing'', i.e.\ from
interactions with the dispersive bands.
Here, we explicitly construct a gapped flat band on the Kagome lattice. The
simplest setting in which it appears contains modulated bond and site-disorder,
both in presence of translational symmetry (where one can speak of a band) and
in absence of it, i.e. in the presence of random disorder, where one may still
identify an extensive manifold of degenerate states. In fact, we find that a
local perturbation to the Hamiltonian can open a gap above the flat band. This
indicates that the band-touching is protected not by topology alone but also
requires symmetry.
We obtain exact solutions for the flat band states of all of these models,
facilitating a clear interpretation of why the chosen type of correlated
site-bond-disorder does not lift the extensive degeneracy of the flat band, and
providing new insight into the stability of the flat bands and the protection of
the quadratic band-touching point. Our study also adds an example where
compactly localised Wannier-states can be explicitly constructed for a
disordered flat band model.
Our treatment extends previous observations on the flat band in kagome, such as
the observed stability of the flat band and band-touching points to breathing
anisotropy \cite{Essafi2017}, and opens up new perspectives: We show how to
selectively gap out the flat band, or the Dirac cones, or all bands. Thus, our
results reinforce the role of the kagome lattice as a platform for the study of
topological physics and flat band physics in general, in particular the physics
of perturbations and disorder in flat bands.
\section{Model}
We study non-interacting particles on the kagome lattice
\begin{equation}
\mathcal{H} = \sum_{\left<i,j\right>} \left( t_{ij} \hat{c}^{\dagger}_{i} \hat{c}_j +c.c. \right)+ \sum_i \mu_i \hat{n}_i \, ,
\end{equation}
with nearest-neighbour (complex) hoppings $t_{ij}$ between sites $i,j$ and
site-dependent chemical potentials $\mu_i $ at site $i$. In the models we
consider $\mu_i$ is given as a function of the couplings $t_{ij}$. The specific
correlation between the hopping and potential terms is motivated by a connection
to bond-disordered Heisenberg models \cite{Bilitewski2017} where it naturally
arises via an exact rewriting of the Hamiltonian.
The Hamiltonian can be compactly written via its matrix elements $H_{ij}$ as
$\mathcal{H} = \sum_{ij} c_i^{\dagger} H_{ij} c_j$. Noting that this only
connects nearest neighbours, and that every nearest neighbour pair belongs
either to an up or down triangle of the Kagome lattice, we rewrite the
Hamiltonian in the following way
\begin{align}
\mathcal{H} &= \mathcal{H}^{\vartriangle} + \mathcal{H}^{\triangledown} \\
H_{ij}^{\vartriangle / \triangledown} &=\begin{cases} \bar{\gamma}^{\vartriangle / \triangledown}_i \gamma^{\vartriangle / \triangledown}_j , & \text{for } i,j \in \alpha \\
0 , &\text{otherwise} \end{cases}
\label{eq:H_split}
\end{align}
where we first split it into its contribution on the up and down triangles, and
then define all couplings within a triangle $\alpha$ via site and triangle
dependent (complex) factors $\gamma^{\vartriangle/\triangledown}_{i}$.
This form makes the correlation between the hoppings and chemical potentials
explicit. Specifically, we have $t_{ij} = \bar{\gamma}_i^{\alpha}
\gamma_j^{\alpha}$ for sites $i,j$ in the triangle $\alpha$ and $\mu_i =
|\gamma_i^{\vartriangle}|^2+|\gamma_i^{\triangledown}|^2$. In the presence of
lattice-inversion symmetry $\mathcal{H}^{\vartriangle} =
\mathcal{H}^{\triangledown}$ and these factors become solely site-dependent. We
will refer to the model with lattice inversion symmetry as the maximal Coulomb
model (MCM), and with broken lattice inversion symmetry as the bond-disordered
model (BDM).
This also allows us to make an insightful connection to the Hamiltonian of the
non-disordered model: essentially, the disordered model can be understood as a
rescaling of the clean model by the $\gamma$ factors. Using that the Hamiltonian
is fully specified by its matrix elements $H_{ij}$, we can further split them as
a product of three matrices as
\begin{equation}
H^{\vartriangle/\triangledown} = \bar{\Gamma}^{\vartriangle/\triangledown} H^{\vartriangle/\triangledown}_0 \Gamma^{\vartriangle/ \triangledown}
\label{eq:H_factors}
\end{equation}
with $\Gamma^{\vartriangle/\triangledown}_{ij} = \delta_{ij}
\gamma_i^{\vartriangle/\triangledown}$, a diagonal matrix containing the scaling
factors, and $H_0$ the matrix of the clean system with $\gamma_i^{\alpha} \equiv
1$, describing the nearest neighbour hopping on the kagome lattice.
Making use of the form $\mathcal{H}=\sum_{ij} c^{\dagger}_i H_{ij} c_j$ the
action of the Hamiltonian on single particle states $\ket{\Psi}=\sum_i \psi_i
c_i^{\dagger} \ket{\mathrm{vac}}$ is simply
\begin{equation}
\mathcal{H} \ket{\Psi} = \sum_{ik} H_{ik} \psi_k c^{\dagger}_i\ket{\mathrm{vac}}= \sum_i (H \psi)_i c^{\dagger}_i \ket{\mathrm{vac}} \, .
\label{eq:H_action}
\end{equation}
From this we obtain the expectation value as
\begin{equation}
\expval{\Psi}{\mathcal{H}}{\Psi} = \sum_{ij} \bar{\psi}_i H_{ij} \psi_j = \sum_{\alpha} \left| \sum_{i\in\alpha}\gamma_{i}^{\alpha}\psi_i \right|^2 =
\sum_{\alpha} \left|\psi_{\alpha} \right|^2 ,
\end{equation}
where in the second equality we used the explicit form of the Hamiltonian,
Eq.~\ref{eq:H_split}, which splits into a sum over triangles $\alpha$, and in
the last equality defined the sum of scaled amplitudes within a triangle
$\psi_{\alpha} = \sum_{i \in \alpha} \gamma_{i}^{\alpha} \psi_i$.
Thus, exact zero-modes are states with $\psi_{\alpha}=0$ on all triangles
$\alpha$. This condition is typically referred to as a groundstate constraint in
the theory of frustrated magnets and is intimately connected to height mappings
and emergent gauge-theory descriptions of the groundstate phase. For spins the
condition $\psi_{\alpha}=0$ is more stringent: due to the unit-length constraint
it can only be fulfilled for not too disparate bond values, which is found to
lead to a phase transition of the model. In contrast, here it can be fulfilled
for arbitrary choices of the $\gamma$ factors.
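The triangle-by-triangle structure can be made concrete in a few lines of numpy (a single-triangle toy example, not the full lattice code): each triangle contributes a rank-one, positive-semidefinite block $H^{\alpha}_{ij}=\bar{\gamma}_i\gamma_j$, and any amplitude pattern with $\psi_\alpha=\sum_{i\in\alpha}\gamma_i\psi_i=0$ is annihilated by it:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma = rng.normal(size=3) + 1j * rng.normal(size=3)  # arbitrary complex factors

# single-triangle block H^alpha_ij = conj(gamma_i) gamma_j (Hermitian, rank 1)
H_tri = np.outer(np.conj(gamma), gamma)

# any state with sum_i gamma_i psi_i = 0 is an exact zero mode
psi = np.array([gamma[1], -gamma[0], 0.0])
assert abs(np.sum(gamma * psi)) < 1e-12
assert np.allclose(H_tri @ psi, 0.0)

# eigenvalues: two zeros and |gamma|^2 >= 0, so the full Hamiltonian, a sum
# of such blocks, is bounded below by zero and its zero modes are exactly
# the states annihilated triangle by triangle
evals = np.linalg.eigvalsh(H_tri)
print(np.round(evals, 12), np.vdot(gamma, gamma).real)
```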
\section{Construction of Flat Band states}
\sectionn{Exact Mapping of flat band for the MCM} The clean system is known to
host an exactly flat band at $E=0$ which touches the dispersive band at $q=0$
\cite{Bergman2008}.
In the non-disordered model ($\gamma_{i}^{\alpha}=1$), the ground state
condition $\psi_{\alpha} = \sum_{i \in \alpha} \psi_i=0$ reduces to the
requirement that the sum of amplitudes vanishes in every triangle. It is easy to check that the
states illustrated in Fig.~\ref{fig:illustration_lattice}, a hexagon loop with
alternating $+,-$ amplitudes and a system-spanning loop with alternating $+,-$ amplitudes,
satisfy this, and (less trivially) that these yield $N_s/3+1$ linearly
independent zero-energy states. Since the kagome lattice has 3 sites in the unit
cell and thus 3 bands, finding $N_s/3+1$ states at the same energy also implies
the band-touching.
For the MCM all these zero-modes of the clean system can be mapped
to zero-modes of the disordered model via
\begin{equation}
\Psi^{\mathrm{FB}}_{\mathrm{MCM}} = \Gamma^{-1} \Psi^{\mathrm{FB}}_{0} \, ,
\end{equation}
which follows directly from $H^{\vartriangle} = H^{\triangledown}$ in the MCM
together with Eq.~\ref{eq:H_factors} and Eq.~\ref{eq:H_action}, e.g. the
observation that the disordered model can be understood as a rescaling of the
clean model. Thus, we obtain an exactly flat band at $E=0$. This
further implies that the band touching point is preserved as well.
The flat band states of the MCM can therefore be characterised in the same way as
in the clean system \cite{Bergman2008}: the MCM has (a) $N_s/3+1$ zero-modes, (b) of
which $N_s/3-1$ can be chosen as linearly independent localised hexagon loop modes
and 2 as system-spanning delocalised loops (both types arising via the mapping
from the zero-modes of the clean system), and (c) the flat band is gapless,
touching the dispersive band. The two different types of states are
schematically illustrated in Fig.~\ref{fig:illustration_lattice}.
We emphasise that this is completely independent of the specifics of $\gamma_i$,
e.g. it holds true for translationally invariant, completely disordered,
positive, negative and sign-changing, and real or complex choices. In fact, it
holds true for a slightly more general model, where $\Gamma^{\vartriangle} = c
\, \Gamma^{\triangledown}$, which in particular includes the model with
breathing anisotropy.
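The mapping itself is elementary linear algebra and can be illustrated with a generic sketch (a random constraint matrix standing in for the triangle constraints, not the kagome model itself): if $S\psi = 0$, then $\phi = \Gamma^{-1}\psi$ obeys $(S\Gamma)\phi = 0$, so any invertible diagonal $\Gamma$, real or complex, preserves the full zero-mode count.

```python
# Sketch of Psi_MCM = Gamma^{-1} Psi_0 for a generic constraint matrix S:
# invertible diagonal rescalings preserve every zero mode.
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(5, 8))       # 5 constraints on 8 amplitudes -> 3 zero modes
# random non-zero complex couplings gamma_i
gamma = rng.uniform(0.5, 2.0, 8) * np.exp(2j * np.pi * rng.random(8))
G = np.diag(gamma)

# kernel basis of S from the right-singular vectors with zero singular value
V = np.linalg.svd(S)[2].conj().T[:, -3:]
assert np.allclose(S @ V, 0)

phi = np.linalg.inv(G) @ V        # mapped zero modes of the "disordered" model
assert np.allclose(S @ G @ phi, 0)
assert np.linalg.matrix_rank(S @ G) == np.linalg.matrix_rank(S)
print("diagonal rescaling preserves the zero-mode count")
```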
\sectionn{Construction of flat band for BDM}
\begin{figure}
\begin{minipage}{0.99\columnwidth}
\includegraphics[width=.99\columnwidth]{{./figures/illustration_double_hexagon}.pdf}
\end{minipage}
\caption{A double hexagon of the kagome lattice. The wavefunction of a BDM zero
energy state is localised on the black sites.
Note that the state occupies 11 sites that are part of 10 triangles; thus,
there are 11 degrees of freedom and 10 constraints which, together with the
wavefunction normalisation, imply a unique solution for such
a localised state.
\label{fig:illustration_double_hexagon}}
\end{figure}
We note that such a mapping is not possible for the BDM where $\Gamma$ differs
non-trivially between up- and down-triangles. Thus, it is not immediately
obvious that the BDM should host an extensively degenerate groundstate band and
if so whether the band-touching point is preserved.
We first summarise the findings and then provide a construction of the flat band
states. We find that (a) the BDM has $N_{s}/3$ exact zero-modes forming a flat
band, (b) the flat band states can all be localised, and (c) the flat band is
generically gapped.
We emphasise the last point, stating that it is possible to maintain the
flatness of the band while gapping it from the dispersive bands in contrast to
the claimed topological protection \cite{Bergman2008}. We will analytically show
this in the next section for the translationally invariant model, and provide
numerical evidence for disordered systems. In fact, it is sufficient to break
inversion symmetry by changing a single coupling $\gamma_{i}^{\vartriangle}$ to
create a gap to the flat band.
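Both the zero-mode count and the gap can be checked numerically. In the sketch below (our own construction and conventions), the weighted triangle constraints $\sum_{i\in\alpha}\gamma_i^\alpha \psi_i = 0$ are stacked into a matrix $M$ with independent couplings on up and down triangles, and the spectrum of $H = M^{\top}M$ shows exactly $N_s/3$ zero modes followed by a finite gap:

```python
# Check: the BDM keeps N_s/3 exact zero modes while opening a finite gap.
# H = M^T M, with M stacking the weighted triangle constraints
# sum_{i in alpha} gamma_i^alpha psi_i = 0 (independent up/down couplings).
import numpy as np

def triangle_rows(L, which):
    """Site indices of every up ('u') or down ('d') triangle; L x L cells, PBC."""
    def idx(x, y, s):
        return 3 * ((x % L) * L + (y % L)) + s
    rows = []
    for x in range(L):
        for y in range(L):
            if which == 'u':
                rows.append([idx(x, y, 0), idx(x, y, 1), idx(x, y, 2)])
            else:
                rows.append([idx(x, y, 0), idx(x + 1, y, 1), idx(x, y + 1, 2)])
    return rows

L, delta = 4, 0.3
n_sites = 3 * L * L
rng = np.random.default_rng(2)
M = np.zeros((2 * L * L, n_sites))
for r, tri in enumerate(triangle_rows(L, 'u') + triangle_rows(L, 'd')):
    M[r, tri] = rng.uniform(1 - delta, 1 + delta, 3)   # gamma_i^alpha

evals = np.sort(np.linalg.eigvalsh(M.T @ M))
n_zero = int(np.sum(evals < 1e-10))
print(n_zero)             # 16 = N_s/3 zero modes
print(evals[n_zero] > 0)  # finite gap above them
```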
We now explicitly construct the $N_s/3$ linearly independent localised states
forming the degenerate flat band. To do so, we consider a double hexagon of the
kagome lattice shown with our conventions for the site labels in
Fig.~\ref{fig:illustration_double_hexagon}. We note that such a state occupies
11 sites and these sites are part of 10 triangles of the kagome lattice. Each
triangle contributes one scalar constraint $\Psi_{\alpha}=0$; together with the
normalisation constraint, we might thus expect a unique solution on every
hexagon pair.
The resulting linear system of equations can be solved explicitly (see SM
\cite{supplemental}), and the wave-function amplitudes may be written as a
function of the coupling terms $\gamma_i^{\alpha}$ as $\Psi_i = \Psi_1 \,
f_i(\gamma_i^{\alpha})/D(\gamma_i^{\alpha}) $. This solution is only valid if
the determinant $D$ given by
\begin{equation}
D = \gamma^{\triangledown}_3 \gamma^{\triangledown}_5 \gamma^{\triangledown}_7 \gamma^{\triangledown}_9 \gamma^{\triangledown}_{11} \,
\gamma^{\vartriangle}_2 \gamma^{\vartriangle}_4 \gamma^{\vartriangle}_6 \gamma^{\vartriangle}_8 \gamma^{\vartriangle}_{10} - (\triangledown \leftrightarrow \vartriangle) \, ,
\end{equation}
is non-zero. This manifestly vanishes in the presence of inversion symmetry
($\gamma^{\vartriangle} =\gamma^{\triangledown} $), but is non-zero if inversion
symmetry is broken ($\gamma^{\vartriangle} \neq \gamma^{\triangledown} $).
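As a quick sanity check (our own), $D$ can be evaluated directly with the double-hexagon site labels of Fig.~\ref{fig:illustration_double_hexagon}: it vanishes identically for inversion-symmetric couplings and is generically non-zero once $\gamma^{\vartriangle} \neq \gamma^{\triangledown}$.

```python
# D vanishes iff the two coupling products around the double hexagon balance,
# which is automatic for inversion-symmetric couplings (index 0 unused).
import math
import random

random.seed(3)

def det_D(g_up, g_dn):
    odd, even = [3, 5, 7, 9, 11], [2, 4, 6, 8, 10]
    return (math.prod(g_dn[i] for i in odd) * math.prod(g_up[i] for i in even)
            - math.prod(g_up[i] for i in odd) * math.prod(g_dn[i] for i in even))

g = [random.uniform(0.5, 2.0) for _ in range(12)]
assert det_D(g, g) == 0                   # inversion symmetric: D = 0 exactly
g_up = [random.uniform(0.5, 2.0) for _ in range(12)]
g_dn = [random.uniform(0.5, 2.0) for _ in range(12)]
assert abs(det_D(g_up, g_dn)) > 1e-9      # generic BDM couplings: D != 0
```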
Therefore, in the BDM there is a unique localised state on every double-hexagon.
We have checked (numerically) that taking $L^2$ such double-hexagons tiling the
full kagome lattice does yield $L^2$ independent states, thus, providing a full
basis for the zero-energy states of the BDM, in contrast to the MCM and the
clean system, which require the system-spanning loop states \cite{Bergman2008}.
It is also easy to show that no such solution for a localised state is possible
on a single hexagon (see SM \cite{supplemental}), thus proving that the states
found here indeed form a maximally localised basis of the flat band manifold.
Typically, in the presence of interactions the size of the maximally localised basis
states strongly affects the behaviour of the model, and here we find that this
size doubles in the presence of infinitesimal disorder. In fact, the existence of a
compactly localised basis for flat bands is an open research question with
relations to the topology of the corresponding Bloch bands
\cite{Maimaiti2017,Maimaiti2018,Rhim2018}.
\section{Gapped flat bands}
It remains to show that the BDM flat band states are indeed gapped and do not
touch the dispersive bands, which we will show in the next sections both for
translationally invariant and generic disordered models.
\sectionn{Translationally invariant systems}
\begin{figure}
\begin{minipage}{0.99\columnwidth}
\includegraphics[width=.99\columnwidth]{{./figures/band_structure}.pdf}
\end{minipage}
\caption{ Dispersion along high-symmetry lines in the Brillouin zone.
From top left to bottom right: clean system, MCM with
$\gamma_A=\gamma_A^{\vartriangle}=\gamma_A^{\triangledown}<\sqrt{2}$, BDM with
$\gamma_A=2$ and BDM with $\gamma_A^{\vartriangle}= \frac{1}{\gamma_A^{\triangledown}}=\gamma_B^{\triangledown}=\frac{1}{\gamma_B^{\vartriangle}}= 0.5$.
\label{fig:band_structure}}
\end{figure}
We begin by considering translationally invariant systems with real couplings.
In that case the model has 6 (3) free parameters
$\gamma_{A,B,C}^{\vartriangle/\triangledown}$ for the BDM (MCM), i.e., the couplings
on the three sites (A,B,C) in a triangle of the kagome lattice, with different
couplings on the up and down triangles for the BDM.
In this case, one can analyse the model in momentum space, and analytical
results can be obtained (see SM \cite{supplemental}). We find that for every $q$
there is exactly one zero-mode, i.e. we find a flat band at $E=0$ for both the
BDM and MCM as anticipated from the construction of the zero-modes above.
Importantly, this allows us to obtain an analytic expression for the gap of the
BDM, thus, proving our claim that the BDM flat band can indeed be gapped.
We consider illustrative examples for the gap below; see the SM
\cite{supplemental} for the general expression. As the simplest model,
consider just $\gamma_A^{\vartriangle} \neq 1$; then the gap is
\begin{equation}
\Delta_{\mathrm{gap}}=\frac{1}{2} \left(5+ {\gamma_A^{\vartriangle}}^2 -\sqrt{\left( {\gamma_A^{\vartriangle}}^2+1 \right)^2 +16 \gamma_A^{\vartriangle}+16}\right)
\end{equation}
showing a quadratic scaling for small deviations away from the homogeneous
system.
A more symmetric arrangement can be obtained by considering
$\gamma_A^{\vartriangle} = \frac{1}{\gamma_A^{\triangledown}}
=\gamma_B^{\triangledown} =\frac{1}{\gamma_B^{\vartriangle}}=x$, which yields
the gap to the flat band as
\begin{equation}
\Delta_{\mathrm{gap}} = x^2 +x^{-2} -2 \, .
\end{equation}
We note that this allows us to cleanly separate the flat band by an (arbitrarily)
large gap from all dispersive bands, making the kagome lattice a prime platform
to study physics in flat bands.
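Both gap expressions are easy to verify numerically (our check): the single-coupling gap vanishes quadratically at $\gamma_A^{\vartriangle}=1$, and the symmetric choice gives $x^2 + x^{-2} - 2 = (x - 1/x)^2 \geq 0$, which grows without bound as $x \to 0$ or $x \to \infty$.

```python
# Numerical check of the two gap expressions quoted above.
import numpy as np

def gap_single(g):
    """Gap for gamma_A^up = g, all other couplings equal to 1."""
    return 0.5 * (5 + g**2 - np.sqrt((g**2 + 1)**2 + 16 * g + 16))

assert abs(gap_single(1.0)) < 1e-12                  # clean limit: gapless
eps = np.array([1e-2, 1e-3])
curv = gap_single(1 + eps) / eps**2
assert np.allclose(curv[0], curv[1], rtol=0.1)       # quadratic opening

x = np.linspace(0.2, 5, 50)
assert np.allclose(x**2 + x**-2 - 2, (x - 1 / x)**2)  # symmetric-model gap
print("both gap formulas check out")
```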
We show dispersion relations along high-symmetry lines in the Brillouin zone for
the clean model, the MCM and the BDM in Fig.~\ref{fig:band_structure}. We
emphasise that both models clearly retain an exactly flat band at $E=0$. As
discussed above the MCM always retains the band-touching point at $q=0$
($\Gamma$ point), but the Dirac-points can be gapped for large perturbations
(not shown).
In contrast, in the BDM the flat band is always gapped, as seen in the lower
panel of Fig.~\ref{fig:band_structure}, already for infinitesimal changes in the
couplings. Just changing a single coupling generically gaps both the flat band
and the Dirac points (lower left panel). For the symmetric choice described
above, the flat band is gapped, but the Dirac points remain gapless (lower right
panel).
In summary, we have shown that we can selectively gap out the flat bands and
keep the Dirac cones or gap out the Dirac cones, but keep the quadratic band
touching point, or gap out all bands.
\sectionn{Local Perturbation} Before considering fully disordered models it is
insightful to understand the effect of a local perturbation to the system. For a
topologically protected band-crossing one would expect the resulting gap to
scale to zero exponentially in system size.
We modify the Hamiltonian locally by changing a single coupling
$\gamma^{\vartriangle}$, affecting one site potential $\mu$ and two tunnel
couplings $t$. As shown in Fig.~\ref{fig:gaps}(a), we observe a linear
decrease of the gap with the inverse number of sites, $\sim N_s^{-1}$, consistent
with the gap closing in the thermodynamic limit. However, the decay is clearly
not exponential, as would be expected for a topologically protected degeneracy.
\begin{figure}
\begin{minipage}{0.49\columnwidth}
\includegraphics[width=.99\columnwidth]{{./figures/gap_local_perturbation}.pdf}
\end{minipage}
\begin{minipage}{0.49\columnwidth}
\includegraphics[width=.99\columnwidth]{{./figures/gap_random_bond}.pdf}
\end{minipage}
\caption{(a) Gap of the flat band in the presence of a local perturbation versus
inverse number
of sites $N_s$ on a log-log scale showing a linear scaling with inverse
number of sites $\sim N_s^{-1}$.
(b) Gap of the flat band for the fully disordered system versus inverse
number of sites for different disorder strengths $\delta$.
\label{fig:gaps}}
\end{figure}
\sectionn{Disordered Systems} Next, we consider fully disordered models with
random choices for $\gamma^{\vartriangle/\triangledown}_{i}$. As an example we
consider a box-uniform distribution $\gamma \in [1-\delta,1+\delta]$. However,
we emphasise that this specific choice is not relevant and the conclusions hold
true for any generic disorder distribution.
The gap to the flat band versus inverse system size for a range of values of
$\delta$ is shown in Fig.~\ref{fig:gaps}(b). It extrapolates to a finite value
in the thermodynamic limit for $\delta < 1$, and scales as $\delta^2$ for small
disorder strengths. Thus, we conclude that disorder of this type gaps out the
flat band, even for infinitesimal disorder strength.
We also note in passing that the finite gap implies that the projector onto the
flat band decays exponentially for the BDM, but decays algebraically for
the gapless MCM.
\sectionn{Flat Band Ferromagnetism in a disordered model} Flat bands are known
to host ferromagnetic phases in the presence of repulsive interactions
\cite{Mielke1991,Mielke1991a,Mielke1992,Tasaki1992,Mielke1993,Tasaki1996}. The
presence of a gap to the flat band in our model ensures that the many-body
groundstate at filling $n=1/6$ is the unique fully-saturated ferromagnetic
state.
To see this in our model of disordered flat bands, we consider a fermionic
version with repulsive Hubbard interactions,
\begin{equation}
\mathcal{H} = \sum_{\left<i,j\right>,\sigma} \left( t_{ij} \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)+ \sum_i \mu_i \hat{n}_i + U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}\, ,
\label{eq:Hubbard}
\end{equation}
for spin-1/2 fermions, with $\hat{n}_i = \hat{n}_{i\uparrow}+ \hat{n}_{i\downarrow}$; $t_{ij}$ and
$\mu_i$ are chosen as above, and we consider the BDM, for which the
non-interacting flat band is gapped.
Since for $U>0$ the interaction term is non-negative, and the kinetic energy is
positive semi-definite by construction, many-body states with $E=0$ are necessarily
groundstates.
One groundstate is easily obtained by filling the non-interacting flat band
completely with polarised spins which do not interact. Thus, we have at filling
$n = 1/6$ a ferromagnetic groundstate with maximal spin $S=L^2/2$, with the full
$(2S+1)$-fold degeneracy due to the $SU(2)$ symmetry of the model. The main
question to obtain ferromagnetism is whether this groundstate is unique, or if
there are additional non-magnetic states as well. Here, it turns out that the
groundstate is gapped, since the non-interacting band-structure has a finite gap
for the BDM.
We performed exact diagonalisation of the Hubbard model, Eq.~\ref{eq:Hubbard},
on small finite-size kagome clusters ($2 \times 2$, $2\times 3$) to confirm that
the groundstate is indeed of the described form.
Finally, due to the presence of a spectral gap, we expect the ferromagnetism to
be stable to finite perturbations and fluctuations in the particle number.
Indeed, ferromagnetism is expected to be enhanced compared to the usual kagome
case, since the localised non-interacting states now contain two hexagons.
\section{Outlook}
Demonstrating that the flat bands of the kagome lattice can be gapped opens it
up as a prime platform for studying topological and more general flat band
phenomena cleanly, i.e. isolated from the dispersive bands by an arbitrarily
large gap.
In addition, the presence of a flat band in a disordered model is highly
non-trivial and of general interest even if it requires fine-tuning between the
hopping and site-potential terms.
In terms of realisations of the specific type of couplings, we recall that this
model is naturally realised in the large-$N$ limit
\cite{Stanley_1968,Garanin_1999} of a classical nearest-neighbour
bond-disordered Heisenberg-(Anti)ferromagnet, where the correlation between
site- and bond-disorder arises from the spin length constraint. In other
settings it is unlikely that bond- and site-disorder are correlated in the
required way, thus, the system would need to be specifically designed. In this
case we envision it would be considerably easier to realise the translationally
invariant model, reducing the required number of parameters that have to be
tuned (for the minimal model, one site potential and two tunnelling couplings
per unit cell). This might be feasible in cold-gas
setups where control over individual sites and bonds is possible by the use of
quantum gas microscopes.
In terms of topological properties of the flat band, we note that fluxes in the
MCM model are trivial by construction (since they can be removed by a unitary
gauge transformation). The BDM model in contrast supports non-trivial fluxes
along the hexagon loops of the lattice. However, since in the BDM model all
states of the flat band can be chosen localised, the non-interacting model is
necessarily topologically trivial \cite{Read2017}.
Our model also presents a natural realisation of flat band ferromagnetism on the
kagome lattice, where the gap of the single-particle spectrum results in a
unique gapped fully saturated ferromagnetic many-body state in the presence of
repulsive on-site interactions. We reserve the further discussion of interacting
many-body phases in the gapped flat band and the effects on the magnon bands of
magnets for future work.
It might also be interesting to explore the effect of longer-range interactions
on the flat bands of this model which have recently been found to be remarkably
stable for the non-disordered model \cite{Maksymenko2017}.
\sectionn{Acknowledgements} We thank J. Richter for insightful discussions and
comments on a first draft of this paper. This work was in part supported by
Deutsche Forschungsgemeinschaft via SFB 1143.
\section{Introduction}
Massive stars as well as their stellar remnants created after supernova (SN) events spatially coexist in their parent molecular cloud (MC) \citep{mck07}.
The accelerated particles created at the SN shock fronts and/or in star-forming regions (SFRs) can cause emission at $\gamma$-ray energies as the result of their interaction with ambient photons and/or with the interstellar medium (ISM) in which these objects evolve \citep{banik+17, rom10}.
Knowledge of the physical conditions of the ambient matter in bright $\gamma$-ray emitting molecular complexes is particularly important for obtaining information on the creation and distribution of Galactic cosmic rays. In this regard, surveys designed to cover the large-scale distribution of different molecular species and of atomic hydrogen \citep{hey15, kal09} constitute an ideal framework to assess these matters.
We here present a revisited analysis of the environs of the supernova remnant (SNR) {\kes}, where an excess in $\gamma$ rays at GeV energies was detected with the {\itshape Fermi}-Large Area Telescope (LAT) \citep{liu+15}. The interaction of {\kes} with nearby molecular gas has been known since the 1990s with the discovery of OH-1720~MHz maser emission \citep{koralesky+98}. A small portion of this gas, about $\sim$8$^{\prime}$ in size and with a radial velocity of $\sim$$-$50~km~s$^{-1}$, was investigated by \citet{zhang+15}. Motivated by the results reported by these authors, we have begun the analysis of the ISM in a significantly larger region using new information on the molecular $^{12}$CO and $^{13}$CO line emission from The Mopra Southern Galactic Plane CO Survey, as well as neutral hydrogen and mid-infrared data from the Southern Galactic Plane Survey (SGPS) and {\Spitzer}, respectively. From our study we identified for the first time the giant parent molecular complex in which {\kes}, a number of massive young stellar objects, and star-forming regions are mixed.
The paper is organised as follows: Sect.~\ref{data} describes the observations, and the complete characterisation of the discovered cloud along with the proton content determination are presented in Sect.~\ref{counterparts}. Sect.~\ref{summary} summarises our findings. The implications of the current study on the production of the observed $\gamma$ rays along with the modelling of the broadband emission from radio to $\gamma$-ray energies to understand the relative contribution of leptons and hadrons in the {\kes} region will be addressed in a companion paper (Supan et al. 2018b).
\section{Observations and data analysis}
\label{data}
\subsection{$^{12}$CO and $^{13}$CO($J$ = 1 $-$ 0) observations from The Mopra Southern Galactic Plane CO Survey}\label{data-co}
We used observations of the {\twelveCO} and {\thirtyCO} {\COtrans} rotational transition taken from The Mopra Southern Galactic Plane CO Survey \citep[hereafter, CO Mopra survey;][]{burton+13} to study the large-scale molecular structures in the region of SNR~{\kes}. For both CO isotopologues, the half-power beam width (HPBW) is 36{\as} at the observing frequencies, while the velocity resolution in each dataset is $\sim$0.09~{\kms}. Compared with the earlier CO survey of \citet{dam01}, the spatial and spectral resolution of the new Mopra data is about 14 times better in the region in which we are interested.
Both the {\twelveCO} and {\thirtyCO} ({\COtrans}) lines are important to obtain a comprehensive view of the molecular material. While the optically thick emission from the {\twelveCO} is in general significantly more intense than that observed in the {\thirtyCO} line, the latter is a more faithful tracer of the denser regions in the molecular gas. Therefore, we used both lines to determine the physical parameters of the molecular gas, as follows. First, we computed the integrated emission of the {\twelveCO} in a region $R$ and in a given velocity range $\Delta v$ from the relation
\begin{equation}\label{W12CO}
W_{^{12}\mathrm{CO}} = \frac{1}{\eta_\mathrm{XB}} \iint_R \int_{\Delta v} T_{^{12}\mathrm{CO}}(l,b,v) \, dl \, db \, dv,
\end{equation}
\noindent
where $T_{^{12}\mathrm{CO}}$ corresponds to the brightness temperature of the {\twelveCO} emission. The factor $\eta_\mathrm{XB}$ is included as a correction made to account for the extended beam efficiency of the Mopra telescope, which at the observing frequency for the {\twelveCO} (115.27~GHz) is $\eta_\mathrm{XB}$ = 0.55 \citep{ladd+05}.
Then, we calculated the integrated column density of the H$_{2}$ directly by adopting the CO-to-H$_{2}$ conversion factor {\Xco} = $N(\mathrm{H}_2) / W_{^{12}\mathrm{CO}}$ = $2.0\times10^{20}$~\cm{-2}~(K~\kmps)$^{-1}$, which has an estimated uncertainty of about 30\% \citep{bolatto+13}.
On the other hand, in the case of {\thirtyCO}, under the hypothesis of local thermodynamic equilibrium, it is possible to calculate the total column density $N(^{13}\mathrm{CO})$ according to
\noindent
\begin{equation}\label{N13CO}
N(^{13}\mathrm{CO}) = \frac{1}{\eta_\mathrm{XB}} 2.42 \times 10^{14} \frac{T_\mathrm{ex}+0.88}{1-e^{-5.29/{T_\mathrm{ex}}}} \iint_R \int_{\Delta v} \tau_{13}\, dl \, db \, dv ,
\end{equation}
\noindent
where $T_\mathrm{ex}$ represents the excitation temperature of the {\COtrans} transition and $\tau_{13} = \tau_{13}(l,b,v)$ is the corresponding optical depth of the {\thirtyCO} ({\COtrans}) spectral line \citep{wilson+13}. For this line (observed at 110.20~GHz), the corresponding correction is $\eta_\mathrm{XB}$ = 0.55 \citep{ladd+05}.
Under the assumption that the {\thirtyCO} line is optically thin, the following approximation holds \citep{wilson+13}:
\noindent
\begin{equation}\label{tau13}
\int_{\Delta v} \tau_{13} dv \approx \frac{1}{J(T_\mathrm{ex})-J(T_\mathrm{bkg})} \int_{\Delta v} T_{^{13}\mathrm{CO}}(v) \, dv ,
\end{equation}
\noindent
and the radiation temperature $J(T)$ is defined as
\begin{equation}\label{J}
J(T) = \frac{5.29}{e^{5.29/T}-1} .
\end{equation}
In Eq.~\ref{tau13}, $T_{^{13}\mathrm{CO}}(v)$ denotes the brightness temperature of the emission for a gas moving at the radial velocity $v$, while $T_\mathrm{bkg}$ indicates the temperature of the background, $\sim$2.7~K. After we calculated the {\thirtyCO} column density, $N(\mathrm{H}_2)$ was determined using the abundance ratio $N(\mathrm{H}_2) / N(^{13}\mathrm{CO}) = 5.6 \times 10^5$ given by \citet{simon+01}.
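The two routes to $N(\mathrm{H}_2)$ described above can be sketched as follows. The Gaussian line profiles, the excitation temperature, and the single-temperature LTE treatment below are illustrative assumptions, not survey measurements:

```python
# Sketch of the two column-density routes for a hypothetical line of sight.
import numpy as np

eta_xb = 0.55                  # Mopra extended-beam efficiency
X_CO = 2.0e20                  # cm^-2 (K km/s)^-1, Bolatto et al. (2013)
T_ex = 10.0                    # K, assumed excitation temperature

def J(T):
    """Radiation temperature, Eq. (4), with T0 = 5.29 K for 13CO (1-0)."""
    return 5.29 / (np.exp(5.29 / T) - 1.0)

# toy Gaussian line profiles (K) on a velocity grid (km/s)
v = np.linspace(-71.0, -41.0, 300)
dv = v[1] - v[0]
T12 = 8.0 * np.exp(-0.5 * ((v + 56.0) / 2.5) ** 2)
T13 = 1.5 * np.exp(-0.5 * ((v + 56.0) / 2.5) ** 2)

# Route 1: 12CO with the X factor, Eq. (1)
W12 = T12.sum() * dv / eta_xb
NH2_12 = X_CO * W12

# Route 2: 13CO under LTE, Eqs. (2)-(3), then the abundance ratio
tau_int = T13.sum() * dv / (J(T_ex) - J(2.7))
N13 = (2.42e14 / eta_xb) * (T_ex + 0.88) / (1.0 - np.exp(-5.29 / T_ex)) * tau_int
NH2_13 = 5.6e5 * N13

print(f"N(H2) from 12CO: {NH2_12:.2e} cm^-2")
print(f"N(H2) from 13CO: {NH2_13:.2e} cm^-2")
```

For these toy profiles the two routes agree to within a factor of a few, as expected when the assumed excitation temperature is reasonable.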
\subsection{HI observations}
\label{data-hi}
The emission of the neutral hydrogen (HI) 21 cm line was extracted from the Southern Galactic Plane Survey \citep[SGPS,][]{mcclure+05}, which combines observations carried out with the interferometer Australia Telescope Compact Array (ATCA) and the 64 m single-dish telescope of Parkes. These data have an angular resolution of \dms{2}{\prime}{2} and a separation between consecutive velocity channels of $\sim$0.8~{\kms}. The rms noise per velocity channel is $\sim$1.6~K.
The atomic contribution to the proton content of the medium can be calculated in the optically thin limit by integrating the HI emission in a velocity range $\Delta v$, using the expression \citep{dickey-lockman-90}
\begin{eqnarray}\label{NHI}
N(\mathrm{HI}) &=& 1.823 \times 10^{18} \iint_R \int_{\Delta v} T(l,b,v)\, dl \, db \, dv ,
\end{eqnarray}
\noindent
where $T(l,b,v)$ is the brightness temperature of the emission of the HI gas moving at a radial velocity $v$ along the $(l,b)$ direction, within a region $R$.
\section{Results}
\label{counterparts}
Figure~\ref{field} displays a three-colour composite map centred at ($l$, $b$)$\simeq$ (\dms{337}{\circ}{8}, \dms{0}{\circ}{0}) corresponding to a large field of about 55{\am}~$\times$~45{\am} in the direction of SNR~{\kes}. This representation depicts the mid-infrared emission at 8~$\mu$m and 24~$\mu$m from {\Spitzer} GLIMPSE and MIPSGAL images \citep{churchwell09, carey09}, along with the radio continuum emission at 843~MHz detected with a $\sim$1{\am} angular resolution in the
Sydney University Molonglo Sky Survey \citep[SUMSS,][]{bock+99}.
The radio- and infrared-emitting objects in the surveyed region are labelled in the figure, and their properties are analysed in this section. The interstellar gas emission observed by Mopra in the $^{12}$CO {\COtrans} line, integrated in the velocity range from $\sim$$-$71 to $-$41~km~s$^{-1}$ (velocities always refer to the local standard of rest, LSR), is traced by white contours (we return to this result in Sect.~\ref{MC-distance}). In addition, the statistical significance of the GeV emission as revealed by {\Fermi} is overlaid on the infrared and radio image.
The $\gamma$-ray data drawn in Fig.~\ref{field} correspond to an updated analysis of the high-energy emission on the basis of about nine years of observations with the {\Fermi} telescope. The morphological and spectral modelling from radio- to $\gamma$-ray energies will be investigated in Supan et al. (2018b). In the following, we focus on the properties of the molecular gas in the large molecular complex revealed by the CO Mopra Survey.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.75\textwidth]{AA-2018-33183_figure1_low-res.eps}
\caption{Large-scale three-colour intensity map of the $\gamma$-ray emitting region around SNR~{\kes}: Mid-infrared data at 8 and 24~$\mu$m from MIPSGAL and GLIMPSE {\Spitzer} are displayed in green and red, respectively, while the radio continuum emission at 843~MHz from {\kes} and the HII regions is shown in blue.
Dashed red contours correspond to statistical significances of 20, 22, 25, and 27$\sigma$ determined from the re-analysis of the $\gamma$-ray photons detected by {\Fermi} in the 0.5-400~GeV energy range (Supan et al. 2018b).
White contours trace the molecular gas emission in the $^{12}$CO ({\COtrans}) line integrated from $-$71 to $-$41~{\kmps} at levels of 87, 101, 119, and 137~K~{\kmps}. Objects are labelled with their names or numbers (see also information in Table~\ref{table_kes41_HIIs}):
{\kes} and the OH maser \citep{koralesky+98} [in blue]; HII regions [numbers from 1 to 8] \citep{jones+12}; Class~II 6.7~GHz CH$_{3}$OH masers [filled yellow diamonds] \citep{caswell+10_MeOH}; and massive young stellar object candidates [upright filled red triangles] \citep{lumsden13}.}
\label{field}
\end{figure*}
\subsection{CO molecular emission}
\label{MC-distance}
As a first step, we demonstrate that the molecular gas emission depicted in Fig.~\ref{field} is not the result of a chance coincidence along the line of sight, but corresponds to a single large cloud identified for the first time in the new data collected by the CO Mopra Survey. The hydroxyl (OH) maser emission detected at 1720~MHz is produced at the interface between a small interacting cloud and SNR~{\kes} \citep{koralesky+98}, which are both immersed in the large cloud unveiled here by the CO Mopra survey data.
To establish a reliable distance estimate for all the molecular material, we used the $^{12}$CO and $^{13}$CO {\COtrans} data in conjunction with spectral-line SGPS observations of the HI. As test cases, Figs.~\ref{COHIspectra}a, b show two sets of CO and HI spectra extracted from two 2$^{\prime}$$\times$2$^{\prime}$ regions, 14$^{\prime}$ and \dms{5}{\prime}{3} away from the maser spot, respectively. These test areas, which we denote Boxes 1 and 2, are overlaid on the integrated intensity image of $^{13}$CO in Fig.~\ref{COHIspectra}c.
The depicted $^{13}$CO distribution corresponds to the molecular gas in the $-71$ to $-41$~km~s$^{-1}$ range (the same interval as considered in the display presented in Figs.~\ref{field} and ~\ref{COframes}) in which the molecular emission is prominent. In the spectra corresponding to Box~1, a peak at $\sim$$-$66~km~s$^{-1}$ in the line emission from $^{12}$CO can be discerned within
the surveyed velocity range.
According to the circular rotation curve model of the Galaxy by \citet{fich+89}, kinematic distances of 4 and 12~kpc are associated with this velocity in the fourth quadrant of the Galaxy. However, the correlation of the CO peak with a maximum in the HI profile permits us to place the interstellar gas in Box~1 at the far kinematic distance of $\sim$12~kpc, which is in broad agreement with the distance calculated to the OH maser emission \citep{koralesky+98}. If the gas within Box~1 were located at the near distance, the cold HI embedded in the molecular gas would absorb the warm HI background emitting at the radial velocity of the CO gas (i.e. $\sim$$-$66~km~s$^{-1}$), and this would be noted by a self-absorption of the HI 21~cm line correlated with the CO emission from the cloud.
This is in fact what is observed for the gas inside Box~2, for which a valley at $\sim$$-$57~km~s$^{-1}$ in coincidence with a CO peak is clearly visible.
We interpret this spectral feature as a signature of HI self-absorption, indicating that part of the molecular gas seen in projection inside the CO complex is located at the near position of 4~kpc. It should also be mentioned that the deepest self-absorption line observed for this clump, at $-$39~km~s$^{-1}$, was excluded because it lies outside the selected velocity range in which the CO gas was integrated.
After inspecting HI and CO spectra constructed over the complete extent of the molecular material shown in Fig.~\ref{COHIspectra}c, we also found weakly pronounced HI valleys anti-correlated with the CO line emission between $-$52.5 and $-$48~km~s$^{-1}$. From our analysis, we estimated that in the velocity range of interest ($-$71 to $-$41~km~s$^{-1}$), only $\sim$1.5\% of the molecular gas seen in projection as part of the large molecular cloud is placed at the near distance of 4~kpc.
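The near/far ambiguity invoked above can be reproduced with a simple sketch. We assume a flat rotation curve with the IAU values $R_0 = 8.5$~kpc and $V_0 = 220$~km~s$^{-1}$ rather than the \citet{fich+89} curve used in the text, so the resulting distances differ slightly from the quoted 4 and 12~kpc:

```python
# Kinematic-distance ambiguity for the CO peak in Box 1 under a flat
# rotation curve (illustrative assumption, not the curve used in the paper).
import numpy as np

R0, V0 = 8.5, 220.0              # kpc, km/s
l = np.deg2rad(337.8)            # Galactic longitude of the field
v_lsr = -66.0                    # km/s, CO peak in Box 1

# Galactocentric radius from v_lsr = V0 * sin(l) * (R0/R - 1)
R = R0 / (1.0 + v_lsr / (V0 * np.sin(l)))

# near/far distances along the line of sight
root = np.sqrt(R**2 - (R0 * np.sin(l))**2)
d_near = R0 * np.cos(l) - root
d_far = R0 * np.cos(l) + root
print(f"near ~ {d_near:.1f} kpc, far ~ {d_far:.1f} kpc")
```

Both solutions are compatible with the observed radial velocity; only the HI absorption argument above discriminates between them.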
We thus strongly emphasise the importance of the CO Mopra survey in mapping the large-scale structure of the molecular gas. Only in this way are we able to conclude that
most of the gas with velocities between $-$71 and $-$41~{\kms} is at a distance of 12.0$\pm$3.6~kpc and forms the natal cloud of the remnant {\kes}, the HII regions, and the massive stellar activity observed in this part of the Galaxy. The main source of error in this estimate is related to uncertainties inherent to the circular rotation model. We quantitatively discuss in Sect.~\ref{protons} the physical conditions of the ambient matter.
After we determined that a single molecular cloud overlaps (in projection) the $\gamma$-ray emission detected at GeV energies, we proceed to investigate the spatial
distribution of the CO gas linked to the complex mix of thermal and nonthermal components within it. To do this, we constructed maps corresponding to the {\twelveCO} {\COtrans} rotational line emission integrated every 7.5~km~s$^{-1}$. This is shown in Fig.~\ref{COframes}, where a few representative radio continuum and $\gamma$-ray contours are superimposed to facilitate comparison. At the corresponding velocity interval, each
frame also includes grey contours indicating the integrated $^{13}$CO emission. Even though the $^{13}$CO is a better tracer of the denser structures (typically $\sim$10$^{3}$~cm$^{-3}$) than the $^{12}$CO, we found that the global distribution of the integrated $^{13}$CO {\COtrans} around the $\gamma$-ray source is similar to that observed for the $^{12}$CO molecular emission in the same transition, where the two gases share peak positions, shapes, and sizes.
Based on the spatial distribution of the molecular gas, studied over its full extent for the first time in this work, it is evident that the cloud spans about 28$^{\prime}$$\times$18$^{\prime}$ (at the estimated distance of 12~kpc, the cloud size is $98 \times 63$~pc). The main part of this cloud lies within the outermost confidence contour of the $\gamma$-ray emission shown in Fig.~\ref{COframes}.
A local inspection of the cloud shows a bright and roughly asymmetric molecular structure, about 5$^{\prime}$$\times$3$^{\prime}$ in size, which, centred at ($l$, $b$)=(\dms{337}{\circ}{77}, \dms{-0}{\circ}{015}), lies close to the SNR (velocity range between $-$63.3 and $-$48.5~km~s$^{-1}$).
As outlined above, this region contains the OH maser spot and constitutes a fraction of the molecular material that directly interacts with {\kes}. Additionally, maxima are readily distinguished in the large field of view near ($l$, $b$)=(\dms{337}{\circ}{72}, \dms{-0}{\circ}{065}) and ($l$, $b$)= (\dms{337}{\circ}{73}, \dms{-0}{\circ}{053}) in spatial coincidence with the radio emission from the HII regions in the field (see panels b) and c)). This is an expected result since, as we show below, these correlations correspond to molecular components related to regions of new stellar activity.
Finally, we did not find any evidence of a shell-like structure in the gas distribution that is morphologically related to SNR~{\kes}, as claimed by \citet{zhang+15} based on their analysis of the molecular gas. This discrepancy may arise because the earlier study was performed in a region that represents only a small fraction of the total extent of the cloud shown in Fig.~\ref{COframes}. For comparison purposes, the small rectangular region used by \citet{zhang+15} to calculate the molecular gas parameters is also shown in Fig.~\ref{COframes}b.
\begin{figure*}[!ht]
\centering
\includegraphics[width=16cm]{AA-2018-33183_figure2_low-res.eps}
\caption{{\bfseries a)}, {\bfseries b)} Test CO and HI 21 cm spectra towards two selected regions, labelled Box 1 and 2, used to determine the distance to the $\gamma$-ray emitting molecular gas. Both areas are depicted in panel {\bfseries c)}. The shaded area corresponds to the LSR velocities in which the molecular emission is dominant. The correspondence between the atomic and molecular emission indicates that the molecular material in Box~1 is located at the far kinematic distance of $\sim$12~kpc. We found similar results (not shown here) across most of the CO distribution corresponding to the large cloud. HI self-absorption inside Box~2 revealed that superposed along the line of sight are a few molecular components that lie in front of the large cloud (see discussion in the text).
{\bfseries c)} Integrated channel map of $^{13}$CO ({\COtrans}) from the Mopra Survey data over the LSR velocity range $-$71 to $-$41~km~s$^{-1}$ (the velocity interval shaded in {\bfseries a)} and {\bfseries b)}). Contours at 0.09, 0.3, 0.5, and 0.7~Jy~beam$^{-1}$ (in green) from the 843~MHz SUMSS data are included to facilitate the comparison.
The plus marks the OH maser spot \citep{koralesky+98}.}
\label{COHIspectra}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{AA-2018-33183_figure3_low-res.eps}
\caption{Molecular gas distribution as traced by the $^{12}$CO ({\COtrans}) (in colours) and the $^{13}$CO ({\COtrans}) (in gray contours) data from the Mopra Survey.
The range of velocities is indicated at the top right corner of the panels. To facilitate the multiwavelength correlations between the remnant {\kes}, the star-forming regions, and the $\gamma$-ray emitting area with their surroundings, the same radio contours (in white) as in Fig.~\ref{COHIspectra} were included in each panel. The $\gamma$-ray emission is drawn as red contours at 20, 24, and 27$\sigma$ (Supan et al. 2018b) only in panels {\bfseries b)} and {\bfseries c)}.
The areas used in Sect.~\ref{protons} to calculate the ambient properties are represented by the ellipses E1 and E2 included in panels {\bfseries b)} and {\bfseries c)}, respectively.
The blue rectangle included in panel {\bfseries b)} marks the smaller field studied by \citet{zhang+15}.
The plus in panel {\bfseries d)} marks the OH maser spot \citep{koralesky+98}.}
\label{COframes}
\end{figure*}
\subsection{Star-forming activity in the GeV-emitting region}
\label{SFR}
In order to obtain a comprehensive multiwavelength picture of the region in which we are interested, we also explored the emission at infrared wavelengths, since this spectral range is especially well suited to revealing gas structures in their ionised form, such as ionised hydrogen (HII) and star-forming regions.
Figure~\ref{field} clearly shows thermal gas in the field. The correspondence of the emission at mid-infrared wavelengths with the radio continuum emission is seen as white regions in and around the site where the $\gamma$-ray excess is observed.
All known HII regions in the Galaxy show this emission distribution, in which the radiation at 24~$\mu$m from the hot dust grains is limited by a polycyclic aromatic hydrocarbon (PAH) region shining at 8~$\mu$m. Eight catalogued HII regions are distributed in the region of the molecular cloud and the $\gamma$-ray source; they are labelled in Fig.~\ref{field} and tabulated in Table~\ref{table_kes41_HIIs} \citep{jones+12}.
Especially noticeable are the ionised gas regions labelled from 1 to 6 in Fig.~\ref{field}, which show a clear match with the $\gamma$-ray excess. The HII regions in the field of view, with radio recombination line velocities similar to that of the detected molecular cloud, have measured distances between $\sim$11 and 12.3~kpc, in accord with the distance that we established in Sect.~\ref{MC-distance} for the molecular cloud (see also Table~\ref{table_kes41_HIIs}).
No infrared counterparts are observed in {\kes}, which is consistent with the nonthermal nature of the radio emission. Nevertheless, it is worth mentioning that weak mid-infrared emission has been detected in several young SNRs, originating under restricted conditions either in dust grains formed in the SN ejecta \citep{rho08} or in grains heated by the passage of an interacting SN blast wave \citep{williams06}.
\begin{table*}[ht!]
\centering
\caption{Parameters of thermal gas and indicators of stellar activity in the field of the {\Fermi} source from available catalogues. The first column indicates the number that labels each HII region as in Fig.~\ref{field}.
In Columns 2 to 4, the Galactic coordinates (in the $l+b$ form), the radio recombination line (RRL) velocity ($v_\mathrm{RRL}$), and the distance (in kpc) for each HII region are listed.
The number in Column 5 is the number of 6.7-GHz CH$_3$OH maser detections towards the HII regions, in the velocity range of the MC detected in correlation with the {\Fermi} source. The last column shows the number of MYSO candidates catalogued towards the HII regions.
``$\geqslant$ TP'' indicates that the HII region is beyond the tangent point ($\sim$8~kpc).}
\label{table_kes41_HIIs}
\begin{tabular}{cccccc}\hline\hline
Source & Source position\tablefoottext{a} & $v_\mathrm{RRL}$\tablefoottext{a} & Distance\tablefoottext{a} & CH$_3$OH & MYSO \\
number & (Galactic $l+b$) & ({\kms}) & (kpc) & detection\tablefoottext{b} & candidates\tablefoottext{c} \\\hline
1 & 337.711$-$0.056 & -50.0 & 12.12 & 2 & 2 \\
2 & 337.711+0.089 & -76.7 & 10.79 & 1 & 3 \\
3 & 337.622$-$0.067 & -55.0 & 11.84 & 3 & 2 \\
4 & 337.667$-$0.167 & -53.0 & 11.95 & -- & 1 \\
5 & 337.978$-$0.144 & -- & $\geqslant$ TP & 1 & 4 \\
6 & 338.011+0.022 & -63.3\tablefoottext{d} & $\geqslant$ TP & -- & 1 \\
7 & 337.686+0.137 & -47.0 & 12.30 & 3 & 3 \\
8 & 338.114$-$0.193 & -53.0 & 11.96 & -- & 4 \\\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{From \citet{jones+12}.}
\tablefoottext{b}{From the 6-GHz methanol multibeam maser catalogue of \citet{caswell+10_MeOH}.}
\tablefoottext{c}{From the Red MSX Source (RMS) Survey \citep{lumsden13}.}
\tablefoottext{d}{This value corresponds to the LSR velocity of the HII region (not the $v_\mathrm{RRL}$) from {\thirtyCO} observations, according to \citet{anderson+14}.}}
\end{table*}
In the region we are interested in, Class~II 6.7~GHz methanol (CH$_{3}$OH) masers were also identified \citep{caswell+10_MeOH}. It is evident that the spatial distribution in the field of the known masers mostly coincides with regions of ionised thermal gas (see also Table~\ref{table_kes41_HIIs}). This identification allows us to infer that high-mass star-forming activity occurs in the region mapped in Fig.~\ref{field} where the $\gamma$-ray flux was detected. It has been demonstrated that this class of methanol maser emission is one of the best markers of massive young stellar objects \citep[see][for further information about the maser classification and its relationship with high-mass young stellar objects]{breen13}. In this respect, we found 16 high-mass protostellar object candidates positionally coincident with the $\gamma$-ray emitting region (see Fig.~\ref{field}), although the distances to them are not reported \citep{lumsden13}.
Notably, in our selected region we did not find catalogued Wolf-Rayet stars,\footnote{See \url{http://www.pacrowther.staff.shef.ac.uk/WRcat} for a compilation of Wolf-Rayet stars.} massive OB associations, or other early-type stars with strong winds \citep{chini12}.
In summary, the infrared-radio correlation together with information from surveys is used here as observational evidence supporting the hypothesis that massive star formation is ongoing in the large molecular cloud.
\subsection{Proton content in the $\gamma$-ray emitting molecular cloud}
\label{protons}
The striking agreement between the $\gamma$ rays and the CO emission suggests that both arise from the same region, where interactions of energetic particles (accelerated in a cosmic source) with interstellar matter occur. We determined the properties of the complex environment in which {\kes} and the star-forming groups exist by selecting two elliptical zones, referred to in the text as ellipses 1 (E1) and 2 (E2). They are depicted in Fig.~\ref{COframes}. The former, with a major axis of $15^\prime$ and a minor axis of $10^\prime$, comprises the bright molecular region adjacent to SNR~{\kes}. Our selection of a second region reflects the fact that a significant contribution of interstellar material, and hence protons, is also present in a region larger than the portion of the cloud that directly interacts with the remnant. This material spatially corresponds with the star-forming regions observed around the remnant (see Sect.~\ref{SFR}).
In our analysis of the proton content, we consider the ellipse E2 of axes $26^\prime$ and $20^\prime$ as an appropriate choice encircling all of the radio thermal components. In the subsequent analysis, we determined for each ellipse the proton content based on the contributions from each gas phase, that is, molecular (H$_2$), atomic (HI), and ionised (HII). Therefore, the total proton column density $N_{\mathrm{p}}$ is given by $N_{\mathrm{p}}=N(\mathrm{H_{2}})+N(\mathrm{HI})+N(\mathrm{HII})$. For both areas, we calculated the molecular column density for the $^{12}$CO from Eq.~\ref{W12CO}. For the $^{13}$CO we first derived the optical depth of this gas assuming that the excitation temperatures of
$^{12}$CO and $^{13}$CO have the same value (Eq.~\ref{tau13}). The obtained optical depth is 0.28. This result indicates that the assumption of an optically thin gas distribution for the $^{13}$CO is valid. Therefore, we used Eq.~\ref{N13CO} to obtain the corresponding molecular column density within the LSR velocity range from $-71$ to $-41$~km~s$^{-1}$.
Next, we evaluated according to Eq.~\ref{NHI} the atomic contribution to the total column density by integrating the hydrogen gas in both ellipses.
Additionally, following the procedure described in \citet{katsuta17}, we inferred the number density of ionised hydrogen based on the free-free emission within each elliptical region. We found, however, that the contribution of the ionised gas to the total column density is not significant, corresponding to less than 4\% of the total value for the two regions.
We used the obtained column densities to calculate the proton content $n_{\mathrm{p}}$=$N_{\mathrm{p}}/L$ for each of the considered regions, where we assumed that the thickness $L$ for the interstellar gas along the line of sight equals the average size of each ellipse ($L_{1}$$\sim$43~pc and $L_{2}$$\sim$80~pc at a distance of 12~kpc, for ellipses E1 and E2, respectively).
On the basis of our results obtained for both isotopologues over each elliptical area, we calculated the total mass of the gas, adopting a distance $d=12$~kpc in the relation, $M=\mu\,m_{\mathrm{H}}\, d^{2}\,\Omega\,N(\mathrm{H_{2}})$, where $\mu$=2.8 is the mean molecular weight if a relative helium abundance of 25\% is assumed, $m_{\mathrm{H}}$ is the hydrogen mass, and $\Omega$ is the solid angle subtended by the region of interest.
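As a rough numerical sketch, the mass relation above can be evaluated directly. The script below is illustrative only: the distance, ellipse size, and mean column density are placeholders patterned on values quoted in this section, and the result is not meant to reproduce the tabulated masses (which account, for example, for the foreground-gas subtraction described next).

```python
import numpy as np

# Illustrative evaluation of M = mu * m_H * d^2 * Omega * N(H2).
# All inputs are placeholders patterned on this section, not the actual
# integration of the Mopra data cubes.
MU = 2.8                      # mean molecular weight (25% He abundance)
M_H = 1.6735575e-24           # hydrogen mass [g]
M_SUN = 1.989e33              # solar mass [g]
PC = 3.0857e18                # parsec [cm]

d = 12.0e3 * PC               # adopted distance: 12 kpc
# Solid angle of an ellipse with axes 26' x 20' (small-angle approximation)
a = np.deg2rad(0.5 * 26.0 / 60.0)   # semi-major axis [rad]
b = np.deg2rad(0.5 * 20.0 / 60.0)   # semi-minor axis [rad]
omega = np.pi * a * b               # [sr]

N_H2 = 4.9e22                 # assumed mean H2 column density [cm^-2]

mass_msun = MU * M_H * d**2 * omega * N_H2 / M_SUN
print(f"M ~ {mass_msun:.1e} Msun")  # of order 10^6 Msun
```

With these placeholder inputs the mass comes out at the $10^{6}$~{\Msun} scale, consistent with the giant-molecular-cloud regime discussed in this section.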
As explained in Sect.~\ref{MC-distance}, we used the self-absorption to resolve foreground CO gas (at a distance of 4~kpc). After a careful inspection of this gas distribution, for ellipse E1 we found a mass of gas of about $1.5 \times 10^4$~{\Msun} that is not related to the large cloud. For ellipse E2, we determined that cold foreground gas lies along the line of sight with a mass approximately equal to $3.4 \times 10^4$~{\Msun}.
In our measure of the proton content in the SNR surrounding area (E1) and in the larger region encircling all the thermal components in the field (E2), these masses were not accounted for.
The parameters of the gas in the cloud are listed in Table~\ref{table_kes41_density}. They are typical for giant molecular clouds. The errors in the column density estimates, and consequently, in the determination of masses and number densities, are mainly associated with the selection of the region used to integrate the emission and uncertainties in the determination of the distance.
We note that our analysis yields physical parameters of the interstellar medium that are consistent with those presented by \citet{zhang+15} when the differences in the sizes of the inspected regions are taken into account. Thus, we emphasize that the total mass and proton density derived here represent mean values obtained in the large analysed regions inside the cloud, whereas the values ($\sim$20$\times$10$^{4}$~M$_{\odot}$, $n(\mathrm{H_{2}})$$\sim$310--510~cm$^{-3}$) published by \citet{zhang+15} can be considered to be associated only with the local gas around {\kes}.
In addition, we remark that the total proton content in the region of {\kes} is similar to that obtained for the interstellar medium towards other well-known interacting middle-aged SNRs. Canonical examples are the remnants RX~J1713.7$-$3946, Vela Jr., HESS~J1731$-$347, and W44 \citep{kuriki+17}, all of which are evolving in environments with high proton densities (hundreds of particles cm$^{-3}$).
\begin{table*}[ht!]
\centering
\caption{Physical properties of the CO, HI, and HII gas.}
\label{table_kes41_density}
\begin{tabular}{ccccccccc}\hline\hline
\multirow{2}{*}{Region\tablefoottext{a}} & $l$\tablefoottext{b} & $b$\tablefoottext{c} & Size\tablefoottext{d} & \multicolumn{3}{c}{Mean column density [$10^{22}$~\cm{-2}]\tablefoottext{e}} & Total proton density\tablefoottext{f} & Total mass\tablefoottext{g}\\
& (deg) & (deg) & (arcmin) & $N(\mathrm{H_{2}})$ & $N(\mathrm{HI})$ & $N(\mathrm{HII})$ & $n_{\mathrm{p}}$[\cm{-3}] &
$M$[$10^5$~\Msun] \\\hline
Ellipse~E1 & $337.82$ & $-0.042$ & 15~$\times$~10 & 5.6~$\pm$~1.3 & 0.44~$\pm$~0.11 & 0.14~$\pm$~0.04 & 950~$\pm$~330 & 9.7~$\pm$~4.0 \\
Ellipse~E2 & $337.81$ & $-0.017$ & 26~$\times$~20 & 4.9~$\pm$~1.1 & 0.49~$\pm$~0.12 & 0.21~$\pm$~0.05 & 460~$\pm$~160 & 32~$\pm$~14 \hfil\\\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Region name. See Fig.~\ref{COframes}.}
\tablefoottext{b,c}{Galactic longitude and latitude.}
\tablefoottext{d}{Region size.}
\tablefoottext{e}{Mean column densities derived from the CO and HI integrated intensities (in the $-$71 to $-$41~km~s$^{-1}$ range), and from the ionised gas.}
\tablefoottext{f}{Proton density derived from the mean column densities in Col.~(5) using $N_{\mathrm{p}}=N(\mathrm{H_{2}})+N(\mathrm{HI})+N(\mathrm{HII})$.}
\tablefoottext{g}{Total mass of gas in each selected region.}
}
\end{table*}
\section{Concluding remarks}
\label{summary}
Using the high-quality data acquired as part of the Mopra Southern Galactic Plane CO Survey, we investigated in detail the physical properties of the $^{12}$CO and $^{13}$CO ({\COtrans})
gas emission in the direction of SNR~{\kes}. The molecular gas information presented in this work, used in conjunction with neutral atomic hydrogen observations from SGPS and {\Spitzer} mid-infrared data describing the thermal gas, as well as HII regions, masers, and young massive stellar objects, provides a complete characterization of the SNR environment.
Taking advantage of the large area covered by the observations, we uncovered the natal cloud of these objects, located at 12.0$\pm$3.6~kpc from us, with a size of $\sim$28$^{\prime} \times$18$^{\prime}$ and covering a broad velocity range from $\sim$$-$71 to $-$41~km~s$^{-1}$. This cloud matches (on the plane of the sky) the $\gamma$-ray emission detected by {\Fermi}. Compared with the previous work of \citet{zhang+15}, restricted to a much smaller region around SNR~{\kes}, this is the first time that the molecular gas towards the $\gamma$-ray emission is analysed over its whole extent.
For the large cloud, we found a total interstellar proton density of 460$\pm$160~cm$^{-3}$, while for the smaller region enclosing the $\gamma$-ray peak (within the relatively low angular resolution of {\Fermi}) and the molecular material adjacent to {\kes}, the proton density is $950\pm330$~cm$^{-3}$. Both estimates include contributions from the molecular, atomic, and ionised gases in the region.
This work clearly demonstrates the effectiveness of the Mopra CO Survey, with its high sensitivity and high spatial and spectral resolution, in finding large interstellar complexes associated with $\gamma$-ray radiation. The case discussed here may be added to the short list of well-known SNRs \citep[e.g. W28, W51C, and IC~443,][]{gab15} that interact with molecular material in massive SFRs. The implications of our analysis for the production of the $\gamma$-ray flux will be investigated in a separate paper (Supan et al. 2018b).
\bibliographystyle{aa}
\section{Introduction}
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has highlighted the importance of accurate classification of antibody test results. Most work has focused on labeling data as previously infected (seropositive) or na{\"i}ve.
Due to the deployment of SARS-CoV-2 vaccines in late 2020, there is a clear need for a classification scheme that correctly distinguishes between na{\"i}ve, previously infected, and uninfected but vaccinated individuals. However, the traditional diagnostic classification methods of confidence intervals and receiver operating characteristics have no obvious extensions to a multiclass setting.
Current multiclass applications in
diagnostic classification are mostly limited to supervised learning and do not address the central role of mathematical modeling in
diagnostics. Example studies include one applying support-vector machines to automatically sort endomysial autoantibody tests for celiac disease into one of four classes \citep{caetano2019automatic} and another training deep neural networks to label resting-state functional magnetic resonance imaging results with one of six Alzheimer's disease states \citep{ramzan2020deep}.
However, these approaches may not accurately quantify precise training population characteristics or account for the role of prevalence, both of which should inform the classification procedure. In contrast, modeling can overcome this limitation. Binary (two class) examples include two-dimensional (2D) modeling of antigen targets coupled with optimal decision theory \citep{patrone2021classification}, statistical modeling applied to either antibody or viral-load tests \citep{bottcher2022statistical}, and an approach to the time-dependent problem for antibody measurements \citep{bedekar2022prevalence}. However, none of these works discussed multiclass extensions.
This paper uses mathematical modeling to fully address the task of multiclass classification in the context of diagnostic testing. We begin by showing that the notion of \textit{generalized prevalence}--the relative fraction of the population in each class--is fundamental for defining our objective function, the convex combination of false classification rates (Section \ref{sec:classif}). Minimization thereof yields optimal classification. Interestingly, we show that these prevalences can be computed without classification by solving a linear system.
We validate our methods using a SARS-CoV-2 serological data set with na{\"i}ve, previously infected, and vaccinated classes \citep{ainsworth2020performance,wei2021antibody}\footnote{Certain commercial equipment, instruments, software, or materials are
identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.} (Section \ref{sec:1D_ex}).
We then computationally validate the convergence of generalized prevalence estimates to the true values in mean square and illustrate a generalization to 2D data (Section \ref{sec:comp}). Finally, the discussion includes further analysis of prevalence estimation, extensions, and limitations (Section \ref{sec:disc}).
\section{Notation}
This work combines set and measure theory with applied diagnostics; readers will likely not be experts in both. In order for the ideas of this paper to be readily implemented by diagnostics experts and the applications understood by mathematicians, we provide baseline terminology from both fields.
\subsection{ Definitions from applied diagnostics}
\begin{itemize}
\item The na{\"i}ve class comprises individuals that have not been previously infected or vaccinated. In a binary classification, such samples are often referred to as `negative'.
\item The previously infected class comprises individuals with a prior infection but who are unvaccinated. In a binary classification, such samples are often referred to as `positive'.
\item The vaccinated class comprises individuals who have been inoculated against a disease without a prior infection.
\item Training data correspond to samples for which the true classes are known. Typically, such data are used to construct conditional probability models.
\item Test data correspond to samples for which the true classes are unknown or assumed to be unknown for validation purposes. Typically, a classification procedure is applied to such data.
\item Generalized prevalence is the relative fraction of samples in a population that belong to each class.
\end{itemize}
\subsection{Definitions from measure theory}
\begin{itemize}
\item A set is a collection of objects, e.g.\ measurement values. A domain is a set in a continuous measurement space; see Figure \ref{fig:phase} for an example.
\item The symbol $\mathbb{R}$ denotes the set of all real numbers. The symbol $\mathbb{R}^m$ denotes the real coordinate space of dimension $m$ consisting of all real-valued vectors of length $m$.
\item The symbol $\in$ indicates set inclusion. The expression $\bm{r} \in A$ means $\bm{r}$ is in set $A$.
\item The symbol $\subset$ denotes the subset relationship of two sets. The expression $A \subset B$ means that all elements in $A$ are contained in $B$.
\item The use of a superscript $C$ denotes the complement of a set. The set $D^C$ contains all elements in the measurement space not in $D$.
\item The symbol $\emptyset$ denotes the empty set, which contains no elements.
\item The operator $\cup$ denotes the union of two sets. The set $C = A \cup B$ contains all elements in either $A$ or $B$ or both.
\item The operator $\cap$ denotes the intersection of two sets. The set $C = A \cap B$ contains all elements in both $A$ and $B$.
\item The operator $\setminus$ denotes the set difference. The set $C = A \setminus B$ contains all objects in $A$ that are not also in $B$.
An equivalent interpretation is that $A \setminus B$ is the result of removing the common elements of $A$ and $B$ from $A$.
\item The notation $A = \{\bm{r} : *\}$ defines the set $A$ as all $\bm{r}$ that satisfy condition $*$.
\end{itemize}
\subsection{Notation specific to this paper}
\begin{itemize}
\item The set $\Omega$ denotes the entire measurement space.
\item The label $C_j$ refers to the $j$th class.
\item The generalized prevalence for class $C_j$ is denoted by $q_j$.
\item The set $D_j$ denotes a domain corresponding to $C_j$.
\item The use of a superscript $\star$ denotes an optimal quantity. For example, $D_j^{\star}$ could be an optimal classification domain corresponding to class $C_j$.
\end{itemize}
\section{Generalized prevalence estimation and multiclass classification}
\label{sec:classif}
Prevalence estimation and classification rely on the same framework of antibody measurements. For each individual or sample, we represent corresponding measurements as a vector $\bm{r} = (r_{1}, \ldots r_{m} ) \in \Omega \subset \mathbb{R}^m$. Here, $\bm{r}$ could denote $m$ antibody types targeting different parts of a virus as measured in median fluorescence intensity (MFI). Let $P_j(\bm{r})$ describe the probability that a sample from class $C_j$ yields measurement value $\bm{r}$.
These conditional probability density functions (PDFs) are assumed known in this section; their construction is considered for example serological data in Section \ref{sec:pdfs}.
The generalized prevalence $q_j$ is the relative fraction of the population corresponding to class $C_j$. In what follows we assume there are $n$ classes. The generalized prevalences must sum to 1, which implies
\begin{subequations}
\begin{eqnarray}
\sum_{j = 1}^n q_j = 1,
\label{eq:prev_cond_1} \\
q_k = 1 - \sum_{\substack{j = 1 \\ j \neq k}}^n q_j, \quad \text{ for } k \in \{1, \ldots, n\}.
\label{eq:prev_cond_2}
\end{eqnarray}
\label{eq:q_relation}
\end{subequations}
The probability density $Q(\bm{r})$ of a measurement $\bm{r}$ for a test sample is given by
\begin{equation}
Q(\bm{r}) = \sum_{j = 1}^n q_j P_j(\bm{r}).
\end{equation}
The product $q_j P_j(\bm{r})$ is the probability that a random sample both belongs to class $C_j$ and has measurement value $\bm{r}$;
thus, the expression for $Q$ is an instance of the law of total probability.
This quantity plays an important role in prevalence estimation and classification.
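As a minimal numerical check of this construction (with hypothetical 1D Gaussian class-conditional PDFs and prevalences; none of these parameters come from the serological data), the mixture $Q$ is itself a probability density:

```python
import numpy as np

# Hypothetical 1D example: three Gaussian class-conditional PDFs P_j and
# generalized prevalences q_j; Q(r) = sum_j q_j P_j(r).
q = np.array([0.5, 0.3, 0.2])          # prevalences, sum to 1
mus = np.array([0.0, 3.0, 6.0])        # class means (hypothetical)
sig = 1.0

r = np.linspace(-10.0, 16.0, 20001)
P = np.exp(-0.5 * ((r[:, None] - mus) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
Q = P @ q                              # law of total probability, pointwise in r

dr = r[1] - r[0]
total = Q.sum() * dr                   # numerical integral of Q over r
print(total)                           # ~ 1.0, since each P_j and q sum to 1
```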
\subsection{Generalized prevalence estimation}
\label{sec:pool}
\label{sec:prev}
To demonstrate the importance of prevalence in diagnostic classification, consider the United States population's SARS-CoV-2 antibody response. In early 2020, most samples should have been classified as na{\"i}ve because the disease prevalence was small. By February 2022,
the disease prevalence was estimated at 57.7~\% \citep{clarke2022seroprevalence}; a significant fraction of samples should have been classified as previously infected. Crucially, \textit{the same measurement value may be classified differently depending on the disease prevalence}. This example shows that prevalence plays an integral role in classifying diagnostic tests and should be estimated before classification of test data.
We address this need by designing unbiased estimators for the prevalences $\{q_j\}$.
For $n$ classes, consider a partition $\{D_j\}$
that separates the measurement space $\Omega$ into $n$ nonempty domains. It is important to note that these $D_j$ are not classification domains.
Define
\begin{equation}
Q_{j} = \int_{D_j} Q(\bm{r}) d \bm{r} = \int_{D_j} \sum_{k = 1}^n q_{k} P_k(\bm{r}) d \bm{r} = \sum_{k = 1}^n q_{k} \int_{D_j} P_k(\bm{r}) d \bm{r} = \sum_{k = 1}^n P_{j,k}q_{k},
\end{equation}
where
\begin{equation}
P_{j,k} = \int_{D_j} P_k(\bm{r}) d \bm{r}.
\end{equation}
Writing this as a linear system yields
\begin{equation}
\left[ \begin{array}{c}
Q_1 \\ \vdots \\ Q_n
\end{array} \right] = \left[ \begin{array}{ccc}
P_{1,1} & \ldots & P_{1,n} \\
\vdots & \ddots & \vdots \\
P_{n,1} & \ldots & P_{n,n}
\end{array} \right] \left[ \begin{array}{c}
q_1 \\ \vdots \\ q_n
\end{array} \right].
\label{eq:q_full}
\end{equation}
Taking $k = n$ in (\ref{eq:prev_cond_2}) implies
\begin{equation}
\left[ \begin{array}{c}
Q_1 \\ \vdots \\ Q_{n-1}
\end{array} \right] - \left[ \begin{array}{c}
P_{1,n} \\ \vdots \\ P_{n-1, n}
\end{array} \right] = \left( \left[ \begin{array}{ccc}
P_{1,1} & \ldots & P_{1, n-1} \\
\vdots & \ddots & \vdots \\
P_{n-1,1} & \ldots & P_{n-1, n-1}
\end{array} \right] - \left[ \begin{array}{c}
P_{1,n} \\ \vdots \\ P_{n-1,n}
\end{array} \right] \underbrace{[1, \ldots, 1]}_{n-1} \right) \left[ \begin{array}{c}
q_1 \\ \vdots \\ q_{n-1}
\end{array} \right].
\end{equation}
This yields the prevalences $\bm{q}$ as the solution to the system
\begin{subequations}
\begin{eqnarray}
\bm{q} = (\bm{P} - \bm{P_n})^{-1} \left( \bm{\overline{Q}} - \overline{\bm{P_n}} \right),
\label{eq:prev_est_m} \\
q_k \geq 0 \quad \text{ for } k = 1, 2, \ldots, n-1,
\label{eq:prev_est_const}
\end{eqnarray}
\label{eq:prev_est_eqn}
\end{subequations}
\noindent where $\bm{q}$ is the vector of length $n-1$ whose $j$th entry is $q_j$, $\bm{P}$ is the $(n-1) \times (n-1)$ matrix whose $(i,j)$th entry is $P_{i, j}$, $\bm{P_n}$ is the $(n-1) \times (n-1)$ matrix whose $(i,j)$th entry is $P_{i, n}$, $\bm{\overline{Q}}$ is the vector of length $n-1$ whose $j$th entry is $Q_j$, and $\overline{\bm{P_n}}$ is the vector of length $n-1$ whose $j$th entry is $P_{j, n}$. The last prevalence $q_n$ is found via (\ref{eq:prev_cond_2}) with $k = n$. We assume that the inverse of the matrix $\bm{P} - \bm{P_n}$ exists; Section \ref{sec:disc_lim_prev} further discusses the matrices $\bm{P}$ and $\bm{P} - \bm{P_n}$.
To estimate the generalized prevalences, estimate the $Q_j$ by $\hat{Q}_j$, where
\begin{equation}
Q_j \approx \hat{Q}_j = \frac{1}{S} \sum_{i = 1}^S \mathbb{I}( \bm{r}_i \in D_j).
\label{eq:Q_approx}
\end{equation}
Here, $S$ is the total number of samples and $\mathbb{I}$ denotes the indicator function. Substituting $\hat{Q}_j$ for $Q_j$ in (\ref{eq:prev_est_eqn}) yields an estimate $\hat{q}_k$ for $q_k$. When the PDFs $P_j(\bm{r})$ are known and $(\bm{P} - \bm{P_n})^{-1}$ exists, these estimates $\hat{q}_k$ are unbiased, i.e., $E[\hat{q}_k] = q_k$. This follows directly from the fact that $\hat{Q}_j$ is a Monte Carlo estimator of $Q_j$. Further, the generalized prevalence estimates converge to the true values in mean square as the number of samples is increased \citep{caflisch1998monte}. This is illustrated in Section \ref{sec:conv_prev_est}.
We note that the generalized prevalence estimates are not unique due to the arbitrariness of the $\{D_j\}$.
However, the non-uniqueness allows us to select any reasonable partition over which to find the estimators $\{ \hat{Q}_j\}$. This is discussed further in Section \ref{sec:disc_lim_prev}.
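The estimator defined by (\ref{eq:prev_est_eqn}) and (\ref{eq:Q_approx}) can be sketched end to end in a few lines. The example below uses hypothetical 1D Gaussian conditional PDFs and an arbitrary three-bin partition $\{D_j\}$; it is a sanity-check sketch, not the analysis pipeline applied to the serological data.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def norm_cdf(x, mu, sigma=1.0):
    """CDF of a normal distribution (handles +/- infinity)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical setup: n = 3 classes with Gaussian conditional PDFs.
mus = np.array([0.0, 3.0, 6.0])
q_true = np.array([0.5, 0.3, 0.2])
n = 3

# Arbitrary partition {D_j} of the measurement axis (not classification domains)
edges = [-np.inf, 1.5, 4.5, np.inf]

# P_{j,k} = integral of P_k over D_j
P = np.array([[norm_cdf(edges[j + 1], mus[k]) - norm_cdf(edges[j], mus[k])
               for k in range(n)] for j in range(n)])

# Synthetic test data drawn from the mixture Q(r) = sum_k q_k P_k(r)
S = 200_000
labels = rng.choice(n, size=S, p=q_true)
r = rng.normal(mus[labels], 1.0)

# Monte Carlo estimates: Q_hat_j = fraction of samples falling in D_j
Q_hat = np.array([np.mean((r > edges[j]) & (r <= edges[j + 1]))
                  for j in range(n)])

# Reduced (n-1)x(n-1) system: q = (P - P_n)^{-1} (Qbar - Pbar_n)
A = P[:n - 1, :n - 1] - P[:n - 1, [n - 1]]
rhs = Q_hat[:n - 1] - P[:n - 1, n - 1]
q_hat = np.linalg.solve(A, rhs)
q_hat = np.append(q_hat, 1.0 - q_hat.sum())   # q_n from normalization
print(q_hat)                                  # close to q_true for large S
```

Because $\hat{Q}_j$ is a Monte Carlo estimator, the recovered $\hat{q}_k$ concentrate around the true prevalences as $S$ grows, mirroring the mean-square convergence noted above.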
\subsection{Optimal classification}
\label{sec:opt_class}
Our task is to define a partition $\{D_j\}$ (not necessarily the same as for prevalence estimation) of the measurement space $\Omega$ such that each domain corresponds to one and only one class $C_j$. A measurement $\bm{r}$ is assigned to class $j$ if $\bm{r} \in D_j$. We require
\begin{subequations}
\begin{eqnarray}
\mu_{j}\left(\bigcup_{k = 1}^n D_k \right) = 1 \quad \forall j \in \{1, \ldots, n\},
\label{eq:req2} \\
\mu_{\ell}(D_j \cap D_k) = 0 \text{ for } j \neq k, \quad \forall \ell = 1, 2, \ldots, n
\label{eq:req1}
\end{eqnarray}
\label{eq:req}
\end{subequations}
where $ \mu_{\ell}(X) = \int_X P_{\ell}(\bm{r}) d\bm{r}$. Here, (\ref{eq:req2}) ensures that any sample can be classified and (\ref{eq:req1}) enforces single-label classification (up to sets of measure zero). To identify an optimal partition $\{D_j^{\star}\}$, we construct the loss function
\begin{equation}
\mathscr{L}(D_1, \ldots, D_n) = \sum_{j = 1}^n q_j \int_{\Omega \setminus D_j} P_j(\bm{r}) d\bm{r}.
\label{eq:loss}
\end{equation}
Here, (\ref{eq:loss}) is the generalized prevalence-weighted convex combination of false classification rates as a function of the domains $D_j$. Intuitively, we expect that a sample with measurement $\bm{r}$ should be assigned to the class domain $D_j$ to which it has the highest probability of belonging; that is, the highest value of $q_j P_j(\bm{r})$ for all $j$. Accordingly, the loss function (\ref{eq:loss}) penalizes misclassified measurements $\bm{r}$ with high probability values.
To address situations in which a measurement has an equal highest probability of belonging to two or more classes, we introduce the following set for each class $C_j$:
\begin{equation}
\mathscr{E}_j = \bigcup_{\substack{k = 1 \\ k \neq j}}^n \{ \bm{r}: q_k P_k(\bm{r}) = q_j P_j(\bm{r}) = \max_i q_i P_i(\bm{r}) \} .
\end{equation}
In most practical implementations all $ \mathscr{E}_j$ have measure zero, and
the domains
\begin{equation}
D_j^{\star} = \{\bm{r} : q_j P_j(\bm{r}) > q_k P_k (\bm{r}) \text{ for } k \neq j\}
\label{eq:opt_d}
\end{equation}
minimize the loss function $\mathscr{L}$ up to sets of measure zero. The proof is shown in \ref{sec:app_a} and involves a straightforward application of set theory; see also \cite{williams2006gaussian} for similar ideas.
If $\mathscr{E}_j$ has nonzero measure, randomly assigning a measurement in $\mathscr{E}_j$ to one of the classes to which it has equal maximal probability of belonging does not affect the loss $\mathscr{L}$.
In this case, the optimal domains are generalized to
\begin{equation}
D_j^{\star} = \{\bm{r} : q_j P_j(\bm{r}) > q_k P_k (\bm{r}) \text{ for } k \neq j\} \cup Z_{\mathscr{E}_j},
\end{equation}
where $Z_{\mathscr{E}_j}$ is an element of a partition of $\mathscr{E}_j$ that we define iteratively as follows.
\begin{subequations}
\begin{eqnarray}
Z_{\mathscr{E}_1} = \mathscr{E}_1, \\
Z_{\mathscr{E}_k} = \mathscr{E}_k \mathbin{\big \backslash} \bigcup_{j = 1}^{k-1} \mathscr{E}_j, \quad k \in \{ 2, \ldots, n\} .
\end{eqnarray}
\end{subequations}
This ensures that no measurement in a set $\mathscr{E}_j$ is assigned to more than one optimal domain. Note that by construction, $Z_{\mathscr{E}_n}$ is empty.
Figure \ref{fig:phase} shows a 2D conceptual illustration of $\{ \mathscr{E}_j\}$, which are the lines delineating the optimal regions. In 2D, line segments have Lebesgue measure zero. Thus, classification follows (\ref{eq:opt_d}). Note that the ``multipoint" at which the lines meet has equal probability of belonging to all three classes. We discuss this further in Section \ref{sec:loc_acc}.
\begin{figure}[t]
\centering
\includegraphics[scale=.5]{7_19_22_phase.eps}
\caption{Illustration of classification domains $D_1, D_2$, and $D_3$ in which the sets of equal probability of a measurement belonging to two or more classes are shown as lines separating the optimal regions.}
\label{fig:phase}
\end{figure}
\section{Example applied to SARS-CoV-2 antibody data with three classes}
\label{sec:1D_ex}
To demonstrate the concepts developed in Section \ref{sec:classif}, we apply our methods to serological data with three classes. Publicly available data sets associated with \cite{ainsworth2020performance} and \cite{wei2021antibody} provide previously infected, na{\"i}ve, and vaccinated antibody measurements. The vaccine data \citep{wei2021antibody} are recorded for individuals who were inoculated with one of two vaccines. We refer to these as Vaccine A and Vaccine B and analyze the populations separately and together. The studies provide SARS-CoV-2 anti-spike immunoglobulin (IgG) antibody measurements; see \ref{sec:app_MFI} for measurement details. We use one-dimensional (1D) data to illustrate a straightforward multiclass example; Section \ref{sec:2D} demonstrates that our analysis holds for higher measurement dimensions.
All data are transformed to a logarithmic scale as follows:
\begin{equation}
r = \log_2(\tilde{r} + 2) - 1.
\label{eq:log_transform}
\end{equation}
Here, $\tilde{r}$ and $r$ represent the original and log-transformed values; $\tilde{r}$ has units of ng/mL and $r$ is nondimensional.
This transformation puts the data on the scale of bits and allows for better viewing of measurements that range over several decades of MFI values in the original units. The vaccinated samples of \cite{wei2021antibody} are truncated, with lower and upper transformed limits of 1 and roughly 8.
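For concreteness, the transformation (\ref{eq:log_transform}) is a one-liner; the numerical checks in the comments are illustrative values of ours, not entries from the data sets:

```python
import numpy as np

def log_transform(r_tilde):
    """Map raw antibody values (ng/mL) to the nondimensional bit scale
    r = log2(r_tilde + 2) - 1 of the log transform above."""
    return np.log2(np.asarray(r_tilde, dtype=float) + 2.0) - 1.0

# A raw value of 0 maps to r = 0; each doubling of (r_tilde + 2) adds one
# bit, compressing measurements spanning several decades onto a common scale.
```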
Figure \ref{fig:1D_hist_data} shows a histogram of the data with the vaccinated category split by vaccine manufacturer (Figure \ref{fig:1D_hist_split}) and combined (Figure \ref{fig:1D_hist_all}).
Previously infected samples have the largest spike IgG antibody levels and na{\"i}ve samples the smallest; the vaccinated class falls in the middle. The vaccinated class overlaps with some na{\"i}ve and previously infected samples. Due to the truncation of vaccinated measurements, the corresponding right-most histogram bin contains many samples.
We separate the data into randomly generated training (80 \% of samples) and test (20 \%) populations.
\begin{figure}[t]
\centering
\subfloat[][Vaccine A and Vaccine B split]{\includegraphics[scale=.5]{3_18_22_3class_hist_2.eps}
\label{fig:1D_hist_split}}
\subfloat[][Vaccine A and B combined]{\includegraphics[scale=.5]{3_18_22_3class_hist_all.eps}
\label{fig:1D_hist_all}}
\caption{Histograms of previously infected, na{\"i}ve, and vaccinated data from Ainsworth et al. (2020) and Wei et al. (2021).
}
\label{fig:1D_hist_data}
\end{figure}
\subsection{Conditional probability distributions}
\label{sec:pdfs}
We fit probability distributions to the training data to model the na{\"i}ve, previously infected, and vaccinated antibody responses. For our purposes, we assume these are distinct classes; a sample belongs to one and only one of the three categories.
To construct the conditional PDF for each population, we select a parameterized model that empirically characterizes the shape and spread of the samples.
We determine parameters separately for the three training populations by maximum likelihood estimation (MLE).
The na{\"i}ve training population is fit to a Burr distribution
\begin{equation}
N(r) = \frac{ck}{\lambda} \left( \frac{r}{\lambda} \right)^{c-1} \left[ 1 +\left( \frac{r}{\lambda}\right)^c \right]^{-k-1},
\label{eq:N_eqn}
\end{equation}
which describes a right-skewed sample population.
The previously infected training population is fit to a stable distribution described by characteristic function
\begin{equation}
\phi(r) = \exp\left\{i r \delta - |\gamma r|^{\alpha} \left[1 + \frac{2i}{\pi} \beta\, \text{sgn}(r) \log(\gamma |r|)\right] \right\}
\label{eq:P_eqn}
\end{equation}
for $\alpha = 1$.
Here, $i$ is the imaginary unit and $\text{sgn}$ is the sign function, which returns $+1$, $-1$, or $0$ according to the sign of its argument. This distribution describes a left-skewed sample population.
We fit the vaccinated training populations to an extreme-value distribution after observing the mostly symmetric shape of the data with a spike at the right truncation limit:
\begin{equation}
V(r) = \frac{1}{\sigma} \exp \left( \frac{r - \mu}{\sigma} \right) \exp \left[ - \exp \left( \frac{r - \mu}{\sigma} \right) \right].
\label{eq:V_eqn}
\end{equation}
We apply data censoring to better fit the truncated data; this is described in \ref{sec:app_trunc}.
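In Python, for example, such MLE fits can be sketched with \texttt{scipy.stats}, whose \texttt{burr12} and \texttt{gumbel\_l} densities match the forms of (\ref{eq:N_eqn}) and (\ref{eq:V_eqn}) (with $d = k$ and scale $= \lambda$); the synthetic samples and parameter values below are hypothetical, and the censoring correction of \ref{sec:app_trunc} is omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical training draws standing in for the naive and vaccinated data.
naive = stats.burr12.rvs(c=4, d=2, scale=2.0, size=500, random_state=rng)
vacc = stats.gumbel_l.rvs(loc=6.0, scale=0.8, size=500, random_state=rng)

# Maximum likelihood estimation; loc is pinned to 0 for the Burr fit so the
# fitted density matches the shape/scale form of the Burr PDF above.
c, d, loc, lam = stats.burr12.fit(naive, floc=0)
mu, sigma = stats.gumbel_l.fit(vacc)

# stats.levy_stable provides the stable family for the previously infected
# class; its MLE is comparatively slow, so it is omitted from this sketch.
```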
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{8_24_22_vaxA_pdf.eps}
\caption{Conditional PDFs for the na{\"i}ve, previously infected, and vaccinated classes, fit to the training data with a Vaccine A visualization of the vaccinated class. See Supplemental Figure S1 for the PDFs of the Vaccine B and combined visualizations. }
\label{fig:pdf}
\end{figure}
The analysis in this section is identical for all three visualizations of the vaccinated class. In what follows, we report all results but show only the Vaccine A figures as examples. Corresponding figures for the Vaccine B and combined visualizations of the vaccine class are left to the Supplemental Data.
Figure \ref{fig:pdf} shows the conditional PDFs, represented as continuous curves, trained on the three-class training data with a Vaccine A vaccinated class. The blue, red, and black curves correspond to the na{\"i}ve, previously infected, and vaccinated models.
The effect of truncating the data at the upper limit is visible in the right-most bin of the vaccinated class histogram; this is accounted for by the data censoring. As a result, the vaccinated class PDF exhibits spikes at the upper and lower truncation values. These spikes are artifacts of the original data collection process and not a typical problem.
\subsection{Generalized prevalence estimation}
Recall that prevalence estimation of test data requires a partition that separates the measurement space $\Omega$ into $n$ nonempty domains. Here, the number of classes is $n = 3$. We create a partition using $k$-means clustering with $k = 3$, which assigns each measurement to the cluster with the closest mean. Figure \ref{fig:part} shows the partition for our test data set with a Vaccine A vaccinated class. The clustering separates the three populations reasonably well; see Section \ref{sec:disc_lim_prev} for the importance of this statement. The partition need not perfectly separate the data by class to estimate prevalences with high accuracy.
We estimate generalized prevalences for the test data via (\ref{eq:prev_est_eqn}) and record true and estimated values in Table \ref{table:prev_est}.
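The estimate referenced by (\ref{eq:prev_est_eqn}) can be sketched as solving the linear system $\bm{P}\bm{q} = \bm{\hat{Q}}$, where $P_{j,k}$ is the probability of a class-$k$ measurement falling in domain $D_j$ and $\bm{\hat{Q}}$ collects the fractions of test measurements per domain (see Section \ref{sec:disc_lim_prev}). The Gaussian classes and fixed threshold partition below are hypothetical stand-ins for the fitted models and $k$-means domains:

```python
import numpy as np

rng = np.random.default_rng(1)

true_q = np.array([0.5, 0.3, 0.2])   # hypothetical generalized prevalences
means = np.array([1.0, 4.0, 7.0])    # centers of 1D Gaussian class stand-ins
n_samp = 20000

# Unlabeled "test" measurements drawn from the three-class mixture.
labels = rng.choice(3, size=n_samp, p=true_q)
r = rng.normal(means[labels], 0.7)

# Trial partition: three domains standing in for k-means cluster boundaries.
edges = np.array([-100.0, 2.5, 5.5, 100.0])
D = np.digitize(r, edges[1:-1])      # domain index of each test sample

# P[j, k]: probability that a class-k measurement lands in domain D_j,
# estimated from labeled training draws of each class.
P = np.zeros((3, 3))
for k in range(3):
    train = rng.normal(means[k], 0.7, size=5000)
    P[:, k] = np.histogram(train, bins=edges)[0] / 5000

Q_hat = np.bincount(D, minlength=3) / n_samp
q_est = np.linalg.solve(P, Q_hat)    # generalized prevalence estimates
```

Note that the partition need not separate the classes perfectly for `q_est` to be accurate; it only needs to make $\bm{P}$ well conditioned.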
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{5_24_22_clust_AZ.eps}
\caption{Test data $k$-means partitioning with a Vaccine A vaccinated class for generalized prevalence estimates. We use $k = 3$ classes; the clustered domains are labeled as $D_1, D_2$, and $D_3$. See Supplemental Figure S2 for the partitions of the Vaccine B and combined visualizations of the vaccinated class.}
\label{fig:part}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|l|r|rrr|rrr|r|}
\hline
& \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Vaccine data set\\ Estimated (true) generalized prevalence\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Errors\\ (\%)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Avg.\\ errors\\ (\%)\end{tabular}}} \\ \hline
& \multicolumn{1}{l|}{\textbf{}} & \multicolumn{1}{c|}{\textit{A}} & \multicolumn{1}{c|}{\textit{B}} & \multicolumn{1}{c|}{\textit{All}} & \multicolumn{1}{c|}{\textit{A}} & \multicolumn{1}{c|}{\textit{B}} & \multicolumn{1}{c|}{\textit{All}} & \\ \hline
\multirow{3}{*}{\textbf{Class}} & $N$ & \multicolumn{1}{r|}{0.523 (0.521)} & \multicolumn{1}{r|}{0.538 (0.531)} & \multicolumn{1}{r|}{0.445 (0.444)} & \multicolumn{1}{r|}{0.377}& \multicolumn{1}{r|}{1.02} & {0.223} & 0.540 \\ \cline{2-9}
& $P$ & \multicolumn{1}{r|}{0.300 (0.286)} & \multicolumn{1}{r|}{0.313 (0.292)} & {0.285 (0.244)} & \multicolumn{1}{r|}{4.69} & \multicolumn{1}{r|}{7.10} & {17.0} & 9.93 \\ \cline{2-9}
& $V$ & \multicolumn{1}{r|}{0.177 (0.193)} & \multicolumn{1}{r|}{0.149 (0.177)} & {0.270 (0.312)} & \multicolumn{1}{r|}{7.99} & \multicolumn{1}{r|}{15.0} & {13.6} & 12.2 \\ \hline
\textbf{Avg.} & & \multicolumn{1}{r|}{} & \multicolumn{1}{r|}{} & & \multicolumn{1}{r|}{4.74} & \multicolumn{1}{r|}{7.67} & {10.3} & 7.55 \\ \hline
\end{tabular}
\caption{ Estimated and true generalized prevalences for the test data na{\"i}ve (N), previously infected (P), and vaccinated (V) classes. Vaccine A (A) and Vaccine B (B) are considered separately and together (All).}
\label{table:prev_est}
\end{table}
\subsection{Optimal classification}
We classify the training data using known generalized prevalences via (\ref{eq:opt_d}).
Figure \ref{fig:train} shows the optimal domains, labeled $D_N^{\star}$, $D_V^{\star}$, and $D_P^{\star}$, for a Vaccine A vaccinated class.
For this 1D example with three classes, the optimal classification domain boundaries can be represented by upper and lower threshold levels. Samples with measurements below the smaller level are classified as na{\"i}ve, samples with measurements between the thresholds as vaccinated, and samples with measurements above the larger level as previously infected. All three populations have overlapping PDFs, which reduces classification accuracy.
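A sketch of how such thresholds arise: along a dense grid, the decision boundaries are the points where the winning prevalence-weighted density changes. The equal-prior Gaussian classes below are hypothetical, chosen so the thresholds land midway between neighboring class means:

```python
import numpy as np

def gauss(mu, sigma):
    # Normal PDF used as a hypothetical conditional model.
    return lambda r: np.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def boundaries(priors, pdfs, grid):
    """Approximate 1D decision thresholds: grid points where the argmax of
    q_j * P_j(r) switches from one class to another."""
    scores = np.stack([q * P(grid) for q, P in zip(priors, pdfs)])
    winner = np.argmax(scores, axis=0)
    switch = np.nonzero(np.diff(winner))[0]
    return 0.5 * (grid[switch] + grid[switch + 1])

grid = np.linspace(-2.0, 10.0, 4001)
thr = boundaries([1 / 3, 1 / 3, 1 / 3],
                 [gauss(1, 1), gauss(4, 1), gauss(7, 1)], grid)
# Equal priors and equal widths place the two thresholds near 2.5 and 5.5.
```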
Accurate classification of test data is possible with reasonably close prevalence estimates. We classify the test data using estimated generalized prevalences and display the optimal classification domains for a Vaccine A vaccinated class in Figure \ref{fig:test_opt}.
\begin{figure}[h]
\centering
\subfloat[][Training data]{
\includegraphics[scale=.5]{8_24_22_train_opt_AZ.eps}\label{fig:train}}
\subfloat[][Test data]{\includegraphics[scale=.5]{8_24_22_test_opt_AZ.eps}\label{fig:test_opt}}
\caption{Training (a) and test (b) data with a Vaccine A vaccinated class with optimal decision thresholds using a known prevalence.
Vertical dashed lines indicate the optimal decision boundaries. The optimal na{\"i}ve, vaccinated, and previously infected domains are labeled $D_N^{\star}$, $D_V^{\star}$, and $D_P^{\star}$. See Supplemental Figures S3 and S4 for the optimal domains for Vaccine B and combined visualizations of the vaccinated class.
}
\label{fig:train_test}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|c|r|r|}
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Training\\ classification \\ error (\%)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Test\\ classification\\ error (\%)\end{tabular}}} \\ \hline
\textbf{Vaccine A} & 7.87 & 5.08 \\ \hline
\textbf{Vaccine B} & 7.07 & 5.46 \\ \hline
\textbf{Combined} & 7.90 & 4.78 \\ \hline
\end{tabular}
\caption{Classification errors for training data using a known prevalence and test data with an estimated prevalence. Vaccine A and Vaccine B are considered separately and together (Combined).}
\label{table:train_test_classif}
\end{table}
Training and test data classification errors are recorded in Table \ref{table:train_test_classif}. Averaged over the three visualizations of the vaccinated class, the training error is 7.61 \% and the test error is 5.11 \%.
\section{Computational validation}
\label{sec:comp}
We numerically demonstrate two important features of our generalized prevalence estimation and multiclass optimal classification procedures. First, we show the convergence of our prevalence estimates to the true values as the number of samples is increased. Second, we present a 2D tri-class problem to show how the method generalizes to higher dimensional measurement spaces.
\subsection{Convergence of prevalence estimates}
\label{sec:conv_prev_est}
We use our probability models (\ref{eq:N_eqn})-(\ref{eq:V_eqn}) to generate synthetic data sets whose relative frequencies match the generalized prevalences of the \cite{ainsworth2020performance} and \cite{wei2021antibody} data.
We systematically increase the number of synthetic data points used while holding generalized prevalences fixed to study the effect of sample size on prevalence convergence. For each number of points used, we generate 1000 synthetic data sets and compute statistics on our results.
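The Monte Carlo experiment can be sketched as follows; the simplified estimator (a raw class frequency rather than the full linear-system estimate) and all numbers are illustrative, but the expected $1/\sqrt{S}$ shrinkage of the spread is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
true_q = np.array([0.5, 0.3, 0.2])   # hypothetical generalized prevalences

def estimate_once(S):
    """One synthetic experiment: draw S labeled samples and return the
    relative frequency of the first class (a simplified prevalence estimate)."""
    labels = rng.choice(3, size=S, p=true_q)
    return np.mean(labels == 0)

stds = []
for S in (100, 10000):
    est = np.array([estimate_once(S) for _ in range(500)])
    stds.append(est.std())

# Monte Carlo theory: the standard deviation shrinks like 1/sqrt(S), so a
# 100x increase in samples should cut the spread by about a factor of 10.
ratio = stds[0] / stds[1]
```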
\begin{figure}[t]
\centering
\subfloat[][1000 synthetic samples]{\includegraphics[scale=.5]{8_26_22_syn_vax_A_1000.eps}
\label{fig:syn_AZ_data}}
\subfloat[][ Boxchart of estimate statistics]{\includegraphics[scale=0.5]{5_24_22_prev_est_converge_AZ.eps}
\label{fig:syn_AZ_boxchart}} \\
\subfloat[][ Convergence of the error in mean square]{\includegraphics[scale=0.5]{5_24_22_prev_est_conv_rate_AZ.eps}
\label{fig:syn_AZ_conv}}
\begin{minipage}[b]{18em}
\caption{Prevalence estimation convergence for synthetic data using a Vaccine A vaccinated class (1000 simulations).
In (b), the boxes display the median and upper and lower quartiles as the line inside the box and its top and bottom edges. The whiskers show non-outlier maximum and minimum values; outliers vary from the median by more than 1.5 times the difference between the upper and lower quartiles, and are shown as circles. In (b) and (c), the subscripts $N$, $P$, and $A$ denote na{\"i}ve, previously infected, and Vaccine A vaccinated. The number of samples is $S$.}
\label{fig:syn_AZ}
\end{minipage}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
& \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{Na{\"i}ve}} & \multicolumn{1}{c|}{\textbf{Previously infected}} & \multicolumn{1}{c|}{\textbf{Vaccine A}} \\ \hline
\textbf{\begin{tabular}[c]{@{}l@{}}True generalized\\ prevalences\end{tabular}} & \multicolumn{1}{l|}{} & 0.521 & 0.286 & 0.193 \\ \hline
\multirow{4}{*}{\textbf{Number of samples}} & $10^2$ & 0.518 $\pm$ 0.0212 & 0.289 $\pm$ 0.0211 & 0.193 $\pm$ 0.0299 \\ \cline{2-5}
& $10^3$ & 0.522 $\pm$ 0.0067 & 0.286 $\pm$ 0.0061 & 0.193 $\pm$ 0.0093 \\ \cline{2-5}
& $10^4$ & 0.522 $\pm$ 0.0021 & 0.285 $\pm$ 0.0020 & 0.193 $\pm$ 0.0030 \\ \cline{2-5}
& $10^5$ & 0.522 $\pm$ 0.0007 & 0.285 $\pm$ 0.0007 & 0.193 $\pm$ 0.0011 \\ \hline
\end{tabular}
\caption{ Generalized prevalence estimate means and standard deviations taken over 1000 simulations of synthetic data generated from the probability models for increasing number of samples for a Vaccine A vaccinated class.}
\label{table:prev_conv}
\end{table}
Figure \ref{fig:syn_AZ} shows our analysis for the Vaccine A vaccinated class.
Figure \ref{fig:syn_AZ_boxchart} shows a boxchart of the statistics for using $10^2, 10^3, 10^4$, and $10^5$ samples. The estimates have more outliers and variation when few samples are used, which decreases as the number of points is increased. Even with few samples, the median generalized prevalence estimates are close to the true generalized prevalences. Table \ref{table:prev_conv} records the mean and standard deviations of our results. Even for only 1000 samples, our estimates agree with the true generalized prevalences with roughly 2 \% relative error.
Figure \ref{fig:syn_AZ_conv} plots the standard deviation of the prevalence estimate error on a log-log scale against the number of samples.
The standard deviation should decrease with the inverse square root of the number of samples \citep{caflisch1998monte}, which is plotted for comparison. Our empirical convergence rates all agree with the theory through 10,000 samples; the rate is maintained for the Vaccine A vaccinated class.
\subsection{Generalization to higher dimensions}
\label{sec:2D}
\begin{figure}[t]
\centering
\subfloat[][Probability model contours and synthetic data]{\includegraphics[scale=.5]{9_28_21_RBD_N_3_cat_contours.eps}}
\subfloat[][$k$-means clustering]{\includegraphics[scale=.5]{9_28_21_RBD_N_3_class_clust.eps}} \\
\subfloat[][Optimal classification domains]{
\includegraphics[scale=.5]{9_28_2_RBD_N_3_cat_opt_data.eps}}
\begin{minipage}[b]{18em}
\caption{(a) Level sets of conditional PDFs with example synthetic data, (b) $k$-means clustering, (c) optimal classification domains with estimated generalized prevalences. In (c), the subscripts $N$, $P$, and $V$ denote na{\"i}ve, previously infected, and vaccinated.
}
\label{fig:3_contours}
\end{minipage}
\end{figure}
We now explore a 2D synthetic numerical validation of generalized prevalence estimation and multiclass optimal classification. See \cite{luke2022improving} for a discussion of the implications of higher-dimensional modeling on diagnostic testing accuracy. The synthetic values we use are modeled on the receptor-binding domain (RBD) and nucleocapsid (N) SARS-CoV-2 antibody targets; together these form a two-dimensional measurement $\bm{r}$. Details about the models and information about the data are given in \ref{sec:app_b}. Figure \ref{fig:3_contours}a shows an example of 2D synthetic antibody measurements with na{\"i}ve, previously infected, and vaccinated classes with true prevalences of 0.3, 0.2, and 0.5. The conditional PDFs are shown as contour lines of constant probability. We use 1000 total synthetic samples.
To quantify uncertainty in the prevalence estimates, we randomly generate 1000 synthetic sets of samples using fixed prevalences. We then partition the measurement space via $k$-means clustering using one synthetic sample set (see Figure \ref{fig:3_contours}b), fix the partition, and use (\ref{eq:prev_est_eqn}) to generate prevalence estimates for all sets.
The results are shown in Table \ref{table:q_stats}. Figure \ref{fig:q_hist} shows histograms of the generalized prevalence estimates and true values, which fall within the middle of each distribution. We classify using these estimated prevalences via (\ref{eq:opt_d}) and find an average error of 1.58 \%. Figure \ref{fig:3_contours}c shows example optimal classification domains. The gold region is the previously infected domain, the purple is the vaccinated, and the remainder of the measurement space, colored in light blue, defines the na{\"i}ve domain. For this example,
the false classification rate is 1.8 \%.
\begin{table}[h]
\centering
\begin{tabular}{|c|r|r|r|r|}
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{\textbf{True val}} & \multicolumn{1}{c|}{$\bm{\mu}$} & \multicolumn{1}{c|}{$\bm{\sigma}$} & \multicolumn{1}{c|}{\textbf{CV} $\bm{\left( \frac{\sigma}{\mu}\right)}$} \\ \hline
$\bm{q_1}$ & 0.3 & 0.300 & $9.6 \times 10^{-3}$ & 0.0319 \\ \hline
$\bm{q_2}$ & 0.2 & 0.200 & $7.4 \times 10^{-3}$ & 0.0371 \\ \hline
$\bm{q_3}$ & 0.5 & 0.500 & $6.3 \times 10^{-3}$ & 0.0126 \\ \hline
\end{tabular}
\caption{ Statistics for the data shown in Figure \ref{fig:q_hist}. The true values of the prevalences are given along with the mean $\mu$, standard deviation $\sigma$, and coefficient of variation (CV) of the estimates. }
\label{table:q_stats}
\end{table}
\begin{figure}[t]
\centering
\subfloat[][Na{\"i}ve]{\includegraphics[scale=.5]{7_22_22_q1_hist.eps}}
\subfloat[][Previously infected]{\includegraphics[scale=.5]{7_22_22_q2_hist.eps}} \\
\subfloat[][Vaccinated]{\includegraphics[scale=.5]{7_22_22_q3_hist.eps}}
\begin{minipage}[b]{18em}
\caption{(a-c): Histograms of generalized prevalence estimates from 1000 synthetic data sets. Ten bins are used for each histogram and the true prevalence is shown as a vertical red line.}
\label{fig:q_hist}
\end{minipage}
\end{figure}
\section{Discussion}
\label{sec:disc}
\subsection{Limiting cases of prevalence estimation and implications for assay design}
\label{sec:disc_lim_prev}
An interesting feature of our prevalence estimation scheme is that the structure of the matrix underpinning the linear system encodes information about overlap between populations. As such, the matrix potentially informs best practices for prevalence estimation.
Further, our method extends the binary procedure of \cite{patrone2021classification} and may provide insight into the simpler setting. Here we examine limiting cases of prevalence estimation and connect characteristics of the matrix $\bm{P}$ to assay accuracy.
We explore interpretations of equivalent definitions of singularity of the matrix $\bm{P}$. Recall that the quantity $P_{j,k}$ gives the probability density of class $k$ falling in domain $D_j$. If all elements of a row (column) of the matrix $\bm{P}$ are zero, the probability of any measurement value falling in (belonging to) the corresponding domain (class) is zero. If the columns of $\bm{P}$ are linearly dependent, the probability of a sample belonging to class $C_k$ having a measurement in domain $D_j$ is a linear combination of the probabilities of samples belonging to all other classes having measurements in domain $D_j$.
This occurs for a choice of partition where all points fall in a single domain $D_j$. In this extreme case, there is an apparent dependence (in the linear algebra sense) of the measurement values of different classes.
As a related example, for the 1D SARS-CoV-2 antibody data from \cite{ainsworth2020performance} and \cite{wei2021antibody}, we can construct a partition where one trial domain is empty, both $\bm{P}$ and $\bm{P} - \bm{P_n}$ are singular, and therefore prevalence estimation is not possible.
To avoid this situation, one should select nonempty trial domains, i.e., training data should lie in each element of the partition.
In the limiting case that $P_{ij} = 0$, the measurement of a sample in class $C_j$ has zero probability of falling in domain $D_i$. The most extreme separation of training data occurs when the PDFs have nonzero support only on mutually exclusive elements of the partition. In this setting, the matrix $\bm{P}$ is a permutation matrix, and the prevalence estimates are merely the relative fractions $\bm{\hat{Q}}$ of measurements in each domain. If the partition elements are correctly matched to the classes, this extreme separation corresponds to a perfect assay because there are no misclassifications.
We note that the matrices that result from a $k$-means partition of the 1D SARS-CoV-2 tri-class data are close to permutation matrices; one example is
\begin{equation}
\bm{P} = \left[\begin{array}{ccc}
0.9749 & 0.0058 & 0.1055 \\
0.0237 & 0.0495 & 0.8101 \\
0.0015 & 0.9447 & 0.0844
\end{array} \right].
\label{eq:ex_P}
\end{equation}
Selection of trial domains with a high degree of class separation may be a key to our low-error prevalence estimates.
We speculate that under certain conditions a matrix $\bm{P}$ that is a permutation matrix may be optimal in the sense that it minimizes the prevalence estimate error.
In 1D, it may be possible to construct an optimization in terms of the samples assigned to each element of the partition, such as
\begin{equation}
\argmin_{f_1(x), f_2(x), \bm{P_{\pi}}} ||\bm{P_{\pi} P} - \bm{I} ||_2^2,
\end{equation}
where $f_1(x)$ and $f_2(x)$ are indicator functions determining which samples are assigned to elements 1 and 2 of the partition (without loss of generality).
Here, $\bm{P_{\pi} P}$ is the row permutation of $\bm{P}$ closest to the identity matrix, $\bm{I}$. For the matrix $\bm{P}$ given by (\ref{eq:ex_P}), for example, $\bm{P_{\pi}} = [\bm{e}_1; \bm{e}_3; \bm{e}_2]$, where $\bm{e}_j$ is the $j$th standard basis vector.
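For small $n$, the row permutation bringing $\bm{P}$ closest to the identity can be found by brute force; the sketch below recovers $\bm{P_{\pi}} = [\bm{e}_1; \bm{e}_3; \bm{e}_2]$ for the matrix in (\ref{eq:ex_P}):

```python
import numpy as np
from itertools import permutations

P = np.array([[0.9749, 0.0058, 0.1055],
              [0.0237, 0.0495, 0.8101],
              [0.0015, 0.9447, 0.0844]])

def best_row_permutation(P):
    """Brute-force search (feasible for small n) for the permutation matrix
    P_pi minimizing the Frobenius distance ||P_pi P - I||."""
    n = P.shape[0]
    best = list(min(permutations(range(n)),
                    key=lambda p: np.linalg.norm(P[list(p)] - np.eye(n))))
    P_pi = np.zeros((n, n))
    P_pi[np.arange(n), best] = 1.0   # row i of P_pi @ P is row best[i] of P
    return P_pi

P_pi = best_row_permutation(P)       # here: rows reordered as (1, 3, 2)
```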
We leave a search for the minimum prevalence error estimate to future work; see \cite{patrone2022minimizing} for an approach to the binary case. The extension of their work to the multiclass setting is not obvious because the objective function to minimize can be generalized in many different ways.
As a final note on extreme cases of prevalence estimation, in expectation, the problem is unconstrained. The constrained problem may be a viable alternative when it is known that one prevalence is close to zero.
\subsection{Local accuracy}
\label{sec:loc_acc}
Recall that in Section \ref{sec:opt_class} we needed to consider sets of measurements with equal probability of belonging to more than one class. This concept is related to \textit{local accuracy}, $Z$, which compares the probability that a test sample belongs to a particular class and has measurement $\bm{r}$ to the measurement density of a test sample with measurement $\bm{r}$.
We generalize the binary version from \cite{patrone2022holdout} to the multiclass setting:
\begin{equation}
Z(\bm{r}, D_1, \ldots, D_n) = \frac{q_k P_k(\bm{r})}{Q(\bm{r})} = \frac{q_k P_k(\bm{r})}{\sum_{j = 1}^n q_j P_j(\bm{r})}, \quad \bm{r} \in D_k,
\end{equation}
where $\{D_k\}$ partitions $\Omega$. Let $Z^{\star}(\bm{r}) = Z(\bm{r}, D_1^{\star}, \ldots, D_n^{\star})$ be the local accuracy of the optimal solution to the multiclass problem. It is straightforward to show that $1/n \leq Z^{\star} \leq 1$. Due to optimality, if $\bm{r} \in D_k^{\star}$, we have $ q_k P_k(\bm{r}) \geq q_j P_j(\bm{r})$ for $j \neq k$.
Then
\begin{equation}
n q_k P_k (\bm{r}) \geq \sum_{j = 1}^n q_j P_j(\bm{r}) = Q(\bm{r}),
\end{equation}
and so $q_k P_k(\bm{r})/Q(\bm{r}) = Z^{\star}(\bm{r}) \geq 1/n$ for $\bm{r} \in D_k^{\star}$. $Z^{\star}$ is maximized at 1 when $Q(\bm{r}) = q_k P_k(\bm{r})$ for $\bm{r} \in D_k^{\star}$.
In the multiclass setting, we have $Z^{\star} = 1/n$ when
\begin{equation}
q_1 P_1(\bm{r}) = \ldots = q_n P_n(\bm{r}).
\end{equation}
We will refer to such an $\bm{r}$, if it exists, as a multipoint of the optimal domains. The lower bound on $Z^{\star}$ is only attained at a multipoint. To see this, consider some measurement $\bm{v}$ that is not a multipoint. Then there exist $j, k \in \{1, \ldots, n\}$, $j \neq k$, such that $q_j P_j(\bm{v}) < q_k P_k(\bm{v})$. Then, since the classification is optimal, $\bm{v} \not \in D_j^{\star}$ and $\bm{v} \in D_m^{\star}$ for some $m$ (it may be that $m = k$). Clearly, $q_j P_j(\bm{v}) < q_m P_m(\bm{v})$. Further, $q_{\ell} P_{\ell} (\bm{v}) \leq q_m P_m(\bm{v})$ for $\ell \neq j$. It follows that
\begin{equation}
Q(\bm{v}) = \left[ \sum_{\substack{i = 1 \\ i \neq j}}^n q_i P_i(\bm{v}) \right] + q_j P_j(\bm{v}) < (n-1) q_m P_m(\bm{v}) + q_m P_m (\bm{v})
\end{equation}
and so $Q(\bm{v}) < n q_m P_m(\bm{v})$, which gives $1/n < Z^{\star}(\bm{v})$ for a non-multipoint.
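These bounds are easy to check numerically; the helper below evaluates $Z^{\star}$ for hypothetical equal-prior Gaussian classes, giving a value near 1 deep inside a class and near $1/2$ on a pairwise boundary:

```python
import numpy as np

def gauss(mu, sigma):
    # Normal PDF used as a hypothetical conditional model.
    return lambda r: np.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def local_accuracy(r, priors, pdfs):
    """Z*(r): the winning score q_k P_k(r) over the total density Q(r);
    bounded between 1/n (attained only at a multipoint) and 1."""
    scores = np.array([q * P(r) for q, P in zip(priors, pdfs)])
    return scores.max() / scores.sum()

priors = [1 / 3, 1 / 3, 1 / 3]
pdfs = [gauss(1, 1), gauss(4, 1), gauss(7, 1)]

z_center = local_accuracy(1.0, priors, pdfs)  # deep inside the first class
z_edge = local_accuracy(2.5, priors, pdfs)    # on a pairwise boundary
```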
The concept of local accuracy could be used to decide which values to hold out in an indeterminate class in order to meet a global accuracy target \citep[see][]{patrone2022holdout}.
For any measurement, we can compute what the local accuracy would be if we chose to assign it to each class in turn. Using the SARS-CoV-2 tri-class example as an illustration, conducting this procedure on a measurement $\bm{r}$ may result in, say, similarly high local accuracies for the previously infected and vaccinated classes, but a low local accuracy for the na{\"i}ve class. In this situation and without an optimal classification scheme, we may not feel confident labeling the sample as previously infected or as vaccinated, since their probabilities are similar, but we can say the sample is almost certainly not na{\"i}ve. This leads naturally to the observation that any subset of classes can be combined to make a new class. In particular, we can reduce the problem to a binary classifier. For our serological example, perhaps it is desirable to consider previously infected and vaccinated samples together, or equally possible that it is difficult to tell them apart for a particular assay, and so our goal becomes to classify them separately from na{\"i}ves. This reduction of the problem size by combining classes is in a sense a projection onto a lower class space. Specifically, consider
\begin{equation}
Q(\bm{r}) = \sum_{j = 1}^n q_j P_j(\bm{r}) = \underbrace{\sum_{j = 1}^k q_j P_j (\bm{r})}_{\tilde{q}_1 \tilde{P}_1(\bm{r})} + \underbrace{\sum_{j = k+1}^n q_j P_j (\bm{r})}_{\tilde{q}_2 \tilde{P}_2(\bm{r})},
\end{equation}
where $\tilde{P}_1(\bm{r})$ and $ \tilde{P}_2(\bm{r})$ are newly-created PDFs with associated prevalences $\tilde{q}_1$ and $\tilde{q}_2 = 1 - \tilde{q}_1$. The task becomes to find $\tilde{q}_1$, for which there may be an optimal strategy.
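This projection is straightforward to implement: the merged class has prevalence $\tilde{q} = \sum_{j \in G} q_j$ over a group $G$, and its PDF is the renormalized prevalence-weighted mixture. The Gaussian classes below are hypothetical:

```python
import numpy as np

def gauss(mu, sigma):
    # Normal PDF used as a hypothetical conditional model.
    return lambda r: np.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def combine(priors, pdfs, group):
    """Merge the classes indexed by `group`: the new prevalence is the sum
    of the old ones, and the new PDF is their prevalence-weighted mixture,
    renormalized so it integrates to 1."""
    q_new = sum(priors[j] for j in group)
    def P_new(r):
        return sum(priors[j] * pdfs[j](r) for j in group) / q_new
    return q_new, P_new

priors = [0.5, 0.3, 0.2]
pdfs = [gauss(1, 1), gauss(4, 1), gauss(7, 1)]

# Collapse the second and third classes (e.g., previously infected and
# vaccinated) into one, reducing the problem to a binary classifier.
q_tilde, P_tilde = combine(priors, pdfs, [1, 2])
```

By construction, the total density $Q(\bm{r})$ is unchanged by the merge.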
The diagnostic community's current analog to local accuracy is the concept of a likelihood ratio (LR), calculated as $s_e/(1 - s_p)$ for a previously infected test result, where $s_e$ and $s_p$ represent sensitivity and specificity. The previously infected LR can be interpreted as the ratio of the probabilities of correctly and incorrectly predicting a previously infected result \citep{riffenburgh2011statistics}. The LR relies on population-averaged information through $s_e$ and $s_p$, which may not always be available or representative. In contrast, local accuracy uses local information, since it is conditioned on the individual measurement value.
\subsection{Extensions}
Multiclass methods are readily equipped to handle further stratification of antibody data, such as by age group, biological sex, or coronavirus disease of 2019 (COVID-19) booster status. An additional class could be added for individuals who are both vaccinated and previously infected. Studies have demonstrated a greater antibody response post-vaccination for previously-infected versus COVID-19 na\"{i}ve recipients \citep{dalle2021serology,narowski2022sars}; this could allow for these populations to be distinguished by our classification scheme. Further, we minimize the prevalence-weighted combination of misclassifications, but the optimization problem can be rewritten for any desired objective function. Reformulations include ``rule-in'' or ``rule-out'' tests that meet desired sensitivity or specificity targets \citep{florkowski2008sensitivity}. Our methods may even be generalizable to multi-label classification, in which a sample can be assigned to more than one class; we anticipate challenges designing the corresponding optimization problem. Finally, the methods presented here can be applied to any setting where class size estimation and population labeling are required; an example is cell sorting in flow cytometry.
\subsection{Limitations}
Model selection is inherently subjective; \cite{schwartz1967estimation} showed that the error goes to zero as more data points are added. As the number of antibody measurements increases, corresponding to viewing the data in higher dimensions, additional modeling choices become available.
\cite{patrone2021classification} suggest the possibility of minimizing misclassifications over a family of models; see also \cite{smith2013uncertainty} for a discussion of model form errors. Classification accuracy and prevalence estimation of the 1D data sets from \cite{ainsworth2020performance} and \cite{wei2021antibody} suffer from overlap between their spike IgG values. If more measurements were available per sample, modeling the data in a higher dimension could improve class separation and thereby lower error rates \citep[see][]{luke2022improving}. Further, our models do not account for time-dependence. This concept is important when classifying antibody tests, which are known to have a half life on the order of several months post infection or vaccination \citep{xia2021longitudinal,kwok2022waning}. See \cite{bedekar2022prevalence} for a time-dependent approach to the binary setting.
\subsection{Implications for assay developers}
We have solved the multiclass diagnostic classification problem, which had previously remained open. Antibody measurements from vaccinated individuals can now be distinguished from previously infected and na{\"i}ve samples.
Our work is the first to obtain unbiased predictions of the relative fractions of vaccinated, previously infected, and na{\"i}ve individuals in a population. These estimates are improved as more samples are added. Best practices for conducting these predictions include dividing the range of all possible measurement values into nonempty regions that create separation between samples of neighboring regions. This can be easily achieved using pre-defined clustering algorithms. Our procedure hinges on selecting probability distributions to model training populations, which can be conducted automatically for measurements of a single antibody target in several open-source programming languages. Our classification scheme is also easily implementable, and can be modified to prioritize specificity if desired. Regardless of the reformulation, the error is minimized by construction.
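The prevalence-weighted classification rule described above can be sketched in a few lines. This is a minimal illustration, not the actual analysis pipeline of this work: the Gaussian class-conditional models, their parameters, and the prevalence values are invented for demonstration only.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a Gaussian class-conditional model (illustrative choice)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical 1D antibody-measurement models and prevalence estimates.
models = {
    "naive":               (0.5, 0.3),
    "previously_infected": (2.0, 0.6),
    "vaccinated":          (3.5, 0.5),
}
prevalences = {"naive": 0.5, "previously_infected": 0.2, "vaccinated": 0.3}

def classify(x):
    """Assign x to the class maximizing prevalence times conditional density;
    this rule minimizes the prevalence-weighted probability of misclassification."""
    scores = {c: prevalences[c] * normal_pdf(x, mu, s) for c, (mu, s) in models.items()}
    return max(scores, key=scores.get)
```

Prioritizing specificity, as mentioned above, would amount to replacing the prevalence weights in `scores` with penalty-adjusted weights for the desired objective.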
\section{Acknowledgements}
This work is a contribution of the National Institute of Standards and Technology and is not subject to copyright in the United States. R.L. was funded through the NIST PREP grant 70NANB18H162. The aforementioned funder had no role in study design, data analysis, decision to publish, or preparation of the manuscript. Use of data provided in this paper has been
approved by the NIST Research Protections Office (IRB no. ITL 2020 2057). The authors wish to thank Drs. Daniel Anderson and Eric Shirley for useful discussions during preparation of this manuscript.
\section{Data Availability}
Analysis scripts and data developed as a part of this work are available upon reasonable request. Original data are provided in \cite{ainsworth2020performance} and \cite{wei2021antibody}.
\section{Declarations of Competing Interests}
The authors have no competing interests to declare.
\setcounter{equation}{0}
\section{Introduction}
Fermionic ultra-cold atoms with nearly resonant scattering in the unitarity regime \cite{Zwierlein2005, Partridge2005, Kohl2005, Stoferle2006, Zwierlein2006, Chin2006, Partridge2006, Stewart2008, Jordens2008, Schneider2008, Gaebler2010} realize the strongest possible form of Cooper pairing, revealed by critical velocity measurements \cite{Miller2007, Diener2008}. It is natural to expect that zero-temperature normal states near unitarity would be very strongly correlated with a host of unconventional properties, possibly bearing some resemblance to those found in cuprates \cite{Stajic2004}. The unitarity limit is therefore an excellent starting point for studies of correlated fermionic superconductors and insulators, which has not been exploited enough in the literature. The benefits are both theoretical and experimental. Systematic perturbative and renormalization group calculations are feasible mainly because the unperturbed ground state (fixed point) is a simple state, the vacuum or a band insulator. Experimentally, the unitarity limit is routinely accessed in cold gases of alkali atoms tuned near a broad Feshbach resonance \cite{Bloch08p885, Tiesinga2009}.
Our ultimate goal is to address the long-standing questions about the nature of unconventional normal states proximate to strongly paired fermionic superfluids or superconductors. We design here a simple and tractable model in which the fermionic excitation gap is opened by an external periodic potential, rather than strong interactions. Attractive interactions between quasiparticles whose energy scale exceeds this gap can still give rise to pairing and superfluidity. The need for strong interactions justifies asking if the normal state proximate to the superfluid might have some unconventional properties reflecting strong correlations, especially in universal regimes shaped by resonant scattering.
The inquiry into unconventional superfluidity from the resonant scattering point of view began a long time ago \cite{Eagles1969}. The present interest in this subject is driven in parallel by a variety of unconventional superconductors in condensed matter physics, and ultra-cold atoms with nearly resonant scattering. The recent theoretical studies of scattering resonances in lattice potentials \cite{Fedichev2004, Carr2005, Dickerscheid2005, Gubbels2006, Zhou2006, Titvinidze2009, Watanabe2009} often rely on two-channel tight-binding models, featuring fermionic atoms resonantly coupled to closed channel bosonic particles. It has been argued that models of this kind provide a good effective description of the microscopic lattice systems of interest \cite{Duan2005, Diener2006, Koetsier2006}. The findings of these studies include lattice Feshbach resonances shifted from their empty-space values.
Most of the mentioned theoretical works approach resonant scattering from a somewhat microscopic angle, exemplified by perturbation theory with the vacuum as the unperturbed ground state. Indeed, the usual universal behavior of particles tuned to a broad Feshbach resonance is established in the low density limit. However, universality is a many-body phenomenon and its complete description requires field-theoretical tools such as the renormalization group. This issue becomes pressing in the present problem of interest, a band insulator whose non-interacting ground state contains a macroscopic number of fermions in fully populated bands. These fermions may not be dynamically inert despite the Pauli exclusion principle, due to the strong interactions which bring the system to its unitarity regime.
In this paper we take a field-theoretical approach to nearly resonant pairing between gapped fermions. Abandoning all microscopic details in a renormalization group (RG) calculation allows us to gain a perspective on the generic and universal behavior of a large class of fermionic lattice systems. The effective theories we subject to RG are constructed to preserve the universality class of the microscopic system. This ensures that the universal phase diagram and other macroscopic properties of the microscopic system are correctly captured despite the neglect of microscopic details. The price to pay is the necessity to deal with multiple flavors of low energy quasiparticles, such as particle and hole excitations which may exist at multiple wavevectors in the first Brillouin zone. Note that the two-channel models with dynamical bosonic fields used in many previous studies are not guaranteed to be in the same universality class as the microscopic system, and may in some cases describe different physics from that discussed in this paper. We characterize the universality stemming from resonant scattering of quasiparticles in band insulators, and discover generalized unitarity regimes in which quasiparticles of different flavors scatter resonantly. The manifestations of unitarity which we discuss include universal ratios of measurable quantities such as critical temperature, pressure and density. We also analyze the types and conditions for pairing instabilities, conventional versus unconventional superfluid transitions, and emphasize the existence of correlated bosonic Mott insulating states in the phase diagram.
The RG analysis reveals why pairing fluctuations indeed play the crucial role in systems of gapped fermions with short-range attractive interactions. Unlike previous studies of unitarity in continuum lattice potentials, which focused on the zero-density limit \cite{Burovski2006, Zhai2007, moon:230403, Burkov2009}, we point out that the unitarity regime in the same universality class can be found at finite densities near any zero temperature band insulator to superfluid transition. The structure of fixed points depends on whether both particles and holes participate equally in the dynamics, or just one of the two quasiparticle types. In the latter case, the exact RG equations can be derived, which allows one to track the run-away flows of attractive interaction couplings (as in some studies of Iron-pnictides \cite{Wang2009, Thomale2009}). It is these run-away flows that can lead to boson-dominated dynamics at low energies. We find that instabilities in the particle-hole channel are discouraged by attractive interactions.
The run-away flows imply the ultimate RG breakdown when the diverging couplings reach cut-off scales. However, the resulting low energy bosonic dynamics is known to introduce additional fixed points associated with superfluid transitions \cite{Fisher1989a}, which appear as strong-coupling fixed points in the present RG. The superfluid transition in this regime can be either in the bosonic mean-field, or XY universality class. The mean-field universality with dynamical exponent $z=2$ emerges as the result of run-away flows from the unitarity dominated by either particles or holes, while the $z=1$ XY universality is related in the same fashion to the unitarity shaped by \emph{both} particles and holes. Therefore, the analysis here provides a glimpse of the more complete structure of fixed points in theories of fermionic particles with attractive interactions, sketched in Fig.\ref{FullRG}.
The mentioned finite-density fixed points describe unitarity in zero-density effective theories of particle and hole excitations. Therefore, one can relate a nearly critical interaction strength to scattering lengths in collisions among particles and holes. Any attractive interaction in two dimensions effectively puts the low energy quasiparticles into their Bose-Einstein condensate (BEC) limit, so quasiparticles injected in the insulating state immediately combine into bound-state pairs \cite{L1977} (whose size can be very large at weak couplings). The effective Bardeen-Cooper-Schrieffer (BCS) regime exists only above two dimensions, at least in the weak coupling limit ($|U|<|U^*|\propto\epsilon$ in Fig.\ref{FullRG}). Note that localization tendencies due to the lattice potential enhance the strength of effective interactions with respect to those between completely free fermions \cite{Fedichev2004, Koetsier2006}.
\begin{figure}
\includegraphics[width=2.7in]{global-rg.eps}
\caption{\label{FullRG}(color online) A hypothetical renormalization group (RG) flow diagram at zero temperature of a fermionic lattice theory with attractive interactions in $d \ge 2$ dimensions. The parameters are fermion density-density interaction $U$ and bandgap (or negative chemical potential) $E_g$. This paper explores in detail the vicinity of two weak-coupling fixed points which govern the pair-breaking superfluid to insulator transition: Gaussian (G) and unitarity (U), separated in proportion to $\epsilon=d-2$. The negative couplings $U$ below unitarity experience run-away flows under RG and lead to the boson-dominated dynamics. A superfluid-insulator transition in this regime is captured by a bosonic effective theory with dynamical exponent $z=2$ (mean-field universality) or $z=1$ (XY universality). In the latter case, one expects an additional fixed point (XY) in $d \le 3$. The Gaussian fixed point of the bosonic effective theory appears at the transition line in the limit $U\to-\infty$, $E_g\to\infty$. The shaded area is the superfluid or superconducting phase, and the red thick line is the second order superfluid-insulator transition. The dashed green line encloses the region in which the fermionic RG is valid.}
\end{figure}
We begin the discussion by laying out the effective theory of a lattice fermionic system in the section \ref{secModel}. Then, in the section \ref{secP} we analyze the exact RG for a single species of fermions, either particles or holes, and reveal the development of short range pairing correlations in insulating states. The section \ref{secPH} presents the RG calculations for particles and holes and identifies a large number of fixed points associated with resonant scattering. This analysis is expanded in the section \ref{secMF} to multiple quasiparticle species living at different wavevectors in the Brillouin zone. All results and conclusions are summarized in the discussion section \ref{secDiscussion}.
\subsection{Model}\label{secModel}
As a generic model of a band insulator we consider the imaginary-time action of neutral fermionic particles with interactions $U$, in a lattice potential $V(\Bf{r})$:
\begin{eqnarray}\label{ContModel1}
&& \!\!\!\!\!\!\! S = \int \textrm{d}\tau \biggl\lbrack \int \textrm{d}^{d}r \; \psi_{\alpha}^{\dagger}
\left( \frac{\partial}{\partial\tau} - \frac{\boldsymbol{\nabla}^2}{2m} + V(\boldsymbol{r}) - \mu \right) \psi_{\alpha} \\
&& \!\!\!\!\!\!\! +\int \textrm{d}^{d}r_1 \textrm{d}^{d}r_2 U(|\Bf{r}_1-\Bf{r}_2|)
\psi_{\alpha}^{\dagger}(\Bf{r}_1) \psi_{\alpha}^{\phantom{\dagger}}(\Bf{r}_1)
\psi_{\beta}^{\dagger}(\Bf{r}_2) \psi_{\beta}^{\phantom{\dagger}}(\Bf{r}_2) \biggr\rbrack \ , \nonumber
\end{eqnarray}
where the summation over spins $\alpha\in\lbrace\uparrow,\downarrow\rbrace$ is implicit. This is a microscopic multi-band model defined in continuum space and not a priori tied to the vicinity of a critical point. The density of particles is tuned to any number of completely populated bands at zero temperature by placing the chemical potential $\mu$ in a bandgap. The dynamics of the resulting band insulator can be described by an effective theory of low-energy quasiparticles belonging to the valence and conduction bands. Sufficiently strong attractive interactions $U$ can drive the system into a superfluid state, and a similar instability can be created by bringing the chemical potential sufficiently close to a band edge, even at weak couplings. A qualitative example of the superfluid-insulator transitions is shown in Fig.\ref{pd1}.
\begin{figure}
\includegraphics[width=2.8in]{sf-transitions-2d.eps}
\caption{\label{pd1}(color online) Superfluid transitions (thick red lines) out of a two-dimensional band insulator at zero temperature. The three shown transitions correspond to arbitrarily chosen different strengths of contact attractive interactions, becoming stronger going from top to bottom. The lattice potential is given by $V(\Bf{r})=2V \lbrack \cos( 2\pi x / a_L ) + \cos( 2\pi y / a_L ) \rbrack$, where $V$ is the lattice amplitude and $a_L$ the lattice spacing. Both $V$ and $\mu$ are measured in the units of ``recoil energy'' $E_0=\hbar^2/2ma_L^2$. The thin red line outlines the band edge of the corresponding non-interacting model. The dashed blue lines are trajectories in the parameter space along which the transitions dominated by particles (p), holes (h), or both (ph) can occur.}
\end{figure}
In order to derive the effective theory we formally integrate out the fermion fields from high-energy bands in the path integral. At best, this can be done perturbatively, for example using the Feynman diagram technique. Unless the perturbative integration breaks down, the effective theory takes the form
\begin{eqnarray}\label{Seff}
&& S_{\textrm{eff}} = \sum_n \int \frac{\textrm{d}\omega}{2\pi} \frac{\textrm{d}^d k}{(2\pi)^d} \;
f_{n,k,\alpha}^{\dagger} \left( -i\omega + E_n(\Bf{k}) \right) f_{n,k,\alpha}^{\phantom{\dagger}} \nonumber
\\ && ~~ +
\sum_{n_1 m_1} \sum_{n_2 m_2} U_{n_1 n_2}^{m_1 m_2} \int
\frac{\textrm{d}\omega_1}{2\pi} \frac{\textrm{d}^d k_1}{(2\pi)^d}
\frac{\textrm{d}\omega_2}{2\pi} \frac{\textrm{d}^d k_2}{(2\pi)^d}
\frac{\textrm{d}\Omega}{2\pi} \frac{\textrm{d}^d q}{(2\pi)^d}
\nonumber
\\ & & ~~~~ \times f_{m_1,k_1+q,\alpha}^{\dagger} f_{n_1,k_1,\alpha}^{\phantom{\dagger}}
f_{m_2,k_2-q,\beta}^{\dagger} f_{n_2,k_2,\beta}^{\phantom{\dagger}}
\end{eqnarray}
in terms of the quasiparticle fermion fields $f$. In this paper we will consider cases in which the band indices $n_i, m_i$ denote one or two bands. The most generic transitions are driven by the chemical potential $\mu$, so that only one band is important (see Fig.\ref{pd1}, (p) and (h) trajectories). The transitions involving particles and holes (the (ph) trajectory in Fig.\ref{pd1}) require at least two bands for a complete description.
A potential problem recognized in a number of studies is that in the vicinity of resonant scattering the microscopic interactions may correspond to energy scales (much) larger than the bandgap. Then, many high energy bands may be significantly hybridized with the conduction and valence bands. In the present formulation of the problem, this can lead to a strong renormalization of low-energy quasiparticle dispersions $E_n(\Bf{k})$ and effective interactions $U_{n_1 n_2}^{m_1 m_2}$. In most circumstances this does not endanger the analysis which follows.
The goal of this paper is to explore the universal properties of nearly resonantly interacting fermions in periodic potentials, for the purpose of which we apply renormalization group. Therefore, the precise functional forms and values of the effective $E_n(\Bf{k})$ and $U_{n_1 n_2}^{m_1 m_2}$ are not of interest, and we do not attempt to derive them from any microscopic model. The perturbative expansions of $E_n(\Bf{k})$ and $U_{n_1 n_2}^{m_1 m_2}$, in powers of the ratio $U/E_{he}$ between the microscopic interaction strength $U$ and the smallest energy $E_{he}$ of the integrated high-energy fermions, need not converge fast. However, these expansions must be convergent, otherwise the effective $E_n(\Bf{k})$ and $U_{n_1 n_2}^{m_1 m_2}$ would contain singular features which would invalidate our analysis. In general there should be a minimum number of bands which must be kept in the effective theory in order for it to be properly analytic, and this number may grow when one approaches the resonant scattering in empty space at $V(\Bf{r})\equiv 0$. We shall assume that this number is never larger than two in the present cases of interest. An assumption of this kind is made in all other studies and it is justified by the fact that a lattice potential shifts the scattering resonance from its empty-space position toward the effective lattice BEC limit \cite{Fedichev2004, Koetsier2006}. In other words, the effective unitarity limit in a lattice which we seek to describe corresponds to the BCS regime in empty space for the same microscopic interactions, where $U/E_{he}$ is not too large.
\section{Renormalization group analysis}\label{secRG}
Here we apply renormalization group (RG) to a band insulator in the unitarity regime. We will identify various fixed points associated with unitarity (resonant two-body scattering) which emerge in the presence of an external periodic potential $V(\Bf{r})$, but otherwise are analogous to the unitarity fixed point of a uniform system at $V(\Bf{r})\equiv 0$. The main difference is that the fixed points we shall discuss occur at finite densities of microscopic particles, corresponding to fully occupied bands, whereas universality in the uniform system stems from a \emph{zero density} fixed point.
There are two characteristic situations which will be considered separately. First, one fermion species (either particles or holes) generally dominates dynamics in the unitarity regime, so the renormalization group equations can be derived exactly to all orders of perturbation theory. This is extremely useful because run-away flows of interaction couplings can be traced more reliably. The second situation is more special and occurs when both particles and holes participate equally in dynamics. Then, the fixed point structure becomes intricate, but can be accessed only in an $\epsilon$ (or large-$N$) expansion. At the end we briefly discuss extensions to more realistic cases with multiple relevant fermion species, and measurable manifestations of the universality class.
\subsection{Transitions involving one fermion species}\label{secP}
A transition dominated by either particles or holes, but not both, is generally caused by chemical potential changes as illustrated in Fig.\ref{pd1} with (p) and (h) dashed lines. As a natural starting point one can imagine a band insulator either in the deep BCS limit, or with a very deep lattice potential, where the chemical potential is brought much closer to one of the bands than to the other. However, such extreme regimes are not necessary initially because they are created by the RG flow (the bandgap is a relevant operator). We shall discover that an attractive interaction undergoes a run-away flow in two dimensions and competes with the bandgap. This competition is resolved at cut-off scales where the RG breaks down. However, a new strongly-coupled universality class takes over in that limit, associated with superfluid to Mott-insulator transition with dynamical exponent $z=2$ at the intersection of the thick red and dashed-blue lines (p) or (h) in Fig.\ref{pd1}.
A characteristic weak-coupling interacting fixed point of a theory of interacting fermions is unitarity, which is found at zero temperature when the chemical potential lies exactly at the boundary between a fermion band and a bandgap (or vacuum). The effective action contains a single species of interacting spinful fermions, not much different from the theory of the system without a lattice. If these fermions live in a valence band, we can immediately reformulate the theory in terms of holes. Therefore, we can always write the critical effective theory in the limit of zero quasiparticle density. The universality class at unitarity will be the same as discussed in Refs.~\cite{Nishida2007, nikolic:033608}.
The critical theory of interest in $d$ dimensions, augmented by the relevant chemical potential is:
\begin{eqnarray}\label{CritTheory1}
S_1 & = & \int \mathcal{D}k \; f_{k,\alpha}^{\dagger} \left( -i\omega + E(\Bf{k}) \right)
f_{k,\alpha}^{\phantom{\dagger}} \\ & + &
U \int \mathcal{D}k_1 \mathcal{D}k_2 \mathcal{D}q \;
f_{k_1,\alpha}^{\dagger} f_{k_1+q,\alpha}^{\phantom{\dagger}}
f_{k_2,\beta}^{\dagger} f_{k_2-q,\beta}^{\phantom{\dagger}}
\nonumber \ ,
\end{eqnarray}
where $k=(\omega,\Bf{k})$, $\mathcal{D}k = \textrm{d} \omega \textrm{d}^d \Bf{k} / (2\pi)^{d+1}$, and
\begin{equation}
E(\Bf{k}) = E_0 + \frac{k^2}{2m} \ .
\end{equation}
All fermion loop Feynman diagrams vanish (the poles of the Green's functions on the loop lie in the same complex half-plane), and only the interaction coupling is renormalized by a summable geometric progression of ladder diagrams. The exact RG equations are found to be \cite{SubirQPT}:
\begin{equation}
\frac{\textrm{d} E_g}{\textrm{d} l} = 2 E_g \qquad , \qquad \frac{\textrm{d} U}{\textrm{d} l} = (2-d)U - \Pi U^2 \ ,
\end{equation}
where $l$ is the scale parameter, $E_g = E_0 + U$ is the effective bandgap, and $\Pi$ is a positive cutoff-dependent constant. A fixed point is always found at $E_g=0$, $U=0$. An additional non-trivial fixed point is found in $d\neq 2$ at $E_g=0$, $U=U^*=(2-d)\Pi^{-1}$, which describes attractive interactions if $d>2$. The schematic flow of interaction couplings is shown in Fig.\ref{RG1}.
\begin{figure}
\includegraphics[width=2.2in]{rg-p.eps}
\caption{\label{RG1}The RG flow of the interaction coupling $U$ in the theory (\ref{CritTheory1}).}
\end{figure}
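The fixed-point structure of the flow equation $\textrm{d} U/\textrm{d} l = (2-d)U - \Pi U^2$ can be confirmed by direct numerical integration. The sketch below (with the arbitrary illustrative choices $d=3$, $\Pi=1$, so $U^*=-1$) shows that a weak attraction with $|U(0)|<|U^*|$ flows back to the Gaussian fixed point, while $|U(0)|>|U^*|$ produces a run-away flow that exceeds any cutoff scale.

```python
def flow(U0, d=3, Pi=1.0, dl=1e-4, l_max=5.0, U_cut=1e3):
    """Euler-integrate dU/dl = (2-d)U - Pi*U^2 and return the coupling at
    l_max, or None if |U| exceeds the cutoff scale first (run-away flow)."""
    U = U0
    for _ in range(int(l_max / dl)):
        U += dl * ((2 - d) * U - Pi * U * U)
        if abs(U) > U_cut:
            return None
    return U

U_star = (2 - 3) / 1.0   # nontrivial fixed point in d=3: U* = (2-d)/Pi = -1
weak = flow(-0.5)        # |U(0)| < |U*|: flows back toward the Gaussian fixed point
strong = flow(-1.5)      # |U(0)| > |U*|: run-away flow toward U -> -infinity
```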
Any attractive interaction $U<0$ in $d=2$ has a run-away flow to $U\to-\infty$, while in $d>2$ it needs to be large enough to flow toward $U\to-\infty$. Repulsive interactions $U>0$, on the other hand, flow to the Gaussian fixed point in $d\ge 2$. Therefore, only attractive interactions can produce strongly correlated states. Since the RG equations are exact, we can precisely characterize the run-away flow, assuming that the effect of multi-body collisions remains negligible at least in the insulating state. Solving for $U(l)$ in two and three dimensions we obtain:
\begin{equation}
U(l) = \left\lbrace
\begin{array}{lcl}
\frac{U(0)}{1+\Pi U(0)l} & , & d=2 \\[0.1in]
\frac{U(0)}{\lbrack 1+\Pi U(0) \rbrack e^l-\Pi U(0)} & , & d=3
\end{array}
\right\rbrace
\end{equation}
In both cases, the run-away flows have vertical asymptotes so that $U(l)$ diverges at a finite value of $l$ ($l=|\Pi U(0)|^{-1}$ in $d=2$). This indicates that Cooper pairs become stable at a finite length scale and have a finite coherence length despite the fermion bandgap. However, the interpretation of the run-away flow breaks down at the cut-off scale because such a large $U$ will pair up the high-energy fermions, which were assumed to be unpaired in this RG procedure. Boson-dominated dynamics then takes over at the shortest length-scales under consideration.
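As a quick consistency check, the $d=2$ closed-form solution $U(l) = U(0)/(1+\Pi U(0)l)$ and its vertical asymptote at $l^* = |\Pi U(0)|^{-1}$ can be compared against a direct numerical integration of $\textrm{d} U/\textrm{d} l = -\Pi U^2$ (the parameter values below are arbitrary):

```python
def U_exact_2d(l, U0, Pi=1.0):
    """Closed-form solution of dU/dl = -Pi*U^2 in d = 2."""
    return U0 / (1.0 + Pi * U0 * l)

def U_numeric_2d(l, U0, Pi=1.0, n=200000):
    """Euler integration of the same flow, as an independent check."""
    U, dl = U0, l / n
    for _ in range(n):
        U += dl * (-Pi * U * U)
    return U

U0, Pi = -0.25, 1.0
l_star = 1.0 / (Pi * abs(U0))  # vertical asymptote at l* = |Pi U(0)|^{-1} = 4
l = 0.9 * l_star               # just below the divergence scale
```

At $l = 0.9\,l^*$ the coupling has already grown tenfold in magnitude, illustrating how quickly the flow escapes the weak-coupling regime before the asymptote.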
Note that $U(l)\to-\infty$ at a finite $l$ cannot be immediately interpreted as a signal of superfluidity. This is due to the fact that at finite $l$ we do not yet have a theory which transparently describes dynamics at macroscopic scales, while superfluidity is verified only by macroscopic long-range correlations. Since RG is based on integrating out \emph{high energy} modes, it does not provide a precise answer to the question of what phase the system lives in, but only gives an indication.
The fermion gap in an insulating state grows exponentially under RG, $E_g(l) = E_g(0) e^{2l}$. If $E_g(l)$ is the first to reach the cutoff scale, the further RG flow is halted (RG breaks down) in a state apparently devoid of particles. This is a band insulator. It is obtained in $2+\epsilon$ dimensions for any sufficiently weak interaction $|U(0)|<|U^*|\sim\epsilon$, or even for $|U(0)|>|U^*|$ provided that the gap $E_g(0)$ is large enough. If $U(l)$ is the first to reach its cutoff instead, then boson-dominated dynamics at shortest length-scales requires switching to a purely bosonic effective theory in order to determine what happens at large length-scales. Both insulating and superfluid phases are possible in this limit, despite a finite fermion gap, but the transition between them is in a different universality class than the BCS pair-breaking transition.
\subsection{Transitions involving particles and holes}\label{secPH}
A special case is obtained in the vicinity of vanishing gaps for both particle and hole excitations. A pairing transition influenced by a corresponding fixed point cannot be obtained by changing the chemical potential alone, but can be accessed by tuning interaction strength or lattice depth, at a fixed particle density (see Fig.\ref{pd1}, the (ph) dashed line). The values of $\mu$ and $V$ should lie at the intersection of effective conduction and valence bands where the effective bandgap closes. We shall again discover a run-away flow of interaction couplings, but this time it quickly invalidates the perturbative RG. The run-away flow is expected to eventually lead to a strong-coupling fixed point in the XY universality class, associated with the superconductor to Mott-insulator transition at an integer number of bosons per lattice site.
The critical theory for valence ($\textrm{v}$) and conduction ($\textrm{c}$) electrons is:
\begin{eqnarray}\label{S2}
S_2 & = & \sum_n \int \mathcal{D}k \;
f_{n,k,\alpha}^{\dagger} \left( -i\omega + E_n(\Bf{k}) \right) f_{n,k,\alpha}^{\phantom{\dagger}}
\\ & + &
\sum_{n_1 m_1} \sum_{n_2 m_2} U_{n_1 n_2}^{m_1 m_2} \int \mathcal{D}k_1 \mathcal{D}k_2 \mathcal{D}q
\nonumber
\\ & & ~~ \times f_{m_1,k_1+q,\alpha}^{\dagger} f_{n_1,k_1,\alpha}^{\phantom{\dagger}}
f_{m_2,k_2-q,\beta}^{\dagger} f_{n_2,k_2,\beta}^{\phantom{\dagger}}
\nonumber \ ,
\end{eqnarray}
where $n\in\lbrace \textrm{c},\textrm{v} \rbrace$ and
\begin{equation}
E_{\textrm{v}}(\Bf{k}) = -E_{\textrm{v}0} - \frac{k^2}{2m_{\textrm{v}}} \quad , \quad
E_{\textrm{c}}(\Bf{k}) = E_{\textrm{c}0} + \frac{k^2}{2m_{\textrm{c}}} \nonumber \ .
\end{equation}
Here we made the simplest assumption that the bandgap $E_0 = E_{\textrm{v}0} + E_{\textrm{c}0} \ge 0$ is direct and small (or vanishing) at only one wavevector in the Brillouin zone. Modifications of this assumption are straightforward and the appropriate more realistic circumstances will be discussed in the following section. Performing a particle-hole transformation for the valence band cannot help us construct an exact RG procedure. Instead, it is convenient to work directly with the native particle degrees of freedom.
The interaction couplings $U_{n_1 n_2}^{m_1 m_2}$ in the band representation are derived from microscopic short-range interactions in real-space. For example, a pure contact interaction $U \psi_{\alpha}^{\dagger}(\Bf{r}) \psi_{\alpha}^{\phantom{\dagger}}(\Bf{r}) \psi_{\beta}^{\dagger}(\Bf{r}) \psi_{\beta}^{\phantom{\dagger}}(\Bf{r})$ in (\ref{ContModel1}) gives:
\begin{eqnarray} \label{BandInteractions}
&& U_{n_1 n_2}^{m_1 m_2}(\Bf{k}_1,\Bf{k}_2,\Bf{q}) = \\
&& ~~ U \int\limits_{\textrm{UC}} \textrm{d}^d r \; u_{m_1,\Bf{k}_1+\Bf{q}}^*(\Bf{r}) u_{n_1,\Bf{k}_1}^{\phantom{*}}(\Bf{r})
u_{m_2,\Bf{k}_2-\Bf{q}}^*(\Bf{r}) u_{n_2,\Bf{k}_2}^{\phantom{*}}(\Bf{r}) \nonumber
\end{eqnarray}
where UC indicates integration over the lattice unit-cell, and $\psi_{n,\Bf{k}}(\Bf{r}) = u_{n,\Bf{k}}(\Bf{r}) e^{i\Bf{k}\cdot\Bf{r}}$ are Bloch wavefunctions. This expression illustrates an important property of interactions in the band representation which follows from the overlap features of the Bloch wavefunctions. As a rule of thumb, the couplings $U_{n_1 n_2}^{m_1 m_2}$ are largest by magnitude if $n_i=m_i$ for both $i=1,2$ and smallest if $n_i \neq m_i$ for both $i=1,2$. The strongest interaction channels involve a single band, while the interband couplings are weaker. This is a natural situation for generic band structures and short-range interactions, but it could be reversed in principle.
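This rule of thumb is easy to check in a toy setting. The sketch below (our own illustration, with arbitrarily chosen lattice depth and cutoff) diagonalizes a one-dimensional cosine lattice in a plane-wave basis at $k=0$, builds the periodic Bloch factors $u_{n,k=0}(x)$ of the two lowest bands, and compares the intraband overlap integrals $\int_{\textrm{UC}}|u_n|^4$ with the fully interband one $\int_{\textrm{UC}}(u_{\textrm{c}}^* u_{\textrm{v}})^2$; by the Cauchy-Schwarz inequality the latter can never exceed the larger of the former.

```python
import numpy as np

# 1D lattice V(x) = 2V cos(2*pi*x), in units hbar^2/(2m) = 1 and a_L = 1.
N, V = 8, 3.0                                    # plane-wave cutoff, lattice depth
G = 2.0 * np.pi * np.arange(-N, N + 1)           # reciprocal lattice vectors
H = np.diag(G**2) + V * (np.eye(2*N + 1, k=1) + np.eye(2*N + 1, k=-1))
_, evecs = np.linalg.eigh(H)                     # band eigenstates at k = 0

x = np.linspace(0.0, 1.0, 2000, endpoint=False)  # unit-cell grid
dx = x[1] - x[0]

def bloch_u(band):
    """Periodic Bloch factor u_{band,k=0}(x) from its plane-wave coefficients."""
    return sum(c * np.exp(1j * g * x) for c, g in zip(evecs[:, band], G))

u_v, u_c = bloch_u(0), bloch_u(1)                # valence and conduction bands

def overlap(a, b, c, d):
    """Unit-cell integral of a* b c* d (Riemann sum; exact for band-limited u)."""
    return np.sum(np.conj(a) * b * np.conj(c) * d) * dx

U_cccc = overlap(u_c, u_c, u_c, u_c).real        # intraband, conduction
U_vvvv = overlap(u_v, u_v, u_v, u_v).real        # intraband, valence
U_inter = abs(overlap(u_c, u_v, u_c, u_v))       # fully interband (n_i != m_i)
```

Since the eigenvectors are normalized, $\int_{\textrm{UC}}|u_n|^2 = 1$ and each intraband overlap is bounded below by unity, while the interband overlap is suppressed by the mismatch of the Bloch factors.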
It is important to note that the spatial dependence of the microscopic interaction potential $U(\Bf{r})$ on the distance $\Bf{r}$ between the interacting particles is not automatically irrelevant (in the RG sense) in the presence of the lattice. Short-range variations of $U(\Bf{r})$, at or below the lattice spacing length-scale, affect the relative strength of the couplings $U_{n_1 n_2}^{m_1 m_2}$, which may lead to non-trivial interacting fixed points as discussed below. Only the long-range variations and the related crystal momentum dependence of the interaction couplings are not relevant in the vicinity of the fixed points of interest.
After normal ordering the interactions take the form:
\begin{widetext}
\begin{eqnarray}
&& \sum_{n_1 m_1} \sum_{n_2 m_2} U_{n_1 n_2}^{m_1 m_2}
\int \mathcal{D}k_1 \mathcal{D}k_2 \mathcal{D}q \;
f_{m_1,k_1+q,\alpha}^{\dagger} f_{n_1,k_1,\alpha}^{\phantom{\dagger}}
f_{m_2,k_2-q,\beta}^{\dagger} f_{n_2,k_2,\beta}^{\phantom{\dagger}} = \\
&& \sum_{n_1 m_1} \sum_{n_2 m_2} U_{n_1 n_2}^{m_1 m_2}
\int \mathcal{D}k_1 \mathcal{D}k_2 \mathcal{D}q \;
f_{m_1,k_1+q,\alpha}^{\dagger} f_{m_2,k_2-q,\beta}^{\dagger}
f_{n_2,k_2,\beta}^{\phantom{\dagger}} f_{n_1,k_1,\alpha}^{\phantom{\dagger}} +
\sum_{nm} U_{n}^{m} \int \mathcal{D}k \; f_{m,k,\alpha}^{\dagger} f_{n,k,\alpha}^{\phantom{\dagger}}
\nonumber
\end{eqnarray}
\end{widetext}
The generated quadratic terms $U_{n}^{m} = n_0/2\times\sum_{n'} U_{n' n}^{m n'}$, where $n_0$ is the particle density in the ground-state, are similar to a ``charging energy'' since they effectively shift the chemical potential as a result of interactions. However, they also couple the particles in the two bands, so we must redefine the bare Green's function. One way to accomplish this is to treat $U_{n}^{m}$ as a self-energy correction to the non-interacting Green's function $\lbrack i\omega-E_n(\Bf{k}) \rbrack^{-1} \delta_{nm}$:
\begin{eqnarray}
G_n^m(\Bf{k},i\omega) & = & \left(
\begin{array}{cc}
i\omega-E_{\textrm{c}}(\Bf{k})-U_{\textrm{c}}^{\textrm{c}} & -U_{\textrm{c}}^{\textrm{v}} \\
-U_{\textrm{v}}^{\textrm{c}} & i\omega-E_{\textrm{v}}(\Bf{k})-U_{\textrm{v}}^{\textrm{v}}
\end{array}
\right)^{-1} \nonumber \\
& = & \frac{g_n^m(\Bf{k},i\omega)}{(i\omega-z_1(\Bf{k}))(i\omega-z_2(\Bf{k}))} \ .
\end{eqnarray}
It is convenient to define $\zeta_n(\Bf{k}) = E_n(\Bf{k}) + U_n^n$ and $\xi = \sqrt{U_{\textrm{c}}^{\textrm{v}}U_{\textrm{v}}^{\textrm{c}}}$ (note that $U_{\textrm{c}}^{\textrm{v}} = (U_{\textrm{v}}^{\textrm{c}})^*$). Then:
\begin{equation}
z_{1/2}(\Bf{k}) = \frac{\zeta_{\textrm{c}}+\zeta_{\textrm{v}}}{2} \pm \left\lbrack
\left( \frac{\zeta_{\textrm{c}}-\zeta_{\textrm{v}}}{2} \right)^2 + \xi^2 \right\rbrack^{\frac{1}{2}}
\end{equation}
are the new poles of the bare fermion excitations, and
\begin{equation}
g_n^m(\Bf{k},i\omega) = (i\omega - \zeta_{\textrm{c}} - \zeta_{\textrm{v}} + \zeta_n) \delta_{nm} + \xi (1-\delta_{nm}) \ .
\end{equation}
Both poles are always real, and one is positive (particle-like) while the other is negative (hole-like) as long as $\xi^2>\zeta_{\textrm{c}}\zeta_{\textrm{v}}$. We will assume that this condition is satisfied, so that the system remains a band insulator despite the ``charging energy''. Consequently, we can expand the poles up to $\mathcal{O}(k^2)$:
\begin{equation}
z_1(\Bf{k}) = \epsilon_1 + \frac{k^2}{2M_1} \qquad , \qquad
z_2(\Bf{k}) = -\epsilon_2 - \frac{k^2}{2M_2} \ ,
\end{equation}
where $\epsilon_i$ are the bare quasiparticle gaps, and $M_i$ are the quasiparticle masses given by:
\begin{equation}
M_{1/2}^{-1} = \alpha \times \frac{m_{\textrm{c}}^{-1} +
m_{\textrm{v}}^{-1}}{2} \pm \frac{m_{\textrm{c}}^{-1} - m_{\textrm{v}}^{-1}}{2} \ .
\end{equation}
The parameter
\begin{equation}\label{PrmAlpha}
\alpha = \frac{E_g}{\sqrt{E_g^2+4\xi^2}}
\end{equation}
captures the amount of mixing between the two bands ($0 \le \alpha \le 1$); $E_g = ( \zeta_{\textrm{c}}-\zeta_{\textrm{v}}) \bigr\vert_{\Bf{k}=0}$ is the effective fermion bandgap. For $\alpha=1$ there is no band mixing and $M_i \in \lbrace m_{\textrm{c}}, m_{\textrm{v}} \rbrace$. In general, $\alpha > |\beta| = |m_{\textrm{c}}-m_{\textrm{v}}|/(m_{\textrm{c}}+m_{\textrm{v}})$ is required in order for both $M_i$ to remain positive. Otherwise, large interband couplings cause band inversion, which must be taken into account by redefining the low-energy quasiparticles; these then live at different momenta in the first Brillouin zone. We shall come back to this situation at the end.
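As a quick numerical sanity check of these expressions, the following Python sketch (with illustrative parameter values, not tied to any particular system) verifies that the poles $z_{1/2}$ are real with opposite signs when $\xi^2 > \zeta_{\textrm{c}}\zeta_{\textrm{v}}$, and that the curvatures of the poles at $\Bf{k}=0$ reproduce the quasiparticle masses $M_{1/2}$ obtained from the mixing parameter $\alpha$:

```python
import math

# Illustrative parameters (chosen only for this check, not from the text):
m_c, m_v = 1.0, 2.0    # bare conduction/valence band masses
zc0, zv0 = 0.5, -0.7   # zeta_c(0), zeta_v(0): band edges shifted by U_n^n
xi = 0.3               # interband coupling magnitude, xi = sqrt(Ucv * Uvc)

def zeta_c(k): return zc0 + k**2 / (2 * m_c)
def zeta_v(k): return zv0 - k**2 / (2 * m_v)  # hole-like valence dispersion

def poles(k):
    avg = 0.5 * (zeta_c(k) + zeta_v(k))
    half = 0.5 * (zeta_c(k) - zeta_v(k))
    rad = math.sqrt(half**2 + xi**2)
    return avg + rad, avg - rad               # z_1 (particle), z_2 (hole)

# Band-insulator condition: one positive and one negative pole
z1, z2 = poles(0.0)
assert xi**2 > zc0 * zv0 and z1 > 0 > z2

# Mixing parameter alpha and the predicted quasiparticle masses
Eg = zc0 - zv0
alpha = Eg / math.sqrt(Eg**2 + 4 * xi**2)
M1_inv = alpha * (1/m_c + 1/m_v) / 2 + (1/m_c - 1/m_v) / 2
M2_inv = alpha * (1/m_c + 1/m_v) / 2 - (1/m_c - 1/m_v) / 2

# Compare with the numerical curvature of the poles at k = 0
h = 1e-4
curv1 = (poles(h)[0] - 2 * poles(0.0)[0] + poles(-h)[0]) / h**2   # = 1/M_1
curv2 = -(poles(h)[1] - 2 * poles(0.0)[1] + poles(-h)[1]) / h**2  # = 1/M_2
assert abs(curv1 - M1_inv) < 1e-6 and abs(curv2 - M2_inv) < 1e-6
```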
Now we set up the RG. As usual, we keep the non-interacting masses $m_{\textrm{c}}$ and $m_{\textrm{v}}$ fixed under RG. While this does not imply that $M_i$ will be fixed, it sets the scaling dimension of the field operators to $d/2$. The scaling of coordinates and couplings
\begin{eqnarray}
r' = r e^{-l} \qquad & , & \qquad \tau' = \tau e^{-2l} \\
\epsilon'_i = \epsilon_i e^{2l} \qquad & , & \qquad U' = U e^{(2-d)l} \ , \nonumber
\end{eqnarray}
is followed by the diagrammatic integration of high-energy fields living at all Matsubara frequencies and momenta within a shell $|\Bf{k}|\in(\Lambda e^{-\textrm{d} l}, \Lambda)$, where $\Lambda$ is a cut-off momentum scale and $\textrm{d} l$ is an infinitesimal increment of the scale parameter $l$. The resulting one-loop renormalization of the quadratic and quartic couplings is summarized in Table \ref{OneLoopRen}. The relevant cutoff-dependent renormalization constants are:
\begin{eqnarray}\label{RenConst}
K_{1k} & = & \frac{S_d \Lambda^d}{(2\pi)^d} \frac{\alpha-(-1)^k}{2\alpha} \\
K'_{2kk'} & = & \frac{S_d \Lambda^{d-2}}{(2\pi)^d}
\frac{-m_{\textrm{c}}m_{\textrm{v}} (1+\alpha^2-2\delta_{kk'})}{2\alpha^3(m_{\textrm{c}}+m_{\textrm{v}})} \nonumber \\
K''_{2kk'} & = & \frac{S_d \Lambda^{d-2}}{(2\pi)^d}
\frac{m_{\textrm{c}}m_{\textrm{v}} \left\lbrack (m_{\textrm{c}}+m_{\textrm{v}})(\alpha^2-1) + 4m_k\delta_{kk'} \right\rbrack}
{2\alpha \left\lbrack \alpha^2( m_{\textrm{c}}+m_{\textrm{v}})^2 - (m_{\textrm{c}}-m_{\textrm{v}})^2 \right\rbrack} \nonumber
\end{eqnarray}
\begin{table}[!]
\begin{displaymath}
\begin{array}{cc}
\begin{minipage}{1.2in} \includegraphics[width=0.7in]{self-energy-Hartree.eps} \end{minipage} &
-2\sum\limits_k K_{1k}^{\phantom{mk}} U_{nk}^{mk} \\[0.5in]
\begin{minipage}{1.2in} \includegraphics[width=1.2in]{self-energy-Fock.eps} \end{minipage} &
\sum\limits_k K_{1k}^{\phantom{mk}} U_{kn}^{mk} \\[0.5in]
\begin{minipage}{1.2in} \includegraphics[width=1.2in]{vertex-bubble.eps} \end{minipage} &
\begin{array}{r}
-2 \sum\limits_{kl} K'_{2kl}
\Bigl( U_{n_1 k}^{m_1 l} U_{n_2 l}^{m_2 k} + U_{n_1 k}^{m_1 l} U_{l n_2}^{k m_2} \\
+ U_{k n_1}^{l m_1} U_{n_2 l}^{m_2 k} + U_{k n_1}^{l m_1} U_{l n_2}^{k m_2} \Bigr)
\end{array} \\[0.5in]
\begin{minipage}{1.2in} \includegraphics[width=1.0in]{vertex-fork.eps} \end{minipage} &
\begin{array}{r}
\sum\limits_{kl} K'_{2kl}
\Bigl( U_{n_1 k}^{m_1 l} U_{n_2 l}^{k m_2} + U_{n_1 k}^{m_1 l} U_{l n_2}^{m_2 k} \phantom{\Bigr)} \\
+ U_{k n_1}^{l m_1} U_{n_2 l}^{k m_2} + U_{k n_1}^{l m_1} U_{l n_2}^{m_2 k} \phantom{\Bigr)} \\
+ U_{n_2 k}^{m_2 l} U_{n_1 l}^{k m_1} + U_{n_2 k}^{m_2 l} U_{l n_1}^{m_1 k} \phantom{\Bigr)} \\
+ U_{k n_2}^{l m_2} U_{n_1 l}^{k m_1} + U_{k n_2}^{l m_2} U_{l n_1}^{m_1 k} \Bigr)
\end{array} \\[0.5in]
\begin{minipage}{1.2in} \includegraphics[width=0.85in]{vertex-ph.eps} \end{minipage} &
\begin{array}{r}
\sum\limits_{kl} K'_{2kl}
\Bigl( U_{k n_2}^{m_1 l} U_{n_1 l}^{k m_2} + U_{k n_2}^{m_1 l} U_{l n_1}^{m_2 k} \phantom{\Bigr)} \\
+ U_{n_2 k}^{l m_1} U_{n_1 l}^{k m_2} + U_{n_2 k}^{l m_1} U_{l n_1}^{m_2 k} \Bigr)
\end{array} \\[0.5in]
\begin{minipage}{1.2in} \includegraphics[width=0.85in]{vertex-pp.eps} \end{minipage} &
\begin{array}{r}
\sum\limits_{kl} K''_{2kl}
\Bigl( U_{kl}^{m_1 m_2} U_{n_1 n_2}^{kl} + U_{kl}^{m_1 m_2} U_{n_2 n_1}^{lk} \\
+ U_{lk}^{m_2 m_1} U_{n_1 n_2}^{kl} + U_{lk}^{m_2 m_1} U_{n_2 n_1}^{lk} \Bigr)
\end{array}
\end{array}
\end{displaymath}
\caption{\label{OneLoopRen}One-loop diagrams which renormalize the couplings $U_n^m$ (first two) and $U_{n_1 n_2}^{m_1 m_2} $ (last four). The renormalization constants $K_{1k}$, $K'_{2kk'}$ and $K''_{2kk'}$ are given in (\ref{RenConst}).}
\end{table}
The RG equations involving all four $U_n^m$ and all sixteen $U_{n_1 n_2}^{m_1 m_2}$ couplings (not all of which are independent) are too complicated to be solved in full. Part of the problem is that the parameter $\alpha$ can also flow under RG, as a result of the renormalization of the couplings $U_n^m$. To simplify notation, let us absorb the bare fermion gaps into the quadratic ``charging'' couplings, since they flow the same way under RG: $U_n^n \to U_n^n + E_{n0}$. We begin by noting that the RG equation for the quadratic couplings is:
\begin{equation}\label{RGquad}
~~ \frac{\textrm{d} U_n^m}{\textrm{d} l} = 2 U_n^m - \sum_k K_{1k} \left( U_{kn}^{mk} + U_{nk}^{km} - 2U_{nk}^{mk} - 2U_{kn}^{km} \right)
\ .
\end{equation}
In $d=2+\epsilon$ dimensions the interacting fixed points will be at $U_{n_1n_2}^{m_1m_2} \propto \epsilon$, implying $U_n^m \propto \epsilon$. Finite values for all $U_n^m$ in $d>2$ dimensions uniquely determine the value for $\alpha$, which has to be fed back into the RG equations to self-consistently determine the fixed points. This can be done only numerically. However, analytical solutions for a subset of fixed points can be found if the couplings $U_{\textrm{c}}^{\textrm{v}}$, $U_{\textrm{cc}}^{\textrm{cv}}$, $U_{\textrm{cv}}^{\textrm{cc}}$, $U_{\textrm{vv}}^{\textrm{cv}}$, $U_{\textrm{cv}}^{\textrm{vv}}$ and their complex conjugates are all zero. In this case, it follows from (\ref{PrmAlpha}) and (\ref{RGquad}) that $\alpha=1$ and does not flow under RG. This will be the focus of the following discussion.
In two dimensions there is only one weak-coupling fixed point, at $U_n^m=0$, $U_{n_1 n_2}^{m_1 m_2}=0$. The flow of $U_\textrm{cc}^\textrm{cc}$, $U_\textrm{vv}^\textrm{vv}$ and $U_\textrm{cv}^\textrm{vc}=U_\textrm{vc}^\textrm{cv}$ is of the same type as shown in Fig.\ref{RG1} at $d=2$: attractive interactions undergo a run-away flow, while repulsive interactions flow to zero. The interband interaction $U_\textrm{cv}^\textrm{cv}=U_\textrm{vc}^\textrm{vc}$ has the opposite behavior: a repulsive coupling keeps growing, while an attractive one flows to zero. In normal circumstances, due to the properties of the Bloch wavefunctions, the intraband couplings $U_\textrm{cc}^\textrm{cc}$ and $U_\textrm{vv}^\textrm{vv}$ are larger than the interband $U_\textrm{cv}^\textrm{cv}$ and $U_\textrm{cv}^\textrm{vc}$, so the attractive intraband channels dominate at macroscopic scales and lead to Cooper pairing even if the interband channels are repulsive. Instabilities in the particle-hole channel are possible only if all interactions are repulsive, or if for some reason $U_\textrm{cv}^\textrm{cv}$ is repulsive and stronger than the attractive intraband interactions.
\begin{table}[t]
\begin{tabular}{c@{\;\;\;\;\;}c@{\;\;\;\;\;}c}
\includegraphics[height=0.45in]{fermion-vertex-Ucc.eps} &
\includegraphics[height=0.45in]{fermion-vertex-Uvv.eps} &
\includegraphics[height=0.45in]{fermion-vertex-Ucv.eps} \\[0.2in]
\includegraphics[height=0.45in]{fermion-vertex-Um.eps} &
\includegraphics[height=0.45in]{fermion-vertex-Ue1.eps} &
\includegraphics[height=0.45in]{fermion-vertex-Ue2.eps}
\end{tabular}
\caption{\label{TabVertices}Relevant interaction vertices near the zero-quasiparticle-density fixed points for $\alpha=1$ (defined in the text).}
\end{table}
In $d=2+\epsilon$ dimensions with $\epsilon>0$ it is convenient to define:
\begin{equation}
m_{\textrm{c}} = m(1+\beta) \quad , \quad m_{\textrm{v}} = m(1-\beta) \quad , \quad
\beta = \frac{m_{\textrm{c}}-m_{\textrm{v}}}{m_{\textrm{c}}+m_{\textrm{v}}} \nonumber
\end{equation}
and the rescaled independent dimensionless couplings $(u_{\textrm{c}}, u_{\textrm{v}}, u_{\textrm{cv}}, u_{\textrm{m}}, u_{\textrm{e}}, e_{\textrm{g}})$:
\begin{eqnarray}\label{ScalInt}
U_{\textrm{c}}^{\textrm{c}} - U_{\textrm{v}}^{\textrm{v}} & = & \frac{\Lambda^2\epsilon}{m} e_g
\\
U_{\textrm{cc}}^{\textrm{cc}} = K\epsilon \frac{u_{\textrm{c}}}{1+\beta} ~ &,& ~
U_{\textrm{vv}}^{\textrm{vv}} = K\epsilon \frac{u_{\textrm{v}}}{1-\beta}
\nonumber \\
U_{\textrm{cv}}^{\textrm{cv}} + U_{\textrm{vc}}^{\textrm{vc}} = K\epsilon \frac{u_{\textrm{cv}}}{1-\beta^2} ~ &,& ~
U_{\textrm{cv}}^{\textrm{vc}} + U_{\textrm{vc}}^{\textrm{cv}} = K\epsilon \frac{u_{\textrm{m}}}{1-\beta^2}
\nonumber \\
U_{\textrm{cc}}^{\textrm{vv}} = K\epsilon \frac{u_{\textrm{e}} e^{i\theta}}{\sqrt{1-\beta^2}} ~ &,& ~
U_{\textrm{vv}}^{\textrm{cc}} = K\epsilon \frac{u_{\textrm{e}} e^{-i\theta}}{\sqrt{1-\beta^2}}
\nonumber
\end{eqnarray}
where $K = (2\pi)^d / (S_d\Lambda^{\epsilon}m)$. These interactions are represented diagrammatically in Table \ref{TabVertices}. The RG equations for $\alpha=1$ are:
\begin{eqnarray}\label{RGeq}
\frac{\textrm{d} u_\textrm{c}}{\textrm{d} l} &=& \epsilon \Bigl\lbrack - u_\textrm{c} - 4u_\textrm{c}^2 - 4u_\textrm{e}^2 \Bigr\rbrack
\\
\frac{\textrm{d} u_\textrm{v}}{\textrm{d} l} &=& \epsilon \Bigl\lbrack - u_\textrm{v} - 4u_\textrm{v}^2 - 4u_\textrm{e}^2 \Bigr\rbrack
\nonumber \\
\frac{\textrm{d} u_\textrm{cv}}{\textrm{d} l} &=& \epsilon \Bigl\lbrack - u_\textrm{cv} + 2u_\textrm{cv}^2 + 8(1-\beta^2) u_\textrm{e}^2
\Bigr\rbrack \nonumber \\
\frac{\textrm{d} u_\textrm{m}}{\textrm{d} l} &=& \epsilon \Bigl\lbrack - u_\textrm{m} - 4u_\textrm{m}^2 + 4u_\textrm{cv}u_\textrm{m} \Bigr\rbrack
\nonumber \\
\frac{\textrm{d} u_\textrm{e}}{\textrm{d} l} &=& \epsilon \Bigl\lbrack - u_\textrm{e} + u_\textrm{e} \Bigl(
-4u_\textrm{c} - 4u_\textrm{v} + 8u_\textrm{cv} - 4u_\textrm{m} \Bigr) \Bigr\rbrack
\nonumber \\
\frac{\textrm{d} e_g}{\textrm{d} l} &=& 2e_g - \frac{2u_\textrm{v}}{1-\beta} + \frac{2u_\textrm{cv}}{1-\beta^2} -\frac{u_\textrm{m}}{1-\beta^2}
\nonumber
\end{eqnarray}
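The qualitative behavior of these flows can be checked by direct integration. The sketch below (plain Euler stepping, with an arbitrary illustrative value of $\epsilon$) follows the $u_\textrm{c}$ flow at $u_\textrm{e}=0$, where $u_\textrm{c}$ decouples: a weak attraction ($|u_\textrm{c}|<1/4$) is irrelevant, a strong attraction runs away, and a repulsive coupling flows back to the Gaussian fixed point:

```python
# Euler integration of the decoupled u_c flow from the RG equations above,
# with u_e = 0: du_c/dl = eps * (-u_c - 4*u_c**2).  eps is illustrative.
eps = 0.1  # d = 2 + eps

def flow(u0, steps=100000, dl=1e-3):
    u = u0
    for _ in range(steps):
        u += dl * eps * (-u - 4 * u * u)
        if u < -1e3:               # run-away flow toward -infinity
            return float('-inf')
    return u

weak_attraction = flow(-0.20)    # |u_c| < 1/4: flows back to u_c = 0
strong_attraction = flow(-0.30)  # |u_c| > 1/4: run-away (pairing instability)
repulsion = flow(+0.50)          # repulsive coupling is irrelevant

assert abs(weak_attraction) < 1e-3
assert strong_attraction == float('-inf')
assert abs(repulsion) < 1e-3
```

The separatrix between the two attractive behaviors is the unitarity fixed point $u_\textrm{c}=-1/4$.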
Above two dimensions, there are seventeen fixed points with $\alpha=1$. Sixteen of these fixed points $F_{1}-F_{16}$ are given by all possible combinations of:
\begin{eqnarray}
&& \!\!\!\!\!\! u_\textrm{c}\in\Bigl\lbrace 0, -\frac{1}{4} \Bigr\rbrace ~~ , ~~
u_\textrm{v}\in\Bigl\lbrace 0, -\frac{1}{4} \Bigr\rbrace ~~ , ~~ u_\textrm{e} = 0 \nonumber \\
&& \!\!\!\!\!\! (u_\textrm{cv},u_\textrm{m}) \in \Bigl\lbrace \Bigl(\frac{1}{2},0\Bigr), \Bigl(\frac{1}{2},\frac{1}{4}\Bigr),
\Bigl(0,-\frac{1}{4}\Bigr), \Bigl(0,0\Bigr) \Bigr\rbrace \nonumber
\end{eqnarray}
Note that here $u_\textrm{e}$ is always zero. The RG eigenvalues in the subspace of $(u_{\textrm{c}}, u_{\textrm{v}}, u_{\textrm{cv}}, u_{\textrm{m}})$ are $\pm\epsilon$ at all of these fixed points, so that $F_{1}-F_{16}$ can be enumerated simply by which of the couplings $(u_{\textrm{c}}, u_{\textrm{v}}, u_{\textrm{cv}}, u_{\textrm{m}})$ flow as relevant or irrelevant. This is illustrated in Fig.\ref{RG2} for the first fifteen fixed points, at which at least one of the couplings $u_{\textrm{c}}$, $u_{\textrm{v}}$, $u_{\textrm{cv}}$, $u_{\textrm{m}}$ is zero. Only the Gaussian fixed point is fully stable, while $F_{16}$, with all of $u_{\textrm{c}}$, $u_{\textrm{v}}$, $u_{\textrm{cv}}$, $u_{\textrm{m}}$ non-zero, is fully unstable. The coupling $u_\textrm{e}$ is irrelevant only at the Gaussian fixed point; it is marginal at $(u_{\textrm{c}}, u_{\textrm{v}}, u_{\textrm{cv}}, u_{\textrm{m}}) \in \lbrace(-1/4,0,0,0),(0,-1/4,0,0),(0,0,0,-1/4)\rbrace$ and relevant otherwise.
The remaining fixed point $F_{17}$ is the only one with $u_\textrm{e} > 0$:
\begin{eqnarray}
&& \!\!\!\!\!\! u_\textrm{e}^2 = \frac{15}{64\left( 11 - 4\beta^2 + 8\sqrt{4\beta^4-7\beta^2+4} \right)} ~~ , ~~ u_\textrm{m}=0
\nonumber \\
&& \!\!\!\!\!\! u_\textrm{c} = u_\textrm{v} = u_\textrm{cv}-\frac{1}{8} = -\frac{3}{8\left( 5-4\beta^2 + 2\sqrt{4\beta^4-7\beta^2+4}
\right)} \nonumber
\end{eqnarray}
It has only one relevant direction with RG eigenvalue $\epsilon$, the $u_\textrm{e}$ component being the largest in the corresponding eigenvector. The RG flow in the vicinity of this fixed point is illustrated in Fig.\ref{RG3}. Note that formally there are other solutions stemming from (\ref{RGeq}), but they have $u_\textrm{e}^2<0$ corresponding to time-reversal symmetry violations.
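As an independent check of the algebra, one can verify numerically that $F_{17}$ annihilates all five beta functions in (\ref{RGeq}) for any mass asymmetry $\beta$. A minimal sketch:

```python
import math

# F17 from the expressions above: u_m = 0, u_c = u_v, u_cv = u_c + 1/8.
def F17(beta):
    s = math.sqrt(4 * beta**4 - 7 * beta**2 + 4)
    u_e = math.sqrt(15.0 / (64 * (11 - 4 * beta**2 + 8 * s)))
    u_c = -3.0 / (8 * (5 - 4 * beta**2 + 2 * s))
    return u_c, u_c, u_c + 0.125, 0.0, u_e

# Beta functions of (u_c, u_v, u_cv, u_m, u_e), up to an overall factor eps
def beta_functions(u_c, u_v, u_cv, u_m, u_e, beta):
    return (-u_c - 4 * u_c**2 - 4 * u_e**2,
            -u_v - 4 * u_v**2 - 4 * u_e**2,
            -u_cv + 2 * u_cv**2 + 8 * (1 - beta**2) * u_e**2,
            -u_m - 4 * u_m**2 + 4 * u_cv * u_m,
            -u_e + u_e * (-4 * u_c - 4 * u_v + 8 * u_cv - 4 * u_m))

for beta in (0.0, 0.3, 0.7):
    residuals = beta_functions(*F17(beta), beta)
    assert all(abs(r) < 1e-12 for r in residuals)
```

The $u_\textrm{e}$ beta function vanishes identically once $u_\textrm{cv} = u_\textrm{c} + 1/8$ and $u_\textrm{c}=u_\textrm{v}$, $u_\textrm{m}=0$ are imposed; the remaining equations fix $u_\textrm{c}$ and $u_\textrm{e}^2$.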
\begin{figure}
\subfigure[{}]{\includegraphics[width=1.6in]{rg-ph1.eps}}
\subfigure[{}]{\includegraphics[width=1.6in]{rg-ph2.eps}}
\subfigure[{}]{\includegraphics[width=1.6in]{rg-ph3.eps}}
\subfigure[{}]{\includegraphics[width=1.6in]{rg-ph4.eps}}
\caption{\label{RG2}The fixed points and RG flow of interaction couplings $(u_{\textrm{c}}, u_{\textrm{v}}, u_{\textrm{cv}}, u_{\textrm{m}})$ at $u_\textrm{e}=0$: (a) $u_{\textrm{m}}=0$, (b) $u_{\textrm{cv}}=0$, (c) $u_{\textrm{v}}=0$, (d) $u_{\textrm{c}}=0$. In front of the exposed ``cube'' faces are the run-away regions, signifying pairing correlations in various channels and possible symmetry-broken phases. If one extends the exposed ``cube'' faces in the directions of $u_\textrm{c}>0$, $u_\textrm{v}>0$, $u_\textrm{cv}<0$ and $u_\textrm{m}>0$, the obtained semi-infinite surface (whose corner is a shown ``cube'') encloses the basin of attraction of the Gaussian fixed point.}
\end{figure}
\begin{figure}
\includegraphics[width=1.8in]{rg-ph-e3.eps}
\caption{\label{RG3}The fixed points and RG flow involving $u_{\textrm{e}} \neq 0$ for $u_{\textrm{m}}=0$, $u_{\textrm{c}}=u_{\textrm{v}}$. The shaded semi-infinite surface encloses the basin of attraction of the Gaussian fixed point.}
\end{figure}
Whenever $u_{\textrm{c}}$, $u_{\textrm{v}}$, $u_{\textrm{cv}}$, $u_{\textrm{m}}$ are relevant, their RG eigenvalue is $\epsilon$, the same as the RG eigenvalue at the unitarity fixed point of a uniform zero-density system which corresponds to vacuum resonant scattering. While this is not surprising when it comes to pairing of two quasiparticles $u_\textrm{c}$ or two quasiholes $u_\textrm{v}$, it is interesting to note that the same resonant scattering interpretation can be applied to the couplings $u_\textrm{cv}$ and $u_\textrm{m}$. We can identify the resonantly scattering quasiparticles by tuning to a fixed point with only one finite coupling and then taking a closer look at the operator corresponding to that coupling. In the case of $u_\textrm{m}$, the operator is
\begin{eqnarray}
f_{\textrm{c}\alpha}^{\dagger} f_{\textrm{v}\beta}^{\dagger}
f_{\textrm{c}\beta}^{\phantom{\dagger}} f_{\textrm{v}\alpha}^{\phantom{\dagger}} & = &
-f_{\textrm{c}\alpha}^{\dagger} f_{\textrm{v}\beta}^{\dagger}
f_{\textrm{v}\alpha}^{\phantom{\dagger}} f_{\textrm{c}\beta}^{\phantom{\dagger}} \nonumber \\
& = & |\Phi_{s}|^2-|\Phi_{t0}|^2-|\Phi_{t\uparrow}|^2-|\Phi_{t\downarrow}|^2 \ , \nonumber
\end{eqnarray}
where the operator $\Phi_s=(f_{\textrm{c}\uparrow}f_{\textrm{v}\downarrow}-f_{\textrm{c}\downarrow}f_{\textrm{v}\uparrow})/\sqrt{2}$ annihilates an interband singlet and the operators $\Phi_{t\uparrow}=f_{\textrm{c}\uparrow}f_{\textrm{v}\uparrow}$, $\Phi_{t\downarrow}=f_{\textrm{c}\downarrow}f_{\textrm{v}\downarrow}$ and $\Phi_{t0}=(f_{\textrm{c}\uparrow}f_{\textrm{v}\downarrow}+f_{\textrm{c}\downarrow}f_{\textrm{v}\uparrow})/\sqrt{2}$ annihilate triplet pairs. The fixed point(s) at $u_\textrm{m}<0$ can now be associated with the resonant scattering in the interband singlet Cooper channel. Note that the absence of fixed points at $u_\textrm{m}>0$ rules out resonant scattering in the attractive triplet channel (the fixed point at $u_\textrm{cv}=1/2$, $u_\textrm{m}=1/4$ is fully repulsive in the particle-particle channel).
The interaction $u_\textrm{cv}>0$ at its resonant-scattering fixed point is repulsive in the particle-particle channel and cannot lead to a Cooper pair resonance. However, it becomes attractive in the particle-hole channel. Keeping only $u_\textrm{cv}$ finite allows performing a particle-hole transformation in the valence band, after which the theory contains two similarly dispersing fermion fields (particles and holes) in their vacuum states, interacting attractively. Denoting the particle and hole annihilation operators as $f^{\phantom{\dagger}}_\alpha \equiv f^{\phantom{\dagger}}_{\textrm{c}\alpha}$ and $\bar{f}^{\dagger}_{\alpha} \equiv f^{\phantom{\dagger}}_{\textrm{v}\bar{\alpha}}$ respectively, where $\bar{\alpha}$ is the opposite spin of $\alpha$, the $u_\textrm{cv}$ operator can be written as
\begin{eqnarray}
f_{\textrm{c}\alpha}^{\dagger} f_{\textrm{v}\beta}^{\dagger}
f_{\textrm{v}\beta}^{\phantom{\dagger}} f_{\textrm{c}\alpha}^{\phantom{\dagger}} & = &
-f_{\alpha}^{\dagger} \bar{f}_{\beta}^{\dagger}
\bar{f}_{\beta}^{\phantom{\dagger}} f_{\alpha}^{\phantom{\dagger}} \nonumber \\
& = & -|B_{s}|^2-|B_{t0}|^2-|B_{t\uparrow}|^2-|B_{t\downarrow}|^2 \ . \nonumber
\end{eqnarray}
Now the operators $B_s=(\bar{f}_{\uparrow}f_{\downarrow}-\bar{f}_{\downarrow}f_{\uparrow})/\sqrt{2}$, $B_{t\uparrow}=\bar{f}_{\uparrow}f_{\uparrow}$, $B_{t\downarrow}=\bar{f}_{\downarrow}f_{\downarrow}$ and $B_{t0}=(\bar{f}_{\uparrow}f_{\downarrow}+\bar{f}_{\downarrow}f_{\uparrow})/\sqrt{2}$ annihilate singlet and triplet particle-hole pairs. This interaction does not distinguish between different spins, so a scattering resonance appears simultaneously in the singlet and all triplet channels. The bound state resulting from this resonance is an exciton, and symmetry breaking at finite particle and hole density can be either a singlet exciton condensate or a ferromagnetic state, depending on the other couplings as well as higher-order terms in the action. In both cases, the present effective theory would favor ordering at zero wavevector, but the circumstances discussed in the following section could lead to antiferromagnetic and other kinds of ordering at finite wavevectors.
The behavior of $u_\textrm{e}$ does not fit this generic resonant-scattering picture. Only at the $F_{17}$ fixed point do we find a flow of $u_\textrm{e}$ reminiscent of resonant scattering. The absence of other similar fixed points with $u_\textrm{e}\neq 0$, and the fact that the relevant direction at $F_{17}$ is an almost even-amplitude linear combination of multiple couplings, indicate different physics: an ``assisted scattering resonance'' in the Cooper channel between a pair of fermions dynamically resonating between the conduction and valence bands. In fact, assuming $\theta=0$ in (\ref{ScalInt}), a sufficiently strong interaction of this type would give rise to an extended ``sign-changing'' $s$-wave superfluidity in which the pairing gap has opposite signs on the conduction and valence bands. Such an $s^\pm$ pairing has been proposed to occur in iron pnictides \cite{Mazin2009}. Other kinds of pairing, with different relative phases between the conduction and valence band pairing gaps, could be obtained for other values of $\theta$.
The run-away flows in the vicinity of these fixed points are also very important. They indicate the kinds of instabilities of interacting fermions in lattice potentials and the circumstances in which they can develop. This information has greater practical use than the detailed properties of the fixed points, because realistic systems can hardly be tuned very close to these fixed points (except for cold-atom systems, which are tunable to the $u_{\textrm{c}}=-1/4$ and/or $u_{\textrm{v}}=-1/4$ fixed points). In generic lattice systems with attractive interactions we find that the favored phases are featureless insulators and superconductors. A singlet superconductor is indicated by the flow of the interaction couplings $u_{\textrm{c}}$, $u_{\textrm{v}}$ and $u_{\textrm{m}}$ toward $-\infty$, although as emphasized in the previous section such run-away flows can also produce bosonic Mott insulators in certain cases. Instabilities in the particle-hole channel are discouraged in normal circumstances with attractive microscopic interactions. Even if the interband couplings end up having repulsive character, generic lattice and microscopic interaction potentials produce a relatively small $u_{\textrm{cv}}$ in comparison to $u_{\textrm{c}}$ and $u_{\textrm{v}}$, so that a typical system with attractive interactions in $2+\epsilon$ dimensions flows either to a charge-dynamics-influenced insulator state, or toward particle-particle instabilities. With repulsive interactions, however, the same kind of flows near the fixed points featuring $u_{\textrm{cv}}$ take the system either to spin-dynamics-influenced insulators, or toward particle-hole instabilities.
Finding the full structure of fixed points for any $\alpha>|\beta|$ requires allowing all mixing interband couplings to be finite. Preliminary numerical calculations indeed reveal the existence of additional fixed points with finite mixing interactions and $\alpha<1$. However, a systematic search for these fixed points is very difficult due to the large parameter space and the highly non-linear nature of the RG equations that allow $\alpha$ to flow. The details of these fixed points are not crucial for the present discussion and will not be pursued further.
Now we return to the possibility of band inversion which occurs for $\alpha < |\beta|$. First, we note that in normal microscopic circumstances $\alpha$ is close to unity because the intraband couplings $U_n^n$ are larger than the interband ones $U_n^m$, $n \neq m$. The RG flow further accentuates this situation as the flow of all $U_n^m$ is exponential. However, if the interband couplings are large enough in comparison to the intraband ones, we must cure the resulting effective band inversion by identifying the true low energy quasiparticles, which must live at some new crystal wavevectors. The appropriate RG equations need to deal with more than two fermion flavors. Attractive interactions in such strong interband channels would naturally lead to paired states which spontaneously break translational symmetry, while repulsive interactions would give rise to patterned exciton condensates.
\subsection{Transitions involving multiple fermion species, and universality classes}\label{secMF}
The lowest energy quasiparticles in band insulators are often concentrated around multiple symmetry-related wavevectors in the Brillouin zone. For example, the simple cubic periodic potential in three dimensions
\begin{equation}
V(\Bf{r}) = 2V \left\lbrack \cos\left( \frac{2\pi x}{a_L} \right) + \cos\left( \frac{2\pi y}{a_L} \right)
+ \cos\left( \frac{2\pi z}{a_L} \right) \right\rbrack \nonumber
\end{equation}
produces a band insulator with two fermions per site (for not too small $V$) whose lowest hole excitations live at $\Bf{k}_{\textrm{v}}=(\pi,\pi,\pi)/a_L$ in the valence band and lowest particle excitations live at $\Bf{k}_{\textrm{c}1}=(\pi,0,0)/a_L$ and two other symmetry-related wavevectors $\Bf{k}_{\textrm{c}2}$, $\Bf{k}_{\textrm{c}3}$ in the conduction band. An effective fermionic theory of this band insulator requires either one hole or three particle fields for generic pairing transitions of the type discussed in section \ref{secP}. The discussion in section \ref{secPH} has to be extended to one hole and three particle fields in this case.
An effective theory will generally include couplings among all of its fermion fields, and some of the couplings will have the same value by symmetries. As a prototype theory we can take the action (\ref{Seff}) allowing the labels $n,m\dots$ to identify any relevant fermion flavor. Like before, the RG analysis would reveal fixed points and run-away flows corresponding to same-flavor pairing and flavor-mixing instabilities. The latter kind could lead to supersolid phases in the particle-particle channel, or exciton condensates in the particle-hole channel, both bringing translational symmetry breaking and new universality classes. On the other hand, the same-flavor pairing instabilities are the most likely outcome of generic attractive interactions in lattice potentials due to the typically dominant couplings in the same-flavor channel.
All superfluid transitions in the unitarity limit which involve only the same-flavor pairing always belong to the same universality class. This universality class can be characterized by critical ratios between pressure, temperature, energy per particle, and chemical potential (relative to the band edge) at small but finite quasiparticle densities. A useful way of calculating the critical ratios involves applying a Hubbard-Stratonovich transformation to the model (\ref{ContModel1}) to decouple the short-range interaction, and then promoting the obtained two-channel model to an Sp($2N$) symmetry group by introducing $N$ copies of the spinful fermion fields which couple to the same Hubbard-Stratonovich field. Fluctuation corrections to the mean-field thermodynamic functions take the form of $1/N$ expansions, so at least in the limit of large $N$ one can obtain systematic perturbative expressions in the absence of a natural small parameter near unitarity. Taking the physical value $N=1$ and including only the lowest order correction (``Gaussian fluctuations'' of the order parameter) already produces very good estimates in the uniform system \cite{nikolic:033608, Veillette06a}.
Provided that the inter-flavor scattering vanishes at the fixed point, trivial adjustments are needed to accommodate multiple fermion flavors in the presence of a lattice, most notably in the ratios derived from extensive quantities, such as those containing pressure and energy density. For example, the critical pressure $P$ at the finite-temperature $T=T_c$ superfluid transition (in $d=3$)
\begin{equation}
\left. \frac{(P-P_0)/N}{(2m)^{3/2} T^{5/2} n_f} \right|_{T=T_c} = 0.13188 +
\frac{0.4046}{N} + \mathcal{O} (1/N^2) \nonumber
\end{equation}
acquires a factor of $n_f$, the total number of low-energy particle and hole flavors in the Brillouin zone, in the denominator on the left-hand side ($P_0$ is the zero-temperature degeneracy pressure of the band insulator). The value of $n_f$ depends on the bandgap $E_g$: in the limit $T_c \gg E_g$ both particle and hole flavors should be counted in $n_f$; otherwise only particles or only holes matter, depending on the chemical potential.
Another small adjustment of the uniform-system $1/N$ expansions of Refs.~\cite{nikolic:033608, Veillette06a} is needed in the critical ratios involving the chemical potential. We need to express the chemical potential $\mu$ relative to the nearest band edge. If the conduction band is nearest, a finite quasiparticle density at zero temperature is obtained when $\mu>0$, while a finite density requires $\mu<0$ if the valence band is nearest. The critical temperatures at both $\mu>0$ and $\mu<0$ are universal functions of $|\mu|$:
\begin{equation}
\left. \frac{|\mu|}{T} \right|_{T=T_c} = 1.50448 + \frac{2.785}{N} +
\mathcal{O} (1/N^2) \ . \nonumber
\end{equation}
This expression applies even in the limit $T_c \gg E_g$ when both particles and holes are important, because this limit can be interpreted as $|\mu| \gg E_g$. If the lattice depth is so small that the bandgap closes ($E_g=0$), we must take the larger of the two values of $|\mu|$ obtained by measuring the chemical potential with respect to the overlapping ``conduction'' and ``valence'' band edges. Additional phase transitions below $T_c$ are possible for $T_c \sim |\mu| \gg E_g$, involving the onset of pairing in different channels: particle, hole and interband, each characterized by its own order parameter (see previous section). Re-entrant behavior can be anticipated in this regime when only one fermion species is paired at $T=0$ and another one is separated from the chemical potential by a gap smaller than $T_c$. Then, the thermal population of the fermions across the gap can lead to pairing in additional channels at $0 < T'_c < T < T_c$.
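For orientation, the two quoted series can be evaluated at the physical value $N=1$, keeping only the Gaussian-fluctuation correction (a sketch; the truncation error at $N=1$ is not controlled):

```python
# Truncated 1/N expansions quoted above, evaluated at the physical N = 1.
def mu_over_T(N):
    # |mu|/T at T = T_c
    return 1.50448 + 2.785 / N

def pressure_ratio(N):
    # (P - P0)/N / ((2m)^(3/2) T^(5/2) n_f) at T = T_c, d = 3
    return 0.13188 + 0.4046 / N

assert abs(mu_over_T(1) - 4.28948) < 1e-9
assert abs(pressure_ratio(1) - 0.53648) < 1e-9
```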
Anisotropy associated with low-energy quasiparticles at symmetry-transforming wavevectors in the Brillouin zone is equally easily treated. For any quasiparticle flavor with dispersion
\begin{equation}
E(\Bf{k}) = E_0 + \sum_{i=1}^{d} \frac{k_i^2}{2m_i}
\end{equation}
we redefine momentum so that ${k'}_i^2/2m = k_i^2/2m_i$, where $m$ is a mass to be determined. The measure in path integrals acquires a factor of $\sqrt{(\prod_i m_i)/m^d}$ from this change of variables, which can be absorbed into the redefinition of matter fields. This also leads to a renormalization of all interaction couplings. The choice
\begin{equation}
m = \left(\prod_i m_i\right)^{\frac{1}{d}}
\end{equation}
converts the quasiparticle dispersion into an isotropic one without renormalizing any fields or couplings. It is therefore this geometric mean which should replace the mass in all $1/N$ expansions.
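A short sketch of this isotropization (with made-up masses), checking that the geometric-mean mass leaves the path-integral measure unchanged while rendering the dispersion isotropic:

```python
import math

masses = [0.5, 1.0, 4.0]                 # illustrative anisotropic masses m_i
d = len(masses)
m = math.prod(masses) ** (1.0 / d)       # geometric mean

# Jacobian of k_i -> k'_i = k_i * sqrt(m / m_i): equals 1 only for this m
jacobian = math.prod(math.sqrt(mi / m) for mi in masses)
assert abs(jacobian - 1.0) < 1e-12

# The rescaled dispersion is isotropic: sum_i k_i^2/(2 m_i) = sum_i k'_i^2/(2 m)
k = [0.3, -1.1, 0.7]
kp = [ki * math.sqrt(m / mi) for ki, mi in zip(k, masses)]
E_aniso = sum(ki**2 / (2 * mi) for ki, mi in zip(k, masses))
E_iso = sum(kpi**2 / (2 * m) for kpi in kp)
assert abs(E_aniso - E_iso) < 1e-12
```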
\section{Discussion and conclusions}\label{secDiscussion}
We considered a band insulator subjected to pairing in the unitarity regime as a model system. The simplest realization of such a system is found in trapped neutral ultra-cold gases of alkali atoms placed in an optical lattice. The density of atoms can be chosen to correspond to two atoms per lattice site in the central portion of the trap, while the strength of attractive interactions among them is routinely controlled via a Feshbach resonance. A superfluid transition from a thermally excited band insulator has already been studied experimentally in this kind of system in the vicinity of the BCS-BEC crossover \cite{Chin2006}.
The focus of our analysis was the characterization of the universal phase diagram featuring $T=0$ transitions between band insulators and superfluid states. In $d>2$ dimensions we identified a BCS limit in which this transition is pair-breaking, meaning that its universal properties are transparently captured by a BCS-like theory. A special limiting case of the pair-breaking transition is found at unitarity, where all interaction effects become independent of microscopic scales, leading to the universal dependence of critical temperature and other thermodynamic functions on the particle density in the superfluid state.
The BEC limit, found at any interaction strength in $d=2$ or at sufficiently strong interactions in $d>2$, brings a different universality class to superfluid transitions. Fermionic excitations belong to high energies, so the effective theory capturing the transition has only bosonic fields. The transition occurs between the superfluid and a bosonic Mott insulator. The universality class is characterized either by the dynamical exponent $z=2$ (generic bosonic mean-field transitions driven by the chemical potential), or $z=1$ (XY transitions driven at fixed density of two fermions per lattice site).
The BCS and BEC limits considered here are relative to a particular band insulator with a fixed lattice potential and particle density at zero temperature. While the particle density is finite in the ground-state, the unitarity regime between these BCS and BEC limits is found at zero quasiparticle density in the effective low-energy theory describing the band insulator. The full microscopic model includes short-range interactions and multiple fermion bands, with the chemical potential residing in a bandgap. Integrating out high-energy fermions leaves behind the effective theory featuring at most two bands immediately adjacent to the chemical potential (the conduction and valence bands). The remaining low energy fermions experience renormalized interactions, and may exist in multiple flavors as quasiparticles and quasiholes concentrated around different symmetry-related wavevectors in the first Brillouin zone (individually having anisotropic dynamics). All of this complexity reduces to a few relevant interaction couplings in the vicinity of renormalization group (RG) fixed points that signify universal behavior, the most naturally occurring ones corresponding to unitarity in the same universality class as if the system were microscopically uniform.
The physical meaning of these fixed points, revealed by RG, is the resonant scattering of quasiparticles. Multiple flavors of quasiparticles give rise to multiple possibilities for resonant scattering. One interesting possibility uncovered here is the resonant scattering between particles and holes in the presence of repulsive interactions, with the unitarity limit in the particle-hole channel separating the regimes without and with exciton bound states (excitonic ``BCS'' and ``BEC'' regimes, respectively). Other possibilities not elaborated here also exist in generic circumstances with multiple fermion flavors, leading to translational symmetry breaking in ordered states. However, most of these universal regimes may be inaccessible in realistic systems because they involve tuning either the details of lattice potentials, or short-range spatial features (at the lattice spacing scales) of the microscopic interaction potential.
The notable exceptions are the unitarity regimes in the uniform particle-particle and hole-hole channels, which can be reached in cold atom systems using Feshbach resonances. The simplest to obtain is the unitarity at a transition driven by the chemical potential, which is naturally found in a trapped gas of cold atoms at an interface between the superfluid and insulating atom clouds. In this case the RG identifies only one relevant interaction parameter, which is tuned by the Feshbach resonance. The transitions driven at fixed density by changing the interaction strength or lattice depth are harder to push all the way to unitarity because there are two RG relevant operators (particle-particle and hole-hole scattering lengths) which need to be tuned to their fixed point values. Nevertheless, manifestations of this kind of unitarity can be observed at finite temperatures if the critical temperature is larger than the bandgap.
The RG also provides an indication of the macroscopic properties of states away from the fixed points. If the strength of attractive interactions $U$ is smaller by magnitude than its fixed-point value $|U^*| \propto \epsilon$ in $d+\epsilon$ dimensions, then a gapped fermion system is macroscopically a band insulator. Otherwise, the coupling $U$ flows toward $-\infty$ under RG at \emph{finite length scales}, implying the formation of Cooper pairs at short length scales before the onset of superfluidity at large length scales. It is in this manner that the fermionic RG predicts the existence of bosonic Mott insulators, but a bosonic effective theory is then required to access the superfluid transition at macroscopic scales.
Perhaps the main significance of the presented RG analysis is that the most generic weak-coupling fixed points in fermionic theories, which control the universal properties of insulating and superfluid phases, can be interpreted as resonant scattering. There is a distinction between appropriate ``BCS'' and ``BEC'' regimes in different kinds of pairing channels, in terms of the existence of appropriate two-quasiparticle bound states. The run-away flows of interaction couplings in the ``BEC'' regimes signify the emergence of correlated insulating states separated from ordered phases by transitions in bosonic universality classes. In some circumstances these ``BEC-limit'' insulators may be thermodynamic phases, such as a valence-bond crystal or a spin liquid adjacent to an antiferromagnet (condensate of excitonic ``molecules''), or a charge-density wave adjacent to a superconductor.
Therefore, the presented model and analysis provide a direct insight into the possibilities for the development of strong pairing correlations in fermionic lattice systems. The emergence of boson-dominated superfluid transitions among fermions and the corresponding universality classes can be traced back to the well known physics of BCS-BEC crossovers. Even if interactions are not strong enough to bring the system close to its unitarity limit in empty space, the presence of a lattice frustrates the motion of particles and promotes interaction effects, effectively pushing the system toward its lattice unitarity \cite{Fedichev2004, Koetsier2006}. Furthermore, strictly speaking, there is no BCS limit in two dimensions: two quasiparticles injected into the conduction band will form a bound state no matter how weak the attractive interactions are. Of course, the size of this ``vacuum'' bound state might be much larger than the spacing between particles, but this does not preclude the bosonic universality of the superfluid transition.
A potentially important implication is that a conceptually similar situation arises in cuprate high-temperature superconductors. Cuprates are quasi-two-dimensional systems in which the underdoped normal state (pseudogap) exhibits gapped fermionic quasiparticles, albeit with a specific $d$-wave pairing symmetry and a gap of completely different origin than in this paper. A number of unconventional properties of cuprates can be qualitatively understood as being related to a fluctuation-driven transition.
\acknowledgments
I am very grateful to Zlatko Te\v{s}anovi\'{c} for generously sharing his insight, which motivated me to carry out this RG analysis. I also thank Erhai Zhao and Peter Armitage for very helpful discussions. A part of this work was done at the Aspen Center for Physics, and Institute for Quantum Matter at Johns Hopkins University. The support for this research was provided by the Office of Naval Research (grant N00014-09-1-1025A), and the National Institute of Standards and Technology (grant 70NANB7H6138, Am 001).
\section{Introduction}
The strong Markov property for stochastic differential equations (SDEs) is one of the most fundamental results in the theory of classical stochastic processes. It
states that for any given optional time $\tau$ we have
\begin{equation}\label{strong markov for classical sde}
{{E}}_P[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)|\mathcal{F}_{{\tau+}}]
={{E}}_P[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X_\tau^x}
\end{equation}
for SDEs $(X^x_t)_{t\geq 0}$ with initial value $x$. Here ${{E}}_P$ and ${{E}}_P[\cdot|\mathcal{F}_{{\tau+}}]$ stand for the expectation and the conditional expectation, respectively, with respect to a probability measure $P$. This property was obtained by K. It\^{o} in his pioneering work \cite{It0}, and it has since been widely applied to
stochastic control, mathematical finance and probabilistic methods for
partial differential equations (PDEs); see, e.g., \cite{BL,Fr,Ok}.
Recently, motivated by probabilistic interpretations for fully nonlinear PDEs and financial problems with model uncertainty, Peng \cite{P3,P4,P7} systematically introduced the notion of nonlinear $G$-expectation $\hat{\mathbb{E}}[\cdot]$ by stochastic control and
PDE methods. Under the $G$-expectation framework, a new kind of Brownian motion, called $G$-Brownian motion, was constructed. The corresponding stochastic calculus
of It\^{o}'s type was also established. Furthermore, by the contracting mapping theorem, Peng obtained the existence and
uniqueness of the solution of $G$-SDEs:
\begin{equation}%
\begin{cases}
dX_{t}^{x}=b(X_{t}^{x})dt+\sum_{i,j=1}^{d}h_{ij}(X_{t}^{x})d\langle
B^{i},B^{j}\rangle_{t}+\sum_{j=1}^{d}\sigma_{j}(X_{t}^{x})dB_{t}%
^{j},\ \ \ \ t\in \lbrack0,T],\\
X_{0}^{x}=x,
\end{cases}
\label{GSDE in intro}%
\end{equation}
where $B=(B^{1},\ldots,B^{d})$ is $G$-Brownian motion and $\langle B^{i},B^{j}\rangle$ is its cross-variation process, which, unlike in the classical case, is not deterministic.
A very interesting problem is whether, for $G$-SDEs, the following
generalized strong Markov property is true:
\begin{equation}
\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_{1}}^{x},\cdots,X_{\tau+t_{m}}%
^{x})]=\hat{\mathbb{E}}[\varphi(X_{t_{1}}^{y},\cdots,X_{t_{m}}^{y}%
)]_{y=X_{\tau}^{x}}.\label{Strongmar}%
\end{equation}
In this paper, we first construct the conditional $G$-expectation $\hat{\mathbb{E}}_{\tau+}[\cdot]$ for any given optional time $\tau$ by extending the definition of conditional $G$-expectation $\hat{\mathbb{E}}_t[\cdot]$ to optional times. The main tools in this construction are a universal continuity estimate for
$\hat{\mathbb{E}}_{t}[\cdot]$ (see Lemma \ref{Et continuity lemma}) and a new
kind of consistency property (see Proposition \ref{main proposition}). We also show that $\hat{\mathbb{E}}_{\tau+}[\cdot]$ can preserve most useful properties of classical conditional expectations except the linearity.
Based on the conditional expectation $\hat{\mathbb{E}}_{\tau+}[\cdot]$, we then further obtain the strong Markov property (\ref{Strongmar}) for
$G$-SDEs by adapting the standard discretization method. In contrast to the linear case, the main difficulty is that in the nonlinear expectation context the dominated convergence theorem does not hold in general. We tackle this problem by using
Kolmogorov's criterion for tightness and the properties of $\hat{\mathbb{E}%
}_{\tau+}[\cdot]$.
In particular, for $G$-Brownian motion $B$, we obtain
that the reflection principle for $B$ holds and $(B_{\tau+t}-B_{\tau
})_{t\geq0}$ is still a $G$-Brownian motion. Finally, with the help of the strong Markov
property, the level set of $G$-Brownian motion is also investigated.
We note that the problem of constructing $\hat{\mathbb{E}}_{\tau+}[\cdot]$ was first considered in \cite{NH}, where $\hat{\mathbb{E}}_{\tau+}[\cdot]$ is defined for all upper semianalytic (more general than Borel-measurable) functions via the theory of analytic sets. However, the corresponding conditional expectation is then also only upper semianalytic, and it remains unknown when the usual Borel measurability can be attained. In this paper, by a completely different approach, our construction focuses on a large class of Borel functions so as to obtain more regularity properties for $\hat{\mathbb{E}}_{\tau+}[\cdot]$, among which is its measurability with respect to $\mathcal{F}_{\tau+}$. Moreover, some of these properties are essential for the derivation of the strong Markov property for $G$-SDEs.
This paper is organized as follows. In Section 2, we recall some basic notions
of $G$-expectation, $G$-Brownian motion and $G$-SDEs. Section 3 is devoted to
the construction of the conditional $G$-expectation $\hat{\mathbb{E}}_{\tau+}[\cdot]$
and the investigation of its properties. Then, in Section 4, we study the strong
Markov property for $G$-SDEs. Finally, in Section 5, we use the strong Markov
property to prove that the level set of $G$-Brownian motion has no isolated point.
\section{Preliminaries}
In this section, we review some basic notions and results of $G$-expectation theory. More relevant details can be found in \cite{GJ,Linq,Liny,LW,P3,P4,P7,P9}.
\subsection{$G$-expectation space}
Let $\Omega$ be a given nonempty set and $\mathcal{H}$ be a linear space of
real-valued functions on $\Omega$ such that if $X_{1}$,$\dots$,$X_{d}%
\in \mathcal{H}$, then $\varphi(X_{1},X_{2},\dots,X_{d})\in \mathcal{H}$ for
each $\varphi \in C_{b.Lip}(\mathbb{R}^{d})$, where $C_{b.Lip}(\mathbb{R}^{d})$ is the space of bounded, Lipschitz functions on $\mathbb{R}^{d}$.
$\mathcal{H}$ is considered as the space of random variables.
\begin{definition}
A sublinear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a functional
$\mathbb{\hat{E}}:\mathcal{H}\rightarrow \mathbb{R}$ satisfying the following
properties: for each $X,Y\in \mathcal{H}$,
\begin{description}
\item[{\rm (i)}] {Monotonicity:}\quad$\mathbb{\hat{E}}[X]\geq \mathbb{\hat{E}%
}[Y]\ \ \text{if}\ X\geq Y$;
\item[{\rm (ii)}] {Constant preserving:}\quad$\mathbb{\hat{E}}%
[c]=c\ \ \ \text{for}\ c\in \mathbb{R}$;
\item[{\rm (iii)}] {Sub-additivity:}\quad$\mathbb{\hat{E}}[X+Y]\leq
\mathbb{\hat{E}}[X]+\mathbb{\hat{E}}[Y]$;
\item[{\rm (iv)}] {Positive homogeneity:}\quad$\mathbb{\hat{E}}[\lambda
X]=\lambda \mathbb{\hat{E}}[X]\ \ \ \text{for}\ \lambda \geq0$.
\end{description}
The triple $(\Omega,\mathcal{H},\mathbb{\hat{E}})$ is called a sublinear
expectation space.
\end{definition}
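To make the definition concrete, here is a minimal numerical sketch of mine (not part of the paper): on a finite sample space, the upper expectation over any finite family of probability vectors defines a sublinear expectation, and properties (i)--(iv) can be verified directly.

```python
def sublinear_expectation(X, measures):
    """Upper expectation max_P E_P[X] over a finite family of probability
    vectors; X is the list of values of the random variable on a finite
    sample space."""
    return max(sum(p * x for p, x in zip(P, X)) for P in measures)

# Two scenarios (probability vectors) on a three-point sample space.
P1 = [0.2, 0.3, 0.5]
P2 = [0.5, 0.3, 0.2]
family = [P1, P2]

X = [1.0, -2.0, 3.0]
Y = [0.5, 0.0, 1.0]

EX = sublinear_expectation(X, family)
EY = sublinear_expectation(Y, family)
# (iii) sub-additivity: the maximum of a sum is at most the sum of maxima
EXY = sublinear_expectation([x + y for x, y in zip(X, Y)], family)
# (iv) positive homogeneity with lambda = 2
E2X = sublinear_expectation([2 * x for x in X], family)
```

Monotonicity (i) and constant preserving (ii) follow just as directly; sub-additivity is what fails to hold with equality, which is exactly what distinguishes a sublinear expectation from a linear one.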
\begin{definition}
Two $d$-dimensional random vectors $X_{1}$ and $X_{2}$ defined respectively on
sublinear expectation spaces $(\Omega_{1},\mathcal{H}_{1},\mathbb{\hat{E}}%
_{1})$ and $(\Omega_{2},\mathcal{H}_{2},\mathbb{\hat{E}}_{2})$ are
called identically distributed, denoted by $X_{1}\overset{d}{=}X_{2}$, if%
\[
\mathbb{\hat{E}}_{1}[\varphi(X_{1})]=\mathbb{\hat{E}}_{2}[\varphi
(X_{2})], \ \ \ \ \text{for each} \ \varphi \in C_{b.Lip}(\mathbb{R}^{d}).
\]
\end{definition}
\begin{definition}
On the sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, an $n$-dimensional random vector $Y$ is said to be independent from a $d$-dimensional random vector $X$, denoted by $Y\bot X$, if
$$\hat{\mathbb{E}}[\varphi(X,Y)]=\hat{\mathbb{E}}[\hat{\mathbb{E}}[\varphi(x,Y)]_{x=X}], \ \ \ \ \text{for each}\ \varphi\in C_{b.Lip}(\mathbb{R}^{d+n}).$$
\end{definition}
A $d$-dimensional random vector $\bar{X}$ is said to be an independent copy of $X$ if $\bar{X}\overset{d}{=} X$ and $\bar{X}\bot X$.
\begin{definition}\textbf{($G$-normal distribution)}
A $d$-dimensional random vector $X$ defined on $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called $G$-normally distributed if for any $a,b\geq0$,
$$aX+b\bar{X}\overset{d}{=}\sqrt{a^2+b^2}X,$$
where $\bar{X}$ is an independent copy of $X$. Here the letter $G$ denotes the function $G(A):=\frac12 \hat{\mathbb{E}}[\langle AX,X\rangle]$ for $A\in \mathbb{S}(d)$, where $\mathbb{S}(d)$ denotes the
space of all $d \times d$ symmetric matrices.
\end{definition}
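For orientation (an illustration of mine, not taken from the text): in dimension $d=1$, writing $\overline{\sigma}^2:=2G(1)$ and $\underline{\sigma}^2:=-2G(-1)$, the $G$-normal expectation of a convex (resp. concave) test function coincides with a classical normal expectation with the maximal (resp. minimal) variance, so for such $\varphi$ one has $\hat{\mathbb{E}}[\varphi(X)]=\max_{\underline{\sigma}\leq\sigma\leq\overline{\sigma}}E[\varphi(\sigma Z)]$ with $Z\sim N(0,1)$. A numerical sketch using simple quadrature:

```python
import math

def normal_expectation(phi, sigma, z_max=8.0, n=4001):
    # E[phi(sigma * Z)], Z ~ N(0,1), by trapezoidal quadrature on [-z_max, z_max]
    h = 2 * z_max / (n - 1)
    total = 0.0
    for i in range(n):
        z = -z_max + i * h
        w = h * (0.5 if i in (0, n - 1) else 1.0)
        total += w * phi(sigma * z) * math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return total

def g_expectation_1d(phi, sigma_lo, sigma_hi, grid=50):
    # sup over sigma in [sigma_lo, sigma_hi]; a valid formula for the
    # one-dimensional G-normal expectation when phi is convex or concave
    sigmas = [sigma_lo + (sigma_hi - sigma_lo) * k / grid for k in range(grid + 1)]
    return max(normal_expectation(phi, s) for s in sigmas)

# With sigma ranging over [0.5, 1.0]:
upper_variance = g_expectation_1d(lambda x: x * x, 0.5, 1.0)    # approximately sigma_hi^2 = 1.0
lower_variance = -g_expectation_1d(lambda x: -x * x, 0.5, 1.0)  # approximately sigma_lo^2 = 0.25
```

Note the asymmetry $\hat{\mathbb{E}}[X^2]=\overline{\sigma}^2$ while $-\hat{\mathbb{E}}[-X^2]=\underline{\sigma}^2$: the hallmark of variance uncertainty.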
In the rest of this paper, we denote by $\Omega:=C([0,\infty); \mathbb{R}^d)$ the space of all $\mathbb{R}^d$-valued continuous paths $(\omega_t)_{t\geq0}$, equipped with the distance
$$\rho_d(\omega^1,\omega^2):=\sum_{i=1}^\infty\frac1{2^i}(||\omega^1-\omega^2||_{C^d[0,i]}\wedge 1),$$
where $||\omega^1-\omega^2||_{C^d[0,T]}:=\max_{t\in[0,T]}|\omega_t^1-\omega_t^2|$ for $T>0$.
Given any $T>0$, we also define $\Omega_T:=\{(\omega_{t\wedge T})_{t\geq 0}:\omega\in\Omega\}$.
Let $B_t(\omega):=\omega_t$ for $\omega\in \Omega$, $t\geq 0$ be the canonical process. We set
$$L_{ip}(\Omega_T):=\{\varphi(B_{t_1},B_{t_2}-B_{t_1},\cdots,B_{t_n}-B_{t_{n-1}}):n\in\mathbb{N},\ 0\leq t_1<t_2<\cdots<t_n\leq T,\ \varphi\in C_{b.Lip}(\mathbb{R}^{d\times n})\}$$
as well as
\begin{equation}\label{9237257894334}
L_{ip}(\Omega):=\bigcup_{m=1}^ \infty L_{ip}(\Omega_m).
\end{equation}
Let $G:\mathbb{S}(d)\rightarrow\mathbb{R}$ be a given monotonic and sublinear function. The $G$-expectation on $L_{ip}(\Omega)$ is defined by
$$
\hat{\mathbb{E}}[X]:=\widetilde{\mathbb{E}}[\varphi(\sqrt{t_1}\xi_1,\sqrt{t_2-t_1}\xi_2,\cdots,\sqrt{t_n-t_{n-1}}\xi_n)],
$$
for all $X=\varphi(B_{t_1}, B_{t_2}-B_{t_1},\cdots,B_{t_n}-B_{t_{n-1}}), 0\leq t_1<\cdots<t_n<\infty,$
where $\{\xi_i\}_{i=1}^n$ are $d$-dimensional identically distributed random vectors on a sublinear expectation space $(\widetilde{\Omega},\widetilde{\mathcal{H}},\widetilde{\mathbb{E}})$ such that $\xi_i$ is $G$-normally distributed and $\xi_{i+1}$ is independent from $(\xi_1,\cdots,\xi_i)$ for $i=1,\cdots,n-1.$ Then under $\hat{\mathbb{E}}$, the canonical process $B_t=(B_t^1,\cdots,B_t^d)$ is a $d$-dimensional $G$-Brownian motion in the sense that:
\begin{itemize}
\item [{\rm(i)}] $B_0=0$;
\item [{\rm(ii)}] For each $t,s\geq 0$, the increment $B_{t+s}-B_t$ is independent from $(B_{t_1},\cdots,B_{t_n})$ for each $n\in \mathbb{N}$ and $0\leq t_1\leq \cdots\leq t_n\leq t$;
\item [{\rm(iii)}] $B_{t+s}-B_t\overset{d}{=}\sqrt{s}\xi$ for $t,s\geq 0$, where $\xi$ is $G$-normally distributed.
\end{itemize}
\begin{remark}
\upshape{
{\rm(i)} It is easy to check that $G$-Brownian motion is symmetric, i.e., $(-B_t)_{t\geq 0}$ is also a $G$-Brownian motion.
{\rm(ii)} If, in particular, $G(A)=\frac{1}{2}\text{tr}(A)$, then the $G$-expectation is a linear expectation corresponding to the Wiener measure $P$, i.e., $\hat{\mathbb{E}}=E_{P}$.}
\end{remark}
The conditional $G$-expectation for $X=\varphi(B_{t_1}, B_{t_2}-B_{t_1},\cdots,B_{t_n}-B_{t_{n-1}})$ at $t=t_j$, $1\leq j\leq n$ is defined by
$$
\hat{\mathbb{E}}_{t_j}[X]:=\phi(B_{t_1}, B_{t_2}-B_{t_1},\cdots,B_{t_j}-B_{t_{j-1}}),
$$
where $\phi(x_1, \cdots,x_j)=\hat{\mathbb{E}}[\varphi(x_1, \cdots,x_j, B_{t_{j+1}}-B_{t_j},\cdots,B_{t_n}-B_{t_{n-1}})]$.
For each $p\geq1$, we denote by $L_G^p(\Omega_t)$ ($L_G^p(\Omega)$ resp.) the completion of $L_{ip}(\Omega_t)$ ($L_{ip}(\Omega)$ resp.) under the norm $||X||_p:=(\hat{\mathbb{E}}[|X|^p])^{1/p}$.
The conditional $G$-expectation $\hat{\mathbb{E}}_t[\cdot]$ can be extended continuously to $L_G^1(\Omega)$ and satisfies the following proposition.
\begin{proposition}\label{condition expectation property}
For $X,Y\in L_G^1(\Omega)$, $t,s\geq 0$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_t[X]\leq \hat{\mathbb{E}}_t[Y] \ \text{for} \ X\leq Y$;
\item [{\rm (ii)}] $\hat{\mathbb{E}}_t[\eta]=\eta \ \text{for} \ \eta\in L_G^1(\Omega_t)$;
\item [{\rm (iii)}] $\hat{\mathbb{E}}_t[X+Y]\leq \hat{\mathbb{E}}_t[X]+\hat{\mathbb{E}}_t[Y]$;
\item [{\rm (iv)}] If $\eta\in L_G^1(\Omega_t)$ is bounded, then $ \hat{\mathbb{E}}_t[\eta X]=\eta^+\hat{\mathbb{E}}_t[X]+\eta^-\hat{\mathbb{E}}_t[-X]$;
\item [{\rm (v)}] $ \hat{\mathbb{E}}_t[\varphi(\eta,X)]=\hat{\mathbb{E}}_t[\varphi(p,X)]_{p=\eta}$, for each $\ \eta\in L_G^1(\Omega_t;\mathbb{R}^d)$, $X\in L_G^1(\Omega;\mathbb{R}^n)$ and $\varphi\in C_{b.Lip}(\mathbb{R}^{d+n})$;
\item [{\rm (vi)}] $\hat{\mathbb{E}}_s[\hat{\mathbb{E}}_t[X]]=\hat{\mathbb{E}}_{t\wedge s}[X]$.
\end{description}
\end{proposition}
We define
$$\mathcal{F}_t:=\sigma(B_s:s\leq t) \ \ \ \ \text{and} \ \ \ \ \mathcal{F}:=\bigvee_{t\geq 0}\mathcal{F}_t$$
as well as
$$
L^0(\mathcal{F}_t):=\{X:X\ \text{is} \ \mathcal{F}_t\text{-measurable}\} \ \ \ \ \text{and} \ \ \ \ L^0(\mathcal{F}):=\{X:X\ \text{is} \ \mathcal{F}\text{-measurable}\}.
$$
The following is the representation theorem.
\begin{theorem}(\cite{DHP,HP})\label{DHP representation}
There exists a family $\mathcal{P}$ of weakly compact probability measures on $(\Omega,\mathcal{F})$ such that
$$\hat{\mathbb{E}}[X]=\sup_{P\in\mathcal{P}}E_P[X], \qquad \text{for each}\ X\in L_G^1(\Omega).$$
$\mathcal{P}$ is called a set that represents $\hat{\mathbb{E}}$.
\end{theorem}
\begin{remark}
\upshape{
Under each $P\in\mathcal{P}$, the $G$-Brownian motion $B$ is a martingale.}
\end{remark}
Given $\mathcal{P}$ that represents $\hat{\mathbb{E}}$, we define the capacity
$$c(A):=\sup_{P\in\mathcal{P}} P(A), \ \ \ \ \text{for each}\ A\in \mathcal{F}.$$
A set $A\in\mathcal{B}(\Omega)$ is said to be \textit{polar} if $c(A)=0$. A property is said to \textit{hold ``quasi-surely''} (\textit{q.s.}) if it holds outside a polar set.
In the following, we do not distinguish two random variables $X$ and $Y$ if $X=Y$ q.s.
\begin{lemma}\label{upward mct for capacity}
Let $\{A_n\}_{n=1}^\infty$ be a sequence in $\mathcal{B}(\Omega)$ such that $A_n\uparrow A$. Then $c(A_n)\uparrow c(A)$.
\end{lemma}
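As a finite toy model of this continuity property (my own illustration; the family here is finite rather than the weakly compact family of Theorem \ref{DHP representation}): the upper probability over finitely many measures is continuous from below, yet fails to be additive.

```python
def capacity(A, measures):
    # c(A) = max_P P(A) for a finite family of probability vectors on {0,...,N-1}
    return max(sum(P[i] for i in A) for P in measures)

# Two measures on the four-point space {0, 1, 2, 3}.
P1 = [0.1, 0.2, 0.3, 0.4]
P2 = [0.4, 0.3, 0.2, 0.1]
family = [P1, P2]

# An increasing chain of sets A_n exhausting the whole space.
chain = [{0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3}]
caps = [capacity(An, family) for An in chain]  # nondecreasing, tending to c(Omega)
```

Here each $P(A_n)$ increases to $P(A)$, and the maximum over the (finite) family inherits this monotone convergence; note also that $c(\{0\})+c(\{1,2,3\})>c(\Omega)$, so $c$ is only sub-additive.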
For each $p\geq1$, we set
$$\mathbb{L}^p(\Omega):=\{X\in {L}^0(\mathcal{F}): \sup_{P\in\mathcal{P}}E_P[|X|^p]<\infty\}$$
and the larger space
$$
\mathcal{L}(\Omega):=\{X\in L^0(\mathcal{F}):E_P[X]\ \text{exists for each }\ P\in\mathcal{P}\}.
$$
We extend the $G$-expectation to $\mathcal{L}(\Omega)$, still denoted by $\hat{\mathbb{E}}$, by setting
$$\hat{\mathbb{E}}[X]:=\sup_{P\in\mathcal{P}}E_P[X], \ \ \ \ \text{for}\ X\in \mathcal{L}(\Omega).$$
From \cite{DHP}, we know that $\mathbb{L}^p(\Omega)$ is a Banach space under the norm $||\cdot||_p:=(\hat{\mathbb{E}}[|\cdot|^p])^{1/p}$ and $L_G^p(\Omega)\subset \mathbb{L}^p(\Omega)$.
For $\{X_n\}_{n=1}^\infty\subset \mathbb{L}^p(\Omega)$, $X\in \mathbb{L}^p(\Omega)$, we say that $X_n\rightarrow X$ {in} $\mathbb{L}^p$, denoted by $X=\mathbb{L}^p\text{-}\lim_{n\rightarrow\infty}X_n,$ if $\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[|X_n-X|^p]=0$.
\begin{lemma}\label{upward mct for rv}
Let $X_n\in \mathcal{L}(\Omega)$ be a sequence such that $X_n\uparrow X\ q.s.$ and $-\hat{\mathbb{E}}[-X_1]>-\infty$. Then $$\hat{\mathbb{E}}[X_n]\uparrow \hat{\mathbb{E}}[X].$$
\end{lemma}
For each $T>0$ and $p\geq1$, we define
\begin{align*}
M_G^{p,0}(0,T):=& \{\eta=\sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j,t_{j+1})}(t): N\in\mathbb{N},\ 0\leq t_0\leq t_1\leq \cdots\leq t_N\leq T, \\
& \ \xi_j\in L_{G}^p(\Omega_{t_j}),\ j=0,1,\cdots,N-1\}.
\end{align*}
For each $\eta\in M_G^{p,0}(0,T)$, set the norm $\|\eta\|_{M_G^{p}}:=(\hat
{\mathbb{E}}[\int_{0}^{T}|\eta_{t}|^{p}dt])^{\frac{1}{p}} $ and denote by
$M_G^{p}(0,T)$ the completion of $M_G^{p,0}(0,T)$ under $\|\cdot
\|_{M_G^{p}} $.
According to \cite{LP,P7}, we can define $\int_0^t\eta_sdB_s^i$, $\int_0^t\xi_sd\langle B^i,B^j\rangle_s$ and $\int_0^t\xi_sds$ for $\eta\in M_G^2(0,T)$ and $\xi\in M_G^1(0,T)$, where $\langle B^i,B^j\rangle$ denotes the cross-variation process, for $1\leq i,j\leq d$.
\subsection{Stochastic differential equations driven by $G$-Brownian motion}
We consider the following $G$-SDEs: for each given $0\leq t\leq T<\infty$,
\begin{equation}\label{SDE}
\begin{cases}
dX^{t,\xi}_{s}=b(X_{s}^{t,\xi})ds+\sum_{i,j=1}^dh_{ij}(X_{s}^{t,\xi})d\langle B^i,B^j\rangle_s+\sum_{j=1}^{d}\sigma_j(X_{s}^{t,\xi})dB^j_s,\ \ \ \ s\in [t,T],\\
X_{t}^{t,\xi}=\xi,
\end{cases}
\end{equation}
where $\xi\in L_G^p(\Omega_t;\mathbb{R}^n)$, $p\geq 2$ and
$b,h_{ij},\sigma_j:\mathbb{R}^n\rightarrow \mathbb{R}^n$ are given deterministic functions satisfying the following assumptions:
\begin{description}
\item [(H1)]Symmetry: $h_{ij}=h_{ji}, 1\leq i,j\leq d$;
\item [(H2)]Lipschitz continuity: there exists a constant $L$ such that for each $x,x'\in\mathbb{R}^n$,
$$|b(x)-b(x')|+\sum_{i,j=1}^d|h_{ij}(x)-h_{ij}(x')|+\sum_{j=1}^d |\sigma_j(x)-\sigma_j(x')|\leq L|x-x'|.$$
\end{description}
For simplicity, $X_{s}^{0,x}$ will be denoted by $X_{s}^{x}$ for $x\in\mathbb{R}^n$. We have the following estimates for $G$-SDE (\ref{SDE}) which can be found in \cite{P7,G1}.
\begin{lemma}\label{GSDE}
Assume that the conditions $(H1)$ and $(H2)$ hold. Then $G$-SDE (\ref{SDE}) has a unique solution $(X_{s}^{t,\xi})_{s\in[t,T]}\in M^p_G(t,T;\mathbb{R}^n)$. Moreover, there exists a constant $C$ depending on $p,T,L,G$ such that for any $x,y\in \mathbb{R}^n,\ t,t'\in[0,T]$,
\begin{equation}\label{SDE sup control}
\hat{\mathbb{E}}[\sup_{s\in [0,t]}|X^{x}_s|^p]\leq C(1+|x|^p),
\end{equation}
\begin{equation}
\label{SDE3}\hat{\mathbb{E}}[|X_t^x-X^y_{t'}|^p]\leq C(|x-y|^p+(1+|x|^p)|t-t'|^{p/2}).
\end{equation}
\end{lemma}
Noting that $X^x_s=X^{t,X^x_t}_s$ for $s\geq t$, we obtain from Theorem 4.4 in \cite{HJPS1} the following lemma.
\begin{lemma}\label{HJPS2 lemma}For each given $\varphi\in C_{b.Lip}(\mathbb{R}^{n})$ and $0\leq t\leq T$, we have
$$\hat{\mathbb{E}}_t[\varphi(X_{t+s}^{x})]=\hat{\mathbb{E}}[\varphi(X_{t+s}^{t,y})]_{y=X_t^x},\ \ \ \ \text{for}\ s\in [0,T-t].$$
\end{lemma}
\section{Construction of the conditional $G$-expectation $\hat{\mathbb{E}}_{\tau+}$}
In this section, we provide a construction of the conditional $G$-expectation $\hat{\mathbb{E}}_{\tau+}$ for any optional time $\tau$ and study its properties. This notion is needed for the derivation of the strong Markov property for $G$-SDEs in Section 4. We shall also give an application to the reflection principle for $G$-Brownian motion at the end of this section.
\subsection{The construction of conditional $G$-expectation $\hat{\mathbb{E}}_{\tau+}$ on ${L}_{G}^{1,\tau+}(\Omega)$}
The mapping $\tau:\Omega\rightarrow[0,\infty)$ is called a stopping time if $\{\tau\leq t\}\in \mathcal{F}_t$ for each $t\geq 0$ and an optional time if $\{\tau< t\}\in \mathcal{F}_t$ for each $t\geq 0$. A stopping time is an optional time but the converse may not hold.
For each optional time $\tau$, we define the $\sigma$-field
$$
\mathcal{F}_{\tau+}:=\{A\in\mathcal{F}:A\cap \{\tau<t\}\in\mathcal{F}_t,\ \forall t\geq0\}=\{A\in\mathcal{F}:A\cap \{\tau\leq t\}\in\mathcal{F}_{t+},\ \forall t\geq0\},
$$
where $\mathcal{F}_{t+}=\cap_{s>t}\mathcal{F}_s.$
If $\tau$ is a stopping time, we also define
$$
\mathcal{F}_{\tau}:=\{A\in\mathcal{F}:A\cap \{\tau\leq t\}\in\mathcal{F}_{t},\ \forall t\geq0\}.
$$
Let $\tau$ be an optional time. For each $p\geq1$, we set
$${L}_{G}^{0,p,\tau+}(\Omega)=\{X=\sum_{i=1}^n\xi_iI_{A_i}: \ n\in\mathbb{N},\ \{A_i\}_{i=1}^n\text{ is an}\ \mathcal{F}_{\tau+}\text{-partition of}\ \Omega,\ \xi_i\in L_G^p(\Omega),\ i=1,\cdots,n\}$$
and denote by ${L}_{G}^{p,\tau+}(\Omega)$ the completion of ${L}_{G}^{0,p,\tau+}(\Omega)$ under the norm $||\cdot||_p$.
In this subsection, we want to define the conditional $G$-expectation $$
\hat{\mathbb{E}}_{\tau+}:{L}_{G}^{1,\tau+}(\Omega)\rightarrow{L}_{G}^{1,\tau+}(\Omega)\cap L^0(\mathcal{F}_{\tau+}).$$
This will be accomplished
in three stages by progressively constructing the conditional expectation on $L_{ip}(\Omega)$, $L^1_G(\Omega)$ and finally ${L}_{G}^{1,\tau+}(\Omega)$.
\begin{remark}
\label{ui remark}
\upshape{ According to Theorem 25 in \cite{DHP}, for $X\in L^1_G(\Omega)$, we have
\begin{equation}\label{ui property}
\hat{\mathbb{E}}[|X|I_{\{|X|>N\}}]\rightarrow 0, \ \ \ \ \text{as} \ N\rightarrow \infty.
\end{equation}
This, together with a direct calculation, implies that (\ref{ui property}) still holds for $X\in L^{1,\tau+}_G(\Omega)$.}
\end{remark}
In the following, unless stated otherwise, we shall always assume that the optional time $\tau$ satisfies the following assumption:
\begin{description}
\item[(H3)] $c(\{\tau>T\})\rightarrow 0$,\ \ \ \ as $ T\rightarrow \infty$.
\end{description}
\subsubsection*{Stage one: $\hat{\mathbb{E}}_{\tau+}$ on $L_{ip}(\Omega)$}
Let $X\in L_{ip}(\Omega)$. The construction of $$\hat{\mathbb{E}}_{\tau+}:L_{ip}(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$$ consists of two steps.
\textit{Step 1.}
For any given simple discrete stopping time $\tau$ taking values in $\{t_i:i\geq 1\}$, we define
\begin{equation}\label{Etau for discrete stopping time, lip}
\hat{\mathbb{E}}_{\tau+}[X]:=\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}},
\end{equation}
where a discrete stopping (or optional) time is called \textit{simple} if $t_i\uparrow\infty$ as $i\rightarrow\infty$. Here we
employ the convention that $t_{n+i}:=t_n+i$, $i\geq 1$, if $\tau$ is a discrete stopping (or optional) time taking finitely many values $\{t_i:i\leq n\}$ with $t_i\leq t_{i+1}$.
\textit{Step 2.} For a general optional time $\tau$, let $\tau_n$ be a sequence of simple discrete stopping times such that $\tau_n \rightarrow \tau$ uniformly.
We define \begin{equation}\label{890377433993}
\hat{\mathbb{E}}_{\tau+}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau_n+}[X].
\end{equation}
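A standard concrete choice of such a sequence (a sketch under the usual dyadic convention; the text does not fix a particular one) is $\tau_n:=([2^n\tau]+1)/2^n$, where $[\cdot]$ denotes the integer part:

```python
import math

def dyadic_approximation(tau, n):
    """The n-th dyadic approximant tau_n = (floor(2^n * tau) + 1) / 2^n.
    It takes values in the grid {k / 2^n : k >= 1} and satisfies
    tau < tau_n <= tau + 2^(-n), so tau_n -> tau uniformly."""
    return (math.floor(2 ** n * tau) + 1) / 2 ** n
```

Since $\{\tau_n\leq k/2^n\}=\{\tau<k/2^n\}\in\mathcal{F}_{k/2^n}$, each $\tau_n$ is a discrete stopping time whenever $\tau$ is an optional time, and $0<\tau_n-\tau\leq 2^{-n}$ gives the required uniform convergence.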
\begin{proposition}\label{taun Cauchy lemma}
The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L_{ip}(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ is well-defined.
\end{proposition}
In the following, for notational simplicity, we always use $C_X$ to denote the bound of $X$ for any bounded function $X:\Omega\rightarrow\mathbb{R}$. Similarly, for any given bounded, Lipschitz function $\varphi:\mathbb{R}^n\rightarrow\mathbb{R}$, we always use $C_\varphi$ and $L_\varphi$ to denote its bound and Lipschitz constant, respectively.
The proof relies on the following lemmas.
We set
$$\Lambda_{\delta,T}:=\{(u_1,u_2):0\leq u_1,u_2\leq T,\ |u_1-u_2|\leq \delta\}.$$ The first three lemmas concern the continuity properties of conditional expectation $\hat{\mathbb{E}}_t$ on $L_{ip}(\Omega)$.
\begin{lemma}\label{Et continuity lemma}
Let $X=\varphi(B_{t_1},B_{t_2}-B_{t_1},\cdots,B_{t_n}-B_{t_{n-1}})$ for $\varphi\in C_{b.Lip}(\mathbb{R}^{n\times d})$ with $0\leq t_1<t_2<\cdots<t_n<\infty$. Then for any $T\geq 0$ and $0\leq s_1\leq s_2\leq T$
, we have
\begin{equation}\label{8765433667889}
|\hat{\mathbb{E}}_{s_2}[X]-\hat{\mathbb{E}}_{s_1}[X]|\leq C\{\sup_{(u_1,u_2)\in \Lambda_{s_2-s_1,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{s_2-s_1}\},
\end{equation}
where $C$ is a constant depending only on $X$ and $G$.
\end{lemma}
\begin{proof}
First suppose $s_1,s_2\in [t_i,t_{i+1}]$ for some $0\leq i\leq n$ with the convention that $t_0=0,t_{n+1}=\infty$. By the definition of conditional $G$-expectation on $L_{ip}(\Omega)$, we have
\begin{equation}\label{78787899}
\hat{\mathbb{E}}_{s_j}[X]=\psi_j({B_{t_1},\cdots,B_{t_{i}}-B_{t_{i-1}},B_{s_j}-B_{t_{i}}}),\ \ \ \ \text{for}\ j=1,2,
\end{equation}
where $$
\psi_j(x_1,\cdots,x_{i},x_{i+1})=\hat{\mathbb{E}}[\varphi(x_1,\cdots,x_{i},x_{i+1}+B_{t_{i+1}}-B_{s_j},\cdots,B_{t_{n}}-B_{t_{n-1}})].
$$
From the sub-additivity of $\hat{\mathbb{E}}$,
\begin{align*}
&|\psi_1(x_1,\cdots,x_{i},x_{i+1})-\psi_2(x'_1,\cdots,x'_{i},x'_{i+1})|\\
&\ \ \leq (L_\varphi(\sum_{j=1}^{i+1}|x_j-x'_j|+\hat{\mathbb{E}}[|B_{s_2}-B_{s_1}|]))\wedge (2C_\varphi)\\
&\ \ \leq C_1(\sum_{j=1}^{i+1}|x_j-x'_j|\wedge 1+\sqrt{s_2-s_1}),
\end{align*}
where $C_1=(L_\varphi(1\vee \hat{\mathbb{E}}[|B_1|]))\vee (2C_\varphi)$.
Combining this with (\ref{78787899}), we obtain
\begin{equation}\label{888888}
|\hat{\mathbb{E}}_{s_2}[X]-\hat{\mathbb{E}}_{s_1}[X]|
\leq
C_1(|B_{s_2}-B_{s_1}|\wedge 1+\sqrt{s_2-s_1}).
\end{equation}
Next, suppose $s_1\in [t_{i},t_{i+1}],s_2\in [t_{j},t_{j+1}]$ for some $j\geq i$.
Applying estimate (\ref{888888}), we have
\begin{align*}
|\hat{\mathbb{E}}_{s_2}[X]-\hat{\mathbb{E}}_{s_1}[X]|
&\leq |\hat{\mathbb{E}}_{s_2}[X]-\hat{\mathbb{E}}_{t_j}[X]|+|\hat{\mathbb{E}}_{t_j}[X]-\hat{\mathbb{E}}_{t_{j-1}}[X]|+\cdots+|\hat{\mathbb{E}}_{t_{i+1}}[X]-\hat{\mathbb{E}}_{s_1}[X]|\\
&\leq C_1(|B_{s_2}-B_{t_j}|\wedge 1+\cdots+|B_{t_{i+1}}-B_{s_1}|\wedge 1) + C_1(\sqrt{{s_2}-{t_j}}+\cdots+\sqrt{{t_{i+1}}-{s_1}})\\
& \leq C\{\sup_{(u_1,u_2)\in \Lambda_{s_2-s_1,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{s_2-s_1}\},
\end{align*}
where $C=(n+1)C_1$.
\end{proof}
Note that the estimate in the above lemma is universal: the right-hand side of (\ref{8765433667889}) depends only on the difference $s_2-s_1$, not on the individual values of $s_1$ and $s_2$. We can then easily obtain the following discrete-stopping-time version. A more general form is given in Lemma
\ref{generalized Etau continuity lemma}.
\begin{lemma}\label{Etau continuity lemma}
Let $X\in L_{ip}(\Omega)$. Then for any $T,\delta>0$ and discrete stopping times $\tau,\sigma\leq T$ taking finitely many values such that $|\tau-\sigma|\leq \delta$, we have
\begin{equation}|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{\sigma+}[X]|
\leq C\{\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{\delta}\},
\end{equation}
where $C$ is a constant depending only on $X$ and $G$.
\end{lemma}
\begin{proof}
Assume $\tau=\sum_{i=1}^nt_iI_{\{\tau=t_i\}},\sigma=\sum_{i=1}^ms_iI_{\{\sigma=s_i\}}$. By the definition (\ref{Etau for discrete stopping time, lip}), we have
\begin{align*}
|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{\sigma+}[X]|
& =|\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}-\sum_{j=1}^{m}\hat{\mathbb{E}}_{s_j}[X]I_{\{\sigma=s_j\}}|\\
&\leq \sum_{i=1}^{n}\sum_{j=1}^{m}|\hat{\mathbb{E}}_{t_i}[X]-\hat{\mathbb{E}}_{s_j}[X]|I_{\{\tau=t_i\}\cap \{\sigma=s_j\} }.
\end{align*}
Then by
Lemma \ref{Et continuity lemma}, there exists a constant $C$ depending on $X$ and $G$ such that
\begin{align*}
|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{\sigma+}[X]|
&\leq \sum_{i=1}^{n}\sum_{j=1}^{m}C(\sup_{(u_1,u_2)\in \Lambda_{|t_i-s_j|,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{|t_i-s_j|})I_{\{\tau=t_i\}\cap \{\sigma=s_j\} }\\
&\leq C(\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{\delta}).
\end{align*}
The proof is complete.
\end{proof}
\begin{lemma}\label{sup continuity lemma}Let $T>0$ be a given constant. Then
\begin{equation}\label{12345678901}\hat{\mathbb{E}}[\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}(|B_{u_2}-B_{u_1}|\wedge 1)]\downarrow 0,\ \ \ \ \text{as}\ \delta\downarrow 0.
\end{equation}
\end{lemma}
\begin{proof}
Given any $\varepsilon>0$, by the tightness of $\mathcal{P}$, we may pick a compact set $K\subset \Omega_T$ such that $c(K^c)<\varepsilon$. Then by the Arzel\`{a}-Ascoli theorem, there exists a $\delta>0$ such that $|B_{u_1}(\omega)-B_{u_2}(\omega)|\leq \varepsilon$ for $\omega\in K$ and $|u_1-u_2|\leq\delta$, $0\leq u_1,u_2\leq T$.
Consequently,
\begin{align*}
\hat{\mathbb{E}}[\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}(|B_{u_2}-B_{u_1}|\wedge1)]
\leq \hat{\mathbb{E}}[\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}|B_{u_2}-B_{u_1}|I_K]+c({K^c})
\leq 2\varepsilon.
\end{align*}
Since $\varepsilon$ can be arbitrarily small, we obtain the lemma.
\end{proof}
\begin{remark}\label{remark after sup continuity lemma}
\upshape{ From the proof, we see that the above lemma remains true in the more general case where $\hat{\mathbb{E}}$ is the upper expectation of a tight family of probability measures. To be precise, for any fixed $T$, let $\Omega_T$ be defined as in Section 2, let $(B_t)_{0\leq t\leq T}$ be the canonical process, and let $\hat{\mathbb{E}}=\sup_{P\in\mathcal{P}'}E_P$, where $\mathcal{P}'$ is a tight family of probability measures on $\Omega_T$; then (\ref{12345678901}) still holds.
This generalization will be used in the next section.}
\end{remark}
The following lemma is analogous to the classical one.
\begin{lemma}\label{pre consistant Etau lemma}
Let $X\in L_{ip}(\Omega)$ and $\tau,\sigma$ be two simple discrete stopping times. Then $\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=\hat{\mathbb{E}}_{\tau+}[X]$ on $\{\tau\leq \sigma\}$.
\end{lemma}
\begin{proof}
Assume that $\tau$ and $\sigma$ take values in $\{t_i:i\geq 1\}$ and $\{s_i:i\geq 1\}$, respectively.
Then $$
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=\sum_{i,j=1}^\infty \hat{\mathbb{E}}_{t_i\wedge s_j}[X]I_{\{\tau=t_i,\sigma= s_j\}}.
$$
Multiplying both sides by $I_{\{\tau\leq \sigma\}}$ and noting that $t_i\leq s_j$ on $\{\tau=t_i,\sigma= s_j\}\cap {\{\tau\leq \sigma\}}$, it follows that
$$
I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=\sum_{i,j=1}^\infty \hat{\mathbb{E}}_{t_i}[X]I_{\{\tau\leq \sigma\}}I_{\{\tau=t_i,\sigma= s_j\}}=\sum_{i=1}^\infty \hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}I_{\{\tau\leq \sigma\}}=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{\tau+}[X],
$$
which is the desired conclusion.
\end{proof}
\begin{proof}[Proof of Proposition \ref{taun Cauchy lemma}]
Assume $X\in L_{ip}(\Omega)$.
Let $\tau_n$ be a sequence of simple discrete stopping times such that $\tau_n\rightarrow \tau$ uniformly. We need to show that $\hat{\mathbb{E}}_{\tau_n+}[X]$ is a Cauchy sequence in $\mathbb{L}^1$ and the limit is independent of the choice of the approximation sequence $\tau_n$.
Assume $\tau_n=\sum_{i=1}^\infty t^n_iI_{\{\tau_n=t^n_i\}}$ and $|\tau_n-\tau|\leq \delta_n\rightarrow 0$, as $n\rightarrow\infty$. We can take $n_0$ large enough such that $\delta_n\leq 1$ for $n\geq n_0$, and hence $\{\tau\leq T\}\subset \{\tau_n\leq T+1\}$ and $\{\tau\leq T\}\subset \{\tau_m\leq T+1\}$, for $m,n\geq n_0$. Then it follows from Lemma \ref{pre consistant Etau lemma} that
\begin{equation}\label{23453455676585}
\begin{split}
|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_{m}+}[X]|
&=|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_{m}+}[X]|I_{\{\tau\leq T\}}+|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_{m}+}[X]|I_{\{\tau>T\}}\\
&\leq|\hat{\mathbb{E}}_{(\tau_n\wedge (T+1))+}[X]-\hat{\mathbb{E}}_{(\tau_m\wedge (T+1))+}[X]|I_{\{\tau\leq T\}}+2C_XI_{\{\tau>T\}}.
\end{split}
\end{equation}
For any $\varepsilon>0$, we choose $T$ large enough such that $c(\{\tau>T\})\leq \varepsilon$ by (H3). Taking expectation on both sides of (\ref{23453455676585}) and letting $n,m\rightarrow\infty$, we then obtain by Lemma \ref{Etau continuity lemma} and Lemma \ref{sup continuity lemma}
$$
\limsup_{n,m\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_{m}+}[X]|]\leq 2C_Xc(\{\tau>T\})\leq 2C_X\varepsilon.
$$
Since $\varepsilon$ can be arbitrarily small, this implies $$\lim_{n,m\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_{m}+}[X]|]= 0.$$
A similar argument shows that if $\{\tau'_n\}$ is another sequence of simple discrete stopping times such that $\tau'_n\rightarrow \tau$ uniformly, then
$$\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau'_{n}+}[X]|]=0.$$
Next, for each $n\geq 1$, we set
\begin{equation}\label{approximation tau in KS problem}
\tau_n:=f_n(\tau):=\sum_{i=1}^{\infty}t^n_iI_{\{t^n_{i-1}\leq \tau< t^n_i\}},\ \ \ \ \text{where}\ t^n_i:=\frac{i}{2^n},\ i\geq 0.
\end{equation}
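It is worth recording (a routine verification, not spelled out in the original) that $f_n(\tau)$ is indeed a sequence of discrete stopping times converging to $\tau$ uniformly; recall that an optional time satisfies $\{\tau<t\}\in\mathcal{F}_t$ for every $t$:
\begin{align*}
\{\tau_n=t^n_i\}&=\{\tau<t^n_i\}\setminus\{\tau<t^n_{i-1}\}\in\mathcal{F}_{t^n_i},\\
0&<\tau_n-\tau\leq 2^{-n},
\end{align*}
so that $\{\tau_n\leq t\}\in\mathcal{F}_t$ for each $t\geq 0$ and $\tau_n\downarrow\tau$ uniformly, as $n\rightarrow\infty$.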
Then we deduce $\hat{\mathbb{E}}_{\tau_n+}[X]\in L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ by the observation that
$$
\sum_{i=1}^{m}\hat{\mathbb{E}}_{t^n_i}[X]I_{\{\tau_n=t^n_i\}}\in L^{0,1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+}),\ \ \ \ \text{for each}\ m\geq 1
$$
and
\begin{align*}
&\hat{\mathbb{E}}[|\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t^n_i}[X]I_{\{\tau_n=t^n_i\}}-\sum_{i=1}^{m}\hat{\mathbb{E}}_{t^n_i}[X]I_{\{\tau_n=t^n_i\}}|]\\
&\ \ \leq\hat{\mathbb{E}}[\sum_{i=m+1}^{\infty}|\hat{\mathbb{E}}_{t^n_i}[X]|I_{\{\tau_n=t^n_i\}}]\\
&\ \ \leq C_X\hat{\mathbb{E}}[\sum_{i=m+1}^{\infty}I_{\{\tau_n=t^n_i\}}]\\
&\ \ = C_Xc(\{\tau\geq t^n_m\})\rightarrow 0, \ \ \ \ \text{as}\ m\rightarrow\infty.
\end{align*}
By the definition (\ref{890377433993}), this implies $\hat{\mathbb{E}}_{\tau+}[X]\in L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$.
Finally, if $\tau$ is itself a simple discrete stopping time, then $\hat{\mathbb{E}}_{\tau+}$ defined by (\ref{890377433993}) coincides with the one defined by (\ref{Etau for discrete stopping time, lip}), since we can take the constant approximation sequence $\tau_n\equiv\tau$, $n\geq 1$.
\end{proof}
Now we give three fundamental properties which are important for the extension of $\hat{\mathbb{E}}_{\tau+}$ to $L^1_G(\Omega)$.
\begin{proposition}\label{lip Etau lemma}
The conditional expectation $\hat{\mathbb{E}}_{\tau+}$ satisfies the following properties: for $X,Y\in L_{ip}(\Omega)$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_{\tau+}[X]\leq \hat{\mathbb{E}}_{\tau+}[Y], \ \text{for} \ X\leq Y$;
\item [{\rm(ii)}] $\hat{\mathbb{E}}_{\tau+}[X+Y]\leq \hat{\mathbb{E}}_{\tau+}[X]+\hat{\mathbb{E}}_{\tau+}[Y]$;
\item [{\rm(iii)}] $\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[X]$.
\end{description}
\end{proposition}
In order to prove (iii), we need the following proposition. It is a generalized version of Proposition 2.5 (vi) in \cite{HJPS}.
\begin{proposition}\label{main proposition}
Let $A_i\in\mathcal{F}_{t_i},i\leq n$ for $0\leq t_1\leq \cdots\leq t_n$ such that $\cup_{i=1}^n A_i=\Omega$ and $A_i\cap A_j=\emptyset$ for $i\neq j$. Then for each $\xi_i\in L_G^{1}(\Omega),\ i\leq n$, we have
\begin{equation}\label{first proposition}
\hat{\mathbb{E}}[\sum_{i=1}^n\xi_iI_{A_i}]=\hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}].
\end{equation}
\end{proposition}
\begin{proof}
\textit{Step 1.}
Suppose first $\xi_i\geq 0$, $i=1,\cdots,n$. For any $P\in\mathcal{P}$, by
Lemma 17 in \cite{HP1}, we have
$$E_P[\xi_i|\mathcal{F}_{t_i}]\leq \hat{\mathbb{E}}_{t_i}[\xi_i]\ \ \ \ P\text{-a.s.}$$
Then
$$ E_P[\sum_{i=1}^n\xi_iI_{A_i}]
=E_P[\sum_{i=1}^nE_P[\xi_i|\mathcal{F}_{t_i}]I_{A_i}]\leq E_P[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}]\leq \hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}].$$
This implies
$$
\hat{\mathbb{E}}[\sum_{i=1}^n\xi_iI_{A_i}]=\sup_{P\in\mathcal{P}}E_P[\sum_{i=1}^n\xi_iI_{A_i}]\leq \hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}].$$
Now we prove the reverse inequality. We only need to show that, for each $P\in\mathcal{P}$,
\begin{equation}
\label{87666879854}E_P[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}]\leq\hat{\mathbb{E}}[\sum_{i=1}^n\xi_iI_{A_i}].
\end{equation}
Let $P\in\mathcal{P}$ be given. For $i\leq n$, noting that $A_i,A_i^c \in\mathcal{F}_{t_i}$, we can choose a sequence of increasing compact sets $K^{i}_m\subset A_i$, $m\geq 1$ such that $P(A_i\backslash K^{i}_m)\downarrow0$, as $m\uparrow \infty$ and a sequence of increasing compact sets $\widetilde{K}^i_m\subset A_i^c$, $m\geq 1$ such that $P(A_i^c \backslash\widetilde{K}^i_m)\downarrow0$, as $m\uparrow \infty$.
Moreover, since $K^{i}_m\cap\widetilde{K}^i_m=\emptyset$ and $K^{i}_m,\widetilde{K}^i_m$ are compact sets, we have
\begin{equation}
\label{111111223232}
\rho_d(K^{i}_m, \widetilde{K}^i_m)>0.
\end{equation}
For each $i, m$, by Theorem 1.2 in \cite{Bi} and (\ref{111111223232}), there exist two sequences $\{\varphi^{i,m}_{l}\}_{l=1}^{\infty},\{\widetilde{\varphi}^{i,m}_{l}\}_{l=1}^\infty\subset C_b(\Omega_{t_i})$ such that $\varphi^{i,m}_{l}\downarrow I_{K^{i}_m}$, $\widetilde{\varphi}^{i,m}_{l}\downarrow I_{\widetilde{K}^i_m}$, as $l\rightarrow\infty$ and
\begin{equation}\label{3453534}
\varphi^{i,m}_{l}\cdot\widetilde{\varphi}^{i,m}_{l}=0,\ \ \ \ \text{for all}\ l\geq 1.
\end{equation}
Applying the classical monotone convergence theorem under $P$, we have
\begin{equation}\label{899870776}
\begin{split}
E_P[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}]&= E_P[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}\prod_{j=1}^{i-1}I_{A_j^c}]\\
&=\lim_{m\rightarrow\infty} E_P[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]I_{K^{i}_m}\prod_{j=1}^{i-1}I_{\widetilde{K}^{j}_m}] \\
&=\lim_{m\rightarrow\infty}\lim_{l\rightarrow\infty} E_P[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}] \\
&\leq\lim_{m\rightarrow\infty}\lim_{l\rightarrow\infty} \hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}] .
\end{split}
\end{equation}
For any fixed $m,l$, by (vi), (ii), (iv) of Proposition \ref{condition expectation property}, we have
\begin{equation*}
\label{2342345334464}
\begin{split}
\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}] &= \hat{\mathbb{E}}[\hat{\mathbb{E}}_{t_{n-1}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}] ]\\
&=\hat{\mathbb{E}}[\sum_{i=1}^{n-1}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}+\hat{\mathbb{E}}_{t_{n-1}}[\xi_n{\varphi^{n,m}_{l}}]\prod_{j=1}^{n-1}{\widetilde{\varphi}^{j,m}_l}].
\end{split}
\end{equation*}
By (\ref{3453534}) and Proposition \ref{condition expectation property} (iv), we note that
$$
\hat{\mathbb{E}}_{t_{n-1}}[\xi_{n-1}]{\varphi^{n-1,m}_{l}}+\hat{\mathbb{E}}_{t_{n-1}}[\xi_n{\varphi^{n,m}_{l}}]\widetilde{\varphi}^{n-1,m}_l
=\hat{\mathbb{E}}_{t_{n-1}}[\xi_{n-1}{\varphi^{n-1,m}_{l}}+\xi_n{\varphi^{n,m}_{l}}\widetilde{\varphi}^{n-1,m}_l].
$$
We thus obtain $$\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}]=\hat{\mathbb{E}}[\sum_{i=1}^{n-2}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}
+\hat{\mathbb{E}}_{t_{n-1}}[\xi_{n-1}{\varphi^{n-1,m}_{l}}+\xi_n{\varphi^{n,m}_{l}}\widetilde{\varphi}^{n-1,m}_l]\prod_{j=1}^{n-2}{\widetilde{\varphi}^{j,m}_l}].$$
Repeating this procedure, we conclude that
\begin{equation}\label{987676655}
\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[\xi_i]{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}]=\hat{\mathbb{E}}[\hat{\mathbb{E}}_{t_1}[\sum_{i=1}^{n}\xi_i{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}]]=\hat{\mathbb{E}}[\sum_{i=1}^{n}\xi_i{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}].
\end{equation}
Substituting (\ref{987676655}) into (\ref{899870776}), we arrive at the inequality
\begin{equation}\label{88776554545}
E_P[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi_i]I_{A_i}] \leq\lim_{m\rightarrow\infty}\lim_{l\rightarrow\infty} \hat{\mathbb{E}}[\sum_{i=1}^{n}\xi_i{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}]
.
\end{equation}
By Theorem 1.31 in Chap VI of \cite{P7}, we note that
\begin{align*}
\lim_{l\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\xi_i{\varphi^{i,m}_{l}}\prod_{j=1}^{i-1}{\widetilde{\varphi}^{j,m}_l}]
&= \hat{\mathbb{E}}[\sum_{i=1}^{n}\xi_iI_{K^{i}_m}\prod_{j=1}^{i-1}I_{\widetilde{K}^{j}_m}]\\
&\leq\hat{\mathbb{E}}[\sum_{i=1}^{n}\xi_iI_{K^{i}_m}]\\
&\leq\hat{\mathbb{E}}[\sum_{i=1}^n\xi_iI_{A_i}].
\end{align*}
Thus (\ref{87666879854}) is proved.

\textit{Step 2.}
Consider now the general case. We define $\xi^N_i=\xi_i\vee (-N)$
for a constant $N>0$. By Step 1,
\begin{equation}
\label{3247839273465}
\hat{\mathbb{E}}[\sum_{i=1}^n(\xi_i^N+N)I_{A_i}]=\hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi^N_i+N]I_{A_i}].
\end{equation}
Note that
$$
\hat{\mathbb{E}}[\sum_{i=1}^n(\xi^N_i+N)I_{A_i}]=\hat{\mathbb{E}}[\sum_{i=1}^n\xi^N_iI_{A_i}]+N
$$ and $$\hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi^N_i+N]I_{A_i}]=\hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi^N_i]I_{A_i}]+N.$$
Subtracting $N$ from both sides of (\ref{3247839273465}), we obtain
$$
\hat{\mathbb{E}}[\sum_{i=1}^n\xi^N_iI_{A_i}]=\hat{\mathbb{E}}[\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\xi^N_i]I_{A_i}].
$$
Letting $N\rightarrow \infty$ and applying (\ref{ui property}), we obtain (\ref{first proposition}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{lip Etau lemma}]
(i) and (ii) follow immediately from the definition of $\hat{\mathbb{E}}_{\tau+}$ and Proposition \ref{condition expectation property} (i), (iii). It remains to prove (iii).
First suppose that $\tau$ is a simple discrete stopping time. By Proposition \ref{main proposition}, noting that $\{\tau=t_i\}\in \mathcal{F}_{t_i}$, $i\geq 1$, we have
$$
\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}X I_{\{\tau=t_i\}}]=\hat{\mathbb{E}}[X].
$$
Now we consider a general optional time $\tau$. Taking a sequence of simple discrete stopping times $\tau_n\rightarrow\tau$ uniformly, we obtain $$\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau_n+}[X]]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau_n+}[X]]= \hat{\mathbb{E}}[X],$$
which is the desired result.
\end{proof}
\subsubsection*{Stage two: $\hat{\mathbb{E}}_{\tau+}$ on $L_G^1(\Omega)$}
We proceed to define $$\hat{\mathbb{E}}_{\tau+}: L_G^1(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+}). $$
Let $X\in L_G^1(\Omega)$. Then there exists a sequence $\{X_n\}_{n=1}^\infty\subset L_{ip}(\Omega)$ such that $X_n\rightarrow X$ in $\mathbb{L}^1$.
We define
$$
\hat{\mathbb{E}}_{\tau+}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau+}[X_n].
$$
This extension of $\hat{\mathbb{E}}_{\tau+}$ also satisfies the basic properties in Proposition \ref{lip Etau lemma}.
\begin{proposition}\label{LG1 welldefined lemma}
The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L_G^1(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ is well-defined and satisfies: for $X,Y\in L_{G}^1(\Omega)$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_{\tau+}[X]\leq \hat{\mathbb{E}}_{\tau+}[Y], \ \text{for} \ X\leq Y$;
\item [{\rm (ii)}] $\hat{\mathbb{E}}_{\tau+}[X+Y]\leq \hat{\mathbb{E}}_{\tau+}[X]+\hat{\mathbb{E}}_{\tau+}[Y]$;
\item [{\rm (iii)}] $\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[X]$.
\end{description}
\end{proposition}
\begin{proof}
(i)-(iii) are obvious by the definition and Proposition \ref{lip Etau lemma}. We just show that $\hat{\mathbb{E}}_{\tau+} $ is well-defined on $L_G^1(\Omega)$.
Let $X\in L_G^1(\Omega)$. Take any $\{X_n\}_{n=1}^\infty\subset L_{ip}(\Omega)$ such that $X_n\rightarrow X$ in $\mathbb{L}^1$. By (i), (ii), (iii) of Proposition \ref{lip Etau lemma}, we have
\begin{align*}
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X_n]-\hat{\mathbb{E}}_{\tau+}[X_m]|]
\leq \hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[|X_n-X_m|]]=\hat{\mathbb{E}}[|X_n-X_m|]\rightarrow 0,\ \ \ \ \text{as}\ n,m\rightarrow\infty.
\end{align*}
Moreover, a similar argument shows that the limit is independent of the choice of the approximation sequence $\{X_n\}_{n=1}^\infty$.
\end{proof}
\subsubsection*{Stage three: $\hat{\mathbb{E}}_{\tau+}$ on $L_G^{1,\tau+}(\Omega)$}
Finally, we define
$$\hat{\mathbb{E}}_{\tau+}: L^{1,\tau+}_G(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$$
by two steps.
\textit{Step 1.}
Let $X=\sum_{i=1}^{n}\xi_iI_{A_i}\in L_G^{0,1,\tau+}(\Omega)$, where $\xi_i\in L_G^1(\Omega)$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$.
We define $$
\hat{\mathbb{E}}_{\tau+}[X]:=\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau+} [\xi_i]I_{A_i}.
$$
Then $\hat{\mathbb{E}}_{\tau+}$ is well-defined by the following lemma.
\begin{lemma}\label{well define lemma}
Let $A\in \mathcal{F}_{\tau+}$ and $\xi,\eta\in L_G^1(\Omega)$. Then $\xi I_{A}\geq \eta I_A$ implies
\begin{equation}\label{34567}
I_{A}\hat{\mathbb{E}}_{\tau+}[\xi]\geq I_{A}\hat{\mathbb{E}}_{\tau+}[\eta].
\end{equation}
\end{lemma}
\begin{proof}
By approximation, we may assume that $\xi,\eta \in L_{ip}(\Omega)$.
We first prove the case that $\tau$ is a simple discrete stopping time taking values in $\{t_i:i\geq 1\}$ and $A\in \mathcal{F}_{\tau}$. Applying Lemma 2.4 in \cite{HJPS}, we have
\begin{equation*}
I_{A}\hat{\mathbb{E}}_{\tau+}[\xi]=\sum_{i=1}^\infty\hat{\mathbb{E}}_{t_i}[\xi]I_{A\cap \{\tau=t_i\}}\geq \sum_{i=1}^\infty\hat{\mathbb{E}}_{t_i}[\eta]I_{A\cap \{\tau=t_i\}}=I_{A}\hat{\mathbb{E}}_{\tau+}[\eta].
\end{equation*}
Now for a general $\tau$, take $\tau_n$ as in (\ref{approximation tau in KS problem}). Since $A\in \mathcal{F}_{\tau+}\subset\mathcal{F}_{\tau_n}$, we have
$$
I_{A}\hat{\mathbb{E}}_{\tau+}[\xi]=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}I_{A}\hat{\mathbb{E}}_{\tau_n+}[\xi]\geq \mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}I_{A}\hat{\mathbb{E}}_{\tau_n+}[\eta]=I_{A}\hat{\mathbb{E}}_{\tau+}[\eta].
$$
This proves the lemma.
\end{proof}
\begin{proposition}\label{L01tauG lemma}The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L_G^{0,1,\tau+}(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ satisfies: for $X,Y\in L^{0,1,\tau+}_G(\Omega)$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_{\tau+}[X]\leq \hat{\mathbb{E}}_{\tau+}[Y], \ \text{for} \ X\leq Y$;
\item [{\rm (ii)}] $\hat{\mathbb{E}}_{\tau+}[X+Y]\leq \hat{\mathbb{E}}_{\tau+}[X]+\hat{\mathbb{E}}_{\tau+}[Y]$;
\item [{\rm (iii)}] $\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[X]$.
\end{description}
\end{proposition}
\begin{proof}
We only prove (iii); the proofs of (i) and (ii) are trivial.
First assume that $\tau$ is a simple discrete stopping time taking values in $\{t_j:j\geq 1\}$ and $X=\sum_{i=1}^{n}\xi_iI_{A_i}$, where $\xi_i\in L_{ip}(\Omega)$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau}$-partition of $\Omega$. By Proposition \ref{main proposition},
$$
\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau+}[\xi_i]I_{A_i}]=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\sum_{j=1}^{m}\hat{\mathbb{E}}_{t_j}[\xi_i]I_{{A_i}\cap{\{\tau=t_j\}}}]
=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_iI_{{A_i}\cap{\{\tau=t_j\}}}]=\hat{\mathbb{E}}[X].$$
Next suppose that $\tau$ is an optional time and $X=\sum_{i=1}^{n}\xi_iI_{A_i}$, where $\xi_i\in L_{ip}(\Omega)$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$. Taking $\tau_m$ as in (\ref{approximation tau in KS problem}), we derive that
$$
\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau+}[\xi_i]I_{A_i}]
=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau_m+}[\xi_i]I_{A_i}]=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau_m+}[\sum_{i=1}^{n}\xi_iI_{A_i}]]
=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}[X]=\hat{\mathbb{E}}[X].
$$
Consider finally the general case where $\tau$ is an optional time and $X=\sum_{i=1}^{n}\xi_iI_{A_i}$, where $\xi_i\in L^1_G(\Omega)$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$. We can take sequences $\xi^k_i\in L_{ip}(\Omega)$ such that $\xi^k_i\rightarrow \xi_i$ in $\mathbb{L}^1$ for $i\leq n$, and conclude that
$$
\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau+}[\xi_i]I_{A_i}]
=\lim_{k\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau+}[\xi^k_i]I_{A_i}]
=\lim_{k\rightarrow\infty}\hat{\mathbb{E}}[\sum_{i=1}^{n}\xi^k_iI_{A_i}]=\hat{\mathbb{E}}[X],
$$
as desired.
\end{proof}
\textit{Step 2.} Let $X\in L_G^{1,\tau+}(\Omega)$. Then there exists a sequence $\{X_n\}_{n=1}^\infty\subset L_G^{0,1,\tau+}(\Omega)$ such that $X_n\rightarrow X$ in $\mathbb{L}^1$. We define
$$
\hat{\mathbb{E}}_{\tau+}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau+}[X_n].
$$
\begin{proposition}\label{Etau welldefined} The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L_G^{1,\tau+}(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ is well-defined and satisfies the following properties: for $X,Y\in L^{1,\tau+}_G(\Omega)$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_{\tau+}[X]\leq \hat{\mathbb{E}}_{\tau+}[Y], \ \text{for} \ X\leq Y$;
\item [{\rm (ii)}] $\hat{\mathbb{E}}_{\tau+}[X+Y]\leq \hat{\mathbb{E}}_{\tau+}[X]+\hat{\mathbb{E}}_{\tau+}[Y]$;
\item [{\rm (iii)}] $\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[X]$.
\end{description}
\end{proposition}
\begin{proof}
It is immediate from the definition of $\hat{\mathbb{E}}_{\tau+}$ on $L_G^{1,\tau+}(\Omega)$ and Proposition \ref{L01tauG lemma}.
\end{proof}
\begin{remark}\label{Etau+ remark}
\upshape{If $G(A)=\frac12{\text{tr}(A)}$, we have $L^{1}_G(\Omega)=L^{1,\tau+}_G(\Omega)=L^{1}_{P}(\Omega)$ for the Wiener measure $P$, where $L^{1}_{P}(\Omega):=\{X\in\mathcal{F}:\ E_{P}[|X|]<\infty\}$. Moreover, $\hat{\mathbb{E}}_{\tau+}[\cdot]$ is just the classical conditional expectation ${{E}}_{P}[\cdot|\mathcal{F}_{\tau+}]$.}
\end{remark}
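The identification in the above remark can be sketched as follows (a routine check, using the classical identity $E_P[X|\mathcal{F}_\tau]=\sum_{i}E_P[X|\mathcal{F}_{t_i}]I_{\{\tau=t_i\}}$ for a discrete stopping time $\tau$ taking values in $\{t_i:i\geq 1\}$). Since $G(A)=\frac12{\text{tr}(A)}$ gives $\hat{\mathbb{E}}_{t}[\cdot]=E_P[\cdot|\mathcal{F}_{t}]$, we have, for a discrete stopping time $\tau$,
$$
\hat{\mathbb{E}}_{\tau+}[X]=\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}=\sum_{i=1}^{\infty}E_P[X|\mathcal{F}_{t_i}]I_{\{\tau=t_i\}}=E_P[X|\mathcal{F}_{\tau}]\ \ \ \ P\text{-a.s.}
$$
The general case then follows by approximating $\tau$ with the discrete stopping times (\ref{approximation tau in KS problem}) and passing to the limit, identifying $E_P[\cdot|\mathcal{F}_{\tau+}]$ with $E_P[\cdot|\mathcal{F}_{\tau}]$ up to $P$-null sets.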
\begin{remark}\label{Etau remark}
\upshape{
Let $\tau$ be a stopping time satisfying (H3).
\begin{description} \item [{\rm (i)}] We define $L^{1,\tau}_G(\Omega)$ as $L^{1,\tau+}_G(\Omega)$ with $\mathcal{F}_\tau$ in place of $\mathcal{F}_{\tau+}$. In a similar manner, we can define the conditional expectation at $\tau$
$$
\hat{\mathbb{E}}_{\tau}:{L}_{G}^{1,\tau}(\Omega)\rightarrow{L}_{G}^{1,\tau}(\Omega)\cap L^0(\mathcal{F}_{\tau}),$$
and analogous properties (throughout this paper) hold for $\hat{\mathbb{E}}_{\tau}$ and $L^{1,\tau}_G(\Omega)$. For the convenience of the reader, we sketch the construction.
\textit{Stage one.} Let $X\in L_{ip}(\Omega)$. First for a simple discrete stopping time $\tau$ taking values in $\{t_i:i\geq 1\}$, we define
\begin{equation*}
\hat{\mathbb{E}}_{\tau}[X]:=\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t_i}[X]I_{\{\tau=t_i\}}.
\end{equation*}
Then for the general $\tau$, we take a sequence of simple discrete stopping times $\tau_n$ such that $\tau_n \rightarrow \tau$ uniformly and define \begin{equation*}
\hat{\mathbb{E}}_{\tau}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau_n}[X].
\end{equation*}
\textit{Stage two.} Let $X\in L_G^1(\Omega)$. Then there exists a sequence $\{X_n\}_{n=1}^\infty\subset L_{ip}(\Omega)$ such that $X_n\rightarrow X$ in $\mathbb{L}^1$.
We define
$$
\hat{\mathbb{E}}_{\tau}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau}[X_n].
$$
\textit{Stage three.} First for $X=\sum_{i=1}^{n}\xi_iI_{A_i}\in L_G^{0,1,\tau}(\Omega)$, where $\xi_i\in L_G^1(\Omega)$ and $\{A_i\}_{i=1}^{n}$ is an $\mathcal{F}_{\tau}$-partition of $\Omega$,
we define $$
\hat{\mathbb{E}}_{\tau}[X]:=\sum_{i=1}^{n}\hat{\mathbb{E}}_{\tau} [\xi_i]I_{A_i}.
$$
For $X\in L_G^{1,\tau}(\Omega)$, there exists a sequence $\{X_n\}_{n=1}^\infty\subset L_G^{0,1,\tau}(\Omega)$ such that $X_n\rightarrow X$ in $\mathbb{L}^1$. We define
$$
\hat{\mathbb{E}}_{\tau}[X]:=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau}[X_n].
$$
\item [{\rm (ii)}] If $\tau\equiv t$ for some constant $t\geq 0$, then $\hat{\mathbb{E}}_{\tau}$ and $L_G^{1,\tau}(\Omega)$ reduce to $\hat{\mathbb{E}}_{t}$ and $L_G^{1,t}(\Omega)$ defined in \cite{HJPS}.
\item [{\rm (iii)}] In the case that $\tau$ is a stopping time, both $\hat{\mathbb{E}}_{\tau+}$ and $\hat{\mathbb{E}}_{\tau}$ are defined. From the definitions of $\hat{\mathbb{E}}_{\tau+}$ and $\hat{\mathbb{E}}_{\tau}$, it is easy to see that
$$ \hat{\mathbb{E}}_{\tau+}[X]=\hat{\mathbb{E}}_{\tau}[X],\ \ \ \ \text{for}\ X\in {L}^{1,\tau}_G(\Omega).$$
If $G(A)=\frac12{\text{tr}(A)}$, then $L^{1}_G(\Omega)=L^{1,\tau}_G(\Omega)=L^{1}_{P}(\Omega)$ and $\hat{\mathbb{E}}_{\tau}[\cdot]$ reduces to the classical conditional expectation ${{E}}_{P}[\cdot|\mathcal{F}_{\tau}]$, where $P$ is the Wiener measure.
\end{description}}
\end{remark}
\subsection{Some further properties of $\hat{\mathbb{E}}_{\tau+}$ on ${L}_{G}^{1,\tau+}(\Omega)$}
Let $\tau$ be an optional time satisfying (H3). In this subsection, we establish several further properties of the conditional expectation $\hat{\mathbb{E}}_{\tau+}$ on $L^{1,{\tau}+}_G(\Omega)$. We begin with the following four statements.
\begin{proposition}\label{Etau proposition on L1tau}The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L_G^{1,\tau+}(\Omega)\rightarrow L^{1,\tau+}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ satisfies the following properties:
\begin{description}
\item [{\rm (i)}]If $X_i\in L^{1,{\tau}+}_G(\Omega)$, $i=1,\cdots,n$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$, then $\hat{\mathbb{E}}_{\tau+}[\sum_{i=1}^nX_iI_{A_i}]=\\ \sum_{i=1}^n\hat{\mathbb{E}}_{\tau+}[X_i]I_{A_i}$;
\item [{\rm (ii)}] If $\tau$ and $\sigma$ are two optional times and $X\in L^{1,{\tau}+}_G(\Omega)$, then $\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau\leq \sigma\}}=\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]$;
\item [{\rm (iii)}] If $X\in L^{1,{\tau}+}_G(\Omega)$, then $\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]\rightarrow \hat{\mathbb{E}}_{\tau+}[X]$ in $\mathbb{L}^1$, as $ T\rightarrow\infty$;
\item [{\rm (iv)}] If $\{\tau_n\}_{n=1}^\infty,\tau$ are optional times such that $\tau_n\rightarrow\tau$ uniformly, as $n\rightarrow\infty$ and $X\in L^{1,\tau_0+}_G(\Omega) $, where $\tau_0:=\tau \wedge (\wedge_{n=1}^\infty\tau_n)$, then $\hat{\mathbb{E}}_{\tau_n+}[X]\rightarrow \hat{\mathbb{E}}_{\tau+}[X]$ in $\mathbb{L}^1$, as $n\rightarrow\infty$; in particular,
if $\tau_n\downarrow\tau$ uniformly, as $n\rightarrow\infty$ and $X\in L^{1,{\tau}+}_G(\Omega)$, then $\hat{\mathbb{E}}_{\tau_n+}[X]\rightarrow \hat{\mathbb{E}}_{\tau+}[X]$ in $\mathbb{L}^1$, as $n\rightarrow\infty$.
\end{description}
\end{proposition}
\begin{remark}
\upshape{
For two optional times $\tau$ and $\sigma$, since $A\cap{\{\tau\leq \sigma\}}, A\cap\{\tau=\sigma\} \in \mathcal{F}_{(\tau\wedge \sigma)+}\subset \mathcal{F}_{\sigma+}$ for $A\in \mathcal{F}_{\tau+}$, we have $XI_{\{\tau\leq \sigma\}}, XI_{\{\tau= \sigma\}}\in L^{1,{(\tau\wedge \sigma)}+}_G(\Omega)\subset L^{1,{\sigma}+}_G(\Omega)$ for $X\in L^{1,{\tau}+}_G(\Omega) $. Hence the conditional expectations $\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]$, $ \hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau=\sigma\}}]$, $\hat{\mathbb{E}}_{\sigma+}[XI_{\{\tau\leq \sigma\}}]$ and $ \hat{\mathbb{E}}_{\sigma+}[XI_{\{\tau=\sigma\}}]$ are all meaningful. }
\end{remark}
The following generalization of Lemma \ref{Etau continuity lemma} is needed for the proof of Proposition \ref{Etau proposition on L1tau} (iv).
\begin{lemma}\label{generalized Etau continuity lemma}
Let $X\in L_{ip}(\Omega)$. Then there exists a constant $C$ depending on $X$ and $G$ such that
$$|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{\sigma+}[X]|
\leq C\{\sup_{(u_1,u_2)\in \Lambda_{\delta,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{\delta}\},
$$
for any $T,\delta>0$ and optional times $\tau,\sigma\leq T$ such that $|\tau-\sigma|\leq \delta$.
\end{lemma}
\begin{proof}
Let $\tau_n,\sigma_n\leq T+1$ be two sequences of discrete stopping times taking finitely many values such that $\tau_n\rightarrow\tau$ and $\sigma_n\rightarrow \sigma$ uniformly, as $n\rightarrow\infty$. For any $\varepsilon>0$, we have $|\tau_n-\sigma_n|\leq \delta+\varepsilon$ for $n$ large enough. Then by
Lemma \ref{Etau continuity lemma}, there exists a constant $C$ depending on $X,G$ such that
$$|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\sigma_n+}[X]|
\leq C\{\sup_{(u_1,u_2)\in \Lambda_{\delta+\varepsilon,T}}(|B_{u_2}-B_{u_1}|\wedge 1)+\sqrt{\delta+\varepsilon}\}.
$$
First letting $n\rightarrow \infty$ and then letting $\varepsilon\downarrow 0$, we get the desired conclusion.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Etau proposition on L1tau}]
(i) Let $X_i=\sum_{j=1}^m\xi_j^iI_{B_j^i}\in L_G^{0,1,\tau+}(\Omega) $, where $\xi_j^i\in L_G^{1}(\Omega)$ and $\{B_j^i\}_{j=1}^m$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$.
By the definition of $\hat{\mathbb{E}}_{\tau+}$ on $L^{0,1,\tau+}_G(\Omega)$, we have
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\sum_{i=1}^nX_iI_{A_i}] & =\hat{\mathbb{E}}_{\tau+}[\sum_{i=1}^n\sum_{j=1}^m\xi_j^iI_{B_j^i}I_{A_i}] \\
&=\hat{\mathbb{E}}_{\tau+}[\sum_{i=1}^n\sum_{j=1}^m\xi_j^iI_{A_i\cap B_j^i}] \\
& =\sum_{i=1}^n\sum_{j=1}^m\hat{\mathbb{E}}_{\tau+}[\xi_j^i]I_{A_i\cap B_j^i}.
\end{align*}
Using the definition of $\hat{\mathbb{E}}_{\tau+}$ again, this can be further written as
$$\sum_{i=1}^n(\sum_{j=1}^m\hat{\mathbb{E}}_{\tau+}[\xi_j^i]I_{B_j^i})I_{A_i}=\sum_{i=1}^n\hat{\mathbb{E}}_{\tau+}[X_i]I_{A_i}.$$
Now the result for the general case of $X_i\in L_G^{1,\tau+}(\Omega)$ follows from a direct limit argument.

(ii)
First assume $X\in L_{ip}(\Omega)$.
Let $\tau_n:=f_n(\tau),\sigma_n:=f_n(\sigma)$ be as in (\ref{approximation tau in KS problem}). Since $\{\tau\leq \sigma\}\subset\{\tau_n\leq \sigma_n\}$, by Lemma \ref{pre consistant Etau lemma}, we have
$$
I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{(\tau_n\wedge\sigma_n)+}[X]=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{\tau_n+}[X].
$$
Letting $n\rightarrow\infty$, we obtain $$
I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{\tau+}[X].
$$
Then by a simple approximation, we get for $X\in L^1_G(\Omega)$
$$
I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{\tau+}[X].
$$
Now it follows from (i) that
$$
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[X]=I_{\{\tau\leq \sigma\}}\hat{\mathbb{E}}_{\tau+}[X].
$$
Next we consider the case $X=\sum_{i=1}^n \xi_iI_{A_i}$, where $\xi_i\in L_G^1(\Omega)$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$. We have
\begin{align*}
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]
&=\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[\sum_{i=1}^n\xi_iI_{A_i\cap\{\tau\leq \sigma\}}]\\
&=\sum_{i=1}^n\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[\xi_i]I_{A_i\cap\{\tau\leq \sigma\}}\\
&=\sum_{i=1}^n\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[\xi_i]I_{\{\tau\leq \sigma\}}I_{A_i}.
\end{align*}
Since $\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[\xi_i]I_{\{\tau\leq \sigma\}}=\hat{\mathbb{E}}_{\tau+}[\xi_i]I_{\{\tau\leq \sigma\}}$, it follows that
\begin{align*}
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]
&=\sum_{i=1}^n\hat{\mathbb{E}}_{\tau+}[\xi_i]I_{A_i}I_{\{\tau\leq \sigma\}}\\
&=\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau\leq \sigma\}}.
\end{align*}
Finally, we obtain the conclusion for $X\in L^{1,\tau+}_G(\Omega)$ by an approximation argument.
(iii) We first assume that $X$ is bounded. By (i) and (ii), $$\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]I_{\{\tau\leq T\}}=\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]=\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau\leq T\}}.$$
Then we directly calculate
\begin{align*}
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|]
&=\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|I_{\{\tau> T\}}]\\
&\leq C_Xc({\{\tau> T\}})\rightarrow 0,\ \ \ \ \text{as} \ T\rightarrow\infty.
\end{align*}
To pass to the case of general $X$, we argue as follows. Set $X_N:=(X\wedge N)\vee (-N)$ for a constant $N>0$. For any $\varepsilon>0$, by Remark \ref{ui remark}, we can take $N$ large enough such that
$$
\hat{\mathbb{E}}[|X-X_N|]\leq\varepsilon.
$$
Then
\begin{align*}
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|]
&\leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{\tau+}[X_N]|]
+\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X_N]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[X_NI_{\{\tau\leq T\}}]|]\\
&\ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[X_NI_{\{\tau\leq T\}}]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|]\\
&\leq 2\varepsilon+\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X_N]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[X_NI_{\{\tau\leq T\}}]|].
\end{align*}
Letting $T\rightarrow\infty$, we get
$$
\limsup_{T\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|]\leq 2\varepsilon,
$$
which implies, since $\varepsilon$ can be arbitrarily small,
$$
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[X]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[XI_{\{\tau\leq T\}}]|]\rightarrow 0,\ \ \ \ \text{as}\ T\rightarrow\infty.
$$
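The truncation $X_N=(X\wedge N)\vee(-N)$ used in the last step is simply a two-sided clip at level $N$. The following small numerical sketch (Python, entirely outside the paper's $G$-framework; all names are illustrative) checks that the truncation error $|X-X_N|=(|X|-N)^+$ shrinks pointwise as $N$ grows, which is what Remark \ref{ui remark} exploits.

```python
import numpy as np

def truncate(X, N):
    # X_N := (X ∧ N) ∨ (-N): clip X to the interval [-N, N]
    return np.maximum(np.minimum(X, N), -N)

rng = np.random.default_rng(2)
# a heavy-tailed sample, so that truncation actually removes mass
X = rng.standard_cauchy(size=100_000)

def trunc_err(N):
    # pointwise truncation error |X - X_N| = (|X| - N)^+
    return np.abs(X - truncate(X, N))
```

The pointwise monotonicity $|X-X_{N'}|\leq|X-X_N|$ for $N'\geq N$ is exactly why $\hat{\mathbb{E}}[|X-X_N|]\rightarrow 0$ is a uniform-integrability statement.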
(iv) \textit{Step 1.}
Suppose that $\tau_n,\tau\leq T$. We first assume $X\in L^1_G(\Omega)$. For any given $\varepsilon>0$, there exists an $\widetilde{X}\in L_{ip}(\Omega)$ such that
$$\hat{\mathbb{E}}[|\widetilde{X}-X|]\leq \varepsilon.$$ Then
\begin{align*}
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|]
&\leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau_n+}[\widetilde{X}]|]+\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\widetilde{X}]-\hat{\mathbb{E}}_{\tau+}[\widetilde{X}]|]+\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau+}[\widetilde{X}]-\hat{\mathbb{E}}_{\tau+}[X]|]\\
&\leq 2\varepsilon +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\widetilde{X}]-\hat{\mathbb{E}}_{\tau+}[\widetilde{X}]|].
\end{align*}
We now let $n\rightarrow \infty$ and use Lemma \ref{generalized Etau continuity lemma} and Lemma \ref{sup continuity lemma} to obtain $$\limsup_{n\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|]\leq 2\varepsilon,$$
which implies
\begin{equation}\label{867655678993} \hat{\mathbb{E}}_{\tau_n+}[X]\rightarrow\hat{\mathbb{E}}_{\tau+}[X] \ \ \ \ \text{in}\ \mathbb{L}^1.
\end{equation}
Next, for $X=\sum_{i=1}^k X_iI_{A_i}$, where $X_i\in L_G^1(\Omega)$ and $\{A_i\}_{i=1}^k$ is an $\mathcal{F}_{\tau_0+}$-partition of $\Omega$, the conclusion follows from (\ref{867655678993}) and the observation that
$$
\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|]\leq \sum_{i=1}^k\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X_i]-\hat{\mathbb{E}}_{\tau+}[X_i]|].
$$
Finally, for $X\in L^{1,\tau_0+}_G(\Omega)$, we can find an $\widetilde{X}\in L^{0,1,\tau_0+}_G(\Omega)$ such that
$$\hat{\mathbb{E}}[|\widetilde{X}-X|]\leq \varepsilon.$$
Following the argument for the case $X\in L^1_G(\Omega)$, we then obtain the conclusion for $X\in L^{1,\tau_0+}_G(\Omega)$.
\textit{Step 2.}
We now consider the case that $\tau$ is not bounded. Without loss of generality, we can assume $0\leq \tau \vee (\vee_{n=1}^\infty\tau_n)-\tau_0\leq 1$. For any $T>0$,
by (ii),
$$\hat{\mathbb{E}}_{\tau_n+}[X]I_{\{\tau_n\leq T+1\}}=\hat{\mathbb{E}}_{(\tau_n\wedge (T+1))+}[XI_{\{\tau_n\leq T+1\}}].$$
Multiplying both sides by $I_{\{\tau_0\leq T \}}$ and noting that $\{\tau_0\leq T\}\subset\{\tau_n\leq T+1\}$, we obtain from (i) that
$$\hat{\mathbb{E}}_{\tau_n+}[X]I_{\{\tau_0\leq T \}}=\hat{\mathbb{E}}_{(\tau_n\wedge (T+1))+}[XI_{\{\tau_0\leq T \}}].$$
Similarly, we have
$$\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau_0\leq T \}}=\hat{\mathbb{E}}_{(\tau\wedge (T+1))+}[XI_{\{\tau_0\leq T \}}].$$
First let $X$ be bounded. We have
\begin{equation}\label{788766654366789}
\begin{split}
|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|
&= |\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|I_{\{\tau_0\leq T\}}+|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|I_{\{\tau_0>T\}}\\
&\leq |\hat{\mathbb{E}}_{(\tau_n\wedge (T+1))+}[XI_{\{\tau_0\leq T\}}]-\hat{\mathbb{E}}_{(\tau\wedge (T+1))+}[XI_{\{\tau_0\leq T\}}]|+2C_XI_{\{\tau_0>T\}}.
\end{split}
\end{equation}
For any $\varepsilon>0$, we choose $T$ large enough such that $c({\{\tau_0>T\}})\leq c({\{\tau>T\}})\leq \varepsilon$. Taking the expectation $\hat{\mathbb{E}}$ on both sides of (\ref{788766654366789}) and letting $n\rightarrow\infty$, we then obtain from Step 1 that
$$\limsup_{n\rightarrow\infty}\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[X]-\hat{\mathbb{E}}_{\tau+}[X]|]\leq
2C_X\varepsilon,$$
which implies the conclusion since $\varepsilon$ is arbitrary. If $X$ is not necessarily bounded, we obtain the same conclusion by a truncation argument similar to that in (iii).
\end{proof}
The next result concerns the pull-out properties.
\begin{proposition}\label{Etau proposition on L1tau2}The conditional expectation $\hat{\mathbb{E}}_{\tau+}$ satisfies:
\begin{description}
\item [{\rm (i)}] If $X\in L^{1,{\tau}+}_G(\Omega)$ and $\eta ,Y\in L^{1,{\tau}+}_G(\Omega) \cap L^0(\mathcal{F}_{\tau+})$ such that $\eta$ is bounded, then $ \hat{\mathbb{E}}_{\tau+}[\eta X +Y]=\eta^+\hat{\mathbb{E}}_{\tau+}[X]+\eta^-\hat{\mathbb{E}}_{\tau+}[-X]+Y$;
\item [{\rm (ii)}] If $\eta\in L^{1,{\tau+}}_G(\Omega;\mathbb{R}^d)\cap L^0(\mathcal{F}_{\tau+};\mathbb{R}^d)$, $X\in L^{1,{\tau+}}_G(\Omega;\mathbb{R}^n)$ and $\varphi\in C_{b.Lip}(\mathbb{R}^{d+n})$, then $ \hat{\mathbb{E}}_{\tau+}[\varphi(\eta,X)]=\hat{\mathbb{E}}_{\tau+}[\varphi(p,X)]_{p=\eta}$.
\end{description}
\end{proposition}
In the proof of Proposition \ref{Etau proposition on L1tau2}, we shall need the following lemmas. We first study the local property of $\hat{\mathbb{E}}_{\tau+}$.
\begin{lemma}\label{Etau consistency identity0}
Let $X\in L^{1,\tau+}_G(\Omega)$ for two optional times $\tau$ and $\sigma$. Then
\begin{equation} \hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau=\sigma\}}=\hat{\mathbb{E}}_{\sigma+}[XI_{\{\tau=\sigma\}}].
\end{equation}
\end{lemma}
\begin{proof}
By Proposition \ref{Etau proposition on L1tau} (ii),
\begin{equation*}
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]=\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau\leq \sigma\}}.
\end{equation*}
Multiplying $I_{\{\tau=\sigma\}}$ on both sides, we see from Proposition \ref{Etau proposition on L1tau} (i) that
\begin{equation}
\label{987739783432}
\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau= \sigma\}}]=\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau=\sigma\}}.
\end{equation}
Noting that $XI_{\{\tau= \sigma\}}\in L^{1,\sigma+}_G(\Omega)$, we can apply a similar argument to $\tilde{X}=XI_{\{\tau= \sigma\}},\tilde{\sigma}=\tau,\tilde{\tau}=\sigma$ to obtain
$$\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau= \sigma\}}]=\hat{\mathbb{E}}_{\sigma+}[XI_{\{\tau=\sigma\}}].$$
Combining this with (\ref{987739783432}), we obtain the lemma.
\end{proof}
\begin{lemma}\label{Etau discrete lemma 01}
Let $X\in L^{1,\tau+}_G(\Omega)$ for a simple optional time $\tau$ taking values in $\{t_i:i\geq 1 \}$. Then
$$\hat{\mathbb{E}}_{\tau+}[X]=\sum_{i=1}^\infty\hat{\mathbb{E}}_{t_i+}[XI_{\{\tau=t_i\}}].$$
\end{lemma}
\begin{proof}
Note that $\{\tau=t_i\}\in \mathcal{F}_{\tau+}$. Applying Lemma \ref{Etau consistency identity0}, we have
$$
\hat{\mathbb{E}}_{\tau+}[X]=\sum_{i=1}^\infty\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau=t_i\}}=\sum_{i=1}^\infty\hat{\mathbb{E}}_{t_i+}[XI_{\{\tau=t_i\}}].
$$
\end{proof}
The following deterministic-time version of Proposition \ref{Etau proposition on L1tau2} is also needed.
\begin{lemma}\label{LG1t conditional expectation 2}For each $t\geq 0$, the conditional expectation $\hat{\mathbb{E}}_t$ satisfies the following properties:
\begin{description}
\item [{\rm (i)}] If $X\in L_G^{1,t}(\Omega)$ and $\eta,Y\in L_G^{1,t}(\Omega)\cap L^0(\mathcal{F}_t)$ such that $\eta$ is bounded, then $ \hat{\mathbb{E}}_t[\eta X+Y]=\eta^+\hat{\mathbb{E}}_t[X]+\eta^-\hat{\mathbb{E}}_t[-X]+Y$;
\item [{\rm (ii)}] If $\eta\in L_G^{1,t}(\Omega;\mathbb{R}^{d})\cap L^{0}(\mathcal{F}_t;\mathbb{R}^{d})$, $X\in L_G^{1,t}(\Omega;\mathbb{R}^{n})$, then $ \hat{\mathbb{E}}_t[\varphi(\eta,X)]=\hat{\mathbb{E}}_t[\varphi(p,X)]_{p=\eta}$, for each $\varphi\in C_{b.Lip}(\mathbb{R}^{d+n})$.
\end{description}
\end{lemma}
\begin{proof}
We just prove (i). Statement (ii) can be proved similarly.
\textit{Step 1.} We first assume
$$\eta=\sum_{i=1}^n \eta_i I_{A_i},\ \ Y=\sum_{i=1}^n Y_i I_{A_i},\ \ X=\sum_{i=1}^n X_i I_{A_i},$$
where $\eta_i, Y_i\in L_G^1(\Omega_t)$, $X_i\in L_G^1(\Omega)$ such that $\eta_i$ is bounded and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_t$-partition of $\Omega$.
By the definition of $\hat{\mathbb{E}}_t$ on $L^{0,1,t}_G(\Omega)$ (see Remark \ref{Etau remark}) and properties (ii), (iv) of Proposition \ref{condition expectation property}, we have
\begin{align*}
\hat{\mathbb{E}}_t[\eta X+Y] & =\hat{\mathbb{E}}_t[\sum_{i=1}^n(\eta_i X_i+Y_i)I_{A_i}] \\
& =\sum_{i=1}^n\hat{\mathbb{E}}_t[\eta_i X_i+Y_i]I_{A_i} \\
&=\sum_{i=1}^n(\eta_i^+\hat{\mathbb{E}}_t[X_i]+\eta_i^-\hat{\mathbb{E}}_t[-X_i]+Y_i)I_{A_i} \\
&=\eta^+\hat{\mathbb{E}}_t[X]+\eta^-\hat{\mathbb{E}}_t[-X]+Y.
\end{align*}
\textit{Step 2.} Now we consider the general case. We take a sequence $\{X_n\}_{n=1}^\infty\subset L_G^{0,1,t}(\Omega)$ such that
$$X_n\rightarrow X \ \ \ \ \text{in}\ \mathbb{L}^1.$$
Moreover, we define
\begin{equation}
\label{980897897487303544}
\eta_n:=\sum_{k=-2^n}^{2^n}\frac{kC_\eta}{2^{n}}I_{\{\frac{kC_\eta}{2^{n}}\leq \eta<\frac{(k+1)C_\eta}{2^{n}}\}}\end{equation}
and
\begin{equation}
\label{9805665567303544}
Y_n:=\sum_{k=-n2^n}^{n2^n-1}\frac{k}{2^{n}}I_{\{\frac{k}{2^{n}}\leq Y<\frac{k+1}{2^{n}}\}}+nI_{\{Y\geq n \}}-nI_{\{Y< -n \}}.
\end{equation}
Then
$$|\eta_n-\eta|\leq\frac{C_\eta}{2^n} \ \ \ \ \text{and} \ \ \ \ Y_n\rightarrow Y \ \ \text{in} \ \mathbb{L}^1, \ \text{as}\ n\rightarrow\infty$$
since
$$\hat{\mathbb{E}}[|Y_n-Y|]\leq \hat{\mathbb{E}}[|Y_n-Y|I_{\{-n\leq Y<n\}}]+
\hat{\mathbb{E}}[|Y_n-Y|I_{\{|Y|\geq n\}}]\leq \frac{1}{2^{n}}+\hat{\mathbb{E}}[|Y|I_{\{|Y|\geq n\}}]\rightarrow 0, \ \ \ \ \text{as} \ n\rightarrow \infty$$
because of Remark \ref{ui remark}.
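The dyadic approximations (\ref{980897897487303544}) and (\ref{9805665567303544}) are round-down quantizers. A short Python sanity check (again outside the paper's framework, with illustrative function names) verifies the error bounds $|\eta_n-\eta|\leq C_\eta 2^{-n}$ and $|Y_n-Y|\leq 2^{-n}$ on $\{-n\leq Y<n\}$, together with the $\mathbb{L}^1$-type decay of the error in $n$.

```python
import numpy as np

def eta_approx(eta, C_eta, n):
    # eta_n: round a bounded eta down to the dyadic grid {k C_eta / 2^n}
    return np.floor(eta * 2**n / C_eta) * C_eta / 2**n

def Y_approx(Y, n):
    # Y_n: round down on [-n, n) with mesh 2^{-n}, truncate to +/- n outside
    inside = np.floor(Y * 2**n) / 2**n
    return np.where(Y >= n, float(n), np.where(Y < -n, float(-n), inside))

rng = np.random.default_rng(0)
C_eta = 3.0
eta = rng.uniform(-C_eta, C_eta, size=10_000)  # a bounded "eta"
Y = rng.normal(size=10_000)                    # an integrable "Y"
```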
Applying Step 1, we have
\begin{equation}\label{345611}
\hat{\mathbb{E}}_t[\eta_n X_n+Y_n] =\eta_n^+\hat{\mathbb{E}}_t[X_n]+\eta_n^-\hat{\mathbb{E}}_t[-X_n]+Y_n.
\end{equation}
We note that
\begin{align*}
\hat{\mathbb{E}}[|\eta_nX_n+Y_n-\eta X-Y|]
&\leq \hat{\mathbb{E}}[|\eta_nX_n-\eta_n X|]+\hat{\mathbb{E}}[|X||\eta_n-\eta |]+\hat{\mathbb{E}}[|Y_n-Y|]\\
&\leq C_\eta\hat{\mathbb{E}}[|X_n-X|]+\frac{C_\eta}{2^n}\hat{\mathbb{E}}[|X|]+\hat{\mathbb{E}}[|Y_n-Y|]\\
&\rightarrow 0, \ \ \ \ \text{as} \ n\rightarrow\infty
\end{align*}
and similarly,
$$\hat{\mathbb{E}}[|\eta_n^+\hat{\mathbb{E}}_t[X_n]+\eta_n^-\hat{\mathbb{E}}_t[-X_n]+Y_n-(\eta^+\hat{\mathbb{E}}_t[X]+\eta^-\hat{\mathbb{E}}_t[-X]+Y)|]\rightarrow 0,\ \ \ \ \text{as}\ n\rightarrow\infty.$$
Thus, letting $n\rightarrow\infty$ on both sides of (\ref{345611}), we arrive at
$$ \hat{\mathbb{E}}_t[\eta X+Y]=\eta^+\hat{\mathbb{E}}_t[X]+\eta^-\hat{\mathbb{E}}_t[-X]+Y,$$ which completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Etau proposition on L1tau2}]
We define $\tau_n$ as (\ref{approximation tau in KS problem}).
Since $\mathcal{F}_{\tau+}\subset \mathcal{F}_{\tau_n}$, we have $L^{1,{\tau+}}_G(\Omega)\subset L^{1,{\tau_n}}_G(\Omega)$. Thus for any $Z\in L^{1,{\tau+}}_G(\Omega)$, we have $ZI_{\{\tau_n=t^n_i\}}\in L^{1,t^n_i}_G(\Omega)$, and hence $\hat{\mathbb{E}}_{t^n_i+}[ZI_{\{\tau_n=t^n_i\}}]=\hat{\mathbb{E}}_{t^n_i}[ZI_{\{\tau_n=t^n_i\}}]$ according to Remark \ref{Etau remark} (iii).
Then by Proposition \ref{Etau proposition on L1tau} (iv) and Lemma \ref{Etau discrete lemma 01},
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\eta X +Y]
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau_n+}[\eta X +Y]\\
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty}\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t^n_i+}[(\eta X +Y)I_{\{\tau_n=t^n_i\}}]\\
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty}\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t^n_i}[(\eta X +Y)I_{\{\tau_n=t^n_i\}}]\\
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty}\sum_{i=1}^{\infty}\hat{\mathbb{E}}_{t^n_i}[\eta I_{\{\tau_n=t^n_i\}} XI_{\{\tau_n=t^n_i\}} +YI_{\{\tau_n=t^n_i\}}].
\end{align*}
By Lemma \ref{LG1t conditional expectation 2} (i), we note that
\begin{align*}
&\hat{\mathbb{E}}_{t^n_i}[\eta I_{\{\tau_n=t^n_i\}} XI_{\{\tau_n=t^n_i\}} +YI_{\{\tau_n=t^n_i\}}]\\
&\ \ =\eta^+ I_{\{\tau_n=t^n_i\}}\hat{\mathbb{E}}_{t^n_i}[XI_{\{\tau_n=t^n_i\}}]+\eta^- I_{\{\tau_n=t^n_i\}}\hat{\mathbb{E}}_{t^n_i}[-XI_{\{\tau_n=t^n_i\}}]+YI_{\{\tau_n=t^n_i\}}\\
&\ \ =\eta^+ \hat{\mathbb{E}}_{t^n_i}[XI_{\{\tau_n=t^n_i\}}]+\eta^-\hat{\mathbb{E}}_{t^n_i}[-XI_{\{\tau_n=t^n_i\}}]+YI_{\{\tau_n=t^n_i\}}.
\end{align*}
We thus have
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\eta X +Y]
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow\infty} (\eta^+\hat{\mathbb{E}}_{\tau_n+}[ X ]+\eta^- \hat{\mathbb{E}}_{\tau_n+}[ -X])+Y\\
&=\eta^+\hat{\mathbb{E}}_{\tau+}[ X ]+\eta^- \hat{\mathbb{E}}_{\tau+}[ -X]+Y.
\end{align*}
Property (ii) is proved similarly.
\end{proof}
\subsection{Extension from below}
For a sequence $\{X_n\}_{n=1}^\infty$ in $ L^{1,\tau+}_G(\Omega)$ such that $X_n\uparrow X$ q.s., we cannot expect $X\in L^{1,\tau+}_G(\Omega)$ in general (take, e.g., $X_n:=n$, $n\geq 1$). It is therefore necessary to extend $\hat{\mathbb{E}}_{\tau+}$ from below, as follows, so as to guarantee upward monotone convergence.
Let $\tau$ be a given optional time and recall the convention (H3). We set
$$L^{1,\tau+,*}_G(\Omega):=\{X\in L^0(\mathcal{F}):\text{there exists}\ X_n\in L^{1,\tau+}_G(\Omega)\ \text{such that}\ X_n\uparrow X \ \text{q.s.}\}.$$
For $X\in L^{1,\tau+,*}_G(\Omega)$, let $\{X_n\}_{n=1}^\infty\subset L^{1,\tau+}_G(\Omega)$ such
that $X_n\uparrow X$ q.s. We define
$$
\hat{\mathbb{E}}_{\tau+}[X]:=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n].
$$
\begin{proposition}\label{L1tau* welldefine}
The conditional expectation $\hat{\mathbb{E}}_{\tau+}:L^{1,\tau+,*}_G(\Omega)\rightarrow L^{1,\tau+,*}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ is well-defined and satisfies:
for $X,Y\in L^{1,\tau+,*}_G(\Omega)$,
\begin{description}
\item [{\rm (i)}] $\hat{\mathbb{E}}_{\tau+}[X]\leq \hat{\mathbb{E}}_{\tau+}[Y], \ \text{for} \ X\leq Y$;
\item [{\rm (ii)}] $\hat{\mathbb{E}}_{\tau+}[X+Y]\leq \hat{\mathbb{E}}_{\tau+}[X]+\hat{\mathbb{E}}_{\tau+}[Y]$;
\item [{\rm (iii)}] $\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[X]]=\hat{\mathbb{E}}[X]$.
\end{description}
\end{proposition}
We need the following lemmas for the proof of the above proposition.
\begin{lemma}\label{L1tau upward mct}
Let $X_n,X\in L^{1,\tau+}_G(\Omega)$ such that $X_n\uparrow X$ q.s. Then
$
\hat{\mathbb{E}}_{\tau+}[X_n]\uparrow \hat{\mathbb{E}}_{\tau+}[X]\ \text{q.s.}
$
\end{lemma}
\begin{proof}
Since $X_n\leq X$ implies $\hat{\mathbb{E}}_{\tau+}[{X}_n]\leq \hat{\mathbb{E}}_{\tau+}[{X}]$ by Proposition \ref{Etau welldefined} (i), we have $$\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[{X}_n] \leq \hat{\mathbb{E}}_{\tau+}[X].$$
Then it suffices to prove the reverse inequality.
Suppose on the contrary that the inequality $\eta:=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]\geq \hat{\mathbb{E}}_{\tau+}[X]$ does not hold q.s., i.e.,
$$
c(\{\eta<\hat{\mathbb{E}}_{\tau+}[X]\})>0.
$$
Since
$$
D_k:=\{\eta+\frac1k\leq \hat{\mathbb{E}}_{\tau+}[X]\}\cap \{|\eta|\leq k\}\uparrow \{\eta<\hat{\mathbb{E}}_{\tau+}[X]\},
$$
we can take $k$ large enough such that, by Lemma \ref{upward mct for capacity},
$$
c(D_k)>0.
$$
Then by Lemma \ref{upward mct for rv}, Proposition \ref{Etau welldefined} (iii), Proposition \ref{Etau proposition on L1tau} (i) and Proposition \ref{Etau proposition on L1tau2} (i), we have
\begin{align*}
\hat{\mathbb{E}}[(X+k)I_{D_k}]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[(X_n+k)I_{D_k}]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}[(\hat{\mathbb{E}}_{\tau+}[X_n]+k)I_{D_k}]
=\hat{\mathbb{E}}[(\eta+k) I_{D_k}].
\end{align*}
But
$$
\hat{\mathbb{E}}[(X+k)I_{D_k}]=\hat{\mathbb{E}}[(\hat{\mathbb{E}}_{\tau+}[X]+k)I_{D_k}]\geq \hat{\mathbb{E}}[(\eta+\frac{1}{k}+k) I_{D_k}],
$$
which is a contradiction by Proposition 29 in \cite{HP1}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{L1tau* welldefine}]
Let $X\in L^{1,\tau+,*}_G(\Omega)$.
For any $X_n\in L^{1,\tau+}_G(\Omega)$ such that $X_n\uparrow X$ q.s., the limit $\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]$ obviously exists. We now show that if $\widetilde{X}_n\in L^{1,{\tau}+}_G(\Omega)$ is another sequence such that $\widetilde{X}_n\uparrow X$ q.s., then
$$
\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]=\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[\widetilde{X}_n] \ \ \ \ \text{q.s.}
$$
Noting that $X_n\wedge \widetilde{X}_m\uparrow X_n$, as $m\rightarrow\infty$, by Lemma \ref{L1tau upward mct}, we have
$$
\hat{\mathbb{E}}_{\tau+}[X_n]=\lim_{m\rightarrow\infty} \hat{\mathbb{E}}_{\tau+}[X_n\wedge \widetilde{X}_m]\leq \lim_{m\rightarrow\infty} \hat{\mathbb{E}}_{\tau+}[\widetilde{X}_m].
$$
It follows that
$$
\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]\leq \lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[\widetilde{X}_n].
$$
Exchanging the roles of $X_n$ and $\widetilde{X}_n$, we get the reverse inequality
$$
\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]\geq \lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[\widetilde{X}_n].
$$
Thus
$$
\lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[X_n]= \lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[\widetilde{X}_n].
$$
Therefore, $\hat{\mathbb{E}}_{\tau+}$ is well-defined.
Given the definition of $\hat{\mathbb{E}}_{\tau+}$ on $L^{1,\tau+,*}_G(\Omega)$ and Proposition \ref{Etau welldefined}, the proofs of properties (i), (ii), (iii) are straightforward, so we omit them.
\end{proof}
\begin{proposition}\label{Etau proposition on L1tau*}The conditional expectation $\hat{\mathbb{E}}_{\tau+}$ on $ L^{1,{\tau}+,*}_G(\Omega)$ satisfies the following properties:
\begin{description}
\item [{\rm (i)}]If $X_i\in L^{1,{\tau}+,*}_G(\Omega)$, $i=1,\cdots,n$ and $\{A_i\}_{i=1}^n$ is an $\mathcal{F}_{\tau+}$-partition of $\Omega$, then $\hat{\mathbb{E}}_{\tau+}[\sum_{i=1}^nX_iI_{A_i}]=\sum_{i=1}^n\hat{\mathbb{E}}_{\tau+}[X_i]I_{A_i}$;
\item [{\rm (ii)}] If $\tau,\sigma$ are two optional times and $X\in L^{1,\tau+,*}_G(\Omega)$, then $\hat{\mathbb{E}}_{\tau+}[X]I_{\{\tau\leq \sigma\}}=\hat{\mathbb{E}}_{(\tau\wedge\sigma)+}[XI_{\{\tau\leq \sigma\}}]$;
\item[{\rm (iii)}] If $X\in L^{1,{\tau}+,*}_G(\Omega)$ and $\eta,Y\in L^{1,{\tau}+,*}_G(\Omega)\cap L^0(\mathcal{F}_{\tau+})$ such that $\eta$ and $X$ are nonnegative, then $ \hat{\mathbb{E}}_{\tau+}[\eta X +Y]=\eta\hat{\mathbb{E}}_{\tau+}[X]+Y$;
\item[{\rm (iv)}]If $X_n\in L^{1,\tau+,*}_G(\Omega)$ such that $X_n\uparrow X$ q.s., then $X\in L^{1,\tau+,*}_G(\Omega)$ and
$
\hat{\mathbb{E}}_{\tau+}[X_n]\uparrow \hat{\mathbb{E}}_{\tau+}[X]\ \text{q.s.}
$
\end{description}
\end{proposition}
\begin{proof}
Statements (i), (ii), (iii) follow directly from Proposition \ref{Etau proposition on L1tau} (i), (ii), Proposition \ref{Etau proposition on L1tau2} (i) and the definition of $\hat{\mathbb{E}}_{\tau+}$ on $L^{1,\tau+,*}_G(\Omega)$.
(iv) By Proposition \ref{L1tau* welldefine} (i), we have
$$
\hat{\mathbb{E}}_{\tau+}[X]\geq \lim_{n\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[{X}_n].$$ To
prove the reverse inequality, for each $X_n$, we take a sequence $X_n^m\in {L}^{1,\tau+}_G(\Omega)$ such that
$X_n^m\uparrow X_n $, as $m\rightarrow \infty$. We define $\tilde{X}_m:=\vee_{n=1}^mX_n^m\in {L}^{1,\tau+}_G(\Omega)$. Then
$$
\tilde{X}_m\leq \vee_{n=1}^mX_n=X_m\ \ \ \ \text{and}\ \ \ \ \tilde{X}_m\uparrow X,\ \ \text{as}\ m\rightarrow\infty.
$$
It follows from the definition of $\hat{\mathbb{E}}_{\tau+}$ on $L^{1,\tau+,*}_G(\Omega)$ that
$$
\hat{\mathbb{E}}_{\tau+}[X]=\lim_{m\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[\tilde{X}_m]\leq \lim_{m\rightarrow\infty}\hat{\mathbb{E}}_{\tau+}[{X}_m],
$$
as desired.
\end{proof}
\begin{remark}
\upshape{
Let $\tau$ be a stopping time satisfying (H3). We define $L^{1,\tau,*}_G(\Omega)$ in the same way as ${L^{1,\tau+,*}_G(\Omega)}$, with $\mathcal{F}_\tau$ in place of $\mathcal{F}_{\tau+}$. We can similarly extend
$\hat{\mathbb{E}}_{\tau}$ from below to
$L^{1,\tau,*}_G(\Omega)$ and similar properties also hold for $\hat{\mathbb{E}}_{\tau}$ on
$L^{1,\tau,*}_G(\Omega)$. Moreover,
$$ \hat{\mathbb{E}}_{\tau+}[X]=\hat{\mathbb{E}}_{\tau}[X],\ \ \ \ \text{for}\ X\in {L}^{1,\tau,*}_G(\Omega).$$}
\end{remark}
\subsection{The reflection principle for $G$-Brownian motion}
As an application, we give the following reflection principle for $G$-Brownian motion.
\begin{theorem}
Let $\tau$ be an optional time (without the assumption that $\tau$ satisfies (H3)). Then
$$\widetilde{B}_t:=2B_{t\wedge\tau}-B_t=B_{t\wedge\tau}-(B_t-B_{\tau})I_{\{t>\tau\}}, \ \ \ \ \text{for}\ t\geq0,$$
is still a $G$-Brownian motion.
\end{theorem}
\begin{proof}
It suffices to prove that the two processes have the same finite-dimensional distributions, i.e., for any $0\leq t_1<t_2<\cdots<t_n\leq T<\infty$, we have
\begin{equation}\label{reflect equation}
(\widetilde{B}_{t_1},\widetilde{B}_{t_2}-\widetilde{B}_{t_{1}},\cdots ,\widetilde{B}_{t_n}-\widetilde{B}_{t_{n-1}})\overset{d}{=}(B_{t_1},{B}_{t_2}-{B}_{t_{1}},\cdots ,B_{t_n}-B_{t_{n-1}}).
\end{equation}
Moreover, by replacing $\tau$ with $\tau\wedge T$ we may assume without loss of generality that $\tau\leq T$.
Suppose first that $\tau$ is a stopping time taking finitely many values. We may assume that $\tau$ also takes values in $\{t_i:i\leq n\}$ since we can refine the partition in (\ref{reflect equation}).
Then by the version of Lemma \ref{Etau discrete lemma 01} for $\hat{\mathbb{E}}_{\tau}$, we have
\begin{align*}
&\hat{\mathbb{E}}_{\tau}[\varphi(\widetilde{B}_{t_1},\widetilde{B}_{t_2}-\widetilde{B}_{t_{1}},\cdots ,\widetilde{B}_{t_n}-\widetilde{B}_{t_{n-1}})]\\
&\ \ =\hat{\mathbb{E}}_{\tau}[\varphi(2B_{t_1\wedge\tau}-B_{t_1},\cdots,2B_{t_n\wedge\tau}-B_{t_n}-(2B_{t_{n-1}\wedge\tau}-B_{t_{n-1}}))]\\
&\ \ =\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\varphi(2B_{t_1\wedge\tau}-B_{t_1},\cdots,2B_{t_n\wedge\tau}-B_{t_n}-(2B_{t_{n-1}\wedge\tau}-B_{t_{n-1}}))I_{\{\tau=t_i\}}]\\
&\ \ =\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\varphi(2B_{t_1\wedge t_i}-B_{t_1},\cdots,2B_{t_n\wedge t_i}-B_{t_n}-(2B_{t_{n-1}\wedge t_i}-B_{t_{n-1}}))]I_{\{\tau=t_i\}}.
\end{align*}
Note that, for $k\leq i$,
$$2B_{t_k\wedge t_i}-B_{t_k}-(2B_{t_{k-1}\wedge t_i}-B_{t_{k-1}})=B_{t_k}-B_{t_{k-1}},$$
and for $k> i$,
$$
2B_{t_k\wedge t_i}-B_{t_k}-(2B_{t_{k-1}\wedge t_i}-B_{t_{k-1}})=-(B_{t_k}-B_{t_{k-1}})\overset{d}{=}B_{t_k}-B_{t_{k-1}}
$$
because of the symmetry of $G$-Brownian motion.
We see from the definition of conditional expectation $\hat{\mathbb{E}}_{t_i}$ on $L_{ip}(\Omega)$ that
\begin{align*}
&\hat{\mathbb{E}}_{t_i}[\varphi(2B_{t_1\wedge t_i}-B_{t_1},\cdots,2B_{t_n\wedge t_i}-B_{t_n}-(2B_{t_{n-1}\wedge t_i}-B_{t_{n-1}}))]\\
&\ \ =\hat{\mathbb{E}}_{t_i}[\varphi(B_{t_1},\cdots,B_{t_i}-B_{t_{i-1}},-(B_{t_{i+1}}-B_{t_{i}}),\cdots,-(B_{t_{n}}-B_{t_{n-1}}))]\\
&\ \ =\hat{\mathbb{E}}_{t_i}[\varphi(B_{t_1},\cdots,B_{t_i}-B_{t_{i-1}},B_{t_{i+1}}-B_{t_{i}},\cdots,B_{t_{n}}-B_{t_{n-1}})].
\end{align*}
Therefore,
\begin{align*}
&\hat{\mathbb{E}}_{\tau}[\varphi(\widetilde{B}_{t_1},\widetilde{B}_{t_2}-\widetilde{B}_{t_{1}},\cdots ,\widetilde{B}_{t_n}-\widetilde{B}_{t_{n-1}})]\\
&\ \ =\sum_{i=1}^n\hat{\mathbb{E}}_{t_i}[\varphi(B_{t_1},\cdots,B_{t_i}-B_{t_{i-1}},B_{t_{i+1}}-B_{t_{i}},\cdots,B_{t_{n}}-B_{t_{n-1}})]I_{\{\tau=t_i\}}\\
&\ \ =\hat{\mathbb{E}}_{\tau}[\varphi({B}_{t_1},{B}_{t_2}-{B}_{t_{1}},\cdots ,{B}_{t_n}-{B}_{t_{n-1}})].
\end{align*}
Taking expectation $\hat{\mathbb{E}}$ on both sides, we have
$$
\hat{\mathbb{E}}[\varphi(\widetilde{B}_{t_1},\widetilde{B}_{t_2}-\widetilde{B}_{t_{1}},\cdots ,\widetilde{B}_{t_n}-\widetilde{B}_{t_{n-1}})]
=\hat{\mathbb{E}}[\varphi({B}_{t_1},{B}_{t_2}-{B}_{t_{1}},\cdots ,{B}_{t_n}-{B}_{t_{n-1}})].
$$
Turning to the general optional time $\tau\leq T$, we take a sequence of stopping times $\tau_k\leq T+1$ with finitely many values such that $0\leq \tau_k-\tau\leq \frac1k\downarrow 0$. Then
\begin{equation}\label{798374235739382489}
\hat{\mathbb{E}}[\varphi(2B_{t_1\wedge\tau_k}-B_{t_1},\cdots,2B_{t_n\wedge\tau_k}-B_{t_n}-(2B_{t_{n-1}\wedge\tau_k}-B_{t_{n-1}}))] =\hat{\mathbb{E}}[\varphi(B_{t_1},\cdots,B_{t_n}-B_{t_{n-1}})].
\end{equation}
By an analysis similar to that in the first paragraph of the proof of Lemma \ref{Et continuity lemma}, we have, for some constant $C$ depending on $\varphi$,
\begin{align*}
&\hat{\mathbb{E}}[|\varphi(2B_{t_1\wedge\tau_k}-B_{t_1},\cdots,2B_{t_n\wedge\tau_k}-B_{t_n}-(2B_{t_{n-1}\wedge\tau_k}-B_{t_{n-1}}))\\
&\ \ \ -\varphi(2B_{t_1\wedge\tau}-B_{t_1},\cdots,2B_{t_n\wedge\tau}-B_{t_n}-(2B_{t_{n-1}\wedge\tau}-B_{t_{n-1}}))|]\\
&\ \ \leq C\hat{\mathbb{E}}[\sup_{(u_1,u_2)\in \Lambda_{k^{-1},T+1}}(|B_{u_2}-B_{u_1}|\wedge 1)]\downarrow 0,\ \ \ \ \text{as}\ k\rightarrow \infty.
\end{align*}
Thus (\ref{reflect equation}) follows from letting $k\rightarrow\infty$ in (\ref{798374235739382489}).
\end{proof}
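In the special case $\underline{\sigma}=\overline{\sigma}=1$ the $G$-expectation is linear and the theorem reduces to the classical reflection principle for standard Brownian motion. The following Monte Carlo sketch (Python, with illustrative names; a discretized check of that classical special case only, not of the $G$-framework) reflects simulated paths at the first hitting time of a level $a$ and compares the law of $\widetilde{B}_T$ with that of $B_T$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T, a = 20_000, 100, 1.0, 0.5
dt = T / n_steps

# simulate discretized standard Brownian paths B_{t_1}, ..., B_{t_n}
B = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# first (discrete) index at which the running maximum reaches a;
# paths that never hit keep tau_idx = n_steps - 1, so they stay unreflected
hit = np.maximum.accumulate(B, axis=1) >= a
tau_idx = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps - 1)

# B~_t = 2 B_{t ∧ tau} - B_t : reflect each path after its hitting time
B_tau = B[np.arange(n_paths), tau_idx]
after_tau = np.arange(n_steps)[None, :] > tau_idx[:, None]
B_tilde = np.where(after_tau, 2.0 * B_tau[:, None] - B, B)
```

In discrete time the reflected walk has exactly the same law as the original one (the post-$\tau$ increments are i.i.d.\ symmetric), so up to Monte Carlo error $\widetilde{B}_T$ should again be $N(0,T)$.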
\section{Strong Markov Property for $G$-SDEs}
With the notion of conditional expectation $\hat{\mathbb{E}}_{\tau+}$ in hand, we now turn our attention to the strong Markov property for $G$-SDEs.
We first state the Markov property for $G$-SDEs.
\begin{lemma}\label{markov property of sde}
For $\varphi\in C_{b.Lip}(\mathbb{R}^{m\times n})$, $0\leq t_1< t_2< \cdots< t_m<\infty$ and $t\geq 0$, we have $$\hat{\mathbb{E}}_t[\varphi(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_m}^{x})]=\hat{\mathbb{E}}[\varphi(X_{t_1}^{y},X_{t_2}^{y},\cdots,X_{t_m}^{y})]_{y=X_t^x}.$$
\end{lemma}
\begin{proof}
Since $(B_{t+s}-B_t)_{s\geq 0}$ is still a $G$-Brownian motion and the coefficients $b,h_{ij},\sigma_j$ in $G$-SDE (\ref{SDE}) are independent of the time variable, we have, for any $s\geq0,\ y\in\mathbb{R}^n$,
$$
X_{t+s}^{t,y}\overset{d}{=}X_{s}^{y}.
$$
This implies, for $\widetilde{\varphi}\in C_{b.Lip}(\mathbb{R}^{n})$, $$\hat{\mathbb{E}}[\widetilde{\varphi}(X_{t+s}^{t,y})]_{y=X_t^x}=\hat{\mathbb{E}}[\widetilde{\varphi}(X_{s}^{y})]_{y=X_t^x}.$$
Hence by Lemma \ref{HJPS2 lemma},
\begin{equation}\label{44444444}
\hat{\mathbb{E}}_t[\widetilde{\varphi}(X_{t+s}^{x})]=\hat{\mathbb{E}}[\widetilde{\varphi}(X_{s}^{y})]_{y=X_t^x}.
\end{equation}
For $\varphi\in C_{b.Lip}(\mathbb{R}^{m\times n})$, by Proposition \ref{condition expectation property} (vi) and (v), we have
\begin{align*}
\hat{\mathbb{E}}_t[\varphi(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_m}^{x})]
&=\hat{\mathbb{E}}_t[\hat{\mathbb{E}}_{t+t_{m-1}}[\varphi(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_m}^{x})]]\\
&=\hat{\mathbb{E}}_t[\overline{\varphi}_{m-1}(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m-1}}^{x})],
\end{align*}
where $$\overline{\varphi}_{m-1}(y_1,\cdots,y_{m-1}):=\hat{\mathbb{E}}_{t+t_{m-1}}[\varphi(y_1,y_2,\cdots,y_{m-1},X_{t+t_m}^{x})],\ \ \ \ (y_1,\cdots,y_{m-1})\in\mathbb{R}^{(m-1)\times n}.$$
We note that
$$\overline{\varphi}_{m-1}(y_1,\cdots,y_{m-1})=\hat{\mathbb{E}}[\varphi(y_1,y_2,\cdots,y_{m-1},X_{t_m-t_{m-1}}^{y'_{m-1}})]_{y'_{m-1}=X_{t+t_{m-1}}^{x}}$$
by (\ref{44444444}).
Then $$\overline{\varphi}_{m-1}(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m-1}}^{x})=\varphi_{m-1}(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m-1}}^{x}),$$
where $$\varphi_{m-1}(y_1,\cdots,y_{m-1}):=\hat{\mathbb{E}}[\varphi(y_1,y_2,\cdots,y_{m-1},X_{t_m-t_{m-1}}^{y_{m-1}})] ,\ \ \ \ (y_1,\cdots,y_{m-1})\in\mathbb{R}^{(m-1)\times n}.$$
Thus we have
$$
\hat{\mathbb{E}}_t[\varphi(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m}}^{x})]=\hat{\mathbb{E}}_t[\varphi_{m-1}(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m-1}}^{x})].
$$
Repeating this procedure, we get
\begin{align}
\hat{\mathbb{E}}_t[\varphi(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m}}^{x})]\notag
&=\hat{\mathbb{E}}_t[\varphi_{m-1}(X_{t+t_1}^{x},X_{t+t_2}^{x},\cdots,X_{t+t_{m-1}}^{x})]\notag\\
&\ \vdots\label{eq1}\\
&=\hat{\mathbb{E}}_t[\varphi_{1}(X_{t+t_1}^{x})]\notag\\
&=\hat{\mathbb{E}}[\varphi_{1}(X_{t_1}^{y})]_{y=X_{t}^x},\notag
\end{align}
where
$$
\varphi_{m-i}(y_1,\cdots,y_{m-i}):=\hat{\mathbb{E}}[\varphi_{m-(i-1)}(y_1,y_2,\cdots,y_{m-i},X_{t_{m-(i-1)}-t_{m-i}}^{y_{m-i}})],\ \ \ \ 1\leq i\leq m-1.
$$
Taking $t=0,\ x=y$ in (\ref{eq1}), we obtain
\begin{equation}\label{eq2}
\hat{\mathbb{E}}[\varphi(X_{t_1}^{y},X_{t_2}^{y},\cdots,X_{t_m}^{y})]=\hat{\mathbb{E}}[\varphi_{1}(X_{t_1}^{y})],\ \ \ \ \text{for any } y\in\mathbb{R}^n.
\end{equation}
This, combined with (\ref{eq1}), proves the lemma.
\end{proof}
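To make the backward recursion in the proof concrete, consider the simplest nontrivial case $m=2$ (this merely spells out one step of (\ref{eq1}); no new claim is involved):

```latex
% Case m = 2 of Lemma \ref{markov property of sde}: one backward step suffices.
% First freeze the earlier variable and integrate out the later one:
\varphi_{1}(y_{1}) := \hat{\mathbb{E}}\big[\varphi\big(y_{1},\,X_{t_{2}-t_{1}}^{y_{1}}\big)\big],
\qquad y_{1}\in\mathbb{R}^{n}.
% Then one application of the one-step Markov identity (\ref{44444444}) gives
\hat{\mathbb{E}}_{t}\big[\varphi\big(X_{t+t_{1}}^{x},X_{t+t_{2}}^{x}\big)\big]
 =\hat{\mathbb{E}}_{t}\big[\varphi_{1}\big(X_{t+t_{1}}^{x}\big)\big]
 =\hat{\mathbb{E}}\big[\varphi_{1}\big(X_{t_{1}}^{y}\big)\big]_{y=X_{t}^{x}}
 =\hat{\mathbb{E}}\big[\varphi\big(X_{t_{1}}^{y},X_{t_{2}}^{y}\big)\big]_{y=X_{t}^{x}}.
```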
We now give the strong Markov property for $G$-SDEs. It generalizes the well-known strong Markov property for classical SDEs to $G$-SDEs in the framework of nonlinear $G$-expectation. We set $\Omega':=C([0,\infty);\mathbb{R}^n)$ with the distance $\rho_n$ and denote by $B'$ the corresponding canonical process. Recall that we always assume that the optional time $\tau$ satisfies (H3).
\begin{theorem}\label{main theorem}
Let $(X^x_t)_{t\geq 0}$ be the solution of $G$-SDE (\ref{SDE}) satisfying (H1), (H2) and $\tau$ be an optional time. Then for each $\varphi\in C_{b.Lip}(\mathbb{R}^{m\times n})$ and $0\leq t_1\leq \cdots\leq t_m=:T'<\infty$, we have
\begin{equation}\label{strong markov for sde}
\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]
=\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X_\tau^x}.
\end{equation}
\end{theorem}
We first need the following lemma, which justifies that the conditional expectation on the left-hand side of (\ref{strong markov for sde}) is well-defined. We denote the path of a process $Y$ by
$
Y_\cdot:=(Y_t)_{t\geq 0}.
$
\begin{lemma}\label{belong to L1tau lemma 1}
We have
\begin{equation}
\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)\in L^{1,\tau+}_G(\Omega).
\end{equation}
\end{lemma}
\begin{proof}
\textit{Step 1.} First assume $\tau\leq T$. Take discrete stopping times $\tau_n\leq T+1$ as in (\ref{approximation tau in KS problem}).
By the definition of $L^{0,1,\tau+}_G(\Omega)$, we have $$\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)\in L^{0,1,\tau+}_G(\Omega).$$
Then it suffices to show that
\begin{equation}\label{9876856758980978675}
\hat{\mathbb{E}}[|\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)-\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)|]\rightarrow 0, \ \ \ \ \text{as}\ n\rightarrow\infty.
\end{equation}
Consider now the mapping $\Omega\overset{X^x_\cdot}{\longrightarrow}\Omega'$. By Lemma \ref{GSDE} (\ref{SDE3}), for each $T_1\geq 0$, there exists a constant $C_{T_1}$ (depending on $T_1$) such that for each $t,s\leq T_1 $,
$$E_P[|X^x_t-X^x_s|^4]\leq\hat{\mathbb{E}}[|X_t^x-X_s^x|^4]\leq C_{T_1}|t-s|^2,\ \ \ \ \text{for each} \ P\in\mathcal{P}.$$
Then we can apply the well-known Kolmogorov moment criterion for tightness (see, e.g., Problem 2.4.11 in \cite{KS}) to conclude that the induced family of probability measures $\{P\circ (X_{\cdot}^x)^{-1}: P\in\mathcal{P}\}$ is tight on $\Omega'$.
We denote the induced capacity by $c^x_2:=\sup_{P\in\mathcal{P}} P\circ (X_{\cdot}^x)^{-1} $ and the induced sublinear expectation by $\hat{\mathbb{E}}^x_2:=\sup_{P\in\mathcal{P}}E_{P\circ (X_{\cdot}^x)^{-1}}$. Then
\begin{align*}
&\hat{\mathbb{E}}[|\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)-\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)|]\\
&\ \ \leq \hat{\mathbb{E}}[\sup_{ s,s'\in \Lambda_{2^{-n},T+1}}|\varphi(X_{s'+t_1}^x,\cdots,X_{s'+t_m}^x)-\varphi(X_{s+t_1}^x,\cdots,X_{s+t_m}^x)|]\\
&\ \ =\hat{\mathbb{E}}^x_2[\sup_{ s,s'\in \Lambda_{{2^{-n},T+1}}}|\varphi(B'_{s'+t_1},\cdots,B'_{s'+t_m})-\varphi(B'_{s+t_1},\cdots,B'_{s+t_m})|].
\end{align*}
Proceeding similarly to the first paragraph of the proof of Lemma \ref{Et continuity lemma}, we obtain, for some constant $C$ depending on $\varphi$,
$$
\hat{\mathbb{E}}^x_2[\sup_{ s,s'\in \Lambda_{{{2^{-n},T+1}}}}|\varphi(B'_{s'+t_1},\cdots,B'_{s'+t_m})-\varphi(B'_{s+t_1},\cdots,B'_{s+t_m})|]\leq C\hat{\mathbb{E}}^x_2[\sup_{ s,s'\in \Lambda_{{2^{-n},T+1+T'}}}(|B'_s-B'_{s'}|\wedge 1)],
$$
which converges to $0$ as $n\rightarrow \infty$ by Remark \ref{remark after sup continuity lemma}.
\textit{Step 2.} For the general case, by Step 1, we have
$$
\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)\in L^{1,(\tau\wedge T)+}_G(\Omega)\subset L^{1,\tau+}_G(\Omega).
$$
Note that
$$
\hat{\mathbb{E}}[|\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)-\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)|]\leq 2C_\varphi c(\{\tau>T\})\rightarrow 0, \ \ \ \ \text{as}\ T\rightarrow\infty.
$$
The result now follows.
\end{proof}
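For readers who wish to experiment with the discretization used above, here is a small numerical sketch. It assumes the standard dyadic construction $\tau_n:=([2^n\tau]+1)/2^n$ for the approximating stopping times (the precise form of (\ref{approximation tau in KS problem}) is not reproduced in this section, so this is an assumption) and checks, on one sample value, the two properties the proof relies on: $\tau_n$ approximates $\tau$ from strictly above at rate $2^{-n}$ and is nonincreasing in $n$.

```python
import math

def dyadic_approx(tau, n):
    # Hypothetical helper: assumes the standard dyadic construction
    # tau_n = (floor(2^n * tau) + 1) / 2^n, so that
    # tau < tau_n <= tau + 2^{-n} and the dyadic grids are nested.
    return (math.floor(2 ** n * tau) + 1) / 2 ** n

tau = 0.7371  # a sample value of the optional time on one path
taus = [dyadic_approx(tau, n) for n in range(1, 12)]

# tau_n approximates tau from strictly above at rate 2^{-n} ...
assert all(t_n > tau for t_n in taus)
assert all(t_n - tau <= 2 ** -n for n, t_n in enumerate(taus, start=1))
# ... and is nonincreasing in n, since the dyadic grids are nested
assert all(a >= b for a, b in zip(taus, taus[1:]))
```

Each $\tau_n$ takes only finitely many values on a bounded interval, which is what makes the reduction to deterministic times $t^n_i$ in the proofs below possible.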
\begin{proof}[Proof of Theorem \ref{main theorem}]
\textit{Step 1.} Let $\tau\leq T$. We define $\tau_n$ as in (\ref{approximation tau in KS problem}). Then $\tau_n\leq T+1$ takes finitely many values $\{t^n_i:i\leq d_n\}$ with $d_n:=[2^nT]+1$.
By (\ref{9876856758980978675}) and Proposition \ref{Etau proposition on L1tau} (iv), we have
\begin{equation*}
\begin{split}
&\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)]-\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \leq \hat{\mathbb{E}}[|\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)-\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)|]\\
&\ \ \ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \rightarrow 0, \ \ \ \ \text{as}\ n\rightarrow \infty.
\end{split}
\end{equation*}
Moreover, since $\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)\in L^{1,\tau_n}_G(\Omega)$, by Remark \ref{Etau remark}, we have $$\hat{\mathbb{E}}_{\tau_n+}[\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)]=\hat{\mathbb{E}}_{\tau_n}[\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)].$$
Combining these with the version of Lemma \ref{Etau discrete lemma 01} for $\hat{\mathbb{E}}_{\tau_n}$, we have
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}_{\tau_n}[\varphi(X_{\tau_n+t_1}^x,\cdots,X_{\tau_n+t_m}^x)]\\
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\sum_{i=1}^{d_n}\hat{\mathbb{E}}_{t^n_i}[\varphi(X_{{t^n_i}+t_1}^x,\cdots,X_{{t^n_i}+t_m}^x)]I_{\{\tau_n={t^n_i}\}}.
\end{align*}
Note that, by Lemma \ref{markov property of sde},
$$
\hat{\mathbb{E}}_{t^n_i}[\varphi(X_{{t^n_i}+t_1}^x,\cdots,X_{{t^n_i}+t_m}^x)]=\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X^x_{t^n_i}}.
$$
We thus obtain
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\sum_{i=1}^{d_n}\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X^x_{t^n_i}}I_{\{\tau_n={t^n_i}\}}\\
&=\mathbb{L}^1\text{-}\lim_{n\rightarrow \infty}\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X^x_{\tau_n}}\\
&=\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X^x_{{\tau}}},
\end{align*}
where the last equality is derived by an argument similar to the proof of Lemma \ref{belong to L1tau lemma 1}, using (\ref{SDE3}) of Lemma \ref{GSDE} in the spatial variable.
\textit{Step 2.} For a general $\tau$, applying Step 1, we have
\begin{equation}
\label{23454464655}
\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)]
=\hat{\mathbb{E}}[\varphi(X_{t_1}^y,\cdots,X_{t_m}^y)]_{y=X^x_{\tau\wedge T}}.
\end{equation}
Since $\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)\in L^{1,\tau+}_G(\Omega)$ by Lemma \ref{belong to L1tau lemma 1}, we can apply Proposition \ref{Etau proposition on L1tau} (iii) to obtain
\begin{align*}
&\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]|]\\
&\ \ \ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]|I_{\{\tau\leq T\}}]\\
&\ \ \ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau\wedge T+t_1}^x,\cdots,X_{\tau\wedge T+t_m}^x)]-\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]|I_{\{\tau> T\}}]\\
&\ \ \ \ \ +\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \leq C_\varphi c({\{\tau>T\}})+\hat{\mathbb{E}}[|\hat{\mathbb{E}}_{(\tau\wedge T)+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)I_{\{\tau\leq T\}}]-\hat{\mathbb{E}}_{\tau+}[\varphi(X_{\tau+t_1}^x,\cdots,X_{\tau+t_m}^x)]|]\\
&\ \ \rightarrow 0, \ \ \ \ \text{as}\ T\rightarrow \infty.
\end{align*}
Thus letting $T\rightarrow \infty$ in (\ref{23454464655}) yields (\ref{strong markov for sde}).
\end{proof}
Next we consider an extension of Theorem \ref{main theorem} in which the cylinder function $\varphi$ is replaced by a (lower semi-)continuous function depending on the whole path of the $G$-SDE. This may be useful in subsequent work.
\begin{theorem}\label{extended SDE strongmarkov1}
Let $\varphi\in C_b(\Omega')$. Then
\begin{equation}\label{12312342421453}\hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}}.
\end{equation}
\end{theorem}
The conditional expectation on the left-hand side of (\ref{12312342421453}) is meaningful by the following two lemmas.
\begin{lemma}\label{belong to L1tau lemma 2}
Assume $\varphi\in C_{b}(\Omega')$ and there exists a constant $\mu>0$ such that for some $T'>0$,
\begin{equation}\label{3245323}|\varphi(\omega^1)-\varphi(\omega^2)|\leq \mu||\omega^1-\omega^2||_{C^n[0,T']}, \ \ \ \ \text{for each}\ \omega^1,\omega^2\in \Omega'.\end{equation}
Then
\begin{equation}
\varphi(X^x_{\tau+\cdot})\in L^{1,\tau+}_G(\Omega).
\end{equation}
\end{lemma}
\begin{remark}
\upshape{ Note that (\ref{3245323}) implies that $\varphi$ only depends on the path of $\omega\in \Omega'$ on $[0,T']$.}
\end{remark}
\begin{proof}
As in Step 2 of the proof of Lemma \ref{belong to L1tau lemma 1}, it suffices to
consider the case $\tau\leq T$ for some $T>0$.
Consider for each $m\in\mathbb{N}$ the function from $\mathbb{R}^{({m+1})\times n}$ to $\Omega'$ defined by
$$
\phi_m(x_0,x_1,x_2,\cdots,x_m)(t)=\sum_{k=0}^{m-1}\frac{(t^m_{k+1}-t)x_{k}+(t-t^m_k)x_{k+1}}{t^m_{k+1}-t^m_k}I_{[t^m_k,t^m_{k+1})}(t)+x_mI_{[t^m_m,\infty)}(t),
$$
where $t^m_k=\frac{kT'}{m},k=0,1,\cdots,m$. Since $\varphi\circ\phi_m$ is a bounded, Lipschitz function from $\mathbb{R}^{({m+1})\times n}$ to $\mathbb{R}$, by Lemma \ref{belong to L1tau lemma 1}, we have
$$
\varphi(\phi_m(X^x_{\tau+t^m_0},X^x_{\tau+t^m_1},X^x_{\tau+t^m_2},\cdots,X^x_{\tau+t_m^m})) \in L^{1,\tau+}_G(\Omega).
$$
We employ the notation in the proof of
Lemma \ref{belong to L1tau lemma 1} and proceed similarly to obtain some constant $C>0$ depending on $\varphi$ such that
\begin{align*}
& \hat{\mathbb{E}}[|\varphi(\phi_m(X^x_{\tau+t^m_0},X^x_{\tau+t^m_1},\cdots,X^x_{\tau+t_m^m}))-\varphi(X^x_{\tau+\cdot})|] \\
&\ \ \leq\hat{\mathbb{E}}[\sup_{0\leq t\leq T}|\varphi(\phi_m(X^x_{t+t^m_0},X^x_{t+t^m_1},\cdots,X^x_{t+t_m^m}))-\varphi(X^x_{t+\cdot})|] \\
&\ \ =\hat{\mathbb{E}}^x_2[\sup_{0\leq t\leq T}|\varphi(\phi_m(B'_{t+t^m_0},B'_{t+t^m_1},\cdots,B'_{t+t_m^m}))-\varphi(B'_{t+\cdot})|]\\
&\ \ \leq C\hat{\mathbb{E}}^x_2[\sup_{ s,s'\in \Lambda_{{m^{-1}}{T'},T+T'}}(|B'_s-B'_{s'}|\wedge 1)]\\
&\ \ \rightarrow 0, \ \ \ \ \text{as}\ m\rightarrow\infty.
\end{align*}
This completes the proof.
\end{proof}
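The interpolation operator $\phi_m$ can be made concrete numerically. The following sketch (an illustration only, not part of the proof) implements the scalar ($n=1$) version of the piecewise linear map above and checks that $\phi_m$, applied to the samples of a fixed continuous path, converges to that path uniformly on $[0,T']$ as the partition is refined; the choice of path ($\sin$) and of $T'$ is purely illustrative.

```python
import math

def phi_m(samples, T_prime):
    # Piecewise linear path through samples (x_0, ..., x_m) at the
    # grid points t_k = k*T'/m, frozen at x_m on [T', infinity);
    # scalar version of the map phi_m in the text.
    m = len(samples) - 1
    def path(t):
        if t >= T_prime:
            return samples[m]
        k = min(int(t * m / T_prime), m - 1)   # segment index, guarded for rounding
        t_k = k * T_prime / m
        t_k1 = (k + 1) * T_prime / m
        return ((t_k1 - t) * samples[k] + (t - t_k) * samples[k + 1]) / (t_k1 - t_k)
    return path

T_prime = 3.0
f = math.sin                                   # a fixed continuous path, standing in for X^x
grid = [i * T_prime / 600 for i in range(601)]  # evaluation grid on [0, T']

def sup_error(m):
    samples = [f(k * T_prime / m) for k in range(m + 1)]
    p = phi_m(samples, T_prime)
    return max(abs(p(t) - f(t)) for t in grid)

# refining the partition improves the uniform approximation
assert sup_error(64) < sup_error(4)
assert sup_error(64) < 1e-2
```

The uniform convergence seen here is the deterministic analogue of the $\mathbb{L}^1$-convergence established in the proof, where the modulus of continuity of the path is controlled in expectation rather than pointwise.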
\begin{lemma}\label{belong to L1tau* lemma}
Let $\varphi\in C_b(\Omega')$. Then
\begin{equation}\varphi(X^x_{\tau+\cdot})\in L^{1,\tau+}_G(\Omega).
\end{equation}
\end{lemma}
\begin{proof}
Let
$$
\varphi_m(\omega):=\inf_{\omega'\in\Omega'}\{\varphi(\omega')+m||\omega-\omega'||_{C^n[0,m]}\},\ \ \ \ \text{for}\ \omega\in \Omega'.
$$
Then by Lemma 3.1 in Chap VI of \cite{P7}, $\varphi_m\in C_{b}(\Omega')$ satisfies
\begin{itemize}
\item [(i)]$|\varphi_m(\omega^1)-\varphi_m(\omega^2)|\leq m||\omega^1-\omega^2||_{C^n[0,m]},\ \text{for}\ \omega^1,\omega^2\in \Omega';$
\item [(ii)] $\varphi_m\uparrow \varphi$;
\item [(iii)] $|\varphi_m|\leq C_\varphi$.
\end{itemize}
Thus we have $\varphi_m(X^x_{\tau+\cdot})\in L^{1,\tau+}_G(\Omega)$ by Lemma \ref{belong to L1tau lemma 2}.
As discussed in the proof of Lemma \ref{belong to L1tau lemma 2}, it suffices to prove the result for $\tau\leq T$.
Let $\hat{\mathbb{E}}^x_2$ and $c^x_2$ be defined as in the proof of
Lemma \ref{belong to L1tau lemma 1}. We have
\begin{align*}
\hat{\mathbb{E}}[|\varphi_m(X^x_{\tau+\cdot})-\varphi(X^x_{\tau+\cdot})|] &\leq\hat{\mathbb{E}}[\sup_{0\leq t\leq T}|\varphi_m(X^x_{t+\cdot})-\varphi(X^x_{t+\cdot})|]\\
&=\hat{\mathbb{E}}^x_2[\sup_{0\leq t\leq T}|\varphi_m(B'_{t+\cdot})-\varphi(B'_{t+\cdot})|].
\end{align*}
Given any $\varepsilon>0$, since $c^x_2$ is tight on $\Omega'$, we can pick a compact set $K\subset \Omega'$ such that $c^x_2(K^c)<\varepsilon$. Note that $K\times [0,T]$ is still compact and $(\omega,t)\mapsto \varphi_m(B'_{t+\cdot})$, $(\omega,t)\mapsto\varphi(B'_{t+\cdot})$ are continuous functions such that $\varphi_m(B'_{t+\cdot})\uparrow\varphi(B'_{t+\cdot})$ pointwise. We have by Dini's theorem $$\varphi_m(B'_{t+\cdot})\uparrow\varphi(B'_{t+\cdot}) \ \ \ \ \text{uniformly on}\ \ K\times [0,T].$$
Hence, we can choose $m$ large enough such that
$$
| \varphi_m(B'_{t+\cdot})-\varphi(B'_{t+\cdot})|\leq \varepsilon \ \ \ \ \text{on}\ \ K\times [0,T].
$$
Then
\begin{align*}
&\hat{\mathbb{E}}^x_2[\sup_{0\leq t\leq T}|\varphi_m(B'_{t+\cdot})-\varphi(B'_{t+\cdot})|]\\
&\ \ \leq \hat{\mathbb{E}}^x_2[\sup_{0\leq t\leq T}|\varphi_m(B'_{t+\cdot})-\varphi(B'_{t+\cdot})|I_K]+\hat{\mathbb{E}}^x_2[\sup_{0\leq t\leq T}|\varphi_m(B'_{t+\cdot})-\varphi(B'_{t+\cdot})|I_{K^c}]\\
&\ \ \leq \varepsilon+2\varepsilon C_\varphi.
\end{align*}
Since $\varepsilon$ can be arbitrarily small, we obtain
$$
\hat{\mathbb{E}}[|\varphi_m(X^x_{\tau+\cdot})-\varphi(X^x_{\tau+\cdot})|] \rightarrow 0,\ \ \ \ \text{as}\ m\rightarrow\infty.
$$
This proves the lemma.
\end{proof}
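The approximating sequence $\varphi_m$ above is the classical Lipschitz (Pasch--Hausdorff) regularization. The following sketch is a one-dimensional stand-in for the path-space construction (the norm $||\cdot||_{C^n[0,m]}$ is replaced by $|\cdot|$ on $\mathbb{R}$, and the target function is an arbitrary bounded continuous example); it verifies the analogues of properties (i)--(iii) on a finite grid.

```python
import math

def envelope(f_vals, xs, m):
    # phi_m(x) = min_y (f(y) + m*|x - y|): the largest m-Lipschitz
    # function dominated by f, computed here on a finite grid.
    return [min(fy + m * abs(x - y) for fy, y in zip(f_vals, xs)) for x in xs]

xs = [i / 100 for i in range(201)]        # grid on [0, 2]
f = [abs(math.sin(5 * x)) for x in xs]    # a bounded continuous target, |f| <= 1
env2 = envelope(f, xs, 2)
env8 = envelope(f, xs, 8)

dx = xs[1] - xs[0]
# (i) the envelope with parameter m is m-Lipschitz on the grid
assert all(abs(a - b) <= 2 * dx + 1e-9 for a, b in zip(env2, env2[1:]))
# (ii) monotone in m and dominated by f (take y = x in the minimum)
assert all(e2 <= e8 + 1e-12 for e2, e8 in zip(env2, env8))
assert all(e8 <= fv + 1e-12 for e8, fv in zip(env8, f))
# (iii) the envelope obeys the same bound as f
assert all(abs(e) <= 1.0 for e in env8)
```

On a compact set the monotone convergence $\varphi_m\uparrow\varphi$ upgrades to uniform convergence by Dini's theorem, which is exactly how the proof above handles the compact set supplied by tightness.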
\begin{proof}[Proof of Theorem \ref{extended SDE strongmarkov1}]
\textit{Step 1.} Suppose $\tau\leq T$ for some $T>0$ and $\varphi\in C_{b}(\Omega')$ such that (\ref{3245323}) holds for some $T'>0$.
For each $m\in\mathbb{N}$, we define $\phi_m$ as in the proof of Lemma \ref{belong to L1tau lemma 2}.
Then Theorem \ref{main theorem} gives
\begin{equation}\label{789000}
\hat{\mathbb{E}}_{\tau+}[\varphi(\phi_m(X^x_{\tau+t^m_0},X^x_{\tau+t^m_1},X^x_{\tau+t^m_2},\cdots,X^x_{\tau+t_m^m}))]
=\hat{\mathbb{E}}[\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},X^y_{t^m_2},\cdots,X^y_{t_m^m}))]_{y=X^x_{\tau}}.
\end{equation}
According to the proof of Lemma \ref{belong to L1tau lemma 2},
$$
\varphi(\phi_m(X^x_{\tau+t^m_0},X^x_{\tau+t^m_1},\cdots,X^x_{\tau+t_m^m}))\rightarrow \varphi(X^x_{\tau+\cdot}) \ \ \ \ \text{in} \ \mathbb{L}^1,\ \text{as}\ m\rightarrow\infty.
$$
Consequently,
\begin{equation*}\label{88888765656}
\hat{\mathbb{E}}_{\tau+}[\varphi(\phi_m(X^x_{\tau+t^m_0},X^x_{\tau+t^m_1},\cdots,X^x_{\tau+t_m^m}))]\rightarrow \hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{\tau+\cdot})]\ \ \ \ \text{in} \ \mathbb{L}^1,\ \text{as}\ m\rightarrow\infty.
\end{equation*}
It remains to consider the right-hand side of (\ref{789000}).
For any fixed $R>0$, by Kolmogorov's criterion for tightness, the family $\mathcal{P}_R:=\bigcup_{y\in \overline{B_R(0)}}\{P\circ (X_{\cdot}^y)^{-1}:P\in\mathcal{P}\}$ is tight on $\Omega'$, where $B_R(0)$ is an open ball with center $0$ and radius $R$ in $\mathbb{R}^n$ and $\overline{B_R(0)}$ is its closure. We denote the corresponding sublinear expectation by $\hat{\mathbb{E}}^R_2:=\sup_{P\in\mathcal{P},y\in \overline{B_R(0)} }E_{P\circ (X_{\cdot}^y)^{-1}}$.
We may apply a similar analysis as in the proof of Lemma \ref{belong to L1tau lemma 2} to obtain for some constant $C$ depending on $\varphi$
\begin{equation*}
\begin{split}
& \hat{\mathbb{E}}[|\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},\cdots,X^y_{t^m_m}))-\varphi(X^y_\cdot)|] \\
& \ \ =\hat{\mathbb{E}}^y_2[|\varphi(\phi_m(B'_{t^m_0},B'_{t^m_1},\cdots,B'_{t^m_m}))-\varphi(B'_\cdot)|]\\
& \ \ \leq \hat{\mathbb{E}}^R_2[|\varphi(\phi_m(B'_{t^m_0},B'_{t^m_1},\cdots,B'_{t^m_m}))-\varphi(B'_\cdot)|]\\
&\ \ \leq C\hat{\mathbb{E}}^R_2[\sup_{ s,s'\in \Lambda_{{m^{-1}}{T'},T'}}(|B'_s-B'_{s'}|\wedge 1)]\\
&\ \ \rightarrow 0, \ \ \ \ \text{as} \ m\rightarrow \infty, \ \text{for any}\ y \in \overline{B_R(0)},
\end{split}
\end{equation*}
where $\hat{\mathbb{E}}^y_2:=\sup_{P\in\mathcal{P}}E_{P\circ (X_{\cdot}^y)^{-1}}$.
That is,
\begin{equation}\label{3324325343}
\hat{\mathbb{E}}[|\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},\cdots,X^y_{t^m_m}))-\varphi(X^y_\cdot)|]\rightarrow 0, \ \ \ \ \text{as}\ m\rightarrow \infty, \ \text{uniformly for }\ y\in \overline{B_R(0)}.
\end{equation}
For any fixed $\varepsilon>0$, we can first choose $R$ large enough such that by Lemma \ref{GSDE} (\ref{SDE sup control})
$$
c(\{|X^x_\tau|>R\})\leq \frac{\hat{\mathbb{E}}[|X^x_\tau|]}{R}\leq \frac{\hat{\mathbb{E}}[\sup_{t\in [0,T]}|X^x_t|]}{R}\leq \varepsilon
$$
and then choose $m$ large enough such that by (\ref{3324325343})
$$
\hat{\mathbb{E}}[|\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},\cdots,X^y_{t^m_m}))-\varphi(X^y_\cdot)|]\leq \varepsilon,\ \ \ \ \text{for all}\ y\in \overline{B_R(0)}.
$$
Thus we have
\begin{align*}
&\hat{\mathbb{E}}[|\hat{\mathbb{E}}[\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},X^y_{t^m_2},\cdots,X^y_{t^m_m}))]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}}|]\\
&\ \ \leq \hat{\mathbb{E}}[|\hat{\mathbb{E}}[\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},X^y_{t^m_2},\cdots,X^y_{t^m_m}))]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}}|I_{\{|X^x_\tau|\leq R\}}]+2C_\varphi c(\{|X^x_\tau|>R\})\\
&\ \ \leq \varepsilon+2C_\varphi\varepsilon,
\end{align*}
which implies
$$
\hat{\mathbb{E}}[|\hat{\mathbb{E}}[\varphi(\phi_m(X^y_{t^m_0},X^y_{t^m_1},X^y_{t^m_2},\cdots,X^y_{t^m_m}))]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}}|]\rightarrow 0, \ \ \ \ \text{as} \ m\rightarrow\infty.
$$
Therefore, letting $m\rightarrow\infty$ in (\ref{789000}), we obtain
$$\hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}}.$$
\textit{Step 2.}
Assume $\tau\leq T$ and $\varphi\in C_b(\Omega')$. Define $\varphi_m$ as in the proof of Lemma \ref{belong to L1tau* lemma}.
According to Step 1,
\begin{equation}\label{444444}
\hat{\mathbb{E}}_{\tau+}[\varphi_m(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ \varphi_m(X^y_\cdot)]_{y=X^x_{\tau}}.
\end{equation}
Letting $m\rightarrow \infty$, from the proof of Lemma \ref{belong to L1tau* lemma}, we obtain that
$$
\hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ \varphi(X^y_\cdot)]_{y=X^x_{\tau}},
$$
where the convergence of the right-hand side is obtained by an analysis similar to that in Step 1 and in the proof of Lemma \ref{belong to L1tau* lemma}.
\textit{Step 3.} We proceed as in the last paragraph of the proof of Theorem \ref{main theorem} to obtain the result for the general case in which $\tau$ is an optional time and $\varphi\in C_{b}(\Omega')$.
\end{proof}
\begin{corollary}\label{strong markov for lower semicontinuous function}
Let $\varphi$ be lower semi-continuous on $\Omega'$ and bounded from below, i.e., $\varphi\geq c$ for some constant $c$. Then $\varphi(X^x_{\tau+\cdot}) \in L^{1,\tau+,*}_G(\Omega)$ and
$$\hat{\mathbb{E}}_{\tau+}[ \varphi(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ \varphi(X^y_{\cdot})]_{y=X^x_{\tau}}.$$
\end{corollary}
\begin{proof}
We pick a sequence $\varphi_m\in C_b(\Omega')$ such that $\varphi_m\uparrow \varphi$. Then the conclusion follows from Theorem \ref{extended SDE strongmarkov1}, Lemma \ref{upward mct for rv} and Proposition \ref{Etau proposition on L1tau*} (iv).
\end{proof}
Taking $n=d$, $x= 0$, $b=h_{ij}=0$ and $\sigma:=(\sigma_1,\cdots,\sigma_d)=I_{d\times d}$ in Corollary \ref{strong markov for lower semicontinuous function}, we immediately obtain the strong Markov property for $G$-Brownian motion.
\begin{corollary}\label{extended BM strongmarkov1}
Let $\varphi$ be lower semi-continuous, bounded from below on $\Omega$ and $\tau$ be an optional time. Then
\begin{equation}\label{43543}\hat{\mathbb{E}}_{\tau+}[\varphi(B_{\tau+\cdot})]=\hat{\mathbb{E}}[\varphi(B_\cdot^y)]_{y=B_{\tau}},
\end{equation}
where $B_t^y:=y+B_t,\ t\geq 0$ for $y\in\mathbb{R}^d$. In particular,
for each $\phi\in C_{b.Lip}(\mathbb{R}^{m\times d})$ and $0\leq t_1\leq \cdots\leq t_m<\infty$,
\begin{equation*}
\hat{\mathbb{E}}_{\tau+}[\phi(B_{\tau+t_1},\cdots,B_{\tau+t_m})]=\hat{\mathbb{E}}[\phi(B^y_{t_1},\cdots,B^y_{t_m})]_{y=B_{\tau}}.
\end{equation*}
\end{corollary}
The following result says that $G$-Brownian motion starts afresh at an optional time, i.e., $\overline{B}:=(B_{\tau+t}-B_{\tau})_{t\geq 0}$ is still a $G$-Brownian motion.
\begin{corollary}\label{refresh of G-Brownian motion}
Let $\tau$ and $\varphi$ be as in Corollary \ref{extended BM strongmarkov1}. Then
\begin{equation}\hat{\mathbb{E}}_{\tau+}[\varphi(B_{\tau+\cdot}-B_{\tau})]=\hat{\mathbb{E}}[\varphi(B_{\tau+\cdot}-B_{\tau})]=\hat{\mathbb{E}}[\varphi(B_\cdot)].
\end{equation}
In particular, for each $\phi\in C_{b.Lip}(\mathbb{R}^{m\times d})$, $0\leq t_1\leq\cdots\leq t_m<+\infty$, $m\in\mathbb{N}$, we have
\begin{align*}
\hat{\mathbb{E}}_{\tau+}[\phi(B_{\tau+t_1}-B_{\tau},\cdots,B_{\tau+t_m}-B_{\tau})]=\hat{\mathbb{E}}[\phi(B_{\tau+t_1}-B_{\tau},\cdots,B_{\tau+t_m}-B_{\tau})]=\hat{\mathbb{E}}[\phi(B_{t_1},\cdots,B_{t_m})].
\end{align*}
\end{corollary}
\begin{proof}
We only need to prove the first assertion, of which the second is a special case. Setting ${\tilde{\varphi}}(\omega):=\varphi((\omega_t-\omega_0)_{t\geq 0})$ in (\ref{43543}), and noting that $\tilde{\varphi}(B^y_\cdot)=\varphi(B_\cdot)$ does not depend on $y$, we have
$$
\hat{\mathbb{E}}_{\tau+}[\varphi(B_{\tau+\cdot}-B_{\tau})]=\hat{\mathbb{E}}[\varphi(B_{\cdot})].
$$
Taking expectation on both sides, by
Proposition \ref{Etau welldefined}, we then obtain
$$ \hat{\mathbb{E}}[\varphi(B_{\tau+\cdot}-B_{\tau})]=\hat{\mathbb{E}}[\varphi(B_{\cdot})].$$
\end{proof}
\section{An application}
Let $(B_t)_{t\geq 0}$ be a 1-dimensional $G$-Brownian motion such that $\underline{\sigma}^2:=-\mathbb{\hat{E}}[-B^2_1]>0$ (non-degeneracy). Let $a\in\mathbb{R}$ be given. For each $\omega\in\Omega$, define the level set
\begin{equation}
\mathcal{L}_\omega(a):=\{t\geq 0: B_t(\omega)=a\}.
\end{equation}
It is proved in \cite{WZ} that $\mathcal{L}_\omega(a)$ is q.s. closed and has zero Lebesgue measure. Using the strong Markov property for $G$-Brownian motion, we can obtain the following theorem.
\begin{theorem}\label{no isolate point theorem for G-Bm}For q.s. $\omega\in\Omega$, the level set $\mathcal{L}_\omega(a)$ has no isolated point in $[0,\infty)$.
\end{theorem}
To prove Theorem \ref{no isolate point theorem for G-Bm}, we need the following two lemmas.
\begin{lemma}\label{Bm change sign lemma}
For q.s. $\omega$, $G$-Brownian motion $(B_t)_{t\geq 0}$ changes sign infinitely many times in $[0,\varepsilon]$, for any $\varepsilon>0$.
\end{lemma}
\begin{proof}
Define $\tau_1:=\inf\{t> 0:B_t>0\}.$ Then $\tau_1$ is an optional time by Lemma 7.6 in Chap 7 of \cite{Ka}. Let $P\in\mathcal{P}$ and $t\geq 0$ be given. Since $B$ is a martingale,
we can apply the classical optional sampling theorem to obtain
${E}_P[-B_{\tau_1\wedge t}]=0$. Thus $\mathbb{\hat{E}}[-B_{\tau_1\wedge t}]=0$. Noting that $-B_{\tau_1\wedge t}\geq 0$, we then have $-B_{\tau_1 \wedge t}=0$ q.s., i.e., $B_{\tau_1 \wedge t}=0\ \text{q.s.}$ A similar analysis for $-B$ shows that $B_{\tau_2 \wedge t}=0$ q.s., where $\tau_2:=\inf\{t>0:B_t<0\}$. Therefore, $B_{\tau_0 \wedge t}=0$ q.s., where $\tau_0:=\tau_1\vee\tau_2$. Since $t\geq 0$ is arbitrary and $B$ has continuous paths, this implies $B_{\tau_0 \wedge t}=0\ \text{for each}\ t\geq 0,\ \text{q.s.}$
Applying Proposition 1.13 in Chap IV of \cite{Marc} under each $P\in\mathcal{P}$, we then have $\langle B\rangle_{\tau_0 \wedge t}=0\ \text{for each}\ t\geq 0,\ \text{q.s.}$ On the other hand, by Corollary 5.4 in Chap III of \cite{P7}, ${\langle B\rangle_{t+s}-\langle B\rangle_{t}}\geq \underline{\sigma}^2s>0$ for each $s> 0$, so we must have $\tau_0=0$ q.s. Hence,
$\tau_1=0$ and $\tau_2=0,\ \text{q.s.},$ which imply the desired result.
\end{proof}
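Lemma \ref{Bm change sign lemma} can be visualized already in the classical case. The following simulation (an illustration under a fixed seed, for a classical random walk rather than a $G$-Brownian motion, and not a proof) counts sign changes of a random walk started at $0$. The monotonicity assertion is deterministic: a subsampled path cannot have more sign changes than the path it was sampled from, since every coarse sign change forces at least one fine sign change in between, so refining the grid can only reveal more changes.

```python
import random

random.seed(0)

# one fine random-walk path: S_k = X_1 + ... + X_k, standing in for a
# Brownian path sampled on a fine grid (classical case, for illustration)
n_fine = 20000
acc = 0.0
S = []
for _ in range(n_fine):
    acc += random.gauss(0.0, 1.0)
    S.append(acc)

def sign_changes(path):
    # count adjacent flips of the classification s > 0
    signs = [s > 0 for s in path]
    return sum(a != b for a, b in zip(signs, signs[1:]))

coarse = S[::10]  # the same path subsampled on a 10x coarser grid

# refinement never destroys a sign change
assert sign_changes(S) >= sign_changes(coarse)
```

In the limit of infinitely fine grids this monotone accumulation of sign changes near $t=0$ is exactly the statement of the lemma: $\tau_1=\tau_2=0$ q.s. forces infinitely many sign changes on every $[0,\varepsilon]$.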
\begin{lemma}\label{Bm unbound lemma}
We have
\begin{equation}
\sup_{0\leq t<\infty}B_t=+\infty \ \ \text{and}\ \ \inf_{0\leq t<\infty}B_t=-\infty, \ \ \ \ \text{q.s.}
\end{equation}
\end{lemma}
\begin{proof}
We only prove the first equality, from which the second one follows by the symmetry of $G$-Brownian motion.
Define $\tau_t=\inf\{s\geq 0:\langle B\rangle_s>t\}$. Under each $P\in\mathcal{P}$, $B$ is a martingale. Then by Theorem 1.6 in Chap V of \cite{Marc}, $(B_{\tau_t})_{t\geq 0}$ is a classical Brownian motion. Applying Lemma 3.6 in Chap I of \cite{Rog}, we have
$$
\sup_{0\leq t<\infty}B_{\tau_t}=+\infty\ \ \ \ P\text{-a.s.}
$$
Since $\{\tau_t:t\in [0,\infty)\}=[0,\infty),$ we then obtain
$$
\sup_{0\leq t<\infty}B_{t}=+\infty\ \ \ \ P\text{-a.s.}
$$
Therefore,
$$
\sup_{0\leq t<\infty}B_{t}=+\infty\ \ \ \ \text{q.s.}
$$
\end{proof}
\begin{remark}\label{level set unbound remark}
\upshape{This lemma implies that $\mathcal{L}_\omega(a)$ is q.s. unbounded.}
\end{remark}
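The time change $\tau_t=\inf\{s\geq 0:\langle B\rangle_s>t\}$ used in the proof of Lemma \ref{Bm unbound lemma} is the generalized right-continuous inverse of the quadratic variation. The following sketch computes it on a grid for one illustrative choice of a strictly increasing quadratic-variation path (the lower bound $\underline{\sigma}^2=0.5$ and the particular path are assumptions for the illustration, not taken from the text) and checks the defining property: $\langle B\rangle$ exceeds the level $t$ just after $\tau_t$ and not before.

```python
def inverse_time_change(qv, dt, t):
    # tau_t = inf{s >= 0 : <B>_s > t}, computed on a grid where
    # qv[i] approximates <B>_{i*dt}; qv must be nondecreasing.
    for i, v in enumerate(qv):
        if v > t:
            return i * dt
    return float("inf")

dt = 0.01
sigma_low2 = 0.5  # illustrative lower variance bound, playing the role of sigma_underline^2
# a strictly increasing quadratic variation with slope >= sigma_low2,
# mimicking <B>_{t+s} - <B>_t >= sigma_low2 * s
qv = [sigma_low2 * (i * dt) + 0.2 * (i * dt) ** 2 for i in range(2001)]

tau = inverse_time_change(qv, dt, 1.0)
i = round(tau / dt)
# tau_t is finite, and <B> crosses the level t exactly at tau_t
assert tau < float("inf")
assert qv[i] > 1.0 and (i == 0 or qv[i - 1] <= 1.0)
```

Because $\langle B\rangle_s\geq \underline{\sigma}^2 s\to\infty$, the inverse $\tau_t$ is finite for every $t$ and its range sweeps all of $[0,\infty)$, which is what lets the proof transfer the unboundedness of the time-changed classical Brownian motion back to $B$ itself.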
\begin{proof}[Proof of Theorem \ref{no isolate point theorem for G-Bm}]
Let $t\geq 0$ and define the optional time
$$
\tau_t=\inf\{s>t:B_s=a\}.
$$
By Lemma \ref{Bm unbound lemma} (see also Remark \ref{level set unbound remark}), $\tau_t$ is q.s. finite.
Now we are going to show that
\begin{equation}
\label{332432353532}
\tau_{\tau_t}=\inf\{{s>\tau_t}:B_s=a\}={\tau_t}\ \ \ \ \text{q.s.}
\end{equation}
For any $n\geq 1$, since $\tau_t\wedge n$ satisfies (H3), Corollary \ref{refresh of G-Brownian motion} implies that $(B_{\tau_t\wedge n+s}-B_{\tau_t\wedge n})_{s\geq 0}$ is still a $G$-Brownian motion.
Hence, by Lemma \ref{Bm change sign lemma}, there exists a set $\Omega_n\subset \Omega$ such that $c(\Omega_n^c)=0$ and on $\Omega_n$, $(B_{\tau_t\wedge n+s}-B_{\tau_t\wedge n})_{s\geq 0}$ changes its sign infinitely many times on any $[0,\varepsilon]$.
Let
$$
\Omega_0:=\bigcup_{n=1}^\infty(\Omega_n\cap\{\tau_t\leq n\}).
$$
For any $P\in\mathcal{P}$, we have
$$P(\Omega_0^c)=P(\bigcap_{n=1}^\infty (\Omega_n^c\cup\{\tau_t> n\}))\leq P(\Omega_n^c\cup\{\tau_t> n\})=P(\{\tau_t> n\})\rightarrow P(\{\tau_t=\infty\})=0, \ \ \ \ \text{as} \ n\rightarrow\infty.
$$
Thus $$
c(\Omega_0^c)=0.
$$
For any fixed $\omega\in \Omega_0$, there exists an $n$ such that $\omega\in \Omega_n\cap\{\tau_t\leq n\}$. Since $\tau_t(\omega)\wedge n=\tau_t(\omega)$, the path $((B_{\tau_t+s}-B_{\tau_t})(\omega))_{s\geq 0}$ changes its sign infinitely many times on any $[0,\varepsilon]$. Therefore,
$$\tau_{\tau_t}(\omega)={\tau_t}(\omega),$$
which proves (\ref{332432353532}).
Note that, for any fixed $p<q$,
$$\Lambda_{p,q}:=\{\omega\in\Omega: \ \text{there is exactly one}\ s\in (p,q) \ \text{such that}\ B_s(\omega)=a\}\subset \{\omega\in\Omega:\tau_p<q, \tau_{\tau_p}\geq q \}.$$
By (\ref{332432353532}), we must have $c(\Lambda_{p,q})=0$. Thus the set $$\{\omega\in \Omega:\ \mathcal{L}_\omega(a)\ \text{has an isolated point}\}=\bigcup_{0\leq p<q;\ p,q\in \mathbb{Q}}\Lambda_{p,q}$$
has zero capacity.
\end{proof}