This section is devoted to the proof of Theorem 3.1. Let \(\nu \) be a nonzero invariant measure of Y. By Proposition 2.4, we have \(\nu ((-\infty , x])< \infty \) for every \(x\in \mathbb {R}\).
We state a first lemma which rephrases the result of Theorem 3.1.
Lemma 4.1 Let \(\xi >3\) and \(c_\nu >0\). Then, the existence of \(d_\nu \in \mathbb {R}\) such that equation (3.1) holds is equivalent to the fact that as \(x\rightarrow \infty \), uniformly in \(x\le y \le 2x\), we have
$$\begin{aligned} \nu ((x, y])= c_\nu (y-x)+ O\big (x^{-(\xi -3)}\big )\,. \end{aligned}$$
(4.1)
Similarly, let \(c_\nu >0\); then the existence of \(d_\nu \in \mathbb {R}\) and \(\delta >0\) such that equation (3.1) holds with error term replaced by \(O\big (e^{-\delta x}\big )\) is equivalent to the existence of \(\delta >0\) such that, as \(x\rightarrow \infty \), uniformly in \(x\le y \le 2x\):
$$\begin{aligned} \nu ((x, y])= c_\nu (y-x)+ O\left( \exp \big (-\delta x \big )\right) . \end{aligned}$$
(4.2)
Proof We prove the first statement. Writing \(\nu ((x, y])=\nu ((-\infty , y])-\nu ((-\infty , x])\), it is clear that (3.1) implies (4.1). Conversely, assume that (4.1) holds. For any \(x\ge 1\), consider the (unique) integer n such that \(2^n < x \le 2^{n+1}\) and write
$$\begin{aligned} \begin{aligned} \nu ((-\infty , x])- c_\nu x =&\; \nu \big ((-\infty , 1]\big )-c_\nu \\&+ \sum _{i=0}^{n-1} \Big ( \nu \big (\big (2^i, 2^{i+1}\big ]\big )- c_\nu \big (2^{i+1}-2^i\big )\Big )\\&+\nu \big (\big (2^n, x\big ]\big )- c_\nu \big (x-2^n\big )\,, \end{aligned} \end{aligned}$$
(4.3)
and note that, due to (4.1), the series \(\sum _{i\ge 0} \left( \nu ((2^i, 2^{i+1}])- c_\nu (2^{i+1}-2^i) \right) \) converges and its partial remainder satisfies \(\sum _{i\ge n} \left( \nu ((2^i, 2^{i+1}])- c_\nu (2^{i+1}-2^i) \right) = O(2^{-n(\xi -3)})\). By (4.1), the term \(\nu ((2^n, x])- c_\nu (x-2^n)\) is also a \(O(2^{-n(\xi -3)})\). Using finally that \(O(2^{-n(\xi -3)})=O(x^{-(\xi -3)})\), we get that (3.1) holds with \(d_\nu := \nu ((-\infty , 1])-c_\nu + \sum _{i=0}^{\infty } \left( \nu ((2^i, 2^{i+1}])- c_\nu (2^{i+1}-2^i) \right) \).
The proof is similar in the case of error term \(O\big (e^{-\delta x}\big )\). \(\square \)
Our focus will now be proving (4.1). There will be four preliminary steps:
(1) By elaborating on the well-known representation of the invariant measure of a recurrent Markov chain (see, for example, [17, Sec. 6 of Ch. 3]), we show that \(\nu ((x,y])\) can be represented by using the excursion occupation times \(V_{a}(b, (x, y])\) of Y, see equation (4.5). In Lemma 4.2, we give an expression of \(V_{a}(b, (x, y])\) in terms of the strictly descending ladder epochs of the random walk S and a crucial function F that links \(V_{a}(b, (x, y])\) and the process Y observed at the ladder epochs of S (see (4.7)).
(2) By considering the weak ascending ladder epochs of S, we rewrite F in terms of a new process \((J_k(\theta ))_{k\ge 0}\), defined in (4.18), which is an increasing Markov chain.
(3) We determine the asymptotic behavior of F, see Proposition 4.6: this is the main technical estimate in the proof.
(4) We perform some technical estimates that are central in controlling the error arising when replacing F by its asymptotic behavior in the representation formula for the cumulative function of \(\nu \).
We recall that we work under the assumptions of Theorem 3.1, that is, the criticality condition (1.6), (H-1) of Theorem 1.2, (T-1) or (T-1'), and (T-2), all explicitly stated in Theorem 3.1.
4.1 Step 1: Representation of the Invariant Measure

First, we observe that Y can be explicitly solved. By induction, we see that when \(Y_0=\theta \), for \(n=0, 1,2,\ldots \)
$$\begin{aligned} Y_n\,=\, \log \left( e^{\theta +S_n}+ \sum _{i=0}^{n-1} e^{S_n-S_i}\right) = \theta + S_n + \log \left( 1+ e^{-\theta } \sum _{i=0}^{n-1} e^{-S_i}\right) \,, \end{aligned}$$
(4.4)
where, as before, \(S_n = \sum _{i=1}^n \mathtt{X}_i\) and we adopt the convention \(\sum _\emptyset :=0\).
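As a quick sanity check of (4.4), the sketch below iterates the recursion \(Y_n = \mathtt{X}_n + \log (1+e^{Y_{n-1}})\) (the form of the recursion, taken from the earlier sections, and the increment values used here are assumptions of this illustration) and compares the result with the closed form:

```python
import math

def iterate_Y(theta, steps):
    # direct iteration of the assumed recursion Y_n = X_n + log(1 + e^{Y_{n-1}})
    y, out = theta, [theta]
    for x in steps:
        y = x + math.log1p(math.exp(y))
        out.append(y)
    return out

def closed_form_Y(theta, steps, n):
    # formula (4.4): Y_n = log( e^{theta+S_n} + sum_{i=0}^{n-1} e^{S_n - S_i} )
    S = [0.0]
    for x in steps:
        S.append(S[-1] + x)
    return math.log(math.exp(theta + S[n])
                    + sum(math.exp(S[n] - S[i]) for i in range(n)))

steps = [0.3, -1.1, 0.4, -0.2, 0.9, -0.5]   # arbitrary increments
theta = 0.7
Y_iter = iterate_Y(theta, steps)
```

Both computations agree at every step, which is exactly the induction behind (4.4).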
Since \(\nu \) is nonzero, let us fix an arbitrary \(a >0\) such that \(\nu ((-\infty , a]) >0\) and denote by \(\tau _Y(a):=\inf \{n\ge 1:\, Y_n \le a\}\) the first return time of Y to \((-\infty , a]\). By applying [17, Theorem 3.6.5] to the recurrent Markov chain Y, and by applying the Markov property at time 1, we have that for any Borel set \(I\subset (a, \infty )\),
$$\begin{aligned} \nu (I)&= \int _{(-\infty , a]} \mathbb {E}_{v} \left[ \sum _{n=1}^{\tau _Y(a)} \textbf{1}_{\{Y_n \in I\}}\right] \nu (\textrm{d}v) \nonumber \\&= \int _{(-\infty , a]} \mathbb {E}_{v}\big [ V_{a}(Y_1, I) \textbf{1}_{\{Y_1 > a\}} \big ]\nu (\textrm{d}v) , \end{aligned}$$
(4.5)
where, for \(b>a\) and any Borel set \(I\subset (a, \infty )\), we set
$$\begin{aligned} V_{a}(b, I):=\, \mathbb {E}_b\left[ \sum _{n=0}^{\tau _Y(a)-1}\textbf{1}_{\{ Y_n \in I \}}\right] \, \,=\, \sum _{n=0}^\infty \mathbb {P}_b\Big ( \tau _Y(a)> n, Y_n \in I\Big ). \end{aligned}$$
(4.6)
Later on, we will take I of the form (x, y], with \(a<x\le y \le 2x\), but let us remain general for now.
We are now going to give a representation of \(V_a(b, I)\) using the fluctuation theory for the one-dimensional random walk S. Consider the strictly descending ladder times \(\varrho _0:=0\) and, for \(j\ge 1\)
$$\begin{aligned} \varrho _j:=\,\min \left\{ n> \varrho _{j-1}:\, S_n < S_{\varrho _{j-1}}\right\} . \end{aligned}$$
(4.7)
Furthermore, consider the process \(\Theta =(\Theta _j)_{j\ge 0}\) defined by \(\Theta _j:= Y_{\varrho _j}\). For \(j\ge 1\), let
$$\begin{aligned} \eta _j:=\, \sum _{n=\varrho _{j-1}}^{\varrho _j-1} e^{S_{\varrho _j}-S_n } \qquad \text{ and } \qquad \Delta _j\,=\, S_{\varrho _{j-1}}-S_{\varrho _j} >0. \end{aligned}$$
(4.8)
By the strong Markov property of S, \(((\eta _k, \Delta _k))_{k\ge 1}\) is an IID sequence adapted to the filtration \((\mathcal {F}_{\varrho _k})_{k\ge 1}\), where \(\mathcal {F}_n:= \sigma (\mathtt{X}_1, \ldots , \mathtt{X}_n)\) for \(n\ge 0\). Using (4.4), we have that
$$\begin{aligned} \Theta _{j+1}= \log \Big ( e^{\Theta _j} e^{-\Delta _{j+1}} + \eta _{j+1} \Big ), \end{aligned}$$
(4.9)
which, in view of the fact that \(((\eta _j, \Delta _j))_{j\ge 1}\) is an IID sequence, implies that \((\Theta _j)_{j\ge 0}\) is a Markov chain. For completeness, let us mention that if \(\Theta _0= Y_0=b\), then \(\Theta _j\) is given by:
$$\begin{aligned} \Theta _j= Y_{\varrho _j} = \log \left( \, e^{b+S_{\varrho _j}} + \sum _{k=1}^j e^{S_{\varrho _j}- S_{\varrho _k}} \eta _k\right) \,. \end{aligned}$$
(4.10)
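The ladder-epoch bookkeeping can be made concrete on a short deterministic path. The sketch below uses arbitrary increments and assumes, for this illustration only, the recursion \(Y_n=\mathtt{X}_n+\log (1+e^{Y_{n-1}})\) for Y; it then checks the recursion (4.9) for \(\Theta _j=Y_{\varrho _j}\) pathwise.

```python
import math

def descending_ladder_epochs(S):
    # varrho_0 = 0; varrho_j = min{ n > varrho_{j-1} : S_n < S_{varrho_{j-1}} }
    rho = [0]
    for n in range(1, len(S)):
        if S[n] < S[rho[-1]]:
            rho.append(n)
    return rho

steps = [-0.4, 0.8, -1.5, 0.2, -0.6, 1.0, -2.0, 0.3]   # arbitrary increments
S = [0.0]
for x in steps:
    S.append(S[-1] + x)

b = 1.3
Y = [b]
for x in steps:
    Y.append(x + math.log1p(math.exp(Y[-1])))

rho = descending_ladder_epochs(S)
Theta = [Y[r] for r in rho]

# eta_j and Delta_j from (4.8), then the recursion (4.9):
# Theta_{j+1} = log( e^{Theta_j - Delta_{j+1}} + eta_{j+1} )
ok = True
for j in range(len(rho) - 1):
    r0, r1 = rho[j], rho[j + 1]
    eta = sum(math.exp(S[r1] - S[n]) for n in range(r0, r1))
    delta = S[r0] - S[r1]
    ok = ok and abs(Theta[j + 1] - math.log(math.exp(Theta[j] - delta) + eta)) < 1e-9
```

The check passes for every ladder step, mirroring the block decomposition used to derive (4.10).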
Lemma 4.2 For \(b>a\), for any Borel set \(I\subset (a, \infty )\),
$$\begin{aligned} V_{a}(b, I)= \mathbb {E}_b \left[ \sum _{j=0}^{\tau _\Theta (a)-1} F(\Theta _j, I) \right] = \sum _{j=0}^\infty \mathbb {E}_b\left[ \textbf{1}_{\{\tau _\Theta (a)>j\}} F(\Theta _j, I)\right] , \end{aligned}$$
(4.11)
where \(\tau _\Theta (a)\) is the first return time of \(\Theta \) to \((-\infty , a]\):
$$\begin{aligned} \tau _\Theta (a):=\inf \{ j\ge 1:\, \Theta _j \le a\}, \end{aligned}$$
(4.12)
and \(F(\theta , I)\) is defined, for any \(\theta \in \mathbb {R}\) and any Borel set \(I\subset \mathbb {R}\), by:
$$\begin{aligned} F(\theta , I):= \mathbb {E}_\theta \left[ \sum _{n=0}^{\varrho _1-1} \textbf{1}_{\{Y_n\in I\}}\right] = \sum _{n=0}^\infty \mathbb {P}_\theta (\varrho _1>n, Y_n\in I). \end{aligned}$$
(4.13)
Proof We use the definition of Y to see that \(Y_n \ge Y_{\varrho _j} + S_n-S_{\varrho _j} \ge Y_{\varrho _j} =\Theta _j\) for \(\varrho _j \le n < \varrho _{j+1}\). This means that \(\tau _Y(a)\) must in fact be equal to \(\varrho _{\tau _\Theta (a)}\), with \(\tau _\Theta (a)\) defined in (4.12). Hence,
$$\begin{aligned} \begin{aligned} V_{a}(b, I)&= \mathbb {E}_b \left[ \sum _{j=0}^{\tau _\Theta (a)-1} \sum _{n=\varrho _j}^{\varrho _{j+1}-1} \textbf{1}_{\{Y_n \in I\}}\right] \\&= \sum _{j=0}^\infty \mathbb {E}_b \left[ \textbf{1}_{\{\tau _\Theta (a)>j\}} \sum _{n=\varrho _j}^{\varrho _{j+1}-1} \textbf{1}_{\{Y_n \in I\}}\right] . \end{aligned} \end{aligned}$$
(4.14)
Observe that the event \(\{\tau _\Theta (a)>j\}\) is measurable with respect to \(\mathcal {F}_{\varrho _j}\), and that, due to the Markov property of Y, the conditional expectation of \(\sum _{n=\varrho _j}^{\varrho _{j+1}-1} \textbf{1}_{\{Y_n\in I\}}\) with respect to \(\mathcal {F}_{\varrho _j}\) is equal to \(F(Y_{\varrho _j}, I)\). We obtain (4.11), and the proof of Lemma 4.2 is complete. \(\square \)
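The identity \(\tau _Y(a)=\varrho _{\tau _\Theta (a)}\) used at the start of the proof is a pathwise statement, so it can be checked directly on a toy path (arbitrary increments, and the recursion for Y assumed as in the illustrations above):

```python
import math

steps = [-0.4, 0.8, -1.5, 0.2, -0.6, 1.0, -2.0, 0.3]   # arbitrary increments
a, b = 0.5, 1.3

S = [0.0]
for x in steps:
    S.append(S[-1] + x)
Y = [b]
for x in steps:
    Y.append(x + math.log1p(math.exp(Y[-1])))

# strictly descending ladder epochs of S, and Theta_j = Y at those epochs
rho = [0]
for n in range(1, len(S)):
    if S[n] < S[rho[-1]]:
        rho.append(n)
Theta = [Y[r] for r in rho]

tau_Y = next((n for n in range(1, len(Y)) if Y[n] <= a), None)
tau_Theta = next((j for j in range(1, len(Theta)) if Theta[j] <= a), None)
```

On this path both stopping times exist and \(\tau _Y(a)\) coincides with \(\varrho _{\tau _\Theta (a)}\), as the excursion bound \(Y_n\ge \Theta _j\) guarantees.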
4.2 Step 2: Representation of the Function F in Terms of an Auxiliary Process

Recall (4.13) for the definition of \(F(\theta , I)\). We are going to use again the fluctuation theory for the random walk S to give a representation of F in terms of an increasing process. Let us introduce some notation.
Let \(\alpha _k\), \(k\ge 0\), be the weak ascending ladder epochs of S and \(H_k:= S_{\alpha _k}\) the associated ladder heights: \(\alpha _0:=0\) and, for any \(k\ge 1\),
$$\begin{aligned} \alpha _k:= \inf \left\{ n> \alpha _{k-1}:\, S_n \ge S_{\alpha _{k-1}}\right\} . \end{aligned}$$
(4.15)
Let us also introduce for \( k \ge 1\):
$$\begin{aligned} \widetilde{H}_k:= H_{k}-H_{k-1} \ge 0 , \qquad \widetilde{W}_k :=\, \sum _{n=\alpha _{k-1}+1}^{\alpha _k} e^{S_n-H_k}\,. \end{aligned}$$
(4.16)
By the Markov property of S, the sequence \(((\widetilde{H}_k, \widetilde{W}_k))_{k\ge 1}\) is IID. Let us define a new Markov chain \((J_k(\theta ))_{k\ge 0}\), adapted to the filtration \((\mathcal {F}_{\alpha _k})_{k\ge 0}\), by \(J_0(\theta ):=\theta \) and, for all \(k=0, 1, 2, \dots \)
$$\begin{aligned} J_{k+1}(\theta ):=J_k(\theta ) + \widetilde{H}_{k+1} + \log \big (1 + \widetilde{W}_{k+1} e^{-J_k(\theta )} \big ). \end{aligned}$$
(4.17)
Note that \(J_k(\theta )\) is increasing in k. Explicitly, \(J_k(\theta )\) is given by:
$$\begin{aligned} J_k(\theta ) \, =\, \log \Big (e^{\theta +H_k} + \sum _{j=1}^k \widetilde{W}_j \, e^{H_k-H_{j-1}}\Big ) = \theta + H_k+ \log \Big (1 + e^{-\theta } \sum _{j=1}^k \widetilde{W}_j \, e^{-H_{j-1}}\Big ). \end{aligned}$$
(4.18)
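The equivalence between the recursion (4.17) and the explicit formula (4.18) can be checked numerically; in the sketch below the increments \(\widetilde{H}_k\) and weights \(\widetilde{W}_k\) are arbitrary illustrative values, not draws from the actual ladder variables.

```python
import math

def J_explicit(theta, Ht, Wt, k):
    # formula (4.18): J_k = log( e^{theta+H_k} + sum_{j=1}^k Wt_j e^{H_k - H_{j-1}} )
    H = [0.0]
    for h in Ht:
        H.append(H[-1] + h)
    return math.log(math.exp(theta + H[k])
                    + sum(Wt[j - 1] * math.exp(H[k] - H[j - 1]) for j in range(1, k + 1)))

theta = 0.4
Ht = [0.0, 1.2, 0.3, 2.1, 0.0, 0.7]   # increments H~_k >= 0 (arbitrary)
Wt = [1.5, 0.9, 3.0, 1.1, 2.2, 1.0]   # weights  W~_k >  0 (arbitrary)

# recursion (4.17): J_{k+1} = J_k + H~_{k+1} + log(1 + W~_{k+1} e^{-J_k})
J = [theta]
for h, w in zip(Ht, Wt):
    J.append(J[-1] + h + math.log1p(w * math.exp(-J[-1])))
```

The two computations agree term by term, and the trajectory is indeed increasing in k.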
Lemma 4.3 For any \(\theta \in \mathbb {R}\) and any Borel set \( I\subset \mathbb {R}\), we have
$$\begin{aligned} F(\theta ,I) = \sum _{k=0}^\infty \mathbb {P}(J_k(\theta ) \in I)\,. \end{aligned}$$
(4.19)
Proof The proof is based on the duality lemma for the random walk S. By the definitions of \( F(\theta , I)\) and \(\varrho _1\), we have
$$\begin{aligned} \begin{aligned} F(\theta , I)&= \sum _{n=0}^\infty \mathbb {P}\Bigg (S_1\ge 0,\ldots , S_n \ge 0, \log \Big (e^{\theta +S_n} + \sum _{i=0}^{n-1} e^{S_n-S_i}\Big ) \in I\Bigg ) \\&= \sum _{n=0}^\infty \mathbb {P}\Bigg (S_1\ge 0,\ldots , S_n \ge 0, \log \Big (e^{\theta +S_n} + \sum _{i=1}^{n} e^{S_n-S_{n-i}}\Big ) \in I\Bigg ). \end{aligned} \end{aligned}$$
Since \((S_n-S_{n-i})_{0\le i \le n}\) has the same distribution as \((S_i)_{0\le i\le n}\), we obtain that
$$\begin{aligned} \begin{aligned} F(\theta , I)&=\sum _{n=0}^\infty \mathbb {P}\Bigg ( S_n\ge S_{n-1},\ldots , S_n \ge S_1, S_n\ge 0, \, \log \Big (e^{\theta +S_n} + \sum _{i=1}^n e^{S_i}\Big ) \in I\Bigg )\\&= \sum _{k=0}^\infty \mathbb {P}\Bigg (\log \Big (e^{\theta +H_k} + \sum _{n=1}^{\alpha _k} e^{S_n}\Big ) \in I\Bigg )\\&= \sum _{k=0}^\infty \mathbb {P}\Bigg (\theta + H_k + \log \Big (1 + e^{-\theta } \sum _{n=1}^{\alpha _k} e^{S_n-H_k}\Big ) \in I\Bigg ). \end{aligned} \end{aligned}$$
Note that
$$\begin{aligned} \sum _{n=1}^{\alpha _k} e^{S_n-H_k}= \sum _{\ell =1}^k \sum _{n=\alpha _{\ell -1}+1}^{\alpha _\ell } e^{S_n-H_\ell }\, e^{H_\ell -H_k}= \sum _{\ell =1}^k \widetilde{W}_\ell \, e^{H_\ell -H_k}, \end{aligned}$$
(4.20)
by definition of \(\widetilde{W}\) in (4.16). The strong Markov property of S yields that \(((\widetilde{W}_\ell , H_\ell -H_{\ell -1}))_{\ell \ge 1}\) is IID. Reversing the order of these IID blocks, \(\big (H_k, \sum _{\ell =1}^k \widetilde{W}_\ell \, e^{H_\ell -H_k}\big )\) is distributed as \(\big (H_k, \sum _{j=1}^k \widetilde{W}_j \, e^{-H_{j-1}}\big )\), and therefore \(\theta + H_k + \log \big (1 + e^{-\theta } \sum _{n=1}^{\alpha _k} e^{S_n-H_k}\big ) \) is distributed as \(J_k(\theta )\). This concludes the proof. \(\square \)
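The decomposition (4.20) is a pathwise (deterministic) identity, so it can be verified on a fixed trajectory; the increments below are arbitrary illustrative values.

```python
import math

steps = [0.6, -0.9, 1.4, -0.3, -0.2, 1.1, 0.5, -0.7, 0.8]   # arbitrary increments
S = [0.0]
for x in steps:
    S.append(S[-1] + x)

# weak ascending ladder epochs: alpha_0 = 0, alpha_k = min{n > alpha_{k-1} : S_n >= S_{alpha_{k-1}}}
alpha = [0]
for n in range(1, len(S)):
    if S[n] >= S[alpha[-1]]:
        alpha.append(n)
H = [S[a] for a in alpha]

# W~_k as in (4.16): sum over the k-th ladder block, renormalized by H_k
Wt = [sum(math.exp(S[n] - H[k]) for n in range(alpha[k - 1] + 1, alpha[k] + 1))
      for k in range(1, len(alpha))]

k = len(alpha) - 1
lhs = sum(math.exp(S[n] - H[k]) for n in range(1, alpha[k] + 1))
rhs = sum(Wt[l - 1] * math.exp(H[l] - H[k]) for l in range(1, k + 1))
```

The block-by-block regrouping reproduces the full sum exactly, which is the content of (4.20).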
4.3 Step 3: The Asymptotic Behavior of the Function F

We now take \(I=(x, y]\), with \(0\le x \le y \le 2x\). We are going to determine the asymptotic behavior of \(F(\theta , (x, y])\) as \(x\rightarrow \infty \) by using the fact that \(J_k(\theta )\) behaves asymptotically like a random walk whose increments have the law of \(H_1\), which is a nonnegative random variable. The renewal function of H is defined as
$$\begin{aligned} R(x):= \sum _{k=0}^\infty \mathbb {P}\Big ( H_k \le x\Big ), \qquad x\ge 0, \end{aligned}$$
(4.21)
and \(R(x):=0\) for \(x<0\).
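For intuition about the renewal function and the linear asymptotics in Lemma 4.4 below, one can estimate R by Monte Carlo in a case where everything is explicit. If \(H_1\) is taken exponential with mean 1 (an arbitrary choice for this illustration), then the \(H_k\) are the points of a unit-rate Poisson process, the constants of Lemma 4.4 are \(c_R=c'_R=1\), and in fact \(R(x)=x+1\) exactly:

```python
import random

def estimate_R(x, n_paths=20000, rng=random.Random(7)):
    # R(x) = E[ #{k >= 0 : H_k <= x} ], with H_0 = 0 and iid Exp(1) increments
    total = 0
    for _ in range(n_paths):
        h, count = 0.0, 0
        while h <= x:
            count += 1
            h += rng.expovariate(1.0)
        total += count
    return total / n_paths

x = 10.0
R_est = estimate_R(x)
c_R, c_R_prime = 1.0, 1.0   # 1/E[H_1] and E[H_1^2]/(2 E[H_1]^2) for Exp(1)
```

The estimate is close to \(c_R\, x + c'_R = 11\), illustrating the expansion that Lemma 4.4 quantifies with explicit error bounds.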
The following lemma, due to Rogozin [32] and Stone [34], plays a crucial role in the study of F.
Lemma 4.4 ([32, formula (28)], and [34]) Assume that
(R-1): there exists \(n_0\ge 1\) such that the law of \(H_{n_0}\) has an absolutely continuous part: this property is normally referred to as “the law of \(H_1\) is spread out”;
and either
(R-2): there exists \(\kappa \ge 2\) such that \(\mathbb {E}[H_1^\kappa ] < \infty \);
or
(R-2'): there exists \(c>0\) such that \(\mathbb {E}[e^{c H_1}] < \infty \).
Then for any \(x>0\),
$$\begin{aligned} \left| R(x) - \left( c_R\, x + c'_R\right) \right| \, \le \, \varphi (x):= {\left\{ \begin{array}{ll} C_\varphi \, (1+x)^{-(\kappa -2)}, & \hbox {in case (R-2)}, \\ C_\varphi \, e^{-C'_\varphi x}, & \hbox {in case (R-2')}, \end{array}\right. } \end{aligned}$$
(4.22)
where \(C_\varphi , C'_\varphi \) are two (unimportant) positive constants and
$$\begin{aligned} c_R:=\,\frac{1}{\mathbb {E}(H_1)}, \qquad c'_R:=\, \frac{\mathbb {E}\left[ H_1^2\right] }{2\,\mathbb {E}\left[ H_1\right] ^2}.\end{aligned}$$
(4.23)
Remark 4.5 (i) By the main theorem in Doney [16],
(R-2) is satisfied if \(\mathbb {E}[(\mathtt{X}^+)^{\kappa +1}]<\infty \), and hence it is satisfied under hypothesis (T-1) of Theorem 3.1 with \(\kappa =\xi -1\ge 2\);
(R-2') is satisfied under (T-1').
(ii) The case (R-2') of Lemma 4.4 is stated in Stone [34] with an implicit constant that must be equal to \(c'_R\). See also [1, Sec. VII.2 and Ex. VII.2.2].
The case (R-2) is also obtained in Carlsson [5, Corollary 2 and Remark 2] under the slightly weaker assumption that \(H_1\) is strongly nonlattice, instead of (R-1). Here, we follow the formalism of Rogozin [32], which states that under (R-2),
$$\begin{aligned} R(x)\,=\, c_R\, x + c'_R - \frac{1}{2(\mathbb {E}(H_1))^2}\, \mathbb {E}\left[ \left( H_1-x\right) ^2\textbf{1}_{\{H_1 > x\}}\right] + o\big (x^{-(\kappa -2)}\big ), \qquad x\rightarrow \infty . \end{aligned}$$
(4.24)
Notice that condition (R-1) is equivalent to [32, Condition (6)] by Remark 4 there. Using (R-2), we see that \(\mathbb {E}[H_1^2 \textbf{1}_{\{H_1>x\}}] \le \mathbb {E}(H_1^\kappa )\, x^{2-\kappa }\), and hence there exists \(C_R>0\) such that
$$\begin{aligned} \left| R(x) - \left( c_R\, x + c'_R\right) \right| \, \le \, C_R\, x^{-(\kappa -2)}\ \text{ and }\ R(x)\, \le \, C_R\, x \qquad \text{ for all } x\ge 1. \end{aligned}$$
(4.25)
Of course, (4.25) is also valid in the case (R-2'), and \(C_R\) is just a constant involved in upper bounds on error terms, while \(c_R\) and \(c'_R\) are the constants in (4.23).
The following proposition gives the key estimate in the proof of Theorem 3.1. Recall (4.13) for the definition of F.
Proposition 4.6 Assume (R-1) and (R-2) with \(\kappa \ge 2\). Then, as \(x\rightarrow \infty \), uniformly in \(0\le \theta \le x/2\) and \(x\le y \le 2x\), we have
$$\begin{aligned} F(\theta , (x, y]) \,=\, c_R\, (y-x) + O\big ( x^{-(\kappa -2)}\big )\,. \end{aligned}$$
(4.26)
Assuming (R-2') instead of (R-2), there exists a positive constant \(\delta \) such that (4.26) holds with the error term replaced by \(O\left( e^{-\delta x}\right) \).
Before presenting the proof of Proposition 4.6, we state two lemmas. The first lemma will allow us to jump from the initial point \(\theta \) to some larger (random) initial point. From this larger initial point, it becomes possible to compare F with R: upper and lower bounds are given in the second lemma, which is useful for large \(\theta \).
Lemma 4.7 For every \(\theta \in \mathbb {R}\), any Borel set \(I\subset \mathbb {R}\) and any \(m\ge 1\), we have
$$\begin{aligned} F(\theta , I)&= F_m(\theta , I) + \mathbb {E}\left[ F(J_m(\theta ), I) \right] , \end{aligned}$$
(4.27)
where
$$\begin{aligned} F_m(\theta , I)&\, := \, \sum _{k=0}^{m-1} \mathbb {P}(J_k(\theta ) \in I)\, . \end{aligned}$$
(4.28)
Proof This is just an application of the Markov property at time m for the Markov chain \((J_k(\theta ))_{k\ge 0}\). \(\square \)
Lemma 4.8 For every x and y such that \(0\le x\le y \le 2x\), and for every \(\theta \ge 0\) and \(t>0\), we have
$$\begin{aligned} \begin{aligned} F(\theta , (x, y])&\le R\left( y-\theta \right) -R\left( x- \theta - t\right) + \Lambda (2x, e^{\theta } t), \\ F(\theta , (x, y])&\ge R\left( y- \theta -t \right) - R\left( x-\theta \right) - \Lambda (2x, e^{\theta } t)\,, \end{aligned} \end{aligned}$$
(4.29)
where
$$\begin{aligned} \Lambda (x, t)&\, := \, \sum _{k=0}^\infty \mathbb {P}\Big ( H_k\le x,\ \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} > t \Big ). \end{aligned}$$
(4.30)
Proof We use (4.19) for an expression of \(F(\theta , (x, y])\). By (4.18) and the inequality \(\log (1+u)\le u\), we have
$$\begin{aligned} H_k+\theta \le J_k(\theta )\le H_k+\theta +e^{-\theta } \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}}.\end{aligned}$$
(4.31)
Hence, for \(z\in \{x, y\}\),
$$\begin{aligned} \mathbb {P}(J_k(\theta )\le z) \le \mathbb {P}(H_k\le z-\theta )\end{aligned}$$
(4.32)
and
$$\begin{aligned} \begin{aligned} \mathbb {P}\big (J_k(\theta )\le z\big )&\ge \mathbb {P}\Big (H_k+\theta +e^{-\theta } \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} \le z\Big )\\&\ge \mathbb {P}\Big (H_k+\theta + t\le z,\ \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} \le e^{\theta }t\Big )\\&\ge \mathbb {P}\big (H_k+\theta + t\le z\big )-\mathbb {P}\Big (H_k \le 2 x,\ \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} > e^{\theta }t \Big ), \end{aligned} \end{aligned}$$
(4.33)
using \(z-\theta -t\le 2x\) in the last line. Applying these two inequalities with \(z=y\) and then \(z=x\), and summing over k, we get the upper and lower bounds of the lemma. \(\square \)
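The two-sided bound (4.31) at the heart of this proof is pathwise, so it is easy to check numerically. In the sketch below the ladder increments and weights are arbitrary illustrative values, and the infinite series is truncated to the available terms, which only enlarges the upper bound:

```python
import math

theta = 0.8
Ht = [0.5, 0.0, 1.7, 0.2, 1.0]   # H~ increments (>= 0), arbitrary
Wt = [2.0, 0.7, 1.3, 4.0, 0.6]   # W~ weights  (>  0), arbitrary

H = [0.0]
for h in Ht:
    H.append(H[-1] + h)

# recursion (4.17) for J_k(theta)
J = [theta]
for h, w in zip(Ht, Wt):
    J.append(J[-1] + h + math.log1p(w * math.exp(-J[-1])))

# the series sum_j W~_j e^{-H_{j-1}}, truncated to the available terms
series = sum(Wt[j - 1] * math.exp(-H[j - 1]) for j in range(1, len(Wt) + 1))

lower_ok = all(H[k] + theta <= J[k] + 1e-12 for k in range(len(J)))
upper_ok = all(J[k] <= H[k] + theta + math.exp(-theta) * series + 1e-12
               for k in range(len(J)))
```

The lower bound comes from \(\log (1+u)\ge 0\) and the upper bound from \(\log (1+u)\le u\), exactly as in the proof.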
Proof of Proposition 4.6 The asymptotic behavior of F will follow from Lemmas 4.7 and 4.8 once we have obtained estimates on \(F_m(\theta ,(x,y])\), \(\mathbb {E}[ R(y- J_m(\theta ))-R(x-J_m(\theta )-t)]\), \(\mathbb {E}[ R(y- J_m(\theta )-t)-R(x-J_m(\theta ))]\) and \(\mathbb {E}[\Lambda (2x, e^{J_m(\theta )}t)]\). This will be done separately in what follows.
For ease of exposition, we introduce the symbol \(O_{s}(f(s))\) to denote a quantity whose absolute value is bounded by C|f(s)| with \(C>0\) independent of the parameter s. In general, s may be of dimension larger than one and belong to a domain that will be specified.
4.3.1 Estimate of \(F_m(\theta ,(x, y])\)

Since \(F_m(\theta ,(x, y])\, \le \, \sum _{k=0}^{m-1} \mathbb {P}\left( J_k(\theta )>x\right) \le m\, \mathbb {P}\left( J_m(\theta )>x\right) \), our goal is to estimate \(\mathbb {P}\left( J_m(\theta ) > x\right) \). We observe that for all \(x\ge 4\), \(\theta \in [0, x/2]\) and \(m\ge 1\), if \(H_m \le \frac{x}{4}\) and \(\sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} \le e^{x/4}\), then
$$\begin{aligned} J_m(\theta )\,\le \, H_m+\log \Bigg (e^{\theta } + \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}}\Bigg )\le \frac{x}{4} + \log \big (2e^{x/2}\big ) \le x. \end{aligned}$$
Hence, for all \(x\ge 4\), \(\theta \in [0, x/2]\) and \(m\ge 1\),
$$\begin{aligned} \mathbb {P}(J_m(\theta )> x) \,&\le \, \mathbb {P}\left( H_m> \frac{x}{4}\right) + \mathbb {P}\left( \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}}> e^{x/4}\right) . \end{aligned}$$
(4.34)
We first estimate the second term in (4.34), as the estimate of the first term depends on whether case (R-2) or (R-2') applies.
Remark that \(\widetilde{W}_1= 1+ \sum _{n=1}^{\alpha _1-1} e^{S_n-H_1} \le 1 + \sum _{n=1}^\infty e^{S_n} \textbf{1}_{\{ S_n < 0\}}\), by using the definition of \(\alpha _1\) and the fact that \(H_1\ge 0\). By [9, Lemma A.2], \(\mathbb {E}\left[ \mathtt{X}^2\right] < \infty \) (i.e., hypothesis (T-2)) implies that for some \(\delta _1>0\),
$$\begin{aligned} \mathbb {E}[\exp ( \delta _1 \widetilde{W}_1)]< \infty . \end{aligned}$$
(4.35)
Recall that \(((\widetilde{W}_\ell , H_\ell -H_{\ell -1}))_{\ell \ge 1}\) is an IID sequence. By (4.35), we can apply [22, Theorem 2.1] to obtain some \(\delta _2>0\) such that
$$\begin{aligned} C_{\delta _2}:=\, \mathbb {E}\Big [\exp \Big (\delta _2 \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}}\Big )\Big ] \,<\, \infty . \end{aligned}$$
(4.36)
We deduce from (4.36) and the Markov inequality that for every \(\lambda >0\)
$$\begin{aligned} \mathbb {P}\Bigg (\, \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} > \lambda \Bigg ) \, \le \, C_{\delta _2}\, e^{-\delta _2 \lambda }\,. \end{aligned}$$
(4.37)
In particular,
$$\begin{aligned} \mathbb {P}\Bigg (\sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}}> e^{x/4}\Bigg ) \, \le \, C_{\delta _2}\, e^{-\delta _2 e^{x/4}}. \end{aligned}$$
(4.38)
It remains to estimate \(\mathbb {P}\left( H_m > \frac{x}{4}\right) \) according to whether hypothesis (R-2) or (R-2') applies. We start with the case (R-2). As discussed right after Lemma 4.4, hypothesis (T-1) of Theorem 3.1 yields that the moment of order \(\kappa =\xi -1\) of \(H_1\) is finite; hence, using the Rosenthal inequality [25], there exists a constant \(C_\kappa >0\) such that
$$\begin{aligned} \mathbb {E}\left[ H_n^\kappa \right] \, \le \, C_\kappa \, n^\kappa , \qquad \text{ for } n=1,2, \ldots . \end{aligned}$$
(4.39)
Consequently, using the Markov inequality, for any \(x>0\) and \(m\ge 1\)
$$\begin{aligned} \mathbb {P}\left( H_m > \frac{x}{4}\right) \le 4^\kappa \, C_\kappa \, m^\kappa \, x^{-\kappa }, \qquad \text{ for all } x>0 \text{ and } m\ge 1. \end{aligned}$$
(4.40)
In the case (R-2'), \(\mathbb {E}[e^{c H_m}]= e^{c_H m} \) with \(c_H:= \log \mathbb {E}[e^{c H_1}] \in (0,\infty )\). The Markov inequality yields that for any \(x\ge 0\) and \(m\ge 1\),
$$\begin{aligned} \mathbb {P}\left( H_m > \frac{x}{4}\right) \le e^{c_H m - c x/4}, \qquad \hbox {in case (R-2')}. \end{aligned}$$
(4.41)
Applying (4.38), (4.40) and (4.41) to (4.34), we get that for all \(x\ge 4\), \(\theta \in [0, x/2]\) and \(m\ge 1\),
$$\begin{aligned} \mathbb {P}(J_m(\theta )>x) = {\left\{ \begin{array}{ll} O_{s} (m^{\kappa } x^{-\kappa }), & \hbox {in case (R-2)}, \\ O_{s} (e^{c_H m - c x/4}), & \hbox {in case (R-2')}, \end{array}\right. } \end{aligned}$$
(4.42)
so
$$\begin{aligned} F_m(\theta ,(x, y]) = {\left\{ \begin{array}{ll} O_{s}( m^{\kappa +1} x^{-\kappa }), & \hbox {in case (R-2)}, \\ O_{s} ( m\, e^{c_H m - c x/4}), & \hbox {in case (R-2')}. \end{array}\right. } \end{aligned}$$
(4.43)
4.3.2 Estimate of \(\mathbb {E}[ R(y- J_m(\theta ))- R(x-J_m(\theta )- t)]\)

We note that, by (4.25), \(R(x)=O_{s}(x)\) for \(x \ge 1\), and we observe that R is nondecreasing.
Let \(x\ge 2, y\in [x, 2x]\), \(m\ge 1\) and \(t\in (0,1)\). Whenever \(J_m(\theta )\le x/2\), we have, as a consequence of Lemma 4.4:
$$\begin{aligned} R(y- J_m(\theta ))- R(x-J_m(\theta )- t) = c_R (y-x+t) +O_{s}\left( \varphi \left( \frac{x}{2}-1\right) \right) , \end{aligned}$$
(4.44)
with the function \(\varphi \) defined in (4.22). Hence, for \(x\ge 2, y\in [x, 2x]\), \(m\ge 1\) and \(t\in (0,1)\)
$$\begin{aligned}&\mathbb {E}\left[ R(y- J_m(\theta ))- R(x-J_m(\theta )- t) \right] \nonumber \\&\quad =\, \mathbb {E}\left[ \left( c_R (y-x+t) +O_{s}\left( \varphi \left( \frac{x}{2}-1\right) \right) \right) \textbf{1}_{\{J_m(\theta )\le \frac{x}{2}\}}\right] + \mathbb {E}\left[ O_{s}(x) \textbf{1}_{\{J_m(\theta )> \frac{x}{2}\}}\right] \nonumber \\&\quad =\, c_R (y-x+t) +O_{s}\left( \varphi \left( \frac{x}{2}-1\right) \right) + O_{s}\left( x\, \mathbb {P}\left( J_m(\theta )> \frac{x}{2}\right) \right) . \end{aligned}$$
(4.45)
Using (4.42), we conclude that for all \(x\ge 8\), \(y\in [x, 2x]\), \(\theta \in [0, x/2]\), \(m\ge 1\) and \(t\in (0,1)\),
$$\begin{aligned} \mathbb {E}[ R(y- J_m(\theta ))- R(x-J_m(\theta )- t)] \, = \, c_R (y-x+t)+ {\left\{ \begin{array}{ll} O_{s}\left( x^{-(\kappa -2)} + m^\kappa x^{-(\kappa -1)}\right) , & \hbox {in case (R-2)}, \\ O_{s}\left( e^{-\delta _3 x} + x\, e^{c_H m - c x/8}\right) , & \hbox {in case (R-2')}, \end{array}\right. } \end{aligned}$$
(4.46)
where \(\delta _3\) is an arbitrarily fixed constant satisfying \(0<\delta _3<\min (c/8, C'_\varphi /2)\).
4.3.3 Estimate of \(\mathbb {E}[ R(y- J_m(\theta )-t)- R(x-J_m(\theta ))]\)

In exactly the same manner, we obtain that for all \(x\ge 8\), \(y\in [x, 2x]\), \(\theta \in [0, x/2]\), \(m\ge 1\) and \(t\in (0,1)\),
$$\begin{aligned} \mathbb {E}[ R(y- J_m(\theta )-t)- R(x-J_m(\theta ))] \, = \, c_R (y-x-t)+ {\left\{ \begin{array}{ll} O_{s}\left( x^{-(\kappa -2)} + m^\kappa x^{-(\kappa -1)}\right) , & \hbox {in case (R-2)}, \\ O_{s}\left( e^{-\delta _3 x} + x\, e^{c_H m - c x/8}\right) , & \hbox {in case (R-2')}. \end{array}\right. } \end{aligned}$$
(4.47)
4.3.4 Estimate of \(\mathbb {E}[\Lambda (2x, e^{J_m(\theta )}t)]\)

Recall (4.30) for the definition of \( \Lambda (\cdot , \cdot )\). For any \(n\ge 1\) and \(s>0\), we have:
$$\begin{aligned} \Lambda (x, s) \le \sum _{k=n}^\infty \mathbb {P}(H_k\le x) + n\, \mathbb {P}\Bigg ( \sum _{j=1}^\infty \widetilde{W}_j \, e^{-H_{j-1}} > s \Bigg ). \end{aligned}$$
(4.48)
The second term in (4.48) is controlled using (4.37): it is bounded by \(C_{\delta _2}\,n\, e^{-\delta _2 s}\). For the first term in (4.48), observe that it is equal to
$$\begin{aligned} \mathbb {E}[R(x-H_n)]\le R(x)\, \mathbb {P}(H_n \le x)= O_{s}\big (x\, \mathbb {P}(H_n \le x)\big ),\end{aligned}$$
(4.49)
assuming that \(x\ge 1\). We complete this estimate by observing that, since \(\mathbb {E}[H_1]>0\), by the classical Cramér large deviation theorem there exist \(C'>0\) and some small \(\delta _4>0\) such that
$$\begin{aligned} \mathbb {P}\left( H_k \le \delta _4 k\right) \,\le \, C' e^{-\delta _4 k}\qquad \text{ for } k =1,2, \ldots . \end{aligned}$$
(4.50)
Consequently, choosing \(n :=\lceil x/\delta _4\rceil \) in (4.48), we obtain, for \(x\ge 1\) and \(s>0\),
$$\begin{aligned} \Lambda (x, s) = O_{s}\big ( x (e^{-\delta _2 s} + e^{- x})\big ). \end{aligned}$$
(4.51)
Furthermore, for \(\theta \ge 0\), by using the fact that \(J_m(\theta )\ge H_m\) and by using (4.50) again, we have
$$\begin{aligned} \mathbb {E}\Big [e^{-\delta _2 t e^{J_m(\theta )}} \Big ] \le \mathbb {E}\Big [ e^{-\delta _2 t e^{H_m}}\Big ] \le e^{-\delta _2 t e^{\delta _4 m}} +C' e^{-\delta _4 m},\end{aligned}$$
(4.52)
which, combined with (4.51), shows that for all \(x\ge 1\), \(y\in [x, 2x]\), \(\theta \ge 0\), \(m\ge 1\) and \(t\in (0,1)\),
$$\begin{aligned} \mathbb {E}\big [\Lambda \big (2x, e^{J_m(\theta )}t\big ) \big ] \,=\, O_{s} \left( x \left( e^{-\delta _2 t e^{\delta _4 m}} + e^{-\delta _4 m} +e^{-2x}\right) \right) . \end{aligned}$$
(4.53)
4.3.5 Conclusion of the Proof of Proposition 4.6

Now, we are ready to give the proof of (4.26). We make our choice for m and t. In the case (R-2), we take \(m:=\lfloor x^{\delta }\rfloor \) for some \( \delta \in (0, 1/(\kappa +1))\). In the case (R-2'), we take \(m:= \lfloor \delta _3 x/c_H \rfloor \), where we recall the constants \( c_H, \delta _3 \) appearing in (4.46). In both cases, we set \(t:=\exp (-\delta _4 m/2)\).
By Lemmas 4.7 and 4.8, together with the estimates (4.43), (4.46), (4.47) and (4.53), for all \(x\ge 8\), \(y\in [x, 2x]\) and \(\theta \in [0, x/2]\),
$$\begin{aligned} F(\theta ,(x, y]) \, = \, c_R (y-x) + {\left\{ \begin{array}{ll} O_{s} \left( x^{-\delta _5}\right) , & \hbox {in case (R-2)}, \\ O_{s} \left( e^{-\delta x}\right) , & \hbox {in case (R-2')}, \end{array}\right. } \end{aligned}$$
(4.54)
with \(\delta _5:= \, \min \big (\kappa -2,\, \kappa -1-\delta \kappa ,\, \kappa -\delta (\kappa +1)\big )=\kappa -2\) since \(\delta <1/(\kappa +1)\), and the proof of Proposition 4.6 is complete. \(\square \)
4.4 Step 4: Technical Estimates for the Control of the Cumulative Function of \(\nu \)

Recall that we have chosen \(a>0\) in order to ensure that \(\nu ((-\infty , a])>0\). Recall (4.5), i.e., that for any \(a< x \le y \le 2x\),
$$\begin{aligned} \nu ((x, y])= \int _{(-\infty , a]} \mathbb {E}_{v}\big [V_a(Y_1, (x, y]) \textbf{1}_{\{Y_1>a\}} \big ]\nu (\textrm{d}v). \end{aligned}$$
(4.55)
In the representation of \(V_a(b, (x, y])\) given in Lemma 4.2, we are going to replace F by its asymptotic expression in the right-hand side of (4.26). To this end, we need to control the error terms, and this is the goal of the next lemma. Recall the definition of process \(\Theta \) just after (4.7) and the notation \(\tau _\Theta \) introduced in Lemma 4.2.
Lemma 4.9 If a is large enough, there exists \(C>0\) such that for every \(b> a\):
$$\begin{aligned} \mathbb {E}_b\left[ \tau _\Theta (a)\right] \,\le \, C \, b , \end{aligned}$$
(4.56)
and, for every \(x\ge 0\),
$$\begin{aligned} \mathbb {E}_b \left[ \sum _{j=0}^{\tau _\Theta (a)-1} \textbf{1}_{\{\Theta _j > x\}}\right] \, \le \, C\, e^{b-x}. \end{aligned}$$
(4.57)
Proof Recall (4.9). If \(\Theta _j>a\), then
$$\begin{aligned} \begin{aligned} \Theta _{j+1}-\Theta _j&= \log \big ( e^{-\Delta _{j+1}} + \eta _{j+1} e^{-\Theta _j}\big ) \\&\le \log \big ( e^{-\Delta _{j+1}} + \eta _{j+1} e^{-a} \big )=:W_{j+1}-W_j. \end{aligned} \end{aligned}$$
(4.58)
In other words, if we consider a real-valued random walk \((W_j)_{j\ge 0}\) with step distribution given as above and starting from \(W_0:=b>a\),