# An improvement on Bennett’s inequality for the Poisson distribution

A Poisson random variable ${\bf Poisson}(\lambda)$ of intensity ${\lambda > 0}$ is a random variable taking values in the natural numbers with probability distribution

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.$

One is often interested in bounding the upper tail probabilities

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u))$

for ${u \geq 0}$, or the lower tail probabilities

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u))$

for ${-1 < u \leq 0}$. A standard tool for this is Bennett's inequality:

Proposition 1 (Bennett's inequality) One has

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))$

for ${u \geq 0}$, and

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))$

for ${-1 < u \leq 0}$, where

$\displaystyle h(u) := (1+u) \log(1+u) - u.$

From the Taylor expansion ${h(u) = \frac{u^2}{2} + O(u^3)}$ for ${u=O(1)}$ we conclude Gaussian-type tail bounds in the regime ${u = o(1)}$ (and in particular when ${u = O(1/\sqrt{\lambda})}$), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities. But in the regime where ${u}$ is large and positive one obtains a slight gain over these other classical bounds (of ${\exp(- \lambda u \log u)}$ type, rather than ${\exp(-\lambda u)}$).
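To make this comparison concrete, here is a small numerical sanity check (a sketch only; the helper names `h` and `poisson_upper_tail` are mine, and the tail is computed by direct summation of the pmf in log space):

```python
import math

def h(u):
    # Bennett rate function: h(u) = (1+u)*log(1+u) - u
    return (1 + u) * math.log(1 + u) - u

def poisson_upper_tail(lam, u):
    # P(Poisson(lam) >= lam*(1+u)), by direct summation of the pmf
    k0 = math.ceil(lam * (1 + u))
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(k0, k0 + 1000))

lam = 100.0
for u in (0.05, 0.5, 2.0):
    tail = poisson_upper_tail(lam, u)
    bennett = math.exp(-lam * h(u))
    gaussian = math.exp(-lam * u * u / 2)
    print(f"u={u}: tail={tail:.3e}, Bennett={bennett:.3e}, Gaussian={gaussian:.3e}")
```

For small ${u}$ the two exponents nearly agree, while for large ${u}$ the Bennett bound is visibly smaller than the Gaussian heuristic.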

Proof: We use the exponential moment method. For any ${t \geq 0}$, we have from Markov's inequality that

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t \lambda(1+u)} {\bf E} \exp( t {\bf Poisson}(\lambda) ).$

A standard computation shows that the moment generating function of the Poisson distribution is given by

$\displaystyle {\bf E} \exp( t {\bf Poisson}(\lambda) ) = \exp( (e^t - 1) \lambda )$

and therefore

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp( (e^t - 1)\lambda - t \lambda(1+u) ).$

For ${u \geq 0}$, it turns out that the right-hand side is optimized by setting ${t = \log(1+u)}$, in which case it simplifies to ${\exp(-\lambda h(u))}$. This proves the first inequality; the second inequality is proven similarly (but now ${u}$ and ${t}$ are non-positive rather than non-negative). $\Box$
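The optimization step can be confirmed numerically (a quick sketch; `exponent` is my name for the expression ${(e^t - 1)\lambda - t \lambda(1+u)}$ in the exponent of the bound, and the parameters are arbitrary):

```python
import math

lam, u = 50.0, 0.8

def exponent(t):
    # the exponent (e^t - 1)*lam - t*lam*(1+u) from the Chernoff-type bound
    return (math.exp(t) - 1) * lam - t * lam * (1 + u)

t_star = math.log(1 + u)          # claimed optimizer
h_u = (1 + u) * math.log(1 + u) - u

# at t = log(1+u) the exponent equals -lam*h(u) ...
assert abs(exponent(t_star) - (-lam * h_u)) < 1e-9
# ... and nearby choices of t can only do worse (the exponent is convex in t)
for t in (0.5 * t_star, 1.5 * t_star):
    assert exponent(t) > exponent(t_star)
print("optimal exponent:", exponent(t_star))
```

Since ${\frac{d}{dt}[(e^t-1)\lambda - t\lambda(1+u)] = \lambda e^t - \lambda(1+u)}$ vanishes exactly at ${t = \log(1+u)}$ and the expression is convex, this is indeed the global minimum over ${t \geq 0}$.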

Remark 2 Bennett's inequality also applies to (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate such variables to the Poisson case. For instance, suppose ${S = X_1 + \dots + X_n}$ is a sum of independent Boolean variables ${X_1,\dots,X_n \in \{0,1\}}$ of total mean ${\sum_{j=1}^n {\bf E} X_j = \lambda}$ and with ${\sup_i {\bf P}(X_i = 1) \leq \varepsilon}$ for some ${0 < \varepsilon < 1}$. Then for any natural number ${k}$, we have

$\displaystyle {\bf P}(S = k) = \sum_{1 \leq i_1 < \dots < i_k \leq n} \prod_{j=1}^k {\bf P}(X_{i_j}=1) \prod_{i \neq i_1,\dots,i_k} {\bf P}(X_i=0)$

$\displaystyle \leq \frac{1}{k!} \Big(\sum_{i=1}^n \frac{{\bf P}(X_i=1)}{{\bf P}(X_i=0)}\Big)^k \times \prod_{i=1}^n {\bf P}(X_i=0)$

$\displaystyle \leq \frac{1}{k!} \Big(\frac{\lambda}{1-\varepsilon}\Big)^k \prod_{i=1}^n \exp( - {\bf P}(X_i = 1))$

$\displaystyle \leq e^{-\lambda} \frac{\lambda^k}{(1-\varepsilon)^k k!}$

$\displaystyle \leq e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\tfrac{\lambda}{1-\varepsilon}) = k) .$

Thus, for ${\varepsilon}$ small, one can efficiently control the tail probabilities of ${S}$ in terms of the tail probabilities of a Poisson random variable of mean close to ${\lambda}$; this is of course closely related to the well-known fact that the Poisson distribution arises as the limit of sums of many independent Boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
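The final comparison in the chain above can be checked against the exact distribution of ${S}$ (a sketch under my own choice of parameters; the dynamic-programming convolution and helper names are not from the text):

```python
import math
import random

random.seed(1)
n, eps = 200, 0.05
p = [random.uniform(0.0, eps) for _ in range(n)]   # success probabilities, each <= eps
lam = sum(p)

# exact pmf of S = X_1 + ... + X_n by convolution (the Poisson-binomial distribution)
dist = [1.0]
for pi in p:
    new = [0.0] * (len(dist) + 1)
    for k, mass in enumerate(dist):
        new[k] += mass * (1 - pi)      # X_i = 0
        new[k + 1] += mass * pi        # X_i = 1
    dist = new

def poisson_pmf(mu, k):
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

mu = lam / (1 - eps)
factor = math.exp(eps / (1 - eps) * lam)
ok = all(dist[k] <= factor * poisson_pmf(mu, k) * (1 + 1e-9) for k in range(30))
print("P(S=k) <= e^{eps*lam/(1-eps)} P(Poisson(lam/(1-eps))=k) for k < 30:", ok)
```

The `1 + 1e-9` slack only guards against floating-point round-off; the inequality itself holds exactly.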

In this note I wanted to record the observation that one can improve Bennett's inequality by a small polynomial factor once one leaves the Gaussian regime ${u = O(1/\sqrt{\lambda})}$, in particular gaining a factor of ${1/\sqrt{\lambda}}$ when ${u \sim 1}$. This observation is not difficult and is implicit in the literature (for instance, it can be extracted from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that essentially contains the bound below, I would be happy to know of it.)

Proposition 3 (Improved Bennett's inequality) One has

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}$

for ${u \geq 0}$, and

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}$

for ${-1 < u \leq 0}$.
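As a rough numerical illustration of the gain in the upper-tail bound (a sketch only; the implied constant in ${\ll}$ is not specified by the proposition, so the printout just compares orders of magnitude, and the helper names are mine):

```python
import math

def h(u):
    return (1 + u) * math.log(1 + u) - u

def upper_tail(lam, u):
    # P(Poisson(lam) >= lam*(1+u)) by direct summation of the pmf in log space
    k0 = math.ceil(lam * (1 + u))
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(k0, k0 + 500))

lam = 100.0
for u in (0.5, 1.0, 2.0):
    tail = upper_tail(lam, u)
    bennett = math.exp(-lam * h(u))
    improved = bennett / math.sqrt(1 + lam * min(u, u * u))
    print(f"u={u}: tail={tail:.2e}, Bennett={bennett:.2e}, improved={improved:.2e}")
```

At ${\lambda = 100}$ and ${u \sim 1}$ the denominator ${\sqrt{1 + \lambda \min(u,u^2)}}$ is roughly ${10}$, which is the order of the gap between the classical Bennett bound and the true tail.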

Proof: We begin with the first inequality. We may assume that ${u \geq 1/\sqrt{\lambda}}$, since otherwise the claim follows from the usual Bennett inequality. We expand the left-hand side as

$\displaystyle e^{-\lambda} \sum_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.$

Observe that for ${k \geq \lambda(1+u)}$ one has

$\displaystyle \frac{\lambda^{k+1}}{(k+1)!} \leq \frac{1}{1+u} \frac{\lambda^k}{k!}.$

Thus the sum is dominated by the first term times the geometric series ${\sum_{j=0}^\infty \frac{1}{(1+u)^j} = 1 + \frac{1}{u}}$. We can therefore bound the left-hand side by

$\displaystyle \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.$
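The geometric-series domination in the last two steps can be verified directly (a sketch with arbitrarily chosen parameters; `pmf` is my own helper):

```python
import math

lam, u = 60.0, 0.7
k0 = math.ceil(lam * (1 + u))

def pmf(k):
    # P(Poisson(lam) = k), computed in log space
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

# beyond lam*(1+u), each successive term loses at least a factor 1/(1+u) ...
for k in range(k0, k0 + 100):
    assert pmf(k + 1) <= pmf(k) / (1 + u)

# ... so the whole tail is at most the first term times 1 + 1/u
tail = sum(pmf(k) for k in range(k0, k0 + 500))
assert tail <= pmf(k0) * (1 + 1 / u)
print("tail <= first term * (1 + 1/u):", tail, "<=", pmf(k0) * (1 + 1 / u))
```

The term-ratio bound is just ${\lambda/(k+1) \leq 1/(1+u)}$ for ${k+1 \geq \lambda(1+u)}$, which is what the loop checks.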

By the Stirling approximation, this is

$\displaystyle \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{1}{\sqrt{k}} \frac{(e\lambda)^k}{k^k}.$

The expression inside the supremum is decreasing in ${k}$ for ${k > \lambda}$, so we can bound it by

$\displaystyle \ll e^{-\lambda} (1 + \frac{1}{u}) \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},$

which simplifies to

$\displaystyle \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}$

after a routine calculation.

We now turn to the second inequality. As before we may assume that ${u \leq -1/\sqrt{\lambda}}$. We first dispose of a degenerate case in which ${\lambda(1+u) < 1}$. Here the left-hand side is just

$\displaystyle {\bf P}( {\bf Poisson}(\lambda) = 0 ) = e^{-\lambda}$

and the right-hand side is comparable to

$\displaystyle e^{-\lambda} \exp( - \lambda (1+u) \log (1+u) + \lambda(1+u) ) / \sqrt{\lambda(1+u)}.$

Since ${-\lambda(1+u) \log(1+u)}$ is non-negative and ${0 < \lambda(1+u) < 1}$, we see that the right-hand side is ${\gg e^{-\lambda}}$, and the claim holds in this case.

It remains to consider the regime ${u \leq -1/\sqrt{\lambda}}$ and ${\lambda(1+u) \geq 1}$. The left-hand side expands as

$\displaystyle e^{-\lambda} \sum_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.$

The sum is dominated by its largest term times the geometric series ${\sum_{j=-\infty}^0 \frac{1}{(1+u)^j} = \frac{1}{|u|}}$. The maximal ${k}$ is comparable to ${\lambda(1+u)}$, so we can bound the left-hand side by

$\displaystyle \ll e^{-\lambda} \frac{1}{|u|} \sup_{\lambda(1+u) \ll k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.$

Using the Stirling approximation as before, we can bound this by

$\displaystyle \ll e^{-\lambda} \frac{1}{|u|} \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},$

which simplifies to

$\displaystyle \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}$

after a routine calculation. $\Box$

The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when ${\lambda}$ (and ${\lambda(1+u)}$) are large.
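One can probe this sharpness numerically (a sketch only; the claim is merely that the ratio of the true tail to the improved bound stays within a bounded range as ${\lambda}$ grows, with ${u}$ fixed):

```python
import math

def h(u):
    return (1 + u) * math.log(1 + u) - u

def upper_tail(lam, u):
    # P(Poisson(lam) >= lam*(1+u)) by direct summation of the pmf in log space
    k0 = math.ceil(lam * (1 + u))
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(k0, k0 + 800))

u = 1.0
ratios = []
for lam in (25.0, 100.0, 400.0):
    improved = math.exp(-lam * h(u)) / math.sqrt(1 + lam * min(u, u * u))
    ratios.append(upper_tail(lam, u) / improved)
print("tail/bound ratios:", [f"{r:.3f}" for r in ratios])
```

If the improved bound were off by a further polynomial factor in ${\lambda}$, these ratios would drift to ${0}$ or ${\infty}$; instead they remain of comparable size across a 16-fold range of ${\lambda}$.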
