
## Identities on the zeta function and harmonic numbers

Leonhard Euler gave a general identity relating the harmonic numbers $H_n$ and the Riemann zeta function: for integers $m \ge 2$,

$\displaystyle{2\sum_{n=1}^{\infty}\frac{H_n}{n^m} = (m+2)\zeta(m+1)-\sum_{k=1}^{m-2}\zeta(m-k)\zeta(k+1)}$.

A special case of this is the identity

$\displaystyle{\sum_{n=1}^{\infty}\frac{H_n}{n^2} = 2\zeta(3)}$.

I first saw the above identity in Alex Youcis’s blog Abstract Nonsense, and in the course of further investigation I was able to find several identities involving the Riemann zeta function and the harmonic numbers. While it is practically impossible to go through the entire mathematical literature to see whether a formula is new or a rediscovery, in this post we shall see a few identities which are not given in the above links. If I encounter any of these identities elsewhere at any point of time, I shall delete it from this post.

For $n \ge 1$ the following relations hold.

Entry 1.

$\displaystyle{\sum_{r=1}^{\infty}H_r \Big\{\frac{1}{r^{2n}} - \frac{1}{(r+1)^{2n}}\Big\}=\zeta(2n+1)}$.
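Entry 1 is easy to sanity-check numerically. The sketch below (plain Python; the truncation depths are my own choice, and this is only a consistency check, not a proof) compares truncated partial sums of both sides for $n=1$ and $n=2$:

```python
# Partial-sum check of Entry 1: sum_r H_r * (1/r^(2n) - 1/(r+1)^(2n)) vs zeta(2n+1)
def zeta(s, terms=200000):
    return sum(k ** -s for k in range(1, terms + 1))

def entry1(n, terms=200000):
    total, H = 0.0, 0.0
    for r in range(1, terms + 1):
        H += 1.0 / r  # H is now the harmonic number H_r
        total += H * (r ** -(2 * n) - (r + 1) ** -(2 * n))
    return total

print(entry1(1), zeta(3))
print(entry1(2), zeta(5))
```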

Entry 2.

$\displaystyle{\sum_{r=1}^{\infty}H_r \Big\{\frac{1}{r^{4n-1}} - \frac{1}{(r+1)^{4n-1}}\Big\}=\zeta(4n)}$.

Entry 3.

$\displaystyle{\sum_{r=1}^{\infty}H_r \Big\{\frac{1}{r^{4n+1}} - \frac{1}{(r+1)^{4n+1}}\Big\}=\zeta(4n+2)}$.

Entry 4.

$\displaystyle{\sum_{r=1}^{\infty}\frac{H_r}{(r+1)^{2n}} = n\zeta(2n+1) - \sum_{k=2}^{n}\zeta(k)\zeta(2n+1-k)}$.
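Entry 4 can be checked the same way; here the helper `zeta` adds a simple tail correction so that even the slowly converging $\zeta(2)$ is accurate. The helper names and cut-offs are mine, not part of the post:

```python
# Check of Entry 4 for n = 2 and n = 3.
def zeta(s, N=100000):
    # direct sum plus an integral tail correction, accurate enough even at s = 2
    return sum(k ** -s for k in range(1, N + 1)) + N ** (1 - s) / (s - 1) - 0.5 * N ** -s

def lhs(n, terms=100000):
    total, H = 0.0, 0.0
    for r in range(1, terms + 1):
        H += 1.0 / r
        total += H / (r + 1) ** (2 * n)
    return total

def rhs(n):
    return n * zeta(2 * n + 1) - sum(zeta(k) * zeta(2 * n + 1 - k) for k in range(2, n + 1))

print(lhs(2), rhs(2))
print(lhs(3), rhs(3))
```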

Entry 5.

$\displaystyle{\sum_{r=1}^{\infty}\frac{H_r}{(r+1)^{4n-1}} = \frac{4n-3}{4}\zeta(4n) - \sum_{k=1}^{n-1}\zeta(2k+1)\zeta(4n-2k-1)}$.

Entry 6.

$\displaystyle{\sum_{r=1}^{\infty}\frac{H_r}{(r+1)^{4n+1}}= \frac{4n-1}{4}\zeta(4n+2)-\frac{\zeta(2n+1)^2}{2}}$

$\displaystyle{-\sum_{k=1}^{n-1}\zeta(2k+1)\zeta(4n-2k+1)}$.

Entry 7.

$\displaystyle{\sum_{r=1}^{\infty}\frac{H_r}{(r+2)^{2n}} = n\zeta(2n+1) - 2n - \sum_{k=2}^{n}\zeta(k)\zeta(2n+1-k) + \sum_{k=2}^{2n}\zeta(k)}$.

Entry a. If $|x| < 1$ then

$\displaystyle{\sum_{n=1}^{\infty}\zeta(4n)x^{4n} = \frac{1}{2}-\frac{\pi x}{4}(\cot(\pi x) +\coth(\pi x))}$.
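Entry a can be tested at a sample point, say $x = 1/2$ (again a numerical consistency check only; the truncation depths are my own):

```python
import math

# Check of Entry a at the sample point x = 1/2.
def zeta(s, N=10000):
    return sum(k ** -s for k in range(1, N + 1))

x = 0.5
# zeta(4n) -> 1 very fast, so truncating the outer sum at n = 24 is ample for x = 1/2
lhs = sum(zeta(4 * n) * x ** (4 * n) for n in range(1, 25))
rhs = 0.5 - (math.pi * x / 4) * (1 / math.tan(math.pi * x) + 1 / math.tanh(math.pi * x))
print(lhs, rhs)
```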

## On the real roots of ζ(x) = a

Let $\mathbb N$ be the set of natural numbers and $\mathbb Q^+$ the set of positive rational numbers. For every positive integer $n \ge 2$, there exists a unique real number $s_n > 1$ such that $\zeta (s_n) = n$. The question is: does there exist an $n \in \mathbb N$ such that $s_n \in \mathbb Q^+$? In other words, does there exist a positive rational $\frac{p}{q}$ such that $\zeta(\frac{p}{q}) \in \mathbb N$? At present, we do not even know whether $s_n$ is rational or irrational, let alone transcendental. Investigation seems to suggest that $\zeta(\frac{p}{q}) \notin \mathbb N$ for any $\frac{p}{q} \in \mathbb Q^+$; however, this has not been completely established. In this post, we shall look at some of the results in this line of research.

Expanding the Riemann zeta function about $s=1$ gives

$\displaystyle{\zeta(s) = \frac{1}{s-1} + \gamma_0 + \sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\gamma_n (s-1)^n}$

where the constants

$\displaystyle{\gamma_n = \lim_{m \rightarrow \infty}\Bigg [\sum_{k=1}^{m}\frac{(\ln k)^n}{k} - \frac{(\ln m)^{n+1}}{n+1}\Bigg]}$

are known as Stieltjes constants. Inverting the above series we obtain:

Theorem 1. Let $x > 1$ and $\zeta(x) = a$; then,

$\displaystyle{x = 1 + \frac{1}{a-\gamma _0} - \frac{\gamma _1}{1!(a-\gamma _0)^2} + \frac{\gamma _2}{2!(a-\gamma _0)^3}- \frac{\gamma _3 - 12\gamma _1}{3!(a-\gamma _0)^4} + O\Big(\frac{1}{a^5}\Big)}$.

Proof. The proof follows by inverting the Stieltjes series expansion of $\zeta (x)$. $\Box$

We can rewrite the Stieltjes series expansion of the Riemann zeta function in the following form, which we shall use regularly in our analysis:

$\displaystyle{\zeta \Big (1+\frac{1}{s}\Big ) = s + \gamma_0 + \sum_{n=1}^{\infty}\frac{(-1)^n\gamma_n}{n!\,s^n}}$.

We want to find intervals where the function $\zeta(1+\frac{1}{s})$ is strictly monotonic. Clearly $\zeta(1+\frac{1}{s})$ is continuous and strictly increasing in the interval $(0, \infty)$. The trivial zeros of $\zeta(s)$ occur at $s=-2n$ where $n \in \mathbb N$; in other words, the trivial zeros of $\zeta(1+\frac{1}{s})$ occur at $s = \frac{-1}{2n+1}$, and all these zeros lie in the interval $(-1/3, 0)$. Hence in this interval $\zeta(1+\frac{1}{s})$ has infinitely many points of inflexion. Excluding the interval $(-1/3, 0)$, we can divide the real line into two parts, $(-\infty, -1/3)$ and $(0,\infty)$; in each of these two intervals $\zeta(1+\frac{1}{s})$ is strictly monotonic. For positive rationals $\frac{p}{q}$, the scope of our current study is the interval $(0,\infty)$.

Lemma 2. For every finite real $a>1$, there exists a unique real number $c_a$, $0< c_a < 1-\gamma _0$, such that

$\displaystyle{\zeta\Big(1+\frac{1}{a-1 +c_a}\Big) = a}$.

Proof. The lemma follows from the fact that in our interval of interest $(0,\infty)$, $\zeta (1+\frac{1}{s}) - s$ is strictly decreasing. $\Box$

Theorem 3. Let $c_a$ be as defined in Lemma 2. The asymptotic expansion of $c_a$ is

$\displaystyle{c_a = 1-\gamma_0 + \frac{\gamma_1}{a-1} - \frac{\gamma_2 + \gamma_1-\gamma_1\gamma_0}{(a-1)^2}}$

$\displaystyle{+ \frac{\gamma_3 + 2\gamma_2- 2\gamma_2\gamma_0 + \gamma_1 - 2\gamma_1\gamma_0 + \gamma_1\gamma_0^2 - \gamma_1^2}{(a-1)^3}+ O\Big(\frac{1}{a^4}\Big)}.$

Proof. This is obtained by inverting the above Stieltjes series expansion of $\zeta (1+\frac{1}{a-1+c_a})$$\Box$

The next result gives us a method to compute $c_a$ numerically. We have

Lemma 4. Let $k_r$ be defined recursively as

$\displaystyle{k_{r+1} = a + k_r - \zeta \Big (\frac{a+k_r}{a-1+k_r}\Big )}$

where $k_0 = 1-\gamma_0$. Then $\lim_{r \rightarrow \infty}k_r = c_a$ .

Proof. Trivial. $\Box$

Marek Wolf has sent me the computed values of $c_n$ for $2 \le n \le 1000$. Some of these values were used in the proof of the following theorem.
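The recursion of Lemma 4 is easy to run in practice. The sketch below evaluates $\zeta$ by a basic Euler-Maclaurin formula (my own helper, with accuracy adequate for this purpose) and iterates the map for $a = 2$:

```python
import math

GAMMA0 = 0.5772156649015329  # Euler-Mascheroni constant gamma_0

def zeta(s, N=10000):
    # Euler-Maclaurin evaluation of zeta(s) for real s > 1
    return (sum(k ** -s for k in range(1, N + 1))
            + N ** (1 - s) / (s - 1) - 0.5 * N ** -s + s * N ** (-s - 1) / 12)

def c(a, iterations=60):
    # k_{r+1} = a + k_r - zeta((a + k_r)/(a - 1 + k_r)), starting from k_0 = 1 - gamma_0
    k = 1.0 - GAMMA0
    for _ in range(iterations):
        k = a + k - zeta((a + k) / (a - 1.0 + k))
    return k

print(c(2))
```

For $a = 2$ the iterates settle quickly, reproducing the value of $c_2$ quoted in the proof of Theorem 5.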

Theorem 5. If $1 \le p-q \le 40$ then $\zeta (\frac{p}{q}) \notin \mathbb N$.

Proof. We have $c_2 \approx 0.37240621590$. There are exactly twenty-three positive fractions $\frac{m}{n}$ with $n \le 40$ such that

$\displaystyle{c_2 \le \frac{m}{n} < 1- \gamma_0}$.

For each of these fractions, we find (by actual computation) that there exists an integer $r \le 43$ such that

$\displaystyle{c_r \le \frac{m}{n} < c_{r+1}}$.

Since $c_n$ is strictly increasing, it follows that if $c_n \in \mathbb Q^+$ then the denominator of $c_n$ has to be $\ge 41$. Hence the theorem follows. $\Box$

If we continue working with the idea given in the proof of Theorem 5, we can prove $p-q > n_0$ for some large natural number $n_0$. But this method gives no hope of proving that $\zeta (\frac{p}{q}) \notin \mathbb N$ in general. Therefore we must look for some other ideas.

Theorem 6. Let $\zeta (1+\frac{1}{x_i}) \in \mathbb N$, $x_i \in \mathbb R^+$, $i=1, 2, \ldots , k$, and let $a_i \in \mathbb N$ be such that

$\sum a_i \in \{1, 2, 3, 4, 5, 7, 8, 10, 12, 15\}$.

If $y=m+\sum a_i x_i$, where $m$ is any non-negative integer, then $\zeta (1+\frac{1}{y}) \notin \mathbb N$.

Some applications of Theorem 6 are given below.

Example 7. If $x \in \mathbb R^+$ and $2 \le n \le 5$, $n \in \mathbb N$, then at least one of the two numbers

$\zeta (1+\frac{1}{x})$ and $\zeta (1+\frac{1}{nx})$

is not an integer.

Example 8. If $x, y \in \mathbb R^+$ then at least one of the three numbers

$\zeta (1+\frac{1}{x}), \zeta (1+\frac{1}{y})$ and $\zeta (1+\frac{1}{x+y})$

is not an integer.

## Dividing an angle into an arbitrary ratio

Back in my school days, probably in the year 1997, I first read about a famous and ancient geometrical problem: the problem of trisecting an angle using only a compass and an unmarked ruler. It took me some time to understand that a general method for trisecting an arbitrary angle $\theta$ is impossible. Of course with a marked ruler we can always trisect an arbitrary angle; moreover, some special angles such as $\pi$ or $\pi/2$, and even some non-constructible angles such as $3\pi/7$, can be trisected with an unmarked ruler. I found some approximate trisection methods, each of which gave a dominant term $\theta/3$ and an error term much smaller than the dominant term. At that time I wondered whether it is possible to find a method that gives an approximate division of an arbitrary angle into an arbitrary ratio. In formal terms, this problem can be stated as:

Problem: Let $n$ be a positive integer and $x$ any positive real. Given an angle $\theta$ and a line that divides it in the ratio $1/x$, find a method with a finite number of steps to construct a line that approximately divides the angle in the ratio $1/(x+n)$, using only a compass and an unmarked ruler.

Solution: We shall use the notation $[A]$ to denote the angle $A$. In the diagram let $[COB]=\theta$ be an acute angle and let $[IOB]=\theta/x$ be given.

Steps:

1. With O as center and radius OC, draw an arc to cut OB at D.
2. Join CD and draw DJ perpendicular to OB.
3. Let OI intersect DJ at I and the arc CD at K.
4. Produce DK to intersect OC at G.
5. Let OI intersect CD at E.
6. Extend GE to intersect OB at H.
7. Join IH to intersect GD at F.
8. Draw a line OA through F. $[AOB] \approx \theta/(x+1)$.
9. Repeat this method $n$ times to construct a line that approximately divides $\theta$ in the ratio $1/(x+n)$.

As $\theta$ increases, the lines OC and DG tend to become parallel to each other, and therefore the paper size required for the construction increases. To overcome this problem, we can consider an obtuse angle as the sum of a right angle and an acute angle, apply our method to each part separately, and then add up the final angles resulting from each of the two parts.

## A conjecture on consecutive primes

In 1982, Farideh Firoozbakht made an interesting conjecture that the sequence $p_n ^{1/n}$ is strictly decreasing, i.e. for all $n \ge 1$,

$p_n ^{\frac{1}{ n}} > p_{n+1} ^{\frac{1}{n+1}}$.
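This inequality is easy to test for small primes. A minimal sketch (my own sieve; this covers only a tiny fraction of the range already verified computationally):

```python
# Check p_n^(1/n) > p_{n+1}^(1/(n+1)) for all primes below 100000.
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(100000)
roots = [p ** (1.0 / n) for n, p in enumerate(primes, start=1)]
assert all(roots[i] > roots[i + 1] for i in range(len(roots) - 1))
print("strictly decreasing for the first", len(primes), "primes")
```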

I present a stronger form of Firoozbakht’s conjecture.

Conjecture: Let $p_n$ be the n-th prime and let $r(n) = p_n^{1/n}$.

$\displaystyle{\lim_{n \rightarrow \infty} \frac{n^2}{\ln n} \Big(\frac{r(n)}{r(n+1)}-1\Big) = 1}.$

If the above conjecture is true, then we can have explicit bounds on $p_n ^{1/n}$ in terms of $p_{n+1}$.

Corollary 1: For every $\epsilon, 0<\epsilon<1$, there exists a sufficiently large natural number $N_{\epsilon}$, which depends only on $\epsilon$, such that for all $n>N_{\epsilon}$,

$(1+n^{-2}) p_{n+1} ^{\frac{1}{n+1}} < p_n ^{\frac{1}{n}} < (1+n^{-2+\epsilon})p_{n+1} ^{\frac{1}{n+1}}$.

The above inequality would imply Cramer’s conjecture, and in fact it gives a stronger bound on the gap between consecutive primes.

Corollary 2: For all sufficiently large $n$,

$p_{n+1} - p_n < \ln ^2 p_n - 2\ln p_n + 1$.


## Analogues of Cramer’s conjecture

Abstract: In this post, I present my approach to Cramer’s conjecture on the gap between consecutive primes. However, since the sequence of primes $p_n$ is just a special case of a sequence $s_n$ asymptotically equidistributed modulo one, I have considered the general problem of estimating the gap between consecutive terms of $s_n$. Applying this general theory to prime numbers shows why Cramer’s conjecture should possibly be true.

1. Introduction

One of the most famous open problems in the theory of primes concerns the gap between consecutive primes. Given two consecutive prime numbers $p_n$ and $p_{n+1}$, how large can the difference $g(n)=p_{n+1}-p_n$ be? The answer to this question has baffled mathematicians for almost a century; however, continuous progress has been made since 1920, when Harald Cramer proved, on the assumption of the Riemann Hypothesis, that

$p_{n+1} - p_n = O(p_n^{0.5}\ln p_n)$.

Cramer’s Conjecture: Cramer applied probabilistic methods to estimate an upper bound on $g(n)$, and in 1936 he suggested that

$\displaystyle{\limsup_{n \rightarrow \infty}\frac{p_{n+1} - p_n}{\ln ^2 p_n} = 1}.$

This statement is known as Cramer’s conjecture. Intuitively, it means that the gaps between consecutive primes are always small, and it quantifies asymptotically just how small they can be. This conjecture has neither been proved nor disproved. In Cramer’s model one assumes that the probability that a natural number $x$ is prime is $1/\ln x$. This model was consistent with empirical data, and in 1966 Gallagher showed that the Hardy-Littlewood conjecture on $k$-tuples is consistent with Cramer’s model.

Cramer-Granville’s Conjecture: Despite so much evidence in support of Cramer’s probabilistic model, in 1985 Helmut Maier proved a result that actually contradicts the predictions of Cramer’s model. The problem with Cramer’s model is that it fails to take divisibility into account. Thus, for a prime $p > 2$, the probability that $p+1$ is prime is not $1/\ln(p + 1)$ as suggested by the Cramer model but rather $0$, since $p+1$ is even. Further, $p+2$ is always odd and is therefore twice as likely to be prime as a random natural number; thus $n$ and $n+2$ being prime are not independent events. Based on Maier’s result, in 1995 Andrew Granville refined Cramer’s model and conjectured that

$\displaystyle{\limsup_{n \rightarrow \infty}\frac{p_{n+1} - p_n}{\ln ^2 p_n} \ge 2e^{-\gamma}}$

where $\gamma$ is the Euler-Mascheroni constant. The modified statement

$p_{n+1} - p_n < M\ln ^2 p_n$

with $M>1$ is known as the Cramer-Granville’s Conjecture.

Firoozbakht’s conjecture: In 1982, the Iranian mathematician Farideh Firoozbakht made the following conjecture, which is lesser known but appears to have far-reaching consequences in the study of the gap between consecutive primes: the sequence $p_n^{1/n}$ decreases for all $n \ge 1$. This conjecture has been computationally verified for all primes up to $10^{12}$. From the prime number theorem, it can be shown that Firoozbakht’s conjecture implies

$p_{n+1} - p_n < \ln ^2 p_n - \ln p_n + 1$.

This upper bound is not only stronger than Cramer’s conjecture but also contradicts Granville’s limit $2e^{-\gamma}$. Thus either the Cramer-Granville conjecture is false or Firoozbakht’s conjecture is false. But which one is false? We don’t know for sure. Currently the best unconditional result, $g(n)=O(p_n^{0.535})$, proved by R. Baker and G. Harman, is far from either.

2. Sufficient conditions for Cramer’s conjecture

Lemma 1. Let $c_n$ be the largest real such that

$\displaystyle{p_n ^{\frac{1}{n}} > \Big(1-\frac{c_n \ln p_n}{n^2}\Big)p_{n+1} ^{\frac{1}{n+1}}}.$

If there exists a natural number $n_0$ such that for all $n>n_0$,

1. $c_n \le 0$, then Firoozbakht’s conjecture is true, or,
2. $c_n$ is a positive constant, then the Cramer-Granville conjecture is true.

This can easily be proved by finding the upper bound on $p_{n+1} - p_n$ from the inequality mentioned in the lemma.

Let $p(n) = p_n ^{1/n}$. If $p(n) > p(n+1)$, we define $a_n$ as

$\displaystyle{a_n = -\frac{1}{\ln n}\ln\Big(\frac{p(n)}{p(n+1)} - 1\Big)}.$

Lemma 2. If there exists a natural number $n_0$ such that for all $n>n_0$, $a_n \ge 0$, then Firoozbakht’s conjecture is true, and this would imply that

$p_{n+1} - p_n < \ln^2 p_n - \ln p_n + 1$.

This lemma is also proved by assuming the hypothesis in the statement of the lemma and finding the upper bound on $p_{n+1} - p_n$. Thus we see that we can gain new insights into the difference between consecutive primes by studying the quantity $p(n) = p_n ^{1/n}$; more specifically, we need a good estimate of the ratio $p(n)/p(n+1)$.
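As an illustration of the hypothesis of Lemma 2, $a_n$ can be computed directly for small $n$. In the sketch below (the sieve and the cut-off are my own choices) $a_n$ stays comfortably positive for all primes in this range:

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(200000)
p = [0.0] + [q ** (1.0 / n) for n, q in enumerate(ps, start=1)]  # p[n] = p_n^(1/n)

def a(n):
    # a_n = -(1/ln n) * ln(p(n)/p(n+1) - 1), defined whenever p(n) > p(n+1)
    return -math.log(p[n] / p[n + 1] - 1.0) / math.log(n)

for n in (10, 100, 1000, 10000):
    print(n, round(a(n), 4))
```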

3. Gap between consecutive terms of sequences asymptotically equidistributed mod 1

The following idea is drawn from Firoozbakht’s conjecture. Let $s_n$ be asymptotically equidistributed modulo one and let $h(n)=s_n^{1/n}$. If we can find a lower bound on $h(n)/h(n+1)$ then, by algebraic manipulation, we can obtain an upper bound on the gap $s_{n+1} - s_n$. The first task is to find a good estimate of $h(n)$. Since $s_n$ is asymptotically equidistributed mod 1, we have

$\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r \le n}f\Big(\frac{s_r}{s_n}\Big) = \int_{0}^{1}f(x)dx}.$

Taking $f(x) = x^{1/n}$ and simplifying, we have:

Lemma 3. If $s_n$ is asymptotically equidistributed mod $1$ and $\lim_{n \rightarrow \infty}s_n^{1/n} = 1$, then

$\displaystyle{s_n^{1/n} = \frac{n+1}{n^2}\sum_{r \le n}s_r^{\frac{1}{n}} + o(1)}.$

Let $j(n) = \frac{n+1}{n^2}\sum_{r \le n}s_r^{\frac{1}{n}}$ and let $\epsilon _n = o(1)$ be the error term so that $h(n) = j(n) + \epsilon _n$.

Lemma 4. Assuming the hypothesis of Lemma 3, there exists a positive integer $n_0$ and a positive constant $c$ such that for all $n > n_0$,

$\displaystyle{\frac{j(n)}{j(n+1)} > 1 - \frac{c\ln s_n}{n^2}}.$

Although $h(n) = j(n) + o(1)$, Lemma 4 does not directly imply that

$\displaystyle {\frac{h(n)}{h(n+1)} > 1 - \frac{c\ln s_n}{n^2}}.$

should hold for all sufficiently large $n$. However, if this relation were true, then we would be able to show that

$\displaystyle{s_{n+1}-s_n < \frac{(2+\epsilon) s_n \ln s_n}{n} + 1}.$

To be continued.


## Benford process tends to minimize product

Consider the simple question: what is the largest product of two numbers that can be formed using all ten digits of the decimal system, each digit used exactly once? We have two answers, $\{96420, 87531\}$ and $\{9642, 875310\}$. Continuing along the same line, we can seek three numbers that give the maximum product, and so on. In general we can ask: given a set $A$ of $n$ single-digit numbers, not necessarily distinct, form a set of $k$ numbers using all the given digits; i.e. if the digit 2 occurs thrice in the set $A$, then we must use the digit 2 exactly three times while forming our set with the maximum product. Trivially, we can see that there can be one or more answers depending on whether the number $0$ is an element of $A$, because we can form more than one set with the same product by placing the $0$ in the unit’s place of any number that does not end in a $0$.
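The maximization question above can be made concrete with a brute-force sketch (exhaustive over permutations, so only feasible for small digit sets; the function name is my own):

```python
from itertools import permutations

def max_product_two(digits):
    """Exhaustively split the digits into two numbers (every digit used once)
    and return the maximum product together with one maximizing pair."""
    best, best_pair = -1, None
    for perm in permutations(digits):
        for cut in range(1, len(perm)):
            a = int(''.join(perm[:cut]))
            b = int(''.join(perm[cut:]))
            if a * b > best:
                best, best_pair = a * b, (a, b)
    return best, best_pair

print(max_product_two('123456'))
```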

Suppose that a certain random process can generate one set of $k$ numbers per run, using all the elements of a given set of numbers $A$. An exhaustive run of this process will hence generate all possible sets of $k$ numbers. Let the set of all numbers generated by this random process satisfy the conditions of Benford’s Law. We shall call such a process a Benford process and formally define it as:

Benford Process: A Benford process is a random process which generates numbers that satisfy the conditions of Benford’s Law.

Now if we run a Benford process once, how likely is it that the process generates a set of numbers that gives the maximum product? In this article we shall see that, under the conditions of Benford’s Law, the probability that the $k$ numbers so generated have the largest product is the least. In other words, numbers generated by a random process which satisfies the conditions of Benford’s Law are least likely to give the maximum product.

Theorem 1. For any given $k$ and a set of single-digit numbers $A$, not necessarily distinct, the set of numbers generated by a Benford process using all the elements of $A$ is least likely to have the maximum product.

Proof: The proof of this theorem follows trivially from the following lemma.

Lemma 1. Let $A=\{d_1, d_2, \ldots , d_n\}$ be a set of $n$ digits in base 10 notation, not necessarily distinct, such that $0\le d_i \le d_{i+1}\le 9$. If a set $M_k (A)$ of $k$ elements is formed using only and all the elements of the set $A$, then the elements of $M_k (A)$ are given by

$a_j = d_{n-k+j}d_{n-2k+j}...d_{n-rk+j}$

where $n-rk+j \ge 1, j=1, 2, ... , k$.

Thus the larger a digit, the more significant the position it occupies. According to the General Benford Law, the probability of a number having a larger digit in the most significant place is lower than that of a number having a smaller digit in the most significant place. Hence the theorem follows. $\Box$
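The first-digit probabilities invoked here are, under Benford’s Law, $\log_{10}(1+1/d)$, which are strictly decreasing in $d$; a quick check:

```python
import math

# Benford first-digit probabilities P(d) = log10(1 + 1/d), d = 1..9
probs = [math.log10(1 + 1.0 / d) for d in range(1, 10)]
for d, p in enumerate(probs, start=1):
    print(d, round(p, 4))
```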

## Asymptotic and partial equidistribution modulo one

1. Introduction

A sequence of real numbers $a_n$ is equidistributed on a given interval if the probability of finding $a_n$ in any subinterval is proportional to the subinterval’s length. In particular, if $\{a_n\}$ denotes the fractional part of $a_n$, then the sequence $a_n$ is said to be equidistributed modulo 1, or uniformly distributed modulo 1, if for all $0 \le b < c \le 1$,

$\displaystyle{\lim_{N \rightarrow \infty}\frac{\#\{r \le N : b < \{a_r\} \le c\}}{N} = c - b.}$

Often, problems related to equidistribution modulo 1 can be established using a powerful theorem called Weyl’s criterion, which gives necessary and sufficient conditions for a sequence to be equidistributed mod 1. Weyl’s criterion can be stated in the following equivalent forms.

Theorem 1. (Weyl’s Criterion) Let $a_n$ be a sequence of real numbers. The following are equivalent:

1. The sequence $a_n$ is equidistributed mod 1.
2. For each nonzero integer $k$, we have $\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r \le n}e^{2\pi i k a_r} = 0}$.
3. For each Riemann-integrable $f:[0,1] \rightarrow \mathbb{C}$ we have

$\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r \le n}f(\{a_r\}) = \int_{0}^{1} f(x)dx}.$

2. Asymptotic equidistribution mod 1

Lemma 2.1. If $u_n$ is a bounded sequence of real numbers such that for every fixed $\epsilon$, $0 < \epsilon \le 1$, $\lim_{n \rightarrow \infty}u_{[n\epsilon]} = \epsilon$, then $u_n$ is equidistributed mod 1.

Example. Taking $u_r = r/n$ in Lemma 2.1, we have

$\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r \le n}f\Big(\frac{r}{n}\Big) = \int_{0}^{1} f(x)dx}.$

Recall that the above example, also known as the rectangle formula, is a standard method in elementary calculus for evaluating the limit of a sum using a definite integral. Since the rectangle formula holds for every function $f$ Riemann integrable on $(0,1)$, it implies that the sequence $r/n$ approaches equidistribution mod 1 as $n \rightarrow \infty$. This leads us to the concept of asymptotic equidistribution mod 1, which is defined as follows.
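For a concrete instance of the rectangle formula, take $f(x) = x^2$, whose Riemann sum tends to $\int_0^1 x^2\,dx = 1/3$:

```python
# Rectangle-formula check with f(x) = x^2; the Riemann sum tends to 1/3
def rectangle_sum(f, n):
    return sum(f(r / n) for r in range(1, n + 1)) / n

approx = rectangle_sum(lambda x: x * x, 10 ** 6)
print(approx)
```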

Definition 2.2. A sequence of positive real numbers $s_n$ is said to be asymptotically equidistributed modulo one if $s_n$ is increasing and the sequence of ratios

$\displaystyle{\frac{s_1}{s_n},\frac{s_2}{s_n},...,\frac{s_n}{s_n}}$

approach uniform distribution modulo one as $n \rightarrow \infty$.

Theorem 2.3. The sequence $s_n = a n(\ln (b n))^{c}$, where $a, b$ and $c$ are positive constants, is asymptotically equidistributed mod 1 and therefore by Weyl’s Criterion we have

$\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r \le n}f\Big(\frac{s_r}{s_n}\Big) = \int_{0}^{1} f(x)dx}.$

Thus the sequence of natural numbers, the sequence of primes $p_n$, the sequence of composite numbers $c_n$ and the sequence of the imaginary parts of the non-trivial zeroes of the Riemann zeta function $\gamma _n$ are asymptotically equidistributed mod 1.

Example.

$\displaystyle{\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{r \le n}\frac{p_n ^{2}}{p_n ^{2}+ p_r ^{2}}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{r \le n}\frac{\gamma _n ^{2}}{\gamma _n ^{2}+ \gamma _r ^{2}} = \frac{\pi}{4}.}$
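The first limit in this example can be approximated with a small prime sieve (my own cut-off; convergence is slow, so only loose agreement with $\pi/4$ should be expected at this depth):

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(100000)
pn = ps[-1]  # the largest prime found plays the role of p_n
mean = sum(pn * pn / (pn * pn + pr * pr) for pr in ps) / len(ps)
print(round(mean, 4), "vs", round(math.pi / 4, 4))
```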

Trivially, if $a_n$ is equidistributed modulo 1 then $a_n$ is also asymptotically equidistributed modulo one. The converse, however, is highly non-trivial, and to appreciate this fact we shall look at three celebrated results of analytic number theory in the field of equidistribution mod 1.

Let $\alpha$ be any irrational number and $\beta$ be any real number. We have

1. H. Weyl: The sequence $\alpha n$ is equidistributed mod 1.
2. I. Vinogradov: The sequence $\alpha p_n$ is equidistributed mod 1.
3. E. Hlawka: The sequence $\beta \gamma_n$ is equidistributed mod 1.

These three results would follow as special cases if the following conjecture on asymptotic equidistribution were true.

Conjecture 2.4. If $a_n$ is asymptotically equidistributed modulo 1 and $\alpha$ is any irrational number, then the sequence $\alpha a_n$ is equidistributed modulo 1.

A proof or disproof of this conjecture has eluded me so far.

3. Partial equidistribution modulo 1

Consider the set $S_0 = \{a_1, a_2, \ldots , a_n\}$ where $a_n$ is any sequence equidistributed modulo 1. Now from the set $S_0$, let us remove all elements that satisfy the condition $0.2 \le a_i < 0.3$ to obtain a new set $S_1$. Clearly no element of $S_1$ has the digit 2 in the first place after the decimal point, and therefore the set $S_1$ is no longer equidistributed modulo 1, because no element of $S_1$ lies in the interval $[0.2, 0.3)$. In general, we can remove from the set $S_0$ all the elements which begin with a given sequence of digits after the decimal point; the remaining elements form a partially equidistributed sequence. We call the set $S_1$ partially equidistributed modulo 1 because, as we shall see below, the set $S_1$ shows properties analogous to those of the set $S_0$, which is equidistributed modulo 1.

If numbers are represented in decimal notation, the first digit after the decimal point can take 10 different values, the first two digits after the decimal point can take 100 different values, and in general, if we consider the first $m$ digits after the decimal point, we can have $10^m$ different possibilities. From these $10^m$ different possibilities, let us form a sequence $b_n$ partially equidistributed modulo 1 such that the first $m$ digits of $b_r$ after the decimal point can take $k$ different values from the set $D=\{d_1, d_2, \ldots , d_k\}$.

Theorem 3.1. If $b_r$ is partially equidistributed modulo 1 and $D=\{d_1, d_2, ... , d_k\}$ is the set of allowed values of the first $m$ digits after the decimal point then

$\displaystyle{\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{r=1}^{n}f(b_r) = \frac{10^m}{k} \sum_{d \in D}\int_{\frac{d}{10^m}}^{\frac{d+1}{10^m}}f(x)dx}.$

Theorem 3.1 is a generalization of part (3) of Theorem 1. If $k=10^m$ then the set $D$ contains all possible combinations of $m$ digits; hence $b_r$ is equidistributed modulo 1, and the RHS of Theorem 3.1 reduces to

$\displaystyle{\int_{0}^{1} f(x)dx}.$
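Theorem 3.1 can be illustrated by simulation. The sketch below samples a hypothetical partially equidistributed sequence with $m = 1$ and allowed first digits $D = \{2, 5, 9\}$ (my own choice of example), and compares both sides for $f(x) = x^2$:

```python
import math
import random

random.seed(1)

# Hypothetical example: m = 1, first digit after the decimal point restricted to D
D = [2, 5, 9]
m, k = 1, len(D)

# Sample b_r: uniform within the allowed subintervals [d/10, (d+1)/10)
samples = [(random.choice(D) + random.random()) / 10 for _ in range(200000)]

# LHS of Theorem 3.1 with f(x) = x^2
lhs = sum(x * x for x in samples) / len(samples)

# RHS: (10^m / k) * sum_d integral of x^2 over [d/10^m, (d+1)/10^m]
rhs = (10 ** m / k) * sum(((d + 1) ** 3 - d ** 3) / (3.0 * 10 ** (3 * m)) for d in D)
print(round(lhs, 4), round(rhs, 4))
```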

To be continued.