
This summer I’ve been studying problems in geometric combinatorics such as the joints problem and the Erdős distinct distance problem. The joints problem was open for 20 years and the distinct distance problem was open for over 60 years before they were resolved by a new technique called the polynomial method. (Algebraic techniques have been around for longer, but their application to incidence problems is new.) In addition to solving hard problems, the polynomial method has the added virtue of producing short and simple proofs. I will showcase two examples of the polynomial method in action: Dvir’s proof of the finite field Kakeya conjecture, and Kaplan, Sharir, and Shustin’s proof of the joints problem in {{\mathbb R}^d}.

1. The Kakeya Problem

The finite field Kakeya problem and the joints problem are related to an open problem called the Kakeya conjecture. A Kakeya set, or Besicovitch set, is a subset of {{\mathbb R}^d} that contains a unit line segment in every direction. Besicovitch showed that such sets may have Lebesgue measure zero for {d\geq 2}. However, Davies showed that for {d=2}, Kakeya sets must have Hausdorff dimension {2}, so that a Kakeya set is actually a two dimensional object, despite having no area. The Kakeya conjecture is that for all {d}, Kakeya sets must have full Hausdorff dimension. Wolff was able to show that the Hausdorff dimension of a Kakeya set must be at least {\frac{d+2}{2}} for {d\geq 2}, but progress beyond this has been slow.
In a survey paper, Wolff introduced a finite field analog of the Kakeya conjecture. Let {{\mathbb F}_q} be the finite field with {q} elements. A Kakeya set in {{\mathbb F}_q^d} is a set that contains a line in every direction. That is, {K\subseteq {\mathbb F}_q^d} is a Kakeya set if for all {\bold{y}} in {{\mathbb F}_q^d}, there exists an {\bold{x}} in {{\mathbb F}_q^d} such that the set {\{\bold{x}+t\bold{y}\colon t\in{\mathbb F}_q\}} is a subset of {K}. The finite field Kakeya conjecture is that for any Kakeya set {K} we have {|K|\geq C_dq^d} for some fixed constant {C_d} that is independent of {q}.
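
To get a feel for the definition, here is a small brute-force check (a sketch of my own, not anything from the papers) of whether a given set is a Kakeya set; I represent {{\mathbb F}_q} as {{\mathbb Z}/q{\mathbb Z}} with {q} prime, and the function name is mine. This is only feasible for very small {q} and {d}.

```python
from itertools import product

def is_kakeya(K, q, d):
    """Brute-force check: does K (a set of d-tuples with entries in {0,...,q-1})
    contain a line in every direction of F_q^d?  Here F_q = Z/qZ with q prime."""
    K = set(K)
    for y in product(range(q), repeat=d):
        if all(c == 0 for c in y):
            continue  # the zero vector is not a direction
        # look for a base point x whose line {x + t*y : t in F_q} lies inside K
        if not any(all(tuple((x[i] + t * y[i]) % q for i in range(d)) in K
                       for t in range(q))
                   for x in K):
            return False
    return True

# The whole space is trivially a Kakeya set; removing a full line breaks it.
q, d = 3, 2
full = set(product(range(q), repeat=d))
print(is_kakeya(full, q, d))                              # True
print(is_kakeya(full - {(0, 0), (1, 0), (2, 0)}, q, d))   # False
```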

Theorem 1 (Dvir 2008) If {K} is a Kakeya set in {{\mathbb F}_q^d}, then {|K|\geq C_dq^d}, where {C_d} is a constant that we may choose to be {\frac{1}{d!}}.

Wolff showed that {|K|\geq C_dq^{(d+2)/2}} using a proof similar in spirit to his {\frac{d+2}{2}} lower bound on the Hausdorff dimension of Kakeya sets in {{\mathbb R}^d}. Since proofs in the finite field case use tools similar to those used in attacks on the {{\mathbb R}^d} problem, but involve fewer technicalities, it was hoped that the finite field case would serve as an instructive model. In 2008, Dvir proved the finite field Kakeya conjecture using the so-called polynomial method, which currently has no continuous analog.

2. The Fundamental Lemma

The fundamental lemma of the polynomial method is a higher dimensional analog of the fact that, given {n} numbers, there is a non-zero polynomial of degree {n} vanishing at those numbers:

Lemma 2 (Fundamental Lemma) Let {k} be a field and let {S} be a finite subset of {k^n}. If {|S|=m}, then there is a non-zero polynomial of degree {d\leq\lceil(n!m)^{1/n}\rceil} that vanishes on {S}.

Proof: Let {d} be the least integer such that {m<{n+d\choose d}}. By elementary combinatorics, the number of monomials in {k[x_1,\ldots,x_n]} of degree at most {d} is {{n+d\choose d}}, so a polynomial {P} in {k[x_1,\ldots,x_n]} of degree at most {d} with variable coefficients has {{n+d\choose d}} degrees of freedom. Requiring that {P} vanish on {S} imposes {m} homogeneous linear constraints on these coefficients. Since we have more degrees of freedom than constraints, the linear system for the coefficients of {P} has a non-trivial solution.

By a previous post, we know that if {d} is the least integer such that {m<{n+d\choose d}}, then {d\leq\lceil(n!m)^{1/n}\rceil}, which completes the proof. \Box
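
Here is a sketch of the lemma's proof as an actual computation, done over the rationals with sympy (the linear algebra is the same over a finite field); the function name is mine and the example points are arbitrary. It is only meant to illustrate the dimension count.

```python
from itertools import combinations_with_replacement
from math import comb
import sympy as sp

def vanishing_polynomial(points):
    """A sketch of the linear-algebra argument: find a non-zero polynomial of
    low degree vanishing at the given points of Q^n."""
    m, n = len(points), len(points[0])
    xs = sp.symbols(f'x0:{n}')
    # smallest degree d with m < C(n+d, d): more unknown coefficients than constraints
    d = 0
    while comb(n + d, d) <= m:
        d += 1
    monomials = [sp.Mul(*c) for k in range(d + 1)
                 for c in combinations_with_replacement(xs, k)]
    # evaluation matrix: one row per point, one column per monomial of degree <= d
    M = sp.Matrix([[mon.subs(dict(zip(xs, p))) for mon in monomials] for p in points])
    coeffs = M.nullspace()[0]     # non-trivial, since M has more columns than rows
    return sp.expand(sum(c * mon for c, mon in zip(coeffs, monomials)))

pts = [(0, 0), (1, 2), (2, 3), (3, 5)]
P = vanishing_polynomial(pts)
print(P)
# the polynomial really does vanish at the four chosen points
print([P.subs(dict(zip(sp.symbols('x0:2'), p))) for p in pts])
```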

3. Dvir’s Proof

The basic argument in Dvir’s proof of the finite field Kakeya conjecture is to assume that you have a Kakeya set that is smaller than {Cq^d}. This allows us to find a non-zero polynomial of degree less than {q} vanishing on our Kakeya set. We will then show that the top-degree homogeneous part of such a polynomial must vanish at every point of {{\mathbb F}_q^d}, which forces that homogeneous part to be the zero polynomial, contradicting the fact that the polynomial was non-zero.

Proof: Let {K} be a Kakeya set in {{\mathbb F}_q^d}. Suppose by way of contradiction that {|K|<Cq^d} for some constant {C}, which we will choose during the course of the proof. By the fundamental lemma (or rather its proof), there exists a non-zero polynomial {P} in {{\mathbb F}_q[x_1,\ldots,x_d]} vanishing on {K} whose degree {n} is the least integer satisfying {|K|<{n+d\choose d}}. Choosing {C\leq\frac 1{d!}} ensures that {|K|<\frac{q^d}{d!}\leq{q-1+d\choose d}}, so that {n\leq q-1<q}.

Fix a point {\bold{y}} in {{\mathbb F}_q^d}. Writing {P=\sum_{i=0}^n P_i} as the sum of its homogeneous parts (so that every monomial in {P_i} has degree {i}), we wish to show that {P_n(\bold{y})=0}. Since {K} is a Kakeya set, there exists a point {\bold{x}} in {{\mathbb F}_q^d} such that the set {\{\bold{x}+t\bold{y}\colon t\in{\mathbb F}_q\}} is a subset of {K}. Let {\tilde{P}(t)=P(\bold{x}+t\bold{y})}; then {\tilde{P}} is a polynomial in {{\mathbb F}_q[t]} of degree at most {n}. Since {P} vanishes on {K} we have {\tilde{P}(t)=0} for each {t} in {{\mathbb F}_q}. Thus {\tilde{P}} has {q} roots but degree at most {n<q}, so it vanishes identically.

Since {\tilde{P}} vanishes identically, its coefficients must all be zero. The coefficients of {\tilde{P}} are polynomials in {\bold{x}} and {\bold{y}}, and the coefficient of {t^n} is exactly {P_n(\bold{y})}: expanding {P(\bold{x}+t\bold{y})}, the only terms that contribute to {t^n} come from the degree-{n} monomials of {P}, and their total contribution is {P_n(\bold{y})t^n}. Thus we have shown that {P_n(\bold{y})=0}.

Since {\bold{y}} was arbitrary, {P_n} vanishes at every point of {{\mathbb F}_q^d}. But {P_n} is a non-zero polynomial (it is the top-degree part of {P}) of degree {n<q}, and a non-zero polynomial of degree less than {q} cannot vanish at every point of {{\mathbb F}_q^d} (by the Schwartz–Zippel lemma, or by an easy induction on {d}). This contradiction completes the proof. \Box
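
As a quick sanity check on the key observation in the proof (that the coefficient of {t^n} in {P(\bold{x}+t\bold{y})} is {P_n(\bold{y})}), here is a small sympy computation with a polynomial of my own choosing; it is just an illustration, not part of the argument.

```python
import sympy as sp

t, x1, x2, y1, y2 = sp.symbols('t x1 x2 y1 y2')

P = x1**3 - 2*x1*x2**2 + x2**2 + 5         # an arbitrary polynomial of degree n = 3
P_top = x1**3 - 2*x1*x2**2                  # its degree-3 homogeneous part P_n

# restrict P to the line x + t*y and expand in powers of t
restricted = sp.expand(P.subs({x1: x1 + t*y1, x2: x2 + t*y2}, simultaneous=True))

# the coefficient of t^3 should be P_n evaluated at y
diff = restricted.coeff(t, 3) - P_top.subs({x1: y1, x2: y2}, simultaneous=True)
print(sp.simplify(diff))   # 0
```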

4. The Joints Problem

In the same survey, Wolff describes another discrete incidence problem that is related to the Kakeya conjecture: the joints problem. A joint in {{\mathbb R}^d} is a point where {d} lines with linearly independent directions intersect. The joints problem is to find an upper bound on the number of joints formed by {n} lines in {{\mathbb R}^d}.

A lower bound on the maximum number of joints in {{\mathbb R}^3} is given by the lattice points in the cube {[0,\sqrt{n}]\times [0,\sqrt{n}]\times [0,\sqrt{n}]}. Each face of the cube contains roughly {n} lattice points, and we take as our set of lines the lines through these lattice points that are parallel to the {x}, {y}, or {z} axis—this gives us roughly {3n} lines. There are roughly {n^{3/2}} lattice points inside the cube, and these points are exactly the set of joints formed by our lines. The conjecture asserts that this is sharp: {n} lines determine at most {Cn^{3/2}} joints.
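
Here is a quick numerical illustration of the grid example (the variable names are mine): as the side length {N} grows, the ratio of joints to (number of lines)^{3/2} stays bounded away from zero, so roughly {n} lines really do give on the order of {n^{3/2}} joints.

```python
# Axis-parallel lines through the lattice points of an N x N x N grid:
# about 3*N^2 lines, and every one of the N^3 lattice points is a joint.
for N in (5, 10, 20, 40, 80):
    lines = 3 * N**2
    joints = N**3
    print(N, lines, joints, round(joints / lines**1.5, 4))   # ratio ~ 1/3^{3/2} ~ 0.19
```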

After Dvir’s proof of the finite field Kakeya problem, there were hopes that the polynomial method could be applied to problems in {{\mathbb R}^d}. The combined genius of Larry Guth and Nets Katz produced the first such successful application, which was further generalized and simplified by a host of researchers. I will give a simplified proof due to Kaplan, Sharir, and Shustin.

The basic outline of their proof is this: assume that the set of joints is larger than we believe it should be, which allows us to find a non-zero polynomial of relatively low degree vanishing on all of the joints and on all of the lines forming the joints. We will show that if a polynomial vanishes on all the lines forming a joint, then the gradient of the polynomial must vanish at that joint too. Iterating this argument, we will show that all of the iterated partial derivatives of the polynomial vanish at the joints. Since a polynomial whose partial derivatives of every order vanish at a point must be identically zero, this shows that the coefficients of the polynomial are all zero, a contradiction.

Theorem 3 (Guth and Katz 2009 for {d=3}; Kaplan, Sharir, and Shustin for general {d}) Let {L} be a set of {n} lines in {{\mathbb R}^d}. If {J_L} is the set of joints formed by {L}, then there exists a constant {C}, depending only on {d}, such that {|J_L|\leq Cn^{d/(d-1)}}.

Proof: Let’s set {m=|J_L|} for convenience. Suppose by way of contradiction that {m>Cn^{d/(d-1)}}, where {C} is a constant that we will choose during the course of the proof.

Let {\rho=m/2n}. We will refine {L} so that each remaining line contains at least {\rho} joints. We begin by picking a line {\ell} in {L}. If {\ell} contains fewer than {\rho} of the remaining joints, we discard {\ell} and remove all of the joints on {\ell} from {J_L}. We continue this process until every remaining line contains at least {\rho} of the remaining joints. Call the resulting sets {L'} and {J_L'}, and note that every point of {J_L'} is a joint of {L'}. Since each of the at most {n} discarded lines removes fewer than {\rho}=m/2n joints, we removed at most {m/2} joints from {J_L}, so {m\geq|J_L'|\geq m/2}.
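
As an aside, the pruning step is easy to make concrete. Here is a sketch of my own (the representation of the configuration as a map from each line to the set of joints on it, and the function name, are my choices); it is only meant to illustrate the refinement, not to be part of the proof.

```python
def refine(joints_on_line, rho):
    """Repeatedly discard a line with fewer than rho surviving joints, removing
    its joints along with it, until every remaining line contains at least rho
    of the surviving joints.  `joints_on_line` maps each line to its joints."""
    lines = {l: set(js) for l, js in joints_on_line.items()}
    removed = set()
    changed = True
    while changed:
        changed = False
        for l in list(lines):
            if len(lines[l] - removed) < rho:
                removed |= lines[l]        # discard the line and its joints
                del lines[l]
                changed = True
    surviving = set().union(*lines.values()) - removed if lines else set()
    return set(lines), surviving

# toy example with rho = 2: line 'b' is discarded and joint 3 goes with it
print(refine({'a': {1, 2, 3}, 'b': {3}, 'c': {1, 2}}, rho=2))
```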

Next we will find a non-zero polynomial that vanishes on {J_L'} and all of the lines of {L'}. Since {|J_L'|\leq m}, the fundamental lemma provides a non-zero polynomial {P} of degree {r\leq\lceil(d!m)^{1/d}\rceil<2(d!m)^{1/d}} that vanishes on {J_L'}. We would also like {P} to vanish on all of the lines of {L'}. Because {P} has degree {r}, {P} will vanish on any line that contains more than {r} zeros of {P}. By our refining process, each line of {L'} contains at least {\rho} zeros of {P}. Since {\rho=m/2n>\frac 12 C^{(d-1)/d}m^{1/d}}, we can make {\rho>r} by choosing {C>4^{d/(d-1)}(d!)^{1/(d-1)}}, in which case {P} vanishes on all of the lines of {L'}, as desired.

The polynomial {P} has a peculiar property: at each point {\bold{x}} in {J_L'}, {P} vanishes along {d} lines {\ell_1,\ldots,\ell_d} in {L'}, which run in independent directions. Suppose that {\ell_i} runs in direction {\bold{y}_i}, so that {\ell_i=\{\bold{x}+t\bold{y}_i\colon t\in{\mathbb R}\}}. The directional derivative of {P} at {\bold{x}} in the direction {\bold{y}_i} must be zero, since {P} vanishes identically along {\ell_i}; that is, {\nabla P(\bold{x})\cdot\bold{y}_i=0}. The direction vectors {\bold{y}_1,\ldots,\bold{y}_d} span all of {{\mathbb R}^d}, because they are independent. Thus {\nabla P(\bold{x})} is perpendicular to every vector in {{\mathbb R}^d}, and so {\nabla P(\bold{x})=0}. Since {\bold{x}} was arbitrary, {\nabla P} vanishes at every point of {J_L'}.

Now we can iterate the preceding argument: since the components of {\nabla P} vanish on {J_L'} and have degree less than {r}, and each line of {L'} contains at least {\rho>r} points of {J_L'}, they also vanish on every line of {L'}. If we take one component of {\nabla P}, say {P_{x_1}}, we can use the fact that {P_{x_1}} vanishes on the lines of {L'} to show that {\nabla P_{x_1}} vanishes on {J_L'}. Continuing in this way, we see that every iterated partial derivative of {P} vanishes on {J_L'}. In particular, all of the partial derivatives of {P}, of every order, vanish at a single point of {J_L'} (which is non-empty), and since a polynomial equals its Taylor expansion about any point, every coefficient of {P} must be zero. This contradicts the fact that {P} is a non-zero polynomial. \Box

5. Bibliography

Here is a short bibliography for each section.

The Kakeya Problem:

Dvir’s Proof of the Finite Field Kakeya Problem:

The Joints Problem:

  • Guth and Katz’s 2009 paper (difficult read)
  • Kaplan, Sharir, and Shustin’s short proof
  • Quilodran’s summary

I would like to record a few propositions about the number of monomials of a given degree in the ring of polynomials in several variables. We begin with a basic counting problem: suppose we have {n} boxes and {k} balls—how many ways can we place the balls in the boxes? The boxes may be empty, or they may contain any number of balls; we only require that all {k} balls are placed. There is a clever way of reducing this problem to the easier problem of counting the number of ways to select {k} items out of a larger set.

Proposition 1 The number of ways to place {k} balls into {n} boxes is {{n+k-1\choose k}}.

Proof: We will represent the problem typographically, using {n+1} vertical bars to form the boxes, and representing the balls by dots. Thus if {n=3} and {k=4}, one arrangement might look like {|\bullet||\bullet\bullet\bullet|}, where the first box contains one ball, the second box is empty, and the last box contains three balls.

Notice that we have {n+k+1} symbols. Since the first and last symbols are always bars, out of the {n+k-1} inside symbols we must choose {k} of these to be dots. The number of ways to choose {k} dots out of {n+k-1} symbols is {{n+k-1\choose k}}, and since every choice of {k} dots corresponds to a way of placing {k} balls into {n} boxes, our desired count is {{n+k-1\choose k}}. \Box
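
Here is a quick brute-force check of this count (the function name is mine): it enumerates all ways of distributing {k} indistinguishable balls among {n} boxes and compares the total with {{n+k-1\choose k}}.

```python
from itertools import product
from math import comb

def count_placements(n, k):
    """Count n-tuples of non-negative integers summing to k, i.e. the ways to
    put k unlabelled balls into n boxes."""
    return sum(1 for counts in product(range(k + 1), repeat=n) if sum(counts) == k)

for n, k in [(3, 4), (2, 5), (4, 3)]:
    print((n, k), count_placements(n, k), comb(n + k - 1, k))   # the two counts agree
```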

With this proposition in hand, we can proceed to counting monomials. First, let’s recall some definitions. Let {R} be any commutative ring (e.g. the integers {{\mathbb Z}}), and let {A=R[x_1,\ldots,x_n]} be the ring of polynomials over {R} in {n} variables. A monomial in {A} is a product of the variables: {x_1^{\alpha_1}\cdots x_n^{\alpha_n}}. The degree of a monomial is the sum of its exponents: {\deg(x_1^{\alpha_1}\cdots x_n^{\alpha_n})=\alpha_1+\cdots+\alpha_n}. For example, the monomials of degree three in {{\mathbb Z}[x,y]} are {x^3}, {x^2y}, {xy^2}, and {y^3}.

First we count monomials of fixed degree.

Proposition 2 The number of monomials in {A=R[x_1,\ldots,x_n]} of degree {d} is {{n+d-1\choose d}}.

Proof: We can reduce the problem of counting monomials to the problem of counting balls in boxes. We create a box for each variable, and we let the degree of each variable be the number of balls in its box. Using the example from before, if {n=3} and {d=4}, a possible arrangement is {|\bullet||\bullet\bullet\bullet|}, which corresponds to the monomial {x_1x_3^3}. Of course we could use the monomial to place balls in boxes, so the two counting problems are equivalent. Hence the number of monomials in {A} of degree {d} is {{n+d-1\choose d}}, the number of ways to place {d} balls in {n} boxes. \Box
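
As a sanity check (a sketch of my own, with a hypothetical helper name), here is a short enumeration of the degree-{d} monomials in {n} variables, recorded as exponent vectors, compared against {{n+d-1\choose d}}.

```python
from itertools import combinations_with_replacement
from math import comb

def monomials_of_degree(n, d):
    """Enumerate the exponent vectors (a_1,...,a_n) with a_1 + ... + a_n = d."""
    exps = set()
    for combo in combinations_with_replacement(range(n), d):
        e = [0] * n
        for i in combo:
            e[i] += 1        # each chosen index adds one to that variable's exponent
        exps.add(tuple(e))
    return exps

for n, d in [(2, 3), (3, 4), (4, 5)]:
    print((n, d), len(monomials_of_degree(n, d)), comb(n + d - 1, d))   # the counts agree
```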

Next we’ll count the number of monomials of degree less than or equal to {d}.

Proposition 3 The number of monomials in {A=R[x_1,\ldots,x_n]} of degree less than or equal to {d} is {{n+d\choose d}}.

Proof: Again we reduce the problem of counting monomials to the problem of counting balls in boxes, but this time we add an extra box to get the monomials of lower degree. So if {n=3} and {d=4}, a possible arrangement is {|\bullet||\bullet\bullet|\bullet|}, which corresponds to {x_1x_3^2}. Notice that the dot in the last box isn’t counted, so our monomial is degree three instead of degree four. Since we have {n+1} boxes and {d} balls, the number of possibilities is {{n+d\choose d}}, as desired. \Box

We also could have proved this using the previous proposition by noting that the number of monomials in {n} variables of degree less than or equal to {d} is the same as the number of monomials in {n+1} variables of degree equal to {d}. To get from the monomials of degree {d} in {n+1} variables to the monomials of degree at most {d} in {n} variables, we simply substitute 1 into the last variable. To get back the other way we must “homogenize.” For example, with {n=2} and {d=5}, the monomial {x_1^2x_2} goes to the degree {5} monomial {x_1^2x_2x_3^2}.

The last two propositions yield a nice corollary. Since the number of monomials in {n} variables of degree less than or equal to {d} is the sum of the number of monomials of degree equal to {0,1,2,\ldots, d}, we have a combinatorial proof of the identity

\displaystyle  \sum_{k=0}^d{n+k-1\choose k}={n+d\choose d}.
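
Here is a quick numerical check of the identity for small {n} and {d} (just a sanity check, of course; the combinatorial proof above is the real argument).

```python
from math import comb

for n in range(1, 6):
    for d in range(6):
        lhs = sum(comb(n + k - 1, k) for k in range(d + 1))
        assert lhs == comb(n + d, d)
print("identity verified for n <= 5, d <= 5")
```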

-Incidence problems and bipartite graphs-

An important problem in combinatorial geometry is to study the number of incidences between a set of points and a set of lines (or curves, surfaces, etc.). More specifically, let {P} be a finite subset of {\mathbb{R}^n} and let {L} be a finite set of lines (or curves, surfaces, etc.). We define the set of incidences between {P} and {L} by {I(P,L):=\{(p,\ell)\in P\times L\colon p\in\ell\}}. Our problem is now to study the size of {I(P,L)}. Of course we always have the trivial bound {|I(P,L)|\leq |P||L|}, but we would like to do better.

Introducing the set of incidences of {P} and {L} may just seem like good notation, but in fact it characterizes our incidence problem in terms of graph theory: we define the incidence graph {G(P,L)} of {P} and {L} to be the graph whose vertices are the points of {P} and the lines of {L}, with a point {p} connected to a line {\ell} if {p\in\ell}. So the set of edges of {G(P,L)} is {I(P,L)}.

The incidence graph {G(P,L)} is a special type of graph called a bipartite graph. A bipartite graph is a graph {G} whose vertices form two disjoint sets, {A} and {B}, such that no edge joins two points of {A} and no edge joins two points of {B}; that is, the edges of {G} are of the form {\{a,b\}} with {a} in {A} and {b} in {B}. We will write {G=G(A,B)} to specify the vertex sets and we write {E(G)} or {E(A,B)} for the edges of {G}.

As with incidence graphs, we have the trivial bound {|E(A,B)|\leq |A||B|} and in general we can’t do any better. In the next section we will prove non-trivial bounds on the number of edges of a bipartite graph under certain restrictions. By geometric considerations, we’ll see that many incidence graphs satisfy these restrictions, thus giving us non-trivial bounds on {|I(P,L)|}. It turns out that these bounds are actually sharp for point-line incidences over finite fields.

-Restricted subgraph bound-

We noted above that the trivial bound {|E(A,B)|\leq |A||B|} cannot be improved in general. We call a graph that achieves equality a complete bipartite graph. That is, a complete bipartite graph is a bipartite graph with vertex sets {A} and {B} such that every vertex of {A} is connected to every vertex of {B}. If {|A|=s} and {|B|=t}, then we denote the complete bipartite graph with vertex sets {A} and {B} by {K_{s,t}}. If we consider only bipartite graphs {G(A,B)} that do not contain a copy of {K_{s,t}} as a subgraph, for some {s\leq|A|} and {t\leq|B|}, we should be able to improve this trivial bound:

Restricted Subgraph Bound: Let {G=G(A,B)} be a bipartite graph that does not contain a {K_{s,t}} as a subgraph, where “{s}” corresponds to vertices of {A} and “{t}” corresponds to vertices of {B}. The cardinality of the set {E(G)} of edges of {G} satisfies

\displaystyle |E(G)|\leq \sqrt[s]{t-1}|A||B|^{1-1/s}+s|B|.

Before we prove the restricted subgraph bound, let’s note that a point-line incidence graph contains no {K_{2,2}}: two distinct points lie on exactly one common line, so no two points can both be incident to two distinct lines. Thus {|I(P,L)|\leq |P||L|^{1/2}+2|L|}.
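
To illustrate the sharpness over finite fields mentioned earlier, here is a hypothetical computation of my own: take {P} to be all of {{\mathbb F}_p^2} and {L} to be all lines in {{\mathbb F}_p^2} for a small prime {p}, and compare the number of incidences with the bound just stated. The representation of lines and the function name are my choices.

```python
from itertools import product

p = 11   # a small prime
P = list(product(range(p), repeat=2))
# lines y = m*x + b (mod p), plus the vertical lines x = c
L = [('slope', m, b) for m in range(p) for b in range(p)] + \
    [('vert', c, None) for c in range(p)]

def incident(point, line):
    x, y = point
    kind, a, b = line
    return y == (a * x + b) % p if kind == 'slope' else x == a

incidences = sum(incident(pt, l) for pt in P for l in L)
bound = len(P) * len(L) ** 0.5 + 2 * len(L)
print(incidences, round(bound))   # 1452 vs 1654: both on the order of p^3
```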

Proof: For now, let’s assume that each vertex in {B} has degree greater than {s}, so that in particular {s|B|\leq |E(G)|}. The general case will follow easily from this particular case.

We define the set {F} of “fans” in {G}:

\displaystyle F:=\{(a_1,\ldots,a_s,b)\colon (a_1,b),\ldots,(a_s,b)\in E(G), a_i\not=a_j\mbox{ for }i\not=j\}.

We will count {|F|} in two ways. First note that

\displaystyle |F|=s!\sum_{b\in B}{\deg(b)\choose s}. \ \ \ \ \ (1)

Next, note that any {s}-tuple {(a_1,\ldots, a_s)} of distinct vertices of {A} contributes at most {t-1} fans {(a_1,\ldots,a_s,b_i)}, {i=1,\ldots,t-1}, to {F} (if it contributed {t} fans, then {G} would contain a {K_{s,t}}). Thus

\displaystyle |F|\leq (t-1)s!{|A|\choose s}. \ \ \ \ \ (2)

Combining the counts in (1) and (2), we have

\displaystyle s!\sum_{b\in B}{\deg(b)\choose s}\leq (t-1)s!{|A|\choose s}. \ \ \ \ \ (3)

Since {{x\choose s}} is convex in {x}, the sum on the left side of (3) is at least {|B|} times its value at the average degree {|E(G)|/|B|}:

\displaystyle s!\sum_{b\in B}{\deg(b)\choose s}\geq|B|s!{|E(G)|/|B|\choose s}\geq |B|\left(\frac{|E(G)|}{|B|}-s\right)^s. \ \ \ \ \ (4)

Combining (4) with (3), we have

\displaystyle |B|\left(\frac{|E(G)|}{|B|}-s\right)^s\leq (t-1)s!{|A|\choose s}\leq (t-1)|A|^s.

Solving for {|E(G)|} yields the desired result:

\displaystyle |E(G)|\leq \sqrt[s]{t-1}|A||B|^{1-1/s}+s|B|.

\Box

By symmetry we have the bound

\displaystyle |E(G)|\leq \sqrt[t]{s-1}|A|^{1-1/t}|B|+t|A|.

Thus we may conclude that

\displaystyle |E(G)|\leq\min\{\sqrt[s]{t-1}|A||B|^{1-1/s}+s|B|, \sqrt[t]{s-1}|A|^{1-1/t}|B|+t|A|\}.
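
If one just wants numbers, here is a small helper of my own (the name kst_bound is mine, and it uses floating-point arithmetic) that evaluates the minimum of the two forms of the bound.

```python
def kst_bound(A, B, s, t):
    """The restricted subgraph bound for a bipartite graph with parts of sizes
    A and B containing no K_{s,t} (s on the A side, t on the B side), taking
    the better of the two symmetric forms."""
    first = (t - 1) ** (1 / s) * A * B ** (1 - 1 / s) + s * B
    second = (s - 1) ** (1 / t) * A ** (1 - 1 / t) * B + t * A
    return min(first, second)

print(kst_bound(1000, 100, 2, 2))   # about 5162, far below the trivial bound of 100000
```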

-Acknowledgements-

The proof I gave here is based on Jacob Fox’s lecture notes from a combinatorics class at MIT.

The popularity lemma is a simple lemma about refining the vertices of a bipartite graph by degree. Since it gives a lower bound on the number of edges that remain after refining, it can be used to show that not too many edges are lost by the refinement.

Popularity Lemma: Let {G(A,B)} be a bipartite graph. Let {B'} be the set of vertices in {B} with degree at least {\mu}, where {\mu\geq 0}. Then the cardinality of the set of edges from {A} to {B'} satisfies

\displaystyle |E(A,B')|\geq |E(A,B)|-\mu|B|.

We can generalize slightly: if {\rho} is a constant such that {|E(A,B)|\geq\rho|B|}, then

\displaystyle |E(A,B')|\geq (\rho-\mu)|B|.

The first version can be recovered by setting {\rho =|E(A,B)|/|B|}, which is the average degree of a vertex in {B}. From this point of view, the bound is only useful when {\mu} is smaller than the average degree. In particular, if {\mu=c\rho}, where {0<c<1} and {\rho} is the average degree of a vertex in {B}, then

\displaystyle |E(A,B')|\geq (1-c)|E(A,B)|.

Proof: We may divide the vertices of {B} into vertices with degree at least {\mu} and vertices with degree less than {\mu}: {B=B'\cup B\backslash B'}. Summing degrees over the vertices of {B} yields

\displaystyle |E(A,B)|=\sum_{b\in B}\deg(b)= |E(A,B')|+\sum_{b\in B\backslash B'}\deg(b).

Since each vertex in {B\backslash B'} has degree less than {\mu}, we have

\displaystyle \sum_{b\in B\backslash B'}\deg(b)\leq \mu|B\backslash B'|\leq \mu|B|.

Since {|E(A,B)|\geq\rho|B|}, we have

\displaystyle \rho|B|\leq |E(A,B')|+\mu|B|.

Rearranging yields the desired result. \Box
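
Here is a toy illustration of the refinement described by the lemma; the function name and the example graph are my own.

```python
from collections import defaultdict

def popular_refine(edges, B, mu):
    """Keep the edges whose B-endpoint has degree at least mu; the popularity
    lemma guarantees that at least |E(A,B)| - mu*|B| edges survive."""
    deg = defaultdict(int)
    for _, b in edges:
        deg[b] += 1
    popular = {b for b in B if deg[b] >= mu}
    return [(a, b) for a, b in edges if b in popular]

A, B = range(5), ['u', 'v', 'w']
edges = [(a, 'u') for a in A] + [(0, 'v'), (1, 'v'), (0, 'w')]
kept = popular_refine(edges, B, mu=2)
print(len(kept), len(edges) - 2 * len(B))   # 7 surviving edges, lower bound is 2
```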

Although I’m sure this is an old result, I first saw this lemma in Guth and Katz’s 2008 paper on the joints problem.

Given an arbitrary set of points in the plane, we would expect most pairs of points to determine distinct lines–this is the “generic” case. The obvious counterexample is when all of the points lie on a single line. Beck’s lemma says that one of these two situations must occur (up to constants) in any finite set of points in the plane.

Beck’s Lemma: Let {P} be a subset of {\mathbb{R}^2} with {|P|=n}. Then one of the following holds:

  1. There is a line containing {cn} points of {P} for some constant {c},
  2. or there are {c'n^2} distinct lines containing at least two points of {P} each.

Note that (2) is the generic case, while (1) is a counterexample to the generic case.

Proof of Beck’s lemma: We say that {a,b\in P} are {j}-connected if the line determined by {a} and {b} contains more than {2^j} but at most {2^{j+1}} points of {P}.

Let {L_j} be the set of lines determined by {P} containing more than {2^j} points of {P}. Then by the Szemerédi-Trotter theorem:

\displaystyle |L_j|2^j\leq |I(P,L_j)|\leq\max\{4n+|L_j|,4n^{2/3}|L_j|^{2/3}+|L_j|\}.

If the max is {4n+|L_j|}, then we have {|L_j|\leq 4n/(2^j-1)} and if the max is {4n^{2/3}|L_j|^{2/3}+|L_j|}, then we have {|L_j|\leq 64n^2/(2^{j}-1)^3}. Thus

\displaystyle |L_j|\lesssim\frac n{2^j}+\frac{n^2}{2^{3j}}.

If {a} and {b} are {j}-connected, then they lie on a line of {L_j}. Thus the number of lines containing a {j}-connected pair is at most {|L_j|}. Each such line contains at most {{2^{j+1}\choose 2}} pairs of points, so

\displaystyle \# j\mbox{-connected pairs}\lesssim 2^jn+\frac{n^2}{2^j}.

Let {C} be a positive constant that we will determine later. From now on we focus our attention on {j} such that {C\leq 2^j\leq n/C}. For this range of {j}, the number of {j}-connected pairs is bounded above by {n^2/C}:

\displaystyle \sum_{C\leq 2^j\leq n/C }\left(2^jn+\frac{n^2}{2^j}\right)\lesssim \frac{n^2}C,

where we bounded the sum of the first terms by twice the largest term (i.e. {1+2+\cdots+2^j\leq 2^{j+1}}) and we bounded the sum of the second terms using the geometric series ({1+2^{-1}+2^{-2}+\cdots = 2}).

Since there are {{n\choose 2}=n(n-1)/2} pairs of points, if {C} is large enough then there are more than {n^2/4} pairs of points that are not {j}-connected for any {j} with {C\leq 2^j\leq n/C}. If one of these pairs is {j}-connected for some {2^j>n/C}, then the line that it determines contains at least {n/C} points, which gives us (1). Otherwise we have more than {n^2/4} pairs that are {j}-connected for some {2^j<C}; each such pair lies on a line containing at most {2C} points of {P}, and such a line contains at most {{2C\choose 2}} pairs, so there are at least {\frac{n^2/4}{{2C\choose 2}}\geq c'n^2} distinct lines containing at least two points of {P}, which is option (2). QED
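
For small point sets one can check the dichotomy directly by brute force. The helpers below are a sketch of my own (names hypothetical), and the parabola example is chosen because no three of its points are collinear, so it lands squarely in the generic case (2).

```python
from itertools import combinations
from collections import defaultdict
from fractions import Fraction

def line_through(p, q):
    """A canonical key (slope and intercept, or 'vert' and x) for the line
    through two distinct points of the rational plane."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return ('vert', Fraction(x1))
    m = Fraction(y2 - y1, x2 - x1)
    return ('slope', m, Fraction(y1) - m * x1)

def line_statistics(points):
    """Return (number of distinct lines containing >= 2 of the points,
    maximum number of the points on a single line)."""
    on_line = defaultdict(set)
    for p, q in combinations(points, 2):
        on_line[line_through(p, q)].update((p, q))
    return len(on_line), max(len(s) for s in on_line.values())

pts = [(x, x * x) for x in range(10)]        # points on the parabola y = x^2
print(line_statistics(pts))                  # (45, 2): all C(10,2) lines are distinct
```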

Beck’s original proof did not use the Szemerédi–Trotter theorem—in fact, Beck’s paper was published in the same issue of Combinatorica as Szemerédi and Trotter’s paper. Instead he proved a bound on the number of lines containing between {2^j} and {2^{j+1}} points:

\displaystyle |L_j|-|L_{j+1}|\lesssim\frac{n^2}{2^{(2+\delta)j}},

where {\delta=1/20} and {1\leq 2^j\leq(2n)^{1/2}}. This estimate is sufficient for our argument.