5. A Phragmén-like Heuristic¶
The sequential Phragmén's method is fast, works well in practice, and gives a solution that satisfies the Proportional Justified Representation (PJR) axiom. However, we identify two problems with it. First, it does not give a constant-factor approximation to the maximin support objective. Second, it lacks versatility, in the sense that it cannot be used to improve upon an arbitrary solution given as input. We describe a heuristic closely related to the sequential Phragmén's method, which takes as input an arbitrary partial solution, defines scores over the validator candidates, and uses them to add a new validator to the solution, removing another validator if necessary.
Checking whether a committee satisfies the PJR property is NP-hard. However, we define a stronger property called PJR($d$), which depends on a parameter $d$. We show that it implies PJR (for an adequate value of $d$), and that it can be checked efficiently using our heuristic. We also build from our heuristic a "PJR($d$)-enabler"; that is, an algorithm that takes as input a solution of minimum support $d$, performs some swaps using the heuristic, and returns a solution of minimum support at least $d$ that observes PJR($d$) (and also PJR). Finally, we provide an efficient factor-3.15 approximation algorithm, which starts with an empty set and alternates between using our heuristic to elect a new validator, and running a flow-balancing algorithm to improve the support distribution on the current set.
Notation¶
Recall that an instance of the NPoS election problem is a graph $(N\cup V, E)$ representing the trust relations between nominators and validator candidates, a vector $b\in\mathbb{R}_{\geq 0}^N$ of nominator budgets, and a target number $m$ of validators to elect.
A support distribution from nominators to validators is represented by a vector $w\in\mathbb{R}_{\geq 0}^E$ of edge weights. Such a vector is called affordable if, besides the non-negativity constraints, it observes the budget constraint $\sum_{v\in V: \ nv\in E} w_{nv} \leq b_n$ for each nominator $n\in N$. Furthermore, it is called maximally affordable with respect to a validator set $S\subseteq V$ if $\sum_{v\in S: \ nv\in E} w_{nv} = b_n$ for each $n\in N$ having at least one neighbor in $S$. By a solution we mean a pair $(S,w)$ where $S\subseteq V$ with $|S|\leq m$ and $w$ is maximally affordable for $S$. Our algorithm will return solutions of size $|S|=m$, but we will often consider solutions of smaller sizes in intermediate steps and in the analyses.
Given an edge weight vector $w$ and a validator $v$, the support of $v$ relative to $w$ is $supp_w(v) := \sum_{n\in N: \ nv\in E} w_{nv}$. For a solution $(S,w)$, we extend this definition to $supp_w(S) := \min_{v\in S} supp_w(v)$. The maximin support objective is maximizing $supp_w(S)$ over all feasible solutions to the given instance.
The basic heuristic¶
Suppose that $(S,w)$ is a maximally affordable solution with $|S|\leq m$, and we wish to add a new candidate $v'\in V\setminus S$ and give it some support $d$. To do this, in general we will need to reduce the support of any other candidate $v \in S$ who has a neighbor nominator $n$ in common with $v'$. When we do this, we want to ensure that we do not reduce the support of $v$ below $d$ (assuming it was previously above $d$). A simple way to ensure this is to reduce the weight on edge $nv$ from $w_{nv}$ to $w_{nv}\cdot(d/supp_w(v))$, and assign the difference to edge $nv'$. That way, it is clear that even if all nominators $n$ supporting $v$ are also neighbors of $v'$, the new support of $v$ does not go below $d$.
Thus, if for each $n\in N$ and $d\geq 0$ we define the nominator's slack as
$$slack_w(n,d) := b_n - \sum_{v\in S: \ nv\in E, \ supp_w(v)\leq d} w_{nv} - \sum_{v\in S: \ nv\in E, \ supp_w(v)> d} w_{nv}\cdot \frac{d}{supp_w(v)},$$
and for each $v'\in V\setminus S$ and $d\geq 0$ we define that candidate's prescore as
$$prescore_w(v',d) := \sum_{n\in N: \ nv'\in E} slack_w(n,d),$$
then we can add $v'$ to the solution with support $prescore_w(v',d)$, while not making any other validator's support decrease from over $d$ to under $d$. In particular, if $prescore_w(v',d)\geq d$, the new solution $(S \cup \{v'\},w')$ has $supp_{w'}(S\cup\{v'\})\geq \min\{supp_w(S), \ d\}$.
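These two definitions translate directly into code. The following is a hypothetical sketch only; the dict-based data layout (budgets, weights and adjacency keyed by nominator and validator ids) is our own assumption for illustration, not part of the method.

```python
# Hypothetical sketch of slack and prescore, computed directly from the
# definitions above. The dict-based data layout is an illustrative assumption.

def slack(n, d, budgets, weights, supports):
    """slack_w(n, d): budget that nominator n can redirect to a new candidate
    without pushing any currently elected validator from above d to below d."""
    s = budgets[n]
    for v, w_nv in weights.get(n, {}).items():
        if supports[v] <= d:
            s -= w_nv                        # this support is fully committed
        else:
            s -= w_nv * d / supports[v]      # keep only the scaled-down share
    return s

def prescore(v_new, d, budgets, weights, supports, adjacency):
    """prescore_w(v', d): total slack over the nominators adjacent to v'."""
    return sum(slack(n, d, budgets, weights, supports)
               for n in adjacency[v_new])
```

For instance, a single nominator of budget 10 fully supporting one validator of support 10 has slack 5 at threshold $d=5$, so a new commonly trusted candidate gets prescore exactly 5 there.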
The resulting heuristic, which adds a new candidate to the initial solution, is formalized below.
Algorithm: InsertCandidate($S,w,v',d$)

1. Let $S'\leftarrow S\cup\{v'\}$ and $w'\leftarrow w$.

2. For all $n$ with $nv' \in E$:
 - If $\nexists v\in S$ with $nv\in E$, set $w'_{nv'}\leftarrow b_n$; otherwise set $w'_{nv'}\leftarrow b_n - \sum_{v\in S: \ nv\in E} w_{nv}$;
 - For all $v\in S$ with $w_{nv} > 0$ and $supp_w(v) > d$: increase $w'_{nv'}$ by $w_{nv}\cdot(1 - d/supp_w(v))$, and set $w'_{nv}\leftarrow w_{nv}\cdot d/supp_w(v)$;

3. Return $(S',w')$.
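The steps above can be sketched as runnable code; again, the dict-of-dicts layout and helper names are illustrative assumptions, not the production implementation.

```python
# Sketch of InsertCandidate(S, w, v_new, d): shifts each nominator's unused
# budget, plus the excess share of every over-supported neighbor, onto the
# new candidate. Data layout (dicts of dicts) is an illustrative assumption.
import copy

def supports_of(S, w):
    """Compute supp_w(v) for every v in S."""
    supp = {v: 0.0 for v in S}
    for row in w.values():
        for v, x in row.items():
            if v in supp:
                supp[v] += x
    return supp

def insert_candidate(S, w, v_new, d, budgets, adjacency):
    S_new = set(S) | {v_new}
    w_new = copy.deepcopy(w)
    supp = supports_of(S, w)
    for n in adjacency[v_new]:
        row = w_new.setdefault(n, {})
        # budget not yet committed to S goes to v' directly
        row[v_new] = budgets[n] - sum(x for v, x in row.items() if v in S)
        for v in list(row):
            if v in S and row[v] > 0 and supp[v] > d:
                moved = row[v] * (1 - d / supp[v])
                row[v_new] += moved   # excess share moves to v'
                row[v] -= moved       # n's share of v is scaled by d/supp_w(v)
    return S_new, w_new
```

On the single-nominator example (budget 10, one validator of support 10), inserting a commonly trusted candidate at $d=5$ splits the weight evenly, giving both validators support 5.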
The next result follows from the definitions and the discussion above, so its proof is skipped.
Lemma 1: If we run InsertCandidate($S,w,v',d$) for some maximally affordable solution $(S,w)$ with $|S|\leq m$, $v'\in V\setminus S$ and $d\geq 0$ to get $(S',w')$, then
 - $(S',w')$ is maximally affordable,
 - $supp_{w'}(v') = prescore_w(v',d)$,
 - for all $v\in S$ we have that $supp_{w'}(v)\geq \min\{supp_w(v), \ d\}$, and consequently if $prescore_w(v',d)\geq d$, we obtain $supp_{w'}(S')\geq \min\{supp_w(S), \ d\}$,
 - the running time of the algorithm is $O(|E|\cdot |S|)=O(|E|\cdot m)$.
How high can we make $d$ while keeping the property $prescore_w(v',d)\geq d$? We define $score_w(v')$ to be the maximum $d$ such that $prescore_w(v',d) \geq d$. Our heuristic now becomes apparent.
Heuristic: Given a maximally affordable solution $(S,w)$ with $|S|\leq m$, find the candidate $v'\in V\setminus S$ maximizing $score_w(v')$ and execute InsertCandidate($S,w,v',score_w(v')$), so that the new solution $(S',w')$ observes $supp_{w'}(S')\geq \min\{supp_w(S), \ score_w(v')\}$.
This is the core idea of our method. In the remainder of the section we establish how to find the candidate with the largest score efficiently.
Fix a candidate $v'\in V\setminus S$, and consider the function $f(d):=prescore_w(v',d)-d$ in the interval $[0,\infty)$. Notice that this function is continuous and strictly monotone decreasing, and that $score_w(v')$ corresponds to its unique root. We could therefore approximate this root with binary search, which works for any monotone function. However, we can do better. Sort the set of support values $\{supp_w(v): \ v\in S\}$ so that $d_1<d_2<\cdots<d_k$, for some $k\leq |S|$, and note that $prescore_w(v',d)$ is piecewise linear with respect to $d$, namely it is linear in each of the intervals $[0,d_1), \ [d_1, d_2), \cdots, [d_k, \infty)$. By treating $prescore_w(v',d)$ as a linear function in the neighborhood of $d^*=score_w(v')$, and solving for $f(d^*)=0$, we obtain
$$d^* = \frac{\sum_{n: \ nv'\in E}\Big(b_n - \sum_{v\in S: \ nv\in E, \ supp_w(v)\leq d^*} w_{nv}\Big)}{1 + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ supp_w(v)> d^*} w_{nv}/supp_w(v)}.$$
The interesting thing about the previous identity is that the right-hand side stays constant if we replace $d^*=score_w(v')$ by any other value $d$ within the same interval, among the above-defined intervals. This motivates us to define the following score function, for any $v'\in V\setminus S$ and $d\geq 0$:
$$score_w(v',d) := \frac{\sum_{n: \ nv'\in E}\Big(b_n - \sum_{v\in S: \ nv\in E, \ supp_w(v)\leq d} w_{nv}\Big)}{1 + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ supp_w(v)> d} w_{nv}/supp_w(v)}.$$
Function $score_w(v',d)$ is very similar to, but algorithmically more convenient than, function $prescore_w(v',d)$. We remark that for $d=0$, the expression for $1/score_w(v',0)$ corresponds to the notion of score in the sequential Phragmén's method. Hence, the latter can be seen as a special case of our approach.
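The ratio can be evaluated in one pass over the edges incident to $v'$; a sketch under the same assumed dict layout:

```python
# score_w(v', d) as the ratio num(d)/denom(d) defined above; it is piecewise
# constant in d. The dict-based data layout is an illustrative assumption.

def score(v_new, d, budgets, weights, supports, adjacency):
    num = 0.0
    denom = 1.0
    for n in adjacency[v_new]:
        num += budgets[n]
        for v, w_nv in weights.get(n, {}).items():
            if supports[v] <= d:
                num -= w_nv                   # committed support
            else:
                denom += w_nv / supports[v]   # dilution term
    return num / denom
```

For a single nominator of budget 10 fully supporting one validator of support 10, $score_w(v',0)=10/2=5$ for a new commonly trusted $v'$, which is exactly the fixed point $prescore_w(v',5)=5$.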
Lemma 2. Fix a maximally affordable solution $(S,w)$ and a candidate $v'\in V\setminus S$:
(i) Function $score_w(v',d)$ is piecewise constant with respect to $d$, namely it is constant in each of the intervals $[0,d_1), \ [d_1, d_2), \cdots, [d_k, \infty)$, where the values $d_1<d_2<\cdots <d_k$ constitute the set $\{supp_w(v): \ v\in S\}$.
(ii) $d^*=score_w(v')$ is the unique root of function $g(d):=score_w(v',d)-d$; moreover, $g(d)$ is strictly positive for all $d<d^*$ and strictly negative for all $d>d^*$.
(iii) The above-defined value $d^*=score_w(v')$ is equal to $\max_{d\geq 0} score_w(v',d)$.
Proof. Let $num(d)$ and $denom(d)$ be respectively the numerator and the denominator in the definition of $score_w(v',d)$.
It is easy to check that if $d_i<d_j$ are two values such that $\nexists v\in S$ with $d_i \leq supp_w(v)< d_j$, then both $num(d)$ and $denom(d)$ stay constant in the interval $[d_i,d_j)$. This proves point (i).
Now consider function $g(d):=score_w(v',d) - d$:
$$g(d) = \frac{num(d) - d\cdot denom(d)}{denom(d)} = \frac{prescore_w(v',d) - d}{denom(d)} = \frac{f(d)}{denom(d)}.$$
As $denom(d)$ is always strictly positive for $d\geq 0$, functions $g(d)$ and $f(d)$ have the same roots and the same signs, and we already argued that function $f(d):=prescore_w(v',d)-d$ is strictly monotone decreasing and has $d^*=score_w(v')$ as its only root. This proves point (ii).
To prove point (iii), let $d^*=score_w(v')$, and consider first a value $d\leq d^*$. If $score_w(v',d)=score_w(v',d^*)=d^*$ then there is nothing to prove. Otherwise, we have that
$$num(d) = num(d^*) + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d< supp_w(v)\leq d^*} w_{nv} \quad\text{and}\quad denom(d) = denom(d^*) + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d< supp_w(v)\leq d^*} \frac{w_{nv}}{supp_w(v)}.$$
Therefore,
$$num(d) = d^*\cdot denom(d^*) + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d< supp_w(v)\leq d^*} w_{nv} \ \leq \ d^*\cdot denom(d^*) + d^*\cdot \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d< supp_w(v)\leq d^*} \frac{w_{nv}}{supp_w(v)} \ = \ d^*\cdot denom(d),$$
where we used the fact that $w_{nv}\leq d^*\cdot w_{nv}/supp_w(v)$ because $supp_w(v)\leq d^*$. From the previous inequality, we conclude that $d^* \geq num(d)/denom(d) = score_w(v',d)$, as desired.
Similarly, if $d>d^*$ and $score_w(v',d)\neq score_w(v',d^*)$, then
$$num(d) = num(d^*) - \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d^*< supp_w(v)\leq d} w_{nv} \quad\text{and}\quad denom(d) = denom(d^*) - \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d^*< supp_w(v)\leq d} \frac{w_{nv}}{supp_w(v)}.$$
Therefore,
$$num(d) < d^*\cdot denom(d^*) - d^*\cdot \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ nv\in E, \ d^*< supp_w(v)\leq d} \frac{w_{nv}}{supp_w(v)} = d^*\cdot denom(d),$$
where now $w_{nv} > d^*\cdot w_{nv}/supp_w(v)$ whenever $w_{nv}>0$ and $supp_w(v)>d^*$, and thus we conclude that $d^* > num(d)/denom(d) = score_w(v',d)$.
$\square$
Corollary 3. Fix a maximally affordable solution $(S,w)$. Then,
(i) Function $\max_{v'\in V\setminus S} score_w(v',d)$ is constant in each of the above-defined intervals $[0,d_1), \ [d_1, d_2), \cdots, [d_k, \infty)$.
(ii) $d_{\max}:=\max_{v'\in V\setminus S} score_w(v')$ is the unique root of function $h(d):=\max_{v'\in V\setminus S} score_w(v',d)-d$; moreover, $h(d)$ is strictly positive for each $d<d_{\max}$ and strictly negative for each $d>d_{\max}$.
(iii) The above-defined point $d_{\max}$ is equal to $\max_{v'\in V\setminus S}\ \max_{d\geq 0} score_w(v',d)$.
This corollary easily follows from the previous lemma, and its proof is skipped. Notice that the value $d_{\max}=\max_{v'\in V\setminus S} score_w(v')$ is precisely what we need to find in our heuristic, and point (ii) establishes that we can find it with binary search. We define the explicit algorithm next.
Algorithm. CalculateMaxScore($S,w$)

1. Compute the values $0=d_0<d_1<d_2<\cdots<d_k$, where $\{d_1,\cdots,d_k\}=\{supp_w(v): \ v\in S\}$.

2. Let $i_{lo}=0$, $i_{hi}=k$, $i=0$, $d_{\max}=0$.

3. For each $n\in N$, compute $b_n - \sum_{v \in S: \ supp_w(v) \leq d_i} w_{nv}$, and $\sum_{v \in S: \ supp_w(v) > d_i} w_{nv} /supp_w(v)$.

4. For each $v'\in V\setminus S$, compute
$$score_w(v',d_i) = \frac{\sum_{n: \ nv'\in E}\Big(b_n - \sum_{v\in S: \ supp_w(v)\leq d_i} w_{nv}\Big)}{1 + \sum_{n: \ nv'\in E} \ \sum_{v\in S: \ supp_w(v)> d_i} w_{nv}/supp_w(v)}.$$

5. Let $t\leftarrow \max_{v'\in V\setminus S} score_w(v',d_i)$; if $t>d_{\max}$, set $d_{\max}\leftarrow t$ and $v_{\max}\leftarrow$ a candidate attaining the maximum.

6. Let $i'$ be the highest index such that $d_{i'}\leq d_{\max}$, and set $i_{lo}\leftarrow i'$. If $d_{\max}<d_i$, set $i_{hi}\leftarrow i-1$.

7. If $i_{lo}< i_{hi}$, set $i\leftarrow \lceil (i_{lo}+i_{hi})/2 \rceil$ and go back to 3.; else, return $(v_{\max}, d_{\max})$.
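By point (iii) of the corollary, a simpler (if slower) variant just evaluates $score_w(v',\cdot)$ at every breakpoint and takes the maximum; binary search only sharpens the dependence on $|S|$. A sketch of that variant, with the data layout again an illustrative assumption:

```python
# Linear-scan variant of CalculateMaxScore: evaluate score_w(v', d) at every
# breakpoint in {0} ∪ {supp_w(v)} and keep the best candidate. This is
# O(|S|·|E|) rather than O(log|S|·|E|); dict layout is an assumption.

def score(v_new, d, budgets, weights, supports, adjacency):
    num, denom = 0.0, 1.0
    for n in adjacency[v_new]:
        num += budgets[n]
        for v, w_nv in weights.get(n, {}).items():
            if supports[v] <= d:
                num -= w_nv
            else:
                denom += w_nv / supports[v]
    return num / denom

def calculate_max_score(candidates, budgets, weights, supports, adjacency):
    breakpoints = [0.0] + sorted(set(supports.values()))
    best_v, best = None, float('-inf')
    for v_new in candidates:
        s = max(score(v_new, d, budgets, weights, supports, adjacency)
                for d in breakpoints)
        if s > best:
            best_v, best = v_new, s
    return best_v, best
```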
Lemma 4: CalculateMaxScore returns $d_{\max}=\max_{v'\in V\setminus S} score_w(v')$ and a candidate $v_{\max}$ that attains that score, in $O(\log |S|)=O(\log m)$ iterations, where each iteration takes time $O(|E|)$.
Proof: It is easy to verify that each iteration of the algorithm above executes in time $O(|E|)$. From point (i) in the corollary, we can reduce our search for $d_{\max}$ to only the evaluations of function $\max_{v'\in V\setminus S} score_w(v',\cdot)$ over the $O(m)$ points $d_i$. Moreover, by point (ii), we can perform binary search, so that if $\max_{v'\in V\setminus S} score_w(v',d_i)$ is larger than $d_i$ then we can restrict our search to values $d$ larger than the current $d_i$, and otherwise we can restrict our search to values smaller than the current $d_i$. This shows that only $O(\log m)$ iterations are performed. Finally, by point (iii) we can also restrict our search to values $d$ larger than the current $d_{\max}$, thus speeding up our search even more. To finish the proof, we remark that we choose to initialize the index $i$ to zero because it seems to speed up the search in many implementations, but it can be initialized to any other value between $i_{lo}$ and $i_{hi}$.
$\square$
Once we have the score, we can insert candidate $v_{\max}$ into the current solution $(S,w)$ using InsertCandidate(), thus obtaining a new solution $(S',w')$ with $supp_{w'}(S')\geq \min\{supp_w(S), \ d_{\max}\}$, as desired.
(Parameterised) Proportional Justified Representation¶
We can generalise the PJR property to our weighted-votes setting and consider adding a parameter. For each nominator $n\in N$, let $V_n\subseteq V$ be the subset of candidates that are trusted by $n$, i.e. $V_n := \{v\in V: \ nv\in E\}$.
Definition: A committee $S$ (of any size) satisfies Proportional Justified Representation with parameter $d$ (PJR($d$) for short) if there is no subset $N'\subseteq N$ of nominators and integer $t>0$ such that:
a) $\sum_{n\in N'} b_n \geq t\cdot d$,
b) $|\cap_{n\in N'} V_n|\geq t$, and
c) $|S\cap (\cup_{n\in N'} V_n)|<t$.
In other words, if there is a set $N'$ of nominators who can "afford" to provide a support of $d$ to each one of $t$ commonly trusted candidates, they will indeed be represented by at least $t$ candidates in $S$ (though not necessarily commonly trusted). Notice that if a committee satisfies PJR($d$), then it also satisfies PJR($d'$) for each $d'\geq d$, so the property gets stronger as $d$ decreases. Notice also that a committee $S$ satisfies the standard version of PJR if and only if it satisfies PJR($d$) for $d=\sum_{n\in N}b_n / |S|$.
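For intuition, the definition can be checked by brute force on toy instances; the function below enumerates witnesses $(N',t)$ directly from conditions a)-c). It is exponential in the number of nominators, and its names and layout are our own illustrative assumptions.

```python
# Brute-force PJR(d) check, straight from the definition: look for a subset
# N' of nominators and an integer t witnessing a violation. Exponential in
# |N|, so suitable for toy instances only.
from itertools import combinations

def satisfies_pjr_d(S, nominators, budgets, trusted, d):
    for r in range(1, len(nominators) + 1):
        for subset in combinations(nominators, r):
            common = set.intersection(*(trusted[n] for n in subset))
            union = set.union(*(trusted[n] for n in subset))
            budget = sum(budgets[n] for n in subset)
            # t ranges over 1..|common|, so condition b) holds by construction
            for t in range(1, len(common) + 1):
                if budget >= t * d and len(S & union) < t:
                    return False   # conditions a) and c) also hold: violation
    return True
```

For example, one nominator with budget 10 trusting $\{v_1,v_2\}$ can afford two candidates at $d=5$, so the singleton committee $\{v_1\}$ violates PJR($5$) while $\{v_1,v_2\}$ satisfies it.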
Checking whether a committee $S$ satisfies standard PJR is known to be NP-hard. However, for any $d\geq 0$ we can efficiently check whether $\max_{v'\in V\setminus S} score_w(v')<d$, and this in turn implies PJR($d$).
Lemma 5: If a set $S$ (of any size) does not satisfy PJR($d$) for some parameter $d$ then, for any maximally affordable edge weight vector $w\in\mathbb{R}^E_{\geq 0}$, there must be a candidate $v'\in V\setminus S$ with $prescore_w(v',d)\geq d$, and consequently $score_w(v')\geq d$.
Proof: If $S$ does not satisfy PJR($d$), there must be a subset $N'\subseteq N$ of nominators and an integer $t>0$ with a) $\sum_{n\in N'}b_n \geq t\cdot d$, b) $|\cap_{n\in N'} V_n|\geq t$, and c) $|S\cap (\cup_{n\in N'} V_n)| \leq t-1$. Therefore, the set $(\cap_{n\in N'} V_n)\setminus S$ must be non-empty; let $v'$ be a candidate in it. Fix a maximally affordable weight vector $w$; we claim that $prescore_w(v',d)\geq d$. Writing $U := S\cap (\cup_{n\in N'} V_n)$, we have
$$\begin{aligned}
prescore_w(v',d) &= \sum_{n: \ nv'\in E} slack_w(n,d) \ \geq \ \sum_{n\in N'} slack_w(n,d)\\
&= \sum_{n\in N'} \Big(b_n - \sum_{v\in S: \ nv\in E, \ supp_w(v)\leq d} w_{nv} - \sum_{v\in S: \ nv\in E, \ supp_w(v)> d} w_{nv}\cdot\frac{d}{supp_w(v)}\Big)\\
&\geq \sum_{n\in N'} b_n - \sum_{v\in U: \ supp_w(v)\leq d} \ \sum_{n\in N'} w_{nv} - d\cdot \sum_{v\in U: \ supp_w(v)> d} \ \sum_{n\in N'} \frac{w_{nv}}{supp_w(v)}\\
&\geq \sum_{n\in N'} b_n - d\cdot |U|\\
&\geq t\cdot d - d\cdot |U|\\
&\geq t\cdot d - (t-1)\cdot d = d,
\end{aligned}$$
where in the fourth line we used that each $v\in U$ contributes at most $d$ (as $\sum_{n\in N'} w_{nv}\leq supp_w(v)\leq d$ in the first sum, and $\sum_{n\in N'} w_{nv}/supp_w(v)\leq 1$ in the second), and we used fact a) on the fifth line, and fact c) on the last line. This proves that $prescore_w(v',d) \geq d$. The fact that $score_w(v') \geq d$ follows from the definition of score. $\square$
Algorithm. TestThatImpliesPJR($S,w,d$)

1. For each $v \in S$, compute $supp_w(v)$.

2. For each $n\in N$, compute $slack_w(n,d) = b_n - \sum_{v \in S: \ nv\in E, \ supp_w(v) \leq d} w_{nv} - d\cdot \sum_{v \in S: \ nv\in E, \ supp_w(v) > d} w_{nv} /supp_w(v)$.

3. For each $v'\in V\setminus S$, compute $prescore_w(v',d)=\sum_{n: \ nv'\in E} slack_w(n,d)$, and if $prescore_w(v',d)\geq d$, return false.

4. Return true.
Lemma 6: Algorithm TestThatImpliesPJR($S,w,d$) runs in time $O(|E|)$, and if it returns true for a given solution $(S,w)$, then $S$ satisfies PJR($d$).
Local Search for provable PJR¶
From the previous lemma, it follows that for a maximally affordable solution $(S,w)$ and for any $d\geq 0$, either the solution satisfies PJR($d$), or there is a new candidate which can be inserted with support at least $d$, and can be used to replace another candidate with low support. This observation naturally gives rise to the following local search procedure, for which we fix a small constant $\varepsilon>0$.
Algorithm. LocalSearchForPJR($S,w, \varepsilon$)

1. Let $d_{current}\leftarrow supp_w(S)$, and let $v\in S$ be a validator with $supp_w(v)=d_{current}$.

2. Let $d_{next}\leftarrow \min\{(1+\varepsilon)\cdot d_{current}, \ \sum_{n\in N} b_n/m\}$.

3. Let $(v_{\max}, d_{\max})\leftarrow$ CalculateMaxScore($S,w$).

4. If $d_{\max}<d_{next}$, return $(S,w)$.

5. Remove $v$ from $S$, and set $w_{nv}\leftarrow 0$ for each $n\in N$.

6. Let $(S,w)\leftarrow$ InsertCandidate($S,w,v_{\max},d_{\max}$), to add $v_{\max}$ to $(S,w)$.

7. Go to 1.
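The control flow of the procedure can be sketched with the subroutines passed in as callables (assumed to match their specifications above; all names are illustrative):

```python
# Control-flow sketch of LocalSearchForPJR. The subroutines are injected as
# callables so the loop structure stands on its own; m and the total budget
# parameterize the cap on d_next in step 2.

def local_search_for_pjr(S, w, eps, m, total_budget, min_support,
                         calculate_max_score, remove_candidate,
                         insert_candidate):
    while True:
        d_current, v_least = min_support(S, w)                  # step 1
        d_next = min((1 + eps) * d_current, total_budget / m)   # step 2
        v_max, d_max = calculate_max_score(S, w)                # step 3
        if d_max < d_next:                                      # step 4
            return S, w
        S, w = remove_candidate(S, w, v_least)                  # step 5
        S, w = insert_candidate(S, w, v_max, d_max)             # step 6
```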
Theorem 7. If we run LocalSearchForPJR($S,w,\varepsilon$) on any maximally affordable solution $(S,w)$ of size $m$ and support $supp_w(S)$, then it returns a maximally affordable solution $(S',w')$ of size $m$:
i) with $supp_{w'}(S')\geq supp_w(S)$, and
ii) satisfying PJR($d''$), where $d'' = \min\{(1+\varepsilon)\cdot supp_{w'}(S'), \ \sum_{n\in N} b_n/m\}$, and so also satisfying standard PJR.
Moreover, if the input has a $c$-factor approximation guarantee for the maximin support objective, for some parameter $c>1$, then the algorithm performs $m\cdot O(1+ \varepsilon^{-1}\log c)$ iterations, where each iteration executes in time $O(|E|\cdot m)$.
Proof. By the correctness of algorithm InsertCandidate($S,w,v_{\max}, d_{\max}$), at the beginning of each iteration we have a maximally affordable solution of size $m$, where the new $d_{current}$ is at least the minimum between $d_{current}$ and $d_{next}$ in the previous iteration. Now, notice that $d_{current}$ can never be greater than $\sum_{n\in N} b_n/m$, as the sum of supports cannot exceed the sum of budgets, and thus it is always the case that $d_{next}\geq d_{current}$. This shows that the minimum support of the current solution never decreases throughout the iterations, and proves point i).
Next, if the algorithm eventually terminates and outputs a solution $(S',w')$ at step 4. with minimum support $d'$, then the current value of $d_{next}$ is $d''$ as defined in ii). By correctness of the algorithm CalculateMaxScore($S',w'$) we know that $\max_{v'\in V\setminus S'} score_{w'}(v')<d''$, and so by Lemma 5 the solution must satisfy PJR($d''$); and since $d''\leq \sum_{n\in N} b_n/m$, it also satisfies standard PJR.
It is easy to see that the complexity of each iteration is dominated by the execution of algorithm InsertCandidate($S,w,v_{\max}, d_{\max}$) at step 6., with a running time of $O(|E|\cdot m)$. So, it only remains to prove that the algorithm terminates after only $m\cdot O(1+\varepsilon^{-1}\log c)$ iterations. To do that, we analyze the evolution of the parameter $d_{current}$. First, we argue that if $d_{current}=\sum_{n\in N}b_n/m$, then the algorithm terminates in the current iteration. This is because all nominators have zero slack, so all candidates in $V\setminus S$ have zero score as well, and $d_{\max}=0$, so the condition at step 4. is fulfilled.
Next, we claim that if the value of $d_{current}$ at iteration $i$ is $d^i_{current}$, then either the algorithm terminates by iteration $i+m$ at the latest, or the value of $d_{current}$ at iteration $i+m$ is at least $d^i_{next}$. This is because $d^i_{next}=\min\{(1+\varepsilon)\cdot d^i_{current}, \ \sum_{n\in N} b_n/m\}$, and in each iteration after $i$ we are removing one candidate of least support while not adding any candidate with support under $d^i_{next}$. As there are only $m$ candidates, we can do this at most $m$ times before the minimum support reaches the value $d^i_{next}$. This value is either at least $(1+\varepsilon)\cdot d^i_{current}$, or it is $\sum_{n\in N} b_n/m$, and in the latter case we terminate immediately.
Finally, if the algorithm executes a total of $t$ iterations before returning a solution with minimum support $d'$, and the minimum support $d$ of the input is a $c$-approximation to the maximin support problem, then
$$c\cdot d \ \geq \ d' \ \geq \ (1+\varepsilon)^{\lfloor (t-1)/m \rfloor}\cdot d.$$
We conclude that $\lfloor\frac{t-1}{m}\rfloor \leq \log_{1+\varepsilon} c$, and so $t \leq m(1+\log_{1+\varepsilon} c)=m\cdot O(1+ \varepsilon^{-1} \log c)$.
$\square$
Factor 3.15 approximation algorithm¶
We propose a greedy algorithm that starts with an empty set and runs $m$ iterations, where each iteration uses our heuristic to insert a new validator and then runs a weight redistribution algorithm over the current set.
In particular, for a given solution $(S,w)$ with $|S|\leq m$, we run a weight-rebalancing algorithm that computes an $\varepsilon$-approximation of the min-norm max-flow (MNMF) weight vector for set $S$. We formalize this definition below.
Definition: For a non-empty validator set $S\subseteq V$ and a constant $\varepsilon>0$, an edge weight vector $w\in\mathbb{R}^E_{\geq 0}$ is an $\varepsilon$-MNMF for $S$ if
(i) $w$ is maximally affordable, i.e. $\sum_{v\in S: \ nv\in E} w_{nv} = b_n$ for each $n\in N$ having at least one neighbor in $S$,
(ii) for any $n\in N$ and any two neighbors $v,v'\in S$ of it, if $w_{nv}>0$ then $supp_w(v)\leq (1+\frac{\varepsilon}{5\cdot|S|})\cdot supp_{w}(v')$, and
(iii) for all affordable $w'$, $supp_{w'}(S)\leq (1+\varepsilon)\cdot supp_w(S)$.
In our note on the min-norm max-flow problem, we provide more information about $\varepsilon$-MNMF vectors and present an algorithm MNMF($S,w,\varepsilon$) that returns an $\varepsilon$-MNMF for $S$ in polynomial time, and where the input vector $w$ is optional.
Consider the following algorithm.
Algorithm. BalanceBetweenHeuristic()

Initialise $S$ to the empty set and $w$ to the empty vector;

For $i$ from $1$ to $m$:
 Let $(v,d)\leftarrow CalculateMaxScore(S,w)$;
 Update $(S,w)\leftarrow InsertCandidate(S,w,v,d)$;
 Update $w \leftarrow MNMF(S,w,\varepsilon)$;

Return $(S,w)$.
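The greedy loop can be sketched with the three subroutines injected as callables (assumed to behave as specified above; all names are illustrative):

```python
# Skeleton of BalanceBetweenHeuristic: m rounds of "score, insert, rebalance".
# The three subroutines are passed in as callables for illustration.

def balance_between_heuristic(m, calculate_max_score, insert_candidate, mnmf):
    S, w = set(), {}                       # empty committee, empty weights
    for _ in range(m):
        v, d = calculate_max_score(S, w)   # pick the highest-score candidate
        S, w = insert_candidate(S, w, v, d)
        w = mnmf(S, w)                     # rebalance supports on S
    return S, w
```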
Analysis¶
The main result of the section is showing that the previous algorithm offers a $3.15\cdot (1+\varepsilon)$-factor approximation, and satisfies PJR.
Theorem 8: The procedure $BalanceBetweenHeuristic()$ returns a solution $(S, w)$ for which $supp_w(S)\geq d^*/(3.15\cdot(1+\varepsilon))$, where $d^*$ is the maximin support across all solutions of the given NPoS election instance. Moreover, the solution satisfies PJR($supp_w(S)\cdot(1+\varepsilon)$) and, if $\varepsilon \leq 1/m$, PJR.
We begin with a couple of technical results that we will need.
Observation. For any $0\leq \varepsilon\leq 1$ and any $m\geq 1$, we have the inequality
$$\Big(1+\frac{\varepsilon}{5m}\Big)^{m} \ \leq \ 1+\frac{\varepsilon}{4}.$$
Proof: The inequality $1+x\leq e^x$ holds for any real $x$. Replacing $x$ by $\varepsilon/(5m)$ and raising both sides to the power $m$, we obtain
$$\Big(1+\frac{\varepsilon}{5m}\Big)^{m} \ \leq \ e^{\varepsilon/5}.$$
Finally, since the function on the right-hand side is convex in $\varepsilon$, within the range $0\leq \varepsilon \leq 1$ it can be upper bounded by the linearization $1+(e^{1/5}-1)\varepsilon$. It can be checked that $e^{1/5}-1\leq 1/4$, and the claim follows.
$\square$
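The two numeric facts in the proof are easy to spot-check (a sanity check only, not part of the argument):

```python
# Numeric spot-check of the observation (1 + eps/(5m))^m <= 1 + eps/4 and of
# the constant e^(1/5) - 1 <= 1/4 used in its proof.
import math

def observation_holds(eps, m):
    return (1 + eps / (5 * m)) ** m <= 1 + eps / 4

ok = all(observation_holds(e / 10, m)
         for e in range(11)          # eps = 0.0, 0.1, ..., 1.0
         for m in range(1, 101))
```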
Proposition 9: Let $(S,w)$ be a solution where $|S|< m$ and $w$ is an $\varepsilon$-MNMF of $S$ for some $0\leq \varepsilon \leq 1$, and let $d^*$ be as in Theorem 8. Then, there exists a non-empty set $T\subseteq V\setminus S$ with the property that for each $0\leq a\leq 1$, there is a set of nominators $N_a\subseteq N$ such that
a) each $n\in N_a$ has a neighbor in $T$,
b) $\sum_{n\in N_a} b_n \geq (1-a)\cdot d^*\cdot |T|$, and
c) for each $v\in S$ such that $w_{nv}>0$ for some $n\in N_a$, we have that $supp_w(v)\geq a\cdot d^*/(1+\varepsilon/4)$.
Proof. Let $m' := |S|$, where $m'<m$. Let $(S^*,w^*)$ be an optimal size-$m$ solution to the maximin support problem, so that $supp_{w^*}(S^*)=d^*$. We define set $T$ as $S^*\setminus S$, which is clearly non-empty. Fix a parameter $0\leq a\leq 1$.
We have by the previous observation that $(1+\frac{\varepsilon}{5m'})^{m'} \leq 1+\varepsilon/4$. By the pigeonhole principle, there must be an integer $0\leq i\leq m'$ such that $\nexists v\in S$ with
$$\frac{a\cdot d^*}{1+\varepsilon/4}\cdot\Big(1+\frac{\varepsilon}{5m'}\Big)^{i} \ \leq \ supp_w(v) \ < \ \frac{a\cdot d^*}{1+\varepsilon/4}\cdot\Big(1+\frac{\varepsilon}{5m'}\Big)^{i+1}.$$
Let $d_l$ and $d_u$ be respectively the lower and upper bounds above, and notice that $d_u=(1+\frac{\varepsilon}{5m'})\cdot d_l$ and $a\cdot d^*/(1+\varepsilon/4)\leq d_l\leq a\cdot d^*$. Define the sets $S_l$ and $S_u$ as containing the validators $v$ such that $supp_w(v)< d_l$ and $supp_w(v)\geq d_u$, respectively. By the definition of $d_l$ and $d_u$, we know that $S_l$ and $S_u$ partition the set $S$.
Going forward, partition the set $N=N_l \cup N_u$, where $N_l$ contains the $n\in N$ that have a neighbor in $S_l$, and $N_u$ contains those that do not. We highlight some properties of these sets.
* $\nexists n\in N_l, v\in S_u$ such that $w_{nv}>0$: assuming otherwise that there is such a pair, the fact that $n$ is in $N_l$ implies that it has a neighbor $v'\in S_l$. But then we can apply the definition of $\varepsilon$-MNMF to obtain $supp_w(v)/supp_w(v')\leq 1+\varepsilon/(5m')=d_u/d_l$, which contradicts the definitions of $S_l$ and $S_u$.
* By the fact that $w$ is maximally affordable, and that each nominator in $N_l$ has neighbors in $S_l$ but gives no support to $S_u$, we have
$$\sum_{n\in N_l} b_n \ \leq \ \sum_{v\in S_l} supp_w(v) \ < \ |S_l|\cdot d_l.$$
* If we define $N_a\subseteq N_u$ as those $n\in N_u$ that have a neighbor in $T$, then claim a) becomes evident, and claim c) follows from the fact that $N_a$ has no neighbors in $S_l$ (by definition of $N_u$) and all of its neighbors in $S_u$ have a support of at least $d_u>d_l\geq a\cdot d^*/(1+\varepsilon/4)$. Hence, it only remains to prove claim b).
Assume without loss of generality that $w^*$ provides a support of exactly $d^*$ to each $v\in S^*$, by possibly capping its edge weights, and define the edge weight vector $w'$ by capping $w$ arbitrarily such that $supp_{w'}(v)=d_l$ if $v\in S_u$, and $supp_{w'}(v)=0$ otherwise (these vectors are affordable but in general not maximally affordable). Define $f := w^* - w'$, which we consider as a flow over the network induced by $N\cup S\cup S^*$.
Clearly, the net excess of set $N$ relative to $f$ is $d^*\cdot m - d_l\cdot |S_u|$, the net excess of set $N_l$ is at most $\sum_{n\in N_l} b_n < |S_l|\cdot d_l$, and the net demand of set $S_u$ is at most $(d^*-d_l)\cdot |S_u\cap S^*|$. By subtracting the last two terms from the first one, we obtain that the flow going from $N_u$ to $T$ is at least
$$d^*\cdot m - d_l\cdot |S_u| - |S_l|\cdot d_l - (d^*-d_l)\cdot |S_u\cap S^*| \ = \ d^*\cdot(|T|+|S_l\cap S^*|) - d_l\cdot(|S_u\setminus S^*| + |S_l|) \ \geq \ d^*\cdot |T| - a\cdot d^*\cdot |S\setminus S^*| \ \geq \ (1-a)\cdot d^*\cdot |T|,$$
where we used that $d_l\leq a\cdot d^*$ and $|S\setminus S^*|\leq |T|$.
A key observation now is that none of the flow originating at $N_u$ can pass by, or end in, $S_l \cup N_l$. This is because there are no edges between $N_u$ and $S_l \cup N_l$, by definition of $N_u$; and even though the flow can pass by $S_u$, there is no flow from $S_u$ to $S_l\cup N_l$ in $f$ because $w'$ provides no flow from $S_l\cup N_l$ to $S_u$. Therefore, the formula above is indeed a lower bound on the flow going from $N_u$ to $T$. Finally, we notice that if we decompose flow $f$ into simple paths, any path from $N_u$ to $T$ must have its last edge originating in $N_u$, or more specifically in $N_a$. This proves that $\sum_{n\in N_a} b_n \geq (1-a)\cdot d^*\cdot |T|$, which is claim b), and concludes the proof of the proposition.
$\square$
Lemma 6: Given a solution $(S,w)$ with $|S| < m$ where $w$ is an $\varepsilon$-MNMF for $S$, there exists a $v \notin S$ with $score_w(v) \geq d^*/(4(1+\epsilon/2))$.
Proof: Apply Proposition 9 with $a=1/2$. Then, by using that for any non-negative $a_1,\dots, a_k$ there exists an $a_i$ with $a_i \geq \sum_j a_j/k$, there is a $v \in T$ such that the set
$$A_{v,d^*/2} := \{n\in N_{1/2}: \ nv\in E\}$$
has $\sum_{n \in A_{v,d^*/2}} b_n \geq d^*/2$. Now, for any $n \in A_{v,d^*/2}$,
$$slack_w\big(n, \ d^*/(4(1+\epsilon/2))\big) = \sum_{v': \ nv' \in E} w_{nv'}\Big(1-\frac{d^*/(4(1+\epsilon/2))}{supp_w(v')}\Big) \ \geq \ b_n/2,$$
and so
$$prescore_w\big(v, \ d^*/(4(1+\epsilon/2))\big) \ \geq \ \sum_{n \in A_{v,d^*/2}} slack_w\big(n, \ d^*/(4(1+\epsilon/2))\big) \ \geq \ d^*/4 \ > \ d^*/(4(1+\epsilon/2)).$$
Thus $score_w(v) \geq d^*/(4(1+\epsilon/2))$.
We can do better than this by using a different value of $a$.
Lemma 7: Given a solution $(S,w)$ with $|S| < m$ where $w$ is an $\varepsilon$-MNMF for $S$, there exists a $v \notin S$ with $score_w(v) \geq d^*/(3.15(1+\epsilon/4))$.
Proof: The following will be crucial:
Lemma 8: Consider a finite sum $\sum_i f(x_i)\, a_i$, where $f$ is strictly increasing with derivative $f'$, $a_i \geq 0$ for all $i$, and $f(y) = 0$ for some $y \leq \min_i x_i$. Then
$$\sum_i f(x_i)\, a_i \ = \ \int_y^{\infty} f'(x)\Big(\sum_{i: \ x_i\geq x} a_i\Big)\, dx.$$
Proof: We can write the sum as a Lebesgue integral over the measure with weights $a_i$, obtaining that
$$\sum_i f(x_i)\, a_i \ = \ \int_0^{\infty} \Big(\sum_{i: \ f(x_i)\geq t} a_i\Big)\, dt.$$
The conditions on $f$ are enough for it to be invertible with derivative $df^{-1}/dt=1/f'(f^{-1}(t))$ and $f^{-1}(0)=y$, so we can substitute $x=f^{-1}(t)$ into the above to obtain the claim. $\square$
For some $0 \leq b \leq 1$, to be determined later, we consider $\sum_{v \in T} prescore_w(v, \, b\, d^*)$. Since each $n\in N_b$ has a neighbor in $T$, we have
$$\sum_{v \in T} prescore_w(v,\, b\, d^*) \ \geq \ \sum_{n \in N_b} slack_w(n,\, b\, d^*).$$
Now for $n \in N_b$, let $supp(n) := \min_{v': \ w_{nv'} > 0} supp_w(v')$, with $supp(n)=\infty$ if $w_{nv'}=0$ for all $v'$. We certainly have $supp(n) \geq b\, d^*$ from the definition of $N_b$. More generally, for $n \in N_b$, we have $n \in N_a$ if and only if $supp(n) \geq a\, d^*$. We have
$$\sum_{n \in N_b} slack_w(n,\, b\, d^*) \ \geq \ \sum_{n\in N_b} b_n\Big(1-\frac{b\, d^*}{supp(n)}\Big),$$
and so, using Lemma 8 with $f(x)=1-b\,d^*/x$ together with claim b) of Proposition 9,
$$\sum_{n\in N_b} b_n\Big(1-\frac{b\, d^*}{supp(n)}\Big) \ = \ \int_{b\,d^*}^{\infty} \frac{b\, d^*}{x^2}\Big(\sum_{n: \ supp(n)\geq x} b_n\Big)\, dx \ \geq \ \int_{b}^{1} \frac{b}{a^2}\,(1-a)\cdot d^*\cdot |T| \, da \ = \ (1-b+b\ln b)\cdot d^*\cdot |T|.$$
So for any $b$ with $b(2-\ln b) \leq 1$, we have that there is a $v \in T$ with
$$prescore_w(v,\, b\, d^*) \ \geq \ (1-b+b\ln b)\cdot d^* \ \geq \ b\, d^*.$$
In particular, this holds for $b=1/3.15$. Thus there exists a $v \notin S$ with $score_w(v) \geq d^*/(3.15(1+\epsilon/4))$.
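The condition on $b$ is easy to verify numerically at $b=1/3.15$ (a sanity check only):

```python
# Check that b = 1/3.15 satisfies b(2 - ln b) <= 1, the condition under which
# some candidate in T reaches prescore b·d*.
import math

b = 1 / 3.15
value = b * (2 - math.log(b))   # ≈ 0.999, just under the threshold 1
```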
Now we can prove Theorem 8.
Proof of Theorem 8: For a set $S$, we define $d^*_S$ to be the maximum over affordable $w$ of $supp_w(S)$. In order not to lose error with the $\varepsilon$-MNMF condition (ii), we need:
Lemma 9: Let $(S,w)$ be a solution with $|S| \leq m$ where $w$ is an $\varepsilon$-MNMF for $S$. Let $(S',w')$ be a solution of any size with $S \subseteq S'$, and suppose there is some $d$ such that for all $v \in S$ with $supp_{w'}(v) < supp_w(v)$ we have $supp_{w'}(v) \geq d$. Then $d^*_{S'}$, defined similarly, satisfies $d^*_{S'} \geq \min \{d^*_S, \ d/(1+\epsilon/4)\}$.
Proof: Note that $(1+\epsilon/(12m))^{m+1} \leq 1+\epsilon/4$. Since there are at most $m$ values of $supp_w(v)$ for $v \in S$, by the pigeonhole principle there must be an $x=d/(1+\epsilon/(12m))^i$ for some $1 \leq i \leq m+1$ such that no $v \in S$ has $x < supp_w(v) \leq x\cdot(1+\epsilon/(12m))$. Let $X \subseteq S$ be the set of $v \in S$ with $supp_w(v) \leq x$.
Let $N_X=\{n: \ \exists v \in X, \ nv \in E\}$. By the $\varepsilon$-MNMF condition (ii), any $n \in N_X$ has $w_{nv'}=0$ for $v' \notin X$. By our construction, we also have $w'_{nv'} =0$ for $v' \notin X$.
If $X$ is empty, then $d^*_S \geq supp_w(S) \geq x \geq d/(1+\epsilon/4)$, and the construction gives a $w'$ with $d^*_{S'} \geq supp_{w'}(S') \geq d/(1+\epsilon/4) \geq \min \{d^*_S, \ d/(1+\epsilon/4)\}$, and we are done.
If $X$ is non-empty, then we claim that $d^*_S=d^*_{S'}=d^*_X$. Let $w^*_S$, $w^*_{S'}$ and $w^*_X$ be affordable assignments that achieve these values. Then, by optimality of $d^*_X$, we have $d^*_S \leq supp_{w^*_S}(X) \leq d^*_X$, and similarly $d^*_{S'} \leq d^*_X$. On the other hand, consider modifying $w$ or $w'$ by setting $w_{nv}$, for $v \in X$ and $n \in N_X$, to be $(w^*_X)_{nv}$. Since each $n \in N_X$ had $w_{nv}=w'_{nv} =0$ for $v \notin X$, the result is still affordable, and the supports for $v \notin X$ remain the same, that is, above $x$. So these modified assignments have minimum support $d^*_X$, and we have $d^*_{S'}=d^*_S=d^*_X$, which gives the lemma.
Now we claim inductively that $d^*_{S_i} \geq d^*/(3.15(1+\epsilon/4)^2)$. Lemma 7 implies that there is a $v \notin S_{i-1}$ with $score_w(v) \geq d^*/(3.15(1+\epsilon/4))$. We add this candidate to the solution, while not reducing the support of any validator to below $d^*/(3.15(1+\epsilon/4))$. Lemma 9 then gives that $d^*_{S_i} \geq d^*/(3.15(1+\epsilon/4)^2)$. The induction gives that $d^*_{S_m} \geq d^*/(3.15(1+\epsilon/4)^2)$. Star balancing gives a $w_m$ that satisfies the $\varepsilon$-MNMF condition (ii), and so $supp_{w_m}(S_m) \geq d^*_{S_m}/(1+\epsilon/4) \geq d^*/(3.15(1+\epsilon/4)^3) \geq d^*/(3.15(1+\epsilon))$.
For the PJR claim, suppose that $S_m$ does not satisfy PJR($d$) for some $d$. Then neither does any $S_i$, since they are subsets of $S_m$. So by Lemma 5, for every $0 \leq i \leq m-1$, there is a $v_i \notin S_i$ with $score_{w_i}(v_i) \geq d$. The same argument as above gives that $supp_{w_m}(S_m) \geq d/(1+\epsilon/4)^2 \geq d/(1+\epsilon)$. It follows that $S_m$ does satisfy PJR($supp_{w_m}(S_m)\cdot(1+\epsilon)$).
Suppose that $S_m$ does not satisfy PJR but $\epsilon \leq 1/m$. Then by Lemma 5, there exists a $v \notin S_m$ with $score_{w_m}(v) \geq \sum_n b_n/m$. By Lemmas 2 and 1, we can run InsertCandidate($S_m, w_m, v, score_{w_m}(v)$) to obtain a $w'$ such that $supp_{w'}(S_m \cup \{v\}) = \min \{ score_{w_m}(v), \ supp_{w_m}(S_m) \}$. But since $|S_m \cup \{v\}|=m+1$, we must have $supp_{w'}(S_m \cup \{v\}) \leq \sum_n b_n/(m+1)$. Thus we get that $supp_{w_m}(S_m) \leq \sum_n b_n/(m+1)$. So if $\epsilon \leq 1/m$, then $supp_{w_m}(S_m)\cdot (1+\epsilon) \leq supp_{w_m}(S_m)\cdot (m+1)/m \leq \sum_n b_n /m$, and since PJR($d$) implies PJR($d'$) for $d' > d$, we have PJR. This means that $\epsilon \leq 1/m$ implies PJR.