Authors: Jonas Gehrlein
Last updated: 07.12.2020
The validator elections are essential for the security of the network: nominators have the important task of evaluating and selecting the most trustworthy and competent validators. In reality, however, this task is quite challenging and comes with significant effort. The vast (and constantly increasing) amount of data on validators requires substantial technical expertise and engagement. Currently, the process is too cumbersome, and many nominators either do not stake at all or avoid spending much time going through the data. Therefore, we need to provide tools that aid nominators in the selection process while still ensuring that the outcome is beneficial for the network.
The following write-up provides an overview of several potential steps that benefit nominators while maintaining their freedom of choice. As a first step, it is helpful to illustrate why recommendations should be based on users' preferences and cannot be universal for all individuals.
It is not desirable to provide an exogenous recommendation of a set of validators, because users' preferences (especially risk preferences) differ considerably. A comparison between metrics on different scales (e.g., self-stake in DOTs vs. performance in %) is therefore not objectively possible. In addition, the shape of the marginal utility function even within one dimension is unclear and depends on individual preferences. It is outside our competence to decide on the various trade-offs of the selection process on behalf of nominators. To illustrate this issue, consider the following simple example:
| Validator | Commission | Self-Stake | Identity | Era-Points |
|---|---|---|---|---|
| Validator 1 | 4% | 26 DOTs | Yes | Average |
| Validator 2 | 7% | 280 DOTs | No | Average - 1% |
| Validator 3 | 1% | 1 DOT | No | Average + 5% |
All validators in the table have different profiles, and none is dominated. Validator 3 potentially yields high profits but does not have much self-stake (skin-in-the-game) and has no registered identity. Validator 1 charges a higher fee for their service but might leverage a reputable identity. Validator 2 requires substantial fees but has the most self-stake. One can easily think of different user preferences under which each of those validators would be preferred. While probably every user could make a choice from this small selection, the problem gets increasingly difficult for a set of 200-1000 validators.
Code of conduct for recommendations
As mentioned before, we cannot and do not want to give an exogenous recommendation to users. We prefer methods that respect this insight and generate a recommendation based on users' stated preferences. While valuing the preferences of the users, we can still nudge their decisions in a direction beneficial for the network (e.g., to promote decentralization). Nevertheless, the recommendation should be as objective as possible and should not discriminate against any specific validator.
Validator selection is divided into several chapters. In the section "Underlying dataset" (Link), we illustrate which data might be useful and how additional metrics can be generated. Afterwards, we apply a simple concept from economics to significantly reduce the set of potentially interesting validators; this is a first step towards giving users a manageable choice. Then, we discuss some ideas to further curate the set of validators to promote goals of the network. In the last section, the UTAStar method illustrates a sophisticated approach to estimate the individual marginal preference functions of the user and make a more precise recommendation.
This section explains which data can be gathered about validators in Polkadot and Kusama and which of it is relevant for a selection process. The metrics indicated with a * are used in the final data-set; the other variables are used to generate additional metrics. Currently, we focus on quantitative on-chain data, as those are verifiable and easy to process. This purely quantitative approach should be regarded as complementary to a selection process based on qualitative data, where nominators, e.g., vote for validators based on their identity or influence / engagement in the community.
| Metric | Historic | On-Chain | Description |
|---|---|---|---|
| Public Address* | No | Yes | The public identifier of the validator. |
| Identity* | No | Yes | Is there a verified on-chain identity? |
| Self-stake* | No | Yes | The amount of tokens used to self-elect. Can be seen as skin-in-the-game. |
| Other-Stake | No | Yes | The amount of allocated stake (potentially) by other nominators. |
| Total-Stake | No | Yes | The sum of self-stake and other-stake. |
| Commission | Maybe | Yes | The amount of commission in % which is taken by the validator for their service. |
| Era-Points | Yes | Yes | The amount of points gathered per era. |
| Number of Nominators* | No | Yes | The amount of nominators allocated to a validator. |
Era-Points: Era-points are awarded to a validator for performing beneficial actions for the network; currently this is mainly driven by block production. In the long run, era-points should be uniformly distributed across validators. However, this can vary if a validator operates on a superior setup (stronger machine, more robust internet connection). In addition, there is significant statistical noise from randomness in the short term, which can create deviations from the uniform distribution.
Some of the retrieved on-chain data might not be directly useful for nominators but can serve to derive additional metrics, which help in the selection process.
| Metric | Historic | On-Chain | Description |
|---|---|---|---|
| Average Adjusted Era-Points | Yes | Yes | The average adjusted era-points from previous eras. |
| Performance | Yes | Yes | The performance of a validator determined by era-points and commission. |
| Relative Performance* | Yes | Yes | The performance normalized to the set of validators. |
| Outperforming MLE | Yes | Yes | An indicator of how often a validator has outperformed the average era-points. Should be 0.5 for an average validator. |
| Average Performer* | - | Yes | A statistical test of the outperforming MLE against the uniform distribution. Indicates if a validator statistically over- or underperforms. |
| Active Eras* | Yes | Yes | The number of active eras. |
| Relative total stake* | No | Yes | The total stake normalized to the set of validators. |
| Operator Size* | No | Yes | The number of validators which share a similar on-chain identity. |
Average Adjusted Era-Points: To get a more robust estimate of the era-points, additional data from previous eras should be gathered. Since the total era-points are distributed among all active validators, and the size of the active set might change between eras, the raw numbers can be biased. To counter that, we can adjust the era-points of each era by the active set size of that era. As this is the only factor biasing the theoretical per-capita era-points, this makes the historic data comparable.
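As a minimal sketch of this adjustment (the data layout and function name are illustrative, not an actual API):

```python
def average_adjusted_era_points(history):
    """Average a validator's era-points after adjusting each era
    by the size of the active set in that era.

    history: list of (era_points, active_set_size) tuples.
    """
    # Multiplying by the set size undoes the dilution caused by
    # distributing the total era-points over more validators,
    # making observations from differently sized eras comparable.
    adjusted = [points * set_size for points, set_size in history]
    return sum(adjusted) / len(adjusted)
```

For example, 100 points in an era with 200 active validators and 50 points in an era with 400 both correspond to the same adjusted value.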
It is unclear how many previous eras should be used: too long a history might bias the results towards the average, while too short a history diminishes the robustness of the metric. One idea could be to use the average of .
Performance: The performance of a validator, from the point of view of a nominator, is determined by the amount of era-points gathered by that validator, the nominator's share of the total stake, and the commission the validator charges. Since the nominator's reward is linear in their bond, the measure can be defined independently of the bond. We can combine those metrics into one:

$$\text{performance} = \frac{\text{average adjusted era-points} \times (1 - \text{commission})}{\text{total stake}}$$
The relative performance is then simply defined by:

$$\text{relative performance} = \frac{\text{performance} - \min(\text{performance})}{\max(\text{performance}) - \min(\text{performance})}$$
This gives a more understandable measure as the performance is normalized between 0 and 1. Additionally, it is robust to potential changes within the network (e.g. with a larger number of validators the era-points are reduced per era) and prevents false anchoring effects.
Outperforming MLE: By gathering the historic era-points per validator during past eras, we can calculate how often a validator outperformed the average. As era-points should be distributed uniformly, a validator should outperform the average 50% of the time. However, as mentioned before, in reality additional factors such as hardware setup and internet connection can influence this. This helps nominators select the best-performing validators while creating incentives for validators to optimize their setup.
Significance MLE: As the expected value of the outperforming MLE is 0.5 and the underlying distribution should be uniform, we can calculate whether a validator significantly over- or underperforms by:

$$z = \frac{\hat{p} - 0.5}{\sqrt{0.5 \times 0.5 / n}}$$

where $\hat{p}$ is the outperforming MLE and $n$ is the number of observed eras. If $z > 1.645$, we can say that the respective validator outperforms significantly (10% significance level), while $z < -1.645$ indicates significant underperformance.
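A sketch of this significance test, assuming a normal approximation of the binomial test against p = 0.5 with a critical value of 1.645 (10% two-sided level); function names are illustrative:

```python
import math

def outperform_z(outperformed_eras, total_eras):
    # MLE of the probability of outperforming the average era-points.
    p_hat = outperformed_eras / total_eras
    # z-statistic of the normal approximation against p = 0.5.
    return (p_hat - 0.5) / math.sqrt(0.5 * 0.5 / total_eras)

def classify(z, critical=1.645):
    # 10% two-sided significance level by default.
    if z > critical:
        return "overperformer"
    if z < -critical:
        return "underperformer"
    return "average"
```

A validator that outperformed in 70 of 100 eras gets z ≈ 4 and is classified as a significant overperformer.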
Operator Size: Based on the identity of a validator, we can estimate how many validators are run by the same entity. It is in the interest of both users and the network that no single operator controls too many validators. Selecting validators of larger operators might increase the risk of super-linear slashing, because it is reasonable to assume that those validators follow similar security practices. A failure of one validator may then mean a failure of several of that operator's validators, which increases the punishment super-linearly. A counter-argument is that larger operators may be much more sophisticated in their setup and processes. Therefore, this objective measure should be left to the user to judge.
After constructing the dataset as elaborated in the section "Underlying dataset", we can start reducing the set of validators to reduce the amount of information a nominator has to process. One concept is to remove dominated validators. As we do not make qualitative judgements (e.g., which "identity" is better or worse than another), we can remove validators who are inferior to others, since there is no rational reason to nominate them. A validator is dominated by another validator if it is equal or worse in every property and strictly worse in at least one. Consider the following example:
| Number | Public Address | Identity | Self-stake | Nominators | Relative Performance | Outperformer | Active Eras | Operator Size |
|---|---|---|---|---|---|---|---|---|
Validator 1 is dominated by Validator 2, which is at least as good in every dimension and strictly better in at least one (note, as mentioned above, a user might prefer larger operators, in which case this would not be true). Validator 3 is also dominated by Validator 2 and can therefore be removed from the set. By this process the set can be reduced to two validators. In practice, this proves quite powerful at vastly reducing the set size.
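The dominance filter can be sketched as follows. In this sketch, each validator is represented as a tuple of criteria oriented so that higher is better (a lower-is-better criterion such as commission would first be inverted); this orientation is an assumption of the example:

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every criterion
    (higher = better) and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def remove_dominated(validators):
    # Keep only validators that no other validator dominates.
    return [v for v in validators
            if not any(dominates(other, v) for other in validators if other is not v)]
```

For instance, with (self-stake, relative performance, active eras) tuples, `remove_dominated([(10, 0.5, 100), (20, 0.6, 150), (5, 0.9, 10)])` drops the first validator, which is dominated by the second.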
Here we have the opportunity to do additional cleanup of the remaining set. As mentioned in the code of conduct, these filters should be optional, but we can suggest default values for users.
- Include at least 1 inactive validator. (We might suggest some inactive nodes based on other processes.)
- Reduce the risk of super-linear slashing (i.e., limit the number of selected validators per operator).
- Remove validators who run on the same machine (some analysis of IP addresses possible?).
After the set has been reduced by removing dominated validators and applying the optional filters, the user can easily select preferred validators manually. In this step, the selection is purely based on personal preferences; for example, a nominator might order the validators by their relative performance and select those that also satisfy some requirement on minimum self-stake.
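Such a manual ordering step can be sketched as (the dictionary keys and threshold are illustrative):

```python
def shortlist(validators, min_self_stake):
    # Keep validators meeting a minimum self-stake requirement and
    # order them by relative performance, best first.
    eligible = [v for v in validators if v["self_stake"] >= min_self_stake]
    return sorted(eligible, key=lambda v: v["relative_performance"], reverse=True)
```

The nominator then simply picks from the top of the returned list.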
This method takes the filtered table from the section LINK as input and can therefore be seen as a natural extension of the method before.
UTA (UTilité Additive) belongs to the methods of preference disaggregation (Jacquet-Lagrèze & Siskos, 1982); UTAStar is an improvement on the original algorithm. The general idea is that the marginal utility functions of a decision maker (DM) on each dimension of an alternative (i.e., criterion) can be deduced from a-priori ranked lists of alternatives. It uses linear programming to search for utility functions that reproduce the initial ranking of the DM while satisfying additional properties (such as the maximum utility being normalized to 1).
This write-up relies strongly on Siskos et al. (2005). The following notation is used:
- $u_i$: marginal utility function of criterion $i$.
- $g_i$: criterion $i$, with $i = 1, \dots, n$.
- $g_i(x)$: evaluation of alternative $x$ on the $i$-th criterion.
- $\mathbf{g}(x) = (g_1(x), \dots, g_n(x))$: vector of performances of alternative $x$ on the $n$ criteria.
- $X_L$: learning set, which contains the alternatives presented to the DM to give a ranking on. Note that the index on the alternative is dropped.
The UTAStar method infers an unweighted additive utility function:

$$u[\mathbf{g}] = \sum_{i=1}^{n} u_i(g_i)$$

where $\mathbf{g}$ is a vector of performances, with the following constraints:

$$\sum_{i=1}^{n} u_i(g_i^*) = 1, \qquad u_i(g_{i*}) = 0 \quad \forall i = 1, \dots, n$$

where $g_{i*}$ and $g_i^*$ are the worst and best evaluations on criterion $i$, and the $u_i$ are non-decreasing real-valued functions, normalized between 0 and 1 (also called marginal utility functions).
Thereby the value of each alternative $x \in X_L$ is:

$$u'[\mathbf{g}(x)] = \sum_{i=1}^{n} u_i[g_i(x)] - \sigma^+(x) + \sigma^-(x)$$

where $\sigma^-(x)$ and $\sigma^+(x)$ are the under- and overestimation errors, i.e., the potential error of $u'[\mathbf{g}(x)]$ relative to the true value $u[\mathbf{g}(x)]$.
The corresponding utility functions are defined in a piecewise linear form, to be estimated by linear interpolation. For each criterion, the interval $[g_{i*}, g_i^*]$ is cut into $(\alpha_i - 1)$ equal intervals, and the endpoints $g_i^j$ are given by:

$$g_i^j = g_{i*} + \frac{j - 1}{\alpha_i - 1}\left(g_i^* - g_{i*}\right) \qquad \forall j = 1, \dots, \alpha_i$$
The marginal utility of $x$ is approximated by linear interpolation, and thus, for $g_i(x) \in [g_i^j, g_i^{j+1}]$:

$$u_i[g_i(x)] = u_i(g_i^j) + \frac{g_i(x) - g_i^j}{g_i^{j+1} - g_i^j}\left[u_i(g_i^{j+1}) - u_i(g_i^j)\right]$$
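The interpolation step can be sketched as follows; the endpoint grid and the estimated utilities at the endpoints are assumed to be given (in UTAStar they come from the solved linear program):

```python
def marginal_utility(g, endpoints, endpoint_utils):
    """Piecewise-linear marginal utility at evaluation g.

    endpoints: the grid g_i^1 < ... < g_i^alpha for one criterion.
    endpoint_utils: estimated utilities u_i(g_i^j) at those endpoints.
    """
    for j in range(len(endpoints) - 1):
        lo, hi = endpoints[j], endpoints[j + 1]
        if lo <= g <= hi:
            # Linear interpolation between the two enclosing endpoints.
            t = (g - lo) / (hi - lo)
            return endpoint_utils[j] + t * (endpoint_utils[j + 1] - endpoint_utils[j])
    raise ValueError("evaluation outside the criterion's range")
```

For example, with endpoints `[0, 1, 2]` and endpoint utilities `[0.0, 0.2, 0.6]`, an evaluation of 1.5 maps to a marginal utility of 0.4.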
The learning set is rearranged such that $x_1$ (best) is the head and $x_m$ (worst) is the tail. This ranking is given by the user.
Defining $\Delta(x_k, x_{k+1}) = u'[\mathbf{g}(x_k)] - u'[\mathbf{g}(x_{k+1})]$, then we can be sure that the following holds:

$$\Delta(x_k, x_{k+1}) \geq \delta \quad \text{if } x_k \succ x_{k+1}$$

$$\Delta(x_k, x_{k+1}) = 0 \quad \text{if } x_k \sim x_{k+1}$$

where $\delta$ is a small, positive number, an exogenous parameter set as the minimum discrepancy between the utilities of two consecutive options. In order to ensure monotonicity, we further transform the utility differences between two consecutive interval endpoints:

$$w_{ij} = u_i(g_i^{j+1}) - u_i(g_i^j) \geq 0 \qquad \forall i = 1, \dots, n; \; j = 1, \dots, \alpha_i - 1$$
Step 1: Express the global value of the alternatives in the learning set, $u[\mathbf{g}(x_k)]$, $k = 1, \dots, m$, in terms of the marginal values $u_i(g_i)$, and then transform these into the variables $w_{ij}$ according to the above-mentioned formula and by means of

$$u_i(g_i^1) = 0 \quad \text{and} \quad u_i(g_i^j) = \sum_{t=1}^{j-1} w_{it} \qquad \forall j = 2, \dots, \alpha_i$$
Step 2: Introduce the two error functions $\sigma^+$ and $\sigma^-$ on $X_L$ by writing, for each pair of consecutive alternatives:

$$\Delta(x_k, x_{k+1}) = u[\mathbf{g}(x_k)] - \sigma^+(x_k) + \sigma^-(x_k) - u[\mathbf{g}(x_{k+1})] + \sigma^+(x_{k+1}) - \sigma^-(x_{k+1})$$
Step 3: Solve the linear program:

$$\begin{aligned}
\min \quad & z = \sum_{k=1}^{m} \left[ \sigma^+(x_k) + \sigma^-(x_k) \right] \\
\text{subject to} \quad & \Delta(x_k, x_{k+1}) \geq \delta \quad \text{if } x_k \succ x_{k+1} \\
& \Delta(x_k, x_{k+1}) = 0 \quad \text{if } x_k \sim x_{k+1} \\
& \sum_{i=1}^{n} \sum_{j=1}^{\alpha_i - 1} w_{ij} = 1 \\
& w_{ij} \geq 0, \quad \sigma^+(x_k) \geq 0, \quad \sigma^-(x_k) \geq 0 \quad \forall i, j, k
\end{aligned}$$
Step 4: Robustness analysis to find suitable solutions for the above LP.
Step 5: Apply utility functions to the full set of validators and return the 16 best scoring ones.
Step 6: Make some ad hoc adjustments to the final set (based on input of the user). For example:
- include favorites
- at most one validator per operator
- at least X inactive validators
There remain a few challenges when we want to apply the theory to our validator selection problem.
- One challenge is how to construct the learning set: the algorithm needs sufficient information to generate the marginal utility functions.
- Find methods to guarantee performance dispersion of the different criteria.
- Use machine learning approaches to iteratively provide smaller learning sets which gradually improve the information gathered.
- Potentially use simulations to simulate a wide number of learning sets and all potential rankings on them to measure which learning set improves the information the most.
- UTAStar assumes piecewise-linear, monotone marginal utility functions. Other methods improve on this but might be more difficult to implement.