Multishield's geometric mean to get chain work in Multi-POW coins
TL;DR
Digibyte's Multishield does a secure and accurate calculation of chain work when a coin allows multiple POWs, despite the problem that different POWs have vastly different difficulties which change over time with "Moore's law". It's a devilish problem because if you hard-code a scaling factor ("exchange rate") for work between them, the factors usually diverge a lot over time due to disparate advances in hardware and software for the different POWs. The Multishield solution is to calculate the geometric mean of the most recent block difficulty of each POW and assign that as the chain work of the most recent block. This should not be confused with the difficulty algorithm (each POW should have an independent one); it's just the metric of chain work used to determine the leading tip.
Difficulty algorithms in multi-pow
The best, if not the only correct, way to do a difficulty algorithm for multiple POWs is to keep their calculations separate. You can use the same difficulty algorithm for each POW, but its inputs are the prior difficulties and timestamps for the given POW only. If there are 5 POWs allowed, each POW's target block time is 5x the block time of the coin. I will assume there is no requirement for the POWs to be used with equal frequency as in Raven and Pigeon, but this chain-work metric can be used in that scheme too. A sketch is below.
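A minimal sketch of this separation, assuming nothing about any particular coin's code: Block, next_difficulty, and the simple moving-average DA below are hypothetical placeholders, and any real difficulty algorithm (LWMA, EMA, etc.) can be dropped in.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    pow_id: int        # which POW solved this block
    timestamp: int     # solvetime source for the DA
    difficulty: float  # difficulty of this block for its POW

def next_difficulty(chain: List[Block], pow_id: int, num_pows: int,
                    coin_block_time: int, window: int = 60) -> float:
    """Compute the next difficulty for one POW from only that POW's prior
    difficulties and timestamps; its target is num_pows * coin_block_time."""
    target = num_pows * coin_block_time
    own = [b for b in chain if b.pow_id == pow_id][-(window + 1):]
    if len(own) < 2:
        return own[0].difficulty if own else 1.0
    avg_diff = sum(b.difficulty for b in own[1:]) / (len(own) - 1)
    avg_solvetime = (own[-1].timestamp - own[0].timestamp) / (len(own) - 1)
    avg_solvetime = max(avg_solvetime, 1)  # guard against bad timestamps
    return avg_diff * target / avg_solvetime
```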
Off-chain real-world value of reward+fees is the absolute measure of work if the mining markets are efficiently competitive.
A key to seeing that Multishield works correctly is to see that the basis of "work" in any coin is the real-world value of the reward for the work. If the USD value of a Litecoin (LTC) block reward is 1/200 as much as BTC's (at a given point in time), then you know the LTC block required 1/200 as much work (value), without regard to their POWs or solvetimes. This assumes the efficient market hypothesis, aka equally competitive markets for the different POWs. In a multi-POW coin, this means a POW with a 0.001 difficulty and another with a 1E6 difficulty are doing the same amount of work if they are getting the same reward per time.
Example of Effectiveness
As an example of how it works better than an arithmetic mean used the same way, consider a 2-POW coin where POW "A"'s difficulty is 1,000,000x POW "B"'s. In the difficulty algorithm, their independent target block times are 2x the desired block time (because there are 2 POWs). Assuming a competitive market in hashing for the reward, they are "equally difficult" to mine despite the wildly different difficulties. If there's a tip race where A's difficulty increases 4% in tip 1 and B's difficulty increases 10% in tip 2, and these blocks are the only difference between the tips, then tip 2 with B wins the race because
p*sqrt(1.10) > p*sqrt(1.04)
where p = sqrt(dA*dB) is the geometric-mean work of the parent block before the fork (dA and dB being the two POWs' difficulties at that point). An arithmetic mean would have incorrectly caused the tip with 4% more "A" to win, because a 4% change in A's huge difficulty dwarfs a 10% change in B's. A 1E6 scaling for B and the normal sum of difficulties would work, but, as I mentioned, scaling factors deviate over time. The scaling factor can be adjusted over time (not hard-coded) and thereby work, but it's complicated, as I discussed in a prior multi-POW issue. Its security is also much weaker than the geometric mean's (at least if the proportions or frequencies of the various POW solutions are not forced to be fixed), which I discuss below.
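A sketch of that tip race in code (my simplified description of Multishield, not Digibyte's implementation):

```python
from math import prod

def block_work(latest_difficulties):
    """Work credited to a block: the geometric mean of the most recent
    difficulty of every POW."""
    n = len(latest_difficulties)
    return prod(latest_difficulties) ** (1.0 / n)

dA, dB = 1e6, 1.0                    # wildly different difficulty scales
tip1 = block_work([1.04 * dA, dB])   # tip 1: A's difficulty rose 4%
tip2 = block_work([dA, 1.10 * dB])   # tip 2: B's difficulty rose 10%
print(tip2 > tip1)                   # True: tip 2 wins the race

# An arithmetic mean picks the wrong winner, since 4% of 1e6 dwarfs 10% of 1:
print((1.04 * dA + dB) / 2 > (dA + 1.10 * dB) / 2)  # True
```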
My description is slightly different from Multishield
Digibyte's MentalCollatz appears to have discovered Multishield before January 2015. His Multishield calculates the next-expected difficulty of each algo at the time it calculates the chain work for a block. My simpler method of just using the difficulty of the most recent block of each POW is 1 block behind his in the estimate.
Implementation Gotchas
The difficulties may need to be scaled down to prevent overflow. Roots allow a potential forced chain-split attack due to the floating-point calculation, but the attack should hardly work because one chain should be able to easily pull ahead of the small error. There's a way to use ceiling() and floor() to prevent floating-point errors. A reliable integer-based sqrt() function can also be used if the number of POWs is a power of 2, such as A, B, C, and D: the geometric mean is then SQRT(SQRT(A*B)*SQRT(C*D)), which also reduces the overflow problem. A sketch is below.
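A sketch of the all-integer route, assuming a power-of-2 number of POWs (here 4) and using Python's exact math.isqrt:

```python
from math import isqrt

def geometric_mean_4(a: int, b: int, c: int, d: int) -> int:
    """~ (a*b*c*d)^(1/4) using only integer square roots. Flooring at each
    stage loses a little precision, but the result is deterministic across
    platforms, so no floating-point chain split is possible. Nesting the
    roots also keeps intermediate products small, reducing overflow risk
    in fixed-width-integer languages."""
    return isqrt(isqrt(a * b) * isqrt(c * d))
```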
Is it correct in an absolute sense?
I want to see if it calculates chain work correctly in an absolute sense. TL;DR: no, but it seems at least as good as any other option I can foresee. If the POW difficulties were "orthogonal" to each other and the volume of their multiplication were the true chain work we're after, then the geometric mean would be the correct method, but I do not know how to argue that is the correct view.

For a concrete example, assume a 2-POW coin in which one POW's hashrate (and therefore its difficulty) increases 3x. The "normal" method (using a scaling factor to put one POW's chain work on the same scale as the other's) now shows a 2x, or 100%, increase in average chain work per time (even before difficulty has had a chance to rise and stabilize). But the geometric mean indicates there is only SQRT(1*3) = 1.73, i.e. 73%, more work per block after difficulty has stabilized at the higher level (it shows the same 2x increase in work per time as the scaling method before difficulty has risen, then drops from a 100% to a 73% increase).

Thinking back to my definition of "work" and the efficient market hypothesis: if hashrate increased 3x in only one algo and the exchange rate has not changed (as indicated by the other algo not increasing its work), then the efficiency of the first algo must have improved "3x". That would mean there has been no increase in the cost to mine and therefore no more work per block (a 0% increase by my definition of work). So I have 100%, 73%, and 0% as possibilities for the increase in chain work. A 4th possibility is that the price increased 3x and the "less work" POW had a lot of hashrate disappear for some reason, i.e. the efficient-market assumption is invalid. This would indicate 3x more work (a 200% increase).

The problem in trying to decide how much work to assign is that the real-world value (e.g. USD in constant dollars) spent on mining is not measurable on the chain, not to mention USD inflation. Given the unknowns, the geometric mean seems as good as any in defining the chain work. As with the scaling-factor method, it is perfectly correct in the boundary condition where Moore's law is dead and the profitability and availability of the equipment and software for the algos are not changing. Both methods give the same result when the POW hashrates change equally: a 73% increase in exchange price would result in each difficulty increasing 73% if all else stays the same. Beyond that, I can't show it is an objectively correct metric. Moore's law and the availability of equipment might "compound" the variability between algos, and the geometric mean is useful in "compounding" situations. This is roughly a restatement of my initial comment about volume.
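For easy checking, here is the arithmetic of the two methods in that example, with both difficulties normalized to 1 before the change (a sketch in my notation, not any coin's code):

```python
# One POW's difficulty settles at 3x while the other stays at 1.
scaled_sum_method = (1 + 3) / (1 + 1)  # 2.0  -> 100% more work per time
geometric_mean = (1 * 3) ** 0.5        # 1.73 -> 73% more work per block
print(scaled_sum_method, round(geometric_mean, 2))  # 2.0 1.73
```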
Are we already using this method?
Coins use the sum of the log of difficulty to record chain work, and if you use the arithmetic average of the log of each POW's most recent difficulty to assign the work for each block, it's the same as the log of the geometric mean of the difficulties:
1/3 * ( log(A) + log(B) + log(C) ) = log( (A*B*C)^(1/3) )
In other words, it's not different from what we're already doing if we define chain work as the sum of the log of each difficulty.
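A quick numeric check of that identity, with arbitrary example difficulties on wildly different scales:

```python
# The arithmetic mean of the logs equals the log of the geometric mean.
from math import log, isclose

A, B, C = 1e6, 0.001, 7.0  # three POWs on very different difficulty scales
lhs = (log(A) + log(B) + log(C)) / 3
rhs = log((A * B * C) ** (1 / 3))
print(isclose(lhs, rhs))  # True
```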
Is it possible this is the correct definition of the work?
Work is an energy expenditure on CAPEX and OPEX which is linear with the number of hashes (i.e. the difficulty), not the log of the hashes. A log implies entropy, and entropy is relevant to time and information, so it seems a viable question. But physical entropy changes are usually linear with energy changes, not a log of the energy spent.

It's all the same in the sense that it will not cause a reordering of blocks relative to simply adding up difficulties, unless the public chain has higher variability in hashrate than an attacker, which is the opposite of what we'd expect. That is, for a given sum of difficulties, the set of difficulties that change the least from each other gives the highest sum of logs. The least variation means the most stable set of miners, not necessarily the miners who spent the most on hashing. It's possible a stable network deserves a slight advantage like this over the raw sum of difficulties. It also makes long-range attacks much harder, greatly reducing the advantage of Moore's law (100x the hashrate gets only 2x the ability to rewrite chains using log base 10). Mid-range attacks on normal POW are slightly easier because an attacker can set equally-spaced timestamps to get equal difficulties. But in a multi-POW scheme using the log of the geometric mean, an attacker has to use all the POWs with equal frequency to capitalize on this effect, which keeps security high, as I'll detail below.
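A small numeric illustration of that claim (the values are arbitrary; only the equal sums matter):

```python
# For a fixed sum of difficulties, the sum of logs is maximized when the
# difficulties are all equal (log is concave), so a log-based chain work
# slightly favors the most stable hashrate.
from math import log

stable = [100, 100, 100, 100]  # steady difficulty
swinging = [40, 160, 70, 130]  # same sum of difficulties, high variation
assert sum(stable) == sum(swinging)
print(sum(map(log, stable)) > sum(map(log, swinging)))  # True
```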
Increased Security Against Double Spending
In simple chain-work calculations, multi-POW coins are only as secure as the least-secure ("rentable") POW if there is no limit on the number of blocks in a row a given POW can solve. To alleviate this, Verge, Raven, Pigeon, and maybe several others have a requirement to switch POWs. It's interesting that Multishield reduces the vulnerability even without such a requirement. This isn't simple to describe and explain, but here goes. In short, an attacker's chain work goes up only as a root of his hashrate dominance, while his difficulty goes up "linearly" with the difficulty algorithm's calculation. To say it another way, Multishield gives an advantage to chains (or POWs) that do not suddenly increase (or decrease) the hashrate of one (or a subset) of the POWs.

A selfish-mining attacker wants to use the "easiest to rent" POW (at least from his point of view, since the efficient market hypothesis can fail for different reasons). To win a chain-work race, he has to get 5+ blocks from his POW in the same time the public chain gets 5 blocks from all 5 POWs. He has to have 5x the hashrate of the public miners on that POW, not the 102% of an ordinary 51% attack. But remember he gets paid 5x more, at least before the difficulty rises, so that's not the obstacle to attacking. His difficulty will go up 5x because he's solving too quickly. When difficulty has eventually gone up 5x, he can only get 1 block per 5 blocks of the public chain (the block time for each of the 5 POWs is 5x the coin's block time). So far this is no different from a good non-geometric method of calculating chain work.

But with Multishield, his chain work goes up only by the 5th root of his increase in difficulty. So his chain work per time at that point (when he is paying 5x the public chain per block to hash but is no longer getting blocks faster) will be 5^(1/5) * 1/5 ≈ 28% of the public chain's. If a simpler scaling-factor method for chain work is used, he gets full chain-work credit for his cost. A difficulty algorithm might increase 20% after 20 blocks if the hashrate is 5x; his chain work per time with Multishield at that point will be roughly 1.2^(1/5) * 4/5 = 83% of the public chain's. So he has to increase his hashrate faster than the difficulty algorithm can keep up. If he starts with 10x the hashrate, difficulty would increase about 40% after 20 blocks, so his chain work per time at the 20th block would be roughly 1.4^(1/5) * (10 * (1-0.4))/5 = 128% of the public chain's, requiring 2x the hashrate while getting only 20% more reward per time (6 blocks to the public chain's 5).

If the difficulty algorithm adjusts fast enough and the number of blocks needed for confirmation is large enough, winning a race to double spend will require a lot of hashrate at less reward per hash. Otherwise, Multishield needs an extra requirement like Verge's or Raven's to be safe enough against double spends to justify the multi-POW method. It at least makes on-off mining for a largish number of blocks unappealing.
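To make those numbers easy to check, here is the same arithmetic in one place (the steady-state case is exact; the two transient cases use the same rough linear approximation as the text for how a difficulty rise cuts the attacker's solve rate):

```python
n = 5  # number of POWs; the attacker mines only one of them

# Difficulty fully adjusted to his 5x hashrate: 1 block per public 5,
# each credited 5^(1/5) extra work by the geometric mean.
print(round(5 ** (1 / n) * (1 / 5), 2))  # 0.28 of the public chain's work/time

# 5x hashrate, difficulty only 20% higher after ~20 blocks.
print(round(1.2 ** (1 / n) * 5 * (1 - 0.2) / 5, 2))  # 0.83

# 10x hashrate, difficulty 40% higher after ~20 blocks.
print(round(1.4 ** (1 / n) * 10 * (1 - 0.4) / 5, 2))  # 1.28
```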
Raven and Pigeon enforce equal frequency of the POWs (Raven's X16R and Pigeon's X16S, which I think is better), but I do not know if they implement the difficulty algorithm or chain-work calculation well. If they use a hard-coded scaling factor to relate the relative chain work, there is a problem. The chain-work calculation here can work for them.
Myriadcoin may have used the Multishield method before Digibyte. Pyrk also uses it.