Pricing structure (what is paid work?)
arjunhassard opened this issue · 1 comment
Let's assume that all remuneration calculations discussed in this issue incorporate the following inputs in precisely the same way:
- The number of Ursulas assigned to the policy (`n`)
- The number of recipients (which equals the number of policies until we have multi-Bob policies)
Primary Calculation Inputs
[Input 1: Policy duration] Our current calculation takes one further input, the policy duration, measured in periods of 24 hours. This enables the total cost of a policy to be calculated up front (`value`) and paid into an escrow; the sum is split into the number of periods within the policy's duration, each of which is paid out to participating Ursulas every time they confirm activity.
- This calculation leaves Ursulas vulnerable to an attacker Bob who can flood a policy with unlimited requests, with near-zero economic consequences.
- This also puts high-throughput users at risk of being ignored by Ursulas (this is only an issue if the number of requests begins to impact Ursulas' overheads). It's also possible for Ursulas to anticipate high throughput – this is mentioned in issue #4.
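The duration-based calculation above can be sketched as follows. This is a minimal illustration, not the actual NuCypher contract API: the function names and the flat per-Ursula, per-period `reward_rate` are assumptions.

```python
# Sketch of duration-based policy pricing (Input 1).
# Assumptions: a flat reward_rate per Ursula per period; integer token units.

def policy_value(n: int, duration_periods: int, reward_rate: int) -> int:
    """Total value escrowed up front when the policy is created."""
    return n * duration_periods * reward_rate

def per_period_payout(value: int, n: int, duration_periods: int) -> int:
    """Amount released to each participating Ursula per confirmed period."""
    return value // (n * duration_periods)

# 3 Ursulas, 30-day policy, rate of 50 units per Ursula per period:
value = policy_value(n=3, duration_periods=30, reward_rate=50)
assert value == 4500
assert per_period_payout(value, n=3, duration_periods=30) == 50
```

Note that nothing in this calculation depends on the number of requests served, which is exactly why a flooding Bob faces near-zero economic consequences.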
[Input 2: Access requests] The number of access requests sent to a policy becomes the variable determining the policy's price, replacing duration.
- The number of access requests must be tallied up at the end of each period, and Alice/Bob billed. This raises the question: what happens if the bill is not paid? Alternatively, Alices/Bobs could be required to pay up front for the number of access requests expected within a given period. Then the question is: what happens if the number of access requests exceeds this sum? Do unused requests roll over? If we follow the rules of Ethereum gas payments, users would have to deposit at least as much as they expect the sum of access requests to cost, with any leftover returned once the policy expires.
- How do we measure the number of access requests? Relying on Bob to report it isn't ideal – but neither is a system that totally ignores Bob, who may be subject to throttling by misbehaving Ursulas.
- This calculation could leave Alices vulnerable to an attacker Bob who can overwhelm a policy with requests, running up the bill for Alice, unless the application automatically passes this cost onto Bob.
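The Ethereum-gas analogy above might look like the following sketch: a deposit is posted up front, requests draw it down, and the remainder is refunded at expiry. The class name, the flat `price_per_request`, and the serve/refund methods are all illustrative assumptions, not proposed interfaces.

```python
# Sketch of request-based pricing (Input 2) with gas-style prepayment.
# Assumption: a flat per-request price in integer token units.

class RequestMeteredPolicy:
    def __init__(self, deposit: int, price_per_request: int):
        self.deposit = deposit
        self.price = price_per_request
        self.requests_served = 0

    def serve_request(self) -> bool:
        """Serve one access request if the deposit still covers it."""
        if self.deposit - (self.requests_served + 1) * self.price < 0:
            return False  # deposit exhausted – the open question in the text
        self.requests_served += 1
        return True

    def refund_on_expiry(self) -> int:
        """Leftover deposit returned once the policy expires."""
        return self.deposit - self.requests_served * self.price

policy = RequestMeteredPolicy(deposit=100, price_per_request=30)
assert [policy.serve_request() for _ in range(4)] == [True, True, True, False]
assert policy.refund_on_expiry() == 10
```

This version simply stops serving when the deposit runs out; whether requests should instead roll over, or trigger a top-up bill, is the design question raised above.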
[Input 3: Users] The number of users is the primary input into the cost calculation. This can be calculated as the sum of Alices, the sum of Bobs, the sum of both, or the sum of unique keys (i.e. users/devices can be both Alices and Bobs without incurring extra costs). This would make it easier for network adopters to budget for access control, since for many applications, revenue (and/or the venture's fundraising potential) is a function of the total user population.
- This approach leaves Ursulas vulnerable to being flooded with both arrangements/policies and requests, where the former can also enable the latter.
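The "sum of unique keys" variant above can be sketched with a set union, so a device acting as both an Alice and a Bob is counted once. The function names and the flat per-user rate are illustrative assumptions.

```python
# Sketch of user-based pricing (Input 3), counting unique keys across roles.

def billable_users(alice_keys: set, bob_keys: set) -> int:
    """The 'sum of unique keys' variant: a key in both roles counts once."""
    return len(alice_keys | bob_keys)

def monthly_cost(alice_keys: set, bob_keys: set, rate_per_user: int) -> int:
    """Assumed flat per-user rate in integer token units."""
    return billable_users(alice_keys, bob_keys) * rate_per_user

alices = {"key_a", "key_b", "key_c"}
bobs = {"key_b", "key_d"}            # key_b is both an Alice and a Bob
assert billable_users(alices, bobs) == 4
assert monthly_cost(alices, bobs, rate_per_user=10) == 40
```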
Additional Logic, Calculation Modifiers & Combinations
- For any of the three primary inputs, we also need a coefficient to bring the cost into line with real-world expectations – for example, pricing that roughly resembles the monthly cost of AWS CloudHSM – see issue #6. This is currently the `rewardRate`.
- One solution to Ursulas being flooded with requests may be to program hard caps on policies or users. These can be set based on the number of requests that would overwhelm a small Ursula (e.g. 500k requests per user per day, or 10k per policy per day), but are unlikely to impact the needs of legitimate users.
- Combining users and requests as inputs would bring NuCypher in line with the pricing structure of most major KMS services.
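Combining the hard caps with a KMS-style two-part tariff might look like the sketch below. The cap values (500k requests per user per day, 10k per policy per day) come from the text; the rates, units, and function names are illustrative assumptions.

```python
# Sketch: users + requests as combined pricing inputs, with hard caps.
# Assumption: integer token units for both rates.

USER_REQUEST_CAP_PER_DAY = 500_000
POLICY_REQUEST_CAP_PER_DAY = 10_000

def within_caps(user_requests_today: int, policy_requests_today: int) -> bool:
    """Reject request volumes that would overwhelm a small Ursula."""
    return (user_requests_today <= USER_REQUEST_CAP_PER_DAY
            and policy_requests_today <= POLICY_REQUEST_CAP_PER_DAY)

def monthly_bill(users: int, requests: int,
                 rate_per_user: int, rate_per_request: int) -> int:
    """KMS-style bill: a per-user charge plus a per-request charge."""
    return users * rate_per_user + requests * rate_per_request

assert within_caps(1_000, 1_000)
assert not within_caps(600_000, 1_000)
assert monthly_bill(100, 50_000, rate_per_user=2, rate_per_request=1) == 50_200
```

A two-part bill like this also lets a caller pass the per-request component straight through to Bob, mitigating the "Bob runs up Alice's bill" attack described under Input 2.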
Real-world scenarios
Which calculation/combination we settle on hinges to some extent on the nature of adopters' use of the network. In a scenario where hardly any adopters/users require highly frequent access requests, policy duration may suffice as the key input variable/resource. However, this would imply that a major selling point of the NuCypher network – that it can scale to handle high-volume, frequent sharing – is not being leveraged. The only exception to this would be an application with a very high number of policies, but low numbers of requests per policy (can we think of a real-world application with this characteristic?).
Excluding that particular type of application, this may also imply that we need to change our pricing significantly – since, in basic terms, revenue is a function of (a) price of service and (b) frequency of usage, if we assume a low frequency of usage, we will have to increase the price significantly to generate enough revenue. The problem with this approach, from a product perspective, is that if we target applications with very low throughput but a strong need for trustlessness, we start competing with client-side PKI, which is free. Our value proposition is reduced to "Alice can go offline".
> The only exception to this would be an application with a very high number of policies, but low numbers of requests per policy (can we think of a real-world application with this characteristic?).
^ Could this be the patient-controlled medical record app scenario? Large number of patients each with policies issued for their respective doctors, but the number of requests made by doctors would be small and concentrated (once every 6 months for example)...unless I'm misunderstanding what you mean.
Pricing-model similarities with major KMS services (i.e. pricing based on users/number of keys and requests) could be positive, by simplifying pricing decisions for applications: a direct comparison could be made between NuCypher and the alternatives - assuming the vulnerabilities you expressed can be mitigated.
> Which calculation/combination we settle on hinges to some extent on the nature of adopters' use of the network.
It would be hard to make assumptions about the nature of future adopters, so our pricing would need to be well thought out for a variety of scenarios. Presumably we would end up with a variety of policy usage patterns across applications - some high and some low. It is also possible for an application's usage pattern to fluctuate over time, e.g. increased requests for photos on a photo-sharing app during Christmas.
One thought I had, based on Prysm's key rotation comment: we could offer an optional (?) feature for re-issuing/turning over a policy after a specific period of time. AWS charges for this functionality - see https://aws.amazon.com/kms/pricing/; that said, they charge to hold on to the old keys, which we don't need to do. However, perhaps an app would like to re-issue policies every so often to different Ursulas for security reasons? This may also help distribute variable-usage (high/low) policies more evenly across the network, by repeatedly re-issuing a policy to different Ursulas over its length. Of course, some gas cost would be incurred here. If nothing else, it could be a premium-priced feature that supplements revenue, assuming it is possible.