NuCypher’s “free market”
arjunhassard opened this issue · 4 comments
Since it is near-impossible, and not necessarily desirable, to completely prevent providers from adjusting the price point they offer to customers, we examine the incentives compelling providers to proactively implement ‘independent’ pricing, and conversely, to passively follow ‘standardised’ pricing. We begin with possible engagement flows.
Engagement flow
The ‘engagement flow’ is the protocol through which customers and providers discover one another, agree to the price and other parameters of the service, and confirm service commencement/compensation. This flow is important to a decentralized network in many respects, including UX, security and redundancy – but crucially, it impacts price convergence/divergence trends and the degree to which the market is ‘free’.
Customer-driven engagement
A simple engagement flow involves the customer first constructing a job offer package (in NuCypher this is called an Arrangement), which specifies attributes of the required service (in NuCypher's case the policy duration and security threshold n [number of providers] are the most important), along with a deposit to cover the total cost. The customer submits this to the network and waits for the specified number of providers to accept it. This choice of engagement flow has the following characteristics and possible reactions from customers:
- Without a formal, advertised price discovery tool, there is no obvious way to know a priori what the distribution of price points is, and whether there is a sufficient number of providers willing to serve at the desired price.
- Without burdensome extra-protocol actions – i.e. contacting and orchestrating providers individually – there is no straightforward means to pay different providers different amounts for the same service (i.e. managing the same policy).
- The onus is on the customer to bid a price. If they want to fish for a deal – i.e. lower than the default or dominant price – then they must submit a deposit of commensurately smaller size.
- Fishing for a deal may lead to failed offers, if fewer than n providers are willing to accept the proposed price.
- This may lead to sophisticated customers starting with very low-priced offers and steadily increasing the price until n providers accept (a minimal sketch of this strategy follows this list). They may also opt to decrease n. The former could come at the expense of a non-trivial time delay; the latter decreases security/redundancy.
- It may be more difficult for customers who require longer-duration policies, since there will be fewer compatible providers (i.e. with tokens locked for that length), and therefore a greater likelihood of failed offers and time wasted. A customer cannot get a true price signal for a long-duration policy by testing with short-duration policies, since the offer acceptance rate will differ even for the same set of providers.
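To illustrate, here is a minimal sketch of the deal-fishing strategy described in the second bullet above. It assumes a hypothetical client helper, submit_offer, that broadcasts an Arrangement with a deposit implied by the proposed rate and raises OfferTimeout if fewer than n providers accept – neither name is part of the actual nucypher API, and the numbers are purely illustrative.

```python
from decimal import Decimal

DEFAULT_RATE = Decimal("350")   # assumed default per-provider, per-period rate (illustrative units)
STEP = Decimal("25")            # increment applied after each failed round
PERIODS = 30                    # policy duration in periods
N = 5                           # security threshold: number of providers required


class OfferTimeout(Exception):
    """Raised when fewer than n providers accept the offer before it expires (hypothetical)."""


def submit_offer(rate: Decimal, n: int, periods: int) -> list:
    """Hypothetical stand-in for the client call that broadcasts an Arrangement
    (with deposit = rate * n * periods) and blocks until n providers accept,
    raising OfferTimeout otherwise."""
    raise NotImplementedError


def fish_for_deal(start_rate: Decimal = DEFAULT_RATE / 2) -> list:
    """Start below the default/dominant rate and step upwards until n providers accept.
    Trades time (repeated failed offers) for a potentially lower price."""
    rate = start_rate
    while rate < DEFAULT_RATE:
        try:
            return submit_offer(rate, N, PERIODS)
        except OfferTimeout:
            rate += STEP   # too few takers at this price; raise the bid and retry
    # No discount found; fall back to the default rate.
    return submit_offer(DEFAULT_RATE, N, PERIODS)
```

The trade-off discussed above is visible in the loop: each failed round costs the customer time before the policy can commence.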
The lack of formal or provider-driven price discovery means that some customers will stick to the default pricing and not attempt to ‘shop around’, particularly those with end-users that are sensitive to slow or unreliable UX. On the other hand, the ability for developers to ‘sponsor’ policies and abstract the payment away from end-users greatly facilitates this kind of strategy. If customers do employ the deal-search method described above, low-cost providers will accept offers only to see those jobs time out or be withdrawn due to the lack of other willing providers. This may lead providers to proactively band together at certain lower-than-default price points.
Alternative: bidding
A common pricing model for decentralised networks is some sort of auction system. This appears to be inappropriate for the NuCypher network for a very fundamental reason: the service is not scarce. Unlike in a consensus mechanism with finite block sizes, the number of transactions that can be processed by a NuCypher provider in a given time period is practically unlimited. Nor do NuCypher providers incur extra overheads above and beyond the minimum – i.e. the costs of staying online and answering requests promptly – if demand increases. Conversely, services like data storage, market-making or heavy computation are all highly capacity-sensitive (financially and/or in terms of risk) and therefore encourage, or in some cases necessitate, fishing around to find the highest customer bids. In terms of COGS, NuCypher is similar to a SaaS product – the demand-overhead (x-y) relationship is sublinear. The conclusion is that, in the absence of a price-fixing cartel, it is rational to accept all non-zero bids/offers, and hence an auction system (whether first-price, second-price or some other configuration) is fairly pointless.
Alternative: formal price discovery layer
To be discussed.
Pricing strategies on a provider level
Independent pricing
Reasons to diverge from the default or dominant price point:
- In order to serve a customer segment that cannot currently afford NuCypher. This incentive can be harnessed to increase adoption of the network – see the 'Demand-driven pricing' section of the full pricing analysis.
- In order to stand out, in lieu of other options for a provider to differentiate themselves. Perception of service ‘quality’ for a given engagement is relatively binary: arguably, providers either deliver a correct re-encryption or revocation when expected, or they don’t. Hence, the continuum along which a provider can stand out to customers is very limited. A public scoreboard logging the percentage of answered requests would likely show high performance amongst the majority of providers, and therefore not give customers sufficient incentive to pay more for a marginally more reliable provider (though they may blacklist low performers). This incentive is stronger in epochs of oversupply.
Note: One exception to the relative homogeneity of NuCypher service quality is the latency with which request calls are answered, so a major avenue for differentiation, besides price, may be the strategic placement of worker machines around the world, plus other optimisations (e.g. more RAM). This is only a differentiator if customers both need and are optimising for low latency, and is a costly exercise if they are not – this chicken-and-egg situation means latency differentiation will probably not occur in early epochs of the network.
- In order to undercut other providers and/or undermine their business sustainability.
Note: Unlike in the traditional economy, there is a lack of information on other providers – in particular, their funds and operational efficiency – so sacrificing revenue to put another provider out of business is very risky (they may have deeper pockets than expected).
- External, macro changes necessitating increases in revenue to stay afloat – including the value of subsidies (inflation) decreasing due to a. scheduled decay, b. large increases in the stake ratio, or c. non-temporary depreciation of the native token.
Standardised pricing
Reasons to stick with the default (or converge to the dominant) price point:
- The majority of customers stick to the default price point, because ‘choosing’ providers involves proactivity, risk and extra-protocol work. Although, as discussed, it is conceivable that an application developer could create a rule to only select providers at a lower-than-default price point, this may be limited to a minority of sophisticated customers, especially in early epochs.
- There is a lack of ‘relationship stickiness’ between customer and provider. Although some customers may want their policies to perpetuate for a long time, e.g. 1-2 years, and the entire cost of said policies is agreed upon and paid for up-front via the deposit, there is almost nothing preventing the customer switching to another provider once these policies have expired (or if they need other policies prior to expiry) or even revoking the initial policies for a near-full refund if a cheaper provider emerges. In other words, there is almost zero customer ‘lock-in’, either contractually or in terms of time/effort (for example, the time it takes a salesforce to learn a new CRM interface). This means that offering unsustainably low prices in an attempt to secure customer lock-in/loyalty is a poor strategy.
- Efficiency-driven (i.e. sustainable) undercutting strategies are difficult due to the nature of the service. Beyond a low minimum of outgoing expenses (hardware + internet connection + electricity OR server rental), there is little opportunity to leverage economies of scale, or other strategies, to increase the efficiency of service. (The exception to this is the cost of maintenance and upgrades, which is a time/expertise/salary-driven overhead and can be made leaner). This means that differentiation based on price can only go so far (i.e. as close as possible to universal minimum expenses) if it is to be sustainable. In other words, there is not a great deal of wiggle room for one provider to be significantly more operationally efficient than another. Hence price wars may involve providers running at a loss, which, as examined in points (2) and (4), is risky in the long-term.
- Some providers may want to expediently maximise the predictability/stability of the service to customers. For example, if many providers choose to offer prices that imply unsustainable operations below a given subsidy value, and this inevitably necessitates price rises, this risks irrevocable damage to the network if customers with large sunk costs are suddenly, or even gradually, unable to afford the service. Providers vested in the network (e.g. those with NU tokens locked for 1+ years) will seek to avoid this scenario, as even the prospect of this could seriously hamper medium/long-term demand.
Without a formal price discovery mechanism, the customer has no way of knowing a priori what the distribution of price points is, and whether there is a sufficient number of providers willing to serve at the desired price.
What about building price quotes into the Learning Loop? So when Alice connects to the network to learn about the Ursulas, she's also receiving pricing information from all of them?
If I'm not mistaken, using the list of stakers from StakingEscrow and the public variable PolicyManager.nodes(staker_address).minRewardRate (which is individual for each staker), anyone can discover all pricing information of the network.
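For concreteness, here is a rough sketch of that lookup using web3.py. The contract addresses and ABIs are placeholders, and the getter names used to enumerate stakers (getStakersLength, stakers) as well as the position of minRewardRate in the tuple returned by the nodes getter are assumptions that should be checked against the deployed contract artifacts.

```python
import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder endpoint

# Addresses and ABIs must come from the deployed contract artifacts (placeholders here).
STAKING_ESCROW_ADDRESS = "0x0000000000000000000000000000000000000000"
POLICY_MANAGER_ADDRESS = "0x0000000000000000000000000000000000000000"
STAKING_ESCROW_ABI = json.load(open("StakingEscrow.json"))["abi"]
POLICY_MANAGER_ABI = json.load(open("PolicyManager.json"))["abi"]

staking_escrow = w3.eth.contract(address=STAKING_ESCROW_ADDRESS, abi=STAKING_ESCROW_ABI)
policy_manager = w3.eth.contract(address=POLICY_MANAGER_ADDRESS, abi=POLICY_MANAGER_ABI)


def network_price_points() -> dict:
    """Map each staker address to the minRewardRate it advertises on-chain."""
    prices = {}
    n_stakers = staking_escrow.functions.getStakersLength().call()   # assumed getter name
    for i in range(n_stakers):
        staker = staking_escrow.functions.stakers(i).call()          # assumed getter name
        node_info = policy_manager.functions.nodes(staker).call()    # public mapping getter
        prices[staker] = node_info[-1]   # assumed position of minRewardRate in the returned struct
    return prices
```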
Update: At launch and in the early epochs of the network's existence, customers and providers will converge upon a fixed, universal price range, instituted and enforced via PolicyManager.sol (see nucypher/nucypher#1567).
A note on the 'enforceability' of pricing – in the future, if providers wish to deviate from the range (for reasons explored in previous comments), then they have the right to lobby and propose changes via the governance mechanisms that exist at that time. Failing this, the remaining option is to re-deploy NuCypher's Ethereum contracts and mobilise both customers and other providers (in order to satisfy requirements for n > 1) to migrate over to this new marketplace. For those migrating providers to continue receiving a subsidy (rewards), this approach would also necessitate the creation of a new token and associated supply. This resembles a hostile network fork – a herculean undertaking – meaning we can safely assume the vast majority of providers (and customers) will adhere to universal pricing.
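A defensive client could also sanity-check any quoted rate against the enforced range before constructing a policy. A minimal sketch, assuming a hypothetical feeRateRange getter on the PolicyManager contract handle from the earlier sketch – the real variable name and layout should be confirmed against PolicyManager.sol in nucypher/nucypher#1567:

```python
def rate_is_enforceable(policy_manager, candidate_rate: int) -> bool:
    """Check a candidate per-provider rate against the globally enforced range.
    `feeRateRange` (returning min/default/max) is an assumed getter, not a confirmed one."""
    min_rate, default_rate, max_rate = policy_manager.functions.feeRateRange().call()
    return min_rate <= candidate_rate <= max_rate
```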
The way to check current prices that @cygnusv describes might be good enough at network launch. Eventually there will be a diversity of prices within the bounds of the fixed range – and providers may make regular adjustments. So a way for customers to calculate the lowest sum that will get them n providers, every time they deploy a policy, is definitely useful. If there aren't any problems caused by this, we can add instructions to a new page devoted to pricing in the documentation.
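Such a calculation could look roughly like the sketch below: sort the advertised per-period rates (e.g. gathered as in the lookup sketch earlier in this thread) and sum the n cheapest over the policy duration. The cost formula (rate × n providers × periods) is an assumption consistent with the discussion above, not a confirmed implementation detail.

```python
def minimum_viable_deposit(prices: dict, n: int, periods: int) -> int:
    """Smallest deposit that should attract n providers for a policy of `periods` length,
    given a mapping of staker address -> advertised per-period rate."""
    rates = sorted(prices.values())
    if len(rates) < n:
        raise ValueError(f"Only {len(rates)} providers advertise prices; {n} required.")
    return sum(rates[:n]) * periods
```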