near/NEPs

[Discussion] Native fungible token

nearmax opened this issue · 6 comments

We are considering adding native fungible token support at the runtime level. At the low level, each account record will have a map from token id to balance, empty by default, and the runtime will operate on it exactly the same way it operates with the NEAR token: it will perform balance checks and use the same transaction actions. The benefit is significantly faster and cheaper fungible token operations, with very strong safety guarantees (as opposed to people relying on the safety of various fungible token implementations).
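The idea above can be sketched as follows. This is an illustrative model, not the actual nearcore types: `Account`, `TokenId`, and `transfer_token` are hypothetical names, and the runtime would apply the same balance checks it already applies to NEAR transfers.

```rust
use std::collections::HashMap;

type TokenId = String;
type Balance = u128;

// Hypothetical account record extended with a per-token balance map.
// An absent entry means a zero balance, so accounts that never touch
// fungible tokens carry no extra state.
#[derive(Default)]
struct Account {
    near_balance: Balance,
    token_balances: HashMap<TokenId, Balance>,
}

impl Account {
    // The runtime would run the same kind of balance check it performs
    // for the native NEAR token before moving funds.
    fn transfer_token(
        &mut self,
        receiver: &mut Account,
        token: &str,
        amount: Balance,
    ) -> Result<(), &'static str> {
        let from = self.token_balances.entry(token.to_string()).or_insert(0);
        if *from < amount {
            return Err("not enough balance");
        }
        *from -= amount;
        *receiver.token_balances.entry(token.to_string()).or_insert(0) += amount;
        Ok(())
    }
}
```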

As a bonus feature, we might be able to re-express state staking and validator staking as transfer operations over these account records, modeling them as three different tokens tied 1:1 with each other.

This issue does not provide a thorough description of the proposed design; the purpose is to open a discussion of the potential design, its pros/cons, and the timeline.

Please leave comments.

Addressing comment from @evgenykuzyakov :

I think we shouldn't support it on the protocol level, because it creates extra load on the protocol itself.

For accounts that do not use fungible tokens there should be no extra overhead. Is there a specific design you have in mind where this overhead would be palpable?

Instead, we should optimize the Runtime in the direction where the contract overhead is almost invisible. This can be done by optimizing the way we work with contracts, e.g. caching, initial memory, preparation, etc.

Contract calls require spinning up a virtual machine. Spinning up a VM will always have an extra cost and probably theoretical limits on how fast it can be. @olonho, are you aware of any VM implementation that takes almost no CPU cost to spin up?

Overhead for protocol tokens:

  • Development and maintenance of a limited set of features. The protocol always limits what's possible to do with the token, because adding new functionality requires a protocol change through an upgrade.
  • Security considerations. For a use-case-specific implementation we have to spend a similar amount of time on custom logic within the protocol, instead of keeping that logic within a VM and securing the VM itself.
  • Initial distribution. We need a centralized token registry, likely a contract that allows registering tokens and assigning initial balances. This mingles the contract and runtime environments.
  • Growing demand. Say we want allowances, transferring a bucket of tokens, or wrapping one token into another. We may also want NFTs later, plus control of the total supply.

Alternative

Requirements:

  • support for multiple contracts on one account (modules)
  • support for globally available contract code (so each account doesn't need to pay for a contract).

Modules

A module is identified by module_id. This ID can be a hash of the contract code or some unique global ID (if we have a global registry).
Each module has storage independent from other modules and other accounts.
An account may have any number of modules installed. To call a module, we can either extend the current FunctionCall action or add a new one, e.g. ModuleFunctionCall, which takes the module ID in addition to the method name.
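A minimal sketch of what such an action variant could look like, assuming the ModuleFunctionCall shape described above; the enum and field names are illustrative, not actual nearcore definitions:

```rust
// Hypothetical runtime action enum extended with ModuleFunctionCall,
// which carries the module ID in addition to the method name.
#[allow(dead_code)]
enum Action {
    FunctionCall {
        method_name: String,
        args: Vec<u8>,
    },
    ModuleFunctionCall {
        // Hash of the module code, or a unique ID from a global registry.
        module_id: String,
        method_name: String,
        args: Vec<u8>,
    },
}
```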

Modules allow implementing a large number of use-cases that require p2p operations without touching a centralized contract, e.g. transfers.

Example of a token module

A ModuleFunctionCall to transfer 42 DAI from alice.near to bob.near may look like this:

  • account_id = "alice.near"
  • module_id = "token-module" or module_id = "8gDNMKnQjoT9mnJfK1Z82hjoHdtUinGvG8CKfgLPKjdc"
  • method_name = "transfer"
  • arguments = '{"token_id": "DAI", "receiver_id": "bob.near", "amount": "42"}'

The contract call can check permissions, subtract the balance, and make a call to the same module on the bob.near account to deposit 42 DAI.

Now when implementing a module contract, you have two additional fields in the context:

  • current_module_id
  • predecessor_module_id

If predecessor_module_id == current_module_id, the module can trust that the call came from the same token module on the predecessor account and can safely deposit the required balance.
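The trust check on the receiving side could look like the sketch below. `ModuleContext` and `can_trust_deposit` are hypothetical names; the only assumption taken from the text is that the context exposes current_module_id and predecessor_module_id.

```rust
// Hypothetical context visible to a module contract during a call.
struct ModuleContext {
    current_module_id: String,
    // None when the call came directly from a user or a plain contract
    // rather than from a module.
    predecessor_module_id: Option<String>,
}

// A token module's `deposit` method would only accept deposits that
// originate from the same module code running on another account.
fn can_trust_deposit(ctx: &ModuleContext) -> bool {
    ctx.predecessor_module_id.as_deref() == Some(ctx.current_module_id.as_str())
}
```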

To get a refund in case the transfer fails (e.g. because bob.near doesn't have the module installed), the initial module can attach a promise that reverts the balance.
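The refund pattern can be sketched as below. On-chain this would be a promise plus a callback; here the deposit is simulated as a synchronous closure, and all names are illustrative.

```rust
// Hypothetical refund pattern: subtract the balance optimistically,
// then restore it if the cross-account deposit fails (e.g. the
// receiver doesn't have the token module installed).
fn transfer_with_refund(
    sender_balance: &mut u128,
    amount: u128,
    deposit: impl FnOnce() -> Result<(), ()>,
) -> Result<(), ()> {
    // Assumes the module already verified sender_balance >= amount.
    *sender_balance -= amount; // optimistic withdraw
    if deposit().is_err() {
        *sender_balance += amount; // refund callback reverts the balance
        return Err(());
    }
    Ok(())
}
```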

@evgenykuzyakov where is user data stored? Is it also globally available?

Since there can be N modules per account, the data has to be stored somewhere.

Modules themselves are added as a persistent map on the account, similar to access keys. They are stored as:

```rust
/// (KEY)
TrieKey::Module {
  account_id: AccountId,
  module_id: ModuleId,
}

/// (VALUE) Optional to have, but it can carry module permissions later,
/// e.g. whether the module can spend native tokens.
Module {
  code_hash: CryptoHash,
}
```

The module stores data under:

```rust
/// (KEY)
TrieKey::ModuleData {
  account_id: AccountId,
  module_id: ModuleId,
  data_key: Vec<u8>,
}

/// (VALUE)
Vec<u8>
```
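For illustration, a ModuleData trie key could be laid out as flat bytes roughly like this. The prefix byte and the length-delimited encoding are assumptions for the sketch, not the actual nearcore key layout:

```rust
// Hypothetical flat-byte layout of TrieKey::ModuleData: a column
// prefix, then length-prefixed account id and module id, then the
// module's own data key. Length prefixes keep keys from different
// (account, module) pairs from colliding.
fn module_data_key(account_id: &str, module_id: &str, data_key: &[u8]) -> Vec<u8> {
    const MODULE_DATA_PREFIX: u8 = 42; // assumed column byte
    let mut key = vec![MODULE_DATA_PREFIX];
    for part in [account_id.as_bytes(), module_id.as_bytes()] {
        key.extend_from_slice(&(part.len() as u32).to_le_bytes());
        key.extend_from_slice(part);
    }
    key.extend_from_slice(data_key);
    key
}
```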

I see. That makes sense. What about metadata of the module itself? For example, if it is a fungible token module, people may want to know the total supply.

My idea is to store global logic/data on a centralized account, e.g. "token-module". It will have the same contract, but it allows minting/burning tokens. If it's a multi-token contract, then it can also have a registry of token names and distribute the initial supply.