ethereum/EIPs

ERC777 Token Standard

jbaylina opened this issue · 514 comments

Please, see https://eips.ethereum.org/EIPS/eip-777 for further discussion.


Was there discussion of adding a Mint/Burn pair of events and/or mint/burn functions to this proposed standard?

If this was discussed and rejected, what are the reasons for rejecting it? If it was not discussed, should it have been?

While not foolproof (because a contract may neglect to emit these events), it would make automated accounting of ICO sales a lot easier for token contracts that do comply. To accurately account for existing ERC 20 token sales, one must read and understand the contract's code.
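For concreteness, a minimal sketch of what such a pair might look like (the names Minted/Burned, mint/burn and their signatures are illustrative assumptions on my part, not anything from the proposal):

```solidity
pragma solidity ^0.4.18;

// Hypothetical sketch only: names and signatures are not part of the proposal.
contract MintBurnSketch {
    event Minted(address indexed to, uint256 amount);
    event Burned(address indexed from, uint256 amount);

    // A complying token would emit Minted whenever supply is created...
    function mint(address _to, uint256 _amount) public;

    // ...and Burned whenever supply is destroyed, so off-chain accounting
    // tools can track totalSupply changes from the event log alone.
    function burn(uint256 _amount) public;
}
```

With events like these, an ICO's sale totals could be reconstructed by filtering Minted logs instead of reverse-engineering the contract.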

What is the use of _to when it is obvious that the receiver is the tokenFallback contract itself (address(this)), and why is _ref needed if an application can store a reference inside _data when it needs one?

I would find it better to stick to the needed stuff, such as:

    /**
    * @notice ERC223 and ERC677 token fallback
    * @param _from sender of the tokens
    * @param _amount value sent
    * @param _data data sent
    **/
    function tokenFallback(
        address _from,
        uint256 _amount,
        bytes _data
    ) public;

Can you describe situations where _ref and _to are important, or crucial?

@3esmit The _to is because the proxy that handles the interface for a specific address can be a different contract. Please see EIP #672 .

As for _ref, it should act as a reference, for example a check number or an invoice number. In general the ref will be set by the operator, while the data will be set by the sender and will be the equivalent of the data field in an Ethereum transaction.
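To make the split concrete, here is how the two fields would appear in an operator-mediated send (a sketch based on my reading of the draft; the exact signature may differ):

```solidity
pragma solidity ^0.4.18;

contract OperatorDataSketch {
    // _userData: set by the token holder, the equivalent of the data
    // field of a plain Ethereum transaction.
    // _operatorData (the "ref"): set by the operator, e.g. a check
    // number or an invoice number.
    function operatorSend(
        address _from,
        address _to,
        uint256 _amount,
        bytes _userData,
        bytes _operatorData
    ) public;
}
```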

Maybe a good alternative would be to integrate these 2 parameters into data and define a standard for data. This way we would maintain compatibility with EIP223...

I also suggest adding a boolean return to tokenFallback, and having the token contract require a true return value to accept the transaction, in order to avoid this scenario: dapperlabs/cryptokitties-bounty#3

@3esmit This is problematic. This function is called after the transfer is done, so returning false would mean rolling back the transfer. That can introduce a lot of reentrancy issues, so I decided that the function either executes or throws the full transaction.
The nice thing about this standard is that if the tokens are sent via send, it means that the receiver must register the interface the EIP672 way. If not, it fails. Of course you can use the old transfer method for backwards compatibility.

izqui commented

I propose renaming operatorData to logData to make it more explicit that the purpose of that data is nothing other than being part of a log. The ability to add context to token transfers is powerful, and the gas hit is minimal when it is not used.

Really like and support this proposal, exactly the vision that made me excited about ERC223 10 months (!!!) ago. We are considering making ERC777 the base standard for all the tokens issued on @aragon!

This is an interesting proposal, but I worry about the entire ecosystem having to migrate to new multisig wallets in order to be able to receive ERC777 tokens.

It seems like there was an attempt made to create a whitelist of contracts that one can safely transfer to even if they do not implement ITokenRecipient:

The function MUST throw if:

  • to is a contract that is not prepared to receive tokens. That is, it is a contract that does not implement the ITokensReceived interface and the hash of its source code is not among the whitelisted codes listed in the appendix of this document.

But there is no such appendix, I would love to see it 😊

@onbjerg We are working on it. We are thinking of keeping this list open for a while (centralized) and closing the list at some point (making it decentralized).

Was there any consideration over allowing users to specify how much an operator can control, e.g. changing authorizeOperator() to:

function authorizeOperator(address operator, uint authorizedAmount) public?

One could use 2^256 - 1 (or hypothetically the totalSupply() if that never grows) to simulate the previous true behaviour and 0 for false.
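A sketch of the amount-limited variant being suggested here (the storage layout and the spend-side check are my assumptions):

```solidity
pragma solidity ^0.4.18;

contract AmountLimitedOperatorSketch {
    // holder => operator => amount the operator may still move
    mapping(address => mapping(address => uint256)) public operatorLimit;

    function authorizeOperator(address _operator, uint256 _authorizedAmount) public {
        // 2**256 - 1 emulates the old `true`; 0 emulates `false`.
        operatorLimit[msg.sender][_operator] = _authorizedAmount;
    }

    function operatorSend(address _from, address _to, uint256 _amount) public {
        require(operatorLimit[_from][msg.sender] >= _amount);
        operatorLimit[_from][msg.sender] -= _amount;
        // ... perform the actual transfer ...
    }
}
```

Note that, as with ERC20 allowances, a decremented limit means 2**256 - 1 only approximates the unlimited case.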


The only difference for new contracts implementing ERC20 is that registration of ITokenRecipient via EIP-672 takes precedence over ERC20. This means that even on an ERC20 transfer call, the token contract MUST check via EIP-672 if the to address implements tokensReceived and call it if available.

I find this somewhat confusing and unexpected. We'll have a dichotomy of "ERC20" tokens: ones that will never call the tokensReceived() callback, even if ITokenRecipient is registered; and ones that will always check. Even if the ERC20 functions are only supposed to be called by old contracts, I think there'll be lots of confusion, since the meaning of an "ERC20" token will essentially have changed depending on whether your token also supports EIP777.

It also feels odd because you don't have to support the ERC20 interface with EIP777, but you most likely will to support prior contracts expecting that standard.

What if EIP777 was instead a superset of ERC20's interface but overrode specific parts, e.g. transfer() and transferFrom(), to support the ITokenRecipient interface?


I kind of like and dislike the send() nomenclature. On one hand, it's nice how it parallels ETH's transfer() and send() nomenclature. On the other, it's confusing because these two terms are now both overloaded with different meanings for ETH and tokens. It's confusing enough that we have both for ETH, and it's going to be even more confusing when the same names exist for tokens. I do like the naming of transferAndCall() because it's really obvious what it's probably going to do.

I guess an alternative could be transferToRecipient().

@sohkai:
1.- The idea of authorizeOperator is mainly to authorise a contract.
The maximum allowed limitation and many other limitations, like daily limits, should be implemented in the operator contract, keeping this standard as clean as possible.

2.- The idea is that the receiver should have the guarantee that the tokensReceived() method is ALWAYS called, even if it is called via an obsolete ERC20 transfer() or transferFrom() method. This allows a recipient, for example, to NEVER accept a specific token, or to forward some tokens to a specific charity.

3.- The big problem with keeping the transfer() name in the new standard is that if you use transfer() on an ERC20-only token, you will end up locking a lot of tokens. This mistake might become very common at a moment where 50% of the tokens are ERC20-only and 50% ERC777.

As I have mentioned in other threads, I strongly recommend removing decimals. Here is a cross post of what I have said elsewhere:

Decimals are easily the number one source of confusion for both token authors and users of ERC20. I strongly recommend removing this as a variable and instead asserting that tokens must have a certain "humanizing divisor". Reasonable choices IMO are:

  • 0 - The purpose of decimals is to humanize a very large number, nothing more. If you issue a bunch of your tokens, then people can work with gigatokens instead of tokens. People are used to this already with hard drives (no one talks about hard drive size in bytes, it's gigabytes or terabytes). This scales with the system and allows it to easily change with time.
  • 10^24 - This allows the token to center on a range that is maximally within the accepted SI prefixes, ranging all the way from yoctotokens to yottatokens. From a scientific/mathematics standpoint, this is probably the best option.
  • 10^18 - 10^18 is the most common humanizing divisor, and it is what ETH used. In order to limit confusion, there may be value in asserting that everyone should just use this. While this isn't a particularly optimal choice, it is fairly compelling due to ETH choosing it.
  • 10^2 - Most fiat currencies use cents, in general, population is more used to currencies with 2 decimals than 0 or more than 2. I'm including this for completeness, but it ends up being effectively the same as 0.

I think the worst option is to continue to allow for variable humanizing divisors. This doesn't actually solve any real problems, since any chosen unit is very likely to be a wrong choice at some point in time (too big or too small). Also, since the token author can pick the token supply, allowing them to also choose the humanizing divisor doesn't give them any more/less power to try to target a "nice human-scale number".

You mention function send(address to, uint256 value, bytes userData, bytes operatorData) public; in the interface but it doesn't appear in the function descriptions below. Perhaps it was meant to be replaced by operatorSend but you forgot to delete it from the interface?

I recommend splitting function authorizeOperator(address operator, bool authorized) public; into:

function authorizeOperator(address operator) public;
function revokeOperator(address operator) public;

At the callsite, this will provide a lot more clarity as to what is happening.

This is rad. It's going to be a long, slow journey to move away from ERC20, but this is a good first step. A couple of things:

  1. Why has spender authorization been moved to a boolean? I personally haven't found a use-case for allowing a spender to access a specific amount, but it seems like a nice feature to have since it's already part of an existing standard.
  2. Why use the noun operator? I understand this is stupid-picky and certainly hair-splitty, but the word spender is, IMO, a really good descriptor of that particular actor. Operator just sounds like the person has more capability than they do (they aren't really "operating" on the tokens).

Anyways, big 👍. ERC20 needs an upgrade.

The public state variable for decimals is string public decimals;?
I think that should be uint8 public decimals; based on function decimals() public constant returns (uint8). Probably a typo.

As I have mentioned in other threads, I strongly recommend removing decimals. Here is a cross post of what I have said elsewhere:

Unfortunately quite a few coins have a very good reason for selecting a different number of decimals, and many of them are in the wild already. Forcing 10^n decimals on everyone would require internal restrictions that would, for example, force rounding of values or revert if an incorrect amount is specified.

Our objective is seldom to expect people to interact directly with the blockchain but, as an example, MEW does a good job of removing the decimal confusion.

Should the ITokenRecipient contract also have a function that always returns true stating it's capable of this? It's a way to allow wallet implementers to know which function to use, and therefore save gas.

function isITokenRecipient() public constant returns (bool) { return true; }

Great stuff!

1- Initially I too thought, like @sohkai, that authorizeOperator() would need a form of limiting the amount. In the end, the ERC20 approve (which is a confusing name) does have a value up to which the spender is allowed.

I understand and share what you say

The idea of authorizeOperator is mainly to authorise a contract.
The maximum allowed limitation and many other limitations, like daily limits, should be implemented in the operator contract, keeping this standard as clean as possible

But I also think that it's an interesting addition to remind implementers to include optional limitation logic.


2- operatorSend userData vs operatorData
what is the scenario you are imagining for userData?
in any case it's data that the operator has to input when calling the operatorSend function. Why couldn't both data points be contained in one?


3- Backwards Compatibility
I also found this a bit confusing

The only difference for new contracts implementing ERC20 is that registration of ITokenRecipient via EIP-672 takes precedence over ERC20. This means that even on an ERC20 transfer call, the token contract MUST check via EIP-672 if the to address implements tokensReceived and call it if available.

I understand that new smart contracts will detect the right function to call (right?)
but what about users interacting directly with the contract? It will be confusing to see 2 functions that supposedly do more or less the same thing but have different names.
confusing UX and a potential source of problems if you say that "tokens will probably be locked"

@lyricalpolymath
3- New contracts that use new tokens must use send() and not transfer(). transfer() is just for backwards compatibility, mainly for old smart contracts, as I expect that UIs will be upgraded at some point.
Stay tuned for (1 & 2)

@alexvandesande To know if a contract implements ITokenRecipient, reverse ENS is used (EIP672), which will never throw, and you will know whether or not it implements the interface. The gas cost should be about the same as with what you propose.

@GoldenDave Others have made the same argument in the past but were unable to provide (IMO) a compelling argument as to why forcing the humanizing divisor to be the same for all tokens is bad. The most commonly cited example is "what if I have a token that is pegged to USD (or similar), which only has 2 decimals?" In this case, you can still expose 24 decimals (or whatever the standard defines) to the user while the contract stores whatever it likes internally: you would simply multiply the internal value by 10^22 when returning it to the user. In all the cases people have come up with (including the USD peg), nothing is hurt by having a token be more divisible. There is really nothing fundamentally wrong with having 1 attousd.
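A sketch of that internal-vs-external scaling, assuming a fixed 24 exposed decimals (contract and variable names are hypothetical):

```solidity
pragma solidity ^0.4.18;

contract UsdPeggedSketch {
    // Stored internally in cents (2 decimals), matching the peg.
    mapping(address => uint256) internal centsOf;

    // Exposed with 24 decimals: scale cents up by 10**(24 - 2).
    function balanceOf(address _who) public constant returns (uint256) {
        return centsOf[_who] * 10**22;
    }
}
```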

@jbaylina I support reverse ENS, but I don't see why not also add this to the contract itself. It's simpler to build and will work on any network, including test networks, etc. Also, to check the ENS resolver you need to make multiple calls (see if there's a resolver, then check the resolver, etc.) AND have an extra function in the constructor to set the ENS resolver info.

Again, I'm all for ENS, but why not add simple info like that on the contract? It reminds me of the debate on whether tokens should have symbols and names on the contract or in a token registry: in-contract won by the simplicity of it.


Also: I'd like to propose adding a provable standard to this token. One of the most requested features I get from token creators is how to send tokens without having ether, and I think that should be a core function of whatever the next big token version is.

Others have made the same argument in the past but were unable to provide (IMO) a compelling argument as to why forcing the humanizing divisor to be the same for all tokens is bad.

During the HelloGold token sale, contributors received HGT which entitled them to a share of a reward token GBT (our gold backed token) which is related to the amount of management fees that we receive for storing clients' gold pro rated to the person's HGT holding.

In order that anybody holding the minimum amount of HGT should receive GBT during a distribution, we calculated that GBT would work with 18 decimals but that, as a result, HGT would need to have 8 decimal places. Any more precision would be pointless and misleading.

It is rather dictatorial to say that everybody needs to normalise everything to a number of decimal places that does not particularly agree with them, especially when we already have a method of handling it.

Should the ITokenRecipient contract also have a function that always returns true stating it's capable of this? It's a way to allow wallet implementers to know which function to use, and therefore save gas.

It is a great idea - but when I ran a quick test in Remix, a contract with a simple fallback function falsely satisfied your requirements.

function() public {
}

appears to return true when a non-existent function xyz() returns (bool) is called.

https://gist.github.com/DaveAppleton/ef44e9745b1f57c7ae0d6744a15bc5c6

@alexvandesande One of the nicest things about this standard is that you can have functionality not only in smart contract recipients, but also in any regular account. You can program, for example, that you don't accept tokens sent to your regular public account, or that you send half of them to a charity.
I agree that using EIP672 is a little complicated and, what's worse, ENS is still centralised in some way. That is why we plan to use EIP #820, which is equivalent to EIP672 but much simpler and a purely decentralised contract. (It is still a work in progress.)

@alexvandesande Regarding the provable functionality, the idea is to do that via an operator. The operator can, for example, accept signed checks, which are very much provable transfers.

This standard should allow token contract creators to set some default operators that are authorised for everybody.

@DaveAppleton I just tested your code and got

{
	"0": "bool: false"
}

So it seems it should work.

Remix or testnet?

Maybe Remix is not the best playground...

@jbaylina I like the approach of using the approved operator for that sort of thing.

Regarding EIP672 or 820, I don't like the idea of requiring any registry contract: these will change depending on the network, the wallet must be aware of them, and it makes the whole code a lot less reusable. I would simply support introspection in the contract itself; it's literally a one-line function, while adding support for registering yourself on the registry will certainly be more than that. I like having registries of addresses with more information, I just don't like having to rely on them.

If you don't like hasTokenFallback then we can add a more general hasSupportFor() function. But I'd keep it simple.

@alexvandesande
1.- How would you associate a piece of code with a regular account? I don't see any other way than a registry contract (ENS or not ENS). EIP820 is very interesting because it allows code to be executed every time tokens are received, even if the destination is a regular account. For example, you can prevent receiving tokens, or send some of the tokens to a charity. Using EIP-165, which is more or less what you propose, does not allow defining any code for regular accounts.
2.- On another note, I would like to highlight an incredible way that @Arachnid showed me to deploy EIP820Registry-like contracts.
The idea is that you create a deployment transaction and change its signature to a deterministic value, for example 0xAAAAAAAA.... From this signature, you recover the address that would have generated the transaction. Of course you don't know the private key of that address, but you can send Ether to that address and then just broadcast the transaction!!
With this method, you can create the registry on any blockchain and you know for sure that the address will be the same on all chains.
You can see the implementation here: https://github.com/jbaylina/eip820/blob/master/js/deployment.js
It works!
This exciting technique is a great way to deploy any "pure decentralised" contract.
An applause for @Arachnid please!!

@Arachnid 👍 👍 👍

Now that we have all those nice cryptographic primitives in EVM that allow for untraceable tokens, it would be reasonable to create a token standard that would actually support untraceable tokens. In order to do so, the transfer of tokens that currently both debits the payer account and credits the beneficiary account in the same transaction needs to be separated into two steps:

  • withdraw: debiting the payer account and turning some piece of information into a valid off-chain token
  • deposit: crediting the beneficiary account, if a valid off-chain token has been supplied to it.

This means that the sum of all balances will no longer be equal to totalSupply, but rather less than or equal to it. The difference (which perhaps merits a separate accessor/query function such as inTransition) would be tokens in transition, i.e. ones that have already been withdrawn from payer accounts but not yet deposited to beneficiary accounts. In practice, if people care about privacy, the majority of funds will be in this state.

The actual amounts that can be withdrawn and deposited should be restricted to a small set of possible values so that transactions cannot be linked by amounts. However, this restriction does not need to be part of the standard, it is up to each token to implement their version.
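As an interface sketch of that two-step flow (the function names follow the comment above, but the commitment/proof encoding is an assumption; the actual cryptography is left to each token):

```solidity
pragma solidity ^0.4.18;

contract UntraceableTokenSketch {
    uint256 public totalSupply;

    // Tokens withdrawn but not yet deposited; the sum of all balances
    // plus inTransition equals totalSupply.
    uint256 public inTransition;

    // Debits msg.sender and records a blinded commitment redeemable
    // by whoever holds the matching off-chain secret.
    function withdraw(uint256 _amount, bytes32 _commitment) public;

    // Credits msg.sender if `_proof` validates an unspent commitment.
    function deposit(uint256 _amount, bytes _proof) public;
}
```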

I really love this standard @jbaylina, as it takes the good parts of #223 and makes them clearer, also thanks to the new function names. At the same time it is backwards compatible.

One note: I would rename isOperatorAuthorizedFor(...) to isOperatorFor(...) to make it more concise, as Operator already implies that it's operating in somebody's name.

@nagydani this is actually a testimony to how flexible the operator model is: it could make any ERC777 coin automatically untraceable if so desired. You can add a ring-signature mixer as an approved operator, and it will automatically transfer coins to itself and out.

@jbaylina I'm still unsure about using an external registry: I tried to understand how that worked in the Yoga token example but it's rather complex and not very obvious. Do you have a short gist showing an example of a constructor that self-registers a token on a registry, without requiring the token to have 'owners' and such?

@alexvandesande could you elaborate a bit on this? The simplest example would involve four participants (Alice, Bob, Carol, Dave), with Alice and Bob each sending a coin and Carol and Dave each receiving a coin, but it should be impossible to tell whether it was A→C, B→D or A→D, B→C.
BTW, separating the debit and the credit parts of a transfer transaction might come in handy with Ethereum 2 as well: there, if the two addresses are on different fibres, you cannot do anything else anyway.

+1 to removing 'decimals' (set it to some default value, like 18)

@nagydani Alice authorizes "mixerContract" as an operator for her tokens. Alice signs a message creating some off chain information and sends it (directly or via someone else) to the MixerContract, which then removes all coins from her account. Alice sends some information to Bob, off chain, and Bob provides this to the MixerContract (maybe via Carol), which now credits Bob with the tokens. From the outside, nobody can tell if Bob's coin came from Alice, Carol or Dave. All code was done on the MixerOperator, not the coin itself.

mcdee commented

Any reason that the events have mixed tense? Send is present tense but RevokedOperator is past tense. There would be less mental confusion if a single tense were used for all event names.

mcdee commented

NOTE: The decimals value returned SHOULD be 18.

Not a fan of this, unless you can give a solid reason why it should be 18. Reasons that don't wash include:

  • "offline wallets can rely on it". No they can't, unless decimals is mandated to be 18 rather than suggested
  • "Ethereum uses 18". So what? Tokens aren't Ether
  • "your token contract can restrict transfers to a lower precision if you want". This creates wasted transactions, causes user confusion, and complicates token contract logic
devzl commented

+1 to removing 'decimals' (set it to some default value, like 18)

A standard shouldn't be opinionated and lock you into a way of doing things. The decimals value should be up to the user's needs and situation, and be theirs to decide.

@mcdee I think there will be more user confusion when someone trades 10 ETH for 0.001 ABC token because an exchange listed the "decimals" value incorrectly and it looked like they were getting 10,000 ABC.

Let's be real, most users aren't checking the difference between 0xde0b6b3a7640000 and 0x38d7ea4c68000 in their transaction data. Standards like this make life easier for users, and for developers (us).

Do you have a valid and general use-case for fewer or greater decimals than 18? I'm open to hearing new arguments, but to me this standard makes sense for 99% of cases, and for the 1% there are plenty of elegant solutions.

mcdee commented

...an exchange listed the "decimals" value incorrectly...

If the exchange bothered to read the decimals value from the chain this wouldn't happen. Suggesting that reading this value is somehow too onerous for the exchange to handle, or too complex for them to handle correctly, doesn't seem particularly realistic.

Let's be real, most users aren't checking the difference between 0xde0b6b3a7640000 and 0x38d7ea4c68000 in their transaction data. Standards like this make life easier for users, and for developers (us).

Saying that a value "should" be a particular figure is the worst of all worlds. It isn't definitely a given value, so no assumptions can be made, but gives the impression that it will be a given value, so assumptions will be made.

Do you have a valid and general use-case for fewer or greater decimals than 18? I'm open to hearing new arguments, but to me this standard makes sense for 99% of cases, and for the 1% there are plenty of elegant solutions.

#724 hashes all of this out and provides examples of situations where decimals other than 18 are in the wild, but the general use case is a token that has a specific divisibility. 0 is the most obvious one i.e. an indivisible token.

What I haven't heard is any good reason for restricting decimals to a fixed value, or giving one value prominence over another (the only one that holds any water is the offline/hardware situation, but given that the symbol of the token is required anyway for the transaction to be contextualised, the argument is weak). @MicahZoltu is the strongest proponent of removing decimals entirely, but the fact that the comment in which he proposes this has three options for decimals, each with a valid reason as to why it might be chosen, should show that a single value doesn't fit all use cases.

To be clear, I would rather "keep decimals" than have decimals be SHOULD. I think specs should take hard-line stances and not be vague whenever possible.

Please remember that decimals is a humanizing multiplier. Since it is not possible to predict the future value of your token, and tokens will fluctuate in value relative to each other, all tokens will at some point need to use SI prefixes in order to display them meaningfully to humans. Believing that you can pick a number in advance such that your token will always render in a human friendly way (for all of time) is naive.

Since we must accept that all tokens will need SI prefixes at some point in time, my argument is to just remove one more piece of complexity from future tokens that doesn't add any meaningful value.

Also, I do think there is value in being able to render the number of tokens in an offline signer even if you don't have the symbol. Being able to say

Transferring 15.6 gigatokens of 0xabcd token

is significantly better than

Transferring 15600000000000000000000000000000000 indivisible units of 0xabcd token

Regarding the "indivisible token" argument, there are a few options:

  1. Use SI prefixes like everyone else and people will just buy/sell 1 yoctotoken or 1 attotoken rather than 1 gigatoken like they do with some others.
  2. Make it so your token transfer either rounds or throws when people try to transfer sub-tokens.
  3. Allow for token divisibility for the sake of transferability, but round down to nearest "1 token" when using the token for whatever it is the token is used for that requires "whole" units.

Side note: I'm curious what the use-case is for indivisible fungible tokens. I'm struggling to come up with a scenario where you want to target your humanizing factor to be around the size of an indivisible token, but you also want them to be fully fungible (meaning tokens don't represent non-fungible physical assets or something). In the case of digital gold or USD-pegged tokens, divisibility beyond that of the underlying asset is still valuable for trading; it just means that when a user wants to redeem a token for 1 gold bar or 1 USD they must supply tokens in whole increments. However, I see no problem with people trading less than 1 bar of gold or less than 1 USD otherwise (this argues for option 3 above).

mcdee commented

Since we must accept that all tokens will need SI prefixes at some point in time

Must we? What about a token that represents a right to use (for example a software license)? It can be both indivisible and fungible, and its price will naturally float to a reasonable value (in ETH or USD, depending on prevalence) rather than suffer significant speculation (especially if the token has an uncapped total supply).

Note I'm not saying that the above is definite, but it is possible, and the standard should accommodate such situations.

In the case of digital gold or USD pegged tokens, divisibility beyond that of the underlying asset is still valuable for trading

Only if the token creator is happy with divisibility beyond that of the underlying asset. They may be, they may not be. Again, it should be a choice, and the standard should allow that choice to be made and be easily visible to end users.

I think we are agreed, however, that regardless of where decimals ends up the current use of should is a bad idea.

@MicahZoltu, @mcdee, @alexvandesande, @bwheeler96 I am more and more convinced that decimals() with SHOULD is not the best idea. (You are all convincing me.)

Before going forward, let me just clarify two concepts that I think we are mixing:

We have to distinguish between basicUnit and minimalUnit:
basicUnit would be the number of weis that correspond to the basic unit (10^18 in the case of Ether). This is equivalent to 10^decimals.
minimalUnit would be the number of weis of the smallest divisible part (1 in the case of Ether).

So here are the options for the standard:

  1. define basicUnit() and minimalUnit() methods and let the token creator choose the values freely. This is the most flexible option.
  2. recommend basicUnit() to be 10^18 and do not talk about minimalUnit(). This is the current specification, as decimals() is equivalent to basicUnit().
  3. force basicUnit() to 10^18 and do not talk about minimalUnit(). This would be the same as removing decimals() from the standard and changing the SHOULD be 10^18 to a MUST be 10^18. (This is the option proposed by @AnthonyAkentiev.)
  4. Do not talk about basicUnit() or minimalUnit() at all in the specification. That is, do not talk about decimals and let token creators do what they want. (The easiest way to reach a consensus is to agree that there is no consensus. :) )
  5. force basicUnit() to 10^18 and define minimalUnit(), letting the token creator choose the value freely.
  6. force minimalUnit() to 1 and define basicUnit(), letting the token creator choose the value freely.

I would like to hear your opinion, on a scale from 0 to 10, on each option:

Scale:
10->The best option
8 -> A good option
6 -> I would just accept it
4 -> I would not accept it.
0 -> This is an authentic disaster option.

Here is my score:
1-> 8
2-> 6
3-> 7
4-> 5
5-> 9
6-> 6

Of course feel free to add other options if you think they are better.

@jbaylina We have a real-world DApp that we developed: ethlend.io
Every time we need to pass a token amount from the DApp to the LendingRequest contract, or from coinmarketcap through Oraclize, we have to deal with a different decimals value.

Everything becomes more complicated with decimals.
However, I think that option 5 above is a good choice.

Thank you for providing an awesome list of options.

@jbaylina not sure I understand what you mean by minimalUnit. And when you say Wei, you mean "the smallest possible unit of that currency, equivalent to Wei", right?

mcdee commented

1 -> 6 (I believe it would confuse token contract developers)
2 -> 4 (I still don't like the idea of setting one value above others)
3 -> 0 (there really, really are valid scenarios for having variable per-contract decimals)
4 -> 4 (cop-out; let's reach some consensus here)
5 -> 8 (the only immediate downside is that no-one can have divisibility higher than 10^18, but I don't think that this is something worth worrying about)
6 -> ? Isn't this just 2 in different clothes (and without the recommendation)?

So 5 is looking like the best option. It allows offline systems to display values correctly without needing to look up data on-chain. It allows token contract creators to select the divisibility of their token. The only variable is minimalUnit (which might need a better name; atom?) and basicUnit vanishes.

And echoing @alexvandesande using the term "Wei" in here is a bit weird as Wei is a specific subdivision of Ether.

Oh with @mcdee comments I think I get it now. I agree with (5) : set all tokens to be automatically divisible to 10^18 and phase out the decimals standard. Instead set some minimum divisibility standard that tells wallets not to try to send anything less than that (which is enforced by the code itself).

Option 5 seems like a best of both worlds. Enforce 1 token = 1e18 but allow developers to set their own minimumUnit. This keeps the math consistent across applications but allows for an additional constraint to be applied if desired.

Is minimalUnit a tooling hint or something that MUST be enforced by the contract? If I have a minimalUnit of 10^15 (and for the sake of this example 10^18 is used as basicUnit) and a tool tries to send 10^14 attotokens is the contract expected to fail or is the contract expected to succeed?

If we do go with minimalUnit (I'm not yet casting my "vote"), my recommendation in this case is to have the spec say:

minimalUnit SHOULD be 0 - Unless you have a really good reason why your token cannot be fully divisible (there are almost no such good reasons), then you should just set this to 0.

I still would like to hear some examples of tokens that are problematic when divisible. People keep saying they exist but so far I haven't heard any convincing arguments for castrating divisibility. Even in the software license example, I don't see a problem with a software license being divisible. You need a whole number of them to use it as a license, but it doesn't hurt anything to allow people to buy/sell half of a license. I can't buy anything with 0.01 USD, but it doesn't hurt anything that I can trade in increments of 0.01 USD. (cc @mcdee since you seem to be making the strongest statements that these examples exist).

Remember, just because something is divisible doesn't mean people have to leverage that divisibility. I can trade less than 1 REP, but my account has exactly an integer amount of REP in it because I choose not to trade sub-1 REP amounts.

mcdee commented

minimalUnit SHOULD be 0 - Unless you have a really good reason why your token cannot be fully divisible (there are almost no such good reasons), then you should just set this to 0.

That's really opinionated. Define what it means, give examples, and let people choose for themselves.

Even in the software license example, I don't see a problem with a software license being divisible.

The problem is that the token creator might not want people to trade fractions of licenses. If you don't have a problem with the concept of fractional licenses that's fine but that doesn't mean that everyone should follow suit. A standard that actively reduces functionality is not good and when people want that functionality they will be forced to diverge from the standard.

If person A and person B want to create token license, and person A is happy with having divisible licenses but person B is not then why should we exclude person B from being able to use this standard?

Bottom line is that having a hard-coded limit on divisibility is a restriction for no good purpose. There are tokens out there today with varying levels of divisibility and continuing to provide the ability for token creators to select divisibility has no downsides that I can see and increases the number of use cases where the standard can be applied.

A standard that actively reduces functionality is not good

All standards reduce functionality, constraining what people can do is what it means to have a standard. The more permissive a standard is, the harder it is for people to integrate with it. Decimals is a good example of this because it adds complexity to tools which now need to handle variable decimals, something that they wouldn't have to code for if the standard were less permissive.

For any feature that makes the standard more permissive, we must weigh the benefits that permissiveness provides against the integration costs that permissiveness imposes. In this case, the benefits are that it allows people to create tokens with constrained divisibility and the costs are that when integrating with the token developers need to account for variable divisibility.

My argument is simply that the benefits of being more permissive in this case are:

  • minimal (this isn't a major feature)
  • not breaking/critical (it may be annoying to a token author, but doesn't break anything)
  • historically not often leveraged (most tokens have large decimals)

IMO, these benefits don't outweigh the increased integration costs for all tools.

I agree, my SHOULD comment is incredibly opinionated. However, what I have witnessed in the Ethereum token space is that the vast majority of people who are authoring tokens don't really have a strong understanding of what they are doing or the complexities they are introducing on the ecosystem with their choices. Because of this, I think we should make it very obvious what the "right" thing to do is (that introduces the lowest external costs/complexities).


After thinking more on it, my vote is:
1 -> 4
2 -> 2
3 -> 10
4 -> 0
5 -> 6
6 -> 4

mcdee commented

In this case, the benefits are that it allows people to create tokens with constrained divisibility and the costs are that when integrating with the token developers need to account for variable divisibility.

Yes, this is the trade-off. My opinion is that a more flexible standard is worthwhile because it will gain wider use. Yours appears to be that it isn't worthwhile because of the additional work involved. Guess we'll have to go with the majority opinion.

I really like this standard as an evolution from erc20.

But I would remove "decimals", "name" and "symbol" from the standard. They are not needed for the token's operation, and since contracts have a fixed address, wallets can get this info through other means, including a registry (as part of another EIP), if they want to use the blockchain for that.

Naming should also not necessarily be dictated by the contract. Creators of tokens do not necessarily want to, or are not able to, name their token.
Imagine a contract that in some way generates new token types without the capability (or wish) to decide on appropriate names/symbols at the point of creation.

In ERC-20 this info was optional. If there are reasons to make it part of the standard, it would be great to include the rationale so we can discuss it further, but my stance is that all of this info is external to the functioning of the contract and is better off set outside of it.

mcdee commented

A common requirement for token contracts is to carry out bulk transfers i.e. multiple transfers in the same transaction. Often this is purely for gas savings, but sometimes there is a requirement for atomic transfer of tokens to multiple recipients.

I'm torn on if this should be considered for inclusion in to the standard. It does have utility, but there are two issues. On a cleanliness-of-standard front this is something that can be carried out by a separate contract and so isn't 100% necessary to be present in the token contract. I might push to include it anyway, as it is as I say a commonly requested addition to ERC-20 contracts, but Solidity doesn't like bytes[] so the bulk send signature would need to be one of:

function bulkSend(address[] to, uint256[] value) public;
function bulkSend(address[] to, uint256[] value, bytes userData) public;

i.e. userData would either be blank or constant for all recipients.

In terms of the function of bulkSend() it should loop through the arrays and send() to each. To allow for atomicity it should throw if any of the send()s fail.

izqui commented

@mcdee I think bulk functionality should be kept outside the standard as it can be performed in a second layer protocol easily. This is a toy project I have been working on exploring how to achieve super cheap token transfers: https://github.com/aragonlabs/pay-protocol/blob/master/contracts/PayProtocol.sol#L106 Making this work with 777 would be as easy as supporting operatorSend in the same way transferFrom is used now.

0xjac commented

1-> 3 (too much flexibility, it will introduce confusion and misinterpretation)
2-> 6 (that is how it is now; not the best but I'm OK with it)
3-> 7 (not enough flexibility; I can't think of a use for variable decimals but if there is one, this blocks it)
4-> 0 (too much flexibility, it will introduce confusion and misinterpretation)
5-> 9 (could it prevent some contracts from working properly if minimalUnit is too high?)
6-> 5 (somewhat similar to 2 without the recommendation)

I don't like the term minimalUnit. How about minorUnit, granularity, smallestDenomination?

mcdee commented

@izqui yeah like I say I'm torn on it. It's easy enough to add or not to the token contract, or to run as a separate contract or second layer service, it just seems to be asked for a lot as a function so would be used. Not having it in the standard doesn't hurt, though, as people can add it to their own contracts anyway.

@jacquesd granularity is probably the best of those options (I proposed atom above as well). It is important to pick something that doesn't confuse people as to the value being a multiplier rather than a minimum (e.g. if the value is 100 then it needs to be clear that the user can't transfer 101).

So let me check my understanding; would this be a good summary of what everyone seems to be converging on in the tokens discussion:

  • Keep decimals() as a standard for backward compatibility, but require that all tokens keep it at 18
  • Add a secondary piece of information called granularity(), or minimalDivisor(), or something similar, that should be kept at ~~0~~ 1 unless the creator wants to make their token less divisible. Example: for a token that only uses integers, granularity should be 10^18 (or maybe just 18?), and it should also be enforced in code (return an error when trying to send an amount not divisible by it)

Everyone on the same page?

mcdee commented

@alexvandesande on the second point, it should be 1 not 0 as the minimum, but apart from that I'm in agreement. I'd also make the value an absolute value rather than a power of 10, which means that unlike decimals this should be a uint256
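To make these semantics concrete, here is a minimal sketch, not part of the standard text, of how a token contract might enforce an absolute, uint256-valued granularity in its send path. The `mGranularity` field and the `doSend` helper are hypothetical names for illustration only:

```solidity
pragma solidity ^0.4.19;

contract GranularToken {
    // Absolute granularity: every amount moved must be a multiple of this.
    // E.g. with the fixed 18 "decimals", 10**16 restricts transfers to
    // steps of 0.01 tokens.
    uint256 internal mGranularity;

    mapping(address => uint256) internal mBalances;

    function granularity() public view returns (uint256) {
        return mGranularity;
    }

    // Internal transfer helper used by send()/operatorSend()
    function doSend(address _from, address _to, uint256 _value) internal {
        require(_value % mGranularity == 0); // rejects e.g. 101 when granularity is 100
        require(mBalances[_from] >= _value);
        mBalances[_from] -= _value;
        mBalances[_to] += _value;
    }
}
```

Note that granularity here is a multiplier rather than a minimum: with a value of 100, sending 200 succeeds but sending 101 reverts.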

Updated the standard and the reference implementation with granularity() and the fixed decimals. Please check that it looks OK to you.

@mcdee Just replaced Send by Sent in the event to keep it coherent.

mcdee commented

@jbaylina cool. If you're going to use past tense for all events you should also change Mint->Minted and Burn->Burned

0xjac commented

@mcdee Nice catch, it's been changed at 0xjac/ERC777/pull/18 It will be updated here very soon.

@onbjerg

I worry about the entire ecosystem having to migrate to new multisig wallets in order to be able to receive ERC777 tokens.

Why would people need to migrate to new multisig wallets? By design, multisig wallets can execute any arbitrary transaction, there's no reason they can't register an ITokenRecipient interface with the EIP-820 registrar and/or register themselves with the tokenable contracts registrar. Once you've done either of those things, as long as your multisig's fallback function doesn't throw/revert on a 0 value transaction no further changes are necessary to technically be compatible.

Your old multisig wouldn't be able to log the incoming transactions (or respond to them in any other way), you'd need to migrate to a new contract to do that, but it would be able to receive tokens and since it can execute any arbitrary transaction it would also be able to interact with them.
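For illustration, the registration described above might look roughly like this. The registry interface follows the `setInterfaceImplementer` signature from the EIP-820 draft; the registry and implementer addresses are placeholders, and the wrapper contract name is hypothetical:

```solidity
pragma solidity ^0.4.19;

interface ERC820Registry {
    function setInterfaceImplementer(address _addr, bytes32 _interfaceHash, address _implementer) external;
}

// Hypothetical helper showing the single call a wallet needs to make.
// A multisig executing this call through its arbitrary-transaction
// feature would pass its own address as `wallet`.
contract RegistrationExample {
    function register(ERC820Registry registry, address wallet, address acceptAllImplementer) public {
        // EIP-820 restricts this call to the address itself (or its manager),
        // so it must originate from the wallet being registered.
        registry.setInterfaceImplementer(wallet, keccak256("ITokenRecipient"), acceptAllImplementer);
    }
}
```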

there's no reason they can't register an ITokenRecipient interface with the EIP-820 registrar

I have to say, I'm really not a fan of relying on tying a new token standard to the use of an external function registry. I think determining how a token transfer should process should be left as part of the token implementation if the author wants to do anything beyond perhaps querying the receiving address as to its capabilities.

If there needs to be more than one way to transfer a token based on the capabilities of the receiving contract, then I think the right thing to do would be:

  1. Check if the receiving address is a contract; if it's not, well, just send the token.
  2. On contracts, call something like function standardCheck(bytes32 _standard) public view returns (bool) where _standard can be the hash of whatever string this token standard defines.

This would decentralize the standard registry, since only contracts that actually implement certain standards would ever include them in their internal standards registry. The suggested implementation throws on an attempted send()/operatorSend() to a contract that doesn't implement this standard, and sends on to wallets anyway; this should work the same way, but obviate the need for any calls to an EIP-820 registrar.

Since we already assume multisig wallets would need to implement tokensReceived() anyway, they'll need to be redeployed regardless if they want to handle ERC777 tokens.

As a bonus, any token wanting to integrate with #820 could also query standardCheck() for that, and proceed accordingly.
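As a sketch of the decentralized approach described above (the names follow the commenter's proposal and are not part of any finalized standard):

```solidity
pragma solidity ^0.4.19;

contract StandardAware {
    // Internal registry: hash of a standard name => supported
    mapping(bytes32 => bool) internal supportedStandards;

    function StandardAware() public {
        // Declare support for this token standard (hypothetical identifier)
        supportedStandards[keccak256("ERC777Token")] = true;
    }

    function standardCheck(bytes32 _standard) public view returns (bool) {
        return supportedStandards[_standard];
    }
}
```

Under this scheme, a token's send() would call standardCheck(keccak256("ERC777Token")) on a receiving contract and either proceed or throw based on the result, with no external registry involved.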

mcdee commented

send() might be problematic for a couple of reasons. First, truffle (at least, unsure about other Javascript apps) creates contract objects with a send() method for their own purposes and as a result it is not possible to test an EIP-777 token within truffle (calling send() goes in to a loop and eventually throws with an insane out of gas issue). Second, overloading send() causes problems with Javascript apps that create contract objects with methods for each function; only one send() method is present even though there are two in the contract.

Personally I don't care too much because I use Go, but there are a lot of developers that use truffle and Javascript and this could be a barrier to adoption.

@mcdee send is a perfectly valid method name in Solidity. And I think it's the best name because it recalls the "Ethereum method" of sending tokens. And it's very different to `transfer`, which is what I want to avoid.

As a workaround for this you can use `token.methods['send(address,uint256)']` to call a send function and to explicitly distinguish between overloaded methods.

mcdee commented

@jbaylina I agree with everything you say (although note that truffle-contract doesn't allow overloading of methods at current). The concern comes from thinking about ease of adoption, testing, etc. Perhaps it comes down to ensuring that good annotated examples are provided for situations like this where developers might be caught out.

mcdee commented

A slightly more left-field suggestion: add an ITokenSender interface that is the equivalent of ITokenRecipient but runs before the tokens are transferred.

The purpose of ITokenSender would be to allow a sender to put filters and actions in place prior to sending tokens. It came out of consideration of how to put limits on an operator's ability to send tokens (similar to the limited approve() in ERC-20) but has wider purpose. Some examples off the top of my head:

  • limit addresses to which the token can be transferred (so tokens in your cold wallet can only be transferred to your hot wallet)
  • limit the number of tokens that can be transferred in a given period
  • move additional tokens prior to the transfer (to a withholding account for tax purposes, for example)
  • time-lock sends so that after an initial attempt at a send, which fails, a subsequent transaction will only succeed after a given time has passed

and I'm sure that there are others.

This is primarily aimed at operatorSend() but some of the examples above also apply to send().

ITokenSender would be:

interface ITokenSender {
  function tokensToSend(address from, address to, uint amount, bytes userData, address operator, bytes operatorData) public returns (bool proceed);
}

similar to ITokenRecipient except that it returns true or false depending on if the transfer should continue. On a return of false the send should exit but not throw (to allow for state changes in tokensToSend() to stick).

Might I suggest that tokensReceived() be changed from

function tokensReceived(address from, address to, uint amount, bytes userData, address operator, bytes operatorData) public;
to
function tokensReceived(address from, uint amount, bytes data) public;

Rationale:

  1. I have quite a bit of difficulty seeing a use case where a contract would need a copy of its own address in the to variable.
  2. I don't think the contract receiving tokens should have any reason to care who the operator was; they're just acting as an agent for the actual token holder. Also, if the operator was acting on behalf of the user, then they should be the ones setting the single bytes data parameter received. By nature, the user isn't submitting this transaction, and a receiving contract should handle all transfers of a token the same way, regardless of the entity initiating the transaction.
mcdee commented

@mjdillon the contract handling tokensReceived does not need to be at the to address; it can be a separate contract (to doesn't even need to be a contract, it can be a regular address). operator can be useful for auditing/accounting purposes, at the least.

@mcdee I like the idea of ITokenSender.

I would follow the philosophy of just throw or not throw to accept the operation. Instead of returning true/false.

I would like to hear the opinion of the community on this proposal before including it, especially if there is strong opposition to it. If not, I'm going to add it.


mcdee commented

@jbaylina throw/not throw means that the ITokenSender cannot both alter state and refuse the transfer, so use cases such as time-locked sends wouldn't be possible.

That said, I prefer throw/not throw in the general case. If ITokenSender ends up using throw/not throw it reduces but doesn't eliminate the value that it brings. I could certainly live with either solution but thought I'd highlight the differences and why I initially went for a return value approach.

@mcdee Can you put a concrete example that would not be possible with throw/not throw?

mcdee commented

@jbaylina here's a thrown-together example of a time-locked ITokenSender. A real-world one would have separate times for each token as well as the to/from pair, generate an event when the initial request was denied (to allow the owner of the funds to be made aware of the clock ticking), and various other features to make it useful but this should be enough to show where allowing state updates but refusing the transfer could be handy.

/**
 * A contract that blocks tokens from being transferred for a given time
 * period.  It works by denying an initial attempt to send a token,
 * storing the time at which it will be allowed, and allowing it if a
 * subsequent transaction with the same parameters arrives at a suitably
 * later time.
 */
contract TimeLockedTokenSender is ITokenSender {
    // Mapping is from=>to=>time allowed
    // (Outside of example this would be more comprehensive)
    mapping(address=>mapping(address=>uint)) allowed;
    
    function tokensToSend(address from,
                          address to,
                          uint256 value,
                          bytes userData,
                          address operator,
                          bytes operatorData) public returns (bool proceed) {
        if (allowed[from][to] == 0) {
            // First time we have seen this transaction; mark it as allowed in
            // the future and refuse it for now
            // (Hard-coded 1 hour time period for example purposes)
            allowed[from][to] = block.timestamp + 1 hours;
            return false;
        } else if (allowed[from][to] > block.timestamp) {
            // Not allowed to send it yet
            return false;
        } else {
            // Allowed to send; remove marker and proceed
            allowed[from][to] = 0;
            return true;
        }
    }
}

@mcdee In general, I don't like it when one method is used to do something else. In this case, making a transfer to activate the side effect of triggering a timer.

Let me propose 2 alternative ways to solve it, to see if I convince you:

contract TimeLockedTokenSender1 {
    // Mapping is from=>to=>time allowed
    // (Outside of example this would be more comprehensive)
    mapping(address=>mapping(address=>uint)) allowed;
    
    function triggerTimer(address to) {
        allowed[msg.sender][to] = block.timestamp + 1 hours;
    }
    
    function tokensToSend(address from,
                          address to,
                          uint256 value,
                          bytes userData,
                          address operator,
                          bytes operatorData) public returns (bool proceed) {
        require(allowed[from][to] >= block.timestamp);
        allowed[from][to] = 0;
    }   
}

And the second alternative

contract TimeLockedTokenSender2 {
    // Mapping is from=>to=>time allowed
    // (Outside of example this would be more comprehensive)
    mapping(address=>mapping(address=>uint)) allowed;
    
    function tokensToSend(address from,
                          address to,
                          uint256 value,
                          bytes userData,
                          address operator,
                          bytes operatorData) public returns (bool proceed) {
                              
        if ( extractBytes4(userData) == bytes4(sha3("triggerTimer()") ) ) {
            require(value == 0);
            allowed[from][to] = block.timestamp + 1 hours;
        } else if (userData.length == 0) {
            require(allowed[from][to] >= block.timestamp);
            allowed[from][to] = 0;
        }
    }
    
    function extractBytes4(bytes data) internal returns(bytes4) {
        return bytes4(data[0] << 24 | data[1] << 16 | data[2] << 8 | data[3]);
    }
}

In general, a smart contract that makes a transfer and returns wants the guarantee that the tokens have been transferred.

mcdee commented

@jbaylina 1) uses a separate function and 2) requires a user to send two different transactions to carry out the operation. I agree there are lots of different ways of doing this, but none of them are as clean as just sending the same transaction twice.

In general, a smart contract that makes a transfer and returns wants the guarantee that the tokens have been transferred.

That's an assumption that I've seen voided many times in contracts, although I do tend to agree with it unless there is a good reason to not. As I say, I can live with tokensToSend() going down the throw/not throw path.

mcdee commented

A few notes looking at minting and burning.

  1. the Minted() event does not have operator data. Given that minting tokens requires operatorData, and the Sent() event contains this data, it seems like an odd omission
  2. the documentation around tokensReceived for minting seems to be corrupted, specifically the second paragraph. It is also unclear if tokenable contracts should receive minted tokens (I assume they should, but it is not explicit)
  3. Burnt() is a little strangely named. The past tense of burn is usually burned, with burnt being used in an adjectival manner (e.g. "burnt dinner")
mcdee commented

The documentation for send() says:

The function MUST throw if:

  • msg.sender account balance does not have enough tokens to spend
  • to is a contract which is not prepared to receive tokens. Specifically, it is a contract that does register an address (its own or another) via EIP-820 implementing the ITokenRecipient interface; or whose hash of the source code is not in the whitelisted codes listed in the appendix of this code.

The second point appears to be incorrect. Perhaps it should say "Specifically, it is a contract that neither registers an address (its own or another) via EIP-820 implementing the ITokenRecipient interface nor whose hash of the source code is in the whitelisted codes listed in the appendix of this code."

There is also no mention of Tokenable (hideous name, by the way) in the conditions for send() but it is referenced later.

I wholly support adding ITokenSender, I believe it's essential to making operators useful and the standard becomes very extensible to a lot of applications.
I don't see the harm in allowing side effects, especially since transactions in ERC20 can already fail without reverting, so this change would mirror that signature. While other things might be a bit cleaner conceptually, the ease of integration would probably be much higher and the resulting interface remains standardized. But if that is adopted, I suggest allowing the same for ITokenRecipient, as well.

mcdee commented

I've spent a bit of time looking at Tokenable and have some concerns about its design and its impact on this EIP.

The idea of Tokenable is stated as:

The Tokenable Contracts Registry is a registry where contracts wishing to receive tokens without registering the ITokenRecipient via EIP-820 CAN register themselves.

The key word here is "themselves". For a contract to register itself as able to accept tokens it needs to have knowledge of the tokenable contracts registry.

Given that the tokenable contracts registry is being presented as part of this standard it's unlikely that any pre-existing contracts will have this knowledge, and as a result will not have the relevant functions to be able to register themselves.

For future contracts it seems that having a second interface creates unnecessary complexity. If a contract wishes to receive tokens without checks it can register itself (or another contract) as an ITokenRecipient and provide a pass-through tokensReceived().

So: given that pre-existing contracts won't be able to use Tokenable and future contracts can use ITokenRecipient to provide the same functionality it doesn't appear Tokenable has a valid purpose in this EIP. Am I missing something?

@mcdee

This is an error in the standard. It must not say "themselves". It is a centralized and trusted database of contracts that SHOULD accept tokens by default without having to implement ITokenRecipient. They should be treated as regular accounts. The plan is for it to be an owned contract for a while and to then transfer ownership to 0x0 at some point to make it trustless.

Constructing this DB can be a mess, so I'm thinking of just removing it entirely from the standard. The only problem with this is that users of multisigs and proxy contracts will have to call setInterfaceImplementer() on EIP-820 once in order to register an AcceptAll contract as the implementation of the ITokenRecipient interface.

The accept-all contract will be something like this:

contract AcceptAll {
   function tokensReceived(address from,
                           address to,
                           uint256 value,
                           bytes userData,
                           address operator,
                           bytes operatorData) public {
      // Do nothing, meaning it accepts anything.
   }
}

This contract will be deployed using the EIP-820 technique, so that it will have the same address on any blockchain.

I would like to hear opinions on this too.

I was under the assumption that the main mechanism for transferring tokens would still be transfer for a long while, even for ERC777 tokens, which means the protection would only be gained for new contracts which actively choose to block receiving.
I don't see how manually adding contracts to the Tokenable registry could ever cover the necessary contracts. Allowing anyone to add contracts could be a solution, but I imagine there would be quite an outcry if things interacting with old contracts break all the time and require manual intervention to fix.
(Tokenable should also only be checked if a contract doesn't register ITokenRecipient.)
It might be worthwhile to keep the Tokenable registry and include, once, all contracts that have ever been active, but only allow them to remove themselves.
This way all new contracts could be enabled (forced?) to use send only, which would massively speed up the transformation of the ecosystem into one that doesn't lose any tokens.

mcdee commented

@jbaylina yeah a "centralized and trusted database" doesn't sound like it fits very well with the rest of the design. Seems like dropping Tokenable is a good idea.

If a token contract is expected to send tokens to existing contracts it appears to make most sense to suggest that the token contract developer provide both EIP-777 and ERC-20 functionality, then they can fall back to transfer() where required.

mcdee commented

@hyperfekt the problems with asking contracts to remove themselves from a registry are that a) some contracts might not know about the registry and b) some contracts might not be able to remove themselves from the registry.

@mcdee Sorry, some redundancy slipped in there by accident. That provision was mostly just for the few contracts which might support ERC777 before the registry goes live, but that is covered by checking for ITokenRecipient first.
The proposed registry (which includes all old contracts) just changes the protection from 'all contracts which are new and choose to implement ITokenRecipient' to 'all contracts which are new', as we can force them to implement ITokenRecipient; because if we don't allow old contracts to continue working people will just use transfer instead of send and our protection is downgraded to 'all contracts which are new and choose to implement ITokenRecipient'.
A good alternative might be to register all contracts which have interacted with tokens instead, which would further increase protection and probably incur very few incompatibilities. (It is also a ton cheaper in terms of storage.)
Depending on the reliability of this registry, it could be worthwhile to consider swapping the functions transfer and send, this way even if old contracts are the originators of the transfer the tokens could be protected from loss.

mcdee commented

@hyperfekt token contract developers have two options. They can either include ERC-20 functionality, in which case they can have the "benefits" of ERC-20 compatibility, or they can exclude ERC-20 functionality. Either way, though, sits outside what I see as the remit of EIP-777.

mcdee commented

Apologies for the scattered nature of these comments, but I believe that it makes sense for the Burned() event to also have a data parameter, userData, analogous to Sent(). And for Minted() I don't see how an operator could mint tokens for a user, so it makes sense for Minted() to provide from and userData rather than operator and operatorData.

There are also issues with the Sent() and Minted() events over the ordering of their arguments. Indexed arguments are stored as topics rather than data, so taking Sent() as an example in fact the values will be stored as:

- from: topics[1]
- to: topics[2]
- value: data[0]
- operator: topics[3]
- userData: data[1]
- operatorData: data[2]

which will be pretty confusing for users trying to parse the events. I'd be inclined to move the indexed operator to the first argument, which fits with the other events that have operator and puts all indexed arguments first.

This makes the updated events as follows:

event Sent(address indexed operator, address indexed from, address indexed to, uint256 value, bytes userData, bytes operatorData);
event Minted(address indexed from, address indexed to, uint256 value, bytes userData);
event Burned(address indexed from, uint256 value, bytes userData);

(You could move operator to go after to in Sent() if you have a strong preference; the important thing is to keep the indexed parameters ahead of the non-indexed).

0xjac commented

@mcdee The idea of tokensToSend is interesting, however I don't see any use case which
could not be done with an operator and operatorSend.

Quick examples to illustrate what I mean with your examples

  1. limit addresses to which the token can be transferred (so tokens in your cold wallet can only be transferred to your hot wallet):
function walletTransfer(address _coldWallet, uint256 _value, bytes _operatorData) public onlyOwner {
  Ierc777(token).operatorSend(_coldWallet, hardCodedHotWallet, _value, '', _operatorData);
}
  1. limit the number of tokens that can be transferred in a given period (example from @jbaylina)
function sendWithinAnHour(address _from, address _to, uint256 _value, bytes _userData, bytes _operatorData) public onlyOwner {
  require(allowed[_from][_to] >= block.timestamp);
  allowed[_from][_to] = 0;
  Ierc777(token).operatorSend(_from, _to, _value, _userData, _operatorData);
}
function triggerTimer(address _to) public {
    allowed[msg.sender][_to] = block.timestamp + 1 hours;
}
  1. move additional tokens prior to the transfer (to a withholding account for tax purposes, for example): simple example to pay the 7.7% Swiss VAT
function sendWithSwissVAT(address _from, address _to, uint256 _value, bytes _userData, bytes _operatorData) public onlyOwner {
  // Solidity has no fractional literals, so compute 7.7% with integer math
  Ierc777(token).operatorSend(_from, SwissFederalCustomsAdministration, _value.mul(77).div(1000), _userData, _operatorData);
  Ierc777(token).operatorSend(_from, _to, _value, _userData, _operatorData);
}

I would take advantage of operators for more complex behaviors such as those mentioned above and keep the standard as simple as possible. Adding tokensToSend may make it easier to deal with those specific cases but I fear it will complicate the task of easily implementing the standard overall.

Regarding your other comments:

  1. Burnt: I wasn't sure which one to go with; I've changed it to Burned.
  2. Minted: yes, the lack of operatorData was an omission in the doc (it was already in the reference implementation). This has been fixed.
  3. tokensReceived for minting:
    I moved things around and made this unclear; I have improved it. If it is still unclear let me know and I'll improve it further. I also clarified how send, operatorSend and minting must throw with respect to the tokenable registry.
  4. Contracts registering themselves in the tokenable registry: As @jbaylina mentioned, this is a mistake and it has been corrected. We are still discussing this part of the standard. One of the main objectives behind ERC-777 is to make it as easy and simple as possible to use and to implement. I am not sure whether the presence or the absence of the tokenable registry goes further towards this objective.

@hyperfekt transfer should only be used for interaction with older contracts (which do not support ERC-777).

When using the ERC-20 compatible methods of an ERC-777 token you get some of the benefits (tokensReceived) of ERC-777 while maintaining backwards compatibility. However you are still exposing yourself to the risks of ERC-20. If you are transferring to a contract which may lock your tokens, you should use approve/transferFrom instead.

For new contracts, you should use the new send method instead. Also note that when implementing an ERC-777 token contract, implementing the ERC-20 compatible method is not required and one could very well create an ERC-777-only contract without the transfer method.

0xjac commented

@mcdee

userData in Burned

In Sent, the userData is intended for the recipient not the sender. With Burned there is no recipient so the userData would be intended to no one.

Passing userData in Burned to the "burner" would compare to passing userData to the sender in Sent but that is meaningless since the userData is originating from the sender.
What type of data are you thinking of passing in the Burned event?

operator in Minted

In Sent, from (and userData) is the address (and the data) from the account initially holding the tokens.

When minting, new tokens are created. They are not taken from the balance of from. More importantly, upon minting, tokensReceived MUST be called as well, with from set to 0x and operator set to msg.sender (the address which triggered the mint). This allows differentiating a tokensReceived call triggered by send/operatorSend from one triggered by minting. I think it also makes sense because in this case the operator sent tokens from nowhere (0x) to some address. To avoid confusion and keep consistency, the address which issued the mint is also called operator in the Minted event.

It is true that when implementing tokensReceived the from address can be 0x, and I will update the document to emphasize that a 0x from is an expected valid value (and that whether operator is authorized for 0x is meaningless).

If anyone thinks that operator and operatorData in the Minted event are still confusing terms, I am ok with renaming them to something like mintOperator and mintOperatorData (but tokensReceived will remain unchanged and keep 0x for from and operator/operatorData).

indexed arguments

That's a very nice catch. The reason why they are in this order was to be consistent between the send method and the Sent (and thereafter the Minted/Burned events).

Looking at it, rather than reorganizing the arguments, I would maybe make operator not indexed. Maintaining the current order is not actually important but I am not sure that having the operator indexed is the best solution. Solidity hashes the indexed values before setting them as topic. Leaving the operator unindexed and thus readable in the data may be better. Opinions on this are welcome!

mcdee commented

@jacquesd regarding using operatorSend() instead of ITokenSender: yes you could use external contracts to carry out the operation, but they would not trigger when called through functions that are part of the proposed standard. The benefit of ITokenSender is that it triggers on a standard call to the token contract's send(), which makes it very powerful and, in my opinion, greatly increases the ability of the standard.

I'll give an example of replacing the old-style approve functionality: a token sender that only allows an operator to send tokens up to a given limit. With the current standard there is no way for a sender to limit the number of tokens as the operator can call operatorSend() directly as soon as they are made an operator for however many tokens they desire. An appropriate ITokenSender would be able to replicate the approve functionality in a clean way, separated from the main logic of the token contract but without operators being able to circumvent it.
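As a sketch of what that could look like (assuming the EIP-672 lookup resolves from to this contract and that throwing in tokensToSend aborts the transfer; the allowance bookkeeping and setAllowance are hypothetical helpers, not part of the proposed standard):

```solidity
// Hypothetical ITokenSender replicating approve-style limits.
// The token contract is assumed to call tokensToSend (via the EIP-672
// lookup for `from`) before moving tokens; a throw here aborts the send.
contract AllowanceTokenSender {
    address public owner;
    // remaining amount each operator may move on the owner's behalf
    mapping(address => uint256) public allowance;

    function AllowanceTokenSender() public { owner = msg.sender; }

    function setAllowance(address _operator, uint256 _value) public {
        require(msg.sender == owner);
        allowance[_operator] = _value;
    }

    function tokensToSend(address _from, address _to, uint256 _value,
                          bytes _userData, address _operator, bytes _operatorData) public {
        // sends initiated by the holder themselves are not limited
        if (_operator != _from) {
            require(allowance[_operator] >= _value);
            allowance[_operator] -= _value;
        }
    }
}
```

The key difference from walletTransfer-style wrappers is that this hook also fires when the token's own send()/operatorSend() is called directly, so operators cannot bypass the limit.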

I'm not sure about the argument of ITokenSender significantly increasing the complexity of this standard. The code is no more than:

    address senderImplementation = interfaceAddr(from, "ITokenSender");
    if (senderImplementation != 0) {
        ITokenSender(senderImplementation).tokensToSend(from, to, value, userData, operator, operatorData);
    }

prior to transferring tokens from one account to another (assuming that tokensToSend() throws rather than returns a bool, although it's hardly more complex in the latter case). For all of the potential benefits that users can gain from hooking into the send process, this seems like not a lot of work.

After some more thought I don't think that Tokenable has any place in the standard. This is because the registry can either be permissioned (i.e. only an address can say if it is tokenable or not) or not. If permissioned then it doesn't give any benefit to existing contracts as they won't have the function call already in place to inform the registry. If not permissioned then anyone can alter the tokenable status of contracts and it cannot be considered trusted.

I'll have a look through the changes and let you know if I have further thoughts on them.

This standard is really shaping up nicely. I'd love to see it finalised soon and start to be put out there as a viable alternative to ERC-20.

mcdee commented

userData in Burned()

In general (i.e. across all functions), I can see userData having two different purposes. The first is to allow ITokenSender/ITokenRecipient to receive additional information and act upon it. The second is to provide some sort of additional record to the transaction, for example it might contain a reference ID (order number when sending to a vendor, account number when sending to an exchange).

I don't see any purpose in including userData in events for the former purpose, as once the contract has acted upon the information it either lets the transaction complete or throws. For the latter purpose, however, it makes perfect sense for the data to be in the event.

So, looking at the latter case it would make sense for userData to be present in the Burned() event if the user wanted to reference why they burned the tokens. An example of this might be a contract that burns tokens every epoch to reduce total supply and increase price (as an alternative mechanism to paying dividends). In this situation userData might contain the epoch for which the tokens were burned.
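A minimal sketch of that scenario (hypothetical: it assumes the token exposes a burn(uint256, bytes) entry point which surfaces the passed userData in Burned()):

```solidity
// Hypothetical epoch burner: burns tokens every epoch and records which
// epoch in the Burned() event via userData.
contract Ierc777Burnable {
    function burn(uint256 _amount, bytes _userData) public;
}

contract EpochBurner {
    Ierc777Burnable public token;
    uint256 public epoch;

    function burnForEpoch(uint256 _amount) public {
        epoch += 1;
        // encode the epoch number into userData so the Burned() event
        // records for which epoch the tokens were destroyed
        bytes memory userData = new bytes(32);
        uint256 e = epoch;
        assembly { mstore(add(userData, 32), e) }
        token.burn(_amount, userData);
    }
}
```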

operator in Minted()

I believe that the issue here comes down to overloading the use of the word "operator". In the rest of the spec "operator" means "an account allowed to do something on another account's behalf". In this particular case it appears to mean "a user for the token contract allowed to mint tokens".

Although a Minted event doesn't have a sender it does have an initiator. I'd make the first parameter of Minted() the address of the user who initiated the mint transaction, called either by or from (the former for accuracy, the latter for consistency). I'd also change operatorData to userData:

event Minted(address indexed by, address indexed to, uint256 amount, bytes userData);

Calling tokensReceived() on minting is indeed an issue. Although I understand what is in the spec, it does create additional complexity for all implementations of tokensReceived() as they need to consider if this is a send() or a mint() and obtain data from different fields accordingly. I admit that I'm struggling to think of a situation where tokensReceived() would care if the tokens received are the result of a send or a mint. And technically speaking it is trivial for any minter to mint() to themselves and then send() to the recipient so if there ever was a situation where there were different codepaths for receiving minted and sent tokens the sender could avoid the minted() path with minimal effort.

I can see every ITokenRecipient having to write something of the form:

function tokensReceived(address from, address to, uint256 value, bytes userData, address operator, bytes operatorData) public {
    if (from == 0) {
        // Minted
        from = operator;
    }
    // Carry on as normal
}

or worse, not write it and assume that from is always the sender's address. Imagine a ledger:

function tokensReceived(address from, address to, uint256 value, bytes userData, address operator, bytes operatorData) public {
    received[from] += value;
}

this doesn't seem unreasonable but could lead back to the situation where received tokens are lost, as they are not credited to the correct account.

On the grounds of safety and the principle of least surprise I would approach the arguments to tokensReceived() for minted tokens as follows:

  • from: msg.sender
  • to: the address to which the minted tokens were sent
  • amount: the amount of the minted tokens
  • userData: data supplied by the user
  • operator: always 0
  • operatorData: always blank

unless there is any good reason why minted tokens sent to an address should be treated differently from pre-existing tokens sent to that address, but as I say I can't come up with one.
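In other words, under this convention the token contract's mint would call the hook with the same shape as a send. A sketch (interfaceAddr is the EIP-672 lookup; doMint, onlyMinter and the Minted signature are hypothetical internals, not the reference implementation):

```solidity
// Sketch of mint() under the proposed convention: from = msg.sender,
// operator = 0, operatorData = empty, so recipients need no mint branch.
function mint(address _to, uint256 _amount, bytes _userData) public onlyMinter {
    doMint(_to, _amount); // hypothetical supply/balance update

    address recipient = interfaceAddr(_to, "ITokenRecipient");
    if (recipient != 0) {
        // from is the minter, so ledger-style recipients credit the
        // correct account instead of 0x
        ITokenRecipient(recipient).tokensReceived(msg.sender, _to, _amount, _userData, 0, "");
    }
    Minted(msg.sender, _to, _amount, _userData);
}
```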

indexed arguments

It can be handy for addresses to be indexed as it allows for easy searching of actions by a given address.
To my knowledge Solidity does not hash address topics (and pulling a random ERC20 transfer from Etherscan at https://etherscan.io/tx/0x55713b1511dcce9edd0d070137530daef8755512cd004f2e2ca771d73f72a6a6#eventlog shows the from and to addresses in the topics unhashed), so you can retrieve the address regardless of whether it is indexed. Given that you have the topic slot available I'd prefer to see it indexed and the event parameters re-ordered, but I understand the option to unindex it.

I see what you mean. My worry is that with so much ERC20 infrastructure in place it will take aeons for ERC777 send to become the default, as it basically requires replacing every contract that receives tokens, and then every contract that sends tokens. (I don't see any tokens not implementing ERC20 for now.)
I might however be mistaken in the number of new contracts that implement tokensReceived, I admit I have a somewhat pessimistic view there, but I wanted to register my concerns. If the consensus is that that's not as big an issue as I think, then I'm willing to defer to that.