tc39/proposal-decorators

Is metadata too confusing?

Closed this issue · 31 comments

This is something that's been bothering me for a while, but I've had a hard time articulating it. Instead I'm just going to throw out some things I find confusing and see if anything resonates (all code is pseudo-code).

The inconsistency of public metadata represented as an object and private metadata as an array.

instance[Symbol.metadata][DECO].public // { ... }
instance[Symbol.metadata][DECO].private // [ ... ]

Metadata in superclasses is inherited in public objects but appended to private arrays

class A {
  @deco #fieldA
  @deco fieldA
}
class B extends A {
  @deco #fieldB
  @deco fieldB
}
...
b[Symbol.metadata][DECO].public // { fieldB, __proto__: { fieldA } }
b[Symbol.metadata][DECO].private // [ #fieldB, #fieldA ]

Whether metadata is inherited can be determined from within the public object, but not from the private array alone.

...
const pubB = B.prototype[Symbol.metadata][DECO].public
Object.keys(pubB) // fieldB
Object.keys(Object.getPrototypeOf(pubB)) // fieldA

const privB = B.prototype[Symbol.metadata][DECO].private
Object.keys(privB) // [ #fieldB, #fieldA ]
Object.keys(Object.getPrototypeOf(privB)) // (Array.prototype, not A metadata)

Though you could potentially go back to the provider of the [Symbol.metadata].

...
B.prototype[Symbol.metadata][DECO].private // [ #fieldB, #fieldA ]
Object.getPrototypeOf(B.prototype)[Symbol.metadata][DECO].private // [ #fieldA ]
// ^ therefore it can be determined [ #fieldB, #fieldA ] - [ #fieldA ] = [ #fieldB ] is specific to B.prototype
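That subtraction could be sketched as follows. This is a plain-JS illustration using stand-in arrays rather than real decorator output, and `ownPrivateMetadata` is a hypothetical helper, not part of the proposal:

```javascript
// Hypothetical stand-ins for the private metadata arrays described above.
const parentPrivate = [{ name: "#fieldA" }];
const childPrivate = [...parentPrivate, { name: "#fieldB" }];

// Entries present in the child's array but not the parent's array
// are specific to the child class.
function ownPrivateMetadata(childList, parentList) {
  const inherited = new Set(parentList);
  return childList.filter((entry) => !inherited.has(entry));
}

const own = ownPrivateMetadata(childPrivate, parentPrivate);
console.log(own.map((entry) => entry.name)); // [ "#fieldB" ]
```

This relies on the child array reusing the parent's entries by identity, which is how the appending behavior above is described.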

Private member names (# + identifier) cannot be used for lookup; ironically, they'd still have fewer collisions in an object structure than public keys do.

class A {
  @deco #prop = 1
  // @deco #prop() {} // SyntaxError!
  @deco prop = 1
  @deco prop () {} // Allowed
}
...
a[Symbol.metadata][DECO].public // { prop() } (only 1)
a[Symbol.metadata][DECO].private // [ #prop ]

Notably, the metadata accessible for a does not match the prop actually accessible on the instance, since the field overwrites the prototype-defined method, while decorators are applied top to bottom.

...
a.prop // 1 (field)
a[Symbol.metadata][DECO].public.prop // (method)

Members can exist on the instance itself or on the prototype. For metadata, there's only the prototype (ignoring static for now). Fields are installed on the instance and do not exist on the prototype, yet their metadata goes to the prototype.

class A {
  @deco fieldA = 1
}
...
A.prototype[Symbol.metadata][DECO].public // { fieldA }
A.prototype.fieldA // undefined
...
a.fieldA // 1
a.hasOwnProperty(Symbol.metadata) // false

Storage looks something like the following where only public methods align with metadata, both existing on the prototype.

class A {
  @deco #fieldA // member: on instance; meta: on A.prototype and B.prototype
  @deco fieldA // member: on instance; meta: on A.prototype
  @deco #methodA () {} // member: on instance; meta: on A.prototype and B.prototype
  @deco methodA () {} // member: on A.prototype; meta: on A.prototype
}
class B extends A {}

Iterating over public metadata (related: #422) can be problematic. for...in will get you all the visible members in public metadata, but it is not a preferred approach. Without it, to get all inherited members you'll need to dig through each level of inheritance, and then it can be difficult to keep track of overrides.

class A {
  @deco methodA () {}
  @deco method () {}
}
class B extends A {
  @deco method () {}
}
...
for (let key in b[Symbol.metadata][DECO].public) {
    key // method, methodA (overridden inherited method omitted)
}
// vs
for (let key of Object.keys(b[Symbol.metadata][DECO].public)) {
    key // method (no inherited methods)
}
// vs
let target = b[Symbol.metadata][DECO].public
while (target) {
    for (let key of Object.keys(target)) {
        key // [1] method; [2] methodA, method (includes overridden method)
    }
    target = Object.getPrototypeOf(target)
}
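One way to get the best of both (all inherited keys, with overridden entries visited only once) is to track already-seen keys while walking the chain, essentially reimplementing what for...in does for free. A sketch, with plain objects standing in for the metadata shapes above:

```javascript
// Walk a metadata object's prototype chain, visiting each key only at
// the level nearest the start (i.e. honoring overrides), without for...in.
function* visibleMetadataKeys(metadataObject) {
  const seen = new Set();
  for (let target = metadataObject; target; target = Object.getPrototypeOf(target)) {
    for (const key of Object.keys(target)) {
      if (!seen.has(key)) {
        seen.add(key);
        yield key;
      }
    }
  }
}

// Stand-ins for the inherited public metadata described above.
const parentMeta = { methodA: {}, method: {} };
const childMeta = Object.assign(Object.create(parentMeta), { method: {} });
console.log([...visibleMetadataKeys(childMeta)]); // [ "method", "methodA" ]
```

Note the bookkeeping a decorator author has to carry (the seen set) just to reproduce for...in's shadowing semantics by hand.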

Fields behave the same, despite fields overwriting one another on the instance rather than overriding through the prototype chain.

class A {
  @deco fieldA
  @deco field
}
class B extends A {
  @deco field
}
...
let target = b[Symbol.metadata][DECO].public
while (target) {
    for (let key of Object.keys(target)) {
        key // [1] field; [2] fieldA, field (includes overwritten/replaced field)
    }
    target = Object.getPrototypeOf(target)
}

Though I dislike the object/array disparity of public/private, I understood the motivation behind it. Originally I thought privates could just as well have been stored in objects, especially given inherited privates of the same name would be accessible through the inheritance of the metadata object (so long as you're not using for...in). No private metadata would be lost due to collisions.

However, some of the other complications got me thinking maybe instead both public and private metadata should be stored in arrays. This would provide consistency while also addressing collision/replacement issues in public. And just like with private metadata currently, authors can include whatever they need within the metadata to access/differentiate/whatever among the other metadata in that set.

class A {
  @deco @deco prop
  @deco prop () {}
}
...
// maybe?
a[Symbol.metadata][DECO].public // [ prop, prop, prop() ]

Along these lines, I don't think it would be necessary (or desirable) for inherited metadata to be added to the current prototype's metadata array. Not doing so would allow a clearer separation of metadata at the class level while still granting access to inherited metadata, if necessary, by going through the prototype chain.

class A {
  @deco methodA () {}
  @deco method () {}
}
class B extends A {
  @deco method () {}
}
...
// maybe?
B.prototype[Symbol.metadata][DECO].public // [ method ] (nothing from A)
Object.getPrototypeOf(B.prototype)[Symbol.metadata][DECO].public // [ method, methodA ]

Inheritance would not be available through the public (or private) object itself... though I guess it could. I'm just not sure arrays inheriting from other arrays is very conventional.

Now, if public and private are arrays, do they still need to be divided into their respective public and private namespaces (related: #421)?

class A {
  @deco methodA () {}
  @deco #methodA () {}
}
...
// maybe?
a[Symbol.metadata][DECO] // [ methodA, #methodA ]

The metadata API, as it currently is defined, was built with three main considerations:

  1. It will primarily be used by more experienced developers, who will be more inclined to learn a more complex API that offers significant power. This has been a guiding design principle for the entire decorators proposal: decorator authors are generally experts, decorator users may not be.

  2. Decorator authors have a variety of very common use cases for metadata. For instance, it is very common to want to use metadata to apply a transformation to every decorated element for a given decorator. Take dependency injection for example:

    class Foo {
      @inject('service') bar;
    
      @inject('privateService') #baz;
    }
    
    class Bar extends Foo {
      @inject('service2') bar;
    
      @inject('privateService2') #baz;
    }

    In this example the desired outcome would be that the instance has a public instance property named bar which contains service2, and two private fields named #baz in separate scopes which contain privateService and privateService2. The current metadata API makes this trivial.
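To make that concrete, here is a hedged sketch of how an @inject decorator might record its metadata under the proposed setMetadata API. The mock context below is purely illustrative (a stand-in for whatever the engine would pass), not the proposal's actual machinery:

```javascript
// Sketch of an @inject decorator under the proposed API. The decorator
// itself only records metadata; an assumed DI container applies it later.
const INJECTIONS = Symbol("injections");

function inject(token) {
  return function (value, context) {
    // One storage slot per decorated element, public or private: repeated
    // setMetadata calls with the same key overwrite, they do not append.
    context.setMetadata(INJECTIONS, { token });
  };
}

// Minimal mock of the decorator context object, for illustration only.
function mockContext(name) {
  const store = new Map();
  return {
    name,
    getMetadata: (key) => store.get(key),
    setMetadata: (key, value) => store.set(key, value),
    store,
  };
}

const ctx = mockContext("bar");
inject("service2")(undefined, ctx);
console.log(ctx.store.get(INJECTIONS)); // { token: "service2" }
```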

  3. Providing an opinionated API in this case will be better than providing an API which is less opinionated, because otherwise every single decorator will have to re-implement these same utility functions in order to accomplish basic goals like the above.

It is possible that a simpler, less opinionated API would make more sense, and that we could rely on community norms and libraries to take care of these common functions, but the goal of the current design was to build in the most common cases and provide ways to work around them for less common cases. I admit I have considered the possibility of merging public and private metadata into a single array of descriptors many times, but my current feeling is that this would result in more boilerplate code having to be written per-decorator, and would lead to worse performance and a worse experience at an ecosystem level. If this were an API that was going to be commonly used by every JS developer, I would probably agree that consistency would be a better design goal here, but given that it is not, I prefer the tradeoffs we've made in the current design.

Should also note that for…in would be preferable for this API, specifically because it takes shadowing/inheritance into account. I believe there is a case to make the prototype of the public object null to make this use case more ergonomic, as it would be in keeping with this design philosophy.

It seems very unwise to create any scenario where for-in’s decades of being a bad practice gets disrupted by it being an expected use case.

I have been working with JS for a decade and this is the first time I’ve seen it asserted that for..in is bad practice.

@pzuraq jslint has been requiring for-in use an if guard to avoid the common bug of including inherited properties since the early 2000s, and jshint/eslint continue to have a rule about it.

@ljharb my understanding was that rule was from before APIs such as Object.values and Object.keys were available, and for..in was the only way to accomplish the common goals of iterating over own props. With the introduction of those APIs, they became the standard way to iterate over own props, and for..in became the way to iterate over all props taking inheritance into account. This was, to me, why it seemed like this rule is not enabled by default in eslint:recommended.

The current design uses this feature precisely because this is an extremely common use case for public, inherited decorators. If a child class shadows a property on a parent class, then most of the time the metadata for the child class should take precedence. In the rare occurrence where this is not the case, users can still look up parent metadata by manually traversing the prototype chain of public metadata, ensuring there is an escape hatch.
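The escape hatch described here might look like the following sketch: collecting every value for a given key up the metadata prototype chain, shadowed parent entries included. The metadata shapes are stand-ins for illustration:

```javascript
// Sketch: gather a key's value at every level of a metadata object's
// prototype chain, nearest first, so shadowed parent entries stay visible.
function allMetadataForKey(metadataObject, key) {
  const values = [];
  for (let target = metadataObject; target; target = Object.getPrototypeOf(target)) {
    if (Object.prototype.hasOwnProperty.call(target, key)) {
      values.push(target[key]);
    }
  }
  return values;
}

// Stand-in for inherited public metadata where a child shadows "bar".
const parentMeta = { bar: "service" };
const childMeta = Object.assign(Object.create(parentMeta), { bar: "service2" });
console.log(allMetadataForKey(childMeta, "bar")); // [ "service2", "service" ]
```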

I still think that the usage of this language feature follows the design principles I outlined above. It does encourage usage of for..in in a way that is less common, and may initially be confusing to some developers. This API, however, is most commonly going to be used by developers who are familiar with these nuances, and can spend the time to learn how to leverage them to accomplish their goals more easily, with a net benefit to the community since it'll result in less object creation, manual rebuilding of this inheritance tree, and memory usage, etc.

@pzuraq yes, that's correct, but the use case of "iterating over all props" is a rare one. I agree that it applies here, but it brings a lot of issues, including "what happens if two inheritance levels have the same metadata key"?

@ljharb I'm not sure why that would be an issue introduced by this design in particular. If we were to flatten metadata into an array that was not inherited, for example, then decorators would still need to walk the inheritance tree of metadata and manually figure out which values to apply. If two different libraries are using the same metadata key, then the same problem will exist, and the metadata library will have to determine whether or not it should ignore the parent metadata in favor of the child, or apply both.

This is more of a conceptual problem with metadata inheritance as a whole, and I think the exact mechanics will have to be determined per-decorator. If two decorators choose to share a metadata key, then they'll have to figure out how to handle collisions like this. This complexity is also why the proposal requires Symbols for metadata keys, to prevent unintentional collisions like this.

@senocular

Repeated-use of decorators on private members inherently append, public members replace

Just caught this as well, this is incorrect. Each decorator gets a single storage slot for the element it is currently decorating, public or private. Setting the metadata twice on that storage slot, with the same metadata key, will overwrite the previous value.

The same is true for public elements with the same name. A decorator which decorates a method and a field with the same name will be able to read the previous value with getMetadata, and update it accordingly. It will not be required to clobber.

I agree - but for-in doesn't provide any way to walk the inheritance chain. Since that's what decorator authors will likely have/want to do, they'll instead need to use Object.entries, and then Object.getPrototypeOf, and then recurse, otherwise they'll risk missing shadowed metadata.

I totally agree that Symbols avoids the risk of unintentional collision, but if that's not a concern, then why would an API that provides a flat array not work fine?

@ljharb I think I'm saying the opposite? Look at my original example above.

class Foo {
  @inject('service') bar;

  @inject('privateService') #baz;
}

class Bar extends Foo {
  @inject('service2') bar;

  @inject('privateService2') #baz;
}

In this case, which is a very common case, the decorator author does want to take inheritance into account. The child has overridden the bar property, and wishes to inject a different value onto it. In this case, with the current API, you would use for..in. With the array-based API, you would have to walk the prototype chain as you describe, and keep a list of the already-seen public properties to skip in parents.
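The array-based bookkeeping described here might look like the following sketch. The array-per-class layout and the name field on each entry are assumptions for illustration, not part of any proposed API:

```javascript
// Sketch: collect the "winning" metadata entry per public name across an
// assumed array-per-class layout, ordered nearest class first.
function effectiveMetadata(metadataArraysNearestFirst) {
  const seen = new Set();
  const result = [];
  for (const levelEntries of metadataArraysNearestFirst) {
    for (const entry of levelEntries) {
      // Skip entries whose name was already claimed by a nearer class.
      if (!seen.has(entry.name)) {
        seen.add(entry.name);
        result.push(entry);
      }
    }
  }
  return result;
}

const fooLevel = [{ name: "bar", token: "service" }];
const barLevel = [{ name: "bar", token: "service2" }];
console.log(effectiveMetadata([barLevel, fooLevel]));
// [ { name: "bar", token: "service2" } ]
```

With the inherited-object design, the for...in loop replaces all of this.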

Each decorator gets a single storage slot for the element it is currently decorating, public or private. Setting the metadata twice on that storage slot, with the same metadata key, will overwrite the previous value.

The same is true for public elements with the same name. A decorator which decorates a method and a field with the same name will be able to read the previous value with getMetadata, and update it accordingly. It will not be required to clobber.

Thanks, I think this one was based off of the behavior of the experimental playground. I'll remove it from my list and log a bug with that implementation.

@senocular

Repeated-use of decorators on private members inherently append, public members replace

Just caught this as well, this is incorrect. Each decorator gets a single storage slot for the element it is currently decorating, public or private. Setting the metadata twice on that storage slot, with the same metadata key, will overwrite the previous value.

The same is true for public elements with the same name. A decorator which decorates a method and a field with the same name will be able to read the previous value with getMetadata, and update it accordingly. It will not be required to clobber.

For the transpiler this behaviour is complicated for private metadata, because we don't have the private name available to keep only one slot per member. Currently, the Experimental Transpiler adds new metadata for private members to the array, and several calls to setMetadata add new elements.

Similarly, getMetadata() doesn't work properly with private metadata in the Experimental Transpiler.

We are working on a solution...

Apologies for the mistakes; we had not correctly understood the behavior of private metadata, and we increased the confusion (the entropy level).

We have just released the https://javascriptdecorators.org/ version 0.5.0 that fixes this bug.

The same is true for public elements with the same name. A decorator which decorates a method and a field with the same name will be able to read the previous value with getMetadata, and update it accordingly. It will not be required to clobber.

I am still wary of this design; it feels like it adds footguns for decorator authors, such that they effectively need to reimplement class ordering semantics in order to capture metadata correctly. I wrote quite a bit in a separate issue; however, even just this part of the metadata design by itself feels like it encourages writing broken code.

As an example something like field/method ordering means that these two otherwise identical classes:

class Foo {
    @setSomeMetadata(10)
    name;
    
    @setSomeMetadata(20)
    name() {
    
    }
}

class Foo {
    @setSomeMetadata(20)
    name() {
    
    }
    
    @setSomeMetadata(10)
    name;
}

would have different metadata under the obvious implementation (i.e. just calling context.setMetadata(SOME_KEY, value)). In order to fix this, EVERY decorator which uses metadata needs to be class-ordering-aware, i.e. it needs to store the kind of metadata and update it according to the expected class shape.

It would be simpler for implementing decorators in a more kind-agnostic way if metadata was associated more directly with particular elements and lifted up onto the class.

As an example of where this would apply, consider a simple decorator that provides some assertions for args/return:

const TYPE_ASSERTIONS = Symbol("TYPE_ASSERTIONS");

// This is an oversimplified decorator that provides assertions for typeof types
// of provided parameters, in practice it would support at least optional parameters
// and the like, but those features are not the point of this example
function assert({ args: argsTypes=[], returns: returnsType }) {
    return function(func, context) {
        context.setMetadata(TYPE_ASSERTIONS, { args: argsTypes, returns: returnsType });
        return function(...args) {
            // The actual body here is not too important
            if (args.length !== argsTypes.length) {
                throw new TypeError("unexpected number of arguments");
            }
            for (let i = 0; i < args.length; i++) {
                if (typeof args[i] !== argsTypes[i]) {
                    throw new TypeError(`argument ${ i } has wrong type`);
                }
            }
            const returnValue = func(...args);
            if (returnsType !== undefined && typeof returnValue !== returnsType) {
                throw new TypeError(`return type was wrong type`);
            }
            return returnValue;
        }
    }
}

this implementation feels reasonable to me; however, the metadata is broken for getters and setters (or with field/method clobbering).

For example if we do the following:

class RendereredVertex {
    #x;
    #y;
    
    @assert({ returns: 'number' })
    get x() {
        return this.#x;
    }
    
    @assert({ args: ['number'] })
    set x(newX) {
        this.#x = newX;
        this.#triggerRendering();
    }
    
    // Render the vertex somehow
    #triggerRendering() {
    
    }
}

then only the setter's metadata will be available:

RendereredVertex[Symbol.metadata].public.x; // { args: ['number'] }

This feels fairly broken to me; the getter and setter are separate parts of the class, so it feels like they should be kept separate in the metadata as well.

And it's not like the fix is easy: like I said before, in order to fix it, the program needs to understand the entire class evaluation order to know which members of the class will actually be the winners, i.e. we need some ridiculously unwieldy logic like:

//------ global
const metadataKind = new WeakMap();

//------ in decorator
const metadata = createMetadataSomehow();
const previousMetadata = context.getMetadata(METADATA_KEY);
const previousMetadataKind = previousMetadata !== undefined
    ? metadataKind.get(previousMetadata)
    : "none";

// Fields and classes always win, so set unconditionally
if (context.kind === "field" || context.kind === "class") {
    context.setMetadata(METADATA_KEY, metadata);
    metadataKind.set(metadata, context.kind);
} else if (context.kind === "method") {
    // Methods beat accessors and other methods, but not other kinds
    if (previousMetadataKind === "none"
    || previousMetadataKind === "accessor"
    || previousMetadataKind === "method") {
        context.setMetadata(METADATA_KEY, metadata);
        metadataKind.set(metadata, "method");
    }
} else if (context.kind === "accessor") {
    // Accessors beat accessors and methods, but not other kinds
    if (previousMetadataKind === "none"
    || previousMetadataKind === "accessor"
    || previousMetadataKind === "method") {
        const accessorMetadata = { get: metadata, set: metadata };
        context.setMetadata(METADATA_KEY, accessorMetadata);
        metadataKind.set(accessorMetadata, "accessor");
    }
} else if (context.kind === "getter") {
    // Getters beat methods, merge with other accessors, but beat no others
    if (previousMetadataKind === "none" || previousMetadataKind === "method") {
        // Override metadata, as the method is replaced
        const accessorMetadata = { get: metadata, set: undefined };
        context.setMetadata(METADATA_KEY, accessorMetadata);
        metadataKind.set(accessorMetadata, "accessor");
    } else if (previousMetadataKind === "accessor") {
        // Merge with existing accessor metadata
        const accessorMetadata = { ...previousMetadata, get: metadata };
        context.setMetadata(METADATA_KEY, accessorMetadata);
        metadataKind.set(accessorMetadata, "accessor");
    }
} else if (context.kind === "setter") {
    // Setters beat methods, merge with other accessors, but beat no others
    if (previousMetadataKind === "none" || previousMetadataKind === "method") {
        // Override metadata, as the method is replaced
        const accessorMetadata = { set: metadata, get: undefined };
        context.setMetadata(METADATA_KEY, accessorMetadata);
        metadataKind.set(accessorMetadata, "accessor");
    } else if (previousMetadataKind === "accessor") {
        // Merge with existing accessor metadata
        const accessorMetadata = { ...previousMetadata, set: metadata };
        context.setMetadata(METADATA_KEY, accessorMetadata);
        metadataKind.set(accessorMetadata, "accessor");
    }
}

Given that the engine could just manage this complexity for us, ensuring that it provides the metadata for members ACTUALLY PRESENT ON THE CLASS, I don't see why this would be delegated to userland, given just how tediously complicated it is to deal with and how easy it is to get wrong. Additionally, if new kinds are added, this logic needs to be extended for every such kind, whereas if this logic were built in, the engine would automatically be able to set metadata correctly for new kinds.

@Jamesernator the problems you are describing are already present for decorators that are in use today, and have not been a major issue in the community. I suspect that this is because while it is common for both a getter and setter to exist on a class, and possibly for metadata to be shared between them, it is much less common in reality for any other type of collision to exist. In which case, the solution you described above becomes something more like:

const metadata = createMetadataSomehow();
const previousMetadata = context.getMetadata(METADATA_KEY);

context.setMetadata(METADATA_KEY, { ...previousMetadata, set: metadata });

This would also likely work regardless, because the assertions could be additive instead of replacing each other, and if a user adds a field that clobbers a method that has different assertions that generally is a good sign that they may want to rewrite/rethink some of that code (thinking for instance of type assertions).

Also, regarding the design you proposed in the other thread where decorators append to an array instead of placed as a single value, there are two issues with it:

  1. A design where the metadata value is either an array if multiple metadata values exist or a single metadata value (e.g. Metadata | Metadata[]) was shot down because it would be a confusing API, and it would make handling array metadata values very difficult.
  2. A design where all metadata was always wrapped in an array was shot down because it would be too expensive to do by default when most of the time only one metadata value will exist.

So doing this would mean having to overhaul the metadata API as a whole. The simpler API described in this thread could potentially work, since all metadata would be in an array. But then, we would be forcing developers to do a much more common task for every decorator with metadata - reimplementing an inheritance tree in order to describe the inheritance of metadata. We discussed this again recently, and came to the conclusion that the current API is still the best API for solving the most common problems decorator authors face.

At this point, we are not changing the proposal any further until the latest Babel transform has shipped and the proposal has been reviewed for stage 3 by TC39. Based on the outcome there, there may be further changes, but until then we won't be making any further updates.

@Jamesernator the problems you are describing are already present for decorators that are in use today, and have not been a major issue in the community.

Existing decorators have a completely different api than what is in this new decorator proposal. Old decorators allowed for more powerful things than the current API does (particularly mutating class shape within any property/method decorators even).

This would also likely work regardless, because the assertions could be additive instead of replacing each other, and if a user adds a field that clobbers a method that has different assertions that generally is a good sign that they may want to rewrite/rethink some of that code (thinking for instance of type assertions).

Sure, but clobbering methods is fully allowed JS code and has always functioned. Like I mentioned above this just leads to a situation where metadata and the class structure are out of sync unless every decorator takes care to avoid this.

Also, regarding the design you proposed in the other thread where decorators append to an array instead of placed as a single value, there are two issues with it:

So doing this would mean having to overhaul the metadata API as a whole. The simpler API described in this thread could potentially work, since all metadata would be in an array.

For the record I don't mind alternative solutions either, it's mainly that the current design actively leads to inconsistencies with how metadata is represented on the class for some elements. And I particularly feel it will be rather likely that decorators will be written for methods using .setMetadata that would've otherwise worked completely fine, i.e. like the example I provided above.

We discussed this again recently, and came to the conclusion that the current API is still the best API for solving the most common problems decorator authors face.

An alternative solution that is entirely compatible with normal uses of methods and fields would just be to have the metadata set on the class simply be the metadata associated with whatever won, i.e.:

function setMeta(value) {
    return function(_, context) {
        context.setMetadata(KEY, value);
    }
}

class Foo {
    @setMeta(10) // Wins even though method comes later
    name = 10;
    
    @setMeta(20) // This won't win, as ultimately the field will always win
    name() {
    
    }
}

Foo[Symbol.metadata].public.name; // 10

People could still use .getMetadata to forward values as needed, but generally they would be fine just to use .setMetadata. This doesn't change how chains of decorators on the same element work today, and generally means decorators should be easier to implement for getters/setters rather than the current design which makes decorators need to care about what they're decorating when a simpler design would allow them to write decorators that target fields more heavily.

Existing decorators have a completely different api than what is in this new decorator proposal. Old decorators allowed for more powerful things than the current API does (particularly mutating class shape within any property/method decorators even).

The limitations of metadata were exactly the same, however. It was and is not possible to associate metadata directly with the methods/functions themselves, like you are describing. Users must associate metadata via a key/value map associated with the constructor. Even with these limitations, none of the library authors we consulted and discussed with had issues like the theoretical ones you are describing. Given that reflect-metadata is the most popular decorator library in usage, and there do not appear to be issues with it, I don't see strong evidence that this is even a rare edge case.

An alternative solution that is entirely compatible with normal uses of methods and fields would just be have the metadata set on the class simply be the metadata associated with what won, i.e.:

This would require decorator application order to no longer be in sync with class element definition order, as decorators for methods/accessors would need to be applied first so that their metadata could be accessible for class field decorators. Overall I don't think this would be too disruptive, as long as decorator expression evaluation itself was done in definition order, but I'm also hesitant to make that change since it's observable and potentially counterintuitive. I will discuss it at the next group meeting to see what everyone thinks though, but as I said before it's unlikely that any changes will occur unless there is feedback from the next TC39 meeting.

TC39 spent an entire plenary day in 2016 discussing class element evaluation order as it relates to decorators and coming to consensus; i strongly suggest not attempting to relitigate that discussion.

Even with these limitations, none of the library authors we consulted and discussed with had issues like the theoretical ones you are describing. Given that reflect-metadata is the most popular decorator library in usage, and there do not appear to be issues with it, I don't see strong evidence that this is even a rare edge case.

If "library authors" here only refers to decorator authors, then probably they are fine, but something to be noted about existing decorator authors is that they probably won't be an entirely representative sample for what will follow, in particular because at current the barrier for using and especially writing decorators is high. The current decorators APIs that things like TypeScript supports were comparatively more advanced to work with, dealing with descriptors directly and being able to perform much more powerful class operations than present (e.g. things like mobx or redux-store and so on).

The new decorator proposal considerably lowers the barrier to entry, particularly for "function wrapping" type decorators that simply wrap a function with a new one. However, as I've mentioned, the current metadata API introduces footguns for "function wrapping" decorators when applied to getters/setters. And with a low barrier to entry, I would expect many such function wrapping decorators to begin appearing in places like npm. I certainly don't think it would be unreasonable for people to expect the pattern I used above to work, and for method decorators it does, but it has clobbering behaviour on getters/setters, which means that even if someone wrote a perfectly fine method decorator, the only thing preventing it from being used on getters/setters is the language getting in the way.

I also personally find the whole "current decorators are fine with it" to not be a very compelling argument to not making other kinds of decorators easier to implement, especially when there's a few strategies for implementation some of which wouldn't really affect method/field decorators negatively in any way.


The other issue with class-fields winning over methods (or accessors) is I think a considerably lesser problem because while I do agree it's probably unlikely too many people will want to have methods and fields with the same name, I think it is still important for features in the language to produce things that are logically consistent.

This would require decorator application order to no longer be in sync with class element definition order, as decorators for methods/accessors would need to be applied first so that their metadata could be accessible for class field decorators. Overall I don't think this would be too disruptive, as long as decorator expression evaluation itself was done in definition order, but I'm also hesitant to make that change since it's observable and potentially counterintuitive.

I think this would make sense: currently, except for the keys themselves, fields always "evaluate last" (well, at instantiation time), so swapping fields and methods is (other than computed keys) invariant. So for this particular issue I think the change is reasonable.

TC39 spent an entire plenary day in 2016 discussing class element evaluation order as it relates to decorators and coming to consensus;

Just to be clear this doesn't affect the actual evaluation order of the decorator expressions, for this specific issue it would just apply the field decorators themselves last.

i strongly suggest not attempting to relitigate that discussion.

The current decorators proposal differs significantly from the 2016 proposal, and has gone through major total redesigns in the interim (i.e. the static decorators proposal). At that time, most people were using transpiler-style class fields; the current model of class fields was only accepted into the specification last year. The current class model plus what this proposal introduces is quite different from what was discussed six years ago, so it seems worthwhile to at least raise it.

An x decorator must apply after x, and before x+1, otherwise x+1 would see an undecorated x value.

> An x decorator must apply after x, and before x+1, otherwise x+1 would see an undecorated x value.

Fields aren't installed until instantiation time though, so no: a method decorator (x+1) coming after a field cannot see an undecorated regular field (x), because decorators aren't decorating the instance; they know nothing about fields installed on instances during decoration (other than what is provided directly to them by the context).
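Concretely, under the proposal a field decorator never sees an installed field; it can only return an initializer transform that runs per instance. A rough sketch, with manual invocation since syntax support is still limited (`addOne` is an illustrative name):

```javascript
// Sketch: a field decorator receives `undefined` as the value (the field
// doesn't exist yet at decoration time) and may return a function mapping
// the written initial value to the value actually stored.
function addOne(value, context) {
  // value === undefined for context.kind === "field"
  return (initialValue) => initialValue + 1;
}

// What the engine conceptually does, per instance, at instantiation time:
const transform = addOne(undefined, { kind: "field", name: "x" });
const storedValue = transform(41); // as if the class body said `x = 41`
```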

Right - my statement applies assuming x and x+1 are the same kind of thing.

Put another way, it shouldn’t be possible for anything to observe an undecorated thing.

> Right - my statement applies assuming x and x+1 are the same kind of thing.
>
> Put another way, it shouldn’t be possible for anything to observe an undecorated thing.

I don't see how this relates to the previous things I've been discussing, neither issue nor any of the proposed solutions provide any such power. The issues I have been describing above are specifically to do with mixed kinds.

Maybe I’m misunderstanding. I’m saying that a static field initializer, positioned after a decorated prototype or static anything, shouldn’t be able to access the undecorated prototype thing; and an instance field initializer, positioned after any decorated instance thing, shouldn’t be able to access the undecorated instance thing.

> Maybe I’m misunderstanding. I’m saying that a static field initializer, positioned after a decorated prototype or static anything, shouldn’t be able to access the undecorated prototype thing.

The order change @pzuraq was referring to (although perhaps I'm misunderstanding his intent) would just be to order (non-static) field decorators so that they always evaluate after method/accessor decorators, i.e. these two classes print exactly the same thing:

function logDecoration(message) {
    return function(value) {
        console.log(message);
        return value;
    }
}

class Foo {
    @logDecoration("field")
    foo = "bar";
    
    @logDecoration("method")
    foo() {}
}

class Foo2 {
    @logDecoration("method")
    foo() {}

    @logDecoration("field")
    foo = "bar";
}

// Both classes print method then field

In particular, step 2 ("Decorators are called (as functions) during class definition, after the methods have been evaluated but before the constructor and prototype have been put together") would just have field decorators evaluated after all method and accessor decorators, rather than in class order. That means field decorators would always have the last say on .setMetadata (which would correspond to the fact that fields always get installed on the instance, so their metadata should be most important).

This feels more consistent to me, given that if I have an instance, the actual thing I see on the instance is always the field, never the method; hence the metadata should really be for the instance. While I can see some argument that source order regardless of kind might be more consistent, my disagreement is that (non-static) fields already have their evaluation spread over two locations anyway (one place for their key, one for their installation).
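The point about instances only ever seeing the field can be checked directly: a class may declare both a method and a field with the same name, and the instance's own field property shadows the prototype method:

```javascript
// A field and a method may share a name; the method lives on the
// prototype while the field becomes an own property of each instance,
// shadowing the method on lookup.
class Example {
  foo() { return "method"; }
  foo = "field";
}

const instance = new Example();
const ownValue = instance.foo;                          // the field value
const protoValue = Object.getPrototypeOf(instance).foo; // the method
```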

Another possible solution, which I think is even less desirable but still possible, would be to have separate metadata for instance data vs the prototype so that both metadata are accessible (e.g. C[Symbol.metadata].public.{instance,prototype} or similar). However this feels fairly unnecessary, given that people generally care about reading the public metadata for how it relates to instances rather than the prototype (specifically after the class is defined, via C[Symbol.metadata], not using .getMetadata during decoration).
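The layout being described (and argued against) might look something like this purely hypothetical sketch; none of these property names are part of the actual proposal:

```javascript
// Hypothetical metadata layout with separate instance/prototype buckets,
// as sketched (and deemed unnecessary) above. Illustration only -- not
// part of the decorators proposal.
const DECO = Symbol("deco");
const metadataSketch = {
  [DECO]: {
    public: {
      instance: { field: "field metadata" },     // from field decorators
      prototype: { method: "method metadata" },  // from method/accessor decorators
    },
  },
};
```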

> If "library authors" here only refers to decorator authors, then probably they are fine, but something to be noted about existing decorator authors is that they probably won't be an entirely representative sample for what will follow, in particular because at current the barrier for using and especially writing decorators is high.

@Jamesernator reflect-metadata currently has 4.4 million weekly downloads on NPM. Decorators are also part of the default stack for many frameworks, including Angular.js, Stencil.js, and Ember.js to name a few. They are probably the most used pre-stage-3 ES proposal currently, and possibly ever. This means we have an unusually high amount of data about what the experience is for decorator users and authors, what the pain points are, and what improvements can be made.

This proposal was explicitly crafted in large part by taking in that information and addressing the real world concerns and issues with previous usage of decorators. I think it is notable that the edge cases you are concerned about have not been brought up once in discussion with any of these users, frameworks, or authors. It of course should not be the only deciding factor, as you pointed out - we have fixed many things in this proposal that are worse in current decorators because even though "current decorators are fine with it", they were pain points and they could be made better.

This is a case where it's not clear that this is a pain point at all, and the change you are proposing touches on a very important part of decorator initialization that (as @ljharb has pointed out) has been litigated many times before, which is why I think that the evidence of the pain point (or lack thereof) is relevant.

> Maybe I’m misunderstanding. I’m saying that a static field initializer, positioned after a decorated prototype or static anything, shouldn’t be able to access the undecorated prototype thing; and an instance field initializer, positioned after any decorated instance thing, shouldn’t be able to access the undecorated instance thing.

@ljharb I wholeheartedly agree here, as we've discussed before. The reason I mentioned above that I thought this might work is because I don't believe that decorators would be able to see this ordering difference except through unwieldy side-effects, because in this iteration of the proposal decorators do not have access to the class. So consider the following:

function decMethod(method) {
  console.log('method');

  return function() {
    // apply takes an array-like argument list, so no spread is needed
    return method.apply(this, arguments);
  }
}

function decField() {
  console.log('field');

  return (x) => x + 1;
}

class Foo {
  @decField field = 123;

  @decMethod method() {}
}

If @decField were applied after @decMethod, there would still be no way for @decMethod to see the undecorated version of @decField. @decMethod does not have access to the class itself, so the constraint that no decorator can see the partially constructed class is upheld here.

Now if the class were somehow side-channeled to the decorator, then there would be a difference: the decorated method would be on the prototype before @decField runs, whereas previously it would be the undecorated version of the method. So in this case, it would actually be more consistent overall with the goal of not being able to observe the undecorated state. Generally, though, it would not be, because method decorators would still run in order and would still be able to reference undecorated methods prior to decoration (again, IF the class is side-channeled to the decorator, which is also an unheard-of use case).

The only other ordering difference would be that side effects like the console statements would be different. Currently there aren't any use cases for using side-effects directly from decorator application like this (usually they occur during initialization, per instance), but I wouldn't be surprised if they came up, possibly for registration types of cases.

I’m personally less concerned about side effects than about access to the undecorated thing; either way, my intuition is that it’s best to avoid even bringing up class evaluation order, since there’s likely PTSD from that particular meeting.

@Jamesernator we discussed this at the meeting today and concluded that while the edge cases you're describing are unlikely to be real world issues, there's also nothing that prevents us from applying decorators in an order so that it won't occur. Decorator application is a new phase in class definition, so the existing class definition evaluation order doesn't necessarily apply here.

The order we agreed on for application is:

  1. Static method decorators
  2. Proto method decorators
  3. Static field decorators
  4. Instance field decorators
  5. Class decorators

We'll be updating the spec text and documentation shortly.

#451 has been opened up as a potential alternative here.

Closing this issue as metadata has been broken out into a separate proposal here: https://github.com/tc39/proposal-decorator-metadata