Proposal: Variadic Kinds -- Give specific types to variadic functions
sandersn opened this issue · 270 comments
Variadic Kinds
Give Specific Types to Variadic Functions
This proposal lets Typescript give types to higher-order functions that take a variable number of parameters.
Functions like this include concat, apply, curry, compose and almost any decorator that wraps a function.
In Javascript, these higher-order functions are expected to accept variadic functions as arguments.
With the ES2015 and ES2017 standards, this use will become even more common as programmers start using spread arguments and rest parameters for both arrays and objects.
This proposal addresses these use cases with a single, very general typing strategy based on higher-order kinds.
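For context, a minimal sketch of the status quo this replaces: without variadic kinds, a library must write one signature per arity (curry2 and curry3 are illustrative names, not part of this proposal).
declare function curry2<T, U, V>(f: (t: T, u: U) => V, t: T): (u: U) => V;
declare function curry3<T, U, W, V>(f: (t: T, u: U, w: W) => V, t: T): (u: U, w: W) => V;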
This proposal would completely or partially address several issues, including:
- #5331 -- Tuples as types for rest ...arguments
- #4130 -- Compiler incorrectly reports parameter/call target signature mismatch when using the spread operator
- #4988 -- Tuples should be clonable with Array.prototype.slice()
- #1773 -- Variadic generics?
- #3870 -- Rest types in generics for intersection types.
- #212 -- bind, call and apply are untyped (requires #3694's this-function types).
- #1024 -- Typed ...rest parameters with generics
I'll be updating this proposal on my fork of the Typescript-Handbook: sandersn/TypeScript-Handbook@76f5a75
I have an in-progress implementation at sandersn/TypeScript@f3c327a which currently has the simple parts of the proposal implemented.
It supersedes part 2 of my previous proposal, #5296.
Edit: Added a section on assignability. I'm no longer sure that it strictly supersedes #5296.
Preview example with curry
curry for functions with two arguments is simple to write in Javascript and Typescript:
function curry(f, a) {
return b => f(a, b);
}
and in Typescript with type annotations:
function curry<T, U, V>(f: (t: T, u: U) => V, a:T): (b:U) => V {
return b => f(a, b);
}
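A small usage sketch of the two-argument version above (add is an illustrative function, not part of the proposal):
const add = (a: number, b: string) => b + a;
const addOne = curry(add, 1); // (b: string) => string
const s = addOne("x");        // "x1"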
However, a variadic version is easy to write in Javascript but cannot be given a type in TypeScript:
function curry(f, ...a) {
return (...b) => f(...a, ...b);
}
Here's an example of using variadic kinds from this proposal to type curry:
function curry<...T,...U,V>(f: (...ts: [...T, ...U]) => V, ...as:...T): (...bs:...U) => V {
return (...b) => f(...as, ...b);
}
The syntax for variadic tuple types that I use here matches the spread and rest syntax used for values in Javascript.
This is easier to learn but might make it harder to distinguish type annotations from value expressions.
Similarly, the syntax for concatenating looks like tuple construction, even though it's really concatenation of two tuple types.
Now let's look at an example call to curry:
function f(n: number, m: number, s: string, c: string): [number, number, string, string] {
return [n,m,s,c];
}
let [n,m,s,c] = curry(f, 1, 2)('foo', 'x');
let [n,m,s,c] = curry(f, 1, 2, 'foo', 'x')();
In the first call,
V = [number, number, string, string]
...T = [number, number]
...U = [string, string]
In the second call,
V = [number, number, string, string]
...T = [number, number, string, string]
...U = []
Syntax
The syntax of a variadic kind variable is ...T where T is an identifier that is by convention a single upper-case letter, or T followed by a PascalCase identifier.
Variadic kind variables can be used in a number of syntactic contexts:
Variadic kinds can be bound in the usual location for type parameter binding, including functions and classes:
function f<...T,...U>() {}
class C<...T> {
}
And they can be referenced in any type annotation location:
function makeTuple<...T>(ts:...T): ...T {
return ts;
}
function f<...T,...U>(ts:...T): [...T,...U] {
// note that U is constrained to [string,string] in this function
let us: ...U = makeTuple('hello', 'world');
return [...ts, ...us];
}
Variadic kind variables, like type variables, are quite opaque.
Unlike type variables, though, they do have one operation: they can be concatenated with other kinds or with actual tuples.
The syntax used for this is identical to the tuple-spreading syntax, but in type annotation location:
let t1: [...T,...U] = [...ts,...uProducer<...U>()];
let t2: [...T,string,string,...U,number] = [...ts,'foo','bar',...uProducer<...U>(),12];
Tuple types are instances of variadic kinds, so they continue to appear wherever type annotations were previously allowed:
function f<...T>(ts:...T): [...T,string,string] {
// note the type of `us` could have been inferred here
let us: [string,string] = makeTuple('hello', 'world');
return [...ts, ...us];
}
let tuple: [number, string] = [1,'foo'];
f<[number,string]>(tuple);
Semantics
A variadic kind variable represents a tuple type of any length.
Since it represents a set of types, we use the term 'kind' to refer to it, following its use in type theory.
Because the set of types it represents is tuples of any length, we qualify 'kind' with 'variadic'.
Therefore, declaring a variable of variadic tuple kind allows it to take on any single tuple type.
Like type variables, kind variables can only be declared as parameters to functions, classes, etc, which then allows them to be used inside the body:
function f<...T>(...ts: ...T): ...T {
let a: ...T = ts;
return a;
}
Calling a function with arguments typed as a variadic kind will assign a specific tuple type to the kind:
f([1,2,"foo"]);
Assigns the tuple type ...T=[number,number,string]
...T. So in this application of
f,
let a:...Tis instantiated as
let a:[number,number,string]. However, because the type of
ais not known when the function is written, the elements of the tuple cannot be referenced in the body of the function. Only creating a new tuple from
a` is allowed.
For example, new elements can be added to the tuple:
function cons<H,...Tail>(head: H, tail: ...Tail): [H,...Tail] {
return [head, ...tail];
}
let l: [number, string, string, boolean];
l = cons(1, cons("foo", ["baz", false]));
Like type variables, variadic kind variables can usually be inferred.
The calls to cons could have been annotated:
l = cons<number,[string,string,boolean]>(1, cons<string,[string,boolean]>("foo", ["baz", false]));
For example, cons must infer two variables, a type H and a kind ...Tail. In the innermost call, cons("foo", ["baz", false]), H=string and ...Tail=[string,boolean]. In the outermost call, H=number and ...Tail=[string, string, boolean].
The types assigned to ...Tail are obtained by typing list literals as tuples -- variables of a tuple type can also be used:
let tail: [number, boolean] = ["baz", false];
let l = cons(1, cons("foo", tail));
Additionally, variadic kind variables can be inferred when concatenated with types:
function car<H,...Tail>(l: [H, ...Tail]): H {
let [head, ...tail] = l;
return head;
}
car([1, "foo", false]);
Here, the type of l is inferred as [number, string, boolean]. Then H=number and ...Tail=[string, boolean].
Limits on type inference
Concatenated kinds cannot be inferred because the checker cannot guess where the boundary between two kinds should be:
function twoKinds<...T,...U>(total: [...T,string,...U]) {
}
twoKinds("an", "ambiguous", "call", "to", "twoKinds")
The checker cannot decide whether to assign
...T = [string,string,string], ...U = [string]
...T = [string,string], ...U = [string,string]
...T = [string], ...U = [string,string,string]
Some unambiguous calls are a casualty of this restriction:
twoKinds(1, "unambiguous", 12); // but still needs an annotation!
The solution is to add type annotations:
twoKinds<[string,string],[string,string]>("an", "ambiguous", "call", "to", "twoKinds");
twoKinds<[number],[number]>(1, "unambiguous", 12);
Uncheckable dependencies between type arguments and the function body can arise, as in rotate:
function rotate<...T, ...U>(l: [...T, ...U], n: number): [...U, ...T] {
let first: ...T = l.slice(0, n);
let rest: ...U = l.slice(n);
return [...rest, ...first];
}
rotate<[boolean, boolean, string], [string, number]>([true, true, 'none', 'some', 12], 3);
This function can be typed, but there is a dependency between n
and the kind variables: n === ...T.length
must be true for the type to be correct.
I'm not sure whether this is code that should actually be allowed.
Semantics on classes and interfaces
The semantics are the same on classes and interfaces.
TODO: There are probably some class-specific wrinkles in the semantics.
Assignability between tuples and parameter lists
Tuple kinds can be used to give a type to rest arguments of functions inside their scope:
function apply<...T,U>(ap: (...args:...T) => U, args: ...T): U {
return ap(...args);
}
function f(a: number, b: string): string {
return b + a;
}
apply(f, [1, 'foo']);
In this example, the parameter list of f: (a: number, b: string) => string must be assignable to the tuple type instantiated for the kind ...T. The tuple type that is inferred is [number, string], which means that (a: number, b: string) => string must be assignable to (...args: [number, string]) => string.
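As a sanity check, this particular assignability can be written down directly with a tuple-typed rest parameter, without any kind variables (Spread is an illustrative alias):
type Spread = (...args: [number, string]) => string;
const asSpread: Spread = (a: number, b: string) => b + a; // ok: the parameter lists line up positionally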
As a side effect, function calls will be able to take advantage of this assignability by spreading tuples into rest parameters, even if the function doesn't have a tuple kind:
function g(a: number, ...b: [number, string]) {
return a + b[0];
}
g(1, ...[12, 'foo']);
Tuple types generated for optional and rest parameters
Since tuples can't represent optional parameters directly, when a function is assigned to a function parameter that is typed by a tuple kind, the generated tuple type is a union of tuple types.
Look at the type of h after it has been curried:
function curry<...T,...U,V>(cur: (...args:[...T,...U]) => V, ...ts:...T): (...us:...U) => V {
return (...us) => cur(...ts, ...us);
}
function h(a: number, b?:string): number {
}
let curried = curry(h, 12);
curried('foo'); // ok
curried(); // ok
Here the parameter list of h generates the tuple type [number] | [number, string]; the call binds ...T=[number], which leaves ...U=([] | [string]), so curried: (...us: [] | [string]) => number, which can be called as you would expect. Unfortunately, this strategy does not work for rest parameters. These just get turned into arrays:
function i(a: number, b?: string, ...c: boolean[]): number {
}
let curried = curry(i, 12);
curried('foo', [true, false]);
curried([true, false]);
Here, curried: (...us: [string, boolean[]] | [boolean[]]) => number.
I think this could be supported if there were a special case for functions with a tuple rest parameter, where the last element of the tuple is an array.
In that case the function call would allow extra arguments of the correct type to match the array.
However, that seems too complex to be worthwhile.
Extensions to the other parts of typescript
- Typescript does not allow users to write an empty tuple type.
However, this proposal requires variadic kinds to be bindable to an empty tuple.
So Typescript will need to support empty tuples, even if only internally.
Examples
Most of these examples are possible as fixed-argument functions in current Typescript, but with this proposal they can be written as variadic.
Some, like cons and concat, can be written for homogeneous arrays in current Typescript but can now be written for heterogeneous tuples using tuple kinds. This follows typical Javascript practice more closely.
Return a concatenated type
function cons<H,...T>(head: H, tail:...T): [H, ...T] {
return [head, ...tail];
}
function concat<...T,...U>(first: ...T, ...second: ...U): [...T, ...U] {
return [...first, ...second];
}
cons(1, ["foo", false]); // === [1, "foo", false]
concat(['a', true], 1, 'b'); // === ['a', true, 1, 'b']
concat(['a', true]); // === ['a', true]
let start: [number,number] = [1,2]; // type annotation required here
cons(3, start); // == [3,1,2]
Concatenated type as parameter
function car<H,...T>(l: [H,...T]): H {
let [head, ...tail] = l;
return head;
}
function cdr<H,...T>(l: [H,...T]): ...T {
let [head, ...tail] = l;
return tail;
}
cdr(["foo", 1, 2]); // => [1,2]
car(["foo", 1, 2]); // => "foo"
Variadic functions as arguments
function apply<...T,U>(f: (...args:...T) => U, args: ...T): U {
return f(...args);
}
function f(x: number, y: string) {
}
function g(x: number, y: string, z: string) {
}
apply(f, [1, 'foo']); // ok
apply(f, [1, 'foo', 'bar']); // too many arguments
apply(g, [1, 'foo', 'bar']); // ok
function curry<...T,...U,V>(f: (...args:[...T,...U]) => V, ...ts:...T): (...us: ...U) => V {
return (...us) => f(...ts, ...us);
}
let h: (...us: [string, string]) => void = curry(g, 1);
let i: (s: string, t: string) => void = curry(g, 2);
h('hello', 'world');
function compose<...T,U,V>(f: (v: V) => U, g: (...ts: ...T) => V): (...args: ...T) => U {
return (...args) => f(g(...args));
}
function first(x: number, y: number): string {
}
function second(s: string) {
}
let j: (x: number, y: number) => void = compose(second, first);
j(1, 2);
TODO: Could f return ...U instead of U?
Decorators
function logged<...T,U>(target, name, descriptor: { value: (...args: ...T) => U }) {
let method = descriptor.value;
descriptor.value = function (...args: ...T): U {
console.log(args);
return method.apply(this, args);
}
}
Open questions
- Does the tuple-to-parameter-list assignability story hold up? It's especially shaky around optional and rest parameters.
- Will the inferred type be a union of tuples as with the optional-parameter case? Because bind, call and apply are methods defined on Function, their type arguments need to be bound at function-creation time rather than at the bind call site (for example). But this means that functions with overloads can't take or return types specific to their arguments -- they have to be a union of the overload types. Additionally, Function doesn't have a constructor that specifies type arguments directly, so there's really no way to provide the correct types to bind et al. TODO: Add an example here (a rough sketch follows this list). Note that this problem isn't necessarily unique to variadic functions.
- Should rest parameters be special-cased to retain their nice calling syntax, even when they are generated from a tuple type? (In this proposal, functions typed by a tuple kind have to pass arrays to their rest parameters, they can't have extra parameters.)
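A rough sketch of the overload point in the second question above (hedged; the Function<...T, R> shape mentioned in the comment is hypothetical and not current TypeScript):
declare function parse(s: string): number;
declare function parse(n: number): string;
// If parse's type arguments had to be fixed when the function object is created,
// the best a generic Function<...T, R> could record is roughly
//   ...T = [string | number], R = number | string
// so parse.bind(undefined, "1") could not be given the precise type () => number.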
+1, this is really useful for functional programming in TypeScript! How would this work with optional or rest arguments? More concretely, can the compose function be used on functions with rest arguments or optional arguments?
Good point. I think you could assign the smallest allowed tuple type to an optional-param function since tuples are just objects, which allow additional members. But that's not ideal. I'll see if I can figure out the compose example and then I'll update the proposal.
Actually union types would probably work better. Something like
function f(a: string, b?: number, ...c: boolean[]): number;
function id<T>(t: T): T;
let g = compose(f, id): (...ts: [string] | [string, number] | [string, number, boolean[]]) => number
g("foo"); // ok
g("foo", 12); // ok
g("foo", 12, [true, false, true]); // ok
This still breaks rest parameters, though.
@ahejlsberg, you had some ideas how tuple kinds would work, I think.
So 👍 on this. For information this is related to (and would fulfill) #3870. We have tried to implement a compose type API in TypeScript but are having to work around some of the limitations noted in this proposal. This would certainly solve some of those problems!
It seems though that sometimes you may want to "merge" such tuple types instead of persisting them, especially with something like compose. For example:
function compose<T, ...U>(base: T, ...mixins: ...U): T&U {
/* mixin magic */
}
Also, in a lot of your examples, you have been using primitives. How would you see something more complex working, especially if there are conflicts?
Unfortunately this proposal as-is does not address #3870 or the type composition, since the only composition operator for tuple kinds is [T,...U]
. You could also write this as T + ...U
(which is more indicative of what happens to the types), but #3870 and your type composition library need T & ...U
. I think that might be possible, but I need to understand @JsonFreeman's and @jbondc's ideas from #3870 first. I'll expand the proposal if I can figure out how it should work.
Note: I decided to go with the syntax [...T, ...U]
because it looks like the equivalent value spreading syntax, but T + ...U
is more indicative of what's happening with the types. If we end up with both, then +
and &
might be the operators to use.
Big 👍 on this!
+1 awesome! It would allow such things to be expressed in a much more expressive and lightweight way.
My point in #3870 seems to be an issue here. Specifically, I worry about inferring type arguments for variadic type parameters.
Type argument inference is a rather complicated process, and it has changed in subtle ways over time. When arguments are matched against parameters in order to infer type arguments, there are no guarantees about the order in which candidates are inferred, nor how many candidates are inferred (for a given type parameter). This has generally not been a problem because the result surfaced to the user does not (in most cases) expose these details. But if you make a tuple type out of the inference results, it certainly does expose both the order and the count of the inferences. These details were not intended to be observable.
How serious is this? I think it depends on how exactly the inference works. What is the result of the following:
function f<...T>(x: ...T, y: ...T): ...T { }
f(['hello', 0, true], [[], 'hello', { }]); // what is the type returned by f?
@jbondc, - seems like a good idea. I'll keep it in mind but not explore it here, because I think we should introduce new type operators one at a time. Both & and + create new types, but & creates an intersection type whereas + creates a new tuple type (which is why I prefer the syntax [T,...U] instead of T + ...U, because [T,U] already does this for types).
@JsonFreeman I think it's OK to do one of two things with repeated kind parameters:
- Union the types:
f(['hello', 1], [1, false]): [string | number, number | boolean]
- Disallow inference of repeated tuple kind parameters, particularly if type argument inference proves complicated. Something like this:
f(['hello', 1], [1, false]) // error, type arguments required
f<[string, number]>(['hello', 1], [1, false]) // error, 'number' is not assignable to 'string'
f<[string | number, number | boolean]>(['hello', 1], [1, false]); // ok
I think real libraries (like the reactive extensions @Igorbek linked to) will usually only have one tuple kind parameter so even though neither (1) nor (2) are particularly usable, it shouldn't impact real-world code much.
In the examples above, curry is the hardest to infer -- you have to skip f: (...args:[...T,...U]) => V, infer ...ts:...T, then go back and set ...U to what's left after consuming ...T from f's parameters.
I've started prototyping this (sandersn/TypeScript@1d5725d), but haven't got that far yet. Any idea if that will work?
I would err on the side of disallowing anything where the semantics is not clear (like repeated inferences to the same spreaded type parameter). That allays my concern above as well.
I can't think of a good mechanism for typing curry. As you point out, you have to skip the parameter list of the first function to consume the ...T argument and then see what's left over. There would have to be some policy to postpone inferences to a spreaded type parameter if it's not final in its list. It could get messy.
That said, I think this is worth a try. There is high demand for the feature.
I think you would have to skip multiple tuple kinds that occur in the same context (eg top-level like (...T,string,...U) => V or concatenated like [...T,...U,...T]). Then you can make multiple passes on the skipped kinds, eliminating already-inferred kinds and re-skipping kinds that are still ambiguous. If at any point no single kind is available for inference, stop and return an error.
So, yeah. Complicated.
You may be able to draw inspiration from a similar problem. It is actually somewhat similar to the problem of inferring to a union or intersection. When inferring to a union type that includes a type parameter that is a member of the inference context, as in function f<T>(x: T | string[]), you don't know whether to infer to T. The intended manifestation of the union type may have been string[]. So typescript first infers to all other constituents, and then if no inferences were made, infers to T.
In the case of intersection, it's even harder because you may have to split the type of the argument across the different intersection constituents. Typescript doesn't make inferences to intersection types at all.
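A small illustration of that rule (f is an illustrative declaration; the comments restate the behavior described above):
declare function f<T>(x: T | string[]): T;
f(42);         // 42 matches no other constituent, so T is inferred as number
f(["a", "b"]); // the argument is matched against string[] first; T only gets an inference if that fails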
What if you only allowed spreading a tuple if it is the last type in its sequence? So [string, ...T] would be allowed, but [...T, string] would not be?
If I understand correctly, this would actually solve the mixin story in TypeScript. Am I correct in this understanding?
Maybe. Can you give an example? I'm not fluent with mixin patterns.
The syntax of a variadic kind variable is ...T where T is an identifier that is by convention a single upper-case letter, or T followed by a PascalCase identifier.
Can we leave the case of a type parameter identifier up to the developer?
@Aleksey-Bykov +1. I don't see a reason why that shouldn't be the case.
Developers with Haskell background would appreciate that.
Sorry, that sentence can be parsed ambiguously. I meant 'or' to parse tightly: "by convention (a single upper-case letter || T followed by a PascalCase identifier)". I'm not proposing constraining the case of the identifiers, just pointing out the convention.
For what it's worth, though, I have a Haskell background and I don't like breaking conventions of the language I'm writing in.
Sorry for derailing. My last curious question (if you don't mind me asking) what is the "convention" of TypeScript that might get broken and who is concerned?
This should type check, assuming T & ...U means T & U & V & ... (which is the intuitive behavior).
function assign<T, U, ...V>(obj: T, src: U, ...srcs: ...V): T & U & ...V {
if (arguments.length < 2) return <T & U & ...V> obj
for (const key of Object.keys(src)) {
(<any> obj)[key] = (<any> src)[key]
}
if (arguments.length === 2) return <T & U> obj
return assign<T, ...V>(obj, ...srcs)
}
Or in a definition file:
interface Object {
assign<T, U, ...V>(host: T, arg: U, ...args: ...V): T & U & ...V
}
@Aleksey-Bykov the convention I'm talking about is the case of type parameter identifiers. Who is concerned? People who have to read new Typescript code they've never seen before -- conventions help new readers understand new code faster.
@sandersn What @Aleksey-Bykov got the impression of was that the following would be syntactically invalid:
function assign<a, b, ...cs>(x: a, y: b, ...zs: ...cs): a & b & ...cs;
@isiahmeadows & and | operations over kinds are not covered in this proposal, although I should add them to open questions/future work if I haven't. Right now the only proposed operator is concatenation: [THead, ...TTail].
One difference is that concatenation still produces a tuple type while & and | produce intersection and union types respectively.
@sandersn My assign example in TypeScript would be trivial to change with that.
Although:
- Intersection would be similar to concatenation, although it's more like concatenating dictionaries than concatenating lists. Variadic types might be implemented on top of existing machinery there.
- Union would be like intersection, except it's only keeping the common parts. Once again, variadic types might be implemented on top of existing machinery.
@isiahmeadows An intersection is not in general a concatenation of dictionaries. That's only true for an intersection of object types, but not for example an intersection of unions. Unions are also not the same as just taking the properties objects have in common. The two are better characterized by the set of values inhabiting them.
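A quick plain-TypeScript illustration of that distinction (Merged and Either are illustrative names):
type Merged = { a: string } & { b: number }; // an intersection of object types does merge members
type Either = { a: string } | { b: number }; // a union is the set of values of either shape,
                                             // not "the members the shapes have in common"
declare const e: Either;
// e.a // error: 'a' is not known to be present on every constituent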
@sandersn I'm a little confused about type argument inference with variadic kinds. What should be inferred here?
function foo<...T>(...rest: ...T): ...T { }
foo('str', 0, [0]);
Is the result [string, number, number[]]? That would mean you have to rely on type argument inference adding candidates in a left-to-right order, which is not a trivial assumption. It would also be the first time the type system surfaces the list of inference candidates to the user.
I know this is an experimental / early proposal, but we could discuss the ...T syntax for rest parameters. From my perspective, it doesn't really work.
So the proposed syntax is:
declare function f<...T>(...a: ...T);
let's compare with existing syntax of rest parameters:
declare function f(...a: number[]);
so the type of the a parameter that catches rest arguments is number[], so we can clearly understand that it is an array. By analogy, I can infer that ...T from the proposal represents an array as well. But that's not very obvious.
Next, let's say we could define more restrictive rest parameters:
declare function f(...a: [number, string]);
// same as
declare function f(c: number, d: string); // or very close to
So now, we still see that the type of a is a tuple (which is an array).
My proposal is to use a more consistent way of treating the notion of ...T: represent it as "some abstract ordered list of types" and use it in the same way we use the spread operator:
var a: [number, string] = [1, "1"];
var b = [true, ...a]; // this must be [boolean, number, string], but it doesn't work :)
So ...a, in the case of a variable, is just 1, "1".
My syntax for defining rest parameters with the ...T notion:
declare function f<...T>(...a: [...T]);
declare function g<H, ...T>(head: H, ...tail: [...T]): [H, ...T];
For me it makes much more sense.
@Igorbek I've been running on the assumption declare function f<...T>(...a: ...T); already worked like that. But I don't see declare function f(...a: [number, string]); getting much use.
To be more clear.
Originally proposed syntax for rest parameters:
function func<...T>(...a: ...T)
If I can do this
function g<...T>(...a: ...T): [number, ...T] { ... }
then I will be able to do this:
function f<...T>(...a: ...T): [...T] { return a; }
So the type of a is [...T] (that is what we return), but we defined it as ...T in the signature.
We could say that ...T and [...T] are the same, but that doesn't work in the case of variables.
For variables:
var a = [1, 2];
[a] === [[1,2]];
[...a] === [1, 2];
f(...a) === f(1, 2)
...a === 1, 2 // virtually
If we apply the same reasoning to standard rest parameters
function f(...a: number[]): number[] { return a; }
the type of a is number[] (by the return type), the same as it was defined in the signature.
@isiahmeadows yes, function f(...a: [number, string]) doesn't work. I was just thinking through how we can treat rest parameters.
So, going further. To explicitly define type parameters, the following syntax was proposed:
function f<...T, ...U>()
f<[number, string], [boolean, number]>();
Turns to:
f<...[number, string], ...[boolean, number]>();
So this might work too:
function g<T1, T2, T3>()
g<A, B, C>();
// same as
g<...[A, B, C]>();
g<...[A], ...[B, C]>();
g<...[A], B, C, ...[]>();
@JsonFreeman that is how my prototype works, yes. But I'm not familiar enough with the type inference algorithm to understand why it works. In other words, the question is not whether left-to-right inference is a trivial assumption but a correct one. For the identity case the answer is yes but I don't know if you can construct cases where the answer is no.
Also can you work through an example of an exposed set of type inference candidates? Like I said, I don't understand the working of the inference algorithm very well, so an example would help me to see what you mean.
And even better:
declare function f<...T>(...a: T): T;
// same as
declare function f<...T>(...a: [...T]): T;
I suggest prefixing [] to the type identifier to signify the rest of the type params.
function fn<R, []T>(...a:[]T): R;
It's 1 character shorter than ...T and (in my opinion) makes less visual noise.
@Aleksey-Bykov I'm actually of the opposite opinion on that. It doesn't fit with the existing rest parameter syntax, so I believe it's also less clear from a glance.
[...T] / T as a rest array parameter type seems much better to me. Once again, compare with arrays and their spread operator:

| arrays | types (from proposal) | types (my update) |
| --- | --- | --- |
| var x = [1,2] | no | T = [T1, T2] |
| [0, ...x] === [0,1,2] | [T0, ...T] === [T0, T1, T2] | [T0, ...T] === [T0, T1, T2] |
| f(x) === f([1, 2]) | no | f<T>() === f<[T1, T2]>() |
| f(...x) === f(1, 2) | f<...T>() === f<[T1, T2]> ? | f<...T>() === f<T1, T2> |
| f(0, ...x) === f(1, 2) | f<T0, ...T>() === f<T0, [T1, T2]> ? | f<T0, ...T>() === f<T0, T1, T2> |
From proposal
function g<...T>(...x: ...T) {
// being called as g(1, "a");
var a: ...T; // [number, string] ?
var b: [number, ...T]; // [number, number, string]
var c: [...T]; // [number, string] - same as a ? so [...T] is same as ...T - weird
}
From my update
function g<...T>(...x: T) {
// being called as g(1, "a");
var a: T; // [number, string]
var b: [number, ...T]; // [number, number, string]
var c: [...T]; // [number, string]
}
The update looks nicer now IMO. Lists to represent types sounds very nice, but even typed Lisps don't go that far (homoiconic types, anyone? 😄).
I get the allure of purity, but I'm also looking at the pragmatic aspect as well. Lists would also be relatively easy to implement on their own, but it doesn't fit in with the rest of the language. It's almost like the numerous attempts to implement monads in Java (the language) or lambdas in C - they always turn out incredibly ugly and hackish.
@sandersn I can try to explain what I mean by exposing the list of candidates. Type argument inference generates a list of candidates for each type parameter. Then it checks if any candidate is a supertype of all the others, and if so, that candidate is the winner. So in the following example:
function foo<T>(a: T, b: T): T {}
foo(["hi", 0], ["", ""]);
The arguments will be typed, and then inferred to each parameter. Two candidates will be generated, namely (string | number)[] and string[]. But the first one will win because it is a supertype of the second. And as a result, the user never observes that string[] was ever in the picture. There is one inference for T, and all other candidates are invisible. This means that there are two things invisible to the user, namely the order of the candidates and the multiplicities of the candidates.
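To make that concrete, here is the observable result of the example above (re-declared so the snippet stands alone):
declare function foo<T>(a: T, b: T): T;
const r = foo(["hi", 0], ["", ""]); // r: (string | number)[] -- the string[] candidate never surfaces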
Here is an issue with the multiplicities if you rely on the candidate list as your list of elements in the tuple denoted by ...T:
function foo<...T>(...rest: ...T): ...T
foo(0, 1);
I think you would want to infer [number, number] for T given the intent of your proposal as I understand it. But because of the contains check in line https://github.com/Microsoft/TypeScript/blob/master/src/compiler/checker.ts#L6256, the number candidate will only be added once, and T will be inferred as [number]. This is the multiplicity issue I was talking about.
As for the order, it is left to right. But there are multiple passes, and arguments will be reprocessed if they contain function expressions that will be contextually typed. If there are n arguments containing contextually typed function expressions, then there are n + 1 passes over the arguments. An example is Array.prototype.reduce, where the initialValue parameter is effectively typed and inferred before the callback, despite the fact that it's on the right. So something like the following might be an issue for the proposal:
function foo<...T>(...rest: ...T): ...T
foo(x => x, 0);
Intuitively, T should be [(x: any) => any, number], but if you rely on the order the candidates are added, it will be [number, (x: any) => any]. This is because type argument inference is left to right generally, but functions subject to contextual typing are deferred to the end.
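A familiar concrete case of that multi-pass order is the reduce example mentioned above:
// The initialValue argument on the right is processed first, and the callback
// is contextually typed from it on a later pass.
const sum = [1, 2, 3].reduce((acc, x) => acc + x, 0); // acc and x are typed as number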
Both the multiplicity and the order issues I've explained are instances of surfacing the candidate list. @ahejlsberg will surely be a good person to ask about this as well, and indeed he can help explain, confirm or disprove anything I've said.
@JsonFreeman why do you think it'd be an issue?
It can be implemented by virtually introducing an extra generic type parameter for each actual rest argument and inferring against a function with a fixed parameter count.
For example,
function foo<...T>(...rest: T) { ... }
foo(x => x, 0);
// to infer, the following function is used
function foo2<T0, T1>(rest0: T0, rest1: T1) { ... }
foo2(x => x, 0);
// inferred as
foo2<(x: any) => any, number>
// T0 = (x: any) => any
// T1 = number
// T = [T0, T1] = [(x: any) => any, number]
BTW, can we infer x => x to be of type { <T>(x: T): T; }?
@Igorbek I think your suggestion about manufacturing type parameters (at least as intuition, regardless of how it is implemented) is the correct way to do it. You could infer a sequence of types for T, where each element in the sequence has an index and a list of candidates (this is an alternate way of implementing what you mentioned).
However, my point was, I do not think this is what would naturally happen if you just repurposed the inference candidate list as the inferred tuple. It would require explicit mechanics to make the right thing happen.
For your point about { <T>(x: T): T; }, that doesn't generalize well to typing things like x => foo(x) where foo is some function. You'd need to know the type of x to do overload resolution for foo.
A small step out from the battle with type-checker inference rules.
I have a comment/suggestion about the syntax. I think there are two consistent but mutually exclusive options:
1. Formal rest type arguments
If we choose this form:
type F<...Args> = (...args:...Args) => ...Args
then we should use it like
var a: F // a: () => []
var b: F<number> // b: (arg: number) => [number]
var c: F<number, string> // c: (arg1: number, arg2: string) => [number, string]
...
Thus it will be true rest formal types. They should be used only at the last position of the formal type parameter section.
2. Tuple-typed rest arguments
(...args:[string, number]) => boolean IS EQUIVALENT TO (s: string, n: number) => boolean
In this case we always have fixed number of slots in formal type parameter section.
function f<T>(...args: T): T {
return args;
}
we infer that T should be a tuple type if either condition is met:
- T is used for rest parameters like (...args: T) => T
- T is used in spread composition like [...T] or [number, ...T, string]
Thus, we do not need to use an ellipsis in the formal type parameter section (we can infer it even syntactically without any type checker)
in this case, we can write also
function f<T>(...args: [...T]): [...T] {
return args;
}
but it is redundant.
Personally, I would like to see the latter one implemented in TypeScript. @JsonFreeman, @sandersn?
@Artazor I think it boils down to expressivity, and I don't think the two approaches are necessarily equivalent. The second one includes the ability to spread a rest type parameter inside a tuple type, whereas the first does not seem to.
I think for generic type references, it is just a matter of deciding where and syntactically how to use a rest type parameter. This would need to be decided for all type constructors that take a type sequence (tuples, signatures, generic type references).
For generic signatures, it's more complicated because of type argument inference. What if you had the following:
function callback(s: string, n: number): void { }
declare function foo<...T>(cb: (...cbArgs: T) => void, ...args: T): [...T];
foo(callback, "hello", 0, 1);
What does foo return? My point is just that people expect generics rules to be the same for generic types and generic signatures, but if you make generic types more expressive, type argument inference needs a way to handle it. This may just be a matter of formally identifying the cases that are hard for type argument inference, and requiring the user to pass explicit type arguments in these cases.
In terms of my opinion, I think your option 1 is better. I personally do not see the use of using tuple types as rest parameters. I think a rest parameter should only be allowed to be an array type, or a rest type parameter, because it is supposed to have variable length. I also like the concept of a rest type parameter being an abstract sequence of types, not associated with something that already exists in the type system.
My philosophy on tuples is that they represent a subset of array values where the length is known. Those array values are real runtime entities. I don't like the idea of using them as a sort of type system device to represent an abstract sequence of types (for instance the sequence of parameters in a signature). But whether you are allowed to spread a rest type parameter in a tuple is a different story.
I like the tuple proposal because it's more powerful and solves more use cases, it's also very intuitive that I can spread a tuple as a rest parameter because tuples are just arrays and when calling a function with a rest parameter I can spread the array. The type system would then match my understanding of the code better.
@JsonFreeman in your case foo would return [string, number, number] as that would be inferred from ...args; the inferred cb type would be (string, number, number) => void and the passed callback would just ignore the last argument, which is very common in both TS and JS.
I don't like the idea of using them as a sort of type system device to represent an abstract sequence of types
That's exactly what they are, JS doesn't know about tuples, only TS. For TS a tuple is a sequence of types.
I like tuple-based approach too. Especially if we could have compatible functions signatures like that:
// all are equivalent
(a: A, b: B, c: C) => R;
(a: A, b: B, ...rest: [C]) => R;
(a: A, ...rest: [B, C]) => R;
(...args: [A, B, C]) => R;
// this is more complicated
(a: A, ...rest: T[]) => R;
(...args: [A, ...T]) => R; // not expressible in current syntax
We cannot express the latter with the current syntax, but we could if #6229 were adopted.
So for me it seems that the proper way is to use tuples and to unify tuples so they can express more. Without more expressive tuples, it'd be hard to have something like [...T, ...T] because T as a tuple has an open length.
@JsonFreeman for your example, @Pajn described exactly my understanding of it - there are no visible problems in inferring these types.
@JsonFreeman I'd better use that syntax
declare function foo<T>(cb: (...cbArgs: T) => void, ...args: T): T;
declare function foo<T>(cb: (...cbArgs: T) => void, ...args: T): [...T]; // same
Hm, probably it may introduce some ambiguity:
declare function foo<T>(...args: T): T;
foo(1); // T is [number] or number[]?
// however, here it'd be more explicit
declare function foo<T>(...args: T[]): T[];
foo(1); // T is number[]
// and here
declare function foo<T>(...args: [...T]): T;
foo(1); // T is [number]
I could get behind the idea of spreading a rest type parameter in a tuple. But I'm not sure I want a rest type parameter to be implicitly interpreted as a tuple. @Pajn's example would still work if rest type parameters are allowed to be spread in all type sequence positions (tuples, parameter lists, type arguments).
@Igorbek You're right about the ambiguity in your first example. Your third example is problematic too though. Given a sequence like number, string, there are 2 possible instantiations of the signature. Namely (arg1: number, arg2: string) => [number, string] as well as (arg1: [number, string]) => [number, string] (adopting the implicit tuple interpretation for the sake of the example).
The other odd thing about the implicit tuple interpretation is this: say you have a rest type parameter T being instantiated to number, string. Now say you pass those as type arguments, Foo<T>. Is that to be interpreted as Foo<[number, string]> whereas Foo<...T> is Foo<number, string>? There is an argument for this, as it would be extending the spread operator to the type system. But I'd still rather the tuple version be represented as Foo<[...T]>
Call me crazy, but I sense some fundamental flaws with the idea of using tuples. What happens if you try to spread a tuple type across too many parameters? Like this?
declare function foo<T>(...args: [...T]): void
foo<[number]>(1, 2)
Also, what happens if the type parameters are of the wrong type or used in unusual, potentially erroneous places?
// 1. unusual place
declare foo<T>(x: T, ...ys: [...T]): void
// 2. bad type
declare foo<T>(...xs: [...T]): void
foo<number>(2)
The first example is directly relevant for Function#apply (and could be an error), and the second is a non-obvious mistake that will fail to compile, and non-trivial to detect with Intellisense.
Your third example is problematic too though. Given a sequence like number, string, there are 2 possible instantiations of the signature. Namely (arg1: number, arg2: string) => [number, string] as well as (arg1: [number, string]) => [number, string] (adopting the implicit tuple interpretation for the sake of the example).
From my third example it is clear that it can only be interpreted as (...args: [number, string]) => [number, string]:
declare function foo<T>(...args: [...T]): T;
foo(1, "a"); // T is [number, string]
const result: [number, string] = foo<[number, string]>(1, "a");
// however, it is assignable to/from the following signatures:
const f1: (arg1: number, arg2: string) => [number, string] = foo<[number, string]>;
const f2: (arg1: number, ...rest: [string]) => [number, string] = foo<[number, string]>;
The other odd thing about the implicit tuple interpretation, is this: say you have a rest type parameter T being instantiated to number, string.
T cannot be instantiated to number, string as it is a true tuple. It must be [number, string].
Now say you pass those as type arguments, Foo<T>. Is that to be interpreted as Foo<[number, string]> whereas Foo<...T> is Foo<number, string>?
True. However, having <...T> seems redundant for the particular use case we're discussing (catching positional types for rest arguments). Nevertheless, let's say we have it.
There is an argument for this, as it would be extending the spread operator to the type system. But I'd still rather the tuple version be represented as Foo<[...T]>
There're two cases when we might use that syntax:
// in a signature declaration
declare function foo<[...T]>(...args: [...T]): [...T];
// and when type instantiated, so in the usage
type T = [number, string]
foo<T>();
foo<[...T]>();
// the latter can virtually be replaced as
type _T = [...T]; // which is a type operation that should produce [number, string]
foo<_T>();
// and more
type Extended = [boolean, ...T]; // [boolean, number, string]
So for usage it's nothing more than a type operator like |, & or []. But in the declaration that syntax might be interpreted as T extends any[] (or whatever the base type for all tuples is), to indicate that T must be a tuple type.
@isiahmeadows
What happens if you try to spread a tuple type across too many parameters? Like this?
declare function foo<T>(...args: [...T]): void
foo<[number]>(1, 2); // ok, foo<[number]> is of type (...args: [number]) => void
// [1, 2] is being passed in place of args
// is [1, 2] which is [number, number] assignable to [number]? yes, with current rules
// no error
Also, what happens if the type parameters are of the wrong type or used in unusual, potentially erroneous places?
// 1. unusual place
declare foo<T>(x: T, ...ys: [...T]): void
// 1. [...T] can be interpreted as a type constraint: "must be a tuple type"
// 2. if we call with type specified
foo<number>(1); // number doesn't meet constraint
foo<[number]>(1, 2); // argument of type 'number' is not assignable to parameter 'x' of type '[number]'
foo<[number]>([1], 2); // ok
// 3. if we call without type, it must be inferred
foo(1); // according to current rules, T would be inferred as '{}[]' - base type of all tuples
// so, argument of type 'number' is not assignable to parameter 'x' of type '{}[]'
foo([1, 2], 2); // T is inferred as '[number, number]'
// rest arguments of type '[number]' are not assignable to rest parameter 'ys' of type '[number, number]'
foo([1], 2, 3); // T is '[number]',
// x is of type '[number]',
// ys is of type '[number]',
// rest arguments are of type '[number, number]' which is assignable to '[number]',
// no error
// 2. bad type
declare foo<T>(...xs: [...T]): void
foo<number>(2); // type 'number' doesn't meet constraint
I still do not see the benefit of representing these things as tuples. Furthermore, I think they should be declared as <...T> and not <T>. As I said before, I don't see tuple types as an appropriate device to use for arbitrary length type sequences in the type system. I am still not convinced that this is required for the expressivity that people want.
I'd agree it could be more expressive, but having a 'spread' operator in type parameter position would limit us to catching rest type arguments only once, just as we can't have rest parameters twice. So given <...T> and <A, B, C>, T will catch them as [A, B, C]. And we wouldn't be able to express <...T, ...U> as it would be ambiguous - [A, B, C], [] or [A, B], [C] or ... etc.
Let's say I wanted to express a function with the following behavior:
declare function foo(a: A, b: B): R;
declare function boo(c: C, d: D, e: E): U;
let combined: (a: A, b: B, c: C, d: D, e: E) => [R, U] = combine(foo, boo);
// so the signature could be:
declare function combine<R, U, ???>(
f1: (...args: [...T1]) => R,
f2: (...args: [...T2]) => U):
(...args: [...T1, ...T2]) => [R, U];
// if ??? is '...T1, ...T2'
combine<R, U, A, B, C, D, E> // what will be T1 and T2 ?
combine<R, U, ...[A, B, C], ...[D, E]> // ok ? so we will preserve spread to specific positions. so then
combine<...[R, U], A, ...[B, C, D], E> // will be restricted.
// however, ES6 allows to do it with function arguments
f(1, 2, 3);
f(...[1, 2], 3);
f(...[1], ...[2, 3]);
// if ??? is 'T1 extends TupleBase, T2 extends TupleBase'
// or just '[...T1], [...T2]' as a shortcut for such constraints
combine<R, U, [A, B, C], [D, E]> // pretty explicit, and doesn't occupy spread operator for type arguments
Ok, I see now how you are thinking of it. It sounds like what you're proposing is actually a different feature from what I thought. Instead of adding a new construct for capturing a sequence of type parameters, you just want tuple types to be spreadable because they already represent a sequence of types. That way it is possible to pass multiple tuples of various lengths in a more transparent way.
In javascript, it's more like function foo([...rest]) { } instead of function foo(...rest) { }.
That makes more sense to me now, thanks for explaining. I think that is a reasonable approach.
@JsonFreeman Exactly!
@JsonFreeman Question: why should [1, 2] satisfy [number]? That seems very odd to me. That actually working would be very surprising. It's not at all type safe.
Not that I have anything against tuples being used for variadic types, though (I'm neutral, to be honest).
@isiahmeadows in what way is [1, 2] not substitutable for [number]? It's definitely a subtype. It's the same as how { x: 1, y: 2 } is a valid { x: number }
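For reference, the structural half of this argument can be checked in plain TypeScript; the tuple-length half is exactly the open question (left as a comment):
const point = { x: 1, y: 2 };
const justX: { x: number } = point; // ok: extra members are fine structurally
// const single: [number] = [1, 2]; // whether tuple length should work the same way is what is being debated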
Okay. I'll concede partially, but do take into account Function.prototype.apply, which accepts a tuple of arguments.
interface Function<T, U, V> {
(this: T, ...args: [...U]): V;
apply(object: T, args: U): V;
}
If the caller throws a TypeError on too many arguments, then passing too many will result in a runtime error, not a compile error like it should.
Isn't it pretty rare for any JS function to throw TypeError when passed too many arguments? What are some examples?
@isiahmeadows as an abstract example, I understood that the error you are worried about is:
function f(x: number): void {
// throw if too many arguments
}
f.apply(undefined, [1,2,3]); // runtime error, no compile-time error
f(1,2,3) // compile-time error and runtime error.
Is that correct?
@sandersn, I think that a TypeError on too many arguments is something that violates the spirit of JS, as we usually pass a function with fewer formal arguments than the actual ones that will be passed to it. We simply do not use them. For example Array.prototype.forEach
What about function currying? That's probably much more common, with Ramda and lodash/fp.
@isiahmeadows I'd say that currying based on arguments.length is very unstable and runtime error-prone. Real currying is extra-argument-proof:
var plus = x => y => x + y
console.log(plus(3)(4)) // 7
console.log(plus(3,10)(4,20)) // still 7
When I pass my function with a fixed signature as a callback somewhere, I think about it in the following way: 'my function expects at least those arguments'.
What about things like foldl?
const list = [1, 2, 3]
console.log(foldl((a, b) => a + b, 0, list))
console.log(foldl((a, b) => a + b, 0)(list))
console.log(foldl((a, b) => a + b)(0, list))
console.log(foldl((a, b) => a + b)(0)(list))
That's very common in functional programming. And omitting the last argument is fairly common.
If you want to pass that as a callback to, say, map (working over a list of lists), you'll probably want to curry it.
I think it's mostly about that:
type T = [number, string];
var a: T = [1, "a", 2]; // valid
// in this cases tuple types or parameter types cannot be inferred:
f(...a, true); // you could think number,string,boolean were passed, but weren't
const c = [...a, true]; // you could think that is of type [number, string, boolean] but it's not
// according to current rules, the best inferred types might be [number, string, number|string|boolean]
// same manner with variadic kinds, types are constructed properly:
type R = [...T, boolean]; // [number, string, boolean]
That's why I've proposed #6229
The question of whether [1, 2] satisfies [number] is a valid one to ask and debate. But what has it got to do with the spreadable tuples feature?
It's whether a variadic application of tuples should ignore extra arguments or not. This overloaded function should illustrate my concern.
declare function foo(x: number, ...args: string[]): void
declare function foo<T>(...args: [...T]): void
foo<[number]>(1, 2)
// This will always fail
declare function foo(x: number, ...args: string[]): void
declare function foo<T>(x: T): void
foo<number>(1, 2)
And this is why, for practical reasons, I would prefer rest parameter-like variadic types.
@JsonFreeman that's because of how the spread operator would behave for types versus arrays/tuples. If a spread type operator is allowed in the form "given types A, B and T = [A], then [...T, B] constructs [A, B]" (which is implicitly proposed), it would not be aligned with the array/tuple spread operator. Given var a: [A] and var b: B, the type of the expression [...a, b] cannot be proven to be [A, B]. According to the current rules for tuples, it could only be proven to be of type [A, A|B].
Does that make sense to you? Or I can create a comparison table to highlight the mismatch.
@Igorbek I understand what you are saying. It ultimately stems from the fact that the compiler has perfect knowledge of the types it's dealing with, but not perfect knowledge of the values. In particular, in your example, the value a has unknown length, whereas the type [A] has known length. This was one of the reasons I was initially uncomfortable about using tuple types for this purpose. But I am not sure it's a serious problem.
@isiahmeadows I see what you are asking about, but why is the issue any clearer with rest type parameters? If you have more arguments than type arguments, the same question can be asked.
The type safe solution would be more consistent with the rest of the language if it mimicked the argument syntax. My point is if you're effectively spreading a rest parameter, you get exactly the argument types and no more. Curried functions have a return type dependent on the argument type. So if you apply one too many arguments to partially apply a curried function, you'll get a completely different type. Handling rest types like tuples would lead to runtime errors instead, which are never good.
@isiahmeadows can you give example code for the currying problem?
I still think that even if you used rest type parameters (which I am all for), you'd have to explicitly decide not to allow excess arguments, but I agree with @isiahmeadows that you probably should.
type FullCurry<T> = ((initial: T, xs: T[]) => T) | ((initial: T) => (xs: T[]) => T)
declare function foldl<T>(func: (acc: T, item: T) => T, initial: T, xs: T[]): T
declare function foldl<T>(func: (acc: T, item: T) => T): FullCurry<T>
declare function foldl<T>(func: (acc: T, item: T) => T, initial: T): (xs: T[]) => T
interface Function<T, R, ...A> {
apply<U extends T>(inst: U, args: [...A]): R
apply(inst: T, args: [...A]): R
}
function apply(reducer: (initial: number) => number): (xs: number[]) => number {
return reducer.apply(undefined, [0, []])
}
const func = apply(foldl<number>((x, y) => x + y))
func([1, 2, 3]) // Runtime error
I'll add my variation too. Let's see the example of variadic curry from the proposal:
function curry<...T,...U,V>(f: (...ts: [...T, ...U]) => V, ...as:...T): (...bs:...U) => V {
return ...b => f(...as, ...b);
}
So then, I started to use it:
function f(a: number, b: string, c: string) { return c.toUpperCase(); }
var a: [number, string] = [1, "boo", 2]; // valid under the current tuple rules
const cf = curry(f, ...a); // cf is of type string => string
cf("a"); // runtime error
@isiahmeadows Whether they are represented as rest type parameters or as tuple types, it sounds like you object to the ability to spread them in a tuple position.
@Igorbek I think your example is similar in that the problem is not how variadic type sequences are represented. It's the ability to spread them in tuples that leads to problems.
@JsonFreeman It's more that I object to this behavior:
class A {}
class B {}
class C {}
declare function foo(a: A, b: B): C;
// This should not work
let value: [A, B, C]
foo(...value)
Does that clarify?
@isiahmeadows it should work actually
@JsonFreeman
I feel it shouldn't. That's my biggest objection. I feel it's potentially dangerous if it is.
Question: what should be the inferred return type of ret?
declare function foo(a: A, b: B, c: C, d: D): D
let ret = foo.bind(...[new A(), new B(), new D()])
This is actually pretty important.
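For what it's worth, here is a self-contained plain-TypeScript sketch of what that call does at runtime (the class members exist only to keep the types structurally distinct, and the cast sidesteps strict bind checking):
class A { a = 1 }
class B { b = 1 }
class C { c = 1 }
class D { d = 1 }

function foo(a: A, b: B, c: C, d: D): D { return d; }

const spread: [A, B, D] = [new A(), new B(), new D()];

// bind consumes the first spread element as `this` and the rest as leading
// arguments, so the D instance lands in the position foo declares as C.
// The open question is what a variadic-kinds checker should infer for ret.
const ret = (foo.bind as any)(...spread);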
That last example definitely seems like it shouldn't work. Essentially you need a mechanism to align sequences of types if function.bind is really going to work properly. You would need something akin to unification, where the types of the arguments to bind are matched against the original function's arguments, and then the remainder is in the return type.
That said, it doesn't seem like anything in what's been proposed or discussed can handle that (regardless of whether extra arguments of tuples are allowed), though it's possible I missed something.
I think the biggest problem is that some sort of tuple pattern matching, where each parameter's type is matched against the spread types, needs to be done with the type parameters (a la LiveScript/CoffeeScript's arguments) to fix that problem. It's probably impossible otherwise. And as for how complicated it is, good luck implementing it. 😄
Or to be more precise, it'll require non-strict (in the sense of eager vs lazy) type checking to work. I also think that's probably a more useful extension than just variadic types, anyways, since it pretty much opens the door for a lot of other more useful things, like self-recursive types.
// I hate this idiom.
interface NestedArray<T> extends Array<Nested<T>> {}
type Nested<T> = T | NestedArray<T>
// I would much prefer this, but it requires non-strict type checking.
type Nested<T> = T | Nested<T>[]
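As a usage sketch, either spelling is meant to accept arbitrarily nested arrays:
interface NestedArray<T> extends Array<Nested<T>> {}
type Nested<T> = T | NestedArray<T>

// Arbitrarily nested arrays of numbers all satisfy Nested<number>.
const tree: Nested<number> = [1, [2, [3, 4]], 5];
console.log(JSON.stringify(tree));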
Thankfully, non-strict type checking should be a purely non-breaking change in that only code that previously failed to check now works.
That's probably the biggest thing blocking Function.prototype.bind's proper typing, other than the fact it'll require a very complex type signature.
That's an interesting connection. I'm not convinced they are related though. The recursive types issue is a consequence of the caching policy for generics, and the representation of type aliases in the compiler. All the information is there, it's just the compiler's design that gets in the way.
For the tuple pattern matching, you can't always know how many arguments are being matched against the tuple. If you spread an array into the arguments of bind, you don't know how many are left in the resulting callback.
@JsonFreeman that being said, do you think that, as a step toward adoption, the argument spread operator proposal #6229 needs to be considered first?
And non-strict checking of types would allow enough laziness to make it easier to fix that problem with Function.prototype.bind. With such laziness, you could accomplish that type with the following (which will require a tuple syntax to sequence them, unless multiple rest parameters are okay in a type declaration):
interface Function {
bind<R, T, ...X, ...Y>(
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
}
Why would this require non-strict type checking to infer? Because you have to deduce the rest type step by step to check against the function. Given the following, here's how it would have to check:
// Values
declare function func(a: number, b: string, c: boolean, d?: symbol): number
let f = func.bind(null, 1, "foo")
// How to infer
bind<R, T, ...X, ...Y>(
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
// Infer first type parameter
bind<number, T, ...X, ...Y>(
this: (this: T, ...args: [...X, ...Y]) => number,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => number
// Infer second type parameter
bind<number, any, ...X, ...Y>(
this: (this: any, ...args: [...X, ...Y]) => number,
thisObject: any,
...args: [...X]
): (this: any, ...rest: [...Y]) => number
// Infer first part of rest parameter
bind<number, any, number, ...*X, ...Y>(
this: (this: any, ...args: [number, ...*X, ...Y]) => number,
thisObject: any,
...args: [number, ...*X]
): (this: any, ...rest: [...Y]) => number
// Infer second part of rest parameter
bind<number, any, number, string, ...*X, ...Y>(
this: (this: any, ...args: [number, string, ...*X, ...Y]) => number,
thisObject: any,
...args: [number, string, ...*X]
): (this: any, ...rest: [...Y]) => number
// First rest parameter ends: every occurrence that uses only it is now fully spread
bind<number, any, number, string, ...Y>(
this: (this: any, ...args: [number, string, ...Y]) => number,
thisObject: any,
...args: [number, string]
): (this: any, ...rest: [...Y]) => number
// Infer first part of next rest parameter
bind<number, any, number, string, boolean, ...*Y>(
this: (this: any, ...args: [number, string, boolean, ...*Y]) => number,
thisObject: any,
...args: [number, string]
): (this: any, ...rest: [boolean, ...*Y]) => number
// Infer second part of next rest parameter
// Note that information about optional parameters is retained.
bind<number, any, number, string, boolean, symbol?, ...*Y>(
this: (
this: any,
...args: [number, string, boolean, symbol?, ...*Y]
) => number,
thisObject: any,
...args: [number, string]
): (this: any, ...rest: [boolean, symbol?, ...*Y]) => number
// Second rest parameter ends: every occurrence that uses only it is exhausted
bind<number, any, number, string, boolean, symbol?>(
this: (this: any, ...args: [number, string, boolean, symbol?]) => number,
thisObject: any,
...args: [number, string]
): (this: any, ...rest: [boolean, symbol?]) => number
// All rest parameters that are tuples get converted to multiple regular parameters
bind<number, any, number, string, boolean, symbol?>(
this: (
this: any,
x0: number,
x1: string,
x2: boolean,
x3?: symbol
) => number,
thisObject: any,
x0: number,
x1: string
): (this: any, x0: boolean, x1?: symbol) => number
// And this checks
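As a plain-TypeScript sanity check (no proposed syntax), the end state of that walkthrough matches what bind actually does at runtime:
function func(a: number, b: string, c: boolean, d?: symbol): number {
    return a + b.length + (c ? 10 : 0) + (d ? 100 : 0);
}

const f = func.bind(null, 1, "foo");

// The bound function behaves like (c: boolean, d?: symbol) => number.
console.log(f(true));            // 14
console.log(f(false, Symbol())); // 104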
This is how non-strict type checking works: it infers types as needed, instead of the instant it sees them. You can (and should) combine the two passes, so that wrong types fail. Example:
let f = func.bind(null, 1, Symbol("oops"))
// How to infer
bind<R, T, ...X, ...Y>(
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
// Infer first type parameter
bind<number, T, ...X, ...Y>(
this: (this: T, ...args: [...X, ...Y]) => number,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => number
// Infer second type parameter
bind<number, any, ...X, ...Y>(
this: (this: any, ...args: [...X, ...Y]) => number,
thisObject: any,
...args: [...X]
): (this: any, ...rest: [...Y]) => number
// Infer first part of rest parameter
bind<number, any, number, ...*X, ...Y>(
this: (this: any, ...args: [number, ...*X, ...Y]) => number,
thisObject: any,
...args: [number, ...*X]
): (this: any, ...rest: [...Y]) => number
// Infer second part of rest parameter
bind<number, any, number, string, ...*X, ...Y>(
this: (this: any, ...args: [number, string, ...*X, ...Y]) => number,
thisObject: any,
...args: [number, symbol /* expected string */, ...*X] // fail!
): (this: any, ...rest: [...Y]) => number
The expected parameter should be the first one inferred in that round, doing a depth-first iteration. Here, the first one inferred in that search was a string, and symbols aren't assignable to strings, so the check fails.
And because of this, and because of trying to type Function.prototype.apply, my opinion on using tuples for applying rest types has changed.
interface Function {
apply<T, R, ...X>(
this: (this: T, ...args: [...X]) => R,
thisArg: T,
args: [...X]
): R
}
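A runnable companion in ordinary TypeScript (the function and names are only illustrative) showing the runtime behaviour that the [...X] parameter is meant to capture statically:
function describe(this: void, n: number, s: string): string {
    return s + ":" + n;
}

// apply forwards the tuple elements positionally at runtime; the proposed
// signature above would have the checker verify that [3, "items"] matches
// describe's parameter list.
const args: [number, string] = [3, "items"];
console.log(describe.apply(undefined, args)); // "items:3"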
A few other notes:
- There needs to be a way of spreading arrays and tuples as rest type parameters:
interface Foo extends Function<void, ...string[]> {}
- There need to be two separate types for constructors and callables, with Function being the union of the two. Callable objects should implement the callable interface, class constructors should implement the constructible interface, and ES5 functions should implement the union of the two.
- Function.prototype.bind and friends should check against all overloads for the function. If there are multiple that work, it should return a union of all of them.
Those type parameters in your example are not really the type parameters of the bind signature though. They belong to the Function type. But yes, the idea is that if you could use two rest params, or spread two rest type params in a tuple, you would be able to write this.
In order for the signature of bind to be flexible enough, the boundary between ...X and ...Y needs to be decided on a per-call basis. It would need to be inferred. It would be a problem if a signature were to use ...X in isolation though. In this case, the boundary will not have been decided. For example:
interface SomeType<T, R, ...X, ...Y> {
someMethod(someArgs): [...X]; // No way of knowing how long X is
}
And overloads are quite a problem for the Function type. I don't think you want to take the union type elementwise on each argument, because that would allow mixing and matching of parameters between overloads. Is that what you meant?
TL;DR: Skip to the horizontal line break. I have a new, more practical idea.
- Yes, I'm aware they really belong on Function itself.
- That kind of problem is why I said non-strict type matching (in the sense of Haskell) is necessary. You can't eagerly resolve the type normally, because that would require an iterative, lazy search. It's possible to determine algorithmically, but you would have to keep track of things that wouldn't normally need to be tracked in, say, C++.
- If the two arguments are in isolation (like in your example), the compiler should complain. And that situation can be detected with a type-level dependency analysis of each variadic argument in the interface/whatever. It also isn't trivial, but it can be verified when reading the type declaration itself (actually, shortly after).
---
Although I'm also thinking it might be a little more feasible to define these kinds of situations on just the method in question. It'll also be much easier and faster to detect those kinds of potential problems you alluded to.
interface Function<R, T, ...A> {
// Split it up for just this method, since it's being resolved relative to the
// method itself.
bind[...A = ...X, ...Y](
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
}
There is another potential problem that will be much harder to work out (and why I think it should be constrained to 2, not n, divisions):
declare function foo<...T>[...T = ...A, ...B, ...C](
a: [...A, ...C],
b: [...A, ...B],
c: [...B, ...C]
): any
// This should obviously check, but it's non-trivial to figure that out.
let x = foo<
boolean, number, // ...A
string, symbol, // ...B
Object, any[] // ...C
>(
[true, 1, {}, []],
[true, 1, "hi", Symbol()],
["hi", Symbol(), {}, []]
)
Sorry if I'm getting too deep into CS theory here...
Yes, I think that is the right idea. It is not pretty, but I can't think of any other way to properly type bind, knowing the type arguments of Function. The ultimate thing is that a boundary must be inferred. And I agree that it should be limited to 2 buckets so that you have to infer 1 boundary, instead of some arbitrary number of boundaries, which can blow up combinatorially.
There are likely more issues that we have not thought of.
@JsonFreeman Another issue is things like curry. I've yet to come up with something that can properly type that. And it'll take a while before I can. I'd have to do some serious Haskell-like type hacking to come up with such a process.
Thinking about how this kind of proposal could work with some Bluebird functions.
interface PromiseConstructor {
// all same type
all<T>(promises: PromiseLike<T>[]): Promise<T[]>;
join<T>(...promises: PromiseLike<T>[]): Promise<T[]>;
// varying types
all<...T>(promises: [...PromiseLike<T>]): Promise<[...T]>;
join<...T>(...promises: [...PromiseLike<T>]): Promise<[...T]>;
// this is sketchy... ^
}
interface Promise<T> {
// all same type
then<U>(onFulfill: (values: T) => U): Promise<U>;
spread<U>(onFulfill: (...values: T) => U): Promise<U>;
}
interface Promise<...T> {
// varying types
then<U>(onFulfill: (values: [...T]) => U): Promise<U>;
spread<U>(onFulfill: (...values: [...T]) => U): Promise<U>;
}
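For reference, a plain runnable sketch with native promises (no proposed syntax) of the runtime behaviour the varying-types overloads are trying to describe:
const p1 = Promise.resolve(42);
const p2 = Promise.resolve("hello");

// Promise.all preserves each input's position, which is what the proposed
// all<...T> signature would capture: n would be a number and s a string.
Promise.all([p1, p2]).then(([n, s]) => {
    console.log(n + 1, s.toUpperCase());
});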
Do we have a solution for the all<...T>(promises: [...PromiseLike<T>]): Promise<...T>; above?
@DerFlatulator
See my big comment in PromiseConstructor. I also corrected your Promise interface to be a little closer to my proposal.
interface PromiseConstructor {
new <T>(callback: (
resolve:
(thenableOrResult?: T | PromiseLike<T>) => void,
reject: (error: any) => void
) => void): Promise<T, [T]>;
new <...T>(callback: (
resolve:
(thenableOrResult?: [...T] | PromiseLike<[...T]>) => void,
reject: (error: any) => void
) => void): Promise<[...T], ...T>;
// all same type
all<T>(promises: PromiseLike<T>[]): Promise<T[], ...T[]>;
join<T>(...promises: PromiseLike<T>[]): Promise<T[], ...T[]>;
// varying types
all<...T>(promises: [...PromiseLike<T>]): Promise<[...T], ...T>;
join<...T>(...promises: [...PromiseLike<T>]): Promise<[...T], ...T>;
// all<...T>(promises: [...PromiseLike<T>]): Promise<[...T], ...T> should
// expand to this:
//
// all<T1, T2, /* ... */>(promises: [
// PromiseLike<T1>,
// PromiseLike<T2>,
// /* ... */
// ]): Promise<[T1, T2, /* ... */], T1, T2, /* ... */>;
//
// This should hold for all rest parameters, potentially expanding
// exponentially like ...Promise<[Set<T>], ...Thenable<T>> which should
// expand to something like this:
//
// Promise<[Set<T1>], Thenable<T1>, Thenable<T2> /* ... */>,
// Promise<[Set<T2>], Thenable<T1>, Thenable<T2> /* ... */>,
// // etc...
}
interface Promise<T, ...U> {
// all same type
then<V>(onFulfill: (values: T) => V): Promise<[V], V>;
spread<V>(onFulfill: (...values: T) => V): Promise<[V], V>;
// all same type, returns tuple
then<...V>(onFulfill: (values: T) => [...V]): Promise<[...V], ...V>;
spread<...V>(onFulfill: (...values: T) => [...V]): Promise<[...V], ...V>;
// varying types
then<V>(onFulfill: (values: [...U]) => V): Promise<[V], V>;
spread<V>(onFulfill: (...values: [...U]) => V): Promise<[V], V>;
// varying types, returns tuple
then<...V>(onFulfill: (values: [...U]) => [...V]): Promise<[V], ...V>;
spread<...V>(onFulfill: (...values: [...U]) => [...V]): Promise<[V], ...V>;
}
If [...Foo<T>] expands to [Foo<T1>, Foo<T2>, /*... Foo<TN>*/], then is [...Foo<T,U>] a syntax error or a combinatorial expansion?
@DerFlatulator
1. If exactly one of T or U is a rest parameter, it expands normally. Assuming T is the rest parameter, it would be [Foo<T1, U>, Foo<T2, U>, /*... Foo<TN, U>*/].
2. If both are rest parameters, and their lengths can be correctly inferred, it should be a combinatorial expansion (well... length of T times length of U).
3. If neither is a rest parameter, it's a syntax error.
Note that I strongly oppose more than 2 rest parameters for practical reasons, and that the rest parameters, if they need to be split up, should only be split on a per-method basis. Something like this:
interface Function<R, T, ...A> {
// Split it up for just this method, since it's being resolved relative to the
// method itself.
bind[...A = ...X, ...Y](
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
}
(If someone can come up with a better syntax, I'm all ears. I don't like it, but I can't come up with anything that doesn't visually conflict.)
@isiahmeadows
With 2., in what order would the expansion be?
[
Foo<T1, U1>, Foo<T2, U1>, /*... */ Foo<TN,U1>,
Foo<T1, U2>, Foo<T2, U2>, /*... */ Foo<TN,U2>,
/* ... */
Foo<T1, UN>, Foo<T2, UN>, /*... */ Foo<TN,UN>
]
Or conversely:
[
Foo<T1, U1>, Foo<T1, U2>, /*... */ Foo<T1,UN>,
Foo<T2, U1>, Foo<T2, U2>, /*... */ Foo<T2,UN>,
/* ... */
Foo<TN, U1>, Foo<TN, U2>, /*... */ Foo<TN,UN>
]
Won't this ambiguity cause confusion? Perhaps limiting it to one dimension would be wise.
Just an alternative suggestion for split syntax:
interface Function<R, T, ...A> {
bind<[...X, ...Y] = [...A]>(
this: (this: T, ...args: [...X, ...Y]) => R,
thisObject: T,
...args: [...X]
): (this: any, ...rest: [...Y]) => R
}
@DerFlatulator
I'd expect the second. And I doubt it would cause too much confusion; as long as it's consistently that order, people would quickly get used to it. It's also an unusual edge case that would only really be run into in practice by people who know what they're doing, or people who should question the need in the first place.
I'm also looking at it as you're expanding the first, then the second for each part of the first. Like this pseudocode:
for (let TT of T) {
for (let UU of U) {
expand(TT, UU);
}
}