microsoft/TypeScript

Trade-offs in Control Flow Analysis

RyanCavanaugh opened this issue · 101 comments

Some rough notes from a conversation @ahejlsberg and I had earlier about trade-offs in the control flow analysis work, based on running the real-world code (RWC) tests. For obvious reasons I'll be looking at what Flow does with similar examples to compare and contrast possible outcomes.

The primary question is: When a function is invoked, what should we assume its side effects are?

One option is to be pessimistic and reset all narrowings, assuming that any function might mutate any object it could possibly get its hands on. Another option is to be optimistic and assume the function doesn't modify any state. Both of these seem to be bad.

This problem spans both locals (which might be subject to some "closed over or not" analysis) and object fields.

Optimistic: Bad behavior on locals

The TypeScript compiler has code like this:

enum Token { Alpha, Beta, Gamma }
let token = Token.Alpha;
function nextToken() {
    token = Token.Beta;
}
function maybeNextToken() {
    if (... something ...) {
        nextToken();
    }
}

function doSomething() {
    if (token !== Token.Alpha) {
        maybeNextToken();
    }
    // is this possible?
    if (token === Token.Alpha) {
        // something happens
    }
}

Optimistically assuming token isn't modified by maybeNextToken incorrectly flags token === Token.Alpha as an impossibility. However, in other cases, this is a good check to do! See later examples.

Optimistic: Bad behavior on fields

The RWC suite picked up a "bug" that looked like this:

// Function somewhere else
declare function tryDoSomething(x: string, result: { success: boolean; value: number; }): void;

function myFunc(x: string) {
    let result = { success: false, value: 0 };

    tryDoSomething(x, result);
    if (result.success === true) { // %%
        return result.value;
    }

    tryDoSomething(x.trim(), result);
    if (result.success === true) { // ??
        return result.value;
    }
    return -1;
}

The ?? line here is not a bug in the user code, but we thought it was, because after the %% block runs, the only remaining value in result.success's domain is false.

Pessimistic: Bad behavior on locals

We found actual bugs (several!) in partner code that looked like this:

enum Kind { Good, Bad, Ugly }
let kind: Kind = ...;
function f() {
    if (kind) {
        log('Doing some work');
        switch (kind) {
            case Kind.Good:
                // unreachable!
        }
    }
}

Here, we detected the bug that Kind.Good (which has the falsy value 0) is not in the domain of kind at the point of the case label. However, if we were fully pessimistic, we couldn't know that the global function log doesn't modify the global variable kind, thus incorrectly allowing this broken code.

Pessimistic: Bad behavior on fields, example 1

A question on the flowtype tag on Stack Overflow is a good example of this.

A smaller example that demonstrates the behavior:

function fn(arg: { x: string | null }) {
    if (arg.x !== null) {
        alert('All is OK!');
        // Flow: Not OK, arg.x could be null
        console.log(arg.x.substr(3));
    }
}

The problem here is that, pessimistically, something like this might be happening:

let a = { x: 'ok' };
function alert() {
    a.x = null;
}
fn(a);

Pessimistic: Bad behavior on fields, example 2

The TS compiler has code that looks like this (simplified):

function visitChildren(node: Node, visit: (node: Node) => void) {
    switch(node.kind) {
        case SyntaxKind.BinaryExpression:
            visit(node.left);
            visit(node.right); // Unsafe?
            break;
        case SyntaxKind.PropertyAccessExpression:
            visit(node.expr);
            visit(node.name); // Unsafe?
            break;
    }
}

Here, we discriminated the Node union type by its kind. A pessimistic behavior would say that the second invocations are unsafe, because the call to visit may have mutated node.kind through a secondary reference and invalidated the discrimination.

Mitigating with (shallow) inlining / analysis

Flow does some assignment analysis to improve the quality of these errors, but it's obviously short of a full inlining solution, which wouldn't be even remotely practical. Some examples of how to defeat the analysis:

// Non-null assignment can still trigger null warnings
function fn(x: string | null) {
    function check1() {
        x = 'still OK';
    }

    if (x !== null) {
        check1();
        // Flow: Error, x could be null
        console.log(x.substr(0));
    }
}
// Inlining is only one level deep
function fn(x: string | null) {
    function check1() {
        check2();
    }
    function check2() {
        x = null;
    }

    if (x !== null) {
        check1();
        // Flow: No error
        console.log(x.substr(0)); // crashes
    }
}

Mitigating with const parameters

A low-hanging piece of fruit is to allow a const modifier on parameters. This would allow a much faster fix for code that looks like this:

function fn(const x: string | number) {
  if (typeof x === 'string') {
    thisFunctionCannotMutateX();
    x.substr(0); // ok
  }
}

Mitigating with readonly fields

The visitChildren example above might be mitigated by saying that readonly fields retain their narrowing effects even in the presence of intervening function calls. This is technically unsound as you may have both a readonly and non-readonly alias to the same property, but in practice this is probably very rare.
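The aliasing hazard mentioned above can be made concrete. The sketch below (hypothetical names, not from the thread) shows a single object reachable through both a readonly alias and a mutable alias, so a checker that retained narrowing on readonly fields across calls would be wrong at runtime:

```typescript
// Why retaining narrowing on readonly fields is technically unsound:
// the same object can be reached through a readonly alias and a
// mutable alias.
interface ReadonlyView {
  readonly kind: "left" | "right";
}
interface MutableView {
  kind: "left" | "right";
}

const obj: MutableView = { kind: "left" };
const view: ReadonlyView = obj; // readonly alias of the same object

function flip(): void {
  obj.kind = "right"; // mutation through the writable alias
}

let observed: string | undefined;
if (view.kind === "left") {
  flip();
  // A checker that trusts `readonly` keeps view.kind narrowed to "left"
  // here, yet the runtime value is now "right".
  observed = view.kind;
}
console.log(observed); // "right"
```

In practice such double aliasing of the same property under different mutability is rare, which is why the mitigation is attractive despite the hole.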

Mitigating with other options

Random ideas that got thrown around (will add to this list) but are probably bad:

  • pure modifier on functions that says this function doesn't modify anything. This is a bit impractical as we'd realistically want this on the vast majority of all functions, and it doesn't really solve the problem since lots of functions only modify one thing so you'd really want to say "pure except for m"
  • volatile property modifier that says "this property will change without notice". We're not C++ and it's perhaps unclear where you'd apply this and where you wouldn't.

pure is a bit impractical

with persistent data structures it makes perfect practical sense, amortized time/memory consumption for a typical application would be comparable, it takes some discipline and support from the language (#6230, #2319, #1213), but it's worth the trouble

with readonly for immutables and pure for functions it will be blazing fast compared to inlining, because modifiers only need to be checked once for each function/interface

please no more half measures based on assumptions for a couple of typical cases and endless exceptions that only worsen soundness

Looking through the RWC tests, I see only one case where the additional narrowing performed by #9407 causes unwanted errors. However, there were scores of real bugs and inconsistencies being caught, such as checking for the same value twice, assuming that enum members with the value 0 pass truthiness checks, dead branches in if and switch statements, etc. In aggregate, I think our optimistic assumption that type guards are unaffected by intervening function calls is the best compromise.

BTW, the compiler itself relies on side effecting changes to a token variable in the parser. For example:

if (token === SyntaxKind.ExportKeyword) {
    nextToken();
    if (token === SyntaxKind.DefaultKeyword) {
        // We have "export default"
    }
    ...
}

This becomes an error with #9407 because the compiler continues to think token has the value SyntaxKind.ExportKeyword following the call to nextToken (and thus reports an error when token is compared to SyntaxKind.DefaultKeyword).

We will instead be using a function to obtain the current token:

if (token() === SyntaxKind.ExportKeyword) {
    nextToken();
    if (token() === SyntaxKind.DefaultKeyword) {
        // We have "export default"
    }
    ...
}

where the function is simply:

function token(): SyntaxKind {
    return currentToken;
}

Since all modern JavaScript VMs inline such simple functions there is no performance penalty.

I think this pattern of suppressing type narrowing by accessing mutable state using a function is a reasonable one.

is there any possibility of a const on an argument?

function fn(const x: string | number)

In swift/objc they added the "in", "out", and "inout" meta attributes to arguments so you could see if they meant for a value to be modified. i feel like this could be useful for annotating whether a function will modify a variable it received or not. Not sure if I'm missing something.

While pure is all the "rage" these days, I think many JavaScript developers don't understand what it means, and it could easily lead to surprises and/or large-scale frustration. Because of the inherent universal mutability of JavaScript, I think pure would generally be shackles that would potentially go unused because of the over-strictness of the concept... It just seems an "all or nothing" concept, when most of JavaScript is shades of grey.

I think an argument-by-argument modifier makes the most sense. Of course const and readonly only imply disallowing reassignment. Personally, I don't find it confusing to expand those to inform the control flow analysis that property mutations are "unsafe", as that would be a design-time-only construct.

Is there any benefit in considering a deeply immutable design-time keyword to avoid any confusion between const and readonly? Potentially even immutable (though that is jargony).

in my stubborn defense of pure:

  • it is easy to implement (at least in my naive view of it)
  • it is fast with almost no impact on current performance level (naive again)
  • it will do what it's been asked for
  • it will do it 100% correct no exceptions ever
  • it will cover 30% of cases out of the box without having to learn anything: [1, 2, 3].map(x => x + 1 /* <-- hi, i am pure function */)
  • and it won't hurt to have it in addition to what will work best for the mainstream JavaScript audience

i really wish it was considered

other options may work out too and should be considered, just not a big fan of half-baked solutions

I understand your argument, although I think what you would cover with "obviously" pure functions are not ones that suffer from CFA challenges. Your example of [1, 2, 3].map(x => x + 1) does not suffer from CFA function boundary issues, and while it is logically pure, there is no benefit in denoting this at design time.

I think your argument might have more merit if you could think of some examples where there are current CFA issues that would be solved by a pure notation that benefitted the developer. My argument isn't against the concept of writing pure functions, it is more of the argument that an "all or nothing" solution likely won't be practical.

One of the biggest current issues with CFA in my opinion is in Promise callbacks. There are many situations where those might not be pure. That is why I think an argument by argument mutability notation would solve significantly more use cases and do so in a more end-developer controllable way.

although you are right that the example doesn't target CFA issues, the answer is rather evident:

  • if a callback meets all constraints that make it pure, then it is safe to say that all assertions that were made outside of it are still valid inside of it (including all CFA reasonings deduced before) because it doesn't change anything

this is one merit of the pureness, among many others

let me say it again, if a promise callback is pure, all reasoning that was done outside of it's scope or time is still valid inside of it

you might wonder why i am so certain, so let me explain. there is no notion of time in the math that stands behind the idea of pureness: time doesn't exist there, it's absolutely static, everything has existed forever and forever will, and cannot ever change (because changes need time). it goes to the point where a pure function can be thrown away and replaced with its result (what's called referential transparency), which for any given arguments will always be the same (hence a question: why not just use a hash-table that maps arguments to results instead?). so you don't really need a function in the first place. armed with such super-static equivalence between calculated and hardcoded results, the only merit in using a function is to save some memory that would otherwise be taken by a huge static table mapping all possible arguments to all results; since math is absolute abstraction, we only care about this when it comes to programming

this is what a pure function is, this is why it will always work

i agree that it's rather 'all or nothing', but it is the price you have to pay for the peace of mind, because in its strictness it solves the CFA problems and many others simply by giving them no place to exist

think about it, a "pure" (aka persistent) data structure, say, a linked list, can be deeply copied just by... being assigned:

const one = toList();
const absolutelyAnotherListThatWeCanDoWhateverWeWant = one;

how cool is that?

in my humble opinion "pure" functions and data structures can easily cover 40% of typical everyday needs without even feeling restricted

as for the rest 60% (since we don't appreciate any sort of extremes) we can use good old mutable impure JavaScript

but mind those 40% of no problems ever! that's a darn bargain, just let it happen and enjoy it
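The "deep copy by assignment" claim above can be sketched with a tiny persistent list (illustrative names, not a concrete proposal). Because every node is readonly, no code path can mutate the shared structure, so assignment really is as good as a deep copy:

```typescript
// A minimal persistent ("pure") singly linked list.
type List<T> = { readonly head: T; readonly tail: List<T> } | null;

function cons<T>(head: T, tail: List<T>): List<T> {
  return { head, tail };
}

function toArray<T>(list: List<T>): T[] {
  const out: T[] = [];
  for (let n = list; n !== null; n = n.tail) {
    out.push(n.head);
  }
  return out;
}

const one = cons(1, cons(2, cons(3, null)));
// A full "copy" by assignment: both aliases are forever [1, 2, 3].
const absolutelyAnotherList = one;
// "Updates" allocate only the new node and share the rest with `one`.
const extended = cons(0, one);

console.log(toArray(absolutelyAnotherList)); // [1, 2, 3]
console.log(toArray(extended)); // [0, 1, 2, 3]
```

This is also where the amortized memory argument comes from: `extended` shares all three existing nodes with `one` instead of copying them.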

totally agree with @Aleksey-Bykov arguments in favor of having "pure" introduced into the type checker.

Just my .02, I think "pure" would be too abstract and too confusing to understand.

I'm actually in favor of apple style semantics, "in", "out", "inout"

Also I think const could lead to confusion since they would be similar in usage but different concepts.

It might be easier for beginners to pick up and understand but the intent simply wouldn't be as clear.

@RyanCavanaugh The compiler should already be able to answer the question in the example of Optimistic: Bad behavior on locals with yes. If doSomething is called when token is Token.Alpha, the condition token !== Token.Alpha of the first if statement is false. Hence, the body is skipped and nextToken is never called. token is Token.Alpha still holds when the second if statement is reached. Ergo the answer is yes.

We can make this question interesting again by changing doSomething like this:

function doSomething() {
    if (token !== Token.Alpha) {
        maybeNextToken();
        // is this possible?
        if (token === Token.Alpha) {
        // something happens
        }
    }
}

Now it is only possible if maybeNextToken changes token to Token.Alpha. If you are optimistic you would answer the question with no, since you assume that functions have no side effects like changing variables. However, if you change token in nextToken to Token.Alpha instead of Token.Beta

function nextToken() {
    token = Token.Alpha;
}

it would be possible if and only if ... something ... is true in maybeNextToken. If you are optimistic you would still answer the question with no even though the correct answer is yes, maybe.

Infer constraints about functions

How about tracking variable changes of functions?

The basic idea is that we know a constraint before a function call and that we can reason about a constraint after a function call.

Note: I write a newly introduced term in bold when it is mentioned for the first time. Jump back to that position to see the definition of the term.

if statements

This idea is quite similar to control flow based type analysis. So let's take a look at the if statement in the example of control flow based type analysis first (before we discover constraints introduced by functions):

function foo(x: string | number | boolean) {
    if (typeof x === "string") {
        x; // type of x is string here
        x = 1;
        x; // type of x is number here
    }
    x; // type of x is number | boolean here
}

I rewrite the constraint annotations like this

function foo(x: string | number | boolean) {
    // x :: string | number | boolean
    if (typeof x === "string") {
        // x :: string
        x = 1;
        // x :: number
    }
    // x :: number | boolean
}

As usual, a single colon like in x: string | number | boolean means variable x has type string | number | boolean.

A double colon like in x :: string | number | boolean describes a value-type-binding, which means that the current value of x is of type string | number | boolean. The type of the value may be any subtype of the type of the variable. For example, from the fact that the check typeof x === "string" is true, you can conclude that we have x :: string (the current value of x is a string).

The constraints of a simple if statement can be described in more general as

// cBefore
if (condition) {
    // c1
    ...
    // c2
}
// cAfter

Here constraint is abbreviated as c (cBefore stands for constraint before). I name our constraints in the example above like this:

function foo(x: string | number | boolean) {
  // cBefore := (x :: string | number | boolean)
  if (typeof x === "string") {
    // c1 := (x :: string)
    x = 1;
    // c2 := (x :: number)
  }
  // cAfter := (x :: number | boolean)
}

I write c := e to assign a constraint expression e to the constraint called c.

In the example, x = 1; is a side effect that causes a change on the constraint we have before. Before x = 1; we have c1 := (x :: string) and afterwards we have c2 := (x :: number).

I write c with c' for the constraint where all appearances of value-type-bindings x :: T of c are overridden with value-type-bindings x :: T' of c'. In this example c1 with c2 is (x :: string) with (x :: number), which results in (x :: number), since the old value-type-binding (x :: string) in c1 is overridden by (x :: number) of c2.

I describe a side effect s as a constraint transition c -> c'. A constraint transition takes a constraint c (that holds before side effect s) and gives you a new constraint c' (that holds after side effect s). Given a constraint transition t: c -> e where e is a constraint expression, I use the function-like notation t(cp) to express the constraint c' that is equal to e where all appearances of c are replaced by cp.

Using this notation, the example from above looks like this:

// c1 := (x :: string)
x = 1; // t1: c -> c with (x :: number)
// c2 := t1(c1)
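This notation can be made concrete with a small executable model (all names illustrative, not part of any proposal): a constraint maps variable names to the set of types their current value may have, `with` overrides bindings, and a transition is a function on constraints.

```typescript
type Constraint = ReadonlyMap<string, ReadonlySet<string>>;
type Transition = (c: Constraint) => Constraint;

// c with cPrime: bindings of cPrime override those of c.
function withC(c: Constraint, cPrime: Constraint): Constraint {
  const result = new Map<string, ReadonlySet<string>>();
  c.forEach((types, v) => result.set(v, types));
  cPrime.forEach((types, v) => result.set(v, types));
  return result;
}

// A single value-type-binding, e.g. bind("x", "string") for (x :: string).
function bind(v: string, ...types: string[]): Constraint {
  return new Map([[v, new Set(types)]]);
}

// c1 := (x :: string)
const c1 = bind("x", "string");
// t1: c -> c with (x :: number), the effect of `x = 1;`
const t1: Transition = (c) => withC(c, bind("x", "number"));
// c2 := t1(c1) = (x :: number)
const c2 = t1(c1);

console.log(Array.from(c2.get("x")!)); // ["number"]
```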

You can describe reasoning about constraints of a simple if statement in more general like this

// cBefore
if (condition) {
    // c1 := cBefore & condition
    ... // t1
    // c2 := t1(c1)
    //     = t1(cBefore & condition)
}
// cAfter := t1(cBefore & condition) | (cBefore & !condition)

I write c & condition to describe the case that condition is true. Otherwise, I write c & !condition to describe the case that condition is false. In both cases the constraint c had been given before the check. From the check of condition you can often reason about a new constraint c' that holds. If condition being true implies that c' holds, you can rewrite c & condition as c with c'. Similarly, if condition being false implies that c' holds, you can rewrite c & !condition as c with c'. If no constraint follows, you just write c instead.

In the example, condition is typeof x === "string". If condition is fulfilled, the body is executed. Thereby, c1 is cBefore & condition. From the fulfilled condition you know typeof x === "string" holds. From that, you can infer the constraint x :: string. As a result, cBefore & condition equals cBefore with (x :: string). Since cBefore is x :: string | number | boolean you get (x :: string | number | boolean) with (x :: string), which reduces to x :: string.

After the if statement you know that

  • either condition had been fulfilled and the body has been executed, causing side effects described by t1
    • which would result in t1(cBefore & condition) and can be reduced to x :: number (cBefore & condition reduces to x :: string like described above, and t1 overrides that binding with x :: number)
  • or that condition had been unfulfilled
    • which would result in cBefore & !condition
      • where !condition is !(typeof x === "string") which is typeof x !== "string"
      • so you can infer (x :: string | number | boolean) with !(x :: string) which can be reduced to x :: number | boolean

From that you can conclude that cAfter is t1(cBefore & condition) | (cBefore & !condition), which is equal to (x :: number) | (x :: number | boolean) and can be reduced to x :: number | boolean.

Constraints introduced by functions

Now we have warmed up, we take a look at constraints that are introduced by functions.

(User-Defined) Type Guards

A type guard is a function whose return type is a type predicate:

function isFish(pet: Fish | Bird): pet is Fish {
    return (<Fish>pet).swim !== undefined;
}

Those type predicates are constraints that are introduced by a function (here isFish) and are already part of TypeScript:

let pet: Fish | Bird

if (isFish(pet)) {
    pet.swim();
}
else {
    pet.fly();
}

Let's add constraints as comments:

let pet: Fish | Bird
// cBefore := (pet :: Fish | Bird)
if (isFish(pet)) {
    // c1 := cBefore & isFish(pet)
    //     = (pet :: Fish | Bird) & (pet :: Fish)
    //     = (pet :: Fish)
    pet.swim();
}
else {
    // c2 := cBefore & !isFish(pet)
    //     = (pet :: Fish | Bird) & !(pet :: Fish)
    //     = (pet :: Bird)
    pet.fly();
}

Before the if statement you know that pet is Fish | Bird. If isFish(pet) is fulfilled, the type predicate pet is Fish of isFish holds for the value of our local variable pet. That means that it adds a constraint pet :: Fish to c1. Thus, it's ok to call pet.swim();. In the else branch you know that the type predicate pet is Fish is unfulfilled and you can add !(pet :: Fish) to c2, which results in pet :: Bird.

Functions in general

In a similar way to the type predicates of type guards, I introduce constraints that can be automatically inferred from expressions and statements. I will add constraints to the functions one after another in my modified version (see my comment above) of the example Optimistic: Bad behavior on locals (see first post) to answer the question is this possible? in the comment in the function doSomething:

enum Token { Alpha, Beta, Gamma }
let token = Token.Alpha;
function nextToken() {
    token = Token.Alpha;
}
function maybeNextToken() {
    if (condition) {
        nextToken();
    }
}

function doSomething() {
    if (token !== Token.Alpha) {
        maybeNextToken();
        // is this possible?
        if (token === Token.Alpha) {
          // something happens
        }
    }
}

nextToken

function nextToken() {
    token = Token.Alpha;
}

nextToken assigns Token.Alpha to the shared local variable token. Like the assignment x = 1; in the if statement example, token = Token.Alpha is a side effect that can be described as a constraint transition:

function nextToken() {
    // cBefore
    token = Token.Alpha; // t1: c -> c with (token :: Token.Alpha)
    // cAfter := t1(cBefore)
    //         = cBefore with (token :: Token.Alpha)
}

Here cBefore describes the constraints that we have before calling nextToken and cAfter describes the constraints that we have after calling nextToken. Like an assignment, a function can be described as a constraint transition. Since nextToken is doing just the same as the assignment token = Token.Alpha; it describes the same constraint transition as t1.

maybeNextToken

Let's have a look at the caller maybeNextToken:

function maybeNextToken() {
    if (condition) {
        nextToken();
    }
}

First you add the constraint transitions (here t1 for nextToken) and afterwards the constraints of the if statement:

function maybeNextToken() {
    // cBefore
    if (condition) {
        // cBefore & condition
        nextToken(); // t1: c -> c with (token :: Token.Alpha)
        // t1(cBefore & condition)
    }
    // cAfter := t1(cBefore & condition) | (cBefore & !condition)
}

Since condition is not specified in the original example, we don't know its result. I assume that it is an expression without any side effect. To relax this assumption even further, I consider condition to be a shared variable of type boolean. Thereby, you can derive condition :: true if condition is fulfilled and condition :: false otherwise. Therefore, cAfter is

((cBefore with (condition :: true)) with (token :: Token.Alpha)) | (cBefore with (condition :: false))

With that simplification, maybeNextToken is described by the constraint transition cBefore -> cAfter.

doSomething

function doSomething() {
    if (token !== Token.Alpha) {
        maybeNextToken();
        // is this possible?
        if (token === Token.Alpha) {
          // something happens
        }
    }
}

Like in maybeNextToken you first add constraint transitions and afterwards the constraints of the if statement:

function doSomething() {
    // cBefore
    if (token !== Token.Alpha) {
        // c1
        maybeNextToken(); // t1: c -> ((c with (condition :: true)) with (token :: Token.Alpha)) | (c with (condition :: false))
        // c2
        // is this possible?
        if (token === Token.Alpha) {
          // c3
          // something happens
          // c4
        }
        // c5
    }
    // cAfter
}

Our goal is to answer the question if it is possible that token === Token.Alpha is fulfilled in the condition of the second if statement. This is only possible if c2 contains a constraint token :: T where T is a type that contains Token.Alpha. Hence, you only look at c1 and c2:

  • c1 is cBefore & token !== Token.Alpha. Since token is not Token.Alpha it only can be Token.Beta or Token.Gamma. This results in cBefore with (token :: Token.Beta | Token.Gamma).

  • c2 is derived by replacing c in t1 with c1:

    c2 := ((c with (condition :: true)) with (token :: Token.Alpha))
          | (c with (condition :: false))
        = (((cBefore with (token :: Token.Beta | Token.Gamma)) with (condition :: true)) with (token :: Token.Alpha))
          | ((cBefore with (token :: Token.Beta | Token.Gamma)) with (condition :: false))

    Since the type of the value of token in cBefore is overridden by cBefore with (token :: ...) operations, you can leave cBefore out:

        = (((token :: Token.Beta | Token.Gamma) with (condition :: true)) with (token :: Token.Alpha))
          | ((token :: Token.Beta | Token.Gamma) with (condition :: false))

    I'm only interested in token to answer the question. Hence, you can remove all with operations of other variables (here condition):

        = ((token :: Token.Beta | Token.Gamma) with (token :: Token.Alpha))
          | (token :: Token.Beta | Token.Gamma)
        = (token :: Token.Alpha)
          | (token :: Token.Beta | Token.Gamma)
        = token :: Token.Alpha | Token.Beta | Token.Gamma
        = token :: Token

From c2 = token :: Token you conclude that the answer of the question is:
Yes, token might be Token.Alpha in the condition of the second/inner if statement.

@maiermic your post was pretty complex, so I may have misunderstood it, but it looks like you are describing what control flow analysis already does, plus extending it to 'look into' the functions being called within guarded blocks as well. If so, how is this different to the inlining approach mentioned by @RyanCavanaugh in the OP?

@yortus That's right, I describe what control flow analysis already does to introduce the notation I use afterwards to explain my idea. However, I'd like to clarify that I don't 'look into' a function every time it is called (if that wasn't clear so far). Instead I infer the constraint transition of a function when I look at its definition. Therefore, I have to look into the function, but I only have to do this once, since I can carry the constraint transition in the type of the function. When I look at the function call, I can use the constraint transition that I inferred before to calculate the constraint that holds after the function call from the constraint that holds before the function call.

Mitigating with (shallow) inlining / analysis

I don't know how (shallow) inlining/analysis works and @RyanCavanaugh doesn't explain it. Nevertheless, he shows two examples.

Example 1

// Non-null assignment can still trigger null warnings
function fn(x: string | null) {
    function check1() {
        x = 'still OK';
    }

    if (x !== null) {
        check1();
        // Flow: Error, x could be null
        console.log(x.substr(0));
    }
}

Flow claims that x could be null after the call of check1. Flow doesn't look into check1. Otherwise, it would know that x has type string and can't be null.

In my approach, the constraint transition of check1 is inferred as t: c -> (x :: string). Since we checked that x !== null is true before, and the type of the variable x is string | null, we have x :: string as the constraint that holds before the call of check1. Hence, we pass x :: string to t to get the constraint that holds after the call of check1. t(x :: string) results in x :: string. As a consequence, x can't be null in console.log(x.substr(0));.

Note: You can even infer x :: 'still OK' in check1 in that case. Thereby, you could even determine the type of the return value of x.substr(0), which is 'still OK'. However, this would require further knowledge about the built-in method substr. I didn't cover such an approach in my previous post.

Example 2

// Inlining is only one level deep
function fn(x: string | null) {
    function check1() {
        check2();
    }
    function check2() {
        x = null;
    }

    if (x !== null) {
        check1();
        // Flow: No error
        console.log(x.substr(0)); // crashes
    }
}

To come straight to the point, the constraint transition

  • of check2 is t2: c -> (x :: null) and
  • of check1 is t1 = t2

Before the call of check1 we have x :: string. Afterwards we have t1(x :: string), which results in x :: null. Thus, my approach detects the error and Flow doesn't.

Quite long to fully read and understand, but this seems very interesting. It makes me think of a kind of propositional logic solver like the Prolog language. Correct me if I'm wrong.
Not sure it would be easy to implement such a thing, but this could be an evolution target for TypeScript's type inference and control flow analysis.

@maiermic hmm yes interesting. I suppose two issues would be:

a) performance. @RyanCavanaugh mentions a "full inlining solution [...] wouldn't be even remotely practical", but what about this deep constraint analysis?

b) what about calls to third-party functions (e.g. calls into platform, frameworks, or node dependencies) where we have no function bodies to look into (only .d.ts declarations)?

might be related #8545 (comment)

@Aleksey-Bykov #8545 (comment) looks like a similar approach for manually annotating constraints, even though you only refer to a specific case (object initializers)

@yortus good questions

a) performance. @RyanCavanaugh mentions a "full inlining solution [...] wouldn't be even remotely practical", but what about this deep constraint analysis?

@RyanCavanaugh Could you please explain the performance issues of a full inlining solution?

Without thinking too much about it, I guess deep constraint analysis might be doable in linear or linearithmic time complexity. Gathered constraints are serializable and can be cached so that you can speed up analysis time. Manual constraint annotations can further improve analysis time.

b) what about calls to third-party functions (e.g. calls into platform, frameworks, or node dependencies) where we have no function bodies to look into (only .d.ts declarations)?

If you don't know the code, you can't do flow analysis. Without .d.ts declarations you couldn't even do any type analysis. You either include constraint annotations in .d.ts files or you have to fall back to another approach. You might choose an optimistic approach if no constraint annotations exist, since you can add constraint annotations to .d.ts if a function has a relevant side effect.

Initially I thought that the current unsound strategy would work, but in the previous months 1) narrowing started working in more locations (with CFA), 2) narrowing started working on more types (enums, numbers), and 3) narrowing started working with more type guards. This has exposed several issues with the current strategy.

A solution that has been discussed was that the compiler should track where a variable might be modified through a function call. Since JavaScript is a higher-order language (functions can be passed as arguments, stored in variables and so on), it's hard/impossible to track where a function might be called. A reassignment of a variable can also happen without a function call, for instance by a getter or setter function, or by arithmetic that calls .valueOf() or .indexOf(). For a language that is not higher-order, the analysis can be done easily, since a function call will directly refer to the declaration of a function. Using the same analysis in a higher-order language will still be unsound; this will probably cause issues when an assignment happens in .forEach(() => { ... }) for instance.

My suggestion is basically that a variable may only be narrowed if its assignments can be tracked completely, that is, if all reassignments (assignments other than the initializer) are inside the control-flow graph of the current function. That means a variable declared with const, or one that has no reassignments, can be narrowed in any function, while a variable with reassignments in different functions can never be narrowed. The latter is restrictive; a user would have to copy the value of such a variable into a const variable to narrow it.

To reduce this restriction, the control-flow graph of the function could be extended to an inter-procedural control-flow graph. The analysis stays sound, but becomes more accurate. The graph can relatively easily be extended at call expressions that directly refer to a function declaration. Assignments in functions that are not used in higher-order constructs can then be tracked too. It would be interesting to extend this to higher-order functions as well, but that would be more complex: it would be necessary to annotate function definitions with the control flow of their arguments in some way.

I'm not sure whether an abstraction over the control flow graph is needed. The graph could become a lot bigger with this inter-procedural analysis. However, the compiler already uses a compressed control flow graph, which basically contains only branch nodes and assignments. It might help to track which variables are referenced by a function to skip the whole function if the analyzed variable is not referenced there.

Thoughts? Let me know if something is not clear.

How is this code possible with strictNullChecks on? I want to use strict null checks, but this control flow checking makes it impossible to write this kind of code efficiently:

let myValue: string
[1, 2, 3].forEach(x => {
  if (x == 2) {
    myValue = 'found it'
  }
})
console.log(myValue) // error: myValue used before assignment

Since the callback is not guaranteed to be executed TS sees that not all code paths assign the value. The same would be true when using a for of loop, because the array is not guaranteed to be non-empty. Even though I as a developer can guarantee for sure that [1, 2, 3] is not empty and forEach will be called, there is no way for me to tell that TS, no /* ts ignore no-use-before-assignment */ or something like that. I have to disable strict null checks completely.

@felixfbecker

Either initialize it with a sentinel:

let myValue: string = ''
// Dangerous if myValue is never actually set to a real value

Or, if you're worried about the sentinel never getting overwritten, with undefined. This more closely mimics the underlying JS but requires ! assertion on use:

let myValue: string | undefined = undefined;
...
console.log(myValue!);
// Still dangerous if myValue is never actually set to a real value, but less so since undefined is likely to fail earlier / harder

Or, have a check to narrow the type:

let myValue: string | undefined = undefined;
...
if (myValue === undefined) {
    throw new Error();
}
console.log(myValue); // Guaranteed safety

@RyanCavanaugh Following up here, from #11393, with a suggestion.

It would be nice to have an indication of whether the optimistic heuristic was used when reporting an error. Something like Operator '===' cannot be applied to types 'false' (likely) and 'true'.

This will allow the developer to quickly figure out if the error is certain, or if it may just be the consequence of an optimistic assumption made by the compiler. It may also reduce the flow of non-bugs reported here πŸ˜„.

optimistic heuristic

Just wanted to point out that TypeScript has no concept of this at the moment 🌹

@basarat s/heuristic/assumption/

What I mean is the following:

function foo(arg: any) { arg.val = true; }

var arg = { val: true };
arg.val = false;
foo(arg);
if (arg.val === true) console.log('test is true!');

Error: Operator '===' cannot be applied to types 'false' (likely) and 'true'

Without foo(arg) call:

Error: Operator '===' cannot be applied to types 'false' and 'true'

Would it be possible to have an affects ... return type, in the same way as we have ... is ...? Any functions containing assignments or invocations of functions returning affects ... would implicitly have affects ... intersected with the annotated return type. This could be used to do transitive flow analysis as we currently have for never, and avoid some problems with excessively aggressive or permissive narrowing due to lack of information at function call boundaries.

Went through the thread and realized what I was proposing is a naive subset of what has already been fleshed out in @maiermic's amazing treatment.

I think it is worth opening a separate issue to discuss the merits of that approach and possibly get a feel for it by reimplementing the existing type guards feature in this fashion. A systematic approach to flow analysis would make reasoning about more advanced features much easier. E.g. if this was implemented, pure could be implemented internally as cAfter := cBefore. Even if this doesn't end up affecting the implementation in any way, it seems like a useful convention for discussing flow analysis in the compiler.

Didn't find similar case, could you check this out?

function render(): string | null {
    const state: "a" | "b" | "c" = "a"
    if (state === "a") {
        return "a"
    } else if (state === "b") {
        return "b"
    } else if (state === "c") {
        return "c"
    }
}

This code fails to compile on TypeScript 2.1 with the strictNullChecks option enabled. Is it covered by some trade-off in control flow analysis, or could it be considered a bug?

Here's a case that should be possible to figure out:

function f(a: number[] | undefined) {
    if (a === undefined) {
        return [];
    }

    // Works
    a = a.map(x => x + 1);
    return a.map(x =>
        // Error: Object is possibly 'undefined'.
        x + a.length);
}

The closure only exists in a scope where a exists. There's no way to access it in the if block.

@Strate That function will return undefined, not null.

@andy-ms Your example above works as long as there are no assignments to a anywhere in the function. See #10357.
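Concretely, a sketch of that workaround (the const name `b` is my own choice): copying `a` into a const after the undefined check gives the closure a binding with no reassignments, so its narrowing survives inside the arrow function:

```typescript
function f(a: number[] | undefined): number[] {
    if (a === undefined) {
        return [];
    }
    // `b` is never reassigned, so the compiler keeps its narrowed
    // type (number[]) inside the closure below.
    const b = a;
    return b.map(x => x + b.length);
}
```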

evmar commented

This below simple snippet demonstrates the problem again. (It fails in the if statements with "Operator '==' cannot be applied to types 'Y.A' and 'Y.B'.")

Without proposing any changes to the language or heuristics here, what's an idiomatic way to make this code compile? Right now it's rejected by the compiler. Should we be inserting type coercions on each use of x to force it back to type Y?

enum Y {
  A,
  B
}

let x: Y = Y.A;

x = Y.A;
changeX();
if (x == Y.B) { ... }
if (x == Y.B) { ... }

function changeX() {
  x = Y.B;
}

You could write const getX = () => x; and use getX() in place of x.
(You could also change the design to something like x = changeX().)
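Spelled out against the snippet above, the getter workaround might look like this (a sketch; reading through a function call sidesteps the stale narrowing, because the call result always has the declared type Y):

```typescript
enum Y { A, B }

let x: Y = Y.A;

function changeX(): void {
    x = Y.B;
}

// Reading `x` through a function call defeats the narrowing-to-Y.A
// that a direct read of `x` would carry.
const getX = (): Y => x;

changeX();
const isB = getX() === Y.B; // accepted by the compiler, true at runtime
```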

bgnx commented

Are there any options to disable the current optimistic behavior, or to disable CFA completely?

FWIW, I just stumbled over exactly the same problem, also in a tokenizer/parser, where a next_token() method is called which sets a class member to another token, and the calling code doesn't detect that there's a new token and tells me my comparison is impossible (because it thinks the token type must be LeftBracket):

(screenshot of the error, 2018-03-20)

Look, I get that there is a debate on how much CFA typescript should do. But the CFA errors are cryptic.

I just spent 20 minutes debugging this:

run() {
    this.state = State.RUNNING;
    while (this.state === State.RUNNING) {
        const nextOp = this.code[this.pc];
        if (nextOp === undefined) {
            break;
        }
        nextOp.operation(this);
    }

    if (this.state === State.EXCEPTION) {
        throw new Error(this.exceptionReason);
    }
}

Which results in

Operator '===' cannot be applied to types 'State.RUNNING' and 'State.EXCEPTION'.

Presumably because CFA thinks that this.state must always === state.RUNNING. But nextOp.operation(this) can change this.state which CFA cannot deduce at compile time.

Of course, if you remove the break, all of a sudden the code works! πŸ˜–, because the CFA is no longer over-eager about determining the value of this.state. (IMO this is a bug, but it seems that there is a discussion about this instead)...

Either way, I believe most programmers would be scratching their heads wondering what is going on. It would be helpful if the error message said something to the effect of: Error: CFA predicts that this path is not possible, try refactoring your code.

I think it might be a little bit easier to reason about this sort of thing if the treatment of effects (side effects like local mutation) and "coeffects" (requirements that certain effects have occurred) was formally specified. Then it would be clearer which weird behaviors are down to PEBKAC, which ones are due to bugs in the type checker implementation, and which ones are bugs in the specification.

Hi there! Sorry, but I lost the overview what this issue currently tracks and if there is a discussion to fix some flow related issues?

I came here to ask this question, because I saw two unrelated people stumbled over a destructing issue related to union types and they couldn't figure out the problem. I know why this happens, but is there some discussion to maybe fix this behaviour to make some code more ergonomic?

type AorB = { type: 'a'; value: string } | { type: 'b'; value: number };

function fun({ type, value }: AorB) {
  switch (type) {
    case 'a':
      return value.toLowerCase(); // throws: TS thinks it could be `number`
    case 'b':
      return value;
  }
}
Bnaya commented

@donaldpipowitch To solve this kind of issue, there would need to be some kind of dependent type expression that control flow analysis & destructuring create on the fly.
I've tried to make one explicitly with conditional mapped types, but it's not working:

function fun<T extends 'a' | 'b'>(type: T, value: T extends 'a' ? string : number) {
  switch (type) {
    case 'a':
      return value.toLowerCase(); // throws: TS thinks it could be `number`
    case 'b':
        return value.toExponential();
  }
}

function alsoNotLikeThat<T extends 'a' | 'b'>(type: T, value: typeof type extends 'a' ? string : number) {
  switch (type) {
    case 'a':
      return value.toLowerCase(); // throws: TS thinks it could be `number`
    case 'b':
        return value.toExponential();
  }
}

It would be nice if my code above had worked.

I see there's few examples with synchronous side effects in functions here, but lots of linked issues around how this causes big problems with async/await, and no direct discussion of that that I can see, so it's worth highlighting. Here's an example of some async code where this issue causes problems:

async function test(p: Promise<any>) {
    let x: 'a' | 'b' = 'a';

    setTimeout(() => {
        x = 'b';
    }, 500);

    await p;

    // x is inferred as 'a' here, so this isn't allowed, but it could be 'b'
    if (x === 'b') {
    }
}

(Playground link)

Note that the inference is wrong here despite explicit types of x. This issue makes it very awkward to write a function using await to wait for a side effect, which I think is a fairly common case.

Meanwhile in the equivalent promise-based code the types work totally fine:

function test(p: Promise<any>) {
    let x: 'a' | 'b' = 'a';

    setTimeout(() => {
        x = 'b';
    }, 500);

    p.then(() => {
		// x is 'a' | 'b' here, correctly
        if (x === 'b') {
        }
    });
}

I just experienced the exact problem that @pimterry explained in (#9998 (comment)) and was a bit confused as to why it complained that this wasn't allowed.

+1 for #9998 (comment), this is really stupid, await is definitely not a pure statement.

jhnns commented

I just wanted to document another workaround for this problem. Instead of using a getter function as described by @ahejlsberg, it's also possible to use a type assertion:

// instead of
// let value: 1 | 2 = 1;
let value = 1 as 1 | 2;

function changeValue() {
    value = 2;
}

changeValue();

value; // `value` is correctly typed as 1 | 2

The reason I don't use TypeScript in certain projects where we are not forced to use TypeScript is due to this issue.

interface Something {
    maybe?: () => void;
}

function run(isTrue: boolean): Something {
    const object: Something = {};
    if (isTrue) {
        object.maybe = (): void => {
            console.log('maybe');
        };
    }
    return object;
}

run(true).maybe();

The code above is deterministic. There will always be one outcome. TypeScript should be able to follow this. I understand it will however need to follow this object specifically and every other object for that matter through their entire life in the code. Which would put a lot of stress on the compile time checking.

I don't understand why I am allowed to set a method as optional and use it in this way.

Personally I don't see the advantage of using TypeScript when this is the type of code I wish to write, hence why I use plain JavaScript.

Regarding how to implement this, a simple flag in the tsconfig could enable this kind of control flow analysis.

Sorry for bumping this, but this issue is becoming quite old while there seem to be no plans on the TS roadmap to do something about it.

The current situation, where await does not invalidate type narrowing, is quite severe, as it makes it easy to trigger runtime errors when many async functions deal with the same object, as described in #9998 (comment)

Note that while invalidating each type after await feels "hard to use" for end-users, the issue is easily solved by what Flow already proposes: if you need to work with some value of the object which might be invalidated, simply save the reference and work with it:

declare let test:
  | { type: 'ONE'; value?: number }
  | { type: 'TWO'; value: number }

const run = async (): Promise<void> => {
  if (test.type !== 'TWO') {
    throw null
  }

  const savedValue = test.value

  await delay(1000)

  test.value // should invalidate to be `number | undefined`
  savedValue // should be type narrowed `number`
}

@basickarl You need to use generics for your case:

interface Something {
	maybe?: () => void;
}

function run<T extends boolean>(isTrue: T):
	T extends true
		? Something & Required<Pick<Something, "maybe">>
		: Something {
	const object: Something = {};
	if (isTrue) {
		object.maybe = () => {
			console.log('maybe');
		};
	}

	// @ts-ignore
	return object;
}

run(true).maybe();

// @ts-expect-error
run(false).maybe();

@ExE-Boss Thank you SO much! I posted a question on Stack Overflow but it was slaughtered. I will update it with this code of yours; hopefully it will help future people!

@basickarl Would you mind posting a link to your Stack Overflow question here too?

Not sure if it's related to this issue; anyway, this is what happens with my code:

let resolver;
new Promise(resolve => (resolver = resolve));
console.log(resolver); // error: Variable 'resolver' is used before being assigned.ts(2454)
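One way to quiet TS2454 in this situation, assuming you know the Promise executor runs synchronously (which it does, per the Promise constructor's contract), is a definite assignment assertion. A minimal sketch:

```typescript
// The `!` modifier asserts that `resolver` is assigned before use.
// This is safe here because the Promise executor runs synchronously
// during the `new Promise(...)` call.
let resolver!: (value: unknown) => void;
new Promise(resolve => (resolver = resolve));
console.log(typeof resolver); // "function"
resolver(42);
```

Note the assertion shifts responsibility to you: if the executor were deferred, `resolver` would be undefined at the call site with no compiler warning.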

I'll post an issue here which I think is related to this topic. I don't really understand why the compiler is not able to infer arg: c in my last function:

(tell me if i should open a new issue instead)

type t = number[] | c;

class c {
  constructor(public data: number[]) {}
  m(n: number): void { console.log(n); }
}

function f(arg: t): void {
  if (!(arg instanceof c)) {
    arg = new c(arg);
  }
  arg.m(0); // expected behavior: arg is inferred as c
}

function g(arg: t): void {
  if (!(arg instanceof c)) {
    arg = new c(arg);
  }
  (() => {
    arg.m(0); // expected behavior: arg is inferred as c
  })();
}

function h(arg: t): void {
  if (!(arg instanceof c)) {
    arg = new c(arg);
  }
  for (let n = 0; n < 3; n++) {
    arg.m(n); // expected behavior: arg is inferred as c
  }
}

function i(arg: t): void {
  if (!(arg instanceof c)) {
    arg = new c(arg);
  }
  [0, 1, 2].forEach(n => {
    arg.m(n); // actual behavior: arg is not inferred as c !
  });
}

@basickarl Hey, you can use overloading:

interface Something {
    maybe?: () => void;
}

function run(isTrue: true): { maybe: () => void };
function run(isTrue: boolean): Something; // general overload, so non-literal booleans stay callable
function run(isTrue: boolean): Something {
    const object: Something = {};
    if (isTrue) {
        object.maybe = (): void => {
            console.log('maybe');
        };
    }
    return object;
}

run(true).maybe(); // no error here anymore

Since #40860 was closed automatically...

interface ValidationErrorMap {
    readonly [errorCode: string]: string | ValidationErrorMap;
}

type ValidationResult = ValidationErrorMap | "success";

function combine(results: ReadonlyArray<ValidationResult>): ValidationResult {
    let finalResult: ValidationResult = "success";

    for(const result of results) {
        finalResult = (result === "success") ? finalResult
            : (finalResult === "success") ? result
            : { ...finalResult, ...result }; // Spread types may only be created from object types.
    }

    return finalResult;
}

Intuitively, if the compiler is smart enough to recognize that finalResult is initialized with the specific value "success", it should be smart enough to see that the value changes inside the loop and can't be assumed.
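A workaround documented earlier in this thread also applies here: widening the initializer with an as assertion keeps the compiler from pinning finalResult to the literal "success", so the spread branch type-checks. A sketch:

```typescript
interface ValidationErrorMap {
    readonly [errorCode: string]: string | ValidationErrorMap;
}

type ValidationResult = ValidationErrorMap | "success";

function combine(results: ReadonlyArray<ValidationResult>): ValidationResult {
    // The assertion gives the initializer the full union type, so no
    // narrowing to the literal "success" sticks across the loop.
    let finalResult = "success" as ValidationResult;

    for (const result of results) {
        finalResult = (result === "success") ? finalResult
            : (finalResult === "success") ? result
            : { ...finalResult, ...result }; // both operands narrowed to ValidationErrorMap
    }

    return finalResult;
}
```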

Since #41045 was closed automatically...

type Thing = { data: any };

const things: Map<string, Thing> = new Map();

function add_thing (id: string, data: any) {
    let thing = things.get(id);

    if (typeof thing === 'undefined') {
        thing = { data };
        things.set(id, thing);
    }

    // Object is possibly 'undefined'
    return () => thing.data;
}

It is not possible for "thing" to be undefined based on the above code.
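A workaround that type-checks today (a sketch; the const name `t` is my own): copy the variable into a const once the undefined case is handled, so the closure captures a binding whose narrowing cannot be invalidated:

```typescript
type Thing = { data: any };

const things: Map<string, Thing> = new Map();

function add_thing(id: string, data: any): () => any {
    let thing = things.get(id);

    if (typeof thing === 'undefined') {
        thing = { data };
        things.set(id, thing);
    }

    // `t` is never reassigned, so its narrowed type (Thing) is
    // preserved inside the returned closure.
    const t = thing;
    return () => t.data;
}
```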

From #41113, I'll add another duplicate to the pile:

function doSomething(callback: () => void) {
  callback();
}

let result: 'foo' | 'bar' = 'bar';

doSomething(() => {
    result = 'foo';
})

// This should work, but fails because type is always "bar"
if (result === 'foo') {

}

You can fix it by casting to the appropriate type:

let result: 'foo' | 'bar' = 'bar' as 'foo' | 'bar';

This can also happen with promises - although it may be possible to detect the case where the type needs to be expanded
Consider the following:

enum State {
  Start = 0,
  Foo = 1,
  Middle = 2,
  End = 3
}
class TestClass {
  public state: State = State.Start;
  async go(): Promise<void> {
    this.state = State.End
    const promise = new Promise<void>(resolve => {
      // doSomething();
      this.state = State.Middle;
      resolve();
    });
    await promise;
    if (this.state === State.Middle) { // Compiler Error
      
    }
  }
}

const testClass = new TestClass()
console.log(testClass.state);
testClass.go();
console.log(testClass.state);

https://www.typescriptlang.org/play?ts=4.2.0-dev.20201103#code/KYOwrgtgBAygLgQzsKBvAUFWiBOcoC8UADADSZQBiA9tYVAIzlYCyAlgCYcA2KRATMygBREB3oBmdAF90AY24IAzkqgAVYErgBhRSrQUADmABG3NnKhakwAFzYb9eDYB0zvAG4KygJ4hLAObUABQAlPYACjjUEGxKwAA8AG7UnAB8BlhYcAAWcS7WyE6IyC6iHBRYctQgWlCG0bHx9CDAAO5QUTFxiSnpwTia1NxJfBkYWVkA9FNQHNQwMcC5bCABYV6T2XlKBSV8DqXsXLybW4NKw6MblVDSoWdYCG0IbPgN3fGPUGwAZlDBFa7Qp8AhEZxHTg8YChTJbW6yLCyWTyGp1ZBaXTKVREVodDSYvRKMKo2rDYAubjUdYYnREvY2B7oWlYlQuII3apk3iU6mAzR07EM5APIA

Simple case in test code:

const a = ["a"];
assert(a.length === 1);
a.push("b");
// error
assert(a.length === 2);

Playground

Is there any way to force a pessimistic compiler and assume that all specified types are possible, disabling automatic narrowing? This is quite frustrating when I have to use as Type statements to cast variables into the types I've already specified for them. It would also lead to very error-prone code when types change because I have to bypass the compiler to keep it from erroring out on simple loops and callbacks. I think a short-term solution (seeing that this has been open since 2016) would be to add a configuration option to let the user choose between the two methods while a more permanent solution is developed.

TypeScript infers different types for the same variable in the same scope.
value here has two different types: one is string and the other is string | string[].

function getValue(value: string | (()=>string)):void{}

function test(value: string | string[]): void{

 if(value instanceof Array){
   return
 }
 
  // commenting out this line resolve the issue, but I don't know why?
  // TS consider `value` here as string | string[], but it should be string only
  value = 'string'

  // `value` here is string
  getValue(value)

  // but here is string | string[]
  getValue(()=>value)
}

play

issue #44921

@eng-dibo If you read the issue description, it’s fully explained why that happens.

I'm trying to get the idea from the issue description, but I couldn't.
If you could explain what exactly causes this issue for my code snippet, it would be nice of you.
@fatcerberus

An out operator, applicable to both arguments and external identifiers, would be interesting.

For arguments:

// Operator `out` in `result`.
declare function tryDoSomething(x: string, out result: { success: boolean; value: number; }): void;

function myFunc(x: string) {
    let result = { success: false, value: 0 };

    tryDoSomething(x, result);
    if (result.success === true) { // %%
        return result.value;
    }

    tryDoSomething(x.trim(), result);
    if (result.success === true) { // Ok - `result` is `out`
        return result.value;
    }
    return -1;
}

For external identifiers:

let token = SyntaxKind.ExportKeyword

declare function  nextToken(): SyntaxKind out(token) // A comma-separated list - `out(a,b,c)`

if (token === SyntaxKind.ExportKeyword) {
    nextToken();
    if (token === SyntaxKind.DefaultKeyword) { // OK - `nextToken` affects `token`
        ...
    }
    ...
}

In addition, if a function calls another with side effects, it will need to propagate the same out statements.

For arguments:

declare function foo(out point: Point): void

function bar(point: Point) {
    foo(point) // Error - function `bar` needs to declare `point` as `out` too
}

For external identifiers:

let x = 0

declare function incX(): void out x

function foo() {
    incX() // Error - function `foo` needs to declare `out x` too.
}

Maybe, a & operator instead of out:

declare function foo(&x: number): void
declare function bar(): void &(x,y)

Or any other more elegant syntax with the same functionality.

An out operator, applicable to both arguments and external identifiers, would be interesting.

Initially, "out variables" could be entirely optional: changing the value of a variable without declaring it as out would be acceptable. The restriction would only apply when there are calls to functions that already contain out declarations (which would need to be propagated by the calling function). Later, there could be a strict option that disallows modifying external variables without declaring them as out in the function definition.

I find adding modifiers to specific members of interfaces excessive. It is also the function that must define whether a member of the interface provided as an argument may change, not the interface itself. Consequently, marking an argument of type point: { x, y } as out in a function makes the compiler understand that any member of point may have changed after the function returns.

BorrowScript is a Rust-inspired borrow checker with TypeScript-inspired syntax. It adds ownership operators, which look like a related approach to the suggested out operator (see the write operator, and this issue regarding a current typo in the write example of the readme). Keep in mind that BorrowScript is overall a different language that compiles to static binaries; for example, string is mutable. Hence, the example of the write operator looks like this

function writeFoo(write foo: string) {
  foo.push('bar')
  console.log(foo) // "foobar"
}

As far as I understand, this should work for objects, too

function changeState(write data: { state: 'stopped' | 'running' }) {
  // toggle state
  if (data.state === 'stopped') {
    data.state = 'running';
  } else {
    data.state = 'stopped';
  }
}

In the last section HTTP Server with State of the readme is a draft how closures might look like, but (syntax) is not final. See [copy counterRef] in this excerpt of the example:

const counterRef = new Mutex(0)
server.get('/', (req, res)[copy counterRef] => {
  let value = counterRef.lock()
  value.increment()
  res.send()
})

@RyanCavanaugh This could be applied to your example Optimistic: Bad behavior on locals of the issue description. I add [write token] to the signatures of nextToken and maybeNextToken

which results in

enum Token { Alpha, Beta, Gamma }
let token = Token.Alpha;
function nextToken()[write token] {
    token = Token.Beta;
}
function maybeNextToken()[write token] {
    if (... something ...) {
        nextToken();
    }
}

function doSomething() {
    if (token !== Token.Alpha) {
        maybeNextToken();
    }
    // is this possible? yes, since `maybeNextToken` with annotation `[write token]` may have been called 
    if (token === Token.Alpha) {
        // something happens
    }
}

where I answered the question in the comment in doSomething.

Note: I guess [write token] could be inferred from the body of the function (e.g. if an assignment token = ... exists). If it cannot be inferred, the proper default should be read.

This should also work for Optimistic: Bad behavior on fields, but write result has to be added explicitly to the declaration of tryDoSomething, since the body of the function is not known:

// Function somewhere else
declare function tryDoSomething(x: string, write result: { success: boolean; value: number; }): void;

function myFunc(x: string) {
    let result = { success: false, value: 0 };
    // result.success is known to be false
    tryDoSomething(x, result);
    // result.success may have changed, since result may have changed by call of tryDoSomething,
    // which has `write result`
    if (result.success === true) { // %%
        return result.value;
    }

    tryDoSomething(x.trim(), result);
    // result.success may have changed, since result may have changed by call of tryDoSomething,
    // which has `write result`
    if (result.success === true) { // ??
        return result.value;
    }
    return -1;
}

In Pessimistic: Bad behavior on locals, log needs no write annotation, i.e. no changes are required.

In Pessimistic: Bad behavior on fields, example 1, alert may have [write a], but it is (still) not known inside fn whether the passed arg is a.

function fn(arg: { x: string | null }) {
    if (arg.x !== null) {
        alert('All is OK!');
        // Flow: Not OK, arg.x could be null
        console.log(arg.x.substr(3));
    }
}

let a = { x: 'ok' };
function alert()[write a] {
    a.x = null;
}
fn(a);

You could assume pessimistically that arg.x might have changed, since a has a property x (as well as arg), which might be changed by alert, since it has write a. If the (names of) properties are different, you could assume that arg may not be the same object as a. However, I guess, it is better to be optimistic in general, i.e. do not assume that write a might change a different variable arg, even though that means that you would not detect the bug in this example.

In Pessimistic: Bad behavior on fields, example 2 I would argue in the same way for an optimistic approach, i.e. even if visit has write node, the expressions passed to visit are not the same variable as the parameter node of visitChildren. To highlight this, I renamed the parameter of visit (from node to child) to avoid ambiguity (with the parameter node of visitChildren), when referring to the variable name, i.e. visit has write child instead of write node, since node refers to visitChildren. Changed example:

function visitChildren(node: Node, visit: (child: Node) => void) {
    switch(node.kind) {
        case SyntaxKind.BinaryExpression:
            visit(node.left);
            visit(node.right); // Unsafe? No, since node.left is passed and not node
            break;
        case SyntaxKind.PropertyAccessExpression:
            visit(node.expr);
            visit(node.name); // Unsafe? No, since node.expr is passed and not node
            break;
    }
}

As in the previous example Pessimistic: Bad behavior on fields, example 1, you may not detect bugs in case, visit can change node anyway. For example, if you can access and change the parent node of node.left, i.e. node. You would not know that node.left.parent === node.

In Mitigating with (shallow) inlining / analysis I added write x to check1 and check2. The first example leads to a false positive, but the second works fine

// Non-null assignment can still trigger null warnings
function fn(x: string | null) {
    function check1()[write x] {
        x = 'still OK';
    }

    if (x !== null) {
        check1();
        // False positive: Error, x could be null, since `write x` does not tell what changed
        console.log(x.substr(0));
    }
}
function fn(x: string | null) {
    function check1()[write x] {
        check2();
    }
    function check2()[write x] {
        x = null;
    }

    if (x !== null) {
        check1();
        // error, since check1 has write x
        console.log(x.substr(0));
    }
}

I'd like an //@ts-no-type-narrowing override (or some way to mark up a variable as not subject to type narrowing), because it's annoying having to work around this issue (and it would be much clearer for my colleagues than //@ts-expect-error).

I have a callback function which is passed to a class, various methods on the class are awaited, any of which may call the callback function and that callback function changes a variable. Between each method I want to check the value of the variable but typescript is throwing a hissy fit because "enum with a long name" value 0 can never be equal to "enum with a long name" value 3, and the amount of bloat I have to add to check that same variable 3 times when typescript thinks it hasn't changed is making my code unreadable.

e.g.:

var myvariable: no_narrow MyReallyLongEnumName = MyReallyLongEnumName.Value0;

That way we could mark up variables passed in as function parameters and such. We'd still get typing information for the variable and type checking and autocomplete on our checks and assignments but we'd be able to override the compiler's type narrowing for an individual variable that we know is going to change.

This is just absurd. Can you not turn off ts(2339) for JavaScript or something? This flags millions of lines of JavaScript code as broken.

One more example where current behavior is problematic:

let init: Array<number> | false = false

const setInit = (...args: Array<number>) => {
    init = args
}

setInit(1, 2, 3)
// init is now = [1, 2, 3]
// but TypeScript still thinks it's `false`, so inside the 'if' refinement below it thinks it's of type `never`

if (init) setInit(...init)
//                   ^^^^ error, because `init` is of type: `never`

Playground link with relevant code

A variable that is a union of two or more types is being assumed to only be the initialized type and refined to it, even though the actual type might change as a side effect of another function that's executed in the same scope.
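A workaround sketch for the example above (the `getInit` helper is mine, not part of the original code): read the variable through a function, so the refinement operates on the declared union rather than the stale `false` narrowing:

```typescript
let init: Array<number> | false = false;

const setInit = (...args: Array<number>) => {
  init = args;
};

// Hypothetical helper: its return type is the declared union, so the
// stale `false` narrowing on `init` never applies to the result.
const getInit = (): Array<number> | false => init;

setInit(1, 2, 3);

const value = getInit();
if (value) setInit(...value); // ok: `value` is Array<number> here
```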

wbt commented

I am coming here via #37415 and #27900 due to discussion being aggregated here, but those others are more specific.
Here is an illustrative code sample where two structures the same in JavaScript are treated differently by TypeScript, which is quite confusing:

const examplePromiseConstructor = function(generallyAString ?: string) {
	return new Promise(function(resolve, reject) {
		//At the start of this function, the default value of the param is set,
		//but the logic for doing so is more complex than what can readily fit into
		//the function signature (e.g. depending on a complex combination of
		//other parameters omitted from this minimal example)
		//and may reference another function in the same object,
		//which prevents automatic inference of that object's type
		generallyAString = 'a string';
		console.log(generallyAString); //generallyAString is of type 'string'
		Promise.resolve()
		.then(function() {
			//generallyAString should still be of type 'string'
			console.log(generallyAString); //generallyAString is of type 'string | undefined'
			resolve(generallyAString)
		}).catch(function(err: unknown) {
			console.error(err);
			reject(err)
		});
	});
}
//From a JavaScript perspective, this is exactly the same as the above,
//but Typescript treats it differently and maintains the type narrowing
//through the resolution of the promise which demonstrably cannot affect
//the type of the value. 
const exampleAsync = async function(generallyAString ?: string) {
	generallyAString = 'a string';
	console.log(generallyAString);
	try {
		await Promise.resolve();
		console.log(generallyAString); //generallyAString is still of type 'string'
		return generallyAString;
	} catch(err: unknown) {
		console.error(err);
		throw err;
	};
}

The log lines are mostly there so you can hover over the variable name and see the type.
In the real example which sent me here, the async call is more complex than Promise.resolve() but doesn't take the local generallyAString as a parameter and doesn't affect its value or type.

#27900 makes the point that it's arguable which of these two is correct, though in this case where the variable can't be changed by that call, maintaining the narrowing makes much more sense. In either case, the inconsistency between how these two cases are handled is very confusing and seems like a bug.
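One workaround sketch for the promise-constructor form (names are illustrative): copy the narrowed value into a fresh const, which the callbacks can then use without losing the narrowing, since a const can never be reassigned:

```typescript
const examplePromiseConstructorWorkaround = function (generallyAString?: string) {
  return new Promise<string>(function (resolve, reject) {
    generallyAString = 'a string';
    // A const is never reassigned, so the `string` type survives
    // into the .then callback below.
    const definitelyAString: string = generallyAString;
    Promise.resolve()
      .then(function () {
        resolve(definitelyAString.toUpperCase()); // no `| undefined` here
      })
      .catch(reject);
  });
};
```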

@wbt
Both examples are not exactly the same.
IMO TS is exactly right here due to the async behaviour of promises. Proven by changing your example:

const examplePromiseConstructor = function(generallyAStringParam ?: string) {
	return new Promise(function(resolve, reject) {
        generallyAStringParam = 'not const';
		const generallyAString = 'a string';
		console.log(generallyAString); //generallyAString is of type 'string'
		Promise.resolve()
		.then(function() {
			console.log(generallyAString); //generallyAString is of type 'string'
			console.log(generallyAStringParam); //generallyAStringParam is of type 'string | undefined'
		});
        //generallyAString = undefined // not allowed
        generallyAStringParam = undefined // oh!
	});
}
examplePromiseConstructor()
wbt commented

@HolgerJeromin I don't agree with your comment. TypeScript can see the code as it actually is, which is NOT as in your hypothetical, and see that there is no change in the parameter type which could precede arrival at the 'then' block.

Hi, I was pointed to report this here, would it be part of this issue?

JS type checking: recognize globalThis.vars set in functions
In vscode Settings, enable JS/TS: Check JS

Create app.js

globalThis.uti = {}

function init() {
    globalThis.gigel = 1
}

function plugin(options) {

    console.log('ready on port', process.env.PORT)

    console.log(uti) // this works

    console.log(gigel) // Error: cannot find name

}

init()
plugin()

gigel should be recognized as a global variable, even though it was set in a function.
Thanks!

I didn't read everyone's responses as there are a lot, but why can't TypeScript look inside the function to see whether it changes the variable, and what the possible outcomes are given that?

I have a different suggestion to solve this problem. Instead of trying to get TypeScript to know correctly what the type is, give us a way to override the type in a given scope. To say: this variable is this type now. Here's what I'd imagine it looking like:

const chars = 'Hello // Cake\nPotato'.split('')
while (chars.length) {
    if (chars[0] === '/') {
        now chars[0] = string // Telling TypeScript that chars[0] is now to be seen as a string and not '/' within the given scope.
        do
            chars.shift()
        while (chars[0] !== '\n') // Currently throws an error saying there is no overlap between chars[0] and '\n'
        chars.shift()
    }
    console.log(chars.shift())
}
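A workaround that compiles today (a sketch; the `as string` assertion approximates what the proposed `now` statement would do) is to widen the element back with a type assertion at the comparison site:

```typescript
const out: string[] = [];
const chars = 'Hello // Cake\nPotato'.split('');
while (chars.length) {
  if (chars[0] === '/') {
    do {
      chars.shift();
    } while ((chars[0] as string) !== '\n'); // widen chars[0] back to string
    chars.shift(); // drop the '\n' itself
  }
  out.push(chars.shift()!);
}
console.log(out.join(''));
```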

Don't know if this is the correct place to address this, but it would be great if TypeScript could do some flow analysis for assignments. Recently I've been writing a lot of mocks of API calls for our tests. Let's say you have some resource which can have a lot of different statuses. There are a lot of shared properties, but also status-specific properties. And of course you can transition from one status to another.

Currently I write it like this:

type Example = { status: 'a' } | { status: 'b',  field: string };

// generated at some point
const example = { status: 'a' } as Example;

// later update data
example.status = 'b';
if (example.status === 'b')
    example.field = '123';

But it would be great if I could write it like this:

type Example = { status: 'a' } | { status: 'b',  field: string };

// generated at some point
const example = { status: 'a' } as Example;

// later update data
example.status = 'b';
example.field = '123'; // this currently errors

Furthermore it would be great if TypeScript would complain on this:

type Example = { status: 'a' } | { status: 'b',  field: string };

// generated at some point
const example = { status: 'a' } as Example;

// later update data
example.status = 'b';
// FORGOT TO SET "field"

Basically requiring me to set all mandatory fields immediately after setting status to 'b'.

@donaldpipowitch but what do you mean by "immediately"? What type does example have at the beginning of example.field = '123'?

This seems like a better fit for

let example: Example = {status: 'a'};
//later
example = {...example, status: 'b', field: '123'};

unless there's some reason it's vital that the object referenced by example not change.

Yeah, it's actually vital that the object reference doesn't change.
"immediately": Was just thinking about something similar to assigning properties in a constructor (only that there is no constructor and the assignment needs to happen directly after setting the status in the example).
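Until something like that exists, one sketch (the `transitionToB` helper is my own invention, not a real API) is to funnel the transition through a function whose signature demands all of the 'b'-specific fields, while still mutating the same object reference:

```typescript
type Example = { status: 'a' } | { status: 'b'; field: string };

// Hypothetical transition helper: callers must supply every field that
// status 'b' requires, and the object reference is preserved.
function transitionToB(example: Example, field: string): void {
  const mutable = example as { status: 'a' | 'b'; field?: string };
  mutable.status = 'b';
  mutable.field = field;
}

// generated at some point
const example = { status: 'a' } as Example;

// later update data; omitting `field` is now a compile error
transitionToB(example, '123');
```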

Hey!

The code below gives the error "This comparison appears to be unintentional because the types 'false' and 'true' have no overlap.".
Replacing "setIt(oo);" with "oo.cancel = true;" makes the error go away, which makes sense.

Is this related to this issue or new one?
If related, any suggestion except "ts-ignore"? Maybe some internal hinting decorator/flag, comment that will mark setIt as mutating passed value?

function setIt(o: { cancel: boolean }): void {
    o.cancel = true;
}

function doIt(): void {
    const oo = { cancel: false };
    if (oo.cancel === true) {
        console.log('1');
        return;
    }
    setIt(oo);
    if (oo.cancel === true) {
        console.log('2');
        return;
    }
    console.log('3');
}

doIt();
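A workaround sketch that avoids both ts-ignore and restructuring (the `isCancelled` helper is mine; names suffixed with 2 only to avoid clashing with the code above): read the flag through a function, since the call expression has the declared `boolean` type and is never narrowed:

```typescript
function setIt2(o: { cancel: boolean }): void {
  o.cancel = true;
}

// Hypothetical helper: the return type is plain `boolean`, so the
// earlier `false` narrowing on `oo.cancel` doesn't apply to it.
const isCancelled = (o: { cancel: boolean }): boolean => o.cancel;

function doIt2(): string {
  const oo = { cancel: false };
  if (isCancelled(oo)) return '1';
  setIt2(oo);
  if (isCancelled(oo)) return '2'; // no "no overlap" error here
  return '3';
}

console.log(doIt2());
```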
Woodz commented

Just encountered a bug in control flow for foreach vs for loop:

let x: string | number | null = null;

for (let i = 0; i < 1; i++) {
  x = 'string';
}
[1].forEach(i => {
  x = 1;
});

x;

Expected: x should be inferred as type string | number | null
Actual: x is inferred as type string | null (it seems to ignore the forEach control flow)

Reproduced on TS v5.0.3

https://www.typescriptlang.org/play?#code/DYUwLgBAHgXBDOYBOBLAdgcwgHwmgrgLYBGISOe+wwEAvJdQNwBQzAZgPbkAUokKdCAAZGEAQB4IARlEoA1HICUEAN7MI0QQHJEqTFpYBfZgG0pAXQB0nJAFEAhgGMAFtwG0AfKvWb6M5oaKLMxQLEA

I just had an issue the other way round with async and await (adding for better searchability). Happy to create a separate issue if needed.

Typescript did not detect possible type change after an async function call.
The code after await could be called many minutes later (same as code inside a .then Promise method) so the detected type information should be reset.

Playground link

class Foo{
  static prop: string | number = '';
}
// working
Foo.prop = '';
Promise.resolve()
.then(()=>{
  Foo.prop;
    //^?
    // detected as string | number
});
  
// not working
async function context() {  
  Foo.prop = '';
  async function foo (){
    Foo.prop = 1;
  }
  await foo()
  Foo.prop;
    //^?
   // detected as string only but should be string | number
}

Re: "Mitigating with const parameters". That would simplify CFA and therefore speed up type checking, and of course help users. Seems essential.

In case it's not been covered, here's an example that doesn't deal with objects whatsoever, so there's no concern with mutation, only reassignment: https://tsplay.dev/m3K1qm Note that it works if i change the var to const, but I'd expect TS to be able to determine that the var isn't actually reassigned anywhere and pretend it's a const.

FYI The behavior seems to be inconsistent depending on how the function is defined

Here's a simple playground
https://tsplay.dev/w2gbVN

const element = new Node();

if (element instanceof HTMLElement) {
  function one() {
    element;
    // ^? Node
  }
  const two = function () {
    element;
    // ^? HTMLElement
  };
  
  const three = () => {
    element;
    // ^? HTMLElement
  };
}

Ideally it would be preferable if it kept the type narrowing here, since we're narrowing on a const variable, not on a let or some mutable field - but even if that's not going to be implemented due to complications, I think it would be better to at least keep it consistent.

FYI The behavior seems to be inconsistent depending on how the function is defined

Your example looks like correct behaviour according to JS hoisting rule - the inner function gets pulled up to the top of the parent function scope.

A work around for this type check:

let x: 1 | boolean = false

setTimeout(() => { x=1 })

await waitUntilXChanged(...)

// wrong 
if (x === 1) { ... }

const xIsNumber = (res: typeof x): res is 1 => x === 1 
// passed
if (xIsNumber(x)) { ... }

@borisovg

FYI The behavior seems to be inconsistent depending on how the function is defined

Your example looks like correct behaviour according to JS hoisting rule - the inner function gets pulled up to the top of the parent function scope.

This playground behavior seems to indicate that one is not hoisted to outside the if scope:

class Bob { name(){ return "bob"}};
class Alice { name() { return "alice"}};
const element = Math.random() > 0.5 ? new Alice() : new Bob();

one(); // error - Cannot find name 'one'.(2304), also JS runtime error "one is not defined"


if (element instanceof Bob) {
  one();
  function one() {
    console.log(element.name())
    element;
    // ^? Bob | Alice
  }
  const two = function () {
    element;
    // ^? Bob
  };
  
  const three = () => {
    element;
    // ^? Bob
  };
}

Therefore, in the above example, because element is const, element inside the if will always be instanceof Bob.

@craigphicks You are right that I don't actually fully understand how hoisting of function definitions works! πŸ˜‚ There is definitely some hoisting going on in non-strict mode but I don't know how that works with TS:

if (1 > 0) {
  function foo() {
    console.log("FOO");
  }
}

foo()
// prints 'FOO' unless 'use strict' is set

@borisovg it's hoisted if that branch gets executed.
So in the example before I would assume it is safe to consider element to be Bob - since it would be impossible to call this function without element being Bob

In your example replace 1 > 0 with 1 < 0 and you will get an error even w/o 'use strict'

@HugeLetters Looking at your original example it is interesting that if you change const element to let element all 3 inner functions will type it as Node despite the guard. There is definitely a TS dimension being added here.

Yup, they all change since with let there's no guarantee it won't be reassigned w/o some overly complicated checks - and practically impossible to guarantee when you introduce async

I just ran into a problem with this for the first time despite using TypeScript for years.

Here's a simplified example to demonstrate. Essentially, my class has a property. That property can be changed by a method, but TypeScript doesn't think it can.

class Foo {
  state: 'online' | 'offline' = 'online'

  foo() {
    if (this.state !== 'online') return
    this.bar()
    if (this.state === 'offline') return // This line gives an error because TypeScript thinks the value must be `'online'`.
    console.log('still online')
  }

  bar() {
    this.state = 'offline'
  }
}

https://www.typescriptlang.org/play/?ts=5.4.5#code/MYGwhgzhAEBiD29oG8BQ1oQC5iwUwC5oByeAOxAEsy9joAfE+AM2aproF4mLrbV00ZogAUAShSCMlZtBFYAFpQgA6bLjzQAhJ26leHCQCc8WAK5GyU6IuUqARmCPjrMubdXr80XXpZs+YmNTCzJoAHpw6AAVJRh2TQBzSgA3PBgwMLwjI3gjaHs8YDAzCE1ogE8ABzwAZWAjSiqsGyUyAGsYRU0UsBAzTQBbUpbC6AADfQTicZVrYHIIeBA8FRB4RJFibEoQEGhyabFBAF8BDEdnCTQMDA81HG8-VmnT1DOgA

The error occurs because of L8, if (this.state !== 'online') return; after that line, the type of this.state must be 'online'.

@dhlolo The error occurs on line 7.

This comparison appears to be unintentional because the types '"online"' and '"offline"' have no overlap.(2367)`

Line 5 contains if (this.state !== 'online') return, not line 8.

While TypeScript is correct that this.state must be 'online' immediately after line 5, line 6 (this.bar()) changes that, so TypeScript should not be so confident that this.state is still 'online' after line 6.

Note that it would be a different story if state was a variable instead of a property, since this.bar() could not change it. However, it's a property of a class, which can be changed by methods in between checks.

It seems tsc will not detect the effect of the method 'bar', which is to change this.state. It can only infer from obvious, shallow syntax. If you replace 'this.bar()' with this.state = 'offline', the error disappears. By the way, you can use asserts to avoid the error: https://www.typescriptlang.org/play/?ts=5.4.5#code/MYGwhgzhAEBiD29oG8BQ1oQC5iwUwC5oByeAOxAEsy9joAfE+AM2aproF4mLrbV00ZogAUAShSCMlZtBFYAFpQgA6bLjzQAhJ26leHCQCc8WAK5GyU6IuUqARmCPiA3NZlzbq9fmi69LGx8xBLQJuaW1sDkEPAgeCog8ADmIsTYlCAg0OTstGKCAL6Cjs5iRJAQeEZYMF7QyiiYOPhEpKx5dIWSGBheai2aAR3BbhjFxUA

@dhlolo The problem isn't that TypeScript doesn't detect that bar() changes this.state. I would never expect it to do that. The problem is that TypeScript assumes that this.state can't have been changed at all. This means that you are writing code with the false assumption that this.state must be 'online' and potentially using it in places where 'offline' would be invalid.

class Foo {
  state: 'online' | 'offline' = 'online'

  foo() {
    if (this.state !== 'online') return
    this.bar()
    // @ts-ignore: TypeScript raises an error because it assumes `this.state` must be `'online'`.
    if (this.state === 'offline') {
      console.log('offline');
    }

    // qux only accepts `'online'` or `'not-online'`, but `this.state` could be `'offline'`.
    // TypeScript doesn't raise an error because it assumes `this.state` must be `'online'`.
    this.qux(this.state);
  }

  bar() {
    this.state = 'offline'
  }

  qux(input: 'online' | 'not-online') {
    if(input !== 'online' && input !== 'not-online') {
      // This will throw because `input` is actually `'offline'`.
      throw new Error(`Expected 'online' or 'not-online'. Got '${input}'`);
    }
  }
}

Honestly, you are right. It seems to be too optimistic in your case, where a class method may set a class property. But the other choices would be assuming that any method could change any property (which seems too pessimistic), or analysing deeper (which may make things much more complicated).

@MartinJohns You seem to dislike this whole exchange. Care to chime in as to why?

@jordanbtucker Because it's just a repetition of the comments before and the linked issues. Your case is literally the first example on this issue. To me it's spam. And by responding to your question I'm unfortunately doing the same, adding another comment that will make everyone subscribed get a notification, without any real content or change.

@MartinJohns I didn't see many, if any, examples in this discussion that specifically dealt with simple class properties and methods, so I thought I was adding to the conversation. Maybe I missed it, but it just seemed like it hadn't been discussed specifically.

Is this isn't the right place to have discussions about code flow analysis, then maybe it should just be locked.

@jordanbtucker - I'm glad you brought up volatility in the context of class properties and methods, because ...

It's not uncommon to see member functions named XXXMutating() or XXXNonMutating() or similar, especially in a garbage-collected language like JS. That's because there may be multiple agents referencing the same object that depend upon that object not changing. If an agent wants to mutate an object being referenced by other agents, then it should clone the object first. In that scenario, it would be helpful to have member functions typed as "self-mutating" or "not self-mutating", and have that enforced for the actions within the member functions. Typing could help with always cloning when necessary, and never cloning when not necessary.

If member functions could be so marked, and bar were marked as "self-mutating", then it could be (with proper design) a simple O(1) operation to reset this to its widened state. (Useful, even if it isn't the only reason for such marking.)

I encountered this use case, I believe it belongs here.

What happens: I'm trying to make a type guard based on the shape of children props, but the type is not inferred correctly.
This happens on 5.4.5, so it is not part of the #57465 fix,
but the same workaround can be applied (x is Type).

type ChildItem = {id: number} | string

type Item = {
  children: ChildItem[] // can be objects or strings here
}

type NarrowedItem = {
  children: {id: number}[] // only objects here
}

const allItems: Item[] = [] // initially accept with any children

const filteredItems: NarrowedItem[] = allItems.filter(item => // and here only accept items
  item.children.every(child => typeof child !== 'string') // if all its children are objects
) // it should be correct in runtime, but typescript thinks it is not narrowed

const filteredItemsWorkaround: NarrowedItem[] = allItems.filter((item): item is NarrowedItem => // same code, but with "item is NarrowedItem"
  item.children.every(child => typeof child !== 'string')
) // it works here

Error on line 13 (filteredItems declaration):

Type 'Item[]' is not assignable to type 'NarrowedItem[]'.
  Type 'Item' is not assignable to type 'NarrowedItem'.
    Types of property 'children' are incompatible.
      Type 'ChildItem[]' is not assignable to type '{ id: number; }[]'.
        Type 'ChildItem' is not assignable to type '{ id: number; }'.
          Type 'string' is not assignable to type '{ id: number; }'.

Playground: https://www.typescriptlang.org/play/?#code/C4TwDgpgBAwgFgSwDYBMCSwIFsoF4oDeCKAXFAHYCuWARhAE4C+UAPlAM7D0LkDmAsACghoSFAzY8hIVCgBjRKnoRyZeMnSYsAbQC6UAPQH5AQ3JQ6UAPY0AVhDnB21+hy49ezuAwhDGQkXBoADkTenorAHcITUl8Ahl5RRRlVUJiMipaBkY9Q2MrciQQazsHJyhvZT8AwTlCzigTJCQJLHYyNrz8PKMoHgRgBGbiprk5CDBgKEjBuCbyEoUNVNr68kaAM2RMZVj2slDwqJiu-XwRtvYAOm2kXYAKQbiAPnyFlEqfayKSk3HJtNnu1EsDrsslCprhAAG4MEAPCGfXBvUQQKybJIaKAAQlw+AA5JxuHwCQBKd4ITEjfoVJGpJrKUr2RzsIQUvqDDhwKyUVAWaD1cLlfrmeiUchDLAQAA0Fko0zR7Dk3CmUGAiHIAGtnFyEM5yFZpuQwhFoig1g1pnddqctOwAOpWehasK88ikKBHM127DdJotK63HYMB5PLRksjA-rOb0nfZ4N59dgmaXyKwoWXy6azDVQABE0f1XtN8ba+dBWnByVS0Lh9ARSMT6qCGKx-LxhOJHnJ7MpOedOq+yiAA