golang/go

spec: less error-prone loop variable scoping

rsc opened this issue Β· 137 comments

rsc commented

We propose to change for loop variables declared with := from one-instance-per-loop to one-instance-per-iteration. This change would apply only to packages in modules that explicitly declare a new enough Go version in the go.mod file, so all existing code is unaffected. Changing the loop semantics would prevent unintended sharing in per-iteration closures and goroutines (see this entry in the Go FAQ for one explanation). We expect this change to fix many subtly broken for loops in new code, as well as in old code as it is updated to the newer Go version. There was an earlier pre-proposal discussion of this idea at #56010.

For example, consider a loop like the one in this test:

func TestAllEven(t *testing.T) {
	testCases := []int{0, 2, 4, 6}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}

This test aims to check that all the test cases are even, but it doesn't check them all. The problem is that t.Parallel stops the closure and lets the loop continue, and then it runs all the closures in parallel when the loop is over and TestAllEven has returned. By the time the if statement in the closure executes, the loop is done, and v has its final iteration value, 6. All four subtests now continue executing in parallel, and they all check that 6 is even, instead of checking each of the test cases.

Most Go developers are familiar with this mistake and know the answer: add v := v to the loop body:

func TestAllEven(t *testing.T) {
	testCases := []int{0, 2, 4, 6}
	for _, v := range testCases {
		v := v // MAKE ITERATION VALUE SHARING BUG GO AWAY
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}

Changing the loop semantics would in essence insert this kind of v := v statement for every for loop variable declared with :=. It would fix this loop and many others to do what the author clearly intends.

A subtler variation is when the code says testCases := []int{1, 2, 4, 6} (note the non-even test case 1):

func TestAllEvenBuggy(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}

TestAllEvenBuggy passes today, because the test is only checking 6, four times. Changing the loop semantics would still fix the loop to do what the author clearly intended, but it would break the test, because the test is passing incorrectly. So there is the potential to cause problems for users.

Because of this potential for problems, the new loop semantics would only apply in Go modules that have opted in to the release with the new loops. If that was Go 1.22, then only packages in a module with a go.mod that says go 1.22 would get the new loop semantics. Packages in other modules, including packages in dependencies, would get the old semantics. This would guarantee that all existing Go code keeps working the same as it does today, even when compiling with a new toolchain. People only need to debug loop changes when they opt in to the new semantics in go.mod. This approach is in keeping with our backwards and forwards compatibility work, #56986 and #57001, specifically the principle that toolchain upgrades preserve the behavior of old code, and compatibility is based on the go line.
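
Concretely, assuming the change shipped in Go 1.22, a module would opt in through the go directive in its go.mod (the module path here is a placeholder):

```
module example.com/mymodule

go 1.22
```

A dependency whose own go.mod says go 1.21 or earlier would keep the old per-loop semantics even when built by the same toolchain.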

If this proposal is accepted, users will need additional tooling support for a successful transition. That would come primarily in two forms: a compiler mode that reports affected loops, and a debugging tool that identifies the specific loops whose changed compilation is responsible for causing a test failure. There is a demo of the tooling support at the end of this comment.

The tooling support is important and necessary, but we expect it to be needed only rarely. Analysis and conversion of Google's own Go source code found that only about 1 package test per 8,000 started failing due to the new semantics, and the bug was essentially always in the test itself, like in TestAllEvenBuggy. In contrast, the new loopclosure vet analysis that shipped in Go 1.20 flagged definite bugs in 1 test per 400; the rest of those flagged tests stayed passing, with their bugs fixed by the new semantics. That is, in Google's code base, about 5% of the tests that contain this kind of sharing mistake were like TestAllEvenBuggy, exposed as buggy by the new semantics. The other 95% of the tests with this kind of mistake still passed once they started testing what they intended to. Of all the test failures, only one was caused by a loop variable semantic change in non-test code. That code was very low-level and could not tolerate a new allocation in the loop. The design document has details. This evidence suggests that changing the semantics is usually a no-op, and when it is not, it fixes buggy code far more often than it breaks correct code.

Based on the preliminary discussion and the further work summarized here, @dr2chase and I propose that we make the change in an appropriate future Go version, perhaps Go 1.22 if the stars align, and otherwise a later version.

More details can be found in the design document.

Update, May 10 2023: Note that the change will not break the common idiom of modifying the loop variable in the body of a 3-clause for loop. See this comment for details and links before commenting about 3-clause for loops. Thanks.


Tooling Demonstration

To demonstrate the tooling we would provide to support a successful transition, here is a complete example of a test that passes today but fails with the new loop semantics:

% cat x_test.go
package x

import "testing"

func Test(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Log(v)
		})
	}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}
%

Building the program with -d=loopvar=2 reports all the affected loops:

% GOEXPERIMENT=loopvar go test -gcflags=all=-d=loopvar=2 x_test.go
# runtime
go/src/runtime/proc.go:1815:7: loop variable freem now per-iteration, stack-allocated
go/src/runtime/proc.go:3149:7: loop variable enum now per-iteration, stack-allocated
go/src/runtime/mgcmark.go:810:6: loop variable d now per-iteration, stack-allocated
go/src/runtime/traceback.go:623:7: loop variable iu now per-iteration, stack-allocated
go/src/runtime/traceback.go:943:7: loop variable iu now per-iteration, stack-allocated
# runtime/pprof
go/src/runtime/pprof/pprof.go:386:9: loop variable r now per-iteration, heap-allocated
go/src/runtime/pprof/proto.go:363:6: loop variable e now per-iteration, stack-allocated
go/src/runtime/pprof/proto.go:612:9: loop variable frame now per-iteration, heap-allocated
go/src/runtime/pprof/protomem.go:29:9: loop variable r now per-iteration, heap-allocated
# command-line-arguments [command-line-arguments.test]
./x_test.go:7:9: loop variable v now per-iteration, stack-allocated
./x_test.go:15:9: loop variable v now per-iteration, stack-allocated
./x_test.go:20:9: loop variable v now per-iteration, stack-allocated
# internal/fuzz
go/src/internal/fuzz/fuzz.go:432:9: loop variable e now per-iteration, stack-allocated
--- FAIL: Test (0.00s)
    --- FAIL: Test/sub (0.00s)
        x_test.go:11: odd v 1
    --- FAIL: Test/sub#08 (0.00s)
        x_test.go:24: odd v 1
FAIL
FAIL	command-line-arguments	0.199s
FAIL
%

Note that some loops are in the Go standard library. Those are unlikely to be the cause, so we can limit the diagnostics to the current package by dropping all=:

% GOEXPERIMENT=loopvar go test -gcflags=-d=loopvar=2 x_test.go
# command-line-arguments [command-line-arguments.test]
./x_test.go:7:9: loop variable v now per-iteration, stack-allocated
./x_test.go:15:9: loop variable v now per-iteration, stack-allocated
./x_test.go:20:9: loop variable v now per-iteration, stack-allocated
--- FAIL: Test (0.00s)
    --- FAIL: Test/sub (0.00s)
        x_test.go:11: odd v 1
    --- FAIL: Test/sub#08 (0.00s)
        x_test.go:24: odd v 1
FAIL
FAIL	command-line-arguments	0.119s
FAIL
%

That trims the diagnostic output, but it still leaves us to check all our loops. In this trivial example, 2 of 3 are buggy, but in a real program there are fewer needles and more haystack. To solve that problem, we can use a dynamic tool that reruns a test, varying which loops compile with the new semantics, to identify the specific loops that cause the failure:

% bisect -loopvar go test x_test.go
bisect: checking target with all changes disabled
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=n x_test.go... ok (13 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=n x_test.go... ok (13 matches)
bisect: checking target with all changes enabled
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=y x_test.go... FAIL (13 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=y x_test.go... FAIL (13 matches)
bisect: target succeeds with no changes, fails with all changes
bisect: searching for minimal set of changes to enable to cause failure
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+0 x_test.go... ok (7 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+0 x_test.go... ok (7 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+1 x_test.go... FAIL (6 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+1 x_test.go... FAIL (6 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+01 x_test.go... FAIL (3 matches)
bisect: run: go test -gcflags=all=-d=loopvar=1,loopvarhash=+01 x_test.go... FAIL (3 matches)
...
bisect: FOUND failing change set
--- change set #1 (enabling changes causes failure)
./x_test.go:7:9
---
...
bisect: FOUND failing change set
--- change set #2 (enabling changes causes failure)
./x_test.go:20:9
---
...
bisect: target succeeds with all remaining changes enabled
%

Now we know to spend our attention on the loops at lines 7 and 20, which are in fact the buggy ones. (The loop at line 15 has been cleared of suspicion.) Bisect uses an adaptation of binary search that can identify the relevant loop in a relatively small number of trials, making it useful even on very large, very complicated tests that run for a long time.

seebs commented

The biggest concern I could suggest would be the compatibility promise, and the go.mod stuff addresses that.

When I started learning Go, I found a page which purported to be a list of common Go pitfalls. It said "pitfalls", plural, but in fact, this was the only one. That has been my experience of using the language for the last few years; this is the pitfall in Go programming.

Do it. Doooo eeeeet.

It may be pedantic, but packages in modules with go.mod files that specify go 1.22 or a higher go version should get the new semantics.

Personally, I strongly oppose applying this proposal to for ...; ...; ... {...} loops.

The proposal doc mentions that the semantically equivalent form of

for i := 0; i < 3; i++ {
    prints = append(prints, func() { println(i) })
}

is like

{
	i_outer := 0
	first := true
	for {
		i := i_outer
		if first {
			first = false
		} else {
			i++
		}
		if !(i < 3) {
			break
		}
		prints = append(prints, func() { println(i) })
		i_outer = i
	}
}

Sorry, this adds too much magical implicitness and makes it hard to give a clear explanation of the new semantics.
And it breaks the try-to-be-explicit principle in Go's design.
Worse, this breaks most people's expectations. The iteration variable declaration statement is only executed once, but there will be multiple instances of the variable created at run time.

Obeying users' intuitions is important. This is why I support applying this proposal to for-range loops.


[update]: I don't think the so-called "accidental" bugs in using for ...; ...; ... {...} loops mentioned in the proposal doc are caused by misunderstanding the semantics of for ...; ...; ... {...} loops. IMHO, the cause of the bug in

var ids []*int
for i := 0; i < 10; i++ {
	ids = append(ids, &i)
}

is totally the same as the bug in

var ids []*int
var i int
for i = 0; i < 10; i++ {
	ids = append(ids, &i)
}

In both cases, I think most people assume there is only one instance of the variable i during the whole loop execution.

@go101 That is not the code as mentioned in the proposal; you have added an extra outer loop where the proposal only had a block.

@zikaeroh corrected now. It doesn't affect the opinion.

@go101 to be clear, you're only showing the "after" part of that kind of expansion, as far as this proposal is concerned. Currently, the equivalent expansion would be

{
	i := 0 // with proposal this is i_outer
	first := true
	for {
		// with proposal, there's an i := i_outer here
		if first {
			first = false
		} else {
			i++
		}
		if !(i < 3) {
			break
		}
		prints = append(prints, func() { println(i) })
		// with proposal, there's an i_outer = i here
	}
}

i.e. the proposal is to add i := i_outer; … ; i_outer = i, which is significantly less "added magic" than your comment makes it appear.

@Merovius Yes, that was a small mistake which has been corrected. It doesn't affect the whole opinion.

Overall I am very much for this proposal. However, @go101 did bring up a valid point about for ; ; {} loops as opposed to range loops.

I would just want somebody to confirm the behavior of the following loop under the new semantics:

for i := 0; i < 5; i++ {
  fmt.Println(i)
  i += 1
}

Under the new semantics I believe it would print: 0, 1, 2, 3, 4 instead of: 0, 2, 4.

This might be a little unexpected to users who do not know this history, and the bugs associated with capturing the loop variable.

To reiterate (unintentional pun), I think the proposal is beneficial to Go but this is my only worry.

@davidmdm
The semantic change doesn't affect the behavior of your example. It will still print 0 2 4.

The change does affect the behaviors of some use cases.

@davidmdm I haven't seen any sort of convincing argument, why the three-clause for loop would be different than a range loop. Any argument I've seen is just as applicable to both.

Meanwhile "having two loop constructs which behave extremely differently in relation to short variable declarations is confusing and hard to explain" seems obvious and hard to refute to me.

YMMV, of course.

@Merovius
It's different than most other imperative programming languages. If we take JS for example:

for (let i = 0; i < 5; i++) {
  console.log(i)
  i++
}

This will print the sequence 0, 2, 4. The same goes for Python and most other high-level languages.

Go would be unique (maybe, I won't pretend to know most languages...) in this regard.

I am still for it, because it solves subtle bugs with regards to closures and addressability that I have fallen prey to before.
What I am looking for, perhaps, is an acknowledgement that Go would adopt a slightly different for-loop semantic than other languages and that this may be confusing to new-comers. Beyond that, is Go going to provide go vet checks or something to help users not write poor code in the new semantics?

For example:

for i := 0; i < 5; i++ {
  fmt.Println(i)
  i += 1 // go vet: ineffective assignment -> iteration variable moving out of scope
}

@davidmdm Maybe you missed it, but the proposal has the "transformed" code copy the final state of the loop variable to the beginning of the next iteration, meaning that the above should also print "0, 2, 4", and that += 1 is not actually ineffective.

{
	i_outer := 0
	first := true
	for {
		i := i_outer
		if first {
			first = false
		} else {
			i++
		}
		if !(i < 5) {
			break
		}
		fmt.Println(i)
		i += 1
		i_outer = i
	}
}

The only semantic change this proposal should have is to do with capturing the loop variable by address or by a closure.

rsc commented

@davidmdm You misunderstand the new semantics. They are basically the same as the Javascript let form you mentioned. The loop you showed does not change meaning. This kind of loop is explicitly called out at the end of the https://go.googlesource.com/proposal/+/master/design/60078-loopvar.md#language-specification section. ^F for "Note that in both 3-clause..."

I would encourage you to try any programs you like on your own computer, using gotip with GOEXPERIMENT=loopvar.

% go install golang.org/dl/gotip@latest
go: downloading golang.org/dl v0.0.0-20230502172222-5216546bad51
% gotip download
Updating the go development tree...
...
Building Go cmd/dist using /Users/rsc/sdk/go1.17.13. (go1.17.13 darwin/amd64)
Building Go toolchain1 using /Users/rsc/sdk/go1.17.13.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for darwin/amd64.
---
Installed Go for darwin/amd64 in /Users/rsc/sdk/gotip
Installed commands in /Users/rsc/sdk/gotip/bin
*** You need to add /Users/rsc/sdk/gotip/bin to your PATH.
Success. You may now run 'gotip'!
% cat x.go
package main

import "fmt"

func main() {
	for i := 0; i < 5; i++ {
		fmt.Println(i)
		i += 1
	}
}
% gotip run x.go
0
2
4
% GOEXPERIMENT=loopvar gotip run x.go
0
2
4
% 
rsc commented

Regarding 3-clause loops, three points:

If we don't fix 3-clause for loops, the footgun remains.

If we leave 3-clause loops alone, then every time you change a loop from 3-clause to range or back, you have to stop and reason about the possible capture difference you might be introducing. When you are doing that kind of conversion, you should only have to think about whether the iteration visits the same things, and then be able to stop there.

Real-world evidence all points to this change fixing some 3-clause for loops and breaking none.

I freely admit that it is possible to write 3-clause for loops that behave differently in the new semantics, just as it is possible to write range loops that behave differently, such as this one by Tim King:

func sum(list []int) int {
	m := make(map[*int]int)
	for _, x := range list {
		m[&x] += x
	}
	for _, sum := range m {
		return sum
	}
	return 0
}

As Merovius put it so well, if you saw this in real life, the surprise would be not that the code behaves differently with the new loop semantics but that it was written this way at all.

Focusing on artificial examples misses the point, which is that in actual programs, when per-iteration vs per-loop does matter, 99% of the time users expect per-iteration. Here are three real examples of 3-clause loop mistakes that made it into a repo and had to be corrected (I found these by searching git histories for diff hunks consisting only of added x := x lines):

(Of course, searching git histories misses all the buggy loops that were debugged before being checked in.)

Having just gone through all the test failures inside Google caused by enabling the new semantics globally, I can tell you that 3-clause loops were a minority of root causes of test failures, but they did exist, and all of them clearly expected per-iteration semantics. The test failures happened because the test was incorrectly passing, like TestAllEvenBuggy above. In no case was the loop change breaking a good loop. That is, I saw no 3-clause for loops that correctly depended on the current per-loop semantics. Nor did I see any range loops, except the low-level one I mentioned that was prohibited from allocating.

That's some of the real-world evidence in favor of changing 3-clause loops. I have looked, and I found no evidence in favor of not changing them. If you want to make the case for not changing them, the way to do that would be to provide real-world examples of code that breaks with the new semantics. You might start by running your own tests against gotip with the loop variable change enabled (instructions here).

Evidence outweighs gut reaction.

I sympathize with the gut reaction from @go101 and @davidmdm here. I remember the first time I saw 3-clause for loops with per-iteration variable semantics, in an early talk about Dart around 2009 or so. My own gut reaction was that I was certain it was a terrible idea that couldn't possibly work, too strange to merit serious consideration. But as they say, certainty is just an emotion. The evidence we've built up over more than a decade of experience with Go seems to prove conclusively that Dart was right (as is JavaScript's for-let) and Go was wrong – I was wrong. Live and learn.

@zikaeroh @rsc

I am embarrassed to have not caught on that the iteration loop variable would inform the next loop variable.
I hope my confusion was at least instructive to others, and with that I am 100% behind this proposal.

🚀 💥 🚀 💯

Number one frustration with how lambdas interact with loops in Go and Node.js. This is actually a safety concern.

On the one hand, declaring lambdas inside of a loop is usually a performance loss, compared to declaring the lambda before or after the loop. On the other hand, when you need to curry some specific iteration state, then it is annoying to have to use additional boilerplate there.

100% code coverage is not guaranteed to catch this glitch, by the way. This is a rather subtle quirk dating back to how C does its dirty work. If the developer has insufficient tests, then many downstream problems can occur.

I can think of no useful program that desires only the last iteration's state to be curried into all the lambdas.

Let's adopt this fix ASAP.

cep21 commented

This is 100% a good idea and worth doing, but I worry we're downplaying the idea that "only packages in a module with a go.mod that says go 1.22 would get the new loop semantics". I was trying to find more about this in @rsc's video and link (I may have missed it; there's a lot to catch up on).

Tools like renovatebot and dependabot are almost required for open source projects. Many projects configure these tools to auto-update minor (and patch) version dependencies (of course after tests run) and even to auto-merge (there could be dozens of these packages to keep up to date). My mental model is to never expect the line

-go 1.21
+go 1.22

to be something that could break my existing code since it is a minor change. These tools by default auto update this line.

What is the correct mental model for seeing the above diff? Is the assumption that nothing changes beyond optimizations (e.g., sort ordering)? Maybe the bar is "beyond reasonable doubt" for behavior changes, whereas I thought the previous bar was much more deterministic. Maybe it's off topic or overblown, but I don't know what I should think when I review renovatebot diffs in the future that bump the "go" line in the mod file.

Maybe there should be an official-ish recommendation: only change the go.mod go version if you explicitly need a new feature or library.

rsc commented

What is the correct mental model for seeing the above diff?

The correct mental model should be that you are running a newer version of a dependency and you should test that your software continues to work as expected with that newer dependency. This is true for any dependency, not just the Go line. If you are using foo/bar v1.2.3 and dependabot bumps it to v1.2.4 or v1.3.0, the fact that it's still v1 (not a different major version) means the author intends that there are no breaking changes. But maybe the author fixed a bug, and your code accidentally depends on the buggy behavior. Or maybe the author introduced a bug by accident, and your program is the one that discovers the bug. Intending not to break you is different from not breaking you.

The same is true for the go line. If you see the go version change, then the Go team intends that there are no breaking changes for you. But maybe we fixed a bug you depend on, or maybe we introduced a bug by accident. In literally every instance of a loopvar-induced failure I've seen, "fixed a bug you depend on" is an accurate description. We absolutely understand that intending not to break you is different from not breaking you. When we do break you, we want you to have the right tools to make that as painless as possible. That's why the loopvar change is keyed on the go line, and it's also why other compatible-but-possibly-breaking changes will have GODEBUGs with defaults keyed on the go line, as of Go 1.21 (see this doc).

Circling back to dependabot, if dependabot proposes a change, any change at all, it would be a mistake to accept it solely because "it's only a version number increment". The only way dependabot is safe is if (1) the change passes all your tests, and (2) you have very good test coverage. If that's already true for your module dependencies, then you'll have no problem with go line bumps either.

rsc commented

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
β€” rsc for the proposal review group

Let me re-state my opinion:

I support making the change for for-range loops. What I oppose is making the change for traditional for;; loops.

The current semantic of for-range loops is counter-intuitive, the change will make it intuitive.

The current semantic of traditional for;; loops is intuitive, the change will make it counter-intuitive.

It is natural for me to think the iteration variables k and v in the following for-range loop are declared in each loop step.

func foo(f func(*K, *V)) {
	for k, v := range c {
		// If we think the iteration variables k and v are declared in
		// each loop step, then the following k and v are exactly
		// the iteration variables k and v.
		f(&k, &v)
	}
}

On the other hand, it is natural for me to think the iteration variables k and v in the following traditional for;; loop are declared only once during the whole loop execution, because the statement k, v := next() is only executed once. And I think this is also most people's intuition (proof).

func bar(f func(*K, *V)) {
	for k, v := next(); ; k, v = next() {
		// By the proposed change, the following k and v are implicitly
		// declared and they are not the above iteration variables k and v.
		// This is too counter-intuitive.
		f(&k, &v)
	}
}

That is all. Obeying intuition is very important for a language; a thousand arguments for breaking the rule still pale beside it. Adding magical implicitness is a big no-no (esp. for Go, a language promoting explicitness). Doing this will break many people's expectations.


In fact, after reading all the arguments in related threads for breaking the rule, I found none of them convincing.

If we don't fix 3-clause for loops, the footgun remains.

Sorry, if the change is made to traditional for;; loops, it might remove a footgun (so-called; it is actually caused by unprofessionalism and carelessness), but it will also create new footguns.

Currently, I have found two new footguns caused by this:

The first:

// The loop prints true now. The semantic change will make it print false.
for i, p := 0, (*int)(nil); p == nil; print(p == &i) {
	p = &i
}

The second:

// Now the value "a" is not duplicated at each loop step.
// The semantic change will make "a" be duplicated (twice)
// at each loop step.
func foo(constFunc func(*[100000]int)) {
	for a, i := [100000]int{}, 0; i < len(a); i++ {
		constFunc(&a)
	}
}

Those are two surprises for many people. Several people have said the two cases are artificial or "intentionally confusing programs". I'm speechless at such a non-serious attitude. I can't believe that the bar for keeping backward compatibility has dropped to such a low level. The two code pieces are just demonstrations. They are valid code, and it is hard to say that no code like them is used in practice.

The evidence we've built up over more than a decade of experience with Go seems to prove conclusively that Dart was right (as is JavaScript's for-let) and Go was wrong

It is unfair to compare Go with JS (I'm not familiar with Dart, so I'm not sure if this is also true for Dart). Go is quite different from JS: Go supports pointers and large-size types/values, which makes the use cases for Go loops much more diverse. This is evident in the above two new footgun examples.

If we leave 3-clause loops alone, then every time you change a loop from 3-clause to range or back, you have to stop and reason about the possible capture difference you might be introducing. When you are doing that kind of conversion, you should have to think about whether the iteration visits the same things and then be able to stop there.

I don't think this is a problem for a qualified Go programmer. It is unnecessary to force this kind of consistency. The two kinds of loops are different in nature, and a qualified Go programmer should realize the consequences. In fact, making them behave differently is a great feature, giving Go programmers more choices for different scenarios.

Even if the argument is valid, similar situations will arise if the proposed change is applied to traditional for;; loops. For example, in practice a programmer sometimes, for some reason, rewrites the first loop below into the second one. She/he expects the rewriting not to change the execution of the loop, but that expectation might be broken. If the change is a performance degradation, it might not be detected in time, depending on the degree of the degradation.

	// 1
	for v := f(); g(); h(&v) {
		... // no continue here
	}

	// 2
	for v := f(); g();  {
		... // no continue here
		h(&v)
	}

That's some of the real-world evidence in favor of changing 3-clause loops. I have looked, and I found no evidence in favor of not changing them.

First, I think maybe what you have looked at is still not enough. Second, I also don't see a universal and strong desire in favor of changing them.

And we need to make it clear whether or not the causes of the following two bugs are the same.

var ids []*int
for i := 0; i < 10; i++ {
	ids = append(ids, &i)
}

and

var ids []*int
var i int
for i = 0; i < 10; i++ {
	ids = append(ids, &i)
}

Finally, in my opinion, mixing the changes to the two kinds of loops is not a good idea. I suggest splitting this proposal into two: one for for-range loops and the other for traditional for;; loops.

I have noticed that this proposal has been added to the active column and will be reviewed. I'm curious why this is being done in such a hurry. Personally, I think the current analysis work is still not sufficient. Shouldn't review happen after the experimental phase is almost over?

@go101 I think most participants in the conversation understand your opinion. That doesn't mean they can't disagree with it.

And I think this is also most people's intuitions (proof).

To me, the fact that your arguments rely on extremely contrived code to show anything about "intuition" makes them very weak. I still, to this day, have not formed an opinion on what any of your examples "should" do, because I think the only correct response to encountering them in the wild isn't to try to puzzle through and explain their behavior, but to remove them.

They are bad code. "This change would make extremely bad and obtuse code counter-intuitive" is a non-starter of an argument to me. That code is counter-intuitive, no matter what it does.

That is two surprises for many people. Several people said the two cases are artificial or "intentionally confusing programs". I'm speechless on such non-serious attitudes. I can't believe that the bar to keep backward-compatibility is down to such a low level.

We are accepting breaking backwards compatibility with this proposal. Even with the part you explicitly endorse. So, clearly this is not the issue.

You are making an argument based on supposed "counter-intuitive semantics". And for that, yes, the fact that your examples are contrived is extremely salient.

I have noticed that this proposal has been added to the active column and will be reviewed. I'm curious why this is being done in such a hurry. Personally, I think the current analysis work is still not sufficient. Shouldn't it be done after the experimental phase is almost over?

This proposal had a discussion and a separate issue discussing adding the experiment flag. Both of which you are perfectly aware of, as you've participated in both. So, pretending that there hasn't been extensive conversation about this so far is simply disingenuous.

Meanwhile, being added to the active column simply means that this proposal is in the active discussion phase. Nothing more. That is, it's not a "likely accept", or anything. There are proposals which spent many months in the active phase. So, again, pretending that there is any "hurry" implied by adding it to the list of active proposals is wrong.

That being said, the response to both the discussion and this proposal has been overwhelmingly positive. Note that this issue accumulated 230 👍 in two days, and only 6 👎. The emoji votes on the original discussion are even more unanimously and overwhelmingly positive. There have also been few proposals in the past so thoroughly evidenced and backed by actual data. So, yes, just because you don't like it doesn't mean we shouldn't do this soon, based on the long discussion phase and the extremely positive feedback we have had so far.

rsc commented

@go101, your follow-up post repeatedly uses words like "intuitive" and "natural" and "reasonable" and "expectations" as if they are universal absolutes. To me, the new semantics are intuitive and natural and reasonable and match my expectations better than the old semantics. We are going to have to agree to disagree about how the new semantics make us feel.

I want to be clear: instead of focusing on feelings, we are going to make this decision based on evidence.

The evidence we have presented in favor of making the change is our experience running the new semantics in a code base with many millions of lines of Go code. The evidence you have presented is a few hypothetical programs that look nothing like any Go program I have ever seen in actual practice. The evidence in favor of the change is – objectively and scientifically – far stronger.

About your hypotheticals, you wrote:

The two code pieces are just two demonstrations. They are valid code; it is hard to say that no code like this is used in practice.

On the contrary, it is easy for me to say that no code like those is used in practice, or at least that it is exceedingly rare. Because, again, we are already running many millions of lines of Go code using the new semantics and have encountered zero instances of code that broke due to the new semantics of 3-clause for loops. The single instance where correct code arguably broke was a range loop. The bar is not "never anywhere". The bar is "exceedingly rare", and the evidence we have gathered absolutely establishes that.

I have invited you to look for real-world evidence supporting your position and explained how to go about doing that. If you would like to gather real evidence and share it with us, please do. But otherwise, please stop posting the same arguments over and over. Thank you.

mbyio commented

I think making this change is definitely the right thing to do. It always takes me out of my flow when I need to think about variable scoping in loops, and I agree that it is almost always a bug to rely on the old behavior.

I'm only concerned about the upgrade process. I'm specifically looking at this section of the proposal https://go.googlesource.com/proposal/+/master/design/60078-loopvar.md#mechanical-rewriting-to-preserve-old-semantics-is-possible-but-mostly-unnecessary-churn

I often work with code that all lives in a single Go module and lacks trustworthy tests. The code is still worked on, on a weekly to monthly basis. We might dive in, make a small edit somewhere, maybe add a new if statement, upgrade a module, etc. No major changes, just maintenance work. I worry that there will be a strong tendency to just leave code like that on go 1.21 (or whatever version it is) forever.

My understanding is, with the static compiler tooling being proposed, engineers will have to do some manual work to check any loops that were flagged. Then they'll have to test the program to make sure it works as expected after their changes. I think even the suggestion that there is some indeterminate amount of manual work required will cause some people to never upgrade. For some other people, especially those who are less confident in their understanding of the language and this change, they may have trouble using the tools you're proposing.

Maybe I am still scarred by the Python 2 -> Python 3 upgrade process. One of the biggest annoyances was having to come back to some codebase still on Python 2, where the language was very different from Python 3, and make small edits like I described above. It made the whole process more error prone because of the context switching and unclear set of supported language features. But I didn't really have time to go through and do the actual upgrade, because I was supposed to just make a small edit. So I had to tough it out.

Obviously, this change is tiny compared to Python 2 -> Python 3. But I'm thinking of the future, go 1.50, when maybe there are a bunch of these tiny language fixes that add up over time. There might still be old projects stuck on 1.21 years from now because of this loop change.

So, I would like there to be some opt-in tooling that automatically edits code to preserve the old behavior in suspicious loops. That way, in cases where they otherwise might not be able to upgrade, someone can just run this upgrade tool, and engineers who fully understand the change and have some time to spare can go back and make gradual edits to fix old code over weeks/months/years, and we won't have to deal with a divide between go 1.21 and go 1.22 codebases. I know the community could always write a tool like that, but once you have to rely on a community tool, that adds another obstacle to the upgrade since you now have to research to find out if the tool is trustworthy or not.

Sorry for the long message, I hope that makes sense. Thanks for all the work on Go.

1268 commented

I sympathize with the gut reaction from @go101 and @davidmdm here. I remember the first time I saw 3-clause for loops with per-iteration variable semantics, in an early talk about Dart around 2009 or so. My own gut reaction was that I was certain it was a terrible idea that couldn't possibly work, too strange to merit serious consideration. But as they say, certainty is just an emotion. The evidence we've built up over more than a decade of experience with Go seems to prove conclusively that Dart was right (as is JavaScript's for-let) and Go was wrong – I was wrong. Live and learn.

@rsc You seem to be misrepresenting what JavaScript actually does. From the docs:

This also happens if you use a var statement as the initialization, because variables declared with var are only function-scoped, but not lexically scoped (i.e. they can't be scoped to the loop body).

https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/for#lexical_declarations_in_the_initialization_block

In this context, var variables are function-scoped, even when declared in a loop clause. That is not what Go does. Current Go scopes to the loop, not the function, and "new" Go scopes to the iteration. Neither Go behavior matches JavaScript. As others have said and demonstrated with code examples, this change is essentially a "macro" that expands into some surprising code. Saving one line of explicit code that is currently needed does not justify the added ambiguity introduced by this change, I feel.

@4cq2 The examples given in the link show that the scoping behavior of let matches what this Go proposal is implementing. Namely, it says:

for (let i = 0; i < 3; i++) {
  setTimeout(() => {
    console.log(i);
  }, 1000);
}

Prints 0, 1, 2. So, I don't find the JS comparison a misrepresentation at all. Additionally, the current Go behavior is exactly what the linked page describes for a let pulled out of the for loop, whose equivalent is also described in the Go proposal as the new way to write the code using the old semantics.

Sure, var doesn't behave that way, but var has scoping unlike any variable in Go, and I think it's very accurate to say that nearly all developers avoid var as much as possible when writing JavaScript/TypeScript, so it's not representative of "what JavaScript actually does" on its own.

1268 commented

@zikaeroh My point is that the JavaScript comparison really doesn't have a place on this page.

JavaScript started with var, then added let. So they started with function scoping, then added lexical scoping. This isn't what Go is proposing, so to even invoke the comparison is at best confusing, and at worst misleading.

Plus the kicker: JavaScript doesn't use pointers, at least not in the sense that Go does. And pointers are the entire crux of the issue here. When dealing with "value" loop variables, this issue is nonexistent, to my knowledge. So again the comparison with JavaScript is dubious.

Plus the kicker: JavaScript doesn't use pointers, at least not in the sense that Go does. And pointers are the entire crux of the issue here. When dealing with "value" loop variables, this issue is nonexistent, to my knowledge. So again the comparison with JavaScript is dubious.

Pointers are not the crux here; writing nearly exactly the same code in Go, no pointers:

for i := 0; i < 3; i++ {
  go func() {
    time.Sleep(time.Second)
    fmt.Println(i)
  }()
}

This will print 3, 3, 3. Run this (not using the playground, as it runs go vet which finds the bug!): https://glot.io/snippets/gkvmu2z2ry

And to be clear, the design doc has examples that do not use pointers at all. The JavaScript comparison is not dubious to me.

1268 commented

This will print 3, 3, 3. Run this (not using the playground, as it runs go vet which finds the bug!): https://glot.io/snippets/gkvmu2z2ry

@zikaeroh Fair enough, but you conveniently left out the first paragraph; here it is again:

JavaScript started with var, then added let. So they started with function scoping, then added lexical scoping. This isn't what Go is proposing, so to even invoke the comparison is at best confusing, and at worst misleading.

I left it out because I don't find it relevant at all. All comparisons to JavaScript prior to your comment explicitly referenced for-let, not for-var.

1268 commented

The change that JavaScript made was to add a syntax for lexical scoping, as an alternative to function scoping.

That is not what Go is proposing. Go already has lexical scoping, so this proposal is something else. The comparison doesn't hold, at least not in the way I read the initial comment invoking it.

One example of why I want this change is that currently you can't safely change the semantics of functions that accept other functions. Consider this code:

for _, v := range someSlice {
    processSomething(func() {
        // read data from v
    })
}

Currently I have to promise that the lambda passed to processSomething finishes execution before the next iteration, even if I don't write anything to v. While in theory this sounds like an easy fix, in practice you have to do manual work and ensure that you don't capture anything from outer loops if those exist. If you believe this problem is unrealistic, try updating tests that use t.Run by adding t.Parallel on a sufficiently big codebase. You'll be surprised how hard it is to be confident that you covered all cases.

@4cq2 The JS for-let comparison only adds explanatory context; it's not the reason to make the change. I agree with @zikaeroh that it's accurate, but really, it doesn't terribly matter. At worst, it's a bad editorial choice. I don't understand why you are digging your heels into your opinion that this editorial choice was bad, given that nothing about the proposal really changes if it were made differently.

The salient point is that, empirically, people seem to expect that iteration variables are created per-iteration, not per-loop. Even if Go was the first language to do what's proposed, that data would still speak a clear language.

As others have said and demonstrated with code examples, this change is essentially a "macro" that expands into some surprising code.

That is not true, unless you say any compiler activity is "essentially a macro". In particular, as I pointed out above, the complexity of the "expanded code" doesn't change materially under this proposal.

And to be clear, the range loop is also pretty similarly "magical". It needs to carry around an implicit iterator variable, to make this code print 0,1,2,3:

for i := range make([]int, 4) {
    println(i)
    i++
}

Yet, ostensibly, that's not a problem. We live with this "magic" without causing any problems.

I think the range loop change is great. I tend to agree with @go101 and others that the proposed semantics for the 3-clause loops are confusing. It's a strange situation where you have unintuitive semantics causing fewer bugs than intuitive semantics.

It's also a common source of confusion in JavaScript and Dart. [1, 2, 3, 4, 5, 6, 7]


The salient point is that, empirically, people seem to expect that iteration variables are created per-iteration, not per-loop.

@Merovius I don't think this is the real explanation of those bugs. I think people are writing code as if closures were able to somehow capture the current value of any variable.

I don't think people expect that

var i int
for i = 0; i < 10; i++ {
    go func(){ fmt.Println(i) }()
}

behaves differently from

for i := 0; i < 10; i++ {
    go func(){ fmt.Println(i) }()
}

So I disagree with the proposal text that "Changing both forms eliminates that bug from the entire language, not just one place". The bug still happens when the variable isn't declared in the loop with :=. And when the proposal says:

That consistency means that if you change a loop from using range to using a 3-clause form or vice versa, you only have to think about whether the iteration visits the same items, not whether a subtle change in variable semantics will break your code.

This is true, but it comes at the cost of having to think whether changing from := to = will subtly break your code.

For these reasons I think only changing the range loop, but not the 3-clause loop, is more consistent. This is also the reason why C# did not make the change for this kind of loop (and instead only for foreach):

When you say

for(int i; i < 10; i += 1) { ... }

it seems dead obvious that the i += 1 means "increment the loop variable" and that there is one loop variable for the whole loop, not a new fresh variable i every time through! We certainly would not make this proposed change apply to for loops.

- Eric Lippert


I want to be clear: instead of focusing on feelings, we are going to make this decision based on evidence.

@rsc The danger of taking this stance too far is that there are important considerations that don't have hard evidence:

  1. I think that changing the 3-clause loop adds more mental overhead when reading 3-clause loops.
  2. I think it will be a source of confusion for learners and will guide them to build a wrong mental model where a closure captures the current value of a referenced variable.

Personally, I don't think fixing a handful of bugs in existing (and future) code makes up for the disadvantages. Especially since this is a bug that go vet already catches. Others may disagree, but it's a judgement call, not based solely on evidence.

@LukasGelbmann I'm not sure that it changes anything for the code you wrote, actually. Personally, I am of the opinion that the proposed change makes things more intuitive.

I actually expect a difference between the loop variable being declared before the loop and via a short variable declaration in the loop: it's a different scope. One is tied to the for loop. The other is accessible even after the loop is exited.

The only change, if I understand the proposal well, is that instead of being scoped to all iterations of the for loop (using for i := 0), the scope would be even smaller: per iteration.
Then, at the end of the loop body, the value of the iterator variable is copied onto the next iteration's variable, to which the post statement is then applied, if I'm not mistaken.

What may change perhaps is taking a pointer to the iterator variable from within the for loop. Typically when closing over it.

I think that's welcome. There has been a lot of confusion since early Go when capturing loop variables (some good examples have been mentioned in other posts, for instance when launching goroutines).

In general, I don't think that people want to close over the final value of the iterator variable.

It's likely that this change will not affect any code that is already running.

@LukasGelbmann

It's a strange situation where you have unintuitive semantics causing less bugs than intuitive semantics.

To me, "intuition" is "decision-making without conscious thought". If the code people actually write is buggy under assumption A and correct under assumption B, then, to me at least, assumption B is what's "intuitive". If we can agree that people generally don't want to write broken code.

But ultimately that's a question for dictionary writers or psychologists to squabble over, in my opinion. Personally, I'm okay if Go becomes "less intuitive", if the result is that the code written in it contains fewer bugs. And we can empirically say with pretty high confidence that that would be the case under this proposal - for both kinds of loops.

I think that changing the 3-clause loop adds more mental overhead when reading 3-clause loops.

If that were the case, it should be easy to produce a (realistic) example of where it makes a difference. The only examples that came up so far already have functionally ∞ mental overhead, so there doesn't seem to be much of a difference to me.

I think it will be a source of confusion for learners and will guide them to build a wrong mental model where a closure captures the current value of a referenced variable.

I'm wondering why you are not concerned about this for range loops. It seems the exact same logic applies.

Personally, I'm okay if Go becomes "less intuitive", if the result is that the code written in it contains fewer bugs.

That's fair, agree to disagree.

If that were the case, it should be easy to produce a (realistic) example of where it makes a difference. The only examples that came up so far already have functionally ∞ mental overhead, so there doesn't seem to be much of a difference to me.

The mental overhead, for me personally, would come in when reading code like this and verifying that it's correct:

for d := 3; d < len(terms); d++ {
	terms[d] = func(xs, termOut []float64) {
		for i, x := range xs {
			termOut[i] = math.Pow(x, float64(d+1))
		}
	}
}

I'd prefer to read this code with an explicit d := d also in the future.

Mental overhead under the new semantics also comes into play for me when the loop variable is modified, like in the example above:

for i := 0; i < 5; i++ {
	fmt.Println(i)
	i += 1
}

Here the confusion in the discussion above, and the mental overhead for me, is due to the implicit copy (i_outer = i_inner) at the end of each iteration. It doesn't apply to range loops.

Probably I will get better at it with time.

I'm wondering why you are not concerned about this for range loops. It seems the exact same logic applies.

Basically for the reason given by Eric Lippert for C#. I went into more detail about the difference in an older discussion.

the mental overhead for me, is due to the implicit copy (i_outer = i_inner) at the end of each iteration. It doesn't apply to range loops.

Yes, it does. Today.

There's no such copy operation in range loops. Yes there is an implicit counter, but the current value of i is discarded (overwritten) after each iteration in your example.

There is the same implicitly generated i_outer iteration variable, is the point. How exactly it is managed and when exactly it is copied from where to where differs. But you simply can't convince me that doing that for the three-clause for loop creates an undue cognitive burden, when no one actually thinks about it happening for a range loop in practice. We manage this kind of stuff just fine.

Ultimately, what is readable and how clear something is are subjective, of course. But I find this hard to believe.

rsc commented

@mbyio, thanks for taking the time to write your note. I hear your concerns. We will look into whether it makes sense to provide that "rewrite to preserve old semantics unconditionally" tool as a kind of backstop for people, even though I would not recommend its use in general.

rsc commented

@LukasGelbmann, I read your [1, 2, 3, 4, 5, 6, 7] links, and my reading is that not a single one was claiming the behavior was wrong. They were instead asking "how does this work?" That's an entirely reasonable question! It was the first question I asked when I saw a loop like that. But all those people asking questions seemed to me pleasantly surprised. I didn't see anyone saying "I wanted console.log to print 4 4 4 4 instead of 0 1 2 3."

I will repeat what I said before: We are not going to decide the question of 3-clause for loops based on intuition or taste. While you are free to continue to make arguments about what is and is not intuitive, I'm just letting you know those arguments are not going to affect the decision. What will affect the decision is actual data from real-world codebases. Thanks.

The third new footgun:

package main

import "fmt"

func main() {
	for counter, i := 0, 0; i < 5; i++  {
		i := i
		if i&1 == 0 {
			defer func() {
				counter++
				fmt.Printf("#%d: %d\n", counter, i)
			}()
		}
	}
}
$ gotv :tip run loopvar.go
[Run]: $HOME/.cache/gotv/bra_master/bin/go run loopvar.go
#1: 4
#2: 2
#3: 0
$ GOEXPERIMENT=loopvar gotv :tip run loopvar.go
[Run]: $HOME/.cache/gotv/bra_master/bin/go run loopvar.go
#1: 4
#1: 2
#1: 0

@go101 A "footgun" is not "anything that would be different".

Sorry, I don't know what you mean. Here, in this thread, "footgun" means "surprise".

That's a non-standard definition (a birthday gift is a "surprise", but definitely not a "footgun") and yet, you don't even manage to hit that incredibly moved goal post. That code does exactly what I'd expect.

YMMV, of course, but also, it doesn't really matter, as long as your understanding of what's surprising contradicts the empirical evidence.

@go101 That's an interesting example. If I wrote something like this, it might not have been my intent in the first place but rather a bug.

Meaning that, in the loop body, one would probably have written counter := counter as well.

That's actually the crux of the issue I think.

I believe it is easier to explain that with the short variable declaration, loop variables are "redeclared" on each iteration with new values.

The point being that even if someone forgets to write i := i etc., the code will still run according to the initial intent.

The "footgun" is probably the old semantics in the first place :)

The new semantics are more "declarative". The value that is closed over is the value at the time of the iteration.

I mean, if you only change counter in the defer function defined in the loop, then assuredly the value that should be captured is 0.

The new semantics still make sense to me here I think?

rsc commented

@go101, since you are repeating things you have already said, I will do the same:

The way to influence this decision is to find examples of real code from real systems that the change breaks.

Hypothetical examples that change behavior are not going to influence this discussion.

Please stop posting new hypothetical examples. You have made your point. (We have heard you. We disagree.)

@rsc

We will look into whether it makes sense to provide that "rewrite to preserve old semantics unconditionally" tool as a kind of backstop for people, even though I would not recommend its use in general.

I do understand why there can be some hesitancy about rolling the (extremely loaded) dice. Would something like a compiler directive be an option here? Like:

//go:noloopscopefix
for ...

I don't think rewriting to preserve semantics helps do gradual code repair, and the dice are loaded to suggest that real bugs are fixed by changing the semantics.

@rsc

We will look into whether it makes sense to provide that "rewrite to preserve old semantics unconditionally" tool as a kind of backstop for people, even though I would not recommend its use in general.

I do understand why there can be some hesitancy about rolling the (extremely loaded) dice. Would something like a compiler directive be an option here? Like:

//go:noloopscopefix
for ...

I don't think rewriting to preserve semantics helps do gradual code repair, and the dice are loaded to suggest that real bugs are fixed by changing the semantics.

You will be able to do it with //go:build go1.20 per file IIRC.


@go101

Funny thing about that code is that I would definitely write it the other way around:

package main

import "fmt"

func main() {
	counter := 0
	for i := 0; i < 5; i++ {
		if i&1 == 0 {
			defer func() {
				counter++
				fmt.Printf("#%d: %d\n", counter, i)
			}()
		}
	}
}

And I would expect it to work.

@rsc
You will be able to do it with //go:build go1.20 per file IIRC.

Thanks, I hadn't noticed #59033 - per-file seems forgiving enough. I'm fairly skeptical that a rewrite tool is a better choice than playing with build tags.

@atdiar The i iteration variable is not important here; I could remove it entirely. It is just used for the demonstration.

@rsc All three new "footgun" example code pieces have their own unique characteristics. I hadn't seen an example like the third one before posting it here.

@DmitriyMV In your code, you chose a way to avoid the problem, not solve it. :)

Up to now, the discussion in this thread has shown that the discussion of the proposed changes is still not sufficient. We need more Go programmers to be aware of the changes before deciding whether to accept them or not. I estimate that less than 1% of Go programmers are aware of the changes now. Considering there are many private Go projects, we need to wait for more Go programmers to participate in the experiment.

Go supports pointers, large-size types, closures, and multi-value assignments. All of these give Go's 3-clause loops more varied uses than in other languages. We should be careful in changing their semantics.

@go101 You can remove i, but the question of which value is captured in the closure remains.

counter is not a pointer.
So its lifetime being extended outside the scope of the for loop can be quite surprising as well, and it might not be the behavior that should have existed in the first place.

I guess @rsc's idea is for people to test their existing code to learn whether they rely on the old scoping rules or not, given that for the corpus he tested, iteration-local scoping was a net positive.

First, let me state that I have no concerns regarding the proposal. I checked a number of my modules with GOEXPERIMENT=loopvar gotip and my unit tests were all successful. While that doesn't prove a lot, it strengthens my confidence that there is no practical problem. I suggest others do the same.

I agree however that one must get used to the new semantics for the three-clause for-loop because my personal mental model was that

var a []*int
for i := 0; i < 3; i++ {
     a = append(a, &i)
}

is equivalent to

var a []*int
{
    i := 0
    for i < 3 {
       a = append(a, &i)
       i++
    }
}

This mental model will be wrong going forward. I don't regard it as an issue, because most loops will not change semantics at all if there is no closure or address reference in the loop. Where there is, there is a high likelihood that a lingering bug will be fixed.

I'm all for the semantic change for range loops. As for 3-clause loops, I was indifferent at first. But the more I think about it, the more I'm convinced that the 3-clause loops semantic shouldn't be changed.

we are going to make this decision based on evidence [emphasis in original]

I agree with @rsc about this statement. The important numbers we should seek and compare are # bugs fixed vs. # bugs caused by the change.

But we don't live in an ideal world. It's hard to gather unbiased and representative evidence for many reasons:

  1. It's easier to find cases where bugs are fixed than cases where bugs are caused by the change. For example, we can find the former by searching for x := x, as done by @rsc. On the other hand, it's hard to formulate search terms to find the latter. In any case, we can only find things we expect to find. The problem is that there may be many more cases that we don't think of, and even when we can think of some of them, we can't formulate the appropriate search terms.

  2. Maybe the only alternative to relying on code search is using tests to get the number of bugs fixed and caused by the change. However, we need to realize that much code is not well tested, let alone having 90%+ code coverage. Not to mention that, as Dijkstra famously said:

    Program testing can be used to show the presence of bugs, but never to show their absence!

    That means the fact that all tests pass doesn't imply that there are no bugs. We can only prove there are no bugs using formal methods, which in most cases is not practical at all.

  3. So far the code we've been getting evidence from is only from GitHub public repos and Google's codebase. This implicitly assumes that code from them is a representative sample of all Go code. However, open source Go packages and Google's codebase may be of higher quality and better tested than code from many other companies, which, due to deadlines or very specific requirements, may use hacks and obscure tricks.

If the problems above apply to both range loops and 3-clause loops, why do I support the semantic change of one and not the other? It's because there are far fewer ways you can use (and abuse) range loops than 3-clause loops. The breakages caused by the new semantics of range loops revolve around the use of the address of the loop variable. In contrast, you can do almost anything in 3-clause loops.

Let's look at the grammar of the 3-clause loops from the spec:

ForClause = [ InitStmt ] ";" [ Condition ] ";" [ PostStmt ] .

InitStmt = SimpleStmt .
PostStmt = SimpleStmt .

Condition = Expression .

As you can see, we can do much more with SimpleStmt and Expression than (ab)using loop variable addresses in the range loops case.

Of course, at the end of the day we still can't get the exact numbers of bugs fixed vs. bugs caused by the change. However, the numbers we have now may underrepresent the real distribution. As much as I want the decision to be based on empirical evidence (because it's more objective and no one can argue with the numbers), I don't think gathering the evidence is an easy task. So, we may need to discuss more subjective things, e.g. which is more intuitive, which is more common, which requires less mental overhead, etc.

@danieljl Does it really break?

I agree with @rsc about this statement. The important numbers we should seek and compare are # bugs fixed vs. # bugs caused by the change.

But we don't live in an ideal world. It's hard to gather unbiased and representative evidence for many reasons:

It's certainly true that it's hard to get unbiased evidence. But we have some evidence that this change not only fixes some bugs, but also doesn't introduce any, for 3-clause loops. Like, this isn't just the absence of evidence, it's evidence for absence.

For example, the pattern below is not uncommon and will break in the new semantic.

I might be daft, but can you explain the breakage? Because I can't see it. From what I can tell, IsDone and Done have pointer-receivers (as they have side-effects). So I'd expect InitFoo to also return a pointer, in which case this seems fine.

And even if InitFoo returns a value (which would be exceedingly rare), then its value would get copied back and forth, including all fields. So that still seems fine. It might break if its methods start goroutines that operate on fields, or escape pointers to its fields, or have mutexes that should not be copied - but in those cases, returning a value from InitFoo is a really bad idea.

So… I don't think this would break in any common usage of this pattern?

But we have some evidence ... this isn't just the absence of evidence, it's evidence for absence. β€” @Merovius

If it is some unbiased enough (no need to be 100% unbiased) evidence, it's good to use. But in this case it's clearly biased evidence, which may be worse than no evidence, because it can be misleading.

From what I can tell, IsDone and Done have pointer-receivers (as they have side-effects). So I'd expect InitFoo to also return a pointer, in which case this seems fine. β€” @Merovius

I don't understand how you could come to that conclusion. The fact that some methods have pointer receivers doesn't require the "constructor" to return a pointer.

And even if InitFoo returns a value (which would be exceedingly rare), then its value would get copied back-and-forth, including all fields. So, that still seems fine.

You may have some misunderstanding of how shallow copy / memcpy works, because it doesn't work the way you think it does. If InitFoo returns a value, the new semantic will break the code. Demo: https://go.dev/play/p/sxrRvfeeRx6 (In the current semantic it will output "go" once, whereas in the new semantic it will output "go" indefinitely and will never finish the loop.)

fzipp commented

In the current semantic it will output "go" once, whereas in the new semantic it will output "go" indefinitely and will never finish the loop.

@danieljl I can't confirm this with GOEXPERIMENT=loopvar:

% go install golang.org/dl/gotip@latest
go: downloading golang.org/dl v0.0.0-20230502172222-5216546bad51
% gotip download
Updating the go development tree...
...
Building Go cmd/dist using /usr/local/go. (go1.20.3 darwin/arm64)
...
Success. You may now run 'gotip'!
% cat x.go
package main

import "fmt"

func main() {
	for foo := InitFoo(); !foo.IsDone(); {
		fmt.Println("go")
		foo.Done()
	}
}

type Foo struct {
	isDone bool
}

func InitFoo() Foo {
	return Foo{isDone: false}
}

func (f *Foo) Done() {
	f.isDone = true
}

func (f *Foo) IsDone() bool {
	return f.isDone
}
% gotip run x.go
go
% GOEXPERIMENT=loopvar gotip run x.go
go
% 

@danieljl I think the new semantics are closer to this: https://go.dev/play/p/39xzBuu8Ou8

Which only prints "go" once as well.

If it is some unbiased enough (no need to be 100% unbiased) evidence, it's good to use. But in this case it's clearly biased evidence, which may be worse than no evidence, because it can be misleading.

This is derailing the discussion. Either you have real-world examples proving your point or you don't. So far, opponents of this change haven't provided anything other than "I can write this code, which will be broken by this change, so I don't like this change." On the other hand, we have numerous bugs (even linked in this discussion) that will be fixed.

Also, before calling things "biased", please provide the evidence.

Demo: https://go.dev/play/p/sxrRvfeeRx6 (In the current semantic it will output "go" once, whereas in the new semantic it will output "go" indefinitely and will never finish the loop.)

Please read the proposal again and all previous discussions. @atdiar provided the correct example (though the assignment at the end should happen unconditionally, even if the loop contains a continue).

@atdiar Thanks for correcting my code emulating the new semantic.


@DmitriyMV

If it is some unbiased enough (no need to be 100% unbiased) evidence, it's good to use. But in this case it's clearly biased evidence, which may be worse than no evidence, because it can be misleading.

This is derailing the discussion. Either you have real-world examples proving your point or you don't.

Did you even read my first long explanation of why I think the evidence presented so far is biased?

Also, before calling things "biased", please provide the evidence.

The evidence of what? The evidence that the evidence presented so far is biased OR the evidence against the semantic change of 3-clause loops? In either case, again, please read my first post: #60078 (comment)


So far no one has disagreed or even commented on my 3-point argument that the evidence presented so far is biased; instead everyone focused on one specific example, which, I admit, incorrectly demonstrates the new semantics. I wonder if that means no one really disagrees that the evidence presented so far is biased / not representative. If someone argues otherwise, I'd like to hear their argument.

@danieljl

So far no one has disagreed or even commented on my 3-point argument that the evidence presented so far is biased

I disagree. I did comment on it. Also, feel free to provide better evidence.

@danieljl

On the other hand, it's hard to formulate search terms to find the latter cases.

Not really. The golang.org/x tooling gives you enough info about types, and there are tools like go-ruleguard and gogrep that can help you identify such cases. You don't need to perform an actual text search, and you can find very specific cases if you go with AST and types lookup.

In any case, we can only find things we expect to find.

We are getting dangerously close to Russell's teapot. So far we have seen plenty of evidence supporting fixing this behaviour, and none for not doing it. Even opponents have failed to provide real-world examples, and they had plenty of time.

So far the code we've been getting evidence from is only from GitHub public repos and Google's codebase. This implicitly assumes that code from them is a representative sample of all Go code. However, open source Go packages and Google's codebase may be of higher quality and more well-tested than code from many other companies, whichβ€”due to deadlines or very specific requirementsβ€”may use hacks and obscure tricks.

Not really. The fact that this behavior resulted in numerous bugs in Google services as well as many public repositories says that people tend to think alike in common scenarios. This is what we can actually see and observe.

However, the numbers we have now may underrepresent the real distribution.

I'm getting tired of this "may or may not happen". So far the evidence, collected from several sources, has been overwhelmingly supportive of this change. Almost all people here, on Reddit, on Hacker News, and on other discussion platforms support this change, with some being surprised that it doesn't work that way currently. I have seen no real objections to making this change that don't start with the word "may" or "maybe".

I will stop here because I see no point in continuing this discussion, at least until some new real-world evidence arises.

@atdiar
Per your comment, I agree that people sometimes expect an iteration variable in a traditional 3-clause loop to be scoped to a single loop step. Where I disagree is that this is not always true, which is a big difference from for-range loops. This has been shown in the 3 new footgun examples.

@danieljl
I agree with your viewpoints. But admittedly, the example in your comment doesn't look like a broken case.

I won't refute some of the other skewed viewpoints. In fact, I would feel ashamed to hold those viewpoints.

I'm helping this project by showing some bad effects of this proposal. In fact, it is not my responsibility to do this, not to mention to find the use cases in the real world. That is the responsibility of the proposal's authors. If they think this is hard to do, then we should encourage the whole community to distribute the workload.

As mentioned in the last comment, iteration variables in traditional 3-clause loops are sometimes expected to be scoped to a single loop step, and sometimes expected to be scoped to the whole loop.

Currently, the third clause of a traditional 3-clause loop can't be a re-declaration statement. We could remove this limit and use a re-declaration statement to show explicitly which iteration variables are expected to be scoped to a single loop step. For example:

	for i := 0; i < 3; i := i+1 {
		prints = append(prints, func() { println(i) })
	}

	for counter, i := 0, 0; i < 5; i := i + 1 {
		if i&1 == 0 {
			defer func() {
				counter++
				fmt.Printf("#%d: %d\n", counter, i)
			}()
		}
	}

This would keep things explicit and not break backward compatibility.


[update]: just an unproven assumption: it looks like the iteration variables expected to be scoped to a single loop step are declared in the first clause and modified in the third clause through assignments. If this is true, we could let new versions of go fix rewrite the assignment statements in the third clause as re-declaration statements. This is the bottom line that would let me think the change is not reckless and devastating.

@go101 the point is that anyone can build synthetic examples that somewhat rely on the old scoping rules, but the question is whether anyone does so in real programs.

That's why @rsc asked about real code examples. :)

You will be able to convince him and others if you can find examples in real code where people relied on the old scoping behavior.

You will be able to convince him and others if you can find examples in real code where people relied on the old scoping behavior.

To be clear: I wouldn't put it that strongly. Currently, the evidence indicates that the overwhelming majority of programs would benefit from this change. Producing a single real-world example would not disprove that. It's just that the inability to produce even a single one seems to strengthen the conclusion that there are vanishingly few.

I think to be convincing, an example would have to either 1. be a pretty common pattern that we can find in lots of real code, or 2. exhibit real novelty, a behavior we did not anticipate but that would be clearly natural to write. Which would demonstrate that we actually misunderstood how easy it is to make a mistake under the new semantics.

The examples so far don't do 1, as they are entirely synthetic. And they don't do 2, as they are either absurdly unreadable and their effect would be hard to tell under any semantics (the ones with many function calls and pointer comparisons in the loop clauses) or actually pretty easy to reason through and not really surprising (the counter example).

@atdiar
Again, it is not my responsibility to show real code examples. It is the responsibility of the proposal's authors (to prove no such cases exist in all the Go code in the world). :) My help is just to show the possibilities.

@go101 oh I don't think anyone is asking specifically that you find an example in someone else's codebase of reliance on such behavior.

It's mostly about you reporting cases extracted from your own real code or the dependencies you use.

That's the only practical way to go about it. (distributed).

But just thinking about when one would want such behavior, we can already intuit that it would be exceedingly rare if not always a bug.

The claim has never been that there are no cases where this proposal would break real-world code. The claim was that this proposal will fix overwhelmingly more problems than it causes. Moving the goal post like that is clearly designed to create an impossible to fulfill standard of proof (it's impossible to prove a negative) to gridlock the discussion and sabotage consensus. There is no point in having an argument where one side is not interested in reaching a consensus. Therefore, I strongly suggest to stop engaging.

@rsc has provided strong evidence that this change - for both kinds of loops - fixes overwhelmingly more problems than it causes. If anyone is dissatisfied with that level of evidence, they are free to provide their own. Or at the very least, provide clear, falsifiable claims on how such evidence can be obtained. Until then, there's just no point in belaboring this argument.

The claim has never been that there are no cases where this proposal would break real-world code. The claim was that this proposal will fix overwhelmingly more problems than it causes.

Yeah, that's why I support the semantic change of range loops. As I said before, because there are only a few ways range loops can be (ab)used, # problems fixed >>>>> # problems caused.

As for the semantic change of 3-clause loops, I hope at least we can all agree that, although the problems caused by both types of loops are not many, the semantic change of 3-clause loops will cause more problems than that of range loops will do. That's because, as I explained before, there are more ways 3-clause loops can be (ab)used. (For details, please read a later section in my original comment that explains the grammar of 3-clause loops: #60078 (comment) .) So, even if there will be more problems fixed than problems caused, it won't be as dramatic as in the range loops case. In the 3-clause loops case, # problems fixed >> # problems caused.

We must remember that we will break our stability promise. We should only do that if the change will break almost no code (which I believe true for the range loops case, but not for the 3-clause loops case). If we accept any change that will fix more than it will break, this proposal won't be the last that will break the stability promise. (Aside: Will it be the first proposal that will break it?)

To reiterate and summarize my points about sampling bias, basically there are two parts of the problem:

  1. How can we be sure that we don't oversample # problems fixed and undersample # problems caused?

  2. Even if we can solve the first problem, there is a harder one: How can we enlarge our sample so that we don't rely only on public code and Google's codebase? It may require a large-scale questionnaire project involving many companies that heavily use Go. We need to realize that today only a fraction of Go users are aware of the planned change.

That's what I think we should do if we'd like to go the empirical-evidence route. Alternatively, we can approach it analytically, more specifically "combinatorially", i.e. acknowledging from the for grammar that there are many more ways 3-clause loops can be (ab)used than range loops.


On a side note, I think more people would support the semantic change of 3-clause loops if—as unlikely as it is—Go 2 were released to accommodate this proposal. (Why don't we want to do that, though? Because Go 2 is reserved for bigger breaking changes? But if we are to follow semver, we should increase the major version.) That way, virtually all Go users would be aware of the breaking change, and no promises would be broken. In contrast, far fewer users will be aware of the breaking change if only the minor version is incremented.

rsc commented

@go101 and @danieljl, we have heard your arguments, many times now. The fact that most of the participants here still disagree with you does not mean you have been unclear or misunderstood. Please step back and leave room for others to participate and bring up other concerns. Thank you.

we have heard your arguments, many times now. The fact that most of the participants here still disagree with you does not mean you have been unclear or misunderstood. Please step back and leave room for others to participate and bring up other concerns. Thank you.

@rsc I thought we still had an open and unfinished discussion. You—the very person who suggested that empirical evidence is more important than anything—didn't even respond to my original post challenging the underlying assumptions of using empirical evidence. But sure, I'll step back if no one else wants to discuss my concerns anymore. If you consider my comments noise, I apologize.

Maybe if I can ask, when the proposal is accepted and the next Go version is released, please at least mention the semantic change in the title of the release announcement post. Also, in the changelogs please make the semantic change very conspicuous, e.g. by making the font bold and red. That way, hopefully more Go users will be aware of the broken stability promise and can plan accordingly.

Fantastic, and I'm really looking forward to this change!

My only real question: Can you provide more examples of situations where this breaks things? (Even when they are cases of less than ideal code.) I saw the allocation use-case mentioned. Very interesting, thank you. You also mention certain percentages above, so I assume you have lots more examples. In particular, I might have missed the example but you wrote:

Of all the test failures, only one was caused by a loop variable semantic test change in non-test code.

Could you show us this code, or did I miss it in the wall of text above? Cheers!

Of all the test failures, only one was caused by a loop variable semantic test change in non-test code.

Could you show us this code, or did I miss it in the wall of text above?

As it says above, the design doc has more details. In this case see https://go.googlesource.com/proposal/+/master/design/60078-loopvar.md#fixes and search for "a test checks that the code contains no allocations or even runtime write barriers."

search for "a test checks that the code contains no allocations or even runtime write barriers."

Yeah I saw that one, thank you! I thought the original message alluded to some more, sorry. If there are any other known bugs at all created by this change, I'd like to see some!

Keep up the great work!

@purpleidea in case you haven't read them:

@purpleidea that failing example is an interesting one. If the analysis were a little pickier, it would not have applied there. The address is taken on a code path that ends in a break, so it is technically no longer in the iteration (though it is in scope). We implemented a minor optimization for an easier-to-recognize case where the addressed variable is instead returned, e.g., return &saved. I thought about doing it for break but so far have not, partly because it is trickier than return.

Hi @dr2chase

@purpleidea that failing example is an interesting one. [...]

Just to confirm, you are talking about how a potential future improvement to the compiler could have avoided a new allocation under the new semantics (which normally would be an optimization whose absence wouldn't cause a test failure, but in this case the test was explicitly checking that no allocations occurred)?

Related snippet from the design doc:

var original *mapping
for _, saved := range savedMappings {
	if saved.start <= m.start && m.end <= saved.end {
		original = &saved
		break
	}
}

Unfortunately, this code was in a very low-level support program that is invoked when a program is crashing, and a test checks that the code contains no allocations or even runtime write barriers.

I've been thinking about this a bit. At first the tiny performance hit bugged me. A person in my company's Slack commented that we will soon have "performance loops" and "range loops". I sat with that idea for a bit, and I don't think it is a bad thing. In addition to the common bugs with for-range loops that this proposal addresses, I have found the order of assignment variables in a for-range loop to be confusing for new gophers and new programmers. Last week I pair-programmed with a senior engineer who paused, trying to remember the order of values on the left-hand side of the for-range expression. I felt that confusion. I used to write Vue.js templates and would get confused going back and forth (they do it the other way around).

Go programmers tend to prioritize readability over compact code. I feel that moving forward (regardless of whether this proposal is accepted) I will prefer for-index loops when iterating through slices/arrays, written like so (this is less error-prone, more performant (not that it usually matters), and I need to remember less syntax):

for index := 0; index < len(slice); index++ {
  element := slice[index]
  //...
}

I will continue to use the range loop for maps and channels. I've noticed the ambiguity on the left side of the range expression is not an issue in those cases.

Go programmers tend to prioritize welcoming new programmers. Making for-range loops less error-prone for those who care to use them seems inclusive, as does using loops that read more like C/JavaScript for-index loops.

@thepudds yes, that one failure, if we/I had implemented that trickier optimization (I am not adept at tree-node transformations) would have avoided transforming that loop and thus also avoided that failure.

I have been programming full-time in Go for the last 5 years, and I have only once or twice run into a bug caused by this (which was really quickly solved) in all these 5 years. (Maybe dislike this comment if you have run into it more than, say, once a year.)

To show I am really not a genius programmer, I have about 50 times more often run into the stupid mistake of updating an array element that is iterated over by value in a range operation (i.e. for i, v := range array { v = ... }). Even that was not a big problem. So I wonder if the cure is worth the hassle. (Real-world examples that illustrate it would really help decide this.)

I am saying this especially because, if this is implemented, it would potentially force the programmer to look up the Go version in the mod file for every package worked on, in all code reviews, etc. Then, what if you want to use, say, Go >1.22 for other stuff but do not want to check all the for loops in all your code (yet)? I mean, you're kind of forced to. It will add hooks for senseless team discussions too, maybe.

All in all, people will manage, of course; I certainly won't have sleepless nights if this is added. I just want to note that it adds another dime to the bag of cognitive load without solving any significant existing problem in reality. It may also serve as "prior art" for other future "incompatible" language tweaks, because "we did that for-loop thing too".

Finally, I find the example given in the proposal a bit misleading, because safe and correct parallel programming without channels is really hard anyway (to me, at least). So this proposal could perhaps be based on a more mundane example.

Totally agree. This is indeed another concern I planned to raise.
Using minor semver versions to mark feature additions is a good idea, but it is not a good idea to use them to mark behavior changes.
I even think it is not a good idea to use major semver versions to mark language semantic changes.

I'm not sure I see a practical problem being solved by this surgery. Despite the seemingly good reasoning, you are about to break the compatibility promise, which clearly states: "It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification." Such decisions open a Pandora's box of random "language improvements" with unpredictable fallout.

Along with issues for Go beginners, the worst scenario I expect is tricky bugs in third-party libraries that are accidentally/incidentally upgraded. The vast majority of my Go projects have tons of dependencies, and some of them contain spaghetti code. This imperfect code is not under my control. Its only merit is that it is time-proven and stable.
If this proposal is accepted, that stability will be gone for a long time.

I have been programming full-time in Go for the last 5 years, and I have only once or twice run into a bug caused by this (which was really quickly solved) in all these 5 years. (Maybe dislike this comment if you have run into it more than, say, once a year.)

There are two costs associated with this issue: One is actual bugs slipping through reviews and causing issues; the other is having to put x := x into your code. For the former, we have pretty concrete data. The second is harder to quantify, but it's certainly annoying and confusing. The question "why is there an x := x here?" comes up a lot with newcomers.

There's also the point that something that may happen rarely individually, can happen frequently at scale. If this bug happens to every Go programmer ~once a year, at an estimated ~1M Go developers, that's ~1M preventable bugs per year.

I am saying this especially because, if this is implemented, it would potentially force the programmer to look up the Go version in the mod file for every package worked on, in all code reviews, etc.

I don't think this is quite as big a problem in practice. The vast majority of loops are not affected. I'd also expect most people who do code reviews to be somewhat aware of the context of the project.

It's also a temporary issue. After a year or so, older Go versions will be unsupported and phased out.

Then, what if you want to use say go >1.22 for other stuff but do not want to check all for loops in all your code (yet)? I mean you're kinda forced to it.

That is kind of true. But note that there is tooling to make the upgrade as simple as possible. In particular, you don't have to check all loops - the bisect tool can give you a set to check. And there are escape hatches. If everything fails, you could even put //go:build go1.22 into all existing .go files to get the old behavior and then put the stuff that needs a newer Go version into a separate file with //go:build go1.23.

But, really, if you are affected by the migration, that means you most likely currently have a bug you didn't notice. I'm not sure I can sympathize with that suddenly being a huge problem.

@pkorotkov The proposal is aware that this constitutes a break of the compatibility promise. As for the slippery slope argument, it is also pretty clear that this is a one-time exception. No one wants a world where we do things like this more often.

For your concern about 3rd party code, note that nothing about that will change, unless that 3rd party itself decides to upgrade to a newer Go version. That is, if the go.mod of your dependency says go 1.21, it will be compiled with the current semantics for loops, regardless of what your go.mod says.

Hi @markusheukelom

Finally, I find the example given in the proposal a bit misleading because safe&correct parallel programming without channels is really hard anyway (to me, anyways). So this proposal could be based off some more mundane example to do it more right maybe.

Here are some non-concurrency examples.

CL 40937 fixes a bug in x/crypto/ssh that is a good example of how the current semantics can bite you even without any concurrency or parallel programming. Here the bug is due to the pointer that is stored in the map:

	knownKeys := map[string]*KnownKey{}
	for _, l := range db.lines {
		if l.match(addrs) {
			typ := l.knownKey.Key.Type()
			if _, ok := knownKeys[typ]; !ok {
				knownKeys[typ] = &l.knownKey
			}
		}
	}

And here's a simplified version (taken from the CL description):

	var p *int
	a := []int{0, 1, 2, 3}
	for _, i := range a {
		if i == 1 {
			p = &i
		}
	}
	fmt.Println(*p) // Prints 3

That example happens to be one where I saw some long-time Go contributors stare at the original problem and the simplified version, and even knowing there was a loop capture bug, they concluded "I don't see the bug".

You can see some other real-world non-concurrency problems in the examples I gathered here in #57173. (The context there is I had sent an improvement for the go vet loopclosure analysis for Go 1.20, and that issue is talking about possible follow-up steps).

The first two examples in the original pre-proposal discussion in #56010 also show the current semantics causing bugs even without any concurrency.

In some sense, the non-concurrency examples are arguably more insidious, including because the race detector won't usually flag them.

@thepudds Thanks for your example.

While this proposal might fix the immediate bug in your example, it potentially leads to much more obscure bugs later in the application.

Consider:

var p *int
a := []int{0, 1, 2, 3}
for _, i := range a { // ! assume i is created every iteration, as per the proposal
	if i == 1 {
		p = &i
	}
}
fmt.Println(*p) // prints "1" correctly now

// let's update the entry in the slice
*p = 99
fmt.Println(a) // prints [0, 1, 2, 3], instead of the intended [0, 99, 2, 3]

This is especially true in your original (non-simplified) example: if someone takes a pointer to some value, it's likely because the value is expected to be updated at some point (the other reason is efficiency, but that doesn't make sense in the provided example). However, while everything seems to work fine, knownKeys now points to copies of the db.lines entries. This will lead to potentially very hard-to-understand bugs later if this map is used to update keys.

In contrast, today you are forced to write something like this:

// best
a := []int{0, 1, 2, 3}
for i := range a {
	v := &a[i]
	if i == 1 {
		p = v
	}
}

// - OR -
	
// often seen too
a := []int{0, 1, 2, 3}
for _, i := range a {
	i := i
	if i == 1 {
		p = &i
	}
}

The former is always to be preferred; this will not have "update p somewhere a lot later" bugs. The latter works as long as p is only read. However, at least this second form makes it much more explicit that i is copied and that you are pointing to a copy of a value.

Note that the real issue here seems to be that it's not possible to range over the addresses of the values. Something like for _, p := &range a.

@markusheukelom I don't understand the argument. The code you said is incorrect, is just as incorrect today as it will be after the proposal. The first of your correct examples is just as correct after the proposal, as it is today. The second of your correct examples is actually broken (it behaves the same as your "broken" example), again just the same today as after the proposal.

So I don't understand how your example makes a case that the proposal would lead to more subtle bugs. Exactly the same code is exactly as broken, before and after.

What am I missing?

@markusheukelom I'm not too sure about the semantics you have in mind but note that in the code you wrote, I don't see how p is pointing into the slice.

It is pointing to the iteration variable. So if you were to print *p again at the end, you would just get different values.

Still in both cases, a doesn't change unless I'm mistaken.

On the topic of go.mod, it only affects libraries that are being upgraded.

Since the compiler does not check the semantics of user code, only what can be encoded by the type system, tests should break if people rely on the old semantics (and only if the old behavior is actually relied upon), meaning code should be tested anyway.
There is also tooling to help.
In fact, it doesn't really break backward compatibility; it's forward compatibility that may need work.

Given that, there is no need to worry I think.

First, as a reminder for anyone reading or posting here, it's pretty easy to try the loopvar GOEXPERIMENT, and to do so without impacting your default installation of Go:

$ go install golang.org/dl/gotip@latest  # install a light wrapper (in standard GOBIN location)
$ gotip download                         # clone & compile the latest dev version (stored in ~/sdk/gotip)
$ gotip version                          # run 'gotip' instead of your normal 'go' command
$ GOEXPERIMENT=loopvar gotip test ./...  # test with loopvar experiment enabled

On Windows, the last step can be broken into set GOEXPERIMENT=loopvar then gotip test ./... or similar.


@markusheukelom, just to make some of the feedback on your example more concrete.

If we compare Go 1.20 vs. gotip with GOEXPERIMENT=loopvar for your example from above in #60078 (comment), it prints the following (with some brief comments added by me):

$ go1.20.4 run .
3                 // surprising to many
[0 1 2 3]         // p never points into slice, so *p = 99 does not update slice

vs.

$ GOEXPERIMENT=loopvar gotip run .
1                  // improvement over Go 1.20
[0 1 2 3]          // same as Go 1.20

The 3 vs. 1 output is basically the example from CL 40937.

The slice output of [0 1 2 3] is the same, so you might have had a mistake in the "try to update the slice" part of your example, or I might have misunderstood what you were trying to show.

Despite the seemingly good reasoning, you are about to break the compatibility promise, which clearly states that "It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification." Such decisions open a Pandora's box of random "language improvements" with unpredictable fallout.

Promises were broken before, for correctness reasons. And most likely it will be done again.

Promises were broken before, for correctness reasons. And most likely it will be done again.

This disallowance (and, I think, the others you meant) will result in compiler errors. @DmitriyMV, respectfully, it's quite different from a silent change in behavior like the one in question in this proposal.

it's quite different from a silent change in behavior like the one in question in this proposal.

Before behavior changes, a version number has to be bumped somewhere, so I don't see this as exactly a silent change. Along the lines of #60078 (comment), we could say bumping version numbers without adequate testing may or may not introduce bugs, but I'm wondering if it's more useful to assert that it definitely introduces a code quality defect.

FWIW I was web searching e.g. stackexchange or Go documentation or blogging to see if there's a concise formulation of best practices for bumping version numbers, in a way that's focused on minimizing introducing defects (not bugs, but more intangible code quality defects). IMHO go tooling generally promotes better decisions, but people (especially people new to Go) might have intuitions that don't transfer etc. I didn't find something that seemed excellent, although nothing blatantly wrong either.

Before behavior changes a version number has to be bumped somewhere, so I don't see this as exactly a silent change.

I don't wish to appear negative, but this sounds too artificial: in the course of time developers will have to upgrade their packages, out of options, simply because the Go language moves forward and its older versions become unmaintained. And that's where an ambush could lie.

@Merovius (and @atdiar)

@markusheukelom I don't understand the argument. The code you said is incorrect is just as incorrect today as it will be after the proposal. The first of your correct examples is just as correct after the proposal as it is today. The second of your correct examples is actually broken (it behaves the same as your "broken" example), again just the same today as after the proposal.

So I don't understand how your example makes a case that the proposal would lead to more subtle bugs.

Because with the new proposal you may be more easily fooled into thinking that

a := []int{0, 1, 2, 3}
var p *int
for _, i := range a {
	if i == 1 {
		p = &i
	}
}

is actually a good implementation, not realising that p does not point into a but to a copy of a[1] instead. (Note that this example is extremely simplified; in the real world it is a lot more noisy, see above).

You are possibly more easily fooled, because today this does not work at all: both reading and writing through p gives wrong or unintended results. With the new proposal, however, the code above works as long as you only read through p. But if, somewhere far away from this loop, the value is updated through p, the update is not reflected back in the slice. This is more subtle because this "update-p bug" may only manifest when a is persisted, for example, without the updated value. That can be far away from this loop, both in code and in time.

Instead, what the programmer likely intended to do is:

a := []int{0, 1, 2, 3}
var p *int
for i := range a {
	v := &a[i]
	if i == 1 {
		p = v
	}
}

Now p can be used to update the slice element.

My argument is that the real issue in the example is that you actually just want an easier / more natural way to get a pointer to the slice element, instead of a per-iteration variable copy of the element.

I do agree however, that technically speaking the proposal fixes the bug in the example.

I hope this clarifies my point a bit.

I don't wish to appear negative but this sounds too artificial, in the course of time developers will have to upgrade their packages.

You have to maintain your software somehow - Go already goes to very serious lengths to support old codebases. This feature will not change that; in fact, it even solidifies the current approach of slow evolution. But at the same time, a language either evolves and fixes old mistakes, becomes bloated with different ways of doing one thing, or sees people start to leave. I don't think we want the C++ approach of preserving old ways of doing things while adding new ones - that results in incomprehensible code.

@markusheukelom I find that a thin presupposition, to be honest. It seems to me that it chains someone misunderstanding the language in several basic ways with writing relatively uncommon code, and then verifying it in a pretty niche way (by printing the value a pointer points at, instead of the pointer itself - or writing a test for the behavior). Which is not to say it can't happen, but to me at least, it can't really offset the kind of real-world data and experience we have about the bugs caused by the current behavior.

My argument is that the real issue in the example is that you actually just want an easier / more natural way to ge a pointer to the slice element, instead of a per-iteration variable copy of the element.

Even if that was true (I'm always skeptical of talking about "the real cause" of a misunderstanding), this particular example is only one example of many. And most of those happen around closures, which definitely do not make this assumption.

@DmitriyMV @pkorotkov @AndrewHarrisSPU Personally, I don't think any of these are really debatable:

  1. This proposal breaks the compatibility promise. Using the go directive in go.mod somewhat mitigates that (because the compiler can use the old behavior if the code was written under old assumptions), but there was Go before modules, so it definitely does break the compatibility promise.
  2. We decided that we are okay to break the compatibility promise in a process dubbed "Go 2", as long as it happens in a well-defined set of ways - namely, it must be a "removal", not a "redefinition", so that the compiler can report a breakage.
  3. This proposal violates the rules set there as well, by being a redefinition. It's literally the example given there for such a change.
  4. The proposal text mentions all of this. It also clearly states that this is a one-time exception.

In my opinion, it is neither helpful to mince words and pretend that this proposal does not break promises we set, nor is it helpful to belabor that point. The people in favor of this proposal are aware of the breakage. You won't convince them that the proposal should not be adopted by continuing to talk about it.

If you are worried about the precedent this proposal would set, the way to go about it is to point anyone who, in the future, tries to make the argument "but we redefined loop semantics" at that last point, to definitively say that this can't be used as a precedent for future redefinitions. We would only be adopting this proposal under the assumption that it doesn't set a precedent, so it would be self-contradictory to use it that way.