golang/go

Proposal: A built-in Go error check function, "try"

griesemer opened this issue · 815 comments

Proposal: A built-in Go error check function, try

This proposal has been closed. Thanks, everybody, for your input.

Before commenting, please read the detailed design doc and see the discussion summary as of June 6, the summary as of June 10, and most importantly the advice on staying focussed. Your question or suggestion may have already been answered or made. Thanks.

We propose a new built-in function called try, designed specifically to eliminate the boilerplate if statements typically associated with error handling in Go. No other language changes are suggested. We advocate using the existing defer statement and standard library functions to help with augmenting or wrapping of errors. This minimal approach addresses most common scenarios while adding very little complexity to the language. The try built-in is easy to explain, straightforward to implement, orthogonal to other language constructs, and fully backward-compatible. It also leaves open a path to extending the mechanism, should we wish to do so in the future.

[The text below has been edited to reflect the design doc more accurately.]

The try built-in function takes a single expression as argument. The expression must evaluate to n+1 values (where n may be zero) where the last value must be of type error. It returns the first n values (if any) if the (final) error argument is nil, otherwise it returns from the enclosing function with that error. For instance, code such as

f, err := os.Open(filename)
if err != nil {
	return …, err  // zero values for other results, if any
}

can be simplified to

f := try(os.Open(filename))

try can only be used in a function which itself returns an error result, and that result must be the last result parameter of the enclosing function.
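For illustration, here is a minimal sketch of a function using try under this proposal (the function, its callees, and the Config type are made up for the example):

func readConfig(path string) (cfg Config, err error) {
	data := try(ioutil.ReadFile(path))  // on error: returns the zero Config and the error
	try(json.Unmarshal(data, &cfg))     // n may be zero; try is then just a statement
	return cfg, nil
}

If either call produces a non-nil error, readConfig returns immediately with the zero value for cfg and that error as err.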

This proposal reduces the original draft design presented at last year's GopherCon to its essence. If error augmentation or wrapping is desired there are two approaches: Stick with the tried-and-true if statement, or, alternatively, “declare” an error handler with a defer statement:

defer func() {
	if err != nil {	// no error may have occurred - check for it
		err = …  // wrap/augment error
	}
}()

Here, err is the name of the error result of the enclosing function. In practice, suitable helper functions will reduce the declaration of an error handler to a one-liner. For instance

defer fmt.HandleErrorf(&err, "copy %s %s", src, dst)

(where fmt.HandleErrorf decorates *err) reads well and can be implemented without the need for new language features.

The main drawback of this approach is that the error result parameter needs to be named, possibly leading to less pretty APIs. Ultimately this is a matter of style, and we believe we will adapt to expecting the new style, much as we adapted to not having semicolons.
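As a sketch of the style this leads to (assuming a fmt.HandleErrorf helper along the lines suggested above), a complete function might read:

func WriteGreeting(filename string) (err error) {
	defer fmt.HandleErrorf(&err, "writing greeting to %s", filename)
	f := try(os.Create(filename))
	defer f.Close()
	try(f.WriteString("hello"))
	return nil
}

The error result is named here only so the deferred handler can refer to it.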

In summary, try may seem unusual at first, but it is simply syntactic sugar tailor-made for one specific task, error handling with less boilerplate, and to handle that task well enough. As such it fits nicely into the philosophy of Go. try is not designed to address all error handling situations; it is designed to handle the most common case well, to keep the design simple and clear.

Credits

This proposal is strongly influenced by the feedback we have received so far. Specifically, it borrows ideas from:

Detailed design doc

https://github.com/golang/proposal/blob/master/design/32437-try-builtin.md

tryhard tool for exploring impact of try

https://github.com/griesemer/tryhard

rasky commented

I agree this is the best way forward: fixing the most common issue with a simple design.

I don't want to bikeshed (feel free to postpone this conversation), but Rust went there and eventually settled with the ? postfix operator rather than a builtin function, for increased readability.

The gophercon proposal cites ? in the considered ideas and gives three reasons why it was discarded: the first ("control flow transfers are as a general rule accompanied by keywords") and the third ("handlers are more naturally defined with a keyword, so checks should too") do not apply anymore. The second is stylistic: it says that, even if the postfix operator works better for chaining, it can still read worse in some cases like:

check io.Copy(w, check newReader(foo))

rather than:

io.Copy(w, newReader(foo)?)?

but now we would have:

try(io.Copy(w, try(newReader(foo))))

which I think is clearly the worst of the three, as it's not even obvious anymore which function is the main one being called.

So the gist of my comment is that all three reasons cited in the gophercon proposal for not using ? do not apply to this try proposal; ? is concise, very readable, it does not obscure the statement structure (with its internal function call hierarchy), and it is chainable. It removes even more clutter from the view, while not obscuring the control flow more than the proposed try() already does.

To clarify:

Does

func f() (n int, err error) {
  n = 7
  try(errors.New("x"))
  // ...
}

return (0, "x") or (7, "x")? I'd assume the latter.

Does the error return have to be named in the case where there's no decoration or handling (like in an internal helper function)? I'd assume not.

Your example returns 7, errors.New("x"). This should be clear in the full doc that will soon be submitted (https://golang.org/cl/180557).

The error result parameter does not need to be named in order to use try. It only needs to be named if the function needs to refer to it in a deferred function or elsewhere.
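For illustration, both of these would be valid under the proposal (a sketch; the second variant assumes the fmt.HandleErrorf helper mentioned above):

// Error result unnamed: fine, nothing needs to refer to it.
func parsePort(s string) (int, error) {
	n := try(strconv.Atoi(s))
	return n, nil
}

// Error result named, because the deferred handler refers to it.
func parsePortVerbose(s string) (n int, err error) {
	defer fmt.HandleErrorf(&err, "parsing port %q", s)
	n = try(strconv.Atoi(s))
	return n, nil
}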

I am really unhappy with a built-in function affecting control flow of the caller. This is very unintuitive and a first for Go. I appreciate the impossibility of adding new keywords in Go 1, but working around that issue with magic built-in functions just seems wrong to me. It's worsened by the fact that built-ins can be shadowed, which drastically changes the way try(foo) behaves. Shadowing of other built-ins doesn't have results as unpredictable as control flow changing. It makes reading snippets of code without all of the context much harder.

I don't like the way postfix ? looks, but I think it still beats try(). As such, I agree with @rasky .

Edit: Well, I managed to completely forget that panic exists and isn't a keyword.

The detailed proposal is now here (pending formatting improvements, to come shortly) and will hopefully answer a lot of questions.

@dominikh The detailed proposal discusses this at length, but please note that panic and recover are two built-ins that affect control flow as well.

One clarification / suggestion for improvement:

if the last argument supplied to try, of type error, is not nil, the enclosing function’s error result variable (...) is set to that non-nil error value before the enclosing function returns

Could this instead say is set to that non-nil error value and the enclosing function returns? (s/before/and)

On first reading, before the enclosing function returns seemed like it would eventually set the error value at some point in the future right before the function returned - possibly in a later line. The correct interpretation is that try may cause the current function to return. That's surprising behavior for the current language, so clearer text would be welcome.

I think this is just sugar, and a small number of vocal opponents teased golang about the repeated use of typing if err != nil ... and someone took it seriously. I don't think it's a problem. The only missing things are these two built-ins:

https://github.com/purpleidea/mgmt/blob/a235b760dc3047a0d66bb0b9d63c25bc746ed274/util/errwrap/errwrap.go#L26

Not sure why anyone ever would write a function like this but what would be the envisioned output for

try(foobar())

If foobar returned (error, error)

I retract my previous concerns about control flow and I no longer suggest using ?. I apologize for the knee-jerk response (though I'd like to point out this wouldn't have happened had the issue been filed after the full proposal was available).

I disagree with the necessity for simplified error handling, but I'm sure that is a losing battle. try as laid out in the proposal seems to be the least bad way of doing it.

@webermaster Only the last error result is special for the expression passed to try, as described in the proposal doc.
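In other words (a sketch with a made-up foobar), given

func foobar() (error, error)

the statement

err1 := try(foobar())

would assign the first error to err1 as an ordinary value; only the second (last) error is checked by try and, if non-nil, returned from the enclosing function.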

Like @dominikh, I also disagree with the necessity of simplified error handling.

It moves vertical complexity into horizontal complexity which is rarely a good idea.

If I absolutely had to choose between simplifying error handling proposals, though, this would be my preferred proposal.

It would be helpful if this could be accompanied (at some stage of accepted-ness) by a tool to transform Go code to use try in some subset of error-returning functions where such a transformation can be easily performed without changing semantics. Three benefits occur to me:

  • When evaluating this proposal, it would allow people to quickly get a sense for how try could be used in their codebase.
  • If try lands in a future version of Go, people will likely want to change their code to make use of it. Having a tool to automate the easy cases will help a lot.
  • Having a way to quickly transform a large codebase to use try will make it easy to examine the effects of the implementation at scale. (Correctness, performance, and code size, say.) The implementation may be simple enough to make this a negligible consideration, though.

I just would like to express that I think a bare try(foo()) actually bailing out of the calling function takes away from us the visual cue that function flow may change depending on the result.

I feel I can work with try given enough time getting used to it, but I also feel we will need extra IDE support (or some such) to highlight try, to efficiently recognize the implicit control flow in code reviews and debugging sessions.

The thing I'm most concerned about is the need to have named return values just so that the defer statement is happy.

I think the overall error handling issue that the community complains about is a combination of the boilerplate of if err != nil AND adding context to errors. The FAQ clearly states that the latter is left out intentionally as a separate problem, which to me makes this an incomplete solution, but I'm willing to give it a chance after thinking on these two things:

  1. Declare err at the beginning of the function.
    Does this work? I recall issues with defer & unnamed results. If it doesn't the proposal needs to consider this.
func sample() (string, error) {
  var err error
  defer fmt.HandleErrorf(&err, "whatever")
  s := try(f())
  return s, nil
}
  2. Assign values like we did in the past, but use a helper wrapf function that has the if err != nil boilerplate.
func sample() (string, error) {
  s, err := f()
  try(wrapf(err, "whatever"))
  return s, nil
}
func wrapf(err error, format string, v ...interface{}) error {
  if err != nil {
    // err = wrapped error
  }
  return err
}

If either work, I can deal with it.

func sample() (string, error) {
  var err error
  defer fmt.HandleErrorf(&err, "whatever")
  s := try(f())
  return s, nil
}

This will not work. The defer will update the local err variable, which is unrelated to the return value.

func sample() (string, error) {
  s, err := f()
  try(wrapf(err, "whatever"))
  return s, nil
}
func wrapf(err error, format string, v ...interface{}) error {
  if err != nil {
    // err = wrapped error
  }
  return err
}

That should work. It will call wrapf even on a nil error, though.
This will also (continue to) work, and is IMO a lot clearer:

func sample() (string, error) {
  s, err := f()
  if err != nil {
      return "", wrap(err)
  }
  return s, nil
}

No one is going to make you use try.

Not sure why anyone ever would write a function like this but what would be the envisioned output for

try(foobar())

If foobar returned (error, error)

Why would you return more than one error from a function? If you are returning more than one error from a function, perhaps the function should be split into two separate ones in the first place, each returning just one error.

Could you elaborate with an example?

@cespare: It should be possible for somebody to write a go fix that rewrites existing code suitable for try such that it uses try. It may be useful to get a feel for how existing code could be simplified. We don't expect any significant changes in code size or performance, since try is just syntactic sugar, replacing a common pattern by a shorter piece of source code that produces essentially the same output code. Note also that code that uses try will be bound to use a Go version that's at least the version at which try was introduced.

@lestrrat: Agreed that one will have to learn that try can change control flow. We suspect that IDE's could highlight that easily enough.

@Goodwine: As @randall77 already pointed out, your first suggestion won't work. One option we have thought about (but not discussed in the doc) is the possibility of having some predeclared variable that denotes the error result (if one is present in the first place). That would eliminate the need for naming that result just so it can be used in a defer. But that would be even more magic; it doesn't seem justified. The problem with naming the return result is essentially cosmetic, and where that matters most is in the auto-generated APIs served by go doc and friends. It would be easy to address this in those tools (see also the detailed design doc's FAQ on this subject).

@nictuku: Regarding your suggestion for clarification (s/before/and/): I think the code immediately before the paragraph you're referring to makes it clear what happens exactly, but I see your point, s/before/and/ may make the prose clearer. I'll make the change.

See CL 180637.

I actually really like this proposal. However, I do have one criticism. The exit points of functions in Go have always been marked by a return. Panics are also exit points, however those are catastrophic errors that are typically not meant to ever be encountered.

Making an exit point of a function that isn't a return, and is meant to be commonplace, may lead to much less readable code. I had heard about this in a talk and it is hard to unsee the beauty of how this code is structured:

func CopyFile(src, dst string) error {
	r, err := os.Open(src)
	if err != nil {
		return fmt.Errorf("copy %s %s: %v", src, dst, err)
	}
	defer r.Close()

	w, err := os.Create(dst)
	if err != nil {
		return fmt.Errorf("copy %s %s: %v", src, dst, err)
	}

	if _, err := io.Copy(w, r); err != nil {
		w.Close()
		os.Remove(dst)
		return fmt.Errorf("copy %s %s: %v", src, dst, err)
	}

	if err := w.Close(); err != nil {
		os.Remove(dst)
		return fmt.Errorf("copy %s %s: %v", src, dst, err)
	}
	return nil
}

This code may look like a big mess, and it was meant to look that way in the error handling draft, but let's compare it to the same thing with try.

func CopyFile(src, dst string) (err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("copy %s %s: %v", src, dst, err)
		}
	}()
	r := try(os.Open(src))
	defer r.Close()

	w := try(os.Create(dst))

	defer w.Close()
	defer func() {
		if err != nil {
			os.Remove(dst) // only if a “try” fails
		}
	}()
	try(io.Copy(w, r))
	try(w.Close())

	return nil
}

You may look at this at first glance and think it looks better, because there is a lot less repeated code. However, it was very easy to spot all of the places where the function returned in the first example. They were all indented and started with return, followed by a space. This is because all conditional returns must be inside of conditional blocks, thereby being indented by gofmt standards. return is also, as previously stated, the only way to leave a function without saying that a catastrophic error occurred. In the second example, there is only a single return, so it looks like the only thing that the function ever should return is nil. The last two try calls are easy to see, but the first two are a bit harder, and would be even harder if they were nested somewhere, i.e. something like proc := try(os.FindProcess(try(strconv.Atoi(os.Args[1])))).

Returning from a function has seemed to have been a "sacred" thing to do, which is why I personally think that all exit points of a function should be marked by return.

Someone already implemented this five years ago. If you are interested, you can try this feature:

https://news.ycombinator.com/item?id=20101417

I implemented try() in Go five years ago with an AST preprocessor and used it in real projects, it was pretty nice: https://github.com/lunixbochs/og

Here are some examples of me using it in error-check-heavy functions: https://github.com/lunixbochs/poxd/blob/master/tls.go#L13

I appreciate the effort that went into this. I think it's the most go-ey solution I've seen so far. But I think it introduces a bunch of work when debugging. Unwrapping try and adding an if block every time I debug and rewrapping it when I'm done is tedious. And I also have some cringe about the magical err variable that I need to consider. I've never been bothered by the explicit error checking so perhaps I'm the wrong person to ask. It always struck me as "ready to debug".

@griesemer
My problem with your proposed use of defer as a way to handle the error wrapping is that the behavior from the snippet I showed (repeated below) is not very common AFAICT, and because it's very rare, I can imagine people writing this and thinking it works when it doesn't.

Like... a beginner wouldn't know this; if they have a bug because of it, they won't go "of course, I need a named return", they would get stressed out because it should work and it doesn't.

var err error
defer fmt.HandleErrorf(&err, "whatever")

try is already too magic, so you may as well go all the way and add that implicit error value. Think of the beginners, not of those who know all the nuances of Go. If it's not clear enough, I don't think it's the right solution.

Or... Don't suggest using defer like this, try another way that's safer but still readable.

@deanveloper It is true that this proposal (and for that matter, any proposal trying to attempt the same thing) will remove explicitly visible return statements from the source code - that is the whole point of the proposal after all, isn't it? To remove the boilerplate of if statements and returns that are all the same. If you want to keep the returns, don't use try.

We are used to immediately recognizing return statements (and panics) because that's how this kind of control flow is expressed in Go (and many other languages). It seems not far-fetched that we will also recognize try as changing control flow after some getting used to it, just like we do for return. I have no doubt that good IDE support will help with this as well.

I have two concerns:

  • named returns have been very confusing, and this encourages them with a new and important use case
  • this will discourage adding context to errors

In my experience, adding context to errors immediately after each call site is critical to having code that can be easily debugged. And named returns have caused confusion for nearly every Go developer I know at some point.

A more minor, stylistic concern is that it's unfortunate how many lines of code will now be wrapped in try(actualThing()). I can imagine seeing most lines in a codebase wrapped in try(). That feels unfortunate.

I think these concerns would be addressed with a tweak:

a, b, err := myFunc()
check(err, "calling myFunc on %v and %v", a, b)

check() would behave much like try(), but would drop the behavior of passing through function return values generically, and instead would provide the ability to add context. It would still trigger a return.
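Roughly, the example above would behave like the following (a sketch; exactly how the context is attached is an open detail):

a, b, err := myFunc()
if err != nil {
	return ..., fmt.Errorf("calling myFunc on %v and %v: %v", a, b, err)
}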

This would retain many of the advantages of try():

  • it's a built-in
  • it follows the existing control flow with respect to defer
  • it aligns with existing practice of adding context to errors well
  • it aligns with current proposals and libraries for error wrapping, such as errors.Wrap(err, "context message")
  • it results in a clean call site: there's no boilerplate on the a, b, err := myFunc() line
  • describing errors with defer fmt.HandleError(&err, "msg") is still possible, but doesn't need to be encouraged.
  • the signature of check is slightly simpler, because it doesn't need to return an arbitrary number of arguments from the function it is wrapping.

@s4n-gt Thanks for this link. I was not aware of it.

@Goodwine Point taken. The reason for not providing more direct error handling support is discussed in the design doc in detail. It is also a fact that over the course of a year or so (since the draft designs published at last year's Gophercon) no satisfying solution for explicit error handling has come up. Which is why this proposal leaves this out on purpose (and instead suggests to use a defer). This proposal still leaves the door open for future improvements in that regard.

The proposal mentions changing package testing to allow tests and benchmarks to return an error. Though it wouldn’t be “a modest library change”, we could consider accepting func main() error as well. It’d make writing little scripts much nicer. The semantics would be equivalent to:

func main() {
  if err := newmain(); err != nil {
    println(err.Error())
    os.Exit(1)
  }
}

One last criticism. Not really a criticism of the proposal itself, but instead a criticism of a common response to the "function controlling flow" counterargument.

The response to "I don't like that a function is controlling flow" is that "panic also controls the flow of the program!". However, there are a few reasons why it's more acceptable for panic to do this, and they don't apply to try.

  1. panic is friendly to beginner programmers because what it does is intuitive: it unwinds the stack. One shouldn't even have to look up how panic works in order to understand what it does. Beginner programmers don't even need to worry about recover, since beginners aren't typically building panic recovery mechanisms, especially since they are nearly always less favorable than simply avoiding the panic in the first place.

  2. panic is a name that is easy to see. It brings worry, and it needs to. If one sees panic in a codebase, they should be immediately thinking of how to avoid the panic, even if it's trivial.

  3. Piggybacking off of the last point, panic cannot be nested in a call, making it even easier to see.

It is okay for panic to control the flow of the program because it is extremely easy to spot, and it is intuitive as to what it does.

The try function satisfies none of these points.

  1. One cannot guess what try does without looking up the documentation for it. Many languages use the keyword in different ways, making it hard to understand what it would mean in Go.

  2. try does not catch my eye, especially when it is a function. Especially when syntax highlighting will highlight it as a function. ESPECIALLY after developing in a language like Java, where try is seen as unnecessary boilerplate (because of checked exceptions).

  3. try can be used in an argument to a function call, as per my example in my previous comment proc := try(os.FindProcess(try(strconv.Atoi(os.Args[1])))). This makes it even harder to spot.

My eyes ignore the try functions, even when I am specifically looking for them. My eyes will see them, but immediately skip to the os.FindProcess or strconv.Atoi calls. try is a conditional return. Control flow AND returns are both held up on pedestals in Go. All control flow within a function is indented, and all returns begin with return. Mixing both of these concepts together into an easy-to-miss function call just feels a bit off.


This comment and my last are my only real criticisms to the idea though. I think I may be coming off as not liking this proposal, but I still think that it is an overall win for Go. This solution still feels more Go-like than the other solutions. If this were added I would be happy, however I think that it can still be improved, I'm just not sure how.

@buchanae interesting. As written, though, it moves fmt-style formatting from a package into the language itself, which opens up a can of worms.

As written, though, it moves fmt-style formatting from a package into the language itself, which opens up a can of worms.

Good point. A simpler example:

a, b, err := myFunc()
check(err, "calling myFunc")

@buchanae We have considered making explicit error handling more directly connected with try - please see the detailed design doc, specifically the section on Design iterations. Your specific suggestion of check would only allow augmenting errors through something like a fmt.Errorf-like API (as part of the check), if I understand correctly. In general, people may want to do all kinds of things with errors, not just create a new one that refers to the original one via its error string.

Again, this proposal does not attempt to solve all error handling situations. I suspect in most cases try makes sense for code that now looks basically like this:

a, b, c, ... err := someFunctionCall()
if err != nil {
   return ..., err
}

There is an awful lot of code that looks like this. And not every piece of code looking like this needs more error handling. And where defer is not right, one can still use an if statement.

I don’t follow this line:

defer fmt.HandleErrorf(&err, “foobar”)

It drops the inbound error on the floor, which is unusual. Is it meant to be used something more like this?

defer fmt.HandleErrorf(&err, “foobar: %v”, err)

The duplication of err is a bit stutter-y. This is not really directly apropos to the proposal, just a side comment about the doc.

adg commented

I share the two concerns raised by @buchanae, re: named returns and contextual errors.

I find named returns a bit troublesome as it is; I think they are only really beneficial as documentation. Leaning on them more heavily is a worry. Sorry to be so vague, though. I'll think about this more and provide some more concrete thoughts.

I do think there is a real concern that people will strive to structure their code so that try can be used, and therefore avoid adding context to errors. This is a particularly weird time to introduce this, given we're just now providing better ways to add context to errors through official error wrapping features.

I do think that try as-proposed makes some code significantly nicer. Here's a function I chose more or less at random from my current project's code base, with some of the names changed. I am particularly impressed by how try works when assigning to struct fields. (That is assuming my reading of the proposal is correct, and that this works?)

The existing code:

func NewThing(thingy *foo.Thingy, db *sql.DB, client pb.Client) (*Thing, error) {
        err := dbfile.RunMigrations(db, dbMigrations)
        if err != nil {
                return nil, err
        }
        t := &Thing{
                thingy: thingy,
        }
        t.scanner, err = newScanner(thingy, db, client)
        if err != nil {
                return nil, err
        }
        t.initOtherThing()
        return t, nil
}

With try:

func NewThing(thingy *foo.Thingy, db *sql.DB, client pb.Client) (*Thing, error) {
        try(dbfile.RunMigrations(db, dbMigrations))
        t := &Thing{
                thingy:  thingy,
                scanner: try(newScanner(thingy, db, client)),
        }
        t.initOtherThing()
        return t, nil
}

No loss of readability, except perhaps that it's less obvious that newScanner might fail. But then in a world with try Go programmers would be more sensitive to its presence.

@josharian Regarding main returning an error: It seems to me that your little helper function is all that's needed to get the same effect. I'm not sure changing the signature of main is justified.

Regarding the "foobar" example: It's just a bad example. I should probably change it. Thanks for bringing it up.

defer fmt.HandleErrorf(&err, “foobar: %v”, err)

Actually, that can’t be right, because err will be evaluated too early. There are a couple of ways around this, but none of them as clean as the original (I think flawed) HandleErrorf. I think it’d be good to have a more realistic worked example or two of a helper function.

EDIT: this early evaluation bug is present in an example
near the end of the doc:

defer fmt.HandleErrorf(&err, "copy %s %s: %v", src, dst, err)

@adg Yes, try can be used as you're using it in your example. I let your comments re: named returns stand as is.

people may want to do all kinds of things with errors, not just create a new one that refers to the original one via its error string.

try doesn't attempt to handle all the kinds of things people want to do with errors, only the ones that we can find a practical way to make significantly simpler. I believe my check example walks the same line.

In my experience, the most common form of error handling code is code that essentially adds a stack trace, sometimes with added context. I've found that stack trace to be very important for debugging, where I follow an error message through the code.

But, maybe other proposals will add stack traces to all errors? I've lost track.

In the example @adg gave, there are two potential failures but no context. If newScanner and RunMigrations don't themselves provide messages that clue you into which one went wrong, then you're left guessing.

adg commented

In the example @adg gave, there are two potential failures but no context. If newScanner and RunMigrations don't themselves provide messages that clue you into which one went wrong, then you're left guessing.

That's right, and that's the design choice we made in this particular piece of code. We do wrap errors a lot in other parts of the code.

I share the same concern as @deanveloper and others that it might make debugging tougher. It's true that we can choose not to use it, but the styles of third-party dependencies are not under our control.
If less repetitive if err := ... { return err } is the primary point, I wonder if a "conditional return" would suffice, like #27794 proposed.

        return nil, err if f, err := os.Open(...)
        return nil, err if _, err := os.Write(...)

I think the ? would be a better fit than try, and always having to chase the defer for error would also be tricky.

This also closes the gates for having exceptions using try/catch forever.

This also closes the gates for having exceptions using try/catch forever.

I am more than okay with this.

I agree with some of the concerns raised above regarding adding context to an error. I am slowly trying to shift from just returning an error to always decorating it with context and then returning it. With this proposal, I will have to completely change my function to use named return params (which I feel is odd because I barely use naked returns).

As @griesemer says:

Again, this proposal does not attempt to solve all error handling situations. I suspect in most cases try makes sense for code that now looks basically like this:
a, b, c, ... err := someFunctionCall()
if err != nil {
return ..., err
}
There is an awful lot of code that looks like this. And not every piece of code looking like this needs more error handling. And where defer is not right, one can still use an if statement.

Yes, but shouldn't good, idiomatic code always wrap/decorate its errors? I believe that's why we are introducing refined error handling mechanisms in the stdlib to add context to and wrap errors. As I see it, this proposal only seems to consider the most basic use case.

Moreover, this proposal addresses only the case of wrapping/decorating multiple possible error return sites at a single place, using named parameters with a defer call.

But it doesn't do anything for the case when one needs to add different contexts to different errors in a single function. For example, it is essential to decorate the DB errors to get more information on where they are coming from (assuming no stack traces).

This is an example of a real code I have -

func (p *pgStore) DoWork() error {
	tx, err := p.handle.Begin()
	if err != nil {
		return err
	}
	var res int64
	err = tx.QueryRow(`INSERT INTO table (...) RETURNING c1`, ...).Scan(&res)
	if err != nil {
		tx.Rollback()
		return fmt.Errorf("insert table: %w", err)
	}

	_, err = tx.Exec(`INSERT INTO table2 (...) VALUES ($1)`, res)
	if err != nil {
		tx.Rollback()
		return fmt.Errorf("insert table2: %w", err)
	}
	return tx.Commit()
}

According to the proposal:

If error augmentation or wrapping is desired there are two approaches: Stick with the tried-and-true if statement, or, alternatively, “declare” an error handler with a defer statement:

I think this will fall into the category of "stick with the tried-and-true if statement". I hope the proposal can be improved to address this too.
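To illustrate, here is a sketch of how far try gets with this function under the proposal as written (the rollback has to move into a defer; the error sites that need their own context keep their if statements):

func (p *pgStore) DoWork() (err error) {
	tx := try(p.handle.Begin())
	defer func() {
		if err != nil {
			tx.Rollback()
		}
	}()
	var res int64
	if err = tx.QueryRow(`INSERT INTO table (...) RETURNING c1`, ...).Scan(&res); err != nil {
		return fmt.Errorf("insert table: %w", err)
	}
	if _, err = tx.Exec(`INSERT INTO table2 (...) VALUES ($1)`, res); err != nil {
		return fmt.Errorf("insert table2: %w", err)
	}
	return tx.Commit()
}

try only helps with Begin here; everything that needs per-site wrapping is unchanged.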

I strongly suggest the Go team prioritize generics, as that's where Go hears the most criticism, and wait on error-handling. Today's technique is not that painful (though gofmt should let it sit on one line).

The try() concept has all the problems of check from check/handle:

  1. It doesn't read like Go. People want assignment syntax, without the subsequent nil test, as that looks like Go. Thirteen separate responses to check/handle suggested this; see Recurring Themes here:
    https://github.com/golang/go/wiki/Go2ErrorHandlingFeedback#recurring-themes

    f, #      := os.Open(...) // return on error
    f, #panic := os.Open(...) // panic on error
    f, #hname := os.Open(...) // invoke named handler on error
    // # is any available symbol or unambiguous pair
    
  2. Nesting of function calls that return errors obscures the order of operations, and hinders debugging. The state of affairs when an error occurs, and therefore the call sequence, should be clear, but here it’s not:
    try(step4(try(step1()), try(step3(try(step2())))))
    Now recall that the language forbids:
    f(t ? a : b) and f(a++)

  3. It would be trivial to return errors without context. A key rationale of check/handle was to encourage contextualization.

  4. It's tied to type error and the last return value. If we need to inspect other return values/types for exceptional state, we're back to: if errno := f(); errno != 0 { ... }

  5. It doesn't offer multiple pathways. Code that calls storage or networking APIs handles such errors differently than those due to incorrect input or unexpected internal state. My code does one of these far more often than return err:

    • log.Fatal()
    • panic() for errors that should never arise
    • log a message and retry

@gopherbot add Go2, LanguageChange

How about using only ? to unwrap the result, just like Rust?

mattn commented

The reason we are skeptical about calling try() may be its two implicit bindings. We cannot see either the binding to the enclosing function's error return value or the binding to the arguments of try(). For the argument side, we can make a rule that try() must be used with a function that has an error as its last return value. But the binding to the enclosing function's return values is not covered by such a rule. So I think a more explicit notation is required for users to understand what this code is doing.

func doSomething() (int, %error) {
  f := try(foo())
  ...
}
  • We cannot use try() if doSomething does not have %error among its return values.
  • We cannot use try() if foo() does not have error as the last of its return values.

It is hard to add new requirements/features to the existing syntax.

To be honest, I think that foo() should also have %error.

Add one more rule:

  • Only one %error may appear in the return value list of a function.

In the detailed design document I noticed that in an earlier iteration it was suggested to pass an error handler to the try builtin function. Like this:

handler := func(err error) error {
        return fmt.Errorf("foo failed: %v", err)  // wrap error
}

f := try(os.Open(filename), handler)  

or even better, like this:

f := try(os.Open(filename), func(err error) error {
        return fmt.Errorf("foo failed: %v", err)  // wrap error
})  

Although, as the document states, this raises several questions, I think this proposal would be far more desirable and useful if it had kept the possibility of optionally specifying such an error handler function or closure.

Secondly, I don't mind a built-in that can cause the function to return, but, to bikeshed a bit, the name 'try' is too short to suggest that it can cause a return. So a longer name, like attempt, seems better to me.

EDIT: Thirdly, ideally the Go language should gain generics first; an important use case would be the ability to implement this try function as a generic, so the bikeshedding could end and everyone could get the error handling that they prefer.

Hacker News has a point: try doesn't behave like a normal function (it can cause a return), so it's not good to give it function-like syntax. A return or defer syntax would be more appropriate:

func CopyFile(src, dst string) (err error) {
        r := try os.Open(src)
        defer r.Close()

        w := try os.Create(dst)
        defer func() {
                w.Close()
                if err != nil {
                        os.Remove(dst) // only if a “try” fails
                }
        }()

        try io.Copy(w, r)
        try w.Close()
        return nil
}

@sheerun the common counterargument to this is that panic is also a control-flow altering built-in function. I personally disagree with it, however it is correct.

  1. Echoing @deanveloper above, as well as others' similar comments, I'm very afraid we're underestimating the costs of adding a new, somewhat subtle, and—especially when inlined in other function calls—easily overlooked keyword that manages call stack control flow. panic(...) is a relatively clear exception (pun not intended) to the rule that return is the only way out of a function. I don't think we should use its existence as justification to add a third.
  2. This proposal would canonize returning an un-wrapped error as the default behavior, and relegate wrapping errors as something you have to opt-in to, with additional ceremony. But, in my experience, that's precisely backwards to good practice. I'd hope that a proposal in this space would make it easier, or at least not more difficult, to add contextual information to errors at the error site.

Maybe we can add a variant with an optional augmenting function, something like tryf, with these semantics:

func tryf(t1 T1, t2 T2, … tn Tn, te error, fn func(error) error) (T1, T2, … Tn)

translates this

x1, x2, … xn = tryf(f(), func(err error) error { return fmt.Errorf("foobar: %q", err) })

into this

t1, … tn, te := f()
if te != nil {
	if fn != nil {
		te = fn(te)
	}
	err = te
	return
}

Since this is an explicit choice (instead of using plain try), we can find reasonable answers to the questions in the earlier version of this design. For example, if the augmenting function is nil, don't do anything and just return the original error.

dzrw commented

I'm concerned that try will supplant traditional error handling, and that that will make annotating error paths more difficult as a result.

Code that handles errors by logging messages and updating telemetry counters will be looked upon as defective or improper by both linters and developers expecting to try everything.

a, b, err := doWork()
if err != nil {
  updateCounters()
  writeLogs()
  return err
}

Go is an extremely social language with common idioms enforced by tooling (fmt, lint, etc). Please keep the social ramifications of this idea in mind - there will be a tendency to want to use it everywhere.

@politician, sorry, but the word you are looking for is not social but opinionated. Go is an opinionated programming language. For the rest I mostly agree with what you are getting at.

dzrw commented

@beoran Community tools like Godep and the various linters demonstrate that Go is both opinionated and social, and many of the dramas with the language stem from that combination. Hopefully, we can both agree that try shouldn't be the next drama.

@politician Thanks for clarifying, I hadn't understood it that way. I can certainly agree that we should try to avoid drama.

I am confused about it.

From the blog post Errors are values: from my perspective, errors are designed to be values, not to be ignored.

And I do believe what Rob Pike said, "Values can be programmed, and since errors are values, errors can be programmed.".

We should not treat errors as exceptions; doing so imports complexity not only into our thinking but also into our code.

"Use the language to simplify your error handling." -- Rob Pike

Furthermore, we can review this slide:

[slide image]

One situation where I find error checking via if particularly awkward is when closing files (e.g. on NFS). I guess, currently we are meant to write the following, if error returns from .Close() are possible?

r, err := os.Open(src)
if err != nil {
    return err
}
defer func() {
    // maybe check whether a previous error occured?
    return r.Close()
}()

Could defer try(r.Close()) be a good way to have a manageable syntax for some way of dealing with such errors? At least, it would make sense to adjust the CopyFile() example in the proposal in some way, to not ignore errors from r.Close() and w.Close().

dzrw commented

@seehuhn Your example won't compile because the deferred function does not have a return type.

func doWork() (err error) {
  r, err := os.Open(src)
  if err != nil {
    return err
  }
  defer func() {
    err = r.Close()  // overwrite the return value
  }()
  // ... do the work ...
  return nil
}

Will work like you expect. The key is the named return value.

I like the proposal, but I think that the example of @seehuhn should be addressed as well:

defer try(w.Close())

would return the error from Close() only if the error was not already set.
This pattern is used so often...

a8m commented

I agree with the concerns regarding adding context to errors. I see it as one of the best practices that keeps error messages much friendlier (and clearer) and makes the debugging process easier.

The first thing I thought about was to replace the fmt.HandleErrorf with a tryf function that prefixes the error with additional context.

func tryf(t1 T1, t2 T2, … tn Tn, te error, ts string) (T1, T2, … Tn)

For example (from real code I have):

func (c *Config) Build() error {
	pkgPath, err := c.load()
	if err != nil {
		return nil, errors.WithMessage(err, "load config dir")
	}
	b := bytes.NewBuffer(nil)
	if err = templates.ExecuteTemplate(b, "main", c); err != nil {
		return nil, errors.WithMessage(err, "execute main template")
	}
	buf, err := format.Source(b.Bytes())
	if err != nil {
		return nil, errors.WithMessage(err, "format main template")
	}
	target := fmt.Sprintf("%s.go", filename(pkgPath))
	if err := ioutil.WriteFile(target, buf, 0644); err != nil {
		return errors.WithMessagef(err, "write file %s", target)
	}
	// ...
}

Can be changed to something like:

func (c *Config) Build() error {
	pkgPath := tryf(c.load(), "load config dir")
	b := bytes.NewBuffer(nil)
	tryf(templates.ExecuteTemplate(b, "main", c), "execute main template")
	buf := tryf(format.Source(b.Bytes()), "format main template")
	target := fmt.Sprintf("%s.go", filename(pkgPath))
	tryf(ioutil.WriteFile(target, buf, 0644), fmt.Sprintf("write file %s", target))
	// ...
}

Or, if I take @agnivade's example:

func (p *pgStore) DoWork() (err error) {
	tx := tryf(p.handle.Begin(), "begin transaction")
	defer func() {
		if err != nil {
			tx.Rollback()
		}
	}()
	var res int64
	tryf(tx.QueryRow(`INSERT INTO table (...) RETURNING c1`, ...).Scan(&res), "insert table")
	_ = tryf(tx.Exec(`INSERT INTO table2 (...) VALUES ($1)`, res), "insert table2")
	return tryf(tx.Commit(), "commit transaction")
}

However, @josharian raised a good point that makes me hesitate on this solution:

As written, though, it moves fmt-style formatting from a package into the language itself, which opens up a can of worms.

I'm totally on board with this proposal and can see its benefits across a number of examples.

My only concern with the proposal is the naming of try. I feel that its connotations in other languages may skew developers' perceptions of what its purpose is when they come from those languages. Java comes to mind here.

For me, I would prefer the builtin to be called pass. I feel this gives a better representation of what is happening. After all, you are not handling the error, rather passing it back to be handled by the caller. try gives the impression that the error has been handled.

It's a thumbs down from me, principally because the problem it's aiming to address ("the boilerplate if statements typically associated with error handling") simply isn't a problem for me. If all error checks were simply if err != nil { return err } then I could see some value in adding syntactic sugar for that (though Go is a relatively sugar-free language by inclination).

In fact, what I want to do in the event of a non-nil error varies quite considerably from one situation to the next. Maybe I want to t.Fatal(err). Maybe I want to add a decorating message: return fmt.Errorf("oh no: %v", err). Maybe I just log the error and continue. Maybe I set an error flag on my SafeWriter object and continue, checking the flag at the end of some sequence of operations. Maybe I need to take some other actions. None of these can be automated with try. So if the argument for try is that it will eliminate all if err != nil blocks, that argument doesn't stand.

Will it eliminate some of them? Sure. Is that an attractive proposition for me? Meh. I'm genuinely not concerned. To me, if err != nil is just part of Go, like the curly braces, or defer. I understand it looks verbose and repetitive to people who are new to Go, but people who are new to Go are not best placed to make dramatic changes to the language, for a whole bunch of reasons.

The bar for significant changes to Go has traditionally been that the proposed change must solve a problem that's (A) significant, (B) affects a lot of people, and (C) is well solved by the proposal. I'm not convinced on any of these three criteria. I'm quite happy with Go's error handling as it is.

To echo @peterbourgon and @deanveloper, one of my favourite things about Go is that code flow is clear and panic() is not treated like a standard flow control mechanism in the way it is in Python.

Regarding the debate on panic, panic() almost always appears by itself on a line because it has no value. You can't fmt.Println(panic("oops")). This increases its visibility tremendously and makes it far less comparable to try() than people are making out.

If there is to be another flow control construct for functions, I would far prefer that it be a statement guaranteed to be the leftmost item on a line.

hmage commented

One of the examples in the proposal nails the problem for me:

func printSum(a, b string) error {
        fmt.Println(
                "result:",
                try(strconv.Atoi(a)) + try(strconv.Atoi(b)),
        )
        return nil
}

Control flow really becomes less obvious and much more obscured.

This is also against the initial intention by Rob Pike that all errors need to be handled explicitly.

While a reaction to this can be "then don't use it", the problem is -- other libraries will use it, and debugging them, reading them, and using them becomes more problematic. This will motivate my company to never adopt Go 2, and to start using only libraries that don't use try. If I'm not alone in this, it might lead to a division à la Python 2/3.

Also, the naming of try will automatically imply that eventually catch will show up in the syntax, and we'll be back to being Java.

So, because of all of this, I'm strongly against this proposal.

I don't like the try name. It implies an attempt at doing something with a high risk of failure (I may have a cultural bias against try as I'm not a native English speaker), while instead try would be used in cases where we expect rare failures (the motivation for wanting to reduce the verbosity of error handling) and are optimistic. In addition, try in this proposal does in fact catch an error to return it early. I like the pass suggestion of @hiimjc.

Besides the name, I find it awkward to have a return-like statement hidden in the middle of expressions. This breaks Go's flow style. It will make code reviews harder.

In general, I find that this proposal will only benefit the lazy programmer, who now has a weapon for shorter code and even less reason to make the effort of wrapping errors. As it will also make reviews harder (returns in the middle of expressions), I think that this proposal goes against the "programming at scale" aim of Go.

gbbr commented

One of my favourite things about Go that I generally say when describing the language is that there is only one way to do things, for most things. This proposal goes against that principle a bit by offering multiple ways to do the same thing. I personally think this is not necessary and that it would take away, rather than add to the simplicity and readability of the language.

I like this proposal overall. The interaction with defer seems sufficient to provide an ergonomic way of returning an error while also adding additional context. Though it would be nice to address the snag @josharian pointed out around how to include the original error in the wrapped error message.

What's missing is an ergonomic way of this interacting with the error inspection proposal(s) on the table. I believe API's should be very deliberate in what types of errors they return, and the default should probably be "returned errors are not inspectable in any way". It should then be easy to go to a state where errors are inspectable in a precise way, as documented by the function signature ("It reports an error of kind X in circumstance A and an error of kind Y in circumstance B").

Unfortunately, as of now, this proposal makes the most ergonomic option the most undesirable (to me); blindly passing through arbitrary error kinds. I think this is undesirable because it encourages not thinking about the kinds of errors you return and how users of your API will consume them. The added convenience of this proposal is certainly nice, but I fear it will encourage bad behavior because the perceived convenience will outweigh the perceived value of thinking carefully about what error information you provide (or leak).

A bandaid would be if errors returned by try get converted into errors that are not "unwrappable". Unfortunately this has pretty severe downsides as well, since it makes it so that any defer could not inspect the errors itself. Additionally it prevents the usage where try actually will return an error of a desirable kind (that is, use cases where try is used carefully rather than carelessly).

Another solution would be to repurpose the (discarded) idea of having an optional second argument to try for defining/whitelisting the error kind(s) that may be returned from that site. This is a bit troublesome because we have two different ways of defining an "error kind", either by value (io.EOF etc) or by type (*os.PathError, *exec.ExitError). It's easy to specify error kinds that are values as arguments to a function, but harder to specify types. Not sure how to handle that, but throwing the idea out there.

The problem that @josharian pointed out can be avoided by delaying the evaluation of err:

defer func() { fmt.HandleErrorf(&err, "oops: %v", err) }()

Doesn't look great, but it should work. I'd prefer however if this can be addressed by adding a new formatting verb/flag for error pointers, or maybe for pointers in general, that prints the dereferenced value as with plain %v. For the purpose of the example, let's call it %*v:

defer fmt.HandleErrorf(&err, "oops: %*v", &err)

The snag aside, I think that this proposal looks promising, but it seems crucial to keep the ergonomics of adding context to errors in check.

Edit:

Another approach is to wrap the error pointer in a struct that implements Stringer:

type wraperr struct{ err *error }
func (w wraperr) String() string { return (*w.err).Error() }

...

defer handleErrorf(&err, "oops: %v", wraperr{&err})

A couple of things from my perspective. Why are we so concerned about saving a few lines of code? I consider this along the same lines as Small functions considered harmful.

Additionally, I find that such a proposal would shift the responsibility of correctly handling the error onto some "magic" that I worry will just be abused and encourage laziness, resulting in poor-quality code and bugs.

The proposal as stated also has a number of unclear behaviors, so it is already more problematic than an explicit extra ~3 lines that are clearer.

We currently use the defer pattern sparingly in house. There's an article here which had similarly mixed reception when we wrote it - https://bet365techblog.com/better-error-handling-in-go

However, our usage of it was in anticipation of the check/handle proposal progressing.

Check/handle was a much more comprehensive approach to making error handling in Go more concise. Its handle block retained the same function scope as the one it was defined in, whereas a defer statement introduces a new function context with some amount of overhead. This seemed to be more in keeping with Go's idioms, in that if you wanted the behaviour of "just return the error when it happens" you could declare that explicitly as handle { return err }.

Defer obviously relies on the err reference being maintained also, but we've seen problems arise from shadowing the error reference with block-scoped vars. So it isn't foolproof enough to be considered the standard way of handling errors in Go.

try, in this instance, doesn't appear to solve too much and I share the same fear as others that it would simply lead to lazy implementations, or ones which over-use the defer pattern.

If defer-based error handling is going to be A Thing, then something like this should probably be added to the errors package:

        f := try(os.Create(filename))
        defer errors.Deferred(&err, f.Close)

Ignoring the errors of deferred Close statements is a pretty common issue. There should be a standard tool to help with it.
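A sketch of what such a helper might look like (the name and exact semantics here are illustrative only; an earlier error takes precedence over the Close error):

// Deferred calls close and, if it fails while *err is still nil, records its error.
func Deferred(err *error, close func() error) {
	if cerr := close(); cerr != nil && *err == nil {
		*err = cerr
	}
}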

komuw commented

A builtin function that returns is a harder sell than a keyword that does the same.
I would like it more if it were a keyword like it is in Zig[1].

  1. https://ziglang.org/documentation/master/#try
frou commented

Built-in functions, whose type signatures cannot be expressed using the language's type system, and whose behavior confounds what a function normally is, just seem like an escape hatch that can be used repeatedly to avoid actual language evolution.

We are used to immediately recognizing return statements (and panics) because that's how this kind of control flow is expressed in Go (and many other languages). It seems not far-fetched that we will also recognize try as changing control flow after some getting used to it, just like we do for return. I have no doubt that good IDE support will help with this as well.

I think it is fairly far-fetched. In gofmt'ed code, a return always matches /^\t*return / – it's a very trivial pattern to spot by eye, without any assistance. try, on the other hand, can occur anywhere in the code, nested arbitrarily deep in function calls. No amount of training will make us be able to immediately spot all control flow in a function without tool assistance.

Furthermore, a feature that depends on "good IDE support" will be at a disadvantage in all the environments where there is no good IDE support. Code review tools come to mind immediately – will Gerrit highlight all the try's for me? What about people who choose not to use IDEs, or fancy code highlighting, for various reasons? Will acme start highlighting try?

A language feature should be easy to understand on its own, not depend on editor support.

@kungfusheep I like that article. Taking care of wrapping in a defer alone already drives up readability quite a bit without try.

I'm in the camp that doesn't feel errors in Go are really a problem. Even so, if err != nil { return err } can be quite the stutter in some functions. I've written functions that needed an error check after almost every statement and none needed any special handling other than wrap and return. Sometimes there just isn't any clever Buffer struct that's gonna make things nicer. Sometimes it's just one critical step after another and you need to simply short-circuit if something went wrong.

Although try would certainly make that code a lot nicer to read while being fully backwards compatible, I agree that try isn't a critical must-have feature, so if people are too scared of it maybe it's best not to have it.

The semantics are quite clear-cut, though. Anytime you see try, it's either following the happy path or it returns. It really can't get simpler than that.

This looks like a special-cased macro.

@dominikh try always matches /try\(/so I don't know what your point is really. It's equally as searchable and every editor I've ever heard of has a search feature.

@qrpnxz I think the point he was trying to make is not that you cannot search for it programatically, but that it's harder to search for with your eyes. The regexp was just an analogy, with emphasis on the /^\t*, signifying that all returns clearly stand out by being at the beginning of a line (ignoring leading whitespace).

Thinking about it more, there should be a couple of common helper functions. Perhaps they should be in a package called "deferred".

Addressing the proposal for a check with format to avoid naming the return, you can just do that with a function that checks for nil, like so

func Format(err error, message string, args ...interface{}) error {
    if err == nil {
        return nil
    }
    // Assumed implementation: prefix the formatted message onto the error.
    return fmt.Errorf("%s: %v", fmt.Sprintf(message, args...), err)
}

This can be used without a named return like so:

func foo(s string) (int, error) {
    n, err := strconv.Atoi(s)
    try(deferred.Format(err, "bad string %q", s))
    return n, nil
}

The proposed fmt.HandleErrorf could be put into the deferred package instead, my errors.Deferred helper func could be called deferred.Exec, and there could be a conditional exec for procedures that run only if the error is non-nil.
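Here is a rough, hypothetical sketch of deferred.Annotate and deferred.Cond under those assumed semantics (deferred.Exec would be the same as the Deferred sketch above); the package and names are illustrative only:

// Annotate wraps a non-nil *err with formatted context; use with defer.
func Annotate(err *error, format string, args ...interface{}) {
    if *err != nil {
        *err = fmt.Errorf("%s: %v", fmt.Sprintf(format, args...), *err)
    }
}

// Cond runs f only if an error has already been recorded; use with defer.
func Cond(err *error, f func()) {
    if *err != nil {
        f()
    }
}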

Putting it together, you get something like

func CopyFile(src, dst string) (err error) {
    defer deferred.Annotate(&err, "copy %s %s", src, dst)

    r := try(os.Open(src))
    defer deferred.Exec(&err, r.Close)

    w := try(os.Create(dst))
    defer deferred.Exec(&err, w.Close)

    defer deferred.Cond(&err, func(){ os.Remove(dst) })
    try(io.Copy(w, r))

    return nil
}

Another example:

func (p *pgStore) DoWork() (err error) {
    tx := try(p.handle.Begin())

    defer deferred.Cond(&err, func(){ tx.Rollback() })
    
    var res int64 
    err = tx.QueryRow(`INSERT INTO table (...) RETURNING c1`, ...).Scan(&res)
    try(deferred.Format(err, "insert table"))

    _, err = tx.Exec(`INSERT INTO table2 (...) VALUES ($1)`, res)
    try(deferred.Format(err, "insert table2"))

    return tx.Commit()
}

This proposal takes us from having if err != nil everywhere, to having try everywhere. It shifts the proposed problem and does not solve it.

Although, I'd argue that the current error handling mechanism is not a problem to begin with. We just need to improve tooling and vetting around it.

Furthermore, I would argue that if err != nil is actually more readable than try, because it does not clutter the line containing the business logic; rather, it sits right below it:

file := try(os.Open("thing")) // less readable than,

file, err := os.Open("thing")
if err != nil {
    // handle or return the error here
}

And if Go were to be more magical in its error handling, why not fully own it? For example, Go could implicitly call the built-in try if a user does not assign the error value:

func getString() (string, error) { ... }

func caller() {
  defer func() {
    if err != nil { ... } // whether `err` must be defined or not is not shown in this example. 
  }()

  // would call try internally, because a user is not 
  // assigning an error value. Also, it can add a compile error
  // for "defined and not used err value" if the user does not 
  // handle the error. 
  str := getString()
}

To me, that would actually address the redundancy problem, at the cost of magic and potentially readability.

Therefore, I propose that we either truly solve the 'problem', as in the example above, or keep the current error handling: instead of changing the language to address redundancy and wrapping, we improve the tooling and vetting around code to make the experience better.

For example, in VS Code there's a snippet called iferr: if you type it and hit enter, it expands to a full error-handling statement. Writing it therefore never feels tiresome to me, and reading it later on is better.

@josharian

Though it wouldn’t be “a modest library change”, we could consider accepting func main() error as well.

The issue with that is that not all platforms have clear semantics for what that means. Your rewrite works well in "traditional" Go programs running on a full operating system, but as soon as you write microcontroller firmware or even just WebAssembly, it's not super clear what os.Exit(1) would mean. Currently, os.Exit is a library call, so Go implementations are free simply not to provide it. The shape of main is a language concern, though.


A question about the proposal that is probably best answered by "nope": How does try interact with variadic arguments? It's the first case of a variadic(-ish) function that doesn't have its variadic-ness in the last argument. Is this allowed:

var e []error
try(e...)

Leaving aside why you'd ever do that, I suspect the answer is "no" (otherwise the follow-up is "what if the length of the expanded slice is 0?"). Just bringing that up so it can be kept in mind when phrasing the spec eventually.

  • Several of the greatest features of Go are that the current builtins ensure clear control flow, error handling is explicit and encouraged, and developers are strongly dissuaded from writing "magical" code. The try proposal is not consistent with these basic tenets, as it will promote shorthand at the cost of control-flow readability.
  • If this proposal is adopted, then perhaps consider making the try built-in a statement instead of a function (see the sketch after this list). Then it is more consistent with other control-flow statements like if. Additionally, removal of the nested parentheses marginally improves readability.
  • Again, if the proposal is adopted, then perhaps implement it without using defer or similar. It already cannot be implemented in pure Go (as pointed out by others), so it may as well use a more efficient implementation under the hood.
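As a rough illustration of the statement form suggested in the second bullet (hypothetical syntax, not part of the proposal):

// Hypothetical statement form, without nested parentheses:
f := try os.Open(filename)

// Proposed built-in function form:
f := try(os.Open(filename))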

I see two problems with this:

  1. It puts a LOT of code nested inside function calls. That adds a lot of extra cognitive load when trying to parse the code in your head.

  2. It gives us places where the code can exit from the middle of a statement.

Number 2 I think is far worse. All the examples here are simple calls that return an error, but what's a lot more insidious is this:

func doit(abc string) error {
    a := fmt.Sprintf("value of something: %s\n", try(getValue(abc)))
    log.Println(a)
    return nil
}

This code can exit in the middle of that Sprintf, and it's going to be SUPER easy to miss that fact.
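For comparison, the same function written without the nested try keeps the early return visible (assuming getValue returns (string, error)):

func doit(abc string) error {
    v, err := getValue(abc)
    if err != nil {
        return err
    }
    a := fmt.Sprintf("value of something: %s\n", v)
    log.Println(a)
    return nil
}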

My vote is no. This will not make go code better. It won't make it easier to read. It won't make it more robust.

I've said it before, and this proposal exemplifies it: I feel like 90% of the complaints about Go are "I don't want to write an if statement or a loop". This removes some very simple if statements, but adds cognitive load and makes it easy to miss exit points for a function.

I just want to point out that you could not use this in main, and it might be confusing to new users or when teaching. Obviously this applies to any function that doesn't return an error, but I think main is special since it appears in many examples.

func main() {
    f := try(os.Open("foo.txt"))
    defer f.Close()
}

I'm not sure making try panic in main would be acceptable either.

Additionally, it would not be particularly useful in tests (func TestFoo(t *testing.T)), which is unfortunate :(
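One workaround for main, already a common pattern today and not specific to this proposal, is to keep main trivial and move the fallible work into a helper that returns an error; try would then be usable inside the helper:

func main() {
    if err := run(); err != nil {
        log.Fatal(err)
    }
}

func run() error {
    f := try(os.Open("foo.txt")) // allowed: run returns an error
    defer f.Close()
    // ...
    return nil
}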

The issue I have with this is that it assumes you always want to just return the error when it happens. Maybe you want to add context to the error and then return it, or maybe you just want to behave differently when an error happens, perhaps depending on the type of error returned.

I would prefer something akin to a try/catch which might look like

Assuming foo() is defined as

func foo() (int, error) {}

You could then do

n := try(foo()) {
    case FirstError:
        // do something based on FirstError
    case OtherError:
        // do something based on OtherError
    default:
        // default behavior for any other error
}

Which translates to

n, err := foo()
if errors.Is(err, FirstError) {
    // do something based on FirstError
} else if errors.Is(err, OtherError) {
    // do something based on OtherError
} else if err != nil {
    // default behavior for any other error
}

To me, error handling is one of the most important parts of a code base.
Already too much Go code is if err != nil { return err }, returning an error from deep in the stack without adding extra context, or, possibly even worse, adding context that masks the underlying error with fmt.Errorf wrapping.

Providing a new, somewhat magical keyword that does nothing but replace if err != nil { return err } seems like a dangerous road to go down.
Now all code will just be wrapped in a call to try. This is somewhat fine (though readability suffers) for code that is dealing only with in-package errors, such as:

func foo() error {
  // stuff
  try(bar())
  // more stuff
  return nil
}

But I'd argue that the given example is really kind of horrific and basically leaves the caller trying to understand an error from really deep in the stack, much like exception handling.
Of course, it is all up to the developer to do the right thing here, but it gives the developer a great way to not care about their errors, with maybe a "we'll fix this later" (and we all know how that goes).

I wish we'd look at the issue less from the perspective of "how can we reduce repetition"* and more from "how can we make (proper) error handling simpler and developers more productive".
We should be thinking about how this will affect running production code.

*Note: This doesn't actually reduce repetition, just changes what's being repeated, all the while making the code less readable because everything is encased in a try().

One last point: reading the proposal, at first it seems nice; then you start to get into all the gotchas (at least the ones listed) and it's just like "OK, yeah, this is too much".


I realize much of this is subjective, but it's something I care about. These semantics are incredibly important.
What I want to see is a way to make writing and maintaining production level code simpler such that you might as well do errors "right" even for POC/demo level code.

Since error context seems to be a recurring theme...

Hypothesis: most Go functions return (T, error) as opposed to (T1, T2, T3, error)

What if, instead of defining try as try(T1, T2, T3, error) (T1, T2, T3), we defined it as
try(func(args) (T1, T2, T3, error)) (T1, T2, T3)? (This is an approximation.)

which is to say that the syntactic structure of a try call is always a first argument that is an expression returning multiple values, the last of which is an error.

Then, much like make, this opens the door to a 2-argument form of the call, where the second argument is the context of the try (e.g. a fixed string, a string with a %v, a function that takes an error argument and returns another error, etc.).

This still allows chaining for the (T, error) case, but you can no longer chain multiple returns, which IMO is typically not needed.
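For illustration only, calls under this hypothetical two-argument form might look like the following (neither form is valid Go, and the semantics of the context argument are an assumption):

f := try(os.Open(name), "open config file")     // fixed string as context
g := try(os.Open(name), func(err error) error { // wrapping function as context
    return fmt.Errorf("open %s: %v", name, err)
})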

@cpuguy83 If you read the proposal you would see there is nothing preventing you from wrapping the error. In fact there are multiple ways of doing it while still using try. Many people seem to assume that for some reason though.

if err != nil { return err } is just as much a "we'll fix this later" as try, except more annoying when prototyping.

I also don't see how things being inside a pair of parentheses is less readable than function steps being separated by four lines of boilerplate.

It'd be nice if you pointed out some of the particular "gotchas" that bothered you, since that's the topic.

Readability seems to be an issue, but what about gofmt presenting try() so that it stands out? Something like:

f := try(
    os.Open("file.txt")
)

@MrTravisB

The issue I have with this is that it assumes you always want to just return the error when it happens.

I disagree. It assumes that you want to do so often enough to warrant a shorthand for just that. If you don't, it doesn't get in the way of handling errors plainly.

Maybe you want to add context to the error and then return it, or maybe you just want to behave differently when an error happens.

The proposal describes a pattern for adding block-wide context to errors. @josharian pointed out that there is an error in the examples, though, and it's not clear what the best way is to avoid it. I have written a couple of examples of ways to handle it.

For more specific error context, again, try does a thing, and if you don't want that thing, don't use try.

@boomlinde Exactly my point. This proposal is trying to solve a singular use case rather than providing a tool to solve the larger issue of error handling. I think the fundamental question is exactly what you pointed out.

It assumes that you want to do so often enough to warrant a shorthand for just that.

In my opinion and experience this use case is a small minority and doesn't warrant shorthand syntax.

Also, the approach of using defer to handle errors has issues in that it assumes you want to handle all possible errors the same way. defer statements can't be canceled.

defer fmt.HandleErrorf(&err, "foobar")

n := try(foo())

x := try(foo2())

What if I want different error handling for errors that might be returned from foo() vs foo2()?

@MrTravisB

What if I want different error handling for errors that might be returned from foo() vs foo2()?

Then you use something else. That's the point @boomlinde was making.

Maybe you don't personally see this use case often, but many people do, and adding try doesn't really affect you. In fact, the rarer the use case is for you, the less it affects you that try is added.
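As a sketch of that mix-and-match under the proposal (foo and foo2 are hypothetical; assume both return just an error here), use try where plain propagation is fine and ordinary handling where a particular call needs different treatment:

func doBoth() error {
    try(foo()) // just propagate foo's error

    if err := foo2(); err != nil {
        return fmt.Errorf("foo2: %v", err) // handle foo2's error differently
    }
    return nil
}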

@qrpnxz

f := try(os.Open("/foo"))
data := try(ioutil.ReadAll(f))
try(send(data))

(yes, I understand there is ReadFile and that this particular example is not the best way to copy data somewhere; that's not the point)

This takes more effort to read because you have to parse out the try's inline. The application logic is wrapped up in another call.
I'd also argue that a defer error handler here would not be good except to just wrap the error with a new message... which is nice but there is more to dealing with errors than making it easy for the human to read what happened.

In Rust, at least, the operator is a postfix (? added to the end of a call), which doesn't add an extra burden of digging out the actual logic.

zeebo commented

Expression based flow control

panic may be another flow-controlling function, but it doesn't return a value, making it effectively a statement. Compare this to try, which is an expression and can occur anywhere.

recover does return a value and affects flow control, but it only has an effect inside a deferred function. These defers are typically function literals, recover is only ever called once, and so recover also effectively occurs as a statement. Again, compare this to try, which can occur anywhere.
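For reference, the typical recover pattern described above looks like this (standard Go, nothing proposal-specific):

defer func() {
    if r := recover(); r != nil {
        // recover only has an effect when called inside a deferred function
        log.Println("recovered:", r)
    }
}()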

I think those points mean that try makes it significantly harder to follow control flow in a way that we haven't had before, as has been pointed out before, but I didn't see the distinction between statements and expressions pointed out.


Another proposal

Allow statements like

if err != nil {
    return nil, 0, err
}

to be formatted on one line by gofmt when the block only contains a return statement and that statement does not contain newlines. For example:

if err != nil { return nil, 0, err }

Rationale

  • It requires no language changes
  • The formatting rule is simple and clear
  • The rule can be designed to be opt-in, where gofmt keeps newlines if they already exist (like struct literals). Opt-in also allows the writer to emphasize some error handling
  • If it's not opt-in, code can be automatically ported to the new style with a call to gofmt
  • It's only for return statements, so it won't be abused to golf code unnecessarily
  • Interacts well with comments describing why some errors may happen and why they're being returned. Using many nested try expressions handles this poorly
  • It reduces the vertical space of error handling by 66%
  • No expression based control flow
  • Code is read far more often than it's written, so it should be optimized for the reader. Repetitive code taking up less space is helpful to the reader, where try leans more towards the writer
  • People have already been proposing try existing on multiple lines. For example this comment or this comment which introduces a style like
f, err := os.Open(file)
try(maybeWrap(err))
  • The "try on its own line" style removes any ambiguity about what err value is being returned. Therefore, I suspect this form will be commonly used. Allowing one lined if blocks is almost the same thing, except it's also explicit about what the return values are
  • It doesn't promote the use of named returns or unclear defer based wrapping. Both raise the barrier to wrapping errors and the former may require godoc changes
  • There doesn't need to be discussion about when to use try versus using traditional error handling
  • Doesn't preclude doing try or something else in the future. The change may be positive even if try is accepted
  • No negative interaction with the testing library or main functions. In fact, if the proposal allows any single-line statement instead of just returns, it may reduce usage of assertion-based libraries. Consider
value, err := something()
if err != nil { t.Fatal(err) }
  • No negative interaction with checking against specific errors. Consider
n, err := src.Read(buf)
if err == io.EOF { return nil }
if err != nil { return err }

In summary, this proposal has a small cost, can be designed to be opt-in, doesn't preclude any further changes since it's stylistic only, and reduces the pain of reading verbose error handling code while keeping everything explicit. I think it should at least be considered as a first step before going all in on try.

Some examples ported

From #32437 (comment)

With try

func NewThing(thingy *foo.Thingy, db *sql.DB, client pb.Client) (*Thing, error) {
        try(dbfile.RunMigrations(db, dbMigrations))
        t := &Thing{
                thingy:  thingy,
                scanner: try(newScanner(thingy, db, client)),
        }
        t.initOtherThing()
        return t, nil
}

With this

func NewThing(thingy *foo.Thingy, db *sql.DB, client pb.Client) (*Thing, error) {
        err := dbfile.RunMigrations(db, dbMigrations)
        if err != nil { return nil, fmt.Errorf("running migrations: %v", err) }

        t := &Thing{thingy: thingy}
        t.scanner, err = newScanner(thingy, db, client)
        if err != nil { return nil, fmt.Errorf("creating scanner: %v", err) }

        t.initOtherThing()
        return t, nil
}

It's competitive in space usage while still allowing for adding context to errors.

From #32437 (comment)

With try

func (c *Config) Build() error {
	pkgPath := try(c.load())
	b := bytes.NewBuffer(nil)
	try(templates.ExecuteTemplate(b, "main", c))
	buf := try(format.Source(b.Bytes()))
	target := fmt.Sprintf("%s.go", filename(pkgPath))
	try(ioutil.WriteFile(target, buf, 0644))
	// ...
}

With this

func (c *Config) Build() error {
	pkgPath, err := c.load()
	if err != nil { return errors.WithMessage(err, "load config dir") }

	b := bytes.NewBuffer(nil)
	err = templates.ExecuteTemplate(b, "main", c)
	if err != nil { return errors.WithMessage(err, "execute main template") }

	buf, err := format.Source(b.Bytes())
	if err != nil { return errors.WithMessage(err, "format main template") }

	target := fmt.Sprintf("%s.go", filename(pkgPath))
	err = ioutil.WriteFile(target, buf, 0644)
	if err != nil { return errors.WithMessagef(err, "write file %s", target) }
	// ...
}

The original comment used a hypothetical tryf to attach the formatting, which has been removed here. It's unclear what the best way would be to add all the distinct contexts, and perhaps try wouldn't even be applicable.

@cpuguy83
To me it is more readable with try. In this example I read "open a file, read all bytes, send data". With regular error handling I would read "open a file, check if there was an error, the error handling does this, then read all bytes, now check if something happened...". I know you can scan through the err != nils, but to me try is just easier, because when I see it I know the behaviour right away: return if err != nil. If you have a branch, I have to see what it does. It could do anything.

I'd also argue that a defer error handler here would not be good except to just wrap the error with a new message

I'm sure there are other things you can do in the defer, but regardless, try is for the simple general case anyway. Anytime you want to do something more, there is always good ol' Go error handling. That's not going away.

@zeebo Yep, I'm into that.
@kungfusheep's article used a one-line err check like that and I got excited to try it out. Then as soon as I saved, gofmt expanded it into three lines, which was sad. Many functions in the stdlib are defined in one line like that, so it surprised me that gofmt would expand it.

@qrpnxz

I happen to read a lot of Go code. One of the best things about the language is the ease that comes from most code following a particular style (thanks, gofmt).
I don't want to read a bunch of code wrapped in try(f()).
This means there will either be a divergence in code style/practice, or linters saying "oh, you should have used try() here" (which, again, I don't even like; that is the point of me and others commenting on this proposal).

It is not objectively better than if err != nil { return err }, just less to type.


One last thing:

If you read the proposal you would see there is nothing preventing you from

Can we please refrain from such language? Of course I read the proposal. It just so happens that I read it last night and then commented this morning after thinking about it, and didn't explain the minutiae of what I intended.
This is an incredibly adversarial tone.

@cpuguy83
My bad cpu guy. I didn't mean it that way.

And I guess you've got a point that code that uses try will look pretty different from code that doesn't, so I can imagine that would affect the experience of parsing that code. But I can't totally agree that different means worse in this case, though I understand you personally don't like it, just as I personally do like it. Many things in Go are that way. What linters tell you to do is another matter entirely, I think.

Sure it's not objectively better. I was expressing that it was more readable that way to me. I carefully worded that.

Again, sorry for sounding that way. Although this is an argument, I didn't mean to antagonize you.

#32437 (comment)

No one is going to make you use try.

Ignoring the glibness, I think that's a pretty hand-wavy way to dismiss a design criticism.

Sure, I don't have to use it. But anybody I write code with could use it and force me to try to decipher try(try(try(to()).parse().this)).easily(). It's like saying

No one is going to make you use the empty interface{}.

Anyway, Go's pretty strict about simplicity: gofmt makes all the code look the same way. The happy path keeps to the left, and anything that could be expensive or surprising is explicit. try as proposed is a 180-degree turn from this. Simplicity != conciseness.

At the very least try should be a keyword with lvalues.

It is not objectively better than if err != nil { return err }, just less to type.

There is one objective difference between the two: try(Foo()) is an expression. For some, that difference is a downside (the try(strconv.Atoi(x))+try(strconv.Atoi(y)) criticism). For others, that difference is an upside, for much the same reason. Still not objectively better or worse, but I also don't think the difference should be swept under the rug, and claiming that it's "just less to type" doesn't do the proposal justice.