graph-gophers/graphql-go

Use field as resolver instead of method.

nicksrandall opened this issue · 16 comments

Would it be possible to use a struct field as a resolver instead of always using methods? Most of my methods are just simple getters, so it would be convenient to allow resolvers to default to struct fields. Then I could implement a method only when I need special logic.
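
For illustration, this is the kind of getter-only boilerplate being described (hypothetical Widget type, not from the issue):

// Hypothetical Widget type: today every schema field needs a resolver
// method, even when it only forwards a struct field.
type Widget struct {
	Name string
}

type WidgetResolver struct {
	w *Widget
}

// Name is a pure getter; the request is to let the library fall back to
// reading Widget.Name directly so this method could be omitted.
func (r *WidgetResolver) Name() string {
	return r.w.Name
}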

That would be really useful. Maybe it could use reflection and struct tags like json:"name", with a resolver method overriding the field when present.

Not sure how much of a performance penalty we pay for using reflection for this. I use codegen to write these simple resolvers.

We should test and get some data. I will do my best to put together some benchmarks.
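
As a rough starting point, a benchmark along these lines could compare a plain getter call against reflection-based field access (hypothetical types, just a sketch):

package bench

import (
	"reflect"
	"testing"
)

type widget struct {
	Name string
}

func (w *widget) NameMethod() string { return w.Name }

// BenchmarkMethodCall measures a plain getter call, the status quo.
func BenchmarkMethodCall(b *testing.B) {
	w := &widget{Name: "gopher"}
	for i := 0; i < b.N; i++ {
		_ = w.NameMethod()
	}
}

// BenchmarkReflectField reads the same field via reflection, roughly what
// field-based resolution would have to do at runtime.
func BenchmarkReflectField(b *testing.B) {
	w := &widget{Name: "gopher"}
	v := reflect.ValueOf(w).Elem()
	for i := 0; i < b.N; i++ {
		_ = v.FieldByName("Name").String()
	}
}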

The amount of reflection that this implementation uses is probably my biggest concern. I do love how minimal the API is though.

@bsr203 I'd love to see your code gen tools if you'd be willing to share.

I have two concerns about this:

  1. Simplicity.
    I think that the following happened a lot during the design of the Go language:
    Is it possible? Yes.
    Would it give the option to write less code? Yes.
    Should we add it? Probably not.
    Adding this feature does not enable us to do something that we could not do before. It is just a second way to get to the same outcome. Imho this is something that is usually avoided in the Go world. Simplicity is preferred.

  2. Tight coupling between resolver and underlying data structures.
    Imho resolvers are a view on the underlying data structures. Those data structures can change without changing the GraphQL schema. The resolver can provide legacy fields. By directly accessing struct fields, this distinction becomes less obvious. This can cause the data structures to become more rigid, because people don't want to affect the schema. I think keeping this strong separation makes your GraphQL codebase more healthy in the long term.

@nicksrandall I fully agree with neelance, and used codegen during the initial migration. The code is tied to our internal stuff and not intended for general use, but I can highlight how we do it.

For a type Widget, keep the convention that WidgetResolver is the GraphQL resolver for it.
There are many codegen examples available on GitHub, including some generic ones. Use one of them to get the Struct and Field info for Widget. You can skip some of the fields via struct tags; for the rest, you can generate simple mappers. HTH

// Writer is a structure to hold accumulated output
type Writer struct {
	Buf bytes.Buffer
}

// Printf writes the formatted string to writer
func (g *Writer) Printf(format string, args ...interface{}) {
	g.Buf.WriteString(fmt.Sprintf(format, args...))
}

func (st *Struct) writeTypeResolver(g *Writer) error {
	g.Printf(`
		// %[1]sResolver is field resolver for %[1]s
		type %[1]sResolver struct {
			d *%[1]s
		}
	`, st.Name())

	for _, f := range st.fields() {
		if f.shouldBeSkipped() { // based on some struct tag
			continue
		}

		gotp := f.goType() // find the go type for base type, and complex type like Time
	
		g.Printf(`
		// %[2]s resolver for %[1]s`, st.Name(), f.Name)

		if f.Name == "ID" {
			g.Printf(`
				func (r *%[1]sResolver) %[2]s() graphql.ID {
					return graphql.ID(r.d.%[2]s)
				}
			`, st.Name(), f.Name)
			continue
		}
		if f.conf.NonNull { // this is set through struct tag
			g.Printf(`
					func (r *%[1]sResolver) %[2]s() %[3]s {
						return r.d.%[2]s
					}
				`, st.Name(), f.Name, gotp)
			continue
		}
		// nullable
		writeNullableField(g, st, f, gotp)
	}
	return nil
}
  
func writeNullableField(g *Writer, st *Struct, f *Field, gotp string) {
	switch f.Type {
	case Bool:
		g.Printf(`
			func (r *%[1]sResolver) %[2]s() *%[3]s {
				if !r.d.%[2]s {
					return nil
				}
				return &r.d.%[2]s
			}`, st.Name(), f.Name, gotp)
	case Uint, Uint16, Uint32, Int, Int8, Int16:
		// these all fit in GraphQL's Int, so convert to int32
		g.Printf(`
			func (r *%[1]sResolver) %[2]s() *int32 {
				if r.d.%[2]s == 0 {
					return nil
				}
				d := int32(r.d.%[2]s)
				return &d
			}`, st.Name(), f.Name)
	case Int32:
		g.Printf(`
			func (r *%[1]sResolver) %[2]s() *int32 {
				if r.d.%[2]s == 0 {
					return nil
				}
				return &r.d.%[2]s
			}`, st.Name(), f.Name)
	case Uint64, Int64:
		panic("TODO")
	case String:
		g.Printf(`
			func (r *%[1]sResolver) %[2]s() *%[3]s {
				if r.d.%[2]s == "" {
					return nil
				}
				return &r.d.%[2]s
			}`, st.Name(), f.Name, gotp)
	case Float32, Float64:
		g.Printf(`
			func (r *%[1]sResolver) %[2]s() *%[3]s {
				if r.d.%[2]s == %[3]s(0) {
					return nil
				}
				return &r.d.%[2]s
			}`, st.Name(), f.Name, gotp)
	default:
		// handle special fields and complex types here;
		// otherwise just skip and implement the resolver by hand
	}
}
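
To make this concrete, the generator above would emit roughly the following for a hypothetical Widget struct with an ID, a required Name, and an optional Age (sketch, not actual generator output):

// WidgetResolver is field resolver for Widget
type WidgetResolver struct {
	d *Widget
}

// ID resolver for Widget
func (r *WidgetResolver) ID() graphql.ID {
	return graphql.ID(r.d.ID)
}

// Name resolver for Widget
func (r *WidgetResolver) Name() string {
	return r.d.Name
}

// Age resolver for Widget (nullable, converted to GraphQL's Int)
func (r *WidgetResolver) Age() *int32 {
	if r.d.Age == 0 {
		return nil
	}
	d := int32(r.d.Age)
	return &d
}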

@neelance I agree with your point of view on Go's simplicity. Simplicity is really complicated. I was thinking about how this problem would be solved by the Go team, and these are my thoughts: what about adding tags to structs?

As I'm sure you know, the json Encoder/Decoder transforms structs into JSON by looking at their field names, and if tags are present it chooses them over the field name. We could have behavior similar to the encoder, and I think we would stay well within the Go philosophy.

This would give us behavior similar to graphql-js, which resolves fields using the same names given in the JSON object.

I was thinking of something like this:

# Schema
type Vendor {
	id: String
	name: String
}

// Go struct
type Vendor struct {
	ID    string `json:"id" graph:"id"`
	Name  string `json:"name" graph:"name"`
	Email string `json:"email"`
}

As you can see here, I could have a Vendor struct where only the tagged fields conform to the schema, or where needed I could still write a resolver func to conform to the schema.
This would be really useful, as the original Vendor struct in my codebase has 20 fields, which would otherwise require writing 20 resolvers just for that type.

Let me know what you think.

@neelance your points make sense to me. Thanks for explaining.

Another design point might be the API we are currently using in Thunder's GraphQL implementation (see https://github.com/samsarahq/thunder/blob/master/example/main.go).

By default, all fields on the Go struct get exposed as GraphQL fields. For example, the fragment

type Address struct {
    City, State string
}

type Friend struct {
    Name string
    Addresses []*Address
}

func (s *Schema) Friend() schemabuilder.Object {
    object := schemabuilder.Object{
        Type: Friend{},
    }
    
    object.FieldFunc("numAddresses", func(f *Friend) int {
        return len(f.Addresses)
    })
    
    return object
}

would result in a GraphQL schema

type Address {
    city: String
    state: String
}

type Friend {
    name: String
    addresses: [Address]
    numAddresses: Int
}

Automatically exposing fields feels similar to encoding/json's behavior, and the simpler schemas seem worth it in the end. To opt-out, struct fields can optionally include a graphql:"-" tag. It might make sense to add an option to schemabuilder.Object to ignore all struct fields when wrapping external structs.
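
For illustration, opting a field out would then look something like this (hypothetical struct; the graphql:"-" tag is the one mentioned above):

type Friend struct {
	Name      string
	Addresses []*Address

	// PasswordHash is kept out of the generated GraphQL schema.
	PasswordHash string `graphql:"-"`
}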

For encoding/json, reading struct field values is the only standard way to use it; it does not additionally support reading values from methods. If you really want to do something special, you have to implement MarshalJSON.

Same for graphql-go: the only standard way to read values is methods, because that fits the GraphQL model better. Also adding the option to read struct field values would only make the API more complicated.

However, we may want to add an entry point for custom behavior, e.g. ResolveGraphQL similar to MarshalJSON. Then you can write your own generic resolver that reads struct fields. It might also be useful for #17.
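
As a sketch only (the interface name and signature here are hypothetical, not part of graphql-go), such an entry point could mirror json.Marshaler, with a generic field-reading resolver built on top of it:

package resolve

import (
	"fmt"
	"reflect"
)

// GraphQLResolver is a hypothetical hook, analogous to json.Marshaler; the
// real name and signature would be up to the library.
type GraphQLResolver interface {
	ResolveGraphQL(field string) (interface{}, error)
}

// fieldResolver is one possible generic implementation that reads struct
// fields by name via reflection.
type fieldResolver struct {
	v interface{}
}

func (r fieldResolver) ResolveGraphQL(field string) (interface{}, error) {
	val := reflect.ValueOf(r.v)
	if val.Kind() == reflect.Ptr {
		val = val.Elem()
	}
	f := val.FieldByName(field)
	if !f.IsValid() {
		return nil, fmt.Errorf("no field %q on %T", field, r.v)
	}
	return f.Interface(), nil
}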

I wrote some code generation to do this. Using codegen for this has a theoretical slowdown, namely because you have to return a slice of the resolver type rather than just returning pointers to the structs themselves. You therefore pay an extra O(n) pass with a memory allocation (albeit just ~12 bytes or so per element) on every slice you return. It's fast, but for large datasets it might incur a performance penalty.
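
Concretely, the extra pass looks something like this (hypothetical QueryResolver, Widget, and store, just to illustrate the per-list allocation):

// Hypothetical list resolver: the data is already in memory as []*Widget,
// but the schema wants []*WidgetResolver, so we pay an O(n) wrapping pass.
func (r *QueryResolver) Widgets() []*WidgetResolver {
	widgets := r.store.ListWidgets()

	resolvers := make([]*WidgetResolver, 0, len(widgets))
	for _, w := range widgets {
		resolvers = append(resolvers, &WidgetResolver{d: w})
	}
	return resolvers
}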

Hey all, I thought I would just share the quickly hacked-together generator we made to produce most of the boilerplate code required. It's by no means a full solution, but we plan to keep working on it to cover all scenarios. PRs are always welcome too ;-) https://github.com/bLevein/graphql-gen-go

Just a heads up: I updated https://github.com/bLevein/graphql-gen-go and it should now generate all the correct type resolver funcs for fields and the corresponding Go types. I still need to add unions, interfaces, and a few other things. Any input is appreciated, so feel free to file bug reports and/or feature requests. (P.S. I know the code is still kind of ugly, but it works for now ;-) )

@euforic is there a reason why you didn't choose to use golang's text/template?

@tonyghita I like the approach gRPC takes of filling a buffer, so that is the direction I wanted to go with it. It makes things a bit more flexible.
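
For comparison, a minimal text/template version of one of the simple resolver emitters might look like this (illustrative sketch, not from graphql-gen-go):

package main

import (
	"os"
	"text/template"
)

// resolverTmpl emits the same simple getter resolver as the Printf approach.
var resolverTmpl = template.Must(template.New("resolver").Parse(`
// {{.Field}} resolver for {{.Type}}
func (r *{{.Type}}Resolver) {{.Field}}() {{.GoType}} {
	return r.d.{{.Field}}
}
`))

func main() {
	data := struct{ Type, Field, GoType string }{"Widget", "Name", "string"}
	if err := resolverTmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}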

This may be addressed by #88.