googleforgames/agones

Using portPolicy None results in a gameserver's port being 0

Closed this issue · 5 comments

What happened:

I attempted to use the new feature in v1.41.0 and set my portPolicy to None, but when my gameserver started, the port in its status was set to 0. My gameserver code takes the Address and Port from the gameserver's status and uses them for connections, so with the port set to 0, this failed.
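Roughly, the relevant code does this (a simplified sketch using the Agones Go SDK; the port name "game" matches my spec below):

package main

import (
	"log"

	sdk "agones.dev/agones/sdks/go"
)

func main() {
	s, err := sdk.NewSDK()
	if err != nil {
		log.Fatalf("could not connect to SDK server: %v", err)
	}

	gs, err := s.GameServer()
	if err != nil {
		log.Fatalf("could not fetch GameServer: %v", err)
	}

	// Take Address and Port from the GameServer status; with
	// portPolicy: None, this port came back as 0.
	for _, p := range gs.Status.Ports {
		if p.Name == "game" {
			log.Printf("advertising %s:%d", gs.Status.Address, p.Port)
		}
	}
}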

What you expected to happen:

My expectation is that when portPolicy is set to None, the Port field of the gameserver's status would equal the containerPort value. In other words, to connect to a gameserver Pod with a None portPolicy, I would use the PodIP and the containerPort.

How to reproduce it (as minimally and precisely as possible):

Use any gameserver spec and set its portPolicy to None.

ports:
  - name: game
    portPolicy: None
    protocol: UDP
    containerPort: 1234

Check the status once the gameserver is running:

(screenshot: the GameServer status shows port: 0)

Anything else we need to know?:

Environment:

  • Agones version: v1.41.0
  • Kubernetes version (use kubectl version): v1.30.0-eks-036c24b
  • Cloud provider or hardware configuration: AWS EKS
  • Install method (yaml/helm): Helm
  • Troubleshooting guide log(s):
  • Others:

Just to double check, it is behind a feature gate - have you enabled it?

https://agones.dev/site/docs/installation/install-agones/helm/#configuration - the value should be something like: "PortPolicyNone=true"
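For example, in your Helm values that would look something like this (a sketch of the standard chart value):

agones:
  featureGates: "PortPolicyNone=true"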

And could you show a bit more of your gameserver spec (with the sensitive info removed)?

Okay, I did not use the feature gate. I would expect to receive an error if I was trying to use a gated feature. Let me try that later to see if the behavior changes.

Here is more of my Fleet definition:

apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: game-server
  labels:
    dev/changelist: "2024-06-12-6166-232"
spec:
  replicas: 1
  scheduling: Packed
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        dev/changelist: "2024-06-12-6166-232"
        app.kubernetes.io/name: game-server
        app.kubernetes.io/managed-by: Helm
    spec:
      container: game-server
      ports:
        - containerPort: 7777
          name: game
          portPolicy: None
          protocol: UDP
      health:
        initialDelaySeconds: 30
        periodSeconds: 20
        failureThreshold: 5
      sdkServer:
        logLevel: Info
        grpcPort: 9357
        httpPort: 9358
      template:
        metadata:
          labels:
            dev/changelist: "2024-06-12-6166-232"
            app.kubernetes.io/managed-by: Helm
        spec:
          containers:
            - name: game-server
              securityContext: {}
              image: "ECR"
              imagePullPolicy: Always
              env:
              resources: {}
              volumeMounts:
                - name: ini-overrides
                  mountPath: "blah"
                  subPath: FizzBuzz.ini
          imagePullSecrets:
            - name: ecr
          serviceAccountName: game-server
          securityContext: {}
          volumes:
            - name: ini-overrides
              configMap:
                name: game-server
                items:
                  - key: FizzBuzz.ini
                    path: FizzBuzz.ini
          nodeSelector:
            serverType: gameservers
          tolerations:
            - effect: NoSchedule
              key: serverType
              operator: Equal
              value: game

I think we can probably close this.

(screenshot: the GameServer status with the feature gate enabled)

It looks like enabling the feature gate gets it to function the way I expect.
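With the gate on, the status now reports the containerPort, so it looks roughly like this (an illustrative sketch based on my Fleet spec above, not the exact output):

status:
  ports:
    - name: game
      port: 7777   # now equals containerPort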

Okay, I did not use the feature gate. I would expect to receive an error if I was trying to use a gated feature.

That is a bug, though: if the feature gate is not enabled, we should throw a validation error. I'll leave this open, as we should fix this part.

func (gss *GameServerSpec) validateFeatureGates(fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	if !runtime.FeatureEnabled(runtime.FeaturePlayerTracking) {
		if gss.Players != nil {
			allErrs = append(allErrs, field.Forbidden(fldPath.Child("players"), fmt.Sprintf("Value cannot be set unless feature flag %s is enabled", runtime.FeaturePlayerTracking)))
		}
	}
	if !runtime.FeatureEnabled(runtime.FeatureCountsAndLists) {
		if gss.Counters != nil {
			allErrs = append(allErrs, field.Forbidden(fldPath.Child("counters"), fmt.Sprintf("Value cannot be set unless feature flag %s is enabled", runtime.FeatureCountsAndLists)))
		}
		if gss.Lists != nil {
			allErrs = append(allErrs, field.Forbidden(fldPath.Child("lists"), fmt.Sprintf("Value cannot be set unless feature flag %s is enabled", runtime.FeatureCountsAndLists)))
		}
	}
	return allErrs
}

That is where the validation should be added (in case people are looking).
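For illustration, the added check inside validateFeatureGates could look something like this (a sketch; runtime.FeaturePortPolicyNone and the None PortPolicy constant are assumed to match the names the feature gate uses):

if !runtime.FeatureEnabled(runtime.FeaturePortPolicyNone) {
	for i, p := range gss.Ports {
		// Forbid portPolicy: None while the gate is off, mirroring the
		// Forbidden errors used for the other gated fields above.
		if p.PortPolicy == None {
			allErrs = append(allErrs, field.Forbidden(fldPath.Child("ports").Index(i).Child("portPolicy"), fmt.Sprintf("Value cannot be set unless feature flag %s is enabled", runtime.FeaturePortPolicyNone)))
		}
	}
}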