benchmark-action/github-action-benchmark

Benchmark job never finishes

gaby opened this issue · 18 comments

gaby commented

Hello,

We are currently having an issue with v1.19.2 of this action where it just hangs and never finishes (even after hours). We tried enabling debug logs, but there's nothing there.

PR showing the issue: gofiber/fiber#2818
Job(Debug enabled): https://github.com/gofiber/fiber/actions/runs/7695977872/job/21065465963?pr=2818

CI File:

on:
  push:
    branches:
      - master
      - main
    paths:
      - "**"
      - "!docs/**"
      - "!**.md"
  pull_request:
    paths:
      - "**"
      - "!docs/**"
      - "!**.md"

name: Benchmark
jobs:
  Compare:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch Repository
        uses: actions/checkout@v4

      - name: Install Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.20.x"

      - name: Run Benchmark
        run: set -o pipefail; go test ./... -benchmem -run=^$ -bench . | tee output.txt

      - name: Get Previous Benchmark Results
        uses: actions/cache@v4
        with:
          path: ./cache
          key: ${{ runner.os }}-benchmark

      - name: Save Benchmark Results
        uses: benchmark-action/github-action-benchmark@v1.19.2
        with:
          tool: "go"
          output-file-path: output.txt
          github-token: ${{ secrets.BENCHMARK_TOKEN }}
          benchmark-data-dir-path: "benchmarks"
          fail-on-alert: true
          comment-on-alert: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
          auto-push: false
          save-data-file: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}

gaby commented

Ping @rhysd @ktrz. Tried again today and the job never finishes. It only happens after updating to the latest release.

ktrz commented

Hi @gaby
I'm looking into that!
Could you verify if this also happens if you pin it to v1.19.1?

ktrz commented

Thanks, I'll be watching it

ktrz commented

@gaby
From what I can see in the logs, it looks like the extraction logic hangs. I think it might have been broken by the change to the regexp for extracting Go results in v1.18.0.
I should be able to pick up your benchmark results as a test case.
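
For reference, a single raw line of go test -bench -benchmem output that the extractor has to match looks something like this (names and numbers are illustrative):

    Benchmark_AcquireCtx-4    2582733    466.4 ns/op    1568 B/op    5 allocs/op

The regexp needs to pull the benchmark name, the iteration count, and each value/unit pair out of lines like that.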

gaby commented

@ktrz Is that related to this PR? #177

Kinda makes sense, our benchmark suite is quite extensive, so it could have exposed some edge-case bug.

ktrz commented

@gaby I think so. Can I use the output of your benchmarks as a test case? I'll trim it down once I find what's causing the issue.

gaby commented

Yes, go for it.

ktrz commented

hey @gaby
I believe I have a fix ready! Would you mind having a look at PR #225?

ktrz commented

One thing to note: with this upgrade you will receive many more benchmark results, since as of #177 we extract all the additional metrics from the benchmark output. In your case that would be:

  • <YourRegularBenchName>
  • <YourRegularBenchName> - B/op
  • <YourRegularBenchName> - allocs/op

gaby commented

@ktrz Looks good to me. Having the extra benchmarks is very welcome too!

ktrz commented

@gaby I'm glad! I will wait for @ningziwen to have a look at the PR.
I've added some backward compatibility, as I noticed that before v1.18.0, when a Go benchmark reported multiple metrics, a weird unit was extracted, e.g. "ns/op 0 B/op 0 allocs/op" was treated as the unit instead of just "ns/op".

So in your case the resulting benchmarks will be:

  • <YourRegularBenchName> - the backward-compatible metric, with the slightly wrong unit it used to have
  • <YourRegularBenchName> - ns/op
  • <YourRegularBenchName> - B/op
  • <YourRegularBenchName> - allocs/op

I'll probably add an option like goBackwardCompatibleMetrics that defaults to true, and deprecate it in future versions. Created an issue to track that: #226
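
In your workflow that would probably end up looking something like the sketch below (the input name and its casing are not final, so treat it as a rough idea rather than the released API):

    - name: Save Benchmark Results
      uses: benchmark-action/github-action-benchmark@v1
      with:
        tool: "go"
        output-file-path: output.txt
        # tentative input from #226; it would default to true
        goBackwardCompatibleMetrics: false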

gaby commented

@ktrz That sounds fair. Our main branch is currently using v1.16.x, but we are in the process of a major release, so having clean new numbers would help.

ktrz commented

Released in v1.19.3.

@gaby please confirm whether that works for you.

gaby commented

@ktrz The CI now passes, thank you! https://github.com/gofiber/fiber/actions/runs/7756481537/job/21153857457?pr=2827

One question about the output: does this look alright to you?

    {
      name: 'Benchmark_AcquireCtx',
      value: 466.4,
      unit: 'ns/op\t    1568 B/op\t       5 allocs/op',
      extra: '2582733 times\n4 procs'
    },
    {
      name: 'Benchmark_AcquireCtx - ns/op',
      value: 466.4,
      unit: 'ns/op',
      extra: '2582733 times\n4 procs'
    },
    {
      name: 'Benchmark_AcquireCtx - B/op',
      value: 1568,
      unit: 'B/op',
      extra: '2582733 times\n4 procs'
    },
    {
      name: 'Benchmark_AcquireCtx - allocs/op',
      value: 5,
      unit: 'allocs/op',
      extra: '2582733 times\n4 procs'
    }

The "2582733 times" never gets parsed; that's how many times the benchmark loop was executed.

ktrz commented

As far as I understand, this was treated as metadata for the benchmarks. I'm not familiar enough with the Go benchmark output to answer in more detail.

gaby commented

@ktrz The 2582733 is how many times the benchmark loop was executed. The 466.4 ns/op is how long each iteration took. The 1568 B/op is how many bytes were allocated per iteration. The 5 allocs/op is how many memory allocations happened per iteration. The 4 procs is how many CPUs were used during the benchmark.
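
For anyone following along, a minimal benchmark (just a sketch, not our actual code) shows where those numbers come from: the testing package picks b.N, runs the loop body that many times, and that count is what shows up as "2582733 times":

    // example_test.go - run with: go test -bench . -benchmem
    package example

    import "testing"

    var sink []byte

    func Benchmark_Example(b *testing.B) {
        // b.N is chosen by the framework so the loop runs long enough for a
        // stable ns/op figure; -benchmem adds the B/op and allocs/op columns.
        for i := 0; i < b.N; i++ {
            sink = make([]byte, 64) // heap allocation, so B/op and allocs/op are non-zero
        }
    }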

gaby commented

@ktrz Would it be possible to add the "times" value to the benchmark parsing? Should I create a separate ticket for this?