Jobs created by Sidekiq-cron shouldn’t propagate trace header
Closed this issue · 9 comments
Issue Description
Currently, when sidekiq-cron enqueues jobs, trace headers are added, leaving a long-running trace that contains several unrelated jobs and has no root transaction.
This should be patched in sentry-sidekiq to reset the trace ID for sidekiq-cron enqueues when the sidekiq-cron patch is enabled.
Reproduction Steps
Run the app
Enqueue a job every X minutes via sidekiq-cron (a minimal schedule is sketched below)
Find the trace (or follow it from an error in the Issues tab)
See that the trace duration is several hours and that there is no root transaction
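For reference, a schedule along these lines reproduces the behaviour; the worker class MyJob and the five-minute interval below are just placeholders, not something taken from this issue:

require 'sidekiq/cron/job'

# Register a cron job that enqueues a worker every 5 minutes (sidekiq-cron API).
# "MyJob" is a placeholder worker class.
Sidekiq::Cron::Job.create(
  name:  'my_job - every 5 min',
  cron:  '*/5 * * * *',
  class: 'MyJob'
)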
Expected Behavior
Each job enqueued by sidekiq-cron is unrelated to the others and carries its own trace ID.
Actual Behavior
All enqueued jobs (per process/sidekiq-cron thread) share the same trace ID.
Ruby Version
3.3.4
SDK Version
master
Integration and Its Version
Sidekiq
Sentry Config
No response
true, this is also how we do it in celery beat, will fix!
Could it be something as simple as:
def enque!(*)
  # make sure the current thread has a clean hub
  Sentry.clone_hub_to_current_thread
  Sentry.with_scope do |scope|
    Sentry.with_session_tracking do
      super
    end
  end
end
Obviously, I don't know how it's done in Celery, but the snippet above is inspired by the Rack middleware, except that it patches the enque! method from sidekiq-cron.
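For context, a rough sketch of how such an override could be wired up; the module name and the prepend target (Sidekiq::Cron::Job) are assumptions here, not something settled in this thread:

# Hypothetical patch module; all names are illustrative only.
module SentryCronEnqueuePatch
  def enque!(*)
    # make sure the current thread has a clean hub before anything is enqueued
    Sentry.clone_hub_to_current_thread
    Sentry.with_scope do |_scope|
      Sentry.with_session_tracking { super }
    end
  end
end

# Applied only when sidekiq-cron is loaded, e.g. from the sentry-sidekiq setup code.
Sidekiq::Cron::Job.prepend(SentryCronEnqueuePatch) if defined?(Sidekiq::Cron::Job)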
There's a similar issue with clockwork: everything scheduled from the clockwork process inherits one huge trace. Ideally there should be an API to do something like
Sentry.without_trace_propagation do
  SomeJob.perform_async
end
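Such an API doesn't exist in sentry-ruby at the time of writing; a rough sketch of how an app could approximate it today, assuming Scope#generate_propagation_context (available in recent sentry-ruby versions):

# Hypothetical helper, not part of sentry-ruby. It runs the block on a temporary
# scope with a freshly generated propagation context, so anything enqueued inside
# gets a new trace_id instead of inheriting the scheduler's long-lived trace.
module SentryHelpers
  def self.without_trace_propagation
    Sentry.with_scope do |scope|
      scope.generate_propagation_context
      yield
    end
  end
end

# Usage:
SentryHelpers.without_trace_propagation do
  SomeJob.perform_async
end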
Giving this a go currently.
There is a slight dependency on #2403.
If I understand this correctly, there should be a root transaction for the enqueue from sidekiq-cron in each thread; its operation should probably be queue.publish.
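For illustration, a rough sketch of that idea (not the actual PR; the transaction name and structure are placeholders):

# Give the sidekiq-cron enqueue its own root transaction with op "queue.publish",
# so each scheduled enqueue starts a fresh trace instead of inheriting one.
def enque!(*)
  Sentry.clone_hub_to_current_thread
  Sentry.with_scope do |scope|
    # start_transaction returns nil when tracing is disabled, hence the guards
    transaction = Sentry.start_transaction(name: 'sidekiq-cron enqueue', op: 'queue.publish')
    scope.set_span(transaction) if transaction
    begin
      super
    ensure
      transaction&.finish
    end
  end
end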
Would you be OK with a partial implementation, that depends on #2403 before merging? @sl0thentr0py @solnic
hey, I will ask @solnic to follow up here once he's back next week
@frederikspang yes thank you, just open a PR and we can take it from there!
We have this exact issue and are using Clockwork.
Any suggestions on how to fix it?
The lack of an around_job callback in clockwork complicates this a bit.
You could start by separating runs into transactions with before_run and after_run (a rough sketch follows below).
Look at the PR attached; there may be room for a wish or OSS contribution to clockwork to introduce an around_job callback, in which case the implementation of #2446 should be fairly simple for clockwork.
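In the meantime, a rough, untested sketch of that idea using Clockwork's before_run/after_run hooks; the hook arguments and the skip-on-false behaviour may differ between Clockwork versions, and the transaction naming is just illustrative:

# clock.rb
Clockwork.on(:before_run) do |event, _time|
  # fresh hub per run so each scheduled job starts its own trace
  Sentry.clone_hub_to_current_thread
  transaction = Sentry.start_transaction(name: event.to_s, op: 'queue.publish')
  Sentry.get_current_scope.set_span(transaction) if transaction
  Thread.current[:sentry_clockwork_transaction] = transaction
  true # return a truthy value in case before_run callbacks gate whether the run happens
end

Clockwork.on(:after_run) do |_event, _time|
  Thread.current[:sentry_clockwork_transaction]&.finish
  Thread.current[:sentry_clockwork_transaction] = nil
end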
@Linuus Doing this in the clockwork process only should work as a workaround if you need it urgently:
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.remove Sentry::Sidekiq::SentryContextClientMiddleware
  end
end
This will remove the trace header injection, but only in the clockwork process. Make sure you don't put it in your general configuration.
For the longer term, I think I will add a configuration option to the sentry-sidekiq gem that lets you control the trace header injection behaviour.
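For illustration, such an option might end up looking something like this; the option name below is hypothetical, so check the sentry-sidekiq changelog for whatever actually ships:

Sentry.init do |config|
  config.dsn = '...'
  # Hypothetical setting, for illustration only; not a real option at the time
  # of this thread.
  config.sidekiq.propagate_traces = false
end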