spandex-project/spandex

Can't make it work.

Closed this issue · 8 comments

Hey, I've spent the day trying to make this work. I looked at the example (which is outdated, but I found a couple of places with newer documentation and adapted it). I don't see any logs or exceptions, even when I enable verbose? mode.
Is there anywhere I can go to get assistance? I've even tried to debug the code, to no avail :(

Hey @BlueHotDog! Maybe this issue can help you investigate further? Feel free to paste any trace/config info here and I can try and help you debug.

Happy to help if you can provide some more details on your setup. Have you set up spandex datadog? Also, once you get things set up, if you could PR fixes to the documentation that misled you we would be very grateful!

Hey, I followed up on the thread.
Everything looks OK. What's very strange is that no matter what host I put into host, I get no exceptions, which makes me wonder.
Here's the output of the suggested commands:

started_trace: {:ok,
 %Spandex.Trace{
   baggage: [],
   id: 1452386667563016093,
   priority: 1,
   spans: [],
   stack: [
     %Spandex.Span{
       completion_time: nil,
       env: "DEV",
       error: nil,
       http: nil,
       id: 3133911937493517027,
       name: "foo",
       parent_id: nil,
       private: [],
       resource: nil,
       service: :api,
       services: [],
       sql_query: nil,
       start: 1593780109285562000,
       tags: [],
       trace_id: 1452386667563016093,
       type: nil
     }
   ]
 }}
updated_span: {:ok,
 %Spandex.Span{
   completion_time: nil,
   env: "DEV",
   error: nil,
   http: nil,
   id: 3133911937493517027,
   name: "foo",
   parent_id: nil,
   private: [],
   resource: "/bar",
   service: :my_service,
   services: [],
   sql_query: nil,
   start: 1593780109285562000,
   tags: [],
   trace_id: 1452386667563016093,
   type: :web
 }}
finished_span: {:ok,
 %Spandex.Span{
   completion_time: 1593780109286206000,
   env: "DEV",
   error: nil,
   http: nil,
   id: 3133911937493517027,
   name: "foo",
   parent_id: nil,
   private: [],
   resource: "/bar",
   service: :my_service,
   services: [],
   sql_query: nil,
   start: 1593780109285562000,
   tags: [],
   trace_id: 1452386667563016093,
   type: :web
 }}
finished_trace: {:ok,
 %Spandex.Trace{
   baggage: [],
   id: 1452386667563016093,
   priority: 1,
   spans: [
     %Spandex.Span{
       completion_time: 1593780109286206000,
       env: "DEV",
       error: nil,
       http: nil,
       id: 3133911937493517027,
       name: "foo",
       parent_id: nil,
       private: [],
       resource: "/bar",
       service: :my_service,
       services: [],
       sql_query: nil,
       start: 1593780109285562000,
       tags: [],
       trace_id: 1452386667563016093,
       type: :web
     }
   ],
   stack: []
 }}

And here's the config:

[trace_key: Api.Tracer, tags: [], services: [], private: [], strategy: Spandex.Strategy.Pdict, service: :api, adapter: SpandexDatadog.Adapter, disabled?: false, env: "DEV"]

Hey @BlueHotDog based on the Slack thread, it sounds like you got this resolved via the verbose?: true option, so I'll go ahead and close it. Please feel free to comment and reopen if you need some more help.
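For anyone who lands here later, a sketch of what the fix looks like, assuming the SpandexDatadog.ApiServer is started in the application's supervision tree (the host/port values here are illustrative defaults, not from this thread):

```elixir
# In the application's supervision tree. The key change is verbose?: true
# on SpandexDatadog.ApiServer, which makes failures to reach the Datadog
# agent show up in the logs instead of being swallowed silently.
children = [
  {SpandexDatadog.ApiServer,
   host: System.get_env("DATADOG_HOST") || "localhost",
   port: 8126,
   batch_size: 10,
   sync_threshold: 100,
   http: HTTPoison,
   verbose?: true}
]
```

With verbose logging enabled, a wrong host or an unreachable agent produces an error log on send, which is what surfaced the problem here.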

Hey, indeed I did. But I think it's weird behavior: errors that occur while sending data to the agent should be printed regardless of verbosity. Currently it's very all-or-nothing. I agree that printing every request is too much, but errors should be printed.

And thanks for the help on slack! ❤️

@BlueHotDog generally speaking, we want to incur as little overhead as possible by default. It's a difficult trade-off, because it often means being less informative in some error cases, and in some situations the user might never find out that something is wrong (most of their requests are fine, but some are failing silently). I think we could default to silent mode and add a log_errors mode. But before we do anything like that, I think we should add a circuit breaker to avoid excessive logging (imagine the agent goes down and we're spewing hundreds of logs).
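The "log errors, but don't flood" idea could be sketched roughly like this. This is a hypothetical illustration, not part of Spandex; the module name, cooldown value, and message format are all made up:

```elixir
# Hypothetical rate-limited error logger: logs the first send failure,
# then suppresses further error logs until a cool-down elapses, so a
# downed agent doesn't produce hundreds of identical log lines.
defmodule ErrorLogBreaker do
  use Agent
  require Logger

  @cooldown_ms 60_000

  # State is the monotonic timestamp of the last emitted error log (or nil).
  def start_link(_opts), do: Agent.start_link(fn -> nil end, name: __MODULE__)

  def log_error(message) do
    now = System.monotonic_time(:millisecond)

    should_log? =
      Agent.get_and_update(__MODULE__, fn
        nil -> {true, now}
        last when now - last >= @cooldown_ms -> {true, now}
        last -> {false, last}
      end)

    if should_log? do
      Logger.error("Failed to send trace to agent: #{message}")
    end

    :ok
  end
end
```

A gradual backoff (doubling the cooldown on repeated failures, resetting on success) would be a natural refinement on top of this.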

Good point, circuit-breaker/gradual backoff + logs might be a good idea!