awslabs/aws-c-io

retry strategy logs not emitting?

quinnj opened this issue · 3 comments

quinnj commented

Describe the bug

I'm building a higher-level HTTP client using aws-c-http, and in the normal course of development I sometimes enable aws logging down to the debug/trace level to try to identify problems. This works well by calling aws_logger_set_log_level, but curiously, the retry strategy logs don't seem to be emitted. Take this sample of logs, for example:

[DEBUG] [2023-09-08T22:37:14Z] [00000002028ca080] [channel-bootstrap] - id=0x600000a4c7e0: acquiring bootstrap reference
[TRACE] [2023-09-08T22:37:14Z] [00000002028ca080] [event-loop] - id=0x600000256850: scheduling task 0x600003490440 cross-thread for timestamp 2447234044077916
[TRACE] [2023-09-08T22:37:14Z] [00000002028ca080] [event-loop] - id=0x600000256850: signaling event-loop that cross-thread tasks need to be scheduled.
[INFO] [2023-09-08T22:37:14Z] [00000002028ca080] [connection-manager] - id=0x124f25930: Successfully created
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: wake up with 1 events to process.
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: notified of cross-thread data to process
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: processing cross-thread tasks
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: task 0x600003490440 pulled to event-loop, scheduling now.
[DEBUG] [2023-09-08T22:37:14Z] [00000002a36b7000] [task-scheduler] - id=0x600003490440: Scheduling cull_idle_connections task for future execution at time 2447234044077916
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: running scheduled tasks.
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: detected more scheduled tasks with the next occurring at 13095482167 using timeout of 13s 95482167ns.
[TRACE] [2023-09-08T22:37:14Z] [00000002a36b7000] [event-loop] - id=0x600000256850: waiting for a maximum of 13s 95482167ns
[ Info: acquiring retry token: 127.0.0.1
[TRACE] [2023-09-08T22:37:14Z] [00000002028ca080] [event-loop] - id=0x600000256940: scheduling task 0x2b1d3d7b0 cross-thread for timestamp 0
[TRACE] [2023-09-08T22:37:14Z] [00000002028ca080] [event-loop] - id=0x600000256940: signaling event-loop that cross-thread tasks need to be scheduled.
[TRACE] [2023-09-08T22:37:14Z] [00000002a362b000] [event-loop] - id=0x600000256940: wake up with 1 events to process.
[TRACE] [2023-09-08T22:37:14Z] [00000002a362b000] [event-loop] - id=0x600000256940: notified of cross-thread data to process
[TRACE] [2023-09-08T22:37:14Z] [00000002a362b000] [event-loop] - id=0x600000256940: processing cross-thread tasks
[TRACE] [2023-09-08T22:37:14Z] [00000002a362b000] [event-loop] - id=0x600000256940: task 0x2b1d3d7b0 pulled to event-loop, scheduling now.
[DEBUG] [2023-09-08T22:37:14Z] [00000002a362b000] [task-scheduler] - id=0x2b1d3d7b0: Scheduling aws_exponential_backoff_retry_task task for immediate execution
[TRACE] [2023-09-08T22:37:14Z] [00000002a362b000] [event-loop] - id=0x600000256940: running scheduled tasks.
[DEBUG] [2023-09-08T22:37:14Z] [00000002a362b000] [task-scheduler] - id=0x2b1d3d7b0: Running aws_exponential_backoff_retry_task task with <Running> status
[ Info: acquired retry token, acquiring connection: 127.0.0.1
[DEBUG] [2023-09-08T22:37:14Z] [00000002a362b000] [connection-manager] - id=0x124f25930: Acquire connection
[DEBUG] [2023-09-08T22:37:14Z] [00000002a362b000] [connection-manager] - id=0x124f25930: snapshot - state=1, idle_connection_count=0, pending_acquire_count=1, pending_settings_count=0, pending_connect_count=1, vended_connection_count=0, open_connection_count=0, ref_count=1

This is from an initial HTTP request where I'm creating a new connection manager and retry strategy, but only the connection-manager logs are coming through. We can even see that the backoff retry task is scheduled once the retry strategy is initialized (ref: Scheduling aws_exponential_backoff_retry_task task for immediate execution), yet we're missing, for example, the following log that should appear when the retry strategy is first created:

AWS_LOGF_INFO(AWS_LS_IO_STANDARD_RETRY_STRATEGY, "static: creating new standard retry strategy");

And indeed, I don't see any retry-strategy-related logs anywhere in my traces.

I tried to snoop around the various "log subject" related functionality, but as far as I can tell, the default pipeline logger doesn't actually allow or do any filtering on log subjects, so I'm at a loss as to why these logs specifically aren't coming through, while other aws-c-io logs seem to be just fine (e.g. event-loop, etc.).

Do I have something misconfigured somehow?

Expected Behavior

Retry-strategy logs should be emitted just like the logs of other aws-c-io components.

Current Behavior

No retry strategy logs are being emitted when log level is set to maximum (i.e. trace).

Reproduction Steps

I have fairly standard debug shared-library builds locally, and I'm loading them from Julia, then using Julia's FFI to call various exported aws-c-* library functions, including initializing a global logger, pointing it at stderr, and setting its level. This all seems to be working as expected except for the retry strategy logs.

Possible Solution

No response

Additional Information/Context

No response

aws-c-io version used

248e39a

Compiler and version used

not exactly sure; can find out if needed

Operating System and version

pretty up-to-date macos

Can you make sure you are initializing the logging correctly? You can do that with this: aws_io_library_init(allocator);

Greetings! It looks like this issue hasn’t been active in longer than a week. We encourage you to check if this is still an issue in the latest release. Because it has been longer than a week since the last update on this, and in the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to provide a comment or add an upvote to prevent automatic closure, or if the issue is already closed, please feel free to open a new one.

Sorry for the slow response here; some personal changes left me with a new machine where I was forced to set everything up again from scratch, and now it works! Must have been something I had misconfigured before. Sorry for the noise!