shotover/shotover-proxy

Intermittent test failures

Opened this issue · 5 comments

rukai commented

Intermittent test failures are now occurring pretty commonly; time to do some investigation and flush them out.

## cassandra_int_tests::cluster_single_rack_v4::case_2_datastax

Oh oops, I wasn't thinking and overwrote this with a manually induced test failure.

shotover   04:41:06.226239Z  INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover   04:41:06.226261Z  INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   04:41:06.226268Z  INFO shotover_proxy::runner: topology=Topology { sources: {"cassandra_prod": Cassandra(CassandraConfig { listen_addr: "127.0.0.1:9042", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"main_chain": [CassandraSinkCluster(CassandraSinkClusterConfig { first_contact_points: ["172.16.1.2:9044", "172.16.1.3:9044"], local_shotover_host_id: 2dd022d6-2937-4754-89d6-02d2933a8f7a, shotover_nodes: [ShotoverNode { address: 127.0.0.1:9042, data_center: "dc1", rack: "rack1", host_id: 2dd022d6-2937-4754-89d6-02d2933a8f7a }], tls: None, connect_timeout_ms: 3000, read_timeout: None })]}, source_to_chain_mapping: {"cassandra_prod": "main_chain"} }
shotover   04:41:06.226368Z  INFO shotover_proxy::config::topology: Loaded chains ["main_chain"]
shotover   04:41:06.226374Z  INFO shotover_proxy::sources::cassandra: Starting Cassandra source on [127.0.0.1:9042]
shotover   04:41:06.226410Z  INFO shotover_proxy::config::topology: Loaded sources [["cassandra_prod"]] and linked to chains
shotover   04:41:06.226479Z  INFO shotover_proxy::server: accepting inbound connections
shotover   04:41:06.230321Z  INFO connection{id=2 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:06.230866Z  INFO connection{id=3 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:06.360203Z  INFO shotover_proxy::transforms::cassandra::sink_cluster::topology: Topology task control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:06.376165Z  INFO connection{id=4 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
shotover   04:41:06.808989Z  INFO connection{id=5 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
test cassandra_int_tests::cluster_single_rack_v4::case_3_scylla has been running for over 60 seconds
shotover   04:41:30.365171Z  INFO connection{id=7 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:30.365650Z  INFO connection{id=8 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:30.371416Z  INFO connection{id=9 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:30.587334Z  INFO connection{id=10 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:30.682172Z  INFO connection{id=12 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:30.683146Z  INFO connection{id=13 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:30.685560Z  INFO connection{id=14 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:41:30.877738Z  INFO connection{id=15 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
2023-03-17T04:41:31.024182Z  WARN datastax::internal::core::Decoder::decode_warnings: Server-side warning: `USE <keyspace>` with prepared statements is considered to be an anti-pattern due to ambiguity in non-qualified table names. Please consider removing instances of `Session#setKeyspace(<keyspace>)`, `Session#execute("USE <keyspace>")` and `cluster.newSession(<keyspace>)` from your code, and always use fully qualified table names (e.g. <keyspace>.<table>). Keyspace used: test_prepare_statements, statement keyspace: test_prepare_statements, statement id: 2418a6606d64ccd02a7805fc4416a352    
shotover   04:41:34.096373Z  INFO connection{id=17 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:34.096883Z  INFO connection{id=18 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:34.101051Z  INFO connection{id=19 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:41:34.291226Z  INFO connection{id=20 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:41:34.403093Z  INFO connection{id=22 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:34.403970Z  INFO connection{id=23 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:34.406026Z  INFO connection{id=24 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:34.584385Z  INFO connection{id=25 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
2023-03-17T04:41:34.962489Z ERROR datastax::internal::core::ControlConnection::handle_refresh_table_or_view: No row found for table (or view) test_native_types_keyspace.native_types_table in system schema tables.    
shotover   04:41:36.675709Z  INFO connection{id=27 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:36.676163Z  INFO connection{id=28 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:36.678469Z  INFO connection{id=29 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
shotover   04:41:36.837232Z  INFO connection{id=30 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:37.477438Z  INFO connection{id=32 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:41:52.733208Z  INFO connection{id=34 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:52.733648Z  INFO connection{id=35 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:41:52.736008Z  INFO connection{id=36 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:41:52.937763Z  INFO connection{id=37 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:42:19.811701Z  INFO connection{id=39 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:42:19.812153Z  INFO connection{id=40 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:42:19.814258Z  INFO connection{id=41 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:42:20.035077Z  INFO connection{id=42 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:42:38.839475Z ERROR connection{id=25 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
2023-03-17T04:42:40.372003Z  WARN void: Lost control connection to host 127.0.0.1    
shotover   04:42:40.371532Z ERROR connection{id=24 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
2023-03-17T04:42:40.381189Z  WARN void: Lost control connection to host 127.0.0.1    
shotover   04:42:40.380955Z ERROR connection{id=29 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
shotover   04:42:42.564606Z  INFO connection{id=44 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.4:9044
shotover   04:42:43.842535Z  WARN connection{id=43 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: A successful connection to a control node was made but attempts to connect to these nodes failed first:
* 172.16.1.3:9044:
    - Failed to create new connection
    - destination 172.16.1.3:9044 did not respond to connection attempt within 3s
shotover   04:42:43.844293Z  INFO connection{id=43 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
2023-03-17T04:42:45.565742Z ERROR datastax::internal::core::Cluster::on_reconnect: Unable to reestablish a control connection to host 127.0.0.1 because of the following error: Underlying connection error: Received error response 'Internal shotover (or custom transform) bug: Chain failed to send and/or receive messages, the connection will now be closed.

Caused by:
    0: CassandraSinkCluster transform failed
    1: Failed to create new connection
    2: destination 172.16.1.3:9044 did not respond to connection attempt within 3s' (0x02000000)    
shotover   04:42:45.565387Z ERROR connection{id=44 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
shotover   04:42:52.852825Z  INFO connection{id=46 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:42:52.853281Z  INFO connection{id=47 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:42:52.854781Z  INFO connection{id=48 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:42:53.045498Z  INFO connection{id=49 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:42:55.122057Z  INFO connection{id=50 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:42:58.654256Z  INFO connection{id=51 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:43:19.887771Z  INFO connection{id=53 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:43:19.888221Z  INFO connection{id=54 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:43:19.890321Z  INFO connection{id=55 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:43:20.082451Z  INFO connection{id=56 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
shotover   04:44:03.606764Z  INFO connection{id=58 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:44:03.607255Z  INFO connection{id=59 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:44:03.611767Z  INFO connection{id=60 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
shotover   04:44:03.912827Z  INFO connection{id=61 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
shotover   04:44:16.891330Z ERROR connection{id=43 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
shotover   04:44:19.094083Z  INFO connection{id=62 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:44:25.448566Z ERROR connection{id=30 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
shotover   04:44:27.452462Z  INFO connection{id=63 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:45:02.171358Z  INFO connection{id=65 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 66 (configure the client to use a supported version by default to improve connection time)
shotover   04:45:02.171857Z  INFO connection{id=66 source="CassandraSource"}: shotover_proxy::codec::cassandra: Negotiating protocol version: rejecting version 65 (configure the client to use a supported version by default to improve connection time)
shotover   04:45:02.174707Z  INFO connection{id=67 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.2:9044
shotover   04:45:02.387528Z  INFO connection{id=68 source="CassandraSource"}: shotover_proxy::transforms::cassandra::sink_cluster: Control connection finalized against node at: 172.16.1.3:9044
2023-03-17T04:45:04.749774Z  WARN void: Lost control connection to host 127.0.0.1    
shotover   04:45:04.749356Z  INFO shotover_proxy::runner: received SIGTERM
shotover   04:45:04.749814Z  INFO shotover_proxy::runner: Shotover was shutdown cleanly.
thread 'cassandra_int_tests::cluster_single_rack_v4::case_2_datastax' panicked at 'Unexpected event 04:42:38.839475Z ERROR connection{id=25 source="CassandraSource"}: shotover_proxy::server: connection was unexpectedly terminated

Caused by:
    0: Chain failed to send and/or receive messages, the connection will now be closed.
    1: CassandraSinkCluster transform failed
    2: Failed to create new connection
    3: destination 172.16.1.3:9044 did not respond to connection attempt within 3s
Any ERROR or WARN events that occur in integration tests must be explicitly allowed by adding an appropriate EventMatcher to the method call.', /home/runner/work/shotover-proxy/shotover-proxy/tokio-bin-process/src/process.rs:246:21
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
   2: tokio_bin_process::process::BinProcess::consume_remaining_events_inner::{{closure}}
   3: tokio_bin_process::process::BinProcess::consume_remaining_events::{{closure}}
   4: tokio_bin_process::process::BinProcess::shutdown_and_then_consume_events::{{closure}}
   5: lib::cassandra_int_tests::cluster_single_rack_v4::{{closure}}
   6: lib::cassandra_int_tests::cluster_single_rack_v4::case_2_datastax::{{closure}}::{{closure}}
   7: tokio::runtime::runtime::Runtime::block_on
   8: core::ops::function::FnOnce::call_once
   9: serial_test::serial_code_lock::local_serial_core
  10: core::ops::function::FnOnce::call_once
  11: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
2023-03-17T04:45:05.526246Z ERROR cdrs_tokio::transport: Transport error! error=IO error: Connection reset by peer (os error 104)
test cassandra_int_tests::cluster_single_rack_v4::case_2_datastax ... FAILED
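The panic message spells out the mechanism: tokio-bin-process fails the test on any ERROR/WARN event that no matcher explicitly allows. As a self-contained illustration of that allow-list idea (the `EventMatcher` below is a stand-in written for this sketch, not the real tokio-bin-process API):

```rust
// Stand-in types illustrating the allow-list idea: the test harness
// collects the process's tracing events and fails on any ERROR/WARN
// event that no matcher explicitly allows.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Level {
    Info,
    Warn,
    Error,
}

struct Event {
    level: Level,
    target: String,
    message: String,
}

/// Allows events whose level/target/message match all of its filters.
/// An unset filter matches everything.
struct EventMatcher {
    level: Option<Level>,
    target: Option<String>,
    message_substring: Option<String>,
}

impl EventMatcher {
    fn new() -> Self {
        Self { level: None, target: None, message_substring: None }
    }
    fn with_level(mut self, level: Level) -> Self {
        self.level = Some(level);
        self
    }
    fn with_target(mut self, target: &str) -> Self {
        self.target = Some(target.into());
        self
    }
    fn with_message_containing(mut self, s: &str) -> Self {
        self.message_substring = Some(s.into());
        self
    }

    fn matches(&self, event: &Event) -> bool {
        self.level.map_or(true, |l| l == event.level)
            && self.target.as_deref().map_or(true, |t| t == event.target)
            && self
                .message_substring
                .as_deref()
                .map_or(true, |s| event.message.contains(s))
    }
}

/// The ERROR/WARN events that would fail the test: those not allowed by
/// any matcher. INFO events are never flagged.
fn unexpected_events<'a>(events: &'a [Event], allowed: &[EventMatcher]) -> Vec<&'a Event> {
    events
        .iter()
        .filter(|e| matches!(e.level, Level::Warn | Level::Error))
        .filter(|e| !allowed.iter().any(|m| m.matches(e)))
        .collect()
}
```

So to make this run green while the underlying bug is investigated, the failing ERROR above would need a matcher allowing "connection was unexpectedly terminated" from `shotover_proxy::server`, rather than relaxing the check globally.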

## redis_int_tests::multi

shotover   23:34:03.094628Z  INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover   23:34:03.094657Z  INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   23:34:03.094663Z  INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [ConsistentScatter(ConsistentScatterConfig { route_map: {"one": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3331", tls: None, connect_timeout_ms: 3000 })], "three": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3333", tls: None, connect_timeout_ms: 3000 })], "two": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3332", tls: None, connect_timeout_ms: 3000 })]}, write_consistency: 2, read_consistency: 2 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover   23:34:03.094696Z  WARN shotover_proxy::transforms::distributed::consistent_scatter: Using this transform is considered unstable - Does not work with REDIS pipelines
shotover   23:34:03.095483Z  INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover   23:34:03.095641Z  INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover   23:34:03.095791Z  INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
shotover   23:34:03.095942Z  INFO shotover_proxy::server: accepting inbound connections
shotover   23:34:03.096977Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
shotover   23:34:03.097043Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
shotover   23:34:03.096468Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover   23:34:03.097758Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover   23:34:03.097792Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
shotover   23:34:03.097801Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
thread 'redis_int_tests::multi' panicked at 'assertion failed: `(left == right)`
  left: `Ok(2)`,
 right: `Ok(3)`', shotover-proxy/tests/redis_int_tests/basic_driver_tests.rs:94:5
stack backtrace:
   0: rust_begin_unwind
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14
   2: core::panicking::assert_failed_inner
   3: core::panicking::assert_failed
   4: lib::redis_int_tests::basic_driver_tests::test_keys_hiding::{{closure}}
   5: lib::redis_int_tests::multi::{{closure}}::{{closure}}
   6: tokio::runtime::runtime::Runtime::block_on
   7: core::ops::function::FnOnce::call_once
   8: serial_test::serial_code_lock::local_serial_core
   9: core::ops::function::FnOnce::call_once
  10: core::ops::function::FnOnce::call_once
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/ops/function.rs:507:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::multi ... FAILED

This should be fixed by: #1080
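One plausible reading of the `Ok(2)` vs `Ok(3)` mismatch: with `write_consistency: 2` out of three routes, a write can legally land on only two replicas, so the per-route answers to a later query may disagree. A generic sketch of that disagreement and how a scatter transform might resolve it by majority (this is an illustration, not Shotover's actual ConsistentScatter logic):

```rust
use std::collections::HashMap;

/// Pick the response agreed on by at least `consistency` routes, if any.
/// With write_consistency: 2 of 3, one replica can miss a write, so the
/// routes may answer e.g. "2 keys" vs "3 keys" for the same query and the
/// transform has to decide which answer wins.
fn resolve<T: Eq + std::hash::Hash + Clone>(responses: &[T], consistency: usize) -> Option<T> {
    let mut counts: HashMap<&T, usize> = HashMap::new();
    for r in responses {
        *counts.entry(r).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .filter(|(_, n)| *n >= consistency)
        .max_by_key(|(_, n)| *n)
        .map(|(r, _)| r.clone())
}
```

If the test's read happened to be served before the lagging route caught up, or was resolved against the wrong subset, an off-by-one key count like the one above is exactly what you'd see.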

rukai commented

and another:

## redis_int_tests::cluster_handling

shotover   01:50:11.551156Z  INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover   01:50:11.551248Z  INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   01:50:11.551276Z  INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [RedisSinkCluster(RedisSinkClusterConfig { first_contact_points: ["127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"], direct_destination: Some("127.0.0.1:2220"), tls: None, connection_count: None, connect_timeout_ms: 3000 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover   01:50:11.556594Z  INFO shotover_proxy::transforms::redis::sink_cluster: connected to upstream
shotover   01:50:11.556707Z  INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover   01:50:11.556761Z  INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover   01:50:11.556920Z  INFO shotover_proxy::server: accepting inbound connections
shotover   01:50:11.557192Z  INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
thread 'redis_int_tests::cluster_handling' panicked at 'called `Result::unwrap()` on an `Err` value: An error was signalled by the server: The specified node is not a master', shotover-proxy/tests/redis_int_tests/basic_driver_tests.rs:1005:10
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
   2: core::result::unwrap_failed
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1790:5
   3: core::result::Result<T,E>::unwrap
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1112:23
   4: lib::redis_int_tests::basic_driver_tests::is_cluster_replicas_ready::{{closure}}
             at ./tests/redis_int_tests/basic_driver_tests.rs:1000:15
   5: lib::redis_int_tests::basic_driver_tests::test_cluster_ports_rewrite_nodes::{{closure}}
             at ./tests/redis_int_tests/basic_driver_tests.rs:1027:61
   6: lib::redis_int_tests::cluster_handling::{{closure}}::{{closure}}
             at ./tests/redis_int_tests/mod.rs:256:55
   7: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/future/future.rs:125:9
   8: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:63
   9: tokio::runtime::coop::with_budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:102:5
  10: tokio::runtime::coop::budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:68:5
  11: tokio::runtime::park::CachedParkThread::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:31
  12: tokio::runtime::context::BlockingRegionGuard::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/context.rs:315:13
  13: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/scheduler/multi_thread/mod.rs:66:9
  14: tokio::runtime::runtime::Runtime::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/runtime.rs:284:45
  15: lib::redis_int_tests::cluster_handling::{{closure}}
             at ./tests/redis_int_tests/mod.rs:258:5
  16: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
  17: serial_test::serial_code_lock::local_serial_core
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-1.0.0/src/serial_code_lock.rs:25:5
  18: lib::redis_int_tests::cluster_handling
             at ./tests/redis_int_tests/mod.rs:240:1
  19: lib::redis_int_tests::cluster_handling::{{closure}}
             at ./tests/redis_int_tests/mod.rs:241:29
  20: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
  21: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::cluster_handling ... FAILED
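The `unwrap()` in `is_cluster_replicas_ready` turns a transient "The specified node is not a master" reply (normal while the cluster is still settling) into a hard panic. A minimal sketch of the alternative, retrying until the probe succeeds or a deadline passes (the probe closure is a placeholder; the real test would issue `CLUSTER REPLICAS` through the redis crate):

```rust
use std::time::{Duration, Instant};

/// Retry a fallible readiness probe until it returns Ok(true) or the
/// deadline passes. Transient errors (e.g. "The specified node is not a
/// master" during cluster formation) are retried instead of unwrapped
/// into a panic.
fn wait_until_ready<F>(mut probe: F, timeout: Duration, interval: Duration) -> bool
where
    F: FnMut() -> Result<bool, String>,
{
    let deadline = Instant::now() + timeout;
    loop {
        match probe() {
            Ok(true) => return true,
            // Not ready yet, or a transient error: retry until the deadline.
            Ok(false) | Err(_) => {
                if Instant::now() >= deadline {
                    return false;
                }
                std::thread::sleep(interval);
            }
        }
    }
}
```

In the real test the probe would run `CLUSTER REPLICAS` against each expected master and return `Ok(true)` only once every reply lists the expected replicas, returning `false` from the wrapper (and failing the test with a clear message) only if the cluster never converges.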

and hit again:

shotover   03:15:58.426453Z  INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover   03:15:58.426541Z  INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   03:15:58.426568Z  INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [RedisSinkCluster(RedisSinkClusterConfig { first_contact_points: ["127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"], direct_destination: Some("127.0.0.1:2220"), tls: None, connection_count: None, connect_timeout_ms: 3000 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover   03:15:58.432166Z  INFO shotover_proxy::transforms::redis::sink_cluster: connected to upstream
shotover   03:15:58.432287Z  INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover   03:15:58.432341Z  INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover   03:15:58.432459Z  INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
shotover   03:15:58.433101Z  INFO shotover_proxy::server: accepting inbound connections
thread 'redis_int_tests::cluster_handling' panicked at 'called `Result::unwrap()` on an `Err` value: An error was signalled by the server: The specified node is not a master', shotover-proxy/tests/redis_int_tests/basic_driver_tests.rs:1008:10
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
   2: core::result::unwrap_failed
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1790:5
   3: core::result::Result<T,E>::unwrap
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/result.rs:1112:23
   4: lib::redis_int_tests::basic_driver_tests::is_cluster_replicas_ready::{{closure}}
             at ./tests/redis_int_tests/basic_driver_tests.rs:1003:15
   5: lib::redis_int_tests::basic_driver_tests::test_cluster_ports_rewrite_nodes::{{closure}}
             at ./tests/redis_int_tests/basic_driver_tests.rs:1030:61
   6: lib::redis_int_tests::cluster_handling::{{closure}}::{{closure}}
             at ./tests/redis_int_tests/mod.rs:256:55
   7: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/future/future.rs:125:9
   8: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:63
   9: tokio::runtime::coop::with_budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:102:5
  10: tokio::runtime::coop::budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/coop.rs:68:5
  11: tokio::runtime::park::CachedParkThread::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/park.rs:283:31
  12: tokio::runtime::context::BlockingRegionGuard::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/context.rs:315:13
  13: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/scheduler/multi_thread/mod.rs:66:9
  14: tokio::runtime::runtime::Runtime::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.25.0/src/runtime/runtime.rs:284:45
  15: lib::redis_int_tests::cluster_handling::{{closure}}
             at ./tests/redis_int_tests/mod.rs:258:5
  16: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
  17: serial_test::serial_code_lock::local_serial_core
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-1.0.0/src/serial_code_lock.rs:25:5
  18: lib::redis_int_tests::cluster_handling
             at ./tests/redis_int_tests/mod.rs:240:1
  19: lib::redis_int_tests::cluster_handling::{{closure}}
             at ./tests/redis_int_tests/mod.rs:241:29
  20: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
  21: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::cluster_handling ... FAILED
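The unwrap inside `is_cluster_replicas_ready` turns a transient "The specified node is not a master" error into a hard test failure, even though the cluster may just not have finished electing masters yet. A generic way to tolerate that (a std-only sketch; the closure, timing values, and error string below are illustrative stand-ins, not the real driver call) is to retry the readiness check until it succeeds or a deadline passes:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Retry `check` until it returns Ok, or until `timeout` elapses.
/// Returns the last error if the deadline is hit.
fn retry_until_ok<T, E>(
    timeout: Duration,
    interval: Duration,
    mut check: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let deadline = Instant::now() + timeout;
    loop {
        match check() {
            Ok(value) => return Ok(value),
            Err(err) if Instant::now() >= deadline => return Err(err),
            Err(_) => sleep(interval),
        }
    }
}

fn main() {
    // Hypothetical stand-in for the CLUSTER REPLICAS check:
    // fails twice with the transient error, then succeeds.
    let mut attempts = 0;
    let result = retry_until_ok(Duration::from_secs(5), Duration::from_millis(10), || {
        attempts += 1;
        if attempts < 3 {
            Err("The specified node is not a master")
        } else {
            Ok(attempts)
        }
    });
    assert_eq!(result, Ok(3));
    println!("succeeded after {attempts} attempts");
}
```

Wrapping the existing check in something like this (keyed off the specific "not a master" error, so genuine failures still surface immediately) would let the test wait out cluster formation instead of flaking.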
rukai commented

redis_int_tests::multi

Should be solved by #1086

shotover   04:43:36.664124Z  INFO shotover_proxy::runner: Starting Shotover 0.1.9
shotover   04:43:36.664157Z  INFO shotover_proxy::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   04:43:36.664164Z  INFO shotover_proxy::runner: topology=Topology { sources: {"redis_prod": Redis(RedisConfig { listen_addr: "127.0.0.1:6379", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"redis_chain": [TuneableConsistencyScatter(TuneableConsistencyScatterConfig { route_map: {"two": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3332", tls: None, connect_timeout_ms: 3000 })], "three": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3333", tls: None, connect_timeout_ms: 3000 })], "one": [RedisTimestampTagger, RedisSinkSingle(RedisSinkSingleConfig { address: "127.0.0.1:3331", tls: None, connect_timeout_ms: 3000 })]}, write_consistency: 2, read_consistency: 2 })]}, source_to_chain_mapping: {"redis_prod": "redis_chain"} }
shotover   04:43:36.664206Z  WARN shotover_proxy::transforms::distributed::tuneable_consistency_scatter: Using this transform is considered unstable - Does not work with REDIS pipelines
shotover   04:43:36.664249Z  INFO shotover_proxy::config::topology: Loaded chains ["redis_chain"]
shotover   04:43:36.664260Z  INFO shotover_proxy::sources::redis: Starting Redis source on [127.0.0.1:6379]
shotover   04:43:36.664830Z  INFO shotover_proxy::config::topology: Loaded sources [["redis_prod"]] and linked to chains
shotover   04:43:36.665030Z  INFO shotover_proxy::server: accepting inbound connections
shotover   04:43:36.666437Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover   04:43:36.666457Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain two was shutdown
shotover   04:43:36.666700Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
shotover   04:43:36.666903Z  INFO connection{id=1 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
shotover   04:43:36.666919Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain three was shutdown
shotover   04:43:36.666518Z  INFO connection{id=3 source="RedisSource"}: shotover_proxy::transforms::chain: Buffered chain one was shutdown
test redis_int_tests::multi has been running for over 60 seconds
thread 'redis_int_tests::multi' panicked at 'assertion failed: `(left == right)`
  left: `Err(WRONGTYPE: Operation against a key holding the wrong kind of value)`,
 right: `Ok("OK")`', shotover-proxy/tests/redis_int_tests/assert.rs:12:5
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/panicking.rs:64:14
   2: core::panicking::assert_failed_inner
   3: core::panicking::assert_failed
   4: lib::redis_int_tests::assert::assert_ok::{{closure}}
   5: lib::redis_int_tests::basic_driver_tests::test_tuple_args::{{closure}}
   6: lib::redis_int_tests::multi::{{closure}}::{{closure}}
   7: tokio::runtime::runtime::Runtime::block_on
   8: core::ops::function::FnOnce::call_once
   9: serial_test::serial_code_lock::local_serial_core
  10: core::ops::function::FnOnce::call_once
  11: core::ops::function::FnOnce::call_once
             at /rustc/2c8cc343237b8f7d5a3c3703e3a87f2eb2c54a74/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test redis_int_tests::multi ... FAILED
rukai commented

kafka_int_tests::passthrough_encode

shotover   00:11:38.791417Z  INFO shotover::runner: Starting Shotover 0.1.10
shotover   00:11:38.791519Z  INFO shotover::runner: configuration=Config { main_log_level: "info,shotover_proxy=info", observability_interface: "0.0.0.0:9001" }
shotover   00:11:38.791551Z  INFO shotover::runner: topology=Topology { sources: {"kafka_source": Kafka(KafkaConfig { listen_addr: "127.0.0.1:9192", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None })}, chain_config: {"main_chain": TransformChainConfig([DebugForceEncodeConfig { encode_requests: true, encode_responses: true }, KafkaSinkSingleConfig { address: "127.0.0.1:9092", connect_timeout_ms: 3000, read_timeout: None }])}, source_to_chain_mapping: {"kafka_source": "main_chain"} }
shotover   00:11:38.791726Z  INFO shotover::config::topology: Loaded chains ["main_chain"]
shotover   00:11:38.791854Z  INFO shotover::sources::kafka: Starting Kafka source on [127.0.0.1:9192]
shotover   00:11:38.791972Z  INFO shotover::config::topology: Loaded sources [["kafka_source"]] and linked to chains
shotover   00:11:38.792124Z  INFO shotover::server: accepting inbound connections
2023-04-26T00:11:38.794440Z ERROR librdkafka: librdkafka: FAIL [thrd:localhost:9192/bootstrap]: localhost:9192/bootstrap: Connect to ipv6#[::1]:9192 failed: Connection refused (after 0ms in state CONNECT)    
2023-04-26T00:11:38.795066Z ERROR rdkafka::client: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): localhost:9192/bootstrap: Connect to ipv6#[::1]:9192 failed: Connection refused (after 0ms in state CONNECT)    
2023-04-26T00:11:38.795597Z ERROR rdkafka::client: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down    
shotover   00:11:45.140390Z  INFO shotover::runner: received SIGTERM
shotover   00:11:45.566429Z  INFO shotover::runner: Shotover was shutdown cleanly.
shotover   00:11:45.566337Z ERROR connection{id=5 source="KafkaSource"}: shotover::server: failed to send or encode message: Broken pipe (os error 32)
thread 'kafka_int_tests::passthrough_encode' panicked at 'Unexpected event 00:11:45.566337Z ERROR connection{id=5 source="KafkaSource"}: shotover::server: failed to send or encode message: Broken pipe (os error 32)
Any ERROR or WARN events that occur in integration tests must be explicitly allowed by adding an appropriate EventMatcher to the method call.', /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:248:21
stack backtrace:
   0: rust_begin_unwind
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/std/src/panicking.rs:579:5
   1: core::panicking::panic_fmt
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/panicking.rs:64:14
   2: tokio_bin_process::process::BinProcess::consume_remaining_events_inner::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:248:21
   3: tokio_bin_process::process::BinProcess::consume_remaining_events::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:198:13
   4: tokio_bin_process::process::BinProcess::shutdown_and_then_consume_events::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-bin-process-0.1.0/src/process.rs:186:13
   5: lib::kafka_int_tests::passthrough_encode::{{closure}}::{{closure}}
             at ./tests/kafka_int_tests/mod.rs:36:51
   6: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/future/future.rs:125:9
   7: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/future/future.rs:125:9
   8: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:541:57
   9: tokio::runtime::coop::with_budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/coop.rs:107:5
  10: tokio::runtime::coop::budget
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/coop.rs:73:5
  11: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:541:25
  12: tokio::runtime::scheduler::current_thread::Context::enter
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:350:19
  13: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:540:36
  14: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:615:57
  15: tokio::macros::scoped_tls::ScopedKey<T>::set
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/macros/scoped_tls.rs:61:9
  16: tokio::runtime::scheduler::current_thread::CoreGuard::enter
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:615:27
  17: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:530:19
  18: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/scheduler/current_thread.rs:154:24
  19: tokio::runtime::runtime::Runtime::block_on
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.27.0/src/runtime/runtime.rs:302:47
  20: lib::kafka_int_tests::passthrough_encode::{{closure}}
             at ./tests/kafka_int_tests/mod.rs:36:5
  21: core::ops::function::FnOnce::call_once
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
  22: serial_test::serial_code_lock::local_serial_core
             at /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/serial_test-2.0.0/src/serial_code_lock.rs:25:5
  23: lib::kafka_int_tests::passthrough_encode
             at ./tests/kafka_int_tests/mod.rs:24:1
  24: lib::kafka_int_tests::passthrough_encode::{{closure}}
             at ./tests/kafka_int_tests/mod.rs:25:31
  25: core::ops::function::FnOnce::call_once
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
  26: core::ops::function::FnOnce::call_once
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test kafka_int_tests::passthrough_encode ... FAILED
rukai commented

redis_int_tests::multi

thread 'redis_int_tests::multi' panicked at 'assertion failed: `(left == right)`
  left: `Ok(0)`,
 right: `Ok(1)`', shotover-proxy/tests/redis_int_tests/assert.rs:16:5
stack backtrace:
   0: rust_begin_unwind
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/std/src/panicking.rs:579:5
   1: core::panicking::panic_fmt
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/panicking.rs:64:14
   2: core::panicking::assert_failed_inner
   3: core::panicking::assert_failed
   4: lib::redis_int_tests::assert::assert_int::{{closure}}
   5: lib::redis_int_tests::basic_driver_tests::test_scan::{{closure}}
   6: lib::redis_int_tests::multi::{{closure}}::{{closure}}
   7: tokio::runtime::runtime::Runtime::block_on
   8: core::ops::function::FnOnce::call_once
   9: serial_test::serial_code_lock::local_serial_core
  10: core::ops::function::FnOnce::call_once
  11: core::ops::function::FnOnce::call_once
             at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library/core/src/ops/function.rs:250:5

Seems like a flush didn't work, complete, or occur, leaving undeleted values behind.
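If leftover keys from an incomplete flush are the cause, one defensive pattern for the test setup (a std-only sketch; `flush` and `key_count` are hypothetical stand-ins for FLUSHDB/DBSIZE on the real connection) is to re-issue the flush until the keyspace reads back empty:

```rust
use std::cell::Cell;
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Re-issue `flush` until `key_count` reads back zero, or the deadline passes.
fn flush_and_verify(
    timeout: Duration,
    mut flush: impl FnMut(),
    mut key_count: impl FnMut() -> usize,
) -> Result<(), &'static str> {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        flush();
        if key_count() == 0 {
            return Ok(());
        }
        sleep(Duration::from_millis(50));
    }
    Err("keyspace still non-empty after flush")
}

fn main() {
    // Hypothetical in-memory stand-in for the real keyspace.
    let keys = Cell::new(3usize);
    let result = flush_and_verify(
        Duration::from_secs(1),
        || keys.set(0), // FLUSHDB stand-in
        || keys.get(),  // DBSIZE stand-in
    );
    assert_eq!(result, Ok(()));
}
```

Running something like this between test cases would at least distinguish "flush never took effect" from "test_scan itself is wrong" the next time this flakes.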

shotover-proxy::lib kafka_int_tests::cluster_1_rack_single_shotover::case_2_java

shotover   02:43:00.064746Z  INFO shotover::runner: Starting Shotover 0.3.1
shotover   02:43:00.064770Z  INFO shotover::runner: configuration=Config { main_log_level: "info, shotover::connection_span=debug", observability_interface: Some("0.0.0.0:9001") }
shotover   02:43:00.064787Z  INFO shotover::runner: topology=Topology { sources: [Kafka(KafkaConfig { name: "kafka", listen_addr: "127.0.0.1:9192", connection_limit: None, hard_connection_limit: None, tls: None, timeout: None, chain: TransformChainConfig([KafkaSinkClusterConfig { first_contact_points: ["172.16.1.2:9092"], shotover_nodes: [ShotoverNodeConfig { address: "127.0.0.1:9192", rack: "rack0", broker_id: 0 }], local_shotover_broker_id: 0, connect_timeout_ms: 3000, read_timeout: None, tls: None, authorize_scram_over_mtls: None }]) })] }
shotover   02:43:00.064816Z  INFO shotover::sources::kafka: Starting Kafka source on [127.0.0.1:9192]
shotover   02:43:00.064945Z  INFO shotover::config::topology: Shotover is now accepting inbound connections
2024-06-17T02:43:00.065661Z  INFO j4rs::logger: Setting classpath to -Djava.class.path=/home/runner/work/shotover-proxy/shotover-proxy/target/release/jassets/j4rs-0.18.0-jar-with-dependencies.jar    
2024-06-17T02:43:00.066182Z  INFO j4rs::logger: Setting library path to -Djava.library.path=/home/runner/work/shotover-proxy/shotover-proxy/target/release/deps:/usr/lib:/lib    
2024-06-17T02:43:00.067112Z  INFO j4rs::logger: Passing to the Java world the name of the library to load: j4rs-4c4b93671bac0689    
2024-06-17T02:43:00.101906Z  INFO j4rs::logger: No JVMs exist. Creating a new one...    
test kafka_int_tests::cluster_1_rack_single_shotover::case_2_java ... FAILED

failures:

failures:
    kafka_int_tests::cluster_1_rack_single_shotover::case_2_java

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 100 filtered out; finished in 22.94s


--- STDERR:              shotover-proxy::lib kafka_int_tests::cluster_1_rack_single_shotover::case_2_java ---
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 
	auto.include.jmx.reporter = true
	bootstrap.controllers = []
	bootstrap.servers = [127.0.0.1:9192]
	client.dns.lookup = use_all_dns_ips
	client.id = 
	connections.max.idle.ms = 300000
	default.api.timeout.ms = 60000
	enable.metrics.push = true
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.max.ms = 1000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1718592183108
thread 'kafka_int_tests::cluster_1_rack_single_shotover::case_2_java' panicked at /home/runner/work/shotover-proxy/shotover-proxy/test-helpers/src/connection/kafka/java.rs:475:58:
called `Result::unwrap()` on an `Err` value: JavaError("org.apache.kafka.common.errors.UnknownTopicOrPartitionException: \n")
stack backtrace:
   0: rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: lib::kafka_int_tests::test_cases::admin_setup::{{closure}}
   4: lib::kafka_int_tests::test_cases::standard_test_suite::{{closure}}
   5: lib::kafka_int_tests::cluster_1_rack_single_shotover::{{closure}}
   6: lib::kafka_int_tests::cluster_1_rack_single_shotover::case_2_java::{{closure}}
   7: tokio::runtime::runtime::Runtime::block_on
   8: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node 0 (/127.0.0.1:9192) could not be established. Node may not be available.
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node 0 disconnected.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node 0 (/127.0.0.1:9192) could not be established. Node may not be available.