comeara/pillar

Allow specifying consistency levels for reads/writes of applied migrations

Opened this issue · 1 comment

We just ran into an issue that was most probably caused by the default consistency level (ONE): running migrate fails with an AlreadyExistsException, most likely because a read of applied_migrations at consistency ONE missed already-recorded migrations, so pillar tried to re-apply them.

We just fixed this issue (for us) by changing the default/session consistency level in the sbt-pillar-plugin to QUORUM.
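For anyone hitting the same problem outside the plugin, here's a minimal sketch of raising the session-wide default from ONE to QUORUM with the DataStax Java driver (which pillar builds on); the contact point and keyspace below are placeholders, not the plugin's actual configuration:

```scala
import com.datastax.driver.core.{Cluster, ConsistencyLevel, QueryOptions}

// Sketch: make QUORUM the default consistency for every statement on
// this session (assumes DataStax Java driver 3.x; host is a placeholder).
val cluster = Cluster.builder()
  .addContactPoint("cassandra-host")
  .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.QUORUM))
  .build()
val session = cluster.connect("core_service")
```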

Because we're running in a multi-DC environment, it would be ideal to be able to specify different consistency levels for reads and writes: writes should use EACH_QUORUM by default, which permits reads at LOCAL_QUORUM, since a write acknowledged by a quorum in every datacenter is guaranteed to overlap with a LOCAL_QUORUM read in any single datacenter. Assuming applied_migrations is read more often than written, this combination provides the best overall performance (relatively fast, DC-local reads).
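To make the request concrete, here's a hypothetical sketch of the split using per-statement consistency levels on the driver. The helpers and the INSERT's column list are illustrative assumptions, not pillar's actual API or schema:

```scala
import java.util.Date
import com.datastax.driver.core.{ConsistencyLevel, Session, SimpleStatement}

// Hypothetical helper showing split consistency levels; the column list
// (authored_at, description, applied_at) is assumed, not taken from pillar.
def recordMigration(session: Session, description: String): Unit = {
  val write = new SimpleStatement(
    "INSERT INTO applied_migrations (authored_at, description, applied_at) VALUES (?, ?, ?)",
    new Date(), description, new Date())
  write.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM) // quorum in every DC
  session.execute(write)
}

def appliedMigrations(session: Session) = {
  val read = new SimpleStatement("SELECT * FROM applied_migrations")
  read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM) // quorum in local DC only
  session.execute(read).all()
}
```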

We are seeing similar behavior, and our migrations fail because of it. Below are the row counts of applied_migrations for one of our keyspaces, as seen from each node. Running nodetool repair on the keyspace syncs the missing rows. This happens even though the keyspace is set up with proper replication settings:

 keyspace_name | durable_writes | strategy_class                                        | strategy_options
---------------+----------------+-------------------------------------------------------+-------------------
  core_service |           True | org.apache.cassandra.locator.NetworkTopologyStrategy  | {"DC_DATA_1":"3"}


keyspace: core_service

Datacenter: DC_DATA_1
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  host_ip_address_1_DC_DATA_1  3.75 MB    256     100.0%            3851106b                              RAC1
UN  host_ip_address_2_DC_DATA_1  3.72 MB    256     100.0%            d1201142                              RAC1
UN  host_ip_address_3_DC_DATA_1  3.72 MB    256     100.0%            81625495                              RAC1
Datacenter: DC_OPSCENTER_1
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                           Load       Tokens  Owns (effective)  Host ID                               Rack
UN  host_ip_address_4_DC_OPSCENTER_1  631.31 MB  256     0.0%              39e4f8af                              RAC1

Query: select count(*) from core_service.applied_migrations; (performed against each node individually; see the sketch after the output for one way to script this)

host_ip_address_1_DC_DATA_1 core_service applied_migrations

 count
-------
     1

(1 rows)
host_ip_address_2_DC_DATA_1 core_service applied_migrations

 count
-------
     2

(1 rows)
host_ip_address_3_DC_DATA_1 core_service applied_migrations

 count
-------
     2

(1 rows)
host_ip_address_4_DC_OPSCENTER_1 core_service applied_migrations

 count
-------
     2

(1 rows)
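For reference, a sketch of how per-node counts like the ones above can be gathered programmatically, assuming the DataStax Java driver 3.x: the whitelist policy pins the driver to one coordinator, and a count(*) at consistency ONE is then usually answered from that node's own replica. The helper name and port are illustrative.

```scala
import java.net.InetSocketAddress
import scala.collection.JavaConverters._
import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}
import com.datastax.driver.core.policies.{RoundRobinPolicy, WhiteListPolicy}

// Illustrative helper: pin all requests to a single node and count the
// rows it can see at consistency ONE (native port 9042 assumed).
def countOnNode(host: String): Long = {
  val cluster = Cluster.builder()
    .addContactPoint(host)
    .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(),
      List(new InetSocketAddress(host, 9042)).asJava))
    .build()
  try {
    val stmt = new SimpleStatement("SELECT count(*) FROM applied_migrations")
    stmt.setConsistencyLevel(ConsistencyLevel.ONE)
    cluster.connect("core_service").execute(stmt).one().getLong(0)
  } finally cluster.close()
}
```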