LINBIT/windrbd

test code for writing when resync is in progress.

jumpapex opened this issue · 5 comments

@johannesthoma

I have created test code for this at:
https://github.com/jumpapex/drbd (branch: paul)
https://github.com/jumpapex/windrbd (branch: paul)

I'm trying to test the following case:

  1. a resync (e.g. sector 0, 4K size) is in progress.
  2. after the local read for the resync completes, a local write (sector 0, 4K size) to the same block arrives,
  3. P_RS_WRITE_ACK for the resync is received but not yet processed (assume processing completes a little later).
  4. the network breaks, so the local write request completes without being sent to the remote. This sets bit 0 of the bitmap.
  5. processing of P_RS_WRITE_ACK now continues, and drbd_set_in_sync will clear bit 0, which the local write just set.

So, is the change made by the local write simply not recorded?

This case is difficult to set up in the real world, but it may happen :)

The test code tries to simulate this case; a standalone sketch of the interleaving follows below.
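To make the suspected interleaving concrete, here is a minimal, self-contained C simulation of the five steps above. The bitmap and both helper functions are hypothetical; only the names P_RS_WRITE_ACK and drbd_set_in_sync refer to real DRBD entities.

```c
/* Minimal sketch (not DRBD code) replaying the suspected race. */
#include <stdio.h>

static unsigned long bitmap;            /* one bit per 4K block */

/* step 4: network broke, so the local write completes without being
 * sent to the peer and marks block 0 out-of-sync */
static void local_write_completed_without_peer(int bit)
{
    bitmap |= 1UL << bit;
}

/* step 5: delayed P_RS_WRITE_ACK processing; drbd_set_in_sync would
 * clear the bit here, discarding the mark the local write just set */
static void process_rs_write_ack(int bit)
{
    bitmap &= ~(1UL << bit);
}

int main(void)
{
    /* steps 1-3: resync of sector 0 is in flight, its ACK has been
     * received but not yet processed */
    local_write_completed_without_peer(0);   /* step 4 */
    process_rs_write_ack(0);                 /* step 5, delayed */

    printf("bit 0 after interleaving: %lu (the write would need 1 "
           "to be recorded)\n", bitmap & 1UL);
    return 0;
}
```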

paul

Hi Paul, thank you for your bug report. I forwarded it to our DRBD experts; they will be back on Monday ...

Best regards,

  • Johannes

Hi Paul, the problem you describe is prevented by the exclusivity of resync extents with activity log extents. This applies to all versions of DRBD up to and including the 9.1 series. From 9.2, the approach is a little different. It is described here. As you can see, the DRBD developers have thought about this already :-)
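To illustrate the exclusivity Joel describes, here is a conceptual sketch (not DRBD's actual code): a region of the disk can be held either by resync or by application IO via its activity log extent, never by both at once. Every name below is a hypothetical illustration.

```c
/* Conceptual sketch of resync-extent / AL-extent exclusivity. */
#include <stdio.h>
#include <stdbool.h>

enum extent_owner { EXTENT_FREE, EXTENT_RESYNC, EXTENT_APP };

static enum extent_owner extent = EXTENT_FREE;

/* Resync holds the extent for the whole operation, including ACK
 * processing and the bitmap update, and only then releases it. */
static bool try_begin_resync(void)
{
    if (extent != EXTENT_FREE)
        return false;
    extent = EXTENT_RESYNC;
    return true;
}

static void end_resync(void)
{
    extent = EXTENT_FREE;
}

/* An application write must win the extent first; while resync holds
 * it, the write waits, so steps 4 and 5 of the scenario above can
 * never interleave. */
static bool try_begin_app_write(void)
{
    if (extent != EXTENT_FREE)
        return false;               /* would block/wait in real DRBD */
    extent = EXTENT_APP;
    return true;
}

int main(void)
{
    try_begin_resync();                           /* resync active */
    printf("app write allowed during resync? %s\n",
           try_begin_app_write() ? "yes" : "no"); /* prints "no" */
    end_resync();                                 /* ACK fully handled */
    printf("app write allowed after resync?  %s\n",
           try_begin_app_write() ? "yes" : "no"); /* prints "yes" */
    return 0;
}
```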

@JoelColledge

Thank you for your clarification.

The document "That is, application IO is blocked by resync IO even when that resync IO has not yet obtained the lock for its interval".

I just wonder how that can be done (blocking even when the lock has not yet been obtained) :-)

Would you please give me some brief hints, or point me to where it is implemented in the source?

Thanks!

Paul

@JoelColledge may I hand over this question to you?

@johannesthoma @JoelColledge

Finally, I found the application/resync write conflict check in drbd-9.2.3.
It is not present in windrbd 1.1.2 (based on drbd-9.1.6), or at least I could not find it :-)
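For anyone following along, here is a much-simplified sketch of what such a conflict check could look like, assuming the 9.2 approach of registering a resync interval before its lock is obtained. The interval list and all helper names below are hypothetical; the real code is organized around DRBD's interval machinery and wait queues.

```c
/* Simplified sketch of an application/resync write conflict check. */
#include <stdbool.h>
#include <stddef.h>

struct interval {
    unsigned long long sector;
    unsigned int size;        /* bytes */
    bool have_lock;           /* resync may be registered without it */
    struct interval *next;
};

static struct interval *resync_intervals;

static bool overlaps(const struct interval *i,
                     unsigned long long sector, unsigned int size)
{
    return sector < i->sector + (i->size >> 9) &&
           i->sector < sector + (size >> 9);
}

/* Application write path: blocked by any overlapping resync interval,
 * whether or not that resync IO already holds its interval lock. */
static bool app_write_must_wait(unsigned long long sector,
                                unsigned int size)
{
    for (struct interval *i = resync_intervals; i; i = i->next)
        if (overlaps(i, sector, size))
            return true;      /* wait even if !i->have_lock */
    return false;
}

int main(void)
{
    struct interval rs = { .sector = 0, .size = 4096, .have_lock = false };
    resync_intervals = &rs;
    /* A write to sector 0 must wait although resync has no lock yet. */
    return app_write_must_wait(0, 4096) ? 0 : 1;
}
```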

Thanks all for your help!

Paul