storj-archived/core

Not sending a response to alloc requests with large shard sizes on a big hard disk

Closed · 3 comments

### Package Versions

`npm ls` output (empty — no storj package listed):

    /home/greg10/.nvm/versions/node/v8.9.4/lib
    └── (empty)

Node version: v8.9.4


### Expected Behavior

If the shard fits, the node should send a response to the alloc request. For example:

    {"level":"info","message":"handling alloc request from 3439d79e19b26af8e9bab492d161f45c48beca11 hash 7da59fcba549d1e15d9d0528865c63773c2608d9 size 101","timestamp":"2018-02-22T01:16:09.878Z"}
    {"level":"info","message":"Sending alloc response hash 7da59fcba549d1e15d9d0528865c63773c2608d9 size 101","timestamp":"2018-02-22T01:16:09.964Z"}


### Actual Behavior
The node sometimes does not send a response to alloc requests for large shards, even though the farmer's KFS bucket has enough free space. See the log:

    {"level":"info","message":"handling alloc request from 872f559aec9973e1b336e9b1befe3341534b1ef6 hash a09c7cfbce65609699b992931fee03405a221d85 size 13974057527","timestamp":"2018-02-22T02:04:58.584Z"}
    {"level":"debug","message":"negotiator returned: false","timestamp":"2018-02-22T02:04:58.584Z"}
    {"level":"debug","message":"max KFS bucket size 34359738247, used 0, free 34359738247, shard size 13974057527","timestamp":"2018-02-22T02:04:58.771Z"}
    {"level":"debug","message":"we have enough free space: true","timestamp":"2018-02-22T02:04:58.771Z"}
    {"level":"debug","message":"not sending an offer for the contract","timestamp":"2018-02-22T02:04:58.772Z"}
    {"level":"info","message":"replying to message to 1a461d17ac8523f09d7c620d9cdd0d37736ec546","timestamp":"2018-02-22T02:04:58.772Z"}

### Steps to Reproduce

1. I had a storage allocation of 120GB.
2. After running for some time, with about 100MB actually shared, I saw that roughly 80% of the alloc requests in my area were for more than 2GB, so I raised the allocation in the configuration file to 8TB (see the back-of-envelope check after this list). The relevant line from my settings:

        "storageAllocation": "8TB",

3. I still do not get responses to alloc requests for large chunks of data (more than about 1GB).
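For scale, the "max KFS bucket size 34359738247" in the log above is consistent with this allocation: KFS spreads a store across 256 s-buckets, so each s-bucket is capped at roughly allocation/256. A back-of-envelope check, assuming "8TB" is interpreted as 8TiB (the constant names below are illustrative, not storj-lib identifiers):

```js
// Back-of-envelope check, assuming "8TB" is parsed as 8TiB and KFS's
// default of 256 s-buckets per store. Names are illustrative.
const allocation = 8 * 1024 ** 4;    // 8TiB in bytes
const S_BUCKETS = 256;               // KFS spreads shards across 256 s-buckets
console.log(allocation / S_BUCKETS); // 34359738368, close to the 34359738247 in the log
```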

Works as expected. See #752 for more details.

The default maxShardSize is 4GB on both the farmer and the bridge side. The new bridge version is not yet deployed.
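Plugging the numbers from the log into that limit shows why the offer is withheld despite the free space. A quick check, assuming the 4GB cap means 4GiB (a hedged reading; the exact constant is not shown in this thread):

```js
// Assuming the 4GB default cap means 4GiB; the shard size is from the log above.
const DEFAULT_MAX_SHARD_SIZE = 4 * 1024 ** 3;         // 4294967296 bytes
const requestedShard = 13974057527;                   // ~13GB shard from the alloc request
console.log(requestedShard > DEFAULT_MAX_SHARD_SIZE); // true -> no offer is sent
```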

Do you mean that this is the expected behavior for the current version, and that the limitation is that it can only accept shards smaller than 4GB?

More or less, yes. As soon as the new bridge version is deployed, the renter will not be able to upload bigger shards.