Can't write to Dynamic Large Object
dmolesUC opened this issue · 8 comments
`DynamicLargeObjectCreateFile` succeeds, but when I try to write to it with `io.CopyBuffer`, the write always fails. Am I using the API correctly? Is there something more explicit I need to do with regard to segment management?
Here's the relevant code:
```go
var out io.WriteCloser
dloOpts := swift.LargeObjectOpts{
	Container:  obj.container,
	ObjectName: obj.objectName,
}
out, err = cnx.DynamicLargeObjectCreateFile(&dloOpts)
if err != nil {
	logger.Detailf("error opening upload stream: %v\n", err)
	return err
}
defer func() {
	err := out.Close()
	if err != nil {
		logger.Detailf("error closing upload stream: %v\n", err)
	}
}()
buffer := make([]byte, streaming.DefaultRangeSize) // 5 MiB
written, err := io.CopyBuffer(out, body, buffer)
if err != nil {
	logger.Detailf("error writing to upload stream: %v\n", err)
}
logger.Detailf("wrote %d bytes to %v/%v\n", written, obj.container, obj.objectName)
return err
```
And the output:
```
error writing to upload stream: Object Not Found
wrote 0 bytes to distrib.stage.9001.__c5e/cos-crvd-1548190722.bin
error closing upload stream: Object Not Found
```
If I explicitly set the `ChunkSize`, I get `wrote <chunk size> bytes`, but I think that's just `io.CopyBuffer` being over-optimistic.
Does the container `distrib.stage.9001.__c5e` exist in advance?
It does. I can write a small object (or even a medium-sized one, e.g. 1 GiB) with `ObjectCreate` with no problems.
Hmm, I wonder if this is an eventual consistency problem. Did the object get created? Can you see how far the upload got?

You could try increasing this timeout (Line 23 in b2a130b).

I'm not 100% sure where the error is coming from in the swift library code; I think it is from here (Line 308 in b2a130b). Can you put a printf in to confirm that?
From a `swift ls` it doesn't look like the object got created. I've only been programming in Go for a couple of months, so what's the best way to override/edit that code? Take a local clone and point my `go.mod` at it with `replace github.com/ncw/swift => ../swift`? (I'll try that for now and see how it goes.)
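For anyone following along, such an override looks roughly like this (a sketch; the module path and version below are placeholders, not my real ones):

```
module example.com/cos-uploader // placeholder module path

go 1.11

require github.com/ncw/swift v1.0.0 // placeholder version; the replace below wins anyway

// Point the import at a local clone for debugging.
replace github.com/ncw/swift => ../swift
```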
OK. I put in some print statements and stepped through some things in the debugger. It looks like the failure is in a `PUT` request to upload the first segment:

```
PUT /v1/AUTH_merritt/distrib.stage.9001.__c5e_segments/segments/636/f732d637276642d313534383237303333352e62696e064669103e69b023b459d8e9a1afe89db496fbfdd20325026875916380c40534da39a3ee5e6b4b0d3255bfef95601890afd80709/0000000000000001
```

from here (Line 396 in b2a130b), which in turn comes from here (Line 332 in b2a130b).
FWIW, I tried creating a large object in this container with `swift upload -S 1073741824` and that seemed to work OK.
Is there possibly an issue with that segment URL? I notice the segment names for the object I created from the command line seem to be very different from the `PUT` URL:

```
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000005
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000002
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000001
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000004
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000003
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000006
distrib.stage.9001.__c5e_segments/6.3GB.bin/1548272174.337676/6764573490/1073741824/00000000
```
According to the Swift docs, the segment names should be in the form `<name>/<timestamp>/<size>/<segment>`; are they?

Side note: Is there a way to explicitly set the segment size in the Go API? There's `ChunkSize`, but that seems to be the size of the upload buffer, which isn't quite the same thing.
According to the OpenStack docs, on a `PUT`:

> If the container for the object does not already exist, the operation returns the 404 Not Found response code.

Which is puzzling.
> Side note: Is there a way to explicitly set the segment size in the Go API? There's `ChunkSize`, but that seems to be the size of the upload buffer, which isn't quite the same thing.
The `ChunkSize` also controls the size of the chunks. You can turn the buffering on and off with `NoBuffer`.
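So something along these lines should give you larger segments written without the intermediate buffer (a sketch reusing the variables from your snippet above; the 100 MiB value is just illustrative):

```go
dloOpts := swift.LargeObjectOpts{
	Container:  obj.container,
	ObjectName: obj.objectName,
	ChunkSize:  100 * 1024 * 1024, // each segment object will be ~100 MiB
	NoBuffer:   true,              // write segments directly, without buffering
}
out, err := cnx.DynamicLargeObjectCreateFile(&dloOpts)
```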
> According to the Swift docs, the segment names should be in the form `<name>/<timestamp>/<size>/<segment>`; are they?
This library uses a different convention; it is only a convention, after all. I'm not sure why, though, as I didn't write the SLO/DLO support... If you want to use your own segment prefix then set the `SegmentPrefix` option.
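E.g. (a sketch; the prefix string is just an example, not anything the library requires):

```go
dloOpts := swift.LargeObjectOpts{
	Container:     obj.container,
	ObjectName:    obj.objectName,
	SegmentPrefix: "my-custom-prefix", // segment object names will start with this
}
```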
I came across this issue, but in my case the `Object Not Found` error was returned when the `<container>_segments` container didn't exist (used by default here, and referenced on segment write here). The error message wording misled my troubleshooting for a bit. Once the container was in place, large object uploads worked fine 😸.
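For anyone else who hits this, the fix amounts to creating the segments container up front, roughly like this (a sketch, assuming the default `<container>_segments` naming and the `cnx`/`obj` variables from the snippet above):

```go
// Create the segments container before the DLO upload so the segment PUTs don't 404.
segmentContainer := obj.container + "_segments"
if err := cnx.ContainerCreate(segmentContainer, nil); err != nil {
	return err
}
```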