aws/aws-sdk-cpp

Failed to upload file using transfer manager to S3 bucket.

as6432 opened this issue · 38 comments

as6432 commented

Describe the bug

After upgrading the AWS C++ SDK to 1.11.178, file uploads using the transfer manager fail with the error 'InvalidChunkSizeError Message: Only the last chunk is allowed to have a size less than 8192 bytes'.
We tried uploading files of various sizes but get the same error.

Expected Behavior

File upload using the transfer manager should work.

Current Behavior

Uploading a file of any size using the AWS C++ transfer manager APIs fails after upgrading to AWS C++ SDK version 1.11.178.

Reproduction Steps

Take a file of any size and try to upload it using the AWS C++ transfer manager APIs with AWS C++ SDK version 1.11.178.

Possible Solution

No response

Additional Information/Context

1116	15:38:25.830	 [8956]	(StorageArchive)	<9232>	EV:H	{AwsSdk::TransferManager} [Trace] Seeking input stream to determine content-length to upload file to bucket: janitacnonworm with key: 22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9
1117	15:38:25.830	 [8956]	(StorageArchive)	<9232>	EV:H	{AwsSdk::TransferManager} [Trace] Setting content-length to 60126 bytes. To upload file to bucket: janitacnonworm with key: 22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9
1118	15:38:25.830	 [8956]	(StorageArchive)	<9232>	EV:H	{AwsSdk::TransferManager} [Debug] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Scheduling a single-part upload.
1119	15:38:25.830	 [8956]	(StorageArchive)	<7520>	EV:H	{AwsSdk::TransferManager} [Info] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Updated handle status from [NOT_STARTED] to [IN_PROGRESS].
1120	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint str eval parameter: Region = ap-south-1
1121	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint bool eval parameter: UseFIPS = 0
1122	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint bool eval parameter: UseDualStack = 0
1123	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint bool eval parameter: UseArnRegion = 0
1124	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint bool eval parameter: DisableMultiRegionAccessPoints = 0
1125	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint str eval parameter: Bucket = janitacnonworm
1126	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Debug] Endpoint rules engine evaluated the endpoint: https://janitacnonworm.s3.ap-south-1.amazonaws.com
1127	15:38:25.830	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::Aws::Endpoint::DefaultEndpointProvider} [Trace] Endpoint rules evaluated props: {"authSchemes":[{"disableDoubleEncoding":true,"name":"sigv4","signingName":"s3","signingRegion":"ap-south-1"}]}
1145	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Note: Http payloads are not being signed. signPayloads=0 http scheme=https
1146	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Canonical Header String: amz-sdk-invocation-id:B56073AC-DD2A-44FA-8D33-42953E37C476|amz-sdk-request:attempt=1|content-encoding:aws-chunked|content-type:application/octet-stream|host:janitacnonworm.s3.ap-south-1.amazonaws.com|transfer-encoding:chunked|x-amz-content-sha256:STREAMING-UNSIGNED-PAYLOAD-TRAILER|x-amz-date:20231017T100825Z|x-amz-decoded-content-length:60126|x-amz-meta-evmetadata:<EnterpriseVaultFile Version="1" FileIdentifier="610DC4D48ED73E1ABB1F5D05C8050CC1~28~66377FE9~00~1" FileType="DVSSP" ArchivedDateUTC="45216.4215277778" PartitionEntryId="122C863DFB567C94D9D818E5FE00F76851q10000EV-EVProduct-Server1"/>|x-amz-sdk-checksum-algorithm:CRC32|x-amz-storage-class:STANDARD|x-amz-trailer:x-amz-checksum-crc32|
1147	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Signed Headers value:amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-type;host;transfer-encoding;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-meta-evmetadata;x-amz-sdk-checksum-algorithm;x-amz-storage-class;x-amz-trailer
1148	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Canonical Request String: PUT|/22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9|amz-sdk-invocation-id:B56073AC-DD2A-44FA-8D33-42953E37C476|amz-sdk-request:attempt=1|content-encoding:aws-chunked|content-type:application/octet-stream|host:janitacnonworm.s3.ap-south-1.amazonaws.com|transfer-encoding:chunked|x-amz-content-sha256:STREAMING-UNSIGNED-PAYLOAD-TRAILER|x-amz-date:20231017T100825Z|x-amz-decoded-content-length:60126|x-amz-meta-evmetadata:<EnterpriseVaultFile Version="1" FileIdentifier="610DC4D48ED73E1ABB1F5D05C8050CC1~28~66377FE9~00~1" FileType="DVSSP" ArchivedDateUTC="45216.4215277778" PartitionEntryId="122C863DFB567C94D9D818E5FE00F76851q10000EV-EVProduct-Server1"/>|x-amz-sdk-checksum-algorithm:CRC32|x-amz-storage-class:STANDARD|x-amz-trailer:x-amz-checksum-crc32|amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-type;host;transfer-encoding;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-meta-evmetadata;x-amz-sdk-checksum-algorithm;x-amz-storage-class;x-amz-trailer|STREAMING-UNSIGNED-PAYLOAD-TRAILER
1149	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Final String to sign: AWS4-HMAC-SHA256|20231017T100825Z|20231017/ap-south-1/s3/aws4_request|e7da88fddc3d3a32ff1b63e461b621cb45fd1dee8a37032f09be65c21904bccf
1150	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Final computed signing hash: e3bf44ceca50a065352725f9682438f916a79f43d14a2d3349de9c0b3c9c41e2
1151	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSAuthV4Signer} [Debug] Signing request with: AWS4-HMAC-SHA256 Credential=AKIA4TI2VRXXASSLYIYD/20231017/ap-south-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-type;host;transfer-encoding;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-meta-evmetadata;x-amz-sdk-checksum-algorithm;x-amz-storage-class;x-amz-trailer, Signature=e3bf44ceca50a065352725f9682438f916a79f43d14a2d3349de9c0b3c9c41e2
1152	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSClient} [Debug] Request Successfully signed
1153	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Trace] Making PUT request to uri https://janitacnonworm.s3.ap-south-1.amazonaws.com/22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9
1154	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Info] Attempting to acquire connection for janitacnonworm.s3.ap-south-1.amazonaws.com:443
1155	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Debug] Pool doesn't exist for endpoint, creating...
1156	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Debug] Pool has no available existing connections for endpoint, attempting to grow pool.
1157	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinConnectionContainer} [Info] Pool grown by 2
1158	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Info] Connection now available, continuing.
1159	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Debug] Returning connection handle 05071580
1160	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Acquired connection 05071580
1161	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Warn] Failed setting TCP keep-alive interval with error code: 12018
1162	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpHttp2} [Debug] HTTP/2 enabled on WinHttp handle: 06404E40.
1163	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] AllocateWindowsHttpRequest returned handle 06404E40
1164	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] with headers:
1165	15:38:25.845	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] amz-sdk-invocation-id: B56073AC-DD2A-44FA-8D33-42953E37C476|amz-sdk-request: attempt=1|authorization: AWS4-HMAC-SHA256 Credential=AKIA4TI2VRXXASSLYIYD/20231017/ap-south-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-type;host;transfer-encoding;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-meta-evmetadata;x-amz-sdk-checksum-algorithm;x-amz-storage-class;x-amz-trailer, Signature=e3bf44ceca50a065352725f9682438f916a79f43d14a2d3349de9c0b3c9c41e2|content-encoding: aws-chunked|content-type: application/octet-stream|host: janitacnonworm.s3.ap-south-1.amazonaws.com|transfer-encoding: chunked|user-agent: APN/1.0 Veritas/1.0 EnterpriseVault/15.0 api/S3 ft/s3-transfer|x-amz-content-sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER|x-amz-date: 20231017T100825Z|x-amz-decoded-content-length: 60126|x-amz-meta-evmetadata: <EnterpriseVaultFile Version="1" FileIdentifier="610DC4D48ED73E1ABB1F5D05C8050CC1~28~66377FE9~00~1" FileType="DVSSP" ArchivedDateUTC="45216.4215277778" PartitionEntryId="122C863DFB567C94D9D818E5FE00F76851q10000EV-EVProduct-Server1"/>|x-amz-sdk-checksum-algorithm: CRC32|x-amz-storage-class: STANDARD|x-amz-trailer: x-amz-checksum-crc32|
1166	15:38:25.892	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 8200 bytes transferred for part [1].
1167	15:38:25.892	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 16400 bytes transferred for part [1].
1168	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 24600 bytes transferred for part [1].
1169	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 32800 bytes transferred for part [1].
1170	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 41000 bytes transferred for part [1].
1171	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 49200 bytes transferred for part [1].
1172	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 57400 bytes transferred for part [1].
1173	15:38:25.908	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] 60252 bytes transferred for part [1].
1174	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Received response code 403
1175	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Received content type application/xml
1176	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Received headers:
1177	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] HTTP/1.1 403 Forbidden|Connection: close|Date: Tue, 17 Oct 2023 10:08:26 GMT|Transfer-Encoding: chunked|Content-Type: application/xml|Server: AmazonS3|x-amz-request-id: CQY4E20AM11S74NR|x-amz-id-2: p/ji6GAfS1LdXZlvAvMIVSo1Icng7f28q93dO+1x7oK3XkaqflJGWMos+OABQAny+ZDu3qpXfek=|
1178	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Closing http request handle 06404E40
1179	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpSyncHttpClient} [Debug] Releasing connection handle 05071580
1180	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::WinHttpConnectionPoolMgr} [Debug] Releasing connection to endpoint janitacnonworm.s3.ap-south-1.amazonaws.com:443
1181	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSClient} [Debug] Request returned error. Attempting to generate appropriate error codes from response
1182	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSErrorMarshaller} [Trace] Error response is <?xml version="1.0"?>|<?xml version="1.0" encoding="UTF-8"?>|<Error>|    <Code>InvalidChunkSizeError</Code>|    <Message>Only the last chunk is allowed to have a size less than 8192 bytes</Message>|    <Chunk>2</Chunk>|    <BadChunkSize>8184</BadChunkSize>|    <RequestId>CQY4E20AM11S74NR</RequestId>|    <HostId>p/ji6GAfS1LdXZlvAvMIVSo1Icng7f28q93dO+1x7oK3XkaqflJGWMos+OABQAny+ZDu3qpXfek=</HostId>|</Error>|
1183	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSErrorMarshaller} [Warn] Encountered Unknown AWSError 'InvalidChunkSizeError': Only the last chunk is allowed to have a size less than 8192 bytes
1184	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSXmlClient} [Error] HTTP response code: 403|Resolved remote host IP address: |Request ID: CQY4E20AM11S74NR|Exception name: InvalidChunkSizeError|Error message: Unable to parse ExceptionName: InvalidChunkSizeError Message: Only the last chunk is allowed to have a size less than 8192 bytes|7 response headers:|connection : close|content-type : application/xml|date : Tue, 17 Oct 2023 10:08:26 GMT|server : AmazonS3|transfer-encoding : chunked|x-amz-id-2 : p/ji6GAfS1LdXZlvAvMIVSo1Icng7f28q93dO+1x7oK3XkaqflJGWMos+OABQAny+ZDu3qpXfek=|x-amz-request-id : CQY4E20AM11S74NR
1185	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSClient} [Warn] If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
1186	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::AWSClient} [Debug] Server time is Tue, 17 Oct 2023 10:08:26 GMT, while client time is Tue, 17 Oct 2023 10:08:25 GMT
1187	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Error] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Failed to upload object to Bucket: [janitacnonworm] with Key: [22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9] HTTP response code: 403|Resolved remote host IP address: |Request ID: CQY4E20AM11S74NR|Exception name: InvalidChunkSizeError|Error message: Unable to parse ExceptionName: InvalidChunkSizeError Message: Only the last chunk is allowed to have a size less than 8192 bytes|7 response headers:|connection : close|content-type : application/xml|date : Tue, 17 Oct 2023 10:08:26 GMT|server : AmazonS3|transfer-encoding : chunked|x-amz-id-2 : p/ji6GAfS1LdXZlvAvMIVSo1Icng7f28q93dO+1x7oK3XkaqflJGWMos+OABQAny+ZDu3qpXfek=|x-amz-request-id : CQY4E20AM11S74NR
1188	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Debug] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Setting part [1] to [FAILED].
1189	15:38:25.923	 [8956]	(StorageArchive)	<5316>	EV:H	{AwsSdk::TransferManager} [Info] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Updated handle status from [IN_PROGRESS] to [FAILED].
1190	15:38:25.923	 [8956]	(StorageArchive)	<9232>	EV:H	{AwsSdk::TransferManager} [Info] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Attempting to abort multipart upload.
1191	15:38:25.923	 [8956]	(StorageArchive)	<9232>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle ID [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Cancelling transfer.
1192	15:38:25.923	 [8956]	(StorageArchive)	<9232>	EV:H	{CS3Storage::WriteObjectToStorage:#693} Error while uploading object. Exception [HTTP response code: 403|Resolved remote host IP address: |Request ID: CQY4E20AM11S74NR|Exception name: InvalidChunkSizeError|Error message: Unable to parse ExceptionName: InvalidChunkSizeError Message: Only the last chunk is allowed to have a size less than 8192 bytes|7 response headers:|connection : close|content-type : application/xml|date : Tue, 17 Oct 2023 10:08:26 GMT|server : AmazonS3|transfer-encoding : chunked|x-amz-id-2 : p/ji6GAfS1LdXZlvAvMIVSo1Icng7f28q93dO+1x7oK3XkaqflJGWMos+OABQAny+ZDu3qpXfek=|x-amz-request-id : CQY4E20AM11S74NR]
1193	15:38:25.923	 [8956]	(StorageArchive)	<7064>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Waiting on handle to abort upload. In Bucket: [janitacnonworm] with Key: [22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9] with Upload ID: [].
1194	15:38:25.923	 [8956]	(StorageArchive)	<7064>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Finished waiting on handle. In Bucket: [janitacnonworm] with Key: [22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9] with Upload ID: [].
1195	15:38:25.939	 [8956]	(StorageArchive)	<7064>	EV:H	{AwsSdk::TransferManager} [Trace] Transfer handle [07F66FE0-4F4F-40F2-B2AB-CE67DECB8B49] Status changed to FAILED after waiting for cancel status. In Bucket: [janitacnonworm] with Key: [22C863DFB567C94D9D818E5FE00F7685/2023/10-17/a/54f/a54f205d-becf-4d34-91ef-d4688f2812e9] with Upload ID: [].
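For context on the error above: the signed headers show the request was sent with `content-encoding: aws-chunked` and `x-amz-content-sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER`, and S3 requires every chunk except the final one to be exactly the declared chunk size (8192 bytes here); the response reports that chunk 2 arrived with only 8184 bytes. A minimal sketch of that server-side rule (hypothetical helper names, standard library only):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Split a payload of `total` bytes into aws-chunked chunk sizes.
// S3 requires every chunk except the last to be exactly `chunkSize` bytes;
// a short chunk in the middle triggers InvalidChunkSizeError.
std::vector<std::size_t> chunkSizes(std::size_t total, std::size_t chunkSize) {
    std::vector<std::size_t> sizes;
    while (total > 0) {
        std::size_t n = std::min(total, chunkSize);
        sizes.push_back(n);
        total -= n;
    }
    return sizes;
}

// Mimic the server-side check described by the error message.
bool validChunking(const std::vector<std::size_t>& sizes, std::size_t chunkSize) {
    for (std::size_t i = 0; i + 1 < sizes.size(); ++i) {
        if (sizes[i] != chunkSize) return false;  // only the last may be short
    }
    return true;
}
```

For the 60126-byte upload in the log, a well-formed body would be seven full 8192-byte chunks followed by a 2782-byte final chunk; the short 8184-byte chunk in the middle of the failing request is what the server rejects.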

AWS CPP SDK version used

1.11.178

Compiler and Version used

Visual Studio 2017

Operating System and version

Windows Server 2016

sbiscigl commented

> Take file of any size and try to upload it using aws cpp transfer manager APIs with AWS CPP SDK version 1.11.178.

I'm able to run the transfer manager and upload files with the following code:

#include <aws/core/Aws.h>
#include <aws/core/utils/threading/Executor.h>
#include <aws/transfer/TransferManager.h>
#include <aws/s3/S3Client.h>
#include <iostream>
#include <thread>

using namespace std;
using namespace Aws;
using namespace Aws::Utils;
using namespace Aws::S3;
using namespace Aws::Transfer;

int main() {
  Aws::SDKOptions options;
  options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Debug;

  Aws::InitAPI(options);
  {
    S3ClientConfiguration config({}, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never);
    auto s3Client = Aws::MakeShared<Aws::S3::S3Client>("S3Client", config);
    auto executor = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("executor", std::thread::hardware_concurrency() - 1);
    Aws::Transfer::TransferManagerConfiguration managerConfiguration(executor.get());
    managerConfiguration.s3Client = s3Client;
    managerConfiguration.checksumAlgorithm = S3::Model::ChecksumAlgorithm::NOT_SET;
    managerConfiguration.transferInitiatedCallback = [](const TransferManager *,
        const std::shared_ptr<const TransferHandle> &) -> void {};
    managerConfiguration.errorCallback = [](const TransferManager *,
        const std::shared_ptr<const TransferHandle> &,
        const Aws::Client::AWSError<Aws::S3::S3Errors> &error) -> void {
        std::cout << "Transfer failed!\n";
        std::cout << "Error: " << error.GetMessage() << "\n";
    };
    auto transferManager = Aws::Transfer::TransferManager::Create(managerConfiguration);

    transferManager->UploadFile("/your/file.txt",
        "your-bucket",
        "keyname.txt",
        "text/plain",
        {});

    transferManager->WaitUntilAllFinished();
  }
  Aws::ShutdownAPI(options);
  return 0;
}

In addition, I ran all of the transfer manager tests and they seem to be passing. Can you give me any more details on what you are trying to upload, or about the setup of your environment, so I can try to replicate it?

as6432 commented

We are trying to upload a file using a stream (not passing a file path directly to the API) from a Windows machine. We are also providing metadata to the transfer manager.

In addition, we create the S3Client object with an explicitly provided S3EndPointProvider object.

Please confirm and let me know: does it work at your end in this case?

sbiscigl commented

> Please confirm and let me know in this case does it works at your end ?

Could you provide a code example, perhaps by editing the one above to create a scenario that fails for you? If you can provide code that fails, it goes a long way toward helping me replicate it; otherwise I'm kind of in the dark looking for the issue.

as6432 commented

@sbiscigl I have uploaded code for AWS C++ SDK versions 1.9.257 and 1.11.178 in this zip file: https://github.com/as6432/AWS_S3/blob/main/TransferManagerIssue.zip. You can change the SDK version used by the project by modifying the AwsCPPSDKVersion parameter value in AWSS3.vcxproj. Also provide the AWS access key, secret key, bucket name, file path, and region in main.cpp.

With AWS SDK version 1.9.257 it works fine, but the same code fails with AWS SDK version 1.11.178. Please let me know if you need any additional information from me.

as6432 commented

@sbiscigl Any update on this issue?

sbiscigl commented

@as6432 I'm hesitant to download a zip file for security reasons. Is it possible to post just the plain .cpp file on this issue, preferably by editing my earlier code?

as6432 commented

@sbiscigl I have pasted the code from the above zip. Please try this code.

main.cpp

#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/CreateMultipartUploadRequest.h>
#include <aws/s3/model/GetBucketLocationRequest.h>
#include <aws/s3/model/GetBucketLocationResult.h>
#include <aws/core/Aws.h>
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/core/utils/threading/Executor.h>
#include <aws/transfer/TransferManager.h>
#include <atlbase.h>   // CComPtr
#include <atlstr.h>    // CStringA / CStringW
#include <shlwapi.h>   // SHCreateStreamOnFileEx
#include <iostream>
#include "UnbufferedStreamSyncBuf.h"

using namespace Aws::S3;
using namespace std;

static const CStringA EV_METADATA_KEY = "evmetadata";
// Change below parameter as per your environment
static const char* AWS_ACCESS_KEY_ID        = "ACCESS_KEY";
static const char* AWS_SECRET_ACCESS_KEY    = "SECRET_ACCESS_KEY";
static const char* BUCKET                   = "BUCKET";
static const char* AWS_BUCKET_REGION        = "REGION";
static const char* FILE_PATH_TO_UPLOAD      = "E:\\POCs\\TransferManagerIssue\\InputFiles\\IBMNotesInstall.log";


#pragma region Internal Error Codes
#define CLOUD_E_INVALID_CREDENTIALS         0xC00471CFL
#define CLOUD_E_AWS_BUCKET_NOT_FOUND        0xC00471D0L
#define CLOUD_E_NETWORK_CONNECTIVITY_ISSUE  0xC00471D1L
#define STORAGE_E_ACCESS_DENIED             0xC0041801L
#define CLOUD_E_AWS_RESOURCE_NOT_FOUND      0xC00471D2L
#pragma endregion  

#pragma region Internal Methods

HRESULT GenerateStoreIdentifier(CStringA &csUUID)
{
#pragma comment(lib,"Rpcrt4.lib")

    GUID Uuid;
    if (RPC_S_UUID_NO_ADDRESS == ::UuidCreate(&Uuid))
    {
        return HRESULT_FROM_WIN32(RPC_S_UUID_NO_ADDRESS);
    }

    // Convert the UUID to a string
    unsigned char *StringUuid(NULL);
    if (RPC_S_OUT_OF_MEMORY == ::UuidToStringA(&Uuid, &StringUuid))
    {
        return HRESULT_FROM_WIN32(RPC_S_OUT_OF_MEMORY);
    }

    if (!StringUuid)
    {
        return E_POINTER;
    }

    csUUID = StringUuid;
    ::RpcStringFreeA(&StringUuid);

    return S_OK;
}

HRESULT MapAWSErrorsToHResult(const std::shared_ptr<Aws::Transfer::TransferHandle> &outcome, CStringW& csErrorDescription)
{
    Aws::StringStream ss;
    ss << outcome->GetLastError();
    csErrorDescription = ss.str().c_str();
    Aws::S3::S3Errors errType = outcome->GetLastError().GetErrorType();

    HRESULT hr = S_OK;
    switch (errType)
    {
    case Aws::S3::S3Errors::INVALID_ACCESS_KEY_ID:
    case Aws::S3::S3Errors::INVALID_SIGNATURE:
    case Aws::S3::S3Errors::SIGNATURE_DOES_NOT_MATCH:
        hr = CLOUD_E_INVALID_CREDENTIALS;
        break;
    case Aws::S3::S3Errors::NO_SUCH_BUCKET:
        hr = CLOUD_E_AWS_BUCKET_NOT_FOUND;
        break;
    case Aws::S3::S3Errors::NETWORK_CONNECTION:
        hr = CLOUD_E_NETWORK_CONNECTIVITY_ISSUE;
        break;
    case Aws::S3::S3Errors::ACCESS_DENIED:
        hr = STORAGE_E_ACCESS_DENIED;
        break;
    case Aws::S3::S3Errors::RESOURCE_NOT_FOUND:
    case Aws::S3::S3Errors::NO_SUCH_KEY:
        hr = CLOUD_E_AWS_RESOURCE_NOT_FOUND;
        break;
    default:
        hr = E_FAIL;
        break;
    }
    return hr;
}

template <typename R, typename E>
HRESULT MapAWSErrorsToHResult(const Aws::Utils::Outcome <R, E>& outcome, CStringW& csErrorDescription)
{
    csErrorDescription = outcome.GetError().GetMessage().c_str();
    Aws::S3::S3Errors errType = outcome.GetError().GetErrorType();

    HRESULT hr = S_OK;
    switch (errType)
    {
    case Aws::S3::S3Errors::INVALID_ACCESS_KEY_ID:
    case Aws::S3::S3Errors::INVALID_SIGNATURE:
    case Aws::S3::S3Errors::SIGNATURE_DOES_NOT_MATCH:
        hr = CLOUD_E_INVALID_CREDENTIALS;
        break;
    case Aws::S3::S3Errors::NO_SUCH_BUCKET:
        hr = CLOUD_E_AWS_BUCKET_NOT_FOUND;
        break;
    case Aws::S3::S3Errors::NETWORK_CONNECTION:
        hr = CLOUD_E_NETWORK_CONNECTIVITY_ISSUE;
        break;
    case Aws::S3::S3Errors::ACCESS_DENIED:
        hr = STORAGE_E_ACCESS_DENIED;
        break;
    case Aws::S3::S3Errors::RESOURCE_NOT_FOUND:
    case Aws::S3::S3Errors::NO_SUCH_KEY:
        hr = CLOUD_E_AWS_RESOURCE_NOT_FOUND;
        break;
    default:
        hr = E_FAIL;
        break;
    }
    return hr;
}

#pragma endregion

HRESULT WriteObjectToStorage(const CComPtr<IStream>& spStream,
    const CStringA& csMetadata,
    ULONGLONG nTotalBytesInput,
    ULONGLONG& nTotalBytesWritten,
    CStringW& csErrorDescription)
{
    HRESULT hr = S_OK;
    CStringA csKey;
    GenerateStoreIdentifier(csKey);

    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::String userName = AWS_ACCESS_KEY_ID;
        Aws::String password = AWS_SECRET_ACCESS_KEY;
        Aws::Auth::AWSCredentials creds(userName, password);

        // Get bucket's region
        Aws::Client::ClientConfiguration configuration;
        configuration.region = "us-east-1";
        S3Client client(creds, configuration, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never, true);
        Aws::S3::Model::GetBucketLocationRequest getBucketLocationRequest;
        getBucketLocationRequest.SetBucket(Aws::String(BUCKET));
        Aws::S3::Model::GetBucketLocationOutcome getBucketLocationOutcome = client.GetBucketLocation(getBucketLocationRequest);

        if (!getBucketLocationOutcome.IsSuccess())
            std::cout << "Error";
        else
        {
            Aws::S3::Model::BucketLocationConstraint bucketLocationConstraint =
                getBucketLocationOutcome.GetResult().GetLocationConstraint();

            Aws::String bucketRegion = Aws::S3::Model::BucketLocationConstraintMapper::GetNameForBucketLocationConstraint(bucketLocationConstraint);            
            configuration.region = bucketRegion;
        }

        // Use Transfer manager to upload stream
        auto bufferFromStream = Aws::MakeShared<UnbufferedStreamSyncBuf>("ALLOCATION_TAG", spStream, std::ios_base::in);
        auto requestStream = Aws::MakeShared<Aws::IOStream>("ALLOCATION_TAG", bufferFromStream.get());
        auto exec = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("ALLOCATION_TAG", 5);
        Aws::Transfer::TransferManagerConfiguration transferConfig(exec.get());
        transferConfig.s3Client = Aws::MakeShared<Aws::S3::S3Client>("ALLOCATION_TAG", creds, configuration, Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never, true);
        transferConfig.computeContentMD5 = false;
        transferConfig.bufferSize = 5 * 1024 * 1024;
        transferConfig.transferBufferMaxHeapSize = transferConfig.bufferSize * 5;
        if (nTotalBytesInput < transferConfig.bufferSize)
        {
            Aws::S3::Model::PutObjectRequest putObjectRequest;
            putObjectRequest.WithBucket(BUCKET)
                .WithStorageClass(Aws::S3::Model::StorageClass::STANDARD);

            transferConfig.putObjectTemplate = putObjectRequest;
        }
        else
        {
            Aws::S3::Model::CreateMultipartUploadRequest multipartUploadRequest;
            multipartUploadRequest.WithBucket(BUCKET)
                .WithStorageClass(Aws::S3::Model::StorageClass::STANDARD);

            transferConfig.createMultipartUploadTemplate = multipartUploadRequest;
        }

        //transferConfig.transferStatusUpdatedCallback =
        //    [&](const Aws::Transfer::TransferManager*, const std::shared_ptr<const Aws::Transfer::TransferHandle>& handle)
        //{
        //    csMessage.Format(L"%s(): Transfer Status = %s", csMethod, handle->GetStatus());
        //    m_ptrStreamerObject->TraceMessage(csMessage);
        //};



        //transferConfig.uploadProgressCallback =
        //    [&](const Aws::Transfer::TransferManager*, const std::shared_ptr<const Aws::Transfer::TransferHandle>& handle)
        //{
        //    csMessage.Format(L"%s(): Upload Progress: %I64u of %I64u bytes", csMethod, handle->GetBytesTransferred(), handle->GetBytesTotalSize());
        //    m_ptrStreamerObject->TraceMessage(csMessage);
        //};

        auto transferManager = Aws::Transfer::TransferManager::Create(transferConfig);

        auto metadata = Aws::Map<Aws::String, Aws::String>();
        metadata.emplace(Aws::String(EV_METADATA_KEY), Aws::String(csMetadata));

        auto transferHandle = transferManager->UploadFile(requestStream, Aws::String(BUCKET), Aws::String(csKey),
            "application/octet-stream", metadata);

        transferHandle->WaitUntilFinished();

        auto status = transferHandle->GetStatus();

        if (status == Aws::Transfer::TransferStatus::COMPLETED)
        {
            hr = S_OK;

            size_t nParts = transferHandle->GetCompletedParts().size();
            std::cout << "Object uploaded successfully with Number of parts " << nParts << endl;
            nTotalBytesWritten = transferHandle->GetBytesTransferred();
            std::cout << "Status " << (int)status << " TotalBytesWritten " << nTotalBytesWritten << endl;
        }
        else
        {
            transferManager->AbortMultipartUpload(transferHandle);
            hr = MapAWSErrorsToHResult(transferHandle, csErrorDescription);

            //std::wcout << L"Error while uploading object. Exception " << csErrorDescription.GetString() << endl;
        }
    }
    Aws::ShutdownAPI(options);
    return hr;
}


int main(int argc, char* argv[])
{

    CStringA csFilePath = FILE_PATH_TO_UPLOAD;
    CStringA csMetadata = "EnterpriseVault File";
    CStringW csErrorDescription;

    // Open file as stream
    CComPtr<IStream> spFileStm;
    HRESULT hr = ::SHCreateStreamOnFileEx(CString(csFilePath), STGM_READWRITE | STGM_SHARE_EXCLUSIVE, FILE_ATTRIBUTE_NORMAL, FALSE, NULL, &spFileStm);
    if(FAILED(hr))
    {
        std::wcout << L"Failed to open stream on file [" << csFilePath.GetString() << L"]" << std::endl;
        exit(1);
    }
    // Get size of stream
    STATSTG statstg;
    memset(&statstg, 0, sizeof(statstg));
    hr = spFileStm->Stat(&statstg, STATFLAG_NONAME);
    if (FAILED(hr))
    {
        std::wcout << L"Failed to retrieve size of stream [" << csFilePath.GetString() << L"]" << std::endl;
        exit(1);
    }

    ULONGLONG nTotalBytesInput = statstg.cbSize.QuadPart;
    ULONGLONG nTotalBytesWritten(0);

    std::wcout << L"Size of file [" << csFilePath.GetString() << L"] " << nTotalBytesInput << L" to be uploaded" << std::endl << std::endl;

    hr = WriteObjectToStorage(spFileStm, csMetadata, nTotalBytesInput, nTotalBytesWritten, csErrorDescription);
    if (FAILED(hr))
    {
        std::wcout << csFilePath << L" upload failed with error " << csErrorDescription.GetString() << std::endl;
    }
    else
    {
        std::wcout << csFilePath << L" uploaded successfully." << std::endl;
    }
    spFileStm.Release();
}
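As an aside on the reproduction code above: WriteObjectToStorage selects putObjectTemplate for streams smaller than one transfer buffer and createMultipartUploadTemplate otherwise, which matches the "Scheduling a single-part upload" line in the log for the 60126-byte file. A sketch of that decision with hypothetical helpers:

```cpp
#include <cstdint>

// Mirrors the branch in WriteObjectToStorage: uploads smaller than one
// transfer buffer go through PutObject, larger ones through multipart.
bool usesMultipart(std::uint64_t totalBytes, std::uint64_t bufferSize) {
    return totalBytes >= bufferSize;
}

// Number of parts a multipart upload would be split into (ceiling division).
std::uint64_t partCount(std::uint64_t totalBytes, std::uint64_t bufferSize) {
    return (totalBytes + bufferSize - 1) / bufferSize;
}
```

With the 5 MiB bufferSize from the code, the 60126-byte log file takes the single-part PutObject path, so the failure occurs before any multipart machinery is involved.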

UnbufferedStreamSyncBuf.cpp

#include "UnbufferedStreamSyncBuf.h"
#include <algorithm>

UnbufferedStreamSyncBuf::UnbufferedStreamSyncBuf(const CComPtr<IStream>& spStream, std::ios_base::openmode which)
    : std::streambuf(), pStream(spStream), bReadOnly(!(which & std::ios_base::out))
{

}

UnbufferedStreamSyncBuf::~UnbufferedStreamSyncBuf()
{
    if (!bReadOnly)
        UnbufferedStreamSyncBuf::sync();
}

bool UnbufferedStreamSyncBuf::backup()
{
    LARGE_INTEGER liMove;
    liMove.QuadPart = -1LL;
    bool bReturnVal = false;

    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);
    if (SUCCEEDED(pStream->Seek(liMove, STREAM_SEEK_CUR, NULL)))
    {
        bReturnVal = true;
    }
    return bReturnVal;
}

int UnbufferedStreamSyncBuf::overflow(int ch)
{
    int nReturnVal = traits_type::eof();
    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    if (ch != traits_type::eof())
    {
        // Narrow to char before writing; writing &ch directly would emit one
        // byte of the int, which is endianness-dependent.
        char c = traits_type::to_char_type(ch);
        if (SUCCEEDED(pStream->Write(&c, 1, NULL)))
        {
            nReturnVal = ch;
        }
    }
    return nReturnVal;
}

int UnbufferedStreamSyncBuf::underflow()
{
    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    int ch = UnbufferedStreamSyncBuf::uflow();
    if (ch != traits_type::eof())
    {
        ch = backup() ? ch : traits_type::eof();
    }
    return ch;
}

int UnbufferedStreamSyncBuf::uflow()
{
    int nReturnVal;
    char ch;
    ULONG cbRead;

    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    // S_OK with one byte read means success; S_FALSE indicates end of stream.
    if (S_OK == pStream->Read(&ch, 1, &cbRead) && cbRead == 1)
        nReturnVal = traits_type::to_int_type(ch); // avoid sign-extension colliding with eof()
    else
        nReturnVal = traits_type::eof();

    return nReturnVal;
}

int UnbufferedStreamSyncBuf::pbackfail(int ch)
{
    if (ch != traits_type::eof())
    {
        ch = backup() ? ch : traits_type::eof();
    }
    return ch;
}

int UnbufferedStreamSyncBuf::sync()
{
    int nReturnVal = 0;

    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    if (!SUCCEEDED(pStream->Commit(STGC_DEFAULT)))
        nReturnVal = -1;

    return nReturnVal;
}

std::ios::streampos UnbufferedStreamSyncBuf::seekpos(std::ios::streampos sp, std::ios_base::openmode)
{
    std::ios::streampos posToReturn = -1;
    LARGE_INTEGER liMove;
    liMove.QuadPart = sp;

    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    if (SUCCEEDED(pStream->Seek(liMove, STREAM_SEEK_SET, NULL)))
        posToReturn = sp;

    return posToReturn;
}

std::streampos UnbufferedStreamSyncBuf::seekoff(std::streamoff off, std::ios_base::seekdir way, std::ios_base::openmode)
{
    std::ios::streampos posToReturn = -1;
    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    STREAM_SEEK sk;
    switch (way)
    {
    case std::ios_base::beg: sk = STREAM_SEEK_SET; break;
    case std::ios_base::cur: sk = STREAM_SEEK_CUR; break;
    case std::ios_base::end: sk = STREAM_SEEK_END; break;
    default: return -1;
    }
    LARGE_INTEGER liMove;
    liMove.QuadPart = static_cast<LONGLONG>(off);
    ULARGE_INTEGER uliNewPos;

    if (SUCCEEDED(pStream->Seek(liMove, sk, &uliNewPos)))
    {
        posToReturn = static_cast<std::streampos>(uliNewPos.QuadPart);
    }

    return posToReturn;
}

std::streamsize UnbufferedStreamSyncBuf::xsgetn(char *s, std::streamsize n)
{
    std::streamsize sizeToReturn = 0;

    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    ULONG cbRead;
    if (SUCCEEDED(pStream->Read(s, static_cast<ULONG>(n), &cbRead)))
    {
        sizeToReturn = static_cast<std::streamsize>(cbRead);
    }

    return sizeToReturn;
}

std::streamsize UnbufferedStreamSyncBuf::xsputn(const char *s, std::streamsize n)
{
    std::streamsize sizeToReturn = 0;
    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    ULONG cbWritten;

    if (SUCCEEDED(pStream->Write(s, static_cast<ULONG>(n), &cbWritten)))
    {
        sizeToReturn = static_cast<std::streamsize>(cbWritten);
    }

    return sizeToReturn;
}

std::streamsize UnbufferedStreamSyncBuf::showmanyc()
{
    std::streamsize sizeToReturn = 0;
    CComCritSecLock<CComAutoCriticalSection> lock(m_CS);

    STATSTG stat;
    if (SUCCEEDED(pStream->Stat(&stat, STATFLAG_NONAME)))
    {
        // Bytes available = total size - current position (no off-by-one).
        std::streampos endPos = static_cast<std::streampos>(stat.cbSize.QuadPart);
        LARGE_INTEGER liMove;
        liMove.QuadPart = 0LL;
        ULARGE_INTEGER uliNewPos;
        // A zero-offset seek from the current position reports where we are.
        if (SUCCEEDED(pStream->Seek(liMove, STREAM_SEEK_CUR, &uliNewPos)))
        {
            sizeToReturn = std::max<std::streamsize>(endPos - static_cast<std::streampos>(uliNewPos.QuadPart), 0);
        }
    }

    return sizeToReturn;
}

UnbufferedStreamSyncBuf.h

#pragma once
#include <ios>
#include <atlstr.h>
#include "atlbase.h"
using namespace ATL;

// Read-write unbuffered streambuf implementation which uses no
// internal buffers (adapted Matt Austern's example from http://www.drdobbs.com/184401305)
class UnbufferedStreamSyncBuf : public std::streambuf
{
public:
    UnbufferedStreamSyncBuf(const CComPtr<IStream>& spStream, std::ios_base::openmode which = std::ios_base::in | std::ios_base::out);
    ~UnbufferedStreamSyncBuf();

protected:
    virtual int overflow(int ch = traits_type::eof());
    virtual int underflow();
    virtual int uflow();
    virtual int pbackfail(int ch = traits_type::eof());
    virtual int sync();
    virtual std::streampos seekpos(std::streampos sp, std::ios_base::openmode which = std::ios_base::in | std::ios_base::out);
    virtual std::streampos seekoff(std::streamoff off, std::ios_base::seekdir way, std::ios_base::openmode which = std::ios_base::in | std::ios_base::out);
    virtual std::streamsize xsgetn(char *s, std::streamsize n);
    virtual std::streamsize xsputn(const char *s, std::streamsize n);
    virtual std::streamsize showmanyc();

private:
    CComPtr<IStream> pStream;
    CComAutoCriticalSection m_CS;
    bool bReadOnly;
    bool backup();
};

@sbiscigl are you able to reproduce issue with code provided ?

The code you provided is quite large, and it is hard to narrow down the root cause with this many unknowns in the files. For instance, you are uploading the file E:\\POCs\\TransferManagerIssue\\InputFiles\\IBMNotesInstall.log, which I do not have access to.

Looking at your sample code, the core logic, and this statement:

We are trying to upload file using stream(Not passing file directly to API) from Windows machine. Also we are providing metadata to transfer manager.

I updated my example from above to use a file stream

#include <aws/core/Aws.h>
#include <aws/core/utils/threading/Executor.h>
#include <aws/transfer/TransferManager.h>
#include <aws/s3/S3Client.h>
#include <fstream>
#include <iostream>
#include <thread>

using namespace std;
using namespace Aws;
using namespace Aws::Utils;
using namespace Aws::S3;
using namespace Aws::Transfer;

int main()
{
    SDKOptions options;
    options.loggingOptions.logLevel = Logging::LogLevel::Debug;

    InitAPI(options);
    {
        const auto s3Client = Aws::MakeShared<S3Client>("S3Client");
        const auto executor = Aws::MakeShared<Threading::PooledThreadExecutor>(
            "executor", std::thread::hardware_concurrency() - 1);
        TransferManagerConfiguration managerConfiguration(executor.get());
        managerConfiguration.s3Client = s3Client;
        managerConfiguration.transferInitiatedCallback = [](const TransferManager*,
                                                            const std::shared_ptr<const TransferHandle>&) -> void
        {
        };
        managerConfiguration.errorCallback = [](const TransferManager*,
                                                const std::shared_ptr<const TransferHandle>&,
                                                const Client::AWSError<S3Errors>& error) -> void
        {
            std::cout << "Transfer failed!\n";
            std::cout << "Error: " << error.GetMessage() << "\n";
        };
        const auto transferManager = TransferManager::Create(managerConfiguration);
        const auto fileStream = Aws::MakeShared<FStream>("ALLOCATION_TAG",
                                                         R"(C:\Users\Administrator\Desktop\transfer_test\whatever.txt)");

        transferManager->UploadFile(fileStream,
                                    "sbiscigl",
                                    "keyname.txt",
                                    "text/plain",
                                    {});

        transferManager->WaitUntilAllFinished();
    }
    ShutdownAPI(options);
    return 0;
}

Can you please confirm that this example works on your end? And can you mutate the example so that it stops working?

I'm sorry for going back and forth on getting a code example to replicate this, but we need to eliminate "works on my machine" issues so that I can properly fix this.

Also, something worth trying that I would like to see the result of: setting the following configuration parameter

managerConfiguration.checksumAlgorithm = Model::ChecksumAlgorithm::NOT_SET;

I know it's not ideal to change code, but let me know if that works.

@sbiscigl the code given by you is not working; it gives a NoSuchUpload exception. Please check or add anything missing in the code.

Also, regarding the "on my machine issues" you mentioned: we tested the reported issue on another machine and got the same error on every machine.

code given by you is not working

It works for me on my machine, with my file, and I'm trying to narrow down the reasons why it might not be working for you.

It gives NoSuchUpload exception

This doesn't have to do with the code, but rather with the file you are uploading and the buffer size you have allocated for it. We create multipart uploads when your file is larger than the buffer allocated to the transfer manager:

bool TransferManager::MultipartUploadSupported(uint64_t length) const
{
    return length > m_transferConfig.bufferSize &&
            m_transferConfig.s3Client            &&
            m_transferConfig.s3Client->MultipartUploadSupported();
}

How big is the file that you are uploading?

Also, it's unclear from your message: did you try updating the configuration for managerConfiguration.checksumAlgorithm? What was the result? I'm very interested, as that is likely what is causing this and will fix it.

@sbiscigl I have tried the code given by you and tried various combinations of computeMD5 and checksumAlgorithm with different file sizes. Below are the results. We use a template to set the storage class, and we decide the template (putObjectTemplate or createMultipartUploadTemplate) based on the input stream/chunk size: if the file is less than 5 MB we use putObjectTemplate, otherwise we use createMultipartUploadTemplate. Hopefully the templates are not causing the issue.

| File Size | ComputeMD5 | CheckSumAlgorithm | Template | Result |
| --- | --- | --- | --- | --- |
| Small (1 KB) | FALSE | not used (default value) | PutObjectTemplate | Success |
| Medium (388 KB) | FALSE | not used (default value) | PutObjectTemplate | Failure: InvalidChunkSize |
| Large (800 MB) | FALSE | not used (default value) | createMultipartUploadTemplate | Hangs, no output |

| File Size | ComputeMD5 | CheckSumAlgorithm | Template | Result |
| --- | --- | --- | --- | --- |
| Small (1 KB) | not set (default value) | not set (default value) | PutObjectTemplate | Success |
| Medium (388 KB) | not set (default value) | not set (default value) | PutObjectTemplate | Failure: InvalidChunkSizeError |
| Large (800 MB) | TRUE | not set (default value) | createMultipartUploadTemplate | Success |

| File Size | ComputeMD5 | CheckSumAlgorithm | Template | Result |
| --- | --- | --- | --- | --- |
| Small (1 KB) | not set (default value) | not set (default value) | PutObjectTemplate | Success |
| Medium (388 KB) | not set (default value) | ChecksumAlgorithm::NOT_SET | PutObjectTemplate | Failure: SignatureDoesNotMatch ("The request signature we calculated does not match the signature you provided. Check your key and signing method") |
| Large (800 MB) | TRUE | not set (default value) | createMultipartUploadTemplate | Success |

How will computeMD5 work in the case of a bucket with Object Lock enabled? AFAIK computeMD5 = true is mandatory for buckets with Object Lock.

Thank you for all of this; this information is exactly what I need to try to reproduce the issue. I'll see what we can find with this. Bear with us as we follow up on this investigation, as a lot of things are competing for our time right now.

How will computeMD5 work in the case of a bucket with Object Lock enabled? AFAIK computeMD5 = true is mandatory for buckets with Object Lock.

So this has a lot to do with how checksums work in the transfer manager: we recently added a way to specify checksums in the transfer manager, and the default checksum is CRC32.

In short, prior to this change everything used MD5 checksums by default. If computeContentMD5 is set to true, the transfer manager itself calculates the MD5 checksum. So there are a few things at play here that we have to look at:

  • CRC32 checksumming appears to be bugged in some way in the Windows HTTP client. The invalid chunk size error suggests this, because CRC32 checksumming uses a trailing checksum. If you used the curl client you would likely see different results, but we really should figure out why the Windows client behaves this way.
  • Setting ChecksumAlgorithm::NOT_SET should preserve the behavior where MD5 is used by default in the client, but perhaps it doesn't.
  • Setting ComputeMD5 on the transfer manager will use the old checksumming path.

If you want to use the new default CRC32 checksumming, let us look into your test cases, try to replicate them, and see what is going on. Doing this will likely also show why ChecksumAlgorithm::NOT_SET isn't working as intended.

That said, according to your testing, if you set ComputeMD5 to true in the configuration it should work as expected; if not, we can look at that too.

@sbiscigl meanwhile, can you tell us what the default value of ChecksumAlgorithm was in AWS C++ SDK 1.9.257?

In theory it was ChecksumAlgorithm::NOT_SET, which fell back and defaulted to MD5. That is why I suggested using ChecksumAlgorithm::NOT_SET: in theory it should be a "fallback" to the behavior in 1.9.257. This is something I need to look into.

@sbiscigl we made the below changes in the AWS transfer manager code, and we set computeMD5 to true in our POC code irrespective of whether the bucket has Object Lock enabled or not.

…\aws-sdk-cpp\src\aws-cpp-sdk-transfer\source\transfer\TransferManager.cpp

Out of the box SDK code:

OFTB_AWS_Code

Code modified to fix issue:

Modified_AWS_Code

With this fix we are able to upload file streams of all sizes. Please let me know your thoughts on this fix.

Yep, I really like this idea; I will work on getting it in ASAP.

Thanks for your suggestion, testing your above fix.

@sbiscigl @jmklix Let us know the result asap

Greetings,
I'm confronting a more or less similar issue. I am trying to write a function to upload files (essentially .mov, .mp4, and .jpg files) to an S3 bucket asynchronously. My code works fine with small files, but when I tried to upload a large file (3 GB, for example) the function stops uploading asynchronously and my app freezes. My code is:

    m_clientConfig->region = m_deviceData->bucketsRegionMap()[bucketName].toStdString();
    m_s3Client = std::make_shared<Aws::S3::S3Client>(m_creds, *m_clientConfig);

    auto exec = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("ALLOCATION_TAG", 5);

    Aws::Transfer::TransferManagerConfiguration transferConfig(exec.get());
    transferConfig.s3Client = m_s3Client;
    transferConfig.computeContentMD5 = false;
    transferConfig.bufferSize = 5 * 1024 * 1024;
    transferConfig.transferBufferMaxHeapSize = transferConfig.bufferSize * 5;

    Aws::S3::Model::CreateMultipartUploadRequest multipartUploadRequest;
    multipartUploadRequest.WithBucket(bucketName.toStdString()).WithStorageClass(Aws::S3::Model::StorageClass::STANDARD);

    transferConfig.createMultipartUploadTemplate = multipartUploadRequest;

    transferConfig.transferStatusUpdatedCallback =
        [](const Aws::Transfer::TransferManager *,
           const std::shared_ptr<const Aws::Transfer::TransferHandle> &handle) {
            bool success = handle->GetStatus() == Aws::Transfer::TransferStatus::COMPLETED;

            if (!success)
            {
                auto err = handle->GetLastError();
                if (err.GetMessage() != "")
                {
                    std::cout << "File upload failed:  "<< err.GetMessage() << std::endl;
                }
            }
            else
            {
                // Verify that the upload retrieved the expected amount of data.
                assert(handle->GetBytesTotalSize() == handle->GetBytesTransferred());
            }
        };

    transferConfig.uploadProgressCallback =
        [this, id](const Aws::Transfer::TransferManager *,
           const std::shared_ptr<const Aws::Transfer::TransferHandle> &handle) {
            auto progress = ((float)handle->GetBytesTransferred() / (float)handle->GetBytesTotalSize()) * (float)100;
            qDebug() << progress;
            emit m_deviceData->returnProgress(id, progress);
        };

    auto transferManager = Aws::Transfer::TransferManager::Create(transferConfig);
    QString filename = pathLocal.split("/").last();
    auto uploadHandle = transferManager->UploadFile(pathLocal.toStdString(), bucketName.toStdString(),
                                                    filename.toStdString(), "text/plain",
                                                    Aws::Map<Aws::String, Aws::String>());

I'm using Qt Creator to develop the application. Is there an issue with my code? Can anyone help me understand why my upload function stops working asynchronously with large files?

Thank you in advance for your response.

@fmtibaa can you set computeContentMD5 to true and try to upload a large file?

@as6432 thank you for your response. I tried setting computeContentMD5 to true; my app stalls for a bit (around 1-3 seconds) and then the upload starts asynchronously. I don't know why, but maybe my code is not well suited for asynchronous actions...

@sbiscigl @jmklix Always setting computeMD5 = true is slowing down uploads. We used to set it only in the case of S3 Object Lock, where it is mandatory, but now setting it unconditionally causes slowness for buckets that don't have S3 Object Lock as well.

@sbiscigl any update? Also, can you please let me know in which SDK version AWS introduced this change of using checksumAlgorithm when computeMD5 is false?

Sorry for the lack of updates. We are still looking into this; it seems the content size is being calculated incorrectly in some threads.

During my testing only a few files have been failing, so a temporary workaround could be to retry the files that failed to upload. (I'm seeing roughly 1 in 1000 files failing, but I would be interested to see how often it's happening for you.) You could also try a version prior to 1.11.220, as that is the version with this commit.

@jmklix we reported this issue on AWS SDK version 1.11.178. Please tell us the AWS SDK commit where this issue is fixed.

Sorry for the wrong suggestion. We are working on testing this PR, which seems to fix the concurrency issues causing your problems. I will make sure to update you here when it gets released, but you can test it yourself now.

@as6432 sorry for taking the time it did, but I do believe we fixed this for you in pull/2785. It will be released today with the tag 1.11.223. Give that a shot and let us know if it fixes the issue for you. Once again, sorry for the delay, and thanks for working with us.

@sbiscigl Tested on 1.11.223 and it is not working for us. I guess the fix we suggested was not incorporated in the 1.11.223 version.

Hi @as6432 ,

The fix @sbiscigl mentioned is indeed merged and released in version 1.11.223.
The fix covers invalid size/MD5 checksum errors.

Could you please clarify whether you observe the same issues as before (as reported above)?

Best regards,
Sergey

@SergeyRyabinin yes, we are getting the same issue. We have locally fixed the DoSinglePartUpload method in TransferManager.cpp, which worked for us, as we were getting failures for medium-size files. A similar change is not present in 1.11.223; see our suggested fix, #2727 (comment).

Can you try the latest release, 1.11.224?
It includes this PR, which has changes equivalent to your suggested fix.

@jmklix Getting the same issue. Did it work on your side on the Windows platform for a file size less than bufferSize?

@as6432 What error(s) are you getting with the updated version of the SDK?

Greetings! It looks like this issue hasn’t been active in longer than a week. We encourage you to check if this is still an issue in the latest release. Because it has been longer than a week since the last update on this, and in the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to provide a comment or add an upvote to prevent automatic closure, or if the issue is already closed, please feel free to open a new one.

@jmklix the same error as reported.

I was able to reproduce this error on Windows:

InvalidChunkSizeError|Error message: Unable to parse ExceptionName: InvalidChunkSizeError Message: Only the last chunk is allowed to have a size less than...

which was different from the error that we fixed above. This looks like it might be something wrong with the configuration, and we are looking into the cause of this bug. We do have a temporary workaround that uses S3ClientConfiguration instead of Client::ClientConfiguration:

//use this
S3ClientConfiguration s3config;
const auto s3Client = Aws::MakeShared<S3Client>("S3Client", s3config);

//DON'T use this
Client::ClientConfiguration config;
const auto s3Client = Aws::MakeShared<S3Client>("S3Client", config);

Please let me know if this workaround works for you and/or you have any other questions.
