CORTX v0.2.0: creating IAM user and S3 I/O testing
Closed this issue · 18 comments
Hi,
I've been following the cortx-aws-k8s-installation guide, but ran into trouble at section 4.1.
After successfully authenticating with the CORTX credentials, I received the expected "Bearer bf7axxx" token. However, when I used it to send a create-account request, the s3_accounts API returned "404: Not Found":
[root@master cc]# curl -H 'Authorization: Bearer bf7a24a8aac14a8387177f548b34781f' -d '{ "account_name": "gts3account", "account_email": "gt@seagate.com", "password": "Account1!", "access_key": "gregoryaccesskey", "secret_key": "gregorysecretkey" }' https://$CSM_IP:8081/api/v2/s3_accounts --insecure
404: Not Found
Here is how I requested the "Bearer token":
[root@master cc]# curl -v -d '{"username": "cortxadmin", "password": "Cortxadmin@123"}' https://$CSM_IP:8081/api/v2/login --insecure
* About to connect() to 10.107.201.208 port 8081 (#0)
* Trying 10.107.201.208...
* Connected to 10.107.201.208 (10.107.201.208) port 8081 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
* subject: CN=seagate.com,O=Seagate Tech,L=Pune,C=IN
* start date: Feb 18 11:58:25 2021 GMT
* expire date: Feb 16 11:58:25 2031 GMT
* common name: seagate.com
* issuer: CN=seagate.com,O=Seagate Tech,L=Pune,C=IN
> POST /api/v2/login HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.107.201.208:8081
> Accept: */*
> Content-Length: 56
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 56 out of 56 bytes
< HTTP/1.1 200 OK
< Authorization: Bearer bf7a24a8aac14a8387177f548b34781f
< Content-Type: application/json
< Content-Length: 25
< Server: NULL
< Strict-Transport-Security: max-age=63072000; includeSubdomains
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: script-src 'self'; object-src 'self'
< Referrer-Policy: no-referrer, strict-origin-when-cross-origin
< Pragma: no-cache
< Expires: 0
< Cache-control: no-cache, no-store, must-revalidate, max-age=0
< Date: Mon, 21 Mar 2022 04:04:14 GMT
<
* Connection #0 to host 10.107.201.208 left intact
{"reset_password": false}[root@master cc]#
The Kubernetes cluster consists of one master and one worker node (CentOS 8). CORTX is deployed from the latest main branch. The bare-metal machines are not from AWS but from Chameleon.
pods
[root@master cc]# kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-client-rv52r 1/1 Running 0 2d13h
consul-server-0 1/1 Running 0 2d13h
cortx-control-5dc5f7b6-ttbsk 1/1 Running 0 2d13h
cortx-data-node-1-6949c7c88b-8lwlw 3/3 Running 0 2d13h
cortx-ha-679b57d66b-j6vg8 3/3 Running 0 2d13h
cortx-server-node-1-5464b57b76-f2ttc 2/2 Running 0 2d13h
kafka-0 1/1 Running 0 2d13h
openldap-0 1/1 Running 0 2d13h
zookeeper-0 1/1 Running 0 2d13h
services
[root@master cc]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-dns ClusterIP 10.97.150.134 <none> 53/TCP,53/UDP 2d13h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 2d13h
cortx-control-loadbal-svc NodePort 10.107.201.208 <none> 8081:32239/TCP 2d13h
cortx-data-clusterip-svc-node-1 ClusterIP 10.106.110.22 <none> 22003/TCP,29001/TCP,29000/TCP 2d13h
cortx-data-headless-svc-node-1 ClusterIP None <none> <none> 2d13h
cortx-ha-headless-svc ClusterIP None <none> <none> 2d13h
cortx-hax-svc ClusterIP 10.97.236.246 <none> 22003/TCP 2d13h
cortx-io-svc-0 NodePort 10.97.118.40 <none> 8000:32262/TCP,8443:30626/TCP 2d13h
cortx-server-clusterip-svc-node-1 ClusterIP 10.100.128.209 <none> 22003/TCP 2d13h
cortx-server-headless-svc-node-1 ClusterIP None <none> <none> 2d13h
cortx-server-loadbal-svc-node-1 NodePort 10.109.192.150 <none> 8000:32280/TCP,8443:31222/TCP 2d13h
kafka ClusterIP 10.108.103.229 <none> 9092/TCP 2d13h
kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 2d13h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d15h
openldap-svc ClusterIP 10.106.219.194 <none> 389/TCP 2d13h
zookeeper ClusterIP 10.109.117.186 <none> 2181/TCP,2888/TCP,3888/TCP 2d13h
zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 2d13h
Sorry, I might be a little new to CORTX -- I've been trying for a few hours! I would appreciate any suggestions!
Many thanks,
Faradawn
What release version are you using? The guide currently only works for v0.0.22 (see step 3.1).
If you are using anything after v0.0.22 (e.g., v0.1.0), the APIs have changed. There are links to the new APIs in #140. Of note, there are no more "s3 accounts", only IAM users. The AWS guide will need to be updated for v0.1.0+; that just hasn't happened yet.
Hi Keith,
1. Create IAM user - success
I was using the v0.1.0 release -- thanks so much for suggesting issue #140! Following the steps there, I believe I created an IAM user like so:
export CSM_IP=`kubectl get svc cortx-control-loadbal-svc -ojsonpath='{.spec.clusterIP}'`
curl -v -d '{"username": "cortxadmin", "password": "Cortxadmin@123"}' https://$CSM_IP:8081/api/v2/login --insecure
curl -H 'Authorization: Bearer c6a4ee6375554c1c9fc16a91c7aecb29' -d '{ "uid": "12345678", "display_name": "gts3account", "access_key": "gregoryaccesskey", "secret_key": "gregorysecretkey" }' https://$CSM_IP:8081/api/v2/s3/iam/users --insecure
# returned a user JSON object
Then I tested getting the IAM user, which was successful:
curl -H 'Authorization: Bearer 9ce907b269fd405a80d0d1f1f8a5183b' https://$CSM_IP:8081/api/v2/s3/iam/users/12345678 --insecure
# returned a user JSON object
2. Link Bucket - 404
However, when I tried to link a bucket to the user, I got a 404 Not Found:
curl -X PUT -H 'Authorization: Bearer 9ce907b269fd405a80d0d1f1f8a5183b' -d '{ "operation": "link", "arguments": {"uid": "12345678", "bucket": "test-bucket"} }' https://$CSM_IP:8081/api/v2/s3/bucket --insecure
# 404: Not Found
The guide I followed was this Confluence page
3. Test S3 I/O - connection timeout
Here is a description of my cortx-io-svc-0 service:
[root@master cc]# kubectl describe svc cortx-io-svc-0
Name: cortx-io-svc-0
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.97.118.40
IPs: 10.97.118.40
Port: cortx-rgw-http 8000/TCP
TargetPort: 8000/TCP
NodePort: cortx-rgw-http 32262/TCP
Endpoints: 192.168.84.145:8000
Port: cortx-rgw-https 8443/TCP
TargetPort: 8443/TCP
NodePort: cortx-rgw-https 30626/TCP
Endpoints: 192.168.84.145:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tried a few different combinations of NodePort IP and NodePort, and also tried the Endpoints address, but none worked:
NODE_PORT_IP=10.97.118.40 (or Endpoints)
NODE_PORT=32262 (or 30626)
aws s3 mb s3://test-bucket --endpoint-url http://$NODE_PORT_IP:$NODE_PORT
# make_bucket failed: s3://test-bucket Connect timeout on endpoint URL: "http://10.97.118.40:32262/test-bucket"
aws s3 mb s3://test-bucket --endpoint-url http://192.168.84.145:8443
# make_bucket failed: s3://test-bucket Connection was closed before we received a valid response from endpoint URL: "http://192.168.84.145:8443/test-bucket"
The guide I followed is the AWS CLI section of this Confluence page. Did I do something wrong?
4. NodePort IP or Endpoints?
In addition, should we use $NODE_PORT_IP:$NODE_PORT or the Endpoints address to perform S3 I/O? I'm a little unfamiliar with the difference!
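To sketch the difference (all IPs and ports below are examples taken from the service output above; the `kubectl get nodes` jsonpath line is a generic idiom, not from the guide): the ClusterIP and the pod Endpoints are only routable from inside the cluster, while a NodePort is meant to be paired with a node's own IP rather than the ClusterIP.

```shell
# Three ways to address the cortx-io-svc-0 service (values from this cluster):
#   1. ClusterIP + service port:        http://10.97.118.40:8000    (in-cluster only)
#   2. Endpoints (pod IP) + TargetPort: http://192.168.84.145:8000  (in-cluster only)
#   3. Node IP + NodePort:              http://<node-ip>:32262      (reachable on the node network)
# Mixing the ClusterIP with the NodePort (http://10.97.118.40:32262) routes
# nowhere, which matches the connect timeout seen above.

# A node's internal IP can be looked up with a generic jsonpath query:
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
aws s3 mb s3://test-bucket --endpoint-url "http://${NODE_IP}:32262"
```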
I've been trying for a few hours, and any insight would help! Thanks in advance!
Best,
Faradawn
For the convenience of the Seagate development team, this issue has been mirrored in a private Seagate Jira Server: https://jts.seagate.com/browse/CORTX-29704. Note that community members will not be able to access that Jira server but that is not a problem since all activity in that Jira mirror will be copied into this GitHub issue.
Hi @faradawn, thanks for trying out these guides. It looks to me as though you have followed them correctly, and I am not sure what is going wrong. @abhijit1patil, can you assist with the link-bucket issue @faradawn is having?
@faradawn Thanks for trying out the guide.
For user documentation, please follow the link below:
https://seagate-systems.atlassian.net/wiki/spaces/PUB/pages/931922025/IAM+User+API+Specifications
The link/unlink bucket API is not yet supported through the CSM APIs with Motr as the backend. However, this API will work with RADOS as the backend.
Hi @kupranay, @hessio, and @keithpine,
Issue resolved -- thanks so much for the help! I successfully created an IAM user with the guide @kupranay provided above and performed basic I/O with the CORTX Development with RGW guide!
Here is a summary of the commands:
Create IAM user
# login to CSM to get the Bearer token
export CSM_IP=`kubectl get svc cortx-control-loadbal-svc -ojsonpath='{.spec.clusterIP}'`
tok=$(curl -d '{"username": "cortxadmin", "password": "Cortxadmin@123"}' https://$CSM_IP:8081/api/v2/login -k -i | grep -Po '(?<=Authorization: )\w* \w*')
# create IAM user
curl -X POST -H "$tok" -d '{ "uid": "12345678", "display_name": "gts3account", "access_key": "gregoryaccesskey", "secret_key": "gregorysecretkey" }' https://$CSM_IP:8081/api/v2/s3/iam/users -k
# check user
curl -H "Authorization: $tok" https://$CSM_IP:8081/api/v2/s3/iam/users/12345678 -k -i
Perform S3 IO
# install and configure aws
pip3 install awscli awscli-plugin-endpoint
aws configure set plugins.endpoint awscli_plugin_endpoint
aws configure set default.region us-east-1
aws configure set aws_access_key_id gregoryaccesskey
aws configure set aws_secret_access_key gregorysecretkey
kubectl describe svc cortx-io-svc-0
# Calico vxlan interface address on the node, e.g. 192.168.xxx.xx
Calico_IP=$(ip -4 -o addr show vxlan.calico | awk '{print $4}' | cut -d/ -f1)
# NodePort for cortx-rgw-http, e.g. 30056 (from "kubectl describe svc cortx-io-svc-0")
PORT=$(kubectl get svc cortx-io-svc-0 -o jsonpath='{.spec.ports[?(@.name=="cortx-rgw-http")].nodePort}')
# make, upload, delete
aws s3 mb s3://test-bucket --endpoint-url http://$Calico_IP:$PORT
aws s3 cp foo.txt s3://test-bucket/foo.txt --endpoint-url http://$Calico_IP:$PORT
aws s3 ls s3://test-bucket --endpoint-url http://$Calico_IP:$PORT
aws s3 rb s3://test-bucket --endpoint-url http://$Calico_IP:$PORT # An error occurred (BucketNotEmpty)
Small problem with getting objects and deleting a bucket
After uploading three text files to test-bucket, I tried to get them with a GET request, but failed:
[root@master learn]# curl -H "Authorization: $tok" $url -v
* About to connect() to 192.168.219.64 port 30056 (#0)
* Trying 192.168.219.64...
* Connected to 192.168.219.64 (192.168.219.64) port 30056 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.219.64:30056
> Accept: */*
> Authorization: Bearer 22072dea057248129f8144fd95311879
>
< HTTP/1.1 400 Bad Request
< Content-Length: 98
< Accept-Ranges: bytes
< Content-Type: application/xml
< Date: Tue, 29 Mar 2022 04:24:18 GMT
< Connection: Keep-Alive
<
* Connection #0 to host 192.168.219.64 left intact
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><HostId></HostId></Error>[root@master learn]#
I tried to follow the CORTX S3 API Guide, but is there an example of how to perform a get-object?
In addition, I had some trouble deleting the bucket:
[root@master learn]# aws s3 ls s3://test-bucket/ --endpoint-url http://$Calico_IP:$PORT
2022-03-29 03:37:49 47 learn
2022-03-29 03:39:24 253 mails
2022-03-29 03:23:49 0 object1
[root@master learn]# aws s3 rb s3://test-bucket/ --endpoint-url http://$Calico_IP:$PORT
remove_bucket failed: s3://test-bucket/ An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: Unknown
Did I do something wrong?
But aside from the above two problems, the S3 API seems to be working nicely!
If there is anything I can do, please let me know!
Best,
Faradawn
Hi @faradawn, in S3 you need to delete everything inside a bucket before you can delete the bucket itself, which is why you are getting the "An error occurred (BucketNotEmpty) when calling the DeleteBucket operation" message. If you use delete-object (https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-object.html) to remove each of the three objects in your bucket, you should then be able to delete the bucket.
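As a sketch (the object keys are taken from the `aws s3 ls` listing above, and the endpoint variables are the ones used earlier in this thread), the clean-up could look like:

```shell
# remove each object, then the now-empty bucket
for key in learn mails object1; do
    aws s3api delete-object --bucket test-bucket --key "${key}" \
        --endpoint-url "http://${Calico_IP}:${PORT}"
done
aws s3 rb s3://test-bucket --endpoint-url "http://${Calico_IP}:${PORT}"
```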
As for cURL, I don't have a lot of experience with it, but I had to supply the Date, Content-Type, Host, and Authorization headers. The error suggests the host is incorrect; maybe try using s3.seagate.com as the host. Here is what my cURL script looked like:
#!/bin/sh
# List a bucket using AWS Signature Version 2 authentication
s3Server="s3.seagate.com"
s3Bucket="mybucket"
s3File="sample.txt"
s3Key="ABCDEFGHHIJK"
s3Secret="ABCDEFGHIJK12345"
resource="/${s3Bucket}/"
contentType="application/octet-stream"
dateValue="$(date -u '+%a, %d %b %Y %T %Z')"
# Signature V2 = base64(HMAC-SHA1(secret, StringToSign))
stringToSign="GET\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -k -H "Host: ${s3Server}" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${s3Key}:${signature}" \
    https://${s3Server}/${s3Bucket}/
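The signing step can be sanity-checked offline, independent of any S3 server: with a fixed date and a dummy secret the HMAC-SHA1 signature is deterministic, and since SHA-1 digests are 20 bytes the base64-encoded signature is always 28 characters. A standalone sketch (using printf to avoid `echo -en` portability quirks):

```shell
# Reproducible AWS Signature V2 computation with fixed inputs
s3Secret="ABCDEFGHIJK12345"
contentType="application/octet-stream"
dateValue="Tue, 29 Mar 2022 04:24:18 GMT"   # fixed so the output is stable
resource="/mybucket/"
# StringToSign = VERB \n Content-MD5 (empty) \n Content-Type \n Date \n Resource
signature=$(printf 'GET\n\n%s\n%s\n%s' "${contentType}" "${dateValue}" "${resource}" \
    | openssl sha1 -hmac "${s3Secret}" -binary | base64)
echo "${signature}"   # a 28-character base64 string
```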
Alternatively, instead of using cURL, you could use s3api list-objects (https://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html), which would be much easier than trying to debug cURL, if you would like to try that.
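For example (a sketch reusing the endpoint variables from earlier in this thread; the object key `learn` is from the listing above):

```shell
# list the objects in the bucket
aws s3api list-objects --bucket test-bucket \
    --endpoint-url "http://${Calico_IP}:${PORT}"
# download a single object to a local file
aws s3api get-object --bucket test-bucket --key learn learn.out \
    --endpoint-url "http://${Calico_IP}:${PORT}"
```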
@johnbent @faradawn I found the issue that was making this documentation confusing. In this doc, under the AWS CLI section, point 2, the steps were as follows:
aws s3 mb s3://test-bucket --endpoint-url http://10.230.245.117:31000
aws s3 ls --endpoint-url http://10.230.245.117:31000
touch foo.txt
aws s3 cp foo.txt s3://test-bucket/object1 --endpoint-url http://10.230.245.117:31000
aws s3 ls s3://test-bucket --endpoint-url http://10.230.245.117:31000
aws s3 rb s3://test-bucket --endpoint-url http://10.230.245.117:31000 (*bucket must be empty for this command to succeed)
aws s3 ls --endpoint-url http://10.230.245.117:31000
This gives you an error when you run the remove-bucket command, because you can only remove empty buckets in S3.
So I updated it to look like this:
aws s3 mb s3://test-bucket --endpoint-url http://10.230.245.117:31000
aws s3 ls --endpoint-url http://10.230.245.117:31000
touch foo.txt
aws s3 cp foo.txt s3://test-bucket/object1 --endpoint-url http://10.230.245.117:31000
aws s3 ls s3://test-bucket --endpoint-url http://10.230.245.117:31000
aws s3 rm s3://test-bucket/object1 --endpoint-url http://10.230.245.117:31000
aws s3 rb s3://test-bucket --endpoint-url http://10.230.245.117:31000 (*bucket must be empty for this command to succeed)
aws s3 ls --endpoint-url http://10.230.245.117:31000
Now when you run this, you remove the object from the bucket first, so the bucket is empty when you go to delete it and you won't see the BucketNotEmpty error like before. Thanks a lot @faradawn for testing this for us and helping us improve our documentation.
Hi @osowski, no, there was no need -- it is a Confluence page, so I just edited the Confluence page directly.
Okay, great. @cdeshmukh was working on moving the CORTX Development with RGW guide over to the https://github.com/Seagate/cortx-k8s/tree/main#using-cortx-on-kubernetes section, so if you could work with him to check on the latest status of that (since you have the recent context here), that would be great!
Great, thanks for the update I will do that! :)
Hi @hessio,
Understood -- the cannot-delete-bucket error was due to the bucket not being empty! Thanks for providing the commands to first remove the objects and then delete the bucket; that was very clear!
I will try the cURL and s3api list-objects commands you provided to retrieve an object, and will let you know if I have any trouble -- thanks a lot!
Hi @osowski,
Thanks for your kind words of encouragement! I will keep learning CORTX and testing it on Kubernetes (on non-AWS instances), and will let you know if I encounter further issues!
I think I can close this issue -- may I? If there is anything I can do, please let me know!
Best,
Faradawn
Walter Lopatka commented in Jira Server:
Resolved in GitHub