Ubiquity support for ppc64le
Hi,
This issue is to track ppc64le support for ubiquity. I have been able to build the ubiquity binary for ppc64le and to run the unit tests as well. Here is the output of running the unit tests:
• SUCCESS! 3.383086ms PASS
coverage: 17.0% of statements
PASS
testing: warning: no tests to run
coverage: 0.0% of statements
Found no test suites, did you forget to run "ginkgo bootstrap"?
Ginkgo ran 9 suites in 1m16.85447563s
Test Suite Passed
-> Finished running unit tests (exit code 0)
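For reference, the build and test flow on a ppc64le host looks roughly like the sketch below. This is a minimal outline under assumptions: the repository's actual Makefile targets, output path, and image tag may differ.
# Build the binary; Go supports GOARCH=ppc64le out of the box, so it can also be cross-compiled from x86_64
GOOS=linux GOARCH=ppc64le go build -o bin/ubiquity .
# Run the unit suites with coverage; this is what produces a Ginkgo summary like the one above
ginkgo -r -cover
# Build the Docker image natively on the ppc64le host (hypothetical tag)
docker build -t ubiquity:ppc64le .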
I was able to successfully build the Docker image with a few changes; the PR will be attached to this issue.
The Docker image build was successful and the changes are generic. The testing was done on x86_64, thanks to @yadaven who ran it; the logs are below:
[root@vm1 ubiquity]# docker run -it --name=Y2 -e PORT=9999 -e LOG_PATH=/tmp/ -e DEFAULT_BACKEND=spectrum-scale -e DEFAULT_FILESYSTEM_NAME=gpfs_device -e SSC_NFS_SERVER_ADDRESS=192.168.58.22 -e FORCE_DELETE=true -e SSC_REST_ENDPOINT=https://192.168.34.8:443 -e SSC_REST_USER=admin -e SSC_REST_PASSWORD=admin001 -e CONFIG_PATH=/gpfs/gpfs_device -e SSC_REST_HOSTNAME=ubiquity-acceptance-ss-server -e UBIQUITY_PLUGIN_SSL_MODE=require -p 9999:9999 -v /gpfs/gpfs_device:/gpfs/gpfs_device/ ubiquity1:latest
Creating SSL directory /var/lib/ubiquity/ssl/private and setting ownership to user postgres ...
Creating default SSL key and certificate for ubiquity
Generating a 4096 bit RSA private key
...........................................++
.....................................................................................................................................++
writing new private key to 'root.key'
-----
Creating default SSL key /var/lib/ubiquity/ssl/private/ubiquity.key
Generating a 4096 bit RSA private key
............................................................................................................................++
.......................................................++
writing new private key to '/var/lib/ubiquity/ssl/private/ubiquity.key'
-----
Creating default SSL certificate /var/lib/ubiquity/ssl/private/ubiquity.key
Signature ok
subject=/CN=01a944ff4544/emailAddress=user@test.com
Getting CA Private Key
Creating default SSL key and certificate for ubiquity - done!
Calling ubiquity...
Starting Ubiquity Storage API server with config resources.UbiquityServerConfig{Port:9999, LogPath:"/tmp/", ConfigPath:"/gpfs/gpfs_device", SpectrumScaleConfig:resources.SpectrumScaleConfig{DefaultFilesystemName:"gpfs_device", NfsServerAddr:"192.168.58.22", SshConfig:resources.SshConfig{User:"", Host:"", Port:""}, RestConfig:resources.RestConfig{Endpoint:"https://192.168.34.8:443", User:"admin", Password:"admin001", Hostname:"ubiquity-acceptance-ss-server"}, ForceDelete:true}, ScbeConfig:resources.ScbeConfig{ConfigPath:"", ConnectionInfo:resources.ConnectionInfo{CredentialInfo:resources.CredentialInfo{UserName:"", Password:"****", Group:""}, Port:8440, ManagementIP:""}, DefaultService:"", DefaultVolumeSize:"", UbiquityInstanceName:"", DefaultFilesystemType:""}, BrokerConfig:resources.BrokerConfig{ConfigPath:"", Port:0}, DefaultBackend:"spectrum-scale", LogLevel:""}
2018-01-25 09:52:32.536 INFO 1 main.go:70 main::main Checking for heartbeat.... []
2018-01-25 09:52:32.579 INFO 1 main.go:77 main::main Heartbeat acquired []
2018-01-25 09:52:32.579 INFO 1 main.go:80 main::main Obtaining handle to DB []
ubiquity: 2018/01/25 09:52:32 spectrumscale.go:83: spectrumLocalClient: init start
ubiquity: 2018/01/25 09:52:32 connectors.go:58: Initializing SpectrumScale REST connector
ubiquity: 2018/01/25 09:52:32 datamodel.go:90: SpectrumDataModel: Create Volumes Table start
ubiquity: 2018/01/25 09:52:32 datamodel.go:96: SpectrumDataModel: Create Volumes Table end
ubiquity: 2018/01/25 09:52:32 spectrumscale.go:95: spectrumLocalClient: init end
ubiquity: 2018/01/25 09:52:32 spectrumscale_nfs.go:35: spectrumNfsLocalClient: init start
ubiquity: 2018/01/25 09:52:32 spectrumscale.go:83: spectrumLocalClient: init start
ubiquity: 2018/01/25 09:52:32 connectors.go:58: Initializing SpectrumScale REST connector
ubiquity: 2018/01/25 09:52:32 datamodel.go:90: SpectrumDataModel: Create Volumes Table start
ubiquity: 2018/01/25 09:52:32 datamodel.go:96: SpectrumDataModel: Create Volumes Table end
ubiquity: 2018/01/25 09:52:32 spectrumscale.go:95: spectrumLocalClient: init end
ubiquity: 2018/01/25 09:52:32 spectrumscale_nfs.go:54: spectrumNfsLocalClient: init end
2018-01-25 09:52:32.609 ERROR 1 simple_rest_client.go:225 scbe::initTransport failed [[{error=stat /var/lib/ubiquity/ssl/public/scbe-trusted-ca.crt: no such file or directory}]]
2018-01-25 09:52:32.609 ERROR 1 simple_rest_client.go:76 scbe::NewSimpleRestClient client.initTransport failed [[{error=stat /var/lib/ubiquity/ssl/public/scbe-trusted-ca.crt: no such file or directory}]]
2018-01-25 09:52:32.609 ERROR 1 scbe_rest_client.go:80 scbe::newScbeRestClient NewSimpleRestClient failed [[{error=stat /var/lib/ubiquity/ssl/public/scbe-trusted-ca.crt: no such file or directory}]]
2018-01-25 09:52:32.609 ERROR 1 scbe.go:61 scbe::NewScbeLocalClient NewScbeRestClient failed [[{error=stat /var/lib/ubiquity/ssl/public/scbe-trusted-ca.crt: no such file or directory}]]
ubiquity: 2018/01/25 09:52:32 clients.go:47: Not enough params to initialize 'scbe' client
Starting Storage API server on port 9999 ....
CTL-C to exit/stop Storage API server service
2018-01-25 09:53:52.626 INFO 1 storage_api_handler.go:51 web_server::func1 Activating just one backend [[{Backend=spectrum-scale}]]
ubiquity: 2018/01/25 09:53:52 spectrumscale.go:99: spectrumLocalClient: Activate start
ubiquity: 2018/01/25 09:53:52 rest_v2.go:170: spectrumRestConnector: IsFilesystemMounted
ubiquity: 2018/01/25 09:53:52 rest_v2.go:177: Get Nodes URL %s https://192.168.34.8:443/scalemgmt/v2/nodes
ubiquity: 2018/01/25 09:53:52 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:53:52 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:53:52 rest_v2.go:187: Got hostname from config ubiquity-acceptance-ss-server
ubiquity: 2018/01/25 09:53:52 rest_v2.go:192: spectrum rest Client: node name: ubiquity-acceptance-ss-server
ubiquity: 2018/01/25 09:53:52 rest_v2.go:204: spectrumRestConnector: IsFilesystemMounted end
This method is not yet implemented
ubiquity: 2018/01/25 09:53:52 rest_v2.go:152: spectrumRestConnector: GetClusterId
ubiquity: 2018/01/25 09:53:52 rest_v2.go:158: Get Cluster URL : %s https://192.168.34.8:443/scalemgmt/v2/cluster
ubiquity: 2018/01/25 09:53:52 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:53:52 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:53:52 rest_v2.go:166: spectrumRestConnector: GetClusterId end
ubiquity: 2018/01/25 09:53:52 spectrumscale.go:145: spectrumLocalClient: Activate end
2018-01-25 09:53:52.9 INFO 1 storage_api_handler.go:51 web_server::func1 Activating just one backend [[{Backend=spectrum-scale-nfs}]]
ubiquity: 2018/01/25 09:53:52 spectrumscale_nfs.go:58: spectrumNfsLocalClient: Activate-start
ubiquity: 2018/01/25 09:53:52 spectrumscale.go:99: spectrumLocalClient: Activate start
ubiquity: 2018/01/25 09:53:52 rest_v2.go:170: spectrumRestConnector: IsFilesystemMounted
ubiquity: 2018/01/25 09:53:52 rest_v2.go:177: Get Nodes URL %s https://192.168.34.8:443/scalemgmt/v2/nodes
ubiquity: 2018/01/25 09:53:52 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:53:52 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:53:52 rest_v2.go:187: Got hostname from config ubiquity-acceptance-ss-server
ubiquity: 2018/01/25 09:53:52 rest_v2.go:192: spectrum rest Client: node name: ubiquity-acceptance-ss-server
ubiquity: 2018/01/25 09:53:52 rest_v2.go:204: spectrumRestConnector: IsFilesystemMounted end
This method is not yet implemented
ubiquity: 2018/01/25 09:53:52 rest_v2.go:152: spectrumRestConnector: GetClusterId
ubiquity: 2018/01/25 09:53:52 rest_v2.go:158: Get Cluster URL : %s https://192.168.34.8:443/scalemgmt/v2/cluster
ubiquity: 2018/01/25 09:53:53 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:53:53 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:53:53 rest_v2.go:166: spectrumRestConnector: GetClusterId end
ubiquity: 2018/01/25 09:53:53 spectrumscale.go:145: spectrumLocalClient: Activate end
ubiquity: 2018/01/25 09:53:53 spectrumscale_nfs.go:61: spectrumNfsLocalClient: Activate-end
2018-01-25 09:55:05.782 INFO 1 migrate.go:44 database::doMigrations migrating [[{migration={{0 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC <nil>} }}]]
2018-01-25 09:55:05.783 INFO 1 migrate.go:44 database::doMigrations migrating [[{migration=&{0 {{0 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC <nil>} } 0 }}]]
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:149: spectrumLocalClient: create start
ubiquity: 2018/01/25 09:55:05 datamodel.go:167: SpectrumDataModel: GetVolume start
ubiquity: 2018/01/25 09:55:05 datamodel.go:173: SpectrumDataModel: GetVolume end
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:163: Opts for create: map[string]interface {}{"filesystem":"gpfs_device", "gid":"1000", "nfsClientConfig":"*(Access_Type=RW,Squash=no_root_squash,SecType=sys,Protocols=3:4)", "quota":"1024M", "size":"1", "type":"fileset", "uid":"1000", "backend":"spectrum-scale"}
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:169: Trying to determine type for request
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:714: determineTypeFromRequest start
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:729: determineTypeFromRequest end
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:175: Volume type requested: fileset
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:733: validateAndParseParams start
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:714: determineTypeFromRequest start
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:729: determineTypeFromRequest end
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:795: validateAndParseParams end
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:182: Params for create: %!s(bool=false),gpfs_device,,
ubiquity: 2018/01/25 09:55:05 spectrumscale.go:486: spectrumLocalClient: createFilesetQuotaVolume start
ubiquity: 2018/01/25 09:55:05 rest_v2.go:260: spectrumRestConnector: CreateFileset
ubiquity: 2018/01/25 09:55:05 rest_v2.go:279: filesetreq {pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f root fileset for container volume 0 0 0 0 0 0 0 }
ubiquity: 2018/01/25 09:55:05 rest_v2.go:283: Create Fileset URL: https://192.168.34.8:443/scalemgmt/v2/filesystems/gpfs_device/filesets
ubiquity: 2018/01/25 09:55:06 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:06 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:06 rest_v2.go:66: spectrumRestConnector: isRequestAccepted
ubiquity: 2018/01/25 09:55:06 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:06 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:06 rest_v2.go:76: spectrumRestConnector: isRequestAccepted end
ubiquity: 2018/01/25 09:55:06 rest_v2.go:80: spectrumRestConnector: waitForJobCompletion
ubiquity: 2018/01/25 09:55:06 rest_v2.go:55: spectrumRestConnector: checkAsynchronousJob
ubiquity: 2018/01/25 09:55:06 rest_v2.go:60: spectrumRestConnector: checkAsynchronousJob end
ubiquity: 2018/01/25 09:55:06 rest_v2.go:85: Job URL: https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all:
ubiquity: 2018/01/25 09:55:06 rest_v2.go:96: spectrumRestConnector: AsyncJobCompletion
ubiquity: 2018/01/25 09:55:06 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all:
ubiquity: 2018/01/25 09:55:07 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:07 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:09 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all:
ubiquity: 2018/01/25 09:55:10 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:10 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:12 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all:
ubiquity: 2018/01/25 09:55:12 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:12 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all:
ubiquity: 2018/01/25 09:55:14 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:14 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:117: Job https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000001&fields=:all: Completed Successfully: {[mmcrfileset 'gpfs_device' 'pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f' -t 'fileset for container volume' --inode-space 'root' --allow-permission-change 'chmodAndSetAcl' mmlinkfileset 'gpfs_device' 'pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f' -J '/gpfs/gpfs_device/pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f' ] [] 0 [] [EFSSG0070I File set pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f created successfully. EFSSG0078I File set pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f successfully linked.
]}
ubiquity: 2018/01/25 09:55:14 rest_v2.go:118: spectrumRestConnector: AsyncJobCompletion end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:92: spectrumRestConnector: waitForJobCompletion end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:300: spectrumRestConnector: CreateFileset end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:478: spectrumRestConnector: SetFilesetQuota
ubiquity: 2018/01/25 09:55:14 rest_v2.go:484: Set Quota URL: https://192.168.34.8:443/scalemgmt/v2/filesystems/gpfs_device/filesets/pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f/quotas
ubiquity: 2018/01/25 09:55:14 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:14 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:66: spectrumRestConnector: isRequestAccepted
ubiquity: 2018/01/25 09:55:14 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:14 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:76: spectrumRestConnector: isRequestAccepted end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:80: spectrumRestConnector: waitForJobCompletion
ubiquity: 2018/01/25 09:55:14 rest_v2.go:55: spectrumRestConnector: checkAsynchronousJob
ubiquity: 2018/01/25 09:55:14 rest_v2.go:60: spectrumRestConnector: checkAsynchronousJob end
ubiquity: 2018/01/25 09:55:14 rest_v2.go:85: Job URL: https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all:
ubiquity: 2018/01/25 09:55:14 rest_v2.go:96: spectrumRestConnector: AsyncJobCompletion
ubiquity: 2018/01/25 09:55:14 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all:
ubiquity: 2018/01/25 09:55:15 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:15 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:17 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all:
ubiquity: 2018/01/25 09:55:17 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:17 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:19 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all:
ubiquity: 2018/01/25 09:55:19 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:19 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:21 rest_v2.go:101: jobUrl https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all:
ubiquity: 2018/01/25 09:55:22 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:22 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:117: Job https://192.168.34.8:443/scalemgmt/v2/jobs?filter=jobId=3000000000002&fields=:all: Completed Successfully: {[mmsetquota 'gpfs_device:pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f' --block '1024M:1024M' ] [] 0 [] [EFSSG0040I The quota has been successfully set.]}
ubiquity: 2018/01/25 09:55:22 rest_v2.go:118: spectrumRestConnector: AsyncJobCompletion end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:92: spectrumRestConnector: waitForJobCompletion end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:509: spectrumRestConnector: SetFilesetQuota end
ubiquity: 2018/01/25 09:55:22 datamodel.go:146: SpectrumDataModel: InsertFilesetQuotaVolume start
ubiquity: 2018/01/25 09:55:22 datamodel.go:158: SpectrumDataModel: insertVolume start
ubiquity: 2018/01/25 09:55:22 datamodel.go:163: SpectrumDataModel: insertVolume end
ubiquity: 2018/01/25 09:55:22 datamodel.go:154: SpectrumDataModel: InsertFilesetQuotaVolume end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:513: Created fileset volume with fileset pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f, quota 1024M
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:514: spectrumLocalClient: createFilesetQuotaVolume end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:199: spectrumLocalClient: create end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:298: spectrumLocalClient: GetVolumeConfig start
ubiquity: 2018/01/25 09:55:22 datamodel.go:167: SpectrumDataModel: GetVolume start
ubiquity: 2018/01/25 09:55:22 datamodel.go:185: SpectrumDataModel: GetVolume end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:801: getVolumeMountPoint start
ubiquity: 2018/01/25 09:55:22 rest_v2.go:237: spectrumRestConnector: GetFilesystemMountpoint
ubiquity: 2018/01/25 09:55:22 rest_v2.go:243: Get Filesystem Mount URL: https://192.168.34.8:443/scalemgmt/v2/filesystems/gpfs_device
ubiquity: 2018/01/25 09:55:22 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:22 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:252: spectrumRestConnector: GetFilesystemMountpoint end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:820: getVolumeMountPoint end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:460: spectrumRestConnector: IsFilesetLinked
ubiquity: 2018/01/25 09:55:22 rest_v2.go:400: spectrumRestConnector: ListFileset
ubiquity: 2018/01/25 09:55:22 rest_v2.go:406: List Fileset URL: https://192.168.34.8:443/scalemgmt/v2/filesystems/gpfs_device/filesets/pvc-d1b73d32-01b5-11e8-8e5b-080027dd486f
ubiquity: 2018/01/25 09:55:22 rest_v2.go:43: spectrumRestConnector: isStatusOK
ubiquity: 2018/01/25 09:55:22 rest_v2.go:49: spectrumRestConnector: isStatusOK end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:421: spectrumRestConnector: ListFileset end
ubiquity: 2018/01/25 09:55:22 rest_v2.go:473: spectrumRestConnector: IsFilesetLinked end
ubiquity: 2018/01/25 09:55:22 spectrumscale.go:340: spectrumLocalClient: GetVolumeConfig finish
Are you trying to test GA 1.0 with Scale and with the Docker volume plugin?
@shay-berman Hi, the testing was done with master (i.e., GA 1.0), with Scale and the k8s dynamic provisioner.
So, I was able to successfully test ubiquity with SCBE and IBM block storage on ICP 2.1.0.2-rc2 running on Power systems.
Here are the results of the deployment:
root@ubuntu:~/go/src/github.com/IBM/ubiquity-k8s/scripts/installer-for-ibm-storage-enabler-for-containers# ./ubiquity_cli.sh -a status -n ubiquity
Working in namespace [ubiquity].
kubectl get storageclass | egrep "ubiquity|^NAME"
---------------------------------------------------------------------
NAME PROVISIONER
gold ubiquity/flex
kubectl get --namespace ubiquity secret/ubiquity-db-credentials secret/scbe-credentials cm/k8s-config cm/ubiquity-configmap pv/ibm-ubiquity-db pvc/ibm-ubiquity-db svc/ubiquity svc/ubiquity-db daemonset/ubiquity-k8s-flex deploy/ubiquity deploy/ubiquity-db deploy/ubiquity-k8s-provisioner
---------------------------------------------------------------------
NAME TYPE DATA AGE
secrets/ubiquity-db-credentials Opaque 3 15h
secrets/scbe-credentials Opaque 2 15h
NAME DATA AGE
cm/k8s-config 1 15h
cm/ubiquity-configmap 10 15h
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/ibm-ubiquity-db 20Gi RWO Delete Bound ubiquity/ibm-ubiquity-db gold 15h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/ibm-ubiquity-db Bound ibm-ubiquity-db 20Gi RWO gold 15h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/ubiquity ClusterIP 10.0.0.55 <none> 9999/TCP 15h
svc/ubiquity-db ClusterIP 10.0.0.72 <none> 5432/TCP 15h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds/ubiquity-k8s-flex 2 2 2 2 2 <none> 15h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/ubiquity 1 1 1 1 15h
deploy/ubiquity-db 1 1 1 1 15h
deploy/ubiquity-k8s-provisioner 1 1 1 1 15h
kubectl get --namespace ubiquity pod | egrep "^ubiquity|^NAME"
---------------------------------------------------------------------
NAME READY STATUS RESTARTS AGE
ubiquity-6cd557959-wsrxj 1/1 Running 0 15h
ubiquity-db-6c8757d854-ggjt8 1/1 Running 0 15h
ubiquity-k8s-flex-gx5b6 1/1 Running 0 15h
ubiquity-k8s-flex-ph2bf 1/1 Running 0 15h
ubiquity-k8s-provisioner-5f9b77cf8f-9wssp 1/1 Running 0 15h
root@ubuntu:~/go/src/github.com/IBM/ubiquity-k8s/scripts/installer-for-ibm-storage-enabler-for-containers#
Here is the result of the sanity test:
root@ubuntu:~/go/src/github.com/IBM/ubiquity-k8s/scripts/installer-for-ibm-storage-enabler-for-containers# ./ubiquity_cli.sh -a sanity -n ubiquity
Working in namespace [ubiquity].
--------------------------------------------------------------
Sanity description:
1. Create sanity-pvc, sanity-pod and wait for creation.
2. Delete the sanity-pod, sanity-pvc and wait for deletion.
Note: Uses yml files from directory ./yamls/sanity_yamls
--------------------------------------------------------------
persistentvolumeclaim "sanity-pvc" created
pvc [sanity-pvc] status is [Pending] while expected status is [Bound]. sleeping [3 sec] before retrying to check [0/10]
pvc [sanity-pvc] status is [Pending] while expected status is [Bound]. sleeping [3 sec] before retrying to check [1/10]
pvc [sanity-pvc] status [Bound] as expected (after 2/10 tries)
pod "sanity-pod" created
pod [sanity-pod] status is [Pending] while expected status is [Running]. sleeping [3 sec] before retrying to check [0/100]
pod [sanity-pod] status is [Pending] while expected status is [Running]. sleeping [3 sec] before retrying to check [1/100]
pod [sanity-pod] status is [Pending] while expected status is [Running]. sleeping [3 sec] before retrying to check [2/100]
[... the same message repeats for checks 3/100 through 64/100 ...]
pod [sanity-pod] status [Running] as expected (after 65/100 tries)
pod "sanity-pod" deleted
NAME READY STATUS RESTARTS AGE
sanity-pod 1/1 Terminating 0 3m
pod [sanity-pod] still exists. sleeping [3 sec] before retrying to check [0/100]
NAME READY STATUS RESTARTS AGE
sanity-pod 1/1 Terminating 0 3m
pod [sanity-pod] still exists. sleeping [3 sec] before retrying to check [1/100]
[... the same kubectl output and message repeat for checks 2/100 through 19/100; READY drops from 1/1 to 0/1 at check 11 ...]
pod [sanity-pod] was deleted (after 20/100 tries)
persistentvolumeclaim "sanity-pvc" deleted
pvc [sanity-pvc] was deleted (after 0/10 tries)
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f7163ff9-123d-11e8-b9ce-fa163ea7e2e1 1Gi RWO Delete Released ubiquity/sanity-pvc gold 5m
pv [pvc-f7163ff9-123d-11e8-b9ce-fa163ea7e2e1] still exists. sleeping [2 sec] before retrying to check [0/10]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f7163ff9-123d-11e8-b9ce-fa163ea7e2e1 1Gi RWO Delete Released ubiquity/sanity-pvc gold 5m
pv [pvc-f7163ff9-123d-11e8-b9ce-fa163ea7e2e1] still exists. sleeping [2 sec] before retrying to check [1/10]
pv [pvc-f7163ff9-123d-11e8-b9ce-fa163ea7e2e1] was deleted (after 2/10 tries)
"IBM Storage Enabler for Containers" sanity finished successfully.
Hi @Pensu, if there is no issue, could you please close this ticket?
@shay-berman Hi, yes, since ppc64le support will be coming, I will close this one.