Hyperd: got wiretype 0, want 2
After installing according to the deployment guide, I get an error from frakti when calling hyperd:
E0606 17:54:29.893393 5824 manager.go:215] RunPodSandbox from hyper runtime service failed: rpc error: code = Unknown desc = proto: bad wiretype for field types.UserInterface.Gateway: got wiretype 0, want 2
Does anybody know what I did wrong? Is this caused by a version mismatch between frakti and hyperd?
@enzian Please install hyperd from source. The current prebuilt hyperd release does not match the one frakti uses.
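For context: in the protobuf wire format, wiretype 0 is a varint and wiretype 2 is length-delimited (strings, bytes, embedded messages), so this error means the two sides were generated from different .proto definitions of types.UserInterface. A minimal sketch of the failure mode in Go using the protowire package; the field number and types here are made up for illustration, not taken from the actual hyperd schema:

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Sender built from an old schema: encodes a hypothetical Gateway
	// field (number 7 here) as a varint, i.e. wiretype 0.
	var msg []byte
	msg = protowire.AppendTag(msg, 7, protowire.VarintType)
	msg = protowire.AppendVarint(msg, 42)

	// Receiver built from a newer schema: expects the same field to be
	// a string, i.e. length-delimited, wiretype 2.
	num, typ, _ := protowire.ConsumeTag(msg)
	if typ != protowire.BytesType {
		// Mirrors the frakti error: got wiretype 0, want 2.
		fmt.Printf("bad wiretype for field %d: got wiretype %d, want %d\n",
			num, typ, protowire.BytesType)
	}
}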
hmm, after a while I figured out that this might be it - so I'm on that! It would be nice to note this in the deployment guide for the 1.10 version ;-)
ok, so I built my own hyperd and hyperctl binaries - frakti and hyperd started chatting - but containers consistently fail:
I0610 20:58:26.630948 11423 vm_states.go:301] SB[vm-mFyQIqpKrJ] startPod: &json.Pod{Hostname:"nginx-app-786897f7d7-882hl", DeprecatedContainers:[]json.Container(nil), DeprecatedInterfaces:[]json.NetworkInf(nil), Dns:[]string{"10.96.0.10"}, DnsOptions:[]string{"ndots:5"}, DnsSearch:[]string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local", "ewl-internet.ch"}, DeprecatedRoutes:[]json.Route(nil), ShareDir:"share_dir", PortmappingWhiteLists:(*json.PortmappingWhiteList)(0xc421610c00)}
I0610 20:58:31.063566 11423 vm_states.go:304] SB[vm-mFyQIqpKrJ] pod start successfully
I0610 20:58:31.063581 11423 provision.go:285] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] sandbox init result: <nil>
I0610 20:58:31.076103 11423 provision.go:464] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] adding resource to sandbox
I0610 20:58:31.305116 11423 run.go:47] Starting pod "k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046" in vm: "vm-mFyQIqpKrJ"
I0610 20:58:31.305135 11423 provision.go:500] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] start all containers
I0610 20:58:31.310657 11423 server.go:388] getting image: nginx:latest
I0610 20:58:33.395280 11423 server.go:406] got image: nginx:latest
I0610 20:58:35.035757 11423 server.go:418] pull image of nginx:latest for its digest: sha256:3e2ffcf0edca2a4e9b24ca442d227baea7b7f0e33ad654ef1eb806fbd9bedcf0
I0610 20:58:37.064660 11423 container.go:511] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] create container 064a94b882226e050fe59cbc7440a3042187a449da9bf21ee4db53d7ade4c353 (w/: [])
I0610 20:58:37.064747 11423 container.go:533] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[064a94b88222(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] describe container
I0610 20:58:37.064955 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas default-token-btqsf_faa5ad32
I0610 20:58:37.064963 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_faa5ad32] volume inserted
I0610 20:58:37.065023 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas etc-hosts_c0e63e3e
I0610 20:58:37.065029 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_c0e63e3e] volume inserted
I0610 20:58:37.065091 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas ba0fc572_dcea5466
I0610 20:58:37.065097 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[ba0fc572_dcea5466] volume inserted
I0610 20:58:37.081580 11423 container.go:257] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[064a94b88222(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] start container
E0610 20:58:37.093727 11423 container.go:259] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[064a94b88222(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] failed to start container: Create new container failed: Error:
E0610 20:58:37.094040 11423 container.go:30] ContainerStart failed Create new container failed: Error: with request container_id:"064a94b882226e050fe59cbc7440a3042187a449da9bf21ee4db53d7ade4c353"
I0610 20:58:37.096917 11423 container.go:1075] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[064a94b88222(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] container exited with code 255 (at 2018-06-10 18:58:37.095768433 +0000 UTC)
I0610 20:58:37.096932 11423 container.go:1081] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[064a94b88222(k8s_nginx-app.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_6035b5d6)] clean up container
I0610 20:58:37.960930 11423 server.go:388] getting image: nginx:latest
I0610 20:58:39.988034 11423 server.go:406] got image: nginx:latest
I0610 20:58:41.534852 11423 server.go:418] pull image of nginx:latest for its digest: sha256:3e2ffcf0edca2a4e9b24ca442d227baea7b7f0e33ad654ef1eb806fbd9bedcf0
I0610 20:58:43.597426 11423 container.go:511] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] create container cef9b1e31fcc5c327a8e3064239fc76dde180fc78ef13c62ddb2b05dfb49af77 (w/: [])
I0610 20:58:43.597528 11423 container.go:533] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[cef9b1e31fcc(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] describe container
I0610 20:58:43.597760 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas default-token-btqsf_22a3a407
I0610 20:58:43.597769 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_22a3a407] volume inserted
I0610 20:58:43.597822 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas etc-hosts_b01aaee4
I0610 20:58:43.597827 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_b01aaee4] volume inserted
I0610 20:58:43.597871 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas 4ffe6298_a09d7559
I0610 20:58:43.597876 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[4ffe6298_a09d7559] volume inserted
I0610 20:58:43.605281 11423 container.go:257] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[cef9b1e31fcc(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] start container
E0610 20:58:43.625096 11423 container.go:259] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[cef9b1e31fcc(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] failed to start container: Create new container failed: Error:
E0610 20:58:43.625439 11423 container.go:30] ContainerStart failed Create new container failed: Error: with request container_id:"cef9b1e31fcc5c327a8e3064239fc76dde180fc78ef13c62ddb2b05dfb49af77"
I0610 20:58:43.627587 11423 container.go:1075] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[cef9b1e31fcc(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] container exited with code 255 (at 2018-06-10 18:58:43.627552334 +0000 UTC)
I0610 20:58:43.627596 11423 container.go:1081] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[cef9b1e31fcc(k8s_nginx-app.1_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_0fe7c43f)] clean up container
I0610 20:58:43.706680 11423 context.go:403] SB[vm-mFyQIqpKrJ] remove container 064a94b882226e050fe59cbc7440a3042187a449da9bf21ee4db53d7ade4c353
I0610 20:58:43.706695 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk default-token-btqsf_faa5ad32
I0610 20:58:43.706703 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_faa5ad32] volume remove from sandbox (removed: true)
I0610 20:58:43.706709 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk etc-hosts_c0e63e3e
I0610 20:58:43.706714 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_c0e63e3e] volume remove from sandbox (removed: true)
I0610 20:58:43.706719 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk ba0fc572_dcea5466
I0610 20:58:43.706724 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[ba0fc572_dcea5466] volume remove from sandbox (removed: true)
I0610 20:58:59.079430 11423 server.go:388] getting image: nginx:latest
I0610 20:59:01.032680 11423 server.go:406] got image: nginx:latest
I0610 20:59:02.698147 11423 server.go:418] pull image of nginx:latest for its digest: sha256:3e2ffcf0edca2a4e9b24ca442d227baea7b7f0e33ad654ef1eb806fbd9bedcf0
I0610 20:59:04.739909 11423 container.go:511] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] create container 7ef5b8beea615b65f2d028fce29c096a472115411f7716f041e5740b59f75904 (w/: [])
I0610 20:59:04.740007 11423 container.go:533] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[7ef5b8beea61(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] describe container
I0610 20:59:04.740201 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas default-token-btqsf_c510b846
I0610 20:59:04.740208 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_c510b846] volume inserted
I0610 20:59:04.740253 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas etc-hosts_3c0618d4
I0610 20:59:04.740257 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_3c0618d4] volume inserted
I0610 20:59:04.740294 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas d04c3efc_0b9cef65
I0610 20:59:04.740298 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[d04c3efc_0b9cef65] volume inserted
I0610 20:59:04.750374 11423 container.go:257] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[7ef5b8beea61(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] start container
E0610 20:59:04.754058 11423 container.go:259] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[7ef5b8beea61(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] failed to start container: Create new container failed: Error:
E0610 20:59:04.755475 11423 container.go:30] ContainerStart failed Create new container failed: Error: with request container_id:"7ef5b8beea615b65f2d028fce29c096a472115411f7716f041e5740b59f75904"
I0610 20:59:04.765789 11423 container.go:1075] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[7ef5b8beea61(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] container exited with code 255 (at 2018-06-10 18:59:04.765751675 +0000 UTC)
I0610 20:59:04.765796 11423 container.go:1081] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[7ef5b8beea61(k8s_nginx-app.2_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_97d5b420)] clean up container
I0610 20:59:04.873381 11423 context.go:403] SB[vm-mFyQIqpKrJ] remove container cef9b1e31fcc5c327a8e3064239fc76dde180fc78ef13c62ddb2b05dfb49af77
I0610 20:59:04.873396 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk default-token-btqsf_22a3a407
I0610 20:59:04.873405 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_22a3a407] volume remove from sandbox (removed: true)
I0610 20:59:04.873411 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk etc-hosts_b01aaee4
I0610 20:59:04.873417 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_b01aaee4] volume remove from sandbox (removed: true)
I0610 20:59:04.873422 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk 4ffe6298_a09d7559
I0610 20:59:04.873427 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[4ffe6298_a09d7559] volume remove from sandbox (removed: true)
I0610 20:59:31.081046 11423 server.go:388] getting image: nginx:latest
I0610 20:59:33.425166 11423 server.go:406] got image: nginx:latest
I0610 20:59:34.953424 11423 server.go:418] pull image of nginx:latest for its digest: sha256:3e2ffcf0edca2a4e9b24ca442d227baea7b7f0e33ad654ef1eb806fbd9bedcf0
I0610 20:59:37.146485 11423 container.go:511] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] create container 38477f8b2e56261ba1fd05b61be67c147a92cd1ad0011a427410aa6e82a29809 (w/: [])
I0610 20:59:37.146560 11423 container.go:533] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[38477f8b2e56(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] describe container
I0610 20:59:37.146886 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas default-token-btqsf_973f5143
I0610 20:59:37.146899 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_973f5143] volume inserted
I0610 20:59:37.146983 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas etc-hosts_88a9df86
I0610 20:59:37.146990 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_88a9df86] volume inserted
I0610 20:59:37.147053 11423 context.go:438] SB[vm-mFyQIqpKrJ] return volume add success for dir/nas 24e4e490_e75849a9
I0610 20:59:37.147059 11423 volume.go:115] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[24e4e490_e75849a9] volume inserted
I0610 20:59:37.171102 11423 container.go:257] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[38477f8b2e56(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] start container
E0610 20:59:37.182464 11423 container.go:259] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[38477f8b2e56(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] failed to start container: Create new container failed: Error:
E0610 20:59:37.182783 11423 container.go:30] ContainerStart failed Create new container failed: Error: with request container_id:"38477f8b2e56261ba1fd05b61be67c147a92cd1ad0011a427410aa6e82a29809"
I0610 20:59:37.187463 11423 container.go:1075] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[38477f8b2e56(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] container exited with code 255 (at 2018-06-10 18:59:37.186934993 +0000 UTC)
I0610 20:59:37.187470 11423 container.go:1081] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Con[38477f8b2e56(k8s_nginx-app.3_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_b3b4a8af)] clean up container
I0610 20:59:38.222508 11423 context.go:403] SB[vm-mFyQIqpKrJ] remove container 7ef5b8beea615b65f2d028fce29c096a472115411f7716f041e5740b59f75904
I0610 20:59:38.222519 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk default-token-btqsf_c510b846
I0610 20:59:38.222525 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[default-token-btqsf_c510b846] volume remove from sandbox (removed: true)
I0610 20:59:38.222529 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk etc-hosts_3c0618d4
I0610 20:59:38.222533 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[etc-hosts_3c0618d4] volume remove from sandbox (removed: true)
I0610 20:59:38.222536 11423 context.go:470] SB[vm-mFyQIqpKrJ] remove disk d04c3efc_0b9cef65
I0610 20:59:38.222539 11423 volume.go:150] Pod[k8s_POD.0_nginx-app-786897f7d7-882hl_default_3b8550ce-6ce0-11e8-a2e6-08002776861a_16cb2046] Vol[d04c3efc_0b9cef65] volume remove from sandbox (removed: true)
The failed to start container: Create new container failed: Error: message is not terribly helpful to me ;-) @bergwolf do you know if I'm still having a dependency issue here?
@enzian Can you please enable verbose debug info when starting hyperd, so that we get more logs about what has happened? You can do it by appending something like --v=7 to the hyperd arguments.
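If hyperd runs under systemd (as in the deployment guide), one way to do this is a drop-in override; the ExecStart below is only a sketch - keep whatever arguments your unit's existing ExecStart already passes:

$ sudo systemctl edit hyperd
# in the editor that opens, add:
[Service]
ExecStart=
ExecStart=/usr/bin/hyperd --v=7
$ sudo systemctl restart hyperd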
ok, I bumped the verbosity up to 7; this is what I've got:
So the failure is here:
I0611 11:39:04.125060 80707 vm_console.go:100] SB[vm-vPliCrdCnA] [CNL] call hyper_new_container, json {"id":"c7faf95a6bedae6610bc633f32e87521e174356b4cbe2f4bffbca2f97e2a9226","rootfs":"rootfs","image":"/bdf16b891f79fe83aed3d2e2e74b5c9c7408e1c007bf9deb318dc271ef676938","fsmap":[{"source":"qEhZERnGga","path":"/kube-dns-config","readOnly":true,"dockerVolume":false},{"source":"HzlkEBdcBP","path":"/var/run/secrets/kubernetes.io/serviceaccount","readOnly":true,"dockerVolume":false},{"source":"DYPqCgmrxJ","path":"/etc/hosts","readOnly":false,"dockerVolume":false},{"source":"roZUlTFZwb","path":"/dev/termination-log","readOnly":false,"dockerVolume":false}],"process":{"id":"init","terminal":false,"stdio":1,"stderr":2,"args":["/kube-dns","--domain=cluster.local.","--dns-port=10053","--config-dir=/kube-dns-config","--v=2"],"envs":[{"env":"KUBE_DNS_SERVICE_PORT_DNS","value":"53"},{"env":"PATH","value":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"},{"env":"KUBE_DNS_SERVICE_HOST","value":"10.96.0.10"},{"env":"KUBERNETES_PORT_443_TCP_PROTO","value":"tcp"},{"env":"KUBERNETES_PORT_443_TCP","value":"tcp://10.96.0.1:443"},{"env":"KUBERNETES_SERVICE_HOST","value":"10.96.0.1"},{"env":"KUBE_DNS_SERVICE_PORT_DNS_TCP","value":"53"},{"env":"KUBE_DNS_PORT","value":"udp://10.96.0.10:53"},{"env":"KUBE_DNS_PORT_53_UDP_PORT","value":"53"},{"env":"KUBE_DNS_PORT_53_UDP_ADDR","value":"10.96.0.10"},{"env":"KUBE_DNS_SERVICE_PORT","value":"53"},{"env":"KUBERNETES_PORT","value":"tcp://10.96.0.1:443"},{"env":"KUBE_DNS_PORT_53_TCP_PROTO","value":"tcp"},{"env":"KUBE_DNS_PORT_53_TCP","value":"tcp://10.96.0.10:53"},{"env":"KUBERNETES_SERVICE_PORT","value":"443"},{"env":"KUBERNETES_PORT_443_TCP_PORT","value":"443"},{"env":"KUBERNETES_PORT_443_TCP_ADDR","value":"10.96.0.1"},{"env":"KUBE_DNS_PORT_53_TCP_PORT","value":"53"},{"env":"PROMETHEUS_PORT","value":"10055"},{"env":"KUBE_DNS_PORT_53_UDP","value":"udp://10.96.0.10:53"},{"env":"KUBERNETES_SERVICE_PORT_HTTPS","value":"443"},{"env":"KUBE_DNS_PORT_53_UDP_PROTO","value":"udp"},{"env":"KUBE_DNS_PORT_53_TCP_ADDR","value":"10.96.0.10"}],"workdir":"/"},"restartPolicy":"never","initialize":true,"readOnly":false}, len 2064
I0611 11:39:04.127505 80707 vm_console.go:100] SB[vm-vPliCrdCnA] [CNL] parse container json failed
Thanks for pointing that out! Now what else do I need to build to get this to run? ;-)
What else did I need to build? The answer was hyperstart:
the kernel and initrd image have to match hyperd... This means so far everything is working - well, except kube-dns, which keeps crashing, but that's another story!
Thanks to @bergwolf for the assistance!
For those who want to run the 1.10.0 version, here are some pointers:
You can follow the deployment guide in this repo, but you cannot use the distributed hyperhq packages as described, because the interfaces the 1.10 release of frakti requires from hyperd do not match the latest released hyperd binaries. This means you'll have to build your own hyperd binaries.
So here's a step-by-step guide on how to get there:
- Follow the deployment guide and install hyperd.
- Install the golang toolchain:
  $ sudo add-apt-repository ppa:gophers/archive
  $ sudo apt-get update
  $ sudo apt-get install golang-1.10-go
  $ sudo ln /usr/lib/go-1.10/bin/gofmt /usr/bin/gofmt
  $ sudo ln /usr/lib/go-1.10/bin/go /usr/bin/go
  $ mkdir ~/go
  $ export GOPATH=~/go
  Now you should be able to call the go tool:
  $ go version
  go version go1.10 linux/amd64
- Clone and build hyperd, hyperctl and hyperstart according to this guide.
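  Roughly, assuming the autotools build described in each project's README (verify against the guide; hyperctl is built as part of the hyperd tree):
  $ mkdir -p ~/go/src/github.com/hyperhq && cd ~/go/src/github.com/hyperhq
  $ git clone https://github.com/hyperhq/hyperd.git
  $ git clone https://github.com/hyperhq/hyperstart.git
  $ (cd hyperd && ./autogen.sh && ./configure && make)
  $ (cd hyperstart && ./autogen.sh && ./configure && make)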
- Stop the hyperd service:
  $ sudo systemctl stop hyperd
- Rename and replace the installed binaries with your newly built ones:
  $ cd ~/go/src/github.com/hyperhq
  $ sudo mv /usr/bin/hyperd /usr/bin/hyperd.back && sudo cp hyperd/cmd/hyperd/hyperd /usr/bin/hyperd
  $ sudo mv /usr/bin/hyperctl /usr/bin/hyperctl.back && sudo cp hyperd/cmd/hyperctl/hyperctl /usr/bin/hyperctl
  $ sudo mv /var/lib/hyper/kernel /var/lib/hyper/kernel.back && sudo cp hyperstart/build/arch/x86_64/kernel /var/lib/hyper/kernel
  $ sudo mv /var/lib/hyper/hyper-initrd.img /var/lib/hyper/hyper-initrd.img.back && sudo cp hyperstart/build/hyper-initrd.img /var/lib/hyper/hyper-initrd.img
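  To double-check the swap took effect before restarting (plain ls, nothing hyper-specific):
  $ ls -l /usr/bin/hyperd* /usr/bin/hyperctl*
  $ ls -l /var/lib/hyper/kernel* /var/lib/hyper/hyper-initrd.img*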
We have now replaced hyperd, hyperctl, the kernel and hyper-initrd.img, and can restart the hyperd daemon:
  $ sudo systemctl start hyperd
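If the daemon does not come up cleanly, standard systemd tooling shows why:
  $ sudo systemctl status hyperd
  $ sudo journalctl -u hyperd -f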
- Resume the frakti deployment guide by installing Docker as described there!
happy hacking!
@enzian Thanks a lot for the updates! We should release a new version of the hypercontainer binaries to avoid such confusion!
that would certainly solve this issue - on the other hand, I learned a lot about hyper in the meantime ;-)