Playbook fails on latest EC2 Ubuntu 16.04
bluemalkin opened this issue · 2 comments
bluemalkin commented
Create an EC2 instance on AWS using the latest Ubuntu 16.04.2 LTS AMI and run the example playbook.
It fails:
failed: [localhost] => (item=[u'linux-image-extra-4.4.0-1013-aws', u'linux-image-extra-virtual']) => {"failed": true, "item": ["linux-image-extra-4.4.0-1013-aws", "linux-image-extra-virtual"], "msg": "No package matching 'linux-image-extra-4.4.0-1013-aws' is available"}
...ignoring
TASK [angstwad.docker_ubuntu : Try again to install linux-image-extra if previous attempt failed] ***
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No package(s) matching 'linux-image-extra-4.4.0-1013*' available"}
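For anyone hitting this, a quick way to see the mismatch is to compare the running kernel against the extras packages apt actually knows about. This is just a generic diagnostic sketch using standard `uname`/`apt-cache` commands, not part of the role:

```shell
#!/bin/sh
# Show the running kernel release (e.g. 4.4.0-1013-aws)
kernel="$(uname -r)"
echo "Running kernel: ${kernel}"

# List the linux-image-extra packages apt knows about; guard the
# call so the script doesn't error out on non-apt systems.
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache search '^linux-image-extra-' | sort
else
    echo "apt-cache not available on this system"
fi
```

If `linux-image-extra-$(uname -r)` is missing from that list, the playbook failure above is expected: the package simply hasn't been published for your kernel yet.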
zoolii commented
The same here on a Scaleway Ubuntu 16.04 server:
failed: (item=[u'linux-image-extra-4.9.20-std-1', u'linux-image-extra-virtual']) => {"failed": true, "item": ["linux-image-extra-4.9.20-std-1", "linux-image-extra-virtual"], "msg": "No package matching 'linux-image-extra-4.9.20-std-1' is available"}
...ignoring
TASK [angstwad.docker_ubuntu : Try again to install linux-image-extra if previous attempt failed] *******************************************************
fatal: FAILED! => {"changed": false, "failed": true, "msg": "No package(s) matching 'linux-image-extra-4.9.20-std*' available"}
angstwad commented
Okay, so it turns out the root cause is that Ubuntu 16.04's linux-image-extra-$kernel_version
packages lag behind the latest kernels. Above, you have kernel 4.4.0-1013-aws installed, but the latest extras package is linux-image-extra-4.4.0-1012.
Unfortunately I couldn't reproduce this issue myself because my kernel was 4.4.0-1010-aws.
However, it turns out we don't even need to install the extras on 16.04, making this a moot point. I'll disable the installation on 16.04, which should resolve this issue.
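For reference, gating the install in the role could look something like the sketch below. Task wording and exact condition syntax are my own here (the `version` test form shown requires a reasonably recent Ansible), so the actual role may structure this differently:

```yaml
# Hypothetical sketch: only attempt the extras install on pre-Xenial
# releases, since 16.04's stock kernel already ships what Docker needs.
- name: Install linux-image-extra packages (pre-16.04 only)
  apt:
    name:
      - "linux-image-extra-{{ ansible_kernel }}"
      - linux-image-extra-virtual
    state: present
    update_cache: yes
  when: ansible_distribution == 'Ubuntu' and
        ansible_distribution_version is version('16.04', '<')
```

`ansible_kernel` is the gathered fact holding `uname -r`, so the guarded task still targets the running kernel on older releases.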