process exit with code 132
ArmiT opened this issue · 23 comments
When I try to start a docker container using the command:
docker run -it -p 8888:80 openresty/openresty:alpine
it stops with exit code 132.
dmesg says:
[ 7037.973584] docker0: port 1(vethd17cbc4) entered blocking state
[ 7037.973587] docker0: port 1(vethd17cbc4) entered disabled state
[ 7037.974176] device vethd17cbc4 entered promiscuous mode
[ 7037.977153] IPv6: ADDRCONF(NETDEV_UP): vethd17cbc4: link is not ready
[ 7038.455470] eth0: renamed from veth377baff
[ 7038.470311] IPv6: ADDRCONF(NETDEV_CHANGE): vethd17cbc4: link becomes ready
[ 7038.470384] docker0: port 1(vethd17cbc4) entered blocking state
[ 7038.470386] docker0: port 1(vethd17cbc4) entered forwarding state
[ 7038.848814] traps: openresty[11535] trap invalid opcode ip:7ffa43f3cc08 sp:7ffd03b36190 error:0
[ 7038.848829] in libluajit-5.1.so.2.1.0[7ffa43f35000+276000]
[ 7039.174984] docker0: port 1(vethd17cbc4) entered disabled state
[ 7039.175110] veth377baff: renamed from eth0
[ 7039.297612] docker0: port 1(vethd17cbc4) entered disabled state
[ 7039.304632] device vethd17cbc4 left promiscuous mode
[ 7039.304647] docker0: port 1(vethd17cbc4) entered disabled state
docker info
Containers: 12
Running: 0
Paused: 0
Stopped: 12
Images: 7
Server Version: 17.05.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 71
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.10.0-20-generic
Operating System: Ubuntu 17.04
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.856GiB
Name: armit-pc
ID: ASVC:P2BR:M7SV:3KOM:66KK:IIU5:5K3L:RYGM:RJAF:FTSD:D2W2:DNMN
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
docker images openresty/openresty:alpine
REPOSITORY TAG IMAGE ID CREATED SIZE
openresty/openresty alpine 1aa892b461b2 3 weeks ago 44.9MB
Any idea what this could be?
@ArmiT I don't think so, since we need the CRC32 instructions in LuaJIT, which were first added in SSE 4.2.
You'd better build your own binary image directly on that box from the following OpenResty source tarball:
https://openresty.org/download/openresty-1.11.2.4rc0.4.tar.gz
This new version of OpenResty can correctly detect the lack of SSE 4.2 instructions in your current machine's CPU.
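Roughly, building from that tarball looks like this (the exact steps and any extra ./configure options may differ on your system):

grep -o sse4_2 /proc/cpuinfo | sort -u   # empty output means the CPU lacks SSE 4.2
curl -O https://openresty.org/download/openresty-1.11.2.4rc0.4.tar.gz
tar xzf openresty-1.11.2.4rc0.4.tar.gz
cd openresty-1.11.2.4rc0.4
./configure && make && sudo make install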
@ArmiT Mind you, without the SSE 4.2 CRC32 instructions, creating new Lua strings can be VERY SLOW in the worst cases.
So in the end, there is no issue here? I'm going to morph this into a documentation issue -- to let people know that the Docker Hub images don't support some platforms (e.g. ARM) and assume some CPU features (e.g. SSE 4.2), and that those users should build their own images locally.
So is there a way to enable/disable -msse4.2 in the configure args? All of our build machines support SSE 4.2, but some target hosts are AMD platforms.
@pkking You can try the ./configure option --with-luajit-xcflags='-mno-sse4.2' to disable SSE 4.2 on a box with SSE 4.2 support.
But it is STRONGLY discouraged to disable SSE 4.2 for builds running on boxes with SSE 4.2.
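For example (illustrative only; add your other usual ./configure options as needed):

./configure --with-luajit-xcflags='-mno-sse4.2'
make && sudo make install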
@agentzh Thanks for the suggestion, we are considering replacing these AMD boxes :)
Anyway, are there any statistics on how much performance we would lose?
+1
While it (maybe) affects performance (well, "orders of magnitude" is not a benchmark), why are you forcing this through compiler flags, and not through runtime detection, like it's done for SSE4.1 and every other extended instruction set?
If this is so critical performance-wise, why leave it up to compiler optimizations? What happens if the next version of gcc decides not to use crc32c in that particular situation?
@isage This cannot easily be done at runtime, and even if it could, it could not be added without runtime overhead in the hot code path. We use completely different code paths for lj_str_new.
The C compiler cannot do this automatically; even modern optimizing compilers are not magic.
GCC always keeps backward compatibility for these SSE primitives; otherwise it would be a bug in GCC.
Below is a real benchmark for this, if you are interested:
Please do not judge us without actually looking at the code... That would not usually be helpful.
I'm pretty sure that you are aware of such a thing as cpuid. LuaJIT already calls cpuid on start to determine what opcodes to emit.
I fail to see how the same approach can't be used to choose between two code paths at runtime. 99% of modern game engines do that without any "magic compilers".
Sure, the SSE4.2 opcodes will be left in the code; they just won't run, and won't throw "invalid opcode".
@agentzh worst argument ever.
Here you go isage/luajit2@0fcdf12
A binary compiled on non-SSE4.2 hardware runs with the performance gain of the crc32 instructions on SSE4.2-capable hardware.
A binary compiled on SSE4.2 hardware still runs on non-SSE4.2 hardware, because the SSE4.2 instructions are never called.
The lua-resty-template benchmark shows values similar to the stock v2.1-agentzh branch.
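(For readers following along: below is a minimal illustrative sketch of that "probe CPUID once, dispatch at runtime" idea. It is not LuaJIT's code and not the code in the linked commit; all names are made up, and it assumes a reasonably recent GCC or Clang on x86-64.)

/* Illustrative sketch only -- not LuaJIT's or the linked commit's actual code. */
#include <stddef.h>
#include <stdint.h>
#include <nmmintrin.h>   /* _mm_crc32_u8 (SSE 4.2) */

/* Plain C fallback matching the per-byte semantics of the crc32 instruction
 * (reflected CRC-32C polynomial 0x82F63B78, no pre/post inversion). */
static uint32_t crc32c_sw(uint32_t h, const char *p, size_t n) {
    while (n--) {
        h ^= (uint8_t)*p++;
        for (int k = 0; k < 8; k++)
            h = (h >> 1) ^ (0x82F63B78u & (0u - (h & 1u)));
    }
    return h;
}

/* Hardware path: SSE 4.2 is enabled for this one function only, so the rest
 * of the binary still runs on CPUs without it. */
__attribute__((target("sse4.2")))
static uint32_t crc32c_hw(uint32_t h, const char *p, size_t n) {
    while (n--)
        h = _mm_crc32_u8(h, (uint8_t)*p++);
    return h;
}

/* Resolved once at startup; the only hot-path overhead is the indirect call. */
static uint32_t (*str_hash)(uint32_t, const char *, size_t) = crc32c_sw;

void str_hash_init(void) {
    __builtin_cpu_init();                       /* query CPUID */
    if (__builtin_cpu_supports("sse4.2"))
        str_hash = crc32c_hw;
}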
@isage That looks good. Will you please create a pull request so that it's easier to review and discuss?
Badly need this, guys. I just realized one of our servers doesn't have SSE4.2, and it took me a while to track down what exactly was causing openresty to throw "invalid opcode" in VMs.
Just hit this on my Synology NAS, which has an Intel Atom D2700 CPU. Will there be a Docker Hub image with SSE4.2 pre-disabled or detected dynamically?
Tried building a custom image with this docker-compose:
openresty:
  image: torarnv/openresty:stretch
  build:
    context: git://github.com/openresty/docker-openresty.git
    dockerfile: stretch/Dockerfile
    args:
      RESTY_CONFIG_OPTIONS_MORE: --with-luajit-xcflags='-mno-sse4.2'
    cache_from:
      - openresty/openresty:stretch
But I got a warning: "One or more build-args [RESTY_CONFIG_OPTIONS_MORE] were not consumed".
Perhaps the Tips & Pitfalls section should highlight that these build args only work for the fat images that build nginx from source?
@torarnv The automated builds are now beefed up (they weren't back when this build was made). If Travis can build no-sse packages, then I'll do it -- see #103, which I just created.
Each "Building" section says what Dockerfiles they work with (although I see Bionic was missing and will add that) -- and note it's all the build-from-source images that take those options. Although I'm open to documentation enhancements, I don't want to add a line that says "make sure you read the advanced sections below very carefully and know you are in the right one". If you have a specific text suggestion, or can identify what confused you, I'm happy to merge or edit the README.
If you didn't see it, these are the options for customizing the Stretch images.
BTW, I think it's cool that you were trying to get this running on an embedded NAS.
Thanks @neomantra! I ended up basing mine on the alpine image, which also builds from source, and I built my image on macOS since the Synology Docker is too broken to build images :)
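(For anyone landing here later: since it's the build-from-source images that consume these options as build args, a plain docker build along the following lines should also work. This is only a sketch -- the tag is made up, and it assumes the alpine Dockerfile sits at alpine/Dockerfile with the repo root as the build context, like the stretch one above.)

git clone https://github.com/openresty/docker-openresty.git
cd docker-openresty
docker build -f alpine/Dockerfile \
    --build-arg RESTY_CONFIG_OPTIONS_MORE="--with-luajit-xcflags='-mno-sse4.2'" \
    -t local/openresty:alpine-no-sse42 .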