Segmentation Fault
haydenfree opened this issue · 11 comments
Hi,
I've got the API set up to export functionality from a package, and it works without any problems. However, if I leave the API running on an EC2 instance in a screen session, I get a segmentation fault after a while. Any idea why this might be?
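For reference, the setup is roughly the standard Bukdu pattern (this sketch is illustrative, not the actual application code; the controller and endpoint names just match the log below):

using Bukdu

struct RESTController <: ApplicationController
    conn::Conn
end

# simple JSON endpoint exposing the package's functionality
healthcheck(c::RESTController) = render(JSON, "ok")

routes() do
    get("/healthcheck", RESTController, healthcheck)
end

Bukdu.start(8080)  # left running inside a screen session on the EC2 instance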
Hey @wookay, any ideas on this?
I don't know much about EC2. Do you have any output messages?
Could you test it with the master branch?
The following is the result of adding Bukdu.jl via the repo link; the version is listed as v0.4.12 #master when I run status in the package manager.
I should note that the fault now occurs after the first request is made, whereas previously it occurred after an undetermined number of requests (more than 10-20 at least).
Thank you. Would you test it with the master branch again?
@wookay I think since the version is still the same it isn't grabbing the newest changes?
Take 0.4.13-DEV:
(v1.3) pkg> add Bukdu#master
julia> using Bukdu
julia> Bukdu.BUKDU_VERSION
v"0.4.13-DEV"
Bukdu Listening on 0.0.0.0:8080
Task (runnable) @0x00007fa9ec255600
julia> INFO: GET RESTController healthcheck 200 /healthcheck
┌ Error: error handling request
│ exception =
│ IOError: stream is closed or unusable
│ Stacktrace:
│ [1] check_open at ./stream.jl:328 [inlined]
│ [2] uv_write_async(::Sockets.TCPSocket, ::Ptr{UInt8}, ::UInt64) at ./stream.jl:961
│ [3] uv_write(::Sockets.TCPSocket, ::Ptr{UInt8}, ::UInt64) at ./stream.jl:924
│ [4] unsafe_write(::Sockets.TCPSocket, ::Ptr{UInt8}, ::UInt64) at ./stream.jl:1007
│ [5] unsafe_write at /home/ec2-user/.julia/packages/HTTP/lZVI1/src/ConnectionPool.jl:134 [inlined]
│ [6] macro expansion at ./gcutils.jl:91 [inlined]
│ [7] write at ./strings/io.jl:186 [inlined]
│ [8] writeheaders(::HTTP.ConnectionPool.Transaction{Sockets.TCPSocket}, ::HTTP.Messages.Response) at /home/ec2-user/.julia/packages/HTTP/lZVI1/src/Messages.jl:439
│ [9] startwrite(::HTTP.Streams.Stream{HTTP.Messages.Request,HTTP.ConnectionPool.Transaction{Sockets.TCPSocket}}) at /home/ec2-user/.julia/packages/HTTP/lZVI1/src/Streams.jl:85
│ [10] handle_stream(::HTTP.Streams.Stream{HTTP.Messages.Request,HTTP.ConnectionPool.Transaction{Sockets.TCPSocket}}) at /home/ec2-user/.julia/packages/Bukdu/Yel7o/src/server.jl:39
│ [11] macro expansion at /home/ec2-user/.julia/packages/HTTP/lZVI1/src/Servers.jl:360 [inlined]
│ [12] (::HTTP.Servers.var"#13#14"{typeof(Bukdu.handle_stream),HTTP.ConnectionPool.Transaction{Sockets.TCPSocket},HTTP.Streams.Stream{HTTP.Messages.Request,HTTP.ConnectionPool.Transaction{Sockets.TCPSocket}}})() at ./task.jl:333
└ @ HTTP.Servers ~/.julia/packages/HTTP/lZVI1/src/Servers.jl:364
signal (11): Segmentation fault
in expression starting at REPL[7]:0
uv_tcp_getpeername at /workspace/srcdir/libuv/src/unix/tcp.c:301
jl_tcp_getpeername at /buildworker/worker/package_linux64/build/src/jl_uv.c:704
_sockname at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Sockets/src/Sockets.jl:754
getpeername at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/Sockets/src/Sockets.jl:741 [inlined]
handle_stream at /home/ec2-user/.julia/packages/Bukdu/Yel7o/src/server.jl:33
macro expansion at /home/ec2-user/.julia/packages/HTTP/lZVI1/src/Servers.jl:360 [inlined]
#13 at ./task.jl:333
unknown function (ip: 0x7fa9ee777937)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2135 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2305
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1631 [inlined]
start_task at /buildworker/worker/package_linux64/build/src/task.c:659
unknown function (ip: 0xffffffffffffffff)
Allocations: 73556140 (Pool: 73540100; Big: 16040); GC: 74
Segmentation fault
Bukdu.BUKDU_VERSION returns the following: v"0.4.13-DEV"
To avoid calling getpeername for now, I added an enable_remote_ip option to Bukdu.start (the default is false). I hope it just works.
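So with the default, Bukdu.start skips the peer lookup entirely; turning it back on would look roughly like this (the exact keyword form is assumed from the description above):

using Bukdu

# default: enable_remote_ip = false, so handle_stream no longer calls getpeername
Bukdu.start(8080)

# opting back in to remote-IP lookup (keyword form assumed)
# Bukdu.start(8080, enable_remote_ip=true)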
I've had the API running for a few days now and haven't encountered this error yet. This is the longest it has run without a problem. I didn't set enable_remote_ip with Bukdu.start(), but it seems to be working.
Thank you!