JuliaSmoothOptimizers/AmplNLReader.jl

hang in jampl_jac

Closed this issue · 3 comments

On my machine (OS X 10.7), test.jl is hanging in jampl_jac at

jl_value_t *float64_array_type = jl_apply_array_type(jl_float64_type, 1);

The backtrace is:

#0  0x000000010d067e30 in alloc_4w ()
#1  0x000000010d055422 in jl_tuple2 ()
#2  0x000000010d05baf8 in jl_apply_array_type ()

Digging in further, I found that Julia gets stuck in this loop in add_page in gc.c:

while ((char*)v <= lim) {                 /* walk every object slot on the page */
    *pfl = v;                             /* thread this cell onto the free list */
    pfl = &v->next;
    v = (gcval_t*)((char*)v + p->osize);  /* advance by the page's object size */
}

@JeffBezanson @Keno, any idea what could lead to this? The code in ampl.jl uses the Julia C API.

@dpo, I was going to suggest this at some point anyway: the C-level code could be simplified by avoiding julia.h entirely and letting the Julia wrapper allocate any arrays. That would also make it easier to reuse array storage instead of allocating new vectors on every call. At the Julia level you could still have a function like hess that returns a tuple, but the tuple would be allocated in Julia, avoiding any potential sources of error in interacting with the Julia C interface and the GC. (It's not clear that an error in your code is causing this issue, but ruling out that possibility would greatly simplify debugging.)

Here's the output of versioninfo():

Julia Version 0.3.0-rc1+307
Commit 60c3f40* (2014-08-01 22:08 UTC)
DEBUG build
Platform Info:
  System: Darwin (x86_64-apple-darwin11.4.2)
  CPU: Intel(R) Core(TM)2 CPU         T7400  @ 2.16GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Core2)
  LAPACK: libopenblas
  LIBM: libm
  LLVM: libLLVM-3.3
dpo commented

@mlubin I'm not seeing any hangs here, but you're right, and that was on my list of things to do. Since I couldn't find any documentation, I simply followed existing examples to create tuples via the C API. A GC-related bug was fixed recently, but it's very possible there's still a bug in there. Now I wonder whether allocating arrays at the Julia level carries a performance penalty (though I agree that the maintenance benefit might outweigh it).

Actually, I can see two reasons why allocating on the Julia side would improve performance:

  1. Type inference for the values returned by jac, hess, etc. currently fails because Julia has no way of knowing that the C functions return tuples (this could be fixed by adding type assertions).
  2. Arrays allocated by Julia (as opposed to malloc) are aligned on memory boundaries, which improves performance of BLAS/SIMD operations.