Core dump on Ubuntu
DWAA1660 opened this issue · 5 comments
Error in console:
```
Extracted 'libjllama.so' to '/tmp/libjllama.so'
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGILL (0x4) at pc=0x00007fb9b4934142, pid=19985, tid=19986
#
# JRE version: Java(TM) SE Runtime Environment (21.0.1+12) (build 21.0.1+12-LTS-29)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (21.0.1+12-LTS-29, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [libjllama.so+0x23142] gpt_params::gpt_params()+0x3e2
#
# Core dump will be written. Default location: Core dumps may be processed with "/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h" (or dumping to /home/david/Documents/GitHub/untitled1/target/core.19985)
#
# An error report file with more information is saved as:
# /home/david/Documents/GitHub/untitled1/target/hs_err_pid19985.log
#
# If you would like to submit a bug report, please visit:
# https://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted (core dumped)
```
Full hs_err_pid19985.log: https://hastebin.com/share/jujavunoji.yaml
Simplest code that reproduces the crash:
```java
import de.kherud.llama.LlamaModel;

public class Main {
    public static void main(String[] args) {
        try (LlamaModel model = new LlamaModel("./causallm_7b.Q2_K.gguf")) {
        }
    }
}
```
If you need any more info, I'd be happy to provide it. I tried this on my local laptop and on my two VPSes, so it seems reproducible.
Same as #25 — can you provide your CPU information? Your attached log shows a 3rd Gen Intel, so no AVX512, right? Do you have any machines to test with where this does work?
It did not work on an AMD EPYC 7763 64-core processor or on a 3rd gen Intel i5. That's all I can test on.
Same problem here
13th Gen Intel(R) Core(TM) i9-13900K on Ubuntu 22.04:
AVX512 is not available.
```
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 183
model name      : 13th Gen Intel(R) Core(TM) i9-13900K
stepping        : 1
microcode       : 0x119
cpu MHz         : 3000.000
cache size      : 36864 KB
physical id     : 0
siblings        : 32
core id         : 0
cpu cores       : 24
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 32
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs            : spectre_v1 spectre_v2 spec_store_bypass swapgs eibrs_pbrsb
bogomips        : 5990.40
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
```
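As a quick sanity check on any affected machine, you can test the `flags` line of `/proc/cpuinfo` for `avx512` entries; if none are present, a binary compiled with AVX-512 instructions will die with SIGILL exactly as in the log above. A minimal sketch — the `flags` value below is abridged from the i9-13900K dump; on a live machine substitute `flags="$(grep -m1 '^flags' /proc/cpuinfo)"`:

```shell
# Abridged flags string from the cpuinfo dump above (assumption: a real
# run would read the live /proc/cpuinfo line instead).
flags="fma avx avx2 avx_vnni sha_ni"

# A CPU with AVX-512 advertises flags like avx512f, avx512bw, etc.
case "$flags" in
  *avx512*) echo "AVX-512 present" ;;
  *)        echo "AVX-512 absent"  ;;
esac
```

Note that the i9-13900K reports `avx2` and `avx_vnni` but no `avx512*` flags, which matches the "AVX512 is not available" observation above.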
You should be able to fix this by building with `LLAMA_NATIVE=OFF` (or by building on your target machine). It looks like that flag was added to this repo's build scripts, but the publish job failed.
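For anyone hitting this before a fixed release is published, a local build should avoid the illegal instruction, since the compiled binary then only uses instructions the build machine's CPU supports. A rough sketch, assuming the project's usual CMake workflow — the repository URL and exact steps are assumptions, so check the project README:

```shell
# Build the native library without native-CPU tuning so the resulting
# binary also runs on CPUs lacking AVX-512.
# (URL and build steps are assumptions; consult the repo README.)
git clone --recursive https://github.com/kherud/java-llama.cpp
cd java-llama.cpp
cmake -B build -DLLAMA_NATIVE=OFF   # the flag mentioned above
cmake --build build --config Release
```

Alternatively, leave `LLAMA_NATIVE` on and simply run this build on the same machine that will execute the model; then the natively tuned instructions are guaranteed to be supported.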
I just released version 3.0 and the problems should hopefully no longer occur. Feel free to re-open otherwise.