intel-analytics/ipex-llm

Docker on Windows vllm serving issue

ktjylsj opened this issue · 15 comments

I am facing an issue with the Docker environment on Windows when running vLLM serving.
I tried the start_service.sh script from the Docker image:
https://github.com/intel-analytics/ipex-llm/tree/main/docker/llm/serving/xpu/docker
The process kept getting killed when I ran it.

/usr/local/lib/python3.11/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
warn(
2024-05-15 12:51:57,577 - INFO - intel_extension_for_pytorch auto imported
INFO 05-15 12:51:58 api_server.py:258] vLLM API server version 0.3.3
INFO 05-15 12:51:58 api_server.py:259] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=[''], allowed_methods=[''], allowed_headers=['*'], api_key=None, served_model_name='/llm/models/luxia-8b-instruct-v0_1', lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], load_in_low_bit='sym_int4', model='/llm/models/luxia-8b-instruct-v0_1', tokenizer=None, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, download_dir=None, load_format='auto', dtype='float16', kv_cache_dtype='auto', max_model_len=2048, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, seed=0, swap_space=4, gpu_memory_utilization=0.75, max_num_batched_tokens=10240, max_num_seqs=12, max_paddings=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=True, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='xpu', engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 05-15 12:51:58 llm_engine.py:68] Initializing an LLM engine (v0.3.3) with config: model='/llm/models/luxia-8b-instruct-v0_1', tokenizer='/llm/models/luxia-8b-instruct-v0_1', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=xpu, seed=0)
INFO 05-15 12:51:58 attention.py:71] flash_attn is not found. Using xformers backend.
2024-05-15 12:53:55,495 - INFO - Converting the current model to sym_int4 format......
2024-05-15 12:53:55,529 - INFO - Only HuggingFace Transformers models are currently supported for further optimizations
./start_service.sh: line 17: 28 Killed python -m ipex_llm.vllm.entrypoints.openai.api_server --served-model-name $served_model_name --port 8000 --model $model --trust-remote-code --gpu-memory-utilization 0.75 --device xpu --dtype float16 --enforce-eager --load-in-low-bit sym_int4 --max-model-len 2048 --max-num-batched-tokens 10240 --max-num-seqs 12

gc-fu commented

Hi, thank you for posting this issue.

Can you run the env_check.sh script in your container so that I can collect some hardware information?

Also, after the process gets killed, can you run sudo dmesg and post the last few lines of the output? The process is killed while converting the model, so I suspect it was killed due to a lack of memory.

Can you also post your Docker container configuration?
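
For example, something like the following should surface only the OOM-related lines in the kernel log (the match strings follow the usual OOM-killer wording, so adjust them if your output differs):

sudo dmesg | grep -iE 'oom-kill|out of memory|killed process'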

Here are the dmesg results.

[ 6993.117252] [ 11627] 0 11627 1231151 433 958464 81335 0 python
[ 6993.117690] [ 11632] 0 11632 1252848 586 966656 81950 0 python
[ 6993.118123] [ 11634] 0 11634 1252848 588 966656 81948 0 python
[ 6993.118666] [ 11636] 0 11636 1252848 588 966656 81948 0 python
[ 6993.119254] [ 11638] 0 11638 1252848 588 966656 81948 0 python
[ 6993.119704] [ 11640] 0 11640 1252848 589 966656 81947 0 python
[ 6993.120150] [ 11642] 0 11642 1252848 588 966656 81948 0 python
[ 6993.120670] [ 11644] 0 11644 1252848 593 966656 81943 0 python
[ 6993.121191] [ 11646] 0 11646 1252848 588 966656 81948 0 python
[ 6993.121823] [ 11648] 0 11648 1252848 589 966656 81948 0 python
[ 6993.122343] [ 11650] 0 11650 1252848 589 966656 81948 0 python
[ 6993.122899] [ 11652] 0 11652 1252848 589 966656 81948 0 python
[ 6993.123422] [ 11654] 0 11654 1252848 590 966656 81947 0 python
[ 6993.123906] [ 11656] 0 11656 1252848 589 966656 81948 0 python
[ 6993.124440] [ 11658] 0 11658 1252848 589 966656 81948 0 python
[ 6993.124956] [ 11660] 0 11660 1252848 586 966656 81951 0 python
[ 6993.125466] [ 11662] 0 11662 1252848 586 966656 81951 0 python
[ 6993.126027] [ 11664] 0 11664 1252848 586 966656 81951 0 python
[ 6993.126511] [ 11666] 0 11666 1252848 588 966656 81949 0 python
[ 6993.126971] [ 11668] 0 11668 1252848 588 966656 81949 0 python
[ 6993.127479] [ 11670] 0 11670 1252848 587 966656 81950 0 python
[ 6993.127949] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=18d73749caae8cba162332c8da8d1268cccfc94ea387145c989159f25eb57222,mems_allowed=0,global_oom,task_memcg=/docker/18d73749caae8cba162332c8da8d1268cccfc94ea387145c989159f25eb57222,task=python,pid=11551,uid=0
[ 6993.129599] Out of memory: Killed process 11551 (python) total-vm:25431556kB, anon-rss:15201156kB, file-rss:0kB, shmem-rss:8kB, UID:0 pgtables:42172kB oom_score_adj:0

I couldn't find env_check.sh, but I found collect_env.py.
Here's its output.

root@docker-desktop:/llm/vllm# python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+cxx11.abi
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35

Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13500
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 4991.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==2.1.10+xpu
[pip3] numpy==1.26.4
[pip3] torch==2.1.0a0+cxx11.abi
[pip3] torchvision==0.16.0a0+cxx11.abi
[pip3] triton==2.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.3.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

My Docker image is:
intelanalytics/ipex-llm-serving-vllm-xpu:2.1.0-SNAPSHOT

gc-fu commented

From what I can see, the process is being killed due to OOM:

oom-kill:constraint=CONSTRAINT_NONE

Try allocating more memory to the container and see if it runs smoothly.
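
For reference, container memory can be raised with the standard Docker flags at launch time; the container name and sizes below are only illustrative placeholders, and GPU/device passthrough options are omitted here:

docker run -itd \
  --name my-vllm-serving \
  --memory=48g \
  --shm-size=16g \
  intelanalytics/ipex-llm-serving-vllm-xpu:2.1.0-SNAPSHOT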

I allocated more memory to the container, but the same thing occurred.
memory="64G", shm-size="32g"
Should I allocate even more memory to the container?

gc-fu commented

Try allocating more memory to the container and see if the error occurs again.

I set --ipc=host as well, and the error still occurs.

The system has 32 GB of RAM and an Arc A770 16 GB GPU.
I have already confirmed that the model runs with the ipex-llm benchmark.

gc-fu commented

Do you mean the ipex-llm benchmark in your Docker environment or in the Windows environment?

I tested both environments, Docker and Windows.

gc-fu commented

Can you try a small model, such as https://huggingface.co/Qwen/Qwen-1_8B-Chat?
Also, could you use Task Manager to monitor the memory usage while running the example and check whether it looks as expected?
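
For instance, the model variables in start_service.sh (the same ones referenced in the killed command above) can be pointed at the smaller checkpoint; the path below is only a placeholder for wherever the model is downloaded:

model="/llm/models/Qwen-1_8B-Chat"
served_model_name="Qwen-1_8B-Chat"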

Hi, the Qwen 1.8B model works in the Windows environment.
Thank you for your help.

One quick question:
How can I tell whether a model is too big for host memory or for the A770's memory?

gc-fu commented

It is not related to the A770's memory. It is because your host memory is a bit small.

The model is first loaded into host memory in float16 or float32 format, which requires much more memory than the sym_int4 format. The conversion itself also needs some memory.

After the conversion is done, the model is then sent to the XPU. Typically, a converted 7B model in sym_int4 format requires around 4 GB of XPU memory.
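
As a rough back-of-the-envelope check (assuming the ~8B parameter count implied by the model name): loading an 8B-parameter model in float16 needs about 8 × 10⁹ × 2 bytes ≈ 16 GB of host RAM before the conversion's own working memory is counted, which lines up with the ~15 GB anon-rss in the dmesg output above, while the same model converted to sym_int4 is only about 8 × 10⁹ × 0.5 bytes ≈ 4 GB.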

Thank you for the detailed explanation.
Well noted.