ChenRocks/UNITER

Are you going to provide code for extracting features from an image?

FightingFighting opened this issue · 7 comments

Hi,

Thank you for your great work.

Are you going to provide code for extracting features from an image? The current code is not easy to use on other datasets.

Thank you!

The extraction code is inside this Docker container. You should be able to use this with scripts/extract_imgfeat.sh

https://hub.docker.com/r/chenrocks/butd-caffe/tags?page=1&ordering=last_updated
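
For reference, here is a minimal sketch of invoking the container directly; the image tag and the /img and /output mount points are assumptions on my side, so please check scripts/extract_imgfeat.sh for the exact mounts and entrypoint:

IMG_DIR=/path/to/images    # host folder with the input images (assumed layout)
OUT_DIR=/path/to/features  # host folder where the extracted features will be written
docker run --gpus all --ipc=host --rm \
    -v "$IMG_DIR":/img -v "$OUT_DIR":/output \
    chenrocks/butd-caffe:<tag>   # pick a tag from the Docker Hub page linked above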

When I use this docker image, an error happens:

E0407 17:29:21.355960   120 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
E0407 17:29:21.356129   120 common.cpp:121] Cannot create Curand generator. Curand won't be available.
F0407 17:29:21.356248   120 common.cpp:152] Check failed: error == cudaSuccess (35 vs. 0)  CUDA driver version is insufficient for CUDA runtime version

OS: CentOS 7.6
GPU: Tesla V100
GPU driver: 418.87.00
CUDA version: 10.1

I also tried CUDA 11.1 with driver 455.32, but the container still failed to run.

Has anyone else run into this problem, or does anyone have any advice? Thanks.


Changing the docker run command in the script solved this:

Before:
docker run --gpus '"'device=$CUDA_VISIBLE_DEVICES'"' --ipc=host --rm \ ...

After:
docker run --gpus all --ipc=host --rm \ ...
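
If it still fails after that change, one quick sanity check is to confirm that Docker can see the GPU at all by running nvidia-smi from a stock CUDA image (the 10.1-base tag here is just an example, use one that matches your driver):

docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi   # should list the V100 and the driver version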

I ran into the following out-of-memory error when trying to run the feature extraction code from the docker container (docker run --gpus all --ipc=host --rm) on a folder with only 10000 images, on a large GPU (>12 GB memory).

F0607 00:52:09.333093 122 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory

Do you have any suggestions for a quick fix? I don't have a better idea than splitting the images into separate folders, and that seems awkward.
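
If splitting is the only workaround, a small shell sketch along these lines could automate it (the chunk size, file extension, and folder layout are assumptions to adapt):

IMG_DIR=/path/to/images   # the folder that is too large to process in one pass (assumed path)
CHUNK=2000                # images per chunk; tune this until a chunk fits in GPU memory
i=0
for f in "$IMG_DIR"/*.jpg; do
    d="$IMG_DIR/chunk_$(( i / CHUNK ))"   # chunk_0, chunk_1, ...
    mkdir -p "$d"
    mv "$f" "$d/"
    i=$(( i + 1 ))
done
# then run the extraction container once per chunk_* folder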

@JaredFern Thank you very much.

@suzyahyah maybe something is wrong with your CUDA setup. Two possible reasons: 1) the GPU is out of memory; 2) the CUDA and GPU driver versions do not match.
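
Both can be checked on the host with nvidia-smi (assuming it is on the PATH), for example:

nvidia-smi                                                    # shows the driver version and the highest CUDA version it supports
nvidia-smi --query-gpu=memory.used,memory.total --format=csv  # shows whether another process is already holding GPU memory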

@JaredFern hi, do you know how to generate the image features from ground truth? It seems this docker image can only generate features for detected regions.