showlab/VisorGPT

The original training data file

Opened this issue · 17 comments

Hi Sierkinhane,
Very nice work. Could you provide the original training data file so we can understand how your data is organized? And how is it processed into visorgpt_dagger_train_seq.bin?

Thanks.

Hello, I will clean the code and prepare the instructions in the coming days. Maybe a week.

Great, looking forward to your update.

Hi, sorry for the late reply; I have been too busy these days. I would like to first share the preprocessed .txt file of COCO boxes here, and you can use the script below to process it into a .pt file:

cd ./train
python3 preprocess.py --corpus_path train_box.txt \
                      --vocab_path models/google_uncased_en_coord_vocab.txt \
                      --dataset_path train_seq.pt --processes_num 8 \
                      --seq_length 1024 --tgt_seq_length 1024 --data_processor lm

I will try to provide the code for converting box/mask/keypoint annotations from .json to sequence .txt files in the coming days. :)
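
In the meantime, here is a minimal sketch of what that .json-to-.txt conversion could look like for COCO boxes. It assumes the single-object sequence layout quoted in a question further down this thread ("box; object centric; large; 1; 0; <class>; [ xmin .. ymin .. xmax .. ymax .. ]"); the header fields are copied verbatim from that example, and the official converter may fill them differently.

import json

# Minimal sketch: COCO box annotations (.json) -> sequence lines (.txt).
with open("instances_train2017.json") as f:  # standard COCO annotation file
    coco = json.load(f)

id2name = {c["id"]: c["name"] for c in coco["categories"]}

# Group annotations by image.
by_image = {}
for ann in coco["annotations"]:
    by_image.setdefault(ann["image_id"], []).append(ann)

with open("train_box.txt", "w") as out:
    for anns in by_image.values():
        if len(anns) != 1:
            continue  # this sketch only covers the object-centric, single-object case
        ann = anns[0]
        x, y, w, h = ann["bbox"]  # COCO stores boxes as [x, y, width, height]
        name = id2name[ann["category_id"]]
        out.write(
            f"box; object centric; large; 1; 0; {name}; "
            f"[ xmin {int(x)} ymin {int(y)} xmax {int(x + w)} ymax {int(y + h)} ]\n"
        )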

Hi, thanks @Sierkinhane for showing how to create the .pt file from the .txt :)

Hello, if I want to train an object-centric bounding-box model, should the corpus content look like "box; object centric; large; 1; 0; great white shark; [ xmin 95 ymin 66 xmax 510 ymax 310 ]", or like "box; object centric; large; 1; 0; [ great white shark xmin 95 ymin 66 xmax 510 ymax 310 ]"?

Hi, the second prompt is for continuous generation or scene completion for multiple objects. If only one object is involved in an image, the first prompt is sufficient.

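To make the difference concrete, here is a small illustrative snippet building both prompt styles from the same box (field values and spacing copied from the examples quoted above; they are not an official specification):

name, box = "great white shark", (95, 66, 510, 310)
xmin, ymin, xmax, ymax = box

# Object-centric prompt: the class name sits in the header; one object per image.
single = f"box; object centric; large; 1; 0; {name}; [ xmin {xmin} ymin {ymin} xmax {xmax} ymax {ymax} ]"

# Continuation / scene-completion prompt: the class name sits inside the
# brackets, so further "[ name ... ]" groups can follow for more objects.
multi = f"box; object centric; large; 1; 0; [ {name} xmin {xmin} ymin {ymin} xmax {xmax} ymax {ymax} ]"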

Thank you, I really appreciate your reply

Hello! Thank you so much for your work! Do you have any plans to make the keypoint annotation .txt files public soon?

Yes. I'm quite busy these months, but I plan to update the repository with the complete files next month. The .txt files of COCO keypoints and CrowdPose are available here and here.

Thank you very much for your reply! May I ask whether they are both processed with preprocess.py? Also, the two links you provided both seem to point to crowdpose.txt files :)

Hi, I have updated the link. You can merge these txt files into one file and process it using preprocess.py.
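
For reference, the one-off merge can be as simple as concatenating the files before running preprocess.py; the file names below are placeholders for the downloaded .txt files:

# Concatenate the two keypoint corpora into a single training corpus.
with open("train_keypoint.txt", "w") as out:
    for path in ["cocokeypoints.txt", "crowdpose.txt"]:  # placeholder names
        with open(path) as f:
            out.write(f.read())

Then run the same preprocess.py command shown earlier with --corpus_path train_keypoint.txt.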

Thank you very much for your prompt reply! There's a question I'd like to ask. I see that the keypoint data contains two types of sequences, "person, person; [ a" as well as "[ person a...], [ person a...]". Does this affect the effectiveness of the training? I ask because in the demo, the seq_prompt is in the "[ person" format.

They are two kinds of prompts and will not affect the modeling much. You can refer to the paper for details.

Thank you very much, I have re-read the paper. However, I have now trained and saved the file "visorgpt_dagger_train_seq.bin-200000" (430M). How do I convert it into a file of the type "visorgpt_dagger_train_seq.bin/200000/mp_rank_00_model_states.pt"?

It seems that you didn't use the DeepSpeed strategy (the mp_rank_00_model_states.pt layout is produced by DeepSpeed checkpoints). You can try setting --load_model_path to the .bin file instead.
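
For anyone hitting the same point: assuming the trainer saved the checkpoint with torch.save, which the plain .bin file suggests, it is an ordinary state dict and can be inspected directly; no conversion to the DeepSpeed layout is needed.

import torch

# Sanity-check a non-DeepSpeed checkpoint: it is a plain mapping of
# parameter names to tensors, so it can be passed to --load_model_path as-is.
state = torch.load("visorgpt_dagger_train_seq.bin-200000", map_location="cpu")
print(len(state), "parameters")
print(next(iter(state)))  # name of the first parameter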

OK! Your suggestion works! Looking forward to the complete inference code and your subsequent exciting work :)

Great! Thank you.