jbwang1997/OBBDetection

About dataset

Opened this issue · 10 comments

Hi, thanks for your work!!!
I didn't find detailed instructions for dataset preparation in the README. If I want to train on my own dataset (PNG images and JSON annotations), is the data preparation the same as in MMDetection?

The data preparation is almost the same as in MMDetection. You can refer to custom.py for the data structure.

But you need to pay attention to some details.

  • data['ann']['bboxes'] and data['ann']['bboxes_ignore'] should be one of the bbox types defined in BboxToolkit; you can find the bbox definitions in Usage.md (note: the angle of an OBB is counterclockwise).
  • The pipelines of the oriented detectors differ from the original ones; refer to datasets for details. RandomRotate needs a cls key in results, so you may need to add your classes to results, like this.

In a future update, I will write a new obb_custom.py for personal datasets.
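As a rough sketch of the two points above (the file name, class names, and box values below are hypothetical, not taken from the repo), a custom annotation entry using BboxToolkit's 'obb' form (cx, cy, w, h, theta) plus the extra cls key might look like:

```python
import numpy as np

# Hypothetical annotation dict in the style CustomDataset expects.
# Boxes use the 'obb' form (cx, cy, w, h, theta), with theta
# measured counterclockwise as noted above.
data_info = {
    'filename': 'images/0001.png',
    'width': 1024,
    'height': 1024,
    'ann': {
        # shape (n, 5): one (cx, cy, w, h, theta) row per object
        'bboxes': np.array([[512.0, 300.0, 80.0, 40.0, 0.3]],
                           dtype=np.float32),
        'labels': np.array([0], dtype=np.int64),
        # shape (0, 5) when there are no ignored boxes
        'bboxes_ignore': np.zeros((0, 5), dtype=np.float32),
    },
}

# RandomRotate looks for the class names under a 'cls' key in
# results, so the dataset should inject them before the pipeline runs:
results = {'img_info': data_info, 'ann_info': data_info['ann']}
results['cls'] = ['plane', 'ship']  # replace with your own classes
```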

I am eager to study the new obb_custom.py; thanks for publishing it as soon as possible. I want to train on my own dataset (JPG images and XML annotations); can you give more specific guidance? Thanks!

Do your images need to be split like the DOTA dataset?
Could you provide the structure of your XMLs?

Could you provide the structure of your JSON and tell me whether your images need to be split like the DOTA dataset?

My XML annotation is like this:
xml.txt

And my images do not need to be split.

Could you give some advice? Thanks!

Your annotations are quite similar to the VOC dataset's.
I recommend you refer to xml_style.py and load the rotated box data in data_info.
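A hedged sketch of that advice, assuming a roLabelImg-style schema where each object carries a robndbox with cx, cy, w, h, and angle tags (the tag names and class list are assumptions; adapt them to your own XML):

```python
import xml.etree.ElementTree as ET
import numpy as np

CLASSES = ('plane', 'ship')  # replace with your own class names

def parse_rotated_xml(xml_str):
    """Parse one annotation file into an ann dict of rotated boxes.

    Tag names (object/name/robndbox/cx/cy/w/h/angle) are assumed;
    match them to your actual XML structure.
    """
    root = ET.fromstring(xml_str)
    bboxes, labels = [], []
    for obj in root.iter('object'):
        name = obj.find('name').text
        rb = obj.find('robndbox')
        bboxes.append([float(rb.find(t).text)
                       for t in ('cx', 'cy', 'w', 'h', 'angle')])
        labels.append(CLASSES.index(name))
    return {
        'bboxes': np.array(bboxes, dtype=np.float32).reshape(-1, 5),
        'labels': np.array(labels, dtype=np.int64),
        'bboxes_ignore': np.zeros((0, 5), dtype=np.float32),
    }

sample = """<annotation>
  <object>
    <name>ship</name>
    <robndbox><cx>100</cx><cy>50</cy><w>30</w><h>10</h>
    <angle>0.5</angle></robndbox>
  </object>
</annotation>"""
ann = parse_rotated_xml(sample)
```

The resulting dict can then be returned from a dataset's load_annotations / get_ann_info hooks, mirroring how xml_style.py builds its data_info entries.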

Thanks very much!

Have you solved this problem? I am also planning to train on my own XML dataset now; could you explain specifically how to do it?

How should the [xml_style.py] file be used?