Fine-tuning TEED Model with Custom Dataset for Diverse Object Outlines
GAB3-13 opened this issue · 1 comments
Hey, I am planning on fine-tuning this model with a variety of subjects such as faces, animals, and logos. I decided on this model because it closely aligns with the desired output.
My dataset consists of 100 images, each with a resolution of 2500x2500 pixels. Given the specific nature of my project, these images are significantly larger and potentially more complex than those in the BIPED dataset originally used to train the TEED model.
Challenges & Questions
Data Augmentation & Preparation: I've followed the BIPED dataset's augmentation process for my dataset for the most part. Considering the size and diversity of my images, are there any specific augmentation techniques or preprocessing steps you would recommend?
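As a starting point for the augmentation question, here is a minimal sketch (not the repo's actual augmentation code) of applying the same random geometric transform to an image and its edge-map label, in the spirit of the BIPED rotation/flip scheme — the key point being that image and label must be transformed identically:

```python
import numpy as np

def augment_pair(image, edge_map, rng):
    """Apply the same random flip/rotation to an image and its edge map.

    Geometric transforms must hit both arrays identically, otherwise the
    edge labels no longer line up with the pixels.
    """
    k = int(rng.integers(0, 4))            # rotate by 0/90/180/270 degrees
    image, edge_map = np.rot90(image, k), np.rot90(edge_map, k)
    if rng.random() < 0.5:                 # random horizontal flip
        image, edge_map = np.fliplr(image), np.fliplr(edge_map)
    return image, edge_map

rng = np.random.default_rng(0)
img = np.zeros((256, 256, 3), dtype=np.uint8)   # dummy image
gt = np.zeros((256, 256), dtype=np.uint8)       # dummy edge map
aug_img, aug_gt = augment_pair(img, gt, rng)
```

Color jitter or blur can be added on the image only, since they do not move edges.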
Model Fine-tuning: Given the unique size (2500x2500) of my dataset's images and their content diversity, I will need to make adjustments to the training procedure outlined for the BIPED dataset. Could you provide guidance or recommendations on optimizing the training process for a dataset with such characteristics? Specifically, I'm interested in any modifications to the training parameters, input size handling, or batch processing to accommodate the high-resolution images.
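One common way to handle 2500x2500 inputs without downscaling away thin edges is to train on overlapping crops. The following is a hedged sketch (patch and stride sizes are illustrative, not values from the TEED training code) of splitting a large image into overlapping tiles whose predictions can later be averaged in the overlap regions:

```python
import numpy as np

def tile_image(image, patch=512, stride=384):
    """Split a large image into overlapping square patches.

    The overlap (patch - stride pixels) lets you blend predictions at
    patch borders. Note the right/bottom remainder is not covered here;
    a real pipeline would add one extra crop aligned to each far edge.
    """
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return patches, coords

img = np.zeros((2500, 2500, 3), dtype=np.uint8)  # dummy full-resolution image
patches, coords = tile_image(img)
```

With these illustrative sizes, a 2500x2500 image yields a 6x6 grid of 512x512 patches, each of which fits a normal training batch.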
Loss Functions: The current loss functions used for TEED seem like they will work for my task without modifications. However, I'm open to suggestions if you think adjustments or additional loss functions could enhance the model's performance on my dataset.
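For context on what an edge-detection loss typically has to handle: edge pixels are a tiny fraction of the image, so the binary cross-entropy is usually class-balanced (HED-style). This is a generic numpy sketch of that idea, not TEED's exact loss implementation:

```python
import numpy as np

def weighted_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy for edge maps.

    Edge pixels are rare, so they are weighted by the share of non-edge
    pixels (and vice versa); otherwise the model collapses to predicting
    "no edge" everywhere.
    """
    pos = target.sum()
    w_pos = (target.size - pos) / target.size   # weight on edge pixels
    w_neg = pos / target.size                   # weight on background pixels
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(w_pos * target * np.log(pred)
             + w_neg * (1 - target) * np.log(1 - pred))
    return loss.mean()

target = np.zeros((64, 64)); target[10, :] = 1.0   # one thin horizontal edge
pred = np.full((64, 64), 0.5)                      # uninformative prediction
loss = weighted_bce(pred, target)
```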
Thank you for your time, hope to hear back!
Hey, I hope you have solved your problem already; I am a little late here. Below are my answers.
Hey, I am planning on fine-tuning this model with a variety of subjects such as faces, animals, and logos. I decided on this model because it closely aligns with the desired output.
My dataset consists of 100 images, each with a resolution of 2500x2500 pixels. Given the specific nature of my project, these images are significantly larger and potentially more complex than those in the BIPED dataset originally used to train the TEED model.
Challenges & Questions Data Augmentation & Preparation: I've followed the BIPED dataset's augmentation process for my dataset for the most part. Considering the size and diversity of my images, are there any specific augmentation techniques or preprocessing steps you would recommend?
Sorry, before I answer this, I need to see the edge maps TEED gives you on your data. What is wrong with the output of the current checkpoint?
Model Fine-tuning: Given the unique size (2500x2500) of my dataset's images and their content diversity, I will need to make adjustments to the training procedure outlined for the BIPED dataset. Could you provide guidance or recommendations on optimizing the training process for a dataset with such characteristics? Specifically, I'm interested in any modifications to the training parameters, input size handling, or batch processing to accommodate the high-resolution images.
I think fine-tuning is not the problem; you can set whatever configuration you want and start with transfer learning.
Loss Functions: The current loss functions used for TEED seem like they will work for my task without modifications. However, I'm open to suggestions if you think adjustments or additional loss functions could enhance the model's performance on my dataset.
I think it should work just fine. Maybe you can train with an input size of 720x720. It is a good idea to train with different input sizes; I have seen the differences it makes.
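Following the 720x720 suggestion, the 2500x2500 images would be downscaled before being fed to the network. This sketch uses nearest-neighbor index mapping just to show the shape handling; an actual pipeline would use bilinear interpolation (e.g. `cv2.resize`) to avoid aliasing thin edges:

```python
import numpy as np

def resize_nearest(image, size):
    """Nearest-neighbor square resize via index mapping.

    Illustrative only: a real pipeline should use bilinear resampling,
    which preserves thin edge structures much better.
    """
    h, w = image.shape[:2]
    ys = np.arange(size) * h // size   # source row for each output row
    xs = np.arange(size) * w // size   # source column for each output column
    return image[ys][:, xs]

img = np.zeros((2500, 2500, 3), dtype=np.uint8)  # dummy full-resolution image
small = resize_nearest(img, 720)
```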
Thank you for your time, hope to hear back!
Cheers, Xavier.