yunshengtian/Assemble-Them-All

Would this multi-part-assembly/disassembly method work with other types of datasets (e.g. the Breaking Bad dataset)?

ttsesm opened this issue · 19 comments

ttsesm commented

Hi @yunshengtian,

Congrats on this nice work, and thank you for making it public. I am taking my first steps in the multi-part-assembly research field and I am happy to have discovered your work.

Recently I came across the Breaking Bad dataset and benchmark, which I am not sure whether you are aware of, and I have been playing with it. In principle, it is a dataset that the authors made for object fracturing and assembly. I have been looking at their dataset for quite some time now, and I am curious whether your approach would be able to handle such fracturing and assembly/disassembly models.

I would appreciate your feedback and whether you have already tried something similar or not.

Thanks.

lylwy commented

I saw this question of yours, so I'll discuss it here.
If I remember correctly, the author obtains the assembly information via disassembly2assembly, and since I've never run the project I'm not sure whether its final output is what you want. I haven't tried running this method on the dataset you mentioned, so I can't give good advice about it.

ttsesm commented

Sure, thanks. I've installed the project and played with it a bit. The simulator is quite nice. So now I am trying to see how, and whether it is possible, to adapt it to the aforementioned dataset. It is not clear, though, what the translations in the .json files correspond to. Also, in the joint_assembly_rotation subset, shouldn't the rotation transformation be there as well? It would be nice if @yunshengtian could elaborate on this.

In principle, with the dataset that I posted, if I manage to create these .json files for the corresponding objects, then this work should be applicable as well. The other question would then be whether it is possible to obtain the R|t transformation matrix from the path planning.

yunshengtian commented

Yes, our code works with all sorts of assembly models, including the Breaking Bad dataset, though it may require some pre-processing of the data to adapt it to our code. I will add a tutorial on this soon. But to clarify your questions before that:

It is not clear, though, what the translations in the .json files correspond to.

The assembly obj meshes are expected to be centered at the origin (0,0,0), so translation.json specifies where each mesh should be in its assembled state.
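For example, applying them could look roughly like the sketch below (using trimesh purely for illustration; the part file names and json keys are assumptions, so check them against the actual data):

import json
import trimesh

# translation.json assumed to map part names to 3D offsets: {"0": [x, y, z], "1": [x, y, z], ...}
with open('translation.json') as f:
    translations = json.load(f)

parts = []
for part_id, offset in translations.items():
    mesh = trimesh.load(f'{part_id}.obj')  # mesh stored centered at the origin
    mesh.apply_translation(offset)         # move it to its assembled position
    parts.append(mesh)

trimesh.Scene(parts).show()  # view the assembled state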

Also, in the joint_assembly_rotation subset, shouldn't the rotation transformation be there as well?

It might be clearer if you understand how we created this subset: we randomly rotated the whole assembly, then translated each rotated part according to its center of mass, and finally stored all the translations in a single json file. That is why the rotation transformation is not needed here, given this particular way of creating the dataset. To be more general, though, a full transformation matrix could replace the current translation json, which would require some more engineering effort in the code.
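Roughly, that generation step looks like the sketch below (simplified, not our exact script; the two-part naming 0.obj/1.obj is just an assumption):

import json
import trimesh

# One random rotation applied consistently to the whole assembly
R = trimesh.transformations.random_rotation_matrix()

translations = {}
for part_id in ['0', '1']:
    mesh = trimesh.load(f'{part_id}.obj')
    mesh.apply_transform(R)               # rotate the whole assembly
    com = mesh.center_mass                # where this part sits in the assembled state
    translations[part_id] = com.tolist()
    mesh.apply_translation(-com)          # keep the stored mesh centered at the origin
    mesh.export(f'{part_id}_rotated.obj')

with open('translation.json', 'w') as f:
    json.dump(translations, f, indent=2)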

whether it is possible to obtain the R|t transformation matrix from the path planning.

Yes, please refer to this line where we get the planned path, represented as a list of states (either 3D or 6D vectors, depending on whether you enable rotational DoFs in the planning).
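For instance, a state could be turned into a 4x4 R|t matrix along these lines (a sketch only; the exact rotation parameterization should be checked in the code, the xyz Euler convention below is just an assumption):

import numpy as np
from scipy.spatial.transform import Rotation

def state_to_matrix(state):
    # 3D state: translation only; 6D state: translation + rotation
    T = np.eye(4)
    T[:3, 3] = state[:3]
    if len(state) == 6:
        T[:3, :3] = Rotation.from_euler('xyz', state[3:]).as_matrix()
    return T

# e.g. convert every state of a planned path into a transformation matrix
path = [np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
        np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.2])]
matrices = [state_to_matrix(s) for s in path]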

ttsesm commented

@yunshengtian thanks a lot for the elaboration ;-).

Yes, our code works with all sorts of assembly models, including the Breaking Bad dataset, though it may require some pre-processing of the data to adapt it to our code. I will add a tutorial on this soon.

Perfect, that would be great. I am looking forward to such a demo; I think it would give a better understanding of how this work could be used with different datasets.

The assembly obj meshes are expected to be centered at the origin (0,0,0), so translation.json specifies where each mesh should be in its assembled state.

Ok, I see.

It might be clearer if you understand how we created this subset: we randomly rotated the whole assembly, then translated each rotated part according to its center of mass, and finally stored all the translations in a single json file. That is why the rotation transformation is not needed here, given this particular way of creating the dataset. To be more general, though, a full transformation matrix could replace the current translation json, which would require some more engineering effort in the code.

Ok, I see your point, though I think representing the transformation of the pieces with respect to the (0,0,0) origin through a full R|t transformation matrix would be a clearer representation for users.

Yes, please refer to this line where we get the planned path, represented as a list of states (either 3D or 6D vectors, depending on whether you enable rotational DoFs in the planning).

Interesting. To be honest, I am not familiar with path planning; I have heard the term but never happened to use it in practice. That's why I am trying to figure out how it can be used for extracting different variables.

Another question I am trying to clarify is whether with this approach someone can solve the assembly task for an unknown/unseen object. Meaning that you do not provide the ground-truth assembled positions of the object's pieces, and, based on the path plannings extracted from other objects of the dataset, the algorithm tries to guess the path planning for the pieces of the new object. So, in principle, how does the algorithm generalize? Because, as I understand it, you always need to provide the final assembly as input. This could actually be a really nice research area to investigate.

yunshengtian commented

whether with this approach someone can solve the assembly task for an unknown/unseen object

This also sounds interesting to me, and I believe it is challenging as a research topic (if we are thinking about generalization to different assemblies and many parts). We have a relevant paper https://arxiv.org/abs/2111.12772 that focuses on such prediction of assembled states, but scaling to many parts is still hard.

ttsesm commented

Yes, I've seen that work as well. Actually, I discovered your work here through that paper. I want to try that work as well, though I would need to make some changes to the dataset, since it accepts only B-Rep input as I understood.

Some more questions that came to me after playing a bit more with the code and reading the paper:

  1. Can you also explain a bit what the values inside the planned path represent? For example, I used the --save-dir and --n-save-state 5 parameters, i.e.:
python examples/run_joint_plan.py --planner bfs --dir joint_assembly --id 00007 --render --save-dir results --n-save-state 5

This gave me 5 .npy files of 4x4 values like the following:
[screenshot of one of the saved 4x4 matrices]

So what do these values correspond to? Is it a transformation matrix?

  2. Also, what does --n-save-state correspond to: the first N states of the path plan, or is the whole path plan represented by N sub-states?

  3. In the paper you mention that for more complex cases, e.g. zig-zag paths, BFS might be less efficient, and you suggest Monte-Carlo Tree Search (MCTS). Just to confirm, you do not currently have this search implemented in the code, right? Because the two options that I can see are BFS and BK-RRT.

  4. Since the translation.json file represents the positions where the assembled pieces should be, is it used only for inference and for obtaining the error, or does it have another use as well?

ttsesm commented

@yunshengtian any comment on my questions above?

yunshengtian commented

Hey there, I'm back. Sorry for the delay.

  1. --save-dir is the directory to save the assembly motions (stored as .npy matrices) and --n-save-state is the number of states to save per path. Imagine you have a path to disassemble a part: it is your choice how many states along that path to save, i.e. whether you want them denser or coarser. And right, that is a transformation matrix.
  2. The whole path is interpolated uniformly down to --n-save-state states. E.g. the original path may contain 200 states, but you can choose to save only 10 of them (see the sketch after this list).
  3. No, I don't have MCTS implemented. It would be interesting to explore.
  4. I am not sure what you mean, could you elaborate a bit more?
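Conceptually, picking which states to save works like the sketch below (a simplified subsampling version, not the exact interpolation code):

import numpy as np

def subsample_path(path, n_save_state):
    # Spread n_save_state indices uniformly over the whole path and keep those states
    idx = np.linspace(0, len(path) - 1, n_save_state).round().astype(int)
    return [path[i] for i in idx]

path = list(range(200))           # e.g. the planner returned 200 states
saved = subsample_path(path, 10)  # keep only 10 of them, evenly spaced

Each saved state then ends up as a 4x4 transformation matrix in its own .npy file under --save-dir.
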
ttsesm commented

Thanks @yunshengtian for the feedback, and no worries about the delay. For 1, 2, and 3 everything seems clear now.

Regarding 4 and the translation.json files: you told me that they correspond to where the meshes should be in their assembled state with respect to the initial position (0,0,0). So, as I understand it, the numbers correspond to the translation of each piece from the (0,0,0) position to its final assembled position, right? Then, I am trying to understand where this translation info from the .json file is used and why you need it. Are you using it as the ground-truth position? Is it necessary to have it beforehand or not? I hope this is clearer to you.

Moreover, I've tried an example from the joint_assembly_rotation but it doesn't seem to save the states for some reason. The command I've used is:

python examples/run_joint_plan.py --planner bfs --dir joint_assembly_rotation/general --id 06397 --rotation --render --save-dir results --record-dir results --n-save-state 6

Any idea why?

Thanks.

yunshengtian commented

Right, I am using this as the ground-truth position. It is necessary to have it given the current code, but I will look into this soon and see if things can be cleaned up.

The translation.json is used here and here to translate parts to their ground truth assembled states.

Moreover, I've tried an example from the joint_assembly_rotation but it doesn't seem to save the states for some reason.

Have you observed this issue with other datasets as well, or only with joint_assembly_rotation? By the way, I think --record-dir and --save-dir have to be different, since they save different things.

ttsesm commented

Ok, thanks for the clarification regarding the .json files.

No; for the joint_assembly examples that I've tested, it works and saves the states without issues. And yes, you can put the record and save dirs in different paths, but with the same path it also works fine. On your PC, does the command above work fine? Did you test it?

yunshengtian commented

I found the reason. The states/path are not saved because the planner fails to find a feasible solution. In the code, the returned path is None, so nothing is saved. You can double-check the output gif, which will probably tell you that the disassembly is unsuccessful. Maybe I can add some terminal output to make this more intuitive.
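In other words, the saving step behaves roughly like this sketch (simplified names, not the actual code):

import os
import numpy as np

def maybe_save(states, save_dir):
    # When planning fails, the returned path is None and nothing gets written
    if states is None:
        print('Planning failed: no feasible disassembly path found; nothing saved.')
        return
    os.makedirs(save_dir, exist_ok=True)
    for k, state in enumerate(states):
        np.save(os.path.join(save_dir, f'{k}.npy'), state)

maybe_save(None, 'results')  # prints the failure message and writes nothing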

ttsesm commented

I found the reason. The states/path are not saved because the planner fails to find a feasible solution. In the code, the returned path is None, so nothing is saved. You can double-check the output gif, which will probably tell you that the disassembly is unsuccessful. Maybe I can add some terminal output to make this more intuitive.

Ok, I see. Well, the created gif for the corresponding example is the one below:

[gif showing the unsuccessful disassembly attempt for example 06397]
Is the reason for the failure something specific that can be addressed through the input parameters, or does it just fail for this particular object?

yunshengtian commented

This can be addressed (at least I observed success on this), but it is not guaranteed:

  1. Try fixing the longer object and moving the shorter one instead;
  2. Try with different initial orientations of the meshes;
  3. Let it run longer.
ttsesm commented

This can be addressed (at least I observed success on this), but it is not guaranteed:

1. Try fixing the longer object and moving the shorter one instead;

I guess this is done by swapping the --move-id and --still-ids parameter values, right?

2. Try with different initial orientations of the meshes;

Is this done by setting the values in the .json file or by re-rotating the meshes?

3. Let it run longer.

ok

yunshengtian commented

They are all correct 👍

ttsesm commented

Thanks

yunshengtian commented

Hi, as promised, I have added some brief instructions and a pre-processing script for applying the algorithm to your custom meshes. See here.

ttsesm commented

Thanks @yunshengtian!!!
I will test it asap ;-)