Build error when using ament_python_install_package() and generated interfaces
jtbandes opened this issue · 24 comments
Originally posted by @jacobperron in #131 (comment):
This change [#131] prevents a package from building if it both generates interfaces and installs a Python package. For example, I have a package that is not longer building since it does something like this:
rosidl_generate_interfaces(${PROJECT_NAME}_msgs ...)
ament_python_install_package(${PROJECT_NAME})

I see an error like:
CMake Error at /home/jacob/ros2-linux/share/ament_cmake_python/cmake/ament_python_install_package.cmake:108 (add_custom_target): add_custom_target cannot create target "ament_cmake_python_symlink_my_package" because another target with the same name already exists. The existing target is a custom target created in source directory ...
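For context, a minimal CMakeLists.txt along these lines seems to be enough to trigger the error (the package and message names below are placeholders):

```cmake
cmake_minimum_required(VERSION 3.5)
project(my_package)

find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(rosidl_default_generators REQUIRED)

# Generates the interfaces; since #131 this also sets up installation of the
# generated Python package for this project.
rosidl_generate_interfaces(${PROJECT_NAME}_msgs "msg/Example.msg")

# Installs the hand-written Python package with the same project name, so the
# ament_cmake_python_symlink_<project> target ends up created twice -> CMake error.
ament_python_install_package(${PROJECT_NAME})

ament_package()
```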
I'm running into this issue in rosbridge_suite as well when building for Rolling: RobotWebTools/rosbridge_suite#581
One thing I don't understand is why this error appears for rosapi, but not rosbridge_library.
I'm a CMake noob, but would it be possible to simply pass a different, unique target name on this line instead of just ${PROJECT_NAME}?
(Meta-question: is there anything we could have done in either the rosbridge_suite repo or this repo to catch this kind of failure sooner, and not have to wait for another Rolling release to incorporate a fix into rosbridge_suite? Asking because I'm still learning about the release process, having recently taken on some maintenance of rosbridge_suite for ROS 2 compatibility.)
This issue is blocking packaging of rosbridge_suite for rolling (RobotWebTools/rosbridge_suite#661). @clalancette @jacobperron Any ideas how the issue can be resolved? Has any progress been made on it yet?
I would say the easiest workaround is to split the Python code and the message generation into two separate packages. You generally want to do that anyway; a downstream client may want to communicate using your custom messages, but may not want/need the other code that comes along with it.
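Concretely, the split could look roughly like this (package and message names are placeholders; boilerplate like cmake_minimum_required is omitted):

```cmake
# my_package_msgs/CMakeLists.txt -- interfaces only
# (its package.xml declares <member_of_group>rosidl_interface_packages</member_of_group>)
project(my_package_msgs)
find_package(ament_cmake REQUIRED)
find_package(rosidl_default_generators REQUIRED)
rosidl_generate_interfaces(${PROJECT_NAME} "msg/Example.msg")
ament_package()

# my_package/CMakeLists.txt -- Python code only, depends on my_package_msgs
project(my_package)
find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
ament_python_install_package(${PROJECT_NAME})
ament_package()
```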
@jtbandes This dropped off my radar, sorry for the late reply.
One thing I don't understand is why this error appears for rosapi, but not rosbridge_library.
I would expect the error to appear for both packages. I'm guessing it has something to do with the package.xml; there's a discrepancy between the package.xml of rosbridge_library and rosapi. Notably, rosbridge_library does not declare itself as an interface package (i.e. with <member_of_group>rosidl_interface_packages</member_of_group>). This is just a guess.
I'm a CMake noob, but would it be possible to simply pass a different, unique target name on this line instead of just ${PROJECT_NAME}?
I think this is doable. Though, since #131, I think the main issue is that we now risk install collisions between rosidl_generate_interfaces and your custom package. @hidmic is this accurate?
Any ideas how the issue can be resolved? Has any progress been made on it yet?
I'm not really sure how to proceed. Maybe we provide an option for users to install multiple Python packages with the same name? We could try to detect actual install collisions and error in that case. I don't think anyone has looked into this or proposed other options. I worked around the issue originally by creating a separate package for my interfaces 🤷
is there anything we could have done in either the rosbridge_suite repo or this repo to catch this kind of failure sooner, and not have to wait for another Rolling release to incorporate a fix into rosbridge_suite
I think this is what Rolling is meant for: allowing maintainers to build/test against the latest changes and fix things in preparation for the next ROS release. You could set up a CI job for your repo that builds against Rolling. I suggest using our Debian testing repository to avoid having to wait for the next Rolling sync (instructions here).
Another workaround to consider is renaming your Python package such that it does not match the package name.
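For example, if the Python sources live in a directory with a different name (the _py suffix below is just a placeholder), something like this avoids the clash:

```cmake
# Install the Python code under a module name that does not collide with the
# generated interface package (which uses the project name).
ament_python_install_package(${PROJECT_NAME}_py
  PACKAGE_DIR ${PROJECT_NAME}_py
)
```

Downstream Python code would then import the renamed module (e.g. `import my_package_py`) instead of the package name.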
I've made a PR to split up the packages: RobotWebTools/rosbridge_suite#665
Since some of these msgs were used in tests only, I called that package rosbridge_test_msgs. Do you think it will be possible to have our build & test jobs succeed in the build farm without actually adding this rosbridge_test_msgs to the rosdistro files?
Since some of these msgs were used in tests only, I called that package rosbridge_test_msgs. Do you think it will be possible to have our build & test jobs succeed in the build farm without actually adding this rosbridge_test_msgs to the rosdistro files?
If you are talking about the buildfarm at https://build.ros2.org, then no, this won't work. Anything the tests depend on needs to be released onto the buildfarm.
I can think of at least 2 ways to fix this:
- Release all of the packages, including the test packages, onto the buildfarm. Then everything will work.
- Split the tests into a separate package as well, and then blacklist that package from being released onto the ROS 2 buildfarm. In that case, the buildfarm will never try to run the tests (since the package isn't released). Then run the tests in GitHub Actions (or whatever CI runs for incoming PRs).
We do the second thing for https://github.com/ros2/system_tests . It is never released into a distribution, but we run tests with it nightly from https://ci.ros2.org .
Thanks for the info!
Anything the tests depend on needs to be released onto the buildfarm.
Aren't the tests running from an environment that was built on the buildfarm? As long as the test_msgs package is in the same repo, it will be checked out and built with colcon build, right? Or is there something about the buildfarm environment that will prevent it from using a locally built package (which isn't otherwise released) as a <test_depend>?
Aren't the tests running from an environment that was built on the buildfarm? As long as the test_msgs package is in the same repo, it will be checked out and built with colcon build, right?
No, that's not how the buildfarm works.
In short, the buildfarm doesn't use colcon at all. It only checks out the sources for the specific package being built, and installs all other dependencies declared in the package.xml via Debian packages. So if a dependency isn't released, no Debian package will have been built for it, and thus it won't be available to satisfy the dependency.
Ok, that makes sense. It seems like the easiest thing to do is release the rosbridge_test_msgs package. Do you have any concerns with that approach?
Ok, that makes sense. It seems like the easiest thing to do is release the rosbridge_test_msgs package. Do you have any concerns with that approach?
That will certainly work. I prefer approach 2 that I pointed out above, in that you aren't releasing "useless" (to end users) packages into the ROS distribution. But it's a relatively minor point so whatever is easier for you is fine.
My concern with 2 is that the tests won't run at all on the build farm. I imagine this means it would be possible for a change in another package to break rosbridge_server, but we wouldn't notice until after the release when things fail at runtime.
My concern with 2 is that the tests won't run at all on the build farm. I imagine this means it would be possible for a change in another package to break rosbridge_server, but we wouldn't notice until after the release when things fail at runtime.
Yeah, that's true. We're generally relying on https://ci.ros2.org to discover that for us before it ever gets to the buildfarm, and you could do a similar thing with a GitHub Action. But it is additional infrastructure you have to maintain.
Either way is fine.
I think this is doable. Though, since #131, I think the main issue is that we now risk install collisions between rosidl_generate_interfaces and your custom package. @hidmic is this accurate?
Yes! This fell off my radar completely.
To solve this, perhaps we can rehash the ament_cmake_python CMake macros to be a bit more idiomatic. As in, have a macro that adds a module (i.e. a collection of files and/or subdirectories) and then have a macro that installs a list of modules as a package. ament_python_install_package is already reconstructing the source package in the build tree for setuptools to build the egg, so it's not that far off. Thoughts @jacobperron @clalancette?
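If it helps, the user-facing side of what I have in mind could look roughly like this (both macro names below are hypothetical, just to illustrate the shape of the API):

```cmake
# Hypothetical API sketch: collect pieces of a Python package from different
# sources first, then install them together as one package.
# Register hand-written sources as one module of the package:
ament_python_add_module(${PROJECT_NAME} DIRECTORY src/${PROJECT_NAME})
# rosidl_generate_interfaces() could register its generated modules the same way.
# Finally, install everything registered under this package name:
ament_python_install_modules(${PROJECT_NAME})
```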
@hidmic If I understand your proposal, this still puts the responsibility on the user to avoid install collisions if they are installing interfaces and custom Python code to the same module. It still sounds like an improvement though.
this still puts the responsibility on the user to avoid install collisions if they are installing interfaces and custom Python code to the same module.
To the extent that user-provided and generated .py sources must not overwrite each other, yes. We can detect duplication at configure time and fail the build early, though.
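For instance, a configure-time check could compare the file lists of the two source trees before merging them. A rough sketch (the variable names are placeholders, not the actual implementation):

```cmake
# Sketch: fail the build early if a hand-written .py file would clash with a
# generated one. _user_pkg_dir and _generated_pkg_dir stand for the two trees.
file(GLOB_RECURSE _user_sources RELATIVE "${_user_pkg_dir}" "${_user_pkg_dir}/*.py")
foreach(_rel_path IN LISTS _user_sources)
  if(EXISTS "${_generated_pkg_dir}/${_rel_path}")
    message(FATAL_ERROR
      "Python source '${_rel_path}' exists both in the user package and in the "
      "generated interface sources; refusing to overwrite it")
  endif()
endforeach()
```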
Can we add a warning on the tutorial that this isn't supported?
Can we add a warning on the tutorial that this isn't supported?
Sure, please feel free to open a PR there and we'd be happy to review.
I have this issue in sound_play.
Hmm, it seems we must separate the Python package and the message package.
I have a question: must the message package be a separate package?
I know it is good practice, but I think it is just a tradition or a custom habit.
We have faced this issue since humble or rolling, so it seems this rule was introduced recently.
If it is a rule, it should be written in a REP, I think.
If I must split the package, users will have to change all the message names, which is a big change.
I want to know whether this kind of change was made with the ROS 2 team's agreement or not.
If this change was made with agreement and is written in a REP, I will follow it.
However, if it is not, this issue should be fixed.
I have a question: must the message package be a separate package?
Currently, yes, due to this bug.
However, if it is not, this issue should be fixed.
Yes, this bug should be fixed. Someone has to find the time to do it.
I made a hotfix: #187.
In this PR, ament_python_install_package is called only when the Python package is not already installed.
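The idea, roughly (a simplified sketch; see the PR for the exact change):

```cmake
# Simplified sketch of the approach: skip the second installation if the
# symlink target for this package name was already created elsewhere
# (e.g. by the generated interfaces).
if(NOT TARGET ament_cmake_python_symlink_${PROJECT_NAME})
  ament_python_install_package(${PROJECT_NAME})
endif()
```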
Any progress on this? This is still a huge issue. I propose we merge @knorth55's hotfix #187 unless work is being done to fix the underlying bug.
+1 would be helpful to have this fixed.
+1 would be great if this bug is fixed.
So we discussed this bug a bit.
The most ament_cmake way to do this is to change up the way that ament_python_install_package works. For most things that are part of ament_cmake, the way they work is that the call (to e.g. ament_export_libraries) doesn't do much work; instead it sets up a bunch of CMake variables, along with a hook. When ament_package is eventually called, it calls all of the hooks, and at that point each hook does the work based on the CMake variables. This is done this way so that ament_package has a full view of the environment, and avoids problems exactly like this one.
For whatever reason, ament_python_install_package does not work like this at the moment. We should change ament_python_install_package to work in the more ament_cmake way, which will resolve this bug.
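For illustration, the deferred pattern typically looks something like this (the macro and package names here are made up; see ament_cmake_export_libraries for a real example):

```cmake
# Sketch of the usual ament_cmake pattern (hypothetical names).
# The user-facing macro only records state and registers a hook:
macro(ament_export_something)
  if(NOT _AMENT_CMAKE_EXPORT_SOMETHING_REGISTERED)
    # Ask ament_package() to include the hook file when it eventually runs.
    ament_register_extension("ament_package" "ament_cmake_export_something"
      "ament_cmake_export_something_package_hook.cmake")
    set(_AMENT_CMAKE_EXPORT_SOMETHING_REGISTERED TRUE)
  endif()
  list(APPEND _AMENT_CMAKE_EXPORT_SOMETHING_ITEMS ${ARGN})
endmacro()

# The hook file (included from ament_package()) then does the real work: by
# that point every call has been recorded, so it sees the full list in
# _AMENT_CMAKE_EXPORT_SOMETHING_ITEMS and can install everything exactly once.
```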
What I'm going to do here is to close this issue (and related PR) in favor of ament/ament_cmake#514 (partially because GitHub doesn't allow us to transfer issues between organizations). Then we can continue the conversation over there (I'll also replicate this comment there).