quasar-team/quasar

Yocto integration documentation?

BenLand100 opened this issue · 10 comments

I've stumbled around a bit trying to create a bitbake file that can build a quasar project, without a lot of success. Is there documentation or reference material on this anywhere?

Hi @BenLand100 ,
(to attention of @parasxos @schlenk4 @ben-farnham )
documentation: Documentation/yocto.html (this project)
bitbake file: Extra/yocto/my-opcua-server.bb (this project also)

note the last time we did it for Yocto was with the Rocko and Sumo releases; that was about 2 years ago. Here at CERN we build quasar massively for SoCs and other embedded platforms, but we shifted our attention to having a common OS rather than using Yocto, which seemed too difficult.

If you'd like to review the Yocto docs, or revive them somehow, contributions are very welcome.

Ah, I see, you suggest essentially bypassing the Python tool and doing a separate CMake build. I've taken a somewhat different approach to integrating with Yocto (really PetaLinux): I'm just running the standard ./quasar.py build and then installing the results from the bitbake file. I've decided to push ahead with this instead of doing a separate CMake build, so that development and production are built the same way.
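To sketch what that approach looks like in recipe form (a hedged illustration only; the task bodies, binary name, and paths below are assumptions, not the contents of any real recipe):

```bitbake
# Illustrative sketch -- binary name and paths are assumptions.
S = "${WORKDIR}/git"

do_compile() {
    cd ${S}
    # run the same build developers use on the desktop
    ./quasar.py build
}

do_install() {
    install -d ${D}${bindir}
    # the server binary name depends on the quasar project; illustrative
    install -m 0755 ${S}/build/bin/OpcUaServer ${D}${bindir}/
}
```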

This seems to work, with a few caveats:

  • There's no python3-colorama-native in OpenEmbedded currently, so I removed this dependency from the quasar framework in my repo.
  • Quasar's build pulls open62541-compat from a git repo during the build, but Yocto doesn't support this (namely, no SSL certificates are available at build time, so the git clone fails). This can be "fixed" (!) by disabling SSL verification for git in the ExternalProject_Add command.
  • This approach requires a lot of -native packages in the bitbake file to run the build system.
  • The config.xml requires a ../Configuration/Configuration.xsd, which in turn (!) requires a ../../Meta/config/Meta.xsd file at the appropriate relative locations.
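For concreteness, the SSL workaround in the second bullet amounts to a per-invocation git config rather than a global setting. A sketch (demonstrated with a throwaway local repository so it runs offline; the real clone would target the open62541-compat GitHub URL):

```shell
# Per-clone git config can disable certificate checks without touching
# global settings -- the same effect as editing ExternalProject_Add:
#   git -c http.sslVerify=false clone https://github.com/quasar-team/open62541-compat.git
# (the GIT_SSL_NO_VERIFY=1 environment variable works similarly)
# Offline demonstration with a throwaway local repository:
git init --bare /tmp/ssl-demo.git
git -c http.sslVerify=false clone /tmp/ssl-demo.git /tmp/ssl-demo-clone
test -d /tmp/ssl-demo-clone/.git && echo cloned   # prints "cloned"
```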

Particularly on that last point: is there a better way to handle this? I suppose the relative paths are set up so that it's easy to run from the example directories, but having to reproduce this exact file structure relative to any configuration file is a bit of a pain.

Hi, to relate to your points:

  • colorama is a dependency which is trivial to get in Yocto (see e.g. the lxml recipe; lxml-native was also not there once),
  • for pulling open62541-compat from the web: this is ancient behaviour; we changed it already. It suggests to me that you are on a not-so-fresh version of open62541-compat. May I ask where you took the version indication from, and which version you actually have?
  • not sure what you mean by a lot of -native packages? A bunch of them, yes, clearly, but a lot? ;-)
  • actually I don't think that Meta.xsd problem is well phrased: in the build process both files are merged into one simple Configuration.xsd, which does not require Meta.xsd anymore.
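To illustrate the colorama point: with the pypi class, a recipe along these lines is usually all it takes. This is a hedged sketch; the version, checksums, and license path below are deliberate placeholders, not real values:

```bitbake
# Placeholder sketch -- the license checksum (and SRC_URI checksums the
# fetcher will demand) are deliberately left as <fill-in>, not real values.
SUMMARY = "Cross-platform colored terminal text for Python"
HOMEPAGE = "https://pypi.org/project/colorama/"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=<fill-in>"

PYPI_PACKAGE = "colorama"
inherit pypi setuptools3

# make it available to the build host as python3-colorama-native
BBCLASSEXTEND = "native"
```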

Curious - could you tell us anything about the application you are making with quasar?

Certainly, this is for an OPC UA interface to the DUNE Warm Interface Board (WIB), to be used by DUNE slow controls. Giovanna Lehmann suggested quasar as a good way to proceed here. You can find the repo for the quasar project here and the bitbake file I came up with here. The changes I made to the project to build with Yocto are all in this commit.

  • Fairly sure I'm using a recent tag of quasar - my local quasar repo is at the 1.5.4 tag. ./quasar.py build with this version runs a git clone of open62541-compat.

  • Regarding -native dependencies, the spirit of that comment was that colorama and indent/astyle are fairly extraneous for an automated build system. Having these be optional would potentially be helpful. PetaLinux (especially) has a fair amount of version lock-in without reading a lot of Xilinx forum posts, so it was frankly easier to remove colorama than to add or edit its bitbake recipe to include the native class.

  • I've perhaps installed the wrong Configuration.xsd onto the system, then. The bitbake file I referenced should clarify which one I've installed, and I believe it is at the correct relative path specified by the config.xml I have been testing with. I'm not sure what XML schema files can support, but if there were some way to hardcode an install path into these files (short of using sed at build time, which would work in a pinch; is there any XML equivalent of an environment variable?), that would be ideal.
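To sketch the sed idea (the file content and the /usr/share/quasar install path below are made up for illustration; the real config.xml will differ):

```shell
mkdir -p /tmp/xsd-demo && cd /tmp/xsd-demo
# toy stand-in for a quasar config file referencing the schema relatively
cat > config.xml <<'EOF'
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:noNamespaceSchemaLocation="../Configuration/Configuration.xsd">
</configuration>
EOF
# rewrite the relative reference to an absolute installed path
sed -i 's|\.\./Configuration/Configuration.xsd|/usr/share/quasar/Configuration.xsd|' config.xml
grep -o 'xsi:noNamespaceSchemaLocation="[^"]*"' config.xml
# prints: xsi:noNamespaceSchemaLocation="/usr/share/quasar/Configuration.xsd"
```

In a bitbake recipe this would live in do_install, running against the file under ${D}.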

  • In a similar vein, I see the XML schema files reference schemas hosted on CERN servers. Will this work without access to the broader internet? If not, is there a way to disable schema verification?

Hi @BenLand100 ,
please could you also tell us which version of open62541-compat you're on?

Would you be interested to be added to quasar-users mailing list (no spam there) for quasar-related announcements etc?

Sure, it looks like the auto-cloned open62541-compat in build/open62541-compat is at tag v1.3.10. You can add bland100@sas.upenn.edu to the mailing list. Thanks!

Hi Ben,

so since there are a couple of points and, as stated earlier, we haven't recently had anyone to keep the Yocto support up to date, there are two things we could do if you think it'd help:

  1. you can open a support ticket (JIRA project OPCUA, assignee myself),
  2. we can have a Zoom session and try to figure out how to improve.

Let me know ;-)

BTW: would Yocto really be used for DUNE? There seems to be a CERN-wide consensus to use CentOS for ARM w/ PetaLinux-built kernel.

I've got something that works now, so from my point of view I'm satisfied, but it's worth considering some of these things for future improvements, if improved Yocto/deployment support is desired. I may make some adjustments to the schema/validation in the WIB project, if changes are required, since these features may not be broadly applicable.

The WIB, at least, will be running Yocto on its UltraScale+ ARM cores. Being able to run one command and have a fully configured, versioned, and repeatable Linux system ready in a tar archive is a pretty attractive feature of Yocto, imo. We don't envision people SSHing into these devices for any reason, so ultimately the distro doesn't matter much. Then again, we might try to fit the whole system onto some smallish QSPI flash, so being able to pare down Yocto to some bare minimum of packages is attractive.
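The pruning I have in mind is roughly this kind of image configuration (illustrative only; the my-opcua-server name echoes quasar's Extra/yocto example, and I'm using the underscore-override syntax of the Rocko/Sumo era):

```bitbake
# Illustrative minimal-image sketch, not a tested recipe.
# Start from the smallest stock image and add only the OPC UA server.
require recipes-core/images/core-image-minimal.bb

IMAGE_INSTALL_append = " my-opcua-server"

# drop optional image features to keep the rootfs small enough for QSPI
IMAGE_FEATURES = ""
```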

Hi Ben,

so would you, for instance, like to contribute documentation updates (or even a wholly new chapter or so) to let others profit from what you've been through with quasar+Yocto? Or even a concise write-up of the steps you had to take to get it done? I'd consider it a pity not to profit from your mileage, especially since we do not have many people on Yocto anymore. I can't offer much for it; basically, adding you to the list of contributors is all I can do ;-) Let me / us know.

I'll add a write-up of the changes to my TODO list and open a PR with it at some point in the next few weeks. It'll probably be in Markdown format, but could be converted to HTML pretty easily. I'll likely end up doing this for DUNE colleagues anyway, since there's a wider interest in using quasar than just for the WIB.