sailfishos/sailfish-browser

Building gecko-dev fails

Opened this issue · 4 comments

I tried building gecko-dev/xulrunner-qt5 but I end up with the following error:

+ source /home/sis/experimente/sailfish/gecko-dev/gecko-dev/../obj-build-mer-qt-xr/rpm-shared.env
/var/tmp/rpm-tmp.kAYgrK: line 40: /home/sis/experimente/sailfish/gecko-dev/gecko-dev/../obj-build-mer-qt-xr/rpm-shared.env: No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.kAYgrK (%build)

This happens both locally with sfdk and on GitHub CI: https://github.com/simonschmeisser/gecko-dev/actions/runs/4487756415/jobs/7891530571

Any hints on what I might be doing wrong? I promise to write/finalize a build tutorial in exchange for guidance 😉

I just tried building with mb2 but I get the same error message, so it seems I'm missing some step?

Did you apply the patches correctly before building? For example with "mb2 ... build -p" or just manually.
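A concrete invocation might look like this (the target name below is illustrative, not from this thread; substitute whichever build target you have installed):

```shell
# -p tells mb2 to apply the patches listed in the .spec before %build runs.
# Target name is an example only; pick one of your installed targets.
mb2 -t SailfishOS-4.4.0.58-aarch64 build -p
```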

Thanks! Adding -p to apply the patches helped me get a bit further. Now I get the following error:

 0:21.92 js/src> creating ./config.data
 0:21.95 js/src> Creating config.status
 0:22.01 Creating config.status
 0:22.21 Traceback (most recent call last):
 0:22.21   File "/home/sis/experimente/sailfish/gecko-dev/gecko-dev/configure.py", line 181, in <module>
 0:22.21     sys.exit(main(sys.argv))
 0:22.21   File "/home/sis/experimente/sailfish/gecko-dev/gecko-dev/configure.py", line 57, in main
 0:22.21     return config_status(config)
 0:22.21   File "/home/sis/experimente/sailfish/gecko-dev/gecko-dev/configure.py", line 176, in config_status
 0:22.21     return config_status(args=[], **normalize(sanitized_config))
 0:22.21   File "/home/sis/experimente/sailfish/gecko-dev/gecko-dev/python/mozbuild/mozbuild/config_status.py", line 134, in config_status
 0:22.21     reader = BuildReader(env)
 0:22.22   File "/home/sis/experimente/sailfish/gecko-dev/gecko-dev/python/mozbuild/mozbuild/frontend/reader.py", line 868, in __init__
 0:22.22     self._gyp_worker_pool = ProcessPoolExecutor(max_workers=max_workers)
 0:22.22   File "/usr/lib/python3.8/concurrent/futures/process.py", line 555, in __init__
 0:22.22     self._call_queue = _SafeQueue(
 0:22.22   File "/usr/lib/python3.8/concurrent/futures/process.py", line 165, in __init__
 0:22.22     super().__init__(max_size, ctx=ctx)
 0:22.22   File "/usr/lib/python3.8/multiprocessing/queues.py", line 42, in __init__
 0:22.22     self._rlock = ctx.Lock()
 0:22.22   File "/usr/lib/python3.8/multiprocessing/context.py", line 68, in Lock
 0:22.22     return Lock(ctx=self.get_context())
 0:22.22   File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
 0:22.22     SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
 0:22.22   File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
 0:22.22     sl = self._semlock = _multiprocessing.SemLock(
 0:22.22 FileNotFoundError: [Errno 2] No such file or directory
 0:22.31 *** Fix above errors and then restart with\
 0:22.31                "./mach build"
 0:22.31 make: *** [client.mk:115: configure] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.ZTjEkC (%build)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.ZTjEkC (%build)

https://gist.github.com/simonschmeisser/9fcf544b0b5aa03bd6865919ea467c3e

Does that ring a bell for you? If not, I'll experiment a bit in the coming days.

I appreciate this is from a long time ago, but maybe it's worth replying for the benefit of anyone else who hits this.

I believe this "SemLock" issue is x86-specific, so if you build for one of the other targets (arm, aarch64) you can avoid it. To work around it for x86, you'll need to apply this change to your scratchbox2 installation: sailfishos/scratchbox2@3fb3ee2
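For context, the crash in the traceback reduces to Python's multiprocessing creating a lock, which allocates a POSIX semaphore. A minimal sketch of that call is below; it succeeds on a normal host, and is presumably the step that fails with FileNotFoundError inside an unpatched x86 scratchbox2 target:

```python
import multiprocessing

# ProcessPoolExecutor's internal queues do the equivalent of this:
# creating a Lock allocates a POSIX semaphore (the SemLock seen in the
# traceback). Under an unpatched x86 scratchbox2 target this is
# apparently where FileNotFoundError: [Errno 2] is raised.
lock = multiprocessing.Lock()

lock.acquire()
lock.release()
print("SemLock created OK")
```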

Note that if you later want to build for a non-x86 target, you may have to revert this change to get it working again.
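Assuming you have the scratchbox2 sources checked out locally and the commit is reachable in your clone, applying and later reverting it might look like this (paths and workflow are hypothetical; adapt to however your sb2 installation is built):

```shell
# Hypothetical checkout path; adjust to your setup.
cd ~/src/scratchbox2
git cherry-pick 3fb3ee2   # apply the x86 SemLock workaround, then rebuild/install sb2
# ... build your x86 target ...
git revert 3fb3ee2        # undo it before building arm/aarch64 targets again
```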