openedx-unsupported/devstack

Missing vendor file node_modules/backbone.paginator/lib/backbone.paginator.js

rgraber opened this issue · 12 comments

NOTE: The documentation of the workaround has been moved to #1088. This ticket is now just for the technical resolution of the issue.

make dev.provision results in "Missing vendor file node_modules/backbone.paginator/lib/backbone.paginator.js"

I think @rijuma saw this several times while provisioning mid-week, but it eventually worked after nuking volumes and running again, so it may not happen every time 😱

I can't say whether this was exactly the same as the case @ashultz0 mentioned. In my case, pyenv was not correctly set up to run the correct version (v3.8.16) by default, and running the provisioning corrupted some container setups.

In order to fix my issue, I had to add eval "$(pyenv init -)" before sourcing the virtualenvwrapper.sh script. My final .bashrc had these lines:

# ~/.bashrc

eval "$(pyenv init -)"
source /usr/local/bin/virtualenvwrapper.sh

After that, I double-checked that the proper Python version was the pyenv default in a freshly started terminal:

~ python -V
Python 3.8.16

Then I removed all related containers and volumes from Docker so that provisioning would start from scratch.
After that, provisioning went smoothly.

Seems like this is possibly also a manifestation of M1/Silicon issues.

Edit: we have also received word that this is happening on hosted devstack as well.

On hosted devstack, we managed to fix this issue with an npm ci and then a run of paver update_assets inside the container's shell. This may or may not work for folks on Apple Silicon machines.

This might be something we could run automatically when folks run into this, perhaps when provisioning fails: https://github.com/openedx/devstack/blob/master/provision-lms.sh#L80-L81

Some suggestions for diagnosing:

  • Does this happen if the user doesn't update their Docker CPU/RAM/disk image settings as recommended in https://edx.readthedocs.io/projects/open-edx-devstack/en/latest/getting_started.html#prerequisites ? People often fail to notice that step, and an npm failure due to insufficient RAM might leave devstack in a bad state even if they later find that section and update their settings.
  • It feels like the most likely underlying causes of this are either "NPM didn't successfully complete" or "copying installed NPM packages to the vendored JS directory failed". Can we put better error handling in these steps to catch the root cause when it happens rather than blowing up downstream in the provisioning process like this?
  • backbone.paginator hasn't been updated in 5 years. Should we just vendor it in for real, and see if that shifts the error message in any useful way?
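To make the second suggestion concrete, here is a minimal sketch of a post-copy check that would fail loudly at the copy step instead of blowing up later in provisioning. This is not edx-platform's actual code; the function name, file list, and directory are illustrative:

```shell
#!/bin/sh
# Hypothetical sanity check: verify that expected vendored files actually
# exist after the "copy installed NPM packages" step, and name the first
# missing one rather than failing downstream.
check_vendor_files() {
    vendor_dir="$1"; shift
    for f in "$@"; do
        if [ ! -f "$vendor_dir/$f" ]; then
            echo "ERROR: missing vendored file: $vendor_dir/$f" >&2
            echo "Did npm ci and the vendor copy step both complete?" >&2
            return 1
        fi
    done
    echo "all vendored files present in $vendor_dir"
}
```

A provisioning script could then call, for example, check_vendor_files common/static/common/js/vendor backbone.js backbone.paginator.js right after the copy step (again, illustrative paths).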

Not sure if this is a separate issue, but it may be that the hash caching we use to avoid re-running npm installs can get into a bad state. Provisioning seems to clear it: https://github.com/openedx/devstack/blob/master/provision-lms.sh#L80-L81, but paver install_prereqs may not update the hash when it should.
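For illustration, the kind of hash-gating described above can be sketched like this. The cache directory, file names, and wiring are assumptions for the sketch, not edx-platform's actual cache layout:

```shell
#!/bin/sh
# Minimal sketch of hash-gated npm installs, in the style of what
# paver install_prereqs is described as doing above.
CACHE_DIR=".prereqs_cache"        # assumed location of the cached hash
HASH_FILE="$CACHE_DIR/npm.sha1"

npm_install_if_needed() {
    mkdir -p "$CACHE_DIR"
    # Hash the npm inputs; if unchanged since last run, skip the install.
    current=$(cat package.json package-lock.json 2>/dev/null | sha1sum | cut -d' ' -f1)
    cached=$(cat "$HASH_FILE" 2>/dev/null || true)
    if [ "$current" = "$cached" ]; then
        echo "npm install skipped (hash unchanged)"
    else
        echo "npm install needed"   # real code would run: npm ci
        echo "$current" > "$HASH_FILE"
    fi
}
```

The failure mode would be: if the hash gets written but the corresponding npm run never completed (interrupted, or the vendor copy failed), the cache claims everything is up to date while node_modules is incomplete. Deleting the cached hash forces a reinstall, which is roughly what the provisioning cleanup linked above achieves.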

I have been able to work around this in the past by manually installing backbone.paginator, backbone, and a few others. Not a good developer experience, but I have been able to get around it before.

Am on an M1 Mac running Devstack.

[inform] @justinhynes reported having this issue "not running local" (remote desktop?) and successfully used the following workaround:

Run npm ci and then paver update_assets from an LMS shell.

Maybe we just need a quick win (for now) of adding this to the troubleshooting guide?

The following workaround worked for at least 2 developers:

1. Enter LMS shell (make lms-shell).
2. npm ci
3. paver update_assets

It has been noted that paver install_prereqs, which uses a cached hash to determine whether npm dependencies need to be reinstalled, may get into a bad state. An alternative workaround, which would also hopefully fix the cached hash, would be the following:

1. Enter LMS shell (make lms-shell).
2. make clean
3. paver install_prereqs
4. paver update_assets

That said, it may be that npm ci alone is enough to make the cached npm hash out of date, in which case a later run of paver install_prereqs would notice, run the npm install again, and update the hash. At worst this causes one extra, unnecessary npm install.

UPDATE: It probably isn't worth trying this alternate workaround, and we should just update the troubleshooting doc. We could however separately try to get to the bottom of how this happens, or how we could detect and auto-fix, or detect and auto suggest the fix.
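As a sketch of the detect-and-suggest idea: the path below comes straight from the error message, while the function name and wiring are hypothetical. Provisioning could run something like this at the end and print the known workaround instead of an opaque failure:

```shell
#!/bin/sh
# Hypothetical end-of-provision check: if the vendor file from the error
# message is missing, print the documented workaround and fail.
suggest_fix_if_missing() {
    repo_root="$1"
    vendor_file="$repo_root/node_modules/backbone.paginator/lib/backbone.paginator.js"
    if [ ! -f "$vendor_file" ]; then
        cat >&2 <<'EOF'
Missing vendor file backbone.paginator.js. Known workaround:
  1. make lms-shell
  2. npm ci
  3. paver update_assets
EOF
        return 1
    fi
}
```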

Follow-up checklist (for Arch-BOM usage)

  • Is the issue flaky or consistent?
  • Does it affect multiple people or multiple types of systems?
  • Update the devstack troubleshooting documentation page if necessary
    • Do we need a new troubleshooting section?
    • Did a troubleshooting section already exist, but it wasn't easy to find given the symptoms?
    • If a recurring issue, should we ticket an automated resolution in place of the doc?

FED-BOM is able to reproduce this issue and is trying to debug it. We think it might be fixed by the frontend package upgrades in edx-platform. We have a separate ticket for the platform npm package upgrades, and one of its PRs will be up soon.