WordPress/openverse-api

`MEDIA_INDEX_MAPPING` not retrievable from settings

sarayourfriend opened this issue · 7 comments

Sentry link

https://sentry.io/share/issue/43ef4374b5b142868bca159d47506bb3/

Description

`from catalog.configuration.elasticsearch import ES, MEDIA_INDEX_MAPPING`

Something in this line isn't working, and it causes the settings object not to include `MEDIA_INDEX_MAPPING`.

Introduced in #712
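
The settings module presumably picks up `MEDIA_INDEX_MAPPING` from that import, so if the import fails at startup the attribute simply never exists on `django.conf.settings`. A defensive lookup along these lines (a sketch, not project code) would at least surface the failure with a clearer message than a bare `AttributeError`:

```python
# Hypothetical helper, not part of the project: fail loudly with context when
# settings is missing MEDIA_INDEX_MAPPING instead of raising a bare AttributeError.
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured


def get_media_index_mapping():
    mapping = getattr(settings, "MEDIA_INDEX_MAPPING", None)
    if mapping is None:
        raise ImproperlyConfigured(
            "MEDIA_INDEX_MAPPING is not defined on settings; the import of "
            "catalog.configuration.elasticsearch likely failed at startup."
        )
    return mapping
```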

Suggested solution

Revert ES configuration refactors.

I suggest reverting the ES configuration refactors so that only the existing large-scale refactors ship in the current deploy. We're trying to do too much in a single version change, and we already have risky changes in it. If we can cut out some of these refactors and isolate only the changes necessary to debug production ES CPU usage (which we've been blocked on for several weeks due to another big refactor in the search controller area), then we can move forward with more confidence.

Did we set the environment variables used by this PR in the staging environment?

IMAGE_INDEX_NAME="image"
AUDIO_INDEX_NAME="audio"

If not, perhaps the error is in how the code sets the default values, and passing these explicitly would make it work.
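
If the configuration module reads those variables with sane defaults, something like the sketch below (with `os.environ` standing in for whatever settings helper the project actually uses, and all names assumed rather than taken from the real module), then leaving them unset in staging should be harmless and the problem would lie elsewhere:

```python
# Sketch of the hypothesis above; names and structure are assumptions, not the
# project's actual configuration code.
import os

# Unset environment variables fall back to the defaults shown above.
IMAGE_INDEX_NAME = os.environ.get("IMAGE_INDEX_NAME", "image")
AUDIO_INDEX_NAME = os.environ.get("AUDIO_INDEX_NAME", "audio")

# A mapping keyed by media type, as the MEDIA_INDEX_MAPPING name suggests.
MEDIA_INDEX_MAPPING = {
    "image": IMAGE_INDEX_NAME,
    "audio": AUDIO_INDEX_NAME,
}
```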

To clarify where this is happening, by the way (it's non-obvious from Sentry because a misconfiguration causes it to report the local environment):

Unless someone is actually causing this error locally, it can only be happening in staging API because production doesn't have this code yet.

@zackkrida No, the only two environment variables added in the staging deployment were `LOG_LEVEL` and `ALLOWED_HOSTS` [Ref].

I'm unable to reproduce this locally on a fresh copy of main. It's very hard to see why this would raise errors in staging but work normally on a local copy.

I don't see any mention of this event in the staging logs. Searching for `MEDIA_INDEX_MAPPING` in the `/dev/api/gunicorn/api` log group yielded no hits.
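
For anyone who wants to repeat that search, something along these lines should work (a boto3 sketch, assuming the group lives in CloudWatch Logs and AWS credentials/region are already configured):

```python
# Search the staging API log group for any mention of MEDIA_INDEX_MAPPING.
import boto3

logs = boto3.client("logs")
response = logs.filter_log_events(
    logGroupName="/dev/api/gunicorn/api",
    filterPattern="MEDIA_INDEX_MAPPING",
)
for event in response.get("events", []):
    print(event["timestamp"], event["message"])
```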

Additionally, this event only occurred once whereas, judging by the error location, it should be occurring non-stop in a start → crash → restart loop.

For these reasons, I think it's safe to ignore this as a fluke.

While we don't yet have the Sentry alerts separated by environment (that change is still to be deployed), I cross-referenced a few pieces of this alert against others.

  1. The breadcrumb for an earlier ES hit points to `es:9200`, which is the URL we use in the local development stack. In production, this is http://openverse-elasticsearch-prod.private:9200.

  2. The server_name field corresponds to the Docker container ID in which the API is running (see the quick check after this list). The server_name in this alert is `5d3b4c585574`. The current container IDs for staging are:

  • eb1035da4ee6
  • 29f45c7a2970
  • 23b301981cf1
  • 11808dc2fd0c
  • 1225f87be930
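
As a quick check of that claim: the Sentry SDK defaults server_name to the machine's hostname, and inside a Docker container the hostname defaults to the short container ID, so running the snippet below inside one of the staging API containers should print one of the IDs listed above.

```python
# Print the hostname the Sentry SDK reports as server_name by default.
# Inside a Docker container this is the short container ID.
import socket

print(socket.gethostname())
```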

Both of these factors lead me to believe this occurred locally (rather than in staging/prod).

Closing due to irreproducibility.