truera/trulens

RuntimeError: Could not locate app.query.rets.source_nodes[:].note.text in app/record.

JIAWENee opened this issue · 10 comments

I'm trying to run trulens to evaluate the RAG triplet metric on my llama_index RAG.

My code:

from trulens_eval import TruLlama
from trulens_eval import FeedbackMode

tru_recorder = TruLlama(
    query_engine,
    app_id="App_1",
    feedbacks=[
        f_qa_relevance,
        f_qs_relevance,
        f_groundedness
    ]
)

with tru_recorder as recording:
    query_engine.query("what are the keys to building a career in AI?")

Error message:

Run of run in <Thread(TP.submit with debug timeout_0, started 10360)> failed with: Could not locate app.query.rets.source_nodes[:].note.text in app/record.
exception calling callback for <Future at 0x24b5f5873d0 state=finished raised RuntimeError>
Traceback (most recent call last):
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\feedback\feedback.py", line 708, in extract_selection
    arg_vals[k] = list(q_within_o.get(o))
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\serial.py", line 943, in get
    for start_selection in start_items:
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\serial.py", line 944, in get
    for last_selection in last_step.get(start_selection):
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\serial.py", line 402, in get
    raise KeyError(
KeyError: 'Key not in dictionary: note'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\app.py", line 1168, in _add_future_feedback
    res = future_result.result()
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\python.py", line 374, in _future_target_wrapper
    return func(*args, **kwargs)
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\threading.py", line 174, in _run_with_timeout
    raise e
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\utils\threading.py", line 159, in _run_with_timeout
    res: T = fut.result(timeout=timeout)
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "D:\software\anaconda3\envs\torch_py310\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\feedback\feedback.py", line 504, in run
    raise e
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\feedback\feedback.py", line 499, in run
    input_combinations = list(
  File "D:\software\anaconda3\envs\torch_py310\lib\site-packages\trulens_eval\feedback\feedback.py", line 710, in extract_selection
    raise RuntimeError(
RuntimeError: Could not locate app.query.rets.source_nodes[:].note.text in app/record.

🤖

Hello @JIAWENee! I'm Dosu, a bot here to help you solve bugs, answer questions, and guide you on becoming a contributor while waiting for a human maintainer. I'm taking a quick look at your issue and will provide you with a full answer shortly.

The RAG successfully runs the query but the metrics do not get computed. Any suggestions on how to fix?

🤖

Based on the error message and traceback you've provided, the issue is that the extract_selection function in feedback.py is trying to access a note key that doesn't exist in the dictionary it's working with. This could be due to a mismatch between the expected structure of the data and its actual structure.

Here are a few steps you can take to debug this issue:

  1. Check the structure of your data: The error message suggests that the code is trying to access app.query.rets.source_nodes[:].note.text in app/record. You should verify that app/record indeed has this structure. You can do this by printing or logging the structure of app/record before the line where the error occurs.

  2. Check the extract_selection function: The traceback suggests that the error is happening in the extract_selection function in feedback.py. You should check this function to see if it's correctly handling the data. It might be expecting a note key that doesn't always exist, in which case you might need to add a check for the existence of this key before trying to access it.

  3. Check your feedbacks: In your TruLlama instance, you're passing f_qa_relevance, f_qs_relevance, and f_groundedness as feedbacks. You should check these feedbacks to see if they're correctly defined and if they're producing the expected output.

Without more specific details about the structure of your data or the exact location in the code where the error occurs, it's challenging to provide a more precise solution. If you could provide more details about your data structure or the feedbacks you're using, it would be easier to pinpoint the issue.
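
For step 1, a minimal sketch along these lines (assuming the tru_recorder and query_engine objects from the snippet above) captures one record so its real structure can be printed and the key names checked:

# Minimal sketch: capture a single record, then inspect where the retrieved
# chunks actually live. Assumes the tru_recorder/query_engine defined above.
with tru_recorder as recording:
    query_engine.query("what are the keys to building a career in AI?")

record = recording.get()  # the most recent trulens_eval Record
print(record)  # look for where the chunk text sits under source_nodes[:]

Notably, the KeyError above reports the missing key as note, while LlamaIndex source nodes expose their text via node.text, so a selector written as note.text may simply contain a one-letter typo.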

For more information, you can refer to the TruLens Eval v0.24.1 documentation.

Sources

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

Hey,

Could you please give a self-contained, reproducible example to help with debugging?

@JIAWENee Can you please give us an example to debug?

Hi, @JIAWENee

I'm helping the trulens team manage their backlog and am marking this issue as stale. From what I understand, you encountered a "RuntimeError" while running trulens to evaluate the RAG triplet metric on a llama_index RAG. Dosu-bot provided a detailed response suggesting steps to debug the issue, including checking the data structure, the relevant function, and the feedbacks being used. Other users, such as avinashmaheshjoshi and yuvneshtruera, requested a reproducible example to aid in debugging.

Could you please confirm if this issue is still relevant to the latest version of the trulens repository? If it is, please let the trulens team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!

When I run the following, I get an error repeated many times. Running the code without the async query (.aquery) works just fine.

from trulens_eval import Tru
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from trulens_eval.feedback.provider import OpenAI
from trulens_eval import Feedback
import numpy as np
from trulens_eval.app import App
from trulens_eval.feedback import Groundedness
from trulens_eval import TruLlama

# Initialize provider class
tru = Tru(database_url=DATABASE_URL)
provider = OpenAI()

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()

context = App.select_context(query_engine)

grounded = Groundedness(groundedness_provider=OpenAI())
# Define a groundedness feedback function
f_groundedness = (
    Feedback(grounded.groundedness_measure_with_cot_reasons)
    .on(context.collect())  # collect context chunks into a list
    .on_output()
    .aggregate(grounded.grounded_statements_aggregator)
)

# Question/answer relevance between overall question and answer.
f_answer_relevance = Feedback(provider.relevance).on_input_output()

# Question/statement relevance between question and each context chunk.
f_context_relevance = (
    Feedback(provider.context_relevance_with_cot_reasons)
    .on_input()
    .on(context)
    .aggregate(np.mean)
)

tru_query_engine_recorder = TruLlama(
    query_engine,
    app_id="LlamaIndex_App1",
    feedbacks=[f_groundedness, f_answer_relevance, f_context_relevance],
)

with tru_query_engine_recorder:
    res = await query_engine.aquery("What is the author's name?")

I get this error repeated many times:

Run of run in <Thread(TP.submit with debug timeout_1, started 139829939275520)> failed with: Selector __record__.app.query.rets.source_nodes[:].node.text.collect() does not exist in source data.
exception calling callback for <Future at 0x7f2c606b84f0 state=finished raised InvalidSelector>
Traceback (most recent call last):
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/feedback/feedback.py", line 1056, in _extract_selection
    arg_vals[k] = list(q.get(source_data))
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 1023, in get
    yield list(start_items)
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 1026, in get
    for start_selection in start_items:
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 1026, in get
    for start_selection in start_items:
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 1026, in get
    for start_selection in start_items:
  [Previous line repeated 2 more times]
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 1027, in get
    for last_selection in last_step.get(start_selection):
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/serial.py", line 438, in get
    raise KeyError(
KeyError: 'Key not in dictionary: query'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/_base.py", line 329, in _invoke_callbacks
    callback(self)
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/app.py", line 1285, in _add_future_feedback
    res = future_result.result()
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/_base.py", line 438, in result
    return self.__get_result()
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
    raise self._exception
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/python.py", line 386, in _future_target_wrapper
    return func(*args, **kwargs)
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/threading.py", line 174, in _run_with_timeout
    raise e
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/utils/threading.py", line 159, in _run_with_timeout
    res: T = fut.result(timeout=timeout)
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/_base.py", line 445, in result
    return self.__get_result()
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
    raise self._exception
  File "/home/dsml/.pyenv/versions/3.9.6/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/feedback/feedback.py", line 828, in run
    raise e
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/feedback/feedback.py", line 814, in run
    input_combinations = list(
  File "/home/dsml/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/feedback/feedback.py", line 1058, in _extract_selection
    raise InvalidSelector(
trulens_eval.feedback.feedback.InvalidSelector: Selector __record__.app.query.rets.source_nodes[:].node.text.collect() does not exist in source data.
Run of run in <Thread(TP.submit with debug timeout_4, started 139828599437056)> failed with: Selector __record__.app.query.rets.source_nodes[:].node.text does not exist in source data.

Here are the versions I have.

>>> trulens_eval.__version__
0.27.2
>>> llama_index.core.__version__
0.10.30

Hi @oliverproudfoot , can you try this alternative to set context:

from trulens_eval import Select
context = Select.RecordCalls.ainvoke.rets.source_nodes[:].node.text

The issue may be our assumption that the main app method is invoke, but you are using ainvoke. If that works, please use this fix while we try to figure out a better way of determining context in these cases.
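
For completeness, a sketch of how that selector slots into the feedback definitions from the reproduction above (same provider, grounded, and np objects; only context changes):

from trulens_eval import Feedback, Select
import numpy as np

# Workaround sketch: select context from the async entry point explicitly.
context = Select.RecordCalls.ainvoke.rets.source_nodes[:].node.text

f_context_relevance = (
    Feedback(provider.context_relevance_with_cot_reasons)
    .on_input()
    .on(context)
    .aggregate(np.mean)
)

f_groundedness = (
    Feedback(grounded.groundedness_measure_with_cot_reasons)
    .on(context.collect())  # groundedness consumes the chunks as one list
    .on_output()
    .aggregate(grounded.grounded_statements_aggregator)
)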

The RAG successfully runs the query but the metrics do not get computed. Any suggestions on how to fix?

Hi @JIAWENee , can you include the code that sets up the feedback functions you used?

Hi @oliverproudfoot , can you try this alternative to set context:

from trulens_eval import Select
context = Select.RecordCalls.ainvoke.rets.source_nodes[:].node.text

The issue may be our assumption that the main app method is invoke, but you are using ainvoke. If that works, please use this fix while we try to figure out a better way of determining context in these cases.

When I do this I get the following error:

🦑 Tru initialized with db url sqlite:///default.sqlite .
🛑 Secret keys may be written to the database. See the `database_redact_keys` option of `Tru` to prevent this.
[nltk_data] Downloading package punkt to /home/dsml/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
✅ In groundedness_measure_with_cot_reasons, input source will be set to __record__.app.ainvoke.rets.source_nodes[:].node.text.collect() .
✅ In groundedness_measure_with_cot_reasons, input statement will be set to __record__.main_output or `Select.RecordOutput` .
✅ In relevance, input prompt will be set to __record__.main_input or `Select.RecordInput` .
✅ In relevance, input response will be set to __record__.main_output or `Select.RecordOutput` .
✅ In context_relevance_with_cot_reasons, input question will be set to __record__.main_input or `Select.RecordInput` .
✅ In context_relevance_with_cot_reasons, input context will be set to __record__.app.ainvoke.rets.source_nodes[:].node.text
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                              Selector check failed                                              ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Source of argument source to groundedness_measure_with_cot_reasons does not exist in app or expected record:       

                                                                                                                   
 __record__.app.ainvoke.rets.source_nodes[:].node.text.collect()                                                   
 # or equivalently                                                                                                 
 Select.RecordCalls.ainvoke.rets.source_nodes[:].node.text.collect()                                               
                                                                                                                   

The data used to make this check may be incomplete. If you expect records produced by your app to contain the      
selected content, you can ignore this error by setting selectors_nocheck in the TruLlama constructor.              
Alternatively, setting selectors_check_warning will print out this message but will not raise an error.            


                                              Additional information:                                              

Feedback function signature:                                                                                       

                                                                                                                   
 (source: str, statement: str) -> Tuple[float, dict]                                                               
                                                                                                                   

The prefix __record__.app selects this data that exists in your app or typical records:                            

 โ€ข Object of type dict starting with:                                                                              

                                                                                                                   
       {                                                                                                           
         '_retriever': {'retrieve': [...], '_retrieve': [...], '_aretrieve': [...]},                               
         '_response_synthesizer': {'_llm': {...}, 'get_response': [...]},                                          
         'query': [RecordAppCall(...)],                                                                            
         'aquery': [RecordAppCall(...)],                                                                           
         'retrieve': [RecordAppCall(...), RecordAppCall(...)],                                                     
         'synthesize': [RecordAppCall(...), RecordAppCall(...)]                                                    
       }                                                                                                           
                                                                                                                   
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[3], line 43
     35 # Question/statement relevance between question and each context chunk.
     36 f_context_relevance = (
     37     Feedback(provider.context_relevance_with_cot_reasons)
     38     .on_input()
     39     .on(context)
     40     .aggregate(np.mean)
     41 )
---> 43 tru_query_engine_recorder = TruLlama(
     44     query_engine,
     45     app_id="LlamaIndex_App1",
     46     feedbacks=[f_groundedness, f_answer_relevance, f_context_relevance],
     47 )
     49 with tru_query_engine_recorder:
     50     res = await query_engine.aquery("What is the author's name?")

File ~/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/tru_llama.py:303, in TruLlama.__init__(self, app, **kwargs)
    300 kwargs['root_class'] = Class.of_object(app)  # TODO: make class property
    301 kwargs['instrument'] = LlamaInstrument(app=self)
--> 303 super().__init__(**kwargs)

File ~/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/app.py:563, in App.__init__(self, tru, feedbacks, **kwargs)
    560 if self.feedback_mode == FeedbackMode.WITH_APP_THREAD:
    561     self._start_manage_pending_feedback_results()
--> 563 self._tru_post_init()

File ~/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/app.py:710, in App._tru_post_init(self)
    707 dummy = self.dummy_record()
    709 for feedback in self.feedbacks:
--> 710     feedback.check_selectors(
    711         app=self,
    712         # Don't have a record yet, but use an empty one for the non-call related fields.
    713         record=dummy,
    714         warning=self.selector_check_warning
    715     )

File ~/.pyenv/versions/3.9.6/envs/trulens/lib/python3.9/site-packages/trulens_eval/feedback/feedback.py:749, in Feedback.check_selectors(self, app, record, source_data, warning)
    746     return False
    748 else:
--> 749     raise ValueError(
    750         "Some selectors do not exist in the app or record."
    751     )

ValueError: Some selectors do not exist in the app or record.

Setting selectors_nocheck doesn't help.
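
Worth noting: the dummy record printed by the selector check above lists calls under query, aquery, retrieve, and synthesize, but no ainvoke, so a hedged alternative (untested here) is to point the selector at the async method that actually appears in the record:

from trulens_eval import Select

# Hypothetical alternative: the record shows 'aquery' rather than 'ainvoke'.
context = Select.RecordCalls.aquery.rets.source_nodes[:].node.text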