alejandro-ao/exa-crewai

A few questions about the exa-crewai application

Opened this issue · 2 comments

Very interesting application of crewai!

I am trying to replicate your results from this GitHub repository using two methods:

1 - running it with: poetry run newsletter_gen
I have placed the .env file (with my Exa and Groq API keys) in the root folder (alongside the poetry.lock and pyproject.toml files).
I have no experience with Poetry, so this is somewhat of a hurdle for me.

It seems to work, but I get a lot of these errors in the terminal tab of Visual Studio Code:
requests.exceptions.SSLError: HTTPSConnectionPool(host='telemetry.crewai.com', port=4319): Max retries exceeded with url: /v1/traces (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:1006)')))

2 - running it as a streamlit app: streamlit run app.py
This also seems to work.

But I have a few observations that tell me that not everything is working:

  • No log file output in the logs directory
  • No report.md file with the research output in the root folder
  • No button to download the final report
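As a quick sanity check, a short script can verify whether those artifacts were actually written. Note that the paths (a "logs" directory and "report.md" in the root) are my assumption about where this repo writes its output; adjust them if the project uses other locations. A minimal sketch:

```python
import os

# Quick diagnostic: check whether the expected output artifacts exist.
# NOTE: "logs" and "report.md" are assumed locations, not confirmed
# from the repo's source; change them if the project writes elsewhere.
def check_outputs(root="."):
    checks = {
        "logs directory": os.path.isdir(os.path.join(root, "logs")),
        "report.md": os.path.isfile(os.path.join(root, "report.md")),
    }
    for name, ok in checks.items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
    return checks
```

Running this from the repo root after a generation run shows at a glance which outputs are missing.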

Can you shed some light on this?

Also I get this WARNING:

C:\Users\jfhmb\AppData\Local\pypoetry\Cache\virtualenvs\newsletter-gen-NPsKieJm-py3.11\Lib\site-packages\langchain\_api\module_import.py:87: LangChainDeprecationWarning: Importing GuardrailsOutputParser from langchain.output_parsers is deprecated. Please replace the import with the following:

from langchain_community.output_parsers.rail_parser import GuardrailsOutputParser

In which file should I do that replacement?
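Regarding which file to change: the deprecated import may live in this repo's own source, or inside an installed dependency (in which case the fix belongs upstream and the warning can be left alone). A small helper can locate it; this is a hypothetical sketch of mine, not part of the repo, and the .py-only filter is my choice:

```python
import os

# Walk a source tree and report every .py file that still imports the
# deprecated name, with line numbers, so the replacement suggested by
# the warning can be applied in the right place.
def find_deprecated_import(root=".", needle="GuardrailsOutputParser"):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fn in filenames:
            if not fn.endswith(".py"):
                continue
            path = os.path.join(dirpath, fn)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        if needle in line and "import" in line:
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # unreadable file: skip rather than abort the scan
    return hits
```

Point it at the repo's src directory first; if it finds nothing there, the import is coming from a dependency.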

Can you shed some light on this?

UPDATES:

  • I experimented with local Ollama LLMs in exa-crewai and it seems to work, mainly because this circumvents the rate-limit errors (the Groq models often give me those errors when I use them in exa-crewai)
  • I also have a possible solution for getting rid of the errors related to “host='telemetry.crewai.com'” mentioned above.

I will first test these further over the coming days.
If that works out fine, I will publish the results here!

I found this solution for getting rid of the errors related to “host='telemetry.crewai.com'” when using CrewAI in the exa-search app:

I placed this piece of code at the top of crew.py (note that os must be imported first, and the variable should be set before any crewai import so the telemetry SDK sees it at startup):

import os
os.environ["OTEL_SDK_DISABLED"] = "true"

After that I only see this message appearing in the terminal window of VS Code sometimes:
"2024-06-14 02:20:17,425 - 25632 - __init__.py-__init__:1218 - WARNING: SDK is disabled."

A problem I keep having when using local Ollama LLMs in exa-crewai:
I get no links in the final report, even though real links were found by the Exa search earlier in the run.
Sometimes placeholder links like [https://www.example.com/] end up in the final report instead, which is no good of course.

Also, a lot of the time only one article "survives" into the final report.
Could this be caused by too short a search time window combined with a complex search query?

What can be the cause of this?
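While debugging the link problem, it can help to audit the generated report for placeholder URLs before accepting it. This is only a sketch: the regex, the host list, and the function name are my own assumptions, not anything from the repo:

```python
import re

# Find every URL in the report text and separate obvious placeholders
# (example.com hosts, which LLMs sometimes emit instead of passing
# through the real links found by the Exa search) from real links.
URL_RE = re.compile(r"https?://[^\s\])]+")
PLACEHOLDER_HOSTS = ("example.com", "www.example.com")

def audit_links(report_text):
    good, placeholders = [], []
    for url in URL_RE.findall(report_text):
        host = url.split("/")[2]  # scheme://HOST/...
        if host in PLACEHOLDER_HOSTS:
            placeholders.append(url)
        else:
            good.append(url)
    return good, placeholders
```

If the placeholder list is non-empty (or the good list is empty), the run can be retried or the prompt tightened to instruct the model to copy URLs verbatim from the research step.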