fix(llm): address an issue where saved AI API keys are not carried over to subsequent sessions
Reproducible Steps
- Create any plots and display via `maidr.show()` (a minimal repro script is sketched below the list)
- Hit the H key and provide API keys for both OpenAI and Google Gemini
- Click Save and Close
- If prompted, save the password in your browser
- Open LLM from the interactive plot area via Ctrl+Shift+/ (on Windows) or Alt+Shift+/ (on Mac)
- Make sure the AI responses are working
- Close the maidr browser and exit out of the current Python REPL
- Repeat the steps above and check whether the AI API keys are preserved across sessions
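For reference, a minimal repro script for the first step. The bar chart is an arbitrary example, and whether `maidr.show()` takes the figure or an individual plot object may vary by maidr version:

```python
import matplotlib.pyplot as plt
import maidr

# Any plot will do; a bar chart is just an arbitrary example.
fig, ax = plt.subplots()
ax.bar(["a", "b", "c"], [3, 5, 2])

# Step 1 above: open the interactive maidr view in the browser.
maidr.show(fig)
```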
Current Behavior
API keys are not carried over to subsequent sessions even when they are saved in the browser.
Suggested Solution
- When `maidr.show()` is executed, search for the following env variables via `os.getenv()`:
  - `OPENAI_API_KEY`
  - `GOOGLE_API_KEY`
- Use these keys as a fallback (see the sketch below)
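A minimal sketch of that fallback, assuming keys already saved by the browser session take precedence; the helper name `_resolve_llm_keys` is hypothetical, not existing maidr code:

```python
import os

def _resolve_llm_keys(saved_keys=None):
    """Hypothetical helper: prefer keys already saved by the browser
    session; fall back to environment variables when they are missing."""
    saved_keys = saved_keys or {}
    return {
        "openai": saved_keys.get("openai") or os.getenv("OPENAI_API_KEY"),
        "gemini": saved_keys.get("gemini") or os.getenv("GOOGLE_API_KEY"),
    }
```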
I have noticed some possible issues with the proposal of using the env variables:
- Missing API: `maidr.js` currently does not have the API property in its JSON schema, so we cannot directly pass the keys from Python to the browser frontend.
- Security risk: We cannot simply bundle the API keys inside the JSON, which would open a security hole.
@dakshpokar Please investigate more reliable solutions. Ideally, we want to make the `maidr.js` session cookie reliable, and this issue needs to be further investigated from the upstream end.
Greetings Professor @jooyoungseo,
Upon investigating the issue, I discovered that the port number changing on each run in interactive mode causes the loss of local storage variables. This is because local storage, where we store the OpenAI and Gemini keys in the `settings_data` field, is bound to the domain-and-port combination. As a result, each new run on a different port loses the stored keys.
As you discussed in the previous comment, I also considered passing the keys from the Python binder to the browser frontend, but that could potentially be a security risk.
Another solution I explored was storing the keys upon initial entry and passing them to subsequent runs. However, this is not feasible due to the Same-Origin Policy.
I also found a way to share local storage data across domains (and ports), but it requires knowing in advance which domain the local storage data should be injected into. In our case we cannot know this, since every run uses a unique port, so this option is eliminated.
Lastly, I considered using cookies, since their Same-Origin Policy is based on the domain name, not the port. Since our domain remains 'localhost', this could work. However, any other web application served from the same domain could access the cookies and retrieve the keys, making this approach unsuitable for our needs.
I am considering some other solutions as well and will keep you posted, but this is what I have found so far. I will take tomorrow to find a reliable fix.
Best regards,
Daksh Pokar
@dakshpokar Can we pin down the port number to make it stable? What's the tradeoff?
Greetings Professor @jooyoungseo,
Port selection is handled internally by py-htmltools. Since we don't have control over this, pinning down the port number doesn't seem feasible.
Professor @jooyoungseo,
I did think of one solution that just might work, but it requires storing the OpenAI and Gemini keys in Python first. On every run, we can inject the keys into the localStorage of the current browser instance. We can easily do so with the following JS:
```js
// Write a key/value pair into the localStorage of the embedded iframe.
function addKeyValueLocalStorage(iframeId, key, value) {
  const iframe = document.getElementById(iframeId);
  if (iframe && iframe.contentWindow) {
    try {
      iframe.contentWindow.localStorage.setItem(key, value);
    } catch (error) {
      console.error('Error accessing iframe localStorage:', error);
    }
  } else {
    console.error('Iframe not found or inaccessible.');
  }
}

addKeyValueLocalStorage('myIframe', 'openAIKey', <<fetch securely from python binder>>);
```
Within the `onload` attribute of the iframe, we can add the above JS snippet.
This works perfectly; we just have to ask the user in the Python binder for the OpenAI and/or Gemini keys. Once that is done, we can store these keys in an encrypted manner and fetch them on the fly whenever an instance is run. A rough sketch of this flow follows below.
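A Python-side sketch of that flow. The encryption uses the `cryptography` package's Fernet as one possible choice, and every name here (`save_api_key`, `inject_keys_js`, the file locations) is hypothetical rather than existing maidr code:

```python
import json
import os
from cryptography.fernet import Fernet

KEY_FILE = os.path.expanduser("~/.maidr_secret")   # hypothetical location
STORE_FILE = os.path.expanduser("~/.maidr_keys")   # hypothetical location

def _fernet():
    # Create a per-user encryption key on first use; reuse it afterwards.
    if not os.path.exists(KEY_FILE):
        with open(KEY_FILE, "wb") as f:
            f.write(Fernet.generate_key())
    with open(KEY_FILE, "rb") as f:
        return Fernet(f.read())

def save_api_key(name, value):
    # Store the API keys encrypted at rest, as proposed above.
    keys = {}
    if os.path.exists(STORE_FILE):
        with open(STORE_FILE, "rb") as f:
            keys = json.loads(_fernet().decrypt(f.read()))
    keys[name] = value
    with open(STORE_FILE, "wb") as f:
        f.write(_fernet().encrypt(json.dumps(keys).encode()))

def inject_keys_js(iframe_id):
    # Decrypt the stored keys and emit one addKeyValueLocalStorage()
    # call per key; the result would be wired into the iframe's onload.
    if not os.path.exists(STORE_FILE):
        return ""
    with open(STORE_FILE, "rb") as f:
        keys = json.loads(_fernet().decrypt(f.read()))
    return "\n".join(
        f"addKeyValueLocalStorage({json.dumps(iframe_id)}, "
        f"{json.dumps(k)}, {json.dumps(v)});"
        for k, v in keys.items()
    )
```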
cc: @SaaiVenkat
@dakshpokar Why do we have to ask users to enter the keys each time? Could we just fetch both the OpenAI and Gemini keys from users' env variables?
Yes, Professor @jooyoungseo, we will store the keys in environment variables and will not ask for them each time. I would like clarification on when to request the keys: if a user denies the request initially, should we ask again? I am considering the best approach to ensure a positive experience for our target audience.
@dakshpokar -- I would rather include instructions on how to add `OPENAI_API_KEY` and `GOOGLE_API_KEY` to their env variables in our user guide. We don't need to implement an interactive prompt, just like other libraries (e.g., openai, langchain, etc.). Please just fetch `OPENAI_API_KEY` and `GOOGLE_API_KEY` from the user's env variables at this point.
Sure, Professor @jooyoungseo, that works!