I saw there was a PR to add gpt4all support (#30), but it doesn't seem to have been merged into main. Has there been any thought about expanding local LLM support, specifically Ollama? With the release of the new Llama models, I'm wondering whether they are good enough to run ChemCrow.