Provide trustworthy answers to questions about COVID-19 via NLP
Staging: https://covid-staging.deepset.ai/
Prod: https://covid.deepset.ai/
API: https://covid-backend.deepset.ai/docs
🤖 Telegram Bot: Add it to your account via @corona_scholar_bot
- People have many questions about COVID-19
- Answers are scattered across different websites
- Finding the right answers takes a lot of time
- Trustworthiness of answers is hard to judge
- Many answers quickly become outdated
- Aggregate FAQs and texts from trustworthy data sources (WHO, CDC ...)
- Provide a UI where people can ask questions
- Use NLP to match incoming user questions with meaningful answers
- Users can provide feedback about answers to improve the NLP model and flag outdated or wrong answers
- Display the most common queries that lack good answers, to guide data collection and model improvements
- Scrapers to collect data
- Elasticsearch to store texts, FAQs, and embeddings
- NLP models implemented via Haystack to find answers by a) detecting similar questions in FAQs and b) detecting answers in free text (extractive QA); see the sketch after this list
- NodeJS / koa / eggjs middleware
- React Frontend
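To give a rough idea of the FAQ-matching path, here is a minimal sketch using Haystack's FAQ pipeline (1.x API). Host, index name, and embedding model are placeholder assumptions; the actual pipeline and model configuration live in the backend and are exposed via its API.

```python
# Minimal FAQ-matching sketch with Haystack (1.x API).
# Host, index name, and embedding model are illustrative assumptions.
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import EmbeddingRetriever
from haystack.pipelines import FAQPipeline

document_store = ElasticsearchDocumentStore(
    host="localhost",
    index="covid_faq",              # FAQ docs: question as content, answer in meta
    embedding_field="question_emb",
    embedding_dim=384,
    similarity="cosine",
)
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
)
# document_store.update_embeddings(retriever)  # embed stored FAQ questions once

pipeline = FAQPipeline(retriever=retriever)
result = pipeline.run(query="How does the virus spread?", params={"Retriever": {"top_k": 3}})
for answer in result["answers"]:
    print(answer.score, answer.answer)
```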
- Check out the demo app to get a basic idea
- Data: At the moment we use scrapers to create a CSV that gets ingested into Elasticsearch (see the ingestion sketch after this list)
- Model: The NLP model that finds answers is built via Haystack. It is configured and exposed via this API.
- Frontend/middleware: TODO
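A rough sketch of that ingestion step, using the official Elasticsearch Python client. The file name and index name are placeholders, not the project's actual setup:

```python
# Sketch: bulk-ingest the scraper CSV into Elasticsearch.
# "faqs.csv" and the index name "covid_faq" are placeholders.
import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def actions(path="faqs.csv"):
    # Each CSV row (question, answer, link, source, lang, ...) becomes one document.
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {"_index": "covid_faq", "_source": row}

helpers.bulk(es, actions())
```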
This project is built by the community for the community. We really appreciate every kind of support! There's plenty of work to do on UX, Design, ML, Backend, Frontend, Middleware, Data collection ...
We are also happy if you just report bugs, add documentation, or flag useful/inappropriate answers returned by the model.
Gitter Channel: GitHub Issues will be the main communication channel, but Gitter can be used for higher-level coordination etc.
Some next TODOs we see:
- Integrate basic data sources via scrapers that return a CSV with the fields: question, answer, answer_html, link, name, source, category, country, region, city, lang, last_update (a sketch of this format follows after this list)
- More scrapers / a smart scraper to scale up the data sources
- Handling of special non-FAQ questions via other APIs (e.g. “How many infections in Berlin?”)
- Improve API to foster external integrations (e.g. Chat systems)
- Logging & storage to foster analysis of common queries with bad results
- Support other languages (data collection)
- English+German evaluation dataset & pipeline to benchmark models
- Benchmark baseline models
- Improve NLP models for FAQ matching (better embeddings, e.g. Sentence-BERT trained on the Quora duplicate questions dataset)
- Add extractive QA models
- Support other languages (models)
- Tune Elasticsearch + Embedding models
- Integrate user feedback mechanism for answers (flag as "correct", "not matching my question", "outdated", "fake news")
- Tab to explore common queries and those with bad answers
- Logos / icons
- Intuitive displaying of search results
- UX for adding/reviewing data sources by the crowd
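For the scraper CSV format listed in the data TODOs above, here is a small sketch that writes rows with exactly those fields. The example row and link are purely illustrative, not real scraped content:

```python
# Sketch: write scraper output as a CSV with the fields listed in the data TODOs.
# The example row below is illustrative only.
import csv
from datetime import date

FIELDS = ["question", "answer", "answer_html", "link", "name", "source",
          "category", "country", "region", "city", "lang", "last_update"]

def write_faq_csv(rows, path="scraper_output.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_faq_csv([{
    "question": "How does COVID-19 spread?",
    "answer": "Mainly through respiratory droplets ...",
    "answer_html": "<p>Mainly through respiratory droplets ...</p>",
    "link": "https://www.who.int/",          # placeholder source link
    "name": "WHO Q&A",
    "source": "WHO",
    "category": "transmission",
    "country": "", "region": "", "city": "",
    "lang": "en",
    "last_update": date.today().isoformat(),
}])
```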