Business users and non-technical professionals often need to quickly analyse or transform tabular data in spreadsheets for ad hoc business intelligence. However, they may lack the programming knowledge to do so themselves and must therefore reach out to a data analyst. Such unexpected delays can incur large opportunity costs for time-sensitive business decisions that depend on accurate data analysis.
Generative AI powered by Large Language Models (LLMs) is being used to create novel text, images, and even videos. LLMs specialising in generating code are already being used in enterprise solutions like GitHub Copilot, Gemini Code Assist by Google, watsonx by IBM, and Amazon Q Developer (previously Amazon CodeWhisperer) to boost productivity for developers and programmers. Along the same lines, there now exist LLMs specialising in generating Structured Query Language (SQL), which is widely used across enterprise domains to manage databases and analyse and transform tabular data.
In this workshop, we explore the idea of fetching and analysing data using natural language. We demonstrate how to build a quick proof of concept by creating a Streamlit application that uses Ollama endpoints to analyse and query CSV files. We also discuss the challenges involved and techniques that mitigate them to a certain extent.
- Quick overview of the workshop
- Chapter 1: Converting natural language to SQL using a code LLM over SQL tables (see the first sketch after this outline).
- Discussion on running quantised LLMs locally for memory constraints and data privacy.
- Chapter 2: Hands-on: Setting up an Ollama model server.
- Chapter 3: Metadata pruning for large tables, which helps reduce hallucination and confusion by shortening the prompt (see the pruning sketch below).
- Chapter 4: Hands-on: Data-processing techniques for correcting LLM hallucinations using static analysis with sqlglot (see the sqlglot sketch below).
- Hands-on: Setting up Streamlit and building quick interactive front-end applications (see the Streamlit sketch below).
- Discussion on how to create generic "Chat with X" capabilities
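As a taste of Chapters 1 and 2, here is a minimal sketch of calling a locally running Ollama server to translate an English question into SQL. It assumes Ollama is listening on its default port and that a SQLCoder model has been pulled (e.g. with `ollama pull sqlcoder`); the prompt template and the `sales` schema are illustrative placeholders, not the exact ones used in the workshop.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

# Illustrative schema; in the workshop this is derived from the uploaded CSV.
SCHEMA = """CREATE TABLE sales (
    region TEXT,
    product TEXT,
    amount REAL,
    sold_on DATE
);"""

def nl_to_sql(question: str, schema: str = SCHEMA) -> str:
    """Ask a local code LLM (e.g. SQLCoder) to translate a question into SQL."""
    prompt = (
        "### Task\nGenerate a SQL query to answer the question below.\n\n"
        f"### Database Schema\n{schema}\n\n"
        f"### Question\n{question}\n\n### SQL\n"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "sqlcoder", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(nl_to_sql("What was the total sales amount per region?"))
```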
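For Chapter 3, metadata pruning can be sketched as follows: embed a short description of each column with the MixedBread sentence-embedding model and keep only the columns most similar to the question, so that wide tables do not blow up the prompt. The column names and descriptions below are hypothetical.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical column descriptions for a wide enterprise table.
COLUMNS = {
    "region": "sales region name",
    "product": "product title",
    "amount": "sale amount in USD",
    "sold_on": "date of sale",
    "warehouse_id": "internal warehouse identifier",
}

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

def prune_columns(question: str, top_k: int = 4) -> list[str]:
    """Keep only the columns whose descriptions best match the question."""
    names = list(COLUMNS)
    col_embs = model.encode([f"{n}: {d}" for n, d in COLUMNS.items()])
    q_emb = model.encode(question)
    scores = util.cos_sim(q_emb, col_embs)[0]  # cosine similarity per column
    ranked = sorted(zip(names, scores.tolist()), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(prune_columns("total sales amount per region"))
```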
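For Chapter 4, sqlglot lets us statically analyse the generated SQL before executing it. A minimal sketch: parse the query, reject any column the schema does not contain (one simple form of hallucination), and transpile to the dialect of the engine that will run it. Raising an error instead of auto-repairing, and the Postgres-to-SQLite dialect pair, are simplifying assumptions.

```python
import sqlglot
from sqlglot import exp

KNOWN_COLUMNS = {"region", "product", "amount", "sold_on"}  # from the table schema

def check_and_transpile(sql: str) -> str:
    """Parse generated SQL, flag hallucinated columns, and emit SQLite syntax."""
    try:
        tree = sqlglot.parse_one(sql, read="postgres")
    except sqlglot.errors.ParseError as err:
        raise ValueError(f"Generated SQL does not parse: {err}")

    # Static analysis: every referenced column must exist in the schema.
    unknown = {
        col.name for col in tree.find_all(exp.Column)
        if col.name not in KNOWN_COLUMNS
    }
    if unknown:
        raise ValueError(f"Hallucinated columns: {sorted(unknown)}")

    return tree.sql(dialect="sqlite")

print(check_and_transpile("SELECT region, SUM(amount) FROM sales GROUP BY region"))
```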
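Finally, the Streamlit front end ties the pieces together: upload a CSV, load it into an in-memory SQLite database, and answer chat questions by generating, checking, and executing SQL. This sketch assumes the two helper functions from the earlier sketches live in a hypothetical `nl2sql_helpers` module; run it with `streamlit run app.py`.

```python
import sqlite3

import pandas as pd
import streamlit as st

# Hypothetical module collecting the helper functions sketched above.
from nl2sql_helpers import check_and_transpile, nl_to_sql

st.title("Chat with your CSV")

uploaded = st.file_uploader("Upload a CSV file", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.dataframe(df.head())

    # Load the CSV into an in-memory SQLite database for querying.
    # The table name must match the schema passed to nl_to_sql; in a real
    # app, derive that CREATE TABLE statement from the uploaded DataFrame.
    conn = sqlite3.connect(":memory:")
    df.to_sql("sales", conn, index=False)

    question = st.chat_input("Ask a question about your data")
    if question:
        sql = check_and_transpile(nl_to_sql(question))
        st.code(sql, language="sql")
        st.dataframe(pd.read_sql_query(sql, conn))
```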
This workshop is intended for data engineers, data scientists, and researchers with basic Python experience who are working on Generative AI use cases and want to leverage enterprise data. It may also interest business analysts and business consumers who regularly require data querying and analysis services.
Overall, any professional with some Python programming experience who is interested in getting started with Gen AI stands to benefit from this workshop, since it covers both the end-to-end data pipeline and how to prepare a demo-worthy front-end user interface.
- How to analyse tabular data in CSV format using English language queries.
- How to run LLMs locally or within your organisation network using Ollama.
- How to quickly develop interactive web applications using Streamlit.
- How to create “Chat with X” applications for other data formats.
We will be using the following tools during the workshop. Participants may find it useful to familiarise themselves with these beforehand.
- SQLCoder: an open-source model for NL2SQL. Other models, such as GPT-4, GitHub Copilot, or Amazon Q, can be used as well.
- Ollama: serves language model endpoints locally with fast inference performance.
- Embedding model (MixedBread): the sentence embedding model used for metadata pruning.
- SQLGlot: a no-dependency SQL parser, transpiler, optimizer, and engine. It aims to read a wide variety of SQL inputs (21 different dialects) and output syntactically and semantically correct SQL in the targeted dialects. We use it to correct hallucinations in the generated SQL.
- Model quantisation: running models with reduced-precision weights to fit within memory constraints.