
Konveyor AI (Kai)

Konveyor AI (Kai) simplifies the process of modernizing application source code to a new platform. It uses Large Language Models (LLMs) guided by static code analysis, along with data from Konveyor. This data provides insights into how similar problems were solved by the organization in the past, helping streamline and automate the code modernization process.

Pronunciation of 'kai': https://www.howtopronounce.com/ka%C3%AC-4

Approach

Kai implements a Retrieval Augmented Generation (RAG) approach that leverages data from Konveyor to generate code suggestions that aid in migrating legacy code bases to a different technology. The intent of this RAG approach is to shape the code suggestions to be similar to how an organization has solved problems in the past, without additional fine-tuning of the model.
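
To make the RAG idea concrete, here is a minimal sketch of how a prompt could combine a retrieved, previously solved example with the code currently being migrated. The SolvedExample shape and build_prompt helper are hypothetical illustrations, not Kai's actual internals.

```python
# Illustrative sketch only: SolvedExample and build_prompt are hypothetical,
# not part of Kai's actual codebase.
from dataclasses import dataclass


@dataclass
class SolvedExample:
    """A past incident plus the change an organization used to resolve it."""
    description: str
    before: str
    after: str


def build_prompt(incident_message: str, snippet: str, example: SolvedExample) -> str:
    """Combine the current incident with a retrieved solved example so the
    suggestion is shaped like the organization's past fixes."""
    return (
        "You are helping migrate legacy code to a new technology.\n\n"
        "A similar issue was previously resolved like this:\n"
        f"Issue: {example.description}\n"
        f"Before:\n{example.before}\n"
        f"After:\n{example.after}\n\n"
        "Now resolve this incident in the same style:\n"
        f"Incident: {incident_message}\n"
        f"Code:\n{snippet}\n"
    )
```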

The approach begins with static code analysis via the Kantra tool to find areas in the source code that need attention. Kai then iterates through the analysis results and works with LLMs to generate code changes that resolve the incidents identified by the analysis.
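
A rough sketch of that loop is below, assuming a JSON analysis report with a list of incidents; the field names and the ask_llm callable are assumptions for illustration, not Kai's real data model or API.

```python
# Illustrative sketch only: the report fields and the ask_llm callable are
# assumptions for illustration, not Kai's real data model or API.
import json


def fix_incidents(report_path: str, ask_llm) -> dict[str, str]:
    """Iterate over incidents found by static analysis and collect an
    LLM-suggested change for each affected file."""
    with open(report_path) as f:
        report = json.load(f)

    suggestions: dict[str, str] = {}
    for incident in report.get("incidents", []):
        prompt = (
            f"Rule {incident['ruleset']} flagged {incident['file']} "
            f"at line {incident['line']}: {incident['message']}\n"
            "Propose an updated version of the affected code."
        )
        suggestions[incident["file"]] = ask_llm(prompt)
    return suggestions
```

In Kai itself this flow is driven by the backend service rather than a standalone script; the snippet only illustrates the shape of the iteration.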

Demo Video


Blog Posts

Getting Started

  1. Run the Kai backend image locally with sample data
  2. Walk through a guided scenario to evaluate how Kai works

Launch the Kai backend with sample data

The quickest way to get up and running is to use the sample data committed to the Kai repo along with the podman compose up workflow:

  1. git clone https://github.com/konveyor/kai.git
  2. cd kai
  3. Run podman compose up. The first time this is run, it will take several minutes to download images and populate the sample data.
    • After the first run the DB will be populated, and subsequent starts will be much faster as long as the kai_kai_db_data volume is not deleted.
    • To clean up all resources, run podman compose down && podman volume rm kai_kai_db_data.
  4. The Kai backend is now running and ready to serve requests

Guided walk-through

After you have the Kai backend running via podman compose up, you can work through a guided scenario that shows Kai in action:

  • docs/scenarios/demo.md walks through using Kai to complete a migration of a Java EE app to Quarkus.

Other ways to run Kai

The above is a quick path to get Kai running so you can see how it works. If you'd like to take a deeper dive into running Kai against data in Konveyor or your own custom data, please see docs/Getting_Started.md

Debugging / Troubleshooting

  • The Kai backend writes logging information to the logs directory.
    • You can adjust the log level by changing the file_log_level = "debug" value in kai/config.toml.
  • Tracing information is written to disk to aid deeper exploration of prompts and LLM results. See docs/contrib/Tracing.md

Technical design documents

  • See our technical design related information at docs/design

Roadmap and Early Builds

  • Kai is in its early development phase and is NOT ready for production usage.
  • See Roadmap.md to learn about the project's goals and milestones.
  • Please see docs/Evaluation_Builds.md for information on early builds.

Contributing

Our project welcomes contributions from any member of our community. To get started contributing, please see our Contributor Guide.

Code of Conduct

Refer to Konveyor's Code of Conduct here.