Alhazen

AI agents + toolkits for scientific knowledge

Alhazen is a framework for scientists to perform local studies of the literature. You can use it to build a local library of scientific knowledge expressions (papers, webpages, database records, etc.), use web-robots and other available tools to locate and download full text, and then use generative AIs to process the content of your library.

The goal of this work is threefold:

  1. To provide a pragmatic AI tool that helps read, summarize, and synthesize available scientific knowledge.
  2. To provide a platform for development of AI tools in the community.
  3. To actively develop working systems for high-value tasks within the Chan Zuckerberg Initiative’s programs and partnerships.

The system uses available tools within the rapidly-expanding ecosystem of generative AI models, including open models that can be run locally (such as Llama-2, Mixtral, Smaug, OLMo, etc.) as well as state-of-the-art commercial APIs (such as OpenAI, Gemini, Mistral, etc.).

To use local models, we recommend running Alhazen on a large, high-end machine such as an Apple MacBook with an M2 chip and 48+ GB of memory. We are not yet actively supporting Windows or Linux.

Caution + Caveats

  • This toolkit provides functionality to use agents to download information from the web. Users and developers should take care to abide by data licensing requirements and the terms and conditions of third-party websites, and to ensure that they do not otherwise infringe upon third-party privacy or intellectual property rights.
  • All data generated by Large Language Models (LLMs) should be reviewed for accuracy.

Installation

Install dependencies

PostgreSQL

Alhazen requires postgresql@14 to run. Homebrew provides an installer:

$ brew install postgresql@14

which can be run as a service:

$ brew services start postgresql@14
$ brew services list

If you install PostgreSQL via Homebrew, you will need to create a postgres superuser to run the psql command:

$ createuser -s postgres

Note that the Postgres.app system also provides a nice GUI for Postgres, but installing the pgvector package is a little more involved.
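
If you use Homebrew, pgvector can be installed as a package. As a quick sanity check (a sketch only; 'alhazen_db' is a purely illustrative database name, since Alhazen databases are named via the --db_name option described below), you can create a database and enable the extension from the command line:

$ brew install pgvector
$ createdb -U postgres alhazen_db
$ psql -U postgres -d alhazen_db -c "CREATE EXTENSION IF NOT EXISTS vector;"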

Ollama

The tool uses the Ollama library to execute large language models locally on your machine. Note that to be able to run the best-performing models on an Apple M1 or M2 machine, you will need at least 48GB of memory.
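
For example, assuming you install Ollama via Homebrew, you can start it as a service and pull one of the models mentioned below (the model tag is illustrative; any model available in the Ollama library will work):

$ brew install ollama
$ brew services start ollama
$ ollama pull mixtral:instruct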

Huridocs

We use a PDF document text extraction and classification system from Huridocs. In particular, our PDF processing requires a Docker image of their PDF Paragraphs Extraction system. To run this, perform the following steps:

1. git clone https://github.com/huridocs/pdf_paragraphs_extraction
2. cd pdf_paragraphs_extraction
3. docker-compose up
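
Once the containers have started, you can confirm that the service is up with standard Docker tooling (the exposed port is whatever the image's docker-compose configuration specifies):

$ docker-compose ps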

Install Alhazen source code

git clone https://github.com/chanzuckerberg/alhazen
conda create -n alhazen python=3.11
conda activate alhazen
cd alhazen
pip install -e .
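
As a quick check that the installation worked (assuming the alhazen conda environment is still active), try importing the package:

python -c "import alhazen"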

How to use

We provide a number of low-level interfaces to work with Alhazen.

Notebooks

We have developed numerous worked examples of corpora that can be generated by running queries on public sources and then processing the results with LLM-enabled workflows. See the nbs/cookbook subdirectory for examples.

Marimo Dashboards

We provide simple dashboards using Marimo notebooks. These are runnable, 'reactive' notebooks (similar to the excellent ObservableHQ system, but implemented in Python) that offer lightweight dashboards and data visualization.

For a dashboard that shows the contents of all active databases on the current machine, run

marimo run dashboards/000_corpus_browser.py 

Applications

We use simple Python modules to provide a modular command line interface (CLI) for Alhazen. For example, to launch a simple Gradio chatbot for interacting with your library, use the following command structure:

python -m alhazen.apps.chat --loc <path/to/location/of/data/files> --db_name <database_name>
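
For example, with purely illustrative values for the data location and database name:

python -m alhazen.apps.chat --loc ~/alhazen_data --db_name my_corpus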

Environment Variables

The following environment variables will need to be set:

  • ALHAZEN_MODEL_TYPE = 'ollama' for locally-running models, or 'openai' for OpenAI's models
  • ALHAZEN_MODEL_NAME = the name of the model to use, e.g., 'mixtral:instruct' or 'gpt-4'

To use other commercial services, you should also set the appropriate environment variables to gain access. Examples include:

  • OPENAI_API_KEY
  • NCBI_API_KEY
  • VERTEXAI_PROJECT_NAME
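
For example, to run against a locally-hosted Ollama model, you might set (values shown are illustrative; adjust them to your setup):

export ALHAZEN_MODEL_TYPE=ollama
export ALHAZEN_MODEL_NAME=mixtral:instruct

Or, to use OpenAI's API instead:

export ALHAZEN_MODEL_TYPE=openai
export ALHAZEN_MODEL_NAME=gpt-4
export OPENAI_API_KEY=<your OpenAI API key>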

Code Status and Capabilities

This project is still at a very early stage, but we are attempting to provide access to the full range of its capabilities as we develop them.

The system is built using the excellent nbdev environment. Jupyter notebooks in the nbs directory are processed based on directive comments contained within notebook cells (see the nbdev guide) to generate the source code of the library, as well as the accompanying documentation.

Examples of the use of the library to address research / landscaping questions specified in the use cases can be found in the nb_scratchpad/cookbook subdirectory of this github repo.

Contributing

We warmly welcome contributions from the community! Please see our contributing guide and don’t hesitate to open an issue or send a pull request to improve Alhazen.

This project adheres to the Contributor Covenant code of conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to opensource@chanzuckerberg.com.

Where does the Name ‘Alhazen’ come from?

One thousand years ago, Ḥasan Ibn al-Haytham (965-1039 AD) studied optics through experimentation and observation. He advocated that a hypothesis must be supported by experiments based on confirmable procedures or mathematical reasoning, making him an early pioneer of the scientific method five centuries before Renaissance scientists started following the same paradigm (Website, Wikipedia, Tbakhi & Amir 2007).

We use the Latinized form of his name (‘Alhazen’) to honor his contribution (which goes largely unrecognized within non-Islamic communities).

Famously, he was quoted as saying:

The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.

Here, we seek to develop an AI capable of applying scientific knowledge engineering to support CZI’s mission. We seek to honor Ibn al-Haytham’s critical view of published knowledge by creating an AI-powered system for scientific discovery.

Note: when describing our agent, we will use non-gendered pronouns (they/them/it).