Pinned Repositories
alpaca-lora
Instruct-tune LLaMA on consumer hardware
annotated_diffusion_pytorch
Public repo for HF blog posts
ascii_art
🎨 ASCII art library for Python
attention-visualization
visualizing attention for LLM users
automated-podcast-generation
The project generates podcasts automatically by curating content from a website and converting it into speech files using Google WaveNet. The program also updates an XML feed file to publish each podcast episode to Apple's iTunes.
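The iTunes publishing step boils down to appending an `<item>` entry to the podcast's RSS feed. A minimal sketch of that step using only Python's standard library (the element names follow standard podcast RSS conventions; the template and URLs here are illustrative, not this project's actual feed):

```python
import xml.etree.ElementTree as ET

# Illustrative feed skeleton; a real feed carries iTunes-specific metadata too.
RSS_TEMPLATE = """<rss version="2.0"><channel>
<title>My Podcast</title>
</channel></rss>"""

def add_episode(rss_xml: str, title: str, audio_url: str) -> str:
    """Append a new <item> for an episode to the feed's <channel>."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    # The <enclosure> tag is how podcast clients locate the audio file.
    enclosure = ET.SubElement(item, "enclosure")
    enclosure.set("url", audio_url)
    enclosure.set("type", "audio/mpeg")
    return ET.tostring(root, encoding="unicode")

feed = add_episode(RSS_TEMPLATE, "Episode 1", "https://example.com/ep1.mp3")
```

Each run of the generator would call `add_episode` once per new speech file and rewrite the feed XML that iTunes polls.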
Convert-News-Feed-to-Audio-Files-using-GTTS-in-Python
The project is built on Google Colaboratory using Python. The script extracts the first five items from a website's news feed and converts them to audio files using gTTS.
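The extraction step can be sketched as follows — a minimal illustration using Python's standard library, with a synthetic sample feed (the real project's feed URL and parsing details are not shown here); the gTTS call is indicated in a comment since it needs network access:

```python
import xml.etree.ElementTree as ET

def first_n_items(rss_xml: str, n: int = 5) -> list[str]:
    """Return the titles of the first n <item> entries in an RSS feed."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", "") for item in root.iter("item")][:n]

# Synthetic feed with seven items, standing in for the live news feed.
SAMPLE = "<rss><channel>" + "".join(
    f"<item><title>Story {i}</title></item>" for i in range(1, 8)
) + "</channel></rss>"

titles = first_n_items(SAMPLE)  # first five titles only

# Each title could then be synthesized to an MP3 with gTTS, e.g.:
# from gtts import gTTS
# gTTS(text=titles[0], lang="en").save("story_1.mp3")
```

Limiting to five items keeps the Colab run short; swapping `SAMPLE` for a fetched feed is the only change needed for live data.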
llm-latent-language
Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".
MalConv-Deep-learning-for-PE-malware-classification
iBibek's Repositories
iBibek/ascii_art
🎨 ASCII art library for Python
iBibek/llm-latent-language
Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".
iBibek/-Learning-Interpretability-Tool
The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.
iBibek/Awesome-LLM-Safety
A curated list of security-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the security implications, challenges, and advancements surrounding these powerful models.
iBibek/bark-with-voice-clone
🔊 Text-prompted Generative Audio Model - With the ability to clone voices
iBibek/character_AI_open
Generate multi-round conversation roleplay data based on self-instruct, about 1k different personality data and conversations
iBibek/ChatDoctor
iBibek/ecco
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0).
iBibek/funktio-ai-samples-graph-rag
Samples and demos for Funktio AI
iBibek/Google-Search-API
iBibek/hf-waitress
Serving LLMs in the HF-Transformers format via a PyFlask API
iBibek/IP-Adapter-images
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt.
iBibek/jan-chat-ui
Jan is an open source alternative to ChatGPT that runs 100% offline on your computer. Multiple engine support (llama.cpp, TensorRT-LLM)
iBibek/latent-adversarial-training
iBibek/LLM-Conversation-Safety
[NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
iBibek/LLMs-Finetuning-Safety
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
iBibek/mlx-examples
Examples in the MLX framework
iBibek/ollama-server-docs
iBibek/ongdb-graph-db
ONgDB is an independent fork of Neo4j® Enterprise Edition version 3.4.0.rc02 licensed under AGPLv3 and/or Community Edition licensed under GPLv3
iBibek/phi2-finetune
iBibek/prompt-injection-interp
iBibek/RetrievalTutorials
iBibek/Reverse-Engineering-Tools-for-Large-Language-Models
RevLLM -- Reverse Engineering Tools for Large Language Models
iBibek/sanskrit-nano-gpt
iBibek/sentencepiece
Unsupervised text tokenizer for Neural Network-based text generation.
iBibek/spreadsheet-is-all-you-need
A nanoGPT pipeline packed in a spreadsheet
iBibek/transformer-explainer
Learn How Transformers work in Generative AI with Interactive Visualization
iBibek/TransformerLens
A library for mechanistic interpretability of GPT-style language models
iBibek/uncertain_ground_truth_ddx_dermatology
Dermatology ddx dataset, Jax implementations of Monte Carlo conformal prediction, plausibility regions and statistical annotation aggregation from our recent work on uncertain ground truth (TMLR'23 and ArXiv pre-print).
iBibek/yet-another-applied-llm-benchmark
A benchmark to evaluate language models on questions I've previously asked them to solve.