pablogiaccaglia's Stars
mlabonne/llm-course
A course for getting into Large Language Models (LLMs), with roadmaps and Colab notebooks.
ray-project/ray
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Pythagora-io/gpt-pilot
The first real AI developer
advimman/lama
🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022
jxnl/instructor
Structured outputs for LLMs.
codewithsadee/vcard-personal-portfolio
vCard is a fully responsive personal portfolio website that adapts to all devices.
shroominic/codeinterpreter-api
👾 Open source implementation of the ChatGPT Code Interpreter
mlfoundations/open_flamingo
An open-source framework for training large multimodal models.
roboflow/awesome-openai-vision-api-experiments
Must-have resource for anyone who wants to experiment with and build on the OpenAI vision API 🔥
bermanmaxim/LovaszSoftmax
Code for the Lovász-Softmax loss (CVPR 2018)
luciddreamer-cvlab/LucidDreamer
Official code for the paper "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes".
Xwin-LM/Xwin-LM
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
chaoyi-wu/PMC-LLaMA
The official code for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine".
richard-peng-xia/awesome-multimodal-in-medical-imaging
A collection of resources on applications of multi-modal learning in medical imaging.
WAMAWAMA/TNSCUI2020-Seg-Rank1st
Source code of the 1st-place solution for the segmentation task in the MICCAI 2020 TN-SCUI challenge.
Jianing-Qiu/Awesome-Healthcare-Foundation-Models
cheng-01037/Self-supervised-Fewshot-Medical-Image-Segmentation
[ECCV'20] Self-supervision with Superpixels: Training Few-shot Medical Image Segmentation without Annotation (code & data-processing pipeline).
snap-stanford/med-flamingo
cambridgeltl/visual-med-alpaca
Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMa-7B.
uni-medical/STU-Net
The largest pre-trained medical image segmentation model (1.4B parameters), trained on the largest public dataset (>100k annotations), as of April 2023.
PathologyFoundation/plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that can extract visual and language features from pathology images and text descriptions; it is a fine-tuned version of the original CLIP model.
duyhominhnguyen/LVM-Med
Release of LVM-Med pre-trained models.
ai-forever/KandinskyVideo
KandinskyVideo: a multilingual, end-to-end text-to-video latent diffusion model.
zhaozh10/ChatCAD
[COMMSENG'24, TMI'24] Interactive Computer-Aided Diagnosis using LLMs
camenduru/CoDeF-colab
Emory-HITI/EMBED_Open_Data
Data descriptor and sample notebooks for the Emory Breast Imaging Dataset (EMBED) hosted on the AWS Open Data Program
chaoyi-wu/GPT-4V_Medical_Evaluation
LLaVA-VL/LLaVA-Med-preview
JRC1995/Multilingual-BERT-Disaster
Resources for: Cross-Lingual Disaster-related Multi-label Tweet Classification with Manifold Mixup (ACL SRW 2020)
indigo-ai/BERTino
Repository of BERTino, an Italian DistilBERT model