Pinned Repositories
InternLM-XComposer
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
AMC-1
[ECCV 2018] PyTorch implementation for AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Google-Colab
Getting started with Google Colab
hello-world
Just another repository
lopa07.github.io
Monami Banerjee home page
pytorch-model-training
Train a ResNet18 model on the CIFAR-10 dataset in PyTorch
wide-blocked-sparse-nets
Extending the work "Are wider nets better given the same number of parameters?" (https://arxiv.org/abs/2010.14495).
ms-swift
Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
VILA
VILA - a multi-image visual language model with training, inference, and evaluation recipes, deployable from cloud to edge (Jetson Orin and laptops)
ollama-python
Ollama Python library