Pinned Repositories
ViT-CIFAR
PyTorch implementation of the Vision Transformer [Dosovitskiy, A. (ICLR'21)], modified to reach over 90% accuracy on CIFAR-10 trained FROM SCRATCH with a small number of parameters (6.3M; the original ViT-B has 86M).
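The first step of any ViT variant is turning the image into a sequence of patch tokens. A minimal sketch for CIFAR-10-sized inputs (the patch size and embedding dimension here are illustrative assumptions, not the repo's actual hyperparameters):

```python
import torch
import torch.nn as nn

# Hypothetical ViT-style patch embedding for 32x32 CIFAR-10 images.
# patch=4 and dim=192 are illustrative choices, not the repo's settings.
patch, dim = 4, 192
embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + project

x = torch.randn(8, 3, 32, 32)                  # a CIFAR-10 batch
tokens = embed(x).flatten(2).transpose(1, 2)   # (8, 64, 192): an 8x8 grid of patches
```

A strided convolution does the patch split and the linear projection in one call; the 64 resulting tokens are then fed to the transformer encoder.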
MLP-Mixer-CIFAR
PyTorch implementation of Mixer-nano (0.67M parameters; the original Mixer-S/16 has 18M) reaching 90.83% accuracy on CIFAR-10, trained from scratch.
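The core of MLP-Mixer is a block that alternates an MLP over the token axis with an MLP over the channel axis. A minimal sketch (dimensions are illustrative assumptions, not the repo's Mixer-nano configuration):

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Sketch of one MLP-Mixer block: token mixing, then channel mixing.
    All sizes below are illustrative, not the repo's configuration."""
    def __init__(self, n_tokens, dim, token_hidden, channel_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                          # x: (B, tokens, dim)
        y = self.norm1(x).transpose(1, 2)          # (B, dim, tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix across tokens
        x = x + self.channel_mlp(self.norm2(x))    # mix across channels
        return x

blk = MixerBlock(n_tokens=64, dim=128, token_hidden=256, channel_hidden=512)
out = blk(torch.randn(2, 64, 128))                # shape is preserved
```

The transpose is what lets an ordinary `nn.Linear` act across the token dimension instead of the channel dimension.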
image-classification-pytorch
A collection of image classification models along with results for CIFAR-10/100.
FastNST-TF2
Re-implementation of Fast Neural Style Transfer [Johnson, J. (ECCV'16)] in TensorFlow 2.
UNet
Re-implementation of U-Net [Ronneberger, O. (MICCAI'15)] in PyTorch.
TransGAN-PyTorch
(Ongoing) Unofficial re-implementation of TransGAN [Jiang, Y. (2021)].
ShuffleChannelLayer
Novel regularization by shuffling channels of feature maps.
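One way to realize channel-shuffling regularization is to randomly permute the channel dimension of a feature map during training and pass it through unchanged at evaluation time. A minimal sketch of that idea (the class name and design are assumptions, not the repo's code):

```python
import torch
import torch.nn as nn

class ShuffleChannel(nn.Module):
    """Hypothetical sketch: randomly permute the channel dimension of a
    feature map as a regularizer. Active only in training mode; identity
    at evaluation time, like Dropout."""
    def forward(self, x):                  # x: (B, C, H, W)
        if self.training:
            perm = torch.randperm(x.size(1), device=x.device)
            x = x[:, perm]                 # shuffle channels
        return x

layer = ShuffleChannel()
x = torch.randn(2, 8, 4, 4)
layer.train()
y = layer(x)                               # same shape, channels permuted
```

Because the layer has no parameters, it adds no capacity; it only perturbs which downstream filters see which channels, similar in spirit to Dropout's train/eval switch.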
deep-latent-sequence-model
PyTorch implementation of "A Probabilistic Formulation of Unsupervised Text Style Transfer" by He et al., ICLR 2020.
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
gpt_index
GPT Index is a project consisting of a set of data structures designed to make it easier to use large external knowledge bases with LLMs.
omihub777's Repositories
omihub777/mteb
MTEB: Massive Text Embedding Benchmark
omihub777/ViT-CIFAR
PyTorch implementation of the Vision Transformer [Dosovitskiy, A. (ICLR'21)], modified to reach over 90% accuracy on CIFAR-10 trained FROM SCRATCH with a small number of parameters (6.3M; the original ViT-B has 86M).
omihub777/japanese-lora-llm
A collection of Japanese LoRA-tuned LLMs.
omihub777/gpt_index
GPT Index is a project consisting of a set of data structures designed to make it easier to use large external knowledge bases with LLMs.
omihub777/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
omihub777/deep-latent-sequence-model
PyTorch implementation of "A Probabilistic Formulation of Unsupervised Text Style Transfer" by He et al., ICLR 2020.
omihub777/MLP-Mixer-CIFAR
PyTorch implementation of Mixer-nano (0.67M parameters; the original Mixer-S/16 has 18M) reaching 90.83% accuracy on CIFAR-10, trained from scratch.
omihub777/UNet
Re-implementation of U-Net [Ronneberger, O. (MICCAI'15)] in PyTorch.
omihub777/sim-real
omihub777/TransGAN-PyTorch
(Ongoing) Unofficial re-implementation of TransGAN [Jiang, Y. (2021)].
omihub777/image-classification-pytorch
A collection of image classification models along with results for CIFAR-10/100.
omihub777/ShuffleChannelLayer
Novel regularization by shuffling channels of feature maps.
omihub777/FastNST-TF2
Re-implementation of Fast Neural Style Transfer [Johnson, J. (ECCV'16)] in TensorFlow 2.
omihub777/ShareConv
ShareConv (SConv) is a novel convolution layer that shares its weights across channels.
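One plausible reading of "shares its weights across channels" is a depthwise convolution in which every channel is filtered by the same single kernel. A minimal sketch of that interpretation (the class name and design are assumptions, not the repo's actual implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShareConv(nn.Module):
    """Hypothetical sketch: a single k x k kernel shared across all input
    channels, applied depthwise. Parameter count is k*k regardless of the
    number of channels."""
    def __init__(self, kernel_size=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(1, 1, kernel_size, kernel_size))
        self.padding = padding

    def forward(self, x):                          # x: (B, C, H, W)
        c = x.size(1)
        # replicate the shared kernel once per channel for a grouped conv
        w = self.weight.repeat(c, 1, 1, 1)         # (C, 1, k, k)
        return F.conv2d(x, w, padding=self.padding, groups=c)

sc = ShareConv()
y = sc(torch.randn(2, 8, 16, 16))                  # spatial shape preserved
```

Gradients from every channel flow back into the one shared parameter tensor, so the layer stays tiny (9 weights for a 3x3 kernel) no matter how wide the feature map is.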