LuvCjy's Stars
bubbliiiing/mask-rcnn-tf2
A mask-rcnn-tf2 library that can be used to train your own models.
XPixelGroup/BasicSR
Open Source Image and Video Restoration Toolbox for Super-resolution, Denoising, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also supports StyleGAN2, DFDNet.
murufeng/FUIR
A Flexible and Unified Image Restoration Framework (PyTorch), including state-of-the-art image restoration models such as NAFNet, Restormer, MPRNet, MIMO-UNet, SCUNet, SwinIR, HINet, etc. ⭐⭐⭐⭐⭐⭐
open-mmlab/mmagic
OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
hee0624/process_image
Generate noise images for use in the computer vision (CV) domain.
FanChiMao/SUNet
SUNet: Swin Transformer with UNet for Image Denoising
scnu/scnuthesis
A LaTeX template conforming to the master's/doctoral thesis formatting requirements of South China Normal University.
Fan-Treasure/Restormer
Reproduction of the CVPR 2022 paper "Restormer: Efficient Transformer for High-Resolution Image Restoration"
leftthomas/Restormer
A PyTorch implementation of Restormer based on CVPR 2022 paper "Restormer: Efficient Transformer for High-Resolution Image Restoration"
dome272/VQGAN-pytorch
PyTorch implementation of VQGAN (Taming Transformers for High-Resolution Image Synthesis) (https://arxiv.org/pdf/2012.09841.pdf)
wyhuai/DDNM
[ICLR 2023 Oral] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
sashrika15/noise2noise
PyTorch implementation of the Noise2Noise paper.
ZhenyuTan/Noise2Noise-Cryo-EM-image-denoising
PyTorch implementation of Noise2Noise for cryo-EM image denoising
juglab/DivNoising
DivNoising is an unsupervised denoising method to generate diverse denoised samples for any noisy input image. This repository contains the code to reproduce the results reported in the paper https://openreview.net/pdf?id=agHLCOBM5jP
jeongHwarr/various_FCM_segmentation
Image segmentation Using Various Fuzzy C-means Algorithms (FCM, EnFCM, MFCM).
ariffyasri/fuzzy-c-means
Image Segmentation using Fuzzy C Means
SINGROUP/Graph-AFM
Machine learning molecule graphs from atomic force microscopy images.
Probe-Particle/ppafm
Classical force field model for simulating atomic force microscopy images.
bubbliiiing/unet-pytorch
A unet-pytorch source repository that can be used to train your own models.
bigmb/Unet-Segmentation-Pytorch-Nest-of-Unets
Implementation of different kinds of Unet Models for Image Segmentation - Unet , RCNN-Unet, Attention Unet, RCNN-Attention Unet, Nested Unet
zhixuhao/unet
unet for image segmentation
heroineyy/cv_homework--2.0
hoichanjung/AFM_Image_Denoising
Comparative Study of Deep Learning Algorithms for Atomic Force Microscope Image Denoising
CVHub520/X-AnyLabeling
Effortless data labeling with AI support from Segment Anything and other awesome models.
alshedivat/keras-gp
Keras + Gaussian Processes: Learning scalable deep and recurrent kernels.
yatengLG/ISAT_with_segment_anything
Labeling tool with SAM (Segment Anything Model); supports SAM, SAM2, SAM-HQ, MobileSAM, EdgeSAM, etc. Interactive semi-automatic image annotation tool.
TommyZihao/MMSegmentation_Tutorials
Jupyter notebook tutorials for MMSegmentation
TommyZihao/Train_Custom_Dataset
Annotate your own dataset; train, evaluate, test, and deploy your own AI algorithms.
sumner-harris/Deep-Learning-with-ICCD-Images
AdityaTheDev/ReconstructionOfImage-Using-DeepAutoEnccoders
An autoencoder is a neural network whose output layer has the same dimensionality as its input layer: the number of output units equals the number of input units. It replicates its input at the output in an unsupervised manner, and is therefore sometimes called a replicator neural network, reconstructing each dimension of the input by passing it through the network. Using a network merely to replicate its input may seem trivial, but the middle layers have fewer units than the input and output layers, so the input is compressed into a smaller representation held in those middle layers; the output is then reconstructed from this reduced representation.
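The bottleneck idea described above can be sketched as a tiny linear autoencoder in NumPy. This is an illustrative toy, not the repository's actual implementation (which uses Keras-style deep autoencoders on images); all names and sizes here are made up for the example: an 8-dimensional input is squeezed through a 3-unit middle layer and reconstructed, with a few gradient-descent steps reducing the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 8 features each (hypothetical sizes for illustration).
X = rng.normal(size=(4, 8))

# Encoder maps 8 -> 3 (the bottleneck), decoder maps 3 -> 8 back.
W_enc = rng.normal(scale=0.5, size=(8, 3))
W_dec = rng.normal(scale=0.5, size=(3, 8))

def reconstruct(X, W_enc, W_dec):
    code = X @ W_enc          # reduced representation in the middle layer
    return code, code @ W_dec # output has the same dimensionality as the input

# A few steps of gradient descent on the mean squared reconstruction error.
lr = 0.01
losses = []
for _ in range(50):
    code, X_hat = reconstruct(X, W_enc, W_dec)
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Gradients of the MSE loss w.r.t. each weight matrix.
    grad_dec = code.T @ err * (2.0 / X.shape[0])
    grad_enc = X.T @ err @ W_dec.T * (2.0 / X.shape[0])
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

code, X_hat = reconstruct(X, W_enc, W_dec)
print(code.shape, X_hat.shape)  # bottleneck is smaller; output matches the input
```

The middle layer's 3 units force the network to discard information it cannot compress, which is exactly why the reduced representation is useful for tasks such as denoising.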