IgnatPolezhaev
6th-year student at MIPT | Data Science @ SBER
Moscow Institute of Physics and Technology, Moscow, Russia
Pinned Repositories
MDS-ViTNet
We present MDS-ViTNet (Multi-Decoder Saliency by Vision Transformer Network), a novel methodology for enhancing visual saliency prediction, also known as eye-tracking. Our trained model achieves state-of-the-art results across several benchmarks.
Optical-Character-Recognition-CCPD2019
This repository implements license plate recognition on the CCPD2019 dataset.
Generative-adversarial-neural-network-trained-on-the-Flickr-Faces-dataset
This work is based on the PyTorch DCGAN [tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) and trains a generative adversarial network on the [Flickr Faces (FFHQ)](https://github.com/NVlabs/ffhq-dataset) dataset.
Semantic-segmentation-of-skin-lesions
This repository uses the dataset from the ADDI project to segment melanocytic lesions. Two models were used: SegNet and UNet. The data were augmented and then fed into the models. Quality was measured with the IoU metric, computed as the ratio of the intersection of the target and predicted masks to their union. Several loss functions were tried: BCE, Dice, Focal, and SIM. The best result, an IoU of 0.736, was obtained with UNet + BCE loss.
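The IoU metric described above can be sketched in plain Python (a hypothetical helper for illustration, not the repository's code):

```python
def iou(target, pred, threshold=0.5):
    """Intersection over Union for a binary target mask and a predicted mask.

    `target` and `pred` are flat lists of pixel values; `pred` holds
    probabilities and is thresholded into a binary mask first.
    """
    pred_bin = [1 if p >= threshold else 0 for p in pred]
    intersection = sum(t & p for t, p in zip(target, pred_bin))
    union = sum(t | p for t, p in zip(target, pred_bin))
    # Two empty masks overlap perfectly by convention
    return intersection / union if union else 1.0

# Example: 4-pixel masks overlapping in one pixel
print(iou([1, 1, 0, 0], [0.9, 0.2, 0.8, 0.1]))  # prints 0.3333333333333333
```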
Framework-Backpropagation
A neural network framework with backpropagation. The framework contains the layers Linear, ReLU, LeakyReLU, Sigmoid, SoftMax, Dropout, and BatchNorm; the criteria MSE and CrossEntropy; and the SGD optimizer. A model was trained on the MNIST dataset.
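To illustrate the kind of module such a framework contains, here is a minimal Linear layer with a manual backward pass and an in-place SGD step (a simplified sketch, not the framework's actual code):

```python
import random

class Linear:
    """Minimal fully connected layer with manual backpropagation."""

    def __init__(self, in_features, out_features):
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(in_features)]
                  for _ in range(out_features)]
        self.b = [0.0] * out_features

    def forward(self, x):
        self.x = x  # cache the input for the backward pass
        return [sum(wi * xi for wi, xi in zip(row, x)) + b
                for row, b in zip(self.w, self.b)]

    def backward(self, grad_out, lr=0.01):
        # Gradient w.r.t. the input, computed before the weights change
        grad_in = [sum(self.w[o][i] * grad_out[o] for o in range(len(self.w)))
                   for i in range(len(self.x))]
        # SGD update on weights and biases
        for o, g in enumerate(grad_out):
            for i, xi in enumerate(self.x):
                self.w[o][i] -= lr * g * xi
            self.b[o] -= lr * g
        return grad_in
```

Each layer caches what it needs in `forward` and returns the input gradient from `backward`, so layers can be chained into a network.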
Fruit-detection-with-model-fasterrcnn_resnet50_fpn
In this repository, fruit is detected in images from this [dataset](https://www.kaggle.com/datasets/mbkinaci/fruit-images-for-object-detection). The pre-trained PyTorch model fasterrcnn_resnet50_fpn is used. Quality is measured with accuracy and IoU metrics; after training, an F1 score of 0.87 was obtained.
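Detections are matched against ground truth by bounding-box IoU; a minimal sketch (hypothetical helper, assuming the common `(x1, y1, x2, y2)` box format):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, from which precision, recall, and F1 follow.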
Classification-of-dataset-monkeys-from-Kaggle
In this repository, monkeys were classified using three different models: a self-written VGG16, EfficientNet_B4, and DenseNet169.
Fine-tuning-LLaMA-7B
This repository implements a project that generates pick-up lines using the LLaMA-7B language model.
Humans-segmentation-with-help-UNet
In this project, people were segmented in 320×240×3 photos. The model was UNet, with the IoU quality metric and the Dice loss function. Augmentation was also used: mirror reflections and rotations. After training, the model produces 320×240 masks with values in [0, 1]. The final IoU on validation data was about 0.7.
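The Dice loss used here can be sketched as a soft-Dice implementation (simplified, for illustration only):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a predicted mask in [0, 1] and a binary target.

    Returns 1 - Dice coefficient, so perfect overlap gives loss 0
    and no overlap gives loss close to 1. `eps` avoids division by zero.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```

Unlike IoU computed on thresholded masks, the soft Dice loss is differentiable in the predicted probabilities, which is why it is used for training while IoU is used for evaluation.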
IgnatPolezhaev
Config files for my GitHub profile.
IgnatPolezhaev's Repositories
IgnatPolezhaev/Pipeline-for-T5-translator
This repository implements a pipeline for training the T5 model for translation from English to Russian.
IgnatPolezhaev/Recurrent-neural-network-on-the-example-of-IMDB-reviews
This notebook implements classification of IMDB reviews using a recurrent neural network. A convolutional neural network with pre-trained GloVe word embeddings was also used. Achieved prediction accuracy: 86%. This was a homework assignment for a Deep Learning School course.
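A vanilla RNN of the kind used for such review classification updates its hidden state one token embedding at a time; a minimal sketch of a single step (plain lists, illustrative only):

```python
import math

def rnn_step(x, h, w_xh, w_hh, b):
    """One step of a vanilla RNN cell: h' = tanh(W_xh @ x + W_hh @ h + b).

    `x` is the current token's embedding, `h` the previous hidden state;
    the weight matrices are lists of rows, one row per hidden unit.
    """
    return [math.tanh(sum(wx * xi for wx, xi in zip(w_xh[j], x))
                      + sum(wh * hi for wh, hi in zip(w_hh[j], h))
                      + b[j])
            for j in range(len(b))]
```

Running this step over a review's embedded tokens and feeding the final hidden state into a linear classifier gives the sentiment prediction.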
IgnatPolezhaev/Kaggle-competitative-Tabular-Playground-Series-Feb-2022
Welcome to my code for this [Kaggle competition](https://www.kaggle.com/c/tabular-playground-series-feb-2022/overview). To predict the bacteria species, I used a neural network written with the PyTorch library.