hyeon-jeong's Stars
eriklindernoren/PyTorch-GAN
PyTorch implementations of Generative Adversarial Networks.
opencv/opencv_contrib
Repository for OpenCV's extra modules
capstone-engine/capstone
Capstone disassembly/disassembler framework for ARM, ARM64 (ARMv8), Alpha, BPF, Ethereum VM, HPPA, LoongArch, M68K, M680X, Mips, MOS65XX, PPC, RISC-V(rv32G/rv64G), SH, Sparc, SystemZ, TMS320C64X, TriCore, Webassembly, XCore and X86.
volatilityfoundation/volatility
An advanced memory forensics framework
WebAssembly/wabt
The WebAssembly Binary Toolkit
plaidml/plaidml
PlaidML is a framework for making deep learning work everywhere.
jfzhang95/pytorch-deeplab-xception
DeepLab v3+ model in PyTorch. Support different backbones.
Crypto-Cat/CTF
CTF challenge (mostly pwn) files, scripts, etc.
kootenpv/neural_complete
A neural network trained to help write neural network code using autocomplete
chervonij/DFL-Colab
DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab
goberoi/faceit
A script that makes it easy to swap faces in videos (including YouTube videos) using the faceswap library.
llSourcell/deepfakes
This is the code for "DeepFakes" by Siraj Raval on Youtube
Dvd848/CTFs
Writeups for various CTFs
iitzco/faced
🚀 😏 Near real-time CPU face detection using deep learning
EndlessSora/DeeperForensics-1.0
[CVPR 2020] A Large-Scale Dataset for Real-World Face Forgery Detection
yuezunli/celeb-deepfakeforensics
Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics
seba-1511/dist_tuto.pth
Official code for "Writing Distributed Applications with PyTorch", PyTorch Tutorial
themefisher/kross-jekyll
Kross is a creative portfolio theme for Jekyll.
volny/stylish-portfolio-jekyll
A Jekyll implementation of the Stylish Portfolio template by Start Bootstrap
cc-hpc-itwm/DeepFakeDetection
Qingcsai/awesome-Deepfakes
All about Deepfakes & Detection
iliasprc/Deep-Fakes
JStehouwer/FFD_CVPR2020
iamaaditya/pixel-deflection
Deflecting Adversarial Attacks with Pixel Deflection
alejandrodebus/Pytorch-Utils
Useful functions for working with PyTorch. At the moment, there are functions for cross-validation and kernel visualization.
dfrws/dfrws2018-challenge
The DFRWS 2018 challenge (extended into 2019) is the second in a series of challenges dealing with the Internet of Things (IoT). IoT is defined broadly to include network- and Internet-connected devices, usually used for monitoring and automation tasks. Consumer-grade “smart” devices are increasing in popularity and scope. These devices and the data they collect are potentially interesting for digital investigations, but they also bring a number of new investigative challenges.
abhn/portfolio
A simple, lightweight, mobile-responsive portfolio template with a modern look.
spellml/deeplab-voc-2012
Garima13a/MNIST_GAN
In this notebook, we'll build a generative adversarial network (GAN) trained on the MNIST dataset and use it to generate new handwritten digits. GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab, and have since exploded in popularity. A few examples to check out: Pix2Pix, CycleGAN & Pix2Pix in PyTorch (Jun-Yan Zhu), and a list of generative models.

The idea behind GANs is that you have two networks, a generator 𝐺 and a discriminator 𝐷, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts whether the data it receives is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real training data. The discriminator is a classifier trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images; this vector is often called a latent vector, and the space it lives in is called latent space. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator. If you're only interested in generating new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
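The description above walks through the generator/discriminator setup in words; below is a minimal, hypothetical PyTorch sketch of that idea. The fully connected `Generator` and `Discriminator` classes and the `train_step` helper are illustrative names, layer sizes, and training details of my own choosing, not the notebook's actual code.

```python
# Minimal GAN sketch for flattened 28x28 MNIST images (illustrative, not the notebook's code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a flattened 28x28 'fake' image."""
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, img_dim), nn.Tanh(),  # outputs in [-1, 1], matching normalized real images
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a flattened image as real (high logit) or fake (low logit)."""
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit; paired with BCEWithLogitsLoss below
        )

    def forward(self, img):
        return self.net(img)

def train_step(G, D, real_imgs, opt_G, opt_D, latent_dim=100):
    """One adversarial step on a batch of real images with shape [B, 784]."""
    criterion = nn.BCEWithLogitsLoss()
    batch_size = real_imgs.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator: real images should score 1, generated images 0.
    z = torch.randn(batch_size, latent_dim)
    fake_imgs = G(z).detach()  # detach so the generator is not updated by D's loss
    d_loss = criterion(D(real_imgs), real_labels) + criterion(D(fake_imgs), fake_labels)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator: try to make the discriminator label generated images as real.
    z = torch.randn(batch_size, latent_dim)
    g_loss = criterion(D(G(z)), real_labels)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

In use, you would instantiate both networks and an Adam optimizer for each, then call `train_step` on batches of MNIST images normalized to [-1, 1]; the generator alone is kept afterwards for sampling new digits.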
SchwiftyUI/Tetris
It's Tetris!