Pinned Repositories
Advanced-Deep-Trading
Mostly experiments based on the book "Advances in Financial Machine Learning"
apollo
An open autonomous driving platform
cap6610_project
Car_Detection
Deep-Trading
Algorithmic trading with deep learning experiments
EEL6512ImageProcessComputerVision
ECEComputerVision
EGS1006
Git_test
Test Git command
LectureNotes
Lecture notes from Fall 2018 Foundations of Machine Learning course at the University of Florida. These notes were written by Alina Zare.
LOC
Museum Localization
chenshenlv's Repositories
chenshenlv/Advanced-Deep-Trading
Mostly experiments based on the book "Advances in Financial Machine Learning"
chenshenlv/apollo
An open autonomous driving platform
chenshenlv/cap6610_project
chenshenlv/Car_Detection
chenshenlv/Deep-Trading
Algorithmic trading with deep learning experiments
chenshenlv/EEL6512ImageProcessComputerVision
ECEComputerVision
chenshenlv/EGS1006
chenshenlv/Git_test
Test Git command
chenshenlv/LectureNotes
Lecture notes from Fall 2018 Foundations of Machine Learning course at the University of Florida. These notes were written by Alina Zare.
chenshenlv/LOC
Museum Localization
chenshenlv/MNIST_GAN
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix; CycleGAN & Pix2Pix in PyTorch, by Jun-Yan Zhu; and a list of generative models.

The idea behind GANs is that you have two networks, a generator G and a discriminator D, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts whether the data it receives is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real training data. The discriminator is a classifier trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

In the general structure of a GAN (using MNIST images as data), the latent sample is a random vector that the generator uses to construct its fake images. This vector is often called a latent vector, and the vector space it lives in is called latent space. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator. If you're interested only in generating new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
chenshenlv/Multiple_Polygon_Generate
chenshenlv/Optix
chenshenlv/optix_advanced_samples
chenshenlv/raytracinginoneweekendincuda
The code for the ebook Ray Tracing in One Weekend by Peter Shirley translated to CUDA by Roger Allen. This work is in the public domain.
chenshenlv/render_kinect
Simulation of Kinect Measurements
chenshenlv/Sound-localization
chenshenlv/SoundLab
chenshenlv/Spoon-Knife
This repo is for demonstration purposes only.