qingfenghcy's Stars
chineseocr/chineseocr
yolo3+ocr
sikaozhe1997/Xin-Yue
Yue Xin: An open letter to the teachers and students of Peking University and to the PKU School of Foreign Languages
freelzy/Tencent_Social_Ads
1st Tencent Social Ads College Algorithm Competition (ranked 14th nationwide)
TheAlgorithms/Java
All Algorithms implemented in Java
qqwweee/keras-yolo3
A Keras implementation of YOLOv3 (Tensorflow backend)
chinese-poetry/chinese-poetry
The most comprehensive database of classical Chinese poetry 🧶: nearly 14,000 poets from the Tang and Song dynasties, close to 55,000 Tang poems plus 260,000 Song poems, and 21,050 ci poems by 1,564 ci poets of the two Song periods.
CSAILVision/semantic-segmentation-pytorch
Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset
houshanren/hangzhou_house_knowledge
Home-buying knowledge summarized from the experience of purchasing a house in Hangzhou in 2017, shared in the hope that it helps others. Buying a house is not easy, so cherish the process.
BoyuanJiang/Age-Gender-Estimate-TF
Face age and gender estimation using TensorFlow
loyalzc/tencent_ad
Tencent Social Ads Algorithm Competition 2018
BladeCoda/Tencent2017_Final_Coda_Allegro
Source code for the 2017 Tencent Social Ads competition (ranked 23rd in the finals)
YouChouNoBB/2018-tencent-ad-competition-baseline
Baseline for the 2018 Tencent Ads Algorithm Competition (online score 0.73)
dhvanikotak/Emotion-Detection-in-Videos
The aim of this work is to recognize six emotions (happiness, sadness, disgust, surprise, fear, and anger) from human facial expressions extracted from videos. To achieve this, we consider people of different ethnicities, ages, and genders, each of whom reacts very differently when expressing emotions. We collected a data set of 149 short videos of both females and males, each expressing all of the emotions described above. The data set was built by students, each of whom recorded a video with no directions or instructions at all. Some videos include more body parts than others; others have objects in the background and even different lighting setups. We wanted the data to be as general as possible, with no restrictions, so it could be a good indicator of our main goal.
The script detect_faces.py detects faces in the video, which we save at a resolution of 240x320. Because this step produces shaky videos, we then stabilized all of them; this can be done in code, and free online stabilizers are also available. The stabilized videos are then run through emotion_classification_videos_faces.py, where we extract features based on histograms of dense optical flow (HOF) and use a support vector machine (SVM) classifier to tackle the recognition problem.
For each video we extract optical flow at every frame. Optical flow measures the motion, relative to an observer, between two frames at each point, so each point in the image carries two values describing the motion vector between the frames: the magnitude and the angle. Since our videos have a resolution of 240x320, each frame yields a feature descriptor of dimensions 240x320x2, and the full video descriptor has dimensions #frames x 240 x 320 x 2. To make videos of different lengths comparable, we summarize each video into a single fixed-size descriptor by computing a histogram of the optical flows: we separate the extracted flows into categories and count the number of flows in each category. Concretely, we split the scene into an s-by-s grid of bins (s = 10 here) to record the location of each feature, quantize the direction of each flow into one of 8 motion directions, and count, for each grid cell, the number of flows in each direction bin. This gives an s x s x 8 descriptor per frame. The per-video summary is then either the average of the histograms in each grid cell across frames (average pooling) or the maximum value per cell across all frames (max pooling).
For classification we use an SVM with a non-linear kernel to recognize new facial expressions. We also considered a Naive Bayes classifier, but SVMs are widely known to outperform it in computer vision. A confusion matrix can be plotted to better visualize the results.
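The pipeline described above (dense optical flow per frame, an s x s x 8 histogram of flow directions, max or average pooling over frames, then an SVM) can be sketched roughly as follows. This is a minimal illustration assuming OpenCV and scikit-learn; the names hof_descriptor, train_videos, and train_labels are placeholders, not identifiers from the repository's own scripts.

```python
# Minimal sketch of the HOF + SVM pipeline described above (not the repo's code).
import cv2
import numpy as np
from sklearn.svm import SVC

GRID = 10    # s-by-s spatial grid
N_DIRS = 8   # direction bins for the flow angle

def hof_descriptor(video_path, pooling="max"):
    """Summarize a video as a flattened GRID x GRID x N_DIRS histogram of optical flows."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError("cannot read " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow: an (H, W, 2) field of per-pixel displacements.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        _, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in [0, 2*pi)
        # Quantize each pixel's flow direction into one of N_DIRS bins.
        dir_bin = np.minimum((ang / (2 * np.pi) * N_DIRS).astype(int), N_DIRS - 1)
        h, w = gray.shape
        cell_h, cell_w = h // GRID, w // GRID
        hist = np.zeros((GRID, GRID, N_DIRS))
        # Count, per spatial cell, how many flow vectors fall in each direction bin.
        for i in range(GRID):
            for j in range(GRID):
                cell = dir_bin[i * cell_h:(i + 1) * cell_h, j * cell_w:(j + 1) * cell_w]
                hist[i, j] = np.bincount(cell.ravel(), minlength=N_DIRS)[:N_DIRS]
        per_frame.append(hist)
        prev_gray = gray
    cap.release()
    stack = np.stack(per_frame)                        # (#frames, GRID, GRID, N_DIRS)
    pooled = stack.max(axis=0) if pooling == "max" else stack.mean(axis=0)
    return pooled.ravel()                              # fixed-length 10*10*8 = 800 vector

# Train a non-linear-kernel SVM on the pooled descriptors.
# train_videos / train_labels are hypothetical placeholders for the labeled data set.
# X = np.array([hof_descriptor(p) for p in train_videos])
# clf = SVC(kernel="rbf", C=1.0, gamma="scale")
# clf.fit(X, train_labels)
```

Max pooling keeps the strongest motion response per cell across the whole clip, while average pooling smooths over frames; either way the video is reduced to the same 800-dimensional descriptor the SVM consumes.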
eragonruan/text-detection-ctpn
Text detection based mainly on the CTPN (Connectionist Text Proposal Network) model in TensorFlow; includes ID card detection.
tesseract-ocr/tesseract
Tesseract Open Source OCR Engine (main repository)
xionghc/Facial-Expression-Recognition
Facial expression recognition in TensorFlow: detects faces in video and recognizes the expression (emotion).
ageitgey/face_recognition
The world's simplest facial recognition api for Python and the command line
julycoding/BAT-ML-1000
A series of 1,000 BAT machine-learning interview questions
1c7/chinese-independent-developer
👩🏿💻👨🏾💻👩🏼💻👨🏽💻👩🏻💻 A list of projects by independent developers -- sharing what everyone is working on
CyC2018/CS-Notes
:books: Essential fundamentals for technical interviews: LeetCode, operating systems, computer networks, and system design
aoapc-book/aoapc-bac2nd
Source code for the book "Beginning Algorithm Contests", second edition
chenyuntc/pytorch-book
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation (《深度学习框架PyTorch:入门与实战》)
Xiangyu-CAS/FashionAI_Keypoints
Heatmap approach for Fashion AI keypoint Challenge
JasonDu1993/paperreader
JasonDu1993/hourglasstensorlfow
Tensorflow implementation of Stacked Hourglass Networks for Human Pose Estimation
Xiangyu-CAS/Realtime_Multi-Person_Pose_Estimation.PyTorch
Pytorch implementation of Realtime_Multi-Person_Pose_Estimation
williamfiset/DEPRECATED-data-structures
A collection of powerful data structures
tensorflow/models
Models and examples built with TensorFlow
exacity/deeplearningbook-chinese
Deep Learning Book Chinese Translation
raghakot/keras-resnet
Residual networks implementation using Keras-1.0 functional API