Pinned Repositories
Analysis-of-Wikipedia-Entities
Analyses Wikipedia entities by extracting information from their infoboxes, using the entire 40 GB Wikipedia data dump.
awesome-fashion-ai
A repository to curate and summarise research papers related to fashion and e-commerce
datasets
🤗 Fast, efficient, open-access datasets and evaluation metrics for Natural Language Processing and more in PyTorch, TensorFlow, NumPy and Pandas
Link-News-Entities-with-Wikipedia
Links entities in news feeds to Wikipedia entity pages using the news title.
mobile-vision
Mobile vision models and code
OpenNMT-py
Open-Source Neural Machine Translation in PyTorch http://opennmt.net/
Perceptron-Algorithms
Implementation of two-class and multi-class perceptron algorithms
Phrase-Based-Model
Implementation of a phrase-based model to translate sentences from English to German and vice versa
tsne-tensorboard-visualisation
This repository provides starter code for visualising embeddings with TensorBoard via TensorFlow
Wikipedia-Search-Engine
A search engine built on the 43 GB Wikipedia data dump of 2013. Search results are returned in real time.
ayushidalmia's Repositories
ayushidalmia/awesome-fashion-ai
A repository to curate and summarise research papers related to fashion and e-commerce
ayushidalmia/Wikipedia-Search-Engine
A search engine built on the 43 GB Wikipedia data dump of 2013. Search results are returned in real time.
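At its core, a search engine like this is presumably built around an inverted index mapping terms to the documents that contain them. A minimal sketch of that idea, with all names invented for illustration:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (AND semantics)."""
    result = None
    for term in query.lower().split():
        postings = index.get(term, set())
        result = postings if result is None else result & postings
    return result or set()

docs = {1: "Wikipedia data dump", 2: "search engine on Wikipedia"}
index = build_inverted_index(docs)
print(search(index, "wikipedia search"))  # {2}
```

Real-time response at this scale would additionally rely on an on-disk index with sorted postings, but the lookup logic is the same.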
ayushidalmia/tsne-tensorboard-visualisation
This repository provides starter code for visualising embeddings with TensorBoard via TensorFlow
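For reference, a minimal sketch of how the TensorBoard embedding projector is typically wired up with the TF 1.x-era API this starter code would predate TF 2 with (directory and variable names here are placeholders, not taken from the repository):

```python
import numpy as np
import tensorflow as tf
from tensorboard.plugins import projector

LOG_DIR = "logs"                                         # assumed output dir
embeddings = np.random.rand(100, 64).astype("float32")   # placeholder vectors

embedding_var = tf.Variable(embeddings, name="embedding")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver([embedding_var])
    saver.save(sess, LOG_DIR + "/model.ckpt")            # projector reads the checkpoint

config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = embedding_var.name
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
```

Running `tensorboard --logdir logs` then exposes the projector tab, which applies t-SNE or PCA to the saved embeddings in the browser.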
ayushidalmia/Phrase-Based-Model
Implementation of a phrase-based model to translate sentences from English to German and vice versa
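As a rough illustration of the phrase-based idea (not the repository's actual decoder), a greedy monotone decode over an invented toy phrase table; real phrase-based decoding adds reordering and a language model:

```python
phrase_table = {
    ("das", "haus"): [("the house", 0.9)],
    ("ist",): [("is", 0.95)],
    ("klein",): [("small", 0.8), ("little", 0.2)],
}

def greedy_decode(source_tokens, max_len=3):
    out, i = [], 0
    while i < len(source_tokens):
        # prefer the longest source phrase that has a table entry
        for n in range(min(max_len, len(source_tokens) - i), 0, -1):
            phrase = tuple(source_tokens[i:i + n])
            if phrase in phrase_table:
                best = max(phrase_table[phrase], key=lambda t: t[1])
                out.append(best[0])
                i += n
                break
        else:
            out.append(source_tokens[i])  # pass unknown words through
            i += 1
    return " ".join(out)

print(greedy_decode(["das", "haus", "ist", "klein"]))  # the house is small
```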
ayushidalmia/Link-News-Entities-with-Wikipedia
Links entities in news feeds to Wikipedia entity pages using the news title.
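A minimal sketch of the title-matching idea, with an invented title set and a simple n-gram matching rule standing in for whatever the repository actually does:

```python
wiki_titles = {"barack obama", "white house", "wikipedia"}

def link_entities(headline, max_ngram=3):
    """Return phrases in the headline that match known Wikipedia titles."""
    tokens = headline.lower().split()
    links = []
    for n in range(max_ngram, 0, -1):          # longest candidates first
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in wiki_titles:
                links.append(candidate)
    return links

print(link_entities("Barack Obama returns to the White House"))
# ['barack obama', 'white house']
```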
ayushidalmia/Analysis-of-Wikipedia-Entities
Analyses Wikipedia entities by extracting information from their infoboxes, using the entire 40 GB Wikipedia data dump.
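The extraction step presumably pulls `| key = value` fields out of `{{Infobox ...}}` templates in the raw wikitext; a simplified regex-based sketch (real dump parsing must handle nested templates):

```python
import re

def parse_infobox(wikitext):
    """Extract key/value fields from the first {{Infobox ...}} template."""
    match = re.search(r"\{\{Infobox(.*?)\}\}", wikitext, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        if "=" in line and line.strip().startswith("|"):
            key, _, value = line.strip().lstrip("|").partition("=")
            fields[key.strip()] = value.strip()
    return fields

sample = "{{Infobox person\n| name = Alan Turing\n| birth_place = London\n}}"
print(parse_infobox(sample))  # {'name': 'Alan Turing', 'birth_place': 'London'}
```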
ayushidalmia/datasets
🤗 Fast, efficient, open-access datasets and evaluation metrics for Natural Language Processing and more in PyTorch, TensorFlow, NumPy and Pandas
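This is a fork of huggingface/datasets; typical usage of the upstream library looks like this (IMDB is a standard example dataset):

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")
print(dataset[0]["text"][:80])  # first 80 characters of the first review
print(dataset.features)         # schema: text (string), label (class label)
```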
ayushidalmia/mobile-vision
Mobile vision models and code
ayushidalmia/Perceptron-Algorithms
Implementation of two-class and multi-class perceptron algorithms
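For reference, a minimal sketch of the two-class perceptron update rule on toy data; the multi-class variant keeps one weight vector per class and updates the winning and correct classes on a mistake:

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """y must be +1/-1; returns weights with a bias term appended."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb bias into the weights
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified (or on boundary)
                w += lr * yi * xi              # perceptron update
    return w

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # [ 1.  1. -1. -1.]
```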
ayushidalmia/ayushidalmia.github.io
A personal webpage for Ayushi Dalmia
ayushidalmia/OpenNMT-py
Open-Source Neural Machine Translation in PyTorch http://opennmt.net/
ayushidalmia/Crawling-Semi-Structured-Data
Finds business email addresses on Yelp pages
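The extraction step likely boils down to a regex over fetched pages; a minimal sketch with a placeholder URL (real crawling also needs robots.txt handling and rate limiting):

```python
import re
import urllib.request

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(url):
    """Fetch a page and return the unique email addresses found in it."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    return sorted(set(EMAIL_RE.findall(html)))

print(extract_emails("https://example.com"))  # [] on the placeholder page
```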
ayushidalmia/d2go
D2Go is a toolkit for efficient deep learning
ayushidalmia/detectron2
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
ayushidalmia/DLBB
:book: :hammer: DLBook Builder
ayushidalmia/Eigen-Faces
An end-to-end system to recognise faces based on Principal Component Analysis
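A minimal sketch of the eigenfaces pipeline with scikit-learn, using random arrays as stand-ins for face images (the principal components of the flattened images are the "eigenfaces"):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
faces = rng.random((40, 64 * 64))       # 40 "images", flattened to vectors
labels = np.repeat(np.arange(10), 4)    # 10 subjects, 4 images each

pca = PCA(n_components=20)              # pca.components_ are the eigenfaces
reduced = pca.fit_transform(faces)

clf = KNeighborsClassifier(n_neighbors=1).fit(reduced, labels)
query = pca.transform(faces[:1])        # project a probe image the same way
print(clf.predict(query))               # [0]
```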
ayushidalmia/Hand-Digit-Recognition-System
An end-to-end system to recognise handwritten digits using a k-NN classifier
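A minimal sketch of the same approach using scikit-learn's built-in 8x8 digits dataset as a stand-in for the repository's own data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"test accuracy: {knn.score(X_test, y_test):.3f}")  # ~0.98
```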
ayushidalmia/ktrain
ktrain is a Python library that makes deep learning and AI more accessible and easier to apply
ayushidalmia/machine-learning-cheat-sheet
Classical equations and diagrams in machine learning
ayushidalmia/nmt
TensorFlow Neural Machine Translation Tutorial
ayushidalmia/python-cloudant-101
This repo helps you get started with Cloudant in Python. It contains scripts to load, index, and query a database.
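For orientation, a minimal sketch against the `cloudant` Python package (credentials, database name, and documents are placeholders, and the exact calls may differ from the repo's scripts):

```python
from cloudant.client import Cloudant

client = Cloudant("USERNAME", "PASSWORD",
                  url="https://USERNAME.cloudant.com", connect=True)

db = client.create_database("demo")          # load: create a db and a document
db.create_document({"_id": "doc1", "name": "Ada", "role": "engineer"})

# query: Cloudant Query with a Mongo-style selector
for doc in db.get_query_result({"role": {"$eq": "engineer"}}):
    print(doc)

client.disconnect()
```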
ayushidalmia/scikit-learn
scikit-learn: machine learning in Python
ayushidalmia/seq2seq
A general-purpose encoder-decoder framework for Tensorflow
ayushidalmia/WebMiningTutorials
Web Mining tutorial and class dumps
ayushidalmia/Word-Based-Model
Implementation of a word-based model (IBM Model 1) to translate sentences from English to German and vice versa.
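A minimal sketch of IBM Model 1's EM training loop over an invented toy corpus, estimating the word-translation probabilities t(e|f) at the heart of such a model:

```python
from collections import defaultdict

corpus = [("das haus", "the house"), ("das buch", "the book"),
          ("ein buch", "a book")]
pairs = [(f.split(), e.split()) for f, e in corpus]

f_vocab = {w for f, _ in pairs for w in f}
t = defaultdict(lambda: 1.0 / len(f_vocab))   # uniform initialisation t(e|f)

for _ in range(20):                           # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for f_sent, e_sent in pairs:
        for e in e_sent:                      # E-step: expected alignment counts
            norm = sum(t[(e, f)] for f in f_sent)
            for f in f_sent:
                c = t[(e, f)] / norm
                count[(e, f)] += c
                total[f] += c
    for (e, f), c in count.items():           # M-step: renormalise
        t[(e, f)] = c / total[f]

print(round(t[("house", "haus")], 3))         # close to 1.0 after training
```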