Data Science Hacks is created and maintained by Analytics Vidhya for the data science community.
It includes a variety of tips, tricks and hacks related to data science and machine learning.
These hacks are for all the data scientists out there. It doesn't matter if you are a beginner or an advanced professional, these hacks will definitely make you more efficient!
Feel free to contribute your own data science hacks here. Make sure that your hack follows the contribution guidelines.
This repository is part of the free course by Analytics Vidhya. To learn more such awesome hacks, visit Data Science Hacks, Tips and Tricks.
How can you extract image data directly from Chrome in one click? Imagine that you want to build your own machine learning project but you don't have enough data; collecting it can be a daunting task. Worry not, you can use the ResourceSaver extension to download the data directly! Let's see how.
Steps:
- Install the Chrome extension from the given URL.
- Go to Google Images or any webpage from where you want to save the data.
- Open Inspect Element and click on the ResourceSaver tab.
- Click on the button Save All Resources and a zip file will be created.
- Unzip the file and open the folder encrypted-tbn0.gstatic.com.
- You can find the images here.
Pandas apply is one of the most commonly used functions for playing with data and creating new variables. It returns some value after passing each row/column of a data frame through some function. The function can be either built-in or user-defined.
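A minimal sketch of both flavors on a toy dataframe (column names invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# built-in function applied to each column
col_sums = df.apply(sum)

# user-defined function applied to each row (axis=1) to create a new variable
df["total"] = df.apply(lambda row: row["a"] + row["b"], axis=1)
```

With `axis=1` the function receives one row at a time; without it, one column at a time.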
It helps you select a subset of data based on the values in the dataframe.
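Assuming this hack refers to boolean indexing (with plain brackets or df.loc), a minimal sketch on invented data:

```python
import pandas as pd

df = pd.DataFrame({"city": ["Delhi", "Mumbai", "Pune"],
                   "sales": [250, 400, 150]})

# keep only rows where sales exceed 200
high_sales = df[df["sales"] > 200]

# combine conditions with & / |, each condition wrapped in parentheses
subset = df.loc[(df["sales"] > 100) & (df["city"] != "Pune")]
```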
It is used to create MS Excel-style pivot tables. Levels in the pivot table are stored in MultiIndex objects (hierarchical indexes) on the index and columns of the resulting DataFrame.
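A minimal sketch of pd.pivot_table on invented sales data:

```python
import pandas as pd

df = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "product": ["A", "B", "A", "B"],
    "sales": [10, 20, 30, 40],
})

# spreadsheet-style summary: rows = region, columns = product
table = pd.pivot_table(df, values="sales", index="region",
                       columns="product", aggfunc="sum")
```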
The pd.crosstab() function is used to get an initial “feel” (view) of the data.
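A minimal sketch on invented categorical data, cross-tabulating one variable against another:

```python
import pandas as pd

df = pd.DataFrame({"gender": ["M", "F", "M", "F", "M"],
                   "bought": ["yes", "no", "yes", "yes", "no"]})

# frequency table: how often each (gender, bought) combination occurs
counts = pd.crosstab(df["gender"], df["bought"])
```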
It is used to apply vectorized string functions on a pandas dataframe column. Let’s say you want to split the names in a dataframe column into first name and last name. pandas.Series.str along with split( ) can be used to perform this task.
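The first-name/last-name split described above can be sketched like this (names invented):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ada Lovelace", "Alan Turing"]})

# expand=True returns one column per split part
df[["first_name", "last_name"]] = df["name"].str.split(" ", expand=True)
```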
Here is an interesting hack to extract email IDs present in long pieces of text using just 2 lines of Python with regular expressions. Extracting information from social media posts and websites has become a common practice in data analytics, but sometimes we end up trying complicated methods to achieve things that can be solved easily by using the right technique.
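One possible two-line version (the regex is a simplified pattern, not a full RFC 5322 validator):

```python
import re

text = "Contact us at support@example.com or sales@example.org for details."
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
```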
One of the common assumptions in linear regression is that the model's errors follow a normal distribution, and many models work better with roughly Gaussian features, but we all know that's usually not the case in real life. We often need to transform our data into a normal/Gaussian distribution.
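A minimal sketch of one common transformation, the log transform, on synthetic right-skewed data (scikit-learn's PowerTransformer is another option):

```python
import numpy as np

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # right-skewed sample

# the log of lognormal data is (approximately) normally distributed
transformed = np.log(skewed)
```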
Preprocessing is one of the key steps for improving the performance of a model. One of the main reasons for text preprocessing is to remove unwanted characters from the text, like punctuation, emojis and links, which are not required for our problem statement.
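A minimal regex-based cleanup sketch (the sample string and patterns are illustrative; adapt them to your problem statement):

```python
import re

raw = "Check this out!!! 🤖 https://example.com #awesome"

text = re.sub(r"http\S+", "", raw)          # remove links
text = re.sub(r"[^A-Za-z0-9\s]", "", text)  # remove punctuation and emojis
text = re.sub(r"\s+", " ", text).strip()    # collapse leftover whitespace
```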
The Elbow Method is used for identifying the value of k in k-Nearest Neighbors. It is a plot of the error at different values of k, and we select the k value with the least error!
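A minimal sketch computing the error curve on the iris dataset (plotting `errors` against `k_values` would give the elbow plot):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

k_values = list(range(1, 11))
errors = []
for k in k_values:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    errors.append(1 - knn.score(X_test, y_test))  # misclassification error

# pick the k with the least error
best_k = k_values[errors.index(min(errors))]
```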
An important part of data analysis is preprocessing. Many times we need to scale our features; in the case of k-NN we always need to scale the data before building the model, or else it will give spurious results.
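A minimal scaling sketch with StandardScaler on invented data whose two features live on very different scales:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# two features on wildly different scales
X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

# standardize each column to mean 0 and standard deviation 1
X_scaled = StandardScaler().fit_transform(X)
```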
Most of the data collected today holds date and time variables. There is a lot of information that you can extract from these features and utilize in your analysis!
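A minimal sketch using pandas' `.dt` accessor on invented timestamps:

```python
import pandas as pd

df = pd.DataFrame({"timestamp": ["2020-01-15 10:30:00", "2020-06-01 18:45:00"]})
df["timestamp"] = pd.to_datetime(df["timestamp"])

# the .dt accessor exposes many ready-made features
df["year"] = df["timestamp"].dt.year
df["month"] = df["timestamp"].dt.month
df["dayofweek"] = df["timestamp"].dt.dayofweek
df["hour"] = df["timestamp"].dt.hour
```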
Deep learning models usually require a lot of data for training. But acquiring massive amounts of data comes with its own challenges. Instead of spending days manually collecting data, you can make use of Image Augmentation techniques. It is the process of generating new images. These new images are generated using the existing training images, and hence we don't have to collect them manually.
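Libraries such as Keras ship full augmentation pipelines; the core idea can be sketched with plain NumPy flips and rotations on a stand-in array (a random array here, in place of a real image):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in image

# each transform yields a new training image from the original
augmented = [
    np.fliplr(image),  # horizontal flip
    np.flipud(image),  # vertical flip
    np.rot90(image),   # 90-degree rotation
]
```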
Tokenization is the primary task while building the vocabulary. HuggingFace recently created a library for tokenization which provides an implementation of today's most used tokenizers, with a focus on performance and versatility. Key feature: ultra-fast; they can encode 1 GB of text in ~20 seconds on a standard server's CPU.
You can extract categorical and numeric features into separate dataframes in just 1 line of code! This can be done using the select_dtypes function.
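A minimal sketch on an invented mixed-type dataframe:

```python
import pandas as pd

df = pd.DataFrame({"age": [25, 32],
                   "salary": [50000.0, 64000.0],
                   "city": ["Delhi", "Pune"]})

numeric_df = df.select_dtypes(include="number")      # int and float columns
categorical_df = df.select_dtypes(include="object")  # string columns
```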
Do you want to perform quick data analysis on your dataframe? You can use pandas profiling to generate a profile report of your dataset in just 1 line of code!
Convert a wide-form dataframe into a long-form dataframe in just 1 line of code! In pd.melt(), one or more columns are used as identifiers. To "unmelt" the data, use the pivot() function.
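A minimal round-trip sketch on invented exam scores:

```python
import pandas as pd

wide = pd.DataFrame({"name": ["A", "B"],
                     "math": [90, 80],
                     "science": [85, 95]})

# wide -> long: 'name' is the identifier column
long_df = pd.melt(wide, id_vars="name", var_name="subject", value_name="score")

# long -> wide again ("unmelt") with pivot()
wide_again = long_df.pivot(index="name", columns="subject", values="score")
```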
Do you know how you can get the history of all the commands run inside your jupyter notebook? Use %history, jupyter notebook's built-in magic function! Note: even if you have cut cells in your notebook, %history will print those commands as well!
Create a heatmap of a pandas dataframe using seaborn! It helps you understand the complete range of values at a glance.
Scikit-learn has released its stable 0.22.1 version with new features and bug fixes. One new function is plot_confusion_matrix, which generates an extremely intuitive and customisable confusion matrix for your classifier. Bonus tip: you can specify the format of the numbers appearing in the boxes using the values_format parameter ('n' for whole numbers, '.2f' for floats, etc.).
What will be the output if you run the following commands in a single cell of your jupyter notebook? df.shape df.head() Of course, it will be the first five rows of your dataframe. Can we get the output of both commands run in the same cell? You can do it using InteractiveShell.
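A notebook configuration snippet for this (run it once in a cell; afterwards every top-level expression in a cell is displayed, not just the last one):

```python
from IPython.core.interactiveshell import InteractiveShell

# "all" displays every expression in a cell instead of only the last one
InteractiveShell.ast_node_interactivity = "all"
```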
Most of you have heard about the library tqdm, and you might be using it to track the progress of long-running for loops. Most of the time we write complex functions having nested for loops; tqdm allows tracking those too. Here is how you can track nested loops using tqdm in Python.
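A minimal nested-loop sketch; the loop bodies here are placeholders:

```python
from tqdm import tqdm

total = 0
# leave=False clears each finished inner bar so the bars don't pile up
for i in tqdm(range(3), desc="outer"):
    for j in tqdm(range(100), desc="inner", leave=False):
        total += j
```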
jupyter-themes provides an easy way to change theme, fonts and much more in your jupyter notebook.
Steps -
- Install jupyter-themes -
- using anaconda
conda install -c conda-forge jupyterthemes
- using pip
pip install jupyterthemes
- Check list of themes -
jt -l
- Select a theme
jt -t chesterish
- To restore to default theme -
jt -r
To do this we use jupyter-themes; it provides an easy way to change the theme, fonts and much more in your jupyter notebook.
Steps -
- Install jupyter-themes -
- using anaconda
conda install -c conda-forge jupyterthemes
- using pip
pip install jupyterthemes
- Change the theme, cell width, cell height -
jt -t chesterish -cellw 100% -lineh 170
What do you do when you need to change the data type of a column to DateTime? We can do this directly at the time of reading the data, using the parse_dates argument.
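A minimal sketch using an in-memory CSV (the file contents are invented):

```python
import io

import pandas as pd

csv_data = io.StringIO("date,sales\n2020-01-01,100\n2020-01-02,150\n")

# parse_dates converts the column to datetime64 while reading
df = pd.read_csv(csv_data, parse_dates=["date"])
```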
You can share your jupyter notebook with non-programmers very easily and the best way to do it is by using jupyter nbviewer. Pro tip - You can use Binder to execute the code from nbviewer on your machine!
Do you know how to plot a decision tree in just 1 line of code? Sklearn provides a simple function plot_tree() to do this task. You can tweak the hyperparameters as per your requirements.
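A minimal sketch on the iris dataset (the Agg backend line is only needed outside a notebook):

```python
import matplotlib

matplotlib.use("Agg")  # headless backend so this also runs outside a notebook

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# the 1-line plot; returns one annotation per node drawn
annotations = plot_tree(clf, filled=True)
```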
Do you know how you can invert a dictionary in Python? A dictionary is a mutable, indexed collection of key-value pairs (insertion-ordered since Python 3.7). It is widely used in day-to-day programming and machine learning tasks.
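A minimal sketch with a dict comprehension (sample data invented):

```python
prices = {"apple": 10, "banana": 20, "cherry": 30}

# a dict comprehension swaps keys and values
# (values must be unique and hashable for this to be lossless)
inverted = {value: key for key, value in prices.items()}
```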
Cufflinks binds plotly directly to pandas dataframes! Therefore you can make interactive charts without any hassle or lengthy code.
This hack is about saving contents of a cell to a .py file using the magic command %%writefile and then running the file in another jupyter notebook using the magic command %run
Are you getting confused while printing some of your data structures? Worry not, it is very common. The pprint (pretty-print) module provides an easy way to print data structures in a visually pleasing way!
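A minimal sketch on an invented nested structure:

```python
from pprint import pformat, pprint

nested = {"model": "kNN",
          "params": {"n_neighbors": 5, "weights": "uniform"},
          "scores": [0.91, 0.93, 0.89]}

pprint(nested, width=40)               # neatly indented output
formatted = pformat(nested, width=40)  # same text, as a string
```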