Application of the ETL process to raw Amazon sales data, with analysis in a Jupyter Notebook and a Tableau dashboard.
Overview • Tools • Architecture • Demo • Support • License
Amazon is the world's largest eCommerce website. It was originally launched as a book-selling website and sold its first book in 1995.
This project applies an Extract, Transform, Load (ETL) process to fictitious raw Amazon sales data. Exploratory Data Analysis (EDA) is then performed in a Jupyter Notebook to extract key insights about the sales, and finally an executive sales dashboard is built in Tableau.
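As a rough illustration of the transform step, here is a minimal pandas sketch. The column names (`Qty`, `Amount`) and cleaning rules are hypothetical; the project's actual logic lives in `src/data/make_dataset.py`.

```python
import pandas as pd

def transform_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw sales DataFrame: drop rows with missing quantity or
    amount, normalize column names, and derive a revenue column."""
    df = df.dropna(subset=["Qty", "Amount"]).copy()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["revenue"] = df["qty"] * df["amount"]
    return df

# Toy raw data standing in for a file from data/raw
raw = pd.DataFrame({
    "Qty": [2, 1, None],
    "Amount": [10.0, 25.0, 5.0],
})
clean = transform_sales(raw)
print(clean)  # the row with a missing Qty is dropped
```

In the real pipeline, the cleaned frame would then be written to `data/interim` or `data/processed` rather than printed.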
The repository directory structure is as follows:
├── LICENSE
├── README.md          <- The top-level README for developers using this project.
│
├── run.py             <- Python script to start the ETL process.
│
├── data
│   ├── interim        <- Intermediate data that has been transformed by the ETL process.
│   ├── processed      <- The final, canonical data set for analysis.
│   └── raw            <- The original, immutable data dump.
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-mwg-initial-data-exploration`.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`.
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module.
│   │
│   └── data           <- Scripts to perform ETL.
│       ├── make_dataset.py
│       └── multiple_files_to_single_excel_file.py
│
├── dashboard          <- Dashboard created using the transformed data.
│   └── Sales Dashboard.twbx
│
└── resources          <- Resources for this README file.
To build this project, the following tools and packages were used:
- Python
- Python packages listed in requirements.txt
- PyCharm
- GitHub
- Jupyter Notebook
- Tableau Desktop
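To reproduce the analysis environment, the packages can be installed into a fresh virtual environment with standard pip commands (the exact package set is whatever `requirements.txt` pins):

```shell
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt
```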
The architecture of this project is straightforward and can be understood from the following diagram.
As the diagram shows, a Python script first performs ETL on the raw dataset. The output of this process is clean data, which is then used for exploratory analysis in a Jupyter Notebook and to create a dashboard in Tableau.
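A minimal sketch of what the `run.py` entry point could look like. The argument handling is an assumption based on the `run.py data/raw` invocation, and `run_etl` is a hypothetical placeholder for the real transforms in `src/data`:

```python
import argparse
import pathlib

def run_etl(raw_dir: pathlib.Path) -> list[str]:
    """Collect the raw CSV files the ETL would process.
    (Placeholder for the real extract/transform/load steps.)"""
    return sorted(p.name for p in raw_dir.glob("*.csv"))

def main() -> None:
    parser = argparse.ArgumentParser(description="Run the ETL pipeline.")
    parser.add_argument("raw_dir", type=pathlib.Path,
                        help="directory containing the raw data dump, e.g. data/raw")
    args = parser.parse_args()
    files = run_etl(args.raw_dir)
    print(f"Processed {len(files)} raw files")

if __name__ == "__main__":
    main()
```

Keeping the entry point this thin means all the actual ETL logic stays importable (and testable) from the `src` package.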
The figure below shows a snapshot of the ETL process being run from the terminal. Run `run.py <raw data directory>`; in my case, I typed `run.py data/raw`. (The figure may take a few seconds to load.)
The following dashboard was created on Tableau.
If you have any doubts, queries, or suggestions, please connect with me on any of the following platforms:
This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.