
Loading different data formats into Azure Delta Lake


pyspark

Utilising the Azure Databricks platform to load, read, and write files in different formats (Parquet, text, ZIP, archive, SQL database) using PySpark and Hive

A video demonstrates the functionality, speed, and code syntax of the same PySpark code running in a Kaggle Jupyter notebook and on the Azure Databricks platform