An extension to the Legend framework for Spark / Delta Lake based environments, combining the best of open data standards with open-source technologies.
In addition to the JDBC connectivity to Databricks enabled from the legend-engine itself, this project helps organizations define data models that can be converted into efficient data pipelines, ensuring the data being queried is of high quality and availability. Raw data can be ingested as a stream or in batch and processed in line with the business semantics defined through the Legend interface. The domain-specific language defined in Legend Studio can be interpreted as a series of Spark SQL operations, helping analysts create Delta Lake tables that not only guarantee the schema definition but also comply with the expectations, derivations and constraints defined by business analysts.
Make sure the jar file of org.finos.legend-community:legend-delta:X.Y.Z and all of its dependencies are available on your Spark classpath, together with a Legend data model (version-controlled on GitLab) previously compiled to disk or packaged as a jar file and available on your classpath. For Python support, install the corresponding library from the PyPI repository:
pip install legend-delta==X.Y.Z
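For interactive sessions, the jar and its transitive dependencies can alternatively be resolved from Maven Central at launch time via Spark's --packages option. This is a sketch of the launch command, not a step the source prescribes; replace X.Y.Z with an actual released version:

```shell
# Resolve legend-delta and its transitive dependencies from Maven Central
# when the Spark session starts (replace X.Y.Z with a released version).
spark-shell --packages org.finos.legend-community:legend-delta:X.Y.Z
```

The same --packages flag works with spark-submit and pyspark.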
We show how to extract the schema, retrieve and enforce expectations, and create Delta tables in both the Scala and Python sample notebooks.
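As an illustrative sketch of that workflow, the snippet below loads a compiled Legend model, derives the Spark schema and expectations for an entity, and persists conforming data as a Delta table. The class, method, and entity names here (LegendFileLoader, get_schema, get_expectations, the employee entity) are assumptions for illustration only; consult the sample notebooks for the actual API. It also assumes an active SparkSession bound to spark, as in a Databricks notebook.

```python
# Hypothetical sketch only: LegendFileLoader, get_schema and
# get_expectations are assumed names -- see the sample notebooks
# for the real legend-delta API. Assumes an active SparkSession
# available as `spark` (e.g. in a Databricks notebook).
from legend.delta import LegendFileLoader  # assumed module layout

# Load a Legend data model previously compiled to disk
legend = LegendFileLoader().loadResources('/path/to/compiled/model')

# Derive the Spark schema for a Legend entity (assumed entity name)
schema = legend.get_schema('databricks::entity::employee')

# Retrieve the expectations (constraints) defined by business analysts
expectations = legend.get_expectations('databricks::entity::employee')

# Enforce the schema on raw data and persist it as a Delta table
raw_df = spark.read.format('json').schema(schema).load('/path/to/raw')
raw_df.write.format('delta').saveAsTable('legend_employee')
```

Expectations retrieved this way can then be applied as filters or table constraints so that only records satisfying the business rules land in the curated table.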
Copyright 2021 Databricks, Inc.
Distributed under the Apache License, Version 2.0.
SPDX-License-Identifier: Apache-2.0