You have data in both Databricks and Salesforce that you use to create machine learning models to help your customers and improve your business outcomes. Wouldn't it be nice if you could take advantage of the ease of use, simplicity, and power of Databricks for model training and serving to power ML-related workloads on Salesforce? Now you can, with Salesforce BYOM for Databricks.
Here are just a handful of the many use cases you can unlock with Salesforce BYOM for Databricks:
- Recommend products to customers based on what they've purchased or viewed in the past
- Predict whether a given lead will convert to a sale
- Determine the likelihood that a service case will escalate
- Predict whether a given customer is likely to churn
- Forecast late payments
- and so many more!
This solution accelerator will help you:
- set up the required objects in Salesforce
- query data from Salesforce to set up a feature table in Unity Catalog
- build and train a model in Databricks using MLflow for experiment tracking
- manage model lifecycle using Unity Catalog
- deploy a model to a Databricks serving endpoint
- configure the serving endpoint in Einstein Studio on Salesforce
- score a dataset in Salesforce using the model
You can use this as a starting point for building your own models against Salesforce data in Databricks!
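As a sketch of the model-building steps above, the snippet below trains a simple churn classifier with scikit-learn on synthetic data. The dataset, feature names, and model choice are illustrative only; in a Databricks workspace you would train against your Unity Catalog feature table and log the run with MLflow (shown here as comments, since those calls require a workspace).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic churn data; in practice these features would come
# from a Unity Catalog feature table built from your Salesforce data.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))  # e.g. tenure, spend, support_cases, logins
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# In a Databricks notebook you would wrap the training in an MLflow run and
# register the model in Unity Catalog (names below are hypothetical):
#   with mlflow.start_run():
#       mlflow.log_metric("accuracy", accuracy)
#       mlflow.sklearn.log_model(
#           model, "model",
#           registered_model_name="catalog.schema.churn_model")
```

Once registered in Unity Catalog, the model can be deployed to a Databricks serving endpoint and wired into Einstein Studio as described above.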
Corey Abshire corey.abshire@databricks.com
Sharda Rao sharda.rao@salesforce.com
© 2024 Databricks, Inc. All rights reserved. The source in this notebook is provided subject to the Databricks License [https://databricks.com/db-license-source]. All included or referenced third party libraries are subject to the licenses set forth below.
| library | description | license | source |
|---|---|---|---|
| PyYAML | Reading YAML files | MIT | https://github.com/yaml/pyyaml |
Although specific solutions can be downloaded as .dbc archives from our websites, we recommend cloning these repositories into your Databricks environment. Not only will you get access to the latest code, but you will also be part of a community of experts driving industry best practices and reusable solutions, influencing our respective industries.
To start using a solution accelerator in Databricks, simply follow these steps:
- Clone the solution accelerator repository into Databricks using Databricks Repos.
- Attach the RUNME notebook to any cluster and execute it via Run-All. A multi-task job describing the accelerator pipeline will be created, and a link to it will be provided. The job configuration is written in the RUNME notebook in JSON format.
- Execute the multi-task job to see how the pipeline runs.
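For orientation, a Databricks multi-task job configuration of the kind the RUNME notebook generates looks roughly like the sketch below. The job name, task keys, and notebook paths are illustrative placeholders, not the actual values written by RUNME.

```json
{
  "name": "salesforce-byom-accelerator",
  "tasks": [
    {
      "task_key": "build_feature_table",
      "notebook_task": { "notebook_path": "/Repos/your-org/accelerator/01_feature_table" }
    },
    {
      "task_key": "train_model",
      "depends_on": [ { "task_key": "build_feature_table" } ],
      "notebook_task": { "notebook_path": "/Repos/your-org/accelerator/02_train_model" }
    }
  ]
}
```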
- You might want to modify the samples in the solution accelerator to your needs, collaborate with other users, and run the code against your own data. To do so, start by changing the Git remote of your repository from our samples repository to your organization's repository (learn more). You can then commit and push code, collaborate with other users via Git, and follow your organization's processes for code development.
The cost associated with running the accelerator is the user's responsibility.
Please note that the code in this project is provided for your exploration only and is not formally supported by Databricks with Service Level Agreements (SLAs). It is provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects. The source in this project is provided subject to the Databricks License. All included or referenced third-party libraries are subject to the licenses set forth below.
Any issues discovered through the use of this project should be filed as GitHub Issues on the repo. They will be reviewed as time permits, but there are no formal SLAs for support.