databricks/databricks-vscode

[BUG] Cannot attach to a cluster in VSCode using Databricks Extension

abhilashshakti opened this issue · 6 comments

Describe the bug
I'm trying to work remotely in VSCode while syncing with the Databricks extension.
However, I keep seeing this error:

Access mode should be "Single User" or "Shared"). Currently it is LEGACY_SINGLE_USER_STANDARD. Please attach a new cluster.

I have a new cluster with "Single User" access, but the problem still persists. See the screenshots below.

Screenshots
(screenshots attached)

System information:

  1. Paste the output of the Help: About command (CMD-Shift-P):
    Version: 1.81.1 (Universal)
    Commit: 6c3e3dba23e8fadc360aed75ce363ba185c49794
    Date: 2023-08-09T22:20:33.924Z
    Electron: 22.3.18
    ElectronBuildId: 22689846
    Chromium: 108.0.5359.215
    Node.js: 16.17.1
    V8: 10.8.168.25-electron.0
    OS: Darwin arm64 22.6.0

Databricks Extension Version: Latest version as of Sep 7, 2023

Hi @abhilashshakti. This popup is meant for enabling the dbconnect integration. Since dbconnect requires clusters > 13.x, and those clusters do not have the LEGACY_SINGLE_USER_STANDARD access mode, it should work for them.

If you are not using dbconnect, you can ignore this popup. We will be fixing the language in the next update.
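If you want to check a cluster against those dbconnect requirements yourself, here is a minimal sketch using the Databricks SDK for Python (databricks-sdk); the cluster ID below is a placeholder, not a value from this issue, and this is not the extension's actual check:

```python
# Sketch: inspect a cluster's runtime version and access mode with the
# Databricks SDK. The cluster ID is a hypothetical placeholder.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up host/token from env vars or ~/.databrickscfg

cluster = w.clusters.get(cluster_id="0907-123456-abcdefgh")  # placeholder ID
print("Runtime version:", cluster.spark_version)
print("Access mode:    ", cluster.data_security_mode)

# dbconnect expects a 13.x+ runtime and a non-legacy access mode
# (SINGLE_USER, or USER_ISOLATION for "Shared"); values such as
# LEGACY_SINGLE_USER_STANDARD or LEGACY_TABLE_ACL trigger the popup.
```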

In that case, this is a bug. I think I see where the issue is. Let me try to get the fix into the next release.

This might be a bug with databricks-connect, @kartikgupta-db. I'm getting the same error in a notebook when calling DatabricksSession.builder.getOrCreate(), even though the cluster is set to Single User. Oddly, it also fails on Shared.

In both cases, the cluster config JSON shows "data_security_mode": "LEGACY_SINGLE_USER_STANDARD" for the Single User cluster and
"data_security_mode": "LEGACY_TABLE_ACL" for the Shared cluster.

Rethinking this: are you on UC, @jacksongoode @abhilashshakti?
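
One hedged way to answer that is to ask the Databricks SDK for the workspace's current Unity Catalog metastore assignment; this is a sketch of an approach, not an official diagnostic:

```python
# Sketch: a workspace is UC-enabled only if it has a Unity Catalog metastore
# assigned, so metastores.current() succeeding is a reasonable proxy.
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import DatabricksError

w = WorkspaceClient()
try:
    assignment = w.metastores.current()
    print("UC metastore assigned:", assignment.metastore_id)
except DatabricksError as err:
    print("No UC metastore assigned to this workspace:", err)
```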

Closing this issue for now. A similar issue raised by customers internally was fixed by moving to a UC cluster. Please feel free to reopen the issue if that does not fix it for you.