astral-sh/ruff-vscode

Migrate to ruffd

charliermarsh opened this issue · 13 comments

We should use ruffd instead of pygls for the Language Server. Similar to rust-analyzer, we should bundle ruffd into the extension directly.

Happy to jump on this. Is bundling the sole issue, or was there anything else you ran into?

@Seamooo - Nope! I didn't look into it much; I just decided to do the easy thing (use the Python template) to get something usable up on the Marketplace. Let me know how I can help!

If ruffd is location-agnostic (i.e., does not care about the CWD), then you can greatly simplify everything in the extension, and I am happy to help improve performance by limiting the number of processes we spawn.

You can also use the VS Code API directly without the Language Server and remove a lot of boilerplate code that exists only to host and manage a language server.

One thing to note: ruffd must detect the config file based on the file's location, not on the CWD. It should also re-scan or reload settings if they change. A common issue with server-style linting is that settings are not picked up if changes are made after the server has started.
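To illustrate the reload concern, here is a minimal stdlib-only Python sketch (not ruffd's actual implementation; `SettingsCache` is a hypothetical name) of invalidating cached settings when the config file's mtime changes, so edits made after server startup are still picked up:

```python
import os
from pathlib import Path


class SettingsCache:
    """Re-read settings when the config file changes on disk (hypothetical sketch)."""

    def __init__(self, config_path: Path):
        self.config_path = config_path
        self._mtime = None
        self._settings = None

    def get(self) -> str:
        mtime = os.path.getmtime(self.config_path)
        if mtime != self._mtime:
            # Config changed on disk (or first load): re-read it.
            self._mtime = mtime
            self._settings = self.config_path.read_text()
        return self._settings
```

A real server would parse the TOML and watch for file events rather than polling the mtime on every request, but the cache-invalidation shape is the same.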

Fortunately, the LSP has taken most of that into consideration: it provides a workspace root on initialization, a "workspace did change" notification when there is a change, and a few other notifications for file creation, etc. At this stage, settings are resolved at diagnostic generation, but they are based on the raw on-disk representation of the file rather than on open buffers (i.e., if you don't save your settings, they won't affect linting). As such, there's currently no room for a misstep with configuration, outside of an issue with ruff's internal configuration resolution.
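The notification method names above are standard LSP. As a rough, framework-free sketch (real servers would use pygls, tower-lsp, or similar; the `server_state` shape here is hypothetical), handling them might look like:

```python
def handle_notification(server_state: dict, method: str, params: dict) -> None:
    """Dispatch the two LSP workspace notifications discussed above."""
    if method == "workspace/didChangeWorkspaceFolders":
        # Keep the set of workspace roots in sync with the client.
        removed = {f["uri"] for f in params["event"]["removed"]}
        added = [f["uri"] for f in params["event"]["added"]]
        server_state["roots"] = [
            r for r in server_state["roots"] if r not in removed
        ] + added
    elif method == "workspace/didChangeWatchedFiles":
        # E.g. a pyproject.toml changed on disk: drop any cached settings for it.
        for change in params["changes"]:
            server_state["settings_cache"].pop(change["uri"], None)
```

The parameter shapes (`event.added`/`event.removed` with `uri` fields, and `changes` with `uri`/`type`) follow the LSP specification for these two notifications.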

Live-updating settings from open buffers would have two requirements: a unified settings-resolution interface (admittedly already existing in ruff::pyproject::find_pyproject_toml), and accepting updates to TOML files.

If this is something that seems important, I'm happy to set up an issue over at ruffd.

@charliermarsh I've been playing around with this a bit and so far see three options for how the integration could happen:

  • submodule ruffd
  • move the vscode-specific stuff to ruffd (similar to rust-analyzer)
  • download the release binary for ruffd (similar to how ruff is bundled, but from either crates.io or a GitHub release)

Development should be fine with all three options (the release binary may be a little more unwieldy, but some setup can make it less painful), so it mainly comes down to personal preference.

I generally prefer monorepos, but for OSS I think there are some advantages to maintaining separate repos. Let's start with a submodule? That seems fine to me.

Is ruffd multi-root and no-root aware? I have not dug into how ruff picks up config.

The multi-root scenario is important for multi-Python setups: a user has one big project root containing multiple packages, and each package can have its own venv and dependencies. This is common when people are developing, for example, both server and client packages.

The no-root scenario is where a user opens just a file rather than a workspace.

I believe ruff::pyproject::find_project_root describes the behaviour you're looking for. As this is called for each file in ruffd, the pyproject.toml in the file's ancestry (terminating at Git or Mercurial repository roots) will be used as the config source.
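That ancestry walk can be rendered roughly as follows (a Python sketch for illustration only; the real logic lives in Rust in ruff::pyproject, and the exact stopping behaviour there may differ):

```python
from pathlib import Path
from typing import Optional


def find_project_root(directory: Path) -> Optional[Path]:
    """Walk upward from `directory`, returning the first ancestor containing
    a pyproject.toml; stop at a Git or Mercurial repository root.
    A rough sketch of the described behaviour, not ruff's actual code."""
    for ancestor in [directory, *directory.parents]:
        if (ancestor / "pyproject.toml").is_file():
            return ancestor
        if (ancestor / ".git").exists() or (ancestor / ".hg").exists():
            # Repository boundary: don't look above this directory.
            return ancestor
    return None
```

Because the walk starts from each file's own directory, two packages under one workspace root naturally resolve to their own pyproject.toml files, which also covers the multi-root scenario above.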

@Seamooo - Do you think it's worth me adding code actions to the current version of the plugin, to support autofix from VS Code? Or would you advise just waiting for a ruffd migration?

I think it's worthwhile, as that's going to happen sooner.

Part of the migration should be feature parity. The key feature missing from ruffd is on-edit diagnostic generation. I've gone back and forth on whether to add this with full-text parsing as a first step, or to wait until incremental parsing and incremental diagnostic generation are in place. Given that this has been sitting in limbo for a week, though, I think achieving feature parity faster is the higher priority.