Integrate a WebView into a raw window
FredrikNoren opened this issue · 19 comments
I'd like to integrate a webview into a wgpu application, but I'm not sure what the best way to do that is.
Describe the solution you'd like
I'd like to render a webview as a sidebar in my wgpu 3d application.
Describe alternatives you've considered
- Run the webview in a hidden window, snapshot it, and use the snapshot as a texture, feeding user input back by simulating events through eval. Would hinge on #266 or equivalent. Not sure how performant this would be, in the case of animations etc.
- Run the webview inside of the wgpu application somehow, and assign it to a part of the screen. Not sure if this is at all possible
- Swap it around; make everything a webview and then somehow get a handle to a canvas or something which I can use as a draw target for my wgpu application. Not sure if this is possible either.
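On the first alternative, the input-forwarding half can be sketched in plain Rust. This is a purely illustrative helper (not part of wry) that builds the JavaScript string you would hand to something like `evaluate_script` to replay a click inside the hidden webview:

```rust
/// Build a JavaScript snippet that replays a mouse click at client
/// coordinates (x, y) inside the page loaded in the hidden webview.
/// (Hypothetical helper; an eval/evaluate_script call would execute it.)
fn simulate_click_js(x: f64, y: f64) -> String {
    format!(
        "(() => {{\
           const target = document.elementFromPoint({x}, {y}) || document.body;\
           target.dispatchEvent(new MouseEvent('click', {{\
             bubbles: true, clientX: {x}, clientY: {y}\
           }}));\
         }})();"
    )
}
```

Keyboard and scroll events would need similar (and fiddlier) synthesis, which is part of why the latency of this approach is hard to predict.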
Would you assign yourself to implement this feature?
Possibly, but not sure even what the right solution is here yet
Additional context
I'm not sure if those are the only options, or if there are more. If anyone has any input or experience on this I'd love to hear.
I would say you also have two other options which seem more practical imho:
- Only use tauri and compile your wgpu application to WASM, then render inside the browser using WebGL2 (maybe WebGPU if it ever becomes widely supported).
- Run wgpu in headless mode and render to a bitmap (see wgpu's "capture" example). Then pass the bitmap from your Rust backend to the frontend via RPC, where you can display it (for example in a canvas). Of course, if your application depends on user input you would have to feed every input back to the renderer, in which case the first option might be superior.
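One concrete detail if you go the capture route: wgpu requires the bytes-per-row of a texture-to-buffer copy to be a multiple of its 256-byte copy alignment, so the readback buffer usually has padded rows. A self-contained sketch of that padding calculation (the constant mirrors wgpu's `COPY_BYTES_PER_ROW_ALIGNMENT`):

```rust
/// wgpu requires rows in a texture-to-buffer copy to start on a
/// 256-byte boundary (COPY_BYTES_PER_ROW_ALIGNMENT).
const COPY_ALIGN: u32 = 256;

/// Round the unpadded row size (width * bytes per pixel) up to the
/// next multiple of the copy alignment. The "capture" example has to
/// do this before mapping the readback buffer and stripping the
/// padding from each row.
fn padded_bytes_per_row(width: u32, bytes_per_pixel: u32) -> u32 {
    let unpadded = width * bytes_per_pixel;
    (unpadded + COPY_ALIGN - 1) / COPY_ALIGN * COPY_ALIGN
}
```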
This is something I am personally interested in (and actually related to multiple webviews per one window), but there are other important priorities, so it is not going to be implemented soon. Ofc PRs are always welcome.
@FrankenApps My comments on those two;
- We're unfortunately using APIs that aren't available in WASM, so we can't do that currently.
- With wgpu there's usually 1-3 frames of delay when reading a texture back from the GPU to the CPU, so I think doing it that way would lead to a bit too much lag.
@amrbashir Ok cool, so you'd say that the second path (assign a webview to a section of the window) would be the way to go here? Do you have a rough idea what needs to be done to make that happen?
@FredrikNoren Yup, I meant the second option.
I think WRY could be turned into a windowing-agnostic library that doesn't need to rely on just using TAO and could be used with anything that can provide a raw window handle. Linux is a bit weird because we need to use gtk (which wgpu doesn't support afaik) instead of a raw x11/wayland window, so let's ignore that bit for now and focus on macOS and Windows.
On Windows and macOS, we can just make the WebView accept a raw window handle and then use that to add the webview and position it.
Here is an example of what I think the integration could be (all the code below is theoretical and doesn't exist yet, but I can assure you it is possible, at least on Windows):
let window = WgpuWindow::new();
let sidebar_webview = WryWebViewBuilder::new(window.raw_window_handle())
.with_position(0, 0)
.with_size(200, window.height())
.build()
.unwrap();
// wgpu window event loop
if let Event::Resize(size) = event {
    sidebar_webview.resize(200, size.height).unwrap();
}
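The layout math behind that resize handler is trivial but worth pinning down. Here is a self-contained sketch (the `Bounds` type and function name are made up, not wry API) of computing a left-docked sidebar's bounds from the window size, clamped so the sidebar never exceeds the window:

```rust
/// Position and size of a child webview inside its parent window,
/// in physical pixels. (Illustrative type, not part of wry.)
#[derive(Debug, PartialEq)]
struct Bounds {
    x: i32,
    y: i32,
    width: u32,
    height: u32,
}

/// Bounds of a fixed-width sidebar docked to the left edge. The width
/// is clamped to the window width; call this from the resize handler
/// and pass the result to the (hypothetical) webview.resize.
fn sidebar_bounds(window_width: u32, window_height: u32, sidebar_width: u32) -> Bounds {
    Bounds {
        x: 0,
        y: 0,
        width: sidebar_width.min(window_width),
        height: window_height,
    }
}
```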
@amrbashir gtk doesn't support Vulkan contexts, that's why it's not compatible with wgpu. It's using OpenGL for rendering (contrary to many reports that were written just before the gtk4 release), so mixing them isn't really an option.
That's unfortunate since we are stuck using webkit2gtk (for now?).
Gtk4 seems to have experimental vulkan support.
Read this comment about the situation with the Vulkan renderer for GTK4 straight from a GNOME developer. I wouldn't count on it.
It's possible to create a Vulkan context in a Gtk window by creating an empty window and then using gdk_surface_create_vulkan_context, but I don't think that you can use it for any GTK widgets afterwards.
Not sure how relevant this is: I've created a fork of GoldenLayout for DomTerm. This fork allows dragging panes between different top-level windows (using the HTML drag-and-drop API). Normally, a sub-window (pane) is implemented using an <iframe>, which works fairly well, including with Wry. However, when using the Qt front-end we can create a separate QWebEngineView for each pane in addition to a "top-level" one for the GoldenLayout "chrome" (headers, drag handles, etc). This has some benefits:
- Panes dragged to a different or new top-level window don't need to be re-created (serialized).
- DomTerm supports creating panes to browse a web site. Some sites don't allow being embedded in an iframe - this includes google.com. Using a separate QWebEngineView fixes this problem.
- Browsing an external website may create browser state that is not fully re-created when reloading, which creates a bad experience if the browser pane is dragged to another window.
The Qt implementation depends on more than just creating and positioning sub-windows. For example it needs to fiddle with transparency, z-order, and focus. So implementing similar support in Wry is probably a lot of work - or maybe not.
- transparency: we already have options to either make the webview completely transparent or specify an RGBA color for the background (macOS not implemented yet)
- z-order: if possible, we could implement specifying an initial z-order, but I think it should be the user's responsibility to add sub-views/sub-windows to the parent window in the z-order they expect. We could also implement an API that raises the webview to be the top-most in z-order.
- focus: we used to have an API to focus the webview and we can add it back (it didn't support macOS, but that should be possible)
"I think it should be the user responsibility to organize adding sub-views/sub-windows to the parent window in the correct z-order they expect it to be."
In the current Qt implementation it needs to be somewhat dynamic, mainly because of event handling. Normally, the "master" window (which includes the GoldenLayout chrome) is in the background (lower), while the panes are in the foreground (higher). However, on occasion the panes are moved to the background (while still visible, by making the corresponding parts of the master transparent). This happens while menus are active (by default DomTerm uses jsMenus); during a popup (such as "About DomTerm"); and while dragging. Some of these might not be necessary - for example one might use Tauri/Wry menus - though jsMenus does have some niceties.
More fundamental is creating multiple WebView instances in a single Window and being able to position them individually. Positions can be calculated by GoldenLayout, but we need a hook to actually change the position (relative to the top window).
In the current Qt implementation it needs to be somewhat dynamic
Yeah, in more complex apps a webview.raise_to_top() is needed
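To make the z-order discussion concrete, here is a toy model (purely illustrative, not wry API) of a parent window tracking its child webviews back-to-front, with the kind of raise-to-top operation described above:

```rust
/// Toy model of a parent window's child-webview stacking order.
/// Views are stored back-to-front: the last id is the top-most.
struct ViewStack {
    order: Vec<u32>,
}

impl ViewStack {
    fn new() -> Self {
        Self { order: Vec::new() }
    }

    /// New views land on top, mirroring "add sub-views in the z-order
    /// you expect".
    fn add(&mut self, id: u32) {
        self.order.push(id);
    }

    /// Move an existing view to the top of the z-order, like a
    /// hypothetical webview.raise_to_top().
    fn raise_to_top(&mut self, id: u32) {
        if let Some(pos) = self.order.iter().position(|&v| v == id) {
            let v = self.order.remove(pos);
            self.order.push(v);
        }
    }

    /// The currently top-most view, if any.
    fn top(&self) -> Option<u32> {
        self.order.last().copied()
    }
}
```

The dynamic cases DomTerm needs (menus, popups, dragging) would then just be `raise_to_top` calls on the master view or the panes at the right moments.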
More fundamental is creating multiple WebView instances in a single Window, and be able to position them individually.
Yeah, that is a must, although I think GTK might give us a hard time; we'll have to see once we try implementing this.
for example one might use Tauri/Wry menus - though jsMenus does have some niceties.
what niceties are missing from our menus? I am working on an improved API in muda (which will eventually replace the menus in tauri and will be removed from tao/wry) and I'd appreciate any feedback about the new API that is being implemented in this PR
if you have suggestions for menus, please comment in the linked PR so we can keep discussion here around webviews
I've got a fork with a very quick and dirty solution to make wry accept a raw window handle: dev...maxjvh:wry:dev. I made it for https://github.com/maxjvh/nih-plug-webview, which adds webview support to audio plug-ins created with nih-plug. It currently only works on macOS and Windows.
not sure how that fork could/should be refactored in order to be mergeable here
This would also be really useful for us at Graphite, because we'd like to render the viewport natively using wgpu while using Tauri for the editor UI. Sending images via the websocket connection is not really possible.
@wusyong - I think we could make an update here...
We now support winit windows on all platforms starting with v0.33. On Linux, it will require a winit fork, winit-gtk, but it offers the same API as winit.
It's impossible to create webkit2gtk from a raw window handle unless it's a GTK context.
https://github.com/wusyong/gtk-widget-in-x11-window/blob/main/src/main.rs
So supporting winit directly is more feasible. I believe most other crates are also built on top of winit.
@wusyong Can you give an example of how to use wry with winit? I am trying to get the hello world example working with it but having trouble.
@b0o You can enable the winit feature in Cargo.toml:
wry = { version = "0.33", default-features = false, features = ["file-drop", "protocol", "winit"] }
btw RFC for this issue is here tauri-apps/rfcs#12