MicrosoftEdge/WebView2Feedback

Off-screen rendering with WebView2

ajaymonga opened this issue · 109 comments

Hi
Is it possible to get WebView2 to render into a shared memory region like CEF can?
https://bitbucket.org/chromiumembedded/cef/wiki/GeneralUsage#markdown-header-off-screen-rendering

My application uses a two-process architecture: the main process does not have network access, and a second process uses CEF to render web content into a shared memory region from which the main app can read the pixels. I am wondering if I can achieve this using WebView2.

Thanks

AB#28491736

Currently this is not possible. We have offscreen rendering on our backlog, and are tracking it in #20, but I think this ask is clearer so I'll also add this issue to our item. Thanks!

Hi, do you have any updates on this which you're able to share?

Unfortunately not yet. We have not begun work on this yet. This is a large amount of work, and while very high on our priority list, gets bumped each quarter as higher priority asks come in. I really want to do this work as it's currently one of our top asks, but the earliest that could happen is Q1 2022 at this point.

avail commented

Any news on this?

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

doxxx commented

We are currently using CefSharp to render into an offscreen buffer that is sent to an FPGA to be composited onto a live video feed. The render includes playback of MPEG4-encoded video. We were looking into using WebView because CefSharp's default Chromium build does not include the MPEG4 codec. But without offscreen rendering, that's a moot point. We've since produced our own custom build of Chromium to include the codec.

Our use case is to show web content in our immersive VR spaces. See https://www.igloovision.com/software/enterprise-package/igloo-web

We need access to the web rendered textures so that we can warp and blend multiple images to projector outputs to create a clear seamless view on the inside of a cylinder or room.

Currently we use CEF for the web input, with a user-maintained patch for getting the shared textures. See https://bitbucket.org/chromiumembedded/cef/pull-requests/285 and read down to the end of the comments for full details. We build CEF with proprietary codecs included. The builds of our app and CEF are done in C++.

This is unsatisfactory because it uses the deprecated custom compositing path (which could be removed in the future) instead of Skia, and it is difficult to keep updated.

We would look at moving to WebView2 if access to the rendered textures could be provided and supported, along with the ability to include proprietary codecs.

Thanks for the info @rjx-ray! If you don't need high-frequency rendering you could consider using the CapturePreview function to get an image, but this isn't great for things like videos or other animations.
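For one-shot captures, a minimal Win32/COM sketch of that path -- assuming an already-initialized `webview` (ICoreWebView2*) and the WIL/WRL helpers used throughout the WebView2 samples:

  #include <WebView2.h>
  #include <wil/com.h>
  #include <wrl.h>

  using Microsoft::WRL::Callback;

  void CaptureOnce(ICoreWebView2* webview) {
    // One-shot capture of the current view into an in-memory PNG stream.
    wil::com_ptr<IStream> stream;
    CreateStreamOnHGlobal(nullptr, TRUE, stream.put());
    webview->CapturePreview(
      COREWEBVIEW2_CAPTURE_PREVIEW_IMAGE_FORMAT_PNG, stream.get(),
      Callback<ICoreWebView2CapturePreviewCompletedHandler>(
        [stream](HRESULT result) -> HRESULT {
          // On success, `stream` holds the PNG bytes; rewind and read them.
          return S_OK;
        }).Get());
  }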

Hi @champnic, thanks for the response but we do need high-frequency rendering, typically for YouTube and other web video display.

avail commented

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

Game UI development, specifically within DirectX 9 (Ex) / 10 / 11 contexts.

The ability to render into an offscreen texture and display it as an overlay on the game.

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

Currently, we use CefSharp.OffScreen to generate PDFs from HTML content and to take screenshots of HTML in a server-side environment.

Hi, are there any updates on this feature?

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

CAD editor software: I want to compose web UI and OpenGL/D3D rendering together.

Any XR application that wants 2D UI inside the 3D environment: that means rendering into a DirectX texture and injecting input into the off-screen view (pointer + keyboard).

Thanks for the info! Unfortunately our design work is slow going due to high priority bugs, but we're still making progress.

Is #579 related?

My use case would be Game Overlay.
And actually dotnet/maui's BlazorWebView (and by extension WPF/WinForms') could benefit from this change too: they wouldn't have to create fake elements (currently they just precisely track the WebView window on top), but could use normal, real elements in their respective APIs without any weird quirks and bugs.

I'd say an API like CefSharp's would be quite pleasant and would make it easy to port over existing CefSharp code.
https://github.com/cefsharp/CefSharp/blob/d8674fd076c021eddcc0cb579687ca3c51a63767/CefSharp.OffScreen/DefaultRenderHandler.cs

Hi, do you have any updates you're able to share?

Adding another use case: Presentations on screens that are not just Windows desktops.

From our experience with CEF we'd need the following (in descending priority order) to switch:

  • Rendering into either a raw CPU buffer or into a fully accelerated DXGI surface (higher level APIs optional but this is what most people use)
  • Full control over the presentation parameters, such as resolution, frame rate, display scaling, output color space (including WCG/HDR), etc. Fully manual VSync is appreciated; the client engine might render ahead and buffer, or batch render to disk, so it's not strictly in real time.
  • Proper graphics resource lifetime management. Client app must be able to exactly specify when/how it is able to receive frames and when it is finished processing them; no spurious callbacks when client isn't ready (anymore), and no mutexes that render either side unresponsive when held a bit too long or not acknowledged (that one probably came out a bit too detailed, but you might be able to feel the pain here).
  • Full manual input injection, such as keyboard, mouse, and multi-touch inputs.
  • Audio should also be redirected. Ideally per browser instance, but globally would be fine at first. The client must be able to specify the sample format and channel layout, and it should be a pull model in which the client requests a certain # of sample frames and WebView2 outputs that number of frames exactly, in order to avoid timing discontinuities between the browser audio and the actual audio path the client uses (see the hypothetical sketch after this list).
  • ... and you might recognize a pattern here. There are other subsystems that would be cool to have redirected to an API, for example audio/video input (for web based streaming/conference services) or geolocation. A feature to auto-block all outside interaction that's not intercepted (Midi, Bluetooth, notifications, printing (possible security hole there!), etc) would perhaps be helpful, too.
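To make that pull model concrete, here is a hypothetical interface sketch -- every name in it is invented for illustration only; nothing like this exists in WebView2 today:

  #include <cstdint>

  // Hypothetical pull-model audio interface (all names invented; not a real
  // WebView2 API). The client asks for exactly `frameCount` sample frames
  // whenever its own audio path needs them, so browser audio stays in step.
  struct IOffscreenAudioSource {
    // Client-chosen format, fixed up front.
    virtual int32_t SetFormat(uint32_t sampleRate, uint32_t channelCount) = 0;
    // Writes exactly frameCount * channelCount interleaved float samples;
    // no spurious callbacks, no locks held across the call.
    virtual int32_t PullFrames(float* interleaved, uint32_t frameCount) = 0;
    virtual ~IOffscreenAudioSource() = default;
  };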

It might be tempting to keep these APIs as close to the corresponding Windows APIs as possible but it's not strictly necessary - this feature will mostly be used from deep within client code, and very possibly behind a bunch of abstraction layers, so I'd aim for minimal and clean first.

Hope this was not too much or too harsh, but currently a whole industry depends on that CEF pull request above, which has been in PR limbo for years and will soon just stop working altogether; an alternative would be very much appreciated ;)

@champnic - Is there any news you can share with us?

+1

Curve commented

+1

Please use the thumbs up reaction instead of occluding the issue with +1 comments

@DanielsCode Unfortunately not really. We had begun the design phase but are currently dealing with high priority bugs. I really want to get this done as I know it's a huge pain for a large portion of our developers, and I'm hoping we can get back to focusing on it soon.

Hello, just to add another use case.
I would like to create an ASP.NET Web API controller where the user POSTs an HTML string, or a JSON object that is merged into an HTML template; the controller would use WebView2 to render the HTML off-screen, save a PDF to a stream, and then return the stream as a file for the user to download.

It seems CoreWebView2.PrintToPdfStreamAsync is in pre-release, so we just need headless mode and I'm good to go.
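For anyone on the Win32 side, a hedged sketch of the COM counterpart -- assuming the ICoreWebView2_16::PrintToPdfStream API and the WIL/WRL helpers from the WebView2 samples; error handling omitted:

  #include <WebView2.h>
  #include <wil/com.h>
  #include <wrl.h>

  using Microsoft::WRL::Callback;

  void SavePdf(ICoreWebView2* webview) {
    wil::com_ptr<ICoreWebView2_16> webview16;
    webview->QueryInterface(IID_PPV_ARGS(webview16.put()));
    // nullptr = default print settings.
    webview16->PrintToPdfStream(
      nullptr,
      Callback<ICoreWebView2PrintToPdfStreamCompletedHandler>(
        [](HRESULT result, IStream* pdf) -> HRESULT {
          // Copy `pdf` into the HTTP response stream here.
          return S_OK;
        }).Get());
  }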

Hello,

Using WinForms, has anyone tried to put the WebView2 component on a separate Form and put that form off-screen or make it not Visible?

With either of these snippets:

        // Option 1: hide the form entirely.
        private void OnFormLoad(object sender, EventArgs e)
        {
            Form form = (Form)sender;
            form.ShowInTaskbar = false;
            form.Visible = false;
        }

        // Option 2: keep the form visible but move it off-screen.
        private void OnFormLoad(object sender, EventArgs e)
        {
            Form form = (Form)sender;
            form.ShowInTaskbar = false;
            form.Location = new Point(-10000, -10000);
        }

I was able to use it this way, but since it's not an "official" approach: has anyone who has used it run into any issues?

The first bugs I noticed: with the form off-screen, it still shows up in the Alt+Tab list; with the window not visible, it flickers when it opens.

Thank you

pzxbc commented

+1 wait for off-screen rendering

bbday commented

Yes, an off-screen version like CefSharp's would be great for automating tasks with click/keyboard input and intercepting requests/responses.

Hi all,

We are investigating this request and have a couple of questions to better understand your scenario:

  1. What framework is your app built on (e.g. Win32, WPF, WinForms)?
  2. How does your app render its individual elements?
  3. Would rendering WV2 to an ID3D11Texture2D or IDXGISurface address your scenario?

Thanks,
Nishitha

Game Overlay

  1. .NET 7
  2. DirectX/Vulkan, getting bitmap buffer would work cross-api
  3. 50%
avail commented

  1. DX9/DX12
  2. Currently CEF
  3. Yes.

@honzapatCZ and @avail just updated question #3. Thanks!

avail commented

To update answer 3: being able to handle OnPaint for DX9, to at least have software rendering, would be nice, but a texture/surface would definitely be extremely useful for the more modern rendering backends.

@nishitha-burman

  1. .NET 6/7 and Win32/C++
  2. Currently CEF and DirectX/Vulkan/OpenGL
  3. Yes
  1. Win32, OpenFrameworks
  2. CEF, shared textures patch, d3d11 texture->interop ->OpenGL
  3. Yes
  1. ASP.NET Web API
    Save-to-PDF service using CoreWebView2.PrintToPdfStreamAsync
  2. Blazor WASM app, does not render WebView2, PDF stream returned from controller in File method
  3. N/A
bbday commented
1. .NET 7 WPF
2. DirectX/Vulkan, getting bitmap buffer would work cross-api
  1. .NET 6, WPF with a custom Direct2D canvas
  2. Direct2D render target. I'd expect the following or similar flow: WV2 -> IDXGISurface -> ID2D1Bitmap -> ID2D1RenderTarget
  3. Yes, as long as IDXGISurface is shareable
doxxx commented
  1. .NET 4.7.2 service (no desktop GUI), may move to .NET 6/7/... in the future.
  2. We're only using the CefSharp browser to render.
  3. I think it would be difficult. We're currently using the CefSharp Paint event to take the raw 32bpp image bytes and send them to an FPGA for compositing onto a live video feed displayed on a dedicated monitor.
kebby commented
  1. C++ (win32, Direct3D11) + C# (winforms)
  2. CEF with shared textures patch and ChromiumFX -> D3D11
  3. Yes.

I'm the maintainer of WebView4Delphi and CEF4Delphi. I can't speak for all the developers using WebView4Delphi but I've been asked if it would be possible to add an off-screen rendering mode to WebView4Delphi.

  1. Win32 using Delphi or Lazarus with WebView4Delphi.
  2. Copy the bitmap buffer for maximum compatibility.
  3. That wouldn't be enough. Please, consider using a bitmap buffer in addition to a ID3D11Texture2D or IDXGISurface.
  1. C++ (win32, Direct3D11) + C# (winforms)
  2. CEF with shared textures patch and ChromiumFX -> D3D11
  3. Yes.
  1. C++ (Win32, Xbox)
  2. CEF -> update texture (BGRA8) in CEF's OnPaint event
  3. Yes
Devyre commented
  1. C++. Desktop cross platform. d3d11, opengl/vulkan.
  2. CEF with shared textures patch + custom modifications.
  3. As long as the texture can be shared it would be good for win32.
2A5F commented

  1. Unity
  2. DirectX
  3. Yes

yewnyx commented
  1. C# .NET 7 Console App on Linux or Windows
  2. Software rendering (i.e. save to png on demand, resolution ideally settable on request)
  3. N/A or probably not
Robula commented
  1. ASP.NET Web API (C# .NET 7)
  2. Currently, using PuppeteerSharp to render SVG and convert to PNG/JPG. Interested in WV2 as a replacement.
  3. N/A
dt200r commented

Any update on this? Looking to offscreen in .NET 7 asp.net.

We use it as game overlay

  1. Win32
  2. DX11/12
  3. Yes that would be perfect
  1. Flexible, but using C#/.NET with interop calls where needed.
  2. Flexible, but I am familiar with MonoGame engine.
  3. That would be an acceptable workaround.

I am researching integrating my popular display motion testing website, www.testufo.com -- into a Direct3D executable, that can run offline, and is capable of custom frame rates to simulate custom refresh rates (on VRR displays such as G-SYNC and FreeSync).

Note: I get millions of unique visitors; it's the most popular display motion testing website, and many content creators use it as part of their gaming monitor benchmarking test suite. There are over 30 different tests selectable at upper right. Combined with configurable options in each test, this creates over 1 million possible variations of different TestUFO tests or educational demos (some animations are for testing, and some animations are educational about display science).

I need to control the compositing rate (refresh rate / paint rate) too.

In simple terms: I'd like to have one browser compositing event per Direct3D frame rendered -- for all animations, paint updates, scroll updates, graphics updates, etc. -- and have the requestAnimationFrame() JavaScript callback event synchronize correctly to the frame rate.

When you add WebView2 support to Direct3D, make sure it is capable of being configured to synchronize one browser compositing event per one Direct3D frame presented (for browser animations/scroll/paint events) -- even independently of the display refresh rate.

Why?

(See More) Details and Use Cases

Short Version:

I have additional requirements: my target refresh rate isn't necessarily that of the physical display. In addition, I also need to simulate a custom fixed Hz on a VRR display (e.g. simulate 100Hz by outputting 100fps VRR to a 144Hz display).

This is so that things like requestAnimationFrame() within the WebView2 will automatically synchronize to the frame rate of the Direct3D -- requestAnimationFrame() is spec'd to occur once per refresh cycle as long as performance, spare processing headroom, and the power plan permit.

Long Version:

I need to see www.testufo.com/refreshrate and www.vsynctester.com successfully detect that a custom accurately-framepaced frame rate is a specific refresh rate -- the algorithms achieve it by detecting the consistency of time between individual JavaScript requestAnimationFrame() calls (the de facto HTML5 refresh cycle callback event) during a low-processing-overhead animation. It should be able to detect it as a refresh rate if it's accurately framepaced (e.g. 73fps framepaced almost exactly 1/73sec apart, best-effort). Basically, the browser compositor loop needs to be able to sync to the frame rate of the DirectX engine, so that requestAnimationFrame() within the JavaScript inside WebView2 synchronizes to the frame rate. When the frame rate is framepaced accurately (+/- 5% jitter), TestUFO recognizes it as an assumed 'refresh rate'.

TestUFO has multiple millions of unique visitors quarterly, and is used by over 500 content creators representing a net total of over a hundred million of viewers/subscribers (e.g. RTINGS, LinusTechTips, TomsHardware, C|Net, etc) (list of some content creators using my TestUFO) as part of their display/gaming monitor testing suite. So even more people see the results than even visit the site directly! Since display quality is different in VRR versus non-VRR mode, I want to use WebView2 + Direct3D + custom frame rates, to run TestUFO in VRR mode (via custom-set frame rates).

TestUFO is designed to perform well at even high refresh rates, as long as GPU acceleration is available. It even worked fine at 480Hz on an experimental 480Hz monitor, photos here a few years ago

Essentially, the virtualized 'refresh rate' of WebView2 can just be the frame rate of Direct3D that's displaying the WebView2 as a texture.

Custom fixed frame rates during VRR operation (which is possible by Direct3D), has been a very frequent feature request by content creators responsible for millions.

There may be a situation where you need to let the Direct3D developer configure whether they want WebView2 compositing events at:

  • At frame rate of the Direct3D (the frequency of the Present() event); or
  • At custom fixed rate (60 by default), independent of Direct3D; or
  • At the actual display refresh rate (of the monitor that the Direct3D context is running on).
  • Or enable an optional callback event, that executes the next browser compositing event and its related callbacks (requestAnimationFrame etc)

These flags/arguments will help hint how frequently the WebView2 should composite -- for things like animations, scrolling, and other things that move. I want to composite WebView2 at a rate matching the Direct3D, but that can interfere with animations within it (especially during varying frame rates).

There are pros/cons for each; requiring configurability.

Example Use Cases

  • A VR or AR headset may render at a refresh rate (90Hz) that is different from the physical displays that shows up in Control Panel (60 Hz). The developer would configure a custom fixed rate, or use the callback event.

  • A developer may want to keep animations in webpages running at a constant rate independent of the Direct3D rendering rate. In this case, you'd choose a custom fixed rate, or choose actual display refresh rate, as the WebView2 internal compositing frequency. The browser offscreen framebuffer would keep refreshing continuously at its own independent frame rate, to be blitted to a texture once per Direct3D frame.

  • Or other developers may want to go more 'efficient' (power saving). Frame rate would be the method of power management for such an app. Low frame rates such as 30fps are more miserly than high frame rates.

  • Offscreen rendering may not wish to be stuck at 60, and such a developer may have a custom 'refresh rate' they want to simulate -- whether low for processing-efficiency or high for making scrolling and animations work optimally matching a performance display (e.g. 120Hz phone, tablet, xbox, television etc, in the semi-mainstreaming of 120Hz). Whether as a texture or a HUD overlay or etc.

  • Or more 'custom' (intentional custom frame rates) , by having only one browser compositor event occur per Direct3D frame. (and its attendant JavaScript-level callback events such as requestAnimationFrame() ), for animations that are intentionally designed to treat a frame rate like a refresh rate. (This is actually technically true for VRR -- the frame rate is the refresh rate, and the refresh rate is the frame rate -- whenever the frame rate is inside VRR range, the display immediately refreshes upon frame presented by the computer. Instead of having its own refresh rate clock, VRR such as FreeSync and G-SYNC means display is syncing to the frame rate -- ala www.testufo.com/vrr simulation)

Example Precedents in Experiments

The best browsers running on AR/VR headsets sync their compositing refresh rate to the refresh rate of the headset, whether it's standalone, or whether it's casted (e.g. from a PC). It stutters very awfully (and stutters = headaches) if the browser compositing rate isn't synchronized.

I was able to trick a Chrome window or Edge window into syncing to a VRR rate in a very approximate (rough) manner -- e.g. windowed VRR mode with a Chrome/Edge window running in the background. You need to use the NVIDIA Control Panel setting for windowed VRR, where VRR is enabled for the foreground windowed app. When this is done, it forces all background apps to composite at the same rate as the foreground app. The foreground video game window would fluctuate in frame rate, and force the compositing of the underlying browser window to vary (with some minor side effects).

In my situation, I would intentionally run a constant Direct3D frame rate in order to create custom "compositing rates" within WebView2 -- and I would want to configure compositing to match the frame rate of the Direct3D presentation. Meaning I need browser compositing events (and their underlying callbacks) to synchronize to the frame rate, causing the requestAnimationFrame() callback inside JavaScript to occur once per Direct3D frame.

I was evaluating the use of CEF, and synchronizing its paint event to do the workaround, but WebView2 would be quite fantastic.

  1. WinForms but flexible
  2. WebKit based into bitmap buffer
  3. Not directly but possible; getting a bitmap as convenience functionality would be great

UseCase: Rendering SVG images into raster images

@nishitha-burman @champnic any news on that topic you could share with us?
Do you need further feedback from us in order the finalize this feature?

Also posting here to express interest in the feature!

  1. Win32/WinForms
  2. Direct3D 12
  3. Absolutely

expressing interest too:

  1. Win32 (native no .NET)
  2. D3D12 / D3D11 / Vulkan
  3. That would be the best.

I think this issue is relevant to being able to show the Steam Overlay over WebView2. So far I've found it extremely difficult to do this, as covered in this blog post I wrote about it. In short the Steam Overlay does not understand the multi-process architecture of WebView2. The app can work around this by creating its own D3D11 swapchain and layering it over WebView2 with DirectComposition. That doesn't work with transparency though as the Steam Overlay expects to render over an opaque background, and the alpha blending looks all wrong.

AFAICT the Steam Overlay renders itself in the call to Present() and so the app itself has almost no control over how it's actually rendered, which severely limits any ability to work around these problems. However I think if WebView2 could render itself to a texture or a swapchain, then I think it should work correctly. Then the app could create its own D3D11 swapchain, render WebView2 in to it, and then call Present(), at which point the Steam Overlay draws over the existing WebView2 content.
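To illustrate the hoped-for flow (hypothetical, since WebView2 does not expose its rendered texture today; `webviewTexture` is an invented placeholder):

  #include <d3d11.h>
  #include <wil/com.h>

  void PresentFrame(IDXGISwapChain* swapChain, ID3D11DeviceContext* context,
                    ID3D11Texture2D* webviewTexture) {
    // Copy the (hypothetical) WebView2 texture into our own swap chain's
    // back buffer -- assumes matching size/format -- then Present() so
    // overlays that hook Present(), like the Steam Overlay, draw on top.
    wil::com_ptr<ID3D11Texture2D> backBuffer;
    swapChain->GetBuffer(0, IID_PPV_ARGS(backBuffer.put()));
    context->CopyResource(backBuffer.get(), webviewTexture);
    swapChain->Present(1, 0);
  }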

To answer the prior questions for our specific case:

  1. Win32, C++
  2. Our app only shows WebView2, but also needs to show the Steam Overlay.
  3. Rendering to ID3D11Texture2D would work as the app could paste that in to the swapchain and then call Present(). I'm not sure if IDXGISurface would support this case, but I suspect it probably would, so long as it belongs to the app process and not the WebView2 process so Steam can draw over it.
  1. Win32
  2. Vulkan
  3. ID3D11Texture2D preferred

More info about TestUFO use case; I posted before, but expanding:

1 -- VRR support consideration:

I desperately want to make a standalone TestUFO executable wrapper around the TestUFO motion tests, presenting via Direct3D at a variable refresh rate. (So make sure the requestAnimationFrame() refresh cycle callback in the HTML renderer is able to tick-tock properly at a frame rate unsynchronized to the display's max-Hz refresh rate.) In other words, I want to override the offscreen browser's "assumed refresh rate" and make it tick-tock to the frame rate of the Direct3D presentation instead. This will allow me to support VRR in a web browser engine, simply by building an executable Direct3D wrapper around TestUFO (aka a custom browser that uses the Edge engine, just to support VRR).

2 -- HDR support consideration:

Microsoft Edge now also has a hidden browser flag that enables HDR (edge://flags/#enable-experimental-web-platform-features supports enabling 48-bit HDR framebuffers, as does Chrome's). It even supports whiter-than-white: my upcoming browser-based TestUFO 2.0 HDR optionally produces such whites for scientific motion tests (I'm in over 30 peer-reviewed papers).

The top part of the screen is hex #FFFFFF, while that white square is rec2020(5,5,5), which can peak quite bright on the new 1000-nit HDR gaming displays, used for showing nice highlights (in games it's often only on 1% of pixels: good neons, good sun glints on metal, etc.).

#FFFFFF (surrounding background) versus color(rec2020 5 5 5):
[image: #FFFFFF background vs. color(rec2020 5 5 5) square]

This HDR version is coming out for CES 2024, but I want to support VRR sometime later in 2024 through one of many possible workflows (including, of course, possibly this one). Note, this website is often used by gamers, manufacturers, researchers, to test their displays, including OLEDs, LCDs, gaming displays, etc.

So ideally -- when supporting browser rendering, please also support the existing HDR Direct3D path, with an unbroken colorspace-conversion chain (pixels as untouched as possible). I am researching other workflows, but I just wanted to mention this additional consideration -- offscreen HDR Edge rendering into an HDR video game.

  1. C++ Win32, mix of raw Win32 APIs, WinUI3, C++/WinRT, OpenXR, Oculus API, and OpenVR. None of those XR APIs (or even Windows Holographic) have any DirectComposition support
  2. Direct2D and Direct3D11; interacts with other programs, so using existing D3D12 and Vulkan interop
  3. Yes, strongly preferred. Being able to display animations/videos efficiently is highly desirable, and overhead of main RAM buffers is too high.

So, I finally ended up taking the Windows::Graphics::Capture approach. Using full C++/WinRT but still wanting render-to-texture is probably a little rare, but in case anyone else wants the modern stuff including co_await rather than COM + std::future:

Firstly, you want these docs: https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/winrt/microsoft_web_webview2_core/ - whenever you google, you'll get the classic COM C++/Win32 docs instead, or the UWP docs. This is one of the few places in the WinRT world where the UWP API differs significantly.

With that done, first, there's some objects you'll want to keep track of:

  unique_hwnd mBrowserWindow;

  winrt::Windows::UI::Composition::Compositor mCompositor {nullptr};
  winrt::Windows::UI::Composition::ContainerVisual mRootVisual {nullptr};
  winrt::Windows::UI::Composition::ContainerVisual mWebViewVisual {nullptr};

  winrt::Microsoft::Web::WebView2::Core::CoreWebView2Environment mEnvironment {
    nullptr};
  winrt::Microsoft::Web::WebView2::Core::CoreWebView2CompositionController
    mController {nullptr};
  winrt::Microsoft::Web::WebView2::Core::CoreWebView2 mWebView {nullptr};

You must explicitly initialize these to nullptr, otherwise they will be default-constructed as real instances, probably on a different thread to the one you want.

Register a window class:

  WNDCLASSW windowClass {
    .style = CS_HREDRAW | CS_VREDRAW,
    .lpfnWndProc = &WebView2PageSource::WindowProc,
    .hInstance = GetModuleHandle(nullptr),
    .lpszClassName = WindowClassName,
  };
  ::RegisterClassW(&windowClass);

... and from here on, it needs to be in the Windows::Graphics::Capture DispatcherQueue's thread:
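If you don't already have such a thread, one option is a dedicated DispatcherQueue thread -- a minimal C++/WinRT sketch, assuming you're inside a coroutine:

  #include <winrt/Windows.System.h>

  // Create a DispatcherQueue on its own worker thread and resume there; the
  // window/compositor/WebView2 setup that follows then runs on that thread.
  auto dqController = winrt::Windows::System::DispatcherQueueController::
    CreateOnDedicatedThread();
  co_await winrt::resume_foreground(dqController.DispatcherQueue());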

Create an invisible window that is a child of HWND_MESSAGE instead of HWND_DESKTOP:

  mBrowserWindow = unique_hwnd {CreateWindowExW(
    0,
    WindowClassName,
    L"YOUR APP HERE WebView2 Host",
    0,
    CW_USEDEFAULT,
    CW_USEDEFAULT,
    0,
    0,
    HWND_MESSAGE,
    NULL,
    GetModuleHandle(nullptr),
    nullptr)};

Set up Windows::UI::Composition:

  mCompositor = {};
  mRootVisual = mCompositor.CreateContainerVisual();
  mRootVisual.Size({WIDTH, HEIGHT});
  mRootVisual.IsVisible(true);

  mWebViewVisual = mCompositor.CreateContainerVisual();
  mRootVisual.Children().InsertAtTop(mWebViewVisual);
  mWebViewVisual.RelativeSizeAdjustment({1, 1});

Set up webview 2:

  using namespace winrt::Microsoft::Web::WebView2::Core;
  CoreWebView2EnvironmentOptions options;

  const auto userData = Filesystem::GetLocalAppDataDirectory() / "WebView2";
  std::filesystem::create_directories(userData);

  const auto windowRef
    = CoreWebView2ControllerWindowReference::CreateFromWindowHandle(
      reinterpret_cast<uint64_t>(mBrowserWindow.get()));

  mEnvironment = co_await CoreWebView2Environment::CreateWithOptionsAsync(
    {}, userData.wstring(), {});
  mController
    = co_await mEnvironment.CreateCoreWebView2CompositionControllerAsync(
      windowRef);
  mWebView = mController.CoreWebView2();

  auto settings = mWebView.Settings();
  const auto userAgent = std::format(
    L"{} YOUR_APP_HERE/{}.{}.{}.{}",
    std::wstring_view {settings.UserAgent()},
    Version::Major,
    Version::Minor,
    Version::Patch,
    Version::Build);
  settings.UserAgent(userAgent);

  mController.BoundsMode(CoreWebView2BoundsMode::UseRawPixels);
  mController.RasterizationScale(1.0);
  mController.ShouldDetectMonitorScaleChanges(false);
  mController.Bounds({
    0,
    0,
    mSize.Width<float>(),
    mSize.Height<float>(),
  });

  mController.RootVisualTarget(mWebViewVisual);
  mController.IsVisible(true);

  mWebView.Navigate(L"https://www.testufo.com");

... and finally, create the Windows::Graphics::Capture::GraphicsCaptureItem:

  const auto item = winrt::Windows::Graphics::Capture::
    GraphicsCaptureItem::CreateFromVisual(mRootVisual);

Then use WGC as normal.
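For completeness, a minimal sketch of that last step -- assuming `mD3DDevice` is an IDirect3DDevice created from your ID3D11Device via CreateDirect3D11DeviceFromDXGIDevice, and `item` is the capture item created above:

  #include <winrt/Windows.Graphics.Capture.h>
  #include <winrt/Windows.Graphics.DirectX.h>

  using namespace winrt::Windows::Graphics;

  // Frame pool on our own D3D11 device; FrameArrived fires on a worker thread.
  auto framePool = Capture::Direct3D11CaptureFramePool::CreateFreeThreaded(
    mD3DDevice,
    DirectX::DirectXPixelFormat::B8G8R8A8UIntNormalized,
    2,  // buffer count
    item.Size());
  framePool.FrameArrived([](auto&& pool, auto&&) {
    auto frame = pool.TryGetNextFrame();
    // frame.Surface() wraps an ID3D11Texture2D; copy it into your own texture.
  });
  auto session = framePool.CreateCaptureSession(item);
  session.StartCapture();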

Edit: This is just isolating and WinRT-ifying @jnschulze's Flutter work with COM and std::future over on #20 (comment) ; they figured out this approach :)

For off-screen rendering in particular, you probably want to disable v-sync, and to cap the framerate:

  const auto edgeArgs
    = std::format(L"--disable-gpu-vsync --max-gum-fps={}", FramesPerSecond);
  CoreWebView2EnvironmentOptions options;
  options.AdditionalBrowserArguments(edgeArgs);

  mEnvironment = co_await CoreWebView2Environment::CreateWithOptionsAsync(
    {}, userData.wstring(), options);

Thanks for finding a solution to this! I need to try this.

You even found a solution for the independent-framerate capability, which is what I needed.

The only missing piece is whether there is a 48-bit HDR Edge WebView framebuffer — since I will need to support HDR. Edge already supports HDR graphics (a good browser test page for HDR: https://gregbenzphotography.com/hdr/ …).

Both Edge and Direct3D support HDR, but will a WebView support HDR (yet)? And does Windows::Graphics::Capture support copying color depths other than 24-bit RGB (48-bit HDR, e.g. rec2100-pq, rec2100-hlg, or display-p3)…

I can make do without HDR, but it may be a good time for Microsoft to make sure their API-chain is at least HDR-ready, so there’s no color-decimation in the path.

WGC definitely supports HDR via DirectXPixelFormat::R16G16B16A16Float for real windows; I've not tried for WebView2 as for my use case I always want SDR, even on HDR systems.
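If you did want HDR, swapping the pixel format in the frame-pool sketch above should be the only change needed -- untested with WebView2 content:

  // FP16 variant of the frame pool for HDR capture (untested with WebView2).
  auto hdrPool = Capture::Direct3D11CaptureFramePool::CreateFreeThreaded(
    mD3DDevice, DirectX::DirectXPixelFormat::R16G16B16A16Float,
    2, item.Size());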

@mdrejhon also, thanks, testufo was my go-to test to make sure I had this plumbed up correctly :) As you mentioned, VRR doesn't work, but it was still reaching ~238Hz where a v-synced Chrome reaches 240Hz.

@fredemmott - Great stuff!
Are you able to achieve transparency this way?

@fredemmott - Great stuff! Are you able to achieve transparency this way?

Yep:

  mController.DefaultBackgroundColor(winrt::Windows::UI::Colors::Transparent());

Then when I use devtools to add some border-radius:

[image: web content with rounded corners blending over the scene]

You mean the border radius of the buttons? Is this real transparency or just blending against the background visual of the window? Does it work when there's an image or other content behind?

You mean the border radius of the buttons? Is this real transparency or just blending against the background visual of the window? Does it work when there's an image or other content behind?

Real RGBA data in the D3D11 texture. That's not a window background, it's 3D scene with the webview2 rendered as a quad - moving view so there's an object at the top left corner here:

[image: 3D scene object visible behind the transparent WebView2 quad]

Thanks for the clarification - it wasn't even clear to me which part is the webview content 😆

Anyway, I had just mixed this up with Microsoft.UI.Composition, but you are actually using Windows.UI.Composition, where WebView2 transparency works just fine. Thanks again.

@mdrejhon also, thanks, testufo was my go-to test to make sure I had this plumbed up correctly :) As you mentioned, VRR doesn't work, but it was still reaching ~238Hz where a v-synced Chrome reaches 240Hz.

You can get perfect 240fps 240Hz if you exit Task Manager and your background RGB controller software, and don't run other browser tabs. TestUFO has been tested up to 1000Hz already.

Now…

VRR TestUFO may be achieved this way

  • Direct3D Executable app
  • Capped frame rate in WebView2 using your info
  • Windowed GSYNC enabled in Control Panel
  • VRR frame rate configured by user (command line or app settings screen)
  • Direct3D frame rate sync’d to WebView2 frame rate

Now, while I do C# with a bit of dabbling in C++… I have never done Direct3D programming from scratch (only modifications) — does anyone have a Hello World web browser demo for Direct3D that is MIT/Apache or something reasonably permissive?

It does not have to support all of this, nor VRR — just a dummy working Direct3D WebView2 app I can modify to suit my needs. A code skeleton to work from, hardcoded to a URL or offline HTML file. Preferably with the WebView2 texture displayed 1:1 mapped to full screen, for simplicity's sake.

If an open-source harness or Windows sample I can modify doesn't already exist, I may be willing to pay for this small starter code skeleton (www.blurbusters.com/about/contact)…

@mdrejhon also, thanks, testufo was my go-to test to make sure I had this plumbed up correctly :) As you mentioned, VRR doesn't work, but it was still reaching ~238Hz where a v-synced Chrome reaches 240Hz.

You can get perfect 240fps 240Hz if you exit Task Manager and your background RGB controller software, and don't run other browser tabs. TestUFO has been tested up to 1000Hz already.

Sorry for being unclear: I have no trouble with 240fps with G-SYNC in Chrome; that is not translating to the same with WGC + WebView2.

Sorry for being unclear: I have no trouble with 240fps with G-SYNC in Chrome; that is not translating to the same with WGC + WebView2.

Aha. Browser frame rate debugging can be done using www.testufo.com/animation-time-graph

Also, if you know where I can get a code sample of this, let me know…

Also, if you know where I can get a code sample of this, let me know…

Not isolated, and doesn't meet your license requirements (it's GPLv2 with an exception for the Windows App SDK which I'm not convinced is GPL-compatible given it can only be built by MS employees), but for others or in case it's useful anyway, it's https://github.com/OpenKneeboard/OpenKneeboard/blob/master/src/app/app-common/PageSource/WGCPageSource.cpp + https://github.com/OpenKneeboard/OpenKneeboard/blob/master/src/app/app-common/PageSource/WebView2PageSource.cpp ; structured like that as I also have Window Capture as a content source, among others.

Ah, the TestUFO requires something a bit more permissive (MIT/Apache) since I'd have to combine it with proprietary modules later on.

But may I reach out to you privately about possible options? Contact me at mark[at]blurbusters.com or via my contact page. Either way, congratulations on finding a VRR solution for TestUFO. But I do really need help with the skeleton, and am willing to pay for that bit of help. I've been looking for years for a VRR-compatible web browser wrapper (for TestUFO VRR), and this is the likely breakthrough solution for custom stutter-free below-max-Hz frame rates, with a green VALID for any custom framerate = Hz. Perhaps one day of consulting work or something.

  • Could be dual-license (e.g. forking me or licensing me a copy via an alternate license; this is legal if you're the only developer)
  • Could be rewrite into a simpler skeleton example (just one fullscreen quad, no VRR yet, no UI, relay mouse/keyboard input), which I'll later modify to suit
  • Alternate options that might move me forward ASAP
aidv commented

@nishitha-burman Hi all. I implemented this feature in CEF; you might be interested: https://chromium-review.googlesource.com/c/chromium/src/+/5265077 https://bitbucket.org/chromiumembedded/cef/pull-requests/719

It says that the pull request has been declined. What does this mean?

It says that the pull request has been declined. What does this mean?

Sorry, moved to https://bitbucket.org/chromiumembedded/cef/pull-requests/734

aidv commented

Can someone help me understand what is being discussed here?

I'm having a challenge that seems to be quite hard to solve.

I'm looking for the fastest way to push large amounts of data to a WebGL/WebGPU context and render something in the WebGL/WebGPU scene.

The pipeline looks like this: CPU -> IPC -> WebView2 -> GPU

However, the performance is very slow, and there are many points of failure along the way.

  • Our algorithm runs on the CPU (soon on the GPU with OpenCL).
  • The generated data is sent via IPC to the WebView, which is really slow.
  • The WebView2 runs web code that pushes the data to the GPU as a texture, e.g. webGPU....writeTexture().

All of these steps take time.

For example, just pushing the texture data to the GPU takes 1.2s, while the equivalent pure C code takes 1.2ms (!!!!!!).

One idea I had was to create an offscreen C/C++ OpenGL renderer and send the frames to the WebView2, but since IPC is slow, this won't work.

Would what has been discussed in this GitHub issue potentially solve the bottlenecks I am having?

What is the absolute best and most efficient way to render something in my C backend and show it in my WebView2?

Would what has been discussed in this GitHub issue potentially solve the bottlenecks I am having?

No, this is about rendering the WebView2 itself inside an offscreen surface, so a browser can be displayed e.g. in a 3D environment; it has nothing to do with code running inside the WebView2.

For example, just pushing the texture data to the GPU takes 1.2s, while the equivalent pure C code takes 1.2ms (!!!!!!).

You can try implementing this logic with WebGPU Native, and expose some handle to javascript in frameworks like CEF, just keep everything in native. You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

aidv commented

For example, just pushing the texture data to the GPU takes 1.2s, while the equivalent pure C code takes 1.2ms (!!!!!!).

You can try implementing this logic with WebGPU Native, and expose some handle to javascript in frameworks like CEF, just keep everything in native. You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

Really?? That would be perfect.

Please tell me how this could be achieved, it’s critical to our application.

I’m willing to pay for help.

aidv commented

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

I have no idea how this would be done with WebView2. Any idea?

I'm looking for the fastest way to push large amounts of data to a WebGL/WebGPU context and render something in the WebGL/WebGPU scene.

The pipeline looks like this: CPU -> IPC -> WebView2 -> GPU

Which kind of IPC are you using? Have you tried ICoreWebView2SharedBuffer?
https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/win32/icorewebview2sharedbuffer
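For reference, a rough sketch of that flow (Win32/COM; assumes `env` can be queried for ICoreWebView2Environment12 and `webview` for ICoreWebView2_17; error handling omitted):

  #include <WebView2.h>
  #include <wil/com.h>

  void SendFrame(ICoreWebView2Environment12* env, ICoreWebView2_17* webview) {
    // Allocate a 64 MB buffer that is shared with the renderer process.
    wil::com_ptr<ICoreWebView2SharedBuffer> sharedBuffer;
    env->CreateSharedBuffer(64ull * 1024 * 1024, sharedBuffer.put());

    BYTE* data = nullptr;
    sharedBuffer->get_Buffer(&data);
    // ... fill `data` with your payload here ...

    // The page receives the buffer as an ArrayBuffer via the
    // chrome.webview "sharedbufferreceived" event.
    webview->PostSharedBufferToScript(
      sharedBuffer.get(), COREWEBVIEW2_SHARED_BUFFER_ACCESS_READ_ONLY,
      L"{\"type\":\"frame\"}");
  }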

aidv commented

I'm looking for the fastest way to push large amounts of data to a WebGL/WebGPU context and render something in the WebGL/WebGPU scene.
The pipeline looks like this: CPU -> IPC -> WebView2 -> GPU

Which kind of IPC are you using? Have you tried ICoreWebView2SharedBuffer? https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/win32/icorewebview2sharedbuffer

Yes I'm using ICoreWebView2SharedBuffer, but it's extremely slow. Edit: It's also synchronous, so it blocks the UI.

Sending 62MB of data from the backend to the webview2 takes about 4 seconds.

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

So ICoreWebView2SharedBuffer is very inefficient. I'd rather access the WebGL/WebGPU scene directly from the native backend if possible, while rendering the scene in the frontend.

This seems to be quite a big challenge.

Yes I'm using ICoreWebView2SharedBuffer, but it's extremely slow. Edit: It's also synchronous, so it blocks the UI.

Oh, didn't know it's so bad. Their file access isn't really fast either. Unfortunately they have removed support for pepper plugins (NACL). We still have an application that is using it for rendering videos on surfaces via OGL ES (Electron App using a trick to enable it), but it doesn't work in the Edge WebView.
A while ago, I had talked to Chromium devs about this gap (being unable to render content onto a surface inside the browser), but unfortunately they didn't have a good replacement to suggest.
We're using a layered approach now, i.e. transparent webview on top. You could also have another one below...

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

Assuming, your data is image data, how does this compare?

  let image = document.createElement('img');
  image.src = 'file:///c:/my62mbImage.png';
  await image.decode();
  let bitmap = await createImageBitmap(image);

It's non-blocking at least.

aidv commented

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

Assuming, your data is image data, how does this compare?

  let image = document.createElement('img');
  image.src = 'file:///c:/my62mbImage.png';
  await image.decode();
  let bitmap = await createImageBitmap(image);

It's non-blocking at least.

This gives pretty much the same performance as fetching from a web server. Very fast.

But the problem is that this would require some disk I/O, which isn't optimal.

You mentioned a transparent webview; I thought about that too: simply create a native OpenGL scene and overlay a transparent webview on top.

I guess this approach is valid too.

Any caveats to think of if I use this approach?

This gives pretty much the same performance as fetching from a web server. Very fast.

But the problem is that this would require some disk I/O, which isn't optimal.

Maybe using overlapped (memory-mapped) I/O, but when you change the data (which would be just a memcpy), I'm not sure whether the browser engine would read it again, rather than assuming it has cached it already, and I don't know whether it is possible to rename a file which is opened in this mode.
Browser caching itself might also involve copying of the data, yet I'm not sure whether this is done for file: URLs.

It's really frustrating when you look through all of those new browser APIs. They have tons of features in many directions, but there's hardly any good and fast way to get local data into the browser...
I've read an article about "transferable objects" in Web APIs where it was shown that a 32MB blob could be transferred between a WebWorker and the main thread in 6.6ms instead of 360ms. On Android, it's possible to transfer such transferable objects between the (Chromium) WebView and Java native code (did that last week for a MessagePort; those don't carry much data, so I can't say much about performance). Yet the Edge WebView2 doesn't have something like that. It only has that ridiculously complex mechanism where you need to create COM interfaces for the native object you want to share with the webview. I'm not sure how that performs, but it might be another option in case you haven't tried it.

Any caveats to think of if I use this approach?

WebView transparency doesn't work in WinUI3 apps, and you cannot render WinUI3 content on top of a WebView2 (there's a way, though, but with other caveats).
Also, CSS filters don't work on the background (because the page doesn't know about it). But everything else is fine. We're using this approach on many platforms.

aidv commented

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

This way I could have all the algorithmic stuff happen in the C backend, while the results would be rendered in the WebView2 frontend.

That would be an ultimate solution.

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

Frankly, I think that's just nonsense.

The statement was:

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

WebGPU is a browser API which wraps native platform functionality. How should it be possible to create that "natively"? There is no "native WebGPU context" you can create from outside. And further, there's no WebGPU API which allows you to connect to an outside native context. Neither in one direction nor the other.
An HTML element that can be part of the DOM but is operated from outside would be perfect...

aidv commented

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

Frankly, I think that's just nonsense.

The statement was:

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

WebGPU is a browser API which wraps native platform functionality. How should it be possible to create that "natively"? There is no "native WebGPU context" you can create from outside. And further, there's no WebGPU API which allows you to connect to an outside native context. Neither in one direction nor the other. An HTML element that can be part of the DOM but is operated from outside would be perfect...

There’s Dawn, a WebGPU thing for C++.

I’m not sure how it would work, but I’m open to any ideas.

Well - that IS the WebGPU implementation used in Chromium...

Another option would be to fork Electron. This gives you access to absolutely everything (i.e. Chromium-source-level modifications).

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

There's another reason why this cannot work: there's a process boundary. There's no way to share a (whatever kind of) GPU context between different processes. Whatever you do would need to happen within the same process, i.e. the browser's rendering process.
Which brings me back to pepper/NaCl plugins. These are loaded into the browser process, and you can render to OpenGL surfaces directly. Going the Electron way would allow you to keep this enabled until there's a better way.

I think it is definitely possible to create some modification around Dawn to make it satisfy your needs.
There are plenty of Dawn APIs that are not exposed to JS (not in the WebGPU standard, obviously), and I was digging around to see how to import an external texture into the Dawn context, and found out it is absolutely possible. You'll need to figure out Dawn's representations in the V8 engine; then you can do whatever you want once you've got them.
I cannot give any guarantee, but I think you will not be disappointed if you look into it.

I was talking about framework-level (like CEF) modification from the beginning. This is obviously not a job that can be done with the WebView2 API. Anyway, it is off topic and shouldn't continue in this issue.

aidv commented

I think it is definitely possible to create some modification around Dawn to make it satisfy your needs. There are plenty of Dawn APIs that are not exposed to JS (not in the WebGPU standard, obviously), and I was digging around to see how to import an external texture into the Dawn context, and found out it is absolutely possible. You'll need to figure out Dawn's representations in the V8 engine; then you can do whatever you want once you've got them. I cannot give any guarantee, but I think you will not be disappointed if you look into it.

Yeah, this veered off topic so I’ma just shut up about it.

But before we put the lid on this, what would you recommend me to look into in terms of Dawn and Electron?

I was talking about framework-level (like CEF) modification from the beginning

It would require modifications to the Chromium source, not only the framework - which is how I had understood your comment. Sorry for misunderstanding.

Yeah, this veered off topic so I’ma just shut up about it.

Maybe you just create a new issue...
(e.g. about rendering content into a panel element in the DOM from the native side)

Missed this, so replying to @fredemmott

Sorry for being unclear: I have no trouble with 240fps with G-SYNC in Chrome;

To be clear -- I require the ability to produce custom, perfectly framepaced, arbitrary frame rates below max Hz.

e.g. 57fps looks exactly like 57Hz, with a green-colored VALID and a very flat www.testufo.com/animation-time-graph. Or 123.5fps that looks like perfect, stutter-free 123.5Hz. Etc.

I need fully dynamic-framerate-capable (not hardcoded to max Hz) VRR framepacing that correctly refreshes at dynamic, asynchronous refresh cycles, and does not default to Windows' scheduled max-Hz refreshes per second during DWM+VRR operation. Chromium's design somehow forces refreshing of the Chromium framebuffer at every DWM-scheduled refresh cycle, spoiling VRR's raison d'être.

Before W3C moved HTML to WHATWG, I tried to post a suggestion for a VSYNC API that would solve this: w3c/html#375 ... For now, given browsers' complete inability to do proper true VRR, I have given up on them.

So the current fallback plan is to use some offscreen CEF-style system, and simply seize control over frame presentation to do it the proper way. Hopefully there's no hardcoded tick-tock built into Chromium (Chrome automatically uses 60fps on displays whose refresh rate it cannot sync to, e.g. older Linux distributions), or it ruins my plans.

Part of the reason is skills silos -- browser developers are VERY skilled at designing browsers, but don't quite fully understand VRR engineering -- so this "browser bug" (non-true VRR at sub-max-Hz frame rates) has existed for almost a decade.

So if I set a 57fps cap (by any means, like busywaits inside requestAnimationFrame() or a provided sanctioned technique...), I need the monitor sending photons to my eyeballs exactly 1/57sec apart (as VRR is designed to do), not rounded off to the next refresh cycle scheduled by the Microsoft Windows DWM (even when windowed G-SYNC is enabled). In true VRR operation, the frame's Present() or glXSwapBuffers() call immediately causes the display to refresh. That's how video games do it.

Why?

  1. Display quality can vary at different VRR framerate=refreshrates.
    Engineering Info: Overdrive artifacts (LCD ghosting/coronas) can improve/worsen at different frame rates, due to how overdrive algorithms are optimized for specific refresh rates and not the whole VRR continuum.
  2. Over 500 content creators (with ~100 million viewership totalled) use tests invented by Blur Busters.
    Displays vary greatly in quality and reviewers would love to test different TRUE VRR frame rates in TestUFO without Chrome & Windows force-refreshing MaxHz times per second.
    For my credentials, see www.blurbusters.com/inventions

Yes, I have tried the framerate cap setting in Chrome, and it doesn't bypass the forced DWM refreshing, which many other apps can.

Sorry for being unclear: I have no trouble with 240fps with G-SYNC in Chrome; that is not translating to the same with WGC + WebView2.

Desired solution:

  • The ability to run arbitrary frame rates below max Hz, free of erratic stutter, with perfect framepacing (as software/hardware allows).

I am still deeply disappointed that browsers still don't do proper VRR (at sub-max-Hz frame rates).

This is why I am moving my plans to offscreen rendering, to work around this "browsers can't do VRR" bug.

Developer TIP (for Chromium engineers)

Debugging suggestion for Chromium software developers (if anyone reads this):

  1. Enable VRR in both your monitor menus AND your graphics drivers
  2. Run a browser CANVAS-2D animation in full screen mode (as if you're playing a browser-based game)
  3. Turn on your monitor's frame rate / refresh rate setting (most VRR monitors have an OSD that reports current refresh rate)
  4. Make sure that the monitor's detected VRR refresh rate matches what shows up in the web browser/diagnostics/your own debug framerate counter.
  5. Test arbitrary frame rates inside your display's supported VRR range.
  6. TEST PASS CONDITION: For a 240Hz monitor -- if your animation is running at 87.5fps inside the monitor's published VRR range -- your monitor should be reporting 87.5 (true VRR) rather than 240 (automatic compositor in DWM or Chromium)

Hi
Sorry to come back to the subject, but a lot has happened since it was opened.

And these days, I think there are solutions that should allow you to provide WebView2 on Linux and especially to add this mode.

So, is it possible to provide off-screen rendering using the Ozone layer? (cf. chromiumembedded/cef#3263)

The CEF team will be migrating to this architecture. So if, like them, your heart is in Chromium, you can either do what they do or do it better.

Regards

As you can see in chromium/src/ui/ozone/platform, there's no sign of Ozone support for Windows/macOS, so I don't think Chromium/CEF will migrate to it any time soon. In fact, it is also hard to know which platform the OSR host is on, so I think sharing dmabufs is a better solution for now on Linux.
FYI, the viz-based OSR has been merged into CEF recently: https://bitbucket.org/chromiumembedded/cef/pull-requests/734
I think WebView2 can easily implement GPU OSR using FrameSinkVideoCapturer.

Hello @reitolab

I don't think it was merged, because that pull request was declined. The CEF strategy is described here: chromiumembedded/cef#3685
chromiumembedded/cef#3681

What is certain is that Microsoft has what it takes to offer OSR on both Linux and Windows.