Cannot use "D3D_DRIVER_TYPE_WARP" drive with "D3D_FEATURE_LEVEL_11_0"
ZyJChen opened this issue · 11 comments
When initializing the d3d device with "D3D_DRIVER_TYPE_WARP" and "D3D_FEATURE_LEVEL_11_0", the sample D3DVisualization program stops at "hr = m_pd3dDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)(&tempResource11));" in the method "InitRenderTarget(void * pResource)" of class CCube.
Does anybody know why?
WARP and REF devices do not support shared resources. Attempting to create a resource with this flag on either a WARP or REF device will cause the create method to return an E_OUTOFMEMORY error code.
https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554
You are not using a create method, but I guess the same restriction applies, in that it cannot open a shared resource since it simply doesn't support it.
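If the reason for reaching for WARP is an older GPU that can't do feature level 11_0, a common workaround (a general D3D11 pattern, not something this sample does) is to request a range of feature levels on the hardware driver, so you still get a hardware device that can share resources. A minimal sketch with illustrative names:

```cpp
// Sketch: request a range of feature levels on the hardware driver so an
// older GPU still yields a hardware device, which, unlike WARP, can open
// shared resources. Names are illustrative.
#include <d3d11.h>

const D3D_FEATURE_LEVEL levels[] =
{
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL obtained = {};

HRESULT hr = D3D11CreateDevice(
    nullptr,                   // default adapter
    D3D_DRIVER_TYPE_HARDWARE,  // WARP/REF devices cannot share resources
    nullptr,                   // no software rasterizer module
    0,                         // creation flags
    levels,
    sizeof(levels) / sizeof(levels[0]),
    D3D11_SDK_VERSION,
    &device,
    &obtained,                 // the feature level actually granted
    &context);
```

The obtained feature level tells you what the card actually supports, so the rest of the renderer can adapt to it.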
Thank you again, weltkante. You helped me. I have an old video card in my PC.
Hello, weltkante. Could you please explain what the shared resource inside D3D11Image is? I just use this package and am kind of unclear about it.
It is used to share your rendered data with WPF, in GPU memory, without having to transfer it through the CPU.
In the general case of D3DImage you can share your own textures with WPF. In the case of D3D11Image the render target texture is created for you as a convenience.
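For reference, the D3D11 side of that handoff looks roughly like the following. This is a sketch of the pattern the sample's InitRenderTarget uses, not the exact sample code; error handling is trimmed and names are illustrative.

```cpp
// Sketch of the shared-resource handoff; pResource is the pointer the
// D3D11Image sample hands to InitRenderTarget.
IUnknown* pUnk = static_cast<IUnknown*>(pResource);

IDXGIResource* dxgiResource = nullptr;
HRESULT hr = pUnk->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiResource);

HANDLE sharedHandle = nullptr;
if (SUCCEEDED(hr))
    hr = dxgiResource->GetSharedHandle(&sharedHandle);

// This is the call that fails on a WARP device, because WARP does not
// support shared resources.
ID3D11Resource* tempResource11 = nullptr;
if (SUCCEEDED(hr))
    hr = m_pd3dDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)&tempResource11);

ID3D11Texture2D* renderTarget = nullptr;
if (SUCCEEDED(hr))
    hr = tempResource11->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&renderTarget);

// Rendering then goes through a render target view on the shared texture.
ID3D11RenderTargetView* renderTargetView = nullptr;
if (SUCCEEDED(hr))
    hr = m_pd3dDevice->CreateRenderTargetView(renderTarget, nullptr, &renderTargetView);
```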
If you want to do software rendering then D3DImage (and D3D11Image) is the wrong thing, because it is about textures on the GPU. For software rendering there is WriteableBitmap. You probably could configure the WARP device to render to a WriteableBitmap and create a similar framework as D3D11Image, but using WriteableBitmap + WARP instead of D3DImage + hardware. However, I have no experience with software rendering, so I can't advise you or give you samples; I just think it could work reasonably.
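One likely building block for that route (an untested assumption, not from this thread) is reading the WARP-rendered frame back to the CPU through a staging texture, which could then be copied into a WriteableBitmap:

```cpp
// Untested sketch; 'device', 'context', and the WARP-rendered 'renderTarget'
// texture are assumed to exist. Names are illustrative.
D3D11_TEXTURE2D_DESC desc;
renderTarget->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;            // CPU-readable copy target
desc.BindFlags = 0;                          // staging textures cannot be bound
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &staging);

// Copy the finished frame and map it for CPU access.
context->CopyResource(staging, renderTarget);

D3D11_MAPPED_SUBRESOURCE mapped;
hr = context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData / mapped.RowPitch now describe the pixels; these could be
// copied row by row into a WriteableBitmap (WritePixels or BackBuffer).
context->Unmap(staging, 0);
```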
Since the shared resource from D3D11Image is a Texture2D, can we create the DepthStencilView from this shared resource instead of creating a new Texture2D before creating the DepthStencilView? I read your code from line 118 to 130 (https://gist.github.com/weltkante/13b41a0289339ebf80e7)
No. You may want to go back to the basics of D3D to make sure you understand how to set up a device - this stuff has nothing to do with WPF interop.
When rendering with a depth buffer you have two textures, one receiving the colors and one receiving the depth data. The texture receiving the colors is provided by D3D11Image as a shared texture because WPF will need to display the colors later. WPF doesn't care about whether or not you have a depth buffer because it doesn't need to display it, so D3D11Image doesn't provide you with a depth buffer texture. If you want to use a depth buffer you have to create your own texture just as if you weren't using WPF at all.
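In D3D11 terms that means creating a second texture with D3D11_BIND_DEPTH_STENCIL, sized to match the shared color texture, and binding both views together. A minimal sketch (assuming 'device', 'context', the shared texture, and its render target view already exist; names are illustrative):

```cpp
// Match the depth buffer's size to the shared color texture from D3D11Image.
D3D11_TEXTURE2D_DESC colorDesc;
sharedTexture->GetDesc(&colorDesc);

D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width = colorDesc.Width;
depthDesc.Height = colorDesc.Height;
depthDesc.MipLevels = 1;
depthDesc.ArraySize = 1;
depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = 1;
depthDesc.Usage = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

ID3D11Texture2D* depthTexture = nullptr;
HRESULT hr = device->CreateTexture2D(&depthDesc, nullptr, &depthTexture);

ID3D11DepthStencilView* dsv = nullptr;
if (SUCCEEDED(hr))
    hr = device->CreateDepthStencilView(depthTexture, nullptr, &dsv);

// Bind the WPF-visible color target and the private depth buffer together.
context->OMSetRenderTargets(1, &renderTargetView, dsv);
```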
I see. Thank you.
Can we use the shared resource Texture2D provided by D3D11Image to render our custom 2D image data?
For example, when developing a video player, video data comes in frame by frame, decoded by the CPU. Whenever a frame is finished by the CPU, I want to render it right on the GPU.
When rendering this 2D image data, can I use the shared resource provided by D3D11Image as a dynamic texture? Like this:
D3D11_MAPPED_SUBRESOURCE mappedResource;
HR(dc->Map(m_texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
const BYTE* srcData = m_buffer.buffer;  // local cursor, so the frame pointer isn't advanced past the end
for (UINT i = 0; i < m_buffer.height; ++i)
{
    memcpy(mappedData, srcData, m_buffer.rowspan);  // copy one row of the decoded frame
    mappedData += mappedResource.RowPitch;          // GPU rows may be padded, so step by RowPitch
    srcData += m_buffer.rowspan;
}
dc->Unmap(m_texture, 0);
Never tried that, but I doubt it would work efficiently.
"video data is coming frame by frame, decoded by CPU"
If you are decoding on the CPU anyway, you should just go ahead and use a WriteableBitmap. WPF will do the transfer to video memory for you.
In the code you suggest above you would ask D3D11 to transfer data from CPU to GPU. WPF will then do another GPU-to-GPU copy (because of the way WPF and D3DImage are implemented). If you use a WriteableBitmap you can avoid one of the copies and just give WPF your data directly.
As I said in an earlier comment, D3DImage is made for the case where your data is already on the GPU as a result of rendering.
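For completeness, if you did want to stay on the D3D path: the shared texture is a render target, and in D3D11 a render target cannot also be a dynamic, CPU-mappable texture, so the usual pattern (an assumption, not something tested in this thread) would be to Map a dynamic texture of your own and then copy it into the shared one; that copy is exactly the extra transfer mentioned above.

```cpp
// Sketch (assumption: 'device', 'dc', and the shared render target texture
// 'sharedTexture' already exist; names are illustrative).
D3D11_TEXTURE2D_DESC desc;
sharedTexture->GetDesc(&desc);
desc.Usage = D3D11_USAGE_DYNAMIC;               // CPU-writable
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;    // dynamic textures can't be render targets
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MiscFlags = 0;
desc.MipLevels = 1;

ID3D11Texture2D* uploadTexture = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &uploadTexture);

// Per frame: write the decoded image into the dynamic texture...
D3D11_MAPPED_SUBRESOURCE mapped;
hr = dc->Map(uploadTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
// ... memcpy row by row as in the snippet above ...
dc->Unmap(uploadTexture, 0);

// ...then copy it into the shared texture that WPF displays.
dc->CopyResource(sharedTexture, uploadTexture);
```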
weltkante, do you have any sample code for using WriteableBitmap to do software rendering? Your last "DepthStencilBuffer" code helped me a lot. Thanks!
Sorry, I have no self-contained examples ready. A WriteableBitmap can either be written with the WritePixels API methods, or you can write directly into its memory through the BackBuffer property. I'm sure you can find some examples and tutorials on the net though.