wglDXRegisterObjectNV returns 0 but no error (NV_DX_interop2)

knchaffin
Explorer
Has anyone successfully added NV_DX_interop2 functionality to a PC Win10 OVR SDK app with an NVIDIA GTX 1080 GPU? Just to be clear, my goal is to be able to execute arbitrary GL code and have it write to the DX eye render targets, with the proper wglDXLockObjectsNV() and wglDXUnlockObjectsNV() surrounds.


The problem I am having is that when I call

gl_handles[0] = wglDXRegisterObjectNV(gl_handleD3D, dxColorbuffer, gl_names[0], GL_RENDERBUFFER, WGL_ACCESS_READ_WRITE_NV);

the handle is set to 0, but no error is detected by GetLastError(). I have two renderbuffer objects, color and depth, that are derived from the OVR eye render targets/buffers. The dxColorbuffer object appears correct, as does the dxDepthbuffer object. The call succeeds on the dxDepthbuffer object and sets the handle to a non-null value.

I think I am doing all the required steps correctly, including:

// Open the interop device on the D3D11 device used by the OVR render device.
gl_handleD3D = wglDXOpenDeviceNVFunc((void *)((OVR::Render::D3D11::RenderDevice*)pRender)->Device);
// Create the GL framebuffer and renderbuffer names that the D3D resources will be registered against.
glGenFramebuffers(2, gl_fbonames);
glGenRenderbuffers(2, gl_names);
gl_handles[0] = wglDXRegisterObjectNV(gl_handleD3D, dxColorbuffer, gl_names[0], GL_RENDERBUFFER, WGL_ACCESS_READ_WRITE_NV);

Subsequent calls such as glBindFramebuffer(), etc. are present but not shown.
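For completeness, the omitted part looks roughly like this (a sketch rather than my exact code; eyeWidth/eyeHeight are placeholders for the eye buffer size):

// Attach the registered color/depth renderbuffers to an FBO, then bracket all
// GL rendering with lock/unlock of the interop handles.
glBindFramebuffer(GL_FRAMEBUFFER, gl_fbonames[0]);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, gl_names[0]);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,   // or GL_DEPTH_STENCIL_ATTACHMENT, depending on the depth format
                          GL_RENDERBUFFER, gl_names[1]);

wglDXLockObjectsNV(gl_handleD3D, 2, gl_handles);    // D3D must not touch the resources while they are locked
glBindFramebuffer(GL_FRAMEBUFFER, gl_fbonames[0]);
glViewport(0, 0, eyeWidth, eyeHeight);              // placeholder eye buffer size
// ... arbitrary GL rendering into the eye target ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
wglDXUnlockObjectsNV(gl_handleD3D, 2, gl_handles);  // hand the resources back to D3D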

Thanks.

knchaffin
Explorer
Hopefully I am getting closer to understanding why wglDXRegisterObjectNV() returns null for OVR eye render textures. If I create the ID3D11Texture2D* object manually, wglDXRegisterObjectNV() succeeds and returns a non-null handle. If I call it on an ID3D11Texture2D object that already exists in the OVR swap chain, it always returns null. I will try to list the minimal source code steps showing how I am obtaining the pre-existing textures for the call, and hopefully someone can tell me what I am doing wrong. The code just follows the color buffer (not depth) and is not meant to be compiled; it just shows the types of the pertinent objects.
PlatformCore::PlatformCore(Application *app)
{
    pApp = app;
    pApp->SetPlatformCore(this);
    StartupSeconds = OVR::Timer::GetSeconds();
}

// The following are at the Application class level.
pApp->SetPlatformCore(this);
class PlatformCore* pPlatform = pApp;
RenderDevice* pRender = pPlatform->SetupGraphics(Session, OVR_DEFAULT_RENDER_DEVICE_SET,
                                                 graphics, RenderParams, luid);

// The D3D11 device that the OVR render device wraps.
ID3D11Device *deviceD3D = ((OVR::Render::D3D11::RenderDevice*)pRender)->Device;

struct RenderTarget // from class Texture : public RefCountBase<Texture>
{
    Ptr<Texture> pColorTex;
    Ptr<Texture> pDepthTex;
    Sizei Size;
};

RenderTarget  RenderTargets[Rendertarget_LAST];   // [3]
RenderTarget* DrawEyeTargets[Rendertarget_LAST];  // the buffers we'll actually render to

ovrTextureSwapChain oTSC[2]; // one for color and one for depth
oTSC[0] = DrawEyeTargets[0]->pColorTex->Get_ovrTextureSet();

// Pick the swap chain buffer that is not currently being displayed.
int swapChainLength = 0;
ovr_GetTextureSwapChainLength(Session, oTSC[0], &swapChainLength);
int currentIndex = 0;
ovr_GetTextureSwapChainCurrentIndex(Session, oTSC[0], &currentIndex);
int workingIndex = (currentIndex + 1) % swapChainLength;

// Get the underlying ID3D11Texture2D for that buffer.
ID3D11Texture2D *dxColorbuffer;
ID3D11Texture2D *dxDepthbuffer;   // depth handled similarly (not shown)
ovr_GetTextureSwapChainBufferDX(Session, oTSC[0], workingIndex, IID_ID3D11Texture2D,
                                (void **)&dxColorbuffer);

// Open the interop device and register the color texture with GL.
wglDXOpenDeviceNVFunc = (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV"); // etc.
HANDLE m_hInteropDevice = wglDXOpenDeviceNVFunc((void *)deviceD3D);

GLuint m_gltextures[2]      = { 0, 0 }; // color and depth buffers as Texture2D
HANDLE m_hInteropObjects[2] = { 0, 0 }; // color and depth buffers as Texture2D
glGenTextures(2, m_gltextures);
m_hInteropObjects[0] = wglDXRegisterObjectNV(m_hInteropDevice, dxColorbuffer,
                                             m_gltextures[0],
                                             GL_TEXTURE_2D,
                                             WGL_ACCESS_READ_WRITE_NV);

The last line always returns a null handle, even though no error is set.  I do not think the function sets an error on failure.
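Since GetLastError() reports nothing, the only useful diagnostic I have found is to dump the texture description of the buffer that refuses to register and compare it with one that registers successfully (a rough sketch using only standard D3D11 calls; assumes the dxColorbuffer pointer from above):

// Dump the swap chain texture's description for comparison.
D3D11_TEXTURE2D_DESC desc = {};
dxColorbuffer->GetDesc(&desc);
char msg[256];
sprintf_s(msg, "format=%d bind=0x%X misc=0x%X usage=%d samples=%d\n",
          desc.Format, desc.BindFlags, desc.MiscFlags, desc.Usage, desc.SampleDesc.Count);
OutputDebugStringA(msg);
// A DXGI_FORMAT_*_TYPELESS value in desc.Format is a red flag for interop registration.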

Thanks for taking a look.

knchaffin
Explorer
I've narrowed my problem down to the fact that the textures at the ovrTextureSwapChain level are typeless; a valid type, format, size, etc. is only supplied when the render target view is created. If I grab a texture at the RTV level, or similar, I can call wglDXRegisterObjectNV() and a non-null handle is returned.

I have not been able to do a GL render to the registered texture, but I'm still trying.


knchaffin
Explorer
I'm afraid that I am going to have to abandon my NV_DX_interop2 efforts, unless someone can direct me in a workable direction. As mentioned above, the root problem is that the Oculus SDK application frameworks all make use of ID3D11View interfaces by creating the render target textures with a "typeless" format and then overriding the format in the view-creation call with the desired format, which can then be used by the pipeline. Unfortunately, for NV_DX_interop2, the render target ID3D11Texture2D pointers have to be used to register the targets with GL. Since the base render targets are typeless, the registration fails. I've tried all sorts of workarounds, to no avail. BTW, the same thing applies to depth stencil views and shader resource views, in addition to render target views.
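To make that concrete, the frameworks essentially do the equivalent of the following (a simplified sketch of the pattern, not the SDK's actual code; deviceD3D is the ID3D11Device from earlier):

// The texture itself is created typeless...
D3D11_TEXTURE2D_DESC td = {};
td.Width = 1344; td.Height = 1600;             // placeholder eye buffer size
td.MipLevels = 1; td.ArraySize = 1;
td.Format = DXGI_FORMAT_R8G8B8A8_TYPELESS;     // typeless at the resource level
td.SampleDesc.Count = 1;
td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* tex = nullptr;
deviceD3D->CreateTexture2D(&td, nullptr, &tex);

// ...and the concrete format is only supplied when the view is created.
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; // typed only at the view level
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
ID3D11RenderTargetView* rtv = nullptr;
deviceD3D->CreateRenderTargetView(tex, &rtvd, &rtv);

// wglDXRegisterObjectNV() only ever sees 'tex', whose format is typeless,
// and that is the registration that fails.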

Any ideas on how to effectively do this?

Thanks for your time. I'm hoping that someone can point me in the right direction, or tell me this is impossible without rewriting the SDK Samples application frameworks.

thewhiteambit
Adventurer
Use Volgaskoy's approach. You have to create additional buffers, because the Oculus API was done by people who are not really good at 3D. Sorry folks, but if you knew what you were doing, you would not have so many stupid implementation failures. The only way of using interop is to have additional buffers and additional copy operations that would not be necessary if LibOVR had been done well in the first place (this was still possible with the pre-0.8 API, but it was eliminated for the convenience of people who find it too complicated to create a texture themselves and tell the API about it. I would not mind having the option of LibOVR creating the textures, but being forced into it is just stupid). I can't imagine any problem that forces the programmers at Oculus to do it the way they do. I tried to explain the faults in the API layout years ago, but I finally gave up. Now interop is a real PITA, but I can say it is definitely possible. Also, pray you don't have to copy depth buffers for interop; that is really hard and will break your brain. Still possible, but hard due to the bad API layout.
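In outline, the extra-buffer approach looks something like this (a sketch with placeholder names, error checks omitted; the copy into the OVR swap chain then happens on the DX side every frame):

// 1. Create your own, explicitly typed D3D texture to share with GL.
D3D11_TEXTURE2D_DESC td = {};
td.Width = eyeWidth; td.Height = eyeHeight;    // match the OVR eye buffer size
td.MipLevels = 1; td.ArraySize = 1;
td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;        // typed, not typeless
td.SampleDesc.Count = 1;
td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* sharedTex = nullptr;
d3dDevice->CreateTexture2D(&td, nullptr, &sharedTex);

// 2. Register it with GL once; this succeeds because the format is typed.
GLuint glTex = 0;
glGenTextures(1, &glTex);
HANDLE hSharedTex = wglDXRegisterObjectNV(hInteropDevice, sharedTex, glTex,
                                          GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

// 3. Per frame: GL renders into glTex between lock/unlock, then DX copies
//    sharedTex into the current OVR swap chain buffer.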

thewhiteambit
Adventurer
See, this is from 2016!
https://forums.oculusvr.com/developer/discussion/36032/how-come-ovr-api-is-still-so-bad-designed-aft...

And this is from 2015 - me and some other developers complaining about unnecessary render steps needed because of the stupid API layout:
https://forums.oculusvr.com/developer/discussion/comment/317469

You only have to abandon all hope of real improvements, find your workarounds for the faults in the API layout, and hope that Oculus will not break them again and again with unnecessary API changes that only break things but bring no improvement - like the latest runtime did by adding a stupid new ASW. Sorry, but I am really pissed, because this company does not even provide documentation on breaking changes, and there is no way to roll back for testing/compatibility.

knchaffin
Explorer
@thewhiteambit
I'm taking you at your word that full NV_DX_interop2 is possible if I create my own render targets. Just for clarification: if I create my own swap chain, do I have to do that at the ID3D11 level and avoid the OVR SDK textures entirely, or can I still use the OVR swap chain functionality? Do we know specifically which part of the OVR API breaks the interop? Tests I ran recently seemed to indicate that I could call wglDXRegisterObjectNV() successfully on an OVR texture swap chain texture, as long as that swap chain was not associated with the OVR render pipeline (i.e., not part of the OVR device swap chain, etc.).

So, my plan is to create my own texture swap chain, register the color buffer with GL, etc., have GL render to that texture, and then on the OVR side of things do a blit from my texture to the OVR back buffer. Is this what you were suggesting? Is there a blit that comprehends the depth buffer?

Can you say what approach is required to do something similar with the depth buffer texture? It seems like a shader might have to be used to merge the GL-written depth texture with the OVR back buffer depth texture, but the GL color buffer contents might have to be used in that shader as well.

Thanks for your consideration. I had abandoned the interop approach since no one would confirm whether interop could work, but I'm willing to spend more time on it as long as it has been successfully used by others.

thewhiteambit
Adventurer
Swap chain? You don't need a classic swap chain at all; that's (pre-)DX9 thinking. Just create DX textures, create an interop instance for GL, and render to them in GL (or render to something else and then copy). Then, on the DX end, take that texture (or textures) and copy again into the OVR swap chain - there is no other way than using these OVR-provided buffers. That's IMHO too many unnecessary copy operations, but the stupid OVR API layout forces you to do it that way. If it were possible (like in the pre-0.8 LibOVR) to tell the API which textures to use, you could save on copy operations and memory usage.
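Per frame it boils down to something like this (a rough sketch with placeholder names; texture creation and registration as described above):

wglDXLockObjectsNV(hInteropDevice, 1, &hSharedTex);
// ... GL renders into the FBO that has the registered texture attached ...
wglDXUnlockObjectsNV(hInteropDevice, 1, &hSharedTex);

// Back on the DX side: copy into the buffer OVR wants filled, then commit.
int index = 0;
ovr_GetTextureSwapChainCurrentIndex(Session, colorChain, &index);
ID3D11Texture2D* ovrTex = nullptr;
ovr_GetTextureSwapChainBufferDX(Session, colorChain, index, IID_ID3D11Texture2D,
                                (void **)&ovrTex);
d3dContext->CopyResource(ovrTex, sharedTex);   // sizes must match; formats must be copy-compatible
ovrTex->Release();
ovr_CommitTextureSwapChain(Session, colorChain);
// ... then submit the frame with the layer referencing colorChain as usual.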

This should work vice versa if you are using OVR in GL mode. I did this with OVR in DX mode and textures rendered in GL. I can assure you it will work, and also guarantee it will bring you to the edge of madness - in particular when you find out that (s)RGB settings have been unusable in OVR since the very beginning and they never fixed this. They try to guess sRGB from the texture format - instead of giving you the simple option to say which sRGB interpretation to use. They will get it mostly wrong, and when you finally find a way to force the lame API into a proper sRGB interpretation, Oculus will change it in the coming runtime, rendering your currently perfectly working release useless when the clients have already had it for months without errors. Hooray! Together with some odd sRGB features retrofitted into GL and the restricted texture formats that work for interop... you will have fun with this!

Some incompatible sRGB formats will give you only black, and to make it worse, there is a bug in some NVIDIA drivers where DX/GL interop only works the second time the process using interop is started.

Depth buffers are another hell, since they are hard to copy at all. You can bind one as a DX shader resource and write the depth values in a fullscreen quad render pass, but this is a real PITA. Since the latest ASW approach broke depth buffer usage, you currently don't have to bother about depth buffers at all. That's the "good news" part here.
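For what it's worth, the fullscreen-quad trick amounts to binding the copied depth texture as a shader resource and re-emitting it through SV_Depth, roughly like this (pixel-shader sketch only, as an HLSL string; assumes the source depth copy is bound at t0 and depth writes are enabled with the depth test set to always pass):

// Pixel shader for the depth-write pass (sketch; compile with D3DCompile and
// draw a fullscreen triangle with this bound).
const char* kDepthWritePS = R"(
Texture2D<float> srcDepth : register(t0);

float main(float4 pos : SV_Position) : SV_Depth
{
    // Re-emit the source depth value so the destination depth buffer receives it.
    return srcDepth.Load(int3(pos.xy, 0));
}
)";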

Oh, have I mentioned MSAA render targets? They will work, but don't ask me what I had to do - you have to give your firstborn to the dark lord. None of this would be necessary if Oculus provided a better API, but I guess they don't want to hire people who know how to do that. Instead they put a lot of money into nagging home environments with the sound of a campfire on the headphones when all you want is silence...

You probably get one frame of latency if you copy buffers from GL, and if you provide those to OVR it will introduce some horrible judder effects if you provide depth buffers. It worked perfectly in pre-1.36 (or 1.35) runtimes, but now it looks like shit, so we had to disable it.

Really, I can't believe how stupidly this is done by Oculus again and again, with problems lasting since 2015 - and if you find a good workaround for the stupid things they did, you can bet they will break everything in the months to come by adding other behavior to the OVR runtime.

knchaffin
Explorer
Thanks @thewhiteambit for your detailed response. I am working on just using a single color texture and a single depth texture for GL rendering, and will add a swap chain later if needed. As you mentioned, I probably will not need one.

I have sRGB turned off in my app, since it was messing up my OVRvision Pro stereoscopic 3D video camera image, so I may not need to worry about that.

I probably never stated that I am trying to do concurrent interop, where both DX and GL render to the OVR swap chain, more or less as layers. As such, I'm not going to be able to get away with a simple copy from the GL-rendered texture to the OVR swap chain, but will probably have to write a shader to blend the two renders in some way, hopefully via the depth buffers - but that remains to be seen. I will do the simple copy first to confirm that interop is working.

I've been doing DirectX since DX3 and Windows since Windows 1. I have found that I have to have an extremely high tolerance for OS and SDK changes breaking my code. When I was doing DX game engine development, it would take me weeks and sometimes months to rewrite my engine after every new DX version came out. But it is what it is if I want to play. Now I use Unity if I want a game engine, and I fix my Unity projects after every major (or not so major) Unity release. Since I also include NVIDIA CUDA and/or MS DirectCompute GPU parallel code in almost all of my projects, those break as well. I spend as much time rewriting code as I spend writing it. I don't think the problems are limited to OVR. But I also get very frustrated :)



thewhiteambit
Adventurer
I am doing mixed GL and DX in interop also; otherwise it wouldn't make much sense not to simply use OVR in GL mode.

It is one thing if another release of DX requires you to change some code to adapt - but what I find hard to accept is the runtime changing behind your back and breaking code that is already out there. This is what Oculus is constantly doing!

AFAIK even new DX runtimes never broke applications that were already out there. At least not on a regular basis, the way the Oculus runtime does.

For the sRGB part, you are lucky if you can just skip it completely. Still, I had some problems arising not from the sRGB part itself (I know how to handle that), but from the OVR API having inconsistent sRGB handling and making changes in the runtime when the code is already out there.

If at least they gave you a chance to force a specific runtime version, as is possible with DX by binding to a specific library release... but no, they just change it, and there is nothing you can do about it except ship a new release to all your customers with fixed workaround code. And of course they will blame you and not Oculus. I have no problem with new SDKs breaking code either, but runtimes shipped via forced internet updates? C'mon Oculus...

thewhiteambit
Adventurer
But talking about stupid decisions, I still don't get why Microsoft is not shipping all the latest redistributable DX DLLs and redistributable VC runtimes with OS updates :#