
wglDXRegisterObjectNV returns 0 but no error ( NV_DX_interop2 )

knchaffin Posts: 34
Brain Burst
edited March 21 in PC Development
Has anyone successfully added NV_DX_interop2 functionality to a PC Win10 OVR SDK app with NVIDIA GTX 1080 GPU?  Just to be clear, my goal is to be able to execute arbitrary GL code and have it write to the DX eye render targets, with the proper wglDXLockObjectsNV() and wglDXUnlockObjectsNV() surrounds.


The problem I am having is that when I call

gl_handles[0] = wglDXRegisterObjectNV(gl_handleD3D, dxColorbuffer,  gl_names[0], GL_RENDERBUFFER,  WGL_ACCESS_READ_WRITE_NV);

the handle is set to 0, but no error is detected by GetLastError(). So, I have 2 renderbuffer objects, color and depth, that are derived from the OVR eye render targets/buffers. The dxColorbuffer object appears correct, as does the dxDepthbuffer object. The call succeeds on the dxDepthbuffer object and sets the handle to a non-null value.

I think I am doing all the required steps correctly, including:

gl_handleD3D = wglDXOpenDeviceNVFunc((void *)((OVR::Render::D3D11::RenderDevice*)pRender)->Device); 
glGenFramebuffers(2, gl_fbonames);
glGenRenderbuffers(2, gl_names);
gl_handles[0] = wglDXRegisterObjectNV(gl_handleD3D, dxColorbuffer, gl_names[0],   GL_RENDERBUFFER,      WGL_ACCESS_READ_WRITE_NV);

... subsequent calls such as glBindFramebuffer(), etc. are present but not shown.
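
For reference, a minimal sketch of the lock/unlock surrounds mentioned above, assuming the registration returned a non-null handle and that the wglDX* entry points have been loaded via wglGetProcAddress():

glBindFramebuffer(GL_FRAMEBUFFER, gl_fbonames[0]);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, gl_names[0]);

wglDXLockObjectsNV(gl_handleD3D, 1, &gl_handles[0]);    // hand the D3D surface to GL
//   ... arbitrary GL rendering into the framebuffer ...
wglDXUnlockObjectsNV(gl_handleD3D, 1, &gl_handles[0]);  // return it to D3D

wglDXUnregisterObjectNV(gl_handleD3D, gl_handles[0]);   // once it is no longer needed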

Thanks.


Comments

  • volgaksoy Posts: 69 Oculus Staff
    Are you doing the GL to DX conversion in hopes of using OpenGL with the OVR SDK API? Or are you hoping to mix and match GL and DX APIs in your own app? I ask because you can directly use OpenGL with the OVR SDK API without having to jump thru interop hoops.
  • knchaffin Posts: 34
    Brain Burst
    I am hoping to mix and match GL and DX APIs in my own app.  I am aware of the OpenGL support within the OVR SDK API.  I hope I am understanding the difference correctly.  What I am really trying to do is integrate the OVR Avatar SDK into my DX based OVR SDK app.  The Avatar SDK is GL only.  So, I'm trying to jump through a few hoops such that the Avatar SDK API and example code can be used basically unchanged.
  • knchaffin Posts: 34
    Brain Burst
    By the way, my app is a research platform that also includes a stereoscopic video camera mounted on an Arduino robotics turret and controlled via two-way UDP socket communication between my app and the Arduino. So, the HMD poses drive the pan and tilt of the video camera. A lot is going on in this application.
  • knchaffin Posts: 34
    Brain Burst
    A bit more info... I am using pColorTex->Get_ovrTextureSet() to obtain the eye render target's 3-member swap chain, then the
    ovr_GetTextureSwapChainCurrentIndex(..) function to get the current index, and then (currentIndex+1)%swapChainLength to index into ovr_GetTextureSwapChainBufferDX() to obtain my dxColorBuffer.

    Reading other forum conversations leads me to believe that there may be something in the OVR API that prevents NV_DX_interop2 from functioning, such as perhaps the OVR API locking the DX render targets such that GL cannot write to them via interop.  Those conversations are from 3 years back.  Does anyone know if there is such a limitation that would prevent generic GL writing to the shared DX render targets other than using a blit from the GL textures to the DX textures?

    Otherwise I am wondering if there is a special setting for the pixel format or attributes when opening the DX/GL device after calling:

    wglChoosePixelFormatARBFunc(..)
    context = wglCreateContextAttribsARBFunc(hDC, 0, attribs);
    wglMakeCurrent(hDC, context);

    gl_handleD3D = wglDXOpenDeviceNVFunc((void *)((OVR::Render::D3D11::RenderDevice*)pRender)->Device);

    I've been working on this issue for a couple of weeks, to no avail.
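
    (For reference, a minimal sketch of how the interop entry points are typically loaded once the GL context above is current; the typedef names come from wglext.h, the *Func variable names just mirror the convention used above, and each pointer should be checked for NULL before use.)

    PFNWGLDXOPENDEVICENVPROC       wglDXOpenDeviceNVFunc       = (PFNWGLDXOPENDEVICENVPROC)      wglGetProcAddress("wglDXOpenDeviceNV");
    PFNWGLDXREGISTEROBJECTNVPROC   wglDXRegisterObjectNVFunc   = (PFNWGLDXREGISTEROBJECTNVPROC)  wglGetProcAddress("wglDXRegisterObjectNV");
    PFNWGLDXLOCKOBJECTSNVPROC      wglDXLockObjectsNVFunc      = (PFNWGLDXLOCKOBJECTSNVPROC)     wglGetProcAddress("wglDXLockObjectsNV");
    PFNWGLDXUNLOCKOBJECTSNVPROC    wglDXUnlockObjectsNVFunc    = (PFNWGLDXUNLOCKOBJECTSNVPROC)   wglGetProcAddress("wglDXUnlockObjectsNV");
    PFNWGLDXUNREGISTEROBJECTNVPROC wglDXUnregisterObjectNVFunc = (PFNWGLDXUNREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXUnregisterObjectNV");
    PFNWGLDXCLOSEDEVICENVPROC      wglDXCloseDeviceNVFunc      = (PFNWGLDXCLOSEDEVICENVPROC)     wglGetProcAddress("wglDXCloseDeviceNV");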





  • JacksonGordon Posts: 132
    Art3mis
    bump @volgaksoy (i'm just really curious)

  • volgaksoy Posts: 69 Oculus Staff
    The Oculus provided textures internally use GL_TEXTURE_2D instead of the GL_RENDERBUFFER type you're using in the wglDXRegisterObjectNV call. So we share the D3D resource over as a texture, and then bind that texture to a framebuffer in GL. Once that's done, you also need to properly lock and unlock the resources when D3D is accessing them, using wglDXLockObjectsNV and wglDXUnlockObjectsNV.

    However, I think you should side step all of that, and avoid pushing the Oculus SDK provided textures thru the interop. Instead create your own textures and render targets in your own code. Share them between GL and D3D as necessary. Copy the contents of the GL render over to the final Oculus created DX render afterwards. The perf hit to copy over the contents would be minimal. That way you're not trying to make heads or tails of the Oculus SDK's textures.
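
    (A minimal sketch of that suggestion with assumed names: myD3DTexture is an application-created ID3D11Texture2D*, and glTexName/glFboName come from glGenTextures/glGenFramebuffers. It is not the SDK's actual code.)

    HANDLE hTex = wglDXRegisterObjectNV(gl_handleD3D, myD3DTexture,
                                        glTexName, GL_TEXTURE_2D,
                                        WGL_ACCESS_READ_WRITE_NV);

    glBindFramebuffer(GL_FRAMEBUFFER, glFboName);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, glTexName, 0);

    wglDXLockObjectsNV(gl_handleD3D, 1, &hTex);    // GL owns the surface
    //   ... render the GL content into glFboName ...
    wglDXUnlockObjectsNV(gl_handleD3D, 1, &hTex);  // hand it back to D3D

    // Then, on the D3D side, copy myD3DTexture into the Oculus-provided
    // eye texture (e.g. with ID3D11DeviceContext::CopyResource) before submitting.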
  • knchaffin Posts: 34
    Brain Burst
    Thank you very much for your suggestions volgaksoy . This gives me a lot to work with :)
  • knchaffin Posts: 34
    Brain Burst
    Is there any possibility that Oculus is doing a wglDXRegisterObjectNV in the background on the application rendertarget swapchain textures? No matter what I try, wglDXRegisterObjectNV returns 0 but sets no error. I think I have seen this behavior before when I tried to register an object that was already registered. Of course this also raises the question as to whether Oculus is also doing a wglDXOpenDeviceNV in the background, since wglDXRegisterObjectNV requires a DX/GL device handle argument.

    It would be nice to know if in fact full NV_DX_interop2 via wglDXOpenDeviceNVFunc is possible with the OVR SDK and runtime.   I'm sure my confusion is related to my ignorance of exactly what Oculus is doing in the runtime DLL, for which source code is not provided (at least that is my understanding).  I see the SDK LibOVR Shim functions for communicating with the DLL, but that's it.

    Thanks to anyone who can clarify this.



  • knchaffin Posts: 34
    Brain Burst
    edited March 24
    Since I do not fully understand how interop is implemented on the OVR SDK and runtime, I feel that I should mention my framework for my Windows app.

    - My app is based on the sample Oculus World Demo app framework.
    - The RenderDevice and eye render target objects are declared and defined in the  OculusWorldDemo.h header and the Samples/CommonSrc/Render_D3D11_Device.h headers.

    As such, in the OculusWorldDemo.h file, the following are defined:

    class OculusWorldDemoApp : public Application
    {
        ....
        struct RenderTarget
        {
            Ptr<Texture> pColorTex;   // Texture declared in Samples/CommonSrc/Render_D3D11_Device.h
            Ptr<Texture> pDepthTex;
        };

        // Last render target for eye FOV buffers.
        static const int RenderTarget_EyeLast = Rendertarget_BothEyes + 1;

        RenderTarget  RenderTargets[Rendertarget_LAST];
        RenderTarget* DrawEyeTargets[Rendertarget_LAST];
        ....
    };

    Is this what you meant when you said I should create my own textures rather than relying on those Oculus provides? All further actions on the render textures are done through these pColorTex and pDepthTex objects. Should these textures be compatible with the full interop2, if desired?

    Thanks

  • knchaffin Posts: 34
    Brain Burst
    edited March 26
    Hopefully I am getting closer to understanding why wglDXRegisterObjectNV() returns null for OVR eye render textures. If I create the ID3D11Texture2D* object manually, wglDXRegisterObjectNV() succeeds with a non-null handle returned. If I call it on an existing ID3D11Texture2D object, it always returns null. I will try to list the minimal source code steps showing how I am getting the pre-existing textures for the call, and hopefully someone can tell me what I am doing wrong. The code just follows the color buffer (not depth) and is not meant to be compiled; it just shows the types of the pertinent objects.
    PlatformCore::PlatformCore(Application *app)
    {
        pApp = app;
        pApp->SetPlatformCore(this);
        StartupSeconds = OVR::Timer::GetSeconds();
    }

    // The following are at the Application class level
    pApp->SetPlatformCore(this);
    class PlatformCore* pPlatform = pApp;
    RenderDevice* pRender = pPlatform->SetupGraphics(Session, OVR_DEFAULT_RENDER_DEVICE_SET,
                                                     graphics, RenderParams, luid);

    ID3D11Device *deviceD3D = ((OVR::Render::D3D11::RenderDevice*)pRender)->Device;

    struct RenderTarget // from class Texture : public RefCountBase<Texture>
    {
        Ptr<Texture> pColorTex;
        Ptr<Texture> pDepthTex;
        Sizei        Size;
    };

    RenderTarget  RenderTargets[Rendertarget_LAST];   // [3]
    RenderTarget* DrawEyeTargets[Rendertarget_LAST];  // the buffers we'll actually render to

    ovrTextureSwapChain oTSC[2]; // one for color and one for depth
    oTSC[0] = DrawEyeTargets[0]->pColorTex->Get_ovrTextureSet();

    int swapChainLength = 0;
    ovr_GetTextureSwapChainLength(Session, oTSC[0], &swapChainLength);
    int currentIndex = 0;
    ovr_GetTextureSwapChainCurrentIndex(Session, oTSC[0], &currentIndex);
    int workingIndex = (currentIndex + 1) % swapChainLength;

    ID3D11Texture2D *dxColorbuffer;
    ID3D11Texture2D *dxDepthbuffer;
    ovr_GetTextureSwapChainBufferDX(Session, oTSC[0], workingIndex, IID_ID3D11Texture2D,
                                    (void **)&dxColorbuffer);

    wglDXOpenDeviceNVFunc = (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV"); // etc.
    HANDLE m_hInteropDevice = wglDXOpenDeviceNVFunc((void *)deviceD3D);

    GLuint m_gltextures[2]      = { 0, 0 }; // color and depth buffers as Texture2D
    HANDLE m_hInteropObjects[2] = { 0, 0 }; // color and depth buffers as Texture2D
    glGenTextures(2, m_gltextures);
    m_hInteropObjects[0] = wglDXRegisterObjectNV(m_hInteropDevice, dxColorbuffer,
                                                 m_gltextures[0],
                                                 GL_TEXTURE_2D,
                                                 WGL_ACCESS_READ_WRITE_NV);

    The last line always returns a null handle, even though no error is set.  I do not think the function sets an error on failure.

    Thanks for taking a look.


  • knchaffin Posts: 34
    Brain Burst
    I've narrowed my problem down to the fact that the textures at the ovrTextureSwapChain level are typeless and are overridden at the render target view with a valid type, format, size, etc. If I grab a texture at the RTV level, or similar, I can call wglDXRegisterObjectNV() and a non-null handle is returned.

    I have not been able to do a GL render to the registered texture, but I'm still trying.


  • knchaffin Posts: 34
    Brain Burst
    I'm afraid that I am going to have to abandon my NV_DX_interop2 efforts, unless someone can direct me in a workable direction. As mentioned above, the root problem is that the Oculus SDK application frameworks all make use of ID3D11View interfaces by creating the render target textures with a "typeless" format and then overriding the format in the view creation call with the desired format, which can then be used by the pipeline. Unfortunately, for NV_DX_interop2, the render target ID3D11Texture2D pointers have to be used to register the targets with GL. Since the base render targets are typeless, the registration fails. I've tried all sorts of workarounds, to no avail. BTW, the same thing applies to depth stencil views and shader resource views, in addition to render target views.

    Any ideas on how to effectively do this?

    Thanks for your time.  I'm hoping that someone can point me in the right direction, or tell me this is impossible without rewriting the SDK Samples  application frameworks.
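
    (To illustrate the typeless issue described above, a minimal, hypothetical check, assuming dxColorbuffer came from ovr_GetTextureSwapChainBufferDX():)

    D3D11_TEXTURE2D_DESC desc;
    dxColorbuffer->GetDesc(&desc);
    // The swap chain's base resource carries a typeless format; the concrete format
    // (e.g. DXGI_FORMAT_R8G8B8A8_UNORM or _UNORM_SRGB) is only supplied in the
    // D3D11_RENDER_TARGET_VIEW_DESC when the view is created.
    if (desc.Format == DXGI_FORMAT_R8G8B8A8_TYPELESS)
    {
        // wglDXRegisterObjectNV() returns a NULL handle for this resource,
        // which matches the behavior described above.
    }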

  • thewhiteambit Posts: 282
    Art3mis
    edited April 13
    Use volgaksoy's approach. You have to create additional buffers, because the Oculus API was done by people that are not really good at 3D. Sorry folks, but if you knew what you did, you would not have so many stupid implementation failures. The only way of using interop is to have additional buffers and additional copy operations that would not be necessary if LibOVR had been done well in the first place (this was still possible with the pre-0.8 API but was eliminated for the convenience of people that find it too complicated to create a texture by themselves and tell the API about it. I would not mind having the option of LibOVR creating the textures, but being forced to is only stupid). I can't imagine any problems forcing the programmers at Oculus to do it the way they do it. I tried to explain the faults in the API layout years ago, but I finally gave up. Now interop is a real PITA, but I can say it is definitely possible. Also pray you don't have to copy depth buffers for interop; that is really hard and will break your brain. Still possible, but hard due to the bad API layout.
  • thewhiteambit Posts: 282
    Art3mis
    edited April 13
    See, this is from 2016!
    https://forums.oculusvr.com/developer/discussion/36032/how-come-ovr-api-is-still-so-bad-designed-after-years-of-iteration

    And this is from 2015, me and some developers complaining about unnecessary render steps needed because of stupid API layout:
    https://forums.oculusvr.com/developer/discussion/comment/317469

    You only have to abandon all hope of real improvements, find your workarounds for faults in the API layout, and hope that Oculus will not break them again and again with unnecessary API changes that only break things but bring no improvement - like the latest runtime did by adding a stupid new ASW. Sorry, but I am really pissed because this company does not even provide documentation on breaking changes, and there is no possibility to roll back for testing/compatibility.
  • knchaffin Posts: 34
    Brain Burst
    I'm taking you at your word that full NV_DX_interop2 is possible if I create my own render targets. Just for clarification, if I create my own swapchain, do I have to do that at the ID3D11 level and avoid the OVR SDK textures totally, or can I still use the OVR swapchain functionality? Do we know specifically which part of the OVR API breaks the interop? Tests I ran recently seemed to indicate that I could call wglDXRegisterObjectNV() successfully on an OVR texture swap chain texture, as long as that swap chain was not associated with the OVR render pipeline (i.e., not part of the OVR device swapchain, etc.)

    So, my plan is to create my own texture swapchain, register the color buffer with GL, etc., have GL render to that texture, and then on the OVR side of things do a BLT from my texture to the OVR backbuffer. Is this what you were suggesting? Is there a BLT that comprehends the depth buffer?

    Can you say what approach is required to do something similar with the depth buffer texture? It seems like a shader might have to be used to merge the GL-written depth texture with the OVR backbuffer depth texture, but the GL color buffer contents might have to be used in that shader also.

    Thanks for your consideration. I had abandoned the interop approach since no one would indicate if interop could work, but I'm willing to spend more time on interop as long as it has been successfully used by others.
  • thewhiteambit Posts: 282
    Art3mis
    edited April 15
    Swapchain? You don't need a classic swapchain at all, that's (pre) DX9. Just create DX textures, create an interop instance for GL, render to them in GL (or render to something else and then copy). Then on the other DX end, take those texture(s) and copy again to the OVR swapchain - there is no other way than using these OVR provided buffers. That's IMHO too many unnecessary copy operations, but the stupid OVR API layout forces you to do it that way. If it were possible (like in the pre-0.8 LibOVR) to tell the API which textures to use, you could save on copy operations and memory usage.

    This should work vice versa if you are using OVR in GL mode. I did this with OVR in DX mode with textures rendered in GL. I can assure you it will work, and also guarantee you it will bring you to the edge of madness - in particular when you find out that (s)RGB settings have been unusable in OVR since the very beginning and they never fixed this. They try to guess sRGB from the texture format - instead of giving you the simple option to tell it what sRGB interpretation to use. They will do it mostly wrong, and when you finally find a way to force the lame API into a proper sRGB interpretation, Oculus will change it in the coming runtime, rendering your currently perfectly working release useless when the clients have already had it for months without errors. Hooray! Together with some odd sRGB features retrofitted into GL and the restricted texture formats that work for interop... you will have fun with this!

    Some incompatible sRGB formats will give you only black, and to make it worse, there is a bug in some NVIDIA drivers where DX/GL interop will only work the second time the process making use of interop is started.

    Depth buffers are in another hell, since they are hard to copy at all. You can use them as a DX shader resource and write the depth values with a fullscreen quad render pass, but this is a real PITA. Since the latest ASW approach broke depth buffer usage, you currently don't have to bother about depth buffers at all. That's the "good news" part here.
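
    (For what it's worth, the fullscreen-pass trick mentioned above usually boils down to a pixel shader that writes SV_Depth. A minimal, hypothetical sketch of the shader side, assuming the GL-written depth values are bound as an R32_FLOAT shader resource, a standard fullscreen-triangle vertex shader, and a depth-stencil state with DepthFunc = ALWAYS and depth writes enabled:)

    // Hypothetical HLSL for the depth-copy pass described above.
    const char* kDepthWritePS = R"(
        Texture2D<float> DepthSrc : register(t0);

        float main(float4 pos : SV_Position) : SV_Depth
        {
            // Write the source depth texel straight into the bound depth buffer.
            return DepthSrc.Load(int3(pos.xy, 0));
        }
    )";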

    Oh, have I mentioned MSAA rendertargets? It will work, but don't ask me what I had to do; you have to give your first born to the dark lord. Nothing of this would be necessary if Oculus provided a better API, but I guess they don't want to hire people who know how to do that. Instead they put a lot of money into nagging home environments that have the sound of a campfire on the headphones when all you want is silence...

    You probably get one frame of latency if you copy buffers from GL, and if you provide those to OVR, it will introduce some horrible judder effects if you provide depth buffers. It worked perfectly in pre-1.36 (or 1.35) runtimes, but now this looks like shit, so we had to disable it.

    Really, I can't believe how stupid this is done by Oculus again and again with problems lasting since 2015 - and if you found a good workaround for the stupid things they did, you can bet they will break everything in the next months to come by adding other behavior to the OVR runtime.
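
    (To summarize the copy path in code, a minimal sketch under assumed names: sharedTex is the application-created ID3D11Texture2D that GL rendered into between the lock/unlock calls, chain is the color ovrTextureSwapChain of the eye layer, and context is the immediate ID3D11DeviceContext:)

    int index = 0;
    ovr_GetTextureSwapChainCurrentIndex(Session, chain, &index);

    ID3D11Texture2D* dst = nullptr;
    ovr_GetTextureSwapChainBufferDX(Session, chain, index, IID_ID3D11Texture2D, (void**)&dst);

    context->CopyResource(dst, sharedTex);      // destination first, then source
    dst->Release();

    ovr_CommitTextureSwapChain(Session, chain); // hand the buffer back to the runtime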
  • knchaffin Posts: 34
    Brain Burst
    Thanks @thewhiteambit for your detailed response. I am working on just using a single color and a single depth texture for GL rendering, and will do a swapchain later if needed. As you mentioned, I probably will not need to do a swapchain.

    I have SRGB turned off in my app since that was messing up my OVRvision Pro stereoscopic 3D video camera image, so I may not need to worry about that.

    I probably never stated that I am trying to do concurrent interop where both DX and GL are rendering to the OVR swapchain, more or less as layers.  As such, I'm not going to be able to get away with a simple copy from the GL rendered texture to the OVR swapchain but will probably have to do a shader to blend the two renders in some way, hopefully via the depth buffers, but that remains to be seen.  I will do the simple copy to confirm that interop is working first.

    I've been doing DirectX since DX3 and Windows since Windows 1.  I have found that I have to have an extremely high tolerance for the OS and SDK changes breaking my code.  When I was doing DX game engine development, it would take me weeks and sometimes months to rewrite my engine after every new DX version came out.  But, it is what it is if I want to play.  Now I use Unity if I want a game engine and I fix my Unity projects after every major or not so major Unity release.  Since I also include NVIDIA CUDA and/or MS DirectCompute GPU parallel code in almost all of my projects, I also have those break.  I spend as much time rewriting code as I spend writing it. I don't think the problems are limited to OVR.  But, I also get very frustrated:)



  • thewhiteambit Posts: 282
    Art3mis
    edited April 15
    I am doing mixed GL and DX in interop also; otherwise it wouldn't make much sense not to just use OVR in GL mode.

    It is something completely different if another release of DX requires you to change some code to adapt - or, and this is what I find hard to accept, a runtime changing behind your back and breaking your code that is already out there. This is what Oculus is constantly doing!

    Afaik even new DX runtimes never broke running applications already out there. At least not on a regular basis like the Oculus runtime does.

    For the sRGB part, you are lucky if you can just skip this completely. Still, I had some problems arising not from the sRGB part itself (I know how to handle this), but from the OVR API having inconsistent sRGB handling and making changes in the runtime when the code is already out there.

    If at least they gave you a chance to force a specific runtime version, as is possible with DX by binding to a specific library release... no, they just change it and there is nothing you can do about it but ship a new release to all your customers with fixed workaround code. And of course they will blame you and not Oculus. I have no problem with new SDKs breaking code either, but runtimes shipped via forced internet updates? C'mon Oculus...
  • thewhiteambit Posts: 282
    Art3mis
    edited April 15
    But talking about stupid decisions, I still don't get why Microsoft is not shipping all the latest redistributable DX DLLs and redistributable VC Runtimes with OS updates  :#
  • knchaffin Posts: 34
    Brain Burst
    Luckily, what I am doing now is just for my own entertainment. Up until recently I was a university researcher and would pull my hair out trying to keep a research platform and protocol unchanged and functional for the duration of a 5-year federal grant:) Also, prior to that I worked on commercial 3D software, and that was a pain in the butt when something broke, usually due to an OS update (or a driver update). And, as I get older, it gets harder and harder to develop bleeding edge systems. I think you and I may be similar in that we will push past the limits of all systems in trying to do what we envision.

    I'm just trying to develop an immersive VR environment in which I can control all of my many music synthesizers and software using the Touch controllers, HMD motion and gesture control.  I want some me-avatars in there with me so I can have a virtual trio :)

  • knchaffin Posts: 34
    Brain Burst

    I have everything set up to do the interop in the way that you suggested, but so far when I use DeviceContext->CopyResource(pDst, pDXtexture)
    I'm not getting anything rendered.  For testing, I am trying to just copy the empty pDXtexture to the destination without having GL render to it.  My expectation is that it should replace what is in the pDst texture when I do this as I am not using a depth texture.  I have also let the GL code clear the DX texture to an opaque red, to no avail. 

    I have tried several pDst textures:

    ID3D11Texture2D *pDst;

    pDst=Device->BackBuffer, and
    Device->BackBufferRT->GetResource(pDst), and

    IDXGISwapChain *dxswapchain = Device->SwapChain;
    dxswapchain->GetBuffer(0, IID_ID3D11Texture2D, (void **)&pDst);  and

    ovrTextureSwapChain oTSC = Device->CurRenderTarget->Get_ovrTextureSet();
    ovr_GetTextureSwapChainBufferDX(Session, oTSC, 0, IID_ID3D11Texture2D, (void **)&pDst);  // and other indices 1 and 2

    All textures have been confirmed to have the same format and dimensions.

    I can call:
    Device->Clear(1.0, 0.0, 0.0, 1.0, 1, 1);
    to clear the BackBufferRT render target view to opaque red, and that seems to work correctly depending where I call it relative to the 3 "~layers" I am rendering.  Those layers are the Tuscany scene and/or my OVRvision Pro video, the two Touch controllers, and then the interop GL layer.

    I'm somewhat at a loss as to what is going on.

    What do you use for the CopyResource() destination texture in your interop applications?

    I'm sure I am doing something stupid.

    Thanks.





  • thewhiteambit Posts: 282
    Art3mis
    @knchaffin
    I am using CopySubresourceRegion(), but just for convenience. CopyResource() also works. Did you manage to use the GL interop textures in DX space (for example as a shader resource while rendering), so you can be sure it's the copy operation that is failing? Sorry to not be very specific, I implemented this a long time ago. But I still remember copy operations often resulting in only black textures. Maybe you want to fill the destination texture with blue in advance, so you can see whether the copy operation just read the source wrong and interpreted it as black, or whether it does not perform the write at all. With CopySubresourceRegion you can also try to copy only a little part in the center.
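
    (A minimal sketch of that last suggestion, assuming pDst, pDXtexture and Context as in the posts above and a w x h texture:)

    // Copy only a small centered region so a partial write is easy to spot.
    D3D11_BOX box = {};
    box.left  = w / 2 - 64;  box.right  = w / 2 + 64;
    box.top   = h / 2 - 64;  box.bottom = h / 2 + 64;
    box.front = 0;           box.back   = 1;

    Context->CopySubresourceRegion(pDst, 0,               // destination resource, subresource
                                   box.left, box.top, 0,  // destination x, y, z
                                   pDXtexture, 0,         // source resource, subresource
                                   &box);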
  • knchaffin Posts: 34
    Brain Burst

    I followed your advice and created my own ID3D11Texture2D render target/shader resource texture of the same format and size as the OVR final render targets (backbuffer, current render target, swap chain members, etc.). I also initialized this DX texture with a checkerboard data pattern in CreateTexture(). For testing purposes I'm trying to make sure I can copy this texture to the OVR final render texture(s) before registering it with GL via the interop functionality, although I have successfully registered it in tests. I have tried using Context->CopyResource() and Context->CopySubresourceRegion() to every variant of the OVR final render target textures I can think of, and verified that those texture formats and dimensions match my DX texture. The problem is that I never see the checkerboard texture in the final output. I can successfully clear the render target color to opaque red via:
    D3D11Device->Clear(1.0, 0.0, 0.0, 1.0, 0.0, true, false);
    which actually ends up clearing D3D11Device->CurrRenderTarget->GetRtv().  So, I made sure that one variant for my CopyResource() is to destination D3D11Device->CurrRenderTarget->GetRtv()->GetResource()

    Do either of you know which OVR render target I should copy to and if this should work?  Is there anything else I may need to do to make this work?  This is not an interop issue, but rather an issue of copying pure DX textures to OVR render target textures.

    Thanks in advance.







  • thewhiteambit Posts: 282
    Art3mis
    edited April 19
    @knchaffin
    You are probably running into incompatible texture formats for simple blit copying. I know there is a huge table somewhere in the DX documentation explaining all compatible texture formats, but I couldn't find it.

    Or did you just switch the parameter order on CopyResource? It uses a counterintuitive order of destination first, then source...
  • knchaffin Posts: 34
    Brain Burst

    I agree that the pDst and pSrc argument order is counterintuitive, but I do have them correct.

    Both the source and destination textures are
    DXGI_FORMAT_R8G8B8A8_UNORM
    The DX documentation does not limit the formats that can be used with CopyResource(), but rather requires that the formats be "compatible" if a format conversion is required. E.g., the texel sizes are 32 bits in both.
  • thewhiteambit Posts: 282
    Art3mis
    @knchaffin
    just trying to guess why a simple copy can fail. What other flags/parameters do you use for creation of the source and destination texture(s)?
  • knchaffin Posts: 34
    Brain Burst

    ID3D11Texture2D* d3d_colortex_;

    const int w = 1600;
    const int h = 900;
    if (CurrRenderTarget)
    {
        // w = CurrRenderTarget->GetWidth();
        // h = CurrRenderTarget->GetHeight();
        debug_trace("CurrRenderTarget w=%d h=%d", w, h);
    }

    // Create DX surface with dimensions of CurrRenderTarget
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Height             = h;
    desc.Width              = w;
    desc.MipLevels          = 1;
    desc.ArraySize          = 1;
    desc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM; // four-component, 32-bit unsigned-normalized-integer format, 8 bits per channel including alpha
    desc.SampleDesc.Count   = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage              = D3D11_USAGE_DEFAULT;
    desc.BindFlags          = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags     = 0;
    desc.MiscFlags          = 0;

    HRESULT hr = deviceD3D11->CreateTexture2D(&desc, NULL, &d3d_colortex_);

    The above is my DX texture creation code without the checkerboard data initialization. Note, I have sRGB turned off in the app. The OVR rendertargets were created by OVR in the default initialization steps, but with sRGB turned off. I'm using the OculusWorldDemo app framework, with the /Samples/CommonSrc/Render/Render_D3D11_Device.cpp code included in the VS2017 project. The DX texture desc values were selected to match the desc/format obtained from the OVR render color target (or backbuffer, or swap chain textures, as they are all verified to have the same desc/format values).
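
    (For reference, a minimal, hypothetical checkerboard initialization compatible with the desc above; it needs <vector>/<cstdint> and is only a sketch, not the actual code:)

    std::vector<uint32_t> pixels(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            pixels[y * w + x] = (((x / 64) + (y / 64)) & 1) ? 0xFFFFFFFFu   // opaque white
                                                            : 0xFF000000u;  // opaque black (R8G8B8A8_UNORM, little-endian)
    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem     = pixels.data();
    init.SysMemPitch = w * 4; // bytes per row
    deviceD3D11->CreateTexture2D(&desc, &init, &d3d_colortex_);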




  • thewhiteambit Posts: 282
    Art3mis
    @knchaffin
    The texture desc seems ok. Have you tried if copying works without registering the textures for GL interop?
  • knchaffin Posts: 34
    Brain Burst
    Yes, that is what I am doing.  I tested the interop registration and it succeeds, but I commented that out for testing the resource copying.

    I suspect that I am trying to copy to the wrong OVR final render target texture.  For testing I am not using any eye views or RT's but rather just using the single DX texture I created and trying to copy it to the:
    ovrD3D11device class object members as defined in CommonSrc/Render_D3D11_Device.h:
    Ptr<IDXGISwapChain> SwapChain (GetBuffer(0,)); or
    Ptr<ID3D11Texture2D> BackBuffer; or
    Ptr<ID3D11RenderTargetView> BackBufferRT; or
    Ptr<Texture> CurRenderTarget; (CurRenderTarget->GetTex()) or (CurRenderTarget->GetRtv()->GetResource(&ppResource1))

    For each destination test, I make sure the DX texture description matches as the RTV based textures have different width and height than the other textures.

    Or the OVR runtime is locking the textures or something similar.




  • knchaffin Posts: 34
    Brain Burst

    I think only the BackBuffer and SwapChain textures make any sense to try to copy to, as I think the RenderTargetView GetResource() calls only return the resource from which the RTV was constructed, and the RTVs are only accessed via the pipeline to which they are set via Context->OMSetRenderTargets(). So I'm hoping the CopyResource to the BackBuffer or device SwapChain happens independently of the RTVs. Does this make sense?
