
Breakthrough Intel volumetric capture technology, the future of VR and cinema

LZoltowski Posts: 6,429 Volunteer Moderator
edited August 6 in General
Saw this on BBC Click the other day:



That scene was captured under a giant dome using 360-degree volumetric capture on a giant stage. The scene was shot ONCE, then processed into a huge virtual stage where the camera can be moved anywhere, giving unlimited camera angles.

This tech would allow entire movies to be shot in 360-degree volumes and then adapted to VR, so you could walk around the action while it plays out, putting you inside the movie. This is still under heavy development, but the possibilities are crazy. Virtual Westworld, anyone?!
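To make the "camera can be moved anywhere" idea concrete, here's a toy Python sketch — entirely illustrative and not Intel's pipeline; all names and numbers are made up. Once a performance exists as 3D points, any camera pose can be framed after the fact:

```python
import numpy as np

def project_points(points, cam_pos, focal=1000.0, img_size=(1920, 1080)):
    """Project Nx3 world points through a simple pinhole camera at cam_pos,
    looking down -Z. Real systems solve full pose and view blending; this
    only shows why captured 3D data permits arbitrary camera angles."""
    rel = points - cam_pos                    # world coords -> camera coords
    rel = rel[rel[:, 2] < 0]                  # keep points in front of the lens
    u = focal * rel[:, 0] / -rel[:, 2] + img_size[0] / 2
    v = focal * rel[:, 1] / -rel[:, 2] + img_size[1] / 2
    return np.stack([u, v], axis=1)           # 2D pixel positions

cloud = np.random.rand(10000, 3) * [4, 2, 4] - [2, 0, 2]  # stand-in captured scene
pixels = project_points(cloud, cam_pos=np.array([0.0, 1.0, 5.0]))  # any pose works
```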



Several Hollywood studios have already expressed interest in Intel Studios, with Paramount already considering projects there.

If you have access to UK BBC iPlayer (or a VPN), a more in-depth interview and demo is available here:
https://www.bbc.co.uk/iplayer/episode/b0bcywsl/click-shooting-stars
Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

Be kind to one another :)

Comments

  • LZoltowski Posts: 6,429 Volunteer Moderator
If only the Wachowskis had access to this for Matrix 2... that Agent Smith vs. Neo fight would have been NUTS.
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • falken76 Posts: 2,613 Valuable Player
LZoltowski said:
Saw this on BBC Click the other day: [...]

Wow, it looks like a stage built around the Matrix bullet-time tech but upgraded into an entire room. This looks amazing.
  • LZoltowski Posts: 6,429 Volunteer Moderator
falken76 said:
Wow, it looks like a stage built around the Matrix bullet-time tech but upgraded into an entire room. This looks amazing.
Yeah, pretty neat stuff. The one major difference is that the scene is converted into actual 3D data, meaning it can be used in more than just film, but also VR and games. They can also change the lighting after the scene has been shot.
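A minimal sketch of what post-shoot relighting means, assuming simple Lambertian shading (none of this is Intel's actual method; all values are illustrative):

```python
import numpy as np

def relight(albedo, normals, light_dir):
    """Recompute diffuse shading for a new light direction.
    albedo: (N,3) surface colours, normals: (N,3) unit vectors."""
    light = light_dir / np.linalg.norm(light_dir)
    lambert = np.clip(normals @ light, 0.0, None)  # cosine term, clamped at 0
    return albedo * lambert[:, None]

rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))               # stand-in recovered geometry
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.random((1000, 3))                     # stand-in surface colours
noon = relight(albedo, normals, np.array([0.0, 1.0, 0.0]))    # overhead light
sunset = relight(albedo, normals, np.array([1.0, 0.1, 0.0]))  # low side light
```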
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • falken76 Posts: 2,613 Valuable Player
LZoltowski said:
Yeah, pretty neat stuff. The one major difference is that the scene is converted into actual 3D data, meaning it can be used in more than just film, but also VR and games. They can also change the lighting after the scene has been shot.

    Every studio will want access to this.
  • KlodsBrik Posts: 784
    3Jane
    edited August 6
LAX?? ... Great, Apple are gonna buy Intel and release the I-Lax in a few years' time... Now we all need an Apple product, even though few of us thought we could live without it :(

That said, this is gonna be awesome once it's developed and we have the GPU power and resolution to run these videos on our HMDs and get the full experience.
    Be good, die great !
  • LZoltowski Posts: 6,429 Volunteer Moderator
KlodsBrik said:
LAX?? ... Great, Apple are gonna buy Intel and release the I-Lax in a few years' time... [...]
    What is LAX?
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • Zenbane Posts: 12,274 Power Poster
    Great stuff!

    And remember, this is coming:

    Oculus is developing an immersive theater VR experience with real actors

    The central idea is to use trained actors who perform live while viewers interact with them from the comfort of their homes, with everyone using Oculus Rift headsets to enter into and experience the shared world. Oculus hopes that blending the benefits of immersive theater with the unique experimental benefits only VR can provide might be a winning combination that could encourage more consumers, developers, and artists to invest in VR as a form of entertainment and artistic expression. “We’re really interested in, how do you create that experience of live actors without needing to be in a site-specific location,” Rachitsky told CNET. “It’s a way to scale.”

    https://www.theverge.com/2018/4/30/17303904/oculus-vr-immersive-theater-real-actors-motion-capture

    Are you a fan of the Myst games? Check out my Mod at http://www.mystrock.com/
    Catch me on Twitter: twitter.com/zenbane
  • KlodsBrik Posts: 784
    3Jane
    edited August 6
LZoltowski said:
What is LAX?
During the vid above there is a big LAX sign before they enter the Intel dome... Company name? ... (it's a conspiracy!!!)

Hey, you made me watch all those Shane videos... lol


EDIT: Not a sign, more like huge letters.


EDIT again: Won't let me copy the timecode... the LAX thingy is at 1:11
    Be good, die great !
  • cybereality Posts: 26,156 Oculus Staff
    That's absolutely amazing!
    AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i
    Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV
  • LZoltowski Posts: 6,429 Volunteer Moderator
Zenbane said:
Great stuff! And remember, this is coming: Oculus is developing an immersive theater VR experience with real actors [...]

Oh yeah, I forgot about that!... Imagine that concept combined with the theatre. Mind blown.
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • kernow Posts: 732
    Trinity
Curious what cameras they're using for this. Doing this with still images is expensive as-is, given the number of cameras needed (preferably 20+, and 18+ MP recommended)... doing video pushes the MP requirement even higher to avoid as much cleanup as possible, and having the cameras that far away means higher MP still, and more cameras. Not to mention the framework, lights, and the space it's set up in... this setup must be horribly expensive to build (the cameras obviously being the most expensive part).

This makes me feel even more inadequate (my still-model setup is pretty ghetto... not enough MP, not enough cameras, thus lots of cleanup after the model builds [especially where there is hair... hair sucks]).
  • LZoltowski Posts: 6,429 Volunteer Moderator
    edited August 6
kernow said:
Curious what cameras they're using for this. [...] This setup must be horribly expensive to build (the cameras obviously being the most expensive part).

From the BBC interview: 120+ full 4K video cameras (they say this can scale up or down depending on the production requirements). The storage requirement is around 10 petabytes, and everything gets computed on site using proprietary hardware and software.
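Those figures pass a quick sanity check. A back-of-envelope version, assuming uncompressed 8-bit RGB capture (my guess, not a number from the interview):

```python
# ~120 cameras x 4K (3840x2160) x 60 fps, raw 8-bit RGB assumed.
cameras, width, height, fps, bytes_per_px = 120, 3840, 2160, 60, 3

rate = cameras * width * height * fps * bytes_per_px     # bytes per second
print(f"aggregate capture rate: {rate / 1e9:.0f} GB/s")  # ~179 GB/s

hours = 10e15 / rate / 3600   # how long until a 10 PB store fills
print(f"10 PB holds ~{hours:.0f} hours of raw capture")  # ~16 hours
```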

The director gets a crude point cloud, based basically on photogrammetry, where he can position the virtual camera; then the clever algorithms go to work, blending frames from different cameras together with the photogrammetry and 3D data.

    The most expensive part is the huge underground onsite data centre with a lot of computing horsepower.

This is the largest facility of its kind in the world, and although it's still in development, big-name clients are already lining up and production-worthy content is being produced.
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • kernow Posts: 732
    Trinity
LZoltowski said:
From the BBC interview: 120+ full 4K video cameras... around 10 petabytes of storage [...]

    Daamn...
    <speechless>
  • LZoltowski Posts: 6,429 Volunteer Moderator
kernow said:
Daamn... <speechless>
They also have to shoot at 60 fps, plus high shutter speeds, to eliminate motion blur, as that messes with the photogrammetry.
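A rough feel for why, with speeds and image scale that are my own guesses: the blur streak is roughly subject speed x shutter time x image scale.

```python
speed = 5.0           # fast arm/body motion in m/s (assumed)
px_per_metre = 500.0  # image scale at the actor's distance (assumed)

for shutter in (1 / 60, 1 / 250, 1 / 1000):
    blur = speed * shutter * px_per_metre
    print(f"1/{round(1 / shutter)} s shutter -> ~{blur:.1f} px of blur")
# Multi-pixel streaks break feature matching between views, so capture rigs
# push the shutter as fast as the lighting allows.
```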
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • kernow Posts: 732
    Trinity

DaftnDirect said:
Wow, I can't imagine how much processing is going on there... I invested in a copy of 3DF Zephyr [...]

I use Agisoft PhotoScan. Have you used it? And if so, how would you compare 3DF Zephyr to it?
Just from a glance at Zephyr's wiki page, I can see that it does video natively (something PhotoScan does not do).

    DaftnDirect said:
    > (I'd probably have to have a coffee while I waited)

    LOL :)
  • DaftnDirect Posts: 4,049 Valuable Player

I haven't tried that, @kernow. I see it has a free trial, but you have to apply via email, so I may give it a go.

The only reason I bought 3DF Zephyr was that I'd tried the trial version as part of an experiment to get a reference model done for my London project:

That was done using just about 50 screenshots of a Google Earth rotation around Blackfriars, and I was quite impressed with the results, considering it was a photogrammetry-generated model of a photogrammetry-generated model!


I've been testing it on small objects, however, with mixed results. Camera settings and lighting make a huge difference... all parts of the object must be in pin-sharp focus at fairly close range, so aperture and exposure settings are important, as is the right amount of contrast. Reflections are a no-no.


I've been trying with the object outside on a cloudy day to maximise even lighting, but it's still a little hit and miss. So I've decided I need a light tent and a rotary turntable, and a printed template on the turntable with coded targets will probably help. I believe both Zephyr and PhotoScan support these targets. I'll upload some results after I've re-tested with the tent.
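For what it's worth, the pin-sharp-focus problem is easy to quantify with the standard thin-lens depth-of-field formula; the camera numbers below are assumptions, not the actual setup described above.

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.019):
    """Near/far limits of acceptable focus. 0.019 mm is a typical
    APS-C circle of confusion."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# A 50 mm lens half a metre from a small object:
for f_number in (2.8, 8, 16):
    near, far = depth_of_field(50, f_number, 500)
    print(f"f/{f_number}: ~{near:.0f}-{far:.0f} mm sharp ({far - near:.0f} mm deep)")
# f/2.8 gives only ~10 mm of usable depth; f/16 gives ~55 mm, which is why
# aperture (and therefore lighting) matters so much for small-object capture.
```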

    Gateway 2000, Pentium II 300 Mhz CPU, 64Mb RAM, STB Velocity 128 AGP Graphics Card with 4MB SGRAM, 6.4Gb Hard Drive, US Robotics 56.5kbps Internal Modem, 12/24x CDROM Drive, Ensoniq AudioPCI, Windows 95.
  • kernow Posts: 732
    Trinity
    DaftnDirect,
    I had not even considered using Google Earth screenshots, and that's a better result than what I would have expected if I had — may have to give that a go.
  • LZoltowski Posts: 6,429 Volunteer Moderator

DaftnDirect said:
The only reason I bought 3DF Zephyr was that I'd tried the trial version as part of an experiment to get a reference model done for my London project... [...]

You can actually extract the geometry from Google Earth using something like Ninja Ripper; it extracts assets from any Direct3D or WebGL app. Obviously, you can only use those models for educational purposes and nothing commercial. There is a 3ds Max importer for Ninja Ripper capture data.
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • DaftnDirect Posts: 4,049 Valuable Player

I've tried that in the past, LZ (3D Ripper DX in my case), and it worked very well, but Google have made some updates that prevent the ripping now.


You can still use older versions of Google Earth, but those versions don't have the newer 3D photogrammetry modelling. They have the vector-based models that were created by hand, which only cover smaller areas of the major cities and are a little out of date now. Really, really annoying!!!!


I think they must have started to see models appearing based on their work, so it's not surprising. I'm careful only to use my photogrammetry for reference though, so if and when I finish my London model I can use it without worries!

    Gateway 2000, Pentium II 300 Mhz CPU, 64Mb RAM, STB Velocity 128 AGP Graphics Card with 4MB SGRAM, 6.4Gb Hard Drive, US Robotics 56.5kbps Internal Modem, 12/24x CDROM Drive, Ensoniq AudioPCI, Windows 95.
  • Brixmis Posts: 2,003
    Project 2501
    As long as they don't tie it to a specific headset (such as the Go) I'm officially excited!




  • kojack Posts: 4,649 Volunteer Moderator

DaftnDirect said:
That was done using just about 50 screenshots of a Google Earth rotation around Blackfriars, and I was quite impressed with the results, considering it was a photogrammetry-generated model of a photogrammetry-generated model! [...]

Cool! I never thought of photogrammetry of Google Earth. :)
That's a bit like the guy on here who was doing photogrammetry of screenshots of Quill to turn drawings into better meshes than Quill could export itself.

Earlier this year I did a photogrammetry subject in my masters degree. I started with 3DF Zephyr because it seemed cool and had a free version (I'd used open-source tools in the past, VisualSFM and stuff), but I switched to PhotoScan. It was faster, had better results, and supported my 3Dconnexion SpacePilot Pro (instant thumbs up for that feature).
3DF Zephyr could give good results too, but it took longer for similar quality.

    I can't remember if I've posted these on here before. Oh well.
Here's a figure I photographed (a Master Chief figure standing on a paper plate on a clear turntable):

The 3DF Zephyr version:


    The Photoscan version:


    Then I loaded it into Medium:


    Seeing him life size was freaky.

I tried 3DF Zephyr's turntable mode, but the result just wasn't as good as a static model with a moving camera.



  • DaftnDirect Posts: 4,049 Valuable Player

I've been upping the point cloud density, but yeah, getting the best results takes a huge amount of processing time. I think a light tent is going to help with the problems I was sometimes getting with shadows interfering with some models, so hopefully I can find the perfect camera aperture/exposure setup to go with the tent.


    Either way, it's fascinating digitising real objects in this way... reminds me of Tron.

    Gateway 2000, Pentium II 300 Mhz CPU, 64Mb RAM, STB Velocity 128 AGP Graphics Card with 4MB SGRAM, 6.4Gb Hard Drive, US Robotics 56.5kbps Internal Modem, 12/24x CDROM Drive, Ensoniq AudioPCI, Windows 95.
  • LZoltowski Posts: 6,429 Volunteer Moderator

DaftnDirect said:
Either way, it's fascinating digitising real objects in this way... reminds me of Tron.

    Ahhhh yes. 
    Core i7-7700k @ 4.9 Ghz | 32 GB DDR4 Corsair Vengeance @ 3000Mhz | 2x 1TB Samsung Evo | 2x 4GB WD Black
    ASUS MAXIMUS IX HERO | MSI AERO GTX 1080 OC @ 2000Mhz | Corsair Carbide Series 400C White (RGB FTW!) 

    Be kind to one another :)
  • DaftnDirect Posts: 4,049 Valuable Player
Every time I digitise an object now, I'm going to say... 'I warn you, I'm going to have to put you on the games grid.'
    Gateway 2000, Pentium II 300 Mhz CPU, 64Mb RAM, STB Velocity 128 AGP Graphics Card with 4MB SGRAM, 6.4Gb Hard Drive, US Robotics 56.5kbps Internal Modem, 12/24x CDROM Drive, Ensoniq AudioPCI, Windows 95.
  • w_benjamin Posts: 148
    Art3mis
KlodsBrik said:
During the vid above there is a big LAX sign before they enter the Intel dome... Company name? [...] The LAX thingy is at 1:11
    Based on the clip of the plane landing, I'd say it's the sign outside of Los Angeles International Airport.
  • danybonin Posts: 59
    Hiro Protagonist
kojack said:
Earlier this year I did a photogrammetry subject in my masters degree... Here's a figure I photographed (a Master Chief figure standing on a paper plate on a clear turntable)... Then I loaded it into Medium. Seeing him life size was freaky. [...]

My god! How did you do that? Can you explain more? I'm impressed!
    core i7 5930k @ 4.4ghz
    nvidia rtx 2080 ti
    16GB ram
    asus pg27uq
oculus cv1, 4 sensors on usb3 on 5 distinct usb controllers
  • ElusiveMarlin Posts: 182
    Art3mis
I love the bit where Flynn gets "Digitised". It reminds me of entering Home 2.0 on the Rift :-)
    Intel Core i7 8700K 3.7GHz | RAM 32GB DDR4 2666MHz | GeForce GTX 1080 8GB |
    240GB SATA III SSD | 3TB HDD | 867Mbps Built-in Wi-Fi

    "Behind every mask there is a face, and behind that a story...."
  • hoppingbunny123 Posts: 352
    Nexus 6
How will they work around the props to take video?