Waiting is hard... (Ideas while we wait)

donkaradiablo Posts: 310
Hiro Protagonist
edited February 2016 in Off-Topic
I think I know what's going to be the next big thing.

Real 3D "movies" of real people that you can watch from all angles as if they were there with you, with positional tracking, captured with multiple scanners, played back thanks to Carmack code compressing the hell out of that 4D data to be accessed and streamed to the GPU as needed depending on where you look.

Not 3D 360 degree movies... not real time rendered fake stuff. Real people, like they are really there with you.

Not whole stadiums, just one person. Also movies, since they scan actors and put them in CGI spaces anyway. And as we take our loved ones to be scanned so we get to keep their memories forever, maybe we'll get that feeling our grandparents must have felt when they took our parents in for their first photos.

No physics, no dynamic lighting, no simulations, no artificial intelligence, no destructible environments... just predetermined, recorded stuff streamed using the information of where you are looking and the prediction of where you will be looking, which Oculus is already doing.
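The prediction-driven streaming idea can be sketched as a toy loop: extrapolate the head yaw forward by the pipeline latency, then prefetch the angular chunks the predicted view will cover. Everything here (the chunk layout, the dead-reckoning predictor, the numbers) is made up for illustration, not anything Oculus has described.

```python
import math

def predict_gaze(yaw_deg, yaw_rate_dps, latency_s):
    """Extrapolate head yaw forward by the pipeline latency (dead reckoning)."""
    return (yaw_deg + yaw_rate_dps * latency_s) % 360.0

def chunks_to_prefetch(predicted_yaw_deg, chunk_width_deg=30.0, fov_deg=90.0):
    """Return the indices of the angular data chunks covering the predicted view."""
    half = fov_deg / 2.0
    n = int(360.0 / chunk_width_deg)
    lo = math.floor(((predicted_yaw_deg - half) % 360.0) / chunk_width_deg)
    hi = math.floor(((predicted_yaw_deg + half) % 360.0) / chunk_width_deg)
    ids = set()
    i = lo
    while True:  # walk chunk indices, wrapping around 360 degrees
        ids.add(int(i % n))
        if i % n == hi % n:
            break
        i += 1
    return sorted(ids)
```

For example, looking straight ahead (yaw 0) with a 90-degree view pulls in the chunks on both sides of the wrap-around, which is exactly the case a real streamer has to get right.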

Yes, it's a whole lot of data. But all the work and optimizations Carmack has done, and has told us he looked into over the past years, should translate so well to this. And he has supercomputers with special-purpose computing units at his fingertips, with lower-level access.

What we have seen so far is nothing. Knowing that is what makes waiting hard. It's like you knew the photograph was coming to town and everyone was excited, but you know they are doing this thing called "movies" and you believe it will be big.

(edited: subject)
Design with input solution, unifying mobile and PC product lines
the input solution that could have been
ideas
Revolutionize the way we interact with...
Change the world...
Community...
BJsryuo.gif

Comments

  • VizionVR Posts: 3,022
    Wintermute
    I like this idea. To take it further, imagine an entire digitally scanned live environment (a party, for instance). You can walk around in it, fast-forward, rewind, pause, explore in great detail. Sure, it's still years away, but I guess I can wait. :)
    Not a Rift fanboi. Not a Vive fanboi. I'm a VR fanboi. Get it straight.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    This would also translate so well to AR. Have that person there in your room with you and your friends, whether it's a celebrity dancing in the middle of the room, an inspirational figure giving a speech, a friend miles away, a loved one long gone or a youtube figure making commentary... Or all at the same time, with 3D audio, letting you focus on one person at a time if you wish.

    Virtual Teleportation.

    With surprise personal virtual visits to your own room by celebrities that you can have a conversation with. It's like the invention of the photograph, telephone, network broadcasting, home video, on-demand content, personal computers, the internet, CGI, 3D multiplayer games, social media and video blogging all rolled into one big conclusion: we will never be apart again.

    id Tech 6 research-type ray casting and voxels used for data compression, hardware accelerated with modern special-purpose units and low-level access; combined with streaming of large data as needed, a la id Tech 5 megatextures, based on where you are looking from the head tracking data the Rift provides, probably with eye tracking in the future; made to be a solid-fps experience with scalable graphics quality per frame, again a la id Tech 5; using a foveated approach when possible; timewarped and everything for a smoother experience...
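A minimal sketch of that foveated streaming idea: pick a coarser detail level for each tile of the capture the further it sits from the gaze direction. The eccentricity thresholds below are invented for illustration, not Oculus numbers.

```python
import math

def mip_level_for_tile(tile_dir, gaze_dir, base_level=0):
    """Pick a detail (mip/voxel) level for a tile from its angular distance
    to the gaze direction: full detail in the fovea, coarser outward."""
    dot = sum(a * b for a, b in zip(tile_dir, gaze_dir))
    norm = math.sqrt(sum(a * a for a in tile_dir)) * \
           math.sqrt(sum(b * b for b in gaze_dir))
    ecc_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    # Hypothetical bands: fovea ~5 deg, near periphery ~20 deg, far ~45 deg.
    if ecc_deg <= 5:
        return base_level
    if ecc_deg <= 20:
        return base_level + 1
    if ecc_deg <= 45:
        return base_level + 2
    return base_level + 3
```

Without eye tracking, gaze_dir is just the head's forward vector, which is why head tracking alone already buys some of the bandwidth savings.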

    Real-time 4D broadcasting would be the goal, as virtual teleportation. But bandwidth would make it hard for the first generation. A PC with a high-end SSD, a heavily multicore CPU and multi-GPU graphics with low-level API access should make it possible to display the precaptured material locally. Maybe merged with a CGI background, maybe in a real-time rendered environment, or a hybrid solution.

    A VR horror experience combining a 1157-type horror piece, a Marilyn Manson video and the Quake2 Strogg torture chambers, set in UE4 archvis-type spaces rendered in real time. A 3D 360-degree skybox-type horizon captured from real life; TVs that look like security monitors, displaying 2D content from other rooms; lower-quality rendered 4D captures, the way Technolust already does it, looped far from the viewer; real-time rendered events happening at a distance where you cannot tell the difference in quality; and real-life captured characters next in line for what's happening over there, in this "Hostel"-type environment, blindfolded and gagged, tied to chairs next to you, crying and screaming... (No need for their eyes to track you, for them to respond to you, or for any type of AI.) And this guy in a gas mask, walking around in volumetric fog that moves around him, deciding who to pick next, getting into your personal space, maybe asking you questions that you answer with your head, your answers determining which combination of these 4D captures plays next... Now in this case, it is you who is virtually teleported.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Dressing rooms with mounted 180 degree 3D cams and a 4K GearVR. You wanna know if that dress makes your butt look fat... turn around and see for yourself.

    Well, you would not want cams in the dressing room itself, so it's a room next to it, with a mirror facing the cam at an angle, letting you see front and back in GearVR, then take the GearVR off and check yourself again in the mirror.

    Shopping AND an out of body experience :)
  • donkaradiablo Posts: 310
    Hiro Protagonist
    E3 2015 will be something else.

    ___________________________________________________


    Display tech on the desktop space is moving fast
    1. RPS guys seem to be happy with current 40 inch 4K monitors for gaming.
    2. 144Hz IPS G-sync monitors were unveiled at CES.
    3. HDR support has been added to HDMI 2.0a spec
    4. LG has 8K monitors on the way

    2016-2017, we'll probably be able to get a 32/40-inch 4K/8K HDR 144Hz G-Sync monitor/TV for the desktop, maybe requiring a dual-cable setup or whatever... It will be instant eyegasm.

    ___________________________________________________

    GearVR will be 4K this year. When viewing movies, the virtual movie screen will have a resolution of 720p. The Rift may not be so different. This is barely enough not to look crappy compared to the current standard in TVs/monitors. Just like with CB, we'll hear comments about how it's not as crisp as a desktop monitor, but it's cool. Even keeping that relative image quality of "meh for some, cool for others" requires yearly resolution bumps to keep up with the trends in monitors.

    Thankfully the display tech on the mobile space is also moving fast.
    ___________________________________________________

    DK1 @720p, late 2012,
    DK2 @1080p, mid 2014, uses Note3 screen
    GearVR Note4 @2K, end of 2014, CB @1K per eye mid 2015?
    GearVR Note5 @4K, end of 2015, CV1 @2K per eye end of 2015 - mid 2016?
    GearVR Note6 @8K, end of 2016, Rift @4K per eye end of 2016 - mid 2017?
    CV3 @8K per eye, end of 2017- mid 2018
    CV4 @16K per eye, end of 2018- mid 2019
    ___________________________________________________

    In just a few years, VR may reach 16K per eye, which seems to be considered the holy grail. It may just have to get there to be acceptable compared to monitors, and the competition in the mobile area may help. By then, the input solution, tracking, tools for content creation, the app market and the platform will all be there. GPUs with lots of stacked memory at high bandwidth, with low-level programmer access, will be the norm. Developers will have a lot of experience creating VR games and experiences, plus great examples and guidelines from Valve and Oculus.

    We'll also see big movie studios opening gaming studios, or merging with some, bringing experience and know-how from all kinds of computer graphics and content creation backgrounds together to create the new Hollywood. They will have products like movies in VR format using 4D captures, 4D CGI and real-time rendering for an unprecedented experience. Imagine the Animatrix, Matrix Reloaded and the game that fills in the gap, but instead of watching the movies and playing the game separately, you are in the VR movie/game. The actors talk to you, just like they are there.

    The Insurgent demo looks and feels fantastic for now. That will change when 4D video captures in VR become the norm. 3D-video-based humans in VR look like 2D sprite effects in 3D games; your brain just doesn't buy it. 4D captures in VR spaces at 16K per eye will be hard not to buy.

    ___________________________________________________

    Waiting is hard. Partly because, at the moment, it's not me Oculus has to dazzle with the next big thing in store, letting me get high on their magic future tech. It's the big names. Multi-billion-dollar names... whoever will make and deliver the creations billions will be looking at in the future.

    But on a good note, if the roadmap I made is not so far off, a CB @1K per eye can be released mid-2015, which is the perfect time for an input devkit, because that is when E3 takes place, where a demo using all these components can be showcased to the public.

    An input devkit, which would come with that E3 demo and which you could get for your DK1 or DK2, or with a CB if you wish, would be here this summer. The CB option would make this a limited test run for Oculus of the two-display design, and the start of production of components that will be used in CV1, with upgraded parts like the display.

    Carmack said he'll be spending more time on software at Oculus. Palmer said in-house exclusive content will be there. Carmack is great at putting together something that is "more than words", and he knows that's what is needed to push VR. They'll have an E3 demo floor where they can just murder everyone with in-house content for gamers, the thing the guy has been known for his whole life, using their input solution, orderable online at the time, and a CB. They can just murder and still say, "btw, the CV1 will be this, but with better resolution".

    What they show there will probably be the first public demo of an IP that will be their flagship.

    ___________________________________________________

    This just lines up with all the confirmed and unconfirmed clues we have had until now:

    design for CV1 being pretty much locked down,
    that design following CB,
    CB being a two display solution,
    CB rumored to be @1k per eye,
    The Vive rumored to be 1k per eye for dev kits,
    talks of a release of an input kit for devs,
    talks of limited summer release of a Rift based on the CB design,
    The Vive rumored to be 2k per eye for the consumer version,
    a Note 5 level display manufacturing tech enabling panels for CV1 at 2k per eye,
    Oculus-Samsung product cycle so far,
    Carmack at work on software at Oculus,
    Palmer talking about exclusivity of in-house made content,
    the upcoming E3,
    Samsung bringing its full PR power to the table...
    Oculus VR filing for Crescent Bay trademark

    ___________________________________________________


    E3 2015 will be something else.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    1. Rendering passes and parts of the moving world where the eye is more sensitive to temporal resolution, rendered at full speed, at half resolution
    2. Parts of the 3D world where the eye is more sensitive to detail, rendered at half speed, full resolution, timewarped
    3. Layered on top of each other with a fast pass of noise on Z data

    If layering in the next SDK is as flexible as that, or is intended to get to be in the end, we can see some cool stuff.
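The two-layer schedule above can be written down as a toy timeline, assuming a hypothetical compositor that re-renders one layer every frame at half resolution and timewarps the other on alternate frames:

```python
def schedule_layers(num_frames):
    """Toy schedule: layer A is re-rendered every frame at half resolution;
    layer B renders on even frames at full resolution and is timewarped
    (reprojected) on odd frames."""
    timeline = []
    for f in range(num_frames):
        a = ("render", "half-res")
        b = ("render", "full-res") if f % 2 == 0 else ("timewarp", "full-res")
        timeline.append({"frame": f, "layerA": a, "layerB": b})
    return timeline
```

The point of the split is that each layer only pays for the property the eye cares about there: temporal resolution on layer A, spatial resolution on layer B.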

    It could also be cool to have very-low-resolution flexible screen panels running at twice the frame rate of the main display, located in the HMD around the lenses, used to provide ambient lighting beyond the display's FOV. They could simply interpolate ambient color at 2x fps, creating another layer, in hardware.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Why can't this on your wrist be the input solution:

    pBKYCw4.jpg

    Having the fingers at a fixed distance should make the tracking go a lot smoother. Could also work for scanning the controller you are holding.

    It could have IR LEDs and sensors on it, just like the Rift. Add two fixed cams instead of one, placed around the room like Lighthouse stations, and it should enable hand tracking, finger tracking, 360-degree head tracking, controller scanning and everything.
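The two-fixed-cams idea boils down to classic stereo triangulation: each camera turns an LED blob into a ray, and the LED sits near the closest approach of the two rays. A minimal midpoint solver, with made-up camera geometry in the test case, assuming the rays are not parallel:

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: each fixed camera reports a ray (origin p,
    direction d) toward an IR LED; return the midpoint of the closest-approach
    segment between the two rays. Assumes the rays are not parallel."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    # Solve for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))
    q2 = add(p2, scale(d2, t2))
    return scale(add(q1, q2), 0.5)
```

With two cameras at known positions, every LED the two of them can both see gets a 3D position this way; known LED spacing on a rigid wristband then pins down its orientation too.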

    Could also have those "emotion sensors", heartbeat and everything if that's necessary. In games, that could be where your notifications are displayed, like a virtual smartwatch. The one on the left hand could display your webcam's feed and get linked to your mobile's notifications, bringing your real world inside the virtual one.

    ________________________________________________________________________________________


    A Bluetooth-capable version with a battery could be used with Gear VR, with extra IR cams on the Gear VR tracking the IR LEDs on the wristband for hand position relative to the head. Add two self-powered IR LED devices to place around you wherever you go, and the Gear VR can track its own position in space relative to those with the same IR cams. The IR LED devices are fixed in space, so from the head's relative position changes you know the absolute position of the head in space, and likewise the absolute position of the hands. No cables.

    So the whole set is:

    OSSzDTv.jpg

    It has 3 more IR cams on the backplate, using the CB design with its nice weight distribution. With the cams having 120 degrees of FOV each, there is no blind spot: tracking 360 degrees around you.

    Would that not work?
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Optional VR runner pack upgrade:

    CpJFuxs.jpg

    Two bigger versions of those wristbands that go on your ankles, with 3 IR cams to cover 360 degrees, and one more IR LED station that goes on the floor.

    That should work for movement: the sensors would detect rotation and movement speed as you run in place, while the IR cams on the anklebands, tracking their position relative to the stationary IR LED units, would provide positional tracking, feeding the data wirelessly to your GearVR. No drift.

    You would not even need to lift your feet from the ground. Just moving them would be enough. You could also be sitting on a chair doing that if the seated experience is a must. It would also work with an omni, a sphere, a virtual bike, a treadmill...

    ________________________________________________________________________________________

    Even the basic package would provide positional head tracking for cable-free portable VR and enable good-enough hand tracking, at least when the hands are in your view. Wristbands could be an optional upgrade package, and so could the anklebands with an additional IR LED unit.

    t7ATeX0.jpg

    ________________________________________________________________________________________



    Complete package:

    xKEIzfX.jpg

    What would potentially make this whole package work flawlessly (in terms of performance) is an SoC on the GearVR, taking all that data from the sensors and IR cams on the HMD, wristbands and anklebands, processing it, and sending only the final positional data to the phone. It would update head position more frequently than hands, and hands more frequently than feet, keeping latency at a minimum for the head. So all the SoC on the phone does is rendering, timewarp and predistortion. That could potentially help with overheating too.
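The "head more often than hands, hands more often than feet" idea is just a rate-divided polling loop on the fusion chip. A toy sketch with invented rates:

```python
def fusion_schedule(tick_hz=1000, head_hz=500, hand_hz=250, feet_hz=100,
                    ticks=1000):
    """Count how often each body part gets a pose update in `ticks` cycles of
    a tick_hz scheduler: head polled most often, feet least often, so head
    latency stays lowest."""
    counts = {"head": 0, "hands": 0, "feet": 0}
    for t in range(ticks):
        if t % (tick_hz // head_hz) == 0:
            counts["head"] += 1
        if t % (tick_hz // hand_hz) == 0:
            counts["hands"] += 1
        if t % (tick_hz // feet_hz) == 0:
            counts["feet"] += 1
    return counts
```

Over one second of this hypothetical schedule the head gets 500 updates, hands 250 and feet 100, which is the kind of prioritization that keeps the motion-to-photon path short where it matters.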

    This complete package with an SoC might sound like too much, but I have a feeling all this would not cost more than the Vive with wireless controllers and Lighthouse stations. It should also be fairly user-friendly: everything is battery powered, you just place the stationary units anywhere, with one of the IR LED units on the floor, do the "reset all positions" gesture, and it should work.

    ________________________________________________________________________________________


    CV1 for the PC could be the same package, with the HMD having its own displays and a single cable connection to your PC, needing no battery, and with the stationary IR LED units also cabled instead of battery powered, bringing cost down. The PC could still just do rendering, timewarp and predistortion, providing more eye candy and bigger experiences than portable VR. The SoC on the HMD could also be dropped for cost reasons, letting the PC crunch the positional tracking data, just the way it works now.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Stacked-memory async multi-GPUs with low-level access APIs must be as big a generational leap as X360 to Xbox One. It is the next gen, and it's here this summer.

    I hope someone (working with an experienced manufacturer like Asus) makes it into a wearable VR PC, with a battery-packing, notebook-like design, minus the screen, keyboard and touchpad, plus much better cooling, straps, and padding between it and your back. That would make untethered PC VR possible, without the need for a wireless display solution.

    71MbSgO.png

    Could pack additional sensors and IR leds too. Combined with the set above, you'd have untethered VR, with next-gen quality graphics and almost full body tracking, with hands, feet, fingers and their relative position from the body.

    This "IR cams on the HMD, IR LEDs on portable units" approach could be the solution to Oculus not being laser-focused on a single product right now (as stated by Carmack), by unifying the tracking approach and design for both PC and mobile and making untethered VR with positional tracking possible on both platforms.

    It would even be possible to unify the product lines with the last piece of the puzzle: a Galaxy Note (or S6) sized device packing dual display panels, a DisplayPort connection to a PC and a USB pass-through connection to the GearVR (no battery, no SoC, thus lighter and cheaper to produce), something that can be inserted into the GearVR to make it PC-compatible. Just one combined, replaceable cable from the wearable PC to the Oculus display. It would not only unify the Oculus lines but also make the Rift easier to upgrade. It could also stop us fans worrying about the mobile market getting all the love while the PC gets the short end of the stick, as the products would be one and the same.

    LVDVPQg.png

    The pieces are all there for an ecosystem that lets users choose how deep they want to go into the rabbit hole (and their pockets) by offering compatible products that come together:

    1. GearVR with IR cams and two portable IR led units
    2. Galaxy note, OR, a same sized dual display + connections pack to connect to a PC
    3. Wristbands
    4. Anklebands with one IR led unit
    5. Wearable PC

    To bring the same experiences with scaled-down graphics to mobile VR for the masses, and this visual quality to the enthusiast market with high-end wearable PC VR:

    817jR32.png

    It would be like Oculus launching the Walkman/iPod and the surround system that plugs into it.

    "In an ideal world", what would enable this on the software side could be two Carmack-made renderers dropped into UE4: one designed and optimized for the fixed hardware on the mobile VR side, one based on Vulkan, designed and optimized for the fixed hardware in that wearable PC.

    That would make perfect sense when you think about how UE has user-friendly tools, is free, has its source code out there, runs the renderer in its own thread, and has alternative builds of the renderer dropped in by, for example, Nvidia with their VXGI-supporting version; how even Quake2 had the renderer in a separate DLL; and how Carmack said the renderer in Rage was not the hard part that took much of his time, it was the rest of the engine.

    I get excited thinking about all these possibilities, then bummed out remembering that "the key to happiness is reasonable expectations".

    I feel like Cartman having to freeze himself till the launch of the Wii here, waiting for "the dawn of home VR"... Meanwhile, HL2 Update on DK2 is beyond my childhood dreams.
  • Ashles Posts: 515
    Art3mis
    And as we take our loved ones to be scanned so we get to keep their memories forever

    This is actually one aspect of VR that worries me a little.

    If we get to a stage where loved ones can be realistically scanned and their avatars given a certain degree of simple behaviours/actions (perhaps even recordings of their speech), such an application might become an unhealthy 'crutch' in the event of bereavement.

    A person might miss their loved one so much that they become reliant on the false virtual world in which a facsimile of the person still exists and this would not be psychologically healthy.
    (The grieving process generally needs to go through certain natural stages in order for an individual to accept the loss - this is one of the reasons I have a real issue with 'psychics', as their actions interrupt/delay/derail this process with false stories and the creation of memories that actually have nothing to do with the real departed person.)

    If I lost someone close to me and I knew I could put on a headset and see them sitting next to me again, smiling, talking... it might be very hard not to become very dependent on this.
    "Into every life a little fantasy must fall..."
  • donkaradiablo Posts: 310
    Hiro Protagonist
    I find the risks associated with this very similar to those of cannabis. One is emotional dependency, and addiction through emotional dependency. The other is how it can affect the perception of reality at a young age, while the brain is still developing. I think we will see a lot of controversy centered around these two. I would advocate the legalization of cannabis, but with an age limit, so it's hard for me to say VR would be safe for children. You get haptics in there, the father or mother who passed away looks real, feels real, and the emotional impact would be hard to master. That could also take in-app purchases to a whole new level.

    On the other hand, this will open up so many possibilities for therapy. Guided VR sessions with pharmaceuticals. It might be a healthy way to say goodbye to loved ones, under supervision from a pro. Or to overcome a bad memory, replacing it with a virtual one over repeated sessions. Honestly, that's gonna make today's therapy sessions look like a joke. On the flip side, brainwashing will get to a whole new level.
  • morenosuba Posts: 135
    Art3mis
    I think I know what's going to be the next big thing.

    Real 3D "movies" of real people that you can watch from all angles as if they were there with you, with positional tracking, captured with multiple scanners, played back thanks to Carmack code compressing the hell out of that 4D data to be accessed and streamed to the GPU as needed depending on where you look.

    Not 3D 360 degree movies... not real time rendered fake stuff. Real people, like they are really there with you.

    Not whole stadiums, just one person. Also movies, since they scan actors and put them in CGI spaces anyway. And as we take our loved ones to be scanned so we get to keep their memories forever, maybe we'll get that feeling our grandparents must have felt when they took our parents in for their first photos.

    No physics, no dynamic lighting, no simulations, no artificial intelligence, no destructible environments... just predetermined, recorded stuff streamed using the information of where you are looking and the prediction of where you will be looking, which Oculus is already doing.

    Yes, it's a whole lot of data. But all the work and optimizations Carmack has done, and has told us he looked into over the past years, should translate so well to this. And he has supercomputers with special-purpose computing units at his fingertips, with lower-level access.

    What we have seen so far is nothing. Knowing that is what makes waiting hard. It's like you knew the photograph was coming to town and everyone was excited, but you know they are doing this thing called "movies" and you believe it will be big.

    (edited: subject)

    This is what I actually thought VR Cinema was when I first got my DK1. Boy was I disappointed... but not really; many other games/demos were still fantastic in VR.
    I also think what you have described is the future of movies. Not watching a movie, but BEING IN the movie, free to move around the scene and watch it from any angle... that would be amazing.
    Not sure how they would shoot such a movie, but it's basically the same thing as a role-playing video game. Take Skyrim, for example: it's telling you a story, but with VR you are INSIDE the events, and you are even interacting with the story.
    Again, not sure what kind of cameras you'd need to render the space in 360 degrees, but I'm sure it's very doable. It would probably have to be something like those 3D multi-light scans from Veiviev, except you'd be capturing motion as well. Actually, Veiviev already captured very basic motion (breathing) and it looks fantastic.
    This is very doable. It just takes someone with money who wants to shoot a movie exclusively for VR. To get it done right, you have to have VR in mind from the start.
    Someone should really try and make a short demo. Neo learning to navigate the Matrix with Morpheus would be a fitting theme for such a demo :D
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Nowadays, the way they shoot movies is actors on green screen and locations in CGI. Even parts of actors are CGI, as in Guardians of the Galaxy, Avengers or Man of Steel. So as long as they capture the actor's performance in 4D (the fourth dimension being time: not just models from still shots of the subject with depth information, but a continuous capture of the subject's motion with depth information), they can place it in a CGI environment. Basically what they have been doing in 2D, but now in 3D.

    In VR, the environment can be rendered in real time and those 4D captures might be dropped into that world. What happens today is you have one model, with data consisting of polygons and textures, loaded just once, and that is all the data needed. Animation is applied to it and you have an animated 3D object. For 4D capture to work, the engine would have to stream new data every frame: a full capture of the subject at that frame.
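Streaming a full capture every frame mostly comes down to keeping the disk ahead of the renderer. A toy read-ahead buffer, where `load_frame` is a hypothetical stand-in for the real decoder/disk I/O:

```python
from collections import deque

class FrameStreamer:
    """Toy read-ahead buffer for per-frame capture data: the streamer keeps a
    few frames of mesh data ahead of the playhead so the renderer never waits
    on the disk. `load_frame` maps a frame index to that frame's capture blob."""
    def __init__(self, load_frame, lookahead=3):
        self.load_frame = load_frame
        self.lookahead = lookahead
        self.buffer = deque()        # (frame_index, blob) pairs, in order
        self.next_to_load = 0

    def fill(self):
        # Top the buffer back up to the lookahead depth.
        while len(self.buffer) < self.lookahead:
            self.buffer.append(
                (self.next_to_load, self.load_frame(self.next_to_load)))
            self.next_to_load += 1

    def pop_frame(self):
        # Hand the renderer the next frame, then refill in the background
        # (here synchronously, for simplicity).
        self.fill()
        return self.buffer.popleft()
```

A real implementation would run the fill on a worker thread and drop frames under seek, but the shape of the problem is the same.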

    Today, when they scan objects to put them in a game, they optimize that content by converting all the data to lower-polygon models and normal maps. Similarly, game content creation tools might optimize the hell out of that 4D data. And just the way megatexture tech works, streaming in the data required for where you are and what you are looking at, those 4D captures, if turned into voxels, could have the same logic applied to them, kinda like in this demo. Or those 4D captures could just be converted to 3D models and animation data, like in this one. Or maybe that 3D model and animation data could be used as a shading layer over the 4D capture of the subject, having the best of both worlds: the real performance, with a layer of lighting that matches the environment. Instead of voxels for the high-fidelity capture layer, this could also be used. Voxel data at low res could be used for the lighting pass, the way Nvidia does global illumination with VXGI. Determining which combination of these works best is the kind of experiment I would hope the Carmack we know and love is doing at Oculus.

    So in the end, the solution would be a hybrid form of rendering,
    1. The world around you rendered in real time just like it is now in UE4 archvis demos like these: 1, 2, 3, 4, using laser scan captured assets from the real world like in this one.
    2. Simple Stereo3D video based sprites used for people far from the viewer, like the people in the distance in this one between 0:30-1:00
    3. Objects and NPCs at medium distance from you, photoscan based, animated with physics applied to them. Like the Vieviev models, or Quantum Capture models, or a table that gets knocked over during the scene.
    4. Closer to the viewer, it's 4D captures, in voxels, displayed at the highest resolution if you are looking at them, blocky if you are not or parallax capable movies like PresenZ-VR, with a layer of shading/lighting applied to it to blend with the environment. It's non-interactive emotional content.
    5. Mobile objects and NPCs in the environment at varied distances from you: photoscan based, animated, with physics applied to them. They also wear glasses, masks and whatnot so they do not take you to that famous uncanny valley.
    6. Models that are a mix of all these. Think of Barney in HL: a real person under there when he takes off the mask; when the action starts and he puts the mask on, it's an animated character. Your brain, building its perception of reality from pattern recognition, prediction, and a merge of memory with what's happening in the moment, would blend the two to the point where it would feel like the guy is real; you just would not be sure if he was acting or keeping it real :)
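The six tiers above amount to a rule that trades simulation for captured fidelity by distance and interactivity. A toy selector, with thresholds invented purely for illustration:

```python
def pick_representation(distance_m, interactive):
    """Map a character to one of the representation tiers described above:
    the closer and less interactive it is, the more the budget shifts from
    simulation toward captured fidelity. Thresholds are made up."""
    if distance_m > 30:
        return "stereo video sprite"            # far background people
    if interactive:
        return "photoscan model + animation"    # simulated, can react to you
    if distance_m < 3:
        return "voxel 4D capture, full res"     # close, non-interactive content
    return "voxel 4D capture, reduced res"
```

The interesting property is that the most expensive representation (full-res 4D capture) is only ever paid for the one or two characters right next to the viewer.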

    Now this is a whole lot of data to stream. But we have 3 GB/s SSDs on the highest end, compared to the 5400 rpm HDDs with less than 50 MB/s of just two days ago; we have 64-bit systems with tons of RAM compared to yesterday's 4 GB max; and we have 3D stacked memory on GPUs, lots of it, at very high bandwidth. This is where the Carmack magic should come in handy.
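The bandwidth claim is easy to sanity-check with back-of-envelope numbers (the 20 MB-per-frame figure below is an assumption for illustration, not a measured one):

```python
def streamable_fps(drive_mb_per_s, frame_mb):
    """Back-of-envelope: how many capture frames per second a drive can feed
    at a given compressed frame size. Purely illustrative numbers."""
    return drive_mb_per_s / frame_mb

# With a hypothetical 20 MB compressed capture frame:
#   old HDD at 50 MB/s     -> streamable_fps(50, 20)   = 2.5 fps, far too slow
#   NVMe SSD at 3000 MB/s  -> streamable_fps(3000, 20) = 150 fps, headroom
#   beyond 90 Hz playback
```

So the drive generation jump alone moves this from impossible to plausible, before any compression cleverness.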

    So if you are putting together a VR movie experience, like Story Studio is meant to do, and you are showing it to people on your own PC build, it could be this monster of a PC, one that will be common in a year or two among hardcore VR fans and available to them this summer, and it could take advantage of knowing where the viewer is and where everything is in relation to them. Just like movies start with a storyboard, the experience can first be designed in engine, with simple 3D models and environments. When that story design phase is complete, assets could be replaced following the steps above, depending on their interactivity, distance etc.

    With a story that ties in a helmet you have to keep on at all times, an AR helmet that keeps you alive in this hostile environment, the experience could claim that you are in the real world and these are real people around you. Visually, it would be hard to distinguish that from really being there with such a helmet on your head. You would know that you have a helmet on, there is no getting around that with the SDE and low res, but you would not know whether you were there with a helmet on your head or just in VR. This game seems to have such a story (just guessing from the helmet shot and videos). When the displays in the helmets hit 16K per eye around 2018-2020, a story tie-in for a helmet may no longer be needed as everything could look very real.

    Actually, I have an idea for an Oculus demo with Nate Mitchell in it. He is standing behind you, he tells you to put on the headset and you do. It is supposedly a demo for hand tracking, as a form of augmented reality with the Rift, replacing your hands in the video feed with robot hands. He says he is switching on the cams on the headset; you hear him to your right, next to you. You get the "video feed" and you see him to your right. But he is simply not in the room anymore, in the real world. You hear him there with positional audio and see him there with 4D captures, your hands being rendered with hand tracking and the environment being rendered from a laser scan, not a video. It should be possible to replicate the room, as Oculus started to design their own demo booths instead of just showing the demos wherever. Nate tells you to take off the helmet. He is not there. You put on the helmet again, he is still there on your right, but he is also walking in on your left, and there are two Nate Mitchells now. You take off the helmet, and there is a TV in front of you, showing you how he left the second you put on the helmet. How much more convincing could a VR demo get?

    And that is the new Hollywood. A mix of game tech, on-location laser scanning, scanning bodies for models, capturing performances for motion, capturing actors in 4D, and displaying in VR, with post effects, fog and noise tying everything together. We haven't seen an example of this yet; the Kite & Lightning demo is probably as close as it gets on today's hardware. But that is about to change this summer, because hardware is changing.

    So come E3, either Oculus is going to be unleashing the beast, or it will drift further and further to mobile space while Valve unleashes the beast :)
    Design with input solution, unifying mobile and PC product lines
    the input solution that could have been
    ideas
    Revolutionize the way we interact with...
    Change the world...
    Community...
    BJsryuo.gif
  • donkaradiablo Posts: 310
    Hiro Protagonist
    And this could be the 4K version, released at the same time or as an upgrade later. I'd hope for a single DP 1.3 input, but 2x DP 1.2 would probably be the way to go, as that would do the trick, be more compatible with stuff already out there, and maybe even be cheaper to produce for a while.

    xHApkO6.png

    Still used with a single, replaceable cable, combining the three.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Taking into account how important Palmer considers haptics to be, my guess for the final design would include this:

    ReUZS3A.jpg

    with maybe two types of options for gloves:
    One cheap and light LED glove, maybe with patterns on the outside to facilitate tracking, like these:
    https://twitter.com/ID_AA_Carmack/status/526905527230824448
    And a second one, which is the same thing but with haptics.

    With wristbands having IR cams facing both sides of the hand, never losing sight of the fingers, this should make pretty solid tracking possible. Again... working on both the PC and mobile lines.
  • Ashles Posts: 515
    Art3mis
    I find the risks associated with this to be very similar to the ones for cannabis. One is emotional dependency, and addiction through that dependency. The other is how it can affect the perception of reality at a young age, while the brain is still developing. I think we will see a lot of controversy centered around these two. I would advocate the legalization of cannabis but with an age limit, so it's hard for me to say VR would be safe for children. You get haptics in there, the father/mother who passed away looks real, feels real, and the emotional impact would be hard to master. That could also take in-app purchases to a whole new level.

    On the other hand, this will open up so many possibilities for therapy. Guided VR sessions with pharmaceuticals. It might be a healthy way to say goodbye to loved ones, under supervision from a pro. Or to overcome a bad memory, replacing it with a virtual one over repeated sessions. Honestly, that's gonna make today's therapy sessions look like a joke. On the flip side, brainwashing will get to a whole new level.

    Very true. Therapy of the future is almost certain to involve VR at least for some approaches to treatment (it's already being used now).
    "Into every life a little fantasy must fall..."
  • donkaradiablo Posts: 310
    Hiro Protagonist
    It's also the basis of "the real Matrix". Morpheus gives a bunch of misfits drugs in the form of pills, hooks them up to the Rift, gives them trips with delusions of grandeur. Using manipulation through suggestion, cookies with drugs, shots to the arm and even drugs hooked up to the whole body right after he shaves their heads "like a monkey, ready to be shot into space", he conditions them to become terrorists ready to go blow up buildings and shoot people, and that's okay because "Businessmen, teachers, lawyers, carpenters. The very minds of the people we are trying to save. But until we do, these people are still a part of that system and that makes them our enemy". He dresses them nice, takes them to clubs, makes them cool (all in VR). Trains them in VR, gives them a bunch more drugs and "lots of guns" and sends them to places to blow up. Of course he's just doing what "The Oracle" tells him to do, which makes him the perfect manipulator because he doesn't know that he is lying; he's on drugs too. And Zion is just another VR program they hook these morons up to after they are done with them. (Would probably be possible with the right amount of drugs and subjects at the right age.)

    When I think about how lines like "it's so powerful, it could be used by bad guys, so they are limiting export" were tossed around for Japanese consoles, how rumors like "they trained with this flight simulator" were tossed around after a worldwide tragedy, and how many controversies were centered around games and music videos, it is natural to expect more BS centered around VR than MJ. You've got parents concerned with what kids are watching on TV? Wait till it's in an alternate reality, surrounding the kid, something that the parent can't even see from the outside.

    It doesn't even have to be the crime of the century: the "Clockwork Orange" you-can't-look-away treatment, the "Room 23" in Lost dream-like state thanks to pharmaceuticals, "Dark City" style implanting of fake memories by repeating them this way a million times under the influence until the brain can't tell what is real... It could be as simple as getting a guy drunk, letting him wake up with his hands tied behind a chair and a VR device glued to his face using some simple local anesthesia, and getting him to empty his bank accounts via his voice activated smartwatch and fingerprint activated smartphone because he thinks his family is in danger. Or the subject, in the same position, might think he/she is having intercourse with one person, while in reality it's another person.

    Some weird things are bound to happen with VR... But we may never hear of them because the victims may never figure out what really happened.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    1. Rendering passes and parts of the moving world where the eye is more sensitive to temporal resolution, rendered at full speed at half resolution
    2. Parts of the 3D world where the eye is more sensitive to detail, rendered at half speed at full resolution, timewarped
    3. Layered on top of each other with a fast pass of noise on Z data

    If layering in the next SDK is as flexible as that, or is intended to get to be in the end, we can see some cool stuff.

    It could also be cool to have very low resolution flexible screen panels, at twice the frame rate of the main display, located in the HMD around the lenses, used to provide ambient lighting beyond the display's FOV. They could simply interpolate ambient color at 2x fps, creating another layer, in hardware.

    Add to the list a fast-pass, low-res, low-quality, high-FOV layer in the background, with the current high-quality, lower-FOV layer on top in the center, for high-FOV PC headsets. The high quality image is rendered with more detail in the center and gets blurred around the edges anyway, so it creates a natural transition. If eye tracking is there, the center image can move accordingly. This could be partly done during timewarp.
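    The multi-rate layering idea can be modeled in a few lines: one layer refreshes every frame at half resolution, the other renders full resolution every second frame and is re-used in between (a stand-in for timewarp), and the compositor blends them at the final stage. Purely a toy model; the numbers and layer roles are illustrative only:

```python
# Toy model of multi-rate layer compositing: a cheap half-res layer updated
# every frame, an expensive full-res layer updated every second frame and
# cached in between, blended together at the final compositing stage.

import numpy as np

H, W = 8, 8

def render_fast(t):                      # half-res layer, upsampled for compositing
    return np.full((H // 2, W // 2), float(t)).repeat(2, axis=0).repeat(2, axis=1)

def render_detail(t):                    # full-res layer, expensive to render
    return np.full((H, W), 100.0 + t)

detail_cache = None
frames = []
for t in range(4):
    if t % 2 == 0:                       # detail layer only refreshes every 2nd frame
        detail_cache = render_detail(t)
    fast = render_fast(t)                # motion-sensitive layer refreshes every frame
    # compositor: blend the two layers at the final stage
    frames.append(0.5 * fast + 0.5 * detail_cache)

print(frames[1][0, 0])   # frame 1 mixes a fresh fast layer (1) with cached detail (100)
```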


    ______________________________________________


    Also, wrote this in another topic, it wouldn't hurt to have it here too:

    The adoption rate potential of mobile/portable VR is huge if you think about:

    1. The way Wii got to people who weren't interested in gaming at all
    2. The way mobile games have done a similar job
    3. The adoption rate of the hardware for smartphones
    4. The adoption rate of portable, compact stuff that you see on other people, like the Walkman, the iPod, Gameboy, Nintendo DS
    5. How almost everybody walks into a mobile store at some point
    6. How mobile service providers are eager to offer zero-down payment plans to keep you as a customer
    7. How a platform with content delivery, app delivery, payment system, that comes with a portable device, is practical for the consumer, a cash cow for the platform owner
    8. How a phone hardware manufacturer, who is also a display manufacturer, can sponsor content like 60fps K-pop videos to promote its TVs today, and can sponsor content made for these headsets tomorrow, with some obvious intersection in between.
    9. How badly Samsung wants to be "the next big thing", to be seen as the "technology leader", like Apple, instead of "that company in the Far East that you can make build stuff", and how willing they are to spend money on PR to push this aggressively.
    10. How easy, practical and cheap it is to just pick up this futuristic looking thing for portable personal-theater use, for educational programs in schools, for waiting rooms, for a Bluetooth-capable little box on you streaming videos on the fly from your drone, for a "see for yourself" room next to the changing rooms in shops to give you an out-of-body experience to see if that dress makes your butt look fat... You get the picture.


    You walk into a shopping center, a mobile shop, a Walmart, and you see this new thing Samsung has on display, between the phones and the consoles. It says "virtual reality", it says "try it". You put it on; it has magical Story Studio content, a virtual movie theater, 3D 360 degree K-pop videos, fun games, Youtube, Netflix, Hulu... It also says "zero down".

    ___________________________________________________________________


    The problem with this approach being the main focus for Oculus:

    1. You may have a genius like Carmack solving your problems in software, providing tracking and lens correction and everything. None of that would matter when big companies come in with brute-force hardware approaches like Lighthouse or Wearality lenses, going to large-scale production and bringing the cost down, meaning all that time spent trying to do things on the cheap has no value anymore.
    2. Before Apple comes in, Google enters the mobile market with a standard of its own that works with all smartphones that follow it, not just a few.
    3. Apple comes to market. It comes in with Disney apps and Marvel games in 2017. Nobody would care about you anymore. Not even with the full marketing power of Samsung. The Apple Watch outselling all the smartwatches combined to date in one day is a good example of this.
    4. Not only that, but you have lost your cult following by switching to a platform that limits what you can offer, giving up on everything you have claimed was needed for VR, like 75-90fps and positional tracking, by saying "the consoles are not powerful enough for what we want to do" first and then saying "you know what, people buy this, so it's not bad" for a mobile SOC powered product.

    ___________________________________________________________________


    The solution "in an ideal world":

    You merge your mobile and PC lines with a design

    1. Allowing positional tracking on the portable side of things,
    2. Allowing PC to be connected to the same device, using the same app/game/content platform, to run the same content at 90hz, at higher quality
    3. Allowing a cutting edge PC powered flagship title to become your poster child, one that is also supported on mobile, but where the screenshots and videos people see are from the PC version and are hard not to be impressed by.
    4. Allowing you to keep being the company that pushes sales for cutting edge hardware like upcoming stacked memory GPUs, faster CPUs and brand name PCs, making every PR move they make that mentions VR a promotion of your product.
    5. Allowing better hand tracking than anyone, possibly helped with two types of gloves:
    One cheap and light LED glove, maybe with patterns on the outside to facilitate tracking, like these:
    https://twitter.com/ID_AA_Carmack/status/526905527230824448
    And a second one, which is the same thing but with haptics.

    Again... working on both the PC and mobile lines.

    ___________________________________________________________________


    So the advantage Oculus VR has is:

    1. Being in both the mobile/portable and cutting edge hardware PC markets. The problem that comes with it is "not being laser sharp focused" and at least at the hardware side of things, that can stop being a problem by following a design that lets the two lines merge.
    2. On the platform side of things, you have your mobile platform running and accepting payment worldwide now, and your PC platform will follow.
    3. On the software side of things, the tools that can be used to create content, like Unity and UE, work for both mobile and PC. They just need a Carmack renderer for each platform, one mobile optimized, one PC optimized, both using the Vulkan API that is supposed to be perfect for this job. Remember how many rendering paths Doom 3 had, or Quake 2 had? How it was not the renderer that took Carmack's time for id Tech 5 (Rage), but other parts of the engine? Considering how UE4 code is out there and builds with a different renderer are possible, this should be a no-brainer.

    ___________________________________________________________________

    This year has been hard on them, with pressure coming from all sides to get this thing out the door fast: developers, partners, fans... But they have done a lot:

    1. They have perfected the HMD design with GVR2, in terms of the straps used, weight distribution, a fan to stop overheating and lens fog, and a design that allows focus adjustment. Also the CB design with dual display panels, allowing them to use panels that come out of production for the Galaxy line: perfectly good as small square panels, but faulty beyond that.
    2. They have perfected the platform with feedback from the community and all the time and effort it takes to have a paid app store worldwide.
    3. They have perfected the design guidelines and integration with tools used for creating content.
    4. They have some in-house made content and some third party made content to launch the platform with.


    ___________________________________________________________________

    You make a great name for yourself and build a platform, thanks to that Carmack magic allowing you to do some things in software before the hardware that can be used to do them goes mainstream and down in price, you get the benefit of being the first to market. But for that to work, YOU HAVE TO BE FIRST TO MARKET!

    All that needs to happen now is:

    1. The demo of a flagship product at E3, with visuals that would get everyone hyped up, like the one posted in this message.
    2. An announcement of the final design of the thing, surprisingly neat and exciting. And an input solution that feels natural and is futureproof, in terms of the gesture design it allows being portable to the following generations.
    3. And an announcement at the E3 event of limited availability during the summer, allowing developers to finish their content for the final design and the early adopters to be the guinea pigs for the final test run of that design.
    4. To turn the hype train into a hype rocket by reaching record sale numbers in days, the announcement states that this thing is available for ordering now on their website, as a limited edition shipping in summer, with preorders first in line for the high volume consumer launch, happening Q4 2015/Q1 2016 and picking up speed in Q2 2016. So if you like what you see, you better order fast...

    If you think about it, it's a limited launch that makes them "first to market" with a consumer version, and a mobile version, and a platform that works on both. And the high volume to market starts happening at the same time the mobile consumer product goes on sale and is promoted by Samsung.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    An imagining of Half Life using scan based head models with advanced skin shaders for NPCs
    u9aNs2X.png
    (mock-up using HL2 Update and a photo)

    Hoping it could be done by using something like Otoy's Lightstage:
    qxSLyOZ.png
    (2009)

    Probably with 4D captures of the face for scripted parts of the game.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    When I think about how lines like "it's so powerful, it could be used by bad guys, so they are limiting export" were tossed around for Japanese consoles, how rumors like "they trained with this flight simulator" were tossed around after a worldwide tragedy, and how many controversies were centered around games and music videos, it is natural to expect more BS centered around VR than MJ. You've got parents concerned with what kids are watching on TV? Wait till it's in an alternate reality, surrounding the kid, something that the parent can't even see from the outside.

    see:
    http://www.gamesradar.com/sony-removed-suicide-option-project-morpheuss-heist-because-it-was-too-stressful/
  • donkaradiablo Posts: 310
    Hiro Protagonist
    "In the weeks ahead, we’ll be revealing the details around hardware, software, input, and many of our unannounced made-for-VR games and experiences coming to the Rift. Next week, we’ll share more of the technical specifications here on the Oculus blog."

    xBzHSl6.jpg

    E3...
    A new IP with Carmack magic
    Set to define the next step in first person games

    Set on a land that feels magical
    Like being inside a Disney/Pixar universe
    You are inside an animated universe... after all

    It has story studio made experiences scattered around
    It feels like a dream
    A lucid dream where you take control and you let go of it
    And have experiences that almost feel spiritual

    Or nostalgic... like going into an arcade game you played as a kid
    And an adventure game... And an animated movie.

    The trailer ends with a cutscene where a familiar voice is saying "feel the energy between your hands"
    And there is an energyball/lighting/fireball forming between your "virtual hands"
  • GSS Posts: 69
    Hiro Protagonist
    CV1 was just announced. https://www.oculus.com/blog/first-look- ... g-q1-2016/

    There may be an input solution too
  • donkaradiablo Posts: 310
    Hiro Protagonist
    That's what this bit was referring to:
    The trailer ends with a cutscene where a familiar voice is saying "feel the energy between your hands"
    And there is an energyball/lighting/fireball forming between your "virtual hands"

    I'm guessing hand tracking will be Oculus VR's way of one-upping Valve, as it will enable the team to say: made-for-SteamVR games can work perfectly on the Oculus Rift, but made-for-Rift content that uses our robust hand tracking is best experienced on the Rift.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    We will probably see something like this soon:

    BZCzMvC.png

    And if that happens to be the case, this is probably gonna be available to order on the Oculus VR website as the input dev kit, and it's probably gonna be demoed on stage at E3 with some in-house games that make use of it, along with gestures mapped to actions to control the interface, showcasing the next step in human-computer interaction.

    Funny thing is, if this and the cam that is used for tracking, plus a wearable headband or shutter glasses with IR LEDs similar to those that go on the Rift, are bundled together (without a Rift), that would enable TV users to use their hands to control games and apps, and have parallax on their TVs similar to this (but with more precision):



    That would be a great way to make sure Oculus VR's input method gets support in non-VR games and apps too. Gamers who do not want to give up their TVs and who aren't interested in wearing a VR helmet, would still get a great level of immersion out of this.
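    The TV parallax effect described here is the classic head-coupled perspective trick: re-project the scene through the screen plane from the tracked head position, so near objects shift against far ones as the viewer moves. A minimal sketch of the math, assuming a screen at z=0, the head in front of it (z<0) and scene points behind it (no real tracking API involved):

```python
# Head-coupled parallax: project scene points onto the screen plane (z = 0)
# along the ray from the tracked head position to the point. Near points
# shift more than far points as the head moves, which is the parallax cue.

def project_through_screen(point, head):
    """Project a 3D point (x, y, z > 0, behind the screen) onto the
    screen plane z = 0, as seen from the head at (hx, hy, hz < 0)."""
    px, py, pz = point
    hx, hy, hz = head
    t = -hz / (pz - hz)          # parameter where the head->point ray hits z = 0
    return (hx + t * (px - hx), hy + t * (py - hy))

near = (0.0, 0.0, 0.5)           # half a metre behind the screen
far  = (0.0, 0.0, 5.0)           # five metres behind the screen
for head in [(0.0, 0.0, -0.6), (0.3, 0.0, -0.6)]:  # centered head, then shifted right
    print(project_through_screen(near, head), project_through_screen(far, head))
```

    Moving the head 0.3m sideways slides the near point much further across the screen than the far point, which is exactly the depth cue the TV demo relies on.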

    It would also enable VR game developers to port their titles to non-VR environments, as even a VR only game design like EvadeVR, would be possible to bring to the TV.

    This is probably what Nintendo should have done to keep its Wii mojo and take it to the next level.

    Would be funny if that was presented by Palmer, with a dodge the bullet type of game on TV, where at some point Palmer would lift his hand and stop the bullets, and reflect them back.

    What would make this presentation with in-house games and a hand tracking solution trend faster than the speed of light is a Star Wars game deal, of course. Force lightning, reflecting lasers, telekinesis, training where you try to dodge lightsaber attacks... In front of a TV or in VR. Both experiences powered by Oculus.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Mobile is going to get so powerful so very soon.
    1. With lower power consumption and a compact form factor, GPUs with HBM/HBM2 should find their way to mobile faster than previous gen techs did.
    2. APUs and SOCs with high speed interconnect between the CPU and that High Bandwidth Memory, just the way the GPU and HBM are linked, must be a no brainer.
    3. Manufacturing tech will be there.

    Just as an example, an AMD APU with Zen CPU cores, next-gen GPU cores and HBM, accessed with Mantle/Vulkan, should allow for a mobile VR device (with a fan) that is much more powerful than current gen consoles and even most current gaming PCs, probably sooner than we think.

    A GearVR that ships in 2017 is likely to have very advanced graphics.
    ______________________________________________________________________

    Nvidia
    http://wccftech.com/nvidia-ceo-talks-pascal-pk100-pk104-gpus-produced-finfets-hbm/
    shifting from GDDR5 memory to HBM2
    up to 32GB of video buffer
    up to 1TB/s of bandwidth
    2X the performance per watt
    2.7X memory capacity
    3X the bandwidth of Maxwell
    either the 14nm or 16nm node using FinFETs

    AMD
    http://www.forbes.com/sites/jasonevangelho/2015/05/06/confirmed-amd-to-launch-new-hbm-equipped-desktop-graphics-cards-by-end-of-june-2015/
    High Bandwidth Memory (HBM)
    Much higher memory bandwidth speed than GDDR5
    Less power consumption
    On-package memory reduces complexity of enthusiast-class graphics
    Opportunities to extend across AMD product portfolio

    Samsung
    http://techreport.com/news/28244/report-samsung-spending-14-billion-on-new-semiconductor-fab
    Plans to spend $14 billion on a new semiconductor fab.
    The biggest investment in a single semiconductor production line
    Initial production at the new facility is expected to begin in the first half of 2017
    That's a timeline that could point to the production of 10-nm chips
    ______________________________________________________________________

    What that means for VR is

    1. We've had DK1 and 2 to develop games and experiences that are also compatible with mobile systems (2013-2014)
    2. Next we'll have consumer GearVR, with OTOY and Carmack magic providing very impressive visuals in experiences and minigames. (2015-2016)
    3. Next we'll have CV1 that we can plug to a computer with expensive hardware accessed with modern low level APIs, allowing impressive visuals in endless universes. (2016-2017)
    4. Next we'll have GearVR type mobile VR solutions, with SOCs having HBM, DX12 level GPUs and 8+ core CPUs in the same package. (2017-2018)
    5. Next we'll have those at retinal resolution (2018-2019)

    All the tech that will shape VR and gaming in general until 2020, is being carved today.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    I'd back this if it was a Kickstarter project:



    It would be aimed at consumers looking for untethered VR at high quality, and developers looking to test high quality content they make for mobile VR coming to market H2 2017 - H1 2018.
  • donkaradiablo Posts: 310
    Hiro Protagonist


    This... With interactive 3D animated characters that are scan based and use image based hdr lighting...

    This Look and Click Adventure gameplay style seems to be cut out for OTOY light field capture based environments + capture based animations on LightStage based models rendered with image based HDR lighting.



    Would need one "invisible to the user" 3D model of the room to occlude animated models when they are behind the couch, for example. It could include a simple voxel based version of that model, used for lighting the characters in real time. Animated models could even cast shadows on that 3D model using that light information: a shadow on the transparent 3D layer overlaid on top of the light field layer. Soft shadows, using ray casting in real time, which is another tech OTOY is working on. OTOY's capture tools could evolve to the point where every light field capture also constructs those 3D models of the captured space for the game engine to use, out of the images acquired from all angles.
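    The "invisible room model" is essentially a depth-only occlusion proxy: the light field background is just pixels, so a hidden mesh of the room supplies the missing depth and decides when a character disappears behind the couch. A toy z-test over numpy arrays, with all geometry and colours made up for illustration:

```python
# Depth-proxy occlusion: the light-field background has no geometry, so an
# invisible depth map of the room decides per pixel whether an animated
# character is in front of or behind the captured furniture.

import numpy as np

H, W = 4, 6
background = np.zeros((H, W))            # light field capture (flat colour 0)
proxy_depth = np.full((H, W), 10.0)      # invisible room model: far wall...
proxy_depth[:, :3] = 2.0                 # ...except a couch 2m away on the left

char_color = np.full((H, W), 7.0)        # animated character, colour 7
char_depth = np.full((H, W), 3.0)        # standing 3m from the viewer

# depth test: the character only shows where it is CLOSER than the proxy geometry
visible = char_depth < proxy_depth
frame = np.where(visible, char_color, background)

print(frame[0])   # left half occluded by the couch, right half shows the character
```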

    What would complete this and make it perfect is Virtual Desktop turned into a UE4/Unity asset, displayed on that TV. It could be your virtual office where you are the boss and NPCs work for you, if "creating macros and running them" is turned into "showing the NPC what to do and letting the NPC take over and do the repetitive task for you".

    Virtual Desktop turned into a UE4/Unity asset would also enable taking your desktop computer to any virtual world as your mobile computer. It could be on your virtual wristband, could be a tablet, could be a holographic floating screen. You could be in space and still be getting emails, chasing dragons and buying diapers on Amazon, working from wherever your soul desires to go, even if you are physically at the office.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Either those 1080x1200 panels are smaller than previously anticipated, something like 1.2" dense panels, or I have been way too optimistic with my expectations.

    Looking forward to the details about the lenses, the input solution and the in-house games. Oculus pretty much got all the "not fun" news out first (release date, resolution, even requirements). From now on, the road to E3 can be fascinating, or I'm being optimistic again.

    The layers and compositor in the last SDK seem to be all I hoped for. It looks like that's going to allow some funky optimizations. Different resolutions and different fps targets for different layers, composed together, distorted and positionally timewarped at the final stage, is going to turn into magic in the hands of talented programmers.
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Requiring 2 USB 3.0 ports could mean some cool things: a very high resolution IR cam, a three-cam solution to reconstruct you and your game space in virtual space, multiple IR cams on the HMD, or IR cams on wristbands plus one on the desk... The resolution of the capture that happens in your room is more important than the resolution of the display, once SDE is eliminated.
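    Rough arithmetic on why the ports matter, using an assumed uncompressed camera (none of these numbers are confirmed specs):

    ```python
    # Back-of-envelope: raw bandwidth of a hypothetical tracking camera
    # vs. USB 3.0. All camera parameters are assumptions.

    USB3_GBPS = 5.0                      # raw SuperSpeed signalling rate
    USB3_EFFECTIVE = USB3_GBPS * 0.8     # rough usable fraction after 8b/10b

    def camera_gbps(width, height, fps, bits_per_pixel):
        return width * height * fps * bits_per_pixel / 1e9

    # e.g. an uncompressed 1920x1080 IR camera at 60 fps, 8-bit mono:
    cam = camera_gbps(1920, 1080, 60, 8)
    print(f"camera needs {cam:.2f} Gb/s of ~{USB3_EFFECTIVE:.1f} Gb/s usable")
    ```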

    Deep Echo, with soft shadows and good lighting in Ultra mode, would rock with a little better resolution, no SDE and a fixed high refresh rate. It would be fantastic if those hands were mirroring the movement of your own, with the lighting and shadows from the environment applied to your in-game avatar. For NPCs, scan-based models with masks look "freaking real" good, like The Enforcer in the latest Technolust demo. UE4, with great animation and great design, can offer great experiences at the Rift's resolution on a high-end card when the Rift comes out.

    I wonder: if lenses can take care of SDE and create a smooth image out of what is basically fixed black noise between the pixels, maybe they could also create smooth images out of the noise of real-time ray tracing, especially if the patterns of that noise were programmable and used in conjunction with positional tracking and timewarp.
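    The ray-tracing half of that idea is basically temporal accumulation: average jittered noisy samples across frames and the noise washes out. A minimal sketch, with the "programmable pattern" reduced to a seeded PRNG:

    ```python
    # Averaging noisy per-frame samples (as a path tracer would produce)
    # converges toward the true value. The jitter sequence here is just
    # a seeded PRNG standing in for a programmable noise pattern.
    import random

    def noisy_sample(true_value, rng, sigma=0.3):
        return true_value + rng.gauss(0.0, sigma)

    def accumulate(true_value, frames, seed=0):
        rng = random.Random(seed)
        acc = 0.0
        for n in range(1, frames + 1):
            # incremental running mean, as a compositor could keep per pixel
            acc += (noisy_sample(true_value, rng) - acc) / n
        return acc

    # With many accumulated frames the estimate lands close to 0.5.
    print(accumulate(0.5, frames=4096))
    ```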
  • donkaradiablo Posts: 310
    Hiro Protagonist


    Uber-cool. Hoping one of the two USB 3.0 ports required is for something like this. Would make CV1 worth the wait (vs getting the Vive).
  • donkaradiablo Posts: 310
    Hiro Protagonist
    Was really looking forward to this input solution that could have been:

    P963q5r.png

    That's a shame.