Fluid OS - A real Virtual Reality Operating System

mateuszica Posts: 37
edited October 2015 in Showcase
Hi everyone,

I'm developing what I envision as a real operating system for HMD VR.
I really need the help of the VR developer community, with suggestions and new ideas, because I feel like I'm entering an uncharted field.
I don't want to just take Windows or Mac OS and place it on a 3D monitor in a virtual environment.
I want to create a whole system for the VR universe from the ground up.


I was thinking of something like this:
[concept image]

When Apple created a smartphone with multi-touch, they had to create a new system, with new gestures and new applications that fit it.
I think we need to create a new way to interact with the operating system and applications in a VR setting, with the help of the Oculus Rift and the new Control VR (or other hardware that captures hand movements precisely).


I want to borrow some gestures from Android and iOS, create new ones, and build apps just for this system.

It is in very early stages of development (I'm using Unity), and I'm waiting for the Control VR SDK to release a demo.
I need your feedback and suggestions.

More inspiration


Comments

  • Adder Posts: 4
    edited July 2014
    Eye tracking would be a good replacement for the mouse pointer in some new form of menu/focus system. Couple it with voice control, perhaps triggered by a finger-pinch gesture that tells the OS to 'listen'; thumb and forefinger for left click, thumb and middle finger for right click, etc. could allow simpler voice commands to access different arrays of menu commands. The basis for the commands would have to be pushing and pulling applications through planes in front of the viewer, but otherwise normal file/folder commands and custom menu options could be handled in that menu/focus system. I'd have it treat almost everything as a graphics-manipulation program, as that would be the easiest way for future generations to interact with applications and information... visually.
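    A minimal Unity sketch of the gaze-plus-pinch part, for discussion. The tracked transforms are stand-ins for whatever eye- and finger-tracking hardware provides the data, and the OnLeftClick/OnRightClick receivers are hypothetical; this is not tied to any real SDK:

    ```csharp
    using UnityEngine;

    // Gaze ray replaces the mouse pointer; pinches replace the buttons.
    public class GazePinchInput : MonoBehaviour
    {
        public Transform gazeOrigin;                    // driven by the eye tracker
        public Transform thumbTip, indexTip, middleTip; // driven by the hand tracker
        const float PinchThreshold = 0.02f;             // metres between fingertips

        void Update()
        {
            // Focus whatever the user is looking at.
            if (!Physics.Raycast(gazeOrigin.position, gazeOrigin.forward, out RaycastHit hit))
                return;

            // Thumb+forefinger = left click, thumb+middle finger = right click.
            // (A real version would edge-detect the pinch so it clicks once, not every frame.)
            if (Vector3.Distance(thumbTip.position, indexTip.position) < PinchThreshold)
                hit.collider.SendMessage("OnLeftClick", SendMessageOptions.DontRequireReceiver);
            else if (Vector3.Distance(thumbTip.position, middleTip.position) < PinchThreshold)
                hit.collider.SendMessage("OnRightClick", SendMessageOptions.DontRequireReceiver);
        }
    }
    ```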
  • mptp Posts: 237
    Brain Burst
    I've been thinking about this a lot myself.
    I think the way to go about this would be to create a Linux-based operating system, where you can take individual windows, render them as textures onto planes in 3D space, and have the ability to use certain gestures to interact with them, as well as open, close and move them around.

    I have absolutely no programming experience with low-level stuff like messing with kernels and whatnot, so I can only offer a few design choices that I've thought about:

    1. You can't get rid of the keyboard.
    Love it or hate it, you'll simply never get any reasonable adoption if you're not using a physical QWERTY keyboard. That means no emulating a keyboard and detecting keystrokes based on finger location, and no novel means of text entry. I think the ideal way to do this would be to somehow do 1:1 tracking of the keyboard's location on the desk, and render it in that position. The tricky thing is that many keyboards have slightly different key layouts, so to let users keep using their own keyboard without getting confused, you would need a community creating accurate 3D models of various keyboards (or ideally, manufacturers releasing these models along with their physical products).

    2. All windows need to be dynamic
    This is fairly obvious, but it's essential that all windows can be moved around in all 3 directions and resized at will. This could probably be achieved easily using simple gestures. The details of this would have to be worked out through trial and error - what feels most natural, and is most convenient. The user needs to be able to take their hands off the keyboard for just a moment to grab a window (preferably with one hand), and place it up to the top right, while grabbing another window that was sitting behind it and resizing it to fill most of their immediate field of view.
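    A first pass at that one-handed grab could look something like this in Unity (the hand anchor and pinch flag are assumed to be fed by whatever tracker and gesture recogniser are in use; the reach distance is a guess):

    ```csharp
    using UnityEngine;

    // One-handed grab: pinch near a window to pick it up, release to drop it.
    public class GrabbableWindow : MonoBehaviour
    {
        public Transform handAnchor;   // fed by the hand tracker
        public bool isPinching;        // fed by the gesture recogniser
        const float GrabRange = 0.3f;  // metres

        bool held;
        Vector3 grabOffset;

        void Update()
        {
            if (isPinching && !held &&
                Vector3.Distance(handAnchor.position, transform.position) < GrabRange)
            {
                held = true;
                grabOffset = transform.position - handAnchor.position;
            }
            else if (!isPinching)
            {
                held = false;
            }

            if (held)
            {
                // Follow the hand, keeping the window facing the user.
                transform.position = handAnchor.position + grabOffset;
                transform.rotation = Quaternion.LookRotation(
                    transform.position - Camera.main.transform.position);
            }
        }
    }
    ```

    Two-handed resize would follow the same pattern: scale the window by the change in distance between the two pinching hands.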

    3. VR integration needs to be seamless
    This has already been largely achieved, but it needs to be possible to launch VR apps from within the OS without any transition. The fact that the new API allows the Rift to share the positional and rotational data between apps is significant in achieving this.

    4. Hands and body must be rendered
    This is the trickiest thing right now. But users are going to be spending a lot of time within the space rendered by this OS (considering that the majority of users are going to be tech enthusiasts, 8 hours isn't going to be an unrealistic upper limit on single sessions). To avoid this getting uncomfortable, I think it's important to make the user feel grounded and give them their kinesthetic sense. This means giving them a body that responds to their movements 1:1.
    The hard part is that all of the input methods that allow for this kind of experience will probably turn out to be passing trends, especially since it seems that Oculus is working on their own input device which will achieve similar results. (I'm talking specifically about Leap Motion and ControlVR right now: finger tracking as a minimum, employing inverse kinematics to solve for wrist, elbow and shoulder positions, or forward kinematics for each joint with ControlVR.)
    I think if a developer was to start working on an ideal VR OS right now, Leap Motion would be the way to go since it's not as cost-prohibitive as ControlVR, and you don't need such a wide area of operation.
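    For what it's worth, the two-bone inverse-kinematics step mentioned in point 4 is simple enough to sketch: given tracked shoulder and wrist positions plus known bone lengths, the elbow falls out of the law of cosines. The bone lengths and the elbow "bend hint" direction here are assumptions:

    ```csharp
    using UnityEngine;

    public static class ArmIK
    {
        // Returns a plausible elbow position between a tracked shoulder and wrist.
        public static Vector3 SolveElbow(Vector3 shoulder, Vector3 wrist,
                                         float upperArm, float forearm, Vector3 bendHint)
        {
            Vector3 toWrist = wrist - shoulder;
            float d = Mathf.Clamp(toWrist.magnitude, 0.01f, upperArm + forearm);
            Vector3 axis = toWrist.normalized;

            // Law of cosines: where the elbow projects onto the shoulder-wrist
            // axis, and how far it sits off that axis.
            float proj = (d * d + upperArm * upperArm - forearm * forearm) / (2f * d);
            float height = Mathf.Sqrt(Mathf.Max(0f, upperArm * upperArm - proj * proj));

            // Bend out of the axis, towards the hint (e.g. down and slightly back).
            Vector3 bendDir = Vector3.ProjectOnPlane(bendHint, axis).normalized;
            return shoulder + axis * proj + bendDir * height;
        }
    }
    ```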

    In general
    To be honest, I don't think the time has come for a dedicated VR operating system, just because of the lack of an input system that we can really count on.
    I think VR is going to be a big enough deal that any VROS is going to be made by the likes of Google, Facebook, Microsoft, Apple, etc. I only hope that they do it right.
    If not, then I'm sure that the technology that they use will become a standard for OS input within VR, and then a clever smaller team of developers can make a Linux-based OS using the same input that will kick butt.

    Until then, I'm going to be satisfied with the current solutions that just render the windows framebuffer to a plane in VR. :)

    edit: Just a thought - suppose there was some low-level way to simulate many separate display devices (say, 20 hypothetical screens), each with its own framebuffer*. Then you could have the desktop rendered to one framebuffer*, and each window rendered to its own virtual display. You could then render each virtual display's framebuffer* as a texture onto planes, with the desktop somehow rendered as a background sphere or something.
    * note: I only vaguely understand what a framebuffer is and how to work with it, so I have no idea if this is actually feasible.
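    On the Unity side at least, showing one of those per-display framebuffers on a plane is straightforward, assuming the captured pixels can be handed over as a raw RGBA buffer somehow (the capture side is the open question):

    ```csharp
    using UnityEngine;

    // One quad per virtual display; new frames arrive as raw RGBA bytes.
    public class VirtualDisplay : MonoBehaviour
    {
        public int width = 1920, height = 1080;
        Texture2D screen;

        void Start()
        {
            screen = new Texture2D(width, height, TextureFormat.RGBA32, false);
            GetComponent<Renderer>().material.mainTexture = screen;
        }

        // Hypothetical callback: whatever captures the framebuffer calls this.
        public void OnFrame(byte[] rgbaPixels)
        {
            screen.LoadRawTextureData(rgbaPixels);
            screen.Apply(); // uploads the new frame to the GPU
        }
    }
    ```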
    I make things calling myself OmniPudding. http://www.omnipudding.com
  • Tgaud Posts: 788
    I don't see why we should emulate a keyboard when we have a physical keyboard...

    All we have to do is calibrate the position of the keyboard so the two of them match perfectly.
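    That calibration could be as simple as touching two known corners of the physical keyboard with a tracked fingertip and posing the virtual model from those two points. A minimal sketch, assuming a level desk and that the corners are touched left end first:

    ```csharp
    using UnityEngine;

    // Aligns the virtual keyboard model with the real one from two touches:
    // bottom-left corner first, then bottom-right.
    public class KeyboardCalibration : MonoBehaviour
    {
        public Transform virtualKeyboard; // the 3D model to align

        public void Calibrate(Vector3 leftCorner, Vector3 rightCorner)
        {
            // Midpoint gives the position.
            virtualKeyboard.position = (leftCorner + rightCorner) * 0.5f;

            // Corner-to-corner direction fixes the yaw; assume the desk is level.
            Vector3 along = rightCorner - leftCorner;
            virtualKeyboard.rotation =
                Quaternion.LookRotation(Vector3.Cross(along, Vector3.up), Vector3.up);
        }
    }
    ```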
  • I think it's much better to swipe your finger in the air using a virtual keyboard.
  • ryanyth Posts: 77
    edited September 2014
    I can see that we are still too dependent on traditional input devices
    when it comes to envisioning a future real VR OS.

    We definitely have to move past the keyboard/mouse combo, simply because they
    are not relevant in an environment where we cannot even see what we're typing.
    (I know most of you are touch typists, but what I'm envisioning here is an OS that can be used
    even by people who are very new to touch typing, or maybe even people with physical disabilities.)

    I have a few suggestions listed with my comments on them:

    Input via existing touchscreen device (e.g. Apple iPad / Android tablet)

    Example of this in action. (Sure, it only allows for poking and interacting with a virtual character
    on screen for now, but with more work and research it could become a viable input device.)


    Pros:
    a) No need to see what you're typing, and it is very intuitive to just use a touchscreen for input.
    b) No need to buy additional equipment; the user can just use an existing mobile device.
    c) Portable.

    Cons:
    a) Limited typing capability (I'm guessing that for this to work, you'd probably need to swipe letters on the iPad?)
    b) Limited functionality (can't imagine using an iPad for anything other than as a pointing device or for simple typing)
    c) Not usable if the user does not have use of their limbs.

    ============================================================================================

    Input via Virtual Augmented Reality

    Example of this in action. (Extremely impressive video.)


    Pros:
    a) Your imagination is the limit when it comes to UI design. With further work and research,
    you could actually design a UI to work with hand gestures, like in Minority Report.
    b) Able to actually have normal vision, albeit virtually, which means the user can
    actually go BACKWARD and use a keyboard/mouse if they are more comfortable with it.

    Cons:
    a) Limited to a single room with a motion-capture setup installed (for now);
    future iterations might involve better wireless motion-sensing tech.
    b) Cost is extremely prohibitive for the average user.
    c) Setting up the equipment may be extremely challenging for the average user.
    d) Not usable if the user does not have use of their limbs.
  • Voice Input

    Example of this in action. (Yes, we've all seen this iconic scene in action.
    But note how fluid the conversation is between Robert Downey Jr. and "Jarvis".
    True voice recognition, IMO, would be indistinguishable from human speech.)

    Interestingly enough, this video shows a combination of both "Virtual Augmented Reality" AND Voice Input.



    Pros:
    a) The most natural way of human input.
    b) No additional equipment to buy.
    c) Limbs not required; just talk.

    Cons:
    a) Not suitable for apps that require precision, e.g. CAD or drawing.
    b) At our current rate of technological progress, it might be quite a while before we can achieve this.


    ==========================================================================================

    Input via Thought / Consciousness

    NOW we're diving into DEEP sci-fi / cyberpunk territory!!!
    (Obviously this one will probably not be possible in our lifetime, but I'm just putting it out here as a reference.)

    I don't think there are any videos that can adequately show how this could be done,
    but basically you control input via your thoughts, or actually "disconnect" from your physical body
    and enter the network. Concepts like "MindJack", "Ghostdive", etc. You get the idea.

    Pros:
    a) COMPLETELY FREE from the physical limitations of the fragile human body.
    b) You can't input faster than human thought itself.
    c) This is BEYOND virtual reality, and can fundamentally change how humans think.

    Cons:
    a) Is this even feasible in the foreseeable future?
    b) Potential legal and ethical issues? What's to stop people from hacking into your brain?
    c) We're already pissed off with BSODs now. Imagine what a BSOD would do to your brain.
  • dagf2101 Posts: 5
    NerveGear
    I don't know if this was addressed before, but what about taking a real tablet with infrared LEDs on it for position tracking?
    Capture the tablet's output and then render it on a virtual tablet that has the same position as the real tablet.

    That way we at least have a tablet to write on in the virtual world. I guess the touch position could also be used as a mouse.
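    In Unity terms that could be as simple as the sketch below. The anchor transform is assumed to be driven by the IR-LED tracking, the captured output is assumed to already be on the screen quad's material, and the screen dimensions are made up:

    ```csharp
    using UnityEngine;

    // A virtual tablet glued to the tracked pose of the real one;
    // touches map straight onto the virtual screen as a cursor.
    public class VirtualTablet : MonoBehaviour
    {
        public Transform tabletAnchor; // pose from the LED tracking
        public Transform screenQuad;   // quad showing the captured output (unscaled)
        public Transform cursor;       // marker for the current touch
        public Vector2 screenSize = new Vector2(0.24f, 0.17f); // metres

        void LateUpdate()
        {
            // Keep the virtual tablet exactly where the real one is.
            transform.position = tabletAnchor.position;
            transform.rotation = tabletAnchor.rotation;
        }

        // touchUV: touch position normalised to 0..1 on the real screen.
        public void OnTouch(Vector2 touchUV)
        {
            cursor.position = screenQuad.TransformPoint(
                new Vector3((touchUV.x - 0.5f) * screenSize.x,
                            (touchUV.y - 0.5f) * screenSize.y, 0f));
        }
    }
    ```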
  • 3DEA Posts: 47
    I think I have seen something like this before :-)
  • Where?
  • 3DEA Posts: 47
    Here somewhere ;-) SpaceSys VROSE

    A recent WIP video:
  • mateuszica Posts: 37
    edited September 2014
    3DEA
    This is just another "beautiful-environment" simulator of 2D flat screen(s) running Windows (to be used with a mouse).

    My idea is to develop a new operating system, not to simulate a 2D screen running an old operating system.
  • By the way, "3D desktops" like SpaceSys have existed since 1999.

    Example:
    http://tricks-collections.com/wp-content/uploads/2010/02/Sphere-3d-Desktop.jpg
  • 3DEA Posts: 47
    We like to think that what we are working on, and what we are aiming at, is quite the opposite of what you compared our work to. We are aware of several attempts at VR environments in the past, and we've probably tried them all. You are right that these are only classical desktops, extended by perhaps half an additional dimension at best, looking pretty but with no additional usability.

    Not trying to discourage you, but creating a new operating system from scratch today is an effort that would demand not only huge resources but also an almost impossible struggle to gain any significant market impact against the big players. An operating system itself is actually not the graphical user interface, be it in 2D or 3D; it is a very large set of system functions dealing with process management, memory management, file management, hardware abstraction, to name but a few...

    This is exactly why we think that a proper VR-enabled OS environment (VROSE, as we like to call it) is a perfect fit for current OSs - it brings a whole new dimension to the way you work with and organise your files, and it opens a new way to work with and organise application windows inside a VR environment. Think about a graphical editing app overcoming the window size limits imposed by the physical monitor. Think about stripping all the tool windows and property sheets you need in such an app from the main window and letting them float in VR space, so you only need to turn your head to access them. That is to mention just a few basic features we are working on...

    We have already achieved all of the above with our VROSE, and yet this is only a small milestone in our roadmap. Other interesting things that SpaceSys already supports include the Oculus Rift and Kinect (meaning voice control and gesture movement, but also Minority Report-style gestures in the future...). Other relevant technologies will be supported in the future as well, so we rather think of the mouse and keyboard as transitional but necessary input devices until the more VR-friendly input technologies are mature enough for everyday use. All elements of the traditional desktop (icons, folders, menus, windows...) are by now true 3D objects inside SpaceSys and allow for true interaction inside a 3D environment, and all traditional desktop functions (with which the user is already familiar, meaning a very fast transition...) are now fully functional inside a 3D environment and work with current VR technologies.

    Other planned features include multiuser and cross-platform support, which will present a seamless experience shared by users of all major operating systems, and will also open further possibilities like visual sharing, teamwork inside a VR environment, a full-featured 3D GUI framework for application developers, etc.

    The idea you have is similar to our work, and I can tell you it takes more than a wish to make it happen; we have been working hard for quite a while on exactly this: creating a seamless environment where you can work as you imagined, a UI that enables all of what you speak about. Only companies as huge as Google, Yahoo, Microsoft, Apple or Facebook can create a new OS with significant market impact. All we can do is adapt to them.

    In short, SpaceSys is here and now, and we are working hard to polish things for the next release. It tries to sum up what's available for everyday use from current VR technologies and presents an easy step (but does not stop there!) from the current computing experience towards VR computing, with all its bells and whistles.
  • 3DEA Posts: 47
    mateuszica wrote:
    3DEA
    This is just another "beautiful-environment" simulator of 2D flat screen(s) running Windows (to be used with a mouse).

    My idea is to develop a new operating system, not to simulate a 2D screen running an old operating system.


    We also use Microsoft's Kinect, with gesture and voice controls; currently only Kinect 1 (from the Xbox), as we are still waiting to get Kinect for Windows 2, but the controls are there. We can go for Leap Motion now and later for anything that comes our way: Intel RealSense, Hydras, or whatever... Whatever we implement and build, we try to design with future hardware in mind, so we create controls thinking of what is to come.

    We are also thinking about creating our own hardware, a tactile-feedback controller; we can't talk about it yet, but it should be better for controlling such a system than anything available... But that is a project in itself, still in early design.

    We have the EEG headset, but it is still in bad shape; the APIs are crude, so we will wait for them to become more widely available and more precise. We could lose so much time working on it now that it makes no sense. In about 3-4 years, I hope, they will get there...

    We are working on all of it; it will take some more time to implement everything. You can download the alpha demo on our website and check it out.

    As for the graphics, the current state is a placeholder; we can create anything out of it as long as we have the system running.

    We have a similar goal; yours is a bit bigger, as creating an OS would take quite some time for you alone...
  • mateuszica Posts: 37
    Good job!
  • Abnorm Posts: 2
    NerveGear
  • This post is 14 months old; has nobody developed something like this??