Fluid OS - A Real Virtual Reality Operating System

mateuszica
Honored Guest
Hi everyone

I'm developing what I envision as a real operating system for HMD VR.
I really need the help of the VR developer community with suggestions and new ideas, because I feel like I'm entering an uncharted field.
I don't want to just take Windows or Mac OS and put it on a 3D monitor inside a virtual environment.
I want to create a whole system for the VR universe from the ground up.


I was thinking of something like this.


When Apple created a smartphone with multi-touch, they had to create a new system with new gestures and new applications that fit that system.
I think we need to create a new way to interact with the operating system and applications in the VR setting, with the help of the Oculus Rift and the new Control VR (or other hardware that captures hand movements perfectly).


I want to borrow some gestures from Android and iPhone iOS, create new ones, and build apps just for this system.

It is in very early stages of development (I'm using Unity), and I'm waiting for the Control VR SDK release to put together a demo.
I need your feedback and suggestions.

More inspiration:
[Image: gesture-based gadgets from Minority Report]
17 REPLIES

Adder
Honored Guest
Eye tracking would be a good replacement for the mouse pointer in some new form of menu/focus system. Couple it with voice control, perhaps triggered by a finger-pinch gesture that tells the OS to 'listen'; thumb and forefinger for left click, thumb and middle finger for right click, and so on, could let simpler voice commands access different arrays of menu commands. The basis for the commands would have to be pushing and pulling applications through planes in front of the viewer, but otherwise normal file/folder commands and custom menu options could be handled in that menu/focus system. I'd have it treat almost everything as a graphics manipulation program, since that would be the easiest way for future generations to interact with applications and information... visually.
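A minimal Unity C# sketch of the pinch-as-click part of this idea, assuming some hand-tracking component (Leap Motion, ControlVR or similar) is already driving the fingertip transforms; the transforms, threshold and logging below are illustrative placeholders, not any real SDK's API:

```csharp
// Sketch of pinch-as-click: thumb+index for left click, thumb+middle for right.
// The fingertip transforms are assumed to be driven by a hand tracker
// (Leap Motion, ControlVR, etc.); nothing here is a real SDK API.
using UnityEngine;

public class PinchClickDetector : MonoBehaviour
{
    public Transform thumbTip;
    public Transform indexTip;
    public Transform middleTip;
    public float pinchThreshold = 0.02f;   // metres between fingertips

    bool leftHeld, rightHeld;

    void Update()
    {
        bool left  = Vector3.Distance(thumbTip.position, indexTip.position)  < pinchThreshold;
        bool right = Vector3.Distance(thumbTip.position, middleTip.position) < pinchThreshold;

        // Edge-trigger so a held pinch counts as one click, not one per frame.
        if (left && !leftHeld)   Debug.Log("Left click gesture");   // route to the focused window
        if (right && !rightHeld) Debug.Log("Right click gesture");  // e.g. open a context menu

        leftHeld = left;
        rightHeld = right;
    }
}
```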

mptp
Explorer
I've been thinking about this a lot myself.
I think the way to go about this would be to create a Linux-based operating system, where you can take individual windows, render them as textures onto planes in 3D space, and have the ability to use certain gestures to interact with them, as well as open, close and move them around.

I have absolutely no programming experience with low-level stuff like messing with kernels and whatnot, so I can only offer a few design choices that I've thought about:

1. You can't get rid of the keyboard.
Love it or hate it, you'll simply never get any reasonable adoption if you're not using a physical QWERTY keyboard. That means no emulating a keyboard and detecting keystrokes based on finger location, and no novel means of text entry. I think the ideal way to do this would be to somehow do 1:1 tracking of the keyboard's location on the desk and render it in that position. The tricky part is that many keyboards have slightly different key layouts, so to let users keep using their own keyboard without getting confused, you would need a community creating accurate 3D models of various keyboards (or ideally, manufacturers releasing these models along with their physical products).
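For the 1:1 keyboard registration, a minimal Unity-side sketch might look like the following; the tracked pose source and the community-made keyboard model are assumptions, since no specific tracking hardware is named here:

```csharp
// Sketch: keep a 3D keyboard model registered to the real keyboard.
// "trackedKeyboardPose" is assumed to come from some external tracker
// (IR marker, camera-based tracking, etc.) expressed in the same space
// as the HMD camera rig.
using UnityEngine;

public class KeyboardProxy : MonoBehaviour
{
    public Transform trackedKeyboardPose;  // fed by whatever tracking you have
    public Transform keyboardModel;        // 3D model matching the user's physical keyboard

    void LateUpdate()
    {
        // 1:1 registration: the virtual model simply mirrors the real pose.
        keyboardModel.position = trackedKeyboardPose.position;
        keyboardModel.rotation = trackedKeyboardPose.rotation;
    }
}
```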

2. All windows need to be dynamic
This is fairly obvious, but it's essential that all windows can be moved around in all 3 directions and resized at will. This could probably be achieved easily using simple gestures. The details of this would have to be worked out through trial and error - what feels most natural, and is most convenient. The user needs to be able to take their hands off the keyboard for just a moment to grab a window (preferably with one hand), and place it up to the top right, while grabbing another window that was sitting behind it and resizing it to fill most of their immediate field of view.
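As a rough illustration of the one-handed grab, here is a small Unity sketch; the grab test is a keyboard placeholder standing in for whatever fist or pinch detection the actual finger tracker provides:

```csharp
// Sketch of "grab a window with one hand": while a grab gesture is held
// near a window plane, the plane follows the hand.
using UnityEngine;

public class GrabbableWindow : MonoBehaviour
{
    public Transform handAnchor;        // palm transform from the hand tracker
    public float grabRadius = 0.15f;    // how close the hand must be, in metres

    Vector3 grabOffset;
    bool held;

    bool GrabHeld()
    {
        // Placeholder: replace with a real fist/pinch test from your tracking SDK.
        return Input.GetKey(KeyCode.G);
    }

    void Update()
    {
        if (!held && GrabHeld() &&
            Vector3.Distance(handAnchor.position, transform.position) < grabRadius)
        {
            held = true;
            grabOffset = transform.position - handAnchor.position;
        }
        else if (held && !GrabHeld())
        {
            held = false;
        }

        if (held)
            transform.position = handAnchor.position + grabOffset;
    }
}
```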

3. VR integration needs to be seamless
This has already been largely achieved, but it needs to be possible to launch VR apps from within the OS without any transition. The fact that the new API allows the Rift to share the positional and rotational data between apps is significant in achieving this.

4. Hands and body must be rendered
This is the trickiest thing right now. But users are going to be spending a lot of time within the space rendered by this OS (considering that the majority of users are going to be tech enthusiasts, 8 hours isn't going to be an unrealistic upper limit on single sessions). To avoid this getting uncomfortable, I think it's important to make the user feel grounded and give them their kinesthetic sense. This means giving them a body that responds to their movements 1:1.
The hard part is that all of the input methods that allow for this kind of experience will probably turn out to be passing trends, especially since it seems that Oculus is working on their own input device which will achieve similar results. (I'm talking specifically about Leap Motion and ControlVR here: finger tracking as a minimum, with inverse kinematics to solve for wrist, elbow and shoulder positions, or forward kinematics for each joint with ControlVR.)
I think if a developer was to start working on an ideal VR OS right now, Leap Motion would be the way to go since it's not as cost-prohibitive as ControlVR, and you don't need such a wide area of operation.
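On the inverse-kinematics point, the core of a two-bone arm solve is just the law of cosines; this Unity-flavoured sketch only computes the elbow bend angle from a tracked shoulder and hand position, and leaves the elbow swivel direction (which needs a heuristic, or extra sensors like ControlVR's) out entirely:

```csharp
// Minimal two-bone IK sketch: given the shoulder position and a tracked hand
// position, the law of cosines gives the elbow bend angle.  Picking where the
// elbow actually points still needs a heuristic such as "elbows hang down".
using UnityEngine;

public static class ArmIK
{
    // Returns the interior elbow angle in degrees (180 = arm fully straight).
    public static float ElbowAngle(Vector3 shoulder, Vector3 hand,
                                   float upperArmLength, float forearmLength)
    {
        // Clamp the reach so an out-of-range hand target still gives a valid triangle.
        float d = Mathf.Clamp(Vector3.Distance(shoulder, hand),
                              Mathf.Abs(upperArmLength - forearmLength),
                              upperArmLength + forearmLength);

        // Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(elbow)
        float cosElbow = (upperArmLength * upperArmLength
                        + forearmLength * forearmLength
                        - d * d) / (2f * upperArmLength * forearmLength);

        return Mathf.Acos(Mathf.Clamp(cosElbow, -1f, 1f)) * Mathf.Rad2Deg;
    }
}
```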

In general
To be honest, I don't think the time has come for a dedicated VR operating system, simply because we lack an input system that we can really count on.
I think VR is going to be a big enough deal that any VR OS will be made by the likes of Google, Facebook, Microsoft, Apple, etc.; I only hope that they do it right.
If not, then I'm sure the technology they use will become a standard for OS input within VR, and then a clever, smaller team of developers can make a Linux-based OS using the same input that will kick butt.

Until then, I'm going to be satisfied with the current solutions that just render the windows framebuffer to a plane in VR. 🙂

edit: Just a thought - suppose there were some low-level way to simulate many separate display devices (say, 20 virtual screens), each with its own framebuffer*. In that case, you could have the desktop rendered to one framebuffer* and each window rendered to its own virtual display. Then you could render each virtual display's framebuffer* as a texture onto planes, with the desktop somehow rendered as a background sphere or something.
* note: I only vaguely understand what a framebuffer is and how to work with it, so I have no idea if this is actually feasible.
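On the Unity side (which the original poster is using), the closest analogue to "one framebuffer per virtual display" is a RenderTexture per virtual screen, each shown on its own quad; the hard, platform-specific part - actually capturing real OS windows into those textures - is only assumed to exist and is not shown here:

```csharp
// Sketch of "one framebuffer per virtual display" in Unity: each virtual
// screen is a RenderTexture displayed on its own quad.  Whatever captures a
// real OS window would render into that RenderTexture (not shown).
using UnityEngine;

public class VirtualDisplayWall : MonoBehaviour
{
    public int screenCount = 4;
    public int width = 1280, height = 720;

    void Start()
    {
        for (int i = 0; i < screenCount; i++)
        {
            var screenTexture = new RenderTexture(width, height, 0);

            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.name = "VirtualDisplay_" + i;
            quad.transform.SetParent(transform);
            quad.transform.localPosition = new Vector3((i - screenCount / 2f) * 1.1f, 1.5f, 2f);
            quad.transform.localScale = new Vector3(1f, (float)height / width, 1f);

            // The window-capture pipeline would draw into screenTexture each frame.
            quad.GetComponent<Renderer>().material.mainTexture = screenTexture;
        }
    }
}
```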
Melbourne-based creative technologist. I flit between experimental AR/VR experiences, audiovisual electronics and full-stack web development. http://www.lachansleight.io

Tgaud
Honored Guest
I don't see why we should emulate a keyboard when we have a physical keyboard...

All we have to do is calibrate the position of the keyboard so the two of them match perfectly.

mateuszica
Honored Guest
I think it's much better to swipe your finger in the air using a virtual keyboard.

ryanyth
Honored Guest
I can see that we are still too dependent on traditional input devices when it comes to envisioning a future real VR OS.

We definitely have to move past the keyboard/mouse combo, simply because it isn't relevant in an environment where we cannot even see what we're typing.
(I know most of you are touch typists, but what I'm envisioning here is an OS that can be used even by people who are very new to touch typing, or who have physical disabilities.)

I have a few suggestions listed with my comments on them:

Input via an existing touchscreen device (e.g. Apple iPad / Android tablet)

Example of this in action. (Sure, it only allows for poking at and interacting with a virtual character on screen for now, but with more work and research it could become a viable input device. A rough plumbing sketch of the idea follows after the pros and cons below.)



Pros:
a) No need to see what you're typing, and it is very intuitive to just use a touchscreen for input.
b) No need to buy additional equipment; the user can just use an existing mobile device.
c) Portable.

Cons:
a) Limited typing capability (I'm guessing that for this to work, you'd probably need to swipe letters on the iPad?)
b) Limited functionality (can't imagine using an iPad for anything other than as a pointing device or for simple typing)
c) Not usable if the user does not have the use of their limbs.
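As the plumbing sketch mentioned above: assume a hypothetical companion app on the tablet that sends normalised touch coordinates over the local network, and a Unity component that moves a cursor across a virtual screen quad. The port, message format and cursor mapping below are invented for illustration:

```csharp
// Sketch of the tablet-as-trackpad idea: a hypothetical companion app sends
// "x,y" normalised touch coordinates (0..1) over UDP; this component maps
// them onto a virtual screen quad.  Protocol details are made up.
using System.Net;
using System.Net.Sockets;
using UnityEngine;

public class TabletTouchCursor : MonoBehaviour
{
    public Transform screenQuad;   // the virtual screen the cursor moves over
    public Transform cursor;       // small marker object
    UdpClient socket;

    void Start()     { socket = new UdpClient(9050); }
    void OnDestroy() { socket.Close(); }

    void Update()
    {
        while (socket.Available > 0)
        {
            IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            string msg = System.Text.Encoding.ASCII.GetString(socket.Receive(ref sender));
            string[] parts = msg.Split(',');

            float x = float.Parse(parts[0]) - 0.5f;   // centre the coordinates
            float y = float.Parse(parts[1]) - 0.5f;   // touch y grows downward, so flip below

            // Map normalised touch position onto the quad's local surface (quads span -0.5..0.5).
            cursor.position = screenQuad.TransformPoint(new Vector3(x, -y, -0.01f));
        }
    }
}
```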

============================================================================================

Input via Virtual Augmented Reality

Example of this in action. (Extremely impressive video.)



Pros:
a) Your imagination is the limit when it comes to UI design. With further work and research, you could actually design UIs that work with hand gestures like in Minority Report.
b) You still have normal vision, albeit virtually, which means the user can actually go BACKWARD and use a keyboard/mouse if they are more comfortable with it.

Cons:
a) Limited to a single room with a motion-capture set-up installed (for now); future iterations might involve better wireless motion-sensing tech.
b) Cost is extremely prohibitive for the average user.
c) Setting up the equipment may be extremely challenging for the average user.
d) Not usable if the user does not have the use of their limbs.

ryanyth
Honored Guest
Voice Input

Example of this in action. (Yes, we've all seen this iconic scene in action, but note how fluid the conversation is between Robert Downey Jr. and "Jarvis". True voice recognition, IMO, would be indistinguishable from human speech. A minimal keyword-command sketch follows after the pros and cons below.)

Interestingly enough, this video shows a combination of both "Virtual Augmented Reality" AND Voice Input.




Pros:
a) The most natural way for humans to provide input.
b) No additional equipment to buy.
c) Limbs not required; just talk.

Cons:
a) Not suitable for apps that require precision, e.g. CAD or drawing.
b) At our current rate of technological progress, it might be quite a while before we can achieve this.
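As the sketch mentioned above: we are nowhere near "Jarvis", but simple spoken commands are already practical. A minimal example using Unity's UnityEngine.Windows.Speech keyword recognizer (Windows-only; the command phrases are made up) might look like this:

```csharp
// Far from "Jarvis", but simple spoken commands already work: a fixed set of
// phrases is recognised and routed to window-management actions.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        string[] commands = { "open browser", "close window", "bring that closer" };
        recognizer = new KeywordRecognizer(commands);
        recognizer.OnPhraseRecognized += args =>
            Debug.Log("Heard: " + args.text);   // route to the window manager here
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```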


==========================================================================================

Input via Thought / Consciousness

NOW we're diving into DEEP Sci-Fi / Cyberpunk Territory!!!
(Obviously this one will probably not be possible in our lifetime, but I'm just putting it out here as a reference.)

I don't think there are any videos that can adequately show how this would be done, but basically you control input via your thoughts, or actually "disconnect" from your physical body and enter the network. Think of concepts like "MindJack", "Ghostdive", etc. You get the idea.

Pros:
a) COMPLETELY FREE from the physical limitations of the fragile human body.
b) You can't input faster than human thought itself.
c) This is BEYOND virtual reality, and can fundamentally change how humans think.

Cons:
a) Is this even feasible in the foreseeable future?
b) Potential legal and ethical issues? What's to stop people from hacking into your brain?
c) We're already pissed off with BSODs now. Imagine what a BSOD would do to your brain.

dagf2101
Honored Guest
I don't know if this was addressed before, but what about taking a real tablet with infrared LEDs on it for position tracking?
Capture the tablet's output and then render it on a virtual tablet that has the same position as the real tablet.

That way we at least have a tablet to write on in the virtual world. I guess the touch position could also be used as a mouse.
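A minimal Unity sketch of this, assuming the IR-LED tracking already yields the tablet's pose and that some capture pipeline (not shown) provides the tablet's screen as a texture:

```csharp
// Sketch of the tracked tablet: one quad follows the physical tablet's pose
// and shows whatever texture the capture/streaming pipeline provides.
using UnityEngine;

public class VirtualTablet : MonoBehaviour
{
    public Transform trackedTabletPose;   // from the IR-LED tracking
    public Texture tabletScreenCapture;   // filled in by the capture/streaming code

    void LateUpdate()
    {
        transform.position = trackedTabletPose.position;
        transform.rotation = trackedTabletPose.rotation;
        GetComponent<Renderer>().material.mainTexture = tabletScreenCapture;
    }
}
```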

3DEA
Honored Guest
I think I have seen something like this before 🙂

mateuszica
Honored Guest
Where?