Nimble & 13th, new design targets?

ganzuul
Honored Guest
Early-days VR has a new dimension, or a new paradigm. It's as if instead of bringing the player into the game, we can bring the game to the player. The capabilities of the headset have to define what sort of user interface design we eventually settle on.

The DK2 was a bit confusing since it didn't in fact do what headset-mounted cameras could do. These acquisitions make a lot more sense, mainly because the type of occlusion that you get from the natural perspective of the eyes in your head is something we are used to dealing with anyway.
In fact, the DK2 seemed like a step in the wrong direction. People are hell-bent on not sitting down for their VR experience, and there is only so much budget that one can put into the final package that the consumer pays for. This entry-level device has to be the basis for the VR platform. - We're going mobile and we're taking our User Interface and HUD Design with us. Preferably not while it's glued to our faces.


It would seem that the first thing a new user will do is look around and observe how their natural surroundings get digitized and positioned in their field of vision, where they expect these things to be. After orienting themselves in this way, they are likely to look back to their desk, keyboard, mouse and monitor, since that's where all their computer stuff is located anyway. If the monitor shows a clone of what the headset is displaying, then we get the 80s infinite video feedback corridor. This desktop real estate might be put to better use, such as a frame for holding virtual post-it notes.
The space immediately outside of the monitor's frame feels like a natural place to begin exploration of augmented reality. Certain apps could register themselves with a VR-oriented service that runs as a local process on the host computer. They could place their volumetric icons on the frame of the monitor.
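Purely as a sketch of what that registration could look like (every name below is invented; no such service exists today): an app hands the shell a small description of its volumetric icon plus a slot on the monitor frame, and the service turns that slot into a room-space anchor using the monitor pose the headset cameras would be tracking.

// Hypothetical sketch: apps register volumetric icons with a local VR shell
// service, anchored to slots along the physical monitor's frame.
// All types and names are invented for illustration.
#include <cstdio>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };                 // room-space position, metres

enum class FrameEdge { Top, Bottom, Left, Right };

struct VolumetricIcon {
    std::string appId;       // e.g. "com.example.mailclient"
    std::string meshUri;     // small 3D model shown on the monitor frame
    FrameEdge   edge;        // which edge of the frame to sit on
    float       offset;      // 0..1 along that edge
};

class VrShellService {       // would run as a local process on the host PC
public:
    bool registerIcon(const VolumetricIcon& icon) {
        // A real service would validate the app, check for slot collisions
        // and persist the layout; here we just store the request.
        icons_.push_back(icon);
        return true;
    }
    // Convert a frame slot into a room-space anchor, given the tracked
    // monitor pose (centre plus physical width/height in metres).
    Vec3 anchorFor(const VolumetricIcon& icon, Vec3 centre,
                   float widthM, float heightM) const {
        Vec3 p = centre;
        switch (icon.edge) {
            case FrameEdge::Top:    p.y += heightM / 2; p.x += (icon.offset - 0.5f) * widthM;  break;
            case FrameEdge::Bottom: p.y -= heightM / 2; p.x += (icon.offset - 0.5f) * widthM;  break;
            case FrameEdge::Left:   p.x -= widthM / 2;  p.y += (icon.offset - 0.5f) * heightM; break;
            case FrameEdge::Right:  p.x += widthM / 2;  p.y += (icon.offset - 0.5f) * heightM; break;
        }
        return p;
    }
private:
    std::vector<VolumetricIcon> icons_;
};

int main() {
    VrShellService shell;
    VolumetricIcon mail{"com.example.mailclient", "icons/envelope.glb",
                        FrameEdge::Top, 0.25f};
    shell.registerIcon(mail);
    // Monitor centre 0.7 m in front of the user, 0.6 m wide, 0.35 m tall.
    Vec3 anchor = shell.anchorFor(mail, {0.0f, 0.0f, -0.7f}, 0.6f, 0.35f);
    std::printf("mail icon anchored at (%.2f, %.2f, %.2f)\n",
                anchor.x, anchor.y, anchor.z);
}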

This is all familiar territory, but what happens when the user gets up and leaves? Some of the apps might remain leashed to the user's virtual vicinity. Others might wait for the user to call them over, rushing towards the user all at once like an avalanche. Since we are unlikely to shake off the rectangular window paradigm of GUI design any time soon, that experience might be nauseating and overwhelming.
The UI objects will need to retain some form of user-defined organization while not embedding themselves in physical walls and becoming inaccessible. Having apps smoothly follow the contours of walls and furniture could be very attractive, but at the same time we need to avoid things like a hot fireplace with buttons to click on it.
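A rough sketch of that placement rule, assuming the scene-reconstruction layer can tag surfaces (all the types below are made up): the shell snaps a panel to the nearest suitable surface, refuses anything flagged as hazardous, and falls back to a body-leashed floating panel when nothing usable is in reach.

// Hypothetical sketch: snapping a UI panel to scanned surfaces while
// refusing hazard-tagged ones (a hot fireplace should never host buttons).
// Surface tagging is assumed to come from the environment scanner or the user.
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct Surface {
    std::string label;      // "wall", "desk", "fireplace", ...
    bool        hazardous;  // flagged by the scanner or by the user
    float       distanceM;  // distance from the user's current position
};

// Pick the nearest non-hazardous surface, if any, to host a UI panel.
std::optional<Surface> pickPlacement(const std::vector<Surface>& surfaces) {
    std::optional<Surface> best;
    for (const auto& s : surfaces) {
        if (s.hazardous) continue;                        // never the fireplace
        if (!best || s.distanceM < best->distanceM) best = s;
    }
    return best;   // empty => fall back to a body-leashed floating panel
}

int main() {
    std::vector<Surface> room = {
        {"fireplace", true,  1.2f},
        {"wall",      false, 2.0f},
        {"desk",      false, 0.8f},
    };
    if (auto spot = pickPlacement(room))
        std::printf("placing panel on: %s\n", spot->label.c_str());
}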

It would be possible to keep control and content separate, just like the keyboard & mouse and the monitor are separate. The control would remain fixed to the user's body while the content is free to float around. This would almost have to be a rule that everybody follows, in-game and out. Users would have a personal space that no content may violate without express permission, and they could feel secure that nothing will attempt to penetrate this virtual shield about their person. - Nothing that could make people instinctively react to virtual threats and put them in real danger, whether sitting, standing or walking.
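That rule is simple enough to live in the compositor or shell rather than in every app. A minimal sketch, with invented names, of what enforcement could look like: content from an app without an explicit grant gets pushed back to the edge of a bubble around the user.

// Hypothetical sketch: a personal-space rule enforced centrally. Content
// without an explicit user grant is clamped to the boundary of a bubble
// around the user's tracked position. All names are invented.
#include <cmath>
#include <cstdio>
#include <string>
#include <unordered_set>

struct Vec3 { float x, y, z; };

class PersonalSpace {
public:
    explicit PersonalSpace(float radiusM) : radiusM_(radiusM) {}

    void grant(const std::string& appId) { granted_.insert(appId); }

    // Returns the position the content is actually allowed to occupy.
    Vec3 enforce(const std::string& appId, Vec3 user, Vec3 requested) const {
        if (granted_.count(appId)) return requested;     // user said yes
        float dx = requested.x - user.x, dy = requested.y - user.y,
              dz = requested.z - user.z;
        float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d >= radiusM_) return requested;             // already outside
        if (d < 1e-4f)                                   // degenerate: on the user
            return {user.x, user.y, user.z + radiusM_};  // push out along +z
        float s = radiusM_ / d;                          // push to the bubble edge
        return {user.x + dx * s, user.y + dy * s, user.z + dz * s};
    }
private:
    float radiusM_;
    std::unordered_set<std::string> granted_;
};

int main() {
    PersonalSpace bubble(0.5f);                          // half-metre bubble
    Vec3 user{0.0f, 0.0f, 0.0f};
    Vec3 pos = bubble.enforce("com.example.ads", user, {0.1f, 0.0f, 0.0f});
    std::printf("content ends up at (%.2f, %.2f, %.2f)\n", pos.x, pos.y, pos.z);
}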
1 REPLY

g4c
Explorer
RE: any surface can be a control.

Maybe a small thimble device could give some haptic feedback to the fingertip, driven by a hydraulic box on the back of a glove.

Keep the hydraulics "tight" for high-frequency response. This could involve metamaterials within the silicone fingertip vessel to constrain the dimensions of expansion. The feed pipes could be high-grade nylon.

Then the user's fingertip(s) could feel buzzes, and even click resistance similar to that found in real buttons.

A small speaker in the module on the back of the glove could provide audio click feedback.
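For what the driver side of that might look like (all hardware interfaces below are stand-ins, not a real API): a "click" could be a short ramp of resistance followed by a sudden release, paired with a tick from the glove speaker.

// Hypothetical sketch of the driver side of the thimble: ramp actuator
// pressure up to emulate button travel, release sharply the way a dome
// switch collapses, and play an audible tick at the moment of release.
#include <chrono>
#include <cstdio>
#include <thread>

struct ThimbleActuator {                  // stands in for the hydraulic box
    void setPressure(float normalised) {  // 0.0 = slack, 1.0 = max stiffness
        std::printf("actuator pressure -> %.2f\n", normalised);
    }
};

struct GloveSpeaker {
    void playTick() { std::printf("speaker: tick\n"); }
};

void hapticClick(ThimbleActuator& act, GloveSpeaker& spk) {
    using namespace std::chrono_literals;
    for (float p = 0.0f; p <= 0.8f; p += 0.2f) {   // rising resistance
        act.setPressure(p);
        std::this_thread::sleep_for(10ms);
    }
    act.setPressure(0.1f);                         // sudden give = the "click"
    spk.playTick();
    std::this_thread::sleep_for(30ms);
    act.setPressure(0.0f);                         // back to slack
}

int main() {
    ThimbleActuator actuator;
    GloveSpeaker speaker;
    hapticClick(actuator, speaker);
}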

"The Nimble Thimble"
Android VR Developer. https://twitter.com/SiliconDroid