Leap Motion V2 Tracking now in Public Developer Beta

geekmaster
Protege

I have been very busy lately, travelling to all the main VR and gaming-related expos and events, plus the extra private events that go with that experience. However, I did receive a pair of Leap Motion devices recently, and I tested them (one at a time) using the new V2 beta SDK and demos:

https://community.leapmotion.com/t/v2-tracking-now-in-public-developer-beta/1202

It works much better than I had expected, based on previous reviews of the V1 SDK in various forums. The new tracking algorithms are impressive and much more useful for some applications, though I suspect they could be more robust with TWO trackers mounted below and above (or to the side of) the hands. I plan to test a pair of these devices running in Virtual Machines (which support USB passthrough), which then use network communications to attempt to fuse their results.
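In case it helps anyone planning something similar, here is a minimal sketch of the host-side fusion step I have in mind. It assumes each VM runs a small sender that reads the palm position from the V2 SDK's Python bindings and forwards it as a JSON UDP packet; the port number, device names, and packet format below are made up for illustration, and a real setup would also need a calibrated transform between the two devices before averaging makes sense.

```python
# Sketch: fuse palm positions streamed from two VMs over UDP.
# Each VM is assumed to send packets like {"device": "lower", "palm": [x, y, z]}
# in a shared coordinate frame (calibration between devices not shown).
import json
import socket

HOST, PORT = "0.0.0.0", 9901   # arbitrary port for this sketch

def fuse(latest):
    """Average the latest palm position reported by each device."""
    positions = list(latest.values())
    return [sum(p[i] for p in positions) / len(positions) for i in range(3)]

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    latest = {}                      # device name -> last reported palm position
    while True:
        packet, _ = sock.recvfrom(4096)
        msg = json.loads(packet.decode("utf-8"))
        latest[msg["device"]] = msg["palm"]
        if len(latest) == 2:         # both trackers have reported at least once
            print("fused palm:", fuse(latest))

if __name__ == "__main__":
    main()
```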

The best example of improvement is in the diags portion of the new SDK, showing good finger tracking. I sent a video showing an area that could use improvement in the skeletal finger tracking portion of the downloadable Orientation demo, but diags works great, IMHO.

In the meantime, those with Leap Motion devices can download and test the new SDK at the Leap Motion website.

cybereality
Grand Champion
I tried the new SDK and I have mixed feelings about it. On one hand, it has improved significantly. On the other hand, it's still not where it needs to be for true VR-level hand tracking.

In the best case (fingers open, palm facing down) it works great. But as soon as you turn your hand around, or occlude the fingers, it starts to fail. Even something simple like grabbing a cube and looking at it was difficult for me.

Maybe with multiple sensors this can be improved. Not sure. It just wasn't what I was hoping for.
AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i | Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV

Gerald
Expert Protege
I agree with Cyber on this - I love the Leap Motion as a VR input device, but I really do not see the hand tracking being the solution. I'd rather have stable, non-tracked hand emulation with a STEM than a glitchy hand via the LMC.

BUT ... the new SDK removes other annoyances we had to program around in the past (like the switching of finger indexes), and the tracking itself has improved too. If you just ignore the hand itself, you can do really cool stuff with the Leap, and I hope developers will do just that.
The interaction options in 3D space make the Leap the coolest input device for seated VR I have seen. You still need to combine it with a mouse or a keyboard to get "buttons" or a quick way to rotate, but I see no reason not to combine them.
check out my Mobile VR Jam 2015 title Guns N' Dragons

Miffyli
Honored Guest
Me three on "V2 is a step forward, but still needs some work". It has addressed the biggest problems, in my opinion, like having to re-connect finger indexes yourself whenever it lost track of the hand for a frame or two at random. My older test code in Python won't even work on this new system, mainly because of that.
Looking forward to the results of your tests @geekmaster. Even with two of these, I don't think it'd be too pricey if it offers robust finger tracking with low latency, high accuracy, and a bigger tracking volume. It still feels like the best bet for Oculus if it's still going to be designed for seated use.
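For anyone who never had to deal with that, here is roughly the kind of glue the old behaviour forced into my test code; the function names and data layout are made up for illustration. Whenever the hand came back with fresh finger IDs, you matched the new tips to the previous frame's tips by nearest distance:

```python
# Sketch of re-associating finger IDs after the tracker drops the hand for a
# frame and hands back fingers with fresh IDs. Greedy nearest-neighbour match
# of new tip positions to the previous frame's tips; V2 is supposed to keep
# IDs stable, so this should no longer be necessary.
import math

def distance(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def reassociate(prev_tips, new_tips):
    """Map each new finger ID to the old ID whose tip it is closest to.

    prev_tips / new_tips: dict of finger_id -> (x, y, z) tip position.
    Returns a dict of new_id -> old_id.
    """
    mapping = {}
    unused_old = set(prev_tips)
    for new_id, tip in new_tips.items():
        if not unused_old:
            break
        old_id = min(unused_old, key=lambda oid: distance(prev_tips[oid], tip))
        mapping[new_id] = old_id
        unused_old.discard(old_id)
    return mapping
```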

hellary
Protege
Yeah, I just picked up a Leap Motion to try out the new software (I got it for less than half price on eBay on a whim), and it still has a lot of the old problems. My desk is in front of some windows (north-facing, though), which means it just doesn't work: in the visualiser my hand kept changing colour, which means a new ID (a newly detected hand), and that was one of the things I found really frustrating back when I was developing with a Leap. I'll give it a proper test when it's dark, though. Should really get some blinds put up!

racerx3
Honored Guest
I'll give V2 a try, but I'm not holding my breath. My belief is that the hardware is simply limited; until it gets an overhaul, it'll remain an interesting (and frustrating) novelty.

ucap
Honored Guest
"racerx3" wrote:
My belief is that the hardware is simply limited


This is a bit of a misconception. The limitation is not the hardware - which is just two cameras in a small form factor - but the feature extraction concept used. This thread
https://community.leapmotion.com/t/leap-patent-application-and-other-references/717
has a link to United States Patent Application 20130182079, and if you look e.g. at Fig. 4c, 5, 12 on
https://patentimages.storage.googleapis.com/pdfs/US20130182079.pdf
you will find that - at least originally - the idea of the Leap is to:

a) take monochromatic stereo images of scene
b) transfer images pixel-interleaved to host PC
c) filter image: enhance contrast, compensate for lens distortion
d) detect any silhouette edges for each camera, per scanline
e) reconstruct elliptical cross-sections from each stereo pair of silhouette edges, in each scanline
f) match ellipses across scanlines to identify pointables
g) discriminate fingers from tools from palm

This approach has significant advantages for certain interesting use cases, but also limitations. A single camera pair cannot resolve self-occlusion of different parts of the hand, human fingers are not elliptical in cross-section even if you slice them perpendicularly (not to mention the palm itself), and parts of the hand might align with the scanlines. Other Leap patents concern issues of background elimination, which has in practice turned out to be one of the important robustness concerns.
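To make the cross-section step (e) concrete, here is an idealized two-camera sketch (not Leap's actual code): given the silhouette edges of a finger on one scanline of a rectified stereo pair, triangulate the two edges and approximate the cross-section as a circle. The real ellipse reconstruction from tangent lines, as in the patent figures, has more unknowns; this only shows the geometry involved.

```python
# Idealized stereo reconstruction of a finger cross-section on one scanline.
# Assumes a rectified stereo pair: f = focal length in pixels, cx = principal
# point column, baseline in mm. Not Leap's actual algorithm.

def triangulate(x_left, x_right, f, baseline, cx):
    """3D position (X, Z) in the left camera frame of a point seen at
    column x_left in the left image and x_right in the right image."""
    disparity = x_left - x_right
    z = f * baseline / disparity
    x = (x_left - cx) * z / f
    return x, z

def cross_section(left_edges, right_edges, f, baseline, cx):
    """left_edges / right_edges: (left_col, right_col) of the silhouette
    in the left and right image on the same scanline.
    Returns (center_x, center_z, radius) of a circular approximation."""
    xa, za = triangulate(left_edges[0], right_edges[0], f, baseline, cx)
    xb, zb = triangulate(left_edges[1], right_edges[1], f, baseline, cx)
    radius = ((xa - xb) ** 2 + (za - zb) ** 2) ** 0.5 / 2.0
    return (xa + xb) / 2.0, (za + zb) / 2.0, radius

# Example: a ~16 mm wide finger about 200 mm in front of the cameras.
print(cross_section((300, 340), (200, 240), f=500.0, baseline=40.0, cx=320.0))
```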

The hardware looks like a sensible compromise with respect to cost, USB bus bandwidth, and the number of data samples. Three cameras in a line would provide 6 data points to reconstruct 5 unknowns (in the case of ellipses); 3+ cameras arranged in a plane might reduce scanline alignment issues; better illumination might be feasible.

But to a first approximation, the hardware is well designed to match the requirements of the key algorithms used. To the extent the hardware is limited, it is because the tracking approach itself is limited. If you cannot work with the use cases it supports, you will have to use different - and likely heavier and more costly - hardware, and a different algorithm.

geekmaster
Protege
"ucap" wrote:
...
https://community.leapmotion.com/t/leap-patent-application-and-other-references/717
has a link to United States Patent Application 20130182079, and if you look e.g. at Fig. 4c, 5, 12 on
https://patentimages.storage.googleapis.com/pdfs/US20130182079.pdf
...

I find this patent-fetish stuff to be horribly disturbing. How are we going to explain to future generations of mankind how our patent system killed the giants upon whose shoulders they must stand to survive?

"If I have seen further it is by standing on the shoulders of giants." -- Sir Isaac Newton

cubytes
Protege
"geekmaster" wrote:
It works much better than I had expected, based on previous reviews of the V1 SDK in various forums. The new tracking algorithms are impressive and much more useful for some applications, though I suspect they could be more robust with TWO trackers mounted below and above (or to the side of) the hands. I plan to test a pair of these devices running in Virtual Machines (which support USB passthrough), which then use network communications to attempt to fuse their results.


Sweet, let me know how this works 🙂 Networked multi-angle optical tracking is probably the best bet aside from IMU tracking.

How about one under the hands (where you would normally put the LM) and the other attached to the front of the DK1?


"If I have seen further it is by standing on the shoulders of giants." -- Sir Isaac Newton


tru that

cubytes
Protege
Hey GM, you wouldn't happen to have a Project Tango device laying around to experiment with, would ya? I'm wondering if Google's 3D mapping tech, or just the optics of Project Tango devices themselves, could be used to improve positional tracking and/or optical tracking in some way?