
Linux 'putting your money where your mouth is' thread...

jherico
Level 5
So, the Linux 'nagging thread' has kind of dragged on for a while. Oculus has pledged Linux support, but given no ETA on when we'll see positional tracking supported for DK2. It could be this week. It could be months.

However, I think that sitting around and waiting for Oculus to solve the issue kind of goes against the spirit of Linux anyway. If you want something done, you should be willing to do it yourself. To that end, I'm working on porting the current 0.4.2 SDK to Linux. I've managed to reverse engineer some of the code required to interact with the LEDs so that they can be turned on. The camera is natively supported. Right now the primary missing component to getting this done is the software for calculating a head pose based on an image from the camera.

Unfortunately this kind of math is outside my field of expertise. So I'm putting a call out to the community to see if interested parties might be able to assist with this.

If you want a Linux SDK, don't want to wait for Oculus to get around to it, and have the skills, here's a video of the tracking LEDs captured from the camera on Linux:




You can download the original file from here: https://s3.amazonaws.com/Oculus/oculus_rift_leds.webm

If you can write C or C++ code that will take that video, or an image from that video, and turn it into a head pose, let me know and I'll work with it to produce a viable Linux SDK with positional tracking.
Brad Davis - Developer for High Fidelity - Co-author of Oculus Rift in Action
44 REPLIES

okreylos
Level 2
"vrcoder3d" wrote:
Do we think this is a good idea? If so, does anyone have experience remastering LiveCDs yet? And what open source IDE might offer newcomers to VR (and potentially Linux) the most pleasant first-time experience?


I've made a (very) experimental live respin of Fedora 20, including Nvidia's vendor-supplied binary GPU driver and our entire VR software stack. It boots directly from a USB stick, but it obviously can't run inside a VM due to lack of GPU access. The idea was to make it easier to test-drive our software, but it could serve as a starting point.

lhl
Level 2
I've been traveling sans DK2/Linux box, but I started putting together some docs on jherico's project, just to start to gather up some of the far-flung stuff out there: https://github.com/jherico/OculusRiftHacking/wiki

pH5
Level 2
"pH5" wrote:
00002000  00 89 97 e2 7a 00 01 00  09 00 fe 57 c1 af 00 00
00002010  0c 00 63 1a dc 83 e7 ca  85 40 01 00 0c 00 f4 3c
00002020  f4 46 b8 cb 85 40 02 00  0c 00 42 d2 37 0a 31 bf
00002030  77 40 03 00 0c 00 4c 32  ee 7a fe ac 6c 40 04 00
00002040  0c 00 f8 1a 08 80 b9 3d  e0 bf 05 00 0c 00 f4 1b
00002050  eb e1 e0 f7 d4 3f 06 00  0c 00 fe a7 2f 24 67 4b
00002060  3b 3f 07 00 0c 00 c1 6f  08 a3 8f 14 53 3f 08 00
00002070  0c 00 8b e6 c0 de 56 2d  c0 bf ff ff ff ff ff ff

Let's reorder that a bit:

2000: 00 89 97 e2 7a 00 01 00 09 00
200a: fe 57
200c: c1 af
200e: 00 00 0c 00 63 1a dc 83 e7 ca 85 40
201a: 01 00 0c 00 f4 3c f4 46 b8 cb 85 40
2026: 02 00 0c 00 42 d2 37 0a 31 bf 77 40
2032: 03 00 0c 00 4c 32 ee 7a fe ac 6c 40
203e: 04 00 0c 00 f8 1a 08 80 b9 3d e0 bf
204a: 05 00 0c 00 f4 1b eb e1 e0 f7 d4 3f
2056: 06 00 0c 00 fe a7 2f 24 67 4b 3b 3f
2062: 07 00 0c 00 c1 6f 08 a3 8f 14 53 3f
206e: 08 00 0c 00 8b e6 c0 de 56 2d c0 bf

Interpreting the last 8 bytes of each of the nine 12-byte blocks as a little-endian double yields:

697.363044, 697.464979, 379.949473, 229.406064, -0.507535, 0.327629, 0.000416, 0.001165, -0.126384

Those look suspiciously like the focal lengths and distortion center point that blackguest measured.
I guess the remaining values have to be the radial and tangential distortion coefficients in some order.
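The decoding above can be reproduced in a few lines, assuming each 12-byte block is an index, a payload length (always 0x000c here), and a little-endian IEEE 754 double:

```python
import struct

# Sketch of the record layout deduced above: <index: u16> <length: u16 = 0x000c>
# <value: little-endian double>. Bytes are copied from the reordered dump.
dump = bytes.fromhex(
    "00000c00631adc83e7ca8540"   # 0x200e
    "01000c00f43cf446b8cb8540"   # 0x201a
    "02000c0042d2370a31bf7740"   # 0x2026
    "03000c004c32ee7afeac6c40"   # 0x2032
    "04000c00f81a0880b93de0bf"   # 0x203e
    "05000c00f41bebe1e0f7d43f"   # 0x204a
    "06000c00fea72f24674b3b3f"   # 0x2056
    "07000c00c16f08a38f14533f"   # 0x2062
    "08000c008be6c0de562dc0bf"   # 0x206e
)
values = [struct.unpack_from("<HHd", dump, off)[2]
          for off in range(0, len(dump), 12)]
print([round(v, 6) for v in values])
```

The `<` prefix in the format string forces little-endian byte order with no padding, so each record unpacks at exactly 12 bytes.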

Maybe I'm now grasping at straws, but the 2-byte blocks at 200a and 200c interpreted as little-endian 16-bit unsigned integers are:

22526, 44993

Could those be the intrinsic parameters' principal point in units of centipixels?
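That guess can be checked in a couple of lines; the centipixel interpretation is pure speculation:

```python
import struct

# The two 16-bit words at 0x200a/0x200c, read as little-endian unsigned
# integers and divided by 100 ("centipixels" is a guess, not a known unit).
a, b = struct.unpack("<HH", bytes.fromhex("fe57c1af"))
print(a, b, a / 100, b / 100)
```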

lhl
Level 2
Just as an update for those that might have missed it/not been following along, okreylos has posted the work he's done to decode the LED constellation (identified by 10-bit blinking pattern) here: http://doc-ok.org/?p=1124

At this point it is possible to:
* Pull the LED/IMU positions (3D model of the HMD)
* Talk to the HMD, initialize LEDs, sync with the camera
* Identify and relatively stably track LEDs w/ blob and blink tracking
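The blink-based identification in the last bullet can be sketched roughly as follows; the threshold, MSB-first bit order, and one-bit-per-frame sync are my assumptions, not the actual protocol:

```python
# Hypothetical sketch of blink-pattern ID recovery for one tracked blob,
# assuming the camera is frame-synced to the blink clock (one bit per frame)
# and the pattern repeats every 10 frames.
def decode_led_id(brightness, threshold=128):
    """brightness: >= 10 per-frame brightness samples for one blob."""
    bits = [1 if s > threshold else 0 for s in brightness[:10]]
    led_id = 0
    for b in bits:                     # MSB first (bit order is a guess)
        led_id = (led_id << 1) | b
    return led_id

# A blob that alternates bright/dim decodes to 0b1010101010:
print(decode_led_id([200, 40, 210, 35, 220, 30, 215, 45, 205, 50]))  # prints 682
```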

I've dropped a line to okreylos to see if he has a repo/code dump for his new work, but it seems like w/ the basics tackled, pose estimation is next (and then sensor fusion).

Since jherico's OculusRiftHacking repo hasn't had updates, I forked it to keep track of the existing code/tools: https://github.com/lhl/OculusRiftHacking

I'm happy to take pull requests if there's any extant code I've missed (I just pulled the projects into the root, but I may do some shuffling if it becomes unwieldy, say splitting rift vs. pose or other reference project code), or just let me know of something that's out there and I'll add it.

I'm documenting information I find in the wiki: https://github.com/lhl/OculusRiftHacking/wiki
Anyone with a github account can add/edit this documentation. Unlike OculusVR, I won't be deleting the wiki without notice (grumble grumble)

Also, to kick off some pose estimation discussion, I found an interesting paper by Fernando Herranz et al., "Camera Pose Estimation using Particle Filters", which describes a particle filtering algorithm for pose estimation. The positional/angular error seems large, but it was measured with a 640x480 @ 30 Hz camera and no sensor fusion. I assume the optical tracking can be relatively coarse, used primarily for drift correction against the much more accurate IMU data.
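To make the discussion concrete, here's a toy sketch of the reprojection objective that any such pose estimator (particle filter or otherwise) minimizes. The focal lengths are the values decoded from the HMD earlier in the thread; the distortion center, LED model, and depth-only "pose" are made up for illustration:

```python
import math

# Decoded focal lengths from the firmware dump; center is assumed to be near
# the middle of a 752x480 frame (not the decoded distortion center).
FX, FY = 697.363044, 697.464979
CX, CY = 376.0, 240.0

def project(point, tz):
    """Pinhole projection of a camera-space 3D point pushed back by tz meters."""
    x, y, z = point
    z += tz
    return (FX * x / z + CX, FY * y / z + CY)

def reprojection_error(model, observed, tz):
    """RMS pixel distance between projected model LEDs and observed blobs."""
    err = 0.0
    for p, (u, v) in zip(model, observed):
        pu, pv = project(p, tz)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(err / len(model))

# Two fake LEDs 4 cm apart, "observed" at a true depth of 0.5 m:
model = [(-0.02, 0.0, 0.0), (0.02, 0.0, 0.0)]
observed = [project(p, 0.5) for p in model]
# Scanning depth hypotheses: the error is zero at the true pose.
best = min((reprojection_error(model, observed, tz), tz)
           for tz in (0.4, 0.5, 0.6))
print(best[1])  # prints 0.5
```

A real estimator would search all six degrees of freedom (and fold in the distortion model) rather than scanning depth, but the objective is the same shape.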

I'm not a hardcore CV guy at all, so I'm looking forward to hearing some informed feedback here.

lhl
Level 2
BTW, the PDF link (or maybe # of links) was being flagged as spam, so here's the link to the paper on "Camera Pose Estimation using Particle Filters": http://www.es.ewi.tudelft.nl/papers/2011-Herranz-pose-estimation.pdf

jimisdead
Level 2
I'm a bit disappointed with the lack of linux support so far, but as the hardware is still in flux I'm not surprised.

As long as a community version doesn't prompt Oculus to stop supporting their Linux drivers, I really like the idea, as I can't hack around with the low-level stuff at the moment.

I'm a computer science researcher working in medical imaging and machine vision. Positional tracking of the LEDs is a far simpler problem than several others I've tackled recently, so skills won't be a problem... but time is.

I'd be far more inclined to dedicate some of my time to the project if there was a clearer picture of what the goals are and what the progress is.

Is someone heading the project who might be able to set up a quick site hosting some information, a repository with the latest bits of code, a list of what's being worked on by whom, and what's needed?

jherico
Level 5
There is a Github repository for hosting tools and tidbits of code related to the project here: https://github.com/jherico/OculusRiftHacking

Oliver Kreylos has written up an excellent summary of his and others' findings so far here:

Part 1:
http://doc-ok.org/?p=1095

Part 2:
http://doc-ok.org/?p=1124
Brad Davis - Developer for High Fidelity - Co-author of Oculus Rift in Action

kaetemi
Level 2
I'd back a Kickstarter for this.

matus
Level 2
Just send some money to Oliver; he needs a new CPU.

lhl
Level 2
"jimisdead" wrote:

Is someone heading the project who might be able to set up a quick site hosting some information, a repository with the latest bits of code, a list of what's being worked on by whom, and what's needed?


As jherico mentioned, he's hosting a project right now as a meta-repository w/ available code. It looks like okreylos/doc_ok/Oliver is chugging along - he just tweeted a pic w/ pose estimation a couple hours ago.

I've started a doc on the wiki w/ a roadmap: https://github.com/jherico/OculusRiftHacking/wiki/Roadmap

Looks like sensor fusion is the only missing piece once Oliver gets around to packaging his work. Oculus has published a fair amount on how they do things, and there's lots of existing work (see the Sensor Fusion page on the wiki for the results of a quick fishing expedition).
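As a strawman for the fusion discussion, a one-axis complementary filter shows the basic shape: integrate the fast-but-drifting gyro, and pull toward the slow-but-absolute optical fix when one arrives. The gain, rates, and bias below are made up, not Oculus's actual parameters:

```python
# Minimal 1-DOF complementary-filter sketch: gyro integration corrected by
# occasional optical fixes (None = no fix this sample).
def fuse(gyro_rates, optical_angles, dt=0.001, alpha=0.98):
    angle = 0.0
    for rate, fix in zip(gyro_rates, optical_angles):
        angle += rate * dt              # gyro integration: fast but drifts
        if fix is not None:             # optical fix: slow but absolute
            angle = alpha * angle + (1 - alpha) * fix
    return angle

# A gyro with a 0.1 rad/s bias (true angle 0) drifts to ~1.0 rad over 10 s,
# but with an optical fix every 10th frame the error stays bounded.
biased = [0.1] * 10000
fixes = [0.0 if i % 10 == 9 else None for i in range(10000)]
print(round(fuse(biased, [None] * 10000), 3), round(fuse(biased, fixes), 3))
```

A real implementation would work in quaternions with a proper Kalman or Mahony/Madgwick-style filter, but the drift-correction role of the optical data is the same.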

After the components are done, there's the matter of wrapping all the work up in a proper statically compiled daemon and packaging it. I'm assuming that since chunks of the code are GPLv2, that's what the daemon will end up being, but that shouldn't be a problem for people using it, since the way I see it, the daemon will just run as a system service and emit or broadcast tracking data...