I am doing a project where I would like to mount a depth sensor (e.g. a 3D depth camera, sonar, or LiDAR) on the HMD to detect obstacles in real time and render them as point clouds or 3D meshes on the display, so the wearer can navigate more safely. It is important that objects are scanned and represented as meshes dynamically in real time, not statically scanned beforehand.
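For a sense of the core math involved: turning a depth image into a point cloud is just back-projecting each pixel through the pinhole camera model using the sensor's intrinsics. In Unity you would do this in C# (and the meshing step, e.g. via a voxel grid or marching cubes, comes after), but here is a minimal Python sketch of the idea. The function name and the intrinsics `fx, fy, cx, cy` are my own placeholders, not from any particular sensor SDK:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-space 3D points
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    fx, fy are focal lengths in pixels; cx, cy the principal point."""
    h, w = depth.shape
    # pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # drop invalid pixels (most depth sensors report 0 for "no reading")
    return points[points[:, 2] > 0]

# toy example: 2x2 depth map, unit focal length, principal point at (0, 0)
pts = depth_to_point_cloud(np.array([[1.0, 2.0], [0.0, 1.0]]), 1, 1, 0, 0)
```

The same loop runs per frame for a dynamic scan: grab a depth frame, back-project it, transform the points by the headset's current pose so they land in world space, then feed them to whatever rendering or meshing step you choose.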
Bear with me, as I am clueless about how to start: I have never used sensors, written an algorithm for the Oculus Quest, used Unity, or anything of the sort. For example, would I need an Arduino/Raspberry Pi to handle the depth data, or can it somehow be handled by the Quest 2 itself using Unity?
Additionally, any recommendations for a cheap depth sensor under €100? I would prefer one with a FOV of 90 degrees or better.
Any assistance, material, links, or videos to help me get started, or to confirm this is possible, would be very much appreciated.
The Quest 2 has something to this effect built in; I don't think you will need an external depth sensor.
You just need to access the APIs for the on-board sensing.