volumetric display and anti-sde technology idea.

hoppingbunny123 Posts: 391
Nexus 6

you put a screen over every pixel and shape it to take the light from the pixel and fill in the visible gaps between the pixels that cause sde (the screen-door effect).

you do this using four things.

- measuring in tenths, you have a tube running from 1 to 2. at 1 is the pixel, at 2 is the glass covering the whole end of the tube, and in-between 1 and 2 the tube is marked off in tenths.

between the pixel and the 1 end of the tube sit two lenses.

- the lens math for the double convex and double concave lenses is: the light leaving a lens has the same shape as the light entering it, but inverted.

- a double convex lens to magnify the light from the pixel, which removes the light scatter.

- a double concave lens, to take the focused, collected light and make it all one color again, not focused.

now the screen is magnified, which acts like the vr lens but without the Fresnel lens.
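
to put rough numbers on the two-lens idea, here's a little python sketch of the standard thin-lens relation; the focal lengths and distances are made up for illustration, not measured from any real part.

```python
# thin-lens sketch of the double convex stage described above.
# all numbers are invented for illustration.

def image_distance(f, d_obj):
    """Thin-lens equation 1/f = 1/d_obj + 1/d_img, solved for d_img."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def magnification(f, d_obj):
    """Lateral magnification m = -d_img / d_obj (negative = inverted image)."""
    return -image_distance(f, d_obj) / d_obj

# a convex lens (f > 0) with the pixel just outside its focal length
# gives a magnified, inverted image: "the same shape, but inverted".
m = magnification(10.0, 15.0)  # 10 mm lens, pixel 15 mm away
```

with those made-up numbers the image comes out inverted and twice as large, which is the "magnify, but inverted" behavior the lens pair relies on.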

- send the light from the pixel to the double convex lens,
- then from the double convex lens send the light to the double concave lens,
- then from the double concave lens send the light to the tube.
the tube's walls are painted white to reflect the light from the pixel.
the top and bottom of the tube are left unpainted so the pixel light can enter and leave the tube.

the tube is larger at the 2 mark than it is at the 1 mark; at 2, the pixels on the display nest together, removing all sde.

to hold the tube in place, you have a mesh at at least two places between 1 and 2, and because the tube is different sizes along its length, it slots into the mesh.

at the bottom of the tube, the lenses can be stored in a separate tube; that container is fixed to the mesh.

volumetric display

you take the same principle used for the pixel and apply it to the entire display screen: all the pixels are covered by the one double convex lens and one double concave lens, and the tube is where the new idea is.

the tube's white coating can now switch between black and white, and the tenths on the tube are joined by hundredth marks between each tenth.

in practice, using hundredths might be too much, but it's good for explaining the idea.

starting from 1, you have the first hundredth mark. you light that section of the tube white around the tube's entire perimeter at that first hundredth mark.

the rest of the tube is black.

then keep that hundredth at 1 lit up and go to the next hundredth and light that hundredth up.

the rest of the tube is still black.

repeat this for every single hundredth in order from 1 to 2 to light up the entire tube.

now none of the tube is black.
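
the lighting sequence above can be sketched in a few lines of python; the 101 marks for positions 1.00 through 2.00 are my own indexing, not anything from a real part.

```python
# light the tube's hundredth marks one at a time, keeping earlier
# marks lit, until the whole tube from 1 to 2 is white.
# index 0 is the mark at 1.00, index 100 is the mark at 2.00.

NUM_MARKS = 101
lit = [False] * NUM_MARKS

def light_next(segments):
    """Light the first still-dark mark; return its index, or None when done."""
    for i, on in enumerate(segments):
        if not on:
            segments[i] = True
            return i
    return None

# sweep from the 1 end to the 2 end, one hundredth per step
while light_next(lit) is not None:
    pass
```

after the sweep, none of the tube is black, matching the description above.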

ok, now that you can light up the tube, the next step is to add panels of glass inside it. each glass panel captures the light from the display's pixels at that spot in the tube.

the entire area of the glass panel is lit up at that hundredth marker on the tube, so the lit-up panel shows the light from the display.

for example, at 1 there is a glass panel: the light at the first hundredth lights up the panel, and the picture is seen at that point in the tube.

the person views the tube from the 2 end, so a panel at 1 makes the picture appear deep inside the tube.

the glass panel is lit up two ways: first by the light at that hundredth entering the glass perimeter, and second by the light in the tube between the panel and the double concave lens.

so if you want no light on the first glass panel at 1 in the tube, you turn off the lights between 1 and that panel (including at the panel), and turn on the light for every hundredth after it, up to and including the next glass panel.

then coat the side of the glass panel facing the lit hundredths (the 1 side) with an anti-reflective coating, so an unlit panel isn't seen reflecting light when it shouldn't.
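
that on/off rule can be written down as a tiny function; the hundredth indexing and the panel positions here are invented for illustration.

```python
# keep the panel at index `dark_panel` dark: every hundredth from the
# 1 end up to and including that panel is off, and every hundredth
# after it, up to and including the next panel, is on.

def segments_to_light(dark_panel, next_panel, num_marks=101):
    """Return True/False (lit/dark) for each hundredth mark."""
    return [dark_panel < i <= next_panel for i in range(num_marks)]

# hypothetical layout: a panel at mark 0 (position 1) stays dark,
# the next panel sits at mark 50
state = segments_to_light(dark_panel=0, next_panel=50)
```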

then you have the display show a 3d image at a high frame rate, with each frame matched to a section of the tube. the frame rate is high enough that the person sees the tube as one image, but in volumetric 3d, with no sde.
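
"high enough frame rate" can be put in rough numbers: if each glass panel gets its own sub-frame, the flat display must run at panels times volume rate. the panel count and 60 Hz figure below are just illustrative.

```python
# persistence-of-vision arithmetic for the volumetric sweep:
# sub-frames per second = panels per volume x volumes per second.

def display_rate(num_panels, volume_hz):
    """Sub-frames per second the flat display must deliver."""
    return num_panels * volume_hz

rate = display_rate(num_panels=10, volume_hz=60)  # 600 sub-frames per second
```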


  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018

    i got around to building a prototype yesterday and today. today i got it going for 2 distances; now to finish my crafting and test for 3 distances.

    heres the video showing my results so far for 2 distances;

    video removed

    here's what i did; it's a little different than what i described in my first post.

    - i used clear see-through cd case covers as my screens.
    - i used black crafting paper for the tubes.
    - and i used a lot of duct tape.

    the tube is given some light. from a phone it's not so good, since the phone doesn't throw light like a flashlight, so you need to hold the phone right next to the cd case cover on the entrance side of the tube.

    the tube is a funnel: the big side has a cd case cover and is the entrance for light, and the exit side of the funnel has a cd case cover too and is the small end of the funnel.

    i haven't tested it yet, but the theory is: if the second funnel, on the other side of the cd case at the exit end of the first funnel, has its large end made to fit the exit end of the first funnel, and i make the second funnel's exit end the smallest end of the two funnels, then the exit end of the second funnel will give me the third distance.

    how this is supposed to work for holograms, in theory: the light moves in one direction, and as it moves it's shown on the cd case covers at the three distances, starting with the farthest distance, then the second farthest distance covers the farthest, then the nearest distance covers the second farthest.

    this way you can loop it to show the three 3d distances, letting each distance paint a picture for that 3d distance.

    seeing as how the light comes from the distance closest to the person, i think you need a see-through oled screen to send light to the three distances, or some sort of mirrors to send the light from a cabinet out of sight: send the light in, and as the light comes out it's angled at the screen exit, like a prism reflecting light.

    and that's an actual, working real-life hologram technology! take it for free, no royalties wanted. when i get the rest of the crafting done i will post another youtube video showing the third distance.

  • hoppingbunny123 Posts: 391
    Nexus 6
    ok, i tried for a while but kept running into a problem. i figured out how to fix it, but first i'll say what i'm going to do before i craft it up.

    the cone pushes light out from the light source, so the first layer has the larger end of the cone and the second layer has the smaller end of the cone funnel.

    you shine a light on the first layer and the cone sends the light pushed forward onto the second layer screen first.

    the problem comes when the third layer and second cone are introduced.

    if the second cone has any different slope from the first cone's slope, it introduces irregularities; you need to continue the slope of the first cone in the second cone.

    then, if you use a funnel, the light from the first layer gets pushed back towards the center; the funnel works inversely.
    so you should only have the slope on the first half of the cone funnel, for both funnels.

    the second half has to have no slope, so the light pushed forward disappears layer by layer as you move the light source left to right.

    i will add a picture. then i will start crafting after some coffee. after its all crafted up i will add a new youtube video showing the results.

  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    my youtube video showing my final hologram prototype;

    and some pictures added below.

    what i learned: the funnel needs to have only one side at an angle, for both funnels; both funnels need to share the same angle; the other side of both funnels needs to be straight, not at an angle; and both funnels' straight sides need to line up.

    besides that, it didn't work so well until i learned a third step: the middle reflective cd case sits flat on the funnel, but the top and bottom cd cases need to be at angles on their funnels, angles that stretch away from the middle cd case on the angled side of the funnel.

    then i need a large enough angle for the top and bottom cd cases so the images don't blend together, so each can be seen clearly with no sight of the other two reflections.

    right now the third cd case in the far back only shows an image starting at the middle of the cd case, but it's a proof of concept so i won't worry; i think with more angle tinkering it could show from the point where the funnel touches the cd case.

    and thats how you make a hologram.
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    I looked at the farthest cd case, saw the dots reflecting on it, and measured the size of the second funnel exit; it shouldn't be big enough to fit two dots.

    So shrink the hole of the second funnel exit, by making the two funnels angle steeper.

    I measured and found the first funnel entrance has an angled side. Then the second funnel exit should, on the angled side, be directly over the first funnel entrance on the straight side; that makes a right-angled triangle.

    That leaves the question of how long the funnel should be so that, split in two to make the two funnels, the second funnel's exit can only barely fit one dot from the pen flashlight.
    The answer to that will be trial and error.

    So back to crafting to see it. Then i can post my results.


    After crafting then testing it, it behaved as i said it would. Now you see the furthest distance from side to side.

    You need to hold the light a good distance from the screen, and look into the funnel from the middle of the first funnel, a bit farther back than the light shining into the first funnel.

    I made a video and will upload it in a few hours.
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    here's the new video showing the new model, where at the third, farthest distance the light dot goes from one edge of the funnel to the other.

    i added the green lines to the picture to illustrate the design focus in this redesign. and it worked.
  • hoppingbunny123 Posts: 391
    Nexus 6
    I watched a youtube video about both the avegant augmented reality and the magic leap. that got me to watch another youtube video about waveguide technology for vr and augmented reality.

    it occurred to me that the way they bounce light off of glass is how I make the funnel carry the light. so I thought I would try to make a version of my hologram craft that's not a funnel and is see-through, like augmented reality. so I did, and here it is;

    I just used the cd cases for my funnel craft and put another layer on top of that and held it in place then put pictures and light in and it worked like the funnel design.

    I thought about how the display light would travel so it lit up the three layers, and it occurred to me to have three displays, one for each angle, and turn the displays on and off fast to shine light on each angle at its brightest. you could then tilt the panels to fit three displays side by side. it's theory, I haven't tried it yet (say with three pen flashlights), but it makes sense to me that it's doable.
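
the three-display on/off idea is basically round-robin time multiplexing; here's a sketch of just the schedule, with made-up display names (this isn't built, it only shows the cycling).

```python
# strobe three displays, one per layer angle, in a repeating cycle
# so each layer gets an equal time slice.
import itertools

def strobe_order(displays, slices):
    """Return the first `slices` entries of a round-robin strobe schedule."""
    cycle = itertools.cycle(displays)
    return [next(cycle) for _ in range(slices)]

order = strobe_order(["near", "middle", "far"], 6)
```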

    I found out that the magic leap uses this type of technology, but is stuck at only two layers. well, with my technology, which is free, they could have 3 layers, maybe more.

    I think that's all the crafting I can do now its time for oculus or magic leap to take the tech and make something good for vr or ar or both, please. :)
  • Mradr Posts: 2,898 Valuable Player
    edited November 2018
    The problem with light wave is that you need to take your design and make about 100 different viewpoints for your depth of field. One of the reasons Oculus' approach is better is that it's variable: you are just moving the screen instead, which gives you fine-grained control. Granted, they're doing it in VR where they have more control - but if you could do it for AR - then you broke something that close to a billion dollars couldn't xD
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    I think its for headsets, not a still tv on the wall, but if you kept still and looked at the tv from the sweet spot it would work for a tv on the wall.

    I made this tech for headsets like the oculus rift, so you're always looking at the sweet spot: when you move your head, the display stays in the same place on your face.

    for how the picture is put in there I think it would work like this;

    - the monitors face the eyes
    - the reflective surface the eyes see faces the eyes
    - the reflection shines the picture from the displays onto the reflective surface, which faces away from the eyes.

    the three distance panels are doubled: one set of three panels per eye. the flat side of the funnel is near the nose.

    so it's cheap, and you could make it; you just have to figure out how to sync the monitors to show a single distance, or three distances at once, like a hologram. you could use off-the-shelf stuff too, you just have to know how to build the sw to run it and fabricate a working model.

    upon testing, reflecting the video source onto a reflective surface (a cd case) and then shining that surface onto the three layers, what I see is junk; you need the original video source shining on the three layers directly. or you could try something like a plane mirror, which reflects the light straight, as the reflective surface, then shine that surface onto the three layers.
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    i was watching some youtube videos and saw some about holograms and saw people were making their own holograms;

    so i thought why not try this with my hologram craft i made? so i did and here it is;

    same technology as the hologram technology in my previous posts, just made brand new using old cd cases, a clear glue gun, a broken clothes pin, and my phone with its finger ring that i use to help not drop my phone (its precious 2 me :)).

    i got it going for one screen first, the bottom screen, but it looked kind of drab. the second screen looked a bit better, so i tinkered with my design and came up with the tilted-phone idea to get three levels of video.

    i ignore the top and bottom layers and only look at the middle layer. i can see through the picture better if i put the picture up to my eyes. but i find i like to look at it about half a foot away from my face.
    i added a light source to help with the picture quality.

    i find it's really cool to watch the videos with this craft down in my lap; i don't know why, but it looks better and seems more like a hologram to my eyes. it's a bit awkward though, it seems like i'm invading the singer's personal space, so i don't do that much.

    i could use cleaner glass; my middle picture shows the cd case at the top of the picture if i look hard, so the middle cd case needs longer plastic on the bottom layer so the picture's pretty in the middle layer.

    it sure is neat. its augmented reality man!

    edit: if you try to craft one, the calibration I used is to shine a flashlight at the three layers like I show in my previous video; the far left and right dots touch the edge of the cd case, the middle light is centered, and all three lights are on the same horizontal line.


    I think if the bottom layer were seen to cut the middle layer in half horizontally, the top half of the middle layer would look holographic; but the bottom half of the middle layer needs the bottom layer to be completely flat, and that means i would have to cut the cd case for the bottom layer.

  • hoppingbunny123 Posts: 391
    Nexus 6
    I tried to get half of the screen in the middle layer covered by half of the screen on the bottom layer, to increase the hologram look in the middle layer, but it looked really ugly. it did work for some dark scenes, but not noticeably enough to outweigh the ugly line in the middle of the screen.

    so I had two options to make the middle layer look ok: either go for a less predominant visible line, towards the bottom of the screen in the middle layer, and hope it worked; or just erase all lines from the middle layer. that's what I did: I erased every line from the bottom layer that was visible in the middle layer.

    I found it made the picture better in the bottom and middle layer too. now my home made crafted device is usable and fun, lol.

    heres the new video and pictures of this second version of the augmented reality device;

  • hoppingbunny123 Posts: 391
    Nexus 6
    after trying the ar craft out for a while, I found some things that needed improving;

    the phone tilt, which meant the bottom layer had to be reseated.

    I added a black trim to the bottom layer where the cd case was cut to not see the cut when im watching videos.

    I added some black paper to the back of the second layer so the top layer didn't show the back of the cd case.

    i filled in some more on the clothes pin the phone hooks onto, to get the new phone angle.

    i taped on black paper to make the craft look better.

    i watched the new blade runner short where Trixie dies in a dimly lit room, and i thought i was watching some far-future technology from a movie like blade runner; the picture in the ar craft was mesmerizing.

    so now that i have a working model, all dolled up, good to behold, and working, i will shelve it: put it in my drawer along with my earlier funnel craft, which i redid too; it was ugly and inefficient, so i redid it using the crafting glue gun instead of a lot of duct tape.

    if they make some ar device i will watch the blade runner short with Trixie again. i had fun making this ar device and using it, but now its time to put it away and wait for a professional model from some company like oculus to make one.

    heres the videos showing the final ar craft i made.

  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    one other thing: I figured you would take care to wear a face mask when using the glue gun, but in case you don't know, wear a face mask to stop flying bits of glue from being breathed in!

    if for some reason you breathed in a small strand of hot glue gun glue and its messed up your breathing, heres what I did that fixed me up.

    - boil some water and breathe in the steam, 3 or 4 minutes at a time, 2 times spaced about 4 minutes apart.
    - then add half a cup of white vinegar to the boiling water, wait for it to boil again, then take about 5 to 10 breaths of the vinegar steam. it will close up your lungs for a second, so don't make the vinegar mix too strong; if it burns too much, dilute the mix with more water.
    the vinegar will kill the germs in the lungs and dissolve the glue.

    it will be sore in the lungs for a few hours, then it will clear up after a good nights rest.
    this might help too if you have a cold that's stuffed up your lungs. bugs hate vinegar.

  • hoppingbunny123 Posts: 391
    Nexus 6
    edited November 2018
    today I decided to try angling another layer of cd case inside my ar craft, to see if I could get another video source to be seen at a different angle. it worked, but I didn't keep it. then I saw that if I layered cd cases I got a double picture that looked 3d, or more 3d, but it was ugly and only worked in one part of the picture, so I didn't do that either.

    then I looked at my craft and was unsatisfied that it was good enough, so I re-angled the bottom layer, then replaced the bottom layer cd case because I didn't want to see the bits of plastic covering the picture; it breaks the immersion a bit, I find. then I saw some marks on the second layer, so I took that off too and put on a clean second layer, as clean as I had in my drawer of cd cases anyway. next I think I want to go to amazon and get some new cd cases with no scratches.

    so I adjusted the picture on the second and third layers since they were off the craft anyway, and now the three layers look better imho.

    so here is a video I took showing my augmented reality craft 4, or version 4;

    the only thing now is the lamp on the side doesn't work so good, it washes out the picture or lights it unevenly I find, so I might fix that next.

    and I showed the pen flashlight lighting the three layers like I did with my tunnel craft, so you could see how this ar craft 4 compares to my previous tunnel or funnel crafts. it's in the video I just linked to.


    after taping up the insides of the craft on both inner layers, the flashlight on the right is partially hidden and now works to make the picture better instead of worse. so I will put it back in my drawer next to my redone funnel craft; maybe when I get new cd cases the picture will be good, but right now the cd cases are scratched up.
  • hoppingbunny123 Posts: 391
    Nexus 6
    I drew a picture of the optics so a formula could be made up, I will attach the picture.

    the goal of the thread was to reduce sde, and I did that if you look at the mirror effect that shrinks the image in the mirror tunnel;

    I find the image on the second layer is shrunk enough that the visible pixel effect from the top layer is reduced, giving the image an anti-sde look; it looks better without sde.

    look at my videos you can see the image on the third layer is smaller than the image on the top layer.
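
the shrinking-image argument amounts to this: the same pixel count in a smaller image means a higher effective pixel density, so the gaps between pixels get less visible. the numbers below are invented for illustration.

```python
# a linearly shrunk image packs the same pixels into less width,
# so effective pixels-per-inch goes up and sde goes down.

def effective_ppi(base_ppi, shrink_factor):
    """PPI after the image is shrunk linearly by `shrink_factor` (0 to 1)."""
    return base_ppi / shrink_factor

ppi = effective_ppi(base_ppi=450.0, shrink_factor=0.5)  # half size -> double ppi
```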

    this optical logic diagram is from what I see and understand of my craft, I might come up with a better image and explanation in the future if I think hard about it, but right now this is all I have.

  • hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    edited into later post, this was rough draft.
  • hoppingbunny123 Posts: 391
    Nexus 6
    today I saw the oculus patent for a tilting mirror vr;

    what this is missing it occurred to me, was the variable depth of the display..
    that's where my invention comes into play.

    if you have the mirror tilt the image to the eye, the image being tilted by the mirror can be one of the three layers in my optical invention.

    you show each layer by moving the contraption I made so the window the mirror sees is one of the three depths in my optical invention, the top layer, or middle layer, or bottom layer.

    this way the eye sees the depth from the mirror.

    and to get the variable focal range between 25 cm (~10 inches) and infinity, you have the distance from the mirror increased or decreased using a mechanical spacer. I imagine increasing or decreasing the space between the top layer and the middle or bottom layer would affect the perceived size of the object on that layer: if the middle layer were at the distance of the bottom layer it would appear smaller, and the bottom layer, being even farther, would appear minuscule, or very far away.
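
the spacer idea can be put in rough numbers: the apparent (angular) size of a flat layer falls off roughly as 1 over its distance. the distances below are invented; the 25 cm only echoes the focal range mentioned above.

```python
# apparent size of a layer relative to a reference viewing distance;
# pushing a layer twice as far makes it look about half as big.

def apparent_scale(distance_cm, reference_cm=25.0):
    """Relative size of an image pushed out to `distance_cm`."""
    return reference_cm / distance_cm

near = apparent_scale(25.0)  # at the reference distance: full size
far = apparent_scale(50.0)   # pushed twice as far: half size
```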

  • hoppingbunny123 Posts: 391
    Nexus 6
    thinking about the oculus patent and how to do it.

    take two displays with the same resolution, same pixel density, but two different sizes, one larger one smaller.

    the smaller display will be the picture on the outside. the larger display will be the picture on the inside.

    here is a quote from the article:

    "A steerable mirror and optical combiner then project this display into the lens, at the position the user’s eye is pointed. "

    a "optical combiner" joins the two pictures together.

    so the optical system I made has the second or third layer of my optical invention be the image seen in the center, where the foveated rendering works in the oculus patent.
    this layer in my optical system is the same size as the smaller display seen on the outside.
    and the size of the picture in the layer the eye looks at is the size of the area you want in focus with foveated rendering.

    the layer seen in my optical system is not the size of the entire picture, just the size of the area the eye tracking sees the eye looking at.

    in my optical system, because of the seen layer's distance from the top layer, that layer has a greater pixel density and so less sde.

    you could also just show the entire picture with less sde and use the adjustable layer distance to handle the variable focal range, but this would drop the foveated rendering's eye tracking and data collection ($), so you decide what's best for you.

  • hoppingbunny123 Posts: 391
    Nexus 6
    It occurred to me that there could be two displays working in the varifocal setup that uses my optical system.

    They use the same technique i described for foveated rendering in the oculus patent that uses my optical system.

    The smaller of the two displays is the surrounding display; the larger display, driven by foveated rendering, is the other display.

    The two displays work with foveated rendering so the eye tracking moves the varifocal display.

    It would be big though but bring vr into the future.
  • hoppingbunny123 Posts: 391
    Nexus 6
    I thought I would add some pictures to show my idea in my posts regarding the oculus patent and my optical invention. the oculus patent would be the fourth picture, this is for one eye, for two eyes just repeat the mechanism for the other eye;

  • hoppingbunny123 Posts: 391
    Nexus 6
    referring to my picture with the two distances (3, 5), how would the mirror 1 show the same size image to mirror 2?

    - the image from the display changes shape depending on the focal range distance, so that it always reaches mirror 1 at the same size regardless of how the image is shrunk by going to farther focal ranges.
    - the image shows parts of itself in focus; these in-focus parts are matched with the focal range, and the focal range has a value saying which parts of the image are in that focal range. this can be enhanced by foveated rendering, to further reduce what's in focus in the focal range.

    then theres the question of how will the software know when there are obstacles that occlude the virtual object? 

    - there are multiple points on the virtual object at its focal range value, and there are multiple points on the thing (like a moving hand) that blocks the view of the virtual object, which has its own focal range value.

    normally in HoloLens you find a new thing like the hand by refreshing the mesh; this plots the points on the hand in reference to the rest of the things previously plotted. but it's not real time; currently you need a cycle to refresh the mesh.

    so if the system has to refresh the mesh to see a potential obstacle and it doesn't, it won't know to occlude the virtual object from being seen.

    so you need to account for the points on the obstacle, then the focal range of that thing with its multiple points, and then see whether that thing is an obstacle that blocks the view of the virtual object.
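
that obstacle check can be sketched as a point-versus-point depth test; the (x, y, depth) points and the tolerance below are invented for illustration.

```python
# an obstacle point occludes a virtual point when it covers roughly
# the same spot on screen and sits at a nearer focal-range value.

def occludes(obstacle, virtual, xy_tolerance=0.1):
    """True if `obstacle` blocks the view of `virtual`."""
    ox, oy, od = obstacle
    vx, vy, vd = virtual
    same_spot = abs(ox - vx) <= xy_tolerance and abs(oy - vy) <= xy_tolerance
    return same_spot and od < vd

hand = (1.0, 1.0, 0.4)   # hypothetical hand point, 0.4 m away
obj = (1.0, 1.05, 1.2)   # hypothetical virtual object point, 1.2 m away
blocked = occludes(hand, obj)
```
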
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    add a bi-convex lens that serves as a magnifying glass to the pipeline, either in-between mirror 1 and mirror 2, or in-between mirror 2 and the eyes, to make the image size the eyes see the same as what mirror 1 sees, and to enlarge the image depending on how far the focal range shrinks it down.

    and I saw an article about how facial recognition is pretty bad, with a lot of false positives. so I thought about why, and it seems to me that facial recognition suffers from the same mesh refresh problem the HoloLens suffers from, just through a different technique.

    what is happening is there is a match to the face: the face is a mesh, and the mesh is being matched by a feeler. it's like washing dishes or dentures: you rotate them as you rinse them under the tap water. the mesh is the dishes or dentures being rotated, and the facial recognition sw is what's rubbing them under the water.

    the problem arises when the dishes or dentures (the mesh of the face) rotate and the feeler doesn't refresh, the same problem as the HoloLens mesh not refreshing. then the feeler is feeling one part of the face and identifying it as a previous part of the face, a previous face mesh, and giving a final recognition match that's a different face.

    what should happen is, when the dishes, dentures, or face mesh rotates as it's being washed under the sink or felt by the facial recognition sw, the feeler should feel that different part of the face. it rotates the face mesh and feels that different part of the face mesh, and as it rotates the face mesh of the person it's looking at, it gets enough of this to create an image of the face mesh and match it up to a library of faces.

    that means, given how many face meshes there are per face (1 face mesh per refresh, facial rotation, or position), and then how many refreshes match a given face, you have a percentage of mesh refreshes that match somebody in the library.
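
that percentage-of-refreshes idea looks like this as code; the per-refresh match names are invented for illustration.

```python
# each mesh refresh yields the best-matching library face; the final
# call is the face that wins the most refreshes, with its share of
# the refreshes as the confidence.
from collections import Counter

def best_match(per_refresh_matches):
    """Return (face, fraction of refreshes that agreed on that face)."""
    counts = Counter(per_refresh_matches)
    face, hits = counts.most_common(1)[0]
    return face, hits / len(per_refresh_matches)

face, confidence = best_match(["alice", "alice", "bob", "alice"])
```
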
  • hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    today i put a plane mirror on the second layer, and that made the picture crisp and clear.
    so the picture sent from the optical device on layer 2, at distances 3 or 4, to mirror 1 is sent from a plane mirror. mirrors 1 and 2 would be plane mirrors too.

    because the eyes watch the distance in layer 2, i think you need to remove the magnifying glass; just keep mirrors 1 and 2 to see layer 2. i tested holding a mirror and seeing things in the mirror at different distances, and my eyes adjusted to see the different distances as if there was no mirror, but adding a magnifying glass might change this.

  • hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    after some careful consideration i have a basic template in a mechanical and logical process.
    here is the diagram of the process, then some math after this diagram, then some examples to tie it all together,
    I drew a better picture of my optical creation;

    the square root of 1 = 1: the image from the video source. the square of 1 = 1: the reflection of the image on the first layer, on the cd cover surface; a mirror reflection.

    value x = space on second layer before picture
    value w = ( value x * value x),
    root of value w = value x

    = reflection seen on second layer blends with the picture seen on the first layer so both images are seen

    value y = space on third layer before picture
    value z = ( value y * value y ),
    root of value z = value y

    = reflection seen on third layer blends with the picture seen on the first and second layers so three images are seen

    for clarification about how the image appears when using mirror 1 and mirror 2: the plane mirror sends mirror 1 the same size picture of the room regardless of the distance the mirror moves from layer 1. but the image that's being watched, the virtual video on top of reality, stays the same shape regardless of how far the plane mirror moves from or toward layer 1. this way the virtual image grows or shrinks realistically. this is for augmented reality.

    for virtual reality it's the same, but the room is virtual. the eyes still shift focal length and see some virtual object, and that object's size grows or shrinks depending on how far the mirror moves from layer 1 to send the image to mirror 1.
  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    I came across a strange phenomenon after making my optical craft using cd cases that warped the image. the optical device I made has three layers, 1 picture per layer, and the middle layer is the most distorted.

    when I watch a view of a person I see their emotions more clearly in the middle layer picture, like an alethiometer from the golden compass lore.

    even though the image is the most distorted, it has the most immersive picture, like looking into another world. it's too much, so I had to put it away.

    anyway this post isn't about that, it's about how to take a 2d picture or video and find 3d information in it. if you look at my attached picture in the 7zip and look at the middle image, you will see the top is curved up at the tips; the bottom middle, not clearly visible in the picture, also pulls the picture down in the bottom middle.

    looking at this picture, I see the 2d image on the 3rd layer look 3d on the second layer. I clearly see the difference in depth perception between the two pictures compared side by side.

    what's happening, I think, is that the curving of the picture in the second layer is acting like my focal mechanic of moving the layer away from the top layer and then sending that to mirror 1. then the person nearest the camera is like the layer nearer layer 1, and the person farther away is like the layer farther from layer 1. not sure of this, but it's a hunch.

    anyway it might be useful if you're trying to find what's nearer the camera, for things like object recognition that analyzes a picture for depth clues. it works to see into a person's soul too, from what I've seen in my time watching the machine I made.



    whole polygons aren't rendering in the periphery of your view when they should be (this causes the flickering). for example, when in the lobby, the map is at the edge of my peripheral vision and my eye can see half of it; if I turn slightly so 1/3 of the map is visible, the map polygon disappears. Not sure if this is a Unity bug, Onward bug, or PiMax bug.

    I did feel a little motion sickness, I'm guessing because of the larger FOV. [UPDATE] I didn't feel sick the second time I used the HMD.


    reading about how the pi-max Fresnel lens creates an image distortion, it might be because there is a line or dot on the lens material the image passes through that's pulling the image away from its path.

    the image distortion I described in this post: the top curve is there because the top cd case layer has a line on the cd case, near the back where the phone touches the cd case, that pulls the top of the picture in the second layer up at the corners.

    and the middle cd case layer has a dot on the cd case in the middle of the picture that pulls the picture down in the middle.

    maybe the Fresnel lens has lines in it that pull the picture, the lines in the Fresnel lens being visible only when the eye moves.

    you need Fresnel lenses that don't have lines that pull and distort the picture when you move your eyes. same thing for the magic leap and HoloLens optics: they use the same principle as a Fresnel lens imho, after reading about the technology, so they have a distortion made by the lines pulling the picture too when the eyes move.
  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    I tried to find the exact spot that caused the top corners of the image on the second layer of my optical device to curve up, but I couldn't. I figure it's because of the square ocean wave phenomenon.

    the square wave phenomenon can be described in terms of my optical device by how the line draws the picture out, warping it as it draws it out to the line. this drawing out and warping of the picture is equivalent to one sea wave moving at an oblique, non-90-degree angle.

    the problem of the square sea wave happens at the spot where two sea waves join at an angle, intensifying the surrounding area so it draws the picture out and warps it.

    I drew a picture to show the two sea waves joining at an angle and intensifying the distortion as a result; it's the picture on the left.

    the arrow is the picture: it's drawn out to the black line, and the black line distorts the arrow picture along the green line.

    if you took the square ocean wave and rearranged the lines, you would rearrange the distortion made by the lines, reducing the optical error that distorts the picture and creates god rays in vr, and it would stop the flickering of the picture as you rotate your eyeballs in a circle in the white room you see before entering the vr desktop program in the rift.

    find where the lines join to make an ocean square wave, look for the black lines that distort the picture (there is a green line there), then rearrange the black lines in the Fresnel lens so they line up where the lines meet, so the vr god rays and flicker are reduced.

    that means you need to cut the Fresnel lens into lines that reduce the flicker where the lines currently create intensified distortion.

    you might have to experiment with cutting a side of the Fresnel lens along a straight line instead of curves to find where this fixes god rays and flicker as you rotate your eyeballs. like a trial and error process, if you don't know where the lines are, that is.
  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    still looking at the Fresnel lens problem. this is probably why people get motion sick: they roll their eyeballs around, and when they do the screen flickers as the ridges in the Fresnel lens send the light to different lines. then it doesn't matter if the frame rate is 90 fps, the screen still flickers like it's not 90 fps, and you get vr sickness.

    I looked at the ridges on the Fresnel lens and saw these three pictures;

    picture 1 is what the picture looks like in VR with the eyes looking straight ahead; there is little to no ghosting in the sweet spot unless the picture is black and white showing text.

    but picture 2 is when the eyes look away from the center to the side, and the picture gets ghosting, or what's called god rays. the image gets a smeared look.

    this ghosting happens when the light goes to a different ridge as the light from the Fresnel lens is sent into the eye and the eye is at an odd angle, not looking directly at the center of the screen.

    the third picture is adding a ridge on top of the Fresnel lens edge; the eye rolls over a section of the Fresnel lens, blending light from the inside to the outside, from the inner Fresnel lens rings to the outer Fresnel rings.

    adding a ridge to the Fresnel lens rings, on the slope of the Fresnel lens (seen in picture 3 as the blue line stopping the red line), stops the roll of the eye from blurring the light from the inner rings into the outer rings, and it only acts when the eye roll changes the light direction of the Fresnel lens.
  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    the idea of getting a larger fov with poor Fresnel lenses is a bad choice, because for people who roll their eyes, the judder from the lines in the Fresnel lens blurring the picture will be worse at a larger fov. they will rotate their eyes farther, and if the lines from the smaller fov are still present, then all hell breaks loose with judder from the lines absorbing the picture and creating judder as a result, making the larger fov create more nausea in vr.

    "I know it sounds stupid, but I was wondering if the widened FOV has any impact on VR sickness? I just have this feeling that it will increase tolerance…
    Are any of our testers susceptible to VR sickness normally and can speak on this?"

    "The testers mentioned that especially at 170 (large) FoV, but also slightly at 150 and 130, they did have a bit of new VR sickness, but very quickly adapted to it. Probably just takes a few hours or days to get used to the new sets once you get them."

    the last thing oculus needs is a larger fov with poor quality Fresnel lenses to make a lot more people feel sick.

    how do they get their vr legs? like the guy on youtube did: he wobbles his head instead of rolling his eyes, to keep his eyeballs from rolling the picture over the Fresnel lens lines, which would create judder. he moves his head and keeps his eyes centered over the sweet spot, and so feels less vr sickness.

    watch the video to see the picture wobble, meaning he is wobbling his head; the picture wobbles all the time, even for minute movements;

  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    today I thought about how to do face recognition software, and thought it would be necessary to have depth recognition for ar and vr to use first, then build face recognition off of that foundation.

    so I drafted this up, before I learn python, as a todo for when I can code up depth recognition for ar and vr to use before I set off to do face recognition;

    create a depth logic;
    - from a 2d image, create a new picture with the top corners curved up, and the bottom middle pulling the picture down.
    the logic focuses on the bottom horizontal line of the picture, and on the center of this bottom line.

    - the first depth = near this bottom line center there are two sized images, one small and one large.
    the small sized image references the second, larger image.

    - the second depth = a second small image, similar in shape to the first small shape, is near the same large shape the first small shape was next to.

    - the two small shapes stretch the larger image down towards each smaller shape's bottom center line, cooperatively.
    so the two small shapes stretch the larger shape across each smaller shape's bottom lines,
    which opens the larger shape up between the two smaller shapes' bottom lines.
    this is where logic comes into play: the second small shape is smaller than the first small shape, and this is seen as 3d depth.

    in the rows there is the larger shape, in the columns there is the smaller shape.

    the smaller shapes change rows but stay in the same column; the first small shape has more consecutive rows joined than the second small shape.

    the larger shape stays in the same row for multiple columns. if the consecutive columns are divided up, the next column is still near the previous column's row.

    if one larger shape is broken up into columns, and the other larger shape isn't broken up into columns,
    then the shape with the larger percentage of consecutive columns, which lets it be made up of additional larger shapes in other columns that share the same or nearly the same row, is the larger shape.

    using these rules I can get an ai or software to identify depth in a 2d picture, to be able to plot occlusion when a hand goes to grab a vr object, and it could be used in HoloLens or magic leap.
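    a minimal sketch of how the consecutive-column rule above might be coded, assuming each shape is represented as a list of (row, column) grid cells. the cell data, function names, and the >= tiebreak are my own placeholders, not from the rules themselves:

```python
def consecutive_column_fraction(cells):
    """Fraction of a shape's adjacent-cell slots that sit in consecutive
    columns on the same row (cells are (row, column) pairs)."""
    by_row = {}
    for row, col in cells:
        by_row.setdefault(row, []).append(col)
    consecutive = 0
    for cols in by_row.values():
        cols.sort()
        # count column pairs that touch, i.e. col+1 follows col on this row
        consecutive += sum(1 for a, b in zip(cols, cols[1:]) if b == a + 1)
    return consecutive / max(len(cells) - 1, 1)

def larger_shape(shape_a, shape_b):
    """Per the rules: the shape with the larger percentage of consecutive
    columns is treated as the larger shape."""
    if consecutive_column_fraction(shape_a) >= consecutive_column_fraction(shape_b):
        return shape_a
    return shape_b

# example: shape_a is unbroken across columns, shape_b is split into pieces
shape_a = [(5, c) for c in range(0, 6)]        # one row, columns 0..5
shape_b = [(5, 0), (5, 2), (5, 4), (6, 7)]     # broken up into columns
print(larger_shape(shape_a, shape_b) == shape_a)  # True
```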
  • hoppingbunny123hoppingbunny123 Posts: 391
    Nexus 6
    edited December 2018
    carrying on from my previous post, building my todo list for image recognition so ar and vr will be able to occlude things like ar virtual images when a rl hand passes through them. this is the logical conclusion, with more of an ai slant, but still relevant to my previous post;

    the default logic for the two similar shapes of different sizes is to open up the larger shape towards each smaller shape's bottom lines;
    that's why the second smaller shape is smaller.

    but, if the first smaller shape is smaller than the second smaller shape, then the logic is to not open the larger shape. it's a boolean false to opening the larger shape.

    if there's a single shape like a human outline with a column value, the other shapes in the background have either a smaller or larger column value, giving a reference for the human's outline as to whether the picture is referencing opening or not opening something.

    - the second smaller shape being larger than the first smaller shape, column-value wise, means the larger shape is not being opened,
    - while the second smaller shape being smaller than the first smaller shape, column-value wise, means the larger shape is opening up.

    assuming two shapes: the human is the first small shape, and something else, maybe another human maybe not, is the second smaller shape, of varying size relative to the first smaller shape.

    so a picture having a human and a second smaller shape gives the logic of either hope or failure, depending on the human shape's size relative to the second similar shape in the background, or to a similarly shaped column value in the background.
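    the open / not-open rule above boils down to one boolean comparison, assuming each smaller shape is summarized by its column value (the function name and the numbers here are my own placeholders):

```python
def larger_shape_opens(first_small_cols: int, second_small_cols: int) -> bool:
    """Per the rule: the larger shape opens up only when the second
    smaller shape is smaller (fewer columns) than the first."""
    return second_small_cols < first_small_cols

print(larger_shape_opens(first_small_cols=8, second_small_cols=5))  # True, opens
print(larger_shape_opens(first_small_cols=5, second_small_cols=8))  # False, does not open
```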