got some free sw for you to use for tracking

hoppingbunny123hoppingbunny123 Posts: 561
Trinity
edited September 20 in General
I used a program called NegativeScreen to make the color change;

https://zerowidthjoiner.net/negativescreen#newCommentForm

I made my own color profiles and added them to the program by editing the configuration file.

Copy and paste this over the corresponding section of the configuration file. To open the file, right-click the program while it is open, click edit configuration, open it with Notepad, scroll to the section, and paste over the default version;

Edit: the colors are updated below;


Now, how I think this can be used to improve inside-out tracking: run the two colors at once and compare them to find the borders of objects in the picture the camera sees, so that things like lighting don't break tracking the way they currently do.
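
Here is a minimal sketch of that comparison idea in Python (NumPy only; the mixing matrix and threshold are just illustrative assumptions, not anything the Oculus cameras actually use):

import numpy as np

# illustrative second "color" of the frame: a red/cyan-style channel mix (an assumption)
MIX = np.array([[ 0.6, -0.3, -0.3],
                [ 0.0,  0.9,  0.3],
                [ 0.2,  0.3,  0.9]])

def edge_map(gray, thresh=0.05):
    # simple gradient-magnitude edge detector
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return (gx + gy) > thresh

def combined_edges(frame):
    # frame: float RGB array in [0, 1], shape (H, W, 3)
    plain = frame.mean(axis=2)                            # ordinary view
    remixed = np.clip(frame @ MIX.T, 0, 1).mean(axis=2)   # recolored view of the same frame
    # keep borders that show up in either view, so a lighting change that hides
    # an edge in one view can still be picked up in the other
    return edge_map(plain) | edge_map(remixed)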

The NegativeScreen program isn't mine; I just made the color matrices. NegativeScreen itself is free to use on GitHub; just follow the link.





Comments

  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    I updated the colors and arranged them in order, so that if you start at F6 and go sequentially up to F11, each color changes in relation to the previous one.
  • MowTinMowTin Posts: 1,711 Valuable Player
    I'm a bit confused. I'm not sure what this is about. Software for tracking? Are you talking about some kind of AI tracking program? Does it have anything to do with VR?
    i7 6700k 2080ti   Rift-S, Index
  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    If the camera can be programmed so that it sees the same image differently, it might help it not lose tracking.

    To that end, there are a variety of colors to choose from that can be cycled through an image to find tracking markers for things like inside-out tracking.

    Currently the type of lighting affects Oculus inside-out tracking, which I think could be prevented by using differently colored versions of the same image.

    The facial ID is just an example of the various colors available.
  • MowTinMowTin Posts: 1,711 Valuable Player
    Where are you changing the colors? Is it a file? What are you trying to achieve? 
    i7 6700k 2080ti   Rift-S, Index
  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    Here's the face ID video; watch it to see how I use the colors to identify face parts, and read the video description for more info;


    And here is the color and inverted color that I think Oculus inside-out tracking could use to help stop tracking errors. The idea is: if the problem is that there is nothing visible in the video, tracking is lost; but invert the colors and now there is something visible to see.

    The software I use to do the inversion has a bunch of color possibilities besides red and white; watch the video to see what I did;




  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    I made a video showing how the colors I made work not only for facial recognition but also for eye tracking. This is the eye-tracking video; it's best to watch the face ID video first, though, so you know what I'm doing here;


    This should help VR eye-tracking technology.

  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    edited September 20
    I made another video to show how important distance is to tracking;


    The idea is: more distance = more blurring of light sources into one light source, losing details and distinguishing features as a result, which makes things like face ID and eye tracking impossible. The new video shows that tracking is also lost when the distance to the light source is too great.

    It might be that a variable zoom/focus could help tracking in VR get close enough to light sources that the details in them can be seen, like in the video in this post.
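
    Here is a small NumPy sketch of that blurring effect (a 1-D toy example, not real camera optics): two point light sources stay distinguishable at small blur but merge into one as the blur, standing in for distance, grows;

    import numpy as np

    def blur(signal, sigma):
        # 1-D Gaussian blur, standing in for how distance/defocus spreads a light source
        x = np.arange(-4 * sigma, 4 * sigma + 1)
        kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        kernel /= kernel.sum()
        return np.convolve(signal, kernel, mode="same")

    scene = np.zeros(200)
    scene[[90, 110]] = 1.0          # two point light sources 20 "pixels" apart

    for sigma in (2, 5, 12):
        seen = blur(scene, sigma)
        # the two lights stay separate only while the midpoint dips well below the peaks
        separate = seen[100] < 0.9 * seen[90]
        print("blur", sigma, "-> two lights distinguishable:", separate)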

    The clip at the end is from this video;

  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    I watched a YouTube video about Amazon Rekognition facial ID, and the speaker said that one of the things they want to detect is nudity. So I looked at two nudes, one of a man and one of a woman, frontal, since that's closest to how the face recognition works in my testing, and these are the markers I found for identifying whether there is a nude in the video;




    For the woman, nude frontal: facial-recognition eyes = the breasts, facial-recognition mouth = her sex organ.

    For the man: facial-recognition eyes = the armpits, facial-recognition mouth = the sex organ.

    Also, there is a long colored band on the inside of the legs, from hip to knee, for both the man and the woman. Also lit up is the space between the arms and the torso for both.

    This is using the F7 color mode shown in the links.

    F7 = mouth and eyes = top of the torso area and the sex organ
    F9 = chin = darker colors around the groin
    F10 = area of the entire head = the entire body colored or mostly colored

  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    edited September 28
      Updated colors below.
  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    edited September 25

    I was reading a story about Google fighting deepfakes, and thought that I already had a tool to use for this, so why not update the methodology, simplify it, and add it to the tracking ideas I described before.

    This can be used for tracking, to find differences between images, and the same goes for deepfakes: deepfake detection compares versions of the same image, while tracking compares consecutive images.

    Here's the written guide on how to do the comparison; it goes along with the video I put above in this post;

    Suppose you have three pictures and want a test that shows the differences between them.

    Picture 1 = the original
    Picture 2 = the same picture, but made different from picture 1
    Picture 3 = the same picture, but made different from pictures 1 and 2

    Download VirtualDub: the V1.10.4 (x86 / 32-bit) release build (VirtualDub-1.10.4.zip);

    http://virtualdub.sourceforge.net/

    Download a video player; for this example I use an MP4 file and play a video I downloaded from YouTube;

    https://mpc-hc.org/downloads/

    Install AviSynth;

    https://sourceforge.net/projects/avisynth2/

    Open MPC-HC, open the MP4 file, go to the frame you want, go fullscreen, and press Alt + Print Screen to put the frame on the clipboard.

    Open Paint, paste in the frame, and save it as a 24-bit BMP. Do this for all three pictures, but for pictures 2 and 3
    use the NegativeScreen program to make them different from picture 1.

    Download the latest NegativeScreen binary; https://zerowidthjoiner.net/negatives...

    Open NegativeScreen. Switch to a profile, any one, but for this example choose profile 1.
    Save picture 2 with the new colors.

    Open the NegativeScreen configuration file, scroll down to the profile you used for picture 2, and add 0.1 to the value in the first row, first column.
    Save the file, close the program, reopen the program, switch to the same profile, take a fullscreen picture of the frame in MPC-HC, and save it from Paint as picture 3.

    Make a Notepad file on the desktop and call it script.

    Rename the extension from .txt to .avs, copy and paste this code into the script file, and change the file paths to match your own;

    start code below, ignore this line:

    clip1=ImageSource("C:\Users\office\Desktop\1.bmp")
    clip2=ImageSource("C:\Users\office\Desktop\2.bmp")
    clip3=ImageSource("C:\Users\office\Desktop\3.bmp")
    clip4=Blankclip(clip1)
    # --- special purpose clips below, for comparison ---
    clip5 = SUBTRACT(clip1, clip2).LEVELS(107,1,149,0,255)
    clip6 = SUBTRACT(clip1, clip3).LEVELS(107,1,149,0,255)
    clip7 = SUBTRACT(clip2, clip3).LEVELS(107,1,149,0,255)

    desc_clip1 = "Source"
    desc_clip2 = "Picture 1"
    desc_clip3 = "Picture 2"
    desc_clip4 = "Blank (not displayed)"
    desc_clip5 = "Difference Source - Picture 1"
    desc_clip6 = "Difference Source - Picture 2"
    desc_clip7 = "Difference Picture 1 - Picture 2"

    # The two lines below will usually call clips 1-2-3-4, but 1-5-6-7 may be inserted
    #    to see [amplified] mathematical difference between clips
    # To put it another way, these are the four clips we want to display
    vertclip1 = STACKVERTICAL(clip1.SUBTITLE(desc_clip1), clip5.SUBTITLE(desc_clip5))
    vertclip2 = STACKVERTICAL(clip6.SUBTITLE(desc_clip6), clip7.SUBTITLE(desc_clip7))

    STACKHORIZONTAL(vertclip1,vertclip2)

    end code above, ignore this line:

    Now open VirtualDub and, from the Video menu, select direct stream processing, then open the script.avs file you made; you'll see the frame from the video for picture 1.

    Now, from the Video menu, click Copy output frame to clipboard.

    Go to Paint, paste in the frame, and look at the cells in the picture to see the differences between the three pictures.

    - the top left is the source frame (picture 1),
    - the bottom left is the difference between pictures 1 and 2,
    - the top right is the difference between pictures 1 and 3,
    - the bottom right is the difference between pictures 2 and 3.
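
    If you would rather do the subtract-and-levels step without AviSynth, here is a rough NumPy/Pillow sketch of the same idea (the file names assume the three BMPs saved earlier, and the mid-grey offset only approximates AviSynth's Subtract):

    import numpy as np
    from PIL import Image

    def load(path):
        return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)

    def diff_levels(a, b, lo=107, hi=149):
        # Subtract centres the difference around mid-grey; Levels(107,1,149,0,255)
        # then stretches the narrow band around that midpoint to the full range
        d = np.clip((a - b) + 128.0, 0, 255)
        stretched = (d - lo) / float(hi - lo) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)

    p1, p2, p3 = load("1.bmp"), load("2.bmp"), load("3.bmp")
    Image.fromarray(diff_levels(p1, p2)).save("diff_1_2.bmp")   # picture 1 vs 2
    Image.fromarray(diff_levels(p1, p3)).save("diff_1_3.bmp")   # picture 1 vs 3
    Image.fromarray(diff_levels(p2, p3)).save("diff_2_3.bmp")   # picture 2 vs 3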











  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    I did some testing with NegativeScreen, fixing the colors using the NVIDIA color control panel, and the colors still made the game I was playing hard to see.

    So I found out that the hue is responsible for taking the colors and making the picture crystal clear: increase the hue to increase picture fidelity. My hue was 2 just for normal TV calibration, and 10 after testing to fix the color in NegativeScreen, but to make the F10 color look good and clear in my game I had to set my NVIDIA color hue to 128; that's 126 more than it was for calibration.

    Then the color was mostly clear but still too oversaturated, which kept fidelity bad, so I lowered my saturation from the calibrated value of 68 (without NegativeScreen) to 17 (with the NegativeScreen F10 color).

    It adds in an additional green tint, but the picture is really pretty. I think I bumped into a secret here, and I'm sharing it with you: bump up the hue, lower the saturation, use the F9 color, and see beauty!

    And it helps me see in the game too.
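
    For anyone curious what those two knobs do in principle, here is a tiny sketch using Python's standard colorsys module (only an illustration of "raise the hue, lower the saturation"; it is not the NVIDIA panel's exact math, and the numbers are made up):

    import colorsys

    def adjust(r, g, b, hue_shift=0.35, sat_scale=0.25):
        # r, g, b in [0, 1]; hue_shift is a fraction of the full hue circle
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + hue_shift) % 1.0    # "bump up the hue"
        s = s * sat_scale            # "lower the saturation"
        return colorsys.hsv_to_rgb(h, s, v)

    print(adjust(0.8, 0.2, 0.2))     # a strong red becomes a paler, hue-shifted tone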

  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    edited September 28
    Here are the updated colors. What's changed: I added a new color, F6 (yellow). You might have to tone down your saturation to make it look nice though; my saturation = 19, hue = 2;

    Grayscale=win+alt+F5
    { 0.3,  0.3,  0.3,  0.0,  0.0 }
    { 0.6,  0.6,  0.6,  0.0,  0.0 }
    { 0.1,  0.1,  0.1,  0.0,  0.0 }
    { 0.0,  0.0,  0.0,  1.0,  0.0 }
    { 0.0,  0.0,  0.0,  0.0,  1.0 }

    yellow and inverted=win+alt+F6
    { 0.358, -0.077, 0.300,  0.000,  0.000 }
    { -0.969, -0.786, -0.534,  0.000,  0.000 }
    { -0.289, -0.068, -0.469,  0.000,  0.000 }
    {  -0.700,  -0.700,  -0.700,  1.000,  0.000 }
    {  1.451,  1.303,  1.037,  0.000,  1.000 }

    Inverted Cyan/Orange-ish=win+alt+F7
    { 2.9, 0.5, -0.7, 0.7, 0.0 }
    { -7.3, 0.1, 1.9, 0.4, 0.0 }
    { -1.5, -0.1, 0.1, -0.3, 0.0 }
    { 0.0, 0.1, 0.1, 1.0, 0.0 }
    { 1.5, 0.1, 0.1, 0.1, 1.0 }

    Inverted Blue/Pink=win+alt+F8
    { 1.1, 0.2, 0.3, 0.7, 0.0 }
    { -7.3, 0.2, 1.0, 0.4, 0.0 }
    { -1.5, 0.1, 0.1, 0.3, 0.0 }
    { 0.0, 0.1, 0.1, 1.0, 0.0 }
    { 1.5, -0.1, 1.0, 0.1, 1.0 }

    Inverted Cyan/Red-ish=win+alt+F9
    { 2.9, 0.6, 0.7, 0.7, 0.0 }
    { -7.0, 0.2, 1.9, 0.4, 0.0 }
    { -1.5, -0.1, 0.1, -0.3, 0.0 }
    { 0.0, 0.1, 0.1, 1.0, 0.0 }
    { 1.5, -0.1, -0.1, 0.1, 1.0 }

    Contrasted Red/Cyan=win+alt+F10
    { 0.6, -3.3, -3.3, 1.2, 0.0 }
    { 0.0, 6.3, 6.3, 0.4, 0.0 }
    { 0.2, 1.5, 1.5, -0.3, 0.0 }
    { 0.1, 0.0, 0.0, 1.0, 0.0 }
    { 0.1, -1.0, -1.0, 0.1, 1.0 }

    Inverted Deep-Red/Cyan=win+alt+F11
    { 0.6, 3.3, 3.3, 1.2, 0.0 }
    { 0.0, -6.3, -6.3, 0.4, 0.0 }
    { -0.2, -1.5, -1.5, -0.3, 0.0 }
    { 0.1, 0.0, 0.0, 1.0, 0.0 }
    { -0.1, 1.0, 1.0, 0.1, 1.0 }
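
    As far as I can tell, each block above is a 5x5 color matrix applied to the pixel as a row vector [R, G, B, A, 1] (the same convention the Windows color-matrix APIs use), so the last row acts as a constant offset. A minimal sketch of that transform in Python, using the F6 matrix above:

    import numpy as np

    # the "yellow and inverted" F6 matrix from the configuration above
    F6 = np.array([
        [ 0.358, -0.077,  0.300, 0.0, 0.0],
        [-0.969, -0.786, -0.534, 0.0, 0.0],
        [-0.289, -0.068, -0.469, 0.0, 0.0],
        [-0.700, -0.700, -0.700, 1.0, 0.0],
        [ 1.451,  1.303,  1.037, 0.0, 1.0],
    ])

    def apply_profile(r, g, b, a, m=F6):
        # row vector [R G B A 1] times the 5x5 matrix; clamp the result to [0, 1]
        out = np.array([r, g, b, a, 1.0]) @ m
        return np.clip(out[:4], 0.0, 1.0)

    print(apply_profile(1.0, 1.0, 1.0, 1.0))   # pure white goes dark under this inverting profile
    print(apply_profile(0.0, 0.0, 0.0, 1.0))   # pure black comes out as a bright yellowish tone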



  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    I updated the F6 color again. The previous version worked on one monitor at home but not on the monitor at another house. This one works at the other house, and it should work for the monitor I have at home too.
  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    "optical physics theory called contrast—the brightness, or luminance, according to how the eye perceives it. Colours of one wavelength against a colour of another identical wavelength, will have low contrast and can’t be detected easily. But put that colour against a different wavelength/colour (holding all luminance values equal!), and you have created high contrast, which makes it much easier to detect. "


    This method illuminates things that are the same thing, hair or faces for example. Its functionality is based on wavelength accuracy: if the wavelengths are too similar, color distinction fails.

    To fix the wavelength problem there are two solutions;
    - focus: by getting the camera focus closer you can find wavelength differences, but this is hard;
    - changing the gamma first and then the contrast, lowering both of them incrementally (see the sketch at the end of this post).

    Then, with the wavelengths shown to be different, you could take two images of the same thing, run them through this image-difference detector, and find the things that are the same;


    This would help with things like tracking, resolving details in an image, or camera-based pedestrian avoidance for cars.

    The only problem is that AviSynth doesn't detect the NVIDIA color change as a change to the image, so I don't know how to apply the gamma and contrast changes to the actual image.
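
    One way around that would be to apply the gamma and then the contrast change directly to the saved pictures instead of through the driver, so the pixel data itself changes and AviSynth can see it. A rough NumPy/Pillow sketch (file names and step values are only examples):

    import numpy as np
    from PIL import Image

    def gamma_then_contrast(img, gamma=0.8, contrast=0.8):
        # img: float RGB array in [0, 1]; gamma is applied first, then contrast
        # is scaled around mid-grey, matching the "gamma first, then contrast" order
        x = np.clip(img, 0.0, 1.0) ** gamma        # gamma below 1 brightens midtones here
        x = (x - 0.5) * contrast + 0.5             # contrast below 1 flattens the range
        return np.clip(x, 0.0, 1.0)

    frame = np.asarray(Image.open("1.bmp").convert("RGB"), np.float32) / 255.0
    for step in (1.0, 0.9, 0.8, 0.7):              # lower both incrementally
        out = gamma_then_contrast(frame, gamma=step, contrast=step)
        Image.fromarray((out * 255).astype(np.uint8)).save("adjusted_%.1f.bmp" % step)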


    iso.zip 36.2K
  • hoppingbunny123hoppingbunny123 Posts: 561
    Trinity
    edited October 8
    Just like contrast detection is fooled by similar colors amounting to the same wavelength (see the sheet experiment in the attached video, where the sheet is the same color as the walls),

    there is the speed-detection trick, where moving slowly enough keeps the wavelength similar enough that you could inch your way past a motion-detection system.


    If you took the time to record the video and speed it up, you would see the similarly colored things move fast, which would trigger motion detection. This is the inverse of, but the same method as, lowering the contrast and gamma color, only here it is about speeding up the video.

    This might help detect pedestrians on the road.


    Maybe the Cosmos tracking could be fixed by speeding up the video to detect the hand moving, or by having strobe lights on the controllers that provide motion to detect.


    So I think that to have accurate motion detection, a motion detector needs three things;
    - the F6 color from the NegativeScreen program (my F6 color);
    - lowering the contrast and gamma;
    - speeding up the video, maybe 1.25 to 1.50 times the original speed (a minimal sketch of this is below).
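
    Here is a minimal sketch of that speed-up trick (assuming a list of grayscale frames already grabbed from a camera; the skip and threshold values are just examples):

    import numpy as np

    def motion_hits(frames, skip=1, thresh=8.0):
        # frames: list of grayscale float arrays in [0, 255]
        # skip > 1 plays the same footage "faster": comparing frames further apart
        # lets slow movement pile up into a difference big enough to detect
        hits = []
        for i in range(len(frames) - skip):
            diff = np.abs(frames[i + skip] - frames[i])
            hits.append(diff.mean() > thresh)
        return hits

    # motion_hits(frames, skip=1) can miss a slowly moving hand, while
    # motion_hits(frames, skip=3) behaves like watching the same video sped up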