
Real VR: Connecting the computer to the brain

Zenbane
MVP

A recent article was released discussing the emerging tech of Neuralink, which has been making headlines lately due to advancements driven by Elon Musk:

https://www.caesarvr.com/vrisael2013-blog/real-virtual-reality-2025

 

It looks like English isn't the author's native language, but the content is in line with many other similar articles.


Moving away from hand-controllers is certainly the future and true next gen for VR. Overall, I believe VR is still in its First Gen mode. Adding more ways to track fingers and more hardware to make prettier colors doesn't define VR, nor does it advance the current generation. The only way that VR can truly advance is for the manner in which we "experience" VR to change beyond physical limitations.

 

This is a combination of things:

  • Eye Tracking
  • Computer-Brain connection
  • Tetherless/Wireless
  • Freedom of movement (e.g. not limited to a single room or external sensors)
  • Advances in Machine Learning and Artificial Intelligence

We're already halfway there, it would seem.

 

It would be great to see a first release by 2025!

12 REPLIES

hoppingbunny123
Rising Star

I wonder what the monkey is doing now.

 

https://youtu.be/2rXrGH52aoM

hoppingbunny123
Rising Star

Having thought about how neural networks operate, I think they're fancy condition logic.

 

The brain accepts the neural link conditions and functions to control those conditions. 

 

Conditions do not provide feedback, though, so applying a neural network to the brain needs an additional mechanism that operates similarly to a neural network.

 

A rotating cord operated using the neural network conditions might work, if the brain learns that the cord is operated by the conditions the brain interfaces with.

 

The monkey grabbing the controller is the idea they're already using.

 

 


@hoppingbunny123 wrote:

Having thought about how neural networks operate, I think they're fancy condition logic.

 


I can see why one would think that, but condition logic (If/Then/Else) stands in stark contrast to how neural networks work.

 

There's plenty of good reading material on the differences; simply search for "neural network vs decision trees"
https://www.google.com/search?q=neural+networks+vs+decision+trees

 

At a high level:

  • Condition Logic (Decision Trees) = explicit, hand-traceable rules
  • Neural Networks = non-linear functions whose behaviour is learned from data

 

It would be like referring to a fighter jet as a fancy car.
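
To make the contrast concrete, here's a rough sketch in plain Python (the function names, thresholds, and weights are made up purely for illustration):

```python
import math

def condition_logic(signal):
    # Condition logic: fixed, hand-written thresholds.
    # The behaviour never changes unless a human rewrites the rules.
    if signal > 0.5:
        return "press"
    elif signal > 0.1:
        return "hover"
    else:
        return "idle"

def tiny_neural_net(signal, w1, b1, w2, b2):
    # Neural network: the same input flows through weights and a non-linear
    # activation (tanh). Change the weights and the whole input-to-output
    # mapping changes smoothly -- no rules get rewritten.
    hidden = math.tanh(w1 * signal + b1)
    return w2 * hidden + b2

print(condition_logic(0.3))                                   # always "hover"
print(tiny_neural_net(0.3, w1=2.0, b1=-0.4, w2=1.5, b2=0.1))  # depends entirely on the weights
```

The rules in the first function are written by a person; the behaviour of the second comes from its weights, which are normally learned from data rather than spelled out.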

hoppingbunny123
Rising Star

A neural network needs to use test conditions for two things:

- the weight 

- the neural network 

 

The conditions operate the neural network. Therefore the conditions being controlled by weight = the neural network is not intelligent.


@hoppingbunny123 wrote:

A neural network needs to use test conditions for two things:

- the weight 

- the neural network 

 

The conditions operate the neural network. Therefore the conditions being controlled by weight = the neural network is not intelligent.


 

Sorry, but none of that is accurate. I'd recommend reading about Neural Networks before trying to assess them.
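
For what it's worth, the weights aren't conditions that someone writes down; they get adjusted from feedback. Here's a minimal sketch of gradient-descent updates in plain Python (the data and learning rate are made up for illustration):

```python
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # examples of the target relationship y = 3x
w = 0.0                                      # the network starts knowing nothing
lr = 0.05                                    # learning rate

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x               # forward pass through the single weight
        error = y_pred - y_true      # feedback: how wrong the prediction was
        w -= lr * error * x          # gradient step: the feedback adjusts the weight

print(round(w, 3))  # ends up close to 3.0 without any if/then/else rule saying so
```

Scale that idea up to millions of weights and non-linear layers and you get a trained network; nowhere in there is a hand-written condition deciding the answer.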

😁 ok

Pixie40
Expert Trustee

@Zenbane wrote:

Moving away from hand-controllers is certainly the future and true next gen for VR. Overall, I believe VR is still in its First Gen mode. Adding more ways to track fingers and more hardware to make prettier colors doesn't define VR, nor does it advance the current generation. The only way that VR can truly advance is for the manner in which we "experience" VR to change beyond physical limitations.

I would argue that we're already in the 3rd generation of VR, at least. The first generation was MASSIVE. It took a room filled with machinery and computers just to let you stand in place and look around. 2nd gen came about in the 80s and 90s with the advent of the microcomputer. Still too bulky for general use, too limited, and it needed dedicated hardware that took up a lot of space. Then with the release of phone VR (crap as it might be), the Rift, and the Vive headset, I'd say we entered the 3rd generation.

 

That said, this field is a lot harder to delineate into discrete hardware generations. Unlike video game consoles there isn't a leap forward every few years as better hardware is released. Instead it's more gradual. For the most part VR has progressed the same way as computers have, with small iterations over time.

Lo, a quest! I seek the threads of my future in the seeds of the past.

Completely agree with that, Pixie. I've never really considered PhoneVR (MobileVR) to be 2nd Gen. I always found it to be a downgraded version of what we're calling 3rd Gen (Rift, Vive), where the key difference is 6DoF vs 3DoF. But I can also see an argument that this very difference (3 vs 6 DoF) is itself what separates 2nd and 3rd Gen.

 

I've always considered VR headsets like the Virtual Boy to be 2nd Gen:

https://en.wikipedia.org/wiki/Virtual_Boy

 

But I guess it really all depends on where we start counting since the history of VR goes as far back as 1838 with stereopsis:

https://virtualspeech.com/blog/history-of-vr

 

Overall though, I would agree that we're in 3rd gen of consumer VR. We have not yet advanced beyond that, even with the breakthroughs of stand-alone, hybrid, inside-out tracking, or controllers like the Knuckles.

 

The next gen will likely arrive when we have full-body tracking without extra gear, eye tracking, and the ability to read brain waves.

hoppingbunny123
Rising Star

I watched a video where Mr. Musk said the education method needs an upgrade to make video games the main educational tool.

 

Education is a conversation first, followed by social cues to learn and follow or copycat, similar to how chickens run away when one chicken sounds the alarm. There's a language to the social cues, and that's where the game can add various cues on top of regular book-learning methods (like the calculator), so the cues are more meaningful in case the student gets the wrong impression, feels social anxiety from the social cues, or the teacher should not be teaching because the social cues are vulgar and hateful.

 

I think a parrot method of book learning could work, where the student's response has an array of elements, and each array gives off a distinct sound showing whether the student's actions were logical or not.

 

An example of this is where a video game troll who swears at someone hatefully gets a parroted response back in a mocking tone of voice, which makes the person who was verbally attacked feel vindicated at seeing their attacker get sworn at in return.

 

I feel that the weights in a neural network should have feedback that lets them align to social norms; that's not how the weights are adjusted now, which is mechanical and not human at all.
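
One very rough way to picture that kind of feedback loop, with every name and number made up purely for illustration, is to let a stand-in "human rating" decide whether a weight change gets kept:

```python
import random

def human_feedback(response):
    # Stand-in for a person rating a response; closer to 1.0 ("polite") scores higher.
    return -abs(response - 1.0)

w = random.uniform(-1.0, 1.0)          # single illustrative weight
best_reward = human_feedback(w * 1.0)

for step in range(200):
    candidate = w + random.uniform(-0.1, 0.1)  # try a small change to the weight
    reward = human_feedback(candidate * 1.0)   # ask the "human" for feedback
    if reward > best_reward:                   # keep only changes people approve of
        w, best_reward = candidate, reward

print(round(w, 2))  # drifts toward the response the feedback rewards (about 1.0)
```

That's only a toy hill-climbing loop, not how real systems are trained, but it shows a weight moving because of an outside judgment rather than a purely mechanical error term.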