This year's Facebook Connect will be virtual. It looks like we'll be able to register for attendance; details to come.
Facebook released news on this today. So now would be a fun time to start making some predictions!
Here is what I am expecting:
Above all, I am hoping that we once again have Carmack go unscripted. 😁
Odd. I rebooted my PC since I posted that, and it's working now in Firefox. Also, the word Spoiler is blue now; it was black when it wasn't working.
> For anyone who heard Carmack's unscripted speech yesterday, he quotes himself about the Metaverse from the 1990s: "Building the metaverse is a moral imperative"
Carmack said that we are not there yet in terms of hardware power. If you watched the Q&A after that speech, you may have noticed there were only a few avatars in the room with virtual Carmack 🙂 And you could see that those avatars' movements were not perfectly fluid. He also noted that adding fancy styling to avatars could lower the number of avatars that can connect to the room. And he wishes that the next Connect could take place in VR, with all those people attending virtually.
But.... I think that is not the case. We don't need support for that number of avatars at all! 🙂
Just think of those people as data streams. All you need to do is bring the bandwidth of those streams down to a supported number. Let's say ..... to 1 bit at a time per person 🙂
How is that possible? Have you seen a concert with lots of people lighting their flashlights? Every single bright dot has a real person behind it! And it takes only 1 pixel to show that light! 🙂
So you could show a wall or an amphitheater with 10,000 people or more at once, each represented by a single pixel (or a very small image). And if you need a moving light, that is just another bit. It is not even 2 bits per second: you only need to transfer on/off and move/stop events, so those are 2 bits over any longer stretch of time.
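The two-bits-per-person idea can be sketched in code. This is only a toy model (names like `Wall` and `PixelState` are made up here), but it shows how presence reduces to a couple of boolean events per person instead of a continuous stream:

```python
# Toy sketch: each attendee is two bits of state on a wall -- light on/off
# and moving/still -- and only *changes* (events) are ever transmitted.

from dataclasses import dataclass

@dataclass
class PixelState:
    lit: bool = False      # flashlight on/off
    moving: bool = False   # waving the light or holding it still

class Wall:
    """10,000 attendees as 10,000 two-bit pixels."""
    def __init__(self, size: int = 10_000):
        self.pixels = [PixelState() for _ in range(size)]

    def apply_event(self, person: int, field: str, value: bool) -> None:
        # One event = one bit changed for one person; nothing else is sent.
        setattr(self.pixels[person], field, value)

    def lit_count(self) -> int:
        return sum(p.lit for p in self.pixels)

wall = Wall()
wall.apply_event(42, "lit", True)      # person 42 turns their light on
wall.apply_event(42, "moving", True)   # ...and starts waving it
wall.apply_event(7, "lit", True)
print(wall.lit_count())  # 2 lights on the wall
```

Between events, zero bandwidth per person; a quiet audience costs nothing at all.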
Right now we think that avatar representation should be as detailed as possible (including movement). But we can't have it photorealistic yet on any hardware. So there are simple cartoonish avatars. And... people accept them as real people!
Even more: we have e-mails labeled with a string like "firstname.lastname@example.org" and.... we have also learned to accept that string as a real person! No photorealistic 3D representation needed to write an e-mail! 😄
So we could reduce a person's representation to anything! All we need is for people on one side to know that there are real people on the other side.
Ok... they need to learn to accept that. So you need to show photorealistic 3D images first. It is like speaking in another language (another visual language). Some people already take on arbitrary avatar forms in VR multiplayer worlds and have learned it, but most people have not. That leads us to... we need to translate one visual language into the other. So people new to VR will see a "real world" visual representation up until they learn to accept the new VR representations (the same as it was with "e-mail" or 2D avatars).
So .... the metaverse is actually a translation tool. It will translate 10,000 people's data streams (with avatars and full body tracking) into a single data stream that a person (and their hardware/connection) can accept 🙂
That is all we need to make 10,000 people present at a live VR conference 😄
There is more to it. You could have a tool that auto-selects some of those people and represents them as full-body avatars to the speaker: those who want to ask a question, or just random ones (and changing over time).
And another tool to let people vote on which question to ask the speaker (so 10,000 questions will not overwhelm the speaker, while all of them could still be recorded and played back/answered later if needed). The speaker could even ask a question with several possible answers and see a wall of 10,000 pixels representing those individual answers (with a color, brightness, a small +/-, blinking, or in any other way). Then the speaker could sort those answers by any parameter (say ... "devs/gamers/other users") or ... leave them as a random picture. They could zoom into any part of that "wall" or even select one person to talk to.
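A rough sketch of that answer wall, with fake random data and made-up names, assuming each attendee contributes one tiny (group, answer) pair that the speaker can view raw or sorted:

```python
# Toy sketch: 10,000 attendees each contribute one "pixel" of data --
# a group tag plus an answer -- and the speaker re-arranges the wall.

from collections import Counter
import random

random.seed(1)  # fake, reproducible audience
groups = ["devs", "gamers", "other"]
answers = ["yes", "no", "maybe"]
wall = [(random.choice(groups), random.choice(answers)) for _ in range(10_000)]

# Raw wall: just a shuffled picture of colored pixels.
raw_view = wall

# Sorted wall: cluster the pixels by group, then by answer.
sorted_view = sorted(wall)

# Summaries the speaker can read at a glance.
tally = Counter(answer for _, answer in wall)
by_group = Counter(wall)
print(tally)                      # overall yes/no/maybe counts
print(by_group[("devs", "yes")])  # how many devs answered "yes"
```

Sorting or tallying 10,000 tiny tuples is trivial for any server, which is the point: the heavy lifting was already done by throwing away everything except the answer bits.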
So you could translate 10,000 data flows not only into a 10,000-bit-per-second stream, but you could also unfold them into 10,000 hours of full streams to watch later. Or make a montage (like today, when you have 10 camera streams and bring them down to a single stream). So the metaverse could do montage too 🙂
Every person present at the event would have their own personal translation of this kind. So they could see the speaker as a full stream (avatar, movements, and voice) plus some of the other people (friends or random ones). It would be possible to use a smartphone to get only the speaker's stream (and still be present as a "pixel" on the 10,000 wall). Even a smartwatch would do! (You would get the voice stream and the "pixels", or a 2D avatar of the speaker instead of a VR or video stream.)
So any person could be present from almost any device! Or.... they could press "record" and later have access to the recorded full stream (in the cloud) from any of their devices, in any format.
We could manage lots of info streams that way. And it could go in any direction. You could even have 10,000 comments on your message and reduce those to a wall of 10,000 like/dislike pixels to see them all at once, or even "answer Thanks! to all who liked" with one message 🙂
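The comment wall works the same way; here is a toy sketch (fake data, hypothetical shape) of reducing 10,000 comments to a like/dislike string and fanning one "Thanks!" out to everyone who liked:

```python
# Toy sketch: compress 10,000 comments into a one-character-per-comment
# wall, then send a single reply to every liker at once.

comments = [
    {"user": f"user{i}", "liked": (i % 3 != 0)}  # fake data: ~2/3 liked
    for i in range(10_000)
]

# The "wall": one pixel per comment, '+' for like, '-' for dislike.
wall = "".join("+" if c["liked"] else "-" for c in comments)

# One message fanned out to everyone who liked.
thanked = [c["user"] for c in comments if c["liked"]]
print(wall[:12])     # first pixels of the wall
print(len(thanked))  # how many people get the single "Thanks!"
```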
So what do you think of it? Is it possible with current hardware (personal/servers/clouds)? 🙂
Any other thoughts on Carmack's talk? 🙂
I've been thinking about this full body tracking.
An eye-opener to me, but you have most likely already been discussing it in the forums.
But oh my. I've been practicing Northern Shaolin kung fu for 13 years.
Soon these skills will be an achievement in VR as well.
IRL skills will be transferred to VR.
There is huge potential for crazy "workout, do what you can" killer apps/games if the tracking works the way it seems.
I need to get back into Unity ...