Cities like real life in VR. How far off are we?

mstdesigns Posts: 298
edited September 2013 in Off-Topic
How far off are we from creating a city like NYC that would be indistinguishable from real life?

The surface area of NYC is 1,213sqkm, which is about 1,213,000,000sqm. Of course the world isn't flat, so if we were to give volumetric shape to everything we would need at least 10x the surface area. This means about 12,130,000,000sqm to be textured. A square meter texture, to be indistinguishable from real life, would need to be about 8192x8192 pixels, which at 40bits of data per pixel would take up 320mb. This means that we need about 4 Exabytes just to store the textures! The polygonal data would probably be around the same, so that would make it roughly 8 Exabytes just for storage. Of course this is uncompressed data, but it is still huge! The rendering power needed would also be counted in Exaflops. So my guess is that we are around 15-20 years away from being able to do something like that.
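The arithmetic above is easy to reproduce in a few lines of Python. The inputs (8192x8192 textures, 40 bits per pixel, the 10x surface multiplier) are the assumptions from this post, not measured values; the result lands in the same ballpark as the estimate above:

```python
# Rough storage estimate for texturing every square metre of NYC,
# using the (hypothetical) figures from the post above.

SURFACE_M2 = 1_213_000_000      # NYC surface area in square metres
VOLUMETRIC_FACTOR = 10          # crude multiplier for non-flat geometry
TEX_RES = 8192                  # pixels per side of a 1 m^2 texture
BITS_PER_PIXEL = 40             # 10 bits per RGBA channel

bytes_per_texture = TEX_RES * TEX_RES * BITS_PER_PIXEL // 8
total_bytes = SURFACE_M2 * VOLUMETRIC_FACTOR * bytes_per_texture

print(f"per-texture: {bytes_per_texture / 2**20:.0f} MiB")   # ~320 MiB
print(f"textures only: {total_bytes / 1e18:.1f} EB")         # ~4 EB
```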

If we were to use voxels instead of current polygonal mapping, each cubic meter with 8192x8192x8192 voxels would need 512GB of data! :shock: If we were to enclose NYC in a cube, it would take up 1,213,000,000,000 cubic meters. That means we need about 1.2 Yottabytes of data to store NYC inside an ultra high definition minecraft world :shock: :shock: :shock:

Comments

  • Sawersadam Posts: 49
    I have no way to validate your arithmetic, but I doubt it would be convincing if you didn't also account for people, objects, and interiors, which would presumably also chew up a fair amount of resources.
  • mstdesigns Posts: 298
    Yeah, the polygonal one might be even higher than I thought (probably around 10-20 times more), since you can actually create a finite object with an almost infinite surface area. The voxel one is accurate though, due to the nature of voxels.
  • You would not store the voxel data as a flat grid; that would consume far more memory than will ever be useful or practical. :P

    Typically you use some form of empty space compression, such as a sparse voxel octree or a sparse point cloud. Using a "pointerless" SVO you can reduce memory size down to ~2 bits per occupied voxel, and with reasonable empty space you can reduce memory usage to 20-200MB for an 8k^3 volume.

    If you track unique sub-trees and merge them you can reduce the memory even further, by another order of magnitude in some cases. This changes your octree into a Directed Acyclic Graph (DAG); see http://www.cse.chalmers.se/~kampe/highResolutionSparseVoxelDAGs.pdf for an example of such a system. Of course there are other methods as well that can give similar results, such as Perfect Spatial Hashing: http://research.microsoft.com/en-us/um/people/hoppe/perfecthash.pdf.

    This doesn't directly answer your question but should show, at the very least, that we're not quite as far off as your numbers might suggest. Combine a compact and efficient volumetric representation with procedural generation - usually using some sort of shape grammar to build up complex man-made objects from simple parts - and things start getting very interesting. :)
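To make the pointerless-SVO idea above concrete, here is a minimal sketch: nodes are stored breadth-first as one-byte child masks, so no child pointers are needed and empty space costs nothing. The sphere test shape and the 32^3 resolution are arbitrary choices for illustration, and a production SVO would also stop subdividing homogeneous solid regions, which this sketch does not:

```python
def any_occupied(occupied, x, y, z, e):
    # Brute-force occupancy test over an e^3 region (fine for a demo).
    return any(occupied(i, j, k)
               for i in range(x, x + e)
               for j in range(y, y + e)
               for k in range(z, z + e))

def build_pointerless_svo(occupied, size):
    """Breadth-first pointerless SVO: each node is one byte (an 8-bit
    child mask). Children of all nodes in a level follow in mask order,
    so no pointers are stored; empty regions cost nothing."""
    masks = []
    level = [(0, 0, 0, size)]              # (x, y, z, extent) of root
    while level and level[0][3] > 1:
        next_level = []
        for (x, y, z, e) in level:
            h = e // 2
            mask = 0
            for i in range(8):
                cx = x + (i & 1) * h
                cy = y + ((i >> 1) & 1) * h
                cz = z + ((i >> 2) & 1) * h
                # descend only into octants that contain something
                if any_occupied(occupied, cx, cy, cz, h):
                    mask |= 1 << i
                    next_level.append((cx, cy, cz, h))
            masks.append(mask)
        level = next_level
    return masks, len(level)               # masks + occupied-voxel count

# Example: a solid sphere inside a 32^3 volume (arbitrary test shape).
N = 32
def sphere(x, y, z, c=N // 2, r=N // 3):
    return (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= r * r

masks, voxels = build_pointerless_svo(sphere, N)
print(f"{len(masks)} one-byte nodes for {voxels} occupied voxels "
      f"({8 * len(masks) / voxels:.1f} bits per voxel)")
```

Even this naive version lands around 2 bits per occupied voxel, because most of the bytes go to the leaf-parent level where one mask covers up to 8 voxels.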
  • mstdesigns Posts: 298
    Indeed, I was referring to totally uncompressed data, as we are unable to calculate directly how much volume all the objects of NYC take up. Perhaps taking data from 3D maps would let us eliminate the empty space with better accuracy (of course we should have 50-100 meters of soil in our map, just like Minecraft does). Still, current graphics cards allow for a maximum of 4096x4096x4096 voxel rendering, so we are indeed quite a bit far off :P The main point of voxels, though, is being able to design the insides of objects as well as the outside. Thus, I don't believe we could limit each cubic meter to just 200mb if we rendered the insides as well (for example human organs, steel-reinforced concrete in buildings, etc). We would also definitely use instancing, which would probably make it 1000x smaller. However, the more instancing, the less accurate the simulation.
  • I just want to clarify a few things if you don't mind. :)

    First, there is no reason to limit GPUs to 4k^3 volumes. For example, in one of the papers I linked, a 128k^3 volume with over 19 billion solid voxels was being rendered on a GTX 680 consuming less than 1GB of memory (though without material data). Remember, this data is not stored in a flat 3D grid.

    Next, I call SVOs, Perfect Spatial Hashing, etc. compression, but remember: it is completely lossless compression. In other words, modern voxel systems never store the data in a flat grid (at least not for very large sections anyway). Some systems, such as Gigavoxels, will store data in "bricks" for easy interpolation and hardware rendering, but these bricks tend to be small.

    Higher resolution volumes tend to have a higher ratio of empty space, since surfaces become better defined and interiors can be modeled correctly (and interiors contain a lot of empty space). In addition, homogeneous space and repeating structures can be compressed losslessly as well. Even random-seeming structures, such as hairballs, have a lot of redundant 3D structure. Think of this as "instancing", but at arbitrary scales (sub-trees) and with no loss of fidelity.

    In addition, further "lossy" compression can be applied. This ranges from image-like methods extended to 3 dimensions (for example, ASTC supports 3D volumes if I remember correctly, though this would require bricks) to procedural generators (which, fed with appropriate data, can produce structures that mimic their real equivalents). These kinds of systems can reduce the memory footprint by orders of magnitude as well.

    Finally, the data cannot be stored in memory all at once. Similar to MegaTexture-like systems, data would be generated / streamed based on what is actually required for rendering. Good occlusion culling will help here, which is really useful in city-like environments anyway. Read about Gigavoxels if you want an example of an actual implementation of some of these ideas. While that system is not ideal, as more modern research shows much more compact and efficient systems are possible, it's still a great step in the right direction and the papers are well written and easy to understand. :)
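The sub-tree merging mentioned earlier can be illustrated with a toy sketch: canonicalise sub-trees bottom-up and keep one copy of each distinct one, which turns the octree into a DAG. This is only a sketch of the idea, not the encoding used in the linked paper; the 32^3 "ground slab" volume is an arbitrary example chosen because it is full of repeated structure:

```python
def build(x, y, z, e, solid):
    # Build a full octree over an e^3 region; leaves are True (solid)
    # or None (empty). Every internal node is a fresh 8-tuple.
    if e == 1:
        return True if solid(x, y, z) else None
    h = e // 2
    return tuple(build(x + (i & 1) * h, y + ((i >> 1) & 1) * h,
                       z + ((i >> 2) & 1) * h, h, solid)
                 for i in range(8))

def dedup(node, table):
    # Bottom-up: canonicalise children first, then reuse an existing
    # identical sub-tree if one was seen before (tree -> DAG).
    if not isinstance(node, tuple):
        return node
    canon = tuple(dedup(c, table) for c in node)
    return table.setdefault(canon, canon)

def count_nodes(node, seen):
    # Count distinct internal nodes reachable from `node` (by identity).
    if isinstance(node, tuple) and id(node) not in seen:
        seen.add(id(node))
        for c in node:
            count_nodes(c, seen)
    return len(seen)

# A flat ground slab repeats the same sub-trees everywhere, so it
# dedups extremely well.
N = 32
slab = build(0, 0, 0, N, lambda x, y, z: y < 8)
tree_nodes = count_nodes(slab, set())
table = {}
dag = dedup(slab, table)
print(f"tree: {tree_nodes} nodes, DAG: {count_nodes(dag, set())} nodes")
```

For this slab the full tree has 4,681 internal nodes while the DAG needs only 9 distinct ones, and the content is bit-for-bit identical; real scenes dedup less dramatically, but the paper reports order-of-magnitude wins.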
  • kojack Posts: 6,389 Volunteer Moderator
    mstdesigns wrote:
    which at 40bits of data per pixel would take up 320mb.
    Why 40 bits? Are you using R10G10B10A10?
  • mstdesigns Posts: 298
    kojack wrote:
    mstdesigns wrote:
    which at 40bits of data per pixel would take up 320mb.
    Why 40 bits? Are you using R10G10B10A10?

    Yes, I used RGBA. Alpha would be essential for untextured single-color voxels. How else would we render voxelized water and glass? :D
  • kojack Posts: 6,389 Volunteer Moderator
    I'm not asking if there's alpha, I'm asking why 40 bit instead of 32 bit. Most monitors (and definitely the Rift) can't show 10 bits per channel; nvidia and amd only enable 10 bit output on their pro cards (quadro and firepro respectively). If this is for the future, you might as well go with 64 bit deep colour or 128 bit floating point hdr colour.
    mstdesigns wrote:
    If we were to use voxels instead of current polygonal mapping, each cubic meter with 8192x8192x8192 voxels would need 512GB of data! :shock:
    An 8192x8192x8192 cube of voxels at 40 bits is about 2.75TB, not 512GB.
    mstdesigns wrote:
    If we were to enclose NYC in a cube, it would take up 1,213,000,000,000 cubic meters. That means we need about 1.2 Yottabytes of data to store NYC inside an ultra high definition minecraft world
    Enclosing 1213km^2 in a cube means the sides are roughly 34828m x 34828m x 34828m. That's about 116 yotta bytes (at ~2.75TB per cubic metre).

    At my max modem rate (I can get around 10Mbytes/s when really lucky), it would take over 350 billion years to download that much.

    :D
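These corrections can be recomputed in a few lines (decimal units throughout, 40 bits = 5 bytes per voxel; the 10 MB/s link speed is the figure quoted above; the exact totals shift a little depending on whether you use binary or decimal prefixes):

```python
# Sanity-checking the voxel-cube figures from the post above.
VOXELS_PER_METRE = 8192
BYTES_PER_VOXEL = 5                           # 40 bits per voxel

per_m3 = VOXELS_PER_METRE ** 3 * BYTES_PER_VOXEL
print(f"{per_m3 / 1e12:.2f} TB per cubic metre")        # ~2.75 TB

side = round(1_213_000_000 ** 0.5)            # sqrt(1213 km^2) ~ 34,828 m
total = side ** 3 * per_m3
print(f"{total / 1e24:.0f} YB for the whole cube")      # ~116 YB

seconds = total / 10e6                        # at 10 MB/s
years = seconds / (3600 * 24 * 365.25)
print(f"{years / 1e9:.0f} billion years to download")
```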
  • mscanfp Posts: 659
    As I try to keep up with a thread like this, I think about my English Major in College and sigh a little bit.

    Mike
  • poltergeistluxx Posts: 51
    Hiro Protagonist
    If Google Earth implements a feature that calculates everything in HD 3D (it is already possible with Google Street View and the Oculus Rift), then you could probably walk through cities in 3D with the Rift...

    But why use Oculus VR in a real-life city? There are thousands of games coming out where you can walk through even better places and big cities ;)
  • mscanfp wrote:
    As I try to keep up with a thread like this, I think about my English Major in College and sigh a little bit.

    Mike


    Haha, hit a bowl and started reading this topic, and then got to your post. :lol:
    +1

    I have nothing else to contribute. :ugeek:
  • Also, it depends on how big each voxel is relative to inches or centimetres.