Oculus Medium Suggestions


Comments

  • RaptureReaper Posts: 9
    NerveGear
    Once you guys bring in masks and alphas, you'll pave the way to reaching ZBrush territory :)
  • hughJ Posts: 27
    Brain Burst
    edited June 14
    Fluorescent light tubes (light sources that are long ovals), useful for things like starship running lights, neon, etc.

    Better yet, convert a stamp (or layer) to a light source.  [Not sure if this is possible, but it would essentially make the object invisible and ultra-emissive in the same way that the point light is, drawing its color and/or dispersal from the stamp or layer.]  Would be cool for all sorts of things, city "window" illumination, etc.

    Middle ground: a "drawable", intensity-controllable light source, maybe built off the capsule or sphere.
    I think arbitrarily shaped area light sources tend to make things pretty complicated -- AFAIK it boils down to calculating visibility between the surface of the light source and everything in the scene in order to produce an appropriate penumbra.  Any renderer that supports this these days is probably using a variant of path tracing.  Directional lights, spotlights, and point light sources are simplified approximations (hacks), which is why they can be used in real-time rendering.
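    To make that concrete, here's a rough Python sketch of the difference; the occluded() and sample_light_surface() callables are hypothetical stand-ins for a renderer's actual ray casting:

    ```python
    # Rough sketch: point light vs. area light shadowing.
    def point_light_shadow(shade_pt, light_pos, occluded):
        # One visibility test per shaded point: fully lit or fully dark (hard edge).
        return 0.0 if occluded(shade_pt, light_pos) else 1.0

    def area_light_shadow(shade_pt, sample_light_surface, occluded, n=64):
        # Monte Carlo estimate of how much of the light's surface is visible
        # from the shaded point; partial visibility is the penumbra.
        visible = sum(0 if occluded(shade_pt, sample_light_surface()) else 1
                      for _ in range(n))
        return visible / n

    # Toy usage: a light spanning x in [0, 1] at y = 1, with half of it blocked.
    import random
    shadow = area_light_shadow(
        shade_pt=(0.0, 0.0),
        sample_light_surface=lambda: (random.random(), 1.0),
        occluded=lambda p, light: light[0] < 0.5)
    print(round(shadow, 2))  # ~0.5: the point sits in the penumbra
    ```

    That per-shaded-point sampling cost is exactly what keeps true area lights out of real-time budgets.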
  • martinegail Posts: 1
    NerveGear
    Nice to know about Medium's functionality. We'll learn something new from it.
  • DreamShaper Posts: 725
    3Jane
    hughJ said:
    I think arbitrarily shaped area light sources tend to make things pretty complicated -- AFAIK it boils down to calculating visibility between the surface of the light source and everything in the scene in order to produce an appropriate penumbra.  Any renderer that supports this these days is probably using a variant of path tracing.  Directional lights, spotlights, and point light sources are simplified approximations (hacks), which is why they can be used in real-time rendering.
    Interesting.  Are there other lighting techniques besides those that have potential for real-time use?
  • hughJ Posts: 27
    Brain Burst
    Nothing that I can think of.  For the most part, real-time rendering boils down to point-based approximations of light emitters, or variations on that.

    In the context of Medium, I would think the feasible route for them would be a fancier "photo" rendering mode (sort of akin to what you see in Forza/Gran Turismo and others) where you sacrifice some interactivity and speed in order to give it more time to generate beauty shots.  Granted, in VR your head/POV is always in motion, so you can't really do a long multi-pass/accumulation render like you can with a fixed camera, but if the objects in the scene are fixed for a period of time it would at least allow time to bake certain things (shadows and diffuse/Lambert shading).

    ...Or you could simply limit that beauty photo mode to the hand-held camera, which would let you fix the camera fully and give it however many seconds to generate an image.  There are a bunch of different ways you could do this, really, everything from merely compositing a bunch of frames from their existing renderer (similar to what people used to do here with Photoshop to get multiple-light-source renders in Medium) to an altogether different pseudo-realtime renderer.
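    A minimal sketch of that fixed-camera accumulation idea, with a hypothetical render_frame() standing in for the existing real-time renderer:

    ```python
    import numpy as np

    def accumulate_beauty_shot(render_frame, n_frames=256):
        # Average many jittered frames from a locked camera; noise from soft
        # shadows / AA averages out over a few seconds of accumulation.
        acc = np.zeros_like(render_frame(), dtype=np.float64)
        for _ in range(n_frames):
            acc += render_frame()
        return (acc / n_frames).astype(np.uint8)

    # Toy usage: a "renderer" that returns a noisy 2x2 grayscale frame.
    rng = np.random.default_rng(0)
    noisy = lambda: np.clip(128 + rng.normal(0, 40, (2, 2)), 0, 255)
    print(accumulate_beauty_shot(noisy))  # converges toward ~128 everywhere
    ```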

    I think I've said this before somewhere, but I feel like using Medium boils down to 3 distinct steps: sculpting -> painting -> rendering.  Each of those has unique data and algorithmic needs that probably ought to be a distinct mode.  Sculpting benefits a ton from their voxel engine, whereas painting is much more suited to polygonal meshes with proper texture mapping, and rendering ought to be geared more toward fidelity at the cost of interactivity.  Personally, I'd rather sacrifice some of the ease of seamlessly jumping back and forth between sculpting, painting, and rendering if it meant much more useful implementations of each step, and avoiding having to spend more time in ZBrush/Maya/Blender than in Medium to get very polished results.

    It's a bit of a bummer that 9 times out of 10, when you see a very good-looking Medium sculpt online, much of what people are impressed by was neither done in Medium nor even possible to do in Medium.  Moreover, I worry this has a knock-on effect of discouraging new users once they realize it, because the implication is that you need third-party professional non-VR tools to get results that don't look like colored mashed-potato sculptures.
  • P3nT4gR4m Posts: 1,705 Valuable Player
    @hughJ I've felt the same way about paint for quite some time. Once they have UV unwrapping sorted out, using the mesh for paint operations would definitely be the way forward, even if, like you said, you have to lock it down from editing. It'd be useful as hell to be able to full-suite the diffuse map at least. Maybe even throw some normal painting in there.
  • DreamShaper Posts: 725
    3Jane
    edited June 21
    @hughJ Seems like a long-render still-frame mode might be possible.  Alternately, maybe do something like Tilt Brush does and save the key data to perform an "out of VR" high-resolution render.  I suspect we'll see some form of polygon meshes at some point in the future, although it's hard to say when.  It seems like they're building a framework with that approach in mind.

    Hopefully an integrated VR pipeline emerges, where each area can be specialized to do what it does best and the data can be easily transferred between stages.  I suppose eventually we'll get an all-in-one package of some kind, but that's going to require more power.
  • hughJ Posts: 27
    Brain Burst

    @jessicazeta
    Not that you folks owe us this, but I'd be interested in hearing some insight into what your mid-term and long-term internal development roadmaps are like for Medium.  Not necessarily as comprehensive as a Trello board or a Carmack-style .plan log, but maybe just a 'state of the union' overview a couple of times a year?  Obviously there's a spectrum of who your users are and how they utilize Medium, and that dictates what sorts of feature additions and fixes they desire.  Similarly, I'd imagine that within your own studio you all have thoughts that tug in different directions.

    What is Medium to you guys?  Is it popular (active users) relative to other Oculus applications?  Has it been growing?  Do you need more evangelism from your users?  How big is the team working on it, and how committed is Facebook to a PCVR-based art tool?  What expectations should your users have for its future?  Is Medium more likely to streamline features in an attempt to become cross-platform with low-powered mobile devices, or is there desire to continually expand features and bridge the gap between itself and professional CAD tools?  

    Presumably at some point Pixologic, Blender, and/or Autodesk are going to integrate their own VR support into their portfolio of tools, so I'm curious what that prospect represents to Medium's development.  Does that prompt you guys to consider things like adding an API/SDK for 3rd party plugins, or perhaps even spinning yourself off to become a plugin for other CAD tools?  

    Consumer VR seems to be at a crossroads right now as it tries to find a business model that satisfies the desire for both growth and sustainable profit, and Medium (imo) seems to exist in a weird position between the direction of affordable mainstream mobile VR (Quest) and the world of enterprise/professional and enthusiast/hobbyist digital artists who spend thousands on workstations, Cintiqs, iPad Pros, and ZBrush licenses.  I guess there's an untapped third market segment in there if a Rift S/Rift 2 were to achieve next-gen Xbox compatibility -- Medium could be a big deal for a mainstream platform like that.
  • SamuelAB Posts: 51
    Hiro Protagonist
    A direct link to Autodesk Revit, a major architectural modelling application.
    https://www.autodesk.ca/en/products/revit/overview

    If we could link and update Revit models in Medium, sculpt, and then send simplified versions of those models back to Revit, it would create a revolutionary platform for architectural design. Right now there are no spatially intuitive ways to model architectural concepts in VR, and this would be a fleshed-out solution for the conceptual phase.

    It is possible to do this at the moment (bring a Revit model to VR, design in Medium, and bring it back to Revit), but the process is not automated and requires semi-specialized knowledge. Alignment of the geometry within Revit is also not straightforward.

    A direct link to Revit would be extremely beneficial to the architectural design industry.
  • Octops Posts: 3
    NerveGear
    I recently went back into Medium after not touching it for a long time, which gave me a fresh set of eyes. Here are a few things I ran into.

    -Transparent material. I used a reference image, and only having an opaque material made it hard to trace.
    -Scrolling with the thumbstick in the file/stamp browsers. It makes sense and is just intuitive.
    -The cut tool has no steady stroke, which leads me to the next point.
    -The brush constraints should be global switches, not per brush. Perhaps the constraints and snaps could be in the same location, as they are closely related.
    -Switch layer by pointing at the layer piece and clicking a button, not having to go into the menu.
    -Random color per layer visualization toggle.
    -The surface constraint is wonky. It should optionally follow the normal of the surface, and a brush depth setting is needed.

    And some tools and features I'd like to see:

    -Primitives, as in parametric shapes. Having 50,000 stamps doesn't make sense; having a dozen very flexible primitives does. When you press left/right on the right thumbstick to edit a tool, handles could show up on primitives, allowing you to adjust them with your left hand.
    For non-primitive stamps this could be used for non-uniform scaling.

    -Another line tool that works "point to point", i.e. click A is registered, and when you place click B, a line is formed from A to B. Between clicks you could change size and even stamps for very controlled tapers and complex profiles (see the sketch after this list).

    -Multicopy/Array. Even just a sample-rate setting for the brush would be a quick-and-dirty option, as a low sample rate would result in copies along your stroke rather than a continuous line (also covered in the sketch below).

    -Some gesture-based controls. Have you experimented with something like the hotbox in Maya or similar systems?
    https://blenderartists.org/uploads/default/original/4X/f/5/5/f552f610c98b5d3e85a4669ad8ca5d3c33347a7c.gif
    Maybe you could delete a model by grabbing it and throwing it away, or shaking it really fast, for example. Use the medium at your disposal. Just having floating screens for your controls seems a bit conservative.

    -A lattice tool
    -Of course masking and alphas.
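    A quick Python sketch of how the point-to-point line and multicopy ideas could interact; place_stamp() is a hypothetical stand-in for Medium's stamp operation:

    ```python
    import math

    def line_tool(a, b, size_a, size_b, spacing, place_stamp):
        # Interpolate position and brush size from click A to click B; a coarse
        # spacing gives discrete copies (multicopy), a fine one a tapered line.
        steps = max(1, int(math.dist(a, b) / spacing))
        for i in range(steps + 1):
            t = i / steps
            pos = tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))
            place_stamp(pos, size_a + t * (size_b - size_a))

    # Toy usage: a tapering line of stamps from (0,0,0) to (10,0,0).
    line_tool((0, 0, 0), (10, 0, 0), size_a=2.0, size_b=0.5, spacing=2.5,
              place_stamp=lambda p, s: print(f"stamp at {p}, size {s:.2f}"))
    ```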


  • DreamShaper Posts: 725
    3Jane
    edited July 18
    @Octops An interesting mix of ideas, though some are more problematic than others.

    Transparency: likely too resource-intensive at this time.

    Automated layer switching: two sides to this one. Avoiding the scene graph would be nice, just to check a box. However, automated layer switching, without a combo that can't easily be triggered by accident, could become a nightmare of finding yourself unintentionally working on the wrong layer. It has potential. [Edit] Then again, not automatically switching is annoying at times and causes the same issues. Still, experience with needing to use isolate with the flood-fill option, and with deselections when using the manipulation features, leaves me a bit wary of accidental switching.

    Grab and throw to discard: that would be horrible. Sensors lose tracking sometimes, and a random misread could mean a lot of lost work. Working on large sculpts can involve significant load times. It's not that hard to click New Sculpt.

    Cut tool: no real preference on this. I very rarely use it; negative stamps are almost always more useful, and faster.

    Random color: some potential in that general area. Maybe a "select random" on the palette; if you had that and your brush set to stomp, it would get the same effect. I'd like to see stomp get some other options, such as a global opacity for tonal control/color mixing. Not sure how processing-intensive that would be, but I'd like to find out. I'd really be curious to see what random + opacity could do as a combination. Some interesting lines of thought in that area.

    Parametrics: while these would be useful, particularly for primitives, I also like having stamp collections, so I wouldn't want to see that feature deprecated.  Parametrics aren't quite as useful for multi-color stamps and more texture-oriented ones.

    Point-to-point line drawing: definitely in favor of this.  Pathing a curve would be awesome as well.

    Multi-copy: in favor of something like this; the ability to place copies at set intervals would be a nice addition.


  • daniellieske Posts: 9
    NerveGear
    I noticed a thing working with snapping: for rotational snapping, it would be great to be able to snap on only one of the three axes.

    I was drawing with a cube and the line constraint, and the rotational snapping helped me keep my lines perfectly vertical. However, I couldn't rotate my cube stamp freely, because rotational snapping snaps all axes at the same time. A rotational snap on only the Z axis would have been useful.
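    A tiny sketch of what per-axis snapping could look like, assuming the rotation is available as Euler angles in a fixed rotation order (which axis "Z" means depends on that order):

    ```python
    def snap_rotation(euler_deg, axis=2, increment=15.0):
        # Snap only one Euler angle (here Z) to a grid, leaving the other
        # axes free, so a stamp stays vertical but can still rotate freely.
        snapped = list(euler_deg)
        snapped[axis] = round(snapped[axis] / increment) * increment
        return tuple(snapped)

    print(snap_rotation((12.3, 47.8, 52.1)))  # -> (12.3, 47.8, 45.0)
    ```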

    Thanks for your consideration.

    -DanieL 
  • Metrons Posts: 48
    Brain Burst
    I've asked before.

    Please, please, please, please, please, please let us scale our brush shapes. Ideally I would use my left hand and pull on the Y axis to scale up my brush, or scale in ZX along two axes. This should have snapping as an option as well. Nice clean snapped shapes are what we want.

    If we could scale our brush shapes, EVERY brush then becomes MANY brushes. If I'm sculpting fingers with a tube-type brush, I could scale in Y and have much longer fingers. Currently I'm limited to uniform scale. With scaling of the tip, we could have endless options with just a single brush shape.
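    A rough sketch of the idea, treating a stamp as a list of points and applying a per-axis scale with optional snapping to clean increments:

    ```python
    def scale_stamp(points, sx=1.0, sy=1.0, sz=1.0, snap=None):
        # Non-uniform scale: one tube brush becomes long fingers (stretch Y)
        # or flat ribbons (squash Z). snap=0.25 rounds factors to clean steps.
        if snap:
            sx, sy, sz = (round(s / snap) * snap for s in (sx, sy, sz))
        return [(x * sx, y * sy, z * sz) for x, y, z in points]

    tube = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]
    print(scale_stamp(tube, sy=2.1, snap=0.25))  # Y factor snaps to 2.0
    ```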


    Pleeeeeeeeeeeease add this. Please! Every brush could become so much more, and way more useful to boot.
  • Metrons Posts: 48
    Brain Burst
    Transform with falloff, please. I sculpted some arms last night. The arms ended up too long, and there is nothing I can do to tweak these shapes cleanly. Sorry, but ZBrush's Transpose would be ideal. Something like that, anyway.
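    For reference, a minimal sketch of what a Transpose-style move with falloff does: vertices near the grab point move fully, and influence fades smoothly to zero at a chosen radius (the cosine falloff here is just one reasonable choice):

    ```python
    import math

    def move_with_falloff(points, center, offset, radius):
        # Smooth (cosine) falloff: weight 1 at the grab point, 0 at the radius.
        moved = []
        for p in points:
            d = math.dist(p, center)
            w = 0.0 if d >= radius else 0.5 * (1 + math.cos(math.pi * d / radius))
            moved.append(tuple(c + w * o for c, o in zip(p, offset)))
        return moved

    # Toy usage: pull the tip of a too-long 3-point "arm" back along X.
    arm = [(0, 0, 0), (5, 0, 0), (10, 0, 0)]
    print(move_with_falloff(arm, center=(10, 0, 0), offset=(-2, 0, 0), radius=8))
    ```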
  • DreamShaper Posts: 725
    3Jane
    Metrons said:
    I've asked before.

    Please, please, please, please, please, please let us scale our brush shapes. Ideally I would use my left hand and pull on the Y axis to scale up my brush, or scale in ZX along two axes. This should have snapping as an option as well. Nice clean snapped shapes are what we want.

    If we could scale our brush shapes, EVERY brush then becomes MANY brushes. If I'm sculpting fingers with a tube-type brush, I could scale in Y and have much longer fingers. Currently I'm limited to uniform scale. With scaling of the tip, we could have endless options with just a single brush shape.


    Pleeeeeeeeeeeease add this. Please! Every brush could become so much more, and way more useful to boot.
    I suspect it's saving the form during stamp creation, so there would likely be a lag while calculating and storing what is essentially a temporary stamp, along with whatever move calculations would be needed to stretch it.  "Pull dots", "pull cords", or something similar would be pretty good for this, and for minimizing accidental triggering.  It's interesting, and if it can't be done yet, it would be a nice feature when hardware gets a bit quicker.

    Of course, the next question is: if the tech is developed, why stop with stamps?  Imagine being able to adjust an entire layer that way.
  • crazluci Posts: 2
    NerveGear
    I have a long way to go in reading this entire thread, seeing what suggestions have been made, and (especially) reading the responses coming from Oculus devs and knowledgeable users.
    I, myself, have reason to want to take a Medium sculpt and make it available to other point-cloud software that I use. It appears (guessing) that Medium purges the voxel information buffer iteratively once the iso-surface is calculated for screen display (this is visible in the Move tool). I could be waaaay off, but I think I want to use that form:
    I want either the sculpt as a point cloud, to be reprocessed in a 2D transfer-function editor for visible RGBA surfaces via densities, or the sculpt as a series of BMP, DICOM, JPG, or PNG images, serially sliced in the x-y plane. These can be further manipulated once preprocessed.
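    To illustrate the image-stack half of that (Medium exposes no such export; the numpy array below is a hypothetical stand-in for the sculpt's voxel densities):

    ```python
    import numpy as np
    from PIL import Image

    def export_slices(volume, prefix="slice"):
        # volume: float densities in [0, 1], shape (z, y, x); writes one
        # grayscale PNG per x-y slice, serially along z.
        for z in range(volume.shape[0]):
            Image.fromarray((volume[z] * 255).astype(np.uint8), mode="L") \
                 .save(f"{prefix}_{z:04d}.png")

    # Toy usage: a sphere-ish density field sliced into 32 images.
    zz, yy, xx = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
    export_slices(np.clip(1.0 - np.sqrt(xx**2 + yy**2 + zz**2), 0, 1))
    ```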

    I also want to use Medium's tools to reconform other DICOM sets or image stacks, warping them with the Liquify, Morph, and Move tools.
    Thoughts?
    CALuce