https://unity3d.com/learn/tutorials/topics/scripting -- Unity's beginner gameplay tutorials on Scripting
2 methods here: Start() at line 7 and Update() at line 13 -- the word in front of the parentheses is the method's name; whatever is written inside the () are its parameters (empty in both cases here)
Start is called once, at the beginning of a scene or whenever a game object becomes active for the first time
Update is called once per frame; use it to look for user input, move an object, determine the amount of health used during gameplay, and other things relevant to ongoing gameplay
Specific timings relate to scripts, i.e. Awake- called before Start. Called when the game first starts up, or when an object first exists in the game; if you need something to happen at the very start of a script's life, put it inside the Awake method
To turn a line of code into a ‘comment’, put 2 forward slashes (//) in front of the line of code
To comment out a whole block of code, open it with /* (line 18)
To end the block comment and reactivate the code that follows, close it with */ (line 21)
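Both comment styles in one small C# sketch (class and field names are just for illustration):

```csharp
public class CommentDemo
{
    // Two forward slashes comment out a single line.
    public int health = 100; // they can also follow code on the same line

    /* A block comment opens with slash-star
       and everything is ignored until the closing star-slash,
       which is handy for commenting out several lines at once. */
    public void TakeDamage(int amount)
    {
        health -= amount;
    }
}
```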
OnEnable method: runs in between Awake and Start, and again every time the object is re-enabled (its counterpart OnDisable runs when it's disabled). Use it if you need something called every time an object is enabled or disabled
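A minimal sketch of the event order described above -- attach it to any GameObject and the Console shows Awake firing first, then OnEnable, then Start (once), then Update every frame:

```csharp
using UnityEngine;

public class LifecycleDemo : MonoBehaviour
{
    void Awake()
    {
        // Runs before anything else, as soon as the object exists in the scene.
        Debug.Log("Awake: object exists, do first-thing setup here");
    }

    void OnEnable()
    {
        // Runs between Awake and Start, and again every time the object is re-enabled.
        Debug.Log("OnEnable: object was just enabled");
    }

    void Start()
    {
        // Runs once, before the first frame update.
        Debug.Log("Start: scene is starting");
    }

    void Update()
    {
        // Runs once per frame: input checks, movement, ongoing gameplay.
        if (Input.GetKeyDown(KeyCode.Space))
            Debug.Log("Update: space was pressed this frame");
    }
}
```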
Here’s the Unity tutorial on this (Awake and Start); https://unity3d.com/learn/tutorials/topics/scripting/awake-and-start?playlist=17117
https://www.youtube.com/watch?v=M1KuIi5pCno - bit of a better explanation (see above image)
https://www.youtube.com/watch?v=HXVsSqL8l4Q -- this series from Charger Games seems to do a better job of explaining. It’s all pretty much the same info, but Raja goes a bit slower. Or at least he points to things as they happen on the screen.
FixedUpdate() runs on a fixed timestep to stay 100% in sync w/ the Unity physics system; use it for applying forces, detecting collisions, or anywhere that might require physical movement
OnMouseDown() is called only when a specific event happens; in this case, whenever the user clicks on this object (the object needs a collider for the click to register)
Reset(); useful for setting up default properties of a script
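A hedged sketch of these three methods together; it assumes the GameObject has a Rigidbody (for the physics call) and a Collider (OnMouseDown only fires on colliders), and the field name is made up for illustration:

```csharp
using UnityEngine;

public class PhysicsAndEvents : MonoBehaviour
{
    public float pushForce = 10f; // default value; see Reset below

    void FixedUpdate()
    {
        // Runs on the fixed physics timestep, in sync with the physics system.
        // Apply forces here rather than in Update.
        GetComponent<Rigidbody>().AddForce(Vector3.forward * pushForce * Time.fixedDeltaTime);
    }

    void OnMouseDown()
    {
        // Event-driven: called only when the user clicks this object's collider.
        Debug.Log("Object was clicked");
    }

    void Reset()
    {
        // Editor-only message: called when the component is first added, or when
        // you choose Reset in the Inspector -- a good place for default values.
        pushForce = 10f;
    }
}
```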
https://www.youtube.com/watch?v=lV-Hwjl90Ow -- Unity Primer on Scripting. Recorded session from Mike Geig (@Mikegeig)
"It [Binaural audio] puts you in the exact sound field as originally intended," says Prof. Choueiri from Princeton’s 3D Audio and Applied Acoustics lab. "You can hear a bird flying over your head. You’ll hear a whisper in one ear. And if you record a band, you’ll hear it exactly as the band was positioned when playing."
The really cool thing about this article is the demonstration of binaural audio that identifies how the brain hears positioning in sound. Here’s what it looks like, but definitely check it out with headphones:
You can also get a pretty good understanding of the difference between Surround Sound and 3D audio from this article
https://www.ossic.com/blog/2015/10/15/3d-audio-terms -- highlights from this article. Actually it was a good primer in sound terminology:
SOUND FIELD- A sound field (or soundscape if you want to get fancy) is the area or distance where something can still be heard.
SOUND LOCALIZATION- your ability to recognize where a sound is coming from. This includes direction and distance. In regards to virtual reality, it can be when a placement is assigned to an object in virtual space.
HRTF- Head Related Transfer Function (HRTF) is the effect you have on the sound field just by being there. It is measured at your ears. It takes into account the many ways sound waves interact with your body, including the outer ear shape, inner ear, head shape, and even the torso. Everyone has a unique HRTF, meaning that we all hear differently.
BINAURAL AUDIO- a method of recording sound using 2 microphones in a dummy head with ears and other human features. It is created to make a listener feel as though they are in the room the sound is coming from. It uses only the left and right playback channels, so while not perfect, it gets closer to what 3D audio entails; you still miss some of the height and depth of full 3D audio. Traditionally, recordings have been made using two methods: mono and stereo. Mono uses a single microphone to pick up sound, while stereo uses two, spaced apart from each other. Binaural recording takes the stereo method one step further by placing two microphones in ear-like cavities on either side of a stand or dummy head. Because the dummy head recreates the density and shape of a human head, these microphones capture and process sound exactly as it would be heard by human ears, preserving interaural cues.
3D AUDIO- 3D audio allows the user to hear three-dimensional sounds such as what’s above/below, near/far, and around them. It gives spatial location to sound, allowing us to know where the sound is coming from. Is there a door opening behind me, or a bird overhead?
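Tying these terms back to Unity: a minimal sketch of making a sound positional in-engine, assuming the GameObject carries an AudioSource with a clip assigned. With spatialBlend at 1.0 the source is fully 3D, so Unity attenuates and pans the clip based on where it sits relative to the listener:

```csharp
using UnityEngine;

public class SpatialSoundSetup : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                        // 0 = flat 2D, 1 = fully 3D/positional
        source.rolloffMode = AudioRolloffMode.Logarithmic; // distance-attenuation curve
        source.maxDistance = 25f;                          // beyond this, the sound field effectively ends
        source.Play();
    }
}
```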
Made w/ Unity
Re: The Fall Part 2, posted 8/8/16
“A lot of my design decisions are centred around one core goal-- I like to be able to start playing the game, from any point in any scene, from the editor”; this is an interesting concept to me because it’s really getting to the heart of gaming- can you play the damn thing and is it fun to do it? Setting it up so you can play from any moment, though seemingly obvious, according to the author, is a really novel idea.
“Learning to rely on custom gizmos to help me navigate my scenes. Drawing arrows between connected objects, or even displaying custom icons when important variables are null has ended up saving me some time. That way, having objects connected via the inspector offers a great bonus to show comprehensive design overviews, as opposed to just being a pain in the ass.”
“I initialize as much as I reasonably can or write simple testing code inside Awake or Start [glad I dealt with those concepts in my Scripting section this blog]-- that way, if I forget something that isn’t hooked up right in a scene, I get errors as soon as I hit the play button.” I think that’s genius by the way. Seeing right away what the error issues are as soon as the scene is Awake… That’s really smart design practice.
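A hedged sketch of that fail-fast idea: check your scene wiring inside Awake so a scene that isn't hooked up correctly errors the moment you press Play. The class and field names here are made up for illustration:

```csharp
using UnityEngine;

public class DoorController : MonoBehaviour
{
    public Transform doorHinge;     // meant to be assigned via the Inspector
    public AudioSource creakSound;  // meant to be assigned via the Inspector

    void Awake()
    {
        // If someone forgot to hook these up in the scene, say so immediately,
        // instead of failing minutes into a playtest.
        if (doorHinge == null)
            Debug.LogError($"{name}: doorHinge is not assigned!", this);
        if (creakSound == null)
            Debug.LogError($"{name}: creakSound is not assigned!", this);
    }
}
```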
With this post in Made With Unity, I learned that I had been missing out on a really cool game called The Fall, which now I can’t wait to play. Also, seeing that the dev only put this post up *yesterday*, I feel pretty current with what’s going on in at least one section of the dev world. It kind of felt like this process was pulling back the curtain a bit, showing you how the sausage gets made. It’s not always pretty, but it ain’t about pretty. It’s about learning, breaking things, and getting things done, which is what I’m trying to do. And doing this blog, with the sections of learning I’m incorporating, feels like I’m achieving that goal
http://bit.ly/2aBMYZ4 From Voices of VR Podcast #125
First, an NPC is… a Non-Player Character.
Rob wrote the script for ‘Gunner’ for Gear VR; he also wrote ‘Assembly’ for Oculus Rift with nDreams studio
Discusses how to make an emotional experience work with someone you need to connect with, i.e. an NPC
Getting out of the ‘uncanny valley’ is a really expensive business- what does he mean by this?; Guess we first need to get a solid definition of what the uncanny valley is…. Japanese roboticist Masahiro Mori found that humans react positively to robots that approach human-like appearance, but there is a point where we see a steep dropoff – i.e. a negative emotional response – when robots or other human facsimiles look too human, but still not quite right. This dropoff is called the uncanny valley, the point at which robots stop being cool and start giving us the creeps. [the unhealthy ‘person’ in the purple shirt below is actually a Zombie]
If the NPC doesn’t adhere to a set of social rules, then it doesn’t feel human
Having game objects/NPCs act like humans or have aspects of human interactivity [voiceovers, some sort of emotional attachment- here we are with sound again]
Morgan believes that 90% of characterization, or 90% of story happens inside the player’s mind
Morgan makes an interesting point, and he makes it several times, that when you lack that human engagement piece within and among your game characters-- if they don’t respond w/ pain when you bump into them, if they don’t make eye contact or look your direction when you enter a room-- you end up having to make up for what’s lost in human factor with visuals. You have to make the eye candy pop because the interactive piece of your game/story is missing. But creating stunning visuals is EXPENSIVE, so making the story more humanized can and does affect your bottom line. Especially if you’re bootstrapping or have limited initial investment. Important stuff
Discussion at 7:10 where in a game, a person is underwater and has a connection to an NPC through an earpiece. So the emotional attachment is built through the connection of voice. AND, this effect is not expensive to create. It just takes a bit of narrative imagination
A player in VR is going to relate to a character; the trick is really working within the limitations of VR development (small budgets, small studios) and finding a story-based way to overcome those problems.
An NPC in motion, or one that’s doing something else while talking to or engaged with the player, is much more believable than one that’s standing in front of you, staring at you creepily, about to pull a HAL
First 5 minutes of an experience, player is adjusting
Instead of starting the game with a huge event (costly and technically difficult most of the time), start with the knowledge in mind that for the first 5 minutes the player is adjusting to the VR world. Give them that ‘warm-up time’ to get accustomed, because they’re going to be getting used to a new environment which their brain is now tricking them into believing. As an example, he said to have the player start the game with a bag over their head in the back of a van. Since you’re adjusting, you might as well be scared sh*tless with a bag over your head, right? And since the player is going to be distracted anyway, why spend a lot of money on that time?
Double down on voice direction. Once people have popped out of immersion, it’s hard to get them back in. Getting the voiceovers wrong (bad acting) isn’t just a storytelling issue; it’s an immersion issue, and bringing a person back to that level of immersion is extremely difficult