This lesson is both practical and theoretical, and shows you a complete Unity project with some advanced functionality. You will be provided with a simple “City Builder” Unity game in which you can instantiate buildings in a city and animate cars to patrol a specified path. The objective is not to teach you how to program this game, as that is outside the scope of this course, but rather to demonstrate what can be done in Unity with a few classes and some built-in features of the game engine. Furthermore, we will explore some of the VR-specific mechanics used in game engines such as Unity to make experiences more interactive (e.g. different types of locomotion, and interaction with objects).
By the end of this lesson, you should be able to:
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Download the City Builder Unity Package [1]. The package includes all the resources used in the City Builder game. This is a very simple game, where the user can select different buildings from a menu and place them in specific locations in a city (only some areas allow for placing a building) using their mouse. When a building is selected for placement, its base will change color. The red color indicates that the current location of the building is illegal (and therefore it cannot be placed there), whereas the blue color indicates a legal location. The color of the base will continuously change as the user moves the building around the city to find a location to place it. Once the building is in a legal location, the user can left-click to place the building in the city. If the user clicks on an already placed building, the base of the building will become purple, indicating that it has been selected. Once selected, buildings can be removed from the city by pressing the “D” key. Moreover, the user can move the camera along the X and Z axes using the arrow keys on the keyboard (left, right, up, and down keys). This will help the user navigate the environment when placing buildings. Lastly, there is a car that patrols a street over a specified path. A quick video of this game can be seen below:
Import this package into a new Unity project. Once the import is done, navigate to the folder “City Builder”, and open the scene with the same name “City builder”. Make sure your “maximize on play” option in your game window is enabled, and then play the scene.
When the scene is playing, you can see a blue car patrolling over a specified path. You can use the arrow keys on your keyboard (left, right, up, down keys) to move the camera along the X and Z axes. If you click on the buttons of the menu on the left side of the screen, you can select different buildings to place in the city. Select a building, move it over a legal position, and left-click with your mouse to place it down. Place a few more buildings in the city. Then click on a building that is already in the city. You will notice that its base will change color to purple. Press the “D” key on your keyboard to delete this building.
To give you an idea of how such a scene can be set up, we will go through a series of steps to recreate the City Builder scene. All the resources you need are already inside the “City Builder” folder.
Open the Navigation panel (Window > AI > Navigation); it will appear on the right side of the editor. Select the “Bake” tab on this panel and click on the Bake button.
After a few seconds your scene should look like this:
Navigate to the Prefabs folder again, drag the “Waypoints” prefab onto the scene, and set its position to (62.5, 0, -70.2). This GameObject has four children, each of which acts as a patrol point for our Car AI script. You can inspect the position of each of these points in the scene. Now, if we select our Car GameObject again and look at its properties in the inspector, we can add patrol points to the Car AI script. Simply drag each waypoint (the children of the Waypoints object, named Waypoint 1 to 4) into the corresponding element (0 to 3) of the Car AI script.
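To make the patrol behavior more concrete, here is a minimal sketch of how such waypoint patrolling is commonly implemented with Unity's NavMeshAgent. The class and field names below are illustrative; the actual Car AI script in the package is fully commented and may differ in detail.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative sketch of a waypoint-patrol script, similar in spirit
// to the provided Car AI script (names here are assumptions).
public class CarPatrolSketch : MonoBehaviour
{
    public Transform[] patrolPoints;   // assign Waypoint 1-4 in the inspector
    private NavMeshAgent agent;
    private int currentIndex;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        if (patrolPoints.Length > 0)
            agent.SetDestination(patrolPoints[0].position);
    }

    void Update()
    {
        // When close enough to the current waypoint, advance to the next
        // one, wrapping around so the car patrols the path indefinitely.
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
        {
            currentIndex = (currentIndex + 1) % patrolPoints.Length;
            agent.SetDestination(patrolPoints[currentIndex].position);
        }
    }
}
```

Note that the NavMeshAgent can only move over the surface you baked in the previous step, which is why the Bake operation must be done before the car can follow its path.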
Inside the City Builder folder, navigate to the “Scripts” subfolder. Drag the “BuildingManager”, “BuildingPlacement”, and “MoveCamera” scripts onto the Main Camera GameObject in your scene.
Most of our functionality is covered by these three scripts, so we will briefly go over them. The “Building Manager” script takes care of generating the menu on the left side of the screen and adding buttons to it according to the number of buildings we want to have in our menu. You can see that this script has a property called “Buildings” in the editor. If you expand this option, you will see a placeholder for “Size”. Change this value to 3. This means that we want our menu to hold three buttons, each representing a different building, so we can select them and place them in the city. When you change the value to 3, you will notice that three new empty placeholders are shown. Go to the Prefabs subfolder again, and drag the Blue, Brown, and Green building prefabs to each of those placeholders.
Now, for each building we have added to this list, a button with the same name as the building prefab will be generated for us. Clicking a button informs the “Building Placement” script which building to generate and attach to the mouse pointer. For more detail, please examine the fully commented “Building Manager” script.
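The button-generation idea can be sketched as follows. This is not the package's exact code; the button prefab, panel, and method names are illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch of generating one menu button per building prefab,
// as the Building Manager script does (names here are assumptions).
public class BuildingMenuSketch : MonoBehaviour
{
    public GameObject[] buildings;     // building prefabs set in the inspector
    public Button buttonPrefab;        // assumed template button
    public Transform menuPanel;        // assumed parent panel for the buttons

    void Start()
    {
        foreach (GameObject building in buildings)
        {
            Button button = Instantiate(buttonPrefab, menuPanel);
            button.GetComponentInChildren<Text>().text = building.name;

            // Capture the prefab in a local variable so each button's
            // click handler selects the correct building.
            GameObject prefab = building;
            button.onClick.AddListener(() => SelectBuilding(prefab));
        }
    }

    void SelectBuilding(GameObject prefab)
    {
        // A real implementation would notify the placement script here.
        Debug.Log("Selected building: " + prefab.name);
    }
}
```

Because the button text is set from `building.name`, each button automatically carries the same name as its prefab, which matches the behavior described above.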
Next, we will examine the “Building Placement” script. There are three properties for this script in the editor that we need to change. First, we need to add different materials for the “base” of the buildings to indicate illegal and legal positions of the building by means of color. Navigate to the “Materials” subfolder and drag the “Valid” material to its corresponding placeholder in the inspector panel. Do the same for the “Invalid”. Once the “Building Manager” script informs the “Building Placement” script which building is requested to be generated, the “Building Placement” script will instantiate (create a copy on the fly) that building and will attach it to the position of the mouse pointer. Furthermore, it will use the valid and invalid materials we’ve just added to the component, so it can change the color of the base of the buildings to red when they collide with buildings and streets (i.e. invalid), or otherwise blue (i.e. valid).
The “Building Placement” script also monitors the user's mouse clicks. If a building is attached to the mouse pointer, its position is legal, and the user left-clicks, the script places the building at that exact location and disables its base (so it looks realistic when placed in the city). However, we also want the user to be able to click on a building that is already placed in the city so they can remove it. This means that our “Building Placement” script implements a second check: when the user left-clicks with no building attached to the mouse pointer, it checks whether what the user clicked on is actually a building. For this check, our script uses the “Building Mask” property shown in the editor. All the building prefabs we added to the “Building Manager” script already belong to the “Building” layer; all we need to do is tell Unity which layer to look for when the user clicks on objects in the city. As such, we need to create a new layer and set that as our Building Mask. For this, click on the “Layer” dropdown menu at the top of the panel, and select “Add Layer”.
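The layer-masked click check can be sketched like this. The field names are illustrative, not the package's exact API; the key point is that passing a `LayerMask` to `Physics.Raycast` restricts the hit test to the Building layer.

```csharp
using UnityEngine;

// Illustrative sketch of selecting an already-placed building with a
// layer-masked raycast, analogous to what Building Placement does.
public class BuildingSelectSketch : MonoBehaviour
{
    public LayerMask buildingMask;  // set to the "Building" layer in the inspector

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the mouse position.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            // The mask limits the raycast to colliders on the Building
            // layer, so clicks on streets or terrain are ignored.
            if (Physics.Raycast(ray, out RaycastHit hit, 1000f, buildingMask))
            {
                Debug.Log("Clicked building: " + hit.collider.gameObject.name);
            }
        }
    }
}
```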
Create a new layer called “Building” and set the Building Mask to that layer (this layer should already exist in your layer list, if not, create it).
The last property we need to set in the Unity editor belongs to the “Move Camera” script. We need to specify a speed value for the movement of the camera, so we can move it using the arrow keys. Set this value to 15.
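Camera movement of this kind is typically only a few lines. The sketch below is an assumption about how such a script can look, not the package's exact code; by default, Unity maps the arrow keys to the Horizontal and Vertical input axes.

```csharp
using UnityEngine;

// Illustrative sketch of arrow-key camera movement along X and Z,
// scaled by a speed value, similar to the Move Camera script.
public class MoveCameraSketch : MonoBehaviour
{
    public float speed = 15f;  // the value we set in the inspector

    void Update()
    {
        // Arrow keys map to the Horizontal/Vertical axes by default.
        float x = Input.GetAxis("Horizontal");
        float z = Input.GetAxis("Vertical");

        // Time.deltaTime makes the movement frame-rate independent.
        transform.position += new Vector3(x, 0f, z) * speed * Time.deltaTime;
    }
}
```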
That is all you need to do to set up a fully working City Builder scene with the assets that were provided to you. Press play and enjoy your creation. You should be able to move the camera around, place buildings in designated areas of the city, and remove them. You may notice that we have not covered which script handles the color change of a building's base when it is selected, or the function for deleting it. This is because each of the building prefabs we added to the “Building Manager” script has a script called “PlacableBuilding” that takes care of these functions, and we do not need to set anything up for them to work. In short, once a building that is already placed in the city is clicked on, the “Building Placement” script detects which exact building was clicked and triggers the “PlacableBuilding” script attached to it. This script will then change the base color of that building to purple, and if the user presses the “D” key, it will remove that building from the scene.
You can also watch this complete video tutorial on how to assemble this scene.
Note: Although we did not directly go over the scripts line by line, I strongly recommend that you look at the scripts inside the “Scripts” subfolder to have a better understanding of how some of these functions are implemented in C#. All the scripts are fully commented for your convenience.
Most of the mechanics used in desktop experiences can also be used in VR. However, not all of them are good choices for VR experiences. Two of the most prominent examples are locomotion and interaction mechanics. In this section, we will briefly explore the different locomotion and interaction mechanics designed specifically for VR experiences.
Locomotion can be defined as the ability to move from one place to another. There are many ways in which locomotion can be implemented in games and other virtual experiences. Depending on the employed camera perspective and movement mechanics, the users can move their viewpoint within the virtual space in different ways. Obviously, there are fundamental differences in locomotion possibilities when comparing 2D, 2.5D, and 3D experiences. Even within the category of 3D experiences, locomotion can take many different forms. To give you a very general comparison, consider the following:
Locomotion in 2D games is limited to the confines of the 2D space (the X and Y axes). The camera used in 2D games employs an orthogonal projection, so the game space is seen as “flat”. In these experiences, users can move a character using mechanics such as “point and click” (as in most 2D real-time strategy games) or using keystrokes on a keyboard or other types of controllers. The movement of the orthogonal camera in these experiences is also limited to the confines of the 2D space. Consequently, users will not experience the illusion of perceiving the game through the eyes of the character (e.g. Cuphead [4], Super Mario Bros. [5], etc.).

Evidently, the same type of locomotion can be employed in 3D space as well. For instance, in the City Builder game we have seen in this lesson, the camera uses a perspective projection, but the locomotion of the user is limited to the X and Z axes. The three-dimensional perspective of the camera (the viewpoint of the user), however, creates an illusion of depth and a feeling of existing “in the sky”. In more sophisticated 3D games, such as first-person shooters (FPS [6]), where it is desired that players experience the game through the eyes of their character, the feeling of movement is entirely different. We stress the word “feeling” since the logic behind translating (moving) an object from one point to another is not that different in a 2D space compared to a 3D space. However, the differences in the resulting feeling of camera movement are vast (an unattached, distant orthogonal projection vs. a perspective projection through the eyes of the character). In many modern games (not necessarily shooters) with a first-person camera perspective, players can be given six degrees of freedom for movement and rotation.
The FPS Controller we used in the previous lessons is an example of providing such freedom (except for rotation along the Z-axis). The movement mechanics in such games, however, almost always rely on a smooth transition from one point to another. For instance, in the two locomotion mechanics we used in this course (the FPS Controller and camera movement), you have seen that we gradually increase or decrease the Position and Rotation properties of the Transform component attached to GameObjects. This gradual change of values over time (for as long as we hold down a button, for instance) creates the illusion of smoothly moving from one point to another.
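The "smooth transition" idea can be sketched in a few lines. This is an illustrative example of the general technique, not the code of the FPS Controller itself; `Vector3.MoveTowards` advances the position by a bounded amount each frame.

```csharp
using UnityEngine;

// Illustrative sketch of smooth locomotion: rather than jumping to a
// target, the position is nudged a small amount every frame.
public class SmoothMoveSketch : MonoBehaviour
{
    public Transform target;           // where we want to end up
    public float unitsPerSecond = 3f;  // movement speed

    void Update()
    {
        // MoveTowards advances the position by at most
        // unitsPerSecond * Time.deltaTime per frame, producing the
        // gradual motion described above.
        transform.position = Vector3.MoveTowards(
            transform.position, target.position,
            unitsPerSecond * Time.deltaTime);
    }
}
```

It is exactly this many-small-steps motion that, as discussed below, tends to trigger simulator sickness when applied to a VR camera.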
As was previously mentioned, there are many ways in which locomotion can be realized in virtual environments, depending on the type and genre of the experience, and the projection of the used camera. Explaining all the different varieties would be outside the scope of this course. Therefore, we will focus on the one that is most applicable in VR.
The experience of Virtual Reality closely resembles a first-person perspective. This is the most effective way of using VR to create an immersive feeling of perceiving a virtual world from a viewpoint natural to us. It comes as no surprise that in the early days of mainstream VR development, many developers employed the same locomotion techniques used in conventional first-person desktop experiences. Although we can most definitely use locomotion mechanics such as smooth transition in VR, the resulting user experiences will not be the same. In fact, doing so will cause a well-known negative effect associated with feelings such as disorientation, eyestrain, dizziness, and even nausea, generally referred to as simulator sickness.
According to Wienrich et al., “Motion sickness usually occurs when a person feels movement but does not necessarily see it. In contrast, simulator sickness can occur without any actual movement of the subject” [1]. One way to interpret this is that simulator sickness is a form of physical-psychological paradox that people experience when they see themselves move in a virtual environment (in this case through VR HMDs) but do not physically feel it. The most widely accepted theory as to why this happens is the “sensory conflict theory” [2]. There are, however, several other theories that try to model or predict simulator sickness (e.g. the poison theory [3], the model of negative reinforcement [4], [5], the eye movement theory [4], [5], and the postural instability theory [6]). Simulator sickness in VR is more severe in cases where users must locomote over a long distance, particularly using smooth transition. As such, different approaches have been researched to reduce this negative experience. One approach suggested by [1] is to include a virtual nose in the experience so that users have a “rest frame” (a static point that does not move) when they put on the HMD.
Other approaches, such as dynamic field of view (FOV) reduction when moving or rotating, have also been shown to be effective ways to reduce simulator sickness.
In addition to these approaches, novel and tailored mechanics for implementing locomotion, specifically in VR, have also been proposed. Here we will list some of the most popular ones:
Other forms of teleportation include adding effects when moving the user’s perspective from one location to another (e.g. fading, sounds, seeing a projection of the avatar move, etc.), or providing a preview of the destination point before actually teleporting to that location:
Another interesting and rather different example of teleportation is the “thrown-object teleport”, where instead of pointing at a specific location, the user throws an object (using natural gestures for grabbing and throwing objects in VR, as we will discuss in the next section) and then teleports to the location where the object comes to rest.
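The core of point-and-teleport locomotion can be sketched as below. This is a hedged illustration under assumed names (rig, pointer, surface mask), not the API of any specific VR SDK; real toolkits add an arc pointer, validation, and fade effects on top of the same idea.

```csharp
using UnityEngine;

// Illustrative sketch of point-and-teleport: a ray from the controller
// picks a destination, and the player rig is moved there in one frame.
public class TeleportSketch : MonoBehaviour
{
    public Transform playerRig;        // root of the VR camera rig (assumed)
    public Transform pointer;          // e.g. a controller transform (assumed)
    public LayerMask teleportSurfaces; // layers that are valid destinations

    // Call this when the user releases the teleport button.
    public void TryTeleport()
    {
        // Cast a ray from the pointer; if it hits a valid surface,
        // jump the rig to the hit point instantly, avoiding the smooth
        // camera motion that tends to cause simulator sickness.
        if (Physics.Raycast(pointer.position, pointer.forward,
                            out RaycastHit hit, 30f, teleportSurfaces))
        {
            playerRig.position = hit.point;
        }
    }
}
```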
There are many other locomotion mechanics for VR (e.g. mixing teleportation and smooth movement, run in-place locomotion, re-orientation of the world and teleportation together, etc.) that we did not cover in this section. However, the most popular and widely used ones were briefly mentioned.
[1] C. Wienrich, C. K. Weidner, C. Schatto, D. Obremski, and J. H. Israel, "A Virtual Nose as a Rest-Frame - The Impact on Simulator Sickness and Game Experience," 2018, pp. 1-8.
[2] J. T. Reason and J. J. Brand, Motion Sickness, London: Academic Press, 1975.
[3] M. Treisman, "Motion Sickness: An Evolutionary Hypothesis," Science, vol. 197, pp. 493-495, 1977.
[4] B. Lewis-Evans, "Simulation Sickness and VR - What is it and what can developers and players do to reduce it?"
[5] J. J. LaViola, "A Discussion of Cybersickness in Virtual Environments," ACM SIGCHI Bulletin, vol. 32, no. 1, pp. 47-56, 2000.
[6] G. E. Riccio and T. A. Stoffregen, "An ecological theory of motion sickness and postural instability," Ecological Psychology, vol. 3, pp. 195-240, 1991.
Interactions with GameObjects in non-VR environments are limited to conventional modalities and their affordances. In desktop experiences, for instance, interactions are limited to pointing at objects and graphical user interface elements. On certain gaming consoles, however, such as the Nintendo Wii [18] and the Xbox Kinect, users can perform natural gestures to interact with objects in a game or to perform actions.
Interaction with GameObjects in VR can be considerably more natural than in desktop experiences. The immersive nature of VR HMDs, combined with the possibility of locomotion from a natural point of view, affords an interaction experience much closer to real life than any other gaming console or desktop. Thanks to the sensor and controller data available in VR headsets, we can continuously track the position, orientation, and intensity of hand movements in VR. As such, users can use natural gestures to interact with different types of objects while perceiving them from a natural viewpoint.
Users can interact with objects by reaching out and grabbing them when they are within proximity, or they can grab them from a distance using a pointer. Once an object is grabbed, users can leverage its physics properties to place it somewhere in the virtual environment, throw it, or even change its scale and rotation.
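A simplified grab-and-throw interaction can be sketched as follows. This is an illustrative approximation under assumed names, not the code of any particular VR toolkit; production plugins such as VRTK track velocity over several frames and handle edge cases this sketch omits.

```csharp
using UnityEngine;

// Illustrative sketch of grab-and-throw: a grabbed object follows the
// hand, and on release it inherits the hand's velocity so it can be
// thrown. Velocity here is approximated from one frame of hand motion.
public class GrabSketch : MonoBehaviour
{
    public Transform hand;             // tracked controller transform (assumed)
    private Rigidbody held;
    private Vector3 lastHandPosition;

    public void Grab(Rigidbody target)
    {
        held = target;
        held.isKinematic = true;       // physics off while held
        held.transform.SetParent(hand);
    }

    public void Release()
    {
        if (held == null) return;
        held.transform.SetParent(null);
        held.isKinematic = false;

        // Approximate the throw velocity from the hand's last frame of motion.
        held.velocity = (hand.position - lastHandPosition) / Time.deltaTime;
        held = null;
    }

    void LateUpdate()
    {
        // Remember where the hand was this frame for the velocity estimate.
        lastHandPosition = hand.position;
    }
}
```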
An example of grabbing an object from distance using pointers:
More natural forms of interaction in VR are via gestures. For instance, users can spin a wheel using a circular gesture or pull down a lever using a pulling gesture. Popular VR archery games are a prime example of using natural gestures to interact with game objects.
Similar to the bow and arrow, some objects, such as a fire extinguisher, are used with two hands in real life. The same principle can be applied in VR, where the user grabs the capsule with one hand and the hose with the other. An overview of some of these interaction mechanics is demonstrated in the videos below from VRTK, one of the most popular VR plugins for Unity.
In the videos posted above, you can see that users can naturally interact with different types of objects: sitting down, pulling out a drawer, flipping switches, etc. The same principle can be applied to graphical user interface (GUI) objects. Users can use natural gestures to touch GUI objects such as buttons to interact with them, or they can use pointers to point at a specific GUI element. Users can grab the handle of a slider and move their hand left or right to decrease or increase a value, or they can move a text element around the scene.
As inspiration for the different interaction mechanics in your assignment, you can watch the following video:
The homework assignment for this lesson is to reflect on the City Builder game and think about different VR specific locomotion and interaction mechanics that can be added to it to turn it into a VR experience. You are required to write a proposal on how you think City Builder can be transformed into a VR experience.
Note that you have absolute freedom to change the conceptual design of City Builder for your new design proposal.
You are free to explore the large amount of content on the web that demos different locomotion and interaction mechanics used in VR games and other VR experiences, and select the ones that you think would make City Builder a good VR experience. For each mechanic that you want to include in City Builder, write a short description of that mechanic including a screenshot or a video, why you think it will be a good fit for City Builder, and, if applicable, what existing mechanic it will replace. You are required to propose at least two locomotion mechanics (not to be used simultaneously, but interchangeably) and two interaction mechanics. Feel free to change any aspect of City Builder if you think doing so will enrich users’ experience in VR. For instance, instead of a top-down camera, you can propose to use a first-person camera, or instead of selecting buildings from a GUI menu, you can propose that users could grab buildings from a palette of items they hold in one hand.
Write a two- to three-page report on your proposed changes to the City Builder.
This assignment is due on Tuesday at 11:59 p.m.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Explanation of two locomotion mechanics to be used in your proposed design and justification as to why they are good choices | 2 pts | 1 pt | 0 pts | 2 pts |
Explanation of two interaction mechanics to be used in your proposed design and justification as to why they are good choices | 2 pts | 1 pt | 0 pts | 2 pts |
Write up is well thought out, researched, organized, and clearly communicates why student selected the proposed mechanics | 6 pts | 3 pts | 0 pts | 6 pts |
Total Points: 10 |
In this lesson, you experienced what it takes to create a very simple game in Unity. You used some of the more advanced features of Unity, such as AI, and experienced setting up a semi-complex scene in the editor. Furthermore, you were provided with some moderately complex C# scripts that made the City Builder game come alive, and the interrelations among those scripts were explained. In the second part of the lesson, you were introduced to the most common locomotion and interaction mechanics used in VR development. We hope that this lesson has helped to broaden your knowledge of the design and development of virtual experiences in Unity and of the more novel interaction and locomotion mechanics employed in VR experiences these days.
You have reached the end of Lesson 9! Double-check the to-do list on the Lesson 9 Overview page to make sure you have completed the activity listed there.
Links
[1] https://courseware.e-education.psu.edu/downloads/geog497/
[2] https://chorophronesis.psu.edu/
[3] https://www.youtube.com/watch?v=mP7ulMu5UkU
[4] http://www.cupheadgame.com/
[5] https://supermariobros.io/
[6] https://www.callofduty.com/
[7] https://upload.wikimedia.org/wikipedia/commons/f/fa/6DOF_en.jpg
[8] https://commons.wikimedia.org/wiki/File:6DOF_en.jpg
[9] https://www.youtube.com/channel/UCG08EqOAXJk_YXPDsAvReSg
[10] https://www.tomshardware.com/news/columbia-university-vr-sickness-research,32093.html
[11] https://www.youtube.com/watch?v=_p9oDSeUaws
[12] https://giphy.com/gifs/teleportation-KHiHRHPJ27SCgIBGW9?utm_source=media-link&utm_medium=landing&utm_campaign=Media%20Links&utm_term=
[13] https://www.youtube.com/channel/UCx78qBGQl-oCvGwX-MaVRgA
[14] https://giphy.com/gifs/teleportation-with-preview-hTICiMnGGKczkyoiYc?utm_source=media-link&utm_medium=landing&utm_campaign=Media%20Links&utm_term=
[15] https://www.youtube.com/channel/UCJLFizUO3gCQnyiYKMYvUgg
[16] https://www.youtube.com/channel/UCPhckk25N7P8OI580ucRidA
[17] https://www.youtube.com/channel/UCIeaxRCoSLFLHvrCInghqcw
[18] http://wii.com/
[19] https://www.youtube.com/channel/UC_0LCcbZc9G4SepLaCs1smg
[20] https://www.youtube.com/channel/UCI4Wh0EQPjGx2jJLjmTsFBQ
[21] https://media.giphy.com/media/Y3YCvTWl1VW10DqoHg/giphy.gif
[22] https://www.youtube.com/channel/UCWRk-LEMUNoZxUmY1wO7DBQ
[23] https://www.youtube.com/channel/UCpmu186jY1s3T9ADieN-uug
[24] https://www.youtube.com/channel/UCSVjrpufOzTd3jZsUGs-L0g