This is the course outline.
This course brings into the classroom the ideas and developments that surround both 3D modeling and immersive technologies such as Virtual Reality (VR). The excitement is universal in academia (on and off for over 30 years) and, more recently, in the private sector, with Goldman Sachs expecting an 80 billion dollar industry by 2025 (reported on CNBC [1]). It is fair to assume that the geospatial sciences will never be the same once immersive technologies, everything from Augmented Reality (AR), Mixed Reality (MR), and Augmented Virtuality (AV) to Virtual Reality, have moved into most people’s living rooms. In the following, we use the term xR (or 3D/xR in combination) to refer to the range of immersive technologies that are available.
Each of the topics that you will encounter in this course could easily be (and often is) approached in an individual course! The point of this course is to give students an overview of current developments in 3D modeling and xR that we will keep up-to-date to the best of our abilities, but things are moving and developing fast. Students will gain hands-on experience with some of the most versatile and powerful tools for modeling and creating virtual realities, such as SketchUp™, Unity3D™, or ArcGIS Pro (not in the residential version). Given the tremendous advancements in 3D modeling and xR that are currently taking place, the course is designed as a timely entry-level course with the goal of giving students an understanding of the game-changing nature that recent developments will have for the geospatial sciences. This is an evolving course and any feedback you might have is greatly appreciated.
By the end of this lesson, you should be able to:
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
To give you an idea of current developments in the field of Virtual Reality, Augmented Reality, Augmented Virtuality, and Mixed Reality, we would like you to watch the following videos and think about and reflect on the core differences in the technologies they use.
The first video gives you a quick overview of a collaboration between Microsoft and NASA using Microsoft’s HoloLens. Please do feel free to dive deeper into the diverse application scenarios that are envisioned for the HoloLens and related products!
The second video provides you with an overview of the next generation MR headsets. The new HoloLens 2 by Microsoft provides infinite possibilities for users to interact with high-quality holograms that can be in the form of animated models of objects, conversational agents, natural phenomena, etc. The term that Microsoft is using to characterize the HoloLens products is “mixed reality”. What do you think this actually means?
The third video is a nice way of showing experiences in VR by placing users in front of a green screen. This adds a bit more information than filming people who are experiencing VR and then communicating the experience through their reactions. You have probably seen plenty of those videos showing people hesitating to walk over a virtual abyss they know does not physically exist, playing horror games, or experiencing a different body. The HTC Vive that is demoed in the video below, like the Oculus Rift, is a high-end head-mounted VR system that requires a relatively powerful computer.
The fourth video is a potpourri of systems that are referred to as augmented reality. It is rumored that Apple will be focusing on augmented reality technologies, and you may have played Pokemon Go. It is important to note that there is a difference between mixed and augmented reality. In the former, virtual elements are embedded in the real environment in such a way that they provide the illusion of being part of that environment. This illusion is reinforced by virtual elements abiding by the laws of physics that govern our reality (e.g., a virtual robot that can collide with a physical table). Augmented reality, on the other hand, does not necessarily follow this principle.
The last video demonstrates the concept of Augmented Virtuality. The concept of AV refers to the idea of augmenting the virtual experience with elements of reality. An example of this approach is the use of 360-degree images and videos in a VR experience.
We invite you now to reflect on the different technologies that have become available in the last couple of years and that you have seen in the five videos above.
Please use the Lesson 1 Discussion (VR Concepts) to start a discussion about this topic using the questions above as a guide. Active participation is part of your grade for Lesson 1. In addition to making your own post, please make sure to comment on your peers' contributions as well.
Your post is due Saturday to give your peers time to comment. All comments are due by Tuesday at 11:59 p.m.
It is impossible to write a complete list of all the [A,M,V]R systems and content providers that are currently bringing their products to market. Essentially, all major tech companies and numerous startups are venturing into the field of VR. We are providing a short overview of some of the main systems that are available or that fill niche markets. The image below provides a visual overview, and a little farther down we have a quick memory game for you.
It is fair to say that the current hype around VR has been dramatically spurred by Oculus and the fact that a small startup was, after only two years, purchased by Facebook for two billion dollars (see Mark Zuckerberg's announcement [3]) [4].
Oculus started with a then-teenager, Palmer Luckey, who was building VR prototypes in his garage (don't all stories like this start in a garage?). Luckey sent one of his prototypes, the PR6 that he called the Rift, to famous game designer John Carmack (Doom, Quake, founder of id Software). Carmack would demo the Rift a little later at an E3 event, where it created a lot of traction. Luckey teamed up with Brendan Iribe, Michael Antonov, and Nate Mitchell, and together they launched a Kickstarter campaign that raised not the anticipated $250K but over 2 million dollars (for more details, see "How Palmer Luckey Created Oculus Rift" [5]).
From there on it was a wild ride. The biggest change for the young company came when Facebook bought it in 2014 for two billion dollars. If VR had created some hype and fantasies up until March 2014, at this point it had everyone’s attention.
After two developer versions, the consumer version started shipping in the spring of 2016! Below is a figure that shows the transition from DK1 to the consumer version. The latter offers an OLED display, a resolution of 2160 x 1200, a 90 Hz refresh rate, a 110-degree field of view, and a tracking field of up to 5 x 11 feet. The Oculus Touch controllers were released somewhat later, in late 2016. If you want to run an Oculus on your computer, the recommended minimum specs are an NVIDIA GTX 970 / AMD 290 equivalent or greater, Intel i5-4590 equivalent or greater, 8GB+ RAM, compatible HDMI 1.3 video output, 2x USB 3.0 ports, and Windows 7 SP1 or newer.
The latest versions of the Oculus Rift can be used with either a remote controller or a more sophisticated input modality such as Oculus Touch.
While the remote is rather limited as an interaction modality (only a few buttons), Oculus Touch provides a wide variety of interaction possibilities (e.g., finger tracking, gesture recognition) and a larger set of buttons on each controller.
Moreover, by default, two Oculus sensors are used to track and translate the movements of users in VR (head movements based on the VR headset and hand movements based on the Touch controllers). This setup provides 6-DOF (degrees of freedom) tracking: 3-axis rotational tracking plus 3-axis positional tracking.
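To make the 6-DOF idea concrete, here is a minimal illustrative sketch in Python; the class and field names are our own and are not part of any Oculus SDK:

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """Illustrative 6-DOF pose: three positional plus three rotational degrees of freedom."""
    x: float      # position in meters, typically relative to a tracking origin
    y: float
    z: float
    pitch: float  # rotation about the left-right axis, in degrees
    yaw: float    # rotation about the vertical axis
    roll: float   # rotation about the forward axis

# A 3-DOF headset (e.g., the original Oculus Go) only updates pitch/yaw/roll;
# a 6-DOF headset also updates x/y/z as the wearer physically moves around.
headset_pose = Pose6DOF(x=0.0, y=1.7, z=0.0, pitch=0.0, yaw=90.0, roll=0.0)
```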
On March 20th, 2019, Oculus announced a new addition to the Rift family called the Oculus Rift S. The new member of the Rift family uses a fast-switch LCD display with a resolution of 2560×1440 (1280×1440 per eye) and a refresh rate of 80 Hz. It utilizes a new technology called “inside-out tracking,” using five cameras embedded into the headset, which eliminates the need for external sensors while maintaining 6-DOF tracking. In short, the headset itself can track and translate its own movements and those of the controllers, which allows users to freely move around a physical environment without restrictions. If you want to use an Oculus Rift S on your computer, the recommended specs are an NVIDIA GTX 1060 / AMD Radeon RX 480 or greater graphics card, Intel i5-4590 / AMD Ryzen 5 1500X or greater, 8GB+ RAM, DisplayPort™ 1.2 / Mini DisplayPort, 1x USB 3.0 port, and Windows 10.
Since the emergence of cost-effective VR HMDs, developers and users have been restricted in their mobility by the requirement that these HMDs be connected to powerful computers via cables (i.e., tethered).
On October 11th, 2017, a new generation of Oculus HMDs called “Oculus Go” was announced, and it was later released to the public on May 1st, 2018. Go is described as an untethered all-in-one headset, meaning that it can function as an independent device without the need to be connected to a computer. Evidently, the first generation of these untethered all-in-one headsets was not as powerful as their predecessors in terms of computing power. Oculus Go utilizes a Snapdragon 821 SoC processor and is powered by a 2600 mAh battery, a 5.5" fast-switching LCD with a resolution of 2560×1440 (1280×1440 per eye) and a refresh rate of 72 Hz, and an Adreno 530 internal graphics card, with non-positional 3-DOF tracking due to the lack of external sensors. Unlike the Rift, Go does not use rich controllers such as “Touch” but instead uses a simple remote controller with limited interaction possibilities. On December 4th of 2020, Oculus discontinued the Go HMD and officially replaced it with the next generation, the Quest.
On May 21st, 2019, Oculus released a new breed of untethered all-in-one headsets called the Oculus Quest. This HMD brings the best features of the Rift and the Go together in one. Similar to the Go, it is an all-in-one standalone headset, and similar to the Rift, it uses inside-out tracking with 6-DOF. Furthermore, it utilizes a more powerful GPU and processor (a 4-core Kryo 280 Gold processor with 4GB RAM and an Adreno 540 graphics card), capable of running high-quality applications. It has a PenTile OLED display with a resolution of 1440 × 1600 per eye and a refresh rate of 72 Hz. Furthermore, unlike the Go, the other untethered Oculus product, the Quest uses the latest version of the Touch controllers.
On October 13th, 2020, Oculus released the new generation of Quest, called Quest 2. This new generation of the Quest HMD has superior specifications in the processor (Qualcomm Snapdragon XR2), RAM (6 GB), and display resolution (1832 x 1920 per eye) and starts at a very competitive price of $299.
The HTC Vive is a collaboration project of software company Valve and tech giant HTC. Not going into any details of how they came together, the Vive is a truly awesome product that offers substantial flexibility and sophisticated interaction opportunities. The main difference from the Rift is that it does not require users to remain in front of a computer screen (or in a fixed location). The Vive comes with room sensors that allow users in small spaces to actively move around (you saw the intro VR video [7]), which is awesome. Additionally, the controllers allow for tracking the locations of hands, provide advanced interactivity, and provide haptic feedback in the form of vibrations.
It is always very difficult to explain experiences one has in a VR environment like the Vive to someone via text … one of the best solutions without actually experiencing it oneself is this video filmed using a green background (see intro video [8] again).
The Vive also has a 2160 x 1200 resolution, offers a 90 Hz refresh rate, and a 110-degree field of view. In contrast to the Rift, the tracking field is 15 x 15 feet. The minimum computing requirements are similar to the Rift: NVIDIA GeForce GTX 970 /Radeon R9 280 equivalent or greater, Intel Core i5-4590 equivalent or greater, 4GB+ of RAM, Compatible HDMI 1.3 video output, 1x USB 2.0 port.
On January 8th, 2018, HTC released an updated version of the Vive called the Vive Pro. The new HMD has a higher resolution of 1440 x 1600 per eye and a refresh rate of 90 Hz. Thanks to its sophisticated tracking sensors (called lighthouses), the Vive provides up to 22’11” x 22’11” room-scale tracking of headset and controllers. The Vive Pro is also equipped with two cameras embedded into the headset, facing outward. These cameras can be used to bring the real world into VR (does this sound similar to a type of xR previously discussed?). Moreover, the Vive affords multi-user interaction in the same physical space. Notwithstanding, each user requires their own dedicated computer and play area.
In November 2018, VIVE released their first untethered all-in-one headset called the Vive Focus. The Focus utilizes inside-out tracking technology and affords 6-DOF tracking. It has a 3K AMOLED display with a resolution of 2880 x 1600 and a refresh rate of 75 Hz, a Qualcomm Snapdragon™ 835 processor, and two controllers.
In addition to awesome and popular HMDs, VIVE also provides a variety of external sensors and controllers that can be used to create much richer and closer-to-reality VR experiences. These include trackers that can be attached to arms and legs to provide a feeling of embodiment in VR (tracking and projecting the hand and leg movements of users in VR), controllers that look like weapons, and VR gloves that cater for hand and finger actions in VR while providing haptic feedback to users.
There are numerous other HMD brands, such as the Samsung GearVR, Nintendo Labo VR kit, Sony PlayStation VR, Lenovo Mirage, etc., with different specifications. Going into detail about each and every one of these brands would be out of the scope of this lecture. Notwithstanding, it is of great interest to become familiar with some of the HMDs that go beyond the conventional specifications.
Although not new, Samsung’s GearVR is one of the most affordable (but limited) HMDs. Instead of requiring the user to purchase a dedicated computer, it works with various Samsung Galaxy phones. You can think of it as a fancier version of the Google Cardboard viewer. While the Cardboard viewer essentially works with every smartphone, the GearVR requires a Samsung phone, although its newest release accepts a wider range of Samsung phones. While the Cardboard viewer is somewhat limited in its interactive capabilities (gaze and the little magnet button on the side), the GearVR has a back button and a little trackpad on the side. Computing power and display characteristics will depend on your phone.
While most VR HMDs have a field of view (FOV) of around 110 degrees, Pimax is an HMD with a FOV of 170-200 degrees (close to human vision). It has an 8K resolution (3840 x 2160 per eye) with a refresh rate of 80 Hz and is tethered. In addition to being compatible with a variety of controllers (e.g., HTC Vive controllers), Pimax has a special Leap Motion [12] module embedded in the HMD. This module provides real-time and accurate hand tracking and can enable users to use only their hands as the interaction modality. Furthermore, as another addition to its modular system, Pimax offers an eye-tracking module that can provide real-time tracking of users’ eye movements while interacting with a VR experience. Such information is useful for an array of research topics, including usability studies, user-behavior analysis, and learning content adaptation. The high FOV provided by Pimax enables users to perceive the virtual environment in a way much closer to real life. This can potentially set the basis for more effective training of visual-spatial skills in VR and a more successful transfer of these skills to real-world contexts.
As is evident from the different HMDs we have seen so far, the resolution of these devices has improved significantly over the past few years. Notwithstanding, they are still not comparable to high-end desktop experiences in that respect. As one of the main applications of VR HMDs is training via realistic simulations, display quality and high-resolution imagery can play a determining role in the successful realization of learning goals in these applications. For this exact reason, the Varjo company has released a VR HMD called the Varjo VR-1. At almost $6,000, it is one of the most expensive HMDs on the market, but it possesses unique characteristics. It has a Bionic display with human-eye-like resolution of 60 pixels per degree, combining a 1920x1080 low-persistence micro-OLED and a 1440x1600 low-persistence AMOLED. It is important to note that this HMD is not targeted at average consumers, but rather at AEC (architecture, engineering, and construction), industrial design, virtual production, healthcare, and training and simulation. Given the fact that resolution is one of the drawbacks of VR that impedes its proper utilization in simulations and applications where detailed graphical elements are important (e.g., creation of artwork in VR, reading long texts in VR, surgery simulation, interacting with a cockpit panel with lots of buttons and small labels, etc.), this HMD opens the door for interesting research opportunities. Furthermore, it has a built-in eye-tracking module that provides researchers with a great opportunity to monitor the behavior of users when interacting with a VR experience for the purposes of usability studies, measurements of attention/interest, user behavior studies, and content adaptation.
The bottom line is that there is a tremendous amount of development on the VR market that is spurring developments in industry and academia. We just gave you a little glimpse of some of the most prominent examples but there are many, many more. To further acquaint you with some of the technologies we have two tasks outlined on the next page.
On June 28th, 2019, Valve released the first of their second-generation HMDs, called the Index. The Valve Index has a display resolution of 1440 x 1600 per eye and a refresh rate of 120 Hz. The Valve Index is backward compatible with all HTC products, including the Vive HMDs and the lighthouses (base stations used for tracking). It comes with a new generation of the lighthouse sensors (Base Station 2.0) that offer improved tracking range, a better field of view, and a potentially 300 percent larger tracking space. Perhaps one of the most delightful features of the Valve Index is the novel controllers (called Knuckles) that are significantly more ergonomic compared to other controllers on the market.
Find a product related to [A,M,V]R, not covered in this section that you are excited about. Write a one to two paragraph description, add a picture and submit your response to the Lesson 1 assignment.
Please upload your assignment by 11:59 p.m. on Tuesday.
Criteria | Full Credit | Half Credit | No Credit | Possible Points
---|---|---|---|---
Product described is different from course examples. | 2 pts | 1 pt | 0 pts | 2 pts
Image provided is not copyrighted. | 1 pt | 0.5 pts | 0 pts | 1 pt
Write-up is well thought out, researched, organized, contains some technical specifications, and clearly communicates excitement. | 3 pts | 1.5 pts | 0 pts | 3 pts
The document is grammatically correct, typo-free, and cited where necessary. | 1 pt | 0.5 pts | 0 pts | 1 pt
Total Points: 7
"At the time of writing, virtual reality in the civilian domain is a rudimentary technology, as anyone who has work a pair of eyephones will attest that the technology is advancing rapidly is perhaps less interesting than the fact that nearly all commentators discuss it as if it was a fully realized technology. There is a desire for virtual reality in our culture that on can quite fairly characterize as a yearning." (Penny, 1993: 18; in Gillings, 2002)
The geospatial sciences have an intricate relationship to 3D modeling and VR. While representations of our environments have largely been 2D, that is, maps, in past centuries, there have also been efforts to make 3D information part of the geospatial toolset. This does not come as a surprise if we look outside the USA, where GIScience is often part of Geomatics (surveying) departments, for example, in Canada and Australia. Interest in VR and 3D modeling has so far always come in waves; that is, people got interested and excited about the prospects of having 3D models and virtual access to these models (see the quote by Penny above), started implementing systems and theorizing about the potential, and then gave up as they ran into too many obstacles. Looking into one such effort, a workshop and subsequent book project led by Peter Fisher and David Unwin (2002) summarized the state of the art in geography more than a decade ago and identified prospects and challenges of using VR:
If we look at these problems from today’s perspective, we can confidently assert that most of these problems have been resolved (if not all of them). Motion sickness is largely taken care of by advancements in computing technology; that is, better sensor technology and higher framerates allow for almost perfect synchronization of the proprioceptive and visual senses, reducing or eliminating motion sickness. And, if anything, this is a development that will get even better in the near future, as graphics card manufacturers such as NVIDIA or AMD are now developing new products for a growing market, not just a niche. Check out the specs of the new NVIDIA GTX 1080 in comparison to older GEFORCE models [16]. Some problems still persist. It is still not the case that everyone feels comfortable in a VR environment, and some people still get motion sick; unfortunately, it tends to be worse for older users and for those who do not get enough VR time. However, we are getting very close to weeding out the ergonomic issues that have been standing in the way of mass deployment of VR.
As you see with the Cardboard version of a VR experience, the one that we have sent you, it is possible to get content into people’s hands rather easily. All you need is a smartphone and some apps. This still does not include high-end applications such as the Oculus Rift or HTC Vive, as they require dedicated hardware as well as a beefy gaming computer, but with the advancements in computing power and the availability of low-cost VR-ready systems, it is only a matter of time until essentially everyone will be able to access a VR (or AR or MR) system.
Likewise, the issue of VR being a rather complex technology certainly no longer stands in the way. While VR and 3D modeling still require some training for people who would like to dive deep into software and content development, creating a simple VR experience is as easy as pressing a button. The intro video that you can find on the introductory page was created using a Ricoh™ Theta S, a 360-degree camera that can be purchased via Amazon™ for about $350. During the development of the course, the software app that comes with this camera has drastically improved; it now allows for watching videos in the viewer and offers direct upload to YouTube (rather than having to inject 360-degree metadata into videos manually, which was the case in May 2016).
So what remains as a challenge for VR to become omnipresent in the geospatial sciences and to enter the mass market? Interestingly, in a recent survey, the lack of content was identified as one of the main remaining obstacles (Perkins Coie LLP & UploadVR Inc., 2016). This is, in my humble opinion, one of the biggest opportunities for the geospatial sciences. With a long-standing interest in organizing information about the world in databases, scanning and sensing the environment, and, more recently, modeling efficiently in 3D with geospatial-specific approaches and technologies, the geospatial sciences can potentially play a critical role in the developing VR market, and benefit from it in return. We shall see whether this prediction holds.
For a long time, people have been building 3D models not in the computer but as physical miniature versions of buildings. The Figure below shows a nice example: a model of the city of Göttingen, Germany, as it looked in 1630. To get an idea of the current status quo of 3D modeling from a GIS perspective, we would like you to read a white paper published by Esri in 2014. Please be advised that there will be questions in quizzes and the final exam on the content of the white paper.
3D Urban Mapping: From Pretty Pictures to 3D GIS (An Esri White Paper, December 2014) [2]
Fisher, P. F., & Unwin, D. (Eds.). (2002). Virtual reality in geography. London, New York: Taylor & Francis.
Gillings, M. (2002). Virtual archaeologies and the hyper-real. In P. F. Fisher & D. Unwin (Eds.), Virtual reality in geography (pp. 17–34). London, New York: Taylor & Francis.
Penny, S. (1993). Virtual bodybuilding. Media Information Australia, 69, 17–22.
Perkins Coie LLP & UploadVR Inc. (2016). 2016 Augmented and Virtual Reality Survey Report. Retrieved from https://dpntax5jbd3l.cloudfront.net/images/content/1/5/v2/158662/2016-VR... [18]
3D modeling was expensive in the past. With the availability of efficient 3D workflows and technologies that either sense the environment directly (such as LiDAR and photogrammetry) or use environmental information to create 3D models (such as Esri CityEngine's procedural modeling, see Lessons 2 and 4), it is now possible to properly use 3D models across academic disciplines. While there is still an ongoing debate about the extent to which 3D is superior to 2D from a spatial thinking perspective, and there is, indeed, beauty in abstracting information to reveal what is essential (see the London subway map [19]), there is also, without a doubt, a plethora of tasks that greatly benefit from using 3D information. For example, whether or not you are gifted with high spatial abilities, inferring 3D information from 2D representations is challenging. Below is a quick collection of topics and fields that benefit from 3D modeling and analysis (see also Biljecki et al. 2015):
Can you think of additional areas of application of 3D modeling not mentioned above?
Use the Lesson 1 Discussion (Application of 3D Modeling) to suggest and discuss further applications.
Please post your response by Saturday to give your peers time to comment by Tuesday.
Complementing developments in 3D modeling, it is fair to say that we are entering the age of immersive technologies. There are, unsurprisingly, many concepts that characterize VR systems. Scientists use them to describe systems and interactions on the basis of specific characteristics rather than on the basis of brands. One could ask whether the Rift is superior to the Vive, or, alternatively, one could ask whether it makes a difference that people have more than twice as much space to move around: the Vive allows people to turn a room into a VR experience, while the Rift is spatially more restrictive (although this may change). Other such characteristics that potentially make a difference are the field of view, the resolution of the display, or the level of realism of the 3D models.
We will discuss here three concepts, immersion, presence, and interaction that have received considerable attention in the study of VR.
We have already discussed that, with the renewed interest in virtual technology, the fidelity and overall quality of information presented in virtual environments are vastly improving. These improvements are not only propelling VR technologies into living rooms and research facilities, but they also have the potential to impact the spatial experiences that users have with virtual environments, from the inside of a volcano to the Mayan ruins to informal settlements in Rio. Several studies have shown that these spatial experiences may have positive influences on knowledge development and transfer to real-world application (Choi & Hannafin, 1995). In other words, by enhancing the intensity of one’s experience, VR may offer a way to facilitate learning through understanding and engagement. The literature on spatial experiences within virtual environments, however, is full of ambiguous concepts centered around the relationship between presence, immersion, and interactivity. By bringing conceptual clarity to distinguish these concepts, knowledge derived from previous research can become easier to implement and evaluate. This knowledge can then help to clarify the importance of spatial experiences for systems that enable experiences that only virtual environments can offer. We start the discussion by distinguishing between presence and immersion, defining dimensions of presence, and examining knowledge built from spatial experiences.
How difficult it is to characterize these concepts should become apparent when you try to define, for example, immersion yourself. The problem with most scientific concepts is that the term we use to describe a concept often also has a meaning in everyday language (mathematicians get out of this mess by using symbols, e.g., π, and then defining what they mean). When we think of immersion, we have a couple of options for defining its meaning. When my kids watch their 20-minute TV show every other evening (Masha and the Bear for the 2-year-old, Word Girl for the 5-year-old), I experience the power of immersion first hand. I would describe their state as being glued to the TV, and if unresponsiveness were a measure of immersion, then I could make an argument that we really do not need VR. Everyone has probably had a similar experience reading a book, whether It or Harry Potter, that has taken you under its spell. In becoming engaged with the media, you start to feel a part of the story, feeling present in the space.
Yet, from a technology perspective immersion can be defined differently. For example, in the work by Slater (1999), immersion is distinguished from presence with the former indicating a physical characteristic of the medium itself for the different senses involved. Furthermore, according to Witmer and Singer (1998) immersion is defined as “a psychological state characterized by perceiving oneself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli and experiences” (ibid, page 227). Witmer and Singer have also argued that immersion is a prerequisite for experiencing presence.
To give you a simple example of immersion, the figure below shows three setups that we used in a recent study on how different levels of immersion influence the feeling of being present (more below) in a remote meeting. We compared a standard desktop setting (the lowest level of immersion) and a three-monitor setup (medium level of immersion) against using an immersive headset (here the Oculus Rift). You can order these technologies, as we have just done, along a spectrum of immersiveness, which then helps to design experiments to test whether or not being physically immersed has an effect on, for example, the subjective feeling of team membership. Imagine that you are using Skype (or Zoom, or any other video conferencing tool), but instead of staring at your single monitor, the other person is streaming a 360-degree video directly into your VR HMD.
In contrast, the nature of presence in and of itself is highly subjective, making it hard to distinguish from concepts like immersion (well, we just did). The sense of presence is indeed the subjective sense of being in the virtual environment (Bulu, 2012). Many smart people have thought about this and to make things more fun have developed further distinctions such as spatial presence, co-presence, and telepresence. Sense of presence seems to be, interestingly enough, related to immersion and several studies have shown that immersive capabilities of technology, specifically VR technology, impact the subjective feeling of presence (Balakrishnan & Sundar, 2011). Therefore, different configurations of technology to increase immersive capabilities should increase the sense of presence experienced by a user (well, it is a plausible hypothesis).
The relationship between immersion and presence forms at the point when user attention is focused (or absorbed) enough to allow for involvement in the content and engagement with the digital environment (Oprean, 2014). This relationship allows for a clear distinction between immersion and presence while enabling a framework for measuring the influence of one on the other. On similar grounds, and as was previously mentioned, Witmer and Singer (1998) argue that both immersion and involvement are prerequisites for experiencing presence in a virtual environment. Involvement from their perspective is defined as “a psychological state experienced as a consequence of focusing one's energy and attention on a coherent set of stimuli or meaningfully related activities and events” (ibid, page 227), and is also related to other concepts similar to engagement, such as the “flow state” introduced by Csikszentmihalyi (1992).
Presence is a large and heavily measured construct in research on spatial experiences. The construct of presence consists of a number of dimensions specific to different contextual purposes: spatial, co-, tele-, social, etc. Each dimension of presence is considered with the same foundational meaning: that a user feels ‘present’ within a given medium, perceiving the virtual content (Steuer, 1992). The different dimensions, however, help to further distinguish the context or situation in which presence occurs.
The dimensions of presence cover a range of contextually different concepts that are often used interchangeably (Zhao, 2003). Telepresence distinguishes itself through its original context, teleoperation work (Sheridan, 1992). Steuer (1992) defines telepresence as distinguishable from presence by suggesting it refers to the experience of a secondary, such as virtual, environment through a communication medium. Taking both of these distinctions into account, telepresence, as related to its context of teleoperation, suggests that perceptions relate to experiencing a virtual environment in which work (i.e., interaction) can occur through a communication medium. This definition relates to the development occurring in collaboration research with telepresence systems; systems meant to mediate a sense of presence to enable work on tasks, individually or collectively.
Co-presence is a similar, yet different, dimension of presence than telepresence. Nowak and Biocca (2003) note that the distinction between telepresence and co-presence pertains to connection with others. Whereas telepresence can occur without other individuals’ involvement, co-presence is dependent on another human being present to connect with through a communication medium. This human connection is what distinguishes co-presence from telepresence: a focus on a human-human relation (Zhao, 2003).
Social presence, another dimension of presence, has a similar definition to co-presence but requires a group of humans to connect with (Nowak & Biocca, 2003). Gunawardena and Zittle (1997) further the definition of social presence stating it is the degree of perceived realness of a person in a mediated communication. This perception of realness is dependent on two concepts: intimacy and immediacy (Gunawardena & Zittle, 1997).
Spatial presence builds on the foundational definition of presence, the feeling of being in an environment. Steuer (1992) notes that with any mediated communication there is overlap with the physical (real-world) presence. Spatial presence therefore relates to the believability of being present within the mediated space, making spatial presence purely experiential.
In distinguishing the dimensions of presence, it becomes easier to establish factors which impact the degree to which a user perceives themselves to be part of a mediated space. In particular, such a distinction is necessary to understand the true implications of telepresence systems and how the initial sense of being a part of a mediated space impacts teamwork.
Interaction, in contrast, seems to be somewhat more straightforward and less troublesome to define than the concepts of immersion and presence. However, like with most things, once you dig deeper into it, you will find that it is not so straightforward after all to make distinctions, especially when more than one modality is involved. Interaction can be broken down into dimensions such as navigation (whether an environment is navigable or not), interface capability (whether the interface allows for interaction and which modality is involved, e.g., gaze, often used in Cardboard environments, versus haptic gloves), and human agency, which could relate to navigability but could also be a measure of how much a user is able to interact with and/or manipulate a particular environment.
Instead of trying to further define these concepts we ask you to have a look at the Figure below. Most concepts we discussed can be arranged on a spectrum from high to low (especially immersion and interactivity). This distinction is useful when we think about different VR technologies.
To summarize, this section provided a glimpse into the more academic discussion of how to conceptualize virtual reality. We briefly discussed important concepts such as immersion, presence, and interactivity and showed by example how everyday use of the terms may interfere with their more academic definitions (although even academics get lost in these discussions). Distinguishing VR technologies along the dimensions of these concepts, for example, from high to low levels of immersion, is useful a) to talk about categories of technology rather than individual products, and b) to start investigations of which technology might actually help in communicating or making decisions about space and place in the past, present, and future.
References
Balakrishnan, B., & Sundar, S. (2011). Where am I? How can I get there? Impact of navigability and narrative transportation on spatial presence. Human-Computer Interaction, 26(3), 161-204.
Bulu, S. T. (2012). Place presence, social presence, co-presence, and satisfaction in virtual worlds. Computers & Education, 58(1), 154–161.
Csikszentmihalyi, M., & Csikszentmihalyi, I. S. (1992). Optimal experience: Psychological studies of flow in consciousness. Cambridge University Press.
Choi, J. I., & Hannafin, M. (1995). Situated cognition and learning environments: Roles, structures, and implications for design. Educational Technology Research and Development, 43(2), 53–69.
Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American Journal of Distance Education, 11(3), 8–26.
Nowak, K. L., & Biocca, F. (2003). The effect of the agency and anthropomorphism on users' sense of telepresence, copresence, and social presence in virtual environments. Presence: Teleoperators & Virtual Environments, 12(5), 481–494. doi:10.1162/105474603322761289
Oprean, D. (2014). Understanding the immersive experience: Examining the influence of visual immersiveness and interactivity on spatial experiences and understanding. Unpublished doctoral dissertation, University of Missouri.
Sheridan, T. B. (1992). Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments, 1(1), 120–126.
Slater, M. (1999). Measuring presence: A response to the Witmer and Singer presence questionnaire. Presence: Teleoperators and Virtual Environments, 8(5), 560–565.
Steuer, J. (1992). Defining virtual reality: Dimensions determining telepresence. Journal of communication, 42(4), 73-93.
Witmer, B. G. & Singer, M. J. (1998). Measuring presence in virtual environments: A presence questionnaire, Presence 7 (3), 225–240.
Zhao, S. (2003). Toward a taxonomy of copresence. Presence: Teleoperators and Virtual Environments, 12(5), 445-455.
There is an almost infinite number of examples of VR content being pushed onto the market, with new and exciting opportunities being added on a daily basis. We have selected a couple that we will look at in a little more detail.
We have sent you a Google Cardboard. It is actually a slightly more advanced version than the standard one, and it allows you to adjust the distance between the lenses. If you are using Google Cardboard, there are many, many options to choose from that offer you content from Google itself but also from other VR companies such as Within. You are strongly encouraged to explore content on your own! As one of the assignments we would like to give you, please watch the 360° video offered by Within called Clouds over Sidra. You can watch it either in the Cardboard viewer, or, if you prefer or have not received your viewer yet, you can also watch it on your phone without the viewer, through other VR headsets, or on your computer via the web. The video Clouds over Sidra has been embedded below for your convenience.
To watch this 360 video you can click on the video, visit WITHIN [21] or download and use the WITHIN app for a cardboard experience (recommended).
When watching the video, especially in the Cardboard / immersive headset option, take note of a couple of things:
All major technology companies are investing heavily in 3D/VR technology. Google’s Expeditions Pioneer Program is one example. Expeditions is a professional virtual reality platform that is built for the classroom. Google has teamed up with teachers and content partners from around the world to create more than 150 engaging journeys - making it easy to immerse students in entirely new experiences. Feel free to explore. This tool is geared toward classroom experiences but may offer some insights into how geography education may change in the future. Maybe geography will return to the K12 curriculum?
The New York Times, as one of the first major news providers, has created an NYT VR app [23] that can be downloaded for both Apple and Android platforms.
The app can be used in combination with Google Cardboard or any other viewer that accepts your smartphone. By now all major news corporations have 360-degree video offerings.
Find two examples of 3D/VR projects other than the ones discussed above and write a short paragraph about each. In this exercise, you should select projects that you find exciting and reflect on why you think they will change the geospatial sciences, how information is communicated or made accessible, how people will be able to understand places across space and time differently, etc. Projects do not have to be movies but can address any 3D/VR topics somewhat related to geospatial.
Please upload your assignment by 11:59 p.m. Tuesday.
In your reflection make sure that you answer the following questions:
Criteria | Full Credit | Half Credit | No Credit | Possible Points
---|---|---|---|---
Examples provided are different from course examples. | 2 pts | 1 pt | 0 pts | 2 pts
Link to the website/app and description of the platform through which the student experienced the movie is provided. | 1 pt | 0.5 pts | 0 pts | 1 pt
Reflection is well thought out, researched, organized, and answers the questions posed in the lesson. | 4 pts | 2 pts | 0 pts | 4 pts
The document is grammatically correct, typo-free, and cited where necessary. | 1 pt | 0.5 pts | 0 pts | 1 pt
Total Points: 8
In this introductory lesson, we went through a plethora of topics contributing to the excitement and opportunities that 3D modeling and VR hold for the geospatial sciences, but also for society at large. It is a legitimate question whether or not communication about places through time and space will be the same after 3D/VR technologies have fully entered the mainstream. There are a lot of opportunities, and there are still a lot of unknowns. The lack of content, despite a booming market, has been identified as the main obstacle for VR; the geospatial sciences, through developments in 3D modeling, are in an excellent position to play a key role as content providers and also to benefit from a booming VR market in return. We are on the verge of a new era of information systems that will allow for the development of immersive experiences alongside classic visualization techniques.
In the next lesson, we will talk more about the different workflows that are in place or developing that allow for the creation of 3D content.
You have reached the end of Lesson 1! Double-check the to-do list on the Lesson 1 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 2.
As discussed in Lesson 1, 3D modeling and the availability of 3D data are going mainstream. Technology, while it has always developed as part of society over thousands of years, has now reached a level of maturity that allows non-experts to make substantial contributions in several fields. 3D modeling and VR are at a point where cartography was 20 years ago, when map-making was placed into the hands of the masses (for better or worse). The workflows that we discuss in this lesson are examples of how affordable and accessible 3D modeling has become, but they also provide insights into higher-end solutions. As with all aspects of this course, we will only scratch the surface of a number of possibilities, and what was true at the beginning of the course may be superseded by newer technologies, better algorithms, and lower-cost solutions by the end. We are living in exciting times.
By the end of this lesson, you should be able to:
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
The Figure below shows different models of the Penn State campus and Penn State buildings, respectively. It can easily be imagined that the level of detail of the model on the top may suffice for certain planning or analysis purposes, but that a detailed model (bottom) is required for applications such as architectural planning and engineering. The level of detail is not easy to define for 3D models, and there is no single standard for the "right" level of detail of a 3D model. There are related discussions in cartography, such as what is a schematic map (or are all maps schematic?). The level of detail is important, though, as it has implications for aspects such as performance, for example, in virtual reality environments. While there is no direct correspondence between level of detail and polygon count (the number of individual surfaces that need to be processed), it can easily be imagined that complex models require a lot of computing power to interact with, while simple models allow for keeping up sufficient frame rates, for example, in immersive VR experiences (see Lessons 8 and 9). Interestingly, in many planning scenarios, people may decide to have more than one version of a model, one with a low and one with a higher level of detail, in order to use them flexibly in different applications. Unlike in map generalization, there are only a few approaches that allow for 3D model generalization.
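To get an intuition for why polygon count matters for interactive frame rates, here is a back-of-the-envelope sketch in Python; the assumed triangle throughput is purely illustrative and not a benchmark of any particular graphics card:

```python
# Rough per-frame polygon budget: the higher the required frame rate,
# the fewer triangles the whole scene can contain per frame.
triangles_per_second = 100_000_000  # assumed, illustrative rendering throughput

for fps in (30, 60, 90):            # desktop vs. typical immersive VR refresh rates
    budget = triangles_per_second // fps
    print(f"{fps} fps -> roughly {budget:,} triangles per frame for the entire scene")
```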
Below are examples of different levels of detail of 3D models.
It is advisable to think about the level of detail that one requires before starting the modeling process. It is still the case that the more detail you need, the higher the cost will be in terms of time, software product, and computing resources.
A good starting point for thinking about the levels of detail of 3D models is an approach referred to as CityGML [27], which set out to standardize 3D city models. While the active development of CityGML has slowed down, its classification of levels of detail provides a good basis for discussion. CityGML identifies five levels of detail based on accuracy and the minimal dimensions of the objects represented, which can roughly be characterized as follows (see also Figure 2.1.2, and Kolbe et al. 2005):
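As a rough point of reference, the five levels are commonly summarized along the following lines (a sketch based on the CityGML specification and Kolbe et al. 2005; the accuracy figures are approximate and vary between sources):

```python
# Approximate characterization of the CityGML levels of detail (LoD0-LoD4).
CITYGML_LOD = {
    "LoD0": "regional model: 2.5D terrain with building footprints only",
    "LoD1": "block model: footprints extruded to flat-roofed blocks (~5 m accuracy)",
    "LoD2": "buildings with differentiated roof structures and textures (~2 m accuracy)",
    "LoD3": "detailed architectural model including facade detail (~0.5 m accuracy)",
    "LoD4": "LoD3 plus interior structures such as rooms and furniture (~0.2 m accuracy)",
}
```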
As this brief discussion illustrates, the level of detail has substantial consequences on the visual appearance of 3D models, as well as on cost and efficiency. Cost is associated with time but also actual software products (e.g., high-end CAD programs for a higher level of detail; procedural rule software for a lower or medium level of detail). Efficiency is related to the computing power necessary to turn a 3D model into a smooth interactive experience.
Creating 3D models of buildings, artifacts, and cities enables potentially better decision making, education, and planning in many fields such as archeology, architecture, urban sciences, and geography. Creating virtual cities, for example, allows for presenting scenarios of future development plans with more realism to assess their feasibility and plan their implementation. There is informational value in creating virtual cities, including better communication with non-technical audiences, that is, the community. Additionally, there are new approaches in 3D analysis, for instance, dealing with views, shadows, the reflected heat of building blocks, and the calculation of hours of sunshine.
3D modeling is the process of digitally representing the 3D surface of an object. However, the model is usually rendered to be displayed as a 2D image on a computer screen. Experiencing the model in 3D can be achieved using 3D printing devices or by presenting it in a virtual reality (VR) environment. There are different methods of constructing 3D models and different methods of accessing them. Figure 2.2.1 provides an overview of a couple of options for both creating and accessing 3D models that we will discuss in more detail later in this lesson. Many of the tools shown in the figure will be introduced throughout the course, while others are challenging to work with in an online environment.
The Figure above names just a couple of examples for efficiently collecting and accessing 3D data and information. A booming market for 3D technologies such as photogrammetry, 360-degree cameras, and scanning tablets, but also the efficient use of existing data such as OpenStreetMap, has made content creation for 3D environments accessible to a wide audience. At the same time, VR technology is going mainstream, with international companies such as Facebook™ or HTC™ weeding out roadblocks such as cyber/motion sickness.
To give you an idea of modeling, let us first consider modeling by hand. In this case, a model of an object, whether or not it exists in the past, present, or future, is constructed manually by a human using software modeling tools. Creating 3D models manually can be done using software such as SketchUp (see Lesson 3), Maya, 3ds Max, Blender, and Cinema 4D, or any one of a number of software packages that exist for 3D modeling. There are two main techniques in manually creating 3D models: solid modeling and shell modeling. Solid modeling often uses CSG (constructive solid geometry) techniques. CAD packages such as Rhino, SolidWorks, and SketchUp are considered solid modeling platforms. Complex 3D models can be created with the use of solid geometries. The Figure below shows a couple of solid geometries that are used to create potentially very complex models. An example of a complete 3D model is shown below: a model of the historic Old Main created as part of Geography 497 in Spring 2016.
Shell modeling represents only the surface of an object. These types of models are easier to visualize and work with. An example of shell modeling is the use of polygon meshes consisting of a collection of triangles or any other type of convex polygon (from other geography lessons on interpolation, you might be familiar with triangulation). Polygon meshes represent only the exterior of a model. To make the model smoother and more realistic, a method called tessellation can be used to break down the polygon meshes into higher-resolution meshes. Programs such as Maya, 3D Studio Max, and Blender all utilize different methods of working with polygonal meshes.
While there is a distinct difference between CSG and shell modeling, many of the programs mentioned, and even those that were not, provide support for both types in a limited capacity. Understanding what type of modeling best fits your project needs helps with choosing the best software for 3D modeling tasks.
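To make the distinction concrete, here is a minimal Python sketch (numpy only; all names are our own illustrative choices): a shell model stores explicit vertices and triangles that describe only the surface, while a CSG-style solid can be described implicitly and combined through Boolean operations such as union and difference.

```python
import numpy as np

# --- Shell (boundary) representation: a tiny triangle mesh of a tetrahedron ---
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([   # each row indexes three vertices that form one triangle
    [0, 2, 1],
    [0, 1, 3],
    [0, 3, 2],
    [1, 2, 3],
])

def face_normals(v, f):
    """Per-triangle unit normals; a shell model describes only the surface."""
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# --- Solid (CSG-style) representation via implicit signed distance functions ---
def sphere(points, center, radius):
    """Negative inside the sphere, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def csg_union(d1, d2):
    return np.minimum(d1, d2)

def csg_difference(d1, d2):
    return np.maximum(d1, -d2)

if __name__ == "__main__":
    print(face_normals(vertices, faces))
    probe = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    blob = csg_union(sphere(probe, np.array([0.0, 0.0, 0.0]), 1.0),
                     sphere(probe, np.array([1.5, 0.0, 0.0]), 1.0))
    print(blob <= 0.0)   # True where the probe point lies inside the combined solid
```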
Shell modeling can be based on photographs/aerial imagery or on scanning model geometry from the real world using, for example, laser scanners. In this section, we will learn about A) Reality modeling and B) Scanning: remotely sensed data such as LiDAR point clouds.
Reality models provide real-world context using aerial imagery and/or photographs. Technologies like 3D imaging and photogrammetry, often with the use of UAVs (unmanned aerial vehicles), provide the images that are then used in software to convert 2D imagery into a 3D model. Examples of software for this approach are ContextCapture by Bentley, Drone2Map by Esri, and VUE, PlantFactory, Ozone, and CarbonScatter by E-on.
LumenRT, for example, is an E-on software licensed by Bentley. It can create real-time immersive experiences. It presents a combination of tools that create ultra-high-definition videos and photographic images for 3D projects.
Data-driven modeling can also be achieved through point clouds to render exterior surfaces. A point cloud can be obtained by laser scanning. LiDAR, as a system for collecting point clouds, is explained in more detail in this section. The fact that point clouds do not contain any type of surface geometry is an important characteristic in modeling because, by default, they do not change their appearance or respond to scene lighting. They enable the designer to incorporate realistic 3D objects without having to model them. The drawback is that the objects are not editable with traditional techniques (Autodesk) [35]. Software that processes and supports point clouds as input data includes Potree, Rhino, the Autodesk products, Blender, Pointools, MathWorks (MATLAB), and MeshLab.
LiDAR is an active remote sensing method of collecting information. It uses active sensors that have their own source of light or illumination. There are two types of LiDAR systems: airborne and terrestrial scanning systems. Airborne laser scanning, known as LiDAR (Light Detection and Ranging), is a remote sensing technique that measures variable distances to the earth using a pulsed laser. Aircraft, helicopters, or UAVs (unmanned aerial vehicles) such as drones are common platforms for collecting LiDAR data. LiDAR systems collect x, y, and z data at predefined intervals.
Based on LiDAR data, high-quality elevation models can be created. For instance, the airborne LiDAR data collected for the Penn State campus contains points at 25 cm intervals. Two types of elevation models can be derived from LiDAR returns: (1) a first-return surface that includes anything above the ground, such as buildings and canopy, referred to as a DSM (digital surface model), and (2) the ground or bare earth, referred to as a DEM (digital elevation model), which represents the topography.
The workflows of DEM and DSM creation are outside of the scope of this course. Using these elevation models, buildings, trees, and roads can be extracted from the LiDAR data with elevation information. High-resolution LiDAR data can provide more precise information regarding height. Tools such as LiDAR Analyst, LAStools, and LP360 have made this data extraction automatic or semi-automatic. For instance, the extracted building information contains roof sections, shape, size, and estimated height. To detect more accurate height information, facade geometry, and textures, additional systems such as terrestrial laser scanning and photogrammetry should be incorporated into the model. Terrestrial or ground-based laser scanning (terrestrial LiDAR) is used for high-accuracy feature reconstruction. With a combination of airborne and terrestrial LiDAR, more realistic 3D models can be created.
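To illustrate the basic idea behind the two surfaces (the production workflows mentioned above additionally involve point classification, filtering, and interpolation), here is a toy Python sketch that grids a point cloud by keeping the highest return per cell as a crude DSM and the lowest return as a crude bare-earth approximation; all function and variable names are our own:

```python
import numpy as np

def grid_point_cloud(x, y, z, cell_size=1.0):
    """Crude DSM/ground rasters from LiDAR points (x, y, z as 1D numpy arrays)."""
    col = ((x - x.min()) // cell_size).astype(int)
    row = ((y - y.min()) // cell_size).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)   # highest return per cell
    ground = np.full_like(dsm, np.nan)                      # lowest return per cell
    for r, c, h in zip(row, col, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h       # first-return-like surface: buildings, canopy, ground
        if np.isnan(ground[r, c]) or h < ground[r, c]:
            ground[r, c] = h    # rough bare-earth approximation
    ndsm = dsm - ground          # normalized height highlights above-ground features
    return dsm, ground, ndsm
```

Real DEM production would, among other things, classify and filter out vegetation and building returns rather than simply taking the lowest return per cell.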
The following image shows how a point cloud collected with an airborne LiDAR system can be colorized with RGB values from a digital aerial photo to produce a realistic 3D oblique view. There are numerous papers on developing algorithms that can automate the process of 3D modeling using LiDAR data and aerial photography.
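The colorization itself boils down to a coordinate lookup: each point's x/y position is mapped to the corresponding pixel of the georeferenced aerial photo, and that pixel's RGB value is copied onto the point. A minimal sketch, assuming the photo and point cloud share the same coordinate system and a simple affine georeference (all names are illustrative):

```python
import numpy as np

def colorize_points(x, y, ortho_rgb, origin_x, origin_y, pixel_size):
    """Assign each point the RGB value of the aerial-photo pixel it falls into.
    ortho_rgb: (rows, cols, 3) image array; origin_x/origin_y: upper-left corner."""
    col = ((x - origin_x) / pixel_size).astype(int)
    row = ((origin_y - y) / pixel_size).astype(int)   # image rows increase downward
    row = np.clip(row, 0, ortho_rgb.shape[0] - 1)     # clamp points outside the photo
    col = np.clip(col, 0, ortho_rgb.shape[1] - 1)
    return ortho_rgb[row, col]                        # (n_points, 3) RGB per point
```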
For realistic scenes, more complex models are needed, and creating these types of models by hand can be tedious. One solution is to construct 3D models or textures algorithmically. Procedural modeling is used in computer graphics to create 3D models based on sets of rules; procedural computing is the process of creating data algorithmically instead of manually. In computer graphics, it is also used for creating textures. Mostly, procedural rules are used for creating complex models such as plants, buildings, cities, or landscapes, which require more specialized tools and are time-consuming for a person to build. With procedural rules, developing complicated models is simplified through repeating processes, automatic generation, and, if necessary, randomization.
One of the methods used in procedural modeling is the use of fractals (fractal geometry) to create self-similar objects. Fractals are complex geometries with never-ending patterns; in other words, they can have infinite patterns at different resolutions. This type of approach is useful in creating natural models such as plants, clouds, coastlines, and landscapes. For instance, a tree can be generated automatically based on branching properties and the type of leaves; a forest can then be generated automatically by randomizing the trees.
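As a small illustration of the idea, the following Python sketch grows a self-similar 2D tree: each branch spawns two shorter branches at slightly randomized angles, so re-running it with a different random seed yields a different tree (all parameters are illustrative):

    import math
    import random

    def grow(x, y, angle, length, depth, segments):
        """Recursively collect the branch segments of a simple 2D fractal tree."""
        if depth == 0:
            return
        # endpoint of the current branch
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        # two child branches: shorter and rotated left/right by a random amount
        for turn in (-1, 1):
            child_angle = angle + turn * random.uniform(0.3, 0.6)
            grow(x2, y2, child_angle, length * 0.7, depth - 1, segments)

    random.seed(42)              # change the seed to "randomize" a new tree
    segments = []
    grow(0.0, 0.0, math.pi / 2, 10.0, depth=6, segments=segments)
    print(f"Generated {len(segments)} branch segments")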
Another method in procedural modeling is generating models based on grammars. Grammar-based modeling defines rules that can be applied repeatedly to produce more detail and create finer models, as in the image below. A tree-shaped data structure, or model hierarchy, plays an important role in grammar-based modeling, both in terms of programming efficiency and in terms of adding more detail to a shape.
Examples of procedural modeling platforms are ESRI CityEngine, Vue, and Fugu. Xfrog is a plug-in that can be added to platforms such as Maya and CAD software to build plants, organic shapes, or abstract objects.
Procedural modeling will be the topic of Lesson 4.
Typically, 3D models are created with a concrete final application or product in mind. This could be presentational material for a web site, a standalone 3D application like a virtual city tour or computer game, a VR application intended to be run on one of the different consumer VR platforms available today, or some spatial analysis to be performed with the created model to support future decision making, to name just a few examples. In many of these application scenarios, a user is supposed to interact with the created 3D model. Interaction options range from passively observing the model, for example, as an image, animation, or movie, through being able to navigate within the model, to being able to actively measure and manipulate parts of the model. Usually, the workflows to create the final product require many steps and involve several different software tools, including 3D modeling software, 3D application building software like game engines, 3D spatial analysis tools, 3D presentation tools and platforms, and different target platforms and devices.
The figure below illustrates what 3D application building workflows can look like at a general level: The workflow potentially starts with some measured or otherwise obtained input data that is imported into one or more 3D modeling tools, keeping in mind such input could also simply be sketches or single photos. Using 3D modeling tools, 3D models are created manually, using automatic computational methods, or through a combination of both. The created models can then either be used as input for some spatial analysis tools, directly used with some presentational tools, for example, to embed an animated version of the model on a web page, or exported/imported into a 3D application building software, which typically is a game engine such as Unity3D or the Unreal engine, to build a 3D application around the models. The result from the 3D application building work can again be a static output like a movie to be used as presentation material, or an interactive 3D or VR application built for one of many available target platforms or devices, for example, a Windows/macOS/Linux stand-alone 3D application, an application for a mobile device (for instance, Android or iOS-based), or a VR application for one of the many available VR devices. The figure below is an idealization in that workflows in reality are typically not as linear as indicated by the arrows. Often it is required to go back one or more steps, for instance from the 3D application building phase to the 3D modeling phase, to modify the previous outputs based on experiences made or problems encountered in later phases. It is also possible that the project is conducted by working on several stages in parallel, such as creating multiple levels of detail (LoD) 3D models to work with the intended platform. Moreover, while we list example software for the different areas in the figure, these lists are definitely not exhaustive and, in addition, existing 3D software often covers several of these areas. For instance, 3D modeling software often allows for creating animations such as camera flights through a scene, or, in the case of Esri's CityEngine/ArcGIS Pro, allows for running different analyses within the constructed model.
A 3D modeling project can also aim at several or even many target formats and platforms simultaneously. For instance, for the PSU campus modeling project, which we are using as an example in this course, the goal could be to produce (a) images and animated models of individual buildings for a web presentation, (b) a 360° movie of a camera flight over the campus that can be shared via YouTube and also be watched in 3D with the Google Cardboard, (c) a standalone Windows application showing the same camera flight as in the 360° video, (d) a similar standalone application for an Android-based smartphone, but one where the user can look around by tilting and panning the phone, and (e) VR applications for the Oculus Rift and HTC Vive where the user can observe and walk around in the model.
The next figure illustrates the concrete workflow for this project (again in an idealized linear order) with the actual software tools used, and provides more details on the interfaces between the different stages. The main input data for the campus model is extracted from airborne LiDAR point clouds. Based on these data, building footprints, building and tree heights, and a Digital Elevation Model (DEM) are extracted. Roads (streets and walkways) are 2D shapefiles received from the Office of Physical Plant at PSU. Roof shape and tree type information is collected through surveys. The DEM, which is in raster format, and the other data, which are in vector format (polygons, lines, and points), are imported into CityEngine, a generation engine for creating procedural models. Procedural modeling using CityEngine will be explained in more detail in Lesson 4. The result of the procedural modeling can be shared on the web as an ArcGIS Web Scene for visualization and presentation. It can also be exported to 3D/VR application building software.
The 3D/VR application building software we are going to use in this course will be the Unity3D game engine (Lessons 8 and 9). A game engine is a software system for creating video games, often both 2D and 3D games. However, the maturity and flexibility of this software allows for implementing a wide range of 3D and now VR applications. The entertainment industry has always been one of the driving forces of 3D and VR technology, so it is probably not surprising that some of the most advanced software tools to build 3D applications come from this area. A game engine typically comprises an engine to render 3D scenes in high quality, an engine for showing animations, and a physics and collision detection engine, as well as additional systems for sound, networking, etc. Hence, it saves game/VR developers from having to implement these sophisticated algorithmic software modules themselves. In addition, game engines usually have a visual editor that allows even non-programmers to produce games (or other 3D applications) via mouse interaction rather than coding, as well as a scripting interface in which all required game code can be written and integrated with the rest of the game engine, or in which tedious scene building tasks can be automated.
Unity can import 3D models in many formats. A model created with CityEngine can, for instance, be exported and then imported into Unity using the Autodesk FBX [42] format. SketchUp models can, for instance, be imported as COLLADA .dae [43] files. In the context of our PSU campus project, Unity3D then allows us to do things like creating a pre-programmed camera flight over the scene, placing a user-controlled avatar in the scene, and implementing dynamics (e.g., think of day/night and weather changes, or computer-controlled moving objects like humans or cars) and user interactions (e.g., teleporting to different locations, scene changes like when entering a building, retrieving additional information like building information or route directions, and so on).
The different extensions and plugins available for Unity allow for recording images and movies of an (animated) scene as well as for building final 3D applications for all common operating systems and many VR devices. For instance, you will learn how to record 360° images for each frame of a camera flight over the campus in Unity. These images can be stitched together to form a 360° movie that can be uploaded to YouTube and from there be watched in 3D on a smartphone by using the Google Cardboard VR device (or on any other VR device able to work with 360° movies directly). You will also learn how to build a standalone application from a 3D project that can be run on Windows outside of the Unity editor. Another way to deploy the campus project is by building a standalone app for an Android-based phone. Unity has built-in functionality to produce Android apps and push them to a connected phone via the Android SDK (e.g., the one coming with Android Studio [44]). Finally, Unity can interface with VR devices such as the Oculus Rift and HTC Vive to render the scene directly to the head-mounted displays of these devices, interact with the respective controllers, etc. There exist several Unity extensions for such VR devices, and the VR support is continuously improved and streamlined with each new release of Unity. SteamVR [45], OVR [46], and VRTK [47] for Unity currently take a central role in building Unity applications for the Rift and the Vive.
We have briefly mentioned photogrammetry, and several of you found related examples in the previous week. Amazing things are happening in the area of photogrammetry! Photogrammetry, for those who are not familiar with it, can be loosely defined as the art and science of measuring in photos. For the case of 3D modeling, this has vast applications:
Photogrammetry has traditionally been within the field of geodesy and remote sensing, but its applications are becoming ubiquitous. Originally, it was used to measure distances, define the extent of areas, and essentially identify and geo-locate terrain features with the purpose of complementing databases and/or creating maps.
To give you an example of a recent approach that uses Google Earth images for 3D modeling, we would like you to read the following article.
Rapid 3D Modeling Using Photogrammetry Applied to Google Earth [24] by Jorge Chen and Keith C. Clarke.
As you read, please keep in mind the following questions:
This assignment has two tasks.
Please find an article that describes a 3D/VR workflow. Make sure that the article has been peer-reviewed. Determine whether the entire article was peer-reviewed (most journals are) or just the abstract was peer-reviewed (as some conference proceedings may be). Either download the article, if that is an option, or save the URL to the article for use in the second half of your assignment.
Once you have decided upon an article, write a 4-6 sentence paragraph detailing why you selected this particular article and what you like about it. Make sure that you explain how the article adds value to GIS and/or other sciences. Some reasons you might select an article are because it discusses advances in GIS and/or other disciplines you think are important, it describes its topic clearly, it is informative and would allow you to implement a workflow yourself, or it gets high marks because it is just cool. In your document, provide a full reference for the article (using APA style) and a link to the article or provide an electronic copy of the document. Below are two examples of complete references.
This assignment is due on Tuesday at 11:59 p.m.
Please submit your completed deliverable to the Lesson 2 (Article and Review) Assignment.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Article selected is peer-reviewed, the type of peer review is indicated, the link to the article (or file) is provided, and full reference is present. | 5 pts | 2.5 pts | 0 pts | 5 pts |
Write up clearly communicates how the article adds value to GIS or other scientific fields. | 4 pts | 2 pts | 0 pts | 4 pts |
Write up is well thought out, researched, organized, contains some technical specifications and clearly communicates why student selected and likes the article. | 5 pts | 2.5 pts | 0 pts | 5 pts |
The document is grammatically correct, typo-free, and cited where necessary. | 1 pts | 0.5 pts | 0 pts | 1 pts |
Total Points: 15 |
This lesson hopefully conveyed how creative and exciting the developments are that allow for the creation of 3D models and VR content. There are classic approaches coming out of photogrammetry, such as structure from motion (SfM) mapping, that have existed for a long time but only recently reached a mature level. There are creative minds that combine different technologies such as Google Earth and SfM to simulate fly-overs and much more. The article by Chen and Clarke that we integrated was only presented in September 2016, so it is very recent. As we have been successfully using it in our own work, we thought we would integrate it into this lesson. It is a really fast-moving field and, if anything, many products will become even more mature over the next years with numerous start-ups pushing into the market.
You have reached the end of Lesson 2! Double-check the to-do list on the Lesson 2 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 3.
SketchUp is one of the most versatile and easiest to learn 3D modeling tools on the market. It offers an excellent balance between functionality and quality on the one hand and ease of use on the other. It allows users with relatively little experience to create exceptional 3D models of almost anything. SketchUp is not tied to a single domain (e.g., architecture); you will find that every discipline and industry in need of 3D models is using SketchUp: architects, landscape architects, architectural engineers, interior designers, construction professionals, urban planners, woodworkers, artists, sculptors, mapmakers, historians, 3D game designers, and more. SketchUp was part of the Google™ family but is now owned by Trimble™. We will only scratch the surface of this versatile modeling tool. Thanks to its warehouse of models (assets) contributed by users and companies, it might become one of your favorite tools if you are planning to create building models of the past, present, or future, if you want to design room interiors or model your dream car, or if, through its large expansion opportunities, you are interested in creating solutions for augmented reality. One of our colleagues said that he treats SketchUp like Minecraft™, but instead of largely using blocks you have every geometric form under the sun at your disposal.
While geographers are only slowly yet steadily entering the world of 3D (certainly one of the biggest trends at recent ESRI user conferences), the results of using SketchUp in our residential courses are quite remarkable. Here are some models that students have created after a weeklong introduction. More models and more information can be found on our websites (Sketchfab [54]; Sites [55]).
The model below shows Old Main, the landmark building of Penn State, in its historic form (before 1930). The model was created by Matt Diaz and curated by Niloufar Kiromasi, both interns at ChoroPhronesis (Summer 2016). Modeling has been possible thanks to an ongoing collaboration with the Penn State Library, which offers historic images and floor plans for buildings that no longer exist on campus. Our efforts contribute to what has come to be known as cultural heritage research; in our case, we are working toward a strategy to recreate environments long gone. If you would like to know more about Old Main’s history, visit Penn State's Historic Old Main [56].
Click the image below to move the building around using your mouse, stylus, or finger on the screen. If you want to view the building using your Cardboard Viewer, go to Sketchfab Penn State University 1922: Old Main [57].
SketchUp model of Old Main [57]
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
I would like to offer you a word of caution: not everyone is a modeler. That does not mean, and should not mean, that you cannot create a beautiful model (or pass this lesson!). It simply means that some people develop a love for creating models while others really do not like it. One piece of advice that works for almost everything in these situations is to practice, practice, and practice. We will start everyone off on the right foot by going through several tutorials, and we will encourage you to practice (and practice, and practice). We will also request a two-tier exercise at the end, acknowledging that this is a two-week (maximum) exercise and not a course on SketchUp. However, there is also the opportunity to make a contribution to an exciting project we started: we have been creating historic campus models for Penn State. The video below shows a fly-through of part of the campus model for 1922 (slightly outdated, as this is an ongoing project). On Sites at Penn State: Penn State Campus - Past, Present, and Future [55], we have collected building models from students who made contributions, and, if you are interested, there are several buildings yet to be modeled. If you ever have a chance to visit One World Center, they have a tour through the history of New York projected onto their elevator walls; see the World Trade Center Elevator Video [59]. Penn State does not have elevators long enough, but it is a feasible way to communicate the dramatic changes Penn State and State College have gone through using 3D models and VR.
The model that we are building can be turned into a VR experience by using a game engine such as Unity (see Modules 8 and 9). Once in Unity, there are numerous options for interacting with and experiencing the models: they can be viewed in HMDs such as the Oculus Rift or HTC Vive, there are extensions to create mobile phone apps, and, as you will learn, you can create a fly-through using a 360-degree camera and export the video, for example, to YouTube. Just to give you an example of the latter, below is a YouTube video taken from our 1922 campus model. You can also view the video at YouTube Penn State campus in 1922 flythrough (non-360 degree version) [60].
One of the best resources to get you started with SketchUp, covering all the essential basics necessary for efficiently creating building models, is the SketchUp: Essential Training course on Lynda.com [61]. If you are new to Lynda.com, I can highly recommend its repertoire of well-designed courses, especially for technical skills. Lynda.com is free for all Penn Staters, and as the essential course is a perfect fit for our introduction and ‘only’ three hours long, we will use it and have designed exercises around its content. The idea is not that you simply watch the videos, but that you work through the exercises. After going through the essential training course, you will be well set up for your own modeling project (see final assignment).
In addition to the Lynda.com course, the following sites also provide valuable resources for learning SketchUp.
SketchUp comes in two versions, Free and Pro. The Free version is now (since 2018) web-based and allows for modeling a variety of objects and offers very decent functionality. The Pro version offers additional functionality and is also more flexible and open for additional plug-ins (e.g., specialized export options or augmented reality).
Some of the tasks that SketchUp Pro will allow you to do are (see also Schreyer 2016):
As part of this course, we recommend using the free trial of SketchUp Pro. Most of what you learn will also be available in the Free version, but the Lynda course we are using is tailored to the Pro version (they, too, demo with the trial version). To download SketchUp, go to SketchUp [64] and click 'Try SketchUp'. On the next page, under "I use SketchUp for...", choose Higher Education and click Create Account to start your 30-day free trial. Please make sure that your computer meets the minimum requirements, and keep in mind that using a 3-button/wheel mouse is essentially a requirement. In other words, if you are a digital nomad like myself and work everywhere on your laptop, you might want to invest in a wireless mouse. It will make your modeling life drastically easier. After downloading, simply follow the instructions.
As a Penn State student, you have access to a catalog of free self-paced courses on a huge variety of both technical and non-technical topics. These courses are high-quality video-based courses presented by professionals. One of those topics happens to be SketchUp, which you will be using in this lesson. Instead of recreating the training for you here, you will need to access the SketchUp Essentials training through Lynda.com via Penn State's subscription.
We strongly encourage you to not simply watch the videos but to practice what you learn immediately as you go through the exercises. To foster practicing, there are several tasks to be solved based on what you will learn in the Lynda course. You will not be using the exercise files provided in Lynda. The data and models that you will need to create the deliverables for the tasks can be downloaded here:
Download Lesson 3 Assignment File [65] now!
The Lynda course offers an introduction to SketchUp for both Windows and Macintosh systems. Our team primarily uses Windows, but we will try to answer questions for Mac users, too (if possible). Read through the tips and the tasks and deliverables below before starting the course. They will make more sense after you get into the course and start learning. Once you begin the Lynda course, go through it at your own pace.
To access the SketchUp training, click the link below to access the full course on the Penn State Lynda website. If you aren't logged into a Penn State service like LionPATH, email, or Canvas, you may need to log in using your Penn State Access Account user ID and password (i.e., abc123). Once you arrive at the course, bookmark the page using your preferred browser to make it easier to return quickly. The course takes over 3 hours to complete but is broken into sections to make it easier to choose a resting point.
SketchUp 2020 Essential Training [66] by Tammy Cody
(You have a subscription to courses on LinkedIn Learning through Penn State.)
Below are the tasks to solve during or immediately after working through the Lynda course. The deliverables will serve as proof that you completed and understood the tasks. You will need the files you downloaded on the previous page (not the Lynda exercise files) to complete the tasks correctly. Download the Lesson 3 Sample File [67] to check that you are capturing the correct images. Before uploading your file, check your screen captures against the ones in the file. Note: Your images should be in full color! The sample file displays in black and white.
Compare your Word document containing the screenshots for Tasks 1-10 with the sample file linked above. Once you are happy that your file is similar to the sample file, upload your document to Lesson 3 Essential Training Assignment.
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 1: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 2: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 3: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 4: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 5: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 6: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 7: screenshot of component is present | 1 pts | 0 pts | 1 pts |
Task 8: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 9: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Task 10: screenshot matches sample | 1 pts | 0 pts | 1 pts |
Total Points: 10 pts |
Below are a few concepts that you either have encountered or are worth noticing about SketchUp.
While the name contains the word "sketch", it is a bit misleading. If you want to, you can model extremely precisely and accurately in SketchUp.
Like many other programs these days, SketchUp allows for snapping to various locations of/within objects such as endpoints, midpoints, edges, or circle centers.
This is one of SketchUp's most powerful features. There are several types of inferences that make it easier to create 3D models. Examples are point, line, or shape inferences. Different inferences are indicated by a color code. To learn more about inferencing, check out the SketchUp Help Center [68].
Being able to enter values for object characteristics directly is extremely useful, especially if you have measured an object you plan to model outside SketchUp.
This is a crucially important feature of SketchUp. Groups and components not only make modeling easier and faster, they also allow for improving the performance of 3D models. A group is, as the name implies, a selection of entities grouped/joined together. Important: if you copy a group, all its contents are copied, too, adding to the required storage space. A component, in contrast, is a set of entities that is stored centrally; each time it is inserted into a model, it is considered an instance. What this practically means is that the storage space of your model is drastically reduced (just think of hundreds of windows and imagine modeling them individually versus as a component).
As SketchUp originally was Google SketchUp it does not come as a surprise that it has rather convenient geo-location capabilities (latitude/longitude coordinates). Georeferencing a model is handy especially for embedding it into larger projects and for further analysis (e.g., shadow analysis). Once a model is georeferenced, other information sources become available such as:
3D modeling allows us to recreate objects or invent new objects using different 3D modeling programs; yet there are limits to what even modern-day computers can handle in terms of 3D modeling. What exactly does this mean? Essentially, 3D modeling involves lots of information relating to points and lines connected to one another across a coordinate plane. The more points and lines you use in a 3D model, the more work a computer must do to allow you to visualize those points and lines as a 3D model. Aside from the hardware, many software packages also have limits on the number of points, lines, or other geometry they can handle. Now think ahead to software such as game engines. Such software can be used to add information to the geometry you provide in 3D models to produce interactions such as movement. The added information can start to cause lag, slowing the intended interaction so that it is no longer occurring in real time. So how do you prevent this?
The gaming industry has developed several techniques, termed optimization methods, that address this issue of too much geometry. Optimization helps you to maintain the highest level of detail (LoD) with as little geometry as possible. So, what exactly is level of detail? Level of detail relates to the amount of realism a 3D model, and ultimately a 3D scene, can have through geometry and materials. Typically, the more geometry a 3D model has, the more realistic it will appear.
Geometry is generally measured in terms of weight by counting either Faces (Quads) or Triangles. Some software refers to Faces as Polys or Polygons; a Face with four sides is a Quad. Heavy models have more Faces or Triangles, while Light models are the opposite. The goal is to maintain the same base shape but with different amounts of detail. For example, curved shapes take up a lot more geometry than square shapes.
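A quick back-of-the-envelope calculation (illustrative numbers only) shows why curved shapes are so much "heavier" than boxy ones:

    def box_triangles():
        """A rectangular box: 6 faces, each split into 2 triangles."""
        return 6 * 2

    def cylinder_triangles(n_segments):
        """A cylinder approximated with n side segments plus two triangle-fan caps."""
        side = n_segments * 2      # each side quad splits into 2 triangles
        caps = n_segments * 2      # each circular cap fans into n triangles
        return side + caps

    print(box_triangles())         # 12
    print(cylinder_triangles(24))  # 96 -- roughly 8x heavier for a smooth-ish cylinder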
Thinking back to our original issue of preventing lag or slow movement, obviously, Light models are best, but are they realistic enough? Level of detail refers to a scale of geometry weight. Some software is capable of handling 3D scenes with a mix of LoD models, where 3D models that are in focus are Heavy models but models out of focus are Light. This suggests that one needs to have both Heavy and Light models of all geometry in a scene. While this is often the case for video game development, it certainly is not practical for every project. So how else can we address this issue?
Rendering is one of the keys to creating realistic models and scenes. It requires you to add Materials (otherwise known as Textures) to your 3D models. Additionally, you introduce Light sources to the scene your 3D model(s) are in to increase the realism. Increasing the realism through materials and light sources offsets the need for detailed geometry. You use a Render Engine (there are many options available as SketchUp Extensions) to calculate the influence of the Light sources and Materials on your geometry in a scene. Keep in mind that Rendering in itself only generates a static image. It can be used to render animations as video files as well, but this does not necessarily help with our issue for interactive environments. Rendering in real time is another option, but it will take up even more computer resources when combined with all your geometry and interactions in a scene. So, how do we make static rendering appear as if it were in real time?
The gaming industry developed another solution to this issue: Texture-Baking, also called Render-to-Texture or Light-Mapping. This allows your 3D geometry to be associated with a render-generated material that stores all of the render results on each surface or face. This can be a time-consuming process, but what it allows you to do is add detailed information in the form of an image file, like a JPEG or PNG, instead of having a lot of extra geometry. We will not go into the details of this now, but this technique allows you to make sure that what you see rendered in, say, SketchUp also shows up in other software where you may want to import your 3D model/3D scene.
What we will introduce here, however, is Face Normals. Face Normals are part of the Material process. Have you ever noticed in SketchUp, before you add materials, that some surfaces appear in a different color? This has to do with the Face Normal. Face Normals indicate which direction a material should be facing. If all face normals are correct, all your geometry will have the same color from the same viewpoint. In rendering terms, the normal tells the computer where the front side of a surface is so it can show materials and apply the lighting correctly.
Flipping the Face Normals on the grey geometry in the image will correct the color difference so that all normals face the same direction. Now, why is this important? There are two main reasons, one of which was mentioned earlier. First, if you use a Rendering Engine, even in SketchUp, the computer needs to know which side is up so the lights work correctly. Second, while not necessarily critical within SketchUp itself, when you import your model into other software the materials may not show up at all because the other software cannot tell which side of a Face is the front side where a material is applied. This can be challenging to correct in other software that may not be designed to handle this issue, such as a game engine. It is easiest to fix Face Normals before you apply materials, but it can also be done after materials have been added in your 3D modeling software.
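If you are curious what a face normal is under the hood, the short Python sketch below (with purely illustrative triangle coordinates) computes one with a cross product and shows that reversing the vertex order is exactly what "flipping" the normal does:

    import numpy as np

    def face_normal(v0, v1, v2):
        """Unit normal of a triangle; its direction depends on the vertex winding order."""
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n)

    v0 = np.array([0.0, 0.0, 0.0])
    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])

    print(face_normal(v0, v1, v2))  # [0. 0. 1.]  front side faces "up"
    print(face_normal(v0, v2, v1))  # [0. 0. -1.] reversed winding = flipped normal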
Ultimately, the solution you choose is up to you and based on the needs of your project. However, the best practice is to focus on Lightweight models and if the detail is not enough, to add a little extra geometry to see what will work. Relying on materials is necessary for Lightweight models as the materials can replace details you did not model. Correcting Face Normals is necessary for making sure those materials and also light sources show correctly in other software. Whether you use a Rendering Engine or not, you still need to use materials that help with adding detail to your 3D models.
To learn more about Optimization for SketchUp please check out the Knowledge Base article Making a detailed model in a lightweight way [69].
In this exercise, you will work with different optimization and pre-rendering techniques that are applicable to a number of 3D modeling programs.
Goal: To introduce optimization and the role of rendering SketchUp models.
Software: SketchUp
Required extensions for this Task:
TT_Lib Extension by ThomThom
CleanUp Extension by ThomThom
Model Info Extension by ThomThom
Probes Extension by ThomThom
Hardware: None
Model: Download Lesson 3.5 Assignment Model [70].
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Screenshot of model | 1 pts | 0 pts | 1 pts |
Screenshot of model statistics | 1 pts | 0 pts | 1 pts |
Screenshot of model after using clean up | 2 pts | 0 pts | 2 pts |
Screenshot of model statistics after using clean up | 2 pts | 0 pts | 2 pts |
Briefly describe the differences, how much optimization has happened | 1 pts | 0 pts | 1 pts |
Screenshot of the model after manual cleaning | 2 pts | 0 pts | 2 pts |
Screenshot of the model statistics after manual cleaning (if done thoroughly) | 2 pts | 0 pts | 2 pts |
Screenshot of face normals flipped | 2 pts | 0 pts | 2 pts |
Screenshot of Final Model (as described in instructions) | 2 pts | 0 pts | 2 pts |
Total Points: 15 pts |
Upload your word document and your SketchUp file to Lesson 3 Optimization Assignment.
In order to deepen your modeling expertise, we would like you to create your own 3D model of a building. There are a number of options, and while we provide general guidelines, we leave it up to you to choose your route. Modeling is really a rather personal matter and finding your own style is important. We provide more information and some general guidelines below.
Now that you are somewhat familiar with SketchUp, your task is to create a model of a building. This model does not need to be an architectural sketch but should be perceptually similar to the original. You should pay attention to things such as windows, roof shape, textures, etc.
These are the basic requirements. If you stick to them and deliver a SketchUp model that meets these criteria, you will receive 90% of the possible points. We have a number of options below that will allow you to receive 100%. Basically, you have to include at least one advanced feature.
Below is a list of things you can do to fulfill advanced requirement criteria. You do not have to choose them all. You can do just one or however many you would like to tackle. Generally speaking, you need to provide a model that shows a higher level of sophistication/complexity, not necessarily for all its features but for some. It is necessary that you provide a brief 1-2 paragraph write-up that explains how you improved the model with added detail. Alternatively (or in addition), there are steps you can perform that would satisfy the advanced requirements. Here is the overview:
As mentioned before, modeling is a rather personal experience; there are many ways to model, and in the end, you might find your personal style. Here are three examples of how you can arrive at a building model:
Upload your SketchUp file to the Lesson 3 3D Model Assignment.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
The perceptual appeal of the entire model | 6 pts | 3 pts | 0 pts | 6 pts |
Level of realism of doors | 2 pts | 1 pts | 0 pts | 2 pts |
Level of realism of windows | 2 pts | 1 pts | 0 pts | 2 pts |
Detail of the roof and associated features | 2 pts | 1 pts | 0 pts | 2 pts |
The complexity of the building | 3 pts | 1.5 pts | 0 pts | 3 pts |
The level of model optimization | 3 pts | 1.5 pts | 0 pts | 3 pts |
Naturalness of textures | 2 pts | 1 pts | 0 pts | 2 pts |
Total Points: 20 pts |
Once you have completed your model and would like to proudly share it with the world (non-mandatory), there are again different ways to accomplish that. One of the most prominent platforms for sharing all kinds of 3D models is Sketchfab [73]. The models that you saw earlier in this module, embedded into the course website, are hosted on Sketchfab. Many modelers allow you to use their models in different contexts, and Sketchfab is continuously developing ways to make your models even more accessible, such as built-in VR features. We will be using Sketchfab to organize the final submission of this module and to give you an idea of how to share 3D models that you create.
To finalize your modeling assignment, please create a Sketchfab account.
SketchUp official home page [74]
The official SketchUp YouTube channel [75]
The official SketchUp Community Forum [76]
SketchUp Help Center: Quick Reference Card [63]
In addition to the essential training course we used in this module, there are numerous courses that might be of interest. Simply search for "SketchUp" at Lynda.com [77].
You have gone through a hands-on modeling process in this lesson using SketchUp. While there is still a learning curve and every powerful tool requires practice, SketchUp has many, many features that make modeling easier. From its incredible warehouses to its geo-location features, it offers modelers inside and outside the geospatial sciences a lot of options for exploring the third dimension. For many applications, it is important to have a tool like SketchUp, as it may be the only option to create a 3D model. It is also a good learning experience to come to appreciate the effort that has gone into designing software such as CityEngine, which uses an entirely different approach, as we will explore in the next lesson.
You have reached the end of Lesson 3! Double-check the to-do list on the Lesson 3 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 4.
Lesson 3 has introduced you to hands-on 3D modeling using an efficient approach for various modeling purposes. Instances of when you need hands-on modeling are: (a) you are interested in a substantial amount of detail in your models, (b) the objects you want to model no longer exist and cannot, for example, be scanned, or (c) you’d like to make use of a substantial asset warehouse that provides you with professionally designed items such as fridges or couches. As you can easily imagine, this approach becomes prohibitive if you are interested in modeling entire cities or any large number of buildings, if you would like to test changes quickly (e.g., increasing the number of floors), or if you need flexible access to your models, for example, through the internet.
The general concept of procedural modeling, in contrast to modeling by hand (also referred to as static modeling), was discussed in Module 2. Rather than modeling individual buildings, procedural computing (as a basis for procedural modeling) is the process of creating models algorithmically instead of manually. This chapter provides a more detailed introduction to procedural modeling as it is the basis for modeling cities and other geographic-scale information in GIS environments. However, we decided not to teach a specific software package as a generating platform in this course. We are already introducing you to a number of software options, and ESRI CityEngine, although a powerful tool, requires a bit more time than we have in one or two lessons. We will provide you with resources and a demo (in later lessons) that show some of its functionality, but we do not expect you to learn CityEngine for this course. Yet, we would like you to understand procedural modeling. Later, in Module 5, you will explore how procedural rule packages (that we created for you) can be applied in ArcGIS Pro (this is also in line with the long-term strategy of ESRI). What this means is that you will be able to modify rules created by procedural modeling software without having to learn the specifics of that software (CityEngine). As ArcGIS Pro is a rather new software product, the functionality to change procedural rules is still limited but improving from version to version. The software has already been updated twice since we started designing this course.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
As discussed in Module 2, procedural modeling is about constructing detailed, textured 3D models by formulating rules instead of modeling them manually (CityEngine [79]). These rules are constructed based on a specific type of shape grammar (Stiny & Gips 1971). Stiny and Gips invented shape grammars and defined them as a method of two- and three-dimensional shape generation. A shape grammar is a set of shape transformation rules that generate a language or set of designs (Knight, 1989; Duarte & Beirao, 2010). Shape grammars are spatial rather than symbolic and also non-deterministic, which means that the designer is not constrained by a limited choice of rules (Knight, 1989).
A shape grammar is a way of defining algorithms; it makes design, generally speaking, more understandable. In order to read and use shape grammars, we need a short review of the basic terms below (Stiny, 1980):
A shape rule specifies how an existing part of a shape can be transformed. It consists of two labeled shapes, one on each side of the arrow (α --> β). Each labeled shape is made up of a finite set of shapes, which are points, lines, planes, or volumes. For instance, in the figure below a simple shape grammar consisting of two rules is defined. The first rule specifies that a triangle with a dot can be modified by adding an inscribed triangle (which receives the dot) and removing the dot from the original triangle. The second rule states that one can also simply remove the dot from a triangle with a dot. Given the initial shape of a triangle with a dot, a shape can be derived by applying these two shape rules in a particular order, namely rule a -> rule a -> rule b in this case.
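To make the bookkeeping of such a derivation concrete, here is a toy Python sketch. It is symbolic only (real shape grammars are spatial, not textual), and the shape names are purely illustrative:

    # Rule a: a dotted triangle becomes the outer triangle (dot removed) plus a
    #         dotted inscribed triangle, so the rule can be applied again.
    # Rule b: a dotted triangle simply loses its dot.
    RULES = {
        "a": ("triangle_with_dot", ["triangle", "triangle_with_dot"]),
        "b": ("triangle_with_dot", ["triangle"]),
    }

    def apply_rule(shapes, rule_name):
        """Replace the first shape matching the rule's predecessor with its successor."""
        predecessor, successor = RULES[rule_name]
        i = shapes.index(predecessor)               # ValueError if the rule cannot apply
        return shapes[:i] + successor + shapes[i + 1:]

    design = ["triangle_with_dot"]                  # the initial shape
    for rule in ["a", "a", "b"]:                    # the derivation: rule a -> a -> b
        design = apply_rule(design, rule)
    print(design)   # ['triangle', 'triangle', 'triangle'] -- three nested triangles, no dot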
To create an urban plan or design, the complexity and characteristics of that environment can be encoded as shape grammars to increase flexibility in generating a number of solutions and scenarios. Duarte & Beirao (2010) indicate that old urban plans are not compatible with contemporary urban societies and that there is a need for flexible designs. They propose that there are two ways to use shape grammars in urban environments: (1) as an analytical tool to find the structure of an existing design, or (2) as a synthetic tool to create a new language for design. Both methods automate the process of design generation.
Although a shape grammar can be generated by hand, for more complex rules and choices, a generation engine is needed to create and display geometries and decide which shape rule to apply. ESRI CityEngine is a generation engine for creating shape grammars, applying rules, and generating shapes. CityEngine’s programming language to define shape rules is called CGA (computer-generated architecture) shape grammar. The next section briefly explains the CGA shape grammar and how it can be applied to create virtual 3D models of cities.
Creating a model based on rules is less labor-intensive, although the preparation of the rules themselves is time-consuming. The strength of this modeling approach is that once the rules are created, generating models takes very little time. The following diagram shows the total cost of procedural modeling vs. handcrafted modeling.
To give you a deeper understanding of shape grammars, we would like you to read Shape Grammars in Education and Practice: History and Prospects [78] by Terry Knight (Knight, T. W., 1989).
In this section, we focus on an introduction to ESRI CityEngine as an instance of procedural (grammar-based) modeling software. Although this software won’t be used directly in this course, the results of its procedural modeling will be applied in Lessons 5 to 7, which will introduce you to ArcGIS Pro. CityEngine is standalone desktop software. ‘The computer is given a code-based ‘procedure’ which represents a series of commands - in this context geometric modeling commands - which then will be executed. Instead of the "classical" intervention of the user, who manually interacts with the model and models 3d geometries, the task is described "abstractly", in a rule file’ (CityEngine [79]). A set of generic procedural rules can be used or modified when creating virtual cities. Moreover, new procedural rules can be defined, modified, or rewritten to create more detailed and specific designs.
Please watch the video below to learn more about the user interface and navigation in CityEngine.
CityEngine is suitable for transforming 2D GIS data into 3D cities. GIS data such as points, lines, and polygons can be used to create 3D models. The current version of CityEngine connects to ArcGIS through data exchange; however, it is not tightly integrated into the ArcGIS system yet. The created 3D models can be exported to other software for analysis or editing, such as ArcGIS, Maya, 3ds Max, Google Earth, the Web (see Lesson 7), or Unity (as we’ll explore in Lessons 8 and 9) (CityEngine Frequently Asked Questions [80]). The following figure shows different data formats of input to and output from CityEngine. CityEngine does not support the automatic creation of 3D city models; the platforms that do support automatic creation of 3D models in urban environments use oblique imagery or LiDAR data to reconstruct volumes and building shapes based on point clouds and aerial imagery. However, the main strengths of CityEngine are data creation, the use of other datasets such as geodatabases, and its export capabilities. A single procedural rule can be used to generate many 3D models. GIS feature attributes can be used to add more accuracy to the model. Instances of such attributes are building height, number of floors, roof type, and roof materials.
As mentioned, CityEngine uses a programming language called CGA shape grammar. Rules written with CGA are grouped into rule files that can be assigned to initial shapes in CityEngine. For instance, 2D building footprint polygons can be assigned a rule file containing the rules for interactively creating building models from the 2D polygons as illustrated in the figure below. One of the rules defined in the rule file needs to be declared as the starting rule, meaning that the generation of the building model will start with that rule.
The initial shapes to start the iterative refinement process can be created either in CityEngine or imported from another source. More precisely, there are three ways of creating shapes in CityEngine (see image below): (1) manually creating shapes, (2) deriving shapes from a graph (e.g., street segments and lots from a road network), and (3) importing shapes (e.g., 2D building footprints from a shapefile).
All CGA rules have the format Predecessor --> Successor
where “Predecessor” is always the symbolic name of the shape that should be replaced by the “Successor” part on the right side of the arrow. The “Successor” part consists of a number of shape operations that determine how the new shape is derived from the input shape, and of symbolic names for the output shape(s) of the rule (which can then be further refined by providing rules where these shape symbols appear on the left side of the arrow). To give an example, the following two rules create a block separated into 4 floors from a 2D polygon:
    Lot --> extrude(10) Envelope
    Envelope --> split(y) { ~2.5: Floor }*
The first rule states that the initial 2D shape (“Lot”) should be replaced by 3D shapes called “Envelope” that are produced by extruding the initial 2D polygon by 10 units. The second rule says that each “Envelope” shape created by the first rule should be split along the vertical y-axis into “Floor” shapes of height 2.5, so four floors altogether. We could extend this simple example by adding rules for splitting each floor into the front, back, and side facades, applying different textures to these, inserting windows and doors, and so on. Application of a rule file to the initial shapes results in a shape tree of shapes created in the process. The leaves of the tree are the shapes that will actually make up the final model, while all inner nodes are intermediate shapes that have been replaced by applying one of the rules. The shape tree for the simple example would start with the “Lot” that is replaced by an “Envelope” shape, which in turn is replaced by the four “Floor” shapes making up the leaves (see figure below).
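The shape tree idea can be mimicked with a small, purely illustrative Python sketch (this is not CityEngine code) that expands the two rules above and prints the resulting tree; the leaves are the four Floor shapes that end up in the final model:

    def apply_rules(shape):
        """Return the child shapes produced by the two toy rules."""
        if shape == "Lot":            # Lot --> extrude(10) Envelope
            return ["Envelope"]
        if shape == "Envelope":       # Envelope --> split(y) { ~2.5: Floor }*
            return ["Floor"] * 4      # 10 units of height / ~2.5 per floor
        return []                     # no rule matches: this shape is a leaf

    def derive(shape, depth=0):
        """Print the shape tree rooted at the given shape."""
        print("  " * depth + shape)
        for child in apply_rules(shape):
            derive(child, depth + 1)

    derive("Lot")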
CGA defines a large number of shape operations (see CGA Shape Grammar Reference [82]). Shape operations can have parameters, such as the extrude and split operations in the previous example. Rules can also be parameterized to provide additional flexibility (see Parameterized Rule [83]). Moreover, rules can be conditional (see Conditional Rule [84] or Stochastic Rule [85]), and rule files can contain attribute declarations (see Attributes [86][87]), function definitions (see Functions [88]), and style definitions (see Styles [89]). If you want to learn more about writing CGA rules, tutorials 6 to 9 at Introduction to the CityEngine Tutorials [90] would be a good starting point.
To create building geometries through CGA rules, the following general workflow can be used:
    Lot --> extrude(4) Building

Or, driving the extrusion with an attribute:

    attr height = 30
    Lot --> extrude(height) Building
In order to modify the resulting 3D models, different possibilities exist: (1) edit the rules by scripting, (2) overwrite the rule parameters of a rule set on a per-building basis, and (3) if stochastic rules are used (rules with random parameters), alter the random seed of all buildings or of single buildings. The last method is used in creating virtual cities that should have certain characteristics; these types of randomization are not a representation of reality.
The following figure shows how a rule can be overwritten (the second method mentioned above). This window provides values for the parameters of a rule. To overwrite the rule, there are several options: (a) use the rule-defined value, (b) manually add a user-defined value, (c) use an object attribute, (d) use a shape attribute, or (e) use a layer attribute. The layer attribute option takes attributes from a shapefile's or geodatabase's attribute table. For instance, if you have a height value as a field in the building footprints’ attribute table, you can use it to automatically generate extrusions. Other examples of fields that can be used for buildings are roof types, roof materials, façade materials, etc.
Airborne LiDAR data provides information on variable elevations. This elevation information is useful both for creating the DEM (digital elevation model) and for estimating building heights. The airborne LiDAR data collected for the Penn State campus is at 25 cm intervals. You can see the campus LiDAR point clouds of buildings and vegetation in the figure below. All noise and irrelevant categories have been removed for presentation purposes.
LiDAR returns information for two types of elevation models: (1) the DSM or digital surface model (figure below), which is a first-return surface including anything above the ground, such as buildings and canopy, and (2) the topography, i.e., the ground or bare earth, which is referred to as the DEM (digital elevation model).
In CityEngine, the first step in creating a 3D model of the Penn State campus is to insert a DEM. Having a DEM defines the coordinate system and the ground elevation base for locating building footprints and streets. To estimate campus building heights, a digital surface model is created. Based on the normalized DSM layer, the minimum and maximum heights of every point are extracted. After extracting the roof shapes, the mean elevation of the points within each roof section is calculated for every part of a building (figure below). All this preparation is done in ArcGIS Desktop to prepare the base shapefile to be imported into CityEngine.
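Conceptually, the height estimation boils down to subtracting the DEM from the DSM to get a normalized DSM (nDSM) and then averaging the nDSM inside each roof-section polygon. The following NumPy sketch illustrates this with made-up numbers; real rasters would be read with a library such as rasterio or with the ArcGIS tools mentioned above:

    import numpy as np

    dsm = np.array([[320.0, 325.5, 326.0],
                    [320.1, 325.4, 326.2],
                    [320.0, 320.2, 320.1]])   # first-return surface (meters)
    dem = np.full_like(dsm, 320.0)            # bare-earth elevation (meters)

    ndsm = dsm - dem                          # height above ground

    # Boolean mask marking the cells that fall inside one roof section (illustrative)
    roof_section = np.array([[False, True, True],
                             [False, True, True],
                             [False, False, False]])

    section_height = ndsm[roof_section].mean()
    print(f"Estimated height of this roof section: {section_height:.2f} m")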
Every single object of the building footprints imported into CityEngine will be considered one shape or Lot. By clicking on any part of the building footprints, a shape is selected, and you can see the shape number in the Inspector menu in the following figure. Each shape has many attributes for each building section, such as the mean height and the roof type.
The figure below shows the extrusion rule assigned to the shapes based on the height values in the attribute table.
In the following figure, you can see rule parameters in the building settings that can be overwritten based on layer attributes.
In the CityEngine CGA grammar rules, basic roof shapes such as flat, hip, gable, pyramid, and shed are defined. Examples of basic rules for different roof shapes are as shown in the figure below:
However, the roof structures of campus buildings are complex. For more realistic and sophisticated roof types, new specific rules would have to be written for each complex building, which runs counter to the aim of procedural modeling: facilitating the modeling process instead of hand-modeling each building. Instead of writing complex rules or assigning one roof type to each building footprint, a combination of roof types is used for each building. In other words, each section of a building with a different height value has a roof type assigned to it. This has been made possible by the use of the LiDAR information. Each building is divided into different roof sections based on different heights and types. For instance, the building in the following figure has a complex roof: the middle parts are flat with various heights, while the edges are shed roofs and need a different rule to be applied.
A ‘roof type’ attribute has been created for the building footprints shapefile based on Google Earth images. To select different shapes with the same roof type in CityEngine, a simple Python script is written. The Python editor runs inside CityEngine and is convenient for writing and executing scripts. The figure below shows the window for creating a Python module, and the next figure shows a sample Python script for selecting shapes whose roof type has the same value.
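In case the figure is hard to read, the selection script looks roughly like the sketch below. It assumes the standard CityEngine Python scripting interface (scripting.CE with ce.getObjectsFrom, ce.getAttribute, and ce.setSelection) and an object attribute named "rooftype"; the attribute name and the roof type value are illustrative and may differ from the script shown in the figure:

    from scripting import *

    # get a CityEngine instance
    ce = CE()

    def select_by_rooftype(value):
        """Select all shapes in the scene whose 'rooftype' attribute equals the given value."""
        matches = []
        for shape in ce.getObjectsFrom(ce.scene, ce.isShape):
            if ce.getAttribute(shape, "rooftype") == value:
                matches.append(shape)
        ce.setSelection(matches)
        print("%d shapes with roof type '%s' selected" % (len(matches), value))

    if __name__ == '__main__':
        select_by_rooftype("flat")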
The script can then be run by pressing F9 or selecting Run Script from the Python menu. The roof type rule parameter is then modified for the selected shapes. You can see the selected shapes and the roof type options in the figure below.
The CityEngine rule file ‘Building_From_Footprint.cga’ has three options for generating façade textures: (1) realistic, (2) schematic, and (3) solid color. In the figure below, you can see the three options for façade representation. You are going to explore these options in ArcGIS Pro. ESRI has a library of façade textures for different building types, which we have used to texture the campus model. However, for a more realistic view of campus, images of the building façades could be captured and used in this model.
In the images above, you can see that trees have been generated for the campus model. These trees are based on a tree dataset in GIS which has been linked to tree models using the “3D Vegetation with LumenRT Models” rule [91]. This rule covers a lot of common trees found in North America; the models can be previewed at this link [92].
The first step in creating tree models is to match the LumenRT model names with your tree names. This rule file uses the common name of each tree species as the identifier of each model. Therefore, it is very important to keep your tree names in accordance with the names in the rule file. You can achieve this by adding a field in ArcGIS and filling it with the corresponding LumenRT names using Python, or by creating an Excel file in which you match the LumenRT tree names with your own dataset and then linking this file with your original dataset in ArcGIS. The second step is to import the tree dataset into CityEngine and align the shapes to the terrain.
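A minimal sketch of the first (Python) option is shown below, using an arcpy update cursor. The feature class path, field names, and the species-to-model dictionary are hypothetical; in practice you would fill the dictionary from the names listed in the rule file:

    import arcpy

    trees_fc = r"C:\data\campus.gdb\trees"    # hypothetical tree point feature class
    species_to_lumenrt = {
        "red maple": "Red Maple",
        "eastern white pine": "Eastern White Pine",
        # ... one entry per species occurring in your dataset
    }

    # Add the text field the rule file will read (skip if it already exists).
    arcpy.AddField_management(trees_fc, "LumenRT_Name", "TEXT")

    with arcpy.da.UpdateCursor(trees_fc, ["SPECIES", "LumenRT_Name"]) as cursor:
        for row in cursor:
            # Fall back to a default model if a species is not in the lookup table.
            row[1] = species_to_lumenrt.get(row[0].lower(), "Red Maple")
            cursor.updateRow(row)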
After selecting the tree shapes, you can assign the rule file and generate the models from the rule.
After generating the rule, you may find that the generated tree models are not right. In this step, you need to assign the attributes from the dataset to the rule parameters. To do this, simply click on the Attribute Connection Editor near the name, select layer attribute, and find the field you added to your tree dataset.
To use the created models, you have two options: (1) export them to other formats, or (2) share them. The model can be exported to formats such as Autodesk FBX, Collada, KML, and OBJ. For sharing the model, one option is to share the whole scene in the CityEngine Web Viewer. Another option is to share the rules as a rule package that can be used locally on your machine. Once you have the rule package (RPK), you can share it with others and use it in ArcGIS Pro or ArcGIS Online. In Lesson 5, you will learn how to apply a rule package exported from CityEngine as symbology to layers in your geodatabase.
This assignment has three tasks associated with it. The first task will prepare you to complete the other two successfully.
Explore all three models and software websites provided below. Working through the questions will help you prepare for Tasks 2 and 3. By now these models should be familiar to you, but you may want to take some notes to help you organize your thoughts and keep everything straight.
Procedural model created at the ChoroPhronesis [28] lab using ESRI CityEngine [94] and CGA shape grammar rules. This model is shared with you as a CityEngine web scene. You can turn layers on and off, zoom, rotate, and explore the part of campus that was modeled.
Note: This interface may take a while to load. Please be patient. To see the full-size interface with additional options, click the Model 1 link [93].
Reality modeling created at Icon Lab using ContextCapture [96] software from Bentley. This model is photorealistic and shared with you as a YouTube video.
If you want to see a larger version of the video, click the Model 2 link [95].
Historical campus models (1922) created at the ChoroPhronesis [28] lab using SketchUp. These models are combined in ESRI CityEngine [94] with trees and a base map generated using procedural rules. This model is shared with you as a flythrough YouTube video.
If you want to see a larger version of the video, click the Model 3 link [97]. [98]
After reviewing all three models and taking notes, start a discussion on what you believe the advantages and disadvantages of each of the three approaches are. Make sure that you comment on each other's posts to help each other better understand the nuances and details of each one. Remember that participation is part of your final grade, so commenting on multiple peers' posts is highly recommended. Please also make sure to use information from the reading assignment on shape grammars.
Post your response to the Lesson 4 Discussion (models) discussion.
Once you have posted your response and read through your peers' comments, synthesize your thoughts with the comments and write a paper (one page maximum) about the three different approaches.
Once you are happy with your paper, upload it to the Lesson 4 Assignment: Reflection Paper.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Clearly names and describes advantages and disadvantages, including time, learning curve, accessibility, applicability, flexibility, etc. | 10 pts | 5 pts | 0 pts | 10 pts |
Names areas of application that are best suited to each of the approaches | 3 pts | 1.5 pts | 0 pts | 3 pts |
The document is grammatically correct, typo-free, and cited where necessary | 2 pts | 1 pt | 0 pts | 2 pts |
Total Points: 15 |
We acknowledge that we have by now given you a quite intense tour through various modeling approaches. It is, however, important to have at least an overview of what is currently available, as a project you may be working on in the future (past this course) may require you to design a custom-made solution. In addition to the workflows we explored previously, including structure-from-motion mapping and hands-on modeling, this lesson focused more deeply on procedural modeling and its origins in theories of shape grammar. Procedural modeling is a wonderful tool for exploring the organization of built environments in theory and, more recently, a powerful way to make modeling more efficient. ESRI realized this potential and bought CityEngine, which was founded at ETH Zurich (the Swiss federal technical university). Keeping the different approaches in mind is important as we move on in the course.
You have reached the end of Lesson 4! Double-check the to-do list on the Lesson 4 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 5.
Duarte, J. P., & Beirão, J. N. (2012). Towards a methodology for flexible urban design: Designing with urban patterns and shape grammars. Environment and Planning B: Planning and Design, 38(5), 879-902.
Knight, T. W. (1989). Shape Grammars in Education and Practice: History and Prospects, Internet Paper: http://www.mit.edu/~tknight/IJDC [99].
Stiny, G., & Gips, J. (1971, August). Shape Grammars and the Generative Specification of Painting and Sculpture. In IFIP Congress (2) (Vol. 2, No. 3).
Stiny, G. (1980). Introduction to shape and shape grammars. Environment and Planning B, 7(3), 343-351.
In this module, you will build a 2D map of the University Park campus including buildings, trees, roads, walkways, and a digital elevation model (DEM). You will learn how to convert 2D maps to 3D presentations. Then procedural symbology that we have created for you in CityEngine will be imported and applied to the 3D map to create a realistic scene.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
The first step into 3D modeling using ArcGIS Pro is making a map. First, you will create a project. Then you will add the necessary data to your map, where you have all the tools you need. Then, you will explore the University Park Campus with navigation tools and bookmarks. This 3D model of the University Park Campus will be the final result of this lesson.
If you do not have ArcGIS Pro installed on your computer, please install it before continuing.
The first step in making a map is creating a project, which contains maps, databases, and directories.
Project templates are useful in creating a new project because they have aspects of ArcGIS Pro that are important including folder connections, access to databases and servers, and predefined maps.
A Blank template starts a new, empty project. It means you won't have the aspects mentioned above and will start from scratch to build your project. Scene views are for 3D map presentations. Global Scene is a useful template when your data is best represented on a globe. A Global Scene creates a project based on ArcGlobe (part of the 3D Analyst extension of ArcGIS for Desktop). Local Scene is useful for a small area to perform analysis or editing. It is similar to ArcScene in ArcGIS for Desktop. The Map template is suitable for creating a 2D map for your project. It creates a geodatabase in the project folder.[1]
[1] For more information on project templates: (1) ArcGIS Pro [102], (2) Corbin, Tripp (GISP). 2015. Learning ArcGIS Pro. Community Experience Distilled. Packt Publishing. Accessed September 22, 2016. http://lib.myilibrary.com?id=878863 [103].
To explore Penn State’s University Park Campus, you need data. Download the data [104]. You can save the data package at any location on your computer; please make sure to unzip the file. It is highly recommended that you save the data in the project folder you created before, UniversityParkCampus. The reason is that if you have to move to a different computer, keeping everything in your project folder will most likely avoid issues with reconnecting data to your project.
Note: ESRI ArcGIS software is sensitive to changes in data location. If data are moved from the paths stored in the project, you will have to re-link them.
In the pane of the window, under My Computer, click the folder in which you saved the data files. If you used the default location for saving the project (C:\Users\Yourname\Documents\ArcGIS\Projects), the geodatabase is inside the UniversityParkCampus folder.
Go inside the geodatabase and double-click the following layers to add them to the map: UP_BUILDINGS, UP_Major_Roads, UP_Minor_Roads, UP_Sidewalks, and UP_TREES.
Attention: if you go to the project folder, you can see that a geodatabase named exactly like your project has been created: ‘UniversityParkCampus.gdb’. This is the geodatabase that you will use for saving the results of the analysis. The geodatabase that we have given you (Lesson5.gdb) contains external data prepared for you to start.
The map will center on the University Park campus in State College, PA.
Before we focus on symbolizing the layers and improving the map, you will learn how to navigate the map and create bookmarks to quickly return to key areas.
In the next section, you will learn about data symbolization and editing.
You probably noticed during your map-exploration that it was hard to distinguish some of the features because of how they were symbolized. In this section, you will symbolize your map, for example, to enhance readability.
First, you'll give the buildings layer a more appropriate color.
Next, you will change the major roads symbol.
Repeat the previous steps for UP_Minor_Roads. Change Minor roads’ symbol to ‘Gray 40%’ with no outline color.
Next, you will change the UP_Sidewalks.
For the hatch line symbol, choose simple stroke (2.0 pt). Hover over to see its name.
Choose 2.5 pt for the line width.
Trees in UP_TREES are point features. Here, we will show you how to symbolize point features based on attributes. Before symbolizing this layer, explore its attribute table.
The result will look like this:
For the evergreen symbol, under Layers, choose shape marker from the menu. This time, instead of Font, click on the Style tab.
Based on the screenshot, choose Fir Green for the color, and 18 pt for the symbol size.
For the unknown symbol, under Layers, choose shape marker from the menu.
To present a 3D scene of the University Park Campus, you need an elevation base for your building footprints and other layers such as roads and trees. In this section, we focus on preparing the Digital Elevation Model (DEM) for the University Park Campus. The DEM is a bare-earth elevation model; it has been extracted from LiDAR data.
In the previous section, you worked with feature data, i.e., data displayed as discrete objects, or features. While feature data is great for depicting buildings, roads, or trees, it is not the best way to depict elevation over a continuous surface. To do that, you'll use raster data, which can represent a continuous surface. Raster data is composed of pixels, each with its own value. Although it looks different from feature data, you add it to the map in the same way.
The pop-up shows the Pixel Value, which indicates the actual value of a pixel. In this raster, it shows the elevation. In the above image, the selected pixel has an elevation of about 1159 feet above sea level.
Before exploring the raster data in 3D, we need to smooth the elevation model so the 3D model of campus fits the elevation model nicely. Creating contours is a helpful way to monitor how smooth a DEM is: a contour set built from raw digital elevation model (DEM) data will show minor variations and irregularities in the data, whereas a smoothed DEM produces a clean contour set for the topography.
Note: the smoothed grid should not be used for any analysis that requires the raw DEM. For instance, building height cannot be extracted from a smoothed DEM.
The Focal Statistics tool resamples the DEM and applies a neighborhood (search distance) defined in cells or map units.
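As a point of reference, the same smoothing step can be scripted with arcpy's Spatial Analyst module. In the sketch below, the input raster name and the neighborhood radius are example values only; use the names and radius that match your data.

```python
import arcpy
from arcpy.sa import FocalStatistics, NbrCircle

arcpy.CheckOutExtension("Spatial")

# Replace each cell with the mean of a circular neighborhood (radius of 5 cells here, as an example)
smoothed = FocalStatistics("DEM", NbrCircle(5, "CELL"), "MEAN")
smoothed.save("Smoothed_DEM")
```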
Click the contour layer in the Contents pane and make sure the symbol is highlighted by clicking on it. From the top ribbon, under Feature Layer, click Labeling. In the Label Class group, choose Contour as the field value. For the class value, click the SQL button.
In this section, you’ll visualize 3D data. You will learn how to convert a 2D map to 3D scene, visualize the DEM layer and other feature classes such as trees and buildings.
Most commonly we display data as a 2D map (although this may change in the future). Traditionally, in ArcGIS software, a 2D map was displayed in ArcMap and a 3D scene displayed by ArcScene.
In ArcGIS Pro, you will have 2D maps and 3D scenes in the same platform. A scene is a map that displays data in 3D. By default, ArcGIS Pro will convert a map to a global scene, which depicts the entire world as a spherical globe. Since your area of interest is University Park Campus, not the entire globe, you will need to change the settings so the map converts to a local scene instead.
Your map converts to 3D, creating a new pane called Map_3D. You can go back to your 2D map at any time by clicking the Map tab.
The flatness of the campus contrasts with the hills in the distance. By default, a scene uses a map of elevation data, called an elevation surface, to determine the ground's elevation. It is low resolution but spans the entire world.
Another layer that is flat but should not be is the building footprint. The UP_BUILDINGS layer has height data in its attributes. It has been extracted from LiDAR data (as discussed in Lesson 4). To display the layer in 3D, you will use a command called extrusion, which displays features in 3D by using a constant or an attribute as the z-value. In this layer, the attribute will be Z_Mean.
Now you can turn on Roads and sidewalks. Make sure they are under 2D layers. Later in this Lesson, you will extrude trees and symbolize them.
As you can see, the extruded buildings in the previous section are blocks with no details, just the elevation. In this section, you will learn how to present part of west campus with more detail. To do so, you should remove some of the features from UP_BUILDINGS that overlap with a layer that has more detailed information. This new layer (UP_Roof_Segments), which you will add to the map, consists of roof segments with detailed elevation. This information has been extracted from LiDAR data. In other words, instead of treating a building as one big, undifferentiated chunk, every change in shape or elevation in each building has been detected, and a new layer that represents those changes/segments has been created. In Module 4, we explained the process of creating this layer in detail.
In this section, you will remove the part of the UP_BUILDINGS layer that overlaps with UP_Roof_Segments. This means that UP_Roof_Segments represents part of campus with more detail, while UP_BUILDINGS represents the rest of the campus (where the detailed model is not available) with less detail.
This is how your model should look. You can navigate by holding down the scroll wheel or the V key and dragging the pointer to tilt and rotate the scene. You can see the level of detail each building presents in 3D.
To see how the buildings look in reality, go to Google Maps [106]. Search for University Park Campus.
Click the earth option at the bottom of the page.
Your map will turn to Earth view.
Zoom in to the Penn State Alumni Association. This is how the roof structures look:
Go back to your ArcGIS Pro project and find the same building. You can find the same level of detail in roof structure.
Go back to Google Maps and explore more. You will see that some of the buildings have flat roofs and some shed roofs. However, in your 3D model, all the roofs are flat. You will add different roof types later in this module.
However, what you need to keep is the rest of the buildings, not what has been selected. Therefore, you will switch selection to select the remaining part of the campus.
In this section, you’ll add special 3D textures and models to your scene to give it a more realistic appearance. The symbology of the structures is presentable in 3D but doesn't give the impression of a realistic city model. For instance, types of roofs, roof textures or façade textures, are not defined in this type of 3D presentation. To make the campus look more realistic, you can set the layer's symbology with a rule package created in CityEngine (see Lesson 2). Rule packages contain a series of design settings that create more complex symbology. Although you cannot create rule packages in ArcGIS Pro, you can apply and modify them from an external file (more in the next lessons).
In this section, you will insert a 3D model of trees with assigned rules. In your 3D scene, you can see that the 2D tree symbols are displayed in 3D. One way to show your trees in 3D is to use LiDAR information such as crown diameter and tree height along with CityEngine procedural rules. However, ArcGIS Pro does not yet support assigning rule packages to point layers in the Symbology pane. ArcGIS Pro is under constant development and may offer this functionality in the future. A workaround is to export the tree point layer as polygons with Z information. Then the polygons can be converted to points. The point feature class with Z information and inherited rules can be added to your map as a preset.
You have learned how to apply a rule package symbology to buildings to demonstrate a realistic view of campus. For some types of 3D analysis, an analytical demonstration may suffice. For the next Module, you will need an analytical presentation of campus. In this section, you will learn how to switch from realistic to analytical. In the next lesson, you will learn examples of 3D spatial analysis and you will need these analytical 3D models for the analysis.
In this lesson, you learned how to work with ArcGIS Pro from creating a map and symbolizing layers to exploring raster and 3D data. You also learned about symbolizing 3D layers with rule packages from CityEngine. In the next lesson, you will learn about some 3D analysis that is possible with 3D data.
This assignment has 3 parts, but if you followed the directions as you were working through this lesson, you should already have most of the last part done.
If you haven't already submitted your deliverable for this part, please do it now. You need to turn in all four of them to receive full credit. You can paste your screenshots into one file or zip all of the PDFs and upload that. Task 2 will need to be a PDF, so either way, you will have two files to zip.
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 1: Screenshot of all layers symbolized for UP Campus correct and present | 1 pt | 0 pts | 1 pt |
Task 2: PDF map of Smoothed_DEM Layer with a legend showing the minimum and maximum elevation correct and present | 1 pt | 0 pts | 1 pt |
Task 3: PDF map of your 3D campus with realistic facades and trees or simply create a screenshot correct and present | 1 pt | 0 pts | 1 pt |
Task 4: PDF map of your 3D campus with schematic facades and analytical trees or simply create a screenshot correct and present | 1 pt | 0 pts | 1 pt |
Total Points: 4 |
Please reflect on your 3D modeling experience using ArcGIS Pro: anything you learned, any problems you faced while doing your assignment, and anything you explored on your own that added to your 3D modeling experience.
Please use the Lesson 5 Discussion (Reflection) to post your response reflecting on the prompts above, and reply to at least two of your peers' contributions.
Please remember that active participation is part of your grade for the course.
Your post is due Saturday to give your peers time to comment. All comments are due by Tuesday at 11:59 p.m.
Once you have posted your response to the discussion and read through your peers' comments, write a one-page reflection paper on your experience in 3D modeling, including the following deliverables as proof that you have completed and understood the process of 3D modeling for campus.
Your assignment is due on Tuesday at 11:59 p.m.
Please submit your completed paper and deliverables to the Lesson 5 Graded Assignments.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Paper clearly communicates the student's experience developing the 3D model in ArcGIS Pro | 5 pts | 2.5 pts | 0 pts | 5 pts |
The paper is well thought out, organized, contains evidence that student read and reflected on their peer's comments | 5 pts | 2.5 pts | 0 pts | 5 pts |
The document is grammatically correct, typo-free, and cited where necessary | 1 pt | 0.5 pts | 0 pts | 1 pt |
Total Points: 11 |
In this lesson, you have learned how to symbolize 2D and 3D maps. You have also learned the differences between static modeling and procedural modeling. Most importantly, you practiced how to create a 3D model of elevation and how to incorporate CityEngine procedural rule symbology into GIS.
You have reached the end of Lesson 5! Double-check the to-do list on the Lesson 5 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 6.
In the previous lesson, you created a 3D map of the University Park campus. You learned how to symbolize layers based on procedural rule packages. In this lesson, you will learn two types of 3D spatial analysis. First, you will identify the buildings that are affected in the case of a thunderstorm and a flash flood in State College. Second, you will learn how to do sun shadow analysis. It will give you a visual sense of how different sun positions create shadow volumes of various sizes and directions. It is a useful tool for building design, for understanding the sun's path at a given location, and for figuring out the surface area that a building shades during a day.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
In this section, you will determine which areas of campus are more affected by flooding by creating a raster layer representing accumulated water resulting from a thunderstorm. You will use the analytical 3D scene you created in the previous lesson to visualize the flooding. In this study, you will use a FEMA flood map to find out how much of campus might be affected by the 1-percent-annual-chance flood, which is referred to as the 100-year flood¹. You will buffer the floodways around campus to represent the flood plain and calculate the campus area affected by the 100-year flood. The floodway data is downloaded from the FEMA Flood Map Service Center². On flood plains, if you assume that water accumulation (the height of the water) is constant, which areas are more affected by a flood still depends on various other factors, such as land elevation, surface material (pervious or impervious surfaces), vegetation coverage, and surface condition (saturated or dry). In this study, we only consider elevation as a factor. Therefore, you will extrude the flood data to see which buildings or trees are more affected.
For more information, refer to FEMA: Flood Zones [108]
Download Lesson6.gdb [104] and unzip the file, then save it to the main project location.
(C:\Users\YOURUSER\Documents\ArcGIS\Projects\UniversityParkCampus).
Change the color scheme to Spectrum by Wavelength-Full Bright. You can turn on Show names to see the names on top of each color scheme.
Go back to the Map_3D. Change the color scheme of Smoothed_DEM to Spectrum by Wavelength-Full Bright. Make sure that the UP_Roof_Section and Building_Footprint layers are turned on. As you can see, the areas with higher elevation have warmer colors. This means that buildings located in the southern part of campus are more likely to be affected by a flood, if we consider all the buildings to be in the same flood plain.
Click to modify symbology. Under the Appearance tab, select Symbology and choose Unique Values.
As you have learned in previous lessons, symbolize your layers based on FLD_ZONE to have good visualization of various values. Remove the outline for values.
Zoom to the UP_BUILDINGS layer. You should have a clear visualization of floodways with different categories.
Based on FEMA’s information³, categories A, AE, and AO are all 1-percent-annual-chance flood zones. A is determined using approximate methodologies, AE is created based on detailed methods, and AO represents shallow flooding where average flood depths are between 1 and 3 feet. X represents areas that are not in a floodway. For this study, you are going to treat all three categories, A, AE, and AO, as one floodway category.
³ FEMA: Flood Zones [108]
The best way to create a boundary is to use the Smoothed_DEM layer as the extent of the campus area. To be able to create a polygon out of a raster, you need a raster with integer values; the current raster values are floats. On the ribbon, on the Analysis tab, click Tools under Geoprocessing and search for Int. Click Int (Spatial Analysis). The input layer is Smoothed_DEM; rename the output raster to DEM_int. Click Run.
Select Raster Calculator (Spatial Analyst Tools). Using the Raster Calculator, multiply DEM_int by 0 to create a constant-value raster.
Name the output raster rasterzero. Click Run. Now you have a raster with a value of 0.
Go back to the Geoprocessing Tools tab. Search for the Raster to Polygon tool. Click Raster to Polygon (Conversion Tools). Convert the rasterzero layer. Save the output polygon as Boundary. Click Run.
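The three geoprocessing steps above (Int, Raster Calculator, Raster to Polygon) can also be expressed as a short arcpy sketch, assuming the layer names used in this lesson:

```python
import arcpy
from arcpy.sa import Int, Raster

arcpy.CheckOutExtension("Spatial")

dem_int = Int(Raster("Smoothed_DEM"))   # Int (Spatial Analyst): float DEM -> integer raster
dem_int.save("DEM_int")

raster_zero = dem_int * 0               # Raster Calculator equivalent: constant-value raster
raster_zero.save("rasterzero")

# Convert the constant raster to a single polygon that serves as the campus boundary
arcpy.conversion.RasterToPolygon("rasterzero", "Boundary", "NO_SIMPLIFY", "Value")
```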
Remove rasterzero from the Contents pane; you don't need it anymore. Change the symbology of the Boundary layer to no fill, with a Tuscan Red outline 2 pt wide.
Turn off FLD_HAZ_FEMA and make sure that FLD_HAZ_Campus is located under UP_BUILDINGS. This will be the result.
For this study, you are going to consider all three A, AE, and AO layers as one floodway category.
On the ribbon, on the Map tab, click Select By Attributes.
Under expression, add the following clause to select all three floodway categories:
Click Apply, then OK. All three floodway categories will be selected.
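The exact clause appears in the figure above; the idea is simply to select the three A* zones in one expression. A hedged arcpy equivalent, using the field and layer names from this lesson, would be:

```python
import arcpy

# Select the three 1-percent-annual-chance categories in one expression
arcpy.management.SelectLayerByAttribute(
    "FLD_HAZ_Campus", "NEW_SELECTION", "FLD_ZONE IN ('A', 'AE', 'AO')")
```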
You have selected all the floodways on campus. Open the attribute table. Can you figure out what the total area of the selected floodways is?
Click for the answer.
In Geoprocessing Tools, search buffer. Click Buffer (Analysis Tools).
Note: when you have selected features in a layer, the analysis will run only on the selected features. The input feature is FLD_HAZ_Campus. Name the output feature class Flood_Extent. For the distance, use 1000 feet. This distance is just an example to teach you how the analysis works; it does not mean that the flood plain in this area is truly 1000 feet.
Set Side Type to ‘Full’, meaning that the buffer will extend from both sides of the floodways. Click Run.
We are interested in the flood extent areas inside the boundary. The green areas are more affected than the rest of the campus. Now, you will create a polygon layer that contains both areas: the flood extent (1000 feet) and the rest of the campus. The proper tool to add the flood extent to the campus area is ‘Identity’. This tool computes geometric intersections of the campus area and the flood plain; the parts of campus overlapping with the flood extent will get the attributes of the identity features (flood extent).
Before updating the flood zones of the campus area, you need to clip Flood_Extent to extract the areas inside the boundary. Go to the Analysis tab, Geoprocessing tools. Search for Clip. Select Clip (Analysis Tools). The input feature is Flood_Extent; the clip feature is Boundary. Name the output Flood_Extent_Clip.
Turn off all layers. Go back to the Geoprocessing tools. Search for Update. Select Update (Analysis Tools). The input feature is Boundary; the update feature is Flood_Extent_Clip. Name the output feature class Flood_Zones_Campus. Click Run.
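For completeness, the buffer-clip-update sequence described above could be scripted roughly as follows. This is a sketch using the layer names from this lesson; the buffer runs on the current floodway selection, as noted earlier.

```python
import arcpy

# 1000 ft buffer on both sides of the selected floodways
arcpy.analysis.Buffer("FLD_HAZ_Campus", "Flood_Extent", "1000 Feet", "FULL")

# Keep only the buffered area that falls inside the campus boundary
arcpy.analysis.Clip("Flood_Extent", "Boundary", "Flood_Extent_Clip")

# Combine the boundary with the clipped flood extent into one layer
arcpy.analysis.Update("Boundary", "Flood_Extent_Clip", "Flood_Zones_Campus")
```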
Right-click on Flood_Zones_Campus in Contents pane. Open Attribute Table. You can see there are two features in this layer with id categories 0 and 1. Select the row with id=0.
What is the area of flood plain with higher risk (id=0) in acres?
Click for the answer.
The selected area is the 1000-foot extent from the floodways, and the rest is the campus area with less flood risk; the areas are given in square feet.
As you can see in the attribute table, the layer that you have created does not have any height information. You need water height information to extrude the layer properly in the 3D scene. Therefore, you will add a new attribute to the table and give it desired values.
At the top of the attribute table, click the Add Field button. The Fields view opens, where you will be able to edit the field parameters.
For the empty field at the bottom of the table, under Field Name, type Height. For Data Type, choose Float. Choosing Float over Integer allows you to have decimals.
On the ribbon, on the Fields tab, click Save. The changes will be added to the table.
Close the Fields view. Return to the attribute table.
At the top of the attribute table, click the Calculate Field button.
The input table is Flood_Zones_Campus. The field name is Height. For the expression, you will use 5 feet of flood for the areas around the floodways. This number is for presentation purposes and is not accurate. Click OK.
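The same add-field and calculate-field steps, sketched in arcpy with the values used above:

```python
import arcpy

fc = "Flood_Zones_Campus"

# Float allows decimal water heights
arcpy.management.AddField(fc, "Height", "FLOAT")

# Assume a 5 ft flood height for presentation purposes (not an accurate value)
arcpy.management.CalculateField(fc, "Height", "5", "PYTHON3")
```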
To add it to the 3D scene, in the Contents pane, drag Flood_Zones_Campus from 2D Layers to 3D Layers, placing it below Building_Footprints. Make sure the other 2D layers are off.
If you cannot see the layer, right-click on the layer and select Properties. Click Elevation and make sure the features are set to "On the ground".
Click Flood_Zones_Campus. On the ribbon, click Appearance. Under Extrusion, choose Max Height as Extrusion Type. Select Height as the field of Extrusion. The Unit will be US Feet.
Click Flood_Zones_Campus. On the ribbon, click Appearance and from the Symbology drop-down menu choose Unique Values. In the Symbology pane, the value field is Id. Click Add all values. Change the color of id=0 as Apatite Blue with no border. The color for id=1 will be Sodalite Blue with no border.
Again select Flood_Zones_Campus. On the ribbon, click Appearance. In the Effects group, change the transparency to 30 percent.
This will be the result:
Using the left mouse button and the V key on the keyboard, navigate through the map. Zoom in to some buildings in different flood zones.
You can see how much less the Walker Building is flooded compared to peripheral buildings on campus. By changing the flood height value, you can get different results. If you are curious to see how the visualization changes, change the values of the Height field in the attribute table.
Shadow analysis is a tool that allows you to analyze daylight conditions for a given date and time. It can be an hourly analysis between sunrise and sunset, or an analysis of how shadows change at a certain time across different dates. It is a useful tool for building design and for understanding the sun's path at a given location. Assume that you are a designer and you would like to design a high-performance building whose source of energy is solar. The sun's position and movement across your site then play an important role in designing to benefit from daylighting.
The most important questions would be: how much does a building shade the surrounding area, and how much do the surroundings, such as buildings and trees, shade the building? Trees' shadows are desirable in summer and very important in cooling the environment, whereas in winter the important factor in the environment is the shade created by buildings.
In this section, you will carry out a sun shadow analysis from sunrise to sunset in winter (January 1st, 2021). If you are interested, you can carry out another analysis for summer (July 1st, 2021) to compare the results. After creating the shadow volumes, you will learn how to animate them.
Click the Project tab. Select Save As.
Make sure the following layers are checked in the 3D Layers section of the Contents pane: UP_Roof_Segments, Building_Footprints, and Topography.
Note: what you created in Lesson 5 as a 3D building is an extrusion from an attribute. That is, you extruded a 2D polygon based on a ‘Z’ attribute you extracted from LiDAR data, a DSM, or another source. Having an extruded building does not give your features 3D geometry; it is just a 3D visualization of your 2D data. For many 3D spatial analyses, you need 3D features or multipatch features.
3D features are features with 3D geometry. 3D features can be displayed without the need to drape them over a surface such as a DEM; they already have Z information in their geometry. In the image below, you can see how 2D features (crème color) depend on a surface for display in a 3D scene, whereas 3D features (red color) have Z information and sit at the elevation they belong to without extrusion.
Multipatch features are 3D objects that represent a collection of patches describing the boundary (or outer surface) of a 3D feature. Multipatches can be 3D surfaces or 3D solids (volumes).
The icon next to a multipatch layer indicates a 3D feature; the image below shows the multipatch icon. For now, the only feature type that can be converted to multipatch is polygons; points and lines can be converted to 3D features but not to multipatch features.
On the top ribbon, click Map and select Add Data. Locate Lesson6.gdb and select ‘Building_MP_Campus’.
Note: the reason that you will work on a smaller part of campus for this 3D spatial analysis is that your computer might not be able to handle bigger data. A multipatch layer for a bigger area of campus is also provided (Buildings_MP), in case you would like to carry out more analysis in your free time.
Right-click on the symbol under Layers to modify the color. Change ‘Building_MP_Campus’ to Fire Red.
Right-click on Building_MP_Campus and select Properties. Select Elevation from the menu. Set the features to be relative to the ground. Click OK.
Uncheck UP_Roof_Segments. Now you can see what the multipatch surfaces look like. Navigate by holding down the scroll wheel or the V key and drag the pointer to tilt and rotate the scene.
⁴ ArcGIS Pro [112]
⁵ ArcGIS Pro [112]
Select Sun Shadow Volume (3D Analyst Tools).
You will use the multipatch feature ‘Building_MP_Campus’ as the input. On January 1st, 2021, sunrise is at 7:54 am and sunset is at 5:52 pm. Set the start and end dates as below. Pennsylvania is in the Eastern Standard Time zone. Change the Iteration Interval to 2 and the Iteration Unit to Hours. Click Run.
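For reference, the Sun Shadow Volume tool can also be run from Python. The sketch below mirrors the settings above, but the parameter order and keyword values are written from memory and should be verified against the tool's documentation before relying on them.

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Shadow volumes every 2 hours between sunrise and sunset on January 1st, 2021 (Eastern time)
arcpy.ddd.SunShadowVolume(
    "Building_MP_Campus",          # multipatch input
    "1/1/2021 7:54 AM",            # start date and time (sunrise)
    "SunShadow_January1",          # output shadow volumes
    "ADJUSTED_FOR_DST",
    "Eastern_Standard_Time",
    "1/1/2021 5:52 PM",            # end date and time (sunset)
    2, "HOURS")                    # iteration interval and unit
```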
Turn on UP_Roof_Segments and turn off Building_MP_Campus. Change the color of SunShadow_January1 to ‘Topaz Sand’. Right-click on the layer and select Properties. Click Elevation and make sure the features are relative to the ground. This is the result of your analysis.
On the top ribbon, under appearance, click Symbology. Select Unique Values.
Select set 3 for the color scheme.
On the content pane or in the symbology pane you can explore the categories.
The first category is the shadow created by the sun at 7:54 am and the last one is the shadow created at 3:54 pm. Because of the overlaying layers, you cannot see each shadow clearly.
On the top ribbon, click appearance. In Effects Category, define Layer Transparency as 30%.
By making a layer transparent, you can see the different shadow volumes more easily. However, there are more efficient ways to see your results.
ArcGIS Pro offers a few ways to show an area of interest for your data, including visibility ranges, definition queries, and range. To visualize your data as a dynamic range, you can use any layer, or set of layers, that contains numeric, non-date fields. Once you define the range properties for your layer, an interactive, on-screen slider is used to explore the data through a range you customized.⁵
In the Contents pane, right-click on the SunShadow_January1 layer and select Properties. Click the Range tab. By setting the range properties of the layer, you can explore the data interactively using a range slider.
Click Add Range. Click the drop-down menu of Field. You will see that DATE_TIME is not in the list of fields that can be used because it is not a numerical field. You can select Azimuth. Set the extent from 120 to 236. Click Add. Click OK.
By defining a range, you will be able to use a tool in ArcGIS Pro called the Range Slider. This tool gives you the ability to filter overlapping content to make it more accessible and visible. You can see the slider on the right side of your map.
To set the slider, on the top ribbon, under Map, click Range. Under Full Extent, choose SunShadow_January1. Make sure the min and max are set to 120 and 236.
Make sure the active range is Azimuth, the same field you defined in layer properties.
Take a look at your map. You can see that the first visible range is now from 123 to 144. This azimuth range belongs to the first interval after sunrise, from 7:54 am to 9:54 am.
Where is the shadow's direction of the first range? (From 7:54 am to 9:54 am)
Click for the answer.
This belongs to 8:35 am to 9:54 am. If you continue with the Range slider, you can see all 5 categories of time frames.
The last step shows you the changes in shadow volume and its direction towards the end of the day. The shadow moves from the northwest in the morning to the east towards the end of the day.
Instead of clicking Next, you can play the slider, and it will play the ranges one after another at the pace you have chosen. The pace, from slower to faster, can be set on the top ribbon, on the Range tab.
You learned how to change symbology in Lesson 5. To make a more presentable animation of the shadow volumes, define the symbology of ‘SunShadow_January1’ using a gradual color change from lighter to darker colors. You need 5 gradual colors; for instance, you can choose a color ramp of blue, orange, green, or any other color you find appropriate.
Note: make sure to change the current range to 123-230 to see all shadow classifications under the ‘SunShadow_January1’ layer.
In this section, you are going to calculate the ground surface covered by shadow during the day for the Walker Building.
Change the current slider range to 123-230 to see all the shadow volumes.
On the Content pane, click List by Selection.
Go back to List by Drawing Order.
On the top ribbon, under the Map tab, click Select. Draw a rectangle on top of Walker building’s shadows to select them. Make sure not to select other buildings’ shadows.
Right-click on SunShadow_January1 layer. Click Data, Export Features.
Click Symbology under Appearance and change the symbology to Unique Values. Change the color from white to another color.
The Input Feature Class is Shadow_Walker. Name the output ‘Shadow_Walker_footprint’. To make one unique surface, you need to group your features based on a unique field. The ‘Source’ field, whose value is ‘Buildings_MP_Campus’, is a unique attribute that can be used to group (dissolve) all the features together. Click Run.
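The lesson's screenshots show the exact tool used here. If this step uses the Multipatch Footprint geoprocessing tool (an assumption on our part, based on the input/output described above), the equivalent arcpy call would look roughly like this:

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Collapse the exported shadow volumes into a single 2D footprint,
# grouped (dissolved) by the 'Source' field
arcpy.ddd.MultiPatchFootprint("Shadow_Walker", "Shadow_Walker_footprint", "Source")
```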
You can see your layer in the 2D Layers list. Uncheck Shadow Walker from the 3D Layers.
What is the shadow area? Do you think it is an accurate shadow area?
Click for answer.
The shadow area of the Walker Building is around 50,682.01 square feet. It is not the correct ground shadow area: at the beginning of this section, we asked you to calculate the ground shadow surface, but the shadow footprint you created contains the building itself. Although the building's roof is covered by shadow during certain times of the day, we are only interested in the ground surface.
Turn off UP_Roof_Segments. Drag UP_BUILDINGS from 3D Layers to 2D Layers and turn it on. Make sure it is on top of Shadow_Walker_footprint.
On the top ribbon, click Analysis. Select Tools from Geoprocessing group. Search Erase.
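A hedged arcpy equivalent of the Erase step is sketched below; the output name is just an example, and the inputs follow the layers used above.

```python
import arcpy

# Remove the building footprint itself so that only shadow cast on the ground remains
arcpy.analysis.Erase(
    "Shadow_Walker_footprint",   # shadow footprint that still includes the building
    "UP_BUILDINGS",              # building polygons to erase
    "Shadow_Walker_Ground")      # hypothetical output name
```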
Turn on Up_Roof_Segments.
What is the area of Walker building Shadow now?
Click for the answer.
35017.51 Square Feet
In this section, you have learned how to calculate the ground shadow surface for the Walker Building. In this assignment, calculate the ground shadow surface for the Nursing Sciences Building.
You would like to share the results of shadow volumes over the imagery. One way is to use Google Earth. It has both the imagery and 3D building models and is a good way to present your results.
If you already have Google Earth skip this step. Otherwise, go through the following process:
KML (Keyhole Markup Language) is an XML-based file format that is used in Earth browsers such as Google Earth and Google Maps. Suppose you would like to share the shadow surface in the morning, at noon, and before sunset to demonstrate the shaded surface during the day. To do so, you will select the time of day you are interested in and convert the layer to KML.
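Each selected time-of-day layer can be converted with the Layer To KML conversion tool; a minimal sketch is shown below, where the layer name and output path are placeholders for your own selection and folder.

```python
import arcpy

# Convert a (selected) shadow layer to a KMZ file that Google Earth can open
arcpy.conversion.LayerToKML(
    "SunShadow_January1",
    r"C:\Temp\UniversityParkCampus_Shadow.kmz")
```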
In this step, you will open the three KMZ files you have created in Google Earth. Since Google Earth is free to use, you can share your Google Earth file with anyone.
Upload "UniversityParkCampus_Shadow.kmz" to Lesson 6 Assignment: Google Earth (2).
Criteria | Full Credit | Half Credit | No Credit |
---|---|---|---|
Create a KMZ file | 4 pts | 2 pts | 0 pts |
The KMZ file has all three shadows | 4 pts | 2 pts | 0 pts |
This assignment has 4 parts, but if you followed the directions as you were working through this lesson, you should already have most of the first part done.
If you haven't already submitted your deliverable for this part, please do it now. You need to turn in all four of them to receive full credit. You can paste your screenshots into one file, or zip all of the PDFs and upload that. Make sure you label all of your tasks as shown below.
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 1: Screenshot of flooded Old Main | 4 pts | 0 pts | 4 pts |
Task 2: Screenshot of Campus with gradual symbology of SunShadow_January1 | 2 pts | 0 pts | 2 pts |
Task 3: Screenshot of ground shadow surface of Nursing Sciences Building | 2 pts | 0 pts | 2 pts |
Task 4: Screenshot of attributes of ground shadow surface of Nursing Sciences Building | 2 pts | 0 pts | 2 pts |
Total Points: 10 |
Please reflect on your 3D spatial analysis experience using ArcGIS Pro: anything you learned, any problems you faced while doing your assignment, and anything you explored on your own that added to your 3D spatial analysis experience.
Please use the Lesson 6 Discussion (Reflection) to post your response reflecting on the prompts above, and reply to at least two of your peers' contributions.
Please remember that active participation is part of your grade for the course.
Your post is due Saturday to give your peers time to comment. All comments are due by Tuesday at 11:59 p.m.
Upload "UniversityParkCampus_Shadow.kmz" to the Lesson 6 Assignment: Google Earth.
Please upload your assignment by Tuesday at 11:59 p.m.
Criteria | Full Credit | Half Credit | No Credit | Total |
---|---|---|---|---|
The KMZ file has all three shadows | 6 pts | 3 pts | 0 pts | 6 pts |
Total points: 6 pts |
Once you have posted your response to the discussion and read through your peers' comments, write a one-page paper reflecting on what you learned from working through the lesson and from the interactions with your peers in the Discussion.
Please upload your paper by Tuesday at 11:59 p.m.
Please submit your completed paper to the Lesson 6 Assignment: Reflection Paper.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Paper clearly communicates the student's experience developing the 3D spatial analysis in ArcGIS Pro. | 5 pts | 2.5 pts | 0 pts | 5 pts |
The paper is well thought out, organized, contains evidence that student read and reflected on their peers' comments. | 3 pts | 1 pts | 0 pts | 3 pts |
The document is grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pts | 0 pts | 2 pts |
Total Points: 10 |
In this lesson, you have learned the importance of 3D data and 3D maps for spatial analysis. To explore various geoprocessing analyses, you practiced two different analyses: flood analysis and sun shadow analysis. In terms of 3D data, it is important to remember the difference between multipatch data and extruded 2D data. In the end, you experienced how to share your work on other platforms.
You have reached the end of Lesson 6! Double-check the to-do list on the Lesson 6 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 7.
In this and the next two lessons, you will be working with the Unity3D game engine, referred to simply as Unity throughout the rest of the course. Unity is a rather complex software system for building 3D (as well as 2D) computer games and other 3D applications. Hence, we will only be able to scratch the surface of what can be done with Unity in this course. Nevertheless, you will learn quite a few useful skills related to constructing 3D and VR applications dealing with virtual environments in Unity. Furthermore, you will become familiar with C# programming (the main programming language used in this engine) through a series of exercises that guide you in developing your first interactive experience in a game engine.
This lesson will begin with an overview to acquaint you with the most important concepts and the interface of Unity. It will cover basic project management in Unity, the Project Editor interface, manipulation of a 3D scene, importing 3D models and other assets, and building a stand-alone Windows application from a Unity project. Moreover, you will learn how to create a very simple “roll-the-ball” 3D game using C#, which will prepare you for more advanced interaction mechanics we will be implementing in the next lessons.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Unity is a game engine developed by Unity Technologies. Development started over a decade ago, and the initial version of Unity was released in 2005. Unity has experienced rapid growth, in particular over the last few years, and is undoubtedly the most popular game engine out there. According to the Unity website, 50% of mobile games and 60% of AR/VR content are made with Unity. The company has over 2000 employees, and more than 3 billion devices worldwide are running Unity [1]. There have been several major releases of Unity over the years, with new minor versions being released on a regular basis. The current major release is Unity 2019, with the latest stable version being 2019.1.10 at the time of this writing. In 2010, Unity Technologies launched the Unity Asset Store, an online marketplace where developers can obtain and sell assets for their Unity projects, including tools and extensions, scripts, artwork, audio resources, etc. (Unity Asset Store [121]). Today there exist many side products and activities around the Unity platform. For instance, Unity offers its own certification program (Unity Certification [122]) to interested individuals and has its own VR/AR division called Unity Labs (Unity Labs [123]).
One of Unity’s main advantages is that it is cross-platform, both during development and at deployment. Unity applications can be deployed to many platforms, including Windows, macOS, Linux, and different mobile operating systems. That means not only can you choose which operating system you want to create your 3D application on, but once it has been created, it can be deployed to a large variety of devices. For instance, you can build stand-alone programs for the common desktop PC operating systems (Windows, macOS, Linux, etc.) as well as versions for different mobile phones (e.g., Android-based phones and iPhones). Of course, the differences in performance and hardware resources between a PC and a mobile phone need to be considered, but there is support for this in Unity as well. In addition, Unity has been a front-runner when it comes to building VR applications and supports or provides extensions for a large variety of consumer-level VR devices such as the Google Cardboard, Gear VR, Oculus Rift, HTC Vive, and other mainstream HMDs. Three of the most popular VR development plugins (a series of libraries and pre-made game objects that facilitate access to HMDs and enable developers to program different functionalities) for Unity are SteamVR (by Valve Corporation), OVR (also known as Oculus Integration, by Oculus), and VRTK (a community-based plugin by Sysdia Solutions).
While its origin is in the gaming and entertainment industry, Unity is being used to develop interactive 3D software in many other application areas including:
More examples can be found at Made with Unity [127]
In this course, we will use Unity to build simple interactive and non-interactive demos around the 3D environmental models we learned to create in the previous lessons. You will also learn how to produce 360° videos from these demos so that they can be watched in 3D using the Google Cardboard. Lastly, you will learn how easy it can be to add physics to different geometrical objects in Unity and make them “behave” the way you want.
Search the web for a 3D (or VR) application/project that has been built with the help of Unity. The only restrictions are that it is not one of the projects linked in the list above and that it is not a computer game. Then write a 1-page review describing the application/project itself and what you could figure out about how Unity has been used in the project. Reflect on the reasons why the developers have chosen Unity for their project. Make sure you include a link to the project's web page in your report.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
URL is provided, the application/project uses Unity, is not already listed on this page, and is not a game. | 2 pts | 1 pt | 0 pts | 2 |
The report clearly describes the purpose of the application/project and the application itself. | 3 pts | 1.5 pts | 0 pts | 3 |
The report contains thoughtful reflections on the reasons why Unity has been chosen by the developers. | 3 pts | 1.5 pts | 0 pts | 3 |
The report is of the correct length, grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pt | 0 pts | 2 |
Total Points: 10 |
[1] Unity [128] (accessed on July 16, 2019)
Before providing an overview of Unity’s main concepts and interface, let us start by downloading and installing Unity so that you can follow along and immediately start to experiment with the things you will learn about. Unity comes in three different versions: “Personal” is completely free, while “Professional” comes at a monthly fee ($125 at the time of this writing) and “Plus” at a lower monthly fee ($25 at the time of this writing). For our purposes, the Personal edition is completely sufficient. If you are interested in knowing more about the differences between the versions, an overview is provided at the Unity Store [129].
Please follow the instructions below to download and install the Personal Edition of Unity:
Go to Download Unity [130] and click on “Download Unity Hub“. The Hub is a new mechanism for managing the different versions of the Unity editor you want to install on your machine.
Execute the downloaded file (UnityHubSetup.exe) and go through the installation dialog. Once the installation is finished, if you have checked the “Run Unity Hub” option at the end of the installation process, when you click on Finish, the Unity Hub will open. Alternatively, you can also navigate to your desktop and double click on the newly created Unity icon named “Unity Hub” to start it.
You should be able to see a window as depicted below. Locate the profile icon in the top right corner of this window, click on it, and then select “Sign in”. A new window will pop up asking for your login information. Sign in with your Unity account. If you do not have a Unity account, go ahead and create one using the “create one” link on the pop-up window, and then sign in.
Once you are signed in, make sure the “Installs” category is selected on the left panel (the one highlighted with the red area in the figure below) and click on the blue “Add” button. A new window will pop up where you can select which version of Unity you want to install. At the time of writing this, the latest stable version is 2019.1.10f1. Please download this version as it is compatible with the rest of the instructions given in the course (if you choose to install a newer version of the editor, you might need to change some of the C# code we will be using). To do so, either click on “Download archive” in the window that has popped up, or follow this link (https://unity3d.com/get-unity/download/archive [132]). This action will open a new tab in your default web browser. On that web page, select Unity 2019.X, navigate to almost the end of the page, and then click on the green button next to “Unity 2019.1.1” that says “Unity Hub”. This will open this version of the Unity editor in the Unity Hub for you to download and install.
This action will automatically proceed to the next dialog window in the Unity Hub, where you can select which extra modules to install. If you do not have Microsoft Visual Studio, it will be automatically selected for you on this list. Moreover, if you are developing on Linux or Mac, make sure to select the relevant “Build” module for your OS, so you can deploy your virtual experience as a desktop application later. Click “Next” and wait until the installation is finished (this could be a lengthy process, so be patient).
Once the installation is complete, you should have a Unity icon (with a specific version) on your desktop and “Unity” in your Start Menu. Before starting Unity, we need to create a project. To do this, make sure the “Projects” category is selected in the Unity Hub (the one highlighted with the red area in the figure below) and click on the blue “New” button. Unity provides several templates for us to choose from (depending on the purpose of the project, e.g., 2D, 3D, VR lightweight RP, etc.). For the purpose of this lesson, we will use the default 3D template. Change the name of your project to “MyFirstUnityProject”. You can also change the location where the project will be saved (this is optional and solely based on your preference). Press “Create”, and after a few minutes the Unity editor will be ready for us to use.
A Unity project consists of one or more Scenes, and each Scene is formed by a set of GameObjects located in the Scene. The name GameObject is obviously based on Unity’s main purpose as a game engine, but these entities can be anything that we would want in a 3D application: 3D models of buildings, terrain, lighting sources, cameras, avatars, etc. They can also be non-physical things, in particular abstract entities just serving as containers for scripts that perform some function, such as realizing changes in the environment when the 3D application is executed. The main programming language used to write scripts in Unity is C#.
When editing a project in Unity, as well as when executing a Unity project, only one Scene is active at any given moment. Each Scene has a name, and you can see the name of the currently active Scene in the title bar. There are four parts separated by hyphens: the version of Unity, the name of the Scene, the name of the project, and the platform currently chosen to build the Unity project for. The Scene name right now will be “SampleScene”, as this is a default scene automatically created by Unity whenever a new project is created.
In a computer game built in Unity, each level would be a different Scene. In addition, things like cut scenes and menu screens would also be realized as individual Scenes. The game code will then have to implement the logic for switching between the Scenes. Similarly, in a non-game 3D application, you could also have several Scenes to switch between different environments, e.g. an outdoor model and several indoor models.
To give you an example of having more than one scene in a project, as well as of how a scene can be populated with GameObjects, we have prepared a Unity package for you that contains a scene called “Main”, populated with terrain and a few classic buildings of the PSU campus in State College. Download this Unity Package [104]. Once the download is complete, simply double-click on it, and an import window will pop up inside the Unity editor showing the folder structure inside the package. By default, all the folders should be selected. Click on Import, and all the selected folders will be added to the “Assets” structure of your project. If you receive a warning saying that the package was exported using an earlier version of Unity, simply click on “Continue”.
This is a relatively large package (almost 200MB), and therefore it could take a while before the import is complete.
Once the import is finished, you may notice that a new folder called “Historic Campus” is added to your project folder structure under Assets (highlighted in the figure below). In addition, another folder with the title “Standard Assets” is also added, which is a collection of ready-to-use assets provided by Unity to speed up development time. The standard assets folder contains a “first-person controller” prefab that is used in the historical campus scene.
This folder structure is used to organize the resources of your project. It is good practice to organize your files into folders based on some meaningful structure and add a sensible title to the folders.
If you select the “Historic Campus” folder with your mouse, you can see that its contents are listed in the lower-center part of the project panel in the editor. This is another way to view and navigate your project folder structure. There are several subfolders inside the Historic Campus folder that contain the 3D models, textures, and other resources used in the scene. To open the imported scene of the historical campus, simply double-click on the “main” scene. We will use this scene to go over the basics of Unity.
If you click on the small “Play” button on the top center of the editor (alternatively you can use the “Ctrl + P” shortcut), you can run the scene and see it in action. Currently, the scene has three buildings, a simple flat plane, and a first-person controller. Use your mouse to look around, and the WASD keys to navigate the environment. Once you are done experimenting, you can stop the scene from running by either clicking on the play button again or pressing “Ctrl + P”.
As you may have already noticed in the previous section, there are two main windows in the editor that show two different views of the scene. The window you will mostly look at when creating a Unity project is the Scene Window, currently open in the left-center of the screen. It shows a 3D view of the Scene you are currently working on, in this case consisting of a few historic PSU campus buildings from the year 1922 and an underlying plane. The window on the right-center of the screen is called the Game Window and shows the output of the scene when the project is built. The Game Window gives you a feel for how this scene of your virtual experience will look from a user’s perspective while you are developing. As its title suggests, and as you have already experienced, once you “play” your scene you experience it in this window.
You can alter the arrangement of all the panels and windows to your liking by simply dragging them around. For instance, if you prefer not to see the Game Window while you are editing your scene, you can grab its panel header, drag it next to the Scene Window, and drop it there.
If you click into the scene window to give it focus, you can then use the mouse to change the perspective from which you are looking at the scene. The details of how this goes are affected by a set of buttons at the top left. The seven buttons on the left are the main ones to control the interaction with the scene and the objects in it with the mouse. For now, make sure that the first button (i.e. Hand tool) is selected as in the screenshot below:
Now go ahead and try:
You will learn more details about how to interact with the Scene Window soon. Let us now turn our attention to the panel on the left, which is the Hierarchy panel. It lists the GameObjects in the Scene. Currently, these are the “FPS Controller”, a “Directional Light”, and the “Campus” 3D model. The panel is called “Hierarchy” because GameObjects are organized in a hierarchy and the panel shows us the parent-child relationships. The “Campus” object is actually just a container for grouping the 3D objects of the individual buildings and the terrain together. By clicking on the small arrow on the left, you can expand that part of the hierarchy and will see the sub-entities of the “Campus” object, called “Terrain” and “Buildings”. As you probably already noticed, these objects, in turn, are just containers; “Buildings”, for instance, contains the models of the individual buildings shown in the Scene.
You can double-click on any entity in the hierarchy to change the view in the Scene Window to focus on that particular object. Try this out by double-clicking on some of the buildings, e.g. starting with the “Woman’s building” object.
You may already have noticed that selecting an object in the Hierarchy Window, either by double-clicking it or by a single click if you don’t want the view in the Scene Window to change, has an effect on what is displayed in the window on the right side of the Scene view. This is the Inspector Window, which has the purpose of displaying detailed information about the selected object(s) in our Scene as well as allowing us to modify the displayed properties and add new components to a GameObject that affect its appearance and behavior. If you select the “Liberal Arts” building and check out the Inspector Window, you will see a bunch of information grouped into sections, including so-called components attached to the object that determine how the object will be rendered in the scene. Now compare this to what you see when you select the “FPS Controller” object or the “Directional Light”. Since these are special objects with particular purposes other than being displayed in the Scene, they have special properties related to that particular purpose. For instance, the FPS Controller has a child called “FirstPersonCharacter”. This GameObject has a camera component attached to it (as the camera should move when the player character moves around the scene) with multiple parameters (e.g. Field of View). As an experiment, change the value of the Field of View parameter using the slider to see its effect.
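As a side note, the same Field of View value can also be set from a script at run time. The following small sketch is only an illustrative example (the script name and value are ours); it would have to be attached to the GameObject that carries the Camera component, e.g. “FirstPersonCharacter”:

using UnityEngine;

public class FieldOfViewTweak : MonoBehaviour
{
    // Desired field of view in degrees; as a public field it shows up in the Inspector
    public float DesiredFieldOfView = 75f;

    void Start()
    {
        // GetComponent<Camera>() returns the Camera component attached to this GameObject
        GetComponent<Camera>().fieldOfView = DesiredFieldOfView;
    }
}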
The “Directional Light” is one of several possible light sources that Unity provides. It essentially simulates the sunlight and will be used to render the Scene, resulting in shadows, etc. Instead of having a camera component attached to a child of the FPS controller object, we could have added a static “Main Camera” to the scene. The camera object determines what will be shown when the Unity project is being run, either inside the Unity editor or after we exported it as a stand-alone application. To see the difference, disable the FPS controller and add a main camera to the scene as shown in the video:
We first need to disable the FPS Controller by unchecking it in the Inspector panel. Then, we need to add a static camera to our scene. To do this, we click on the GameObject menu at the top of the editor and select Camera. Next, we need to adjust the rotation and position of our static camera so it captures the part of the scene we want to show. To do this, we use the toolbar we have addressed previously to switch the mouse functionality. In the video, you have seen the use of the Move and Rotate tools. Note that the same results could be achieved by manipulating the numerical Transform values of the camera GameObject in the Inspector. To try this, observe the numerical values while you use the mouse to move or rotate the object, or alternatively, change the numerical values to see their effect.
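For reference, the same Transform values you see in the Inspector can also be read or set from a script; the sketch below is only an illustration with made-up example values (attach it to any GameObject, e.g. the camera, to see its effect when the Scene is played):

using UnityEngine;

public class PlaceObject : MonoBehaviour
{
    void Start()
    {
        // Position in world coordinates (X, Y, Z); the values here are examples only
        transform.position = new Vector3(0f, 1.5f, -10f);
        // Rotation given as Euler angles in degrees around the X, Y, and Z axes
        transform.rotation = Quaternion.Euler(10f, 0f, 0f);
    }
}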
A very handy way to change the camera perspective (and also the position and orientation of other GameObjects) is by first changing the Scene view to the desired perspective and then telling Unity to use that same perspective for the camera object. This can be accomplished by using "GameObject -> Align With View" from the Gameobject menu bar, or by simply using the keyboard shortcut CTRL+SHIFT+F. To practice this, double-click the Old Botany building to have Scene view focus on this building. Now select the Camera object (single click!) and use Align With View. See how the camera perspective in the Camera Preview has changed to be identical with what you have in the Scene view? On a side note, the GameObject menu also has the function “Align View to Selected” which is the inverse direction of “Align With View”: it can, for instance, be used to change the Scene view to your current camera perspective.
The last panel we will cover in this section is the “Console”. This panel displays warning and error messages when Unity experiences any problems with your project, and it also prints the messages you log in your source code (e.g. for debugging purposes).
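For example, a simple Debug.Log call in any of your scripts prints to this Console panel, which is handy for checking whether and when a piece of code actually runs (the script name and messages below are just examples):

using UnityEngine;

public class ConsoleDemo : MonoBehaviour
{
    void Start()
    {
        // These messages appear in the Console panel when the Scene is played
        Debug.Log("Scene started.");
        Debug.LogWarning("This is what a warning looks like.");
        Debug.LogError("This is what an error looks like.");
    }
}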
As was previously mentioned, the way the individual windows and panels are arranged here is the default layout of the Unity editor. But you can freely re-arrange the windows in pretty much any way you want, for instance by clicking on the project tab and dragging it to a different part of the screen. The Layout drop-down menu at the top right allows you to switch between predefined layouts as well as define your own layouts. If somehow the layout got messed up, you can select "Default" there and it should go back to the standard layout that we have been using here so far.
By now, you should be able to move and rotate GameObjects in the scene and play and stop the scene. With respect to the latter, there is an important and useful functionality to know about the Play Mode: While in Play Mode (either running or paused), you can still make changes to the objects in your scene, e.g. by changing parameters in the Inspector panel. However, in contrast to changes you make in Edit Mode, these changes will only be temporary; they will not be saved in your Scene. So when you stop play and go back to Edit Mode, these changes will be gone. This can still be very useful because it allows you to try out things while the project is running and immediately see the impacts.
The last thing we want to quickly mention here is how you actually build a stand-alone application from your Unity project, so basically how to create a stand-alone program for a platform of your choice that does the same thing that you get in the Unity editor when in Play Mode. This is done by going "File -> Build Settings…", which opens a new dialog:
In the pane at the top, you can select the Scenes that should be included in the build. Click on the “Add Open Scenes” button to add the current scene to the build window. At the bottom left, you select the target platform for your build. It is possible to install extensions to build applications for platforms other than the ones you currently see here. The one relevant for this course is “PC, Mac & Linux Standalone”, and you then have to select “Windows” as the Target Platform on the right. With the “Player Settings” button at the bottom, you can set more parameters that affect your build. Pressing “Build” will start the process of compiling the stand-alone application, while “Build & Run” will in addition run the program after the compilation process has finished. When you have all parameters for your build already set, you can use “Build & Run” directly from the File menu. When you click "Build", you will be asked to choose a folder to save the project in. In the save dialog, navigate to your “Documents” folder and use “Walkthrough” as the file name under which the application will be saved. Then click “Save”. Unity will show you a progress bar for building the application, which can take a few minutes. If you have a Mac, a Windows application will not run on your computer; select the Mac target platform instead to create a Mac application.
It is time to get more familiar with the interface and components of the Unity Project Editor and get some practical experience. The Unity web page [134] provides video tutorials on the basics and some advanced aspects of the Project Editor that we are going to rely on here. So please grab a cup of coffee, tea, or whatever you prefer, and watch the videos listed below. Please pause often to try out and practice the things explained there on the project you have. In particular, make sure to practice changing the view by panning and zooming, as well as quickly changing the perspective with the View Gizmo and manipulating objects in the Scene view. If something goes completely wrong, you can always delete all the folders and re-import the unity package and then start over.
That was quite a bit of new material, but by now you should have a very good understanding of the Unity editor. We are almost done with our video tutorials before we engage in some fun practical work in the next section, where we will build a very simple game! Please take a few minutes to learn about some of the more advanced features of Unity using the following three videos.
In this walkthrough, we will use Unity to build a stand-alone Windows application from a SketchUp model. It will be a rather simple application, just featuring a keyboard-controlled camera that allows for steering the camera in a 3D scene with a single building.
Close Unity for now and download the Old Main model we are going to use here as a SketchUp .skp file. [104] You should still have the Unity project “MyFirstUnityProject” from the beginning of the lesson with just an empty scene ("SampleScene" in the scenes folder). This is what we will use for this walkthrough. The first task we have to solve is to export the 3D model from SketchUp into Unity.
Note: If the Old Main model is too heavy for your personal computer, you can use the model of Walker Laboratory [135].
Unity allows for importing 3D models in different formats. Typically the files with the 3D models can be simply put into the Assets folder (or some subfolder of the Assets folder) of a project and then Unity will discover these and import them so that they show up under Assets in the project. Until very recently, there was no support to directly import SketchUp .skp files into Unity and, therefore, 3D models had to be exported from SketchUp in a different format as shown in the figure below.
However, direct import of SketchUp .skp files is now possible. So we are simply going to put the downloaded .skp file into the Assets folder. Before that, please make sure that neither Unity nor SketchUp are currently running! Then perform the following steps:
If the above method does not work for you, you can always simply add the .skp model to the editor by dragging and dropping it into the folder structure. You may have noticed that the building object we have imported looks rather dull and is missing lots of textures and materials. To fix this problem, we need to extract all the materials and textures from the prefab of the model so that Unity knows how to map them. Select the prefab of the model you have imported into your folder. You can see in the Inspector that Unity has detected that this is a SketchUp model, but all the materials for the individual components of this model are missing (labeled as None).
To fix this, click on the “Extract Textures …” button in the inspector panel. Create a new folder in the pop-up window and rename it to “Textures”, and then click on “Select Folder”. Do the same for Materials by clicking on the “Extract Materials …” and creating a new folder called “materials”. Once done, the model should have all the materials and textures added to it.
The next thing we are going to do is place the Old Main model in the Scene and make a few more changes to the Scene.
If you select the “MainCamera” and look at the Preview (or try out Play Mode), you will see that it is now quite a bit away from the building. So let us place the camera such that we have a nice view of the building from the sunny side and the building roughly fills the entire picture. Remember that one way to do this is by changing the Scene view to the desired perspective and then using “Align With View”.
To make things at least a little bit interactive, we now want to be able to move the camera freely around in the scene using mouse and keyboard. Such interactivity is typically achieved by adding script components. So in this case, we want to attach a script to the “MainCamera” object that reads input from the mouse and keyboard and adapts the camera position and rotation accordingly.
We won’t teach you C# scripting for this functionality because we can reuse an existing script. A resource for Unity scripts and extensions outside of the Asset Store is the Unity3D Wiki. As it happens, we found a script there, published under the Creative Commons Attribution-ShareAlike license, that realizes the free-flight camera control we are looking for. The script can be downloaded here [104].
Let us have a very short look at the code. If you have some programming experience, you will recognize that there are a bunch of variables defined early on in the script (public float … = …) that will determine things like movement speed, etc. of the camera. These variables will be accessible from the Unity Inspector Window once we have attached this script to our camera. The actual functionality to control the camera is realized in the function Update() { … } . GetAxis(…) is used to read information from the mouse, while GetKey(…) is being employed to check the status of certain keys on the keyboard. The content of the variable called “transform” roughly corresponds to what you always see in the Transform section of the Inspector Window and is used here to change the camera position and rotation based on the user input. The Update() function of a script that is attached to a GameObject in the scene will be called for every tick (= rendered frame) when the project is executed. So in Play Mode, the keyboard and mouse will be constantly checked to achieve a smooth control of the camera.
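The downloadable script is more elaborate, but the following simplified sketch (our own illustration, not the script you just downloaded, so please still use the downloaded file in the steps below) shows the general structure described above: public variables that appear in the Inspector, and an Update() method that reads mouse and keyboard input every frame and applies it to the transform:

using UnityEngine;

public class SimpleFlyCamera : MonoBehaviour
{
    // These public fields appear in the Inspector once the script is attached
    public float moveSpeed = 10f;
    public float lookSpeed = 2f;

    void Update()
    {
        // Rotate the camera based on mouse movement
        float yaw = Input.GetAxis("Mouse X") * lookSpeed;
        float pitch = -Input.GetAxis("Mouse Y") * lookSpeed;
        transform.Rotate(0f, yaw, 0f, Space.World);
        transform.Rotate(pitch, 0f, 0f, Space.Self);

        // Move forward/backward and sideways with the W/S and A/D keys
        Vector3 move = Vector3.zero;
        if (Input.GetKey(KeyCode.W)) move += transform.forward;
        if (Input.GetKey(KeyCode.S)) move -= transform.forward;
        if (Input.GetKey(KeyCode.A)) move -= transform.right;
        if (Input.GetKey(KeyCode.D)) move += transform.right;
        transform.position += move * moveSpeed * Time.deltaTime;
    }
}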
Here are the steps to add this code as a script asset to our project and then attach it to the camera object in the Scene:
The objective of this section is to get you familiar with the basics of C# programming in Unity and the different awesome components (e.g. physics) that Unity offers. By the end of this section, you will be able to create a simple “Roll the Ball” game as shown below:
The objective of the game is to roll a ball inside a box and collect all the available coins. The coins should simulate a rotation animation, capturing the attention of the player. Each time the ball hits a coin one point should be awarded to the player, a “cashier” sound effect played, and then that coin should disappear from the scene (i.e. collected).
The following steps will help you to first conceptualize what needs to be implemented and how it should be done, and second, guide you on how to utilize the Unity3D game engine to realize the needed game objects, mechanics, and functionalities.
Before we begin programming the game behaviors described above, we need to set up the play area (i.e. the box) in which we will be moving the ball. To achieve this, we will use the default empty scene that comes with every new Unity project. The scene should already be open and ready. To add the floor, go to the GameObject menu at the top of the Unity editor, then 3D Object -> Plane. This will add a plane to the scene. Make sure the Position and Rotation parameters of the Transform component in the Inspector (red box in the figure below) are all set to zero:
Next, we need to add some color to our plane GameObject. To do this, in the project folder structure, right click on “Assets” and go to Create -> Folder. Rename this folder to “Materials”.
Now right-click on the newly created materials folder and go to Create -> Material. Rename this material to “PlaneMat”.
Select the newly created “PlaneMat” and change its color from the Inspector panel on the right by clicking on the color picker next to the “Albedo”. Select a color of your liking.
Drag the “PlaneMat” onto the Plane GameObject in the scene. The Plane should now have color.
Next, we need to add four walls to the sides of the plane so the ball won’t fall off. To do this, go to the GameObject menu at the top, then 3D Object -> Cube. While the newly added Cube game object is selected, set its Transform parameters in the Inspector as depicted below:
Doing so will result in a wall placed on the right side of the plane. To have the proper view of the play area when running the scene or inspecting the scene output in the “Game” tab, set the position and rotation parameters of the “Main Camera” game object as shown below:
Create a new material with a color of your choice inside the materials folder and name it “WallMat”. Drag this material onto the Cube game object that acts as the wall.
To add more meaning to our scene, rename the Plane and Cube game objects to “Floor” and “Wall” by right-clicking on each and then selecting “Rename”.
Select the “Wall” game object, hold down the Ctrl key and press D (you can also right-click on the Wall game object and click on Duplicate). If done correctly, your “Wall” object will be duplicated and labeled “Wall (1)”. Change the X value of the position parameter of “Wall (1)” from -5 to 5. With this simple process, a new wall is added to the left side of the plane.
Repeat the same procedure to add walls to the remaining two sides of the plane. Simply duplicate the “Wall” object twice more and change the parameters as shown below:
At this stage, the play area should be ready and look like this:
Next, we need to add a ball to our scene that the player can control and roll around the play area. To add a ball to the scene, go to GameObject -> 3D Object -> Sphere. Change the name of this game object to “Ball”, set its Transform parameters as shown below, and add a material with a color to it (create “BallMat”).
If you play the scene, you will notice that the ball floats in the air instead of dropping onto the plane. The reason for this is that the ball game object does not have any physics attached to it yet. To fix this, select the ball game object, and in the Inspector panel, click on “Add Component”, search for “Rigidbody”, and hit Enter.
Doing so will add a Rigidbody component to the ball game object. You may notice that the “use gravity” parameter of this component is checked by default, which will add gravity to this game object, and therefore it will drop onto the plane. Press play and see the result.
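As an aside, the same component can also be added from code. The sketch below is only a hypothetical alternative to the Inspector workflow described above (the script name is our own); if it were attached to the Ball, it would add the Rigidbody when the Scene starts playing:

using UnityEngine;

public class AddPhysicsAtRuntime : MonoBehaviour
{
    void Start()
    {
        // Adds a Rigidbody component to this GameObject at run time; gravity is enabled by default
        Rigidbody rb = gameObject.AddComponent<Rigidbody>();
        rb.useGravity = true;
    }
}

For this walkthrough, please stick to adding the Rigidbody through the Inspector as described above.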
Rolling the ball: To enable the player to control the movement of the ball, we will use the arrow keys (the up, down, left, and right keys) on the keyboard. The idea is that by pressing the up arrow key we will make the ball roll forward, the down arrow key, backward, and the left and right arrow keys, to the sides. To achieve this, we will need to do a little bit of C# programming using the physics engine of Unity.
Create a new folder in the project panel called “Scripts”. Right-click on this folder and select Create -> C# Script. Rename this script to “BallMovement” (make sure you use this exact spelling for the script name, it is case sensitive, if you are planning to copy the source code!). Drag this script onto the Ball game object. Double-click on the script to open it in Visual Studio (this could take a while, and you might be asked to log in; you can either create an account or explore the different options provided to you). Our simple ball movement script should look like this (please read the comments in the code for further explanation). Either type or copy the following code into your script (make sure the name of the class, i.e. “BallMovement”, is spelled exactly as you have named the script file). Note that the lines shown in green (starting with //) are comments and are not executed by the engine; they are just extra explanation to help you understand the logic behind the code. Therefore, if you decide to type the source code into your script, you do not need to write the comment lines:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BallMovement : MonoBehaviour
{
    // we need to define a variable that controls the speed with which the ball rolls
    // the declaration of variables is done here
    // the type of the variable is float and its label is Speed. The float type means that the value of Speed can have decimal points, e.g. 15.5
    public float Speed;

    // FixedUpdate is called at fixed time intervals, in sync with the physics engine, which creates the illusion of smooth movement
    void FixedUpdate()
    {
        // we need to take the input from the arrow keys pressed by the player. By default in Unity, the left and right arrow keys are labeled as "Horizontal" and the up and down arrow keys as "Vertical"
        // we define two variables whose values will be updated every physics step depending on whether the player presses the horizontal (left/right) and/or vertical (up/down) arrow keys
        float HorizontalMovement = Input.GetAxis("Horizontal");
        float VerticalMovement = Input.GetAxis("Vertical");

        // since we are dealing with three-dimensional space, we need to define a 3D vector along which the ball should move. We define a Vector3 called MoveBall that takes three parameters X, Y, Z
        // movement on the X axis is our HorizontalMovement (left/right keys) and on the Z axis the VerticalMovement (up/down keys). As we do not want to move the ball along the Y axis (up and down in 3D space), the Y value is set to 0
        Vector3 MoveBall = new Vector3(HorizontalMovement, 0, VerticalMovement);

        // lastly, we access the physics component of our ball game object (i.e. the Rigidbody) and add a force to it based on the vector we just defined. We multiply this force by our Speed variable so we can control how strong the push is
        gameObject.transform.GetComponent<Rigidbody>().AddForce(MoveBall * Speed);
    }
}
Once you are done populating your script with the source code above, make sure you hit save. It is absolutely important to remember that every time we modify a script, we save it so that Unity updates the editor based on the changes we have made.
If we go back to Unity and select the Ball game object, we will see that in the Inspector panel, under the “Ball Movement” script, there is now a “Speed” variable that we can set. Change its value to 25 and hit Play. Use the arrow keys to roll the ball inside the play area.
Adding coins to the scene: download the Unity package file “Roll the Ball Assets” [104]. Double-click on this file, and it will open an “Import Unity Package” window inside Unity. Click on Import. Doing this will add two new folders to your Assets hierarchy. One is called “Coin”, which contains the model we will be using for the collectibles in our game. The other is called “Sounds”, which will be covered in the next steps.
Go to the “Coin” Folder and select the Coin model. Drag this model into your scene and place it somewhere inside the play area. Change its transform parameters as follows:
We will now add a rotation effect to this coin to catch the attention of the player. To do this, create a new script inside the “Scripts” folder called “CoinRotation”. Drag this script onto the coin game object. Double-click on the script to open it in Visual Studio. Our simple coin rotation script will look like this (please read the comments in the code for further explanation):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CoinRotation : MonoBehaviour
{
    // we need to define a variable that controls the speed with which the coin rotates
    public float RotationSpeed;

    void FixedUpdate()
    {
        // we instruct this object to constantly rotate around its own Y axis (i.e. up)
        // this is done by accessing the transform component of this GameObject and using the built-in RotateAround Unity function
        // the function takes three parameters. The first one indicates the point in 3D space around which the rotation should happen; since we want the coin to rotate around itself, we pass the position of the coin itself
        // the second parameter indicates the axis around which the game object should rotate; we set this to the up axis so the coin rotates around Y
        // the third parameter indicates the angle (per call) by which the coin should rotate; for this we use our RotationSpeed variable
        gameObject.transform.RotateAround(gameObject.transform.position, transform.up, RotationSpeed);
    }
}
If we go back to Unity and select the coin game object, we will see that in the Inspector panel, under the “Coin Rotation” script, there is now a “Rotation Speed” variable that we can set. Change its value to 15 and hit Play. The coin should now be continuously rotating around its Y-axis.
Use the arrow keys to roll the ball inside the play area and hit the coin. You will notice that the ball goes right through the coin. The reason for this is that no collision happens between these two game objects. The ball game object has a Sphere Collider attached to it by default, but the coin game object does not have a collider. Select the coin game object, click on “Add Component” in the Inspector panel, search for “Box Collider”, and hit Enter.
Play the scene and try to hit the coin with the ball again. You will notice that this time the two objects collide. We need colliders on both objects to be able to determine when they “hit” each other, and we need this to be able to collect the coins (i.e. remove a coin from the scene when the ball hits it). You may have noticed that during the collision, the coin momentarily blocks the movement of the ball until it slides to either side of the coin and keeps rolling based on the force behind it. We do not want the player to experience this effect. Instead, we want the game to detect exactly when the two objects are colliding without them blocking each other in any way. The collision system of Unity can help us achieve this goal using a “Trigger”. If you select the coin game object, you will notice that the “Box Collider” component we just added has a parameter called “Is Trigger”. Enable this parameter by clicking on the checkbox next to it. If we hit Play, we can see that although we have colliders attached to both objects, the ball is not blocked by the coin anymore.
Coin collection: create a new script called “CoinCollection” and attach it to the coin game object. Open the script in Visual Studio and modify the code based on the instructions below:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CoinCollection : MonoBehaviour
{
    // this is the default Unity function that will be called whenever two objects collide and one has the Trigger parameter enabled
    // the argument "other" refers to the object that hits the game object this script is attached to
    private void OnTriggerEnter(Collider other)
    {
        // in this case we want to check if the "other" is the "Ball"
        // make sure your Ball GameObject is named "Ball", otherwise the coins will not be collected!
        if (other.name == "Ball")
        {
            // if this condition is true and indeed the Ball game object has hit the game object this script is attached to (i.e. the coin), we remove this game object
            Destroy(this.gameObject);
        }
    }
}
Duplicate the coin game object several times and spread the coins across the play area. Play the scene and hit the coins. You will see that as soon as the ball hits a coin, that particular coin will be “destroyed”.
Point system: Before destroying a coin upon collision with the ball, we want to reward the player with a point for managing to collect it. To implement this functionality, create an empty GameObject by going to GameObject -> “Create Empty”. Rename this empty game object to “Point System”. Create a new script called “PointSystem” and attach it to the “Point System” game object. Open the script and modify the code based on the instructions below:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PointSystem : MonoBehaviour
{
    // we need a variable that holds the value of the points we accumulate
    public int Points;

    // we need to define a GUIStyle profile for our graphical user interface (GUI) that shows the value of points, so we can change its font size, shape, etc.
    public GUIStyle PointStyle;

    // this is the default Unity graphical user interface (GUI) function. It renders GUI elements on the screen at specific 2D coordinates
    private void OnGUI()
    {
        // a GUI label showing the value of the accumulated points, placed at the coordinates X=10, Y=10, with a width and height of 100. The next argument is the content shown by the label, which in this case is
        // a numerical value; we need to convert it to a string (text) to be able to show it on the label. The last argument indicates which GUIStyle profile should be used for this label
        GUI.Label(new Rect(10, 10, 100, 100), Points.ToString(), PointStyle);
    }
}
The script described above is responsible for holding the value of the points the player aggregates and visualizing it on the screen. To increase the points however, we will add a line to the script responsible for detecting when the ball hits a coin. For this go back to the “CoinCollection” script and modify it based on the instructions below:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CoinCollection : MonoBehaviour
{
    // this is the default Unity function that will be called whenever two objects collide and one has the Trigger parameter enabled
    // the argument "other" refers to the object that hits the game object this script is attached to
    private void OnTriggerEnter(Collider other)
    {
        // in this case we want to check if the "other" is the "Ball"
        if (other.name == "Ball")
        {
            // if this condition is true and indeed the Ball game object has hit us, access the Point System game object, then the PointSystem script attached to it, and increase the Points variable by 1
            GameObject.Find("Point System").GetComponent<PointSystem>().Points += 1;

            // we remove the game object this script is attached to (i.e. the coin)
            Destroy(this.gameObject);
        }
    }
}
Next, select the “Point System” game object. On the inspector panel on the right, in the “PointSystem” Script attached to it expand the “Point Style” GUIStyle and change it as instructed below:
Press play and start aggregating points!
Collection sound: another feature we need to add to our game is a sound effect. We want to play a “cashier” sound effect each time the ball hits a coin. To achieve this, create a new empty game object and rename it to “Sound Management”. Select this newly created game object, click on “Add Component”, search for “Audio Source”, and hit Enter.
Navigate to the “Sounds” folder in your project hierarchy and drag the “CashierSound” clip to the “AudioClip” field in the “AudioSource” component we just added to our audio management game object. Uncheck the “Play On Awake” parameter of the “AudioSource” component so the sound clip does not play as soon as we start the scene.
Next, we need to instruct the game to play the audio clip whenever we collect a coin. To achieve this, modify the “CoinCollection” script based on the instructions below:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CoinCollection : MonoBehaviour
{
    // this is the default Unity function that will be called whenever two objects collide and one has the Trigger parameter enabled
    // the argument "other" refers to the object that hits the game object this script is attached to
    private void OnTriggerEnter(Collider other)
    {
        // in this case we want to check if the "other" is the "Ball"
        if (other.name == "Ball")
        {
            // if this condition is true and indeed the Ball game object has hit us, access the Point System game object, then the PointSystem script attached to it, and increase the Points variable by 1
            GameObject.Find("Point System").GetComponent<PointSystem>().Points += 1;

            // access the Sound Management game object, then its AudioSource component, and play the clip
            GameObject.Find("Sound Management").GetComponent<AudioSource>().Play();

            // we remove the game object this script is attached to (i.e. the coin)
            Destroy(this.gameObject);
        }
    }
}
Hit play and enjoy your creation!
It is NOT the goal of this course to teach you C# programming. However, without basic knowledge of C#, you will not be able to create a rich virtual experience. The scripts provided to you in this exercise are fully commented for your convenience. Please take the time to study them, as they will prove rather useful in the next lesson.
The homework assignment for this lesson is to report on what you have done in this lecture. The assignment has two tasks and deliverables with different submission deadlines, all in week 8, as detailed below. Successful delivery of the two tasks is sufficient to earn the full mark on this assignment. However, for efforts that go "above & beyond" by improving the roll-the-ball game, e.g. creating a more complex play area (like a maze) instead of the box, and/or making the main camera follow the movement of the ball, you can be awarded one extra point if you fall short on any of the other criteria.
To help with the last suggested extra functionality, use the following script and attach it to the main camera:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FollowBall : MonoBehaviour
{
    // reference to the ball game object
    public GameObject Ball;

    // a Vector3 to hold the distance between the camera and the ball
    public Vector3 Distance;

    // Start is called once before the first frame update
    void Start()
    {
        // assign the ball game object
        Ball = GameObject.Find("Ball");

        // calculate the distance between the camera and the ball at the beginning of the game
        Distance = gameObject.transform.position - Ball.transform.position;
    }

    // LateUpdate is called after Update finishes
    void LateUpdate()
    {
        // after each frame, set the camera position to the position of the ball plus the original distance between the camera and the ball
        // replace the ??? with your solution
        gameObject.transform.position = ???;
    }
}
Search the web for a 3D (or VR) application/project that has been built with the help of Unity. The only restriction is that it must not be one of the projects linked in the list above and not a computer game. Then write a 1-page review describing the application/project itself and what you could figure out about how Unity has been used in the project. Reflect on the reasons why the developers have chosen Unity for their project. Make sure you include a link to the project's web page in your report.
Deliverable 1: Submit your review report to the appropriate Lesson 7 Assignment.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
URL is provided, the application/project uses Unity, is not already listed on this page, and is not a game. | 2 pts | 1 pt | 0 pts | 2 pts |
The report clearly describes the purpose of the application/project and the application itself. | 3 pts | 1.5 pts | 0 pts | 3 pts |
The report contains thoughtful reflections on the reasons why Unity has been chosen by the developers. | 3 pts | 1.5 pts | 0 pts | 3 pts |
The report is of the correct length, grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pt | 0 pts | 2 pts |
Total Points: 10
If you follow the detailed instructions provided in this lecture, you should be able to have a fully working roll-the-ball game.
Deliverable 2: Create a Unity package of your game and submit it to the appropriate Lesson 7 Assignment.
You can do this by simply going to the Assets in the top menu and then selecting “Export Package”.
Make sure all the folders and files of your project are checked and included in the package you export (you can test this and import your package in a new Unity project to see if it runs).
If you choose to implement the functionality of the main camera to follow the ball movement, please explain your solution as a comment in your script. You can add comments (texts in green) in C# by using “//”.
We highly recommend that, while working on the different tasks for this assignment, you frequently create backup copies of your Unity project folder so that you can go back to a previous version of your project if something should go wrong.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Your game runs smoothly and as intended, without any errors or strange behavior. | 6 pts | 3 pts | 0 pts | 6 pts |
Your project is complete and has all the required functionalities (coin rotation, collection, and sound effect). | 3 pts | 1.5 pts | 0 pts | 3 pts |
Your Unity package has a proper folder structure, and all the files have meaningful names, as was done in the instructions. | 1 pt | 0.5 pts | 0 pts | 1 pt |
Total Points: 10
In this lesson, you learned about the Unity3D game engine and the role it can play in creating 3D and VR applications. You should now be familiar with the interface of the Unity editor and the main concepts of Unity and be able to create simple projects and scenes. In the next lesson, we will explore other features of Unity for building 3D applications, including adding an avatar to a scene, creating a camera animation, importing resources from the Unity Asset Store, and creating a 360° movie that can be uploaded to YouTube and be watched in 3D on a mobile phone via the Google Cardboard.
You have reached the end of Lesson 7! Double-check the to-do list on the Lesson 7 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 8.
This will be another practical lesson with several hands-on walkthroughs to explore other features of Unity that are useful for building 3D applications. This includes adding an avatar to a scene such that the user can walk in an environment around buildings; and creating a camera animation to generate a video with a pre-programmed camera flight over the scene. We will learn to import resources from SketchUp, the Unity Asset Store, and to create a 360° movie that can be uploaded to YouTube and then watched in 3D on a mobile phone via the Google Cardboard.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
In Lesson 2, we introduced 3D application building software like Unity as part of a general 3D/VR application building workflow. The main role of 3D application building software can be seen in providing the tools to combine different 3D models into a single application, to add dynamics (and, as a result, making the model more realistic), and to design ways for a user to interact with the 3D environment. We also learned that 3D application building software typically provides support to build stand-alone applications for different target platforms including common desktop operating systems, different mobile platforms, and also different VR devices.
In the previous lesson, you already used some of the tools available in Unity for adding dynamics and interaction possibilities to a 3D scene: giving the user control over the movement of GameObjects, animating GameObjects via scripting, collision detection, a basic graphical user interface, and scripting more advanced behaviors, e.g. collecting coins and awarding points. As you have already experienced, adding behavior to your GameObjects is done either through standard Unity components (e.g. Rigidbody, colliders, etc.) or through scripting.
In this lesson, you will gain more practical experience with other tools available in Unity. But first, let's start the lesson with a brief overview of general tools utilized by Unity for 3D application building. As we go through this first part of the lesson, you will begin to realize that you have already used some of these tools in the previous lesson! We roughly divided the following list into ways to make a scene dynamic and ways to add interactions. However, this is a very rough distinction with certain aspects or tools contributing to both groups. In particular, scripting can be involved in all of these aspects and can be seen as a general-purpose tool capable of extending Unity in almost unlimited ways (e.g. in the previous lesson we used scripting to both animate our coins by constantly rotating them, as well as collecting them upon collision with the ball).
Animations: Animations [139] obviously are one of the main tools to make a scene dynamic. We will talk about animations quite a bit in this lesson and you will create your own animation clip to realize a camera flight over a scene. Here are a few examples of animations that can make a scene more realistic and dynamic:
Since this is a Geography/GIS class, we also want to briefly mention that support for importing GIS data into 3D modeling and application building software to employ it as part of 3D or VR application is slowly increasing. The image below shows a world map GameObject consisting of meshes for all countries in the Unity editor. The meshes were created from an ESRI shapefile using BlenderGIS, an add-on for importing/exporting GIS data into Blender. Then the meshes were exported and imported into Unity as an Autodesk .fbx file.
Recently, Mapbox announced the Mapbox Unity SDK [149], providing direct access to Mapbox-hosted GIS data and Mapbox APIs within Unity. We believe that more initiatives like this can be expected in the near future, partially driven by the huge potential that current developments in VR and AR technology offer to GIS software developers.
In this walkthrough, we will see how we can import a 3D model from SketchUp into Unity. Although the Unity Asset Store contains a large repository of free 3D models that you can use in your projects, depending on the requirements, you may need to import custom models into Unity. As you have already gained some experience in modeling 3D objects in SketchUp, it will be useful to see how you can import your model directly into Unity and add behaviors to it. In this walkthrough, we will import a SketchUp model of the “Walker” building into Unity, extract its materials and textures, and add it to a scene. Then, we will add a first-person controller prefab from Unity’s Standard Assets, which will enable us to walk around our scene.
You should still have the Unity project “MyFirstUnityProject” from the beginning of the previous lesson. Open your Unity Hub and open the “MyFirstUnityProject” (if you do not have this project anymore, follow the instructions from the previous lesson on how to create it. Make sure you also import the first package you used in the previous lesson – download here [104]). We will use this project to perform this walkthrough. Download the following Walker Laboratory [135] SketchUp file and once the download is completed, navigate to the location where it is downloaded (by default it should be in your User\[your username]\Downloads).
In the Unity editor, create a new folder in your project folder structure and rename it to “Walker Building”. Drag and drop your downloaded SketchUp file into this folder. Create a new scene in the same folder by right-clicking on the folder in the folder structure panel -> Create -> Scene. Name this scene “WalkerScene”. Open this scene and drag your WalkerLabaratory SketchUp model into the scene. Reset its position and rotation values to zero. Change the position of the Main Camera in the scene so the building is within its view. Change the X rotation of the Directional Light to adjust the lighting/shadows of your scene.
You may have noticed that the Building object we have imported looks rather dull and is missing lots of textures and materials. To fix this problem, we need to extract all the materials and textures from the prefab of the model so that Unity knows how to map them. Select the prefab of the model you have imported into your folder. You can see that in the inspector menu it is detected that this is a SketchUp model, but all the materials for the individual components of this model are missing (labeled as none).
To fix this, click on the “Extract Textures …” button in the inspector panel. Create a new folder in the pop-up window and rename it to “Textures”, and then click on “Select Folder”. Do the same for Materials by clicking on the “Extract Materials …” and creating a new folder called “materials”. Once done, the model should have all the materials and textures added to it.
To add a first-person controller, we will use the default FPSController prefab in Unity. This is the same prefab that you used in the first part of the previous lesson. Navigate to the “Standard Assets” folder -> “Characters” -> “FirstPersonCharacter” -> “Prefabs”. Drag and drop the FPSController prefab into your scene. Place the FPSController somewhere on the plane (you can start with X=0 and Z=0 for the position and then move the FPSController along these axes) and make sure the Y value of its position parameter is set to 1. Rotate the FPSController GameObject so it faces the building. If you play the scene now, you will notice that the FPSController falls through the plane. This is because we have not yet attached colliders to our objects. The FPSController prefab already has a collider, so all we need to do is add colliders to the plane and all the children of the Walker GameObject.
To achieve this, you have two options. You can either select all the child objects of the building prefab that have a “Mesh Renderer” component attached and manually add a “Mesh Collider” to each of them; since these objects have meshes, the physics engine of Unity will use the shape of each mesh to create a collider with the same shape around that object. Or you can select the Walker prefab in your project folder, check the “Generate Colliders” parameter in the Inspector under the SketchUp options, and then press “Apply”. Press Play, and you should be able to navigate on the plane using the WASD keys on your keyboard and look around using your mouse.
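If you go with the first option, you do not have to click through every child object by hand. The helper script below is our own sketch (not part of the course package); attached to the Walker GameObject, it adds a Mesh Collider to every child that has a Mesh Renderer when the Scene starts playing. Keep in mind that, as with all Play Mode changes, components added this way exist only while the Scene is playing:

using UnityEngine;

public class AddCollidersToChildren : MonoBehaviour
{
    void Start()
    {
        // find every child (and grandchild) of this GameObject that has a Mesh Renderer
        foreach (MeshRenderer renderer in GetComponentsInChildren<MeshRenderer>())
        {
            // skip objects that already have a collider of some kind
            if (renderer.GetComponent<Collider>() == null)
            {
                // the Mesh Collider uses the object's own mesh as its collision shape
                renderer.gameObject.AddComponent<MeshCollider>();
            }
        }
    }
}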
Once you are done experimenting with your new creation, disable the FPSController GameObject by clicking on it once, and then unchecking it from the inspector panel.
In the next walkthrough, we will cover animating a camera and for this, we need to use the static Main Camera in our scene.
Animations are one of the main methods to bring some life into a 3D scene in Unity and make it dynamic. Objects in the scene may be permanently going through the same looped animation (think of the leaves of trees moving in the wind), or they may be in a particular animation state that can change, potentially triggered by some external event (think of a human avatar that can switch from a walking animation to a running animation and that may have many other animations associated with it).
The theoretical concept behind this idea is the notion of a State Machine: Each GameObject is always in one of a pre-defined set of states (state for walking, state for running, etc.), potentially associated with a particular animation. The transition from one state to another is triggered by some event which, for instance, can be some user input, a collision with another object, or simply a certain amount of time passed. The states with possible transitions between them form the object's state graph as shown in the image below. The state graph together with a pointer to the object's current state when the project is running constitute the object's state machine.
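In a script, transitions between such states are typically triggered by setting parameters on the object's Animator component. The snippet below is only a generic sketch (the parameter name "IsRunning" and the key binding are made-up examples, and the corresponding transition would have to be defined in the Animator Controller):

using UnityEngine;

public class AvatarStateSwitch : MonoBehaviour
{
    void Update()
    {
        // holding the Left Shift key flips the state machine parameter "IsRunning";
        // a transition in the Animator Controller can use this parameter to blend
        // from a walking state into a running state and back
        GetComponent<Animator>().SetBool("IsRunning", Input.GetKey(KeyCode.LeftShift));
    }
}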
In this course, we will use animations only to animate the main camera to create a camera flight over the scene. Other uses of animations in 3D/VR applications could be making objects like plants more realistic, creating movable objects like doors, or populating the environment with dynamic entities like people or cars. Our camera flight will only need a single animation and, hence, a single state. So we won't have to deal with complex state graphs and the transitions between states. Nevertheless, please watch at least the first 90 seconds of the following video to get a bit of an understanding of Unity's Animator component and controller and what happens under the surface when working with an Animation State Machine. Don't worry if some of the details aren't completely clear.
The main tool for creating an animation in Unity is the Animation View. It is a very powerful but also slightly complex tool, so we will only get to know its basic features here. For details, please see the documentation available here [150]. The Animation View is for creating and editing animation clips as well as previewing them. Please watch the following short video to familiarize yourself with the idea of creating a camera animation clip in Unity using the Animation View. You do not have to try out what is shown in the video yourself; we will be doing this in the walkthrough in the next section (the walkthrough will deviate slightly from what is shown in the video, though).
The Unity web page [151] has more videos related to animations available, dealing for instance with importing animations created in other software, creating animations for a humanoid avatar, etc. You won't need to know about these more advanced topics for this course, but if this is a topic of interest to you, feel free to check them out.
In this walkthrough, we are going to program a camera flight over the Walker building using Unity's animation functionality, similar to what you saw in the video from the previous section. Have the Walker Scene open. For your assignment, you will create this animation for the campus [104] with more buildings. (The building used in these instructions is the Walker building, but you might see the name Old Main in the hierarchy; that is simply a leftover name and we hope it won't confuse you.)
We should only have one active camera, called “Main Camera”, in the Scene. We will make the Main Camera a child of a general container object called “Camera Setup”. The animation for the camera flight will be defined for the parent object (Camera Setup) and applied to all its child entities. So use “GameObject -> Create Empty” to place the container GameObject in the scene and change its name from “GameObject” to “Camera Setup”. (The image belongs to the scene with Old Main; in your Scene with the Walker building, you won't have the FPS Controller.)
Finally, drag the “Main Camera” object onto “Camera Setup” so that it becomes a child of it. This is how things should now look in the Hierarchy Window:
Now we have to place the camera in a good starting position for our camera flight. We want the camera flight to start a little bit away from the Walker building, and then move towards it before transitioning into an orbit around the building. So move the Scene view to a perspective similar to that shown in the screenshot below. First, set the position of both the parent object “Camera Setup” and its child object “Main Camera” to 0,0,0. Then manipulate the location and rotation of only the parent object “Camera Setup” to move both objects at the same time to a location you desire. If you did everything correctly, the Main Camera Position and Rotation values will all still be zero as in the screenshot below because they are measured relative to the parent object.
Now we are going to create the camera flight path. First, we are going to create a new Animation entity for the camera flight in our project Assets and link it to our Camera Setup object. Then we are going to set intermediate positions for the camera along the desired flight path. Unity will automatically interpolate between these positions so that we get a smooth flight path.
Create a new folder called “Camera Animation” under the Assets folder. Open this folder, right-click on an empty space within the folder, select Create -> Animation. This creates a new animation clip entity. Let's call it “CameraFlight”.
Connect the CameraFlight animation to our Camera Setup object by selecting Camera Setup in the Hierarchy Window and then dragging the CameraFlight animation from the Assets Window to the free area with the “Add Component” button in the Inspector Window (you have seen examples of adding elements and components to GameObjects using this method in previous videos). After doing so, the Inspector Window should show the newly attached Animator component as in the screenshot below. In addition, a new Animator object called "Camera Setup.controller" will automatically appear in the Assets window.
During the next steps, make sure that "Camera Setup" remains selected in the Hierarchy Window. Go to "Window -> Animation -> Animation” in the menu bar to open the Animation Window which is the main tool for creating an animation.
Let us first make a few adjustments to the Animation Window to make things easier. At the top right, you can see the timeline for the animation. 1:00 means one second into the animation. You need to have the mouse cursor in the lower right part of the window and then turn the mouse wheel to adjust the timeline. Our animation should probably be around 1 minute long. So set it up so that the timeline goes until at least 60:00 seconds (= 1 minute). It is a good idea to make the Animation Window as wide as possible.
Now let’s clearly specify which properties we want to animate in our animation, namely the position and rotation values of our Camera Setup object. You will have to use the "Add Property" button twice to add the Position and Rotation, respectively. Press “Add Property”, then unfold the “Transform” entry, and use the + button to add Position and after that Rotation to the list. The result should look like in the second image below:
The diamond symbols below the timeline are the keyframes, which we will use to define intermediate positions and orientations for the camera along the flight path. As you can see, Unity has automatically created two keyframes, a start frame at 0:00 and another one at 1:00. You can select a keyframe by clicking on the upper diamond (the one that is in the darker gray bar, the selected keyframe is shown in blue) and you can drag it to the left or right to change the time when it will occur. You can delete a key frame by right-clicking the diamond and then selecting "Delete Keys". That's what we want to do here so that we end up with only the start keyframe like in the figure below:
The red record button at the top left determines whether changes will be recorded. While recording is on, any change you make is stored at the position of the vertical white bar in the timeline, which results in a new keyframe being added at that position. When recording is on, the Position and Rotation properties of the Camera Setup will also turn red. Left-click at 10:00 in the timeline to move the white vertical bar to that position.
Now let's create a second keyframe at 10:00 and record it. Press the record button (the red circle in the Animation window) and then use the move or rotate tools to change the position and rotation of the Camera Setup GameObject in the scene. Note that all the changes you make to the position and rotation of the camera will play out over 10 seconds, as this is the time we have specified for this keyframe. Once you are happy with the changes, stop the recording.
Repeat the same process to add another keyframe every 10 seconds and make another recording.
You can use the perspective gizmo at the top right of the Scene Window to quickly switch between top-down and side views of the scene, which can be very helpful here.
You can now click the Play button in the Animation window (not the main Play button at the top!) and Unity will show you the animation between the first and second keyframe in the Game Window. Press the button again to stop the animation.
Before continuing, make again sure that you have Camera Setup selected in the Hierarchy and that recording in the Animation window is turned on (record button with a red dot is highlighted).
Now continue with this to build a camera animation of 60 seconds with 7 keyframes in total, one every 10 seconds. For each keyframe, move the view to the next corner of the building. The last keyframe at 60:00 should be roughly identical to the very first one, meaning you should move "Camera Setup" away from the building again.
The final animation should look similar to the one in this 360° video [152] (producing such a 360° video from the animation is described later in this lesson). Creating such an animation can be a bit tricky at first, but you will get used to it quickly. Don't worry if the orbit around the building isn't perfect; that is expected because we are only using very few intermediate keyframes here. You can try to improve on this in your homework assignment. If something goes completely wrong, you can always delete the last few keyframes using "right-click -> Delete Keys". Here are a few additional hints that you may find helpful:
Once you are done with your animation, you can close the Animation window, maximize the Game Window, and look at your camera flight in full size. Just press the main Play button at the top or CTRL+P.
Check whether you are happy with your camera flight as it appears in the Game Window. If not, you can revise it by selecting Camera Setups and opening the Animation window again under "Window -> Animation".
You can now go ahead and build a stand-alone Windows application as explained in Lesson 7. Our next goal, however, will be a little different. We want to be able to see the camera flight in 3D and be able to actually look around. One option would be to turn this project into a VR application, for instance for the Oculus Rift or HTC Vive. But since most of us likely do not have access to an expensive VR device, what we are going to do instead is use the Google Cardboard for our "VR application". The approach we are taking is to record a 360° movie of our animation, make it available on YouTube, and use YouTube's Cardboard support available on most smartphones to turn the video into a VR application.
You already learned about building stand-alone 3D applications for Windows with Unity. Creating stand-alone software that allows for experiencing the application in real 3D with the help of a VR head-mounted display connected to the Windows computer is not much different. While you probably won't be able to try this out yourself (unless you own a device such as the Oculus Rift or HTC Vive), Unity VR Overview [153] provides a good overview on VR support in Unity and building VR stand-alone applications. Additional information can be found at Unity Learn [154].
Similarly, you can build stand-alone applications for mobile platforms such as Android or iOS and even have the built app directly pushed to your mobile device. This requires additional software to be installed on your computer, such as the Android SDK if you are building for an Android phone or tablet. However, despite a lot of progress in hardware and, in particular, graphics performance on modern mobile devices, smoothly rendering 3D scenes in high quality on a mobile phone, for instance, can still be problematic. This is particularly true if the scene needs to be rendered for the left and right eye separately to support a VR device like the Google Cardboard. We will, therefore, take a different approach in the following walkthrough and instead use Unity to produce a 360° video that can be shared, for instance via YouTube, and then be experienced in real 3D on pretty much any(!) smartphone with the Google Cardboard. While this approach is limited in that no interaction between the user and the objects in the scene is possible, advantages of the approach in addition to being less dependent on high-performance hardware are that we do not need to produce different builds for different mobile platforms and no additional software needs to be installed. If you, however, are interested in exploring how to build stand-alone (VR) applications for Android devices, the Android environment setup instructions [155] and the video below are good starting points.
To be able to create a 360° video of the camera flight from the previous walkthrough we need a few things:
Let us start by importing the Unity extension for taking 360° snapshots.
Import the Unity package (linked on the previous page) into your "MyFirstUnityProject" project. When done, a new folder called "360 Capture" will be added to your project folder structure. This folder contains a script and three Render Texture objects (one for each eye and one that merges them together). The script captures the camera output and renders it to these textures. To set up our scene for capturing, add the script "CreateStereoCubemaps" to the Main Camera object (i.e., the child of Camera Setup) by dragging the script onto it.
You will notice that the Inspector panel of the Main Camera GameObject now has three empty fields: "Cubemap Left", "Cubemap Right", and "Equirect". We will need to populate these fields with the textures in the "360 Capture" folder. Drag each texture to its corresponding placeholder in the Inspector panel of the Main Camera as shown in the figure below:
Next, double-click on the "CreateStereoCubemaps" script to open it in Visual Studio. In the Start function of this script (as shown in the figure below), you can see that we have instructed Unity to capture 25 frames per second.
You do not need to study the rest of the script (unless you are interested in doing so), but to give you a general idea, we are capturing 25 monoscopic frames per second of the entire scene and saving each frame as a PNG file somewhere on our drive. Since the path where the images are to be saved is hard-coded, you need to modify it to point to a location where you want the monoscopic images to be saved. On line 67, we have indicated this path (the orange text between the double quotes) along with a comment that tells you how to modify it (make sure you save the file after you have modified the path).
One easy way to obtain this path is to create a folder somewhere on your hard drive (you will need at least 1.5 GB of free space). Inside the folder, click on the address bar at the top and copy the path.
Make sure that every "\" in the path you copied is replaced with "\\" when you paste it in place of the orange text between the double quotes.
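If you are curious how such a capture script can work in principle, the following is a minimal, hypothetical sketch, not the actual CreateStereoCubemaps script from the course package (which, among other things, also fills the left and right eye cubemaps). All class, field, and path names here are assumptions you would adapt; the sketch only illustrates the general idea of a fixed capture rate, rendering each frame into a cubemap, flattening it to an equirectangular image, and saving numbered PNG files.

```csharp
using System.IO;
using UnityEngine;

// Hypothetical, simplified sketch of a 360° frame capture script.
// The course's CreateStereoCubemaps script differs in detail.
public class Capture360Sketch : MonoBehaviour
{
    public RenderTexture cubemap;   // render texture with dimension "Cube"
    public RenderTexture equirect;  // 2D render texture for the equirectangular output
    public string outputFolder = "C:\\Temp\\CaptureFrames"; // assumed path; note the doubled backslashes

    int frame;

    void Start()
    {
        // Advance game time in fixed 1/25 s steps so exactly 25 frames
        // are captured per second of animation time, even on slow machines.
        Time.captureFramerate = 25;
        Directory.CreateDirectory(outputFolder);
    }

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();

        // Render the scene into the cubemap, then flatten it to an equirectangular image.
        cam.RenderToCubemap(cubemap, 63, Camera.MonoOrStereoscopicEye.Mono);
        cubemap.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Mono);

        // Read the result back to the CPU and save it as a numbered PNG
        // (Image_100000.png, Image_100001.png, ... to match the ffmpeg pattern used later).
        var tex = new Texture2D(equirect.width, equirect.height, TextureFormat.RGB24, false);
        RenderTexture.active = equirect;
        tex.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
        RenderTexture.active = null;
        File.WriteAllBytes(Path.Combine(outputFolder, "Image_" + (100000 + frame).ToString("D6") + ".png"),
                           tex.EncodeToPNG());
        Destroy(tex);
        frame++;
    }
}
```

The key points for the walkthrough above are simply that the capture rate is fixed at 25 frames per second and that the output folder path is written in escaped form with "\\".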
Now if you play the scene, depending on the specifications of your computer system, the scene will perform differently. Since we are forcing Unity to capture 25 frames per second, the camera flight animation we’ve just created might not run very smoothly on older computers. If you play your scene now and notice lots of lagging, do NOT be alarmed. Let the camera flight animation finish (it could take a while on slower computers so be patient!), and then stop the scene from playing.
Now, if you navigate to the folder where you instructed the script to save the PNG files, you will see that it is populated with around 1,400-1,500 images.
We now want to produce an MP4 movie from the individual 360° images. We will use the free command-line tool ffmpeg for this.
You can download a Windows build of ffmpeg [104].
Extract the downloaded .zip and you will get a folder with a bunch of files in it (similar to the image below). Make sure there are not two folders with the same name nested inside each other. The main executable, ffmpeg.exe, is located in the bin subfolder. We will run this .exe file directly; no installation is needed.
To be able to use ffmpeg.exe as a command-line tool, let's open a command prompt. Click on the Windows Start button, search for "cmd", and open the Command Prompt. You then need to navigate to the location where you extracted the contents of the zip file. This is a very simple process.
For example, I have extracted the contents of the zip file (i.e., the folder named ffmpeg-20210331-6ef5d8ca86-win64-static) to my desktop. By default, the command prompt starts at the home folder of the current user of the OS (e.g., C:\Users\Pejman>). To get to the folder on the desktop, type:
cd desktop
and then press enter.
This simple command navigates the command prompt to the Desktop folder of your computer. Using the same approach, we now need to enter the "ffmpeg-20210331-6ef5d8ca86-win64-static" folder. Instead of typing the complete name of this folder, which is rather long, you can simply type cd followed by the first few letters of the folder name (i.e., ffmpe...) and then press the Tab key on your keyboard. This will autocomplete the name based on what can be found on the desktop. The next step is to go into the bin folder using the same method. At the end of this process, you should be at this location within the command prompt:
For more help, please visit Digital Citizen [157].
The command that you need to type in now to have ffmpeg create a movie is given below, but you will have to make some small changes to adapt it to your system: the highlighted part needs to be replaced with the path to the folder where your images are saved. (If you copy & paste the command from this web page into the command prompt rather than typing it yourself, please check that the double quotes appear as straight double quotes, not curly ones, and you may also have to retype hyphens and underscores. The safest approach is to type the complete command yourself, or paste it into Notepad first and copy it from there.) The last part of the path, namely "Image_%06d.png", is a filename pattern that, together with the -start_number option, tells ffmpeg how the images in this folder are numbered; the program will iterate through all images matching this pattern. You do NOT need to change this last part.
ffmpeg.exe -r 25 -start_number 100000 -i "C:\Users\jow\Documents\UnityProjects\MyFirstUnityProject\LSCaptureFiles\Image_%06d.png" -vcodec libx264 -crf 15 -pix_fmt yuv420p myfirstunityproject.mp4
In this command, we tell ffmpeg that the produced video should use 25 frames per second, where the images are located and the pattern by which they are named, which video codec to use, etc. The final parameter is the name of the output file (myfirstunityproject.mp4) that will be generated. When you execute the modified command (by pressing Enter), ffmpeg should start to process the images and create the MP4 video. Remember that there are about 1,400 frames, so this can take a moment.
When ffmpeg is done, there should now be a file called myfirstunityproject.mp4 in the bin subfolder of ffmpeg. You can move this file, for instance into your Documents folder.
Now run the video and have a look. If your standard video player is not able to display the video, we recommend that you go to VideoLan [158] to download and install VLC, a great and free video player, and then watch the video with this program.
Now that we have the video, our goal is to upload it to a private YouTube channel and watch it with the Cardboard on our smartphone. However, we first have to use yet another tool to adapt some of the metadata in the movie file so that YouTube will correctly recognize it as a 360° movie. We will follow Google's Upload 360-degree videos [159] instructions.
The final steps of this walkthrough will require that you have a YouTube account and channel. That doesn’t mean that the 360° videos produced in this course will be publicly available for everyone. It is possible to use your channel only to share these with particular people, like everyone in this class. If you do not have a YouTube account/channel yet, please follow the Create an account on YouTube [161] instructions to create these.
You should now be on the main page for your YouTube channel. Click on the "Upload" link at the top to upload a new video. Set the field that determines who can watch the movie to "Private" for now. When you later want to share a video with the class, you can instead set this to "Unlisted" and share the link to the video. Unlisted means that everyone with the link can watch the video, but it won't be visible to anyone else as part of search results on YouTube. Now, drag the video file myfirstunityproject360.mp4 from Windows Explorer onto the upload icon. The upload will start immediately. After the upload is completed, YouTube will start processing the file. After this has finished as well, you can click "Done".
The video should now be listed in the YouTube Studio [162] under the videos tab.
Clicking on it should take you to a page as shown below, where you can play your video, or open its link in a different tab.
You should now be able to watch this video on YouTube like any other video. You should, in addition, be able to look around during the camera flight either by dragging with the left mouse button pressed or by using the arrows in the top left corner of the video. By clicking the Settings icon at the bottom, you may be able to increase the resolution at which the video is displayed unless it is already displayed at the highest available resolution.
If you play this video on your phone, you will get the option to see it in VR mode when you switch to full screen. If you select that option, you can put your phone in a Cardboard and enjoy your first VR creation.
You can check out other VR videos on YouTube at YouTube Virtual Reality [163].
Note: Make sure your YouTube app is up to date!
Please reflect on your 3D spatial analysis experience using ArcGIS Pro: anything you learned or any problems that you faced while doing your assignment, as well as anything that you explored on your own and added to your 3D spatial analysis experience.
The homework assignment for this lesson is to apply the techniques you learned in this lesson to the historic campus scene that you already saw in the previous lesson (it should already be in your "MyFirstUnityProject" project under the "Historic Campus" folder). You are supposed to create a nice camera animation and produce a stand-alone Windows application as well as a 360° movie. The video should be uploaded to your YouTube channel and shared, together with your Windows application, with the rest of the class by posting the links and a short description on the forums. Finally, you are supposed to provide constructive feedback on a video created by one of your classmates and submit a write-up reflecting on this assignment.
The assignment has several tasks and deliverables with different submission deadlines, all in week 10, as detailed below. Successful delivery of tasks 1-6 is sufficient to earn 90% on this assignment. The remaining 10% is reserved for efforts that go "above & beyond" by improving the historic campus model, e.g., in ways suggested below, before creating the video and stand-alone Windows application.
For this assignment, you will use the historic campus scene from Lesson 8. We suggest you start with a fresh download of the Unity project: Historic Campus [164]. The campus models have already been imported and placed in a scene (main), and things should look like the screenshot below when you open the project in Unity. You are now supposed to create a ~1.5-minute camera flight over the scene using what you learned about animating the camera in this lesson. But before you start with the camera flight, you may want to think about what you want to do for "above & beyond" efforts (see details below), as this may have an effect on the camera trajectory you want to create (e.g., in case you are going to add additional models to the scene).
Produce a stand-alone Windows application showing your camera animation as you learned in this lesson. Please note that if you put in work for above & beyond points, this needs to be done before creating the Windows application so that these efforts are included.
Deliverable 1: Create a .zip file containing all the files and folders of your build.
Upload the application to the PSU Box Community Folder [165] and include the link to your application when posting on the Lesson 8 Video Discussion (see Task 4).
Produce a 360° video showing your camera animation as you learned in this lesson. Don't forget to adapt the meta information before uploading the video to your personal YouTube channel. Please note that if you put in work for above & beyond points that should be visible in the video, this needs to be done before creating the video so that these efforts are included.
Deliverable 2: Upload the 360° video to your YouTube channel and share it via a link. Include the link when posting on the Lesson 8 Video Discussion (see Task 4).
Deliverable 3: Create a post on the Lesson 8 Video Discussion including the links for your Windows stand-alone application & 360° video. Add a one-paragraph description of what you did to improve the model for above & beyond points, if anything. If your above & beyond efforts are only visible/audible in either the 360° video or the stand-alone application, please make this clear as well.
Watch a video posted by one of your classmates and write down 1-2 paragraphs in which you provide constructive feedback. If possible, please pick a video that has not yet received any feedback. Comment on the technical and aesthetic quality of the video and, if applicable, the extensions made to the model. Make suggestions on how the work can be improved or extended. If the description mentions that some additional effort is only included in the Windows stand-alone application, you can feel free to ignore that part.
Deliverable 4: Post your feedback as a reply to the respective video post on Lesson 8 Video Discussion.
Create a ~700-word write-up reflecting on what you learned in this assignment, what problems you ran into, if and how you were able to resolve these, etc. Please also include a complete description of things you did for above & beyond efforts. Provide some final assessment of your experience with Unity.
Deliverable 5: Submit your write-up to the Lesson 8 Assignment.
This part of the assignment allows you to let your creativity run free to extend or improve the historic campus scene and/or to explore some features of Unity development on your own based on one of the many tutorials available on the web. What you do is up to you; the following are just two suggestions for directions in which you can think:
If you are looking for some relaxing background music, you can browse freesound [168] for a suitable track that is freely available. Alternatively, or in addition to using music, you can record your own commentary describing either the scene or what you did for this assignment and have this played during the camera flight. If you experience any problems with adding sound, feel free to help each other out in the Lesson 8 Discussion.
Helpful advice:
We highly recommend that, while working on the different tasks for this assignment, you frequently create backup copies of your Unity project folder so that you can go back to a previous version of your project if something should go wrong.
For the camera animation, we recommend using many intermediate keyframes unless the camera is just moving in a straight line. In particular, try not to have a rotation of more than 90° between two successive keyframes in the animation.
Criteria | Full Credit | Half Credit | No Credit | Possible Points
---|---|---|---|---
Camera flight has the correct length, is smooth and aesthetically pleasing. | 3 pts | 1.5 pts | 0 pts | 3 pts
Windows stand-alone application is shared via Box and shows the camera flight. | 3 pts | 1.5 pts | 0 pts | 3 pts
Video is shared via YouTube and shows the camera flight. | 3 pts | 1.5 pts | 0 pts | 3 pts
Video has been correctly created and uploaded as a 360° video, allowing the viewer to look around while the camera is moving over the scene. | 3 pts | 1.5 pts | 0 pts | 3 pts
Above & Beyond: The number of points varies depending on how much effort was required (e.g., placing an additional 3rd-party 3D model in the scene will give fewer points than placing a self-made model or adding background music to the scene). | 3 pts | 1.5 pts | 0 pts | 3 pts
Total Points: 15
Criteria | Full Credit | Half Credit | No Credit | Possible Points
---|---|---|---|---
Paper clearly communicates what you learned in this assignment, what problems you ran into, if and how you were able to resolve these, etc. Paper also includes a complete description of things you did for above & beyond efforts and a final assessment of your experience with Unity. | 5 pts | 2.5 pts | 0 pts | 5 pts
Write-up is of the correct length and is well and thoughtfully written. | 5 pts | 2.5 pts | 0 pts | 5 pts
The document is grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pt | 0 pts | 2 pts
Total Points: 12
In this lesson, you significantly extended your Unity skills by learning how to add a user-controlled avatar and enable colliders for the objects in the scene, and how to create a camera animation with the Animation View. We also spoke about the different ways to create VR experiences for different devices with Unity, including using 360° videos for the Google Cardboard as an efficient option for devices that may not have the hardware performance to run VR applications in real-time. We have now come full circle with regard to the 3D and VR application building workflows we discussed in Lesson 2. We hope that these last two lessons focusing on Unity and the skills you acquired in them will prove to be a helpful complement to your 3D modeling and analysis repertoire.
You have reached the end of Lesson 8! Double-check the to-do list on the Lesson 8 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 9.
This will be both a practical and theoretical lesson that shows you a complete Unity project with some advanced functionalities. You will be provided with a simple "City Builder" Unity game in which you can instantiate buildings in a city and animate cars to patrol over a specified path. The objective is not to teach you how to program this game, as that would be outside the scope of this course, but rather to demonstrate what can be done in Unity with a few classes and some built-in features of the game engine. Furthermore, we will explore some of the VR-specific mechanics that are used in game engines such as Unity to make experiences more interactive (e.g., different types of locomotion and interaction with objects).
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Download the City Builder Unity Package [104]. The package includes all the resources used in the City Builder game. This is a very simple game, where the user can select different buildings from a menu and place them at specific locations in a city (only some areas allow for placing a building) using the mouse. When a building is selected to be placed in the city, its base will change color. A red color indicates that the current location of the building is illegal (and therefore it cannot be placed there), whereas a blue color indicates a legal location. The color of the base will continuously change as the user moves the building around the city to find a location to place it. Once the building is in a legal location, the user can left-click to place the building in the city. If the user clicks on an already placed building, the base of the building will become purple, indicating that it has been selected. Once selected, buildings can be removed from the city by pressing the "D" key. Moreover, the user can move the camera along the X and Z axes using the arrow keys on the keyboard (left, right, up, and down keys). This helps the user navigate the environment when placing buildings. Lastly, there is a car that patrols a street over a specified path. A quick video of this game can be seen below:
Import this package into a new Unity project. Once the import is done, navigate to the folder “City Builder”, and open the scene with the same name “City builder”. Make sure your “maximize on play” option in your game window is enabled, and then play the scene.
When the scene is playing, you can see a blue car patrolling over a specified path. You can use the arrow keys on your keyboard (left, right, up, down keys) to move the camera along the X and Z axes. If you click on the buttons of the menu on the left side of the screen, you can select different buildings to place in the city. Select a building, move it over a legal position, and left-click with your mouse to place it down. Place a few more buildings in the city. Click on a building that is already in the city; you will notice that its base will change color to purple. Press the "D" key on your keyboard to delete this building.
To give you an idea of how such a scene can be set up, we will go through a series of steps to recreate the City Builder scene. All the resources you need are already inside the “City Builder” folder.
A new panel called Navigation will be opened on the right side of the editor. Select the “Bake” tab on this panel and click on the Bake button.
After a few seconds your scene should look like this:
Navigate to the Prefabs folder again, drag the "Waypoints" prefab onto the scene, and set its position to (62.5, 0, -70.2). This GameObject has four children, each of which acts as a patrol point for our Car AI script. You can inspect the position of each of these points in the scene. Now, if we select our Car GameObject again and look at its properties in the Inspector, we can add patrol points to the Car AI script. Simply drag each waypoint (the children of the Waypoints object, named Waypoint 1 to 4) onto the corresponding element (0 to 3) in the Car AI script.
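To give you a rough idea of what a patrol script like the Car AI script does with these waypoints, here is a minimal, hypothetical sketch using Unity's NavMeshAgent (which is why we baked the NavMesh earlier). The class and field names are assumptions, and the actual script in the package may differ in detail.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Minimal sketch of a waypoint patrol: the NavMeshAgent drives the car toward the
// current waypoint and, once it is close enough, switches to the next one, looping forever.
public class PatrolSketch : MonoBehaviour
{
    public Transform[] waypoints;   // assign Waypoint 1..4 here in the Inspector
    int current;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        if (waypoints.Length > 0)
            agent.SetDestination(waypoints[current].position);
    }

    void Update()
    {
        if (waypoints.Length == 0 || agent.pathPending)
            return;

        // Close enough to the current waypoint? Move on to the next one.
        if (agent.remainingDistance < 0.5f)
        {
            current = (current + 1) % waypoints.Length;
            agent.SetDestination(waypoints[current].position);
        }
    }
}
```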
Inside the City Builder folder, navigate to the “Scripts” subfolder. Drag the “BuildingManager”, “BuildingPlacement”, and “MoveCamera” scripts onto the Main Camera GameObject in your scene.
Most of our functionality is covered by these three scripts, so we will briefly go over them. The "Building Manager" script takes care of generating the menu on the left side of the screen and adding one button to it for each building we want to have in our menu. You can see that this script has a property called "Buildings" in the editor. If you expand this option, you will see a placeholder for "Size". Change this value to 3. This means that we want our menu to hold three buttons, each representing a different building, so we can select them and place them in the city. When you change the value to 3, you will notice that three new empty placeholders are shown. Go to the Prefabs subfolder again, and drag the Blue, Brown, and Green building prefabs to each of those placeholders.
Now, for each building we have added to this list, a button with the same name as the building prefab will be generated for us. Clicking a button informs the "Building Placement" script which building to generate and attach to the mouse pointer. For more detail, please examine the fully commented "Building Manager" script.
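If you are curious, a rough, hypothetical sketch of this menu-generation idea could look as follows. It uses Unity's legacy UI Button and Text components; all field names are assumptions, and the actual Building Manager script is more complete (it hands the selection over to the Building Placement script).

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: generate one menu button per building prefab, labeled with the prefab's name.
public class BuildingMenuSketch : MonoBehaviour
{
    public GameObject[] buildings;  // the "Buildings" list you fill in the Inspector (size 3)
    public Button buttonTemplate;   // assumed UI button prefab used as a template
    public Transform menuPanel;     // assumed parent panel on the left side of the screen

    GameObject selectedPrefab;      // prefab that the next valid click in the city would instantiate

    void Start()
    {
        foreach (GameObject prefab in buildings)
        {
            // Create a button, label it, and remember which prefab was chosen when it is clicked.
            Button button = Instantiate(buttonTemplate, menuPanel);
            button.GetComponentInChildren<Text>().text = prefab.name;
            button.onClick.AddListener(() => selectedPrefab = prefab);
        }
    }
}
```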
Next, we will examine the “Building Placement” script. There are three properties for this script in the editor that we need to change. First, we need to add different materials for the “base” of the buildings to indicate illegal and legal positions of the building by means of color. Navigate to the “Materials” subfolder and drag the “Valid” material to its corresponding placeholder in the inspector panel. Do the same for the “Invalid”. Once the “Building Manager” script informs the “Building Placement” script which building is requested to be generated, the “Building Placement” script will instantiate (create a copy on the fly) that building and will attach it to the position of the mouse pointer. Furthermore, it will use the valid and invalid materials we’ve just added to the component, so it can change the color of the base of the buildings to red when they collide with buildings and streets (i.e. invalid), or otherwise blue (i.e. valid).
The "Building Placement" script also monitors the user's mouse clicks. If a building is attached to the mouse pointer, the position is legal, and the user clicks, it will place the building at that exact location and disable its base (so it looks realistic when placed in the city). In addition, we also want the user to be able to click on a building that is already placed in the city so they can remove it. This means that our "Building Placement" script implements a second check to see whether no building is attached to the mouse pointer at the moment the user clicks the left mouse button. If that is the case, it also checks whether what the user has clicked on is actually a building or not. For this check, our script uses the "Building Mask" property, as shown in the editor. All the building prefabs we have added to the "Building Manager" script already belong to the "Building" layer. All we need to do is tell Unity which layer to look for when the user clicks on objects in the city. As such, we need to create a new layer and set that as our Building Mask. For this, click on the "Layer" dropdown menu at the top of this panel, and select "Add Layer".
Create a new layer called "Building" (it may already exist in your layer list; if not, create it) and set the Building Mask to that layer.
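As a minimal, hypothetical illustration of the layer-mask check described above (the actual Building Placement script wraps it in additional logic), a click-selection test in Unity can look roughly like this:

```csharp
using UnityEngine;

// Minimal sketch of the "did the player click on an existing building?" check.
public class ClickSelectSketch : MonoBehaviour
{
    public LayerMask buildingMask;   // set this to the "Building" layer in the Inspector

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the mouse position and only
            // accept hits on objects that are on the Building layer.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit, 1000f, buildingMask))
            {
                Debug.Log("Clicked building: " + hit.collider.gameObject.name);
                // The real script would highlight the base (purple material)
                // and remember the selection so "D" can delete it.
            }
        }
    }
}
```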
The last property we need to set in the Unity editor is related to the "Move Camera" script. We need to indicate a speed value for the movement of the camera, so we can move it using the arrow keys. Set this value to 15.
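For reference, a minimal sketch of what such an arrow-key camera movement script typically does is shown below. The provided Move Camera script may differ in detail, but the speed field corresponds to the value you just set.

```csharp
using UnityEngine;

// Minimal sketch of arrow-key camera movement along the world X and Z axes.
public class MoveCameraSketch : MonoBehaviour
{
    public float speed = 15f;   // the speed value set in the Inspector

    void Update()
    {
        // "Horizontal" maps to the left/right arrows, "Vertical" to the up/down arrows.
        float x = Input.GetAxis("Horizontal");
        float z = Input.GetAxis("Vertical");

        // Move along the world X and Z axes, frame-rate independent.
        transform.Translate(new Vector3(x, 0f, z) * speed * Time.deltaTime, Space.World);
    }
}
```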
That is all you need to do in order to set up a fully working City Builder scene with the assets that were provided to you. Press Play and enjoy your creation. You should be able to move the camera around, place buildings in designated areas of the city, and remove them. You may notice that we have not covered which script handles the color change of the base of buildings when they are selected, or the function for deleting them. This is because each of the building prefabs we added to the "Building Manager" script has a script called "PlacableBuilding" that takes care of these functions, and we do not need to set anything up for them to work. In short, once a building that is already placed in the city is clicked on, the "Building Placement" script detects which exact building was clicked and triggers the "PlacableBuilding" script attached to it. This script will then change the base color of that building to purple, and, if the user presses the "D" key, remove that building from the scene.
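A stripped-down, hypothetical version of that per-building behavior could look like the sketch below; the actual PlacableBuilding script in the package differs in detail, and the field and method names here are assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch of the selection/deletion behavior of a placed building.
public class PlacableBuildingSketch : MonoBehaviour
{
    public Renderer baseRenderer;      // renderer of the building's base plate
    public Material selectedMaterial;  // purple material shown while the building is selected

    bool selected;

    // Assumed to be called by the placement script when this building is clicked.
    public void Select()
    {
        selected = true;
        baseRenderer.material = selectedMaterial;
    }

    void Update()
    {
        // While selected, pressing D removes this building from the scene.
        if (selected && Input.GetKeyDown(KeyCode.D))
            Destroy(gameObject);
    }
}
```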
You can also watch this complete video tutorial on how to assemble this scene.
Note: Although we did not directly go over the scripts line by line, I strongly recommend that you look at the scripts inside the “Scripts” subfolder to have a better understanding of how some of these functions are implemented in C#. All the scripts are fully commented for your convenience.
Most of the mechanics used in desktop experiences can also be used in VR. However, not all of them are good choices for VR experiences. Two of the most prominent examples are locomotion and interaction mechanics. In this section, we will briefly explore the different locomotion and interaction mechanics that are designed specifically for VR experiences.
Locomotion can be defined as the ability to move from one place to another. There are many ways in which locomotion can be implemented in games and other virtual experiences. Depending on the employed camera perspective and movement mechanics, the users can move their viewpoint within the virtual space in different ways. Obviously, there are fundamental differences in locomotion possibilities when comparing 2D, 2.5D, and 3D experiences. Even within the category of 3D experiences, locomotion can take many different forms. To give you a very general comparison, consider the following:
Locomotion in 2D games is limited to the confines of the 2D space (X and Y axes). The camera used in 2D games employs an orthogonal projection; therefore, the game space is seen as "flat". In these experiences, users can move a character using mechanics such as "point and click" (as is the case in most 2D real-time strategy games) or using keystrokes on a keyboard or other types of controllers. The movements of the orthogonal camera in these experiences are also limited to the confines of the 2D space. Consequently, the users will not experience the illusion of perceiving the game through the eyes of the character (e.g., Cuphead [170], Super Mario Bros. [171], etc.). Evidently, the same type of locomotion can be employed in 3D space as well. For instance, in the City Builder game we have seen in this lesson, the camera uses a perspective projection, but the locomotion of the user is limited to the two axes X and Z. The three-dimensional perspective of the camera (the viewpoint of the user), however, creates an illusion of depth and a feeling of existing "in the sky". In more sophisticated 3D games, such as first-person shooters (FPS [172]), where it is desired that the players experience the game through the eyes of their character, the feeling of movement is entirely different. We stress the word "feeling" since the logic behind translating (moving) an object from one point to another is not that different in 2D and 3D space. However, the differences in the resulting feeling of camera movement are vast (an unattached, distant orthogonal projection vs. a perspective projection through the eyes of the character). In many modern games (not necessarily shooters) with a first-person camera perspective, the players can be given six degrees of freedom for movement and rotation.
The FPS Controller we used in the previous lessons is an example of providing such freedom (except for rotation along the Z-axis). The movement mechanics in such games, however, almost always rely on a smooth transition from one point to another. For instance, in the two different locomotion mechanics we used in this course (the FPS Controller and the camera movement), you have seen that we gradually increase or decrease the Position and Rotation properties of the Transform component attached to GameObjects. This gradual change of values over time (for as long as we hold down a button, for instance) creates the illusion of smoothly moving from one point to another.
As was previously mentioned, there are many ways in which locomotion can be realized in virtual environments, depending on the type and genre of the experience, and the projection of the used camera. Explaining all the different varieties would be outside the scope of this course. Therefore, we will focus on the one that is most applicable in VR.
The experience of Virtual Reality closely resembles a first-person perspective. This is the most effective way of using VR to create an immersive feeling of perceiving a virtual world from a viewpoint natural to us. It does not come as a surprise that, in the early days of mainstream VR development, many employed the same locomotion techniques used in conventional first-person desktop experiences. Although we can most definitely use locomotion mechanics such as "smooth transition" in VR, the resulting user experience will not be the same. As a matter of fact, doing so will cause a well-known negative effect associated with feelings such as disorientation, eyestrain, dizziness, and even nausea, generally referred to as simulator sickness.
According to Wienrich et al., "Motion sickness usually occurs when a person feels movement but does not necessarily see it. In contrast, simulator sickness can occur without any actual movement of the subject" [1]. One way to interpret this is that simulator sickness is a form of physical-psychological paradox that people experience when they see themselves move in a virtual environment (in this case through VR HMDs) but do not physically feel it. The most widely accepted theory as to why this happens is the "sensory conflict theory" [2]. There are, however, several other theories that try to model or predict simulator sickness (e.g., the poison theory [3], the model of negative reinforcement [4], [5], the eye movement theory [4], [5], and the postural instability theory [6]). Simulator sickness in VR is more severe in cases where users must locomote over a long distance, particularly using smooth transition. As such, different approaches have been researched to reduce this negative experience. One approach suggested by [1] is to include a virtual nose in the experience so the users have a "rest frame" (a static point that does not move) when they put on the HMD.
Other approaches, such as dynamic field of view (FOV) reduction when moving or rotating, have also been shown to be an effective way to reduce simulator sickness.
In addition to these approaches, novel and tailored mechanics for implementing locomotion, specifically in VR, have also been proposed. Here we will list some of the most popular ones:
Other forms of teleportation include adding effects when moving the user's perspective from one location to another (e.g., fading, sounds, seeing a projection of the avatar move, etc.), or providing a preview of the destination point before actually teleporting to that location:
Another interesting and yet different example of teleportation is the "thrown-object teleport", where, instead of pointing at a specific location, the user throws an object (using natural gestures for grabbing and throwing objects in VR, as we will discuss in the next section) and then teleports to the location where the object comes to rest.
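To make the basic point-and-teleport idea concrete, here is a minimal, SDK-agnostic sketch. All names are assumptions, and a real VR implementation would read the controller's ray and buttons instead of the mouse.

```csharp
using UnityEngine;

// Minimal sketch of point-and-teleport: cast a ray, and on confirmation move the
// whole camera rig to the hit point in a single jump (no continuous self-motion).
public class TeleportSketch : MonoBehaviour
{
    public Transform cameraRig;      // assumed parent object holding the camera
    public LayerMask groundMask;     // layers the user is allowed to teleport onto

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit, 100f, groundMask))
            {
                // Jumping instantly avoids the continuous visual flow that
                // contributes to simulator sickness during smooth locomotion.
                cameraRig.position = hit.point;
            }
        }
    }
}
```

The essential difference from smooth locomotion is that the rig position changes in a single jump, so the user never sees sustained self-motion that their body does not feel.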
There are many other locomotion mechanics for VR (e.g. mixing teleportation and smooth movement, run in-place locomotion, re-orientation of the world and teleportation together, etc.) that we did not cover in this section. However, the most popular and widely used ones were briefly mentioned.
[1] C. Wienrich, C. K. Weidner, C. Schatto, D. Obremski, and J. H. Israel, "A Virtual Nose as a Rest-Frame: The Impact on Simulator Sickness and Game Experience," 2018, pp. 1-8.
[2] J. T. Reason and I. J. Brand, Motion Sickness, London: Academic, 1975.
[3] M. Treisman, "Motion Sickness: An Evolutionary Hypothesis," Science, vol. 197, pp. 493-495, 1977.
[4] B. Lewis-Evans, Simulation Sickness and VR-What is it and what can developers and players do to reduce it?
[5] J. J. La Viola, "A Discussion of Cybersickness in Virtual Environments", ACM SIGCHI Bulletin, vol. 32, no. 1, pp. 47-56, 2000.
[6] G. E. Riccio, T. A. Stoffregen, "An ecological theory of motion sickness and postural instability", Ecological Psychology, vol. 3, pp. 195-240, 1991.
Interactions with GameObjects in non-VR environments are limited to conventional modalities and their affordances. In desktop experiences, for instance, interactions are limited to pointing at objects and graphical user interface elements. With certain gaming consoles, however, such as the Nintendo Wii [184] and the Xbox Kinect, users can perform natural gestures to interact with objects in a game or to perform actions.
Interaction with GameObjects in VR can be considerably more natural compared to desktop experiences. The immersive nature of VR HMDs, combined with the possibility of locomotion from a natural point of view, affords an interaction experience that is much closer to real life than any other gaming console or desktop. Thanks to the sensors and controller data in VR headsets, we can constantly track the position, orientation, and intensity of hand movements in VR. As such, users can use natural gestures to interact with different types of objects while perceiving them from a natural viewpoint.
Users can interact with objects by reaching out and grabbing them when they are within reach, or they can grab them from a distance using a pointer. Once an object is grabbed, users can use its physics properties to place it somewhere in the virtual environment, throw it, and even change its scale and rotation.
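As a rough, SDK-agnostic illustration of the proximity-grab idea (VR toolkits such as VRTK provide this functionality out of the box; the button mapping and tag used here are assumptions):

```csharp
using UnityEngine;

// Hypothetical sketch attached to a "hand" object with a trigger collider: while the hand
// overlaps a grabbable object and the grab button is held, the object follows the hand.
public class GrabSketch : MonoBehaviour
{
    Transform held;

    void OnTriggerStay(Collider other)
    {
        // "Fire1" stands in for a controller grip button (assumed mapping).
        if (held == null && Input.GetButton("Fire1") && other.CompareTag("Grabbable"))
        {
            held = other.transform;
            held.SetParent(transform);              // object now moves with the hand
            var rb = held.GetComponent<Rigidbody>();
            if (rb != null) rb.isKinematic = true;  // disable physics while held
        }
    }

    void Update()
    {
        // Releasing the button drops the object and hands it back to the physics engine.
        if (held != null && !Input.GetButton("Fire1"))
        {
            var rb = held.GetComponent<Rigidbody>();
            if (rb != null) rb.isKinematic = false;
            held.SetParent(null);
            held = null;
        }
    }
}
```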
An example of grabbing an object from distance using pointers:
More natural forms of interaction in VR are via gestures. For instance, users can spin a wheel using a circular gesture or pull down a lever using a pull gesture. The popular VR archery game is a prime example of using natural gestures to interact with game objects.
Similar to the bow and arrow, some objects, such as a fire extinguisher, are used with two hands in real life. The same principle can be applied in VR, where the user grabs the capsule with one hand and the hose with the other. An overview of some of these interaction mechanics is demonstrated in the videos below from one of the most popular VR plugins for Unity, called VRTK.
In the videos posted above, you can see that users can naturally interact with different types of objects, by sitting down and pulling out a drawer, flipping switches, etc. The same principle can be applied to graphical user interface (GUI) objects. Users can use natural gestures to collide with different GUI objects such as buttons to interact with them, or they can use pointers to point at a specific GUI element. Users can grab a selector on a slider and move their hand to the left or right to decrease/increase a value, or they can move a text element in the scene.
As inspiration for different interaction mechanics for your assignment, you can watch the following video:
The homework assignment for this lesson is to reflect on the City Builder game and think about different VR specific locomotion and interaction mechanics that can be added to it to turn it into a VR experience. You are required to write a proposal on how you think City Builder can be transformed into a VR experience.
Note that you have absolute freedom to change the conceptual design of City Builder for your new design proposal.
You are free to explore the large amount of content on the web that demos different locomotion and interaction mechanics used in VR games and other VR experiences, and select the ones that you think would make City Builder a good VR experience. For each mechanic that you want to include in City Builder, write a short description of that mechanic (including a screenshot or a video), explain why you think it will be a good fit for City Builder, and, if applicable, state which existing mechanic it will replace. You are required to propose at least two locomotion mechanics (not to be used simultaneously, but interchangeably) and two interaction mechanics. Feel free to change any aspect of City Builder if you think doing so will enrich users' experience in VR. For instance, instead of a top-down camera, you can propose to use a first-person camera, or instead of selecting buildings from a GUI menu, you can propose that users grab buildings from a palette of items they hold in one hand.
Write a two- to three-page report on your proposed changes to the City Builder.
This assignment is due on Tuesday at 11:59 p.m.
Criteria | Full Credit | Half Credit | No Credit | Possible Points
---|---|---|---|---
Explanation of two locomotion mechanics to be used in your proposed design and justification as to why they are good choices | 2 pts | 1 pt | 0 pts | 2 pts
Explanation of two interaction mechanics to be used in your proposed design and justification as to why they are good choices | 2 pts | 1 pt | 0 pts | 2 pts
Write-up is well thought out, researched, organized, and clearly communicates why the student selected the proposed mechanics | 6 pts | 3 pts | 0 pts | 6 pts
Total Points: 10
In this lesson, you experienced what it takes to create a very simple game in Unity. You used some of the more advanced features of Unity, such as AI navigation, and experienced setting up a semi-complex scene in the editor. Furthermore, you were provided with some slightly complex C# scripts that make the City Builder game come alive, and the interrelations among these scripts were explained. In the second part of the lesson, you were introduced to the most common locomotion and interaction mechanics used in VR development. We hope that this lesson has helped to broaden your knowledge of the design and development of virtual experiences in Unity and of the more novel interaction and locomotion mechanics employed in VR experiences these days.
You have reached the end of Lesson 9! Double-check the to-do list on the Lesson 9 Overview page to make sure you have completed the activity listed there.
In the final lesson of this course, we ask you to reflect on the potential of 3D modeling and VR (or AR/ MR) for the geospatial sciences. We have seen in this course that there is a lot of development and advancement in technology taking place that has made 3D modeling more accessible and is pushing virtual reality into the mainstream.
This is not to say that there are no issues left or that there is not a lot of work to be done on all accounts. But, with a consumer market industry behind current developments we are looking into a future of communication, sharing, and understanding environmental information that will be vastly different from our current approaches.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
The course has provided an overview of a number of topics surrounding 3D modeling and analysis and VR. The goal was to give students an overview of the technologies and software products that are available or emerging and how geospatial sciences will change in the near future. Many topics had to be left out, but we would like to show at least two short videos of projects done by our research team.
The reason to include these projects here is that they provide very natural extensions to the topics we have addressed in the course. The first project is the historic campus modeling effort that you engaged with in Lesson 3, using the Armory, and in the Unity lessons (7 and 8). This is an ongoing, self-funded project that lives off contributions largely from students like yourself. The one aspect we would like to focus on here is that the model we created, and that you have explored, can be used and refined in numerous ways. Using a different software tool, LumenRT, we created a more realistic rendering of the historic campus. LumenRT can transform 3D models into real-time 3D immersive scenes called LiveCubes. LiveCubes can be shared with other people and let them navigate the scene without installing the software. LumenRT also automatically adds accurate lighting, shadows, and reflections to the objects in the scene with real-time global illumination. For demonstration purposes, please watch the following video. Keep this video in mind when you think of your audience. While many people of the younger generation are perfectly happy with game-like environments, in our experience, people of advanced age prefer a more realistic representation. While such representations are still challenging for interactive immersive VR experiences, they can be used with current technology, for example, in the form of 360-degree videos or images, which can also be shared via YouTube.
The second example we'd like you to explore is from a project in collaboration with geosciences professor Peter La Femina. Pete is a volcanologist and has a repertoire of fascinating journeys. Sometimes he is in the position to take students along, but this is not always the case. Virtual field trips have always been a dream in geography, but so far they have never fully taken off. We are working on changing the acceptance of virtual field trips by implementing a VR experience that will allow students to explore the inside of a volcano. This is also an example of a workflow as discussed in Lesson 2. Pete took a drone equipped with both LiDAR and photo cameras with him to Iceland and visited the volcano Thrinukagigur. If you are interested, there is a lot more information on the Inside the Volcano [193] website. The volcano is reasonably dormant, so it is safe to fly, for example, a drone into it (you can actually visit the volcano).
Finally, we would like you to read another article that discusses, from a more theoretical perspective, the role that the landscape itself (especially the local landscape someone may be used to) can play as a means of communication. As an example, we have two projects that deal with VR and climate change. Both projects provide an opportunity to think through the options we have to communicate with people about how climate change may change their local environments, as well as what sustainable living can look like. The article is written by Stephen Sheppard and was published in Landscape and Urban Planning. You can download the article Making climate change visible: A critical role for landscape professionals [194] using ScienceDirect, through the Penn State Libraries. You may need to log in using your Penn State Access Account user ID and password when you follow the link.
If you need to search for the article, the complete information is:
Journal title and issue: Landscape and Urban Planning 142 (2015) 95–105
Article title: Making climate change visible: A critical role for landscape professionals
Author: Stephen R.J. Sheppard
There are two deliverables for this lesson.
Deliverable 1: Article Reflection. After reading the article by Sheppard, provide a write-up (500 words max) reflecting on a potential paradigm shift that 3D modeling and VR induce for communicating and understanding environmental change. Guiding questions:
Criteria | Full Marks | Half Marks | No Marks | Possible Points
---|---|---|---|---
Write-up clearly communicates how the local landscape can improve communicating environmental change. | 2 | 1 | 0 | 2
Write-up clearly communicates how 3D modeling and VR can improve communicating environmental change. | 3 | 1.5 | 0 | 3
Write-up is well thought out, organized, contains some theoretical considerations, and clearly communicates why or why not this article is an example of a paradigm shift in the geospatial sciences. | 4 | 2 | 0 | 4
The document is grammatically correct, typo-free, and cited where necessary. | 1 | 0.5 | 0 | 1
Total Points: 10
Deliverable 2: Reflection Paper. Write a reflection on the course (750 words max) discussing your perspective on 3D modeling and VR for the future of the geospatial sciences. Guiding questions:
Criteria | Full Marks | Half Marks | No Marks | Possible Points
---|---|---|---|---
Write-up clearly communicates the student's perspective on the potential and/or pitfalls of 3D modeling and VR for the geospatial sciences. | 4 | 2 | 0 | 4
Write-up clearly identifies key technologies in the area of 3D modeling and VR most important for the geospatial sciences. | 4 | 2 | 0 | 4
Write-up is well thought out, organized, contains some theoretical considerations, and clearly communicates the student's overall assessment. | 6 | 3 | 0 | 6
The document is grammatically correct, typo-free, and cited where necessary. | 1 | 0.5 | 0 | 1
Total Points: 15
Lesson 10 provided a short outlook on how 3D modeling and VR have the potential to change how we communicate environmental change. Discussing Sheppard's article is one example of how researchers have started to add theory to the technological developments that are happening. The video of the historic campus project provided a glimpse of how realistic 3D models can become with off-the-shelf software, opening new possibilities for communicating with a wide audience. The Thrinukagigur example shows that immersive technologies will have an impact on a wide spectrum of geospatial sciences and that we are finally in the position to realize virtual field trips.
You have reached the end of Lesson 10! Double-check the to-do list on the Lesson 10 Overview page to make sure you have completed all of the activities listed there. Congratulations! You have finished this course!
It is essential that you read this entire Syllabus document as well as the material covered in the Course Orientation. Together these serve as our course "contract."
Instructor: Mahda M. Bagher, PhD
Office: Remote
Department of Geography, College of Earth and Mineral Sciences, The Pennsylvania State University
NOTE: I will read and respond to e-mail and discussion forums frequently.
Penn State Online [196] offers online tutoring to World Campus students in math, writing, and some business classes. Tutoring and guided study groups for residential students are available through Penn State Learning [197].
Description: 3D Modeling and VR for the Geospatial Sciences is an introductory-level science course that introduces students to emerging topics at the interface of concepts and tools that capture/sense environmental information (e.g., LiDAR) and concepts/tools that allow for immersive access to geospatial information (e.g., the HTC Vive). The course offers a high-level perspective on the major challenges and opportunities facing the development of current 3D technologies, differences between classic modeling and procedural rule-based modeling, the development of VR technologies, and the role of game engines in the geospatial sciences. Topics that will be covered include an introduction to 3D modeling and 3D sensing technologies, 360-degree cameras, VR apps and tools, workflows for integrating environmental information into VR environments, an introduction to procedural rule modeling, hands-on experience in creating 3D models of the Penn State campus, the creation of a virtual Penn State campus, accessing and exploring a virtual campus in Unity (a game engine), and the export of flyovers to platforms such as YouTube 360.
Prerequisites and concurrent courses: None.
When students successfully complete this course, they will be prepared to:
On average, most students spend 12-15 hours per week working on course assignments. Your workload may be more or less depending on your study habits.
I have worked hard to make this the most effective and convenient educational experience possible. The Internet may still be a novel learning environment for you, but in one sense it is no different from a traditional college class: how much and how well you learn is ultimately up to you. You will succeed if you are diligent about keeping up with the class schedule and if you take advantage of opportunities to communicate with me as well as with your fellow students.
Specific learning objectives for each lesson and project are detailed within each lesson. The class schedule is published under the Calendar tab in Canvas (the course management system used for this course).
Required textbooks: None
Recommended textbooks:
There are numerous books relevant to the course content. Resources are added throughout the course allowing students to follow up on specific topics.
Required materials:
D-scope Pro Google Cardboard Kit with Straps may be purchased online [198].
Online lesson content
All other materials needed for this course are presented online through our course website and in Canvas. In order to access the online materials, you need to have an active Penn State Access Account user ID and password (used to access the online course resources). If you have any questions about obtaining or activating your Penn State Access Account, please contact the World Campus Helpdesk [199].
This course relies on a variety of methods to assess and evaluate student learning, including:
Citation and Reference Style
I personally prefer APA as a citation style, but as long as you are complete and consistent, you may choose any style.
It is important that your work is submitted in the proper format to the appropriate Canvas Assignment or Discussion Forum and by the designated due date. I strongly advise that you not wait until the last minute to complete these assignments—give yourself time to ask questions, think things over, and chat with others. You will learn more, do better...and be happier!
Due dates for all assignments are posted on the mini-syllabus and course calendar in Canvas.
Assignments | Percent of Grade |
---|---|
Practical assignments | 60% |
Reflective writing assignments | 30% |
Participation | 10% |
Total | 100% |
I will use the Canvas gradebook to keep track of your grades. You can see your grades in the Grades page of Canvas. Overall course grades will be determined as follows. Percentages refer to the proportion of all possible points earned.
Letter Grade | Percentages |
---|---|
A | 93 - 100 % |
A- | 90 - 92.9 % |
B+ | 87 - 89.9 % |
B | 83 - 86.9 % |
B- | 80 - 82.9 % |
C+ | 77 - 79.9 % |
C | 70 - 76.9 % |
D | 60 - 69.9 % |
F | < 60 % |
X | Unsatisfactory (student did not participate) |
Curve
I am not planning to curve grades.
Late Policy
I do not accept any "late work." In exceptional circumstances, you should contact me. The earlier you contact me to request a late submission, the better. Requests will be considered on a case-by-case basis. Generally, late assignments will be assessed a penalty of at least 25% and will not be accepted more than one week after the original due date.
Printable Schedule [200]
Below you will find a summary of the primary learning activities for this course and the associated time frames. This course is 15 weeks in length, including an orientation. See our Calendar in Canvas for specific lesson time frames and assignment due dates.
Weekly schedule: Lessons open on Mondays. The close date might vary depending on the difficulty of the lesson. Initial discussion posts are due on Saturday, with all discussion comments due by Tuesday. All other assignments are due on Sundays. I expect you to participate in the online discussion forums, as they count toward your participation grade.
NOTE: See the CANVAS Calendar tab for a full semester calendar of events.
Weekly schedule table (Weeks 1 through 15): the Topics, Readings, and Assignments for each week are listed in the Canvas Calendar.
For this course, we recommend the minimum technical requirements outlined on the World Campus Technical Requirements [203] page, including the requirements listed for same-time, synchronous communications. If you need technical assistance at any point during the course, please contact the IT Service Desk [204] (for World Campus students) or Penn State's IT Help Portal [205] (for students at all other campus locations).
Access to a reliable Internet connection is required for this course. A problem with your Internet access may not be used as an excuse for late, missing, or incomplete coursework. If you experience problems with your Internet connection while working on this course, it is your responsibility to find an alternative Internet access point, such as a public library or Wi-Fi® hotspot.
This site is considered a secure web site, which means that your connection is encrypted. We do, however, link to content that isn't necessarily encrypted. This is called mixed content. By default, mixed content is blocked in Internet Explorer, Firefox, and Chrome. This may result in a blank page or a message saying that only secure content is displayed. Follow the directions on our Technical Requirements [203] page to view the mixed content.
Mahda is a doctoral candidate in Geography, specializing in GIS and immersive technologies. She is part of the team at the Center for Immersive Experiences. Her interest is in facilitating embodied learning through designing immersive learning experiences in VR. She is specifically interested in research related to memory and embodied cognition in the field of education. Her main focus has been on embodiment in the context of geospatial data visualization in VR. She also works on procedural modeling, rebuilding historical data, and integrating geospatial data into VR, including LiDAR data, DEMs, shapefiles, raster data, and point clouds. She has received several awards for academic and research achievement in her graduate research, most notably the William and Amy Easterling Outstanding Graduate Research Assistant Award and the Willard Miller Award (1st place for the Ph.D. proposal). She will be graduating in May 2022.
This assignment has three tasks associated with it. The first task will prepare you to complete the other two successfully.
Explore the models and software websites provided below. Working through the questions will help you prepare for Tasks 2 and 3. By now these models should be familiar to you, but you may want to take some notes to help you organize your thoughts and keep everything straight.
Procedural model created at ChoroPhronesis [28] Lab using ESRI CityEngine [94] and CGA Shape Grammar rules. This model is shared with you using CE WebScene. You can turn layers on and off, zoom, rotate and explore the part of campus modeled.
Note: This interface may take a while to load. Please be patient. To see the full-size interface with additional options click the Model 1 link [93].
Historical campus models (1922) created at the ChoroPhronesis [28] Lab using SketchUp. These models are combined in ESRI CityEngine [94] with trees and a base map generated using procedural rules. This model is shared with you as a flythrough YouTube video.
If you want to see a larger version of the video, click the Model 2 link [97]. [206]
After reviewing the models and taking notes, start a discussion on what you believe the advantages and disadvantages are for each of the approaches. Make sure that you comment on each other's posts to help each other better understand the nuances and details of each one. Remember that participation is part of your final grade, so commenting on multiple peers' posts is highly recommended. Please also make sure to use information from the reading assignment on Shape Grammars.
Post your response to the Lesson 4 Discussion (models) discussion.
Once you have posted your response and read through your peers' comments, synthesize your thoughts with the comments and write a paper of at most one page about the different approaches.
Once you are happy with your paper, upload it to the Lesson 4 Assignment: Reflection Paper.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Clearly names and describes advantages and disadvantages, including time, learning curve, accessibility, applicability, flexibility, etc. | 10 pts | 5 pts | 0 pts | 10 pts |
Names areas of application that are best suited to each of the approaches | 3 pts | 1.5 pts | 0 pts | 3 pts |
The document is grammatically correct, typo-free, and cited where necessary | 2 pts | 1 pt | 0 pts | 2 pts |
Total Points: 15 |
In this module, you will build a 2D map of the University Park campus including buildings, trees, roads, walkways, and a digital elevation model (DEM). You will learn how to convert 2D maps to 3D presentations. Then procedural symbology that we have created for you in CityEngine will be imported and applied to the 3D map to create a realistic scene.
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
This lesson's tasks have four parts.
Please reflect on your 3D modeling experience using ArcGIS Pro: anything you learned, any problems you faced while doing your assignment, and anything you explored on your own that added to your 3D modeling experience.
Please use the Lesson 5 Discussion (Reflection) to post your reflection on the options provided above and reply to at least two of your peers' contributions.
Please remember that active participation is part of your grade for the course.
Your post is due Saturday to give your peers time to comment. All comments are due by Tuesday at 11:59 p.m.
Once you have posted your response to the discussion and read through your peers' comments, write a one-page reflection paper on your experience in 3D modeling, with the following deliverables as proof that you have completed and understood the process of 3D modeling for the campus.
Your assignment is due on Tuesday at 11:59 p.m.
Please submit your completed paper and deliverables to the Lesson 5 Graded Assignments.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Paper clearly communicates the student's experience developing the 3D model in ArcGIS Pro | 5 pts | 2.5 pts | 0 pts | 5 pts |
The paper is well thought out, organized, and contains evidence that the student read and reflected on their peers' comments | 4 pts | 2 pts | 0 pts | 4 pts |
The document is grammatically correct, typo-free, and cited where necessary | 1 pt | 0.5 pts | 0 pts | 1 pt |
Total Points: 10 |
If you followed the directions as you were working through this lesson, you should already have most of the last part done.
If you haven't already submitted your deliverable for this part, please do it now. You can paste your screenshots into one file or zip all of the PDFs and upload that. Task 2 will need to be a PDF, so either way, you will have two files to zip.
Grading Rubric
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 1: Screenshot of all layers symbolized for UP Campus, correct and present | 3 pts | 0 pts | 3 pts |
Task 2: PDF map of Smoothed_DEM Layer with a legend showing the minimum and maximum elevation, correct and present | 5 pts | 0 pts | 5 pts |
Total Points: 8 |
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 3: PDF map of your 3D campus with realistic facades and trees (or simply a screenshot), correct and present | 5 pts | 0 pts | 5 pts |
Task 4: PDF map of your 3D campus with schematic facades and analytical trees (or simply a screenshot), correct and present | 5 pts | 0 pts | 5 pts |
Total Points: 10 |
This assignment has 4 parts, but if you followed the directions as you were working through this lesson, you should already have most of the first part done.
If you haven't already submitted your deliverable for this part, please do it now. You need to turn in all four of them to receive full credit. You can paste your screenshots into one file, or zip all of the PDFs and upload that. Make sure you label all of your tasks as shown below.
Criteria | Full Credit | No Credit | Possible Points |
---|---|---|---|
Task 1: Screenshot of flooded Old Main | 4 pts | 0 pts | 4 pts |
Task 2: Screenshot of Campus with gradual symbology of SunShadow_January1 | 2 pts | 0 pts | 2 pts |
Task 3: Screenshot of ground shadow surface of Nursing Sciences Building | 2 pts | 0 pts | 2 pts |
Task 4: Screenshot of attributes of ground shadow surface of Nursing Sciences Building | 2 pts | 0 pts | 2 pts |
Total Points: 10 |
Please reflect on your 3D spatial analysis experience using ArcGIS Pro: anything you learned, any problems you faced while doing your assignment, and anything you explored on your own that added to your 3D spatial analysis experience.
Please use the Lesson 6 Discussion (Reflection) to post your reflection on the options provided above and reply to at least two of your peers' contributions.
Please remember that active participation is part of your grade for the course.
Your post is due Saturday to give your peers time to comment. All comments are due by Tuesday at 11:59 p.m.
Upload "UniversityParkCampus_Shadow.kmz" to the Lesson 6 Assignment: Google Earth.
Please upload your assignment by Tuesday at 11:59 p.m.
Criteria | Full Credit | Half Credit | No Credit | Total |
---|---|---|---|---|
The KMZ file has all three shadows | 8 pts | 4 pts | 0 pts | 8 pts |
Total points: 8 pts |
Once you have posted your response to the discussion and read through your peers' comments, write a one-page paper reflecting on what you learned from working through the lesson and from the interactions with your peers in the Discussion.
Please upload your paper by Tuesday at 11:59 p.m.
Please submit your completed paper to the Lesson 6 Assignment: Reflection Paper.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Paper clearly communicates the student's experience developing the 3D spatial analysis in ArcGIS Pro. | 5 pts | 2.5 pts | 0 pts | 5 pts |
The paper is well thought out, organized, and contains evidence that the student read and reflected on their peers' comments. | 3 pts | 1.5 pts | 0 pts | 3 pts |
The document is grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pt | 0 pts | 2 pts |
Total Points: 10 |
In this and the next two lessons, you will be working with the Unity3D game engine, referred to simply as Unity throughout the rest of the course. Unity is a rather complex software system for building 3D (as well as 2D) computer games and other 3D applications. Hence, we will only be able to scratch the surface of what can be done with Unity in this course. Nevertheless, you will learn quite a few useful skills for constructing 3D and VR applications that deal with virtual environments in Unity. Furthermore, you will become familiar with C# programming (the main programming language used in this engine) through a series of exercises that guide you through developing your first interactive experience in a game engine.
This lesson will begin with an overview to acquaint you with the most important concepts and the interface of Unity. It will cover basic project management in Unity, the Project Editor interface, manipulation of a 3D scene, importing 3D models and other assets, and building a stand-alone Windows application from a Unity project. Moreover, you will learn how to create a very simple “roll-the-ball” 3D game using C#, which will prepare you for the more advanced interaction mechanics we will implement in the next lessons.
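As a first taste of the kind of C# script you will write, the sketch below shows one common way to move a ball in a roll-the-ball game: read the horizontal and vertical input axes each frame and apply a force to the ball's Rigidbody. This is only a minimal illustration under assumed names (the class BallMover and the speed field are placeholders), not the exact script used in the lesson.

using UnityEngine;

// Minimal sketch of a roll-the-ball movement script (assumed names, not the lesson's exact script).
// Attach this to a sphere that has a Rigidbody component.
public class BallMover : MonoBehaviour
{
    // Force multiplier for the ball movement; tune in the Inspector.
    public float speed = 10f;

    private Rigidbody rb;

    void Start()
    {
        // Cache the Rigidbody attached to the same game object.
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Read the arrow-key / WASD axes defined in Unity's Input Manager.
        float moveHorizontal = Input.GetAxis("Horizontal");
        float moveVertical = Input.GetAxis("Vertical");

        // Push the ball along the ground plane (x/z) so it rolls.
        Vector3 movement = new Vector3(moveHorizontal, 0f, moveVertical);
        rb.AddForce(movement * speed);
    }
}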
By the end of this lesson, you should be able to:
To Read |
|
---|---|
To Do |
|
If you have any questions, please post them to our "General and Technical Questions" discussion (not e-mail). I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Search the web for a 3D (or VR) application/project that has been built with the help of Unity. The only restriction is that it must not be one of the projects linked in the list above and must not be a computer game. Then write a 1-page review describing the application/project itself and what you could figure out about how Unity has been used in the project. Reflect on the reasons why the developers chose Unity for their project. Make sure you include a link to the project's web page in your report.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
URL is provided, the application/project uses Unity, is not already listed on this page, and is not a game. | 2 pts | 1 pt | 0 pts | 2 pts |
The report clearly describes the purpose of the application/project and the application itself. | 3 pts | 1.5 pts | 0 pts | 3 pts |
The report contains thoughtful reflections on the reasons why Unity has been chosen by the developers. | 3 pts | 1.5 pts | 0 pts | 3 pts |
The report is of the correct length, grammatically correct, typo-free, and cited where necessary. | 2 pts | 1 pt | 0 pts | 2 pts |
Total Points: 10 |
To practice Unity project management, scene editing, and project building, we want you to apply the steps from this walkthrough: Using Unity to Build a Stand-Alone Windows Application [207].
Please set up a new Unity project for this assignment. Then follow the steps from the walkthrough to add the building model to the scene, make sure that the lighting source and camera are placed in a reasonable way relative to the building model, and make the camera keyboard controlled. In the end, build a stand-alone Windows application from your project.
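The walkthrough supplies the ExtendedFlycam script for the keyboard control; purely to illustrate what a keyboard-controlled camera looks like in C#, here is a minimal sketch (the name SimpleKeyboardCamera and the moveSpeed field are assumptions, and this is not the walkthrough's script):

using UnityEngine;

// Minimal sketch of a keyboard-controlled camera (assumed names; the walkthrough's ExtendedFlycam script is more capable).
// Attach to the Main Camera and use the arrow keys / WASD in Play mode.
public class SimpleKeyboardCamera : MonoBehaviour
{
    // Movement speed in units per second; adjust in the Inspector.
    public float moveSpeed = 5f;

    void Update()
    {
        // The horizontal axis strafes left/right, the vertical axis moves forward/back.
        float strafe = Input.GetAxis("Horizontal") * moveSpeed * Time.deltaTime;
        float forward = Input.GetAxis("Vertical") * moveSpeed * Time.deltaTime;

        // Move relative to the camera's own orientation.
        transform.Translate(strafe, 0f, forward);
    }
}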
Three screenshots should be taken in Play(!) mode showing the entire Unity editor window. Each screenshot should show the building from a different perspective. The keyboard control of the camera will allow you to navigate the camera around to achieve this.
Upload the zip file of the Windows stand-alone application you built (containing both the .exe file and the ..._Data folder), or a link to the zip file uploaded to a cloud service such as OneDrive or Dropbox.
Criteria | Possible Points |
---|---|
Three screenshots in Play(!) mode showing the building from three different perspectives. | 2 pts |
Light source and camera are placed in a reasonable way relative to the building. | 2 pts |
ExtendedFlycam script for keyboard control has been applied correctly. | 2 pts |
Working Windows stand-alone application. | 4 pts |
Total Points: 10 |
The homework assignment for this lesson is to report on what you have done in this lecture. The assignment has two tasks and deliverables with different submission deadlines, all in week 8, as detailed below. Successful delivery of the two tasks is sufficient to earn the full mark on this assignment. However, for efforts that go "above and beyond" by improving the roll-the-ball game, e.g., creating a more complex play area (such as a maze) instead of the box, and/or making the main camera follow the movement of the ball, you could be awarded one point if you fall short on any of the other criteria.
To help with the last suggested extra functionality, use the following script and attach it to the main camera:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FollowBall : MonoBehaviour
{
    // Reference to the ball game object
    public GameObject Ball;

    // A Vector3 to hold the distance between the camera and the ball
    public Vector3 Distance;

    // Start is called once before the first frame update
    void Start()
    {
        // Assign the ball game object
        Ball = GameObject.Find("Ball");

        // Calculate the distance between the camera and the ball at the beginning of the game
        Distance = gameObject.transform.position - Ball.transform.position;
    }

    // LateUpdate is called after Update finishes
    void LateUpdate()
    {
        // After each frame, set the camera position to the position of the ball plus the original distance between the camera and the ball
        // Replace the ??? with your solution
        gameObject.transform.position = ???;
    }
}
If you follow the detailed instructions provided in this lecture, you should be able to have a fully working roll-the-ball game.
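As a reminder, the rubric below expects coin rotation, collection, and a sound effect. Purely as an illustration of how such a pickup is commonly scripted, here is a minimal sketch (the names CoinPickup, rotationSpeed, and pickupSound are assumptions; follow the lecture instructions for the actual implementation):

using UnityEngine;

// Minimal sketch of a coin that rotates, plays a sound, and disappears when the ball touches it.
// Assumed names; the lecture instructions remain the authoritative version.
// The coin's collider should be marked "Is Trigger" so the ball can pass through it.
public class CoinPickup : MonoBehaviour
{
    // Degrees per second around the vertical axis.
    public float rotationSpeed = 90f;

    // Sound played when the coin is collected.
    public AudioClip pickupSound;

    void Update()
    {
        // Spin the coin so it is easy to spot.
        transform.Rotate(0f, rotationSpeed * Time.deltaTime, 0f);
    }

    void OnTriggerEnter(Collider other)
    {
        // Only react to the ball (assumes the ball game object is tagged "Player").
        if (other.CompareTag("Player"))
        {
            // Play the sound at the coin's position, then remove the coin from the scene.
            if (pickupSound != null)
            {
                AudioSource.PlayClipAtPoint(pickupSound, transform.position);
            }
            Destroy(gameObject);
        }
    }
}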
Deliverable 1: Create a unity package of your game and submit it to the appropriate Lesson 7 Assignment.
You can do this by going to Assets in the top menu and then selecting “Export Package”.
Make sure all the folders and files of your project are checked and included in the package you export (you can test this and import your package in a new Unity project to see if it runs).
If you choose to implement the functionality of having the main camera follow the ball's movement, please explain your solution as a comment in your script. You can add comments (text shown in green) in C# by using “//”.
We highly recommend that, while working on the different tasks for this assignment, you frequently create backup copies of your Unity project folder so that you can go back to a previous version of your project if something should go wrong.
Criteria | Full Credit | Half Credit | No Credit | Possible Points |
---|---|---|---|---|
Your game runs smoothly and as intended, without any errors or strange behavior | 6 pts | 3 pts | 0 pts | 6 pts |
Your project is complete and has all the required functionalities (coin rotation, collection, and sound effect). | 3 pts | 1.5 pts | 0 pts | 3 pts |
Your Unity package has a proper folder structure, and all the files have meaningful names, as was done in the instructions | 1 pt | 0.5 pts | 0 pts | 1 pt |
Total Points: 10 |
Links
[1] https://www.cnbc.com/2016/01/14/virtual-reality-could-become-an-80b-industry-goldman.html
[2] https://www.e-education.psu.edu/geogvr/sites/www.e-education.psu.edu.geogvr/files/Lesson_01/files/esri2014_3D.pdf
[3] http://www.facebook.com/zuck/posts/10101319050523971
[4] https://www.facebook.com/zuck/posts/10101319050523971
[5] https://www.smithsonianmag.com/innovation/how-palmer-luckey-created-oculus-rift-180953049/?no-ist
[6] https://www.oculus.com/
[7] http://www.youtube.com/watch?v=qYfNzhLXYGc?rel=0
[8] https://www.youtube.com/watch?v=qYfNzhLXYGc?rel=0
[9] http://enterprise.vive.com
[10] https://www.samsung.com/us/
[11] https://vr.google.com/cardboard/
[12] https://www.leapmotion.com/
[13] https://pimaxvr.com/
[14] https://varjo.com/
[15] https://www.valvesoftware.com/en/index
[16] http://www.geforce.com/hardware/10series/geforce-gtx-1080
[17] http://commons.wikimedia.org
[18] https://dpntax5jbd3l.cloudfront.net/images/content/1/5/v2/158662/2016-VR-AR-Survey.pdf
[19] https://tfl.gov.uk/maps/track/tube
[20] http://www.mdpi.com/2220-9964/4/4/2842/pdf
[21] https://www.with.in/
[22] https://www.google.com/edu/expeditions/#about
[23] http://www.nytimes.com/marketing/nytvr/
[24] https://www.cartogis.org/docs/proceedings/2016/Chen_and_Clarke.pdf
[25] https://sites.psu.edu/chorophronesis/
[26] https://opp.psu.edu/
[27] http://www.citygml.org/
[28] http://sites.psu.edu/chorophronesis/
[29] https://help.sketchup.com/en/article/3000100
[30] http://www.digitaltutors.com/tutorial/2266-Modeling-an-Organized-Head-Mesh-in-Maya
[31] http://autode.sk/2e25Os7
[32] https://www.bentley.com/en/products/product-line/reality-modeling-software/contextcapture
[33] https://info.e-onsoftware.com/home
[34] http://www.lumenrt.com/
[35] http://autode.sk/1Ub21Ct
[36] http://gmv.cast.uark.edu/scanning-2/airborne-laser-scanning/
[37] http://desktop.arcgis.com/en/arcmap/10.3/manage-data/las-dataset/lidar-solutions-creating-raster-dems-and-dsms-from-large-lidar-point-collections.htm
[38] https://commons.wikimedia.org/wiki/File:SierpinskiTriangle.PNG
[39] http://web.comhem.se/solgrop/3dtree.htm
[40] http://cehelp.esri.com/help/index.jsp?topic=/com.procedural.cityengine.help/html/toc.html
[41] https://chorophronesis.psu.edu/
[42] http://www.autodesk.com/products/fbx/overview
[43] https://en.wikipedia.org/wiki/COLLADA
[44] https://developer.android.com/studio/index.html
[45] http://store.steampowered.com/steamvr
[46] https://developer.oculus.com/downloads/unity/
[47] https://vrtoolkit.readme.io/
[48] http://psu-libs.maps.arcgis.com/apps/MapSeries/index.html?appid=8b8e085ddc4746bb9dcac35c62bfd1f9
[49] https://sketchfab.com/models/4e3c9c3f6c4a44daae1174ad03bbf1cb?utm_medium=embed&utm_source=website&utm_campain=share-popup
[50] https://sketchfab.com/geographyugent?utm_medium=embed&utm_source=website&utm_campain=share-popup
[51] https://sketchfab.com?utm_medium=embed&utm_source=website&utm_campain=share-popup
[52] https://sketchfab.com/models/88c928bd1ffa4d8f9a15601fca35889f?utm_medium=embed&utm_source=website&utm_campain=share-popup
[53] https://sketchfab.com/chorophronesis?utm_medium=embed&utm_source=website&utm_campain=share-popup
[54] https://sketchfab.com/chorophronesis
[55] https://sites.psu.edu/campus/
[56] https://sketchfab.com/3d-models/penn-state-university-1922-old-main-af5578a3566a468bbe82c9a87c88eae8
[57] https://sketchfab.com/models/af5578a3566a468bbe82c9a87c88eae8
[58] https://sketchfab.com/models/af5578a3566a468bbe82c9a87c88eae8?utm_medium=embed&utm_source=website&utm_campain=share-popup
[59] https://www.youtube.com/watch?v=cKTPaqbXrAY
[60] https://www.youtube.com/watch?v=zYQvTJCpUFk
[61] https://www.lynda.com/
[62] http://www.sketchup.com/learn
[63] http://help.sketchup.com/en/article/3000081#qrc
[64] https://www.sketchup.com/
[65] https://www.e-education.psu.edu/geogvr/sites/www.e-education.psu.edu.geogvr/files/Lesson_03/Files/armory_forstudents.skp
[66] https://www.lynda.com/SketchUp-tutorials/Welcome/630600/687949-4.html?autoplay=true
[67] https://www.e-education.psu.edu/geogvr/sites/www.e-education.psu.edu.geogvr/files/Lesson_03/Files/VR_lecture3TasksEdited.pdf
[68] https://help.sketchup.com/en/article/3000083#know-inference
[69] https://help.sketchup.com/en/making-detailed-model-lightweight-way
[70] https://psu.app.box.com/s/c6vujly5g148xgyoesnaxjihij4gpqtm
[71] https://psu.box.com/s/g9lcguhdlmms693a9ck8hwvxmqmfw2mt
[72] https://help.sketchup.com/en/article/3000115
[73] https://sketchfab.com/
[74] http://www.sketchup.com
[75] https://www.youtube.com/user/SketchUpVideo
[76] http://forums.sketchup.com/
[77] https://www.lynda.com/member
[78] http://www.mit.edu/~tknight/IJDC/
[79] https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview
[80] https://doc.arcgis.com/en/cityengine/2019.1/get-started/cityengine-faq.htm
[81] https://doc.arcgis.com/en/cityengine/latest/help/cityengine-help-intro.htm
[82] https://doc.arcgis.com/en/cityengine/latest/help/help-cga-modeling-overview.htm
[83] https://doc.arcgis.com/en/cityengine/2020.0/help/help-rules-parameters.htm
[84] https://doc.arcgis.com/en/cityengine/2020.0/help/help-conditional-rule.htm
[85] https://doc.arcgis.com/en/cityengine/2020.0/help/help-stochastic-rule.htm
[86] http://cehelp.esri.com/help/index.jsp?topic=/com.procedural.cityengine.help/html/manual/cga/writing_rules/cga_wr_attributes.html
[87] https://doc.arcgis.com/en/cityengine/2020.0/help/help-rules-attributes.htm
[88] https://doc.arcgis.com/en/cityengine/2020.0/help/help-rules-functions.htm
[89] https://doc.arcgis.com/en/cityengine/2020.0/help/help-rules-styles.htm
[90] https://doc.arcgis.com/en/cityengine/2020.0/tutorials/introduction-to-the-cityengine-tutorials.htm
[91] https://www.arcgis.com/home/item.html?id=0fd3bbe496c14844968011332f9f39b7
[92] http://www.arcgis.com/apps/CEWebViewer/viewer.html?3dWebScene=6015f7d48dff4b3084de76bcf22c5bca
[93] http://psugeog.maps.arcgis.com/apps/CEWebViewer/viewer.html?3dWebScene=e08bb0c65fb74f849e3c95d2e7184474
[94] http://www.esri.com/software/cityengine
[95] https://www.youtube.com/watch?v=8cFc0In_Dkw&index=6&list=PLacF1o4TNbFlNYeE8WAJbQ6IUQOxF7_Ez
[96] https://www.bentley.com/en/products/brands/contextcapture
[97] https://youtu.be/zYQvTJCpUFk
[98] https://youtu.be/zYQvTJCpUFk?rel=0
[99] http://www.mit.edu/~tknight/IJDC
[100] https://www.arcgis.com/features/index.html
[101] https://sites.psu.edu/psugis/arcgis-pro-sign-in/
[102] http://pro.arcgis.com/en/pro-app/help/projects/create-a-new-project.htm
[103] http://lib.myilibrary.com/?id=878863
[104] https://courseware.e-education.psu.edu/downloads/geog497/
[105] https://www.e-education.psu.edu/geogvr/node/849
[106] http://www.google.com/maps
[107] https://www.google.com/maps
[108] https://www.fema.gov/flood-zones
[109] https://msc.fema.gov/portal/search?AddressQuery=state%20college%20pa#searchresultsanchor
[110] http://pro.arcgis.com/en/pro-app/tool-reference/analysis/identity.htm
[111] http://desktop.arcgis.com/en/arcmap/latest/extensions/3d-analyst/multipatches.htm
[112] http://pro.arcgis.com/en/pro-app/help/mapping/range/get-started-with-the-range-slider.htm
[113] https://chorophronesis.psu.edu
[114] https://learn.arcgis.com/en/projects/identify-landslide-risk-areas-in-colorado/lessons/analyze-flood-risk.htm
[115] http://www.esri.com/videos/watch?videoid=2208&isLegacy=true&title=3d-urban-analysis
[116] https://pro.arcgis.com/en/pro-app/help/mapping/animation/animate-through-a-range.htm#ESRI_SECTION1_88DD7DCAD7184E15BF078ACEEEEB14FA
[117] http://pro.arcgis.com/en/pro-app/help/mapping/range/visualize-numeric-values-using-the-range-slider.htm
[118] http://www.esri.com/videos/watch?videoid=4607/watch/4607/range-sliderisLegacy=true
[119] https://www.google.com/earth/
[120] https://www.arcgis.com/index.html
[121] https://assetstore.unity.com/
[122] https://certification.unity.com/
[123] https://unity.com/labs
[124] http://universesandbox.com/
[125] https://www.youtube.com/watch?v=PF_4RvW3-XI
[126] https://www.nesdis.noaa.gov/data-visualization
[127] https://unity.com/madewith
[128] https://unity3d.com/public-relations
[129] https://store.unity.com/
[130] https://unity3d.com/get-unity/download
[131] https://unity.com/
[132] https://unity3d.com/get-unity/download/archive
[133] https://unity.com
[134] https://learn.unity.com/search?k=%5B%5D
[135] https://drive.google.com/file/d/1Qxbi9AaOq6CGPs5TYO3EALd3F2L68SYS/view?usp=sharing
[136] http://www.sketchup.com/
[137] https://unity3d.com/
[138] https://visualstudio.microsoft.com/
[139] https://docs.unity3d.com/Manual/AnimationSection.html
[140] https://docs.unity3d.com/Manual/StateMachineBasics.html
[141] https://docs.unity3d.com/Manual/ParticleSystems.html
[142] https://docs.unity3d.com/Manual/PhysicsSection.html
[143] https://docs.unity3d.com/Manual/Audio.html
[144] https://docs.unity3d.com/Manual/class-InputManager.html
[145] https://docs.unity3d.com/Manual/UISystem.html
[146] https://learn.unity.com/search?k=%5B%22q%3AUser+Interface%22%5D
[147] https://www.youtube.com/watch?v=dzOsvgc2cKI
[148] https://www.vive.com/us/accessory/controller/
[149] https://blog.mapbox.com/announcing-the-mapbox-unity-sdk-afb07d33bd96
[150] https://docs.unity3d.com/Manual/animeditor-UsingAnimationEditor.html
[151] https://learn.unity.com/tutorial/basics-of-animating
[152] https://www.youtube.com/watch?v=cTbKllbYuow&feature=youtu.be
[153] https://unity3d.com/learn/tutorials/topics/virtual-reality/vr-overview?playlist=22946
[154] https://learn.unity.com/
[155] https://docs.unity3d.com/Manual/android-sdksetup.html
[156] https://ffmpeg.zeranoe.com/builds/
[157] https://www.digitalcitizen.life/command-prompt-how-use-basic-commands
[158] http://www.videolan.org/vlc/
[159] https://support.google.com/youtube/answer/6178631?hl=en
[160] https://github.com/google/spatial-media/releases/tag/v2.0
[161] https://support.google.com/youtube/answer/161805?hl=en&ref_topic=3024170
[162] https://studio.youtube.com
[163] https://www.youtube.com/channel/UCzuqhhs6NWbgTzMuM09WKDQ
[164] https://psu.app.box.com/s/302dfknb69zvosx4ddo82co0ipueh7a3
[165] https://psu.box.com/s/2du3cg4rtx341kgvspou9zcd1p9mjrc7
[166] https://www.youtube.com/watch?v=q5ejxITvEh8
[167] https://www.youtube.com/watch?v=1BMJFgK68IU&ab_channel=Unity
[168] http://freesound.org/
[169] https://www.youtube.com/watch?v=mP7ulMu5UkU
[170] http://www.cupheadgame.com/
[171] https://supermariobros.io/
[172] https://www.callofduty.com/
[173] https://upload.wikimedia.org/wikipedia/commons/f/fa/6DOF_en.jpg
[174] https://commons.wikimedia.org/wiki/File:6DOF_en.jpg
[175] https://www.youtube.com/channel/UCG08EqOAXJk_YXPDsAvReSg
[176] https://www.tomshardware.com/news/columbia-university-vr-sickness-research,32093.html
[177] https://www.youtube.com/watch?v=_p9oDSeUaws
[178] https://giphy.com/gifs/teleportation-KHiHRHPJ27SCgIBGW9?utm_source=media-link&utm_medium=landing&utm_campaign=Media%20Links&utm_term=
[179] https://www.youtube.com/channel/UCx78qBGQl-oCvGwX-MaVRgA
[180] https://giphy.com/gifs/teleportation-with-preview-hTICiMnGGKczkyoiYc?utm_source=media-link&utm_medium=landing&utm_campaign=Media%20Links&utm_term=
[181] https://www.youtube.com/channel/UCJLFizUO3gCQnyiYKMYvUgg
[182] https://www.youtube.com/channel/UCPhckk25N7P8OI580ucRidA
[183] https://www.youtube.com/channel/UCIeaxRCoSLFLHvrCInghqcw
[184] http://wii.com/
[185] https://www.youtube.com/channel/UC_0LCcbZc9G4SepLaCs1smg
[186] https://www.youtube.com/channel/UCI4Wh0EQPjGx2jJLjmTsFBQ
[187] https://media.giphy.com/media/Y3YCvTWl1VW10DqoHg/giphy.gif
[188] https://www.youtube.com/channel/UCWRk-LEMUNoZxUmY1wO7DBQ
[189] https://www.youtube.com/channel/UCpmu186jY1s3T9ADieN-uug
[190] https://www.youtube.com/channel/UCSVjrpufOzTd3jZsUGs-L0g
[191] http://news.psu.edu/story/427720/2016/09/29/academics/reconnecting-penn-state%E2%80%99s-past-through-virtual-reality
[192] https://news.psu.edu/story/436448/2016/11/08/research/project-explores-how-virtual-reality-can-help-students-learn
[193] https://insidethevolcano.com/the-volcano/
[194] http://www.sciencedirect.com.ezaccess.libraries.psu.edu/science/article/pii/S0169204615001449
[195] mailto:mmm6749@psu.edu
[196] https://student.worldcampus.psu.edu/course-work-and-success/tutoring
[197] https://pennstatelearning.psu.edu/tutoring
[198] https://www.amazon.com/Pro-Google-Compatible-Instructions-Construction/dp/B00Q1FITMO/ref=sr_1_1_sspa?keywords=D-scope+Pro+Google+Cardboard+Kit&qid=1582053123&s=electronics&sr=1-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUFMRU1aR1RXSkMxNzImZW5jcnlwdGVkSWQ9QTEwNDc2NDgxU0JBODhRNU85WjdCJmVuY3J5cHRlZEFkSWQ9QTA0OTA3NDQxVUJSVlFYWEZWVTJDJndpZGdldE5hbWU9c3BfYXRmJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ==
[199] http://student.worldcampus.psu.edu/student-services/helpdesk
[200] https://www.e-education.psu.edu/geogvr/javascript%3A%3B
[201] https://www.e-education.psu.edu/geogvr/sites/www.e-education.psu.edu.geogvr/files/Lesson_02/Files/Chen_and_Clarke.pdf
[202] https://www.sciencedirect.com/science/article/pii/S0169204615001449
[203] https://www.worldcampus.psu.edu/general-technical-requirements
[204] https://student.worldcampus.psu.edu/help-and-support/technical-support/it-service-desk
[205] https://pennstate.service-now.com/sp?id=get_it_help
[206] https://www.youtube.com/embed/uBevSvm6IA4
[207] https://psu.instructure.com/courses/2179843/modules/items/34140262