Welcome to Lesson 4! In this lesson, you will practice planning and designing a UAS mission. We will focus on the imaging sensor (digital camera), as it is the most widely used sensor for geospatial projects. Successful execution of any mapping project requires a tremendous amount of planning prior to mission execution, and that planning must be done by an experienced person who is familiar with all aspects of mapping. Mission planning includes the following categories:
At the successful completion of this lesson, you should be able to:
In this section, you will understand the value of studying area maps for a project prior to the development of the flight plan.
Flight planners should acquaint themselves with the project area through two types of maps before proceeding with further steps of the design; those are U.S. Topo Quadrangle Maps and Sectional Aeronautical Charts.
The U.S. Topo Quadrangle map, mainly a topographic map, shows the details of the contours of the land (terrain elevation). See Figure 4.1. This type of map reveals all the information that a planner needs about the topography of the project area. Topography affects flight plan parameters such as flight lines, flight line spacing, and imagery spacing. Quad maps can be downloaded from the USGS [3]. You can also review a sample of such maps for the State College area [4].
Sectional Aeronautical Charts, also called VFR charts (Figure 4.2), are described as “the primary navigational reference medium used by the VFR pilot community. The 1:500,000 scale Sectional Aeronautical Chart Series is designed for visual navigation of slow to medium speed aircraft. The topographic information featured consists of the relief and a judicious selection of visual checkpoints used for flight under visual flight rules. The checkpoints include populated places, drainage patterns, roads, railroads, and other distinctive landmarks. The aeronautical information on Sectional Charts includes visual and radio aids to navigation, airports, controlled airspace, restricted areas, obstructions, and related data. These charts are updated every six months, most Alaska Charts annually.” To better understand these charts, review the FAA “Aeronautical Chart User Guide [5].” You can also watch this YouTube video on how to read sectional charts [6].
The VFR acronym is adopted from “Visual Flight Rules [7]” where a pilot relies on the visual see-and-avoid rule during flight. To download such charts, visit the FAA site [8].
The topographic map and the aeronautical chart provide an overview of the area and the contents of the ground cover (both natural and man-made), restricted airspace such as airport approaches, high towers, etc.
No less important than visualizing a sectional chart is utilizing the online FAA sites and other services that allow you to zoom in to your geographic location to determine the airspace status and the allowed flight ceiling. Here are a few of the free services available to the public:
1. The AirMap [10] App
2. Visualize it: See FAA UAS Data on a Map [11]
3. B4UFLY [12]
The focal plane of an aerial camera is the plane where all incident rays coming from the object are focused. The focal plane is where the film of a film-based camera is placed. With the introduction of digital cameras, the focal plane is occupied by the CCD array, replacing the film.
A digital camera like the ones we use at home is called a “digital frame” camera to distinguish it from other designs of digital cameras, such as “push broom” cameras. Digital frame cameras [13] have the same geometric characteristics as film cameras, which employ film as the recording medium.
A digital frame camera consists of a sensor that is a two-dimensional array of charge-coupled device [14] (CCD) elements (a CCD element is also called a pixel). The sensor is mounted at the focal plane of the camera. When an image is taken, all CCDs of the sensor are exposed simultaneously, thus producing a digital frame. Figure 4.3 (from Wolf, page 75) illustrates how a digital camera captures an area on the ground that falls within the lens' field of view (FOV).
The size of a digital camera is measured by the size of its sensor. The higher the number of CCDs (pixels) in the sensor, the bigger and more expensive the camera. If a camera has a sensor of 4,000 pixels by 4,000 pixels, it is called a 16-megapixel camera because it has 16,000,000 pixels. UAS imaging productivity, i.e., how many acres the UAS can cover in an hour, depends on the sensor size, battery life, and the lens focal length. The article "DJI Phantom 4 RTK vs. WingtraOne [15]" clearly illustrates the difference in UAS productivity based on sensor and UAS capabilities. In that article, you will also learn about some fundamental capabilities that we usually expect from a mapping drone.
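To make the megapixel arithmetic concrete, here is a minimal Python sketch (the function name `megapixels` is illustrative, not from any library):

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Sensor size in megapixels from its pixel array dimensions."""
    return width_px * height_px / 1_000_000

# A 4,000 x 4,000 array and an 8,000 x 2,000 array are both 16-megapixel sensors.
print(megapixels(4000, 4000))  # 16.0
print(megapixels(8000, 2000))  # 16.0
```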
The lens for a mapping camera usually contains compound lenses put together to form the lens cone. The lens cone also contains the shutter and diaphragm.
The lens is the most important and most expensive part of a mapping aerial camera. Cameras on board a UAS are not of that level of quality, as they were not manufactured to be used as mapping cameras. Mapping cameras are called metric cameras and are built so that the internal geometry of the camera holds its characteristics despite harsh working conditions and changing operational environments. Lenses for cameras on board a UAS are smaller in size and lighter in weight. They are also less expensive than those of standard mapping cameras. Lenses for mapping cameras should be calibrated to determine the accurate value of the focal length and the lens distortion (imperfection) characteristics.
Shutters are used to limit the passage of light to the focal plane. The shutter speed of aerial cameras typically ranges between 1/100 and 1/1000 of a second. Shutters are of two types: focal-plane shutters and between-the-lens shutters; the latter is the most common shutter used for aerial cameras. Most digital camera shutters are designed according to one of two mechanisms: the leaf shutter (also called a mechanical, global, or dilating-aperture shutter) or the electronic rolling shutter (curtain or sliding shutter). The leaf shutter exposes the entire sensor array at once, while the rolling shutter exposes one line of pixels at a time. For aerial imaging from a moving platform such as a UAS, a leaf shutter is recommended because it minimizes image blur. To understand the shortcomings of the rolling shutter, watch this video [16].
It is important to know which shutter your camera uses, as most processing software, including Pix4D, provides correction for the rolling shutter effect. However, the software does not correct for it automatically, and you will need to activate that option before you start processing the imagery.
More information on different types of shutter mechanisms can be found on Wikipedia's Shutter (photography) page [17].
In order to understand mission flight planning, you need to understand the geometry of the image as it is formed within the camera. The size of the CCD array and lens focal length coupled with flying altitude (above ground) determines the image scale or the ground resolution of the image. Therefore, it is essential to the work of the flight planner to have all of this information understood and available before starting to design a mission.
In photogrammetry, we usually deal with three types of imagery (photography). They are defined in terms of the angle that the camera's optical axis makes with the vertical (nadir):
For the purpose of this course, we will focus only on the first two types, and that is vertical and near-vertical photography.
Figure 4.3 illustrates the basic geometry of a vertical photograph or image. By a vertical photograph or image, we mean an image taken with a camera that is looking down at the ground. As the aircraft moves, so does the camera, and this makes it impossible to take a truly vertical image. Therefore, the definition of a vertical image allows a few degrees of deviation from the nadir (the line connecting the lens frontal point and the point on the ground that is exactly beneath the aircraft). In summary, a vertical image is an image that is either looking straight down at the ground or deviating by only a few degrees to either side of the aircraft.
As the sun's rays hit the ground, they reflect back toward the camera, and some actually enter the camera through the lens. This physical phenomenon enables us to express the ground-image relation using trigonometric principles. In Figure 4.3, ground point A is projected at image location a' and ground point B is projected at image location b' on the film. From such geometry, the film's four corners a', b', c', and d' cover an area on the ground represented by the square ABCD. Such relations not only enable us to compute the ground coverage of a photograph (image) but also enable us to compute its scale.
The scale of an image is the ratio of the distance on the image to the corresponding distance on the ground. In Figure 4.4, the distance on the ground AB will be projected on the image on line ab, therefore, the image scale can be computed using the following formula:
Equation 1: Scale = ab / AB
Analyzing the two triangles (the small triangle with base ab and the large triangle with base AB) of Figure 4.4, one can also conclude, using the similarity of triangles principle, that the scale is also equal to:
Equation 2: Scale = f / H
Scale is expressed either as a unitless ratio, such as 1/12,000 (or 1:12,000), or as a ratio with stated units, such as 1 in. = 1,000 ft (or 1” = 1,000’).
The following two examples will walk you step by step through the process of computing scales for imagery produced from a film-based camera and from a digital camera. In digital cameras, the scale does not play any role in defining the image quality, as is the case with film-based camera. In digital cameras, we use the Ground Sampling Distance (GSD) to describe the resolution quality of the image while in film-based cameras we use the film scale.
Aerial photographs were acquired from an altitude of 6,000 ft AMT (Above Mean Terrain) with a film-based aerial camera with lens focal length of 6 inches. Determine the scale of the resulting photography.
Solution:
From Figure 4.4 and Equations 1 & 2,

Scale = ab / AB = f / H

Therefore,

Scale = 6 in. / 6,000 ft = 0.5 ft / 6,000 ft = 1 / 12,000

OR

Scale = 1:12,000 or 1"=1,000'
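The film-camera scale computation above can be scripted. A minimal Python sketch of this example (variable names are illustrative); note that f and H must share the same unit before dividing:

```python
# Photo scale of a film-based camera: Scale = f / H.
focal_length_ft = 6 / 12   # 6-inch lens converted to feet
altitude_ft = 6000         # flying height above mean terrain (AMT)

scale_denominator = altitude_ft / focal_length_ft
print(f"Scale = 1:{scale_denominator:,.0f}")            # Scale = 1:12,000
print(f"1 in. = {scale_denominator / 12:,.0f} ft")      # 1 in. = 1,000 ft
```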
Scale is meaningless in digital mapping products, as the scale concept was created to represent measured distances on old paper maps. However, people are still using scale, and it will take time before the new generation of mappers embraces the digital representation of the new geospatial products. In digital cameras, we instead describe resolution directly through the sensor and the GSD.

Digital camera manufacturers provide information on the sensor used in their cameras. Some express it as, say, 16 megapixels, which could be a square array of 4,000 x 4,000 pixels or a rectangular array with any width-to-height ratio, such as 8,000 x 2,000 pixels (a width-to-height ratio of 4). Some manufacturers provide the sensor array size in pixels and in millimeters, and some provide a combination of the number of pixels and the sensor size in inches, leaving you wondering about the physical size of the CCD (see Figure 4.5). Figure 4.6 illustrates camera information for which you need to dig deep into the provided specifications to obtain what you want.

Figure 4.6 represents the information provided for the multi-spectral camera on board the DJI Phantom 4 agricultural UAS. You can indirectly derive the sensor dimensions from the given array size in pixels and the CCD size (3 um), which is embedded in the focal length information. The sensor dimensions in pixels were not provided directly, so you need to figure them out from the two values provided for the optical center. The optical center, the origin of the image coordinates at (0,0), is usually located in the middle, i.e., the center of the array; therefore, the total width of the array is 800 pixels x 2 = 1,600 pixels, while the sensor height is 650 pixels x 2 = 1,300 pixels. Knowing the number of pixels in the width direction (1,600) and the pixel size of 3 micrometers, the sensor width can be derived as 1,600 x 0.003 = 4.8 mm; similarly, the sensor height is 1,300 x 0.003 = 3.9 mm.
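The sensor-dimension derivation for the DJI Phantom 4 multispectral camera can be checked with a few lines of Python (values taken from the Figure 4.6 discussion; variable names are illustrative):

```python
# Deriving physical sensor dimensions from the optical-center values
# (half-width and half-height of the array) and the pixel pitch.
optical_center_x_px = 800   # half-width of the array, in pixels
optical_center_y_px = 650   # half-height of the array, in pixels
pixel_size_mm = 0.003       # 3-micrometer CCD (pixel) size

width_px = 2 * optical_center_x_px    # 1,600 pixels
height_px = 2 * optical_center_y_px   # 1,300 pixels

width_mm = width_px * pixel_size_mm
height_mm = height_px * pixel_size_mm
print(round(width_mm, 1), round(height_mm, 1))  # 4.8 3.9
```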
The following is an example of calculating the scale for digital imagery acquired using a digital camera:
Aerial imagery was acquired with a digital aerial camera with lens focal length of 100 mm and CCD size of 0.010 mm (or 10 microns). The resulting imagery had a ground resolution of 30 cm (1 ft). Determine the scale of the resulting imagery.
Solution
From Figure 4.4 and equation 1, assume that the distance ab represents the physical size of one pixel or CCD, which is 0.010 mm, and the distance AB is the ground coverage of the same pixel or 30 cm.
Therefore,

Scale = ab / AB = 0.010 mm / 300 mm = 1 / 30,000

OR

Scale = 1:30,000 or 1"=2,500'
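A quick Python check of this digital-camera example (variable names are illustrative); both distances are converted to millimeters before dividing:

```python
# Image scale for a digital camera: Scale = pixel size (ab) / GSD (AB).
ccd_size_mm = 0.010   # 10-micron pixel
gsd_mm = 300          # 30 cm ground resolution, converted to mm

scale_denominator = gsd_mm / ccd_size_mm
print(f"Scale = 1:{round(scale_denominator):,}")  # Scale = 1:30,000
```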
Aerial imagery was acquired with a digital aerial camera with lens focal length of 50 mm and CCD size of 0.020 mm (or 20 microns). The resulting imagery had a ground resolution of 60 cm (2 ft). Determine the scale of the resulting imagery.
Solution
Scale = ab / AB = 0.020 mm / 600 mm = 1/30,000, i.e., Scale = 1:30,000 or 1"=2,500'
Imagery acquired for photogrammetric processing is flown with two types of overlap: Forward Lap and Side Lap. The following two subsections will describe each type of imagery overlap.
Forward lap, which is also called end lap, is a term used in photogrammetry to describe the amount of image overlap intentionally introduced between successive photos along a flight line (see Figure 4.7). Figure 4.7 illustrates an aircraft equipped with a mapping aerial camera taking two overlapping photographs. The centers of the two photographs are separated in the air by a distance B, which is also called the air base. Each photograph in Figure 4.7 covers a distance on the ground equal to G. The overlapping coverage of the two photographs on the ground is what we call forward lap.
This type of overlap is used to form stereo-pairs for stereo viewing and processing. The forward lap is measured as a percentage of the total image coverage. The typical value of forward lap for photogrammetric work is 60%. Because of the light weight of the UAS, we expect substantial air dynamics and therefore substantial rotations of the camera (i.e., crab); therefore, I recommend a forward lap of at least 70%.
Side lap is a term used in photogrammetry to describe the amount of overlap between images from adjacent flight lines (see Figure 4.8). Figure 4.8 illustrates an aircraft taking two overlapping photographs from two adjacent flight lines. The distance in the air between the two flight lines (W) is called the line spacing.
This type of overlap is needed to make sure that there are no gaps in the coverage. The side lap is measured as a percentage of the total image coverage. The typical value of side lap for photogrammetric work is 30%. However, because of the light weight of the UAS, we expect substantial air dynamics and therefore substantial rotations of the camera (i.e., crab); therefore, I recommend using at least 40% side lap.
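As a hedged sketch, the recommended UAS overlaps translate into exposure spacing and line spacing as follows. The helper names are illustrative, and the coverage values passed in are hypothetical:

```python
def air_base(ground_coverage_along: float, forward_lap: float) -> float:
    """Distance between successive exposures along a flight line."""
    return ground_coverage_along * (1 - forward_lap)

def line_spacing(ground_coverage_across: float, side_lap: float) -> float:
    """Distance between adjacent flight lines."""
    return ground_coverage_across * (1 - side_lap)

# Hypothetical per-image ground coverage of 300 m along track, 400 m across track,
# with the recommended UAS overlaps of 70% forward lap and 40% side lap:
print(round(air_base(300, 0.70), 1))      # 90.0 (meters between exposures)
print(round(line_spacing(400, 0.40), 1))  # 240.0 (meters between flight lines)
```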
Ground coverage of an image is the area on the ground (the square ABCD of Figure 4.3) covered by the four corners of the photograph a'b'c'd' of Figure 4.3. Ground coverage of a photograph is determined by the camera internal geometry (focal length and the size of the CCD array) and the flying altitude above ground elevation.
Example on Image Ground Coverage:
A digital camera has an array size of 12,000 pixels by 6,000 pixels (Figure 4.9). If the physical CCD size is 0.010 mm (10 um), how much area in acres will each image cover on the ground if the resulting ground resolution (GSD) of a pixel is 1 foot?
Solution
Ground coverage across the width (W) of the array = 12,000 pixels x 1 ft/pixel = 12,000 ft
Ground coverage across the height (L) of the array= 6,000 pixels x 1 ft/pixel = 6,000 ft
Covered area per image = 12,000 ft x 6,000 ft = 72,000,000 sq ft ≈ 1,653 acres (1 acre = 43,560 sq ft)
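The acreage conversion can be verified in Python (1 acre = 43,560 sq ft; variable names are illustrative):

```python
# Ground coverage of the example image, converted to acres.
width_ft = 12_000 * 1.0   # 12,000 pixels x 1 ft/pixel
height_ft = 6_000 * 1.0   # 6,000 pixels x 1 ft/pixel

area_sq_ft = width_ft * height_ft   # 72,000,000 sq ft
area_acres = area_sq_ft / 43_560
print(round(area_acres, 1))  # 1652.9
```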
In this section, we start the practical work for flight planning an imagery mission. By the end of this section, you should be able to develop a flight plan for an aerial imagery mission. Successful execution of any photogrammetric project requires thorough planning prior to the execution of any activity in the project.
The first step in the design is to decide on the scale of imagery or its resolution and the required accuracy. Once those two requirements are known, the following processes follow:
For the flight plan, the planner needs to know the following information, some of which he or she ends up calculating:
Figure 4.8 shows three overlapping squares with light rays entering the camera at the lens focal point. Successive overlapping images form a strip of imagery that we usually call a "strip" or "flight line." A photogrammetric strip (Figure 4.8) is therefore formed from multiple overlapping images along a flight line, while a photogrammetric block (Figure 4.9) consists of multiple overlapping strips (or flight lines).
Once we compute the ground coverage of the image, as discussed in the "Geometry of Vertical Image" section, we can compute the number of flight lines, the number of images, the aircraft speed, the flying altitude, etc., and draw the flight lines and images on the project map (Figure 4.10).
Before we start the computations of the flight lines and images numbers, I would like you to understand the following helpful hints:
Now, let us start figuring out how many flight lines we need for the project area illustrated in Figure 4.13, to the right. Figure 4.13 shows rectangular project boundaries (in black dashed lines), with length equal to LENGTH and width equal to WIDTH, designed to be flown with 6 flight lines (red lines with arrowheads). To figure out the number of flight lines needed to cover the project area, we will need to go through the following computations:
In Figure 4.13, you may have noticed that the flight direction alternates between north-to-south and south-to-north from one flight line to the adjacent one. Flying the project in this manner increases the aircraft's fuel efficiency, so the aircraft can stay in the air longer.
Once we determine the number of flight lines, we need to figure out how many images will cover the project area. To do so, we need to go through the following computations:
Figure 4.14 is the same as Figure 4.13 with added blue circles that represent photo centers of the designed images. The circles are only given to one flight line, and I will leave it to your imagination to fill all the flight lines with such circles.
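The flight-line and image counts described above can be sketched in Python for a rectangular project. This is a hedged sketch: the ceiling-and-add-one rounding follows common photogrammetric practice and may differ from the exact procedure used for Figures 4.13 and 4.14, and the example dimensions are hypothetical.

```python
import math

def num_flight_lines(project_width: float, spacing: float) -> int:
    # Fencepost count: lines needed to span the project width at the given
    # line spacing, rounded up, plus one line to close the far edge.
    return math.ceil(project_width / spacing) + 1

def images_per_line(project_length: float, air_base: float) -> int:
    # Same fencepost logic along the flight line, using the air base.
    return math.ceil(project_length / air_base) + 1

# Hypothetical project: 68,640 ft wide with 8,400 ft line spacing,
# 105,600 ft long with a 2,800 ft air base.
print(num_flight_lines(68_640, 8_400))   # 10
print(images_per_line(105_600, 2_800))   # 39
```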
Flying altitude is the altitude above a certain datum at which the UAS flies during data acquisition. The two main datums used are the average (mean) ground elevation and mean sea level. Figure 4.15 illustrates the relationship between the aircraft and the datum and how the two systems relate to each other. In Figure 4.15, we have an aircraft that is flying at 3,000 feet above the average (mean) ground elevation, represented by the blue horizontal line in the figure; that mean terrain elevation is situated at 600 feet above mean sea level. Therefore, the flying altitude can be expressed in two ways:
We now need to determine at what altitude the project should be flown. To do so, we go back to the camera's internal geometry and scale, as discussed in section 4.3. Assume that the imagery is to be acquired with a camera with a lens focal length of f and a CCD size of b. We also know in advance what the imagery ground resolution, or GSD, should be. The flying altitude is computed as follows:
GSD / b = H / f

OR

H = (f / b) x GSD

from which H can be determined.
Here, we need to make sure that both f and b are converted to have the same linear unit, in which case the resulting altitude will be in the same linear unit of the GSD. If we assume the following values:
f = 50mm
b = 0.010 mm (or 10 um)
GSD = 0.30 meter, the flying altitude will be:
H = (50 / 0.010) x 0.30 = 1,500 meters above ground level
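The same altitude computation in Python (values from the example above; variable names are illustrative). Note that f and b must share the same unit, so H comes out in the unit of the GSD:

```python
# Flying altitude from the scale relation GSD / b = H / f, i.e. H = (f / b) * GSD.
f_mm = 50      # lens focal length
b_mm = 0.010   # CCD (pixel) size
gsd_m = 0.30   # desired ground sampling distance, meters

H_m = (f_mm / b_mm) * gsd_m
print(round(H_m))  # 1500 (meters above ground level)
```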
Controlling the aircraft speed is important for maintaining the necessary forward (end) lap expected for the imagery. Fly the aircraft too fast, and you end up with less forward lap than anticipated; fly it too slowly, and you get too much overlap between successive images. Both situations are harmful to the anticipated products and/or the project budget. Too little overlap reduces the capability of using the imagery for stereo viewing and processing, while too much overlap results in many unnecessary images that may affect the project budget negatively. In the previous subsections, we computed the airbase, or the distance between two successive images along one flight line, that satisfies the amount of end lap necessary for the project. Computing the time between exposures is a simple matter once the airbase is determined and the aircraft speed is decided upon.
When the camera exposes an image, we need the aircraft to move a distance equal to the airbase before it exposes the next image. If we assume the aircraft speed is v, then the time t between two consecutive images is calculated from the following equation:

t = B / v
For example, if we computed the airbase to be 1,000 ft and we used an aircraft with a speed of 150 knots (about 253 ft/sec), the time between exposures is equal to:

t = 1,000 ft / 253.17 ft/sec ≈ 3.95 seconds
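In code, with the knot-to-ft/s conversion made explicit (1 knot = 1.68781 ft/s; variable names are illustrative):

```python
# Time between exposures: t = B / v, with the speed converted to ft/s.
air_base_ft = 1000
speed_knots = 150
speed_ft_per_s = speed_knots * 1.68781   # about 253.2 ft/s

t_s = air_base_ft / speed_ft_per_s
print(round(t_s, 2))  # 3.95 (seconds between exposures)
```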
In the navigation world, way points [18] are defined as “sets of coordinates that identify a point in physical space.” Close to this definition is the one used by mapping professionals, which involves using sets of coordinates to locate the beginning point and the end point of each flight line. Way points are important for the pilot and camera operator in executing the flight plan. Way points in manned-aircraft imagery acquisition are usually located a couple of miles outside the project boundary on both sides of the flight line (i.e., a couple of miles before approaching the project area and a couple of miles after exiting it); for UAS operations, they would be a couple hundred meters before approaching the project area and a couple hundred meters after exiting it. The pilot uses way points to align the aircraft to the flight line before entering the project area. In UAS operations, a "way point" marks the beginning or end of a flight line, where the UAS either positions itself before it starts taking pictures or stops taking pictures on a certain flight line.
A project area is 20 miles long in the east-west direction and 13 miles in the north-south direction. The client asked for natural color (3-band) vertical digital aerial imagery with a pixel resolution, or GSD, of 1 ft, using a frame-based digital camera with a rectangular CCD array of 12,000 pixels across the flight direction (W) and 7,000 pixels along the flight direction (L) and a lens focal length of 100 mm. The array contains square CCDs with a dimension of 10 microns. The end lap and side lap are to be 60% and 30%, respectively. The imagery should be delivered in TIFF file format with 8 bits (1 byte) per band, or 24 bits for the three color bands (RGB). Calculate:
Solution:
Looking at the project size (20 x 13 miles) and the one-foot GSD requirement, a mission planner should realize right away that the image acquisition task for such a project size and specifications can only be achieved using a manned aircraft.
The camera should be oriented so the longer dimension of the CCD array is perpendicular to the flight direction (see Figure 4.12).
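The steps of this comprehensive example can be sketched end to end in Python. This is a hedged sketch: the rounding conventions (ceiling, plus one flight line and one image for edge coverage) follow common practice and may differ slightly from the author's hand computation, so treat the counts as approximate.

```python
import math

MI_TO_FT = 5280

# --- givens from the example ---
length_ft = 20 * MI_TO_FT      # east-west project dimension (flight direction)
width_ft = 13 * MI_TO_FT       # north-south project dimension
gsd_ft = 1.0
across_px, along_px = 12_000, 7_000
f_mm, pixel_mm = 100, 0.010
end_lap, side_lap = 0.60, 0.30

# --- flying altitude: H = (f / pixel) * GSD ---
H_ft = (f_mm / pixel_mm) * gsd_ft              # 10,000 ft above mean terrain

# --- ground coverage per image ---
swath_ft = across_px * gsd_ft                  # 12,000 ft across track
gain_ft = along_px * gsd_ft                    # 7,000 ft along track

# --- line spacing, air base, and counts ---
line_spacing_ft = swath_ft * (1 - side_lap)    # 8,400 ft
air_base_ft = gain_ft * (1 - end_lap)          # 2,800 ft
n_lines = math.ceil(width_ft / line_spacing_ft) + 1
n_images_per_line = math.ceil(length_ft / air_base_ft) + 1
n_images = n_lines * n_images_per_line

# --- storage: 3 bytes (24 bits) per pixel ---
image_mb = across_px * along_px * 3 / 1e6      # 252 MB per image

print(round(H_ft), n_lines, n_images_per_line, n_images, image_mb)
```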
Past experience with projects of a similar nature is essential in estimating cost and developing delivery schedule. In estimating cost, the following main categories of efforts and materials are considered:
Once quantities are estimated as illustrated in the above steps, hours for each phase are established. Depending on the project deliverables requirements, the following labor items are considered when estimating costs:
The table in Figure 4.16 provides an idea of the going market rates for geospatial products that can be used as guidelines when pricing a mapping project using manned aircraft operations and a metric digital camera or lidar. The industry needs to come up with a comparable table based on unmanned operations. There is no good pricing model established for UAS operations, as the standards and product quality vary widely depending on who offers such services and whether they fall strictly under the "Professional Services" designation.
| Product | GSD (ft) | Price per sq mile | Comments |
| --- | --- | --- | --- |
| Ortho | 0.5 | $150-$200 | Based on large projects |
| Ortho | 1.0 | $80-$100 | Based on large projects |
| Ortho | 2.0 | $30-$60 | Based on large projects |
| Lidar | 3.2 | $100-$500 | Depends on accuracy, terrain, and required details |
After the project hours are estimated, each phase of the project may be scheduled based on the following:
The schedule will also consider the constraints on the window of opportunity due to weather conditions. Figure 4.17 illustrates the number of days, per state/region, available annually for aerial imaging campaigns. Areas like the state of Maine have only 30 cloudless days per year that are suitable for aerial imaging activities.
Chapter 18 of Elements of Photogrammetry with Applications in GIS, 4th edition
For practice, develop two flight plans for your project, one by using manual computations and formulas as described in this section and one by using "Mission Planner" software. Compare the two.
In this section, we will discuss the topics of camera calibration and sensor boresighting.
Most existing UASs that are dedicated to photogrammetric imaging carry on board less expensive cameras that we call nonmetric cameras. Nonmetric cameras are cameras with variable interior geometry (i.e., unknown focal length) and with relatively large lens distortion. In order to conduct photogrammetric mapping from the resulting imagery from such cameras, we need to determine to a known accuracy all interior camera parameters such as the focal length and the coordinates of the principal point, and to model the lens distortion.
The principal point of a camera is the point where the lines from opposite corners of the CCD array, or the lines connecting the opposite midway points of the CCD array sides, intersect (Figure 4.18). However, when the lens is fitted on the camera body, it is impossible to perfectly align the center of the lens with the principal point described above, resulting in the offset distances xp and yp illustrated in Figure 4.18. Those two values are determined in the process of camera calibration and need to be represented in the photogrammetric mathematical model during computations.
Mapping film camera calibration was usually performed in special laboratories dedicated to this task, such as the USGS calibration lab for film cameras, which was shut down permanently on April 1, 2017 after decades of service to the mapping community. However, with the advancements in the analytical computational models of photogrammetry, we can determine the camera parameters analytically through a process called camera self-calibration from within the aerial triangulation process. Most UAS data processing software, such as the software used in this course, supports camera self-calibration.
The term “boresighting” usually describes the process of determining the differences between the rotational axes of the sensor (such as a camera) and the rotational axes of the Inertial Measurement Unit (IMU), which is usually bolted to the camera body. The IMU [19] is a device containing gyros and accelerometers, used in photogrammetry and lidar to sense and measure sensor rotations and accelerations. In photogrammetry, where the IMU is used on an imaging camera, the boresight parameters are determined by flying over a well-controlled site (a site with accurate ground controls) and then conducting aerial triangulation on the resulting imagery.
The aerial triangulation process computes the six exterior orientation parameters (X, Y, Z, omega, phi, kappa), while the IMU measures the three orientation parameters roll, pitch, and heading (or yaw). By comparing the two sets of camera orientation angles, as computed by the aerial triangulation and as measured by the IMU, one can establish the differences in the rotations of the camera in reference to the inertial system (from the IMU). These differences (or offset values) are used to correct all future IMU-derived orientations, converting the rotation angles from the inertial to the photogrammetric system so they can be utilized in the mapping process.
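As a minimal, hypothetical sketch of that comparison (all angle values are invented for illustration; production workflows compose rotation matrices rather than subtracting angles, so this small-angle shortcut is only indicative):

```python
# Angles in degrees: camera orientation from aerial triangulation vs. the
# orientation measured by the IMU over the same well-controlled site.
at_angles = {"omega": 0.512, "phi": -0.231, "kappa": 89.970}       # hypothetical
imu_angles = {"roll": 0.498, "pitch": -0.240, "heading": 89.955}   # hypothetical

# Boresight offsets: the constant differences applied to future IMU readings.
boresight = {
    "d_omega": at_angles["omega"] - imu_angles["roll"],
    "d_phi": at_angles["phi"] - imu_angles["pitch"],
    "d_kappa": at_angles["kappa"] - imu_angles["heading"],
}
print({k: round(v, 3) for k, v in boresight.items()})
```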
A similar process is followed for determining the offset values for the IMU used in a lidar system. For the lidar offset determination, there is no aerial triangulation, as lidar follows different processing steps. To determine the boresight offset values in lidar, the system has to be flown in a certain configuration over a well-controlled site. Figure 4.19 represents an ideal design for lidar boresight determination. In the figure, there are two lines flown in the east-west direction (one flight line flown due east and the other flown the opposite direction, due west) at a certain altitude, and two flight lines flown in the perpendicular direction (north-south) at an altitude that is nearly double that of the east-west flight lines.
Congratulations! You have just finished Lesson 4, UAS Mission Planning and Control. I hope that you appreciate the importance of this lesson material in relation to the Concept of Operations for any UAS. UAS projects based on poor planning mean nothing but guaranteed failure and/or poor-quality derived products. The computations may seem complicated, but I tried to walk you through the different steps in detail. However, if you feel overwhelmed by the design concepts, please do not hesitate to write to me.
1. Complete the Lesson 4 Quiz.
2. Start Pix4D processing for Exercise 1 (Wiregrass Gravel Mine, Alabama) using these instructions [20]. Submit your reports in Lesson 6 (5 points).
3. Start Pix4D processing for Exercise 2 (County Line Road, Dayton, Ohio) using these instructions [21]. Submit your reports in Lesson 8 (8 points).
4. Practice the use of "Mission Planner" software to develop a flight plan.
5. Participate in the "Human Elements of UAS" Discussion Forum.
Links
[1] https://www.e-education.psu.edu/geog892/sites/www.e-education.psu.edu.geog892/files/images/lesson06/Camera_Calibration-yastikli_naci.pdf
[2] https://www.e-education.psu.edu/geog892/sites/www.e-education.psu.edu.geog892/files/images/lesson06/Camera_Calibration_91.pdf
[3] https://store.usgs.gov/maps
[4] https://www.e-education.psu.edu/geog892/sites/www.e-education.psu.edu.geog892/files/images/lesson06/PA_State%20College_223993_1962_24000_geo.pdf
[5] https://www.e-education.psu.edu/geog892/sites/www.e-education.psu.edu.geog892/files/images/Lesson04/Aeronautical_Chart.pdf
[6] http://www.youtube.com/watch?v=6ITjUfl80bs
[7] http://en.wikipedia.org/wiki/Visual_flight_rules
[8] http://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/#SecPDFs
[9] https://www.faa.gov/air_traffic/flight_info/aeronav/productcatalog/VFRCharts/
[10] https://app.airmap.com/geo?34.017931,-118.496046,9.417193z
[11] https://faa.maps.arcgis.com/apps/webappviewer/index.html?id=9c2e4406710048e19806ebf6a06754ad
[12] https://www.faa.gov/uas/recreational_fliers/where_can_i_fly/b4ufly/
[13] http://en.wikipedia.org/wiki/Digital_camera
[14] http://en.wikipedia.org/wiki/CCD_camera
[15] https://wingtra.com/best-drones-for-photogrammetry-wingtraone-comparison/phantom-4-rtk-vs-wingtra/
[16] https://www.youtube.com/watch?v=dNVtMmLlnoE&feature=youtu.be
[17] http://en.wikipedia.org/wiki/Shutter_%28photography%29
[18] http://en.wikipedia.org/wiki/Waypoint
[19] http://en.wikipedia.org/wiki/Inertial_Measurement_Unit
[20] https://psu.instructure.com/files/102041812/download?download_frd=1
[21] https://www.e-education.psu.edu/geog892/sites/www.e-education.psu.edu.geog892/files/GEOG892_Pix4D_Excercise2.pdf