GEOG 858
Spatial Data Science for Emergency Management

Emerging Theme: Geospatial Artificial Intelligence (geoAI)


We hear a lot about artificial intelligence (AI) these days, and indeed AI is a rapidly expanding field finding applications in many aspects of our lives. Emergency management and geospatial applications are no exception. So, what are AI and geoAI, and how are they being applied to the phases of emergency management?

Artificial intelligence refers to a range of approaches and applications whereby computers are trained to simulate intelligent human behavior and act in autonomous or semi-autonomous ways. AI systems are also able to process and learn from vast amounts of data that are difficult if not impossible for humans to readily understand.


If you want to learn more about AI or don’t feel like you have a very good understanding of what it is all about, have a look at these resources (optional):

What about geospatial artificial intelligence (geoAI)? I’d like you to have a look at two videos that illustrate some of the current characteristics of geoAI and where it is heading. The first video is a presentation on machine learning and the prediction of road accidents from a recent Esri conference (8:50 minutes). It provides a good overview of geoAI and an interesting application to illustrate what’s currently possible with one of the main GIS software systems.

Click here for a transcript of the GeoAI: Machine Learning meets ArcGIS video.


OMAR MAHER: Artificial intelligence is all over the news-- deep learning, self-driving cars, robots will take over the world, you name it. But it really boils down to three main categories. AI is the paradigm of computing to get machines to be as smart as humans. Machine learning is a subfield of AI. It's about learning from data to derive rules and detect patterns instead of being explicitly programmed by humans. And deep learning is a specific machine learning technique using deep neural networks, which is really good with high-dimensional data like voice, images, and video.

But maybe you're asking yourself what does this have to do with ArcGIS and my work in 2018. Do we have machine learning in ArcGIS today? The answer is yes. We do have machine learning tools in ArcGIS, and we can integrate with deep learning and machine learning engines as well. You can use the existing machine learning tools in ArcGIS for three main things-- predictions, clustering, and classification.

For example, for prediction, you can use geographically weighted regression to predict the impact of global climate change on local temperature. For clustering, you can use space-time cubes to analyze the fading and emerging hotspots across space and time for phenomena like accidents. And you can use support vector machines for classification to classify impervious surfaces from high-resolution imagery to better plan for floods. In addition to that, ArcGIS can integrate with machine learning and deep learning engines as well.
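To make the classification idea concrete, here is a minimal sketch using scikit-learn's support vector machine classifier. This is not Esri's implementation; the two-band "pixel" values and labels below are invented stand-ins for spectral features extracted from high-resolution imagery.

```python
# A toy SVM classifier distinguishing impervious from pervious surfaces.
# The feature values are synthetic, illustrative reflectance-like numbers.
from sklearn.svm import SVC

# Synthetic training pixels: [band_1, band_2] values.
# Label 1 = impervious surface (e.g., pavement), 0 = pervious (e.g., grass).
X_train = [[0.80, 0.70], [0.90, 0.80], [0.85, 0.75],   # bright, impervious
           [0.20, 0.60], [0.15, 0.55], [0.25, 0.65]]   # vegetated, pervious
y_train = [1, 1, 1, 0, 0, 0]

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# Classify two unseen pixels: one bright, one vegetated.
predictions = clf.predict([[0.82, 0.72], [0.18, 0.58]])
print(predictions.tolist())  # [1, 0]
```

In a real workflow the feature vectors would come from imagery bands rather than hand-typed lists, but the fit/predict pattern is the same.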

And that's what I want to show you today. We are going to use ArcGIS Pro with scikit-learn in Python, a powerful machine learning engine, to predict the accident probability per segment per hour in Utah. And before jumping to machine learning, we asked ourselves, what could cause an accident in the first place? Is it more like weather factors, like snow, rain, and fog? Is it temporal aspects like time of day, day of the week, rush hour maybe? Is it spatial aspects like proximity to intersections or the road width or the road curvature?

There might be tons of variables. And the kind of data that we need to train our model is really large. We're talking about seven years of accident data-- 400,000 accidents, 500,000 segments. It's nearly impossible for any human to analyze this manually and find the deep correlations and predict. But what about passing all of these data inputs to the machine to let it help us find the patterns and predict those risk segments?

We're going to do three main things. First, we're going to use ArcGIS Pro to prepare our training data and explore the data. Then we're going to pass it to scikit-learn to train the machine learning model. And finally, we're going to see the predictions visually using ArcGIS Online versus the actual accidents. So why don't we just see this in action? So we're bringing all of these data points to ArcGIS Pro. We bring in the road network layer and all of the collisions. We spatially join them with the road network layer. The average daily traffic, as you can see, differs according to segment, the intersections, whether there are signal lights or not. We get weather feeds from about 27 weather stations in Utah.

And you can notice that many accidents are happening near intersections. And many of them are happening on roads with high curvature. So we extract those spatial features, like road sinuosity and proximity to intersections and speed limit and others. And finally, we add the billboards data set to see if there is a correlation between the location of billboards and accidents. And all of this data processing and preparation could be automated in Jupyter Notebooks using Python.

So we use ArcPy to start creating our geodatabase, extract the data from online and offline sources-- the data of accidents, intersections, and others-- and start doing our spatial feature extraction. So for every segment, we want to calculate the proximity to the intersection, the speed limit, the degree of sinuosity, and many other spatial parameters. And finally, we join the weather feeds and accidents to the road networks.
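The sinuosity feature mentioned above is simple to compute even outside ArcGIS. The sketch below is an illustrative stand-alone version (no ArcPy): sinuosity is the along-path length of a road segment divided by the straight-line distance between its endpoints, so a perfectly straight segment scores 1.0 and curvier roads score higher.

```python
import math

def sinuosity(points):
    """Sinuosity of a polyline given as a list of (x, y) vertices:
    along-path length divided by the straight-line distance between
    the endpoints. 1.0 = perfectly straight; higher = curvier."""
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    straight = math.dist(points[0], points[-1])
    return path_len / straight

# A straight segment has sinuosity 1.0 ...
print(sinuosity([(0, 0), (1, 0), (2, 0)]))
# ... while a dogleg between the same endpoints is curvier (about 1.414).
print(round(sinuosity([(0, 0), (1, 1), (2, 0)]), 3))
```

In the actual workflow this calculation would run per segment over the whole road network, with coordinates in a projected coordinate system so distances are meaningful.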

Now, we have our training data ready. We start passing it to scikit-learn to train our machine learning model. We use the Python API. To do this, we import scikit-learn, and we start loading our training data set. So, as you can see, for every accident in the last seven years, we have about 40 different spatial, weather, and temporal features-- temporal features like hour of the day, day of the week, month; spatial features like sinuosity and length and proximity to intersections; and weather features like snow, rain, fog, and more.

One thing to mention is that it would have been extremely challenging to prepare this training data set and extract those features without ArcGIS Pro. We have used lots of geoprocessing functions to prepare this data set. Now, we are ready to train some models. We're using a powerful machine learning algorithm called gradient boosting. Not only is it good at predictions, but also at explaining why those predictions are happening. We split our data into training and test data-- 90% training, 10% testing for validation. We play with some hyperparameters, like the number of trees in the model, the depth of each tree, and some other factors. And we finally start training our machine learning model.
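As a rough sketch of that training step, the snippet below fits scikit-learn's gradient boosting classifier with a 90/10 train/test split and then reads off the feature importances. The feature names, the synthetic data, and the rule generating the labels are all invented for illustration; they are not the Utah dataset or the presenter's actual schema.

```python
# Hedged sketch: gradient boosting on synthetic "accident" data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative features: [sinuosity, hour_of_day, snowing (0/1)]
X = np.column_stack([
    rng.uniform(1.0, 2.0, n),   # road sinuosity
    rng.integers(0, 24, n),     # hour of day
    rng.integers(0, 2, n),      # snowing flag
])
# Synthetic rule: accidents are likelier on curvy roads in snow.
y = ((X[:, 0] > 1.5) & (X[:, 2] == 1)).astype(int)

# 90% training / 10% testing, as in the talk.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
# The importances should concentrate on sinuosity and snow, the two
# features the synthetic rule actually uses.
print("feature importances:", model.feature_importances_.round(2))
```

The `n_estimators` (number of trees) and `max_depth` (depth of each tree) arguments correspond to the hyperparameters mentioned in the talk.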

And this is the first outcome of the model. These are the top factors behind accidents based on the data that we have analyzed. You can see that there are lots of weather factors like temperature, visibility, and wind speed; temporal factors like hour of the day or month of the year; spatial factors like sinuosity or average daily traffic or the road being one-way or not. Now, we want to see the actual predictions of the model. We want to calculate the risk score for every segment and see if the actual accidents actually did match it.

Today is December 3rd. It's snowy. It's icy, and it's raining. And we want to see the area of Woods Cross. So our model predicts that the interstates, the highways, and some inner roads have some risk. The color reflects the risk. Red being the highest. Green being the lowest. Orange is still risky but not as red. So where did the actual accidents happen? Here we go. We find that nearly all of them are happening on those segments that we have predicted with high score. You can see that they are happening on the interstates, the highways, and some internal roads, like this one here, for example, and that one here, happening on this inner road, which is curvy.

Let's go to the area of South Salt Lake. It's near to downtown, and we expect lots of accidents. So our model predicts that the interstates and highways have higher risk and some inner roads. Notice that this area is a bit greenish. This area here has more risk. And the actual accidents actually happened on the highways and many of the inner roads as well. You can see a pattern here that many of those accidents, especially in the inner roads, are happening near to intersections like this one, that one, and this one here.

Finally, we can see the area of South Valley Regional Airport. These are the predictions, and those are the accidents, again, happening on the highways and some inner roads, this curvy road. We have used machine learning to understand why accidents happen and predict the risk per segment.

And that's one application. We have many other applications. We have used deep learning with satellite imagery to detect objects at scale. And we have used it with historical origin-destination data to accurately predict the arrival times of vehicles. And we have used machine learning with GPS traces from vehicles from some oil companies to automatically detect new road segments and automatically register them to the network. Thank you.

Credit: Esri Events

The second video is an interview with Nigel Clifford, the CEO of Ordnance Survey, the UK's national mapping agency, in which he talks about the future of AI, and geoAI in particular (4:35 minutes).

Click here for a transcript of the AI and Deep Learning will make findings of geospatial data more accurate and meaningful video.


AMIT RAJ SINGH: Joining me right now is Nigel Clifford of Ordnance Survey. How is Ordnance Survey using artificial intelligence and automation?

NIGEL CLIFFORD: We are both researching it and we're using it. And that's the way that you get progress. So it's not just about theory. It's also about practice. So we are investing in academic research. We're sponsoring some PhDs at local universities, looking at deep learning and machine learning. But then we're also bringing that back into the business.

So when we look at aerial imagery, we're looking at auto change detection, where we are taking our aerial imagery, we are classifying the features, we're comparing those features automatically with previous versions of the map, and therefore, we're able to spot the changes which are occurring.

So that is a form of machine learning. It's a form of artificial intelligence. But I would say we're still very much looking at the tip of the iceberg of what can be provided here. So we're very excited about it. We're investing in it, but there's still a way, way, way to go in terms of the full potential.
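The auto change detection workflow Clifford describes can be sketched at its simplest as differencing two classified grids: classify each epoch of aerial imagery into feature labels, then flag cells whose label changed. The tiny hand-made arrays below are stand-ins for real classified imagery, not Ordnance Survey's actual pipeline.

```python
# Minimal change-detection sketch: compare two classified label grids.
import numpy as np

# Classified grids from two survey dates (0 = grass, 1 = building, 2 = road).
previous = np.array([[0, 0, 2],
                     [0, 1, 2],
                     [0, 1, 2]])
current = np.array([[0, 1, 2],
                    [0, 1, 2],
                    [1, 1, 2]])

# A cell "changed" wherever the two classifications disagree --
# here, two new buildings have appeared on former grass.
changed = previous != current
print("cells changed:", int(changed.sum()))
print("changed locations (row, col):", list(zip(*np.where(changed))))
```

A production system would of course classify the imagery first (the hard, deep-learning part) and filter out spurious per-pixel noise, but the comparison step reduces to exactly this kind of grid difference.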

AMIT RAJ SINGH: How do you see it fundamentally changing the way we work?

NIGEL CLIFFORD: If we allow ourselves to think very freely, then it should help us in terms of decisions, it should take away a lot of the more mundane activities that we're involved in. It should also allow us to come up with more answers because it's going to be able to probe and analyze datasets, which the human mind would just never be able to cope with.

So I think the impact of this on human society is going to be very profound. And one of the aspects that I spoke about during the presentation is, that gives rise to some interesting issues for insurance companies, for governments, for public bodies, for private bodies, when you've got an artificial intelligence, which is making decisions. How far do you need to be able to explain those decisions to the citizen, or the customer, or the buyer of a service? So there's some huge potential there. But also, there are some speed bumps that we're going to have to navigate before we get up to full speed on artificial intelligence.

AMIT RAJ SINGH: And how do you see geospatial fitting into this change?

NIGEL CLIFFORD: Geospatial, at one level, it's another huge dataset, which can, therefore, be utilized by machine learning and deep learning to help make its findings more accurate. It's also going to be a huge user of machine learning and artificial intelligence. So as I was saying, talking about change detection, that's one obvious area. But also, talking about things like autonomous vehicles or smart cities, those are going to be geospatial related. So we sometimes call geospatial the golden thread, which links so many different datasets, and so I think geospatial is going to be absolutely at the heart of making sense of these trillions of bits of data which are going to be in the ether and being looked at by big machines.

AMIT RAJ SINGH: Because of artificial intelligence, we might lose plenty of jobs. Some estimate that we will lose 40% of the workforce. How do you see this for a country like India, which is heavily dependent on human labor?

NIGEL CLIFFORD: I think one of the key aspects is it allows further levels of innovation, it allows further levels of capability to be brought to bear to the problems of any country, or company, or institution. So I would be optimistic and say, actually, it's going to provide more answers and provide more freedom for individuals to make a profound impact on their country. So I would be optimistic about it rather than pessimistic.

AMIT RAJ SINGH: How do you see artificial intelligence and deep learning help us in our third industrial revolution that we are talking about?

NIGEL CLIFFORD: One example would be it would help manage risk. So as we go into the next generation of services or the next generation of health issues, I think it will help us come to conclusions more quickly. It will also help us look at more options, more rapidly, and it will also help us paint some of the risks more clearly and more precisely than ever before.

So I think the use of this in predictive modeling is going to be one of the big steps for cities, for individuals, for companies, for governments. So I think that's a really powerful change, that you don't have to go and do something before you begin to understand the implications, you can model it out and model it in fine detail. That's never been able to be done before.

Credit: Geospatial World


If you want to learn more about geoAI, have a look at this 2018 paper by VoPham et al. At the very least, it will be a good resource if you come back to the topic of geoAI in the future.

Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology.

Finally, I’d like you to consider a few examples of AI and geoAI applied to recent emergency management problems. The first is a presentation from Robert Munro, CTO of Figure Eight, at the recent CogX AI conference (18:47 minutes). While not specifically focused on geoAI, geospatial problems figure prominently throughout his presentation. Contrast this perspective with what we saw from Esri in the earlier video. The second video is a quick recap of the surf rescue video you saw in the UAV exercise in Lesson 2.

Click here for a transcript of the AI for Disaster Response video.

DR. ROBERT MUNRO: Thank you. It's great to be here this afternoon and to talk about a topic which is really dear to my heart. How can we make sure that AI can help us respond to people when they're most in need? I'm Robert Munro. I'm the CTO of a company called Figure Eight. Until recently we were known as CrowdFlower. You might be familiar with that as the company name. Recently rebranded. We make the most widely used software for annotating training data and build complete AI systems that go from raw data to deployed models. So if your car parks itself, your music is based on recommendations, your fruit is scanned between farm and table, you're probably using AI that we ship.

But today I'm going to talk about one particular application, AI for disaster response because this was actually my path. My path in artificial intelligence was not a typical one. Out of undergraduate while I'd studied AI, I didn't really expect to have a career in it because there really weren't careers back at that time.

And so I went and worked as an engineer globally, and I had this one experience in the mid-2000s. So I was working for the United Nations High Commissioner for Refugees. I was working in refugee camps in Liberia and in Sierra Leone, where I was living at the time, and we were installing solar power systems at schools and clinics supporting refugee camps so that these schools and clinics could be more self-sufficient.

And it was while I was there at one of these very remote clinics in Liberia that I was standing there. We were already kind of late getting in, so I had to build this system, we had just two days, we had to move on to the next place. And someone came up to me in this village and they're like hey, we think some new refugees just came over the border from Cote d'Ivoire into the neighboring valley, but we don't know much about them. And of course, they came up to me because we were the ones there working for the UN Refugee Commission and installing solar power.

But frustratingly, we couldn't find out anything about them. It was just one valley over. We didn't know if there were 10 or 10,000 refugees there. And what was especially frustrating was that I had five bars of cell phone reception. I had perfect cell phone reception and no doubt some of the refugees did as well, and they were probably bouncing their cell phone signals off that same tower that I was.

But even if I could connect with them, they would have spoken one of a half a dozen local languages for which there certainly weren't machine translation systems and other AI that we took for granted even 15 years ago, like search engines and spam filtering, it wouldn't have worked in any of those languages as well. So even if we could connect, there wasn't much we could do.

Ultimately in this use case, we just had to report back to the capital Monrovia that maybe there are refugees there. We don't know how many. We weren't able to find out. So that really motivated me, because I thought, well, you know, I'm not really needed to be on a roof drilling in solar panels, a lot of people can do that. Someone employed locally can do that. But I had studied AI, so I thought, well, how can I help AI be adapted to more languages? And so that's what motivated me at that time to move to Silicon Valley, where I got a Ph.D. focused on natural language processing for health and disaster response at Stanford looking at how we can adapt to low resource languages in this context.

And while I did that I continued to work in disaster response. And so the first time we were able to deploy AI in a disaster response was in 2010. I'm sure a lot of you remember in 2010 a very large earthquake hit Haiti in January of that year. More than 100,000 people were killed immediately, and more than a million people were left homeless, and so working with a number of people in the international response community, we were able to set up a phone number, like a 911, or I guess I should say, I forget, triple one in the UK? The equivalent of 911? 999. All right. Everyone who's visiting, take note in case of emergency later.

So we were able to set up the equivalent of a 999 service in Haiti because most of their cell phone towers were still working. Calls weren't getting through, but text messages were. So on broadcast radio, we could advertise that people could call the service to report their needs or resources if they had any, and we could link them with international responders. But we had this problem in that the majority of text messages were sent in Haitian Creole, the one language that most people spoke there, and the majority of the international response community only spoke English as their common language.

So messages like you could see on the left, which were really important. So a hospital running out of supplies, someone undergoing childbirth, someone needing search and rescue, couldn't be understood by the people coming into the country. So it fell on me to find and manage people who spoke both English and Haitian Creole to do real-time translation, categorization, and mapping of these messages so that a plain text message sent in Haitian Creole could be an English report categorized with the exact longitude and latitude for the international response community.

So in 48 hours I was able to find and ultimately manage about 2,000 members of the Haitian diaspora who were able to join us from 49 countries worldwide and in real time complete this translation and use their local knowledge in a way that just simply wouldn't be possible if you weren't from that region. We were also integrating with AI systems here. So taking a message in Haitian Creole and its English translation, we were able to feed that parallel data to machine translation engines at Microsoft and at Google, and they quickly released the first ever machine translation systems for Haitian Creole and English within a week of the disaster, based in part off the data that we were able to give to them for emergency messages. And as many of you who work in machine learning will know, the fact that these were somewhat messy social messages and they were about the topics of health and disaster response means that the machine translation systems were then also better on messages related to health and disaster response.

One of the big takeaways here was the importance of engaging the local community in this process. So just imagine that we'd already solved the problem of machine translation and you had a message that looked like this. Sacred Heart Hospital in the village of Okap is ready to receive patients. So looking at this map here, who can see Okap on this map? No one? I'm going to zoom in a little. It's up here. Anyone? Okap? No? I'm going to zoom in once more. See it now? No, right? It's difficult to see. So Okap is actually slang for Cap-Haitien. And once you're told that, it kind of makes sense. So Cap-Haitien with a C-A-P in the language becomes K-A-P, the same way a hard C and a K might alternate in English and German. O is a marker for a location, just by coincidence, as it can be in Gaelic. So these follow linguistically consistent rules, but if you don't know what the slang word is, you could spend a long time looking for this. And Cap-Haitien is the second largest city after Port-au-Prince in Haiti. This isn't a small city. So your knowledge of the existence of the city might not be enough.
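The name resolution the volunteers did by hand can be pictured as a lookup from slang or local names to canonical gazetteer entries. The sketch below is a toy illustration only: the alias table and coordinates are approximate, invented stand-ins, not an actual gazetteer or the system used in Haiti.

```python
# Toy gazetteer with an alias table for local slang place names.
# Coordinates are approximate and illustrative.
GAZETTEER = {
    "Cap-Haitien": (19.76, -72.20),
    "Port-au-Prince": (18.54, -72.34),
}
ALIASES = {
    "okap": "Cap-Haitien",      # slang: hard C -> K, O- locative prefix
    "potoprens": "Port-au-Prince",
}

def resolve(name):
    """Return (canonical_name, (lat, lon)) for a possibly-slang name,
    with (name, None) if nothing matches."""
    canonical = ALIASES.get(name.lower(), name)
    return canonical, GAZETTEER.get(canonical)

print(resolve("Okap"))
```

The hard part in practice, as the talk makes clear, is building that alias table in the first place: it takes speakers with linguistic and local knowledge, which is exactly what the diaspora volunteers supplied.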

So working with these Haitians worldwide, they were able to use their linguistic knowledge, but also their local knowledge. Because if you're from there, you would actually know that Sacred Heart Hospital was about 15 kilometers south of the city itself in a smaller town called Milot. So again, something very important if you're evacuating, in this case by helicopter, people to a given location.

So to share one example of people collaborating to find this out. So here we have two people collaborating, Delilah in San Francisco and Apo in Montreal, to try and geo-locate a message. So, in this online chat with people coordinating, Delilah says, I need Thomassin, Apo, please. So where is Thomassin? And Apo immediately replies, here is the longitude and latitude. It's in this area after Petion-Ville; Google Maps isn't there. And if you look at Google Maps at the time, you can see that yes, this is just a bend in the road, none of the suburbs or roads are labeled there.

But Apo grew up there, so he can drop a pin exactly, which generates a longitude and latitude, which means that the responders can go out and address this issue. In this case, it was a breech birth. This was probably going to be a troubled birth regardless. And so for a small period of time following this disaster, they had some of the best physicians in the world able to respond to this particular medical emergency.

I think it's very interesting that, you know, he says, because I knew this place like my pocket-- that is, I know this place like the back of my hand. And Delilah says, well, thank God you're here. And it's interesting to think about where here is. I mean, did here mean the online chat room? Is it San Francisco? Is it Montreal? Or is it Haiti? It shows how people can collaborate globally in order to work together to solve some of our biggest problems.

So that was my path. Disaster response, Stanford University. I founded a few companies in the AI space in San Francisco. Immediately before joining Figure Eight, I was running product for natural language processing and translation at AWS, helping convince them to be multilingual in their first ever products, which I think itself helps a lot of people. And the reason this is important is that I wonder, you know, what's your intuition? How much of the world's conversation daily is in English? On a given day, how many of the world's conversations are in the English language?

So the answer is 5%. Just 5% of the world's conversations are in English, and that's fairly consistent. But about 95% of AI only works in English or only works well in English. And if you speak a minority language, you're disproportionately more likely to be the victim of a man-made or a natural disaster. And also education between men and women will favor men for dominant languages. You get the same divisions across ethnicity.

In fact, race in parts of the Amazon is determined by your language more than your actual ethnicity. So this linguistic bias is also a gender and a racial bias that we have in our AI today. I think this is one of the biggest problems that we're facing, is what AI technologies are available for everybody in the world.

One really interesting and also linguistic use case that I've worked on, again, before I joined the company, but using Figure Eight's technology, is epidemic tracking. So this is the famous map from John Snow. Like, not that Jon Snow, but the 1800s John Snow, who discovered a cholera outbreak using geographic information mapping just down the road here in London.

And so disease outbreaks are still the largest killer in the world, and no organization is tracking them all. You might have seen movies where people have great big screens and it's a heat map and it flares up every time there is an outbreak. That doesn't exist anywhere. And the budgets for those movies probably exceed the budgets of any one organization actually tracking disease outbreaks globally.

And this is pretty scary, because in the last 75 years, we've only eliminated one human disease, smallpox, and the amount of air travel has increased greatly. And we definitely put a lot of resources into stopping terrorists getting onto flights, but a pathogen is more likely to sneak onto a flight undetected. It's certainly been responsible for many more of the world's deaths. And the reason that this is a linguistic problem is that 90% of the world's pathogens come from this thin band of the tropics. This thin band of the tropics has 90% of the world's ecological diversity, including things that can kill us.

By maybe coincidence, maybe not, the same thin band of the tropics has 90% of the world's linguistic diversity. So what that means is that the first time that somebody notices an outbreak, they're speaking about it in one of 6,000 different languages, and chances are that language is not English or Spanish or Mandarin. It's not a dominant language. We can actually go back in time and find reports of disease outbreaks weeks, months, sometimes decades before they're finally put in front of virologists and identified as being a new pathogen that we need to track. Every single transmission is a possibility that these could mutate to become more fatal, and so we want to get ahead of any outbreak as soon as possible.

So in the case of bird flu, we can find cables coming out of southern China weeks ahead of when this was identified as H5N1, as a new strain of the flu that hadn't been identified before. In the case of swine flu, we can find local newspaper reports in Mexico months before it was identified as a new strain of the flu, with telltale symptoms, like all the young people are sick in the village at the moment. So if you're a virologist or an epidemiologist, you're like, oh, right, this is obviously a new strain of the flu. But in this paper, it was just remarked upon and missed. In the case of something like HIV, we can go back decades to find this kind of information.

So simply finding these reports as early as possible can help prevent epidemics. So EpidemicIQ is an initiative that was trying to take millions of reports worldwide and find out which of these are relevant to disease outbreaks, so that the 15 to 20 per day which really are new disease outbreaks that we care about can be put in front of the right epidemiologists and virologists for review. So those of you who speak Russian, Arabic, or, what's the third language up there, Mandarin, you'll see that these are disease outbreaks categorized by the type of disease, the location that it's in, and the number of people infected.

So we're able to use machine learning to filter anything that might look like a disease, put this in front of crowdsourced workers, micro taskers, for review, and have them correct or reject the given machine learning analysis. Finally, we put that in front of the domain experts for review. With all of that information from the analysts going back to the machine learning models, the system continually updated, adapting in about 15 different languages, including some here in Europe.
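That human-in-the-loop pipeline can be sketched schematically: a cheap model flags candidate outbreak reports, human reviewers accept or reject them, and the reviewed labels become new training data. Everything below is an invented stand-in -- the keyword "model", the reviewer rule, and the reports are toy illustrations of the loop's shape, not EpidemicIQ's actual system.

```python
# Schematic human-in-the-loop filter for outbreak reports.
OUTBREAK_TERMS = {"fever", "outbreak", "cholera", "sick"}

def model_flags(report):
    """Stand-in ML filter: flag reports containing outbreak-like terms."""
    return any(term in report.lower() for term in OUTBREAK_TERMS)

def human_review(report):
    """Stand-in reviewer: here, rejects sports 'fever' as irrelevant."""
    return "football" not in report.lower()

reports = [
    "Many children sick with fever in the village",
    "Football fever grips the capital",
    "Market prices stable this week",
]

candidates = [r for r in reports if model_flags(r)]     # ML pass
confirmed = [r for r in candidates if human_review(r)]  # human pass
# Reviewed labels feed back so the model keeps improving.
training_updates = [(r, True) for r in confirmed]
print(confirmed)
```

The value of the loop is that the model only has to be good enough to narrow millions of reports down to a reviewable handful; the humans supply precision, and their corrections continually retrain the model.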

So in 2011, there was an outbreak of E. coli in Germany that had a number of fatalities. And we were able to show, using this kind of online tracking of newspaper articles and social media in German, that we could get ahead of the European CDC in identifying the outbreaks and where they occurred.

It's something I've used it in smaller languages as well. So in partnership with UNICEF, using the same human in the loop AI process to adapt to a number of local languages in Nigeria. In this case, it was called the First 1,000 Days program. So from when a woman learns that she is pregnant, for the following 1,000 days through birth and beyond, tracking things like the number of vaccinations, the ongoing weight and changing weight of the child in order to help with maternal care. Again, having language independent AI that local analysts within Nigeria who spoke those languages could encode and then adapt to their given use cases.

And then we're just starting to see more use cases in computer vision in addition to natural language processing. So the company I was at at the time, we hosted aerial imagery analysis following Hurricane Sandy to identify what was not just a marshland but an actually flooded region, which FEMA used to help decide where to deploy their resources. Right through to some interesting use cases on our product today where people are using computer vision in sub-Saharan Africa to track elephants and people who are near elephants in order to identify areas where there might be poachers encroaching on the herds.

Something we're really proud to announce just a few weeks ago now is that we've made eight new open datasets available on our platform, all of which tackle either a social good problem or a particularly hard problem in machine learning. And two of them are particularly related to disaster response. So this is a map of some of the different dedicated workforces that we have on our platform, and as an extension, they speak a number of different languages.

So in this case, this is speakers of Swahili helping us, in partnership with the Red Cross, create Swahili recordings of a number of health and disaster response related messages, with translations to English, so that online translation systems and speech recognition systems can become better across all these languages. So that's Swahili translation, speech, and transcription.

And then happy to announce that as of tomorrow we'll have a new dataset available which is a collection of text messages, including the ones in Haiti, across a number of disasters, encoded for a number of common disaster response topics. And again, it's an open dataset, so we're really hoping that anyone in the machine learning community can experiment on this dataset, tell us trends about what people communicate in disasters that we don't already know, and then also come up with machine learning solutions that can help us automate or semi-automate the disaster response process going forward.

Thank you all for your time. I think I burned all my question time right now. I'm getting a nod back there. But I will be available at the speaker room after this session, immediately following. All right. Thank you all.

Credit: CogX 2018

Artificial Intelligence Processing at the Edge: The Little Ripper Lifesaver UAV (2:05 minutes)

Breaking news: Here is an update on Little Ripper and its cousin CrocSpotter.

Click here for a transcript of the Artificial Intelligence Processing at the Edge: The Little Ripper Lifesaver UAV video.



EDDIE BENNET: Australia has a very long coastline, and that presents us with some unique challenges. Drones provide us with a great opportunity to get a lifesaving service outside of where we traditionally have those services now. In 2015, we used the first drone to fly along a beach and look for people in trouble and to save lives.

We use the drones in three ways. The first is for surveillance. So we're able to get vision and understand situations, understand when people might be getting into trouble. We have warning devices, loudspeakers fitted to the drone, so that we can actually warn people that they are about to get into trouble or what to do if they are in trouble. And the third thing is that we can intervene and deploy an automatically inflatable rescue device from the drone which can support up to four people. And it keeps them afloat in the surf, in the water, until they can be rescued.

Earlier this year, on the Central Coast, a sandbank collapsed and 12 people were washed into deep water. A lifesaving drone was sent to the area and quickly located exactly where they were. Lifesavers responded to the area very quickly and all 12 people were saved.

TONI BURKETT: If we could get that longer flying time, it would be really helpful for our job as lifeguards, as well as being able to fly in all weather, such as high winds and rain, because that's when we're going to need that aerial perspective. And if the integration of the lifesaving technology in the drones could be improved, such as the SharkSpotter, that would be really helpful in our jobs as well.

EDDIE BENNET: So if we can use artificial intelligence to help us detect people and respond to situations, then that is going to be a wonderful opportunity to save lives. Little Ripper Lifesaver have very advanced lifesaving drones. Intel has very advanced artificial intelligence. Imagine a world if we combine those two together. What a great opportunity to save life anywhere in the world.

Credit: Intel Newsroom


Finally, I’d like to refer you back to the Digital Humanitarians textbook. As you know, this book provides a contrasting perspective to many of the government and private sector perspectives we consider. This is also the case with geoAI. If you have the time or want to come back to it later, I recommend Chapter 6: Artificial Intelligence in the Sky.


  1. Post a comment in the Emerging Theme Discussion (L8) forum that describes how you think geospatial artificial intelligence (geoAI) might integrate with geospatial systems for emergency management. What are the big challenges associated with geoAI that we should focus our attention on solving?
  2. The initial post should be completed during the first 5 days of the lesson.
  3. Then, I'd like you to offer additional insights, critiques, a counter-example, or something else constructive in response to your colleagues on two of the following 5 days.
  4. Brownie points for linking to other technology demos, pictures, blog posts, etc., that you've found to enrich your posts.
    NOTE: Respond to this assignment in the Emerging Theme Discussion (L8) forum by the date indicated on the course calendar.

Grading Criteria

This discussion will be graded out of 15 points.

Please see the Discussion Expectations and Grading page under the Orientation and Course Resources module for details.