GEOG 583
Geospatial System Analysis and Design

Technology Trends - Making Sense of Massive Sensor Data

This week we're going to take a closer look at the challenges that lie ahead when it comes to massive sensor data collected by satellites, drones, and even your phone. New small satellites are going up all the time, resulting in an explosion of new (and massive) imagery data sources. Drones have transitioned from their initial roots in defense to become widely available (and usable) in the general public. And most of us now carry around a phone that collects an enormous amount of sensor data, whether we might want it to or not.

To begin thinking through the challenges posed by the new spatial data deluge, I'd like you to watch a talk by Chris Holmes, a product architect at Planet and a key figure in multiple other geospatial organizations like the OGC. Chris' 2018 Keynote for the FOSS4G North America conference is about a concept he calls the Queryable Earth. This talk covers a very wide range of examples and challenges, but it all connects to a core theme around making sense out of massive spatial data.

Video: FOSS4G-NA 2018 Keynote: Towards a Queryable Earth (45:10)

Click for a transcript of the FOSS4G-NA 2018 Keynote Video

CHRIS HOLMES: So thank you all for coming. I'm excited to talk about this vision of queryable Earth and how we all get there together. So a little bit about me. I wear a few different hats, and you'll see these perspectives throughout the talk.

The primary hat is Planet. I work there three days a week doing a bunch of product stuff. I'm also Radiant Earth's first technical fellow. This gives me time to work on standards, and you'll see a lot of that. I'm also on the board of the OGC, the Open Geospatial Consortium, which makes geospatial standards, and I've been attempting to guide them to embrace more open source principles.

My background is really through and through open source geospatial software. So GeoServer was the first project that I worked on. I was a founding member of OSGeo, so talking to this community really feels like home, so I'm pretty excited to be here and deliver this message.

Today, I'm going to talk about how we can help geospatial information and software live up to the promise that we all feel. I'm going to go out on a limb and posit that most people in this room understand the power of geospatial. You've devoted your professional lives to it. You've made a conscious effort to be here. You're learning more and collaborating with others. You've seen that it can give incredible insight, can improve decisions, and make the world a better place.

But how many times have you tried to explain that to somebody else and, you know, you spend a couple of minutes and talk about it, and then you just end up saying, eh, it's like Google Earth, you know? Like, it's hard to really articulate the power of geospatial, and so you just leave it at that.

So I actually believe it's possible to make that power obvious to everyone. It's just that we're still quite early in this journey of geospatial information living up to its promise.

So at Planet, our CEO uses this term queryable Earth, which I find a really useful vision for bringing the power of geospatial to everyone. The goal is to index the planet and make it searchable just like Google indexes the web.

This is a mock-up of that vision where a normal user can get information about the planet. The user can see dashboards, they can see port activity in Vancouver, floods in Houston. And they're not having to learn a bunch of really specific geospatial stuff to get a picture of what's happening.

And indeed, people should be able to ask queries about the Earth. How many miles of new road were built in China this month? How much deforestation happened in the state of Para in the last week, or in the last day? And could you notify me automatically when that happens in an area where deforestation is illegal and we can go do something about it? Are corn yields in Iowa predicted to be more than last year?

So before I dive deep into this queryable Earth vision, I'd like to explore two major trends in geospatial. I believe we're at a turning point in the geospatial world that opens up the potential for really bold new directions. And the first of those is the explosion of data.

The big driver of this data explosion has been mobile devices. And not just the actual devices, but the trillions of dollars invested refining every single component that powers a cell phone. And the benefits of this gargantuan investment are trickling out to make an impact in many other domains.

In location, the most obvious is that every connected device now generates location data with ease. This is a GPS chip. You can embed that anywhere. There's other ways to get location. So this investment is enabling location in all our devices.

Planet Labs took that technology investment from consumer electronics and leveraged it to make small, highly capable spacecraft. This is the Dove satellite, the result of several years of agile iteration. My colleague Jessie talked about this in 2015, how we're building satellites with a mission to image the entire planet every single day.

I'm going to skip over most of the Planet story today. It's been articulated elsewhere and it's pretty exciting, except to report one thing which is last year, we launched more than 150 satellites and we've accomplished that first mission. We're imaging the entire Earth every day. This is a super cool video of 88 of our satellites being injected into space, which makes up a good bulk of that constellation.

So those Planet constellations are pulling down tons of data every day, over a million images, with 180 Dove satellites. And those have been complemented by assets from two acquisitions: the RapidEye satellites and 13 SkySat satellites that are capable of 80-centimeter resolution.

And it's not just Planet producing lots of data. DigitalGlobe has five incredible satellites producing millions of square kilometers. Airbus has a bevy of satellites. The Sentinel series has launched six satellites already. And all of these have many more planned.

Launch is also getting cheaper. So there's more and more companies, and that barrier to get into space is less. So there's new nano rocket companies coming on with smaller launch vehicles, but they'll still carry a number of CubeSats and smaller satellites. This is a Rocket Lab launch, and Planet was one of the first payloads to go up on it.

The other part of the data explosion is that the cloud has actually made a lot more data accessible. So even though we were producing lots of data with Landsat, with others, normal people and even experts couldn't really use it unless they spent lots of time accessing it.

So this Landsat on AWS project was originally a collaboration between Planet, Amazon, USGS, Mapbox, and Esri to make the data on the cloud accessible. And the cool thing is once it was accessible, the use of the data just shot up. So the effective data explosion was much greater in the last few years than just the actual amount of new data because it's accessible.

So Amazon has actually built on the success of that project and has many, many data sets available on their cloud. Data's available on other clouds, so this accessing with ease has really contributed substantially to this data explosion.

And these trends are only accelerating. This is the number of satellites launched. And rest assured, many more are planned. And remember that each satellite launched in 2017 is far more capable than one of the ones launched in the '90s. So it's just generating massive amounts of data.

So how is remote sensing handling this data explosion? You know, we have this data online, more coming on every day. And the big problem is that the standard remote sensing process only scales with more people.

This is an image I found online that is indicative. There's a person sitting at a desktop, and that's kind of a key step in this traditional process. And all kinds of really cool analysis is possible, but it depends on a person sitting down, operating their machine, bringing their human ingenuity for the best results. So you need that person in the loop.

But with millions of images coming down, it's just too much information for traditional remote sensing. Even if we hired 10 times or 100 times the number of people, it's just still too much.

So that brings us to the second major trend transforming geospatial. Right as this explosion of data is becoming too much for anyone to deal with, we've had incredible advances in computer vision that have the potential to absolutely transform how remote sensing works.

So my guess is most people in this audience have some exposure to the advances in computer vision. It's this ability to automatically identify what's in a photo. Pull out these cars, pull out that building.

Most of these advances in computer vision in particular are based on this fundamental data set called ImageNet. It's got over a million images with 1,000 object classes that is what the algorithms learn from. And every year, they hold a contest to see who is best able to identify objects in images.

And it's been improving drastically every year. In 2015, they actually got more accurate than the human standard error rate of 5%. And the key to this drop is what's called deep learning.

So deep learning works more like the human brain. It has to be trained on images. The computer model is fed millions of images, told what they are, and it sort of figures out itself how to tell them apart. It uses a bunch of different layers. This is a sort of in-browser animation of these different layers. One might identify lines, one circles, one faces. And then from all of that, it'll give an answer.

And the cool thing is once these are trained, that part is really computationally expensive. But once they're trained, they operate really fast. This is, again, running in a browser. So pretty powerful.

So with all this incredible computation power, the industry has gotten really good at drawing boxes around cats and dogs. Thankfully, it's started to go beyond just boxes to what they call image segmentation, which is clearly a lot more relevant to our field of geospatial and imagery.

To me, the one sad thing about all this incredible research, though, is that most of the applications are for helping people tag their cats in photos or identify their friends in selfies. And I appreciate these as a consumer, but what if that same technology could be used to help save our planet?

With the amount of imagery overwhelming normal remote sensing workflows, I'm more and more convinced that computer vision and deep learning is what will help us make use of all this imagery being generated. I think we'll see a leapfrogging of traditional remote sensing techniques using deep learning algorithms to identify relevant change on Earth.

We're starting to see some fairly incredible results, even early on. I've seen models reach 90% accuracy with only 100 sample images. Now, getting to 100% accuracy is obviously a lot harder, but traditional remote sensing has been often quite happy if they can get to 85%, even after lots of optimization.

So there's been early success identifying a number of objects in land cover classes. At Planet, we've got an imagery analytics team that has been exploring what's possible. Many others are doing similar experiments. These are in no way exclusive to Planet. It's just where these examples are coming from.

Ships have proven to be one of the easiest objects for most, since you just check over water. The shapes are pretty distinctive. Planes are another popular object. Counting cars for retail trends. This is an experiment we did. We used the high resolution where you could make out the cars, combined it with the medium resolution to get the area, and identified those trends daily.

Land use classification uses the image segmentation deep learning technologies. This one is a pretty-- three class, not super complex, but it's really quite powerful, especially if you run it weekly. So you can see the red being the change that's happening, identifying what is forest and what is not forest.

Similar techniques are used for buildings. This is a project to detect urban growth that we did with World Bank and Tanzania to-- could we identify the buildings automatically and see that growth and change?

Road detections use similar techniques. And there's more niche objects like oil well pads and rigs to determine where new infrastructure is coming in, or identifying pools for insurance purposes. Shipping containers are another one. You want to get that daily trend of the global economic activity.

So these are all pretty awesome. But if you look closely, few are used every day. Like, few are in production at scale. You know, they're promising proofs of concept, but I've definitely been waiting to see, when is somebody going to use this in their process?

I've realized, though, that there's actually an example that was recently written about of computer vision being used for real GIS at scale that's been successfully deployed for a couple of years. I think in December, there was this great blog post by Justin about Google Maps and comparing it to Apple Maps, and Google's advantages there. And their big advantage is computer vision.

So you can see, these are pulled from his post. This kind of incredible architecture detail that's pulled out in the upper left and lower right. Trailers are being able to be identified, and even the sheds behind houses are pulled out. So they're doing this totally automated.

This algorithm is so good, it often runs ahead of the roads. On the map, you can see the road's only halfway done. The picture clearly has the road finished and all the buildings, but the buildings are up to date. They're getting images or buildings in towns of 100 people. You know, far from the urban cores that others do. And just this level of architectural detail of the antennas on top.

And from their press release on this when they updated the buildings, they made it clear. They said, these building footprints complete with height detail are algorithmically created by taking aerial imagery, using computer vision techniques to render the building shapes.

This says to me that computer vision and machine learning for remote sensing are already here. It's not some future concept that might work. Google is using it at scale, even if it's just for buildings.

And I'm very much reminded of this quote by William Gibson, that the future is already here, it's just not evenly distributed. Google's realized the power of computer vision, and they're just living in this future that's not yet available to everybody else. It hasn't been distributed.

But going back to my opening statement, how are we going to bring this capability to everybody? I think it's time to start thinking about distributing that future to more than just Google.

The coolest thing about this trend in deep learning, from the perspective of people into open like me, is that most all the code powering these frameworks is open source. Google, Microsoft, Facebook, Amazon, Baidu have been investing huge amounts of money in deep learning. But instead of keeping that code private, they're open sourcing their entire deep learning frameworks, as well as actual models, so you can take a model from them that's trained on some data and adapt it to your problem.

One of the reasons for this is the field has been driven by academics traditionally, and they've enabled kind of true open collaboration across everybody. This is U-Net, a published paper. It's a really good image segmentation algorithm that's been applied to geospatial problems.

Now, before you start thinking these companies are amazing altruists helping the world, I do want to explain what they do keep private. The key to any good open strategy is to hold back one key piece that others can't easily replicate. And in the case of Google and others, it's what's called the label training data.

While ImageNet has about a million images, Google has an internal data set with 300 million images. And they published this paper showing how just the more images you put at a problem, the better the algorithm does, with fine tuning and without fine tuning. So that's the piece they're keeping private.

Their Cloud Vision API, you know, anyone can tap into, use their AI to identify labels on images. This is one I put up. It identifies snow and winter and ski, and it's pretty right on. But that is not available to everybody. You can get the frameworks that run it, but the training data that went into it is not available to everyone.

So this label training data is what they keep private. And it's really a key to making the algorithms work, having sufficient data to describe what is in the images so they can learn. And I think this quote is right on: the differentiator for most of the successful deep learning algorithms is having lots of labeled training data.

So we're never going to replicate Google's Vision API unless we're gathering literally hundreds of millions of images. But the opportunity is to use those same algorithms and use those same models and frameworks, but apply them to Earth imagery. If we build up labels of Earth imagery, we can tap into those investments.

For imagery, the chips end up looking something like this. This is planes chipped up into little pieces. This was made from Planet's open California data set. And that is used to train algorithms.
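To make the chipping step concrete, here is a minimal sketch that cuts a georeferenced scene into fixed-size training chips using rasterio. The chip size, file paths, and output naming are illustrative assumptions, not details of Planet's actual pipeline.

```python
import rasterio
from rasterio.windows import Window

CHIP_SIZE = 256  # pixels; an illustrative choice, not a Planet-specific value


def make_chips(scene_path, out_template):
    """Cut a georeferenced scene into square chips suitable for model training."""
    with rasterio.open(scene_path) as src:
        profile = src.profile.copy()
        # Write plain (untiled) GeoTIFF chips so the output block size
        # never conflicts with the small chip dimensions.
        profile.update(width=CHIP_SIZE, height=CHIP_SIZE, tiled=False)
        profile.pop("blockxsize", None)
        profile.pop("blockysize", None)
        for row in range(0, src.height - CHIP_SIZE + 1, CHIP_SIZE):
            for col in range(0, src.width - CHIP_SIZE + 1, CHIP_SIZE):
                window = Window(col, row, CHIP_SIZE, CHIP_SIZE)
                chip = src.read(window=window)  # (bands, CHIP_SIZE, CHIP_SIZE)
                profile["transform"] = src.window_transform(window)
                out_path = out_template.format(row=row, col=col)
                with rasterio.open(out_path, "w", **profile) as dst:
                    dst.write(chip)


# Example usage with hypothetical paths:
# make_chips("openca_scene.tif", "chips/chip_{row}_{col}.tif")
```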

So I do want to pull back one level and emphasize that deep learning for geospatial is not just about imagery. To truly identify valuable information about objects, we're going to have to pull in a lot of other data sources.

So locations of ships and planes, which is billions of points; cell phone telemetry, where people are located, which you can extract and aggregate to get updates for roads and other infrastructure. And indeed, most every sensor on the internet of things can be spatially enabled. So there's just going to be billions and billions of points of data generated.

And the key is to fuse all these data sources together using deep learning to make sense of them. Get to this real-time model of the world with lots of different inputs to get scalable object detection. Identify roads and ships from space, combine it with readings from cell phones, sensors, and other data sources.

So now that we understand the two major trends reshaping the geospatial industry, we can return to this notion of queryable Earth, which promises to bring this power of geospatial insight to everyone. I think this dashboard does really nicely communicate the high-level vision. And while this user interface is interesting, what's most interesting to me-- and I think most people in this room-- is what's behind it. You know, what do we need to do to make it a reality to go from the massive amounts of geospatial data to a simple dashboard with user notifications?

So Google's proven it's possible to fully automate the detection of new buildings at scale using computer vision. For a queryable Earth, we need to do that with any object that can be identified in imagery.

And then we need to do it every single day. You can see the immense amounts of human activity. Constant change. This is made with the Planet Stories application and animated to just see that activity. And you want to be able to pull out information from that.

And then you want anyone to be able to plug into the stream of data they care about. Not everyone wants all the information streaming at them. They might just care about ships. Give me some real-time updates about that particular thing, because it's relevant to me.

And it's not just information, like I mentioned, or from imagery. AIS data complements detections from space to get the name of the ship and the destination. Real-time traffic gives a picture of the conditions now. Cell phone telemetry can complement road detections from space. So get to that near-real-time model by fusing a bunch of different sources.

This is another example of information feed done with Planet data by Raster Foundry. And you can see the ships are identified, but the really important thing is these trends. You want to look at what's happening in Singapore. Most users don't want to look at an image every single day. They just want to know what's happening broadly and be notified if anything has changed.

So an information feed abstracts remote sensing in GIS to just the information about the objects that can be queried. I think the key is that it's data and software together. It's taking raw data, processing it, and making it available through an API that makes the data much easier to use.

With Google Maps, you know, I think the Google Maps API is a great example of this. The software is relatively simple. It was pretty easy for open source developers to replicate the cool JavaScript map interface.

But the value is the billions of dollars of data, but then it's so easy to use because it adapts to your needs. You don't have to download a whole bunch of data and sort it out. You get this really nice interface and programmatic way to access that.

So I'd encourage us all to not get too fixated on either the software or the data. The power comes from both together. And ultimately, the rest of the world wants good data, but they need geospatial experts to put it into forms that are actually accessible and usable.

So to me, the goal is geospatial insight incorporated into every decision. Track your organization assets like you watch your Uber car. Pull out higher level insights with ease.

So how do we get to this queryable Earth? How do we build the infrastructure to support the information feeds constantly updating with valuable information? I think the most important step is to make the geospatial world more accessible, modern, and, yeah, open. There's a lot of pieces that go into this.

So information feeds are a slice of the pie. This is really a bunch of different building blocks. And this is the section of the talk where we'll geek out a bit. I'll try to keep things fairly high level on each of them. There's been a ton of great community effort on each of them, so I'd like to draw attention to each area, but also attempt to stitch together how these work with one another to form the base infrastructure of this new vision of geospatial accessible by everyone.

So I think the first step is really establish a bedrock of infrastructure on the cloud. I've been writing about this notion of cloud native geospatial, which is not just using new tools like Kubernetes and the cloud native stack but actually rethinking how geospatial would look if it was built from the ground up for the cloud.

Because this queryable Earth is not going to be built with desktop software. There is just too much data. It's terabytes and petabytes. You can't use anything but the cloud. You're not going to fit it on your desktop GIS.

So there's a few cornerstones of this cloud native approach, and a really great community effort to lay this foundation. The first is Cloud Optimized GeoTIFF, also known as a COG. It's entirely possible due to some awesome advances in GDAL, particularly creative use of vsicurl to do remote calls for information, plus a special formatting of the GeoTIFF that enables this streaming access, just like an MP3 that you stream online.

Landsat on AWS was a precursor to this. Today all Planet data, along with many new data sets, is available in the format. This is a site that shows COGs in action. I can just enter the URL of that piece of data directly online and get a map. It'll zoom in to the map. I can go to full zoom level. I can share that with somebody else. And this traditionally took a lot of processing, but you can just access the data directly.
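To illustrate the kind of streaming access described here, the sketch below reads a small window from a remote Cloud Optimized GeoTIFF over HTTP; GDAL's range-request support (vsicurl) fetches only the bytes needed. The URL and bounding box are placeholders, and the window assumes the file is stored in geographic (longitude/latitude) coordinates.

```python
import rasterio
from rasterio.windows import from_bounds

# Placeholder URL; any publicly readable Cloud Optimized GeoTIFF would work here.
COG_URL = "https://example.com/imagery/scene_cog.tif"

# GDAL's HTTP range-request support (vsicurl) means only the bytes covering
# this window are fetched, not the whole multi-gigabyte file.
with rasterio.open(COG_URL) as src:
    # Assumes the file is in geographic (lon/lat) coordinates; otherwise the
    # bounds must be expressed in the dataset's own CRS.
    window = from_bounds(-122.5, 37.7, -122.3, 37.9, transform=src.transform)
    data = src.read(1, window=window)
    print(data.shape, src.crs)
```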

It's backed by a scalable lambda tiler. So running on AWS lambda. That means it can scale infinitely. It's run by Radiant Earth, this one. Anyone can actually try out this tiles.rdnt.io and put your own COGs in there and get tiles into your web map.

And there's growing support for COGs in a variety of software and data providers. Google Earth Engine now actually lets you take any operation that you can think of in their Engine and export it as a cloud-optimized GeoTIFF. This one composites and clips Planet imagery in the Bay Area to a COG.

This is a slowed down image of how this composite works. They kind of select all the images and get to a cloud-free selection. And then it produces this three or four-gigabyte file that we can put online.

Once that's online, this is using the tiles.radiant.io. This is a false color composite that's created on the fly because that COG was computed with its near-IR channel. And then we can render it right there.

The COGs can also be used by desktop GIS, or at least the cool open source ones so far; hoping proprietary GIS catches up. QGIS here demonstrates how the data can be streamed. So yeah. Typically, you'd have to download a three-gigabyte file, wait anywhere between two minutes and two hours or longer depending on your connection, before you could start analyzing and working with it. This is actually streaming the real analytic bytes, so you can do analysis on it locally without having to download it all.

And then one of the coolest advances is from the EOX guys in Austria: a pure JavaScript cloud-optimized GeoTIFF reader. So people say serverless is so cool, using lambda. This actually uses even less servers than a serverless architecture. It's just simply a COG on a cloud read directly with JavaScript. It pulls exactly the bits that it needs. And so you can do client-side analysis. You can do NDVI because it gets the full data. It's not just the rendered image.

Azavea also just added the ability to import any COG into their project called Raster Foundry. So again, this is the exact same file that we were doing all these visualizations with. With Raster Foundry, you import it. You just reference the COG. You can go to your project and then use all their cool tools. So they have annotation. You can start to make your own labels directly from that file.

And then you can also make use of their lab tools. So these are pretty awesome. You can put in a formula. This is NDVI band math and then a water mask. Just type it in like an equation. It then generates that into a GUI where you can hook it up to your project. You can move it around. You can share it, this kind of visual model builder of the processing you want to happen. You select your bands. You can control the histogram color.

But then with that, you can see any node on this network rendered right there. So this is comparing the NDVI with the mask. And again, this is not downloaded, not in their system. This file is still sitting on the cloud where I put it, but they're able to do all their raster processing streaming directly. So everyone's able to do what they want, utilizing just the bits that they need.
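As a concrete example of the kind of band math described above, here is a minimal NDVI calculation with a water mask applied, using the standard formula NDVI = (NIR - Red) / (NIR + Red). The band arrays, mask source, and toy values are illustrative; this is not Raster Foundry's implementation.

```python
import numpy as np


def ndvi_with_mask(red, nir, water_mask):
    """Classic NDVI = (NIR - Red) / (NIR + Red), with water pixels masked out.

    red, nir: arrays of surface reflectance; water_mask: boolean array,
    True where a pixel is water (how that mask is built is up to the user).
    """
    red = red.astype("float32")
    nir = nir.astype("float32")
    denom = nir + red
    ndvi = np.where(denom != 0, (nir - red) / denom, 0.0)
    return np.ma.masked_array(ndvi, mask=water_mask)


# Example with small synthetic arrays:
red = np.array([[0.1, 0.2], [0.3, 0.05]])
nir = np.array([[0.5, 0.6], [0.4, 0.06]])
water = np.array([[False, False], [False, True]])
print(ndvi_with_mask(red, nir, water))
```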

So the next piece of core infrastructure for cloud native geospatial is this collaborative aspect called SpatioTemporal Asset Catalogs. The core idea came out of a discussion a few of us had at FOSS4G North America in 2015, which led to the Open Imagery Network, to enable the simplest possible sharing of imagery. Just put a GeoTIFF stored on the cloud, ideally a COG with JSON sidecar files, and openly license it for any use.

OpenAerialMap's a really cool project built on top of Open Imagery Network that lets anyone upload drone imagery for disasters. It builds a nice RESTful API on top of the network and a GUI to utilize it.

STAC came about as every satellite imagery provider was making a RESTful imagery search API, powering a GUI to search imagery. This is four of them. There's many more of them. And they're all really similar, but all slightly different. And they all talk to an API, but those APIs, you couldn't just write one client for. So a group of 23 people from 14 organizations met and hashed out a core spec: can we just make this search operation a little more interoperable?

There's now implementations by Boundless, Planet, Harris. CBERS is a really cool sensor out of Brazil and China that's all available in STAC and COG. And the design principle behind STAC is to prioritize flexibility and ease of implementation in order to make it so far more data gets exposed.

Traditionally in the geo world, people stand up these large portals like their individual data portal that you can go and download, and they're standing up indexes. They're standing up visualizations. This kind of says, everyone shouldn't have to stand up their own database just to make their imagery available for search, so let people just put it in this format.

So the core is a small JSON file. It can be shared statically or through a dynamic API. You can choose. And then this core record lets everything that is online be represented as a web page. So people can just refer to a single location.

Right now, if I want a Landsat image, I can't just give you a web page to get it. You have to know how to access it and download it and use it. This enables me to point at a particular location online. And this is a JavaScript tool that runs on top of the JSON and generates this full HTML page purely from putting it in the STAC format. And then if you have a cloud-optimized GeoTIFF in there, it'll make a slippy map. You know, you can zoom in and share it and actually use it in that location.
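To give a feel for how small that core record is, here is a heavily simplified, STAC-style item written as a Python dictionary. Real STAC items carry additional required fields (such as stac_version and links), and the IDs and URLs below are placeholders rather than entries in any real catalog.

```python
# A heavily simplified STAC-style item, written as a Python dict for readability.
# Real STAC items carry more required fields; the IDs and URLs below are
# placeholders, not references to an actual catalog.
item = {
    "type": "Feature",
    "id": "example-scene-20180516",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-122.5, 37.7], [-122.3, 37.7],
                         [-122.3, 37.9], [-122.5, 37.9], [-122.5, 37.7]]],
    },
    "properties": {
        "datetime": "2018-05-16T18:00:00Z",
    },
    "assets": {
        "visual": {
            "href": "https://example.com/imagery/example-scene-20180516_cog.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
}
```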

So STAC and COG are really complementary pieces of technology. This page actually gives you a lot of what people use a desktop GIS for. They just want to download and look at the image and see that there's flooding here. It'll also let you go full screen, share it with a friend.

So I really want to emphasize the kind of radical simplicity of these two technologies. They make the data available and accessible doing the simplest possible thing, and then other tech can grow and adapt and build cooler things on top of it. So I hope within a few years, we see most all imagery, both public and private, accessible as COGs and STACs.

So the next challenge is ensuring all imagery data produced is actually putting its best foot forward to computer vision specialists. So I'm actually relatively new to the imagery field, and I've been in the open source geo community for a while.

But when I came to it, I was actually really surprised at how much time is spent on just getting the data ready for analysis, and how intimidating it can be. Most of the work is on this stuff.

These are images that you get, and they're not aligned by pixel. You have to set them to align pixel by pixel on your own, because most providers aren't really doing pixel alignment. They're not going to co-register it for you.

We make people figure out the cloud masks, you know? Yeah, most providers provide cloud masks, but most of them aren't usable, especially for a particular problem. And certainly not things like cloud shadows and haze. We push that on the user to get the clouds out of their image.

Atmospheric correction. You know, atmosphere is different every single day. It looks different. But we're expecting people to correct that themselves as opposed to wrapping up a surface reflectance product for them.

And we make people understand projections, you know? You're new to this field, and you don't understand why this image doesn't line up with that image, because its numbers are different, a different projection. So let's cut up tiles in ways that people can make use of.

Together, all these things, sort of processing data in this way to make it so people can just use it is known as Analysis Ready Data. So they can focus on the analysis problems at hand and not all that pre-processing.
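As a small illustration of what "analysis ready" buys the user, the sketch below applies a provider cloud mask so downstream statistics ignore cloudy pixels. The band layout, mask convention, and toy values are assumptions for illustration only.

```python
import numpy as np


def apply_cloud_mask(image, cloud_mask):
    """Return a masked array where cloud-flagged pixels are hidden from analysis.

    image: (bands, rows, cols) array of reflectance values.
    cloud_mask: (rows, cols) boolean array, True where the provider's mask
    flags cloud (and ideally shadow and haze, if such a mask is available).
    """
    mask3d = np.broadcast_to(cloud_mask, image.shape).copy()
    return np.ma.masked_array(image, mask=mask3d)


# Toy example: a 2-band, 2x2 image with one cloudy pixel.
img = np.arange(8, dtype="float32").reshape(2, 2, 2)
clouds = np.array([[False, True], [False, False]])
masked = apply_cloud_mask(img, clouds)
print(masked.mean())  # overall mean, ignoring the cloudy pixel in both bands
```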

And the next-- there's a number of people that are starting to produce data like this. The next step is really to think about cross-provider analysis ready data. There's Landsat ARD, there's Sentinel ARD. But how do you make it so all of those work together? We're organizing a workshop to bring people together in August from industry, government, and academia to try to create truly cross-provider ARD, hopefully establish some baseline standards that others can use.

So then the next piece of infrastructure is geoprocessing software that can scale. There's a new breed of these cloud native engines that can scale out to thousands of nodes to do processing, be able to crunch the tsunami of data that's being generated. At Planet, we run 30,000 or more virtual machines just to crunch through the data coming from our constellation. And that's just a single constellation. It'll take even more machines to process the world's data.

The cool thing is that most all these engines use open source software at their core, so they're really building on what's come before in the open source geospatial world. And we need to continue to evolve the open source tools to be more cloud-aware. But super great that they're built on open source.

And then we're also starting to explore new stacks that are built from the ground up for the cloud. So hopefully, there's more and more of that effort to really understand that cloud native compute.

And ideally, these systems are able to deliver this ARD on demand. So instead of just producing an analysis ready data once, let a user customize the parameters they want, get the ARD for their particular problem.

OK. So once you've got the imagery properly formatted and accessible, the next piece we really need to get right is this label training data that I mentioned before. And I think it's a really fundamental piece to understand.

Thankfully, there's been a number of organizations that are starting to build up these open databases of label training data. They're primarily in service of contests, a set of contests that they run. But they're keeping their databases open for people to continue to train on and build algorithms with. And this is a great start, but I'd like to see us go even further.

There's a small, early community of collaborators talking about the idea of aligning the various efforts, providing some clear guidelines for future efforts so that all the open data produced by different organizations can be used together in a single model as opposed to a bunch of different classes done in different ways.

This can help us get data from many different locations. And ideally, you want an algorithm that works just as well in Atlanta, in Cairo, in Cape Town because it's got label training data in each of those. So this key is to get this representative global sample.

I don't think these need to be massive mapping efforts. We don't need to attempt to map lots. We just need a few key areas that are different from one another really well labeled. So thinking about one kilometer by one kilometer chips that have stacks of data over time that have a whole bunch of really good labels.

In many ways, this is kind of all the pieces previously mentioned coming together to power computer vision and deep learning algorithms. So it's the open imagery. It's sampled into chips. It's processed into analysis ready data. That data is available as COGs to stream and use. Those are indexed in SpatioTemporal Asset Catalogs to be searched while also used as a basis for these object labels.

And then ideally, we have community-evolved ontologies. Be able to share a meaning of a road, but also define our own particular objects if we want. And then expose those all as GeoJSON and open formats that people can use. And this should be open for all under a single, well-understood license for the data.
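Here is a minimal sketch of what one such label might look like as a GeoJSON Feature. The property names and values are illustrative placeholders, not a published ontology.

```python
import json

# A minimal GeoJSON Feature representing one labeled object in a training chip.
# The property names ("label", "chip_id", "source") are illustrative only;
# a shared ontology would pin down the actual vocabulary.
label_feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[36.80, -1.29], [36.81, -1.29],
                         [36.81, -1.28], [36.80, -1.28], [36.80, -1.29]]],
    },
    "properties": {
        "label": "building",
        "chip_id": "example-chip-0042",
        "source": "manual-annotation",
    },
}
print(json.dumps(label_feature, indent=2))
```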

To build this, we need a community around this. ImageNet was built with a community of people building it up. You know, I think we can leverage OpenStreetMap, which is the leading community effort in geospatial. With a few tweaks, it can align with this and get to really verified, validated, labeled data.

We're also going to have to improve the tools for this. And I think it's really about the people and the machines together. This is another example from Raster Foundry where the algorithm first tries to identify ships. Then the human comes in and verifies whether that was a ship or not, or marks it as not sure. And this type of labeling that is informed by the computer vision, as opposed to just trying to do it from scratch, I think is pretty powerful.

And then there need to be tools for people to create those labels. A computer vision person should have that all abstracted and they shouldn't have to worry about the details of GIS, and have those work with all the common machine learning environments.

And then the final step for this ecosystem is get to some open machine learning models. You can go to Google and get a generic image model and clone that for your purposes. There should be a baseline set of models for Earth imagery that people can customize and tweak further.
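To show what "clone a generic model and customize it" can look like in practice, here is a minimal fine-tuning sketch using PyTorch and torchvision: a generic ImageNet-trained backbone gets a new classification head for a handful of land cover classes. The class list, hyperparameters, and fake batch are illustrative assumptions, not a production training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. forest / water / built-up / other; illustrative only

# Start from a generic ImageNet-trained backbone and swap the classifier head
# so the network predicts Earth-imagery classes instead of cats and dogs.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 3-band, 256x256 chips.
chips = torch.randn(8, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(chips), labels)
loss.backward()
optimizer.step()
```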

So the true, final step is to wrap all this great infrastructure into interfaces that truly abstract out GIS and remote sensing. A normal developer shouldn't have to learn a bunch of special APIs just to make use of information derived from imagery.

And I believe these pieces are the core of a queryable Earth infrastructure. We need that ARD processed to stack and COG, using those machine learning models, and producing these information feeds. That's kind of our stuff to do. But the interface to that world I think needs to be more accessible. We can't just make a bunch of different interfaces with tons of operations and expect the world to thank us for our incredible information.

This is all the specs at the Open Geospatial Consortium, but a few of us have been pushing the OGC to radically simplify their approach, improve the process to become more relevant to developers and to the world. The first result of this effort is a new version of the Web Feature Service specification.

I've got a soft spot in my heart for WFS, because it's the first spec I ever worked on, and I think that version 1.0 wasn't so bad. It kind of got worse and worse, but-- or just more and more that you had to implement. Version 3.0 is a complete reboot that's far simpler even than WFS 1.0 was.

And the work's being done completely in the open using tools and processes pioneered by open source developers, like using GitHub and its issue tracking and pull requests to actually create the spec. They fully embraced OpenAPI as a way to specify the endpoint, with REST and JSON at the core.

And a few months ago we did a hackathon at USGS in Fort Collins and had a bunch of the open source geospatial communities show up. And that kind of looking at the spec and implementing it makes me feel a lot better about the spec.

There were some really great open source contributions. The Geo-Python crew showed up in force and has continued working on a really nice WFS3 implementation. As you can see, it's a lot more webby than previous versions. You can explore just by clicking around, as opposed to having to read a big spec to even make your first call.
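To illustrate how "webby" this is for a developer, here is a minimal sketch that walks a WFS3-style (OGC API - Features) endpoint with plain HTTP and JSON. The base URL is a placeholder, and the f=json query parameter follows the convention some implementations (such as pygeoapi) use for requesting JSON output.

```python
import requests

# Placeholder endpoint; any WFS3 / OGC API - Features server follows this shape.
BASE = "https://example.com/wfs3"

# 1. Discover what collections the server offers.
collections = requests.get(f"{BASE}/collections", params={"f": "json"}).json()
first_id = collections["collections"][0]["id"]

# 2. Pull a page of features from one collection as plain GeoJSON,
#    filtered by a bounding box; no GML or XML required.
items = requests.get(
    f"{BASE}/collections/{first_id}/items",
    params={"limit": 10, "bbox": "-123.0,37.0,-122.0,38.0", "f": "json"},
).json()

for feature in items.get("features", []):
    print(feature["id"], feature["properties"])
```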

And we got OpenLayers front ends to various servers built during the hackathon. And there was a GDAL WFS plugin that people have started to use and integrate.

So I believe WFS3 has the potential to make the geospatial world more accessible to a broader group of developers. And it can be a basis for interoperable information feed APIs. There needs to be some extensions for things like notifications, but I think it's a powerful direction to pursue.

And there's really good momentum in OGC for WFS3, and indeed for redoing the standards baseline to take this more simple and accessible approach. I know many people have been skeptical of OGC the last few years, and I certainly have been too. But I hope more of you will join and help shape it to build the types of standards we're proud to share with the world.

OK. So I just hit you with a lot of specific parts. I want to tie this back to the overall goal. To power the queryable Earth, we need advances in a number of directions. Continue to work on cloud native geospatial software, make data more accessible and open, create machine learning models that are open to automatically pull out information from images, and put those into APIs that any developer can understand and work to abstract out GIS and remote sensing.

So I showed part of this diagram before, how these pieces fit together. But the key is this added piece of the APIs that abstract out GIS and remote sensing. We need to do our work on queryable Earth, but then be sure that the interface to everyone else can be tapped into by anyone without geospatial expertise.

So I believe the geospatial world is going to be radically transformed by this explosion of data combined with deep learning and computer vision, and this ability to automatically extract information from images. And we're going to move from static maps to near-real-time dashboards of information.

This is one of my favorite projects by CARTO, this dashboard that they did for the mayor of New York. And maps are part of it, but the main interaction is this heads up display. You can dig into a map, but mostly you look at the overall view. And it's powered by weekly updates of information from the various departments, but soon you'll be able to do something like tap into Planet feeds to show progress on construction or get real-time telemetry from Uber or cell phone data to get a pulse of the city.

So I'd like to end with a challenge for our FOSS4G community. Instead of just trying to do a better job of what proprietary GIS does, let's go beyond. Let's take up a bold vision of what geospatial could be and make it integrated in everyone's lives.

Let's aim for these information feeds powered by computer vision, built natively for the cloud, surfacing the world's information, and fully interoperable with the broader tech world. Let's build this infrastructure for queryable Earth, providing an up-to-date, living model of the state of our planet that anyone can tap into.

And it may be obvious, but it's worth underscoring that this queryable Earth vision won't be possible without collaboration across the entire geospatial industry. It's more than a single company can possibly accomplish, and that means all of you.

And I believe this is really important. The problems of the world are now interconnected with drastic changes to our planet happening every day. But many of those changes pass by unnoticed.

I'm certain this group of people can have a momentous impact on the world. And indeed, I fear for the world's survival if we're not able to get an accurate pulse and MRI scan and X-ray and sonogram for what is happening.

So I want to thank each of you for any geospatial open source code you have built, as it's truly the foundation of the future of geospatial. And I'm excited for you to keep working away at the ground level, as we need that solid infrastructure.

And I encourage us as a community to look beyond this current paradigm of GIS to build cloud-native infrastructure and embrace computer vision to leapfrog what's come before. I believe that open source will help define the future of geospatial, and I believe our community's impact on the future of the planet has only just begun. Thank you.

Credit: Planet. "FOSS4G-NA 2018 Keynote: Towards a Queryable Earth." YouTube. August 28, 2018.

In the second video, I want you to check out how Facebook approaches data integrity with respect to their maps. As Danil Kirsanov and Jeff Underwood described at the 2019 State of the Map US conference, Facebook heavily leverages OpenStreetMap to provide their mapping services. They receive hundreds of thousands of requests from users to make updates to the maps they show. So they are dealing with massive data of a different sort (humans are the 'sensors' here), and maintaining the integrity of OSM while also improving it when possible are key goals.

Video: Machines and Mappers: How AI and Tooling can Assist with OSM Data Integrity (20:22)

Click here for a transcript of this talk.

JEFF UNDERWOOD: Maintaining a database of millions and millions of features is no easy feat. OpenStreetMap contains millions of features, thousands of mappers, and maintaining and ensuring data quality has always been a challenge throughout the entire history of the project. At Facebook, we've been thinking about ways to maintain the quality but also improve it and ensure that only good data gets on the map.

So today, we're going to talk about how AI and tooling can do that. So to start, you may be surprised to learn Facebook has maps. We do indeed. And in fact, it's in lots of places. It turns out that high quality maps make for better user experiences, so we have maps in places and search, in events, travel, humanitarian causes and jobs, or even in things like if you get lost at a conference, maybe sharing your location on Messenger with a friend.

In fact, there's over 140 different use cases currently that use maps at Facebook in our library of apps. So we now use OpenStreetMap globally for the base map at Facebook, but on top of that, we have different layers of other additional data, so things like places data, job listing locations, pins. So really, most of the times when you interact with a map on Facebook, it's probably using different data than actual OSM.

But we're here to talk about data integrity today, mostly. So we approach this problem in two different ways, proactively and reactively. One of the ways we act reactively is through map reports from our products. When there's an issue, users can submit a report to us saying, hey, like this town name is wrong, this road looks incorrect. And it comes to one of our teams here.

Currently, we get about 600 reports a day that are maps-relevant, and our team goes through each and every one of them. There's been 192,000 so far since we switched over to OpenStreetMap, and these get kind of put into two different buckets. If they turn out to be actually Facebook data, we do a fix on Facebook. So in this case, OSM was correct, but in our places data the city name was misspelled. A user let us know and we got it fixed.

More typically though, we find an issue with OpenStreetMap, and when that happens, our team goes in and fixes it. So directly from reports, we fixed 388 issues currently, but beyond that we've also discovered broader scale issues such as generic names. We found name equals yes, name equals building. Thousands upon thousands of these on the map, and we have a set of tasks to fix these.

We have a partner in Indonesia, HOT Indonesia. They fixed 15,000 of them themselves using MapRoulette, and outside of that, we have made public tasks where we fixed over 30,000 so far. Plenty to go. So if you've been on MapRoulette in the last six months or so, you may have seen some of these incorrect name challenges we've created, and there's plenty of these.

You know, we work on these ourselves too, publicly. Our team goes in and fixes things, but we found that the community really engages too. If you provide the information, people want to go fix the map themselves. So you can see these are just different buckets, but in the far left one, the community actually did like 6,000 fixes themselves just because they knew that it was there and they wanted to make the map a better place.

And shout out to Martijn for the new MapRoulette. It is so good. But beyond that, we want to be proactive as well, and that means stopping bad data from ever getting on the map. One of the ways we've done this is internally, we've used iD for several years now for the AI-assisted road mapping we do, and we built internal tools. One of them was a validation panel similar to JOSM's, because iD just didn't really have that at the time.

It would tell you some stuff when you went to upload, but it didn't tell you as you mapped. It didn't really give you that feedback as you went. So we built this panel, and we thought that it might be great for mainline iD, and showed it to Brian and Quincy, and they thought it was awesome. They made it even better. They added like the ability to turn off [AUDIO OUT] off checks.

And auto fixes made it really nice, and now that's part of mainline iD, and you know, it's a way to teach mappers as they go. When they don't make a connection, they can now see immediately that there's a validation there now. I can fix that. And it even tells them what to do.

We also work on the HOT Tasking Manager. You know, many parts of the world are still very unmapped, and HOT projects are one of the ways that we fill in the map, and this usually involves a lot of fairly new volunteers. So you know, we want to find ways to help them make better mapping decisions, so we partner with HOT and Dev C to make assisted mapping for the Tasking Manager.

Dev C made this thing called the ML Enabler, and with that, you can add additional data to tasks, like how hard they're going to be, you know, how many roads are missing, how many buildings are there, how dense is it in terms of features. We can guide users, like if it's a new mapper, maybe go to an easier task where there's less to do.

Once they get there, we have integrated our RapiD editor so they can use AI-assisted mapping, and you know, that makes it much easier for anyone to map, but new people especially, because it shows you where the roads are already and it gives you a suggested tag, which many new mappers struggle with. Adding tags is not necessarily a natural thing you do.

But it's not just about tooling or AI. It's also about ground truthing and working with local communities. We found that having partners in countries really does make a big difference, and AI-assisted mapping frees up time to go do some of the ground truthing. So in Indonesia especially, our partner HOT has been using AI-assisted mapping for almost two years now, and it makes the mapping much faster.

But also, when we find areas where we're not really sure what's going on, such as in this example, there's bridges that look fairly crossable on satellite, but when we go to them in person, we see that they look very different on the ground. So we identify places and we send out a team. And then we provide this information back to the community so they can get a better idea of how to map when they encounter these situations.

And like I said, using the speed of AI-assisted mapping, we've managed to map about 95% of Indonesia at this point. Here's a short animation of daily progress through the country. So now Danil is going to go into more detail on what we actually did with our Map With AI efforts. Thank you.

DANIL KIRSANOV: Thanks a lot, Jeff. Yeah. Where should I start? About 15 years ago, satellite images became available for wide-scale mapping. At the time, there was a lot of discussion about how satellite images were going to affect the mapping process. There were concerns about privacy. There were concerns about the accuracy of the data, about the alignment.

But looking at what we have right now, we're very comfortable mapping on top of satellite images knowing the limitations. Alignment is a good example of this limitation. So I feel a very similar discussion is happening nowadays about how computers can help mapping: what computers can do, what are those boring and repetitive tasks a computer can do, and what decisions mappers would like to make themselves.

A good example is the very basic, on-the-fly verification checks. Generic verification: is there a road crossing the building? Do we need the bridge, and so on and so forth. So those checks are now very common in the iD editor, for example, and in JOSM. In the future we can imagine how those checks could evolve. For example, if you're making a map change that dramatically changes the traffic pattern in the area, maybe you should have a warning. If you're trying to map a building or a road that's not there on the satellite image, maybe the machine can tell you to double-check what you're doing right there.

Another line of work that we are looking into is suggested features. It turns out computers nowadays are very good at detecting roads and buildings and can suggest that the mapper take a look and map this particular area. And potentially we can imagine, in the future, there are other tasks where a computer can help. First of all, alignment: how to align the data between different layers of satellite images. And another very interesting direction: how to conflate and merge together data coming from different data sources.

So as you know, here at Facebook we experiment a lot with machine learning methods for mapping. We share it with the community. We get feedback from you. Sometimes it's very direct, very passionate feedback, which we take into account. And this year, actually, we think we figured something out. So we released RapiD several weeks ago. RapiD is a machine learning-assisted map editor. And this is how it works.

So this line of work starts from the Tasking Manager. So you select the area. Everybody is familiar, I guess, with the workflow. And this is a standard workflow without RapiD, right? Let's say I need to map a couple of roads.

While the mapping happens-- so what we did, RapiD, is an extension of the iD editor. And, by the way, thanks a lot to the iD editor team for their support. So here you select the road. You select the road type and commit to OpenStreetMap.

So here's the RapiD workflow. We went and detected those potential roads that could be added to the map, right? Now you click on those roads-- only on the ones that you like. You select the road types if we guessed incorrectly, which also happens. Then you address the validation checks, right? Those validation checks are just like before.

The roads that are not added to the map, the magenta roads, are not going to be committed. So it's only the roads that you select that get onto the map. So it's completely up to you which of the suggested roads you would like to use. And then you commit it to OpenStreetMap exactly as it is. So we believe this is the right balance between what the machine can do, like suggesting features, and what decisions you make in the end as the editor.

So what do we have-- oops. Let me go. There it is. How does it work behind the scenes? Basically it all starts with a very big computer vision model, a deep neural network model that can detect roads in high resolution satellite images. This is quite a challenging task, in particular because terrains can be very, very challenging.

In many cases, the machinery is advancing now, so it does it correctly. Sometimes, though, you can see a false positive. Here's a good example of a false positive, where there's a dry riverbed that looks very similar to a road. It's falsely detected as a road. And this is actually another case where the user can decide what should or should not be added to the map.

Another interesting case with a lot of tree coverage here. Again, the machines, it turns out, are quite good at detecting the roads. And of course this is my favorite example. This is Boston. So these are all the various streets the model detected in Boston.

It turns out that Boston is very well mapped, right? Nobody needs new roads in Boston. So what we do as the last stage of this process is to download the existing OpenStreetMap data-- you can see it on the left-- and only keep the suggested roads that are not yet mapped in OpenStreetMap. So these are the magenta roads, the suggested roads that you can add to the map.
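As a rough illustration of that last filtering stage, here is a simplified conflation sketch using shapely: existing OSM roads are buffered, and suggested roads that fall mostly inside that buffer are dropped. The buffer distance and overlap threshold are arbitrary illustrative values; as the speakers note later, the production conflation is considerably more involved.

```python
from shapely.geometry import LineString
from shapely.ops import unary_union


def filter_new_roads(suggested, existing, buffer_deg=0.0002, overlap_threshold=0.8):
    """Keep suggested road geometries that are not already covered by OSM.

    suggested, existing: lists of shapely LineStrings. The buffer distance and
    overlap threshold are arbitrary illustrative values, not RapiD's actual
    algorithm.
    """
    if not existing:
        return list(suggested)
    covered = unary_union([road.buffer(buffer_deg) for road in existing])
    keep = []
    for road in suggested:
        overlap = road.intersection(covered).length
        if road.length == 0 or overlap / road.length < overlap_threshold:
            keep.append(road)  # mostly outside existing coverage: a genuinely new road
    return keep


# Toy example: one suggested road duplicates an existing one, one is new.
existing = [LineString([(0, 0), (0, 1)])]
suggested = [LineString([(0.00005, 0), (0.00005, 1)]),  # duplicate of existing road
             LineString([(1, 0), (1, 1)])]              # genuinely new road
print(len(filter_new_roads(suggested, existing)))  # -> 1
```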

So this is basically how RapiD works. We released this as open source several weeks ago. So we got a lot of feedback as usual. And part of the feedback was about suggested features, what we can do. Right now you can map in several countries, and we expand the list of countries on a weekly basis.

But another feedback was how to expand this whole editing process. One ask was to add the buildings. Another ask was quite general. So as you can see, the whole workflow is not only about machine learning, right? It's about some different data source that's coming to the map, and you can manipulate it.

So we took two of those ideas. And when you're talking about an external data source, and buildings, and the United States, there is immediately one big data source that comes to mind. And this is, as you probably know, the Microsoft Buildings. Microsoft, first of all, did a great job at detecting all the footprints of the buildings in the United States. They presented this work at the OpenStreetMap conference, I believe, last year. And this is what they did.

So today we would like to show you the experimental feature, what to be the [INAUDIBLE]. Anyhow, so this surfaces the roads, right? We are already quite confident that we can detect and map the roads in the area. So this is the classic RapiD flow. But think about the buildings now, right, the 120 million unmapped buildings in the United States.

And this is experimental workflow that you can try today. And it works exactly the same as with the roads. You select only the buildings that you like. You modify them, make sure the shape is correct, change the attributes, and can add it to the map directly. So here's what you can do today.

If you would like to learn more about this project, go to mapwith.ai and read about what you can do. And you can map roads from there. For the purpose of this conference, we have an experimental endpoint, basically an experimental site where you can map the buildings today. So if you would like to try it, it's very simple.

Take a laptop, go to this site. It's going to be up for several days because it's experimental. Zoom into any part of the United States, any area that you like. Where you live, where your parents live, check it out. And give us the feedback.

So we are going to have the open session today from 3 to 4 o'clock in the next to second floor. Talk to us, try RapiD, and give us the feedback-- constructive feedback hopefully. [LAUGHTER]

Well, one more time, thanks a lot to the iD editor folks. Huge thanks to the Microsoft folks and to the Facebook team who built this tool, tested it, and released it several weeks ago. Thank you.

[APPLAUSE]

PRESENTER: We have several minutes for questions if people have them. There.

AUDIENCE: For the new buildings that you've added for Microsoft, did you do any diffing like you did with the roads so only buildings not present in OSM would be available for--

DANIL KIRSANOV: Exactly, yeah. We run some very similar conflation basically. We download the buildings on the fly and subtract the buildings that are already on OSM.

AUDIENCE: I'm curious for the conflation. Is that done in RapiD at runtime, or do you do that offline?

DANIL KIRSANOV: We do it-- actually, it's a two-stage process, because most of the things we can filter offline. But somebody else could be editing the map at exactly the same time, right? So you actually also have to do it on the fly. And that's what we do.

AUDIENCE: What were you using before OpenStreetMap for the Facebook for the first part of the presentation? And when did you switch over to OpenStreetMap?

DANIL KIRSANOV: That's a long story actually. It started before I was here. So I think I probably can not [INAUDIBLE]. I think before we were-- what we were using before?

AUDIENCE: Here Maps.

DANIL KIRSANOV: Here Maps, OK. Yeah, but this was, yeah, several iterations.

AUDIENCE: I was just wondering about the linear conflation that you're doing. Because I think I've seen some stuff where you do building centroids. You subtract the buildings where centroids exist or something like that. But what's your approach for figuring out if you're finding the same road but, like, shifted in OSM?

DANIL KIRSANOV: Yes. Actually, there's short answer and long answer. Short answer, this is very complicated.

[LAUGHTER]

Long answer, there was a really nice presentation from Microsoft yesterday. So I can connect you with those people. And there are people who are working on the conflation on our team that you can talk to later on. But, yes, for buildings it's a little bit easier than for the roads. For roads, it's a nightmare, but we'll figure it out.

AUDIENCE: So as a lazy OSM user, one thing that I would just worry about that I wanted to ask you is, is it-- how quickly can I approve the predicted roads? Because I could imagine with the example that you gave, where there's a river, if I'm going through quickly and I'm very excited about how quickly I can map the space, I might just say yes, yes, yes, yes, yes to everyone that I see. So do you press a single letter to say yes, and then you have to physically click on the next feature? Or how does that work?

DANIL KIRSANOV: Yeah, right now, basically every power comes with the power to abuse it, right? So, yes, we put in several checks for that. You need to click on each feature individually before you add it to the map. And that was the main reason why we didn't want to do those bulk approves. Yeah.

PRESENTER: One more question, anyone?

AUDIENCE: Sorry, novice question, but image resource, like, how is that? Is it constantly changing, what you're using as source imagery?

DANIL KIRSANOV: This is a great question. Yes. So we use something very similar. We use Maxar images, basically. And for this particular editor, we provide exactly the imagery that we are using for road mapping because we [INAUDIBLE] computer offline. But this is very similar to what's already in standard OpenStreetMap.

DANIL KIRSANOV: Yeah. Thank you.

Join the Lesson 9 Technology Trends Discussion Assignment.