Google Earth’s Incredible 3D Imagery, Explained

In 1968, Apollo 8 left Earth for the moon. While in orbit,
astronaut Bill Anders decided to take an
unexpected photo.
BILL ANDERS: Oh, my god, look at that picture over there. There’s the earth coming up. Wow, is that pretty.
SPEAKER: Hey, don’t take that. It’s not scheduled.
Later named
“Earthrise,” the photo captured our planet in
a way that it had never been seen before. And while the 240,000-mile trip
to get to that vantage point was, I’m sure, amazing– I mean, if you ever get
that offer, take it– if you don’t have a spaceship
and three days to spare, lucky for you,
all you need to do is click a link to get
almost the same view in about a second.
[MUSIC PLAYING]
This is the Earth, all 196.9 million square miles of it.
GOPAL: Google Maps is
for finding your way. Google Earth is
about getting lost. Google Earth is the
biggest repository of geo-imagery, the most
photorealistic digital version of our planet. We’re trying to
create a mirror world so people can go anywhere.
NAT: From
mountains, to cities, to the bottom of the ocean. Google Earth has been around
for about the last 10 years. And just like our Earth, it’s
been evolving over this time. The imagery has been
getting better and better. I was really curious to know
how is Google Earth created? How many images
actually make it up? And where do they come from?
Last year, together with Lo, I met up with Gopal and Kevin to find out.
GOPAL: So how do we
build Google Earth? The way it starts
is we look at places that we want to
collect in imagery, and then we collect it through
a variety of different ways. One is satellites. And satellites give
you the global views. And that’s all 2D imagery
that’s wrapped around the globe. When you get closer
to the ground, we have 3D data that
we collect from planes.
NAT: Yes, you
heard right– planes. I’d always assumed that
every overhead photo of the Earth I’d ever seen
was taken by a satellite. But I learned creating
3D imagery requires special conditions. So Google flies planes or, as
I now like to think of them, Street View cars with wings.
LO: What are some of the
challenges that you guys have?
KEVIN: The biggest
challenge in doing something like this is weather. Our preference is always
to have clear skies.
GOPAL: It took us a really
long time to get London, because we had to fly
over London a lot, before we got a fully
cloud-free image of London.
KEVIN: Come on in.
LO: Thanks.
NAT: Kevin told us that a typical
flight to take photos is around five hours, except
the planes aren’t going across the country, they’re
making little zigzags over the same area.
KEVIN: So it’s north,
and turn south. It’s sort of like
mowing the lawn.
NAT: This pattern
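Kevin’s “mowing the lawn” pattern can be sketched in code. This is a toy waypoint generator, assuming a rectangular survey block and made-up swath and overlap numbers; real flight planning is far more involved:

```python
def lawnmower_waypoints(width_m, height_m, swath_m, overlap=0.5):
    """Serpentine ("mowing the lawn") flight lines over a rectangular area.

    Adjacent north-south passes are spaced so their camera footprints
    overlap; overlap=0.5 means each pass reuses half of its neighbour's
    swath. All numbers here are illustrative, not Google's parameters.
    """
    spacing = swath_m * (1 - overlap)    # distance between flight lines
    waypoints, x, northbound = [], 0.0, True
    while x <= width_m:
        y0, y1 = (0.0, height_m) if northbound else (height_m, 0.0)
        waypoints.append((x, y0))        # start of this pass
        waypoints.append((x, y1))        # end of this pass, then turn
        x += spacing
        northbound = not northbound      # alternate direction each pass
    return waypoints

# A 1 km x 1 km block, 100 m swath, 50% overlap -> 50 m line spacing.
wps = lawnmower_waypoints(1000, 1000, 100, overlap=0.5)
```

Each pass reverses direction, which is exactly why the track looks like mowing a lawn rather than flying across the country.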
helps the photos overlap. And multiple cameras
help capture a place from different angles.
KEVIN: The planes have
five different cameras– one looking down, and
forward, back, left and right.
NAT: In my
mind, I’m picturing, like, photograph, photograph,
photograph, photograph.
KEVIN: Yep.
NAT: And then, something puts them all together. Something called an algorithm?
KEVIN: Photogrammetry, yeah.
GOPAL: Which is
just a fancy word for taking all of this imagery
that we collect from the plane and constructing a 3D model.
NAT: The first step
to creating a place in 3D is a little bit
of photo editing.
GOPAL: So all the
imagery is prepped. That would be removing
clouds, removing haze, color correcting. You’ll actually see
that a lot of the cities don’t even have cars in them. We actually take the
time to extract those.
NAT: Then the
3D science begins.
KEVIN: The big
breakthrough that’s happened in the last few years
has been the introduction of computer vision. The computer looks for
features within the overlapping images that are the same. We use a special
GPS antenna that allows us to know
where that camera was, so we know roughly where things
were taken and at what angle.
NAT: And this allows them to create a depth map.
GOPAL: And that’s just our
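The relationship Gopal describes, knowing where each camera was and working out how far things are, can be illustrated with the textbook stereo relation depth = focal length × baseline ÷ disparity. The function and numbers below are illustrative, not Google’s actual pipeline:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic two-view stereo: a feature matched in two overlapping
    photos taken baseline_m apart shifts by disparity_px pixels between
    them. Depth (distance from the cameras) is f * B / d. The numbers
    in the example are made up, not from Google's aircraft."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two images")
    return focal_px * baseline_m / disparity_px

# Same rooftop corner matched in two photos taken 100 m apart with an
# 8,000 px focal length: a 400 px shift puts it 2,000 m below the plane.
depth = depth_from_disparity(8000, 100, 400)   # -> 2000.0 metres
```

Doing this for every matched feature across every overlapping image pair is what yields the per-camera depth maps.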
understanding of how far the things are from the camera. And we take all of
these various depth maps from the different cameras,
stitch those together in what’s called a
mesh, which is basically a big 3D reconstruction of
the place, and we texture it. And texture is applying
the photography that we took to the sides
of these 3D buildings. It’s almost like taking pieces
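The “stitch the depth maps into a mesh” step can be sketched with a toy example: turn a fused height grid into triangles, which texturing would then paper over with the aerial photos. This is a minimal illustration, not the production reconstruction:

```python
def heightfield_to_mesh(heights):
    """Turn a fused depth/height grid (one value per ground cell) into a
    triangle mesh: two triangles per grid square. A toy stand-in for
    stitching depth maps into a mesh; real pipelines fuse many
    overlapping depth maps and simplify the result."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(x, y, heights[y][x]) for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x                       # this cell's vertex index
            triangles.append((i, i + 1, i + cols))             # upper triangle
            triangles.append((i + 1, i + cols + 1, i + cols))  # lower triangle
    return vertices, triangles

# A 2x2 grid with one raised corner -> 4 vertices, 2 triangles.
verts, tris = heightfield_to_mesh([[0, 0], [0, 5]])
```

Texturing then maps each triangle back to the pixels of the source photos that saw it.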
of paper and cutting them up. You can actually extract
the edges of something, and then stitch those
together, and then understand what that shape might be. And organic shapes are
what’s hard to render. It gets even more
complicated when you’re talking about
trees, because trees have branches and leaves. And often, you might
see them as a lollipop, because that’s as
good as we can get. But we’re getting better at
modeling those organic shapes. We did a collect of
Yosemite National Park. And we were able
to actually capture that in really high fidelity,
with all the different bends that a rock face might have.
NAT: Do you know how many different images make up what I see as Google Earth?
GOPAL: Yeah. It’s probably on the order of tens of millions. One interesting stat is to look
at what we call Pretty Earth. And this is the global view. So we have a full,
seamless image. And that comes from about
700,000 scenes from Landsat. And what we’re doing
is we’re finding the best pixel from each. So if you look at Google Earth,
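The “best pixel from each scene” idea behind Pretty Earth can be sketched as a per-pixel composite. Here each candidate pixel carries a hypothetical cloud score and the clearest one wins; the real Landsat scoring is far more sophisticated:

```python
def best_pixel_composite(scenes):
    """Per-pixel compositing across overlapping scenes: for every pixel
    location, keep the candidate with the lowest cloud score. `scenes`
    is a list of 2D grids of (value, cloud_score) tuples - a toy
    stand-in for selecting pixels across ~700,000 Landsat scenes."""
    rows, cols = len(scenes[0]), len(scenes[0][0])
    return [
        [min((scene[y][x] for scene in scenes), key=lambda p: p[1])[0]
         for x in range(cols)]
        for y in range(rows)
    ]

# Two 1x2 scenes: the composite takes the clearer pixel at each spot.
a = [[(10, 0.9), (20, 0.1)]]   # left pixel cloudy, right pixel clear
b = [[(11, 0.2), (21, 0.8)]]
composite = best_pixel_composite([a, b])   # -> [[11, 20]]
```

Run over the whole globe, this kind of selection is what produces one seamless, cloud-free image.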
it’s springtime everywhere.
NAT: To be precise, an
800-billion-pixel spring globe, which is so big, if you wanted
to print that out on your home printer, you would need
to find a piece of paper the size of a city block. GOPAL: If you took
a single computer, it would take 60
years to process that.
NAT: And you can just
keep multiplying this times all the different
levels of zoom that exist in Google Earth,
which are over 20. So even though using Earth
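Those zoom levels behave like a standard web-map tile pyramid, where each level quadruples the tile count. A quick back-of-the-envelope, assuming 256-pixel tiles (an assumption for illustration, not Google Earth’s actual storage layout):

```python
def tiles_at_zoom(z, tile_px=256):
    """A web-map pyramid doubles resolution in each dimension per zoom
    level, so level z holds 4**z tiles of tile_px x tile_px pixels.
    Illustrates the "over 20 levels" structure, not the exact layout."""
    n_tiles = 4 ** z
    return n_tiles, n_tiles * tile_px * tile_px

# Each level holds 4x the pixels of the one above - the Russian dolls.
tiles, pixels = tiles_at_zoom(10)   # ~1 million tiles, ~68.7 billion px
```

By level 20 the counts are astronomical, which is why only the places you zoom into need the deepest dolls.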
feels like one seamless world that you’re just
zipping around, it’s really more like
you’re traveling through a series
of Russian dolls, all made up of puzzle pieces.
LO: I think
everybody else wants to know, how often
do the images get updated? It’s like the
number one question.
GOPAL: We try to update
it as often as possible. The image all the
way from space, when you’re looking
at the whole globe, we try to update that maybe
once every couple of years. As you start to dive in,
we update that imagery more and more frequently. So for big city populations,
it’ll be under a year. What that allows
you to do is look at how the planet has
changed over time. And we use this product,
called Earth Engine, that allows you to look
at all of this data and, using computer vision, draw
out insights from the things that are changing. So we can track
deforestation in the Amazon, because we can see how the
trees are shrinking and growing. And then, from that, we can
generate a heat map of the most logged places on the planet. We can also do that with fishing
and see the most overfished areas. Think about it as a health
monitor for the planet. Watching these cycles
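A change-detection heat map like the deforestation example can be sketched as a per-cell comparison between two dates. The grids and threshold below are invented for illustration; Earth Engine works on real satellite time series:

```python
def change_heatmap(before, after, threshold=0.2):
    """Flag cells whose vegetation index dropped by more than
    `threshold` between two dates - a toy version of drawing a
    deforestation heat map out of time-series imagery."""
    return [
        [1 if (b - a) > threshold else 0 for b, a in zip(row_b, row_a)]
        for row_b, row_a in zip(before, after)
    ]

# NDVI-like values: one cell lost most of its vegetation between passes.
before = [[0.8, 0.7], [0.6, 0.8]]
after  = [[0.8, 0.2], [0.6, 0.8]]
hot = change_heatmap(before, after)   # -> [[0, 1], [0, 0]]
```

Summing such maps over many dates and many places is how a “health monitor” view of logging or fishing pressure emerges.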
happen on the planet, you realize that this
is a living thing. And the product we’re building
has to be a living thing to reflect that. It’s not a static planet. It is fully alive
at a macro scale. And that is very eye-opening,
when you’re actually watching the things change
right in front of your eyes, when you have that
perspective to see that.
NAT: Thanks for watching. And if you haven’t played
with Google Earth recently, I highly recommend this new
feature they just launched called “I’m Feeling Lucky.” You roll the dice, and
then it teleports you to a random spot in the world. Also, to give you a
heads-up, in the coming weeks we’ll be diving deep into VR,
including Google Earth VR. So there’s that to
look forward to. OK, that’s all from me. Bye.
[MUSIC PLAYING]

