Artificial Intelligence And Human Rights | SkollWF 2019


– Let’s see. Hi. It’s great to be with everybody. I’m gonna bring up a couple images quickly. Let’s see. You guys let me know if it’s up. He’s doing something. It’s an interesting time and it’s interesting to be here at Oxford because over in the Bodleian Library are the papers of Ada Lovelace and the correspondence that Ada had with Charles Babbage. It was just like email; I think there were nine postal deliveries a day in the U.K. then. You can actually read it. Stephen Wolfram, I call
him the Nora Ephron of math, he’s this incredible mathematician. He read all of them. One of the things that he told me that Ada once said, and Ada, of course, is the first person to
think of algorithms. We’ll bring her up. But she said, I wish to bequeath to the generations a calculus
of the nervous system. Darwin wrote about our
past at the same time Ada wrote about our future. The future she envisioned
was a positive one. Now the reality is we have the opportunity to use our collective genius because we’ve inter-networked ourselves. We can work with that collective genius, like we’re talking about here at Skoll. Skoll is the ultimate place where this is happening right now amongst us and our interconnections here and our digital connections to transform our systems. That’s the opportunity. What better place to start than human rights, bringing everybody in. If we wanna really solve everything, how about we just include everyone and they will together
solve everything with us? The challenge we have is we bring our whole selves,
including some bad things. Humans have good and bad in us. This is an image I wanna show just to frame a little bit of the challenge side of this. This happens to be a picture that Business Insider did of the U.S. government. They were looking at how the U.S. Congress votes together or not. What’s really interesting to notice is that from 1953 on, in the grayscale area, it doesn’t matter which party is in power, we have a two-party system, people are kinda working together. Not perfect, like any time, they’re human, too, but look at this. As we get into this cable era and then into the internet itself, we divide up. We’re in these landscapes, no matter what country you go to,
what part of the world, we’re in these landscapes, and they’re pretty rough for not hearing each other. It’s very easy to come into these spaces and propagandize people. We certainly saw that with
Cambridge Analytica and other things that are going on. I also wanted to bring this up ’cause it’s a great resource, and there’s a couple resources we’re gonna hear about from our amazing colleagues: Weapons of Math Destruction. This is a really excellent exposé that Cathy O’Neil did, and I encourage you to at least watch her TED Talk. You don’t have to read the book, though you can. It dives into these algorithms that are being used by justice systems for sentencing people, for choosing who gets the job, who gets the interview, who gets the loan, who gets the house. All these things. We can really discriminate against each other. It’s not just a black box; it’s dangerous to make it a black box. The other thing that I point out with her, it’s not just the computer scientists who are in charge. It’s all of us together. It takes a cross-functional team. Two data sets I wanna show you that are especially dangerous. One is around who gets to speak. This is an example. This particular one, I love the internet. People just do amazing things, and a group called Polygraph did this. Over here, do you see the blue and then a little bit of red? That’s men’s lines to women’s lines in children’s TV. We teach our children that men speak, and women speak a little. When we grow up, what do we make? Same thing, ’cause we pattern what we learned as children. Men’s lines to women’s
lines in 2,000 films. Do we wanna train on this data? Or do we wanna have men speak, yes, and have women speak, too? As we get older, men get more lines, and women get fewer. That’s a challenge. Also, let’s look at face recognition. These cameras that we have in our pocket are incredibly racist. Have you ever noticed that when you take a photo of somebody with white skin, it works great? Take a photo of somebody with darker skin, ya have to light balance it. Who thought of that design? Joy Buolamwini, if you
haven’t seen her piece, AI, Ain’t I a Woman is an homage to Sojourner Truth’s
incredible speech in 1851. Joy is an exceptional computer scientist at Media Lab. When she was doing some face recognition, it couldn’t see her,
no matter what she did. So she put on a white mask, and it saw her. Face recognition recognizes white men at about 99%, and women of color at about 77%. That’s dangerous and it’s unfair and it’s not right, so we make sure everybody’s on the design team and that our data sets are trained against that. We also face this existential tension: work is joyful, work is what we wanna do, and people are afraid of the future when we’re thinking about AI. There’s a wonderful new
book called The Big Nine. What I wanna point out here is what the Chinese government is saying. In President Xi’s speech, the Chinese government is saying they would like to be the cyber superpower and use this to spread positive information, uphold the correct political direction and guide public opinion and values towards the right direction, which to me sounds like the book 1984. They are now doing a score, a social score, of people. You cross the street
wrong, your photo goes up. We don’t want that
kinda surveillance world and we don’t want that kind of
surveillance capitalism, the one we were just talking about in The Age of Surveillance Capitalism. What do we want? We’re the ones we’ve been waiting for. How do we stay with that? We have this weird situation. I always put two photos up ’cause you can look at this challenge
like the future’s here, but it’s not evenly distributed. People say that all the
time, but let’s take a place. Do you recognize either of these places in the desert? What are they? – [Man] Burning Man.
– [Woman] Burning Man. – [Megan] Okay, Burning Man, the playa. What’s this? – [Man] A refugee camp. – [Megan] It’s a refugee camp. This is the Zaatari refugee camp on the Syrian border in Jordan. Now they’re kinda the same size and similar budget. One was made out of joy,
one was made out of sadness. But why do we run one place one way, and one place the other? These people are being self-actualized. It assumes talent. This place is really rough; you talk to refugees, they feel like they’ve been put in prison. The way the system is structured, they can’t manifest their talent forward. Systems, not people, systems are cruel or kind, depending on who you are. Could we use all this beautiful AI and these algorithms, in the spirit of Ada, to really bring forth human rights and human capability? How might we do that? Is it just one group
that’s supposed to do that? Or a handful of sorta brogramming people that I used to work with in Silicon Valley in charge of everyone? No. How do we change that? One of the ways we change that is stop having boring school, and start having fun school. Have 10th graders teach a
chief of police how to code. This is happening in New Orleans. They’re working on use of force data, officer-involved shooting,
all the data sets around justice, in the
spirit of Ida B. Wells, one of the greatest data scientists. Now we have communities of practice working across the internet for justice. This is Star Wars. Remember I showed you the movies? This is Star Wars from
1977, gender and race. That’s Carrie Fisher’s lines. This is our current Star Wars. We’re getting better. But we showed this to the teams that made those movies and they were like, aw, I thought we did better. We humans are trying to include more, but we need some help, and
we could use AI for that. This is a project with USC. We’re using face recognition and natural language processing, like Siri and Alexa, to analyze what
we’re making in movies before we finish the movie so that we can make it more equal, more helpful, generate creative confidence for anyone, so we don’t have to have such biased media that accelerates some people’s confidence, ’cause I get to see myself do everything, or decelerates everybody else’s confidence ’cause I’m never seen on screen. Today in family TV, it’s 15 to one boy programmers to girl programmers that our children watch. Do you guys know Bletchley Park? It’s very near us, halfway to Cambridge, and it’s where they cracked the Enigma codes. Heroic engineering. Fighting the Nazis using mathematics. This is a movie called The Imitation Game that depicts that. Turing and Joan Clarke,
they’re real people. I walked in the Oval Office right after Prince William was there. I said, Mr. President, you know what you and I are about to do? It’s related to Prince William. He said, “How’s that?” We were about to do some coding. I said, well, the prince’s wife, the
Duchess of Cambridge’s great aunt and grandmother
were code breakers at Bletchley Park. I said, in fact, sir. He said, “Oh, I just saw that movie.” I said, sir, 2/3 of the people at Bletchley were women. These are them, elite mathematicians. He said, “Uh, the movie
doesn’t show that.” I said, yes, sir, it’s
killing the economy. But it’s interesting ’cause even with the legacy of Kate, it’s 25 to one boy visitors to girl
visitors to this museum because computers are for boys, right? They’re for robots and self-driving cars and precision medicine, and they’re not for foster care and poverty and all these things, right? The farther back we can look, the farther forward we’ll see. Knowing about Ada and that she had this vision for us in the future and what we might do is really important. She was a contemporary of Mary Shelley, daughter of Mary Wollstonecraft, the great feminist. All of us are amazing. That’s what we’re gonna do today: we’re gonna hear a little bit of, as I started, this scary dystopia that’s very real. Please scrub in, just like they did at Bletchley Park. We need everybody. But also, there’s this incredible future that our panelists are gonna show us that’s near future, and it’s now, and it’s things you’re doing. Every single organization that you are in should be using AI and data science, and pulling in your 10th graders to help and adding it to their homework. That’s happening in different places. If you do that, then
we’ll have two things. We need to fight the bad, just like we do in the analog world, but we also need to accelerate the positive in the digital world because it’ll rebalance the data sets toward what we care about, our human values. Last thing I’ll leave you with: she has cool hair. First up, Elisabeth Hausler is gonna talk about Build Change. Come on up.
– [Elisabeth] Thank you. Hi, everyone. It’s great to see so
many people in the room. Thank you so much for coming. I’m Elisabeth Hausler, founder and CEO of Build Change. Build Change is a systems change catalyst in the field of disaster-resilient, safe, affordable housing. Build Change saves lives by working with governments, engineers, builders, homeowners and banks to design, regulate, finance and incentivize housing and schools so they don’t collapse in earthquakes and typhoons in Asia, Latin America, the Caribbean and, most recently, the United States, in Puerto Rico. Housing, safe housing,
is a basic human right. There are 3 billion people
that will be living in unsafe housing, substandard
housing, by 2030. 3 billion people. That’s 1/3 of the entire
global population. Our use of technology has been driven by the need to fix this,
the need to address this basic human right of safe housing. We’re using AI, using VR, using BIM software, using various technologies to accelerate the possibilities of safe housing for all, to make it so no one dies in an earthquake or a typhoon. I’m gonna show a video that illustrates what we’ve been doing with AI and with some of these
revolutionary design tools to retrofit houses in Nepal. The 2015 earthquake devastated the country. Millions and millions
of people were affected. There were a lot of
buildings that were damaged, but not collapsed. You can see they all
pretty much look the same, the same geometries, the
same building materials, the same problems and the same solutions. But the government came out with a program where they were giving subsidies only to people whose houses had been completely destroyed, not just damaged. We said, well, how can we
retrofit these buildings? We came up with a retrofit
solution that cost $3,000 to preserve an asset it would take $20,000 to replace. But the problem was it took too much manpower: engineers to go to the site, to measure the building, to do the calculations, to come up with a retrofit solution for each unique building. But we realized the
buildings aren’t that unique. If we do the structural engineering, very good structural
engineering, up front, we can come up with a retrofit solution that works for any of these buildings, even though they have
different geometries. We programmed that with our friends at the Autodesk Foundation in their BIM software to be replicable to each type of building. Here’s where the AI came in. AI is a great way for us to expedite the assessment, the go, no-go process. We trained it to understand the differences between the buildings so that we could then rapidly come up with an engineering design, a bill of quantities, a cost and ultimately where we wanna go, a very fast building permit. The AI ran through many, many different iterations. By the time we were done, it could recognize whether or not a building could be retrofitted, and we could blur the lines between actual photos and generated configurations of buildings.
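The talk doesn’t spell out the model, so here is a minimal sketch of how a go/no-go photo classifier like this could be built with off-the-shelf transfer learning; the folder layout and training details are hypothetical illustrations, not Build Change’s actual pipeline:

```python
import tensorflow as tf

# Hypothetical layout: labeled photos in house_photos/retrofittable and
# house_photos/not_retrofittable. The "generated configurations" mentioned
# above would slot in here as synthetic images to stretch scarce real photos.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "house_photos", image_size=(224, 224), batch_size=32, label_mode="binary")

# Reuse a pretrained backbone and train only a small go/no-go head on top.
base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(building can be retrofitted)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```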
We developed this app that the homeowner could use to crowdsource data, basically to upload a photo of their home. Then we could rapidly go through the go, no-go process to determine whether their house could be retrofitted, instead of having to send an engineer to the site, so that we could reach thousands and thousands and thousands of people with safe housing solutions. There are still thousands of buildings that need to be retrofitted in Nepal, but this is a way we
can universalize access to the thousands and millions of people who live in unsafe housing. We are now implementing something similar in Colombia to retrofit buildings before the next disaster. Now what does this also do? By facilitating a fast
design and assessment, we create more jobs for builders, we create more jobs for
people who generally need an economic opportunity. This is a retrofitted
house before and after. This is a homeowner who had the choice between building a small one-room home or retrofitting her building. She’s happy and safe as an outcome. Before I pass on the mic to my colleagues, I’m gonna talk about four groups of humans and how they interact with technology: builders, engineers, homeowners and government officials. Builders and engineers first. I’m both, I’m an engineer and a builder, so I have affinity for both professions. But after 15 years of beating our heads against the wall trying
to find enough engineers, enough building inspectors to be able to do this work, we’ve decided, well, actually, we should probably
try to automate that part. Because if we automate that part, again, if we make the design and the assessment more efficient, then we can focus on getting resources into
the hands of builders so that more builders can have jobs. Again, we’re normally working in a place where builders and
construction professionals need economic opportunities. Homeowners. Since day one, Build Change has been about empowering homeowners to make decisions about the materials and the architecture of their house, making it so they can drive the process. It should be their decision. There’s all sorts of reasons in history why not including
homeowners in this process is a bad thing. I’m not gonna go into that here. But what I’m afraid of is that by automating too much, we’re gonna take a step backwards, and all this work that we’ve been doing to revolutionize the way NGOs work in post-disaster contexts to include homeowners is gonna be erased. We’ve been using VR especially, a virtual reality experience,
to enable homeowners to really see what their
house is gonna look like and be part of the
decision-making process. The last point I wanna
make and the last group of humans I wanna talk about are government officials. AI is not going to replace the need to make a tough decision and to allocate resources to solve a difficult problem. We have been working with the World Bank, who asked the question, after the 2017 hurricane season, which
devastated a number of islands in the Caribbean,
including Dominica, they asked, well, what would happen if the same thing happened in St. Lucia? We’ve been working with them to use AI, Google Street View and
various technologies to, first of all, determine: is the housing stock the same between the two countries? Can we train these systems to collect enough information about a house to evaluate whether or not it’s going to collapse? Then, are these buildings gonna collapse? We went through this process. Of course, the answer to
all these questions is yes. Yes, the housing stock is similar. Yes, we can train these machines to collect the information that they need to make the prediction. Yes, the house is gonna collapse. I’m thinking the whole time, have I become the crusty engineer who thinks, gosh, darn it, I know those buildings are gonna collapse? I don’t need a fancy AI to tell me that. Because we know these buildings are going to collapse. The question now is: is this information going to compel action? Is this going to change the way priorities are made? Are the government and the private sector going to actually retrofit these houses? We need to make sure
AI isn’t a distraction from making the tough decision. We need to bring all of
these things together, just like Megan said, the homeowners, the engineers, the
technology, the builders, the financiers, the political will all needs to be there. Thank you.
– Thank you. Stay up there, I’m gonna keep you up here. – We are gonna do one question, come on. One question, anybody, or an insight or a thought or anything for Elisabeth? Thank you very much. It’s such a good example of such an important application. Especially, did ya see those houses? Just seeing what you could see if you looked in the right way, together with tech, and then doing something about it. Anybody with a question? No, they’re quiet, nobody. Okay, we can wait. – [Elisabeth] All right.
– All right, Babusi. There’s gonna be more, collect ’em. Babusi Nyoni. Come on up. He’s a fabulous AI evangelist– – Fabulous. – And he’s gonna talk to us about that. – Cool.
– Thanks. – Good afternoon. Today I’d like to speak to you about a dance app. Behind me is a text sent by a user of the Vosho Dance App, which is a web app that uses AI-powered pose estimation from Google’s TensorFlow library to parametrize and rate a popular South African music dance known as ivosho.
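As a rough sketch of the idea, assuming keypoints already extracted by a pose model such as PoseNet or MoveNet (the joint names, toy frames and 0-100 scoring rule here are invented for illustration, not the app’s real logic):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def vosho_score(frames):
    """frames: per-frame dicts of (x, y) keypoints from a pose model.
    Deeper knee bends give a higher score."""
    angles = [joint_angle(f["hip"], f["knee"], f["ankle"]) for f in frames]
    deepest = min(angles)  # smallest hip-knee-ankle angle = deepest squat
    return max(0.0, min(100.0, 170.0 - deepest))  # toy 0-100 rating

# Toy two-frame clip: standing, then a deep squat.
clip = [
    {"hip": (0, 0), "knee": (0, -1), "ankle": (0, -2)},      # ~180 degrees
    {"hip": (0, 0), "knee": (1, -0.2), "ankle": (0.2, -1)},  # deep bend
]
print(vosho_score(clip))
```

The same keypoint stream is what the Parkinson’s prototype described later builds on; only the features computed from it change.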
The dance itself is infamous for its unrealistic demands on one’s knees and its requirements for near-superhuman strength, which I will demonstrate now with your participation. But I’m gonna need your help. I need a clap in this BPM. Perfect. That’s it, that’s it. That’s the whole dance. I made it. Thank you. The app was accessed on over 100 different device types during the December period in South Africa, including on this 26-pound model from little-known phone manufacturer Mobicel. Actually, to accurately tell you the story, I’m gonna have to start
from the beginning. Two years ago, I began working with the United Nations refugee agency’s Innovation Service on a project known as Jetson. Jetson uses data science, statistical processes, design thinking methodology and qualitative research to predict the movements of displaced people. With an initial pilot in Somalia, Jetson has underlined the importance of partnership, collaboration
and transparency. Led by the amazing
talents of our colleagues Rebeca Moreno and Sofia Kyriazi, we used anonymized data, disaggregated by region. The predictors used by the model were suggested by either UNHCR field operatives or refugees themselves. They included climate, weather, conflict and historic population movement patterns. But as with any innovative effort, ours was not without its challenges. The biggest challenge we faced was data scarcity. The reason was the
machine learning processes we relied on needed
monthly anonymized data from legally open sources
to feed into models. Because we relied on
multiple external partners for this data, we missed some of our monthly deadlines for
displacement predictions, with the ramifications felt on the ground. But ours is sadly not a unique problem. It’s actually well documented that Africa lags behind when it
comes to the collection of timely, accurate and reliable data. These data gaps undermine
efforts by stakeholders to target resources, to map policies and to track accountability. They ultimately hinder
innovative processes, such as ours, that rely on
a democratic data economy. So in the face of this adversity, we looked for ways to plug the holes, and we found proxies that could be used in lieu of our core value set. One of the proxies we found was this guy. UNHCR staff, in collaboration with refugees, discovered that the price of a goat would decrease relative to the number of people whose departure from an area was imminent. What we actually learned was that families would sell livestock in preparation for a move from an area, and by so doing would saturate the market and lower the price of a goat. By creating a virtual commodity trade marketplace in Somali refugee camps, we were able to work around the limitations of our reliance on open data as we knew it.
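To make the proxy idea concrete, here is a minimal, hypothetical sketch of a monthly displacement forecast that folds a goat-price signal in alongside the predictors named earlier; the column names, values and model choice are illustrative, not Jetson’s actual pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical monthly observations for one region (toy values).
df = pd.DataFrame({
    "rainfall_mm":         [12, 3, 0, 0, 5, 0],
    "conflict_events":     [2, 4, 7, 9, 6, 11],
    "goat_price_usd":      [38, 35, 29, 22, 27, 18],  # falling price = families selling up
    "prior_arrivals":      [900, 1100, 1600, 2400, 1900, 3100],
    "next_month_arrivals": [1100, 1600, 2400, 1900, 3100, 4200],
})

X, y = df.drop(columns="next_month_arrivals"), df["next_month_arrivals"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Forecast next month from the latest observed conditions.
print(model.predict(X.tail(1)))
```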
What this taught us was a very important lesson on what it really takes for this kind of technology to have an impact. The thing is, without the proximity to the perceived beneficiaries of our innovation, all our efforts would have met with an untimely and uneventful end. One might even go so far as to say that what are statistical proxies today might very well be the future of open data on the African continent. The truth of the matter is technology that fails to harness opportunities presented to it by the audience for which it is created will probably fail to deliver on the mandate appointed thereto. With that in mind, I’d
like to take us back to the story of the dance app. After we discovered South
Africans’ willingness to interact with leading-edge technology on a spectrum of device types, we decided to do more with the algorithm. We repurposed the dance
app, and we created a prototype for the early diagnosis of Parkinson’s disease. What this prototype does is it measures the relative position of a subject’s limbs as they walk. It measures the rigidity of their torso, their gait, their posture and the movement of their arms. It gives an evaluation after three consecutive assessments over a period of time.
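Sketching what such an evaluation could look like on top of the same pose keypoints, with invented feature names and an invented threshold rather than the prototype’s real ones:

```python
import numpy as np

def gait_features(frames):
    """frames: per-frame (x, y) keypoints from a pose model.
    Returns toy proxies for arm swing and torso rigidity."""
    wrists = np.array([f["wrist"] for f in frames])
    shoulders = np.array([f["shoulder"] for f in frames])
    hips = np.array([f["hip"] for f in frames])
    arm_swing = wrists[:, 0].std()               # horizontal wrist movement
    torso_sway = (shoulders - hips)[:, 0].std()  # shoulder-over-hip variation
    return {"arm_swing": arm_swing, "torso_sway": torso_sway}

def evaluate(assessments, swing_floor=0.05):
    """Average three separate walking assessments; persistently reduced
    arm swing is one of the signals the talk describes."""
    mean_swing = np.mean([gait_features(a)["arm_swing"] for a in assessments])
    return "refer for clinical screening" if mean_swing < swing_floor else "typical range"

# Toy usage: the same synthetic walk recorded three times.
walk = [{"wrist": (0.1 * ((i % 4) - 1.5), -0.5), "shoulder": (0, 0),
         "hip": (0.01 * (i % 2), -1)} for i in range(20)]
print(evaluate([walk, walk, walk]))
```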
We already know from our dance app experiment
long as it is relevant. When properly harnessed, mobile phones present a new frontier for
data-reliant innovation, especially on the African continent, considering the fact that over 80% of Africans have access to or own a mobile phone. As I stated, our dance app proved that as long as there’s a
context for the innovation, it will find an audience. Between death-defying dance moves and goats, one thing is definitely clear: When it comes to AI innovation in Africa, convention is not your ally. With that, I thank you. – Stay here. Stay here. It’s so interesting,
Babusi, as you pointed out, there’s so many signals around us, and how do we see through the noise and choose particular ones? Does anybody have a question out there? Yes, no? Or a comment or a thought
or something you saw. Yeah? – [Woman] I just wanted to
know how you got the goat data. Were you able to tap into markets to find what the goat price was? – As I mentioned earlier, we have UNHCR field operatives on the ground who work within the refugee camps. Through partnership with the refugees, it was discovered that
there were certain proxies that we could use that would plug the gaps that open data had left. – Interesting, during the Ebola crisis, there’s a company called Premise that, sadly, was able to tell where Clorox or bleaches were being bought off the shelf in particular locations. You could move aid and
support to those neighborhoods to see if you could help. There’s a lotta proxies. I know of a company that does hedge fund work and just counts how many cars are in a company’s parking lot to see how well it’s doing and how that changes. There’s a lotta data out there, signal to noise. The big question, there’s
a question back here, the big question, also,
is, as we talked about, do we have the will to
do something with it? Also, are we going to take care of privacy, like with your gait analysis? Some countries would use seeing how you walk to screen for Parkinson’s, and some countries or places would use that to identify who you are and do something negative. Yeah? – [Man] I just wanted to see him throw it. I’m curious, as we talk about technology and these leaps, how did you get from a dance app to Parkinson’s? ’Cause if we think about
transferring technology to solve problems, it’s all about how do ya make these leaps from A to B, which most folks wouldn’t think about putting together. – Exactly. For us, the dance app was
just a fun experiment. Pose estimation on mobile devices, it’s using the most recent iteration of AI because it uses what’s called TensorFlow Lite, lightweight versions of traditionally trained models that are able to run on, for example, a mobile phone, just harnessing the phone’s CPU. With the dance app, we were able to make a fun iteration of this understanding of your posture and the position of your limbs, and what can be done with it. But then looking at
the current methodology for diagnosing Parkinson’s now, it’s also very symptomatic. What medical practitioners will do is they will assess some of the parameters that our app will look at. That leap for us was more how to make more meaningful technology,
considering the number of devices we know we can run on and how this can be received by the communities we want it included in, as long as there’s context for it. It wasn’t really informed by some crazy smart thing. It was just, what can
we do with this dance now that we can do this? – [Megan] Awesome, awesome, thank you. – Cool, any other question? – I think we’ll go more questions. We’re gonna bring all
of these guys back up at the end, we’ll do a couple more. Thank you, Babusi. All right, next up, Dunstan Allison-Hope is managing director of BSR, which is Business for Social Responsibility. Come on up. – Thank you. Two disappointing things to start with. I’m not gonna dance, and
my talk is fairly short, so it’s going to rely
on questions from you. But if you want to use a dance app to diagnose if I’ve got Parkinson’s, let me know after, as that
would be most welcome. I work with an organization called BSR, Business for Social Responsibility. We are an international
nonprofit organization that works with 250 of the
world’s largest companies on a full range of different social and environmental issues. We work on human rights issues, climate change, women’s
empowerment issues, quite a diverse range of topics. My focus has been mainly on technology and human rights. What I want to share with you today are three key points that we are using to inform our work with, at the moment, mainly technology
companies on human rights and artificial intelligence. The first point I have here is that human rights-based methodologies offer a robust framework for the responsible development and use of AI, and should form an essential part of business policy and practice. Now that sounds like a very, very obvious statement, doesn’t it? But in the conversations that we have, that’s not necessarily the case. What’s been striking
to us over recent years has been the focus on ethics and human rights. You’ll hear a lot about, sorry, I misspoke, ethics and artificial intelligence. You’ll see a lot of discussions and debates around the ethics of AI. That is very good, it’s
important to discuss issues of fairness, of
discrimination, of justice. Those are all very important topics to get into when thinking about designing and developing AI. But we feel that human
rights-based approaches provide something very robust. It grounds artificial intelligence and the development and use of it in international human rights law and international human rights standards. What we’re trying to do is work with companies to think
not just about ethics, but about human rights and what artificial intelligence means for that. The second point is that companies, organizations outside
the technology industry have an essential role to play, and should be much more
proactively involved in the development of
responsible approaches to AI. Again, another statement
that sounds quite obvious, but it has been very
interesting to witness over recent years the emergence of lots of different organizations working on ethics or human rights and artificial intelligence,
and the way in which the companies at the table
in those organizations are almost all technology companies. It would be Google, it would be Apple, it would be Microsoft, it would be IBM. It’s very much focused on
the technology industry. There’s a lot of scrutiny, quite rightly, around the work of
technology companies and AI. But when we work with
these technology companies to think about human rights and AI, we very quickly focus on the use phase. How is that AI going to be deployed, for what purpose, by what companies, in what industry? Very quickly, we actually have to start thinking about human rights issues in other sectors. What we would like to see are companies in the retail industry, in logistics, in transport and financial services paying the same sort of attention to issues of human rights and AI that the technology industry is grappling with themselves. We’ve deliberately used the phrase here companies and organizations. I think it’s as important for nonprofit organizations,
civil society organizations, social entrepreneurs to think about the human rights opportunities and risks that are attached to AI. Then the third and final point is the one that I was most afraid and nervous of making in terms of having to describe it. Innovative methods of
human rights due diligence are needed to uncover blind spots, imagine unintended consequences and anticipate a highly uncertain future. I was nervous about making this point in terms of how to convey the message I’m trying to convey here, but then attended the
plenary session last night. We had that great talk about futures methodologies
and future scenarios. That’s exactly what we’re trying to do with companies on human rights and artificial intelligence. We have a sustainable
futures lab that uses very similar methodologies to the ones described yesterday to stimulate dialogue and discussion with
companies about the future. What are some of the potential unintended uses of AI? What things might seem crazy now, but could actually be
a very common feature of our future where we need
to make decisions today about how to address human rights because of the implications
that they’ll have in five, 10, 15 years’ time? We’ve been trying to deploy some of the similar methodologies
that you heard yesterday to open up the conversation, uncover blind spots and really broaden our horizons about some of the risks and opportunities that exist with AI. I have no idea how many minutes I’ve just spent talking, but I think it was less than my–
– It’s great. – Allotted time. I would love some questions. – Any questions for Dunstan? Yeah, back there. Okay, ya got a big toss. Here. – Three words, you’ll find the answer: China, human rights, AI. – That is the toughest question. I guess it came up in your opening– – [Megan] Maybe two more words, oh dear. – Before this session, I attended a discussion on ethics and decision making. I think the China case presents a very difficult challenge because there are all sorts of things underway in China that will present very extreme risks to human rights in terms of the use of facial recognition and AI. For the companies we work with, the question is often: should they be present in China? If so, how are they present in China? You’re seeing that play out right now with controversies around
Google and Microsoft and other companies in terms of how they operate in China. I myself haven’t reached a conclusion as to what the right avenue is. What is the right thing for a company to do in terms of whether to operate or not in China? What I do know is if and when they do so, taking a very deliberate approach to thinking through privacy, freedom of expression, surveillance, child rights, nondiscrimination, and how what they do in China or other countries has an impact on those issues is essential. The way that you discussed in the opening, I think is a huge challenge. We run the risk of very
segmented approaches to how we deploy AI. Companies could choose not to operate in China, but a lot of Chinese companies will carry on doing what they’re doing. – It’s an interesting challenge of, again, there’s two parts. Just think about this in digital, but it’s also in analog. In our real world, we
behave like this, too. How do we address it? We do two things. One is we directly address it. We have to work on that. Perhaps it’s through
making laws and rules, and maybe some other countries are gonna break them, but we can have ours and we can work towards that and we can work towards
international shaming of certain kinds of behaviors. Then there’s this work that we have to do to get the flourishing of the more positive stuff, especially in our nonprofit sector, to have CTOs and techies inside of their orgs. There’s a lot of people in the tech sector who want to work on these topics, but they can’t get a job here ’cause the organizations don’t realize they need those people. There’s a real opportunity to make the positive flourish, to really accelerate that, while we also directly combat the negative. – There’s a question back there, I think. – Yeah. – I think it’s a long way.
– Way back. We have to have two tosses, somebody help. Woo! Yeah. – [William] Heads up. Hi, I’m William Perrin. I’m a trustee of Carnegie U.K. Trust. We’re a very old-school think tank. We’ve been around over a hundred years. We saw this new AI stuff coming along and thought, well, do you need a whole set of new laws
and new ethics discussions when we already have really good laws that could apply to this if you think a little bit imaginatively? I applaud your approach. In the U.K., we worked with a legislator to put down a question to the government to say, well, when you deploy AI in the workplace in the U.K., is that covered as a tool or machine in the 1974 Health and Safety at Work Act? They came back within 48 hours and said unambiguously yes.
– Yes. – [William] There you are. You have a perfectly
good, traditional law, well proven, that is seen to apply to AI. I strongly urge people, before they engage in what are actually quite weak ethics exercises that wouldn’t pass academic muster in many institutions, to look at the existing law set, to work with your legislators to say: should this apply to AI, can it, and how can we frame it in such a way that AI is not some special, magic thing? It is just a new technology, and it should be regulated along with the widest set of technologies we have so it doesn’t inadvertently
cause social harm. – Thank you for that. It really underlines, Dunstan, what you led with in the very beginning of trying to take the human rights that we have, and get these companies, kinda pull these conversations. Any other things you would add to that? – One statement to agree with you, and one statement to build on it. We get asked the question quite often, should AI be regulated? I will answer somewhat along those lines, that actually you need
to look at regulations industry by industry, issue by issue. There are many regulations
that are already there that need to be applied
in the new setting. There are places and times where they might need to be updated or clarified. Is facial recognition a
form of data processing? If you’ve uploaded your photo onto various sites, it might say this can be processed in certain ways. Is facial recognition
covered by that term or not? There are places where
certain interpretations need to be made of existing laws, but I agree with the
premise of the question. – [William] Do I throw this back now? Is that how it works? – I think we have to go. We have to watch out on time. Let us hold those questions, but remember them ’cause we’re gonna have another space. We have our last speaker to bring us home. Tanya O’Carroll is at
Amnesty International, and she’s the director of Amnesty Tech. – Hi, this is in some ways perfect. The danger of going last is always that you’re gonna just repeat everything that’s come before, but I think in this case, actually, what’s come before has highlighted that there is an elephant in the room, and it actually hasn’t come up, so I’m really pleased about that, although I also feel disappointed that, yet again, I am the person arriving at the party, the dance party, and turning off the music and being like, sorry. In the AI space, I have
been working on this for Amnesty for a number of years. There seems to be two big frames or discourses that are
the dominant discourses. One, AI for good, and lots of people get together and have really exciting and incredible conversations and are doing amazing, innovative things. It feels like that’s a whole field that’s thriving and very exciting. Then over here, we have this whole frame around AI ethics. I’m regularly in spaces, same as Dunstan, with many of the companies
and the Partnership on AI and many academics and others trying to think about these really tough ethics questions. But neither of them is addressing this really big question in the center, which is about power. I’m gonna do that. Before I get to the elephant, just very briefly, within Amnesty, how I’ve come at this, on the AI ethics side, is we’ve been working for about two and a half years to do exactly what Dunstan was talking about, which is really bringing a human rights framework. One of the resources which is good for people to know about is called the Toronto Declaration. It was launched last May by Amnesty and Access Now. We’ve got about 30 other
signatories to that. Interestingly enough,
while all of the companies, or many of them, DeepMind, Google, Microsoft, others,
turned up to the drafting of that, not a single one has signed on, despite all of our advocacy. The feedback we had varied from it’s just not fun enough, the language is just not positive enough. We’re like, it’s human rights law. It’s really quite dry. Not much we can do about that. But it was interesting to see that they weren’t really willing to actually bind themselves to the norms that they just technically should already be binding themselves to. Then the other side, we are doing a whole bunch on AI for good, so I really am a believer in the positive uses of technology. In the last year, we’ve done three pilots using automation and machine learning directly in the human
rights monitoring process. One of them was Decode Darfur, essentially helping to train an algorithm to identify burnt villages in Darfur by using the power of Amnesty activists, the same ones that used to write letters for us, now actually training models and labeling data as part of a micro-tasking initiative we have called Decoders. The second one, we did some natural language processing with a partner called Element AI in order to create a model that predicted abuse against women journalists and politicians. Then the third one was around monitoring executions and use of the death penalty worldwide by ingesting media articles, analyzing them and interpreting them in order to help us publish our statistics every year.
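To give a feel for what that second pilot involves, here is a minimal, hypothetical sketch of an abusive-message classifier in the same spirit; the toy examples and simple bag-of-words model stand in for whatever Amnesty and Element AI actually built:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set; the real Troll Patrol-style work used large,
# volunteer-labeled datasets of tweets sent to women journalists and politicians.
texts = [
    "great reporting, thank you",
    "you should be silenced, go away",
    "interesting analysis as always",
    "nobody wants to hear from someone like you",
]
labels = [0, 1, 0, 1]  # 1 = abusive

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict_proba(["go away, nobody wants your opinion"])[0, 1])
```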
Again, I’m a believer in all of this, but to the elephant. The elephant in the room is the current business model. The reason this is important is because the data, the way that it’s captured, harvested, ingested, exploited and mined, without people’s consent, in order to not only predict people’s behaviors but to influence the actions that they take, is one of the biggest and most existential threats that we face as a society. It has to be central to any discussions that we’re having on AI. It’s really interesting that within the AI debate, very often we see it as a technical issue. There’s the dirty data. That’s all of the polluted, bad, biased historical data that Cathy was talking about. Then there’s the black box that none of us understand. Then you put dirty data
into the black box algorithm, and out comes a human rights abuse or a bad algorithm. Naturally, lots of clever
people get together and go, okay, well,
let’s fix the dirty data. Let’s de-pollute it, let’s clean it up, and let’s think about how we get round the black box by auditing algorithms, by having more transparency. Therefore, we’ll be
able to see the outcome. These are all good things,
they should happen. Whether they will happen by companies and whether they actually need to be legislated to ensure that happens is another question. But at the end of the
day, it doesn’t really get at the question of who is collecting the data in the first place. Why is it being collected? What is the consent
underpinning all of that? It feels to me a little bit like we’re talking a huge amount about the cherry on the cake, which is the algorithm. How does it look, how does it taste? Then we’ll start talking a little bit about the ingredients that go into the cake. That’s the data. We’re missing the fact that the cook is just a psychopath, and no one’s talking about it, and nobody wants to eat that cake. This is about power,
this is about politics, this is about asymmetries of power that are age old. They go way, way, way beyond the internet. Google, just to take a
really quick example, currently handles billions of search queries every day. Billions of YouTube clicks. 1 billion active Gmail users. Estimates that 25% of internet traffic in the whole of North America goes through Google’s servers. Google is not just
connected to the internet. Google is the internet
for many, many people. Take Facebook, it’s even more so. In many countries,
Facebook is the interface that most people have to the internet. There is nothing else. They did some interesting surveys and polling in some African countries and Southeast Asian countries. They found that lots and lots of people were saying that they
were not internet users, but they were Facebook users. They were like, well,
how do you explain that? It’s because a lotta
people just didn’t realize that they are accessing the internet because Facebook is it. What kind of internet is
it that they’re getting? The last year, two years, has just told us I think everything we
need to know about this. What kind of internet is Facebook? Two years ago, we had the scandal where we found out that they were actively trying to influence our emotions and moods by tweaking the algorithms in terms of what content we get. One of the biggest psychosocial experiments there’s been, with absolutely no consent. Second one, Cambridge Analytica. I’m not even gonna go into it because we could talk about it forever. What’s interesting is that it’s still referred to as a data breach. This was not a data breach. This was Facebook operating exactly as Facebook was set up to operate. They shared data with a third party. They just got caught out, and that’s what’s really happening here. Influencing voting. Then recently, two months ago, you might have seen the story about how Facebook has been paying teenagers, age 13 to
17, as well as others, $20 a month to essentially install what is equivalent to
spyware on their phones that allows Facebook to harvest every single interaction that these teenagers are having with their friends in order to influence them, again. This question of the data monopolies, this question of the power of this handful of companies, is massive. It’s massive to your point, Babusi, which is about: if the data is sat behind this walled garden, behind these fortress walls, then the R and D for our future is also behind those fortress walls. How are we supposed to be? You were saying it’s difficult to track
migration patterns. Facebook, if they were
to provide that data, perhaps, has tons and tons
of interesting information that we could use, but
they’re not interested in solving the problems that we’re interested in solving. This is one of the major problems. Just to end, we have to move beyond talking about AI for good and AI ethics. That’s without a doubt. While the initiatives are important and while I take your point, yes, we need to be creating positive models, Cathy, for what this looks like, we cannot build just, equal and fair automated systems on top of a corrupt, toxic sludge. We have to tackle the business model. It’s a little bit like talking about climate change, as we did for about two decades, where we focused on citizen action. Turn off the light switches, get some solar panels, conserve water. We now know, if we don’t take action, that the world is genuinely in serious peril, and it’s political action that is needed. Just the same, in the digital world, we need political action as well. Every time you hear AI for good, if you could really push data as a public good, I think that would start to change the conversation. AI ethics, AI ethics, let’s debate them, let’s discuss them. It’s positive, but really what we need to be talking about is
rights-based regulation. Then the biggest challenge, disruption. Everyone who’s at Skoll is a disruptor in one way or another. Let us be harnessing
that disruptive spirit to actually disrupt the
current business model of the internet. Then we can start talking
about AI for good. – Fabulous. Yes, and I think this hits the central point: if the product’s free, you are the product. What are we gonna do about it? Also, on the side of,
not the Chinese people, but the Chinese government and what they wanna do, and top down. Any quick questions or comments? Yeah, back here. – [Man] Hi, thanks. That was brilliant. A question, on that last one about disrupting the business model, if you had a magic wand that was somewhat realistic, what would that look like to you? – Yes, I’d love it if we could fix it with a magic wand, actually, ’cause that would be brilliant and save four or five years of what I’m gonna be working on. But at least at Amnesty, we are currently trying to come up with
really concrete proposals ’cause I think, actually, the fact that people applaud shows people want change. I really believe that people want alternatives now. People are angry. But we’re not doing a very good job of giving them clear, concrete goals. What does the fix look like? I think there’s some really
interesting stirrings that are happening in
the domain of antitrust, so breaking up the tech companies, or breaking up the data. Or, if data’s a public good, then it needs to be primarily in the hands of our public servants so that we can access it as citizens, so that it’s primarily benefiting us as citizens. I think all of this stuff around data trusts is really interesting and definitely part of the solution. I think a right to opt out. I think we really have to start extending from the GDPR, really thinking about what a right to opt out actually looks like. You should be able to use these services, which now essentially
amount to the internet without having to accept
pervasive surveillance if you don’t want to. Then the other, I think
there’s questions around how these companies pay in and the lack of taxation. We’re kinda looking at a model of opt out, pay up and break up, but just small wishes like that. – Getting into the details right now, all of us are experiencing
the awesome move by the E.U. to take a step forward. It’s still very brute force, like yes or no. Is there any grayscale in here, in how we implement cookies and other things? Very important to get into this. Yeah? – Thank you. I’m really glad you said what you said. It was really important, thank you. I’m very interested in
political intelligence. We help marginalized political actors access diplomatic processes. For years, I’ve been talking to a fellow in Silicon Valley who runs a big data analytics firm that’s now doing AI analytics on big data. He’s sold his technology
of political intelligence, which sucks up data from all over the world to make predictions about conflict, terrorist attacks, all kinds of extraordinary things. He sold that technology to two main customers: the CIA and BlackRock, which is one of the biggest hedge funds in the world. My worry is precisely
what you’ve talked about, which is that AI is actually going to reinforce existing power differences, not actually expose, not rebalance them, but reinforce them. One of the critical questions for me is access to the AI technology. You’ve talked about access to data, but how do we ensure that the technology is actually more openly available? The open source people would say we help make software open source, but AI technology is
very different from that. How do we ensure open access to that? – No, I take that point. Really, technology is a public good in lots of ways. But on the point about the CIA, I remember my cyber law professor, about 12 years ago, giving an impassioned speech about how Facebook had built what the NSA could only have dreamed of building in its wildest imagination. That was 12 years ago, compared to what it looks like now. And I think after the Snowden moment, we had this thing of, oh, gosh, all the data that the NSA or CIA or GCHQ might have about us. We focused on the governments. What’s interesting is that, actually, the corporate amassing of that data is the same thing as the government amassing that data because, inevitably, it is shared with governments. On top of that, it’s a little bit like, somebody I know says this a lot, it’s a little bit like asking would you rather be punched in the head or punched in the stomach? You’re still being punched. I think, yeah, there’s a huge amount that we have to do. These are big questions, but I think it starts with raising them, talking about them more, outing the elephant every
single chance you get. – Yeah, and also, I don’t know if these guys have the image of it, but if you guys have been tracking in Singapore and Korea, there’s a group that’s done DQ. I just talk about TQ, tech quotients. Make sure you have somebody with TQ on your team and build yours. But DQ, it’s a little bit small to read, but this is a proposal that’s coming out of parts of Asia for everybody around digital literacy
that we would have. There’s kind of, again,
two sides of the coin. One is directly pushing
and working on that, but the other is how do
we include more people, scrub them into the design side and the fluency side? This is a list of the skills that every human should have around digital safety, digital use, digital identity, digital rights, digital literacy, digital communications, digital emotional intelligence, security; this is about things like bullying online. If you look at literacy, what I love about this particular proposal is that it’s data and literacy. It’s about computational literacy. Recode, you guys are
in the room somewhere. This is a Brazilian team
that’s out training people. How many people have you guys trained? – [Man] 1.7 million. – Through the libraries, so it scales. Again, look for who’s already got it, and go there. What I like here is it doesn’t say some people get to learn Excel, and everybody else over here is gonna learn coding. Everybody should learn how to do digital arts and all these things, and we should do it from when we’re little. Raspberry Pi, we’re in the
country of Raspberry Pi. Second graders can take these, it’s the board from your phone. We can teach ourselves to be fluent in this stuff and use
it for the joyful things that we’ve been talking about, and slowly diminish this. The business models, I think, are right at the heart of this, as well as the surveillance behavior of certain countries. Those two are coming at us. What we wanna do next
is we’re gonna come up for Q and A, but we actually have a method to make all of you guys panelists, too. It’s called popcorn. What we’re gonna do is we’re gonna take two minutes, and the question for you, we wanna crowdsource because we’ve all flown here, and there’s a lotta knowledge in this room. Turn to your neighbor and discuss, tell us: what is something you have seen that’s actually promising, that’s real? It’s either something
that is a version of AI or something someone’s
doing about human rights, or some combination of the two. What is it that makes
you hopeful that’s real? It’s not a Post-it note idea,
but it’s really happening and you’ve seen it happen. If you haven’t seen anything, maybe try to brainstorm with each other what’s out there. Okay, ready? You got two minutes, each take a turn. Go. Thank you, everybody. Obviously we don’t have enough time to go through the whole room, but we’re gonna do what we call popcorn, which is if there’s
something that your neighbor just said to you that you’re like, oh, my God, everybody needs to know this, or if you wanna either get them to say it or summarize for them, or something you were thinking. It has to be like Twitter length. We’ve gotta get through about five or six folks. Who’s got something that you think, an insight, something we all could really use to know, and it’ll bring us in a good direction? Way back, okay, we got a lotta hands, okay. Should we go way back? Way back, here and here. Those two. – [Man] Slight insight
from my neighbor Grace, but I kinda took it a
little bit to the next step, which is around the strategy
of international shaming and how would you do that in the age of a Facebook or a Google. – Okay, but you’re not
allowed to just pose a question. Have you seen something that’s already doing that? – [Man] Yes, her–
– What is it? What are you doing? – [Man] Her organization. – Grace, tell us. – [Grace] Hi, I’m Grace. I run an organization called Walk Free– – Thank you. – [Grace] It’s an international
anti-slavery movement. I was just saying I love the comment about systems change,
and attacking the system, not the cherry on top, because it’s really all well and good for
everyone to do something, it’s important, but we need the biggest supply chains in the world
to be doing something. We need governments to be responding. – Are you doing something? – [Grace] Yeah, we’ve seen a massive shift in the last eight years from being locked outside the room, throwing stones, to now being in the room with Walmart and Nestle and these big supply chains because they see the risk to themselves. – Is Sally still in here? Sally was in here. She was saying first they ignore you, then they dismiss you. Then eventually, they fight you, and then they let you in. You guys are on the path. All right, any other key
insights, cool things? There was somebody right over here– – [Marine] Marine.
– Go. – [Marine] Marine Hooney. Working with children and youth, what we’ve realized is technology is revising the way we provide quality care for children. It is almost displacing our parenting capacity, and we’re handing it over to technology. – Now flip that, hold on. You’ve got that. Hold on, you are doing
something really interesting ’cause I know you are,
which is to change that into mental health. Just say a sentence of what you are doing, the work you guys are doing in schools around mental health,
but just specifically addressing some of this. – [Marine] It’s
social-emotional well-being, and helping the teachers and caregivers to look at children differently and instill and stimulate
their internal competencies. Self-esteem, confidence, know who I am, what I have and what I can do. – We ended up at dinner, which is the serendipity of Skoll. You were saying the
child sleeping at school, what does the teacher think of that child? Is the child bad? Is that child hungry? How do we flip how we
feel about the child? How do we feel, based on
different characteristics of that child, a child of poverty, a child of a different race from us? How do we flip that? You guys are doing that
work in South Africa. What’s the name of your organization? – [Marine] Repsy.
– Repsy. Other people, let's get two more. Yeah? – I'm really inspired by the use of AI to achieve energy efficiency. We had a panel recently that covered both citizen-to-citizen smart grids, which let you trade energy within blocks and neighborhoods, and appliance-level efficiency devices that build an algorithm of how a house predictively uses energy, so you can optimize when you draw down power and the way power is used. The combination of smart grids and device-level optimization, the panel was saying, could probably get us halfway to our climate goals if we just push on efficiency. We gotta do all the other stuff, but that was a source of enormous hope to me.
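[To make the appliance-level idea concrete, here is a minimal sketch, not anything the panel itself described: given a day-ahead forecast of electricity prices (or grid carbon intensity), a device picks the cheapest window in which to run a flexible load. The forecast numbers and the best_window helper are illustrative assumptions.]

```python
# Minimal sketch of predictive appliance scheduling (illustrative only):
# given a 24-hour price forecast, find the cheapest N-hour window in
# which to run a flexible load such as a dishwasher or an EV charger.
from typing import List, Tuple

def best_window(forecast: List[float], run_hours: int) -> Tuple[int, float]:
    """Return (start_hour, total_cost) of the cheapest run_hours-long window."""
    window = sum(forecast[:run_hours])  # cost of the first candidate window
    best_start, best_cost = 0, window
    for start in range(1, len(forecast) - run_hours + 1):
        # Slide the window one hour: add the entering hour, drop the leaving one.
        window += forecast[start + run_hours - 1] - forecast[start - 1]
        if window < best_cost:
            best_start, best_cost = start, window
    return best_start, best_cost

# Hypothetical day-ahead prices (pence/kWh); a real device would forecast these.
prices = [14, 13, 12, 11, 10, 11, 15, 22, 25, 21, 18, 16,
          15, 14, 14, 16, 20, 26, 28, 24, 19, 17, 15, 14]
start, cost = best_window(prices, run_hours=3)
print(f"Cheapest 3-hour slot starts at hour {start} (total {cost})")
```

[The same sliding-window idea scales up: swap prices for a carbon-intensity forecast and the device shifts load toward the grid's cleanest hours.]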
– It's really exciting. These little devices in your house, if you flipped to that second chart, one of the things that was really great is when you add tech folks, TQ, to government. The Presidential Innovation Fellows, who are entrepreneurs-in-residence, came in and started doing Green Button. They went: if the U.S. government can do this, lots of governments can do this, and reveal to you privately, through your login, your energy data. Then you can start doing things with it with apps and others.
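[As a rough illustration of what those apps can do, here is a sketch of reading a Green Button XML export. It assumes the NAESB ESPI layout, the IntervalBlock/IntervalReading elements that Green Button downloads typically use; the filename is a placeholder, and a real app would also apply the reading type's power-of-ten multiplier and unit codes.]

```python
# Rough sketch: sum the usage values in a Green Button XML export.
# Assumes the NAESB ESPI schema that Green Button files typically use.
import xml.etree.ElementTree as ET

ESPI = "{http://naesb.org/espi}"  # ESPI namespace used by Green Button data

def total_usage(path: str) -> int:
    """Sum the <value> of every IntervalReading in the file (raw units)."""
    root = ET.parse(path).getroot()
    total = 0
    for reading in root.iter(ESPI + "IntervalReading"):
        value = reading.find(ESPI + "value")
        if value is not None and value.text:
            total += int(value.text)
    return total

print(total_usage("greenbutton_export.xml"))  # placeholder filename
```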
This is what we did with the Data Science Cabinet, which is where you ask every ministry in your government to deliver a data scientist to the cabinet. Then you can start to have a rubric, it's a bit of an eye chart, but just like a project at school where someone's getting a high grade and they're doing well, all the way down to very basic. Give a rubric to your government, and have people do that. I think we're gonna hop, we'll do one more right there. Then I wanna get you guys. – Thanks. – Okay, thank you for addressing the elephant in the room. I live in Egypt, and our government just bought surveillance to monitor everybody's Facebook. Please just think of Africa, again, as a place where there's
no proper governance of everything that you are discussing. There’s nobody actually there enforcing laws to do more of that. You have corporations and governments in the West that are helping that happen. I think the conversation
here is very advanced for– – Africa.
– Africa. I really think we should just keep that in mind as we’re discussing
a global conversation. – Totally. All right. Babusi, you were born
and raised in Africa. – Yeah. – I was in Africa when I got an email from the White House about being considered for U.S. CTO. One of the things that I
think is really striking is there’s a layer in the world, and Africa’s an example, but everywhere, President Obama was going to Boise, Idaho. We wondered in Idaho,
how many tech meetups and incubator spaces and those things would there be there? Just like how many might be in any city, especially because I was in one of them when I got the email in Uganda in Africa. All around our world are these tech people who have these fluencies,
whether they’re in Gaza or whether they’re in other places. It turns up there were 15 tech meetups in Boise, Idaho, including one with 800 people in it. The girl that developed
the team was meeting twice a week, but no
one in Boise knew that. The same thing, I think,
is true everywhere we’re working, where there’s a layer of people who are fluent in tech, and maybe choosing certain topics that they’re working on, and people who just so happen to be using this stuff. I don’t know if you might comment about that dichotomy and what you’re finding we’re able to do with
that by bridging that and getting people to
meet each other locally. – Yeah, I would definitely say when it comes to tech hubs in Africa, there are different flavors. We have hubs that are propagated by entities such as Facebook and Google. There's hubs for entrepreneurship that also collide with technology, as well. There's hubs that look at nurturing talent to, I guess, code and develop. There's multiple efforts on the continent. Most of it is not really grassroots upwards, which is the unfortunate thing. It more or less aligns with an agenda or an outcome that won't necessarily directly benefit the communities in which it sits. An example: in Nigeria just recently, there was an e-commerce startup that had its ICO, but it's not headquartered in Nigeria, even though it makes its money in Nigeria. All the innovation is
happening in Nigeria, but it’s headquartered elsewhere, and the money is going somewhere else. – Yes, and this speaks to some of this public sector, how do we get the data into the public areas and stuff. You guys have any comments on these kinds of topics we’ve been talking about? About how to bridge these divides
and how to protect ourselves from these surveillance governments? To me, it feels like the only way is we are the ones we've been waiting for. We have to really team up and work across all our sectors. Elisabeth, you talked about government and whether the government had the will to do the specific building work. But this work here, this idea of teaming up on business models. Dunstan, you were talking
about getting inside the companies themselves
and having them flip. Any comments about directions, or coaching you'd give us all on where your hope lies? I know we have almost no more time, so maybe we'll end up with this as our last area of focus. – Yeah, I think it's a great opportunity to reinforce this concept
of business model, and making a business model that works really for everyone because there’s a sort of cautionary tale here about the built environment. No one asked me any questions because maybe you’re bored with
hearing about housing or the built environment, or you think that issue is done or solved because, sitting in this room, you're probably not worried about this building collapsing, are you? You're not. The built environment has evolved to serve the needs of a large majority of the population in a very positive way, and yet a third of the population actually has to worry
about that every day. I feel like there’s an opportunity here for tech to make inclusive business models that really work for everyone that bring the regulation, as well as the market forces, as well as
just the political will that's needed to really drive this forward in a positive process
that, 50 years from now, we don’t all roll our
eyes at a conversation about AI because we think it's solved, when it's actually not solved for the majority of the population. – When you're doing the work, like the way you're doing it, it's not
just for other people, but always design with, and then in the design chair is everybody, especially the people facing
the hardest challenges ’cause they have the most insight, so the diversity of the team. – I guess I have one hope and fear with this discussion. The hope is that we are
having this dialogue, that the discussion about
artificial intelligence and human rights is happening, and lots of people are
participating in that discussion. That gives me some hope. The fear is the fear of the bad actor. I have a lotta belief in more inclusive business
models, for example, and more open access to
the benefits of innovation. But there's also this downside. If we make technology available, we're also making it
available to bad actors. How do we address that
misuse of technology and how it can be deployed for bad? That has to be on our minds even as we're thinking about
how it’s used for good. I was very taken by your conclusion right there at the end about how we think about AI for good, and what needs to be in place for us
to do that effectively. – Whenever we hear "AI for good," I always think: so is all the rest of it AI for greed? Yes, maybe it is. But I spoke about Bletchley Park. That was technology organized in a very evil way in World War II that we had to fight against. This is not new, this is
not a new human issue. It's just our version of it, and it's really hard. It's really hard. I think it takes signal-to-noise work, and it really takes, remember the two, refugee camp versus Burning Man? When I went into the United States government, I went back in time at least 20 years from my commercial experience in the physical tools I was working with, which is ridiculous for that top talent and the money in that government. Most governments are like that. We need our governments and public institutions, our civil society and all of our NGOs to get these kinda characters onto your management team with you and start pushing these ways so that we can actually move to what you are pulling us towards, what everybody's pulling us towards. – I'll say one more hopeful thing, following what somebody said about naming and shaming. I think we're just waiting for a campaign, we're waiting for a clear campaign goal. I think people will get behind something. I think people do want change and they want it to be systemic change and they want it to be
respectful of rights and they want the benefits of technology and data to be for us, the people who are, at the moment, giving so much and being left out of the conversation. I think a campaign, a global movement, I think it's coming. – One of the ways that we do a campaign with shift7, and we do it with the United Nations Foundation,
who’s here, too, is we scout, like we just did in the room, for who’s
already fixing things. We use the internet to do it. Tomorrow we're gonna be teaching the Solutions Summit. We've been doing it for a while; Susan Alzner, who's here, helped make it happen when she was at the U.N. and is now at shift7. But we find these doers, like all of you. We get a thousand submissions from a hundred-plus countries in two weeks when we ask, off the U.N. site, who's already fixing the SDGs, promising or actual. We can also use the technologies and our crowdsourcing, our Wikipedia ways, to maybe try to see if we can get some collective genius on this stuff. I really encourage all of us to keep thinking: when you say, oh, no, what are we gonna do about this? How do you go, I wonder
who is doing something? Where are they? There’s 7 billion of us. Somebody’s got something. I’m hopeful about our talent if we can just hear each
other and collaborate. I think, are we outta time? Yeah. There were a bunch of questions around. Can you just say what you were gonna ask ’cause you were really stretching? Just there, yeah. – [Woman In Black Jacket]
I hate that I risk ending on a less hopeful note,
but my question was gonna be that technology creates, potentially, different haves and have-nots. – Very much so. – Right. Youth, whom we think of in some ways as have-nots, may in fact be haves in a technology space, because they've grown up and have
a different experience of technology. I was thinking about it because some of your language, for me, is intimidating. I have to struggle to stay with you, what's tech, what's AI. As someone who's grown up professionally in government, so I'm familiar with the systems you describe, how do I then make sure I stay with it enough
to send someone young on my team to a meeting about tech? – [Megan] Yeah, because
we need your wisdom, too. – Thinking about that difference in inclusion, which, in some ways, is non-intuitive, seems sort of another complicating but important factor. – Yeah, the picture that I showed with the 10th grader
with the police chief, that's one of the most promising, and you guys might show a couple examples of really promising things. One of the things I love is when the thing that's the problem becomes the resource for the other thing. Today we have this idea
that all the children have to learn STEM. If you really think about how we teach, when you teach reading,
we also teach writing. When we teach science and technology, you have to learn all this stuff, and you’re not gonna make anything until years later, right? Why don’t we let kids express with STEM? One of the really creative
opportunities we saw outta Los Angeles: there's a woman there named Jeanne Holm, who's on L.A. Mayor Garcetti's innovation team. She's also a UCLA professor. She just started to use the data sets from the city, which were becoming open, as you said, these public data sets, we have tons of them, for homework. Maybe you could work on your hometown during your homework. It was working: the students were really inspired, and it started to draw a more diverse set of students, interested in any topic. I'd like to work on air quality in Los Angeles. I'd like to work on homelessness. I wanna work on the transport issue. I wanna work on garbage pickup. I wanna work on animals, shelters, whatever, poverty, food delivery. Those are all data sets we have. It's really exciting. She created the Data Science Federation, the federation an homage to Star Trek. Any student can have this. Now many community colleges, UCLA, USC, all have the data sets from the city in the homework in those schools, which makes it much less boring to learn STEM, and you're working on real problems. Then they team with the city team. Suddenly, the city team, like you, would have these young people on your team, and you can go together to the meetings.
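[For a flavor of what that homework can look like, here is a small sketch that pulls a city data set from L.A.'s Socrata-based open data portal and asks a first question of it. The data set ID and column name are hypothetical placeholders; browse https://data.lacity.org to pick real ones, whether air quality, homelessness, transport, or something else.]

```python
# Sketch of an open-data homework starter: fetch rows from the City of
# L.A.'s Socrata portal and count the most common values in one column.
import json
from collections import Counter
from urllib.request import urlopen

DATASET_ID = "xxxx-xxxx"     # hypothetical; every Socrata data set has a short ID
COLUMN = "council_district"  # hypothetical column to summarize

url = f"https://data.lacity.org/resource/{DATASET_ID}.json?$limit=1000"
with urlopen(url) as resp:
    rows = json.load(resp)

print(Counter(row.get(COLUMN, "unknown") for row in rows).most_common(10))
```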
Now there's 131 jurisdictions in Southern California behaving this way. This is the kinda thing we can do in this we-are-the-ones-we've-been-waiting-for spirit: put two and 10 together, and then storytell it so more people can share. We can do that in any municipality. We can take our scared blood pressure down 'cause it's really scary stuff. But it's not scary if we work together and we also find each other. If you need a doctor, get a doctor. If you need a techie, if you need an operator, if you need a lawyer, team up. Any other thoughts come to mind for you guys around that? Good. Go scout for what's out there. Tomorrow morning, come to our breakfast on the Solutions Summit. Thank you so much. Thank you to our panelists for your key insights and your questions.
