Marita Cheng: When I was growing up, I had a family friend who became blind in his 20s. When we went out as a family, he would say to me, "Rita, hold my hand, hold my arm, and tell me what you see." So I'd say, "There are some flowers here to the left. There's a gate here to the right. There's a mountain over in the distance." And he would say, "What color are those flowers? Can I reach out my hand and touch them? Could you lead my hand to them?" I'd say, "Oh, they're pink, they're blue." And he'd say, "Tell me more, tell me more about what you can see. Share it with me."

About eight months ago, Alberto and I decided to create an app to enable blind people to recognize their surroundings. We used convolutional neural networks, a type of computer system that has been trained on millions of images. It learns the features of a dog. It learns what a flower looks like. It learns a fork, a knife, everyday objects. Using this system, we created something called "Aipoly" that recognizes over 1,000 everyday objects. A blind person just needs to walk around with their phone, hold it over various objects, and it will say the name of each object. Using VoiceOver, the phone reads the word on the screen aloud, so they know exactly what's in front of them. [A minimal code sketch of this idea appears after the transcript.]

(Applause)

Since we released our application in January, we've had over 100,000 downloads around the world. The app has been so popular that we've translated it into seven languages.

Alberto Rizzoli: After experiencing the technology for the first time, our users kept asking us for more. We asked them to think of our technology as a superpower for a moment: something they could effortlessly invoke at any time to gain an understanding of what was in front of them. Surprisingly, nobody really wants X-ray vision or telescopic goggles; what everyone wants is more information. It's not surprising, because 60% of the information we perceive comes through sight. It is the main tool we use to understand our surroundings and, often, to make decisions. If you're blind, you must rely on other senses, like touch or hearing, and you miss out on the lightning-fast identification that a sighted person's brain and eyes perform every second of every day.

We went to the Santa Clara Valley Blind Center and tried to build this superpower. We tried to see what kind of information people wanted, and it's simple things: whether a dish is clean or not, whether you can cross the street, what product you're looking at. Things that can lead to a decision, from a simple gaze to an understanding of the situation in front of you. We asked what form factor people preferred, and we built it. We put together some bone-conduction headphones, a pair of sunglasses, and a tiny camera, and we asked our friends to think of a common situation in which they had to make many small decisions. We told them we would give them the prototype and take them into the middle of that situation. Let's see how it went.

(Video starts)

[We asked blind individuals: what are the hardest things to do when visually impaired?]

I mean, it takes me forever to go grocery shopping, even with someone helping me who has known me for years. I'll say, "What's in that cabinet?" Or develop a system: right to left, top to bottom.

[So we took them grocery shopping with our technology]

Computer: Oranges.
Man: This is great. I'm really liking it. Apples, grapes, carrots. I'm looking, I'm looking.
Computer: Lilies.
Man: Lilies.
Computer: Bouquet.
Man: A bouquet, ahh!
Computer: Roses, flowers.
Man: Can I take these home? This is great.
Computer: Roses.
Man: Roses.
Computer: Bouquet, tulips.
Man: Tulips.
Computer: Pineapple.
Woman: It's a pineapple.
Computer: Mango.
Woman: Mango.
Computer: M&Ms.
Woman: M&Ms.
Computer: Tic Tac.
Woman: It just said, "Tic Tac."
Computer: Tic Tac.
Woman: Tic Tac.
Computer: Paper note, calendar.
Woman: Calendar, you got it. Wow, I didn't know what that was at all.
Computer: Pretzels.
Woman: Pretzels.
Computer: Pretzels.
Woman: It said, "Pretzels."
Computer: Lipton tea.
Woman 2: Lipton? Tea?
Computer: Lipton teabags.
Woman 2: It's like I'm seeing it, but I'm not; it's seeing it for me.
Computer: Coffee mate.
Woman 2: Mate; Coffee mate. It didn't say "coffee," but it kept saying "mate."
Computer: Mate, mate.
Man 2: I put on the glasses, and right away it told me there was an apple, there were oranges, there was this and there was that, and it's like, "This is great!" Instant love.

(Video ends)

(Applause)

AR: That little pair of glasses, connected to their phones, could identify 4,000 to 5,000 objects in real time. That's about the capacity of a five-year-old child. A simple accessory can now expand a person's perception to thousands of new possibilities. This is the power of marrying artificial and human intelligence, and the potential is still vastly untapped. This isn't going to be a revolution just because GPUs are getting faster or the research is getting more open, but because the barriers to entry for artificial intelligence that impacts millions of lives are getting lower and lower.

The Paralympic Games are starting in a few weeks: an event where sheer force of will, training, and technology turn people with a disability into superhumans. And so, too, will our ability to think, perceive, make decisions, and learn increase exponentially. You will be building the tools to make this happen.

So tomorrow, with your morning coffee, take 40 minutes and try out a tutorial on deep learning. Build yourself a small superpower. All it takes is your laptop and a bunch of data, like your holiday pictures. Superpower engineer: that's a great dream job. The good news is that the world needs many, many more of them, so we can't wait to see what you will build next. Thank you.

(Applause)
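To make the pipeline Marita describes concrete: a convolutional neural network pretrained on millions of images can name whatever the camera sees in a few lines of Python. This is a minimal sketch, not Aipoly's actual code; the model choice (torchvision's pretrained MobileNetV3) and the input file name "photo.jpg" are illustrative assumptions.

```python
# A minimal sketch of CNN-based object naming, NOT Aipoly's real code.
# Assumes PyTorch and torchvision (>= 0.13) are installed; "photo.jpg"
# is a hypothetical input image.
import torch
from PIL import Image
from torchvision.models import MobileNet_V3_Large_Weights, mobilenet_v3_large

weights = MobileNet_V3_Large_Weights.DEFAULT   # pretrained on ImageNet
model = mobilenet_v3_large(weights=weights).eval()
preprocess = weights.transforms()              # the matching resize/normalize

img = Image.open("photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]     # class probabilities

idx = int(probs.argmax())
# Print the label that a screen reader such as VoiceOver could speak aloud.
print(f"{weights.meta['categories'][idx]} ({probs[idx].item():.0%})")
```

A phone app would run the same loop on live camera frames and hand each label to the system's text-to-speech, which is essentially the flow described on stage.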
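As for the closing invitation, the 40-minute tutorial with a laptop and your holiday pictures, the usual recipe is transfer learning: keep a pretrained network's features and retrain only its final layer on your own photos. A minimal sketch, assuming PyTorch/torchvision and a hypothetical holiday_pics/ folder with one subfolder per category:

```python
# Transfer learning on personal photos: a hedged sketch, not any tutorial's
# exact code. Assumes a folder layout like holiday_pics/beach/*.jpg,
# holiday_pics/city/*.jpg, one subfolder per label.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Normalization that the pretrained ImageNet weights expect
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("holiday_pics", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():       # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # fresh head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):             # a few passes are enough for a demo
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```

Because only the last layer trains, this runs in minutes on a CPU, roughly the "laptop plus holiday pictures" scale of effort the talk promises.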