"What AI is -- and isn't | Sebastian Thrun and Chris Anderson - YouTube"

Title: "What AI is -- and isn't | Sebastian Thrun and Chris Anderson - YouTube" Transcript: "Chris Anderson: Help us understand what machine learning is, because that seems to be the key driver of so much of the excitement and also of the concern around artificial intelligence. How does machine learning work? Sebastian Thrun: So, artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. So here's how it works. If you program a computer today, say, your phone, then you hire software engineers that write a very, very long kitchen recipe, like, "If the water is too hot, turn down the temperature. If it's too cold, turn up the temperature." The recipes are not just 10 lines long. They are millions of lines long. A modern cell phone has 12 million lines of code. A browser has five million lines of code. And each bug in this recipe can cause your computer to crash. That's why a software engineer makes so much money. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules. A really good example is AlphaGo, which recently was won by Google. Normally, in game playing, you would really write down all the rules, but in AlphaGo's case, the system looked over a million games and was able to infer its own rules and then beat the world's residing Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data. As I said, the inflection point where this has become really possible -- very embarrassing, my thesis was about machine learning. It was completely insignificant, don't read it, because it was 20 years ago and back then, the computers were as big as a cockroach brain. Now they are powerful enough to really emulate kind of specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. So I'd say AlphaGo looked at more than a million games. No human expert can ever study a million games. Google has looked at over a hundred billion web pages. No person can ever study a hundred billion web pages. So as a result, the computer can find rules that even people can't find. CA: So instead of looking ahead to, "If he does that, I will do that," it's more saying, "Here is what looks like a winning pattern, here is what looks like a winning pattern. " ST: Yeah. I mean, think about how you raise children. You don't spend the first 18 years giving kids a rule for every contingency and set them free and they have this big program. They stumble, fall, get up, they get slapped or spanked, and they have a positive experience, a good grade in school, and they figure it out on their own. That's happening with computers now, which makes computer programming so much easier all of a sudden. Now we don't have to think anymore. We just give them lots of data. CA: And so, this has been key to the spectacular improvement in power of self-driving cars. I think you gave me an example. Can you explain what's happening here? ST: This is a drive of a self-driving car that we happened to have at Udacity and recently made into a spin-off called Voyage. 
CA: And so, this has been key to the spectacular improvement in power of self-driving cars. I think you gave me an example. Can you explain what's happening here?

ST: This is a drive of a self-driving car that we happened to have at Udacity and recently made into a spin-off called Voyage. We have used this thing called deep learning to train a car to drive itself, and this is driving from Mountain View, California, to San Francisco on El Camino Real on a rainy day, with bicyclists and pedestrians and 133 traffic lights. And the novel thing here is, many, many moons ago, I started the Google self-driving car team. And back in the day, I hired the world's best software engineers to find the world's best rules. This is just trained. We drive this road 20 times, we put all this data into the computer brain, and after a few hours of processing, it comes up with behavior that often surpasses human agility. So it's become really easy to program it. The vehicle has a radar to do distance estimation. This is very commonly used in these kinds of systems. On the left side you see a laser diagram, where you see obstacles like trees and so on depicted by the laser. But almost all the interesting work is centering on the camera image now. We're really shifting over from precision sensors like radars and lasers into very cheap, commoditized sensors.

CA: Maybe we can see that. Talk about this.

ST: This is an example where we posed a challenge to Udacity students to take what we call a self-driving car Nanodegree. We gave them this dataset and said "Hey, can you guys figure out how to steer this car?" And if you look at the images, it's, even for humans, quite impossible to get the steering right. It drives better than I could drive on this imagery, using deep learning. And again, it's the same methodology. It's this magical thing. When you give enough data to a computer now, and give enough time to comprehend the data, it finds its own rules.
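The Udacity steering challenge described above maps camera images directly to steering commands. The sketch below is a hedged illustration of how such an end-to-end model might look in Keras; the architecture, image size, and random stand-in data are assumptions for the example, not the actual Udacity or Voyage pipeline.

```python
# Illustrative end-to-end "behavioral cloning" sketch: a small convolutional
# network regresses a steering angle from camera frames (assumed setup, not the
# real system from the talk).
import numpy as np
from tensorflow.keras import layers, models

def build_steering_model(height=66, width=200):
    """Tiny camera-to-steering regressor."""
    return models.Sequential([
        layers.Input(shape=(height, width, 3)),
        layers.Rescaling(1.0 / 255),                      # normalize pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                                  # predicted steering angle
    ])

model = build_steering_model()
model.compile(optimizer="adam", loss="mse")

# Stand-in for logged drives: frames paired with the human driver's steering angle.
frames = np.random.randint(0, 256, size=(32, 66, 200, 3)).astype("float32")
angles = np.random.uniform(-1.0, 1.0, size=(32, 1)).astype("float32")
model.fit(frames, angles, epochs=1, batch_size=8, verbose=0)
```

In a real pipeline the random arrays would be replaced by recorded drives ("we drive this road 20 times"), and the trained network would be evaluated on held-out footage before it ever steers a car.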
CA: And so that has led to the development of powerful applications in all sorts of areas.

ST: It takes more than a decade of training to be a good dermatologist. What you see here is the machine learning version of it. It's called a neural network. "Neural networks" is the technical term for these machine learning algorithms. They've been around since the 1980s. This one was invented in 1988 by a Facebook Fellow called Yann LeCun, and it propagates data in stages through what you could think of as the human brain. Andrew Ng has been able to show that it's able to find cat faces and dog faces in vast amounts of images. What my student team at Stanford has shown is that if you train it on 129,000 images of skin conditions, including melanoma and carcinomas, you can do as good a job as the best human dermatologists.

CA: You were telling me an anecdote, I think, about this image right here. What happened here?

ST: This was last Thursday. That's a moving piece. What we've shown before and we published in "Nature" earlier this year was this idea that we show dermatologists images and our computer program images, and count how often they're right. And then he had a second moment, where he said, "Well, let me just check with the app." So he took out his iPhone and ran our piece of software, our "pocket dermatologist," so to speak, and the iPhone said: cancer. It said melanoma. And then he was confused. And he decided, "OK, maybe I trust the iPhone a little bit more than myself," and he sent it out to the lab to get it biopsied.

CA: Are you thinking of doing this, making an app that allows self-checking?

ST: So my in-box is flooded about cancer apps, with heartbreaking stories of people. I mean, some people have had 10, 15, 20 melanomas removed, and are scared that one might be overlooked, like this one, and also, about, I don't know, flying cars and speaker inquiries these days, I guess. So we're currently doing clinical tests, and if these clinical tests commence and our data holds up, we might be able at some point to take this kind of technology and take it out of the Stanford clinic and bring it to the entire world, places where Stanford doctors never, ever set foot.

CA: And do I hear this right, that it seemed like what you were saying, because you are working with this army of Udacity students, that in a way, you're applying a different form of machine learning than might take place in a company?

ST: We turn them around in 48 hours, and we've been able to build a self-driving car that can drive from Mountain View to San Francisco on surface streets. It's not quite on par with Google after seven years of Google work, but it's getting there. And it took us only two engineers and three months to do this. But we've been able to build this car in three months, so I am actually rethinking how we organize corporations. We have a staff of 9,000 people who were never hired, that I never fire. They show up to work and I don't even know. Then they submit to me maybe 9,000 answers. I'm not obliged to use any of those.

CA: I mean, Gary Kasparov said on the first day [of TED2017] that the winners of chess, surprisingly, turned out to be two amateur chess players with three mediocre-ish, mediocre-to-good, computer programs, that could outperform one grand master with one great chess player, like it was all part of the process.

ST: I don't want to come into my kitchen and have the refrigerator fall in love with the dishwasher and tell me, because I wasn't nice enough, my food is now warm. I wouldn't buy these products, and I don't want them. But the truth is, for me, AI has always been an augmentation of people. It's been an augmentation of us, to make us stronger. And I believe this new wave of AI will make us much, much stronger as a human race.

CA: We'll come on to that a bit more, but just to continue with the scary part of this for some people, like, what feels like it gets scary for people is when you have a computer that can, one, rewrite its own code, so, it can create multiple copies of itself, try a bunch of different code versions, possibly even at random, and then check them out and see if a goal is achieved and improved.

ST: But it's a very remote possibility. So let me just translate what I heard you say. In the AlphaGo case, we had exactly this thing: the computer would play the game against itself and then learn new rules. And what machine learning is is a rewriting of the rules. It's the rewriting of code. But I think there was absolutely no concern that AlphaGo would take over the world.
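Chris Anderson's description of a computer that tries many code versions, "possibly even at random," and checks whether a goal improved is essentially a random-search loop. The toy sketch below is an assumption for illustration only (the goal function and mutation scheme are invented, and this is not any system mentioned in the talk): it mutates a candidate at random and keeps it only when the goal score improves.

```python
# Toy "try variants, keep what improves the goal" loop (illustrative assumption).
import random

def goal_score(params):
    """Invented goal: get both parameters close to 3.0 (higher is better)."""
    return -((params[0] - 3.0) ** 2 + (params[1] - 3.0) ** 2)

def mutate(params, scale=0.5):
    """Produce a slightly different candidate, at random."""
    return [p + random.uniform(-scale, scale) for p in params]

best = [0.0, 0.0]
best_score = goal_score(best)

for _ in range(2000):
    candidate = mutate(best)        # try a different version
    score = goal_score(candidate)   # check whether the goal improved
    if score > best_score:          # keep only improvements
        best, best_score = candidate, score

print(best, best_score)  # ends up near [3.0, 3.0]
```

AlphaGo's self-play is a far more sophisticated relative of this idea: the system generates its own experience and keeps whatever improves its play, but only within the narrow domain of Go.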
CA: Isn't there a chance that, as this broadens out, there could be a different kind of runaway effect?

ST: That's where I draw the line, honestly. And the chance exists -- I don't want to downplay it -- but I think it's remote, and it's not the thing that's on my mind these days, because I think the big revolution is something else. It can't even control a motorcycle. It's a very specific, domain-specific function, and the same is true for our cancer app. There has been almost no progress on this thing called "general AI," where you go to an AI and say, "Hey, invent for me special relativity or string theory." Today, 75 percent of us work in offices and do repetitive things. We've become spreadsheet monkeys. And not just low-end labor. We've become dermatologists doing repetitive things, lawyers doing repetitive things. I think we are at the brink of being able to take an AI, have it look over our shoulders, and it will make us maybe 10 or 50 times as effective in these repetitive things.

ST: Yeah, and that's an issue, and it's a big issue, and it was pointed out yesterday morning by several guest speakers. Now, prior to me showing up onstage, I confessed I'm a positive, optimistic person, so let me give you an optimistic pitch, which is, think of yourself back 300 years ago. Europe just survived 140 years of continuous war, none of you could read or write, there were no jobs that you hold today, like investment banker or software engineer or TV anchor. We're always good at acknowledging past progress and the benefit of it, like our iPhones or our planes or electricity or medical supply. We all love to live to 80, which was impossible 300 years ago. But we kind of don't apply the same rules to the future. So if I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don't enjoy it, I spend about four hours per day on stupid, repetitive email. What this will empower us to do is to turn this creativity into action. Like, what if you could build Google in a day? What if you could sit over beer and invent the next Snapchat, whatever it is, and tomorrow morning it's up and running? And that is not science fiction. What's going to happen is, we are already in history.

CA: Hmm. So when Martin Ford argued, you know, that this time it's different because the intelligence that we've used in the past to find new ways to be will be matched at the same pace by computers taking over those things, what I hear you saying is that, not completely, because of human creativity.

ST: It's a technology that helps us do repetitive things. And the progress has been entirely on the repetitive end. It's been in legal document discovery. It's been contract drafting. It's been screening X-rays of your chest. And these things are so specialized, I don't see the big threat to humanity. When this is said and done, we're going to remember everything we've ever said and seen, you'll remember every person, which is good for me in my early stages of Alzheimer's. Sorry, what was I saying? I forgot.

CA: (Laughs)

ST: We will probably have an IQ of 1,000 or more. There will be no more spelling classes for our kids, because there's no spelling issue anymore. The dream is that humans can rise to just a new level of empowerment and discovery. That's the dream.

ST: And think about this: if you look at the history of humanity, that might be whatever -- 60,000 to 100,000 years old, give or take -- almost everything that you cherish in terms of invention, of technology, of things we've built, has been invented in the last 150 years. I believe only one percent of interesting things have been invented yet. Right? We haven't cured cancer. We don't have flying cars -- yet. Hopefully, I'll change this. That used to be an example people laughed about. (Laughs) It's funny, isn't it? Working secretly on flying cars. We don't live twice as long yet. We have no gravity shields. We can't beam ourselves from one location to another.
That sounds ridiculous, but about 200 years ago, experts were of the opinion that flight wouldn't exist, even 120 years ago, and if you moved faster than you could run, you
 
Based on the text above, provide 10 takeaways with supporting evidence or examples for each takeaway when available:
  1. Machine learning is a shift from hand-written rules to learned ones: instead of software engineers writing millions of lines of code, you give the computer examples and it infers its own rules. (Sebastian Thrun)
  2. Artificial intelligence and machine learning are about 60 years old, but they struggled for most of that history; only recently have computing power and datasets reached the scale necessary to make machines smart. (Sebastian Thrun)
  3. AlphaGo, which beat the world's reigning Go champion, is an example of a system that looked over a million games and inferred its own rules instead of having them written down by hand. (Sebastian Thrun)
  4. Just as children are not given a rule for every contingency but learn through experience and figure things out on their own, computers now learn from data, which makes programming much easier. (Sebastian Thrun)
  5. Computers can look at far more data than people can (AlphaGo studied more than a million games; Google has looked at over a hundred billion web pages), which lets them find rules that even people can't find. (Sebastian Thrun)
  6. Deep learning has been used to train a car to drive itself: after driving a road 20 times and processing the data for a few hours, the system comes up with behavior that often surpasses human agility. (Sebastian Thrun)
  7. Machine learning algorithms called "neural networks," around since the 1980s, can find cat and dog faces in vast amounts of images (Andrew Ng) and, when trained on 129,000 images of skin conditions, can diagnose melanoma and carcinomas as well as the best human dermatologists (Thrun's Stanford student team).
  8. Machine learning is a technology that helps us do repetitive things, freeing up human creativity to be turned into action. (Sebastian Thrun)
  9. The progress with machine learning has been entirely on the repetitive end, in areas such as legal document discovery, contract drafting, and screening chest X-rays. (Sebastian Thrun)
  10. Machine learning has the potential to raise humans to a new level of empowerment and discovery, making us perhaps 10 to 50 times as effective at repetitive work. (Sebastian Thrun)
 