Last night, I headed down to the beautiful Bell House, in the Gowanus section of Brooklyn, for this month's Secret Science Club lecture featuring Dr Hod Lipson, professor of mechanical engineering and director of the Creative Machines Lab at Columbia University. The subject of Dr Lipson's lecture was AI and robotics.
Dr Lipson began his lecture by declaring that, after two decades of research, the fields of artificial intelligence and robotics took off in the last couple of years. He posed the questions: what are the trends in robotics, where are they going, and what are the 'game changers' in the field? He urged the audience to think long term, then, displaying a movie still, intoned that 'a long time ago and many galaxies away', the conventional wisdom was that robots would take humanoid forms. That era also assumed a 'lone genius' model of scientific research, which is not the norm in actual practice.
Today, there are millions of robots, mainly in factories. These robots are powerful and precise, but they are not clever. Dr Lipson posed the question, 'how do you make robots more adaptive?' He noted, regarding artificial intelligence, that there are 'tidbits' everywhere, but not in the manner in which most laypersons think- AI is involved in investment banking, weather prediction, and music programming. He joked that, in investment banking, artificial intelligence is paired with real money. The number of robots is growing and the role of artificial intelligence in daily life is growing- what had been talked about for decades is now common.
Dr Lipson then displayed a picture of a Roomba and then a picture of a drone, then quipped that computers that once would have netted a researcher a PhD are now used as toys. He then displayed a quote by Marc Andreessen: "Software is eating the world." Artificial intelligence is now 'infusing' robots. Dr Lipson then showed us a video of table tennis champion Timo Boll playing a match against a robot made by KUKA:
In the match, the robot dominated until Mr Boll was able to figure out ways to thwart it- robots perform well when things are in the right place at the right time, but they have trouble coping with 'corner cases'. It is difficult for robots to adapt to conditions that their programmers didn't anticipate.
Dr Lipson then backtracked a bit, giving a quick history of robotics. In 1912, John Hammond, Jr. and Benjamin Miessner designed a self-orienting robot which was programmed to turn towards light sources. This robot, designed to deliver explosives behind enemy lines, was described as being able “to inherit almost superhuman intelligence.” Throughout the 20th century, the goal was to develop exponentially more powerful technologies, a goal expressed most succinctly in Moore's Law. The exponential growth of processing power is only a small part of the equation... something was missing in the hardware and software, and a political war arose within the computer science community about how to achieve artificial intelligence. Should programming be a 'top down' process, or should computer scientists design computers that learn? In traditional computer programming, a programmer writes code and the computer uses the algorithm to solve a problem. With the advent of machine learning, the computer learns how to solve a problem. The problem with top-down programming is that it is impossible to think of every 'if then else' statement needed to cope with a multitude of conditions. The top down approach doesn't work with all problems- there's no code to keep a driverless car on the road if the 'if then else's' are insufficient.
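To make the contrast concrete, here is a minimal Python sketch (the scenario, numbers, and function names are hypothetical, not anything shown in the lecture): a hand-coded steering rule next to a rule whose threshold is learned from labeled examples instead of being written by the programmer.

```python
# Top-down: the programmer hard-codes every 'if then else' in advance.
def top_down_steering(offset_m):
    """Steer based on how far (in meters) the car has drifted off-center."""
    if offset_m > 0.5:
        return "steer left"
    elif offset_m < -0.5:
        return "steer right"
    else:
        return "hold course"

# Learning: derive the decision boundary from labeled examples instead.
def learn_threshold(examples):
    """Pick the drift threshold that best separates the labeled examples.
    examples: list of (offset_m, correct_action) pairs."""
    candidates = [0.0] + sorted(abs(o) for o, _ in examples)
    def mistakes(t):
        # Count examples where 'threshold exceeded' disagrees with the label.
        return sum(("steer" in a) != (abs(o) > t) for o, a in examples)
    return min(candidates, key=mistakes)

examples = [(-1.2, "steer right"), (-0.1, "hold course"),
            (0.2, "hold course"), (0.9, "steer left")]
threshold = learn_threshold(examples)  # learned from data, not hand-written
```

The hand-coded version fails on any condition the programmer never imagined; the learned version improves simply by being shown more labeled examples.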
In 1957, Frank Rosenblatt of Cornell University developed a neural network that could distinguish simple shapes, pioneering the field of machine learning. One early development in machine learning involved a computer that could play checkers. The original programming entailed writing algorithms instructing the computer to take the opponent's pieces- the computer played a mediocre checkers game because humans made unexpected moves, things that the programming didn't anticipate. The programmers then changed their strategy- they collected data and programmed the computer to 'play' many games of checkers, mimicking the winning games. The computer was able to accumulate 'lifetimes' of checker-playing experience. Then came chess-playing computers, which also learned from multitudes of games until they were able to beat grandmasters. Recently, a computer was able to beat a champion Go player. Dr Lipson asked if this was the end of an era, then said that computers are poised to tackle the real world.
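The strategy shift described above, from hand-coded tactics to mimicking winning games, can be sketched in a few lines of Python (a toy with made-up move names and game records, not the historical checkers program):

```python
from collections import Counter

def learn_policy(games):
    """Count each move's appearances in winning games only.
    games: list of (move_list, won) pairs."""
    win_counts = Counter()
    for moves, won in games:
        if won:
            win_counts.update(moves)
    return win_counts

def choose_move(win_counts, legal_moves):
    """Mimic the winners: pick the legal move seen most often in wins."""
    return max(legal_moves, key=lambda m: win_counts[m])

# Made-up game records: (moves played, did the computer win?).
games = [(["advance", "capture"], True),
         (["retreat", "advance"], False),
         (["capture", "capture"], True)]
policy = learn_policy(games)
```

The more recorded games the program 'plays', the more of those 'lifetimes' of experience accumulate in the counts, with no new hand-written rules.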
The subject of the lecture then shifted to the DARPA autonomous vehicle challenge. Dr Lipson referred to the work of Stanford's Sebastian Thrun, who addressed the quandary: does one use machine learning or big data? Does one teach an autonomous vehicle to drive like one would teach a human how to drive? Ultimately, the machine learning approach was taken. In 2007, there was a collision between the Cornell driverless car and the MIT driverless car, which Dr Lipson was quick to blame on the MIT team- the Cornell driverless car interpreted an immovable object as another vehicle and stopped in order to give it right-of-way; the MIT vehicle passed the stationary Cornell vehicle just as it reinterpreted the immovable object and started to steer around it. Both machines 'sensed' the world around them, but they didn't 'understand' what they sensed. The eyes worked, but they could not see. There was a failure of understanding.
Along with artificial intelligence, advancements in material science are also revolutionizing robotics. New synthetic 'muscle fiber' has been developed, and 3D printers can make pieces which mimic organic structures. The programming of these new robots marries hand-written algorithms with machine learning, combining to produce an explosion in AI. In the area of Big Data, the number of cameras has increased exponentially... visual data, once difficult to obtain, is now ubiquitous. While academia always had access to 'fast' computers, this data was the missing element in early artificial intelligence efforts.
ImageNet, originally a set of one million images, was created to test artificial intelligence. A million images were classified in thousands of categories, and the AIs were tasked to label novel images correctly. Dr Lipson jocularly referred to ImageNet as a 'Mechanical Turk' for recognizing images. Describing the project, Dr Lipson noted that it was 'freelancers training AI to take their jobs'. Originally, the best algorithms had a 25% error rate on ImageNet, a rate altogether unacceptable for driverless cars. In 2012, a team from the University of Toronto developed an algorithm called SuperVision which dropped the error rate to 15%. Dr Lipson joked that this dramatic improvement was 'like seeing Jesus'. The real game-changer in visual recognition is the fact that the code is open source. While Frank Rosenblatt's early neural networks involved many 'layers' of wires, the new neural networks for visual recognition involve many 'layers' of code- it's a 'souped up' version of the neural network. Humans typically have an error rate of 5% on ImageNet; much of that error is due to an inability to distinguish images due to unfamiliarity with, say, breeds of dogs or types of lizards. In 2015, a Microsoft team was able to bring the error rate down to 3%- for the first time in history, machines were able to 'understand' images better than humans were.
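As a rough illustration of those 'layers' of code, here is a tiny Python forward pass through two stacked layers (the weights are invented for illustration, not trained on ImageNet or anything else):

```python
# A two-layer 'neural network' forward pass in plain Python.
def relu(xs):
    """Nonlinearity: negative activations are clipped to zero."""
    return [max(0.0, x) for x in xs]

def layer(inputs, weights, biases):
    """One dense layer: weighted sums plus biases, then ReLU."""
    return relu([sum(w * x for w, x in zip(row, inputs)) + b
                 for row, b in zip(weights, biases)])

def forward(x, layers):
    """Stack layers: the output of each becomes the input of the next."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Illustrative (untrained) weights: 3 inputs -> 2 hidden units -> 1 output.
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
out = forward([1.0, 2.0, 3.0], net)
```

Modern image-recognition networks follow this same stacked pattern, just with many more layers and millions of weights learned from labeled images rather than typed in.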
Another game changer in AI is the improvement of voice recognition and a refinement of image understanding- for example, the ability of a computer to recognize a 'kiss' in a movie. Computers are gradually coming to understand lots of images and the connections between objects.
Returning to the subject of driverless cars, Dr Lipson noted that it's difficult to distinguish things on the road... there are many 'corner cases'. Is an object a pothole? An oil spill? A shadow? The object recognition abilities of AIs couldn't be trusted. Is an object a fire hydrant, or a kid? With new hardware and new software bringing better image recognition ability to machines, the last link in the 'driverless car' puzzle has been made.
Dr Lipson then noted that AI has wormed its way into all aspects of our lives. Because the algorithms are open-source, anyone can use them. There is a 'dark side', though: we don't always understand how the algorithms work. In one particular instance, an image recognition program 'learned' how to distinguish faces, even though it was never programmed to do so- the AI learned that tracking human faces was useful. While PhD candidates were working on facial recognition software, this particular computer just learned it. With the advent of the 'cloud', Dr Lipson joked, what one robot learns, all robots know. With shared data, a driverless car can draw upon thousands of lifetimes of driving, and a robot doctor can 'learn' on millions of patients.
Dr Lipson opined that Isaac Asimov's Three Laws of Robotics were garbage- we don't know how robots learn and make decisions. The best way to track AI is to use AI to do so. The goal is to make 'curious and creative mechanisms'. Most artificial intelligences are designed to make decisions- buy or sell? These AI's take in information and make decisions.
Another kind of intelligence is creativity- humans create, but synthesis, unlike analysis, is difficult to teach. In developing 'creative' robots, the best inspirations are not from humans, but from evolution. In one particular case, a 3D printed robot was designed using a biological model as inspiration, and an 'evolutionary' algorithm was used to develop a 'crawling' robot. Originally, the robot didn't have a self-image and didn't know how to walk. First the robot had to discover what it looked like, and initially it flailed around until a self-image developed. Then the robot learned that it had four legs and used this information to develop a method of walking. The robot had to learn to walk; Dr Lipson joked that it was a far cry from an 'evil spider robot'. Then, when the robot had learned how to walk, one leg was removed, so the robot had to modify its self-image and adjust its locomotion. Other gaits were generated for different damaged morphologies... here's a video of Dr Lipson explaining the robot's learning process:
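The trial-and-error loop behind such 'evolutionary' algorithms can be sketched in miniature. This toy hill-climber optimizes a made-up one-number 'gait' against a made-up fitness function; it is not Dr Lipson's actual code, where fitness would come from simulating the physical robot.

```python
import random

def distance_travelled(gait):
    """Stand-in fitness function scoring a one-number 'gait'; a real
    system would run a physics simulation of the robot instead.
    The best possible gait here is 3.0."""
    return -(gait - 3.0) ** 2

def evolve(generations=500, seed=0):
    """(1+1) evolutionary loop: mutate, and keep the mutant if it
    walks at least as far as the current champion."""
    rng = random.Random(seed)
    gait = rng.uniform(0.0, 6.0)              # random starting gait
    for _ in range(generations):
        mutant = gait + rng.gauss(0.0, 0.3)   # small random mutation
        if distance_travelled(mutant) >= distance_travelled(gait):
            gait = mutant                     # selection: keep the better walker
    return gait

best = evolve()
```

Nothing in the loop encodes how to walk; better gaits simply survive. Re-running the same loop after 'removing a leg' (that is, swapping in a new fitness function) is how new gaits emerge for damaged morphologies.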
The subject of the talk then shifted to artistic ability- Dr Lipson quipped that human beings have dreams, even dogs have dreams, but can a computer paint real 'art' or write a symphony? He described efforts to teach a robot how to paint, and noted that robots can paint decent portraits. In another instance, he described an AI that was shown a double pendulum in action and was able to duplicate the equation describing the pendulum's motion using only algebra.
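That equation-finding feat is in the spirit of symbolic regression: search a space of candidate formulas for one that reproduces the observed data. A toy Python version (vastly simpler than the pendulum work, with a fixed menu of hand-picked candidates) might look like:

```python
import math

# Candidate formulas the search may propose. A real system composes
# expressions automatically; this menu is fixed for simplicity.
candidates = {
    "x**2": lambda x: x ** 2,
    "2*x": lambda x: 2 * x,
    "sin(x)": lambda x: math.sin(x),
}

def best_formula(data):
    """Return the candidate with the smallest squared error on the data."""
    def sq_error(f):
        return sum((f(x) - y) ** 2 for x, y in data)
    return min(candidates, key=lambda name: sq_error(candidates[name]))

# Observations secretly generated by y = x**2.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
found = best_formula(data)
```

The search recovers the generating equation from raw observations alone, which is the essence of what the pendulum-watching AI did at far greater scale.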
The lecture then veered into the subject of metacognition, thinking about thinking. Dr Lipson joked that computer scientists can't mention cognition or consciousness until they get tenure. Can one speak of 'robopsychology' or theory of mind? In the case of the walking robot, a self-image had to develop, and as alterations were made to the machine, the self-image needed changing. Currently, artificial intelligences have no theory of mind- for example, drones aren't aware of other drones... yet. Dr Lipson brought up the subject of AI affecting jobs, and noted that the real question is 'what will people do to other people with artificial intelligences?' Utopia or dystopia? The answer will rely on human use of AI. Today, we are at a cusp, with a Cambrian explosion of robotics on the horizon. The main evolutionary changes in the Cambrian explosion involved the evolution of eyes, the ability to perceive the world, in many animal lineages. There is an explosion of AI 'forms', and successful robots will be 'rewarded' with replication in an echo of the evolutionary model.
In the Q&A, some bastard in the audience asked about the 'holy grail' of robotics, the development of self-replicating machines such as the hypothetical Von Neumann probe. While noting that the crude mechanics of self-replicating machines are challenging, software self-replication is possible, dependent on providing building blocks and definitions of self-replication. Another question, regarding robot consciousness, elicited the joke that that's a topic restricted to tenured professors- Dr Lipson then asked, is a dog conscious? Consciousness doesn't have to achieve the level of human consciousness- are self-awareness and self-simulation sufficient? Another question involved biological models for robotics, and Dr Lipson cited evolutionary theory and neuroscience as useful complementary disciplines. Regarding robotic medicine, how would one ensure ethical behavior? While one cannot program ethics into a computer, an AI could be taught as a child is taught- give examples and hope for the right choices. Another question involved maintaining control over AIs- with machines being able to exceed human limitations, perceiving more wavelengths, more frames per second, there is less human control over those machines. Regarding the open-source nature of much of computer science, the open-source movement has led to more data and better algorithms, but while corporations develop open source algorithms, they tend to keep the data private. The final question regarded driverless vehicles... is vehicle-to-vehicle 'awareness' better than a totally autonomous system? Dr Lipson preferred totally autonomous vehicles- they are harder to hack and no transponders are needed.
Once again, the Secret Science Club served up a fantastic lecture. Kudos to Dr Lipson, Dorian and Margaret, and the staff of the beautiful Bell House. As an added bonus, last night my friend Peter, originally of Yonkers but currently residing in San Diego, was able to attend the lecture. He's involved in app development and he had good things to say about the lecture. It's nice to see a nerd's nerd reacting to the lecture and the vibe- it's not every day that one attends a hard science lecture that appeals to both experts and laypersons accompanied by a fully stocked bar... it's just every Secret Science Club day.