“Wiener and the Engineers” – David Mindell

David A. Mindell
David A. Mindell is Dibner Professor of the History of Engineering and Manufacturing, and Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He is founder and director of MIT’s “DeepArch” research group in technology, archaeology, and the deep sea. His research interests include the history of automation in the military, the history of electronics and computing, theories of engineering systems, deep ocean robotic archaeology, and the history of space exploration.  http://mindell.scripts.mit.edu/homepage/?page_id=71



David:
Thank you. It's a pleasure to be here, and an honor to be invited to talk about my old friend Norbert Wiener, whom I of course never met but who feels like an old friend because I worked on his papers and on his work for many years. The title of my talk is "Norbert Wiener and the Engineers," which is more or less the topic of my 2002 book, Between Human and Machine: Feedback, Control, and Computing before Cybernetics. That book was about the relationship between Wiener and cybernetics and the kinds of systems and machines people were building on a daily basis, in a variety of settings, starting in the teens and running through the '30s and '40s. I'll say a little bit about that story and then update it to the present day, literally yesterday and tomorrow, the sort of stuff I spend my time on, where I think there's a case to be made that the world of autonomy and robotics, having grown up in a fairly non-cybernetic paradigm, is coming around to what you might call a cybernetic paradigm. I'll leave that final judgement to you and the audience.

This is a model, from 1916 actually, of a fire control predictor. It was the naval ship version of the problem: how do you assess the state of a target, figure out its course and velocity, and extrapolate that into the future? That involved building a mechanical analogue computational model inside a machine called a range keeper. I don't know if anyone noticed, probably nobody did, but down at the end of the hall here there's a room labeled "fire control room"; those were things that were on battleships of the period. If you Google a topic like fire control you're always confused, because there are two different meanings of the term: one is fighting fires and one is shooting guns. This was fire control in 1916, and over the course of the next twenty years that technology had to grapple with the move of the need for prediction from the 2D world of ships into the much more challenging and much higher bandwidth 3D world of attacking aircraft. These were all technologies that were in place and in use when the war started in 1939, 1941 for this country.
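To make the range keeper idea concrete, here is a minimal sketch in Python of constant-velocity target prediction, with a simple iteration to account for the shell's time of flight. The numbers and function names are illustrative only, not taken from any historical fire control system.

```python
# A minimal sketch of the idea behind a 1916-era "range keeper": assume the
# target holds course and speed, extrapolate its position over the shell's
# time of flight, and aim at that future position. All values are illustrative.
import math

def predict_intercept(target_pos, target_vel, shell_speed, max_iter=20):
    """Return the aim point and time of flight for a constant-velocity target.

    target_pos: (x, y) in meters, relative to own ship
    target_vel: (vx, vy) in m/s
    shell_speed: average shell speed in m/s (treated as constant here)
    """
    x, y = target_pos
    vx, vy = target_vel
    t = math.hypot(x, y) / shell_speed          # first guess: time to present position
    for _ in range(max_iter):                   # iterate: time of flight depends on aim point
        aim_x, aim_y = x + vx * t, y + vy * t   # dead-reckoned future position
        t = math.hypot(aim_x, aim_y) / shell_speed
    return (x + vx * t, y + vy * t), t

aim, tof = predict_intercept(target_pos=(9000.0, 4000.0),
                             target_vel=(-8.0, 2.0),
                             shell_speed=800.0)
print(f"aim point {aim}, time of flight {tof:.1f} s")
```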

I put them up here because one of the things that got me interested in cybernetics way back in graduate school was this, to me, very compelling, very strange, and very interesting configuration of how these people are coupled with machines. In this case this is actually an audio predictor, an audio detection and ranging device for listening for approaching aircraft. This one happens to be aural, but in these cases there are eyes, ears, mouth, hands. This is inside a fire control device, an anti-aircraft director, giant binoculars basically for range finding. I also love all the terms on this one. You may not be able to read them, but this guy is the illumination control officer. I thought I should be renamed that as a professor, right? My job is to illuminate and control the illuminations from my students. The talker and the control officer and the pointer and the trainer.

This is a more informal, outdoor battlefield version of the same thing. Each of them has what today you would call a cybernetic sort of compelling mixing, a very intimate bodily and cognitive coupling between the human and the machine. From day one it has held that appeal for me; I just find it strange and fascinating, for reasons I think I can explain. There's something about that combination. In the book I also talk about this group, which is the fire control division of the National Defense Research Committee, which is what supported Wiener's work on anti-aircraft prediction and led to the Yellow Paper, the "Yellow Peril" paper. They supported a hundred and fifty other projects over the course of the war.

It's a very interesting group. You can see George Philbrick from the Foxboro Company, not too far from here down 128, often thought of as the inventor of the operational amplifier; Karl Wildes from MIT; a bunch of industrial people; Warren Weaver from the Rockefeller Foundation, who coined the term "molecular biology" in the '30s, among many other interesting things, and of course wrote the introduction to Shannon's 1948 book; the president of the Sperry Gyroscope Company; Harold Hazen, a pioneer in servo theory; Thornton Fry from Bell Labs; Ivan Getting from the Radiation Lab; and George Stibitz from Bell Labs, of early digital computer fame. This group right here, and this is the only picture I have of them all together like that, sort of maps the landscape of what was going on.

Interestingly, Wiener's project was their single least expensive project. I guess mathematicians just aren't very expensive to support: you buy the chalk and there you go. The work of Wiener's you're all familiar with from the Second World War was sponsored and funded by this group, within this setting. Unfortunately his optimal predictor, which was provably optimal, was only a few percent better than the fairly crude ones being used at the time, and so it was never really implemented. There's another story there about how few actual product cycles went from beginning to end over the course of the war; most of the work supported by this group was tremendously important later on, but only the very earliest of it actually found its way into battlefield situations.

Again you find all kinds of these; I'm also just fascinated by the graphical diagrams. This is the inside of a turret on an American battleship. This image would be from about 1943, around the time of the Yellow Peril paper, and the way it's diagrammed I think is lovely. Again you have the hand, brain, eye connection right here for all three of these folks. But then they're within these block diagrams, and this is actually also the period where you begin to see information flow block diagrams. Back in those 1916 cases I actually had to go through and draw the pictures myself, and a few of them are in the book, because people didn't make those diagrams. They made pictures of equipment, but here they're beginning to think in this kind of information flow way.

There are mechanical servos, but there are also information elements, including a box labeled "computer." This is actually the Draper Mark 15 gun sight, which was designed at MIT and was the smaller, cheap, quick-and-dirty solution, but it was incredibly effective, and there's a very interesting story Doc Draper's son Jim tells about the importance of these devices in defending American ships in the Pacific, in the Marianas and other places. This photograph is actually taken aboard the USS Massachusetts, which is a battleship museum about an hour south of here in Fall River, Massachusetts.

The introduction of radar, of course, radically transforms a lot of these things, because now you have electronic signals, and a lot of the mechanical analogue stuff really can't keep up with them. The first time they connected a radar signal to drive a servo-driven gun, the thing almost ground itself into oblivion, because all the noise in the signal wreaked havoc on the gears. That was really the moment when the Bell Labs feedback theory in the frequency domain, Harold Black, Harry Nyquist, was brought into these otherwise big, slow, time-domain servos, and that was a union that Wiener claimed credit for at certain points, which I don't think he really deserved. That happened during the war, and it happened only under the press of these groups being pushed together. I go through in the book a little about how well they knew each other before the war; nobody really realized that this thing they called feedback control and this thing they called feedback amplifiers were the same thing.

Again another picture. This is actually from a US battleship with these big mechanical computers. It's taken from the Vietnam era, but it's actually 1930s technology. This is the computer that controls the big sixteen-inch guns and how you aim them. It's a fairly sophisticated human-machine network: if you follow the bits and the quantities, they go into a machine, out a dial, into someone's eye, through their brain, and out their hand sixteen or eighteen times in the course of a single aiming maneuver. This is actually the picture that's on the cover of the book, because it includes the writing piece that I really like too. He's listening, he's looking, he's talking, and he's writing all at the same time.

At the end of the war, and this is really not in the book, but it's work I gradually became aware of as I was finishing that book, the world of aviation took up the feedback theory that had been developed largely on the West Coast, largely at North American Aviation. People like Harold Chestnut and Walter Evans, and the stuff undergraduate engineers are still trained to go through, not just Bode plots but root locus diagrams. That's all the aviation contribution to feedback control, and a lot of it is still used in a computerized way.

This is a NASA image from the 1950s showing a human in a feedback loop. Again it's very cybernetic by anybody's definition. You have the eyes on the control panel, the hand; this one happens to be a simulator, but it would be the same thing for an actual pilot in an aircraft, and you begin to get people modeling the aircraft using techniques from feedback amplifier theory. You also then beget, in a sense, a generation of attempts to model human responses. There are certain ways that's applicable, and it's also a very difficult thing, because people change a lot. Their gains go up a lot when they're stressed, and there's any number of accidents you can talk about where a system is basically stable, the pilot gets under stress for one reason or another, and the system goes unstable. If you ever see a DC-10 crash on landing, that's often why, actually. It happens more than people would like to admit.
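As a rough illustration of that gain effect, here is a toy discrete-time tracking loop, entirely my own construction rather than anything from the aviation literature, in which the same task goes from smooth convergence to growing oscillation when the operator's gain increases.

```python
# A minimal sketch of why cranking up a human operator's "gain" can destabilize
# an otherwise stable loop. The plant is a simple integrator, the operator
# reacts with a one-step delay, and we compare a relaxed gain to a stressed one.
def track(target, gain, delay=1, steps=40):
    y = [0.0] * (delay + 1)                 # plant output history
    for _ in range(steps):
        error = target - y[-1 - delay]      # operator sees a slightly stale error
        y.append(y[-1] + gain * error)      # integrator plant driven by operator command
    return y

calm = track(target=1.0, gain=0.5)          # converges smoothly
stressed = track(target=1.0, gain=1.5)      # same loop, higher gain: oscillation grows
print("calm, final values:    ", [round(v, 2) for v in calm[-5:]])
print("stressed, final values:", [round(v, 2) for v in stressed[-5:]])
```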

One of my favorite little anecdotes from this period: when the engineers began to realize you could replace the actual aircraft with a simulator, the control engineers would say the aircraft is really just a big analogue computer for solving the equations of motion, which is this wonderful inversion of the whole idea of modeling. I guess partly what I want to claim is that ... let me go through one more little episode and extend that forward a bit. As I was finishing the cybernetics and Wiener book, it really occurred to me that the lunar landings were the ultimate expression of that kind of mid-century cybernetics, for lack of a better term, from an engineering point of view.

It was a highly cybernetic moment. On the one hand you can ask, were they using Wiener's filtering theory? Well, what we today know as the Kalman filter, in the form used for the lunar landings, is a sort of logical extrapolation from a Wiener filter. Much of that work at MIT was done by Dick Battin, who just passed away. There's a lot of internal mathematical work that's certainly influenced by Wiener's work. Then the human-machine coupling in the actual lunar systems was very cybernetically influenced.
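For readers who want to see the flavor of the recursion, here is a minimal one-dimensional Kalman filter sketch. The parameter names and noise values are illustrative, and the Apollo implementation was of course far more elaborate.

```python
# A minimal sketch of a one-dimensional Kalman filter, the recursive, state-space
# descendant of Wiener's filtering ideas. All numbers here are illustrative.
import random

def kalman_1d(measurements, q=0.01, r=0.5):
    """Estimate a slowly varying scalar from noisy measurements.

    q: process noise variance (how much the true value may drift per step)
    r: measurement noise variance
    """
    x, p = 0.0, 1.0                  # initial estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows between measurements
        k = p / (p + r)              # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)          # update estimate toward the measurement
        p = (1.0 - k) * p            # updated uncertainty
        estimates.append(x)
    return estimates

random.seed(0)
truth = 3.0
noisy = [truth + random.gauss(0.0, 0.7) for _ in range(50)]
print("last estimate:", round(kalman_1d(noisy)[-1], 2))
```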

Here's where you get into the interesting subtleties of what Wiener actually accomplished, because no small number of people I have talked to who worked on these projects say things like, "I came to MIT because I read about Wiener and I wanted to work with him, and I ended up working on inertial guidance and such, but that was the writing that inspired me to come there." Now if you looked at the work they published and the things they did, they probably never even cited Norbert Wiener; you wouldn't necessarily say it was Cybernetic with a capital C. Yet in the '50s you see a lot of young people, young engineers particularly, being inspired by this.

My teacher, who really gave me a lot of this outlook, is Tom Sheridan at MIT. He came to MIT to work with Wiener in the '50s. He actually traveled to the Soviet Union with him in the few years before Wiener died and was heavily influenced by him, even though if you read Sheridan's work he wouldn't say "this is cybernetics" or "I'm doing cybernetics." He's working on telerobotics, automation, and human supervisory control; that's the title of one of his books.

The next book I wrote skipped over a lot of the intervening years, but it was about the Apollo program and the engineering of the computers and the software, the Apollo guidance computer in particular, the computers that were onboard. The last two hundred pages of the book are about the last ten minutes of the landing: how it was designed, how it was intended to work, and then what actually happened on the actual six landings, which is a very interesting story. I've selected just a few of these images, but there are hundreds of them, again to show the mindset of the people working on these systems.

They began to realize ... In fact, when NASA first came to the Instrumentation Lab at MIT with a contract to do the Apollo computer, the lab said, "No problem. We can build a computer to go to the moon." Someone asked, "What's the interface going to look like?" They said, "Interface? It's going to have two buttons. One button says Go To Moon and the other says Take Me Home. ICBMs don't have interfaces, so why would you need one here?" That did not turn out to be how it played out, for political reasons, for safety reasons, for any number of reasons that I go into in the book.

This is but one of very many interface diagrams created by Jim Nevins and his group there, again about how the data flows. This is the computer console with the star trackers, and for this particular operation, of which there might have been fifty in the course of a mission, how this stuff would go around. This was how they mapped and designed what the interfaces and the interactions were going to look like. And this is really, I'll say it now and it's the slide I'm going to end with later, the major point of my talk: it's a harder problem technically to make a system to land on the moon that interacts with a person than it is to do it automatically. I'm going to spend the rest of my talk on why that's a relatively recent realization in robotics and why it's very much the way the world of robotics and autonomous systems is going.

You may have seen some of these images. The last phase of the landing started at fifty thousand feet, from roughly a ten-mile orbit. There was this braking phase, then the lunar module would pitch up into the approach, or visibility, phase so the crew could look out and see the landing site. They had this wonderful system where the computer had a digital display; it was actually one of the first of those little seven-segment displays. It wasn't LED, but it looked like it. It would give the commander, in the first case of course Neil Armstrong, a number like thirty-three. He had a reticle on his window that he would look out of, right over here on this triangular window; you can see the little markings. There were actually two reticles, offset on two panes of glass. If he had his head positioned so they were lined up, he was aligned properly, and he would look through the thirty-three, the number the computer was telling him down here, and that would say, "This is where you're being brought for landing." Fully under automatic control.

If he didn't like that landing spot he could take the joystick and jog it forward or backward to cursor around that spot, and the computer would then recalculate the entire trajectory and bring him there. He could move it right or left as well, and he could do that as many times as he liked before landing, although his scope of choices got smaller as he got lower. It was basically a kind of supervisory-control autoland. What this picture shows, a computer graphic representation that's on the cover of the book although the cover mostly obscures it, is the moment about two hundred feet above the moon where Armstrong reaches up and turns that thing off.
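Here is a small sketch, with hypothetical names and numbers rather than anything from the Apollo software, of the supervisory pattern being described: the guidance proposes a landing point, the crew nudges it with the hand controller, and the computer replans after every nudge.

```python
# A minimal sketch of supervisory-control landing point redesignation.
def recompute_trajectory(landing_point):
    # stand-in for the guidance solution; here we just report the target
    return f"trajectory to {landing_point}"

def landing_loop(initial_point, redesignations):
    """redesignations: list of (dx, dy) nudges from the hand controller, in meters."""
    point = list(initial_point)
    plan = recompute_trajectory(tuple(point))
    for dx, dy in redesignations:                    # each click moves the aim point
        point[0] += dx
        point[1] += dy
        plan = recompute_trajectory(tuple(point))    # guidance replans after every nudge
    return tuple(point), plan

final_point, plan = landing_loop(initial_point=(0.0, 0.0),
                                 redesignations=[(150.0, 0.0), (0.0, -60.0)])
print(final_point, "->", plan)
```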

At that point the New York Times said he landed it manually, which wasn't true at all. He landed it in a semi-automated, fly-by-wire mode that was only slightly less automated, but it was hailed as the triumph of the human over the computer: the computer fails, there were these program alarms, and Armstrong turns it off. Very interesting story. I won't go into all the details, but that whole thing is arguably the climactic moment of twentieth-century technology. In some ways it shows you how difficult and complicated it is to make a system where there are human lives at stake that is automated to whatever degree.

The interesting contrast is actually with Soviet spacecraft controls, which were much more highly automated than the American ones in the '60s, and I think that's still true. Part of that was because they weren't using digital computers; they were using analog computers. It turns out, for reasons that are probably pretty obvious to you, that it's harder to make an analog computer that's richly interactive. The digital computer, which was sort of a radical thing for Apollo, enabled not a higher level of automation but actually a much richer engineering process to incorporate the human's activities in ways that were desired and rich and helpful.

I'm going to skip ahead now to the present day. The book that I'm writing now is about this problem: why pure autonomy, which I don't think exists anyway, is actually a less challenging problem than autonomy within a human context. That's really the problem, and the government and a lot of industry, and you can almost see it in the way that Google publicizes this sort of space-cadet car they're making, which is quite unrealistic in some ways and is part of some larger strategy, are gradually coming around to the idea that the last thing you're going to want is a car that you sit in and read the newspaper in. Not least because it's actually pretty dangerous. You know where we learned that? With airliners. If you can't intervene with the system when it fails in a robust way, and all systems will fail, you have a problem.

What I'm doing in the book is comparing deep ocean exploration and remote telerobotics there with aviation, this is a Predator cockpit, so remote vehicles, and with space exploration. I won't go into all of those, but I'll give you a couple of little anecdotes. This is the world I started in, which again got me interested in these questions, the world of undersea vehicles. This is a family tree created by the Woods Hole Oceanographic Institution, a couple hours south of here, which represents the technological medium I worked in, and it's kind of a linear revolution story that was very much the one I was brought into for the first fifteen years or so of my career.

You have Alvin down here, the three-person submersible. Jason Jr., a little remote vehicle; this actually went down the grand staircase of the Titanic. Jason, which I worked on in the late '80s, was sort of the first large-scale fiber-optically controlled remote vehicle, and then by implication you move toward higher and higher levels of autonomy with these autonomous vehicles. These REMUS vehicles are actually the ones that found the wreckage from Air France 447. ABE, SeaBED, and Sentry are mostly deep-ocean science vehicles. Actually, a little piece from the book: ABE was the first autonomous robot to have its own obituary in the New York Times. And Nereus imploded just about three or four weeks ago in the Kermadec Trench off New Zealand.

All kinds of interesting stories in these vehicles, but the implication is that there's a linear progress from human to remote to autonomous. That, I think, comes from computer science and AI having a somewhat separate and independent genesis and evolution from cybernetics. The computer science department at MIT was founded in 1968, much later than the cybernetic age in a certain way, and was mostly composed of people from mathematics and electrical engineering, with a little bit of psychology but not all that much. You'd be hard pressed to call that a cybernetic department in the way that other places have it.

This is the ultimate growth of that philosophy: the ultimate expression is pure autonomy with no humans in the loop. I have a somewhat different view, which is that the systems all evolved together. I'll just give you one slide that shows the future of remote vehicles, which have either acoustic communications or, now increasingly, optical through-water communications or other kinds of communications. And of course, those of us building autonomous vehicles always forgot there was a manned vehicle in the mix every time we went to sea: it was called a ship. One day they may get rid of ships, but most autonomous vehicle operations today still have a manned vehicle, people on the ship, people programming the computers on board the autonomous vehicle and sending it out; it goes out, it has a certain amount of autonomy for a certain period of time, and then it comes home. You have this very rich kind of human interaction, and the autonomy is very much bounded in the time domain.

I try to move beyond a lot of what I think of as twentieth-century dichotomies: present versus not present, manned versus unmanned, there are hugely interesting stories around the evolution of those terms, human-operated versus autonomous, and show how these things all mix in real time. Rather, the questions you really want to ask about a lot of these remote telerobotic systems are not whether they are manned or unmanned, but where are the people, what are they doing, when are they doing it, literally down to the second-by-second level, through what bandwidth, and why do you care? I have a chapter in the book on Predator operations; I had a student who did a very close ethnographic study of how Predator pilots work. These questions go a long way toward illuminating what's at stake in remote warfare through satellite links, for example.

What you call autonomy is there, but it's always interacting with bandwidth. Any time you have a vehicle, look at the Mars rovers: they are autonomous for twenty minutes at a time in between transmissions. Even then, if you look at how they're operated to do path planning, for example, the engineer on the ground who operates the vehicle can have them do autonomous path planning, but it's actually expensive in power, because of the computer resources, and expensive in time. You have to spend it like a resource, and autonomy, I think, is well thought of as something you spend like a resource. It's costly; it's very costly to verify and prove and certify. You do it when it helps you, but otherwise you may or may not, and it moves around.
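A toy decision rule, my own framing rather than anything from actual rover flight software, captures the sense in which autonomy is spent like a resource: you invoke the costly onboard planner only when the cheaper ground-planned route is not good enough and the power budget allows it.

```python
# A toy sketch of treating autonomy as a resource you spend: onboard autonomous
# path planning costs power and time, so the operator only invokes it when the
# commanded route is uncertain. All names and thresholds are illustrative.
def plan_drive(route_confidence, power_budget_wh,
               autonav_cost_wh=40.0, confidence_threshold=0.8):
    """Decide whether to drive a pre-commanded route or spend power on autonomy."""
    if route_confidence >= confidence_threshold:
        return "drive commanded route (cheap, ground-planned)"
    if power_budget_wh >= autonav_cost_wh:
        return "run onboard autonomous navigation (costly but safer)"
    return "stop and wait for the next ground contact"

print(plan_drive(route_confidence=0.9, power_budget_wh=100.0))
print(plan_drive(route_confidence=0.5, power_budget_wh=100.0))
print(plan_drive(route_confidence=0.5, power_budget_wh=10.0))
```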

This is actually a quote from a Defense Science Board report from about two years ago which captures it pretty well: all autonomous systems are joint human-machine cognitive systems; there are no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen, or marines. There is an exact parallel in cognitive science: is cognition something that happens entirely within the brain, or, as people increasingly see it, a distributed resource that happens in networks of people through varying-bandwidth communication channels?

This is a quote I pulled out just last night from the draft version, although it's now published, of the National Research Council report on autonomy in civil aviation, on incorporating increasingly autonomous systems and vehicles into the national airspace. This is about whether drones can fly over civilian areas. Doing so would require humans and machines to work together in new and different ways which have not yet been identified; there is a technological challenge in allowing people and autonomy to work together. The aviation system is a wonderful, rich, and at the moment extremely contentious example of it. There are a lot of changes that would have to take place beyond merely making up a new rule.

I'll leave you with one project that I've been working on, which is an "unmanned helicopter." This was the headline from the Wall Street Journal when the test flights were announced in April: "Navy Drones With a Mind of Their Own." Clearly the press account has the pure-autonomy framing: we are making unmanned systems that will just go and do their own thing with no human input. If you actually look at how that system is designed and engineered, and I won't go into all the details, it's intended to deliver cargo into some sort of remote landing zone. We did a whole lot of study of people who receive helicopters with cargo on them in remote landing zones, and the prospect of an unmanned helicopter coming at them at full speed, straight at the place where they were standing, was horrifying. They clearly needed to have some level of abort capability and some kind of interaction with it as the thing landed.

In fact there's a laser scanner on the vehicle that's capable of mapping the terrain and identifying appropriate landing spots, and there's this little interaction with a person on the ground with a little iPad. I'll show you a better image of it here. The person says, "I want you to land here." The vehicle says, "Well, that doesn't work for me, it's too close to the trees. I've identified three other possibilities." The person may say these are all good, or they're not. That was the challenge in engineering this system. Making it go somewhere where there are no people, either on board or on the ground, and just pick out a landing spot and land is actually not hard compared to doing it in a place where there are people around. Whole different notions of risk and relationships and whatnot.
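Here is a minimal sketch, with hypothetical thresholds and names rather than the actual flight software, of the negotiation just described: the person proposes a spot, the vehicle rejects spots too close to obstacles it has mapped, and it counter-proposes alternatives for the person to approve.

```python
# A minimal sketch of the ground-crew / vehicle landing-zone negotiation.
import math

def negotiate_landing(requested, obstacles, alternatives, min_clearance=30.0):
    """requested: (x, y); obstacles/alternatives: lists of (x, y) in meters."""
    def clearance(spot):
        return min(math.dist(spot, ob) for ob in obstacles)

    if clearance(requested) >= min_clearance:
        return {"decision": "accept", "spot": requested}
    safe = [s for s in alternatives if clearance(s) >= min_clearance]
    return {"decision": "counter-propose", "options": safe}   # human picks or aborts

print(negotiate_landing(requested=(0.0, 0.0),
                        obstacles=[(10.0, 5.0)],               # trees near the request
                        alternatives=[(80.0, 0.0), (15.0, 20.0)]))
```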

This is a little bit of a story about how we modeled the autonomy based on that idea, but I'll conclude with this: in order for people to trust such an unmanned helicopter and feel like it is going to do something that will help them rather than kill them, and of course anybody who's been on the ground in Iraq or Afghanistan has seen all kinds of unmanned vehicles flying around with no idea what they're doing or who owns them, which is a very scary situation, you have to give those autonomous systems behaviors that people can develop mental models for, so they seem understandable and explainable.

Just the constraint of having to do that limits how complex those behaviors can be. You can't have too many internal states. The way people in the human factors world describe this in technical language is that the problem with autonomous systems is that the internal states of the system are not apparent to the user. There's a whole science of making interfaces that make those internal states apparent to the user, but we all know from operating our computers that there are only so many states you can download from an interface at a given time, especially in real time under pressure.

I'm going to leave you with this idea, which I've already discussed: pure autonomy is an easier problem than rich human involvement with an autonomous system, and I think you see, through these various government reports and through the very slow movement of the AI and robotics community, that building robots that can live around people is a very interesting, difficult problem. I had a colleague say this very up front, and I think he was right. He said, "We engineers really like designing things that are purely autonomous, because they live in a nicely controlled world and that's the kind of world we like to live in, and the instant you introduce people into the equation it gets very messy." There's this burgeoning recognition, among both the sponsors and the people building the systems, that how to make them live within a human context is in itself a technical challenge, although we don't always have the best engineering tools for it.

My question, which I'll leave you with rather than a statement, is: does that represent exactly what the title of the conference says, Norbert Wiener in the 21st Century? Is there a kind of coming back around to that richness of technology in the human setting, as opposed to the automated robot vision, which I think is actually a twentieth-century modernist vision, one that's carried through and still has a lot of hold on people, and most of the science fiction people refer to is in that mode? Now we're living in a world where we have to figure out how to live with these things, and that changes what we can expect of them and what they're capable of. Thanks for your attention, and I will leave it at that.

Felipe Pait:
Thank you, David. An amazing presentation. I'm just wondering, given your closing remarks, whether we're now going full circle and starting to hear, "Yes, but do we want to go there? Are we opening that Pandora's box that Wiener started to warn us about?" Thank you very much for a fantastic presentation. I think we've got time for a couple of questions, and certainly Mark, a quick one from you.

Mark:
Yes, thank you very much. That was the most fascinating thing I've heard at the conference. My question is: do you believe that all of the powers around the world that are attempting to build autonomous warbots have the same constraints and have come to the same burgeoning understanding? Is everybody designing to the same endpoint?

David:
That's a rich, interesting question. I'm most familiar with the situation in the US, but you have to remember that when you look at the military there's a thing called the military profession, which is built around these kinds of kill-or-no-kill decisions in a certain way, and most of the people I've talked to don't really want to give that up. They want to improve it in whatever way, and those decisions are social decisions, right? That's where I do this in the book: I go through some of the Predator scenarios, especially the ones that are considered accidents or tragedies, and talk about what ends up happening. What they did, in an unknowing way, was push those decisions through the network to a place where totally untrained operators were making those decisions, operators who thought they were present in the remote environment but were not as present as they thought they were. In a way they were misled by the compelling nature of the virtual and the remote.

It's always been possible to build automatic killing systems; landmines are that way, and any number of other things. There may be reasons that people would still want to do that. Again, with this whole problem you really have to deal with the institutions in a serious way, and the institution of military professionalism, or even just the US Air Force and its relationship to these vehicles, is very fraught, very complex. It's a kind of deep social change they're experiencing, and everybody's trying to work through it. I guess I'm skeptical of the utility of fully automatic machines in that way. Again, the reason I say there are no purely autonomous systems is that for any system to be useful you can always find the human wrapper of input and output. Even the Voyager probes that are leaving the solar system: the instant they stop having that wrapper, they're gone, and they're autonomous but they have no benefit, no social benefit or social role at all.

Felipe Pait:
Thanks Dave. Gentleman down front?

Speaker 5:
I'm more concerned, we're not the most [inaudible 00:36:04] encounter a warbot, but we might encounter a Google car, and my brother works for Google [inaudible 00:36:13], full disclosure. We talked about this and he said, "Oh, we've made a quarter million miles without an error so far." He told me the story about what happened with the Cornell and MIT cars that crashed at zero mph in the early days.

David:
That’s a real interesting story actually.

Speaker 5:
He said, "We're a million miles without an error." I said, "Well, that's nice, but if you take where I live in Lexington and go to the intersection of Route 2 and 128, a quarter of a million people pass by there every day, so if they travel from there to where we are in [inaudible 00:36:48], that's your quarter million miles without an error." Hundreds of errors times that is a major disaster if it happens in the wrong way. How do we square that? Maybe ten Google cars on a relatively empty county road work fine, but one Google car on 128 might be a disaster.

David:
Absolutely. The Google car is designed to work in a fully mapped environment, down to the centimeter scale; they have that map for the roads in California that they drive on. Even Google is coming around to acknowledging that urban driving is a huge challenge, because urban driving is all about what that crazy person standing in the crosswalk is going to do in the next couple of minutes. Are they going to cross, are they not going to cross? The level of social relationship we have, either eye to eye or car to car, is very rich. There's an interesting kind of backlash in the robotics community among people who work on those problems, because, as one of my friends, John Leonard, said in a seminar we had this spring, "I don't really know what to do about this, because I've been working on this problem my whole life." He's talking about the simultaneous localization and mapping problem. It's a hard problem, and here comes Google with all their billions and they say they have it solved. I know that's not the case, but how does he deal with that as a researcher?

The place I look to for a lot of these things is commercial airliners, because there are all kinds of relevant phenomena there. I have a whole long story in the book about Air France 447, which is a very frightening and rich story: from one minute into that crash, when the thing was still perfectly savable, they were flying a perfect airplane. The problem that initially started it, the pitot tubes freezing, had disappeared, and yet they got into a kind of corner of the cognitive matrix that they couldn't get out of. It doesn't happen very often in airliners, but it happens regularly enough, and there are forty thousand flights a day in this country. That's probably about the number of car trips that passed me driving here on 128 this morning, and you're going to encounter these kinds of subtle cognitive problems with automation at the automobile scale, and who knows what that's going to look like, really.

Felipe Pait:
Thanks Dave. We have time for one more and I do promise the gentlemen-

Speaker 6:
[inaudible 00:39:53]

David:
Those are both very good questions. All the examples I gave are from extreme environments, places where human beings cannot live on their own and where the consequences are death. That's just interesting to me. I don't think the phenomena I'm talking about are any different there, but A, they make good stories and it's stuff I like to study, and B, I think these environments have been forced to adopt robotics and automation twenty or thirty years earlier than automobiles, surgery, and other places, so it's interesting to look at what's been learned in those extreme environments for these other, more daily-life environments.

The human-centered computing thing, that's a big issue in the human factors world; human-centered systems, they call it. It's not so different from what I'm talking about here, but the problem is that most of the people who practice that stuff focus on interfaces. The interfaces need to be good, that's important, but what I tried to do in this helicopter project, and I didn't really go into it in my talk, is make the claim that interfaces alone aren't going to solve the problem. You have to actually think about what the basic behaviors inside the autonomy are, and in all projects, when people actually build them, things end up getting walled off between the human factors people, who are mostly applied psychology types, and the computer science algorithm types. I spend a lot of my time these days trying to bring those groups together and not just push off the human-centered part to the interfaces, but to think about it from the day you start thinking about the autonomy and who you're going to be working with and in what setting and whatnot.

Felipe Pait:
Thank you Dave. I’m sure Dave would be happy to take any further questions and discussions over coffee. You’ll be around for the rest-

David:
Yes and then I’ll do the history panel at 2:00.

Felipe Pait:
Right. Thanks. Please show appreciation.