“Cybernetics, Art and Creativity” – Paul Pangaro


Paul Pangaro:
My thanks to Juan, who invited me, and to all of my colleagues, who I’m very thrilled to be on a panel with. I’m going to set my own timer, which implies a bad thing, but I’ll do my best. I should first establish my own bona fides, and that’ll be the first part of a five-part talk, plus a teaser. I met Pask in ’76, when I was introduced to him by Nicholas Negroponte. Nicholas said, “Paul, this is Gordon Pask, writer and producer for the stage. Gordon, this is Paul Pangaro, an actor.”

This was the beginning of my world changing. It establishes that I’m interested in performance, and so the performative idea of cybernetics worked for me immediately. Of course, Gordon was an extraordinary performer in his own right, in many, many ways and many spheres. Later, Elizabeth Pask liked to tell the story that Gordon didn’t quite know what he was doing, not in that sense, but in the sense that he didn’t know what field he was working in until he met Norbert Wiener. At that point, Gordon realized that he had been, and would always be, doing cybernetics. This origin story is similar to mine: it was when I met Pask that I realized what I would be doing.

Part two. I started following Gordon around, and I did that for a couple of decades. His style was that of an experimentalist. He made things and he saw what happened. He made conversations with individuals and with groups and he saw what happened. He did this human to human, human to machine, and, famously, machine to machine. All performative, all interactive. One of the famous examples is Musicolour, which you’ve heard a lot about: a device where a musician would play and a microphone would listen. The device would project lights on a scrim in a music hall. This oversimplifies it in many ways.

This was not simply a first-order switch, where if the player played bass then it was green and treble was yellow. That would be a reactive system. Nor was it just a first-order loop in which, perhaps, it tried to be reasonably responsive without being overly responsive. It was a second-order loop. The second-order loop gave the machine a purpose, and its purpose, we can say, was not to be bored.
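
A minimal sketch of that distinction, in TypeScript. Everything here, the names, the weights, the boredom threshold, is a hypothetical illustration, not a reconstruction of Pask’s actual Musicolour hardware:

```typescript
type Note = { pitch: number; loudness: number };

// First order, reactive: a fixed mapping from input to output.
function reactiveColor(note: Note): string {
  return note.pitch < 60 ? "green" : "yellow"; // bass -> green, treble -> yellow
}

// Second order: the machine has a purpose of its own, "not to be bored."
// It observes its own recent responses and rewires its mapping when the
// interaction becomes repetitive, pushing the musician to vary the playing.
class SecondOrderLights {
  private pitchWeight = 1;
  private loudnessWeight = 1;
  private history: string[] = [];

  respond(note: Note): string {
    const score = note.pitch * this.pitchWeight + note.loudness * this.loudnessWeight;
    const color = score > 100 ? "yellow" : "green";

    // The second-order observation: watch the loop itself, not just the input.
    this.history.push(color);
    if (this.history.length > 8) this.history.shift();
    const bored = this.history.length === 8 && new Set(this.history).size === 1;
    if (bored) {
      // Re-weight what the machine attends to, so the same playing
      // no longer produces the same lights.
      this.pitchWeight = Math.random() * 2;
      this.loudnessWeight = Math.random() * 2;
      this.history = [];
    }
    return color;
  }
}
```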

You have two second-order loops interacting, the human and the machine. Amazing. He then built teaching systems along these lines. It’s fascinating to me that Musicolour came first; I love that story. Then came typing tutors and various other tracking tasks, extrapolated into cognitive tasks, into learning, into knowing, and ultimately he developed a theory called Conversation Theory, which had other contributors but of which I feel Pask was the center.

Along the way he wrote some amazing pieces. There is the piece in the Jasia Reichardt book on Cybernetic Serendipity, coming from that wonderful exhibition you’ve also seen pictures of today, in which he talks about “aesthetically potent environments.” I love that phrase. In that book he also, I believe, provides a theory of media which I haven’t seen stated better or more powerfully. This was the man.

Section three: fast forward to today. What has happened since? Let’s see: ARPANET, internet, web; the cost of computing going from astronomical to essentially zero; the difficulty of starting companies going from astronomical to something 15-year-olds manage from their bedrooms. The problem, I claim, is that we are still in an age of the engineer as the designer of our experiences. We know this was true 30 years ago, when the engineers were doing the coding. Now designers are coding more, not enough in my opinion, but the engineer is still the designer. I would like to see a transition in which the conversation is the designer. If you’re familiar with Warren McCulloch and the redundancy of potential command, this is the idea that if you tell me something and I do something, you might say, “I made the decision.” No, what you told me made the decision. I want the conversation as the designer in that sense. Why? Why do I want this?

Section four. I want a stronger legacy from Norbert Wiener and the Macy meetings. I want to change this dire situation we’re in. I wish I didn’t want to volunteer, but I’m dying to volunteer for the army that Andy Pickering is talking about, the army of this new paradigm. I feel this is extremely important. It’s important because the alternative is that we will continue to live under the control, meant more strongly than cybernetics means it as regulation, of Google, Facebook, Amazon, the usual suspects. What do I mean by that? These are AI guys. This is where they come from. This is computational science, this is engineering as designer of my experience. No thanks, don’t want that, prefer an alternative.

I want a situation in which the purpose can be negotiated. The purpose of a Google result is Google’s purpose more than mine, and their purpose, I think, is in contradiction to what I’m interested in. Conversation as the designer, in two senses: I want the engineer not to be in control, I want the conversation to be in control, and the variety in that conversation to be in control. Variety in the sense of requisite variety, Ashby. Not just an engineering point of view but a possibility point of view, a user point of view, a product point of view, a commercial point of view, and everything else: a social point of view, a sustainability point of view, value-driven, as we’ve heard about today.

Also, I’d like to understand the design process itself better. After all, you really can’t design my experience. You as a designer might be able to help me design my own experience. That sounds better, doesn’t it? The designer becomes the meta-designer, and the user becomes the designer. Makes sense to me.

Section five. Where does this go? We’ve heard about courses in design. Tom Fischer spoke about it, Ranulph Glanville taught for many years, and others talked about their interest in design. I’ve also had the privilege of teaching design with Hugh Dubberly, a long-time colleague and great design planner in San Francisco. We started at Stanford, for Terry Winograd actually, in his program there, designing and delivering a course about design and cybernetics. That is: if you’re designing, what is useful to know from cybernetics? First order, fine. Requisite variety, sure. Second order, no question. Then we get Pask and conversation, and then we get bio-cost and autopoiesis and a few other things.

I sit here saying that students come to me and it’s like Christmas. Students come to me and say, “Paul, I took a course two years ago and I’m still using that stuff every day.” I go, “Thank you. This is why I keep doing it. It’s why I can’t avoid doing it.” This is a way forward, in my view. This is a way forward to do something with cybernetics: to go from engineerist designers to conversationist designers and to afford a different world. A world in which Google does some good stuff, but it’s really more my world than their world. This would be my preference. Those are my five sections.

Here’s the teaser. What do we design? What should we make? What would be a good thing? By some odd coincidence, tomorrow I’m launching a product which is the tiniest little opening of the door into what I’d call conversation with content. It’s a bit of Pask and a whole lot of JavaScript. That’s not enough; that’s really trivial, in a way. I keep complaining about Google, and I would like the more important part of the loop, the part that Google doesn’t do. What does Google do? With Google, you type in some keywords, which are kind of like questions but not as good, and they give you results, which are kind of like answers but not as good. That’s half the loop. What about the other half? What about looking at those possible answers and making a better question? I’ve got to do all that here. I’m not as good at it as I’d like to be. I would like to involve you in helping me be better at it, so that I can be better at it.
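
As a rough illustration of that full loop, here is a sketch in TypeScript. The tiny corpus, the keyword matching, and the `refine` heuristic are all assumptions made for illustration; nothing here is an actual Google API or the product being described:

```typescript
interface Doc { text: string; topics: string[] }

const corpus: Doc[] = [
  { text: "Musicolour reacted to live performers", topics: ["music", "cybernetics"] },
  { text: "Conversation Theory models how agreement is built", topics: ["learning", "cybernetics"] },
  { text: "Requisite variety limits what a regulator can control", topics: ["control", "cybernetics"] },
];

// Half the loop: keywords in, candidate answers out (naive keyword match).
function search(query: string): Doc[] {
  const words = query.toLowerCase().split(/\s+/);
  return corpus.filter(
    d => d.topics.some(t => words.includes(t)) ||
         words.some(w => d.text.toLowerCase().includes(w))
  );
}

// The other half: inspect what the candidate answers are about, and hand the
// human sharper follow-up questions rather than final answers.
function refine(query: string, answers: Doc[]): string[] {
  const topics = new Set(answers.flatMap(d => d.topics));
  return [...topics]
    .filter(t => !query.toLowerCase().includes(t)) // don't re-ask what was asked
    .map(t => `${query}, but specifically about ${t}?`);
}

// One turn of the conversation; the human would pick among the refinements.
console.log(refine("cybernetics", search("cybernetics")));
```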

Can I make a machine that helps me ask better questions? I believe the answer to that is yes. This would be a kind of fun palace of intent. There was a reference earlier to the Fun Palace, which is a piece of architecture that is completely reconfigurable. How about a search engine where we eliminate “search”, and maybe “engine” too? The engineering metaphor, we don’t want that either, right? A kind of design engine, a design world, forget engine. A fun palace of intent, in which I can experience new intents in a conversational flow with a machine. If Pask in the ’50s could create a conversation with a machine using a soldering iron, a bunch of wire, and some lights, we can’t? That’s embarrassing. I reject that. I don’t sit calmly with …

Why am I interested in questions? Because answers are dead. Once you have the answer, it’s dead on arrival, right? An answer is about the past. I’m of the now; I want an answer about what I want now. I don’t always know what I want, but I’m human, I have a need, I want it. I’m not satisfied, so I want a better answer to my question. Once I get to that answer I’m in the same problem, so I’m always looping and moving forward. To have a question-asking machine is more like being alive, or maybe it can make me more alive. A pas de deux of ideas, much like this device where we dance and move, but in the cognitive domain, or better yet in the living domain, in the acting domain. This is what I wanted to say: five sections and a teaser. Thank you very much.

Felipe Pait:
Again, thank you to the three of you for your participation in the panel. I’m sure there are going to be questions, so please raise your hand if that’s the case. Fun palace of intent.

Paul:
Did you like that?

Speaker 3:
[inaudible 00:12:13]

Speaker 4:
It always depends on your purpose, doesn’t it? I’m not sure what you mean by programming perception in, but obviously one of the ways that we can change the way we interact with the world is by putting filters in front of us, having machines process sounds and light in any manner that we choose. So it depends on what you want to do with it, what the user wants to do with it, in terms of what kinds of transformations are involved. I guess that’s the best answer I can give.

Speaker 3:
[inaudible 00:13:37]

Speaker 4:
That’s a really good idea, yes.

Paul:
It’s even all ready.

Speaker 3:
Yes, I wanted to ask Paul about Gordon Pask’s psychological persons, how they might have a dialogue with one another, internally and with the machine, and whether that could be more than a two-way dialogue.

Paul:
Alina’s referring to the notion Pask called psychological individuals, P-individuals, as distinct from M-individuals. There are three M-individuals here, but there is a morass of P-individuals, or, as a colleague of mine, Claudia Lamaro, likes to call them, P-selves. We are many selves. This is a manifestation of the internal dialogue that Pask wrote a lot about. He says that if I’m having a conversation here with Andy, it’s the same structure as if I’m saying to myself, “Well, what do you think about that?” “Well, I don’t know, that talk wasn’t very interesting.” “Yeah, but last time you were better.” “I know, but I wasn’t feeling …” This is the same thing. Alina’s asking a great question, and it’s kind of where I’m going with this idea of asking questions: if you can get a machine to take multiple perspectives, which may be inconsistent with one another and inconsistent with you, it’s in that debate that you learn things. What your preferences are, what the trade-offs might be, and how you might decide to act based on values that you choose as a result.

Could we set off a cloud full of individual processors, each of which is doing a Paskian calculus? I’m not kidding here, and I’m not just waving my hands: to look into a belief system and provide alternative ways forward in that belief system, either an evolution or a set of actions, contradictory or not. Then the question is, how does the machine decide which of the thousands it could show you, which one? My friend Michael Geoghegan has an idea, which is some way of deciding on satisfaction or minimum cost for the machine, which might be how much power the computation would require to hold one belief versus another.
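
A speculative sketch of that selection step, in TypeScript. The cost numbers and the minimum-cost rule are stand-in assumptions, not the actual measure being proposed:

```typescript
// Many concurrent perspectives on a belief system, each with a notional cost
// of holding it; the machine surfaces the cheapest one. All values here are
// hypothetical placeholders for a real cost measure.

interface Perspective {
  claim: string;
  cost: number; // e.g., computational power needed to hold this belief
}

function cheapest(cloud: Perspective[]): Perspective {
  return cloud.reduce((best, p) => (p.cost < best.cost ? p : best));
}

const cloud: Perspective[] = [
  { claim: "Keep the current belief as it is", cost: 1.0 },
  { claim: "Evolve the belief incrementally", cost: 0.6 },
  { claim: "Act in contradiction to the belief", cost: 2.3 },
];

console.log(cheapest(cloud).claim); // "Evolve the belief incrementally"
```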

Speaker 4:
I’m beginning to think that the Library of Parliament is your machine. Now I’m beginning to understand; we had this conversation before. As I said, all the analysts here get a bazillion questions from a range of people who all have belief systems; it’s called affiliation to a party, and they all have to follow it. We’ll get these questions and we have to answer in a non-partisan way. Questions are poorly put because they’re filtered through a central intake system, basically a person who’s on duty, who decides to farm out the question to whoever they think it might relate to. Your job as an analyst, the very first thing you’ve got to do if you’re doing it well, is get hold of the client and try to figure out the context of the question.

Paul:
Why do you think I was trying to pick your brain earlier? Because I think you have a guide for me?

Speaker 4:
Now I will need to answer it in a way that’s not going to cause me the huge cost of, “Well, you’re just giving me a spiel. I don’t trust you.” You need to develop a kind of trust relationship: first, that you have the expertise they’re seeking, and second, you need to know what the context of the question is. Often it will be political, but it can also be that this is a staffer who actually has to write an essay and is using the library for the wrong reasons. There is this sort of iterative process of determining what the real question is and then trying to get them to it.

Paul:
Let’s build it together, shall we?

Speaker 4:
Comments.

Paul:
Thank you.

Speaker 5:
That was helpful, because I’ve been trying to formulate this question about conversation and whether this ideal machine could be created, or will be created, now that you have the ideal collaborator. Does it depend on the person who’s using it having these powers of conversation? Even interpersonal, person-to-person communication requires a lot of development and skill. I work in the summers as a national park interpreter, speaking with guests from all over the world. A few years ago, one of the big new things in interpretation was conversation-based interpretation: rather than standing up on a stage and talking to people and assuming you know what they want to know, you use conversation to develop an answer to questions they don’t quite know they have until you’ve asked them the right questions and gotten the right questions from them, and that is tough to do. I guess, would part of this machine be teaching conversation and how to do that? Or how does that work into your thoughts about it?

Paul:
There’s a lot in what you just said. Certainly the participants matter, the human participants as well as the machine. Really what I’m trying to do is make a better machine participant for conversation in which the goal is better questions; that’s one way of describing it. I would worry about the word “ideal,” by the way. I know you’re trying to be flattering, and I appreciate that, but it would be impossible to be ideal. I think conversation is the only way to go. I’m going to punt a little bit on the details here in the interest of time. There’s no substitute for a conversation in which we can be who we are and become something else. That co-evolutionary drift is the important thing about being human.

I’m not expecting the machine to be great at this, and please don’t misunderstand me, I’m not saying you are, into thinking that I’m going to make a great machine the way Google’s a great machine. No, that’s the problem. Google’s trying to solve the problem of indexing the world’s knowledge; that’s what they say. The point of the story is that perfection in that realm is the problem I’m trying to move away from. I don’t want perfection, I don’t want an ideal, I want a dialogue, I want a conversation. I’m not expecting the machine to be great at this any more than I’m going to be great in my conversation with you, but I’m going to have a place in it, and together we’ll go where we couldn’t go separately.

Speaker 7:
I guess what it’s making me think about is how this type of experiment, thinking about making a machine that can do a better conversation, has powerful implications for people developing better skills at conversation [inaudible 00:20:42].

Paul:
To answer your question more directly: I think it would make humans better at conversation, if it were better itself. I’m not sure that would be an explicit goal; I think it would be an outcome. Learning to learn would be the ultimate.

Speaker 8:
I think there’s a, I don’t want to say, certain degree of sophistry in this, but maybe hubristic sophistry, something of that nature. Hubristic, whatever, they’re all the same. The challenge, obviously, is to make conversation that everybody can benefit from. That means you have to have a sort of common experience base. I guess the classic example, the prototype example, is the experiment a number of years ago with kittens who never really learned perception properly if they didn’t have their feet on the ground while they were being moved around in an optically challenging environment. The ones who were walking around learned, and the ones who were being carried around in a little vehicle didn’t.

In the same way, I think, when you talk to people you have to have a common experience base of some nature. Machines are never going to have that common experience base. They will be able to collect everybody else’s common experience base, but they’ll never be able to internalize it in any fashion, as far as I can imagine. The classic example: I spoke one time with somebody, and I couldn’t possibly understand how he could have done it, but he won a Medal of Honor in a battle in the Pacific during World War II by doing something that no human could do. He did it, of course; he was human.

He took several bullets, and then he picked up a .50-caliber machine gun by its very, very hot barrel, held it in his arms, and proceeded to save the rest of the people in his company. No one can understand how he could have done that, yet he did it, and afterward he couldn’t really communicate it to anybody, because you don’t have that common reference ground. Anybody touching a hot thing and holding it in their hands knows that it’s going to burn them; knowing that you’ve got holes in your legs and things like this, how do you make a machine understand that?

Paul:
Can I respond to him briefly? Just very quickly. Of course not; I’m not trying to make a machine that has a common experience, but I don’t want everyone to have the same experience. I want a conversation. I still think there are ways to make progress even without being able to do what we agree cannot be done.

Speaker 4:
I think the power of conversation with machines …