“Reading Wiener in Rio” – Felipe Pait

Felipe Pait:
I was … When I heard about the conference, I was interested in questioning how to put cybernetics together again. Looking from the point of view of electrical engineering, you see cybernetics flying in all different directions: communications, artificial intelligence, controls, they all go to different places.

I found that actually the feedback control aspect of cybernetics hasn’t really had such a great impact in the social sciences. Then I thought, okay, let me think about the social sciences, let me think about the easiest of them. The easiest social science is economics, right. We actually have a few statements which are true and non-obvious, so it’s the easiest one.

The point is that cybernetics hasn’t really had much impact in economics or the other social sciences, I don’t think. Then I thought of the following: there’s an odd phenomenon in adaptive control, one considered very undesirable, which is bursting. I tried to explain it in words; the mathematical explanation is quite involved, and it’s hard to find this explanation in words. It has to do with the second, the other feedback loop.

What you do in adaptive control is, you have a system, you have a controller, and at the same time you’re trying to optimize the controller with a second loop. This leads to instability, to bursts. Things start working; there’s a picture I put in the handout, which is part of what I gave to you. I think it’s on the second page.
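
As a rough sketch of the two-loop structure being described, here is a toy scalar plant with a certainty-equivalence controller and a gradient estimator sitting on top of it (the plant, gains, and disturbance level are illustrative assumptions, not the example in the handout):

```python
import numpy as np

# Toy sketch of the two coupled loops in adaptive control.
# Plant:       x[t+1] = a*x[t] + u[t] + d[t], with a unknown.
# Inner loop:  control u[t] = -a_hat[t]*x[t] tries to cancel the dynamics.
# Outer loop:  a normalized gradient estimator adjusts a_hat online.
rng = np.random.default_rng(0)
a, a_hat, gamma, x = 1.2, 0.0, 1.0, 0.1
trajectory = []
for t in range(5000):
    d = 1e-3 * rng.standard_normal()          # small disturbance
    u = -a_hat * x                            # inner (control) loop
    x_next = a * x + u + d
    e = x_next - (a_hat * x + u)              # prediction error
    a_hat += gamma * x * e / (1.0 + x * x)    # outer (adaptation) loop
    x = x_next
    trajectory.append(x)
# The closed loop is stable only while |a - a_hat| < 1. With poor
# excitation, disturbances, or unmodeled dynamics, estimates in loops
# like this can drift across that boundary and snap back, which is the
# burst mechanism described above.
```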

You have these periodic bursts that really cannot be understood by any sort of linear theory, and they’re very typical of adaptive control. What I think is, they’re actually quite similar to the bursts that happen, the bubble that burst 5 years ago, in the sense that people …

Let me talk about how people, the economic agents, act, as if they were a control system. We have learned how to make the economic system work; then the bankers start getting excited and they say, “Oh, maybe I can make it more optimal, maybe I can make it a little better, maybe I can improve my profit a little bit,” and at some point they pass a threshold, they go into instability, and things blow up again.

In this blow-up, they learn that what they were trying to optimize wasn’t the real world. They were trying to optimize some fiction of nature or something. Their model was completely wrong, and they were optimizing based on their model. It led to garbage. I think it’s an interesting phenomenon.

Then, and this is connected to something that I’ve been interested in recently, I’m trying to recover an old idea in control: model-free design of a control system. In control theory it goes by the name of Direct Adaptive Control. You’re not trying to build a model of the object you’re trying to control. You’re trying to act directly to achieve a certain goal.

Some of you may be familiar with adaptive control. You may have heard of Model Reference Adaptive Control. Model Reference Adaptive Control is a junk idea that doesn’t lead anywhere. I wanted to go back to some idea of adaptive control, some direct adaptive control, that doesn’t use reference models.

That led me, independently, to an idea that I presented at a science conference recently. I think it’s useful in itself, which is direct optimization. I’m going to conclude with the mathematics and then we’ll have some time to go back for questions.

Most people who went through some sort of engineering education or mathematics know optimization. What’s optimization? You want to minimize a function. Take the derivative, make it equal to zero, find the point, optimize. Often you cannot do that, because you don’t have access to the object, the function that you’re trying to optimize. You cannot compute derivatives.
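
In symbols, the textbook recipe for a generic example (not one from the talk):

$$ f(x) = (x-2)^2, \qquad f'(x) = 2(x-2) = 0 \;\Rightarrow\; x^\star = 2. $$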

There is this field called direct optimization. In direct optimization, what you have is an oracle. You have a function, and you’re trying to find the minimum of the function. You ask your oracle, what’s the value of the function at this point? It gives you the value. What’s the value at another point? It gives you another value, and you’re trying to find the minimum.

My recipe is: take all those points, compute the exponential of the value of the function. I have a formula here. You test the points x_i, give each a weight, and take the center of mass. What do I mean? I’m going to take the center of mass of many measurements. The ones that have a high value of f, because I have an exponential, I’m throwing away, ignoring. The ones that have a low value, I give a big weight in my weighting, and I take those into account. It’s an obvious idea. If anyone knows optimization, if anyone has ever seen this, let me know. I have never seen this anywhere. It’s obvious, it’s totally obvious; if you’ve seen it, let me know.
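
A minimal computational sketch of that recipe, assuming we are minimizing, so low-cost points get large exponential weights (the constant `lam` and the test function are illustrative choices, not values from the talk):

```python
import numpy as np

def center_of_mass_estimate(f, points, lam=1.0):
    """Weight each test point x_i by exp(-lam * f(x_i)) and return the
    center of mass; high-cost points are effectively thrown away."""
    points = np.asarray(points, dtype=float)
    costs = np.array([f(x) for x in points])
    w = np.exp(-lam * (costs - costs.min()))   # shift costs for numerical safety
    return (w[:, None] * points).sum(axis=0) / w.sum()

# Example: probe a simple quadratic through the "oracle" f alone.
rng = np.random.default_rng(0)
target = np.array([1.0, 2.0])
f = lambda x: float(np.sum((x - target) ** 2))
samples = rng.uniform(-5.0, 5.0, size=(200, 2))
print(center_of_mass_estimate(f, samples, lam=2.0))   # roughly [1, 2]
```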

I asked a few people; I couldn’t get an answer. The thing about this, if you’re interested in the equations, we can talk about that later, is that you can write a very easy computational version of how to do it recursively, which is the method.

My method is to take the estimate, the estimate denoted by a hat; this bar is the center, the center of mass. The intuition is, I throw away the points where the cost is high, I keep the points where the cost is low, and I’m trying to minimize the function. The connection of this method with Wiener’s work is on the mathematical side.
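
One plausible reconstruction of the “easy recursive version” mentioned above: keep running sums of the weights and the weighted points, and update the estimate one probe at a time (this is a guess at the form, not the formula from the handout):

```python
import numpy as np

class RecursiveCenterOfMass:
    """Running exponentially weighted center of mass: feed one probe at a
    time and read off the current estimate (the 'hat')."""
    def __init__(self, lam=1.0):
        self.lam, self.sum_w, self.sum_wx = lam, 0.0, None

    def update(self, x, fx):
        x = np.asarray(x, dtype=float)
        w = float(np.exp(-self.lam * fx))      # small weight if the cost is high
        self.sum_w += w
        self.sum_wx = w * x if self.sum_wx is None else self.sum_wx + w * x
        return self.sum_wx / self.sum_w        # current estimate
```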

That is the following: I write a continuous-time version. Why do I write a continuous-time version? Because I’m a control theorist; we love everything in continuous time. You write the continuous-time version, you get this formula, and I say the following: I have one point. At each instant of time I have a point which I consider to be the candidate. My candidate. My [inaudible 00:07:38]. I’m going to have a certain curiosity, the difference between my candidate and where I probe. So I’m going to probe around the point which I think is the minimum, which I think is the best.

If I have no clue of what I’m doing, if I don’t have any other information, I’m going to choose randomly around that point. If I have some extra knowledge, by all means, use it. If I have no extra knowledge, I’m going to look around the point that I have.

Then I’m going to make the following assumption: that this random search is going to be a random noise, a white noise, the derivative of a Wiener process. [inaudible 00:08:22] So I’m going to choose randomly, following a random walk, and I get some results which I don’t think I have time to go through. I just used a slide from a specialized presentation; we could talk about them later.
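
A discrete-time toy version of that probing scheme (the noise scale, weighting constant, step size, and test function are my assumptions; the continuous-time version referred to here would replace the Gaussian probes with Wiener-process increments):

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([1.0, 2.0])
f = lambda x: float(np.sum((x - target) ** 2))   # known only through the oracle
x_hat = np.zeros(2)                              # current candidate minimizer
lam, sigma, step = 2.0, 0.5, 0.1
for t in range(2000):
    probes = x_hat + sigma * rng.standard_normal((20, 2))   # look randomly around the candidate
    w = np.exp(-lam * np.array([f(p) for p in probes]))
    center = (w[:, None] * probes).sum(axis=0) / w.sum()    # weighted center of mass
    x_hat = x_hat + step * (center - x_hat)                  # move the candidate toward it
print(x_hat)   # ends up near [1, 2] without ever computing a derivative
```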

What do I get? I get the following recursive formula, which is very interesting. My center of mass is equal to what I had before, minus the expected value of the gradient of the function. Fascinating.

I never took a gradient. I never asked what the value of the function is, I don’t know the value of the function, but by searching randomly, according to a random walk basically, I get, using [inaudible 00:09:20] rule, my hat, my estimate, to move in the direction of the gradient. Which is what any self-respecting engineer would do, if they could. And we cannot. I’m saying I don’t know how to take the gradient, I don’t know how to take a derivative. I cannot do anything like that.
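
A hedged rendering of that claim in symbols: probe at $x = \bar{x} + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0,\sigma^{2} I)$ and weight by $w(x) = e^{-\lambda f(x)}$; then, by Stein’s identity for Gaussian noise (my notation, not the talk’s exact formula),

$$ \mathbb{E}\big[w(x)\,(x-\bar{x})\big] \;=\; -\,\lambda\,\sigma^{2}\,\mathbb{E}\big[w(x)\,\nabla f(x)\big] \;\approx\; -\,\lambda\,\sigma^{2}\,\mathbb{E}\big[w(x)\big]\,\nabla f(\bar{x}), $$

so the weighted center of mass drifts along $-\nabla f(\bar{x})$ even though no derivative is ever evaluated.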

My claim is, or my goal is, to use this method to improve on the usual adaptive control methods, and try to get away from those bursts, which, as I have argued before, have a certain similarity to something that occurs in the real world.

I know this was a very rambling, very short talk, but please ask me questions.

Speaker 2:
Are there any questions? That may be for the best because we’re starting to get a little late. [inaudible 00:10:27]

Felipe Pait:
Yes?

Speaker 3:
Adaptive control, it’s useful for when the system changes over time. The structure of the system, the [inaudible 00:10:36], has changed. Where in your approach do you take care of this?

Felipe Pait:
Well, I’m not presenting here, for obvious reasons, a complete approach to adaptive control, but let’s say that I am interested in that case. I am considering the case where, let’s say, this object I am trying to control has just changed. I had some control system and some object, but let’s say it changed. It jumps to something else.

I’m not considering the drift or how it’s changing with time, I’m just considering the moment when it changed, and that would be the logic. I’m thinking that I’m looking for the minimum point of a certain function. To complete the description of an adaptive control system, I would have to tell you how the function I’m trying to minimize relates to a certain control cost or a control objective. Which is something I would do at a controls conference.

Speaker 3:
Thank you.