“Wiener on Innovation” – Question and Answer Session
Greg Adamson:
At this stage we now have time for some questions and discussion so I’d really encourage people to either ask questions or give a very short contribution to this question of innovation. Do we have any questions? Yeah.
Male:
When it comes to the automation points that you’re bringing up, this actually strikes home to me because many members of my generation, the millennials, are going to end up potentially permanently unemployed due to the few jobs that are actually available to them. When it comes to automation taking away, dramatically reducing the total job market, what do you propose we actually do? We can’t necessarily remove automation and job growth is undefined at this point. Are there social or political items that you would necessarily propose to help or that you would change for society?
Prof Levy:
I think the first thing to do would be to pursue traditional macroeconomic policy, which we haven’t been doing. One of the confusions right now is the question of how much of the current economic situation is being caused by computerization, and as best I can tell, the answer is relatively little. We know that there are two kinds of recessions: the kind of recession that the Federal Reserve creates on its own to break inflation, and the hallmark of that is rising interest rates, tightening in order to cool off economic activity.
The other kind of recession, which is much rarer, is the kind that occurs after a financial collapse. In this country the Great Depression was one of those and 2008 was the first that we’ve had of that kind since the Great Depression, but you’ve seen that kind of thing in other countries. When you put the combined experience together, what you have is a recovery after a financial collapse that takes on the order of five to six to seven years. You can see, for example, that interest rates are basically zero and still we don’t see much stimulation coming along from that.
We know what the standard remedy for that is, which is government running bigger deficits. We came into this situation already with fairly big deficits because of the previous administrations and their willingness to cut taxes even as they funded the wars in Iraq and Afghanistan. Once you got past the initial stimulus bill in 2008, Congress basically followed a contractionary course. What you’re seeing now is still this big macroeconomic hangover, but because it’s gone on so long, people say, “Well, this must be the new normal and this must be due to automation,” and so on and so forth.
As far as I can tell there’s relatively little evidence of that. While these kinds of problems are real, and while in twenty or thirty years you may start seeing real bites, say for example, in the jobs of college educated workers from computerization, right now there is very little evidence of that and quite a lot of evidence that this is just macroeconomic mismanagement.
Male:
What would you propose in twenty to thirty years then when it does become a problem?
Prof Levy:
I think you have to propose some kinds of redistribution to guarantee income. The best way, if you think about how to structure it, is in much broader ownership of capital. That is to say, that if it’s the computers and the capital that are getting all the returns then you really have to distribute capital ownership in order to take advantage of that. That’s going to take political movements. You say, “Well is that impossible?” I think, at a given point in time you can start thinking that things are impossible, but as pressure builds, suddenly impossible things become possible, and that would be, I think, the preferred way of going about that.
Work sharing doesn’t have much to do with how you hold up incomes, which is basically what you’re talking about. I think it’s much broader capital ownership that you have to talk about.
Speaker 1:
Michael.
Prof Postol:
May I make a comment on …
Speaker 1:
Sorry.
Prof Postol:
Sorry, I just want to … Professor Levy I think has answered this question but I’d just like to comment on it using slightly different language. I think when you look at the distribution of wealth in this country, and when I say wealth I’m not only talking about money that people have in hand, but what they get paid, wages for example, you will find that the wages people are getting have been relatively stagnant at the lower levels while people at the upper levels have made tremendous advances. You have a disparity that’s very large between the wealthiest people and the less wealthy. This is a trend that has been going on most strongly since around the mid to late 1970s up until today. One of the consequences of that is you don’t have as much buying power in the population and hence you don’t have as much demand for goods and services, which then of course comes around to less demand for jobs that provide goods and services.
I think you should disagree with me if you think I’m saying this incorrectly, but it seems to me what Professor Levy was making a point on is there has to be enough political pressure brought by anybody. I mean I’m well off but I’m very much for redistributing wealth. There has to be an awakening and a political push to get this corrected or this country is going to bifurcate into literally people who have tremendous amounts and people who have nothing, and all you have to do to see the consequences of that is go to a third world nation.
Speaker 1:
[inaudible 00:06:09] Michael.
Prof Arnold:
At my university those of us who are conducting research that directly involves human beings need to go through a mandated process of gaining ethics approval, and we need to [indicate 00:06:29], we need to anticipate possible harms that might be done to our human participants and undertake that those have been mitigated in ways that [inaudible 00:06:39] [will be effective 00:06:40], et cetera, et cetera, et cetera. My colleagues though who work in the physics department, the computer science department, engineering, and so forth, whose research is not directly engaged with human beings but whose research at the second and third and fourth order has enormous effects on human beings, have no such requirement to anticipate harms, to consider the ethical implications of the research and to present those considerations to any sort of process whereby decisions might be made to go ahead or not to go ahead, to alter, et cetera, et cetera.
I don’t know what the situation is in your universities. Is there any point, do you think, in trying to extend this regime of overt ethics approval to research that’s conducted in physics departments and engineering departments and so forth?
Prof Levy:
About two years ago a question came up at MIT about just how autonomous were autonomous vehicles, and so because of that a couple of us got involved in conversations with people in computer science, and because of that we’ve been going on now for two years with meetings, periodic meetings between computer scientists and economists to try and understand what the diffusion of computerized work would mean [inaudible 00:08:27], so at the last meeting last Friday morning we had about twenty people gathered around. I don’t know whether you can put restrictions on what people do, but you can certainly make them aware of what’s going on, and it’s taken a while to do that because we have this kind of basic asymmetry that we discovered where economists were doing all the writing on the impact of computerized work but we really didn’t understand the technology very well.
Computer scientists understood the technology fine, but they really had no interest in the labor market. That’s not what they were paid to do. That’s slowly beginning to change, and they are beginning to understand, or at least the self-selected group that’s part of these discussions is beginning to understand, that this could be a big deal and could be very important, and it’s very uncomfortable. I mean, it just is. Of course, economists have committed sins of their own, and so we know what it’s like to live with discomfort, so it’s okay. I think it’s probably better to try and make real efforts at education and getting people internally to see what’s happening than it is just to put down a restriction and say, “Well, you know … ” Because it’s very hard in a lot of situations to predict how something is going to roll out in five years or ten years or what the situation is going to be.
Prof Postol:
I’m very reserved about having somebody restrict people based on their notion of ethics, right, because you always have people who have some idea about what is ethical and what is not and they may not have thought it through as well, I mean I’ve seen this many times personally. What I have tried to do in my own course is I’m in a weird subject area. I’m interested in military technology and its implications for national and international security, so I have a course where I go through things like nuclear weapons and their effects and explosives and nerve agents, and terrorists, and why terrorism can be motivated, under what circumstances.
One of the things I constantly emphasize in the class is I use a lot of anecdotes of the times that I was actually in government. I was a principal advisor to the chief of naval operations at one time in my career, and the point that I make is that these things are double edged. There are many valid ways to look at these ethical questions that are not necessarily incompatible, but are different. The interesting thing I have found, and this is just me reporting what’s happened, is my students write up an assessment at the end of the course and it’s very common over the years for them to say, “I’ve never been exposed to these ethical questions.”
Now if you look at MIT in particular I think there’s a near absence of concern as an institution about ethical questions. Individuals, of course, go their own way. Some people are highly ethical. Some are highly unethical. I don’t think you can dictate that, but I would say that institutionally and in terms of the educational fabric of the institution we do a terrible job on the question of ethical issues, and in fact the education of a lot of the administrators at MIT on ethical questions is really wanting as well, so can you expect the students to behave any differently from the administrators when they see certain things go on?
Of course, if the institution is not even interested in providing some serious ethical guidance, not just standing up and saying, “You did this wrong and I’m in charge,” it’s a real problem. Incidentally it’s not that way at Stanford, where I also spend a lot of time, so it’s not particular to all these universities. There are real problems at Stanford, no question. All of these places have big problems and that’s the nature of large human institutions, but there my experience directly has been that there is a serious attempt to confront the ethical questions. They don’t always get it right, but they’re trying.
Speaker 7:
[inaudible 00:13:11]. I don’t work in a university environment but a lot of our directors do, and so we do discuss the issue of software and experiments in academic environments, and one related point I wanted to make is that from what I understand a lot of academic journals and university guidelines don’t require the source code for programs involved in experiments to be published, and that that’s a pretty common practice. I think that’s a really unfortunate thing whether you’re talking about ethics evaluations or just pure experimental reproducibility. I think that the habit, and the requirement, especially for public universities, should be to publish all the source code for programs that are involved in their experiments so that somebody else can actually see the method and reproduce it and evaluate it for its effects on people.
I think we should also be concerned about the experiments that are going on outside of the university environment, so Facebook is conducting massive social experiments on anybody who uses it on a regular basis with no oversight and without publishing the source code that they use to do that, so I think we need to think about the way traditional university ethics guidelines and social survey experiments should maybe be considered outside of that environment, and one of those elements should be providing the source code so users can see what’s being done to them while they use their computer.
Speaker 1:
Thanks [inaudible 00:14:25]. Now there’s someone at the back.
Male:
[But 00:14:30] thirty plus years ago when I started to teach robotics there was this sort of argument that a robot is going to replace a human being, and all these sorts of discussions. What I didn’t get from your panel is that in those days, thirty plus years ago, I told my students and the people who were sponsoring me that robotics is a shifting industry. It takes jobs in some areas but it also creates jobs in other areas. What I didn’t get from your discussion was there was very little emphasis on the fact that some of these technologies also create lots of great jobs for lots of intelligent people, like that young person who was worried about his future, but you’re involved and you’re getting a great deal from it. I wish [their 00:15:28] aspect was emphasized a little bit more. That’s what I got. Maybe you [did 00:15:34], but I didn’t get that impression.
Prof Levy:
No. In my summary I was talking about Wiener’s paper. You’re right, I mean there’s some balance on-
Male:
Could you please kindly use the microphone?
Prof Levy:
Yes. I’m sorry. The best that I can tell … You make a good point. I would say the net impact of automation on employment, at least as I’ve seen it so far, has been negative in the sense of we can certainly create … Let me back up for a second. Consistent with what I said before, we still have the power in macroeconomic terms to get unemployment down to the level that we want. The question is what’s the mix of jobs that comes out of that and what’s the mix of wages that comes out of that. What we’ve seen so far is that what this automation does, by kind of taking out jobs in the middle, begins to split apart the labor force, so you have more low wage jobs at the bottom and then some more higher wage jobs at the top. That is the kind of picture that emerges from this stuff.
If the question is whether the growth of jobs at the top more than offsets the loss of middle wage jobs, the answer is no. I’ve seen no study that suggests that that’s true.
Speaker 1:
Any other … Do we have a last question? Yeah. Down the front.
Male:
Good morning and thanks for a very interesting presentation. I just wanted to mention some of the work being done at MIT in the Initiative on the Digital Economy, which picks up and does analysis on a number of the trends that you’ve mentioned. There’s a nice book called The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, which is published by the MIT Press, but the question that I wanted to ask is this: when the authors presented to us they said that when they looked at the question in 2004 there seemed to be three areas where humans were well in advance of computers, machines. One was in interacting with the physical world, so kind of vision and fine motor skills.
The other one was in language, so things like voice recognition and natural language, and the third one was in problem solving, particularly when it came to unstructured questions and pattern recognition. Their conclusion in 2014 is that computers have now effectively overtaken humans in all these areas and they can’t now identify an area where we can really say the human is superior to the computer or the machine, and so there appears to be no barrier now, in principle, to the automation of any activity. I wondered whether you had any comment on that?
Prof Levy:
Yes. I do actually have a comment on that. I would say that one of the things … Erik and Andy are part of these meetings that we have, and I would say that the conclusion of the computer scientists in the meetings would be quite contrary to what you say: that if you look, for example, at the most advanced machine vision, the ability to classify images into categories, you’re talking about maybe eighty percent accuracy. Natural language, somewhat better. If you’re looking at what robotics can do now, I would say that probably the way to think about that is to look at the videos from the most recent DARPA Robotics Challenge. This is very, very crude stuff in terms of grasping and all the rest of that stuff.
I’ve read that book and I’ve read those statements, but I would say that’s a case where, if economists were held to the malpractice statutes in this country, they’d be in jail the rest of their lives for making those statements.
Male:
Good idea.
Speaker 1:
Any other final … We’re just about out of time, but any final comments? Maybe I can ask, if anybody wants to know more about the things that you’ve been speaking about is there a particular thing, a website, an event or anything like that that you’d refer them to, so if we could just go [inaudible 00:20:31].
Speaker 7:
More about the Free Software Foundation you can find at FSF.org or GNU.org, G-N-U.
Prof Postol:
Feel free to email me. I do answer my emails at [email protected]. I do not have a website.
Prof Levy:
I’m semi-retired so I guess I’m incognito so it’s okay.
Speaker 1:
Very good. Thank you very much.