BEFORE GOING ON with Joseph Weizenbaum's introduction to Computer Power And Human Reason, you should remember that when he wrote the book in 1975, computers were still huge machines that existed only at universities and colleges rich enough to have one (or access to one by phone line), at industries and businesses, or at government institutions. Perhaps a few very rich people had one, but access to them was nothing like it became in the following two decades. I think anyone who does not believe that the problems Weizenbaum outlined in this section have magnified and intensified with the common use of computers isn't living in the world as it really is. The very things he noted were already present then; the ideas, the habits of thought, the language, the ideology of scientism (which he describes as being endemic at M.I.T. and among people with educational credentials and those trying to get them) have become commonplace and important, both in their presence in political discourse and in their violation by those who pick and choose which science they'll believe (or be told to believe by the media), to the outrage of others who, I will note, also pick and choose what they'll believe.
These, then, are the thoughts and questions which have refused to leave me since the deeper significances of the reactions to ELIZA I have described began to become clear to me. Yet I doubt that they could have impressed themselves on me as they did were it not that I was (and am still) in a concentrate of technological society, as a teacher in the temple of technology that is the Massachusetts Institute of Technology, an institution that proudly boasts of being "polarized around science and technology." There I live and work with colleagues, many of whom trust only modern science to deliver reliable knowledge of the world. I confer with them on research proposals to be made to government agencies, especially to the Department of "Defense." Sometimes I become more than a little frightened as I contemplate what we lead ourselves to propose, as well as the nature of the arguments we construct to support our proposals. Then, too, I am constantly confronted by students, some of whom have already rejected all ways but the scientific to come to know the world, and who seek only a deeper, more dogmatic indoctrination in that faith (although that word is no longer in their vocabulary). Other students suspect that not even the entire collection of machines and instruments at M.I.T. can significantly help give meaning to their lives. They sense the presence of a dilemma in an education polarized around science and technology, an education that implicitly claims to open a privileged access-path to fact, but that cannot tell them how to decide what to count as fact. Even while they recognize the genuine importance of learning their craft, they rebel at working on projects that appear to address themselves neither to answering interesting questions of fact nor to solving problems in theory.
Such confrontations with my own day-to-day social reality have gradually convinced me that my experiences with ELIZA were symptomatic of deeper problems. The time would come, I was sure, when I would no longer be able to participate in research proposal conferences, or honestly respond to my students' need for therapy (yes, that is the correct word), without first attempting to make sense of the picture my own experience with computers had so sharply drawn for me.
Of course, the introduction of computers into our already highly technological society has, as I will try to show, merely reinforced and amplified those antecedent pressures that have driven man to an ever more highly rationalistic view of his society and an ever more mechanistic image of himself. It is therefore important that I construct my discussion of the impact of the computer on man and his society so that it can be seen as a particular kind of encoding of a much larger impact, namely, that on man's role in the face of technologies and techniques he may not be able to understand and control. Conversations around that theme have been going on for a long time and they have intensified in the last few years.
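It's worth remembering how little machinery produced the reactions to ELIZA that Weizenbaum describes. The program was, at bottom, keyword matching and canned response templates. Here is a minimal sketch of that technique; the patterns and templates are my own illustrative inventions, not Weizenbaum's actual DOCTOR script, which was more elaborate and written for 1960s systems:

```python
import random
import re

# A minimal ELIZA-style responder: find a keyword pattern in the
# user's sentence and echo part of it back inside a canned template.
# These rules are illustrative inventions, not Weizenbaum's script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # Drop any trailing period so the echo reads naturally.
            groups = [g.rstrip(".") for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(DEFAULT)  # no keyword matched

print(respond("I am unhappy."))  # e.g. "How long have you been unhappy?"
```

That the mechanism is this shallow is exactly what made the attachments people formed to the program so alarming to him.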
I will repeat that the one issue that first led me to re-read and start commenting here on Joseph Weizenbaum's book was the news reports about university-based computer scientists working under contracts with the American and British military establishments to develop "artificial intelligence" that would be put into armed drones so that the drones could "decide" to bomb a target independent of human decision making, relying on the computers to "decide" that it's OK to kill individuals or groups of people based on whatever criteria the scientists working for the generals and colonels decided could reliably identify people it's OK to kill.
I have commented here and elsewhere about the similar work of computer scientists who instruct machines to classify images of people for good and bad traits, and how, without any intention of doing so, search engines will, for example, return images of entirely innocent People of Color in searches for "criminals" or any number of other negative, stereotypical categories. And that started to be noted far more than a quarter of a century after Weizenbaum wrote his book. I theorized last year that such things were inevitable because the people creating the systems and those using them (remember, search-engine algorithms modify their own operation based on the number of clicks results get from people) will have already imbibed racial and other stereotypes. Even without any intention to do so, those stereotypes will become part of the program.
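To make concrete how that can happen without anyone intending it, here is a toy sketch of click-driven ranking (a deliberately simplified model of my own, not any real search engine's code): whatever prejudice shapes the first clicks compounds, because clicked results get shown more and shown results get clicked more.

```python
import random

# Toy model of click-feedback ranking: entirely my own illustration,
# not any search engine's actual algorithm. Results that get clicked
# rise in the ranking; results that rise get seen and clicked more.
results = {"image_A": 1.0, "image_B": 1.0, "image_C": 1.0}

# Suppose users' pre-existing stereotypes make them click image_A
# slightly more often whenever it is shown.
CLICK_RATE = {"image_A": 0.55, "image_B": 0.50, "image_C": 0.50}

def top_two(scores):
    return sorted(scores, key=scores.get, reverse=True)[:2]

random.seed(1)
for _ in range(10_000):          # simulate many searches
    for shown in top_two(results):
        if random.random() < CLICK_RATE[shown]:
            results[shown] += 1  # each click nudges the score up

print(top_two(results))  # image_A is now entrenched at the top
```

A five-point difference in click rates, which no engineer wrote into the code, ends up deciding what everyone sees, and image_C, never shown, never gets a chance at all. Nobody has to intend the stereotype for the system to learn and amplify it.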
Machines don't have the ability to reflect on what "they are doing" and, in consultation with morality, something which science and math cannot codify, resolve to try not to do it. Those aspects of the programming at Google, at YouTube, at Facebook that have been such a boon to fascists, neo-Nazis, Putin, the Mercers, etc., AND WHICH THEY INTENTIONALLY TAP FOR THEIR OWN ENDS, inevitably reflect and concentrate racism, sexism, ethnic and other stereotypes. Those aspects are there by the intention of the companies and the geeks they hire to write code; they are the source of their wealth, and the companies aren't going to voluntarily give them up no matter how damaging and evil the consequences are.
And those systems, the ones at Google and Facebook and the ones developed to bomb people independent of human judgment, are not different in kind. Neither is capable of choosing not to follow stereotypes or of refusing to become the tool of evil intent, whether wielded by those who merely notice that those features of the systems are there and exploitable or by those who intentionally engineer them into systems in order to exploit those terrible aspects of modern technology, as I am sure the Mercers, the Putin regime, other gangster regimes in control of governments, and those financed by billionaires and multimillionaires will.
The systems the Pentagon and the British and other militaries are commissioning from places like M.I.T. and Cambridge and other science departments will probably include other malignant aspects that are as yet unknown, probably unknown even to the very scientists who write the programs and certainly unknown to the machines they are programming.
If anything, the problems Joseph Weizenbaum saw then are far worse now because none of what he warned about was taken seriously. Not seriously enough to do something about it. I have to think the amorality of science and scientism has more than a little to do with that, and we cannot continue to live with that. We can certainly die from it.