Tuesday, May 24, 2016

Note:  My brother had a bad night.   I'll try to post something new later but here is something relevant to yesterday's post about the lunacy that passes muster at the Left Forum.

People Are Not Machines; Machines Don't Have Rights or Moral Obligations

You will probably hear it today, and you will almost certainly hear it this week: "people are hard-wired to..."  In the last couple of decades, as materialists have presented their ideological metaphors as neuro- or cognitive science, people have been taught to believe that they are "hard-wired" by their "genes" to behave, think and even perceive in the way they do; evo-psy has had a large hand in it too.

The common view of human minds expressed in the media is that "science proves" we are the "moist robots" of Daniel Dennett, the "lumbering robots" of Richard Dawkins, the "computers made of meat" of other materialists.  Even when other expressions are used, that is the enforced point of view presented by the media, almost certainly as an article of "scientific" faith, by people who probably couldn't tell you much of anything about genes and what they do or about how computers work.  That it is a belief entirely congenial to the corporations they work for, as they sell our minds as product to advertisers who see us as units of potential profit, might be seen as ironic, considering the passage I'm about to type into this piece, from Computer Power and Human Reason: From Judgment to Calculation by the eminent and, I would say, prophetic computer scientist, the late Joseph Weizenbaum.

Introduction

In 1935 Michael Polanyi, then holder of the Chair of Physical Chemistry at Victoria University of Manchester, England, was suddenly shocked into a confrontation with philosophical questions that have ever since dominated his life.  The shock was administered by Nicolai Bukharin, one of the leading theoreticians of the Russian Communist party, who told Polanyi that "under socialism the conception of science pursued for its own sake would disappear, for the interests of scientists would spontaneously turn to the problems of the current Five Year Plan."  Polanyi sensed then that "the scientific outlook appeared to have produced a mechanical conception of man and history in which there was no place for science itself."  And further that "this conception denied altogether any intrinsic power of thought and thus denied any grounds for claiming freedom of thought."

I don't know how much time Polanyi thought he would devote to developing an argument for a contrary concept of man and history.  His very shock testifies to the fact that he was in profound disagreement with Bukharin, therefore that he already conceived of man differently, even if he could not then give explicit form to his concept.  It may be that he determined to write a counterargument to Bukharin's position, drawing only on his own experience as a scientist, and to have done with it in short order.  As it turned out, however, the confrontation with philosophy triggered by Bukharin's revelation was to demand Polanyi's entire attention from then to the present day [c. 1975].

I recite this bit of history for two reasons.  The first is to illustrate that ideas which seem at first glance to be obvious and simple, and which ought therefore to be universally credible once they have been articulated, are sometimes buoys marking out stormy channels in deep intellectual seas.  That science is creative, that the creative act in science is equivalent to the creative act in art, that creation springs only from autonomous individuals, is such a simple and, one might think, obvious idea.  Yet Polanyi has, as have many others, spent nearly a lifetime exploring the ground in which it is anchored and the turbulent sea of implications which surrounds it.

The second reason I recite this history is that I feel myself to be reliving part of it.  My own shock was administered not by any important political figure espousing his philosophy of science, but by some people who insisted on misinterpreting a piece of work I had done.  I write this without bitterness and certainly not in a defensive mood.  Indeed, the interpretations I have in mind tended, if anything, to overrate what little I had accomplished and certainly its importance.  No, I recall that piece of work now only because it seems to me to provide the most parsimonious way of identifying the issues I mean to discuss.

The work was done in the period 1964-1966, and was reported in the computer-science literature in January 1966 and August 1967.  To summarize it briefly, I composed a computer program with which one could "converse" in English.  The human conversationalist partner would type his portion of the conversation on a typewriter connected to a computer, and the computer, under control of my program, would analyze the message that had so been transmitted to it, compose a response to it in English, and cause the response to be typed on the computer's typewriter.

I chose the name ELIZA for the language analysis program because, like the Eliza of Pygmalion fame, it could be taught to "speak" increasingly well.  Because the conversations must be about something, that is, because they must take place within some context, the program was constructed in a two-tiered arrangement, the first tier consisting of the language analyzer and the second of a script.  The script is a set of rules rather like those that might be given to an actor who is to use them to improvise around a certain theme.  Thus ELIZA could be given a script to enable it to maintain a conversation about cooking eggs or about managing a bank checking account, and so on.  Each specific script thus enabled ELIZA to play a specific conversational role.

For my first experiment, I gave ELIZA a script designed to permit it to play (and I should really say parody) the role of a Rogerian psychotherapist engaged in an initial interview with a patient.  The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient's statements back to him.  The following conversation between a young lady and ELIZA playing doctor illustrates both the Rogerian technique of encouraging a patient to keep talking and the operation of the computer program ELIZA.  The first to "speak" is the young lady.  The computer's responses are printed entirely in capitals.

Men are all alike.
IN WHAT WAY
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I'm depressed much of the time.
I'M SORRY TO HEAR YOU ARE DEPRESSED....

... DOCTOR, as ELIZA playing psychiatrist came to be known, soon became famous around the Massachusetts Institute of Technology, where it first came into existence, mainly because it was an easy program to demonstrate.  Most other programs could not vividly demonstrate the information-processing power of a computer to visitors who did not already have some specialized knowledge, say of some branch of mathematics.  DOCTOR, on the other hand, could be appreciated on some level by anyone.  Its power as a demonstration vehicle was further enhanced by the fact that the visitor could actually participate in its operation.  Soon copies of DOCTOR, constructed on the basis of my published description of it, began appearing at other institutions in the United States.  The program became nationally known and even, in certain circles, a national plaything.

The shocks I experienced as DOCTOR became widely known and "played" were due principally to three distinct events.

1.  A number of practicing psychiatrists seriously believed the DOCTOR computer program could grow into a nearly completely automatic form of psychotherapy.  Colby et al.* write, for example, 

"Further work must be done before the program will be ready for clinical use.  If the method proves beneficial,  then it would provide a therapeutic took which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.  Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose.  The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ration as now exists."

I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other's experience of those problems and, in large part by way of his own sympathetic recognition of them, himself come to understand them.  There are undoubtedly many techniques to facilitate the therapist's imaginative projection into the patient's inner life.  But that it was possible for even one practicing psychiatrist to advocate that this crucial component of therapeutic process could be entirely supplanted by pure technique - that I had not imagined!  What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?  Perhaps Colby et al. give us the required clue when they write:

"A human therapist can be viewed as an information processor and decision maker with a set of decision rules which are closely linked to short-range and long-range goals, ... He is guided in these decisions by rough empiric rules telling him what is appropriate to say and not to say in certain contexts.  To incorporate these processes, to the degree possessed by a human therapist, in the program would be a considerable undertaking but we are attempting to move in this direction."

What can a psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?

Such questions were my awakening to what Polanyi had earlier called a "scientific outlook that appeared to have produced a mechanical conception of man."

* Nor is Dr. Colby alone in his enthusiasm for computer-administered psychotherapy.  Dr. Carl Sagan, the astrophysicist, recently commented on ELIZA in Natural History, vol. LXXXIV:  "No such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists.  In a period when more and more people in our society seem to be in need of psychiatric counseling, and when time sharing of computers is widespread, I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested and largely non-directive psychotherapist."

For anyone who wants to read ahead, the entire Introduction has been posted online.   There are a number of versions of ELIZA available online.   Those which I have tried would require a large amount of credulous acceptance on the part of the human, though I doubt people in 2013 are any less credulous about the fact that they are interacting with a machine than people in the mid-60s were.  If anything, people are far more impressed with the far more powerful computers and sophisticated programs and far, far less impressed with people, even in their own minds.   How much of that is due to their casual experience with using computers, and what influence that has had on the language people use to talk about our minds, I don't know.  I do know that what was commonly believed during that time, that people are really thinking, freely choosing, living beings, seems to have given way to exactly the mechanical view that Weizenbaum warned about.    As he was surprised to find, the mechanical view of humanity was already more common among the scientists who he, and earlier Polanyi, believed should have known better.   That it was, apparently, acceptable among psychotherapists and psychologists should tell us that there was something seriously wrong with the scientific identity of those academic fields.  I would say that in the subsequent decades, as Behaviorism was succeeded by evolutionary psychology, the beliefs, assumptions and attitudes on display have come to almost entirely dominate those and other "sciences" dealing with our minds.
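Since Weizenbaum's description above amounts to an outline of how little machinery the trick requires, a generic matcher in one tier and a "script" of keyword rules and canned response templates in the other, here is a minimal sketch in Python of an ELIZA-style reflection rule.  To be clear, this is not Weizenbaum's 1966 program, which was written in MAD-SLIP; the names (SCRIPT, REFLECTIONS, reflect, respond), the patterns, the templates and the pronoun table are my own illustrative assumptions, just enough to show how a line like "my boyfriend made me come here" can be mechanically echoed back as "YOUR BOYFRIEND MADE YOU COME HERE."

import random
import re

# Second tier: the "script" -- keyword patterns paired with canned response
# templates.  "{0}" is filled in with the matched fragment after pronoun
# reflection.  These rules are illustrative inventions, not Weizenbaum's.
SCRIPT = [
    (r"\bI am (.*)",   ["How long have you been {0}?",
                        "Why do you say you are {0}?"]),
    (r"\bI feel (.*)", ["Do you often feel {0}?"]),
    (r"\bmy (.*)",     ["Your {0}?",
                        "Tell me more about your {0}."]),
    (r"\byes\b",       ["You seem quite certain.", "I see."]),
    (r"(.*)",          ["Please go on.",
                        "Can you think of a specific example?"]),
]

# Crude pronoun "reflection", so a fragment of the patient's sentence can be
# echoed back from the "therapist's" point of view.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "was": "were", "you": "i", "your": "my"}

def reflect(fragment):
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    """First tier: scan the script for the first keyword rule that matches."""
    statement = statement.strip().rstrip(".!?")
    for pattern, templates in SCRIPT:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            # Responses are upper-cased, like the teletype output quoted above.
            return random.choice(templates).format(fragment).upper()
    return "PLEASE GO ON."

if __name__ == "__main__":
    print(respond("Well, my boyfriend made me come here."))
    # might print: YOUR BOYFRIEND MADE YOU COME HERE?
    print(respond("I am depressed much of the time."))
    # might print: HOW LONG HAVE YOU BEEN DEPRESSED MUCH OF THE TIME?

That the whole "therapist" fits in a few dozen lines of keyword matching is exactly Weizenbaum's point: the program understands nothing, it is "the simplest mechanical parody of a single interviewing technique," and any sense of being understood is supplied entirely by the credulity of the person typing.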

I don't think the sci-fi nightmare of our being dictated to by enslaving machines is the problem, though as scientists in "artificial intelligence" work to give predator drones the ability to "decide" to kill and to carry out those "decisions," that could change in the most drastic way possible.   The more immediate problem is that how people see themselves and, especially, "other people" has a controlling influence on their political choices and on how they will react to the choices made by politicians and courts.   Which is why it is even more important to understand the folly of believing people are computers.  Which is why Weizenbaum's book is so important.

1 comment:

  1. Because if we make things simpler, they are easier to "handle" (a euphemism for "control"), and if things are easier to handle, then we have power over them.

    And having power isn't everything, it's the only thing.

    The answer to this (not surprisingly) is philosophy (or theology, but that's a narrower recourse I don't think is quite as universal, because making it universal weakens theology and forces it to be what it is not). The "other" of phenomenology (French, mostly) is the key here, the idea of the "other" being not to objectify people who are not you into "things" you can control (i.e., handle).

    Life, as my Pastoral Care teacher insisted, is messy. We go wrong when we think religion, any more than science, is a method of control, and that control is the telos of understanding.
