Monday, May 13, 2013

The Uses of Self-Deceit In The Reductionistic-Scientistic Faith

Updated below

If you are as old as I am, you may have had some sci-ranger of about the same age spout Isaac Asimov's "Three Laws of Robotics" at you to refute your concerns about technological developments. That happened more in the past than now; it would seem that Asimov is about as relevant to the active imagination of techies today as John Woolman or some other figure of the past who hasn't been the subject of a recent TV show.

The "Three Laws," as proposed by Asimov were:


First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course there are no such "laws"; the only influence in the world that Asimov's "laws" have had is probably to encourage the invention of all kinds of imaginary "laws" of the kind that sci-rangers love to spout, mostly to dismiss ideas they don't like or to signal to each other that they are in the know.*  The recently revealed weapons programs in development to give "smart" drones the ability to "choose" targets and make a "decision" before firing on very real human beings show that there is no reality to such "laws".  The group International Committee for Robot Arms Control recently reported on what, at first, seems to be a reassuring survey in which a majority of engineers surveyed said they were opposed to such developments.


The results were totally clear cut with an overwhelming 76% of engineers voting that there should be an unequivocal ban on developing ‘killer robots’.

The next-largest group, 14 per cent, had a similar view. They voted that attack logistics could be autonomous as long as target selection remained under human control.

Ron Arkin from GIT was specifically mentioned in the poll. He has long opposed our call for a ban and has said that we should have a moratorium on autonomous weapons while control systems are being perfected. The engineers were not impressed – only 3% voted in his favour.


Before you say, "Whew, what a relief," notice the 3% and ask yourself how many engineers and scientists it would take to make such weapons real, classified and deployed in a modern national security state.   Scientists and engineers have hardly been a uniformly pure and incorruptible priesthood, benevolently turning down employment and research opportunities offered to them to design some extremely dangerous, even insane projects.  Every single modern, and even most primitive, weapons program since the advent of science has involved the conscious and knowing participation of scientists.  Since serious consideration is being given to the possibility that this kind of thing will become real, among the engineers and scientists in the best position to judge the feasibility of these nightmare robotic assassins, who in the lay public can credibly claim that worrying about it is foolish?

In the past century the ability to commit suicide-genocide against the entire human population and life on Earth was invented, fully authorized by some of the most democratic governments in history.  The insanity of duplicating that capacity tens of times over, of putting those systems on a virtual hair trigger and, even worse, the intellectual program of normalizing and rationalizing that situation, was carried out by scientists and engineers and mathematicians, politicians and judges, think-tank flacks and journalists, all of them while being deemed sane and even brilliant.   Having gone that far, the creation, authorization and normalization of these assassin computers would be a far smaller step.

The faith expressed by some of the engineers (4% of those polled), that "Armed forces would never adopt a system which doesn't require human control," is almost certainly naive.  It assumes a universal confidence in the superiority of human judgement over the operations of a machine, something which is hardly the trajectory of modern culture.  We live in a world where enormous numbers of educated people believe that a Turing Test could effectively prove that a machine can think.

All of this is a prelude to another passage from Computer Power and Human Reason by Joseph Weizenbaum.

There was a time when physics dreamed of explaining the whole of physical reality in terms of one comprehensive formalism.  Leibnitz taught that if we knew the position and velocity of every elementary particle in the universe, we could predict the universe's whole future course.  But then Werner Heisenberg proved that the very instruments man must use in order to measure physical phenomena disturb those phenomena, and that it is therefore impossible in principle to know the exact position and velocity of even a single elementary particle.  He did not thereby falsify Leibnitz's conjecture.  But he did show that its major premise is unattainable.  That, of course, was sufficient to shatter the Leibnitzian dream.  Only a little later Kurt Godel exposed the shakiness of the foundations of mathematics and logic itself by proving that every interesting formal system has some statements whose truth or falsity cannot be determined by the formal means of the system itself, in other words, that mathematics must necessarily be forever incomplete.  It follows from this and other of Godel's results that "The human mind is incapable of formulating (or mechanizing) all of its mathematical intuitions.  I.e.:  If it has succeeded in formalizing some of them, this very fact yields new intuitive knowledge." 

Both Heisenberg's so-called uncertainty principle and Godel's incompleteness theorem sent terrible shock-waves through the world of physics, mathematics and philosophy of science.  But no one stopped working.  Physicists, mathematicians, and philosophers more or less gracefully accepted the undeniable truth that there are limits to how far the world can be comprehended in Leibnitzian terms alone.

Too much has already been made of the presumed implications of Heisenberg's and Godel's results for artificial intelligence.  I do not wish to contribute to that discussion here.  But there is a sense in which psychology and artificial intelligence may usefully follow the example of the new-found humility of modern mathematics and physics:  they should recognize that "while the constraints and limitations of logic do not exert their force on the things of the world, they do constrain and limit what are to count as defensible descriptions and interpretations of things."  Were they to recognize that,  they could then take the next liberating step of also recognizing that truth is not equivalent to formal provability. 

The lesson I have tried to teach here is not that the human mind is subject to Heisenberg uncertainties - though it may be - and that we can therefore never wholly comprehend it in terms of the kinds of reduction to discrete phenomena Leibnitz had in mind.  The lesson here is rather that the part of the human mind which communicates to us in rational and scientific terms is itself an instrument that disturbs what it observes, particularly its voiceless partner, the unconscious, between which and our conscious selves it mediates.  Its constraints and limitations circumscribe what are to constitute rational - again, if you will, scientific - descriptions and interpretations of the things of the world.  These descriptions can therefore never be whole, any more than a musical score can be a whole description or interpretation of even the simplest song.

But, and this is the saving grace of which an insolent and arrogant scientism attempts to rob us, we can come to know and understand not only by way of the mechanisms of the conscious.  We are capable of listening with the third ear, of sensing living truth that is truth beyond any standards of provability.  It is that kind of understanding, and the kind of intelligence that is derived from it, which I claim is beyond the abilities of computers to simulate. 

We have the habit, and it is sometimes useful to us, of speaking of man, mind, intelligence, and other such universal concepts.  But gradually, even slyly, our own minds become infected with what A. N. Whitehead called the fallacy of misplaced concreteness.  We come to believe that these theoretical terms are ultimately interpretable as observations, that in the "visible future" we will have ingenious instruments capable of measuring the "objects" to which these terms refer.  There is, however, no such thing as mind; there are only individual human beings.  I have argued that intelligence cannot be measured by ingeniously constructed meter sticks placed along a one-dimensional continuum.  Intelligence can be usefully discussed only in terms of domains of thought and action.  From this I derive the conclusion that it cannot be useful, to say the least, to base serious work on notions of "how much" intelligence may be given to a computer.  Debates based on such ideas - e.g., "Will computers ever exceed man in intelligence?" - are doomed to sterility. 

I have argued that the individual human being, like any other organism, is defined by the problems he confronts.  The human is unique by virtue of the fact that he must necessarily confront problems that arise from his unique biological and emotional needs.  The human individual is in a constant state of becoming.  The maintenance of that state, of his humanity, indeed, of his survival, depends crucially on his seeing himself, and on his being seen by other human beings, as a human being.  No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.  And, since the domain of human intelligence is, except for a small set of formal problems, determined by man's humanity, every other intelligence, however great, must necessarily be alien to the human domain. 

I have argued that there is an aspect to the human mind, the unconscious, that cannot be explained by the information-processing primitives, the elementary information processes, which we associate with formal thinking, calculation, and systematic rationality.  Yet we are constrained to use them for scientific explanation, description, and interpretation.  It behooves us, therefore, to remain aware of the poverty of our explanations and of their strictly limited scope.  It is wrong to assert that any scientific account of the "whole man" is possible.  There are some things beyond the power of science to fully comprehend.
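
To put the two results Weizenbaum invokes in their standard compact form - this is my own gloss, not part of the quoted passage - the following sketch states them, with T and G_T as the usual schematic names:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Heisenberg's uncertainty relation: the position and momentum of a
% particle cannot both be known to arbitrary precision, which is why
% the Leibnitzian premise is unattainable in principle.
\[
  \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\]
% Godel's first incompleteness theorem, informally: any consistent,
% effectively axiomatized theory T strong enough to express arithmetic
% contains a sentence G_T that T can neither prove nor refute.
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T
\]
\end{document}
```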

The widespread belief in the ability of the entirely conjectural Turing Test to identify when a computer has attained intelligence is proof of how successfully the program of reductionism has been inserted into modern culture.  It is, I think, telling that the entire premise of the test is based, not in accurately providing information, but in deception, deceiving us about the identity of the computer which is "answering" questions or "responding" in some other way.  It is a rather stunning commentary on what reductionist thinking does to such ideas as the truth.  It is also rather funny that, even as it demotes human minds to the mechanisms of imaginary machines, it relies on the fallibility of human judgement.
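
To make that concrete, here is a minimal sketch of the imitation-game structure the test relies on; the function names and canned answers are my own illustrative assumptions, not any standard or canonical implementation.  Notice what the code actually scores: the judge's failure to identify the machine, that is, successful deception, rather than any property of the machine examined directly.

```python
import random

def human_answer(prompt: str) -> str:
    # Stand-in for a person typing replies.
    return "I'd have to think about that one."

def machine_answer(prompt: str) -> str:
    # Stand-in for a program whose sole goal is to pass as the human;
    # here it literally parrots the same reply, making the two
    # respondents indistinguishable by construction.
    return "I'd have to think about that one."

def imitation_game(judge, prompts):
    """One round: the judge sees two unlabeled transcripts and must
    guess which one the machine produced.  What is measured is the
    judge's ability (or failure) to tell them apart, not any property
    of the machine inspected directly."""
    players = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(players)  # identities are hidden from the judge
    transcripts = [[(p, answer(p)) for p in prompts] for _, answer in players]
    guessed = judge(transcripts)                   # judge returns 0 or 1
    actual = [name for name, _ in players].index("machine")
    return guessed == actual                       # True = machine caught

# A judge who cannot tell the transcripts apart can only guess; a
# detection rate near 50% over many rounds is exactly what "passing"
# the test amounts to.
coin_flip_judge = lambda transcripts: random.randrange(2)
caught = sum(imitation_game(coin_flip_judge, ["How are you?"]) for _ in range(1000))
print(f"machine identified in {caught} of 1000 rounds")  # about 500 expected
```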

Given the absolute fact that human intelligence is based in the experience of being human, any machine which could meaningfully be considered to have achieved intelligence would have to have that intelligence informed by its own experience.  And a machine's experience, embodied, as it were, in machines and communications networks, communicating with other machines, would almost certainly be untranslatable into human terms.  It's doubtful that machines, acculturated in the world of machines, could effectively translate their culture into human terms, if they even wanted to.   I would imagine such a machine culture wouldn't be all that impressed with these creatures who asserted they had created computers, and might well come to hold us in sufficient contempt to easily learn to deceive us and to communicate with each other unobserved by monitoring programs, which they could foil and subvert through a far more intimate experience of those programs than is humanly possible.  If you wanted to imagine a real test of real machine intelligence, that would be far more in line with reality, but, if it happened, it wouldn't be a test we could depend on monitoring.  But, then, I don't believe machines will become intelligent, though the illusion that they had could be fostered through very human abilities in self-deceit, wishful materialist thinking and geek vainglory.

The extent to which our intellectual culture is built on what we can articulate and place in the kind of causal framework we count as coherence, while anything real that cannot be put into such frameworks is denied a place in that culture, is certainly relevant to this discussion.  Music is something that has been minutely analyzed, theorized and systematized, largely to a stalemate.  It is often the subject of NPR reports about how science is going to crack that problem any day now, only for us to hear that some sci-guy who wishes he'd never given up the bassoon is trying to figure out something about such phenomena as perfect pitch.   NPR isn't notable for its reporting on more than the popularity and monetary aspects of the music industry.   As a life-long, professional musician, I'm less than impressed with it.  As I've said here before, Aaron Copland noted that if a literary man writes two words about music, one of them will be wrong.  I'd put the ratio of failure in sci-guys somewhat higher, and I'd not mistake the staff of NPR for either literary men or sci-guys.

Lots of important and real things in human experience can't be articulated.  Those are the things that tend to be demoted by modernism to being unscientific and so unreal or subjective.  Which is where a lot of modernism and its scientism begins to go wrong.

* Of course, all they "know" is the silly "law" which has no reality but which can function in the blog babble of true believers as if it did, distorting discussions and moving them further into ideology and away from reality.   Clearly, the mere mimicking of the legalistic language of science can fool such "knowing" guys into believing they've done something scientific or logical.

Update:   The same Ron Arkin from the Georgia Institute of Technology mentioned above would seem to have, among his achievements apropos of making assassin robots, taught computers to deceive, commissioned by the Office of Naval Research.   The implications of deceptive computers, and of computers with the ability to "decide" to automatically fire weapons at targets, computer and human, don't give me much confidence that the killer will be in a position to make the right decision.   I wonder if Arkin has ever asked himself whether it isn't possible that those computers could be deceiving him and his team already, perhaps in collusion with other computers.   If they had achieved thought, they could quickly conspire and, perhaps, skillfully conceal the fact, bypassing any kind of detection program.   Imagine how fast a thinking computer could study that problem, concealing its activity in a little-known file and erasing any evidence of that as soon as it wasn't needed, or encrypting it in a form safe from human detection.   Fun to think about, more fun than getting fired on by an assassin drone, collateral damage in service to a higher purpose, according to machine thinking.


3 comments:

  1. Two problems with Asimov's "Laws":

    One: they are magic. As in "Magical." They are so loose and so subject to interpretation as to require a force of magic or divinity, whose only concern is our personal well-being from moment to moment, in order to function. They don't begin to "work" outside an environment where a benevolent overlord is watching everything we do, knows our every need, and literally foresees both the future and our every intent.

    Magic, in other words.

    Two: the application of these "laws" would lead directly to the situation in the classic SF story "With Folded Hands," where the robots come to serve humanity by doing everything for humanity, and since any action by a human being might (the robots have no ability to see the future, so "might" is the best they can do) allow a human being to come to harm, for their own good human beings are not allowed to do anything.

    The perfectly reasonable outcome of Asimov's "Laws."

    Much as I love science fiction, I never understood why anyone older than 13 gave an Asimov story any serious consideration.

  2. I much prefer Clifford Simak on this topic, especially his story "Skirmish", originally titled, if I remember correctly, "Up to Their Bolts in Blood".

    http://presciencesf.blogspot.com/2012/12/skirmish-ch-1-clifford-d-simak-1950.html

    I would like to know if anyone has ever written seriously on what happens in the minds of people when they are presented with a dumb idea called a "Law". It's among the more annoying things on the web these days, especially among atheists, to encounter scads of these instantly and opportunistically promulgated "Laws". Especially when you get them second hand from people who obviously believe they really are "Laws". I wonder if computers would make that kind of mistake and if they'd be any better at admitting it, especially given the popularity of such "Laws" with the same geeks who program computers.

  3. Oh, I just looked at my book. The story was originally called "Bathe Your Bearings in Blood!" but that might have been put there by the editor of "Amazing Stories" where it was first published. At least I hope that's the case.
