Paul Bley: Piano
Steve Swallow: Bass
Pete La Roca: Drums
Syndrome
"It seems to me that to organize on the basis of feeding people or righting social injustice and all that is very valuable. But to rally people around the idea of modernism, modernity, or something is simply silly. I mean, I don't know what kind of a cause that is, to be up to date. I think it ultimately leads to fashion and snobbery and I'm against it." Jack Levine: January 3, 1915 – November 8, 2010 LEVEL BILLIONAIRES OUT OF EXISTENCE
Saturday, May 11, 2013
Friday, May 10, 2013
The Strange History of Altruism: Marilynne Robinson's second Terry Lecture
The Strange History of Altruism
Of the four fine essays given by Marilynne Robinson in this series, this one is my favorite. One of the most enlightening passages is the one in which she talks about the famous case of Phineas Gage. It is a demonstration of how the reductionist method practiced by those who demote the mind to chemicals and neural circuitry produces a facile, two-dimensional cartoon of real human beings, ignoring enormous parts of human life and personality, not because they are irrelevant but because they are inconvenient to their purpose. Whose imagined Phineas Gage is more convincing? That of the alleged scientists or the novelist-essayist?
I am indebted to Daniel Dennett for the ant and the lancet fluke, a metaphor that comes to mind often as I read in his genre. For example, consider poor Phineas Gage, the railroad worker famous for the accident he suffered and survived more than 150 years ago, an explosion that sent a large iron rod through his skull. Wilson, Pinker, Gazzaniga, and Antonio Damasio all tell this tale to illustrate the point that aspects of behavior we might think of as character or personality are localized in a specific region of the brain, a fact that, by their lights, somehow compromises the idea of individual character and undermines the notion that our amiable traits are intrinsic to our nature.
Very little is really known about Phineas Gage. The lore that surrounds him in parascientific contexts is based on a few anecdotes of uncertain provenance, to the effect that he recovered without significant damage - except to his social skills. Gazzaniga says, "He was reported the next day by the local paper to be pain free." Now, considering that his upper jaw was shattered and he had lost an eye, and that it was 1848, if he was indeed pain free, this should surely suggest damage to the brain. But, together with his rational and coherent speech minutes after the accident, it is taken to suggest instead that somehow his brain escaped injury, except to those parts of the cerebral cortex that had, till then, kept him from being "'fitful, irreverent, and grossly profane.'" He was twenty-five at the time of the accident. Did he have dependents? Did he have hopes? These questions seem to me of more than novelistic interest in understanding the rage and confusion that emerged in him as he recovered.
How oddly stereotyped this anecdote is through any number of tellings. It is as if there were a Mr. Hyde in us all that would emerge sputtering expletives if our frontal lobes weren't there to restrain him. If any kind of language is human and cultural, it is surely gross profanity, and, after that, irreverence, which must have reverence as a foil, to mean anything at all. If to Victorians this behavior seemed like the emergence of the inner savage, this is understandable enough. But from our vantage, the fact that Gage was suddenly disfigured and half blind, that he suffered a prolonged infection of the brain, and that "it took much longer to recover his stamina," according to Gazzaniga, might account for some of the profanity, which, after all, culture and language have prepared for such occasions. But the part of Gage's brain where damage was assumed by modern writers to have been localized is believed to be the seat of the emotions. Therefore - the logic here is unclear to me - his swearing and reviling the heavens could not mean what it means when the rest of us do it. Damasio gives extensive attention to Gage, offering the standard interpretation of the reported change in his character. He cites at some length the case of a "modern Phineas Gage," a patient who, while intellectually undamaged, lost "his ability to choose the most advantageous course of action." Gage himself behaved "dismally" in his compromised ability "to plan for the future, to conduct himself according to the social rules he previously had learned, and to decide on the course of action that ultimately would be most advantageous to his survival." The same could certainly be said as well of Captain Ahab. So perhaps Melville meant to propose that the organ of veneration was located in the leg. My point being that another proper context for the interpretation of Phineas Gage might be others who have suffered gross insult to the body, especially those who have been disfigured by it. And in justice to Gage, the touching fact is that he was employed continually until his final illness. No one considers what might have been the reaction of other people to him when his moving from job to job - his only sin besides cursing and irritability - attracts learned disapprobation.
I trouble the dust of poor Phineas Gage only to make the point that in these recountings of his afflictions there is no sense at all that he was a human being who thought and felt, a man with a singular and terrible fate. In the absence of an acknowledgment of his subjectivity, his reaction to this disaster is treated as indicating damage to the cerebral machinery, not to his prospects, or his faith, or his self-love. It is as if in telling the tale the writers participate in the absence of compassionate imagination, of benevolence, that they posit for their kind. And there is another point as well. This anecdote is far too important to these statements about the mind, and about human nature. It ought not to be the center of any argument about so important a question as the basis of human nature. It is too remote in time, too phrenological in its initial descriptions, too likely to be contaminated by sensationalism to have any weight as evidence. Are we really to believe that Gage was not in pain during those thirteen years until his death? How did that terrible exit wound in his skull resolve? No conclusion can be drawn, except that in 1848 a man reacted to severe physical trauma more or less as a man living in 2009 might be expected to do. The stereotyped appearance of this anecdote, the particulars it includes and those whose absence it passes over, and the conclusion that is drawn from it are a perfect demonstration of the difference between parascientific thinking and actual science.
This is only one of the masterpieces of human observation and elucidation contained in Robinson's essays. All of those reconstructions of Phineas Gage are acts of imagination, Robinson's no more than Gazzaniga's or Damasio's. I'll ask again: whose version of him is more credible, more mindful of what must have been left out, and more careful in weighing the believability of various features of the near-contemporary accounts in which the story comes down to us? Who is more exacting in that? What are the motives involved in the reconstructions of the real man?
Marilynne Robinson's Terry Lectures
I have been called away, unexpectedly, for several days. I'm going to post links to Marilynne Robinson giving her Terry Lectures, collected in her book "Absence of Mind".
On Human Nature
Thursday, May 9, 2013
When The Blind Believe They Can See
Updated below (someone asked me to say when I update)
Every once in a while, during the long forays into this kind of science and philosophical stuff, it's good for me to remind myself that my intention is political: promoting the individual and common good, the preservation and protection of life on Earth, equality, justice, economic justice... If I were not convinced that this is of the most basic importance to those goals, I would not go into it at all. The political consequences of the ideas discussed in this passage from Weizenbaum's chapter "Computer Models in Psychology" are quite obvious, I hope.
Sometimes a very complex idea enters the public consciousness in a form so highly simplified that it is little more than a caricature of the original; yet this mere sketch of the original idea may nevertheless change the popular conception of reality dramatically. For example, consider Einstein's theory of relativity. Just how and why this highly abstract mathematical theory attracted the attention of the general public at all, let alone why it became for a time virtually a public mania and its author a pop-culture hero, will probably never be understood. But the same public which clung to the myth that only five people in the world could understand the theory, and which thus acknowledged its awe of it, also saw the theory as providing a new basis for cultural pluralism; after all, science had now established that everything is relative. A more recent example may be found in the popular reception of the work of F. Crick and J. D. Watson, who shared the Nobel prize in Medicine in 1962 for their studies of the molecular structure of DNA, the nucleic acid within the living cell that transmits the hereditary pattern. Here again highly technical results, reported in a language not at all comprehensible to the layman, were grossly oversimplified and overgeneralized in the public mind into the now-popular impression that it is already possible to design a human being to specifications decided on in advance. In one fell swoop, the general public created for itself a vision of a positive eugenics based not on such primitive and (I hope) abhorrent techniques as the killing and sterilization of "defectives," but on the creation of supermen by technological means. What these two examples have in common is that both have introduced new metaphors into the common wisdom.
A metaphor is, in the words of I. A. Richards, "fundamentally a borrowing between and intercourse of thoughts, a transaction between contexts." Often the heuristic value of a metaphor is not that it expresses a new idea, which it may or may not do, but that it encourages the transfer of insights derived from one of its contexts into the other context. Its function thus closely resembles that of a model. A Western student of Asian societies may, for example, not learn anything directly from the metaphoric observation that the overseas Chinese are the Jews of Asia*. But it may never have occurred to him that the position of Jews in the Western world, e.g., as entrepreneurs, intellectuals, and targets of persecution, may serve as a model that can provoke insights and questions relevant for understanding the social role and function of, say, the Chinese in Indonesia. Although calling that possibility to his attention may not give the Western student a new idea, it may enable him to derive new ideas from the interchange of two contexts, neither of which is itself new to him, but which he had never before connected.
Neither the idea of one object moving relative to another, nor that of a man being fundamentally a physical object, was new to the common wisdom of the 1920s and the 1960s, respectively. What struck the popular imagination when, for some reason, the press campaigned for Einstein's theory, was that science appeared to have pronounced relativity to be a fundamental and universal fact. Hence the slogan "everything is relative" was converted into a legitimate contextual framework which could, potentially at least, be coupled to every other universe of discourse, e.g., as an explanatory model legitimating cultural pluralism. The results announced by Crick and Watson fell on a soil already prepared by the public's vague understanding of computers, computer circuitry, and information theory (with its emphasis on codes and coding), and, of course, by its somewhat more accurate understanding of Mendelian genetics, inheritance of traits, and so on. Hence it was easy for the public to see the "cracking" of the genetic code as an unraveling of a computer program, and the discovery of the double-helix structure of the DNA molecule as an explication of a computer's basic wiring diagram. The coupling of such a conceptual framework to one that sees man as a physical object virtually compels the conclusion that man may be designed and engineered to specification.
There is no point in complaining that Einstein never intended his theory to serve as one half of the metaphor just described. It is, after all, necessary for the two contexts coupled by a metaphor to be initially disjoint, just as (as I insisted earlier) a model must not have a causal connection to what it models. The trouble with the two metaphoric usages we have cited is that, in both, the metaphors are overextended. Einstein meant to say that there is no fixed, absolute spacetime frame within which physical events play out their destinies. Hence every description of a physical event (and in that sense, of anything) must be relative to some specified spacetime frame. To jump from that to "everything is relative" is to play too much with words. Einstein's contribution was to demonstrate that, contrary to what had until then been believed, motion is not absolute. When one deduces from Einstein's theory that, say, wealth and poverty are relative, in that it is not the absolute magnitudes of the incomes of the rich and poor that matter, but the ratios of one to the other, one has illicitly elevated a metaphor to the status of a scientific deduction.
The example from molecular biology illustrates an overextension of a metaphor in another sense: there the extent of what we know about the human as a biological organism is vastly exaggerated. The result is, to say the least, a premature closure of ideas. The metaphor, in other words, suggests the belief that everything that needs to be known is known.
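To see concretely what the overextended "everything is relative" slogan trades on, here is a trivial sketch with invented incomes; nothing in it comes from Weizenbaum, and nothing in relativity theory picks between the two measures.

```python
# "Wealth and poverty are relative": the claim is that the ratio of incomes,
# not the absolute gap, is what matters. All numbers are invented.
rich, poor = 250_000, 25_000

print(rich - poor)   # absolute gap: 225000
print(rich / poor)   # ratio: 10.0

# Doubling both incomes preserves the ratio but doubles the absolute gap.
# Deciding which measure "matters" is an economic or moral judgment,
# not a deduction from Einstein.
print(2 * rich - 2 * poor)      # 450000
print((2 * rich) / (2 * poor))  # 10.0
```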
I will note the fact that both Crick and Watson, publicly, and even within what is commonly believed to be science, supported the primitive concept of eugenics that Weizenbaum mentions, never mind the more genteel expression it often takes these days. In Weizenbaum's words, both of them "grossly oversimplified and overgeneralized" the meaning of genetics and their discovery. They were hardly alone in that, nor were they the first.
While researching the series I did about Darwin, Galton and Haeckel last year, I was astonished by the number of prominent post-WWII era scientists who still believed in the same brand of eugenics as Pearson and even Davenport. The culture of science seems to allow even very sophisticated scientists, who are supposed to learn from the real world, to be blinded by science to the extent that they can't see the disaster that eugenics produces in real-world, human societies, producing horrors more abundant than the ones eugenics is superstitiously believed to prevent.
Crick, who is commonly presented as something of a broad-minded progressive figure, as compared to Watson, campaigned for the applied program of eugenics as expounded by Arthur Jensen, writing many letters on his behalf. I will not draw a metaphor to relate that to his scientistic materialism and his often stated desire to destroy belief in what he termed "vitalism" (you can safely read "the soul" or, by extension, "God"), except to note that those two things exist in one mind, on the basis of what the possessor of that mind believes to be "science". Noting the connection between those ideas in one and the same mind, and their presumed reliance on a more basic unity of belief, requires no metaphor. Crick violently rejected a far less obvious dualistic modeling of the mind, so the connection between his materialism and his pseudo-scientific eugenics would not violate his own framing. There was no compartmentalization of, or firewall between, Crick's materialism and his eugenics, nor is there in the minds of other materialist-eugenicists, and there are more of them than many who see themselves as up-to-date and sciency folk would probably care to believe. Indeed, one of the things that seriously alarmed me within the last ten years, slapping me out of my lazy late-middle-age torpor, was the casual and unaware belief in eugenics expressed by those who believe themselves to be liberal.
* Given that his mention of this metaphor may raise a few eyebrows of the kind I'm well accustomed to having raised at me, I'll point out that Joseph Weizenbaum's family were among those who fled the Nazis. His account is given in a documentary movie made a few years back, "Rebel At Work" [English transcript in pdf].
Update: I understand someone believes I'm making up what I said about Crick. Here, as a small sample of his thinking on racial inequality and eugenics, is a letter to Sir Charles Snow from 1969:
Dear Charles
I gave a talk to University College on 'The Social Impact of Biology' and the BBC subsequently broadcast a shortened version of it. As I covered a very broad range of topics I decided not to publish it, and no manuscript exists as I spoke from notes. As far as I remember I said that the biological evidence was that all men were not created equal, and it would not only be difficult to try to do this, but biologically undesirable. As an aside I said that the evidence for the equality of different races did not really exist. In fact, what little evidence there was suggested racial differences.
Had I enlarged on the subject I would have dwelt on the probable positive differences, such as, for example, the Jews and the Japs, rather than speak only about Negroes. From what I hear you are saying something along these lines. I would certainly love to see what you've written when you're satisfied with it.
F.H.C. Crick
And, as can be seen in his letter to another scientific racist and eugenicist, his fellow Nobel laureate William Shockley, he wasn't only a fan of Arthur Jensen's scientific racism.
Yes, !Xóõ Is a Real Language
Someone asked me in a comment if I made up !Xóõ or if it's real. It's a real language, spoken by a small number of people in southwestern Botswana. It's sometimes called the most phonemically rich of all surviving languages. The written form, with its amazing number of characters, is wonderful to look at.
Here is an MP3 of the text, first in English, then in !Xóõ.
Wednesday, May 8, 2013
"Never Act as Though Any Single Perspective Can Comprehend the Whole..." More from Joseph Weizenbaum
I will, in what follows, try to maintain the position that there is nothing wrong with viewing man as an information processor (or indeed as anything else) nor with attempting to understand him from that perspective, providing, however, that we never act as though any single perspective can comprehend the whole man. Seeing man as an information-processing system does not in itself dehumanize him, and may very well contribute to his humanity in that it may lead him to a deeper understanding of one specific aspect of his human nature. It could, for example, be enormously important for man's understanding of his spirituality to know the limits of the explanatory power of an information-processing theory of man. In order for us to know those limits, the theory would, of course, have to be worked out in considerable detail.
Before we discuss what an information-processing theory of man might look like, I must say more about theories and especially about their relations to models. A theory is first of all a text, hence a concatenation of the symbols of some alphabet. But it is a symbolic construction in a deeper sense as well: the very terms that a theory employs are symbols which, to paraphrase Abraham Kaplan, grope for their denotation in the real world or else cease to be symbolic...
I am going to break in here and call your attention to the remark of Stephen Hawking I pointed out the other day, in which he called for excusing his branch of physics from the most basic requirements of physics, for it to be exempted from exactly this search for the "denotation," that is, from demonstrating the correspondence of his symbols to an actual entity in the real world, the physical world, the world that science has as its only legitimate subject matter. He even called for his "universes" to be exempted from logic, the means by which such denotation could be found. This extremely bizarre call for physics to be exempted from its only legitimate subject matter was made not only with the tacit acceptance of the materialist-naturalist-physicalists - you can safely read "atheists" - who took his book to their bosom, but, in the case of the physicists or scientists among them, without objection to his also demanding an exemption from what is supposed to be, according to them, the entire and only real reality AS PROVED BY THE METHODS AND PROCEDURES OF SCIENCE. He is demanding that one perspective, the one in which they obtain their fame and living, comprehend the whole of an entity far larger than the one Weizenbaum addresses in this passage: the real UNIVERSAL set, in which people, in the materialist sense, as physical objects, are merely elements. As of now, I'm unaware of a formal demand from elite physicists for an exemption of that sort, at least not one which is articulated as such.
For Sean Carroll and the others, it has to be asked how they can maintain that religious people - the ones who make no fundamentalist pretense that the subject of their belief is knowledge in the sense of scientific knowledge - are illegitimate in holding that the supernatural, defined as non-material, non-physical, not subject to the limitations of the physical universe, is real, when they accept the call to do the same thing for their beliefs about physical reality, releasing their beliefs about the physical universe from the properties of the physical universe. It should be asked how long physics and cosmology, which have been taken on a quest by atheism to kill off God, can hope to maintain the reason for their repute and renown with that kind of stuff being said by the most famous among them.
Continuing:
... The words "grope for" are Kaplan's, and are a happy choice - for to say that terms of symbols "find" their denotation in the real world would deny, or at least obscure, the fact that the symbolic terms of a theory can never be finally grounded in reality.
Breaking in again: perhaps it is the frustration of looking for entities, "other universes" etc., and failing to find evidence that they are at all grounded in reality, that leads Hawking to seek an exemption for his branch of "science" from the most basic requirement, that it be about physical reality and held together by a logically coherent case that what he imagines has some correspondence to physical reality. That this attempt is NOT seen as an enormous scandal within physics, especially in its openly materialist faction, is, frankly, disgustingly hypocritical. The demand would seem to attempt to bypass the sad fact that their theory can never be grounded in reality, moving to a place in which its preeminence is derived, exactly, from its being exempt from having ANY ground in reality.
Again:
Definitions that define words in terms of other words leave those other words to be defined. In science generally, symbols are often defined in terms of operations. In physics, for example, mass is, informally speaking, that property of an object which determines its motion during collision with other objects. (If two objects moving at identical velocities come to rest when brought into head-on collision, it is said that they have the same mass.) This definition of mass permits us to design experiments involving certain operations whose outcomes "measure" the mass of objects. Momentum is defined as the product of the mass of an object and its velocity (mv), acceleration as the rate of change of velocity with time (a = dv/dt), and finally force as the product of mass and acceleration (f = ma). In a way it is wrong to say that force is "defined" by the equation f = ma. A more suitable definition given in some physics texts is that force is an influence capable of producing a change in the motion of a body. The difference between the two senses of "definition" alluded to here illustrates that so-called operational definitions of a theory's terms provide a basis for the design of experiments and the discovery of general laws, but that these laws may then serve as implicit definitions of the terms occurring in them. These and still other problematic aspects of definition imply that all theoretic terms, hence all theories, must always be characterized by a certain openness. No term of a theory can ever be fully and finally understood. Indeed, to once more paraphrase Kaplan, it may not be possible to fix the content of a single concept or term in a sufficiently rich theory (about, say, human cognition) without assessing the truth of the whole theory. This fact is of the greatest importance for any assessment of computer models of complex phenomena.
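Breaking in briefly: the chain of operational definitions Weizenbaum describes can be written out directly. Here is a minimal sketch in Python, with invented numbers, just to show how each term is built by an operation on the ones before it; it is an illustration of his point, not anything from his text.

```python
# Weizenbaum's chain of operational definitions, sketched with invented numbers.

def momentum(mass_kg: float, velocity_ms: float) -> float:
    """p = m * v"""
    return mass_kg * velocity_ms

def acceleration(dv_ms: float, dt_s: float) -> float:
    """a = dv/dt, approximated over a finite interval."""
    return dv_ms / dt_s

def force(mass_kg: float, accel_ms2: float) -> float:
    """f = m * a"""
    return mass_kg * accel_ms2

# The collision test for equal mass: two bodies moving toward each other at
# equal speeds that come to rest together must have equal masses, since total
# momentum (m1*v - m2*v) can only be zero at rest if m1 == m2.
m1, m2, v = 3.0, 3.0, 2.0
assert momentum(m1, v) + momentum(m2, -v) == 0.0

a = acceleration(dv_ms=4.0, dt_s=2.0)   # 2.0 m/s^2
print(force(m1, a))                     # 3.0 kg * 2.0 m/s^2 = 6.0 N
```

Again: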
A theory is, of course, not merely any grammatically correct text that uses a set of terms somehow symbolically related to reality. It is a systematic aggregate of statements of laws. Its content, its very value as theory, lies at least as much in the structure of the interconnections that relate its laws to one another, as in the laws themselves. (Students sometimes prepare themselves for examinations in physics by memorizing lists of equations. They may well pass their examinations with the aid of such feats of memory, but it can hardly be said that they know physics, that, in other words, they command a theory.) A theory, at least a good one, is thus not merely a kind of data bank in which one can "look up" what would happen under such and such conditions. It is rather more like a map (an analogy Kaplan also makes) of a partially explored territory. Its function is often heuristic, that is, to guide the explorer in further discovery. The way theories make a difference in the world is thus not that they answer questions, but that they guide and stimulate intelligent search. And (again) there is no single "correct" map of a territory. An aerial photograph of an area serves a different heuristic function, say, for a land-use planner, than does a demographic map of the same area. One use of a theory, then, is that it prepares the conceptual categories within which the theoretician and the practitioner will ask his questions and design his experiments.
Perhaps I should apologize for using this passage to air my enormous problem with what Hawking said, and with its embrace by Sean Carroll and others, clearly motivated at least as much by their hatred of religion as by any scientific purpose. But what Weizenbaum pointed out about the relationship of science to the physical universe it was invented to study, in terms of theory "groping for" its denotation, perhaps finds its most stunning illustration in the demand for an exemption from its exigencies by today's most celebrated physicist and cosmologist.
Tuesday, May 7, 2013
Two Stories About Medicine: One Provocative, One Shocking and Alarming
I don't know much of anything about the controversial use of chelation therapy in the treatment of heart disease except that it is controversial. Chelation is one of those topics that has been deemed to be "woo" by the medical wing of the "Skepticism" industry. But I don't know enough to know if it might be one of those cases when, oddly enough, their listing it on their Index of Prohibited Ideas is valid. But a story by Karen Weintraub in the Boston Globe on April 22 was interesting because it cast a light on the idea of "woo" itself.
A study showing a small but positive result for chelation as a therapy for heart disease - a rather large study - was conducted under the supervision of the federal government. You would think that some unanticipated validation of a treatment used unofficially by about 50,000 Americans a year would be welcomed by cardiologists and others involved in treating heart disease and in public health. But that's hardly the case.
The positive result triggered a firestorm of opposition from some cardiologists, who dismissed the study as junk science, while others defended the study, published in the prestigious Journal of the American Medical Association, as no more flawed than any other.
The emotional debate reinforces the importance of scientific research, several doctors and ethicists said, even as it shows weakness in the scientific process. And it raises questions about the attitudes many doctors have shown toward alternative medicine.
"It challenges the foundation of western medicine to accept alternative medicines," said Felician Cohn, bioethics director for Kaisar Permanete Orange County.
The results surprised even the researchers conducting the study, who had expected it to give the negative result predicted by the current beliefs of doctors and researchers.
No one was more surprised by the results in the TACT trial than the researchers who conducted it.
A photo of the meeting where they first learned the results shows shock and dismay, said Gervasio A. Lamas, chief of Columbia University's division of cardiology at Mt. Sinai Medical Center in Miami Beach, Fla.
" We really do look astonished," Lamas said. "Some people had their head on the table saying; 'no you've got to be kidding me.'"
Lamas said he felt he had to study chelation after a patient came to him asking whether to use the therapy. His first reaction was "of course not!" Then he began to research the treatment and realized that the "only correct answer was: I don't know."
You can look up the reaction online; I have not, yet. I suspect it will be dominated by "Skeptical" rejection from people who "know" it can't be true, some of them media doctors such as Steven E. Nissen, who is given prominent mention in the story in the Globe. And, as I said, they might be right, though, with this study, there is more reason to believe they may not be. The question remains, though: why wouldn't these doctors be thrilled to have evidence of another possible means of preventing second and third heart attacks in their patients, and why wouldn't that make them more willing to look at the possible effectiveness of a widely used unofficial therapy? What does that say about the value they place on patient care as opposed to a preexisting bundle of beliefs held on the basis of no such study?
-------
Yesterday's reading brought up something about medical science that is far more stunning and disturbing, which opens far more of "the foundation of western medicine", and much more of biology, to far more basic doubt. [Note: Sorry, just realized I'd forgotten the link.]
New scientific research has cast grave doubt on the safety testing of hundreds of thousands of consumer products, food additives and industrial chemicals.
Everyday products, from soft drinks and baby foods, to paints, gardening products, cosmetics and shampoos, contain numerous synthetic chemicals as preservatives, dyes, active ingredients, or as contaminants. Official assurances of the safety of these chemicals are based largely on animal experiments that use rabbits, mice, rats and dogs. But new results from a consortium of researchers and published in the Proceedings of the National Academy of Sciences suggest such assurances may be worthless (Seok et al. 2013).
The results of these experiments challenge the longstanding scientific presumption holding that animal experiments are of direct relevance to humans. For that reason they potentially invalidate the entire body of safety information that has been built up to distinguish safe chemicals from unsafe ones. The new results arise from basic medical research, which itself rests heavily on the idea that treatments can be developed in animals and transferred to humans.
The research originated when investigators noted that in their medical specialism of inflammatory disease (which includes diabetes, asthma and arthritis), drugs developed using mice have to date had a 100% failure rate in almost 150 clinical trials on humans.
According to Kristie Sullivan, Director of Regulatory Testing Issues at the Physicians Committee for Responsible Medicine (PCRM), this is not unusual “about 90% of all pharmaceuticals tested for safety in animals fail to reach the market, or are quickly pulled from the market”. Wanting to understand why this might be so, the consortium decided to test the effects of various treatments that lead to inflammation, and systematically compare results between mice and humans. This postulated correlation across different animal species is sometimes known as the concordance assumption.
"The concordance assumption" is something I never encountered in any of the biology classes I took in high school or college, nor do I remember it being mentioned in anything I've read in the matter of animal research, just about every single one of which featured either research or experiments conducted on animals, most of which explicitly asserted some rather broad assertions about human beings and between different species on the basis of this unstated "assumption". How serious is the problem with it?
In a first set of experiments the researchers looked at acute inflammation in mice brought on by various stimuli. These stimuli were bacterial toxins (endotoxaemia), trauma, and burns. To measure responses the authors quantified positive or negative changes in gene activity for thousands of individual genes. The researchers found that changes in activity of a particular mouse gene after treatment typically failed to predict changes in activity in the closest related human gene. This was not the expected result. If humans and mice are meaningfully similar (i.e. concordant) then gene activity changes in mice should have closely resembled those in humans after a similar challenge. But they did not.
In further experiments, the researchers identified another difference. While humans responded with similar patterns of gene changes to each of the three different challenges (trauma, burns, and endotoxaemia), mice did not. The three treatments in mice each resulted in a distinct set of gene activity changes. This confirmed the initial results in the sense that mice and humans responded differently. It also implied that the differences in gene response between mice and humans are attributable not so much to a lot of detailed ‘noise’ but to fundamental differences in the physiology of mice and humans in dealing with these challenges.
Next, the researchers examined the activity of specific biological signaling pathways after similar treatments. These too were highly divergent between mice and humans. Surprised by the consistently poor correlations between the two species, the authors then tested other human/mouse models of inflammatory diseases. Again, the similarity between mice and humans was low.
In summary, repeated experiments confirmed that, when it comes to inflammation, mice and humans have little in common, a finding important enough in itself given the prevalence of inflammation-related diseases in humans. These include allergies, celiac disease, asthma, rheumatoid arthritis, and autoimmune diseases.
Of the two articles mentioned here, this one really has immediate and extremely disturbing potential for harm.
Thus the Seok study is not the first to conclude that mice are poor models for human disease, but it is notable for being by far the most comprehensive. Combined with results of previous experiments, its conclusions suggest researchers should expect that mouse, and probably other animal testing, is of little use in advancing the treatment of human illnesses, including heart disease and cancer.
In other words, the public is probably being badly served by much of the money spent on medical research. According to PCRM’s Kristie Sullivan, “the National Institutes of Health is giving researchers billions of dollars every year for research on animals”. While missing out on potential cures, the public is also likely being exposed to dangerous or ineffective pharmaceuticals. Animal testing nearly prevented the approval of valuable drugs such as penicillin and subsequent antibiotics, but it did not prevent the thalidomide disaster of the 50s and 60s (Greek and Swingle Greek, 2003).,,
... If animals are not useful predictors of important disease responses in humans it is unlikely they are useful as test subjects for toxicological safety. In other words, lack of concordance means that the synthetic chemicals that are found in industrial products, incorporated into food, and otherwise spread throughout the environment, are essentially untested. The regulatory process through which they passed was never a scientifically validated and evidence-based system, but now the evidence shows it to have been functioning as a system of random elimination. “We are not protecting humans” says Kristie Sullivan, noting that “even a National Academy study agrees that many toxicological tests are not human-relevant.”
The effect of this is nothing less than shocking to someone who was brought up with the faith that all of that horrific and cruel animal testing was scientifically valid and a hard but necessary evil. Now it would seem that even the scientific character of its basic theory was more faith than science. I will not try to tease out its origins and ancestry, not just now, but will repeat that this article is about the most disturbing thing I've read in years not related to climate change.
What else it means for the enormous faith in a far less demonstrable "concordance" between the minds of animals as remotely related to us as other mammals and us, never mind the ever popular one between human beings and ants, it calls it into the most fundamental question. If science missed the issues discussed in this article for all of those decades, there is absolutely no reason to have any faith in the wild speculations by those who find human consciousness and thought inconvenient for their "scientific" faith. Unless they can account for it with science as clear as this, it should be considered to be rank superstition based in something far less valid than human experience, ideology.
A rather large study, conducted under the supervision of the federal government, showed a small but positive result for chelation as a therapy for heart disease. You would think that unanticipated validation of a treatment used, unofficially, by about 50,000 Americans a year would be welcomed by cardiologists and others involved in treating heart disease and protecting public health. But that's not nearly the case.
The positive result triggered a firestorm of opposition from some cardiologists, who dismissed the study as junk science, while others defended the study, published in the prestigious Journal of the American Medical Association, as no more flawed than any other.
The emotional debate reinforces the importance of scientific research, several doctors and ethicists said, even as it shows weakness in the scientific process. And it raises questions about the attitudes many doctors have shown toward alternative medicine.
"It challenges the foundation of western medicine to accept alternative medicines," said Felician Cohn, bioethics director for Kaisar Permanete Orange County.
The results surprised even the researchers who conducted the study; they had expected it to produce the negative result predicted by the prevailing beliefs of doctors and researchers.
No one was more surprised by the results in the TACT trial than the researchers who conducted it.
A photo of the meeting where they first learned the results shows shock and dismay, said Gervasio A. Lamas, chief of the Columbia University division of cardiology at Mount Sinai Medical Center in Miami Beach, Fla.
" We really do look astonished," Lamas said. "Some people had their head on the table saying; 'no you've got to be kidding me.'"
Lamas said he felt he had to study chelation after a patient came to him asking whether to use the therapy. His first reaction was "of course not!" Then he began to research the treatment and realized that the "only correct answer was: I don't know."
You can look up the reaction online; I have not, yet. I suspect it will be dominated by the "Skeptical" rejection of people who "know" it can't be true, some of them media doctors such as Steven E. Nissen, who is given prominent mention in the story in the Globe. And, as I said, they might be right, though, with this study, there is more reason to believe they may not be. The question remains, though: why wouldn't these doctors be thrilled to have evidence of another possible means of preventing second and third heart attacks in their patients? Why wouldn't that make them more willing to look at the possible effectiveness of a widely used unofficial therapy? What does that say about how they weigh patient care against a preexisting bundle of beliefs held on the basis of no such study?
-------
Yesterday's reading brought up something about medical science that is far more stunning and disturbing, something that throws far more of "the foundation of western medicine," and much more of biology, into far more basic doubt. [Note: Sorry, just realized I'd forgotten the link.]
New scientific research has cast grave doubt on the safety testing of hundreds of thousands of consumer products, food additives and industrial chemicals.
Everyday products, from soft drinks and baby foods, to paints, gardening products, cosmetics and shampoos, contain numerous synthetic chemicals as preservatives, dyes, active ingredients, or as contaminants. Official assurances of the safety of these chemicals are based largely on animal experiments that use rabbits, mice, rats and dogs. But new results from a consortium of researchers and published in the Proceedings of the National Academy of Sciences suggest such assurances may be worthless (Seok et al. 2013).
The results of these experiments challenge the longstanding scientific presumption holding that animal experiments are of direct relevance to humans. For that reason they potentially invalidate the entire body of safety information that has been built up to distinguish safe chemicals from unsafe ones. The new results arise from basic medical research, which itself rests heavily on the idea that treatments can be developed in animals and transferred to humans.
The research originated when investigators noted that in their medical specialism of inflammatory disease (which includes diabetes, asthma and arthritis), drugs developed using mice have to date had a 100% failure rate in almost 150 clinical trials on humans.
According to Kristie Sullivan, Director of Regulatory Testing Issues at the Physicians Committee for Responsible Medicine (PCRM), this is not unusual: “about 90% of all pharmaceuticals tested for safety in animals fail to reach the market, or are quickly pulled from the market”. Wanting to understand why this might be so, the consortium decided to test the effects of various treatments that lead to inflammation, and systematically compare results between mice and humans. This postulated correlation across different animal species is sometimes known as the concordance assumption.
"The concordance assumption" is something I never encountered in any of the biology classes I took in high school or college, nor do I remember it being mentioned in anything I've read in the matter of animal research, just about every single one of which featured either research or experiments conducted on animals, most of which explicitly asserted some rather broad assertions about human beings and between different species on the basis of this unstated "assumption". How serious is the problem with it?
In a first set of experiments the researchers looked at acute inflammation in mice brought on by various stimuli. These stimuli were bacterial toxins (endotoxaemia), trauma, and burns. To measure responses the authors quantified positive or negative changes in gene activity for thousands of individual genes. The researchers found that changes in activity of a particular mouse gene after treatment typically failed to predict changes in activity in the closest related human gene. This was not the expected result. If humans and mice are meaningfully similar (i.e. concordant) then gene activity changes in mice should have closely resembled those in humans after a similar challenge. But they did not.
In further experiments, the researchers identified another difference. While humans responded with similar patterns of gene changes to each of the three different challenges (trauma, burns, and endotoxaemia), mice did not. The three treatments in mice each resulted in a distinct set of gene activity changes. This confirmed the initial results in the sense that mice and humans responded differently. It also implied that the differences in gene response between mice and humans are attributable not so much to a lot of detailed ‘noise’ but to fundamental differences in the physiology of mice and humans in dealing with these challenges.
Next, the researchers examined the activity of specific biological signaling pathways after similar treatments. These too were highly divergent between mice and humans. Surprised by the consistently poor correlations between the two species, the authors then tested other human/mouse models of inflammatory diseases. Again, the similarity between mice and humans was low.
In summary, repeated experiments confirmed that, when it comes to inflammation, mice and humans have little in common, a finding important enough in itself given the prevalence of inflammation-related diseases in humans. These include allergies, celiac disease, asthma, rheumatoid arthritis, and autoimmune diseases.
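For what it's worth, the "concordance" at issue can be stated quite concretely: if mice modeled humans, the changes in activity of orthologous genes after the same challenge should correlate strongly across the two species. Below is a minimal sketch, in Python, of that kind of comparison. The gene symbols and numbers are invented purely for illustration; Seok et al. made the comparison genome-wide with microarray data, not with a handful of genes.

# Toy illustration of testing "concordance": correlate the changes in
# gene activity (log fold-changes) after a challenge, e.g. a burn,
# between human genes and their mouse orthologs.
# All gene symbols and values below are invented for illustration.
from math import sqrt

human = {"TNF": 2.1, "IL6": 3.4, "CRP": 1.8, "ALB": -1.2, "TLR4": 0.9}
mouse = {"TNF": 0.3, "IL6": -0.5, "CRP": 0.1, "ALB": 0.8, "TLR4": -1.1}

genes = sorted(human)
x = [human[g] for g in genes]
y = [mouse[g] for g in genes]

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Concordance would show up as a correlation near +1; the study reported
# correlations near zero, i.e. mouse changes did not predict human ones.
print(f"r = {pearson(x, y):.2f}")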
Of the two articles mentioned here, this one really has immediate and extremely disturbing potential for harm.
Thus the Seok study is not the first to conclude that mice are poor models for human disease, but it is notable for being by far the most comprehensive. Combined with results of previous experiments, its conclusions suggest researchers should expect that mouse, and probably other animal testing, is of little use in advancing the treatment of human illnesses, including heart disease and cancer.
In other words, the public is probably being badly served by much of the money spent on medical research. According to PCRM’s Kristie Sullivan, “the National Institutes of Health is giving researchers billions of dollars every year for research on animals”. While missing out on potential cures, the public is also likely being exposed to dangerous or ineffective pharmaceuticals. Animal testing nearly prevented the approval of valuable drugs such as penicillin and subsequent antibiotics, but it did not prevent the thalidomide disaster of the 50s and 60s (Greek and Swingle Greek, 2003)...
... If animals are not useful predictors of important disease responses in humans it is unlikely they are useful as test subjects for toxicological safety. In other words, lack of concordance means that the synthetic chemicals that are found in industrial products, incorporated into food, and otherwise spread throughout the environment, are essentially untested. The regulatory process through which they passed was never a scientifically validated and evidence-based system, but now the evidence shows it to have been functioning as a system of random elimination. “We are not protecting humans,” says Kristie Sullivan, noting that “even a National Academy study agrees that many toxicological tests are not human-relevant.”
The effect of this is nothing less than shocking to someone who was brought up with the faith that all of that horrific and cruel animal testing was scientifically valid and a hard but necessary evil. Now it would seem that even the scientific character of its basic theory was more faith than science. I will not try to tease out its origins and ancestry, not just now, but will repeat that this article is about the most disturbing thing I've read in years not related to climate change.
As for what this means for the enormous faith in a far less demonstrable "concordance" between our minds and the minds of animals as remotely related to us as other mammals, never mind the ever popular one between human beings and ants, it calls that faith into the most fundamental question. If science missed the issues discussed in this article for all of those decades, there is absolutely no reason to have any faith in the wild speculations of those who find human consciousness and thought inconvenient for their "scientific" faith. Unless they can account for it with science as clear as this, their position should be considered rank superstition, based in something far less valid than human experience: ideology.
Who Really "Knows" E8?
On the Impossibility of Knowing It All, What That Means, and the Faith-Based Rejection of Science
So how big a computation is this character table for split E8? Fokko's software immediately told us exactly how many different representations we were talking about (453,060); that's also the number of terms that could appear in the character formulas. So the character table is a square matrix of size 453,060. The number of entries is therefore about 200 billion, or 2 x 10^11.
But, not to worry, because he continues:
Fortunately the matrix is upper triangular, so we only need 100 billion entries
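Those two figures, by the way, are easy to check. Here is a minimal back-of-the-envelope sketch in Python, assuming nothing but the count of 453,060 representations quoted above; it is arithmetic about the size of the table, not anything resembling the atlas software itself.

# Back-of-the-envelope check of the matrix sizes quoted above.
# The only input is the 453,060 representations given in the narrative.
n = 453_060

full = n * n                  # entries in the full square character table
upper = n * (n + 1) // 2      # entries on and above the diagonal

print(f"full matrix:      {full:.3e}")   # about 2.05e+11, i.e. ~200 billion
print(f"upper triangular: {upper:.3e}")  # about 1.03e+11, i.e. ~100 billion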
Reading even this narrative, for me, is like looking at the shadow of a reflection of the E8 figure at a great distance, through a gauze. I can gather enough of the achievement to be very impressed, but I really can't even understand the terms in the first paragraph. My friend who teaches mathematics at a quite decent land-grant university, and who publishes several papers a decade, told me that she doesn't understand much more of it. I don't think she's just trying to make me feel better; she's admitting the same thing that Richard Lewontin did more generally.
First, no one can know and understand everything. Even individual scientists are ignorant about most of the body of scientific knowledge, and it is not simply that biologists do not understand quantum mechanics. If I were to ask my colleagues in the Museum of Comparative Zoology at Harvard to explain the evolutionary importance of RNA editing in trypanosomes, they would be just as mystified by the question as the typical well-educated reader of this review.*
As soon as I read the size of the effort of constructing the model of the E8 figure, my first question was whether anyone, even the best-informed members of the group, could meaningfully claim to understand it, or how confident they could really be in the tightness of their results. No one could possibly master more than a small part of the topic, and there is not really any such thing as knowledge that is held collectively, not without a great deal of faith in all of the other members of the group, and perhaps in the efficacy of the computers and the intellectual architecture of the attempt. Faith would, obviously, be a requirement of even a mathematical or scientific "fact" of far less daunting dimensions. Short of many minds being joined as the computers can be, the fact is that no one can really "know" much of anything about that many dimensions. Holding that what is "known" about it is actually known stretches the meaning of the word. The word is stretched even further to cover the entirety of what is included in science.
While the obvious connections between this issue of faith in science and religious faith are there, those aren't what I'm interested in addressing in this post. I'm interested in, once again, addressing the annoying and arrogant superstition among the obnoxious scientistic fundamentalists, which is getting steadily worse. Yes, I've been looking at the trash filtered out of my in-box, again.
Everywhere, since the inception of the "Skepticism" industry, and especially since Sam Harris and Richard Dawkins instituted the new atheist fad, the internet has been plagued with masses of these scientistic fundamentalists, many of whom are far more ignorant of even very simple math and science than, for example, I am. It's not uncommon to find them making baroque and elaborate arguments, most typically about religion, on the basis of string or M-theory, involving speculations about more than eight dimensions, all while appearing unable to solve a linear equation, never mind a quadratic one. As can be seen in the series I did about James Randi et al., it is possible for a total ignoramus in matters scientific to be revered as an oracular figure of science by a large number of acolytes, and not only by those quite ignorant of science but by popular figures who are actual scientists. One person I encountered recently said, well, yeah, Randi doesn't have any training in science but he spent a lot of time with Carl Sagan. I noted that Carl Sagan spent a lot of time with Ann Druyan but that didn't make him a woman. What is clear about this is that even such a figure of the church of scientism as Carl Sagan must have known that Randi is a complete non-entity in science, but they were OK with the effort to sell him as a representative of science.
The reason for that is ideological and political: Randi shares a faith in materialism with Sagan and the other actual scientists engaged in the promotion of scientism. That shared faith is enough for them not merely to overlook the dishonesty of the effort but often to participate in Randi's PR promotion. The sheer dishonesty of Randi, and the widespread acceptance of his self-generated career as a spokesman for science by scientists who know it is a total fraud, is certainly a scandal. I would say it is a scandal big enough to do actual harm to science. But that is the price that lots of scientists are willing to pay. I think it's as clear an example as any of the corruption of science as it is alleged to be into the ideology that it has become in far too many cases.
On The Faith-Based Rejection of Science
As creationism and climate change denial show, science can not only be accepted on the basis of pre-existing faith, it can be rejected on that basis as well. I would have to say that it is the materialists' faith-based rejection of science and other ideas that is the most basic aspect of their religion. That is an aspect of "Skepticism" that is too little addressed. I will point out, again, that several of the pseudo-scientific "voices of science," such as Randi and Penn Jillette, have extended this practice of "Skeptical" denial to climate change science.
Dr. Dean Radin has posted a linked index of peer-reviewed studies of psi and related phenomena. Many of the papers lay out quite impressively careful and controlled experiments which have yielded results with far, far more than the statistical significance generally required by science. I can understand quite a bit of the math in some of them, so I don't have to take that on faith. I know the "Skeptics" provide a level of oversight that almost certainly guards against lapses in methodology and attention; I have even more faith in the internal critics and referees in that area. I would say that someone even more knowledgeable about statistics than I am would need a lower level of faith in the results of that science in order to either accept or reject it. Though they would also be more aware of the effort required to SCIENTIFICALLY challenge the reviewed results that are reported, requirements that are often not practiced by the "Skeptics." I don't think anyone who hasn't even read the abstract of a paper can reject it on the basis of anything but faith. I'll bet not one in 200 of the blog "Skeptics" could do even that.
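To give a toy example of the kind of statistic such papers report, and it is only an invented illustration, not a figure from any study in Radin's index: in a forced-choice experiment with four possible targets, chance performance is 25%, and the strength of the evidence is the probability of doing at least as well as the observed hit count by chance alone.

# Toy example: the one-sided binomial probability of scoring k or more
# hits in n trials when a hit by pure chance has probability p.
# The hit and trial counts below are invented for illustration.
from math import comb

def tail_probability(k, n, p=0.25):
    # P(X >= k) for X ~ Binomial(n, p), summed exactly.
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# 70 hits in 200 trials, where chance predicts about 50:
print(f"p-value = {tail_probability(70, 200):.1e}")  # well below the usual 0.05 threshold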
* Richard Lewontin addresses the fact that we are all at the mercy of an inevitable reliance on authority, and that the choices of even very sophisticated scientists will often be less than universally accepted by those with more knowledge than they have.
Third, it is said that there is no place for an argument from authority in science. The community of science is constantly self-critical, as evidenced by the experience of university colloquia "in which the speaker has hardly gotten 30 seconds into the talk before there are devastating questions and comments from the audience." If Sagan really wants to hear serious disputation about the nature of the universe, he should leave the academic precincts in Ithaca and spend a few minutes in an Orthodox study house in Brooklyn. It is certainly true that within each narrowly defined scientific field there is a constant challenge to new technical claims and to old wisdom. In what my wife calls the Gunfight at the O.K. Corral Syndrome, young scientists on the make will challenge a graybeard, and this adversarial atmosphere for the most part serves the truth. But when scientists transgress the bounds of their own specialty they have no choice but to accept the claims of authority, even though they do not know how solid the grounds of those claims may be. Who am I to believe about quantum physics if not Steven Weinberg, or about the solar system if not Carl Sagan? What worries me is that they may believe what Dawkins and Wilson tell them about evolution.
Monday, May 6, 2013
Answer to an E-mail
I don't expect to have a high number of readers, because reading one of my posts takes some commitment of time and attention. From what I've seen, that doesn't make for a popular blog. Given that, after more than six years of doing this, I remain, as they say, a "rank amateur," I'm pleasantly surprised anyone reads what I post. Given my editing, it might take a bit of deciphering, at times, as well. I know they're not coming for cute pictures and silly cartoons, cookie-cutter remarks about pop culture, or celebrity gossip. Though if I shared what I was told about....
OK, well, never mind that.
I consider my readers to be braver and stronger than I am.
Update: Response to a further e-mail
Why, yes. That was open flattery. I figure I should let people know I understand the demands these long posts make.
How Scientists Trick Themselves And Us Into Believing A Tiny Focus Shows The Entire Reality
Before the passage that follows, Joseph Weizenbaum recalls the joke in which a policeman encounters a drunk on his hands and knees under a lamp post. He asks him what he's doing.
"Looking for my keys."
"There aren't any keys here. Where did you drop them."
"Over there." He points in the dark.
"Why are you looking over here then?"
" Because the light is so much better."
That's how I was told the joke, anyway.
He concentrates on computer scientists who mistake their limited focus for an entire system, but throughout science, up to even the cosmologists, they clearly mistake it for the entire universe. More about that later. For now, I couldn't possibly say it better than Weizenbaum did:
Science can proceed only by simplifying reality. The first step in its process of simplification is abstraction. And abstraction means leaving out of account all those empirical data which do not fit the particular conceptual framework in which science at the moment happens to be working, which, in other words, are not illuminated by the light of the particular lamp under which science happens to be looking for keys. Aldous Huxley remarked on this matter with considerable clarity:
" Pragmatically [scientists] are justified in acting in this odd and extremely arbitrary way; for by concentrating exclusively on the measurable aspects of such elements of experience as can be explained in terms of a causal system they have been able to achieve a great and ever increasing control over the energies of nature. But power is not the same thing as insight and, as a representation of reality, the scientific picture of the world is inadequate for the simple reason that science does not even profess to deal with experience as a whole, but only with certain aspects of it in certain contexts. All this is quite clearly understood by the more philosophically minded men of science. But unfortunately some scientists, many technicians, and most consumers of gadgets have lacked the time and inclination to examine the philosophical foundations and background of the sciences. Consequently they tend to accept the world picture implicit in the theories of science as a complete and exhaustive account of reality; they tend to regard those aspects of experience which scientists leave out of account, because they are incompetent to deal with them, as being somehow less real than the aspects which science has arbitrarily chosen to abstract from out of the infinitely rich totality of given facts."
One of the most explicit statements of the way in which science deliberately and consciously plans to distort reality, and then goes on to accept that distortion as a "complete and exhaustive" account, is that of the computer scientist Herbert A. Simon, concerning his own fundamental theoretical orientation:
" An ant, viewed as a behaving system, is quite simple. The apparent complexity of it s behavior over time is largely a reflection of the complexity of the environment in which it finds itself ... the truth or falsity of [this] hypothesis should be independent of whether ants, viewed more microscopically are simple or complex systems. At the level of cells or molecules, ants are demonstrably complex; but these microscopic details of the inner environment may be largely irrelevant to the ant's behavior in relation to the outer environment. That is why an automaton, though completely different at the microscopic level, might nevertheless simulate the ant's gross behavior...
" I should like to explore this hypothesis, but with the word 'man' substituted for 'ant'.
" A man, viewed as a behaving system is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself... I myself believe that the hypothesis holds even for the whole man."
With a single stroke of the pen, by simply substituting "man" for "ant," the presumed irrelevancy of the microscopic details of the ant's inner environment to its behavior has been elevated to the irrelevancy of the whole man's inner environment to his behavior! Writing 23 years before Simon, but as if Simon's words were ringing in his ears, Huxley states:
"Because of the prestige of science as a source of power, and because of the general neglect of philosophy, the popular Weltanschauung of our times contains a large element of what may be called 'nothing-but' thinking. Human beings, it is more or less tacitly assumed, are nothing but bodies, animals, even machines ... values are nothing but illusions that have somehow got themselves mixed up in our experience of the world; mental happenings are nothing but epiphenomena... spirituality is nothing but ... and so on."
Except, of course, that here we are not dealing with the "popular" Weltanschauung, but with that of one of the most prestigious of American scientists. Nor is Simon's assumption of what is irrelevant to the whole man's behavior "more or less tacit"; to the contrary, he has, to his credit, made it quite explicit.
Simon also provides us with an exceptionally clear and explicit description of how, and how thoroughly, the scientist prevents himself from crossing the boundary between the circle of light cast by his own presuppositions and the darkness beyond. In discussing how he went about testing the theses that underlie his hypothesis, i.e. that man is quite simple, etc., he writes:
"I have surveyed some of the evidence from a range of human performances, particularly those that have been studied in the psychological laboratory.
The behavior of human subjects in solving cryptarithmetic problems, in attaining concepts, in memorizing, in holding information in short-term memory, in processing visual stimuli, and in performing tasks that use natural languages provides strong support for these theses... generalizations about human thinking... are emerging from the experimental evidence. They are simple things, just as our hypothesis led us to expect. Moreover, though the picture will continue to be enlarged and clarified, we should not expect it to become essentially more complex. Only human pride argues that the apparent intricacies of our path stem from quite different sources than the intricacy of the ant's path."
The hypothesis to be tested here is, in part, that the inner environment of the whole man is irrelevant to his behavior. One might suppose that, in order to test it, evidence that might be able to falsify it would be sought. One might, for example, study man's behavior in the face of grief or of a profound religious experience. But these examples do not easily lend themselves to the methods for the study of human subjects developed in psychological laboratories. Nor are they likely to lead to the simple things an experimenter's hypotheses lead him to expect. They lie in the darkness in which the theorist, in fact, has lost his keys, but the light is so much better under the lamppost he himself has erected.
There is thus no chance whatever that Simon's hypothesis will be falsified by his or his colleagues' minds. The circle of light that determines and delimits his range of vision simply does not illuminate any areas in which questions of, say, values or subjectivity can possibly arise. Questions of that kind, being, as they must be, entirely outside his universe of discourse, can therefore not lead him out of his conceptual framework, which like all other magical explanatory systems, has a ready reserve of possible hypotheses available to explain any conceivable event.
Almost the entire enterprise that is modern science and technology is afflicted with the drunkard's search syndrome and with the myopic vision which is its direct result. But, as Huxley has pointed out, this myopia cannot sustain itself without being nourished by experiences of success. Science and technology are sustained by their translations into power and control. To the extent that computers and computation may be counted as part of science and technology, they feed at the same table. The extreme phenomenon of the compulsive programmer teaches us that computers have the power to sustain megalomaniac fantasies. But that power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. Perhaps we are beginning to understand that the abstract systems - the games computer people can generate in their infinite freedom from the constraints that delimit the dreams of workers in the real world - may fail catastrophically when their rules are applied in earnest. We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particularly in those sciences that insist they can capture the whole man in their abstract skeletal frameworks.
"There will be no chance that Simon's hypothesis will be falsified by his or his colleagues' minds." "Falsification" as the touchstone guaranteeing the presence of the quest of modern alchemy, "science," has been introduced into the popular imagination since Weizenbaum wrote what he did. You can encounter sci-rangers who demonstrate their profound ignorance of what the word means flashing it like iron pyrite all over the web. As this passage shows, the concept is little understood by even very sophisticated people, scientists included, perhaps especially. They certainly will be unaware that even among scientists and the philosophers of science, "Falsifiability" isn't granted the status of a fixed and uncontroversial truth.
Much of the misconception of science, as encountered among the sciency, is encompassed in this passage. You can read it and remember that it was written during the dawn of Sociobiology, as expounded by E.O. Wilson, the world's foremost expert on ants, and of the reduction of people into "lumbering robots" by the "evolutionary" psychology that quickly succeeded and overtook it. In that form, as popularized in "The Selfish Gene" and other radically reductionist popularizations, it has taken root among the would-be intelligentsia, both those in their own minds and those with advanced degrees. It governs how they see other peoples' lives and, astonishingly enough, their own experience.
The TV commercial says, "If love is a chemical reaction,...." but in the minds of many, perhaps most, college-educated folks these days, there is no "if" about it. I would say that if only they understood the radical abstraction and the profoundly limited focus of the psychological reductionism that is the origin of that superstition, they might not fall for it. The chain of assumptions that leads to that belief contains many ideological links, I would guess even more of them than the "Intelligent Design" effort contains. Only, ideological links based in materialism are invisible to all but rather rigorous reviewers of science, because the consideration of material entities is the subject matter of science.
In the 20th century and on to today, there has been an odd elevation of scientists, usually at the twilight of their productive careers in science, to the status of popular sage or, more often, oracle. These people declaim their prophecy on TV and YouTube to an eager lay public and to each other; scientists are as prone to falling for PR as anyone. Since the 1970s, these have been prophets of materialistic scientism; just about any of the big-name scientists whose names are recognized would fall into that category. The editing of popular culture, done mostly by non-scientists who are quite ga-ga over the glamorous cachet that science has, or by wannabes who gave up long ago, disappears scientists who don't teach that dogma. As real, working scientists forget that they are sampling a very limited part of human experience when they issue their doctrines and declare their universal efficacy, the rest of us are prone to doing the same thing, adding ignorant credulity to the mix.
"Looking for my keys."
"There aren't any keys here. Where did you drop them."
"Over there." He points in the dark.
"Why are you looking over here then?"
" Because the light is so much better."
That's how I was told the joke, anyway.
He concentrates on computer scientists who mistake their limited focus for an entire system but in all of science, up to even cosmologists, they clearly mistake it for the entire universe. More about that later. For now, I couldn't possibly say it better than Weizenbaum did
Science can proceed only by simplifying reality. The first step in its process of simplification is abstraction. And abstraction means leaving out of account all those empirical data which do not fit the particular conceptual framework in which science at the moment happens to be working, which, in other words, are not illuminated by the light of the particular lamp under which science happens to be looking for keys. Aldous Huxley remarked on this matter with considerable clarity:
" Pragmatically [scientists] are justified in acting in this odd and extremely arbitrary way; for by concentrating exclusively on the measurable aspects of such elements of experience as can be explained in terms of a causal system they have been able to achieve a great and ever increasing control over the energies of nature. But power is not the same thing as insight and, as a representation of reality, the scientific picture of the world is inadequate for the simple reason that science does not even profess to deal with experience as a whole, but only with certain aspects of it in certain contexts. All this is quite clearly understood by the more philosophically minded men of science. But unfortunately some scientists, many technicians, and most consumers of gadgets have lacked the time and inclination to examine the philosophical foundations and background of the sciences. Consequently they tend to accept the world picture implicit in the theories of science as a complete and exhaustive account of reality; they tend to regard those aspects of experience which scientists leave out of account, because they are incompetent to deal with them, as being somehow less real than the aspects which science has arbitrarily chosen to abstract from out of the infinitely rich totality of given facts."
One of the most explicit statements of the way in which science deliberately and consciously plans to distort reality, and then goes on to accept that distortion as a "complete and exhaustive" account, is that of the computer scientist Herbert A. Simon, concerning his own fundamental theoretical orientation:
" An ant, viewed as a behaving system, is quite simple. The apparent complexity of it s behavior over time is largely a reflection of the complexity of the environment in which it finds itself ... the truth or falsity of [this] hypothesis should be independent of whether ants, viewed more microscopically are simple or complex systems. At the level of cells or molecules, ants are demonstrably complex; but these microscopic details of the inner environment may be largely irrelevant to the ant's behavior in relation to the outer environment. That is why an automaton, though completely different at the microscopic level, might nevertheless simulate the ant's gross behavior...
" I should like to explore this hypothesis, but with the word 'man' substituted for 'ant'.
" A man, viewed as a behaving system is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself... I myself believe that the hypothesis holds even for the whole man."
With a single stroke of the pen, by simply substituting "man" for "ant," the presumed irrelevancy of the microscopic details of the ant's inner environment to its behavior has been elevated to the irrelevancy of the whole man's inner environment to his behavior! Writing 23 years before Simon, but as if Simon's words were ringing in his ears, Huxley states;
"Because of the prestige of science as a source of power, and because of the general neglect of philosophy, the popular Weltanschauung of our times contains a large element of what may be called 'nothing-but' thinking. Human beings, it is more or less tacitly assumed, are nothing but bodies, animals, even machines ... values are nothing but illusions that have somehow got themselves mixed up in our experience of the world; mental happenings are nothing but epiphenomena... spirituality is nothing but ... and so on."
Except, of course, that here we are not dealing with the "popular" Weltanschauung, but with that of one of the most prestigious of American scientists. Nor is Simon's assumption of what is irrelevant to the whole man's behavior " more or less tacit"; to the contrary, he has, to his credit, made it quite explicit.
Simon also provides us with an exceptionally clear and explicit description of how, and how thoroughly, the scientist prevents himself from crossing the boundary between the circle of light cast by his own presuppositions and the darkness beyond. In discussing how he went about testing the theses that underlie his hypothesis, i.e. that man is quite simple, etc., he writes;
"I have surveyed some of the evidence from a range of human performances, particularly those that have been studied in the psychological laboratory.
The behavior of human subjects in solving cryptarithmetic problems, in attaining concepts, in memorizing, in holding information in short-term memory, in processing visual stimuli, and in performing tasks that use natural languages provides strong support for these theses... generalizations about human thinking... are emerging from the experimental evidence. They are simple things, just as our hypothesis led us to expect. Moreover, though the picture will continue to be enlarged and clarified, we should not expect it to become essentially more complex. Only human pride argues that the apparent intricacies of our path stem from quite different sources than the intricacy of the ant's path."
The hypothesis to be tested here is, in part, that the inner environment of the whole man is irrelevant to his behavior. One might suppose that, in order to test it, evidence that might be able to falsify it would be sought. One might, for example, study man's behavior in the face of grief or of a profound religious experience. But these examples do not easily lend themselves to the methods for the study of human subjects developed in psychological laboratories. Nor are they likely to lead to the simple things an experimenter's hypotheses lead him to expect. They lie in the darkness in which the theorist, in fact, has lost his keys' but the light is so much better under the lamppost he himself has erected.
There is thus no chance whatever that Simon's hypothesis will be falsified by his or his colleagues' minds. The circle of light that determines and delimits his range of vision simply does not illuminate any areas in which questions of, say, values or subjectivity can possibly arise. Questions of that kind, being, as they must be, entirely outside his universe of discourse, can therefore not lead him out of his conceptual framework, which like all other magical explanatory systems, has a ready reserve of possible hypotheses available to explain any conceivable event.
Almost the entire enterprise that is modern science and technology is afflicted with the drunkard's search syndrome and with the myopic vision which is its direct result. But, as Huxley has pointed out, this myopia cannot sustain itself without being nourished by experiences of success. Science and technology are sustained by their translations into power and control. To the extent that computers and computation may be counted as part of science and technology, they feed at the same table. The extreme phenomenon of the compulsive programmer teaches us that computers have the power to sustain megalomaniac fantasies. But that power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. Perhaps we are beginning to understand that the abstract systems - the games computer people can generate in their infinite freedom from the constraints that delimit the dreams of workers in the real world - may fail catastrophically when their rules are applied in earnest. We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particular in those sciences that insist they can capture the whole man in the abstract skeletal frameworks.
"There will be no chance that Simon's hypothesis will be falsified by his or his colleagues' minds." "Falsification" as the touchstone guaranteeing the presence of the quest of modern alchemy, "science," has been introduced into the popular imagination since Weizenbaum wrote what he did. You can encounter sci-rangers who demonstrate their profound ignorance of what the word means flashing it like iron pyrite all over the web. As this passage shows, the concept is little understood by even very sophisticated people, scientists included, perhaps especially. They certainly will be unaware that even among scientists and the philosophers of science, "Falsifiability" isn't granted the status of a fixed and uncontroversial truth.
Much of the misconception of science, as encountered among the sciency, is encompassed in this passage. You can read it and remember it was written during the dawn of Sociobiology, as expounded by E.O. Wilson, the world's foremost experts in ants and the reduction of people into "lumbering robots" by the "evolutionary" psychology that quickly succeeded it and overtook it. In that form, as popularized in "The Selfish Gene" and other radically reductionist popularization, it has taken root among the would-be intelligentsia, both those in their own minds and those with advanced degrees. It governs how they see other peoples' lives and, astonishingly enough, their own experience.
The TV commercial says, "If love is a chemical reaction,...." only in the minds of many, perhaps most, college educated folks these days, there is no if about it. I would say if only they understood the practice of the radical abstraction and the profoundly limited focus of the psychological reductionist practice that is the origin of that superstition, they might not fall for it. The chain of assumptions that leads to that belief contains many links that are ideological, I would guess even more of them than the "Intelligent Design" effort might. Only, ideological links that are based in materialism are invisible to any but rather rigorous reviewers of science because the consideration of material entities is the subject matter of science.
In the 20th century and on to today, there has been an odd form of elevation of scientists, usually at the twilight of their productive career in science, to the status of popular sage or, more often, oracle. These people declaim their prophesy on TV and YouTube to an eager lay public and to each other, scientists are as prone to falling for PR as anyone. Since the 1970s, these have been prophets of materialistic scientism, just about any of the big name scientists whose names are recognized would fall into that category. The editing of popular culture, done mostly by non-scientists who are quite ga-ga with the glamorous cache that science has or by wannabees who gave up long ago, disappears scientists who don't teach that dogma. As real, working scientists forget that they are sampling a very limited amount of human experience to issue their doctrines and declaring their universal efficacy, the rest of us are prone to doing the same thing, adding ignorant credulity to the mix.
"... scientific demonstrations, even mathematical proofs, are fundamentally acts of persuasion" : More from Computer Power and Human Reason by Joseph Weizenbaum
It may be that human values are illusory, as indeed B. F. Skinner argues. If they are, then it is presumably up to science to demonstrate that fact, as indeed Skinner (as scientist) attempts to do. But then science must itself be an illusory system. For the only certain knowledge science can give us is knowledge of the behavior of formal systems, that is, systems that are games invented by man himself and in which to assert truth is nothing more or less than to assert that, as in a chess game, a particular board position was arrived at by a sequence of legal moves. When science purports to make statements about man's experiences, it bases them on identifications between the primitive (that is, undefined) objects of one of its formalisms, the pieces of one of its games, and some set of human observations. No such sets of correspondences can ever be proved to be correct. At best, they can be falsified, in the sense that formal manipulations of a system's symbols may lead to symbolic configurations which, when read in the light of the set of correspondences in question, yield interpretations contrary to empirically observed phenomena. Hence all empirical science is an elaborate structure built on piles that are anchored, not on bedrock as is commonly supposed, but on the shifting sand of fallible human judgment, conjecture, and intuition. It is not even true, again contrary to common belief, that a single purported counter-instance that, if accepted as genuine, would certainly falsify a specific scientific theory generally leads to the immediate abandonment of that theory. Probably all scientific theories currently accepted by scientists themselves (excepting only those purely formal theories claiming no relation to the empirical world) are today confronted with contradicting evidence of more than negligible weight that, again if fully credited, would logically invalidate them. Such evidence is often explained (that is, explained away) by ascribing it to error of some kind, say, observational error, or by characterizing it as inessential, or by the assumption (that is, the faith) that some yet-to-be-discovered way of dealing with it will some day permit it to be acknowledged but nevertheless incorporated into the scientific theories it was originally thought to contradict. In this way scientists continue to rely on already impaired theories and to infer "scientific fact" from them.
The man on the street surely believes such scientific facts to be as well-established, as well-proven, as his own existence. His certitude is an illusion. Nor is the scientist himself immune to the same illusion. In his praxis, he must, after all, suspend disbelief in order to do or think anything at all. He is rather like a theatergoer, who in order to participate in and understand what is happening on the stage, must for a time pretend to himself that he is witnessing real events. The scientist must believe his working hypothesis, together with its vast underlying structure of theories and assumptions, even if only for the sake of the argument. Often the "argument" extends over his entire lifetime. Gradually he becomes what he at first merely pretended to be: a true believer. I choose the word "argument" thoughtfully, for scientific demonstrations, even mathematical proofs, are fundamentally acts of persuasion.
Scientific statements can never be certain; they can be only more or less credible. And credibility is a term in individual psychology, i.e., a term that has meaning only with respect to an individual observer. To say that some proposition is credible is, after all, to say that it is believed by an agent who is free not to believe it, that is, by an observer who, after exercising judgment and (possibly) intuition, chooses to accept the proposition as worthy of his believing it. How then can science, which itself surely and ultimately rests on vast arrays of human value judgments, demonstrate that human value judgments are illusory? It cannot do so without forfeiting its own status as the single legitimate path to understanding man and his world.
But no merely logical argument, no matter how cogent or eloquent, can undo this reality: that science has become the sole legitimate form of understanding in the common wisdom. When I say that science has been gradually converted into a slow-acting poison, I mean that the attribution of certainty to scientific knowledge by the common wisdom, an attribution now made so nearly universally that it has become a commonsense dogma, has virtually delegitimatized all other ways of understanding. People viewed the arts, especially literature, as sources of intellectual nourishment and understanding, but today the arts are perceived largely as entertainments. The ancient Greek and Oriental theaters, the Shakespearean stage, the stages peopled by the Ibsens and Chekhovs nearer to our day - these were schools. The curricula they taught were vehicles for understanding the societies they represented. Today, although an occasional Arthur Miller or Edward Albee survives and is permitted to teach on the New York or London stage, the people hunger only for what is represented to them to be scientifically validated knowledge. They seek to satiate themselves at such scientific cafeterias as Psychology Today, or on popularized versions of the works of Masters and Johnson, or on Scientology as revealed by L. Ron Hubbard. Belief in the rationality-logicality equation has corroded the prophetic power of language itself. We can count, but we are rapidly forgetting how to say what is worth counting and why.
As you read that, I hope you took into account that it was written in the mid-1970s, and some then-current aspects of pop culture have given way to others. You can choose your 2013 counterparts, but I would certainly replace B. F. Skinner with Richard Dawkins - just as Skinner's defunct behaviorism has been supplanted by Dawkins' evo-psy. Finding the counterpart for Hubbard is a bit more difficult, though not because there is only one obvious candidate. Arthur Miller is, of course, dead, though I believe Albee is still with us, and as recently as last year he was quite articulate about, among other things, the further decline of the theater.
I can only wonder what Joseph Weizenbaum would have made of the foremost force for the scientism he warned against: "science" blogs. Since most of his late life was spent in Germany and his later thinking is unavailable in English, perhaps he addressed them.
What Weizenbaum had to say is, if anything, far, far more true today. I would hold that it is particularly true of what is officially denominated liberal politics. Looking back, I would date the late 60s and 1970s as the turning point, when liberal politics, dominated by the enormous moral force of Reverend King and the largely religious, and effective, civil rights movement and early anti-war movement, gave way to the anti-religious, "scientific" "left" that began replacing them at that time. That liberal politics reached its height of influence during the other-than-liberal Johnson and Nixon administrations testifies to the strength of that now lost liberalism. As the Clinton and Obama administrations prove, the "liberalism" of today doesn't even have the power to move the law when its party holds the entire government.
The media, the foremost beneficiaries of the form of libertarianism that posed as liberalism, a form largely concentrated in the elite press and among those indifferent, if not actually hostile, to religion, have proven they will sell out the genuine ideals of liberalism for fame and fortune. The list of putative liberals or leftists of that era who have jumped to what is universally recognized as "the right" is impressively longer than the list of those who have jumped the other way. I think it would be useful to come to a better understanding of that phenomenon, which I'd call something like "the Hentoff-Hitchens effect".
UPDATE: From The Lexicon of Popular Atheist Locutions
Word salad: One of a number of pat statements signaling that something is too complex or too long for the post-literate-era atheist to understand. As used, it is a variation on the logical positivist practice of declaring any statement not in accord with their ideological framing to be meaningless, though "word salad" is generally far less skillfully deployed.
If "word salad" is used to denote a passage that actually is nonsense, it carries the danger that the user will be suspected of the low level of reading comprehension the phrase has earned from those who use it most often. It is more accurate to say "that is nonsense". Unlike "word salad", that phrasing doesn't carry the presumption that the one doing the dismissing is immune from having to say why they have said it.
See also: But that's haaarrrrrd!