Sunday, May 3, 2026

Three Cheers For Lizzie Velasquez - More On Controlling The Trolls

YESTERDAY THE ESTIMABLE LIZZIE VELASQUEZ posted this:


More power to you, Ms. Velasquez; I love your content.

Dawkins Again Settles For Trendslop In His Quest To Put The Final Nail In The Coffin Of God

The reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand.

Joseph Weizenbaum

RMJ HAS ALERTED me to Richard Dawkins' latest attention-getting exercise: announcing that he has interacted with an "artificial intelligence" program for three days and has declared it to be conscious.  Looking for where he made that declaration, an outlet called UnHerd, I found the beginning of his paywall-blocked article.

The Turing Test is shorthand for a 1950 thought experiment that the great mathematician, logician, computer-pioneer, and cryptographer Alan Turing (1912-1954) called the “Imitation Game”. He proposed it as an operational way in which the future might face up to the question: “Can machines think?”

The future has now arrived. And some people are finding it uncomfortable.

Modern commentators have tended to ignore the (incidental) details of Turing’s original game and rephrase his message in these terms: if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious. Let’s graduate the definition as follows: the more prolonged, rigorous and searching your interrogation, the stronger should be your conviction that an entity that passes the test is conscious.

When Turing wrote — and for most of the years since — it was possible to accept the hypothetical conclusion that, if a machine ever passed his operational test, we might consider it to be conscious. We were comfortably secure in the confidence that this was a very big if, kicked into future touch. However, the advent of large language models (LLM) such as ChatGPT, Gemini, Claude, and others has provoked a hasty scramble to move the goalposts. It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? “Well, er, perhaps, um… Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being…”

I will not tell you how I very easily overcame the paywall to read the rest of his article.  I wasn't even expecting my attempt to cut and paste that bit of it to yield the result, but it was similar to the way some got past the Department of "justice's" masking in the Epstein material.

Consider Dawkins' declaration that:

the more prolonged, rigorous and searching your interrogation, the stronger should be your conviction that an entity that passes the test is conscious.

Having read his account of his interrogation, I am quite unimpressed with its rigor and searchingness, not to mention its brevity.  I won't go into the, um, literary aspect of it, though I'd love to know what Marilynne Robinson ("In his new book, The God Delusion, he has turned the full force of his intellect against religion, and all his verbal skills as well, and his humane learning, too, which is capacious enough to include some deeply minor poetry.") would make of what the article says about his use of literature in his evaluation.

Given the widely noted fact that Turing came up with the idea of his test from a gender-bending parlor game, the entire thing is based on intentional deception and credulity.  You can read the paper in which he proposed it here.  For a popular description of it, with some more current observations on the idea and on Turing's context, you can read this.

Turing starts with a parlor game with a dash of gender fuckery: “It is played with three people, a man (A), a woman (B), and an interrogator (C). The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the two is the man and which is the woman.”

In this game, deception is the rule. Turing says that the woman is supposed to be honest. Her best strategy is to be herself, he explained. But the trick is for the man to perform as a woman: “It is A’s [the man’s] object in the game to try and cause C [the interrogator] to make the wrong identification.”

Then, Turing takes his gender-confusing game and adds an extra twist: “What will happen when a machine takes the part of A [the man] in the game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’ ”
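For readers who like to see the shape of the thing, the protocol described above can be sketched in a few lines of code. The players, their canned answers, and the judge here are invented stand-ins purely for illustration, not anything from Turing's paper:

```python
# A schematic of Turing's original Imitation Game: the interrogator puts
# questions to two hidden players, A (who tries to deceive) and B (who
# answers honestly), and must decide which is which from text alone.
# Turing's substitution then puts a machine in A's seat.

class Deceiver:
    """Player A: tries to cause the wrong identification."""
    def answer(self, question):
        return "I assure you, I am the woman."

class Honest:
    """Player B: Turing says her best strategy is to be herself."""
    def answer(self, question):
        return "I really am the woman; don't listen to A."

def imitation_game(a, b, questions, judge):
    # The interrogator sees only labeled transcripts, never the players.
    transcript = {
        "A": [(q, a.answer(q)) for q in questions],
        "B": [(q, b.answer(q)) for q in questions],
    }
    return judge(transcript)  # the interrogator's guess: "A" or "B"

verdict = imitation_game(Deceiver(), Honest(),
                         ["Are you the woman?"], judge=lambda t: "B")
print(verdict)
```

A judge who guesses at random is right half the time; Turing's question is whether a machine in A's seat can hold a careful judge near that chance level.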

I don't know if the UnHerd people were being intentionally amusing when, in the longer posted article, they referenced another bit of columnage by Dawkins,

 Suggested reading

Why men are different from women

By Richard Dawkins

but his citation of Turing's test, based on a gender-bending game, is kind of funny.  I mean, Dawkins lost his "Humanist of the Year" award for some comments interpreted as being anti-transgender.  And, as noted in the article just linked to, Dawkins has some real issues when it comes not only to trans women but to women in general.

Interesting as that is to consider, I think Dawkins' will to believe that an "AI" program is conscious is related to his career as currently Britain's most famous atheist.

I hadn't known that Dawkins was going to publish this when I noted the other day that consciousness has never been defined, even though it is the one thing we can know empirically, since our own consciousness is that through which we experience and believe and know anything at all.  I also noted that, despite this, more than a few hard-core materialist-atheists of the most scientistic kind, staring into the face of the "hard problem" that consciousness poses for materialism, scientism, and the atheist ideology based on them, have declared consciousness to be an illusion.  As I (and others) have pointed out, that move seeks to discredit the reality of consciousness by assigning it the status of a state of consciousness, which is what an illusion is.  I don't think Dawkins is that far gone, whether because his general philosophical insouciance leaves him unappreciative of the problem for his ideology or because he simply doesn't think it's important.  Though I do suspect he has a notion, as previous contenders for the status of Britain's most famous atheist have had, that his ideology cannot be secure so long as a conclusive materialist answer to that problem is pending.

As for even a rather sophisticated, college-credentialed celebrity thinking that machines can think, that's a conclusion many have been jumping to for more than half a century.  The early computer scientist Joseph Weizenbaum was famously horrified to find that many such people, including another materialist-atheist celebrity, Carl Sagan, mistook his unsophisticated chatbot ELIZA for a thinking entity.  From his still relevant and highly readable Computer Power And Human Reason:

Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language.  In my paper, I had tried to say that no general solution to the problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any such general solution.  But these conclusions were often ignored.  In any case, ELIZA was such a small and simple step.  Its contribution was, if any at all, only to vividly underline what many others had long ago discovered, namely, the importance of context in language understanding.  The subsequent, much more elegant, and surely more important work of Winograd in computer comprehension of English is currently being misinterpreted just as ELIZA was.  The reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand.  Surely, I thought, decisions made by the general public about emergent technologies depend much more on what that public attributes to such technologies than on what they actually are or can or cannot do.  If, as appeared to be the case, the public's attributions are wildly misconceived, then public decisions are bound to be misguided and often wrong.  Difficult questions arise out of these observations: what, for example, are the scientist's responsibilities with respect to making his work public?  And to whom (or what) is the scientist responsible?
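Weizenbaum's point about what a "small and simple step" ELIZA was is easy to appreciate in code: the whole trick is a handful of pattern-matching rules that reflect the user's own words back at them. The rules below are my own illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re
import random

# A minimal ELIZA-style responder: regex rules that echo the user's words
# back with first- and second-person terms swapped. No understanding of
# any kind is involved, only string substitution.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    # Swap pronouns so the echoed fragment sounds like a reply.
    return " ".join(SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, templates in RULES:
        m = pattern.search(sentence)
        if m:
            return random.choice(templates).format(reflect(m.group(1)))
    return "Please tell me more."  # default when no rule matches

print(respond("I am sad about my job"))
```

A program this shallow convinced educated people it understood them, which is the heart of Weizenbaum's warning about what audiences "strive to attribute" to technology they don't understand.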

My interaction with "AI," much of it involuntary and unwilling, leads me to the conclusion that it's just a souped-up search engine that copies and regurgitates content posted online.  Some have noted that it is an automated stealer of content, automated as well to repackage it, seemingly weighting material by its currency, that is, by how often it is clicked on in web searches, so it is probably also recycling the results of past automated searches instead of evaluating the quality of what it steals.  It has the same relationship with original thought that Temu has with original design.
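The "regurgitation" charge can be made concrete with a toy: a bigram generator whose every word-to-word transition is literally copied from its source text. Real large language models are vastly more complicated than this and do more than verbatim copying, so take this only as a cartoon of the statistical-parroting idea:

```python
import random
from collections import defaultdict

def build_bigrams(text):
    # Map each word to the list of words that followed it in the source.
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    # Walk the table: every transition emitted was literally present in
    # the source text; nothing new is ever "thought up."
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the machine can think and the machine can talk"
table = build_bigrams(corpus)
print(generate(table, "the"))
```

The output always reads as vaguely plausible recombination of the training text, which is the intuition behind calling such systems repackagers rather than thinkers.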

A far, far more sophisticated analysis, done by humans, of what the kind of "AI" Dawkins engaged with, based on "large language models," actually does can be found in a recent article from Harvard Business Review.

Leaders might assume that LLMs are able to offer a kind of unbiased, outside perspective. But new research found that leading LLMs have clear biases when it comes to strategy and consistently recommend strategies that align with modern managerial buzzwords and trends rather than context-specific strategic logic. This propensity for AI to opt for buzzy ideas over reasoned solutions is called “trendslop,” and leaders should beware of it warping their strategic planning. When using AI in strategic planning, leaders should: use it to expand options, not make choices; counteract known and potential biases; remain alert to changing biases; watch out for the hybrid trap; and not rely on context alone.

Perhaps because it concerns what academia takes as truly important, the amassing of money, they have put more thought into it than an ideological atheist would need to get away with in general culture.

What the "AI" fed back to the person asking the questions changed drastically when a few of the assumptions fed into it were changed, and what it fed back was consistently influenced not by logic but by what it was scooping up from wherever.

In our recent research, we found that leading LLMs have clear biases when it comes to strategy. They consistently recommend strategies that align with modern managerial buzzwords and trends rather than context-specific strategic logic. Across thousands of simulations, we saw LLMs almost uniformly select the same trendy strategies, regardless of context. We call the propensity for AI to opt for buzzy ideas over reasoned solutions “trendslop.” In the context of strategic analysis, we call this phenomenon, “strategy trendslop.”

Which, by the way, is what I think Dawkins did in much of both his scientific writing and in his "God Delusion" (the review from which the Robinson quote above comes) which, notably, had more citations of Douglas Adams than of the subject matter he purported to be refuting.  And Carl Sagan was known to do a bit of that, too.  Though I do think Sagan's work in his own field was probably far more founded in scientific observation than Dawkins' has been.

I don't think ideological atheists have a very good track record when it comes to judging machine intelligence.