Blockheads and Other Brutes

Ned Block defined a system known today as a Blockhead (“Troubles with Functionalism”, Minnesota Studies in the Philosophy of Science) to illustrate a problem that look-up tables pose for the Turing Test. Blockhead is a “stupid” machine that stores all possible conversations within some limited duration and thus passes the Turing Test. This is, of course, physically impossible, as Frank Tipler argues with back-of-the-envelope calculations in The Physics of Immortality:
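
To make the construction concrete, here is a minimal sketch (mine, not Block's) of a Blockhead as a look-up table: every possible conversation history up to some length is a key, and the stored reply is the value. The names (`blockhead_table`, `respond`) and the two hardcoded entries are purely illustrative.

```python
# A toy Blockhead: a look-up table from conversation histories to replies.
# In Block's thought experiment the table covers EVERY sensible conversation
# up to some fixed length; here we hardcode a tiny fragment to show the shape.

blockhead_table = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Hi there.", "How are you?"): "Quite well, thanks.",
}

def respond(history):
    """Return the canned reply for this exact conversation history.

    No inference, no generalization: if the history is not a key, this
    toy fails with a KeyError. Block's Blockhead never fails, because
    by stipulation its table contains every possible history up to the
    time limit.
    """
    return blockhead_table[tuple(history)]

print(respond(["Hello."]))                               # Hi there.
print(respond(["Hello.", "Hi there.", "How are you?"]))  # Quite well, thanks.
```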

… the human brain can code as much as 10^15 bits is correct, then since an average book codes about 10^6 bits, it would require more than 100 million books to code the human brain. It would take at least thirty five-story main university libraries to hold this many books. We know from experience that we can access any memory in our brain in about 100 seconds, so a hand simulation of a Turing Test-passing program would require a human being to be able to take off the shelf, glance through, and return to the shelf all of these 100 million books in 100 seconds. If each book weighs about a pound (0.5 kilograms), and on the average the book moves one yard (one meter) in the process of taking it off the shelf and returning it, then in 100 seconds the energy consumed in just moving the books is 3 x 10^19 joules; the rate of energy consumption is 3 x 10^11 megawatts. Since a human uses energy at a normal rate of 100 watts, the power required is the bodily power of 3 x 10^15 human beings, about a million times the current population of the entire earth. A typical large nuclear power plant has a power output of 1,000 megawatts, so a hand simulation of the human program requires a power output equal to that of 300 million large nuclear power plants. As I said, a man can no more hand-simulate a Turing Test-passing program than he can jump to the Moon. In fact, it is far more difficult.
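
Tipler's figures check out. A few lines of Python (my reconstruction of his arithmetic, not his) reproduce them: if 10^8 books must each be moved one meter within the 100-second window, each move gets a microsecond, the books fly at 10^6 m/s, and the kinetic energy alone matches his totals.

```python
# Reproducing Tipler's back-of-the-envelope arithmetic.
books = 1e8            # books to code ~1e15 bits at ~1e6 bits/book (Tipler's figure)
mass = 0.5             # kg per book
distance = 1.0         # m moved per book
window = 100.0         # s allowed for the whole hand simulation

time_per_book = window / books            # 1e-6 s per book
speed = distance / time_per_book          # 1e6 m/s
energy = books * 0.5 * mass * speed**2    # kinetic energy: ~2.5e19 J ("3 x 10^19")
power = energy / window                   # ~2.5e17 W = 2.5e11 MW ("3 x 10^11 MW")

print(f"energy ~ {energy:.1e} J")         # ~2.5e19 J
print(f"power  ~ {power:.1e} W")          # ~2.5e17 W
print(f"humans ~ {power / 100:.1e}")      # vs. 100 W per person: ~2.5e15 people
print(f"plants ~ {power / 1e9:.1e}")      # vs. 1,000 MW plants: ~2.5e8 plants
```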

In typical philosophical fashion, we suspend the laws of physics and look to conceivability and logical possibility to analyze this scenario. Given that Blockheads are logically possible, what does this say about the Turing Test? Should we withhold the attribution of intelligence from Blockhead merely because he uses brute-force methods? It is difficult to know how to respond. When we are trying to break a candidate program in the Turing Test, we ask it questions that, while coherent, have high surprisal (a notion made concrete in the sketch below). This strategy defeats look-up tables and canned responses, the hallmarks of poorly designed AI. In Blockhead's case, however, it is stipulated that the table contains all of the possible answers, so the strategy is useless. We are left with a system that passes the Turing Test by abhorrent methods. Is this really a problem?

Let's consider the larger set of agents I call Brutes (for brute force). Brutes use algorithms that are foreign to ours or, in general, unappealing in their processes: look-up tables, randomized answers, Leibnizian monsters whose monadic processes never talk to one another or receive any input, and so on. By fiat, however, Brutes pass the Turing Test. Now suppose that, after a neuroimaging breakthrough, we discover that some percentage of the population uses abhorrent algorithms in their thinking. Are we then to discount them as mere Brutes, not truly thinking, feeling people? To do so would be ridiculous. Now suppose that some percentage of the population uses nothing but abhorrent algorithms. Can we dismiss them? Again, I do not think so. To maintain otherwise seems to me to be a result of arbitrary algorithmic chauvinism, or of taking consciousness and intelligence to be an essence that exists apart from function: the same intuition that allows philosophers to take zombie arguments seriously and succumb to the quagmire of privileged access.
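
An aside on the interrogation strategy mentioned above: "surprisal" is the information-theoretic quantity -log2 p(x). A crude sketch (toy probabilities and names of my own invention, not a real interrogator) shows why an improbable-but-coherent question falls outside any finite canned-response table, while Blockhead, whose table is total by stipulation, is immune.

```python
import math

# Toy interrogator: prefer coherent questions the candidate is unlikely
# to have canned answers for. Probabilities here are invented for illustration.
question_prob = {
    "How are you?": 0.20,
    "What is your name?": 0.15,
    "Would a melted snowman make a good pet for a dragon?": 1e-9,
}

canned = {"How are you?": "Fine, thanks.", "What is your name?": "I'm Alice."}

def surprisal(question):
    """Surprisal in bits: -log2 of the question's probability."""
    return -math.log2(question_prob[question])

for q in question_prob:
    print(f"{surprisal(q):6.1f} bits | canned answer: {q in canned} | {q}")

# The high-surprisal question (~29.9 bits) has no entry in the finite table;
# only Block's stipulated total table would still have an answer for it.
```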

In a way, talk of Brutes and Blockheads just misses the point of the Turing Test, which is to dissociate the methods, substrate, and appearance of the candidate system from the difference that makes a difference: behavior.

I do not see how we can hold an agent’s (or potential agent’s) algorithms against them.
