Brian Weatherson over at Crooked Timber wrote, in short:
One of my quirkier philosophical views is that the most pressing question in metaphysics, and perhaps all of philosophy, is how to distinguish between disjunctive and non-disjunctive predicates in the special sciences. This might look like a relatively technical problem of no interest to anyone. But I suspect that the question is important to all sorts of issues, as well as being one of those unhappy problems that no one seems to even have a beginning of a solution to. One of the issues that it’s important to was raised by Brad DeLong yesterday. He was wondering why John Campbell might accept the following two claims.
- There is an important and unbridgeable gulf between our notions of physical causation and our notions of psychological causation.
- Martian physicists–intelligences vast, cool, and unsympathetic with no notions of human psychology or psychological causation–could not understand why, could not put their finger on physical variables and factors explaining why, the fifty or so of us assemble in the Seaborg Room Monday at lunch time during the spring semester.
I don’t know why Campbell accepts these claims. And I certainly don’t want to accept them. But I do know of one good reason to accept them, one that worries me no end some days. The short version involves the conjunction of the following two claims.
- Understanding a phenomenon involves being able to explain it in relatively broad, but non-disjunctive, terms.
- Just what terms are non-disjunctive might not be knowable to someone who only knows what the Martian physicists know, namely the microphysics of the universe.
Broader explanations are better as long as the terms they use are not disjunctive. The idea that some terms are disjunctive and others aren’t goes back at least to Goodman’s Fact, Fiction and Forecast. Goodman famously defined up a new term grue. Something is grue, I’ll say, iff it is green and observed or blue and unobserved. As Goodman noted, observing lots of emeralds and seeing they are all grue provides us with no reason to think the next emerald we see will be grue. This kind of simple induction doesn’t work when dealing with terms like ‘grue’. Various authors, most importantly David Lewis, have argued that the distinction Goodman pointed towards, between disjunctive terms like ‘grue’ and non-disjunctive terms like ‘green’, has many implications across philosophy. Following tradition, I’ll call the ‘grue’-like terms gruesome, and ‘green’-like terms natural. (And I’ll often suppress the fact that the difference between gruesomeness and naturalness is a matter of degree, as there is a spectrum of cases in the middle.)
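Goodman’s definition is mechanical enough to write down. Here is a minimal sketch of my own (the `observed` flag is an invented stand-in for “observed so far”) showing why finite evidence confirms ‘green’ and ‘grue’ equally well:

```python
def is_green(color):
    return color == "green"

def is_grue(color, observed):
    # Goodman-style: green and observed, or blue and unobserved.
    return (color == "green" and observed) or (color == "blue" and not observed)

# Every emerald examined so far is green and observed...
evidence = [("green", True)] * 100

# ...and the very same evidence satisfies both predicates, so simple
# enumeration of cases cannot favor "green" over "grue".
assert all(is_green(color) for color, _ in evidence)
assert all(is_grue(color, observed) for color, observed in evidence)

# Yet the two predicates diverge on the not-yet-observed case:
assert not is_green("blue")
assert is_grue("blue", False)
```

The point of the sketch is only that the evidence set underdetermines the choice of predicate; nothing in the data itself marks one disjunct as gerrymandered.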
You can read his entire post here. Provoked, I posted the following response:
IMHO, “naturalness”, like Goodman’s projectibility, is a philosophical nonstarter: there is no logically prescribed language from which to judge the disjunctiveness of predicates. Another, mathematical, way of saying this is that predicate encodings are not invariant in any non-question-begging way (specifically, they are homeomorphic).
Rather, the existence of a discontinuity, in the mathematical sense, between the micro- and macro-levels presents a real problem for reductionists, because discontinuity implies uncomputability (trust me). However, I am skeptical that there are any uncomputable macro-predicates in the special sciences, though this belief is subject to an interesting paradox, as its determination is, itself, uncomputable.
Another interesting notion of emergence is information theoretic and bound together with questions of computational intractability and complexity. In contrast to the uncomputable case, which I take to be ontological, the information theoretic approaches are pragmatic.
Sorry if the above sounds obtuse, but I haven’t the time to elaborate at the moment.
Bedau, M., “Weak Emergence”. In James Tomberlin, ed., Philosophical Perspectives: Mind, Causation, and World, pp. 375–399. Blackwell Publishers. ISBN 0631207937.
Boschetti & Gray, “Emergence and Computability”, journal paper, to be submitted to Emergence: Complexity and Organization.
Kelly, K. & Glymour, C., “Why You’ll Never Know Whether Roger Penrose Is a Computer”, Behavioral and Brain Sciences, 13(4), Dec. 1990.
Brian didn’t directly respond to my post, but he did clarify his philosophical concerns:
What I’m interested in is why it’s true (assuming it is true) that some weakenings of the explanation (from hit with a stone to hit with a projectile for example) make explanations deeper, but some weakenings (from hit to hit or melted for example) make explanations worse.
To which I replied:
What you call weakenings sounds like the addition of parameters (and thus wider compatibility with possible worlds) in the distasteful case, and the generalizing of a parameter in the desirable one. If this is the case, there are interesting works addressing the subject that explain why the former is undesirable while the latter is desirable. Kevin Kelly, for instance, has an account of simplicity that jibes with this.
Full disclosure: I was, once-upon-a-time, a student of Kelly’s.
Again, no response from Brian.
A sharp and no doubt more mathematically proficient fellow than I, Michael Greinecker, replied, writing:
One can formulate “fundamental physics” using gruesome predicates, so Lewis’ way is of no use. The only possible way out I see is to determine the choice of predicates and laws jointly and impose some conditions on both of them together (something like a complexity index). The problem even comes up when learning a language. The Wittgenstein-Kripke puzzle of private language is basically a variant: if you tell me that emeralds are green, how can I know you didn’t want to tell me that emeralds are grue?
“Another, mathematical, way of saying this is that predicate encodings are not invariant in any non-question-begging way (specifically, they are homeomorphic).”
Whether they are homeomorphic depends on the topology chosen, and there is neither a natural predicate space nor a natural topology for it.
“Rather, the existence of a discontinuity, in the mathematical sense, between the micro- and macro-levels presents a real problem for reductionists, because discontinuity implies uncomputability (trust me).”
A discontinuity of a real function with the usual topology, that is. I don’t see the relevance of the problem here.
I think he is spot-on about Lewis and he called me on my lack of clarity, so I admitted as much and tried to be clearer about the mathematical objects in play:
Apologies for this digression…
You are completely right about homeomorphism being relative to topology, but whether or not there is a natural topology for representing problems is debatable. The topology used by Kelly captures levels of underdetermination, understood as levels of complexity on the Borel hierarchy. The topology he uses to represent problems is the Baire space.
He presents it this way:
Goodman’s point was that syntactic features invoked in accounts of relational support (e.g., “uniformity” of the input stream) are not preserved under translation, and hence cannot be objective, language-invariant features of the empirical problem itself. The solvability (and corresponding underdetermination) properties of the preceding problem persist no matter how one renames the inputs along the input streams (e.g., the feather [Baire space really] has the same branching structure whether the labels along the input streams are presented in the blue/green vocabulary or in the grue/bleen vocabulary). Both are immune, therefore, to Goodman-style arguments, as are all methodological recommendations based upon them. (from “The Logic of Success”)
From this rigmarole he develops a system whereby empirical problems may be classified into complexity classes corresponding to notions of decidable/verifiable/refutable with certainty/in n mind changes/in the limit/gradually. My point, along these lines, is that the coding does not matter to the computability of the reduction (more on this below).
A discontinuity of a real function with the usual topology, that is. I don’t see the relevance of the problem here.
Indeed, a discontinuity in a real function implies uncomputability. This becomes relevant to the current discussion if reduction is understood in the following Nagelian way: T reduces T’ just in case the laws of T’ are derivable from those of T. Derivability, then, is taken as ‘computably decided from’. That is, given microphysical facts, there exists a computable function mapping these facts to the psychological, or other special sciences. What is derivable is, of course, indexed to the capacities of the agent. In this case I suppose humans are Turing equivalent, though we can modify this assumption up or down (FSM or analog computer, say), and uncomputability/underivability will reassert itself. This, I think, presents an intrinsic problem for reduction relative to the capacities of an agent, whereas gruesomeness does not. Gruesomeness does, however, present a legitimate coding problem that can be treated information-theoretically, but that is another very long story.
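The fact being leaned on here is a standard theorem of computable analysis, which I would state schematically as:

```latex
f:\mathbb{R}\to\mathbb{R}\ \text{computable} \;\Rightarrow\; f\ \text{continuous},
\qquad\text{so, by contraposition,}\qquad
f\ \text{discontinuous} \;\Rightarrow\; f\ \text{not computable}.
```

Intuitively, a machine reading ever-better approximations of the input can only ever produce approximations of the output, and at a jump no finite-precision reading of the input settles which side of the jump you are on.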
You can find some relevant philosophical papers on my abortive attempt at a formal epistemology blog. It’s ugly, but it has some classics.
In trying to provoke Brian into a response and address his concerns, I wrote:
One last try at being relevant.
I have been assuming too much about the conversation being about the accounts of explanation available to us from philosophy of science (the deductive-nomological account, statistical relevance, etc.). In this mode the question is tinged with considerations like those I addressed earlier, and reduction is understood in a kind of Nagelian way: T reduces T’ just in case the laws of T’ are derivable from those of T. Then, in the case of microphysics to mind, T = our microphysical theory and T’ = our theory of mind. Further, the language of choice is not relevant to derivability, thus it is not relevant to reducibility. Further still, while the assumption of reducibility is quite useful, the epistemic determination of metaphysical reducibility is not decidable.
However, once a language of inquiry is fixed, though it is not philosophically special, there are plenty of desirable features of hypotheses and explanations that correspond to non-gruesomeness. In model selection, for example, the hypothesis that optimizes the tradeoff between simplicity and fit is proffered. Under the AIC criterion, philosophically endorsed by Elliott Sober and Malcolm Forster, the preferred model balances simplicity and fit while increasing predictive accuracy. Simplicity here refers to minimizing parameters; fit, to minimizing distance (actually Kullback-Leibler divergence) from observed values.
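For concreteness, here is a small sketch of my own (the data and the two toy models are invented for illustration) of how the AIC score, 2k − 2 ln L̂, trades parameter count against fit, assuming Gaussian errors so the maximized log-likelihood can be written in terms of the residual sum of squares:

```python
import math

def gaussian_aic(rss, n, k):
    # AIC = 2k - 2 ln(L_hat). For Gaussian errors the maximized
    # log-likelihood is -n/2 * (ln(2*pi*rss/n) + 1).
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * log_lik

# Toy data: a line plus a small nonlinear wiggle.
xs = [i / 10 for i in range(1, 31)]
ys = [2 * x + 0.05 * math.sin(7 * x) for x in xs]
n = len(xs)

# Model A: constant mean (parameters: mean, variance -> k = 2).
ybar = sum(ys) / n
rss_a = sum((y - ybar) ** 2 for y in ys)

# Model B: least-squares line (slope, intercept, variance -> k = 3).
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
rss_b = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

aic_a = gaussian_aic(rss_a, n, 2)
aic_b = gaussian_aic(rss_b, n, 3)
# Model B wins (lower AIC): its drop in residual error more than
# pays the penalty for its extra parameter.
```

The disjunctive move, by contrast, is like adding a parameter that buys almost no reduction in residual error: the penalty term 2k goes up while ln L̂ barely moves, so the score worsens.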
The generalization from hit with a stone to hit with a projectile, for example, abstracts away irrelevant features (leaving, e.g., an object with mass m and velocity v), but some “weakenings” (from hit to hit or melted, for example) make explanations worse by adding unnecessary parameters (e.g., a heat h for melting, perhaps).
For what it is worth, I would suggest an end-run around the disjunctive-predicates issue and address what it means to be a fruitful theory of reducibility and explanation in the scientific context. There is, by my lights, no non-question-begging account of a privileged language from which to judge naturalness. For instance, there is an evolutionary story about the veridical nature of our natural concepts, but this fails to provide suitable grounds for our concepts for several reasons, including natural selection being about good-enough concepts (survivable in a human, day-to-day, heuristic sense), not true concepts.
I fear none of this will appear relevant, as the discussion seems rooted in the post-Kripkean conceptual-analysis mode, where philosophical intuitions are plumbed for metaphysical implications without fleshing out their logical and mathematical features. I take this to be a doomed methodology, but that is another matter.
Perhaps I have been hopelessly warped into a methodological monomaniac by my time at CMU.
Brian has not responded, but Michael has a remaining question:
“This becomes relevant to the current discussion if reduction is understood in the following Nagelian way: T reduces T’ just in case the laws of T’ are derivable from those of T. Derivability, then, is taken as ‘computably decided from’. That is, given microphysical facts, there exists a computable function mapping these facts to the psychological, or other special sciences. What is derivable is, of course, indexed to the capacities of the agent. In this case I suppose humans are Turing equivalent, though we can modify this assumption up or down (FSM or analog computer, say), and uncomputability/underivability will reassert itself.”
My problem is another one: Why should the real line be a good model of science? If we are living in a discrete world, jumps wouldn’t be a problem for computability.
To which I replied:
Again you are correct; however, I am not claiming that the real line is the canonical model for science, nor am I claiming that “jumps” are somehow inherently computationally problematic. In my haste, I was simply being too imprecise about the sense of discontinuity I was using.
The universe may indeed be discrete, as the digital physics folks think. As Richard Feynman wrote in The Character of Physical Law:
It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.
Without begging the question, however, we are still stuck with the epistemic conundrum of discovery, whereby the discrete nature of the universe cannot be decided with certainty, but can be converged on in the limit. It is not so much about the real line representing the universe as it is about representing assessment and discovery complexity with whatever space is appropriate, as proven by representation theorems. As Nancy Cartwright wrote, “…the representation theorems for the concepts we offer in use in modern science that we find our best candidates for ‘constitutive principles’. These are the preconditions for the application of our concepts to empirical reality” (from “In Praise of the Representation Theorem”).
I am afraid that all of this takes us too far afield from Brian’s post. If you wish to discuss this further, please email me.
Brian has not addressed anything I wrote. I can understand, since it is a bit off topic and in a different tradition. Still, I was hoping for a lively exchange.
This all leaves me pretty unsatisfied and mindful of why I chose to leave academic philosophy. Or maybe it is just sour grapes.