Discussion about this post

Eric L:

It seems that your definition of dynamical relevance is congruent with interactionist dualism: it requires that consciousness be an additional, orthogonal thing that can be added to the base theory and that makes a difference when added. Is that fair?

The place where you lose me is here:

"Causal closure of the physical is a metaphysical condition, often put as the requirement that every physical effect has a sufficient physical cause. Here, “physical” designates the metaphysical category of physical things and properties. If a theory of consciousness postulates a new brain function that is part of this metaphysical category, it satisfies causal closure while still being dynamically relevant. Hence, there is no conflict here."

It is only dynamically relevant up until the moment you incorporate that new property into the theory! But that is a weird property to insist a theory of consciousness should have. The problem is clearer in this later quote:

"consciousness can be physical and also dynamically relevant. The simplest example is a theory of consciousness that postulates that consciousness is a brain function that neuroscience doesn’t know."

Here, the unknownness of the brain function seems to be essential to rendering it compatible with your criterion. If it were known, we could update our theory of neuroscience to include it, and then consciousness would not be dynamically relevant relative to that revised theory. And so... we should expect the revised theory to be wrong too, because consciousness should change something when added to it?

This is something I hear implicitly in a lot of the philosophical arguments around consciousness that seem intuitive to many people and seem off to me: a requirement that only a black box can be conscious, because to understand how it works is to have an alternative explanation to consciousness. So a computer program cannot be conscious, because we know why the AI does what it does: its sensors update its state, and then the CPU follows its instructions to turn that state into actions (implicitly, ergo it did this for a reason other than being conscious). But this is just what having an explanation looks like. (Or, if you've got other ideas of what having an explanation could look like, I'd love to hear some examples of the sort of thing where you fill in the part we don't know with some speculative bullshit that would fit the bill.) What if this sort of computation *is* the sort of thing that gives rise to consciousness? Then the question of dynamical relevance is ill-formed, because you can't ask how the theory changes with vs. without consciousness.
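To make the "sensors update state, CPU turns state into action" picture concrete, here's a minimal sketch in Python. It's purely illustrative and every name in it is made up, but it shows the shape of the explanation I'm pointing at: once you can write the loop down, every action has a mechanical story, and the black-box intuition says that story crowds consciousness out.

```python
# Toy sense-update-act loop (purely illustrative; all names are hypothetical).
# The point: each step has a mechanical explanation, and on the black-box view
# that transparency alone is what disqualifies the system from consciousness.

def read_sensors():
    # Stand-in for whatever the system perceives.
    return {"light": 0.7, "sound": 0.2}

def update_state(state, observation):
    # The state is just a running summary of what the sensors reported.
    state["last_obs"] = observation
    state["steps"] = state.get("steps", 0) + 1
    return state

def choose_action(state):
    # A fixed rule from state to action -- "the CPU follows the instructions."
    return "approach" if state["last_obs"]["light"] > 0.5 else "wait"

state = {}
for _ in range(3):
    state = update_state(state, read_sensors())
    action = choose_action(state)
# Every line above explains why the action happened; nothing further is needed.
```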

More generally, in any theory of consciousness where consciousness is explained in terms of other, more fundamental properties -- basically any theory of consciousness that isn't dualist -- you can't really ask the question of dynamical relevance as you formulate it, because you can't take consciousness out of the theory and leave it otherwise intact, so there is no comparison to make to see whether consciousness does something. If consciousness is explained by other things in the theory, then (theory - consciousness) is not well defined, so dynamical relevance is not well defined in a broad class of theories of consciousness that I don't think we should rule out!
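One way to make that concrete (a minimal sketch with made-up names, not anything from your post): if consciousness is a property defined in terms of the theory's state, rather than an extra ingredient in the dynamics, then "deleting" it changes nothing about how the system runs, so the comparison that dynamical relevance asks for has no second term.

```python
# A toy "theory": a state evolving under fixed dynamics f.
def f(x):
    return 0.9 * x + 1.0  # arbitrary dynamics, standing in for the base theory

def conscious(x):
    # Consciousness as a derived property of the state -- explained by the
    # theory, not added to it. It never appears inside f.
    return x > 5.0

def run(x0, steps=10):
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(f(trajectory[-1]))
    return trajectory

# Deleting conscious() from the program leaves run() byte-for-byte identical,
# so there is no distinct "(theory - consciousness)" trajectory to compare
# against: the with/without comparison is ill-defined, not merely a tie.
print(run(0.0))
```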

