Thursday, June 26, 2014

Why Philosophical Arguments about the Computational Mind Don’t Interest Me

And yet the idea of the computational mind does.

Philosophers have generated piles of arguments about whether, or in what way, the mind is computational. I’ve read some of these arguments, but not many. They’re not relevant to my interests, which, as many of you know, are very much about the mind as computer. After all, in my major theoretical and methodological set piece, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, the third proposition is “The form of a given work can be said to be a computational structure” and the fourth: “That computational form is the same for all competent readers.” I’m committed, and have been for years.

But the philosophical arguments, pro and con, have almost nothing to do with any of the various models that have been proposed and investigated through mathematical analysis and computer implementation. The philosophical arguments thus have no bearing on what interests me, the nuts and bolts of neuro-mental computation. For example, I read Searle’s (in)famous Chinese room argument when it was first published in 1980. Here’s how the Stanford Encyclopedia of Philosophy (SEP) summarizes it:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.
My response then, and now: And…? It simply doesn’t connect with anything you have to do to get a model up and running.

But then neither do the arguments in favor of computation. The SEP offers the following counterargument (one of several):
Consider a computer that operates in quite a different manner than the usual AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator reply asks us to suppose instead the program simulates the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese—every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese.
And…? Who cares?

Thinking about these arguments never got me anywhere useful. What I really wanted was to extend the work I’d done on Shakespeare’s Sonnet 129 (which I published first in MLN and then later in Language and Style) to Coleridge’s “Kubla Khan.” Those philosophical arguments are irrelevant to that task.

Now, since I have yet to extend that early work to “Kubla Khan,” I suppose I could take that as evidence that the computer model doesn’t work. THE computer model? Gimme a break! The fact that I’ve not been able to make something work is no reason to believe that it can’t work at all.

As Willard McCarty has said in various places, the models we can produce now are toys. We create them so as to learn something we couldn’t otherwise learn. The sense of limitation we get through such work is more fine-grained and robust than we could ever get through philosophical discourse. As our knowledge increases, so does our awareness of our limitations.

On the one hand, I have little faith that one day computers will equal or even surpass us in intelligence or creativity; those seem to me empty hopes / fears / ambitions. They’re projections from a worldview that is being rendered obsolete by real accomplishments. At the same time, I remain convinced that something of what the mind does is captured by computing, and by nothing else. One need not believe that computing holds the whole story in order to believe that it holds something valuable. And the only way to realize that value is to use the tools and see what you can build.
