Friday, March 14, 2003

The best effort at Artificial Intelligence I've come across was one in which the computer was simply programmed with data as they were needed. I can't find the article, so good luck! The example given was that the computer realized that a person with claustrophobia would be uncomfortable travelling by train from London to Paris, based on the following: the Chunnel is an enclosed space; claustrophobes feel uncomfortable travelling in enclosed spaces for a long distance; the Chunnel is 163,680 feet long; this length is longer than fifty feet; fifty feet is a long distance. This deduction is impressive, but I wouldn't call it intelligence.
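For fun, here's what that chain looks like as a toy rule-based program. This is only a sketch of the general technique; the predicate names, the encoding of the fifty-foot threshold, and the rule itself are my own inventions, not anything from the article I can't find.

    # A toy forward-chaining version of the Chunnel example.
    # Every predicate name and rule here is my own hypothetical encoding.
    facts = {
        ("enclosed_space", "chunnel"),
        ("length_feet", "chunnel", 163680),
        ("long_distance_feet", 50),        # fifty feet counts as a long distance
        ("has_claustrophobia", "traveller"),
    }

    def uncomfortable(person, place, facts):
        """A claustrophobe is uncomfortable travelling through an enclosed
        space whose length exceeds what counts as a long distance."""
        if ("has_claustrophobia", person) not in facts:
            return False
        if ("enclosed_space", place) not in facts:
            return False
        lengths = [f[2] for f in facts if f[0] == "length_feet" and f[1] == place]
        thresholds = [f[1] for f in facts if f[0] == "long_distance_feet"]
        return any(l > t for l in lengths for t in thresholds)

    print(uncomfortable("traveller", "chunnel", facts))  # True

Each step is purely mechanical, which is rather the point: nothing in the program could have produced its own rules.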

Induction and deduction as methods have switched definitions on occasion. Here, I mean by induction the method of reaching a generally applicable conclusion from singular statements. Deduction, therefore, is the method of creating the general conclusion first, then testing it with singular statements. Deduction is now, more or less, the accepted scientific method. One creates an hypothesis which leads one to various falsifiable predictions, tests those predictions, and decides whether this theory is false, or unfalsified. Not true in the absolute sense, of course, since by this method no theory can ever be proven true. It remains, only and for the moment, unfalsified.

This method is so familiar that it's hard to remember how young it is; and it is so familiar because it's so useful. Karl Popper, an Austrian philosopher recently revitalized by the book Wittgenstein's Poker, was one of its most influential advocates. Indeed, in The Logic of Scientific Discovery, he demonstrates the impossibility of the inductive method's ever reaching any new discoveries.

That inconsistencies may easily arise in connection with the principle of induction should have been clear from the work of Hume; also, that they can be avoided, if at all, only with difficulty. For the principle of induction [i.e. a statement which would allow us to put inductive inferences in an acceptable logical form; that is, move from the singular to the universal] must be a universal statement in its turn. Thus if we try to regard its truth as known from experience, then the very same problems which occasioned its introduction will arise all over again.

He goes on to disparage Kant's attempt to place the principle of induction in the realm of the a priori; I think he gives Kant short shrift, since this view requires more than a shrugging off, but the important thing is that Popper has an alternative. If we begin with our conclusion, we cannot prove it, but we can falsify it if a singular event does not fit. If I decide that 'All peace protesters are stupid', this hypothesis can be disproven when I meet a smart peace protester. All I need is one such instance, and my hypothesis must be thrown out or modified. This insight of Popper's is what destroyed the then-juggernautical Vienna Circle and its logical positivism, which demanded that a statement, to be meaningful, must be capable of verification.
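The asymmetry Popper is pointing at fits in a few lines. No number of confirming instances proves a universal statement, but a single counterexample refutes it; here is a sketch in the same toy style, with observations invented purely for illustration.

    # Falsification in miniature. The hypothesis and the observations
    # are invented for illustration; nothing here is empirical.
    def hypothesis(protester):
        """'All peace protesters are stupid' -- a universal statement."""
        return protester["stupid"]

    observations = [
        {"name": "A", "stupid": True},   # consistent with the hypothesis
        {"name": "B", "stupid": True},   # also consistent; proves nothing
        {"name": "C", "stupid": False},  # one counterexample refutes it
    ]

    counterexamples = [o["name"] for o in observations if not hypothesis(o)]
    if counterexamples:
        print("Falsified by:", counterexamples)  # Falsified by: ['C']
    else:
        print("Unfalsified so far; still not proven true.")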

I'll quote Einstein (because I can, and it's his birthday, and I like him) from his address on the occasion of Max Planck's sixtieth birthday: "There is no logical path leading to these [highly universal] laws. They can only be reached by intuition, based on something like an intellectual love of the objects of experience." Popper himself writes that every discovery contains "an irrational element".

Until this "irrational element" (which I rather doubt is truly irrational) can be harnessed, computers will still be nothing more than excellent adding machines, a useful form of outside memory. They will be unable, to use a term of Kuhn's, to initiate or understand a "paradigm shift".

P.S. Comparisons to Plato's Meno have been left as an exercise for the reader.