science fact & science fiction

vol. LXXI, No. 6, August 1963, pp. 6, 92-94

An editorial by John W. Campbell


A Place for the Subconscious

There's a huge difference between an intellectual conviction -- no matter how completely sincere -- and an emotional feeling of belief. An intellectual conviction is usually logical, and sometimes it's even rational, but it lacks real motivating power.

The difference between ``logical'' and ``rational'' becomes a true, deep feeling-awareness only when you have the experience of arguing with someone who is perfectly logical -- absolutely and irrefutably logical ... and irrational. The ``computing psychotic'' type among the committed insane represents the end-example of the type. His logic will be absolutely flawless; you'll shortly find that you, not he, are guilty of false syllogisms, argumentum ad hominem, the undistributed middle, and other forms of bad logic.

Only he goes on being magnificently irrational, despite his perfect logic.

The problem is, of course, that perfect logic applied to false postulates yields perfectly logical irrationality. The Master False Postulate of the system the computing psychotic operates on is one widely accepted: ``Anything that is logical is necessarily rational.'' Since his logic is flawless, that proves to him that he's perfectly rational.
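The point can be sketched in a few lines of code (a hypothetical illustration, not anything from the editorial itself): the deduction below is a perfectly valid modus ponens, yet the conclusion it grinds out is only as sound as the postulates fed in.

```python
# A minimal sketch (hypothetical names) of the computing psychotic's
# reasoning: flawless deduction over an unexamined, false postulate.

def deduce(postulates):
    """Apply modus ponens: (whatever is logical is rational) and
    (my logic is flawless), therefore (I am rational).  The inference
    step is valid; only the first postulate is false."""
    if postulates["whatever_is_logical_is_rational"] and \
       postulates["my_logic_is_flawless"]:
        return "I am perfectly rational"
    return "no conclusion"

# The Master False Postulate is accepted without scrutiny:
postulates = {
    "whatever_is_logical_is_rational": True,
    "my_logic_is_flawless": True,
}

print(deduce(postulates))  # a valid conclusion from a false premise
```

No amount of auditing the `deduce` step would expose the error; only re-examining the postulate table itself could -- and that is precisely the step for which, as discussed below, we have no formal method.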

The great difficulty lies in the fact that while we have worked out a codified, formal technique of manipulating postulates -- that's what we mean by ``Logic'' -- we have no codified or formalized system for deriving postulates. Thus you can check the rigor of another man's logical thinking, and cross-communicate with him as to the nature and validity of the logical steps, but you cannot check his derivation of the postulates he's manipulating so logically.

For example, when Newton studied Kepler's laws of planetary motion and Galileo's work on falling bodies, pendulums, accelerations, et cetera, he abstracted from the data certain postulates, now known as Newton's Laws of Motion and Gravity.

He derived from those postulates certain conclusions. That his conclusions were absolutely validly derived, by perfect logic, could be checked. But there was no means whatever of cross-checking the process by which he had abstracted those postulates from the data.

Kepler's laws of planetary motion were simply observational rules-of-thumb -- they were not ``logical'' or ``rational'', but simply pragmatic.

Newton's postulates -- his ``Laws'' -- could not then, and cannot now, be provably derived from the data he used. There is absolutely no known method of going from the data Newton worked with to the postulates he reached. That his thinking process in doing so was sound absolutely cannot be proven, even today. We do not know how postulates can be abstracted from data. Men can do it; this we know as a pragmatic fact. How they do it we do not know.

Certainly Newton's postulates were ``proven'' in his own lifetime; ``proven'' in the narrow sense of ``shown to be useful in predicting real phenomena in the real universe.''

But in that sense, Ptolemaic astronomy had been ``proven'' too, a millennium or so earlier.

It is because we still do not know how to do what all men do constantly in their lives -- abstract postulates from observation -- that we cannot design a machine that can think, nor help the psychotic to re-abstract and correct his postulates. (And we can't re-abstract and correct our own false postulates either, of course!)

In the course of developing computers -- modern terminology prefers that word to ``robotic brains'' -- men have been forced to acknowledge gaps in their understanding of thinking that they were previously able to glide over with a swift, easy ``you know what I mean ....'' There was the method of ``explaining'' something with the magnificent phrase ``by means of a function'' -- so long as you didn't have to specify what the function was, or how it operated.

Robots, however, have a devastating literal-mindedness. They tend to say, ``Duh, boss, I don't know what you mean. Tell me.'' Even more devastating is the robot's tendency to do precisely and exactly what you told it to do. The gibbering feeling that can be induced in the man trying to instruct a robot can be demonstrated beautifully by a very simple little business. It makes a wonderful way of explaining the problems of automation and cybernetics to a non-technical audience -- or a technical audience that's never worked with that kind of problem. Try this one in a group some time:

``Assume that I am a robot. I -- like all robots -- follow orders given me with exact, literal, and totally uncaring precision. Now each of you, of course, knows how to take off a coat; all you have to do is give me directions as to how to take off my coat.''

Usually the instructions start with ``Take hold of your lapels with your hands.''

This is complied with by taking the left lapel in the right hand, and the right lapel in the left hand -- since the intended positions weren't specified.

``No ... no! Take the left lapel with the left hand, and the right lapel with the right hand!''

You do -- taking the left lapel somewhere up under your left ear, and the right lapel at about the level of your right-side pocket. When the order is corrected -- i.e., when adequate precision and completeness of instructions have been worked out -- the next step is usually ``Now straighten out your arms.''

This allows many interesting variations. You can straighten your arms out straight in front of you, making ripping noises as you do, since the robot could, we assume, tear the cloth readily. Or you can straighten them straight out to the sides, or straight up -- with ripping-noise sound effects in any case. Or, naturally, any combination that happens to appeal to you: the order was positive, but not explicit.

Usually about this time the audience has a genuine realization that stating explicitly what you mean, in even so simple a matter as taking off a coat, is no easy task. From that point on, the difficulty and frustrations of trying to design automatic machinery can be understood a lot more sympathetically.
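The coat-taking game translates directly into code. The sketch below (hypothetical names throughout; the editorial of course predates any such program) models the literal-minded robot one way: every detail the instructor leaves unstated is filled in with a default of the robot's own choosing, so under-specified orders are obeyed with perfect logic and absurd results.

```python
# A literal-minded robot: it obeys each order exactly as given, and
# supplies its own defaults for anything the order failed to specify.

class LiteralRobot:
    def __init__(self):
        self.log = []

    def grasp_lapel(self, lapel, hand=None):
        # "Take hold of your lapels" named no hands, so the robot
        # quite logically picks the crossed arrangement.
        if hand is None:
            hand = "right" if lapel == "left" else "left"
        self.log.append(f"{hand} hand grasps {lapel} lapel")

    def straighten_arms(self, direction=None):
        # "Now straighten out your arms" named no direction; straight
        # ahead will do -- and the cloth tears, exactly as ordered.
        if direction is None:
            direction = "straight ahead"
        self.log.append(f"arms straightened {direction} (rrrip)")

robot = LiteralRobot()
robot.grasp_lapel("left")    # crossed grip: right hand on left lapel
robot.grasp_lapel("right")
robot.straighten_arms()      # rips the coat, precisely as instructed
for step in robot.log:
    print(step)
```

The remedy, as in the parlor game, is never to argue with the logic but to supply the missing postulates -- `robot.grasp_lapel("left", hand="left")`, and so on -- until nothing at all is left to the defaults.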

This is the first and simplest level of working with a system that is perfectly logical, but not rational. The results the instructor gets are the logical consequences of the postulates -- the orders -- he feeds into the logical-not-rational system.

Very recently, Dr. Gotthard Gunther, working at the Electrical Engineering Research Laboratories of the University of Illinois, has developed a formal, codifiable system of mathematical hyper-logic -- I must call it ``hyper-logic'' simply to distinguish the fact that it goes beyond the multi-valued logics that have been common heretofore, and possesses characteristics and potentialities never before available. It is, in effect, a formal-mat