Rereading this review after eight years, I find little of substance that I would change if I were to write it today. I am not aware of any theoretical or experimental work that challenges its conclusions; nor, so far as I know, has there been any attempt to meet the criticisms that are raised in the review or to show that they are erroneous or ill-founded.
I had intended this review not specifically as a criticism of Skinner’s speculations regarding language, but rather as a more general critique of behaviorist (I would now prefer to say “empiricist”) speculation as to the nature of higher mental processes. My reason for discussing Skinner’s book in such detail was that it was the most careful and thoroughgoing presentation of such speculations, an evaluation that I feel is still accurate. Therefore, if the conclusions I attempted to substantiate in the review are correct, as I believe they are, then Skinner’s work can be regarded as, in effect, a reductio ad absurdum of behaviorist assumptions. My personal view is that it is a definite merit, not a defect, of Skinner’s work that it can be used for this purpose, and it was for this reason that I tried to deal with it fairly exhaustively. I do not see how his proposals can be improved upon, aside from occasional details and oversights, within the framework of the general assumptions that he accepts. I do not, in other words, see any way in which his proposals can be substantially improved within the general framework of behaviorist or neobehaviorist, or, more generally, empiricist ideas that has dominated much of modern linguistics, psychology, and philosophy. The conclusion that I hoped to establish in the review, by discussing these speculations in their most explicit and detailed form, was that the general point of view was largely mythology, and that its widespread acceptance is not the result of empirical support, persuasive reasoning, or the absence of a plausible alternative.
If I were writing today on the same topic, I would try to make it more clear than I did that I was discussing Skinner’s proposals as a paradigm example of a futile tendency in modern speculation about language and mind. I would also be somewhat less apologetic and hesitant about proposing the alternative view sketched in Sections 5 and 11 — and also less ahistorical in proposing this alternative, since in fact it embodies assumptions that are not only plausible and relatively well-confirmed, so it appears to me, but also deeply rooted in a rich and largely forgotten tradition of rationalist psychology and linguistics. I have tried to correct this imbalance in later publications (Chomsky, 1962, 1964, 1966; see also Miller et al., 1960; Katz and Postal, 1964; Fodor, 1965; Lenneberg, 1966).
I think it would also have been valuable to try to sketch some of the reasons — and there were many — that have made the view I was criticizing seem plausible over a long period, and also to discuss the reasons for the decline of the alternative rationalist conception which, I was suggesting, should be rehabilitated. Such a discussion would, perhaps, have helped to place the specific critique of Skinner in a more meaningful context.
References in the Preface
Chomsky, N., “Explanatory Models in Linguistics,” in Logic, Methodology and Philosophy of Science, ed. E. Nagel, P. Suppes, and A. Tarski. Stanford, Calif.: Stanford University Press, 1962.
———, Current Issues in Linguistic Theory. The Hague: Mouton and Co., 1964.
———, Cartesian Linguistics. New York: Harper and Row, Publishers, 1966.
Fodor, J., “Could Meaning Be an ‘rₘ’?,” Journal of Verbal Learning and Verbal Behavior, 4 (1965), 73–81.
Katz, J. and P. Postal, An Integrated Theory of Linguistic Description. Cambridge, Mass.: M.I.T. Press, 1964.
Lenneberg, E., Biological Bases of Language. (In press.)
Miller, G. A., E. Galanter, and K. H. Pribram, Plans and the Structure of Behavior. New York: Holt, Rinehart and Winston, Inc., 1960.
The Review
by Noam Chomsky
“A Review of B. F. Skinner’s Verbal Behavior” in Language, 35, No. 1 (1959), 26–58.
A great many linguists and philosophers concerned with language have expressed the hope that their studies might ultimately be embedded in a framework provided by behaviorist psychology, and that refractory areas of investigation, particularly those in which meaning is involved, will in this way be opened up to fruitful exploration. Since this volume is the first large-scale attempt to incorporate the major aspects of linguistic behavior within a behaviorist framework, it merits and will undoubtedly receive careful attention. Skinner is noted for his contributions to the study of animal behavior. The book under review is the product of study of linguistic behavior extending over more than twenty years. Earlier versions of it have been fairly widely circulated, and there are quite a few references in the psychological literature to its major ideas.
The problem to which this book is addressed is that of giving a “functional analysis” of verbal behavior. By functional analysis, Skinner means identification of the variables that control this behavior and specification of how they interact to determine a particular verbal response. Furthermore, the controlling variables are to be described completely in terms of such notions as stimulus, reinforcement, deprivation, which have been given a reasonably clear meaning in animal experimentation. In other words, the goal of the book is to provide a way to predict and control verbal behavior by observing and manipulating the physical environment of the speaker.
Skinner feels that recent advances in the laboratory study of animal behavior permit us to approach this problem with a certain optimism, since “the basic processes and relations which give verbal behavior its special characteristics are now fairly well understood … the results [of this experimental work] have been surprisingly free of species restrictions. Recent work has shown that the methods can be extended to human behavior without serious modification” (3).[1]
It is important to see clearly just what it is in Skinner’s program and claims that makes them appear so bold and remarkable. It is not primarily the fact that he has set functional analysis as his problem, or that he limits himself to study of observables, i.e., input-output relations. What is so surprising is the particular limitations he has imposed on the way in which the observables of behavior are to be studied, and, above all, the particularly simple nature of the function which, he claims, describes the causation of behavior. One would naturally expect that prediction of the behavior of a complex organism (or machine) would require, in addition to information about external stimulation, knowledge of the internal structure of the organism, the ways in which it processes input information and organizes its own behavior. These characteristics of the organism are in general a complicated product of inborn structure, the genetically determined course of maturation, and past experience. Insofar as independent neurophysiological evidence is not available, it is obvious that inferences concerning the structure of the organism are based on observation of behavior and outside events. Nevertheless, one’s estimate of the relative importance of external factors and internal structure in the determination of behavior will have an important effect on the direction of research on linguistic (or any other) behavior, and on the kinds of analogies from animal behavior studies that will be considered relevant or suggestive.
Putting it differently, anyone who sets himself the problem of analyzing the causation of behavior will (in the absence of independent neurophysiological evidence) concern himself with the only data available, namely the record of inputs to the organism and the organism’s present response, and will try to describe the function specifying the response in terms of the history of inputs. This is nothing more than the definition of his problem. There are no possible grounds for argument here, if one accepts the problem as legitimate, though Skinner has often advanced and defended this definition of a problem as if it were a thesis which other investigators reject. The differences that arise between those who affirm and those who deny the importance of the specific “contribution of the organism” to learning and performance concern the particular character and complexity of this function, and the kinds of observations and research necessary for arriving at a precise specification of it. If the contribution of the organism is complex, the only hope of predicting behavior even in a gross way will be through a very indirect program of research that begins by studying the detailed character of the behavior itself and the particular capacities of the organism involved.
Skinner’s thesis is that external factors consisting of present stimulation and the history of reinforcement (in particular, the frequency, arrangement, and withholding of reinforcing stimuli) are of overwhelming importance, and that the general principles revealed in laboratory studies of these phenomena provide the basis for understanding the complexities of verbal behavior. He confidently and repeatedly voices his claim to have demonstrated that the contribution of the speaker is quite trivial and elementary, and that precise prediction of verbal behavior involves only specification of the few external factors that he has isolated experimentally with lower organisms.
Careful study of this book (and of the research on which it draws) reveals, however, that these astonishing claims are far from justified. It indicates, furthermore, that the insights that have been achieved in the laboratories of the reinforcement theorist, though quite genuine, can be applied to complex human behavior only in the most gross and superficial way, and that speculative attempts to discuss linguistic behavior in these terms alone omit from consideration factors of fundamental importance that are, no doubt, amenable to scientific study, although their specific character cannot at present be precisely formulated. Since Skinner’s work is the most extensive attempt to accommodate human behavior involving higher mental faculties within a strict behaviorist schema of the type that has attracted many linguists and philosophers, as well as psychologists, a detailed documentation is of independent interest. The magnitude of the failure of this attempt to account for verbal behavior serves as a kind of measure of the importance of the factors omitted from consideration, and an indication of how little is really known about this remarkably complex phenomenon.
The force of Skinner’s argument lies in the enormous wealth and range of examples for which he proposes a functional analysis. The only way to evaluate the success of his program and the correctness of his basic assumptions about verbal behavior is to review these examples in detail and to determine the precise character of the concepts in terms of which the functional analysis is presented. Section 2 of this review describes the experimental context with respect to which these concepts are originally defined. Sections 3 and 4 deal with the basic concepts — stimulus, response, and reinforcement — Sections 6 to 10 with the new descriptive machinery developed specifically for the description of verbal behavior. In Section 5 we consider the status of the fundamental claim, drawn from the laboratory, which serves as the basis for the analogic guesses about human behavior that have been proposed by many psychologists. The final section (Section 11) will consider some ways in which further linguistic work may play a part in clarifying some of these problems.
Although this book makes no direct reference to experimental work, it can be understood only in terms of the general framework that Skinner has developed for the description of behavior. Skinner divides the responses of the animal into two main categories. Respondents are purely reflex responses elicited by particular stimuli. Operants are emitted responses, for which no obvious stimulus can be discovered. Skinner has been concerned primarily with operant behavior. The experimental arrangement that he introduced consists basically of a box with a bar attached to one wall in such a way that when the bar is pressed, a food pellet is dropped into a tray (and the bar press is recorded). A rat placed in the box will soon press the bar, releasing a pellet into the tray. This state of affairs, resulting from the bar press, increases the strength of the bar-pressing operant. The food pellet is called a reinforcer; the event, a reinforcing event. The strength of an operant is defined by Skinner in terms of the rate of response during extinction (i.e., after the last reinforcement and before return to the pre-conditioning rate).
Suppose that release of the pellet is conditional on the flashing of a light. Then the rat will come to press the bar only when the light flashes. This is called stimulus discrimination. The response is called a discriminated operant and the light is called the occasion for its emission: this is to be distinguished from elicitation of a response by a stimulus in the case of the respondent.[2] Suppose that the apparatus is so arranged that bar-pressing of only a certain character (e.g., duration) will release the pellet. The rat will then come to press the bar in the required way. This process is called response differentiation. By successive slight changes in the conditions under which the response will be reinforced, it is possible to shape the response of a rat or a pigeon in very surprising ways in a very short time, so that rather complex behavior can be produced by a process of successive approximation.
A stimulus can become reinforcing by repeated association with an already reinforcing stimulus. Such a stimulus is called a secondary reinforcer. Like many contemporary behaviorists, Skinner considers money, approval, and the like to be secondary reinforcers which have become reinforcing because of their association with food, etc.[3] Secondary reinforcers can be generalized by associating them with a variety of different primary reinforcers.
Another variable that can affect the rate of the bar-pressing operant is drive, which Skinner defines operationally in terms of hours of deprivation. His major scientific book, Behavior of Organisms, is a study of the effects of food-deprivation and conditioning on the strength of the bar-pressing response of healthy mature rats. Probably Skinner’s most original contribution to animal behavior studies has been his investigation of the effects of intermittent reinforcement, arranged in various different ways, presented in Behavior of Organisms and extended (with pecking of pigeons as the operant under investigation) in the recent Schedules of Reinforcement by Ferster and Skinner (1957). It is apparently these studies that Skinner has in mind when he refers to the recent advances in the study of animal behavior.[4]
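[Editorial aside, not part of the original text: the operational definitions sketched above — the operant as a recorded bar press, reinforcement as whatever event increases its rate, strength as the rate of response during extinction — can be made concrete in a toy simulation. The update rule and every numerical value below are arbitrary illustrative assumptions, not anything proposed by Skinner or implied by the review; the sketch only shows what it means to measure "strength" as a rate under a fixed laboratory procedure.]

```python
# Toy simulation of the bar-pressing experiment described above.
# All parameters and the strengthening rule are assumed for illustration only.
import random

random.seed(0)

p_press = 0.05        # initial probability of emitting the bar-press operant on a trial (assumed)
LEARNING_RATE = 0.2   # how much each reinforcing event strengthens the operant (assumed)
DECAY = 0.02          # gradual weakening when a press is not followed by a pellet (assumed)


def run_trials(n_trials, reinforced):
    """Run n_trials and return the number of bar presses emitted.

    If `reinforced` is True, every press is followed by a pellet and the operant
    is strengthened; if False (extinction), each unreinforced press weakens it.
    """
    global p_press
    presses = 0
    for _ in range(n_trials):
        if random.random() < p_press:  # the operant is emitted
            presses += 1
            if reinforced:
                p_press = min(1.0, p_press + LEARNING_RATE * (1 - p_press))
            else:
                p_press = max(0.0, p_press - DECAY)
    return presses


conditioning_rate = run_trials(200, reinforced=True) / 200
# "Strength" of the operant is read off as the response rate during extinction.
extinction_rate = run_trials(200, reinforced=False) / 200
print(f"rate during conditioning: {conditioning_rate:.2f}")
print(f"rate during extinction (operant 'strength'): {extinction_rate:.2f}")
```

The point of the sketch is only that "strength" here is nothing more than a rate measured under a fixed experimental arrangement; it makes no claim about how these notions extend beyond that arrangement, which is precisely the question taken up in the following sections.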
The notions stimulus, response, reinforcement are relatively well defined with respect to the bar-pressing experiments and others similarly restricted. Before we can extend them to real-life behavior, however, certain difficulties must be faced. We must decide, first of all, whether any physical event to which the organism is capable of reacting is to be called a stimulus on a given occasion, or only one to which the organism in fact reacts; and correspondingly, we must decide whether any part of behavior is to be called a response, or only one connected with stimuli in lawful ways. Questions of this sort pose something of a dilemma for the experimental psychologist. If he accepts the broad definitions, characterizing any physical event impinging on the organism as a stimulus and any part of the organism’s behavior as a response, he must conclude that behavior has not been demonstrated to be lawful. In the present state of our knowledge, we must attribute an overwhelming influence on actual behavior to ill-defined factors of attention, set, volition, and caprice. If we accept the narrower definitions, then behavior is lawful by definition (if it consists of responses); but this fact is of limited significance, since most of what the animal does will simply not be considered behavior. Hence, the psychologist either must admit that behavior is not lawful (or that he cannot at present show that it is — not at all a damaging admission for a developing science), or must restrict his attention to those highly limited areas in which it is lawful (e.g., with adequate controls, bar-pressing in rats; lawfulness of the observed behavior provides, for Skinner, an implicit definition of a good experiment).
Skinner does not consistently adopt either course. He utilizes the experimental results as evidence for the scientific character of his system of behavior, and analogic guesses (formulated in terms of a metaphoric extension of the technical vocabulary of the laboratory) as evidence for its scope. This creates the illusion of a rigorous scientific theory with a very broad scope, although in fact the terms used in the description of real-life and of laboratory behavior may be mere homonyms, with at most a vague similarity of meaning. To substantiate this evaluation, a critical account of his book must show that with a literal reading (where the terms of the descriptive system have something like the technical meanings given in Skinner’s definitions) the book covers almost no aspect of linguistic behavior, and that with a metaphoric reading, it is no more scientific than the traditional approaches to this subject matter, and rarely as clear and careful.[5]
Consider first Skinner’s use of the notions stimulus and response. In Behavior of Organisms (9) he commits himself to the narrow definitions for these terms. A part of the environment and a part of behavior are called stimulus (eliciting, discriminated, or reinforcing) and response, respectively, only if they are lawfully related; that is, if the dynamic laws relating them show smooth and reproducible curves. Evidently, stimuli and responses, so defined, have not been shown to figure very widely in ordinary human behavior.[6] We can, in the face of presently available evidence, continue to maintain the lawfulness of the relation between stimulus and response only by depriving them of their objective character. A typical example of stimulus control for Skinner would be the response to a piece of music with the utterance Mozart or to a painting with the response Dutch. These responses are asserted to be “under the control of extremely subtle properties” of the physical object or event (108). Suppose instead of saying Dutch we had said Clashes with the wallpaper, I thought you liked abstract work, Never saw it before, Tilted, Hanging too low, Beautiful, Hideous, Remember our camping trip last summer?, or whatever else might come into our minds when looking at a picture (in Skinnerian translation, whatever other responses exist in sufficient strength). Skinner could only say that each of these responses is under the control of some other stimulus property of the physical object. If we look at a red chair and say red, the response is under the control of the stimulus redness; if we say chair, it is under the control of the collection of properties (for Skinner, the object) chairness (110), and similarly for any other response. This device is as simple as it is empty. Since properties are free for the asking (we have as many of them as we have nonsynonymous descriptive expressions in our language, whatever this means exactly), we can account for a wide class of responses in terms of Skinnerian functional analysis by identifying the controlling stimuli. But the word stimulus has lost all objectivity in this usage. Stimuli are no longer part of the outside physical world; they are driven back into the organism. We identify the stimulus when we hear the response. It is clear from such examples, which abound, that the talk of stimulus control simply disguises a complete retreat to mentalistic psychology. We cannot predict verbal behavior in terms of the stimuli in the speaker’s environment, since we do not know what the current stimuli are until he responds. Furthermore, since we cannot control the property of a physical object to which an individual will respond, except in highly artificial cases, Skinner’s claim that his system, as opposed to the traditional one, permits the practical control of verbal behavior[7] is quite false.
Other examples of stimulus control merely add to the general mystification. Thus, a proper noun is held to be a response “under the control of a specific person or thing” (as controlling stimulus, 113). I have often used the words Eisenhower and Moscow, which I presume are proper nouns if anything is, but have never been stimulated by the corresponding objects. How can this fact be made compatible with this definition? Suppose that I use the name of a friend who is not present. Is this an instance of a proper noun under the control of the friend as stimulus? Elsewhere it is asserted that a stimulus controls a response in the sense that presence of the stimulus increases the probability of the response. But it is obviously untrue that the probability that a speaker will produce a full name is increased when its bearer faces the speaker. Furthermore, how can one’s own name be a proper noun in this sense?
A multitude of similar questions arise immediately. It appears that the word control here is merely a misleading paraphrase for the traditional denote or refer. The assertion (115) that so far as the speaker is concerned, the relation of reference is “simply the probability that the speaker will emit a response of a given form in the presence of a stimulus having specified properties” is surely incorrect if we take the words presence, stimulus, and probability in their literal sense. That they are not intended to be taken literally is indicated by many examples, as when a response is said to be “controlled” by a situation or state of affairs as “stimulus.” Thus, the expression a needle in a haystack “may be controlled as a unit by a particular type of situation” (116); the words in a single part of speech, e.g., all adjectives, are under the control of a single set of subtle properties of stimuli (121); “the sentence The boy runs a store is under the control of an extremely complex stimulus situation” (335); “He is not at all well may function as a standard response under the control of a state of affairs which might also control He is ailing” (325); when an envoy observes events in a foreign country and reports upon his return, his report is under “remote stimulus control” (416); the statement This is war may be a response to a “confusing international situation” (441); the suffix -ed is controlled by that “subtle property of stimuli which we speak of as action-in-the-past” (121) just as the -s in The boy runs is under the control of such specific features of the situation as its “currency” (332). No characterization of the notion stimulus control that is remotely related to the bar-pressing experiment (or that preserves the faintest objectivity) can be made to cover a set of examples like these, in which, for example, the controlling stimulus need not even impinge on the responding organism.
Consider now Skinner’s use of the notion response. The problem of identifying units in verbal behavior has of course been a primary concern of linguists, and it seems very likely that experimental psychologists should be able to provide much-needed assistance in clearing up the many remaining difficulties in systematic identification. Skinner recognizes (20) the fundamental character of the problem of identification of a unit of verbal behavior, but is satisfied with an answer so vague and subjective that it does not really contribute to its solution. The unit of verbal behavior — the verbal operant — is defined as a class of responses of identifiable form functionally related to one or more controlling variables. No method is suggested for determining in a particular instance what are the controlling variables, how many such units have occurred, or where their boundaries are in the total response. Nor is any attempt made to specify how much or what kind of similarity in form or control is required for two physical events to be considered instances of the same operant. In short, no answers are suggested for the most elementary questions that must be asked of anyone proposing a method for description of behavior. Skinner is content with what he calls an extrapolation of the concept of operant developed in the laboratory to the verbal field. In the typical Skinnerian experiment, the problem of identifying the unit of behavior is not too crucial. It is defined, by fiat, as a recorded peck or bar-press, and systematic variations in the rate of this operant and its resistance to extinction are studied as a function of deprivation and scheduling of reinforcement (pellets). The operant is thus defined with respect to a particular experimental procedure. This is perfectly reasonable and has led to many interesting results. It is, however, completely meaningless to speak of extrapolating this concept of operant to ordinary verbal behavior. Such “extrapolation” leaves us with no way of justifying one or another decision about the units in the “verbal repertoire.”
Skinner specifies “response strength” as the basic datum, the basic dependent variable in his functional analysis. In the bar-pressing experiment, response strength is defined in terms of rate of emission during extinction. Skinner has argued[8] that this is “the only datum that varies significantly and in the expected direction under conditions which are relevant to the ‘learning process.’” In the book under review, response strength is defined as “probability of emission” (22). This definition provides a comforting impression of objectivity, which, however, is quickly dispelled when we look into the matter more closely. The term probability has some rather obscure meaning for Skinner in this book.[9] We are told, on the one hand, that “our evidence for the contribution of each variable [to response strength] is based on observation of frequencies alone” (28). At the same time, it appears that frequency is a very misleading measure of strength, since, for example, the frequency of a response may be “primarily attributable to the frequency of occurrence of controlling variables” (27). It is not clear how the frequency of a response can be attributable to anything BUT the frequency of occurrence of its controlling variables if we accept Skinner’s view that the behavior occurring in a given situation is “fully determined” by the relevant controlling variables (175, 228). Furthermore, although the evidence for the contribution of each variable to response strength is based on observation of frequencies alone, it turns out that “we base the notion of strength upon several kinds of evidence” (22), in particular (22–28): emission of the response (particularly in unusual circumstances), energy level (stress), pitch level, speed and delay of emission, size of letters etc. in writing, immediate repetition, and — a final factor, relevant but misleading — over-all frequency.
Of course, Skinner recognizes that these measures do not co-vary, because (among other reasons) pitch, stress, quantity, and reduplication may have internal linguistic functions.[10] However, he does not hold these conflicts to be very important, since the proposed factors indicative of strength are “fully understood by everyone” in the culture (27). For example, “if we are shown a prized work of art and exclaim Beautiful!, the speed and energy of the response will not be lost on the owner.” It does not appear totally obvious that in this case the way to impress the owner is to shriek Beautiful in a loud, high-pitched voice, repeatedly, and with no delay (high response strength). It may be equally effective to look at the picture silently (long delay) and then to murmur Beautiful in a soft, low-pitched voice (by definition, very low response strength).
It is not unfair, I believe, to conclude from Skinner’s discussion of response strength, the basic datum in functional analysis, that his extrapolation of the notion of probability can best be interpreted as, in effect, nothing more than a decision to use the word probability, with its favorable connotations of objectivity, as a cover term to paraphrase such low-status words as interest, intention, belief, and the like. This interpretation is fully justified by the way in which Skinner uses the terms probability and strength. To cite just one example, Skinner defines the process of confirming an assertion in science as one of “generating additional variables to increase its probability” (425), and more generally, its strength (425–29). If we take this suggestion quite literally, the degree of confirmation of a scientific assertion can be measured as a simple function of the loudness, pitch, and frequency with which it is proclaimed, and a general procedure for increasing its degree of confirmation would be, for instance, to train machine guns on large crowds of people who have been instructed to shout it. A better indication of what Skinner probably has in mind here is given by his description of how the theory of evolution, as an example, is confirmed. This “single set of verbal responses … is made more plausible — is strengthened — by several types of construction based upon verbal responses in geology, paleontology, genetics, and so on” (427). We are no doubt to interpret the terms strength and probability in this context as paraphrases of more familiar locutions such as “justified belief” or “warranted assertability,” or something of the sort. Similar latitude of interpretation is presumably expected when we read that “frequency of effective action accounts in turn for what we may call the listener’s ‘belief’” (88) or that “our belief in what someone tells us is similarly a function of, or identical with, our tendency to act upon the verbal stimuli which he provides” (160).[11]
I think it is evident, then, that Skinner’s use of the terms stimulus, control, response, and strength justifies the general conclusion stated in the last paragraph of Section 2. The way in which these terms are brought to bear on the actual data indicates that we must interpret them as mere paraphrases for the popular vocabulary commonly used to describe behavior and as having no particular connection with the homonymous expressions used in the description of laboratory experiments. Naturally, this terminological revision adds no objectivity to the familiar mentalistic mode of description.
The other fundamental notion borrowed from the description of bar-pressing experiments is reinforcement. It raises problems which are similar, and even more serious. In Behavior of Organisms, “the operation of reinforcement is defined as the presentation of a certain kind of stimulus in a temporal relation with either a stimulus or response. A reinforcing stimulus is defined as such by its power to produce the resulting change [in strength]. There is no circularity about this: some stimuli are found to produce the change, others not, and they are classified as reinforcing and nonreinforcing accordingly” (62). This is a perfectly appropriate definition[12] for the study of schedules of reinforcement. It is perfectly useless, however, in the discussion of real-life behavior, unless we can somehow characterize the stimuli which are reinforcing (and the situations and conditions under which they are reinforcing). Consider first of all the status of the basic principle that Skinner calls the “law of conditioning” (law of effect). It reads: “if the occurrence of an operant is followed by presence of a reinforcing stimulus, the strength is increased” (Behavior of Organisms, 21). As reinforcement was defined, this law becomes a tautology.[13] For Skinner, learning is just change in response strength.[14] Although the statement that presence of reinforcement is a sufficient condition for learning and maintenance of behavior is vacuous, the claim that it is a necessary condition may have some content, depending on how the class of reinforcers (and appropriate situations) is characterized. Skinner does make it very clear that in his view reinforcement is a necessary condition for language learning and for the continued availability of linguistic responses in the adult.[15] However, the looseness of the term reinforcement as Skinner uses it in the book under review makes it entirely pointless to inquire into the truth or falsity of this claim. Examining the instances of what Skinner calls reinforcement, we find that not even the requirement that a reinforcer be an identifiable stimulus is taken seriously. In fact, the term is used in such a way that the assertion that reinforcement is necessary for learning and continued availability of behavior is likewise empty.
To show this, we consider some examples of reinforcement. First of all, we find a heavy appeal to automatic self-reinforcement. Thus, “a man talks to himself… because of the reinforcement he receives” (163); “the child is reinforced automatically when he duplicates the sounds of airplanes, streetcars …” (164); “the young child alone in the nursery may automatically reinforce his own exploratory verbal behavior when he produces sounds which he has heard in the speech of others” (58); “the speaker who is also an accomplished listener ‘knows when he has correctly echoed a response’ and is reinforced thereby” (68); thinking is “behaving which automatically affects the behaver and is reinforcing because it does so” (438; cutting one’s finger should thus be reinforcing, and an example of thinking); “the verbal fantasy, whether overt or covert, is automatically reinforcing to the speaker as listener. Just as the musician plays or composes what he is reinforced by hearing, or as the artist paints what reinforces him visually, so the speaker engaged in verbal fantasy says what he is reinforced by hearing or writes what he is reinforced by reading” (439); similarly, care in problem solving, and rationalization, are automatically self-reinforcing (442–43). We can also reinforce someone by emitting verbal behavior as such (since this rules out a class of aversive stimulations, 167), by not emitting verbal behavior (keeping silent and paying attention, 199), or by acting appropriately on some future occasion (152: “the strength of [the speaker’s] behavior is determined mainly by the behavior which the listener will exhibit with respect to a given state of affairs”; this Skinner considers the general case of “communication” or “letting the listener know”). In most such cases, of course, the speaker is not present at the time when the reinforcement takes place, as when “the artist…is reinforced by the effects his works have upon… others” (224), or when the writer is reinforced by the fact that his “verbal behavior may reach over centuries or to thousands of listeners or readers at the same time. The writer may not be reinforced often or immediately, but his net reinforcement may be great” (206; this accounts for the great “strength” of his behavior). An individual may also find it reinforcing to injure someone by criticism or by bringing bad news, or to publish an experimental result which upsets the theory of a rival (154), to describe circumstances which would be reinforcing if they were to occur (165), to avoid repetition (222), to “hear” his own name though in fact it was not mentioned or to hear nonexistent words in his child’s babbling (259), to clarify or otherwise intensify the effect of a stimulus which serves an important discriminative function (416), and so on.
From this sample, it can be seen that the notion of reinforcement has totally lost whatever objective meaning it may ever have had. Running through these examples, we see that a person can be reinforced though he emits no response at all, and that the reinforcing stimulus need not impinge on the reinforced person or need not even exist (it is sufficient that it be imagined or hoped for). When we read that a person plays what music he likes (165), says what he likes (165), thinks what he likes (438–39), reads what books he likes (163), etc., BECAUSE he finds it reinforcing to do so, or that we write books or inform others of facts BECAUSE we are reinforced by what we hope will be the ultimate behavior of reader or listener, we can only conclude that the term reinforcement has a purely ritual function. The phrase “X is reinforced by Y (stimulus, state of affairs, event, etc.)” is being used as a cover term for “X wants Y,” “X likes Y,” “X wishes that Y were the case,” etc. Invoking the term reinforcement has no explanatory force, and any idea that this paraphrase introduces any new clarity or objectivity into the description of wishing, liking, etc., is a serious delusion. The only effect is to obscure the important differences among the notions being paraphrased. Once we recognize the latitude with which the term reinforcement is being used, many rather startling comments lose their initial effect — for instance, that the behavior of the creative artist is “controlled entirely by the contingencies of reinforcement” (150). What has been hoped for from the psychologist is some indication how the casual and informal description of everyday behavior in the popular vocabulary can be explained or clarified in terms of the notions developed in careful experiment and observation, or perhaps replaced in terms of a better scheme. A mere terminological revision, in which a term borrowed from the laboratory is used with the full vagueness of the ordinary vocabulary, is of no conceivable interest.
It seems that Skinner’s claim that all verbal behavior is acquired and maintained in “strength” through reinforcement is quite empty, because his notion of reinforcement has no clear content, functioning only as a cover term for any factor, detectable or not, related to acquisition or maintenance of verbal behavior.[16] Skinner’s use of the term conditioning suffers from a similar difficulty. Pavlovian and operant conditioning are processes about which psychologists have developed real understanding. Instruction of human beings is not. The claim that instruction and imparting of information are simply matters of conditioning (357–66) is pointless. The claim is true, if we extend the term conditioning to cover these processes, but we know no more about them after having revised this term in such a way as to deprive it of its relatively clear and objective character. It is, as far as we know, quite false, if we use conditioning in its literal sense. Similarly, when we say that “it is the function of predication to facilitate the transfer of response from one term to another or from one object to another” (361), we have said nothing of any significance. In what sense is this true of the predication Whales are mammals? Or, to take Skinner’s example, what point is there in saying that the effect of The telephone is out of order on the listener is to bring behavior formerly controlled by the stimulus out of order under control of the stimulus telephone (or the telephone itself) by a process of simple conditioning (362)? What laws of conditioning hold in this case? Furthermore, what behavior is controlled by the stimulus out of order, in the abstract? Depending on the object of which this is predicated, the present state of motivation of the listener, etc., the behavior may vary from rage to pleasure, from fixing the object to throwing it out, from simply not using it to trying to use it in the normal way (e.g., to see if it is really out of order), and so on. To speak of “conditioning” or “bringing previously available behavior under control of a new stimulus” in such a case is just a kind of play-acting at science (cf. also 43n).
The claim that careful arrangement of contingencies of reinforcement by the verbal community is a necessary condition for language-learning has appeared, in one form or another, in many places.[17] Since it is based not on actual observation, but on analogies to laboratory study of lower organisms, it is important to determine the status of the underlying assertion within experimental psychology proper. The most common characterization of reinforcement (one which Skinner explicitly rejects, incidentally) is in terms of drive reduction. This characterization can be given substance by defining drives in some way independently of what in fact is learned. If a drive is postulated on the basis of the fact that learning takes place, the claim that reinforcement is necessary for learning will again become as empty as it is in the Skinnerian framework. There is an extensive literature on the question of whether there can be learning without drive reduction (latent learning). The “classical” experiment of Blodgett indicated that rats who had explored a maze without reward showed a marked drop in number of errors (as compared to a control group which had not explored the maze) upon introduction of a food reward, indicating that the rat had learned the structure of the maze without reduction of the hunger drive. Drive-reduction theorists countered with an exploratory drive which was reduced during the pre-reward learning, and claimed that a slight decrement in errors could be noted before food reward. A wide variety of experiments, with somewhat conflicting results, have been carried out with a similar design.[18] Few investigators still doubt the existence of the phenomenon. E. R. Hilgard, in his general review of learning theory,[19] concludes that “there is no longer any doubt but that, under appropriate circumstances, latent learning is demonstrable.”
More recent work has shown that novelty and variety of stimulus are sufficient to arouse curiosity in the rat and to motivate it to explore (visually), and in fact, to learn (since on a presentation of two stimuli, one novel, one repeated, the rat will attend to the novel one),[20] that rats will learn to choose the arm of a single-choice maze that leads to a complex maze, running through this being their only “reward”;[21] that monkeys can learn object discriminations and maintain their performance at a high level of efficiency with visual exploration (looking out of a window for 30 seconds) as the only reward[22] and, perhaps most strikingly of all, that monkeys and apes will solve rather complex manipulation problems that are simply placed in their cages, and will solve discrimination problems with only exploration and manipulation as incentives.[23] In these cases, solving the problem is apparently its own “reward.” Results of this kind can be handled by reinforcement theorists only if they are willing to set up curiosity, exploration, and manipulation drives, or to speculate somehow about acquired drives[24] for which there is no evidence outside of the fact that learning takes place in these cases.
There is a variety of other kinds of evidence that has been offered to challenge the view that drive reduction is necessary for learning. Results on sensory-sensory conditioning have been interpreted as demonstrating learning without drive reduction.[25] Olds has reported reinforcement by direct stimulation of the brain, from which he concludes that reward need not satisfy a physiological need or withdraw a drive stimulus.[26] The phenomenon of imprinting, long observed by zoologists, is of particular interest in this connection. Some of the most complex patterns of behavior of birds, in particular, are directed towards objects and animals of the type to which they have been exposed at certain critical early periods of life.[27] Imprinting is the most striking evidence for the innate disposition of the animal to learn in a certain direction and to react appropriately to patterns and objects of certain restricted types, often only long after the original learning has taken place. It is, consequently, unrewarded learning, though the resulting patterns of behavior may be refined through reinforcement. Acquisition of the typical songs of song birds is, in some cases, a type of imprinting. Thorpe reports studies that show “that some characteristics of the normal song have been learned in the earliest youth, before the bird itself is able to produce any kind of full song.”[28] The phenomenon of imprinting has recently been investigated under laboratory conditions and controls with positive results.[29]
Phenomena of this general type are certainly familiar from everyday experience. We recognize people and places to which we have given no particular attention. We can look up something in a book and learn it perfectly well with no other motive than to confute reinforcement theory, or out of boredom, or idle curiosity. Everyone engaged in research must have had the experience of working with feverish and prolonged intensity to write a paper which no one else will read or to solve a problem which no one else thinks important and which will bring no conceivable reward — which may only confirm a general opinion that the researcher is wasting his time on irrelevancies. The fact that rats and monkeys do likewise is interesting and important to show in careful experiment. In fact, studies of behavior of the type mentioned above have an independent and positive significance that far outweighs their incidental importance in bringing into question the claim that learning is impossible without drive reduction. It is not at all unlikely that insights arising from animal behavior studies with this broadened scope may have the kind of relevance to such complex activities as verbal behavior that reinforcement theory has, so far, failed to exhibit. In any event, in the light of presently available evidence, it is difficult to see how anyone can be willing to claim that reinforcement is necessary for learning, if reinforcement is taken seriously as something identifiable independently of the resulting change in behavior.
Similarly, it seems quite beyond question that children acquire a good deal of their verbal and nonverbal behavior by casual observation and imitation of adults and other children.[30] It is simply not true that children can learn language only through “meticulous care” on the part of adults who shape their verbal repertoire through careful differential reinforcement, though it may be that such care is often the custom in academic families. It is a common observation that a young child of immigrant parents may learn a second language in the streets, from other children, with amazing rapidity, and that his speech may be completely fluent and correct to the last allophone, while the subtleties that become second nature to the child may elude his parents despite high motivation and continued practice. A child may pick up a large part of his vocabulary and “feel” for sentence structure from television, from reading, from listening to adults, etc. Even a very young child who has not yet acquired a minimal repertoire from which to form new utterances may imitate a word quite well on an early try, with no attempt on the part of his parents to teach it to him. It is also perfectly obvious that, at a later stage, a child will be able to construct and understand utterances which are quite new, and are, at the same time, acceptable sentences in his language. Every time an adult reads a newspaper, he undoubtedly comes upon countless new sentences which are not at all similar, in a simple, physical sense, to any that he has heard before, and which he will recognize as sentences and understand; he will also be able to detect slight distortions or misprints. Talk of “stimulus generalization” in such a case simply perpetuates the mystery under a new title. These abilities indicate that there must be fundamental processes at work quite independently of “feedback” from the environment. I have been able to find no support whatsoever for the doctrine of Skinner and others that slow and careful shaping of verbal behavior through differential reinforcement is an absolute necessity. If reinforcement theory really requires the assumption that there be such meticulous care, it seems best to regard this simply as a reductio ad absurdum argument against this approach. It is also not easy to find any basis (or, for that matter, to attach very much content) to the claim that reinforcing contingencies set up by the verbal community are the single factor responsible for maintaining the strength of verbal behavior. The sources of the “strength” of this behavior are almost a total mystery at present. Reinforcement undoubtedly plays a significant role, but so do a variety of motivational factors about which nothing serious is known in the case of human beings.
As far as acquisition of language is concerned, it seems clear that reinforcement, casual observation, and natural inquisitiveness (coupled with a strong tendency to imitate) are important factors, as is the remarkable capacity of the child to generalize, hypothesize, and “process information” in a variety of very special and apparently highly complex ways which we cannot yet describe or begin to understand, and which may be largely innate, or may develop through some sort of learning or through maturation of the nervous system. The manner in which such factors operate and interact in language acquisition is completely unknown. It is clear that what is necessary in such a case is research, not dogmatic and perfectly arbitrary claims, based on analogies to that small part of the experimental literature in which one happens to be interested.
The pointlessness of these claims becomes clear when we consider the well-known difficulties in determining to what extent inborn structure, maturation, and learning are responsible for the particular form of a skilled or complex performance.[31] To take just one example,[32] the gaping response of a nestling thrush is at first released by jarring of the nest, and, at a later stage, by a moving object of specific size, shape, and position relative to the nestling. At this later stage the response is directed toward the part of the stimulus object corresponding to the parent’s head, and characterized by a complex configuration of stimuli that can be precisely described. Knowing just this, it would be possible to construct a speculative, learning-theoretic account of how this sequence of behavior patterns might have developed through a process of differential reinforcement, and it would no doubt be possible to train rats to do something similar. However, there appears to be good evidence that these responses to fairly complex “sign stimuli” are genetically determined and mature without learning. Clearly, the possibility cannot be discounted. Consider now the comparable case of a child imitating new words. At an early stage we may find rather gross correspondences. At a later stage, we find that repetition is of course far from exact (i.e., it is not mimicry, a fact which itself is interesting), but that it reproduces the highly complex configuration of sound features that constitute the phonological structure of the language in question. Again, we can propose a speculative account of how this result might have been obtained through elaborate arrangement of reinforcing contingencies. Here too, however, it is possible that ability to select out of the complex auditory input those features that are phonologically relevant may develop largely independently of reinforcement, through genetically determined maturation. To the extent that this is true, an account of the development and causation of behavior that fails to consider the structure of the organism will provide no understanding of the real processes involved.
It is often argued that experience, rather than innate capacity to handle information in certain specific ways, must be the factor of overwhelming dominance in determining the specific character of language acquisition, since a child speaks the language of the group in which he lives. But this is a superficial argument. As long as we are speculating, we may consider the possibility that the brain has evolved to the point where, given an input of observed Chinese sentences, it produces (by an induction of apparently fantastic complexity and suddenness) the rules of Chinese grammar, and given an input of observed English sentences, it produces (by, perhaps, exactly the same process of induction) the rules of English grammar; or that given an observed application of a term to certain instances, it automatically predicts the extension to a class of complexly related instances. If clearly recognized as such, this speculation is neither unreasonable nor fantastic; nor, for that matter, is it beyond the bounds of possible study. There is of course no known neural structure capable of performing this task in the specific ways that observation of the resulting behavior might lead us to postulate; but for that matter, the structures capable of accounting for even the simplest kinds of learning have similarly defied detection.[33] Summarizing this brief discussion, it seems that there is neither empirical evidence nor any known argument to support any specific claim about the relative importance of “feedback” from the environment and the “independent contribution of the organism” in the process of language acquisition.
We now turn to the system that Skinner develops specifically for the description of verbal behavior. Since this system is based on the notions stimulus, response, and reinforcement, we can conclude from the preceding sections that it will be vague and arbitrary. For reasons noted in Section 1, however, I think it is important to see in detail how far from the mark any analysis phrased solely in these terms must be and how completely this system fails to account for the facts of verbal behavior. Consider first the term verbal behavior itself. This is defined as “behavior reinforced through the mediation of other persons” (2). The definition is clearly much too broad. It would include as verbal behavior, for example, a rat pressing the bar in a Skinner-box, a child brushing his teeth, a boxer retreating before an opponent, and a mechanic repairing an automobile. Exactly how much of ordinary linguistic behavior is verbal in this sense, however, is something of a question: perhaps, as I have pointed out above, a fairly small fraction of it, if any substantive meaning is assigned to the term reinforced. This definition is subsequently refined by the additional provision that the mediating response of the reinforcing person (the listener) must itself “have been conditioned precisely in order to reinforce the behavior of the speaker” (225, italics his). This still covers the examples given above, if we can assume that the reinforcing behavior of the psychologist, the parent, the opposing boxer, and the paying customer are the result of appropriate training, which is perhaps not unreasonable. A significant part of the fragment of linguistic behavior covered by the earlier definition will no doubt be excluded by the refinement, however. Suppose, for example, that while crossing the street I hear someone shout Watch out for the car and jump out of the way. It can hardly be proposed that my jumping (the mediating, reinforcing response in Skinner’s usage) was conditioned (that is, I was trained to jump) precisely in order to reinforce the behavior of the speaker; and similarly, for a wide class of cases. Skinner’s assertion that with this refined definition “we narrow our subject to what is traditionally recognized as the verbal field” (225) appears to be grossly in error.
Verbal operants are classified by Skinner in terms of their “functional” relation to discriminated stimulus, reinforcement, and other verbal responses. A mand is defined as “a verbal operant in which the response is reinforced by a characteristic consequence and is therefore under the functional control of relevant conditions of deprivation or aversive stimulation” (35). This is meant to include questions, commands, etc. Each of the terms in this definition raises a host of problems. A mand such as Pass the salt is a class of responses. We cannot tell by observing the form of a response whether it belongs to this class (Skinner is very clear about this), but only by identifying the controlling variables. This is generally impossible. Deprivation is defined in the bar-pressing experiment in terms of length of time that the animal has not been fed or permitted to drink. In the present context, however, it is quite a mysterious notion. No attempt is made here to describe a method for determining “relevant conditions of deprivation” independently of the “controlled” response. It is of no help at all to be told (32) that it can be characterized in terms of the operations of the experimenter. If we define deprivation in terms of elapsed time, then at any moment a person is in countless states of deprivation.[34] It appears that we must decide that the relevant condition of deprivation was (say) salt-deprivation, on the basis of the fact that the speaker asked for salt (the reinforcing community which “sets up” the mand is in a similar predicament). In this case, the assertion that a mand is under the control of relevant deprivation is empty, and we are (contrary to Skinner’s intention) identifying the response as a mand completely in terms of form. The word relevant in the definition above conceals some rather serious complications.
In the case of the mand Pass the salt, the word deprivation is not out of place, though it appears to be of little use for functional analysis. Suppose however that the speaker says Give me the book, Take me for a ride, or Let me fix it. What kinds of deprivation can be associated with these mands? How do we determine or measure the relevant deprivation? I think we must conclude in this case, as before, either that the notion deprivation is relevant at most to a minute fragment of verbal behavior, or else that the statement “X is under Y-deprivation” is just an odd paraphrase for “X wants Y,” bearing a misleading and unjustifiable connotation of objectivity.
The notion aversive control is just as confused. This is intended to cover threats, beating, and the like (33). The manner in which aversive stimulation functions is simply described. If a speaker has had a history of appropriate reinforcement (e.g., if a certain response was followed by “cessation of the threat of such injury — of events which have previously been followed by such injury and which are therefore conditioned aversive stimuli”), then he will tend to give the proper response when the threat which had previously been followed by the injury is presented. It would appear to follow from this description that a speaker will not respond properly to the mand Your money or your life (38) unless he has a past history of being killed. But even if the difficulties in describing the mechanism of aversive control are somehow removed by a more careful analysis, it will be of little use for identifying operants for reasons similar to those mentioned in the case of deprivation.
It seems, then, that in Skinner’s terms there is in most cases no way to decide whether a given response is an instance of a particular mand. Hence it is meaningless, within the terms of his system, to speak of the characteristic consequences of a mand, as in the definition above. Furthermore, even if we extend the system so that mands can somehow be identified, we will have to face the obvious fact that most of us are not fortunate enough to have our requests, commands, advice, and so on characteristically reinforced (they may nevertheless exist in considerable strength). These responses could therefore not be considered mands by Skinner. In fact, Skinner sets up a category of “magical mands” (48–49) to cover the case of “mands which cannot be accounted for by showing that they have ever had the effect specified or any similar effect upon similar occasions” (the word ever in this statement should be replaced by characteristically). In these pseudo-mands, “the speaker simply describes the reinforcement appropriate to a given state of deprivation or aversive stimulation.” In other words, given the meaning that we have been led to assign to reinforcement and deprivation, the speaker asks for what he wants. The remark that “a speaker appears to create new mands on the analogy of old ones” is also not very helpful.
Skinner’s claim that his new descriptive system is superior to the traditional one “because its terms can be defined with respect to experimental operations” (45) is, we see once again, an illusion. The statement “X wants Y” is not clarified by pointing out a relation between rate of bar-pressing and hours of food-deprivation; replacing “X wants Y” by “X is deprived of Y” adds no new objectivity to the description of behavior. His further claim for the superiority of the new analysis of mands is that it provides an objective basis for the traditional classification into requests, commands, etc. (38–41). The traditional classification is in terms of the intention of the speaker. But intention, Skinner holds, can be reduced to contingencies of reinforcement, and, correspondingly, we can explain the traditional classification in terms of the reinforcing behavior of the listener. Thus, a question is a mand which “specifies verbal action, and the behavior of the listener permits us to classify it as a request, a command, or a prayer” (39). It is a request if “the listener is independently motivated to reinforce the speaker”; a command if “the listener’s behavior is… reinforced by reducing a threat”; a prayer if the mand “promotes reinforcement by generating an emotional disposition.” The mand is advice if the listener is positively reinforced by the consequences of mediating the reinforcement of the speaker; it is a warning if “by carrying out the behavior specified by the speaker, the listener escapes from aversive stimulation”; and so on. All this is obviously wrong if Skinner is using the words request, command, etc., in anything like the sense of the corresponding English words. The word question does not cover commands. Please pass the salt is a request (but not a question), whether or not the listener happens to be motivated to fulfill it; not everyone to whom a request is addressed is favorably disposed. A response does not cease to be a command if it is not followed; nor does a question become a command if the speaker answers it because of an implied or imagined threat. Not all advice is good advice, and a response does not cease to be advice if it is not followed. Similarly, a warning may be misguided; heeding it may cause aversive stimulation, and ignoring it might be positively reinforcing. In short, the entire classification is beside the point. A moment’s thought is sufficient to demonstrate the impossibility of distinguishing between requests, commands, advice, etc., on the basis of the behavior or disposition of the particular listener. Nor can we do this on the basis of the typical behavior of all listeners. Some advice is never taken, is always bad, etc., and similarly with other kinds of mands. Skinner’s evident satisfaction with this analysis of the traditional classification is extremely puzzling.
Mands are operants with no specified relation to a prior stimulus. A tact, on the other hand, is defined as “a verbal operant in which a response of given form is evoked (or at least strengthened) by a particular object or event or property of an object or event” (81). The examples quoted in the discussion of stimulus control (Section 3) are all tacts. The obscurity of the notion stimulus control makes the concept of the tact rather mystical. Since, however, the tact is “the most important of verbal operants,” it is important to investigate the development of this concept in more detail.
We first ask why the verbal community “sets up” tacts in the child — that is, how the parent is reinforced by setting up the tact. The basic explanation for this behavior of the parent (85–86) is the reinforcement he obtains by the fact that his contact with the environment is extended; to use Skinner’s example, the child may later be able to call him to the telephone. (It is difficult to see, then, how first children acquire tacts, since the parent does not have the appropriate history of reinforcement.) Reasoning in the same way, we may conclude that the parent induces the child to walk so that he can make some money delivering newspapers. Similarly, the parent sets up an “echoic repertoire” (e.g., a phonemic system) in the child because this makes it easier to teach him new vocabulary, and extending the child’s vocabulary is ultimately useful to the parent. “In all these cases we explain the behavior of the reinforcing listener by pointing to an improvement in the possibility of controlling the speaker whom he reinforces” (56). Perhaps this provides the explanation for the behavior of the parent in inducing the child to walk: the parent is reinforced by the improvement in his control of the child when the child’s mobility increases. Underlying these modes of explanation is a curious view that it is somehow more scientific to attribute to a parent a desire to control the child or enhance his own possibilities for action than a desire to see the child develop and extend his capacities. Needless to say, no evidence is offered to support this contention.
Consider now the problem of explaining the response of the listener to a tact. Suppose, for example, that B hears A say fox and reacts appropriately — looks around, runs away, aims his rifle, etc. How can we explain B’s behavior? Skinner rightly rejects analyses of this offered by J. B. Watson and Bertrand Russell. His own equally inadequate analysis proceeds as follows (87–88). We assume (1) “that in the history of [B] the stimulus fox has been an occasion upon which looking around has been followed by seeing a fox” and (2) “that the listener has some current ‘interest in seeing foxes’ — that behavior which depends upon a seen fox for its execution is strong, and that the stimulus supplied by a fox is therefore reinforcing.” B carries out the appropriate behavior, then, because “the heard stimulus fox is the occasion upon which turning and looking about is frequently followed by the reinforcement of seeing a fox,” i.e., his behavior is a discriminated operant. This explanation is unconvincing. B may never have seen a fox and may have no current interest in seeing one, and yet may react appropriately to the stimulus fox.[35] Since exactly the same behavior may take place when neither of the assumptions is fulfilled, some other mechanism must be operative here.
Skinner remarks several times that his analysis of the tact in terms of stimulus control is an improvement over the traditional formulations in terms of reference and meaning. This is simply not true. His analysis is fundamentally the same as the traditional one, though much less carefully phrased. In particular, it differs only by indiscriminate paraphrase of such notions as denotation (reference) and connotation (meaning), which have been kept clearly apart in traditional formulations, in terms of the vague concept stimulus control. In one traditional formulation a descriptive term is said to denote a set of entities and to connote or designate a certain property or condition that an entity must possess or fulfill if the term is to apply to it.[36] Thus, the term vertebrate refers to (denotes, is true of) vertebrates and connotes the property having a spine or something of the sort. This connoted defining property is called the meaning of the term. Two terms may have the same reference but different meanings. Thus, it is apparently true that the creatures with hearts are all and only the vertebrates. If so, then the term creature with a heart refers to vertebrates and designates the property having a heart. This is presumably a different property (a different general condition) from having a spine; hence the terms vertebrate and creature with a heart are said to have different meanings. This analysis is not incorrect (for at least one sense of meaning), but its many limitations have frequently been pointed out.[37] The major problem is that there is no good way to decide whether two descriptive terms designate the same property.[38] As we have just seen, it is not sufficient that they refer to the same objects. Vertebrate and creature with a spine would be said to designate the same property (distinct from that designated by creature with a heart). If we ask why this is so, the only answer appears to be that the terms are synonymous. The notion property thus seems somehow language-bound, and appeal to “defining properties” sheds little light on questions of meaning and synonymy.
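By way of illustration only (the toy domain, the predicate names, and the Python rendering below are invented for the purpose and are no part of either the traditional account or Skinner’s system), the point can be put concretely: two predicates may agree on every object in a domain, giving them the same reference, while being defined by different conditions, giving them different meanings; and nothing in the shared extension settles whether the two defining conditions are “the same property.”

# Illustrative sketch only: coextensive predicates with distinct definitions.
ANIMALS = [
    {"name": "trout",     "has_spine": True,  "has_heart": True},
    {"name": "sparrow",   "has_spine": True,  "has_heart": True},
    {"name": "jellyfish", "has_spine": False, "has_heart": False},
]

def vertebrate(x):             # connotes the condition "having a spine"
    return x["has_spine"]

def creature_with_a_heart(x):  # connotes the condition "having a heart"
    return x["has_heart"]

# Same extension over this domain:
assert [a["name"] for a in ANIMALS if vertebrate(a)] == \
       [a["name"] for a in ANIMALS if creature_with_a_heart(a)]

# But the shared extension tells us nothing about whether the two defining
# conditions are the same property; that is the question left open above.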
Skinner accepts the traditional account in toto, as can be seen from his definition of a tact as a response under control of a property (stimulus) of some physical object or event. We have found that the notion control has no real substance and is perhaps best understood as a paraphrase of denote or connote or, ambiguously, both. The only consequence of adopting the new term stimulus control is that the important differences between reference and meaning are obscured. It provides no new objectivity. The stimulus controlling the response is determined by the response itself; there is no independent and objective method of identification (see Section 3). Consequently, when Skinner defines synonymy as the case in which “the same stimulus leads to quite different responses” (118), we can have no objection. The responses chair and red made alternatively to the same object are not synonymous, because the stimuli are called different. The responses vertebrate and creature with a spine would be considered synonymous because they are controlled by the same property of the object under investigation; in more traditional and no less scientific terms, they evoke the same concept. Similarly, when metaphorical extension is explained as due to “the control exercised by properties of the stimulus which, though present at reinforcement, do not enter into the contingency respected by the verbal community” (92; traditionally, accidental properties), no objection can be raised which has not already been leveled against the traditional account. Just as we could “explain” the response Mozart to a piece of music in terms of subtle properties of the controlling stimuli, we can, with equal facility, explain the appearance of the response sun when no sun is present, as in Juliet is [like] the sun. “We do so by noting that Juliet and the sun have common properties, at least in their effect on the speaker” (93). Since any two objects have indefinitely many properties in common, we can be certain that we will never be at a loss to explain a response of the form A is like B, for arbitrary A and B. It is clear, however, that Skinner’s recurrent claim that his formulation is simpler and more scientific than the traditional account has no basis in fact.
Tacts under the control of private stimuli (Bloomfield’s “displaced speech”) form a large and important class (130–46), including not only such responses as familiar and beautiful, but also verbal responses referring to past, potential, or future events or behavior. For example, the response There was an elephant at the zoo “must be understood as a response to current stimuli, including events within the speaker himself” (143).[39] If we now ask ourselves what proportion of the tacts in actual life are responses to (descriptions of) actual current outside stimulation, we can see just how large a role must be attributed to private stimuli. A minute amount of verbal behavior, outside the nursery, consists of such remarks as This is red and There is a man. The fact that functional analysis must make such a heavy appeal to obscure internal stimuli is again a measure of its actual advance over traditional formulations.
Responses under the control of prior verbal stimuli are considered under a different heading from the tact. An echoic operant is a response which “generates a sound pattern similar to that of the stimulus” (55). It covers only cases of immediate imitation.[40] No attempt is made to define the sense in which a child’s echoic response is “similar” to the stimulus spoken in the father’s bass voice; it seems, though there are no clear statements about this, that Skinner would not accept the account of the phonologist in this respect, but nothing else is offered. The development of an echoic repertoire is attributed completely to differential reinforcement. Since the speaker will do no more, according to Skinner, than what is demanded of him by the verbal community, the degree of accuracy insisted on by this community will determine the elements of the repertoire, whatever these may be (not necessarily phonemes). “In a verbal community which does not insist on a precise correspondence, an echoic repertoire may remain slack and will be less successfully applied to novel patterns.” There is no discussion of such familiar phenomena as the accuracy with which a child will pick up a second language or a local dialect in the course of playing with other children, which seem sharply in conflict with these assertions. No anthropological evidence is cited to support the claim that an effective phonemic system does not develop (this is the substance of the quoted remark) in communities that do not insist on precise correspondence.
A verbal response to a written stimulus (reading) is called textual behavior.
Other verbal responses to verbal stimuli are called intraverbal operants. Paradigm instances are the response four to the stimulus two plus two or the response Paris to the stimulus capital of France. Simple conditioning may be sufficient to account for the response four to two plus two,[41] but the notion of intraverbal response loses all meaning when we find it extended to cover most of the facts of history and many of the facts of science (72, 129); all word association and “flight of ideas” (73–76); all translations and paraphrase (77); reports of things seen, heard, or remembered (315); and, in general, large segments of scientific, mathematical, and literary discourse. Obviously, the kind of explanation that might be proposed for a student’s ability to respond with Paris to capital of France, after suitable practice, can hardly be seriously offered to account for his ability to make a judicious guess in answering the questions (to him new): What is the seat of the French government? … the source of the literary dialect? … the chief target of the German blitzkrieg? etc., or his ability to prove a new theorem, translate a new passage, or paraphrase a remark for the first time or in a new way.
The process of “getting someone to see a point,” to see something your way, or to understand a complex state of affairs (e.g., a difficult political situation or a mathematical proof) is, for Skinner, simply a matter of increasing the strength of the listener’s already available behavior.[42] Since “the process is often exemplified by relatively intellectual scientific or philosophical discourse,” Skinner considers it “all the more surprising that it may be reduced to echoic, textual, or intraverbal supplementation” (269). Again, it is only the vagueness and latitude with which the notions strength and intraverbal response are used that save this from absurdity. If we use these terms in their literal sense, it is clear that understanding a statement cannot be equated to shouting it frequently in a high-pitched voice (high response strength), and a clever and convincing argument cannot be accounted for on the basis of a history of pairings of verbal responses.[43]
A final class of operants, called autoclitics, includes those that are involved in assertion, negation, quantification, qualification of responses, construction of sentences, and the “highly complex manipulations of verbal thinking.” All these acts are to be explained “in terms of behavior which is evoked by or acts upon other behavior of the speaker” (313). Autoclitics are, then, responses to already given responses, or rather, as we find in reading through this section, they are responses to covert or incipient or potential verbal behavior. Among the autoclitics are listed such expressions as I recall, I imagine, for example, assume, let X equal…, the terms of negation, the is of predication and assertion, all, some, if, then, and, in general, all morphemes other than nouns, verbs, and adjectives, as well as grammatical processes of ordering and arrangement. Hardly a remark in this section can be accepted without serious qualification. To take just one example, consider Skinner’s account of the autoclitic all in All swans are white (329). Obviously we cannot assume that this is a tact to all swans as stimulus. It is suggested, therefore, that we take all to be an autoclitic modifying the whole sentence Swans are white. All can then be taken as equivalent to always, or always it is possible to say. Notice, however, that the modified sentence Swans are white is just as general as All swans are white. Furthermore, the proposed translation of all is incorrect if taken literally. It is just as possible to say Swans are green as to say Swans are white. It is not always possible to say either (e.g., while you are saying something else or sleeping). Probably what Skinner means is that the sentence can be paraphrased “X is white is true, for each swan X.” But this paraphrase cannot be given within his system, which has no place for true.
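(In standard quantificational notation, which is mine and not the review’s or Skinner’s, the intended paraphrase is ∀x (Swan(x) → White(x)), read: for every x, if x is a swan, then “x is white” is true.)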
Skinner’s account of grammar and syntax as autoclitic processes (Chap. 13) differs from a familiar traditional account mainly in the use of the pseudo-scientific terms control or evoke in place of the traditional refer. Thus, in The boy runs, the final s of runs is a tact under control of such “subtle properties of a situation” as “the nature of running as an activity rather than an object or property of an object.”[44] (Presumably, then, in The attempt fails, The difficulty remains, His anxiety increases, etc., we must also say that the s indicates that the object described as the attempt is carrying out the activity of failing, etc.) In the boy’s gun, however, the s denotes possession (as, presumably, in the boy’s arrival, … story, … age, etc.) and is under the control of this “relational aspect of the situation” (336). The “relational autoclitic of order” (whatever it may mean to call the order of a set of responses a response to them) in The boy runs the store is under the control of an “extremely complex stimulus situation,” namely, that the boy is running the store (335). And in the hat and the shoe is under the control of the property “pair.” Through in the dog went through the hedge is under the control of the “relation between the going dog and the hedge” (342). In general, nouns are evoked by objects, verbs by actions, and so on. Skinner considers a sentence to be a set of key responses (nouns, verbs, adjectives) on a skeletal frame (346). If we are concerned with the fact that Sam rented a leaky boat, the raw responses to the situation are rent, boat, leak, and Sam. Autoclitics (including order) which qualify these responses, express relations between them, and the like, are then added by a process called composition and the result is a grammatical sentence, one of many alternatives among which selection is rather arbitrary. The idea that sentences consist of lexical items placed in a grammatical frame is of course a traditional one, within both philosophy and linguistics. Skinner adds to it only the very implausible speculation that in the internal process of composition, the nouns, verbs, and adjectives are chosen first and then are arranged, qualified, etc., by autoclitic responses to these internal activities.[45]
This view of sentence structure, whether phrased in terms of autoclitics, syncategorematic expressions, or grammatical and lexical morphemes, is inadequate. Sheep provide wool has no (physical) frame at all, but no other arrangement of these words is an English sentence. The sequences furiously sleep ideas green colorless and friendly young dogs seem harmless have the same frames, but only one is a sentence of English (similarly, only one of the sequences formed by reading these from back to front). Struggling artists can be a nuisance has the same frame as marking papers can be a nuisance, but is quite different in sentence structure, as can be seen by replacing can be by is or are in both cases. There are many other similar and equally simple examples. It is evident that more is involved in sentence structure than insertion of lexical items in grammatical frames; no approach to language that fails to take these deeper processes into account can possibly achieve much success in accounting for actual linguistic behavior.
The preceding discussion covers all the major notions that Skinner introduces in his descriptive system. My purpose in discussing the concepts one by one was to show that in each case, if we take his terms in their literal meaning, the description covers almost no aspect of verbal behavior, and if we take them metaphorically, the description offers no improvement over various traditional formulations. The terms borrowed from experimental psychology simply lose their objective meaning with this extension, and take over the full vagueness of ordinary language. Since Skinner limits himself to such a small set of terms for paraphrase, many important distinctions are obscured. I think that this analysis supports the view expressed in Section 1, that elimination of the independent contribution of the speaker and learner (a result which Skinner considers of great importance, cf. 311–12) can be achieved only at the cost of eliminating all significance from the descriptive system, which then operates at a level so gross and crude that no answers are suggested to the most elementary questions.[46] The questions to which Skinner has addressed his speculations are hopelessly premature. It is futile to inquire into the causation of verbal behavior until much more is known about the specific character of this behavior; and there is little point in speculating about the process of acquisition without much better understanding of what is acquired.
Anyone who seriously approaches the study of linguistic behavior, whether linguist, psychologist, or philosopher, must quickly become aware of the enormous difficulty of stating a problem which will define the area of his investigations, and which will not be either completely trivial or hopelessly beyond the range of present-day understanding and technique. In selecting functional analysis as his problem, Skinner has set himself a task of the latter type. In an extremely interesting and insightful paper,[47] K. S. Lashley has implicitly delimited a class of problems which can be approached in a fruitful way by the linguist and psychologist, and which are clearly preliminary to those with which Skinner is concerned. Lashley recognizes, as anyone must who seriously considers the data, that the composition and production of an utterance is not simply a matter of stringing together a sequence of responses under the control of outside stimulation and intraverbal association, and that the syntactic organization of an utterance is not something directly represented in any simple way in the physical structure of the utterance itself. A variety of observations lead him to conclude that syntactic structure is “a generalized pattern imposed on the specific acts as they occur” (512), and that “a consideration of the structure of the sentence and other motor sequences will show…that there are, behind the overtly expressed sequences, a multiplicity of integrative processes which can only be inferred from the final results of their activity” (509). He also comments on the great difficulty of determining the “selective mechanisms” used in the actual construction of a particular utterance (522).
Although present-day linguistics cannot provide a precise account of these integrative processes, imposed patterns, and selective mechanisms, it can at least set itself the problem of characterizing these completely. It is reasonable to regard the grammar of a language L ideally as a mechanism that provides an enumeration of the sentences of L in something like the way in which a deductive theory gives an enumeration of a set of theorems. (Grammar, in this sense of the word, includes phonology.) Furthermore, the theory of language can be regarded as a study of the formal properties of such grammars, and, with a precise enough formulation, this general theory can provide a uniform method for determining, from the process of generation of a given sentence, a structural description which can give a good deal of insight into how this sentence is used and understood. In short, it should be possible to derive from a properly formulated grammar a statement of the integrative processes and generalized patterns imposed on the specific acts that constitute an utterance. The rules of a grammar of the appropriate form can be subdivided into the two types, optional and obligatory; only the latter must be applied in generating an utterance. The optional rules of the grammar can be viewed, then, as the selective mechanisms involved in the production of a particular utterance. The problem of specifying these integrative processes and selective mechanisms is nontrivial and not beyond the range of possible investigation. The results of such a study might, as Lashley suggests, be of independent interest for psychology and neurology (and conversely). Although such a study, even if successful, would by no means answer the major problems involved in the investigation of meaning and the causation of behavior, it surely will not be unrelated to these. It is at least possible, furthermore, that such a notion as semantic generalization, to which such heavy appeal is made in all approaches to language in use, conceals complexities and specific structure of inference not far different from those that can be studied and exhibited in the case of syntax, and that consequently the general character of the results of syntactic investigations may be a corrective to oversimplified approaches to the theory of meaning.
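The conception just outlined can be made concrete with a deliberately trivial example. The sketch below is mine, not the formalism the review has in mind, and its miniature grammar is invented purely for illustration; it treats a grammar as a device that enumerates word strings together with their derivations, applying obligatory rules wherever they are applicable and letting the optional rules supply the choices that distinguish one derivation, and hence one sentence, from another.

# A miniature illustration (invented for the purpose): a grammar as a device
# that enumerates sentences. Obligatory rules expand every nonterminal they
# cover; optional rules supply the alternatives among which a derivation chooses.

OBLIGATORY = {
    "S":  [["NP", "VP"]],
    "VP": [["V", "NP"]],
}

OPTIONAL = {
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "N":   [["boy"], ["store"]],
    "V":   [["runs"], ["sees"]],
    "ADJ": [["small"]],
}

def expand(symbols):
    """Yield (word string, optional choices) for every derivation of `symbols`."""
    if not symbols:
        yield [], []
        return
    head, rest = symbols[0], symbols[1:]
    if head in OBLIGATORY:
        alternatives, record = OBLIGATORY[head], False
    elif head in OPTIONAL:
        alternatives, record = OPTIONAL[head], True
    else:  # a terminal word: keep it and expand the remainder
        for tail, choices in expand(rest):
            yield [head] + tail, choices
        return
    for i, alternative in enumerate(alternatives):
        for tail, choices in expand(alternative + rest):
            yield tail, ([(head, i)] if record else []) + choices

if __name__ == "__main__":
    for sentence, choices in expand(["S"]):
        print(" ".join(sentence), "  <- optional choices:", choices)

Running the sketch prints each generated string together with the record of optional choices that produced it; the record is a crude stand-in for the selective mechanisms spoken of above, and nothing beyond that is claimed for it.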
The behavior of the speaker, listener, and learner of language constitutes, of course, the actual data for any study of language. The construction of a grammar which enumerates sentences in such a way that a meaningful structural description can be determined for each sentence does not in itself provide an account of this actual behavior. It merely characterizes abstractly the ability of one who has mastered the language to distinguish sentences from nonsentences, to understand new sentences (in part), to note certain ambiguities, etc. These are very remarkable abilities. We constantly read and hear new sequences of words, recognize them as sentences, and understand them. It is easy to show that the new events that we accept and understand as sentences are not related to those with which we are familiar by any simple notion of formal (or semantic or statistical) similarity or identity of grammatical frame. Talk of generalization in this case is entirely pointless and empty. It appears that we recognize a new item as a sentence not because it matches some familiar item in any simple way, but because it is generated by the grammar that each individual has somehow and in some form internalized. And we understand a new sentence, in part, because we are somehow capable of determining the process by which this sentence is derived in this grammar.
Suppose that we manage to construct grammars having the properties outlined above. We can then attempt to describe and study the achievement of the speaker, listener, and learner. The speaker and the listener, we must assume, have already acquired the capacities characterized abstractly by the grammar. The speaker’s task is to select a particular compatible set of optional rules. If we know, from grammatical study, what choices are available to him and what conditions of compatibility the choices must meet, we can proceed meaningfully to investigate the factors that lead him to make one or another choice. The listener (or reader) must determine, from an exhibited utterance, what optional rules were chosen in the construction of the utterance. It must be admitted that the ability of a human being to do this far surpasses our present understanding. The child who learns a language has in some sense constructed the grammar for himself on the basis of his observation of sentences and nonsentences (i.e., corrections by the verbal community). Study of the actual observed ability of a speaker to distinguish sentences from nonsentences, detect ambiguities, etc., apparently forces us to the conclusion that this grammar is of an extremely complex and abstract character, and that the young child has succeeded in carrying out what from the formal point of view, at least, seems to be a remarkable type of theory construction. Furthermore, this task is accomplished in an astonishingly short time, to a large extent independently of intelligence, and in a comparable way by all children. Any theory of learning must cope with these facts.
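Continuing the miniature sketch given earlier (again an invented illustration, not a claim about how listeners actually proceed; it presupposes the expand function and rule tables defined there), the listener’s task of determining which optional rules were chosen can be pictured, for so small a grammar, as a search through the grammar’s own enumeration.

# Continues the miniature sketch above; `expand`, OBLIGATORY, and OPTIONAL are
# assumed from that sketch. The "listener" recovers the optional choices behind
# an observed utterance by searching the enumeration, which is feasible here
# only because the invented grammar generates finitely many strings.

def infer_choices(utterance):
    """Return every record of optional choices that derives the given string."""
    target = utterance.split()
    return [choices for sentence, choices in expand(["S"]) if sentence == target]

if __name__ == "__main__":
    print(infer_choices("the small boy sees the store"))
    # prints: [[('NP', 1), ('ADJ', 0), ('N', 0), ('V', 1), ('NP', 0), ('N', 1)]]

Nothing of the sort is feasible for a natural language, whose grammar is neither finite in output nor known in this explicit form; the sketch only fixes the shape of the problem.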
It is not easy to accept the view that a child is capable of constructing an extremely complex mechanism for generating a set of sentences, some of which he has heard, or that an adult can instantaneously determine whether (and if so, how) a particular item is generated by this mechanism, which has many of the properties of an abstract deductive theory. Yet this appears to be a fair description of the performance of the speaker, listener, and learner. If this is correct, we can predict that a direct attempt to account for the actual behavior of speaker, listener, and learner, not based on a prior understanding of the structure of grammars, will achieve very limited success. The grammar must be regarded as a component in the behavior of the speaker and listener which can only be inferred, as Lashley has put it, from the resulting physical acts. The fact that all normal children acquire essentially comparable grammars of great complexity with remarkable rapidity suggests that human beings are somehow specially designed to do this, with data-handling or “hypothesis-formulating” ability of unknown character and complexity.[48] The study of linguistic structure may ultimately lead to some significant insights into this matter. At the moment the question cannot be seriously posed, but in principle it may be possible to study the problem of determining what the built-in structure of an information-processing (hypothesis-forming) system must be to enable it to arrive at the grammar of a language from the available data in the available time. At any rate, just as the attempt to eliminate the contribution of the speaker leads to a “mentalistic” descriptive system that succeeds only in blurring important traditional distinctions, a refusal to study the contribution of the child to language learning permits only a superficial account of language acquisition, with a vast and unanalyzed contribution attributed to a step called generalization which in fact includes just about everything of interest in this process. If the study of language is limited in these ways, it seems inevitable that major aspects of verbal behavior will remain a mystery.
Notes
[1] Skinner’s confidence in recent achievements in the study of animal behavior and their applicability to complex human behavior does not appear to be widely shared. In many recent publications of confirmed behaviorists there is a prevailing note of skepticism with regard to the scope of these achievements. For representative comments, see the contributions to Modern Learning Theory (by W. K. Estes et al.; New York: Appleton-Century-Crofts, Inc., 1954); B. R. Bugelski, Psychology of Learning (New York: Holt, Rinehart & Winston, Inc., 1956); S. Koch, in Nebraska Symposium on Motivation, 58 (Lincoln, 1956); W. S. Verplanck, “Learned and Innate Behavior,” Psych. Rev., 52 (1955), 139. Perhaps the strongest view is that of H. Harlow, who has asserted (“Mice, Monkeys, Men, and Motives,” Psych. Rev., 60 [1953], 23–32) that “a strong case can be made for the proposition that the importance of the psychological problems studied during the last 15 years has decreased as a negatively accelerated function approaching an asymptote of complete indifference.” N. Tinbergen, a leading representative of a different approach to animal behavior studies (comparative ethology), concludes a discussion of functional analysis with the comment that “we may now draw the conclusion that the causation of behavior is immensely more complex than was assumed in the generalizations of the past. A number of internal and external factors act upon complex central nervous structures. Second, it will be obvious that the facts at our disposal are very fragmentary indeed” — The Study of Instinct (Toronto: Oxford Univ. Press, 1951), p. 74.
[2] In Behavior of Organisms (New York: Appleton-Century-Crofts, Inc., 1938), Skinner remarks that “although a conditioned operant is the result of the correlation of the response with a particular reinforcement, a relation between it and a discriminative stimulus acting prior to the response is the almost universal rule” (178–79). Even emitted behavior is held to be produced by some sort of “originating force” (51) which, in the case of operant behavior, is not under experimental control. The distinction between eliciting stimuli, discriminated stimuli, and “originating forces” has never been adequately clarified and becomes even more confusing when private internal events are considered to be discriminated stimuli (see below).
[3] In a famous experiment, chimpanzees were taught to perform complex tasks to receive tokens which had become secondary reinforcers because of association with food. The idea that money, approval, prestige, etc. actually acquire their motivating effects on human behavior according to this paradigm is unproved, and not particularly plausible. Many psychologists within the behaviorist movement are quite skeptical about this (cf. 23n). As in the case of most aspects of human behavior, the evidence about secondary reinforcement is so fragmentary, conflicting, and complex that almost any view can find some support.
[4] Skinner’s remark quoted above about the generality of his basic results must be understood in the light of the experimental limitations he has imposed. If it were true in any deep sense that the basic processes in language are well understood and free of species restriction, it would be extremely odd that language is limited to man. With the exception of a few scattered observations (cf. his article, “A Case History in Scientific Method,” The American Psychologist, 11 [1956] 221–33), Skinner is apparently basing this claim on the fact that qualitatively similar results are obtained with bar pressing of rats and pecking of pigeons under special conditions of deprivation and various schedules of reinforcement. One immediately questions how much can be based on these facts, which are in part at least an artifact traceable to experimental design and the definition of stimulus and response in terms of smooth dynamic curves (see below). The dangers inherent in any attempt to extrapolate to complex behavior from the study of such simple responses as bar pressing should be obvious and have often been commented on (cf., e.g., Harlow, op. cit.). The generality of even the simplest results is open to serious question. Cf. in this connection M. E. Bitterman, J. Wodinsky, and D. K. Candland, “Some Comparative Psychology,” Am. Jor. of Psych., 71 (1958), 94–110, where it is shown that there are important qualitative differences in solution of comparable elementary problems by rats and fish.
[5] An analogous argument, in connection with a different aspect of Skinner’s thinking, is given by M. Scriven in “A Study of Radical Behaviorism,” Univ. of Minn. Studies in Philosophy of Science, I. Cf. Verplanck’s contribution to Modern Learning Theory, op. cit. pp. 283–88, for more general discussion of the difficulties in formulating an adequate definition of stimulus and response. He concludes, quite correctly, that in Skinner’s sense of the word, stimuli are not objectively identifiable independently of the resulting behavior, nor are they manipulable. Verplanck presents a clear discussion of many other aspects of Skinner’s system, commenting on the untestability of many of the so-called “laws of behavior” and the limited scope of many of the others, and the arbitrary and obscure character of Skinner’s notion of lawful relation; and, at the same time, noting the importance of the experimental data that Skinner has accumulated.
[6] In Behavior of Organisms, Skinner apparently was willing to accept this consequence. He insists (41–42) that the terms of casual description in the popular vocabulary are not validly descriptive until the defining properties of stimulus and response are specified, the correlation is demonstrated experimentally, and the dynamic changes in it are shown to be lawful. Thus, in describing a child as hiding from a dog, “it will not be enough to dignify the popular vocabulary by appealing to essential properties of dogness or hidingness and to suppose them intuitively known.” But this is exactly what Skinner does in the book under review, as we will see directly.
[7] 253f. and elsewhere, repeatedly. As an example of how well we can control behavior using the notions developed in this book, Skinner shows here how he would go about evoking the response pencil. The most effective way, he suggests, is to say to the subject, “Please say pencil” (our chances would, presumably, be even further improved by use of “aversive stimulation,” e.g., holding a gun to his head). We can also “make sure that no pencil or writing instrument is available, then hand our subject a pad of paper appropriate to pencil sketching, and offer him a handsome reward for a recognizable picture of a cat.” It would also be useful to have voices saying pencil or pen and … in the background; signs reading pencil or pen and …; or to place a “large and unusual pencil in an unusual place clearly in sight.” “Under such circumstances, it is highly probable that our subject will say pencil.” “The available techniques are all illustrated in this sample.” These contributions of behavior theory to the practical control of human behavior are amply illustrated elsewhere in the book, as when Skinner shows (113–14) how we can evoke the response red (the device suggested is to hold a red object before the subject and say, “Tell me what color this is”).
In fairness, it must be mentioned that there are certain nontrivial applications of operant conditioning to the control of human behavior. A wide variety of experiments have shown that the number of plural nouns (for example) produced by a subject will increase if the experimenter says “right” or “good” when one is produced (similarly, positive attitudes on a certain issue, stories with particular content, etc.; cf. L. Krasner, “Studies of the Conditioning of Verbal Behavior,” Psych. Bull., 55 [1958], for a survey of several dozen experiments of this kind, mostly with positive results). It is of some interest that the subject is usually unaware of the process. Just what insight this gives into normal verbal behavior is not obvious. Nevertheless, it is an example of positive and not totally expected results using the Skinnerian paradigm.
[8] “Are Theories of Learning Necessary?”, Psych. Rev., 57 (1950), 193–216.
[9] And elsewhere. In his paper “Are Theories of Learning Necessary?” Skinner considers the problem how to extend his analysis of behavior to experimental situations in which it is impossible to observe frequencies, rate of response being the only valid datum. His answer is that “the notion of probability is usually extrapolated to cases in which a frequency analysis cannot be carried out. In the field of behavior we arrange a situation in which frequencies are available as data, but we use the notion of probability in analyzing or formulating instances or even types of behavior which are not susceptible to this analysis” (199). There are, of course, conceptions of probability not based directly on frequency, but I do not see how any of these apply to the cases that Skinner has in mind. I see no way of interpreting the quoted passage other than as signifying an intention to use the word probability in describing behavior quite independently of whether the notion of probability is at all relevant.
[10] Fortunately, “In English this presents no great difficulty” since, for example, “relative pitch levels … are not … important” (25). No reference is made to the numerous studies of the function of relative pitch levels and other intonational features in English.
[11] The vagueness of the word tendency, as opposed to frequency, saves the latter quotation from the obvious incorrectness of the former. Nevertheless, a good deal of stretching is necessary. If tendency has anything like its ordinary meaning, the remark is clearly false. One may believe strongly the assertion that Jupiter has four moons, that many of Sophocles’ plays have been irretrievably lost, that the earth will burn to a crisp in ten million years, and so on, without experiencing the slightest tendency to act upon these verbal stimuli. We may, of course, turn Skinner’s assertion into a very unilluminating truth by defining “tendency to act” to include tendencies to answer questions in certain ways, under motivation to say what one believes is true.
[12] One should add, however, that it is in general not the stimulus as such that is reinforcing, but the stimulus in a particular situational context. Depending on experimental arrangement, a particular physical event or object may be reinforcing, punishing, or unnoticed. Because Skinner limits himself to a particular, very simple experimental arrangement, it is not necessary for him to add this qualification, which would not be at all easy to formulate precisely. But it is of course necessary if he expects to extend his descriptive system to behavior in general.
[13] This has been frequently noted.
[14] See, for example, “Are Theories of Learning Necessary?”, op. cit., p. 199. Elsewhere, he suggests that the term learning be restricted to complex situations, but these are not characterized.
[15] “A child acquires verbal behavior when relatively unpatterned vocalizations, selectively reinforced, gradually assume forms which produce appropriate consequences in a given verbal community” (31). “Differential reinforcement shapes up all verbal forms, and when a prior stimulus enters into the contingency, reinforcement is responsible for its resulting control…. The availability of behavior, its probability or strength, depends on whether reinforcements continue in effect and according to what schedules” (203–4); elsewhere, frequently.
[16] Talk of schedules of reinforcement here is entirely pointless. How are we to decide, for example, according to what schedules covert reinforcement is arranged, as in thinking or verbal fantasy, or what the scheduling is of such factors as silence, speech, and appropriate future reactions to communicated information?
[17] See, for example, N. E. Miller and J. Dollard, Social Learning and Imitation (New York, 1941), pp. 82–83, for a discussion of the “meticulous training” that they seem to consider necessary for a child to learn the meanings of words and syntactic patterns. The same notion is implicit in O. H. Mowrer’s speculative account of how language might be acquired, in Learning Theory and Personality Dynamics (New York: The Ronald Press, Inc., 1950), Chap. 23. Actually, the view appears to be quite general.
[18] For a general review and analysis of this literature, see D. L. Thistlethwaite, “A Critical Review of Latent Learning and Related Experiments,” Psych. Bull., 48 (1951), 97–129. K. MacCorquodale and P. E. Meehl, in their contribution to Modern Learning Theory, op. cit., carry out a serious and considered attempt to handle the latent learning material from the standpoint of drive-reduction theory, with (as they point out) not entirely satisfactory results. W. H. Thorpe reviews the literature from the standpoint of the ethologist, adding also material on homing and topographical orientation (Learning and Instinct in Animals [Cambridge, 1956]).
[19] Theories of Learning, 214 (1956).
[20] D. E. Berlyne, “Novelty and Curiosity as Determinants of Exploratory Behavior,” Brit. Jor. of Psych., 41 (1950), 68–80; id., “Perceptual Curiosity in the Rat,” Jor. of Comp. Physiol. Psych., 48 (1955), 238–46; W. R. Thompson and L. M. Solomon, “Spontaneous Pattern Discrimination in the Rat,” ibid., 47 (1954), 104–7.
[21] K. C. Montgomery, “The Role of the Exploratory Drive in Learning,” ibid. pp. 60–63. Many other papers in the same journal are designed to show that exploratory behavior is a relatively independent primary “drive” aroused by novel external stimulation.
[22] R. A. Butler, “Discrimination Learning by Rhesus Monkeys to Visual-Exploration Motivation,” ibid., 46 (1953), 95–98. Later experiments showed that this “drive” is highly persistent, as opposed to derived drives which rapidly extinguish.
[23] H. F. Harlow, M. K. Harlow, and D. R. Meyer, “Learning Motivated by a Manipulation Drive,” Jor. Exp. Psych., 40 (1950), 228–34, and later investigations initiated by Harlow. Harlow has been particularly insistent on maintaining the inadequacy of physiologically based drives and homeostatic need states for explaining the persistence of motivation and rapidity of learning in primates. He points out, in many papers, that curiosity, play, exploration, and manipulation are, for primates, often more potent drives than hunger and the like, and that they show none of the characteristics of acquired drives. Hebb also presents behavioral and supporting neurological evidence in support of the view that in higher animals there is a positive attraction in work, risk, puzzle, intellectual activity, mild fear and frustration, and so on (“Drives and the CNS,” Psych. Rev., 62 [1955], 243–54). He concludes that “we need not work out tortuous and improbable ways to explain why men work for money, why children learn without pain, why people dislike doing nothing.” In a brief note (“Early Recognition of the Manipulative Drive in Monkeys,” British Journal of Animal Behavior, 3 [1955], 71–72), W. Dennis calls attention to the fact that early investigators (G. J. Romanes, 1882; E. L. Thorndike, 1901), whose “perception was relatively unaffected by learning theory, did note the intrinsically motivated behavior of monkeys,” although, he asserts, no similar observations on monkeys have been made until Harlow’s experiments. He quotes Romanes (Animal Intelligence [1882]) as saying that “much the most striking feature in the psychology of this animal, and the one which is least like anything met with in other animals, was the tireless spirit of investigation.” Analogous developments, in which genuine discoveries have blinded systematic investigators to the important insights of earlier work, are easily found within recent structural linguistics as well.
[24] Thus, J. S. Brown, in commenting on a paper of Harlow’s in Current Theory and Research in Motivation (Lincoln: Univ. of Nebraska Press, 1953), argues that “in probably every instance [of the experiments cited by Harlow] an ingenious drive-reduction theorist could find some fragment of fear, insecurity, frustration, or whatever, that he could insist was reduced and hence was reinforcing” (53). The same sort of thing could be said for the ingenious phlogiston or ether theorist.
[25] Cf. H. G. Birch and M. E. Bitterman, “Reinforcement and Learning: The Process of Sensory Integration,” Psych. Rev., 56 (1949), 292–308.
[26] See, for example, his paper “A Physiological Study of Reward” in D. C. McClelland, ed., Studies in Motivation (New York: Appleton-Century-Crofts, Inc., 1955), pp. 134–43.
[27] See Thorpe, op. cit., particularly pp. 115–18 and 337–76, for an excellent discussion of this phenomenon, which has been brought to prominence particularly by the work of K. Lorenz (cf. “Der Kumpan in der Umwelt des Vogels,” parts of which are reprinted in English translation in C. H. Schiller, ed., Instinctive Behavior [New York: International Universities Press, 1957], pp. 83–128).
[28] Op. cit., p. 372.
[29] See, e.g., J. Jaynes, “Imprinting: Interaction of Learned and Innate Behavior,” Jor. of Comp. Physiol. Psych., 49 (1956), 201–6, where the conclusion is reached that “the experiments prove that without any observable reward, young birds of this species follow a moving stimulus object and very rapidly come to prefer that object to others.”
[30] Of course, it is perfectly possible to incorporate this fact within the Skinnerian framework. If, for example, a child watches an adult using a comb and then, with no instruction, tries to comb his own hair, we can explain this act by saying that he performs it because he finds it reinforcing to do so, or because of the reinforcement provided by behaving like a person who is “reinforcing” (cf. 164). Similarly, an automatic explanation is available for any other behavior. It seems strange at first that Skinner pays so little attention to the literature on latent learning and related topics, considering the tremendous reliance that he places on the notion of reinforcement; I have seen no reference to it in his writings. Similarly, F. S. Keller and W. N. Schoenfeld, in what appears to be the only text written under predominantly Skinnerian influence, Principles of Psychology (New York: Appleton-Century-Crofts, Inc., 1950), dismiss the latent learning literature in one sentence as “beside the point,” serving only “to obscure, rather than clarify, a fundamental principle” (the law of effect, 41). However, this neglect is perfectly appropriate in Skinner’s case. To the drive-reductionist, or anyone else for whom the notion reinforcement has some substantive meaning, these experiments and observations are important (and often embarrassing). But in the Skinnerian sense of the word, neither these results nor any conceivable others can cast any doubt on the claim that reinforcement is essential for the acquisition and maintenance of behavior. Behavior certainly has some concomitant circumstances, and whatever they are, we can call them reinforcement.
[31] Tinbergen, op.cit., Chap. VI, reviews some aspects of this problem, discussing the primary role of maturation in the development of many complex motor patterns (e.g., flying, swimming) in lower organisms, and the effect of an “innate disposition to learn” in certain specific ways and at certain specific times. Cf. also P. Schiller, “Innate Motor Action as a Basis for Learning,” in C. H. Schiller, ed., Instinctive Behavior (New York: International Universities Press, 1957), pp. 265–88, for a discussion of the role of maturing motor patterns in apparently insightful behavior in the chimpanzee.
Lenneberg (“The Capacity for Language Acquisition,” in J. A. Fodor and J. J. Katz, eds., The Structure of Language [Prentice-Hall, Inc., 1964]) presents a very interesting discussion of the part that biological structure may play in the acquisition of language, and the dangers in neglecting this possibility.
[32] From among many cited by Tinbergen, op. cit., p. 85.
[33] Cf. K. S. Lashley, “In Search of the Engram,” Symposium of the Society for Experimental Biology, 4 (1950), 454–82. R. Sperry, “On the Neural Basis of the Conditioned Response,” British Journal of Animal Behavior, 3 (1955), 41–44, argues that to account for the experimental results of Lashley and others, and for other facts that he cites, it is necessary to assume that high-level cerebral activity of the type of insight, expectancy, and so on is involved even in simple conditioning. He states that “we still lack today a satisfactory picture of the underlying neural mechanism” of the conditioned response.
[34] Furthermore, the motivation of the speaker does not, except in the simplest cases, correspond in intensity to the duration of deprivation. An obvious counter-example is what Hebb has called the “salted-nut phenomenon” (Organization of Behavior [New York, 1949], p. 199). The difficulty is of course even more serious when we consider deprivations not related to physiological drives.
[35] Just as he may have the appropriate reaction, both emotional and behavioral, to such utterances as the volcano is erupting or there’s a homicidal maniac in the next room without any previous pairing of the verbal and the physical stimulus. Skinner’s discussion of Pavlovian conditioning in language (154) is similarly unconvincing.
[36] J. S. Mill, A System of Logic (1843). R. Carnap gives a recent reformulation in “Meaning and Synonymy in Natural Languages,” Phil. Studies, 6 (1955), 33–47, defining the meaning (intension) of a predicate Q for a speaker X as “the general condition which an object y must fulfill in order for X to be willing to ascribe the predicate Q to y.” The connotation of an expression is often said to constitute its “cognitive meaning” as opposed to its “emotive meaning,” which is, essentially, the emotional reaction to the expression.
Whether or not this is the best way to approach meaning, it is clear that denotation, cognitive meaning, and emotive meaning are quite different things. The differences are often obscured in empirical studies of meaning, with much consequent confusion. Thus, Osgood has set himself the task of accounting for the fact that a stimulus comes to be a sign for another stimulus (a buzzer becomes a sign for food, a word for a thing, etc.). This is clearly (for linguistic signs) a problem of denotation. The method that he actually develops for quantifying and measuring meaning (cf. C. E. Osgood, G. Suci, P. Tannenbaum, The Measurement of Meaning [Urbana: Univ. of Illinois Press, 1957]) applies, however, only to emotive meaning. Suppose, for example, that A hates both Hitler and science intensely, and considers both highly potent and “active,” while B, agreeing with A about Hitler, likes science very much, although he considers it rather ineffective and not too important. Then, A may assign to “Hitler” and “science” the same position on the semantic differential, while B will assign “Hitler” the same position as A did, but “science” a totally different position. Yet, A does not think that “Hitler” and “science” are synonymous or that they have the same reference, and A and B may agree precisely on the cognitive meaning of “science.” Clearly, it is the attitude toward the things (the emotive meaning of the words) that is being measured here. There is a gradual shift in Osgood’s account from denotation to cognitive meaning to emotive meaning. The confusion is caused, no doubt, by the fact that the term meaning is used in all three senses (and others). [See J. Carroll’s review of the book by Osgood, Suci, and Tannenbaum in Language, 35, No. 1 (1959).]
[37] Most clearly by Quine. See From a Logical Point of View (Cambridge, 1953), especially Chaps. 2, 3, and 7.
[38] A method for characterizing synonymy in terms of reference is suggested by Goodman, “On Likeness of Meaning,” Analysis, 10 (1949), 1–7. Difficulties are discussed by Goodman, “On Some Differences about Meaning,” ibid., 13 (1953), 90–96. Carnap, op. cit., presents a very similar idea (Section 6), but somewhat misleadingly phrased, since he does not bring out the fact that only extensional (referential) notions are being used.
[39] In general, the examples discussed here are badly handled, and the success of the proposed analyses is overstated. In each case, it is easy to see that the proposed analysis, which usually has an air of objectivity, is not equivalent to the analyzed expression. To take just one example, the response I am looking for my glasses is certainly not equivalent to the proposed paraphrases: “When I have behaved in this way in the past, I have found my glasses and have then stopped behaving in this way,” or “Circumstances have arisen in which I am inclined to emit any behavior which in the past has led to the discovery of my glasses; such behavior includes the behavior of looking in which I am now engaged.” One may look for one’s glasses for the first time; or one may emit the same behavior in looking for one’s glasses as in looking for one’s watch, in which case I am looking for my glasses and I am looking for my watch are equivalent, under the Skinnerian paraphrase. The difficult questions of purposiveness cannot be handled in this superficial manner.
[40] Skinner takes great pains, however, to deny the existence in human beings (or parrots) of any innate faculty or tendency to imitate. His only argument is that no one would suggest an innate tendency to read, yet reading and echoic behavior have similar “dynamic properties.” This similarity, however, simply indicates the grossness of his descriptive categories. In the case of parrots, Skinner claims that they have no instinctive capacity to imitate, but only to be reinforced by successful imitation (59). Given Skinner’s use of the word reinforcement, it is difficult to perceive any distinction here, since exactly the same thing could be said of any other instinctive behavior. For example, where another scientist would say that a certain bird instinctively builds a nest in a certain way, we could say in Skinner’s terminology (equivalently) that the bird is instinctively reinforced by building the nest in this way. One is therefore inclined to dismiss this claim as another ritual introduction of the word reinforce. Though there may, under some suitable clarification, be some truth in it, it is difficult to see how many of the cases reported by competent observers can be handled if reinforcement is given some substantive meaning. Cf. Thorpe, op. cit., p. 353f.; K. Lorenz, King Solomon’s Ring (New York, 1952), pp. 85–88; even Mowrer, who tries to show how imitation might develop through secondary reinforcement, cites a case, op. cit., p. 694, which he apparently believes, but in which this could hardly be true. In young children, it seems most implausible to explain imitation in terms of secondary reinforcement.
[41] Although even this possibility is limited. If we were to take these paradigm instances seriously, it should follow that a child who knows how to count from one to 100 could learn an arbitrary 10 × 10 matrix with these numbers as entries as readily as the multiplication table.
[42] Similarly, “the universality of a literary work refers to the number of potential readers inclined to say the same thing” (275; i.e., the most “universal” work is a dictionary of clichés and greetings); a speaker is “stimulating” if he says what we are about to say ourselves (272); etc.
[43] Similarly, consider Skinner’s contention (362–65) that communication of knowledge or facts is just the process of making a new response available to the speaker. Here the analogy to animal experiments is particularly weak. When we train a rat to carry out some peculiar act, it makes sense to consider this a matter of adding a response to his repertoire. In the case of human communication, however, it is very difficult to attach any meaning to this terminology. If A imparts to B the information (new to B) that the railroads face collapse, in what sense can the response The railroads face collapse be said to be now, but not previously, available to B? Surely B could have said it before (not knowing whether it was true), and known that it was a sentence (as opposed to Collapse face railroads the). Nor is there any reason to assume that the response has increased in strength, whatever this means exactly (e.g., B may have no interest in the fact, or he may want it suppressed). It is not clear how we can characterize this notion of “making a response available” without reducing Skinner’s account of “imparting knowledge” to a triviality.
[44] (332). On the next page, however, the s in the same example indicates that “the object described as the boy possesses the property of running.” The difficulty of even maintaining consistency with a conceptual scheme like this is easy to appreciate.
[45] One might just as well argue that exactly the opposite is true. The study of hesitation pauses has shown that these tend to occur before the large categories — noun, verb, adjective; this finding is usually described by the statement that the pauses occur where there is maximum uncertainty or information. Insofar as hesitation indicates on-going composition (if it does at all), it would appear that the “key responses” are chosen only after the “grammatical frame.” Cf. C. E. Osgood, unpublished paper; F. Goldman-Eisler, “Speech Analysis and Mental Processes,” Language and Speech, 1 (1958), 67.
[46] E.g., what are in fact the actual units of verbal behavior? Under what conditions will a physical event capture the attention (be a stimulus) or be a reinforcer? How do we decide what stimuli are in “control” in a specific case? When are stimuli “similar”? And so on. (It is not interesting to be told, e.g., that we say Stop to an automobile or billiard ball because they are sufficiently similar to reinforcing people [46].) The use of unanalyzed notions like similar and generalization is particularly disturbing, since it indicates an apparent lack of interest in every significant aspect of the learning or the use of language in new situations. No one has ever doubted that in some sense, language is learned by generalization, or that novel utterances and situations are in some way similar to familiar ones. The only matter of serious interest is the specific “similarity.” Skinner has, apparently, no interest in this. Keller and Schoenfeld, op. cit., proceed to incorporate these notions (which they identify) into their Skinnerian “modern objective psychology” by defining two stimuli to be similar when “we make the same sort of response to them” (124; but when are responses of the “same sort”?). They do not seem to notice that this definition converts their “principle of generalization” (116), under any reasonable interpretation of this, into a tautology. It is obvious that such a definition will not be of much help in the study of language learning or construction of new responses in appropriate situations.
[47] “The Problem of Serial Order in Behavior,” in L. A. Jeffress, ed., Hixon Symposium on Cerebral Mechanisms in Behavior (New York: John Wiley & Sons Inc., 1951). Reprinted in F. A. Beach, D. O. Hebb, C. T. Morgan, H. W. Nissen, eds., The Neuropsychology of Lashley (New York: McGraw-Hill Book Company, 1960). Page references are to the latter.
[48] There is nothing essentially mysterious about this. Complex innate behavior patterns and innate “tendencies to learn in specific ways” have been carefully studied in lower organisms. Many psychologists have been inclined to believe that such biological structure will not have an important effect on acquisition of complex behavior in higher organisms, but I have not been able to find any serious justification for this attitude. Some recent studies have stressed the necessity for carefully analyzing the strategies available to the organism, regarded as a complex “information-processing system” (cf. J. S. Bruner, J. J. Goodnow, and G. A. Austin, A Study of Thinking [New York, 1956]; A. Newell, J. C. Shaw, and H. A. Simon, “Elements of a Theory of Human Problem Solving,” Psych. Rev., 65 [1958], 151–66), if anything significant is to be said about the character of human learning. These may be largely innate, or developed by early learning processes about which very little is yet known. (But see Harlow, “The Formation of Learning Sets,” Psych. Rev., 56 [1949], 51–65, and many later papers, where striking shifts in the character of learning are shown as a result of early training; also D. O. Hebb, Organization of Behavior, 109 ff.) They are undoubtedly quite complex. Cf. Lenneberg, op. cit., and R. B. Lees, review of N. Chomsky’s Syntactic Structures in Language, 33 (1957), 406f., for discussion of the topics mentioned in this section.