Whit Tabor’s Research Projects

Last update: October 3, 2012

People, at least in this place and time, are very focused on order. Although we often find ourselves in situations of flux, where sense is in question, and things we have counted on do not continue to be reliable, we nevertheless tend to respond to those situations by looking toward a future state which we envision as well-ordered. We feel like we will be happy once such a state is stably attained. Consistent with this tendency of personal outlook, science also tends to focus on order. Indeed, many scientific insights take the form of reduced-dimension descriptions of complex phenomena, thus allowing us to see order in what might, at first glance, have seemed an uncompassable heterogeneity. Although order is clearly important, a theory of our interaction with it cannot stop at characterizing the order alone: the transitions to and from various ordered states are also important.

Consider the case in which a system has a structure, and then switches to a new organizational scheme. Suppose the system is complex in the sense that there are many parts and they function interdependently so that if you damage or remove a part, the system will not work anymore. Then, the new order would seem to have to appear instantaneously, or else the system would lack functionality for a period of time while the new order was being put into place. I have studied such transitions in human perception and learning, and in language change over historical time. In many of these cases, the evidence suggests that new order comes into being gradually. Nevertheless, the system does not lose functionality. This, then, is the question: how can structure gracefully change? Much of my work has focused on seeking an answer to this question.

Several notions have turned out to be helpful: dynamical systems theory, self-organization, fractals, and super-Turing computation. These notions are described here.

The following are currently active projects:

Local Coherence Phenomena in Sentence Processing. This project examines the formation of coherent syntactic and semantic interpretations of sentences during reading and listening. A standard perspective holds that the language system employs a grammar which guides the construction of a sentence parse as words are perceived in the input stream. Taking self-organization to operate at the level of words, one reaches the prediction that the process is more bottom-up than the standard approach assumes. If a sentence contains a subsequence that can form a coherent structure, then that structure should tend to form, even in the face of conflicting structure that has formed on the basis of preceding elements of the sentence. Tabor & Hutchins (2004) found evidence for such grammar-flouting interference effects when participants read sentences like “The coach smiled at the player tossed the Frisbee by the opposing team”, where the words “the player tossed” appeared to create interference of the sort predicted. Tabor & Hutchins (2004) describe a relevant self-organizing parsing model, SOPARSE. Kukona & Tabor (2011) find SOPARSE principles relevant in explaining Visual World Paradigm data on the processing of lexically vs. referentially ambiguous phrases. Kukona (2011) explores a number of cases of lexical local coherence, where the grammar-flouting interference effect is created by just one word.

Default Categorization. There has been considerable debate about what morphological irregularity tells us about the nature of human language systems. Famously, children learning the past tense of English verbs show a U-shaped learning pattern in which high-frequency regulars (e.g., “liked”) and irregulars (e.g., “gave”) are learned early, then there is a tendency toward overregularization, in which some of the once-correctly inflected irregulars are regularized (e.g., “gived”), and finally the system is sorted out and all forms are (generally) correctly inflected. The regulars and irregulars seem, at first glance, to be governed by different mental principles—for example, adults readily extend the regular pattern to novel verbs, while irregulars appear to be stored item by item. Tabor, Cho, & Dankowicz (submitted) report an artificial category learning experiment which involves a default class and several irregular (or “strong”) classes. Although the pattern in their study is not the same as the English past-tense pattern, it also shows stages of learning, with a U-shaped structure. Tabor et al. describe a recurrent neural network model and use dynamical analysis methods to probe the organizational principles of the network over the course of learning. Their results highlight a way of making explicit comparisons between symbolic models, which use symbols and rules to describe language patterns, and recurrent neural network models, which use neuron-like units. The results contribute to understanding of the graceful change issue by mapping out in detail a simple case of an emergent pattern that does not conform to the simplest possible symbolic ideal, but is nevertheless similar to it. They point to the notion that there are layers and layers of symbolic ideals, some more ideal than others, so part of the story of graceful change may be that language systems employ many semi-ideal symbolic systems when they are in transit between two more-ideal symbolic systems.

Recursion Learning. The discussion of fractals above noted that fractal sets offer a way of understanding graceful change in recursive systems and that recursive patterns are pervasive in natural languages. This project studies the learning of simple recursive systems by adults in an Artificial Grammar Learning task called “Locus Prediction” (or “Box Prediction”) (Cho et al., 2011). In one version of this task, participants see the black outlines of four boxes arranged in a diamond on a computer screen. When they click on the screen, one of the boxes changes color. Their task is to predict, by clicking on it, which box will change color next. The boxes change color according to a pattern specified by a recursive grammar. For example, the grammar

S → 1 S 2 3 4

S → ∅

generates the sequences 1 2 3 4, 1 1 2 3 4 2 3 4, 1 1 1 2 3 4 2 3 4 2 3 4, etc. (each numeral indexes a box, though the numerals themselves are not printed on the screen). Sentences from the grammar are strung end to end in one long sequence of box color changes. Over the course of about 300 to 400 trials, participants seem to pick up on the recursive structure of the task. The total run time is under 15 minutes. Cho et al. (2011) find that even UConn undergraduates, a relatively homogeneous population, show considerable variation in their response to this task. The range of their behaviors provides evidence for a continuum of grammatical states on the way toward learning recursion. The case just described is a simple “counting recursion” case: one must learn to keep track of the number of 1s in order to predict the number of “2 3 4”s. Tabor, Cho, & Szkudlarek (2012) provide more evidence for continuous grammar encoding and find that few people can learn a more complex “mirror recursion” pattern in this task. The latter two papers explore a prediction of the fractal grammar models described above.
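For concreteness, here is a minimal sketch in Python of how the counting-recursion grammar above produces box sequences. The function names and the way sentences are strung into a trial stream are my own illustration, not code from the papers:

```python
import random

def sentence(depth):
    """Expand S -> 1 S 2 3 4 (depth times), then S -> empty.
    depth=1 gives 1 2 3 4; depth=2 gives 1 1 2 3 4 2 3 4; etc."""
    if depth == 0:
        return []
    return [1] + sentence(depth - 1) + [2, 3, 4]

def trial_stream(max_depth=3, seed=0):
    """String randomly sized sentences end to end, mimicking the
    experiment's single long sequence of box color changes."""
    rng = random.Random(seed)
    while True:
        yield from sentence(rng.randint(1, max_depth))
```

A participant who has internalized the grammar can predict each next box exactly, except at the choice points where the number of initial 1s is still open.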

Grammaticalization. A close relative of the graceful change puzzle defined above is the graceful advent puzzle: How can a system create a structure that did not exist before? It’s very mysterious if the system does not possess the structure in a latent form initially (though this case is not incoherent and is worth considering). It’s also mysterious if the system initially latently possesses all possible forms that it might exhibit and then gradually progresses toward exhibiting one of those forms. Again, the problem lies in explaining how the system manages to stay steadily on the path toward exhibiting the novel form before the form actually exists. What guides it along this path? In my doctoral thesis (Tabor, 1994b—see especially chapters 1 and 5), I offered evidence that exactly this sort of thing happened in the development of the English Degree Modifier “sort of” and “kind of” (“The sun looked kind of purple”; “She sort of danced past, upsetting my hat”—first instances of this type around 1800). The Degree Modifier use arose gradually from the older, Noun Preposition use (“We found a kind of sistern”, “What sort of music do they make?”—first instances in Old English for “kind” and in Middle English (1300s) for “sort”): there was a protracted frequency change that preceded the first clear instance of the novel Degree Modifier behavior. Recently Victor Kuperman and I have studied the more recent history of this construction, finding evidence for the emergence of a syntactic quasi-category (Kuperman & Tabor, submitted).

Emergence of Communication in Ensembles of People. This is a new project that picks up on the observation that systematicities observed in grammaticalization are not at the level of individual people’s language systems, but rather at the level of communities. We need a formal theory and experimental techniques which characterize language at this scale. The emerging field of Experimental Semiotics (see Galantucci, 2009) explores a variety of helpful (and sometimes very creative) experimental and simulation methods. With the help of many people, I am developing group coordination experiments in which we expect communication to emerge as part of action.

Formal Characterization of the Relationship between Dynamical and Symbolic Computation. This work explores the deployment of computational behaviors in dynamical systems that are interpreted as symbol generators (described above under “Super-Turing computation”). The dynamical systems considered are simple mathematical systems related to recurrent artificial neural networks like the Simple Recurrent Network (Elman, J. L. 1990. Finding structure in time. Cognitive Science, 14, 179-211). Some key findings (Tabor, 2009b; Tabor, 2011) are that even some very simple systems exhibit a great variety of computational behaviors: finite languages, finite-state languages, context-free languages, super-Turing languages. In this case, the context-free languages are particularly interesting. They are associated with edge-of-chaos dynamical systems (Tabor, 2002) and they perform a kind of recursive computation, resembling the patterns of natural languages. We can ask: what are the parametric circumstances under which these special, highly organized and expressive cases occur? The answer is that they are associated with particular kinds of symmetries in the weight values of the neural network—certain weights have to be coordinated with other weights. This lends support to the view that complex cognitive behavior is associated with self-organization (multiple small elements, in this case weights, coordinate in the production of complex behavior at the level of the ensemble). An outstanding and very interesting question that I am currently working on is how to interpret these dynamical generators as probabilistic models, relating them, for example, to Bayesian treatments of cognitive structure.
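As a toy illustration of the general idea (my own minimal example, not one of the systems analyzed in the papers cited above): a single real-valued state can serve as a stack, with halving acting as a push and doubling as a pop. Reading each move as a symbol yields the context-free language aⁿbⁿ, the simplest case of the kind of recursive computation these dynamical generators perform:

```python
def anbn(n):
    """Generate a^n b^n with a one-dimensional dynamical system.
    The state x encodes stack depth d as x = 2**-d; the start
    (and accept) state is x = 1.0."""
    x, out = 1.0, []
    for _ in range(n):      # push phase: each 'a' maps x -> x/2
        x /= 2
        out.append('a')
    while x < 1.0:          # pop phase: each 'b' maps x -> 2x
        x *= 2
        out.append('b')
    return ''.join(out)     # x is back at 1.0: the stack is empty
```

In a recurrent network, the analogous contracting and expanding maps arise from weight values, which is one way to see why the expressive cases require coordinated weights, as described above.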

Here’s an additional, possibly useful thought: I mentioned above the importance of having a theory that encompasses not only order, but the transitions between ordered states. Though it sounds paradoxical, and may in fact be impossible to achieve, there may nevertheless be value in striving to have a “theory of disorder”. Chaos theory seems to be something like this. Part of the trick seems to be to step in a different direction from one’s habitual tendency to insist on pat understanding. It is OK to want it, totally good to strive vigorously for it, and possibly good to have it momentarily, but it is helpful not to insist on holding tightly on to it afterward.