Notions/Ideas/Concepts that ground Solab research

Dynamical systems theory. Dynamical systems theory is the mathematical theory of systems characterized in terms of how they change. Two cases have been extensively studied: systems of differential equations (dx/dt = f(x, t)) and iterated maps (x(t+1) = f(x(t), t)). Usually the state spaces of such systems are complete metric spaces and the functions are continuous. Such systems are mainly interesting when they involve feedback, that is, when the change in a variable depends on its own state (the equations accommodate non-feedback cases, but these are generally degenerate, e.g. dx/dt = k for a constant k). Systems with feedback can exhibit stabilities: sets toward which the system tends asymptotically. These come in several interesting varieties: limit cycles, where the system repeatedly passes through the same set of states in the same order (fixed points are a special case); drift, where the system wanders endlessly in an orderly fashion through its state space without revisiting states; chaos, where the system moves wildly, exhibiting sensitive dependence on initial conditions, transitivity, and a dense set of periodic points (one might say that order and wildness are thoroughly mixed up); and edge-of-chaos behavior, in which order and wildness are in a kind of balance (I think this is where grammars exist). Cognitive scientists of language and brain will recall the intensity of debates about whether there is “feedback” between various parts of the mental system. My view is that these debates have largely been resolved in favor of the view that feedback is present. I take this as an indicator that dynamical stabilities are central to cognition. Regarding the puzzle of graceful change, one of the main contributions of the dynamical systems perspective as a whole is to situate grammars, defined by continuous functions, in a metric space, so that parametric adjustment produces gradual behavioral change.
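To make the iterated-map case concrete, here is a small sketch. The logistic map and the particular parameter values are my illustration, not part of the discussion above; the point is only that a single continuous parameter carries the system through fixed-point, limit-cycle, and chaotic regimes.

```python
# Sketch: the logistic map x(t+1) = r * x(t) * (1 - x(t)) is an iterated
# map with feedback -- the next state depends on the current state.
# Sweeping the parameter r moves the system through the asymptotic
# regimes described above.

def iterate(r, x0=0.2, burn_in=1000, keep=32):
    """Iterate the logistic map, discard the transient, return the tail."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

def num_distinct(r):
    """Rough period estimate: number of distinct states in the tail."""
    return len(set(iterate(r)))

print(num_distinct(2.9))  # fixed point: 1 distinct long-run state
print(num_distinct(3.2))  # limit cycle: 2 distinct long-run states
print(num_distinct(3.9))  # chaotic regime: many distinct states
```

Because r is a real-valued knob in a continuous family of maps, small parametric adjustments deform the behavior gradually, which is the sense of graceful change at issue.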

Self-organization. Self-organization has many different definitions in the literature; it seems to be a very subtle notion. I have thus far found it most useful to rely on an intuitive verbal definition: under restricted environmental conditions, a collection of independently acting but interacting elements exhibits organized structure at the scale of the group. Self-organization helps with the graceful change puzzle as follows: graceful change seems paradoxical when the atoms of structure are symbols, because there is no sensible way to continuously deform a symbol. But in self-organization, the symbols emerge from the interactions of many tiny parts. When a few tiny parts change, the system loses its perfect symbolic form, but it can retain functionality; the system is continuous in the sense that a minute change in the parts makes for a minute change in the behavior (for example, moving it out of edge-of-chaos behavior into chaotic behavior, or from a very simple form of edge-of-chaos behavior to a more complex one).
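A minimal sketch of the verbal definition, using local averaging on a ring (the dynamics and parameters are my own toy illustration, not from the text): each element interacts only with its two neighbors, yet the group converges to a single shared value, i.e. organized structure at the scale of the group.

```python
# Sketch: independently acting but interacting elements on a ring.
# Each element nudges toward the average of its neighbors; no element
# "knows" the group state, yet group-level order emerges.

import random

def step(xs):
    n = len(xs)
    # local rule: half own state, quarter each neighbor
    return [0.5 * xs[i] + 0.25 * (xs[(i - 1) % n] + xs[(i + 1) % n])
            for i in range(n)]

random.seed(0)               # reproducible random initial conditions
xs = [random.random() for _ in range(50)]
for _ in range(5000):
    xs = step(xs)

spread = max(xs) - min(xs)
print(round(spread, 6))      # near 0: the group has organized onto one value
```

Note also the continuity claim in the text: perturbing a few elements perturbs the group outcome only slightly, because the update rule is a continuous function of the parts.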

Fractals. Fractals are self-similar sets that live in metric spaces. Barnsley’s book, Fractals Everywhere, provides a beautiful visual and mathematical introduction. Recursion can be defined as the symbol-processing situation in which an object of a particular type generates (in some sense, “contains”) an object of its own type; thus one may say that fractals are a kind of recursive spatial object. Fractals help with the graceful change puzzle as follows. Recursion is a core case of complexity in natural language. To define a recursive system using symbols and rules, as is done in the classical theory of computation, one uses a set of interdependent rules. In this case it does not make sense to speak of gradual transformation of the system, because any change in the rules makes a very large change in behavior, and many changes in the actual symbolic notation produce meaninglessness (loss of functionality). Fractals, however, are recursive systems that can metamorphose continuously. Adopting a fractal model of grammar makes graceful change sensible.
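A sketch in the spirit of Barnsley’s iterated function systems (the specific maps and the deformation size are my illustration): two contraction maps on [0, 1] generate a Cantor-like self-similar set, and the contraction ratio r is a continuous knob. Nudging r deforms the attractor gradually, in contrast with editing a symbolic rewrite rule.

```python
# Sketch: approximate the attractor of the IFS {x -> r*x, x -> r*x + (1 - r)}
# by iterating both maps from a seed point. For r = 1/3 this is the
# classical middle-thirds Cantor set, a recursive spatial object.

def attractor(r, depth=10):
    """Return 2**depth points approximating the IFS attractor."""
    points = {0.0}
    for _ in range(depth):
        points = ({r * x for x in points}
                  | {r * x + (1 - r) for x in points})
    return points

a = sorted(attractor(1 / 3))         # the Cantor set
b = sorted(attractor(1 / 3 + 0.01))  # a slightly deformed relative

# Small parameter change, small displacement of corresponding points:
max_shift = max(abs(x - y) for x, y in zip(a, b))
print(round(max_shift, 4))
```

The recursion is visible in the construction (each iteration embeds two shrunken copies of the set in itself), yet the whole object metamorphoses continuously as r varies.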

Super-Turing computation. A common computational formalism defines a “language” as a set of strings drawn from a finite alphabet (we can consider finite-length strings, infinite-length strings, or both; languages of one-sided infinite strings are called omega languages). One may think of a language as a function from the set of all strings to {0, 1}, where “1” means the string belongs to the language and “0” means it does not. So-called “Turing” computation is computation that uses finite rules made of symbols drawn from a finite alphabet, employs a “tape” with a countably infinite number of slots on which symbols can be placed and erased, and makes each decision in a finite number of steps (Turing gets the label because he formally defined this class of computations and proved some of its important properties, but his attention does not seem to have been limited to Turing computation). Without loss of generality, we may adopt a formalism in which Turing computers generate languages. A standard question is: which languages are Turing-computable? Much of the theory of computation, and much of cognitive theorizing, has focused on Turing-computable languages. These have the nice property that they are, in some sense, finitely describable (modulo the unboundedness of the tape). There is a natural way of interpreting a discrete dynamical system as a symbol generator: specify a cover for its state space, associate a symbol emission with the presence of the system in each element of the cover, and then consider the sequences of symbols that the system emits when it is iterated from a particular initial state. It turns out that some interesting dynamical systems (e.g. neural networks) generate both Turing-computable and non-Turing-computable (i.e. “super-Turing”) languages. In some dynamical systems, the super-Turing languages are densely intermingled with the Turing-computable languages in the parameter space.
The Turing-computable languages turn out to be a good way of formalizing the notion of “structure” in the discussion above. When certain dynamical systems make a transition from one Turing-computable parameter setting to another (I am glossing over the issue of the initial state; one can assume it is fixed), they pass through super-Turing states. Thus, understanding the nature and deployment of these super-Turing states seems helpful for addressing the graceful change question.
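The cover-based construction of a symbol generator can be sketched directly (the tent map, the two-element cover, and the labels are my illustrative choices, not from the text): label each element of a cover of the state space, then read off the label of the element containing the current state at every step of the iteration.

```python
# Sketch: interpret a discrete dynamical system as a symbol generator.
# Cover the state space, associate a symbol with each cover element,
# and emit the symbol sequence produced from a given initial state.

def symbol_sequence(f, x0, cover, n):
    """Emit n symbols: at each step, output the label of the cover
    element containing the current state, then apply the map f."""
    out = []
    x = x0
    for _ in range(n):
        for label, (lo, hi) in cover:
            if lo <= x < hi:
                out.append(label)
                break
        x = f(x)
    return "".join(out)

def tent(x):
    """A simple chaotic map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

# A two-element cover of [0, 1]; the tiny slack keeps x = 1.0 covered.
cover = [("L", (0.0, 0.5)), ("R", (0.5, 1.0 + 1e-9))]

print(symbol_sequence(tent, 0.3, cover, 8))
```

Each initial state yields one symbol string; the set of strings obtainable across initial states (or across parameter settings of the map) is the language the system generates, which is where the Turing-computable versus super-Turing distinction gets its grip.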