In the models described herein, these effects are due to a top-down nonspecific inhibitory gain control signal that is released in parallel with specific excitatory signals. One model concerns how the brain's auditory system constructs coherent representations of acoustic objects from the jumble of noise and harmonics that relentlessly bombards our ears throughout life. This model, called the ARTSTREAM model, suggests an approach to solving the cocktail party problem. Another model, the ARTPHONE model, helps to clarify the neural representation of a speech code as it evolves in real time, and is used to simulate data about variable-rate speech categorization. Yet another model, a FACADE theory model, helps to explain how visual thalamocortical interactions give rise to boundary percepts, such as illusory contours, and surface percepts, such as filled-in brightnesses. Top-down corticogeniculate feedback interactions select and synchronize LGN activities that are consistent with cortical activations. Complementing these systems are the more familiar ART systems for visual object recognition, which explain how recognition categories are learned whose number, size, and shape fit the statistics of a nonstationary environment. Taken together, these results provide accumulating evidence that the brain parsimoniously specifies a small set of computational principles to ensure its stability and adaptability in responding to many different types of environmental challenges. The talk will also discuss why procedural memories, or memories that control movements, are not conscious: such memories are not designed to achieve resonance.
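The remark about recognition categories whose number, size, and shape track the statistics of a nonstationary environment can be made concrete with a minimal sketch of fuzzy-ART-style category learning: complement-coded inputs, a choice function that ranks stored categories, a vigilance test that either admits resonance or recruits a new category, and a learning rule that shrinks category weights toward matched inputs. This is an illustrative toy, not the full ART circuit described in the talk; the class name, parameter values, and API are assumptions.

```python
import numpy as np

def complement_code(x):
    """Concatenate x with 1 - x so every coded input has the same L1 norm."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

class FuzzyART:
    """Toy fuzzy ART: categories are recruited on demand, so their number
    and receptive-field size adapt to the input statistics."""

    def __init__(self, rho=0.9, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: how close a match must be for resonance
        self.alpha = alpha  # choice parameter (small tie-breaker)
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.w = []         # one weight vector per learned category

    def train(self, x):
        I = complement_code(x)
        # Choice function ranks the existing categories by fuzzy match.
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:
                # Vigilance test passes: resonance, so refine this category.
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        # No stored category matches closely enough: recruit a new one.
        self.w.append(I.copy())
        return len(self.w) - 1
```

In this sketch, nearby inputs resonate with an existing category and sharpen it, while a sufficiently novel input fails the vigilance test everywhere and founds a new category, which is the sense in which the learned code fits a nonstationary environment.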