
Kechen Zhang

Papers available online

  • Temporal sequence

    Kechen Zhang (2014): How to compress sequential memory patterns into periodic oscillations: general reduction rules. Neural Computation, 26:1542-1599.


    Abstract: A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrieval in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented.

    Download reprint PDF file (Zhang_2014_NC.pdf).

  • Adaptive method review

    Christopher DiMattina and Kechen Zhang (2013): Adaptive stimulus optimization for sensory systems neuroscience. Frontiers in Neural Circuits, 7:101.


    Abstract: In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.

    Download reprint PDF file (DiMattina_2013_Front.pdf).

  • Attractor dynamics review

    James J. Knierim and Kechen Zhang (2012): Attractor dynamics of spatially correlated neural activity in the limbic system. Annual Review of Neuroscience, 35:267-285.


    Abstract: Attractor networks are a popular computational construct used to model different brain systems. These networks allow elegant computations that are thought to represent a number of aspects of brain function. Although there is good reason to believe that the brain displays attractor dynamics, it has proven difficult to test experimentally whether any particular attractor architecture resides in any particular brain circuit. We review models and experimental evidence for three systems in the rat brain that are presumed to be components of the rat's navigational and memory system. Head-direction cells have been modeled as a ring attractor, grid cells as a plane attractor, and place cells both as a plane attractor and as a point attractor. Whereas the models have proven to be extremely useful conceptual tools, the experimental evidence in their favor, although intriguing, is still mostly circumstantial.

    Download reprint PDF file (Knierim_2012_AnnRev.pdf).

  • General path integration theory

    John B. Issa and Kechen Zhang (2012): Universal conditions for exact path integration in neural systems. Proceedings of the National Academy of Sciences USA 109: 6716-6720.

    Abstract: Animals are capable of navigation even in the absence of prominent landmark cues. This behavioral demonstration of path integration is supported by the discovery of place cells and other neurons that show path-invariant response properties even in the dark. That is, under suitable conditions, the activity of these neurons depends primarily on the spatial location of the animal regardless of which trajectory it followed to reach that position. Although many models of path integration have been proposed, no known single theoretical framework can formally accommodate their diverse computational mechanisms. Here we derive a set of necessary and sufficient conditions for a general class of systems that performs exact path integration. These conditions include multiplicative modulation by velocity inputs and a path-invariance condition that limits the structure of connections in the underlying neural network. In particular, for a linear system to satisfy the path-invariance condition, the effective synaptic weight matrices under different velocities must commute. Our theory subsumes several existing exact path integration models as special cases. We use entorhinal grid cells as an example to demonstrate that our framework can provide useful guidance for finding unexpected solutions to the path integration problem. This framework may help constrain future experimental and modeling studies pertaining to a broad class of neural integration systems.

    Download reprint PDF file (Issa_2012_PNAS.pdf).
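    The commuting-matrices condition above can be illustrated with a toy linear phase-code integrator (a sketch for intuition, not the paper's model): if each displacement acts on the network state as a 2-D rotation, all such rotations commute, so the final state depends only on the net displacement and not on the particular trajectory.

```python
import numpy as np

def rot(theta):
    # Effective weight matrix for one velocity step: a 2-D rotation
    # whose angle is proportional to the displacement (toy assumption).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

k = 2.0 * np.pi                 # spatial frequency of the phase code (assumed)
state0 = np.array([1.0, 0.0])

# Two trajectories with the same net displacement but different step order.
path_a = [0.1, 0.3, -0.2]
path_b = [-0.2, 0.1, 0.3]

def integrate(path, state):
    for dx in path:
        state = rot(k * dx) @ state
    return state

sa = integrate(path_a, state0)
sb = integrate(path_b, state0)
# Because planar rotations commute, the integrator is path-invariant.
assert np.allclose(sa, sb)
```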

  • Velocity-modulated oscillators

    Adam C. Welday, I. Gary Shlifer, Matthew L. Bloom, Kechen Zhang, and Hugh T. Blair (2011): Cosine directional tuning of theta cell burst frequencies: Evidence for spatial coding by oscillatory interference. Journal of Neuroscience 31: 16157-16176.

    Abstract: The rodent septohippocampal system contains "theta cells", which burst rhythmically at 4-12 Hz, but the functional significance of this rhythm remains poorly understood (Buzsaki, 2006). Theta rhythm commonly modulates the spike trains of spatially tuned neurons such as place (O'Keefe and Dostrovsky, 1971), head direction (Tsanov et al., 2011a), grid (Hafting et al., 2005), and border cells (Savelli et al., 2008; Solstad et al., 2008). An "oscillatory interference" theory has hypothesized that some of these spatially tuned neurons may derive their positional firing from phase interference among theta oscillations with frequencies that are modulated by the speed and direction of translational movements (Burgess et al., 2005, 2007). This theory is supported by studies reporting modulation of theta frequency by movement speed (Rivas et al., 1996; Geisler et al., 2007; Jeewajee et al., 2008a), but modulation of theta frequency by movement direction has never been observed. Here we recorded theta cells from hippocampus, medial septum, and anterior thalamus of freely behaving rats. Theta cell burst frequencies varied as the cosine of the rat's movement direction, and this directional tuning was influenced by landmark cues, in agreement with predictions of the oscillatory interference theory. Computer simulations and mathematical analysis demonstrated how a postsynaptic neuron can detect location-dependent synchrony among inputs from such theta cells, and thereby mimic the spatial tuning properties of place, grid, or border cells. These results suggest that theta cells may serve a high-level computational function by encoding a basis set of oscillatory signals that interfere with one another to synthesize spatial memory representations.

    Download reprint PDF file (Welday_2011_JNs.pdf).
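    A minimal sketch (toy parameter values, not fitted to the recordings) of the cosine burst-frequency tuning described above: a theta cell's frequency is offset from a baseline oscillator by a term proportional to running speed times the cosine of movement direction, so the accumulated phase lead of the cell encodes the animal's position along the cell's preferred axis.

```python
import numpy as np

f0, beta = 8.0, 0.5        # baseline theta (Hz) and velocity gain (assumed)
pref = 0.0                 # preferred direction of this theta cell (radians)
dt = 0.001                 # time step (s)

rng = np.random.default_rng(0)
headings = rng.uniform(0, 2 * np.pi, 2000)   # random-walk movement directions
speed = 0.2                                  # constant running speed (m/s)

phase_ref, phase_cell = 0.0, 0.0
x = 0.0                    # position along the preferred axis
for th in headings:
    # Cosine directional tuning of the burst frequency:
    f_cell = f0 + beta * speed * np.cos(th - pref)
    phase_ref += 2 * np.pi * f0 * dt
    phase_cell += 2 * np.pi * f_cell * dt
    x += speed * np.cos(th - pref) * dt

# The phase lead over the reference oscillator encodes position exactly:
assert np.isclose(phase_cell - phase_ref, 2 * np.pi * beta * x)
```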

  • Hippocampal remapping

    Joseph D. Monaco, James J. Knierim, and Kechen Zhang (2011): Sensory feedback, error correction, and remapping in a multiple oscillator model of place cell activity. Frontiers in Computational Neuroscience 5:39 doi: 10.3389/fncom.2011.00039

    Abstract: Mammals navigate by integrating self-motion signals ("path integration") and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6-10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of "partial remapping" responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.

    Download reprint PDF file (Monaco_2011_Frontiers.pdf).

  • Active data collection

    Christopher DiMattina and Kechen Zhang (2011): Active data collection for efficient estimation and comparison of nonlinear neural models. Neural Computation 23: 2242-2288.

    Abstract: The stimulus-response relationship of many sensory neurons is nonlinear, but fully quantifying this relationship by a complex nonlinear model may require too much data to be experimentally tractable. Here we present a theoretical study of a general two-stage computational method that may help significantly reduce the number of stimuli needed to obtain an accurate mathematical description of nonlinear neural responses. Our method of active data collection first adaptively generates stimuli that are optimal for estimating the parameters of competing nonlinear models, and then uses these estimates to generate stimuli on-line which are optimal for discriminating these models. We applied our method to simple hierarchical circuit models, including nonlinear networks built upon the spatio-temporal or spectral-temporal receptive fields, and confirmed that collecting data using our two-stage adaptive algorithm was far more effective for estimating and comparing competing nonlinear sensory processing models than standard non-adaptive methods using random stimuli.

    Download reprint PDF file (DiMattina_2011_NC.pdf).

  • Persistent activity in thalamus

    Mauktik Kulkarni, Kechen Zhang and Alfredo Kirkwood (2011): Single-cell persistent activity in anterodorsal thalamus. Neuroscience Letters 498: 179-184.

    Abstract: The anterodorsal nucleus of the thalamus contains a high percentage of head-direction cells whose activities are correlated with an animal's directional heading in the horizontal plane. The firing of head-direction cells could involve self-sustaining reverberating activity in a recurrent network, but the thalamus by itself lacks strong excitatory recurrent synaptic connections to sustain tonic reverberating activity. Here we examined whether a single thalamic neuron could sustain its own activity without synaptic input by recording from individual neurons from anterodorsal thalamus in brain slices with synaptic blockers. We found that the rebound firing induced by hyperpolarizing pulses often decayed slowly so that a thalamic neuron could keep on firing for many minutes after stimulation. The hyperpolarization-induced persistent firing rate was graded under repeated current injections, and could be enhanced by serotonin. The effect of depolarizing pulses was much weaker and only slightly accelerated the decay of the hyperpolarization-induced persistent firing. Our finding provides the first direct evidence for single-cell persistent activity in the thalamus, supporting the notion that cellular mechanisms at the slow time scale of minutes might potentially contribute to the operations of the head-direction system.

    Download reprint PDF file (Kulkarni_2011_Neurosci_Lett.pdf).

  • Sparse coding in vision

    Eric T. Carlson, Russell J. Rasquinha, Kechen Zhang and Charles E. Connor (2011): A sparse object coding scheme in area V4. Current Biology 21: 288-293.

    Abstract: Sparse coding has long been recognized as a primary goal of image transformation in the visual system. Sparse coding in early visual cortex is achieved by abstracting local oriented spatial frequencies and by excitatory/inhibitory surround modulation. Object responses are thought to be sparse at subsequent processing stages, but neural mechanisms for higher-level sparsification are not known. Here, convergent results from macaque area V4 neural recording and simulated V4 populations trained on natural object contours suggest that sparse coding is achieved in midlevel visual cortex by emphasizing representation of acute convex and concave curvature. We studied 165 V4 neurons with a random, adaptive stimulus strategy to minimize bias and explore an unlimited range of contour shapes. V4 responses were strongly weighted toward contours containing acute convex or concave curvature. In contrast, the tuning distribution in nonsparse simulated V4 populations was strongly weighted toward low curvature. But as sparseness constraints increased, the simulated tuning distribution shifted progressively toward more acute convex and concave curvature, matching the neural recording results. These findings indicate a sparse object coding scheme in midlevel visual cortex based on uncommon but diagnostic regions of acute contour curvature.

    Download reprint PDF file (Carlson2011CurrBiol.pdf).

  • Functionally equivalent networks

    Christopher DiMattina and Kechen Zhang (2010): How to modify a neural network gradually without changing its input-output functionality. Neural Computation 22: 1-47.

    Abstract: It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.

    Download reprint PDF file (nc-confound.pdf).

  • Optimal stimulus

    Christopher DiMattina and Kechen Zhang (2008): How optimal stimuli for sensory neurons are constrained by network architecture. Neural Computation 20: 668-708.

    Abstract: Identifying the optimal stimuli for a sensory neuron is often a difficult process involving trial and error. By analyzing the relationship between stimuli and responses in feedforward and stable recurrent neural network models, we find that the stimulus yielding the maximum firing rate response always lies on the topological boundary of the collection of all allowable stimuli, provided that individual neurons have increasing input-output relations or gain functions and that the synaptic connections are convergent between layers with nondegenerate weight matrices. This result suggests that in neurophysiological experiments under these conditions, only stimuli on the boundary need to be tested in order to maximize the response, thereby potentially reducing the number of trials needed for finding the most effective stimuli. Even when the gain functions allow firing rate cutoff or saturation, a peak still cannot exist in the stimulus-response relation in the sense that moving away from the optimum stimulus always reduces the response. We further demonstrate that the condition for nondegenerate synaptic connections also implies that proper stimuli can independently perturb the activities of all neurons in the same layer. One example of this type of manipulation is changing the activity of a single neuron in a given processing layer while keeping that of all others constant. Such stimulus perturbations might help experimentally isolate the interactions of selected neurons within a network.

    Download reprint PDF file (nc-boundary.pdf).

  • From grid cells to place cells

    Hugh T. Blair, Kishan Gupta, and Kechen Zhang (2008): Conversion of a phase-coded to a rate-coded position signal by a three-stage model of theta cells, grid cells, and place cells. Hippocampus 18:1239-1255.

    Abstract: As a rat navigates through a familiar environment, its position in space is encoded by firing rates of place cells and grid cells. Oscillatory interference models propose that this positional firing rate code is derived from a phase code, which stores the rat's position as a pattern of phase angles between velocity-modulated theta oscillations. Here we describe a three-stage network model, which formalizes the computational steps that are necessary for converting phase-coded position signals (represented by theta oscillations) into rate-coded position signals (represented by grid cells and place cells). The first stage of the model proposes that the phase-coded position signal is stored and updated by a bank of ring attractors, like those that have previously been hypothesized to perform angular path integration in the head-direction cell system. We show analytically how ring attractors can serve as central pattern generators for producing velocity-modulated theta oscillations, and we propose that such ring attractors may reside in subcortical areas where hippocampal theta rhythm is known to originate. In the second stage of the model, grid fields are formed by oscillatory interference between theta cells residing in different (but not the same) ring attractors. The model's third stage assumes that hippocampal neurons generate Gaussian place fields by computing weighted sums of inputs from a basis set of many grid fields. Here we show that under this assumption, the spatial frequency spectrum of the Gaussian place field defines the vertex spacings of grid cells that must provide input to the place cell. This analysis generates a testable prediction that grid cells with large vertex spacings should send projections to the entire hippocampus, whereas grid cells with smaller vertex spacings may project more selectively to the dorsal hippocampus, where place fields are smallest.

    Download reprint PDF file (hipp-3stage.pdf).

  • Moire grid

    Hugh T. Blair, Adam C. Welday, and Kechen Zhang (2007): Moire interference between grid fields that produce theta oscillations: A computational model. Journal of Neuroscience 27: 3211-3229.

    Abstract: The dorsomedial entorhinal cortex (dMEC) of the rat brain contains a remarkable population of spatially tuned neurons called grid cells (Hafting et al., 2005). Each grid cell fires selectively at multiple spatial locations, which are geometrically arranged to form a hexagonal lattice that tiles the surface of the rat's environment. Here, we show that grid fields can combine with one another to form moire interference patterns, referred to as "moire grids", that replicate the hexagonal lattice over an infinite range of spatial scales. We propose that dMEC grids are actually moire grids formed by interference between much smaller "theta grids," which are hypothesized to be the primary source of movement-related theta rhythm in the rat brain. The formation of moire grids from theta grids obeys two scaling laws, referred to as the length and rotational scaling rules. The length scaling rule appears to account for firing properties of grid cells in layer II of dMEC, whereas the rotational scaling rule can better explain properties of layer III grid cells. Moire grids built from theta grids can be combined to form yet larger grids and can also be used as basis functions to construct memory representations of spatial locations (place cells) or visual images. Memory representations built from moire grids are automatically endowed with size invariance by the scaling properties of the moire grids. We therefore propose that moire interference between grid fields may constitute an important principle of neural computation underlying the construction of scale-invariant memory representations.

    Download reprint PDF file (jns-moire.pdf).
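    The length scaling rule has a familiar one-dimensional analogue, the beat frequency between two nearby periods (a toy sketch with assumed values, not the paper's 2-D construction): two grids of periods lam1 and lam2 interfere to produce a much larger moire period lam1*lam2/(lam2-lam1).

```python
import numpy as np

# Two small "theta grids" with nearby periods (toy values)
lam1, lam2 = 1.0, 1.1
lam_moire = lam1 * lam2 / (lam2 - lam1)   # beat (moire) period; 11.0 here

# At multiples of lam_moire both grids return to phase alignment,
# producing the constructive peaks of the much larger moire grid:
for k in (1, 2, 3):
    x = k * lam_moire
    assert np.isclose(np.cos(2 * np.pi * x / lam1), 1.0)
    assert np.isclose(np.cos(2 * np.pi * x / lam2), 1.0)
```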

  • Cortical scaling

    Kechen Zhang and Terrence J. Sejnowski (2000): A universal scaling law between gray matter and white matter of cerebral cortex. Proceedings of the National Academy of Sciences USA 97: 5621-5626.

    Abstract: Neocortex, a new and rapidly evolving brain structure in mammals, has a similar layered architecture in species over a wide range of brain sizes. Larger brains require longer fibers to communicate between distant cortical areas; the volume of the white matter that contains long axons increases disproportionally faster than the volume of the gray matter that contains cell bodies, dendrites, and axons for local information processing, according to a power law. The theoretical analysis presented here shows how this remarkable anatomical regularity might arise naturally as a consequence of the local uniformity of the cortex and the requirement for compact arrangement of long axonal fibers. The predicted power law with an exponent of 4/3 minus a small correction for the thickness of the cortex accurately accounts for empirical data spanning several orders of magnitude in brain sizes for various mammalian species including human and non-human primates.

    Download reprint PDF file (pnas-brains.pdf).
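    As a quick numerical illustration of what the predicted scaling means in practice (synthetic volumes, not the paper's empirical data), the exponent appears as the slope of a log-log regression of white matter volume against gray matter volume:

```python
import numpy as np

# Synthetic gray-matter volumes spanning several orders of magnitude
# (arbitrary units), generated to obey the predicted power law exactly.
G = np.logspace(0, 5, 20)
W = 0.3 * G ** (4.0 / 3.0)

# The exponent is the slope of the log-log regression line.
slope, intercept = np.polyfit(np.log(G), np.log(W), 1)
assert np.isclose(slope, 4.0 / 3.0)
```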



  • Cosine tuning for 3-D object

    Kechen Zhang and Terrence J. Sejnowski (1999): A theory of geometric constraints on neural activity for natural three-dimensional movement. Journal of Neuroscience 19: 3122-3145.

    Abstract: Although the orientation of an arm in space or the static view of an object may be represented by a population of neurons in complex ways, how these variables change with movement often follows simple linear rules, reflecting the underlying geometric constraints in the physical world. A theoretical analysis is presented for how such constraints affect the average firing rates of sensory and motor neurons during natural movements with low degrees of freedom, such as a limb movement and rigid object motion. When applied to non-rigid reaching arm movements, the linear theory accounts for cosine directional tuning with linear speed modulation, predicts a curl-free spatial distribution of preferred directions, and also explains why the instantaneous motion of the hand can be recovered from the neural population activity. For three-dimensional motion of a rigid object, the theory predicts that, to a first approximation, the response of a sensory neuron should have a preferred translational direction and a preferred rotation axis in space, both with cosine tuning functions modulated multiplicatively by speed and angular speed, respectively. Some known tuning properties of motion-sensitive neurons follow as special cases. Acceleration tuning and nonlinear speed modulation are considered in an extension of the linear theory. This general approach provides a principled method to derive mechanism-insensitive neuronal properties by exploiting the inherently low dimensionality of natural movements.

    Download reprint PDF file (jns-object.pdf).



  • Tuning width

    Kechen Zhang and Terrence J. Sejnowski (1999): Neuronal tuning: To sharpen or broaden? Neural Computation 11: 75-84.

    Abstract: Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.

    Download reprint PDF file (nc-tuning.pdf).
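    The dimensionality effect can be checked numerically (a sketch under simplifying assumptions: Gaussian tuning curves on a dense grid with independent Poisson spiking, which is one of the cases covered by the general rule): for D-dimensional stimuli the Fisher information scales as sigma^(D-2), so sharpening helps in one dimension but broadening helps in three.

```python
import numpy as np

def fisher_info(sigma, dim, spacing=0.5, extent=10.0, amp=1.0):
    # Fisher information about the first stimulus coordinate at x = 0 for a
    # dense grid of Gaussian tuning curves with Poisson noise: sum (df/dx1)^2/f.
    axis = np.arange(-extent, extent + spacing / 2, spacing)
    grids = np.meshgrid(*([axis] * dim), indexing="ij")
    centers = np.stack([g.ravel() for g in grids], axis=1)
    f = amp * np.exp(-np.sum(centers**2, axis=1) / (2 * sigma**2))
    return np.sum(f * (centers[:, 0] / sigma**2) ** 2)

# 1-D: information scales as sigma^(1-2) = 1/sigma (sharpening improves accuracy)
print(fisher_info(1.0, 1, spacing=0.05) / fisher_info(2.0, 1, spacing=0.05))  # ~2

# 3-D: information scales as sigma^(3-2) = sigma (broadening improves accuracy)
print(fisher_info(2.0, 3) / fisher_info(1.0, 3))                              # ~2
```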



  • Maximum likelihood by recurrent network

    Alexandre Pouget, Kechen Zhang, Sophie Deneve and Peter E. Latham (1998): Statistically efficient estimation using population coding. Neural Computation 10: 373-401.

    Abstract: Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.

    Download reprint PDF file (nc-ml.pdf).

    Link to Alex Pouget's homepage.


  • Place cell reconstruction

    Kechen Zhang, Iris Ginzburg, Bruce L. McNaughton, and Terrence J. Sejnowski (1998): Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. Journal of Neurophysiology 79: 1017-1044.

    Abstract: Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population, and second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be, which were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain could feasibly use to solve related problems.

    Download reprint PDF file (jnp-reconst.pdf).
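    A minimal sketch of the Bayesian (Poisson) decoding method discussed in the paper, applied to a toy 1-D track with synthetic Gaussian place fields (all parameter values here are illustrative assumptions, not the experimental data): the position estimate maximizes the log posterior sum_i n_i log f_i(x) - tau sum_i f_i(x) under a flat prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D track discretized into bins, with Gaussian place fields
track = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 30)
rate_max, sigma, tau = 20.0, 0.08, 0.5       # peak rate (Hz), field width, window (s)
f = rate_max * np.exp(-(track[None, :] - centers[:, None]) ** 2 / (2 * sigma**2))

# Spike counts observed in one time window at the true position
true_bin = 120
n = rng.poisson(f[:, true_bin] * tau)

# Poisson log posterior with a flat prior:
#   log P(x | n) = sum_i n_i log f_i(x) - tau * sum_i f_i(x) + const
log_post = n @ np.log(f + 1e-12) - tau * f.sum(axis=0)
x_hat = track[np.argmax(log_post)]
print(abs(x_hat - track[true_bin]))          # decoding error on the unit track
```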


  • Continuous attractor dynamics: head-direction cell model

    Kechen Zhang (1996): Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. Journal of Neuroscience 16: 2112-2126.

    Abstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.

    Download reprint PDF file (jns-hd.pdf).
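    The even/odd weight decomposition described in the abstract can be illustrated with a minimal ring-attractor sketch. Everything here (network size, cosine weight profile, rectified-linear update, parameter values) is an illustrative assumption of this sketch, not the exact model of the paper:

    ```python
    import numpy as np

    # Minimal ring-attractor sketch of the even/odd weight decomposition.
    N = 128
    theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)   # preferred directions
    d = theta[:, None] - theta[None, :]                      # pairwise angular offsets

    even = np.cos(d)        # symmetric component: stabilizes a localized bump
    odd = np.sin(d)         # antisymmetric component: shifts the bump
    gamma = 0.05            # odd-weight strength; sets the bump's rotation speed
    W = (even + gamma * odd) / N

    u = np.exp(np.cos(theta - np.pi))    # initial activity bump peaked at pi
    for _ in range(200):
        u = np.maximum(W @ u, 0.0)       # rectified-linear recurrent update
        u /= u.max()                     # crude gain control to keep activity bounded

    peak = theta[np.argmax(u)]
    # With gamma = 0 the bump's peak stays at pi; with gamma > 0 it rotates
    # by roughly arctan(gamma) radians per step without changing its shape.
    ```

    Because this W contains only first-harmonic terms, each update returns a pure cosine bump whose peak advances by arctan(gamma), which is the sense in which the strength of the odd component accurately controls the shift speed.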


  • Optic flow: position-invariance in area MST

    Kechen Zhang, Martin I. Sereno and Margaret E. Sereno (1993): Emergence of position-independent detectors of sense of rotation and dilation with Hebbian learning: An analysis. Neural Computation 5: 597-612.

    Abstract: We previously demonstrated that it is possible to learn position-independent responses to rotation and dilation by filtering rotations and dilations with different centers through an input layer with MT-like speed and direction tuning curves and connecting them to an MST-like layer with simple Hebbian synapses (Sereno and Sereno 1991). By analyzing an idealized version of the network with broader, sinusoidal direction tuning and linear speed tuning, we show analytically that a Hebb rule trained with arbitrary rotation, dilation/contraction, and translation velocity fields yields units with weight fields that are a rotation plus a dilation or contraction field, and whose responses to a rotating or dilating/contracting disk are exactly position independent. Differences between the performance of this idealized model and our original model (and real MST neurons) are discussed.

    Click here to see a key figure (in Marty Sereno's homepage).

    Download reprint PDF file (nc-mst.pdf).
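    The position-independence result for the idealized model can be checked numerically with a small sketch. With sinusoidal direction tuning and linear speed tuning, a unit's response reduces to a sum of dot products between its weight velocity field and the stimulus velocity field; the grid size, disk radius, and field centers below are arbitrary choices of this sketch:

    ```python
    import numpy as np

    J = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation matrix

    def rotation_field(points, center):
        # Velocity field of a rigid rotation about `center`.
        return (points - center) @ J.T

    # Grid of spatial positions.
    xs, ys = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    # Weight field: a rotation field about an arbitrary point.
    w = rotation_field(pts, np.array([3.0, -2.0]))

    def response(stim_center, r=6.0):
        # Stimulus: a rotating disk of radius r centered at stim_center.
        mask = np.linalg.norm(pts - stim_center, axis=1) < r
        v = rotation_field(pts[mask], stim_center)
        return np.sum(w[mask] * v)       # linear response: sum of dot products

    r1 = response(np.array([0.0, 0.0]))
    r2 = response(np.array([8.0, 5.0]))
    # r1 == r2: the response does not depend on where the rotating disk is placed.
    ```

    The equality holds because the rotation matrix preserves dot products and the cross term involving the offset between the two centers sums to zero over the symmetric disk.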



  • Maxwell's demon: explicit models

    Kechen Zhang and Kezhao Zhang (1992): Mechanical models of Maxwell's demon with noninvariant phase volume. Physical Review A 46: 4598-4605.

    General background: Maxwell's demon is a hypothetical intelligent being originally proposed by Maxwell to demonstrate that the Second Law of Thermodynamics---which, roughly speaking, describes a universal tendency for an isolated system to degenerate into a state of maximum disorder---is statistical in nature and can be breached by intelligence. Historically, the studies (or exorcisms) of the demon have led to a number of interesting results, including the close relation between information and entropy (Szilard), the limitations on the perceptual abilities of the demon (Brillouin), and more recently, the energy dissipation cost of computation that arises from discarding unwanted information to refresh the demon's memory (Landauer and Bennett). Following a dynamical approach, we have arrived at a new conclusion: An essential requirement for the demon, regardless of its level of intelligence, is a special dynamics with contracting phase-space volume, and it is possible to construct explicit models of the demon without violating any known physical laws except the Second Law itself.

    Abstract: This paper is concerned with the dynamical basis of Maxwell's demon within the framework of classical mechanics. The authors show that the operation of the demon, whose effect is equivalent to exerting a velocity-dependent force on the gas molecules, can be modeled as a suitable force field without disobeying any laws in classical mechanics. An essential requirement for the models is that the phase-space volume should be noninvariant during time evolution. The necessity of the requirement can be established under general conditions by showing that (1) a mechanical device is able to violate the second law of thermodynamics if and only if it can be used to generate and sustain a robust momentum flow inside an isolated system, and (2) no systems with invariant phase volume are able to support such a flow. The invariance of phase volume appears as an independent factor responsible for the validity of the second law of thermodynamics. When this requirement is removed, explicit mechanical models of Maxwell's demon can exist.

    Download reprint PDF file (pra-demon.pdf).



  • Intrinsic probability in deterministic systems

    Kechen Zhang (1990): Uniform distribution of initial states: The physical basis of probability. Physical Review A 41: 1893-1900.

    Abstract: For repetitive experiments performed on a deterministic system with initial states restricted to a certain region in phase space, the relative frequency of an event has a definite value insensitive to the preparation of the experiments only if the initial states leading to that event are distributed uniformly in the prescribed region. Mechanical models of coin tossing and roulette spinning and the equal a priori probability hypothesis in statistical mechanics are considered in the light of this principle. Probabilities that have arisen from uniform distributions of initial states do not necessarily submit to Kolmogorov's (1956) axioms of probability. In the finite-dimensional case, a uniform distribution in phase space either in the coarse-grained sense or in the limit sense can be formulated in a unified way.

    Download reprint PDF file (pra-prob.pdf).
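    The coin-tossing argument can be caricatured numerically. In the sketch below (the banded outcome map and all numerical values are illustrative assumptions of mine, not the paper's mechanical model), the outcome is a deterministic function of the initial spin speed, with initial-condition space partitioned into thin alternating heads/tails bands; any preparation spread over many bands yields a heads frequency near 1/2, insensitive to the preparation's details:

    ```python
    import numpy as np

    # Toy deterministic "coin": the outcome is fixed entirely by the initial
    # spin speed omega; the initial-condition axis splits into alternating
    # heads/tails bands of width delta.
    delta = 0.01

    def heads(omega):
        return int(np.floor(omega / delta)) % 2 == 0

    # Two different preparations of the experiment, each spread over many bands
    # and hence locally close to uniform across the bands.
    rng = np.random.default_rng(0)
    freqs = []
    for low, high in [(5.0, 7.0), (12.3, 19.9)]:
        omegas = rng.uniform(low, high, 100_000)
        freqs.append(np.mean([heads(w) for w in omegas]))
    # Both relative frequencies come out close to 1/2.
    ```

    The point of the sketch is that the definite value 1/2 arises not from any intrinsic randomness but from the initial states being effectively uniform over the fine alternating bands.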




  • Return to Kechen Zhang homepage