Place cells are believed to organize memory across space and time, inspiring the idea of the cognitive map. Yet unlike the structured activity in the associated grid and head-direction cells, they remain an enigma: their responses have been difficult to predict and are complex enough to be statistically well-described by a random process. Here we report one step toward the ultimate goal of understanding place cells well enough to predict their fields. Within a theoretical framework in which place fields are derived as a conjunction of external cues with internal grid cell inputs, we predict that even apparently random place cell responses should reflect the structure of their grid inputs and that this structure can be unmasked if probed in sufficiently large neural populations and large environments. To test the theory, we design experiments in long, locally featureless spaces to demonstrate that structured scaffolds undergird place cell responses. Our findings, together with other theoretical and experimental results, suggest that place cells build memories of external inputs by attaching them to a largely prespecified grid scaffold.
We recently introduced idealized mean-field models for networks of integrate-and-fire neurons with impulse-like interactions -- the so-called delayed Poissonian mean-field models. Such models are prone to blowups: for a strong enough interaction coupling, the mean-field rate of interaction diverges in finite time with a finite fraction of neurons spiking simultaneously. Due to the reset mechanism of integrate-and-fire neurons, these blowups can happen repeatedly, at least in principle. A benefit of considering Poissonian mean-field models is that one can resolve blowups analytically by mapping the original singular dynamics onto uniformly regular dynamics via a time change. Resolving a blowup then amounts to solving the fixed-point problem that implicitly defines the time change, which can be done consistently for a single blowup and for nonzero delays. Here we extend this time-change analysis in two ways: First, we establish the existence and uniqueness of explosive solutions with a countable infinity of blowups in the large interaction regime. Second, we show that these delayed solutions specify "physical" explosive solutions in the limit of vanishing delays, which in turn can be explicitly constructed. The first result relies on the fact that blowups are self-sustaining but nonoverlapping in the time-changed picture. The second result follows from the continuity of blowups in the time-changed picture and incidentally implies the existence of periodic solutions. These results are useful to study the emergence of synchrony in neural network models.
Idealized networks of integrate-and-fire neurons with impulse-like interactions obey McKean-Vlasov diffusion equations in the mean-field limit. These equations are prone to blowups: for a strong enough interaction coupling, the mean-field rate of interaction diverges in finite time with a finite fraction of neurons spiking simultaneously, thereby marking a macroscopic synchronous event. Characterizing these blowup singularities analytically is the key to understanding the emergence and persistence of spiking synchrony in mean-field neural models. However, such a resolution is hindered by the first-passage nature of the mean-field interaction in classically considered dynamics. Here, we introduce a delayed Poissonian variation of the classical integrate-and-fire dynamics for which blowups are analytically well defined in the mean-field limit. We show that, albeit fundamentally nonlinear, this delayed Poissonian dynamics can be transformed into a noninteracting linear dynamics via a deterministic time change. We specify this time change as the solution of a nonlinear, delayed integral equation via renewal analysis of first-passage problems. This formulation also reveals that the fraction of simultaneously spiking neurons can be determined via a self-consistent, probability-conservation principle about the time-changed linear dynamics. We utilize the proposed framework in a companion paper to show analytically the existence of singular mean-field dynamics with sustained synchrony for large enough interaction coupling.
Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge.
We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solution must satisfy. We solve this problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria.
Network dynamics with point-process-based interactions are of paramount modeling interest. Unfortunately, most relevant dynamics involve complex graphs of interactions for which an exact computational treatment is impossible. To circumvent this difficulty, the replica-mean-field approach focuses on randomly interacting replicas of the networks of interest. In the limit of an infinite number of replicas, these networks become analytically tractable under the so-called ‘Poisson hypothesis’. However, in most applications this hypothesis is only conjectured. In this paper we establish the Poisson hypothesis for a general class of discrete-time, point-process-based dynamics, introduced here, that we call fragmentation-interaction-aggregation processes. These processes feature a network of nodes, each endowed with a state governing their random activation. Each activation triggers the fragmentation of the activated node state and the transmission of interaction signals to downstream nodes. In turn, the signals received by nodes are aggregated into their state. Our main contribution is a proof of the Poisson hypothesis for the replica-mean-field version of any network in this class. The proof is obtained by establishing the propagation of asymptotic independence for state variables in the limit of an infinite number of replicas. Discrete-time Galves–Löcherbach neural networks are used as a basic instance and illustration of our analysis.
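As a minimal illustration of this class, a discrete-time Galves–Löcherbach network can be simulated in a few lines. The transfer function `phi`, weight scale, and network size below are illustrative choices for a sketch, not parameters from the analysis.

```python
import numpy as np

def simulate_gl(W, phi, T, rng):
    """Discrete-time Galves-Locherbach dynamics viewed as a
    fragmentation-interaction-aggregation process.

    W[i, j] is the signal sent to node j when node i activates; phi maps a
    node's state to its activation probability. Returns a T x N spike array.
    """
    n = W.shape[0]
    v = np.zeros(n)                      # node states (membrane potentials)
    spikes = np.zeros((T, n), dtype=int)
    for t in range(T):
        fired = rng.random(n) < phi(v)   # state-dependent random activation
        spikes[t] = fired
        v[fired] = 0.0                   # fragmentation: reset of activated nodes
        v += fired @ W                   # aggregation of received interaction signals
    return spikes

rng = np.random.default_rng(0)
W = 0.3 * rng.random((20, 20))
np.fill_diagonal(W, 0.0)
phi = lambda v: 1.0 - np.exp(-0.1 - v)   # activation probability, in (0, 1) for v >= 0
spikes = simulate_gl(W, phi, T=200, rng=rng)
```

The fragmentation (reset), interaction (weight transmission), and aggregation (state update) steps appear explicitly in the loop body.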
What factors constrain the arrangement of the multiple fields of a place cell? By modeling place cells as perceptrons that act on multiscale periodic grid-cell inputs, we analytically enumerate a place cell’s repertoire (how many field arrangements it can realize without external cues while its grid inputs are unique) and derive its capacity (the spatial range over which it can achieve any field arrangement). We show that the repertoire is very large and relatively noise-robust. However, the repertoire is a vanishing fraction of all arrangements, while capacity scales only as the sum of the grid periods, so field arrangements are constrained over larger distances. Thus, grid-driven place field arrangements define a large response scaffold that is strongly constrained by its structured inputs. Finally, we show that altering grid-place weights to generate an arbitrary new place field strongly affects existing arrangements, which could explain the volatility of the place code.
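The perceptron setup can be made concrete in a toy sketch. The grid periods (3, 4) and the one-hot phase code per module below are assumptions of this illustration, not the paper's exact input model; the sketch verifies that a single-field arrangement is always linearly realizable over the combined period.

```python
import numpy as np

def grid_code(x, periods):
    """One-hot phase code per grid module: unit (module, phase) is active iff x % period == phase."""
    return np.concatenate([(np.arange(p) == x % p).astype(float) for p in periods])

def perceptron_fits(X, y, epochs=200):
    """Return True if a perceptron over the grid inputs realizes the field arrangement y."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            if pred != yi:
                w += (yi - pred) * xi   # classical perceptron update
                b += yi - pred
                errors += 1
        if errors == 0:
            return True                 # arrangement is linearly realizable
    return False

periods = (3, 4)                                   # toy grid periods; lcm = 12 positions
X = np.array([grid_code(x, periods) for x in range(12)])
single_field = (np.arange(12) == 5).astype(float)  # one place field, at position 5
print(perceptron_fits(X, single_field))            # prints True
```

A single field is separable because the active units at its location overlap any other location's code in at most one module, so a threshold between one and two active matches suffices; enumerating which multi-field arrangements are realizable is the combinatorial question the paper answers analytically.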
Replica-mean-field models have been proposed to decipher the activity of otherwise analytically intractable neural networks via a multiply-and-conquer approach. In this approach, one considers limit networks made of infinitely many replicas with the same basic neural structure as that of the network of interest, but exchanging spikes in a randomized manner. The key point is that these replica-mean-field networks are tractable versions that retain important features of the finite structure of interest. To date, the replica framework has been discussed for first-order models, whereby elementary replica constituents are single neurons with independent Poisson inputs. Here, we extend this replica framework to allow elementary replica constituents to be composite objects, namely, pairs of neurons. As they include pairwise interactions, these pair-replica models exhibit nontrivial dependencies in their stationary dynamics, which cannot be captured by first-order replica models. Our contributions are twofold: (i) We analytically characterize the stationary dynamics of a pair of intensity-based neurons with independent Poisson input. This analysis involves the reduction of a boundary-value problem related to a two-dimensional transport equation to a system of Fredholm integral equations, a result of independent interest. (ii) We analyze the set of consistency equations determining the full network dynamics of certain replica limits. These limits are those for which replica constituents, be they single neurons or pairs of neurons, form a partition of the network of interest. Both analyses are numerically validated by computing input/output transfer functions for neuronal pairs and by computing the correlation structure of certain pair-dominated network dynamics.
Neural computations emerge from myriad neuronal interactions occurring in intricate spiking networks. Due to the inherent complexity of neural models, relating the spiking activity of a network to its structure requires simplifying assumptions, such as considering models in the thermodynamic mean-field limit. In this limit, an infinite number of neurons interact via vanishingly small interactions, thereby erasing the finite size of individual interactions. To better capture the finite-size effects of interactions, we propose to analyze the activity of neural networks in the replica-mean-field limit. Replica-mean-field models are made of infinitely many replicas which interact according to the same basic structure as that of the finite network of interest. Here, we analytically characterize the stationary dynamics of an intensity-based neural network with spiking reset and heterogeneous excitatory synapses in the replica-mean-field limit. Specifically, we functionally characterize the stationary dynamics of these limit networks via ordinary differential equations derived from the Poisson hypothesis of queuing theory. We then reduce this functional characterization to a system of self-consistency equations specifying the stationary neuronal firing rates. Of general applicability, our approach combines rate-conservation principles from point-process theory and analytical considerations from generating-function methods. We validate our approach by demonstrating numerically that replica-mean-field models better capture the dynamics of feedforward neural networks with large, sparse connections than their thermodynamic counterparts. Finally, we explain this improved performance by analyzing the neuronal rate-transfer functions, which saturate due to finite-size effects in the replica-mean-field limit.
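The structure of such self-consistency equations can be illustrated with a toy analogue, not the paper's actual replica equations: stationary rates solving r = f(b + W r) for a saturating rate-transfer function f, obtained by damped fixed-point iteration. All parameters below are illustrative.

```python
import numpy as np

def stationary_rates(W, b, f, damping=0.5, tol=1e-10, max_iter=10000):
    """Solve the self-consistency equations r = f(b + W @ r) by damped fixed-point iteration."""
    r = np.zeros_like(b)
    for _ in range(max_iter):
        r_new = (1 - damping) * r + damping * f(b + W @ r)
        if np.max(np.abs(r_new - r)) < tol:
            return r_new
        r = r_new
    raise RuntimeError("fixed-point iteration did not converge")

f = lambda u: u / (1.0 + u)     # saturating transfer, mimicking finite-size saturation
rng = np.random.default_rng(1)
W = 0.2 * rng.random((5, 5))    # weak excitatory weights (contraction regime)
b = 0.5 + rng.random(5)         # baseline drives
r = stationary_rates(W, b, f)
```

For weak enough weights the map is a contraction and the iteration converges to the unique consistent rate vector; the saturation of f is what distinguishes such finite-size-aware transfer functions from their thermodynamic counterparts.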
Metagenomics has revealed hundreds of species in almost all microbiota. In a few well-studied cases, microbial communities have been observed to coordinate their metabolic fluxes. In principle, microbes can divide tasks to reap the benefits of specialization, as in human economies. However, the benefits and stability of an economy of microbial specialists are far from obvious. Here, we physically model the population dynamics of microbes that compete for steadily supplied resources. Importantly, we explicitly model the metabolic fluxes yielding cellular biomass production under the constraint of a limited enzyme budget. We find that population dynamics generally leads to the coexistence of different metabolic types. We establish that these microbial consortia act as cartels, whereby population dynamics pins down resource concentrations at values for which no other strategy can invade. Finally, we propose that at steady supply, cartels of competing strategies automatically yield maximum biomass, thereby achieving a collective optimum.
In nature, a large number of species can coexist on a small number of shared resources; however, resource competition models predict that the number of species in steady coexistence cannot exceed the number of resources. Motivated by recent studies of phytoplankton, we introduce trade-offs into a resource competition model, and find that an unlimited number of species can coexist. Our model spontaneously reproduces several features of natural ecosystems, including keystone species and population dynamics and abundances characteristic of neutral theory, despite an underlying non-neutral competition for resources.
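A minimal Euler-integration sketch of MacArthur-type consumer-resource dynamics under a fixed enzyme budget conveys the setting. The uptake law, supply rates, and death rate below are illustrative assumptions, not the parameters analyzed in the paper.

```python
import numpy as np

def simulate(alpha, s, delta=0.3, dt=1e-3, steps=20000):
    """Euler integration of consumer-resource dynamics with an enzyme budget.

    alpha[k, i]: enzyme allocation of species k to resource i (rows sum to the
    budget); s[i]: steady supply rate of resource i; delta: death rate.
    """
    n = np.ones(alpha.shape[0])      # species abundances
    c = np.ones(alpha.shape[1])      # resource concentrations
    for _ in range(steps):
        r = c / (1.0 + c)            # Monod-type per-enzyme uptake rate
        growth = alpha @ r           # per-capita biomass production
        dn = n * (growth - delta)
        dc = s - (n @ alpha) * r     # supply minus total consumption
        n = np.maximum(n + dt * dn, 0.0)
        c = np.maximum(c + dt * dc, 0.0)
    return n, c

rng = np.random.default_rng(2)
alpha = rng.random((5, 3))
alpha /= alpha.sum(axis=1, keepdims=True)   # metabolic trade-off: unit enzyme budget
s = np.ones(3)                              # steady resource supply
n, c = simulate(alpha, s)
```

The row normalization of `alpha` implements the trade-off: no species can be best at everything, which is what opens the door to coexistence of more species than resources.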
Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale-invariance. We recently showed that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro-temporal density and cyclo-temporal statistics over several orders of magnitude, corresponding to a range of water-like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, at the population level, responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability, and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters.
DNA combing allows the investigation of DNA replication on genomic single DNA molecules, but the lengths that can be analysed have been restricted to molecules of 200–500 kb. We have improved the DNA combing procedure so that DNA molecules can be analysed up to the length of entire chromosomes in fission yeast and up to 12 Mb fragments in human cells. Combing multi-Mb-scale DNA molecules revealed previously undetected origin clusters in fission yeast and shows that in human cells replication origins fire stochastically, forming clusters of fired origins with an average size of 370 kb. We estimate that a single human cell forms around 3200 clusters at mid S-phase and fires approximately 100,000 origins to complete genome duplication. The procedure presented here will be adaptable to other organisms and experimental conditions.
Five noncoding small RNAs (sRNAs), Qrr1-5, act at the heart of the Vibrio harveyi quorum-sensing cascade. The Qrr sRNAs posttranscriptionally regulate 20 mRNA targets. Here, we use RSort-Seq, a method based on unbiased high-throughput screening, to define the critical bases in Qrr4 that specify its function. The power of our study comes from using the screening results to pinpoint particular nucleotides for follow-up biological analyses that define function. Using this approach, we discover how Qrr4 differentially regulates two of its targets, luxO and luxR. We also show how this strategy can be used to identify intramolecular suppressor mutations. This approach can be applied to any sRNA and any mRNA target.
Bacteria regulate gene expression in response to changes in cell density in a process called quorum sensing. To synchronize their gene-expression programs, these bacteria need to glean as much information as possible about their cell density. Our study is the first to physically model the flow of information in a quorum-sensing microbial community, wherein the internal regulator of the individuals' responses tracks the external cell density via an endogenously generated shared signal. Combining information theory and Lagrangian formalism, we find that quorum-sensing systems can improve their information capabilities by tuning circuit feedbacks. Our analysis suggests that achieving information benefit via feedback requires dedicated systems to control gene-expression noise, such as sRNA-based regulation.
The firing activity of intracellularly stimulated neurons in cortical slices has been demonstrated to be profoundly affected by the temporal structure of the injected current (Mainen & Sejnowski, 1995). This suggests that the timing features of the neural response may be controlled as much by a neuron's own biophysical characteristics as by how it is wired within a circuit. Modeling studies have shown that the interplay between internal noise and the fluctuations of the driving input controls the reliability and the precision of neuronal spiking (Cecchi et al., 2000; Tiesinga, 2002; Fellous, Rudolph, Destexhe, & Sejnowski, 2003). In order to investigate this interplay, we focus on the stochastic leaky integrate-and-fire neuron and identify the Hölder exponent H of the integrated input as the key mathematical property dictating the regime of firing of a single-unit neuron. We have recently provided numerical evidence (Taillefumier & Magnasco, 2013) for the existence of a phase transition when H becomes less than the statistical Hölder exponent associated with internal gaussian white noise (H=1/2). Here we describe the theoretical and numerical framework devised for the study of a neuron that is periodically driven by frozen noisy inputs with exponent H>0. In doing so, we account for the existence of a transition between two regimes of firing when H=1/2, and we show that spiking times have a continuous density when the Hölder exponent satisfies H>1/2. The transition at H=1/2 formally separates rate codes, for which the neural firing probability varies smoothly, from temporal codes, for which the neuron fires at sharply defined times regardless of the intensity of internal noise.
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss–Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
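The smooth-boundary regime can be probed by direct Monte Carlo. The sketch below estimates first-passage times of a discretized Brownian path through a constant (hence smooth) boundary at height 1; the exact crossing probability by time 5 is 2Φ(−1/√5) ≈ 0.65, and the discretized estimate falls slightly below it because crossings between grid points are missed. Rough boundaries with H < 1/2 would require a multiresolution treatment beyond this sketch.

```python
import numpy as np

def first_passage_times(boundary, T=5.0, dt=1e-3, n_paths=10000, seed=3):
    """Monte Carlo first-passage times of standard Brownian motion through
    boundary(t), on a regular time grid. Returns np.inf for paths that do not
    cross before T."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    t = dt * np.arange(1, n_steps + 1)
    w = np.zeros(n_paths)
    fpt = np.full(n_paths, np.inf)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(n_steps):
        w[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (w >= boundary(t[k]))
        fpt[crossed] = t[k]
        alive &= ~crossed
    return fpt

fpt = first_passage_times(lambda t: 1.0)   # constant, hence smooth, boundary
frac = np.mean(np.isfinite(fpt))           # close to (slightly below) 0.65
```

Replacing the constant boundary by a rough one is exactly where the histogram of `fpt` degenerates from a continuous density toward concentration on a sparse set, per the phase transition described above.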
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially well suited to studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
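The event-driven bookkeeping can be sketched in the noiseless limit; the paper's contribution is handling intrinsic noise exactly, which this simplified skeleton omits. The priority-queue design with lazy invalidation of stale threshold events, and all parameters, are illustrative choices.

```python
import heapq

def simulate(n, drift, threshold, weights, delay, t_max):
    """Event-driven simulation of noiseless perfect integrate-and-fire neurons
    with delayed Dirac interactions.

    Between events V_i grows linearly at rate drift[i]; when V_i reaches
    threshold, neuron i spikes, resets to 0, and after `delay` adds
    weights[i][j] to each V_j. Stale threshold events are discarded lazily.
    """
    v = [0.0] * n
    last = [0.0] * n                 # time of each neuron's last state update
    spikes, queue = [], []           # queue entries: (time, kind, neuron, jump)

    def advance(i, t):
        v[i] += drift[i] * (t - last[i])
        last[i] = t

    def schedule(i, t):              # next drift-driven threshold crossing
        if drift[i] > 0:
            heapq.heappush(queue, (t + (threshold - v[i]) / drift[i], 0, i, 0.0))

    def fire(i, t):
        spikes.append((t, i))
        v[i] = 0.0
        for j in range(n):
            if weights[i][j] != 0.0: # delayed Dirac impulse to each target
                heapq.heappush(queue, (t + delay, 1, j, weights[i][j]))
        schedule(i, t)

    for i in range(n):
        schedule(i, 0.0)
    while queue:
        t, kind, i, jump = heapq.heappop(queue)
        if t > t_max:
            break
        advance(i, t)
        if kind == 1:                # delayed impulse delivery
            v[i] += jump
            if v[i] >= threshold:
                fire(i, t)           # impulse pushes neuron over threshold
            else:
                schedule(i, t)       # reschedule; older crossing events go stale
        elif v[i] >= threshold - 1e-12:
            fire(i, t)               # genuine crossing (otherwise event is stale)
    return spikes

spikes = simulate(n=3, drift=[1.0, 0.9, 0.8], threshold=1.0,
                  weights=[[0, 0.2, 0.2], [0.2, 0, 0.2], [0.2, 0.2, 0]],
                  delay=0.1, t_max=10.0)
```

Because every impulse reschedules its target's crossing, the heap always contains one valid event per neuron; events invalidated by intervening impulses are detected and skipped when popped, so interactions are processed in their true temporal order.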
The study of multidimensional stochastic processes involves complex computations in intricate functional spaces. In particular, diffusion processes, which include the practically important Gauss-Markov processes, are ordinarily defined through the theory of stochastic integration. Here, inspired by the Lévy-Ciesielski construction of the Wiener process, we propose an alternative representation of multidimensional Gauss-Markov processes as expansions on well-chosen Schauder bases, with independent standard normal random coefficients. We thereby offer a natural multiresolution description of the Gauss-Markov processes as limits of finite-dimensional partial sums of the expansion, which converge strongly and almost surely. Moreover, such finite-dimensional random processes constitute an optimal approximation of the process, in the sense of minimizing the associated Dirichlet energy under interpolating constraints. This approach allows for a simpler treatment of problems in many applied and theoretical fields, and we provide a short overview of applications we are currently developing.
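The classical Lévy-Ciesielski construction that inspires this representation can be sketched for the Wiener process on [0, 1]: sum the Schauder (integrated Haar) functions with independent standard normal coefficients. Levels and sample counts below are illustrative.

```python
import numpy as np

def schauder_basis(levels, t):
    """Evaluate the Schauder (integrated Haar) functions on [0, 1] at times t."""
    funcs = [t.copy()]                        # s_0(t) = t, from the constant Haar function
    for n in range(levels):
        for k in range(2 ** n):
            left, right = k / 2 ** n, (k + 1) / 2 ** n
            mid = (left + right) / 2
            # Triangular wedge supported on [left, right], peak 2^{-n/2 - 1} at mid.
            wedge = np.where(t <= mid, (t - left) / (mid - left),
                             (right - t) / (right - mid))
            funcs.append(2 ** (-n / 2 - 1) * np.clip(wedge, 0.0, 1.0))
    return np.array(funcs)

# Finite-dimensional partial sums with independent N(0, 1) coefficients.
t = np.linspace(0.0, 1.0, 257)
basis = schauder_basis(8, t)                  # 256 basis functions
rng = np.random.default_rng(7)
paths = rng.standard_normal((4000, len(basis))) @ basis
cov = np.mean(paths[:, 64] * paths[:, 128])   # estimates Cov(W_0.25, W_0.5) = 0.25
```

At dyadic times the partial sums are exact after finitely many levels, so the empirical covariance at t = 0.25 and t = 0.5 recovers min(s, t) up to Monte Carlo error; the Gauss-Markov generalization replaces these wedges with basis functions adapted to the process covariance.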
Even for simple diffusion processes, treating first-passage problems analytically proves intractable for generic barriers and existing numerical methods are inaccurate and computationally costly. Here, we present a novel numerical method that is faster and has more tightly controlled accuracy. Our algorithm is a probabilistic variant of dichotomic search for the computation of first passage times through non-negative homogeneously Hölder continuous boundaries by Gauss-Markov processes. These include the Ornstein-Uhlenbeck process underlying the ubiquitous “leaky integrate-and-fire” model of neuronal excitation. Our method evaluates discrete points in a sample path exactly, and refines this representation recursively only in regions where a passage is rigorously estimated to be probable (e.g. when close to the boundary).
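The probabilistic dichotomic search can be sketched for standard Brownian motion with a boundary assumed roughly constant over each interval. The crossing-probability estimate, tolerance, and recursion depth below are illustrative simplifications of the rigorous error control described above.

```python
import math
import random

def bridge_midpoint(t0, x0, t1, x1, rng):
    """Sample the path at the midpoint of [t0, t1] from the Brownian-bridge law."""
    return rng.gauss(0.5 * (x0 + x1), math.sqrt(0.25 * (t1 - t0)))

def first_passage(boundary, horizon=4.0, tol=1e-4, depth=14, seed=5):
    """Probabilistic dichotomic search for the first passage of Brownian motion
    through boundary(t): sample the endpoint exactly, then recursively bisect,
    refining only intervals where a crossing is estimated to be probable."""
    rng = random.Random(seed)

    def search(t0, x0, t1, x1, d):
        b0, b1 = boundary(t0), boundary(t1)
        if x0 >= b0:
            return t0                         # path is at or above the boundary
        if d == 0:
            return t1 if x1 >= b1 else None   # resolution limit reached
        if x1 < b1:
            # Bridge crossing probability for a constant boundary at min(b0, b1);
            # a heuristic when the boundary varies within the interval.
            b = min(b0, b1)
            q = -2.0 * (b - x0) * (b - x1) / (t1 - t0)
            if q < 0 and math.exp(q) < tol:
                return None                   # crossing too improbable: skip refinement
        tm = 0.5 * (t0 + t1)
        xm = bridge_midpoint(t0, x0, t1, x1, rng)
        hit = search(t0, x0, tm, xm, d - 1)
        return hit if hit is not None else search(tm, xm, t1, x1, d - 1)

    x_end = rng.gauss(0.0, math.sqrt(horizon))   # exact terminal value W(horizon)
    return search(0.0, 0.0, horizon, x_end, depth)

hit = first_passage(lambda t: 0.5)   # estimated passage time, or None if no crossing found
```

Discrete sample points are exact conditional draws, and refinement cost concentrates where the path approaches the boundary, which is the source of the method's speed advantage over uniform-grid schemes.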
The classical Haar construction of Brownian motion uses a binary tree of triangular wedge-shaped functions. This basis has compactness properties which make it especially suited for certain classes of numerical algorithms. We present a similar basis for the Ornstein-Uhlenbeck process, in which the basis elements approach asymptotically the Haar functions as the index increases, and preserve the following properties of the Haar basis: all basis elements have compact support on an open interval with dyadic rational endpoints; these intervals are nested and become smaller for larger indices of the basis element; and for any dyadic rational, only a finite number of basis elements are nonzero at that point. Thus the expansion in our basis, when evaluated at a dyadic rational, terminates in a finite number of steps. We prove the covariance formulae for our expansion and discuss its statistical interpretation.