Title and abstracts of the contributed talks

Jason Sakellariou

  • Title: A Pairwise Maximum Entropy Model Able to Learn and Imitate Melodic Styles
    Abstract: We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of the musical corpus which was used to train it. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases do not need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by assessing how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. The results show that our modelling scheme outperforms both fixed-order and variable-order Markov models. Despite being based only on pairwise interactions, this Maximum Entropy scheme thus opens the possibility of generating musically sensible alterations of the original phrases, providing a way to generate innovation. Reference: "A Pairwise Maximum Entropy Model Able to Learn and Imitate Melodic Styles", J. Sakellariou, F. Tria, V. Loreto, F. Pachet (under submission)
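
The construction described above lends itself to a compact sketch. Below is a minimal, illustrative Gibbs sampler for a pairwise maximum entropy model over a melody, with couplings between notes up to k positions apart; the pitch alphabet, sequence length and (random) parameters are placeholders standing in for values that would be fitted to a musical corpus, not the authors' actual model or code.

```python
# Minimal sketch (not the authors' code): Gibbs sampling from a pairwise
# maximum entropy model over a melody, with couplings between notes up to
# k positions apart. Couplings here are random placeholders standing in
# for parameters that would normally be fitted to a corpus.
import numpy as np

rng = np.random.default_rng(0)

Q = 12          # size of the pitch alphabet (assumption: one octave of semitones)
L = 64          # length of the generated melody
k = 4           # interaction range: pairwise couplings up to k neighbours

h = rng.normal(0, 0.1, size=Q)              # local fields over pitches
J = rng.normal(0, 0.3, size=(k, Q, Q))      # J[d-1, a, b]: coupling at distance d

def local_energy(seq, i, a):
    """Energy contribution of placing pitch a at position i."""
    e = -h[a]
    for d in range(1, k + 1):
        if i - d >= 0:
            e -= J[d - 1, seq[i - d], a]
        if i + d < L:
            e -= J[d - 1, a, seq[i + d]]
    return e

def gibbs_sweep(seq, beta=1.0):
    """One Gibbs sampling sweep over all positions of the melody."""
    for i in range(L):
        energies = np.array([local_energy(seq, i, a) for a in range(Q)])
        p = np.exp(-beta * (energies - energies.min()))
        seq[i] = rng.choice(Q, p=p / p.sum())
    return seq

seq = rng.integers(0, Q, size=L)
for _ in range(200):
    seq = gibbs_sweep(seq)
print("sampled melody (pitch indices):", seq)
```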

    Ulisse Ferrari

  • Title: Characterizing the retina's sensitivity to stimulus perturbations using closed-loop experiments and Fisher information.
    Abstract: Recent experimental progress has made it possible to record from large populations of neurons in sensory structures. However, the high number of neurons poses a major challenge for understanding how a neural network processes sensory information. A classical characterization of each single neuron gives only a fragmented view of the network processing. Probing the statistical relationship between stimuli and population responses thus requires new tools suited for the analysis of large populations of neurons. Here we developed a new method to characterize the sensitivity of a population of sensory neurons to the stimulus, and applied it to the retina. We recorded large populations of retinal output neurons (200 neurons) in rats, while presenting a randomly moving object as a visual stimulus. To assess the sensitivity of population coding to changes in the trajectory, we tested the impact of different stimulus perturbations on the retinal response, and actively searched, online during the time course of the experiment, for the perturbation sizes well discriminated by the retina. During these closed-loop experiments, the stimulus was modified in real time as a function of the recorded response to find the smallest perturbations that were effective at changing the population response. To assess the discrimination capacity of the neural population from these data, we estimated the Fisher Information Matrix. This matrix quantifies the change in neural activity in response to stimulus perturbations. It thus quantifies the sensitivity of the neural population along any direction in stimulus space, and it determines how strong a perturbation should be in order to be distinguished from the reference stimulus. As follows from the Cramér-Rao inequality, the Fisher matrix provides an upper bound on the sensitivity of any stimulus decoder based on the retinal ganglion cell responses. This framework allows us to quantify and characterize the efficiency of the linear decoder in terms of the best possible performance a decoder can reach. Our results show an unexpected sensitivity of the retinal population to high temporal frequencies of the stimulus. We are currently investigating how this sensitivity is built up from the selectivity of individual neurons.
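
For illustration, here is a hedged sketch of the Fisher-information calculation in the simplest setting one might assume, a linear encoding model with Gaussian noise; it is not the authors' estimation pipeline, but it shows how the Fisher matrix yields a discrimination threshold along any perturbation direction.

```python
# Illustrative sketch (not the authors' pipeline): Fisher information of a
# population of linear-Gaussian neurons for perturbations of a stimulus,
# and the resulting Cramer-Rao discrimination threshold along a direction.
import numpy as np

rng = np.random.default_rng(1)

N, D = 50, 10                                 # neurons, stimulus dimensions
A = rng.normal(size=(N, D))                   # linear filters: mean response = A @ s
C = np.diag(rng.uniform(0.5, 2.0, size=N))    # response noise covariance

# For r ~ Normal(A s, C), the Fisher information matrix is I(s) = A^T C^{-1} A.
I = A.T @ np.linalg.solve(C, A)

# Sensitivity along a perturbation direction u: d'(eps) = eps * sqrt(u^T I u),
# so the perturbation size giving discriminability d' = 1 is 1 / sqrt(u^T I u).
u = rng.normal(size=D)
u /= np.linalg.norm(u)
threshold = 1.0 / np.sqrt(u @ I @ u)
print("discrimination threshold along u:", threshold)
```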

    Christophe Gardella

  • Title: Complex couplings between neurons and population rate uncovered by solvable maximum entropy models of collective activity
    Abstract: Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, solvable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the coupling between them. The model is based on the principle of maximum entropy, and provides the simplest description of the interaction between each cell and the rest of the population. It is exactly solvable and its parameters can be learned in a few seconds on a standard computer from large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies are fully accounted for by our model, and could not be explained quantitatively or even qualitatively by linear interactions alone. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
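
To illustrate why such a model can be exactly solvable, the sketch below assumes an energy of the form -E(sigma) = sum_i (h_i + g_i K) sigma_i + V(K), with K the population count; this parametrization is an assumption for illustration, not necessarily the authors' exact one. At fixed K the sum over binary states reduces to an elementary symmetric polynomial, so the partition function and P(K) follow in polynomial time.

```python
# Sketch of why a model coupling each cell to the population count K can be
# exactly solvable. Assumed energy (an illustration, not necessarily the
# authors' exact parametrization):
#   -E(sigma) = sum_i (h_i + g_i*K) sigma_i + V(K),   K = sum_i sigma_i.
# At fixed K the constrained sum over binary states is an elementary
# symmetric polynomial of per-cell weights, computable by dynamic programming.
import numpy as np

rng = np.random.default_rng(2)
N = 160
h = rng.normal(-2.0, 0.5, size=N)        # single-cell fields
g = rng.normal(0.0, 0.01, size=N)        # cell-specific couplings to the count K
V = -0.001 * np.arange(N + 1) ** 2       # potential acting on K itself

def esp(w):
    """Elementary symmetric polynomials e_0..e_N of the weights w."""
    e = np.zeros(len(w) + 1)
    e[0] = 1.0
    for wi in w:
        e[1:] = e[1:] + wi * e[:-1]      # DP update, adding one cell at a time
    return e

# Unnormalized probability of population count K:
#   Z_K = exp(V[K]) * e_K( exp(h + g*K) ),   Z = sum_K Z_K.
Zk = np.array([np.exp(V[K]) * esp(np.exp(h + g * K))[K] for K in range(N + 1)])
P_K = Zk / Zk.sum()
print("mean population count:", (np.arange(N + 1) * P_K).sum())
```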

    Tomoyuki Obuchi

  • Title: Cross validation in LASSO and its acceleration
    Abstract: We investigate leave-one-out cross validation (CV) as a determiner of the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative discussion assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the necessary cost of computation. These formulas also provide a simple connection between the CV errors and the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formula for the first type of CV error to an actual dataset of supernovae.
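
As a point of reference, the snippet below performs the brute-force leave-one-out CV that the approximate formulas are designed to avoid, using scikit-learn's LASSO on synthetic data; it is a baseline sketch, not an implementation of the paper's approximation.

```python
# Brute-force leave-one-out CV for the LASSO penalty weight, using
# scikit-learn. This is the O(M)-refits baseline that the approximate
# formulas described above aim to replace at a fraction of the cost.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
M, N = 100, 200                           # observations, unknowns
X = rng.normal(size=(M, N)) / np.sqrt(N)
beta_true = np.zeros(N)
beta_true[:10] = rng.normal(size=10)      # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=M)

def loo_cv_error(lam):
    """Mean squared leave-one-out prediction error at penalty weight lam."""
    errs = []
    for mu in range(M):
        keep = np.arange(M) != mu
        model = Lasso(alpha=lam, max_iter=10000).fit(X[keep], y[keep])
        errs.append((y[mu] - model.predict(X[mu:mu + 1])[0]) ** 2)
    return np.mean(errs)

for lam in [0.001, 0.003, 0.01, 0.03]:
    print(f"lambda={lam:<6} LOO error={loo_cv_error(lam):.4f}")
```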

    Andreas Mayer

  • Title: Diversity of immune strategies explained by adaptation to pathogen statistics
    Abstract: Biological organisms have evolved a wide range of immune mechanisms to defend themselves against pathogens. Beyond molecular details, these mechanisms differ in how protection is acquired, processed and passed on to subsequent generations – differences that may be essential to long-term survival. Here, we introduce a mathematical framework to compare the long-term adaptation of populations as a function of the pathogen dynamics that they experience and of the immune strategy that they adopt. We find that the two key determinants of an optimal immune strategy are the frequency and the characteristic timescale of the pathogens. Depending on these two parameters, our framework identifies distinct modes of immunity, including adaptive, innate, bet-hedging and CRISPR-like immunities, which recapitulate the diversity of natural immune systems. Reference: http://arxiv.org/abs/1511.08836

    Fernando Santos

  • Title: Fock space, symbolic algebra, and analytical solutions for small stochastic systems
    Abstract: Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies have remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to obtain explicit solutions of the chemical master equations describing small, well-mixed, biochemical or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes that stiffness can be present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme-kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics. Reference: Fernando A. N. Santos, Hermes Gadelha, and Eamonn A. Gaffney, Phys. Rev. E 92, 062714 - http://journals.aps.org/pre/abstract/10.1103/PhysRevE.92.062714
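
A purely numerical (rather than symbolic) illustration of the same master-equation setting is sketched below: the generator of the chemical master equation for a single enzyme with limited substrate, E + S <-> ES -> E + P, propagated with a matrix exponential. Rate constants and copy numbers are arbitrary placeholders; the talk's contribution is the exact, symbolic (Maple) treatment, which this sketch does not reproduce.

```python
# Numerical illustration (the paper uses exact, symbolic solutions in Maple):
# chemical master equation for a single enzyme with limited substrate,
# E + S <-> ES -> E + P, solved with a matrix exponential.
import numpy as np
from scipy.linalg import expm

S0 = 10                          # initial substrate copy number
k_on, k_off, k_cat = 1.0, 0.5, 0.3

# States: (free substrate s, enzyme bound b), with s + b <= S0.
states = [(s, b) for b in (0, 1) for s in range(S0 + 1 - b)]
index = {st: i for i, st in enumerate(states)}

Q = np.zeros((len(states), len(states)))    # Q[i, j]: rate from state i to j
def add(src, dst, rate):
    i, j = index[src], index[dst]
    Q[i, j] += rate
    Q[i, i] -= rate

for (s, b) in states:
    if b == 0 and s >= 1:
        add((s, 0), (s - 1, 1), k_on * s)   # binding E + S -> ES
    if b == 1:
        add((s, 1), (s + 1, 0), k_off)      # unbinding ES -> E + S
        add((s, 1), (s, 0), k_cat)          # catalysis ES -> E + P

p0 = np.zeros(len(states))
p0[index[(S0, 0)]] = 1.0                    # start: all substrate free, enzyme free

for t in (0.1, 1.0, 10.0):
    p = p0 @ expm(Q * t)                    # master equation: dp/dt = p Q
    mean_product = sum(p[index[(s, b)]] * (S0 - s - b) for (s, b) in states)
    print(f"t={t:5}: mean product copy number = {mean_product:.3f}")
```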

    Soumen Roy

  • Title: From image processing to phage-bacteria interactions
    Abstract: We revisit the classic problem of Content Based Information Retrieval (CBIR) from multidimensional images. We transform videos into time series and map them into networks. Using this mapping, we show that a simple spatial average of a cropped region of every video frame performs as well as methods involving established dimension-reduction techniques like Principal Component Analysis [1]. Additionally, this approach possesses the advantage of being implementable in hardware by the use of mean computing blocks [2]. Using extensive clinical data gathered by clinical collaborators at one of Asia’s largest medical institutions, we also highlight the uses of our method in non-invasive diagnostics [1]. Given the recent alarming emergence of drug-resistant strains of tuberculosis (TB), phage therapy is an appealing and rational alternative. Adopting a multi-pronged, integrative approach, we employ an iterative cycle of theoretical techniques (delay differential equations, Monte Carlo simulations, etc.) and experiments. The detailed dynamics of mycobacteriophage D29 and Mycobacterium smegmatis were scrutinised. The theoretical predictions were further examined using biochemical and cell-biological assays. Most remarkably, we find that in addition to lysis, we need to consider a secondary factor for cell death [3]. These findings are important for the further development of phage-based therapeutics for TB [4].
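
A minimal sketch of the first part of the pipeline is given below: the spatial average of a cropped region of each frame yields a time series, which is then mapped to a network. The horizontal visibility graph used here is one standard choice and is an assumption for illustration, not necessarily the mapping used by the authors; the video array and crop window are placeholders.

```python
# Sketch of the video-to-time-series step described above, followed by one
# standard time-series-to-network mapping (a horizontal visibility graph).
# The choice of mapping is an assumption for illustration only.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
video = rng.random((300, 120, 160))        # placeholder: (frames, height, width)

# 1. Spatial average of a cropped region of every frame -> time series.
r0, r1, c0, c1 = 40, 80, 60, 120           # cropped region (rows, columns)
series = video[:, r0:r1, c0:c1].mean(axis=(1, 2))

# 2. Horizontal visibility graph: frames t and s are linked if every value
#    strictly between them is lower than both series[t] and series[s].
G = nx.Graph()
G.add_nodes_from(range(len(series)))
for t in range(len(series) - 1):
    G.add_edge(t, t + 1)                   # consecutive frames always see each other
    top = series[t + 1]
    for s in range(t + 2, len(series)):
        if top < min(series[t], series[s]):
            G.add_edge(t, s)
        top = max(top, series[s])
        if top >= series[t]:               # nothing further can be visible from t
            break

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```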

    Claude Loverdo

  • Title: High-resolution measurement of viral mutation rates using deep sequence-based fitness profiling data
    Abstract: To understand the risk of viral adaptation, for instance the emergence of drug resistance, both the fitness of mutant genotypes and the mutation rate need to be measured. While fitness screenings are commonplace, mutation rates are rarely well characterized. We propose two methods to estimate the mutation rate using data initially generated via high-throughput fitness screening. The first method uses the frequency of lethal mutants. The second method, specific to mutational screenings in which codons are mutated one at a time, is based on an analysis of the frequency of double mutants. We applied these methods to a region of NS5A in hepatitis C virus, for fitness screenings conducted with and without daclatasvir, a newly developed antiviral compound. The two methods are complementary: the second is not biased by RT-PCR errors, and thus reveals which rate estimates from the first method are not influenced by this pervasive source of experimental uncertainty. Both mutation rates and fitness can be extracted from the same high-throughput screening data, enabling a better characterization of the potential for viral adaptation.
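
The logic of the first method can be caricatured as follows: lethal variants cannot be inherited from earlier replication rounds, so their observed frequency reflects de novo mutation (inflated by RT-PCR and sequencing errors, hence an upper bound). The toy sketch below, with a hypothetical variant table, is only meant to illustrate this idea, not the authors' estimator.

```python
# Toy sketch of the idea behind the first method (not the authors' estimator):
# average the observed frequency of variants known to be lethal, which gives
# an upper bound on the per-site mutation rate because RT-PCR and sequencing
# errors also contribute. The variant list below is entirely hypothetical.
import numpy as np

# Hypothetical per-variant observations: (variant, observed frequency, lethal?)
variants = [
    ("A10G", 2.1e-5, True),
    ("C11T", 1.8e-5, True),
    ("G12A", 9.0e-5, False),
    ("T13C", 2.5e-5, True),
]

lethal_freqs = np.array([f for _, f, lethal in variants if lethal])
mu_hat = lethal_freqs.mean()     # per-site, per-substitution estimate (upper bound)
print(f"estimated mutation rate (upper bound): {mu_hat:.2e} per site per replication")
```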

    Philippe Marcq

  • Title: Inference of internal stress in a cell monolayer
    Abstract: The mechanical behavior of living tissues is deeply connected with many important biological questions, yet little is known about internal tissue mechanics. Since the traction forces exerted by cells on a planar, deformable substrate can be measured, we propose to combine traction force data with Bayesian inversion to estimate the internal stress field of a cell monolayer. The method is validated using numerical simulations performed in a wide range of conditions. It is robust to changes in each ingredient of the underlying statistical model. Importantly, its accuracy does not depend on the rheology of the tissue. Combining Bayesian inversion with Kalman filtering allows us to process time-lapse movies of the traction force field.
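
A one-dimensional toy version of such a Bayesian inversion is sketched below, assuming the simplified force balance d(sigma)/dx = t(x), Gaussian measurement noise, a Gaussian smoothness prior and free edges; the authors' two-dimensional formulation is more involved, so this is only an illustration of the MAP estimate as a regularized least-squares problem.

```python
# One-dimensional toy version of Bayesian traction-to-stress inversion
# (a sketch under simplifying assumptions, not the authors' 2D implementation).
# Force balance is taken as d(sigma)/dx = t(x); the MAP estimate under Gaussian
# noise and a Gaussian smoothness prior is a regularized least squares.
import numpy as np

rng = np.random.default_rng(5)
n, L = 200, 1.0
x = np.linspace(0, L, n)
dx = x[1] - x[0]

sigma_true = np.sin(np.pi * x / L) ** 2            # ground-truth internal stress
t_obs = np.gradient(sigma_true, dx) + 0.05 * rng.normal(size=n)   # noisy traction data

# Forward operator D: sigma -> d(sigma)/dx (finite differences).
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
D[0, :2] = [-1 / dx, 1 / dx]                       # one-sided at the boundaries
D[-1, -2:] = [-1 / dx, 1 / dx]

# Gaussian smoothness prior: penalize the discrete second derivative of sigma.
P = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) / dx**2

noise, lam = 0.05, 1e-3
B = np.zeros((2, n)); B[0, 0] = B[1, -1] = 1.0     # free edges: sigma = 0 at both ends
A = D.T @ D / noise**2 + lam * P.T @ P + B.T @ B / 1e-6
b = D.T @ t_obs / noise**2
sigma_map = np.linalg.solve(A, b)                  # MAP estimate of the stress profile

err = np.linalg.norm(sigma_map - sigma_true) / np.linalg.norm(sigma_true)
print(f"relative reconstruction error: {err:.3f}")
```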

    Anne-Florence Bitbol

  • Title: Inferring interaction partners from protein sequences
    Abstract: Specific protein-protein interactions are crucial in the cell, both to ensure the formation and stability of multi-protein complexes, and to enable signal transduction in various pathways. Functional interactions between proteins result in coevolution between the interaction partners. Hence, the sequences of interacting partners are correlated. Is it possible to identify which proteins are specific interaction partners from sequence data alone? Our general approach, which relies on protein sequence covariation and uses an approximate pairwise maximum entropy model to infer direct couplings between residues, has been successfully used to predict the three-dimensional structures of proteins from sequences. We introduce an iterative algorithm to predict specific interaction partners from among the members of two protein families. We assess the algorithm's performance on histidine kinases and response regulators from bacterial two-component signaling systems. The algorithm proves successful without any a priori knowledge of interaction partners, yielding a striking 0.93 true positive fraction on our complete dataset. We analyze the origins of this surprising success even in the absence of a training set. Finally, we discuss how our method could be used to predict novel protein-protein interactions. This work is conducted in collaboration with Prof. Ned Wingreen (Princeton University) and Dr. Lucy Colwell (Cambridge University).
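
The overall structure of such an iterative pairing procedure might look like the skeleton below. The functions score_pairs and train_model are hypothetical placeholders: in the actual method the scores come from couplings inferred with the pairwise maximum entropy model, re-trained at each iteration on the currently assigned pairs, while partners are only matched within each species.

```python
# Skeleton of an iterative partner-matching loop (a sketch only; the scoring
# model in the real method is a pairwise maximum-entropy / DCA model trained
# on the current pair assignment, whereas the helpers below are stand-ins).
import numpy as np
from scipy.optimize import linear_sum_assignment

def score_pairs(model, species_A, species_B):
    """Placeholder: return a (len(A) x len(B)) matrix of interaction scores."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(species_A), len(species_B)))

def train_model(paired_A, paired_B):
    """Placeholder for fitting the maximum entropy model on assigned pairs."""
    return None

def match_partners(species_to_A, species_to_B, n_iter=5):
    pairs = []                                       # start with no training pairs
    for _ in range(n_iter):
        model = train_model(*zip(*pairs)) if pairs else None
        new_pairs = []
        for sp in species_to_A:                      # match within each species only
            A, B = species_to_A[sp], species_to_B[sp]
            S = score_pairs(model, A, B)
            rows, cols = linear_sum_assignment(-S)   # maximize the total score
            new_pairs += [(A[i], B[j]) for i, j in zip(rows, cols)]
        pairs = new_pairs                            # retrain on the new assignment
    return pairs

# Tiny hypothetical example: two species with candidate kinases and regulators.
species_to_A = {"sp1": ["HK1", "HK2"], "sp2": ["HK3"]}
species_to_B = {"sp1": ["RR1", "RR2"], "sp2": ["RR3"]}
print(match_partners(species_to_A, species_to_B))
```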

    Herve Isambert

  • Title: Learning causal networks with latent variables from multivariate information in genomic data
    Abstract: Learning causal networks from large-scale genomic data remains difficult in the absence of time series or perturbation data. Constraint-based methods can, in principle, uncover causality from purely observational data, but they are often not robust on small datasets and have complexity issues in the presence of unobserved latent variables, which limits their applicability in practice. We have developed and implemented a causal network inference method that circumvents the robustness and complexity issues of constraint-based methods through an information-theoretic framework. Starting from a complete graph, it iteratively removes dispensable edges by partitioning their mutual information into significant contributions from indirect paths, and orients the remaining edges based on the signature of causality in observational data. This information-theoretic approach outperforms earlier methods on a broad range of benchmark networks, with or without latent variables. The approach has been applied to reconstruct causal networks from genomic datasets at various biological scales, from gene expression data in single cells to genomic alterations in tumor tissues and the evolution of gene families in vertebrates.
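
The elementary information-theoretic quantity underlying this edge-removal step can be illustrated as below: an empirical estimate of mutual information and conditional mutual information for discrete variables, where an X-Y edge becomes dispensable when I(X;Y|Z) is compatible with zero for some conditioning set Z. The real method adds finite-size significance criteria and the search over contributing paths, which are not reproduced here.

```python
# Minimal building block for the approach above: empirical (conditional)
# mutual information between discrete variables. An X-Y edge is a candidate
# for removal when I(X;Y|Z) is not significantly different from zero
# (no finite-size significance test is applied in this sketch).
import numpy as np
from collections import Counter

def entropy(columns):
    """Joint entropy (in bits) of discrete variables given as a list of columns."""
    counts = Counter(map(tuple, np.column_stack(columns)))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y):
    return entropy([x]) + entropy([y]) - entropy([x, y])

def cond_mutual_info(x, y, z):
    return entropy([x, z]) + entropy([y, z]) - entropy([x, y, z]) - entropy([z])

rng = np.random.default_rng(7)
n = 5000
z = rng.integers(0, 2, size=n)                    # common cause
x = z ^ (rng.random(n) < 0.25)                    # X: noisy copy of Z
y = z ^ (rng.random(n) < 0.25)                    # Y: another noisy copy of Z
print("I(X;Y)   =", round(mutual_info(x, y), 3))          # > 0: X-Y edge looks real
print("I(X;Y|Z) =", round(cond_mutual_info(x, y, z), 3))  # ~ 0: it is explained by Z
```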

    Thibault Lesieur

  • Title: Low-rank matrix factorization: a key to many problems
    Abstract: Low-rank matrix factorization looks at first sight like a very basic problem. In a crude and simplified way, one could say that it is the problem of approximating a matrix by a linear combination of a small number of projectors. It turns out that a large number of problems can be reformulated as low-rank matrix factorization problems. In this talk we will see how problems ranging from sparse PCA to community detection in dense networks can be seen as matrix factorization. We will study these systems in a probabilistic framework. Using techniques coming from statistical and spin-glass physics, one is able to analyse these problems from both a theoretical and an algorithmic point of view. We will uncover and derive the large gaps in performance that sometimes exist between standard matrix factorization techniques and the theoretically optimal performance that can be achieved. These conjectured optimal performances can sometimes be reached using an efficient algorithm called Approximate Message Passing (AMP), whose performance can be analyzed using density evolution equations (or a single-letter characterization).
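
As a concrete instance of the setting, the sketch below plants a rank-one signal in a noisy symmetric matrix and recovers it with the leading eigenvector, i.e. a simple spectral baseline; the AMP algorithm discussed in the talk, and its density-evolution analysis, go beyond this baseline and are not implemented here.

```python
# Minimal spiked-matrix example: a rank-one signal observed through noise,
# estimated here with the leading eigenvector (a simple spectral baseline;
# the AMP algorithm discussed in the talk improves on this in general).
import numpy as np

rng = np.random.default_rng(8)
N, snr = 2000, 2.0

x = rng.choice([-1.0, 1.0], size=N)                 # planted rank-one signal
G = rng.normal(size=(N, N))
W = (G + G.T) / np.sqrt(2 * N)                      # symmetric Gaussian noise matrix
Y = np.sqrt(snr / N) * np.outer(x, x) + W           # observed noisy matrix

eigval, eigvec = np.linalg.eigh(Y)
x_hat = np.sign(eigvec[:, -1])                      # leading-eigenvector estimate
print(f"overlap with planted signal: {abs(x @ x_hat) / N:.2f}")
```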

    Andrey Lokhov

  • Title: Near-optimal structure and parameter learning in Ising models
    Abstract: The Ising model is a graphical model representing the stationary statistics of binary variables associated with the nodes of a graph. Such models are used to represent systems in various fields such as statistical physics, computational biology, or image processing. However, the structure or parameters of the graphical model are often unknown a priori and have to be learnt from experimental data. In this talk, we consider the problem of inferring the underlying graph and parameters of an Ising model defined on an arbitrary topology from a collection of independent configurations. We suggest a new statistical-physics-inspired reconstruction method that is computationally efficient and achieves perfect graph structure recovery with a number of samples close to the information-theoretic optimum. We provide mathematical guarantees that our consistent and convex estimator recovers the couplings from a number of samples that is small with respect to the system size and coupling strength, outperforming state-of-the-art algorithms. Importantly, the reconstruction algorithm remains efficient in the low-temperature regime, and can be generalized to graphical models with higher-order interactions and general alphabets. Joint work with M. Vuffray, S. Misra and M. Chertkov (Los Alamos National Laboratory)
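
For context, the snippet below runs the standard l1-regularized pseudo-likelihood (logistic regression) neighborhood estimator on samples from a small Ising ring; this is one of the usual baselines, explicitly not the interaction-screening estimator presented in the talk, and the Gibbs sampler and parameters are toy choices.

```python
# Standard baseline for Ising structure learning (NOT the estimator of the
# talk, which uses an interaction-screening objective): l1-regularized
# logistic regression of each spin on all others ("neighborhood selection").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
N, M, J0 = 10, 5000, 0.4

# Ground truth: couplings on a ring; samples drawn with (crude) Gibbs sweeps.
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = J0
s = rng.choice([-1.0, 1.0], size=N)
samples = np.empty((M, N))
for m in range(M):
    for i in range(N):
        h = J[i] @ s
        s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
    samples[m] = s                          # one sweep per retained sample

# Reconstruct each spin's neighborhood by l1-regularized logistic regression.
# P(s_i = +1 | rest) = sigmoid(2 sum_j J_ij s_j), so features 2*s_j give J_ij.
J_hat = np.zeros((N, N))
for i in range(N):
    X = np.delete(samples, i, axis=1)
    clf = LogisticRegression(penalty="l1", C=10.0, solver="liblinear")
    clf.fit(2 * X, samples[:, i])
    J_hat[i, np.arange(N) != i] = clf.coef_[0]

print("true J[0,1] = 0.4, estimated:", round(J_hat[0, 1], 2))
print("true J[0,5] = 0.0, estimated:", round(J_hat[0, 5], 2))
```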

    Lorenzo Posani

  • Title: Statistical physics techniques for brain-state identification: application to hippocampal neural activity
    Abstract: Over the recent decades, multi-cell recording techniques, based on electrode micro-arrays or optical imaging, have allowed researchers to accurately record neural population activity in vivo. Such recordings provide insights into how the firing activities of a whole population of neurons encode sensory stimuli. In some outstanding cases, such as in the medial entorhinal cortex or in the hippocampus (a region known to be involved in short-term memory tasks), neural activity can be traced back to the sensory percept. For example, the so-called place cells in the CA1 and CA3 hippocampal regions exhibit sharp spatially-related firing fields, and the collection of these place fields defines a spatial map, which is formed when a rat discovers an environment and retrieved, as a memory, each time the rat is placed back in that specific environment. Knowledge of the environment-specific place fields thus allows, in principle, the decoding of the position of the rodent from the neural population's firing activity. This informational paradigm also allows the application of statistical techniques to study the inner dynamics of internal memory representations, and non-trivial attractor dynamics has been observed when the external stimuli, i.e. the environmental conditions, are abruptly changed while the animal explores its surroundings. In this talk we present a statistical physics approach to the decoding problem in the CA1 and CA3 hippocampal regions, applying Bayesian inference based on graphical (Ising) models to decode positional information from neural activity, and compare the two regions in terms of internal dynamics and decoding performance.
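
As a baseline for the decoding problem, the sketch below implements the standard independent-cell Bayesian (Poisson) place decoder with synthetic tuning curves; the graphical (Ising) models discussed in the talk add neuron-neuron couplings on top of this independent-cell likelihood, which are not included here.

```python
# Standard independent-cell Bayesian place decoder (the baseline on top of
# which the Ising/graphical-model decoder discussed in the talk adds
# neuron-neuron couplings). Tuning curves here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(10)
n_cells, n_pos, dt = 40, 50, 0.1             # neurons, position bins, time bin (s)

centers = rng.uniform(0, 1, size=n_cells)    # place-field centres on a 1D track
positions = np.linspace(0, 1, n_pos)
rates = 0.5 + 15 * np.exp(-((positions[None, :] - centers[:, None]) / 0.08) ** 2)
# rates[i, x]: expected firing rate (Hz) of cell i at position bin x

def decode(spike_counts, prior=None):
    """Posterior over position bins given one time bin of spike counts."""
    log_prior = np.zeros(n_pos) if prior is None else np.log(prior)
    lam = rates * dt
    log_post = log_prior + spike_counts @ np.log(lam) - lam.sum(axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

true_pos = 30
counts = rng.poisson(rates[:, true_pos] * dt)    # simulated population response
posterior = decode(counts)
print("true position bin:", true_pos, " decoded bin:", int(posterior.argmax()))
```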

    Marylou Gabrie

  • Title: Training Boltzmann machines for unsupervised learning using extended mean field approximations
    Abstract: Boltzmann machines are undirected neural networks useful for unsupervised machine learning. In particular, a simple bipartite version - called Restricted Boltzmann machines (RBMs) - has been widely popularized by the discovery of fast training algorithms relying on approximate Markov chain Monte Carlo (MCMC) sampling. Realizing that training RBMs is closely related to the inverse Ising problem, a notoriously hard statistical physics problem, we designed an alternative deterministic procedure based on the Thouless-Anderson-Palmer approach. Our algorithm, improving on the naive mean-field approximation, matches the performance of the commonly used MCMC algorithms while also providing a clear and easy-to-evaluate objective function to follow progress along training. Moreover, this strategy can be generalized in many ways, including to new network architectures (e.g. deep Boltzmann machines) or to new types of data (e.g. continuous variables with a known prior distribution).
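
A minimal sketch of the deterministic fixed point at the heart of such a procedure is given below: mean-field magnetizations of an RBM with binary {0,1} units including a second-order (TAP/Onsager) correction, iterated to convergence on random placeholder weights. The precise update used by the authors may differ in detail; this is only meant to illustrate how sampling is replaced by a deterministic iteration.

```python
# Minimal sketch of the deterministic fixed-point iteration underlying the
# approach: mean-field magnetizations of an RBM with binary {0,1} units,
# including a second-order (TAP / Onsager) correction. Weights are random
# placeholders; in training, such iterations stand in for MCMC sampling
# when estimating the model's negative phase.
import numpy as np

rng = np.random.default_rng(11)
n_v, n_h = 20, 10
W = rng.normal(0, 0.1, size=(n_v, n_h))
a, b = rng.normal(0, 0.1, size=n_v), rng.normal(0, 0.1, size=n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mv, mh = np.full(n_v, 0.5), np.full(n_h, 0.5)     # initial magnetizations
for _ in range(200):
    # Naive mean-field term plus an Onsager-like correction (assumed {0,1} form).
    mv_new = sigmoid(a + W @ mh - (mv - 0.5) * (W**2 @ (mh * (1 - mh))))
    mh_new = sigmoid(b + W.T @ mv_new - (mh - 0.5) * ((W**2).T @ (mv_new * (1 - mv_new))))
    converged = max(np.abs(mv_new - mv).max(), np.abs(mh_new - mh).max()) < 1e-8
    mv, mh = mv_new, mh_new
    if converged:
        break

print("converged visible magnetizations (first 5):", np.round(mv[:5], 3))
```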

    Alaa Saade

  • Title: Clustering from pairwise similarities
    Abstract:

    Jerome Tubiana

  • Title: Statistical Mechanics of Restricted Boltzmann Machines
    Abstract: A Restricted Boltzmann Machine (RBM) is a simple neural network used to infer a probability distribution from sample data. RBMs have been successfully applied to model high-dimensional, multimodal data distributions such as MNIST. It has been argued that the effectiveness of RBMs comes from their ability to learn (i) high-order interactions and (ii) distributed representations of the data. We will probe this argument through a statistical mechanics study of RBMs at thermal equilibrium. Analytical and numerical results will be presented.