Abstracts of papers published since 1992

For the list without the abstracts, click here.

We study the information storage capacity of a simple perceptron in the error
regime. For random unbiased patterns the geometrical analysis gives a
logarithmic dependence of the information content in the asymptotic limit. In
that regime the statistical physics approach, when used at the simplest level
of replica theory, does not give satisfactory results. However for
perceptrons with finite stability, the information content can be simply calculated
with statistical physics methods in a region above the critical storage
level, for biased as well as for unbiased patterns.

(Copyright © Institute of Physics and IOP Publishing Limited)

Journal of Physics A: Mathematical and General, Vol. 25 (1992) pp. 5017-5037.

(full paper available on IoP electronic journals web site).

*back to the list of publications*

We demonstrate that formal neural network techniques allow one to build the simplest models compatible with a limited but systematic set of experimental data. The experimental system under study is the growth of mouse macrophage-like cell lines under the combined influence of two ion channels, the growth factor receptor and adenylate cyclase. We conclude that three of the four components can be described by linear multithreshold automata. The behaviour of the remaining component is non-monotonic and necessitates the introduction of a fifth hidden variable, or of non-linear interactions.

Network: Computation in Neural Systems, Volume 3, Number 4 (November 1992), pages 393-406.

Initially published by IOP,
this journal has moved to Taylor & Francis/informa healthcare.

(preprint1992.html)

*back to the list of publications*

A Bridge Between Supervised and Unsupervised Learning

We exhibit a duality between two perceptrons that allows us to
compare the theoretical analysis of supervised and unsupervised
learning tasks - more exactly of parameter estimation
and encoding tasks. The first perceptron has one output and is asked
to learn a classification of *p* patterns. The second (dual)
perceptron has *p* outputs and is asked to transmit as much
information as possible on a distribution of inputs. We show in
particular that the maximum information that can be stored in the
couplings for the supervised learning task is equal to the maximum
information that can be transmitted by the dual perceptron.

(Copyright © The MIT Press)

Neural Computation,
Vol. 6, Issue 3 (May 1994), pages 491-508.

(preprint1992.ps.gz, preprint1992.pdf).

*back to the list of publications*

Reconsidering a recently introduced model of sequence-retrieving neural network, we introduce appropriate analogues of the well-known stabilities and show how these, together with two coupling parameters $\lambda$ and $\vartheta$, entirely control the dynamics in the case of strong dilution. The model is exactly solved and phase diagrams are drawn for two different choices of the synaptic matrices; they reveal a rich structure. We then briefly speculate as to the role of these parameters within a more general framework.

(Copyright © Les Editions de Physique 1993)

Journal de Physique I (France), Volume 3 Nb. 6 (1993) pages 1303-1328

abstract and full paper available on EDP Sciences web site, and from HAL.

*back to the list of publications*

We study the ability of a simple neural network (a perceptron architecture,
no hidden units, binary outputs) to process information in the
context of an unsupervised learning task. The network is asked to
provide the best possible neural representation of a given input
distribution, according to some criterion taken from Information
Theory. We compare various optimization criteria that have been proposed:
maximum information transmission, minimum redundancy and closeness to
factorial code.

We show that for the perceptron one can compute
the maximal information that the code (the output neural representation)
can convey about the input. We show that one can use Statistical
Mechanics techniques, such as the replica techniques, to compute
the typical mutual information between input and output distributions.
More precisely, for a Gaussian input source with a given
correlation matrix, we compute
the typical mutual information when the couplings are chosen randomly. We
determine the correlations between the synaptic couplings
which maximize the gain of information. We analyse the results
in the case of a one dimensional receptive field.

Network: Computation in Neural Systems, Volume 4, Number 3 (August 1993), pages 295-312

Initially published by IOP,
this journal has moved to Taylor & Francis/informa healthcare.

(preprint .ps.gz, .pdf).

*back to the list of publications*

We introduce an inferential approach to unsupervised learning which allows us to define an
optimal learning strategy. Applying these ideas to a simple,
previously studied model, we show that it is
impossible to detect structure in data until a critical number of examples
have been presented, an effect
which will be observed in all problems with certain underlying symmetries.
Thereafter, the advantage of
optimal learning over previously studied learning algorithms depends critically
upon the distribution of patterns; optimal learning may be exponentially faster.
Models with more subtle correlations are harder to
analyse, but in a simple limit of one such problem we calculate exactly the
efficacy of an algorithm similar to some used in practice, and compare it
to that of the optimal prescription.

(Copyright © Institute of Physics and IOP Publishing Limited)

J. Phys. A: Math. Gen., Vol. 27 No 6 (21 March 1994) pages 1899-1915

(full paper from
J. Phys. A Online)

(preprint.ps.gz, preprint.pdf).

*back to the list of publications*

a factorial code maximizes information transfer

We investigate the consequences of maximizing information transfer
in a simple neural network (one input layer, one output layer),
focussing on the case of *non-linear* transfer
functions. We assume that both receptive fields
(synaptic efficacies) and transfer functions can be
adapted to the environment.

The main result is that, for bounded and invertible transfer functions,
in the case of a vanishing *additive* output noise, and no
input noise, maximization of information (Linsker's *infomax* principle)
leads to a factorial code - hence to the same solution as
required by the redundancy reduction principle of Barlow, or,
in the signal processing language, to Independent Component Analysis (ICA).

We show also that this result is valid for *linear*,
more generally unbounded,
transfer functions, provided optimization is performed under an
additive constraint, that is, one which can be written as a
sum of terms, each one being specific to one output neuron.
Finally we study the effect of a non-zero input noise. We find that,
at first order in the input noise, assumed to be small compared
to the (itself small) output noise,
the above results are still valid, provided the *output* noise
is uncorrelated from one neuron to the other.

Network: Computation in Neural Systems,
Volume 5, Number 4 (November 1994), pages 565-581

Initially published by IOP,
this journal has moved to Taylor & Francis/informa healthcare.

(preprint.ps.gz, preprint.pdf).

*(Click here for more about this paper from CiteSeer research index)*.

*back to the list of publications*

Neural trees are constructive algorithms
which build decision trees whose nodes
are binary neurons. We propose a new learning scheme, "trio-learning", which
leads to a significant reduction in the tree complexity. In the
trio strategy, each node
of the tree is optimized by taking into account the knowledge
that it will be followed by two son nodes. Moreover, trio-learning can be used to
build hybrid trees, with internal nodes and terminal nodes of different nature, for solving
any standard task (e.g. classification, regression, density estimation). Promising
results on a handwritten character classification task are presented.

(Copyright © World Scientific Publishing Co.)

Int. Journ. of Neur. Syst. Volume 5, Issue 4 (December 1994) pages 259-274.

(abstract and full paper from IJNS web site)

*This work has been performed at Laboratoires d'Electronique Philips S.A.S. (LEP), Limeil-Brévannes, France.
Other publications with LEP: click here, and see below*.

*back to the list of publications*

Review paper on sparse coding in (auto)associative memories
- in particular in Willshaw *et al* (1969) and Hopfield (1982) type models.

*In* The Handbook of Brain Theory and Neural Networks,
Arbib M. A. Ed. (MIT Press, 1995) pp. 899-901

Related works:

- Associative memory: on the (puzzling) sparse coding limit
**J.-P. Nadal**, Journal of Physics A: Mathematical and General, Vol. 24 (1991) pp. 1093-1101
(abstract and full paper available on IoP electronic journals web site).

- Information storage in sparsely-coded memory nets
**J.-P. Nadal and G. Toulouse**, Network: Computation in Neural Systems, Vol. 1 (1990) pp. 61-74
(abstract and full paper on the review web site. NB: initially published by IOP, this journal has moved to Taylor & Francis/informa healthcare.)

*back to the list of publications*

We consider a linear, one-layer feedforward neural network performing
a coding task. The goal of the network is to provide a
statistical neural representation that conveys
as much information as possible on the input stimuli in noisy conditions.
We determine the family of synaptic couplings that maximizes
the mutual information between input and output distributions.
Optimization is performed under different constraints on the synaptic
efficacies. We analyze the dependence of the solutions on
input and output noises. This work goes beyond previous studies
of the same problem in that:

(i) we perform a detailed stability
analysis in order to find the global maxima of the mutual information;

(ii) we examine the properties of the optimal synaptic configurations
under different constraints;

(iii) we do not assume translational
invariance of the input data, as is usually
done when inputs are assumed to be visual stimuli.

Network: Computation in Neural Systems, Volume 6, Number 3 (August 1995), pages 449-468

Initially published by IOP,
this journal has moved to Taylor & Francis/informa healthcare.

(preprint.ps.gz, preprint.pdf).

*back to the list of publications*

We present a numerical study of a neural tree learning algorithm, the
**trio-learning** strategy. We study the behaviour of the algorithm as
a function of the size of the training set
(*figure*). The results show that a
limited number of examples can be used to estimate both the network performance
and the network complexity that would result from running the algorithm
on a large data set.

Neural Processing Letters
2(2):1-4 (March 1995).
(TOC of the issue
on H. G. Schuster web site)

(preprint.ps.gz, preprint.pdf)

*This work has been performed at Laboratoires d'Electronique Philips S.A.S. (LEP), Limeil-Brévannes, France.
Other publications with LEP: click here*.

*back to the list of publications*

mean field approximation and maximum entropy principle

We present a formal, although simple, approach to the modeling of
buyer behavior in the type of markets studied
in Weisbuch, Kirman
and Herreiner, 1995.
We compare possible
buyers' choice functions, such as linear or logit functions. We study the
resulting behaviour, showing that it
depends on some convexity properties of the choice function.
Our results make use of standard Statistical Physics concepts and techniques.
In particular we use the "mean field approximation" to derive the
long term behaviour of buyers, and
we show that the standard "logit" choice function
can be justified from a general optimization principle,
leading to an *exploration-exploitation* compromise.
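The optimization principle behind the logit rule can be made concrete. The sketch below (illustrative payoff values, not from the paper) shows the standard derivation: the logit (softmax) choice distribution is exactly the one maximizing expected payoff plus an entropy bonus, i.e. an exploitation term plus an exploration term weighted by 1/beta.

```python
import math

def logit_choice(utilities, beta):
    """Logit (softmax) choice probabilities at inverse temperature beta."""
    ws = [math.exp(beta * u) for u in utilities]
    z = sum(ws)
    return [w / z for w in ws]

def objective(p, utilities, beta):
    """Expected payoff plus entropy/beta: the exploration-exploitation trade-off."""
    h = -sum(q * math.log(q) for q in p if q > 0)
    return sum(q * u for q, u in zip(p, utilities)) + h / beta

u = [1.0, 0.5, 0.2]          # hypothetical payoffs of three sellers
p = logit_choice(u, beta=2.0)
```

At the logit distribution the objective attains its analytic maximum, (1/beta) * log(sum of exp(beta*u)); any other distribution (uniform, greedy) scores lower.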

In *Advances in Self-Organization and Evolutionary Economics*,
J. Lesourne
and A. Orléan
Eds. (Economica, London, 1998), pp. 149-159.

(preprint1996.ps.gz, preprint1996.pdf).

*back to the list of publications*

Conditions on Cumulants and Adaptive Approaches

In the context of both sensory coding and signal processing,
building factorized codes has been shown to be an efficient
strategy. In a wide variety of situations, the signal to be
processed is a linear mixture of statistically independent sources.
Building a factorized code is then equivalent to performing blind
source separation. Thanks to the linear structure of the data, this
can be done, in the language of signal processing, by finding an
appropriate linear filter, or equivalently, in the language of
neural modeling, by using a simple feedforward neural network.

In this article, we discuss several aspects of the source
separation problem. We give simple conditions on the network output
that, if satisfied, guarantee that source separation has been
obtained. Then we study adaptive approaches, in particular those
based on redundancy reduction and maximization of mutual
information. We show how the resulting updating rules are related
to the BCM theory of synaptic plasticity. Eventually we briefly
discuss extensions to the case of nonlinear mixtures. Throughout
this article, we take care to put into perspective our work with
other studies on source separation and redundancy reduction. In
particular we review algebraic solutions, pointing out their
simplicity but also their drawbacks.
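The linear setting described above can be illustrated with a minimal, self-contained sketch; this is not the paper's algebraic or adaptive algorithm, but a standard contrast related to the cumulant conditions it discusses. Two independent uniform sources are mixed by a hypothetical 2x2 matrix; the data are whitened, after which separation reduces to a single rotation, chosen here to maximize the total non-Gaussianity (absolute excess kurtosis) of the outputs. All parameter values are illustrative.

```python
import math
import random

random.seed(1)
N = 4000
# two independent, non-Gaussian (uniform) sources
s1 = [random.uniform(-1, 1) for _ in range(N)]
s2 = [random.uniform(-1, 1) for _ in range(N)]
# hypothetical 2x2 linear mixture
x1 = [a + 0.6 * b for a, b in zip(s1, s2)]
x2 = [0.4 * a + b for a, b in zip(s1, s2)]

def mean(v):
    return sum(v) / len(v)

def center(v):
    m = mean(v)
    return [u - m for u in v]

x1, x2 = center(x1), center(x2)

# whiten: analytic eigendecomposition of the 2x2 covariance matrix
c11 = mean([a * a for a in x1])
c22 = mean([b * b for b in x2])
c12 = mean([a * b for a, b in zip(x1, x2)])
t = 0.5 * math.atan2(2 * c12, c11 - c22)
ct, st = math.cos(t), math.sin(t)
e1 = [ct * a + st * b for a, b in zip(x1, x2)]
e2 = [-st * a + ct * b for a, b in zip(x1, x2)]
z1 = [u / math.sqrt(mean([v * v for v in e1])) for u in e1]
z2 = [u / math.sqrt(mean([v * v for v in e2])) for u in e2]

def excess_kurtosis(v):
    """Fourth cumulant of a unit-variance, zero-mean sample."""
    return mean([u ** 4 for u in v]) - 3.0

def rotate(a):
    ca, sa = math.cos(a), math.sin(a)
    return ([ca * u + sa * w for u, w in zip(z1, z2)],
            [-sa * u + ca * w for u, w in zip(z1, z2)])

# after whitening, separation is one rotation: scan angles, keep the
# one maximizing total non-Gaussianity of the two outputs
best_angle = max((k * math.pi / 180 for k in range(90)),
                 key=lambda a: sum(abs(excess_kurtosis(y)) for y in rotate(a)))
y1, y2 = rotate(best_angle)

def corr(a, b):
    ma, mb = mean(a), mean(b)
    va = math.sqrt(mean([(u - ma) ** 2 for u in a]))
    vb = math.sqrt(mean([(u - mb) ** 2 for u in b]))
    return mean([(u - ma) * (w - mb) for u, w in zip(a, b)]) / (va * vb)

# each recovered component should match one source (up to sign/permutation)
match1 = max(abs(corr(y1, s1)), abs(corr(y1, s2)))
match2 = max(abs(corr(y2, s1)), abs(corr(y2, s2)))
```

The recovered components match the sources only up to permutation and sign, the usual indeterminacies of blind source separation.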

(Copyright © The MIT Press)

Neural Computation,
Vol. 9, Issue 7 (October 1997), pages 1421-1456

(preprint.pdf)

*back to the list of publications*

We study the information processing properties of a binary channel receiving data from a Gaussian source. A systematic comparison with linear processing is done. A remarkable property of the binary system is that, as the ratio $\alpha$ between the number of output and input units increases, binary processing becomes equivalent to linear processing with a quantization output noise that depends on $\alpha$. In this regime, which holds up to $O(\alpha^{-4})$, information processing occurs as if populations of $\alpha$ binary units cooperate to represent one $\alpha$-bit output unit. Unsupervised learning of a noisy environment by optimization of the parameters of the binary channel is also considered.

Network: Computation in Neural Systems, Volume 8, Number 4 (November 1997), pages 405-424.

Initially published by IOP,
this journal has moved to Taylor & Francis/informa healthcare.

(preprint.ps.gz, preprint.pdf)

*back to the list of publications*

We show that the statistics of an edge-type variable in natural
images exhibits self-similarity properties which resemble those of
local energy dissipation in turbulent flows. Our results show that self-similarity
and extended self-similarity hold remarkably for the statistics of the
local edge variance, and that the very same models can be used to predict
all of the associated exponents. These results suggest using natural
images as a laboratory for testing more elaborate scaling models of
interest for the statistical description of turbulent flows. The properties we
have exhibited are relevant for the modeling of the early visual
system: They should be included in models designed for the prediction of receptive fields.

[© 1998 by The American Physical Society]

Physical Review Letters, Volume 80, Issue 5 (February 2, 1998) pp. 1098-1101

(preprint.ps.gz, preprint.pdf)
(paper from PRL on line)

For more recent works on this subject, see below and the web site of Antonio Turiel.

*back to the list of publications*

infomax implies redundancy reduction

Independent Component Analysis (ICA), and in particular
Blind Source Separation (BSS),
can be obtained from the maximization of mutual information,
as first shown in Nadal and Parga 1994.
The practical interest of this information theoretic
based cost function was then demonstrated in several BSS applications
(see e.g. Bell and Sejnowski 1995, ICA at CNL).

In the present paper the main
result of Nadal and Parga 1994 is extended to the case
of stochastic outputs. More precisely,
we prove that maximization of mutual information between the output
and the input of a feedforward neural network leads to
full redundancy reduction
under the following sufficient
conditions:

(1) the input signal is a (possibly nonlinear) invertible mixture
of independent components; (2) there is no input noise;
(3) the activity of each output neuron is a (possibly) stochastic variable
with a probability distribution depending on the stimulus through
a deterministic function of the inputs; both the probability
distributions and the functions can be different
from neuron to neuron; (4) optimization of the mutual information
is performed over all these deterministic functions.

Network: Computation in Neural Systems, Volume 9, Number 2 (May 1998) pages 207-217

Initially published by IOP,
this journal has moved to
Taylor & Francis/informa healthcare.

(preprint.ps.gz, preprint.pdf)

*back to the list of publications*

In the context of parameter estimation and model
selection, it is only quite recently that
a direct link between the *Fisher information*
and information theoretic quantities has been exhibited. We
give an interpretation of this link within the standard
framework of information theory.
We show that in the context of
population coding,
the mutual information between the activity of a large array of neurons and
a stimulus to which the neurons are tuned is naturally related
to the Fisher information.

In the light of this result we consider the optimization
of the tuning curves parameters
in the case of neurons responding to a stimulus
represented by an angular variable.
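The population-coding setting can be sketched numerically. Below, a hypothetical array of bell-shaped tuning curves for an angular stimulus, with Poisson spike-count noise assumed, for which the Fisher information takes the standard form J(theta) = sum_i f_i'(theta)^2 / f_i(theta). All parameter values (number of neurons, peak rate, tuning width, baseline) are illustrative, not taken from the paper.

```python
import math

N = 32                       # hypothetical number of neurons
rmax, sigma, base = 10.0, 0.5, 0.1
centers = [2 * math.pi * i / N for i in range(N)]

def f(i, th):
    """Circular-Gaussian tuning curve of neuron i, with a small baseline."""
    d = math.atan2(math.sin(th - centers[i]), math.cos(th - centers[i]))
    return rmax * math.exp(-d * d / (2 * sigma ** 2)) + base

def fprime(i, th, h=1e-5):
    return (f(i, th + h) - f(i, th - h)) / (2 * h)

def fisher(th):
    """Fisher information of the population under Poisson spike-count noise."""
    return sum(fprime(i, th) ** 2 / f(i, th) for i in range(N))

vals = [fisher(2 * math.pi * k / 50) for k in range(50)]
```

With evenly spaced curves the Fisher information is nearly uniform over the stimulus circle, which is the regime in which the mutual information between stimulus and population activity is controlled by J(theta), as discussed in the paper.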

(Copyright © The MIT Press)

Neural Computation, Volume 10, issue 7 (October 1, 1998) pp. 1731-1757.

(preprint.ps.gz, preprint.pdf)

*back to the list of publications*

the mutual information between parameters and observations

Recent works in parameter estimation and neural coding have demonstrated that optimal performance is related to the mutual information between parameters and data. In this paper:

We study the mutual information between parameter and data for a family of supervised and unsupervised
learning tasks. The parameter is a possibly, but not necessarily, high-dimensional vector. We derive exact
bounds and asymptotic behaviors for the mutual information as a function of the data size and of some
properties of the probability of the data given the parameter. We compare these exact results with the
predictions of replica calculations. We briefly discuss the universal properties of the mutual information
as a function of data size.

[© 1999 The American Physical Society]

Phys. Rev. E
Volume 59, issue 3 (March 1, 1999), pp. 3344-3360.

(preprint.ps.gz, preprint.pdf)
(abstract and paper from Phys. Rev. E online).

*Short version presented at* NIPS*98:
**Unsupervised clustering:
the mutual information between parameters and observations
Didier Herschkowitz and Jean-Pierre Nadal**

Related works at LPS :

D. Herschkowitz and M. Opper, "Retarded Learning: Rigorous Results from Statistical Mechanics",
Phys.Rev.Lett. 86, 2174 (2001).

See also below.

*back to the list of publications*

We address the problem of blind source separation in the case of a
time dependent mixture matrix.
For a slowly and smoothly varying mixture matrix, we propose a systematic expansion
which leads to a practical algebraic solution when
stationary and ergodic properties hold for the sources.

[© 2000 Elsevier Science B. V.]

Signal Processing, volume 80 issue 10 (October 2000) pp. 2187-2194.

(preprint.ps.gz, preprint.pdf)
(abstract and full text from Elsevier Science Direct)

*back to the list of publications*

Abstract (translated from the French):

With the aim of identifying the physical causes of the variability of a dynamical system, the geophysical community makes intensive use of statistical component-extraction techniques. A recently developed algorithm, based on information theory, is introduced in this work: independent component analysis (ICA). This technique has two major advantages over classical techniques. First, it aims at extracting statistically independent components, where classical techniques only seek decorrelation. Second, the linear hypothesis for the mixture of components is not required. This new technique is presented in the context of geophysical time-series analysis. The ICA algorithm is applied to the study of the variability of the tropical sea surface temperature (SST), with particular attention to the analysis of the links between the El Niño/Southern Oscillation (ENSO) phenomenon and Atlantic SST variability.

(© 1999 - Académie des Sciences/ Éditions Scientifiques et Médicales Elsevier SAS)

Les Comptes rendus de l'Académie des Sciences (CRAS), Geoscience (Série IIa, Sciences de la terre et des planètes), vol. 328 num. 9 (1999) pp. 569-575.

*back to the list of publications*

With the aim of identifying the physical causes of variability of a given dynamical system, the geophysical community has made extensive use of classical component extraction techniques such as principal component analysis (PCA) or rotational techniques (RT). We introduce a recently developed algorithm based on information theory: independent component analysis (ICA). This new technique presents two major advantages over classical methods. First, it aims at extracting statistically independent components where classical techniques search for decorrelated components (i.e., a weaker constraint). Second, the linear hypothesis for the mixture of components is not required. In this paper, after having briefly summarized the essentials of classical techniques, we present the new method in the context of geophysical time series analysis. We then illustrate the ICA algorithm by applying it to the study of the variability of the tropical sea surface temperature (SST), with a particular emphasis on the analysis of the links between El Niño Southern Oscillation (ENSO) and Atlantic SST variability. The new algorithm appears to be particularly efficient in describing the complexity of the phenomena and their various sources of variability in space and time.

(© 2000 by the American Geophysical Union)

Journal of Geophysical Research - Atmospheres, Vol. 105 , No. D13 , p. 17,437 (2000).

*back to the list of publications*

We present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. High thresholds yield convergence of opinions toward an average opinion, whereas low thresholds result in several opinion clusters. The model is further generalized to network interactions, threshold heterogeneity, adaptive thresholds, and binary strings of opinions.
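The encounter rule described above can be sketched directly. This is a minimal implementation of the bounded-confidence dynamics; the population size, the convergence parameter mu and the number of steps are illustrative choices, not the paper's.

```python
import random
import statistics

random.seed(0)

def simulate(n_agents, threshold, mu=0.5, steps=100000):
    """Random pairwise encounters: two opinions move toward each other
    only when they differ by less than the threshold."""
    x = [random.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if i != j and abs(x[i] - x[j]) < threshold:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

consensus = simulate(200, threshold=0.5)  # high threshold: one opinion cluster
clusters = simulate(200, threshold=0.1)   # low threshold: several clusters
```

The spread of final opinions distinguishes the two regimes: near-zero for the high threshold, large for the low one.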

(Copyright © 2002 Wiley Periodicals, Inc., A Wiley Company)

Complexity Vol 7:3 (2002) pp 55-63 (abstract and paper from Wiley InterScience)

Preprint Nov. 2001, "Interacting Agents and Continuous Opinions Dynamics"
cond-mat/0111494

also as Santa Fe working paper #01-11-072
and on RePEc
and HAL open archives.

*back to the list of publications*

We show that the lower bound to the critical fraction of data needed to infer (learn) the
orientation of the anisotropy axis of a probability distribution, determined by Herschkowitz
and Opper [Phys.Rev.Lett. 86, 2174 (2001)], is not always valid. If there is some
structure in the data along the anisotropy axis, their analysis is incorrect, and learning is
possible with many fewer data points.

[© 2002 by The American Physical Society]

Comment, Physical Review Letters 88, 099801 (2002)

(preprint: cond-mat/0201256)
*This article has been selected for the
February 15, 2002 issue of the Virtual Journal of Biological Physics Research published by the American Institute of Physics and the American Physical Society.*

*back to the list of publications*

Natural images are complex but very structured objects and, in spite of their
complexity, the sensory areas in the neocortex of mammals are able to devise learned
strategies to encode them efficiently. How is this goal achieved? In this paper, we
will discuss the multiscaling approach, which has been recently used to derive a
redundancy reducing wavelet basis. This kind of representation can be statistically
learned from the data and is optimally adapted for image coding; besides, it presents
some remarkable features found in the visual pathway. We will show that the
introduction of oriented wavelets is necessary to provide a complete description, which
stresses the role of the wavelets as edge detectors.

(Copyright © 2003 Elsevier Science Ltd.)

Vision Research Vol. 43:9 (2003) pp. 1061-1079.

(preprint.pdf)

For related works, see above and the web site of
Antonio Turiel.

*back to the list of publications*

This letter suggests that in biological organisms, the perceived structure
of reality, in particular the notions of body, environment, space, object,
and attribute, could be a consequence of an effort on the part of brains to
account for the dependency between their inputs and their outputs in terms
of a small number of parameters. To validate this idea, a
procedure is demonstrated whereby the brain of an organism with
arbitrary input and output connectivity can deduce the dimensionality
of the rigid group of the space underlying its input-output relationship, that is
the dimension of what the organism will call physical space.

(© 2003 The MIT Press)

Neural Computation Vol 15:9 (Sept. 2003) pp 2029-2049.

(selected as sample article of the Sept. 2003 issue)

(preprint.ps.gz - preprint.pdf)

*back to the list of publications*

Motivation: We consider any collection of microarrays that can be ordered to form a progression,
as a function of time, or severity of disease, or dose of a stimulant, for example. By plotting the
expression level of each gene as a function of time, or severity, or dose, we form an expression
series, or curve, for each gene. While most of these curves will exhibit random fluctuations, some
will contain pattern, and it is these genes which are most likely associated with the independent
variable.

Results: We introduce a method of identifying pattern and hence genes in microarray expression
curves without knowing what kind of pattern to look for. Key to our approach is the sequence
of ups and downs formed by pairs of consecutive data points in each curve. As a benchmark, we
blindly identified yeast cell cycles genes without selecting for periodic or any other anticipated
behaviour.
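The up-down idea can be sketched as follows. This is an illustration of the principle only, not the authors' actual statistic: here a curve is reduced to its sequence of ups and downs, and a permutation test on a simple hypothetical statistic (the longest monotone run) flags curves whose shape is unlikely under random ordering of the same values.

```python
import random

def signature(curve):
    """Sequence of ups (+) and downs (-) of consecutive data points."""
    return ''.join('+' if b > a else '-' for a, b in zip(curve, curve[1:]))

def longest_run(sig):
    best = cur = 1
    for a, b in zip(sig, sig[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def run_p_value(curve, n_perm=2000, seed=0):
    """Permutation p-value: how often does a random reordering of the
    same values produce at least as long a monotone run?"""
    rng = random.Random(seed)
    obs = longest_run(signature(curve))
    vals = list(curve)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(vals)
        if longest_run(signature(vals)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

trend = [0.1 * k for k in range(10)]  # strongly patterned expression curve
p_trend = run_p_value(trend)
```

A monotone curve is flagged as patterned (tiny p-value) without specifying in advance what kind of pattern to look for, which is the spirit of the method.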

(Copyright © 2005 Oxford Journals)

Bioinformatics 2005, Oct. 15; 21(20):3859-3864

Supplementary information can be found at http://www.lps.ens.fr/~willbran/up-down/

Related publications by K. Willbrand (LPS ENS) and Th. Fink (Inst. Curie): see here

*back to the list of publications*

It is widely believed that synaptic modifications underlie learning and memory. However, few studies have examined what can be deduced about the learning process from the distribution of synaptic weights. We analyze the perceptron, a prototypical feedforward neural network, and obtain the optimal synaptic weight distribution for a perceptron with excitatory synapses. It contains more than 50% silent synapses, and this fraction increases with storage reliability: silent synapses are therefore a necessary byproduct of optimizing learning and reliability. Exploiting the classical analogy between the perceptron and the cerebellar Purkinje cell, we fitted the optimal weight distribution to that measured for granule cell-Purkinje cell synapses. The two distributions agreed well, suggesting that the Purkinje cell can learn up to 5 kilobytes of information in the form of 40,000 input-output associations.

(Copyright © 2004 by Cell Press)

Neuron, Vol 43, 745-757, 2 September 2004

(with supplemental data, see here).

*Related work here*.

*back to the list of publications*

The interpretation of geophysical data, such as images of subsurface rocks (seismic data, borehole scans), requires, in particular, an elaborate segmentation analysis of strongly textured, anisotropic, and not necessarily brightness-contrasted images. In this paper we explore the possibility of deriving new segmentation algorithms from recent advances in the neural modelling of pre-attentive segmentation in human vision. More specifically we consider a neural model proposed by Zhaoping Li. First, we reproduce some specific results obtained by Zhaoping Li on simple artificial and real images sharing some textural characteristics with geophysical data. Next, from the analysis of the model behaviour, we propose an image processing workflow depending on the textural characteristics and on the type of segmentation (contour enhancement or texture edge detection) one is interested in. With this algorithm one gets promising results: from the computation of a single attribute one extracts the oriented textured feature boundaries without prior classification.

(Copyright © Institute of Physics and IOP Publishing Limited 2004)

J. Geophys. Eng. 1 (2004) 312-326.

Published online 22 November 2004 - Print publication: Issue 4 (10 December 2004)

Currently freely available online as one of the featured articles of the review.

*back to the list of publications*

We propose an agent-based model of a single-asset financial market, described in terms of a small number of parameters, which generates price returns with statistical properties similar to the stylized facts observed in financial time series. Our agent-based model generically leads to the absence of autocorrelation in returns, self-sustaining excess volatility, mean-reverting volatility, volatility clustering and endogenous bursts of market activity non-attributable to external noise. The parsimonious structure of the model allows the identification of feedback and heterogeneity as the key mechanisms leading to these effects.

(Copyright © Institute of Physics and IOP Publishing Limited 2005)

J. Phys.: Condens. Matter 17 No 14 (13 April 2005) S1259-S1268

Paper on IOP site.

*back to the list of publications*

In this paper, we consider a discrete choice model where heterogeneous agents are subject to mutual influences. We explore some consequences on the market's behaviour, in the simplest case of a uniform willingness to pay distribution. We exhibit a first-order phase transition in the profit optimization by the monopolist: if the social influence is strong enough, there is a regime where, if the mean willingness to pay increases, or if the production costs decrease, the optimal solution for the monopolist jumps from a solution with a high price and a small number of buyers, to a solution with a low price and a large number of buyers. Depending on the path of price adjustments by the monopolist, simulations show hysteretic effects on the fraction of buyers.

(Copyright © 2005 Elsevier B.V.)

Physica A,
Volume 356, Issues 2-4 , 15 October 2005, Pages 628-640

(available online 13 June 2005).

preprint.pdf

*back to the list of publications*

We explore the effects of social influence in a simple market model in which a large number of agents face a binary choice: to buy/not to buy a single unit of a product at a price posted by a single seller (monopoly market). We consider the case of *positive externalities*: an agent is more willing to buy if other agents make the same decision. We consider two special cases of heterogeneity in the individuals' decision rules, corresponding in the literature to the Random Utility Models of Thurstone, and of McFadden and Manski. In the first one the heterogeneity fluctuates with time, leading to a standard model in Physics: the Ising model at finite temperature (known as annealed disorder) in a uniform external field. In the second approach the heterogeneity among agents is fixed; in Physics this is a particular case of quenched disorder models known as random field Ising model, at zero temperature. We study analytically the equilibrium properties of the market in the limiting case where each agent is influenced by all the others (the mean field limit), and we illustrate some dynamic properties of these models making use of numerical simulations in an Agent based Computational Economics approach.
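The quenched (McFadden and Manski, random-field) case admits a very compact mean-field sketch. Assuming, for illustration only, idiosyncratic willingness to pay uniform on [0, 1]: an agent buys iff IWP + J*eta > price, where eta is the fraction of buyers, so the mean-field equilibrium solves eta = clip(1 - price + J*eta). For weak social influence J the fixed point is unique; for strong J two stable equilibria coexist, the hysteresis discussed above. All parameter values are illustrative.

```python
def buyer_fraction_fixed_point(price, j_social, eta0, iters=500):
    """Iterate the mean-field equation eta = clip(1 - price + J*eta)
    for IWP uniform on [0, 1] (an agent buys iff IWP + J*eta > price)."""
    eta = eta0
    for _ in range(iters):
        eta = min(1.0, max(0.0, 1.0 - price + j_social * eta))
    return eta

# weak social influence (J < 1): a unique equilibrium, whatever the start
low = buyer_fraction_fixed_point(0.9, 0.8, eta0=0.0)
high = buyer_fraction_fixed_point(0.9, 0.8, eta0=1.0)

# strong social influence (J > 1): two stable equilibria (hysteresis)
low_s = buyer_fraction_fixed_point(1.2, 1.5, eta0=0.0)
high_s = buyer_fraction_fixed_point(1.2, 1.5, eta0=1.0)
```

With J = 0.8 both initial conditions converge to the same fraction of buyers; with J = 1.5 the market settles at "nobody buys" or "everybody buys" depending on its history.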

(Copyright © 2005 Taylor & Francis)

Quantitative Finance Vol.5, No. 6, December 2005, 557-568.

(preprint.pdf -
preliminary version: condmat 0311096)

*back to the list of publications*

Phonological rules relate surface phonetic word forms to abstract underlying forms that are stored in the lexicon. Infants must thus acquire these rules in order to infer the abstract representation of words. We implement a statistical learning algorithm for the acquisition of one type of rule, namely allophony, which introduces context-sensitive phonetic variants of phonemes. This algorithm is based on the observation that different realizations of a single phoneme typically do not appear in the same contexts (ideally, they have complementary distributions). In particular, it measures the discrepancies in context probabilities for each pair of phonetic segments. In Experiment 1, we test the algorithm's performance on a pseudo-language and show that it is robust to statistical noise due to sampling and coding errors, and to non-systematic rule application. In Experiment 2, we show that a natural corpus of semiphonetically transcribed child-directed speech in French presents a very large number of near-complementary distributions that do not correspond to existing allophonic rules. These spurious allophonic rules can be eliminated by a linguistically motivated filtering mechanism based on a phonetic representation of segments. We discuss the role of a priori linguistic knowledge in the statistical learning of phonology.
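
The core statistic, a discrepancy measure between the context distributions of two segments, can be sketched in a few lines. This toy version scores only right-hand contexts and is not the authors' exact measure:

```python
from collections import Counter

def context_profiles(corpus):
    """For each phonetic segment, the empirical distribution of its right
    context (the immediately following segment) over a list of words."""
    counts = {}
    for word in corpus:
        for seg, ctx in zip(word, word[1:]):
            counts.setdefault(seg, Counter())[ctx] += 1
    return {seg: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for seg, cnt in counts.items()}

def overlap(p, q):
    """Overlap between two context distributions: 0 when they are perfectly
    complementary (a good allophone-pair candidate), 1 when identical."""
    return sum(min(p.get(c, 0.0), q.get(c, 0.0)) for c in set(p) | set(q))

# Toy pseudo-language: 'D' occurs only before 'i', 'd' never before 'i'
# (complementary distributions, so 'd'/'D' look like allophones), while
# 'n' occurs in all contexts.
profiles = context_profiles(["da", "do", "Di", "Di", "na", "ni", "no"])
```

On real corpora, as Experiment 2 shows, many low-overlap pairs are spurious, which is why the phonetic filtering step is needed.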

(Copyright © 2005 Elsevier B.V.)

Cognition, Volume 101, Issue 3, October 2006, Pages B31-B41

(preprint Oct. 2005 - paper.pdf)

*back to the list of publications*

We consider a model of socially interacting individuals that make a binary choice in a context of positive additive endogenous externalities. It encompasses as particular cases several models from the sociology and economics literature. We extend previous results to the case of a general distribution of idiosyncratic preferences, called here Idiosyncratic Willingnesses to Pay (IWP).

Positive additive externalities yield a family of inverse demand curves that include the classical downward-sloping ones, but also new ones with non-constant convexity. When $j$, the ratio of the social influence strength to the standard deviation of the IWP distribution, is small enough, the inverse demand is a classical monotonic (decreasing) function of the adoption rate. Even if the IWP distribution is mono-modal, there is a critical value of $j$ above which the inverse demand is non-monotonic: decreasing at low and high adoption rates, but increasing within some intermediate range. Depending on the price, there are thus either one or two equilibria.

Beyond this first result, we exhibit the {\em generic} properties of the boundaries limiting the regions where the system presents different types of equilibria (unique or multiple). These properties are shown to depend {\em only} on qualitative features of the IWP distribution: modality (number of maxima), smoothness and type of support (compact or infinite).
The main results are summarized as {\em phase diagrams} in the space of the model parameters, on which the regions of multiple equilibria are precisely delimited.
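
For one concrete case, a logistic (mono-modal, infinite-support) IWP distribution, the threshold separating the two shapes of the inverse demand can be written down explicitly. A sketch with notation ours (here j is the absolute social influence strength and s the logistic scale, rather than the paper's normalised ratio):

```python
import math

def inverse_demand(eta, j, s=1.0, mu=0.0):
    """Inverse demand when the IWP distribution is logistic with mean mu
    and scale s: the price at which a fraction eta of the agents buys,
        p(eta) = mu + s*log((1 - eta)/eta) + j*eta.
    Since dp/deta = j - s/(eta*(1 - eta)), p is decreasing for all eta
    iff j < 4s; above that critical influence strength the curve rises
    on an intermediate range of adoption rates, giving two equilibria
    for some prices."""
    return mu + s * math.log((1.0 - eta) / eta) + j * eta
```

With j = 2 (below 4s) the curve is monotonically decreasing; with j = 6 it dips and rises again at intermediate adoption rates.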

Mathematical Models and Methods in Applied Sciences (M3AS), Volume: 19, Supplementary Issue 1(2009) pp. 1441-1481 (DOI: 10.1142/S0218202509003887)

Preprint arXiv:0704.2333v1 [physics.soc-ph], and HAL-SHS
or RePEc (Research Papers in Economics) open archives, March 2007

*back to the list of publications*

We consider a social system of interacting heterogeneous agents with learning abilities, a model close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. Given a fixed price, agents decide repeatedly whether or not to buy a unit of a good, so as to maximize their expected utilities. We show that the equilibrium reached by the system depends on the nature of the information agents use to estimate their expected utilities.

(Copyright © 2008 Elsevier B.V.)

Physica A: Statistical Mechanics and its Applications Volume 387, Issues 19-20, August 2008, Pages 4903-4916
( online 10 April 2008 ).

Preprint arXiv:0704.2324v1 [physics.soc-ph], April 2007.

*back to the list of publications*

Crime is an economically relevant activity. It may represent a mechanism of wealth distribution, but also a social and economic burden, because of the interference with regular legal activities and the cost of the law enforcement system. Sometimes it may be less costly for the society to allow for some level of criminality. However, a drawback of such a policy is that it may lead to a sharp increase of criminal activity, which may become hard to reduce later on. Here we investigate the level of law enforcement required to keep crime within acceptable limits. A sharp phase transition is observed as a function of the probability of punishment. We also analyze other consequences of criminality, such as the growth of the economy, the inequality in the wealth distribution (the Gini coefficient) and other relevant quantities under different scenarios of criminal activity and probabilities of apprehension.

(Copyright © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2009)

Eur. Phys. J. B 68, 133-144 (2009) DOI: 10.1140/epjb/e2009-00066-x

Preprint arXiv:0710.3751 Oct. 2007.

*back to the list of publications*

Much research effort into synaptic plasticity has been motivated by the idea that modifications of synaptic weights (or strengths or efficacies) underlie learning and memory. Here, we examine the possibility of exploiting the statistics of experimentally measured synaptic weights to deduce information about the learning process. Analysing distributions of synaptic weights requires a theoretical framework to interpret the experimental measurements, but the results can be unexpectedly powerful, yielding strong constraints on possible learning theories as well as information that is difficult to obtain by other means, such as the information storage capacity of a cell. We review the available experimental and theoretical techniques as well as important open issues.

(Copyright © 2007 Elsevier B.V.)

Trends in Neurosciences, Volume 30, Issue 12, December 2007, Pages 622-629

online November 5th, 2007, on TINS (Elsevier - ScienceDirect) web site.

*Related work here*.

*back to the list of publications*

Information Efficiency and Optimal Population Codes

This paper deals with the analytical study of coding a discrete set of categories
by a large assembly of neurons. We consider population coding schemes, which can
also be seen as instances of exemplar models proposed in the literature to account for
phenomena in the psychophysics of categorization. We quantify the coding efficiency
by the mutual information between the set of categories and the neural code, and we
characterize the properties of the most efficient codes, considering different regimes
corresponding essentially to different signal-to-noise ratios. One main outcome is
to find that, in a high signal-to-noise ratio limit, the Fisher information at the
population level should be the greatest between categories, which is achieved by
having many cells with the stimulus-discriminating parts (steepest slope) of their
tuning curves placed in the transition regions between categories in stimulus space.
We show that these properties are in good agreement with both psychophysical data
-- from different domains such as object recognition and speech perception --,
and with the neurophysiology of the inferotemporal cortex in the monkey, a cortical
area known to be specifically involved in classification tasks.

(Copyright © Springer)

Journal of Computational Neuroscience 25:1 August 2008 pp. 169-187 (DOI: 10.1007/s10827-007-0071-5)

online 31 January 2008 -
preprint.pdf (May 2007).

*Related works, same authors*:

- "Perception of categories: from coding efficiency to reaction times", here.
- "From Exemplar Theory to Population Coding and Back - An Ideal Observer Approach" (preprint.pdf), proceedings of the workshop "Exemplar-Based Models of Language Acquisition and Use", Dublin, 2007.

*back to the list of publications*

This paper summarizes the effects of social influences in a monopoly market with heterogeneous agents. The market equilibria are presented in the limiting case of global influence. Considering static profit maximization, there may exist two different regimes: to sell either to a large fraction of customers at a low price, or to a small fraction of them at a higher price. This arises for numerous mono-modal distributions of idiosyncratic willingness to pay if the social influence is strong enough. The seller's optimal strategy switches from one regime to the other at parameter values where the demand has two different Nash equilibria; but the strategy of posting low prices to attract large fractions of buyers may fail due to a lack of coordination.

European Journal of Economic and Social Systems (EJESS) Vol. 22/1 (2009) pp. 11-18 (doi:10.3166/ejess.22.11-18) (full text on EJESS site)

*back to the list of publications*

The collective behavior in a variant of Schelling's segregation model is characterized with methods borrowed from statistical physics, in a context where their relevance was not conspicuous. A measure of segregation based on cluster geometry is defined, and several quantities analogous to those used to describe physical lattice models at equilibrium are introduced. This physical approach allows us to distinguish quantitatively several regimes and to characterize the transitions between them, leading to the construction of a phase diagram. Some of the transitions evoke empirical sudden ethnic turnovers. We also establish links with 'spin-1' models in physics. Our approach provides generic tools to analyze the dynamics of other socio-economic systems.
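
A cluster-geometry segregation measure of the kind defined here can be sketched with a flood fill on the lattice. The normalised weighted mean cluster size below is an illustrative choice, not necessarily the paper's exact definition:

```python
def cluster_sizes(grid):
    """Sizes of same-type clusters on a lattice (4-neighbour flood fill);
    0 marks a vacancy, any other value is an agent type."""
    n, m = len(grid), len(grid[0])
    seen, sizes = set(), []
    for i in range(n):
        for j in range(m):
            if (i, j) in seen or grid[i][j] == 0:
                continue
            stack, kind, size = [(i, j)], grid[i][j], 0
            seen.add((i, j))
            while stack:
                x, y = stack.pop()
                size += 1
                for u, v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= u < n and 0 <= v < m and (u, v) not in seen \
                            and grid[u][v] == kind:
                        seen.add((u, v))
                        stack.append((u, v))
            sizes.append(size)
    return sizes

def segregation_index(grid):
    """Weighted mean cluster size, normalised by the number of occupied
    sites: close to 1/N for a fully mixed pattern, 1/2 when each of two
    types forms a single block."""
    sizes = cluster_sizes(grid)
    total = sum(sizes)
    return sum(s * s for s in sizes) / (total * total)
```

On a checkerboard pattern the index is minimal; on a two-block configuration it reaches 1/2, so the index tracks the growth of segregated domains.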

The European Physical Journal B - Condensed Matter and Complex Systems (EPJB) Volume 70:2 (2009) pp. 293-304 (DOI: 10.1140/epjb/e2009-00234-0)

online 8 July 2009 (full text on EPJB site) -
preprint arXiv:0903.4694, March 2009.

*back to the list of publications*

Basic evidence on non-profit and other benevolence-based organizations reveals a rough partition of members between some {\em pure consumers} of the public good (free-riders) and {\em benevolent} individuals (cooperators). We study the relationship between the community size and the level of cooperation in a simple model where the utility of joining the community is proportional to its size. We assume an idiosyncratic willingness to join the community; cooperation bears a fixed cost, while free-riding bears a (moral) idiosyncratic cost proportional to the fraction of cooperators. We show that the system presents two types of equilibria: fixed points (Nash equilibria) with a mixture of cooperators and free-riders, and cycles where the size of the community, as well as the proportions of cooperators and free-riders, vary periodically.

The European Physical Journal B - Condensed Matter and Complex Systems (EPJB) Volume 71, Number 4 / Octobre 2009, pp. 597-610 (doi: 10.1140/epjb/e2009-00325-x).

online 7 October 2009
- preprint hal-00349642, January 2009.

*back to the list of publications*

A single social phenomenon (such as crime, unemployment or birth rate) can be observed through temporal series corresponding to units at different levels (cities, regions, countries...). Units at a given local level may follow a collective trend imposed by external conditions, but may also display fluctuations of purely local origin. The local behavior is usually computed as the difference between the local data and a global average (e.g. a national average), a viewpoint that can be very misleading. In this article, we propose a method for separating the local dynamics from the global trend in a collection of correlated time series. We take an independent component analysis approach in which, in contrast with previously proposed methods, we do not assume a small average local contribution. We first test our method on financial time series, for which various data analysis tools have already been used. For the S&P500 stocks, our method is able to identify two classes of stocks with markedly different behaviors: the `followers' (stocks driven by the collective trend), and the `leaders' (stocks for which local fluctuations dominate). Furthermore, as a byproduct contributing to its validation, the method also allows us to classify stocks into several groups consistent with industrial sectors. We then consider crime rate series, a domain where the separation between global and local policies is still a major subject of debate. We apply our method to the states in the US and the regions in France. In the case of the US data, we observe large fluctuations in the transition period of the mid-70s, during which crime rates increased significantly, whereas since the 80s the state crime rates have been governed by external factors, the importance of local specificities decreasing.
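
To see what a followers/leaders split looks like in practice, here is a deliberately crude stand-in for the paper's method: a leading-mode (PCA-style) collective trend plus per-series regression residuals. The paper's ICA approach does not reduce to this, and all names are ours:

```python
import math, random

def demean(x):
    m = sum(x) / len(x)
    return [v - m for v in x]

def split_local_global(series, n_iter=200):
    """Separate a collection of time series into a collective trend and
    local residuals: power iteration extracts the leading mode of the
    covariance matrix (taken as the global trend), and each series'
    residual after regressing on that trend is its local dynamics."""
    xs = [demean(s) for s in series]
    n, T = len(xs), len(xs[0])
    cov = [[sum(a[t] * b[t] for t in range(T)) / T for b in xs] for a in xs]
    v = [1.0] * n
    for _ in range(n_iter):
        w = [sum(cov[i][k] * v[k] for k in range(n)) for i in range(n)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    trend = [sum(v[i] * xs[i][t] for i in range(n)) for t in range(T)]
    tt = sum(g * g for g in trend)
    residuals = []
    for x in xs:
        a = sum(x[t] * trend[t] for t in range(T)) / tt  # regression slope
        residuals.append([x[t] - a * trend[t] for t in range(T)])
    return trend, residuals
```

A 'follower' built as trend-plus-small-noise ends up with a small residual variance; an independent 'leader' series keeps essentially all of its variance in the local part.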

Proceedings of the National Academy of Sciences (PNAS)

online April 12, 2010, doi: 10.1073/pnas.0910259107

Preprint arXiv:0909.1490 September 2009.
*Echo in the nonacademic press: Where local policy matters, 16 April 2010, in Emerging Health Threats Forum (a not-for-profit Community Interest Company, established with support from the UK's Health Protection Agency).*

*back to the list of publications*

In the 70s, Schelling introduced a multiagent model to describe the segregation dynamics that may occur with individuals having only weak preferences for "similar" neighbors. Recently, variants of this model have been discussed, in particular with emphasis on the links with statistical physics models. Whereas these models consider a fixed number of agents moving on a lattice, here we present a version allowing for exchanges with an external reservoir of agents. The density of agents is controlled by a parameter which can be viewed as measuring the attractiveness of the city lattice. This model is directly related to the zero-temperature dynamics of the Blume-Emery-Griffiths spin-1 model, with kinetic constraints. With a varying vacancy density, the dynamics with agents making deterministic decisions leads to a variety of "phases" whose main features are the characteristics of the interfaces between clusters of agents of different types. The domains of existence of each type of interface are obtained analytically as well as numerically. These interfaces may completely isolate the agents, leading to another type of segregation as compared to what is observed in the original Schelling model, and we discuss its possible socioeconomic correlates.

Copyright © 2010 The American Physical Society

Phys. Rev. E 81, 066120 (2010)

Preprint arXiv:1002.3758, February 2010.

*back to the list of publications*

Reaction times in perceptual tasks are the subject of many experimental and theoretical studies. With the neural decision-making process as main focus, most of these works concern discrete (typically binary) choice tasks, implying the identification of the stimulus as an exemplar of a category. Here we address issues specific to the perception of categories (e.g. vowels, familiar faces, ...), making a clear distinction between identifying a category (an element of a discrete set) and estimating a continuous parameter (such as a direction). We exhibit a link between optimal Bayesian decoding and coding efficiency, the latter being measured by the mutual information between the discrete category set and the neural activity. We characterize the properties of the best estimator of the likelihood of the category, when this estimator takes its inputs from a large population of stimulus-specific coding cells. Adopting the diffusion-to-bound approach to model the decisional process, this allows us to relate analytically the bias and variance of the diffusion process underlying decision making to macroscopic quantities that are behaviorally measurable. A major consequence is the existence of a quantitative link between reaction times and discrimination accuracy. The resulting analytical expression of mean reaction times during an identification task accounts for empirical facts, both qualitatively (e.g. more time is needed to identify a category from a stimulus at the boundary, compared to a stimulus lying within a category) and quantitatively (working on published experimental data on phoneme identification tasks).
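
The qualitative link between distance to the category boundary and mean reaction time can be reproduced with a bare-bones diffusion-to-bound simulation (the drift values are arbitrary; this is not the paper's fitted model):

```python
import random

def reaction_time(drift, bound=1.0, dt=0.01, sigma=1.0, rng=random):
    """One trial of a diffusion-to-bound decision: noisy evidence with the
    given drift accumulates until it hits +bound or -bound; the hitting
    time is the decision part of the reaction time."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return t

random.seed(0)
# A stimulus near the category boundary gives a weak likelihood signal
# (small drift), hence longer mean reaction times than a stimulus lying
# well within a category (large drift).
rt_boundary = sum(reaction_time(0.2) for _ in range(400)) / 400
rt_within = sum(reaction_time(2.0) for _ in range(400)) / 400
```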

Copyright © 2012, Elsevier.

Preprint arXiv:1102.4749, Feb. 2011. Supporting information here.

Brain Research, Volume 1434, 24 January 2012, Pages 47-61.

*Related works, same authors*:

- "Neural Coding of Categories: Information Efficiency and Optimal Population Codes", J. of Comput. Neuroscience 2008, here.
- "From Exemplar Theory to Population Coding and Back - An Ideal Observer Approach" (paper here), proceedings of the workshop "Exemplar-Based Models of Language Acquisition and Use", Dublin, 2007.

*back to the list of publications*

Addressing issues in social diversity, we introduce a model of housing transactions
between agents heterogeneous in their willingness to pay. A key
assumption is that agents' preferences for a place depend on both an intrinsic attractiveness
and the social characteristics of its neighborhood.
The stationary spatial distribution of income is characterized analytically and numerically.
The main results are that socio-spatial segregation occurs whenever the social influence is strong enough, but even so, some social
diversity is preserved at most locations. Compared with the Parisian housing
market, the results reproduce general trends in the
price distribution and the spatial segregation of income.

© 2013 Elsevier B.V.

Journal of Economic Dynamics and Control (JEDC), Volume 37, Issue 7, July 2013, Pages 1300-1321

doi:10.1016/j.jedc.2013.03.001.

Preprint arXiv:1012.2606.

Extended abstract in the ICCS2011 online proceedings.

*back to the list of publications*

A new viewpoint on electoral involvement is proposed from the study of the statistics of the proportions of abstentionists, blank and null ballots, and votes according to the list of choices, in a large number of national elections in different countries. Considering 11 countries without compulsory voting (Austria, Canada, Czech Republic, France, Germany, Italy, Mexico, Poland, Romania, Spain and Switzerland), a stylized fact emerges for the most populated cities when one computes the entropy associated with the three ratios, which we call the entropy of civic involvement of the electorate. The distribution of this entropy (over all elections and countries) appears to be sharply peaked near a common value. This almost common value has typically been shared since the 1970s by electorates of the most populated municipalities, despite the wide disparities between voting systems and types of elections. Performing different statistical analyses, we notably show that this stylized fact reveals particular correlations between the blank/null votes and abstentionist ratios. We suggest that the existence of this hidden regularity, which we propose to coin a `weak law on recent electoral behavior among urban voters', reveals an emerging collective behavioral norm characteristic of urban citizen voting behavior in modern democracies. Analyzing exceptions to the rule provides insights into the conditions under which this normative behavior can be expected to occur.
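
The statistic behind this stylized fact is simply the entropy of the three shares. A minimal sketch (the paper may normalise or bin differently):

```python
import math

def civic_entropy(abstention, blank_null, expressed):
    """Entropy of the three ratios (abstentionists, blank-and-null ballots,
    expressed votes) among registered voters. Raw counts are accepted and
    normalised internally; the maximum, log(3), is reached when the three
    shares are equal."""
    total = abstention + blank_null + expressed
    shares = (abstention / total, blank_null / total, expressed / total)
    return -sum(p * math.log(p) for p in shares if p > 0)
```

The stylized fact is then that, across elections and countries, this quantity concentrates near a common value for the most populated municipalities.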

PLoS ONE 7(7): e39916. doi:10.1371/journal.pone.0039916 (published: July 25, 2012) - Preprint arXiv:1202.6307.

*back to the list of publications*

The cerebellum has long been considered to undergo supervised learning, with climbing fibers acting as a 'teaching' or 'error' signal. Purkinje cells (PCs), the sole output of the cerebellar cortex, have been considered as analogs of perceptrons storing input/output associations. In support of this hypothesis, a recent study found that the distribution of synaptic weights of a perceptron at maximal capacity is in striking agreement with experimental data in adult rats. However, the calculation was performed using random uncorrelated inputs and outputs. This is a clearly unrealistic assumption, since sensory inputs and motor outputs carry a substantial degree of temporal correlations. In this paper, we consider a binary output neuron with a large number of inputs, which is required to store associations between temporally correlated sequences of binary inputs and outputs, modelled as Markov chains. Storage capacity is found to increase with both input and output correlations, and diverges in the limit where both go to unity. We also investigate the capacity of a bistable output unit, since PCs have been shown to be bistable in some experimental conditions. Bistability is shown to enhance storage capacity whenever the output correlation is stronger than the input correlation. The distribution of synaptic weights at maximal capacity is shown to be independent of correlations, and is also unaffected by the presence of bistability.

PLoS Computational Biology, 8(4): e1002448 (2012) doi:10.1371/journal.pcbi.1002448

*back to the list of publications*

Whenever customers' choices (e.g. to buy or not a given good) depend on others'
choices (cases coined 'positive externalities' or 'bandwagon effect' in the
economic literature), the demand may be multiply valued: for the same posted
price, there is either a small number of buyers, or a large one -- in which
case one says that the customers *coordinate*. This leads to a dilemma for the seller:
should he sell at a high price, targeting a small number of buyers,
or at a low price, targeting a large number of buyers? In this paper we show that
the interaction between demand and supply is even more complex than expected,
leading to what we call the *curse of coordination*: the pricing strategy
that maximizes the seller's profit corresponds to posting a price which not
only assumes that the customers will coordinate, but also lies very near the
critical value beyond which such a high demand no longer exists.
This is obtained by the detailed mathematical analysis of a particular model
formally related to the Random Field Ising Model and to a model introduced in social sciences by T. C. Schelling in the 70s.

(Copyright © Springer 2012)

Journal of Statistical Physics: Volume 151, Issue 3 (2013), Page 494-522
doi:10.1007/s10955-012-0660-1

(this article is part of the special issue Statistical Mechanics and Social Sciences, II)

Online 13 Dec. 2012 -
Preprint arXiv:1209.1321

*back to the list of publications*

We introduce and analyze several variants of a system of differential equations which model the dynamics of social outbursts, such as riots. The systems involve the coupling of an explicit variable representing the intensity of rioting activity and an underlying (implicit) field of social tension. Our models include the effects of exogenous and endogenous factors as well as various propagation mechanisms. From numerical and mathematical analysis of these models, we show that the assumptions made on how different locations influence one another, and on how the tension in the system disperses, play a major role in the qualitative behavior of bursts of social unrest. Furthermore, we analyze various properties of these systems, such as the existence of traveling wave solutions, and formulate some new open mathematical problems which arise from our work.

(Copyright © AIMS 2015)

Networks and Heterogeneous Media (NHM), Vol. 10, No. 3, pp. 443-475, September 2015 (online July 2015)

doi:10.3934/nhm.2015.10.443 (abstract and paper on NHM web site).

Preprint: arXiv:1502.04725

*back to the list of publications*

Though numerous numerical studies have investigated language change, grammaticalization and the diachronic phenomena of language renewal seem so far to have been left aside. We argue that previous models, dedicated to other purposes, make representational choices that cannot easily account for this type of phenomenon. In this paper we propose a new framework, aiming to depict linguistic renewal through numerical simulations. We illustrate it with a specific implementation which brings to light the phenomenon of semantic bleaching.

(Copyright © ATALA 2016)

Traitement Automatique des Langues (TAL), 2014, Volume 55, Num. 3, pp. 47-71

Paper online May 2016 (article in French).
*Related works, same authors (in English):
"Modeling Language Change: The Pitfall of Grammaticalization", book chapter,
in "Language in Complexity: The Emerging Meaning", Springer 2016, pp. 49-72*,

and paper below.

*back to the list of publications*

The cerebellum aids the learning and execution of fast coordinated movements, with acquired information being stored by plasticity of parallel fibre--Purkinje cell synapses. According to the current consensus, erroneously active parallel fibre synapses are depressed by complex spikes arising as climbing fibres signal movement errors. However, this theory cannot solve the credit assignment problem of using the limited information from a global movement evaluation to optimise behaviour by guiding the plasticity in numerous neurones. We identify the possible implementation of an algorithm solving this problem, whereby spontaneous complex spikes perturb ongoing movements, create an eligibility trace for plasticity and signal resulting error changes to guide plasticity. These error changes are extracted by adaptively cancelling the average error. This framework, stochastic gradient descent with estimated global errors, generates specific predictions for synaptic plasticity rules that contradict the current consensus. However, in vitro plasticity experiments under physiological conditions verified our predictions, highlighting the sensitivity of plasticity studies to unphysiological conditions. Using numerical and analytical approaches we demonstrate the convergence and estimate the capacity of learning in our implementation. Finally, a similar mechanism may operate during optimisation of action sequences by the basal ganglia, where dopamine could both initiate movements and signal rewards, analogously to the dual perturbation and correction role of the climbing fibre outlined here.
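
The perturb/compare/update loop can be caricatured on a linear unit. The following is generic node perturbation with an adaptively cancelled average error, written only to echo the algorithm's ingredients, not the paper's cerebellar implementation; all parameter values and names are ours:

```python
import random

def train_sgd_ege(n_inputs=5, trials=8000, eta=0.003, sigma=0.5, seed=1):
    """Caricature of stochastic gradient descent with estimated global
    errors: a random binary perturbation xi (the 'complex spike') is added
    to the output, the resulting global error is compared with an
    adaptively cancelled running average, and the difference, gated by xi,
    drives the weight update. Returns the final squared distance to the
    target weights (small after training)."""
    rng = random.Random(seed)
    w_target = [rng.gauss(0.0, 1.0) for _ in range(n_inputs)]
    w = [0.0] * n_inputs
    err_avg = 0.0
    for trial in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n_inputs)]
        xi = sigma * rng.choice((-1.0, 1.0))   # output perturbation
        y = sum(wi * v for wi, v in zip(w, x)) + xi
        target = sum(wt * v for wt, v in zip(w_target, x))
        err = (y - target) ** 2                # global movement error
        if trial == 0:
            err_avg = err
        err_avg = 0.99 * err_avg + 0.01 * err  # adaptive error cancellation
        for i in range(n_inputs):              # plasticity gated by xi
            w[i] -= eta * (err - err_avg) * xi * x[i]
    return sum((a - b) ** 2 for a, b in zip(w, w_target))
```

On average the update follows the gradient of the unperturbed error, because the error change correlates with the perturbation only through the term it has itself caused.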

Submitted - bioRxiv (2016), doi: http://dx.doi.org/10.1101/053785

*back to the list of publications*

As a large-scale instance of dramatic collective behavior, the 2005 French riots started in a poor suburb of Paris, then spread to all of France, lasting about three weeks. Remarkably, although there were no displacements of rioters, the riot activity did travel. The daily national police data to which we had access have allowed us to take advantage of this natural experiment to explore the dynamics of riot propagation. Here we show that an epidemic-like model, with fewer than 10 free parameters and a single sociological variable characterizing neighborhood deprivation, accounts quantitatively for the full spatio-temporal dynamics of the riots. This is the first time that such data-driven modeling involving contagion both within and between cities (through geographic proximity or media), at the scale of a country, has been performed. Moreover, we give a precise mathematical characterization of the expression ``wave of riots'', and provide a visualization of the propagation around Paris, exhibiting the wave in a way not described before. The remarkable agreement between model and data demonstrates that geographic proximity played a major role in the riot propagation, even though information was readily available everywhere through media. Finally, we argue that our approach gives a general framework for the modeling of spontaneous collective uprisings.
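
The contagion mechanism, within-city growth plus weaker between-city coupling, can be illustrated with a toy SIR-like chain of cities. Parameter values and the chain topology are ours, chosen only to exhibit the travelling peak, and this is far simpler than the fitted model of the paper:

```python
def simulate(n_cities=5, days=100, beta=0.5, kappa=0.02, gamma=0.3,
             seed_city=0):
    """Toy epidemic-like contagion on a chain of cities: susceptible
    individuals S[i] are recruited into rioting activity I[i] by local
    activity and, more weakly (kappa), by activity in neighbouring
    cities; rioters quit at rate gamma. A shock seeds one city and the
    activity peak then travels along the chain."""
    S = [1.0] * n_cities
    I = [0.0] * n_cities
    I[seed_city] = 0.01
    history = []
    for _ in range(days):
        new_I = []
        for i in range(n_cities):
            neigh = sum(I[k] for k in (i - 1, i + 1) if 0 <= k < n_cities)
            recruits = min(S[i], S[i] * (beta * I[i] + kappa * neigh))
            S[i] -= recruits
            new_I.append(I[i] + recruits - gamma * I[i])
        I = new_I
        history.append(list(I))
    return history

history = simulate()
# Day of peak activity in each city: later and later along the chain,
# i.e. a travelling wave of rioting activity without moving rioters.
peak_days = [max(range(len(history)), key=lambda t: history[t][i])
             for i in range(5)]
```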

Submitted - arXiv preprint http://arxiv.org/abs/1701.07479 (Jan. 25, 2017).

Supporting information, videos: see here.

*back to the list of publications*

It is generally believed that, when a linguistic item acquires a new meaning, its overall frequency of use in the language rises with time, following an S-shaped growth curve. Yet this claim has only been supported by a limited number of case studies. In this paper, we provide the first corpus-based quantitative confirmation of the genericity of the S-curve in language change. Moreover, we uncover another generic pattern: a latency phase of variable duration preceding the S-growth, during which the frequency of use of the semantically expanding word remains low and more or less constant. We also propose a usage-based model of language change, supported by cognitive considerations, which predicts that both phases, the latency and the fast S-growth, take place. The driving mechanism is a stochastic dynamics, a random walk in the space of frequency of use. The underlying deterministic dynamics highlights the role of a control parameter, the strength of the cognitive impetus governing the onset of change, which tunes the system in the vicinity of a saddle-node bifurcation. In the neighborhood of the critical point, the latency phase corresponds to the diffusion time over the critical region, and the S-growth to the fast convergence that follows. The durations of the two phases are computed as specific first-passage times of the random walk process, leading to distributions that fit well the ones extracted from our dataset. We argue that our results are not specific to the studied corpus, but apply to semantic change in general.
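
The latency-then-S-growth mechanism can be illustrated with the noisy normal form of a saddle-node bifurcation (a toy illustration with arbitrary parameters, not the fitted model of the paper):

```python
import random

def trajectory(r=1e-4, sigma=0.02, dt=0.01, x0=-1.0, x_end=1.0, seed=2):
    """Noisy saddle-node normal form, dx = (x**2 + r) dt + sigma dW.
    For small r > 0 the walk creeps through the bottleneck near x = 0
    (the latency phase) and then escapes quickly (the fast S-growth)."""
    rng = random.Random(seed)
    x, path = x0, []
    while x < x_end:
        x += (x * x + r) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

path = trajectory()
# Fraction of the trajectory spent in the bottleneck |x| < 0.2: close to 1,
# i.e. the latency phase dominates the total first-passage time.
frac_latency = sum(1 for x in path if abs(x) < 0.2) / len(path)
```

The variability of the time spent in the bottleneck across realizations is what produces the broad distribution of latency durations described above.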

Submitted. arXiv preprint https://arxiv.org/abs/1703.00203 (Mar. 1st, 2017).
*Related works, same authors: here (in French) and here.*

*back to the list of publications*
