International Conference papers (97)

[1] Shatha F. Al-Maliki, Evelyne Lutton, Francois Boue, and Franck P. Vidal. Evolutionary Interactive Analysis of MRI Gastric Images Using a Multiobjective Cooperative-coevolution Scheme. In Gary K. L. Tam and Franck Vidal, editors, Computer Graphics and Visual Computing (CGVC). The Eurographics Association, 2018. [ bib | .pdf ]
In this study, we combine computer vision and visualisation/data exploration to analyse MRI data and detect garden peas inside the stomach. This is a preliminary objective of a larger project that aims to understand the kinetics of gastric emptying. We propose to perform the image analysis task as a multi-objective optimisation. A set of 7 equally important objectives is proposed to characterise peas. We rely on a cooperative co-evolution algorithm called the ‘Fly Algorithm’, implemented using NSGA-II. The Fly Algorithm is a specific case of the ‘Parisian Approach’, where the solution of an optimisation problem is represented by a set of individuals (i.e. the whole population) instead of a single individual (the best one) as in typical evolutionary algorithms (EAs). NSGA-II is a popular EA used to solve multi-objective optimisation problems. The output of the optimisation is a succession of datasets that progressively approximate the Pareto front, which needs to be understood and explored by the end-user. Using interactive Information Visualisation (InfoVis) and clustering techniques, peas are then semi-automatically segmented.
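
A minimal Python sketch of the Pareto-dominance test underlying NSGA-II-style selection, as used to rank individuals against several objectives. The seven objective values per fly below are placeholder numbers, not the pea-characterisation objectives of the paper, and all objectives are assumed to be minimised.

```python
# Sketch of Pareto dominance for NSGA-II-style ranking (minimisation assumed).
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep the individuals whose objective vectors are non-dominated."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Each "fly" is scored on 7 objectives (placeholder values, lower is better).
flies = [(0.2, 0.5, 0.1, 0.3, 0.9, 0.4, 0.2),
         (0.3, 0.4, 0.2, 0.3, 0.8, 0.5, 0.1),
         (0.4, 0.6, 0.3, 0.5, 0.9, 0.6, 0.3)]
print(pareto_front(flies))  # the third fly is dominated by the first
```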

[2] Shatha F. Al-Maliki, Evelyne Lutton, Francois Boue, and Franck P. Vidal. MRI Gastric Images Processing using a Multiobjective Fly Algorithm. In Parallel Problem Solving from Nature, PPSN XV, 2018. [ bib | .pdf ]
When dealing with rare and sparse data, like the ones collected during a long and expensive experimental process, machine learning is used from a different perspective. In this context, optimisation-based approaches combined with user visualisation and interactions are sometimes the best way to cope with modelling issues. We present here an example related to an experimental project aiming at understanding the kinetics of gastric emptying using MRI images of the stomach of healthy volunteers. We show how a cooperative co-evolution algorithm, the "Fly Algorithm", can be made multi-objective, and its output, a complex Pareto front, analysed using interactive Information Visualisation (InfoVis) and clustering.

[3] Juliane Floury, Tiago Bianchi, Jonathan Thevenot, Didier Dupont, Frederic Jamme, Evelyne Lutton, Maud Panouille, François Boue, and Steven Le Feunteun. Digestion of milk protein gels in simulated gastric environment: exploration of the disintegration process and diffusion behavior of pepsin. In 2nd Food Structure and Functionality Forum Symposium - from Molecules to Functionality, Singex, Singapore, February 2016. [ bib | http ]
Gastric digestion comprises three phases: physical disintegration, chemical breakdown and nutrient release. Controlling food protein gelation conditions leads to the formation of particles with specific structural features that change protein digestibility. The development of foods with specific proteolysis rates allows them to meet the needs of different ‘nutritionally vulnerable groups’ (newborns, the elderly, the obese, athletes). The hypothesis is that the overall proteolysis reaction rate is limited by the pepsin diffusion rate within the protein structures generated in the stomach. Three milk gels with the same protein concentration but different microstructures were prepared either by rennet, acid coagulation of non-fat milk, or heat treatment of whey proteins. The disintegration of the different gel networks was investigated under digestion in simulated gastric conditions, and the effect of the acidic environment was uncoupled from the enzyme effect. The first effect was monitored during 30 minutes before addition of pepsin for two hours of digestion. Kinetics of the process was monitored by particle size measurements and matter loss. Proteolysis was characterized by SDS-PAGE, and diffusion of fluorescently labelled (FITC) pepsin within the gels was followed using fluorescence recovery after photobleaching with confocal microscopy. In contrast to acid and whey protein gels, rennet gels underwent large microstructural modifications under acidic conditions, forming extremely compact protein aggregates that significantly slowed down pepsin diffusion rates through the modified gel network. Microscopic observations showed slower morphological evolution during the enzymatic digestion, whose rates depended on the gel considered. Moreover, pepsin was able to diffuse within the aggregates. Recent microscopic observations obtained by tryptophan fluorescence imaging on the SOLEIL synchrotron DISCO beamline suggest that the particles were enzyme-digested inside out. In this study, we succeeded in interpreting the digestion phases as microstructural transformation, enzymatic reaction and diffusion phenomena, in order to further dismantle the digestion process from a process engineering perspective.

[4] Thomas Chabin, Alberto Tonda, and Evelyne Lutton. How to mislead an evolutionary algorithm using global sensitivity analysis. In Stéphane Bonnevay, Pierrick Legrand, Nicolas Monmarché, Evelyne Lutton, and Marc Schoenauer, editors, Artificial Evolution: 12th International Conference, Evolution Artificielle, EA 2015, Lyon, France, October 26-28, 2015. Revised Selected Papers, pages 44-57. Springer International Publishing, 2016. [ bib | DOI | http | .pdf ]
The idea of exploiting Global Sensitivity Analysis (GSA) to make Evolutionary Algorithms more effective seems very attractive: intuitively, a probabilistic analysis can prove useful to a stochastic optimisation technique. GSA, which gathers information about the behaviour of functions receiving some inputs and delivering one or several outputs, is based on computationally-intensive stochastic sampling of a parameter space. Nevertheless, efficiently exploiting the information gathered from GSA might not be so straightforward. In this paper, we present three mono- and multi-objective counterexamples showing how naively combining GSA and EAs may mislead an optimisation process.

[5] Evelyne Lutton, Alberto Tonda, Nadia Boukhelifa, and Nathalie Perrot. Complex systems in food science: Human factor issues. In FOODSIM'2016, April 3-7, Catholic University Leuven, Ghent, Belgium, 2016. Keynote Invited Speech. [ bib | .pdf ]
Building in-silico decision making systems is essential in the food domain, albeit highly difficult. This task strongly relies on multidisciplinary research and in particular on advanced techniques from artificial intelligence. The success of such systems depends on how well they cope with the complex properties of food processes, such as the large variety of interacting components including those related to human expertise; and their dynamic, non-linear, multi-scale, uncertain and non-equilibrium behaviors. Robust stochastic optimization techniques, evolutionary computation and in particular Interactive Evolutionary Computation (IEC) seem to be a fruitful framework for developing food science models. A Human-Centered approach to Interactive Evolutionary Computation is discussed in this paper as a possible pertinent way to cope with challenges related to human factors in this context.

[6] Nadia Boukhelifa, Anastasia Bezerianos, Alberto Tonda, and Evelyne Lutton. Research prospects in the design and evaluation of interactive evolutionary systems for art and science. In ACM CHI Workshop on Human Centered Machine Learning, San Jose, CA, United States, 2016. [ bib | http | .pdf ]
We report on our experience in designing and evaluating seven applications from seven different domains using an interactive evolutionary approach. We conducted extensive evaluations for some of these applications, both quantitative and qualitative, and collected rich feedback from our ongoing collaborations with end-user scientists and artists. To ground our discussion, we refer to two applications, from art and science, as exemplars of our work in order to identify strengths and weaknesses in our approach. We argue that human-centered design could play an important role in addressing some of the identified issues such as the “black box” and the “user-bottleneck” effects. We discuss research opportunities requiring human-computer interaction methodologies in order to support both the visible and hidden roles that humans play in interactive evolutionary computation and machine learning.

[7] J. Thevenot, J. Floury, F. Jamme, M. Panouille, E. Lutton, F. Boue, D. Dupont, and S. Le Feunteun. Gastric digestion of milk protein gels as assessed by time-lapse synchrotron UV-microscopy. In The 16th European Microscopy Congress (EMC 2016), 28 August - 2 September, Lyon, France, 2016. [ bib | http ]
Gastric digestion is the result of physical disintegration, acidic hydrolysis and enzymatic reactions leading to the release of nutrients which are absorbed in the upper intestinal tract. Protein is one of the essential macro-nutrients and can be eaten in a great variety of forms (solubilized, cross-linked, in the native or denatured state). Controlling food protein gelation conditions results in the formation of particles with specific structural features. Several in vivo and in vitro studies have shown an influence of the macro- and microstructure on the kinetics of milk protein hydrolysis. Nevertheless, the mechanisms by which the structure of dairy gels can affect the digestion kinetics remain largely unknown. The aim of the study was to assess the part played by HCl and the gastric enzyme (i.e. pepsin) during gastric digestion using a dynamic and label-free imaging technique on the DISCO beamline of Synchrotron SOLEIL to visualize in situ the milk protein gel breakdown kinetics. The DISCO beamline uses the deep ultraviolet range to probe the intrinsic UV tryptophan fluorescence without the need for specific external probes. Two milk gels with the same protein concentration but different microstructures were prepared either by rennet or acid coagulation of non-fat milk. The disintegration of the different networks was monitored under digestion at body temperature in simulated gastric fluids, and the effect of the acidic environment was uncoupled from the enzyme effect. The evolution of particle area and mean fluorescence intensity was determined and used to estimate the kinetics of food particle breakdown. The kinetics of acid gel in vitro digestion was significantly reduced compared to rennet gel. Our data indicate that rennet gel has a two-step behavior during the acidification phase, with a swelling followed by a contraction of the particles, which was not observed for acid gel. In addition, these microstructural modifications of rennet gel negatively affect the enzymatic breakdown kinetics of particles compared to acid gel. This study led to original methodological developments, both from the point of view of data acquisition and of their joint analysis. Getting in situ information about digestion kinetics, microstructural transformation and enzymatic reaction allows further analysis of the digestion process.

[8] Thomas Chabin, Alberto Tonda, and Evelyne Lutton. Is global sensitivity analysis useful to evolutionary computation? In Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO Companion '15, pages 1365-1366, New York, NY, USA, 2015. ACM. [ bib | DOI | http | .pdf ]
Global Sensitivity Analysis (GSA) studies how uncertainty in the inputs of a system influences uncertainty in its outputs. GSA is extensively used by experts to gather information about the behavior of models, through computationally-intensive stochastic sampling of the parameter space. Some studies propose to make use of the considerable quantity of data acquired in this way to optimize the model parameters, often resorting to Evolutionary Algorithms (EAs). Nevertheless, efficiently exploiting information gathered from GSA might not be so straightforward. In this paper, we present a counterexample followed by experimental results to show how naively combining GSA and an EA can bring about negative outcomes.
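
To make the GSA side of the argument concrete, here is a hedged Python sketch of a first-order, variance-based sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), estimated by binning Monte Carlo samples; the model `f` is a toy stand-in, not one of the paper's counterexamples.

```python
import numpy as np

def first_order_index(f, dim, i, n=100_000, bins=50, seed=0):
    """Crude estimate of S_i = Var(E[Y|X_i]) / Var(Y) by binning on X_i."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))           # uniform sampling of the parameter space
    y = f(x)
    idx = np.minimum((x[:, i] * bins).astype(int), bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Toy model: x0 influences the output far more than x1.
f = lambda x: x[:, 0] + 0.1 * x[:, 1]
print(first_order_index(f, dim=2, i=0))  # close to 0.99
print(first_order_index(f, dim=2, i=1))  # close to 0.01
```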

Keywords: easea, evolutionary computation, global sensitivity analysis, real-valued optimization
[9] Nadia Boukhelifa, Anastasia Bezerianos, and Evelyne Lutton. A mixed approach for the evaluation of a guided exploratory visualization system. In EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3), 25-26 May, 2015. [ bib | .pdf ]
We summarise and reflect upon our experience in evaluating a guided exploratory visualization system. Our system guides users in their exploration of multidimensional datasets to pertinent views of their data, where the notion of pertinence is defined by automatic indicators, such as the amount of visual patterns in the view, and subjective user feedback obtained during their interaction with the tool. To evaluate this type of system, we argue for deploying a collection of validation methods that are: user-centered, observing the utility and effectiveness of the system for the end-user; and algorithm-centered, analysing the computational behaviour of the system. We report on observations and lessons learnt from working with expert users both for the design and the evaluation of our system.

[10] Evelyne Lutton and Nathalie Perrot. Complex systems in food science: Human factor issues. In DOF 2015. 6th International Symposium on Delivery of Functionality in Complex Food Systems Physically-Inspired Approaches from the Nanoscale to the Microscale. July 14 to 17, Maison de la Chimie - Paris, France, 2015. Keynote Speech. [ bib | .pdf ]
Complex systems approaches are an attractive way to model food systems, as they yield powerful tools to address challenging issues like multi-scale modelling and big data. The specifics of the food domain, however, bring the focus onto another crucial issue: what can be called the human factor. At every stage, human expertise and decision making are of major importance for a better understanding of food systems. Dealing with this is neither a simple nor a solved problem. This talk illustrates some prospective research in this direction.

[11] Etienne Descamps, Alberto Tonda, Sebastien Gaucel, Ian C Trelea, Evelyne Lutton, and Nathalie Perrot. Modeling competition phenomena in a dairy oil-in-water emulsion using hybrid kinetic Monte Carlo simulations. In DOF 2015. 6th International Symposium on Delivery of Functionality in Complex Food Systems Physically-Inspired Approaches from the Nanoscale to the Microscale. July 14 to 17, Maison de la Chimie - Paris, France, 2015. [ bib | .pdf | .pdf ]
[12] Evelyne Lutton, Hugo Gilbert, Waldo Cancino, Benjamin Bach, Pierre Parrend, and Pierre Collet. GridVis: Visualisation of island-based parallel genetic algorithms. In EvoPar2014, EvoApplications track of EvoStar, The leading European event on Bio-Inspired Computation, LNCS. Springer, April 2014. Best paper award of EvoPar2014, 23-25 April, Granada, Spain. [ bib | .pdf ]
Island Model parallel genetic algorithms rely on various migration models and their associated parameter settings. A fine understanding of how the islands interact and exchange information is an important issue for the design of efficient algorithms. This article presents GridVis, an interactive tool for visualising the exchange of individuals and the propagation of fitness values between islands. We performed several experiments on a grid and on a cluster to evaluate GridVis’ ability to visualise the activity of each machine and the communication flow between machines. Experiments were performed on the optimisation of a Weierstrass function using the EASEA language, with two schemes: one based on uniform islands and another based on specialised islands (Exploitation, Exploration and Storage Islands).

[13] Sebastien Gaucel, Maarten Keijzer, Evelyne Lutton, and Alberto Tonda. Learning dynamical systems using standard symbolic regression. In EuroGP track of EvoStar, The leading European event on Bio-Inspired Computation, LNCS. Springer, April 2014. Best paper award of EvoStar2014, 23-25 April, Granada, Spain. [ bib | .pdf ]
Symbolic regression has many successful applications in learning free-form regular equations from data. Trying to apply the same approach to differential equations is the logical next step: so far, however, results have not matched the quality obtained with regular equations, mainly due to additional constraints and dependencies between variables that make the problem extremely hard to tackle. In this paper we propose a new approach to dynamical systems learning. Symbolic regression is used to obtain a set of first-order Eulerian approximations of differential equations, and mathematical properties of the approximation are then exploited to reconstruct the original differential equations. Advantages of this technique include the de-coupling of systems of differential equations, which can now be learned independently; the possibility of exploiting established techniques for standard symbolic regression, after trivial operations on the original dataset; and the substantial reduction of computational effort, when compared to existing ad-hoc solutions for the same purpose. Experimental results show the efficacy of the proposed approach on an instance of the Lotka-Volterra model.
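
The data transformation at the core of this approach can be sketched as follows: each differential equation is replaced by its first-order Euler approximation, so that derivative estimates become ordinary regression targets and each equation of the system can be learned independently by standard symbolic regression. The Lotka-Volterra coefficients below are illustrative, not those of the paper.

```python
import numpy as np

def euler_targets(series, dt):
    """series: (T, d) array of states; returns (X, dX) regression pairs,
    where dX ~ (x[t+1] - x[t]) / dt approximates the derivatives."""
    X = series[:-1]                       # state at time t
    dX = (series[1:] - series[:-1]) / dt  # finite-difference derivative estimate
    return X, dX

# Simulate a small Lotka-Volterra run to produce training data.
dt, T = 0.01, 1000
s = np.empty((T, 2))
s[0] = (10.0, 5.0)
for t in range(T - 1):
    x, y = s[t]
    s[t + 1] = (x + dt * (1.1 * x - 0.4 * x * y),   # prey
                y + dt * (0.1 * x * y - 0.4 * y))   # predator

X, dX = euler_targets(s, dt)
# Each column dX[:, k] is now a standard symbolic-regression target for f_k(X).
```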

[14] Alberto Tonda, Andre Spritzer, and Evelyne Lutton. Balancing user interaction and control in Bayesian network structure learning. In Artificial Evolution Conference, LNCS 8752. Springer, October 2013. 21-23 October 2013, Bordeaux, France. [ bib | .pdf ]
In this paper we present a study based on an evolutionary framework to explore what would be a reasonable compromise between interaction and automated optimisation in finding possible solutions for a complex problem, namely the learning of Bayesian network structures, an NP-hard problem where user knowledge can be crucial to distinguish among solutions of equal fitness but very different physical meaning. Even though several classes of complex problems can be effectively tackled with Evolutionary Computation, most possess qualities that are difficult to directly encode in the fitness function or in the individual's genotype description. Expert knowledge can sometimes be used to integrate the missing information, but new challenges arise when searching for the best way to access it: full human interaction can lead to the well-known problem of user fatigue, while a completely automated evolutionary process can miss important contributions by the expert. For our study, we developed a GUI-based prototype application that lets an expert user guide the evolution of a network by alternating between fully-interactive and completely automatic steps. Preliminary user tests showed that, despite still requiring some improvements with regard to its efficiency, the proposed approach indeed achieves its goal of delivering satisfying results for an expert user.

[15] Waldo Cancino, Nadia Boukhelifa, Anastasia Bezerianos, and Evelyne Lutton. Evolutionary visual exploration: Experimental analysis of algorithm behaviour. In VizGEC 2013, Workshop on Visualisation Methods in Genetic and Evolutionary Computation, Genetic and Evolutionary Computation Conference, GECCO 2013, July 2013. [ bib | .pdf ]
Recent publications in the domains of interactive evolutionary computation and data visualisation consider an emerging topic coined Evolutionary Visual Exploration (EVE). EVE systems combine visual analytics with stochastic optimisation to aid the exploration of complex, multidimensional datasets. In this work we present an experimental analysis of the behaviour of an EVE system that is dedicated to the visualisation of multidimensional datasets, which are generally characterised by a large number of possible views or projections. EvoGraphDice is an interactive evolutionary system that progressively evolves a small set of new dimensions, to provide new viewpoints on the dataset, in the form of linear and non-linear combinations of the original dimensions. The criteria for evolving new dimensions are not known a priori and are partially specified by the user via an interactive interface: (i) The user selects views with meaningful or interesting visual patterns and provides a satisfaction score. (ii) The system calibrates a fitness function to take into account the user input, and then calculates new views, with the help of an evolutionary engine. In previous work (an observational study), we showed that EvoGraphDice was able to facilitate “exploration” tasks, helping users to discover new interesting views and relationships in their data. Here, we focus on the system's “convergence” behaviour, conducting an experiment with users who have a precise task to perform. The experimental task is set up as a geometrical game, and collected data show that EvoGraphDice is able to “learn” user preferences in a way that helps users fulfill their task (i.e. converge to desired solutions).

[16] Nadia Boukhelifa, Waldo Cancino, Anastasia Bezerianos, and Evelyne Lutton. Evolutionary visual exploration: Evaluation with expert users. In EuroVis 2013, 15th annual Visualization Symposium, June 17-21, 2013, Leipzig, Germany. [ bib | .pdf ]
We present an Evolutionary Visual Exploration (EVE) system that combines visual analytics with stochastic optimisation to aid the exploration of multidimensional datasets characterised by a large number of possible views or projections. Starting from dimensions whose values are automatically calculated by a PCA, an interactive evolutionary algorithm progressively builds (or evolves) non-trivial viewpoints in the form of linear and non-linear dimension combinations, to help users discover new interesting views and relationships in their data. The criteria for evolving new dimensions are not known a priori and are partially specified by the user via an interactive interface: (i) The user selects views with meaningful or interesting visual patterns and provides a satisfaction score. (ii) The system calibrates a fitness function (optimised by the evolutionary algorithm) to take into account the user input, and then calculates new views. Our method leverages automatic tools to detect interesting visual features and human interpretation to derive meaning, validate the findings and guide the exploration without having to grasp advanced statistical concepts. To validate our method, we built a prototype tool (EvoGraphDice) as an extension of an existing scatterplot matrix inspection tool, and conducted an observational study with five domain experts. Our results show that EvoGraphDice can help users quantify qualitative hypotheses and try out different scenarios to dynamically transform their data. Importantly, it allowed our experts to think laterally, better formulate their research questions and build new hypotheses for further investigation.

[17] Evelyne Lutton, Alberto Tonda, Sebastien Gaucel, Julie Foucquier, Alain Riaublanc, and Nathalie Perrot. Food model exploration through evolutionary optimization coupled with visualization: application to the prediction of a milk gel structure. In From Model Foods to Food Models. DREAM Project's International Conference, June 2013. [ bib | .pdf ]
[18] Etienne Descamps, Nathalie Perrot, Sebastien Gaucel, Cristian Trelea, Alain Riaublanc, Alan Mackie, and Evelyne Lutton. Coupling deterministic and random sequential approaches for structure and texture prediction of a dairy oil-in-water emulsion. In From Model Foods to Food Models. DREAM Project's International Conference, June 2013. [ bib | .pdf ]
[19] Franck P Vidal, Pierre-Frederic Villard, and Evelyne Lutton. Automatic tuning of respiratory model for patient-based simulation. In MIBISOC2013, International Conference on Medical Imaging using Bio-Inspired and Soft Computing, May 2013. [ bib | .pdf ]
This paper is an overview of a method recently published in a biomedical journal (IEEE Transactions on Biomedical Engineering). The method is based on an optimisation technique called "evolutionary strategy" and it has been designed to estimate the parameters of a complex 15-D respiration model. This model is adaptable to account for patient specificities. The aim of the optimisation algorithm is to finely tune the model so that it accurately fits real patient datasets. The final results can then be embedded, for example, in high-fidelity simulations of the human physiology. Our algorithm is fully automatic and adaptive. A compound fitness function has been designed to take into account the various quantities that have to be minimised (here, topological errors of the liver and diaphragm geometries). The performance of our implementation is compared with two traditional methods (downhill simplex and conjugate gradient descent), a random search and a basic real-valued genetic algorithm. It shows that our evolutionary scheme provides results that are significantly more stable and accurate than the other tested methods. The approach is relatively generic and can be easily adapted to other complex parametrisation problems when ground truth data is available.

[20] Franck P. Vidal, Yoann L. Pavia, Jean-Marie Rocchisani, Jean Louchet, and Evelyne Lutton. Artificial evolution strategy for PET reconstruction. In MIBISOC2013, International Conference on Medical Imaging using Bio-Inspired and Soft Computing, May 2013. [ bib | .pdf ]
This paper shows new results of our artificial evolution algorithm for positron emission tomography (PET) reconstruction. This imaging technique produces datasets corresponding to the concentration of positron emitters within the patient. Fully three-dimensional (3D) tomographic reconstruction requires high computing power and leads to many challenges. Our aim is to produce high quality datasets in a time that is clinically acceptable. Our method is based on a co-evolution strategy called the "Fly algorithm". Each fly represents a point in space and mimics a positron emitter. Each fly position is progressively optimised using evolutionary computing to closely match the data measured by the imaging system. The performance of each fly is assessed based on its positive or negative contribution to the performance of the whole population. The final population of flies approximates the radioactivity concentration. This approach has shown promising results on numerical phantom models. The size of objects and their relative concentrations can be calculated in two-dimensional (2D) space. In 3D, complex shapes can be reconstructed. In this paper, we demonstrate the ability of the algorithm to faithfully reconstruct more anatomically realistic volumes.

[21] Alberto Tonda, Evelyne Lutton, Giovanni Squillero, and Pierre-Henri Wuillemin. A memetic approach to Bayesian network structure learning. In EvoComplex, Applications of Evolutionary Computation. EvoStar, The leading European Event on Bio-Inspired Computation, pages 102-111. Springer, April 2013. 3-5 April, Vienna, Austria. [ bib | .pdf ]
Bayesian networks are graphical statistical models that represent inference between data. For their effectiveness and versatility, they are widely adopted to represent knowledge in different domains. Several research lines address the NP-hard problem of Bayesian network structure learning starting from data: over the years, the machine learning community delivered effective heuristics, while different Evolutionary Algorithms have been devised to tackle this complex problem. This paper presents a Memetic Algorithm for Bayesian network structure learning, that combines the exploratory power of an Evolutionary Algorithm with the speed of local search. Experimental results show that the proposed approach is able to outperform state-of-the-art heuristics on two well-studied benchmarks.

[22] Benjamin Bach, André Spritzer, Evelyne Lutton, and Jean-Daniel Fekete. Interactive Random Graph Generation with Evolutionary Algorithms. In Springer, editor, Graph Drawing, Lecture Notes in Computer Science, Berlin, Germany, September 2012. Springer. [ bib | .pdf ]
This article introduces an interactive system called GraphCuisine that lets users steer an Evolutionary Algorithm (EA) to create random graphs matching a set of user-specified measures. Generating random graphs with particular characteristics is crucial for evaluating graph algorithms, layouts and visualization techniques. Current random graph generators provide limited control of the final characteristics of the graphs they generate. The situation is even harder when one wants to generate random graphs similar to a given one. This is due to the fact that the similarity of graphs is often based on unknown parameters, leading to a long and painful iterative process including steps of random graph generation, parameter changes, and visual inspection. Our system is based on an approach of interactive evolutionary computation. Fitting generator parameters to create graphs with defined measures is an optimization problem, while judging the quality of the resulting graphs often involves human subjective judgment. We describe the graph generation process from a user's perspective, provide details about our evolutionary algorithm and demonstrate how GraphCuisine is employed to generate graphs that mimic a given real-world network.
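
A hedged sketch of the fitness idea described above: a candidate generator is scored by how closely the measures of the graphs it produces match the user-specified targets. The measure names and weighting scheme are illustrative assumptions, not GraphCuisine's actual implementation.

```python
# Fitness of a candidate generator = distance between the measures of the
# graphs it produces and the target measures (lower is better).
TARGET = {"n_nodes": 100, "n_edges": 300, "clustering": 0.25}  # illustrative

def measure_fitness(measured, target=TARGET, weights=None):
    """Weighted, normalised distance between measured and target statistics."""
    weights = weights or {k: 1.0 for k in target}
    return sum(weights[k] * abs(measured[k] - target[k]) / max(abs(target[k]), 1e-9)
               for k in target)

# A generator whose graphs average these measures gets a small penalty score.
print(measure_fitness({"n_nodes": 95, "n_edges": 320, "clustering": 0.30}))
```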

[23] Waldo Cancino, Nadia Boukhelifa, and Evelyne Lutton. EvoGraphDice: Interactive evolution for visual analytics. In IEEE Congress on Evolutionary Computation, June 10-15, 2012, Brisbane, Australia. [ bib | .pdf ]
Visualization of large and complex datasets is a research challenge, especially in frameworks like industrial design, decision making and visual analytics. Interactive Evolution, used not only as an optimisation tool but also as an exploration tool, may provide versatile solutions to this challenge. This paper presents an attempt in this direction with the EvoGraphDice prototype, developed on top of GraphDice, a general-purpose visualization freeware for multidimensional visualization based on scatterplot matrices. EvoGraphDice interactively evolves compound additional dimensions, which provide new viewpoints on a multidimensional dataset. Compound dimensions are linear combinations of the initial data dimensions; they are initialised with a Principal Component Analysis (PCA) and modified progressively by the interactive evolution process. Various interactions are available to the user, either in a transparent way, via a capture of mouse clicks, or in a fully controlled manner, where the user has the opportunity to modify or include his own compound dimensions in the evolved population, control the search space, or perform interactive queries. EvoGraphDice is tested on a synthetic dataset of dimension 6, where a known dependency is rediscovered via interactive manipulation. A second example is presented, based on a real dataset of dimension 13, provided by an industrial partner. Our experiments show the potential of this interactive approach, and allow us to sketch future directions of development for the EvoGraphDice prototype.

[24] Alberto Paolo Tonda, Evelyne Lutton, Romain Reuillon, Giovanni Squillero, and Pierre-Henri Wuillemin. Bayesian network structure learning from limited datasets through graph evolution. In 15th European Conference on Genetic Programming. Springer Verlag, 2012. 11-13 April, Malaga, Spain. [ bib | .pdf ]
Bayesian networks are stochastic models, widely adopted to encode knowledge in several fields. One of the most interesting features of a Bayesian network is the possibility of learning its structure from a set of data, and subsequently using the resulting model to perform new predictions. Structure learning for such models is an NP-hard problem, for which the scientific community has developed two main approaches: score-and-search metaheuristics, often evolutionary-based, and dependency-analysis deterministic algorithms, based on statistical tests. State-of-the-art solutions have been presented in both domains, but all methodologies start from the assumption of having access to large sets of learning data, often numbering thousands of samples. This is not the case for many real-world applications, especially in the food processing and research industry. This paper proposes an evolutionary approach to the Bayesian structure learning problem, specifically tailored for learning sets of limited size. Falling in the category of score-and-search techniques, the methodology exploits an evolutionary algorithm able to work directly on graph structures, previously used for assembly language generation, and a scoring function based on the Akaike Information Criterion, a well-studied metric of stochastic model performance. Experimental results show that the approach is able to outperform a state-of-the-art dependency-analysis algorithm, providing better models for small datasets.
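
A brief sketch of the scoring side, under stated assumptions: the Akaike Information Criterion trades log-likelihood against the number of free parameters, which for a Bayesian network is driven by the size of each node's conditional probability table. The `log_likelihood` value is assumed to be computed elsewhere; none of this is code from the paper.

```python
import math

def aic(log_likelihood: float, free_parameters: int) -> float:
    """AIC = 2k - 2 ln L; lower is better."""
    return 2 * free_parameters - 2 * log_likelihood

def bn_free_parameters(cardinalities, parents):
    """CPT parameter count: sum over nodes of (r_i - 1) * prod(r_parents)."""
    total = 0
    for node, r in cardinalities.items():
        q = math.prod(cardinalities[p] for p in parents.get(node, []))
        total += (r - 1) * q
    return total

# Example: network X -> Y with binary X and ternary Y (5 free parameters).
k = bn_free_parameters({"X": 2, "Y": 3}, {"Y": ["X"]})
print(aic(log_likelihood=-1234.5, free_parameters=k))
```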

[25] Nathalie Perrot, Salma Mesmoudi, Romain Reuillon, Evelyne Lutton, and Isabelle Alvarez. The complex system science for optimal strategy of management of a food system: the Camembert cheese ripening. In International Congress on Engineering and Food, pages 325-331, Greece, May 2011. [ bib | http | .pdf ]
Significant advances are needed for food systems in terms of real-time prognosis capabilities, incorporating large-scale modelling, distributed simulation and optimisation, and complete integration of the methods and algorithms. The goal is to be able to develop new paradigms at the frontier of life science and computing science for the management of systems like food systems. In parallel, just emerging and linked to these same questions, is the science of complex systems, which proposes ways to understand systems located in turbulent, unstable and changing environments. This paper points out and illustrates the interest of developing an approach adapting and coupling some fundamental tools of complex system science. It combines viability and robustness analysis, multi-objective optimisation and high-performance computation using a computing grid. Applied to Camembert cheese ripening, it has led to new strategies for controlling the process. One solution of the calculated Pareto front is compared to two trajectories tested during experiments on a pilot line, a standard one and an optimized one. The total mass loss deviation for the calculated trajectory, by comparison to the standard one, is 0.04 kg in the same time and for identical microorganism behaviour.

Keywords: cheese ripening ; viability study ; optimal strategy of management ; computing grid ; multiobjective optimisation
[26] Evelyne Lutton and Jean-Daniel Fekete. Visual analytics of EA data. In Genetic and Evolutionary Computation Conference, GECCO 2011, July 12-16, 2011, Dublin, Ireland. [ bib | .pdf ]
An experimental analysis of evolutionary algorithms usually generates a huge amount of multidimensional data, including numeric and symbolic data. It is difficult to efficiently navigate in such a set of data, for instance to tune the parameters or evaluate the efficiency of some operators. Usual features of existing EA visualisation systems consist in visualising time- or generation-dependent curves (fitness, diversity, or other statistics). When dealing with genomic information, the task becomes even more difficult, as a convenient visualisation strongly depends on the considered fitness landscape. In this latter case the raw data are usually sets of successive populations of points of a complex multidimensional space. The purpose of this paper is to evaluate the potential interest of a recent visual analytics tool for navigating in complex sets of EA data, and to sketch future developments of this tool in order to better adapt it to the needs of EA experimental analysis.

[27] Alberto Tonda, Evelyne Lutton, and Giovanni Squillero. Lamps: A test problem for cooperative coevolution. In NICSO 2011, the 5th International Workshop on Nature Inspired Cooperative Strategies for Optimization, October 20-22, Cluj Napoca, Romania, 2011. [ bib | .pdf ]
We present an analysis of the behaviour of Cooperative Co-evolution algorithms (CCEAs) on a simple test problem, namely the optimal placement of a set of lamps in a square room, for various problem sizes. Cooperative Co-evolution makes it possible to exploit the artificial Darwinism scheme more efficiently, as soon as it is possible to turn the optimisation problem into the co-evolution of interdependent sub-parts of the searched solution. We show here how two cooperative strategies, Group Evolution (GE) and Parisian Evolution (PE), can be built for the lamps problem. An experimental analysis then compares a classical evolution to GE and PE, and analyses their behaviour with respect to scale.

[28] Evelyne Lutton, Julie Foucquier, Nathalie Perrot, Jean Louchet, and Jean-Daniel Fekete. Visual analysis of population scatterplots. In 10th Biannual International Conference on Artificial Evolution (EA-2011), Angers, France, 2011. [ bib | .pdf ]
We investigate how visual analytic tools can deal with the huge amount of data produced during the run of an evolutionary algorithm. We show, on toy examples and on two real life problems, how a multidimensional data visualisation tool like ScatterDice/GraphDice can be easily used for analysing raw output data produced along the run of an evolutionary algorithm. Visual interpretation of population data is not used very often by the EA community for experimental analysis. We show here that this approach may yield additional high level information that is hardly accessible through conventional computation.

[29] Salma Mesmoudi, Nathalie Perrot, Romain Reuillon, Paul Bourgine, and Evelyne Lutton. Optimal viable path search for a cheese ripening process using a multi-objective EA. In ICEC 2010, International Conference on Evolutionary Computation, October 24-26, 2010, Valencia, Spain. [ bib | .pdf ]
Viability theory is a very attractive theoretical approach for the modeling of complex dynamical systems. However, its scope of application is limited due to the high computational power it necessitates. Evolutionary computation is a convenient way to address some issues related to this theory. In this paper, we present a multi-objective evolutionary approach to the optimisation problem related to the computation of optimal command profiles for a complex process. The application we address here is a real-size problem from the dairy industry, the modeling of a Camembert cheese ripening process. We have developed a parallel implementation of a multiobjective EA that has produced a Pareto front of optimal control profiles (or trajectories) with respect to four objectives. The Pareto front was then analysed by an expert who selected an interesting compromise, yielding a new control profile that seems promising for industrial applications.

[30] F. P. Vidal, E. Lutton, J. Louchet, and J.-M. Rocchisani. Threshold selection, mitosis and dual mutation in cooperative co-evolution: application to medical 3D tomography. In PPSN 2010, 11th International Conference on Parallel Problem Solving From Nature. Springer-Verlag, September 2010. Krakow, Poland. [ bib | .pdf ]
We present and analyse the behaviour of specialised operators designed for cooperative coevolution strategy in the framework of 3D tomographic PET reconstruction. The basis is a simple cooperative co-evolution scheme (the “fly algorithm”), which embeds the searched solution in the whole population, letting each individual be only a part of the solution. An individual, or fly, is a 3D point that emits positrons. Using a cooperative co-evolution scheme to optimize the position of positrons, the population of flies evolves so that the data estimated from flies matches measured data. The final population approximates the radioactivity concentration. In this paper, three operators are proposed, threshold selection, mitosis and dual mutation, and their impact on the algorithm efficiency is experimentally analysed on a controlled test-case. Their extension to other cooperative co-evolution schemes is discussed.

[31] F. P. Vidal, J. Louchet, J.-M. Rocchisani, and E. Lutton. Artificial evolution for PET and SPECT reconstruction. In AAPM Annual Meeting, Philadelphia, PA, July 2010. [ bib | .pdf ]
Purpose: We propose an evolutionary approach for image reconstruction in nuclear medicine. Our method is based on a cooperative coevolution strategy (also called Parisian evolution): the “fly algorithm”. Method and Materials: Each individual, or fly, corresponds to a 3D point that mimics a radioactive emitter, i.e. a stochastic simulation of annihilation events is performed to compute the fly's illumination pattern. For each annihilation, a photon is emitted in a random direction, and a second photon is emitted in the opposite direction. The line between two detected photons is called line of response (LOR). If both photons are detected by the scanner, the fly's illumination pattern is updated. The LORs of every fly are aggregated to form the population total illumination pattern. Using genetic operations to optimize the position of positrons, the population of flies evolves so that the population total pattern matches measured data. The final population of flies approximates the radioactivity concentration. Results: We have developed numerical phantom models to assess the reconstruction algorithm. To date, no scattering and no tissue attenuation have been considered. Whilst this is not physically correct, it allows us to test and validate our approach in the simplest cases. Preliminary results show the validity of this approach in both 2D and fully-3D modes. In particular, the size of objects, and their relative concentrations can be retrieved in the 2D mode. In fully-3D, complex shapes can be reconstructed. Conclusions: An evolutionary approach for PET reconstruction has been proposed and validated using simple test cases. Further work will therefore include the use of more realistic input data (including random events and scattering), which will finally lead to implement the correction of scattering within our algorithm. A comparison study against ML-EM and/or OS-EM methods will also need to be conducted.

[32] Franck P. Vidal, Jean Louchet, Jean-Marie Rocchisani, and Evelyne Lutton. New genetic operators in the fly algorithm: application to medical PET image reconstruction. In Evolutionary Computation in Image Analysis and Signal Processing, EvoApplications 2010, Part I, LNCS 6024, C. Di Chio et al. (Eds.). Springer, April 2010. 7th - 9th April, Istanbul Technical University, Istanbul, Turkey. [ bib | .pdf ]
Our reconstruction method is based on a cooperative coevolution strategy (also called Parisian evolution): the “fly algorithm”. Each fly is a 3D point that mimics a positron emitter. The flies' position is progressively optimised using evolutionary computing to closely match the data measured by the imaging system. The performance of each fly is assessed using a “marginal evaluation” based on the positive or negative contribution of this fly to the performance of the population. Using this property, we propose a “thresholded-selection” method to replace the classical tournament method. A mitosis operator is also proposed. It is triggered to automatically increase the population size when the number of flies with negative fitness becomes too low.
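
A minimal sketch of the two mechanisms named above, marginal evaluation and mitosis, assuming some `global_error` helper that compares the population's simulated data with the measured data (it is not code from the paper): a fly's fitness is the change in global error when it is removed, and the population is grown when too few flies score negatively.

```python
def marginal_fitness(fly, population, global_error):
    """Positive if the fly improves the population-level fit, negative otherwise.
    `global_error` is an assumed helper: population -> error vs measured data."""
    with_fly = global_error(population)
    without_fly = global_error([f for f in population if f is not fly])
    return without_fly - with_fly  # error increase on removal = useful fly

def apply_mitosis(population, fitnesses, min_negative_ratio=0.1):
    """Double the population when too few flies have negative fitness,
    i.e. when nearly every fly is already contributing positively."""
    negative = sum(f < 0 for f in fitnesses)
    if negative / len(population) < min_negative_ratio:
        population = population + list(population)  # duplicates are then mutated
    return population
```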

Keywords: Positron emission tomography, genetic algorithms, optimization methods, fly algorithm
[33] Benoit Kaufmann, Jean Louchet, and Evelyne Lutton. Hand posture recognition using real-time artificial evolution. In Evolutionary Computation in Image Analysis and Signal Processing, EvoApplications 2010, Part I, LNCS 6024, C. Di Chio et al. (Eds.), pages 251-260. Springer, April 2010. 7th - 9th April, Istanbul Technical University, Istanbul, Turkey. [ bib | .pdf ]
In this paper, we present a hand posture recognition system (configuration and position) that we designed as part of a gestural man-machine interface. After simple image preprocessing, the parameter space (corresponding to the configuration and spatial position of the user's hand) is directly explored using a population of points evolved via an Evolution Strategy. Giving priority to exploring the parameter space rather than the image is an alternative to the classical generalisation of the Hough Transform, and allows us to meet the real-time constraints of the project. The application is an Augmented Reality prototype for a long-term exhibition at the Cité des Sciences, Paris. As it will be open to the general public, rather than using conventional peripherals like a mouse or a joystick, a more natural interface has been chosen, using a microcamera embedded into virtual reality goggles in order to exploit the images of the user's hand as input data and enable the user to manipulate virtual objects without any specific training.

[34] Emmanuel Sapin, Jean Louchet, and Evelyne Lutton. The fly algorithm revisited: Adaptation to CMOS image sensor. In ICEC 2009, International Conference on Evolutionary Computation, Madeira, Portugal, October 5-7, 2009. [ bib | .pdf ]
Cooperative coevolution algorithms (CCEAs) usually represent a searched solution as an aggregation of several individuals (or even as a whole population). In other terms, each individual only bears a part of the searched solution. This scheme makes it possible to apply the artificial Darwinism principles in a more economical way, and the gain in terms of robustness and efficiency is important. In the computer vision domain, this scheme has been applied to stereovision, producing an algorithm (the fly algorithm) with an asynchronism property. However, this property has not yet been fully exploited, in particular at the sensor level, where CMOS technology opens perspectives for faster reactions. We describe in this paper a new coevolution engine that allows the Fly Algorithm to better exploit the properties of CMOS image sensors.

[35] Franck P. Vidal, Delphine Lazaro-Ponthus, Samuel Legoupil, Jean Louchet, Evelyne Lutton, and Jean-Marie Rocchisani. Artificial evolution for 3D PET reconstruction. In Proceedings of the 9th international conference on Artificial Evolution (EA'09), Strasbourg, France, October 2009. [ bib | .pdf ]
This paper presents a method to take advantage of artificial evolution in positron emission tomography reconstruction. This imaging technique produces datasets that correspond to the concentration of positron emitters within the patient. Fully 3D tomographic reconstruction requires high computing power and leads to many challenges. Our aim is to reduce the computing cost and produce datasets while retaining the required quality. Our method is based on a coevolution strategy (also called Parisian evolution) named “fly algorithm”. Each fly represents a point in space and acts as a positron emitter. The final population of flies corresponds to the reconstructed data. Using “marginal evaluation”, the fly's fitness is the positive or negative contribution of this fly to the performance of the population. This also allows the relatively costly step of selection to be skipped, simplifying the evolutionary algorithm.

[36] Franck P. Vidal, Delphine Lazaro-Ponthus, Samuel Legoupil, Jean Louchet, Evelyne Lutton, and Jean-Marie Rocchisani. PET reconstruction using a cooperative coevolution strategy. In Proceedings of the IEEE Medical Imaging Conference 2009, Orlando, Florida, October 2009. IEEE. [ bib | .pdf ]
Fully 3D tomographic reconstruction in nuclear medicine requires high computing power and leads to many challenges. The trend today is to use more general methods that can integrate more realistic models (application-specific physics and data acquisition system geometry). To date, the use of such methods is still restricted due to the heavy computing power needed. Evolutionary algorithms have proven to be efficient optimisation techniques in various domains, including medicine and medical imaging. However the use of evolutionary computation in tomographic reconstruction has been largely overlooked. In previous work, we showed that an artificial coevolution strategy (also called “Parisian evolution”) based on the “fly algorithm” can be used to reconstruct the 3D distribution of radioactive emitters in Single Photon Emission Computed Tomography (SPECT). In this abstract, we propose a computer-based algorithm for fully 3D reconstruction in Positron Emission Tomography (PET) based on the same approach and evaluate its relevance. Realistic models describing the physics of PET could be integrated in the reconstruction loop while taking advantage of artificial evolution to reduce the computing time.

[37] Franck P. Vidal, Jean Louchet, Evelyne Lutton, and Jean-Marie Rocchisani. PET reconstruction using a cooperative coevolution strategy in LOR space. In IEEE Nuclear Science Symposium Conference Record, pages 3363-3366, Orlando, Florida, October 2009. IEEE. [ bib | .pdf ]
This paper presents preliminary results of a novel method that takes advantage of artificial evolution for positron emission tomography (PET) reconstruction. Fully 3D tomographic reconstruction in PET requires high computing power and leads to many challenges. To date, the use of such methods is still restricted due to the heavy computing power needed. Evolutionary algorithms have proven to be efficient optimisation techniques in various domains. However the use of evolutionary computation in tomographic reconstruction has been largely overlooked. We propose a computer-based algorithm for fully 3D reconstruction in PET based on artificial evolution and evaluate its relevance.

Keywords: Positron emission tomography, genetic algorithms, optimization methods
[38] Olivier Barrière, Evelyne Lutton, and Pierre-Henri Wuillemin. Bayesian network structure learning using cooperative coevolution. In Genetic and Evolutionary Computation Conference (GECCO 2009), 2009. [ bib | .pdf ]
We propose a cooperative coevolution algorithm of the Parisian type, IMPEA (Independence Model based Parisian EA), for the problem of Bayesian network structure estimation. It is based on an intermediate stage which consists of evaluating an independence model of the data to be modelled. Parisian cooperative coevolution is particularly well suited to the structure of this intermediate problem, and makes it possible to represent an independence model with the help of a whole population, each individual being an independence statement, i.e. a component of the independence model. Once an independence model is estimated, a Bayesian network can be built. This two-level resolution of the complex problem of Bayesian network structure estimation has the major advantage of avoiding the difficult problem of directed acyclic graph representation within an evolutionary algorithm, which causes many troubles related to constraint handling and slows down algorithms. Comparative results with a deterministic algorithm, PC, on two test cases (including the Insurance BN benchmark) prove the efficiency of IMPEA, which provides better results than PC in a comparable computation time, and which is able to tackle more complex issues than PC.
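
A minimal sketch of the Parisian representation described above, with illustrative field names: each individual encodes one conditional-independence statement I(X, Y | Z), and the solution is the aggregated set of statements carried by the whole population, never any single individual.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IndependenceStatement:
    """One individual: the statement I(x, y | given)."""
    x: str
    y: str
    given: frozenset  # conditioning set Z

def independence_model(population):
    """The aggregated solution is the set of statements, not one individual."""
    return set(population)

# Two individuals asserting "A independent of B" and "A independent of C given B".
pop = [IndependenceStatement("A", "B", frozenset()),
       IndependenceStatement("A", "C", frozenset({"B"}))]
print(independence_model(pop))
```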

[39] Leonardo Trujillo, Gustavo Olague, Francisco Fernandez, and Evelyne Lutton. Selecting local region descriptors with a genetic algorithm for real-world place recognition. In Tenth European Workshop on Evolutionary Computation in Image Analysis and Signal Processing, EvoIASP2008, 2008. Napoli, Italy, 26-28 March. [ bib | .pdf ]
The basic problem for a mobile vision system is determining where it is located within the world. In this paper, a recognition system is presented that is capable of identifying known places such as rooms and corridors. The system relies on a bag-of-features approach using locally prominent image regions. Real-world locations are modeled using a mixture of Gaussian representations, thus allowing for a multimodal scene characterization. Local regions are represented by a set of 108 statistical descriptors computed from different modes of information. Therefore, the system needs to determine which subset of descriptors captures regularities between image regions of the same location, and also discriminates between regions of different places. A genetic algorithm is used to solve this selection task, using a fitness measure that promotes: 1) a high classification accuracy; 2) the selection of a minimal subset of descriptors; and 3) a high separation among place models. The approach is tested on two real-world examples: a) a sequence of still images with 4 different locations; and b) a sequence that contains 8 different locations. Results confirm the ability of the system to identify previously seen places in a real-world setting.

[40] Leonardo Trujillo, Gustavo Olague, Evelyne Lutton, and Francisco Fernandez. Discovering several robot behaviors through speciation. In Fourth European Workshop on Bio-Inspired Heuristics for Design Automation, EvoHOT2008, 2008. Napoli, Italy, 26-28 March. [ bib | .pdf ]
This contribution studies speciation from the standpoint of evolutionary robotics (ER). In ER, the sensory-motor mappings that control an autonomous agent are designed using a neuro-evolutionary framework. An extension to this process is presented here, where speciation is incorporated into the evolution process in order to obtain a varied set of solutions for the same robotics problem using a single algorithmic run. Although speciation is common in evolutionary computation, it has been less explored in behavior-based robotics. When employed, speciation usually relies on a distance measure that allows different individuals to be compared. The distance measure is normally computed in objective or phenotypic space. However, the speciation process presented here is intended to produce several distinct robot behaviors; hence, speciation is sought in behavioral space. Individual neurocontrollers are therefore described using behavior signatures, which represent the traversed path of the robot within the training environment and are encoded using a character string. With this representation, behavior signatures are compared using the normalized Levenshtein distance metric (N-GLD). Results indicate that speciation in behavioral space does indeed allow the ER system to obtain several navigation strategies for a common experimental setup. This is illustrated by comparing the best individual of each species with those obtained using the Neuro-Evolution of Augmenting Topologies (NEAT) method, which speciates neural networks in topological space.

[41] Leonardo Trujillo, Gustavo Olague, Evelyne Lutton, and Francisco Fernandez. Multi-objective design of operators that detect points of interest in images. In The Genetic and Evolutionary Computation Conference, GECCO, 2008. July 12-16, Atlanta, Georgia, USA. [ bib | .pdf ]
In this paper, a multiobjective (MO) learning approach to image feature extraction is described, where Pareto-optimal interest point (IP) detectors are synthesized using genetic programming (GP). IPs are image pixels that are unique, robust to changes during image acquisition, and convey highly descriptive information. Detecting such features is ubiquitous to many vision applications, e.g. object recognition, image indexing, stereo vision, and content based image retrieval. In this work, candidate IP operators are automatically synthesized by the GP process using simple image operations and arithmetic functions. Three experimental optimization criteria are considered: 1) the repeatability rate; 2) the amount of global separability between IPs; and 3) the information content captured by the set of detected IPs. The MO-GP search considers Pareto dominance relations between candidate operators, a perspective that has not been contemplated in previous research devoted to this problem. The experimental results suggest that IP detection is an ill-posed problem for which a single globally optimum solution does not exist. We conclude that the evolved operators outperform and dominate, in the Pareto sense, all previously man-made designs.

[42] Leonardo Trujillo, Gustavo Olague, Evelyne Lutton, and Francisco Fernandez. Behavior-based speciation for evolutionary robotics. In The Genetic and Evolutionary Computation Conference, GECCO, 2008. July 12-16, Atlanta, Georgia, USA. [ bib | .pdf ]
This paper describes a speciation method that allows an evolutionary process to learn several robot behaviors using a single execution. Species are created in behavioral space in order to promote the discovery of different strategies that can solve the same navigation problem. Candidate neurocontrollers are grouped into species based on their corresponding behavior signature, which represents the traversed path of the robot within the environment. Behavior signatures are encoded using character strings and are compared using the string edit distance. The proposed approach is better suited for an evolutionary robotics problem than speciating in objective or topological space. Experimental comparison with the NEAT method confirms the usefulness of the proposal.
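
A small sketch of the behaviour-signature comparison described above: traversed paths encoded as character strings are compared with a normalised Levenshtein (edit) distance. The signatures shown are illustrative; the paper's exact encoding and normalisation may differ.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def behaviour_distance(sig_a: str, sig_b: str) -> float:
    """Edit distance normalised to [0, 1] by the longer signature."""
    longest = max(len(sig_a), len(sig_b)) or 1
    return levenshtein(sig_a, sig_b) / longest

# Two robots tracing similar paths through grid cells 'a'..'f':
print(behaviour_distance("abbcdef", "abccdff"))  # small value -> same species?
```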

[43] Olivier Barriere and Evelyne Lutton. Experimental analysis of a variable size mono-population cooperative-coevolution strategy. In NICSO 2008, International Workshop on Nature Inspired Cooperative Strategies for Optimization, 2008. Puerto de La Cruz, Tenerife, 12-14 November. [ bib | .pdf ]
Cooperative coevolution strategies have been used with success to solve complex problems in various application domains. These techniques rely on a formulation of the problem to be solved as a cooperative task, where individuals collaborate or compete in order to collectively build a solution. Several strategies have been developed, depending on how the problem is split into interdependent subproblems and on how coevolution occurs (multi-population versus mono-population schemes). Here, we deal with a mono-population strategy (Parisian approach) applied to a problem related to the modeling of a cheese ripening process (French Camembert). A variable-size-population Parisian GP strategy has been tested, using adaptive deflating and inflating schemes for the population size. Experimental results show the effectiveness of the approach on real data collected on a laboratory cheese ripening production line.

[44] Olivier Barriere, Evelyne Lutton, Cedric Baudrit, Mariette Sicard, Bruno Pinaud, and Nathalie Perrot. Modeling human expertise on a cheese ripening industrial process using GP. In PPSN, 10th International Conference on Parallel Problem Solving from Nature, 2008. September 13-17, Technische Universität Dortmund, Germany. [ bib | .pdf ]
Industrial agrifood processes often rely strongly on human expertise, expressed as know-how and control procedures based on subjective measurements (color, smell, texture), which are very difficult to capture and model. We deal in this paper with a cheese ripening process (of French Camembert), for which experimental data have been collected within a laboratory cheese ripening chain. A global GP scheme and a mono-population cooperative/co-evolutionary GP scheme (Parisian approach) have been developed in order to simulate phase prediction (i.e. a subjective estimation made by human experts) from microbial proportions and pH measurements. These two GP approaches are compared to Bayesian network modeling and to simple multilinear learning algorithms. Preliminary results show the effectiveness and robustness of the Parisian GP approach.

[45] Leonardo Trujillo, Gustavo Olague, Francisco Fernandez, and Evelyne Lutton. Evolutionary feature selection for Bayesian object recognition, novel object detection and object saliency estimation using GMMs. In The British Machine Vision Conference, 2007. University of Warwick, UK, September 10-13. [ bib | .pdf ]
This paper presents a method for object recognition, novel object detection, and estimation of the most salient object within a set. Objects are sampled using a scale-invariant region detector, and each region is characterized by the subset of texture and color descriptors selected by a Genetic Algorithm (GA). Using multiple views of an object, and multiple regions per view, objects are modeled using mixtures of Gaussians, where each object O is a possible class for a given image region. Given a set of N objects, the GA learns a corresponding Gaussian Mixture Model (GMM) for each object in the set, employing a one-vs-all training scheme. Then, given an input image where interest regions are detected, if a large majority of the regions are classified as regions of object O, it is assumed that said object appears within the imaged scene. The GA's fitness promotes: 1) a high classification accuracy, 2) the selection of a minimal subset of descriptors, and 3) a high separation among models. The separation between two GMMs is computed using a weighted version of Fisher's linear discriminant, which is also used to estimate the most salient object within the set.

[46] Malek Aichour and Evelyne Lutton. Cooperative co-evolution inspired operators for classical GP schemes. In International Workshop on Nature Inspired Cooperative Strategies for Optimisation, 2007. Acireale, Sicily, Italy, Volume Studies in Computational Intelligence. [ bib | .pdf ]
This work is a first step toward the design of a cooperative-coevolution GP for symbolic regression, whose first output is a selective mutation operator for classical GP. Cooperative co-evolution techniques rely on the imitation of the cooperative capabilities of natural populations, and have been successfully applied in various domains to solve very complex optimization problems. It has been shown in several applications that the use of two fitness measures (local and global) within an evolving population allows the design of more efficient optimization schemes. We currently investigate the use of a two-level fitness measurement for the design of operators, and present in this paper a selective mutation operator. Experimental analysis on a symbolic regression problem gives evidence of the efficiency of this operator in comparison to classical subtree mutation.

[47] J. Lévy Véhel, F. Mendivil, and E. Lutton. Overcompressing JPEG images. In EvoIASP2007, 9th European Workshop on Evolutionary Computation in Image Analysis and Signal Processing. Springer Verlag, 2007. April 11-13, Valencia, Spain. [ bib | .pdf ]
Overcompression is the process of post-processing compressed images to gain either further size reduction or improved quality. This is made possible by the fact that the set of all "reasonable" images has a sparse structure. In this work, we apply this idea to the overcompression of JPEG images: We reduce the blocking artifacts commonly seen in JPEG images by allowing the low frequency coefficients of the DCT to vary slightly. Evolutionary strategies are used in order to guide the modification of the coefficients towards a smoother image.

[48] Gabriela Ochoa, Evelyne Lutton, and Edmund K. Burke. Cooperative Royal Road functions. In Evolution Artificielle, Tours, France, October 29-31, 2007. [ bib | .pdf ]
We propose using the so-called Royal Road functions as test functions for cooperative co-evolutionary algorithms (CCEAs). The Royal Road functions were created in the early 90's with the aim of demonstrating the superiority of GAs over local search methods. Unexpectedly, the opposite was found to be true, but this research led to an understanding of the phenomenon of hitchhiking, whereby unfavorable alleles may become established in the population following an early association with an instance of a highly fit schema. Here, we take advantage of the modular and hierarchical structure of the Royal Road functions to adapt them to the co-evolutionary setting. Using a multiple-population approach, we show that a CCEA easily outperforms a standard GA on the Royal Road functions, by naturally overcoming the hitchhiking effect. Moreover, we found that the optimal number of sub-populations for the CCEA is not the same as the number of components into which the function can be linearly separated, and we propose an explanation for this behavior. We argue that this class of functions may serve in foundational studies of cooperative co-evolution.
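
For readers unfamiliar with these test functions, the following minimal sketch implements the classic Royal Road function R1 of Mitchell, Forrest and Holland; the co-evolutionary adaptation described in the paper (assigning blocks to sub-populations) is not reproduced here.

    def royal_road(bits, block_size=8):
        """Fitness = block_size for every block made entirely of ones."""
        assert len(bits) % block_size == 0
        return sum(block_size
                   for i in range(0, len(bits), block_size)
                   if all(bits[i:i + block_size]))

    genome = [1] * 8 + [0] * 56       # one complete block out of eight
    print(royal_road(genome))         # -> 8

In a CCEA, each sub-population could plausibly contribute one slice of the genome, a candidate slice being evaluated by assembling it with representatives of the other sub-populations.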

[49] G. Valigiani, E. Lutton, and P. Collet. Adapting the Elo rating. In 13th ISPE International Conference on Concurrent Engineering, CE'06, Antibes, France, September 18-22 2006. [ bib | .pdf ]
Paraschool (the leading French e-learning company, with more than 250,000 registered students) wanted intelligent software to guide students through its graph of pedagogic items. The very large number of students suggested using students as artificial ants, leaving stigmergic information on the web-site graph to optimise pedagogical paths. The differences between artificial ants and students led to the description of a new concurrent paradigm called "man-hill optimization," where optimization emerges from the behaviour of humans exploring a web site.

At this stage, the need to rate pedagogical items arose, in order to direct students towards items adapted to their level. A solution was found in the Elo automatic rating process, which also provides (as a side-effect) a powerful audit system that can track syntactic and semantic problems in exercises. For effective use, this paper shows how the Elo rating process has been modified to overcome the deflation problem.
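
The sketch below shows the standard Elo update applied to a student/item pair, as the paper's setting suggests: solving an exercise counts as a win for the student against the item. The K factor is an illustrative choice, and the anti-deflation modification, which is the paper's contribution, is not reproduced here.

    def elo_update(student, item, solved, k=32.0):
        """Return updated (student, item) ratings after one attempt."""
        # Expected probability that the student solves ("beats") the item.
        expected = 1.0 / (1.0 + 10.0 ** ((item - student) / 400.0))
        score = 1.0 if solved else 0.0
        delta = k * (score - expected)
        return student + delta, item - delta   # zero-sum exchange of points

    student, item = 1200.0, 1400.0
    student, item = elo_update(student, item, solved=True)
    print(round(student), round(item))         # an upset win moves both ratings a lot

The zero-sum exchange is one source of rating drift in such a system; the modified process described in the paper addresses the resulting deflation.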

[50] E. Lutton and J. Lévy Véhel. Pointwise regularity of fitness landscapes and the performance of a simple ES. In CEC'06, Vancouver, Canada, July 16-21, 2006. [ bib | .pdf ]
We present a theoretical and experimental analysis of the influence of the pointwise irregularity of the fitness function on the behavior of a (1+1)-ES. Previous work on this subject suggests that the performance of an EA strongly depends on the irregularity of the fitness function. Several irregularity measures have been derived for discrete search spaces in order to numerically characterize this type of difficulty for EAs. These characterizations are mainly based on Hölder exponents. Previous studies, however, used a global characterization of fitness regularity (the global Hölder exponent), with experimental validations conducted on test functions of uniform regularity. This is extended here in two ways: results are now stated for continuous search spaces, and pointwise instead of global irregularity is considered. In addition, we present a way to modify the genetic topology to accommodate variable regularity: the mutation radius, which controls the size of the neighbourhood of a point, is allowed to vary according to the pointwise irregularity of the fitness function. These results are explained through a simple theoretical analysis which gives a relation between the pointwise Hölder exponent and the optimal mutation radius. Several questions connected to on-line measurement and usage of regularity in EAs are raised.
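
The sketch below illustrates the kind of scheme discussed: a (1+1)-ES whose mutation radius is modulated by the pointwise regularity at the current point. The monotone mapping from exponent to radius is a placeholder assumption; the paper derives the actual relation between the pointwise Hölder exponent and the optimal radius.

    import random

    def one_plus_one_es(fitness, alpha, x0, steps=1000, base_radius=0.1):
        """(1+1)-ES with a regularity-driven mutation radius."""
        x, fx = x0, fitness(x0)
        for _ in range(steps):
            radius = base_radius * alpha(x)           # larger alpha -> smoother -> wider moves
            y = x + random.uniform(-radius, radius)   # uniform mutation
            fy = fitness(y)
            if fy >= fx:                              # elitist replacement
                x, fx = y, fy
        return x, fx

    # Toy run: maximize a smooth bump, with a constant exponent as placeholder.
    print(one_plus_one_es(lambda x: -(x - 2.0) ** 2, alpha=lambda x: 1.0, x0=0.0))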

[51] Pierrick Legrand, Evelyne Lutton, and Gustavo Olague. Evolutionary denoising based on an estimation of Hölder exponents with oscillations. In EVOIASP 2006, 8th European Workshop on Evolutionary Computation in Image Analysis and Signal Processing, Budapest, Hungary, April 10-12 2006. [ bib | .pdf ]
In multifractal denoising techniques, the accuracy of the Hölder exponent estimations is crucial for the quality of the outputs. In continuity with the method described in [1], where a wavelet decomposition was used, we investigate the use of another Hölder exponent estimation technique, based on the analysis of the local "oscillations" of the signal. The associated inverse problem to be solved, i.e. finding the signal which is closest to the initial noisy one while having the prescribed regularity, is then more complex. Moreover, the associated search space is of a different nature than in [1], which necessitates the design of ad hoc genetic operators.
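
A minimal sketch of the oscillation-based estimation idea: the oscillation of the signal on windows [t-r, t+r] is measured for several radii, and the pointwise Hölder exponent is recovered as the slope of log(oscillation) against log(radius). The radii and the plain least-squares regression are illustrative choices, not the paper's exact estimator.

    import math

    def holder_estimate(signal, t, radii=(1, 2, 4, 8)):
        """Estimate the pointwise Hölder exponent of signal at index t."""
        xs, ys = [], []
        for r in radii:
            window = signal[max(0, t - r):t + r + 1]
            osc = max(window) - min(window)            # local oscillation
            if osc > 0:
                xs.append(math.log(r))
                ys.append(math.log(osc))
        if len(xs) < 2:
            return float('inf')                        # locally constant signal
        n = len(xs)                                    # least-squares slope
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))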

[52] Gregory Valigiani, Evelyne Lutton, Yannick Jamont, Raphael Biojout, and Pierre Collet. Evaluating a real-size man-hill. In ECE'WSEAS'05, Miami, Florida, USA, November 17-19 2005. [ bib | .pdf ]
"Man-hill" optimisation (a slightly different form of Ant Colony Optimisation) has been applied to the e-learning software of Paraschool (a French e-learning company): instead of implementing artificial ants, students visiting the site unknowingly leave stigmergic information on the Paraschool web-site graph, in order to promote the emergence of pedagogic paths. To present students with exercises that match their level, an evaluation mechanism was needed, both for the students and for the Paraschool items. A solution was found in the Elo automatic rating process, which also provides as a side-effect a powerful audit system that can track semantic problems in exercises.

[53] Evelyne Lutton, Mario Pilz, and Jacques Lévy Véhel. The fitness map scheme: application to interactive multifractal image denoising. In CEC2005, Edinburgh, UK, September 2-5, 2005. IEEE Congress on Evolutionary Computation. [ bib | .pdf ]
Interactive evolutionary algorithms (IEAs) often suffer from what is called the "user bottleneck." In this paper, we propose and analyse a method to limit the user interactions, while still providing sufficient information for the EA to converge. The method has been developed on a multifractal image denoising application: a multifractal denoising method is adapted to complex images, but depends on a set of parameters that are quite difficult to tune by hand. A simple IEA was developed for this purpose in a previous work. We now experiment with an approximation of the user judgment, via a "fitness map", that helps to reduce the number of user interactions. The method is easily extensible to other interactive, or computationally expensive, evolutionary schemes.
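
The sketch below is one plausible reading of the fitness-map idea: the user rates only a few individuals, and every other individual receives an approximate fitness interpolated from its nearest rated neighbours in parameter space. The distance, weighting and neighbour count are illustrative assumptions, not the paper's exact scheme.

    def approximate_fitness(individual, rated, k=3):
        """rated: list of (parameter_vector, user_score) pairs."""
        def dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        # Keep the k rated individuals closest in parameter space.
        nearest = sorted(rated, key=lambda ps: dist(individual, ps[0]))[:k]
        weights = [1.0 / (dist(individual, p) + 1e-9) for p, _ in nearest]
        return sum(w * s for w, (_, s) in zip(weights, nearest)) / sum(weights)

    rated = [((0.1, 0.2), 4.0), ((0.8, 0.9), 1.0), ((0.5, 0.5), 3.0)]
    print(approximate_fitness((0.15, 0.25), rated))    # close to the 4.0 sample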

[54] Cynthia Perez, Gustavo Olague, Francisco Fernandez, and Evelyne Lutton. An evolutionary infection algorithm for dense stereo correspondence. In EvoIASP 2005, Lausanne, 30 March-1 April 2005. [ bib | .pdf ]
This work presents an evolutionary approach to improving the infection algorithm for solving the problem of dense stereo matching. Dense stereo matching is used for 3D reconstruction in stereo vision in order to achieve fine texture detail about a scene. The algorithm presented in this paper incorporates two different epidemic automata applied to the correspondence of two images. These two epidemic automata provide two different behaviours, each constructing a different matching. Our aim is to provide a new strategy inspired by evolutionary computation, which combines the behaviours of both automata into a single correspondence process. The new algorithm decides which epidemic automaton to use based on inheritance and mutation, as well as on the attributes, texture and geometry, of the input images. Finally, we show experiments on a real stereo pair to illustrate how the new algorithm works.

[55] Gregory Valigiani, Yannick Jamont, C. Bourgeois-Republique, Raphael Biojout, Evelyne Lutton, and Pierre Collet. Experimenting with a real-size man-hill to optimize pedagogical paths. In 20th ACM Symposium on Applied Computing, SAC'05, Bioinformatics track, Santa Fe, New Mexico, USA, March 13-17, 2005. [ bib | .pdf ]
This paper describes experiments aimed at adapting Ant Colony Optimization (ACO) techniques to an e-learning environment, thanks to the fact that the available on-line material can be organized in a graph by means of hyperlinks between educational topics. The structure of this graph is to be optimized in order to facilitate the learning process for students.

ACO is based on an ant-hill metaphor. In this case, however, the agents that move on the graph are students who unconsciously leave pheromones in the environment depending on their success or failure. In the paper, the whole process is therefore referred to as a "man-hill."

Compared to [13, 14], which provided guidelines for this problem, real-size tests have been performed, showing that man-hills behave differently from ant-hills. The notion of pheromone erosion (rather than evaporation) is introduced.
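
A hedged sketch of the distinction, under our reading of the abstract: evaporation is clock-driven and decays every arc uniformly, whereas erosion is traversal-driven, wearing an arc down each time it is actually used, which suits human "ants" who cannot be forced to move at a fixed rate. The decay constants are arbitrary illustrations, not the paper's values.

    def evaporate(pheromone, rho=0.1):
        """Time-driven decay applied uniformly to all arcs at each tick."""
        for arc in pheromone:
            pheromone[arc] *= (1.0 - rho)

    def erode(pheromone, arc, wear=0.05):
        """Traversal-driven decay applied only to the arc just used."""
        pheromone[arc] *= (1.0 - wear)

    pheromone = {("lesson1", "exercise3"): 2.0}
    erode(pheromone, ("lesson1", "exercise3"))
    print(pheromone)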

[56] Evelyne Lutton, Pierre Grenier, and Jacques Lévy Véhel. An interactive evolutionary algorithm for multifractal Bayesian denoising. In EVOIASP, 2005. 30 March - 1 April, Lausanne. [ bib | .pdf ]
We present in this paper a multifractal Bayesian denoising technique based on an interactive EA. The multifractal denoising algorithm that serves as a basis for this technique is adapted to complex images and signals, and depends on a set of parameters. As the tuning of these parameters is a difficult task, highly dependent on psychovisual and subjective factors, we propose to use an interactive EA to drive this process. Comparative denoising results are presented with automatic and interactive EA optimisation. The proposed technique yields efficient denoising in many cases, comparable to classical denoising techniques. The versatility of the interactive implementation is, however, a major advantage for handling difficult images such as IR or medical images.

[57] Enrique Dunn, Gustavo Olague, and Evelyne Lutton. Automated photogrammetric network design using the Parisian approach. In EvoIASP 2005, 2005. 30 March - 1 April, Lausanne. Nominated for the best paper award. [ bib | .pdf ]
We present a novel camera network design methodology based on the Parisian approach to evolutionary computation. The problem is partitioned into a set of homogeneous elements, whose individual contributions to the problem solution can be evaluated separately. These elements are allocated in a population with the goal of creating a single solution by a process of aggregation. Thus, the goal of the evolutionary process is to generate individuals that jointly form better solutions. Under the proposed paradigm, aspects such as problem decomposition and representation, as well as local and global fitness integration, need to be addressed. Experimental results illustrate significant improvements, in terms of solution quality and computational cost, when compared to canonical evolutionary approaches.

[58] Evelyne Lutton, Jacques Lévy Véhel, and Pierre Grenier. Débruitage multifractal par évolution interactive. In GRETSI, Louvain-La-Neuve, Belgium, 2005. [ bib | .pdf ]
We present in this paper an interactive denoising method based on a multifractal Bayesian denoising technique adapted to complex signals. This technique requires the tuning of a set of parameters, and the result depends strongly on psychovisual and subjective factors; the originality of the approach lies in the use of an interactive evolutionary algorithm to drive this parameter adjustment. Comparative denoising results are presented, demonstrating the efficiency and flexibility of the method. The proposed technique yields efficient denoising in many cases, comparable to classical denoising techniques, and the versatility of the interactive implementation is a major advantage for handling difficult images such as IR or medical images.

[59] Olivier Pauplin, Jean Louchet, Evelyne Lutton, and Michel Parent. Obstacle detection by evolutionary algorithm: the Fly Algorithm. In The Second International Conference on Autonomous Robots and Agents, ICARA2004, pages 139-140, Palmerston North, New Zealand, December 13-15 2004. [ bib | .pdf ]
Artificial vision is a key element in robot autonomy. The Fly Algorithm is a fast evolutionary algorithm designed for real-time obstacle detection using pairs of stereo images. It is intended in particular for the fields of mobile robotics and automated vehicles. Based on the Parisian approach, the Fly Algorithm produces a set of 3-D points which gather on the surfaces of obstacles. This paper describes the use of the Fly Algorithm for obstacle detection in a real environment, and a possible use for vehicle control is presented.

[60] Olivier Pauplin, Jean Louchet, Evelyne Lutton, and Michel Parent. Applying evolutionary optimisation to robot obstacle avoidance. In ISCIIA, 2004. December 20-24, 2004, Haikou, China. [ bib | .pdf ]
This paper presents an artificial evolution-based method for stereo image analysis and its application to real-time obstacle detection and avoidance for a mobile robot. It uses the Parisian approach, which here consists in splitting the representation of the robot's environment into a large number of simple primitives, the "flies", which are evolved following a biologically inspired scheme and give a fast, low-cost solution to the obstacle detection problem in mobile robotics.

[61] Enrique Dunn, Gustavo Olague, Evelyne Lutton, and Marc Schoenauer. Pareto optimal sensing strategies for an active vision system. In CEC, IEEE Congress on Evolutionary Computation, 2004. http://cec2004.org/sessions.htm, Vol. 1, pp. 457-463, Portland, Oregon, USA, June 19-23. [ bib | .pdf ]
We present a multi-objective methodology, based on evolutionary computation, for solving the sensor planning problem for an active vision system. The application of different representation schemes, which allow either fixed- or variable-size camera networks to be considered in a single evolutionary process, is studied. Furthermore, a novel design of the recombination and mutation operators is brought forth. The developed methodology is incorporated into a 3D simulation environment and experimental results are shown. The results validate the flexibility and effectiveness of our approach and offer new research alternatives in the field of sensor planning.

[62] Gustavo Olague, Francisco Fernández, Cynthia Pérez, and Evelyne Lutton. The infection algorithm: an artificial epidemic approach for dense stereo matching. In PPSN, Parallel Problem Solving from Nature, 2004. X. Yao et al. (Eds.): LNCS 3242, pp. 622-632, Springer-Verlag, Birmingham, UK, September 18-22. [ bib | http ]
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process that we call "the infection algorithm." The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.

[63] Evelyne Lutton, Emmanuel Cayla, and Jonathan Chapuis. ArtiE-Fract: the artist's viewpoint. In EvoMUSART2003, 1st European Workshop on Evolutionary Music and Art, Essex, April 14-16, 2003. LNCS, Springer Verlag. [ bib | .pdf ]
ArtiE-Fract is an interactive evolutionary system designed for artistic exploration of the space of fractal 2D shapes. We report in this paper an experiment performed with an artist, the painter Emmanuel Cayla. The benefit of such a collaboration was twofold: first of all, the system itself evolved in order to better fit the needs of non-computer-scientist users; second, it initiated an artistic approach and opened up the way to new possible design outputs.

[64] Yann Landrin-Schweitzer, Pierre Collet, and Evelyne Lutton. Interactive GP for data retrieval in medical databases. In EUROGP'03, Essex, April 14-16, 2003. LNCS, Springer Verlag. [ bib | .pdf ]
We present in this paper the design of ELISE, an interactive GP system for document retrieval tasks in very large medical databases. The components of ELISE have been tailored in order to produce a system that is capable of suggesting documents related to the query that may be of interest to the user, thanks to evolved profiling information.

Tests on the "Cystic Fibrosis Database" benchmark show that, while suggesting original documents by adaptation of its internal rules to the context of the user, ELISE is able to improve its recall rate.

[65] Yann Landrin-Schweitzer, Pierre Collet, Evelyne Lutton, and Thierry Prost. Introducing lateral thinking in search engines with interactive evolutionary algorithms. In Annual ACM Symposium on Applied Computing (SAC 2003), Special Track on "Computer Applications in Health Care" (COMPAHEC 2003), 2003. March 9 to 12, Melbourne, Florida, U.S.A. [ bib | .pdf ]
Nowadays, large medical databases consist of collections of smaller databases, each covering possibly different fields and using different formats, making it increasingly difficult to retrieve valuable information among the thousands of documents returned by a simple query. A new Evolutionary Learning Interactive Search Engine (ELISE) feeds on previous user requests to retrieve "alternative" documents that may not be returned by more conventional search engines, in a way that may recall "lateral thinking." Tests on the "Cystic Fibrosis Database" benchmark (CFD) show that, while suggesting original documents by adapting its internal rules to the context of the user, ELISE is able to improve its recall rate.

[66] Yann Semet, Yannick Jamont, Raphael Biojout, Evelyne Lutton, and Pierre Collet. Artificial ant colonies and e-learning: An optimisation of pedagogical paths. In HCII'03 - 10th international conference on Human Computer Interaction, 2003. Crete, Greece, June 22-27. [ bib | .pdf ]
This paper describes current research on the optimisation of the pedagogical path of a student in an existing e-learning software. This optimisation is performed following the models given by a fairly recent field of Artificial Intelligence: Ant Colony Optimisation (ACO). The underlying structure of the E-learning material is represented by a graph with valued arcs whose weights are optimised by virtual ants that release virtual pheromones along their paths. This gradual modification of the graph's structure improves its pedagogic pertinence in order to increase pedagogic success. The system is developed for Paraschool, the leading French E-learning company. Tests will be conducted on a pool of more than 10,000 users.

[67] Yann Semet, Evelyne Lutton, and Pierre Collet. Ant colony optimisation for e-learning: observing the emergence of pedagogic suggestions. In SIS2003, IEEE Swarm Intelligence Symposium, Indianapolis, USA, April 24-26, 2003. [ bib | .pdf ]
An attempt is made to apply Ant Colony Optimization (ACO) heuristics to an e-learning problem: the pedagogic material of an online teaching website for high school students is modelled as a navigation graph where nodes are exercises or lessons and arcs are hypertext links. The arcs' valuation, representing the pedagogic structure and conditioning the website's presentation, is gradually modified through the release and evaporation of virtual pheromones that reflect the successes and failures of students roaming around the graph.

A compromise is expected to emerge between the pedagogic structure as originally dictated by professors, the collective experience of the whole pool of students and the particularities of each individual.

The purpose of this study, conducted for Paraschool, the leading French e-learning company, is twofold: enhancing the website by making its presentation intelligently dynamic, and providing the pedagogical team with a refined auditing tool that could help it identify the strengths and weaknesses of its pedagogic choices.

[68] Pierre Collet, Evelyne Lutton, and Jean Louchet. Issues on the optimisation of evolutionary algorithm code. In CEC2002 Conference on Evolutionary Computation, Honolulu, May 2002. [ bib | .pdf ]
The aim of this paper is to show that the common belief in the evolutionary community that evaluation time usually takes over 90 percent of the total time is far from always being true. In fact, many real-world applications show a much lower percentage. This raises several questions, one of them being the balance between the computational complexity of the fitness function and that of the operators: what is the use of elaborating smart evolutionary operators to reduce the number of evaluations if, as a result, the total computation time is increased?

[69] Benoît Leblanc, Hervé Toulhoat, Bertrand Braunschweig, and Evelyne Lutton. Mixing Monte Carlo moves more efficiently with an evolutionary algorithm. In Division of Computers in Chemistry for the 23rd ACS National Meeting, Orlando, Florida, April 7-11, 2002. Winner of a CCG Excellence Award. [ bib | .pdf ]
When considering Markov Chain Monte Carlo sampling in the context of molecular simulations, it is generally required to apply different types of Monte Carlo moves. The relative frequencies for each type of move are usually chosen empirically, from ranges that appear reasonable, but in a rather arbitrary manner. Here we propose an evolutionary algorithm (a population-based stochastic optimizer) that optimizes these frequencies in order to improve sampling efficiency. We show results for NVT and NPT MC equilibrations of linear polyethylene chains in a dense amorphous state, a prototypical case for which sampling efficiency is critical. Making use of problem-dependent criteria, this algorithm improves the quality of the simulation. Finally, we also apply the same algorithm to improve the Parallel Tempering technique, optimizing at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures.

[70] J. Chapuis and E. Lutton. ArtiE-Fract: interactive evolution of fractals. In 4th International Conference on Generative Art, Milano, Italy, December 12-14 2001. [ bib | .pdf ]
ArtiE-Fract is a user-friendly software tool for the creation of fractal images. It is based on an interactive evolutionary algorithm.

Evolutionary algorithms (EAs) are nowadays known as powerful stochastic optimization techniques, and can be considered a computer implementation of a Darwinian evolution model. Their main characteristic is that they manipulate populations of individuals (representing solutions, points of a search space, programs, rules, images, signals, etc.), and involve a set of operations (selection, mutation, crossover) applied stochastically to each individual in order to simulate a sequence of generations. If correctly designed, this stochastic dynamic process concentrates the population on the global optimum of the search space.

However, EAs can be used for purposes other than pure optimization, for example the generation of artistic pictures. The appropriate tool is an interactive EA, i.e. an EA where the function to be optimized is partly set by the user, in order to optimize something related to "user satisfaction". ArtiE-Fract evolves a population of fractal images and displays it via an interface.

More precisely, these fractal images are encoded in individuals as sets of contractive non-linear 2D functions (affine and non-affine), defined either in Cartesian or polar coordinates. Each set of contractive functions defines an IFS (Iterated Function System), with which a particular 2D image, its attractor, is associated.
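
For concreteness, the chaos-game sketch below renders the attractor of an IFS by repeatedly applying a randomly chosen contractive map and recording the visited points. The Sierpinski maps are a standard textbook illustration; ArtiE-Fract evolves richer, possibly non-affine map sets.

    import random

    def chaos_game(maps, n=10_000, burn_in=20):
        """Return points approximating the attractor of the given IFS."""
        x, y, points = 0.0, 0.0, []
        for i in range(n):
            f = random.choice(maps)        # pick one contractive map at random
            x, y = f(x, y)
            if i >= burn_in:               # discard the initial transient
                points.append((x, y))
        return points

    sierpinski = [
        lambda x, y: (0.5 * x,        0.5 * y),
        lambda x, y: (0.5 * x + 0.5,  0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
    ]
    attractor = chaos_game(sierpinski)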

In ArtiE-Fract the interactivity is twofold:

* the user can guide the EA by giving ratings to each image of the population, via a main window that displays the whole population,

* or the user can directly manipulate the images via a specialized window, and then add or replace a modified individual in the current population (a sort of interactive "local" optimization according to the user's taste). A large set of geometric, colorimetric and structural modifications is available at this stage. Moreover, due to the IFS model, control points can be defined on the attractor images (fixed points) that help to distort the shape in a convenient, but non-trivial, manner.

The ArtiE-Fract interface has been carefully designed to give access to a wide variety of parameters. This, together with its two particularities of giving access to unusual fractal images (non-linear IFSs) and allowing the user to interfere at any time with the evolutionary process, makes this software a versatile and user-friendly artistic image generation tool.

[71] E. Lutton, P. Collet, and J. Louchet. EASEA comparisons on test functions: GALib versus EO. In EA01 Conference on Artificial Evolution, Le Creusot, France, October 2001. [ bib | .pdf ]
The EASEA language (EAsy Specification of Evolutionary Algorithms) was created to allow scientists to concentrate on evolutionary algorithm design rather than implementation. EASEA currently supports two C++ libraries (GALib and EO) and a JAVA library for the DREAM. The aim of this paper is to assess the quality of EASEA-generated code through an extensive test procedure comparing the EO and GALib implementations of the same test functions.

[72] B. Leblanc, E. Lutton, B. Braunschweig, and H. Toulhoat. History and immortality in evolutionary computation. In Evolution Artificielle, EA01, Le Creusot, France, October 2001. [ bib | .pdf ]
When considering noisy fitness functions for CPU-time-consuming applications, a trade-off problem arises: how to reduce the influence of the noise without increasing computation time too much. In this paper, we propose and experiment with some new strategies based on exploiting historical information about the algorithm's evolution, and on a non-generational evolutionary algorithm.

[73] B. Leblanc, E. Lutton, B. Braunschweig, and H. Toulhoat. Improving molecular simulation: a meta-optimisation of Monte Carlo parameters. In CEC2001, Seoul, South Korea, May 27-30 2001. [ bib | .pdf ]
We present a new approach to performing molecular simulations using evolutionary algorithms. The main application of this work is the simulation of dense amorphous polymers, and the goal is to improve the efficiency of sampling, in other words to obtain valid samples of the phase space more rapidly. Our approach is based on parallel Markovian Monte Carlo simulations of the same physico-chemical system, where we optimise some Monte Carlo parameters by means of a real-coded genetic algorithm.

[74] Jacques Lévy Véhel and Evelyne Lutton. Evolutionary signal enhancement based on Hölder regularity analysis. In EVOIASP2001 Workshop, Como Lake, Italy, Springer Verlag, LNCS 2038, 2001. [ bib | .pdf ]
We present an approach to signal enhancement based on the analysis of local Hölder regularity. The method makes no explicit assumptions on the type of noise or on the global smoothness of the original data, but rather supposes that signal enhancement is equivalent to increasing the Hölder regularity at each point. The problem of finding a signal with prescribed regularity that is as near as possible to the original signal does not, in general, admit a closed-form solution. Previous attempts have been made analytically for simplified cases. We address here the general problem with the help of an evolutionary algorithm. Our method is well adapted to the case where the signal to be recovered is itself very irregular, e.g. nowhere differentiable with rapidly varying local regularity. In particular, we show an application to SAR image denoising where this technique yields good results compared to other algorithms. The evolutionary algorithm has been implemented using the EASEA (EAsy Specification of Evolutionary Algorithms) language.

[75] Enzo Bolis, Christian Zerbi, Pierre Collet, Jean Louchet, and Evelyne Lutton. A GP artificial ant for image processing: preliminary experiments with EASEA. In EVOIASP2001, LNCS 2038, pages 246-255. Springer Verlag, 2001. Lake Como, Italy. [ bib | .pdf ]
This paper describes how animat-based "food foraging" techniques may be applied to the design of low-level image processing algorithms. First, we show how we implemented the food foraging application using the EASEA software package. We then use this technique to evolve an animat that learns how to move inside images and detect high-gradient lines with a minimum exploration time. The resulting animats do not use standard "scanning + filtering" techniques but develop other image exploration strategies close to contour tracking. Experimental results on grey-level images are presented.

[76] Y. Landrin-Schweitzer and E. Lutton. Perturbation theory for evolutionary algorithms: towards an estimation of convergence speed. In M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J.J. Merelo, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature - PPSN VI, 6th International Conference, Paris, France, September 16-20 2000. Springer Verlag. LNCS 1917. [ bib | .pdf ]
When considering continuous-space EAs, a convenient tool for modeling these algorithms is perturbation theory. In this paper we present preliminary results, derived from Freidlin-Wentzell theory, related to the convergence of a simple EA model. The main result of this paper yields a bound on the sojourn times of the Markov process in subsets centered around the maxima of the fitness function. Exploitation of this result opens the way to convergence speed bounds with respect to some statistical measures on the fitness function (likely related to irregularity).

[77] Pierre Collet, Evelyne Lutton, Marc Schoenauer, and Jean Louchet. Take it EASEA. In M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J.J. Merelo, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature - PPSN VI, 6th International Conference, Paris, France, September 16-20 2000. Springer Verlag. LNCS 1917. [ bib | .pdf ]
Evolutionary algorithms are not straightforward to implement, and the lack of any specialised language forces users to reinvent the wheel every time they want to write a new program. Over the last few years, evolutionary libraries have appeared that try to reduce the amount of work involved in writing such algorithms from scratch, by offering standard engines, strategies and tools. Unfortunately, most of these libraries are quite complex to use and require a deep knowledge of object-oriented programming and C++. To further reduce the amount of work needed to implement a new algorithm, without however throwing down the drain all the man-years already spent in the development of such libraries, we have designed EASEA (an acronym for EAsy Specification of Evolutionary Algorithms): a new high-level language dedicated to the specification of evolutionary algorithms. EASEA compiles .ez files into C++ object files containing function calls to a chosen existing library. The resulting C++ file is in turn compiled and linked with the library to produce an executable file implementing the evolutionary algorithm specified in the original .ez file.

EASEA is available on the web at: http://apis.saclay.inria.fr/

[78] Hatem Hamda, François Jouve, Evelyne Lutton, Marc Schoenauer, and Michèle Sebag. Unstructured representations in evolutionary topological optimum design. In ESAIM, Actes du 32ème congrès d'analyse numérique CANUM2000, 2000. [ bib ]
[79] Pierre Collet, Evelyne Lutton, Frédéric Raynal, and Marc Schoenauer. Individual GP: an alternative viewpoint for the resolution of complex problems. In GECCO99, Genetic and Evolutionary Computation Conference, July 13-17, 1999, Orlando, Florida, USA, 1999. [ bib | .pdf ]
An unusual GP implementation is proposed, based on a more "economic" exploitation of the GP algorithm: the "individual" approach, where each individual of the population embodies a single function rather than a set of functions. The final solution is then a set of individuals. Examples are presented where results are obtained more rapidly than with the conventional approach, in which all individuals of the final generation but one are discarded.

[80] Frédéric Raynal, Evelyne Lutton, Pierre Collet, and Marc Schoenauer. Manipulation of non-linear IFS attractors using genetic programming. In CEC99, Congress on Evolutionary Computation, July 6-9, Washington DC, USA, 1999. [ bib | .pdf ]
Non-linear Iterated Function Systems (IFSs) are very powerful mathematical objects, related to fractal theory, that can be used to generate (or model) very irregular shapes. We investigate in this paper how Genetic Programming techniques can be efficiently exploited to generate, randomly or interactively, artistic "fractal" 2D shapes. Two applications are presented for different types of non-linear IFSs:

- interactive generation of Mixed IFSs attractors using a classical GP scheme,

- random generation of Polar IFSs attractors based on an "individual" approach of GP.

[81] Benoit Leblanc and Evelyne Lutton. Bitwise regularity and GA-hardness. In ICEC 98, May 5-9, Anchorage, Alaska, 1998. [ bib | .pdf ]
We present in this paper a theoretical analysis that relates an irregularity measure of a fitness function to the so-called GA-deception. This approach is a continuation of previous work that presented a deception analysis of Hölder functions. The analysis developed here generalizes this work in two ways: first, we use a "bitwise regularity" instead of a Hölder exponent as the basis for our deception analysis; second, we perform a similar deception analysis of a GA with uniform crossover. We finally propose to use the bitwise regularity coefficients to analyze the influence of a chromosome encoding on GA efficiency, and we present experiments with Gray encoding.
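
As a reminder of the encoding the experiments rely on, here is a minimal Gray encode/decode pair: consecutive integers differ in exactly one bit under Gray code, which is precisely what changes the bitwise regularity of the encoded fitness function.

    def to_gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Inverse transform: cumulative XOR of the shifted code."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    assert all(from_gray(to_gray(i)) == i for i in range(256))
    assert bin(to_gray(7) ^ to_gray(8)).count("1") == 1   # one-bit neighbours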

[82] Benoit Leblanc, Evelyne Lutton, and Jean-Paul Allouche. Artificial Evolution, European Conference, AE 97, Nimes, France, October 1997, Selected papers, volume Lecture Notes in Computer Science, chapter Inverse problems for finite automata: a solution based on Genetic Algorithms. Springer Verlag, 1997. [ bib | .pdf ]
The use of heuristics such as Genetic Algorithm optimisation methods is appealing for a large range of inverse problems. The problem presented here deals with the mathematical analysis of sequences generated by finite automata. There is no known general exact method for solving the associated inverse problem. GA optimisation techniques can provide useful results, even in the very particular area of mathematical analysis. This paper presents the results we have obtained on the inverse problem for fixed-point automata. The software implementation was developed with the help of "ALGON", our home-made Genetic Algorithm software.

[83] Guillaume Cretin and Evelyne Lutton. Fractal image compression: experiments on HV partitioning and linear combination of domains. In Fractals in Engineering 97. INRIA, 1997. Arcachon, France, June 25-27. [ bib ]
[84] Evelyne Lutton. Genetic algorithms and the optimisation of fractal functions: deceptivity analysis. In Fractal Geometry and Self-Similar Phenomena, Symposium celebrating the 70th birthday of Benoît Mandelbrot, February 2-4 1995. Curacao. [ bib ]
[85] E. Lutton, G. Cretin, J. Lévy Véhel, P. Glevarec, and C. Roll. Mixed IFS: resolution of the inverse problem using genetic programming. In Evolution Artificielle, Brest, France, 1995. [ bib ]
[86] K. Daoudi, E. Lutton, and J. Lévy Véhel. Fractal modeling of speech signals. In Fractals in Engineering, 1994. 1-4 June, Montreal. [ bib | .pdf ]
In this paper, we present a method for speech signal analysis and synthesis based on IFS theory. We consider a speech signal as the graph of a continuous function whose irregularity, measured in terms of its local Hölder exponents, is arbitrary. We extract a few remarkable points from the signal and perform a fractal interpolation between them using a classical technique based on IFS theory. We thus obtain a functional representation of the speech signal, which is well adapted to various applications, such as voice interpolation.
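
A minimal sketch of the classical fractal interpolation construction (Barnsley's affine IFS): each map sends the whole graph onto the segment between two consecutive interpolation points, with a vertical scaling d_i (|d_i| < 1) controlling the local irregularity. Choosing the remarkable points and the d_i from a real speech signal is where the paper's method lies and is not reproduced here.

    import random

    def interpolation_ifs(points, d):
        """Affine IFS whose attractor interpolates the given (x, y) points."""
        (x0, y0), (xN, yN) = points[0], points[-1]
        span = xN - x0
        maps = []
        for (xa, ya), (xb, yb), di in zip(points, points[1:], d):
            # Coefficients chosen so that (x0, y0) -> (xa, ya) and (xN, yN) -> (xb, yb).
            a = (xb - xa) / span
            e = (xN * xa - x0 * xb) / span
            c = (yb - ya - di * (yN - y0)) / span
            f = (xN * ya - x0 * yb - di * (xN * y0 - x0 * yN)) / span
            maps.append(lambda x, y, a=a, c=c, di=di, e=e, f=f:
                        (a * x + e, c * x + di * y + f))
        return maps

    pts = [(0.0, 0.0), (0.4, 1.0), (0.7, -0.5), (1.0, 0.2)]
    maps = interpolation_ifs(pts, d=[0.3, -0.4, 0.3])
    x, y, graph = 0.0, 0.0, []
    for _ in range(5000):                  # chaos game over the graph
        x, y = random.choice(maps)(x, y)
        graph.append((x, y))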

[87] Evelyne Lutton and Patrice Martinez. Détection de primitives géométriques bidimensionnelles dans les images à l'aide d'un algorithme génétique. In Evolution Artificielle 94, 1994. Toulouse, France, 19-23 September; English version published as a book chapter, see http link. [ bib | http | .pdf ]
We study the use of genetic algorithms for the extraction of primitives (segments, circles, quadrilaterals, etc.) from images. This approach complements the classical Hough transform approach, in the sense that genetic algorithms prove efficient where the Hough transform becomes too complex and too memory-intensive, i.e. when searching for primitives having more than 3 or 4 parameters.

Indeed, genetic algorithms, used as a stochastic optimisation tool, are known to be computationally expensive, but prove efficient when the functions to be optimised are very irregular and of high dimensionality. The philosophy of the method we present is thus very similar to that of the Hough transform, namely searching for an optimum in a parameter space. However, we will see that the algorithmic implementations differ.

This approach to primitive extraction by genetic algorithms is not a new idea: we have taken up and improved an original technique proposed by Roth and Levine in 1992. Our contribution to that technique can be summarised in three main points:

* we used distance images to "smooth" the function to be optimised (also called the "fitness"),

* to detect several primitives at once, we implemented and improved a population sharing technique ("sharing"),

* and finally, we applied some recently established theoretical results about mutation probabilities, which allowed us, in particular, to improve execution times.

[88] Evelyne Lutton and Patrice Martinez. A Genetic Algorithm for the Detection of 2D Geometric Primitives in Images. In 12-ICPR, 1994. Jerusalem, Israel, 9-13 October. [ bib | .pdf ]
We investigate the use of genetic algorithms (GAs) in the framework of image primitive extraction (such as segments, circles, ellipses or quadrilaterals). This approach complements the well-known Hough Transform, in the sense that GAs are efficient when the Hough approach becomes too expensive in memory, i.e. when we search for complex primitives having more than 3 or 4 parameters.

Indeed, a GA is a stochastic technique, relatively slow, but it provides an efficient tool to search in a high-dimensional space. The philosophy of the method is very similar to that of the Hough Transform, which is to search for an optimum in a parameter space. However, we will see that the implementation is different.

The idea of using a GA for this purpose is not new: Roth and Levine proposed a method for 2D and 3D primitives in 1992. For the detection of 2D primitives, we re-implement that method and improve it in three main ways:

* by using distance images instead of directly using contour images, which tends to smooth the function to optimize (see the sketch after this list),

* by using a GA-sharing technique, to detect several image primitives in the same step,

* by applying some recent theoretical results on GAs (about mutation probabilities) to reduce convergence time.
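
The sketch below illustrates the first point: scoring a circle hypothesis (cx, cy, r) by the mean distance-transform value sampled along its contour, so the fitness varies smoothly as the circle approaches edge pixels. The sampling density and the GA machinery around it are omitted assumptions, not the paper's exact formulation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def circle_fitness(edges, cx, cy, r, samples=64):
        """edges: boolean edge map; lower score = better fit."""
        dist = distance_transform_edt(~edges)       # distance to nearest edge pixel
        thetas = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, edges.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, edges.shape[0] - 1)
        return float(dist[ys, xs].mean())

    # Synthetic test: a circular edge of radius 20 centred at (50, 50).
    yy, xx = np.ogrid[:100, :100]
    edges = np.abs(np.hypot(xx - 50, yy - 50) - 20) < 0.7
    print(circle_fitness(edges, 50, 50, 20))        # near 0 for the right parameters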

[89] Evelyne Lutton. 3D model-based stereo reconstruction using coupled Markov random fields. In International Conference CAIP'93, Computer Analysis of Images and Patterns (IAPR), 1993. September 13-15, Budapest, Hungary. [ bib ]
[90] E. Lutton, J.-M. Vézien, and A. Gagalowicz. Model-based stereo reconstruction by energy minimization. In IMAGE'COM 93, 1993. March 23-25, Bordeaux, France. [ bib ]
[91] V. Serfaty, A. Ackah-Miezan, E. Lutton, and A. Gagalowicz. Photometric analysis as an aid to 3D reconstruction of indoor scenes. In IS and T / SPIE Symposium on Electronic Imaging: Science and Technology, 1993. January 31 - February 5, San Jose, California, USA. [ bib ]
[92] V. Serfaty, A. Ackah-Miezan, E. Lutton, and A. Gagalowicz. Towards a visual model for robot vision: from wireframe to photometric representation. In IMAGE'COM 93, 1993. March 23-25, Bordeaux, France. [ bib ]
[93] Jacques Lévy Véhel and Evelyne Lutton. Optimization of fractal functions using genetic algorithms. In Fractal 93, 1993. London. [ bib ]
In this work, we investigate the difficult problem of the optimization of fractal functions. We first derive some relations between the local scaling exponents of the functions, the sampling rate, and the accuracy of the localization of the optimum, both in the domain and the range of the functions. We then apply these ideas to the resolution of the inverse problem for Iterated Function System (IFS) using a Genetic Algorithm. In the conditions of study (2D problem for sets), the optimization process yields the optimum with a good precision and within a tractable computing time.

[94] Evelyne Lutton, Jean-Marc Vézien, and André Gagalowicz. About criteria for 3D reconstruction of planar facets from a stereo pair. In ISCIS VII, 1992. 2-4 November, Antalya, Turkey. [ bib ]
[95] Evelyne Lutton, Henri Maître, and Jaime Lopez-Krahe. Determining vanishing points with Hough transform. In Visual Communication and Image Processing'90, SPIE, pages 537-546, 1990. Lausanne, Switzerland, 2-4 October. [ bib ]
[96] Evelyne Lutton, Mathilde Delmas, and Henri Maître. Une base de données pour la représentation de scènes industrielles complexes d'extérieur. In PIXIM, pages 17-31, September 25-29, 1989. [ bib ]
[97] Evelyne Lutton and Henri Maître. Etude des symétries du problème de perspective à trois lignes. In 7ème congrès AFCET, RFIA, pages 537-546, 1989. November 29 - December 1. [ bib ]

This file was generated by bibtex2html 1.96.