
AERES evaluation campaign
Research unit: Laboratoire de l'Informatique du Parallélisme, UMR 5668
Gilles Villard
CNRS - École Normale Supérieure de Lyon - INRIA - Université Claude Bernard Lyon 1
Extended version - I. Activity report

Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure de Lyon, 46 allée d'Italie, Lyon cedex 07.
Contents: LIP progress report; LIP general project; extended progress reports and projects by teams. LIP documents concerning the evaluation will be available at:

Introduction

The Laboratoire de l'Informatique du Parallélisme (LIP), founded at ENS de Lyon in 1988, became a joint research unit (UMR 5668) through association with CNRS in 1989, INRIA in 1999, and later Université Claude Bernard Lyon 1. The laboratory gathers more than 120 scientists (researchers, faculty members, doctoral students, and engineers), organized in six research teams and supported by an administrative and technical staff of 11. The teams are located on the Gerland site in Lyon (ENS and UCBL buildings). From fundamental and mathematical computer science to innovative software and hardware development, research at LIP covers a broad spectrum: new computing architectures and infrastructures, the future Internet, parallel computing, communication protocols, dynamic networks, scheduling, services and components, performance evaluation, compilation, synthesis, code optimization, certified computing, algebraic computing, cryptography, linear algebra, Euclidean lattices, complexity, algorithmic complexity, discrete event systems, automata, graphs, programming models, formal proof, logic, etc. Several of the laboratory's activities also reach out to other scientific disciplines, through the Complex Systems Institute (IXXI), interactions with the life sciences, and scientific computing. LIP researchers are highly active, with nearly 600 books, journal articles, and conference papers, and more than 50 doctoral and habilitation theses defended over four years (nearly 900 publications and communications in total). Our international collaborations translate into papers co-signed with researchers in more than 40 foreign universities. Software production is highly diverse, ranging from computational libraries to very widely distributed software and tools.
During the four-year period, LIP researchers took part in more than 100 national and international projects (Région, ANR, Ministry, CNRS, INRIA, European Union, etc.). These programs are our primary source of funding. LIP is a partner of the Minalogic and System@tic competitiveness clusters and is strongly involved in valorization and transfer toward the communication, semiconductor, and computing industries (Alcatel-Lucent, IBM, STMicroelectronics, etc.). Three startups have emerged from the laboratory. Roughly half of LIP's permanent members are faculty members; with the participation of researchers and doctoral students, the teams teach at a wide range of levels in Lyon and, to a lesser extent, elsewhere.


Project summary

Computing, meaning computers, communication networks, and their applications, contributes crucially to the progress of society. Digital infrastructures have grown so rapidly that they are now ubiquitous: society perceives them both as embedded or pervasive systems and as networks of computers that now span the planet. Beyond this technological impact, another change, more recent but just as fundamental, is under way: out of necessity, the non-computer sciences are increasingly adopting a computational and algorithmic way of thinking. Our challenge is to take part in, and accompany, this double revolution. The LIP project is rooted in the study of the future digital world and of its theoretical foundations, with the aim of inventing new computer science concepts and anticipating their repercussions on the other sciences. Our strategic openings, traditionally rich toward mathematics, also extend to the computational sciences, modeling, the life sciences, and the communication and semiconductor industries (Minalogic and System@tic competitiveness clusters). These openings anchor us in particular in the Lyon and regional fabric. Our research will be organized along two complementary axes, transverse to the teams:
- Challenges of future computing and communication architectures: design, use, and adequacy to the constraints and needs of applications;
- Models and methods in mathematical computer science: fundamental aspects, and contributions to algorithmics, to innovative software and hardware development, and to technological advances.
The machine (computer, infrastructure) is the center of our studies; for us it is as much an abstract entity as a physical object.
Research at LIP has always ranged from foundations to development, which is both a solid basis for our project and a major source of invention. On the physical side, endless combinations of computing, communication, and memory units lead to fantastic complexity (chips, multi-cores, embedded processors, computing clouds, the Internet, pervasive systems, etc.). Mastering this complexity demands ever more knowledge, with the machine serving both as a target and as a means for research. In this context, LIP puts particular emphasis on the quality of computation and on the optimized, reliable, and secure use of resources. Mathematical computer science and algorithmics are themes shared by the whole laboratory; they play a central role in inventing the new models and paradigms that let us face the change of scale and the complexity of the problems we study. The computational sciences bring important strategic challenges. From signal processing and synthesis with Alcatel-Lucent and STMicroelectronics, to the AFM-CNRS-IBM Décrypton project and intercontinental digital infrastructures, we will continue to take part in these challenges within a theme complementary to the two axes above:
- Applicative and multi-disciplinary aspects: high-performance computing, computational modeling, complex systems and life sciences. This theme notably covers our participation in the projects of the Fédération Lyonnaise de Modélisation et Sciences Numériques and the evolution of our collaborations with the Complex Systems Institute.
Our scientific objectives and evolutions lead us to a reorganization from six to eight teams: Arénaire (computer arithmetic), Avalon (algorithms and software architectures for service-oriented platforms), Compsys (compilation, embedded systems, and intensive computing), D-Net (dynamic networks), MC2 (models of computation and complexity), Plume (proof theory and formal semantics), Reso (optimized protocols and software for high-speed networks), and Roma (resource optimization: models, algorithms, and scheduling). These teams cover a broad span of computer science, including new computing architectures and infrastructures, the future Internet, dynamic networks, software and hardware development, parallel computing, communication protocols, scheduling, services and components, performance evaluation, compilation, synthesis, code optimization, certified computing, algebraic computing, cryptography, linear algebra, Euclidean lattices, complexity, algorithmic complexity, discrete event systems, automata, graphs, programming models, formal proof, logic, etc. LIP, as a joint research unit, is associated with CNRS, ENS, UCBL, and INRIA. The teams Arénaire, Avalon, Compsys, D-Net, Reso, and Roma are INRIA project-teams that exist, are being created, or are planned.


Contents

1. LIP progress report
   Scientific overview: Teams evolution and highlights; Transverse and new research directions; Self-assessment
   Scientific activity and production; Collaborations; Personnel; Governance and animation; Teaching and training; Financial resources; Infrastructure; Self-assessment

2. LIP general project
   Context
   LIP's internal evolutions: Main scientific evolutions; Local, national, and international evolutions; Strengths and opportunities
   Scientific project: Addressing the challenges of future computational and communication architectures; Mathematical computer science: models, methods, and algorithms; Cross-fertilization with other disciplines, and application domains
   Implementing the project: Weaknesses and risks; Governance and animation; Personnel; Teaching and training; Infrastructure

3. Arénaire - Computer Arithmetic
   Team composition; Executive summary
   Research activities: Number systems; Efficient floating-point arithmetic and applications; Efficient polynomial and rational approximations; Linear algebra and lattice basis reduction; Correct rounding of functions; Certified computing; Hardware arithmetic operators
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer: Patents (French, European, worldwide); Creation of startups; Consulting activities
   Educational activities: Supervision of educational programs; Teaching; Other teaching activities
   Self-assessment
   Perspectives: Towards the automatic design of operators and functions; New mathematical tools for function approximation; Linear algebra and polynomial evaluation; Methods for certified computing; Euclidean lattice reduction; Pairing-based cryptology
   Publications and productions

4. Compsys - Compilation and Embedded Computing Systems
   Team composition; Executive summary
   Research activities: Objective 1: Code optimization for special-purpose processors; Objective 2: High-level code transformations; Objective 3: Hardware and software system integration
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer: Patents (French, European, worldwide); Creation of startups; Consulting activities
   Educational activities: Supervision of educational programs; Teaching
   Self-assessment
   Perspectives: Vision and new goals of the team; Evolution of research activities
   Publications

5. Graal - Algorithms and Scheduling for Distributed Heterogeneous Platforms
   Team composition; Executive summary
   Research activities: Scheduling strategies and algorithm design for heterogeneous platforms; Scheduling for sparse direct solvers; Providing access to HPC servers on the grid
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer: Patents (French, European, worldwide); Creation of startups; Industrial transfer; Consulting activities
   Educational activities: Supervision of educational programs; Teaching
   Self-assessment
   Perspectives: new team ROMA; Perspectives: new team Avalon
   Publications

6. MC2 - Models of Computation and Complexity
   Team composition; Executive summary
   Research activities: Algebraic complexity; Algebraic algorithms; Kolmogorov complexity; Quantum computing; Fault-tolerant computation; Infinite words; Small-world phenomenon; Networks of asynchronous automata; Blind scheduling and data broadcast; Tilings; Algorithms for Network Calculus; Simulation platform for complex systems; Modeling, simulation and analysis of gene regulatory networks
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer: Patents (French, European, worldwide); Creation of startups; Consulting activities
   Educational activities: Supervision of educational programs; Teaching; Other teaching
   Self-assessment
   Perspectives
   Publications

7. Plume - Proof Theory and Formal Semantics
   Team composition; Executive summary
   Research activities: Curry-Howard correspondence; Structures of programming languages; Formalizations in the Coq proof assistant
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer
   Educational activities: Supervision of educational programs; Teaching
   Self-assessment
   Perspectives: Logical foundations of programming languages; Semantic tools for new behavioural properties of programs; Formalization
   Publications

8. Reso - Optimized Protocols and Software for High-Performance Networks
   Team composition; Executive summary
   Research activities: Optimized protocol implementations and networking equipment; Quality of service and transport layer for future networks; High-speed network traffic metrology and statistical analysis; Network services for high-demanding applications; Wireless communications
   Application domains and social or economic impact
   Visibility and attractivity: Prizes and awards; Contribution to the scientific community; Public dissemination
   Contracts and grants: External contracts and grants (industry, European, national); Research networks (European, national, regional, local); Internal funding; International collaborations resulting in joint publications
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Industrialization, patents, and technology transfer: Patents (French, European, worldwide); Creation of startups; Consulting activities
   Educational activities: Supervision of educational programs; Teaching
   Self-assessment
   Perspectives: Reso; new team DNet (Sensor networks, distributed measurement and distributed processing; Statistical characterization of complex interaction networks; Theory and structural dynamic properties of dynamic networks)
   Publications

9. IT Support Team MI-LIP
   Team composition; Executive summary
   Software production and research infrastructure: Software descriptions; Contribution to research infrastructures
   Consulting activities
   Self-assessment
   Perspectives


[Figure: laboratory organization chart, June 2009]


1. LIP progress report

LIP, the Laboratoire de l'Informatique du Parallélisme, is a joint research department with four heading organizations. This report summarizes its activities from Fall 2005, the date of the previous document, to September. LIP was founded at ENS (École Normale Supérieure de Lyon) in 1988, and has been associated with CNRS (Centre National de la Recherche Scientifique) since 1989, with INRIA (Institut National de Recherche en Informatique et en Automatique) since 1999, and, more recently, with UCBL (Université Claude Bernard Lyon 1). About 130 people participate in its activities. There are 54 permanent members, whose affiliations (12 CNRS, 17 ENS, 17 INRIA, 8 UCBL) show the balanced investment of the partners and reflect the diversity and wealth of our research and teaching. Our main strength is the creative interaction between long-term fundamental research, innovative software and hardware design, and shorter-term projects and transfer through industrial collaborations. In most LIP research themes, this interaction fosters new trends, both theoretical and practical. Mathematical computer science and algorithmics are transverse to the lab; together with our links to the numerical and computational sciences, this illustrates our special partnership with mathematics. The policy of the last four years has been to strengthen the six existing research teams, to encourage transverse and new directions, and to support collaboration with other disciplines, especially through complex-system modeling. This is described and analyzed in this chapter; subsequent chapters give further details for the research teams (Chapters 3-8) and for the IT support team (Chapter 9).

1.1 Scientific overview

Since our previous four-year report, LIP has grown by 56%, reaching a total of 133 members. Almost one third of the permanent members of 2005 have left the lab, and 140 people have joined for at least three months.
This high turnover reflects our vitality and attractiveness, and is matched by major progress and evolution in every team. LIP has focused on strengthening six main directions with high potential. Internally, this corresponds to an organization in six teams, four of which are joint project-teams with INRIA: Arénaire, Compsys, Reso, and Graal. These four joint projects gather about 80% of the scientific members, so a large part of LIP is directly concerned by three or four heading organizations. Nevertheless, balancing the involvement of the four organizations among the teams, especially for MC2 and Plume, which have no INRIA members, is a point that we keep in mind. The support provided to the teams, illustrated by 13 new permanent researchers and faculty members and one permanent engineer, has led to a qualitatively and quantitatively large production. We have produced about 600 peer-reviewed publications and over 50 doctoral and habilitation theses (about 900 publications and communications in total). We have co-signed papers with researchers from over 40 universities abroad. We have taken on responsibilities in more than 350 scientific events. About 30% of our operating (non-consolidated) budget is balanced between CNRS-ENS recurring funds, INRIA support to project-teams, and various national and international specific actions funded by the four heading organizations. Hence about 70% of our income comes from external sources, the main one being the French National Research Agency (ANR), with participation in more than 30 collaborative projects.
New fundamental directions are emerging from this vitality, such as: high-performance geometry of numbers and cryptography; applications of linear logic to implicit computational complexity; algorithmics of discrete event systems; just-in-time compilation and source-to-source optimizations for high-level synthesis; static scheduling of dynamic systems described by probabilistic models; and models and algorithms for Software as a Service (SaaS) platforms. The quality of our doctoral program is attested by the fact that over 70% of the PhD candidates who defended between 2000 and 2008 have found permanent research or faculty positions. The success of the four-year plan is also emphasized by the large transverse initiatives that LIP has encouraged. Joint research around parallel computing and networking has pushed the convergence of the communication, computation, and storage aspects. Our involvement in various grid projects and in the INRIA-Bell Labs joint laboratory, together with our collaborations within the System@tic competitiveness cluster, demonstrates our R&D strengths in the domain. Our initiatives on highly-optimized arithmetic libraries and on compilation have led to a well-established partnership with STMicroelectronics, in particular through the Minalogic competitiveness cluster and the Nano2012 funding mechanism, with quick transfer of our knowledge into STMicroelectronics compilers and tools. Beyond the rich spectrum of our software production, from widely disseminated middleware to sophisticated dedicated libraries, we note that three software-related startups (each recently awarded a national prize) have emerged from LIP. LIP was also involved in the main inter-disciplinary initiative of the period, the Complex Systems Institute (IXXI), which is now a recognized structure, independent from LIP.
LIP members, mostly from the MC2 team, actively participated in the creation of the institute, and applying computer science tools to the modeling of complex phenomena is one of our main inter-disciplinary themes. The increasing size of LIP has led us to find new offices outside the ENS campus. We now have offices on two new laboratory sites at Gerland, close to ENS: IXXI offices rented since 2006, and, more recently, UCBL offices. These

geographical aspects represent an important change of the last period that we especially want to address in the future plan. The last four years have been very successful. The themes that have emerged, and the evolution of the teams, will lead to a new LIP organization for the next plan (discussed in a separate document entitled "Project document"). Our results show that LIP is able to take advantage of its favorable environment and of the existing opportunities.

Teams evolution and highlights

Since 2005, LIP has been organized in six teams, corresponding to its six main topics.

Arénaire - Computer arithmetic. After a weakening of computer arithmetic and hardware aspects with the departures of M. Daumas and A. Tisserand (end of 2005), the team has attracted: N. Brisebarre (number theory algorithmics, CR CNRS), V. Lefèvre (computer arithmetic, CR INRIA), D. Stehlé (geometry of numbers algorithmics, CR CNRS), N. Louvet (numerical and computer arithmetic algorithmics, Mcf UCBL), and G. Hanrot (algebraic, cryptographic, and geometry of numbers algorithmics, Prof. ENS). This recruitment, along with new mathematical computer science directions, strengthens the mixing of fundamental research with software- and hardware-tool design that is a main specificity of Arénaire. Around computer arithmetic, the group studies operators together with specific application domains (cryptography, signal processing, linear algebra, lattice basis reduction, etc.), for a better understanding of the impact of arithmetic choices on solving methods in scientific computing. The aim is to improve the quality of computation at large, and the confidence in computed results, for example through formal proofs. A significant recent orientation is toward designing synthesis tools that automatically generate libraries.

Highlights. The new IEEE 754 standard is a consequence of Arénaire's efforts since 1995 (CRLibm, MPFR).
Our algorithms are used by the STMicroelectronics C compiler, and our FPGA elementary functions are used worldwide for high-performance computing. Operator synthesis prototypes, using certified libraries such as Sollya, are newly available. The algorithmic (complexity) work around matrix and lattice computing is highly recognized.

Compsys - Compilation and embedded computing systems. The departures of T. Risset (Prof. INSA, 2005) and A. Fraboulet (Mcf INSA, 2007) have led to a more focused range of activities and objectives in compilation and circuit synthesis. The hiring of C. Alias (compilers and hardware synthesis, CR INRIA) and the ENS Emeritus Professor position of P. Feautrier should preserve the stability of the team for the upcoming years. The goal of Compsys is to adapt and extend code-optimization techniques, primarily developed in compilation and parallelization for high-performance computing, to the domain of compilation for embedded computing systems. Compsys focuses in particular on: code generation for embedded processors, aggressive compilation, and just-in-time compilation; and high-level program analysis and transformations for high-level synthesis tools. The key direction is to help bridge the gap between compilation and architecture, and between computer science and electrical engineering. A specificity of Compsys is to tackle combinatorial optimization problems arising from actual compilation problems, and to validate these developments in compiler tools through strong industrial collaborations.

Highlights. Compsys has established a strong collaboration with STMicroelectronics, in particular within the Minalogic competitiveness cluster, pushing the use of static single assignment (SSA) for register allocation and back-end optimizations; this has led to international recognition. Carrying the expertise on loop transformations and polyhedral optimizations over to embedded computing has led to several new and interesting concepts.
Over the four-year period, five of the team's publications received best paper awards at international conferences.

Graal - Algorithms and scheduling for distributed heterogeneous platforms. The team has been strengthened with B. Tourancheau (high-performance computing, Prof. UCBL), L. Marchal (algorithmics and parallel computing, CR CNRS), B. Uçar (linear algebra, CR CNRS), G. Fedak (grid computing, CR INRIA), and C. Perez (component models, CR INRIA). From fundamental aspects to middleware development, this balanced growth has reinforced Graal's three research directions: scheduling and algorithmics for heterogeneous platforms; algorithmics for parallel sparse direct solvers; and high-performance servers on the grid. The Graal methodology, which consists in addressing fundamental algorithmic challenges inspired by realistic platform and application models, helps bridge the gap between theoretical research and software design. Among the new themes considered are online problems, stochastic algorithms, multi-criteria optimization, and out-of-core multifrontal solvers. The wide spectrum of the research also provides mastery of the whole chain, from model definition to implementation. This mastery is a key point for proofs of concept such as component models for higher abstraction levels.

Highlights. DIET (grid middleware) has been chosen as the middleware for the Décrypton grid, an AFM-CNRS-IBM initiative supporting genomics and proteomics research, and has led to the launch of a startup. The MUMPS sparse linear algebra solver enjoys remarkable recognition and dissemination; out-of-core factorization was a main achievement. Y. Robert was Program Chair of the leading international conference in the domain, the IEEE International Parallel and Distributed Processing Symposium (IPDPS).
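As a toy illustration of the kind of scheduling question Graal studies, the following is a minimal greedy list-scheduling sketch for independent tasks on a heterogeneous platform. It is an assumption-laden simplification written for this report (function name, data layout, and greedy rule are ours): Graal's actual algorithms, and middleware such as DIET, handle dependencies, data movement, and far richer platform models.

```python
# Minimal sketch of greedy list scheduling on a heterogeneous platform
# (illustrative only; not the actual Graal/DIET algorithms).
# Each task has a per-processor execution time; tasks are taken in a fixed
# priority order and each is mapped to the processor that lets it finish
# earliest, given what has already been scheduled.

def list_schedule(task_times, num_procs):
    """task_times[t][p] = running time of task t on processor p."""
    ready = [0.0] * num_procs       # time at which each processor becomes free
    schedule = []                   # (task, processor, start, finish)
    for t, times in enumerate(task_times):
        # pick the processor minimizing this task's finish time
        p = min(range(num_procs), key=lambda q: ready[q] + times[q])
        start = ready[p]
        finish = start + times[p]
        ready[p] = finish
        schedule.append((t, p, start, finish))
    return schedule, max(ready)

# Three independent tasks on two heterogeneous processors.
sched, makespan = list_schedule([[3.0, 5.0], [2.0, 1.0], [4.0, 4.0]], 2)
```

On this tiny instance the heuristic places task 0 on the fast processor 0 and tasks 1 and 2 on processor 1, for a makespan of 5.0; the interesting theory starts when task priorities, dependencies, and communication costs enter the model.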

MC2 - Models of computation and complexity. The highly-recognized fundamental research of the team was impacted in 2008 by the departures of five permanent researchers: 2 faculty members retired, 3 CR CNRS moved to other academic positions, and 1 ENS faculty member moved to industry. The scientific loss especially affects discrete mathematics topics such as graph theory, and mathematical computer science topics such as complexity theory. The team's investment in collaborations within the Complex Systems Institute has also been challenged by the departure of M. Morvan (Prof. ENS). Following these departures, the MC2 members have defined a more focused range of activities, namely models of computation, complexity theory, and combinatorics. The initiatives on complex systems have led to Boolean and gene network projects, and to an ambitious simulation platform. The creation of a startup should give an enduring legacy to our work on complex systems.

Highlights. Progress in understanding the relationship between some of the main problems of discrete complexity theory (e.g., "P = NP?" or "P = PSPACE?") and the corresponding open problems of algebraic complexity theory. First group to obtain quantum lower bounds for hidden subgroup problems. New research activities: Network Calculus and game theory. Development of a general-purpose software platform for the simulation of complex systems, and creation of a startup (around Spring 2010).

Plume - Programs and proofs. P. Baillot (logic and computational complexity, CR CNRS), O. Laurent (logic and semantics, CR CNRS), A. Miquel (realizability, partial secondment), and C. Riba (term rewriting and realizability, Chaire CNRS/Mcf ENS) have joined the team. This has had a major impact on the scientific goals of Plume, with a strong new direction in proof theory. Plume's research deals with methods for the formal analysis of computer programs and, more generally, of computing systems.
This covers the foundations of programming languages (or of aspects of them), and the static analysis of programs. The team relies on theorem-proving systems to develop rigorous formalizations and proofs about mathematical theories (mainly using the Coq system). Plume builds on the proofs-as-programs (Curry-Howard) correspondence to strengthen the links between proof theory in logic and computer science, with a specific focus on the integration of additional programming features such as concurrency, mobility, modularity, and probabilistic behaviors.

Highlights. Important progress has been made on formal tools for studying operational equivalences of concurrent processes, such as bisimilarity. This covers both logics for expressing equivalences (Ambient Logic) and proof techniques for establishing bisimilarity results (up-to techniques for weak bisimulation). kextraction is the first program-extraction module for Coq able to deal with classical reasoning; it is a concrete byproduct of the work on the Curry-Howard correspondence for classical logic.

Reso - Optimized protocols and software for high-performance networks. The team has attracted members from related fields, specifically P. Goncalves (signal processing and statistical analysis, CR INRIA), I. Guérin Lassous (wireless and ad hoc networks, Prof. UCBL), J-P. Gelas (logistical and active networking, Mcf UCBL), and T. Begin (networking and performance evaluation, Mcf UCBL). Enriched by this new potential, Reso participated in the creation of the common research lab between INRIA and Alcatel-Lucent Bell Labs on self-organizing networks. The arrival of É. Fleury (dynamic networks, Prof. ENS) and G. Chelius (wireless and sensor networks, CR INRIA) reinforced the networking component of LIP, as well as the collaboration with the Sisyphe team of the ENS Physics laboratory on traffic and network analysis and modeling. When joining LIP, É.
Fleury proposed a new research topic coupling his previous experience on sensor networks with dynamic network measurement and analysis; this will lead to a new team organization focused on dynamic networks. Reso's research on core high-speed network infrastructures shows how to efficiently aggregate network, storage, and computing resources into a virtual and integrated environment. This trend, especially investigated in Grid and Cloud contexts, will certainly influence the design of the future Internet.

Highlights. The theoretical dimension of Reso and its collaboration with the networking and performance-evaluation communities have fostered a vision of future networks integrating flow- and energy-awareness, as well as service and virtualization paradigms. A strong collaboration has been established with Alcatel-Lucent through the INRIA-Bell Labs laboratory and participation in the System@tic competitiveness cluster. Reso played a key role in the Grid'5000 project (construction, validation). Innovative software will be transferred to Reso's spinoff, LinKTiss, which received the Oseo Emergence 2009 prize. Our visibility has increased, as attested by the international Marconi Society Young Scholar award received by S. Soudan.

Transverse and new research directions

Two main transverse directions, plus the complex systems initiative, have been successfully fostered by LIP and supported by our heading organizations.

High-performance computing and networking. The transverse Grid'5000 project between Graal and Reso is a successful operation that has been strongly supported by the laboratory and by various initiatives: a French Ministry research grant, the PSMN Federation FLCHP, Grid CNRS (11k€ in 2007, 38k€ in 2008), INRIA ADT Aladdin, and temporary

Part 1. LIP progress report

ENS engineer positions (LIP Fonds de roulement). Grid 5000 was designed as a large-scale and highly-reconfigurable testbed for large-scale distributed systems. Its large scale and heterogeneity allowed researchers from the Graal team to validate the DIET middleware and several algorithms and applications (bioinformatics, cosmology). The Grid 5000 platform allows researchers from the Reso team to design and validate, at a large scale, their software protocols, services, and frameworks (e.g., the energy-aware reservation infrastructure EARI, the metrology infrastructure MetroFlux).

Software and hardware synthesis, compilation, and embedded computing systems. The collaboration of Arénaire and Compsys with STMicroelectronics has grown from informal contacts into formal joint research efforts through the Sceptre and Mediacom projects, with significant funding for PhD students, engineers, and post-docs, and many joint research papers. This has established LIP as a significant research actor on embedded systems in Rhône-Alpes and in the formal national collaboration between INRIA and STMicroelectronics (accord-cadre). Compsys and STMicroelectronics organized the first-ever international Spring school on static single assignment. Almost all world experts on the topic were able to participate.

IXXI. The Complex System Institute is a major inter-disciplinary initiative that LIP has supported, in particular with a 150k€ infrastructure donation. The IXXI Institute targets a new kind of scientific research community, one emphasizing multi-disciplinary collaboration in pursuit of understanding the common themes that arise in natural, artificial, and social systems. Within the IXXI framework, LIP creates or reinforces collaborations to match fundamental research with the needs of society, by contributing to key sectors for the future: medicine, biology, and the environment.
New domains are emerging in every team, among which:

Linear logic. The arrival of P. Baillot and O. Laurent brings strong knowledge in linear logic and its variants. In particular, the applications to implicit computational complexity open the possibility of new interactions between LIP teams on questions related to resource usage.

Algorithmics of discrete event systems. The MC2 team plans to develop this research topic following its work on Network Calculus, a deterministic queuing theory for worst-case performance evaluation. Such analyses require tools from different fields such as (min,+) tropical algebra, linear algebra, graph algorithmics, and computational geometry.

Arithmetic operator generation. Arithmetic cores get coarser, deal with new domains such as finite-field computing for cryptography, and target a wider range of technologies: from ASIC to FPGA in hardware, from embedded systems to HPC in software. To handle this increasing complexity, Arénaire works more and more on the automation of arithmetic core generation and optimization, for instance through improvements on polynomial approximation theory.

High-performance geometry of numbers and applications. With the hiring of G. Hanrot and D. Stehlé, and the existing strengths, we have outstanding expertise around Euclidean lattices. These objects are studied for themselves, but also applied to arithmetic challenges such as polynomial approximation or the search for correct-rounding worst cases.

Compilation and embedded computing systems. Thanks to its collaboration with STMicroelectronics, Compsys now has very strong expertise in static single assignment (SSA) that will be used to develop efficient just-in-time compilation schemes. Compsys's expertise in high-level transformations (with connections to Arénaire for the use of FPGA platforms) is also now mature enough to yield fruitful results in high-level synthesis, again with STMicroelectronics as industrial partner.

Dynamic networks.
Our expertise in the deployment of large-scale sensor networks within MOSAR (an Integrated Project supported for 5 years by the European Commission under the Life Science Health Priority of the Sixth Framework Programme) and SensLAB (an ANR platform project) allows us to gather valuable data on dynamic networks. In the near future, we want to strengthen our capacity for the analysis and modeling of real-world graphs and complex networks.

Future Internet. We believe that self-optimization challenges will be at the core of future network design, which will embed virtualization paradigms (deeply related to Cloud principles) as well as traffic and energy awareness. We thus decided to strengthen our competence in performance analysis and optimization research, notably by hiring Thomas Begin (MCF UCBL). We also aim to invigorate our interaction with local research on distributed systems, scheduling theory, combinatorial optimization, and signal analysis, while reinforcing our collaborations with complementary teams at INRIA, but also at EPFL and in the US. Following our motto "from the model to the real world", we will continue to transfer to industry original, innovative, and scalable software that implements promising concepts.

Static scheduling of dynamic systems described with probabilistic models. In the past, most of the theoretical research in Graal dealt with the design of static algorithms for static systems, that is, for systems whose characteristics were assumed not to evolve during the lifespan of the studied problem (namely, the execution of an application). During the reporting period, the Graal team increasingly focused on solutions for dynamic systems. We initiated our first work dealing with online problems. The hiring of A. Benoit was an opportunity for the team to acquire knowledge on probabilistic systems.
The Graal team is now considering the design of static solutions for dynamic systems that are described through probabilistic models, and has published its first results in this field.

Service-Oriented Computing Platforms. Access to Grid and Cloud platforms is driven through the use of different services providing mandatory features such as security, resource discovery, virtualization, load-balancing, etc. Software as a Service (SaaS) will thus play an important role in the future development of large-scale applications. This model based on provided and/or offered services is generalized by software component models, which deal with composition issues as well as with deployment issues. These software architectures require many research issues to be solved, from programming interfaces down to lower middleware details in Grids, Clouds, and P2P systems; this will be the aim of the Avalon research team, which emerged from Graal.

Self-assessment. Our scientific objectives have largely been realized. This is attested by the attractiveness of the teams and by the evolution of the scientific points of focus of LIP. The vitality and progress of the teams keep their potential at the highest level. Following their recent hires, Arénaire, Compsys, and Plume have experienced major changes. Nevertheless, the organization of these groups should be quite stable during the upcoming years. Also, thanks to various hirings, the largest teams, Graal and Reso, will give rise to smaller, more coherent groups. The main transverse initiatives (grids and embedded systems), and the inter-disciplinary participation in the Complex System Institute, are successful. The growing importance of the latter and the departures in MC2 have induced a few team instabilities that have now been overcome. In particular, having two temporary positions (researcher and faculty member) next year on quantum complexity is very satisfactory.
The balanced focus on the fundamental domains (algorithmics, complexity, logic, statistics, modeling, mathematical computing) and on software/hardware design and experimental science is at a very satisfying level and remains a clear policy of LIP.

1.2 Scientific activity and production

Publications. Our production is strong both in quality and in quantity. We have published 173 articles in peer-reviewed journals, and over 400 in peer-reviewed conferences (for an average of 100 scientific members, of whom 40 are permanent). The publication counts are broken down into the following categories:
- ACL: international and national peer-reviewed journals
- INV: invited conferences
- ACT: international and national peer-reviewed conference proceedings
- COM/AFF: short communications and posters in international and national conferences and workshops
- OS: scientific books and book chapters
- OV: scientific popularization
- DO: book or proceedings editing
- AP: other publications
- TH: doctoral dissertations and habilitation theses

The most represented journals are Theoretical Computer Science (21 times), IEEE Transactions on Computers (12 times), and Elsevier Future Generation Computer Systems (11 times). For conferences, we especially publish in: IEEE Symposium on Computer Arithmetic (ARITH) (12 times), International Symposium on Symbolic and Algebraic Computation (ISSAC) (9 times), IEEE International Symposium on Cluster Computing and the Grid (7 times), and International Parallel and Distributed Processing Symposium (IPDPS) (5 times).

Editorial boards, conference committees and organization. We participate in the editorial boards of 17 journals, such as Discrete Math. Theor., Parallel Comput., ACM TECS, Appl. Algebr. Eng. Comm., and J. Symb. Comput. LIP members have also been guest editors of 15 journals, such as Ann. Telecommun., Par. Process. Lett., IEEE T. Comput., and Theor. Comput. Sci.
For major international conferences, LIP members have been: chairs (e.g., CCGrid 2008, SSS 2009, or ParCo 2009); members or chairs of 9 steering committees (e.g., CCGrid 2008, ARITH, and ISSAC); PC chairs of 14 main conferences (e.g., ACM Gridnets, ACM MobiHoc, IPDPS, ARITH). LIP members have been chairs or PC chairs for 56 workshops (mostly international) or tracks in conferences, such as SSA 09 (Static Single-Assignment Form), E2GC2 09 (Energy Efficient Grids, Clouds and Clusters), SLSS 09 (Scheduling for Large-Scale Systems), LCC 09 (Logic and Computational Complexity), and RAIM 09 (Rencontres Arithmétique de l'Informatique Mathématique). We have also participated in the program committees of about 300 events. These significant numbers demonstrate LIP's strong recognition and involvement in the national and international communities.

Recognition, prizes, and awards. F. Desprez received an IBM Faculty Award (2008). P. Feautrier received from the Euro-Par Steering Committee an award in recognition of his outstanding contributions to parallel processing (2009). P. Koiran is a Junior Member of the Institut Universitaire de France (2007). Y. Robert is an IEEE Fellow (2006) and a Senior Member of the Institut Universitaire de France (2007). S. Soudan received the Marconi Young Scholar Prize (2009). LIP members received 16 awards for papers, presentations, or demonstrations at conferences such as ASAP, CGO, CHES, ICALP, ISSAC, JDIR, SIGMETRICS, or TLCA. The Martes ITEA project (Compsys team) received the 2008 ITEA Symposium Silver Award.

Software, standardization actions. We have a rich and wide software production, ranging from libraries for experimental computer science (proofs of concepts, algorithmic design), to widely-disseminated research software, to industrial transfer. For example, we may refer to:
- HIPerNET: a new framework for virtual private execution infrastructure composition and orchestration, transferred to the startup LinKTiss.
- BDTS: software for a Bulk Data Transfer Scheduling service, transferred to the LinKTiss startup and partially integrated in the CARRIOCAS-SRV prototype (Alcatel-Lucent).
- LAO developments: the contributions of Compsys on back-end code optimizations are transferred to STMicroelectronics, first in the prototype compiler LAO, then in the production compiler.
- DIET: middleware for the deployment of computation servers over Grids and Clouds, transferred to the Graal Systems startup.
- MUMPS, a multifrontal massively parallel solver: lines of code, 1000 downloads per year, included in many finite element packages and numerical simulation codes from both academia and industry.
- IEEE: the specification of elementary functions in this revised floating-point standard is a direct consequence of Arénaire's work on the subject.
- FLIP, floating-point for integer processors: included in the standard compiler tools for the ST200 processor family.
- FPLibrary: the first library of floating-point operators to include elementary functions for reconfigurable computers.
- Mathematical libraries are widely disseminated through general software systems such as Magma or Maple, or integrated and disseminated through open-source platforms such as Sage (e.g., linbox, fplll).
- kextraction: a prototypical extension of the Coq proof assistant that allows one to extract certified programs from proofs in classical logic. While not software to be independently distributed, it represents a new technology that could be included in future versions of the Coq system.

Startups. Three start-ups are emerging from LIP, which illustrates our transfer skills. These start-ups are supported through various sources, and all three of them received a Higher Education and Research Ministry award. In relation with the MC2 team and the Complex System Institute, É. Boix proposes a software platform for the simulation of complex systems in biology. Emerging from the Graal team and the DIET software, and represented by D. Loureiro, the second project deals with consulting and software editing around access to distributed HPC resources. In relation with the Reso team, P. Vicat-Blanc Primet will offer virtualization and optimization services and software for cloud networking.

Administration of research. Among the administrative responsibilities held at the national level, one may cite: J.-M. Muller, chargé de mission at the CNRS Sciences et technologies de l'information et de l'ingénierie institute since April 2006; N. Portier and G. Villard, members of the board of the CNRS-GDR Informatique Mathématique; É. Fleury, member of the board of the CNRS-GDR ASR (Architecture, Systèmes, Réseaux) and head of the CNRS-GDR ResCom (networking axis). É.
Fleury is also on the scientific board and the steering committee of IXXI (Complex Systems Institute), and a member of the board of the regional research cluster ISLE (Informatique, Signal, Logiciel embarqué). P. Lescanne was president of SPECIF (Société des Personnels Enseignants et Chercheurs en Informatique de France) between 2006 and 2008. F. Desprez is vice-head of the project-team committee, was a board member at the regional INRIA Research Center (2006-), and is a member of the INRIA Evaluation commission (2008-). A. Darte is a member of the INRIA Evaluation commission and scientific referent for the national collaboration between INRIA and STMicroelectronics. LIP members are regularly elected to the ENS Administration Council or Scientific Council (P. Lescanne, P. Vicat-Blanc Primet, Y. Robert, and S. Torres).

1.3 Collaborations

LIP members have a broad range of national and international collaborations, which we highlight here from several angles. Mostly at the individual level, we have publications with 41 foreign universities in: North America (19: USA and Canada), South America (2: Brazil, Mexico), Europe (12: Denmark, Germany, Italy, Portugal, Romania, Serbia, Switzerland, UK), Russia (1), Japan (5), and Australia (1). Especially through the Explora doc regional program, we have PhD co-supervisions with universities in Australia, Germany, Portugal, Denmark, Italy, and Serbia.

Our non-consolidated budget (see Section 1.7) is 8.6M€, with about 12.6% from the heading organizations (mainly CNRS and the Ministry through ENS) under the control of the direction, and 7.2% from INRIA to the four project-teams. Hence about 80% of the budget comes from specific project fundings. The main funding source is the National Research Agency (ANR), which represents 27% of the budget with 2.3M€. LIP members are involved (at various percentages) in the following ANR projects: Alpage (distributed platforms), Lego (efficient grid), Disc (HPC components), Numasis (HPC seismology), Solstice (parallel solvers), Stochagrid (stochastic performance models), Gwendia (grid workflows), SimGrid (simulations), Spades (petascale), Cloud@Home (cloud computing), Igtmd (high-speed transport), DSLAB (DSL Internet), Hipcal (infrastructure virtualization), Dmasc (heart signal analysis), Sycomore (models of computation), Mécamerge (structure emergence), Carpelle (virtual plant modeling), MoDyFiable (dynamic modularity), Galapagos (proof and geometry), CHoCo (Curry-Howard correspondence), Scalp (proof and cryptography), Complice (implicit complexity), Eva-Flo (arithmetic operator synthesis), Gecko (geometry and complexity), SubTile (tilings), MINT (network models), Dynanets (dynamic networks), GeneShape (gene mechanisms), LaRedA (Euclidean lattices), Tchater (FPGA signal processing), and SensLAB (very-large-scale sensor network platform).

In addition to their recurrent participation, our heading organizations, together with the region, fund 24% of our budget through the national and international projects listed below.

Local/Region/National projects: ARC INRIA Otaphe (scheduling), ARC INRIA Georep (MUMPS software), ARC INRIA Green-Net, ADT INRIA Aladdin (Grid 5000), ARC Coinc (Network Calculus), ACI Grid Explorer (large-scale experiments), ACI GeoCal (proof and geometry), ACI Sécurité (cryptography), ACI New interfaces of mathematics, ACI Grid (Grid 5000), Region Cluster HPC, Région Ragtime (Grid/medical data), Seiscope Consortium (MUMPS software), ATIP CNRS (global properties), ATIP CNRS (complex systems), competitiveness cluster System@tic (HPC networks), competitiveness cluster Minalogic (compilation), Nano2012 (JIT compilation and arithmetic), Nano2012 (program transformations for high-level synthesis), competitiveness cluster Minalogic (arithmetic), Region/STMicroelectronics (arithmetic), Minalogic-Sporaltec ESPAD (sportive performance study), CNRS PEPS (Euclidean lattices), Region Cluster EmSoc, ADT INRIA SensTOOLS (sensor networks), AFSSET TubExpo (exposure to tuberculosis study), and CNRS PEPS (signal processing).

International projects: Exploradoc Berkeley (out-of-core solvers), France-Berkeley Fund (parallel solvers), CNRS-JST Japan (Grid platforms), CNRS-JST Japan (parallel solvers), INRIA Explorer Japan, MIT-France Fund (multicores), CNRS Hawaii (scheduling and bioinformatics), INRIA Hawaii Associated Team (scheduling and bioinformatics), INRIA Standardization Actions (arithmetic), PAI Fast Australia (overlay networks), INRIA Brazil (Java on FPGA, dynamic compilation), INRIA Italy (proof and concurrency), ECOS Chili (models of computation), CNRS Serbia (logic and types), Region MIRA-Poland (computational logic), CNRS USA (compilation), PAI Romania (FPGA arithmetic), PHC INRIA (cryptography), CNRS Australia (lattices), Australian Research Council (lattice enumeration, cryptography), XtremeData University (FPGA computing), French-Israeli (Multicomp. parallel solvers), STIC AMSUD SAMBA (complex networks), and INRIA Explorer USA (parallel solvers).

Among these projects, exploratory funding instruments such as the Ministry ACI, CNRS PEPS, and INRIA ARC programs, as well as the various INRIA/industry national joint research programmes, allow us to quickly start new collaborations in emerging domains, and hence are key tools for our progress. At the international level, this is complemented by 21.8% of our budget coming from European projects.

European Community: NoE CoreGrid (distributed computing), NoE Euro-NF (networks), NoE Artist2 (embedded systems), Cost Action (energy efficiency), Cost (dynamic communication networks), Cost Once-CS (complex systems), STREP FP6 Euro-China GIN (Grid networks), FP7 AI (Autonomic Internet), SSA FP7 OGF-Europe (standardization in Grids), NoE Hipeac (embedded architectures and compilation), Eureka-ITEA Martes (embedded systems), NEST Morphex (morphogenesis, complex systems), Marie-Curie MetagenoGrids, Marie-Curie (algebraic complexity), EST Net. (mathematical logic applications), IP FP6 Aeolus (overlay theory and algorithms), IP FP7 EDGes (Grid systems), and IP FP6 Life Science Health MOSAR.

Among the above projects, several involve industrial partners, hence our direct industrial contracts, which represent 5.5% of the budget, are quite far from revealing our actual level of industrial collaboration. For example, the ANR grant on hardware signal processing is a collaboration with Alcatel-Lucent, the Martes project was a collaboration with Thales, and the ANR Solstice project includes collaborations with EADS and EDF around our parallel solver MUMPS. We may also emphasize the strong partnerships developed with Alcatel-Lucent and STMicroelectronics with the support of INRIA, through the competitiveness clusters Minalogic and System@tic and the INRIA - Bell-Labs common lab.

Associations and industry: AFM Association (Grid HPC), Orange-FT (network load), Intel USA (computer funding), Alcatel-Lucent (services/resources), ADR INRIA-Bell Labs (semantic networking), Samtech (MUMPS software), STMicroelectronics (compilation, arithmetic), and STMicroelectronics - Thompson - Silicomps (SoC simulators).

During the previous four-year plan, our operating budget was 3.45M€. The spectrum and volume of our funding therefore demonstrate our vitality. This allowed us to ensure the diversity of our types of production, between high-level publications and practical (expensive) developments.

1.4 Personnel

LIP has 133 members, representing an increase of 56% over the last four years (since Sept. 1, 2005). The permanent research staff consists of faculty members (17.5% of the lab) and researchers (16.5%). The total of ETP-chercheur (Équivalent Temps Plein) is now 33.5, versus 22.5 in 2005. The distribution by institution is: CNRS: 9.5; ENS: 15.5; INRIA: 15; UCBL (1): 8; temporary: 76.

[Table: per-team scientific staff (DR, CR, IR, PR, MCF positions by institution; permanent researchers, PhD students, engineers, and totals, with departures/arrivals) for Arénaire, Compsys, Graal, MC2, Plume, and Reso.]

The assisting permanent administrative and technical staff represents 8% of the total. Non-permanent members represent 58% of the laboratory: 31% PhD students, 11% temporary researchers (ATERs, postdocs, secondments, long-term visitors), and 16% temporary engineers. The table above gives detailed data concerning the scientific members since 2005, including a percentage for permanent research engineers, and taking into account known arrivals for Sept. 2009. The row before last emphasizes the turnover with the number of departures and arrivals during the period.

Permanent faculty members and researchers. We start in Sept. 2009 with 13 more permanent members than in 2005: 23 arrived and 10 left (mobility, promotions, and retirements) (2). There has been one DR promotion. Apart from the main changes in MC2, this large turnover is an excellent sign.
The number of CNRS researchers has slightly increased, with the largest turnover in Arénaire, Graal, and Plume. The number of INRIA researchers has increased with 6 new members across the four INRIA project-teams. The two hired ENS professors follow departures: É. Fleury creates a new team on dynamic networks, and G. Hanrot strengthens our orientation towards geometry of numbers and cryptographic applications. One Chaire CNRS/MCF ENS position was obtained for C. Riba in Plume, by anticipating a retirement, on logic and semantics. UCBL's investment is illustrated by: the Professor position of I. Guérin-Lassous, with a new theme on wireless and ad hoc networks; that of B. Tourancheau (after a temporary move to industry), who strengthens distributed algorithmic aspects; and three MCF positions in Arénaire and Reso. UCBL is a key support for improving in particular our teaching collaborations in Lyon. The successful LIP faculty-hiring policy has been to encourage outstanding applications and the balanced involvement of the home institutions in the teams. Among the 45 permanent researchers, 18 have their Habilitation à diriger des Recherches (HDR). Three HDRs have been defended since 2005. The fact that most PhDs are co-supervised by junior researchers stimulates the number of HDRs and fosters promotions.

Temporary research personnel. Taking advantage of the policies of our heading organizations and funding agencies (CNRS and INRIA postdocs and partial secondments, ANR, invited positions), LIP encourages temporary researcher hiring. This is a key point for research vitality, and for helping PhD graduates find positions. We had 23 postdocs and ATERs during the period, with 12 such positions at the beginning of 2009. Comparing that to 4 in Sept. 2005, this

(1) For the sake of simplicity, we also count here one Professor IUFM Lyon and one MCF IUT Roanne.
(2) i) With one DR promotion and a new permanent research engineer, this gives 15 new permanent scientific positions. ii) A. Fraboulet and T.
Risset stayed in Compsys until 2007 as LIP external collaborators; they do not appear in our (administrative) table.

reveals a growth tendency, hence a success that should continue. However, since this may also reflect a general increase in temporary positions (versus permanent ones), we have to be careful in our analysis. We also had 27 visiting junior researchers and professors for more than three months, especially on specific ENS and INRIA positions.

Doctoral students. During the four years, the number of doctoral students has been stable around 40, with 41 theses defended, compared to 26 during the previous plan. One key role of LIP is the training of the outstanding ENS students. According to the school's policy, most of them are encouraged to leave Lyon to follow doctoral programs in other cities, and only a small proportion of them prepare their thesis at LIP. Still, this has represented 29 students out of the last 80 PhDs prepared in the laboratory. The number of PhD students is slightly less than the number of permanent faculty members and researchers. On the one hand, the analysis has to take into account the fact that the growth of the number of faculty members and researchers mainly concerns juniors. On the other hand, 40 PhD students represent more than 2 PhD students per HDR, which reveals that young researchers are highly involved in student supervision. The success of our doctoral program is clearly demonstrated by the fact that, out of the last 80 defended theses (since 2000), over 60% of the graduates have permanent research or faculty positions: 18 have permanent research positions (CNRS, INRIA, Onera), 28 are faculty members in France, and 3 are faculty members abroad. Among other types of employment, 12 have engineering positions in companies such as Michelin, Google, or Intel. Regarding the most recent defenses, 15 PhDs hold postdoc or temporary faculty positions. Attracting good doctoral students is a general difficulty in our disciplines.
LIP members work actively to preserve the current supervision rate, especially by taking advantage of various sources of funding. Out of the 80 students of the last four years, 30 were ENS students with corresponding ENS/Ministry grants (Lyon or Paris), 28 (most often coming from other cities or countries) had Ministry grants (3 through INRIA), 19 had grants associated with contracts and industrial collaborations, including 5 Cifre and 3 BDI CNRS, 2 had grants from Europe, and 1 from Mexico. With these various types of funding and a selective ENS master, our doctoral program keeps a very good level, and LIP is very active in the organization of the studies (see Section 1.6). The increasing involvement of the Computer Science Department at the international level, according to the ENS policy, and our increasing involvement at Lyon's level, are tools for keeping our training at the highest level.

Administrative members. In 2005, our research was assisted by an administrative staff of 5 people, representing the equivalent of 4 full-time people. In 2009 we have worked with the equivalent of 5 full-time people, and we had 3 temporary people in the meantime. We have two ITA CNRS positions. However, one of the assistants has been on extended maternity leave since late 2006, and the other position has been vacant since the promotion of I. Pera (Jan. 2009). Hence, considering that LIP has 56% more members than in 2005, and comparing the 8.6M€ four-year plan budget to the 3.4M€ previous one, 2009 has been a very difficult year. This also shows how efficient and supportive the administrative team is. C. Iafrate (ENS) and S. Boyer (INRIA) share the role of administrative supervision. Beyond general tasks for the direction, C. Iafrate also supervises financial aspects, and a large part of the time of S. Boyer is dedicated to management tasks for INRIA Lyon that are unrelated to LIP's activities. The three assistants, C. Suter (INRIA), J.
Brajon (temporary ENS, until Aug. 2009), and M. Buillas (temporary INRIA, until Feb. 2010), are in charge of the six scientific teams. Our 2009 efforts, with the support of our heading organizations, have led to a permanent ENS assistant position starting in Oct. 2009 (L. Lécot) and a permanent CNRS assistant starting in Dec. We have also obtained a permanent INRIA position starting in Sept. for E. Bresle. Taking into account INRIA's tasks for projects outside LIP, by 2010 the improvement will come essentially from the replacement of temporary staff members by permanent ones. We hope that the high turnover of administrative people will stop, because the training of temporary assistants has induced a sustained extra load on the administrative staff, increasing its difficulties. These questions are a constant concern for the lab and the researchers. The policy of the lab has been not to hire assistants on contracts. We are not sure whether this policy can be kept, since the support provided to researchers deteriorated between 2005 and 2009 and the complexity of the assistance tasks increased.

Technical staff. LIP has 5 permanent research and study engineers (3 CNRS, 1 ENS, 1 INRIA). M. Imbert arrived as an INRIA research engineer (Service Expérimentation et Développement) to support specific research projects in Graal and Reso. Our policy encourages the engineers in their research and software development activities. This policy, which keeps the engineers close to the researchers, greatly helps with the quality and the efficiency of the services, and should be preserved in the future. These research activities correspond to the equivalent of 3 full-time people and have been taken into account in the scientific members table. As in 2005, the IT support team represents the equivalent of 2 full-time engineers for everyday computer and network administration. The work of the team, J.-C. Mignot (CNRS), D. Ponsard (CNRS), and S.
Torres (ENS), has been acknowledged as a CATI CNRS in 2007 and is detailed in Chapter 9. The skills and efficiency of the group provide continuous research support of particularly high quality. This means that most aspects of the infrastructure are kept up-to-date and secure (workstations and laptops; file, backup, and network services; cooling; web management; collaborative and scientific software; special-purpose servers; video-conferencing;

tool development; etc.). The IT team also plays a key role in the administration of the Grid 5000 platform, to which no permanent engineer is assigned. LIP has provided sustained help to the platform, which is an important research resource: obtaining a corresponding permanent position is one of our priorities. Our internal LIP organization works in tight collaboration with the ENS IT team, as well as with the ENS numerical computing pole (PSMN), and on various actions with the INRIA engineers located in Montbonnot.

Temporary engineers. During the last years, LIP has experienced a quick growth of the number of temporary engineers, who represent 16% of the scientific members (one engineer per 2.1 permanent faculty members or researchers). The equivalent of a four-year position has been provided by ENS. This help has been crucial for the administration of the Grid 5000 machines. A one-year position is provided by CNRS around the Décrypton project (Aug. 2009). Most of the other positions come from ANR and European projects, and reflect the rich activity of LIP in software development and transfer. Most teams are concerned, which shows that the combination of theoretical and practical aspects can be pursued in very different ways. This allows, for instance, the transfer of compilation activities in Compsys, and the transfer and dissemination of arithmetic operator libraries in Arénaire. In addition to supporting developments done by researchers, the engineer support here is crucial for transforming research prototypes into complete software. Larger software projects gather more engineers, typically 4 to 6 engineers per year: in MC2 for a complex-system modeling platform, in Graal for the DIET software, and in Reso for optimized protocols and transport layers. The success of these activities is illustrated by the creation of three corresponding startups.
The possibilities of temporary positions should not hide, however, the important need for long-term and permanent positions for most of our software production. Indeed, research software evolves with the progress of fundamental research and, in general, the necessary involvement of the researchers themselves in code production and maintenance remains high.

1.5 Governance and animation

LIP is organized in teams. The Laboratory Council (elected members, heads of the teams, and direction) meets roughly every two weeks. This ensures good communication and that the large majority of strategic points are discussed by the council. General lab meetings are held several times a year. Meetings between the heads of the teams and the direction are frequent and informal. The Commission des habilités deals with questions concerning the doctoral students and CS programs, and the Teaching council with those related to graduate CS studies. The teams run their own scientific seminars regularly, often with outside speakers who can also be of interest to members of other teams. LIP runs a monthly seminar with two outside speakers (a junior and a senior) on a given theme, and, jointly with the CS department, we organize a students' seminar every two weeks. F. Desprez, with G. Villard as vice-chair, chaired the laboratory from July 2006 to November 2008, when he resigned, faced with the increasing number of administrative tasks at both LIP's and INRIA's levels. Gilles Villard then took over the lab responsibility, with vice-chairs D. Hirschkoff and P. Vicat-Blanc Primet.

1.6 Teaching and training

Teaching. Through its partnerships with two teaching institutions, namely École Normale Supérieure de Lyon (ENSL) and Université Claude Bernard - Lyon 1 (UCBL), LIP is naturally involved in teaching activities.
Members of the lab have teaching as well as administrative duties at UCBL, serving notably in the Masters SIR (Systèmes d'Information et Réseaux), RTS (Réseaux, Télécommunications et Services), TI (Technologies de l'Information), and CCI (Compétences Complémentaires en Informatique), and in various computer science courses at the Licence and Master 1 levels. The ENS Département d'Informatique (DI) has very tight links with LIP. Administratively, it is run by members of the lab (Y. Robert was head of the DI during the period; É. Fleury will be the new head, starting in the coming academic year). In terms of pedagogical content, the local curriculum in computer science is defined through close interaction between the DI and researchers at LIP, notably regarding the Master in Computer Science hosted by ENS. In 2009, a dedicated committee, the Conseil des formations du LIP, was created to discuss these matters. LIP members are also involved in organizing the recruitment exams for admittance to ENS. Students who passed these competitive exams (élèves normaliens) traditionally represent an important part of the students who follow the local curriculum each year. In addition to these, more students can be recruited via an application process, and we also exploit partnerships with foreign universities (typically, Erasmus programmes). The vast majority of the students attending the Master at ENSL then enroll in a PhD programme, which is consistent with the proposed courses' clear orientation towards research, and with the strong participation of LIP members in the Master. LIP is also involved in the Doctoral School Informatique-Mathématiques, which gathers several doctoral programs in Lyon.

Continuing training. The members of LIP regularly participate in continuing training sessions (an average of 20 such actions per year). For administrative staff, this means training on administrative tools, preparation for civil service examinations, or language training, among others. For scientific staff, training covers various subjects, including programming languages and computer security, as well as administrative and human-resource management.

1.7 Financial resources

Consolidated budget. For the four-year plan (since we have incomplete data for 2009, we provide details for the 2005-2008 period only; taking 2009 into account would not modify the main part of our analysis), it has been of the order of 27M€, not taking into account ENS's administrative and infrastructure costs. About 70% of this budget has been salaries directly paid by our heading organizations (permanent and non-permanent members, PhD candidates). Hence, LIP members have managed an operating budget of 8.6M€ (see the table below), which represents almost 1/3 of the income.

Direction budget. The recurrent budget, directly managed by the lab's direction, has been 190k€/year on average. This budget came mainly from CNRS and ENS. Indeed, INRIA essentially funds the project-teams directly, and UCBL's participation is essentially via salaries and offices on the University's Gerland site. Together with the income from former contracts through the ENS Fonds de roulement and the recent ENS Fonds de recherche (under the direction's arbitration), this totals 1.1M€ over the four years. Between 2005 and 2007, every year, a third of the funding received by the lab was dispatched to the research teams. In the last two years, the teams' own budgets have risen, notably thanks to ANR and European grants. The budget of the direction was 18.1% of the operating budget of the lab in 2005, 14.6% in 2006, 10.5% in 2007, and 9.7% in 2008. As a consequence, the share of lab support dispatched to the teams has shrunk; this offers the opportunity to provide funding for specific scientific operations such as conference support and invitations. During the period, around 120k€ have been devoted to the support of Grid 5000 (in addition to other funding, as well as scientific efforts, provided by the teams involved in this initiative).

                                     2005       2006       2007       2008       Total      %
Direction budget                     246.5k€    265.5k€    309.7k€    264k€      1085.7k€   12.6%
INRIA lab and project-teams          110k€      206k€      183.7k€    119k€      618.7k€    7.2%
CNRS/ENS/INRIA risk & specific       83.2k€     77.2k€     89.1k€     554.3k€    803.8k€    9.3%
Sub-total heading organizations      439.7k€    548.7k€    582.5k€    937.3k€    2508.2k€   29.1%
Region                               88k€       374.5k€    747.9k€    66.2k€     1276.6k€   14.8%
ANR                                  384k€      433.5k€    951k€      559.3k€    2327.8k€   27%
Europe                               287k€      361k€      400k€      828.7k€    1876.7k€   21.8%
Sub-total contracts                  759k€      1169k€     2098k€     1454.2k€   5480.2k€   63.6%
Associations and industry            85k€       106k€      12k€       333.4k€    536.4k€    6.2%
Others/mixed                         72k€       0k€        16k€       8.7k€      96.7k€     1.1%
Total                                1355.7k€   1823.7k€   2708.5k€   2733.6k€   8621.5k€   100%

Budget for each team. Once the budget of the direction is set aside, the remaining operating budget is 7.5M€. It is managed by the teams, with CNRS, ENS or INRIA accounting, essentially via the administrative staff of the laboratory. INRIA has provided about 0.6M€ directly to the four joint project-teams (80% of the scientific members). The rest of the income, around 6.9M€, that is 80% of the operating budget, comes from specific calls for proposals and funding, from both our heading organizations and contracts (see Section 1.3). Altogether, the participation of our heading organizations is close to 30% of the operating budget. This participation plays a key role in the emergence of new topics. It partly corresponds to various types of national and international small projects that remain important tools for our risk and long-term research. The remaining 70% of external funding, with a large ANR proportion, usually applies to bigger collaborative projects, including industrial collaborations (which are not exclusively based on purely industrial contracts). The numbers show that LIP members are very active in project-based research and industrial collaborations.

The volume of contracts has increased the possibilities for hiring temporary researchers (mainly postdocs) and engineers, which is consistent with what we discussed in Section 1.4. The other main source of expenses is travel, either for conference participation, for organizing dedicated workshops related to the collaborative projects, or for visiting foreign institutions within the scope of our international collaborations.

1.8 Infrastructure

Offices. The increasing number of LIP members has led us to find new offices outside the ENS main building. In addition to the offices in the ENS main building used by LIP since its creation (1250m²), we now have a total of about 500m² at our disposal,

distributed over two new laboratory sites. Our investment in IXXI, along with ENS's support, has led us, in 2006, to use rented IXXI offices close to ENS (200m²). Thanks to UCBL, we have also used university offices on the Gerland site since 2008 (300m², including a surface extension in 2009). For the Grid 5000 node, we use the infrastructure of a dedicated room in a separate ENS building. Being split among three different sites is not satisfactory, especially for the MC2 team, which is currently located on a site outside the ENS campus, and even more so for the Graal team, which is currently split over two sites. Hence we have to be very careful with both the administrative and scientific running of the laboratory, until it is possible to gather all the teams on a single site.

Technical environment. The IT support team has been recognized as a CATI (Centre Automatisé du Traitement de l'Information) by CNRS in 2007. Its main objectives are: building and managing an IT environment for the laboratory's activity; providing support and expertise to research teams for their projects; and building and managing experimental platforms. The activities and the maintained infrastructures are detailed in Chapter 9.

1.9 Self assessment

Key elements of our strategy have been to target six strong topics and teams, foster the mixing of theoretical and practical research, train at the highest level, and balance the involvement of the four heading organizations. Our success is demonstrated by the visibility, worldwide unique expertise, production, collaborations, and innovative results and future directions on the six points of focus of the four-year plan. The main criteria we use for analyzing our production are: publications, with an important role for the highest-quality conference proceedings; software and hardware developments; national and international collaborations, and community animation; and industrial transfer.
The transverse aspects of our strategy have brought a strong and fruitful synergy across the teams in theoretical computer science's main trends: algorithms, complexity, logics, and semantics. As a consequence, a Coq seminar will start in the Fall. These transverse aspects have also led to major progress and interactions, including software development and industrial transfer, on grid computing and networking, embedded computing systems, and inter-disciplinary complex systems. Our weakening in complexity and discrete mathematics, and the difficulties in hiring in architecture and hardware/software synthesis, may need special attention. LIP is very successful at training students, and at attracting and integrating scientists with different backgrounds. However, the productive combination of theoretical, mathematical, and practical aspects, and the growth of the lab, require efforts to preserve unified projects. Our quick growth also demands an efficient training and research strategy for increasing the number of doctoral students and preserving our supervision rate, in an international context that may not be favorable, especially in computer science. Some of our objectives are essentially related to the technological progress that we must anticipate, and to our management of pioneering experimental facilities. The Lyon networking and computing node of Grid 5000, or our software transfer with STMicroelectronics, demonstrate the high level of our research here in various directions. Together with long-term software development and maintenance, this requires sustained infrastructure investments and engineering support. Our activity (such as the varied nature and volume of our funding, or the increased number of temporary staff), within a deeply modified academic system, requires us to keep a very close watch on the administrative load of the researchers. LIP researchers are so far dealing successfully with a funding policy based on short-term projects.
Working tightly with four specific heading organizations clearly represents more opportunities than constraints. Nevertheless, the cost in terms of time spent on administrative duties is very high. All permanent LIP researchers agree today that this workload is becoming too heavy, and that they have reached the limit of what is bearable. At a time of major changes in French research organization and funding, with impacts at both the local and national levels, we believe in the importance of fundamental and long-term research, and we are committed to preserving it.

New organization chart of the laboratory


2. LIP general project

2.1 Context

Together with the scientific, political and structural evolutions of our environment, LIP's growth is an important element we have considered in drawing up our project for the next years. In 2009, LIP has over fifty percent more members than in 2005, and more than twice as many members as some years earlier. Computing (computers, communications, applications) is making major and crucial contributions to society's progress. At the same time, the infrastructure making up the digital world continues to grow rapidly and is starting to consume significant energy resources. Computing and digital resources, as well as their use, are our central objects of study, from both an abstract and a practical point of view. Our main research directions are organized around two general and interwoven key challenges:
- addressing the complexity of future computational and communication architectures, and the way their design and use efficiently match application constraints and needs;
- inventing mathematical computer-science models and methods that contribute, beyond their fundamental aspects, to algorithmic, software/hardware, and technological advances, as well as to the progress of other scientific disciplines.
The machine, or computer, is seen either as an abstract entity or as a tangible object. The real computer is a set of processing, networking and memory resources (e.g., chips, multi-cores, embedded processors, grids, clouds, networking and computing infrastructures, pervasive systems) whose combinations lead to tremendous complexity. The machine is both a research target and a computational means for research. Facing its complexity, we emphasize the ways resources can be used in an optimized, trustworthy, and safe manner. Topics regarding the study of the computer as an abstract entity, such as mathematical computer science and algorithmics, are transversal to the laboratory.
They deal with the invention of new paradigms, the modeling of our problems, and how to handle the change of scale of the problems we study. Moreover, a relatively recent and crucial change is that disciplines outside computer science need and adopt computer-science and algorithmic thinking. Anticipating the repercussions of new computational concepts in other sciences, we participate in and support this revolution. Specifically in the context of Lyon and its region, strategic interactions with other disciplines take place through applications in numerical and computational sciences, complex-systems modeling, biosciences, and the communication and semiconductor industries. LIP's tradition of creative interactions between fundamental research and innovative software and hardware production is a solid foundation for our plan. In this section, we analyze the evolutions of our context, from LIP's environment to the international one. From there, we give an overview of LIP's scientific project in Section 2.2 (more detailed projects will be given for each research team and the IT support team from Chapter 3 to Chapter 9), and different aspects of conducting this project are addressed in Section 2.3.

LIP's internal evolutions

Since 2005, our number of permanent faculty members and researchers from CNRS, INRIA, ENS-Lyon, and UCBL, now 45, has increased by more than 45%. Research activities are strengthened by the equivalent of 3 full-time permanent research engineers, by 15 temporary research members, and by about as many PhD students as permanent members. For the 48 permanent scientific members, we have the following distribution¹:

[Table: permanent members per team (Arénaire, Compsys (+2), Graal, MC2, Plume, Reso), Oct. 2005 vs Sept. 2009]

During the last four years, grid projects such as Grid 5000 have brought about many synergies between the Graal and Reso teams on the theme of high-performance distributed platforms and networks.
Including all scientific members, this theme (internet and networks, scheduling, distributed resources and services, high-performance computing) represents more than half of the laboratory, and is highly productive. The scientific aspects related to the lab's growth have been considered regularly, in particular at LIP's level, together with administrative questions. New points of focus have been identified that will lead us to modify LIP's organization with new teams. Following the growth of the Graal research team,

1. In addition to the three permanent researchers, Tanguy Risset and Antoine Fraboulet, both at INSA-Lyon, were part of Compsys, as an INRIA project-team, until 2007. This is indicated as (+2) in the table.

two new teams will be started in 2010: one on the design of algorithms and scheduling strategies that optimize the resource usage of scientific computing applications, and one on models and algorithms for service-oriented computing platforms. Research related to networking and network science will follow two axes. On one hand, the Reso team will deal with traffic-awareness, network sharing, and energy efficiency in high-speed network infrastructures. On the other hand, the D-Net team will be launched to address problems related to the science of dynamic graphs and to match this work to real-world complex networks. Research related to embedded systems, and more precisely the synthesis, quality, and optimization of programs (compilation and computer arithmetic, software/hardware), has been highly developed, especially within the competitiveness cluster Minalogic involving the Arénaire and Compsys teams. The convergence on code synthesis will certainly be accentuated. Compsys has a small but quite stable number of permanent members, thanks to the hiring of one new INRIA researcher. Arénaire has grown with researchers partly oriented towards mathematical computing. More generally, the LIP groups interacting the most with mathematics have evolved towards logic/semantics, algebraic, and applied mathematics aspects (numerical analysis, statistics, etc.). After many departures, the MC2 team, dealing with models of computation and complexity, shows a weakening that may have an impact on our discrete-mathematics (automata, graph theory) expertise. Long-term support is necessary to foster MC2's fundamental research. After a strong involvement in complex systems following the creation of the Complex System Institute (IXXI) in 2002, the team will re-concentrate its efforts on computational complexity and combinatorics, but also wishes to open up to new algorithmic themes like algorithms for discrete-event systems.
After a clear evolution, the Plume team has built much stronger expertise in logic and the formal analysis of computer programs. We have a number of strengths in algorithmics, algorithmic complexity, and fundamental and mathematical computer science, which are transversal domains. In particular, the quality of computation and the optimized use of resources are main concerns. Our progress report presents our research and production in various branches of fundamental computer science. The report also shows our ability to mix these aspects with more practical concerns, and the key role of our software/hardware design, developments, and transfer activities. The Complex System Institute (IXXI) is now an efficient and large structure independent from the laboratory. LIP's interface with the institute, whose weakening has followed MC2's, will be strengthened with the creation of the D-Net team on dynamic networks. The main interactions with disciplines other than mathematics are through various computational applications and modeling (including our IXXI collaborations): among these disciplines, bioscience is an emerging theme with projects in several teams. A notable consequence of our growth is that, since 2008, we have had to work at three distinct locations on the Gerland site. To preserve the cohesion of the laboratory, we will encourage transverse research themes and pay special attention to the lab's scientific and administrative life.

Main scientific evolutions

In addition to the political aspects that we will see in the next section, the main ingredients for working out our project have been the teams' evolution and the key transverse themes drawn from this thematic progress. Among the main evolutions, we may point out:

Logical foundations of programming primitives. The Plume team has increased its expertise in mathematical and logical tools for the study of programs, notably with linear logic, game semantics and realizability.
This will allow us to work on the foundations of a larger set of programming constructs and to address new questions such as controlling the complexity of programs, as part of the general question of analyzing resource usage.

Algorithmics of discrete event systems. The MC2 team plans to develop this research topic following its work on Network Calculus, a deterministic queuing theory for worst-case performance evaluation. Discrete event systems like Petri nets can be viewed as dynamical systems or as models of computation, and their analysis may require tools from various fields like linear algebra, (min,+) tropical algebra, graph algorithmics or computational geometry. Some support can come from application fields like the analysis of communication networks, and this theme may increase the interplay with LIP's other teams.

High-performance geometry of numbers and applications. The successful match of arithmetic expertise and Euclidean lattice expertise in recent years encourages Arénaire researchers to tackle new applications at this new frontier, such as lattice-based cryptography, MIMO decoding for wireless communications, new methods for the design of numerical filters, and many others.

Ensuring performance and accuracy of larger computing cores. After successfully embracing elementary and compound functions of one variable, the Arénaire team wants to extend the notion of arithmetic core to even larger cores such as functions of several variables, signal processing filters, linear algebra or pairing-based cryptography cores. The

challenges are a better understanding of the bit-level complexity of these problems, the development of new specific mathematical tools, and a focus on automatic code generation and validation, with more interactions with the compilation community.

Program optimizations/transformations for embedded computing: just-in-time compilation & high-level synthesis. The Rhône-Alpes context, with the semiconductor industry in the Grenoble area and the competitiveness pole Minalogic, is an opportunity to push research activities on compilation, architecture, and high-level synthesis further, in close collaboration with industrial partners. For programs, the diversity of architectures and the need for portability increase the interest in just-in-time compilation, starting from a low-level platform-independent code representation. For circuits, the development of more mature industrial high-level synthesis tools creates a need for source-to-source program transformations (to be applied at the front end of these tools). These two aspects are explored by Compsys.

Foundation of a science of dynamic networks. The main goal of the D-Net team is to develop knowledge in the field of networking science, so as to provide a better understanding of dynamic graphs from both analytical and structural points of view. In order to develop tools of practical relevance in real-world settings, we will base our methodological studies on real data sets obtained during large-scale in situ experiments. The conjunction of real-world data with the development of the theoretical tools proposed in D-Net may impact various scientific fields, ranging from epidemiology to computer science, economics, and sociology.

The Internet's architecture redesign aims at solving network-manageability, security and Quality of Service issues.
High and predictable performance for emerging high-end cloud and computing applications indeed remains hard to obtain, due to the unsuitability of the bandwidth-sharing paradigms and communication protocols of the traditional Internet protocol architecture. Reso will emphasize fundamental issues of network science such as bandwidth and network-resource sharing paradigms, network-resource virtualization and scheduling, energy efficiency, traffic metrology and modeling, flow-aware quality of service, and transport protocols.

Static scheduling of dynamic systems described with probabilistic models. In the past, most of the theoretical research in Graal dealt with the design of static algorithms for static systems, i.e., for systems whose characteristics were assumed not to evolve during the lifespan of the studied problem (namely, the execution of an application). During the reporting period, the Graal team focused more and more on solutions for dynamic systems. The Graal team initiated its first work dealing with online problems. It has also begun to work on the design of static solutions for dynamic systems that are described through probabilistic models, and will increase its effort in this domain.

Algorithms and software architectures for service-oriented platforms. The new Avalon team, started by several researchers from the Graal research team, will focus on the algorithmics and software-design research issues of software-as-a-service (SaaS) and service-oriented computing (SOC) environments over grids, clouds, and P2P systems. From programming models down to middleware design and resource- and data-management heuristics, the efficient design of these important components of the future Internet requires fundamental research.

Complex systems and biosciences. Biosciences and bioinformatics are among the main research subjects of the Lyon area.
LIP invests resources in this direction: as a test application of computing on large-scale platforms (Graal and the future Avalon team); and, in relation with the Complex System Institute, on modeling (e.g., the spreading of pathogens, gene-regulatory networks) using fundamental computer-science tools such as dynamic networks (D-Net), automata (MC2), or game theory (Plume).

The teams' evolution emphasizes the duality of computing in the laboratory: the practical challenges related to future infrastructures, and their applications, rely on fundamental and mathematical computer-science approaches. This duality will provide our two main transverse areas. On practical aspects, our common concern is inventing new tools for resource-usage optimization, and improving the security and quality of computing. Beyond the lab, this common concern is a strong basis for enlarging our collaborations in computational sciences. On fundamental aspects, original and sophisticated models and methods will produce frameworks in which concepts can be stated, as well as new computational approaches. Fundamental CS research is also essential from a multi-disciplinary point of view.

Local, national, and international evolutions

Lyon evolutions. The French reform is illustrated by the creation of the PRES Université de Lyon and the development of Lyon Cité Campus (Campus plan), with scientific orientations focused on health science and modeling complexity. LIP finds in this clear global dynamic of the city a motivation for gaining a foothold in local collaborations. In particular,

we may refer to the Blaise Pascal Center on numerical science, and to the Numerical Science & Modeling Federation project. The increasing involvement of UCBL with LIP is illustrated by the fact that one third of our faculty members² occupy Lyon 1 positions. The reform is also illustrated by the application of the LRU law (Loi relative aux libertés et responsabilités des universités) at ENS and at UCBL. Together with the new ENS school, this may modify the respective roles (local versus national) of our four heading organizations regarding, e.g., logistics and scientific policies. Since 2008, we belong to a new and enlarged Doctoral School, and since 2009 to the enlarged UCBL Faculté des Sciences et Technologies.

Regional and national evolutions. CNRS and INRIA play key roles at both the regional and national levels. During the last four years, they have provided about 1.6M€³ either directly to the laboratory, to the teams, or through specific actions such as postdocs, invited positions, and international and risk projects. During the same period, the corresponding global operating budget has been 8.6M€. The start of the CNRS INS2I institute (national) and collegiums (regional) is expected, in particular, to increase the recognition of computer science and the cooperation with other disciplines at the regional level. Along with long-term research, our positioning with respect to CNRS's strategic plan 2020 (and the CNRS-État contract) especially concerns: the complexity of computing and communicating infrastructures, with reliability, security and energy challenges; and the interaction of computer science and mathematics for novel methods, in relation with the INSMI institute. The national CNRS GDR structures, in particular with respect to the animation of scientific communities, are major instruments for our fundamental research and its visibility.
We are highly involved, for example, in the GDR Informatique Mathématique and the GDR Architecture, Système et Réseau, and in the recent GDR Calcul. About 80% of LIP scientific members belong to one of the four joint INRIA project-teams, and are highly involved in the life of the Grenoble - Rhône-Alpes research center. LIP is positioned with respect to the current INRIA strategic plan especially along three priorities: reliability of computing systems; computation and the internet of the future; and software synthesis and compilation for embedded systems. We are positioned with respect to INRIA's regional policy on mastering dynamic and heterogeneous resources. We may emphasize INRIA's support for our participation in the regional competitiveness cluster Minalogic, and for our industrial collaborations with STMicroelectronics and Alcatel-Lucent through national contracts. The re-organization of STMicroelectronics' research objectives, in collaboration with CEA and INRIA, towards the development of a multi-core platform (Platform2012) may also offer new collaboration opportunities. INRIA's increasing strength in Lyon also defines priorities in biosciences that could represent new opportunities for us. Numerical and computational sciences are important priorities identified at the regional level by our heading organizations. Our upstream research on the design and use of computing and communicating infrastructures, such as within the Grid 5000 project, creates opportunities for transferring our methods to other domains. Through projects on grid computing or embedded systems, and our participation in, e.g., the Fédération lyonnaise de calcul haute performance and the Rhône-Alpes regional cluster Informatique, signal, logiciels embarqués, we also find various opportunities for collaborating with other disciplines (life-science computing, signal processing, high-performance numerical solvers, etc.).
During the last four years, the global CNRS, ENS, INRIA, and Lyon 1 funding has represented 29% of our budget (recurring funding determined by the four-year plan has been about 8.5%). The ratio was over 36% during the previous plan. The corresponding total budgets are 8.6M€ and 3.4M€; hence the major change in the proportion reflects a quick increase of external funding rather than reduced organization funding. Among the former, funding from the Rhône-Alpes region represents 15% of the total budget, and funding from the national research agency (ANR) represents 27%. Hence ANR is now an important actor whose programming introduces a new level of scientific orientation. It is an important tool for funding the theoretical sides of our research, especially through the white (non-thematic) programs. Our industrial contracts represent 5.5% of the total budget. This proportion does not accurately reflect our industrial collaborations. Indeed, beyond the fact that budget measurements should not be the only indicators, we point out that regional, ANR, or European projects may involve industrial partners, or may be related to industrial contracts.

International evolutions. ANR, with somewhat simpler administrative procedures than European ones, has probably reduced the level of our efforts towards European funding. However, our European participation remains at a high level. This funding represents about 22% of the total budget, i.e., 1.9M€. The main projects are, for instance, on grid and internet infrastructures, complex-systems simulation, or life science and health, together with Marie Curie actions (training networks and fellowships), now within FP7 Cooperation and People. The European positioning of the new ENS school may enlarge our sphere of collaborations. The CNRS and INRIA Offices for International Relations (DREI and DRI), and ENS, increasingly provide key support and a framework for innovative international risk projects.
This includes emerging fine-grained collaborations (Italy, Serbia, USA, Australia, Brazil, etc.), standardization actions (IEEE floating-point and interval arithmetic standards), international associated teams (metagenomics on grids, LIP-U. Hawaii), and co-supervision agreements for doctoral students.

2. By faculty member we mean people who both teach and do research, as opposed to full-time researchers.
3. All budgets here are non-consolidated.

2.2 LIP general project

Strengths and opportunities

LIP produces high-quality research results. This is attested by our publication record, our involvement in communities, our national and international collaborations (academic and industrial), our funding sources, and the distribution of our software. The researchers we attract and the strengths of LIP's teams allow us to regularly reconsider our research directions, which often leads to important emerging themes. The combination of our four heading organizations is a key added value for the laboratory. With the complementary support and policies of our organizations, and with their respective strengths, we are in a privileged and attractive situation (research resources, student level, etc.). We are also better prepared for further potential changes, for example if a French computer-science alliance between our organizations is created. Our scientific project is built on another key combination: our expertise in both fundamental and practical computer science. This allows us to take advantage of a number of opportunities. Among those, national and international policies increasingly recognize computer science as a science in its own right, albeit a relatively young one. We are positioned to address the crucial need for new computer-science models, methods, and algorithms. The city, regional, and national multidisciplinary policies on computing and on mastering resources represent a unique opportunity for our practical computer-science approach. In addition, in the progress report, we have demonstrated our strength in handling various national and local industrial collaborations.

2.2 Scientific project

As already mentioned, computing (computers, communications, applications) is making major and crucial contributions to society's progress. The technological impact and the ubiquity of resources are usually perceived by society through embedded and pervasive computer systems and through physical and virtual infrastructures.
However, in addition to technological progress, a relatively recent and crucial change is that disciplines outside computer science need and adopt computer-science and algorithmic thinking. A main challenge is to participate in and support both aspects of this revolution. Our positioning is to study future infrastructures and theoretical frameworks together, specifically to invent computational concepts while anticipating their repercussions in other sciences. The machine, either real (embedded, distributed, pervasive) or abstract, is both a research target and a computational means for our research. Mastering its complexity, especially the optimized use of resources, requires long-term efforts, and its quick evolution forces us to regularly adapt and adjust our points of focus. Computing and digital resources, as well as their use, are our objects of study. Facing the increasing and quickly changing complexity of machine paradigms, we put the accent on how computational and networking resources can be used in an optimized, trustworthy, and safe way. Mathematical computer science and algorithmics are topics transversal to the laboratory, in particular for proposing new theoretical frameworks. On the one hand, technological progress increases the complexity of machines while improving their computational and networking capabilities; this induces the need for new studies to acquire the knowledge required to harness the power of these new machines. On the other hand, our research often takes advantage of the new computational power to improve knowledge, namely by allowing more exhaustive simulations and experiments; in return, the technological choices for future infrastructures may be influenced. The abstract computer (or any computer that could ever be built) especially shows that the computer is not only a tool, but a fundamental object of research in its own right.
Studying both real and abstract machines, in most groups of the lab, is one of our main and solid strengths. This allows us to build and preserve a bridge between theoretical and practical computer science, and is a key source of invention. About the computer and beyond the computer, the traditional collaboration between computer science and mathematics leads to innovative models and methods that benefit both disciplines. At this junction of mathematics and computer science, we also address computational-science challenges. As the scope and importance of the problems posed to computers grow, it is essential to design new approaches, algorithmic tools, and software solutions to tackle them. Computer-science and algorithmic thinking is now widely applied to the modeling of various complex phenomena, which represents another important axis of our collaborations with other sciences.
Two main transverse areas. LIP is a key place for taking advantage of various scientific competences, approaches, and strategies. LIP has a successful tradition of a balanced and efficient mix of theoretical and practical aspects, as well as of long-term and project-based research headed by multiple organizations and scientific governances. To take into account our scientific evolutions and our recent growth, a first element of our project is to re-organize the laboratory into 8 teams and project-teams. In addition to sustained support for the teams' points of focus, the policy will be to foster two main transverse issues, with two sub-issues each, that cross-fertilize fundamental, applied, and technological aspects. These issues, deduced from our analysis and in line with the scientific policies of our heading organizations, represent the added value of the laboratory level:

I. Addressing the challenges of future computational and communication architectures
I.a. Automation and specification for quality, optimization, and reliability of computation and communication
I.b. Methods, algorithms, and software/hardware architectures for future infrastructures
II. Mathematical computer science models, methods, and algorithms
II.a. Mathematical methods and algorithms
II.b. Models, logic, and complexity for programs and systems

The two issues are seen as shared challenges and directions within LIP, independently of the division of the laboratory into teams. They are presented below from a general point of view, and in more detail team by team from Chapter 3 to Chapter 9. Fundamental algorithmics acts as a binder between the two issues, in particular through parallel algorithmics, which is quite ubiquitous.
New project-teams. All teams show major scientific evolutions. Nevertheless, four teams continue for the next plan: Arénaire on Computer arithmetic; Compsys on Compilation and embedded computing systems; MC2 on Models of computation and complexity; Plume on Programs and proofs. The Graal team leads to two new projects covering two complementary aspects of parallelism: Roma on Resource Optimization: Models, Algorithms, and Scheduling; and Avalon on Algorithms and Software Architectures for Service-Oriented Platforms. The Reso team, on Optimized protocols and software for high-performance networks, is refocused on the future Internet, and, after a two-year incubation within Reso, the D-Net team on Dynamic networks is created.
Combining practical and theoretical challenges. There is a strong interdependency between issue I, the quick technological development, and the constraints and complexity of infrastructures. We build on complementary research on hardware and software synthesis and compilation (Arénaire, Compsys), and on distributed architectures and networks (Avalon, D-Net, Reso, Roma).
The computer and its components are changing; via modeling and abstract approaches, we design new tools to anticipate these changes, to study their impact and design new approaches that cope with them, and to widen our target architecture spectrum. Further directions in formal analysis (Compsys, Plume), the minimization of energy consumption (Compsys, Roma, Reso), and safety and security questions (Arénaire, Plume) will be encouraged. The second issue, with II.a, about the study of purely mathematical objects, concerns all the teams. Algebra, geometry of numbers, logic, graph theory, combinatorial approaches, optimization, statistics, and numerical analysis are key mathematical themes on which we strongly rely, and which may benefit from our algorithmic progress. Issues that come directly from computer science's own mathematical concepts, and that mostly relate to abstract machines, expressiveness, modeling, and programs, constitute issue II.b, where for instance languages (Plume) or problems (MC2) are mathematical objects, where abstract parallel models are designed (Roma), and where we work on expressing numerical quality (Arénaire). The two issues are interwoven through most teams. Long-term fundamental progress guarantees our strength in inventing and anticipating computer evolution (e.g., distributed algorithmics, scheduling, optimized and reliable resource usage, quantum computing). We conceive both theoretical and practical machine models. Our fundamental studies on mathematical problems, for example to provide optimized solution algorithms and implementations, identify needs (e.g., new scheduling models and strategies, new types of arithmetic operators), and provide test applications for our practical developments (e.g., linear algebra, cryptography, Euclidean lattices, signal processing). LIP has long been at the forefront of this effort.
Conversely, empirical progress, and the invention of theoretical models, may emerge from the abstraction of practical phenomena (e.g., metrology and models for the future Internet, spread of diseases) or of upstream objects (e.g., abstract formalization of recent and challenging programming constructs, modeling of multi-core architectures, mathematical frameworks in code optimization). Test applications are also used to validate (fundamental) hypotheses or proposed solutions. Our research, especially regarding issue I, strongly relies on experimental computing and communication platforms, including grids, sensor platforms, and FPGA platforms and boards. We plan to encourage risk-taking initiatives and equipment in this direction, jointly with the development of software platforms (HPC, SOC, networking, synthesis tools). The local and national experimental equipment in which we are involved is the crucial research facet of high-performance (production) equipment.
Cross-fertilization with other disciplines. Computational and numerical sciences are major priorities of our heading organizations. From hardware signal processing with Alcatel-Lucent and STMicroelectronics, and high-level synthesis, to the AFM-CNRS-IBM Décrypthon project and intercontinental service computing, our main collaborations with disciplines other than mathematics fall into this challenge. We mainly exchange on new algorithms, program optimizations, middleware, and large-scale experimentation. The Lyon and Rhône-Alpes context is very favorable for strengthening these aspects within the theme:

Cross-fertilization with other disciplines, and application domains
a. High performance for computational sciences
b. Computer-science models, complex systems, biosciences

where we especially find our participation in the Numerical Science & Modeling Federation project, the renewal of our collaboration with the IXXI institute through applications of dynamic graphs, and biosciences applications.
From fundamental research to startup companies. The fact that LIP is well known for its high-level fundamental research should not hide the importance and quality of our software developments. In 2009, three startups were initiated in three different research teams: Graal Systems spun off from Graal, LinKTiss from Reso, and Cosmo from MC2. Researchers and engineers from each team will be involved in these creations, and our theoretical results as well as some of our software prototypes will be transferred.

Addressing the challenges of future computational and communication architectures

During the previous plan, synergies were mainly created between Graal and Reso on the one hand, and between Arénaire and Compsys on the other hand. Our project is to enlarge collaborations and take advantage of complementarities between all lab groups for which the machine plays a key role. A common concern is the optimization of resource usage in computing. The teams of the lab address this issue at different levels with various approaches. Although the objects, methods, and goals of these teams widely differ, some convergences can be observed. At this level again, two transverse and interwoven sub-issues may be identified: the first is more directly related to the optimized solving of non-CS problems; the second is dedicated to pure CS problems.
Automation and specification for quality, optimization, and reliability of computation and communication. Avalon, Arénaire, and Roma specifically consider application domains.
These application domains are mostly at the interface with mathematical/scientific computing (e.g., linear algebra, signal processing, data-intensive applications). The challenges are then to design and synthesize algorithms and codes, and to conceive new software tools that automatically apply our (human) expertise. In addition to resource usage, the issues of HPC implementations, of accuracy/quality, and of reliability are increasingly important for all. We also target automated code validation, and explore machine-checkable proofs. Plume provides the necessary interface with semantics, specifically through proof assistants. For example, linear algebra kernels, sparse direct solvers, and lattice-basis reduction libraries (specifically in Arénaire and Roma) lead to common needs to fully harness the computational power of multi-cores. LIP is the place where a common culture can be built. Security questions are currently less considered; the lab should make progress on them (e.g., cryptography, security issues in internet redesign).
Methods, algorithms, and software/hardware architectures for future infrastructures. At the heart of CS questions, two major points of convergence are computer/networking/sensor platforms and software/synthesis tools. All teams somehow target common complex infrastructures and developments, for which the lab policy will be to provide additional human and material support. The issue of power consumption, once reserved to embedded systems (Compsys, D-Net), is now shared by all. Reso, Roma, and Avalon target classical objectives such as platform utilization, execution time, or traffic load, but also the crucial objective of energy consumption. Regarding the statistical characterization of traffic, research carried out in Reso ranges from technological (Metroflux) to methodological and theoretical issues.
The underlying objective is to make "traffic awareness" a full constituent of future networks, allowing the treatment of flows to be differentiated according to their generating application. Arénaire works at the microscopic level, studying how to improve basic computing units (operations and elementary functions) in software or in hardware. Compsys works at the compilation level, studying how to make the best use of the available execution units and memory units of a processor. D-Net works at the hardware/software level, studying cross-layer approaches to provide an optimized mapping of a software protocol onto a hardware device, and studying how to optimize the lifetime of embedded sensors through activity scheduling. Avalon works at the middleware level, studying how Service-Oriented Computing platforms can scale to the Internet level. With the widespread use of multicores, parallelism and scheduling problems are now ubiquitous. This should lead to stronger team collaborations around understanding how the characteristics of new architectures impact classical CS problems and our previous works. As very-large-scale platforms and networks may be failure-prone, reliability is a challenge shared by Avalon, D-Net, Roma, and Reso. The teams target reliable algorithms or protocol implementations that are able to cope with the volatility and dynamicity of platforms and equipment. General points of convergence, specifically for Avalon and Reso, are virtualization and quality of service.

Mathematical computer science models, methods, and algorithms

We have mentioned that building bridges between theoretical and practical computer science is an important source of invention. We have also mentioned how effective the collaboration between computer science and mathematics is for producing innovative models and methods. Conceiving and applying abstract/theoretical tools is a concern common to LIP's teams. In the first sub-issue, we mostly apply our expertise in various mathematical domains to complexity, modeling, and algorithmic progress. The second sub-issue deals with CS progress in theoretical algorithmics, new abstract machines and models, logics, semantics, programming models, and graph theory.
Mathematical methods and algorithms. This issue concerns all teams, and plays a key role in LIP's cohesion as there are many scientific interactions on this subject. In particular, we strongly rely on category theory, logics, graph theory, combinatorial approaches, algebra, number theory (in particular, geometry of numbers), optimization, statistics, and numerical analysis. This expertise is essential for our technological, algorithmic, and software progress. We may illustrate this with some key examples for the future. Tools from more traditional mathematical theories are used to study computer-science problems. For instance, in Arénaire and MC2, work in algebraic and algorithmic complexity naturally involves algebra, in particular polynomial algebra and number theory. The latter field is used by Arénaire to model and solve various practical problems of efficiency or reliability of computations. Arénaire works in geometry of numbers and, for instance, investigates challenging applications of Euclidean lattices in cryptography and computer arithmetic.
The combination of its expertise in arithmetic and hardware implementations with a deep knowledge of algebraic-curve-based cryptology allows Arénaire to take part in the design of optimized hardware implementations for pairing-based cryptography. A characteristic of Compsys is its focus on combinatorial optimization problems (e.g., linear programming methods, polyhedral optimizations) coming from its application field, the automation of code optimizations. In this context, mathematical and operations-research tools are both used and developed. In HPC linear algebra and lattice reduction, as well as in the design process of function evaluation libraries, numerical analysis plays a key role in conceiving more efficient automatic tools and solutions while improving (or preserving) numerical quality (Arénaire, Roma). Probability theory, and especially the theory of stochastic processes, is used in MC2 to study the behavior of probabilistic cellular automata. Statistical signal processing is present in Reso's activities from at least two perspectives: application-oriented classification of flows, and the study of scale-invariant properties in traffic traces. In the latter case, we aim at providing accurate characterization and estimation of the diverse forms of scaling laws present in traffic: long-range dependence, local Hölder regularity, and multifractal spectra. Reso also works at identifying new multifractal scaling laws in long-lived TCP flows, and at proving large-deviation theorems to ensure that the same properties hold for adequate models devised for TCP protocols.
Models, logic, and complexity for programs and systems. With this issue we address mathematical CS concepts related to logic, abstract machines, modeling, and programming. Most of the work of MC2 and Plume can be categorized as mathematical computer science, with contributions to the development of new, computer-science-oriented mathematical theories.
For instance, there are rich mathematical theories of computational complexity and of cellular automata. Implicit computational complexity gives logical characterizations of an increasing number of complexity classes by means of typing systems. This is a complementary way, with respect to more traditional algorithmic complexity, to study the complexity of programs (Plume, MC2). Roma always starts a new study by formally assessing the complexity of the problem at hand, and by checking the influence of different hypotheses on that complexity (homogeneity of communication capabilities, for instance). For NP-complete problems, attempts are made to derive bounds that are as precise as possible on the (theoretical) performance of the optimal solution. Plume deals with the logical foundations of programming languages, with a specific focus on enlarging the spectrum of logically understood programming primitives (e.g., concurrency, probabilities, imperative features, control) and software properties (e.g., termination, complexity). Unifying the known approaches by means of linear logic, game semantics, or realizability theory is a key goal. This includes the theory of parallel programming, to provide abstract tools for guaranteeing safety properties of computations, and to define semantically founded core programming languages such as process algebras. The need for formal proofs of program properties requires progress on the foundations of mathematics, as well as a specific understanding of the constraints underlying the development of proofs in proof assistants. The latter is related to the application context of the proof, and makes the bridge with the work of Arénaire on specification and proof aspects for the reliability of programs. In order to harness the complexity of incoming machines/platforms and new applications, programming models need to be investigated. Challenges include identifying pertinent programming concepts under two major constraints.
They have to be simple enough to be manipulated by programmers, and to be based on solid foundations to enable efficient execution on a large variety of hardware, which can moreover be dynamic. A promising research direction is to work with meta-models to transform high-level models into models closer to the execution environment. Successful preliminary work

has already been started by the Avalon team with software component models. Service-Oriented Computing platforms and middleware systems need models and algorithms for several design issues such as service discovery, deployment, composition, and orchestration. Roma's studies are always based on platform models. These models must be precise enough to enable predicting the behavior of algorithms when deployed and run on the actual platforms, but they should also be simple enough to be analyzable; hence the need to find good trade-offs. Specifically, models will be defined as a basis for the work on multi-cores. MC2 has been one of the French pioneers in the study of graph decompositions (in particular, tree decompositions) and has remained actively interested in this subject. Graph decompositions were introduced originally with a purely mathematical motivation, but turned out to yield efficient algorithms for a huge variety of problems. Roma will pursue its work on hypergraph partitioning. The underlying idea is to extend the existing hypergraph partitioning approaches to allow their application to new contexts. Currently, Roma uses hypergraph partitioning to solve problems that are often homogeneous in nature; the team will study how to enrich hypergraphs in order to use them to solve heterogeneous versions of these problems. D-Net will continue working on dynamic graph models for network science. This emerging and promising domain has produced a stream of studies aimed at identifying more properties, their causes and consequences, and at capturing them in relevant models.
One goal is to use such models as key parameters in the characterization of complexity classes for distributed algorithms on dynamic networks, and for the study of various phenomena of interest like robustness, spreading of information or viruses, and protocol performance in dynamic networks.

Cross-fertilization with other disciplines, and application domains

High performance for computational sciences. Since LIP's creation, several teams have started collaborations with scientists from other fields of science. From fundamental algorithms and models to the validation of parallel algorithms and applications over clusters and grids, these exchanges have always been beneficial for the researchers of every discipline. More recently, the Blaise Pascal Center and the Numerical Science & Modeling Federation allow starting more formal collaborations. More than just sharing clusters and supercomputers, both research instruments will allow strong collaborations between LIP experts in high-performance computing and scientists from other fields like physics, chemistry, and applied mathematics. LIP will also provide expertise at the interface between numerical analysis and computing, on improving the numerical quality of results in HPC. Lectures will be given by LIP researchers, and national and international collaborations will be started.
Link with microelectronics, electrical engineering, and the semiconductor industry. Computer science is a natural bridge between mathematics and electrical engineering, from formal aspects to real hardware developments. The Minalogic competitiveness cluster and the Rhône-Alpes region, with several semiconductor industries and research centers (e.g., STMicroelectronics, CEA), offer opportunities to work with specialists of embedded-systems design, circuit synthesis, microelectronics, power consumption, and embedded applications.
Compsys and Arénaire participate in this interdisciplinary effort, where computer science, through formal methods and software developments, is becoming essential. Collaborations are also developed with other microelectronics labs in France (e.g., LIP6, Lab-STICC).
Computer-science models, complex systems, biosciences. In biosciences, collaborations were started between LIP research teams and other laboratories in the Rhône-Alpes region and in France. The Décrypthon project between AFM, CNRS, and IBM offers access to two grids (a server-based grid and a desktop grid) and the expertise of the Graal researchers for studying application design over such large-scale platforms. Graal also works with researchers of the INRIA project-team Bamboo, based in the Laboratoire de Biométrie et Biologie Évolutive (UCBL), on the automatic generation of protein domain families. The fundamental objective of this collaboration is to enable bioinformaticians to cope with the exponential increase in the size of biological databases by using the power of distributed computing. Another aspect of our investment in biosciences is related to IXXI. D-Net targets inherently multidisciplinary goals, specifically regarding "Social/Human Dynamics" and "Biology Health". One goal of D-Net is to tackle the problems associated with the spread of infectious diseases over dynamic social networks, and to propose novel tools and models. The real networks under direct study represent an example of human and social interactions, and their study will yield interesting insights into human dynamics. Moreover, the main dynamical processes to be studied are the spreading of pathogens, which represents an important public-health issue. Although complex systems will be less prominent in its activities, MC2 too has initiated projects linked to IXXI and biosciences.
A main objective is the study of dynamical discrete systems like cellular automata, Boolean and gene-regulatory networks, as well as self-assembly tilings, with an emphasis on their combinatorial and algorithmic aspects. In Plume, game theory and Nash equilibria are applied to the study of gene-regulatory networks.

2.3 Implementing the project

LIP members are highly creative and motivated by their projects; hence the laboratory builds on strong foundations. The new team organization and the future directions reflect our own evolutions and those of the international, national, and local contexts. Below we describe the main risks we face, and then we review the main ingredients of the laboratory's policy for the coming years.

Weaknesses and risks

The increasing administrative workload for researchers and for the administrative and technical staff is a concern. The rapid increase of the lab's size is a very good sign; however, it requires paying attention to our administrative organization and infrastructures. The ratio between the number of administrative staff and the number of scientific members remains quite low, especially with respect to the increasing importance of project-based research (contracts and temporary human-resource management, evaluation processes, etc.) and the numerous funding sources. With the LRU law, the AERES evaluation process, and the evolutions of research organizations, crucial changes are happening very quickly. These changes have been quite time-consuming for LIP's researchers. The autonomy implies increased local accounting, which may also have an impact on the administrative responsibilities of researchers (budget management rather than position management). The French reform, and the evolution in many countries, leads to a pressing demand for bibliometric measurements and for indicators in general. The computer-science community, whose production is revealed through very diverse channels, may be extremely sensitive to that demand. Researchers hope for a careful use of these measures, especially with respect to their political impact and their effect on scientific orientations. Our association with multiple organizations has always been a major strength.
With the LRU, the larger autonomy of the organizations may partly hinder their synchronization, and in turn LIP's running (multiple scientific policies, contract negotiation, administrative deadlines, etc.). Local tools such as the INRIA-ENS outline agreement, aspects of the CNRS institutes, and the PRES should ease communication. As recommended by the expert committee of our previous evaluation in 2006, our collaborations at the Lyon level have been clearly strengthened, especially at the level of teaching and of the doctoral program. With a project well positioned with respect to the research policy of the University of Lyon, we should continue in this direction. Some of our objectives are essentially related to the technological progress that we must anticipate, and to our management of pioneering experimental facilities. Together with long-term software development and maintenance, this requires sustained infrastructure investments and engineering support; under-dimensioned support could impede our objectives. As seen in the progress report, our quick growth also requires an efficient training and research strategy for increasing the number of doctoral students (and PhD grants), in an international context that may not be favorable, especially in computer science. The fact that we are working on three geographical sites requires special efforts (scientific events, transverse initiatives, etc.) to preserve the laboratory's added value and cohesion in research and teaching.

Governance and animation

A main change will be the laboratory's organization into 8 teams. The non-INRIA teams are stable. Plume has been headed by O. Laurent since Jan., when D. Hirschkoff became vice-head of the lab. MC2 has been co-headed by P. Koiran (on a two-year leave in Toronto) and É. Thierry since Sept. The organization of the INRIA project-teams also depends on the life cycles of projects at INRIA. The thematics should however remain globally stable. Arénaire had to modify its organization after G.
Villard took the head of the laboratory. F. de Dinechin has the co-responsibility of the team, especially for preparing the 2010 INRIA project evaluation. Compsys remains headed by A. Darte, and Reso by P. Vicat-Blanc Primet. Graal will split into two project-teams, Avalon and Roma, headed by F. Desprez and F. Vivien, respectively. D-Net is in the process of becoming an INRIA project-team headed by É. Fleury.

The Laboratory Council will be enlarged from its current 16 members. We do not expect major changes to our current model of governance, which is very effective. We will keep frequent Council meetings (every other week, currently) for discussing the main strategic points as well as everyday questions, and general lab meetings four times a year. The heads of teams and the direction meet frequently. The effective interfaces with the CS Department, the École doctorale, the Teaching council (graduate CS studies), and the Commission des habilités (lab supervision of doctoral students) will remain unchanged.

The scientific animation of the laboratory must take into account our growth, diversity, and geographical situation. In addition to the teams' seminars, which are very good tools, we will encourage transverse seminars. For instance, a Coq seminar will start in 2009/2010, in particular between Arénaire and Plume, and the Embedded-computing day will be organized again with STMicroelectronics. Beyond the lab and student seminars, we plan to organize general scientific events for the lab, such as a retreat, a new-members day, and a PhD day, which we did not organize in the past. We will also encourage invitations of young researchers. The activity and budgets of the teams tend to provide more funding for the laboratory's scientific policy, which allows supporting the aforementioned plan.

With various objectives (e.g., LIP database, research reports, bibliographic statistics, preparation of reports and evaluations), we plan to pursue our policy of using the French repository HAL to give access to all of LIP's publications in the future.

Our partnership with INRIA is reinforced in 2009 with a first annual ENS-INRIA meeting (July 2009) for discussing issues related to INRIA project-teams. We expect our future associations to be discussed between our four current organizations. The CNRS INS2I institute at the local level, and UCBL and ENS with the University of Lyon, should also help our positioning (especially with respect to other CS labs), partnerships, and running at the Lyon level. Indeed, it seems that more and more strategic points will be discussed at this level.

Personnel

The main aspects of our policy are to:
- look for outstanding applications and foster emerging themes in the long term;
- preserve the stability of the groups and domains of the lab;
- help balance the involvement of the heading organizations;
- strengthen our two scientific axes together with their interweaving, and their links with other disciplines, in relation with the existing policies at the regional level.

Faculty members and researchers. To foster the development of all the teams and the diversity of our teaching, an element of our hiring policy at ENS or UCBL will be to balance positions among teams. An important point is to have enough senior positions in every team to ensure the long-term existence of important themes. Another point is to encourage balance in terms of number of people per organization. In particular, we may plan to prioritize UCBL positions on our most theoretical/mathematical themes. At the Professor level, we will favor scientific and administrative leadership, and new long-term themes. Assistant Professor hirings should strengthen existing or newly created axes.
At INRIA, the growth of the last years is not expected to continue. However, as for CNRS, we will continue our policy of attracting candidates at the highest level, and of taking advantage of the ease of mobility at the national level. Our policy will encourage postdoc stays, which are important for helping PhDs find positions.

Doctoral students. LIP members are highly involved in the hiring, progress, and future of PhD candidates. To take into account the growth of the laboratory, and to preserve the quality of our students and our supervision ratio, we are working on the international opening and evolution of our Master's teaching and Doctoral Program (see below). Students are followed at the lab level by the Commission des habilités (at the interface with the Doctoral School). This will continue, especially regarding research progress and production, continuing training, and teaching.

Administrative staff. At the administrative level, after a very unstable and over-loaded situation during the whole period of the four-year plan, we expect a clear improvement for January 2010. In the new situation we should have two permanent ENS assistants (C. Iafrate and L. Lécot) and one permanent CNRS assistant (hiring in Dec. 2009). We do not have any information about the future plans of the CNRS assistant on extended maternity leave since 2006 (C. Desplanches). Three INRIA assistants will be, at least partially, working for the laboratory and LIP project-teams: S. Boyer, E. Bresle, and C. Suter. These assistants will also work for project-teams outside LIP. Very regular discussions with the INRIA administration allow a smooth running of the assistant service, especially for balancing the loads between work for LIP and work for outside teams. Our short-term objective is to start 2010 with a stable staff representing the equivalent of 5 full-time permanent positions, then to increase the staff over the following years.
As soon as our unstable situation ends, the objective of our policy will be to improve the diversity of their tasks, mainly through scientific-administrative aspects more closely related to the teams' objectives and life (e.g., funding management, organization of scientific events). Over the reporting period, some administrative tasks were moved from the administrative staff to researchers. A first step toward our objective is for the administrative staff, in the near future, to be able to take back these tasks.

Technical staff. LIP is organized with an IT support team of three permanent engineers: J.-C. Mignot (CNRS), D. Ponsard (CNRS), and S. Torres (ENS). The team provides highly effective, key support for IT research infrastructures, from network services and everyday computers to experimental and pioneering platforms, with the exception of FPGA platforms and boards, for which we lack skills and manpower. Also due to insufficient manpower, it is difficult for the IT support team to be involved in software development and/or maintenance. The objectives of the team are detailed in Chapter 9. For the effectiveness of the services, we will pursue the policy of partly involving the engineers in research projects.

The team is under-dimensioned with respect to our needs. Our involvement in national and local infrastructure projects (such as the INRIA ADT Aladdin, the Blaise Pascal Center, and the Numerical Science & Modeling Federation project), and the evolution of our platforms, depend on the consolidation of the team. In addition to the IT team, we have two permanent research engineers: É. Boix (CNRS, currently managing a startup creation) and M. Imbert (Service Expérimentation et Développement, INRIA), working full-time on research projects (Graal, Reso). LIP objectives (such as platform evolution and long-term hardware/software development) also depend on a more important permanent

support. As analyzed in the progress report, on these aspects we rely heavily on temporary engineers, whose number has increased quickly. Permanent positions are mandatory to enable long-term support of sophisticated software, since this can require strong expertise that takes a long time to acquire. Permanent positions would also ease a long-term laboratory policy on these questions.

Teaching and training

The involvement of researchers and academic staff in the teaching at ENS Lyon is already rather strong, and we expect this to continue and even develop. Recent reforms (the LRU, and the will, especially at ENS Lyon, to smooth the distinction between academic and purely research positions) will certainly provide further stimulus in this direction. An important challenge that LIP and the Département d'Informatique will face is the evolution of the local Master in Computer Science. A renewed curriculum is being offered. The main novelties include a stress on the importance of training periods in research labs (in France and abroad), a 6-week period of teaching sessions in the spirit of summer schools (concentrated in time, often focused on a rather specialized research topic), and a minor in complex systems. These courses are likely to attract, beyond Master's students, doctoral students as well as researchers, and to enhance the visibility of the Master. We expect the renewal of the Master's Programme to be carried on and further developed in the next years. In doing this, it will be important to address the representation of LIP's research themes, which are becoming more diverse as the lab grows. This phenomenon must be weighed against the rather stable number of (local) students. The commitment of the new ENS Lyon to strongly develop tight connections with other teaching institutions at an international level should offer opportunities to address these questions.
Local and regional partnerships (notably Lyon and Grenoble) are also paths that can be explored. We also plan to further develop the collaboration with UCBL (Université Lyon 1), which is naturally based on the UCBL personnel working at LIP. The main challenge in doing this is to establish a balanced partnership, which would allow LIP to be present, with its expertise, in the Lyon landscape, taking into account the evolving structures at UCBL (reform of the UFRs, in particular). Finally, a possibility in terms of training will also be offered by the development of the Blaise Pascal Center.

Infrastructure

Our objectives regarding hardware and software infrastructures are further detailed in Chapter 9. Particular emphasis will be put on the diversification of our platforms (HPC, networking, synthesis tools, etc.) and on the support of software development. Another priority for the IT team is to enforce a security policy (Politique de sécurité des systèmes d'information, PSSI).

As part of Lyon Cité Campus (Campus plan), our project (2-3 years) is to gather all of LIP's teams in a single location, at Gerland. To stay as close to the students as possible, we would prefer ENS's site; the connection with ENS students remains clearly central for our teaching activities. UCBL's Gerland site would provide another quality location. LIP researchers evince great energy, and new opportunities will most certainly appear during the next few years. The objective of geographical grouping is quite motivating and will ensure the best possible realisation of our projects.

3. Arénaire - Computer Arithmetic

Team: ARÉNAIRE
Scientific leaders: Florent DE DINECHIN and Gilles VILLARD
Evaluation Web site:
Parent Organizations: CNRS, École Normale Supérieure de Lyon, INRIA, Université de Lyon

Team History: This group started in 1998 as a joint CNRS-ENS Lyon-INRIA project led by J.-M. Muller, with three permanent members. In 2004, Gilles Villard became head of the team. Initially, the team was mainly focused on hardware-oriented algorithms for computer arithmetic and on elementary function evaluation. Beyond several arrivals and departures (the most recent ones are listed in Section 1), many events progressively influenced the scientific orientation of the team:
- The team developed an expertise in floating-point software that goes beyond elementary functions, addressing more generally the issue of efficient high-accuracy computing. Interval arithmetic became both a tool used pervasively by the team and a subject of research.
- A fruitful collaboration with the Compilation Expertise Center of STMicroelectronics brought us new problems (integer-based floating-point arithmetic, division by a constant, etc.).
- With the increasing complexity of the arithmetic algorithms and the target technologies, the research focus shifted from designing operators to automating or assisting this design.
- On the hardware side, the target technology evolved from integrated circuits to field-programmable gate arrays, with new challenges and new opportunities.
- The team acquired strong expertise in Euclidean lattices and linear algebra, on the one hand because operations on these objects are the next frontier for our arithmetic expertise, and on the other hand because they provide solutions to existing arithmetic problems (correct rounding, polynomial approximation).
- Emerging arithmetic problems, such as the generation of polynomial approximations with constrained coefficients, or the design of specific operators for cryptographic hardware, clearly required skills in number theory.

3.1 Team Composition

Current team members

Permanent Researchers (name, function, institution, arrival date):
- BRISEBARRE Nicolas, Research Scientist, CNRS, 01/09/2007
- DE DINECHIN Florent, Associate Professor, ÉNS Lyon, 01/09/1998
- HANROT Guillaume, Professor, ÉNS Lyon, 01/09/2009
- JEANNEROD Claude-Pierre, Research Scientist, INRIA, 01/02/2002
- LEFÈVRE Vincent, Research Scientist, INRIA, 01/09/2006
- LOUVET Nicolas, Associate Professor, UCBL, 01/11/2008
- MULLER Jean-Michel, Senior Research Scientist, CNRS, 15/03/1989
- REVOL Nathalie, Research Scientist, INRIA, 01/09/2002
- VILLARD Gilles, Senior Research Scientist, CNRS, 01/12/2000

Post-docs, engineers and visitors (name, function and % of time, institution, arrival date):
- FAGBOHOUN Christman, Engineer, INRIA, 20/04/2009
- MAYERO Micaela, Research Scientist, Univ. Paris 13 (secondment to INRIA), 01/09/2009
- NOVOCIN Andrew, Engineer, INRIA, 01/09/

- TAKEUGMING Honoré, Engineer, ÉNS Lyon/ANR TCHATER, 01/10/2008
- THÉVENY Philippe, Engineer, INRIA, 01/10/2009
- TORRES Serge, Engineer (40%), MESR, 01/09/2001
- VÁZQUEZ ÁLVAREZ Álvaro, Post-doc, INRIA, 01/10/2009

Doctoral Students (name, funding institution, date of first registration as doctoral student):
- JOLDES Mioara, Région Rhône-Alpes, 01/09/2008
- JOURDAN LU Jingyan, CIFRE, 01/03/2009
- MARTIN-DOREL Érik, MESR, 01/09/2009
- MOREL Ivan, ÉNS Lyon, 01/09/2007
- MOUILLERON Christophe, ÉNS Lyon, 01/09/2008
- NGUYEN Hong Diep, INRIA CORDI, 01/10/2007
- PANHALEUX Adrien, ÉNS Lyon, 01/09/2008
- PASCA Bogdan, MESR, 01/09/2008
- PUJOL Xavier, ÉNS Lyon, 01/09/2009
- REVY Guillaume, MESR, 01/10/2006

Past team members

Past Members, Oct. - Oct. (name, position, parent institution, arrival - departure, current position):
- DAUMAS Marc, Research Scientist, CNRS, 01/10/ - /11/2005; now Professor, Perpignan University
- STEHLÉ Damien, Research Scientist, CNRS, 01/10/ - /07/2008; now CNRS scientist seconded to the Universities of Sydney and Macquarie, until 15/07/2010

Past Doctoral Students (name, first registration - departure, institution, doctoral school, supervisors, current position):
- CHÁVES ALONSO Francisco José, 01/11/ - /10/2007, ÉNS Lyon (Marie Curie grant), MathIF, M. Daumas and N. Revol; now Professor, U. Los Andes, Bogota, Colombia
- CHEVILLARD Sylvain, 01/09/ - /09/2009, ÉNS Lyon, MathIF, N. Brisebarre and J.-M. Muller; now Post-doc, INRIA Lorraine
- DETREY Jérémie, 01/09/ - /01/2007, ÉNS Lyon, MathIF, F. de Dinechin and J.-M. Muller; now CR2, INRIA
- LAUTER Christoph, 01/09/ - /10/2008, ÉNS Lyon, MathIF, F. de Dinechin and J.-M. Muller; now at Intel, Portland, Or.
- MELQUIOND Guillaume, 01/09/ - /11/2006, ÉNS Lyon, MathIF, M. Daumas; now CR2, INRIA
- MICHARD Romain, 01/09/ - /06/2008, ÉNS Lyon/INRIA, MathIF, J.-M. Muller and A. Tisserand; now Post-doc, INSA Lyon
- RAINA Saurabh Kumar, 01/10/ - /09/2006, ÉNS Lyon (Région Rhône-Alpes grant), MathIF, C.-P. Jeannerod, J.-M. Muller, and A. Tisserand; now Assistant Professor at ITS Engineering College (Greater Noida, India)

- VEYRAT-CHARVILLON Nicolas, 01/09/ - /06/2007, ÉNS Lyon, MathIF, J.-M. Muller and A. Tisserand; now Post-doc at U.C. Louvain

Past post-doctoral researchers, engineers and visitors, stays of 3 months or more (name, function, arrival - departure, home institution if appropriate, current position):
- BECHETOILLE Édouard, Engineer, 01/10/ - /11/2006, INRIA; now CNRS Research Engineer (IPNL)
- BEUCHAT Jean-Luc, Post-doc, 01/11/ - /11/2005, Swiss National Science Foundation; now Associate Professor at Tsukuba University (Japan)
- GIESBRECHT Mark, Visitor, 01/12/ - /06/2006, University of Waterloo, Canada; now Associate Professor (Waterloo University)
- JOURDAN Nicolas, Engineer, 23/07/ - /02/2008, INRIA; now in charge of industrial relations and technology transfer at INRIA Rhône-Alpes
- LOUVET Nicolas, Post-doc, 01/12/ - /10/2008, INRIA; now Associate Professor, UCBL
- TISSERAND Arnaud, Visitor, 01/10/ - /08/2006, CNRS; now CR1, CNRS, IRISA (Lannion)
- TOLI Ilia, Post-doc, 01/05/ - /04/2006, INRIA; now Senior Lecturer at Suffolk University (USA)

Evolution of the team: In October 2005, there were 7 permanent members, 2 post-docs, 2 engineers, and 7 PhD students in Arénaire. We are now 9 permanent members, 1 associate professor seconded to INRIA, 4.4 engineers, 1 post-doc, and 10 PhD students. There has been a significant turn-over among the researchers: the departures of Marc Daumas and Jean-Luc Beuchat, the arrival and then (hopefully momentary) departure of Damien Stehlé, the arrival of Vincent Lefèvre, and the recruitments of Nicolas Brisebarre and Nicolas Louvet.
We are proud to notice the good success of our former members:
- Édouard Bechetoille (former software development engineer) is now a CNRS research engineer at IPNL (Institute for Nuclear Physics, Lyon);
- Jean-Luc Beuchat (former postdoctoral fellow) is now associate professor at Tsukuba University (Japan);
- Sylvie Boldo, Jérémie Detrey, and Guillaume Melquiond (former PhD students) are now chargés de recherche (that is, full-time researchers) at INRIA (in Orsay, Nancy, and Orsay, respectively);
- Francisco José Cháves Alonso (former PhD student) is now professor at U. Los Andes, Bogota, Colombia;
- Marc Daumas (former CNRS research scientist, CR) is now full professor at Perpignan University;
- Pascal Giorgi (former PhD student) is now maître de conférences (associate professor) at Montpellier University;
- Nicolas Jourdan (former software development engineer) is now in charge of industrial relations and technology transfer at INRIA Rhône-Alpes;
- Saurabh Raina (former PhD student) is now associate professor at the I.T.S. Engineering College, Greater Noida, India;
- Christoph Lauter (former PhD student) is now an engineer at Intel, Portland (US), where he is designing software support for floating-point arithmetic.

Our former PhD students Sylvain Chevillard, Romain Michard, and Nicolas Veyrat-Charvillon currently hold post-doc positions, respectively with Projet CACAO of the LORIA lab in Nancy, with Projet Ares of the CITI lab at INSA Lyon, and with the Crypto Group of Université Catholique de Louvain.

We also had good success in recruiting people to the team:
- Damien Stehlé (CR2 research scientist, CNRS) was recruited as the first-ranked candidate in the national CNRS recruitment for computer scientists;

- Nicolas Brisebarre (CR1 research scientist, CNRS) was recruited in 2007;
- Nicolas Louvet (associate professor - maître de conférences - Université Lyon 1) was recruited in 2008;
- Guillaume Hanrot, a newly recruited ÉNS Lyon full professor, will join the team by October 1;
- Vincent Lefèvre (CR1 research scientist, INRIA) moved from projet CACAO (LORIA, Nancy) in September 2006.

3.2 Executive summary

Keywords: computer arithmetic; floating-point arithmetic, IEEE 754 arithmetic, fixed-point arithmetic, interval arithmetic, finite-field arithmetic, integer arithmetic, multiple precision, arbitrary precision; reliable computation, rounding-error analysis, numerical validation, code certification, formal proof, software synthesis; approximation theory, elementary function, global optimization, linear algebra, Euclidean lattice, cryptology, pairing, computer algebra, digital signal processing; VLSI circuit, FPGA circuit, VLIW processor, embedded system.

Research area: The Arénaire project-team aims at elaborating and consolidating knowledge and tools in the field of computer arithmetic. Computer arithmetic studies how to represent numbers in a computer and how to implement and manipulate such representations efficiently and accurately, in hardware or in software; it also studies the adequacy between such number representations and numerical algorithms. Within Arénaire, we cover all of these aspects. First, we study basic arithmetic operators in various number systems, from their algorithmic design to their optimized implementation (in hardware or software, and on general-purpose or application-specific targets). Second, we study the interplay between arithmetic operators and the algorithmics of several application domains like function evaluation, polynomial approximation, linear algebra, lattice basis reduction, cryptology, and signal processing.
Here, the question is twofold: How may arithmetic choices impact such application domains in terms of complexity, practical efficiency, accuracy, or reliability? Conversely, to what extent can improving algorithms and implementations within those domains, and within scientific computing in general, be useful for enhancing arithmetic, that is, for making it faster and/or more reliable and/or more accurate? Finally, those two research areas are becoming increasingly tied to a third one, that of certifying numerical computations. Within this third field, we work not only on traditional, paper-and-pencil error analysis but also, and more and more, on automatic error analysis and formal methods.

Main goals: Through computer arithmetic, we strive to improve the quality of computation at large, both in hardware and in software. We look for improvements in performance (frequency or throughput for hardware, cycle count for software) and cost (area or power consumption for hardware, code size for software, memory consumption for both). In addition, we look for improvements in accuracy, which is more specific to computer arithmetic. Third, we strive to increase confidence in the computed result: confidence that each operator is well specified and behaves as expected, and confidence that the composition of such operators in a program does not entail unexpected deviations from the result to be computed. Last but not least, by demonstrating the practical feasibility of getting high-quality numerical answers, we aim at fostering and being part of standardization actions in the fields of computer arithmetic and numerical computing.

Methodology: A specificity of computer arithmetic is that its relatively wide spectrum requires many areas of expertise, from hardware design and expert programming to formal proving and number theory. This is why Arénaire gathers people from rather different domains, whose common ground is arithmetic algorithms.
Our targeted numerical applications benefit from any progress we make at the basic operator level. In turn, our optimized designs and implementations of basic operators are nowadays made possible only through our expertise and state-of-the-art software developments in domains like linear algebra or lattice basis reduction (which offer appropriate modeling tools). Consequently, we also study such domains for themselves, with an emphasis on algorithmic complexity and the design of high-performance and reliable software. To achieve our general goal of high numerical quality, either at the basic operator level or at the application level, we need a combination of rigorous specifications and certification tools. A first approach to this is a careful study of validated arithmetics, and especially interval arithmetic, as well as the possibility of computing exact error bounds or even exact results in floating-point arithmetic. Another, complementary approach is to rely more systematically on formal verification and to use proof assistants like Coq in a way that is well suited to our purpose. Finally, a very significant change occurred during the recent years: while a few years ago our main realizations, besides algorithms, were expert handwritten libraries (in hardware or software, offering operators ranging from the most basic ones to elementary functions and linear algebra kernels), we are now more and more interested

in automating the development of such libraries. We now produce software tools that assist us in designing and proving optimized operator libraries. This requires new mathematical and algorithmic expertise. This evolution from hand writing to synthesis is also pushing us toward the compilation community: if an operator can be designed, optimized, and proven automatically, it is natural that this task eventually ends up being part of the compilation process. This explains why our major industrial partner so far is the STMicroelectronics Compilation Expertise Center in Grenoble.

Highlights (1 page max)

Standardization of arithmetics:
- The new revision of the IEEE Standard for floating-point arithmetic (IEEE 754-2008, available at http://ieeexplore.ieee.org/servlet/opac?punumber=) recommends that some elementary functions (sine, exponential, ...) should be correctly rounded. This is a direct consequence of the effort led by Arénaire since 1995 on the correct rounding of elementary algebraic and transcendental functions. Our worst cases for correct rounding, as well as our CRlibm library, played the most important role in that decision and, among the 22 bibliographic references cited by the standard, 7 are by Arénaire members. CRlibm is cited by members of the IEEE 754 revision committee as the proof that elementary functions can be correctly rounded at reasonable cost.
- Interval arithmetic has now reached a stability and maturity that should make its standardization possible; Nathalie Revol has prepared the creation of, and chairs, the IEEE Interval Standard Working Group P1788.

Reference books: Jean-Michel Muller published a new edition of his reference book Elementary Functions: Algorithms and Implementation.
Our Handbook of Floating-Point Arithmetic (Birkhäuser Boston, November 2009, 600 pages, ISBN ) gathers years of common work on IEEE floating-point arithmetic.

Reference software:
- Our algorithms for emulating IEEE binary floating-point arithmetic on integer processors (the FLIP library) are included in the STMicroelectronics C compilation toolset for the ST200 processor family.
- The FPLibrary project was the first to demonstrate floating-point elementary functions for FPGAs offering significant speed-ups with respect to processor implementations. These functions are still without equivalent, and have been widely used worldwide for high-performance computing.
- Gappa, a state-of-the-art tool for certifying error bounds in rounded arithmetics, originated in Arénaire and is still being developed in close collaboration with us.
- Sollya is a reference tool for manipulating numerical functions safely. It was the first environment to offer both numerical safety and automation for the design of elementary function libraries. It possesses unique features for polynomial approximation and validated supremum norms.
- The fplll library is now the reference for LLL reduction: it, or variants of it, are used in Pari/GP, Magma, and Sage, and the next release of NTL could be inspired by it. It is also used to generate polynomial approximants in Sollya.

Other key results: Ivan Morel, Xavier Pujol, Damien Stehlé, and Gilles Villard obtained key results on the use of floating-point arithmetic within lattice reduction algorithms, including the Lenstra-Lenstra-Lovász algorithm and the enumeration of the shortest vectors of a lattice. This involves a precise understanding of the numerical properties of Gram-Schmidt orthogonalization algorithms, and allows one to obtain guaranteed practical algorithms with better asymptotic complexities.
Guillaume Hanrot and Damien Stehlé completed a full worst-case analysis of Kannan's algorithm for deterministically computing a shortest non-zero lattice vector. They lowered the best previously known complexity upper bound and built inputs for which that complexity bound is essentially reached. The COSY software (developed at Michigan State University, East Lansing) implements Taylor model arithmetic using floating-point arithmetic. The proof that roundoff errors are correctly accounted for is given in [43], putting an end to a controversy in the interval arithmetic community.
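The enclosure guarantee at stake in such proofs can be illustrated with a minimal interval-addition sketch in Python (our own toy illustration, not COSY or Arénaire code). Since Python exposes no directed-rounding modes, the sketch over-approximates by nudging each bound one ulp outward with `math.nextafter`; dedicated libraries round the two bounds downward and upward directly.

```python
import math
from fractions import Fraction

def iv_add(x, y):
    """Add intervals [x0,x1] + [y0,y1] with outward rounding.

    Each bound is computed in round-to-nearest (error <= 1/2 ulp),
    then moved one ulp outward, so the result is guaranteed to
    enclose the exact sum of any reals enclosed by the inputs."""
    lo = math.nextafter(x[0] + y[0], -math.inf)
    hi = math.nextafter(x[1] + y[1], math.inf)
    return lo, hi

# Tight double-precision enclosures of the real numbers 1/10 and 1/5
# (neither is exactly representable in binary floating point).
x = (math.nextafter(0.1, 0.0), 0.1)
y = (math.nextafter(0.2, 0.0), 0.2)
lo, hi = iv_add(x, y)

# The exact sum 3/10 is guaranteed to lie in the result interval,
# even though neither bound equals it.
assert Fraction(lo) <= Fraction(3, 10) <= Fraction(hi)
```

The point the sketch makes is the one debated for COSY: a floating-point implementation of a validated arithmetic is only trustworthy if every rounding error is provably absorbed by the enclosure, which is exactly what the correctness proof in [43] establishes.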

3.3 Research activities

3.3.1 Number Systems

List of participants: Florent de Dinechin (MCF), Jérémie Detrey (PhD), Jean-Michel Muller (DR).

Keywords: number systems, redundant number systems, carry propagation, floating-point, logarithmic number system (LNS).

Scientific issues, goals, and positioning of the team: The study of number systems and, more generally, of numerical data representations is an important topic in the project. Typical examples are: the redundant number systems used inside multipliers and dividers; alternatives to the floating-point representation for special-purpose systems; finite-field representations, with a strong impact on cryptographic hardware circuits; and the performance of an interval arithmetic, which heavily depends on the underlying real arithmetic.

Major results, Jan. - Dec. 2009: P. Kornerup (Southern Danish University, Odense) and J.-M. Muller have shown that, for normal redundant digit sets with radix greater than two, a single guard digit is sufficient to determine the value of a prefix of leading nonzero digits of arbitrary length [35]. With S. Collange (then a student at ÉNS Lyon), F. de Dinechin and J. Detrey have argued that the respective merits of the floating-point and logarithmic number systems can only be assessed on a per-application basis [76], using a library of hardware operators supporting both systems [20, 22].

Self-assessment: This topic used to be very important in computer arithmetic. Now, partly due to early work by our group in the late 1990s, the adequate number representations for integer and real numbers, depending on the context, are well determined.
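The floating-point versus LNS trade-off discussed above can be made concrete with a toy Python sketch (our illustration only; sign and zero handling are omitted): in LNS a positive value is stored as its base-2 logarithm, so multiplication degenerates to a single addition, while addition requires the non-trivial function log2(1 + 2^d), which hardware must tabulate or approximate.

```python
import math

def lns_encode(x):
    """Represent a positive real by its base-2 logarithm (sign bit omitted)."""
    return math.log2(x)

def lns_decode(lx):
    return 2.0 ** lx

def lns_mul(lx, ly):
    """Multiplication in LNS is a single addition of the logarithms."""
    return lx + ly

def lns_add(lx, ly):
    """Addition needs s(d) = log2(1 + 2**d) for d = lo - hi <= 0;
    in hardware this function is tabulated or piecewise-approximated,
    which is the main cost of the logarithmic number system."""
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

a, b = lns_encode(3.0), lns_encode(5.0)
assert abs(lns_decode(lns_mul(a, b)) - 15.0) < 1e-9   # 3 * 5
assert abs(lns_decode(lns_add(a, b)) - 8.0) < 1e-9    # 3 + 5
```

This asymmetry (cheap multiplication, expensive addition) is precisely why, as argued in [76], LNS can only be compared with floating point on a per-application basis.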
However, research remains active in number representations for finite-field arithmetic.

3.3.2 Efficient Floating-Point Arithmetic and Applications

List of participants: Nicolas Brisebarre (CR), Claude-Pierre Jeannerod (CR), Jingyan Jourdan Lu (PhD), Christoph Lauter (PhD), Vincent Lefèvre (CR), Nicolas Louvet (MCF), Jean-Michel Muller (DR), Guillaume Revy (PhD), Gilles Villard (DR).

Keywords: floating-point arithmetic, multiplication, FMA, division, square root, correct rounding of algebraic functions, extended precision, error-free transforms, compensated algorithms, polynomial evaluation, software implementation, VLIW embedded processor, code generation and certification.

Scientific issues, goals, and positioning of the team: Efficient hardware for floating-point arithmetic (as specified by the IEEE-754 standards) is becoming available on an increasing number of chips. A first issue is thus to exploit the specification of such hardware operators in order to enhance the accuracy of numerical computing in general, without sacrificing efficiency. For processors without floating-point hardware, another issue is to emulate IEEE-754 arithmetic as efficiently as possible in software.

Major results, Jan. - Dec. 2009: Assuming that a fused multiply-add (FMA) floating-point instruction is available (which is the case on Itanium and PowerPC architectures), we proposed several new algorithms: in [71, 10], N. Brisebarre and J.-M. Muller showed how to perform multiplication by an arbitrary-precision constant very fast and how to check whether the result is correctly rounded; in [33], P. Kornerup (Southern Danish University, Odense), C. Lauter, V. Lefèvre, N. Louvet, and J.-M. Muller gave algorithms that use FMAs for quickly computing correctly-rounded positive integer powers. P. Kornerup, V. Lefèvre, N. Louvet, and J.-M. Muller studied properties of summation algorithms in [107].
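One of the summation building blocks studied in this line of work is Knuth's 2Sum error-free transform, which can be sketched in a few lines of Python (a minimal illustration under round-to-nearest binary64, not the code analyzed in [107]): six branch-free additions/subtractions return both the rounded sum and its exact rounding error.

```python
from fractions import Fraction

def two_sum(a, b):
    """Knuth's 2Sum error-free transform.

    Returns (s, e) with s = fl(a + b) and, barring overflow,
    s + e == a + b *exactly*; only 6 rounded-to-nearest
    additions/subtractions are used, and no branches."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_err = b - b_virtual
    a_err = a - a_virtual
    return s, a_err + b_err

s, e = two_sum(0.1, 0.2)
# s alone only approximates the exact sum of the two doubles,
# but the pair (s, e) represents it exactly:
assert Fraction(s) + Fraction(e) == Fraction(0.1) + Fraction(0.2)
assert e != 0.0
```

Accumulating these error terms is what lets compensated algorithms, such as the compensated Horner scheme, simulate a higher working precision at low cost.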
These authors showed in particular that Knuth's 2Sum algorithm is minimal (in terms of number of operations and of depth of the dependency graph) among all branch-free algorithms using only rounded-to-nearest floating-point additions and subtractions. S. Boldo (a former Arénaire PhD student, at that time at LRI and in the ProVal team of INRIA Saclay) and G. Melquiond (then in Arénaire) showed how to emulate an FMA [158]; they recently (after G. Melquiond left Arénaire) published the final version of this work [5]. Error-free transforms (EFTs) such as the above 2Sum algorithm have further been studied as a way of emulating extended precision efficiently: C. Lauter gave algorithms for multi-double arithmetic in [181]; N. Louvet (jointly with Ph. Langlois from Université de Perpignan) showed in [110] how EFTs can be used to design a compensated Horner algorithm that simulates k times the precision of the native floating-point arithmetic when evaluating polynomials. Concerning the software emulation of floating-point operators using integer instructions, C.-P. Jeannerod and G. Revy (jointly with H. Knochel and C. Monat from STMicroelectronics) introduced in [166] a faster algorithm for correctly rounded square roots, based on highly parallel bivariate polynomial evaluation and well suited to VLIW architectures. With G. Villard, they extended the approach to correctly rounded division [105], the main difficulty here being the automatic numerical certification of the algorithm. Both algorithms were implemented in the FLIP library [195] (partly by hand, partly by a code generator for fast and accurate polynomial evaluation). With the ST200 VLIW compiler, our new codes for square root and division are, respectively, 2.4 and 1.8 times faster than the previously best ones.

Key reference: our Handbook of Floating-Point Arithmetic [150].

Self-assessment: Our expertise in floating-point arithmetic is one of the strengths of Arénaire, and we propose state-of-the-art algorithms and software implementations. However, we have to find a way of securing the long-term existence of our software.

3.3.3 Efficient Polynomial and Rational Approximations

List of participants: Nicolas Brisebarre (CR), Sylvain Chevillard (PhD), Mioara Joldes (PhD), Jean-Michel Muller (DR), Arnaud Tisserand (CR), Serge Torres (IR).

Keywords: floating-point arithmetic, polynomial approximation, rational approximation, minimax approximation, E-method, lattice basis reduction, closest vector problem, linear programming.

Scientific issues, goals, and positioning of the team: Since the basic operations available in hardware are addition/subtraction, multiplication and division, the only functions of one variable one can compute are piecewise polynomials (if division is not allowed) or rational functions (otherwise). Hence it is natural to approximate the functions we wish to evaluate by polynomials or rational functions.
The classical methods for computing approximations to functions cannot be used without losing accuracy, since they generate approximations with infinitely precise coefficients, which must be rounded to some format when the polynomial or rational function is evaluated: the corresponding rounded approximation may be significantly less accurate than the best approximation taken among the rounded polynomials (or fractions).

Major results (Jan. 2006 - Dec. 2009): We have developed methods for generating the best, or at least very good, polynomial approximations (with respect to the supremum norm or the L2 norm) to functions, among polynomials whose coefficients satisfy size constraints (e.g., the degree-i coefficient has at most m_i fractional bits, or has a given, fixed value). Regarding the supremum norm on a compact interval, one of these methods, due to N. Brisebarre, J.-M. Muller and A. Tisserand, reduces the problem to scanning the integer points of a polytope [11]; another one, due to N. Brisebarre and S. Chevillard, uses the LLL lattice reduction algorithm [66]. The latter method, implemented in Sollya, also makes it possible to accelerate the process described in [11]. Regarding the L2 norm, N. Brisebarre and G. Hanrot (LORIA, INRIA Lorraine) started a complete study of the problem [70] and proposed theoretical and algorithmic results that make it possible to obtain optimal approximants. Potential applications to hardwired operators have been studied in [12] by N. Brisebarre, J.-M. Muller, A. Tisserand and S. Torres. In [67], a joint work with M. Ercegovac (University of California, Los Angeles), N. Brisebarre, S. Chevillard, J.-M. Muller and S. Torres extend the domain of applicability of the E-method (due to M. Ercegovac), a hardware-oriented method for evaluating elementary functions using polynomial and rational approximations. An approach based on linear programming and lattice basis reduction is presented, and a design is introduced and discussed.
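The core difficulty, that rounding the coefficients of the best real-coefficient approximation need not give the best constrained approximation, can be reproduced on a deliberately tiny instance (a brute-force Python sketch with stdlib only, unrelated to the team's LLL- or polytope-based methods): degree-1 minimax approximation of exp on [0, 1], with coefficients constrained to 4 fractional bits.

```python
import math

# Degree-1 minimax approximation of exp on [0, 1]:
# classical equioscillation formulas for a convex function.
b_star = math.e - 1
a_star = (math.e - b_star * math.log(b_star)) / 2

xs = [i / 200 for i in range(201)]          # sample grid on [0, 1]
def supnorm(a, b):
    """Supremum norm of exp(x) - (a + b*x), estimated on the grid."""
    return max(abs(math.exp(x) - (a + b * x)) for x in xs)

# Constraint: both coefficients must be multiples of 1/16.
naive = supnorm(round(a_star * 16) / 16, round(b_star * 16) / 16)
best = min(supnorm(ka / 16, kb / 16)
           for ka in range(10, 20) for kb in range(22, 34))

assert best < naive    # rounding the minimax coefficients is not optimal
print(naive, best)
```

Here naive rounding gives an error near 0.156, while the best pair of 4-bit coefficients achieves 0.125; the gap only widens with higher degrees and tighter formats, which is why systematic methods are needed.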
Self-assessment: Until the papers published by the team, there was no systematic approach for obtaining good approximations to a function f over an interval [a, b] by polynomials or rational functions satisfying additional constraints (e.g., on the size in bits of the coefficients or on the values of some coefficients). Our leadership in this topic should be confirmed by future work on the several-variable case and by forthcoming enhancements in the one-variable case.

3.3.4 Linear Algebra and Lattice Basis Reduction

List of participants: Claude-Pierre Jeannerod (CR), Nicolas Louvet (MCF), Ivan Morel (PhD), Christophe Mouilleron (PhD), Hong Diep Nguyen (PhD), Nathalie Revol (CR), Damien Stehlé (CR), Gilles Villard (DR).

Keywords: real matrix, integer matrix, polynomial matrix, structured matrix, approximant basis, Euclidean lattice, certified computation, verification algorithm, automatic differentiation, matrix error bound, algebraic complexity, reduction to matrix multiplication, LLL lattice reduction, strong lattice reduction, asymptotically fast algorithm.

Scientific issues, goals, and positioning of the team: The techniques of exact linear algebra have evolved quickly and significantly in the last few years, substantially reducing the complexity of several algorithms (see for instance [30] for an essentially optimal result). A main focus is on matrices whose entries are integers or univariate polynomials over a field, for which we aim at relating the size of the data (integer bit lengths or polynomial degrees) to the cost of solving the problem exactly. A first goal is to design asymptotically faster algorithms, to reduce problems to matrix multiplication in a systematic way, and to relate bit complexity to algebraic complexity. Another direction is to make these algorithms fast in practice as well, especially since applications yield very large matrices that are either sparse or structured. Within the LinBox international project, we work on a software library that corresponds to our algorithmic research on matrices. LinBox is a generic library devoted to sparse or structured exact linear algebra and its applications, into which external components can be plugged in a plug-and-play fashion. More recently, we have also started a research direction around lattice basis reduction. Euclidean lattices provide powerful tools in various algorithmic domains; in particular, we investigate applications in computer arithmetic, cryptology, algorithmic number theory, and communication theory. We work on improving the complexity estimates of lattice basis reduction algorithms, on providing better implementations of them, and on obtaining more strongly reduced bases. The recent progress in linear algebra mentioned above may provide new insights. Finally, following the Arénaire objective of specification and certification (in an approximate-arithmetic setting such as floating-point), we have also started studies on the complexity of computing certified error bounds and condition numbers.

Major results (Jan. 2006 - Dec. 2009): I. Morel, D. Stehlé, and G. Villard proposed a new floating-point LLL algorithm relying on the Householder QR factorization [120]. This algorithm is simpler to describe than the previous ones and requires less floating-point precision. G. Hanrot and D. Stehlé investigated Kannan's algorithm for the computation of a shortest lattice vector: they improved its analysis by proving a smaller upper complexity bound [103], and showed that the new bound is tight [164]. X. Pujol and D. Stehlé showed that this algorithm can be implemented safely in floating-point arithmetic [123]. C.-P. Jeannerod and G. Villard showed in [31] that minimal basis computations and polynomial matrix fractions make it possible to reduce the complexity of various problems on polynomial matrices to, essentially, that of matrix multiplication. The computation of rational solutions of sparse linear systems has been improved by G. Villard (jointly with W. Eberly, M. Giesbrecht, P. Giorgi, and A. Storjohann) in [94], where an asymptotically faster algorithm is given together with a LinBox-based implementation. The correctness of this new algorithm, at first subject to the conjectured existence of suitable block projections of matrices, was then established in [95]. C.-P. Jeannerod, jointly with A. Bostan and É. Schost, gave a new upper bound on the algebraic complexity of linear system solving for several families of structured matrices [6]. A consequence, also detailed in [6], is asymptotically faster algorithms for Hermite-Padé approximation and bivariate interpolation. In [137], C.-P. Jeannerod, C. Mouilleron, and G. Villard proposed a new, simpler algorithm for Vandermonde-like and Cauchy-like linear systems, showing that the elimination-based compression steps used by classical algorithms are in fact unnecessary. Based on componentwise perturbation analyses, G. Villard showed in [126] how to compute certified and effective error bounds for the R factor of the matrix QR factorization.
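The flavor of lattice basis reduction is already visible in dimension two, where the classical Lagrange-Gauss algorithm plays the role that LLL plays in higher dimensions. A toy Python sketch (for illustration only; this is not the floating-point LLL of [120], and exact rationals stand in for its floating-point machinery):

```python
from fractions import Fraction

def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a basis (u, v) of a 2-dimensional
    integer lattice (u, v assumed linearly independent).
    On return, u is a shortest nonzero vector of the lattice."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                          # keep u the shorter vector
        m = round(Fraction(dot(u, v), dot(u, u)))  # exact nearest integer
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])   # size-reduce v against u

# A very skewed basis of Z^2 is reduced back to the unit vectors:
assert lagrange_gauss((1, 0), (100, 1)) == ((1, 0), (0, 1))
```

The same two ingredients, swapping to keep vectors ordered by length and size-reducing one vector against another, are what LLL iterates over an n-dimensional basis.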
This perturbation-based approach has been used to certify the LLL-reducedness of the output bases of floating-point and heuristic LLL variants. In order to compute a guaranteed enclosure of the solution of a linear system, H.D. Nguyen and N. Revol have devised an efficient implementation that makes use of fast existing methods for linear system solving [142]. C.-P. Jeannerod, N. Louvet, N. Revol, and G. Villard showed in [135] how automatic differentiation can be used to efficiently compute the condition numbers of various matrix problems.

Self-assessment: The strong points here are state-of-the-art complexity results and implementations in the fields of algebraic computation and lattice reduction, as well as new insights into the effective numerical certification of linear algebra. Note that the results around floating-point LLL are being used by other members of the team, e.g., for producing polynomial approximations (see 3.3.3).

3.3.5 Correct Rounding of Functions

List of participants: Nicolas Brisebarre (CR), Sylvain Chevillard (PhD), Florent de Dinechin (MCF), Mioara Joldes (PhD), Christoph Lauter (PhD), Vincent Lefèvre (CR), Jean-Michel Muller (DR), Nathalie Revol (CR), Damien Stehlé (CR).

Keywords: elementary functions, libm, correct rounding, double precision, interval arithmetic, machine-assisted proofs, power function.

Scientific issues, goals, and positioning of the team: The original IEEE Standard for Floating-Point Arithmetic (1985) required that the four arithmetic operations and the square root be correctly rounded: the returned result should always be the floating-point number nearest the exact result. This requirement has many useful properties; among them, the fact that the operations are fully specified makes it possible to design algorithms and proofs that rely on these specifications. The 1985 standard did not require anything concerning the elementary functions (sine, exponential, logarithm, etc.) because of a problem called the Table Maker's Dilemma: an elementary function is evaluated to some internal accuracy (higher than the target precision), and the result is then rounded to the target precision. What is the minimum internal accuracy needed to ensure that rounding this evaluation is equivalent to rounding the exact result, for all possible inputs? This question cannot be answered in a simple manner; in the general case (an arbitrary function), correctly rounding a function requires arbitrary precision, which may be very slow and resource-consuming. In the previous years we have focused on computing bounds on the intermediate precision required for correctly rounding some elementary functions in IEEE-754 double precision. This allows us to offer correct rounding with an acceptable overhead: we have experimental code in which the cost of correct rounding is negligible on average, and less than a factor of 10 in the worst case.

Major results (Jan. 2006 - Dec. 2009): G. Hanrot, V. Lefèvre, D. Stehlé and P. Zimmermann [102] studied the problem of finding hard-to-round cases of a periodic function for large floating-point inputs. C. Lauter and V. Lefèvre designed and proved a new algorithm to detect the rounding boundaries of the power function x^y in double precision [36]. N. Brisebarre and J.-M. Muller gave bounds that make it possible to round algebraic functions correctly [9]. V. Lefèvre, D. Stehlé and P.
Zimmermann showed that the search for worst (i.e., hardest-to-round) cases, started by Lefèvre and Muller in 1999, can be extended to the decimal64 format, by determining the worst cases of the decimal64 exponential function [112]. In addition, the programs searching for worst cases in double precision were improved, and many new worst-case results were found. F. de Dinechin, C. Lauter and J.-M. Muller published an algorithm for evaluating correctly rounded logarithms in double precision [17]. C. Lauter and F. de Dinechin worked on automating the generation of machine-optimized polynomial approximations to a function [111]: the resulting polynomial is implemented as a C program resorting to a minimal amount of double-double or triple-double arithmetic where needed, and comes with a formal proof. S. Chevillard and N. Revol designed an algorithm for computing the special function erf in arbitrary precision and with correct rounding [75].

Key reference: J.-M. Muller, Elementary Functions: Algorithms and Implementation, 2nd edition, Birkhäuser, 2006 [149].

Self-assessment: Our work on the correct rounding of functions persuaded the IEEE 754 revision committee to recommend that some elementary functions be correctly rounded; this is probably the biggest success of Arénaire in recent years. The new standard was adopted in August 2008. CRlibm functions should now move from academic demonstrators to default functions in the GNU libc, but this requires considerable development effort.

3.3.6 Certified Computing

List of participants: Francisco José Cháves Alonso (PhD), Sylvain Chevillard (PhD), Marc Daumas (CR), Claude-Pierre Jeannerod (CR), Mioara Joldes (PhD), Nicolas Jourdan (Eng.), Christoph Lauter (PhD), Vincent Lefèvre (CR), Nicolas Louvet (MCF), Guillaume Melquiond (PhD), Hong Diep Nguyen (PhD), Nathalie Revol (CR), Guillaume Revy (PhD), Gilles Villard (DR).
Keywords: floating-point arithmetic, roundoff errors, interval arithmetic, arbitrary precision, Taylor models, certified linear system solving, global optimization / supremum norm, standardization, formal proof, theorem prover, Coq, PVS.

Scientific issues, goals, and positioning of the team: The certification of floating-point computations gives guarantees on the quality of the computed results. Typically, such a guarantee is a bound on the error between the exact, mathematical result and the floating-point, computed result. Several levels of confidence can be reached. The first level consists in establishing an error bound, using properties of floating-point arithmetic or interval arithmetic. A further level of confidence consists in generating a certificate that can be checked by formal proof, using a theorem prover such as Coq or PVS.

Major results (Jan. 2006 - Dec. 2009): Arénaire is a leader in the standardization of interval arithmetic, within the IEEE P1788 working group [106] and within the STL of C++ [14, 130]. Interval arithmetic has been used to certify, efficiently in terms of both time and accuracy, the floating-point solution of a linear system [142]. Arbitrary-precision interval arithmetic has been implemented in the C library MPFI. Taylor models based on this arithmetic have been developed and used to obtain guaranteed bounds on the error between a function and its approximating polynomial [74, 73].
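The basic mechanism of such interval enclosures, outward rounding of each bound, can be sketched in a few lines of Python (a toy illustration, not MPFI; `math.nextafter`, available since Python 3.9, is used here as a crude substitute for true directed rounding modes):

```python
import math

def iadd(x, y):
    """Interval addition with outward rounding: for any a in x and b in y,
    the exact sum a + b is guaranteed to lie in the returned interval."""
    lo = math.nextafter(x[0] + y[0], -math.inf)  # nudge the lower bound down
    hi = math.nextafter(x[1] + y[1], math.inf)   # nudge the upper bound up
    return (lo, hi)

# An interval enclosing the real number 1/10 (which is not a binary64 float):
x = (math.nextafter(0.1, 0.0), 0.1)

acc = (0.0, 0.0)
for _ in range(10):
    acc = iadd(acc, x)

# The exact value 10 * (1/10) = 1 is enclosed, even though summing the
# float 0.1 ten times does not give 1.0 in binary64:
assert acc[0] <= 1.0 <= acc[1]
```

The width of `acc` is also a computable, guaranteed measure of the accumulated roundoff, which is the property the certified linear system solver of [142] exploits at scale.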

Formal proof can now make use of floating-point arithmetic and interval arithmetic: interval arithmetic (and more specifically Taylor models) has been added to PVS [72, 131, 177], and both arithmetics have been added to Coq [77]. This is part of the Gappa software [182], which establishes tight bounds on roundoff errors and keeps track of the computation paths, transforming them so that they can be checked by a theorem prover (Coq). Gappa has been successfully applied to geometric-computing codes, to check geometric certificates [128, 37], and to certify the code produced by Sollya for the CRlibm library [81, 163] as well as significant parts of the code of the FLIP library [166, 105]. Finally, a study has begun on LEMA (Langage pour les Expressions Mathématiques Annotées), a language that enables the programmer to represent not only the floating-point code, but also its intended semantics and the programmer's knowledge; this language is based on MathML.

Self-assessment: Arénaire is playing a leading role in the standardization of interval arithmetic, a position gained through its joint expertise in interval arithmetic and floating-point arithmetic. We have developed a state-of-the-art interval-arithmetic implementation of linear system solving. Our work is pioneering in the field of certification and formal proof for floating-point computations; in particular, a main asset of the CRlibm and FLIP libraries, besides their execution time, is that significant parts of their code are certified. Our future directions are still more automation, and the completion of a standard for interval arithmetic. A weakness is that our expertise in formal proof is fluctuating: some members who were specialists in formal proof have left the team, and new people arrive on temporary positions to reinforce this direction.
We have to secure this field, either through established collaborations with experts from other teams or through the recruitment of an expert in formal proof, in connection with computer arithmetic.

3.3.7 Hardware arithmetic operators

List of participants: Jean-Luc Beuchat (postdoc), Nicolas Brisebarre (CR), Jérémie Detrey (PhD), Florent de Dinechin (MCF), Romain Michard (PhD), Jean-Michel Muller (DR), Bogdan Pasca (PhD), Arnaud Tisserand (CR), Nicolas Veyrat-Charvillon (PhD).

Keywords: ASIC, FPGA, circuit generator, fixed-point, floating-point, arithmetic operators, function evaluation, integrated circuit, digit-recurrence algorithms, division, square root, exponential, logarithm, trigonometric, sine, cosine, tridiagonal systems, modular multipliers.

Scientific issues, goals, and positioning of the team: Hardware arithmetic operators are at the core of any computing system, large or small. This research domain has shifted from integer to floating-point operators, with renewed interest in FMA and decimal architectures due to the revision of the IEEE-754 standard. The team produces novel hardware-oriented algorithms as well as established arithmetic libraries for reconfigurable circuits (FPGAs). We also investigate operators for cryptography.

Major results (Jan. 2006 - Dec. 2009): M. D. Ercegovac (U.C.L.A.) and J.-M. Muller have introduced a digit-recurrence algorithm for evaluating complex square roots [24]. They have adapted Ercegovac's E-method to the digit-recurrence solution of some tridiagonal linear systems [96] and to the evaluation of complex polynomials [98, 25], and have designed operators for complex arithmetic [97, 93]. J. Detrey and F. de Dinechin have explored hardware algorithms specifically designed for elementary function evaluation on FPGAs. They have introduced table-based methods for the exponential and logarithm [21], then for sine and cosine [90, 91], up to single precision.
For larger precisions, they have introduced a recurrence algorithm well suited to the fine-grain look-up table (LUT) structure of FPGAs [92]. These operators are available as part of the FPLibrary project (see 3.8.1). F. de Dinechin has worked with O. Creţ, I. Trestian, R. Tudoran, L. Darabant, and L. Vǎcariu, all from UT Cluj-Napoca, on the FPGA acceleration of a large floating-point physics simulation for a biomedical application [125, 15]. A conclusion was that efficient floating-point on FPGAs should not use the same operators as processors, but should explore original, application-specific ones [162]. This is now the subject of Bogdan Pasca's PhD and of the FloPoCo project, which supersedes FPLibrary (see 3.8.1). In addition to the previous elementary functions, the operators studied so far include multipliers by a constant [69, 68], application-specific floating-point accumulators and dot products [85], squarers and other optimized multipliers [84], and square root [79]. Moreover, FloPoCo is an architecture-generator framework with several novel features, such as target-specific operator specialization, automatic frequency-directed pipelining, and test-bench generation [80].
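One classical idea behind constant multipliers such as those of [69, 68] is to recode the constant in canonical signed-digit (CSD) form, so that the product needs only shifts and additions/subtractions. A hedged Python model of that recoding (an illustration of the general technique, not the algorithms of the cited papers):

```python
def csd(c):
    """Canonical signed-digit recoding of a positive integer: digits in
    {-1, 0, 1}, least significant first, with no two adjacent nonzeros."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)   # 1 if c % 4 == 1, -1 if c % 4 == 3
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def mul_by_const(x, digits):
    """Multiply a nonnegative x by the recoded constant using only
    shifts and adds/subs, as a hardware constant multiplier would."""
    return sum(d * (x << i) for i, d in enumerate(digits) if d)

d = csd(221)                    # 221 = 11011101 in binary: 6 nonzero bits
assert mul_by_const(3, d) == 3 * 221
adders = sum(1 for t in d if t) - 1
print(d, adders)                # CSD uses fewer add/sub stages than binary
```

For 221, plain binary needs 5 adders (6 nonzero bits) while the CSD form needs only 3, the kind of saving a circuit generator searches for systematically.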

With A. Tisserand (LIRMM), N. Veyrat-Charvillon used special polynomial approximations to optimize arithmetic operators [38]. R. Michard, N. Veyrat-Charvillon and A. Tisserand also studied hardware operators for evaluating power functions [118]. J.-L. Beuchat and J.-M. Muller have exhibited a family of modular multipliers suited to FPGA applications [4]; in collaboration with T. Miyoshi and E. Okamoto (Tsukuba Univ.), they have written a survey on multiplication in GF(p) and GF(p^n). In collaboration with E. Okamoto (Univ. Tsukuba, Japan), J.-L. Beuchat, N. Brisebarre and J. Detrey have introduced new arithmetic operators for pairing-based cryptography [60, 58, 59, 2]; their paper [58] won the best paper award at CHES 2007. In collaboration with R. Glabb and G. Jullien (Calgary University) and L. Imbert and A. Tisserand (LIRMM, Montpellier), N. Veyrat-Charvillon designed a multi-mode operator for SHA-2 hash functions [27].

Self-assessment: The strong point here is the shared Arénaire expertise in optimization, automation, and validation, which enables open-source projects such as FPLibrary/FloPoCo that are widely used worldwide. The main weak point is a lack of hands-on expertise in VLSI technology. For instance, power consumption, which is increasingly important in deep sub-micron technologies, should probably be considered a first-class objective along with cost, performance and accuracy. This weakness should be addressed either by recruiting a VLSI expert or by building stronger collaborations with the VLSI community or with companies designing VLSI chips.

3.4 Application domains and social or economic impact

Keywords: arithmetic operator, hardware implementation, dedicated circuit, control, validation, proof, numerical software, certified computing.

Our expertise covers application domains for which the quality, such as the efficiency or safety, of the arithmetic operators is an issue.
On the one hand, it can be applied to hardware-oriented developments, for example to the design of arithmetic primitives specifically optimized for the target application and platform. On the other hand, it can also be applied to software programs when numerical reliability issues arise: these issues may involve improving the numerical stability of an algorithm, computing guaranteed results (either exact results or certified enclosures), or certifying numerical programs. The application domains of hardware arithmetic operators are digital signal processing, image processing, embedded applications, reconfigurable computing and cryptography; more generally, hardware arithmetic operators are used in all numerical calculations. Software support of floating-point arithmetic mostly applies to media processing on embedded chips that have no FPU. The development of correctly rounded elementary functions is critical to the reproducibility of floating-point computations. Exponentials and logarithms, for instance, are routinely used in accounting systems for interest calculation, where roundoff errors have a financial meaning. Our current focus is on bounding the worst-case time of such computations, which is required to allow their use in safety-critical applications, and on proving the correct rounding property of a complete implementation. Certifying a numerical application usually requires bounds on rounding errors and on the ranges of variables. Some of the tools we develop compute or verify such bounds. For increased confidence in the numerical applications, they may also generate formal proofs of the arithmetic properties; these proofs can then be machine-checked by proof assistants like Coq. Arbitrary-precision interval arithmetic can be used in two ways to validate a numerical result.
To quickly check the accuracy of a result, one can replace the floating-point arithmetic of the numerical software that computed this result with high-precision interval arithmetic and measure the width of the interval result: a tight interval corresponds to good accuracy. When a guaranteed enclosure of the solution is required, more sophisticated procedures, such as those we develop, must be employed: this is the case for global optimization problems. The design of faster algorithms for matrix polynomials provides faster solutions to various problems in control theory, especially those involving multivariable linear systems. Lattice reduction algorithms have direct applications in public-key cryptography. They also arise naturally in computer algebra, in the search for good constrained approximations to functions and for worst cases of correct rounding. A new and promising field of application is communication theory.
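The first, "quick check" mode of validation described above can be mimicked with the standard library alone (a hedged sketch: Python's `decimal` module stands in here for arbitrary-precision interval arithmetic such as MPFI):

```python
from decimal import Decimal, getcontext

def naive_sum(values):
    """Plain left-to-right floating-point summation."""
    s = 0.0
    for v in values:
        s += v
    return s

values = [0.1] * 1000
f = naive_sum(values)                    # binary64 result

getcontext().prec = 50                   # recompute with 50 significant digits
ref = sum(Decimal(v) for v in values)    # Decimal(v) is the exact float value
error = abs(Decimal(f) - ref)

print(f, ref, error)
assert error > 0                         # the float result drifted measurably
```

Here the discrepancy (about 1e-12) immediately flags the accumulated roundoff; interval arithmetic goes further by turning such an observation into a guaranteed enclosure rather than a heuristic comparison.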

3.5 Visibility and Attractivity

Prizes and Awards

People awarded: D. Stehlé: second prize of the SPECIF association PhD award; second prize of the ASTI association PhD award.

Awarded communications: N. Brisebarre and J. Detrey: CHES 2007 best paper award (joint work with J.-L. Beuchat and E. Okamoto, Tsukuba Univ., Japan).

Contribution to the Scientific Community

Management of Scientific Organisations: J.-M. Muller has been chargé de mission at the Sciences et Technologies de l'Information et de l'Ingénierie institute of the French CNRS since April. N. Revol is chair of the IEEE P1788 working group for the standardization of interval arithmetic, launched in June 2008. G. Villard is a member of the board of the CNRS GDR Informatique Mathématique (headed by B. Vallée).

Administration of Professional Societies

Editorial Boards: IEEE Transactions on Computers, J.-M. Muller (guest editor of a special section on computer arithmetic, Feb. 2009); JUCS (Journal of Universal Computer Science), J.-M. Muller; Journal of Symbolic Computation, G. Villard; TCS (Theoretical Computer Science), M. Daumas and N. Revol (guest editors of a special issue on Real Numbers and Computers, Feb. 2006); LNCS (Lecture Notes in Computer Science) vol. 5045, P. Hertling, C. Hoffmann, W. Luther, and N. Revol, special issue on Reliable Implementation of Real Number Algorithms: Theory and Practice.

Organisation of Conferences and Workshops: J.-M. Muller was co-program chair of the 18th IEEE Symposium on Computer Arithmetic (2007). G. Villard was chair of the steering committee of the International Symposium on Symbolic and Algebraic Computation (ISSAC). F. de Dinechin, C.-P. Jeannerod, V. Lefèvre, N. Louvet, and N. Revol are the organizers of RAIM 2009 (Rencontres Arithmétique de l'Informatique Mathématique du GDR IM, October 26-28, 2009, at LIP). C.-P. Jeannerod co-organized (jointly with F. Rastello) two one-day workshops at LIP on the theme of compiler optimization for embedded systems (Sept. 12, 2007 and June 15, 2009). J.-M. Muller is a member of the steering committees of the RNC (Real Numbers and Computers) and ARITH (IEEE Symposium on Computer Arithmetic) conference series. He co-organized the MSR 2007 conference in Lyon. D. Stehlé co-organized LLL+25, an international conference held in Caen, in June/July 2007, in honour of the 25th anniversary of the LLL algorithm. N. Revol co-organized the Dagstuhl seminar on Reliable Implementation of Real Number Algorithms: Theory and Practice (2006) and the 3rd MathLogAps Workshop (2007).

Program committee members:
International Symposium on Symbolic and Algebraic Computation (ISSAC 06), C.-P. Jeannerod.
18th IEEE Symposium on Computer Arithmetic, J.-M. Muller (co-program chair).
19th IEEE Symposium on Computer Arithmetic, J.-M. Muller.
17th, 18th, 19th, and 20th IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), J.-M. Muller, 2006, 2007, 2008, 2009.
International Conference on Field-Programmable Logic and Applications (FPL), F. de Dinechin, 2007, 2008.
International Conference on Field-Programmable Technology (FPT), F. de Dinechin, 2006.
Symposium en Architectures de Machines, F. de Dinechin, steering committee member, 2008.
7th and 8th Conference on Real Numbers and Computers (RNC), N. Brisebarre, 2006 and 2008.
Computability and Complexity in Analysis (CCA 08), N. Revol.
First International Conference on Symbolic Computation and Cryptography (SCC 08), D. Stehlé.
International Symposium on Symbolic and Algebraic Computation (ISSAC 07), Parallel Symbolic Computation (PASCO 07), Computer Algebra in Scientific Computing (CASC 06), Transgressive Computing 2006, G. Villard.

International expertise

Board of examiners for foreign PhDs: J.-M. Muller was a reviewer of the PhD dissertation of Á. Vázquez (Univ. Santiago de Compostela, 2009).

Contributions to Standardization Bodies: F. de Dinechin, C. Lauter, V. Lefèvre, G. Melquiond, and J.-M. Muller have participated in the balloting process for the revision of the IEEE-754 standard for floating-point arithmetic. V. Lefèvre participates in the Austin Common Standards Revision Group (for the revision of POSIX, IEEE Std 1003.1). N. Revol prepared the project approval request for an IEEE working group on the standardization of interval arithmetic, approved in June 2008 as the IEEE P1788 project, and she has been elected chair of this working group.

National expertise: INRIA's Commission d'Évaluation, J.-M. Muller, 2006; ANR "Architectures du futur", J.-M. Muller (member of the evaluation committee, 2006); ANR "Calcul intensif et simulation", J.-M. Muller (member of the steering committee, 2007); steering committee of Cluster ISLE (Informatique, Signal et Logiciels Embarqués), Région Rhône-Alpes, J.-M. Muller (2006); Conseil Scientifique, Grenoble INP, J.-M. Muller (member, since 2008). G. Villard was a member of the evaluation committee of the ANR program "Défis".

Board of examiners for French PhDs and Habilitations: C.-P. Jeannerod was on the PhD committee of S.-K. Raina (ÉNS Lyon, 2006). J.-M. Muller has served on the boards of examiners of 11 French PhD dissertations since 2006 (Melquiond, Raina, Detrey, Veyrat-Charvillon, Rivaille, Fournel, Louvet, Grosse, Michard, Lauter, Chevillard) and of 2 French habilitations (Messine, de Dinechin). N. Revol was on the PhD committees of M. Charikhi (U. Paris 6, 2005), L. Lamarque (U. Bourgogne, 2006) and F.J. Cháves Alonso (ÉNS Lyon, 2007).

54 40 Part 3. Arénaire G. Villard was in the habilitation committee of L. Imbert, P. Loidreau, member of the board of examiners for the PhD defense of O. Bouissou, F. Chávez, N. Méloni, C. Pernet.; N. Brisebarre was in the PhD committee of S. Chevillard. National recruiting committees: J.-M. Muller was a member of the final recruiting committee ( jury d admission ) CR1 and CR2 of CNRS in Local recruiting committee: N. Brisebarre belonged to the hiring committee ( commission de spécialistes ) of U. St-Étienne (pure mathematics, ). C.-P. Jeannerod belonged to the hiring committees for junior software-development engineers ( ingénieurs associés ) at INRIA Rhône-Alpes in 2006 and From 2006 to 2008, J.-M. Muller belonged to the commission de spécialistes (permanent recruiting committee) of ENS Lyon. After 2009, the rules changed so that there is a new committee for each position. J.-M. Muller belonged to a recruiting committee for a position of Maître de Conférences (senior lecturer) in ENS Lyon in April-May 2009; for a position of Maître de Conférences in Bordeaux University in April-May 2009; and to the hiring committee for junior scientists (CR2) at INRIA Rennes - Bretagne Atlantique in N. Revol belonged to the commission de spécialistes of U. Joseph Fourier Grenoble (applied math, ) and ÉNS Lyon (computer science, ). N. Revol belonged to the hiring committees for junior scientists at INRIA Grenoble - Rhône-Alpes in 2007 and She belongs to the hiring committee for postdoc at INRIA Grenoble - Rhône-Alpes since G. Villard organized the evaluation seminar of INRIA SymB theme, November 14-15, 2006, Paris. He belonged to the commissions de spécialistes of U. Lille, U. Perpignan, UJF Grenoble, and to a hiring committee at U. Perpignan (2009). Evaluation committees: in the context of his chargé de mission position in CNRS, J.-M. 
Muller represented the CNRS (as an observer) at the evaluation committees of the following laboratories: LSIS (Marseille, December 2006), LIMOS (Clermont-Ferrand, December 2006), LIF (Marseille, January 2007), LINA (Nantes, January 2007), LIFC (Besançon, February 2007), LI Tours (Tours, June 2007), GSCOP (Grenoble, January 2008), LIP6 (Paris, January 2008), LIPN (Villetaneuse, January 2008), LIX (Palaiseau, February 2008), LORIA (Nancy, February 2008), LSV (Cachan, December 2008) and LIFL (Lille, December 2008).
Public Dissemination
C.-P. Jeannerod and N. Revol presented research careers to high-school students during the 10th Mondial des Métiers (February 2006). J.-M. Muller gave a popular-science talk, Représentation des nombres et calcul sur ordinateur, at the Maison du Livre, de l'Image et du Son, Villeurbanne (Nov. 21). V. Lefèvre and J.-M. Muller wrote a popular-science article for Images des Mathématiques. N. Revol regularly visits high schools (3 to 5 visits per year).
3.6 Contracts and grants
External contracts and grants (Industry, European, National)
Ministry Grant ACI Security in Computer Science
OCAM (Opérateurs Cryptographiques et Arithmétique Matérielle), 3 years, headed by N. Sendrier (SECRET, INRIA Rocquencourt), 43 k€. Keywords: digital signature, arithmetic operator, FPGA. Participants: J.-L. Beuchat, A. Tisserand, G. Villard. The OCAM project was done in collaboration with the Codes team (now called SECRET) at INRIA Rocquencourt and the Arithmétique Informatique team of the LIRMM laboratory in Montpellier (see fr/codes/ocam). The goal of OCAM was the development of hardware operators for cryptographic applications based on the algebraic theory of codes. The FPGA implementation of a new digital signature algorithm is used as a first target application (see 6.1).

Ministry Grant ACI New Interfaces of Mathematics
GAAP (Étude et outils pour la Génération Automatique d'Approximants Polynomiaux efficaces en machine), 4 years, headed by N. Brisebarre, 14 k€. Keywords: polynomial approximation, minimax approximation, floating-point arithmetic, polytope, linear programming. Participants: N. Brisebarre, S. Chevillard, J.-M. Muller, A. Tisserand, S. Torres. The GAAP project was a collaboration with the LaMUSE laboratory (U. Saint-Étienne). The goal was the development of software tools for obtaining very good polynomial approximants under various constraints on the size in bits and the values of the coefficients. The target applications were software and hardware implementations, for instance in embedded systems. The software output of this project consists of the C libraries Sollya and MEPLib.
ANR Gecko Project
Geometrical Approach to Complexity and Applications, 3 years, headed by B. Salvy (Algorithms, INRIA Rocquencourt), 6 k€. Keywords: geometry, algorithm analysis, polynomial matrix, integer matrix. Participant: G. Villard. The other participating teams were the École Polytechnique, Univ. Nice Sophia-Antipolis, and Univ. P. Sabatier in Toulouse. The project, at the meeting point of numerical analysis, effective methods in algebra, symbolic computation and complexity theory, aimed to improve solution methods for algebraic or linear differential equations by taking geometry into account. Advances in Lyon concerned taking geometrical structure into account in matrix problems.
ANR EVA-Flo Project
Évaluation et Validation Automatiques pour le calcul Flottant, 4 years, headed by N. Revol, 130.5 k€. Keywords: automation, floating-point computation, specification and validation. Participants: all of Arénaire, Dali (LP2A, U. Perpignan), Measi (LIST, CEA Saclay) and Tropics (INRIA Sophia-Antipolis).
This project focuses on the way a mathematical formula is evaluated in floating-point arithmetic. The approach is threefold: study of algorithms for approximating and evaluating mathematical formulae, validation of such algorithms, and automation of the process.
ANR LaRedA Project
Lattice Reduction Algorithms, 3 years, headed by Brigitte Vallée (CNRS/GREYC) and Valérie Berthé (CNRS/LIRMM), 10 k€. Keywords: lattice reduction, average-case analysis, experiments. Participants: Ivan Morel, Damien Stehlé. The aim of the project is to finely analyze lattice reduction algorithms such as LLL, using experiments, probabilistic tools and dynamic analysis. Among the major goals are the average-case analysis of LLL and its output distribution. In Lyon, we concentrate on the experimental side of the project (using fplll and MAGMA) and on the applications of lattice reduction algorithms.
ANR TCHATER Project
Terminal Cohérent Hétérodyne Adaptatif TEmps Réel, 3 years, headed by Alcatel-Lucent France, 131 k€. Keywords: optical networks, 40 Gb/s, digital signal processing, FPGA. Participants: Florent de Dinechin, Gilles Villard. The TCHATER project is a collaboration between Alcatel-Lucent France, E2V Semiconductors, GET-ENST and the INRIA Arénaire and ASPI project-teams. Its purpose is to demonstrate a coherent terminal operating at 40 Gb/s using real-time digital signal processing and efficient polarization division multiplexing. In Lyon, we will study the FPGA implementation of specific algorithms for polarization demultiplexing and forward error correction with soft decoding.
Région Rhône-Alpes Grant
3 years (Oct. 2003 - Sept. 2006), headed by C.-P. Jeannerod and J.-M. Muller. Keywords: emulation of floating-point arithmetic, integer processor. Participants: C.-P. Jeannerod, J.-M. Muller, S.-K. Raina, A. Tisserand. This project, in collaboration with STMicroelectronics, was supported by both the Région Rhône-Alpes and STMicroelectronics.
The goal was to design and implement an efficient floating-point library for some embedded processors (namely the ST200 family) that only have integer arithmetic units. This project has resulted in the first version of the FLIP software library and the PhD of S.-K. Raina.

Région Rhône-Alpes Grant
3 years (Oct. 2008 - Sept. 2011), headed by C.-P. Jeannerod and J.-M. Muller. Keywords: floating-point arithmetic, function evaluation, compilation, approximation. Participants: Nicolas Brisebarre, Sylvain Chevillard, Claude-Pierre Jeannerod, Mioara Joldes, Jean-Michel Muller, Nathalie Revol, Guillaume Revy, Gilles Villard. Since October 2008, we have held this 3-year grant from the Région Rhône-Alpes; it funds a PhD student, Mioara Joldes. The project consists in automating as much as possible the generation of code for approximating functions. Instead of calling functions from libraries, we wish to elaborate approximations at compile time, in order to directly approximate compound functions, or to take into account information (typically, input range information) that might be available at that time. In this project, we collaborate with the STMicroelectronics Compilation Expertise Center in Grenoble (C. Bertin, H. Knochel, and C. Monat). STMicroelectronics is funding another PhD grant on these themes (see the next item).
STMicroelectronics CIFRE Ph.D. Grant
3 years (Mar. 2009 - Feb. 2012), headed by C.-P. Jeannerod and J.-M. Muller. Jingyan Jourdan Lu is supported by a CIFRE Ph.D. grant from STMicroelectronics (Compilation Expertise Center, Grenoble) on the theme of arithmetic code generation and specialization for embedded processors. Advisors: C.-P. Jeannerod and J.-M. Muller (Arénaire), C. Monat (STMicroelectronics).
Minalogic/EMSOC SCEPTRE Grant
3 years (Oct. 2006 - Dec. 2009), headed by C.-P. Jeannerod and J.-M. Muller. Keywords: synthesis of algorithms, compilation, SoCs, embedded systems. Participants: C.-P. Jeannerod, N. Jourdan, J.-M. Muller, G. Revy, G. Villard. Since October 2006, we have been involved in SCEPTRE, a project of the EMSOC cluster of the Minalogic competitiveness cluster. This project, led by STMicroelectronics, aims at providing new techniques for implementing software on systems-on-chip.
Within Arénaire, we are focusing on the generation of optimized code for the accurate evaluation of mathematical functions; our partner at STMicroelectronics is the Compiler Expertise Center (Grenoble).
CNRS Grant PEPS FiltrOptim
Calcul efficace et fiable en traitement du signal : optimisation de la synthèse de filtres numériques en virgules fixe et flottante, one year (2008), headed by N. Brisebarre, 10 k€. Keywords: digital signal processing, fixed point, floating point, constrained coefficients. Participants: N. Brisebarre, S. Chevillard, F. de Dinechin, M. Joldes, J.-M. Muller. The FiltrOptim project is part of the 2008 Programmes Exploratoires Pluridisciplinaires (PEPS) of the ST2I department of the CNRS. The participants belong to the CAIRN (IRISA, Lannion) and Arénaire (LIP, ENS Lyon) teams. The goal is to develop methods for the efficient implementation of a numerical signal-processing filter under given constraints, typically a frequency-domain template and the requested coefficient formats.
CNRS Grant PEPS ELRESAS
Réduction de réseaux : solutions exactes et approchées. Applications à la cryptologie et à l'arithmétique des ordinateurs, one year (2009), headed by N. Brisebarre, 5.5 k€. Keywords: Euclidean lattice, LLL, SVP, CVP, cryptology, computer arithmetic. Participants: Nicolas Brisebarre, Sylvain Chevillard, Ivan Morel, Damien Stehlé, Gilles Villard. The ELRESAS project is part of the 2009 Programmes Exploratoires Pluridisciplinaires (PEPS) of the ST2I department of the CNRS. The participants belong to the Arénaire (LIP, ENS Lyon), CACAO (LORIA, Nancy), CASYS (LJK, Grenoble), MOAIS (LIG, Grenoble) and Théorie des nombres (Inst. C. Jordan, Univ. Lyon 1) teams. The general goal is to improve the algorithmics of Euclidean lattices and some of their applications.
PAI Brâncuşi (France-Romania)
Reconfigurable systems for biomedical applications, 2 years, headed by F. de Dinechin.
Keywords: FPGA, floating point, biomedical engineering. Participants: F. de Dinechin, B. Pasca. This PAI (programme d'actions intégrées) is headed by F. de Dinechin and O. Creţ, from the Technical University of Cluj-Napoca in Romania. It also involves the INRIA Compsys project-team in Lyon. Its purpose is to use FPGAs to accelerate the computation of the magnetic field produced by a set of coils. The Cluj-Napoca team provides the biomedical expertise, the initial floating-point code, the computing platform and general FPGA expertise. Arénaire contributes the arithmetic aspects, in particular the specialization and fusion of arithmetic operators, and error analysis. Compsys brings in expertise in automatic parallelization.

PHC Sakura - INRIA Ayame junior program (France-Japan)
Software and Hardware Components for Pairing-Based Cryptography, 2 years, headed by G. Hanrot (LORIA) and E. Okamoto (LCIS, Univ. of Tsukuba). Keywords: cryptography, (hyper)elliptic curves, pairing, finite-field arithmetic, hardware accelerator, FPGA. Participant: N. Brisebarre. The participants belong to the Arénaire (LIP, ENS Lyon) and CACAO (LORIA, Nancy) teams for the French part, and to the LCIS (Univ. of Tsukuba) and the Future University of Hakodate for the Japanese part. The goal of this project is to enhance software and hardware implementations of pairings defined over algebraic curves. We aim at developing a more efficient arithmetic over Jacobians of (hyper)elliptic curves, implementing genus-2 curve based pairings, and designing algorithms and implementations resistant to side-channel attacks.
CNRS Grant Échange de chercheurs 2007
Headed by D. Stehlé, 2.5 k€. Keywords: lattice reduction. Participant: D. Stehlé. In 2006, D. Stehlé obtained a grant to visit the Magma team at the University of Sydney, Australia. The aim of the visit, which took place in late 2007, was to investigate strong lattice reduction.
CNRS Grant Échange de chercheurs 2009
Headed by G. Villard, 5 k€. Keywords: lattice reduction. Participant: G. Villard. In 2008, G. Villard obtained a grant to visit the Magma team at the University of Sydney, Australia. The visit will take place before the end of 2009.
ARC Grant
Australian Research Council Discovery Grant on Lattices and their Theta Series, 3 years, headed by J. Cannon (University of Sydney) and R. Brent (Australian National University, Canberra), 10 k€. Keywords: strong lattice reduction, lattice theta series, security estimate of NTRU. Participant: D. Stehlé. J. Cannon, R. Brent and D. Stehlé obtained this funding to investigate short lattice point enumeration.
This is intricately related to strong lattice reduction, and is thus important for assessing the security of lattice-based cryptosystems such as NTRU. The main target of the project is to improve the theory and the algorithms related to lattice theta series.
Rhône-Alpes Region Grant Exploradoc
2 years (Jul. 2008 - Jul. 2010), headed by I. Morel. Keywords: fast LLL-type algorithms. Participant: I. Morel. I. Morel obtained this funding to cover the costs of his cotutelle PhD thesis between ENS Lyon (local advisor: G. Villard) and the University of Sydney (local advisor: J. Cannon). His research focuses on the development, analysis and implementation of fast LLL-type lattice reduction algorithms.
XtremeData University Program
1 year (2007), headed by F. de Dinechin. Keywords: high-performance computing, FPGA coprocessor. Participants: Florent de Dinechin, Bogdan Pasca. F. de Dinechin was included in the XtremeData University Program, co-sponsored by Altera, AMD, Sun Microsystems and XtremeData, Inc. He received a donation of an XD1000 Development System costing 15,000 US$, together with the associated programming tools.
Research Networks (European, National, Regional, Local)
Arénaire took part in the European Marie Curie network MathLogAps (Early Stage Research Training Site in MATHematical LOGic and APplicationS; see mathlogaps/). MathLogAps ran for 4 years, ending on August 31. It funded 9 short-term stays for students and 24 PhD theses, among them the PhD thesis of F.J. Cháves Alonso. It also subsidized 4 training workshops for students, one per year; one of these workshops was co-organized by Arénaire. Arénaire takes part in the Arithmétique and Calcul Formel groups of the GDR Informatique Mathématique (IM). Arénaire actively participates in national events organized by this GDR:

Gilles Villard heads the working group Calcul formel, arithmétique, protection de l'information, géométrie of the GDR IM. We are involved in the Journées RAIM (Rencontres Arithmétique de l'Informatique Mathématique) of the GDR IM: Nathalie Revol managed the track Computer Arithmetic of the 2007 session (Montpellier, June 22-24, 2007); Nicolas Brisebarre managed the track Computer Arithmetic of the June 2008 session (Lille, June 3-5, 2008); Arénaire will organize the October 2009 session (see RAIM09/). Arénaire is also actively involved in the GDR Architecture, système et réseau: J.-M. Muller, then F. de Dinechin, have been members of the steering committee of the Symposium on Architectures, and F. de Dinechin has given a lecture at the Archi09 winter school.
Internal Funding
CNRS, ENS and Ministry of Research internal funding: 10 k€ (2006), 13 k€ (2007), 11 k€ (2008) and 10 k€ (2009). INRIA internal funding: 26.2 k€ (2006), 26.9 k€ (2007), 25 k€ (2008) and 30 k€ (2009).
3.7 International collaborations resulting in joint publications
With joint publications: Centro de Investigación y de Estudios Avanzados del I.P.N., Mexico (F. Rodríguez Henríquez); CERN, Geneva, Switzerland (E. McIntosh, F. Schmidt); Future University-Hakodate, Japan (M. Shirase and T. Takagi); Intel Nizhniy Novgorod Laboratory, Russia (S. Maidanov); National Institute of Aerospace, USA (C. Muñoz); North Carolina State University, USA (E. Kaltofen); University of Southern Denmark, Odense, Denmark (P. Kornerup); Technical University of Cluj-Napoca, Romania (O. Creţ); University of Calgary, Canada (W. Eberly); University of California at Los Angeles, USA (M. Ercegovac, P. Dormiani); University of Louisiana at Lafayette, USA (R.B. Kearfott); University of Tsukuba, Japan (J.-L. Beuchat, T. Miyoshi and E. Okamoto); University of Waterloo, Canada (M. Giesbrecht, G. Labahn, A. Storjohann); University of Western Ontario, Canada (É. Schost); McGill University, Montreal, Canada (X.-W. Chang).
3.8 Software production and Research Infrastructure
Software plays an important role in Arénaire's production, and the software designed by Arénaire is available for download. We have designed:
- user-space software libraries of functions, such as CRlibm (the first library of elementary functions with proven correct rounding) and FLIP (a floating-point library for processors without hardware floating-point units);
- user-space hardware libraries and generators, such as FloPoCo, a generator of arithmetic cores for FPGAs;
- tools for helping to build or validate numerical software, such as Sollya (the Swiss army knife of elementary function development) and Gappa (a high-level proof assistant).
Thanks to their unique features, the user-space libraries have been widely disseminated worldwide. Our tools are designed for experts, and their diffusion is more restricted, but they are nevertheless internationally recognized. There are many dependencies between these tools, as illustrated in Figure 3.1.
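CRlibm's proven correct rounding, mentioned above, rests on the classic "evaluate with extra precision, then round once" strategy. A minimal Python sketch of that idea, using only the standard library (this is an illustration, not CRlibm's actual implementation):

```python
import math
from decimal import Decimal, getcontext

# Evaluate exp(1) with many extra digits, then perform a single
# final rounding to binary64. CPython's float(Decimal) conversion
# is correctly rounded to nearest.
getcontext().prec = 40
e_extended = Decimal(1).exp()   # ~40 correct decimal digits of e
e_rounded = float(e_extended)   # one correctly rounded conversion

assert e_rounded == math.e      # matches the nearest binary64 to e
```

The hard part, which CRlibm actually solves, is proving how much extra precision suffices for every input (the "table maker's dilemma"); this sketch simply picks a comfortable margin.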

Figure 3.1: Relationships between some Arénaire developments (edge types: "library developed using", "program links against"; general-purpose libraries: MPFR, MPFI, FPLLL; development tools: Gappa, Sollya; user-space tools and libraries: CRlibm, FLIP, FloPoCo).

Software Descriptions
CRlibm, a double-precision mathematical library with correct rounding
Type: library. CRlibm is an efficient mathematical library (libm) with proven correct rounding. It offers 21 elementary functions specified by the C99 standard. The availability and performance of CRlibm convinced the IEEE-754 revision committee that correct rounding is feasible at reasonable cost: correct rounding of elementary functions is recommended in the revised IEEE 754 standard adopted in June 2008. Status: free software, LGPL. State: stable. Current effort aims at integrating CRlibm into the GNU libc.
Sollya, a toolbox for manipulating numerical functions
Type: toolbox. Sollya is a numerical toolbox for manipulating numerical functions. Its distinguishing feature is the focus on safety: numerical results are certified, or a warning is produced. It was developed for CRlibm, but is now being used in several other projects. Sollya is an interactive tool with an interface similar to Maple, including a scripting language. It can also be used as a library. Sollya offers, for instance, fast and certified numerical routines for safe plotting, polynomial approximation, and computation of 1D infinity norms. It is currently the only tool that computes constrained polynomial approximations. Status: free software, CeCILL-C licence (GPL-compatible). State: stable.
FPLibrary, a library of floating-point operators for FPGAs
Type: library of hardware components. FPLibrary is a library of floating-point operators essentially targeted at FPGAs. The operators are parameterized in exponent width and mantissa width.

FPLibrary was the first hardware library to include floating-point elementary functions (exp, log and trig). It has been used in dozens of universities and companies worldwide. The trigonometric operators have been included in a standard set of benchmarks defined by the Altera corporation and the University of California, Berkeley. The development of FPLibrary stopped in 2008, and all new development effort went to FloPoCo (see below), which provides operators using the same floating-point format. Status: free software, LGPL. State: stable.
FloPoCo, a generator of hardware arithmetic cores
Type: generator of hardware components. FloPoCo is the successor of FPLibrary. It is a generator of non-standard arithmetic operators for FPGAs, with a focus on floating point. It supports research on application-specific operators designed for reconfigurable computing. FloPoCo inputs a list of operator specifications (20 operators currently) and outputs synthesizable VHDL. FloPoCo offers more floating-point operators than vendor tools; its operators have comparable or better performance, and are more portable and more flexible. Status: free software, LGPL. State: stable.
FLIP (Floating-point Library for Integer Processors)
Type: software library. FLIP is a C library for the efficient software support of binary32 floating-point arithmetic on processors without floating-point hardware units. Although portable, it is highly optimized for the ST200 family of processor cores. FLIP implements correctly rounded addition, subtraction, multiplication, division, and square root. It supports subnormal numbers as well as the four rounding-direction attributes required by the IEEE standard. FLIP is included in the standard development environment for the ST200.
Status: free software, LGPL. State: stable.
Gappa, a tool for certifying numerical programs
Type: software. Gappa is a tool intended to help verify and formally prove properties of numerical programs dealing with floating-point or fixed-point arithmetic. Given a logical property involving interval enclosures of mathematical expressions, Gappa tries to verify this property and generates a formal proof of its validity. This formal proof can be machine-checked by an independent tool such as the Coq proof checker. Gappa is used in several projects, including CRlibm, FLIP, Sollya, and CGAL. Status: free software, GPL, CeCILL. State: stable.
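The interval enclosures that Gappa propagates through an expression can be illustrated with a toy sketch (the helper `interval_mul` is hypothetical, and exact rationals stand in for Gappa's rigorous outward-rounded bounds):

```python
from fractions import Fraction

def interval_mul(a, b):
    """Enclosure of {x*y : x in a, y in b} for intervals a = (lo, hi), b = (lo, hi).

    For intervals, the extrema of the product are attained at endpoint pairs,
    so taking min/max over the four endpoint products gives a valid enclosure.
    """
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

# Enclose x*x for x in [0.1, 0.2], with exact rational endpoints.
x = (Fraction(1, 10), Fraction(2, 10))
assert interval_mul(x, x) == (Fraction(1, 100), Fraction(1, 25))
```

Gappa goes much further: it composes such enclosures across a whole program, accounts for rounding at each operation, and emits a proof term that Coq can check.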

Participation in GNU MPFR
Type: library. GNU MPFR is an efficient multiple-precision floating-point library with well-defined semantics (copying the good ideas from the IEEE-754 standard), in particular correct rounding. GNU MPFR provides about 80 mathematical functions, in addition to utility functions (assignments, conversions, etc.). Special data (Not a Number, infinities, signed zeros) are handled as in the IEEE-754 standard. Since the end of 2006, MPFR has been a joint project between the Arénaire and CACAO (LORIA, Nancy) project-teams, with CACAO as the main contributor. MPFR is now used, and even required, by GCC to evaluate constant expressions at compile time. It is also widely used within Arénaire, in particular by software such as MPFI, Gappa and Sollya, and in the search for hard-to-round cases. Status: free software, LGPL. State: stable. Dépôt APP: IDDN FR R P, 15 March.
Lattice package in MAGMA
Type: computational algebra system. MAGMA is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. MAGMA provides hundreds of mathematical functions in group theory, number theory, module theory, lattices, commutative algebra, representation theory, invariant theory, Lie algebras, algebraic geometry, arithmetic geometry, cryptography, etc. Damien Stehlé has actively participated in the update of the lattice functions; in particular, he wrote the new LLL functions and a series of functions related to the shortest lattice vector problem. Status: non-free, cost recovery (non-commercial proprietary), rights owned by the University of Sydney. State: stable, V2.15.
MPFI: Multiple Precision Floating-Point Interval Arithmetic
Type: library. MPFI is a C library for interval arithmetic using arbitrary precision (arithmetic and algebraic operations, elementary functions, operations on sets). It is used by scientists worldwide (France, Belgium, Germany, UK, USA, India, Colombia, etc.).
Status: free software, LGPL. State: stable.
fplll: lattice reduction using floating-point arithmetic
Type: library. fplll is a library for performing several fundamental tasks on Euclidean lattices, most notably computing an LLL-reduced basis and computing a shortest non-zero vector of a lattice. fplll is progressively becoming the reference implementation for lattice reduction. Last year, it was included in SAGE and adapted for inclusion in PARI/GP. A variant of it is also contained in the MAGMA computational algebra system (see above). Status: free software, LGPL. State: stable.
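For intuition about what "computing an LLL-reduced basis" means, here is the two-dimensional special case, the Lagrange-Gauss algorithm, as a self-contained toy sketch (fplll implements the general-dimension machinery with rigorous floating-point error control; the plain float division below is only adequate for small examples):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2-dimensional integer lattice basis.

    Returns a reduced basis (u, v) of the lattice spanned by the input
    vectors; on termination, u is a shortest non-zero lattice vector.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                       # keep u no longer than v
        m = round(dot(u, v) / dot(u, u))      # nearest integer to the projection
        if m == 0:
            return u, v                       # v cannot be shortened further
        v = (v[0] - m * u[0], v[1] - m * u[1])

# The skewed basis (1,0), (3,1) reduces to the orthogonal basis (1,0), (0,1).
assert gauss_reduce((1, 0), (3, 1)) == ((1, 0), (0, 1))
```

LLL can be seen as a relaxation of this size-reduce-and-swap loop to arbitrary dimension, which is where the floating-point subtleties that fplll addresses become essential.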

Metalibm, a code generator for elementary functions
Type: toolbox. The Metalibm project provides a tool for the automatic implementation of mathematical (libm) functions. A function f is automatically transformed into Gappa-certified C code implementing an approximation polynomial in a given domain with given accuracy. Metalibm has been used to produce large parts of CRlibm. Status: free software, LGPL. State: alpha.
Contribution to Research Infrastructures
F. de Dinechin is the Europractice representative for ENS Lyon and manages the servers, the simulation software (ModelSim) and the FPGA synthesis software (Xilinx ISE and Altera Quartus suites) used by the Arénaire and Compsys teams.
3.9 Industrialization, patents, and technology transfer
Patents (French, European and WorldWide): none.
Creation of startups: none.
Consulting activities: none.
3.10 Educational Activities
Supervision of Educational Programs
F. de Dinechin has been the coordinator of international exchange programs for the department of computer science of ENS Lyon. N. Revol supervised the class for PhD students entitled Applications de l'informatique à la recherche et au développement technologique (30 hours per year). N. Brisebarre belonged to the committee that prepared the next computer science Master program of ENS Lyon.
Teaching
Name | Course title (short) | Level | Institution | Hours | Academic years
N. Brisebarre | Analyse | L1 | Univ. St-Étienne | 80 | 2
N. Brisebarre | Initiation Maple/Matlab | M1 | Univ. St-Étienne | 20 | 2
N. Brisebarre | Analyse complexe | L3 | Univ. St-Étienne | 36 | 2
N. Brisebarre | Arithmétique des corps finis, des courbes ; applications à la cryptologie | M2 | ENS Lyon | 15 | 1
N. Brisebarre | Algèbre algorithmique | M2 | UCB Lyon 1 | – | –
F. de Dinechin | Architecture, Réseau et Système | L3 | ENS Lyon | 45 | 4
F. de Dinechin | Programmation orientée objet et génie logiciel | L3 | ENS Lyon | 45 | 1
F. de Dinechin | Informatique pour les non-informaticiens | L3 | ENS Lyon | 20 | 2

F. de Dinechin | Compilation | M1 | ENS Lyon | 45 | 2
F. de Dinechin | Algorithmes Arithmétiques | M1 | ENS Lyon | 45 | 2
F. de Dinechin | Nombres, opérateurs, arithmétique et circuits | M2 | ENS Lyon | 45 | 1
C.-P. Jeannerod | Algèbre algorithmique | M2 | UCB Lyon 1 | – | –
C.-P. Jeannerod | Algorithmes pour l'arithmétique | M1 | ENS Lyon | 45 | 2
C.-P. Jeannerod and G. Villard | Calcul algébrique | M2 | ENS Lyon | 45 | 1
V. Lefèvre | Arithmétique des ordinateurs | M2 | UCB Lyon 1 | – | –
N. Louvet | Systèmes d'exploitation | L3 | UCB Lyon 1 | – | –
N. Louvet | Algorithmique et programmation | L1 | UCB Lyon 1 | – | –
N. Louvet | Architectures matérielles et logicielles | L2 | UCB Lyon 1 | – | –
N. Louvet | Programmation en Java | L2 | UCB Lyon 1 | – | –
N. Louvet | Algorithmique numérique | M2 | UCB Lyon 1 | – | –
J.-M. Muller | Arithmétique virgule flottante | M2 | ENS Lyon | 45 | 2
J.-M. Muller | Fonctions élémentaires | M2 | ENS Lyon | 45 | 2
N. Revol | Validation in scientific computing | M2 | ENS Lyon | 45 | 1
D. Stehlé | Algorithmique des réseaux euclidiens et applications | M2 | ENS Lyon | 45 | 1
D. Stehlé | Algorithmique des réseaux euclidiens et applications | M2 | UCB Lyon 1 | – | –
Other teaching activities
N. Brisebarre has been the advisor for two M2 research internships (S. Chevillard in 2006 and M. Joldes in 2008). N. Brisebarre, F. de Dinechin, C.-P. Jeannerod, N. Revol and D. Stehlé have been examiners for the ÉNS admissions. F. de Dinechin has been the advisor for the M2 research internship of B. Pasca, for 3 final engineering training periods and for 2 L3 research training periods. He has participated each year in at least one research internship committee (L3, M1 or M2) for the department of computer science at ÉNS Lyon. He has given a lecture at the Archi 09 winter school. C.-P. Jeannerod was the advisor for two M2 research internships (G. Revy in 2006 and C. Mouilleron in 2008). He was also the advisor of three engineers (É. Bechetoille, N. Jourdan, and C. Fagbohoun in 2009). He participated in two internship committees at ÉNS Lyon (M1 in 2006 and M2 in 2009). D.
Stehlé has been the advisor for the L3 research internships of Fodil Ali Rached (ÉNS Lyon) and David Cadé (ÉNS Paris), and for the M2 research internship of Xavier Pujol (ÉNS Lyon). N. Louvet and J.-M. Muller have been the advisors of the M2 research internship of A. Panhaleux (ÉNS Lyon). V. Lefèvre gave practical sessions (first steps with MPFR) at the CEA-EDF-INRIA school Certified Numerical Computation at LORIA (October 2007), and a presentation of MPFR at the CNC 2 summer school (Certified Numerical Computation) at LORIA (June 2009).
3.11 Self-Assessment
Strong points
The Arénaire team is unique in the world in its focus and breadth. Arénaire has a recognized expertise in hardware arithmetic, software arithmetic cores, arithmetic algorithms, floating-point design and use, validation and formal proof of arithmetic cores, number-theoretic tools useful to computer arithmetic, and linear algebra. Most other computer arithmetic research groups, in France or abroad, focus on only a few of these fields. More importantly, this expertise is shared to a large extent by most Arénaire members. For instance, performance and cost are not the only metrics: accuracy is also always considered. Interval arithmetic is not only a subject of research, it is also pervasively used to build safer operators. Asymptotic complexity is studied, but so are the constants in the complexity due to technology constraints such as operator pipelines or cache sizes. Validation is also a transversal concern.

Arénaire is an attractive team, both for PhD students and for candidates for open positions. There are still many areas in which the team could grow while keeping its unity: towards VLSI, cryptography, signal processing, languages, or formal proofs, among others.
Weak points
The size and breadth of the team also mean that it is more and more difficult to keep an up-to-date idea of our colleagues' latest research interests. Relatedly, it is becoming increasingly difficult to carry out team-wide federating projects such as our Handbook of Floating-Point Arithmetic. The team puts very strong emphasis on software development, but receives comparatively little institutional support for it. While it is fairly easy to obtain a few months of an engineer's time, we lack the stability that a full-time engineer would bring to secure the software projects typically initiated during a PhD. Arithmetic research should be driven by applications, and Arénaire has to keep looking for new interdisciplinary or industrial cooperations. Some directions, such as signal processing or reconfigurable computing, are well identified.
Opportunities
The publication of the Handbook of Floating-Point Arithmetic, which targets a wide audience, should attract contacts and, hopefully, collaborations with people well outside our community. The interval standardization effort is both an opportunity to exploit the breadth of Arénaire and an opportunity to strengthen our common culture. The expertise acquired within Arénaire is needed in other standardization efforts, in particular language standards. In a wider sense, the team is open to research opportunities brought by new technology targets (such as GPGPU, reconfigurable computers, massively multicore architectures, nanoscale technologies, or quantum computing) or new application domains. A great opportunity is the project refoundation in 2010, due to the 12-year age limit of INRIA project-teams.
Risks
Being the largest team in France in traditional computer arithmetic presents some risks: for instance, we have to ensure that our former PhD students keep finding interesting positions. Being open to the wider community is key. Our past successes in widening the scope of Arénaire by recruiting experts from different communities should not prevent us from exploiting external expertise too, through new collaborations with leading teams in related fields. Arénaire researchers suffer from the current nationwide increase in research-management tasks (writing proposals, evaluating proposals, writing reports, evaluating reports, etc.). This additional pressure leaves less time for research itself, degrades our working conditions, and might degrade the team's atmosphere.
3.12 Perspectives
Vision and new goals of the team
As long as computers compute, they will need arithmetic at their very core. Moreover, as long as technology evolves, arithmetic units will have to adapt to its evolution. Arénaire researchers can be assured that they will always have problems to solve. However, we want to create evolution, not follow it. This is often a long-term goal: it took twelve years to obtain the standardization of correct rounding of elementary functions; it took six years before formal proof tools could routinely be used in Arénaire when designing arithmetic cores, and it may take as long before they are routinely used by our colleagues. Pairing-based cryptography and multiple-precision interval arithmetic are other examples that require long-term effort. If we take as a starting point the two historical research directions of the team (building better operators, and making the best use of the existing operators), three major changes in the research activities of the team have been initiated in the past years, and are expected to evolve further:
1. The notion of arithmetic operator has become coarser and coarser.
What we will call arithmetic operators is no longer limited to hardware operators for addition, multiplication, or division. It already includes elementary functions and numerical kernels such as linear algebra operators and the FFT. Such multivariate problems bring new challenges.

2. The two historical directions, initially carried by different researchers, have turned out to require the same expertise: for instance, building a floating-point elementary function requires detailed knowledge of the existing floating-point operators.

3. Operator validation, using tools ranging from interval arithmetic to formal theorem provers, has become a routine part of arithmetic operator design.

3.12 Computer Arithmetic
These observations have motivated the shift from designing operators to designing operator generators: programs that output a low-level description of the operators, taking care of their optimization and validation. In other words, our research is shifting from building arithmetic expertise to automating this expertise. Automating arithmetic expertise requires new mathematical tools from the number theory and computer algebra communities. This explains the new recruits to the team over the period. They are encouraged both to pursue their own personal expertise, even when it is not strictly computer arithmetic, and to bring new arithmetic research directions to the team, most notably finite field arithmetic, lattice-based optimization, and cryptography. Automation in turn opens a new frontier: the design of arbitrary, user-specified operators. We have designed tools to obtain the best possible implementations of functions defined by standards. In the near future, we want to use them to implement arbitrary functions. We then face a new problem: how will the user specify the function to be implemented? This question can be addressed in two ways. One approach is to get closer to the compilation community, so that the design of arithmetic operators becomes part of the compilation process. Another is to get closer to the computer algebra community, which deals with symbolic-numeric computations. In most current programming languages, an expression like sqrt(x*x+y*y) is defined as the composition of two library multiplications, one library addition, and one library square root, each hopefully implemented (in hardware or software) with well-defined, standard-specified properties. However, the programmer probably had in mind an atomic function √(x² + y²), which can be implemented much more efficiently than this composition, and which also possesses useful mathematical properties that are lost during composition.
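The benefit of treating such a compound expression as an atomic operator can be made concrete in a few lines of Python (an illustration added for the reader, with math.hypot playing the role of the atomic operator):

```python
import math

x, y = 3e200, 4e200

# Naive composition: x*x overflows to infinity before the square root
# is even taken, although the true result is well within range.
naive = math.sqrt(x * x + y * y)

# An atomic operator can scale internally and avoid the spurious overflow.
atomic = math.hypot(x, y)

print(naive)   # inf
print(atomic)  # ≈ 5e+200
```

The same atomic view also preserves mathematical properties (symmetry, monotonicity) that the three-rounding composition loses.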
Besides, on most current processors, the current implementation will compute 53 bits of the result, although the programmer might be content with only 12 bits (that is, 3 decimal digits) if he had a way to express it. A consequence of such application-specific arithmetic core generation (as compared to the rather stable functions of general libraries) is that each core will have far fewer users. Therefore, we cannot count on a massive number of users beta-testing and reporting bugs, as general library designers do. We must automatically validate the generated code. In this respect, the last decade concentrated on formalizing fixed-precision computation on scalar functions to a point where its validation could be usefully automated. We now have two directions to explore: machine-checkable proofs of multiple-precision algorithms, and machine-checkable proofs for multi-valued functions such as linear algebra operators. The ultimate goal is to automatically generate formal proofs of arbitrary numerical code, but progress towards this goal has to be incremental. A promising direction is to use techniques based on floating-point error-free transformations, which on the one hand rely on standard floating-point arithmetic, and can therefore build on existing knowledge, and on the other hand have been shown to address both arbitrary precision and multi-valued functions.

Structure of the scientific perspectives
Designing an optimized complex operator involves many steps. An approximation may be used, for example a polynomial approximating a function. Perspectives related to approximation are reviewed in Section Then there are usually several possible ways to evaluate the computation. For a polynomial, there are many evaluation schemes with different performance/cost/accuracy trade-offs, and the same applies to the evaluation of general arithmetic expressions and to matrix operations.
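As a small self-contained illustration (not the team's tooling) of how much the evaluation scheme alone matters, the snippet below evaluates p(x) = (x-1)^7 near its root with two mathematically equivalent schemes, comparing both against an exact rational evaluation:

```python
from fractions import Fraction

# p(x) = (x-1)^7 in expanded form, coefficients from highest degree down.
coeffs = [1, -7, 21, -35, 35, -21, 7, -1]

def horner(cs, x):
    """Horner's scheme; exact when fed Fractions, rounded with floats."""
    r = 0 * x
    for c in cs:
        r = r * x + c
    return r

x = 0.99                                  # close to the 7-fold root at 1
exact = horner(coeffs, Fraction(x))       # exact rational reference
expanded = horner(coeffs, x)              # Horner on the expanded form
factored = (x - 1.0) ** 7                 # mathematically equal scheme

rel = lambda v: abs(Fraction(v) - exact) / abs(exact)
print(float(rel(expanded)), float(rel(factored)))
```

The factored scheme is accurate to a few ulps, while Horner on the expanded coefficients loses most significant digits to cancellation; choosing among such schemes under cost and accuracy constraints is precisely the trade-off discussed above.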
Perspectives related to evaluation are detailed in Section Perspectives related to certifying approximation and evaluation will be detailed in Section Section will refine the issues related to the automation of operator design. The remaining sections deal with perspectives that may look more applicative, but actually nurture our research in computer arithmetic: pairing-based cryptology in Section , and Euclidean lattices in Section

Towards the automatic design of operators and functions
What we have successfully automated so far (mostly over the last four years) is program generation for statically defined elementary functions of one real variable. These techniques will certainly need refining as we apply them to more functions, in particular to special functions (such as erf) and to multivariate functions (such as Euclidean norms). Applying these techniques to arbitrary code at compile time further leads to the following challenges. The first is to identify a relevant compound function in a program, along with the information needed to implement it efficiently: range of the input values, required output accuracy and/or rounding mode, etc. This is a static analysis problem. It requires interaction with people working on classical compilers, but also in the emerging field of high-level programming tools for FPGAs. We will bring in expertise in interval arithmetic. A second challenge is then to analyze such a function automatically, which typically implies the following tasks:

Compute maximal-width subranges on which the output is trivial, that is, requires no computation at all (zero, infinity, etc.). This can be extremely tedious and error-prone if done by hand. An important side effect of this step is to generate test vectors for corner cases.

Identify computer arithmetic properties, answering questions like: Is further range reduction possible? Are there floating-point midpoints? Can overflow occur? Today, this analysis is essentially handwritten by experts, so the challenge is to automate it and to implement dictionaries containing such knowledge.

Investigate automating range reduction. Generic reduction schemes can be used (for example, splitting the interval into sub-intervals), but they involve many parameters, and we have to model the associated cost/performance/accuracy trade-offs. More function-specific range reduction steps can be an outcome of the previous analysis steps.

This general automation process will be progressively set up and refined by working on concrete implementations, in software or in hardware, of a significant set of operators, hopefully driven by applications and through industrial collaborations: all the C99 elementary functions, other functions such as the ICDF function used in Gaussian random number generators, variations around the Euclidean norm such as x/√(x² + y²), complex arithmetic, interval operators, FFTs, and other signal processing transforms. We also plan to study operators similar to the now classical FMA, such as DP2, which computes ab + cd. For example, this instruction is available in the latest x86 instruction set extensions, where it involves three intermediate roundings. Implementing it with one rounding only would improve the complex product and other operations. It is worth mentioning that, although similar design methods apply to software and hardware implementations, the time scales are different: compile time allows for a few seconds for software, and for a few hours when designing hardware. Other challenges more specifically related to software and hardware targets are given below.

Software operator design
Most of the software we design brings in some floating-point functionality.
We target two main types of processors:

Embedded processors without a floating-point unit (FPU), for which we need to implement the basic operators, coarser elementary functions, and compile-time arbitrary functions. On such targets, memory may be limited, and power consumption matters.

General-purpose processors that do possess an FPU for the basic operations. The challenge is then to use this FPU to its best to implement coarser operators and compile-time functions. The main metrics here are performance and, to a lesser extent, code size.

A challenge is to model instruction-level parallelism on such diverse targets, and to manage to exploit it automatically.

Hardware operator design
Automating hardware operator design is a must when targeting FPGAs, or reconfigurable circuits, which are increasingly used to accelerate scientific or financial computations. The challenge here is to design operators specifically for these applications: not only should their low-level architecture match the peculiar metrics of FPGAs, but the high-level architecture, and even the operator specifications, should be as application-specific as possible, and will probably be completely different from what we are used to designing in VLSI circuits. Such operators can only be produced by generators, and this is the motivation of the FloPoCo generator framework, still in early development. In addition to designing the operators themselves, current challenges include accurately modelling the FPGAs, power estimation, and interaction with high-level (also known as C-to-hardware) compilers. However, we will also keep working on more traditional arithmetic for processors, in particular to enhance FPUs. Here, automation is mostly useful for design exploration and testing. Besides the already mentioned DP2 operator, one operator we wish to study is Add3, which returns the correctly rounded sum of its three arguments.
It would improve the performance of general code, and would also boost triple-double arithmetic. Since Add3 has the same instruction-set footprint as the FMA, the challenge is to reuse as much of the FMA datapath as possible. In addition to the expertise already being developed in Arénaire around numerical validation, we also want to work on more hardware-specific aspects of validation, such as fault coverage or built-in self-test. Links should be built with the hardware verification community.

New mathematical tools for function approximation
As explained in Section 3.3.3, the classical methods for computing approximations to functions cannot be used without a loss of accuracy or efficiency. Our algorithms for computing best, or nearly best, approximations under constraints (implemented in the Sollya toolbox) will be generalized in two directions: we aim at solving the multivariate polynomial and the rational cases, and at adapting our techniques to bases other than the classical monomial basis (x^j). Another important target is the computation of worst cases for the correct rounding of (elementary and special) functions.
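A toy version of this worst-case search can convey the idea (a deliberately simplified sketch; the real searches use far more sophisticated algorithms and months of computation): scan a 12-bit toy format exhaustively and find the input whose image under exp lies closest to a rounding boundary, using Python's decimal module as the high-precision reference.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # reference precision, far beyond the toy format

P = 12                  # toy significand width in bits
worst_x, worst_d = None, Decimal(1)

for k in range(2 ** (P - 1)):        # all toy floats x in [1, 2)
    x = 1 + k / 2 ** (P - 1)
    y = Decimal(x).exp()             # high-precision reference value
    e = 1 if y < 4 else 2            # binade of y, since y lies in [e, e^2)
    ulp = Decimal(2) ** (e - (P - 1))
    frac = (y / ulp) % 1
    d = abs(frac - Decimal("0.5"))   # distance to a rounding boundary
    if d < worst_d:
        worst_x, worst_d = x, d

print(worst_x, float(worst_d))
```

The smaller the worst distance, the more intermediate precision a correctly rounded implementation must carry; the exponential blow-up mentioned above comes from the fact that doubling the format width squares the number of inputs to examine.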

From one variable to several variables
We plan to adapt our algorithms (and their implementations) for computing polynomials with machine-number coefficients that approximate a given function very well to the multivariate case. We first need to implement a generalization of the Remez algorithm (in the real-coefficient case); we might need to design some linear-programming-based tools to do so. We will also address the case of rational approximation in one variable, and will develop an implementation in Sollya of the computation of these best approximations in the real case. A second general step will consist in a suitable adaptation of the LLL-based approach we developed in the univariate case. The applications we have in mind are the design of function evaluation tools in several variables, and making practical a hardware-oriented process, called the E-method, for evaluating elementary functions using polynomial and rational approximations.

From monomials to other function bases
Currently, our algorithms and their implementations address the univariate case, in a basis of the form (x^i) where i belongs to a finite subset of N with cardinality bounded by, say, 100. We aim at dealing with other bases: in particular, the classical basis of Chebyshev polynomials, which should lead to better numerical behaviour without losing any efficiency during evaluation, and trigonometric polynomials, which could allow for new applications in the design of numerical filters.
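The numerical advantage of Chebyshev points can be shown with a short experiment (a sketch for intuition only; the team's tools compute genuinely best approximations under coefficient constraints): interpolate exp at degree 5 on Chebyshev versus equispaced nodes and compare the worst-case errors.

```python
import math

def cheb_nodes(n):
    # Chebyshev nodes of the first kind on [-1, 1]
    return [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]

def interp(nodes, f, x):
    # Plain Lagrange interpolation of f at the given nodes, evaluated at x
    s = 0.0
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        s += f(xi) * w
    return s

def max_err(nodes, f):
    pts = [-1 + k / 500 for k in range(1001)]
    return max(abs(interp(nodes, f, t) - f(t)) for t in pts)

equi = [-1 + 2 * i / 5 for i in range(6)]   # 6 equispaced nodes
cheb = cheb_nodes(6)                        # 6 Chebyshev nodes
print(max_err(equi, math.exp), max_err(cheb, math.exp))
```

Chebyshev nodes minimize the worst case of the node polynomial, which is why expansions in the Chebyshev basis behave better numerically than the raw monomial basis at no extra evaluation cost.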
Challenges in the search for correct rounding worst cases
With our current algorithms, the cost of finding the worst-case accuracy needed for correct rounding grows exponentially with the width of the floating-point format: getting the worst cases for the most common functions in double precision (64-bit format), as we did, was already quite a challenge, which required a very carefully tuned implementation and months of calculation on a small network of computers. We wish to tackle larger formats (x87-extended and quadruple precisions), decimal formats, and more functions (including the most frequently used special functions). This will require work both on the algorithms themselves and on their implementation. In particular, we wish to extend the work on numerically irregular functions, such as the trigonometric functions (sine, cosine, tangent) on huge arguments. Also, the programs that compute worst cases are very complex, which might cast some doubt on the correctness of the returned results. We wish our next programs to be formally proven (at least in their critical parts).

Linear algebra and polynomial evaluation
Linear algebra and polynomial evaluation are key tools for the design, synthesis, and validation of fast and accurate arithmetics. Conversely, arithmetic features can have a strong impact on the cost and numerical quality of computations with matrices, vectors, and polynomials. Deepening our understanding of such interactions is therefore one of our major long-term goals. To achieve this, the expertise acquired in the design of basic arithmetic operators will surely be quite valuable. But the certification methods that we plan to develop simultaneously (see Section ) should be even more crucial.
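How strongly arithmetic affects the numerical quality of matrix computations can be demonstrated in a few lines (an illustration only; the matrix, size, and solver are chosen purely for the demo): solving the same ill-conditioned Hilbert system exactly over the rationals and in double precision gives markedly different answers.

```python
from fractions import Fraction

def solve(A, b):
    # Gaussian elimination with partial pivoting; works with floats
    # and, exactly, with Fractions.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

n = 10
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in H]          # exact right-hand side: H * (1,...,1)

x_exact = solve(H, b)                        # exact rational solve: all ones
x_float = solve([[float(a) for a in row] for row in H],
                [float(bi) for bi in b])     # double-precision solve

err = max(abs(xi - 1) for xi in x_float)
print(x_exact[0], err)
```

The exact solve recovers the solution (1, ..., 1) perfectly, while the double-precision solve loses many digits to the conditioning of the matrix; certified error bounds aim at quantifying such losses rigorously and at run time.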
Certified linear algebra
We will investigate the theoretical and practical costs of computing certified and effective error bounds on standard linear algebra operations such as factorizations, inversion, linear system solving, and the computation of eigenvalues. A first direction is the symbolic computation of a priori error bounds by manipulating truncated series. Another direction deals with improving the efficiency and quality of self-validating methods for computing error bounds at run time. Concerning structured linear algebra, we have two additional goals. In exact arithmetic (the algebraic complexity model), we want to finish our investigation of complexity reductions from one structure to another. In floating-point and fixed-point arithmetics, we plan to study how the structure of the matrix can be used to speed up the computation of condition numbers and of certified and effective error bounds.

Certified code generation
We plan to improve our work on code generation for polynomials, and to extend it to general arithmetic expressions as well as to operations typical of level-1 BLAS, such as sums and dot products. Owing to the intrinsically multivariate nature of such problems, the number of evaluation schemes is huge, and a challenge here is to compute and certify, in a reasonable amount of time, evaluation programs that satisfy both efficiency and accuracy constraints. To achieve this goal, we will in particular study the design of specific program transformation techniques driven by the tight and guaranteed error bounds that the certification tools of Section will produce.

Methods for Certified Computing
Throughout every research direction and result of the team, certification of the quality of the computed result is required. More precisely, the goal is to verify, using computer arithmetic, whether certain mathematical assumptions on the result hold. The verification of these assumptions can usually be carried out using symbolic computation, but when dealing with numerical problems, the computation of rigorous error bounds or enclosures, rather than exact values, is often sufficient to conclude. In this case, interval arithmetic, or floating-point computation with careful control of the generated rounding errors, can often be used to improve performance.

Certified bounds on roundoff errors
Many error analysis techniques are well known for obtaining a priori bounds on the global roundoff error generated by a floating-point program. The computation of such error bounds can already be done automatically with Gappa in the case of straight-line programs, assuming the precision of every arithmetic operation is fixed and known in advance. One of our next challenges will be to handle a priori error bounds in algorithms where the computing precision varies with each operation, and in programs involving loops of variable length. This involves symbolic computation of expressions for these roundoff errors, and simplification of these expressions based on numerical computations, in order to obtain simple but still rigorous upper bounds. On the other hand, techniques are also available to compute rigorous a posteriori error bounds, interval arithmetic probably being the best-established technique in this respect.
But certified a posteriori error bounds can also be computed using the specifications of IEEE-754 floating-point arithmetic, in particular the directed rounding modes, and implementations faster than interval arithmetic can sometimes be obtained. As a consequence, it would be interesting to develop code generators to automate the design of certified algorithms based on these techniques. When several a posteriori error bounds are available for a given algorithm, studying the trade-off between the cost and the sharpness of the computed error bound is also a topic we plan to investigate.

Certified enclosures
As mentioned above, interval methods can be used to control roundoff errors in numerical computations, but they can also be used more generally to compute a rigorous enclosure of the range of a function on a given domain. Indeed, computing enclosures, or ranges, corresponds to the case where the input data vary in a set. One can then deduce properties such as the occurrence of overflow, or the sign of the result. The approach of choice is interval arithmetic. An important effort towards the standardization of this arithmetic has been under way since early 2008, and it will be maintained until the completion of a standard, planned at earliest in . A first issue we will address is the development of an efficient library of interval arithmetic operations for general-purpose computations. The goal is to reduce the overhead incurred by interval arithmetic compared to floating-point arithmetic; one main source of this overhead is the number of changes of the rounding mode. A second issue is the design and implementation of general algorithms that compute enclosures of the solutions of linear and nonlinear systems. Another direction of research is the use of arbitrary precision when the prescribed accuracy is not reached.
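A minimal sketch of such an enclosure computation is given below; the directed rounding modes are emulated here by widening each computed bound one ulp outward with math.nextafter (Python 3.9+), which avoids rounding-mode changes at the price of slightly looser bounds:

```python
import math

def down(v): return math.nextafter(v, -math.inf)
def up(v):   return math.nextafter(v, math.inf)

def imul(a, b):
    # Interval product: take the extreme cross products, then widen each
    # bound one ulp outward so the enclosure remains rigorous despite the
    # default round-to-nearest mode.
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (down(min(ps)), up(max(ps)))

# float(0.1) lies slightly above the real 1/10, so this encloses 1/10:
tenth = (down(0.1), 0.1)
sq = imul(tenth, tenth)        # rigorous enclosure of 1/100
print(sq[0] <= 0.01 <= sq[1])  # True
```

A library with true directed rounding would return tighter bounds, but would pay for the rounding-mode changes mentioned above; the one-ulp widening is a common cheap substitute.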
For efficiency, the difficulty is to increase the precision sparingly, only when everything else fails.

Higher-order techniques
Automatic differentiation is a tool increasingly used to obtain either sensitivity measures or, more generally, Taylor models. Computing condition numbers is a well-known technique for measuring the sensitivity of the solution of a problem to perturbations of its input data. Combined with backward error analysis, condition numbers can also be used to obtain first-order error bounds on the computed solution of a given problem. Future research will focus on algorithms to compute or estimate condition numbers using automatic differentiation, in particular on efficient implementations and on complexity issues. Usual, or naive, interval arithmetic evaluates an expression as it is written. To diminish the so-called dependency problem, Taylor models arithmetic is employed. When implemented in floating-point arithmetic, it in turn faces the problem of limited precision. A challenge is to intertwine Taylor models arithmetic with double-double (or triple-double, or higher) arithmetic, the latter being preferred over arbitrary-precision floating-point arithmetic for efficiency reasons. This arithmetic will be a very useful tool for computing the supremum norm of a univariate function.
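The building block of double-double arithmetic is the error-free transformation TwoSum, which fits in a few lines (Knuth's branch-free version, assuming round-to-nearest binary64):

```python
def two_sum(a, b):
    # Error-free transformation: s + e == a + b exactly, where
    # s = fl(a + b) and e is the rounding error of that addition.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

# A double-double number is the unevaluated sum hi + lo: the tiny term
# 2^-60 is not lost, it is captured exactly in the low-order component.
hi, lo = two_sum(1.0, 2.0 ** -60)
print(hi, lo)   # 1.0 and exactly 2^-60
```

Triple-double arithmetic extends the same idea to three components, which is why these formats are preferred over full arbitrary precision when only a little extra accuracy is needed.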

Formal proof
The methods mentioned so far certify the quality of the result... up to a bug in their implementation. To reach a higher degree of confidence, the certification can be checked by a theorem prover such as Coq. The main difficulties are twofold. First, the chosen arithmetic or methods must be implemented and proven within the theorem prover. In particular, neither the symbolic-numeric computation of error bounds in the case of variable precision, nor Taylor models arithmetic based on floating-point arithmetic, exists yet. Second, and most importantly, a theorem prover calculates extremely slowly. This implies that it cannot perform the same operations as the original algorithm: the computational path must be reworked in order to factor out and simplify parts of it, and to reduce its number of operations. This approach has been initiated in Gappa; it should be extended to a wider variety of certification methods.

Representation issues
A general challenge is to represent and store not only programming information, such as the numerical type and value of a variable, but also information known to the programmer, such as the mathematically corresponding value, or the fact that a given floating-point operation is actually exact, without rounding error. In other words, the question is how to represent the semantics of a piece of code. We have begun the elaboration of a language for the representation of numerical meta-information: LEMA (Langage pour les Expressions Mathématiques Annotées), enabling the programmer to specify precisely the semantics of numerical programs. Building a database of frequently used properties, either general mathematical properties or mathematical properties concerning floating-point arithmetic, will constitute a step towards the automation of code synthesis and certification.
This is another issue we will investigate.

Euclidean lattice reduction
Lattice reduction has proved useful for several problems arising in computer arithmetic, including the computation of worst cases for the rounding of mathematical functions in fixed precision, and the approximation of functions by floating-point polynomials. Lattice reduction has many other applications in computer science and computational mathematics, making it worthwhile to devise faster general-purpose lattice reduction algorithms.

Using floating-point arithmetic within lattice reduction algorithms
We wish to finish the work started with the L² algorithm, and already largely simplified by the H-LLL algorithm. We want to provide a more complete answer to the question of how best to use floating-point arithmetic within LLL-type algorithms, and to carry the developed techniques over to stronger reduction algorithms (e.g., BKZ) as well as to faster LLL-type reduction algorithms (e.g., Segment-LLL).

Studying Schnorr's hierarchy of reductions
There exists a natural hierarchy of reductions and reduction algorithms (due to Schnorr) between fast but weak reductions (LLL) and stronger but slower ones (HKZ). A related question is to compute a shortest non-zero lattice vector as efficiently as possible (theoretically and in practice, with massive parallelism and hardware implementations, with and without heuristics). We will also try to simplify and improve upon the large diversity of trade-offs between efficiency and output quality of reduction algorithms. To devise better algorithms, we will investigate the elaborate techniques used in the security proofs of lattice-based cryptosystems, such as discrete Gaussian distributions, transference theorems, and quantum lattice algorithms.
Faster LLL-type algorithms
From a theoretical point of view, we will try to use floating-point arithmetic along with blocking techniques to speed up LLL reduction, and to devise a quasi-linear-time LLL-reduction algorithm (for fixed dimensions). From a practical point of view, we will use the models developed within the LaRedA ANR project, such as the sand-pile model, to optimize the codes, and we will combine the different algorithms in a clever way so as to achieve the result rigorously and as fast as possible. A good test for these improvements would be the automatic verification of all bounds obtained with Coppersmith's technique for attacking different variants of the RSA cryptosystem (which would require huge numbers of large LLL reductions). The latter is likely to provide interesting results on lattice-based techniques for solving the Table Maker's Dilemma, including for functions of several variables.
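The simplest member of the LLL family, Lagrange's (often called Gauss's) two-dimensional reduction, fits in a few lines and already shows where floating-point enters: the size-reduction coefficient m is computed with a floating-point division, which is safe at this scale (a sketch; a fully rigorous variant would round the integer quotient exactly):

```python
def gauss_reduce(u, v):
    # Lagrange-Gauss reduction of a rank-2 integer lattice basis:
    # repeatedly subtract the rounded projection of v onto u, swapping
    # so that u stays the shorter vector. On termination, u is a
    # shortest non-zero vector of the lattice.
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))   # nearest-integer projection
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

b1, b2 = gauss_reduce((1, 0), (10**6, 1))
print(b1, b2)   # (1, 0) (0, 1)
```

Higher-dimensional LLL iterates the same two ingredients, size reduction and conditional swaps, which is exactly where the floating-point versus exact arithmetic trade-offs studied above arise.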

Lattice-based cryptography
One of our goals is to devise lattice-based cryptosystems and make them more practical. Together with code-based cryptosystems, lattice-based cryptosystems are the only public-key cryptosystems known today that resist would-be quantum computers. They also provide very precise and strong security properties: their security often provably reduces to worst-case instances of problems closely related to NP-hard problems. One can also expect quasi-optimal complexities for encryption and decryption (Õ(1) bit operations per message bit) by using structured lattices. However, for the moment these cryptosystems seem hard to make practical, and we plan to tackle that problem by assessing their practical security precisely and by optimizing their implementation using our arithmetic expertise.

Other applications
Finally, we plan to investigate the applications of lattice reduction in other fields, such as MIMO decoding for wireless communications, numerical analysis, and optimization.

Pairing-based cryptology
We have just mentioned lattice-based cryptology. Curve-based cryptology is the other public-key cryptologic tool we plan to tackle in the years to come. More precisely, we will mainly focus on the arithmetic and hardware implementation issues related to pairing-based cryptography. We plan to extend our previous work on the η_T pairing to the genus-2 case in order to compare it with the elliptic curve case. Moreover, we wish to study the other existing pairings (Ate, optimal pairings, etc.) in detail: we first need a better understanding of their algorithmics, and we have to check whether the current architecture we use must be adapted to take into account the cost of the (possibly new) operations occurring in the modified algorithms. We also plan to study the case of finite fields of large characteristic (whose use is, in particular, recommended by the NSA).
This will lead us to work with ordinary curves, which will force us to reconsider all the existing solutions.

Publications and Productions

International and national peer reviewed journals [ACL]
[1] Bernhard Beckermann, George Labahn, and Gilles Villard. Normal forms for general polynomial matrices. Journal of Symbolic Computation, 41(6): ,
[2] Jean-Luc Beuchat, Nicolas Brisebarre, Jérémie Detrey, Eiji Okamoto, Masaaki Shirase, and Tsuyoshi Takagi. Algorithms and arithmetic operators for computing the η_T pairing in characteristic three. IEEE Transactions on Computers, 57(11): , November
[3] Jean-Luc Beuchat, Takanori Miyoshi, Jean-Michel Muller, and Eiji Okamoto. Horner's rule-based multiplication over GF(p) and GF(p^n): A survey. International Journal of Electronics, 95(7): ,
[4] Jean-Luc Beuchat and Jean-Michel Muller. Automatic generation of modular multipliers for FPGA applications. IEEE Transactions on Computers, 57(12), December
[5] Sylvie Boldo and Guillaume Melquiond. Emulation of a FMA and correctly rounded sums: Proved algorithms using rounding to odd. IEEE Transactions on Computers, 57(4): , April
[6] Alin Bostan, Claude-Pierre Jeannerod, and Éric Schost. Solving structured linear systems with large displacement rank. Theoretical Computer Science, 407(1-3): , November
[7] Nicolas Boullis and Arnaud Tisserand. Some optimizations of hardware multiplication by constant matrices. IEEE Transactions on Computers, 54(10): , October
[8] Nicolas Brisebarre, David Defour, Peter Kornerup, Jean-Michel Muller, and Nathalie Revol. A new range reduction algorithm. IEEE Transactions on Computers, 54(3): ,
[9] Nicolas Brisebarre and Jean-Michel Muller. Correct rounding of algebraic functions. Theoretical Informatics and Applications, 41:71–83, January
[10] Nicolas Brisebarre and Jean-Michel Muller. Correctly rounded multiplication by arbitrary precision constants. IEEE Transactions on Computers, 57(2): , February 2008.

[11] Nicolas Brisebarre, Jean-Michel Muller, and Arnaud Tisserand. Computing machine-efficient polynomial approximations. ACM Transactions on Mathematical Software, 32(2): , June
[12] Nicolas Brisebarre, Jean-Michel Muller, Arnaud Tisserand, and Serge Torres. Hardware operators for function evaluation using sparse-coefficient polynomials. IEE Electronics Letters, 42(25): , December
[13] Nicolas Brisebarre and Georges Philibert. Effective lower and upper bounds for the Fourier coefficients of powers of the modular invariant j. J. Ramanujan Math. Soc., 20(4): ,
[14] Hervé Brönnimann, Guillaume Melquiond, and Sylvain Pion. The design of the Boost interval arithmetic library. Theoretical Computer Science, 351: ,
[15] Octavian Creţ, Ionut Trestian, Radu Tudoran, Laura Darabant, Lucia Vǎcariu, and Florent de Dinechin. Accelerating the computation of the physical parameters involved in transcranial magnetic stimulation using FPGA devices. Romanian Journal of Information, Science and Technology, 10(4): ,
[16] Alain Darte, Robert Schreiber, and Gilles Villard. Lattice-based memory allocation. IEEE Transactions on Computers, 54(10): , October. Special Issue, Tribute to B. Ramakrishna (Bob) Rau.
[17] Florent de Dinechin, Christoph Quirin Lauter, and Jean-Michel Muller. Fast and correctly rounded logarithms in double-precision. Theoretical Informatics and Applications, 41:85–102,
[18] Florent de Dinechin and Arnaud Tisserand. Multipartite table methods. IEEE Transactions on Computers, 54(3): ,
[19] Florent de Dinechin and Gilles Villard. High precision numerical accuracy in physics research. Nuclear Inst. and Methods in Physics Research, A, 559(1): ,
[20] Jérémie Detrey and Florent de Dinechin. Outils pour une comparaison sans a priori entre arithmétique logarithmique et arithmétique flottante. Technique et science informatiques, 24(6): ,
[21] Jérémie Detrey and Florent de Dinechin.
Parameterized floating-point logarithm and exponential functions for FPGAs. Microprocessors and Microsystems, Special Issue on FPGA-based Reconfigurable Computing, 31(8): , December
[22] Jérémie Detrey and Florent de Dinechin. A tool for unbiased comparison between logarithmic and floating-point arithmetic. Journal of VLSI Signal Processing, 49(1): ,
[23] Jérémie Detrey and Florent de Dinechin. Fonctions élémentaires en virgule flottante pour les accélérateurs reconfigurables. Technique et Science Informatiques, 27(6): ,
[24] Milos Ercegovac and Jean-Michel Muller. Complex square root with operand prescaling. Journal of VLSI Signal Processing, 49(1):19–30, October
[25] Milos Ercegovac and Jean-Michel Muller. An efficient method for evaluating complex polynomials. Journal of VLSI Signal Processing Systems, To appear.
[26] Laurent Fousse, Guillaume Hanrot, Vincent Lefèvre, Patrick Pélissier, and Paul Zimmermann. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw., 33(2),
[27] Ryan Glabb, Laurent Imbert, Graham Jullien, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Multi-mode operator for SHA-2 hash functions. Journal of Systems Architecture, Special issue on Embedded Cryptographic Hardware, 53(2-3): ,
[28] Stef Graillat, Jean-Luc Lamotte, and Diep Nguyen Hong. Extended precision with a rounding mode toward zero environment. Application on the CELL processor. International Journal of Reliability and Safety, Special issue on Reliable Engineering Computing, To appear.
[29] Stef Graillat, Philippe Langlois, and Nicolas Louvet. Algorithms for accurate, validated and fast polynomial evaluation. Japan Journal of Industrial and Applied Mathematics, To appear.
[30] Claude-Pierre Jeannerod and Gilles Villard. Essentially optimal computation of the inverse of generic polynomial matrices. Journal of Complexity, 21(1):72–86,
[31] Claude-Pierre Jeannerod and Gilles Villard.
Asymptotically fast polynomial matrix algorithms for multivariable systems. International Journal of Control, 79(11): , 2006.

[32] E. Kaltofen and G. Villard. On the complexity of computing determinants. Computational Complexity, 13:91–130.
[33] Peter Kornerup, Christoph Quirin Lauter, Vincent Lefèvre, Nicolas Louvet, and Jean-Michel Muller. Computing correctly rounded integer powers in floating-point arithmetic. ACM Transactions on Mathematical Software, 37(1). To appear.
[34] Peter Kornerup and Jean-Michel Muller. Choosing starting values for certain Newton-Raphson iterations. Theoretical Computer Science, 351(1): , February.
[35] Peter Kornerup and Jean-Michel Muller. Leading guard digits in finite-precision redundant representations. IEEE Transactions on Computers, 55(5): , May.
[36] Christoph Quirin Lauter and Vincent Lefèvre. An efficient rounding boundary test for pow(x,y) in double precision. IEEE Transactions on Computers, 58(2): , February.
[37] Guillaume Melquiond and Sylvain Pion. Formally certified floating-point filters for homogeneous geometric predicates. Theoretical Informatics and Applications, 41(1).
[38] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Optimisation d'opérateurs arithmétiques matériels à base d'approximations polynomiales. Technique et science informatiques, 27(6): , June.
[39] Ivan Morel, Damien Stehlé, and Gilles Villard. Analyse numérique et réduction des réseaux. Technique et Science Informatiques. To appear.
[40] Phong Q. Nguyen and Damien Stehlé. An LLL algorithm with quadratic complexity. SIAM Journal on Computing. To appear.
[41] Phong Q. Nguyen and Damien Stehlé. Low-dimensional lattice basis reduction revisited. ACM Transactions on Algorithms.
[42] Jose-Alejandro Pineiro, Stuart F. Oberman, Jean-Michel Muller, and Javier D. Bruguera. High-speed function approximation using a minimax quadratic interpolator. IEEE Transactions on Computers, 54(3): , March.
[43] N. Revol, K. Makino, and M. Berz. Taylor models and floating-point arithmetic: proof that arithmetic operations are validated in COSY. Journal of Logic and Algebraic Programming, 64: .
[44] N. Revol and F. Rouillier. Motivations for an arbitrary precision interval arithmetic and the MPFI library. Reliable Computing, 11(4): .

Invited conferences, seminars, and tutorials [INV]

[45] Jean-Michel Muller. Exact computations with an arithmetic known to be approximate. Invited lectures (3 hours) at the conference Numeration: Mathematics and Computer Science, CIRM, Marseille, March.
[46] Jean-Michel Muller. Exact computations with approximate arithmetic. Invited conference at the "Journées en l'honneur de Donald Knuth", for the Doctor Honoris Causa diploma given to D. Knuth, Bordeaux, France, October.
[47] Jean-Michel Muller. Some algorithmic improvements due to the availability of an FMA. Invited talk at the Dagstuhl seminar 08021, Numerical Validation in Current Hardware Architectures, January.
[48] Nathalie Revol. Automatic adaptation of the computing precision. Dagstuhl seminar on Numerical Validation in Current Hardware Architectures, January.
[49] Nathalie Revol. Introduction to interval analysis and to some interval-based software systems and libraries. In ECMI 2008, The European Consortium For Mathematics In Industry, July.
[50] Damien Stehlé. Floating-point LLL: Theoretical and practical aspects. LLL+25 conference, in honour of the 25th anniversary of the LLL algorithm, Caen, June.
[51] Damien Stehlé. Numerical analysis and LLL reduction. Oberwolfach bi-annual workshop on Explicit Methods in Number Theory, Germany, July 2007.

[52] Arnaud Tisserand. Algorithms and number systems for hardware computer arithmetic. Invited tutorial at the International Symposium on Symbolic and Algebraic Computation (ISSAC), Beijing, China, July.
[53] Gilles Villard. Efficient algorithms in linear algebra. Theoretical Computer Science Spring School on Computational Complexity, Montagnac-les-truffes, May.
[54] Gilles Villard. Some recent progress in symbolic linear algebra and related questions (invited tutorial). In Proc. International Symposium on Symbolic and Algebraic Computation, Waterloo, Canada, pages . ACM Press, August.

International and national peer-reviewed conference proceedings [ACT]

[55] Ali Akhavi and Damien Stehlé. Speeding-up lattice reduction with random projections. In Proc. 8th Latin American Theoretical Informatics (LATIN 08), volume 4957 of Lecture Notes in Computer Science, pages . Springer.
[56] Jean-Claude Bajard, Philippe Langlois, Dominique Michelucci, Géraldine Morin, and Nathalie Revol. Towards guaranteed geometric computations with approximate arithmetics (12 pages). In Advanced Signal Processing Algorithms, Architectures, and Implementations XVIII, part of the SPIE Optics & Photonics 2008 Symposium, volume 7074, August.
[57] Rachid Beguenane, Jean-Luc Beuchat, Jean-Michel Muller, and Stéphane Simard. Modular multiplication of large integers on FPGA. In 39th Asilomar Conference on Signals, Systems & Computers. IEEE, November.
[58] Jean-Luc Beuchat, Nicolas Brisebarre, Jérémie Detrey, and Eiji Okamoto. Arithmetic operators for pairing-based cryptography. In Cryptographic Hardware and Embedded Systems (CHES 2007), number 4727 in Lecture Notes in Computer Science, pages . Springer. Best paper award.
[59] Jean-Luc Beuchat, Nicolas Brisebarre, Jérémie Detrey, Eiji Okamoto, and Francisco Rodríguez-Henríquez. A comparison between hardware accelerators for the modified Tate pairing over F_{2^m} and F_{3^m}. In Second International Conference on Pairing-based Cryptography (Pairing 08), volume 5209 of Lecture Notes in Computer Science, pages . Springer Verlag.
[60] Jean-Luc Beuchat, Nicolas Brisebarre, Masaaki Shirase, Tsuyoshi Takagi, and Eiji Okamoto. A coprocessor for the final exponentiation of the η_T pairing in characteristic three. In Proceedings of Waifi 2007, volume 4547 of Lecture Notes in Computer Science, pages . Springer.
[61] Jean-Luc Beuchat and Jean-Michel Muller. Multiplication algorithms for radix-2 RN-codings and two's complement numbers. In Proceedings of the 16th IEEE International Conference on Application-Specific Systems, Architectures, and Processors (ASAP 2005), pages . IEEE.
[62] Jean-Luc Beuchat and Jean-Michel Muller. RN-codes : algorithmes d'addition, de multiplication et d'élévation au carré. In SympA 2005 : 10e édition du SYMPosium en Architectures nouvelles de machines, pages 73–84, April.
[63] Sylvie Boldo and Guillaume Melquiond. When double rounding is odd. In Proceedings of the 17th IMACS World Congress on Computational and Applied Mathematics, Paris, France.
[64] Sylvie Boldo and Jean-Michel Muller. Some functions computable with a fused-mac. In Proc. 17th IEEE Symposium on Computer Arithmetic (ARITH-17), pages 52–58, Cape Cod, USA.
[65] Alin Bostan, Claude-Pierre Jeannerod, and Éric Schost. Solving Toeplitz- and Vandermonde-like linear systems with large displacement rank. In Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC 2007), Waterloo, Canada, pages . ACM Press, August.
[66] Nicolas Brisebarre and Sylvain Chevillard. Efficient polynomial L∞-approximations. In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH-18), pages . IEEE.
[67] Nicolas Brisebarre, Sylvain Chevillard, Milos Ercegovac, Jean-Michel Muller, and Serge Torres. An efficient method for evaluating polynomial and rational function approximations. In Application-specific Systems, Architectures and Processors (ASAP 2008), pages . IEEE, 2008.

[68] Nicolas Brisebarre, Florent de Dinechin, and Jean-Michel Muller. Integer and floating-point constant multipliers for FPGAs. In Application-specific Systems, Architectures and Processors (ASAP 2008), pages . IEEE.
[69] Nicolas Brisebarre, Florent de Dinechin, and Jean-Michel Muller. Multiplieurs et diviseurs constants en virgule flottante avec arrondi correct. In RenPar 18, SympA 2008, CFSE 6.
[70] Nicolas Brisebarre and Guillaume Hanrot. Floating-point L2 approximations to functions. In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH-18), pages . IEEE.
[71] Nicolas Brisebarre and Jean-Michel Muller. Correctly rounded multiplication by arbitrary precision constants. In Proc. 17th IEEE Symposium on Computer Arithmetic (ARITH-17). IEEE Computer Society Press, June.
[72] Francisco Cháves and Marc Daumas. A library to Taylor models for PVS automatic proof checker. In Proceedings of the NSF workshop on reliable engineering computing, pages 39–52, Savannah, Georgia.
[73] Sylvain Chevillard, Mioara Joldes, and Christoph Lauter. Certified and fast computation of supremum norms of approximation errors. In 19th IEEE Symposium on Computer Arithmetic (ARITH-19), Portland, Oregon, U.S.A.
[74] Sylvain Chevillard and Christoph Quirin Lauter. A certified infinite norm for the implementation of elementary functions. In Seventh International Conference on Quality Software (QSIC 2007), pages . IEEE.
[75] Sylvain Chevillard and Nathalie Revol. Computation of the error function erf in arbitrary precision with correct rounding. In RNC 8 Proceedings, 8th Conference on Real Numbers and Computers, pages . Javier D. Bruguera and Marc Daumas, July.
[76] Sylvain Collange, Jérémie Detrey, and Florent de Dinechin. Floating point or LNS: choosing the right arithmetic on an application basis. In 9th Euromicro Conference on Digital System Design: Architectures, Methods and Tools (DSD 2006), pages , Dubrovnik, Croatia, August. IEEE.
[77] Marc Daumas, Guillaume Melquiond, and Cesar Muñoz. Guaranteed proofs using interval arithmetic. In Proceedings of the 17th IEEE Symposium on Computer Arithmetic, pages , Cape Cod, Massachusetts, USA.
[78] Florent de Dinechin, Alexey Ershov, and Nicolas Gast. Towards the post-ultimate libm. In 17th Symposium on Computer Arithmetic, pages . IEEE.
[79] Florent de Dinechin, Mioara Joldes, Bogdan Pasca, and Guillaume Revy. Racines carrées multiplicatives sur FPGA. In 13e SYMPosium en Architectures nouvelles de machines (SYMPA), Toulouse, September.
[80] Florent de Dinechin, Cristian Klein, and Bogdan Pasca. Generating high-performance custom floating-point pipelines. In Proceedings of the 19th International Conference on Field Programmable Logic and Applications. IEEE, August.
[81] Florent de Dinechin, Christoph Quirin Lauter, and Guillaume Melquiond. Assisted verification of elementary functions using Gappa. In Proceedings of the 2006 ACM Symposium on Applied Computing, pages .
[82] Florent de Dinechin and Sergey Maidanov. Software techniques for perfect elementary functions in floating-point interval arithmetic. In Real Numbers and Computers, July.
[83] Florent de Dinechin, Eric McIntosh, and Franck Schmidt. Massive tracking on heterogeneous platforms. In 9th International Computational Accelerator Physics Conference (ICAP), October.
[84] Florent de Dinechin and Bogdan Pasca. Large multipliers with fewer DSP blocks. In Proceedings of the 19th International Conference on Field Programmable Logic and Applications. IEEE, August.
[85] Florent de Dinechin, Bogdan Pasca, Octavian Creţ, and Radu Tudoran. An FPGA-specific approach to floating-point accumulation and sum-of-products. In Field-Programmable Technologies, pages . IEEE.
[86] Jérémie Detrey. TutoRISC : un RISC dans mon FPGA. In 9es Journées Pédagogiques du CNFM, pages , Saint-Malo, France, November 2006.

[87] Jérémie Detrey and Florent de Dinechin. A parameterizable floating-point logarithm operator for FPGAs. In 39th Asilomar Conference on Signals, Systems & Computers. IEEE, November.
[88] Jérémie Detrey and Florent de Dinechin. A parameterized floating-point exponential function for FPGAs. In IEEE International Conference on Field-Programmable Technology (FPT 05). IEEE, December.
[89] Jérémie Detrey and Florent de Dinechin. Table-based polynomials for fast hardware function evaluation. In 16th Intl Conference on Application-specific Systems, Architectures and Processors (ASAP 2005). IEEE, July.
[90] Jérémie Detrey and Florent de Dinechin. Opérateurs trigonométriques en virgule flottante sur FPGA. In RenPar 17, SympA 2006, CFSE 5 et JC 2006, pages , Perpignan, France, October.
[91] Jérémie Detrey and Florent de Dinechin. Floating-point trigonometric functions for FPGAs. In International Conference on Field-Programmable Logic and Applications, pages . IEEE, August.
[92] Jérémie Detrey, Florent de Dinechin, and Xavier Pujol. Return of the hardware floating-point elementary function. In 18th Symposium on Computer Arithmetic (ARITH-18), pages . IEEE, June.
[93] Pouya Dormiani, Milos Ercegovac, and Jean-Michel Muller. Design and implementation of a radix-4 complex division unit with prescaling. In Proceedings of the 20th IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP 2009), Boston, U.S.A. IEEE Computer Society.
[94] Wayne Eberly, Mark Giesbrecht, Pascal Giorgi, Arne Storjohann, and Gilles Villard. Solving sparse integer linear systems. In Proc. International Symposium on Symbolic and Algebraic Computation, Genova, Italy, pages . ACM Press, July.
[95] Wayne Eberly, Mark Giesbrecht, Pascal Giorgi, Arne Storjohann, and Gilles Villard. Faster inversion and other black box matrix computation using efficient block projections. In Proc. International Symposium on Symbolic and Algebraic Computation, Waterloo, Canada, pages . ACM Press, August.
[96] Milos Ercegovac and Jean-Michel Muller. Arithmetic processor for solving tri-diagonal systems of linear equations. In Proc. 40th Conference on Signals, Systems and Computers, Pacific Grove, CA, November.
[97] Milos Ercegovac and Jean-Michel Muller. Complex multiply-add and other related operators. In Proceedings of SPIE Conf. Advanced Signal Processing Algorithms, Architectures and Implementation XVII, August.
[98] Milos Ercegovac and Jean-Michel Muller. A hardware-oriented method for evaluating complex polynomials. In Proceedings of 18th IEEE Conference on Application-Specific Systems, Architectures and Processors (ASAP 2007). IEEE, July.
[99] Milos D. Ercegovac, Jean-Michel Muller, and Arnaud Tisserand. Simple seed architectures for reciprocal and square root reciprocal. In Proc. 39th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, California, U.S.A., October.
[100] Ryan Glabb, Laurent Imbert, Graham Jullien, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Multi-mode operator for SHA-2 hash functions. In Proc. International Conference on Engineering of Reconfigurable Systems and Algorithms (ERSA), pages , Las Vegas, Nevada, U.S.A., June.
[101] Stef Graillat, Jean-Luc Lamotte, and Hong Diep Nguyen. Error-free transformation in rounding mode toward zero. In Dagstuhl Seminar on Numerical Validation in Current Hardware Architectures, volume 5492 of Lecture Notes in Computer Science, pages .
[102] Guillaume Hanrot, Vincent Lefèvre, Damien Stehlé, and Paul Zimmermann. Worst cases of a periodic function for large arguments. In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH-18), pages . IEEE.
[103] Guillaume Hanrot and Damien Stehlé. Improved analysis of Kannan's shortest lattice vector algorithm (extended abstract). In Proceedings of Crypto 2007, volume 4622 of LNCS, pages . Springer-Verlag.
[104] Claude-Pierre Jeannerod, Hervé Knochel, Christophe Monat, and Guillaume Revy. Faster floating-point square root for integer processors. In IEEE Symposium on Industrial Embedded Systems (SIES 07).
[105] Claude-Pierre Jeannerod, Hervé Knochel, Christophe Monat, Guillaume Revy, and Gilles Villard. A new binary floating-point division algorithm and its software implementation on the ST231 processor. In Proceedings of the 19th IEEE Symposium on Computer Arithmetic (ARITH-19), Portland, OR, June 2009.

[106] R. Baker Kearfott, John D. Pryce, and Nathalie Revol. Discussions on an interval arithmetic standard at Dagstuhl seminar. In Dagstuhl Seminar on Numerical Validation in Current Hardware Architectures, volume 5492 of Lecture Notes in Computer Science, pages 1–6.
[107] Peter Kornerup, Vincent Lefèvre, Nicolas Louvet, and Jean-Michel Muller. On the computation of correctly-rounded sums. In 19th IEEE Symposium on Computer Arithmetic (ARITH-19), Portland, Oregon, U.S.A.
[108] Peter Kornerup, Vincent Lefèvre, and Jean-Michel Muller. Computing integer powers in floating-point arithmetic. In Proceedings of the 41st Conference on Signals, Systems and Computers. IEEE, November.
[109] Peter Kornerup and Jean-Michel Muller. RN-coding of numbers: definition and some properties. In Proceedings of the 17th IMACS World Congress on Scientific Computation, Applied Mathematics and Simulation, Paris, July.
[110] Philippe Langlois and Nicolas Louvet. Compensated Horner algorithm in K times the working precision. In RNC 8 Proceedings, 8th Conference on Real Numbers and Computers, pages . Javier D. Bruguera and Marc Daumas, July.
[111] Christoph Quirin Lauter and Florent de Dinechin. Optimising polynomials for floating-point implementation. In RNC 8 Proceedings, 8th Conference on Real Numbers and Computers, pages 7–16, July.
[112] Vincent Lefèvre, Damien Stehlé, and Paul Zimmermann. Worst cases for the exponential function in the IEEE 754r decimal64 format. In Reliable Implementation of Real Number Algorithms: Theory and Practice, volume 5045 of Lecture Notes in Computer Science, pages , Dagstuhl, Germany. Springer.
[113] Guillaume Melquiond and Sylvain Pion. Formal certification of arithmetic filters for geometric predicates. In Proceedings of the 17th IMACS World Congress on Computational and Applied Mathematics, Paris, France.
[114] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Divgen: a divider unit generator. In Proc. Advanced Signal Processing Algorithms, Architectures and Implementations XV, volume 5910, page 59100M, San Diego, California, U.S.A., August. SPIE.
[115] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Étude statistique de l'activité de la fonction de sélection dans l'algorithme de E-méthode. In 5es journées d'études Faible Tension Faible Consommation (FTFC), pages 61–65, Paris, May.
[116] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Évaluation de polynômes et de fractions rationnelles sur FPGA avec des opérateurs à additions et décalages en grande base. In 10e SYMPosium en Architectures nouvelles de machines (SYMPA), pages 85–96, Le Croisic, April.
[117] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Small FPGA polynomial approximations with 3-bit coefficients and low-precision estimations of the powers of x. In Proc. 16th International Conference on Application-specific Systems, Architectures and Processors (ASAP 2005), pages , Samos, Greece, July. IEEE. Best Paper Award.
[118] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. New identities and transformations for hardware power operators. In Proc. Advanced Signal Processing Algorithms, Architectures and Implementations XVI, San Diego, California, U.S.A., August. SPIE.
[119] Romain Michard, Arnaud Tisserand, and Nicolas Veyrat-Charvillon. Optimisation d'opérateurs arithmétiques matériels à base d'approximations polynomiales. In 11e SYMPosium en Architectures nouvelles de machines (SYMPA), Perpignan, October.
[120] Ivan Morel, Damien Stehlé, and Gilles Villard. H-LLL: Using Householder inside LLL. In Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC 2009), Seoul, Korea, pages . ACM Press, August.
[121] Jean-Michel Muller. Generating functions at compile-time. In Proc. 40th Conference on Signals, Systems and Computers, Pacific Grove, CA, November.
[122] Jean-Michel Muller, Arnaud Tisserand, Benoît Dupont de Dinechin, and Christophe Monat. Division by constant for the ST100 DSP microprocessor. In Proc. 17th Symposium on Computer Arithmetic (ARITH-17), pages , Cape Cod, MA, U.S.A., June. IEEE Computer Society.

[123] Xavier Pujol and Damien Stehlé. Rigorous and efficient short lattice vectors enumeration. In Proceedings of ASIACRYPT 08, volume 5350 of Lecture Notes in Computer Science, pages . Springer.
[124] Arne Storjohann and Gilles Villard. Computing the rank and a small nullspace basis of a polynomial matrix. In Proc. International Symposium on Symbolic and Algebraic Computation, Beijing, China, pages . ACM Press, July.
[125] Ionuţ Trestian, Octavian Creţ, Laura Creţ, Lucia Vǎcariu, Radu Tudoran, and Florent de Dinechin. FPGA-based computation of the inductance of coils used for the magnetic stimulation of the nervous system. In Biomedical Electronics and Devices, volume 1, pages .
[126] Gilles Villard. Certification of the QR factor R, and of lattice basis reducedness. In Proc. International Symposium on Symbolic and Algebraic Computation, Waterloo, Canada, pages . ACM Press, August.
[127] Gilles Villard. Differentiation of Kaltofen's division-free determinant algorithm. In MICA 2008: Milestones in Computer Algebra, Stonehaven Bay, Trinidad and Tobago, May.

Short communications [COM] and posters [AFF] in conferences and workshops

[128] Sylvie Boldo, Marc Daumas, William Kahan, and Guillaume Melquiond. Proof and certification for an accurate discriminant. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany.
[129] Alin Bostan, Claude-Pierre Jeannerod, and Éric Schost. Solving structured linear systems of large displacement rank. Poster at ISSAC 2006, July.
[130] Hervé Brönnimann, Guillaume Melquiond, and Sylvain Pion. Proposing interval arithmetic for the C++ standard. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany.
[131] Francisco José Cháves Alonso, Marc Daumas, Cesar Muñoz, and Nathalie Revol. Automatic strategies to evaluate formulas on Taylor models and generate proofs in PVS. In 6th International Congress on Industrial and Applied Mathematics (ICIAM 07), July.
[132] Sylvain Chevillard and Christoph Quirin Lauter. Certified infinite norm using interval arithmetic. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany.
[133] Florent de Dinechin. Elementary functions for double-precision interval arithmetic. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany.
[134] Florent de Dinechin, Jérémie Detrey, Ionuţ Trestian, Octavian Creţ, and Radu Tudoran. When FPGAs are better at floating-point than microprocessors. Poster at FPGA 2008.
[135] Claude-Pierre Jeannerod, Nicolas Louvet, Nathalie Revol, and Gilles Villard. Computing condition numbers with automatic differentiation. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, El Paso, Texas.
[136] Claude-Pierre Jeannerod, Nicolas Louvet, Nathalie Revol, and Gilles Villard. On the computation of some componentwise condition numbers. In SNSC 08, 4th International Conference on Symbolic and Numerical Scientific Computing, Hagenberg, Austria.
[137] Claude-Pierre Jeannerod, Christophe Mouilleron, and Gilles Villard. Extending Cardinal's algorithm to a broader class of structured matrices. Poster at ISSAC 2009, July.
[138] Claude-Pierre Jeannerod, Saurabh Kumar Raina, and Arnaud Tisserand. High-radix floating-point division algorithms for embedded VLIW integer processors (extended abstract). In Proc. 17th World Congress on Scientific Computation, Applied Mathematics and Simulation IMACS, Paris, France, July.
[139] Philippe Langlois and Nicolas Louvet. Accurate solution of triangular linear system. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, El Paso, Texas, 2008.

[140] Christoph Quirin Lauter. A survey of multiple precision computation using floating-point arithmetic. In Taylor Models 2006, Florida.
[141] Ivan Morel. From an LLL-reduced basis to another. Poster at ISSAC 2008, July.
[142] Hong-Diep Nguyen and Nathalie Revol. Solving and certifying the solution of a linear system. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, El Paso, Texas.
[143] Nathalie Revol. Bounding roundoff errors in Taylor models arithmetic. In 17th IMACS Conf. on Scientific Computation, Applied Math. and Simulation, Paris, France, July.
[144] Nathalie Revol. Design choices and their limits for an interval arithmetic library. The example of MPFI. In SCAN, GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany.
[145] Nathalie Revol. Implementing Taylor models arithmetic with floating-point arithmetic. In 6th International Congress on Industrial and Applied Mathematics (ICIAM 07), July.
[146] Nathalie Revol. Automatic adaptation of the computing precision. In V Taylor Models Workshop, May.
[147] Nathalie Revol. Survey of proposals for the standardization of interval arithmetic. In SWIM 08: Small Workshop on Interval Methods, June.

Scientific books and book chapters [OS]

[148] Florent de Dinechin, Milos Ercegovac, Jean-Michel Muller, and Nathalie Revol. Encyclopedia of Computer Science and Engineering, chapter Digital Arithmetic. Wiley.
[149] Jean-Michel Muller. Elementary Functions, Algorithms and Implementation. Birkhäuser Boston, 2nd Edition.
[150] Jean-Michel Muller, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, Damien Stehlé, and Serge Torres. Handbook of Floating-Point Arithmetic. Birkhäuser Boston. ACM G.1.0; G.1.2; G.4; B.2.0; B.2.4; F.2.1. To appear.
[151] Damien Stehlé. LLL+25: 25th Anniversary of the LLL Algorithm Conference, chapter Floating-point LLL: theoretical and practical aspects. Springer. To appear.

Scientific popularization [OV]

[152] Vincent Lefèvre and Jean-Michel Muller. Erreurs en arithmétique des ordinateurs. Images des mathématiques, June.
[153] Jean-Michel Muller. Ordinateurs en quête d'arithmétique. In Histoire des Nombres. Taillendier.

Book or Proceedings editing [DO]

[154] Marc Daumas and Nathalie Revol, editors. Special issue on Real Numbers and Computers, volume 351 of Theoretical Computer Science.
[155] Peter Hertling, Christoph M. Hoffmann, Wolfram Luther, and Nathalie Revol, editors. Special issue on Reliable Implementation of Real Number Algorithms: Theory and Practice, volume 5045 of Lecture Notes in Computer Science.
[156] Peter Kornerup, Paolo Montuschi, Jean-Michel Muller, and Eric Schwarz, editors. Special Section on Computer Arithmetic, volume 58 of IEEE Transactions on Computers, February.
[157] Peter Kornerup and Jean-Michel Muller, editors. Proceedings of the 18th IEEE Symposium on Computer Arithmetic. IEEE Conference Publishing Services, June 2007.

Other Publications [AP]

[158] Sylvie Boldo and Guillaume Melquiond. Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd. Technical report, INRIA. Available at inria.
[159] Sylvie Boldo and Cesar Muñoz. A formalization of floating-point numbers in PVS. Technical report, National Institute for Aerospace. A much shorter version of this paper has been published in Proceedings of the 21st Annual ACM Symposium on Applied Computing (2006).
[160] Sylvain Chevillard. Polynômes de meilleure approximation à coefficients flottants. Master thesis, Laboratoire de l'Informatique du Parallélisme, ENS Lyon.
[161] Sylvain Chevillard. The functions erf and erfc computed with arbitrary precision. Technical Report RR , Laboratoire de l'Informatique du Parallélisme (LIP), 46, allée d'Italie, Lyon Cedex 07, January.
[162] Florent de Dinechin, Jérémie Detrey, Ionuţ Trestian, Octavian Creţ, and Radu Tudoran. When FPGAs are better at floating-point than microprocessors. Technical Report ensl , École Normale Supérieure de Lyon.
[163] Florent de Dinechin, Christoph Quirin Lauter, and Guillaume Melquiond. Certifying floating-point implementations using Gappa. Technical Report ensl , arXiv: , École Normale Supérieure de Lyon.
[164] Guillaume Hanrot and Damien Stehlé. Worst-case Hermite-Korkine-Zolotarev reduced lattice bases. Technical report, Maths Arxiv.
[165] Claude-Pierre Jeannerod. LSP matrix decomposition revisited. Research report RR , Laboratoire de l'Informatique du Parallélisme, ENS Lyon, September.
[166] Claude-Pierre Jeannerod, Hervé Knochel, Christophe Monat, and Guillaume Revy. Computing floating-point square roots via bivariate polynomial evaluation. Technical Report ensl , LIP, École Normale Supérieure de Lyon.
[167] Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller, and Adrien Panhaleux. Midpoints and exact points of some algebraic functions in floating-point arithmetic. Technical Report ensl , LIP, École Normale Supérieure de Lyon.
[168] Claude-Pierre Jeannerod and Guillaume Revy. Optimizing correctly-rounded reciprocal square roots for embedded VLIW cores. Technical Report ensl , LIP, École Normale Supérieure de Lyon.
[169] Mioara Joldes. Computation of various infinite norms, in one or several variables, designed for the development of mathematical libraries. Master thesis, Laboratoire de l'Informatique du Parallélisme, ENS Lyon.
[170] Christoph Quirin Lauter. Basic building blocks for a triple-double intermediate format. Technical Report RR , LIP, September.
[171] Christoph Quirin Lauter. Exact and mid-point rounding cases of power(x,y). Technical Report , Laboratoire de l'Informatique du Parallélisme.
[172] Bogdan Pasca. Customizing floating-point operators for linear algebra acceleration on FPGAs. Master thesis, Laboratoire de l'Informatique du Parallélisme, ENS Lyon.
[173] Clément Pernet, Aude Rondepierre, and Gilles Villard. Computing the Kalman form. Research report ccsd , arXiv cs.sc/ , IMAG, Grenoble, October.
[174] Guillaume Revy. Analyse et implantation d'algorithmes rapides pour l'évaluation polynomiale sur les nombres flottants. Master thesis, Laboratoire de l'Informatique du Parallélisme, ENS Lyon.
[175] Gilles Villard. Certification of the QR Factor R, and of Lattice Basis Reducedness (extended version of [126]). Research report hal , arXiv: , CNRS, Université de Lyon, INRIA.
[176] Gilles Villard. Kaltofen's division-free determinant algorithm differentiated for matrix adjoint computation (journal submission). Research report ensl , arXiv: , CNRS, Université de Lyon, INRIA, 2008.

80 66 Part 3. Arénaire Doctoral Dissertations and Habilitation Theses [TH] [177] Francisco José Cháves Alonso. Utilisation et certification de l arithmétique d intervalles dans un assistant de preuves. PhD thesis, École Normale Supérieure de Lyon, France, September [178] Sylvain Chevillard. Évaluation efficace de fonctions numériques - Outils et exemples. PhD thesis, École Normale Supérieure de Lyon, France, July [179] Florent de Dinechin. Matériel et logiciel pour l évaluation de fonctions numériques. Précision, performance et validation. Habilitation à diriger des recherches, Université Claude Bernard - Lyon 1, June [180] Jérémie Detrey. Arithmétiques réelles sur FPGA : virgule fixe, virgule flottante et système logarithmique. PhD thesis, École Normale Supérieure de Lyon, France, January [181] Christoph Quirin Lauter. Arrondi correct de fonctions mathématiques - fonctions univariées et bivariées, certification et automatisation. PhD thesis, École Normale Supérieure de Lyon, France, October [182] Guillaume Melquiond. De l arithmétique d intervalles à la certification de programmes. PhD thesis, École Normale Supérieure de Lyon, France, November [183] Romain Michard. Opérateurs arithmétiques matériels optimisés. PhD thesis, École Normale Supérieure de Lyon, France, June [184] Louvet Nicolas. Algorithmes compensés en arithmétique flottante : précision, validation, performances. PhD thesis, Université de Perpignan, France, November [185] Saurabh Kumar Raina. FLIP: a floating-point library for integer processors. PhD thesis, École Normale Supérieure de Lyon, France, September [186] Guillaume Revy. "Mathematical functions in floating-point arithmetic for integer processors - Algorithms, implementation, polynomial evaluation and code generation. PhD thesis, École Normale Supérieure de Lyon, France, [187] Nicolas Veyrat-Charvillon. Opérateurs arithmétiques matériels pour des applications spécifiques. 
PhD thesis, École Normale Supérieure de Lyon, France, June.

Publication categories:
ACL - International and national peer-reviewed journals
INV - Invited conferences
ACT - International and national peer-reviewed conference proceedings
COM / AFF - Short communications and posters in international and national conferences and workshops
OS - Scientific books and book chapters
OV - Scientific popularization
DO - Book or proceedings editing
AP - Other publications
TH - Doctoral Dissertations and Habilitation Theses

Other Productions: Software [AP]

[188] David Cadé, Xavier Pujol, and Damien Stehlé. fplll.
[189] Sylvain Chevillard, Mioara Joldes, Nicolas Jourdan, and Christoph Lauter. Sollya, a toolbox for manipulating numerical functions.
[190] Sylvain Collange, Jérémie Detrey, Florent de Dinechin, Bogdan Pasca, Cristian Klein, and Xavier Pujol. FloPoCo, a generator of arithmetic cores for FPGAs.

3.13 Computer Arithmetic

[191] Sylvain Collange, Jérémie Detrey, Florent de Dinechin, and Xavier Pujol. FPLibrary, a library of floating-point operators for FPGAs.
[192] Catherine Daramy-Loirat, David Defour, Florent de Dinechin, Matthieu Gallet, Nicolas Gast, Christoph Lauter, and Jean-Michel Muller. CRlibm, a library of correctly rounded elementary functions in double-precision.
[193] Laurent Fousse, Guillaume Hanrot, Vincent Lefèvre, Patrick Pélissier, Philippe Théveny, and Paul Zimmermann. GNU MPFR, March.
[194] The LinBox Group. LinBox.
[195] Claude-Pierre Jeannerod, Nicolas Jourdan, Saurabh Kumar Raina, Guillaume Revy, and Arnaud Tisserand. FLIP, a floating-point library for integer processors.
[196] Christoph Lauter. Metalibm, a code generator for elementary functions.
[197] Guillaume Melquiond. Gappa, a tool for certifying numerical programs.


4. Compsys - Compilation and Embedded Computing Systems

Team: COMPSYS
Scientific leader: Alain DARTE
Evaluation Web site:
Parent organizations: École Normale Supérieure de Lyon, Université de Lyon, CNRS, Inria

Team history: Compsys was created in 2002 by T. Risset and A. Darte. It then grew to 5 permanent researchers with the successive arrivals of P. Feautrier, F. Rastello, and A. Fraboulet. It became an Inria research project in 2004. It then had a second birth in 2007, with the departure of T. Risset and A. Fraboulet (who left LIP in 2005, then the Inria project-team in 2007), followed by the recent hiring of C. Alias. There was thus an important shift of activities over the period: until the end of 2007, a first research phase was coming to an end, and 2008 marked a new start for Compsys, with new objectives.

4.1 Team composition

Current team members

Permanent researchers:
- ALIAS Christophe, Research scientist, Inria, arrived 01/01/2009
- DARTE Alain, Research director, CNRS, arrived 15/09/1993
- FEAUTRIER Paul, Full professor, ENS-Lyon, arrived 01/09/2002
- RASTELLO Fabrice, Research scientist, Inria, arrived 01/10/2003

Post-docs, engineers and visitors:
- COLOMBET Quentin, Expert engineer, INRIA Rhône-Alpes, arrived 01/11/2007
- GONNORD Laure, ATER, Université Claude Bernard, arrived 01/10/2008

Doctoral students:
- BOISSINOT Benoit, ENS-Lyon grant, first registered 01/09/2006
- PLESCO Alexandru, MESR grant, first registered 01/09/2006

Past team members (Oct. 2005 - Oct. 2009):
- FRABOULET Antoine, Associate professor, Insa-Lyon, 01/10/ - /06/2007; now back at Insa-Lyon
- RISSET Tanguy, Research scientist, Inria, 01/09/ - /06/2007; now professor at Insa-Lyon

Past doctoral students (dates of first registration and departure, institution, doctoral school, supervisors, current position):

- BOUCHEZ Florent, 01/09/ - /05/2009, ENS-Lyon, MATHIF, A. Darte and F. Rastello; now post-doc in Bangalore
- CHERROUN Hadda, 01/09/ - /12/2007, Alger, P. Feautrier; now associate professor in Alger
- FOURNEL Nicolas, 01/09/ - /12/2007, ENS-Lyon, MATHIF, P. Feautrier and A. Fraboulet; now research engineer at Tima
- GROSSE Philippe, 01/09/ - /12/2007, ENS-Lyon, MATHIF, P. Feautrier; now at Fraunhofer Inst. Erlangen
- SLAMA Yosr, 01/09/ - /12/2006, Tunis, P. Feautrier; now associate professor in Tunis
- QUINSON Clément, 01/09/ - /12/2008, ENS-Lyon, MATHIF, A. Darte and T. Risset
- SCHERRER Antoine, 01/09/ - /12/2006, ENS-Lyon, MATHIF, A. Fraboulet

Past post-doctoral researchers, engineers and visitors (at least 3 months):
- ALIAS Christophe, Post-doc, 01/02/ - /12/2007, from Versailles; now research scientist at Inria
- HACK Sebastian, Post-doc, 15/12/ - /12/2007, from Karlsruhe; now associate professor in Sarrebrücken
- LABBANI Ouassila, Post-doc, 15/12/ - /12/2007, from Lille; now associate professor in Dijon

Evolution of the team

Before its creation, all members of Compsys had been working, more or less, in the field of automatic parallelization and high-level program transformations. Paul Feautrier was the initiator of the polytope model for program transformations in the 90s and, before coming to Lyon, had become more interested in programming models and optimizations for embedded applications, in particular through collaborations with Philips. Alain Darte worked on mathematical tools and algorithmic issues for parallelism extraction in programs. He became interested in the automatic generation of hardware accelerators thanks to his stay at HP Labs, in the Pico project, in Spring. Antoine Fraboulet did a PhD with Anne Mignotte, who was working on high-level synthesis (HLS), on code and memory optimizations for embedded applications.
Fabrice Rastello did a PhD on tiling transformations for parallel machines, then was hired by STMicroelectronics, where he worked on assembly code optimizations for embedded processors. Tanguy Risset worked for a long time on the synthesis of systolic arrays and was the main architect of the HLS tool MMAlpha. Most researchers in France working on high-performance computing (automatic parallelization, languages, operating systems, networks) moved to grid computing at the end of the 90s. We all thought that the applications, industrial needs, and research problems arising in the design of embedded platforms were more important. Furthermore, we were convinced that our expertise on high-level code transformations could be more useful in this field. This is why Tanguy Risset came from Rennes to Lyon in 2002 to create the Compsys team (Compilation and Embedded Computing Systems) with Alain Darte and Anne Mignotte (who retired soon after, for personal reasons); Paul Feautrier, Antoine Fraboulet, and Fabrice Rastello quickly joined the group. Until 2007, Compsys was always a team based at LIP, but with external collaborators from Insa-Lyon (Anne Mignotte and Antoine Fraboulet first, then Tanguy Risset). Compsys became an Inria project-team in 2004 and was positively evaluated in 2007. In 2005, Tanguy Risset left LIP to take a professor position at Insa-Lyon, while Antoine Fraboulet came to LIP in the context of an Inria delegation. Both then slowly withdrew from the group's activities, until the Inria evaluation of Compsys (Spring 2007), in which they participated. The team then continued with only three permanent researchers: Paul Feautrier, Fabrice Rastello, and Alain Darte. In January 2009, Christophe Alias (who had been a post-doc in Compsys) came back from his post-doc in the USA to take an Inria research scientist position within Compsys. Compsys thus now has 4 permanent researchers.
The reduction in manpower due to the departure of Tanguy Risset and Antoine Fraboulet has induced, and will continue to induce, a more focused range of activities and objectives.

4.2 Executive summary

Keywords: Embedded systems, DSP, VLIW, FPGA, hardware accelerators, compilation, code & memory optimization, program analysis, high-level synthesis, parallelism, scheduling, polyhedra, graphs, regular computations.

Research area

The industry sells many more embedded processors than general-purpose processors, and the field of embedded systems is one of the few segments of the computer market where the European industry still has a substantial share. Compsys is primarily concerned with compilation, and more precisely code optimization, for a subclass of embedded systems, called embedded computing systems, whose applications are compute-intensive (signal processing, multimedia, stream processing) rather than control-intensive or subject to hard real-time constraints. Two aspects are of importance: code optimization for embedded processors (DSP, VLIW, or possibly even micro-controllers) and code optimization for high-level synthesis (HLS) of hardware accelerators. The first includes aggressive static code optimization as well as just-in-time compilation. The second includes memory and communication optimizations, hardware/software co-design, power analysis, design methodologies, etc.

Main goals

The goal of Compsys is to adapt and extend code optimization techniques, primarily developed in compilation and parallelization for high-performance computing (HPC), to the domain of compilation for embedded computing systems. Compsys focuses more particularly on micro-code optimizations for embedded processors and on high-level synthesis of hardware accelerators. Our aim is to help bridge the gap between compilation and architecture, and between computer science and electrical engineering. Indeed, we are convinced that the techniques and mathematical tools developed in the past for HPC are of interest for specific optimizations in HLS and embedded program optimization. But we also believe that optimizing and designing embedded systems is not yet a routine effort, and that a lot needs to be learned from the application side, the electrical engineering side, and the embedded architecture side.
Methodology

A specificity of Compsys is to tackle combinatorial optimization problems (graphs, linear programming, polyhedra) arising from actual compilation problems (register allocation, cache optimization, memory allocation, scheduling, power consumption, generation of software/hardware interfaces, etc.) and to validate these developments in compiler tools. To address relevant problems and to have more impact, we believe our research efforts should be combined with strong industrial collaborations. We have established a strong collaboration with STMicroelectronics (Grenoble and Crolles), through which we validate our results.

Highlights

The main scientific achievements of Compsys in the period were the following. We developed a strong collaboration with the compilation group at STMicroelectronics, with important results on instruction cache optimization, register allocation, and static single assignment (SSA). In particular, Compsys was the first group to push the use of SSA for register allocation. We completely deconstructed the classic view on register allocation, and, with our colleagues from STMicroelectronics, we are now well identified internationally for this contribution. In addition to our results and publications, this topic created a lot of activity in seminars, tutorials, workshop organization (for example, we organized the first seminar on SSA and will organize CGO 2011), research proposals, hiring of young researchers, etc. We were also convinced that our past expertise on loop transformations and polyhedral optimizations could be useful for embedded computing, for many reasons (see hereafter), but that previous techniques needed to be adapted and even extended. We were successful in developing even further the foundations of high-level program transformations, including scheduling techniques for Kahn processes, lattice-based techniques for memory reuse, and, more recently, analysis and transformations for while loops, to go beyond standard static affine loops.
Such work is generating more and more interest from the high-level synthesis community. We also had many original contributions with partners closer to hardware constraints, including CEA, related to SoC simulation, hardware/software interfaces, power models, and simulators. This activity, however, will be stopped following the departure of T. Risset and A. Fraboulet. Over this period, Compsys activities resulted in 6 PhD theses (4 local, 2 in the Maghreb), 5 journal articles, and 32 conference publications, including 5 best paper awards. Compsys became an Inria project-team in 2004. The main event in 2007 was its evaluation in April, conducted by Erik Hagersted (Uppsala University), Vinod Kathail (Synfora, Inc.), and Ramanujam (Baton Rouge University). Compsys was positively evaluated and was allowed to continue for 4 more years, but in a new configuration, as Tanguy Risset and Antoine Fraboulet left the project to follow research directions closer to their host laboratory at Insa-Lyon. Due to this size reduction, Compsys decided to focus on only two research directions: code generation for embedded processors (aggressive compilation and just-in-time compilation), and high-level program analysis and transformations for high-level synthesis tools. Both activities are supported by contracts with STMicroelectronics (its compilation team and HLS team, respectively) and by collaborations with academic partners from Inria (Alchemy and Cairn, respectively). In addition to the annual Inria funding, this provides sufficient funding for the next 3 years. Nevertheless, to make the French HLS community more cohesive, collaborations have started with the other French HLS research group, which may lead to an ANR proposal.

The last important highlight concerns human resources. After the departure of Tanguy Risset and Antoine Fraboulet, and with the expected retirement of Paul Feautrier, Compsys was in great danger of being reduced to only 2 members. Fortunately, Paul Feautrier will continue as an emeritus professor, and Christophe Alias was hired as an Inria research scientist. Therefore, at least for the next 3 years, Compsys should have a stable size, with well-defined objectives.

4.3 Research activities

When creating Compsys, we were all convinced of three complementary facts: a) the mathematical tools developed in the past for manipulating programs in automatic parallelization were lacking in high-level synthesis and embedded computing optimizations and, even more, were frequently being rediscovered in less mature forms; b) before being able to really use these techniques in HLS and embedded program optimizations, we needed to learn a lot from the application side, the electrical engineering side, and the embedded architecture side; c) compilation and code optimization will be at the heart of the development of embedded computing systems. Our primary goal was thus twofold: to increase our knowledge of embedded computing systems and to adapt and extend code optimization techniques, primarily designed for high-performance computing, to these systems. We proposed four research directions, centered on compilation methods for embedded applications, for both software and accelerator design:

- Code optimizations for special-purpose processors;
- Platform-independent high-level code transformations;
- Hardware and software system integration;
- Development of polyhedral tools (a transversal objective, see Section 4.8 on software production).

Objective 1: Code optimization for special-purpose processors

List of participants: B. Boissinot, F. Bouchez, Q. Colombet, A. Darte, S. Hack, F. Rastello.
Keywords: Compilation, code optimization, register allocation, SSA form, VLIW and DSP processors.

Scientific issues, goals, and positioning of the team: Our goal is to develop a new understanding of optimizations and algorithms for aggressive and JIT compilation. We focus on a deep study of combinatorial problems to derive new strategies. We also pay attention to the applicability of our ideas, in collaboration with STMicroelectronics.

Major results (Oct. 2005 - Oct. 2009): Strong collaboration with STMicroelectronics and many joint publications, including 3 best papers in a row at CGO (2007-2009)! Compsys is the initiator of a new and, for many, fascinating view on register allocation in the light of SSA form, which completely deconstructs the common belief. This led to 2 tutorials on the subject and the organization of the very first seminar on SSA (4 days in Autrans).

Self-assessment: Our activities with STMicroelectronics are a huge success, and not only for the contracts we get. Being able to work within a complete industrial compiler makes our research relevant and more than just theoretical. We also attract young researchers with this activity (PhD students and international post-docs), and our results are being used (or at least studied) by more and more people worldwide, including in industry. This work was done in close collaboration with STMicroelectronics, including B. Dupont de Dinechin, C. Guillon, F. de Ferrière, and T. Bidault. We also have contacts (through visits) or stronger collaborations with M. Smith (Harvard), J. Palsberg (UCLA), S. Amarasinghe (MIT), P. Brisk (EPFL), S. Hack (Sarrebrücken), and F. Pereira (Belo Horizonte).

Context and goals

Compilation for embedded processors is either aggressive or just in time (JIT).
Aggressive compilation allows more time to implement costly solutions (so complete, even expensive, studies are legitimate): the compiled program is loaded in permanent memory, and the compilation time needed to obtain it is not so significant. For embedded systems, code size and energy consumption usually have a critical impact on the cost and quality of the final product, hence the application is cross-compiled, i.e., compiled on a powerful platform distinct from the target processor. JIT compilation, on the other hand, corresponds to compiling applets on demand on the target processor. The code can be uploaded or sold separately on a flash memory, and compilation is performed at load time or even dynamically during execution. The heuristics, constrained by time and limited resources, cannot be too aggressive: they must be fast enough. In this context, our goal is to contribute to the understanding of the combinatorial problems that arise in compilation for embedded processors (e.g., in opcode selection, SSA conversion, register allocation, or code placement in the instruction cache) so as to derive both aggressive heuristics and JIT techniques. So far, we have concentrated our research on instruction cache optimization, SSA conversion, and register allocation (the interaction of register assignment, coalescing, and spilling) for aggressive compilation. JIT compilation will be addressed more deeply in the upcoming years. A first specificity of our research is that we try to add theoretical value to the problems we address (using graph theory, NP-completeness, integer linear programming), even for problems that may appear old (such as register allocation). The second is that we are fortunate, thanks to our collaboration with STMicroelectronics (see Section 4.6.1), to be able to implement and test our techniques directly within an industrial compiler.
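As a toy illustration of the graph-coloring view of register allocation mentioned above (a hypothetical sketch, not the allocator studied with STMicroelectronics): variables whose live ranges overlap interfere and must get distinct registers, so assigning registers amounts to coloring the interference graph.

```python
def greedy_coloring(interference, order):
    """Color an interference graph; each color stands for one register.

    `interference` maps each variable to the set of variables whose live
    ranges overlap with it. This greedy step (take the lowest register not
    used by an already-colored neighbor) is the core of Chaitin-style
    allocators; the hard part in practice is choosing `order` and deciding
    what to spill when registers run out.
    """
    color = {}
    for v in order:
        used = {color[u] for u in interference[v] if u in color}
        reg = 0
        while reg in used:
            reg += 1
        color[v] = reg
    return color

# 'a' interferes with 'b', and 'b' with 'c': two registers suffice.
regs = greedy_coloring({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}},
                       order=["a", "b", "c"])
```

For chordal interference graphs (the SSA case discussed below), a suitable elimination order makes this greedy step optimal.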

Instruction cache optimization

In the ST220 processor, the instruction cache is direct-mapped: cache line i can hold only instructions whose addresses, modulo L, are equal to i, where L is the number of lines. When a program starts executing a block not in the cache, the block must be loaded from main memory (a cache miss). This happens either at the first use of a function or after a conflict, i.e., when two functions whose executions are interleaved share the same cache line and evict each other from the cache. A cache miss costs roughly 150 cycles on the ST220, hence the interest in minimizing line sharing between conflicting functions through smart procedure placement at link time. Gloy & Smith proposed a greedy heuristic based on a conflict graph called the temporal relationship graph. This heuristic provides reasonable results for general-purpose processors, but with a huge code size expansion. Pettis & Hansen proposed a greedy fusion without code size expansion to enhance locality; as a side effect, for small codes, this also reduces cache conflicts. We revisited these approaches and proved several complexity results related to cache conflicts, code size, and locality optimization [201]. In particular, we showed that once the placement of functions (i.e., modulo L) is decided, placing the functions in the code so as to minimize its size can be done in polynomial time (a traveling salesman problem on a cyclic-metric graph). For choosing the placement modulo L, we proposed several heuristic improvements. We are still working on this problem; one of the challenges is to generate a relevant conflict graph, not from an execution trace, but from the control-flow graph.

SSA conversion and register allocation

The static single assignment (SSA) form is an intermediate representation in which multiplexers (called φ functions) are used to merge values at a join point in the control-flow graph.
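The multiplexer role of a φ function can be illustrated on a tiny example (an illustrative sketch, not taken from the report): after SSA conversion, each assignment targets a fresh variable, and a φ at the join point selects the value flowing in from the branch that was actually executed.

```python
def phi(from_then, v_then, v_else):
    # A phi "function" acts as a multiplexer: it yields the value coming
    # from the predecessor block that was actually executed.
    return v_then if from_then else v_else

def f(p):
    # Original code:        SSA form:
    #   if p: x = 1           x1 = 1            (then branch)
    #   else: x = 2           x2 = 2            (else branch)
    #   y = x + 1             x3 = phi(x1, x2)  (join point)
    #                         y1 = x3 + 1
    x1, x2 = 1, 2
    x3 = phi(p, x1, x2)
    y1 = x3 + 1
    return y1
```

Since each SSA variable has a single definition, analyses such as liveness or constant propagation become much simpler, which is why translating φ functions back into copies is deferred to the very end of compilation.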
The SSA form is becoming more and more popular in compilers thanks to properties that speed up optimizations and make them easier to implement. Unfortunately, the SSA form is not machine code, and φ functions have to be replaced, at the end of the process, by register-to-register copy instructions on control-flow edges. Naive methods for translating out of SSA generate many useless copies (live-range splitting) and additional goto instructions (edge splitting). Our past experience with out-of-SSA translation was a bit frustrating: although our strategy improved previous translation schemes, the final execution times were sometimes disappointing. The reason is that overly aggressive coalescing (removal of register copies) can degrade the next compilation phase, register allocation, by increasing spilling (load/store insertions). However, we quickly understood that SSA could be useful for register allocation, as graph coloring under SSA is polynomial because the corresponding interference graph is chordal, a property strangely never noticed nor used before in the compilation community.¹ This remark was the starting point of deep and long research on register allocation, including the absorption of the literature on this very old problem. Before claiming anything new, we needed to study the problem in all its aspects, to check that what we thought we could improve on one side would not be destroyed by another, unexpected aspect. We first revisited the initial NP-completeness proof given in 1981 by Chaitin et al. to show that it had been used wrongly to justify heuristics for problems that it does not cover. Our study shows that, somewhat surprisingly, the NP-completeness of register allocation is not due to the coloring phase, as might be suggested by a misinterpretation of Chaitin et al.'s reduction from graph k-coloring. If live-range splitting is taken into account, deciding whether k registers are enough or whether some spilling is necessary is not as hard as one might think.
In fact, the NP-completeness of register allocation is due to three factors: special edges (called critical or abnormal) in the control-flow graph, the optimization of spilling costs (if k registers are not enough), and the optimization of coalescing costs, i.e., deciding which live ranges should be fused while keeping the graph k-colorable. These results were presented at WDDD 06 [220] (the debunking workshop) and LCPC 06 [226]. We also made a complete study of spill-everywhere spilling problems (LCTES 07 [237]) and of coalescing problems (CGO 07 [230] for the theory, with a best paper award, and CASES 08 [243] for heuristics). In both cases, we analyzed the complexity in terms of graph structures, especially chordal graphs and k-greedy graphs, those used in register allocation based on graph coloring. These studies are the basis for developing new register allocation schemes in two phases: a first spilling phase, followed by a coloring/coalescing/splitting phase with no additional spills.

SSA and JIT compilation

Liveness analysis is an important analysis in optimizing compilers. Liveness information is used in several optimizations and is mandatory during code generation. Two drawbacks of conventional analyses are their fairly expensive computation and the fact that their results are easily invalidated by program transformations. We proposed a method to check the liveness of SSA variables that overcomes both obstacles by avoiding the construction of interference graphs and liveness sets. The result of our analysis survives all program transformations except changes to the control-flow graph itself. For common program sizes, our technique is faster and consumes less memory than conventional data-flow approaches. We make heavy use of SSA-form properties to completely circumvent data-flow equation solving. This fast liveness checking received the best paper award at CGO 08 [242]. In SSA form, many code optimizations can be performed with fast and easy-to-implement algorithms.
However, some of these optimizations create situations where SSA variables corresponding to the same original variable have overlapping live ranges. This makes the translation from SSA code to standard code more complicated. There are three issues: correctness, optimization (elimination of useless copies), and speed (of the algorithms). Briggs et al. addressed mainly the first point and proposed patches to correct the original approach of Cytron et al. For correctness, a simpler approach was then proposed by Sreedhar et al., who also developed a more advanced technique to reduce the number of generated copies. However, this last technique turns out to be fairly difficult to implement. We proposed a conceptually simpler approach, based on coalescing and a more precise view of interferences, in which correctness and optimization are separated. It reduces the number of generated copies and allows us to develop simpler-to-implement algorithms, with no patches or special cases, that are fast enough for just-in-time compilation. This result received the best paper award at CGO 09 [244] (third year in a row!).

¹ This fact was also independently exploited by J. Palsberg and P. Brisk (UCLA), and S. Hack (Karlsruhe, then a post-doc with us).

Objective 2: High-level code transformations

List of participants: C. Alias, H. Cherroun, A. Darte, P. Feautrier, O. Labbani, A. Plesco, C. Quinson, T. Risset.

Keywords: FPGA, HLS, embedded systems, hardware accelerators, memory optimizations, program transformations.

Scientific issues, goals, and positioning of the team: Our goal is to extend the polyhedra-based techniques developed for loop transformation and automatic parallelization to problems relevant to embedded system design, such as specification, memory and communication optimization, and program analysis.

Major results (Oct. 2005 - Oct. 2009): Two important conceptual innovations: the critical lattice for intra-array memory reuse, which subsumes all previous approaches, and a modular algorithm for scheduling Kahn process networks.

Self-assessment: Slow but steady progress on program analysis and transformations. It is hard to find students who simultaneously understand mathematics, computer science, and architecture/synthesis, partly because the community is small and not well organized. It is also a hard topic because of the interaction with HLS tools (demanding to develop and to use).
This work was done through collaborations or informal contacts with the Inria project Alchemy, STMicroelectronics (P. Urard's team), Thales (through the Martes project), and colleagues such as F. Catthoor (IMEC), S. Verdoolaege (Leiden), P. Clauss (Strasbourg), etc. The work on lattice-based memory allocation was initiated at HP Labs in the Pico project and continued by A. Darte, R. Schreiber (HP Labs), and G. Villard (Arénaire, LIP).

Context and goals

During the last 20 years, with the advent of parallelism in supercomputers, the bulk of research in code transformation went into (semi-)automatic parallelization, with many techniques based on the description and manipulation of nested loops with polyhedra. Today, embedded systems raise new problems in high-level code optimization, especially for loops, both for optimizing embedded applications and for transforming programs for high-level synthesis (HLS). Everything that has to do with data storage is of prime importance, as it impacts power consumption, performance, and chip area. On the application side, multimedia applications often make intensive use of multi-dimensional arrays in sequences of (nested) loops, which makes them good targets for static program analysis. The applications are rewritten several times, by the compiler or the developer, to go from a high-level algorithmic description down to an optimized and customized version. For memory optimizations, the high-level description is where the largest gain can be obtained, because global program analysis and transformations can be performed there. Analyzing multimedia applications at the source level is thus important. On the architecture side, the hardware, in particular the memories, can be customized. When designing and optimizing a programmable embedded system, the designer wants to select adequate parameters for the cache and scratch-pad memories so as to achieve the smallest cost at the required performance for a given application or set of applications.
In HLS, the memories (size, topology, connections between processing elements) can even be fully customized for a given application. Embedded systems are thus good targets for memory optimizations, but powerful compile-time program and memory analyses are needed to (semi-)automatically generate a fully customized circuit from a high-level C-like description. New specification languages or compilation directives are also needed to express communicating processes and their communication media: Kahn processes communicating through FIFOs or shared memories are a good target. Our objective in this topic is to adapt and extend high-level transformations, previously developed for automatic parallelization, to the context of HLS and embedded computing optimizations. Such techniques have started to be rediscovered under various forms, and we think our previous expertise can be useful both for the dissemination of already-known techniques and for the development of new ones.

Memory reduction

When designing hardware accelerators, one has to solve both scheduling (when is a computation done?) and memory allocation (where is the result stored?). This is important to exploit pipelines between functional units, or between external memories and hardware accelerators, and to save memory space. An example is image processing, for which it is preferable to store only a few lines rather than the entire frame, data being consumed quickly after being produced. Memory size can be reduced by remapping each array so that it reuses memory locations as soon as they contain a dead value (intra-array reuse). We showed that all previous approaches are particular cases of a general technique based on the construction of admissible lattices, and we proved that we can compute mappings that are optimal up to a multiplicative constant that depends only on the problem dimension (see IEEE Transactions on Computers [200]).
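A minimal example of intra-array reuse through a modular mapping (a deliberately simple one-dimensional sketch; the lattice-based technique of [200] generalizes such mappings to multi-dimensional arrays): when a[i] becomes dead as soon as a[i+2] is computed, the remapping a[i] -> buf[i mod 3] shrinks the storage from n cells to 3.

```python
def with_full_array(n):
    # Reference version: one cell per value, n cells in total.
    a = [0] * n
    a[0] = a[1] = 1
    for i in range(2, n):
        a[i] = a[i - 1] + a[i - 2]
    return a[n - 1]

def with_contracted_array(n):
    # a[i] is last read when a[i + 2] is computed, so at most three
    # cells are live at any time: map a[i] onto buf[i % 3].
    buf = [0] * 3
    buf[0] = buf[1] = 1
    for i in range(2, n):
        buf[i % 3] = buf[(i - 1) % 3] + buf[(i - 2) % 3]
    return buf[(n - 1) % 3]
```

Both versions compute the same result; only the memory footprint differs, which is exactly the effect sought when storing a few image lines instead of a whole frame.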
After the theory, we developed the algorithms needed to implement our memory reuse strategies. The resulting tool, Cl@k, extends our tool suite for polyhedral/lattice manipulations (PIP, Polylib). We developed the interface with programs

and the required program analysis. The resulting tool, Bee, is implemented on top of the source-to-source transformer ROSE, developed by D. Quinlan (Livermore). Our first experiments with benchmarks borrowed from IMEC, thanks to P. Clauss, S. Verdoolaege, and F. Balasa, show excellent results (LCTES 07 [236]). The combination gives the first complete implementation of array contraction. As a side effect, we get the most aggressive algorithm for converting arrays into scalars. More work remains to be done to derive faster and better heuristics and to speed up the program analysis. The next step is to develop good program transformations that lead to reduced memory.

Scheduling and Kahn processes

Industry is now conscious of the need for special programming models for embedded systems. Kahn process networks (KPN), introduced 30 years ago to represent parallel programs, are becoming popular again. A KPN is built from processes communicating via FIFO channels whose histories are deterministic: one can then define a semantics and talk meaningfully about the equivalence of two implementations. Also, the dataflow diagrams used by signal-processing specialists can be translated on the fly into process networks. We extended this model into communicating regular processes (CRP), in which channels are represented as write-once/read-many arrays. The determinacy property still holds and, as an added benefit, a communication system in which the receive operation is not destructive is closer to the expectations of system designers. Although CRP (like KPN) rely on an asynchronous execution model, we developed a modular scheduling technique (see the International Journal of Parallel Programming [202]) that synchronizes them and, in particular, enables buffer optimization. This approach promotes good programming discipline, allows reuse, and is a basic tool for the construction of libraries.
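The deterministic behavior of such FIFO networks can be sketched as follows (a toy model for illustration, unrelated to the Syntol implementation): each process fires only when its input tokens are available, and since reads block on empty channels, the sequence of values on every channel is independent of the chosen interleaving.

```python
from collections import deque

def run_network(n):
    # Three processes A -> B -> C connected by two FIFO channels.
    ab, bc = deque(), deque()   # the FIFO channels
    out, produced = [], 0
    while len(out) < n:
        if produced < n:        # A: produce the tokens 0, 1, 2, ...
            ab.append(produced)
            produced += 1
        if ab:                  # B: fire only when a token is available
            bc.append(2 * ab.popleft())
        if bc:                  # C: consume and record the result
            out.append(bc.popleft())
    return out
```

Whatever schedule the while loop imposes on the three processes, the output history is the same; this is the determinacy property that makes buffer sizing and static scheduling meaningful.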
The prototype scheduler Syntol is now complete.

Program analysis for Array-OL

The Array-OL formalism was designed 20 years ago at Thales Research to ease the implementation of data-intensive signal processing as found in sonar and radar software. It is implemented in a design environment, SPEAR, which deals only with data movements, the data processing being hidden in black-box procedures. When reusing existing software, the programmer has to abstract the movement of data from the program text, a tedious and error-prone activity. Within the Martes ITEA project, whose aim was to promote the use of UML for inter-operability between embedded system design tools, we developed, in collaboration with Thales Research, a tool for the automatic elaboration of SPEAR specifications. It uses the front-end of our tool Syntol and algorithms from computer algebra for the analysis. The Eclipse framework is used for handling a common UML model, which serves as an information clearing house for the two components of the system. This work has been part of the Martes final results (which won a silver award at the ITEA Symposium) and has been the source for the paper [245].

Objective 3: Hardware and software system integration

List of participants: H. Cherroun, P. Feautrier, N. Fournel, A. Fraboulet, P. Grosse, T. Risset, A. Scherrer.

Keywords: High-level synthesis, FPGA, power consumption, simulators.

Scientific issues, goals, and positioning of the team: The goal was to continue high-level synthesis and to address it in a more general context, including hardware/software interfaces. Power was also an important issue to explore.

Major results (Oct. - Oct. 2009): Progressive insertion into the national and international SoC and HLS communities; interesting models and tools for SoC simulation, sensor network simulation, and power consumption.

Self-assessment: This objective was the furthest from our initial expertise.
To enter the world of hardware and software integration, we needed to learn the problems, objectives, and constraints of this community, and not just rely on our high-performance computing expertise. We learned a lot, but this spread our activities over too large a spectrum for our size. Also, hardware/software integration is a hard topic, with a lot of development work and a lack of students with the adequate expertise, should we want to strengthen the compilation aspects of architecture design. This work was done through collaborations within the French HLS community, with STMicroelectronics, with CEA-LETI (work on power), and with the Ares group at Insa-Lyon (work on sensor networks).

Context and goals

Compsys members have a long experience in compilation for parallel systems, high-performance computers, and systolic arrays. One of our aims was to adapt the polyhedral model to high-level synthesis (HLS), in particular for the design of hardware accelerators for compute-intensive applications. In Section 4.3.2, we already mentioned three problems related to HLS that we addressed: a) mathematical and software tools for memory reduction, b) specification of communicating processes, c) high-level program transformations used in front of HLS tools. This section addresses lower-level activities. Validating MMAlpha (our HLS tool) as a rapid prototyping tool for systolic arrays on FPGA was our first goal, firming up the underlying methodology and trying to interface it with a more general system (the general problem of off-the-shelf components). Different tools need to cooperate, and one has to design interfaces between elements of different design flows. MMAlpha already had such a generic interface for all linear systolic arrays. Our goal was to continue in this direction, but also to consider other solutions, like networks on chip (NoC) and standard wrapping protocols such as VCI from VSIA. To make this research possible, it was important to start collaborating, which

we did, with French HLS specialists (LIP6 in Paris, TIMA in Grenoble, Lester (now Lab-STICC) in Lorient, Cairn in Rennes), and to develop a better understanding of the various HLS tools and simulation models. Integrating MMAlpha designs within SocLib was an important objective, so as to go beyond isolated designs. The fact that Compsys was positioned as a team interested in embedded system design, even though our initial objectives were very focused compared to this large topic, pushed us to start collaborations a bit outside our central activities. We received a PhD grant from STMicroelectronics to address the problem of traffic generators for NoC simulation. Also, we were very interested in exploring how compilation techniques could affect power consumption, an important issue in embedded systems. Many papers address this issue, for example with basic-block scheduling techniques, but our belief was that more benefit could be found in the reduction of memory transfers and at the operating system level. To address this issue, we started a cooperation with CEA-LETI, which led us further into the field of hardware and software power modeling and optimization. Finally, the fact that A. Fraboulet and T. Risset moved to Insa-Lyon made them more interested in topics addressed there, in the Ares team, such as sensor networks.

SoC simulation

System-on-chip (SoC) simulation is a major issue in embedded computing system design. We have been involved in this topic by working actively in the SocLib group (now an ANR platform project). SocLib targets cycle-accurate SystemC simulation, which is useful for precise performance evaluation and SoC platform design. We developed a deep understanding of the SocLib simulation models and started collaborations with hardware designers at the LIP6 laboratory (Paris) and Lab-STICC (Lorient).
Our major contribution concerns the analysis and synthesis of on-chip traffic, i.e., the traffic occurring in a SoC. In such systems, the introduction of networks on chip (NoC) has made the interconnection system a major issue of the design flow. To prototype such a NoC quickly, fast simulations are needed, and replacing the components by traffic generators is a good way to achieve this. We set up and developed a complete and flexible on-chip traffic generation environment, integrated into SocLib, with which we can replay a previously recorded trace, generate a random load on the network, produce a stochastic traffic fitted to a reference trace, and take traffic phases into account. The introduction of second-order statistics (i.e., long-range dependence) in the statistical models used for traffic generation is an original aspect of the work and makes it suitable for modeling multi-processor on-chip traffic. The different stages of this work are presented in [223, 225] and in the PhD thesis of A. Scherrer [258]. In addition, through the Minalogic project OpenTLM, which targets transaction-level SystemC simulation, our experiments with SocLib showed that TLM simulation is needed to prototype compilation optimizations for embedded code, because cycle-accurate simulation is too slow.

Sensor networks

Advances in and miniaturization of micro-electro-mechanical systems and micro-electronic devices have pushed the development of new applications for wireless networks: wireless sensor networks (WSN). WSN are resource-constrained in computation, communication, and energy. In this context, the Worldsens simulation toolchain offers an integrated platform for the design, development, prototyping, and deployment of WSN applications. Worldsens consists of two simulators, WSim and WSNet, that offer very accurate results on application performance at the different levels of development.
WSNet is a radio medium simulator developed by Ares, an Insa-Lyon team specialized in wireless network protocols and architectures. WSim is a simulator, partly developed by A. Fraboulet and A. Scherrer, built around the GCC cross-compiler toolchain. The simulated hardware is based on the MSP430 micro-controller series from Texas Instruments, specifically designed for low power and capable of running a full operating system. WSim can debug and evaluate the performance of the full system at the assembly level, for different operating systems (FreeRTOS, Contiki, TinyOS). A precise estimation of timings, memory consumption, and power can be obtained during simulation, even with several clock sources, each with its own frequency. Variable frequency scaling and artificial drifts between clocks can be taken into account. This drift can be parameterized to mimic real hardware clock imperfections, either on the main clock system, for real-time shift between two nodes, or among different subsystems within the same node. WSim includes several features to debug and analyze sensor node application code. The first is debugging through the remote GDB protocol: WSim allows attaching a GDB process to the simulator, which can single-step the simulation with full access to the registers and memory. The step-by-step function covers all available platform clocks and peripherals. It is thus possible to debug interrupts and dynamic frequency scaling parts of a real-time application, as well as compiler-optimized source code, in a cycle-accurate environment. Last but not least, WSim also includes power consumption estimations in addition to performance evaluation results. The simulation suite and performance estimation tools were presented in [228, 229, 232] and in several demos at conferences [218, 249, 235]. See also the PhD thesis of N. Fournel [260].
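The effect of a parameterized clock drift can be illustrated with a toy model (frequencies, drift values, and the tick counter are ours, not WSim internals):

```python
# Toy model of parameterized clock drift: two nodes run a nominal 32768 Hz
# clock, one with a +50 ppm crystal error, and we count how far apart their
# tick counts are after 10 seconds. Numbers are illustrative, not WSim's.
def ticks(freq_hz, drift_ppm, seconds):
    # A drift of d ppm gives an effective frequency of freq * (1 + d / 1e6).
    return int(freq_hz * (1 + drift_ppm / 1e6) * seconds)

nominal = ticks(32768, 0, 10)    # ideal node
drifted = ticks(32768, 50, 10)   # node with a +50 ppm crystal
print(nominal, drifted, drifted - nominal)  # 16 extra ticks in 10 s
```

Even such small drifts accumulate, which is why simulating them matters when debugging the real-time behavior of two communicating nodes.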
Power models

The applications implemented on modern embedded systems, e.g., for signal, audio, and video processing, are increasingly complex and compute-intensive. In general, high processing power entails high power consumption. But many appliances are battery-powered, and battery capacity does not follow Moore's law. Low power consumption can be achieved at the technology level (size reduction, threshold voltage control) or at the circuit level, but also at the system and software levels: this is where Compsys can contribute.
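A back-of-the-envelope sketch of this system-level leverage (all names and constants invented for the illustration; this is not any Compsys or CEA-LETI model): dynamic energy per cycle grows with the square of the supply voltage, and in this toy model the minimum usable voltage scales with the required clock frequency, so blocks whose timing constraints allow a lower frequency can run at a lower voltage and save energy.

```python
# Toy dynamic-energy model: energy per cycle scales as V^2; per-block
# voltage/frequency scaling saves energy on blocks that can run slower.
def energy(cycles, v):
    return cycles * v ** 2  # arbitrary units, switched capacitance folded in

v_ref = 1.2  # reference supply voltage (V)
# Two blocks with (cycle count, fraction of the reference frequency needed
# to meet the application's timing constraints).
blocks = {"A": (200_000, 1.0), "B": (100_000, 0.5)}

uniform = sum(energy(c, v_ref) for c, _ in blocks.values())     # no scaling
scaled = sum(energy(c, v_ref * r) for c, r in blocks.values())  # per-block
print(uniform, scaled, 1 - scaled / uniform)  # relative saving of about 0.25
```

The achievable saving obviously depends on the workload and on how far voltage can actually be lowered; the roughly 50% figure reported below comes from detailed simulation of a real design, not from this sketch.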

Power optimization can be applied to conventional processor- or DSP-based systems, or to ASIC-based systems. In cooperation with CEA-LETI (Y. Durand), we developed a power optimization method for a radio modem, extensible to any dataflow system. The basic idea is to schedule the application in order to deduce the minimum performance of each block in the design from the timing constraints of the application. Supply voltages and clock frequencies are then adjusted on a per-block basis so as to meet these performance requirements. Detailed simulation shows that an energy reduction of roughly 50% can be achieved this way. This work was the core of P. Grosse's PhD thesis [261]. Preliminary results were reported in [222] and a more comprehensive presentation appeared in ACM TODAES [206]. Direct physical measurements on a VLSI chip need specialized equipment. In contrast, such measurements are easier on a development platform. N. Fournel, with help from Cegely (an electronics laboratory at INSA-Lyon), did such experiments to model the power consumption of a processor, its cache, scratchpad, and external memory. The influence of the clock frequency was measured, while the influence of voltage scaling had to be extrapolated. The resulting model, coupled to an instruction set simulator, allows the prediction of the energy budget of quite complex software, including operating system services like task switching or synchronization. Application to realistic image processing applications has shown that the cumulative error (power model error plus simulator approximations) is less than 10%. The resulting tool has been applied to the evaluation of cryptographic protocols [234] and to WSN [238, 240].

4.4 Application domains and social or economic impact

The previous sections describe our main activities in terms of research directions and results, but also place Compsys within the embedded computing systems domain, especially in Europe.
We will therefore not come back here to the importance, for industry, of compilation and embedded computing system design. In terms of application domains, the embedded computing systems we consider are mostly used for multimedia: phones, TV sets, game platforms, etc. But, more than the final applications developed as programs, our main application is the computer itself: how the system is organized (architecture) and designed, how it is programmed (software), how programs are mapped to it (compilation). The industry that can be impacted by our research thus comprises all the companies that develop embedded systems and processors, and those (the same, plus others) that need software tools to map applications to these platforms, i.e., that need to use or even develop programming languages, program optimization techniques, compilers, and operating systems. Compsys does not work on all these critical parts, but our activities are connected to them.

4.5 Visibility and attractiveness

Prizes and awards

Best paper awards: Compsys has received 5 paper awards since 2007, including three successive best paper awards at CGO, the IEEE/ACM International Symposium on Code Generation and Optimization, which, together with CC, is the leading conference in compilation and code optimization.

- CGO 2007: "On the complexity of register coalescing", by F. Bouchez, A. Darte, and F. Rastello.
- AICCSA 2007: "An exact resource constrained-scheduler using graph coloring technique", by H. Cherroun and P. Feautrier.
- CC 2007: "A fast cutting-plane algorithm for optimal coalescing", by D. Grund and S. Hack.
- CGO 2008: "Fast liveness checking for SSA-form programs", by B. Boissinot, S. Hack, D. Grund, B. Dupont de Dinechin, and F. Rastello.
- CGO 2009: "Revisiting out-of-SSA translation for correctness, code quality, and efficiency", by B. Boissinot, A. Darte, B. Dupont de Dinechin, C. Guillon, and F. Rastello.

Project award: The Martes project (see the section on contracts) won the Silver Award at the 2008 ITEA Symposium.

Honors: P. Feautrier received from the Euro-Par Steering Committee an award in recognition of his outstanding contributions to parallel processing (2009).

Contribution to the scientific community

Management of scientific organizations

Administration of professional societies

Editorial boards

- ACM Transactions on Embedded Computing Systems (ACM TECS), Alain Darte.
- Parallel Computing, Paul Feautrier.
- International Journal of Parallel Programming, Paul Feautrier.
- Integration: the VLSI Journal, Tanguy Risset.

Organization of conferences and workshops

In 2009, Fabrice Rastello was the main organizer of an international seminar on static single assignment (SSA), in Autrans, which brought together more than 50 international participants for four days. This was the very first workshop on SSA and a huge success. See the web page.

Program committee members

- ASAP, IEEE International Conference on Application-Specific Systems, Architectures, and Processors: Alain Darte (2005, 2006), Tanguy Risset (2006, 2007).
- CASES, ACM International Conference on Compilers, Architecture, and Synthesis for Embedded Systems: Alain Darte (2005, 2006, 2009), Fabrice Rastello (2008).
- CC, International Conference on Compiler Construction: Alain Darte.
- CGO, ACM/IEEE International Symposium on Code Generation and Optimization: Alain Darte (2006), Fabrice Rastello (2009).
- CPC, Compilers for Parallel Computing (international workshop): Alain Darte, steering committee.
- DATE, Design, Automation, and Test in Europe (international conference): Alain Darte (2005, 2006, 2007).
- Euro-Par, International Conference on Parallel Computing: Alain Darte (2009).
- NPC, IFIP International Conference on Network and Parallel Computing: Alain Darte (2007).
- PLDI, ACM SIGPLAN Conference on Programming Language Design and Implementation: Alain Darte (2008). Also member of the program committee of the first "fun ideas and thoughts" (FIT) session at PLDI.
- POPL, ACM SIGPLAN/SIGACT International Symposium on Principles of Programming Languages: Paul Feautrier (2006).
- SAMOS, International Symposium on Systems, Architectures, Modeling, and Simulation (international workshop): Tanguy Risset (2005).
- SCOPES, International Workshop on Software and Compilers for Embedded Systems: Alain Darte (2009).

International expertise

Projects

- FWO (Fonds Wetenschappelijk Onderzoek, Belgium): Alain Darte.
- Natural Sciences and Engineering Research Council of Canada: Paul Feautrier.
- National Research Council of Israel: Paul Feautrier, 2006.

PhD theses

- PhD of Andrzej Bednarski (Linköping University, Sweden): Alain Darte, reviewer.
- PhD of Sven Verdoolaege (Katholieke Universiteit Leuven, Belgium): Alain Darte, reviewer.

National expertise

Projects

- ANR evaluation committee on HPC: Paul Feautrier.
- Evaluation of the Inria theme ComD: Alain Darte, coordinator.
- Université Numérique (UNIT): Paul Feautrier.

Selection committees

- Selection committee at Inria: Alain Darte.
- Selection committee at University Joseph Fourier: Paul Feautrier.
- Selection committee at ENS-Lyon: Paul Feautrier.
- Selection committee at ENS-Lyon: Alain Darte.
- Selection committee at Insa-Lyon: Antoine Fraboulet.
- Selection committee at ENS-Lyon (bioinformatics): Paul Feautrier.

HDR (professorial thesis) and PhD theses (except as advisers)

- PhD of Madeleine Nyamsy: Tanguy Risset, member of defense committee.
- PhD of Lionel Lelong: Tanguy Risset, member of defense committee.
- PhD of Tariq Ali Omar: Tanguy Risset, member of defense committee.
- PhD of Samuel Evain: Tanguy Risset, member of defense committee.
- PhD of Richard Buchman: Tanguy Risset, member of defense committee.
- PhD of Pascal Gomez: Paul Feautrier, reviewer.
- PhD of Rachid Seghir: Paul Feautrier, reviewer.
- PhD of Antoine Scherrer: Paul Feautrier, member of defense committee.
- PhD of Sami Taktak: Paul Feautrier, member of defense committee.
- HDR of Bernard Pottier: Paul Feautrier, reviewer.
- HDR of Emmanuelle Encrenaz: Paul Feautrier, member of defense committee.
- HDR of Ivan Augé: Alain Darte, reviewer.

Other responsibilities

- Admission exam at ENS-Lyon: Alain Darte, vice-president (responsible for the computer science section).
- Evaluation commission at Inria: Alain Darte.
- Commission for coordinating the joint research efforts of STMicroelectronics and Inria: Alain Darte, scientific expert.

Public dissemination

Sebastian Hack and Fabrice Rastello (with Philip Brisk, Jens Palsberg, and Fernando Pereira) gave a tutorial on SSA-based register allocation during ESWEEK 2008 (Embedded Systems Week). An updated tutorial was given by Alain Darte and Fabrice Rastello (with Philip Brisk and Jens Palsberg) at CGO 2009.

Contracts and grants

External contracts and grants (industry, European, national)

- Procope Convention (-2005) with Passau University (Germany). Cooperation with Chris Lengauer's team.
- Inria support (2006) for collaboration with Federal University, Rio Grande do Sul (Brazil). Cooperation with Luigi Carro's team.
- CNRS Convention with the University of Illinois at Urbana-Champaign (USA). Cooperation with David Padua's team.
- Open-TLM: contract in the pôle de compétitivité Minalogic (STMicroelectronics, Thompson, Silicomps). The goal of the project is to build open-source SystemC SoC simulators at the transaction level. Compsys is not involved anymore, now that Tanguy Risset has left the team.
- MARTES (model-driven approach to real-time embedded systems development): ITEA project. The French partners of the project focused on the inter-operability of their respective tools using a common UML metamodel, in particular the interaction between Syntol and the parallel design environment SPEAR of Thales Research.
- SCEPTRE: contract with STMicroelectronics, Inria, the Ministry of Research, and the Region, part of the pôle de compétitivité Minalogic. The collaboration dealt with the application of SSA to aggressive program optimizations, mainly register allocation. SCEPTRE was at the heart of the development of SSA-based register allocation, which led to the SSA seminar Compsys organized in 2009.
- MEDIACOM: contract with STMicroelectronics, funded by the Ministry of Industry, Nano2012 project. Continuation of SCEPTRE, with more focus on just-in-time compilation, register aliasing, and predication. Collaboration with the Inria team Alchemy.
- S2S4HLS: contract with STMicroelectronics, funded by the Ministry of Industry, Nano2012 project. Collaboration with the Inria team Cairn on the development of source-to-source program transformations for (actually before) high-level synthesis tools.

Research networks (European, national, regional, local)

- HIPEAC: Compsys is involved in the network of excellence HiPEAC (High-Performance Embedded Architecture and Compilation).
- ARTIST2: Compsys is a partner of the network of excellence Artist2 (MPSoC in particular).
- EMSOC: Compsys is a member of this regional research network for Rhône-Alpes (funding, seminars, etc.).

Internal funding

In addition to external contracts, Inria provides the main internal funding for Compsys, which is an Inria research project. More limited funding comes from CNRS and ENS-Lyon.

4.7 International collaborations resulting in joint publications

With grants and joint publications

The collaboration with STMicroelectronics may be considered an international (and industrial) collaboration, even though the compilation team of STMicroelectronics is in Grenoble. This collaboration gave rise to 6 joint publications over the period, describing our work within the project Sceptre. They concern register allocation and SSA optimizations, in the context of aggressive compilation and, more recently, just-in-time compilation.

With joint publications

Alain Darte had a long collaboration with Rob Schreiber of HP Labs, Palo Alto. Alain Darte and Rob Schreiber took different research directions after 2005 (Rob Schreiber went back to high-performance computing), but some publications and patents still appeared in the period because of publication delays: an IEEE Transactions on Computers article on lattice-based memory allocation, an associated patent, and a PPoPP publication on the placement of synchronization barriers.

4.8 Software production and research infrastructure

Software descriptions

We continue to maintain three historical polyhedral tools, Polylib, PIP, and MMAlpha, and we use them intensively in our other polyhedra-related developments such as Syntol, Cl@k, and Bee. The other software and implementations are more related to compiler and simulator developments.

Polylib

Polylib is a C library for polyhedral operations. Tanguy Risset was responsible for the development of Polylib for several years. There are no new developments; maintenance was shared between Compsys (until the departure of Tanguy Risset), R2D2 (Rennes), ICPS in Strasbourg, and the University of Leiden. The tool is used by many groups all over the world. Polylib is publicly available.

PIP

Paul Feautrier has been the main developer of this parametric integer programming solver. PIP is freely available under the GPL. PIP is widely used, worldwide, in the automatic parallelization community for testing dependences, scheduling, several kinds of optimizations, code generation, and more. Besides being used in several parallelizing compilers, PIP has found applications in some unconnected domains, for instance in the search for optimal polynomial approximations of elementary functions (see the work of the Arénaire team).
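To give the flavor of the questions PIP answers exactly, here is a dependence test phrased as an integer feasibility problem, with a brute-force search standing in for the parametric ILP (the loop and accesses are invented for illustration):

```python
# Dependence testing reduced to integer feasibility, the kind of question a
# parametric integer programming solver like PIP answers exactly. Here a
# brute-force search stands in for the ILP: do iterations i, j of
# `a[2*i] = ...; ... = a[2*j + 1]` ever touch the same cell for
# 0 <= i, j < n? (They cannot: 2*i is even, 2*j + 1 is odd.)
from itertools import product

def dependent(n):
    return any(2 * i == 2 * j + 1 for i, j in product(range(n), repeat=2))

print(dependent(100))  # False: the two accesses are provably independent
```

PIP settles such questions symbolically, for parametric bounds n, instead of enumerating iterations.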
MMAlpha

Tanguy Risset has been the main developer of MMAlpha since 1994. MMAlpha is one of the few available tools for very high-level hardware synthesis (including the design of parallel ASICs). It is based on systems of recurrence equations and systolic-array design methodologies. MMAlpha is now under the LGPL and has been used in many places (England, Canada, India, USA). Its development was shared, until the departure of Tanguy Risset, between Compsys and R2D2 in Rennes. MMAlpha has been evaluated by STMicroelectronics and has been a basis for Compsys's participation in European and RNTL calls for proposals.

Cl@k

Fabrice Baray has been the main developer of Cl@k (critical lattice kernel), under Alain Darte's supervision. It is a standalone mathematical tool that computes or approximates the critical lattice of a given 0-symmetric polytope (see footnote 2). It also computes the successive minima of such polytopes. Cl@k can be viewed as a complement to the Polylib suite, enabling yet another kind of optimization on polyhedra. We use it for the automatic derivation of array mappings that enable memory reuse thanks to modulo operations. We believe that this tool will be used for other, not-yet-identified problems. For example, it has been used by a mathematician (Achill Schürmann, University of Magdeburg, Germany) to test mathematical conjectures on Ehrhart polynomials.

Bee

Christophe Alias is the main developer of Bee, under Alain Darte's supervision. Bee is the complement of Cl@k in terms of its application to memory reuse: Bee connects Cl@k with programs. Bee uses ROSE, a source-to-source program transformer developed by Dan Quinlan at the Livermore National Labs, to parse and unparse programs (see footnote 3).
It then computes the lifetimes of the elements of the arrays to be compressed, builds the necessary input for Cl@k (the 0-symmetric polytope of conflicting differences), then plugs the results back into the program. Bee has also been used in the Inria project-team Alchemy in Saclay/Paris to help develop a library for data-flow analysis.

Footnote 2: An admissible lattice is a lattice whose intersection with the polytope is reduced to 0; a critical lattice is an admissible lattice with minimal determinant.
Footnote 3: ROSE took a lot of its inspiration from Nestor, a tool developed in the 90s by G.-A. Silber, a PhD student of Alain Darte.

RanK

RanK is our most recent tool. It is still under development and can be improved in many directions. RanK is able to decide, in certain cases, the termination of a program by computing a ranking function, which maps the operations of the program to a well-ordered set in such a way that the ranking decreases as the computation progresses. In case of success, RanK can also compute a static worst-case execution time (WCET), in terms of number of instructions, by means of polyhedral techniques. The result is a symbolic expression involving the input variables of the program (parameters). In case of failure, RanK tries to prove the non-termination of the program, by means of integer programming techniques. So far, it has only been used to successfully discover the WCET of small toy benchmark programs from the termination community. RanK is a sizable C++ development and uses the tools PIP and Polylib. RanK's input is an integer automaton representing the control structure of the program to check. This automaton, together with approximations of the ranges of the integer variables, is the output of Aspic, a tool performing abstract interpretation on integer automata. A tool for generating the input to Aspic from a C program has been developed as an extension of the Syntol front-end. The RanK tool and the connections with Syntol and Aspic are developed by Christophe Alias, Paul Feautrier, and Laure Gonnord.

Syntol

Syntol is a modular process network scheduler developed by Paul Feautrier. The source language is C augmented with specific constructs for representing communicating regular process systems (CRP).
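The ranking-function principle used by RanK can be sketched in miniature (the loop and the candidate ranking below are illustrative, not RanK's algorithm or output): a map r from program states into a well-ordered set that strictly decreases on every loop iteration certifies termination, and bounding r also bounds the iteration count.

```python
# Ranking-function sketch: r(state) is a non-negative integer that strictly
# decreases on every transition of the loop, so the loop must terminate.
# Loop body and ranking are invented for the illustration.
def r(x, y):
    return max(x, 0) + max(y, 0)  # candidate ranking: non-negative integer

def run(x, y):
    steps = 0
    while x > 0 and y > 0:
        prev = r(x, y)
        if steps % 2 == 0:
            x -= 1
        else:
            y -= 2
        assert r(x, y) < prev  # the ranking decreases: certifies termination
        steps += 1
    return steps

print(run(5, 9))  # terminates after 9 iterations
```

RanK searches for such functions automatically, over polyhedral state abstractions produced by Aspic, rather than checking them at run time as this sketch does.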
The present version features a syntax analyzer, a semantic analyzer able to identify DO loops in C code, a dependence computer, a modular scheduler, and interfaces to CLooG (the loop generator developed by C. Bastoul; see the Inria project-team Alchemy) and Bee (see Section 4.8.1). The dependence computer has been extended to handle casts, records (structures), and the modulo operator in subscripts and conditional expressions. The present version is written in Java, which ensures portability and very good performance. The next developments are a) a new code generator, based on the ideas of Boulet and Feautrier (PACT 98), which should be able to generate either C code or VHDL at the RTL level; b) tools for the construction of bounded-parallelism schedules, with virtual dependences, allocation functions, and pseudo arrays; c) a general strengthening of the syntax analysis phase, with the long-term objective of handling as much of C as possible.

Register allocation

Our first development on register allocation in the STMicroelectronics assembly-code optimizer was the complete iterated register allocator written by Cédric Vincent (bachelor degree, under Alain Darte's supervision). This was the first complete implementation done successfully in their optimizer outside the STMicroelectronics compilation team. Since then, new developments have been made constantly, in particular by Florent Bouchez, Benoit Boissinot, Quentin Colombet, and Fabrice Rastello. The current implementation explores a two-phase register allocation, with a preliminary spilling phase followed by a coalescing phase, with the help of SSA. Compsys contributed coalescing algorithms, SSA and colored-SSA translation algorithms (i.e., before or after register allocation) with insertion of parallel copies and moves, spilling algorithms (heuristics and optimal algorithms), etc.
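The classical view that this two-phase approach refines can be illustrated by a toy allocator (live ranges and register count are invented for the example; real allocators work on SSA form and are far more elaborate):

```python
# Toy register allocation: build the interference graph from live ranges,
# then greedily color it with k registers, spilling when no color fits.
# Variables, ranges, and k are made up for the illustration.
def allocate(live, k):
    # live: var -> (start, end), a half-open live range
    interfere = {v: {w for w in live if w != v and
                     live[v][0] < live[w][1] and live[w][0] < live[v][1]}
                 for v in live}
    color, spilled = {}, []
    for v in sorted(live, key=lambda u: live[u][0]):  # linear-scan order
        used = {color[w] for w in interfere[v] if w in color}
        free = [c for c in range(k) if c not in used]
        if free:
            color[v] = free[0]
        else:
            spilled.append(v)  # no register left: spill to memory
    return color, spilled

regs, spills = allocate({"a": (0, 4), "b": (1, 3), "c": (2, 6), "d": (5, 7)}, 2)
print(regs, spills)  # "c" overlaps both "a" and "b", so it is spilled
```

The SSA-based results mentioned above exploit the fact that interference graphs of SSA-form programs are chordal, which is what allows spilling and coalescing to be decoupled cleanly.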
All these developments are done in the context of contracts with STMicroelectronics: direct contracts before 2005, the Minalogic project Sceptre until the end of 2009, and, since then, the project Mediacom.

Procedure placement

Within the collaboration with the STMicroelectronics compilation team, a complete framework for procedure placement for instruction-cache optimization has been developed. This tool is decomposed into several parts: first, the conflict graph is built from a trace (STMicroelectronics implementation); similarly, the affinity graph can also be built from the trace (Fabrice Rastello's implementation); then procedure placement can be performed using the graph information (Benoit Boissinot and Fabrice Rastello's implementation); finally, the placement can be evaluated using an execution trace or directly on the simulator (STMicroelectronics implementation).

Traffic generator

Antoine Scherrer, under Tanguy Risset and Antoine Fraboulet's supervision, has developed a complete network-on-chip prototyping environment in the SocLib project. This environment can analyze the traffic

occurring on the communication medium of a complete SoC and replay or generate traffic with similar statistical characteristics. Advanced stochastic analyses have been performed, including second-order statistics (auto-correlation, used for detecting long-range dependent behavior). The environment also provides a set of Perl scripts for fast generation of a SystemC SoC platform and for instrumentation of the various SystemC components in order to record network activity.

WSim

WSim is a platform simulator for sensor networks, developed by Antoine Fraboulet. It is the companion tool of WSNet (see above for details). It relies on cycle-accurate full-platform simulation using microprocessor instruction-driven timings. The simulator is able to perform a full simulation of the hardware events that occur in the platform and to give back to the developer a precise timing analysis of the simulated software. The simulator can be used in stand-alone mode for debugging purposes when no radio device is used in the design (or when radio simulation is not needed). But one of the main WSim features is its interface with the WSNet simulator to perform the simulation of a complete sensor network. Both simulators are registered with the APP (French Agency for Software Protection) [252, 253].

Contribution to research infrastructures

4.9 Industrialization, patents, and technology transfer

Patents (French, European, worldwide)

A US patent, filed in 2002 with the title "System and method of optimizing memory usage with data lifetimes", was later accepted. Assignee: Hewlett-Packard. Inventors: Rob Schreiber and Alain Darte.
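The idea behind this patent, lattice-based memory allocation, can be sketched in one dimension (window size and computation are invented for the example): if a[i] is dead by the time a[i + m] is written, the mapping a[i] -> a_small[i % m] is a valid contraction of the array.

```python
# One-dimensional array contraction by a modulo mapping: a length-n array is
# replaced by a circular buffer of size m, valid because each cell lives for
# fewer than m iterations. The window m = 3 is illustrative.
def contracted_sum(n, m=3):
    a = [0] * m               # contracted array replaces a[0..n-1]
    total = 0
    for i in range(n):
        a[i % m] = i * i      # producer: writes value i*i at iteration i
        if i >= m - 1:
            total += a[(i - (m - 1)) % m]  # consumer reads m-1 steps later
    return total

# matches the uncontracted computation: the sum of (i-2)^2 for i = 2..n-1
print(contracted_sum(10))
```

In higher dimensions, finding the best such mapping amounts to the critical-lattice computation that Cl@k performs on the polytope of conflicting index differences.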
No royalties.

Creation of startups

Consulting activities

4.10 Educational activities

Supervision of educational programs

Paul Feautrier was in charge of the 2nd year of the Computer Science Master of ENS-Lyon in 2006, and Alain Darte is vice-president of the admission exam to ENS-Lyon, responsible for the Computer Science part.

Teaching

Name | Course title (short) | Level | Institution | Hours | Years
A. Darte, P. Feautrier, F. Rastello | Advanced compilation & code optimization | M2 | ENS-Lyon | 48 | 4
T. Risset, A. Fraboulet | Design of embedded computing systems | M2 | Insa-Lyon | 48 | 4
P. Feautrier | Operational research | M1 | ENS-Lyon | 48 | 3
P. Feautrier | Compilation project | M1 | ENS-Lyon | 48 | 1
T. Risset | Algorithms and programming | L3 | Insa-Lyon | 48 | 3
P. Feautrier, T. Risset, F. Rastello | Compilation | L3 | ENS-Lyon | 48 | 4
C. Alias | Introduction to compilation | L3 | ENSI Bourges | |

4.11 Self-assessment

We first evaluate each of our three past objectives.

Code optimization for special-purpose processors

Our activities with STMicroelectronics are a huge success for us, and not only for the contracts we get. Being able to work within a complete industrial compiler makes our studies more than just theoretical: we can try heuristics, evaluate ideas, and address problems with realistic constraints. This makes our research more relevant. The collaboration is thus fruitful for both sides: we help STMicroelectronics dive more deeply into the literature and into specific research problems; they help us by bringing new problems and by sharing their strong expertise on architecture and compilation subtleties, as well as their compiler infrastructure. This collaboration was considered a success story by Inria: Compsys was viewed as an example to follow for future collaborations between Inria and STMicroelectronics (Alain Darte was part of the committee that established the general agreement between Inria and STMicroelectronics on joint research efforts). Also, through this activity, Compsys

succeeded in attracting young researchers: two PhD students from ENS-Lyon, one international post-doc (and maybe another one in the near future), and one engineer from STMicroelectronics who will continue as a PhD student. As for scientific results, our first contributions and the comments we got from other researchers were very encouraging. The best paper award for our theoretical paper at CGO'07 (a conference with mainly practical contributions) was also, for us, a sign that more and more researchers think that tightening the link between theory and practice is fundamental. Our other contributions were all very well received. All in all, 10 of our publications are linked to our work with STMicroelectronics, 7 being joint publications, and 3 received a best paper award in the main conference on code generation and optimization (CGO). We were asked to give tutorials (ESWEEK'08, CGO'09, and maybe LCPC'09) on the new view we proposed of register allocation. We also co-organized the very first international workshop on SSA, over 4 days, with almost all specialists of the topic, including the very first ones who introduced the technique. The work on this topic is slow and far from finished. Being able to beat 30 years of heuristics, even if we improved the theoretical understanding, is not obvious. Also, as our latest results illustrate, we are currently exploiting our new understanding to improve JIT register allocators. Compsys just signed a new contract (Mediacom) with STMicroelectronics, for the next 3 years, on this topic. This activity should thus continue, hopefully successfully, in the future. However, as Fabrice Rastello is at the heart of the collaboration on code optimization with STMicroelectronics, this activity will clearly suffer from his departure on sabbatical.

High-level code transformations

Parametric integer linear programming, polyhedra manipulations, and Ehrhart polynomials are now standard in the loop transformation community.
But a mathematical tool was missing to better understand bounding-box problems. We think that our introduction of the old but non-classical concept of admissible lattices, with its application to memory reduction, is an important achievement that may find other applications. Similarly, the summarizing principles we use for our scalable scheduler for CRP are important, as they address complexity issues and modularity. We also developed the tools corresponding to these theoretical achievements, which was part of our objectives. Slowly, we are succeeding in pushing the interest in polyhedral techniques in compilation for embedded systems (software but also hardware accelerators). Industry (STMicroelectronics and Thales, but also, in the USA, Synfora, Reservoir Labs, GCC, and AMD) is considering such techniques with more and more interest. Our results in high-level synthesis are of a more preliminary nature. Clément Quinson obtained interesting results on the HLS tool UGH, and we learned a lot by studying and porting STMicroelectronics applications to industrial HLS tools such as CatapultC and PicoExpress. Alexandru Plesco had promising but hard-to-publish experimental results on loop transformations for HLS tools (in this case, Spark and Altera C2H). But the problem is really hard: manipulating/controlling HLS tools and FPGA platforms is hard, and developing within such tools nearly impossible. In addition, it is really hard to find students who can understand simultaneously mathematics, computer science, and architecture/synthesis. And the same is true for us as advisers. This made our collaboration with STMicroelectronics (HLS team) not as successful as it could have been. We need to do better on this very important challenge in the future.
We are trying to increase our collaborations on this topic: first with the Cairn team (Rennes) and STMicroelectronics (HLS team), through the project S2S4HLS; second with most HLS specialists in France (TIMA in Grenoble, LIP6 in Paris, Lab-STICC in Lorient). Also, our current work on the termination of while loops, a problem that initially arose when trying to transform while loops so that HLS tools can accept them, seems quite promising, with many interesting research directions involving both theoretical and software developments.

Hardware and software system integration

This third objective should have been, a priori, the hardest to develop, due to many factors. It was the topic with the maximal distance to our initial expertise. To enter the world of hardware and software integration, we needed to learn the problems faced by this community and understand new objectives and constraints, not just try to transplant our high-performance computing expertise. The good thing is that we learned a lot; the bad thing is that this led us maybe a bit too far from our initial goals and spread our activities over too large a spectrum for our size. Also, hardware/software integration is a very hard topic, with no clear perfect solution, a lot of developments, and a lack of students with the adequate expertise if we want to increase the compilation aspects of architecture design. Nevertheless, we can consider that the collaboration with CEA-LETI and the ARES team at Insa-Lyon was a success, with interesting contributions on traffic generators, simulators for performance and power, and models of power consumption, and three PhD theses (respectively Antoine Scherrer, Nicolas Fournel, and Philippe Grosse). However, this activity was mainly developed by Tanguy Risset and Antoine Fraboulet (as well as Paul Feautrier, but within an opportunistic collaboration with CEA-LETI, which will not be continued); therefore, it will not be continued in Compsys.
Compsys will focus on the compilation and code optimization aspects.

General self-assessment

All Compsys activities try to bring theoretical and algorithmic contributions to optimization problems that can have a direct impact on software developments for embedded computing systems. For us, addressing relevant problems and validating our results, possibly with industrial partners, is of prime importance. This is the reason

why we are interested only in contracts with real collaborations between academic partners and industry. In particular, we prefer direct contracts, as we obtained with STMicroelectronics or CEA-LETI (the ITEA project Martes had limited European collaborations, but a good collaboration with Thales at the national level). With respect to this view of collaborations/contracts, we can consider that we succeeded: we had, and will have, industrial or semi-industrial (CEA) partners for all our activities and sufficient contracts to make Compsys run. This is also because Compsys is an Inria project-team, which brings us helpful funding and possible hiring (post-doc or researcher). As for publications, our point of view is that we must publish only in the top conferences: maybe few papers, but good papers. In this respect, we were also successful, with 5 best paper awards in the last 4 years and publications almost exclusively in ACM/IEEE conferences. Like most groups in computer science, most of our publications are in conferences. This is mostly due to the necessity, for young researchers, to accumulate a significant number of publications in the three years of preparation of a PhD, which is not compatible with the long publication delays of most journals. This is a common situation in computer science. Efforts to change this situation are under way (see recent issues of the Communications of the ACM), but one may wonder whether the advent of electronic publication will not change the parameters of the problem. Finally, in terms of manpower, Compsys has always been a small team, and it will remain so because the compiler community is fairly small. The departure of Tanguy Risset and Antoine Fraboulet was a big loss, but Compsys could hire a new Inria researcher, Christophe Alias, and several post-docs (one or two every year).
The initial expertise of Christophe Alias is unfortunately not on the hardware synthesis side, so it does not compensate for the departure of Tanguy Risset. But, in addition to a good expertise in program transformations, he brings a strong expertise in software development, which will be of great help for Compsys to maintain an activity in which applicability is highly desirable.

4.12 Perspectives

Keywords

Embedded systems, DSP, VLIW, FPGA, hardware accelerators, compilation, code optimization, just-in-time compilation, memory optimization, program analysis, high-level synthesis (HLS), parallelism, scheduling, linear programming, polyhedra, graphs, regular computations.

Vision and new goals of the team

Due to the reduction of the team and the fact that we did not replace the expertise of Tanguy Risset and Antoine Fraboulet, we will have to narrow the range of our activities and focus more on the software side. This is mandatory if we want to achieve significant results. We will therefore concentrate on only two activities: code optimization for embedded processors, in particular JIT compilation, and high-level synthesis, in particular front-end optimizations.

Code optimization and JIT compilation

We want to continue our analysis of combinatorial problems arising in code optimization. Thanks to a deeper understanding of these optimization problems, we can not only derive aggressive solutions with more costly approaches but also derive cleaner and finally simpler solutions. For example, understanding the SSA form and the subtleties of critical control-flow edges allows us to consider new strategies for register allocation that can be both fast and efficient, thus techniques that are good candidates for JIT compilation. More generally, we want to consider split compilation in the context of deferred code generation for embedded processors. The challenge is to be able to split expensive code generation techniques into an off-line part and an on-line part.
In deferred code generation, the source program is cross-compiled into a low-level, processor-independent program representation usually referred to as bytecode. Code generation is then performed either at load time or just-in-time (JIT). The main motivation is the drastic reduction of the program footprint in flash memory: for example, the bytecode program image has not yet been affected by optimizations that increase code size. Furthermore, the same image can serve the heterogeneous processors in the embedded device (portability). To make deferred code generation more efficient, split compilation tries to move the expensive analyses to the upstream compiler and relies on annotations on the bytecode program representation (in an intermediate language such as CLI) to drive the program transformations of the deferred code generator. Again, to make this possible, only a deep understanding of code optimizations (of optimal solutions and of what can be obtained by heuristics, of interactions between different optimizations, of which concepts and strategies are architecture-independent and which are not, etc.) will make this research fruitful.

High-level synthesis

High-level synthesis has become a necessity, mainly because the exponential increase in the number of gates per chip far outstrips the productivity of human designers. Besides, applications that need hardware accelerators usually belong to domains, like telecommunications and game platforms, where fast turn-around and time-to-market minimization are paramount. We believe that our expertise in compilation can contribute to the development of the needed tools.

The problem can be tackled in two ways. First, one can select an existing tool, either an academic tool such as Spark or GAUT, or an industrial tool such as CatapultC or C2H, and write front-end optimizers for it. The other, more ambitious, approach is to attempt the direct generation of VHDL code, to be submitted to synthesis tools. Both approaches have strengths and drawbacks; it is too early to decide which one will give the best results. Our first experiments with HLS tools reveal two important issues. First, HLS tools are, of course, limited to certain types of input programs, so as to make their design flows successful. It is a painful and tricky task for the user to transform the program so that it fits these constraints and to tune it to get good results. Automatic or semi-automatic program transformations can help the user achieve this task. Second, users, even expert users, have only a very limited understanding of what back-end compilers do and why they do not lead to the expected results. An effort must be made to analyze the different design flows of HLS tools, to explain what to expect from them, and how to use them to get a good quality of results. Our first goal is thus to develop high-level techniques that, used in front of existing HLS tools, improve their utilization. This should also give us directions on how to modify them. We should be able to make some contributions on this topic, even tools, if we rely on other well-written tools. We plan to use Rose (Lawrence Livermore National Laboratory) for high-level transformations and to build on top of back-end compilers such as Ugh (Ivan Augé, Frédéric Pétrot), Gaut (Philippe Coussy), or the compiler developed by the team of Scott Mahlke (University of Michigan) for academic tools, and Altera C2H, CatapultC, or PicoExpress for industrial ones. More generally, we want to consider HLS as an exercise in parallelization.
In this approach, one has to solve many optimization problems concerning the encoding of the FSA, the optimal use of registers, and the implementation of control constructs. Nevertheless, the results can be disappointing because this approach makes poor use of the inherent parallelism of hardware. So far, no HLS tool is capable of generating designs with communicating parallel accelerators, even if, in theory, at least for the scheduling part, a tool such as PicoExpress could have such capabilities. The reason is that it is very hard, for example, to automatically design parallel memories, i.e., to decide the distribution of array elements in memory banks to get the desired performance with parallel accesses. Also, how to express communicating processes at the language level? How to express constraints, pipeline behavior, communication media, etc.? To better exploit parallelism, a first solution is to extend the source language with parallel constructs, as in all derivatives of the Kahn process networks model, including communicating regular processes (CRP) (see Section 4.3.2). The other solution is a form of automatic parallelization. However, classical methods, which are mostly based on scheduling, are not directly applicable, first because they pay poor attention to locality, which is of paramount importance in hardware. Besides, their aim is to extract all the parallelism in the source code; they rely on the runtime system to tailor the degree of parallelism to the available resources. Obviously, there is no runtime system in hardware. The real challenge is thus to invent new scheduling algorithms that take both resources and locality into account, and then to infer the necessary hardware from the schedule. This is probably possible only for programs that fit into the polytope model.

Evolution of research activities

We just illustrate here some of our current developments and research directions.
Register spilling for JIT and split compilation

Just-in-time compilers are catching up with ahead-of-time frameworks, stirring the design of more efficient algorithms and more elaborate intermediate representations. They rely on continuous, feedback-directed (re-)compilation frameworks to adaptively select a limited set of hot functions for aggressive optimization. Leaving the hottest functions aside, (quasi-)linear complexity remains the driving force structuring the design of just-in-time optimizers. Despite the developments of SSA-based register allocation that we explored in recent years, the problem of spilling, i.e., the problem of deciding which variables must be moved to memory to save registers, and when, is largely open. Even an optimal formulation, possibly very expensive, has not been proposed. Because of this lack of clear understanding, spilling is addressed by heuristics that are very poor, or not fast enough, or both. One of our challenges in the next years will be to better understand how to spill variables and to develop fast and efficient algorithms for JIT compilation, even for complex architectures with register aliasing. This research will be conducted in the context of the Mediacom contract with STMicroelectronics. Quentin Colombet will very soon start a PhD thesis on the topic. With the Inria project-team Alchemy, we want to go even further: we want to develop a split compiler design, where more expensive ahead-of-time analyses guide lightweight just-in-time optimizations. A split register allocator can be very aggressive in its offline stage (even optimal), producing a semantic digest, through bytecode annotations, that can be processed by a lightweight online stage. The algorithmic challenges are threefold: (sub-)linear-size annotations, a linear-time online stage, and minimal loss of code quality. In most cases, portability of the annotation is an important fourth challenge.
We will develop a split register allocator meeting those four challenges, where a compact annotation derived from an optimal integer linear program drives a linear-time algorithm to near-optimality.
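To make the spilling problem concrete, here is a minimal, illustrative sketch (not Compsys code; all names are hypothetical) of the classical furthest-next-use eviction rule (Belady's policy) applied to a straight-line sequence of variable accesses with k registers. It shows why spilling is easy in this idealized setting and only becomes hard with control flow, live ranges, and register aliasing, as discussed above.

```python
def spill_schedule(uses, k):
    """Greedy 'furthest next use' spilling on straight-line code.

    uses: list of variable names in the order they are accessed.
    k: number of available registers.
    Returns the list of (spilled_variable, position) decisions.
    """
    in_regs = set()   # variables currently held in registers
    spills = []
    for i, v in enumerate(uses):
        if v in in_regs:
            continue              # already in a register: no action
        if len(in_regs) < k:
            in_regs.add(v)        # a free register is available
            continue

        # All registers busy: evict the variable whose next use is furthest.
        def next_use(x):
            for j in range(i + 1, len(uses)):
                if uses[j] == x:
                    return j
            return float("inf")   # never used again: best eviction candidate

        victim = max(in_regs, key=next_use)
        in_regs.remove(victim)
        spills.append((victim, i))
        in_regs.add(v)
    return spills

# Example: 2 registers, 6 accesses; 'b' then 'a' get spilled.
print(spill_schedule(["a", "b", "c", "a", "b", "c"], 2))
```

On straight-line code this rule is optimal, which is precisely what breaks down in the general setting: with branches, loops, and rematerialization, no comparable optimal formulation is known, as noted above.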

SSA, SSI, ψ-SSA

The static single information (SSI) form is a compiler intermediate representation that extends the SSA form. The fact that the interference graphs of procedures represented in SSA form are chordal is a nice property that initiated our work on register allocation under SSA form. In 2007, Philip Brisk proved a similar result concerning SSI: the interference graph would be an interval graph. This property may have consequences for register allocation and liveness analysis. Unfortunately, the algorithms and proofs are based on several incorrect claims and theories (in particular the program structure tree of Johnson et al.) that we are currently debunking. We recently proved that the interval-graph property is indeed correct. The proof, and the algorithm that generates a suitable ordering of basic blocks to get the interval-graph representation, are based on the notion of loop nesting forest. We have just started this research. Our goal is to complete the understanding of such intermediate representations in order to then derive better algorithms. Beyond SSA and SSI, our goal is to extend our results to ψ-SSA, a variant of SSA that takes predication into account and that is very useful for STMicroelectronics architectures. All code transformations and optimizations must be studied and adapted to ψ-SSA.

Loop transformations for HLS and communication optimizations

In previous studies, we explored the use of loop transformations in front of HLS tools. We used the WRaP-IT loop transformation tool developed by the Inria project-team Alchemy and integrated into the ORC open-source compiler. Starting from C code synthesized by the Spark HLS tool, we showed important performance improvements of the resulting design (in terms of silicon area and communication optimization).
This preliminary study also confirmed that there is a strong need for data communication/storage mechanisms between the host processor and the hardware accelerator. We are currently investigating how to design a hardware/software interface model enabling communication optimizations, such as burst communications. This may imply the design of a memory controller, the development of a performance model, and compilation techniques, possibly with compilation directives. The integration with existing HLS tools is tricky; current investigations are conducted with the Altera C2H code generator. More generally, we would like to develop a library of communications, associated with directives in the C code, to better exploit data transfers between hardware accelerators and the outside world. So far, HLS tools usually communicate through FIFOs, which prevents prefetching, data packing, data reorganization, and parallelism. Going beyond this communication model, with optimized communications, is an important challenge.

Program termination and static worst-case execution time

Knowledge of the worst-case execution time (WCET) of a function or subroutine is key information for embedded systems, as it allows the construction of a schedule and the verification that real-time constraints are met. The WCET problem is obviously connected to the termination problem: a piece of code that does not terminate has no WCET. The standard method for proving termination is to compute a ranking function (Floyd, 1967), which maps the operations of the program to a well-ordered set in such a way that the ranking decreases as the computation advances. We noticed that the constraints on a ranking function are similar to the causality condition that applies to a schedule, as defined for automatic parallelization, with just a change of sign.
This allowed us to reinvest most of the techniques for multi-dimensional scheduling (Feautrier, 1992) and to obtain multi-dimensional ranking functions in a very efficient way. The WCET of the program can then be obtained easily (at least in theory) by means of polyhedral techniques. The method we are investigating is able to handle (but of course not decide termination of) every program whose control can be translated into a counter automaton. This roughly covers programs whose control depends exclusively on integer variables, using conditionals, for loops, and while loops whose tests are affine expressions on the variables. Furthermore, it seems to be very easy to approximate programs that fall outside this class by familiar techniques, like ignoring non-affine tests or variables with too complex a behavior. Termination of the approximate program entails termination of the original one, but the converse is not true. This method can be extended in several interesting directions. In some cases, when termination cannot be proved, it is possible to construct a certificate of non-termination in the form of a looping scenario, which should be an invaluable help for debugging. The class of ranking functions should be extended to parametric and piecewise affine functions; this will extend the class of programs that can be proven to terminate. The initial motivation of this work was to automatically transform some while loops into DO loops with early exits or additional guards; such a representation is indeed more suitable for HLS tools. We already have a prototype (see Section 4.8.1), but a lot of work remains to be done, both on the theory and on the implementation of our algorithms. This activity will take place within the contract S2S4HLS (source-to-source for high-level synthesis) with STMicroelectronics and the Inria project-team Cairn.
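To illustrate the ranking-function principle on a toy example (this is not one of our tools; the helper names are hypothetical), consider a counter loop and a one-dimensional affine ranking that is non-negative and strictly decreases at each iteration: its existence bounds the number of iterations, hence termination and, in principle, a WCET bound. The sketch below checks these two conditions by brute-force simulation over small initial states.

```python
def check_affine_ranking(step, rank, max_start=50):
    """Check, by exhaustive simulation over small initial states, that
    `rank` is non-negative whenever a transition is taken and strictly
    decreases along every transition. A toy stand-in for the symbolic
    (polyhedral) test described in the text."""
    for x0 in range(max_start):
        x = x0
        while True:
            nxt = step(x)
            if nxt is None:                  # loop exited: this run terminates
                break
            assert rank(x) >= 0, "ranking must be bounded below"
            assert rank(nxt) < rank(x), "ranking must strictly decrease"
            x = nxt
    return True

# Counter automaton for: while (x > 0) { if x even: x = x / 2 else: x = x - 1 }
def step(x):
    return None if x <= 0 else (x // 2 if x % 2 == 0 else x - 1)

# rank(x) = x is an affine ranking: non-negative while the loop runs and
# strictly decreasing at every step, so the loop terminates.
print(check_affine_ranking(step, lambda x: x))  # True
```

The actual method replaces this brute-force check by linear constraints on the coefficients of the affine ranking, solved exactly as the causality constraints of multi-dimensional scheduling are.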

4.13 Publications

International and national peer-reviewed journals [ACL]

[198] Cédric Bastoul and Paul Feautrier. Adjusting a program transformation for legality. Parallel Processing Letters, 15(1-2):3-17, March-June.
[199] Alain Darte and Guillaume Huard. New complexity results on array contraction and related problems. Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, 40(1):35-55, May.
[200] Alain Darte, Robert Schreiber, and Gilles Villard. Lattice-based memory allocation. IEEE Transactions on Computers, 54(10), October. Special issue, Tribute to B. Ramakrishna (Bob) Rau.
[201] Christophe Guillon, Fabrice Rastello, Thierry Bidault, and Florent Bouchez. Procedure placement using temporal-ordering information: Dealing with code size expansion. Journal of Embedded Computing, 1(4), December.
[202] Paul Feautrier. Scalable and structured scheduling. International Journal of Parallel Programming, 34(5), October.
[203] Antoine Fraboulet and Tanguy Risset. Master interface for on-chip hardware accelerator burst communications. Journal of VLSI Signal Processing, 29(1):73-85.
[204] Hadda Cherroun, Alain Darte, and Paul Feautrier. Reservation table scheduling: Branch-and-bound based optimization vs. integer linear programming techniques. RAIRO-OR, 41(4), December.
[205] Antoine Scherrer, Nicolas Larrieu, Pierre Borgnat, Philippe Owezarski, and Patrice Abry. Non-Gaussian and long memory statistical characterisations for internet traffic with anomalies. IEEE Transactions on Dependable and Secure Computing (TDSC), 4(1):56-70.
[206] Philippe Grosse, Yves Durand, and Paul Feautrier. Methods for power optimization in SoC-based data flow systems. ACM Transactions on Design Automation of Electronic Systems, 14(3):1-20.

Invited conferences, seminars, and tutorials [INV]

[207] Alain Darte. Lattice-based memory allocation.
In Seminar on Sphere Packings, Mathematisches Forschungsinstitut Oberwolfach, Germany, November. Invited talk.
[208] Alain Darte. Revisiting register coalescing in the light of SSA form. In Workshop on Compilers for Parallel Computing, Lisboa, Portugal, July. Invited talk.
[209] Paul Feautrier. Embedded systems and compilation. In MSR (Modélisation des systèmes réactifs) conference. Invited talk.
[210] Fabrice Rastello. SSA-based register allocation. In ESWeek 08 (Embedded Systems Week), Atlanta, October. Tutorial with P. Brisk, S. Hack, J. Palsberg, and F. Pereira.
[211] Alain Darte and Fabrice Rastello. SSA-based register allocation. In CGO 2008, Seattle, March. Tutorial with P. Brisk and J. Palsberg.
[212] Alain Darte. Out-of-SSA translation. In SSA form seminar, Autrans, April. Invited talk.
[213] Benoit Boissinot. Fast liveness checking in SSA. In SSA form seminar, Autrans, April. Invited talk.

International and national peer-reviewed conference proceedings [ACT]

[214] Alain Darte and Robert Schreiber. A linear-time algorithm for optimal barrier placement. In ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 05), pages 26-35, Chicago, IL, USA, June.
[215] Alain Darte, Steven Derrien, and Tanguy Risset. Hardware/software interface for multi-dimensional processor arrays. In IEEE International Conference on Application-Specific Systems, Architectures, and Processors (ASAP 05). IEEE Computer Society Press, July.

[216] Hadda Cherroun, Alain Darte, and Paul Feautrier. Scheduling under resource constraints using dis-equalities. In Design Automation and Test in Europe (DATE 06), Munich, Germany, March.
[217] Antoine Scherrer, Nicolas Larrieu, Pierre Borgnat, Philippe Owezarski, and Patrice Abry. Non-Gaussian and long memory statistical modeling of internet traffic. In 4th International Workshop on Internet Performance, Simulation, Monitoring and Measurement (IPS-MoMe), Salzburg, Austria, March.
[218] Guillaume Chelius, Antoine Fraboulet, and Eric Fleury. Demonstration of Worldsens: A fast prototyping and performance evaluation tool for wireless sensor network applications & protocols. In Second International Workshop on Multi-hop Ad Hoc Networks: From Theory to Reality (REALMAN), Florence, Italy, May. ACM.
[219] Antoine Scherrer, Nicolas Larrieu, Pierre Borgnat, Philippe Owezarski, and Patrice Abry. Une caractérisation non gaussienne et longue mémoire du trafic internet et de ses anomalies. In 5th Conference on Security and Network Architectures (SAR), Seignosse, France, June.
[220] Florent Bouchez, Alain Darte, Christophe Guillon, and Fabrice Rastello. Register allocation: What does the NP-completeness proof of Chaitin et al. really prove? In 5th Annual Workshop on Duplicating, Deconstructing, and Debunking (WDDD 06), held in conjunction with the 33rd International Symposium on Computer Architecture (ISCA-33), Boston, USA, July.
[221] Pierre Amiranoff, Albert Cohen, and Paul Feautrier. Beyond iteration vectors: Instancewise relational abstract domains. In Static Analysis Symposium (SAS 06), volume 4134 of Lecture Notes in Computer Science, Seoul, Korea, August. Springer.
[222] Philippe Grosse, Yves Durand, and Paul Feautrier. Power modeling of a NoC-based design for high-speed telecommunication systems.
In 16th International Workshop on Power and Timing Modeling, Optimization and Simulation (PATMOS), Montpellier, France, September.
[223] Antoine Scherrer, Antoine Fraboulet, and Tanguy Risset. A generic multi-phase on-chip traffic generation environment. In IEEE 17th International Conference on Application-Specific Systems, Architectures and Processors (ASAP 06), Steamboat Springs, Colorado, USA, September.
[224] Silvius Rus, Guobin He, Christophe Alias, and Lawrence Rauchwerger. Region Array SSA. In 15th ACM/IEEE International Conference on Parallel Architectures and Compilation Techniques (PACT 06), pages 43-52, Seattle, WA, USA, September. ACM.
[225] Antoine Scherrer, Antoine Fraboulet, and Tanguy Risset. Automatic phase detection for stochastic on-chip traffic generation. In 4th International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), pages 88-93, Seoul, South Korea, October. ACM Press.
[226] Florent Bouchez, Alain Darte, Christophe Guillon, and Fabrice Rastello. Register allocation: What does the NP-completeness proof of Chaitin et al. really prove? Or revisiting register allocation: Why and how. In 19th International Workshop on Languages and Compilers for Parallel Computing (LCPC 06), New Orleans, USA, November.
[227] P. Borgnat, N. Larrieu, P. Owezarski, P. Abry, J. Aussibal, L. Gallon, G. Dewaele, N. Nobelis, L. Bernaille, A. Scherrer, Y. Zhang, Y. Labit, et al. Détection d'attaques de dénis de service par un modèle non gaussien multirésolution. In Colloque francophone sur l'ingénierie des protocoles (CFIP), Tozeur, Tunisia, November.
[228] Antoine Fraboulet, Guillaume Chelius, and Eric Fleury. Worldsens: System tools for embedded sensor networks. In Real-Time Systems Symposium (RTSS 2006), Work in Progress session, Rio de Janeiro, Brazil, December. IEEE.
[229] Guillaume Chelius, Antoine Fraboulet, and Eric Fleury. Worldsens: A fast and accurate development framework for sensor network applications.
In The 22nd Annual ACM Symposium on Applied Computing (SAC 2007), Seoul, Korea, March. ACM.
[230] Florent Bouchez, Alain Darte, and Fabrice Rastello. On the complexity of register coalescing. In International Symposium on Code Generation and Optimization (CGO 07), pages , IEEE Computer Society Press, March. Best paper award.
[231] Daniel Grund and Sebastian Hack. A fast cutting-plane algorithm for optimal coalescing. In Shriram Krishnamurthi and Martin Odersky, editors, Compiler Construction 2007, Lecture Notes in Computer Science, Braga, Portugal, March. Springer. Best paper award.

Part 4. Compsys

[232] Antoine Fraboulet, Guillaume Chelius, and Eric Fleury. Worldsens: Development and prototyping tools for application specific wireless sensors networks. In IPSN 07 Track on Sensor Platforms, Tools and Design Methods (SPOTS), Cambridge, Massachusetts, USA, April. ACM.
[233] Hadda Cherroun and Paul Feautrier. An exact resource constrained-scheduler using graph coloring technique. In AICCSA 07: The 5th ACS/IEEE International Conference on Computer Systems and Applications, pages , IEEE Computer Society, May. Best paper award.
[234] Nicolas Fournel, Marine Minier, and Stéphane Ubéda. Survey and benchmark of stream ciphers for wireless sensor networks. In Workshop in Information Security Theory and Practices (WISTP 2007), Heraklion, Crete, Greece, May.
[235] Nicolas Fournel, Antoine Fraboulet, Guillaume Chelius, Eric Fleury, Bruno Allard, and Olivier Brevet. Worldsens: Embedded sensor network application development and deployment. In 26th Annual IEEE Conference on Computer Communications (Infocom), Anchorage, Alaska, USA, May. IEEE.
[236] Christophe Alias, Fabrice Baray, and Alain Darte. An implementation of lattice-based memory reuse in the source-to-source translator ROSE. In ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 07), pages 73–82, San Diego, USA, June.
[237] Florent Bouchez, Alain Darte, and Fabrice Rastello. On the complexity of spill everywhere under SSA form. In ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 07), pages , San Diego, USA, June.
[238] Nicolas Fournel, Antoine Fraboulet, and Paul Feautrier. eSimu: a fast and accurate energy consumption simulator for embedded system. In IEEE International Workshop: From Theory to Practice in Wireless Sensor Networks, Helsinki, Finland, June.
[239] Alain Darte and Clément Quinson. Scheduling register-allocated codes in user-guided high-level synthesis.
In ASAP 07: The 18th IEEE International Conference on Application-specific Systems, Architectures and Processors, pages , IEEE Computer Society, July.
[240] Nicolas Fournel, Antoine Fraboulet, and Paul Feautrier. Fast and instruction accurate embedded systems energy characterization using non-intrusive measurements. In PATMOS Workshop - International Workshop on Power And Timing Modeling, Optimization and Simulation, Göteborg, Sweden, September.
[241] Alexandru Plesco and Tanguy Risset. Coupling loop transformations and high-level synthesis. In SYMPosium en Architectures nouvelles de machines (SYMPA 08), February.
[242] Benoit Boissinot, Sebastian Hack, Daniel Grund, Benoît Dupont de Dinechin, and Fabrice Rastello. Fast liveness checking for SSA-form programs. In Sixth Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO 08), pages 35–44, Boston, MA, USA, April. ACM. Best paper award.
[243] Florent Bouchez, Alain Darte, and Fabrice Rastello. Advanced conservative and optimistic register coalescing. In International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES 08), pages , Atlanta, GA, USA, October. ACM.
[244] Benoit Boissinot, Alain Darte, Benoît Dupont de Dinechin, Christophe Guillon, and Fabrice Rastello. Revisiting out-of-SSA translation for correctness, code quality, and efficiency. In International Symposium on Code Generation and Optimization (CGO 09), pages , Seattle, WA, USA, March. IEEE Computer Society Press. Best paper award.
[245] Ouassila Labbani, Paul Feautrier, Eric Lenormand, and Michel Barreteau. Elementary transformation analysis for Array-OL. In ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 09, pages , Rabat, Morocco, May.
[246] Laure Gonnord and Jean-Philippe Babau. Quantity of resource properties expression and runtime assurance for embedded systems.
In ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 09), pages , Rabat, Morocco, May.
[247] Qingda Lu, Christophe Alias, Uday Bondhugula, Sriram Krishnamoorthy, J. Ramanujam, Atanas Rountev, P. Sadayappan, Yongjian Chen, Haibo Lin, and Tin-Fook Ngai. Data layout transformation for enhancing locality on NUCA chip multiprocessors. In International ACM/IEEE Conference on Parallel Architectures and Compilation Techniques, September 2009.

Short communications [COM] and posters [AFF] in conferences and workshops

[248] Sebastian Hack. Register Allocation for Programs in SSA Form. DATE 2007 PhD Forum poster, April.
[249] Nicolas Fournel, Antoine Fraboulet, Guillaume Chelius, Eric Fleury, Bruno Allard, and Olivier Brevet. Worldsens: From lab to sensor network application development and deployment. In International Conference on Information Processing in Sensor Networks (IPSN), demo session, Cambridge, MA, USA, April. ACM.
[250] Nicolas Farrugia, Michel Paindavoine, and Clément Quinson. On the need for semi-automated source to source transformations in the user-guided high-level synthesis tool. In High-level Synthesis: Back to the Future (DAC workshop), June. Poster.

Scientific books and book chapters [OS]

[251] Paul Feautrier. Les compilateurs. In Jean-Éric Pin, editor, Encyclopédie de l'Informatique. Vuibert.

Scientific popularization [OV]

Book or proceedings editing [DO]

Other publications [AP]

[252] Antoine Fraboulet, Guillaume Chelius, and Eric Fleury. WSim: A hardware platform simulator, IDDN.
[253] Guillaume Chelius, Antoine Fraboulet, and Eric Fleury. WSNet: A modular event-driven wireless network simulator, IDDN.
[254] Nicolas Fournel. ARM Linux support for ARM Integrator CM922T-XA10 in standalone mode. Available online, March.
[255] Nicolas Fournel. Mutek OS support for ARM Integrator CM922T-XA10 in standalone mode. Available online, April.
[256] Alain Darte and Rob Schreiber. System and method of optimizing memory usage with data lifetimes. US patent number , April.
[257] Alain Darte. Quelques propriétés mathématiques et algorithmiques des ensembles convexes. Énoncé et corrigé de l'épreuve de mathématiques et informatique, concours d'entrée aux ENS de Cachan, Lyon et Ulm, session . Revue de Mathématiques Spéciales, 119(1).

Doctoral dissertations and habilitation theses [TH]

[258] Antoine Scherrer. Analyses statistiques des communications sur puce.
PhD thesis, École normale supérieure de Lyon, December.
[259] Hadda Cherroun. Scheduling for High-Level Synthesis. PhD thesis, Université des Sciences et de la Technologie Houari Boumediene, Alger, December.
[260] Nicolas Fournel. Estimation et optimisation de performances temporelles et énergétiques pour la conception de logiciels embarqués. PhD thesis, École normale supérieure de Lyon, November.
[261] Philippe Grosse. Gestion dynamique des tâches dans une architecture microélectronique intégrée à des fins de basse consommation. PhD thesis, École normale supérieure de Lyon, December.
[262] Florent Bouchez. A Study of Spilling and Coalescing in Register Allocation as Two Separate Phases. PhD thesis, École normale supérieure de Lyon, April.

Totals

ACL - International and national peer-reviewed journals
INV - Invited conferences

ACT - International and national peer-reviewed conference proceedings
COM / AFF - Short communications and posters in international and national conferences and workshops
OS - Scientific books and book chapters: 1 1
OV - Scientific popularization
DO - Book or proceedings editing
AP - Other publications
TH - Doctoral dissertations and habilitation theses
Total

5. Graal - Algorithms and Scheduling for Distributed Heterogeneous Platforms

Team: GRAAL
Scientific leader: Frédéric VIVIEN
Evaluation Web site:
Parent Organisations: CNRS, École Normale Supérieure de Lyon, INRIA, Université de Lyon

Team History: The current GRAAL team emerged from the ReMaP team in late . GRAAL initially included five permanent members under the leadership of Frédéric Desprez, and was officially created as an INRIA project-team on April 1. On July 1, 2006, as Frédéric Desprez became head of the LIP laboratory, Frédéric Vivien took over the team leadership.

5.1 Team Composition

Current team members

Permanent Researchers (Name, First name, Function, Institution, Arrival date):
BENOIT Anne, Assistant Professor, ENSL, 01/09/2005
CANIOU Yves, Assistant Professor, UCBL, 01/09/2005
CARON Eddy, Assistant Professor, ENSL, 01/04/2001
DESPREZ Frédéric, DR, INRIA, 01/09/1995
FEDAK Gilles, CR, INRIA, 01/09/2008
L'EXCELLENT Jean-Yves, CR, INRIA, 01/09/2001
MARCHAL Loris, CR, CNRS, 01/10/2007
PÉREZ Christian, CR, INRIA, 01/10/2008
ROBERT Yves, Professor, ENSL, 01/09/1988
TOURANCHEAU Bernard, Professor, UCBL, 06/02/2007
UÇAR Bora, CR, CNRS, 01/01/2009
VIVIEN Frédéric, CR, INRIA, 01/09/2002

Post-docs, engineers and visitors (Name, First name, Function and % of time if other than 100%, Institution, Arrival date):
BARD Nicolas, Engineer, CNRS, 01/12/2006
BOBELIN Laurent, Post-doc, INRIA, 09/03/2009
BOUZIANE Hinde, Post-doc, ENS, 01/09/2009
CHOWDHURY Indranil, Post-doc, INRIA, 21/05/2009
HE Haiwu, Engineer, INRIA, 01/09/2008
HEYMANN Michaël, Post-doc, CNRS, 10/01/2009
ISNARD Benjamin, Engineer, INRIA, 01/03/2008
LE MAHEC Gaël, Engineer, INRIA, 01/09/2009
LOUREIRO David, Engineer, INRIA, 15/04/2009
MATHIEU Clement, Engineer, INRIA, 15/06/2009
TANG Bing, Visiting PhD student, Wuhan Univ. of Techn., China, 24/10/2008
YU Wang, Visiting PhD student, Hohai University, China, 14/12/

Part 5. Graal

Doctoral Students (Name, First name, Institution, Date of first registration as doctoral student):
BEN SAAD Leila, MENRT, 01/09/2008
BIGOT Julien, INSA de Rennes, 01/09/2008
CHARRIER Ghislain, CORDI/S, 01/11/2007
DEPARDON Benjamin, MENRT, 01/09/2007
DUFOSSÉ Fanny, ENSL, 01/09/2008
GALLET Matthieu, ENSL, 01/09/2006
GAY Jean-Sébastien, Region, 01/09/
JACQUELIN Mathias, MENRT, 01/10/2008
PICHON Vincent, CIFRE EDF, 06/04/2009
RENAUD-GOUD Paul, MENRT, 01/09/2009
REZVOY Clément, MENRT, 01/09/2007

Past team members

Past Doctoral students (Name, First name, Registration – Departure, Institution, Doctoral school, Supervisor, Current position):
AGULLO Emmanuel, 01/09/ – /12/2008, MENRT, MathIF, J.-Y. L'Excellent, PostDoc at U. of Tennessee, Knoxville, USA
BOLZE Raphaël, 01/10/ – /01/2009, CNRS, MathIF, F. Desprez, PostDoc at University of Southern California, Los Angeles, USA
CHOUHAN Pushpinder Kaur, 01/10/ – /10/2006, MENRT, MathIF, F. Desprez and E. Caron, Unemployed
MARCHAL Loris, 01/09/ – /09/2007, ENS Lyon, MathIF, Y. Robert and O. Beaumont, CNRS, GRAAL
PINEAU Jean-François, 01/09/ – /08/2008, ENS Lyon, MathIF, Y. Robert and F. Vivien, PostDoc LIRMM, Montpellier
REHN-SONIGO Veronika, 01/10/ – /09/2009, MENRT, MathIF, A. Benoit and Y. Robert, PostDoc MOAIS, Grenoble
TEDESCHI Cédric, 01/09/ – /10/2008, MENRT, MathIF, E. Caron and F. Desprez, PostDoc at INRIA Sophia Antipolis
VERNOIS Antoine, 01/03/ – /10/2006, MENRT, MathIF, F. Desprez, P. Primet and C. Blanchet (IBCP), IT company, Toulouse

Past post-doctoral researchers, engineers and visitors, greater than or equal to 3 months (Name, First name, Function, Date of arrival, Date of departure, Home Institution if appropriate, Current position):
AMAR Abdelkader, PostDoc, 25/12/ – /12/2007, INRIA, Engineer at a private company, Paris

1. On long term leave since June 2007 for health reasons.

BOIX Eric, Engineer, 01/01/ – /09/2007, CNRS, LIP, MC2
BOUTEILLER Aurélien, ATER, 01/09/ – /09/2006, ENS Lyon, PostDoc at U. of Tennessee, Knoxville, USA
BOUZIANE Hinde, Demi-ATER, 01/09/ – /08/2009, Lyon 1, Post-doc in GRAAL
BUTTARI Alfredo, PostDoc, 01/01/ – /10/2008, INRIA, CNRS, Toulouse
CEDEYN Aurélien, Engineer 50%, 16/04/ – /08/2009, CNRS, Engineer at CEA
COMBES Philippe, Engineer, 16/12/ – /12/2008, CNRS, Engineer, CICT, Toulouse
EYRAUD-DUBOIS Lionel, PostDoc, 17/10/ – /10/2007, INRIA, INRIA, Bordeaux
FEVRE Aurélia, Engineer, 01/09/ – /09/2007, INRIA, CEA Grenoble
HAKEM Mourad, PostDoc, 01/01/ – /08/2008, ENS Lyon, Associate professor, LIFC, Belfort
LE MAHEC Gaël, Demi-ATER, 01/09/ – /08/2009, Lyon 1, Engineer in GRAAL
LOUREIRO David, Engineer, 01/10/ – /10/2008, INRIA, Engineer, INRIA, GRAAL
PETIT Franck, Visiting professor, 01/09/ – /08/2009, INRIA / Univ. of Picardie, Professor, Univ. of Paris 6
PICHON Vincent, Engineer, 01/11/ – /04/2009, CNRS, PhD student in GRAAL
QUEMENER Emmanuel, Engineer, 01/01/ – /08/2007, CNRS, Engineer, ENS Lyon

Evolution of the team: During the reporting period, five permanent members joined the team, four of them being full-time researchers and one holding a teacher-researcher position. Among these five new permanent members, two were directly hired for GRAAL (they were previously holding post-doctoral positions), two left their former laboratories and teams to join GRAAL, and one joined GRAAL when returning to his faculty position after having spent several years working in industry. The turn-over of doctoral students was smooth, as we were reasonably able to attract students and find them funding. The high turn-over of engineers was due to the temporary nature of all our engineer positions.
5.2 Executive summary

Keywords: Algorithm Design, Scheduling, Optimization, Distributed Computing, Grid Computing, Sparse Linear Solvers, Direct Methods, Network Enabled Servers, Grid Middleware.

Research area

Parallel computing has spread into all fields of applications, from the classical simulation of mechanical systems or weather forecasting, to databases, video-on-demand servers, and search tools like Google. From the architectural point of view, parallel machines have evolved from large homogeneous machines to clusters of PCs, then to aggregations of computing or storage resources through Local Area Networks (LAN) or even Wide Area Networks (WAN). The recent progress of network technology has made it possible to use highly distributed platforms as a single parallel resource. Many research projects have been dealing with computing on these new platforms, most of them focusing on low-level software details. We believe that many of these projects failed to study fundamental problems such as the complexity of problems and algorithms, and guaranteed scheduling heuristics.

Main goals

The GRAAL team works on algorithms and scheduling for distributed heterogeneous platforms. The scope of its work goes from theoretical studies to practical developments in freely distributed software. The GRAAL team tackles two main challenges for the widespread use of distributed platforms: the development of environments that will ease their use (in a seamless way), and the design and evaluation of new algorithmic approaches for applications using such platforms. The team's research work is organized in three themes:

1. Scheduling strategies and algorithm design for heterogeneous platforms. The research work carried out under this theme is rather fundamental: platform and application modeling; complexity studies; design, analysis, and evaluation of algorithms and heuristics.

2. Scheduling for sparse direct solvers.
In this theme we consider only direct methods, and our aim is to implement the best solutions in the freely distributed MUMPS software that we co-develop.

3. Providing access to HPC servers on the Grid. The aim of this theme is to propose middleware that enables the use of Grid computing platforms, in particular through a client-server approach, and to study the associated research problems.

Methodology

A typical research work in the first theme is as follows: we pick an application kernel or an application model, and we study how it can be adapted to run efficiently on a distributed heterogeneous platform. We are especially interested in taking communications into account (accurate platform modeling, input/output data transfers, etc.). The methodology in the second theme is quite similar, but mainly targets homogeneous distributed computing platforms. The work in the third theme goes from theoretical studies in distributed computing to development in the freely distributed DIET middleware, covering aspects such as service discovery, data management and replication, performance analysis/prediction, scheduling, and middleware deployment.

Highlights

The software package DIET, developed in the team, was chosen to be the Grid middleware of the Décrypthon Grid, a joint initiative of AFM (the French association against muscular dystrophy), CNRS, and IBM, to help genomics and proteomics research. DIET ensures the load-balancing of jobs over the 6 computing centers federated by this Grid.

The reporting period saw the launch of a start-up company around the DIET middleware and GRAAL's expertise on the deployment of large scale applications over dedicated grids.

Yves Robert was Program Chair of IPDPS 2008 (IEEE Int. Symp. Parallel Distributed Processing), the main conference in our research field. Yves Robert was elected IEEE Fellow (promotion 2006) and Senior Member of the Institut Universitaire de France (2007). Frédéric Desprez received an IBM Faculty award.

The latest version of the MUMPS solver has many new features, including an out-of-core functionality that allows users to solve even larger problems than before.
MUMPS is downloaded more than 3 times per day on average. GRAAL members have been invited to serve on the program committees of numerous conferences. Five new permanent researchers and teacher-researchers joined the team during the reporting period; the team acquired new competencies and started working on new topics, such as stochastic scheduling.

Three main productions/publications:
1. The DIET software (see Section 5.8.1)
2. The MUMPS software (see Section 5.8.1)
3. Anne Benoit and Yves Robert. Mapping pipeline skeletons onto heterogeneous platforms. J. Parallel and Distributed Computing, 68(6): .

Other main productions/publications:
4. Arnaud Legrand, Alan Su, and Frédéric Vivien. Minimizing the stretch when scheduling flows of divisible requests. Journal of Scheduling, 11(5): .
5. Olivier Beaumont, Larry Carter, Jeanne Ferrante, Arnaud Legrand, Loris Marchal, and Yves Robert. Centralized versus distributed schedulers for multiple bag-of-task applications. IEEE Trans. Parallel Distributed Systems, 19(5): .
6. Patrick R. Amestoy, Abdou Guermouche, Jean-Yves L'Excellent, and Stéphane Pralet. Hybrid scheduling for the parallel solution of linear systems. Parallel Computing, 32(2): .
7. H. Casanova, A. Legrand, and Y. Robert. Parallel Algorithms. Chapman & Hall/CRC Press.
8. Eddy Caron, Frédéric Desprez, and Cédric Tedeschi. Enhancing computational grids with peer-to-peer technology for large scale service discovery. Journal of Grid Computing, 5(3): , September.
9. Abdelkader Amar, Raphaël Bolze, Yves Caniou, Eddy Caron, Andréa Chis, F. Desprez, Benjamin Depardon, Jean-Sébastien Gay, Gaël Le Mahec, and David Loureiro. Tunable Scheduling in a GridRPC Framework. Concurrency and Computation: Practice and Experience, 20(9): .
10. Frédéric Desprez and Antoine Vernois. Simultaneous scheduling of replication and computation for data-intensive applications on the grid. Journal of Grid Computing, 4(1):19–31, March 2006.

5.3 Research activities

Scheduling Strategies and Algorithm Design for Heterogeneous Platforms

List of participants: Anne Benoit (MCF ENS Lyon), Loris Marchal (CR CNRS, since Oct. 2007), Yves Robert (Pr ENS Lyon and Institut Universitaire de France), Bernard Tourancheau (Pr UCB Lyon 1, since Feb. 2007), Frédéric Vivien (CR INRIA), Leila Ben Saad (PhD 2008-), Fanny Dufossé (PhD 2008-), Matthieu Gallet (PhD ), Mathias Jacquelin (PhD 2008-), Loris Marchal (PhD ), Jean-François Pineau (PhD ), Véronika Rehn-Sonigo (PhD ), Clément Rezvoy (PhD 2007-), Lionel Eyraud-Dubois (PostDoc, ), Mourad Hakem (PostDoc, ).

Keywords: Algorithm Design, Scheduling, Complexity, Distributed platforms, Heterogeneity

Scientific issues, goals, and positioning of the team: Scheduling sets of computational tasks on large-scale distributed platforms is quite a challenging problem. Our main objective is to enhance state-of-the-art techniques to cope with the high heterogeneity and low reliability of such platforms. As we aim at investigating problems of practical interest, we depart from the traditional macro-dataflow model and, instead, concentrate on communication-aware models. In addition, we abandon the traditional objective of scheduling algorithms: the minimization of the total execution time, or makespan. This change of objective is justified because most makespan minimization problems are NP-hard or even inapproximable, and because this metric is not relevant in the context of concurrent or online applications. Using realistic models, we concentrate on new multi-criteria approaches that aim at realizing the best possible trade-offs between performance-oriented objectives (throughput, latency, stretch) and, in some cases, reliability or energy minimization. We investigate both offline problems, where all application and platform parameters are known beforehand, and online problems.
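To make the stretch (slowdown) objective mentioned above concrete, here is a toy computation. The instance, the helper function and its name are ours, purely illustrative of the metric; they are not taken from the team's publications. The stretch of a job is its flow time (completion minus release) divided by the time the job would take if it ran alone.

```python
# Toy illustration of the max-stretch (slowdown) objective:
# stretch(job) = (completion - release) / processing_time.

def max_stretch(jobs, completion):
    """jobs: name -> (release_date, processing_time);
    completion: name -> completion date in some schedule."""
    return max(
        (completion[j] - r) / p for j, (r, p) in jobs.items()
    )

# Two unit-speed jobs: a long one released at t=0, a short one at t=1.
jobs = {"long": (0, 10), "short": (1, 1)}

# FIFO schedule: the long job runs first, the short one waits until t=10.
fifo = {"long": 10, "short": 11}
# Preemptive schedule: the short job interrupts the long one at t=1.
preempt = {"short": 2, "long": 11}

print(max_stretch(jobs, fifo))     # 10.0: the short job is slowed 10x
print(max_stretch(jobs, preempt))  # 1.1: both jobs close to solo time
```

Both schedules have the same makespan (11), while their max-stretch differs by an order of magnitude, which is why stretch is the more meaningful metric for online flows of requests.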
We target very challenging algorithmic problems. Still, our goal is to provide efficient solutions owing to the combination of realistic models, sophisticated mathematical tools, and experimental validation. At an international level, our commitment to designing theoretically guaranteed solutions using realistic platform and application models helps us bridge the gap between researchers in scheduling theory and designers of Grid middleware.

Major results, Oct. – Oct. 2009:

We have investigated multi-criteria scheduling techniques for streaming applications, mainly pipeline workflows. The considered criteria include both performance-related objectives (throughput, response time, reliability) and cost-oriented ones (rental price, energy). Our research has been oriented toward establishing new complexity results and providing efficient heuristics (whose performance is compared to an exact linear programming formulation) for the combinatorial instances [275, 276, 269, 271].

We have worked on online scheduling, and especially on the minimization of the maximum stretch (or slowdown) when scheduling flows of divisible requests. We established new theoretical results and proposed practical heuristics [297, 418]. We later extended this study by considering bag-of-tasks applications [349, 272].

We investigated a new scheduling framework, called steady-state scheduling, dedicated to heterogeneous platforms with regular load. In this context, we established the complexity of a large number of scheduling problems, from collective communications [268] to series of task graphs. Recently, we proposed practical algorithms based on steady-state scheduling [407, 408].

During the reporting period, we have initiated some work on scheduling in a stochastic context [362, 348].
For instance, we theoretically studied, and proposed practical heuristics for, the scheduling of a divisible load on a collection of computers prone to failures, when the objective is solely to maximize the expectation of the work completed [362]. Finally, we have provided optimal linear algebra kernels for multicore platforms [414]. This work will be extended in several directions (more precise cache and communication models, implementation on larger platforms).

Self-assessment

Strong Points. The project has very good worldwide visibility and recognition. We publish in the best forums (journals such as IEEE TPDS and JPDC, conferences such as IPDPS and HPDC). We chair or are members of numerous program committees, and serve on the editorial boards of several journals. The project is renewing its main focus at a dynamic (but reasonable) pace. In the last four years, we have investigated new topics and become acquainted with new mathematical and algorithmic techniques (online problems, stochastic algorithms, multi-criteria optimization). Our grant proposals received a strong acceptance rate and support from ANR, CNRS, INRIA, and other institutions.

Weak Points. We lack experimental validation for several of our research results, but the last four years have seen a steady increase of our efforts in that (time-consuming) direction. We would benefit from more industrial collaborations, but we do transfer our skills and knowledge to other disciplines.

Scheduling for Sparse Direct Solvers

List of participants: Emmanuel Agullo (PhD ), Alfredo Buttari (Postdoc, Jan.-Oct. 2008), Indranil Chowdhury (Postdoc, since May 2009), Philippe Combes (Engineer, Dec. – Dec. 2008), Aurélia Fèvre (Engineer, Sept. – Aug. 2007), Jean-Yves L'Excellent (CR INRIA), Bora Uçar (CR CNRS, since Jan. 2009), in close collaboration with P. Amestoy from ENSEEIHT-IRIT and A. Guermouche from LaBRI.

Keywords: direct solvers, HPC, sparse matrices, scheduling

Scientific issues, goals, and positioning of the team: The solution of sparse systems of linear equations (symmetric or unsymmetric, most often with an irregular structure) is at the heart of many scientific applications, usually in relation with numerical simulation. Because of their efficiency and robustness, direct methods (based on Gaussian elimination) are the methods of choice to address these problems. In order to deal with numerical pivoting and to adapt dynamically to the parallel architecture, we are especially interested in approaches that are intrinsically dynamic and asynchronous, leading to irregular task graphs that can vary dynamically. This is challenging from the point of view of parallelism and scheduling strategies. Our main goals consist in processing larger and larger problems (as required by our users) efficiently and in a scalable way, while keeping in mind numerical functionalities and accuracy. Note that our research in this field is strongly related to the software package MUMPS (see Section 5.8.1).

Major results, Oct. – Oct.
2009: During this period, we pursued work on scheduling issues for parallel multifrontal solvers, taking both memory usage and performance into account [265]. Notice that the optimization of memory usage can lead to interesting theoretical problems even in serial environments, as shown in [295]. One of our main achievements concerns the out-of-core factorization, where disks become necessary when the limits of the core memory are reached. This has led to both theoretical [328] and practical contributions [263], although software development issues proved to be much heavier than expected. Finally, we have worked on parallelizing the symbolic and scaling techniques of parallel sparse direct solvers. As much as possible, the results of the associated research have been injected into the software package MUMPS, which implies a huge amount of software engineering work. One of our objectives was to stay close to applications [304, 305], guiding our research directions with the requirements of our users.

Self-assessment

Our main strong points are: good international visibility, both in academia and industry; a highly used software package, MUMPS, employed in various fields related to numerical simulation; research themes motivated by needs from applications and re-injected (as much as we can) into software. There are also some weak points: the team working on this topic and on the MUMPS package is geographically distributed (Lyon, Toulouse, Bordeaux); software quality is difficult to maintain, as there is hardly enough time to work on pure software issues; staff on temporary positions require a significant start-up effort on the software aspects, and their developments can be difficult to integrate or maintain after their departure.

Providing Access to HPC Servers on the Grid

List of participants: Abdelkader Amar (PostDoc, Dec. – Dec. 2007), Nicolas Bard (Engineer, Dec. ), Eric Boix (Engineer, Jan. – Sept.
2007), Raphaël Bolze (PhD ), Aurélien Bouteiller (ATER, Sept. – Sept. 2006), Yves Caniou (MCF UCBL), Eddy Caron (MCF ENSL), Ghislain Charrier (PhD, 2007-), Benjamin Depardon (PhD, 2007-), Frédéric Desprez (DR INRIA), Gilles Fedak (CR INRIA, since Sept. 2008), Jean-Sébastien Gay (PhD, 2005-), David Loureiro (Engineer, Oct. – Oct. 2008), Christian Pérez (CR INRIA, since Oct. 2008), Vincent Pichon (Engineer, Nov. – Apr. 2009), Emmanuel Quemener (Engineer, Jan. – Aug. 2007), Cédric Tedeschi (PhD )

Keywords

Scientific issues, goals, and positioning of the team: Resource management is one of the key issues for the development of efficient Grid environments. Several approaches co-exist in today's middleware platforms. One efficient and simple paradigm consists in providing semi-transparent access to computing servers by submitting jobs to dedicated servers. This model is known as the Software as a Service (SaaS) model, where providers offer, not necessarily for free, computing resources (hardware and software) to clients in the same way as Internet providers offer network resources to clients. The programming granularity of this model is rather coarse. One of the advantages of this approach is that end users do not need to be experts in parallel programming to benefit from high performance parallel programs and computers. It also provides more transparency by hiding the search for and allocation of computing resources. To design middleware implementing such a paradigm, we need to address issues related to several well-known research domains. In particular, we focus on:
- middleware and application platforms as a base to implement the necessary glue to broker clients' requests, find the best server available, and then submit the problem and its data;
- resource and service discovery;
- online and offline scheduling of requests;
- fault tolerance;
- the link with data management and replication;
- distributed algorithms to manage the requests and the dynamic behavior of the platform.
More recently, we have considered other programming abstractions, in particular component-based models, so as to ease the exploitation of large scale and heterogeneous platforms with more abstract programming models.
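The brokering side of this client-server model can be sketched with a toy agent. This is our own hypothetical code, not DIET's API: the class, method names, and the naive least-loaded policy are illustrative assumptions only.

```python
# Hypothetical sketch of an agent in a SaaS/GridRPC-style model:
# servers register the services they offer plus a load estimate,
# and the agent routes each client request to the least-loaded
# server that offers the requested service.

class Agent:
    def __init__(self):
        self.servers = {}  # name -> {"services": set, "load": float}

    def register(self, name, services, load=0.0):
        self.servers[name] = {"services": set(services), "load": load}

    def submit(self, service):
        candidates = [
            (info["load"], name)
            for name, info in self.servers.items()
            if service in info["services"]
        ]
        if not candidates:
            raise LookupError(f"no server offers {service!r}")
        _, best = min(candidates)      # least-loaded matching server
        self.servers[best]["load"] += 1.0  # naive load update
        return best

agent = Agent()
agent.register("node-a", {"blast", "dgemm"}, load=2.0)
agent.register("node-b", {"dgemm"}, load=0.5)
print(agent.submit("dgemm"))  # node-b: least loaded provider
print(agent.submit("blast"))  # node-a: only provider
```

In a real middleware the load estimate would come from performance prediction rather than a fixed counter, which is exactly where the scheduling research above plugs in.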
We have also extended our research to Desktop Grids, which use the computing, network and storage resources of idle desktop PCs distributed over multiple LANs or the Internet to run a large variety of resource-demanding distributed applications.

Major results, Oct. – Oct. 2009:

The DIET software, first designed for our own experiments and algorithms [332, 377, 372, 371, 287] and software validation [282, 368, 378, 396, 383, 264], was chosen by IBM and AFM to be the production middleware of the Decrypthon Grid. This project was really important for the team, both for the study of bioinformatics applications over Grid platforms (scheduling and data replication issues [288, 397, 396, 364], middleware development and adaptation) and because it showed that our developments were mature enough to be used in production. This also led us to consider starting a company around this software and our expertise in application development and optimization.

We have worked on service discovery in P2P environments. We designed the DLPT (Distributed Lexicographic Placement Table) architecture [444, 283, 430, 473], which provides mechanisms for load balancing and fault tolerance. The solution is built from three parts. First, it calls upon an indexing system structured as a prefix tree, allowing multi-attribute range queries. Second, it provides the built-in mapping of such structures onto heterogeneous and dynamic networks and proposes load balancing heuristics for them [386, 387]. Third, as our target platform is dynamic and unreliable, we worked on original fault-tolerance mechanisms, based on self-stabilization [284, 382, 388, 385, 431, 381].
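The prefix-tree indexing that underlies the DLPT can be illustrated with a minimal, centralized trie supporting prefix queries. This sketch is ours: it deliberately ignores the distribution of the tree over peers, the mapping heuristics, and the self-stabilizing repair that constitute the actual contribution, and the service names used are hypothetical.

```python
# Minimal in-memory trie: services are registered under string
# keys and retrieved by key prefix, the query type a lexicographic
# prefix tree supports naturally. (In the DLPT, each trie node
# would instead be mapped onto a peer of a dynamic network.)

class Trie:
    def __init__(self):
        self.children = {}   # char -> Trie
        self.services = []   # services registered at this exact key

    def insert(self, key, service):
        node = self
        for ch in key:
            node = node.children.setdefault(ch, Trie())
        node.services.append(service)

    def prefix_query(self, prefix):
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # collect every service registered at or below this node
        out, stack = [], [node]
        while stack:
            n = stack.pop()
            out.extend(n.services)
            stack.extend(n.children.values())
        return out

t = Trie()
t.insert("dgemm/double", "node-a")
t.insert("dgemm/simple", "node-b")
t.insert("dtrsm/double", "node-c")
print(sorted(t.prefix_query("dgemm")))  # ['node-a', 'node-b']
print(t.prefix_query("zgemm"))          # []
```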
We defined and implemented improved component models showing that it is possible to offer a rich set of concepts at a higher abstraction level (such as data sharing, workflows, and skeletons) while preserving high performance, by being able to reuse state-of-the-art middleware systems [266, 367, 331, 333, 330, 365, 366].

Self-assessment

Strong points. We are able to work on theoretical research issues linked to software development. Our involvement in major research projects such as Grid'5000 and Décrypthon allows us to validate both theoretical and software results on large-scale and production platforms. The DIET software has obtained national and international visibility. We are able to master the whole chain, from component-model definition to implementation, so as to evaluate the benefits of our propositions.

Weak points. Acceptance of new programming concepts takes time, as sufficiently stable prototypes are required. Our software developments, while quite beneficial for our visibility, are time-consuming. We were fortunate to have strong support from INRIA and ANR, but the stability of our engineering staff always depends on the success rate of our proposals.

Part 5. Graal

5.4 Application domains and social or economic impact

5.5 Visibility and Attractivity

Prizes and Awards

People awarded:
- IEEE Fellow, Yves Robert (2006)
- Senior Member of the Institut Universitaire de France, Yves Robert (2007)
- IBM Faculty Award, Frédéric Desprez (2008)

Awarded communications:
- Best paper award for Anne Benoit, Veronika Rehn-Sonigo, and Yves Robert, at HeteroPar.

Chairing and organization of conferences:
- Frédéric Desprez, Vice-chair, CCGRID 2008
- Frédéric Desprez, Organising Chair, ParCo 2009
- Yves Robert, Vice-chair, program committee of IPDPS 2007 (IEEE Int. Parallel and Distributed Processing Symposium)
- Yves Robert, Chair, program committee of IPDPS 2008
- Yves Robert, Vice-chair, program committee of ICPADS 2006 (IEEE Int. Conf. on Parallel and Distributed Systems)
- Yves Robert, Chair, program committee of HiPC 2006 (IEEE Int. Conf. on High Performance Computing)
- Yves Robert, Chair, program committee of the 2009 TCPP PhD Forum (IEEE Technical Committee on Parallel Processing)
- Yves Robert, Chair, program committee of ISPDC 2009 (Int. Symposium on Parallel and Distributed Computing)

Contribution to the Scientific Community

Management of Scientific Organisations

Administration of Professional Societies

Editorial Boards
- International Journal of High Performance Computing Applications, Yves Robert
- International Journal of High Performance Computing Applications, Bernard Tourancheau (guest editor, 2006)
- Future Generation Computing Systems, Bernard Tourancheau (guest editor, 2008)
- Parallel Processing Letters, Bernard Tourancheau (guest editor, 2009)
- Parallel Computing, Frédéric Vivien
- Scalable Computing: Practice and Experience, Frédéric Desprez
- Computing Letters (CoLe), Frédéric Desprez
Organisation of Conferences and Workshops
- Anne Benoit co-organized the Third International Workshop on Practical Aspects of High-level Parallel Programming (PAPP 2006), University of Reading, UK, May 2006; the fourth edition, PAPP 2007, University of Beijing, China, May 2007; the fifth edition, PAPP 2008, Krakow, Poland, June 2008; and the sixth edition, PAPP 2009, Baton Rouge, USA, May 2009.
- In the scope of the INRIA associated team MetagenoGrid (see Section 5.6.1) and of the CNRS-USA grant SchedLife, Anne Benoit, Loris Marchal, Yves Robert, and Frédéric Vivien, in coordination with colleagues from the University of Hawai'i at Mānoa and the University of California at San Diego, organized the 2nd Scheduling in Aussois Workshop. This workshop, by invitation only, lasted three days (May 18 to 21) and gathered 36 participants, who attended 20 technical presentations.
- In the scope of the INRIA associated team MetagenoGrid and of the CNRS-USA grant SchedLife, Anne Benoit, Loris Marchal, Yves Robert, and Frédéric Vivien, in coordination with colleagues from the University of Hawai'i at Mānoa and the University of Tennessee at Knoxville, organized the Scheduling for Large-Scale Systems Workshop in Knoxville. This workshop, by invitation only, lasted three days (May 13 to 15) and gathered 34 participants, who attended 25 technical presentations.

- E. Caron co-organized the SSS 09 conference (11th International Symposium on Stabilization, Safety, and Security of Distributed Systems), Lyon, France, November 2009.
- E. Caron co-organized the INRIA booth at SuperComputing 09, Portland, USA, November 2009.
- E. Caron organized the tutorial session of CCGRID 09.
- F. Desprez was the vice-chair of the CCGRID 08 conference, Lyon, France, May 2008.
- F. Desprez organized the ParCo 2009 conference, Lyon, France, September 2009.
- F. Desprez organized the CoreGRID ERCIM Working Group Workshop on Grids, P2P and Service Computing, in conjunction with EuroPAR 2009, Delft, August 2009.
- Aurélia Fèvre and Jean-Yves L'Excellent organized, with Patrick Amestoy from ENSEEIHT-IRIT, the first workshop dedicated to MUMPS users (MUMPS Users Day, ÉNS Lyon, 24 October 2006), see ens-lyon.fr/mumps/users_day.html.
- C. Pérez was the co-chair of the Intl. Component-Based High Performance Computing workshop, October 16-17, 2008, Karlsruhe, Germany.
- Y. Robert and F. Vivien organized the 35th (French) Spring School in Theoretical Computer Science (EPIT 2007) in June 2007. The theme chosen for the school was scheduling algorithms. During the school, an audience of around 40 students and researchers followed 11 talks.
- B. Tourancheau co-organized the 9th international workshop "Cluster and Computational Grids for Scientific Parallel Computing" (CCGSC06), Asheville, NC, USA, September 2006; organized the 10th edition (CCGSC08), Asheville, NC, USA, September 2008; and organized "IP et les Réseaux de Capteurs" ("IP and Sensor Networks"), Lyon, 2008.
- B. Uçar co-organized a mini-symposium at the SIAM Conference on Computational Science & Engineering (CSE09), Miami, Florida, USA, March 2009.

Program committee members

Anne Benoit: ICCS conference from 2006 to 2009 (Int. Conf. on Computational Science), IPDPS 2008 (Int.
Parallel and Distributed Processing Symp.), SBAC-PAD 2008 (Int. Symp. on Computer Architecture and High Performance Computing), HPCC 2009 (Int. Conf. on High Performance Computing and Communications), ISPDC 2009 (Int. Symp. on Parallel and Distributed Computing), ISCIS 2009 (Int. Symp. on Computer and Information Sciences), TCPP PhD Forum 2009 (in conjunction with IPDPS 2009).

Yves Caniou: IEEE Heterogeneous Computing Workshop (HCW) 2007, 2008, and 2009; IEEE International Conference on Computational Science and Its Applications (ICCSA) 2007, 2008, and 2009; ACM SIGAPP Mardi Gras.

Eddy Caron: IEEE Workshop on Grid and P2P Systems and Applications (GridPeer); IEEE Heterogeneous Computing Workshop (HCW) 2007; International Workshop on Hot Topics in Peer-to-Peer Systems (HotP2P) 2008; IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA) 2007, 2008, and 2009; International Workshop on Middleware for Grids, Clouds and e-Science (MGC) 2008 and 2009; Euromicro PDP 2007, 2008, and 2009; Rencontres francophones du Parallélisme (RenPar) 2006, 2007, 2008, and 2009; International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS).

Frédéric Desprez: EuroPVM 06-09; ICCS 06-08; CLADE 06-09; ICCSA 06-08; Grid 06-08; VecPar 06-08; HeteroPAR 06-09; ISPA 06; Grid area at SC 06; High-Performance Scientific and Engineering Computing (HPCC); SC 06; HPDC 07-09; tutorial program for SC07; CoreGRID Symposium (with EuroPAR 07); CCGRID 07-09; 17th IEEE Int. Symposium on High-Performance Distributed Computing (HPDC-17); PCGrid (with IPDPS); 9th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC-08); 9th IEEE International Conference on Computational Science and Engineering; Modern Computer Tools for the Biosciences (co-located with CCGRID 08); CoreGRID Symposium (within Europar 08); chair of the algorithms track for the poster

session of Supercomputing 09; Third Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009) (co-located with IPDPS 09); DEPEND 09; Grid track of SSS 2009; CloudCom 2009; ICSOC 2009 (ServiceWave 2009).

Gilles Fedak: CDUR 2009 (Second Workshop on "Cohérence Des Données en Univers Réparti", i.e. data consistency in distributed environments); CCGRID 2009 (IEEE International Symposium on Cluster Computing and the Grid); ServP2P 2009 (Workshop on Service-Oriented P2P Networks and Grid Systems); RenPar 2009 (Rencontres Francophones du Parallélisme).

Jean-Yves L'Excellent: ICPADS 2006 (IEEE International Conference on Parallel and Distributed Systems); IPDPS 08 (IEEE International Parallel & Distributed Processing Symposium); CSE 08 (IEEE 11th International Conference on Computational Science and Engineering).

Loris Marchal: International Symposium on Parallel and Distributed Computing (ISPDC) 2009; IEEE International Conference on Parallel and Distributed Systems (ICPADS) 2009.

Christian Pérez: IEEE International Symposium on Cluster Computing and the Grid (CCGRID); International European Conference on Parallel and Distributed Computing (Euro-Par); IEEE 23rd International Conference on Advanced Information Networking and Applications (AINA); IEEE International Conference on High Performance Computing and Communications (HPCC) 2009; International Conference on Parallel Computing (ParCo) 2009; IEEE International Conference on Cluster Computing (Cluster) 2007; IEEE International Conference on Parallel and Distributed Systems (ICPADS) 2008; ACM/IFIP/USENIX International Middleware Conference (Middleware) 2007; International Conference on Networking and Services (ICNS); International Conference on Autonomic and Trusted Computing (ATC) 2007; EuroPVM/MPI Users' Group Meeting; NoE CoreGRID Symposium 2007 and 2008; Workshop on Programming Models for Grid Computing (PMGC) 2007; Grid-Enabling Legacy Applications and Supporting End Users Workshop (GELA) 2006; Workshop on
Component-Based High-Performance Computing (CBHPC/CompFrame) 2008; HPC-GECO/CompFrame Workshop; Rencontres Francophones du Parallélisme (RenPar).

Yves Robert: IPDPS from 2006 to 2009 (Int. Parallel and Distributed Processing Symposium); Euro-Par 2008 (European Conf. on Parallel Processing); PPAM 2009 (Int. Conf. on Parallel Processing and Applied Mathematics); ISCIS 2009 (Int. Symposium on Computer and Information Sciences).

Bernard Tourancheau: Wireless Sensor Network workshop (NTMS 2009), 2009 (steering committee member).

Bora Uçar: Fifteenth International Conference on Parallel and Distributed Systems (ICPADS 09), 2009; 24th International Symposium on Computer and Information Sciences (ISCIS 2009), 2009; IPDPS 2009 TCPP PhD Forum.

Frédéric Vivien: 6th IFIP International Conference on Network and Parallel Computing (NPC), 2009; 9th IEEE International Symposium on Cluster Computing and the Grid (CCGRID), 2009 (only member of the backup team); 8th International Symposium on Parallel and Distributed Computing (ISPDC), 2009; 14th IEEE International Conference on Parallel and Distributed Systems (ICPADS), 2008; Workshop on Scheduling for Parallel Computing (2007 and 2009); Grid 2006, Grid 2007, and Grid 2008 (IEEE/ACM International Workshop on Grid Computing, then IEEE International Conference on Grid Computing); IEEE Heterogeneous Computing Workshop (HCW), 2006; 13th International Conference on High Performance Computing (HiPC), 2006; Rencontres Francophones du Parallélisme (RenPar) 2006, 2008, and 2009; Euromicro Conference on Parallel, Distributed and Network-based Processing (Euro PDP), 2007, 2008, 2009, and 2010; 21st IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2007; CoreGRID Workshop on Grid Middleware 2006, 2007, and 2008; Workshop on Programming Models for Grid Computing (PMGC), 2007.

International expertise
- FWF Austrian Science Fund, E. Caron, 2008, evaluation of one project.
- Science Foundation Ireland, F. Vivien, 2008, evaluation of one principal investigator program.
- Université Franco-Allemande, C. Pérez, 2008, evaluation of one project.
- Wiener Wissenschafts-, Forschungs- und Technologiefonds (a science fund in Vienna, Austria), F. Vivien, 2008, evaluation of one research project.

National expertise
- ANR Blanc et Jeunes Chercheurs, J.-Y. L'Excellent, 2008, evaluation of one project.
- ANR Cosinus, C. Pérez, 2006, evaluation of one project.
- ANR Cosinus, J.-Y. L'Excellent, 2008, evaluation of one project.
- ANR Cosinus, C. Pérez and J.-Y. L'Excellent, 2009, evaluation of one project each.
- ANR Cosinus, F. Desprez, member of the committee.
- ANR Domaines Émergents, E. Caron, 2008, evaluation of one project.
- ANR Masse de Données, C. Pérez, 2006, evaluation of one project.
- ARA Masse de Données, E. Caron, 2006, evaluation of one project.
- CNRS ST2I PEPS, Bernard Tourancheau, 2008, evaluation of four projects.
- Conseil régional Champagne-Ardenne, E. Caron, evaluation of one project.
- Conseil régional d'Aquitaine, F. Desprez, evaluation of one project.
- LAAS, F. Desprez, evaluation of one project.
- Partenariats Hubert Curien (PHC) Berkeley, Bernard Tourancheau, 2009, evaluation of two projects.
- Partenariats Hubert Curien (PHC) PFCC, Bernard Tourancheau, 2008, evaluation of three projects.
- Partenariats Hubert Curien (PHC) SAKURA, Bernard Tourancheau, 2008, evaluation of one project.
- Région Bretagne projects for the allocation of doctoral grants, Bernard Tourancheau, 2009, evaluation of two projects.
- Pôle de compétitivité DIGITEO, F. Desprez, evaluation of one project.

- Recruiting committee ENS Lyon, F. Desprez, president.
- Recruiting committee IFSIC Rennes, F. Desprez, 2009, member.
- Recruiting committee INSA Lyon, F. Desprez, 2009, member.
- Recruiting committee Lyon 1 University, E. Caron, 2009, member.
- Recruiting committee INPG Grenoble, F. Desprez, 2009, member.
- Recruiting committee INRIA Bordeaux – Sud-Ouest, F. Desprez, 2009, member.
- Recruiting committee INRIA Rennes, F. Desprez, 2009, member.
- Evaluation committee of I3S, Nice University, F. Desprez, 2007, member.
- Evaluation committee of INSA Rennes, C. Pérez, 2009, member.

Public Dissemination
- Yves Robert gave a talk "Une infinité d'ordinateurs dans le monde, pourquoi faire?" ("An infinite number of computers in the world: what for?") at the annual multidisciplinary IUF conference (Nancy, 2007).
- Yves Robert contributed a chapter "Algorithmique parallèle" ("Parallel algorithmics"), with Michel Cosnard, to the Encyclopédie de l'Informatique et des Systèmes d'Information, published by Vuibert (2007).

5.6 Contracts and grants

External contracts and grants (Industry, European, National)

(Note: for contracts that extend outside of the reporting period, the amount reported covers only the reporting period.)

INRIA investigation grant: ARC INRIA OTaPHe, 2 years, INRIA, E. Caron, 21,500 €. This project ("Ordonnancement de tâches parallélisables en milieu hétérogène", i.e. scheduling of parallelizable tasks on heterogeneous platforms, coordinated by Frédéric Suter from the INRIA AlGorille project) aims at designing new algorithms for the scheduling of data-parallel tasks on heterogeneous platforms. ARC OTaPHe federated conceptual and experimental research on parallel task scheduling conducted by the AlGorille, GRAAL, MESCAL, and MOAIS INRIA teams. Our approach consists in defining models that take computational Grid heterogeneity into account; these models however have to remain simple. From these models, guaranteed heuristics are designed and implemented in the DIET and OAR middleware in order to validate them. Y. Caniou, E. Caron, and F. Desprez participate in this project.

INRIA investigation grant: ARC INRIA GeoRep, 2 years, INRIA, J.-Y. L'Excellent, 1,000 € [3]. This project (Geometrical Representations for Computer Graphics, coordinated by Bruno Levy from the INRIA ISA-ALICE project) aimed at designing new solutions to convert a raw representation of a 3D object into a higher-level representation. In this context, our participation consisted in providing expertise and support for the underlying numerical problems involved (sparse systems of equations, use of our sparse direct solver MUMPS). A. Fèvre and J.-Y.
L'Excellent participated in this project.

IUF grant, Y. Robert, ENS Lyon, 45,735 €. Yves Robert was elected a Senior Member of the Institut Universitaire de France in 2007. He receives a grant of 15,000 € per year for five years.

Fédération lyonnaise de calcul haute performance (FLCHP) and Pôle scientifique de modélisation numérique (PSMN), ENS Lyon, J.-Y. L'Excellent, 1,600 €. PSMN federates various laboratories, mainly from ENS Lyon, that share parallel machines and experience in parallelizing applications. FLCHP includes PSMN and other centers and aims at coordinating the renewal of parallel machines across several sites (ENS Lyon, UCBL, École Centrale). J.-Y. L'Excellent represents the laboratory in this project.

French Ministry of Research grant: Grid'5000, 3 years. ÉNS Lyon is involved in the Grid'5000 project, which aims to build an experimental Grid platform gathering eight sites geographically distributed in France. Each site hosts several clusters connected through the RENATER network. GRAAL participates in the design of the École normale supérieure de Lyon node. The scalability of DIET was evaluated on this platform, as well as several scheduling heuristics and applications.

French-Israeli project "Multicomputing", INRIA, J.-Y. L'Excellent, 0 € [4]. This project aims at improving the scalability of state-of-the-art computational fluid dynamics through state-of-the-art numerical linear algebra approaches. Apart from us, it involves Tel Aviv University and ENSEEIHT-IRIT (Toulouse), where Alfredo Buttari is the coordinator for the French side. In GRAAL, Indranil Chowdhury, J.-Y. L'Excellent, and Bora Uçar participate in this project.

[3] Estimate; the budget was managed by the INRIA ISA-ALICE project, which covered our expenses related to the project.
[4] The budget is 20,000 € per year for 2 years but is managed, for the French side, by Alfredo Buttari.

RAGTIME: Grille pour le Traitement d'Informations Médicales, ENS Lyon, F. Vivien, 3,250 €. RAGTIME, a project of the Région Rhône-Alpes, is devoted to the use of the Grid to perform efficient distributed accesses and computations on medical data. It federates most of the local researchers on Grid computing together with medical centers, hospitals, and industrial partners. E. Caron and F. Vivien participate in this project.

Project Calcul Hautes Performances et Informatique Distribuée, CNRS, F. Desprez then E. Caron, 24,200 €. F. Desprez led (with E. Blayo from LMC, Grenoble) the "Calcul Hautes Performances et Informatique Distribuée" (high-performance computing and distributed computing) project of the "Informatique, Signal, Logiciels Embarqués" cluster. Together with several research laboratories from the Rhône-Alpes region, we initiated collaborations between application researchers and distributed-computing experts. A Ph.D. thesis (J.-S. Gay) started in September 2005 on scheduling problems for physics and bioinformatics applications. Since 2008, the project has been led by E. Caron. Y. Caniou, E. Caron, F. Desprez, J.-Y. L'Excellent, J.-S. Gay, and F. Vivien participate in this project.

ANR grant: ALgorithmique des Plates-formes A Grande Échelle (ALPAGE), ENS Lyon, Y. Robert, 100,500 €. The goal of this 3-year project is to gather researchers from the distributed systems and parallel algorithms communities in order to develop efficient and robust algorithms for elementary applications such as broadcast and multicast, distribution of tasks that may or may not share files, and resource discovery. These fundamental applications correspond to the spectrum of applications that can be considered on large-scale distributed platforms. Y. Robert leads the Rhône-Alpes site of this project, which comprises three sites: Paris (LIX and LRI laboratories) and Bordeaux-Rennes (Paris and ScAlApplix projects). F.
Vivien also participates in this project.

ANR CICG grant: League for Efficient Grid Operation (LEGO), ENS Lyon, E. Caron, 147,763 €. The aim of this project is to provide algorithmic and software solutions for large-scale architectures; our focus is on performance issues. The software component provides a flexible programming model in which resource management issues and performance optimizations are handled by the implementation. On the other hand, current component technology does not provide the data management facilities needed for large data on widely distributed platforms, and does not deal efficiently with dynamic behaviors. We chose three applications: ocean-atmosphere numerical simulation, cosmological simulation, and a sparse matrix solver. We study the following topics: parallel software component programming; data sharing models; network-based data migration; co-scheduling of CPU, data movement, and I/O bandwidth; and high-performance network support. The Grid'5000 platform provides the ideal environment for testing and validating our approaches. E. Caron leads the project, which comprises six teams: GRAAL/LIP (Lyon), PARIS/IRISA (Rennes), RUNTIME/LaBRI (Bordeaux), ENSEEIHT/IRIT (Toulouse), CERFACS (Toulouse), and CRAL/ÉNS-Lyon (Lyon). A. Amar, R. Bolze, Y. Caniou, P.K. Chouhan, F. Desprez, J.-S. Gay, and C. Tedeschi also participate in this project.

ANR CICG-05-05: Distributed Objects and Components for High Performance Scientific Computing on the Grid'5000 Test-bed (DISC), INRIA, C. Pérez, 110,500 €. DISC aims at studying and promoting a new paradigm for programming non-embarrassingly parallel scientific computing applications on distributed, heterogeneous computing platforms. The project concentrates its activities on numerical kernels and related issues that are of interest to a large variety of application contexts. It is a 3.5-year project which started in January. C. Pérez participates in this project.
ANR CICG-05-02: Adaptation et Optimisation des Performances Applicatives sur architectures NUMA — Étude et Mise en Œuvre sur des Applications en SISmologie (NUMASIS), INRIA, C. Pérez, 115,752 €. This project deals with recent NUMA multiprocessor machines with a deep memory hierarchy. In order to exploit them efficiently, the project aims at evaluating the features of current systems and at proposing and implementing new mechanisms for process, data, and communication management. The target applications come from the seismology field, which is representative of current needs in scientific computing. It is a 4-year project which started in January. C. Pérez participates in this project.

ANR grant: SOlveurs et simulation en Calcul Extrême (SOLSTICE), INRIA, J.-Y. L'Excellent, 124,384 €. The objective of this project is to design and develop high-performance parallel linear solvers that are efficient for solving complex multi-physics and multi-scale problems of very large size (10 to 100 million equations). To demonstrate the impact of our research, the work produced in the project will be integrated in real simulation codes to perform simulations that could not be considered with today's technologies. This project also comprises LaBRI (coordinator), CERFACS, INPT-IRIT, CEA-CESTA, EADS-CCR, EDF R&D, and CNRM. We are more particularly involved in tasks related to out-of-core factorization and solution, parallelization of the analysis phase of sparse direct solvers, rank detection, hybrid direct-iterative methods, and an expertise site for sparse linear algebra. Emmanuel Agullo, Alfredo Buttari, Indranil Chowdhury, Bora Uçar, Aurélia Fèvre, and Jean-Yves L'Excellent participated in this project.

SEISCOPE Consortium, Jean-Yves L'Excellent, 1,000 € [5]. The SEISCOPE consortium focuses on wave propagation problems and seismic imaging. Our parallel solver MUMPS has been used extensively for finite-difference modeling of acoustic wave propagation (see [300]), and we have had many interactions with Stéphane Operto and Jean Virieux (GeoScience Azur, coordinators of the SEISCOPE project). SEISCOPE is supported by the ANR (Agence Nationale de la Recherche) and by BP, CGG, SHELL, and TOTAL. Emmanuel Agullo, Aurélia Fèvre, and Jean-Yves L'Excellent participated in this collaboration.

ANR grant ANR-06-BLAN: Scheduling algorithms and stochastic performance models for applications on dynamic Grid platforms (Stochagrid), ENS Lyon, Y. Robert, 118,647 €. Grid computing platforms and components are subject to great variability.
On the one hand, statistical models are mandatory to account for the variability and dynamicity of resources. On the other hand, efficient scheduling algorithms only exist for static, dedicated platforms. We need a new stochastic model (non-Markov for system interaction but Markov-based for platform characteristics) able to capture the performance of dynamic parallel systems accurately. New, robust scheduling algorithms will be designed and evaluated on top of this model, thereby providing the first stochastic testbed for workflow applications on Grid platforms. The project is entirely conducted within the GRAAL team (Anne Benoit and Yves Robert are the permanent members involved).

ANR grant ANR-06-MDCA-009: Grid Workflow Efficient Enactment for Data Intensive Applications (Gwendia), INRIA, F. Desprez, 174,299 €. The objective of the Gwendia project is to design and develop workflow management systems for applications involving large amounts of data. It is a multidisciplinary project involving researchers in computer science (including GRAAL) and in life sciences (medical imaging and drug discovery). Our work consists in designing algorithms for the management of several workflows on distributed and heterogeneous platforms and in validating them within DIET on the Grid'5000 platform.

ANR grant ANR-08-SEGI-022: Ultra Scalable Simulation with SimGrid (USS SimGrid), INRIA, F. Vivien, 24,676 €. The USS SimGrid project aims at improving the scalability of the SimGrid simulator to allow its use in peer-to-peer research in addition to Grid computing research. The challenges to tackle include making the models more scalable, possibly at the price of slightly reduced accuracy, the automatic instantiation of these models, tools to conduct experiment campaigns, and a partial parallelization of the simulator. The project is led by Martin Quinson from AlGorille/LORIA. GRAAL is involved in model instantiation (WP2) and in the parallelization of the simulation (WP5).
ANR grant ANR-08-SEGI-025: Servicing Petascale Architectures and DistributEd Systems (SPADES), INRIA, E. Caron, 131,736 €. Today's emergence of petascale architectures and the evolution of both research grids and computational grids greatly increase the number of potential resources. However, existing infrastructures and access rules do not allow users to take full advantage of these resources. One key idea of the SPADES project is to propose a non-intrusive but highly dynamic environment able to take advantage of the available resources without disturbing their native use. One of the priorities of SPADES is to support platforms at a very large scale. Petascale

[5] Estimate; the budget was managed by GeoScience Azur, which covered our expenses related to this project.

environments are consequently particularly considered. Nevertheless, these next-generation architectures still suffer from a lack of expertise for accurate and relevant use. Another challenge of SPADES is to provide a software solution for a service discovery system able to face a highly dynamic platform. This system will be deployed over volatile nodes and must therefore tolerate failures. The implementation of such an experimental development leads to the need for an interface with batch submission systems able to make reservations in a manner transparent to users, and also able to communicate with these batch systems in order to get the information required by our schedulers. Yves Caniou, Eddy Caron, Ghislain Charrier, Frédéric Desprez, Gilles Fedak, Franck Petit, and Cédric Tedeschi participated in this project.

ANR JCJC grant: Cloud computing sur plates-formes non fiables de calcul partagé (cloud computing on unreliable shared computing platforms), 48 months, Cloud@Home, INRIA, Gilles Fedak, 58,000 €. Recently, a new vision of cloud computing has emerged in which the complexity of an IT infrastructure is completely hidden from its users. At the same time, cloud computing platforms provide massive scalability, reliability, and fast performance at relatively low cost for complex applications and services. This project, led by D. Kondo from INRIA MESCAL, investigates the use of cloud computing for large-scale, demanding applications and services over unreliable resources. In particular, we target volunteer resources distributed over the Internet. In this project, G. Fedak leads the data management task (WP3).

AFM-IBM-CNRS project Décrypthon, F. Desprez. The Décrypthon program aims at accelerating research in genomics and proteomics, in particular around neuromuscular diseases.
A production Grid financed by IBM has been installed in several universities in France (including ENS), and the GRAAL team was chosen to work on porting bioinformatics applications to the Grid. More recently, DIET was chosen by IBM as the middleware of the whole platform, replacing the United Devices middleware. A volunteer computing platform (World Community Grid) is also used for one application, and we worked on its optimization over the WCG platform. A PhD student (R. Bolze), an engineer (N. Bard), and a postdoc (M. Heymann) worked on the project.

EU FP7 project: Enabling Desktop Grids for e-Science (EDGeS), 24 months, INRIA, Gilles Fedak, 137,502 €. This project is led by P. Kacsuk and involves the following partners: SZTAKI, INRIA, CIEMAT, Fundecyt, University of Westminster, Cardiff University, and University of Coimbra. Grid systems are currently being used and adopted by a growing number of user groups and diverse application domains. However, there still exist many scientific communities whose applications require many more computing resources than existing Grids like EGEE can provide. The main objective of this project is to interconnect the existing EGEE Grid infrastructure with existing Desktop Grid (DG) systems like BOINC or XtremWeb, in a strong partnership with EGEE. The interconnection of these two types of Grid systems will enable more advanced applications and provide extended compute capabilities to more researchers. In this collaboration, G. Fedak represents the GRAAL team, is responsible for JRA1 (Service Grids-Desktop Grids Bridge Technologies), and is involved in JRA3 (Data Management) as well as NA3 (Standardization, within the OGF group).

Samtech, INRIA, J.-Y. L'Excellent, 4,800 €. Samtech S.A. (Belgium) develops the finite element package SAMCEF, which uses our parallel sparse direct solver MUMPS as one of its internal solvers.
This 18-month contract had as its main objective the study of the impact of parallel sparse out-of-core solvers on problems coming from Samtech's customers. The collaboration led to a first out-of-core version of the MUMPS package. In Lyon, Emmanuel Agullo, Aurélia Fèvre, and Jean-Yves L'Excellent participated in this contract; ENSEEIHT-IRIT (Toulouse) also participated.

Samtech, INRIA, J.-Y. L'Excellent, 5,382 €. INRIA and ENSEEIHT-IRIT signed a new contract with the Belgian company Samtech S.A. On Samtech problems, the goal is to improve the memory usage of MUMPS, offer the possibility to address larger memories in a more flexible way, and improve the performance of out-of-core approaches. The new functionalities developed in MUMPS will be made available in a future public release of the package.

European NoE CoreGRID, CNRS, F. Vivien, 90,000 €. The CoreGRID Network of Excellence aims at building a European-wide research laboratory that will achieve scientific and technological excellence in the domain of large-scale distributed, Grid, and peer-to-peer computing. The

primary objective of the CoreGRID Network of Excellence is to build solid foundations for Grid and Peer-to-Peer computing, on both a methodological and a technological basis. This will be achieved by structuring research in the area, leading to integrated research among experts from the relevant fields, more specifically distributed systems and middleware, programming models, knowledge discovery, intelligent tools, and environments. GRAAL is involved in CoreGRID under the partner CNRS. The CNRS partnership involves Algorille in Nancy (E. Jeannot), MOAIS in Grenoble (G. Huard, D. Trystram), and the Graal project (A. Benoit, Y. Caniou, E. Caron, F. Desprez, Y. Robert, F. Vivien). F. Vivien is responsible for the CoreGRID contract within CNRS: he manages the three teams involved in the partner CNRS and represents them in the CoreGRID Members General Assembly. F. Vivien is a member of the CoreGRID Integration Monitoring Committee. He is also responsible for a task in the scheduling workpackage.

Explora Doc, Lawrence Berkeley National Laboratory, USA, Région Rhône-Alpes and INRIA, E. Agullo, 9,000€. PhD student E. Agullo obtained funding from Région Rhône-Alpes (Explora Doc programme) for a 6-month visit (from June 2007 to November 2007) to the Lawrence Berkeley National Laboratory (California, USA), under the supervision of S. X. Li. Additional funding was obtained from INRIA's Explorator programme. This collaboration focuses on out-of-core left-looking and right-looking supernodal approaches, with the aim of comparing these approaches with the out-of-core multifrontal approaches studied in France.

France-Berkeley Fund Award ( ), Jean-Yves L'Excellent, 0€. In the framework of the France-Berkeley Fund, we have been awarded a research grant to enable an exchange programme involving both young and confirmed scientists.
The collaboration will focus on massively parallel solvers for large sparse matrices and will reinforce the collaboration initiated by Emmanuel Agullo (see above). On the French side, this project also involves P. Amestoy (ENSEEIHT-IRIT), A. Guermouche (LaBRI), F.-H. Rouet (ENSEEIHT-IRIT) and I. Duff (CERFACS). Emmanuel Agullo, Bora Uçar and Jean-Yves L'Excellent participate in this project.

NEGST ( ). The aim of the CNRS-JST NEGST project (Next Grid Systems and Techniques) is to help build new collaborations between Japan and France in the area of Grid platforms, in particular interoperability and technological improvements. The three main themes are the virtualization of Grid resources, metrology for Grid computing, and interoperability between Grid middleware and applications.

REDIMPS ( ). REDIMPS (Research and Development of International Matrix Prediction System) is a project funded by the Strategic Japanese-French Cooperative Program on Information and Communications Technology including Computer Science, with the CNRS and the JST. The goal of this international collaboration is to build an international sparse linear equation solver expert site. Among the objectives of the project is the cooperation of the TLSE partners and the JAEA in the testing, validation and promotion of the TLSE system that is currently released. By integrating the knowledge and technology of the JAEA and the TLSE partners, we expect to construct an international expert system for sparse linear algebra on an international grid computing environment. Yves Caniou, Eddy Caron, Frédéric Desprez and Jean-Yves L'Excellent participate in this project.

INRIA Explorer, Japan Atomic Energy Agency, Japan, INRIA, Y. Caniou, 7,000€. Y. Caniou obtained funding from the INRIA Explorer programme for a 3-month visit to the Japan Atomic Energy Agency (JAEA).
This collaboration, in the context of the REDIMPS project, aimed to design and implement interoperability between DIET, the Grid middleware developed in the Graal team, and ITBL, the Grid middleware used by the JAEA on resources distributed in Japan. This work was published at the IEEE ADVCOMP 08 conference and was exhibited during the SuperComputing 08 conference. The solution is deployed behind the web portal in order to provide access to French and Japanese computing resources.

MIT-France Fund Award (2007), MIT, F. Vivien, $5,000. Multicore architectures are now entering the mainstream, as we can now find them in simple laptops. This architectural change induces a pressing demand for new solutions to existing problems, such as the efficient execution of streaming applications (image, video, and digital signal processing applications). In this collaboration

(Footnote to the France-Berkeley Fund Award above: funds are managed in Berkeley. We have had meetings in France, and E. Agullo has spent some time in Berkeley, but we have not used the specific budget, around $10,000 for all institutions.)

with S. Amarasinghe and B. Thies from the MIT CSAIL laboratory, we target the efficient scheduling and mapping of streaming applications on multicore architectures, especially heterogeneous ones. This collaboration is an extension of our work on steady-state scheduling. Matthieu Gallet and Frédéric Vivien participate in this project.

CNRS-USA grant with the University of Hawai'i ( ), SchedLife, CNRS, A. Benoit, 8,372€. We have been awarded a CNRS grant in the framework of the CNRS/USA funding scheme, which runs for three years. The collaboration is with the Concurrency Research Group (CoRG) of Henri Casanova and the Bioinformatics Laboratory (BiL) of Guylaine Poisson, both in the Information and Computer Sciences Department of the University of Hawai'i at Mānoa, USA. The SchedLife project targets the efficient scheduling of large-scale scientific applications on clusters and Grids. To provide context for this research, we focus on applications from the domain of bioinformatics, in particular comparative genomics and metagenomics applications, which are of interest to a large user community today. A. Benoit, E. Caron, F. Desprez, Y. Robert and F. Vivien participate in this project.

INRIA Associated Team MetagenoGrid ( ), INRIA, F. Vivien, 32,000€. This associated team involves exactly the same people, and covers the same subject, as the CNRS-USA grant SchedLife described above.

Marie Curie Action, IOF, MetagenoGrids ( ), INRIA, F. Vivien, 74,451€. This grant supports F. Vivien's sabbatical year at the University of Hawai'i at Mānoa in the scope of the INRIA associated team.

Germano-French University, co-tutelle of V. Rehn-Sonigo ( ), ENS Lyon, Y. Robert, 4,500€. This grant provides funding for exchanges between Lyon and Passau related to the PhD thesis of Veronika Rehn-Sonigo. Her thesis is co-supervised by ENS Lyon (A. Benoit, Y.
Robert) and the University of Passau (H. Kosch).

Research Networks (European, National, Regional, Local)

GRAAL was involved in the following research networks:
- European Network of Excellence CoreGRID (see p. 107).
- Associated team with the University of Hawai'i at Mānoa (see p. 109).
- F. Desprez leads the ERCIM Working Group CoreGRID ( ), which followed the NoE. This working group gathers 31 research teams from all over Europe working on Grids, service-oriented architectures and Clouds.

Internal Funding

5.7 International collaborations resulting in joint publications

As publications do not always exactly correspond to grants (e.g., a collaboration may lead to publications before it is funded by a grant), we first list the grants corresponding to international collaborations, and then the publications.

With grants:
- MIT-France Fund Award (2007), with S. Amarasinghe and B. Thies from the MIT CSAIL laboratory, on the efficient scheduling and mapping of streaming applications on multicore architectures.
- CNRS-USA grant SchedLife ( ) and Associated Team MetagenoGrid ( ), with H. Casanova and G. Poisson from the University of Hawai'i at Mānoa, USA, on the efficient scheduling of large-scale bioinformatics applications on clusters and Grids.
- Germano-French University ( ), with H. Kosch from the University of Passau in Germany, on the scheduling of pipelined applications onto heterogeneous platforms.

With joint publications:
- Colorado State University, USA (Arnold Rosenberg): reliability [362].
- Institute of Information Sciences and Technologies, Pisa, Italy (Marco Aldinucci): algorithmic skeletons and performance models [331].
- JAEA, Tokyo, Japan: interconnection between the ITBL computational Grid and the Grid-TLSE platform [417, 375].

- Loughborough University, UK (Richard Blanchard): numerical simulation of buildings [432].
- MIT, USA (Saman Amarasinghe): program optimization [306].
- MIT, USA (Kunal Agrawal): workflows [325, 324].
- National Institute of Advanced Industrial Science and Technology, Japan (H. Nakada): standardization of GridRPC in the scope of the Open Grid Forum (OGF) [464, 441].
- University of California at San Diego, USA (Larry Carter and Jeanne Ferrante): Grid models, scheduling algorithms [267, 337].
- University of Florida, USA (Jose Fortes): scheduling algorithms [404].
- University of Hawai'i at Mānoa, USA (Henri Casanova): dynamic workflows, network and Grid models, scheduling in virtualized environments [299, 344, 429, 445].
- University of Nevada, Las Vegas, USA (Ajoy Datta): self-stabilizing algorithms [380, 381].
- University of Passau, Germany (Harald Kosch): scheduling algorithms [271, 352].
- University of Tennessee at Knoxville, USA (Jack Dongarra): algorithm design [289, 405].
- University of Memphis, USA (Qishi Wu): streaming applications [437, 412].
- University of Pisa, Italy (Marco Danelutto): programming models (skeletons) [330, 331].

5.8 Software production and Research Infrastructure

5.8.1 Software Descriptions

A: DIET
Type: software (toolbox).
Scientific problem addressed: develop a set of tools to build computational servers.
Functional description: the Distributed Interactive Engineering Toolbox (DIET) project is focused on the development of scalable middleware, with initial efforts focused on distributing the scheduling problem across multiple agents.
Evaluation:
- importance in the user community: used in production for the Décrypthon Grid; a startup based on this software is in progress.
- reference implementation in a community: as of June 2009, the DIET paper is second among the 50 most-frequently cited articles in the International Journal of High Performance Computing Applications (http://hpc.sagepub.com/reports/mfc1.dtl).
Effort: 6 persons over 12 months.
Status: free software.
State: version 2.3 available on the web. Depot APP: IDDN.FR S.P

B: MUMPS
Type: library, currently co-developed with ENSEEIHT-IRIT (P. Amestoy and, more recently, A. Buttari) and LaBRI (A. Guermouche), with research contributions from CERFACS.
Scientific problem addressed: solution of large sparse systems of linear equations, as arise in challenging numerical simulation problems.
Evaluation: worldwide diffusion (academia and industry, downloads per year from our website). Integrated within various commercial and academic packages (Samcef from Samtech, FEMTown from Free Field Technologies, Code_Aster from EDF, IPOPT, PETSc, ...). Partly supported by research contracts (e.g., ANR) and direct contracts with industry (e.g., with Samtech).

Developments were initiated in 1996 by the European project PARASOL (Esprit 4 LTR project 20160); developments during the last four years concern various new functionalities, including parallel scalings, parallel symbolic factorization [460] and out-of-core storage of factors [263]. Latest release: MUMPS version 4.9, which is public domain. More information is available on the MUMPS web sites.

C: Mpi_MkDom2
Type: software developed in collaboration with Daniel Kahn (project Helix, Université Claude Bernard Lyon 1).
Scientific problem addressed: parallelized automatic inference of protein domain families.
Functional description: takes as input a protein sequence database and builds protein domain families.
Evaluation: will enable building new versions of the ProDom database (the existing sequential algorithm is no longer able to process current databases, due to their size and its high complexity).
Status: free software.
State: in the final stages of development.

D: ADAGE
Type: software.
Scientific problem addressed: automatic deployment of multi-technology-based applications.
Functional description: given a description of the application, of the resources, and of optional control parameters, ADAGE computes a deployment plan, copies the needed files, and launches the executables with the needed configuration.
Evaluation: ADAGE shows that it is feasible to deploy complex applications, such as a climatology application described in ULCM, implemented in CCM, and making use of the DIET, JuxMem and MPI systems. ADAGE relies on a conversion of the application description into a generic representation. EDF R&D is interested in such a technology and has provided a PhD grant to improve the model.
Status: free software (developed within the ANR LEGO project).
State: stable.
Depot APP: yes.

E: ULCMi
Type: software (ULCM interpreter and runtime systems).
Scientific problem addressed: increase the abstraction level of component models for high-performance computing by combining component, workflow, data-sharing and skeleton concepts.
Functional description: executes an application described in ULCM.
Evaluation: ULCMi shows the feasibility and the benefits of the proposed model, which combines distinct concepts. It helps identify fundamental issues for a full and efficient implementation.
Status: free software (developed within the ANR LEGO project).
State: proof of concept of ULCM.
Depot APP: ongoing.

F: BitDew
Type: middleware, distributed services.
Scientific problem addressed: large-scale data management on Desktop Grids and Clouds.
Functional description: BitDew relies on five abstractions to manage data: i) replication indicates how many copies of a data item should be available at the same time on the network; ii) fault tolerance controls the policy in the presence of machine crashes; iii) lifetime, an attribute that is absolute or relative to the existence of other data, decides the life cycle of a data item in the system; iv) affinity drives the movement of data according to dependency rules; v) protocol gives the runtime environment hints about the protocol to use to distribute the data (HTTP, FTP or BitTorrent). Programmers define these simple criteria for every data item, and let the BitDew runtime environment manage the operations of data creation, deletion, movement, replication, and fault tolerance.
Status: free software under the GPL-v2 licence.
State: beta software in development (4 releases).
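To illustrate how these five attributes could drive a runtime, here is a minimal Python sketch. It is not the actual BitDew Java API: the class, field and function names are hypothetical, and the placement policy is a toy one used only to show the replication attribute at work.

```python
# Hypothetical sketch (not the real BitDew API) of the five data attributes
# and of a toy placement policy satisfying the replication degree.
from dataclasses import dataclass


@dataclass
class DataAttributes:
    replication: int = 1           # copies that must exist simultaneously
    fault_tolerance: bool = False  # re-place the data if a host crashes
    lifetime: str = "absolute"     # "absolute", or relative to another datum
    affinity: str = ""             # placement dependency on another datum
    protocol: str = "http"         # transfer hint: "http", "ftp", "bittorrent"


def schedule_copies(attrs, alive_hosts):
    """Pick hosts so that `replication` copies of the data item exist."""
    if attrs.replication > len(alive_hosts):
        raise ValueError("not enough hosts to satisfy the replication degree")
    return alive_hosts[: attrs.replication]


attrs = DataAttributes(replication=2, fault_tolerance=True, protocol="bittorrent")
print(schedule_copies(attrs, ["node-a", "node-b", "node-c"]))  # ['node-a', 'node-b']
```

In the real system the programmer only declares the attributes; the runtime performs placement, transfer and crash recovery on its own.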

G: XtremWeb
Type: middleware, now co-developed by INRIA and IN2P3.
Scientific problem addressed: middleware for Desktop Grid computing.
Functional description: XtremWeb is an open-source software package for building lightweight Desktop Grids by gathering the unused resources of desktop computers (CPU, storage, network). Its primary features permit multi-user, multi-application and cross-domain deployments. XtremWeb turns a set of volatile resources spread over a LAN or the Internet into a runtime environment executing high-throughput applications.
Evaluation: XtremWeb is highly programmable and customizable, supporting a wide range of applications (bag-of-tasks, master/worker), computing requirements (data-, CPU- or network-intensive) and computing infrastructures (clusters, desktop PCs, multi-LAN) in a manageable, scalable and secure fashion. Known users: LIFL, LIP, ID-IMAG, LRI (computer science), LAL (physics, Orsay), IBBMC (biology), Université de Guadeloupe, IFP (petroleum), EADS, CEA, University of Wisconsin-Madison, University of Tsukuba (Japan), AIST (Australia), UCSD (USA), Université de Tunis, AlmerGrid (NL), Fundecyt (Spain), Hobai (China), HUST (China).
Support: XtremWeb has been supported by national grants (ACI CGP2P) and by major European grants around Grids and Desktop Grids, such as the FP6 CoreGRID Network of Excellence, FP6 Grid4All, and more recently FP7 EDGeS (Enabling Desktop Grids for e-Science).
Status: free software under the GPL-v2 licence.
State: stable under the production branch developed by IN2P3 (XWHEP).

Contribution to Research Infrastructures

ENS Lyon is involved in the Grid'5000 project, which aims at building an experimental Grid platform gathering nine sites geographically distributed in France (17 laboratories). Each site hosts several clusters connected through the RENATER network. GRAAL participates in the design of the École normale supérieure de Lyon node.
The scalability of DIET, together with several scheduling heuristics and applications, has been evaluated on this platform. The Grid'5000 project is now managed through the Action de Développement Technologique Aladdin, led by F. Desprez.

5.9 Industrialization, patents, and technology transfer

Patents (French, European and Worldwide)

Creation of Startups

Name of company: Graal Systems.
Date of creation: end of 2009.
Nature and domain of activity: Grid and Cloud computing; a company to distribute and support DIET.
Collaboration contracts: in progress (under NDA until the official contract).
Nature of the contribution to the creation: software and Grid/Cloud expertise.
Team members transferred to the company: Eddy Caron (20%), Frédéric Desprez (20%) and David Loureiro (100%).

The creation of a start-up is a means to promote the research work around the DIET software. After the selection of DIET by CNRS, IBM and AFM (Association Française contre les Myopathies) as the grid middleware used in production for the Décrypthon project (which aims at setting up a hardware and software infrastructure allowing researchers in biology and genomics to run in silico simulations), and its successful setup, Frédéric Desprez, Eddy Caron and David Loureiro started the creation of a company focused on providing services for porting scientific applications to distributed heterogeneous computing resources. During this incubation, the project applied to the Concours National à la Création d'Entreprise Innovante held by the Ministère de la Recherche et de l'Enseignement Supérieur and OSÉO, in the Émergence category. The project is one of the winners of this competition: it has been selected by OSÉO and Ernst & Young, and a subsidy of 33 k€ has been awarded to the project. The start-up creation is planned for November 1.
The start-up will be represented on the INRIA booth for start-ups at the SuperComputing 2009 conference in Portland, Oregon, USA, in November.

Industrial Transfer

The knowledge developed during Sébastien Lacour's PhD ( ), which led to the development of ADAGE during the LEGO project, is being transferred to EDF R&D via the PhD of Boris Daix, supported by EDF R&D. Moreover, EDF R&D hired Boris Daix as a research engineer in 2009 to continue working on this topic. The MUMPS package (see Section 5.8.1) is widely used in industry.

Consulting Activities

5.10 Educational Activities

Supervision of Educational Programs

Anne Benoit is responsible for the first-year computer science students (L3) at ENS Lyon (2006-present). Yves Robert is the head of the computer science department at ENS Lyon (2006-present); he was also the head of the computer science master in 2006. Bernard Tourancheau was responsible for the Travaux d'Étude et Recherche credit unit of the computer science master of UCB-Lyon 1. Starting on September 1, 2009, Frédéric Vivien will be head of the Algorithmics course of study in the computer science master of ENS Lyon.

Teaching

Name              | Course title (short)                                  | Level      | Institution | Hours | Academic years
A. Benoit         | Algorithmique                                         | L3         | ENS Lyon    | 48    | 3
A. Benoit         | Algorithmique parallèle                               | M1         | ENS Lyon    | 48    | 2
A. Benoit         | Algorithmique des réseaux et des télécommunications   | M1         | ENS Lyon    | 48    | 1
Y. Caniou         | Réseaux                                               | L3         | UCBL        | 12    |
Y. Caniou         | Réseaux                                               | M1         | UCBL        | 42    |
Y. Caniou         | Stage Unix                                            | M2 Pro     | UCBL        | 23    |
Y. Caniou         | Sécurité                                              | M2 Pro     | UCBL        | 19.5  |
Y. Caniou         | Architecture Sécurité Réseau                          | M2 Pro     | UCBL        | 33.5  |
Y. Caniou         | Architecture Sécurité Réseau                          | M2 Pro App | UCBL        | 23.5  |
Y. Caniou         | Stage                                                 | M2 Pro App | UCBL        | 40    |
Y. Caniou         | CCNA                                                  | M2         | UCBL        |       |
Y. Caniou         | Projet                                                | M1         | UCBL        | 15    |
Y. Caniou         | Stage                                                 | L3         | UCBL        | 2     |
Y. Caniou         | Stage                                                 | M2 Pro     | UCBL        | 9     |
Y. Caniou         | Client/Serveur                                        | M2 Pro     | UCBL        |       |
E. Caron          | C/Unix (cours)                                        | L3         | ENS Lyon    | 48    | 1
E. Caron          | Conduite de projet logiciel                           | L3         | ENS Lyon    | 48    | 2
E. Caron          | Conduite de projet logiciel                           | M1         | ENS Lyon    | 48    | 1
E. Caron          | Distributed Systems                                   | M1         | ENS Lyon    | 48    | 3
E. Caron          | Operating System                                      | L3         | ENS Lyon    | 48    | 3
E. Caron          | Projet C, Caml et Python                              | L3         | ENS Lyon    | 48    | 1
J.-Y. L'Excellent | Parallel Numerical Algorithms                         | M2         | ENS Lyon    | 48    | 1
J.-Y. L'Excellent | High Perf. Matrix Computations                        | M2         | ENS Lyon    | 48    | 1
L. Marchal        | Ordonnancement                                        | M1         | ENS Lyon    | 48    | 2
Y. Robert         | Probability and randomized algorithms                 | M1         | ENS Lyon    | 48    | 2
B. Tourancheau    | Développement durable: Énergie et bâtiment            | L3         | UCB-Lyon    |       |
B. Tourancheau    | Algorithmique et Programmation                        | L1         | UCB-Lyon    |       |
B. Tourancheau    | Réseau et Système                                     | M2         | UCB-Lyon    |       |
B. Tourancheau    | Travaux d'Étude et Recherche                          | M2         | UCB-Lyon    |       |
B. Tourancheau    | Stage d'application professionnelle ou recherche      | M2         | UCB-Lyon    |       |
B. Tourancheau    | Réseaux de capteurs                                   | M2         | UCB-Lyon    |       |
F. Vivien         | Algorithmics and scheduling for distributed platforms | M2         | ENS Lyon    | 48    | 2

5.11 Self-Assessment

Strong points

Over the years, the team has steadily produced high-quality research results. This is attested by the publication record, by the involvement of team members in numerous program committees, by the many international collaborations the team is involved in, and by the distribution of our software. The quality of the team's work is best exemplified by the fact that it has attracted five new permanent staff members over the last three years. These new recruits brought new competencies which, in turn, will help define the team project. The international contacts also ensure that the team has a broad and accurate view of the research field at an international scale. The team regularly re-examines its research and long-term objectives, which leads to a smooth and regular evolution of our points of focus. For instance, during the reporting period the team considered, for the first time, the problems of online scheduling and of scheduling in stochastic contexts. Also, during the past year, the team began focusing on multicore platforms, and it plans to extend its efforts in that direction. With the arrival of new members, the team has enlarged its scope towards programming models, more particularly component- and service-based programming models, and desktop grids.

Weak points

The team is involved in several software developments. However, all software development positions are temporary (1- to 3-year positions, most of them in the lower part of the hiring scale) and depend on irregular funding. As a consequence, it is difficult (if not impossible) to make long-term development plans and to ensure the necessary software stability and support, even for recognized software (this is best exemplified by the MUMPS software, whose future depends on how much time permanent researchers manage to spend on maintenance, user support and the integration of new research).
During the last two years, the team was split between two geographically distinct sites. Team cohesion greatly suffered from this decision, which was a consequence of the severe shortage of working space on the main site of the laboratory. During the last years, the laboratory suffered from recurring administrative under-staffing (increase in the number of laboratory members, non-permanent administrative positions, natural turnover of personnel). As a consequence, more and more administrative duties are handled by researchers, which is clearly detrimental to the research work.

Since the GRAAL project-team was created in April 2004, the number of permanent researchers in the team has jumped from five to twelve (four of the new permanent members being full-time researchers, and three holding teacher-researcher positions). Each new permanent member brought new and invaluable skills to the team. As a consequence, the breadth and nature of the problems studied by the team have evolved very significantly. We propose to replace the large and diverse GRAAL team with two smaller and more coherent teams: ROMA (Resource Optimization: Models, Algorithms, and Scheduling, described in Section 5.12), and Avalon (Algorithms and Software Architectures for Service-Oriented Platforms, described in Section 5.13).

5.12 Perspectives: new team ROMA (Resource Optimization: Models, Algorithms, and Scheduling)

Keywords: optimization, algorithms, scheduling strategies, sparse linear systems, software development.
List of permanent researchers: Anne Benoit, Jean-Yves L'Excellent, Loris Marchal, Yves Robert, Bernard Tourancheau, Bora Uçar, Frédéric Vivien (head).

Vision and new goals of the team

The ROMA team aims at designing algorithms and scheduling strategies that optimize the resource usage of scientific computing applications.
Modern computing platforms provide huge amounts of computational power: GENCI's Jade supercomputer includes 3,057 quad-core processors; the Grid'5000 research grid, 3,202 processors; the EGEE production grid,

some 40,000 CPUs; and the World Community Grid desktop grid aggregates more than 1 million devices. Such huge computational power may be used to solve large computational problems more quickly or more precisely. More importantly, it is a sine qua non for scientists who aim to solve problems that are currently far beyond reach. However, a careless use of these platforms would ruin such endeavors. A core belief of the ROMA team is that resources must be spared: computational time, communication capabilities, energy, memory, etc., when considering computing infrastructures; and throughput, response time, stretch, etc., when considering users. However, resource optimization is quite difficult, because these platforms have new and hard-to-manage characteristics: they contain multicore processors and sometimes specialized processors such as GPUs; they may be distributed on a very large scale, which can significantly impact communications; they may be volatile and even unreliable; and their usage may be subject to conflicting objectives from the platform owner(s) and users. To enable scientific computing applications to fully harness modern computing platforms, the ROMA team focuses on: the design of algorithms suited for multicore architectures; the design of multi-criteria optimizations which guarantee both the efficiency of platform usage and some Quality of Service for users; the design of static approaches able to cope with platform dynamicity; the design of sparse solvers, which are ubiquitous in scientific applications; and the design and analysis of combinatorial algorithms.

A: Algorithms for Multicore Platforms

The major recent change in computer architecture is the ubiquity of multicore processors. The number of cores per processor should significantly increase, leading in the near future to many-core processors containing tens or hundreds of cores.
Therefore, even on large-scale platforms gathering many processors, it is important to exploit the parallel capabilities already present inside each processor: the multicore nature of the processing elements is a critical point for achieving good performance. Our first objective is to understand the impact of multicore processors, and to come up with a simple yet realistic model of multicore architectures. In particular, while experimenting with and optimizing the practical performance of our algorithms on such architectures, we have to pay special attention to their hierarchical structure, to the distribution of data storage along this hierarchy, and to the increasing heterogeneity of platforms.

Hierarchical structure. Because they contain multicore processors, all modern platforms have a hierarchical structure, and this structure impacts the way algorithms must be designed. In particular, the granularity of computations must be hierarchical and adapted to the hierarchy levels. For instance, a linear algebra operation should be divided into large blocks at the node level, which in turn are sub-divided into smaller blocks to be distributed to the computing cores. This hierarchical structure is partially described by the NUMA (Non-Uniform Memory Access) factor, that is, the ratio between the times taken to access data in a distant memory and in a local one. This factor, however, does not describe other characteristics of the hierarchy, such as potential memory or bandwidth contention when several sub-parts of a program simultaneously make intensive use of memory or data movement.

Distribution of data storage. Another key characteristic of these platforms is the distribution of memories, even within a single processor. Cores are provided with a private data cache of relatively small size, and with a shared cache of larger size which eases access to data and enables data sharing between cores.
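The two-level granularity just described (large node-level blocks subdivided into core-level tiles) can be sketched with a toy, pure-Python blocked matrix product. The block and tile sizes are illustrative placeholders; a real kernel would call tuned BLAS routines and choose sizes matching the cache hierarchy.

```python
# Toy illustration of hierarchical granularity: an n x n product is cut into
# node-level blocks, each subdivided into core-level tiles (pure Python sketch).
def blocked_matmul(A, B, n, node_block, core_tile):
    C = [[0.0] * n for _ in range(n)]
    # Outer loops: node-level blocks (distributed-memory granularity).
    for I in range(0, n, node_block):
        for J in range(0, n, node_block):
            for K in range(0, n, node_block):
                # Middle loops: core-level tiles inside each block (cache granularity).
                for i0 in range(I, min(I + node_block, n), core_tile):
                    for j0 in range(J, min(J + node_block, n), core_tile):
                        for k0 in range(K, min(K + node_block, n), core_tile):
                            # Innermost loops: the working set fits a core's cache.
                            for i in range(i0, min(i0 + core_tile, n)):
                                for j in range(j0, min(j0 + core_tile, n)):
                                    s = C[i][j]
                                    for k in range(k0, min(k0 + core_tile, n)):
                                        s += A[i][k] * B[k][j]
                                    C[i][j] = s
    return C
```

The nesting is the point: the innermost loops touch a tile small enough for a core's private cache, while the outer blocks match the node-level memory, which is exactly the adaptation of granularity to hierarchy levels discussed above.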
On a larger scale, data is distributed into the memory of nodes. Theoretical studies provide cache-efficient algorithms, and some recent studies even deal with multicore processors with shared and distributed caches. Most of these studies, however, focus on minimizing the number of cache misses, that is, the number of times a value has to be loaded from memory into a cache line. This model, although interesting for formally assessing the cache efficiency of an algorithm, is not practical for algorithm design. In particular, it lacks several parameters, such as the cache latency of each cache level, the cache bandwidth, or, more generally, the number of simultaneous accesses that a cache supports. These parameters can greatly influence the performance of a cache-intensive algorithm. We do not look for an extremely precise execution model, such as those used in VLSI simulation. We rather need to model the application-level behavior, so as to correctly predict cache and communication interactions, as well as their overlap with computations.

Heterogeneity. Future processors are likely to be heterogeneous themselves, as they will certainly incorporate special-purpose cores alongside general-purpose ones. This trend is exemplified by the Cell BE processor and the emerging use of Graphics Processing Units (GPUs) for general-purpose processing (GP-GPU). As a consequence, future computing platforms will be far more heterogeneous than existing ones. This will allow us to build on our expertise in algorithm design for heterogeneous platforms. Nevertheless, new characteristics will have to be taken into account, such as: the presence of heterogeneity at a smaller scale; new data-movement characteristics (cache misses are more relevant than communication times); and special relationships between hierarchy and heterogeneity (a

cluster of nodes is likely to include a homogeneous cluster of heterogeneous processors). Our algorithmic work on multicore architectures will follow two main directions. In the first direction, we will study how, and to what extent, our previous work on heterogeneous platforms can be adapted to these new architectures. Here we have already started to adapt our scheduling work on heterogeneous platforms to the Cell BE processor, as an example of a heterogeneous multicore processor. In the second direction, we will focus on the optimization of some linear algebra kernels for multicore architectures, in both sparse and dense settings. For dense linear algebra kernels, we have started investigating matrix-matrix multiplication on a multicore processor with the objective of minimizing the number of cache misses (shared cache misses, distributed cache misses, or a combination of both). We have also started a study of the algebraic path problem (which models a number of linear algebra operations) on a more complex architecture including GPUs and CPUs, which requires more complex modeling. For sparse linear algebra, we plan to experiment with thread-based multicore parallelism in the parallel sparse direct solver that we develop, MUMPS. By working on the design of an OpenMP version of MUMPS (on top of MPI), hybrid parallelism using both threads and message passing should help improve memory usage and performance on multicore platforms. The use of more heterogeneous processors, including GP-GPUs, seems to be a longer-term issue that may need to be tackled too. See paragraph D below for the context in which this work on sparse direct solvers will take place.

B: Multi-criteria Optimization

Classical scheduling aims at mapping a task graph, i.e., an application made of several tasks with dependencies, onto a set of distributed resources, so as to minimize the makespan, i.e., the execution time of the whole application.
Recent work has shown, however, that in many contexts makespan minimization is not appropriate. For instance, for pipelined applications, throughput maximization is more meaningful: the goal is to process as many data sets per time unit as possible. Going further in that direction, we plan to consider multi-criteria optimization problems mixing user-oriented and system-oriented metrics. User-oriented metrics will ensure that the produced schedule satisfies some Quality of Service constraints (like the stretch metric, which forbids starvation). System-oriented metrics will ensure that system resources are not wasted (like throughput maximization). Among the potential system metrics is energy consumption, a strongly emerging objective given the specific energy-related problems of current computing platforms (energy cost, problems and cost of cooling) and the global environmental context. Green scheduling aims at minimizing energy consumption, by running processors at lower frequencies or by reducing the number of processors enrolled. Of course, being green often involves running at a slower pace, thereby increasing the application makespan or reducing its throughput. Minimizing energy consumption could typically be experimented with in our parallel sparse direct solver MUMPS, which requires a huge amount of computing power for large-scale industrial applications. Since performance will still be a significant issue, we will consider bi-criteria optimization techniques that take both performance and energy consumption into account (see paragraph D for more information on the context in which this could be done). Another metric, important for instance on desktop Grids, is reliability. Indeed, with the advent of very large-scale heterogeneous platforms, resources may be cheap and abundant, but resource failures (computers, communication links, etc.) are more likely to occur and have an adverse effect on the applications.
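The time/energy trade-off of green scheduling can be illustrated with the standard frequency-scaling model (a common textbook model, not a description of our solvers): with dynamic power proportional to f^3, a task of w operations run at frequency f takes w/f time units and costs about w*f^2 energy units, so the greenest feasible choice is the lowest frequency that still meets the deadline. The function name and numbers below are purely illustrative.

```python
# Hedged sketch of bi-criteria (deadline, energy) frequency selection.
# Model assumption: time = w/f, dynamic energy ~ w * f^2.

def best_frequency(w, deadline, frequencies):
    """Return (f, time, energy) minimizing energy under the deadline."""
    feasible = [f for f in frequencies if w / f <= deadline]
    if not feasible:
        return None  # no frequency meets the deadline
    f = min(feasible)  # slowest feasible frequency => least energy
    return f, w / f, w * f ** 2

# A task of 12 units, deadline of 6 time units, discrete DVFS levels:
print(best_frequency(12, 6.0, [1.0, 2.0, 3.0]))  # (2.0, 6.0, 48.0)
```

Running at 3.0 would also meet the deadline but cost 108 energy units instead of 48, which is the makespan/energy tension described above.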
We aim at designing robust schedules: the schedule should be prepared, so to speak, to react to resource variations, and should be capable of reserving more resources than needed so as to re-allocate computations as the execution progresses, based on some histogram of the current execution. A solution consists in replicating the processing of key tasks, mapping them onto distinct sets of resources, and gathering results in a data-flow operation mode. The question of determining which tasks to replicate, and to what extent, raises a clear trade-off between the price to pay for robustness (or performance guarantees) and the efficient usage of resources at the system level.

C: Static Management of Dynamicity. Grid computing platforms and components are subject to great variability. Statistical models are mandatory to deal with changes in resource performance, such as CPU speeds or link bandwidths. In addition, frequent hardware failures are unavoidable in large-scale platforms composed of several thousands of processors, which calls for deploying robust algorithms capable of surviving several resource faults during execution. At a smaller scale, collaborative environments allow users who have a massive, compute-intensive workload to enlist the help of other computers in executing the workload. The resulting cooperating computers may belong to a nearby or remote cluster of workstations, or they could be geographically dispersed computers made available through global computing or volunteer computing software such as BOINC. These collaborative platforms add various types of uncertainty to the execution. The most dramatic events in this context are unrecoverable interruptions, which kill all work currently in progress on the interrupted computer when it is suddenly reclaimed by its owner. In both scenarios, the need for some kind of robustness is obvious.
There are two possible approaches: react on-the-fly to performance variations and/or failures, by dynamically migrating tasks, processes, and files during execution; or statically handle resource variability, by designing algorithms that perform redundant computations and schedules that replicate tasks.
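The static replication approach can be sketched as follows (an illustrative model, not our actual scheduler): if each replica of a task fails independently with probability p, then r replicas succeed with probability 1 - p^r, and the smallest r meeting a target success probability follows directly. Task names and probabilities below are hypothetical.

```python
import math

# Hedged sketch: choose a per-task replication factor so that each task
# succeeds with probability at least `target`, assuming independent
# replica failures with probability p each.

def replicas_needed(p, target):
    """Smallest r with 1 - p**r >= target (assumes 0 <= p < 1)."""
    if p <= 0:
        return 1
    # 1 - p**r >= target  <=>  r >= log(1 - target) / log(p)
    return max(1, math.ceil(math.log(1 - target) / math.log(p)))

tasks = {"t1": 0.05, "t2": 0.30, "t3": 0.50}  # failure prob per replica
plan = {t: replicas_needed(p, 0.99) for t, p in tasks.items()}
print(plan)  # {'t1': 2, 't2': 4, 't3': 7}
```

Note how the replication factor grows quickly with the failure probability: this is the price of robustness mentioned above, paid in extra resource usage.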

Both approaches have their own merits. The dynamic approach considered in paragraph B is more flexible, but it induces a lot of overhead and monitoring. The static approach (at compile-time) is less costly and does not require any complicated save-and-restore mechanism. However, it does require some knowledge of the application and platform parameters to be efficient. For instance, task replication should not be conducted systematically, but only for those tasks whose success is critical to the application. Along a similar line, when distributing work in a collaborative environment, we face two dilemmas: (i) sending work to remote computers in small chunks lessens vulnerability to interruption-induced losses, but it also increases the impact of per-work-unit overhead and reduces the opportunities for parallelism within the assemblage of remote computers; and (ii) replicating work lessens vulnerability to interruption-induced losses, but it also reduces the expected productivity advantage of having access to many remote computers. In this framework, the more we know about the probability distributions that govern failures and/or interruptions, the better we can tune the job granularity and the replication factor so as to maximize the expected production. In this scope, we plan to revisit some of our previous work on scheduling for heterogeneous platforms, this time assuming platforms to be prone to failures and/or interruptions. We will mainly focus on problems dealing with divisible loads.

D: Sparse Direct Solvers. This theme of research is the object of a close collaboration with P. Amestoy (Professor, ENSEEIHT-IRIT), A. Buttari (CNRS researcher, ENSEEIHT-IRIT), and A. Guermouche (Assistant professor, Univ. of Bordeaux). Most often, our research efforts, both collaborative and individual, are combined and implemented in the software package MUMPS.
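As a concrete aside to the granularity dilemma of paragraph C: under an (assumed, purely illustrative) exponential interruption process of rate lam, a chunk of size s with fixed per-chunk overhead o survives with probability exp(-lam*(s+o)), so the expected useful work per unit of time spent is s*exp(-lam*(s+o))/(s+o), and tuning s maximizes expected production. All parameter values below are hypothetical.

```python
import math

# Toy chunk-size model for volatile workers: too-small chunks pay the
# overhead o too often, too-large chunks are too likely to be lost to
# an interruption before completing.

def expected_rate(s, o, lam):
    """Expected useful work per unit time for chunk size s."""
    return s * math.exp(-lam * (s + o)) / (s + o)

def best_chunk(o, lam, grid):
    """Grid search for the chunk size maximizing the expected rate."""
    return max(grid, key=lambda s: expected_rate(s, o, lam))

grid = [0.1 * k for k in range(1, 200)]
s_star = best_chunk(o=0.5, lam=0.2, grid=grid)
print(s_star)
```

The optimum is interior: both very small and very large chunks yield a lower expected rate, which is exactly dilemma (i) above.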
The solution of sparse systems of linear equations of the form Ax = b (with A symmetric or unsymmetric, most often with an irregular structure) is at the core of many scientific applications, usually related to numerical simulation, spanning domains such as geophysics, chemistry, electromagnetism, structural optimization, and computational fluid dynamics. The importance and diversity of the fields of application are our main motivation to pursue research on sparse direct solvers. Our focus is on solvers based on multifrontal methods (MUMPS implements such a method for sequential and parallel environments). The main issues in this kind of solver are efficient memory usage and high-performance execution on parallel platforms, while guaranteeing stringent accuracy and numerical properties. As our objective is to stay close to applications, our research directions are motivated by the needs of applications. The following four research directions may evolve depending on the priorities of, and contracts with, current and future industrial and academic partners. The first research direction concerns memory issues. Memory sets the limits for the use of sparse direct solvers. For example, if a user has a 3D, regular, finite-element mesh of size q × q × q, the asymptotically optimal size of the factors is O(q^4). To handle such sizes, one can use out-of-core techniques, where disks supplement memory, or parallelism, where the memory of several computing units is used. Throughout the past few years, we have worked on out-of-core aspects in order to find efficient and effective methods for storing the factorized matrix on disk. Further research remains to be done to further reduce core memory usage, and we should in particular study how disk storage can be used to reduce the working memory requirement of multifrontal approaches.
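A back-of-the-envelope reading of the O(q^4) bound above shows why out-of-core storage becomes necessary (the constant c and the mesh sizes are illustrative, not MUMPS measurements): factor storage grows like q^4 entries, so doubling q multiplies it by 16.

```python
# Hedged estimate of factor storage for a q x q x q mesh, assuming
# roughly c * q**4 double-precision (8-byte) entries.

def factor_gib(q, c=1.0, bytes_per_entry=8):
    """Estimated factor size in GiB for mesh dimension q."""
    return c * q ** 4 * bytes_per_entry / 2 ** 30

for q in (50, 100, 200):
    print(q, round(factor_gib(q), 2))
```

With these (assumed) constants, q = 100 needs under a GiB while q = 200 already needs roughly 12 GiB, beyond the memory of a typical node of that era, hence disks or the aggregated memory of several computing units.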
Before that, we will tackle the memory scalability of parallel multifrontal approaches in order to use the distributed memory of parallel platforms as efficiently as possible. This raises both theoretical considerations (how to schedule the graph of tasks to minimize and balance memory usage, for example) and practical ones (how much performance is one ready to lose in order to improve memory scalability and be able to process larger problems?). The second research direction is related to modern computing platforms: addressing large numbers of processors (for example, Blue Gene machines with thousands of processors) and taking advantage of multicore processors. Memory scalability is clearly an issue for the first point, but communication patterns and performance issues also need to be revisited, evaluated, and most likely partly redesigned. In order to better address multicore architectures (see also paragraph A), we plan to experiment with hybrid parallelism (message passing and threads). We will also keep an eye on the development of standards for GP-GPUs (general-purpose graphical processing units) and on energy consumption issues (controlling processor frequency or core usage depending on the computational phases of the solver, see also paragraph B). The third line of research revolves around the widely-known interplay between combinatorial optimization methods (mostly algorithmic graph theory) and the pre-processing phase of sparse direct solvers. In current state-of-the-art direct solvers, almost all pre-processing steps are parallelized with acceptable performance, except the computation of weighted matchings in bipartite graphs. A matching corresponds to a set of pivots deemed to be beneficial (both from a numerical and a structural point of view) during the later stages of the factorization. Computing a matching is a task of a sequential nature, and it has hence resisted many attempts at parallelization.
We plan to compute approximate matchings, sacrificing certain optimality properties to achieve acceptable parallel performance. The fourth research direction concerns the QR decomposition, which is used for sparse least-squares problems, for example, and has many applications. The ambitious long-term plan consists in developing a parallel distributed code for least-squares problems. There are tight relations with the third research direction, as matching techniques (this time cardinality matching) are used extensively in the QR decomposition. Furthermore, the issues related to memory use are also highly likely to arise.

E: Combinatorial Scientific Computing. This theme of research, in general, concentrates on the development, application, and analysis of combinatorial algorithms to enable scientific and engineering (SCE) computations. Here, we restrict ourselves to the solution of linear systems of equations with direct and iterative methods. We enlarge the applications of combinatorial techniques to high-performance computing applications through the use of task scheduling algorithms. Our point of view is that there are commonalities between task scheduling problems for high-performance computing applications and the partitioning/ordering problems for SCE applications. Therefore, the solutions designed for one set of applications can be used to design solutions for the other set. Similarly, the problems that arise naturally in one set can be used to imagine different scenarios for the other set. The end result, therefore, is the enrichment of the models and methods for both domains. Below, we describe the application of hypergraph partitioning to scheduling file-sharing tasks, and we pose a variant of the partitioning problem which naturally arises in the task scheduling domain. In this case, a known technique from SCE is carried over and enriched. The application of the enriched model to SCE is going to be evaluated. In the hypergraph models used for some task scheduling problems, vertices correspond to tasks, and hyperedges correspond to files that tasks take as input. The general modeling approach, then, asks for a partitioning of the vertices of the hypergraph into a given number of parts such that the parts have equal vertex weights, and such that a linear function of the hyperedges that have vertices in two or more parts (called the cutsize) is minimized.
In the task scheduling problem, these two objectives relate to balancing the loads of the processors and minimizing the communication cost. The partitioning problem is NP-hard, and hence efficient heuristic approaches are needed. An important aspect of the hypergraph model is that all vertices in a hyperedge are equally related. For example, the tasks that take a particular file as input are all assumed to incur the same cost. Therefore, in a computing platform where the communication costs are not uniform, the model cannot capture the heterogeneity. This was an issue in early papers and has been approximated by using an average cost for each hyperedge. In this proposal, we aim to enrich the hypergraph models to encapsulate the non-uniformity among the vertices in a hyperedge. Such a model has never been imagined before, and therefore a number of fundamental questions should be posed and solved, such as: (i) what is the proper definition of the cutsize? (ii) is the partitioning problem still NP-hard? We believe that such an enriched hypergraph model would open a new direction in modeling different problems and would have applications in the scientific computing domain as well. Here, we focus on applications in scheduling file-sharing tasks. Consider a duplicated file residing on two different servers, where a task needs that file for execution. The task can thus download the file from either of the servers, each entailing a different cost due to the heterogeneous communication network. In hypergraph terms, the vertex that corresponds to this task should be connected to the two hyperedges representing the same file, but with different weights. How should we partition the hypergraph, and how should we decode the partitioning information (which hyperedge should be taken into account while calculating the cutsize, bringing us back to the quest for a proper cutsize definition)?
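For reference, the standard (uniform-cost) cutsize that the enriched model would generalize can be sketched in a few lines: a hyperedge (file) of weight w spanning lambda parts costs w*(lambda - 1), i.e., the file must be shipped to lambda - 1 extra processors. The tasks, files, and weights below are illustrative.

```python
# Hedged sketch of the connectivity-minus-one cutsize metric for a
# hypergraph partition. Each hyperedge is (pins, weight), where pins
# are the vertices (tasks) sharing the corresponding file.

def cutsize(hyperedges, part_of):
    total = 0
    for pins, w in hyperedges:
        parts = {part_of[v] for v in pins}   # parts the file spans
        total += w * (len(parts) - 1)        # extra transfers needed
    return total

# 4 tasks, 2 files: file f1 (weight 3) read by tasks 1,2,3;
# file f2 (weight 5) read by tasks 3,4.
hedges = [((1, 2, 3), 3), ((3, 4), 5)]
part_of = {1: 0, 2: 0, 3: 1, 4: 1}
print(cutsize(hedges, part_of))  # 3*(2-1) + 5*(1-1) = 3
```

The open question above is precisely what replaces the single weight w when the pins of a hyperedge incur different, heterogeneous costs.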
Another aspect that we will address within this modeling framework is task duplication to reduce costs. In hypergraph terms, this corresponds to selectively duplicating some vertices in order to reduce the cutsize. The problem has already been addressed in the VLSI community for the standard hypergraph models. Due to the heterogeneity of the communication medium, the approaches proposed in those works again would not fit the presented framework; thus, new algorithms are needed. We also think that it is possible to address task and file duplication simultaneously by integrating the enriched hypergraph model (to encapsulate file duplication) and vertex duplication (to encapsulate task duplication).

Perspectives: new team Avalon (Algorithms and Software Architectures for Service Oriented Platforms)

Keywords: Algorithm Design, Scheduling, Distributed Computing, Services, Software Components, SaaS, SOC, SOA.

List of permanent researchers: Yves Caniou, Eddy Caron, Frédéric Desprez (head), Gilles Fédak, Christian Perez.

Vision and new goals of the team. The fast evolution of hardware capabilities, in terms of wide-area communication as well as machine virtualization, calls for another step in the abstraction of resources with respect to applications. Large-scale platforms based on the aggregation of large clusters (Grids), huge datacenters (Clouds), or collections of volunteer PCs (Desktop computing platforms) are now available to researchers of different fields of science as well as to private companies. This variety of platforms, and the way they are accessed, also has an important impact on how applications are designed (i.e., the programming model used) as well as on how they are executed (i.e., the runtime/middleware system used). Access to these platforms is provided through different services offering mandatory features such as security, resource discovery, virtualization, load-balancing, etc. Software as a Service (SaaS) thus has an important role to play in the future development of large-scale applications. The overall idea is to consider the whole system, ranging from the resources to the application, as a set of services. Hence, a user application is an ordered set of instructions requiring and making use of some services, for example an execution service. Such an execution service is itself an application, but at the middleware level, that offers some services (here used by the user application) and potentially uses other services, for example a scheduling service. This model based on provided and/or required services is generalized within software component models, which deal with composition as well as deployment issues. The long-term goal of the Avalon team is to contribute to the design of programming models supporting many kinds of architectures, to implement them by mastering the various algorithmic issues involved, and to study the impact on application-level algorithms. Ideally, an application should be written once; the difficulty is to determine the adequate level of abstraction that provides a simple programming model to the developer while enabling efficient execution on a wide range of architectures. To achieve such a goal, the team plans to contribute at different levels, including distributed algorithms, programming models, deployment of services, service discovery, service composition and orchestration, large-scale data management, etc. All our theoretical results will be validated on software prototypes using applications from different fields of science such as bioinformatics, physics, cosmology, etc. Grid 5000 will be our platform of choice for our experiments.
A: Programming Models. A first research direction deals with programming models in general, and with composition-based programming models, such as component models or services, in particular. Many models have been proposed to solve particular use cases. While this was needed, because these use cases were important and there was a huge expectation for working solutions, it turns out to be a limitation, as it does not make it possible to create an application requiring several models. For example, GridRPC and Google's Map-Reduce are two important and well-studied paradigms, each with its own, non-collaborating models and implementations. Similar situations occur for other paradigms, like workflows, data sharing, and skeletons, for example. We have shown how these paradigms can fit into a software component model (which has the benefit of being built around the concept of composition and of addressing the issue of deployment). However, models and implementations have been produced manually. Hence, a research direction that we have started to work on is to make use of model transformation techniques to transform high-level concepts into lower-level ones. This would provide a powerful mechanism to transform user-level concepts into system-level concepts, such as those provided by service-based infrastructures, so as to increase application portability while enabling platform-specific optimization.

B: Computing on Large and Volatile Desktop Grids. Desktop Grids use the computing, network, and storage resources of idle desktop PCs distributed over multiple LANs or the Internet to compute a large variety of resource-demanding distributed applications. Today, this type of computing platform forms one of the largest distributed computing systems, and currently provides scientists with tens of TeraFLOPS from hundreds of thousands of hosts.
Although these platforms are efficient for high-throughput computing, little work has been done to provide access to this vast amount of computing resources through a set of high-level, reliable services, such as a massively distributed storage system, for example. We think that a service-oriented approach would broaden the usage of Desktop Grids, involving not only more scientific communities but also commercial enterprises and the general public. Our first research challenge will target the data service and the execution of data-intensive applications on Desktop Grids. While these applications need to access, compute, store, and circulate large volumes of data, little attention has been paid to data management in such large-scale, dynamic, heterogeneous, volatile, and highly distributed Grids. Throughout the past few years, we have designed a middleware called BitDew for large-scale data management and distribution, and studied its applicability to a data-driven master/worker parallel scheme for data-intensive applications. We will leverage this experience to investigate how to implement collective file management services, how to build efficient services for scientific data delivery networks, and how to provide services supporting the Map-Reduce programming model in the context of Desktop Grids. A second research challenge is to integrate Desktop Grids with existing Service Oriented Platforms such as Grids and Cloud Computing. Although we have previous experience in bridging Desktop Grids and the EGEE Grids through the European EDGeS FP7 collaboration, the Cloud is a new and emerging paradigm that we should keep an eye on. We have good hope that the convergence of Cloud computing and Desktop Grids can be achieved with the following work plan: introduction of virtualization technologies in Desktop Grid systems, modification of Desktop Grid schedulers to allow reservations, and design of a new security model allowing trusted and untrusted resources to be mixed in a single distributed infrastructure.
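The master/worker scheme for volatile workers mentioned above can be sketched with a toy simulation (purely illustrative, unrelated to BitDew's actual API): a volatile worker loses its chunk with some probability, and the master simply re-enqueues lost chunks until every map result arrives.

```python
import random

# Hedged sketch of fault-tolerant map-task dispatch on volatile hosts.
# p_fail models a worker being reclaimed by its owner mid-computation.

def run(chunks, p_fail=0.3, seed=0):
    rng = random.Random(seed)
    todo, done, attempts = list(chunks), {}, 0
    while todo:
        chunk = todo.pop(0)
        attempts += 1
        if rng.random() < p_fail:    # worker interrupted: work is lost
            todo.append(chunk)       # master re-schedules the chunk
        else:
            # illustrative "map" task: total non-space characters
            done[chunk] = sum(map(len, chunk.split()))
    return done, attempts

done, attempts = run(["a bb ccc", "dddd e"])
print(done, attempts)
```

With unrecoverable interruptions, attempts can exceed the number of chunks; the replication and chunk-sizing questions of the Graal sections are about bounding that waste in expectation.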

C: Deployment/Resource Management. While programming models become more and more abstract to support a wide range of resources and use cases, resource management systems (RMS) have not evolved accordingly. They are still mainly based on the concept of a bag of tasks to execute, each task being a sequential job or a fixed range of processors (for static MPI applications). Moreover, RMS select resources without any cooperation. Thus, the gap between programming models and RMS has become too large to achieve an efficient use of resources. For example, resource selection is done at least at two levels: the runtimes of programming models try to build a representation of the resources (which cannot be exact without the support of the RMS, and which duplicates the work done by the RMS) in order to transform their high-level abstractions into system abstractions (like processes and data files). This is done depending on the type and number of available resources, without any guarantee that the RMS will provide such resources (they may have been allocated to another user in the meantime). Hence, a research direction is to work on reducing this gap. The problem is to understand and define the abstractions offered by RMS so that they can achieve their goals (security, fairness, etc.) while enabling advanced programming models to participate in resource selection. This step should leverage research on the mapping and scheduling of an application (with its data) onto resources.

D: Services Management for High Performance Computing. Service Oriented Computing (SOC) is a computing paradigm that uses services as the basic building block to support the rapid development of large-scale applications over Grids and Clouds. Services can be described, published, discovered, and dynamically composed to develop such applications. Many research directions have to be explored to obtain performance on different architectures.
Different performance measures can be used, such as completion time, platform throughput, and energy consumption. [Service deployment] Depending on the characteristics of the target platform and the number of requests for given services, replication and deployment decisions can be taken. More than just a middleware problem, this requires careful modeling of the platform, the middleware, and the service itself. Data management issues can also be involved. [Service composition and orchestration] Applications built upon services can be described as workflows. These workflows can use sequential and parallel services, specific data repositories and replicas, and particular scheduling strategies. [Service discovery] We develop a P2P platform to deal with hybrid environments based on Petascale architectures and Grids. This platform provides a service discovery system based on the DLPT (Distributed Lexicographic Placement Table). We will design new algorithms to take advantage of this hybrid environment, as well as algorithms to deal with multiple batch schedulers with task migration. We plan to implement a dynamic reservation system based on the current load of the environment and the number of requests.

Publications

International and national peer-reviewed journals [ACL]

[263] Emmanuel Agullo, Abdou Guermouche, and Jean-Yves L'Excellent. A parallel out-of-core multifrontal method: Storage of factors on disk and analysis of models for an out-of-core active memory. Parallel Computing, Special Issue on Parallel Matrix Algorithms, 34(6-8): ,
[264] Abdelkader Amar, Raphaël Bolze, Yves Caniou, Eddy Caron, Benjamin Depardon, Frédéric Desprez, Jean-Sébastien Gay, Gaël Le Mahec, and David Loureiro. Tunable scheduling in a GridRPC framework. Concurrency and Computation: Practice and Experience, 20(9): ,
[265] Patrick R. Amestoy, Abdou Guermouche, Jean-Yves L'Excellent, and Stephane Pralet. Hybrid scheduling for the parallel solution of linear systems.
Parallel Computing, 32(2): ,
[266] Françoise Baude, Denis Caromel, Cédric Dalmasso, Marco Danelutto, Vladimir Getov, Ludovic Henrio, and Christian Pérez. GCM: A grid extension to Fractal for autonomous distributed components. Special Issue of Annals of Telecommunications: Software Components, The Fractal Initiative, 64(1):5-24,
[267] Olivier Beaumont, Larry Carter, Jeanne Ferrante, Arnaud Legrand, Loris Marchal, and Yves Robert. Centralized versus distributed schedulers for multiple bag-of-task applications. IEEE Trans. Parallel Distributed Systems, 19(5): ,
[268] Olivier Beaumont, Loris Marchal, and Yves Robert. Complexity results for collective communications on heterogeneous platforms. Int. Journal of High Performance Computing Applications, 20(1):5-17, 2006.

[269] Anne Benoit, Mourad Hakem, and Yves Robert. Contention awareness and fault tolerant scheduling for precedence constrained tasks in heterogeneous systems. Parallel Computing, To appear.
[270] Anne Benoit, Mourad Hakem, and Yves Robert. Multi-criteria scheduling of precedence task graphs on heterogeneous platforms. The Computer Journal, To appear.
[271] Anne Benoit, Harald Kosch, Veronika Rehn-Sonigo, and Yves Robert. Multi-criteria scheduling of pipeline workflows (and application to the JPEG encoder). Int. Journal of High Performance Computing Applications, 23(2): ,
[272] Anne Benoit, Loris Marchal, Jean-François Pineau, Yves Robert, and Frédéric Vivien. Scheduling concurrent bag-of-tasks applications on heterogeneous platforms. IEEE Transactions on Computers, To appear.
[273] Anne Benoit, Brigitte Plateau, and William J. Stewart. Memory efficient Kronecker algorithms with applications to the modelling of parallel systems. Future Generation Computer Systems, special issue on System Performance Analysis and Evaluation, 22(7): , August
[274] Anne Benoit, Veronika Rehn-Sonigo, and Yves Robert. Replica placement and access policies in tree networks. IEEE Trans. Parallel Distributed Systems, 19(12): ,
[275] Anne Benoit and Yves Robert. Mapping pipeline skeletons onto heterogeneous platforms. J. Parallel and Distributed Computing, 68(6): ,
[276] Anne Benoit and Yves Robert. Complexity results for throughput and latency optimization of replicated and data-parallel workflows. Algorithmica, To appear.
[277] Anne Benoit, Eric Thierry, and Yves Robert. On the complexity of mapping linear chain applications onto heterogeneous platforms. Parallel Processing Letters, 19(3): ,
[278] V. Bertis, R. Bolze, F. Desprez, and K. Reed. From dedicated Grid to volunteer Grid: Large scale execution of a bioinformatics application. Journal of Grid Computing, To appear.
[279] Raphaël Bolze, Franck Cappello, Eddy Caron, Michel Daydé, Frédéric Desprez, Emmanuel Jeannot, Yvon Jégou, Stéphane Lanteri, Julien Leduc, Noredine Melab, Guillaume Mornet, Raymond Namyst, Pascale Primet, Benjamin Quetier, Olivier Richard, El-Ghazali Talbi, and Irena Touché. Grid 5000: a large scale and highly reconfigurable experimental grid testbed. International Journal of High Performance Computing Applications, 20(4): , November
[280] Yves Caniou and Emmanuel Jeannot. Multi-criteria scheduling heuristics for GridRPC systems. Special edition of The International Journal of High Performance Computing Applications (IJHPCA), 20(1):61-76,
[281] Eddy Caron, Andréea Chis, Frédéric Desprez, and Alan Su. Design of plug-in schedulers for a GridRPC environment. Future Generation Computer Systems, 24(1):46-57, January
[282] Eddy Caron and Frédéric Desprez. DIET: A scalable toolbox to build network enabled servers on the grid. International Journal of High Performance Computing Applications, 20(3): ,
[283] Eddy Caron, Frédéric Desprez, and Cédric Tedeschi. Enhancing computational grids with peer-to-peer technology for large scale service discovery. Journal of Grid Computing, 5(3): , September
[284] Eddy Caron, Frédéric Desprez, Cédric Tedeschi, and Franck Petit. Snap-stabilizing prefix tree for peer-to-peer systems. Parallel Processing Letters, To appear.
[285] Eddy Caron, Vincent Garonne, and Andreï Tsaregorodtsev. Definition, modelling and simulation of a grid computing scheduling system for high throughput computing. Future Generation Computer Systems, 23(8): , November . ISSN: X.
[286] Pushpinder Kaur Chouhan, Holly Dail, Eddy Caron, and Frédéric Vivien. Automatic middleware deployment planning on clusters. International Journal of High Performance Computing Applications, 20(4): , November
[287] Holly Dail and Frédéric Desprez. Experiences with hierarchical request flow management for network-enabled server environments.
International Journal of High Performance Computing Applications, 20(1): , February 2006.

[288] Frédéric Desprez and Antoine Vernois. Simultaneous scheduling of replication and computation for data-intensive applications on the grid. Journal of Grid Computing, 4(1):19-31, March
[289] Jack Dongarra, Jean-François Pineau, Yves Robert, Zhiao Shi, and Frédéric Vivien. Revisiting matrix product on master-worker platforms. International Journal of Foundations of Computer Science, 19(6): ,
[290] Gilles Fedak, Haiwu He, and Franck Cappello. A data management and distribution service with multi-protocol and reliable file transfer. Journal of Network and Computer Applications, To appear.
[291] Matthieu Gallet, Yves Robert, and Frédéric Vivien. Comments on "Design and performance evaluation of load distribution strategies for multiple loads on heterogeneous linear daisy chain networks". J. Parallel and Distributed Computing, 68(7): ,
[292] N. Garnier, A. Friedrich, R. Bolze, E. Bettler, L. Moulinier, C. Geourjon, J. D. Thompson, G. Deleage, and O. Poch. MAGOS: multiple alignment and modelling server. Bioinformatics, 22(17): ,
[293] Jean-Sébastien Gay and Yves Caniou. Étude de la précision de Simbatch, une API pour la simulation de systèmes batch. RSTI - Techniques et Sciences Informatiques, 27(3-4): ,
[294] Arnaud Giersch, Yves Robert, and Frédéric Vivien. Scheduling tasks sharing files on heterogeneous master-slave platforms. Journal of Systems Architecture, 52(2):88-104,
[295] Abdou Guermouche and Jean-Yves L'Excellent. Constructing memory-minimizing schedules for multifrontal methods. ACM Transactions on Mathematical Software, 32(1):17-32,
[296] Kamer Kaya and Bora Uçar. Exact algorithms for a task assignment problem. Parallel Processing Letters, 19(3): ,
[297] Arnaud Legrand, Alan Su, and Frédéric Vivien. Minimizing the stretch when scheduling flows of divisible requests. Journal of Scheduling, 11(5): ,
[298] Loris Marchal, Veronika Rehn, Yves Robert, and Frédéric Vivien.
Scheduling algorithms for data redistribution and load-balancing on master-slave platforms. Parallel Processing Letters, 17(1):61 77, [299] Loris Marchal, Yang Yang, Henri Casanova, and Yves Robert. Steady-state scheduling of multiple divisible load applications on wide-area distributed computing platforms. Int. Journal of High Performance Computing Applications, 20(3): , [300] Stéphane Operto, Jean Virieux, Patrick Amestoy, Jean-Yves L Excellent, Luc Giraud, and Hafedh Ben Hadj Ali. 3d finite-difference frequency-domain modeling of visco-acoustic wave propagation using a massively parallel direct solver: A feasibility study. Geophysics, 72(5):SM195 SM211, [301] Jean-François Pineau, Yves Robert, and Frédéric Vivien. The impact of heterogeneity on master-slave scheduling. Parallel Computing, 34(3): , [302] Hélène Renard, Yves Robert, and Frédéric Vivien. Data redistribution algorithms for heterogeneous processor rings. Int. Journal of High Performance Computing Applications, 20(1):31 43, [303] Clément Rezvoy, Delphine Charif, Laurent Guéguen, and Gabriel A.B. Marais. MareyMap: an R-based tool with graphical interface for estimating recombination rates. Bioinformatics, 23(16): , [304] Florent Sourbier, Stéphane Operto, Jean Virieux, Patrick R. Amestoy, and Jean-Yves L Excellent. FWT2D: a massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data part 1: Algorithm. Computer and Geosciences, 35(3): , [305] Florent Sourbier, Stéphane Operto, Jean Virieux, Patrick R. Amestoy, and Jean-Yves L Excellent. FWT2D: a massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data part 2: numerical examples and scalability analysis. Computer and Geosciences, 35(3): , [306] William Thies, Frédéric Vivien, and Saman Amarasinghe. A step towards unifying schedule and storage optimization. ACM Transactions on Programming Languages and Systems (TOPLAS), 29(6):45 p., 2007.

5.14 Algorithms and Scheduling for Distributed Heterogeneous Platforms

Invited conferences, seminars, and tutorials [INV]

[307] Anne Benoit. Mapping skeleton workflows onto heterogeneous platforms. Invited presentation at the Workshop on Parallelism Oblivious Programming, July. pop07/programme.html.
[308] Anne Benoit. Mapping skeleton workflows onto heterogeneous platforms. Invited conference at the 9th High Performance Computing Conference, June.
[309] Anne Benoit. Multi-criteria scheduling of workflow applications. Invited presentation at the Workshop on Clusters and Computational Grids for Scientific Computing, September.
[310] Anne Benoit. Scheduling pipelined applications: models, algorithms and complexity. Invited presentation at the Workshop on Algorithms and Techniques for Scheduling on Clusters and Grids, June 2–5. http://www-id.imag.fr/astec09/index.php.
[311] Yves Caniou. Tunable scheduling and middleware deployment for a scalable GridRPC middleware. Invited presentation at the first workshop Grid@Mons, May.
[312] Eddy Caron. DIET: un intergiciel de grille pour l'ordonnancement d'applications. Invited conference at JDIR 08, 9èmes Journées Doctorales en Informatique et Réseaux, January 16. fr/jdir2008/.
[313] Iain S. Duff and Bora Uçar. Combinatorial problems in solving linear systems. Invited presentation at the Dagstuhl Seminar on Combinatorial Scientific Computing, delivered by Iain S. Duff, February 1–6.
[314] Loris Marchal. Offline and online scheduling of concurrent bags-of-tasks on heterogeneous platforms. Invited presentation at the New Challenges on Scheduling Theory workshop, May 12–16. imag.fr/ncst08/index.php.
[315] Loris Marchal. Efficient scheduling of task graph collections on heterogeneous resources. Invited presentation at the Workshop on Algorithms and Techniques for Scheduling on Clusters and Grids, June 2–5. http://www-id.imag.fr/astec09/index.php.
[316] Yves Robert. Static scheduling for large-scale platforms: can one hope for efficiency? Invited conference at IPDPS 06, the IEEE Int. Parallel and Distributed Processing Symposium. org/ipdps2006.
[317] Yves Robert. Algorithms and scheduling techniques for clusters and grids. Invited conference at the 9th High Performance Computing Conference.
[318] Yves Robert. Algorithms and scheduling techniques for heterogeneous platforms. Invited conference at the Workshop on Parallelism Oblivious Programming. pop07/programme.html.
[319] Yves Robert. Algorithms for heterogeneous platforms. Invited conference at HeteroPar 2007.
[320] Yves Robert. Scheduling bags of tasks. Invited conference at Arny's fest, Amherst. umass.edu/~immerman/theoryseminar/arnyfest.pdf.
[321] Yves Robert. Algorithm design and scheduling techniques for clusters and grids. Invited conference at the Workshop on Parallel Routines Optimization and Applications. programa.html.
[322] Yves Robert. Static strategies for worksharing with unrecoverable interruptions. Invited presentation at the Workshop on Clusters and Computational Grids for Scientific Computing, September.
[323] Frédéric Vivien. Minimizing the stretch when scheduling flows of divisible requests. Invited conference at the Scheduling Algorithms for new Emerging Applications workshop, May 30. SAEA06/index.php.

138 124 Part 5. Graal International and national peer-reviewed conference proceedings [ACT] [324] Kunal Agrawal, Anne Benoit, Fanny Dufossé, and Yves Robert. Mapping filtering streaming applications with communication costs. In 21st ACM Symposium on Parallelism in Algorithms and Architectures SPAA ACM Press, [325] Kunal Agrawal, Anne Benoit, and Yves Robert. Mapping linear workflows with computation/communication overlap. In ICPADS 2008, the 14th IEEE International Conference on Parallel and Distributed Systems, pages IEEE Computer Society Press, [326] Emmanuel Agullo, Abdou Guermouche, and Jean-Yves L Excellent. A preliminary out-of-core extension of a parallel multifrontal solver. In EuroPar 06 Parallel Processing, volume 4128 of Lecture Notes in Computer Science, pages , [327] Emmanuel Agullo, Abdou Guermouche, and Jean-Yves L Excellent. Étude mémoire d une méthode multifrontale parallèle hors-mémoire. In 17e Rencontres Francophones en Parallélisme (RenPar 17), pages , Perpignan, France, [328] Emmanuel Agullo, Abdou Guermouche, and Jean-Yves L Excellent. On reducing the I/O volume in a sparse out-of-core solver. In HiPC 07 14th International Conference On High Performance Computing, number 4873 in Lecture Notes in Computer Science, pages 47 58, Goa, India, December 17-20, [329] Emmanuel Agullo, Abdou Guermouche, and Jean-Yves L Excellent. On the I/O volume in out-of-core multifrontal methods with a flexible allocation scheme. In VECPAR 08 International Meeting on High Performance Computing for Computational Science, volume 5336 of LNCS, pages Springer-Verlag Berlin Heidelberg, June [330] Marco Aldinucci, Hinde Bouziane, Marco Danelutto, and Christian Pérez. Towards software component assembly language enhanced with workflows and skeletons. In Joint Workshop on Component-Based High Performance Computing and Component-Based Software Engineering and Software Architecture (CBHPC/COMPARCH 2008). 
ACM, October ISBN: [331] Marco Aldinucci, Hinde Lilia Bouziane, Marco Danelutto, and Christian Pérez. STKM on SCA: a unified framework with components, workflows and algorthmic skeletons. In 15th International European Conference on Parallel and Distributed Computing (Euro-Par 2009), volume 5704 of LNCS, pages , Delft, Netherlands, August Springer. [332] Abelkader Amar, Raphaël Bolze, Aurélien Bouteiller, Pushpinder Kaur Chouhan, Andréea Chis, Yves Caniou, Eddy Caron, Holly Dail, Benjamin Depardon, Frédéric Desprez, Jean-Sébastien Gay, Gaël Le Mahec, and Alan Su. Diet: New developments and recent results. In Lehner et al. (Eds.), editor, CoreGRID Workshop on Grid Middleware (in conjunction with EuroPar2006), number 4375 in LNCS, pages , Dresden, Germany, August Springer. [333] Françoise André, Guillaume Gauvrit, and Christian Pérez. Dynamic adaptation of the master-worker paradigm. In Proc. of the IEEE 9th International Conference on Computer and Information Technology, Xiamen, China, oct IEEE Computer Society. to appear. [334] Gabriel Antoniu, Eddy Caron, Frédéric Desprez, Aurélia Fèvre, and Mathieu Jan. Towards a transparent data access model for the agridrpc paradigm. In S. Aluru et al. (Eds), editor, HiPC th International Conference on High Performance Computing., number 4873 in LNCS, pages , Goa. India, December Springer Verlag Berlin Heidelberg. [335] Hrachya Astsatryan, Michel Daydé, Aurélie Hurault, Marc Pantel, and Eddy Caron. On defining a web interface for linear algebra tasks over computational grids. In International Conference on Computer Science and Information Technologies (CSIT 07), Yerevan (Arménie), September [336] Hrachya Astsatryan, Vladimir Sahakyan, Yuri Shoukouryan, Michel Daydé, Aurélie Hurault, Marc Pantel, and Eddy Caron. A grid-aware web interface with advanced service trading for linear algebra calculations. 
In 8th International Meeting High Performance Computing for Computational Science (VECPAR 08), pages , Toulouse, June [337] Olivier Beaumont, Larry Carter, Jeanne Ferrante, Arnaud Legrand, Loris Marchal, and Yves Robert. Centralized versus distributed schedulers for multiple bag-of-task applications. In International Parallel and Distributed Processing Symposium IPDPS IEEE Computer Society Press, 2006.

[338] Olivier Beaumont, Anne-Marie Kermarrec, Loris Marchal, and Étienne Rivière. VoroNet, un réseau objet-à-objet sur le modèle petit-monde. In CFSE 5: Conférence Française sur les Systèmes d'Exploitation.
[339] Olivier Beaumont, Anne-Marie Kermarrec, Loris Marchal, and Étienne Rivière. VoroNet: A scalable object network based on Voronoi tessellations. In Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS 2007). IEEE Computer Society Press.
[340] Olivier Beaumont, Loris Marchal, Veronika Rehn, and Yves Robert. FIFO scheduling of divisible loads with return messages under the one-port model. In HCW 2006, the 15th Heterogeneous Computing Workshop. IEEE Computer Society Press.
[341] Leila Ben Saad and Bernard Tourancheau. Multiple mobile sinks positioning in wireless sensor networks for buildings. In SensorComm. IEEE-IARIA.
[342] Anne Benoit. Comparison of Access Policies for Replica Placement in Tree Networks. In 15th International Euro-Par Conference, LNCS, Delft, The Netherlands, August. Springer Verlag.
[343] Anne Benoit, Henri Casanova, Veronika Rehn-Sonigo, and Yves Robert. Resource allocation for multiple concurrent in-network stream-processing applications. In HeteroPar 2009: Seventh Int. Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms, jointly held with Euro-Par 2009, LNCS. Springer Verlag. To appear. Received the Best Paper Award.
[344] Anne Benoit, Henri Casanova, Veronika Rehn-Sonigo, and Yves Robert. Resource allocation strategies for in-network stream processing. In 11th Workshop on Advances in Parallel and Distributed Computational Models (APDCM). IEEE Computer Society Press.
[345] Anne Benoit, Alexandru Dobrila, Jean-Marc Nicod, and Laurent Philippe. Throughput optimization for micro-factories subject to failures. In ISPDC 2009, 8th International Symposium on Parallel and Distributed Computing, pages 11–18, Lisbon, Portugal, July.
[346] Anne Benoit, Fanny Dufossé, and Yves Robert. Filter placement on a pipelined architecture. In 11th Workshop on Advances in Parallel and Distributed Computational Models (APDCM). IEEE Computer Society Press.
[347] Anne Benoit, Fanny Dufossé, and Yves Robert. On the complexity of mapping pipelined filtering services on heterogeneous platforms. In IPDPS 2009, the 23rd IEEE International Parallel and Distributed Processing Symposium. IEEE Computer Society Press.
[348] Anne Benoit, Bruno Gaujal, Matthieu Gallet, and Yves Robert. Computing the throughput of replicated workflows on heterogeneous platforms. In ICPP 2009, the 38th International Conference on Parallel Processing. IEEE Computer Society Press. To appear.
[349] Anne Benoit, Mourad Hakem, and Yves Robert. Fault tolerant scheduling of precedence task graphs on heterogeneous platforms. In 10th Workshop on Advances in Parallel and Distributed Computational Models (APDCM). IEEE Computer Society Press.
[350] Anne Benoit, Mourad Hakem, and Yves Robert. Realistic models and efficient algorithms for fault tolerant scheduling on heterogeneous platforms. In ICPP 2008, the 37th International Conference on Parallel Processing. IEEE Computer Society Press.
[351] Anne Benoit, Mourad Hakem, and Yves Robert. Optimizing the latency of streaming applications under throughput and reliability constraints. In ICPP 2009, the 38th International Conference on Parallel Processing. IEEE Computer Society Press. To appear.
[352] Anne Benoit, Harald Kosch, Veronika Rehn-Sonigo, and Yves Robert. Bi-criteria pipeline mappings for parallel image processing. In ICCS 2008, the 8th International Conference on Computational Science, volume 5101 of LNCS. Springer Verlag.
[353] Anne Benoit, Loris Marchal, Jean-François Pineau, Yves Robert, and Frédéric Vivien. Offline and online scheduling of concurrent bags-of-tasks on heterogeneous platforms. In 10th Workshop on Advances in Parallel and Distributed Computational Models (APDCM). IEEE Computer Society Press.
[354] Anne Benoit, Loris Marchal, Jean-François Pineau, Yves Robert, and Frédéric Vivien. Resource-aware allocation strategies for divisible loads on large-scale systems. In 18th International Heterogeneity in Computing Workshop (HCW). IEEE Computer Society Press, 2009.

[355] Anne Benoit, Veronika Rehn, and Yves Robert. Strategies for replica placement in tree networks. In HCW 2007, the 16th Heterogeneous Computing Workshop. IEEE Computer Society Press.
[356] Anne Benoit, Veronika Rehn-Sonigo, and Yves Robert. Impact of QoS on replica placement in tree networks. In ICCS 2007, the 7th International Conference on Computational Science, LNCS 4487. Springer Verlag.
[357] Anne Benoit, Veronika Rehn-Sonigo, and Yves Robert. Multi-criteria scheduling of pipeline workflows. In HeteroPar 2007: International Conference on Heterogeneous Computing, jointly published with Cluster 2007. IEEE Computer Society Press.
[358] Anne Benoit, Veronika Rehn-Sonigo, and Yves Robert. Optimizing latency and reliability of pipeline workflow applications. In HCW 2008, the 17th Heterogeneous Computing Workshop. IEEE Computer Society Press.
[359] Anne Benoit and Yves Robert. Complexity results for throughput and latency optimization of replicated and data-parallel workflows. In HeteroPar 2007: International Conference on Heterogeneous Computing, jointly published with Cluster 2007. IEEE Computer Society Press.
[360] Anne Benoit and Yves Robert. Mapping pipeline skeletons onto heterogeneous platforms. In ICCS 2007, the 7th International Conference on Computational Science, LNCS 4487. Springer Verlag.
[361] Anne Benoit and Yves Robert. Multi-criteria mapping techniques for pipeline workflows on heterogeneous platforms. In Recent Developments in Grid Technology and Applications. Nova Science Publishers. To appear.
[362] Anne Benoit, Yves Robert, Arnold Rosenberg, and Frédéric Vivien. Static strategies for worksharing with unrecoverable interruptions. In IPDPS 2009, the 23rd IEEE International Parallel and Distributed Processing Symposium. IEEE Computer Society Press.
[363] Anne Benoit, Yves Robert, Arnold Rosenberg, and Frédéric Vivien. Static worksharing strategies for heterogeneous computers with unrecoverable failures. In HeteroPar 2009: Seventh Int. Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms, jointly held with Euro-Par 2009, LNCS. Springer Verlag. To appear.
[364] Viktor Bertis, Raphaël Bolze, Frédéric Desprez, and Kevin Reed. Large scale execution of a bioinformatic application on a volunteer grid. In Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC08), April.
[365] Julien Bigot, Hinde Bouziane, Christian Pérez, and Thierry Priol. On abstractions of software component models for scientific applications. In Proc. of the Abstractions for Distributed Systems workshop, Las Palmas, Gran Canaria, Spain.
[366] Julien Bigot and Christian Pérez. Enabling collective communications between components. In CompFrame 07: Proceedings of the 2007 symposium on Component and framework technology in high-performance and scientific computing, New York, NY, USA, October. ACM Press.
[367] Julien Bigot and Christian Pérez. Increasing reuse in component models through genericity. In Proc. of the 11th International Conference on Software Reuse, LNCS, Falls Church, Virginia, USA, October. Springer Verlag. To appear.
[368] Raphaël Bolze, Eddy Caron, Frédéric Desprez, Georg Hoesch, and Cyril Pontvieux. A monitoring and visualization tool and its application for a network enabled server platform. In M. Gavrilova, editor, Computational Science and Its Applications - ICCSA 2006, volume 3984 of LNCS, Glasgow, UK, May. Springer.
[369] Yves Caniou, Eddy Caron, Ghislain Charrier, Andréea Chis, Frédéric Desprez, and Eric Maisonnave. Ocean-atmosphere modelization over the grid. In Wu-chi Feng and Yuanyuan Yang, editors, The 37th International Conference on Parallel Processing (ICPP 2008), Portland, Oregon, USA, September. IEEE.
[370] Yves Caniou, Eddy Caron, Ghislain Charrier, and Frédéric Desprez. Meta-scheduling and task reallocation in a grid environment. In The Third IEEE International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009), 6 p., Sliema, Malta, October.
[371] Yves Caniou, Eddy Caron, Ghislain Charrier, Frédéric Desprez, Eric Maisonnave, and Vincent Pichon. Ocean-atmosphere application scheduling within DIET. In APDCT-08 Symposium, International Symposium on Advanced in Parallel and Distributed Computing Techniques, Sydney, Australia, December. In conjunction with ISPA 2008, IEEE Computer Society. Invited paper from the reviewed process of ISPA 08.

[372] Yves Caniou, Eddy Caron, Hélène Courtois, Benjamin Depardon, and Romain Teyssier. Cosmological simulations using grid middleware. In Fourth High-Performance Grid Computing Workshop (HPGC 07), Long Beach, California, USA, March. IEEE.
[373] Yves Caniou and Jean-Sébastien Gay. Simbatch: an API for simulating and predicting the performance of parallel resources managed by batch systems. In Workshop on Secure, Trusted, Manageable and Controllable Grid Services (SGS), held in conjunction with EuroPar 08, volume 5415 of LNCS.
[374] Yves Caniou, Jean-Sébastien Gay, and Pierre Ramet. Tunable parallel experiments in a GridRPC framework: application to linear solvers. In VECPAR 08 International Meeting on High Performance Computing for Computational Science, volume 5336 of LNCS.
[375] Yves Caniou, Noriyuki Kushida, and Naoya Teshima. Implementing interoperability between the AEGIS and DIET GridRPC middleware to build an international sparse linear algebra expert system. In The Second IEEE International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2008).
[376] E. Caron, F. Desprez, D. Loureiro, and A. Muresan. Cloud computing resource management through a grid middleware: A case study with DIET and Eucalyptus. In IEEE International Conference on Cloud Computing (CLOUD 2009), Bangalore, India, September. To appear in the Work-in-Progress Track from the CLOUD-II 2009 Research Track.
[377] Eddy Caron, Andréea Chis, Frédéric Desprez, and Alan Su. Plug-in scheduler design for a distributed grid environment. In 4th International Workshop on Middleware for Grid Computing (MGC 2006), Melbourne, Australia, November. In conjunction with the ACM/IFIP/USENIX 7th International Middleware Conference.
[378] Eddy Caron, Pushpinder Kaur Chouhan, and Holly Dail. GoDIET: A deployment tool for distributed middleware on grid. In EXPGRID workshop, Experimental Grid Testbeds for the Assessment of Large-Scale Distributed Applications and Tools, in conjunction with HPDC-15, pages 1–8, Paris, France, June.
[379] Eddy Caron, Pushpinder Kaur Chouhan, and Frédéric Desprez. Automatic middleware deployment planning on heterogeneous platforms. In The 17th Heterogeneous Computing Workshop (HCW 08), Miami, Florida, April. In conjunction with IPDPS.
[380] Eddy Caron, Ajoy Datta, Benjamin Depardon, and Lawrence Larmore. A self-stabilizing k-clustering algorithm using an arbitrary metric. In Euro-Par 2009, volume 5704 of LNCS, Delft, The Netherlands, August. TU Delft.
[381] Eddy Caron, Ajoy Datta, Franck Petit, and Cédric Tedeschi. Self-stabilization in tree-structured P2P service discovery systems. In 27th International Symposium on Reliable Distributed Systems (SRDS 2008), Napoli, Italy, October. IEEE.
[382] Eddy Caron, Frédéric Desprez, Charles Fourdrignier, Franck Petit, and Cédric Tedeschi. A repair mechanism for fault-tolerance for tree-structured peer-to-peer systems. In Yves Robert, Manish Parashar, Ramamurthy Badrinath, and Viktor K. Prasanna, editors, HiPC, International Conference on High Performance Computing, volume 4297 of LNCS, Bangalore, India, December. Springer-Verlag Berlin Heidelberg.
[383] Eddy Caron, Frédéric Desprez, and David Loureiro. All-in-one graphical tool for the management of DIET, a GridRPC middleware. In Norbert Meyer, Domenico Talia, and Ramin Yahyapour, editors, Grid and Services Evolution, Barcelona, Spain, June. CoreGRID Workshop on Grid Middleware (in conjunction with OGF 23), Springer.
[384] Eddy Caron, Frédéric Desprez, David Loureiro, and Adrian Muresan. Cloud computing resource management through a grid middleware: A case study with DIET and Eucalyptus. In IEEE International Conference on Cloud Computing (CLOUD 2009), Bangalore, India, September. IEEE. To appear in the Work-in-Progress Track from the CLOUD-II 2009 Research Track.
[385] Eddy Caron, Frédéric Desprez, Franck Petit, and Cédric Tedeschi. Snap-stabilizing Prefix Tree for Peer-to-peer Systems. In 9th International Symposium on Stabilization, Safety, and Security of Distributed Systems, volume 4838 of Lecture Notes in Computer Science, pages 82–96, Paris, France, November. Springer Verlag Berlin Heidelberg.

[386] Eddy Caron, Frédéric Desprez, and Cédric Tedeschi. A dynamic prefix tree for the service discovery within large scale grids. In A. Montresor, A. Wierzbicki, and N. Shahmehri, editors, The Sixth IEEE International Conference on Peer-to-Peer Computing (P2P 2006), Cambridge, UK, September. IEEE.
[387] Eddy Caron, Frédéric Desprez, and Cédric Tedeschi. Efficiency of tree-structured peer-to-peer service discovery systems. In Fifth International Workshop on Hot Topics in Peer-to-Peer Systems (Hot-P2P), Miami, Florida, April. In conjunction with IPDPS.
[388] Eddy Caron, Charles Fourdrignier, Franck Petit, and Cédric Tedeschi. Mécanisme de réparations pour un système P2P de découverte de services. In Perpi, Conférences conjointes RenPar 17 / SympA 2006 / CFSE 5 / JC 2006, Canet en Roussillon, October.
[389] Eddy Caron, Cristian Klein, and Christian Pérez. Efficient grid resource selection for a CEM application. In RenPar 19, 19e Rencontres Francophones du Parallélisme, Toulouse, France, September.
[390] Fabienne Carrier, Stéphane Devismes, Franck Petit, and Yvan Rivierre. Space-optimal deterministic rendezvous. In Second International Workshop on Reliability, Availability, and Security (WRAS 2009), Hiroshima, Japan. IEEE Computer Society. To appear.
[391] Ghislain Charrier and Yves Caniou. Ordonnancement et réallocation de tâches sur une grille de calcul. In 19e Rencontres Francophones du Parallélisme, des Architectures et des Systèmes, Toulouse, September.
[392] Philippe Combes, Eddy Caron, Frédéric Desprez, Bastien Chopard, and Julien Zory. Relaxing synchronization in a parallel SystemC kernel. In ISPA 2008, International Symposium on Parallel and Distributed Processing with Applications, Sydney, Australia, December. IEEE Computer Society.
[393] Carole Delporte-Gallet, Stéphane Devismes, Hugues Fauconnier, Franck Petit, and Sam Toueg. With finite memory consensus is easier than reliable broadcast. In 12th International Conference on Principles of Distributed Systems (OPODIS 2008), volume 5401 of Lecture Notes in Computer Science, pages 41–57, Luxor, Egypt. Springer.
[394] Benjamin Depardon. Déploiement générique d'applications sur plates-formes hétérogènes distribuées. In RenPar 18, Fribourg, Switzerland, February.
[395] Benjamin Depardon. Un algorithme auto-stabilisant pour le problème du k-partitionnement sur graphe pondéré. In RenPar 19, 19e Rencontres Francophones du Parallélisme, Toulouse. To appear.
[396] Frédéric Desprez, Eddy Caron, and Gaël Le Mahec. DAGDA: Data arrangement for the grid and distributed applications. In AHEMA, International Workshop on Advances in High-Performance E-Science Middleware and Applications, in conjunction with eScience 2008, Indianapolis, Indiana, USA, December.
[397] Frédéric Desprez and Antoine Vernois. Semi-Static Algorithms for Data Replication and Scheduling Over the Grid. In IEEE 3rd International Conference on Intelligent Computer Communication and Processing, workshop on Grid Computing, Cluj-Napoca, Romania, September.
[398] Stéphane Devismes, Carole Delporte-Gallet, Hugues Fauconnier, Franck Petit, and Sam Toueg. Quand le consensus est plus simple que la diffusion fiable. In 11e rencontres francophones sur les aspects algorithmiques des télécommunications (AlgoTel 2009), Carry-Le-Rouet, France. To appear.
[399] Stéphane Devismes, Franck Petit, and Sébastien Tixeuil. Exploration optimale probabiliste d'un anneau par des robots asynchrones et amnésiques. In 11e rencontres francophones sur les aspects algorithmiques des télécommunications (AlgoTel 2009), Carry-Le-Rouet, France. To appear.
[400] Stéphane Devismes, Franck Petit, and Sébastien Tixeuil. Optimal probabilistic ring exploration by asynchronous oblivious robots. In 16th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2009), volume 5804 of Lecture Notes in Computer Science, Piran, Slovenia. Springer.
[401] Sékou Diakité, Loris Marchal, Jean-Marc Nicod, and Laurent Philippe. Steady-state for batches of identical task graphs. In Euro-Par 2009, volume 5704 of LNCS, Delft University of Technology, Delft, The Netherlands, August.
[402] Yoann Dieudonné, Shlomi Dolev, Franck Petit, and Michael Segal. Brief announcement: deaf, dumb, and chatting robots. In 28th Annual ACM Symposium on Principles of Distributed Computing (PODC 2009), Calgary, Canada. ACM.

[403] Yoann Dieudonné and Franck Petit. Squaring the circle with weak mobile robots. In 11e rencontres francophones sur les aspects algorithmiques des télécommunications (AlgoTel 2009), Carry-Le-Rouet, France. To appear.
[404] Jack DiGiovanna, Loris Marchal, Prapaporn Rattanatamrong, Ming Zhao, Shalom Darmanjian, Babak Mahmoudi, Justin Sanchez, José Príncipe, Linda Hermer-Vazquez, Renato Figueiredo, and José Fortes. Towards real-time distributed signal modeling for brain machine interfaces. In Proceedings of Dynamic Data Driven Application Systems (workshop of ICCS), volume 4487 of LNCS. Springer Verlag.
[405] Jack Dongarra, Jean-François Pineau, Yves Robert, and Frédéric Vivien. Matrix product on heterogeneous master-worker platforms. In 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 53–62, Salt Lake City, Utah, February.
[406] Lionel Eyraud-Dubois, Arnaud Legrand, Martin Quinson, and Frédéric Vivien. A first step towards automatically building network representations. In Proceedings of Euro-Par 2007, volume 4641 of LNCS.
[407] Matthieu Gallet, Loris Marchal, and Frédéric Vivien. Allocating series of workflows on computing grids. In Proceedings of the 14th IEEE International Conference on Parallel and Distributed Systems (ICPADS 08). IEEE Computer Society Press.
[408] Matthieu Gallet, Loris Marchal, and Frédéric Vivien. Efficient scheduling of task graph collections on heterogeneous resources. In International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society Press.
[409] Matthieu Gallet, Yves Robert, and Frédéric Vivien. Scheduling communication requests traversing a switch: complexity and algorithms. In PDP 2007, 15th Euromicro Workshop on Parallel, Distributed and Network-based Processing. IEEE Computer Society Press.
[410] Matthieu Gallet, Yves Robert, and Frédéric Vivien. Scheduling multiple divisible loads on a linear processor network. In ICPADS 2007, the 13th International Conference on Parallel and Distributed Systems.
[411] Jean-Sébastien Gay and Yves Caniou. Simbatch : une API pour la simulation et la prédiction de performances de systèmes batch. In 17e Rencontres Francophones du Parallélisme, des Architectures et des Systèmes, Perpignan, October.
[412] Yi Gu, Qishi Wu, Anne Benoit, and Yves Robert. Optimizing end-to-end performance of distributed applications with linear computing pipelines. In ICPADS 2009, the 15th International Conference on Parallel and Distributed Systems. IEEE Computer Society Press.
[413] Haiwu He, Gilles Fedak, Bing Tran, and Franck Cappello. BLAST Application with Data-aware Desktop Grid Middleware. In Proceedings of the 9th IEEE International Symposium on Cluster Computing and the Grid (CCGRID 09), Shanghai, China, May.
[414] Mathias Jacquelin, Loris Marchal, and Yves Robert. Complexity analysis and performance evaluation of matrix product on multicore architectures. In ICPP 2009, the 38th International Conference on Parallel Processing. IEEE Computer Society Press. To appear.
[415] P. Kacsuk, Z. Farkas, and G. Fedak. Towards Making BOINC and EGEE Interoperable. In Proceedings of the 4th IEEE International Conference on e-Science (e-Science 2008), International Grid Interoperability and Interoperation Workshop 2008 (IGIIW 2008), Indianapolis, USA, December.
[416] Daniel Kahn, Clément Rezvoy, and Frédéric Vivien. Parallel large scale inference of protein domain families. In ICPADS 2008, the 14th IEEE International Conference on Parallel and Distributed Systems. IEEE Computer Society Press.
[417] Noriyuki Kushida, Yoshio Suzuki, Naoya Teshima, Norihiro Nakajima, Yves Caniou, Michel Daydé, and Pierre Ramet. Toward an international sparse linear algebra expert system by interconnecting the ITBL computational grid with the GridTLSE platform. In VECPAR 08 International Meeting on High Performance Computing for Computational Science, volume 5336 of LNCS.
[418] Arnaud Legrand, Alan Su, and Frédéric Vivien. Minimizing the stretch when scheduling flows of biological requests. In SPAA 06: Proceedings of the eighteenth annual ACM symposium on Parallelism in algorithms and architectures, Cambridge, Massachusetts, USA. ACM Press.

[419] Loris Marchal, Pascale Primet, Yves Robert, and Jingdi Zeng. Optimal bandwidth sharing in grid environment. In HPDC 2006, the 15th International Symposium on High Performance Distributed Computing. IEEE Computer Society Press.
[420] Loris Marchal, Veronika Rehn, Yves Robert, and Frédéric Vivien. Scheduling and data redistribution strategies on star platforms. In PDP 2007, 15th Euromicro Workshop on Parallel, Distributed and Network-based Processing. IEEE Computer Society Press.
[421] Yannis Mazzer and Bernard Tourancheau. MPI in wireless sensor networks. In PVM/MPI Recent Advances in Parallel Virtual Machine and Message Passing Interface, volume 5205 of Lecture Notes in Computer Science. Springer.
[422] Yannis Mazzer and Bernard Tourancheau. Comparisons for 6LoWPAN implementation on wireless sensor networks. In SensorComm. IEEE-IARIA.
[423] Stéphane Operto, Jean Virieux, Patrick Amestoy, Luc Giraud, and Jean-Yves L'Excellent. 3D frequency-domain finite-difference modeling of acoustic wave propagation using a massively parallel direct solver: a feasibility study. In The Society of Exploration Geophysicists (SEG 2006), New Orleans, USA, October 1–6, 2006.
[424] Jean-François Pineau, Yves Robert, and Frédéric Vivien. The impact of heterogeneity on master-slave on-line scheduling. In HCW 2006, the 15th Heterogeneous Computing Workshop. IEEE Computer Society Press.
[425] Jean-François Pineau, Yves Robert, and Frédéric Vivien. Off-line and on-line scheduling on heterogeneous master-slave platforms. In PDP 2006, 14th Euromicro Workshop on Parallel, Distributed and Network-based Processing. IEEE Computer Society Press.
[426] Jean-François Pineau, Yves Robert, and Frédéric Vivien. Revisiting matrix product on master-worker platforms. In 9th Workshop on Advances in Parallel and Distributed Computational Models (APDCM). IEEE Computer Society Press.
[427] Jean-François Pineau, Yves Robert, and Frédéric Vivien. Energy-aware scheduling of flow applications on master-worker platforms. In Euro-Par Parallel Processing, volume 5704 of LNCS. Springer Verlag.
[428] Veronika Rehn-Sonigo. Optimal Closest Policy with QoS and Bandwidth Constraints for Placing Replicas in Tree Networks. In CoreGRID 2007, CoreGRID Symposium. Springer Verlag.
[429] Mark Stillwell, David Schanzenbach, Frédéric Vivien, and Henri Casanova. Resource allocation using virtual clusters. In 9th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 09).
[430] Cédric Tedeschi. Découverte de services pour les grilles de calcul dynamiques large échelle. In Perpi, Conférences conjointes RenPar 17 / SympA 2006 / CFSE 5 / JC 2006, pages 52–59, Canet en Roussillon, France, October.
[431] Cédric Tedeschi. Arbre de préfixes auto-stable pour les systèmes pair-à-pair. In Conférences conjointes RenPar 18 / SympA 2008 / CFSE 6, Fribourg, Switzerland, February.
[432] Bernard Tourancheau, Gérard Krauss, and Richard Blanchard. Parametric sensitivity study and optimization of the SDHW and PV subsystems in an energy positive house. In IBPSA-France.
[433] Bernard Tourancheau, Yannis Mazzer, Gérard Krauss, Valentin Marvan, and Frederic Kuznick. Calibration of sensors embedded on a micro-controller based wireless network. In IBPSA-France.
[434] Bernard Tourancheau, Yannis Mazzer, Gérard Krauss, Valentin Marvan, and Frederic Kuznick. Software calibration of wirelessly networked sensors. In SensorComm. IEEE-IARIA.

Short communications [COM] and posters [AFF] in conferences and workshops

[435] Gabriel Caillat, Oleg Lodygensky, Gilles Fedak, Haiwu He, Zoltan Balaton, Zoltan Farkas, Gabor Gombas, Peter Kacsuk, Robert Lovas, Attila Csaba Maros, Ian Kelley, Ian Taylor, Gabor Terstyanszky, Tamas Kiss, Miguel Cardenas-Montes, Ad Emmen, and Filipe Araujo. EDGeS: The art of bridging EGEE to BOINC and XtremWeb. In Proceedings of Computing in High Energy and Nuclear Physics (CHEP 09) (abstract), Prague, Czech Republic, March 2009.

145 5.14 Algorithms and Scheduling for Distributed Heterogeneous Platforms 131 [436] Pushpinder Kaur Chouhan, Holly Dail, Eddy Caron, and Frédéric Vivien. How should you structure your hierarchical scheduler? In HPDC th IEEE International Symposium on High Performance Distributed Computing, pages (Poster), Paris, France, June IEEE. [437] Yi Gu, Qishi Wu, Anne Benoit, and Yves Robert. Complexity analysis and algorithmic development for pipeline mappings in heterogeneous networks. In 28th ACM Symposium on Principles of Distributed Computing PODC ACM Press, Short communication. Scientific books and book chapters [OS] [438] Gabriel Antoniu, Marin Bertier, Luc Bougé, Eddy Caron, Frédéric Desprez, Mathieu Jan, Sébastien Monnet, and Pierre Sens. GDS: An architecture proposal for a grid data-sharing service. In Vladimir Getov, Domenico Laforenza, and Alexander Reinefeld, editors, Future Generation Grids, volume XVIII, CoreGrid Series of Proceedings of the Workshop on Future Generation Grids November 1-5, 2004, Dagstuhl, Germany. Springer Verlag, [439] Olivier Beaumont and Loris Marchal. Steady-state scheduling. In Introduction to Scheduling. Chapman and Hall/CRC Press, To appear. [440] Vincent Breton, Eddy Caron, Frédéric Desprez, and Gaël Le Mahec. Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare, chapter High Performance BLAST Over the Grid. IGI Global, [441] Yves Caniou, Eddy Caron, Frédéric Desprez, Hidemoto Nakada, Keith Seymour, and Yoshio Tanaka. High performance GridRPC middleware. In G.A. Gravvanis, J.P Morrison, H.R. Arabnia, and D.A. Power, editors, Grid Technology and Applications: Recent Developments. Nova Science Publishers, At Prepress. Pub. Date: 2009, 2nd quarter. ISBN [442] Franck Cappello, Gilles Fedak, Derrick Kondo, Paul Malécot, and Ala Rezmerita. 
Handbook of Research on Scalable Computing Technologies, chapter Desktop Grids: From Volunteer Distributed Computing to High Throughput Computing Production Platforms. IGI Global, to appear. [443] E. Caron, F. Desprez, J.-Y. L Excellent, C. Hamerling, M. Pantel, and C. Puglisi-Amestoy. Use of a network enabled server system for a sparse linear algebra application. In Vladimir Getov, Domenico Laforenza, and Alexander Reinefeld, editors, Future Generation Grids, volume XVIII, CoreGrid Series of Proceedings of the Workshop on Future Generation Grids November 1-5, 2004, Dagstuhl, Germany. Springer Verlag, [444] Eddy Caron, Frédéric Desprez, Franck Petit, and Cédric Tedeschi. DLPT: A P2P tool for Service Discovery in Grid Computing. In Nick Antonopoulos, Georgios Exarchakos, Maozhen Li, and Antonio Liotta, editors, Handbook of Research on P2P and Grid Systems for Service-Oriented Computing: Models, Methodologies and Applications. IGI Global, December Released: December ISBN-13: [445] H. Casanova, A. Legrand, and Y. Robert. Parallel Algorithms. Chapman & Hall/CRC Press, [446] Matthieu Gallet, Yves Robert, and Frédéric Vivien. Divisible load scheduling. In Introduction to Scheduling. Chapman and Hall/CRC Press, To appear. [447] Yves Robert and Frédéric Vivien. Algorithmic Issues in Grid Computing. In Algorithms and Theory of Computation Handbook. Chapman and Hall/CRC Press, second edition, To appear. Scientific popularization [OV] [448] Michel Cosnard and Yves Robert. Algorithmique parallèle. In Encyclopédie de l Informatique et des Systèmes d Information, pages Vuibert, Book or Proceedings editing [DO] [449] L. Carter, H. Casanova, F. Desprez, J. Ferrante, and Y. Robert, editors. Special issue on Scheduling techniques for large-scale distributed platforms. Int. J. High Performance Computing Applications 20(1), [450] L. Carter, H. Casanova, F. Desprez, J. Ferrante, and Y. Robert, editors. Special issue on Scheduling techniques for large-scale distributed platforms. Int. 
J. High Performance Computing Applications 20(4), 2006.

146 132 Part 5. Graal [451] H. Casanova, Y. Robert, and H.J. Siegel, editors. Special issue on Algorithm design and scheduling techniques (realistic platform models) for heterogeneous clusters. IEEE Trans. Parallel Distributed Systems 17(2), [452] Jack Dongarra and Bernard Tourancheau, editors. Cluster and Computational Grids for Scientific Computing, Special Issue on Tools. International Journal of High Performance Computing Applications 20(3), [453] Jack Dongarra and Bernard Tourancheau, editors. Clusters and Computational Grids for Scientific Computing. Parallel Processing Letters 17(1), March [454] Jack Dongarra and Bernard Tourancheau, editors. Special section: Cluster and computational grids for scientific computing. Future Generation Comp. Syst. 24 (1), [455] Jack Dongarra and Bernard Tourancheau, editors. Cluster and Computational Grids for Scientific Computing. Parallel Processing Letters, [456] Y. Robert, editor. Special issue on IPDPS J. Parallel and Distributed Computing 69, 9, [457] Y. Robert, M. Parashar, R. Badrinath, and V. Prasanna, editors. High Performance Computing (HiPC) 2006, LNCS Springer Verlag, [458] Yves Robert and Frédéric Vivien, editors. Introduction to Scheduling. Chapman and Hall/CRC Press, To appear. Other Publications [AP] [459] Patrick R. Amestoy, Alfredo Buttari, Philippe Combes, Abdou Guermouche, Jean-Yves L Excellent, Tzvetomila Slavova, and Bora Uçar. Overview of MUMPS (a multifrontal massively parallel solver), June Presentation at the 2nd French-Japanese workshop on Petascale Applications, Algorithms and Programming (PAAP 08). [460] Patrick R. Amestoy, Alfredo Buttari, and Jean-Yves L Excellent. Towards a parallel analysis phase for a multifrontal sparse solver, June Presentation at the 5th International workshop on Parallel Matrix Algorithms and Applications (PMAA 08). [461] Patrick R. Amestoy, Iain S. Duff, Daniel Ruiz, and Bora Uçar. Towards parallel bipartite matching algorithms. 
Presentation at Scheduling for large-scale systems, Knoxville, Tennessee, USA, May [462] Patrick R. Amestoy, Iain S. Duff, Tzvetomila Slavova, and Bora Uçar. Out-of-core solution for singleton rhs vectors. Presentation at SIAM Conference on Computational Science and Engineering (CSE09), Miami, Florida, USA, March [463] Patrick R. Amestoy, Iain S. Duff, Tzvetomila Slavova, and Bora Uçar. Out-of-core solution for singleton right hand-side vectors. Poster at Dagstuhl Seminar on Combinatorial Scientific Computing, February [464] Yusuke Tanimura, Keith Seymour, Eddy Caron, Abelkader Amar, Hidemoto Nakada, Yoshio Tanaka, and Frédéric Desprez. Interoperability Testing for The GridRPC API Specification. Open Grid Forum, May OGF Reference: GFD.102. [465] Ümit V. Çatalyürek and Bora Uçar. Partitioning sparse matrices. Presentation at SIAM Conference on Computational Science and Engineering (CSE09), Miami, Florida, USA delivered by Ümit V. Çatalyürek, March Doctoral Dissertations and Habilitation Theses [TH] [466] Emmanuel Agullo. On the Out-of-core Factorization of Large Sparse Matrices. PhD thesis, École Normale Supérieure de Lyon, November [467] Anne Benoit. Scheduling pipelined applications: models, algorithms and complexity. Habilitation à diriger des recherches, École normale supérieure de Lyon, July [468] Raphaël Bolze. Analyse et déploiement de solutions algorithmiques et logicielles pour des applications bioinformatiques à grande échelle sur la grille. PhD thesis, École Normale Supérieure de Lyon, October [469] Pushpinder Kaur Chouhan. Automatic Deployment for Application Service Provider Environments. PhD thesis, École normale supérieure de Lyon, France, September 2006.

147 5.14 Algorithms and Scheduling for Distributed Heterogeneous Platforms 133 [470] Loris Marchal. Communications collectives et ordonnancement en régime permanent sur plates-formes hétérogènes. PhD thesis, École Normale Supérieure de Lyon, France, October [471] Jean-François Pineau. Communication-aware scheduling on heterogeneous master-worker platforms. PhD thesis, École Normale Supérieure de Lyon, September [472] Veronika Rehn-Sonigo. Multi-criteria Mapping and Scheduling of Workflow Applications onto Heterogeneous Platforms. PhD thesis, École Normale Supérieure de Lyon, July [473] Cédric Tedeschi. Peer-to-Peer Prefix Tree for Large Scale Service Discovery. PhD thesis, École normale supérieure de Lyon, October [474] Antoine Vernois. Ordonnancement et réplication de données bioinformatiques dans un contexte de grille de calcul. PhD thesis, École normale supérieure de Lyon, France, October [475] Frédéric Vivien. On scheduling for distributed heterogeneous platforms. Habilitation à diriger des recherches, École normale supérieure de Lyon, May Total ACL - International and national peer-reviewed journal INV - Invited conferences ACT - International and national peer-reviewed conference proceedings COM / AFF - Short communications and posters in international and national conferences and workshops OS - Scientific books and book chapters OV - Scientific popularization DO - Book or Proceedings editing AP - Other Publications TH - Doctoral Dissertations and Habilitation Theses Total Other Productions: Software [AP] [476] Adage: Automatic deployment of applications in a grid environment. [477] DIET: Distributed Interactive Engineering Toolbox. [478] MUMPS: a MUltifrontal Massively Parallel Solver. [479] Ulcmi: Unified lego component model implementation. APP in progress.

148 134 Part 5. Graal

6. MC2 - Models of Computation and Complexity

Team: MC2
Scientific leader: Pascal KOIRAN
Evaluation Web site:
Parent Organisations: École Normale Supérieure de Lyon, Université de Lyon, CNRS

Team History (optional): The acronym MC2 stands for Modèles de Calcul et Complexité. The team was created between 1995 and 1997 by a merger of the team Connexionisme, headed by Hélène Paugam-Moisy, with the team Automates Cellulaires et Pavages (Cellular Automata and Tilings), headed by Jacques Mazoyer. The team was led by Jacques Mazoyer until September 2007, and by Pascal Koiran thereafter.

6.1 Team Composition

Current team members. The arrival dates in the lists below are the dates of arrival in the LIP laboratory; as a result, some arrivals predate the creation of MC2.

Permanent Researchers:
- KOIRAN Pascal, Professor, ENS Lyon, arrived 09/1995
- PORTIER Natacha, Associate Professor, ENS Lyon, arrived 09/1999
- REMILA Eric, Associate Professor, IUT Roanne, arrived 09/1990
- THIERRY Eric, Associate Professor, ENS Lyon, arrived 09/2002

Post-docs, engineers and visitors:
- BOIX Eric, Engineer, CNRS, arrived 01/2006
- GRIGNARD Arnaud, Expert engineer, ENS Lyon, arrived 05/2009
- LA ROTA Camilo, Expert engineer, CNRS, arrived 02/2007
- URIBE Ricardo, Expert engineer, CNRS, arrived 03/2007
- CHIQUILLO Gian, Expert engineer, CNRS, arrived 05/2008
- MALATERRE Mathieu, Expert engineer, ENS Lyon, arrived 07/2009
- BELTRAN Jorge, Expert engineer, CNRS, arrived 05/2009

Doctoral Students (date of first registration as doctoral student):
- BRIQUEL Irénée, ENS Lyon, 09/2008
- GRENET Bruno, ENS Lyon, 09/2009
- JOUHET Laurent, ENS Lyon, 09/2007
- NOUAL Mathilde, ENS Lyon, 09/2009
- ROBERT Julien, ENS Lyon, 09/2006

Past team members

Past Members (position and parent institution at departure, or in Oct. 2005):
- DELORME Marianne, Assistant professor, Lyon 1, arrived 09/…, departed …/2008; now retired
- FINKEL Olivier, Research scientist, CNRS, arrived 10/…, departed …/2008; now at Paris 7 (logic team)
- ROMASCHENKO Andrei, Research scientist, CNRS, arrived 10/…, departed …/2008; now at Marseille (LIF laboratory)
- MAZOYER Jacques, Professor, IUFM Lyon, arrived 09/…, departed …/2008; now retired
- MORVAN Michel, Professor, ENS Lyon, arrived 02/…, departed …/2009; now at Veolia Environnement
- SCHABANEL Nicolas, Research scientist, CNRS, arrived 09/…, departed …/2008; now at Paris 7 (LIAFA)

Past Doctoral students (all at ENS Lyon, doctoral school MathIF; dates of first registration and of departure):
- BECKER Florent, 09/… to …/2009, supervisor: E. Rémila; now assistant professor (Orléans)
- BOUILLARD Anne, 09/… to …/2006, supervisors: B. Gaujal and J. Mairesse; now assistant professor (ENS Cachan, Ker Lann campus)
- LEBHAR Emmanuelle, 09/… to …/2006, supervisors: M. Morvan and N. Schabanel; now CNRS research scientist (Santiago)
- LYAUDET Laurent, 09/… to …/2008, supervisors: J. Mazoyer and I. Todinca
- NESME Vincent, 09/… to …/2006, supervisors: P. Koiran and N. Portier; now post-doc (Hannover)
- NOUVEL Bertrand, 09/… to …/2006, supervisor: E. Rémila; now CNRS post-doc (Tokyo)
- PERIFEL Sylvain, 09/… to …/2008, supervisor: P. Koiran; now assistant professor (Paris 7)
- POUPET Victor, 09/… to …/2007, supervisors: M. Delorme and J. Mazoyer; now assistant professor (Marseille, LIF laboratory)
- REGNAULT Damien, 09/… to …/2009, supervisors: N. Schabanel and E. Thierry; now ATER (Marseille)
- ROUQUIER Jean-Baptiste, 09/… to …/2009, supervisor: M. Morvan; now post-doc (ISC-PIF, Paris)

Past post-doctoral researchers, engineers and visitors (greater than or equal to 3 months):
- AMAR Abdelkader, expert engineer (CNRS), 01/… to …/2007
- FLARUP Uffe, visiting doctoral student, University of Southern Denmark, 01/… to …/2007; now at Certus (a Danish company)
- LANDES Jürgen, visiting doctoral student, University of Manchester, 05/… to …/2007
- YAO Penghui, visiting doctoral student, Chinese Academy of Sciences, 10/… to …/2008; now doctoral student (National University of Singapore)

Evolution of the team: The team went through a number of important changes in recent years, including:
- The arrival and the recent departure of Michel Morvan, who brought the topic of complex systems to the team and created IXXI, a complex systems institute where the team is currently hosted.
- The retirement of longtime members Marianne Delorme and Jacques Mazoyer (the historic team leader), and more generally a sharp and unfortunate drop in the number of permanent researchers: for instance, there were 8 permanent researchers in March 2005 or July 2007, compared to 4 today.

6.2 Executive summary

Keywords: theoretical computer science, algorithms, computational complexity, discrete structures, discrete dynamical systems, complex systems, gene regulatory networks, algebraic complexity, quantum computing, cellular automata, tilings, self-assembly, network calculus, game theory.

Research area: The central research topics of our team are:
- Algorithms and computational complexity. We design and analyze efficient algorithms, and we try to understand the limitations of efficient algorithms. Understanding these limitations is perhaps the main goal of computational complexity.
An early and important example is the theory of NP-completeness, which provides a way of showing that certain problems do not admit polynomial-time algorithms (assuming the widely believed P ≠ NP conjecture).
- Complex systems. According to the Réseau National des Systèmes Complexes, complex systems, from the cell to the ecosphere, result from evolution and adaptation processes. They exhibit emerging properties: the underlying microscopic level gives rise to organized structures at the macroscopic level, and the macroscopic level influences the microscopic level in a feedback loop. These emerging properties are robust and can be studied from different points of view, depending on the class of complex adaptive systems under consideration.

Computational complexity and complex systems are connected through the notion of dynamical system. Complex systems are usually dynamic: they evolve in time. The models of computation studied in computational complexity are dynamic as well: a machine's transition function associates to the current configuration of the machine its configuration at the next time step (or the possible next configurations if the machine is not deterministic). We can therefore use our expertise in the study of such systems for the study of complex systems.

Main goals: As explained above, a central goal is to understand the limitations of efficient algorithms, but "efficient" can take many other meanings than worst-case polynomial-time computation on a Turing machine. Algorithms can be parallel, distributed, synchronous or asynchronous, deterministic, probabilistic or quantum. Besides computation time, one can try to optimize memory requirements or the volume of data exchanged. In order to focus the attention on one (or a few) of these different aspects of computation, we study various models of computation such as, for example, synchronous and asynchronous cellular automata or quantum models of computation.

A central goal of our research in complex systems is to understand interactions between the microscopic and macroscopic levels. We pursue this goal both for idealized mathematical models, such as cellular automata or Boolean networks, and for real systems, such as gene regulation networks.

Methodology: Our work in algorithms and computational complexity usually follows a traditional mathematical style (definitions / theorems / proofs...). We use this mathematical methodology to study algorithms and models of computation, but also to study the underlying mathematical structures. Improved understanding of these structures often leads to better algorithms. For instance, understanding the structure of a tiling space can yield efficient algorithms for the generation of random tilings, and progress in graph theory leads to more efficient combinatorial algorithms.

Sometimes we also resort to experiments. This is especially true in our work on (low-dimensional) cellular automata, whose evolution in time can be visualized easily. Rigorous mathematical results remain the ultimate goal: we use experiments to develop our intuition and formulate conjectures that will hopefully become theorems at a later stage.

In our complex systems work we analyze data (coming in particular from biological experiments), build models that explain the data, and study the properties of these models by mathematical analysis or by software simulation. The results obtained can suggest further biological experiments.
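As an illustration of this experimental side, the synchronous evolution of a one-dimensional elementary cellular automaton can be simulated and visualized in a few lines. The sketch below is generic and illustrative only; it is not the team's actual experimental tooling:

```python
# Minimal simulator for 1D elementary cellular automata on a ring
# (illustrative sketch only, not the team's experimental code).

def step(cells, rule):
    """One synchronous update of a Wolfram rule (an integer in 0..255)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(cells, rule, steps):
    """Return the space-time diagram: the list of successive configurations."""
    history = [list(cells)]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    width = 31
    initial = [0] * width
    initial[width // 2] = 1  # a single live cell in the middle
    for row in run(initial, 110, 15):  # rule 110, 15 time steps
        print("".join(".#"[c] for c in row))
```

An asynchronous regime can be obtained from the same skeleton by updating only a randomly chosen subset of the cells at each step instead of all of them.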
Highlights (1 page max)

Our ongoing work in algebraic complexity, together with the work of other researchers such as Peter Bürgisser, has led to a fairly precise understanding of the relationships between some of the main open problems of discrete complexity theory (e.g., "P = NP?" or "P = PSPACE?") and the corresponding open problems of algebraic complexity theory. Some of the related publications in the current reporting period are [566, 567, 565].

We were the first group to obtain quantum lower bounds for hidden subgroup problems [564, 505]. Most of the problems for which quantum algorithms achieve an exponential speedup (including factoring, solvable in quantum polynomial time by Shor's algorithm) can be formulated as hidden subgroup problems.

We have developed a new research activity in Network Calculus, a theory of discrete event systems that provides worst-case guarantees in performance evaluation. The quality of our research has already been recognized by a best paper award [542] and has led to industrial collaborations (with Onera and Thales).

We have started a collaboration with Creuset (Centre de Recherche Economiques de Saint-Etienne). We study questions of game theory in a distributed setting, for instance how to fairly allocate rewards to players in certain cooperation games [535].

We have developed a general-purpose software platform for the simulation of complex systems. A related startup is currently under incubation and should begin its operations around Spring.

6.3 Research activities

Algebraic complexity

List of participants: Irénée Briquel, Uffe Flarup, Pascal Koiran, Laurent Lyaudet, Bruno Grenet, Sylvain Perifel, Natacha Portier.

Keywords: Valiant's model, Blum-Shub-Smale model, arithmetic circuits, treewidth, systems of polynomial equations.

Scientific issues, goals, and positioning of the team: Sylvain Perifel's thesis was mostly devoted to algebraic complexity theory.
It established connections between the complexity of decision and evaluation problems, as well as connections between algebraic and Boolean complexity. We have studied the complexity of evaluating polynomials within Valiant's framework. A central open problem here is whether the permanent of a matrix can be evaluated in a polynomial number of arithmetic operations. We have also studied the complexity of decision problems in the Blum-Shub-Smale model of computation. There, the central problem is the following: is it possible to decide, in a polynomial number of arithmetic operations and comparisons, whether a system of multivariate polynomial equations has a solution (in the field of real or complex numbers)?
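To make the permanent question concrete: the permanent is the same sum over permutations as the determinant, only without the alternating signs. The brute-force sketch below (illustrative only, exponential in n) evaluates both; despite the syntactic similarity, the determinant can be computed in O(n^3) arithmetic operations by Gaussian elimination, while no polynomial-size arithmetic circuit is known for the permanent:

```python
# Brute-force permanent and determinant of a small square matrix
# (illustrative sketch; usable only for tiny matrices).
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation p (a tuple of indices), via its cycle structure."""
    sign, seen = 1, set()
    for start in range(len(p)):
        if start in seen:
            continue
        j, length = start, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:  # each even-length cycle flips the sign
            sign = -sign
    return sign

def permanent(a):
    n = len(a)
    return sum(prod(a[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def determinant(a):
    n = len(a)
    return sum(perm_sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# For [[1, 2], [3, 4]]: permanent = 1*4 + 2*3 = 10, determinant = 1*4 - 2*3 = -2.
```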

Major results (Oct. 2005 - Oct. 2009): We have shown that there are strong connections between these two algebraic versions of the "P = NP?" problem, and with Boolean complexity theory. For instance, we have shown that a proof of P ≠ NP over the real or complex numbers would imply P ≠ NP in Valiant's framework, or the Boolean separation P ≠ PSPACE. In [553] we have studied the complexity of evaluating hard polynomials, such as the permanent or the Hamiltonian, for special classes of graphs. It was known that, subject to these restrictions, the aforementioned hard problems become easy. We have shown that the complexity of evaluating the corresponding polynomials is captured exactly by a restricted class of arithmetic circuits, namely skew circuits.

Self-assessment: The results in Sylvain Perifel's thesis, together with prior results from our group and other research groups, provide a fairly complete picture of the way some of the main open problems of algebraic and Boolean complexity are connected to each other. It remains to actually solve these problems, that is, to prove the corresponding lower bounds unconditionally. We also believe that the connections between Boolean complexity and Valiant's model should be studied further, and that graph-theoretic notions like treewidth will play an important role in that study.

Algebraic algorithms

List of participants: Pascal Koiran.

Keywords: Computer algebra, sparse polynomials, polynomial factorization, determinants.

Scientific issues, goals, and positioning of the team: The design of algorithms for algebraic objects like polynomials or matrices of course has applications in many fields of computer science, and in particular in complexity theory, where factoring polynomials or computing determinants and permanents are cornerstone problems separating different complexity classes.

Major results (Oct. 2005 - Oct. 2009): With E. Kaltofen, we have designed efficient algorithms for the factorization of sparse polynomials. After handling bivariate polynomials with integer coefficients and searching for linear factors [557], these restrictions were lifted in [558] (factors of arbitrary degree for multivariate polynomials with coefficients in a number field). In [559], we studied quotients of symbolic determinants: the entries of the determinants at the numerator and denominator of the quotient are constants or variables. In general, the quotient is a rational fraction. We showed that when the quotient is a polynomial, it can be efficiently represented by a third symbolic determinant. This situation occurs for instance in the computation of resultants of multivariate polynomial systems.

Self-assessment: Our determinantal representation of quotients is probably not efficient enough to be used in practice. It does raise the question, however, of the existence of a determinantal representation which could be used in practice. It seems that our work on the factorization of sparse polynomials might lead to some (modest) progress on one of the central problems of complexity theory: obtaining good lower bounds on the representation of explicit polynomials by arithmetic circuits.

Kolmogorov complexity

List of participants: Sylvain Perifel, Andrei Romaschenko.

Keywords: Boolean circuits, compression, mutual information.

Scientific issues, goals, and positioning of the team: We have studied the (open) question of whether all problems solvable in exponential time admit polynomial-size circuits. This has led to further investigations of Kolmogorov complexity, which was also an active research theme for A. Romaschenko.

Major results (Oct. 2005 - Oct. 2009): We have shown that the answer to the first problem is negative [573] under an assumption from resource-bounded Kolmogorov complexity: polynomial-time symmetry of information.
We have therefore connected two important open questions from two different areas: Boolean circuit complexity and Kolmogorov complexity. We have also studied another problem at the frontier of these two areas: how to extract randomness from an infinite string? To this effect, we have studied in [534] the compression of strings by pushdown automata. At the core of algorithmic information theory, and in collaboration with A. Muchnik (Moscow), the stability of Kolmogorov-type properties under relativization was also studied, with new statements about the independence of words [571].
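For reference, the classical (resource-unbounded) symmetry of information theorem of Kolmogorov and Levin states that, up to logarithmic additive terms,

```latex
C(x, y) = C(x) + C(y \mid x) + O(\log C(x, y))
```

where C denotes plain Kolmogorov complexity. The assumption referred to above is that an analogue of this identity still holds when the description complexities are restricted to polynomial-time computations; whether such polynomial-time symmetry of information holds is itself a long-standing open question.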

Self-assessment: Though intuitive, the results are rather hard to prove. We still lack general techniques suitable for attacking similar problems.

Quantum computing

List of participants: Pascal Koiran, Jürgen Landes, Vincent Nesme, Natacha Portier, Penghui Yao.

Keywords: hidden subgroup problem, lower bounds, query complexity.

Scientific issues, goals, and positioning of the team: We have focused on quantum lower bounds, and in particular on quantum lower bounds for hidden subgroup problems. It is well known that for certain problems, quantum algorithms yield an exponential speedup compared to the best known classical algorithms (e.g., Shor's algorithm and factorization). The hidden subgroup problem is an abstract framework in which most of the known "fast" quantum algorithms, such as Shor's algorithm, can be cast. In this framework the algorithm has access to a black-box function f defined on a finite group G. This function is supposed to "hide" a subgroup H of G, in the sense that f is constant on each coset of H and takes different values on different cosets. The goal of the quantum algorithm is to identify the hidden subgroup H by performing as few queries as possible to the black box f.

Major results (Oct. 2005 - Oct. 2009): We have obtained optimal lower bounds on the number of queries, first for Simon's problem [564] (the case where G = (Z/2Z)^n) and then for all Abelian groups [505]. More recently, we have begun a systematic study of lower bounds for non-adaptive quantum algorithms. In this restricted model, a quantum algorithm must perform all of its queries to the black box simultaneously. This model is especially natural for hidden subgroup problems, since many of the known algorithms for such problems turn out to be non-adaptive.

Self-assessment: We were the first group to obtain lower bounds for hidden subgroup problems.
Four years after the first version of our paper, despite intensive work on lower bounds in the quantum computing community, this is still the only lower bound available for hidden subgroup problems. It would be interesting to also obtain lower bounds for non-Abelian groups.

Fault-tolerant computation

List of participants: Andrei Romashchenko.

Keywords: Boolean circuits, error-correcting codes.

Scientific issues, goals, and positioning of the team: We have investigated reliable computation with faulty Boolean circuits in the coded model of faulty computations introduced in the 1990s by D. Spielman. In this model, the input and the output of a computational circuit are treated as words in some error-correcting code.

Major results (Oct. 2005 - Oct. 2009): In [583], we investigated two models of local faults. In the random faults model, we suppose that each elementary processor (a Boolean gate) becomes corrupted at each step with some small probability. We proved that the complexity of a circuit resistant to random faults can be made very close to the complexity of a circuit computing the same function without fault occurrences. In the worst-case fault model, we assume that at each step some arbitrary fixed fraction of the elementary processors can be corrupted by an adversary. We showed that one can still organize reliable computations, but with an exponential blow-up of the memory and a very modest slow-down.

Self-assessment: We suggested some explicit constructions for the synthesis of reliable circuits. The principal novelty of our method is a new self-correcting procedure based on a sort of mixing mapping (this rather simple mapping enjoys some useful properties of expanders).

Infinite words

List of participants: Olivier Finkel.

Keywords: Topological complexity, infinite games, effective strategies.
Scientific issues, goals, and positioning of the team: We study the topological complexity of languages of infinite words accepted by various finite machines, such as finite automata, counter machines, pushdown automata, Petri nets, etc. In particular, we try to locate these languages in the Borel or the Wadge hierarchy. To obtain decision algorithms, we considered infinite games between two players with complete information and looked for effective winning strategies. These questions are also connected to the specification and verification of systems interacting with an environment.

Major results (Oct. 2005 - Oct. 2009): O. Finkel has obtained with D. Lecomte new results [552] on the topological complexity of the ω-powers that appear naturally in the characterization of regular and context-free ω-languages. O. Finkel has also worked in model theory with S. Todorcevic [499], showing that some stretching theorems due to J.-P. Ressayre are in fact equivalent to the existence of n-Mahlo cardinals for all n.

Self-assessment: We have completely solved the problem of the classification of ω-powers with respect to the Borel hierarchy, with effective versions of our results. Remarkably, this problem was solved by combining methods from descriptive set theory and automata theory.

Small-world phenomenon

List of participants: Emmanuelle Lebhar, Nicolas Schabanel.

Keywords: Milgram experiment, interaction networks, social networks.

Scientific issues, goals, and positioning of the team: The goal is to study the small-world phenomenon emerging in real-life interaction networks. The seminal work of Milgram (1967) showed that everybody is not only at about 6 handshakes from anybody else, but is also able to find these few handshakes based on his or her own local view of the huge interaction network. Kleinberg (2000) suggested a possible algorithmic nature of this phenomenon by introducing a random graph (a grid augmented with random shortcuts) in which a simple greedy algorithm can find short paths between pairs of nodes.

Major results (Oct. 2005 - Oct. 2009): We have extended his work in several directions with N. Hanusse and P. Duchon (LaBRI). First, we have proposed a local routing algorithm that does much better in expectation than the greedy algorithm.
Second, we have looked at the properties, such as growth, that make it possible to turn the underlying graph (the grid in Kleinberg's case) into a small world [494].

Self-assessment: Our work can be considered as a first step towards a validation of the ubiquity of small worlds in nature: these only require few resources to be built [548].

Networks of asynchronous automata

List of participants: Damien Regnault, Michel Morvan, Jean-Baptiste Rouquier, Nicolas Schabanel, Eric Thierry.

Keywords: cellular automata, Boolean networks, stochastic automata, probabilistic analysis.

Scientific issues, goals, and positioning of the team: A natural framework to study some biological models is given by Boolean or Hopfield networks and by cellular automata. Such models have been intensively studied under synchronous updates. We have led a study of other update regimes, such as probabilistic ones, to model asynchronism.

Major results Oct. - Oct. 2009: We have studied two regimes: α-asynchronicity (i.i.d. updates with probability α) and full asynchronicity (one update per step, chosen uniformly). Asynchronous behaviors differ drastically from synchronous ones in cellular automata, as observed in simulations and proved for double-quiescent 1D elementary cellular automata (classified in [496] for full asynchronism and in [551] for α-asynchronism) and for 2D Minority cellular automata [579, 578]. Our indicator was the relaxation time, i.e. the expected absorption time in the final components of configurations. Some couplings with directed percolation were suspected from experimental studies of coalescence [584] and have been formalized in [576].

Self-assessment: We managed to introduce some useful combinatorial tools for such analyses, like tree masks, and we still aim at classifying the distinct behaviors. Some links with other stochastic processes, like interacting particle systems, have not been exploited yet.

Blind scheduling and data broadcast

List of participants: Julien Robert, Nicolas Schabanel.
Keywords: blind scheduling, data broadcast, CPU thermal management.

Scientific issues, goals, and positioning of the team: In most operating systems, the scheduler has no precise information about processes before or during their execution. Consequently, we have investigated a class of algorithms called non-clairvoyant, which get informed on the fly. J. Edmonds' work (STOC 1999) on the algorithm EQUI (which equitably shares the computing resources between the active tasks at any instant) showed that, in a general framework covering the different levels of parallelism encountered (unknown to the scheduler), the total service time is competitive with the optimal clairvoyant scheduler (which knows all arrival times and process phases), provided about twice the resources are available.

Major results Oct. - Oct. 2009: We have extended those results to a more general model where each task is composed of an arbitrary graph (DAG) of processes unknown to the scheduler. We have shown [581, 580] that the algorithm EQUI∘EQUI, which shares the resources equitably between the jobs and then shares the resources allocated to each job equitably between its active tasks, remains competitive with the optimal clairvoyant scheduler when allowed twice the resources. We have also applied our results to optimize the broadcasts of a web server with dependencies between the different broadcast pages [546, 582].

Self-assessment: This new expertise in online algorithms has led to another collaboration, on schedules ensuring some microprocessor thermal management [549].

Tilings

List of participants: Florent Becker, Bertrand Nouvel, Michel Morvan, Eric Rémila, Andrei Romashchenko, Eric Thierry.

Keywords: tiling space, domino tiling, lozenge tiling, discrete rotations, self-assembly, fault-tolerance.

Scientific issues, goals, and positioning of the team: In the study of the combinatorics and the algorithmics of tilings, an emphasis has been put on a recent self-assembly model introduced by the bio-computer scientist E. Winfree (STOC 2000).
In this model, with biochemistry and nanotechnology flavors, tiles are squares with different types of glue on their sides, which guide the way they aggregate. External parameters like temperature modify the assembly. This was F. Becker's PhD research topic, and it is also joint work with I. Rapaport and N. Schabanel (CMM, U. Chile).

Major results Oct. - Oct. 2009: Our original approach consists in constructing languages of homothetic stable shapes, rather than a fixed shape, where the main effort is the encoding of the size, forgetting the geometry. We succeeded in obtaining size-optimal and time-optimal tile sets for the construction of squares [537] and, later, cubes [538]. Until now, this is the only non-trivial 3D construction exhibited. In more classical settings, we have studied flip accessibility (sequences of local rearrangements) in lozenge tilings (joint work with O. Bodini (LIP6, Paris 6) and T. Fernique (LIF, Univ. Marseille)). While such questions had previously been studied for finite figures, we have obtained characterizations for tilings of the whole plane [539, 485], later extended to domino tilings [540].

Self-assessment: When viewed as growth processes, self-assembly models may include some probabilities. It then comes down to analyzing a Markovian process with a strong combinatorial flavor, just as in our recent studies of networks of automata. One can hope that some probabilistic tools will be useful for both models.

Algorithms for Network Calculus

List of participants: Anne Bouillard, Laurent Jouhet, Eric Thierry.

Keywords: communication networks, worst-case performance analysis.

Scientific issues, goals, and positioning of the team: Network Calculus (NC) is a theory of deterministic queuing systems encountered in communication networks. Aiming at analyzing worst-case performances, it stores information about the system in constraint functions, which are combined through special operators (mainly from (min,+) algebras) to output results.
Though mathematical formulas exist for small but typical networks, little is known about how to make these formulas effective with efficient algorithms, and then how to aggregate the computations to analyze large and complex networks.

Major results Oct. - Oct. 2009: With the perspective of offering an algorithmic toolbox, and later a software tool, we have introduced a class of functions which is stable under the NC combinations, as well as algorithms implementing them [487]. We have started to propose algorithms to analyze complex networks with multiplexing [543] and have investigated some routing issues [486, 542].
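To make the (min,+) machinery concrete, here is a minimal numerical sketch (ours, not the API of the toolbox of [487]) of the basic convolution operator and of the standard backlog and delay bounds, for a token-bucket arrival curve and a rate-latency service curve sampled at integer dates:

```python
def minplus_conv(f, g):
    """(f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t-s): the basic
    (min,+) convolution, on curves given as lists of samples."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

def bounds(alpha, beta):
    """Standard Network Calculus bounds: the worst-case backlog is the
    maximal vertical deviation between arrival curve alpha and service
    curve beta, the worst-case delay the maximal horizontal deviation."""
    backlog = max(a - b for a, b in zip(alpha, beta))
    delay = 0
    for t, a in enumerate(alpha):
        d = 0
        while t + d < len(beta) and beta[t + d] < a:
            d += 1
        delay = max(delay, d)
    return backlog, delay

H = 50                     # horizon, long enough for beta to overtake alpha
b, r = 4.0, 1.0            # token bucket: alpha(t) = b + r*t
R, T = 2.0, 3              # rate-latency: beta(t) = R*max(0, t - T)
alpha = [b + r * t for t in range(H)]
beta = [R * max(0, t - T) for t in range(H)]

print(bounds(alpha, beta))              # (7.0, 5): backlog b + r*T, delay T + b/R
end_to_end = minplus_conv(beta, beta)   # tandem of two servers: beta ⊗ beta
print(end_to_end[10])                   # 8.0: rate R, latency 2*T
```

The last two lines illustrate why the operators aggregate: the service offered by two servers in tandem is the (min,+) convolution of their service curves, here a rate-latency curve of rate R and latency 2T.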

Self-assessment: This is a new topic in the MC2 team. Our approach, with its demand for computational complexity analysis, is rather new in the NC community. The challenge is to prove that this approach will be useful in the development of a concrete software tool.

Simulation platform for complex systems

List of participants: Michel Morvan, Eric Boix, Ricardo Uribe, Gina Chiquillo, Mathieu Malaterre, Arnaud Grignard, Jorge Beltran.

Keywords: complex systems, modeling, simulation.

Scientific issues, goals, and positioning of the team: We are developing a software platform [600] for the simulation of complex systems. This work has been supported by the Morphex and Dynanets projects. Toan Chu-Minh from the Oslo company also participated in the development of the platform as a member of the Morphex project.

Major results Oct. - Oct. 2009: The platform has reached a level of maturity which makes it possible to create a startup company. The startup, led by Eric Boix, is currently under incubation and should begin its operations around spring.

Modeling, simulation and analysis of gene regulatory networks

List of participants: Michel Morvan, Eric Boix, Ricardo Uribe, Gina Chiquillo, Mathieu Malaterre, Arnaud Grignard, Jorge Beltran.

Keywords: complex systems, plant reproduction, flower development.

Scientific issues, goals, and positioning of the team: The goal of this work is to construct a predictive model for the development of the female reproductive organ of the flower. This entails the modeling, simulation and analysis of the dynamics of gene regulatory networks. This work was supported by the ANR project Carpelle virtuel. Several researchers outside LIP were involved, including S. Séné (a PhD student at IXXI supervised by M. Morvan and J. Demongeot), and F. Monéger and J. Traas from the plant reproduction lab (RDP, ENS Lyon).

Major results Oct. - Oct.
2009: We used threshold networks as our modeling paradigm: each node in the network represents a gene, and each state variable is Boolean and represents the concentration of the gene's product. The weight on each edge represents the type and strength of the corresponding interaction. We have obtained some robustness results about the dynamics [545]. In collaboration with RDP, we have reconstructed the network corresponding to the first development stages of the flower of the Arabidopsis plant [544]. The network's parameters were optimized by mathematical programming techniques, in a collaboration with L. Liberti and F. Tarissan (LIX, Ecole Polytechnique) [569, 585].

6.4 Application domains and social or economic impact

Much of our research is theoretical in nature, and is not necessarily driven by the needs of a particular application domain. We have nonetheless worked on a number of different applications:

Plant development, gene regulatory networks: see the corresponding sections above. The simulation platform described above was also developed to a large extent with this kind of biological application in mind, and will lead to the creation of a startup company.

Economics: we have started a new collaboration with Creuset, the Centre de Recherches Economiques de Saint-Etienne (P. Solal, R. Baron, S. Béal), with support from IXXI and ANR Mint. We study questions of game theory in a distributed setting, for instance how to fairly allocate rewards to players in certain cooperation games, like TU-games defined on networks [535].

Aerospace engineering: this is a domain where Network Calculus models, algorithms and software tools seem relevant, since aircraft certification requires the worst-case analysis of the embedded systems. A cooperation with Thales and Onera has started to develop theory as well as prototype software for such analyses.
Bibliometry: we have tested to what extent bibliometric indicators such as the h-index can predict promotions of scientists [502], and we have shown that scientists who engage with society perform better academically [503].
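The threshold-network dynamics used above for gene regulation can be sketched in a few lines. The three-gene network below is a purely hypothetical toy (its weights and thresholds are ours, not those of the Arabidopsis network of [544]):

```python
def step(state, weights, thresholds):
    """Synchronous threshold-network update: gene i switches on iff the
    weighted sum of its regulators' states reaches its threshold.
    weights[j][i] is the (signed) effect of gene j on gene i."""
    n = len(state)
    return tuple(int(sum(weights[j][i] * state[j] for j in range(n))
                     >= thresholds[i]) for i in range(n))

def attractor(state, weights, thresholds):
    """Iterate until a previously seen state recurs; return the limit
    cycle (a fixed point if it has length one)."""
    seen, traj = {}, []
    while state not in seen:
        seen[state] = len(traj)
        traj.append(state)
        state = step(state, weights, thresholds)
    return traj[seen[state]:]

# Hypothetical 3-gene toy network: gene 0 activates 1, represses 2, etc.
W = [[0, 1, -1],
     [1, 0, 1],
     [0, -1, 0]]
TH = [1, 1, 1]
print(attractor((1, 0, 0), W, TH))   # reaches the all-off fixed point: [(0, 0, 0)]
```

Robustness questions like those of [545] then amount to asking how such attractors change under perturbations of the weights, the thresholds, or the update schedule.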

6.5 Visibility and Attractivity

Prizes and Awards: Pascal Koiran was nominated a junior IUF member in February.

Awarded communications: Pascal Koiran won the ISSAC 2005 distinguished paper award for his joint work with Erich Kaltofen [557]. E. Thierry won the VALUETOOLS 07 best paper award for his joint work with A. Bouillard, B. Gaujal and S. Lagrange [542].

Chairing and organization of conferences: Natacha Portier has been in charge since 2003 of the Ecoles jeunes chercheurs en informatique mathématique (formerly: Ecoles jeunes chercheurs en algorithmique et calcul formel). The last four editions of this summer school were held in Bordeaux (2006), Nancy (2007), Marseille (2008) and Clermont-Ferrand (2009).

Contribution to the Scientific Community

Management of Scientific Organisations: The team participates in the activities of the GDR Informatique Mathématique. With about 1200 members, this is the largest of all GDRs. In particular, Natacha Portier is a member of its steering committee. E. Thierry co-created and animates the subgroup WEED (Worst End to End Delays) within the inter-GDR group AFSEC (Approches Formelles des Systèmes Embarqués Communicants).

Administration of Professional Societies

Editorial Boards: International Journal of Unconventional Computing, Jacques Mazoyer (honorary editor). Mathématiques et sciences humaines (an EHESS journal), Michel Morvan.

Organisation of Conferences and Workshops: We organized the FRAC meeting (France Automates Cellulaires) at ENS Lyon in 2006 and 2008 (50 participants each time). Natacha Portier organized with Isabelle Collet a meeting on femmes et informatique théorique (women and theoretical computer science; Lyon, 17-18 November 2006). E. Thierry participated in the organization of the conference Stochastic Networks 08 (June 2008).
Program committee members:
- STACS (Symposium on Theoretical Aspects of Computer Science), Eric Rémila.
- CSR (Computer Science in Russia), Pascal Koiran.
- SOFSEM (International Conference on Current Trends in Theory and Practice of Computer Science), Pascal Koiran.
- JAC meeting (Journées Automates Cellulaires), Marianne Delorme and Jacques Mazoyer.
- CIAC (International Conference on Algorithms and Complexity), an Italian conference, Pascal Koiran.
- I2CS (Innovative Internet Community Systems), Nicolas Schabanel.
- Satellite IPG workshop on Interaction networks: analysis, modeling and simulation, Nicolas Schabanel.
- COSI (Colloquium on Optimization and Information Systems), an Algerian conference, Eric Thierry, 2006 and 2007.

International expertise: France Berkeley Fund, Pascal Koiran.

National expertise: Gilles Kahn thesis prize (a French prize awarded to the best doctoral dissertations in computer science), Pascal Koiran. Digiteo, Pascal Koiran. Pascal Koiran, Nicolas Schabanel and Eric Thierry served as experts for the ANR. Eric Thierry served on the hiring committee ("commission de spécialistes") of Univ. Clermont 2 (Computer Science, 2008). Pascal Koiran served on three hiring committees in May 2009, at ENS Lyon, Univ. Paris Sud 11 (Orsay) and IUT Orsay.

Public Dissemination: Natacha Portier was a guest of the talk show Cité Campus on TLM, a local TV station, in February 2008 and March. Natacha Portier organizes visits by high school students to ENS Lyon for a day of discovery of science and scientific professions; funding is provided by the Région Rhône-Alpes. In 2008 Michel Morvan was the guest of La tête au carré, a scientific program on the public radio station France Inter. Marianne Delorme and Jacques Mazoyer published an article [587] on Interstices, a web site dedicated to the dissemination of computer science research to the general public.

6.6 Contracts and grants

External contracts and grants (Industry, European, National)

France-Denmark cooperation program: This program funded a six-month visit by a Danish PhD student (Uffe Flarup, from the University of Southern Denmark, Odense) to the MC2 team. Uffe worked with Pascal Koiran and Laurent Lyaudet [553, 554]. The program also partially funded a one-month visit by Klaus Meer, Uffe's advisor, which resulted in a joint paper with Pascal Koiran [563].

ATIP CNRS Locoglobo: This CNRS action spécifique funded Nicolas Schabanel's research on the computational study of the emergence of global properties from local interactions.

ATIP jeunes équipes du CNRS: Michel Morvan was awarded an ATIP from CNRS to help create the complex systems institute (IXXI).
KAA: Michel Morvan participated in the KAA project on the diffusion of trust in ad-hoc networks. This project was funded by the ACI Sécurité.

ANR Sycomore: This ANR program on complex systems and models of computation was coordinated by Bruno Durand (LIF, Marseille). The local researcher in charge was Jacques Mazoyer.

ANR Mécamerge: This programme ANR blanc on the emergence of structures in systems with local interactions was coordinated by Jacques Mazoyer, and later by Eric Rémila.

ANR SubTile: This project on substitutions and tilings, coordinated by Pierre Arnoux (Marseille), will begin in September. The Lyon node is coordinated by Johannes Kellendonk and the Amiens node by Fabien Durand.

ANR MINT: This project on Models of Influence and Networks: Theory, coordinated by Agnieszka Rusinowska (Lyon), will begin in September. The Saint-Etienne node is coordinated by Philippe Solal and the Paris node by Michel Grabish.

ANR Carpelle Virtuel: This three-year ANR blanc project, completed in June 2009, was dedicated to the modeling of plant morphogenesis and was coordinated by Jann Traas from the RDP (Reproduction des plantes) lab of ENS Lyon. Participants from LIP included Michel Morvan and the recruited researcher Camilo La Rota.

ANR Dynanets: Eric Boix is a work package leader within this European research project (FET Open program), which studies dynamical networks within hierarchical models adapted to immunology. The project began in June 2009; it will host Arnaud Grignard and Mathieu Malaterre as recruited software engineers.

ANR GeneShape: Eric Boix participates in this research project dedicated to the understanding of gene feedback mechanisms (for plants and animals). This three-year project (started June 2009) hosts Camilo La Rota as a recruited researcher.

PACCAP Marie Curie fellowship: Thanks to the Marie Curie fellowship Problems in Algebraic Complexity and Complexity of Algebraic Problems, Natacha Portier will visit Toronto for two years. This is an international outgoing fellowship for career development from the PEOPLE program. Natacha Portier will first visit the Fields Institute and then the computer science department of the University of Toronto, starting in September.

IUF: Pascal Koiran was nominated a junior IUF member in February. IUF membership comes with an annual grant and a reduced teaching load.

IXXI: Eric Rémila obtained 5000 euros from IXXI, the complex systems institute, for the JAC 08 conference. He was also supported for two joint research projects with Creuset (Centre de Recherches Economiques de l'Université de Saint-Etienne): Selfish routing and cooperative approaches (2000 euros, 2007) and Cooperation, allocation and complexity (5000 euros, 2008).

INRIA ARC Coinc: Eric Thierry was involved in a two-year action on Computational Issues in Network Calculus headed by B. Gaujal (MESCAL, INRIA Grenoble). The project partners were INRIA Grenoble, INRIA Rennes, INRIA Nancy, INRIA Sophia-Antipolis, ENS Lyon, and Angers University. The aim of the project was the development of a software tool implementing the basic Network Calculus operations [601].
ONERA subcontracting: Laurent Jouhet and Eric Thierry were involved in this one-year (2009) project, headed by A. Bouillard (DISTRIBCOM, INRIA Rennes). The project's goal was to provide a report comparing the various Network Calculus models encountered in the literature.

ANR PEGASE: This three-year project, PErformances GAranties dans les Systèmes Embarqués (performance guarantees in embedded systems), will begin in the fall. It will be headed by M. Boyer (ONERA Toulouse). The LIP participants will be Laurent Jouhet and Eric Thierry. The project partners are ENS Lyon, ENS Cachan/Rennes, INRIA Grenoble, ONERA, Thalès and RealTimeAtWork. The aim of the project is to deepen Network Calculus theory (a queuing theory for worst-case performance evaluation), to make it effective by designing algorithms, to implement them in a software tool, and to validate the results on critical real-time embedded networks from aerospace engineering.

Research Networks (European, National, Regional, Local)

Mathlogaps: Pascal Koiran was in charge of the LIP node of the European Marie Curie EST network Mathlogaps (Mathematical logic and applications). The coordinator was Dugald MacPherson (Leeds). Mathlogaps ran for four years, until August 31, 2008, and funded two PhD theses at LIP, in the Arénaire (Chaves) and Plume (Zunic) teams. The network also funded two long visits (several months each) by two PhD students (Landes, from Manchester, and Yao, from the Chinese Academy of Sciences) to the MC2 team. These two students worked on nonadaptive quantum computing [562].

Morphex: Michel Morvan was the coordinator of the European project Morphex (Morphogenesis and gene regulatory networks in plants and animals: a complex systems modelling approach). Morphex was a Specific Targeted Research Project (STREP) in the NEST Pathfinder initiative.

ONCE-CS: Michel Morvan was in charge of a workpackage in the coordination action ONCE-CS (Open Network of Centres of Excellence in Complex Systems) of the 6th Framework Programme (FP6).
Small worlds: Nicolas Schabanel was co-chair, with Prof. Moni Naor, of the working group "Small worlds" of the European action COST 295 "Dynamic Communication Networks: Foundations and Algorithms" (DYNAMO), starting in 2005.

Internal Funding

6.7 International collaborations resulting in joint publications

With grants:

From 2006 to 2008, Eric Rémila was in charge of an Ecos program which funded trips for Chilean researchers (from Santiago) and for French researchers (from LIFO, in Orléans, and from LIP). This is part of a longstanding cooperation between MC2 and colleagues from Chile. The resulting publications in the current reporting period are [500, 537, 540].

Alexander Chistov visited MC2 during a trip to France funded by a CNRS cooperation program with Russia. This visit resulted in [480].

Klaus Meer, at the time a professor at the University of Southern Denmark, and his doctoral student Uffe Flarup visited us thanks to a French-Danish cooperation program. The resulting papers are [553, 563, 554].

With joint publications: An informal collaboration between Pascal Koiran and Erich Kaltofen (North Carolina State University) resulted in the award-winning paper [557], a follow-up paper [558] and a third paper [559] on a different topic.

6.8 Software production and Research Infrastructure

Software Descriptions

A: Simulation platform for complex systems. Some information on this platform can be found in the corresponding section above. It should be noted that a related startup is being created by Eric Boix.

B: CimulA. CimulA [602], a cellular automata analyzer, is software for the simulation of asynchronous cellular automata. It has been used mostly for the team's research on that model of computation [584].

C: COINC (Computational Issues in Network Calculus). The COINC software [601] implements the basic operations of Network Calculus. This implementation effort was supported by the ARC COINC. The code was mostly written by Sébastien Lagrange from ISTIA (Angers).
Our team (Eric Thierry) contributed algorithmic ideas.

Contribution to Research Infrastructures

6.9 Industrialization, patents, and technology transfer

Patents (French, European and Worldwide)

Creation of Startups: Eric Boix is creating a startup, Cosmo, which should begin its operations around spring. The company will sell services related to our platform for the modeling and simulation of complex systems (more on this platform in the corresponding section above). The platform code will be open-source.

Consulting Activities

6.10 Educational Activities

Supervision of Educational Programs: Natacha Portier was in charge of the L3 year of the computer science curriculum at ENS Lyon (licence d'informatique fondamentale). Pascal Koiran was deputy director of the MathIF (Mathématiques et Informatique Fondamentales) doctoral school from September 2005 to December 2006, and its director from January 2007 to December.

Teaching: Complete information is available for current team members only.

Name             Course title (short)            Level              Institution   Hours  Years
N. Portier       Quantum Information             L3                 ENS Lyon      17     1
N. Portier       Logic and Complexity            M1                 ENS Lyon      24     1
N. Portier       Probabilities and Algorithms    L3                 ENS Lyon      30     1
N. Portier       Quantum Computing               M2                 ENS Lyon      45     2
N. Portier       Seminar for the students        L3 and M1          ENS Lyon      48     3
N. Portier       Foundations of CS 2             L3                 ENS Lyon      45     2
N. Portier       Computational Complexity        M1                 ENS Lyon      48     1
E. Thierry       Algorithms 2                    L3                 ENS Lyon      45     1
E. Thierry       Graph theory                    L3                 ENS Lyon      45     1
E. Thierry       Graph theory                    M1                 ENS Lyon      45     2
E. Thierry       Ordered structures              M1                 ENS Lyon      45     1
E. Thierry       Option Informatique             Prépa agrégation   ENS Lyon      90     1
E. Rémila        Math for business               GEA 1              IUT Roanne    90     3
E. Rémila        Tilings                         M2                 ENS Lyon      45     2
E. Rémila        Tilings                         D                  ED MathIF     30     1
P. Koiran        Probabilities and algorithms    L3                 ENS Lyon      45     1
P. Koiran        Constraint satisfaction         M2                 ENS Lyon      24     1
P. Koiran        Algorithms 2                    L3                 ENS Lyon      45     2
P. Koiran        Algebraic complexity            M2                 ENS Lyon      45     1
P. Koiran        Computational complexity        M1                 ENS Lyon      30     3
P. Koiran        Structural complexity           M1                 ENS Lyon      45     1
M. Morvan        Foundations of CS 1             L3                 ENS Lyon      45     3
M. Morvan        Intro to CS for math/physics    M2                 ENS Lyon      36     1
M. Morvan        Discrete dynamical systems      M2                 ENS Lyon      36     1
J. Mazoyer       Foundations of CS 2             L3                 ENS Lyon      45     2
J. Mazoyer       Decidability                    L3                 ENS Lyon      45     1
J. Mazoyer       Cellular automata               M2                 ENS Lyon      36     2
J. Mazoyer       Tilings and CA                  M2                 ENS Lyon      45     1
M. Delorme       Computational complexity        M1                 ENS Lyon      45     3
M. Delorme       Automata                        L3                 ENS Lyon      45     1
O. Finkel        Automata on infinite words      M2                 ENS Lyon      45     1
A. Romashchenko  Kolmogorov complexity           M2                 ENS Lyon      45     1
N. Schabanel     Effective algorithms            L3                 ENS Lyon      45     1
N. Schabanel     Approximation algorithms        L3                 ENS Lyon

Other teaching

Doctoral dissertations completed within the team are listed in the corresponding section. Pascal Koiran gave a 6-hour course in algebraic complexity theory at a meeting of the European project Modnet which took place at University Lyon 1 in June. The audience was made up mostly of PhD students in model theory (a branch of mathematical logic) from the universities participating in the project.

Pascal Koiran and Natacha Portier were the advisors for the research internship (M2) and then the PhD of Bruno Grenet, from 1st September. Pascal Koiran was the advisor for the research internship (M2) and then the PhD of Irénée Briquel, from 1st September. Pascal Koiran was the advisor for the research internship (M2) of Muhammad Mumtaz Ahmad in spring. Eric Thierry is the advisor of Laurent Jouhet, who started his PhD in September. Eric Rémila was the advisor for the research internship (M2) of Marc Bertand.

Self-Assessment

Weak points: The team's main difficulty at this time is clearly the sharp drop in size due to the departure of several permanent members: Vincent Bouchitté (assistant professor at University Lyon 1, deceased in 2005), Marianne
Delorme (assistant professor at University Lyon 1, retired), Jacques Mazoyer (professor at the IUFM, and then at Lyon 1 after the merger of the IUFM with Lyon 1; retired), Michel Morvan (now in charge of research at Veolia Environnement), Andrei Romashchenko, Olivier Finkel and Nicolas Schabanel (all three CNRS researchers). All these departures have led to a loss of scientific expertise and leadership for our team and the LIP laboratory; they have also made the daily life of the team more difficult. In particular, we have discovered to our dismay that with the current team size, holding the MC2 seminar (groupe de travail) on a weekly basis may no longer be feasible. Some excellent candidates have applied to CNRS and ENS positions to join the team, but apparently this has not been sufficient to convince our parent institutions (tutelles) to compensate for some of these losses. Another option, besides CNRS and ENS Lyon, would be to hire assistant professors or professors from Lyon 1, our third parent institution. Despite numerous discussions with the various persons involved, and the support of the last two LIP directors, Frédéric Desprez and Gilles Villard, our efforts in this direction have so far remained unsuccessful.

Strong points: In spite of this difficult situation, we were able to maintain a good level of scientific activity. This is true for some of the longtime strengths of our team, such as algebraic complexity and tilings, and for more recent topics such as quantum computing. We have also managed to preserve some of the team's longstanding experience in cellular automata despite the retirements of Marianne Delorme and Jacques Mazoyer, thanks in particular to the development of the new research topic of probabilistic cellular automata.
Likewise, the new role played by graph-theoretic concepts such as treewidth in our work on algebraic complexity is helping to preserve some of our expertise in graph theory (indeed, the study of treewidth in France was pioneered by Vincent Bouchitté). We have also begun to work on topics that are completely new for the team, e.g. Network Calculus and applications of game theory to economics. Finally, the creation of a startup should give an enduring legacy to our work on complex systems, even though this is no longer a central topic in the team's project (more on this in the next section).

Perspectives

Keywords: algorithms, computational complexity, combinatorics.

Vision and new goals of the team: With the departure of Michel Morvan, complex systems will feature much less prominently among the team's research topics. Our project is to focus our research activities on algorithms and computational complexity theory. As explained in section 6.2, these two subjects are complementary: in the field of algorithms one tries to design and analyze efficient algorithms (that is, to obtain upper bounds), while the main goal of computational complexity is to obtain hardness results (and, ideally, unconditional lower bounds). There is another, less obvious, reason why these two subjects are complementary: the techniques of computational complexity are very often algorithmic in nature. An important example is provided by the theory of NP-completeness. To show that a certain problem B is NP-complete, one constructs a reduction from A to B, where A is a problem already known to be NP-complete. That is, to show that a certain task (deciding B) has no efficient algorithm, we design an efficient algorithm for a different task (reducing A to B). This example is of course very well known, but it may not be so well known to non-experts that plenty of other results from computational complexity theory, including recent ones, are based on algorithmic techniques.
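The reduction pattern just described can be made concrete with the classic textbook reduction from 3-SAT to INDEPENDENT-SET. The sketch below (our illustration, with literals encoded as signed integers) emphasizes that the hardness proof is itself a polynomial-time algorithm:

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Polynomial-time reduction from 3-SAT to INDEPENDENT-SET: one
    vertex per literal occurrence, a clique per clause, and an edge
    between every pair of complementary literals.  The formula is
    satisfiable iff the graph has an independent set of size
    len(clauses)."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for i, clause in enumerate(clauses):            # intra-clause cliques
        edges.update(((i, a), (i, b)) for a, b in combinations(clause, 2))
    edges.update((u, v) for u, v in combinations(vertices, 2)
                 if u[1] == -v[1])                  # complementary literals
    return vertices, edges, len(clauses)

def has_independent_set(vertices, edges, k):
    """Brute-force check, only to illustrate the equivalence."""
    adj = {frozenset(e) for e in edges}
    return any(all(frozenset(p) not in adj for p in combinations(S, 2))
               for S in combinations(vertices, k))

# (x1 or x2 or x3) and (not x1 or not x2 or x3); literal l > 0 means x_l
V, E, k = sat_to_independent_set([(1, 2, 3), (-1, -2, 3)])
print(has_independent_set(V, E, k))   # True: the formula is satisfiable
```

Running the same check on an unsatisfiable formula such as [(1,), (-1,)] yields False, as the reduction predicts (it works for clauses of any size up to three).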
For instance, it follows from the large body of work on hardness-versus-randomness tradeoffs that the two tasks of proving unconditional lower bounds and of derandomizing algorithms are essentially equivalent. The former task is therefore reduced to a task which is conceptually simpler, and in any case of a clearly algorithmic nature: designing efficient deterministic algorithms.

Besides algorithms and computational complexity, combinatorics, and in particular graph theory, is a third, complementary topic that we would like to develop. Increasing the team's combinatorial expertise will clearly be useful for the development of a successful research program in combinatorial algorithms. Combinatorics and complexity theory are also very complementary, since advanced mathematical techniques play an ever-increasing role in modern complexity theory, and combinatorics is one of the main mathematical disciplines on which complexity theory can draw. It is by no means the only mathematical discipline to play an important role in complexity theory; algebra and probability theory also spring to mind, and in fact it is becoming increasingly difficult to name a mathematical field which does not play a role in complexity theory. The choice of combinatorics seems especially appropriate because this subject is, we believe, currently underrepresented in the LIP laboratory and, more generally, at ENS Lyon.

Currently, the permanent MC2 researchers most involved in algorithms and the study of combinatorial structures are Eric Rémila and Eric Thierry. Those most involved in complexity theory are Pascal Koiran and Natacha Portier. Our top priority for the new reporting period will be to hire new team members in the three areas of algorithms, complexity theory and combinatorics (in combinatorics we will favor those subfields that are particularly relevant to
algorithms and complexity theory). We believe that the quality of our research and our strong involvement in the teaching and mentoring of doctoral students should convince our tutelles to support us in this endeavor. Looking more broadly at the "big picture" of the French higher education system, we find another reason to support team MC2, namely the important role that ENS Lyon has to play in the nationwide development of theoretical computer science and, more generally, of mathematical computer science (which could be defined as the study of computer science problems with mathematical techniques). Indeed, theoretical and mathematical computer science are fields with a high level of international competition, where a taste and a talent for both mathematics and computer science are a must. In the French higher education system, ENS Lyon is one of the very few institutions with the ability to train such students in any significant number. Their training must of course rely on research at the highest possible level. There are two main trends in theoretical computer science: the "logics and semantics" trend, and the "algorithms and complexity" trend. Track B of ICALP (arguably the main European conference in theoretical computer science) is devoted to the first trend, and the corresponding LIP team is the Plume team. Track A of ICALP is devoted to the second trend, and our ambition is to represent this "algorithms and complexity" trend at ENS Lyon. Clearly, with the current team size, there is still a long way to go. Finally, we list below some of the current team's projects for the near future.

Algebraic Complexity: From September 2009 to September 2011, most of the team's work in algebraic complexity will take place in Toronto: Pascal Koiran and Natacha Portier will participate in a thematic program on the Foundations of Computational Mathematics at the Fields Institute, and will visit the computer science department of the University of Toronto.
Natacha Portier's visit is made possible by the European fellowship PACCAP (Problems in Algebraic Complexity and Complexity of Algebraic Problems). Pascal Koiran's visit is made possible by his IUF membership and additional support from the Fields Institute. We give below some examples of problems on which we would like to work.

Our first example deals with the worst-case complexity of systems of polynomial equations; we plan to study this problem in the boolean model of computation, at least initially. Work by Shub and Smale, and more recently by Beltrán and Pardo, shows that the approximation of roots of systems of n polynomial equations in n complex variables has polynomial average-case complexity. Their results are based on homotopy (Newton) methods. Strangely enough, it seems that in the worst-case setting very little is known about the complexity of this fundamental problem: we have neither an efficient algorithm nor a hardness result (note, however, that if the number of equations is not constrained to be equal to the number of variables, it is an easy and well-known result that the problem becomes NP-hard). We conjecture that this problem is hard in the worst-case setting, and we will work towards establishing this result. One explanation for the current lack of hardness results is that the decision version of the problem always has a positive answer: a system of n polynomial equations in n complex variables always has solutions in projective space. It seems impossible to establish NP-hardness for such problems. In the 1990s, this observation led Papadimitriou to the systematic study of TFNP (Total Function NP) search problems. In the last few years there has been renewed interest in this line of work, due in particular to the extensive work on computational game theory. As is well known, the approximation of Nash equilibria is a central problem in computational game theory, and it is a prominent example of a total search problem.
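To give a concrete flavor of the homotopy (Newton) methods mentioned above, here is a toy univariate path-tracking sketch. The polynomials, the constant `gamma` and all function names are illustrative choices of ours, not taken from the cited works: a root of a start system with known roots is continued to a root of the target system by small steps in t, each followed by a few Newton corrections.

```python
import cmath

def newton_refine(f, df, z, steps=3):
    # A few plain Newton corrections: z <- z - f(z)/df(z).
    for _ in range(steps):
        z = z - f(z) / df(z)
    return z

def track_root(p, dp, q, dq, z0, n_steps=200):
    # Follow a root of q along H_t(z) = (1-t)*gamma*q(z) + t*p(z) as t: 0 -> 1.
    # The generic complex constant gamma is the usual trick to keep the
    # intermediate systems nonsingular along the path (illustrative choice).
    gamma = 0.6 + 0.8j
    z = z0
    for k in range(1, n_steps + 1):
        t = k / n_steps
        h = lambda x: (1 - t) * gamma * q(x) + t * p(x)
        dh = lambda x: (1 - t) * gamma * dq(x) + t * dp(x)
        z = newton_refine(h, dh, z)
    return z

# Target "system" (a single cubic) and a start system with known roots.
p, dp = (lambda z: z**3 - 2*z + 2), (lambda z: 3*z**2 - 2)
q, dq = (lambda z: z**3 - 1), (lambda z: 3*z**2)
starts = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1
roots = [track_root(p, dp, q, dq, z0) for z0 in starts]
```

The average-case results of Shub and Smale and of Beltrán and Pardo bound the number of such corrected steps for random start systems; the worst-case cost of root approximation is precisely the open question raised above.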
Our goal will therefore be to establish a hardness result, or even a completeness result, for an appropriate class of total search problems.

A related example is the use of condition numbers in complexity estimates. There is a significant body of work on algorithms for systems of polynomial equations that behave well from a complexity-theoretic point of view when the input system is well-conditioned. Such results have been obtained for instance by Shub and Smale in the complex case, and by Cucker, Krick, Malajovich and Wschebor in the real case. It is not clear how far these results are from optimality since, as in our first example, no lower bound (or completeness result) that would take the condition number into account seems to be known. We will try to obtain such results. Most likely, they would take the following form: if certain discrete complexity classes are distinct, then certain problems about systems of polynomial equations do not admit algorithms with too well-behaved a dependency on the condition number (and on other parameters such as the number of variables).

Our third example is Linear Programming. For discrete (rational) data, this problem can be solved in polynomial time by the ellipsoid method or by interior point methods. For real-valued data it is known that well-conditioned instances can be solved efficiently, but it is a longstanding open question whether this problem has polynomial worst-case algebraic complexity. We plan to study the connections of this problem to other open problems of algebraic complexity, such as the complexity of evaluating the permanent. In fact, it is already known that Linear Programming has polynomial worst-case algebraic complexity under the extremely strong hypothesis that the permanent has polynomial-size arithmetic circuits and that P=PSPACE.
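For readers unfamiliar with the permanent invoked in this hypothesis: it is defined like the determinant but without the alternating signs, and it can always be evaluated in exponential time, for instance by Ryser's inclusion-exclusion formula. The sketch below (function name ours) is the standard textbook form of that formula:

```python
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's formula:
    perm(M) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} M[i][j])."""
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

The formula runs over 2^n - 1 subsets, which is fine for small n but exponential in general; whether the permanent admits polynomial-size arithmetic circuits is essentially Valiant's VP versus VNP question.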
As explained above, the conjunction of these two hypotheses implies that all problems that can be solved in parallel polynomial time over the reals, including Linear Programming, can in fact be solved in polynomial time. But Linear Programming arguably has a lower

complexity than a generic problem in the class of parallel polynomial time problems. One can therefore hope that a similar transfer theorem can be established for this specific problem under a weaker hypothesis.

Our last example is MaxFLS: given a system of linear equations, find a feasible subsystem of maximum cardinality. It has been known at least since the work of Amaldi and Kann that this problem is NP-hard in the boolean setting. As in the previous example, it can be shown that this problem has polynomial algebraic complexity under the two hypotheses that the permanent has polynomial-size arithmetic circuits and that P=PSPACE. Again, we would like to obtain a transfer theorem under a weaker hypothesis (e.g., by getting rid of the hypothesis P=PSPACE). Such results would perhaps help shift the focus of research in algebraic complexity theory back from decision problems to evaluation problems, as they would show that it is not more difficult to prove a lower bound for the permanent than to prove a lower bound for certain decision problems. In any case, we note that interest in the permanent polynomial within the complexity theory community is currently on the rise. For instance, it is a central object of study in Mulmuley's Geometric Complexity Theory program, and in the work by Kabanets and Impagliazzo on hardness versus randomness tradeoffs.

Quantum Computing

As explained above, Natacha Portier and Pascal Koiran will spend the next two years in Toronto, where they should work mostly on non-quantum topics. Our team will nonetheless remain quite active in quantum computing thanks to the arrival in September 2009 of Pablo Arrighi (associate professor at Université Joseph Fourier in Grenoble, visiting LIP on a délégation CNRS) and Jonathan Grattage (on an ATER position). Pablo Arrighi's current research is devoted mostly to quantum cellular automata (QCA).
This used to be a rather messy topic, since there are several competing definitions of QCA. Pablo Arrighi and his coworkers have done a lot to clarify the situation by studying the relations between these definitions. They have in particular shown the equivalence between certain operational and axiomatic definitions. These results are the quantum counterparts of Hedlund's theorem, a well-known theorem from classical CA theory which establishes the equivalence between the operational definition of classical cellular automata (a line of locally interacting finite automata) and their axiomatic definition as continuous shift-invariant mappings on the space of configurations. Universal QCA are another topic of interest on which Arrighi and Grattage have collaborated. Their arrival in the team should be mutually beneficial: they will benefit from our expertise in classical and probabilistic CA, and current members will benefit from their expertise in QCA. A promising topic to study during their visit is the extension to open quantum systems of the foundational work done by Arrighi and his coworkers for QCA (the QCA that they have studied should be viewed as examples of closed quantum systems, since their global evolution rules are unitary). One possible starting point would be to try and obtain a Hedlund-style theorem for probabilistic cellular automata (apparently, no such theorem is currently known).

Analysis of discrete dynamical systems

This theme covers the analysis of several models sharing the fact that configurations are discrete (that is, composed of a finite or countable number of elementary pieces, each in a state which may change) and that the dynamics is given by local updating rules (when updating a local state, the rule only takes neighboring states into account). The time at which updates occur can be discrete or continuous, and there are usually many different update regimes which can be considered.
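To fix ideas, here is a minimal instance of such a model: a one-dimensional cellular automaton over binary states, updated synchronously, where each cell's new state depends only on its immediate neighborhood (the code and the choice of rule 90 are purely illustrative):

```python
def step(config, rule):
    """One synchronous update of an elementary cellular automaton on a
    cyclic configuration: each cell reads only (left, self, right)."""
    n = len(config)
    return [(rule >> (4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n])) & 1
            for i in range(n)]

config = [0] * 7 + [1] + [0] * 7   # a single live cell
for _ in range(3):
    config = step(config, 90)      # rule 90 = XOR of the two neighbors
```

Replacing the synchronous loop by updates of randomly chosen single cells yields the asynchronous regimes discussed below; Hedlund's theorem characterizes exactly the synchronous maps of this local, shift-invariant form as the continuous shift-commuting maps on configuration space.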
Some of these regimes are deterministic, others probabilistic, or even driven by adversaries or external parameters. Beyond the observation of trajectories (that is, the sequences of updated configurations), for which simulation can be useful, the general objective is usually the prediction, and possibly the control, of such dynamics. This concerns both the asymptotic and the transitory behavior of the trajectories. Eric Rémila and Eric Thierry are involved in the study of such models:

Self-assembly tilings: in this recent model introduced by Winfree, tiles are squares (resp. cubes) that aggregate on a 2D (resp. 3D) grid. Each type of tile has different types of glues on its borders, allowing tiles to stick together when the strengths of the adjacent glues exceed some threshold given by an external parameter, the temperature. This model generalizes the classical Wang tiles with colored borders, and the dynamical point of view consists in considering the construction of tilings as a growth process from initial seeds. Among the questions, we wish to study expressiveness: given a set of tiles, which shapes or patterns can be generated? And, conversely, given a language of shapes, is it possible to generate them with a set of tiles? How can one characterize sets of tiles which can tile the whole plane or space? With regard to the dynamics, these questions concern the asymptotic behavior. Other questions concern the transitory part, like evaluating the time needed to generate a given pattern, or the influence of the temperature.

Cellular automata: we wish to continue the study of cellular automata under dynamics different from the classical synchronous updates. Allowing different update schemes, such as probabilistic ones, makes it possible to model asynchronism or faulty behaviors. There exist many open problems, some that we have identified when studying particular rules like Minority, and some listed in A. Toom's book Cellular Automata with Errors: Problems

for Students of Probability. A collaboration has been initiated with J. Mairesse (LIAFA, Paris 7) about the form of stationary distributions for probabilistic updates in 1D cellular automata, which are directly linked to the enumeration of some combinatorial objects (directed animals).

Gene networks: M. Noual's PhD thesis, co-advised by J. Demongeot and E. Rémila, will begin in Fall 2009 on gene networks: we will study attractors, limit cycles and robustness according to the chosen update scheme. The dynamics is strongly dependent on the topology of the gene network, that is, on the structure of its underlying graph.

Our approach will put an emphasis on the combinatorial aspects of those models, and computational complexity will also bring some new insights by quantifying the difficulty of predicting properties of the dynamics. Here are some known results which give the flavor of what we may look for: the ergodicity of 1D cellular automata subject to probabilistic i.i.d. errors is undecidable (A. Toom); finding the state of a vertex at time t for a boolean network with the Majority rule is PSPACE-complete (C. Moore), which means that one likely cannot parallelize a simulation to speed up the prediction at time t. With such a point of view, this theme fits the team's research orientation, re-centered on algorithms, combinatorics and computational complexity. Note that in the literature, a new banner called unconventional computing encompasses those dynamics which have a strong computational power (universality for tilings and cellular automata).

Algorithmics of discrete event systems

Network Calculus is a theory for the worst-case performance evaluation of particular discrete event systems. Its formalism, based on the (min, +) tropical algebra, has attracted a number of theoretical works on communication networks, and an important application is the analysis of critical embedded networks like the ones found in aerospace.
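The central operator of this (min, +) formalism is the (min, +) convolution of two curves, (f ⊗ g)(t) = min over 0 ≤ s ≤ t of f(s) + g(t − s). A small discretized sketch (integer time steps; naming is ours, not the team's software):

```python
def minplus_convolution(f, g):
    """(f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s), on integer times.
    In Network Calculus, convolving an arrival curve with a service curve
    bounds the traffic at the output of a network element."""
    horizon = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(horizon)]
```

As written, the operation is quadratic in the time horizon; exploiting the piecewise-linear, ultimately periodic shape of the curves used in practice is exactly the kind of algorithmic question the team intends to study.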
It is believed that this theory will be an important alternative to model checking for those particular worst-case evaluations, thanks to a better scalability. However, this remains to be demonstrated, and users currently suffer from the absence of software tools implementing the results of this theory. The main reason is that the algorithmics corresponding to the mathematical solutions of the literature has not been much studied. Our team has started to propose some algorithms and will carry on this study. Eric Thierry has obtained ANR funding (project PEGASE), starting in Fall 2009, to reinforce the team on this topic by recruiting a post-doctoral fellow as well as a research engineer. The development of algorithms for Network Calculus is likely to require results from (min, +) algorithmics (due to the formalism, which approximates real systems by (min, +)-linear models), graph algorithmics (due to the importance of the topology of the models), linear algebra (due to the reduction of some (min, +)-linear operators to (+, ×)-linear ones), number theory (due to some periodicity issues), and linear programming and more generally computational geometry (in order to deal with combinations of constraints, which are usually piecewise linear functions). Many problems are open concerning the computational complexity of Network Calculus analyses. For instance, for such models of communication networks, it is not known whether the stability of the network (that is, the fact that the amount of data in the network remains bounded) is decidable. Dealing with such theoretical issues is, in our opinion, imperative in order to design a good software tool.
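As an illustration of the worst-case bounds such a tool would compute, here is a brute-force numeric sketch (the sampling approach and function names are ours; a real tool would manipulate piecewise-linear curves exactly): for a token-bucket arrival curve α and a rate-latency service curve β, the worst-case backlog is the maximal vertical distance sup_t (α(t) − β(t)), and the worst-case delay is the maximal horizontal distance between the two curves.

```python
def max_backlog(alpha, beta, horizon, step=1.0):
    # Worst-case backlog = maximal vertical distance sup_t alpha(t) - beta(t).
    t, best = 0.0, 0.0
    while t <= horizon:
        best = max(best, alpha(t) - beta(t))
        t += step
    return best

def max_delay(alpha, beta, horizon, step=1.0):
    # Worst-case delay = maximal horizontal distance between the curves
    # (a coarse overestimate at the chosen step granularity).
    best, t = 0.0, 0.0
    while t <= horizon:
        d = 0.0
        while alpha(t) > beta(t + d) and d <= horizon:
            d += step
        best = max(best, d)
        t += step
    return best

# Token-bucket arrival (burst 2, rate 1) and rate-latency service (rate 3, latency 1).
alpha = lambda t: 2.0 + 1.0 * t
beta = lambda t: max(0.0, 3.0 * (t - 1.0))
```

On this instance the exact bounds are backlog b + rT = 3 and delay T + b/R = 5/3; the sampled versions return 3.0 and the grid-rounded 2.0, which already shows why exact piecewise-linear algorithmics, rather than sampling, is needed.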
Beyond Network Calculus, the team wishes to develop its expertise on the algorithmics of discrete event systems by investigating other discrete event systems, like Petri nets or stochastic extensions of Network Calculus, and by studying the precise complexity of some underlying problems, like the computation of (min, +)-convolutions, a quite common operation which has not been much studied.

Publications

International and national peer-reviewed journals [ACL]

[480] A. Chistov, H. Fournier, P. Koiran, and S. Perifel. On the construction of a family of transversal subspaces over finite fields. Linear Algebra and its Applications, 429(2-3).
[481] Anne Benoit, Eric Thierry, and Yves Robert. On the complexity of mapping linear chain applications onto heterogeneous platforms. Parallel Processing Letters, 19(3).
[482] Valérie Berthé and Bertrand Nouvel. Discrete rotations and symbolic dynamics. Theoretical Computer Science, 380(3).
[483] V. Blondel, S. Gaubert, and N. Portier. The set of realizations of a max-plus sequence is semi-polyhedral. Journal of Computer and System Sciences, to appear.
[484] V. Blondel, E. Jeandel, P. Koiran, and N. Portier. Decidable and undecidable problems about quantum automata. SIAM Journal on Computing, 34(6), 2005.

[485] O. Bodini, T. Fernique, and E. Rémila. A characterization of flip-accessibility for lozenge tilings of the whole plane. Information and Computation, 206.
[486] A. Bouillard, B. Gaujal, S. Lagrange, and E. Thierry. Optimal routing for end-to-end guarantees using network calculus. Performance Evaluation, 65(11-12).
[487] A. Bouillard and E. Thierry. An algorithmic toolbox for network calculus. Discrete Event Dynamic Systems, 18(1):3-49.
[488] P. Charbit, E. Jeandel, P. Koiran, S. Périfel, and S. Thomassé. Finding a vector orthogonal to roughly half a collection of vectors. Journal of Complexity, 24(1):39-53.
[489] F. Chavanon, M. Latapy, M. Morvan, E. Rémila, and L. Vuillon. Configurations induced by discrete rotations: periodicity and quasiperiodicity properties. Theoretical Computer Science, 346.
[490] F. Chavanon and E. Rémila. Rhombus tilings: decomposition and space structure. Discrete and Computational Geometry, 35.
[491] M. Delorme and J. Mazoyer. Examples of cellular automata classes. RAIRO Theoretical Informatics and Applications, 42(1):37-54.
[492] H. Derksen, E. Jeandel, and P. Koiran. Quantum automata and algebraic groups. Journal of Symbolic Computation, 39(3-4), special issue on the occasion of MEGA.
[493] S. Desreux and E. Rémila. An optimal algorithm to generate tilings. Journal of Discrete Algorithms, 4.
[494] P. Duchon, N. Hanusse, E. Lebhar, and N. Schabanel. Could any graph be turned into a small world? Theoretical Computer Science, 355(1):96-103, special issue on complex networks.
[495] N. Fatès and M. Morvan. An experimental study of robustness to asynchronism for elementary cellular automata. Complex Systems, 16(1):1-27.
[496] N. Fatès, M. Morvan, N. Schabanel, and E. Thierry. Fully asynchronous behavior of double-quiescent elementary cellular automata. Theoretical Computer Science, 362(1-3):1-16.
[497] O. Finkel. An example of a Π^0_3-complete infinitary rational relation.
Computer Science Journal of Moldova, 15(1):3-21.
[498] O. Finkel. Topological complexity of locally finite omega languages. Archive for Mathematical Logic, 47(6).
[499] O. Finkel and S. Todorcevic. Local sentences and Mahlo cardinals. Mathematical Logic Quarterly, 53(6).
[500] A. Gajardo and J. Mazoyer. One head machines from a symbolic approach. Theoretical Computer Science, 370(1-3):34-47.
[501] M. Habib, D. Kelly, E. Lebhar, and C. Paul. Can transitive orientation make sandwich problems easier? Discrete Mathematics, 307(16), special issue on EuroComb 03.
[502] P. Jensen, J.-B. Rouquier, and Y. Croissant. Testing bibliometric indicators by their prediction of scientists' promotions. Scientometrics, 78(3), March.
[503] P. Jensen, J.-B. Rouquier, P. Kreimer, and Y. Croissant. Scientists who engage with society perform better academically. Science and Public Policy, 35(7), August.
[504] P. Koiran, J. Landes, N. Portier, and P. Yao. Adversary lower bounds for nonadaptive quantum algorithms. Journal of Computer and System Sciences, special issue on Wollic 08, to appear.
[505] P. Koiran, V. Nesme, and N. Portier. The quantum query complexity of the abelian subgroup problem. Theoretical Computer Science, 380(1-2), special issue on the occasion of ICALP.
[506] P. Koiran and S. Perifel. The complexity of two problems on arithmetic circuits. Theoretical Computer Science, 389, 2007.

[507] E. Lebhar and N. Schabanel. Close to optimal decentralized routing in long-range contact networks. Theoretical Computer Science, 348(2-3), special issue on ICALP.
[508] E. Lebhar and N. Schabanel. Routage dans les petits mondes. Interstices (INRIA). Popularization article.
[509] G. Malod and N. Portier. Characterizing Valiant's algebraic complexity classes. Journal of Complexity, 24(1). Journal version of [570].
[510] M. Morvan, E. Rémila, and E. Thierry. A note on the structure of spaces of domino tilings. Discrete Mathematics, 307.
[511] B. Nouvel and E. Rémila. Configurations induced by discrete rotations: periodicity and quasiperiodicity properties. Discrete Applied Mathematics, 147.
[512] E. Rémila. Tiling a polygon with two kinds of rectangles. Discrete and Computational Geometry, 34.

Invited conferences, seminars, and tutorials [INV]

[513] Olivier Finkel and Dominique Lecomte. On the topological complexity of ω-powers. Dagstuhl Seminar on "Topological and Game-Theoretic Aspects of Infinite Computations", Dagstuhl, Germany.
[514] P. Koiran. Algebraic versions of the P=NP problem. Invited talk at the midterm conference of the European network MODNET (model theory network), Antalya, Turkey, November.
[515] P. Koiran. Algebraic versions of the P=NP problem. Invited lecture at CSR 2006 (International Computer Science Symposium in Russia), Saint Petersburg, June 2006.
[516] P. Koiran. Decision versus evaluation in algebraic complexity theory. Invited talk at the session on Complexity and Computability in Analysis, Geometry, and Dynamics of the winter 2006 CMS meeting, Toronto, December 2006.
[517] P. Koiran. Decision versus evaluation in algebraic complexity theory. Invited talk at the MCU 2007 conference (Modèles de Calculs Universels), Orléans, September.
[518] P. Koiran.
Decision versus evaluation in algebraic complexity theory. Invited talk at the IMA workshop on Complexity, Coding, and Communications, Minneapolis, April 2007.
[519] P. Koiran. Interpolation in Valiant's theory. Invited talk at the Oberwolfach complexity theory workshop, June 2007.
[520] P. Koiran. Problèmes de décision et d'évaluation en complexité algébrique. Journées du GDR Informatique Mathématique, IHP, Paris, February 2007.
[521] P. Koiran. Decision versus evaluation in algebraic complexity theory. Invited talk at the DAIMI workshop on algebraic complexity theory, Aarhus, Denmark, September 2008.
[522] P. Koiran. On the expressive power of planar perfect matching and permanents of bounded treewidth matrices. Invited talk at the DAIMI workshop on algebraic complexity theory, Aarhus, Denmark, September 2008.
[523] P. Koiran. On the expressive power of planar perfect matching and permanents of bounded treewidth matrices. Invited talk at FoCM 2008 (Foundations of Computational Mathematics), workshop on real number algorithms, Hong Kong, June 2008.
[524] P. Koiran. Complexity theory workshop, Oberwolfach, Germany, November 2009.
[525] P. Koiran. Près de 40 ans après le théorème de Cook, où en est la complexité algorithmique ? (Une petite histoire du problème "P=NP ?"). Colloquium Jacques Morgenstern, Sophia-Antipolis, May 2009.
[526] P. Koiran. Invited talk, MCU 2010 conference (Machines, Computation, Universality), Carnegie Mellon University, September.
[527] P. Koiran. Keynote speaker at the international workshop "Logical approaches to barriers in computing and complexity", Greifswald, Germany, February 2010.

[528] Natacha Portier. Algorithmes quantiques : est-ce que faire des requêtes en parallèle fait perdre du temps ? Invited talk, final meeting of the GDR Information et communication quantique, October 6-8, 2008, Paris.
[529] Natacha Portier. Bornes inférieures pour algorithmes quantiques non adaptatifs. Quantum information day, January 23, 2008, IHP, Paris.
[530] Eric Rémila. Quelques constructions sur les pavages auto-assemblants. Invited conference at the "Journées Rouennaises de Combinatoire et Algorithmique" (JORCAD 08), Rouen, France, September.
[531] Jean-Baptiste Rouquier. Cartographie des pratiques du vélo'v : le regard de physiciens et d'informaticiens. Invited talk, conference "Le métier de cartographe hier et aujourd'hui", ENSSIB, Lyon, France, May 28th.
[532] Nicolas Schabanel. Asynchronous randomized automata: how does randomness affect decentralized algorithms execution? Invited talk at the "Approximation and randomized algorithms" Dagstuhl seminar, June.
[533] Eric Thierry. Algorithmique du network calculus. Invited talk, 9th Atelier d'Évaluation de Performances, Aussois, July.

International and national peer-reviewed conference proceedings [ACT]

[534] P. Albert, E. Mayordomo, P. Moser, and S. Perifel. Pushdown compression. In Proceedings of STACS 2008, pages 39-48.
[535] R. Baron, S. Béal, E. Rémila, and P. Solal. Average tree solution for graph games. In Proceedings of the 15th International Colloquium on Structural Information and Communication Complexity (SIROCCO), 2008.
[536] F. Becker, S. Rajsbaum, I. Rapaport, and E. Rémila. Average binary long-lived consensus: quantifying the stabilization role played by memory. In Proceedings of the 15th International Colloquium on Structural Information and Communication Complexity (SIROCCO), 2008, Lecture Notes in Computer Science, Springer Verlag.
[537] F. Becker, I. Rapaport, and E. Rémila. Self-assembling classes of shapes, fast and with a minimal number of tiles.
In Proceedings of FSTTCS 2006, Lecture Notes in Computer Science, Springer Verlag.
[538] F. Becker, E. Rémila, and N. Schabanel. Time optimal self-assembly of 2D and 3D shapes: the case of squares and cubes. In Proceedings of the 14th International Conference on DNA Computing (DNA 2008), Lecture Notes in Computer Science, Springer Verlag.
[539] O. Bodini, T. Fernique, and E. Rémila. A characterization of flip-accessibility for lozenge tilings of the whole plane. In Proceedings of LATA 2007.
[540] O. Bodini, T. Fernique, and E. Rémila. Characterizations of flip-accessibility for domino tilings of the whole plane. In Proceedings of FPSAC 2007.
[541] O. Bodini, T. Fernique, and E. Rémila. Distances on lozenge tilings. In Proceedings of the 15th International Conference on Discrete Geometry for Computer Imagery (DGCI), 2009.
[542] A. Bouillard, B. Gaujal, S. Lagrange, and E. Thierry. Optimal routing for end-to-end guarantees: the price of multiplexing. In Proceedings of ValueTools 2007. Best paper award.
[543] A. Bouillard, L. Jouhet, and E. Thierry. Computation of a (min,+) multi-dimensional convolution for end-to-end performance analysis. In Proceedings of ValueTools 2008.
[544] J. Demongeot, M. Morvan, and S. Sené. Impact of fixed boundary conditions on the basins of attraction in the flower's morphogenesis of Arabidopsis thaliana. In Proceedings of the 22nd International Conference on Advanced Information Networking and Applications, IEEE Computer Society.
[545] J. Demongeot, M. Morvan, and S. Sené. Robustness of dynamical systems attraction basins against state perturbations: theoretical protocol and application in systems biology. In Proceedings of the 2008 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), IEEE Computer Society.
[546] Sandeep Dey and Nicolas Schabanel. Customized newspaper problem: data broadcast with dependencies. In Proc.
of the 7th Latin American Theoretical Informatics conference (LATIN), volume LNCS 3887, March.

[547] P. Duchon, N. Hanusse, E. Lebhar, and N. Schabanel. Could any graph be turned into a small world? In Actes d'AlgoTel 2005.
[548] Philippe Duchon, Nicolas Hanusse, Emmanuelle Lebhar, and Nicolas Schabanel. Towards small world emergence. In Proc. of ACM Symp. on Parallel Algorithms and Architectures (SPAA), July.
[549] C. Dürr, M. Chrobak, M. Hurand, and J. Robert. Algorithms for temperature-aware task scheduling in microprocessor systems. In Proc. of the 4th International Conference on Algorithmic Aspects in Information and Management (AAIM).
[550] N. Fatès, M. Morvan, N. Schabanel, and E. Thierry. Fully asynchronous behavior of double-quiescent elementary cellular automata. In Proceedings of MFCS 2005 (30th International Symposium on Mathematical Foundations of Computer Science), Lecture Notes in Computer Science 3618.
[551] Nazim Fatès, Damien Regnault, Nicolas Schabanel, and Éric Thierry. Asynchronous behavior of double-quiescent elementary cellular automata. In Proc. of the 7th Latin American Theoretical Informatics conference (LATIN), volume LNCS 3887, March.
[552] O. Finkel and D. Lecomte. There exist some ω-powers of any Borel rank. In Proceedings of CSL 2007, Lecture Notes in Computer Science, Springer Verlag.
[553] U. Flarup, P. Koiran, and L. Lyaudet. On the expressive power of planar perfect matching and permanents of bounded treewidth matrices. In Proceedings of ISAAC 2007, Lecture Notes in Computer Science, Springer.
[554] U. Flarup and L. Lyaudet. On the expressive power of permanents and perfect matchings of matrices of bounded pathwidth/cliquewidth. In Proceedings of CSR 2008 (3rd International Computer Science Symposium in Russia).
[555] F. Fomin, F. Mazoit, and I. Todinca. Computing branchwidth via efficient triangulations and blocks. In Proceedings of WG 05 (31st Workshop on Graph-Theoretic Concepts in Computer Science), Lecture Notes in Computer Science 5344.
[556] E. Jeandel.
Topological automata. In Proceedings of STACS 2005, Lecture Notes in Computer Science, Springer Verlag.
[557] E. Kaltofen and P. Koiran. On the complexity of factoring bivariate supersparse (lacunary) polynomials. In Proceedings of ISSAC, ACM Press. Distinguished paper award.
[558] E. Kaltofen and P. Koiran. Finding small degree factors of multivariate supersparse (lacunary) polynomials over algebraic number fields. In Proceedings of ISSAC, ACM Press.
[559] E. Kaltofen and P. Koiran. Expressing a fraction of two determinants as a determinant. In Proceedings of ISSAC 2008 (International Symposium on Symbolic and Algebraic Computation), ACM Press.
[560] P. Koiran. The limit theory of generic polynomials. In Logic Colloquium 2001, volume 20 of Lecture Notes in Logic, Association for Symbolic Logic.
[561] P. Koiran and I. Briquel. A dichotomy theorem for polynomial evaluation. In Proceedings of MFCS 2009 (34th International Symposium on Mathematical Foundations of Computer Science), Springer.
[562] P. Koiran, J. Landes, N. Portier, and P. Yao. Adversary lower bounds for nonadaptive quantum algorithms. In Proceedings of Wollic 08 (15th Workshop on Logic, Language, Information and Computation), Lecture Notes in Computer Science, Springer. Long version to appear in Journal of Computer and System Sciences.
[563] P. Koiran and K. Meer. On the expressive power of CNF formulas of bounded tree- and clique-width. In Proceedings of WG 08 (34th Workshop on Graph-Theoretic Concepts in Computer Science), Lecture Notes in Computer Science, Springer.
[564] P. Koiran, V. Nesme, and N. Portier. A quantum lower bound for the query complexity of Simon's problem. In Proceedings of ICALP, Springer Verlag.
[565] P. Koiran and S. Perifel. Valiant's model: from exponential sums to exponential products. In Proceedings of MFCS 2006, Lecture Notes in Computer Science, Springer Verlag, 2006.

[566] P. Koiran and S. Perifel. VPSPACE and a transfer theorem over the complex field. In Proceedings of MFCS 2007, Lecture Notes in Computer Science, Springer Verlag.
[567] P. Koiran and S. Perifel. VPSPACE and a transfer theorem over the reals. In Proceedings of STACS 2007.
[568] P. Koiran and S. Perifel. A superpolynomial lower bound on the size of uniform non-constant-depth threshold circuits for the permanent. In Proceedings of CCC 2009 (24th IEEE Conference on Computational Complexity), IEEE.
[569] C. La Rota, F. Tarissan, and L. Liberti. Inferring parameters of genetic regulatory networks. In Jaime Amador, Carlos Paternina-Arboleda, and Jesús Velásquez, editors, Proceedings of the XIV Latino Ibero American Congress of Operational Research (CLAIO), page 127, September.
[570] G. Malod and N. Portier. Characterizing Valiant's algebraic complexity classes. In Proceedings of MFCS 2006 (31st International Symposium on Mathematical Foundations of Computer Science), Lecture Notes in Computer Science, Springer Verlag.
[571] A. Muchnik and A. Romashchenko. A random oracle does not help extract the mutual information. In Proceedings of MFCS 2008 (33rd International Symposium on Mathematical Foundations of Computer Science), Lecture Notes in Computer Science, Springer Verlag.
[572] B. Nouvel and E. Rémila. Incremental and transitive discretized rotations. In Proceedings of IWCIA 2006, Lecture Notes in Computer Science, Springer Verlag.
[573] S. Perifel. Symmetry of information and nonuniform lower bounds. In Proceedings of CSR 2007, Lecture Notes in Computer Science, Springer Verlag.
[574] V. Poupet. Cellular automata: real-time equivalence between one-dimensional neighborhoods. In Proceedings of STACS 2005, Lecture Notes in Computer Science 3404, Springer Verlag.
[575] D. Regnault. Abrupt behaviour changes in cellular automata under asynchronous dynamics.
In Proceedings of the 2nd European Conference on Complex Systems (ECCS), Oxford, UK.
[576] D. Regnault. Directed percolation arising in stochastic cellular automata analysis. In Proceedings of the 33rd International Symposium on Mathematical Foundations of Computer Science, volume 5162 of LNCS.
[577] D. Regnault. Quick energy drop in stochastic 2D minority. In Proceedings of the 8th International Conference on Cellular Automata for Research and Industry (ACRI 2008), volume LNCS 5191, to appear.
[578] D. Regnault, N. Schabanel, and E. Thierry. On the analysis of simple 2D stochastic cellular automata. In Proceedings of the 2nd International Conference on Language and Automata Theory and Applications, LNCS, Springer, to appear.
[579] Damien Regnault, Nicolas Schabanel, and Éric Thierry. Progresses in the analysis of stochastic 2D cellular automata: a study of asynchronous 2D minority. In Proc. of the 32nd Conf. on Mathematical Foundations of Computer Science (MFCS), volume LNCS 4708, August.
[580] J. Robert and N. Schabanel. Non-clairvoyant scheduling with precedence constraints. In Proc. of the 19th ACM/SIAM Symp. on Discrete Algorithms (SODA).
[581] Julien Robert and Nicolas Schabanel. Non-clairvoyant batch set scheduling: fairness is fair enough. In Proc. of the 15th European Symposium on Algorithms (ESA), volume LNCS 4698, October.
[582] Julien Robert and Nicolas Schabanel. Pull-based data broadcast with dependencies: be fair to users, not to items. In Proc. of the 18th ACM/SIAM Symp. on Discrete Algorithms (SODA), January.
[583] A. Romashchenko. Reliable computations based on locally decodable codes. In Proceedings of STACS 2006, Lecture Notes in Computer Science, Springer Verlag.
[584] Jean-Baptiste Rouquier and Michel Morvan. Coalescing cellular automata. In Vassil N. Alexandrov, G. Dick van Albada, Peter M. A.
Sloot, and Jack Dongarra, editors, Proceedings of the 6th International Conference on Computational Science (ICCS), volume 3993 of Lectures Notes in Computer Science, pages Springer, May 2006.

Part 6. MC2

[585] F. Tarissan, L. Liberti, and C. La Rota. Biological regulatory network reconstruction: a mathematical programming approach. In Proceedings of the European Conference on Complex Systems (ECCS 08).
[586] G. Theyssier. How common can be universality in cellular automata? In Proceedings of STACS 2005, Lecture Notes in Computer Science 3404. Springer Verlag.

Short communications [COM] and posters [AFF] in conferences and workshops

Scientific books and book chapters [OS]

Scientific popularization [OV]
[587] M. Delorme and J. Mazoyer. La riche zoologie des automates cellulaires. Interstices, interstices.info/.

Book or Proceedings editing [DO]

Other Publications [AP]
[588] P. Koiran. Decision versus evaluation in algebraic complexity. In Proceedings of MCU 2007 (Modèles de Calcul Universels), Lecture Notes in Computer Science. Springer. Invited paper.
[589] V. Vazirani. Algorithmes d'approximation (2ème édition). Springer, collection IRIS. Translated by N. Schabanel.

Doctoral Dissertations and Habilitation Theses [TH]
[590] F. Becker. Géométrie pour l'auto-assemblage. PhD thesis, École Normale Supérieure de Lyon.
[591] F. Chavanon. Aspects combinatoires des pavages. PhD thesis, École Normale Supérieure de Lyon.
[592] E. Jeandel. Techniques algébriques en calcul quantique. PhD thesis, École Normale Supérieure de Lyon.
[593] L. Lyaudet. Graphes et hypergraphes : complexités algorithmique et algébrique. PhD thesis, École Normale Supérieure de Lyon.
[594] V. Nesme. Complexité en requêtes et symétries. PhD thesis, École Normale Supérieure de Lyon.
[595] B. Nouvel. Rotations discrètes et automates cellulaires. PhD thesis, École Normale Supérieure de Lyon.
[596] S. Perifel. Problèmes de décision et d'évaluation en complexité algébrique. PhD thesis, École Normale Supérieure de Lyon.
[597] V. Poupet. Automates cellulaires : temps réel et voisinages. PhD thesis, École Normale Supérieure de Lyon.
[598] D. Regnault. Sur les automates cellulaires probabilistes : comportements asynchrones. PhD thesis, École Normale Supérieure de Lyon.
[599] J.-B. Rouquier. Robustesse et émergence dans les systèmes complexes : le modèle des automates cellulaires. PhD thesis, École Normale Supérieure de Lyon, December.

Publication categories:
ACL - International and national peer-reviewed journals
INV - Invited conferences
ACT - International and national peer-reviewed conference proceedings
COM / AFF - Short communications and posters in international and national conferences and workshops
OS - Scientific books and book chapters
OV - Scientific popularization
DO - Book or Proceedings editing

AP - Other Publications
TH - Doctoral Dissertations and Habilitation Theses

Other Productions: Software [AP]
[600] E. Boix, J. Beltran, G. Chiquillo, A. Grignard, M. Morvan, M. Malaterre, and R. Uribe. Plateforme de simulation des systèmes complexes.
[601] S. Lagrange. COINC (Computational Issues on Network Calculus). Contribution by MC2 (E. Thierry) for the algorithmic part.
[602] J.-B. Rouquier. CimulA, a cellular automata analyser. Contribution by F. Becker. sourceforge.net/.


7. Plume - Proof Theory and Formal Semantics

Team: PLUME
Scientific leader: Olivier LAURENT
Evaluation Web site:
Parent Organisations: École Normale Supérieure de Lyon, Université de Lyon, CNRS

Team History: The Plume team was started by Christine Paulin, with a main focus on machine-assisted proofs and the Coq system. After Pierre Lescanne took the lead of Plume in 1997, the themes of the team were extended to encompass programming semantics and proof theory in a broader sense. Daniel Hirschkoff then became scientific leader of the team in September. In September 2008, the team started a major evolution, with the arrival of three researchers (P. Baillot, O. Laurent, A. Miquel) and the definition of new research directions including linear logic, implicit computational complexity and program extraction from classical proofs.

7.1 Team Composition

Current team members

Permanent Researchers
AUDEBAUD Philippe, Associate Professor (MdC), ENSL, arrived 01/09/1992
BAILLOT Patrick, Research Scientist (CR), CNRS, arrived 01/09/2008
DUPRAT Jean, Associate Professor (MdC), ENSL, arrived 01/09/1987
HIRSCHKOFF Daniel, Associate Professor (MdC), ENSL, arrived 01/09/1999
LAURENT Olivier, Research Scientist (CR), CNRS, arrived 01/09/2008
LESCANNE Pierre, Professor (Prof), ENSL, arrived 01/09/1997
RIBA Colin, Associate Professor (MdC), ENSL, arrived 01/09/2009

Ph. Audebaud was at the University of Nice until Sept. C. Riba joined the team on an ATER position in Sept. (see next page).

Post-docs, engineers and visitors
MIQUEL Alexandre, détachement, ENSL, arrived 01/09/2009
TRANQUILLI Paolo, postdoc, ENSL, arrived 21/09/2009
VECCHIATO Silvia, doctoral internship, Univ. Siena, arrived 01/09/2009

A. Miquel joined the team on a CNRS délégation position in Sept. (see next page).
Doctoral Students
DEMANGEON Romain, ENSL, first registered 01/09/2007
LASSON Marc, ENSL, first registered 01/09/2009
PARDON Aurélien, ENSL, first registered 01/09/2006
PETIT Barbara, ENSL, first registered 01/09/2008

Past team members

Past Members
HIRSCHOWITZ Tom, CR, CNRS, arrived 01/09/, departed 09/2007. Current position: CR, Univ. Savoie.

Past Doctoral Students (doctoral school MATHIF, ENSL)
LE ROUX Stéphane, first registered 01/09/, departed 02/2008, supervisor P. Lescanne. Current position: postdoc, LIX.
POUS Damien, first registered 01/09/, departed 08/2008, supervisor D. Hirschkoff. Current position: CR CNRS, Grenoble.
ZUNIC Dragisa, first registered 01/09/, departed 12/2007, supervisor P. Lescanne. Current position: teacher at Novi Sad, Serbia.

Past post-doctoral researchers, engineers and visitors (3 months or more)
BRIAIS Sébastien, ATER, ENSL, 01/09/ to 09/2008. Current position: postdoc, Univ. Versailles.
CHIARABINI Luca, visitor, LMU München, 29/01/ to 05/2007. Current position: PhD student, LMU.
HYM Samuel, postdoc, CNRS, 01/09/ to 01/2008. Current position: MdC, Lille.
MIQUEL Alexandre, délégation, CNRS, 01/09/ to 08/2009. Current position: détachement, ENS Lyon.
RIBA Colin, ATER, ENSL, 01/09/ to 08/2009. Current position: MdC, ENS Lyon.

Evolution of the team: The return of Ph. Audebaud from a détachement at the University of Nice in Sept. was followed by the departure of T. Hirschowitz to the University of Chambéry in Sept. In Sept. 2008, P. Baillot, O. Laurent and A. Miquel joined the team (which was then composed of 4 permanent members). This has had a major impact on the scientific goals of the Plume team, with a strong new research direction on proof theory. This evolution continued with the recruitment of C. Riba to a maître de conférences position in Sept. 2009.

7.2 Executive summary

Keywords: semantics of programming languages, computer assisted proofs, proof theory.

Research area: Researchers in the Plume team study methods for the formal analysis of computer programs and, more generally, of computing systems. More precisely, we work on the formal definition of (aspects of) programming languages, and of static analyses of programs.
We also rely on theorem proving systems to develop rigorous formalizations and proofs about mathematical theories. We build on the proofs-as-programs correspondence to strengthen the links between proof theory in logic and computer science. We deal in particular with the integration of additional programming features: concurrency, mobility, modularity, probabilistic behaviours, ...

Main goals: The main goals of the team are: to strengthen the understanding of various programming constructs (control operators, probabilistic primitives, concurrency, cryptographic aspects); to provide logical foundations for modern programming languages;

to build type systems controlling the behaviour of programs (termination, complexity bounds, dynamic modularity); to define extraction mechanisms in order to build certified software from formal proofs of specifications; to develop rigorous formalizations of mathematical theories with proof assistants like Coq.

Methodology: Our tool-set comes from mathematical logic and the semantics of programming languages. We use the Curry-Howard (i.e. proofs-as-programs) correspondence as a bridge between proof theory and computer science. In this way we are able to build mathematical reasoning about program behaviours.

Highlights: Our publications often come with a mechanized verification (as Coq contributions) of the results they contain. More recently, we have put a specific focus on the use of (variants of) linear logic and of realizability theory as unifying tools to develop the foundations of programming languages.

New up-to techniques for weak bisimulation. Reasoning formally about distributed and concurrent systems is a complex task, because it may lead to a blow-up of the number of states to be taken into account, and can involve the analysis of rich and subtle behaviours. Up-to techniques are a set of tools that have been proposed to help establish properties of equivalence between two concurrent systems (a distributed protocol and its optimized version, a specification of a server and its implementation, ...). We have proposed a uniform and modular theory of up-to techniques for weak bisimulation that captures existing proof technology and allows us to define new non-trivial techniques based on termination arguments. This work has fostered the start of a new activity on the machine-assisted formalization of proofs about concurrent systems.

Classical program extraction in Coq. The classical extraction module for Coq enriches the Coq proof assistant with a set of commands to extract programs from classical proofs.
The target language of the extraction mechanism is the λc-calculus, an extension of the λ-calculus with control primitives introduced by Jean-Louis Krivine to realize provable formulas of classical logic. This allows us to study the computational content of proofs formalized in the calculus of constructions with universes.

Expressiveness and separability of spatial logics for concurrency. Several modal logics have been proposed as specification formalisms to analyze the behaviour and the structure of concurrent processes. Among these, the Ambient Logic (AL) has been introduced to express properties of process mobility in the calculus of Mobile Ambients (MA), and as a basis for query languages on semi-structured data. We have established expressiveness properties of AL by showing how it is possible to define logical formulas that capture important properties of MA processes, related to their behaviour (how processes communicate, how they move along computation) but also to their structure (name occurrences, persistence). Building on these results, we have been able to compare the equivalence induced by the logic with other relevant equivalences on processes, and to establish (un)decidability properties of the former.

Proofs of randomized algorithms. Randomized algorithms are widely used for efficiently finding approximate solutions to complex problems (for instance primality testing) and for obtaining good average behaviour. Proving properties of such algorithms requires subtle reasoning both on algorithmic and probabilistic aspects of programs. We have defined a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It is based on the monadic interpretation of randomized programs as probabilistic distributions (Giry, Ramsey and Pfeffer).

Termination and concurrency. Termination is a fundamental property in sequential languages, but it is also important in concurrency.
For instance, termination can be used to guarantee that interaction with a resource will eventually end (avoiding denial-of-service situations), or to ensure that the participants in a transaction will eventually reach an agreement. Termination is particularly difficult to establish in the π-calculus (a commonly accepted model for concurrent and mobile computation), due to the expressiveness of this formalism. We have introduced type systems for termination based on term-rewriting techniques. Both the π-calculus (with name passing) and its higher-order extension with process passing are studied.
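The equivalence checking underlying the up-to techniques highlighted above can be illustrated at toy scale. The sketch below, a simplifying assumption in itself (strong rather than weak bisimulation, finite systems, all process names hypothetical), computes the largest bisimulation on a labelled transition system by fixpoint iteration; up-to techniques are precisely what makes such proofs tractable on real systems, by shrinking the relations one must exhibit.

```python
# Toy strong-bisimulation check on a finite labelled transition system.
# An LTS maps each state to a set of (label, successor) pairs.

def bisimilar(lts, p, q):
    states = list(lts)
    # Start from the full relation and prune pairs violating the
    # bisimulation transfer condition, until a fixpoint is reached.
    rel = {(a, b) for a in states for b in states}

    def transfer(a, b, r):
        # Every move of a must be matched by an equally-labelled move
        # of b into a related state, and symmetrically.
        return (all(any(l2 == l1 and (s1, s2) in r for (l2, s2) in lts[b])
                    for (l1, s1) in lts[a])
                and all(any(l1 == l2 and (s1, s2) in r for (l1, s1) in lts[a])
                        for (l2, s2) in lts[b]))

    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            if not transfer(pair[0], pair[1], rel):
                rel.discard(pair)
                changed = True
    return (p, q) in rel

# A server and its "optimized" version: both offer 'req' then 'ans' forever.
server = {"S0": {("req", "S1")}, "S1": {("ans", "S0")}}
opt = {"T0": {("req", "T1")}, "T1": {("ans", "T2")}, "T2": {("req", "T1")}}
lts = {**server, **opt}
print(bisimilar(lts, "S0", "T0"))  # True: same observable behaviour
print(bisimilar(lts, "S0", "S1"))  # False
```

The naive fixpoint visits the full relation repeatedly; the point of up-to techniques is to certify equivalences with much smaller relations than the one computed here.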

7.3 Research activities

In order to guarantee safety and security, programs should be developed in a controlled way, and appropriate methods should be devised. Numerous formal methods have been introduced to help programmers pursue this goal. Depending on the application domain, the appropriate notion of being controlled may strongly differ (e.g. termination vs. liveness). The spectrum of properties that such methods make it possible to address includes: security, certification, termination, resource boundedness, deadlock freedom, liveness, control of information flow, ... Inside the large body of work on formal methods, we can distinguish between static approaches (systems are analysed before execution) and dynamic approaches (analysis is done during execution). We focus on static approaches, and more precisely on type systems and formal semantics, using logical means.

Researchers in the Plume team study methods for the formal analysis of computer programs and, more generally, of computing systems. More precisely, we work on the formal definition of (aspects of) programming languages, and of static analyses of programs. We also rely on theorem proving systems to develop rigorous formalizations and proofs about mathematical theories. We build on the proofs-as-programs (Curry-Howard) correspondence to strengthen the links between proof theory in logic and computer science.

The Curry-Howard correspondence, which relates proofs and programs, provides us with a bridge between proof theory and computer science. It is a very powerful framework for technology transfer between these two research topics. In particular, formal proofs are endowed with a computational meaning (cut-elimination is evaluation) and logic gives a crucial tool to analyze the foundations of programming languages. The meaning of this correspondence is very well understood in the intuitionistic/functional case, which is the original setting.
It has been extended to classical logic and control operators (like call/cc) over the last twenty years. We are now looking for other extensions to develop logical foundations of additional programming concepts: concurrency, mobility, complexity control, ...

On the programming languages side, the team studies the formal description of expressive programming constructs (such as probabilistic operations, primitives for modularity, or concurrency mechanisms). This is done by following the approach of formal semantics of programming features. It concerns both operational semantics (the precise definition of the computational rules underlying a programming primitive) and denotational semantics (the search for invariants of computation). Our aim is to define and analyze the properties of formalisms that capture these programming constructs in a concise and clear fashion, in the same way the λ-calculus can be used to model sequential functional computation. In doing so, we mostly rely on mathematical tools to develop the operational and denotational semantics of programs (partial orders, category theory, ...). We often use the obtained models to design type systems that allow us to characterize and analyze specific behavioural properties of programs. Due to the very formal nature of the objects we manipulate, we frequently turn our results into developments in the Coq proof assistant. More generally, we also use Coq for the formalization of various mathematical theories.

7.3.1 Curry-Howard correspondence

We work on various aspects of the Curry-Howard correspondence: from the traditional (twenty-year-old) question of classical logic to more recent points such as the implicit complexity point of view. Consequences in computer science are obtained by means of realizability and program extraction from proofs.

Computational interpretations of classical logic

List of participants: O. Laurent (CR CNRS), P. Lescanne (PR ENS Lyon), D.
Zunic (PhD)

Keywords: sequent calculus, graphical syntaxes for proof theory, constructive classical logic

Scientific issues, goals, and positioning of the team: The Curry-Howard correspondence, originally introduced between intuitionistic logic and functional languages, is now well understood in the more general context of classical logic and control operators. This has led to the development of many logical systems for presenting proofs in classical logic with a well-defined computational meaning. The Plume team took part in this line of work and now focuses on clarifying the relations between the various proposals given in the literature.

Major results: Plume members worked in particular on the use of graphical syntaxes for representing proofs in classical logic. The X diagrammatic calculus allows for a graphical analysis of the sequent calculus of classical logic; it belongs to the family of classical systems with non-deterministic computational behaviours. On the other hand, id-nets give a deterministic representation of classical proofs. These nets provide us with a unified framework for interpreting the deterministic classical systems; this framework is based on a strong relation with intuitionistic logic and works both for call-by-name and for call-by-value classical systems.
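The computational content of classical logic referred to above can be made concrete in ordinary code: adding a control operator such as call/cc to the λ-calculus makes classically-but-not-intuitionistically provable formulas inhabited. The toy sketch below (a standard encoding, unrelated to the X calculus or id-nets themselves; Python exceptions stand in for the control primitive, and all names are illustrative) exhibits a "proof term" for Peirce's law.

```python
# call/cc modelled with exceptions: invoking the captured continuation
# aborts the computation back to the callcc site with its argument.

class _Throw(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def callcc(f):
    tag = object()  # unique tag so nested callcc's do not interfere
    def k(v):
        raise _Throw(tag, v)
    try:
        return f(k)
    except _Throw as t:
        if t.tag is tag:
            return t.value
        raise

# Peirce's law ((A -> B) -> A) -> A, the typical formula that becomes
# "provable" once call/cc is available: the continuation k : A -> B is
# an escape that f may use to return an A directly.
def peirce(f):
    return callcc(lambda k: f(k))

print(peirce(lambda k: k(42)))  # 42: f escapes with an A value
print(peirce(lambda k: 7))      # 7: f never uses the continuation
```

The same escape mechanism, in Krivine's λc-calculus, is what gives classical proofs their operational reading.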

Self-assessment: The theory of the deterministic computational behaviour of classical logic is now well understood and appropriately unified. On the other hand, there are still various proposals for non-deterministic systems, and the relations between the deterministic and non-deterministic cases are not yet fully analyzed.

Program extraction from proofs

List of participants: Ph. Audebaud (MdC ENS Lyon), L. Chiarabini (visitor 2007), A. Miquel (CNRS délégation)

Keywords: proof assistant, program extraction, control operators

Scientific issues, goals, and positioning of the team: Not only are proofs programs but, if π is a proof of a formula F, the program P corresponding to π satisfies the specification formalized by F. This provides us with a way of developing certified software without the usual two-stage approach (first write the code, then prove its correctness with respect to the specification). After formalizing the specification of the desired behaviour of the program as a logical formula, the developer interacts with a proof assistant to write down a proof of the formula. The code is then automatically extracted from the proof and guaranteed to satisfy the specification.

Major results: Efficient program extraction from proofs can benefit from various backgrounds. The approach of Ph. Audebaud and L. Chiarabini aims at applying well-known techniques from the programming community to the proof side; current work concerns tail recursion and defunctionalization. Following Krivine's work on realizability for classical logic, A. Miquel has shown how to define realizability interpretations for the calculus of inductive constructions (the type theory underlying Coq). Starting from this theoretical result, he developed an extraction module for the Coq proof assistant, addressing concrete issues related, for example, to the representation of integers.
Self-assessment: The abstract theory of realizability is now mature for transfer towards software development. This is the line we are focusing on. The practical use of the theory will naturally bring new questions on the more abstract side.

Implicit Computational Complexity

List of participants: P. Baillot (CR CNRS)

Keywords: light linear logics, type systems for complexity

Scientific issues, goals, and positioning of the team: The linear logic (LL) approach to implicit computational complexity aims at defining variants (or restrictions) of LL for which the cut-elimination procedure encodes precisely a given complexity class. In this way, the complexity of programs becomes an additional computational behaviour accessible through logical means. We put a particular focus on two specific questions: the semantic analysis of the derived logical systems and the design of associated type systems. This is one of the newly started research directions of the team (with the arrivals of P. Baillot and O. Laurent).

Major results: P. Baillot and D. Mazza (Univ. Paris 13) have proposed a variant of linear logic called linear logic by levels (L3), which admits an elementary time complexity bound on proof normalization, and at the same time generalizes and simplifies the system ELL of Girard. A subsystem of linear logic by levels also corresponds to polynomial time computation and generalizes the system LLL (light linear logic). One benefit of these systems with respect to ELL and LLL is that they have simpler definitions with proof-nets (the graphical representation of proofs).

Self-assessment: The back-and-forth methodology between linear logic and type systems has been very successful in the elaboration of L3. It goes in the important general direction of weakening the constraints of light linear logics in order to deal with more algorithms.

Realizability and higher-order term rewriting

List of participants: A. Miquel (CNRS délégation), B. Petit (PhD, since 2008), C.
Riba (ATER)

Keywords: realizability, reducibility candidates, termination

Scientific issues, goals, and positioning of the team: We have already seen how realizability refines typing and helps in the computational interpretation of proofs. More generally, it is a key tool in the study of deep properties of computation, such as termination. Strong progress has been made on the understanding of the relations between reducibility, realizability and orthogonality. This helps in the development of applications to various extensions of the λ-calculus.

Major results: Union types are required for typing various extensions of the λ-calculus, but are known to sometimes interact badly with preservation under reduction, termination, and realizability techniques in general. Through the introduction of a notion of value inspired by reducibility candidates, C. Riba has designed typing rules for union types which are preserved under reduction, ensure termination, ... This approach also applies to implicit existential types, since they are interpreted as infinitary unions in realizability. The λ-calculus with constructors is designed for the abstract study of pattern-matching mechanisms. In order to control the dynamics of this very rich calculus, B. Petit has used a type system based on system F and subtyping, whose correctness is derived through a realizability model.

Self-assessment: The Plume team now has strong expertise in realizability techniques, on fundamental aspects, on the relation with logical systems, and on applications to term rewriting. This has helped a lot in the recent results. A deeper look at relations with denotational semantics is something to address.

7.3.2 Structures of programming languages

We do not rely only on logic to study programs; more generally, we are interested in mathematical tools for understanding the various structures appearing in computing.
This line of research is related to the previous one (logic and proof theory), while being closer to programming languages, for which we try to build (semantic) models.

Modular presentations of programming languages

List of participants: T. Hirschowitz (CR CNRS), A. Pardon (PhD, since 2006)

Keywords: syntax and operational semantics of programming languages, category theory

Scientific issues, goals, and positioning of the team: We work on enhancing the available tools to present and analyze the syntax and operational semantics of calculi that are used to describe various forms of computation and interaction. These calculi include the λ-calculus, the π-calculus, as well as Milner et al.'s more recent generalization of a vast number of formalisms, known as bigraphs. We rely for this on a category-theoretic approach, the main goal being to improve the modularity of presentations, and to clarify the use of binders in such formalisms.

Major results: We have been studying the use of symmetric monoidal closed theories to represent binding in (formal models of) programming languages. This provides a modular notion of syntax, which has allowed us to revisit the notion of context in languages with binders. We have thus been able to accommodate the λ-calculus and the π-calculus in our setting, and to propose an extended version of bigraphs.

Self-assessment: In order to extend the setting we have introduced and to be able to define the operational semantics of programs, we would like to reformulate our contribution in the framework of double categories, along the lines of previous work by Melliès on models for Multiplicative Linear Logic.

Certification of randomized algorithms

List of participants: Ph.
Audebaud (MdC ENS Lyon)

Keywords: randomized algorithms, formal proof, denotational semantics

Scientific issues, goals, and positioning of the team: This research takes place as part of the ANR project Scalp, which aims at providing formal models for presenting and certifying cryptographic protocols and algorithms. More specifically, Ph. Audebaud works with Christine Paulin (LRI, Orsay) on developing a well-fitted higher-order functional framework.
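The monadic interpretation of randomized programs as probabilistic distributions, mentioned in the highlights, can be sketched concretely. The following is a minimal finite-distribution monad (illustrative names only; the actual Coq library is of course a different artifact): unit builds a Dirac distribution, bind sequences a randomized computation, and expectation is the quantity correctness statements are phrased about.

```python
from fractions import Fraction

# Finite probability distributions as {outcome: probability} maps,
# with the monadic structure used to interpret randomized programs
# (after Giry, and Ramsey & Pfeffer).

def unit(x):
    return {x: Fraction(1)}

def bind(dist, f):
    # Run the randomized continuation f on each outcome, weighting
    # its results by the probability of that outcome.
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def coin(p=Fraction(1, 2)):
    return {True: p, False: 1 - p}

def expectation(dist, g):
    return sum(p * g(x) for x, p in dist.items())

# Randomized program: flip two fair coins, count heads.
two_flips = bind(coin(), lambda a:
            bind(coin(), lambda b:
            unit(int(a) + int(b))))

print(two_flips[1])                         # 1/2
print(expectation(two_flips, lambda n: n))  # 1
```

Exact rationals make the monad laws and expectation computations checkable by equality, which mirrors the kind of reasoning done in the proof assistant.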

Major results: We have designed a programming language that meets the requirements of cryptographic applications; it is currently being implemented. For this, the Why verification framework has been chosen, as it already provides most of the required technology and only a reasonable amount of modification should be needed to reach our goal. In parallel, we work on enhancing proof assistant technology (specifically, the Coq system) with an efficient and easy-to-use library to help users develop formal proofs. We take advantage of recent tools such as type classes (from Haskell), at least for the internal representation of the objects involved. We currently address the question of semantics with the help of a different language, the calculus λ designed by Thrun, Pfenning and Park. This language is more explicit as far as randomized effects are concerned, which helps at a meta level.

Self-assessment: A central aspect of this project is the close integration between its various research directions, which cover the spectrum ranging from abstract denotational models to core programming languages, and to machine mechanization of formal proofs.

Concurrency Theory

List of participants: R. Demangeon (PhD, since 2007), D. Hirschkoff (MdC ENS Lyon), D. Pous (PhD)

Keywords: concurrency, process calculus, operational equivalence, type system

Scientific issues, goals, and positioning of the team: Process calculi such as CCS or the π-calculus provide a framework where it is possible to represent many aspects of concurrent and distributed programming: message passing, dynamic reconfiguration of communication links, localized (and distant) communications, code mobility, persistent servers, ... In this setting, a program is given by a process.
We develop methods for the analysis of the behaviour of processes: statically guaranteeing some runtime properties, comparing the behaviours of processes, and comparing different calculi in terms of the behaviours they can express.

Major results: We have deepened the understanding of operational equivalences, which give rise to the notion of the behaviour of a process. Behaviours provide a form of denotation, beyond the mere syntax of terms. We have studied algebraic properties and axiomatisations of behavioural equivalences, in relation with the expressiveness of syntactic operators. We have also worked on proof techniques for behavioural equivalences, and studied in particular how one can combine several techniques to obtain more powerful proof tools. Another research direction is concerned with type systems for termination in concurrent systems. We have introduced such static analyses for the first- and higher-order cases (name and process passing, respectively).

Self-assessment: The effort towards a deeper and more unified understanding of techniques for behavioural equivalences has reached a satisfying level of maturity. A book chapter about up-to techniques for concurrency is currently being written, and a new research activity on the mechanisation of reasoning about concurrent systems in a theorem prover has been started. The work on termination should help in developing bridges between the theory of sequential (functional) computation, as expressed in the λ-calculus, and the world of concurrency. For this, it seems particularly relevant to be able to adapt powerful λ-calculus techniques to the π-calculus, as we have started to do.

Dynamic Modularity

List of participants: D. Hirschkoff (MdC ENS Lyon), T. Hirschowitz (CR CNRS), S. Hym (postdoc 2007), A. Pardon (PhD, since 2006), D.
Pous (PhD)

Keywords: process calculus, distributed programming, abstract machine, type system

Scientific issues, goals, and positioning of the team: We collaborate with the Sardes project (INRIA Rhône-Alpes) on the design of process-calculus-based models for dynamic modularity. This phrase stands for typical aspects of component-based programming, such as code mobility, dynamic update of modules, and reflexive programming. We want to provide a clean, mathematically defined formalism in which forms of dynamic modularity can be expressed and analyzed. This formalism can serve as the basis for the design of new programming primitives. This work has been supported by the ANR project MoDyFiable (Modularité Dynamique Fiable).
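The code mobility and dynamic update of modules discussed here admit a rough operational intuition in ordinary code. In the toy sketch below (all names hypothetical, and far simpler than a process calculus), a running component is an explicit state machine; freezing it yields a plain value that can be duplicated, shipped elsewhere, or updated, then resumed, which is the flavour of manipulation these models aim to capture formally.

```python
import copy

# A "process" as an explicit state machine: a step function over a state.
# Freezing the running process yields a self-contained snapshot that can
# be copied, transmitted, or patched, and resumed independently.

class Counter:
    def __init__(self):
        self.state = {"n": 0}

    def step(self):
        self.state["n"] += 1

    def freeze(self):
        # Capture the whole running state as a value.
        return copy.deepcopy(self.state)

    @classmethod
    def resume(cls, snapshot):
        proc = cls()
        proc.state = copy.deepcopy(snapshot)
        return proc

p = Counter()
p.step(); p.step()
frozen = p.freeze()         # freeze after two steps

q = Counter.resume(frozen)  # duplicate the frozen process...
r = Counter.resume(frozen)
q.step()                    # ...and let the copies diverge
print(q.state["n"], r.state["n"])  # 3 2
```

What makes the formal study hard is precisely what this sketch ignores: interaction with other processes while frozen, and the behavioural theory of the resulting calculus.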

Major results: We have developed a formalism in which it is possible to express some forms of dynamic modularity. This process calculus, called kπ, features a primitive for passivation, which provides the ability to freeze a running process in order to manipulate it (send it to a different location, duplicate it, replace it with a newer version, modify some of its code). We have defined an abstract machine for kπ, which demonstrates how passivation can be used in a distributed setting. A prototype implementation of this machine has raised interesting questions related to the design of appropriate type systems to guarantee forms of runtime safety for kπ processes.

Self-assessment: The passivation primitive is very expressive, and goes beyond previously existing forms of interaction in process calculi (notably higher-order communication and forms of code mobility). Several difficult questions related to the metatheory of calculi featuring passivation deserve to be studied. Alternatively, one could also be interested in finding out whether it is possible to define controlled forms of passivation, which renounce some of its expressiveness in favour of more tractable theoretical properties.

7.3.3 Formalizations in the Coq proof assistant

The formalization activities of the team address the machine representation of different mathematical theories through the development of computer-assisted proofs (mainly with the Coq proof assistant).

Game theory

List of participants: P. Lescanne (PR ENS Lyon), S. Le Roux (PhD)

Keywords: Nash equilibria, induction, co-induction

Scientific issues, goals, and positioning of the team: Various concepts in game theory are derived in an insufficiently formal way. P. Lescanne and S. Le Roux defined a notion of Feasibility-Desirability games (FD-games) to unify the views of decision theory, strategic games and evolutionary games. This comes with the study of Nash equilibria for such games.
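For ordinary strategic (normal-form) games, the equilibrium notion at stake can be checked directly. A minimal sketch (plain two-player bimatrix games, not FD-games, and illustrative names) that enumerates pure Nash equilibria:

```python
# Pure Nash equilibria of a two-player game given by payoff matrices:
# payoff1[i][j] and payoff2[i][j] for row strategy i, column strategy j.
# A profile (i, j) is a Nash equilibrium when neither player gains by
# a unilateral deviation.

def pure_nash(payoff1, payoff2):
    rows, cols = len(payoff1), len(payoff1[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(payoff1[i][j] >= payoff1[k][j] for k in range(rows))
            best_col = all(payoff2[i][j] >= payoff2[i][l] for l in range(cols))
            if best_row and best_col:
                eq.append((i, j))
    return eq

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
p1 = [[3, 0],
      [5, 1]]
p2 = [[3, 5],
      [0, 1]]
print(pure_nash(p1, p2))  # [(1, 1)]: mutual defection
```

The formal developments described here go well beyond this finite check, notably to infinite sequential games, where co-induction is needed and such naive enumeration no longer applies.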
Major results
A model of coalition games based on FD-games has been developed with F. Delaplace, with applications to the modelling of gene regulation activities. P. Lescanne revisited infinite sequential games using co-induction, showing that previous analyses of those games were inadequate, especially concerning escalation, which turns out to be a Nash equilibrium in the co-inductive formalization.

Self-assessment
Formal approaches to game theory allow us to identify under-specified notions appearing in the literature. The introduction of logical notions such as co-induction helps clarify the theory, by proving that behaviour usually supposed to be irrational can be shown rational in the appropriate framework.

Constructive geometry
List of participants: J. Duprat (MdC ENS Lyon)
Keywords: Euclidean geometry, constructive mathematics, axiomatics
Scientific issues, goals, and positioning of the team
Starting from strong similarities between constructive proofs and proofs in Euclidean geometry, J. Duprat studies axiomatizations of geometry in the Coq proof assistant. The objective is to provide an interactive framework for describing geometric pictures and then formally proving, in the Coq proof assistant, the properties suggested and induced by the picture.

Major results
Based on two primitive notions (orientation and equidistance) and three constructions (lines, circles, and intersection points), J. Duprat defined an axiomatization of the constructive points of the plane. Current developments use the constructive points of a given line to define distances, and the constructive points of the unit circle to define angles.

Self-assessment
The interaction between graphical tools and a proof assistant is very promising as a framework for formal proofs in geometry. The current axiomatization allows one to derive all of Hilbert's axioms (except, of course, the continuity axiom).
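The compass construction underlying this axiomatization (intersecting two circles) can be illustrated numerically. The following Python sketch is a hand-rolled illustration with hypothetical names; the Coq development itself is axiomatic and proof-based, not numeric. It reproduces Euclid's construction of an equilateral triangle on a segment (Elements, Book I, Proposition 1).

```python
import math

def circle_circle(c1, r1, c2, r2):
    """Intersection points of two circles (the compass construction).
    Returns a list of 0, 1 or 2 points."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    if h == 0:
        return [(mx, my)]
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Intersect the two unit circles centred at the endpoints of (0,0)-(1,0):
# the two points (0.5, ±√3/2) are the apexes of the equilateral triangles.
points = circle_circle((0, 0), 1, (1, 0), 1)
print(points)
```

In the formal setting, the existence of these intersection points is an axiom (or a construction), and the derived properties are proved rather than computed with floating-point arithmetic.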

Proof Theory and Formal Semantics

Application domains and social or economic impact
As explained above, programs are our main target: we study methods to mathematically define and analyze the behaviour of programs. We do not, however, address the implementation of actual tools to check properties of real-size programs; rather, we work on extending the mathematical underpinnings of logical methods for the verification of software and the development of certified software.

7.5 Visibility and Attractivity

Prizes and Awards
Awarded communications:
Best student paper award, Damien Pous, ICALP conference, 2005
Best student paper award, Barbara Petit, TLCA conference, 2009
Chairing and organization of conferences:
Mathematics of Program Construction (MPC 08), Philippe Audebaud, co-chair.
Logic and Computational Complexity (LCC 09) workshop, Patrick Baillot, co-chair.
The colloquium "A journey through term rewriting and lambda-calculi" was organized at Nancy in 2008 for the 60th birthday of Pierre Lescanne.

Contribution to the Scientific Community

Management of Scientific Organisations
Pierre Lescanne was president of SPECIF (Société des Personnels Enseignants et Chercheurs en Informatique de France) between 2006 and 2008 (and vice-president for international questions in ).

Administration of Professional Societies

Editorial Boards
Applicable Algebra in Engineering, Communication and Computing (Springer), Pierre Lescanne (member of the editorial board).
Linear Logic wiki, Olivier Laurent (coordinator).
Organisation of Conferences and Workshops
Mathematics of Program Construction (MPC 08), Philippe Audebaud
Journées GEOCAL-LAC 2009 du GDR IM, Pierre Lescanne and Alexandre Miquel
Computational Logic and Applications (CLA 2007), Kraków, Pierre Lescanne
Computational Logic and Applications (CLA 2008), Kraków, Pierre Lescanne
Computational Logic and Applications (CLA 2009), Lyon, Pierre Lescanne
Congrès SPECIF, Saint-Étienne, 2007, Pierre Lescanne
Congrès SPECIF, Bordeaux, 2008, Pierre Lescanne

Program committee members
Concurrency Theory (CONCUR 07), Daniel Hirschkoff, PC member, 2007
International Colloquium on Automata, Languages and Programming (ICALP 07), Pierre Lescanne, PC member, 2007
International Conference on Typed Lambda Calculi and Applications (TLCA 09), Patrick Baillot, PC member, 2009
Mathematics of Program Construction (MPC), Philippe Audebaud, MPC 08 co-chair and MPC 10 PC member

International expertise

National expertise
Selection committees (Maître de conférences, Professeur and Chargé de Recherche positions):
Université de Savoie, 2009, Pierre Lescanne (professor position)
Université d'Évry, 2009, Pierre Lescanne
Université Aix-Marseille II, 2009, Olivier Laurent
Université Aix-Marseille II, 2009, Alexandre Miquel
ENS Lyon, 2009, Olivier Laurent
ENS Lyon, 2009, Daniel Hirschkoff
Université Bordeaux I, 2009, Daniel Hirschkoff
INRIA Saclay, 2009, Daniel Hirschkoff (CR1 and CR2 positions)
ENSIMAG, , Pierre Lescanne
ENS Lyon, , Pierre Lescanne
Université Paris 7, 2006 and 2007, Daniel Hirschkoff
Université Joseph Fourier, Grenoble, 2007, Tom Hirschowitz
Université de Provence, 2006, Daniel Hirschkoff

PhD committees
Matthieu Manceny, Université d'Évry, P. Lescanne, Nov.
Jittisak Senachak, Japan Advanced Institute of Science and Technology, P. Lescanne, Jun.
Sébastien Briais, EPFL, Lausanne (Switzerland), D. Hirschkoff, Dec.
Alexis Saurin, École Polytechnique, O. Laurent, Sept.
David Baelde, École Polytechnique, P. Baillot, Dec.
David Baelde, École Polytechnique, O. Laurent, Dec.
Vincent Atassi, Université Paris 13, P. Baillot (co-adviser of the thesis), Dec.
Sylvain Lebresne, Université Paris Diderot, A. Miquel (co-adviser of the thesis), Dec.
Boris Yakobowski, Université Paris Diderot, A. Miquel, Dec.
Mauricio Guillermo, Université Paris Diderot, A. Miquel, Dec.
Jayshan Ragunandan, Imperial College, P. Lescanne, Jan.
Guillaume Burel, Université Nancy 1, A. Miquel, Mar.
Romain Beauxis, École Polytechnique, P. Lescanne, May 2009

Public Dissemination

7.6 Contracts and grants

External contracts and grants (Industry, European, National)

MoDyFiable, Modularité Dynamique Fiable, ANR ARASSIA. The partner is the Sardes project (INRIA Rhône-Alpes). The scientific leader of the project is Tom Hirschowitz (member of Plume during the project). The goal of the project is to design the core of a programming language for dynamic modularity (distributed and mobile computation, runtime update of modules, inspection of running code) by exploiting process-calculus-based techniques.

Galapagos, Géométrie, algorithmes et preuves, ANR programme non-thématique (Blanc). Project partners are INRIA (Sophia Antipolis), LSIIT - UMR 7005 CNRS-ULP (Strasbourg), and UFR Sciences, SP2MI (Poitiers). Web page: Project leader: Yves Bertot (INRIA). In this project, we study the applicability of theorem-proving tools to two aspects of geometry. First, we apply theorem-proving tools to computational geometry. Second, we apply them to the verification of geometrical reasoning.

CHoCo, Curry-Howard and Concurrency, ANR programme non-thématique (Blanc). Project partners are the PPS laboratory (Univ. Paris 7) and the IML (Univ. Aix-Marseille 2). Web page: jussieu.fr/. The site leader for Lyon is Daniel Hirschkoff. The goal of the project is to work on the extension of the Curry-Howard correspondence to encompass concurrent phenomena, drawing inspiration from and making connections with concurrency theory.

Scalp, Security of Cryptographic ALgorithms with Probabilities, ANR-07-SESU. Project partners are DCS-VERIMAG (Grenoble), ProVal-LRI (Orsay), EVEREST (INRIA Sophia-Antipolis), and CPR-Cedric, CNAM (Paris). Web page: Project leader: Yassine Lakhnech (DCS-Verimag). The project aims at certifying cryptographic algorithms and protocols within the Coq proof assistant.
This means designing a probabilistic language well suited to that purpose, providing a proof theory matching the verification of cryptosystems, and building a dedicated library of tools and general results in the area.

ComplICE, Complexité Implicite, Concurrence et Extraction, ANR programme non-thématique (Blanc), 1/1/ /12/2012. Project partners are ENS Lyon (coordinating site), Université Paris 13, and INPL Nancy. Web page: Project leader: Patrick Baillot. Participants at LIP: Patrick Baillot, Romain Demangeon, Daniel Hirschkoff, Olivier Laurent. This project investigates implicit computational complexity, that is, methods to statically ensure time and space complexity bounds on programs, and thereby to characterize complexity classes in a machine-independent way. It explores this problematic in the areas of functional programming, program extraction from formal proofs, and concurrent systems.

Geocal, Géométrie du Calcul, ACI Nouvelles Interfaces des Mathématiques. Project partners are the PPS laboratory (Univ. Paris 7), the IML (Univ. Aix-Marseille 2), and the LIPN (Univ. Paris 13). The site leader for Lyon is Daniel Hirschkoff. The goal of the project is to study proof theory and models of computation, developing a geometrical view of various computing and logical paradigms.

TLIT, Types and Logic in Information Technologies, a cooperation between the CNRS and the Serbian Ministry of Scientific Research. Project leaders are Silvia Ghilezan (University of Novi Sad, Serbia) and Pierre Lescanne (ENS de Lyon). The project, which also includes, among others, the Serbian Academy of Sciences, PPS (Paris), and IRIT (Toulouse), aims at setting the logical basis of information theory using interpretations of classical logic, category theory, and types in global computing.
Casimir, Recherche coopérative en logique computationnelle, Mira programme of the Rhône-Alpes region; project leader Pierre Lescanne (ENS de Lyon), between the Jagiellonian University of Kraków, the University of Savoie at Chambéry, and the University Joseph Fourier at Grenoble. The project investigates several aspects of computational logic including, among others, probabilistic aspects of the lambda-calculus, something of a success story for the cooperation.

Application of game theory to the study of modelling, 2007 call for ideas of the National Network on Complex Systems (RNSC), in coordination with the Paris Institute of Complex Systems. The project was carried out in collaboration with Franck Delaplace from IBISC and the Université d'Évry. Project leaders: Franck Delaplace and Pierre Lescanne.

Research Networks (European, National, Regional, Local)
D. Hirschkoff is a member of the BACON équipe associée INRIA between the Sardes project (INRIA Rhône-Alpes) and the Università di Bologna, for the year .
The Plume team takes part in the Geocal (géométrie du calcul) and LAC (logique, algèbre et calcul) working groups of the GDR Informatique Mathématique. It is also involved in the Langages, Types et Preuves (LTP) group of the GDR Génie de la Programmation et du Logiciel.
Monthly meetings of the CHoCo ANR project take place in Lyon and gather around 25 participants. pps.jussieu.fr/events/.
A weekly seminar is organized by the Plume team in collaboration with the LIMD team of the LAMA laboratory (Chambéry).

Internal Funding

7.7 International collaborations resulting in joint publications

With grants

Università di Bologna, Italy (D. Sangiorgi). D. Hirschkoff has a long-standing collaboration with Davide Sangiorgi, attested by numerous joint publications [605, 633, 609, 611, 622, 615, 623, 624]. A joint PhD supervision (R. Demangeon) started in Sept. . A grant from the Université Franco-Italienne supports this PhD. More generally, support for the collaboration is provided by the ANR project CHoCo, as well as by the funding of an équipe associée INRIA between Bologna and the Sardes project at INRIA Rhône-Alpes, to which D. Hirschkoff is attached. The collaboration aims at studying various formal aspects of the π-calculus.

University of Novi Sad, Serbia (S. Ghilezan). Pierre Lescanne has close contacts with Novi Sad, in particular through the TLIT project (see above). This includes the supervision of two Serbian PhD students in the past years (S. Likavec and D. Zunic). Their main research subject is the study of extensions of the λ-calculus to classical logic (such as Curien-Herbelin's calculus in particular). They address strong normalization questions [626, 607] in collaboration with D. Dougherty (see below).
With joint publications

Worcester Polytechnic Institute, USA (D. Dougherty). Pierre Lescanne and Dan Dougherty collaborate on various questions related to term rewriting, often with a particular focus on the λ-calculus. Their work covers new term rewriting systems for describing object-based languages [625, 608] (together with L. Liquori), but also termination and classical extensions of the λ-calculus [626, 607].

Imperial College, UK (S. van Bakel). Pierre Lescanne and Steffen van Bakel have developed a graphical formalism for continuation-style computation, the X framework [644, 618]. Its expressive power is demonstrated by embedding various known systems, such as the λ-calculus, Bloo and Rose's calculus of explicit substitutions λx, Parigot's λµ, and Curien and Herbelin's calculus. It can also be seen as the pure untyped computational content of Urban's reduction system for the implicative classical sequent calculus.

JAIST, Japan (R. Vestergaard). A collaboration between Pierre Lescanne and René Vestergaard aims at a better understanding of formal game theory. Together with Franck Delaplace (Genopole, Évry), they address biology-related questions and study how game theory can be used to analyse gene regulation networks [620].

7.8 Software production and Research Infrastructure

Software Descriptions

Up-to Techniques for Weak Bisimulation in Coq
Type: theorem prover formalization.
Description: This development is a formalization of a series of works on the theory of weak bisimilarity and of up-to techniques for weak bisimulation (including several new up-to techniques). In particular, it includes formally checked proofs for a large part of [616, 617].
URL:
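The notion being formalized can be illustrated on finite-state systems. The following Python sketch is our own naive illustration, computing strong bisimilarity as a greatest fixpoint; the Coq development treats weak bisimilarity and up-to techniques, which go well beyond this.

```python
def bisimilar(states, trans, s, t):
    """Naive greatest-fixpoint computation of strong bisimilarity on a
    finite labelled transition system. `trans` maps (state, label) to a
    set of successor states."""
    labels = {l for (_, l) in trans}
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                any((p2, q2) in rel for q2 in trans.get((q, l), set()))
                for l in labels for p2 in trans.get((p, l), set())
            ) and all(
                any((p2, q2) in rel for p2 in trans.get((p, l), set()))
                for l in labels for q2 in trans.get((q, l), set())
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s, t) in rel

# a.b.0 versus a.b.0 + a.b.0, written as two small automata:
states = {"p0", "p1", "p2", "q0", "q1", "q1b", "q2"}
trans = {
    ("p0", "a"): {"p1"}, ("p1", "b"): {"p2"},
    ("q0", "a"): {"q1", "q1b"}, ("q1", "b"): {"q2"}, ("q1b", "b"): {"q2"},
}
print(bisimilar(states, trans, "p0", "q0"))  # True
```

Up-to techniques exist precisely because computing or exhibiting the full relation, as done naively here, does not scale: they allow one to work with much smaller relations that are only bisimulations "up to" some closure.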

Axiomatizations of Geometry in Coq
Type: theorem prover formalization.
Description: This library contains an axiomatization of ruler-and-compass Euclidean geometry. Files A1 to A6 contain the axioms and the basic constructions. The other files build the proof that this axiomatization induces the whole plane geometry, except for the continuity axiom; to that end, proofs of Hilbert's axioms conclude the work in files E1 to E5.
URL:

Distributed Abstract Machine
Type: software.
Description: This development is a prototype implementation of the GCPan, an abstract machine for distributed and mobile computation introduced in [611]. It consists of an abstraction layer, developed as a generic OCaml library for message passing (in both local and physically distributed modes), on top of which the actual abstract machine is defined.
URL:

Classical Extraction Module for Coq
Type: software.
Description: The classical extraction module for Coq is an external module that enriches Coq with new commands to extract programs from classical proofs. It consists of two components:
A dynamically loadable module (kextraction), written in Caml and distributed under the LGPL license.
A stand-alone evaluator (jivaro) to execute extracted programs, written in Caml and distributed under the GPL license.
It is the first implementation of classical program extraction using the theory of classical realizability developed by Jean-Louis Krivine in the 1990s (and adapted to the Coq proof assistant). Extracted programs are expressed in the language λc, a lazy functional language with continuations that differs significantly from mainstream functional programming languages such as SML, Caml, or Haskell, hence the need for a specific evaluator to execute extracted programs.
URL:

Contribution to Research Infrastructures
None.

7.9 Industrialization, patents, and technology transfer
None.

Educational Activities

Supervision of Educational Programs
D. Hirschkoff is responsible for the first year of the Master in Theoretical Computer Science at ENS Lyon ( ). P. Baillot has been in charge of the track Informatique Mathématique of the Master in Theoretical Computer Science at ENS Lyon since May . Ph. Audebaud, J. Duprat, D. Hirschkoff and P. Lescanne regularly take part in the juries for the training periods of L3, M1 and M2 students at ENS Lyon. J. Duprat participated in the second concours d'entrée at ENS Lyon in 2009.

Teaching

Name | Course title (short) | Level | Institution | Hours | Academic years
Ph. Audebaud | Préparation Agrég. maths | | ENS Lyon | 64 | 3
Ph. Audebaud | Prog2 (TD) | L3 | ENS Lyon | 32 | 1
Ph. Audebaud | Prog2 | L3 | ENS Lyon | 48 | 2
P. Baillot | Linear logic | M2 | Paris | |
S. Briais | Prog. project | L3 | ENS Lyon | 48 | 1
J. Duprat | The Coq System | M | ENS Lyon | 48 | 5
J. Duprat | Préparation Agrég. maths | | ENS Lyon | 64 | 4
D. Hirschkoff | Prog1 | L3 | ENS Lyon | 48 | 4
D. Hirschkoff | Concurrency Th. | M | ENS Lyon | 48 | 2
D. Hirschkoff | Prog. project | L3 | ENS Lyon | 48 | 2
P. Lescanne | Prog2 | L3 | ENS Lyon | 48 | 2
P. Lescanne | Prog2 (TD) | L3 | ENS Lyon | 32 | 1
P. Lescanne | Th. Jeux | M | ENS Lyon | 48 | 2
A. Miquel | Th. Demonstr. | M | ENS Lyon | 48 | 1
C. Riba | Compil. project (TDs) | L3 | ENS Lyon | |

Self-Assessment
Since September 2008, the Plume team has entered a transitional period towards the development of additional research directions around the logical foundations of programming languages. The next four years will mainly be dedicated to establishing these new activities and to stabilizing our group.

Strong points
The dynamism of the team has recently been renewed by the arrival of several bright young researchers, who foster new and challenging research directions. Our team is highly attractive (as shown, in particular, by the number and quality of applications to permanent, post-doc, ATER, ... positions). We are starting to combine our expertise in the various aspects of the Curry-Howard correspondence to develop new interactions: complexity for concurrency, realizability techniques in implicit complexity, linear-logic approaches to concurrency, semantics of quantification, game-based interpretations of logic, ... The current funding of the team is mainly provided by ANR projects and gives appropriate support for missions and invitations. In the specific environment of the ENS Lyon, the team plays an important role in the community through its investment in teaching.
Among the French PhD students in our topics, a large proportion are former students of ENS Lyon.

Weak points
After the retirement of Pierre Lescanne, planned for 2012, there will be no senior (i.e. rank A) permanent position at Plume. With at least 5 junior (i.e. rank B) positions, the team will need (at least) two professors (or research directors) to deal appropriately with team management. The number of PhD students in the team is rather low. The arrival of new members and the increase in the number of habilités à diriger des recherches should have a strong impact on this point in the coming years. It should be noticed, however, that it remains very difficult for good PhD students in our subjects to find permanent positions. In the future we should probably try to diversify our funding resources, in particular through bilateral and European funding.

Opportunities
Initiated with the GeoCal ACI NIM ( ), the Paris-Lyon-Marseille-Chambéry network plays a crucial role in the development of research synergies at the national level around linear logic and the semantics of programming languages. This community is currently embodied in the GDR Informatique Mathématique (in the GeoCal and LAC working groups in particular), and is at the same time probably too big for the model defined by ANR projects. A by-product of this research dynamics is a number of good students who have just finished their PhD or are about to defend it. This guarantees a very high level in future recruitments. The team could benefit from a better integration in the research environment in logic at Lyon, in particular through closer interactions with the mathematical logic team of University Lyon 1 (at the Institut Camille Jordan), which mainly focuses on model theory. Some collaboration has already started through the European MALOA (MAthematical LOgic and Applications) project for doctoral studies (kick-off planned in autumn 2009).

Strong relations with logic teams and people working in mathematical computer science in the area could be a good starting point for the development of master studies around these topics at Lyon. Outside France, our team is involved in very strong international collaborations: Italy (Bologna, Roma, Torino, ...), UK (Cambridge, London, ...), Japan (Kyoto, JAIST, Nagoya), USA (Worcester, Boston, ...), Serbia (Novi Sad), Poland (Kraków), Germany (München), ...

Risks
Dealing with the growth of a research team, in times where important changes impact French academic life, entails an increasing amount of time spent on administrative duties rather than on research. In the specific case of funding, dealing with short-term research projects compels us to re-apply frequently, which is very time-consuming. In the absence of sufficient recurrent funding, the resources of the team are currently guaranteed only for the next two or three years. Concerning our hiring policy, we will possibly have to compete with the attractiveness of the Paris sites. However, based on our very good relationship with the corresponding teams, we are confident that we can coordinate our hiring policies in an efficient way. As shown in the past, our collaboration with these teams is always very successful.

Perspectives

Keywords: proof theory; semantics of programming languages; computer-assisted proofs; Curry-Howard correspondence; linear logic; realizability; game semantics; implicit computational complexity; concurrency theory; probabilities.

Vision and new goals of the team
Based on the fruitful interaction between logic and computer science, our main objectives are directed towards extensions of the Curry-Howard correspondence to additional logical/programming paradigms. The current evolution of the team leads to a particular effort on logical approaches and to a reinforced use of abstract tools.
Our methodology is to build on mathematical theories in order to study programming languages. More precisely, we want to develop new techniques from logic and semantic tools. Indeed, these topics have progressed considerably during the last ten years and can now be brought together in a working framework. We plan to focus on three main mathematical tools. First, on the modelling side, game semantics has emerged as a versatile and accurate setting for representing programs and their interaction; it can handle a large spectrum of programming primitives. Second, on the reasoning side, our tool will be logic, which, thanks to the proofs-as-programs paradigm, makes it possible to develop type systems and program extraction from proofs. Linear logic has emerged as a powerful system for studying logic(s) from this point of view. Finally, to prove the properties of our logical systems and relate them to the modelling side, we will take advantage of techniques coming from realizability, as well as from game semantics. Using these common tools, we will focus on developing type systems and program extraction mechanisms in different directions: imperative program extraction, integration of bounded complexity constraints, extension to concurrent computation, semantics of probabilistic aspects, ... The long-term goal of establishing a unified framework (based on linear logic and related tools) for the foundations of rich programming languages (integrating concurrent, probabilistic, ... aspects with semantic control over termination, complexity, ...) can be seen as the specificity of our team.

Logical foundations of programming languages
The Curry-Howard correspondence establishes a strong relation between programs with their types on the one hand, and proofs with their formulas on the other. This correspondence holds for appropriate pairs consisting of a programming language and a logic.
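A minimal illustration of the correspondence, with Python functions and type annotations standing in for proofs and formulas (a sketch of the general idea, not of any system studied by the team): a term of type (A → B) → (B → C) → (A → C) is at the same time a proof of the transitivity of implication, and its program content is function composition.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# A proof of (A -> B) -> (B -> C) -> (A -> C): transitivity of
# implication is exactly function composition.
def trans(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# A proof of A -> (B -> A): the K combinator witnesses weakening.
def weaken(a: A) -> Callable[[B], A]:
    return lambda _b: a

inc_then_str = trans(lambda n: n + 1, str)
print(inc_then_str(41))      # "42"
print(weaken("proof")(123))  # "proof"
```

Running the program corresponds to normalizing the proof; richer pairs (language, logic) extend this picture to classical logic, linear logic, and beyond.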
Our three main tools (linear logic, game semantics, and realizability) are among the most advanced ones in this topic. Linear logic was introduced twenty years ago by J.-Y. Girard, from an analysis of the semantics (denotational semantics, more precisely) of intuitionistic logic and of the λ-calculus. Linear logic has had a strong influence on, and has become a key tool in, various domains related to logic and computer science: from proof theory to security protocols, through denotational semantics and type systems for programming languages. Originating in the idea of modelling computation as the interaction between a program and its environment, game semantics has renewed the study of denotational semantics. Its first major success was to solve the long-standing problem of giving a syntax-independent presentation of the fully abstract model of PCF (i.e. a model in which the interpretations of two programs are equal if and only if these programs have the same behaviour, as given by observational equivalence). Game semantics has now been successfully applied both on the logic side and on the programming side, with applications to software verification.

Underlying all the computational interpretations of logical proofs, realizability can be presented as a generalization of typing. It associates programs to types in a richer way than usual type systems do, and thus can be used to define types from their computational contents. As a consequence, it is often at the basis of program extraction mechanisms. Connections between these tools have already been established: game semantics was inspired by linear logic, recent models of linear logic are built with game interpretations, realizability can be given a game-like interpretation, realizability is used in models of second-order linear logic, double-orthogonal approaches coming from linear logic are now crucial in realizability, ... However, this does not yet form a completely unified corpus of methods.

Games and proof theory of constructive classical logic
The computational content of proofs in propositional classical logic, through the systems defining constructive classical logic (LC, λµ-calculus, ...), is now well understood thanks to the Curry-Howard correspondence. Recent progress on game models for logical systems has been obtained by a convergence of logical games for intuitionistic logic (in proof theory) and game semantics for extensions of PCF (in computer science). This provides us with appropriate tools for a computation-aware analysis of the theorems developed in proof theory over the last century in first- and second-order classical logic. Successful examples have been obtained in the past in the propositional setting: Gödel's double-negation translations are continuation-passing-style transformations, for example. Besides higher-order logic, there is ongoing work on the computational interpretation of additional axioms. A specific focus is put on (variants of) the axiom of choice. We are interested in the use of game and realizability interpretations (in Krivine's style, or in relation with bar recursion) of these axioms.
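The correspondence between double-negation translations and continuation-passing style mentioned above can be sketched concretely (an illustrative Python fragment with hypothetical names, not a formal translation): a value of type A becomes a computation of type (A → R) → R, the computational reading of ¬¬A.

```python
# Direct style: a value of type A.
def direct() -> int:
    return 21 * 2

# CPS / double-negation style: a value of type ((A -> R) -> R),
# the computational reading of the double negation of A.
def cps(k):
    return k(21 * 2)

# The translation of a function application threads the continuation:
def add_cps(x, y, k):
    return k(x + y)

# Composing CPS computations mirrors composing double-negated proofs:
result = []
cps(lambda a: add_cps(a, 8, result.append))
print(result)  # [50]
```

Classical reasoning principles (such as call/cc) then become operations that capture or discard the continuation `k`, which is what the logical translation predicts.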
In this context, it is important to unify the tools developed in the first-order logic framework and in the arithmetic framework. An important general question is the unification of these two key interactive interpretations of logic: game semantics and realizability. Outside constructive classical logic, we will work on understanding the computational content of symmetric classical systems based on the sequent calculus and dealing with a non-deterministic cut-elimination procedure.

Applications of Linear Logic
In the domain of light linear logics, which provide implicit characterizations of complexity classes, new systems have recently emerged: linear logics by levels, variations on bounded linear logic. We are interested in the syntactic study and development of these systems, with two main goals. The first is access to additional complexity classes. The second is to make progress on the intensional expressive power within a given class (PTIME, for example): finding systems able to capture more algorithms (intensional power) for a fixed class of functions (extensional power). We also work on the denotational semantics of these variants of linear logic. By means of game semantics in particular, we hope to find models with intrinsic complexity guarantees (for example, any proof interpretable in the model is normalizable in polynomial time). The discovery of concurrent behaviours in the normalization of differential interaction nets, derived from differential linear logic, opens new perspectives on the foundations of distributed computation. The range of programming primitives expressible in this setting is still under investigation. The study of the denotational models of differential linear logic will allow us to find abstract characterizations of behavioural equivalences of processes (a key ingredient in the theory of process algebras).
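The flavour of the implicit characterizations of complexity classes discussed above can be conveyed by a related (but distinct) discipline, Bellantoni and Cook's safe recursion. The following Python sketch is our own illustration, not a light-linear-logic system: arguments are split into "normal" and "safe" positions, recursion is driven only by normal arguments, and recursive results may only flow into safe positions. Under this discipline, every definable function is polynomial-time.

```python
def succ(x):
    """Safe successor: operates on a safe argument."""
    return x + 1

def add(n, m):
    """add(n; m): recursion on the normal argument n; the recursive
    result only flows into the safe position of succ."""
    if n == 0:
        return m
    return succ(add(n - 1, m))

def mul(n, m):
    """mul(n, m;): recursion on n, with the recursive result used in
    the safe position of add - still polynomial time."""
    if n == 0:
        return 0
    return add(m, mul(n - 1, m))

# Exponentiation is NOT definable in this discipline: it would need the
# recursive result in a normal (recursion-driving) position.
print(add(3, 4), mul(3, 4))  # 7 12
```

Light linear logics achieve a similar effect through the controlled use of the exponential modalities rather than through an argument discipline, but the goal is the same: a machine-independent guarantee of polynomial-time normalization.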
In the direction of unifying our approaches, a key challenge will be to understand the compatibility of the variants of linear logic we use (light, differential, ...). A long-term goal is to extend Curry-Howard to computations modelled by rewriting theory, in order to give them a logical content. A starting point is to use linear logic and game semantics to give a logical structure to the residual graphs arising from rewriting.

Program extraction and proofs/programs transformations
Direct applications of the Curry-Howard correspondence are obtained by transforming proofs into programs, or by applying tools from proof theory in computer science and conversely. The paradigm of program extraction from proofs gives a way to build certified software without having to prove the correctness of the obtained code: the validity of the requested specification is a built-in consequence of the approach. A formal proof is developed in a proof assistant (this requires human interaction) and the associated code is automatically extracted. We will focus our attention on these mechanisms in the future, with a specific interest in extraction from classical logic (and additional axioms) and from logical systems with bounded complexity. The development of the first classical extractor for Coq raises important new questions: efficient execution of the obtained programs, extraction of witnesses for a large class of existential formulas, program extraction from (variants of) the axiom of choice in relation with side effects in programming, ...

Using the ideas from variants of linear logic for implicit computational complexity (ICC), we will study proof systems which allow for the extraction of programs with certified complexity bounds, for instance PTIME. Our setting would add, to the usual extraction paradigm, the certification that the obtained programs admit a given complexity bound. Realizability and program extraction are two strongly related theories. Our work on extraction in classical logic is based on Krivine's realizability. On the ICC side, realizability techniques, as they begin to be used for light linear logics, will probably turn out to be very important. The transfer of technologies obtained through the Curry-Howard correspondence is still under development. Standard program transformations such as tail recursion, defunctionalization, ... have to be understood as proof transformations, and the logical properties of the transformed objects should be studied. In the converse direction, focusing in logic or iterated forcing can be studied as program transformations.

Semantic tools for new behavioural properties of programs
Semantics (in particular denotational semantics) provides means to apply mathematical tools to the understanding of deep properties of programming languages. Results are often very strong, but as a consequence they are often restricted to particular programming primitives. We want to build on recent progress in the Curry-Howard correspondence (and related tools) to address the semantics of richer languages. A typical application of this approach is the definition of a type system. In particular, there is a focus on so-called strong typings which, by constraining programmers to a reasonable programming discipline, provide guarantees on program behaviours. We will focus mainly on implicit complexity constraints on the execution of programs, concurrent behaviours of computational systems, and randomized aspects of languages.
This naturally leads us to address new research directions such as the control of complexity properties of concurrent processes.

ICC

We want to work on the expressiveness of implicit complexity criteria. For this purpose, we will study, given a criterion, the behavioural properties of the programs satisfying it. For instance, we can investigate necessary (and, possibly, sufficient) conditions on the behaviour of abstractions of these programs. The purpose of this approach is, on the one hand, to reveal theoretical limitations of certain criteria, by showing that certain families of programs cannot be validated, and, on the other hand, to compare different criteria. In order to design realistic functional languages with bounds on resource usage, a crucial point is to combine a usable but constrained form of recursion with a rich pattern-matching mechanism. To address this question, we plan to mix typing constraints coming from light linear logics with other termination criteria for higher-order recursion, such as size-based termination.

Concurrency

Regarding the theory of process calculi, which provides formal models of concurrent and mobile computation, we will continue working on the behavioural properties of processes. Questions we plan to address include algebraic characterizations of observational equivalences, the definition of methods to guarantee behavioural properties, and of proof techniques for reasoning about concurrent systems. A new planned research direction stems from the interaction with ICC for sequential computation. We aim at investigating techniques to guarantee quantitative bounds on concurrent systems, by designing suitable type systems or static criteria. For instance, we would like to introduce criteria guaranteeing that after an interaction, a system will react within a certain number of internal steps, or that during execution, the size of the system will remain bounded.
Such properties are clearly relevant in a concurrent setting: for instance, one would like a request to a server to be answered after a reasonable number of computation steps, or an agent not to hold too many resources for too long.

Randomized aspects of computation

Adding probabilistic traits to imperative languages has been broadly studied; among the main references on this topic, those most relevant to our interests include the works of Kozen, Morgan and McIver. When it comes to the functional paradigm, though, one has to deal with new objects: higher-order types, hence function spaces. On the practical side, the consequence is that one can write programs which generate randomized functional terms; this is different from writing programs which return a randomized value for any given value of the input parameter(s)! The theoretical consequence is that we must first understand which mathematical meaning (semantics) can be given to such function spaces.
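The distinction just drawn can be made concrete with a minimal Python sketch (illustrative only; function names are ours): a first-order randomized program draws a fresh random value at each call, whereas a higher-order randomized program uses randomness once to build a function, and the resulting term is itself deterministic.

```python
import random

def randomized_value(x):
    # First-order randomness: every call draws a fresh random result.
    return x + random.choice([0, 1])

def randomized_term():
    # Higher-order randomness: the coin is flipped once to *build* a
    # function; the returned term is then a deterministic function.
    b = random.choice([0, 1])
    return lambda x: x + b
```

Calling `randomized_term()` twice may yield two different functions, but each returned function always maps the same input to the same output, while `randomized_value` may answer differently on repeated calls with the same argument. Giving a semantics to the space such random terms live in is precisely the difficulty mentioned above.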

This correspondence between programs and mathematical objects matches the broader Curry-Howard correspondence, for which experience shows that a real understanding of probabilistic traits should benefit from exploring practical and theoretical issues side by side. Therefore, our research program in the area is twofold: developing semantics (the adaptation of mathematical objects taken from functional analysis, such as Polish spaces and similar structures, looks promising); and building formal tools to specify and prove properties of actual programs, as we are confident this approach could benefit from programmers' needs and understanding of randomized algorithms. Research areas such as cryptography and complexity could also contribute deeper insight on the matter.

Formalization

Proof assistants and formal logics are not only an object of study for us. We also use them for the formalization of mathematical theories. Formalization is often important and useful to clarify the axioms and reasoning being used, and to find appropriate ways to define the objects being manipulated. Currently our developments are made with the Coq system. Our formal study of game theory and Nash equilibria will be continued, with applications to the Internet (infinite games) and to biology (feasibility-desirability games and coalition games). The formalization of Euclidean plane geometry will be turned into a tool to help pupils solve geometric problems by providing dynamic drawings and proof verifications. This requires good interaction between the drawing tool and the proof assistant, through the definition of specific tacticals. We also use Coq for proving properties of various algorithms; examples that we plan to address include watermarking algorithms and distributed and/or randomized algorithms.

Publications

International and national peer-reviewed journals [ACL]

[603] Philippe Audebaud and Christine Paulin-Mohring.
Proofs of randomized algorithms in Coq. Science of Computer Programming, 74(8), June.
[604] Patrick Baillot and Damiano Mazza. Linear logic by levels and bounded time complexity. Theoretical Computer Science, to appear.
[605] Arnaud Carayol, Daniel Hirschkoff, and Davide Sangiorgi. On the representation of McCarthy's amb in the pi-calculus. Theor. Comput. Sci., 330(3).
[606] R. Demangeon, D. Hirschkoff, and D. Sangiorgi. Mobile processes and termination. Festschrift in the Honour of Peter Mosses, to appear.
[607] Daniel J. Dougherty, Silvia Ghilezan, and Pierre Lescanne. Characterizing strong normalization in the Curien-Herbelin symmetric lambda calculus: Extending the Coppo-Dezani heritage. Theor. Comput. Sci., 398(1-3).
[608] Daniel J. Dougherty, Pierre Lescanne, and Luigi Liquori. Addressed term rewriting systems: application to a typed object calculus. Mathematical Structures in Computer Science, 16(4).
[609] Daniel Hirschkoff, Étienne Lozes, and Davide Sangiorgi. On the expressiveness of the ambient logic. Logical Methods in Computer Science, 2(2).
[610] Daniel Hirschkoff and Damien Pous. A Distribution Law for CCS and a New Congruence Result for the Pi-calculus. Logical Methods in Computer Science, 4(2).
[611] Daniel Hirschkoff, Damien Pous, and Davide Sangiorgi. An efficient abstract machine for safe ambients. J. Log. Algebr. Program., 71(2).
[612] Tom Hirschowitz and Xavier Leroy. Mixin modules in a call-by-value setting. ACM Trans. Program. Lang. Syst., 27(5).
[613] Pierre Lescanne. Mechanizing common knowledge logic using Coq. Ann. Math. Artif. Intell., 48(1-2):15-43.
[614] Étienne Lozes. Elimination of spatial connectives in static spatial logics. Theor. Comput. Sci., 330(3), 2005.

[615] Étienne Lozes, Daniel Hirschkoff, and Davide Sangiorgi. Separability in the ambient logic. Logical Methods in Computer Science, 4(3:4), September.
[616] Damien Pous. New up-to techniques for weak bisimulation. Theor. Comput. Sci., 380(1-2).
[617] Damien Pous. Using bisimulation proof techniques for the analysis of distributed abstract machines. Theor. Comput. Sci., 402(2-3).
[618] Steffen van Bakel and Pierre Lescanne. Computation with classical sequents. Mathematical Structures in Computer Science, 18(3).

Invited conferences, seminars, and tutorials [INV]

International and national peer-reviewed conference proceedings [ACT]

[619] Philippe Audebaud and Christine Paulin-Mohring. Proofs of randomized algorithms in Coq. In Tarmo Uustalu, editor, Mathematics of Program Construction, MPC 2006, volume 4014 of LNCS, pages 49-68, Kuressaare, Estonia. Springer Verlag.
[620] Chafika Chettaoui, Franck Delaplace, Pierre Lescanne, Mun'delanji Vestergaard, and René Vestergaard. Rewriting game theory as a foundation for state-based models of gene regulation. In Computational Methods in Systems Biology, International Conference, CMSB 2006, Trento, Italy, October 18-19, 2006, Proceedings, volume 4210 of Lecture Notes in Computer Science. Springer.
[621] Luca Chiarabini and Philippe Audebaud. New development in extracting tail recursive programs from proofs. In Logic-Based Program Synthesis and Transformation, 19th International Symposium, LOPSTR 2009, Coimbra, Portugal, Sep 9-11, 2009, LNCS. Springer Verlag, to appear.
[622] Romain Demangeon, Daniel Hirschkoff, Naoki Kobayashi, and Davide Sangiorgi. On the complexity of termination inference for processes.
In Trustworthy Global Computing, Third Symposium, TGC 2007, Sophia-Antipolis, France, November 5-6, 2007, Revised Selected Papers, volume 4912 of Lecture Notes in Computer Science. Springer.
[623] Romain Demangeon, Daniel Hirschkoff, and Davide Sangiorgi. Static and dynamic typing for the termination of mobile processes. In Fifth IFIP International Conference On Theoretical Computer Science - TCS 2008, IFIP 20th World Computer Congress, TC 1, Foundations of Computer Science, September 7-10, 2008, Milano, Italy, volume 273 of IFIP. Springer.
[624] Romain Demangeon, Daniel Hirschkoff, and Davide Sangiorgi. Termination in higher-order concurrent calculi. In Proc. of FSEN 09, LNCS. Springer Verlag, to appear.
[625] Dan J. Dougherty, Pierre Lescanne, Luigi Liquori, and Frédéric Lang. Addressed term rewriting systems: Syntax, semantics, and pragmatics: Extended abstract. Electr. Notes Theor. Comput. Sci., 127(5):57-82.
[626] Daniel J. Dougherty, Silvia Ghilezan, Pierre Lescanne, and Silvia Likavec. Strong normalization of the dual classical sequent calculus. In Proc. of Logic for Programming, Artificial Intelligence, and Reasoning, 12th International Conference, LPAR 2005, Montego Bay, Jamaica, December 2-6, 2005, Proceedings, volume 3835 of Lecture Notes in Computer Science. Springer.
[627] Jean Duprat. About constructive vectors. In P. Lescanne, R. David, and M. Zaionc, editors, Proceedings of the Second Workshop on Computational Logic and Applications, volume 140 of Electr. Notes Theor. Comput. Sci.
[628] Jean Duprat. Une axiomatique de la géométrie plane en Coq. In Actes du Congrès Journées Francophones des Langages Applicatifs, Étretat, January.
[629] Richard H. G. Garner, Tom Hirschowitz, and Aurélien Pardon. Variable binding, symmetric monoidal closed theories, and bigraphs. In Proc. of CONCUR 09, volume 5710 of LNCS. Springer.
[630] Daniel Hirschkoff, Tom Hirschowitz, Samuel Hym, Aurélien Pardon, and Damien Pous.
Encapsulation and dynamic modularity in the Pi-calculus. In V.T. Vasconcelos and N. Yoshida, editors, Proc. of the PLACES 08 workshop, volume 241 of ENTCS, 2008.

[631] Daniel Hirschkoff, Tom Hirschowitz, Damien Pous, Alan Schmitt, and Jean-Bernard Stefani. Component-oriented programming with sharing: Containment is not ownership. In Generative Programming and Component Engineering, 4th International Conference, GPCE 2005, Tallinn, Estonia, September 29 - October 1, 2005, Proceedings, volume 3676 of Lecture Notes in Computer Science. Springer.
[632] Daniel Hirschkoff and Damien Pous. A Distribution Law for CCS and a New Congruence Result for the Pi-Calculus. In Foundations of Software Science and Computational Structures, 10th International Conference, FOSSACS 2007, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2007, Braga, Portugal, March 24-April 1, 2007, Proceedings, volume 4423 of Lecture Notes in Computer Science. Springer.
[633] Daniel Hirschkoff, Damien Pous, and Davide Sangiorgi. A correct abstract machine for safe ambients. In Coordination Models and Languages, 7th International Conference, COORDINATION 2005, Namur, Belgium, April 20-23, 2005, Proceedings, volume 3454 of Lecture Notes in Computer Science. Springer.
[634] Michel Hirschowitz, André Hirschowitz, and Tom Hirschowitz. A theory for game theories. In FSTTCS 2007: Foundations of Software Technology and Theoretical Computer Science, 27th International Conference, New Delhi, India, December 12-14, 2007, Proceedings, volume 4855 of Lecture Notes in Computer Science. Springer.
[635] Alexandre Miquel. Relating classical realizability and negative translation for existential witness extraction. In P.-L. Curien, editor, Proc. of TLCA 09, volume 5608 of LNCS. Springer Verlag.
[636] Barbara Petit. A polymorphic type system for the lambda-calculus with constructors. In P.-L. Curien, editor, Proc. of TLCA 09, volume 5608 of LNCS. Springer Verlag.
[637] Damien Pous. Up-to techniques for weak bisimulation.
In Automata, Languages and Programming, 32nd International Colloquium, ICALP 2005, Lisbon, Portugal, July 11-15, 2005, Proceedings, volume 3580 of Lecture Notes in Computer Science. Springer.
[638] Damien Pous. On bisimulation proofs for the analysis of distributed abstract machines. In Trustworthy Global Computing, Second Symposium, TGC 2006, Lucca, Italy, November 7-9, 2006, Revised Selected Papers, volume 4661 of Lecture Notes in Computer Science. Springer.
[639] Damien Pous. Weak bisimulation up to elaboration. In CONCUR 2006 - Concurrency Theory, 17th International Conference, CONCUR 2006, Bonn, Germany, August 27-30, 2006, Proceedings, volume 4137 of Lecture Notes in Computer Science. Springer.
[640] Damien Pous. Complete lattices and up-to techniques. In Programming Languages and Systems, 5th Asian Symposium, APLAS 2007, Singapore, November 29-December 1, 2007, Proceedings, volume 4807 of Lecture Notes in Computer Science. Springer.
[641] Colin Riba. On the values of reducibility candidates. In P.-L. Curien, editor, Proc. of TLCA 09, volume 5608 of LNCS. Springer Verlag.
[642] Stéphane Le Roux. Graphs and path equilibria. In Rudolf Fleischer and Jinhui Xu, editors, 4th International Conference on Algorithmic Aspects in Information and Management, volume 5034 of Lecture Notes in Computer Science. Springer.
[643] Stéphane Le Roux. Acyclic preferences and existence of sequential Nash equilibria: A formal and constructive equivalence. In Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel, editors, 22nd International Conference on Theorem Proving in Higher Order Logics, volume 5674 of Lecture Notes in Computer Science. Springer.
[644] Steffen van Bakel, Stéphane Lengrand, and Pierre Lescanne. The language chi: Circuits, computations and classical logic. In Proc.
of Theoretical Computer Science, 9th Italian Conference, ICTCS 2005, Siena, Italy, October 12-14, 2005, Proceedings, volume 3701 of Lecture Notes in Computer Science. Springer, 2005.

Short communications [COM] and posters [AFF] in conferences and workshops

[645] Romain Demangeon. Type systems for the termination of mobile processes. Talk given at the 20th Nordic Workshop on Programming Theory, Tallinn, Estonia.
[646] Romain Demangeon. Type systems for the termination of mobile processes. Talk given at the 10th International Workshop on Termination, Leipzig, Germany.
[647] Jean Duprat. Plane geometry in Coq. Talk at the TYPES 09 meeting, Aussois, France.
[648] André Hirschowitz, Michel Hirschowitz, and Tom Hirschowitz. Playing for truth on mutable graphs. Talk at the Colloquium in honor of Gérard Huet's 60th birthday, ENS, Paris.
[649] Barbara Petit. A polymorphic type system for the lambda-calculus with constructors. Talk at the TYPES 09 meeting, Aussois, France.

Scientific books and book chapters [OS]

Scientific popularization [OV]

[650] Pierre Lescanne. Entretien avec Michel Morvan. Bulletin de SPECIF, 54, December.
[651] Pierre Lescanne. Interview de Jacques Cohen. Bulletin de SPECIF, 55, March.
[652] Pierre Lescanne. Review of "Alfred Tarski: Life and Logic" by Anita Burdman Feferman and Solomon Feferman, Cambridge University Press. SIGACT News, 37(1):27-28.
[653] Pierre Lescanne. Entretien avec Gilles Dowek. Bulletin SPECIF, 58, December.
[654] Pierre Lescanne. In memoriam John Backus. Bulletin SPECIF, 57, April.
[655] Pierre Lescanne. Les propositions vraies sont démontrables. La Recherche, 412, October, special issue "Le dictionnaire des idées reçues en science".
[656] Pierre Lescanne. Interview d'Alexandre Miquel. Bulletin SPECIF, 59, April.
[657] Pierre Lescanne. La pensée informatique, traduction d'un article de Jeannette Wing. Bulletin SPECIF, December.
[658] Pierre Lescanne. Interview de Gérard Berry. Bulletin SPECIF, 61:41-44, April.

Book or Proceedings editing [DO]

[659] Philippe Audebaud and Christine Paulin-Mohring, editors.
Mathematics of Program Construction, 9th International Conference, MPC 2008, Marseille, France, July 15-18, Proceedings, volume 5133 of Lecture Notes in Computer Science. Springer.
[660] Philippe Audebaud and Christine Paulin-Mohring, editors. Special issue dedicated to MPC 08, Science of Computer Programming. Elsevier, in preparation.
[661] Patrick Baillot, Jean-Yves Marion, and Simona Ronchi Della Rocca, editors. Special Issue on Implicit Computational Complexity, volume 10 of ACM Transactions on Computational Logic. ACM.
[662] René David, Danièle Gardy, Pierre Lescanne, and Marek Zaionc, editors. Computational Logic and Applications, CLA 05.
[663] Thomas Ehrhard, Claudia Faggian, and Olivier Laurent, editors. Jean-Yves Girard's Festschrift, special issue of Theoretical Computer Science. Elsevier, in preparation.

Other Publications [AP]

Doctoral Dissertations and Habilitation Theses [TH]

[664] Romain Kervarc. Systèmes de types purs et substitutions explicites. PhD thesis, École Normale Supérieure de Lyon.
[665] Olivier Laurent. Investigations classiques, complexes et concurrentes à l'aide de la logique linéaire. Habilitation à diriger des recherches, Université Paris Diderot - Paris 7, in preparation.
[666] Silvia Likavec. Types for object oriented and functional programming languages. PhD thesis, ENS Lyon and Università di Torino.
[667] Damien Pous. Techniques modulo pour les bisimulations. PhD thesis, École Normale Supérieure de Lyon.
[668] Stéphane Le Roux. Generalisation and formalisation in game theory. PhD thesis, École Normale Supérieure de Lyon.
[669] Dragisa Zunic. Computing with sequents and diagrams in classical logic. PhD thesis, École Normale Supérieure de Lyon.

Publication categories:
ACL - International and national peer-reviewed journals
INV - Invited conferences
ACT - International and national peer-reviewed conference proceedings
COM / AFF - Short communications and posters in international and national conferences and workshops
OS - Scientific books and book chapters
OV - Scientific popularization
DO - Book or Proceedings editing
AP - Other Publications
TH - Doctoral Dissertations and Habilitation Theses

Other Productions [AP]

[670] Jean Duprat. Ruler and compass geometry axiomatization. Coq contribution.
[671] Pierre Lescanne. Coq development of common knowledge logic. Corresponds to article [613]. Coq contribution.
[672] Pierre Lescanne. Coq development of infinite extensive games and escalation. Corresponds to report http://prunel.ccsd.cnrs.fr/ensl/fr/. Coq contribution.
[673] Alexandre Miquel. The classical extraction module for Coq. Software: Coq module.
[674] Alexandre Miquel. The Jivaro head reduction machine for the λc-calculus. Software.
[675] Damien Pous. GcPan: an Efficient Abstract Machine for Safe Ambients. Software.
[676] Damien Pous. A Coq formalisation of up-to techniques for weak bisimulation. Coq contribution.

8. RESO - Optimized Protocols and Software for High-Performance Networks

Team: RESO
Scientific leader: P. VICAT-BLANC PRIMET
Evaluation Web site:
Parent Organisations: École Normale Supérieure de Lyon, University of Lyon 1, INRIA, CNRS
Team History: This research team is an INRIA project-team (EPI). In 2007, the LIP RESO team was joined by Eric Fleury, appointed Professor at ENS, and by his PhD students; RESO was therefore temporarily renamed COMET. In the next period, thanks to the creation of the new INRIA D-NET team and as detailed in the present document, there will again be a strict bijection between the LIP RESO team and the EPI RESO.

8.1 Team Composition

Current team members

Permanent Researchers

CHELIUS Guillaume, Research Associate (CR2), INRIA, arrived 1/3/2009
GELAS Jean-Patrick, Associate Professor, University of Lyon, arrived 1/10/2005
GLUCK Olivier, Associate Professor, University of Lyon, arrived 1/10/2003
GONÇALVES Paulo, Research Associate (CR1), INRIA, arrived 1/3/2006
GUERIN-LASSOUS Isabelle, Professor, University of Lyon, arrived 1/10/2006
FLEURY Eric, Professor, ENS Lyon, arrived 1/10/2007
LEFEVRE Laurent, Research Associate (CR1), INRIA, arrived 1/10/2001
VICAT-BLANC PRIMET Pascale, Senior Researcher (DR2), INRIA, arrived 1/10/2005 (as DR)

Post-docs, engineers and visitors

CHENIOUR Abderhaman, Temporary Expert Engineer, INRIA, arrived 15/7/2008
CEDEYN Aurélien, Temporary ENS Engineer (50%), ENSL, arrived 1/9/2006
DAHAL Manoj, Post-doc, INRIA, arrived 5/11/2008
DIAS DE ASSUNCAO Marcos, Post-doc, INRIA, arrived 15/9/2009
DRAMINITOS Emanouielli, Post-doc, INRIA, arrived 15/11/2008
GOGA Oana, Temporary Engineer, INRIA, arrived 1/10/2008
GREMILLET Olivier, Post-doc, INRIA, arrived 1/12/2008
IBRAHIM Mouhamad, Post-doc, INRIA, arrived 5/1/2009
IMBERT Matthieu, Permanent Research Engineer (40%), INRIA, arrived 1/9/2007
MARTINEZ Philippe, Temporary Expert Engineer, INRIA, arrived 1/10/2008
MIGNOT Jean-Christophe, Permanent Research Engineer (50%), CNRS, arrived 1/9/2000
MORNARD Olivier, Temporary Expert Engineer, INRIA, arrived 1/3/2007
NUSSBAUM Lucas, Temporary ATER, UCBL, arrived 1/10/2008
RAGON Augustin, Temporary Expert Engineer, INRIA, arrived 1/3/

Doctoral Students

ANHALT Fabienne, INRIA, first registered 1/10/2008
CHIS Andreea, INRIA, first registered 1/10/2007
GUICHARD Pierre-Solen, INRIA, first registered 1/10/2008
GUILLIER Romaric, INRIA, first registered 1/
HABLOT Ludovic, MENRT, first registered 1/10/2006
KOSLOVSKI Guilherme, INRIA, first registered 1/10/2008
LOISEAU Patrick, MENRT (AC), first registered 1/10/2006
MON DIVAKARAN Dinil, INRIA, first registered 1/4/2007
ORGERIE Anne-Cecile, MENRT, first registered 1/10/2008
QINNA Wang, MENRT, first registered 1/10/2008
SOUDAN Sébastien, MENRT, first registered 1/10/2006
VANIER Rémi, INRIA, first registered 1/10/2006

Past team members

PHAM Cong Duc, Associate Professor, University of Lyon, arrived 01/10/, departed /9/2005, current position: Prof. Univ. Pau

Past Doctoral students (doctoral school: ED Math IF)

AYARI Narjees, FT R&D (Cifre), supervisor P. Vicat-Blanc Primet, current position: private company (SSII)
LAGANIER Julien, SUN (Cifre), supervisor P. Vicat-Blanc Primet, current position: Senior Researcher, NTT DoCoMo
GOGLIN Brice, CNRS, supervisor P. Vicat-Blanc Primet, current position: Junior Researcher (CR1), INRIA
VERNOIS Antoine, ENS (ACI), supervisors P. Vicat-Blanc Primet and F. Desprez, current position: Assist. Prof. Univ. Clermont-Ferrand

LOPEZ-PACHECO Dino, Mexican government grant, supervisor Cong Duc Pham, current position: Assist. Prof. Univ. Nice

Past post-doctoral researchers, engineers and visitors (3 months or more)

ANCELIN Damien, Expert Engineer, arrived 1/3/2007, departed 1/3/, no position
BAYKUT Süleyman, Long Term Visitor, arrived 1/1/, departed /10/2008, Univ. of Istanbul, PhD student
BOZONNET Pierre, Expert Engineer, arrived 1/1/, departed 31/08/2008, Engineer
CARRÃO Hugo, Long Term Visitor, arrived 6/7/, departed /12/2008, Univ. of Lisbon, PhD student
CHEN Bin Bin, Long Term Visitor, arrived 1/3/, departed /9/2005, Univ. Singapore, current position: Post Doc, CWI (NL)
PASIN Marcelo, Expert Engineer, arrived 15/11/2008, Univ. of Lisbon, Associate Prof.

Evolution of the team: RESO progressively attracted members from neighbouring fields, notably Paulo Gonçalves (CR1, signal processing and statistical analysis) and Prof. Isabelle Guerin Lassous (wireless and ad hoc networks). This gave RESO the potential and the opportunity to recruit new PhD students and to participate in large-scale collaborations with industry (INRIA-Bell Labs) and with other research teams. For example, the collaboration with the SYSIPHE team (led by Patrice Abry) of the ENS Physics laboratory has been revitalized. The arrival of Eric Fleury (dynamic networks) as professor at ENS in 2008 reinforced the networking component of the LIP laboratory. Eric Fleury was a former member and the scientific leader of the INRIA project-team ARES. After joining LIP, he proposed a new research topic combining his ARES experience on sensor networks with the ENS potential in network measurement, analysis and modelling. D-NET, a new project on dynamic networks, has been elaborated; it started as an INRIA project in March 2009 and will be an independent LIP research team in the next period.
8.2 Executive summary

Keywords: High-speed networks, quality of service, protocols, network services, traffic metrology, energy efficiency, virtual infrastructures, grids, clouds, Future Internet.

Research area

Wavelength multiplexing and switching techniques on optical fibers allow core network infrastructures to rapidly improve their throughput, reliability and flexibility. These improvements have made it possible to create high-performance distributed systems that aggregate storage and computational resources into an integrated computing environment. Over the past decade, extensive research and development around the concepts of Grid, utility computing and now Cloud services has underlined the strength of this approach. This trend will strongly influence the development of the Future Internet, but raises major research issues in networking and network services, requiring a new vision of the network and of the current Internet stack (TCP/IP). The coordination of networking, computing and storage requires the design, development and deployment of new resource management and sharing approaches to discover, reserve, co-allocate and reconfigure resources, as well as to schedule and control their usage. In the future, the network will no longer be only a black box providing pipes between edge machines, but will become a vast cloud increasingly embedding huge and open ICT resources to meet the requirements of emerging applications. Future high-speed optical networks are expected not only to support the accelerating and dynamic growth of data traffic, but also to meet the emerging requirements of data-intensive applications, such as fast and flexible provisioning, QoS levels, and fast recovery procedures. The use of networks for on-demand computing is now spreading across the Internet at large, while the optical transport layer extends to the edge (fiber to the home), enabling ultra-high-performance machine-to-machine or human-to-remote-machine communications.
One of the key challenges to be addressed for the large-scale deployment of new challenging applications and services in the Internet is the dynamic provisioning of a secure, flexible, transparent and high-performance transport infrastructure. This future pervasive computing environment will rely not only on very high speed wired networks but also on wireless systems. The Internet redesign offers the opportunity to better understand and assess higher-level system requirements, and to use these as drivers of the lower-layer architecture. In this process, mechanisms that are implemented today as part of applications may conceivably migrate into the network itself, to bring it more flexibility, intelligence and autonomy.

Main goals

The goal of our research is to provide analysis of the limitations of the current communication paradigms,

network software and protocols designed for standard networks and traditional usages, and to propose innovative approaches, optimizations and associated control mechanisms. RESO aims at providing software solutions, but also original processes, for high-performance and flexible communications on very-high-speed networking infrastructures, and for an efficient exploitation of these infrastructures. These solutions must scale with increasing bandwidths, heterogeneity, and numbers of flows and usages. RESO studies high-speed networks and their traffic characteristics and high-end application requirements, creates open source code, distributes it to the research community for evaluation and usage, and helps shorten the "wizard gap" between network experts and novices. The long-term goal is also to contribute to the evolution of protocols, standards and networking equipment, prompting the introduction of metrology as an intrinsic component of high-speed networks. An important effort is naturally dedicated to the dissemination of these new approaches. During this period, it appeared crucial in the current context that efforts should also be spent on the investigation and understanding of wireless communications and the related network complexity, dynamics, and scalability. To address some of these issues, our work follows five major research topics:

1. Optimized protocol implementations and networking equipment
2. Quality of service and transport layer for future networks
3. High-speed network traffic metrology, analysis and modelling
4. Network services for high-demanding applications
5. Wireless communications

Methodology

RESO's approach relies on a tight combination of theoretical and practical work. Our research framework, at the interface of a specific network context (very high speed, optical technology) and a challenging application domain (Grids, Clouds), induces a close interaction with both the underlying network level and the application level.
Our methodology is based on a deep evaluation of the functionalities and performance of high-speed infrastructures, and on a study of high-end and original requirements, before designing and analysing new solutions. RESO gathers expertise in the design and implementation of advanced high-performance cluster network protocols, long-distance networks and Internet protocol architecture, distributed systems and algorithms, but also scheduling theory, optimization, queuing theory, and statistical analysis. This background work provides the context model for innovative protocol and software design and evaluation. Moreover, the proposals are implemented and experimented on real or emulated local or wide area testbeds, under real conditions and with large-scale applications, then transferred to industry.

Highlights

During this period, the RESO team had internationally visible contributions in the following fields:

- Network-resource sharing in very-high-speed networks, with the work on bulk data transfer scheduling.
- Dynamic bandwidth provisioning in optical networks, with the work on SRV (a network-resource scheduling, virtualization and reconfiguration component in a service-oriented approach) and participation in the OGF NSI WG (Open Grid Forum Network Service Interface working group).
- Virtual private infrastructure orchestration and definition, with the work on HIPerNET and VXDL (virtual private execution infrastructure description language).
- Sampling methods for characterizing heavy-tailed distributions in high-speed network traffic, and experimental validation of Taqqu's theorem.
- Participation in the design, development and visibility of the local and national GRID5000 instrument, and in its international optical interconnections to the Netherlands (DAS3) and Japan (Naregi), in collaboration with RENATER.
- Design and development of the Metroflux metrology infrastructure for fine-grained traffic monitoring in Grid5000, and of a traffic-analysis system dedicated to 10 Gb/s links.
- Development of the SensLAB testbed, a very large scale open wireless sensor network platform. SensLAB's main goal is to offer an accurate and efficient scientific tool to help in the design, development, tuning, and experimentation of real large-scale sensor network applications.

The other highlights of this period to be underlined are the strong involvement in the creation of the INRIA-Bell Labs common laboratory and the active participation in the CARRIOCAS project of the Pôle de Compétitivité Ile de France. Indeed, a large part of research topics 2 and 3 is integrated in the ADR (axe de recherche) "Semantic Networking" that we are leading within the

201 8.3 Optimized Protocols and Software for High-Performance Networks 187 common INRIA Bell Labs laboratory. The motivation of our research work in the common lab is to build and to exploit the knowledge that comes along with traffic. The goal is to act in a better way and to make better decisions at the router and network levels. The knowledge that comes with the traffic is what we call the semantics of the traffic. The main topics we are exploring in the research topic of the common laboratory ar: a) traffic identification and classification, b) traffic sampling, c) flow analysis, d), flow scheduling, e) sampling-based scheduling, e) flow-based routing. the increase of the theoretical dimension of our research with the arrival of Paulo Gonçalves and the close collaboration with the MAESTRO team (INRIA Sophia Antipolis) member of the Joint laboratory. This reinforces the attractivity, the diversification,the impact and the visibility of the research team. the vision for the Grid Network and the Internet, we develop and disseminate since 2005, exemplified by the HIPerNET solution for network virtualisation and the Bulk Data Transfer Scheduling concepts proning the introduction of the temporal dimension in the Internet. The Future Internet and the Cloud waves lead us to work on a startup project, winner of the OSEO emergence prize, which will commercialize our solution for virtual infrastructures orchestration, to make this vision a reality for the Future Internet. 8.3 Research activities Optimized Protocol implementations and networking equipements List of participants: Narjess Ayari, Abderhaman Cheniour, Jean-Patrick Gelas, Olivier Glück, Laurent Lefèvre, P. Vicat- Blanc Primet, Jean-Christophe Mignot, Ludovic Hablot, Sébastien Soudan, Fabienne Anhalt, Olivier Mornard, Anne-Cécile Orgerie Keywords Network devices virtualisation, energy efficiency, programmable network equipment, session awareness. 
Scientific issues, goals, and positioning of the team: In this research topic we focus on the implementation and optimization of the mechanisms and processes within networking devices, mainly to address the following issues: flexibility, reliability and energy consumption of communications in highly demanding contexts. For several years, operating-system virtualization has been used in end-systems to improve the security, isolation, reliability and flexibility of the environments; these mechanisms are a must in large-scale distributed systems. In this research topic, we explore how they can be adapted and used in data-transport networks, specifically in switching and routing equipment, to increase security, isolation, reliability and flexibility. Another key enabling factor of new network services is programmability at every level, that is, the ability for new software capabilities to configure themselves over the network. We explore the concept of "dynamic programming enablers" for the dynamic, service-driven configuration of communication resources. The third theme of this research topic concerns the integration of context-awareness functionality into network equipment, to address two important issues: reliability of communications and energy consumption. Most Future Network services involve a session model based on multiple flows, required for signalling and for data exchange throughout the session's lifespan. New service-aware dependable systems are more than ever required. Challenges to these models include client and server transparency, low cost during failure-free periods, and sustained performance during failures. Based on our work with France Telecom R&D, we propose session-aware distributed network solutions, integrated in a cluster-based network server, which support the reliability mandatory for operator services (VoIP).
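As an illustration of the session-aware dispatch idea described above, the sketch below shows one way a cluster-based server could pin every flow of a session to the same server and fail over deterministically. The hashing scheme, names and parameters are illustrative assumptions, not the design developed with France Telecom R&D.

```python
import hashlib

# Illustrative sketch of session-aware dispatch in a cluster-based network
# server: every flow of a session is mapped to the same primary server, and
# a deterministic backup takes over when the primary fails, so the session
# survives the failure while other sessions are left untouched.

def rendezvous(session_id, servers):
    """Rank servers for a session with highest-random-weight hashing."""
    def score(server):
        digest = hashlib.sha256(f"{session_id}:{server}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(servers, key=score, reverse=True)

def dispatch(session_id, servers, alive):
    """Return the first live server in the session's preference list."""
    for server in rendezvous(session_id, servers):
        if server in alive:
            return server
    raise RuntimeError("no live server")

servers = ["s1", "s2", "s3"]
primary = dispatch("voip-4711", servers, alive=set(servers))
# If the primary fails, the same session deterministically lands on its
# backup; sessions mapped to the surviving servers do not move.
backup = dispatch("voip-4711", servers, alive=set(servers) - {primary})
```

Rendezvous hashing is used here only because it gives stable, per-session failover without shared state; the patented mechanism may differ.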
Finally, large-scale distributed systems (and, more generally, the next-generation Internet) face infrastructure and energy limitations (use, cooling, etc.). In the context of monitored and controlled energy usage, we have started to investigate energy-aware equipment and frameworks, which allow users and middleware to use large-scale distributed architectures efficiently.

Major results, Oct. - Oct. 2009: We proposed a new network model combining end-host, server and router virtualization to offer isolated and malleable virtual networks of different types, owners and protocols, all sharing one physical infrastructure. To gain better insight into this potential and its limits, we conducted systematic analyses and experiments on the cost of end-host, server and router virtualization. We proposed a virtual router model, implemented with Xen [746], and began studying the router migration problem [745]. We designed an autonomic network equipment taking into account the specific requirements of active equipment in terms of dynamic service deployment, auto-settings, self-configuration and monitoring, but also in terms of hardware specification (limited resources, limited mechanical parts, dimension constraints), reliability and fault tolerance. We proposed the architecture of an Industrial Autonomic Network Node (called IAN²) deployable in industrial platforms [764, 691]. The implementation is based on our generic high-performance active-network environment (Tamanoir).

In order to improve autonomic service deployment in large-scale networks, we designed and proposed the ANPI framework (Autonomic Network Programming Interface), currently used in the Autonomic Internet European project. In collaboration with France Telecom R&D ("Procédés de gestion de sessions multi-flux", N. Ayari, D. Barbaron, L. Lefèvre, France Telecom R&D patent, June 2007), we proposed session-aware distributed network solutions, integrated in a cluster-based network server, which support the reliability mandatory for operator services (VoIP) [751, 750, 894, 749, 848, 754, 755]. Since 2007, we have been exploring the challenges associated with the energy usage of large-scale distributed infrastructures (grids, clouds, Future Internet). We are developing solutions to dynamically monitor and optimize energy usage in software frameworks, and validate them on the Grid5000 platform. This research axis is supported by the European FP7 STREP "Autonomic Internet" project, the European SSA project OGF-Europe, the European COST Action IC0804 on energy efficiency in large-scale distributed systems, the ANR Jeunes Chercheurs project DSLLAB, the INRIA ARC GREEN-NET, the Grid5000-ALADDIN project and the PAI Fast project with Queensland University of Technology.

Quality of Service and Transport layer for Future Networks

List of participants: P. Vicat-Blanc Primet, Isabelle Guérin-Lassous, Laurent Lefèvre, Dino Lopez Pacheco, Sébastien Soudan, Romaric Guillier, Dinil Mon Divakaran, Guilherme Koslovski, Rémi Vannier, Marcelo Pasin, Pierre Bozonnet

Keywords: bandwidth sharing, quality of service, high-speed transport protocols, flow scheduling.

Scientific issues, goals, and positioning of the team: The growth of traffic over Internet links has been a remarkable trend of recent years. The bandwidth increase at the end-user level and at ISPs (for home or enterprise use) has radically changed the expectations of Internet users.
High-bandwidth applications, like some peer-to-peer applications, and real-time interactive communications, like voice and video over IP, will play a significant part in the next-generation Internet. The key paradigm, and the condition for the stability of the current Internet, is still that each Internet device cooperates in network-resource management (i.e., congestion control). This cooperative sharing is implemented within the congestion-control algorithm of the TCP protocol. But such a way of sharing resources in the Internet presents security risks and limits quality of service. In the future, it will be difficult to assume that all devices will cooperate. Moreover, the conservative behavior of TCP with respect to congestion in IP networks is at the heart of the performance issues currently faced when the traffic load is highly dynamic. To solve this problem, protocol enhancements and alternative congestion-control mechanisms have been proposed for very-high-speed optical networks, wireless networks and multimedia applications (see the PFLDnet conference series). Most of them are now implemented in current operating systems, but these protocols are not equivalent: not all of them are suitable for all environments and applications, and they may not coexist well. For a few years now, the evaluation and comparison of new transport protocols have received an increasing amount of interest (see the IRTF TMRG and ICCRG groups). However, TCP and its alternatives are complex protocols with many user-configurable parameters and a range of different implementations. Several aspects can be studied, and various testing methods exist. To offer a better quality of experience in future networks, we argue that some network-resource control has to be associated with the end-to-end flow-control approach.
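The conservative additive-increase/multiplicative-decrease (AIMD) behavior of TCP discussed above can be illustrated with a minimal simulation; the window unit, increase step and loss pattern are illustrative assumptions, not measurements from our testbeds.

```python
# Minimal AIMD sketch: the sender's congestion window grows linearly
# (additive increase) and halves on each loss event (multiplicative
# decrease). On long fat networks, regaining the window after a single
# loss takes many round trips, which is the conservatism discussed above.

def aimd_trace(rtts, loss_at, cwnd=1.0, incr=1.0, decr=0.5):
    """Return the congestion window after each round-trip time.

    rtts    -- number of round trips to simulate
    loss_at -- set of RTT indices at which a loss is detected
    """
    trace = []
    for t in range(rtts):
        if t in loss_at:
            cwnd = max(1.0, cwnd * decr)   # multiplicative decrease
        else:
            cwnd += incr                   # additive increase
        trace.append(cwnd)
    return trace

trace = aimd_trace(rtts=20, loss_at={10})
# After the loss at RTT 10 the window drops to half its value and needs
# several further round trips to recover, even if capacity is available.
```

On a 10 Gb/s path with a 100 ms RTT, this linear recovery is what makes standard TCP unable to track a highly dynamic load, motivating the alternative congestion-control mechanisms cited above.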
We proposed to investigate the temporal dimension of the Internet and to address the control timescale (control plane) in the context of the Future Internet, not only for performance but also for flexibility, manageability and security purposes. Moreover, we have shown that router-assisted protocols providing Explicit Rate Notification (ERN protocols) provide a higher fairness level and avoid congestion better than end-to-end (E2E) protocols (e.g., TCP-based protocols). However, the lack of interoperability of ERN protocols with non-ERN routers (like DropTail routers) and with E2E congestion-control protocols (like TCP) prevents the deployment of ERN protocols in current networks. To solve the interoperability problem with non-ERN routers, we proposed a solution that detects the set of non-ERN routers between two ERN routers, estimates the available bandwidth in that set, and creates a virtual router that computes a feedback reflecting the estimated available bandwidth. Regarding the interoperability of ERN protocols with E2E protocols, we proposed a solution that provides friendliness between both kinds of protocols. The mechanisms it implements can be divided into two steps: first, the solution estimates the bandwidth needed by the ERN and E2E protocols; then, it randomly drops packets belonging to the E2E flows, with a probability updated based on the results of the first step. Since the success of ERN protocols depends on the information about the state of the network provided by the routers (the feedback), we also studied the behavior of ERN protocols in scenarios where a percentage of the feedback is lost. These studies show that feedback losses in networks with VBE may lead to a chaotic behavior of ERN flows.

Major results, Oct. - Oct. 2009: During this period we actively contributed to the collective methodological effort towards a benchmark design for comparing high-speed transport protocols. We proposed scenarios, experiment-deployment tools (NXE) and testbeds for these studies (Grid5000 and Metroflux). This work was done in the context of Romaric Guillier's PhD, the ANR IGTMD project and our associated team with AIST in Japan. Alternative bandwidth-sharing approaches have also been investigated. Flow scheduling [766], based on in-advance knowledge of the resource requirements of an application or on online estimation of these requirements, has been studied. Signaling and real-time flow analysis, as well as scalability issues, have been explored. A distributed, Lagrangian-relaxation-based solution for bandwidth sharing is also investigated; this approach handles dynamics well, whether due to node mobility or traffic variation. We studied the interactions of the components required to accomplish the tasks of bandwidth reservation, path computation and network signaling, and proposed to extend the control plane and introduce dynamicity in network-resource reservation through a network service plane. These results were obtained in the context of Sébastien Soudan's PhD, the FP6 EC-GIN project, and our collaborations with the G-Lambda project (Japan) and with Alcatel-Lucent in the CARRIOCAS project (System@tic).
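The two-step friendliness mechanism summarized above (estimate the E2E share, then drop E2E packets with an adaptively updated probability) can be sketched as follows; the fair-share rule and the gain constant are illustrative assumptions, not the exact estimator of our implementation.

```python
import random

# Sketch of the two-step ERN/E2E friendliness mechanism. Step 1 estimates
# how far the E2E (TCP-like) flows exceed the share they should get; step 2
# drops E2E packets with a probability driven by that overshoot, pushing
# the E2E flows back toward their fair share while ERN traffic is spared.

def update_drop_probability(e2e_rate, e2e_fair_share, p, gain=0.1):
    """Move the drop probability p toward the E2E overshoot ratio."""
    if e2e_fair_share <= 0:
        return 1.0
    overshoot = max(0.0, (e2e_rate - e2e_fair_share) / e2e_fair_share)
    return min(1.0, max(0.0, p + gain * (overshoot - p)))

def forward(packet_is_e2e, p, rng=random.random):
    """Return True if the packet is forwarded, False if it is dropped."""
    if packet_is_e2e and rng() < p:
        return False     # random early drop of an E2E packet
    return True          # ERN packets are never dropped here

p = 0.0
for _ in range(50):                      # E2E flows use twice their share
    p = update_drop_probability(e2e_rate=20.0, e2e_fair_share=10.0, p=p)
# p converges toward the measured overshoot ratio (1.0 in this example)
```

Because TCP sources react to losses by shrinking their window, even a small stationary drop probability is enough to steer the E2E aggregate toward the estimated fair share.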
We proposed a new architecture for Explicit Rate Notification protocols which improves the responsiveness and robustness of ERN flows in networks with VBE and feedback losses. The proposed mechanisms and solutions have been implemented and validated on the eXplicit Control Protocol (XCP: XCP-f, XCP-r, XCP-i). Concerning bandwidth sharing, we designed several solutions, intended either for wireless local networks or for grids and clouds. We studied the bandwidth-reservation problem for bulk data transfers in grid networks. We model grid networks as a set of distributed sites interconnected by a network with potential bottlenecks, where transfer requests arrive online with specified volumes and deadlines. The model is based on network-resource reservations to support the delivery of deterministic performance to end users. We also worked on control planes allowing the automatic configuration of network equipment, in order to add or remove paths between sites or nodes, and proposed a model for multilayer networks with advance reservation of paths. We explored the sender-side rate-limitation problem and developed a solution in collaboration with our associated team at the Grid Technology Research Center (AIST, Japan), which Sébastien Soudan visited. On the transport-protocol topic, we collaborated with the PFLDnet community and the IRTF (ICCRG and TMRG). The bandwidth-sharing topic has been supported by the European EC-GIN and AEOLUS projects. The last topic is developed in close collaboration with Bell Labs (Alcatel-Lucent) and ANAGRAN, and is funded by the INRIA-Bell Labs laboratory. In this direction, we also collaborate intensively with the MAESTRO project-team, which is involved in the "Semantic Networking" topic.

High-speed network traffic metrology and statistical analysis

List of participants: Paulo Gonçalves, P. Vicat-Blanc Primet, Matthieu Imbert, Damien Ancelin, Patrick Loiseau

Keywords: traffic measurement and modelling, statistical analysis, multifractals, wavelets

Scientific issues, goals, and positioning of the team: Metrology (i.e., the deployment of a series of tools for collecting relevant information about a system's status) is a recently introduced discipline in the context of wide-area computer networks, and one under constant development. In a nutshell, this activity consists in measuring, over time, the nature and amount of information exchanged between the constituents of a system. The collected data are then used to forecast the evolution of the network load, so as to anticipate congestion and, more broadly, to guarantee a certain quality of service while optimizing resource usage and protocol design. From a statistical signal-processing viewpoint, collected traces correspond to (multivariate) time series principally characterized by non-properties: non-Gaussianity, non-stationarity, non-linearity, absence of a characteristic time scale (scale invariance). Our research activity undertakes the development of reliable signal-analysis tools aimed at identifying these (non-)properties in the specific context of computer-network traffic. Along the way, we intend to clarify the importance of measurement granularity. Another challenge in network metrology is the effectiveness of packet sub-sampling: collecting only a fraction of the overall traffic (supposedly redundant) and studying the possibility of inferring, from that partial

measurement, the most complete information possible about the system. Non-trivial questions, such as which fraction to collect, which sub-sampling rule to use, its adaptivity, smart sampling and statistical inference, open up a broad scope of investigation. In this research topic, we focus on the two following questions: how do the traffic's statistical properties really impact quality of service (QoS)? How can transiting flows be identified and classified, in real time, according to a sensible typology? Within the framework of the common laboratory between INRIA and Alcatel-Lucent, the "Semantic Networking" axis brings a new field of metrology research into RESO. Tools for measuring the end-to-end performance of a path between two hosts are very important for transport-protocol and distributed-application performance optimization. Bandwidth-evaluation methods aim to provide a realistic view of the raw capacity, but also to characterize the dynamic behavior of an interconnection, a valuable factor for estimating the transfer time of bulk data. Existing methods differ according to their measurement strategies and the evaluated metrics; they can be active or passive, intrusive or non-intrusive. Non-intrusive active approaches, based on packet trains or packet pairs, provide available-bandwidth measurements and/or total-capacity measurements. None of the proposed tools based on these methods evaluates both metrics while giving an overview of the link's topology and characteristics. A metrology activity including data processing, statistical inference, time series and stochastic-process analysis was therefore deemed important to embed in the main research realm of RESO. Our goal is for these analyses to become, in the near future, a full component not only of the study and development of infrastructures and computing networks, but also of real-time resource identification and management. Major results, Oct. - Oct.
2009: Most achievements in this area build on the PhD work of P. Loiseau. From 2006 to 2008, we invested considerably in the development of our metrology platform MetroFlux. Our main concern was to guarantee sufficient reliability to support our future experimental research, and the major investment granted to Grid5000 was profitably used, providing us with a high-performance and quite novel experimental setup to confront the proposed theoretical models with real traffic measurements. MetroFlux gave rise to several presentations in international events [801, 862, 864]. The first theorem we empirically validated with our platform was the relationship that links the tail index of a heavy-tailed flow-size distribution to the long-range dependence of the corresponding aggregated traffic (for a tail index α in (1, 2), the aggregated traffic is long-range dependent with Hurst parameter H = (3 - α)/2). Before our study, this theorem, originally derived by Taqqu et al., had never really been corroborated on a real, large-scale network facility under very general and fully controlled conditions. The originality of this work earned it publication as a regular paper in IEEE/ACM Transactions on Networking [693]. In parallel, in 2008 we started theoretical work on the maximum-likelihood estimation of the heavy-tail index of flow-size distributions from packet-sampled traffic traces. The difficulty dwells in the sampling process itself, which yields data doubly censored with respect to the flow size and to the flow population. Nevertheless, we were able to derive the closed-form expression of an original estimator, which we presented at SIGMETRICS 2009 [800]. Since the beginning of 2009, we have been interested in Markov models for TCP traffic; more specifically, we revisited their multifractal analysis and were led to propose a new large-deviation principle adapted to the additive-increase/multiplicative-decrease behavior of instantaneous TCP throughput. This is ongoing work, in joint collaboration with J.
Barral from the Sisyphe INRIA project-team.

Network Services for high-demanding applications

List of participants: Olivier Glück, P. Vicat-Blanc Primet, Laurent Lefèvre, Sébastien Soudan, Romaric Guillier, Ludovic Hablot, Marcelo Pasin, Pierre Bozonnet, Lucas Nussbaum, Aurélien Cedeyn, Oana Goga, Manoj Dahal, Olivier Mornard

Keywords: network services, bandwidth on demand, Grid5000, MPI, grids, clouds

Scientific issues, goals, and positioning of the team: The purpose of computational grids was initially to aggregate a large collection of shared resources (computing, communication, storage, information) to build an efficient, very-high-performance computing environment for data-intensive or compute-intensive applications. But in general, the underlying communication infrastructure of these large-scale distributed environments is a complex interconnection of multiple IP domains with uncontrolled performance characteristics. Consequently the grid network cloud exhibits extreme heterogeneity in performance and reliability, which considerably affects global application performance.

In strong interaction with the three fundamental axes, this axis focuses on applying our solutions to the grid context and on implementing them in a real environment such as the national research instrument Grid5000. Indeed, we believe that the precise structure of future applications and services is difficult to design without building large-scale instruments and systems for real use, based on real and efficient hardware. Therefore, in this research topic we develop prototypes and deploy them within the Grid5000 testbed. For example, we design special measurement and routing systems at the edge of each Grid5000 site to explore new approaches and difficult problems alike. The topics investigated in this axis strongly focus on the usage and evolution of the Grid5000 instrument:

- studies of the interactions of MPI and the transport layer in wide-area networks;
- design, development and evaluation of a dynamic bandwidth-provisioning service;
- studies of the virtual private execution infrastructure concept for grid- and cloud-computing environments;
- large-scale deployment and evaluation of a high-speed network-measurement infrastructure.

Major results, Oct. - Oct. 2009: Thanks to systematic experiments on the behavior of MPI in large-scale environments, we merged optimizations of current implementations and proposed new optimizations in the communication layers in order to execute MPI applications more efficiently on the grid. We also studied the impact of using the TCP protocol for WAN communications (inter-site communications in the grid) and its interactions with MPI applications [786]. We proposed a new transparent layer called MPI5000, placed between MPI and TCP, allowing an application composed of several tasks to be correctly distributed over the available nodes with respect to the grid topology and the application's communication scheme.
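The bulk-data-transfer reservation model described earlier (online requests with specified volumes and deadlines) admits a simple feasibility test in the idealized case of a single bottleneck link with preemptible, fluid transfers all released at the same time; this simplified setting is an illustrative assumption, not the full multi-site model with advance reservations.

```python
# Feasibility sketch for deadline-constrained bulk transfers on a single
# bottleneck link, under a fluid model where transfers can be preempted
# and share the capacity arbitrarily. With all requests available at t=0,
# the set is schedulable iff, for every deadline d, the total volume due
# by d fits within capacity * d (an earliest-deadline-first argument).

def feasible(requests, capacity):
    """requests: list of (volume, deadline) pairs; capacity: link rate."""
    total = 0.0
    for volume, deadline in sorted(requests, key=lambda r: r[1]):
        total += volume
        if total > capacity * deadline:
            return False          # too much volume due by this deadline
    return True

# Example: a 10 Gb/s link, volumes in gigabits, deadlines in seconds.
print(feasible([(40, 5), (30, 8), (100, 20)], capacity=10))   # True
print(feasible([(60, 5), (30, 8)], capacity=10))              # False
```

In the online setting, a broker can run this check when a request arrives and reject (or renegotiate) requests that would make the accepted set infeasible, which is the admission-control role of the reservation service above.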
In the CARRIOCAS project of the Île-de-France System@tic competitiveness cluster, we explored optical-resource provisioning and optimal bandwidth-sharing services in the context of a 40 Gb/s, dynamically provisionable network. We developed the SRV software for dynamic network-service scheduling and reconfiguration. In the Grid5000 testbed, where we are responsible for the networking aspects, we adapted the Metroflux environment to deliver a network-measurement service. Through the European EC-GIN project, we worked on the Bulk Data Transfer Scheduling approach; we deployed it in Grid5000 and extended it to the Internet. Through the ANR IGTMD project, we collaborated with the LCG and real physicists: a dedicated link deployed between IN2P3 (one of the largest computing centers in France) and the Fermilab laboratory near Chicago enables us to perform transport-protocol experiments as well as traffic capture. Through the ANR HIPCAL project, RESO collaborates with biology and medical-imaging applications (I3S, IBCP) to design and develop the HIPerNet framework. We propose to combine network and system virtualization with cryptographic identification and SPKI/HIP principles to help user communities build and share their own resource reservoirs. The HIPerNet framework, which enables the creation and management of customized, confined execution environments in a large-scale context, implements an agile virtual-network solution and a distributed security approach. This research direction is mainly supported by the FP6 EC-GIN grant, the Grid5000-ALADDIN initiative, and the CARRIOCAS, HIPCAL, JSPS-NEGST, DSLLAB and OGF-Europe projects.

Wireless Communications

List of participants: Guillaume Chelius, Andreea Chis, Eric Fleury, Qinna Wang

Keywords: wireless networks, sensor networks, cross-layer

Scientific issues, goals, and positioning of the team: The future pervasive computing environment will rely not only on very-high-speed wired networks but also on two kinds of wireless systems.
The former will connect the world everywhere through broadband access links, thanks to future 4G systems and mesh networks. The latter, namely sensor networks, will make all objects able to communicate and to interact with their environment. While the former needs high data rates, through efficient distributed resource-sharing policies and the use of multiple antennas, the latter needs low-energy interfaces, miniaturization and robustness. Cooperation between multiple modes, antennas and nodes is the technological key to facing these challenging issues. In the current context, it appears crucial that large efforts be spent on investigating and understanding network complexity, dynamics and scalability. Such efforts should address the understanding of actual wireless protocols (from MAC to transport), the behavior of such protocols in wireless networks, and hybrid architectures combining wireless meshes with a heterogeneous wired backbone. From a theoretical point of view, there is an important need for generic and powerful

models in order to characterize existing networks and to better understand the laws and behaviors of large-scale networks. Moreover, such theoretical tools should help in the design and validation of new architectures. Research and development efforts to improve networking theory should be strongly multi-disciplinary: discrete mathematics and combinatorics (random graphs), probabilistic approaches (spatial processes), information theory (network coding), performance analysis (stochastic processes), etc. The behavior of wireless links is totally different from that of wired ones, and models must take into account the intrinsic characteristics of the radio medium: a wireless link should not be treated as a rigid input whose performance does not evolve over time.

Major results, Sept. - Oct. 2009: We adopt a pragmatic standpoint: whenever possible, we propose a mathematical representation of the dynamics of the protocols, from which one can predict and optimize the resulting behavior and derive bounds on scalability.

Modeling of wireless networks. We have been deeply involved in the research and design of distributed clustering algorithms and virtual backbones in wireless networks. We used stochastic geometry and the theory of point processes to derive bounds on the behavior of our proposed solutions; stochastic geometry is particularly useful in wireless networks since planar or spatial components are present. We have integrated into such models a better representation of interference. The goal is to study, for example, the impact of interference on neighbor-discovery protocols and on the capacity of wireless networks, and to provide a powerful theoretical framework for analyzing protocols.

Performance-evaluation tools for wireless networks. There are mainly three ways to evaluate the performance of a wireless network or a new protocol: simulation, experiments and theoretical tools.
All these strategies are used, according to the level of detail needed and/or the complexity of the problem. Theoretical tools like stochastic processes and Markov chains remain efficient for performance evaluation. We have also used simulators, and more specifically we promote the WSNet simulator, developed internally. WSNet is dedicated to wireless multi-hop networks and integrates very accurate radio models (propagation, modulation).

Activity scheduling and QoS. Wireless networks can be used for critical applications like emergency services and environmental monitoring, where they have a huge potential added value. Most of the time, energy constraints are a very difficult challenge, since wireless nodes are battery-operated. Although duty cycling can save power, it also disrupts network performance. Duty cycling introduces sleep latency: the data sampled by a source node during its sleep period have to be queued until the active period, and an intermediate node has to wait until the next hop wakes up to forward a packet. Duty cycling can also introduce link disconnections, which may cause network partitions: one end node x of a link cannot communicate with the other end y along this link if no slot is scheduled in which x is allowed to transmit and y is allowed to receive, i.e., no slot in which both nodes are active. Our research aims to save energy for general communication by scheduling the nodes' duty cycles while preserving communication connectivity and bounding packet latency. Our goal is thus to ensure robustness, in terms of reliability and fault tolerance, for the communications of spontaneous networks, but also to meet temporal constraints in such networks.

Data aggregation and processing. Data aggregation has been put forward as an essential paradigm for wireless routing in sensor networks.
The idea is to combine the data coming from different sources en route, eliminating redundancy, minimizing the number of transmissions and thus saving energy. This paradigm shifts the focus from traditional address-centric approaches to networking (finding short routes between pairs of addressable end-nodes) to a more data-centric approach (finding routes from multiple sources to a single destination that allow in-network consolidation of redundant data). We have studied approaches designed by the P2P community, taking into account the broadcast nature of the medium.

Sensor network testbed. As prime leader we proposed SensLAB, a very ambitious project with several partners (LIP6 - Université Pierre et Marie Curie, INRIA ASAP, INRIA POPS, LSIIT - University of Strasbourg, and Thales). The purpose of the SensLAB project is to deploy a very-large-scale open wireless sensor network platform. SensLAB's main goal is to offer an accurate and efficient scientific tool to help in the design, development, tuning and experimentation of real large-scale sensor-network applications. Ambient and sensor networks have recently emerged as a premier research topic. Sensor networks are a promising approach and a multi-disciplinary venture that combines computer networks, signal processing, software engineering, embedded systems and statistics on the technology side. On the application side, they cover a large spectrum: safety and security of buildings or spaces, measuring traffic flows, environmental engineering and ecology, to cite a few. Sensor networks will also play an essential role in the upcoming age

of pervasive computing, as our personal mobile devices will interact with sensor networks dispatched in the environment. The SensLAB platform is distributed among four sites and is composed of 1,024 nodes. Each site hosts 256 sensor nodes with specific characteristics, in order to offer a wide spectrum of possibilities and heterogeneity. The four testbeds are nevertheless part of a common global testbed, as several nodes have global connectivity, so that it is possible to run a given application on all 1,024 sensors at the same time. SensLAB is a unique scientific tool for research on wireless sensor networks. This research direction is mainly supported by the FP6 IP MOSAR, ANR SensLAB and ADT SensTOOL projects; the SensLAB project provides the platform's resources.

8.4 Application domains and social or economic impact

RESO applies its research to the domains of high-end applications and distributed computing, in particular grid and cloud communications. Its results have an impact on science and industry through the grid-computing approach, and on the Internet infrastructure economy through new software architectures for dynamic and programmable networks. Grid computing aims at bringing together large collections of geographically distributed resources (e.g., computing, storage, visualization) to build, on demand, very-high-performance computing environments for compute- and data-intensive applications. These large-scale cyber-infrastructures attract increasing attention from a broad range of actors, from research communities to large companies with substantial numerical-simulation needs. Whereas grids have been widely used in the scientific community, they are now moving into the commercial environment through the concept of cloud computing.
Cloud computing fits a recentralization scenario which offers a suitable business and security model for large-scale distributed resource sharing. Telecommunication operators (telcos), network equipment designers and computer providers owning the pervasive communication and computing infrastructure benefit from our fundamental and practical research. Telcos are now moving toward infrastructure sharing and cloud computing. Different scenarios for telcos are envisioned and investigated: telcos (1) deploy grids and clouds internally, e.g. for rapid dynamic service provisioning to new customers; (2) link different grid sites and data centers via VPNs; (3) act as infrastructure service brokers. We are exploring these different scenarios with our industrial partners. Research conducted in recent years reveals that grid technology raises new challenges in terms of network optimization, protocol architecture, and transport paradigms. We believe that a broad deployment of optical, grid and cloud technologies can modify and influence the design of the Future Internet, may have a strong impact on the economy, and will free creativity around innovative services and applications. RESO pursues the construction of an international community around Grid networks through the European EC-GIN project as well as with the OGF networking community. Through the ANR IGTMD project, RESO collaborates with the LCG and with physicists. A dedicated link deployed between IN2P3 (one of the largest computing centers in France) and the FermiLab laboratory in Chicago enables us to perform transport protocol experiments as well as traffic capture. RESO brings its expertise in Grids and Grid Networking to the CARRIOCAS project of the pôle Île-de-France System@tic. This collaboration enables us to explore the limits and the advantages of our previous results in the context of a 40 Gb/s, dynamically-provisionable network.
Through the ANR HIPCAL project, RESO collaborates on large-scale distributed applications in biology and medical imaging. RESO works on fundamental aspects of the Future Internet with Bell Labs Alcatel-Lucent. The knowledge and software developed by the RESO team, meeting the emergence of the virtualized infrastructure paradigm, naturally led to the idea of a startup project. RESO is now transferring the most promising tools elaborated during this last period to its nascent LinKTiss startup.

8.5 Visibility and Attractivity

Prizes and Awards

People awarded:
OSEO Emergence, P. Vicat-Blanc Primet, National Prize for a startup project, 2009 (33K€)

Part 8. Reso

Marconi Young Scholar, Sebastien Soudan, international prize for the 4 best PhD students in Telecoms, 2009 (4K€)

Awarded communications:
SIGMETRICS 2009, P. Loiseau, R. Guillier, O. Goga, M. Imbert, P. Gonçalves, Y. Kodama, P. Vicat-Blanc Primet, Automated Network Traffic measurements and analysis in Grid5000, Best demonstration award
ICNS 2009, F. Anhalt, P. Vicat-Blanc Primet, Network virtualisation, Best paper award
JDIR 2009, C. Sarr, S. Khalfallah and I. Guérin Lassous, Gestion dynamique de la bande passante dans les réseaux ad hoc multi-sauts, Best paper award
Anne-Cécile Orgerie and Laurent Lefèvre, Towards a Green Grid5000, Best presentation award of the Grid5000 school, April
JDIR 2007, C. Sarr, C. Chaudet, G. Chelius and I. Guérin Lassous, Amélioration de la précision pour l'estimation de la bande passante résiduelle dans les réseaux ad hoc basés sur IEEE, selected among the three best papers
CFIP 2006, T. Razafindralambo, I. Guérin Lassous, L. Iannone and S. Fdida, Aggrégation Dynamique de Paquets pour résoudre l'Anomalie de Performance des Réseaux Sans fil IEEE, second best paper

Chairing and organization of conferences:
ACM PE-WASUN 2009, 2008, 2007, 2006, I. Guérin Lassous, PC co-chair
ACM MobiHoc 2009, I. Guérin Lassous, Poster chair
IEEE BroadNet 2009, P. Vicat-Blanc Primet, PC co-chair
CSA 2009, Laurent Lefèvre, Track co-chair
GADA 2009, Laurent Lefèvre, Program Committee co-chair
E2GC2, Laurent Lefèvre, Workshop co-chair and organizer
Parco 2009, Laurent Lefèvre, Local co-organizer
IEEE HPCC-09, Laurent Lefèvre, General co-chair
HPPAC 2009, Laurent Lefèvre, Workshop co-chair
Pfldnet 2008, P. Vicat-Blanc Primet, PC co-chair
ACM GridNets 2008, 2006, P. Vicat-Blanc Primet, PC co-chair
EuroPar 2008, 2007, Eric Fleury, Topic co-chair
GridNets 2007, Paulo Gonçalves, Local chair
PDCAT 08, Laurent Lefèvre, Tutorial and workshop chair
IEEE CCGrid 2008, Laurent Lefèvre, General chair
ResCom 2007, summer school, I. Guérin Lassous, Scientific Committee
EuroPar 2007, P. Vicat-Blanc Primet, Topic co-chair
IEEE T2PWSN, Eric Fleury, General chair
MSN 2007, Eric Fleury, PC vice-chair
GridNets 2007, Laurent Lefèvre, Workshop and sponsor chair
DSC 2007, Laurent Lefèvre, Workshop co-chair
WONS 2006, I. Guérin Lassous, General co-chair
IEEE/ACM HPDC 2006, P. Vicat-Blanc Primet, Workshop chair
HPCC 06, Laurent Lefèvre, PC co-chair
ICPS 2006, Laurent Lefèvre, PC co-chair
GAN 06, Laurent Lefèvre, Workshop co-chair
DSM 2006, Laurent Lefèvre, Workshop co-chair

Contribution to the Scientific Community

Administration of Professional Societies
Open Grid Forum, P. Vicat-Blanc Primet, Data Transport Research Group co-chair

Editorial Boards
Computer Communications, Elsevier, I. Guérin Lassous
Ad Hoc Networks, Elsevier, I. Guérin Lassous
Discrete Mathematics and Theoretical Computer Science, I. Guérin Lassous
Performance Evaluation, Elsevier, I. Guérin Lassous, Guest Editor of special issue on "Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks"
Future Generation Computer Systems (FGCS), Elsevier, Pascale Vicat-Blanc, Guest Editor of special issue on "High Performance Protocols and Grid services" [695]
Computer Networks (COMNET), Elsevier, Pascale Vicat-Blanc, Guest Editor of special issue on "Hot Topics in Protocols for very long distance networks" [890]
Future Generation Computer Systems (FGCS), Elsevier, Pascale Vicat-Blanc, Guest Editor of special issue on "High Speed Networks for Grid Applications"
Future Generation Computer Systems (FGCS), Elsevier, C. Pham, Guest Editor of special issue on "Grid Infrastructures: Practice and Perspectives" [887]
Annals of Telecoms, C. Pham, co-editor with G. Leduc of a special issue on "Transport protocols for Next Generation Networks"
Annals of Telecoms, P. Vicat-Blanc Primet, Guest Editor of a special issue on "Grid, Cloud and Utility computing"
Journal of Systems and Software, Laurent Lefèvre, Guest Editor of a special issue on Pervasive Services [886]
Lecture Notes in Computer Science, P. Vicat-Blanc Primet, Guest Editor
"Scaling, Fractals and Wavelets", John Wiley Ed., P. Gonçalves, co-editor

Organisation of Conferences and Workshops
Parco 2009, Laurent Lefèvre
IEEE/ACM CCGrid 2008, Laurent Lefèvre
ACM GridNets 2008, P. Vicat-Blanc Primet
Workshop series Distributed Shared Memory on Clusters (DSM) within the IEEE International Symposium on Cluster Computing and the Grid (CCGrid), Laurent Lefèvre
Workshop series Grid and Advanced Networks in CCGrid, Laurent Lefèvre and Pascale Vicat-Blanc
MetroGrid workshop within ACM GridNets 2007, P. Gonçalves

Program committee members
ITC, P. Vicat-Blanc Primet, 2009
IEEE/ACM CCGrid, P. Vicat-Blanc Primet, 2009, 2008
ITCSS, P. Vicat-Blanc Primet, 2009
CFIP, P. Vicat-Blanc Primet, 2009, 2008, 2007
ISWCS, I. Guérin Lassous, 2009
MedHocNet, I. Guérin Lassous, 2009, 2008, 2006

HotMesh, I. Guérin Lassous, 2009
VTC-Spring, I. Guérin Lassous, 2009
PerSens, I. Guérin Lassous, 2009, 2008, 2007, 2006
ICDCN, I. Guérin Lassous, 2009, 2008
IEEE VTC, Eric Fleury, 2009
IEEE PERCOM, Eric Fleury, 2009
IEEE ICC, Eric Fleury, 2009
COMSNET, Eric Fleury, 2009
IEEE/IFIP EUC, Eric Fleury, 2009
PDCAT, Laurent Lefèvre, 2009
MGC, Laurent Lefèvre, 2009, 2008
IEEE SuperComputing, Laurent Lefèvre, 2009, 2006
NSS, Laurent Lefèvre, 2009
CFSE, Laurent Lefèvre, 2009
Renpar, Laurent Lefèvre, 2009
ICCN, Laurent Lefèvre, 2009
IEEE/ACM HPDC, Laurent Lefèvre, 2009, 2008, 2007
ICCS, Laurent Lefèvre, 2009, 2008
HotP2P, Laurent Lefèvre, 2009, 2008, 2007, 2006
IEEE/ACM CCGrid, Laurent Lefèvre, 2009
DAGRES, Laurent Lefèvre, 2009
AusGrid, Laurent Lefèvre, 2009, 2008
EuroNF Netcoop, P. Vicat-Blanc Primet, 2008, 2007
ACM GridNets, P. Vicat-Blanc Primet, 2008, 2007, 2006
PFLDNET, P. Vicat-Blanc Primet, 2008
OPODIS, I. Guérin Lassous, 2008
VTC-Fall, I. Guérin Lassous, 2008
Algotel, I. Guérin Lassous, 2008, 2007, 2006
IFIP Networking, I. Guérin Lassous, 2008, 2007
IFIP NETWORKING, Eric Fleury, 2008
IEEE PIMRC, Eric Fleury, 2008
IEEE DCOSS, Eric Fleury, 2008
COMSWARE, Eric Fleury, 2008, 2007
ADHOC NOW, Eric Fleury, 2008
RenPar, O. Glück, 2008
IEEE/ACM CCGrid, O. Glück, 2008

ACM GridNets, O. Glück, 2008
IEEE/ACM HPDC, O. Glück, 2008
AHEMA, Laurent Lefèvre, 2008
DFMA, Laurent Lefèvre, 2008, 2007, 2006
Euro PVMMPI, Laurent Lefèvre, 2008, 2007, 2006
PGrid, Laurent Lefèvre, 2008
ICA3PP, Laurent Lefèvre, 2008
HPPAC, Laurent Lefèvre, 2008, 2007
Europar, P. Vicat-Blanc Primet, 2007
HPCC, P. Vicat-Blanc Primet, 2007
SSS, I. Guérin Lassous, 2007
ACM MSWiM, I. Guérin Lassous, 2007, 2006
ACM MobiHoc, I. Guérin Lassous, 2007, 2006
Localgos, I. Guérin Lassous, 2007
ACM SANET, Eric Fleury, 2007
ACM DIALM, Eric Fleury, 2007
IEEE Statistical Signal Processing (SSP), Paulo Gonçalves, 2007
GADA, Laurent Lefèvre, 2007
IEEE/ACM Grid, Laurent Lefèvre, 2007
Chinacom, Laurent Lefèvre, 2007
CODS, Laurent Lefèvre, 2007
ISPDC, Laurent Lefèvre, 2007
INFOSCALE, Laurent Lefèvre, 2007
VECPAR, P. Vicat-Blanc Primet, 2006
IEEE/ACM Grid, P. Vicat-Blanc Primet, 2006
HPDC, P. Vicat-Blanc Primet, 2006
Communication Network Journal, P. Vicat-Blanc Primet, 2006
Parallel letter, P. Vicat-Blanc Primet, 2006
JPDC, Calculateurs Parallèles, P. Vicat-Blanc Primet, 2006
Techniques et Sciences Informatiques, P. Vicat-Blanc Primet, 2006
ICLAN, I. Guérin Lassous, 2006
LOCAN, I. Guérin Lassous
1st International Conference on Ad-Hoc Networking, I. Guérin Lassous, 2006
FAWN, I. Guérin Lassous, 2006
e-Science, Laurent Lefèvre, 2006
IEEE/ACM Grid, Laurent Lefèvre, 2006
ICPP, Laurent Lefèvre, 2006
CSNDSP, Laurent Lefèvre, 2006

Participation in steering committees
IEEE/ACM CCGrid conference, Laurent Lefèvre, since 2004
ACM GridNets, Pascale Vicat-Blanc Primet, since 2006
PFLDNET workshop series, P. Vicat-Blanc Primet, since 2005
ICPS, Laurent Lefèvre, since 2006
IWAN, Laurent Lefèvre, since 2005

International expertise
European Committee, FP6, program 2.6.2, I. Guérin Lassous
PhD examining boards, P. Vicat-Blanc Primet: Guillaume Jourjon (University of Sydney, reviewer, 2007), F. Dijkstra (University of Amsterdam, reviewer, 2009)
PhD examining boards, Laurent Lefèvre: Lakshmi Priya (Anna University, Chennai, India, reviewer, 2009), Frank Chiang (University of Technology, Sydney, Australia, reviewer, 2008), Sylvain Martin (University of Liège, Belgium, examiner, 2007), Bruno Volckaert (University of Gent, Belgium, reviewer, 2006)
PhD examining boards, I. Guérin Lassous: Sandrine Calomme (University of Liège, Belgium, reviewer, 2009), Lars Landmark (Norwegian University of Science and Technology, Norway, first opponent, 2009)

National expertise
ANR, Jeune chercheur, Eric Fleury, January 2009
AERES, LSIIT evaluation, I. Guérin Lassous, January
CNU, I. Guérin Lassous, since September
ANR, VERSO, Eric Fleury, March 2009, March 2008, March 2007
ANR, Telecom, member of the evaluation commission, P. Vicat-Blanc Primet, 2006
ANR, Technologies Logicielles, P. Gonçalves, 2006
CNRS, Networks and Telecoms, Pascale Vicat-Blanc, since 2005
CNRS, Networks and Telecoms, Eric Fleury, since 2005
INRIA CR2: chair of the board of examiners for recruitment of Chargés de Recherche CR2 of the Rhône-Alpes CRI, P. Vicat-Blanc Primet, 2006, 2007, 2008
HDR examining boards, Isabelle Guérin Lassous: Fethi Filali (University Nice Sophia-Antipolis, reviewer, 2008), Fabrice Peyrard (University of Toulouse, reviewer, 2008), Jean-Marie Gorce (INSA Lyon, examiner, 2007).
PhD examining boards, Isabelle Guérin Lassous, 2006: Luigi Iannone (Paris 6), Dang Quan Nguyen (Paris 6, reviewer), Fanilo Harivelo (La Réunion, reviewer), Nathalie Mitton (INSA Lyon, supervisor)
PhD examining boards, Isabelle Guérin Lassous, 2007: Vincent Untz (INPG, reviewer), Yacine Khaled (UTC, reviewer), Sylvain Allio (INSA de Rennes, reviewer), Farid Benbadis (University Paris 6, chair), Tahiry Razafindralambo (INSA Lyon, supervisor), Cheik Sarr (INSA Lyon, supervisor)
PhD examining boards, Isabelle Guérin Lassous, 2008: Golnaz Karbaschi (Paris 6, reviewer), David Samper (INPG, reviewer), Thomas Watteyne (INSA Lyon, examiner), Farid Ben Abdesslem (Paris 6, reviewer), Yann Busnel (Rennes 1, examiner), Carine Toham (INPG, reviewer), Yoan Pigné (Havre, chair)
PhD examining boards, Isabelle Guérin Lassous, 2009: Despoina Triantafyllidou (University Paris-Sud, reviewer), Fahrid Munir (EURECOM, examiner)
PhD examining boards, P. Vicat-Blanc Primet, 2006: Paul Starzetz (INPG, reviewer), Mathieu Gineste (ENSICA, reviewer)

PhD examining boards, P. Vicat-Blanc Primet, 2008: Sakuna Charoenpanyasak (LAAS, Toulouse, reviewer)
PhD examining boards, P. Vicat-Blanc Primet, 2009: Nicolas Van Wambeke (LAAS, Toulouse, reviewer)
PhD examining boards, Laurent Lefèvre, 2009: Anthony Mouraud (University Antilles Guyane, examiner)
PhD examining boards, Laurent Lefèvre, 2008: Benjamin Quetier (University Paris XI, examiner)
PhD examining boards, Olivier Glück, 2008: Elisabeth Brunet (University Bordeaux I, examiner)

Public Dissemination
Interstice, Podcast, "Very High Speed Networks", P. Vicat-Blanc Primet, May 2009
Techniques de l'Ingénieur, Quelles performances pour les réseaux WiFi?, I. Guérin Lassous, TE 7381, May
INEDIT, the Newsletter of INRIA, Towards green computing platforms (vers des plateformes de calcul vertes), Laurent Lefèvre, May 2009
INRIA Booth at the Supercomputing Conference (SC), Laurent Lefèvre, 2007

Contracts and grants

External contracts and grants (Industry, European, National)

Grid Explorer (Data Grid explorer, ACI Masse de Données) Scientific lead in LIP is P. Vicat-Blanc Primet. The aim of this project was to create a large-scale grid and network emulator. RESO designed and developed a high-performance network emulator, eWAN [845], and evaluated high-speed transport protocols.

IGTMD (Interopérabilité des Grilles, Transfert Massif de Données, ANR Blanche, 02/02/06-02/02/09, €) Scientific lead in LIP is P. Vicat-Blanc Primet. The aim of this project was to design, develop and validate mechanisms that concretely make the interoperability of heterogeneous grids a reality. RESO was responsible for all research activities concerning high-speed transport protocols and services: analysing and experimenting with new communication and replication models, and with alternative transport protocols emerging within the international scientific community.
We also exploited the LCG traffic circulating on the IGTMD link for packet capture and grid flow analysis.

DSLAB (ADSL Laboratory, ANR Jeune Chercheur, 08/12/05-08/12/08, €) Scientific lead in LIP is P. Vicat-Blanc Primet. The DSLAB research project aimed at building and using an experimental platform for distributed systems running over DSL Internet. In this project, RESO was responsible for the definition, design and development of flow control algorithms and mechanisms enabling distributed computing applications to fully exploit the DSL links.

HIPCAL (Virtual Cluster for Bioinformatics applications, ANR Calcul Intensif et Simulation, 01/01/07-31/12/09, €) General leader of the project is P. Vicat-Blanc Primet. HIPCAL studies a new paradigm (grid substrate) based on the confined virtual cluster concept for resource control in grids. In particular, we propose to study and implement new approaches for bandwidth sharing and end-to-end network quality of service guarantees. We aim at demonstrating the functional transparency, enhanced predictability and efficiency offered to applications by the HIPerNET approach.

DMASC (Propriétés d'Invariance d'Échelles de Signaux Cardiaques, Systèmes Dynamiques et Analyse Multifractale, ANR SYSCOMM) Scientific lead in LIP is P. Gonçalves. Project partners are INRIA (Rocquencourt and Grenoble), Université Paris XII and Université Paris-Sud. Scientific lead in SISYPHE (INRIA Rocquencourt) is Julien Barral. The main objective of this project is to define new mathematical tools for characterizing scale invariance properties in heart beat rate signals. As scale invariance is usually reminiscent of geometric (multi-)fractal properties directly connected to specific structural features of the underlying system (the cardio-vascular system in our case), it can be a reliable indicator of some pathologies.
In this project, we also consider the development of efficient estimators to support the applicability of the proposed new mathematical objects.
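To give a flavour of the kind of estimator involved, the sketch below is our own illustrative example, not DMASC code: it estimates a Hurst-like scaling exponent by the classical aggregated-variance method, regressing the log-variance of block sums against the log of the block size (all function and variable names are invented).

```python
import math
import random
from statistics import pvariance

def scaling_exponent(x, scales=(1, 2, 4, 8, 16, 32)):
    """Slope of log2(variance of block sums) vs log2(block size), halved:
    a Hurst-like exponent H (H = 0.5 for uncorrelated increments)."""
    pts = []
    for m in scales:
        n = len(x) // m
        sums = [sum(x[i * m:(i + 1) * m]) for i in range(n)]
        pts.append((math.log2(m), math.log2(pvariance(sums))))
    # least-squares slope through the (log scale, log variance) points
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(v for _, v in pts) / len(pts)
    slope = (sum((p - mx) * (v - my) for p, v in pts)
             / sum((p - mx) ** 2 for p, _ in pts))
    return slope / 2.0

random.seed(1)
white = [random.gauss(0, 1) for _ in range(4096)]
print(scaling_exponent(white))  # close to 0.5 for white noise
```

For a self-similar signal the same regression would yield H different from 0.5; long-range dependence in heart-rate increments shows up as H above 0.5.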

Grid5000 (ACI Grid). Project partners are INRIA, CNRS and RENATER. Scientific lead in LIP has been P. Vicat-Blanc Primet (also member of the national committee); this lead is now transferred to L. Lefèvre. This project aims at providing computer science researchers with a national large-scale experimental facility of 5000 cores and 10 Gb/s links. RESO coordinates Grid 5000's networking aspects with Renater and Lyon's metropolitan network, but also with SINET/Naregi, Geant/Dante and UvA/Surfnet DAS3 for the international connections [758, 682]. We also participate in the ALADDIN ADT. Aurélien Ceyden is a member of the national technical committee of Grid 5000.

CARRIOCAS (Calcul réparti sur réseau à capacité surdimensionnée, Pôle de Compétitivité SYSTEM@TIC) Scientific lead in LIP is P. Vicat-Blanc Primet. The CARRIOCAS project studies and implements an ultra-high bit rate (up to 40 Gbps per wavelength) network interconnecting supercomputers, storage servers and high-resolution visualization devices, to support data- and computing-intensive applications in industrial and scientific domains. In this project, RESO is in charge of the design and prototyping of the "Resource Scheduling Reconfiguration and Virtualisation" (SRV) component.

Orange-FT (France-Telecom - OrangeLabs (FR), Industrial collaboration, 15/09/05-27/06/08, €) Scientific lead in LIP is Laurent Lefèvre. This contract focuses on network load balancing with layer-7 switching for high-performance and highly available Linux-based platforms. A CIFRE grant supported this collaboration (Ayari Narjess) [748]. RESO is setting up a new collaboration with Orange-Labs in the context of XaaS and Clouds for the next period.

IXP (INTEL (USA), Industrial collaboration, in 2006) Scientific lead in LIP is P. Vicat-Blanc Primet.
An INTEL grant was obtained for studying the potential of network processor technology in building high-performance (several gigabit links) network emulators and dynamically programmable routers. The goal is to show that network processors improve performance and enhance the capacities of software network emulators and programmable routers based on Linux platforms. Network interface cards with network processors have been integrated within the Grid 5000 testbed.

ALU (ALCATEL-LUCENT, Industrial collaboration, 01/09/06-31/10/07, €) Scientific lead in LIP is P. Vicat-Blanc Primet. We started our long-term collaboration with ALCATEL-LUCENT in 2006 with a contract on the service & resource discovery problem in the context of next generation networks.

INRIA-Bell Labs (ADR Semantic Networking-Bell Labs, joint laboratory between INRIA and Bell Labs France - ALCATEL-LUCENT, 28/01/08-31/10/12, €) Scientific lead in LIP is P. Vicat-Blanc Primet, who is also the scientific leader of the "Semantic Networking" research direction of this laboratory. RESO participated in the creation of the INRIA-Bell Labs joint laboratory.

ANAGRAM (ANAGRAM INC - INRIA evaluation agreement, "sans contrepartie financière" (without financial compensation), 11/07/08)

EC-GIN (Euro-China Grid InterNetworking, FP6 STREP, €). The partners are: University of Innsbruck [UIBK] (Austria), University of Zurich [UniZH] (Switzerland), Lancaster University [ULANC] (United Kingdom), Justinmind [JIM] (Spain), EXIS IT Ltd - Electronic Systems and Information Technology & Telecommunications Solutions [EXIS IT] (Greece), University of Surrey [UniS] (United Kingdom), Beijing University of Posts and Telecommunications [BUPT] (China), Institute of Software, Chinese Academy of Sciences [ISCAS] (China), China Telecommunication Technology Labs [CTTL] (China), and Beijing P&T Consulting & Design Institute Co. Ltd, China Mobile [BCDI] (China). Scientific lead in LIP is P. Vicat-Blanc Primet.
Based on a number of properties that make Grids unique from the network perspective, the EC-GIN project is developing tailored network technology in dedicated support of Grid applications. In EC-GIN, RESO studies Grid network resource reservation, message passing issues in Grids, a Grid network API, and a system for packet capture and fine-grain flow and traffic analysis. INRIA has also worked on packet sampling and LRD characterization. We have designed and developed the BDTS (Bulk Data Transfer Scheduling) service, enabling Grid users to signal their future bulk transfers and reserve end-to-end resources to guarantee the transfer time.

OGF-Europe (OGF EUROPE, EU FP7 SA project, 01/02/08-31/01/10, €) Scientific lead in LIP is L. Lefèvre. RESO participates in OGF-Europe to reinforce the French participation in OGF standardization activities. We mainly concentrate our contribution on Telco interaction and energy efficiency in the Grid context.

AUTONOMIC INTERNET (Autonomic Internet, 01/01/08-31/12/09, €) Scientific lead in LIP is L. Lefèvre. The Autonomic Internet (AutoI - FP7.ICT.2007.Call) project suggests a transition from a service-agnostic Internet to a service-aware network, managing resources by applying autonomic principles. In order to achieve the objective of service-aware resources and to overcome the ossification of the current Internet, AutoI will develop a self-managing virtual resource overlay that can span across heterogeneous networks and that can support service mobility, security, quality of service and reliability. In this overlay network, multiple virtual networks co-exist on

top of a shared substrate with uniform control. The overlay will be self-managed based on the system's business goals, which drive the service specifications, the subsequent changes in these goals (service context) and changes in the resource environment (resource context). This will be realised by the successful cooperation of the following activities: autonomic control principles, resource virtualisation, enhanced control algorithms, information modelling, policy-based management and programmability. RESO is mainly involved in the programmability of the AutoI overlay, proposing an Autonomic Network Programming Interface which will support large-scale service deployment. Laurent Lefèvre is leading workpackage 5 on Service Deployment. Official webpage:

COST (COST Action IC0804 on Energy efficiency in large scale distributed systems) Scientific lead in LIP is L. Lefèvre. The main objective of the Action is to foster original research initiatives addressing energy awareness and saving, and to increase the overall impact of European research in the field of energy efficiency in distributed systems. The goal of the Action is to give coherence to the European research agenda in the field, by promoting coordination and encouraging discussions among the individual research groups, and by sharing operational know-how (lessons learned, problems found during practical energy measurements and estimates, ideas for real-world exploitation of energy-aware techniques, etc.). The Action objectives can be summarized from scientific and societal standpoints: sharing and merging existing practices will lead the Action to propose and disseminate innovative approaches, techniques and algorithms for saving energy while enforcing given Quality of Service (QoS) requirements. Laurent Lefèvre is Management Committee member and French representative in this COST action.
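As a toy illustration of the energy/QoS trade-off such techniques target (our own invented example, not a result of the Action: the function name, headroom parameter and node counts are all assumptions), one can keep only as many nodes powered on as the current load requires, with a QoS safety margin, and let the rest sleep:

```python
import math

def nodes_to_power_on(load, node_capacity, headroom=0.2, total_nodes=64):
    """Smallest number of nodes whose aggregate capacity covers the
    load inflated by a QoS headroom; the remaining nodes can sleep."""
    needed = math.ceil(load * (1 + headroom) / node_capacity)
    return min(max(needed, 1), total_nodes)  # keep at least one node up

# With a load of 900 units and 100 units per node, 11 of 64 nodes suffice.
print(nodes_to_power_on(load=900, node_capacity=100))
```

Real frameworks of the kind explored in GREEN-NET additionally account for switch-on latency and the energy cost of state transitions, which this sketch ignores.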
AEOLUS (Algorithmic Principles for Building Efficient Overlay Computers, FP6) Scientific lead in LIP is I. Guérin Lassous. AEOLUS is an IP project that investigates the principles and develops the algorithmic methods for building an overlay computer. The goal is to enable efficient and transparent access to the resources of an Internet-based global computer. In particular, the main objectives of this project are to identify and study the important fundamental problems and investigate the corresponding algorithmic principles related to overlay computers running on global computers, and to provide improved methods for communication and computing among wireless, and possibly mobile, nodes, so that they can transparently become part of larger Internet-based overlay computers.

ARC MALISSE The purpose of the MALISSE INRIA ARC (Malicious sensors) is to study sensor networks and their applicability to a wide range of applications in which they must reliably support a number of key functionalities, even in the presence of malicious sensors. Such algorithms need to be adapted to sensor networks and, more specifically, should take into account the sensors' reduced resources. Identifying the potential attacks in such a network is an important first step of this research. Expected outcomes should be both theoretical (algorithm design and proofs) and practical (simulation and implementation) in order to fully validate the proposed solutions.

CNRS RECAP The RECAP project is a CNRS national platform composed of the LIP, LAAS, LIP6 and LIFL laboratories. It aims at supporting research activities in the area of self-adaptive and self-organized networks.
It addresses many topics such as topology control (addressing, location, etc.), data communication (broadcasting, routing, gathering, etc.), architecture (hardware, operating system, network communication stacks, etc.) and applications (service lookup, distributed databases, etc.).

COST 295 The main objective of COST Action 295 (European Cooperation in the Field of Scientific and Technical Research), named DYNAMO for Dynamic Communication Networks Foundations and Algorithms, is to structure the community of researchers working on fundamental aspects of Dynamic Communication Networks.

FP6, IP, LSH, MOSAR IP project MOSAR (Mastering hospital Antimicrobial Resistance and its spread into the community). Infections caused by antimicrobial-resistant bacteria (AMRB) account for an increasing proportion of healthcare-associated infections, particularly in high-risk units such as intensive care units and surgery; patients discharged to rehabilitation units often remain carriers of AMRB, contributing to their dissemination into longer-term care areas and within the community. The overall objective of MOSAR is to gain breakthrough knowledge of the dynamics of transmission of AMRB, and to address highly controversial issues by testing strategies to combat the emergence and spread of antimicrobial resistance, focusing on the major and emerging multi-drug antimicrobial-resistant microorganisms in hospitals, now spreading into the community. Microbial genomics and human response to carriage of AMRB will be integrated with health sciences research, including interventional controlled studies in diverse hospital settings, mathematical modelling of resistance dynamics, and health economics. Results from MOSAR will inform healthcare workers and decision-makers on strategies for anticipating and mastering antimicrobial resistance.
To achieve these objectives, MOSAR brings together internationally recognized experts in basic laboratory sciences, hospital epidemiology, clinical medicine, behavioural sciences, quantitative analysis and modelling, and health economics. MOSAR brings together 11 institutions recognized for their leadership in these areas,

from 10 EU Member or Associated States, as well as 7 SMEs, to develop and validate high-throughput automated molecular tools for detection of AMRB. A high level of coordination will be obtained through a professionally IT-supported and rigorous management structure, to achieve optimal synergy of the components of MOSAR. We aim to develop and validate rapid testing for AMRB and initiate the clinical trials during the first 18 months of the project, and then to build on the infrastructure to execute the joint research programme.

CAPES COFECUB SAMBA, Brazil The SAMBA project is funded by the CAPES-COFECUB program for 4 years. Partners of the project are the LIP6, the LAAS, and the POPS and D-NET INRIA projects on the French side, and the UFRJ, UERJ and LNCC on the Brazilian side. The goal of this project is to study the technological developments required by the development of new applications and services in the context of multihop wireless networks. This concerns routing, positioning, addressing, etc., as well as network deployment and management.

WIDE-JST-CNRS ASIA WIDE and CNRS have launched a collaboration on the mobility and measurement topics. This project is supported by the French and Japanese governments and will continue for two years. We exchange researchers, research information and technical results between the two countries.

ANR SENSLAB The purpose of the SensLAB project is to deploy a very large scale open wireless sensor network platform. SensLAB's main and most important goal is to offer an accurate and efficient scientific tool to help in the design, development, tuning, and experimentation of real large-scale sensor network applications. Ambient and sensor networks have recently emerged as a premier research topic. Sensor networks are a promising approach and a multi-disciplinary venture that combines computer networks, signal processing, software engineering, embedded systems, and statistics on the technology side.
On the scientific applications side, it covers a large spectrum: safety and security of buildings or spaces, measuring traffic flows, environmental engineering, and ecology, to cite a few. Sensor networks will also play an essential role in the upcoming age of pervasive computing, as our personal mobile devices will interact with sensor networks dispatched in the environment. SensLAB will be a unique scientific tool for research on wireless sensor networks.

INRIA ADT SensTOOLS
The purpose of the SensTOOLS project is to design and provide utilities and hardware and software toolboxes for sensor network embedded applications. The SensTOOLS project designs daughter sensing cards (audio board, motion capture), drivers, communication libraries, and distributed components.

Research Networks (European, National, Regional, Local)
Euro-NF (European Network of Excellence on Network of the Future, FP7, 01/01/08-31/12/10, 5980 €)

Internal Funding
ARC Green-Net (1/1/ /12/2009, € per year)
Scientific lead in LIP is L. Lefèvre. GREEN-NET is a Cooperative Research Action (ARC: Action de Recherche Coopérative) supported by INRIA. This project explores the design of energy-aware software frameworks dedicated to large scale distributed systems. These frameworks will collect energy usage information and provide it to resource managers and schedulers. Large scale experimental validations on the Grid5000 and DSLLAB platforms will be proposed. Laurent Lefèvre is leading the INRIA ARC GREEN-NET on "Power aware software frameworks for high performance data transport and computing in large scale distributed systems", which involves 4 partners: INRIA RESO, INRIA MESCAL (Grenoble), IRIT (Toulouse), and Virginia Tech (USA). Official ARC GREEN-NET webpage:

INRIA Internship (Bulk Data Transfer Internship, 01/01/06-31/12/06, 3029 €)
Scientific lead in LIP is P. Vicat-Blanc Primet.

INRIA Metroflux (Gtrc-Net10, 19/11/08-18/03/09, €) and CNRS GridBox (GridBox, 23/10/06-23/10/07, €)
Scientific leads in LIP are P. Vicat-Blanc Primet and Paulo Gonçalves. The Metroflux system aims at providing researchers and network operators with a very flexible and accurate packet-level traffic analysis toolkit configured for 1 Gbps and 10 Gbps links. These projects helped in the setup of the Metroflux prototype, based on the GtrcNet FPGA-based device technology, on large data storage resources for the collected data, and on specific statistical analysis tools.
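The packet-to-flow aggregation at the core of such a traffic analysis toolkit can be sketched as follows. This is a toy stand-in, not the Metroflux pipeline itself: the record layout, field names, and the mean-rate metric are illustrative assumptions.

```python
from collections import defaultdict

# Toy packet records: (timestamp_s, src, dst, sport, dport, proto, size_bytes).
packets = [
    (0.000, "10.0.0.1", "10.0.0.2", 40000, 80, "tcp", 1500),
    (0.010, "10.0.0.1", "10.0.0.2", 40000, 80, "tcp", 1500),
    (0.015, "10.0.0.3", "10.0.0.2", 40001, 80, "tcp", 600),
    (0.020, "10.0.0.1", "10.0.0.2", 40000, 80, "tcp", 1500),
]

flows = defaultdict(lambda: {"bytes": 0, "first": None, "last": None})
for ts, src, dst, sp, dp, proto, size in packets:
    key = (src, dst, sp, dp, proto)          # classic 5-tuple flow key
    f = flows[key]
    f["bytes"] += size
    f["first"] = ts if f["first"] is None else f["first"]
    f["last"] = ts

def mean_rate_bps(f):
    dur = max(f["last"] - f["first"], 1e-6)  # guard single-packet flows
    return 8 * f["bytes"] / dur

rates = {k: mean_rate_bps(f) for k, f in flows.items()}
```

The real system performs the header extraction and timestamping in FPGA hardware at line rate; only the statistical post-processing resembles this kind of software aggregation.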

8.8 Optimized Protocols and Software for High-Performance Networks

International collaborations resulting in joint publications

GRIDNET-FJ (Grid Networks France-Japan, INRIA Associated Team)
Scientific lead in LIP is P. Vicat-Blanc Primet. The associated team enabled us to pursue a strong collaboration along three directions:
- bandwidth allocation and control in Grids: focusing on allocation algorithms and interface interoperability;
- optimisation of MPI communications in Grids around gateways and MPI;
- GtrcNET packet capture functionality for the design of the packet capture and real-time flow analysis features.

NEGST (JSPT-CNRS project ( ))
Scientific lead in LIP is P. Vicat-Blanc Primet. The objective of this project is to promote collaborations between Japan and France on grid computing technology. The project is organized in three parts: Grid interoperability and applications, Grid Metrics, and Instant Grid and virtualization of grid computing resources. RESO mainly participates in the Grid Metrics topic, which basically gathers all research about applications, programming models, libraries, runtimes, operating systems, and network evaluation, in either synthetic environments (emulators and simulators) or real environments (real networks and Grids).

SAKURA (PAI SAKURA ( ))
Scientific lead in LIP is P. Vicat-Blanc Primet. The objective of this France-Japan project (between INRIA Grand Large and RESO, AIST, and the University of Tsukuba) was to work on two main limitations of current grid networks in the context of high performance computing: 1) the lack of end-to-end network resource reservation models and mechanisms (optical and Ethernet control plane), and 2) the efficiency of grid application communications.
To address these two limitations, we considered that this research should gather two tracks: 1) the study, design, implementation, and test of a grid network resource allocation and reservation service for grid computing systems, based on previous experiences, and 2) understanding the precise behaviour of large grid computing systems under a large variety of scenarios.

FAST (PAI FAST ( ))
Scientific lead in LIP is L. Lefèvre. This project focused on the design of Web Services based on a Programmable Networks Infrastructure (WeSPNI). This collaboration between the RESO team and the Programming Language and Systems group (PLAS) at Queensland University of Technology (Brisbane, Australia) aims to bring together researchers to design the next generation of overlay networks. The FAST project is supported by the French Ministry of Foreign Affairs ( ) [798]. L. Lefèvre spent 4 months at QUT (May-September 2005) and one month at QUT (October 2006). P. Roe visited RESO in December 2006, and from September to December 2007 with an INRIA visiting professor position.

AI LUSO (Actions intégrées Luso-Françaises, 01/09/07-31/12/08)
Scientific lead in LIP is P. Gonçalves. INRIA (RESO) with the University of Lisbon (Instituto Superior de Estatística e Gestão de Informação), Dr. Mário Caetano. Our joint work was initially occasioned by the co-direction of Hugo Carrão's PhD program (expected defense in September 2009) on multi-temporal satellite time series for land cover characterization [897]. In the course of this work, we addressed SVM-based classification issues [684, 685], and then derived a novel and versatile time series model for yearly phenological evolutions [686].
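The flavour of this SVM-based land-cover classification can be sketched with a toy linear SVM trained by the Pegasos sub-gradient method (chosen here only to keep the sketch dependency-free). The synthetic NDVI-like yearly profiles, the two classes, and all hyper-parameters are illustrative assumptions, not the data or models of [684, 685].

```python
import math
import random

random.seed(0)

def profile(kind):
    # Synthetic 12-month NDVI-like curve; purely illustrative, not real data.
    return [0.2 + (0.5 * math.sin(math.pi * m / 12) if kind == "vegetated" else 0.05)
            + random.gauss(0, 0.03) for m in range(12)]

# Two toy land-cover classes; a constant feature is appended to act as a bias.
data = [(profile("vegetated") + [1.0], +1) for _ in range(100)] + \
       [(profile("bare") + [1.0], -1) for _ in range(100)]

# Linear SVM trained with the Pegasos stochastic sub-gradient method.
lam = 0.01
w = [0.0] * 13
for t in range(1, 3001):
    x, y = random.choice(data)
    eta = 1.0 / (lam * t)
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    w = [(1.0 - eta * lam) * wi + (eta * y * xi if margin < 1.0 else 0.0)
         for wi, xi in zip(w, x)]

errors = sum(1 for x, y in data if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0)
```

Because the two synthetic classes are well separated (the summer months differ by far more than the noise level), the training error after a few thousand iterations should be near zero.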
8.8 Software production and Research Infrastructure

All public software is available at

Software Descriptions

HIPerNet
Type: software toolbox [916]
Scientific problem addressed: orchestration of virtual clusters
Functional description: resource and user registration, resource virtualisation, definition, allocation, and automatic deployment of Virtual Private Execution Infrastructures. International diffusion, deployment in Grid5000.
Status: free software
State: proof-of-concept
APP: HIPerNet version 0.5: IDDN.FR S.P
Transfer to LinKTiss start-up

VXCalendar
Type: software module
Scientific problem addressed: scheduling of virtual resources and infrastructures
Functional description: resource temporal database manager
Status: proprietary software
State: prototype - associated with a patent
APP: version 1.0 of 15 March 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

VXtopology
Type: software module
Scientific problem addressed: management and control of virtual resources and infrastructures
Functional description: resource spatial database manager
Status: proprietary software
State: prototype - associated with a patent
APP: version 1.0 of 15 March 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

VXScheduler
Type: software module
Scientific problem addressed: scheduling of virtual resources and infrastructures
Functional description: adaptation of virtual infrastructure requests and scheduling
Status: proprietary software
State: prototype - associated with a patent
APP: version 1.0 of 15 March 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

VXDL parser
Type: software module
Scientific problem addressed: specification and processing of virtual infrastructures
Functional description: interpretation and XML translation of virtual infrastructure specifications
Status: proprietary software
State: prototype - associated with a patent
APP: version 2.0 of 20 March 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

BDTS engine
Name: BDTS, Bulk Data Transfer Scheduling service [918]
Type: software [918]
Scientific problem addressed: transfer delay predictability, avoidance of high congestion due to massive data transfers
Functional description: dynamic network bandwidth allocation, bulk data transfer scheduling. Worldwide diffusion, derived and reused in SRV (Alcatel-INRIA).
Status: free software
State: prototype
APP: JBDTS version 1 of 15/12/2007: IDDN.FR S.P
Transfer to LinKTiss start-up
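The kind of time-window transfer scheduling a service like BDTS performs can be illustrated with a toy earliest-deadline-first allocator over a slotted link. The slot model, request format, and EDF policy are assumptions made for the sketch, not the actual BDTS algorithm or interface.

```python
# Toy bulk-data-transfer scheduler: each request asks to move `volume` (Gb)
# inside a [start, deadline) slot window over a link of fixed capacity;
# requests are filled earliest-deadline-first. All names/units are illustrative.

CAPACITY = 10.0            # link capacity available per slot (Gb per slot)
HORIZON = 8                # number of scheduling slots

requests = [               # (name, volume_Gb, start_slot, deadline_slot)
    ("backup", 30.0, 0, 6),
    ("dataset", 25.0, 2, 8),
    ("logs", 10.0, 0, 3),
]

free = [CAPACITY] * HORIZON
plan = {name: [0.0] * HORIZON for name, *_ in requests}

for name, vol, start, dl in sorted(requests, key=lambda r: r[3]):  # EDF order
    for slot in range(start, dl):
        take = min(vol, free[slot])       # grab whatever fits in this slot
        plan[name][slot] = take
        free[slot] -= take
        vol -= take
        if vol == 0:
            break
    if vol > 0:
        raise RuntimeError(f"request {name} rejected: {vol} Gb unschedulable")
```

In this toy instance the urgent "logs" transfer saturates slot 0, pushing "backup" into slots 1-3 and "dataset" into slots 4-6, while the per-slot capacity is never exceeded.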

SRVdemonstrator
Name: Scheduling, Reconfiguration and Virtualisation software
Type: software [923, 924, 922, 914]
Scientific problem addressed: Bandwidth on Demand (MTOSI compliant)
Functional description: scheduling, reconfiguration, and virtualisation of network resources for intensive computing environments. Deployment within an ALCATEL-LUCENT testbed.
Status: proprietary software, CARRIOCAS contract
State: prototype
APP: ongoing
Transfer to LinKTiss start-up

Metroflux
Name: Metroflux
Type: hardware/software
Scientific problem addressed: flow- and packet-level traffic capture and analysis
Functional description: combines an FPGA-based device for packet capture, header extraction, and time stamping with a high capacity storage and computing server for the statistical analysis of very high speed network traffic. Worldwide diffusion, deployment in the Grid5000 network, used by ALCATEL-LUCENT Bell Labs, reference implementation in the INRIA Bell Labs common laboratory.
Status: free software and access
State: prototype
APP deposit: ongoing

NXE
Type: software [912]
Scientific problem addressed: automation of large scale networking experiments
Functional description: definition, configuration, deployment, run, and analysis of a large scale experiment for protocol evaluation. National diffusion, deployment in Grid5000.
Status: free software
State: prototype
APP: version 1.0 of November 2008: IDDN.FR S.P
Transfer to LinKTiss start-up

PATHNIF
Type: software [913]
Scientific problem addressed: automation of large scale and high speed network bottleneck detection
Functional description: systematically analyses and evaluates the capacity of potential bottlenecks of an end-to-end network path.
Status: proprietary software - associated with a patent
State: prototype
APP: version 1.0 of March 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

FLOC
Type: software [921]
Scientific problem addressed: explicit flow rate control
Functional description: limitation and triggering of flow rates. National diffusion, deployment in Grid5000, reference implementation in the EC-GIN community.
Status: free software
State: prototype
APP: version 0.12 of 17 February 2009: IDDN.FR S.P
Transfer to LinKTiss start-up

WattM
Type: software [911]
Scientific problem addressed: monitoring framework for the energy consumption of data centers
Functional description: monitoring and exposing the electrical usage of a large number of resources. Usage and deployment in Grid5000.
Status: open source software
State: prototype

ShowWatts
Type: software [910]
Scientific problem addressed: real-time energy consumption graphing
Functional description: software used to display real-time measures of the energy consumed by processing nodes in a grid architecture. This software proposes a graphical interface connected to a set of powermeter devices. The graphical interface can display measures coming through a long distance secured network tunnel. Usage and deployment in Grid5000.
Status: open source software
State: prototype

Tamanoir
Type: software [909]
Scientific problem addressed: supporting high performance in active networks
Functional description: Tamanoir is an open source software environment for high speed active networks.
Status: open source software
State: prototype
APP deposit: IDDN.FR R.P

Tamanoir embedded
Type: software [908]
Scientific problem addressed: active execution environment for embedded network equipment
Functional description: Tamanoir embedded is a dedicated software platform fully written in Java and suitable for heterogeneous services. Tamanoir provides various methods for dynamic service deployment. Tamanoir embedded also supports autonomic deployment and service updating through mobile equipment.
Status: open source software
State: prototype
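The explicit flow-rate limitation a tool like FLOC applies can be illustrated with a classic token bucket. This is a stand-in sketch with illustrative parameter names and values; FLOC's actual mechanism is not described at this level of detail in the report.

```python
# Token-bucket rate limiter: a flow may send a packet only if enough tokens
# (bits) have accumulated; tokens refill at the target rate, capped at `burst`.

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # sustained rate (bits per second)
        self.burst = burst_bits       # bucket depth (maximum burst, in bits)
        self.tokens = burst_bits
        self.t = 0.0

    def allow(self, now, packet_bits):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

# 1 Mb/s sustained rate, bucket depth of one 1500-byte packet (12 kb).
bucket = TokenBucket(rate_bps=1_000_000, burst_bits=12_000)
# Offer a 12 kb packet every 5 ms: once the bucket drains, only every third
# attempt finds a full bucket again, enforcing the configured rate.
sent = sum(1 for i in range(50) if bucket.allow(now=i * 0.005, packet_bits=12_000))
```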

XCP-i
Type: software [915]
Scientific problem addressed: XCP-i, an interoperable eXplicit Control Protocol
Functional description: XCP (eXplicit Control Protocol) is a transport protocol that uses the assistance of specialized routers to very accurately determine the available bandwidth along the path from the source to the destination. We propose XCP-i, which is operable on an internetwork consisting of XCP routers and traditional IP routers without losing the benefit of the XCP control laws. An ns-2 module simulating XCP-i has been developed and will be available on the web.
Status: open source software
State: prototype

SNE
Type: software [917]
Scientific problem addressed: SNE, Stateful Network Equipment
Functional description: SNE is a complete library for designing stateful network equipment (it contains a Linux kernel patch and a user space daemon). The aim of the SNE library is to address issues related to the implementation of highly available network elements, with a special focus on Linux systems and firewalls. The SNE (Stateful Network Equipment) library is an add-on to current High Availability (HA) protocols. This library is based on the replication of the connection tracking table for designing stateful network equipment. SNE is an open source project, available on the web (CeCILL licence).
Status: open source software
State: prototype

LSCAN
Type: software [919]
Scientific problem addressed: LSCAN, Large Scale deployment of Autonomic Networks
Functional description: in the context of the Grid5000 infrastructure, the LSCAN software suite is dedicated to the (graphical) deployment, the management (through web services), and the experimentation of large scale deployments of autonomic network nodes. LSCAN is an open source project, adapted from the Nagios software suite, and is available on the web.
Status: open source software
State: prototype

WSNet
Type: software
Scientific problem addressed: multi-hop wireless network discrete event simulation
Functional description: its main characteristics are a flexible architecture (it uses dynamic libraries to handle models) and a realistic radio medium simulation whose degree of accuracy can be tuned by the user. A final characteristic of WSNet is its ability to be used in conjunction with the WSim simulator in order to offer a complete sensor network simulation with very high accuracy. WSNet is part of the SensTOOLS ( ) sensor network development framework.
APP: IDDN
Status: open source software
State: prototype

Contribution to Research Infrastructures

GRID 5000 ( ) is a research infrastructure devoted to the experimentation of large-scale high-performance computing. Deeply involved in the constant evolution of this grid testbed, RESO is in particular responsible for the networking aspects, as a participant in the dedicated INRIA development action (ADT) ALADDIN. From our implication in the design, deployment, and usage of such a high-performance experimental network and grid testbed, we acquired strong experience in high-speed networks and unique expertise in the exploration and tuning of specific protocols.
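The discrete-event core that a simulator like WSNet is built around can be sketched in a few lines. The engine API and the toy broadcast scenario below are illustrative assumptions; WSNet's actual architecture (dynamic-library models, tunable radio medium) is far richer.

```python
import heapq

class Simulator:
    """Minimal discrete-event engine: a time-ordered queue of handler calls."""
    def __init__(self):
        self.queue, self.now, self.seq = [], 0.0, 0

    def schedule(self, delay, handler, *args):
        self.seq += 1  # tie-breaker keeps same-time ordering deterministic
        heapq.heappush(self.queue, (self.now + delay, self.seq, handler, args))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self.queue)
            handler(*args)

# Toy use: node 0 broadcasts a beacon every second; two neighbours log each
# reception after a fixed propagation + processing delay of 10 ms.
log = []
sim = Simulator()

def receive(node, msg):
    log.append((sim.now, node, msg))

def broadcast(n):
    for neighbour in (1, 2):
        sim.schedule(0.01, receive, neighbour, f"beacon-{n}")
    sim.schedule(1.0, broadcast, n + 1)

sim.schedule(0.0, broadcast, 0)
sim.run(until=2.5)
```

Running until t = 2.5 s executes three broadcasts (at t = 0, 1, 2) and their six receptions; the fourth broadcast, due at t = 3, stays in the queue.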

In addition, the RESO team played a key role in the development of certain networking infrastructures and network services. That is notably the case with Metroflux, a measurement probe that we developed in collaboration with AIST (Japan). Based on an FPGA Gtrc-Net card, our platform can extract and timestamp, with a 60 ns precision, the headers of packets captured on 1 and 10 Gbps links. This tool is the core of a versatile metrology service that we plan to promote across the Grid5000 user community.

8.9 Industrialization, patents, and technology transfer

Patents (French, European and worldwide)

Procédés de gestion de sessions multi-flux
Filing number and year: 2007
Patent title: Procédés de gestion de sessions multi-flux (methods for managing multi-flow sessions)
Publication year: 2007
Applicants and co-applicants: France Telecom - INRIA
Licence: yes
Inventors: Narjess Ayari, Denis Barbaron, and Laurent Lefèvre
Royalties: no

PATHNIF
Filing number and year: 2009
Patent title: Procédé de détection de goulet d'étranglement d'un chemin réseau (method for detecting the bottleneck of a network path): PATHNIF
Publication year: under review
Applicants and co-applicants: INRIA
Licence: yes
Inventors: Romaric Guillier, P. Vicat-Blanc Primet
Royalties: no

TIME2CAP
Filing number and year: 2009
Patent title: Modèle et procédé de représentation et de contrôle de ressources dynamiques (model and method for representing and controlling dynamic resources): TIME2CAP
Publication year: under review
Applicants and co-applicants: INRIA and ENS
Inventors: P. Vicat-Blanc Primet, Sébastien Soudan
Royalties: no

VXCap
Filing number and year: 2009
Patent title: Modèle et procédé de représentation et de contrôle d'infrastructures virtuelles (model and method for representing and controlling virtual infrastructures): VXCap
Publication year: under review
Applicants and co-applicants: INRIA
Inventors: P. Vicat-Blanc Primet, Sébastien Soudan, Guilherme Koslovski
Royalties: no

VXAlloc
Filing number and year: 2009
Patent title: Procédé d'allocation d'infrastructures virtuelles (method for allocating virtual infrastructures): VXAlloc
Publication year: under review
Applicants and co-applicants: INRIA
Licence: yes
Inventors: P. Vicat-Blanc Primet, Guilherme Koslovski
Royalties: no

Creation of Startups

Name of company: LinKTiss
Date of creation: incubation since January 2009
Nature and domain of activity: software and consulting services for infrastructure virtualisation and orchestration
Nature of contribution to the creation: software licence (e.g. software license, technology license, patent rights)
Team members transferred to the company: Pascale Vicat-Blanc Primet, Sébastien Soudan, Romaric Guillier, Philippe Martinez

Collaboration Contracts

Consulting Activities
ANVAR-OSEO, P. Vicat-Blanc Primet

8.10 Educational Activities

Supervision of Educational Programs
Since 2009, P. Gonçalves is responsible for the axis "Models and Optimization for Emergent Infrastructures" of the ENS Lyon Computer Science Master. Since 2007, E. Fleury is the chair of the Master in Fundamental Computer Science at ENS Lyon and in charge of the axis on modeling complex systems. Since 2007, Isabelle Guérin Lassous is co-chair of the Master 2 CCI (Compétence Complémentaire en Informatique) at University Lyon 1 (computer science department).

Teaching

Name | Course title (short) | Level | Institution | Hours | Years
O. Glück | Internet App., Network & System Admin. | M2 | Univ. Lyon 1 | 30h+30h | 5
O. Glück | Computer Networks | L3 | Univ. Lyon 1 | 30h+30h | 3
JP. Gelas | Long Distance Networks | M2 | Univ. Lyon 1 | 12h+24h | 3
JP. Gelas | Embedded Systems | M2 | Univ. Lyon 1 | 15h+15h | 2
JP. Gelas | Operating Systems | M2 | Univ. Lyon 1 | 15h+15h | 3
JP. Gelas | Local Networks | M2 | Univ. Lyon 1 | 15h+15h | 3
P. Primet | High Speed Networks | M2 | ENS Lyon | 32h | 3
P. Primet | Computer Networks | M2 | EC Lyon | 30h | 3
P. Gonçalves | Traffic Models | M2 research | ENS Lyon | 32h | 1
L. Lefèvre | High Performance Networks | Maîtrise Info. | IFI (Hanoi - Vietnam) | 30h | 1
L. Lefèvre | High Performance Networks | Maîtrise Info. | Univ. Antilles Guyane | 40h | 2
L. Lefèvre | Computer Networks | L3if | ENS Lyon | 30h | 1
I. Guérin L. | Ad Hoc Networks | Master 1 | Univ. Lyon 1 | 12h | 4
I. Guérin L. | QoS and Multimedia Systems | Master 2 | Univ. Lyon 1 | 30h | 4
I. Guérin L. | Local Networks | Master 2 | Univ. Lyon 1 | 30h | 4
I. Guérin L. | Autonomic Networking | Master 2 | Univ. Lyon 1 | 8h | 3
I. Guérin L. | Mobile and Wireless Networks | Master 2 | IFI (Hanoi - Vietnam) | 60h | 2
I. Guérin L. | QoS in Wireless Networks | Post-graduate | U.P. de Catalunya | 10h | 1
I. Guérin L. | Networking Algorithms | Master 1 | ENS Lyon | 16h |

Self-Assessment

Strong points
The arrival of new senior researchers and the subsequent fruitful cross-fertilisation increased the dynamics of all research around the networking topic within the ENS. Our impact expanded to the international grid and network research community as well as to the industrial field. RESO actively promoted the gender balance of the research team by attracting and recruiting highly skilled young women (as senior researchers or PhD students). This helps balance the team, facilitates internal communication, and increases the quality of the team dynamics.

RESO is focusing on a very active research domain which has a strong social and economic impact. The networking, but also the services, research communities are highly challenged by an uncontrolled evolution of the traditional disciplines, due to huge economic and social forces fed by technologies like Web 2.0 and optical fiber deployment. The increasing complexity and heterogeneity of network infrastructures, along with the evolving nature of applications and services, challenge the design of new architectures that need to scale and to be energy-efficient and economically sustainable. Since 2002, RESO has positioned itself at the edge of computing and networking to understand, but also to propose solutions to accelerate and smooth, their convergence. Today, self-managed and self-organized architectures, virtualization, high availability, and energy efficiency are becoming very hot topics in the context of Cloud services and the Future Internet. Our exploration of these aspects, from both the global-system theoretical optimisation point of view and the network-device experimental point of view, gives us a strong head start that we plan to exploit in the near future. How to guarantee quality of service in machine/user-to-machine/user communication while efficiently using the resources of the networks is becoming a major challenge. The two main problems we tackled and will continue to explore in the future are: i) dynamic bandwidth sharing and congestion control in the Future Internet, and ii) control and flow management with a semantic networking approach. During the last period we focused on the three following questions: 1) what type of congestion control and transport protocol should be used in the context of large-scale deployed high-speed networks (fiber to the home, for example)? 2) how to efficiently share, but also dynamically provision, the bandwidth of a network dedicated to computing tasks?
3) is the "flow-aware" approach a valuable solution to the end-to-end quality of service issue in the very-high-speed Future Internet? These questions appear more and more critical and far from being settled. We proposed solutions that have interested both the research and industrial communities and that made our team and laboratory more visible nationally and internationally. We hope to expand our impact in the future and to be able to disseminate our solutions widely. In particular, we believe that probing, data analysis, and statistical inference form a sensible triangle that should soon become an inherent component of emerging system designs. The RESO team leverages two main assets to address these challenges:
1. a multidisciplinary competence comprising network and protocol specialists, researchers in statistical signal processing, and an experienced computer engineering team. In the fall of 2009, a newly hired researcher will bring in new expertise in queueing and optimization theory;
2. an original and pioneering experimental facility relying on a large scale, independent, and fully reconfigurable network platform (Grid5000), and on a versatile and scalable measurement probe which permits fine packet-grain captures at 10 Gbps.

Weak points
Our skills have their own downsides. In particular, we suffer from:
1. the weaknesses of an experimental facility, with its inevitably unstable hardware and software solutions designed to realize the best trade-off between flexibility and robustness;
2. the natural difficulty that exists among scientists with different backgrounds in sharing a common language and meeting on a unifying project.
In addition, our traffic awareness activity is quite recent in the RESO team and certainly still lacks full maturity.

Perspectives
RESO progressively attracted different members coming from parent fields.
The arrival of Eric Fleury (dynamic networks) as an ENS professor in 2008 reinforced the networking component of the LIP laboratory. After joining the LIP laboratory, Eric Fleury proposed to introduce a new research topic integrating his experience on sensor networks and the ENS potential in network measurement, analysis, and modelling. D-NET, a new project on dynamic networks, has been elaborated. D-NET started as an INRIA project in March 2009 and will be an independent LIP research team in the period.

Keywords

Vision and new goals of the team
Our vision is that the Internet of the Future will be a wide reservoir and a vast market for all kinds of small or huge resources and services for the communication and processing of information. Any entity, biological or not, will be able to access and combine these with the duration, trust level, and capacity it chooses to pay for. Distributed systems that aggregate storage and computing resources into a virtual and integrated dynamic data processing environment, also called Clouds, will become ubiquitous. Although these systems will theoretically offer solutions for resource aggregation, high and predictable performance for applications may be hard to obtain, due to the unsuitability of the bandwidth-sharing paradigms (fairness, best effort, no QoS), communication protocols, and software overheads. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and reliable manner over long distance networks, RESO will continue to address issues such as quality of service, security, traffic metrology, traffic modeling, and network-resource scheduling, and will pursue the investigation of fundamental issues such as bandwidth- and network-resource sharing paradigms, network-resource virtualisation and scheduling, energy efficiency, and transport protocols. RESO aims at playing an active and visible role in the transformation of the Internet architecture. But as the task is huge, RESO cannot act alone, and it associates its efforts with major actors in this domain. In France, we are already leading a partnership with Bell Labs (Alcatel-Lucent) and we are building another strong collaboration with Orange Labs (France Telecom).
We are also joining two important European consortia sharing our vision for the Future Internet: GEYSERS and 4WARD.

Evolution of Research activities
We propose to keep our main focus on high speed networks: the usage of these networks is increasing (FTTH, for example) and their technical specificities are evolving considerably (light path configuration automation, optical burst switching, and packet switching technologies are emerging). However, the use of wireless networks as access networks is greatly increasing, and, in the future, packets are very likely to be transmitted over different networking technologies. Therefore, some of our future work will include wireless networks as access networks in the targeted high speed networks. We will continue to apply the same methodology, which mixes theory and large scale experiments on real testbeds. Our goal is to put more emphasis on theoretical aspects and upstream research, adopting the clean slate thinking advocated by the networking community. In particular, RESO will focus on key issues such as:
- where and how to integrate the autonomy required to manage and control high-speed networks at large scale?
- how to deal with the cost of resource and network virtualization on communication performance?
- what type of congestion control should be used in the context of a large-scale deployment of high-speed access networks (fiber to the home, for instance)?
- is the traffic of grids and high-speed networks self-similar, as Internet traffic is? What are the critical scales? In which sense is self-similarity harmful to the quality of experience?
- how to efficiently share the bandwidth in a network that interleaves real-time multimedia applications and computing applications?
- how to improve the interactions between message-based communications or interactive traffic and the transport layer in wide area networks that can include wireless access networks?
Our work will continue to follow four major research topics.

1 Optimized protocol implementations and networking equipment
Challenges facing the Future Internet concern the way this large distributed infrastructure will be able to adapt to new usages and changing contexts. In this research axis, we explore the integration of context-awareness functionality to address important issues: reliability of communications, dynamic programmability of network equipment, and energy consumption of distributed infrastructures. This axis bases its experimental evaluation and validation at large scale on real distributed infrastructures (like Grid5000/ALADDIN). RESO focuses this activity on the following directions.

Autonomic Network Programmability: the key enabling factor of new network services is programmability at every level, that is, the ability for new software capabilities to self-configure over the network. We explore the concept of "dynamic programming enablers" for dynamic, service-driven configuration of communication resources. Dynamic programming enablers apply to an executable service that is injected and activated into the network system elements to create the new functionality at runtime. The basic idea is to enable trusted parties (users, operators, and service providers) to activate management-specific service and network components into a specific platform. We study the mechanisms and infrastructures required to support

these components. We aim at providing new functionality to services using Internet facilities, addressing self-management operations in differentiated and integrated services. The goal is to enhance the creation and the management (customization, delivery, execution, and termination) of Internet services. We have designed the ANPI software framework (Autonomic Network Programming Interface), allowing large scale autonomic service deployment (currently used in the Autonomic Internet European project). This orientation is currently supported by the EU FP7 "Autonomic Internet" project ( ).

Session awareness: most NGN services involve a session model based on multiple flows required for the signaling and for the data exchange, all along the session lifespan. New service-aware dependable systems are more than ever required. Challenges to these models include client and server transparency, low cost during failure-free periods, and sustained performance during failures. Based on our previous work with France Telecom R&D ("Procédés de gestion de sessions multi-flux", N. Ayari, D. Barbaron, L. Lefèvre, France Telecom R&D patent, June 2007), we continue to explore and propose session-aware distributed network solutions which support the reliability mandatory for operator services. A collaboration with the University of Seville (Spain) is currently running on this topic.

Energy awareness: large scale distributed systems (and, more generally, the next generation Internet) are facing infrastructure and energy limitations (usage, cooling, etc.). Energy aspects will change the way we design software frameworks, services, and protocols. In the context of monitored and controlled energy usage, we propose and continue to explore the design of energy-aware equipment and frameworks, which allow users and middleware to use large scale distributed architectures efficiently.
We are developing solutions to dynamically monitor energy usage, inject energy information as a resource in distributed systems, and adapt existing job (OAR) and network (BDTS) schedulers so that they autonomically benefit from energy information in their scheduling decisions. This research is validated through experimental evaluation on the Grid5000 platform and inside the ALADDIN initiative. This topic is currently supported by the INRIA "Action de Recherche Concertée" called GREEN-NET. RESO is also a member of the European COST action IC0804 on energy efficiency in large scale distributed systems ( ).

2 Quality of Service and Transport layer for Future Networks
The RESO team foresees extending its activities to heterogeneity. By heterogeneity, we mean the heterogeneity of communication technologies (i.e. wired, fiber, wireless, etc.) and the heterogeneity of traffic (best effort traffic, computing applications, real-time applications, etc.). It is clear that current networks follow this trend in terms of technologies and usage. Optimizing the communication of a specific application on a single-technology network is a step towards a more efficient application, but the obtained performance is not clear on an end-to-end heterogeneous path that is shared by different kinds of applications. It is important to take these constraints into account from the beginning. Our goals are twofold:
- designing techniques that share network resources among the different applications while ensuring the constraints required by some applications. Different shares (such as max-min, proportional, etc.) can be considered;
- optimizing user satisfaction while ensuring a good use of the network (which corresponds in turn to operator satisfaction). We plan to identify the different trade-offs that can be obtained.
It is clear that the answers to these goals should be dynamic and distributed.
The different approaches that will be tackled for these studies are the following.

Modeling of the studied systems. Concerning traffic modeling, we intend to reuse the work carried out by the traffic analysis activity. How to model the heterogeneity of communication technologies is one of the first questions to address.

Dynamic and distributed optimization. Approaches like Lagrangian optimization, for instance, are very promising techniques for optimizing in a dynamic and distributed way. We have already used such an approach in a very specific context; we intend to apply it in a more general setting. The dynamic tuning of some parameters of this technique is still an issue that we need to solve to derive truly dynamic optimization solutions.

Traffic-aware solutions. Within the framework of the Bell Labs-INRIA Joint Lab, our aim is to design quality of service solutions that are based on traffic knowledge. We have obtained some first results, but we intend to intensify this study, mainly with the use of history-dependent utility functions corresponding to user satisfaction. We plan to use these functions in different quality of service mechanisms, like admission control or active queue management, for instance. This use raises the issue of distributing the knowledge to these mechanisms: which information to give, at which periodicity, over which area, evaluation of erroneous operation due to out-of-date information, etc. We will address these questions.
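The dynamic, distributed flavour of the Lagrangian approach can be sketched with a classic dual-decomposition bandwidth-sharing loop: each source adjusts its rate from the total congestion price on its own path, while each link adjusts its price from its own load. The 3-flow/2-link topology, log utilities, and step size are illustrative assumptions, not the team's actual model.

```python
# Distributed bandwidth sharing via Lagrangian (dual) decomposition.
routes = {"s1": ["l1"], "s2": ["l1", "l2"], "s3": ["l2"]}  # source -> path
capacity = {"l1": 10.0, "l2": 8.0}                         # per-link capacity
weight = {"s1": 1.0, "s2": 1.0, "s3": 1.0}                 # U_s(x) = w_s log x

prices = {l: 1.0 for l in capacity}   # dual variables (congestion prices)
step = 0.01

for _ in range(5000):
    # Each source reacts only to the total price along its own path:
    # x_s = argmax_x [w_s log x - x * price] = w_s / price.
    rates = {s: weight[s] / sum(prices[l] for l in routes[s]) for s in routes}
    # Each link adjusts its price from its local load only (projected ascent).
    for l in capacity:
        load = sum(rates[s] for s in routes if l in routes[s])
        prices[l] = max(0.05, prices[l] + step * (load - capacity[l]))

loads = {l: sum(rates[s] for s in routes if l in routes[s]) for l in capacity}
```

At convergence the bottleneck links are saturated and the allocation is weighted proportionally fair; the two-link flow s2, which pays the price of both links, receives less than the single-link flows. The step size here is exactly the kind of convergence parameter whose dynamic tuning is mentioned above.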

8.13 Optimized Protocols and Software for High-Performance Networks

- Network virtualisation. In this topic we will mainly address two challenges: resource sharing and performance isolation within a virtual node on the one hand, and the mapping of virtual infrastructures onto physical infrastructures on the other. From both the theoretical and the technical point of view, these are optimisation problems that will be tackled by adopting a "spiral" research methodology. This approach couples problem modeling, analysis and simulation with software development and concept experimentation. Starting with very simple cases (equipment with few ports, networks with few nodes), our aim is to progressively enlarge the problem scale in order to study real operator networks and tools. For this, the Reso team and its emerging LinKTiss spin-off will pursue a tight interaction integrating our industrial partners (Alcatel-Lucent and Orange) in the loop. We believe that such a dynamic approach, if supported by our institutions, will strongly benefit all participating entities. This activity will also be supported by the HIPCAL project and the newly accepted European FP7 IP project GEYSERS.
- Experimental measurements. As stated previously, our methodology merges theory and experiments, so it is very important to test our solutions on a real testbed. The results obtained provide feedback that we will use to fine-tune our solutions (for instance the convergence parameter of the Lagrangian optimization technique). We will use Grid'5000, and we intend to add wireless nodes to this platform in order to obtain a heterogeneous testbed.

3 High-Speed Networks traffic metrology, analysis and modelling

Regarding this axis, the Reso team plans to reinforce its traffic analysis activity and to focus it on two main directions.

Semantic Networking.
Within the framework of the Bell Labs-INRIA Joint Lab, our aim is to identify relevant features of traffic data that can discriminate between the underlying classes of applications. The ultimate goal is to condition differentiated treatments on these flow categories, with the optimization of resource allocation and sharing in mind. To this end, our work encompasses several facets: sampling theory, statistical characterization and modeling of discrete time series (i.e. point processes), parameter estimation, and learning theory for classification purposes. All these tasks must comply with real-time constraints, to guarantee that traffic awareness can effectively be implemented in the core routers of very high-speed networks. This project is a great opportunity for us to strengthen our collaboration with the MAESTRO team (INRIA Sophia Antipolis), which is also a partner of the Joint Lab. Concretely, we plan to co-advise two new PhD students on these topics, both supported by Joint Lab scholarships.

Performance evaluation. An important advantage of our metrology testbed is its ability to generate, under realistic network conditions, real live traffic whose origin and transfer conditions are fully controlled. This allows us to monitor the buffer occupancy at a bottleneck point and then to relate the congestion characteristics (global and per-flow loss rates, packet drop distribution, transmission delay estimation, etc.) to statistical or geometrical properties of the input traffic. More precisely, we are concerned with the scaling properties of aggregated and per-flow throughputs (adapting what has been done for long-range dependence to the time scales that effectively matter), and with their dependence on the protocols used. This study has led us to consider Markov models of TCP traffic and to devise a new multifractal analysis approach, in order to account for the particular time-varying instantaneous throughput induced by these feedback control mechanisms.
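For reference, the large-deviation formulation underlying multifractal analysis is the standard one (textbook definitions recalled here for the reader, not a contribution of the team). Partition the observation window into $n$ intervals $I_k^{(n)}$ of equal length, let $\mu(I_k^{(n)})$ denote the traffic volume in interval $k$, and define coarse-grained exponents $\alpha_k^{(n)} = \log \mu(I_k^{(n)}) / \log(1/n)$. The large-deviation spectrum counts how many intervals display a given exponent:

$$ f(\alpha) \;=\; \lim_{\varepsilon \to 0}\; \limsup_{n \to \infty} \frac{\log \#\{\, k : |\alpha_k^{(n)} - \alpha| \le \varepsilon \,\}}{\log n}, $$

and is linked by a Legendre transform to the partition function

$$ \tau(q) \;=\; \liminf_{n \to \infty} \frac{\log \sum_k \mu(I_k^{(n)})^{q}}{\log(1/n)}, \qquad f(\alpha) \;\le\; \inf_{q} \bigl( q\alpha - \tau(q) \bigr), $$

with equality whenever the multifractal formalism holds.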
We then expect to find a mapping between this large deviation principle and the characteristics of TCP variants, which would explain the different measured performances. This study is joint work with the SISYPHE team of INRIA Rocquencourt (J. Barral), and should be reinforced by the arrival of a new researcher working on queueing and optimization theory.

4 Network Services for highly demanding applications

Reso wants to continue its activity related to MPI. In this direction, we will strive to consolidate and promote the MPI 5000 architecture, which allows for a host of optimizations on gateways that remain unexplored: bandwidth reservation, communication scheduling, gateway connections, WAN-adapted communication protocols, etc. In parallel, we plan to open up our research scope to the coupling between wired and wireless networks. We are interested in the interactions between these two media and, within this framework, we intend to start a co-advised PhD with Isabelle Guérin Lassous. More precisely, we would like to study the issue of resource sharing at the interface between the two networks, in the presence of highly heterogeneous traffic, and to propose end-to-end protocol solutions that cope with network and traffic heterogeneity. The main focus will be on energy efficiency, network virtualisation and traffic awareness.

Part 8. Reso

8.13 Perspectives: new team DNet

Keywords: measure, sensor network, complex network, dynamic network, graph, network science, distributed algorithm.

Vision and new goal of the team

The main goal of the D-NET team is to lay solid foundations for the characterization of dynamically evolving networks, and for the field of dynamical processes occurring on large-scale dynamic interaction networks. In order to develop tools of practical relevance in real-world settings, we will ground our methodological studies on real data sets obtained during large-scale in situ experiments. Consider the context of health science and public health policy: the spread of infectious diseases remains an urgent public health issue, and all modeling approaches crucially depend on our ability to describe the interactions of individuals in the population. They therefore rely on the availability of data on individual interactions, which define the complex networks on which diseases spread. Only recently has it become possible to study large-scale interaction networks, such as collaboration networks, phone call networks, or networks of sexual contacts. This has prompted many research efforts in complex network science, mainly in two directions. First, attention has been paid to the network structure, with networks considered as static graphs. A large amount of research has focused on spreading models on complex networks, and has shown that the network topology has a strong impact on the dynamics of the system. However, the dynamics of networks (the changes in their topology) and on networks (e.g., spreading processes) are still generally studied separately. There is therefore an important need for tools and methods for the joint analysis of both dynamics. The D-NET project puts the emphasis on the cross-fertilization between these two research lines, which should lead to considerable advances. The D-NET project has the following fundamental goals:

1. To develop distributed measurement approaches based on sensor networks in order to capture physical phenomena in space and time;
2. To develop the study of dynamically evolving interaction networks, by building specific tools targeted at characterizing and modeling their complex dynamical properties;
3. To study dynamical processes occurring on dynamical networks, such as spreading processes, taking into account both the dynamics of and on the network structure;
4. To apply these theoretical tools to large-scale experimental data sets; we will benefit from a large data set describing contact patterns within a hospital environment;
5. To set up and foster multidisciplinary collaborations in order to develop, in the MOSAR context, our understanding of the spread of nosocomial infections and of the prevalence of antimicrobial-resistant bacteria (AMRB), so as to develop strategies for controlling and mitigating such spreading.

The D-NET project targets the study of dynamically evolving networks, both from the point of view of their structure and from that of the dynamics of processes taking place on them. Most activity on complex networks has so far focused on static networks, towards the characterization of their structure and the understanding of how this structure influences dynamics such as spreading phenomena. The important step that this project undertakes is to take into account the fact that the networks themselves are dynamical entities: the topology evolves and adapts in time, possibly driven by (or in interaction with) the dynamical process unfolding on top of it. This step is motivated by two points.
On the one hand, a large dataset reporting the dynamics of real networks is newly available, obtained by deploying sensor networks close to the physical object in order to record the dynamics of the network under study (in the MOSAR context, sensors record all contacts between persons); this dataset will allow us to develop methods and tools with a direct connection to real-world data. On the other hand, the specific context of the dataset precisely defines a need for such tools, and for an understanding of dynamical processes on dynamical networks, in order to be able to design appropriate measures or protocols; in the MOSAR context, namely a hospital environment in which AMRB spread, such measures can be containment or health policies. Interestingly, the real-world dynamical graph is also multimodal: several sensors may record various data at different time scales. This multimodality presents new challenges that will be addressed. The D-NET project therefore has two major goals. On the one hand, to advance the field of network science in order to provide a better understanding of dynamic graphs, from both the analysis and the structural point of view; this fundamental aspect will be grounded on real-world, large-scale dynamic networks. On the other hand, to help develop a better understanding of the physical object studied; this requires the joint study of the dynamics both of the network and on the network, and a tight collaboration with other fields in order to take advantage of the first goal. The D-NET project therefore has both very fundamental and very applied aspects that are tightly linked.
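Assuming time-stamped contact data of the kind the MOSAR sensors produce, the coupling between the dynamics OF the network and the dynamics ON it can be illustrated with a toy SIS-like process driven directly by the contact list (function name and parameters are hypothetical; this is a sketch, not D-NET software):

```python
import random

def sis_on_contacts(contacts, seed_node, beta, mu, rng=None):
    """Toy SIS process driven by a time-ordered contact list.
    Each contact is a (t, u, v) tuple; transmission can only follow
    contacts in chronological order, so the spreading dynamics is
    coupled to the dynamics of the underlying contact network."""
    rng = rng or random.Random(0)
    infected = {seed_node}
    last_t = None
    for t, u, v in sorted(contacts):
        if last_t is not None and t != last_t:
            # between two contact times, every infected node recovers
            # independently with probability mu
            infected = {n for n in infected if rng.random() >= mu}
        last_t = t
        # transmission along the current contact, in either direction
        if u in infected and rng.random() < beta:
            infected.add(v)
        elif v in infected and rng.random() < beta:
            infected.add(u)
    return infected
```

With beta = 1 and mu = 0 this reduces to temporal reachability: a node is reached only if a chronologically ordered chain of contacts leads to it, which is exactly where the static-graph picture and the temporal picture diverge.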

The impact of the research developed in D-NET goes beyond the disease-spreading context studied within MOSAR, thanks to the inherent interdisciplinarity of the field of complex networks. Dynamical processes on dynamically evolving networks are indeed present in several fields, including rumor spreading in social networks, opinion formation, fashion phenomena, and the diffusion of innovations in a population. The spread of computer viruses may take place through networks or Bluetooth connections, which are both dynamical. The development of efficient algorithms for information spreading in wireless/P2P/DTN networks should also be improved by an understanding of the dynamics of these networks and their temporal properties. The study of all these processes will benefit from the tools developed in this project, which represents a unique opportunity to study a real-world spreading process occurring on a contact network whose dynamics is known. The conjunction of real-world data with the development of theoretical tools proposed in this project will therefore impact various scientific fields, ranging from epidemiology to computer science, economics and sociology.

Sensor network, distributed measure and distributed processing

In order to gather information on the dynamic behavior of a specific physical phenomenon, a measure must be performed. The quality of the measure is crucial and conditions the whole analysis. Moreover, by conducting and controlling the measure and its bias during the experiment, one can refine and optimize the analysis. Sensor networks are one efficient way to measure a physical phenomenon at various space and time scales.
One important challenge is to understand how to take advantage of the numerous deployments of heterogeneous sensor networks, which will make various kinds of objects able to communicate and interact with their environments, in order to design a global, large-scale sensing tool able to monitor and sense the physical environment. By gathering several distributed heterogeneous sensing devices together, one could build an accurate sensing tool. Given a target application, the goal is to select a set of available sensors and to set up the way their data is collected so as to fulfill the requirements of the application. Such data, collected and selected on the fly, can sense the phenomena of interest directly and with high accuracy. Sensors may be heterogeneous in terms of sensing capacities, deployment density, and mobility. The key insight is that heterogeneity is a fundamental and beneficial quality of a distributed sensing tool, not just a problem to overcome. Understanding such distributed heterogeneous sensing networks as a single, coherent sensing tool will require the development of both theoretical and practical techniques to deploy distributed sensing tools on top of existing sensor networks, to manage this distributed measurement tool, and to perform efficient and reliable distributed computing to analyze multi-scale dynamic data. With these main challenges in mind, we define the following objectives:

Characterize the various sensing measures that we can exploit. Such characterization must take into account the correlations in space and time that exist between sensors, and must also handle the various time scales that may exist in the measures. The challenge comes from the heterogeneous data resolution: data will be multi-modal and multi-scale, with possible irregularities, and will exhibit strong correlations in time and space.
Fundamental challenges lie in the current lack of tools, models, and methods for the characterization and analysis of time series describing dynamic sensing data. Data sets collected by various sensors may lack the most common simple statistical properties, such as stationarity, linearity, or Gaussianity. Relevant time scales may be difficult to identify, or may not even exist. Observed properties have non-trivial relations, and even the choice of the time-scale granularity to use in the analysis is a difficult problem, since it may bias the analysis in an uncontrolled way.

Methodology and design of a global sensing tool. In order to use multi-variable, multi-scale sensing data, we need to normalize data formats and access methods. Moreover, based on the characterization described above, we will propose a methodology for selecting the appropriate measure to fulfill the application requirements. Heterogeneous sensing systems are more immune to the weaknesses of individual sensing modalities, and more robust against defective, missing, or malicious data sources, than even carefully designed homogeneous systems. However, data heterogeneity also raises major challenges when trying to integrate data from many different sensors: we need a way to model, describe, combine and query these different data sets.

Methodology of distributed processing guaranteeing an accurate measure. The goal here is to take advantage of the fact that computation can be delocalized closer to the sensed phenomena. Exploiting space/time correlations can reduce the amount of data sent through the network. Designing robust distributed processing able to analyze the data from the different sensors is a hard challenge.
Since sensors constantly generate small data items, sensor data can be considered as streaming data. Given this feature, we will try to build a system on which users can compose these data streams much like a UNIX pipe. By managing the computed streams, we will be able to reduce search and computation costs substantially, because typical computed results are used again and again. In other words, our system will dynamically compute and merge sensor data according to user queries.

Statistical Characterization of Complex Interaction Networks

Dynamically evolving networks can be regarded as "out of equilibrium" systems. Indeed, their dynamics is typically characterized by non-standard and intricate statistical properties, such as non-stationarity, long-range memory effects, and intricate space and time correlations. Moreover, such dynamics often exhibit no preferred time scale, or equivalently involve a whole range of scales, and are characterized by a scaling or scale-invariance property. Another important aspect of network dynamics resides in the fact that the sensors measure information of different natures. For instance, in the MOSAR project, inter-individual contacts are registered together with the health status of each individual and the time evolution of the resistance to antibiotics of the various strains analyzed. Moreover, such information is collected with different and unsynchronized resolutions in both time and space. This property, referred to as multi-modality, is generic and central in most dynamical networks. While such an approach is crucial, it also represents a great challenge and risk for us, since the core areas of research involved in the statistical characterization of large-scale dynamic data sets span fundamental physics (statistical physics, information theory, nonlinear dynamics and discrete systems). New faculty hires should strengthen D-NET's skills, and we will also foster our ongoing collaboration with the SiSyPhe team at ENS Lyon. With these main challenges in mind, we define the following objectives:

From "primitive" to "analyzable" data: observables. The various and numerous modalities of information collected on the network generate a huge "primitive" data set.
It first has to be processed to extract "analyzable data", which can be envisioned at different time and space resolutions: these can concern either local quantities, such as the number of contacts of each individual or pair-wise contact times and durations, or global measures, e.g., the fluctuations of the average connectivity. The first research direction therefore consists of identifying, from the "primitive data", a set of "analyzable data" whose relevance and meaningfulness for the analysis of network dynamics and diffusion phenomena will need to be assessed. Such "analyzable data" also needs to be extracted from the large "primitive data" set with reasonable complexity, memory and computational loads.

Granularity and resolution. The corresponding data will take the form of time series "condensing" the description of network dynamics at various granularity levels, both in time and space. For instance, the existence of a contact between two individuals can be seen as a link in a network of contacts, and contact networks corresponding to contact sequences aggregated at different analysis scales (potentially ranging from hours to days or weeks) can be built. However, it is so far unclear to what extent the choice of the analysis scale impacts the relevance of the description and analysis of network dynamics. An interesting and open issue lies in understanding the evolution of the network from a set of isolated contacts (when analyzed at low resolution) to a globally interconnected ensemble of individuals (at large analysis scales). In general, this raises the question of selecting the adequate level of granularity at which the dynamics should be analyzed. This difficult problem is further complicated by the multi-modality of the data, with potentially different time resolutions. We will therefore consider as an alternative the analysis of network dynamics jointly at all resolutions, through wavelet decompositions and multiresolution analyses.
While these tools have long been studied for self-similar and multifractal processes, tailoring them to network dynamics raises challenging theoretical issues that will be addressed: for instance, multivariate models for such processes remain to be invented, and alternations of periods of regular and irregular behavior may require the development of processes that go beyond the so-called intermittency models.

(Non-)stationarity. Stationarity of the data is another crucial issue. Usually, stationarity is understood as time invariance of statistical properties. This very strong definition is difficult to assess in practice. Recent efforts have put forward a more operational concept of relative stationarity, in which an observation scale is explicitly included. The present research project will take advantage of such methodologies and extend them to the network dynamics context. The rationale is to compare local and global statistical properties at a given observation scale in time, a strategy that can be adapted to the various time series extracted from the data graphs so as to capture their dynamics. This approach can be given statistical significance via a test based on a data-driven characterization of the null hypothesis of stationarity.

Dependencies, correlations and causality. To analyze and understand network dynamics, it is essential that (statistical) dependencies, correlations and causalities can be assessed amongst the different components of the "analyzable data". For instance, in the MOSAR framework, it is crucial to assess the form and nature of the dependencies and

causalities between the time series reflecting, e.g., the evolution over time of strain resistance to antibiotics and the fluctuations at the inter-contact level. However, the multimodal nature of the collected information, together with its complex statistical properties, turns this issue into a challenging task. Therefore, Task 1 will also address the design of statistical tools that specifically aim at measuring dependency strengths and causality directions amongst multivariate signals presenting these difficulties. The objective is to provide elements of answers to natural yet key questions such as: Does a given property observed on different components of the data result from a single network mechanism controlling the ensemble, or rather stem from different and independent causes? Do correlations observed on one kind of information (e.g., topological) imply correlations for other modalities? Can directionality in correlations (causality) be inferred amongst the different components of multivariate data? The answers should also shed complementary light on the difficulties and issues associated with the identification of "important" nodes or links.

Theory and Structural Dynamic Properties of Dynamic Networks

The study of relevant statistical analyses and characterizations of network dynamics is a key and necessary step, before any other analysis, to get accurate insight into the data. To go further in the characterization of the dynamics, we need to focus on intrinsic properties of evolving networks. New notions (as opposed to classical static graph properties) have to be introduced: the rates of vertex or link appearance and disappearance, and the durations of link presence or absence. Moreover, more specific properties related to the dynamics have to be defined, and these are closely related to the way a dynamical graph is modeled.
Classical graph notions such as paths, connected components and k-cores have to be revisited in this context. Advanced properties also need to be defined in order to apprehend the intrinsic dynamic structural issues of such dynamic graphs. The notion of communities (dense groups of nodes) is important in any social/interaction network context and may play an important role in an epidemic process. Transposing the static-graph community concept into the dynamical-graph framework is a challenging task, and appears necessary in order to better understand how the structure of graphs evolves in time. In this context, we define the following objectives:

Toward a dynamic graph model and theory. We want to design new notions, methods and models for the analysis of dynamic graphs. For the static case, graph theory has defined a vast and consistent set of notions and methods such as degrees, paths, centrality measures, and cliques. These notions and methods are completely lacking for the study of dynamic graphs: not only do we lack basic notions for describing a graph and its dynamics, but we also lack the basic definitions and algorithms for manipulating dynamic graphs. There is no consensus on the way to encode such graphs and, for instance, even the basic notion of distance between vertices has no widely acknowledged equivalent for dynamic graphs. Our goal is therefore to fill this gap, by providing the basic notions for manipulating dynamic graphs as well as the notions and indicators needed to describe their dynamics meaningfully (as complex network theory does for static complex networks). For instance, it is crucial to generalize the notion of centrality of a node in a context where a node can have a large degree or centrality in the snapshot at time t and a much lower one at t+1.

Dynamic communities. The detection of dynamic communities is particularly appealing for describing dynamic networks.
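As a minimal illustration of how the notion of path must be revisited, the sketch below (a hypothetical helper, not a D-NET tool) computes earliest-arrival reachability in a contact sequence: a time-respecting path may only traverse contacts in non-decreasing time order.

```python
def temporal_reachable(contacts, source):
    """Earliest-arrival temporal reachability: node v is reachable from
    `source` only through a chain of contacts (t, u, v), undirected,
    whose timestamps are non-decreasing along the chain."""
    arrival = {source: float("-inf")}
    for t, u, v in sorted(contacts):
        # a contact forwards reachability only if one endpoint was
        # already reached no later than time t
        if arrival.get(u, float("inf")) <= t and t < arrival.get(v, float("inf")):
            arrival[v] = t
        if arrival.get(v, float("inf")) <= t and t < arrival.get(u, float("inf")):
            arrival[u] = t
    return arrival
```

On the contact list [(1, a, b), (3, b, c), (2, c, d)], node d is adjacent to a reached node in the aggregated static graph, yet it is temporally unreachable from a, because its only contact occurs before c is reached. This is precisely the gap between static and dynamic path notions discussed above.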
In order to extend the static case, one may apply existing community detection methods to successive snapshots of a dynamic network. This is however not totally satisfying, for two main reasons: first, it takes a large amount of time (directly proportional to the data span); second, a temporal succession of independent communities is not sufficient, as valuable information and dependencies are lost. We also need to investigate the temporal links, study the time granularity, and look for time periods that could be compressed into a single snapshot.

Tools for dynamic graph visualization. Designing generic, pure graph visualization tools is clearly out of the scope of the project: efficient graph drawing tools and network analysis toolkits are already available (e.g., GUESS, TULIP, Sonivis, Network Workbench). However, the drawback of most of this software is that the dynamics is not taken into account. Since we will study the hierarchy of dynamics through the definition of communities, we plan to extend graph drawing methods by using the community structures. We also plan to handle time evolution in the network analysis toolkit. A tool like TULIP is well designed and could be improved by allowing operations (selection, grouping, subgraph computation, etc.) to take place on the time dimension as well.

Publications

International and national peer reviewed journals [ACL]

[677] Anestis Antoniadis, Andrey Feuerverger, and Paulo Gonçalves. Wavelet based estimation for univariate stable laws. Annals of the Institute of Mathematical Statistics, 58(4), 2006.

[678] O. Audouin, D. Barth, M. Gagnaire, C. Mouton, P. Vicat-Blanc Primet, D. Rodrigues, L. Thual, and D. Verchère. Carriocas project: Towards converged internet infrastructures supporting high performance distributed applications. IEEE/OSA Journal of Lightwave Technology, accepted.
[679] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. Fault tolerance for highly available internet services: Concepts, approaches, and issues. IEEE Communications Surveys and Tutorials, 10(2):34–46, July.
[680] Alessandro Bassi, Micah Beck, Fabien Chanussot, Jean-Patrick Gelas, Robert Harakaly, Laurent Lefèvre, Terry Moore, James Plank, and Pascale Vicat-Blanc Primet. Active and logistical networking for grid computing: the e-toile architecture. The International Journal of Future Generation Computer Systems (FGCS) - Grid Computing: Theory, Methods and Applications, 21(1), January. Elsevier.
[681] Elyes Ben Hamida, Guillaume Chelius, Anthony Busson, and Eric Fleury. Neighbor discovery in multi-hop wireless networks: evaluation and dimensioning with interferences considerations. Discrete Mathematics & Theoretical Computer Science.
[682] Raphaël Bolze, Franck Cappello, Eddy Caron, Michel Daydé, Frederic Desprez, Emmanuel Jeannot, Yvon Jégou, Stéphane Lanteri, Julien Leduc, Noredine Melab, Guillaume Mornet, Raymond Namyst, Pascale Primet, Benjamin Quetier, Olivier Richard, El-Ghazali Talbi, and Irena Touché. Grid 5000: a large scale and highly reconfigurable experimental grid testbed. International Journal of High Performance Computing Applications, 20(4), November.
[683] Faycal Bouhafs, Jean-Patrick Gelas, Laurent Lefèvre, Moufida Maimour, Cong-Duc Pham, Pascale Vicat-Blanc Primet, and Bernard Tourancheau. Designing and evaluating an active grid architecture.
The International Journal of Future Generation Computer Systems (FGCS) - Grid Computing: Theory, Methods and Applications, 21(2), February.
[684] Hugo Carrao, António Araújo, Paulo Gonçalves, and Mário Caetano. Multitemporal MERIS images for land cover mapping at national scale: the case study of Portugal. International Journal of Remote Sensing, to appear.
[685] Hugo Carrao, Paulo Gonçalves, and Mário Caetano. Contribution of multispectral and multitemporal information from MODIS images to land cover classification. Remote Sensing of Environment, 112(3). Elsevier.
[686] Hugo Carrao, Paulo Gonçalves, and Mário Caetano. A nonlinear model for satellite images time series: analysis and prediction of land cover dynamics. IEEE Transactions on Geoscience and Remote Sensing, to appear.
[687] Eric Fleury and Céline Robardet. Communities detection and the analysis of their dynamics in collaborative networks. International Journal of Web Based Communities (IJWBC).
[688] A. Gebremedhin, J. Gustedt, I. Guérin Lassous, M. Essaïdi, and J. A. Telle. PRO: A model for the design and analysis of efficient and scalable parallel algorithms. Nordic Journal of Computing, 13(4).
[689] Brice Goglin, Olivier Gluck, and Pascale Vicat-Blanc Primet. Interaction efficace entre les réseaux rapides et le stockage distribué dans les grappes de calcul. Technique et Science Informatiques, 27(7/2008), September.
[690] Paulo Gonçalves and Rudolf H. Riedi. Diverging moments and parameter estimation. Journal of the American Statistical Association, 100(472), December.
[691] Laurent Lefèvre and Jean-Patrick Gelas. IAN2: Industrial autonomic network node architecture for supporting personalized network services in the industrial context. The International Journal of Future Generation Computer Systems (FGCS) - Grid Computing: Theory, Methods and Applications, 24(1):58–65, January.
[692] Laurent Lefèvre and Anne-Cécile Orgerie.
Towards energy aware reservation infrastructure for large-scale experimental distributed systems. Parallel Processing Letters.
[693] Patrick Loiseau, Paulo Gonçalves, Guillaume Dewaele, Pierre Borgnat, Patrice Abry, and Pascale Vicat-Blanc Primet. Investigating self-similarity and heavy-tailed distributions on a large scale experimental facility. IEEE/ACM Transactions on Networking, to appear.
[694] Jean-Philippe Martin-Flatin and Pascale Vicat-Blanc Primet. Editorial of the special issue high performance networking and services in grids: the DataTAG project. International Journal of Future Generation Computer Systems (FGCS), 21(4), April 2005.

[695] Jean-Philippe Martin-Flatin and Pascale Vicat-Blanc Primet. Special issue high performance networking and services in grids: the DataTAG project. International Journal of Future Generation Computer Systems (FGCS), 21(4), April.
[696] Edmundo Pereira de Souza Neto, Patrice Abry, Patrick Loiseau, Jean-Christophe Cejka, Marc-Antoine Custaud, Jean Frutoso, Claude Gharib, and Patrick Flandrin. Empirical mode decomposition to assess cardiovascular autonomic control in rats. Fundamental & Clinical Pharmacology, 21(5), October.
[697] T. Razafindralambo and I. Guérin Lassous. Increasing fairness and efficiency using the MadMac protocol in ad hoc networks. Ad Hoc Networks, 6(3), May.
[698] T. Razafindralambo, I. Guérin Lassous, L. Iannone, and S. Fdida. Dynamic packet aggregation to solve performance anomaly in wireless networks. Computer Networks, 52(1), January.
[699] Gabriel Rilling, Patrick Flandrin, Paulo Gonçalves, and Jonathan M. Lilly. Bivariate empirical mode decomposition. IEEE Signal Processing Letters, 14(12).
[700] S. Sarr, C. Chaudet, G. Chelius, and I. Guérin Lassous. A node-based available bandwidth evaluation in IEEE ad hoc networks. International Journal of Parallel, Emergent and Distributed Systems, 21(6).
[701] S. Sarr, C. Chaudet, G. Chelius, and I. Guérin Lassous. Bandwidth estimation for IEEE based ad hoc networks. IEEE Transactions on Mobile Computing, 7(10).
[702] Antoine Scherrer, Pierre Borgnat, Eric Fleury, Jean-Loup Guillaume, and Céline Robardet. Description and simulation of dynamic mobility networks.
Computer Networks (1976), This work is partially financed by the European Commission under the Framework 6 HealthCare Project LSH PL Mastering hospital Antimicrobial Resistance and its spread into the community (MOSAR) and AEOLUS project IST IP-FP The views given herein represent those of the authors and may not necessarily be representative of the views of the project consortium as a whole. [703] Sebastien Soudan, Binbin Chen, and Pascale Vicat-Blanc Primet. Flow scheduling and endpoint rate control in gridnetworks. Future Generation Computer Systems, 25(8): , [704] Piero Spinnato, Pascale Vicat-Blanc Primet, Chris Edwards, and Michael Welzl. Editorial special section on networks for grid applications. International Journal on Future Generation Computer Systems, april [705] Pascale Vicat-Blanc Primet, F. Echantillac, and M. Goutelle. Experiments of the equivalent differentiated service model in grids. in International Journal Future Generation Computer Systems FGCS, special issue on High Performance Networking and Services in Grids, 21(Issue 4): (, April [706] Pascale Vicat-Blanc Primet, Sebastien Soudan, and Dominique Verchere. Virtualizing and scheduling optical network infrastructure for emerging it services. Optical Networks for the Future Internet (special issue of Journal of Optical Communications and Networking (JOCN)), 1(2):A121 A132, Invited conferences, seminars, and tutorials [INV] [707] Paulo Gonçalves. Empirical Mode Decomposition: Definition and applications to non linear time series. Nonlinear Dynamical methods and time series analysis workshop, Udine (Italy), August [708] Paulo Gonçalves. Empirical Mode Decomposition: Definition and applications to non linear time series. IEEE, 17th Signal Processing and Communications - Application Conference, Antalya, Turkey, April [709] Paulo Gonçalves. Fractal dimension estimation: EMD versus wavelets. Istanbul Technical University, Elec. and Comp. Eng. Dept., April [710] I. Guérin Lassous. 
Mac protocols for ad hoc networks. IRAMUS workshop, Val Thorens, France, January [711] I. Guérin Lassous. Resources allocation for multihop wireless networks. IFI, Hanoi, Vietnam, October [712] I. Guérin Lassous. Quality of service issues in multihop wireless networks. Polytechnic University of Catalonia, Department of Telematic Engineering, 10h, June [713] Laurent Lefèvre. Designing high performance autonomic gateways for large scale grids and distributed environments. CCGSC 2006 : Clusters and Computational Grids for Scientific Computing Workshop, Flat Rock, USA, September 2006.

Part 8. Reso

[714] Laurent Lefèvre. High performance programmable network support for grid infrastructures. Kolloquium of Computer Science, Linz University, Austria, January.
[715] Laurent Lefèvre. Autonomic and programmable networks approach for supporting long latency (inter-planetary) grids. Otago University, seminar of the New Zealand Distributed Information Systems group, New Zealand, August.
[716] Laurent Lefèvre. Next generation router-assisted transport protocols for high performance grids: interoperability and fairness issues. Ho Chi Minh City University, Vietnam, May.
[717] Laurent Lefèvre. Towards new services and capabilities for next generation grids. Otago University, seminar of the New Zealand Distributed Information Systems group, New Zealand, August.
[718] Laurent Lefèvre. Green-*: towards energy efficient solutions for next generation large scale distributed systems. ACOMP 2008: International Workshop on Advanced Computing and Applications, Ho Chi Minh City, Vietnam, March.
[719] Laurent Lefèvre. Proposing an inter-operable router-assisted transport protocol for transferring large volumes of data in high performance grids. University of Sevilla, Spain, March.
[720] Laurent Lefèvre. Towards energy aware resource infrastructure for large scale distributed systems. CCGSC 2008: Clusters and Computational Grids for Scientific Computing Workshop, Flat Rock, North Carolina, USA, September.
[721] Laurent Lefèvre. Towards new services and capabilities for next generation grids. University of Sevilla, Spain, March.
[722] Laurent Lefèvre. Energy efficiency issues for large scale distributed systems: the GREEN-NET initiative. OGF 25: Open Grid Forum, "OGF-EU: Using IT to reduce Carbon Emissions and Delivering the Potential of Energy Efficient Computing" session, Catania, Italy, March.
[723] Laurent Lefèvre. Pourquoi chasser les watts dans les systèmes distribués à grande échelle ? Les approches green-*. Keynote talk, RenPar 2009 conference, Toulouse, France, September.
[724] Laurent Lefèvre and Anne-Cécile Orgerie. Energy efficiency challenges for large scale distributed systems. Deakin University, Australia, December.
[725] Laurent Lefèvre and Anne-Cécile Orgerie. Towards energy aware resource infrastructure for large scale distributed systems. University of Melbourne, Australia, January.
[726] Pascale Vicat-Blanc Primet. High speed transport protocol evaluation in Grid5000. National ARRU workshop organised by RENATER.
[727] Pascale Vicat-Blanc Primet. High speed transport protocol evaluation in Grid5000. 8th International TERENA conference "NRENs and Grids".
[728] Pascale Vicat-Blanc Primet. QoS and security issues in grids. First International ITU/OGF (International Telecommunication Union / Open Grid Forum) workshop on Grids and New Generation Networks.
[729] Pascale Vicat-Blanc Primet. Research on high speed networks and transport protocols for grid applications. AIST booth, SC06.
[730] Pascale Vicat-Blanc Primet. High speed cluster virtualisation. STIC07.
[731] Pascale Vicat-Blanc Primet. Self-management and grid networks. IM07, May.
[732] Pascale Vicat-Blanc Primet. CARRIOCAS architecture for coordination of very high speed networks and high end applications. CARRIOCAS-PHOSPHORUS workshop meeting.
[733] Pascale Vicat-Blanc Primet. The CARRIOCAS project: orchestrating dynamic network service deliveries over ultra high capacity optical networks. CCGrid2008, May.
[734] Pascale Vicat-Blanc Primet. Dynamic bandwidth provisioning in the CARRIOCAS project. AIST.
[735] Pascale Vicat-Blanc Primet. Flow scheduling and network virtualization. Tokyo Institute of Technology (TiTech), 2008.

[736] Pascale Vicat-Blanc Primet. High speed transport and flow scheduling. Osaka University.
[737] Pascale Vicat-Blanc Primet. Network virtualization. ITU-T System and Network Operation group meeting, Beijing.
[738] Pascale Vicat-Blanc Primet. Network virtualization: perspectives in grids. GridNets 2008, Beijing.
[739] Pascale Vicat-Blanc Primet. Presentation of the CARRIOCAS project. Optical Networks workshop of GridNets 2008, Beijing.
[740] Pascale Vicat-Blanc Primet. Virtualizing and scheduling network resources for emerging IT services: the CARRIOCAS approach. OFC/NFOEC Workshop on Grid vs Cloud/Utility Computing and Optical Networks.
[741] Pascale Vicat-Blanc Primet. VXDL: virtual execution infrastructures description language. OGF 25, NML working group.
[742] Pascale Vicat-Blanc Primet. Where are we going with bandwidth on demand? 8th International TERENA conference.

International and national peer-reviewed conference proceedings [ACT]

[743] Andrei Agapi, Sebastien Soudan, Marcelo Pasin, Pascale Vicat-Blanc Primet, and Thilo Kielmann. Optimizing deadline-driven bulk data transfers in overlay networks. In ICCCN 2009, Track on Pervasive Computing and Grid Networking (PCGN), San Francisco, USA.
[744] Lachlan Andrew, Cesar Marcondes, Sally Floyd, Lawrence Dunn, Romaric Guillier, Wang Gang, Lars Eggert, Sangtae Ha, and Injong Rhee. Towards a common TCP evaluation suite. In PFLDnet 2008, March.
[745] Fabienne Anhalt, Guilherme Koslovski, Marcelo Pasin, Jean-Patrick Gelas, and Pascale Vicat-Blanc Primet. Les infrastructures virtuelles à la demande pour un usage flexible de l'internet. In JDIR 09: Journées Doctorales en Informatique et Réseaux, Belfort, France, February.
[746] Fabienne Anhalt and Pascale Vicat-Blanc Primet. Analysis and experimental evaluation of data plane virtualization with Xen. In ICNS 09: International Conference on Networking and Services, Valencia, Spain, April.
[747] Narjess Ayari, Denis Barbaron, and Laurent Lefèvre. Evaluating session aware admission control strategies for improving the profitability of service providers. In The 3rd IEEE Workshop on Enabling the Future Service-Oriented Internet: Towards Socially-Aware Networks, held in conjunction with IEEE GLOBECOM 2009, Hawaii, USA, December.
[748] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. A survey on high availability mechanisms for IP services. In HAPCW2005: High Availability and Performance Computing Workshop, Santa Fe, New Mexico, USA, October.
[749] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. SARA: a session aware infrastructure for high performance next generation cluster-based servers. In ATNAC 2007: Australasian Telecommunication Networks and Applications Conference, Christchurch, New Zealand, December.
[750] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. Session awareness issues for next-generation cluster-based network load balancing frameworks. In AICCSA07: ACS/IEEE International Conference on Computer Systems and Applications, Amman, Jordan, May.
[751] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. T2CP-AR: a system for transparent TCP active replication. In AINA-07: The IEEE 21st International Conference on Advanced Information Networking and Applications, Niagara Falls, Canada, May.
[752] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. On improving the reliability of cluster based voice over IP services. FastAbstract, DSN 2008: The 38th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Anchorage, Alaska, USA, June.
[753] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. A session aware admission control scheme for next generation IP services. In Fifth Annual IEEE Consumer Communications and Networking Conference (IEEE CCNC 2008), Las Vegas, USA, January 2008.

[754] Narjess Ayari, Laurent Lefèvre, and Denis Barbaron. On improving the reliability of internet services through active replication. In PDCAT 2008: The Ninth International Conference on Parallel and Distributed Computing, Applications and Technologies, Dunedin, New Zealand, December.
[755] Narjess Ayari, Pablo Neira Ayuso, Laurent Lefèvre, and Denis Barbaron. Towards a dependable architecture for highly available internet services. In ARES 08: The Third International Conference on Availability, Reliability and Security, Barcelona, Spain, March.
[756] Suleyman Baykut, Paulo Gonçalves, Pierre-Hervé Luppi, Patrice Abry, Edmundo Pereira de Souza Neto, and Damien Gervasoni. EMD-based analysis of rat EEG data for sleep state classification. In Biosignals. Springer, January.
[757] M. Brahma, M. Chaudier, E. Garcia, J.P. Gelas, H. Guyennet, F. Hantz, L. Lefèvre, P. Lorenz, and H. Tobiet. TEMIC: a new cooperative platform for industrial tele-maintenance. In DFMA06: International Conference on Distributed Framework for Multimedia Applications, Penang, Malaysia, May.
[758] Franck Cappello, Frederic Desprez, Michel Dayde, Emmanuel Jeannot, Yvon Jegou, Stephane Lanteri, Nouredine Melab, Raymond Namyst, Pascale Vicat-Blanc Primet, Olivier Richard, Eddy Caron, Julien Leduc, and Guillaume Mornet. Grid 5000: a large scale, reconfigurable, controlable and monitorable grid platform. In Grid2005, 6th IEEE/ACM International Workshop on Grid Computing.
[759] Hugo Carrao, Mário Caetano, and Paulo Gonçalves. MERIS based land cover characterization: a comparative study. In Proceedings of the ASPRS International Conference, Reno, NV, USA, May.
[760] Hugo Carrao, Paulo Gonçalves, and Mário Caetano. Use of intra-annual satellite imagery time-series for land cover characterization purposes. In Proceedings of the 26th EARSeL Symposium, Warsaw, Poland.
[761] Hugo Carrao, Paulo Gonçalves, and Mário Caetano. Land cover characterization through parametric modeling of intra-annual reflectance time series: a comparative study with MERIS data. In SPIE Europe Symposium on Remote Sensing, Firenze, Italy, September.
[762] Lawrence Chang, Alex Galis, Bertrand Mathieu, Kerry Jean, Roel Ocampo, Lefteris Mamatas, Javier R. Loyola, Joan Serrat, Andreas Berl, Hermann de Meer, Steven Davy, Zeinab Movahedi, and Laurent Lefèvre. Self-organising management overlays for future internet services. In MACE 2008: 3rd IEEE International Workshop on Modelling Autonomic Communications Environments, during ManWeek 2008: 4th International Week on Management of Networks and Services, pages 74-89, Samos Island, Greece, September.
[763] C. Chaudet and I. Guérin Lassous. État des lieux sur la qualité de service dans les réseaux ad hoc. In Colloque Francophone sur l'Ingénierie des Protocoles (CFIP), Tozeur, Tunisia, November.
[764] Martine Chaudier, Jean-Patrick Gelas, and Laurent Lefèvre. Towards the design of an autonomic network node. In IWAN2005: Seventh Annual International Working Conference on Active and Programmable Networks, Nice, France, November.
[765] Guillaume Chelius, Eric Fleury, Antoine Fraboulet, and Jean-Christophe Lucet. A wireless sensor network to measure the health care workers' exposure to tuberculosis. In International Workshop and Conference on Complex Networks and their Applications (NetSci 09), Venice, Italy.
[766] Bin Bin Chen and Pascale Vicat-Blanc Primet. Supporting bulk data transfers of high-end applications with guaranteed completion time. In IEEE ICC2007 International Conference on Computer Communication.
[767] Binbin Chen and Pascale Vicat-Blanc Primet. Scheduling bulk data transfers in grid networks. In IEEE CCGRID2007.
[768] Georges Da-Costa, Jean-Patrick Gelas, Yiannis Georgiou, Laurent Lefèvre, Anne-Cécile Orgerie, Jean-Marc Pierson, Olivier Richard, and Kamal Sharma. The GREEN-NET framework: energy efficiency in large scale distributed systems. In HPPAC 2009: High Performance Power Aware Computing Workshop, in conjunction with IPDPS 2009, Rome, Italy, May.
[769] Dinil Mon Divakaran, Eitan Altman, Georg Post, Ludovic Noirie, and Pascale Vicat-Blanc Primet. Analysis of the effects of XLFrames in a network. In IFIP/TC6 NETWORKING 2009, May 2009.

[770] Dinil Mon Divakaran, Eitan Altman, Georg Post, Ludovic Noirie, and Pascale Vicat-Blanc Primet. From packets to XLFrames: sand and rocks for transfer of mice and elephants. In IEEE High-Speed Networks Workshop, Rio de Janeiro, Brazil, April.
[771] Dinil Mon Divakaran and Pascale Vicat-Blanc Primet. Channel provisioning and flow scheduling in grid overlay networks. In web proceedings of the HiPC 2007 posters, December.
[772] Dinil Mon Divakaran and Pascale Vicat-Blanc Primet. Channel provisioning in grid overlay networks (short paper). In Workshop on IP QoS and Traffic Control, December.
[773] E. Dramitinos and I. Guérin Lassous. A bandwidth allocation mechanism for 4G. In European Wireless Technology Conference, Roma, Italy, September.
[774] Eric Fleury, Kevin Huguenin, and Anne-Marie Kermarrec. Route in Mobile WSN and Get Self-Deployment for Free. In 5th IEEE/ACM International Conference on Distributed Computing in Sensor Systems (DCOSS), Marina del Rey, USA.
[775] Eric Fleury, Antoine Scherrer, Pierre Borgnat, Jean-Loup Guillaume, and Céline Robardet. Description and simulation of dynamic mobility networks. In Dynamics on and of Complex Networks II, Jerusalem, Israel.
[776] Alex Galis, Spyros Denazis, Alessandro Bassi, Pierpaolo Giacomin, Andreas Berl, Andreas Fischer, Herman de Meer, John Srassner, Steven Davy, Daniel Macedo, Guy Pujolle, Javier R. Loyola, Joan Serrat, Laurent Lefèvre, and Abderhaman Cheniour. Management architecture and systems for future internet networks. In FIA book: Towards the Future Internet - A European Research Perspective, IOS Press, Prague, May.
[777] Jean-Patrick Gelas, Olivier Mornard, and Pascale Vicat-Blanc Primet. Évaluation des performances réseau dans le contexte de la virtualisation Xen. In CFIP 2008: Colloque Francophone sur l'Ingénierie des Protocoles, Les Arcs, France, March.
[778] Brice Goglin, Olivier Gluck, and Pascale Vicat-Blanc Primet. An efficient network API for in-kernel applications in clusters. In Proceedings of the IEEE International Conference on Cluster Computing, Boston, Massachusetts, September. IEEE Computer Society Press.
[779] Brice Goglin, Olivier Gluck, Pascale Vicat-Blanc Primet, and Jean-Christophe Mignot. Accès optimisés aux fichiers distants dans les grappes disposant d'un réseau rapide. In Actes de RenPar 16, CFSE 4, SympAAA 2005, pages 37-46, Le Croisic, Presqu'île de Guérande, France, April.
[780] Paulo Gonçalves, Patrice Abry, Gabriel Rilling, and Patrick Flandrin. Fractal dimension estimation: empirical mode decomposition versus wavelets. In IEEE Int. Conf. on Acoust., Speech and Sig. Proc., Honolulu, Hawaii, USA, April.
[781] Paulo Gonçalves, Hugo Carrao, and Mário Caetano. Parametric model for intra-annual reflectance time series. In Proceedings of the 26th EARSeL Symposium, Warsaw, Poland.
[782] Romaric Guillier, Ludovic Hablot, Yuetsu Kodama, Tomohiro Kudoh, Fumihiro Okazaki, Ryousei Takano, Pascale Vicat-Blanc Primet, and Sebastien Soudan. A study of large flow interactions in high-speed shared networks with Grid5000 and GtrcNET-10 instruments. In PFLDnet 2007, February.
[783] Romaric Guillier, Sebastien Soudan, and Pascale Vicat-Blanc Primet. TCP variants and transfer time predictability in very high speed networks. In Infocom 2007 High Speed Networks Workshop, May.
[784] Romaric Guillier and Pascale Vicat-Blanc Primet. Methodologies and tools for exploring transport protocols in the context of high-speed networks. In IEEE TCSC Doctoral Symposium, May.
[785] Romaric Guillier and Pascale Vicat-Blanc Primet. A user-oriented test suite for transport protocols comparison in datagrid context. In ICOIN 2009, January.
[786] Ludovic Hablot, Olivier Gluck, Jean-Christophe Mignot, Stéphane Genaud, and Pascale Vicat-Blanc Primet. Comparison and tuning of MPI implementations in a grid context. In Proceedings of the 2007 IEEE International Conference on Cluster Computing (CLUSTER), September.
[787] Ludovic Hablot, Olivier Gluck, Jean-Christophe Mignot, and Pascale Vicat-Blanc Primet. Étude d'implémentations MPI dans une grille de calcul. In Actes de RenPar'08, February 2008.

[788] Syed Hasan, Laurent Lefèvre, Zhiyi Huang, and Paul Werstein. Cross layer protocol support for live streaming media. In AINA-08: The IEEE 22nd International Conference on Advanced Information Networking and Applications, Okinawa, Japan, March.
[789] Syed Hasan, Laurent Lefèvre, Zhiyi Huang, and Paul Werstein. Supporting large scale eResearch infrastructures with adapted live streaming capabilities. In 6th Australasian Symposium on Grid Computing and e-Research, volume 82, pages 55-63, Wollongong, Australia, January. Conferences in Research and Practice in Information Technology, Australian Computer Society.
[790] S. Khalfallah, C. Sarr, and I. Guérin Lassous. Dynamic bandwidth management for multihop wireless ad hoc networks. In VTC-Spring, Dublin, Ireland, April.
[791] Guilherme Koslovski, Pascale Vicat-Blanc Primet, and Andrea Schwertner Charao. VXDL: virtual resources and interconnection networks description language. In GridNets 2008, October.
[792] Dieter Kranzlmuller and Laurent Lefèvre. A record and replay mechanism on programmable network cards. In The IASTED International Conference on Parallel and Distributed Computing and Networks (PDCN 2005), Innsbruck, Austria, February.
[793] Julien Laganier and Pascale Vicat-Blanc Primet. HIPernet: a decentralized security infrastructure for large scale grid environments. In 6th IEEE/ACM International Conference on Grid Computing (GRID 2005), November 13-14, 2005, Seattle, Washington, USA. IEEE.
[794] Laurent Lefèvre. Heavy and lightweight dynamic network services: challenges and experiments for designing intelligent solutions in evolvable next generation networks. In Workshop on Autonomic Communication for Evolvable Next Generation Networks, The 7th International Symposium on Autonomous Decentralized Systems, Chengdu, Jiuzhaigou, China, April. IEEE.
[795] Laurent Lefèvre and Jean-Patrick Gelas. Towards interplanetary grids. In Workshop on Next Generation Communication Infrastructure for Deep-Space Communications, held in conjunction with the Second International Conference on Space Mission Challenges for Information Technology (SMC-IT), Pasadena, California, July.
[796] Laurent Lefèvre and Anne-Cécile Orgerie. When clouds become green. In ParCo2009: International Conference on Parallel Computing, Lyon, France, September.
[797] Laurent Lefèvre and Jean-Marc Pierson. Just in time entertainment deployment on mobile platforms. In ICIW 06: International Conference on Internet and Web Applications and Services, Guadeloupe, French Caribbean, February.
[798] Laurent Lefèvre and Paul Roe. Improving the flexibility of active grids through web services. In Fourth Australasian Symposium on Grid Computing and e-Research (AusGrid2006), volume 28, pages 3-8, Hobart, Australia, January. Australian Computer Society.
[799] Laurent Lefèvre and Aweni Saroukou. Active network support for deployment of Java-based games on mobile platforms. In The First International Conference on Distributed Frameworks for Multimedia Applications (DFMA 2005), pages 88-95, Besancon, France, February. IEEE Computer Society.
[800] Patrick Loiseau, Paulo Gonçalves, Stéphane Girard, Florence Forbes, and Pascale Vicat-Blanc Primet. Maximum likelihood estimation of the flow size distribution tail index from sampled packet data. In ACM Sigmetrics, June.
[801] Patrick Loiseau, Paulo Gonçalves, Romaric Guillier, Matthieu Imbert, Yuetsu Kodama, and Pascale Vicat-Blanc Primet. Metroflux: a high performance system for analyzing flow at very fine-grain. In TridentCom, April.
[802] Patrick Loiseau, Paulo Gonçalves, and Pascale Vicat-Blanc Primet. A comparative study of different heavy tail index estimators of the flow size from sampled data. In MetroGrid Workshop, GridNets, Lyon, France, October. ACM Press.
[803] Dino M. Lopez Pacheco, Laurent Lefèvre, and Cong-Duc Pham. Fairness issues when transferring large volumes of data on high speed networks with router-assisted transport protocols. In High Speed Networks Workshop 2007, in conjunction with IEEE INFOCOM 2007, Anchorage, Alaska, USA, May 2007.

[804] Dino M. Lopez Pacheco, Laurent Lefèvre, and Cong-Duc Pham. Lightweight fairness solutions for XCP and TCP cohabitation. In IFIP/TC6 Networking 2008, Singapore, May.
[805] Dino M. Lopez Pacheco, Cong-Duc Pham, and Laurent Lefèvre. XCP-i: explicit control protocol for heterogeneous inter-networking of high-speed networks. In Globecom 2006, San Francisco, California, USA, November.
[806] Dino M. Lopez Pacheco, Cong-Duc Pham, and Laurent Lefèvre. XCP-i: explicit control protocol pour l'interconnexion de réseaux haut-débit hétérogènes. In CFIP 2006: Colloque Francophone sur l'Ingénierie des Protocoles, Tozeur, Tunisia, October.
[807] Dino M. Lopez-Pacheco and Congduc Pham. Robust transport protocol for dynamic high-speed networks: enhancing the XCP approach. In Proceedings of the IEEE International Conference on Networks, volume 1, Kuala Lumpur, Malaysia, November.
[808] Dino M. Lopez-Pacheco and Congduc Pham. Enabling large data transfers on dynamic, very high-speed network infrastructures. In Proceedings of ICN/ICONS/MCL 2006, Mauritius, April.
[809] Edgar Magana, Laurent Lefèvre, Masum Hasan, and Joan Serrat. SNMP-based monitoring agents and heuristic scheduling for large scale grids. In Grid Computing, High-Performance and Distributed Applications (GADA 07), Vilamoura, Algarve, Portugal, November.
[810] Edgar Magana, Laurent Lefèvre, and Joan Serrat. Autonomic management architecture for flexible grid services deployment based on policies. In Architecture of Computing Systems - ARCS 2007, volume 4415, ETH, Zurich, Switzerland, March. Springer Berlin/Heidelberg.
[811] Loris Marchal, Pascale Primet, Yves Robert, and Jingdi Zeng. Optimal bandwidth sharing in grid environments. In HPDC 2006, the 15th International Symposium on High Performance Distributed Computing. IEEE Computer Society Press.
[812] Loris Marchal, Pascale Vicat-Blanc Primet, Yves Robert, and Jingdi Zeng. Optimizing network resource sharing in grids. In IEEE GLOBECOM 05, USA, November.
[813] Kashif Munir, Pascale Vicat-Blanc Primet, and Michael Welzl. Grid network dimensioning by modeling the deadline constrained bulk data transfers. In 11th IEEE International Conference on High Performance Computing and Communications (HPCC-09), Seoul, Korea, June.
[814] Pablo Neira Ayuso, Laurent Lefèvre, and Rafael M. Gasca. High availability support for the design of stateful networking equipments. In ARES 06: The First International Conference on Availability, Reliability and Security, Vienna, Austria, April.
[815] Pablo Neira Ayuso, Laurent Lefèvre, and Rafael M. Gasca. hFT-FW: hybrid fault-tolerance for cluster-based stateful firewalls. In ICPADS 2008: The 14th IEEE International Conference on Parallel and Distributed Systems, Melbourne, Australia, December.
[816] Pablo Neira Ayuso, Rafael M. Gasca, and Laurent Lefèvre. FT-FW: efficient connection failover in cluster-based stateful firewalls. In PDP 2008: 16th Euromicro International Conference on Parallel, Distributed and Network-based Processing, Toulouse, France, February.
[817] Pablo Neira Ayuso, Rafael M. Gasca, and Laurent Lefèvre. Multiprimary support for the availability of cluster-based stateful firewalls using FT-FW. In ESORICS 2008: 13th European Symposium on Research in Computer Security, pages 1-17, Malaga, Spain, October.
[818] Pablo Neira Ayuso, Leonardo Maccari, Laurent Lefèvre, and Rafael M. Gasca. Stateful firewalling for wireless mesh networks. In NTMS 2008: The Second IFIP International Conference on New Technologies, Mobility and Security, Tangier, Morocco, November.
[819] Lucas Nussbaum. Rebuilding Debian using distributed computing. In CLADE 2009, June.
[820] Lucas Nussbaum, Fabienne Anhalt, Olivier Mornard, and Jean-Patrick Gelas. Linux-based virtualization for HPC clusters. In Linux Symposium 2009, July.
[821] Anne-Cécile Orgerie, Laurent Lefèvre, and Jean-Patrick Gelas. Chasing gaps between bursts: towards energy efficient large scale experimental grids. In PDCAT 2008: The Ninth International Conference on Parallel and Distributed Computing, Applications and Technologies, Dunedin, New Zealand, December 2008.

[822] Anne-Cécile Orgerie, Laurent Lefèvre, and Jean-Patrick Gelas. Save watts in your grid: green strategies for energy-aware frameworks in large scale distributed systems. In ICPADS 2008: The 14th IEEE International Conference on Parallel and Distributed Systems, Melbourne, Australia, December.
[823] Anne-Cécile Orgerie, Laurent Lefèvre, and Jean-Patrick Gelas. Économies d'énergie dans les systèmes distribués à grande échelle : l'approche EARI. In JDIR 09: Journées Doctorales en Informatique et Réseaux, Belfort, France, February.
[824] Andreea Picu, Eric Fleury, and Antoine Fraboulet. On Frequency Optimisation for Power Saving in WSNs. In 14th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), St Louis, USA.
[825] André Pinheiro, Paulo Gonçalves, Hugo Carrao, and Mário Caetano. A toolbox for multi-temporal analysis of satellite imagery. In Proceedings of the 26th EARSeL Symposium, Warsaw, Poland.
[826] P. Vicat-Blanc Primet and J. Zeng. An overlay infrastructure for bulk data transfer in grids. In Proceedings of the 3rd International Workshop on Protocols for Fast Long-Distance Networks (PFLDnet 05), Lyon, France, February.
[827] T. Razafindralambo and I. Guérin Lassous. SBA: un algorithme simple de backoff pour les réseaux ad hoc. In Algotel, St Malo, France, May.
[828] T. Razafindralambo, I. Guérin Lassous, L. Iannone, and S. Fdida. Aggrégation dynamique de paquets pour résoudre l'anomalie de performance des réseaux sans fil IEEE 802.11. In Colloque Francophone sur l'Ingénierie des Protocoles (CFIP), Tozeur, Tunisia, November.
[829] T. Razafindralambo and I. Guérin Lassous. SBA: a simple backoff algorithm for wireless ad hoc networks. In IFIP Networking, Aachen, Germany, May.
[830] T. Razafindralambo, I. Guérin Lassous, L. Iannone, and S. Fdida. Dynamic packet aggregation to solve performance anomaly in wireless networks. In 9th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), Terremolinos, Spain, October.
[831] Gabriel Rilling, Patrick Flandrin, and Paulo Gonçalves. Une extension bivariée pour la décomposition modale empirique : application à des bruits blancs complexes. In Proceedings of the 21st Colloquium GRETSI, Troyes, France, September.
[832] C. Sarr, C. Chaudet, G. Chelius, and I. Guérin Lassous. Improving accuracy in available bandwidth estimation for IEEE 802.11-based ad hoc networks. In Third IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS), Hong Kong, China, December.
[833] C. Sarr, C. Chaudet, G. Chelius, and I. Guérin Lassous. Amélioration de la précision pour l'estimation de la bande passante résiduelle dans les réseaux ad hoc basés sur IEEE 802.11. In 8es Journées Doctorales Informatique et Réseau (JDIR), Marne-la-Vallée, France, January.
[834] C. Sarr, S. Khalfallah, and I. Guérin Lassous. Gestion dynamique de la bande passante dans les réseaux ad hoc multi-sauts. In 9es Journées Doctorales Informatique et Réseau (JDIR), Belfort, France, January.
[835] Antoine Scherrer, Pierre Borgnat, Eric Fleury, Jean-Loup Guillaume, and Céline Robardet. A Methodology to Identify Characteristics of the Dynamic of Mobile Networks. In ACM Asian Internet Engineering Conference (AINTEC), Bangkok, Thailand.
[836] Sebastien Soudan, Dinil Mon Divakaran, Eitan Altman, and Pascale Vicat-Blanc Primet. Equilibrium in size-based scheduling systems. In 16th International Conference on Analytical and Stochastic Modelling Techniques and Applications, Madrid, Spain, June.
[837] Sebastien Soudan, Romaric Guillier, Ludovic Hablot, Yuetsu Kodama, Tomohiro Kudoh, Fumihiro Okazaki, Ryousei Takano, and Pascale Vicat-Blanc Primet. Investigation of Ethernet switches behavior in presence of contending flows at very high-speed. In PFLDnet 2007, February.
[838] Sebastien Soudan, Romaric Guillier, and Pascale Vicat-Blanc Primet. End-host based mechanisms for implementing flow scheduling in grid networks. In GridNets 2007, October.
[839] Sebastien Soudan and Pascale Vicat-Blanc Primet. Mixing malleable and rigid bandwidth requests for optimizing network provisioning. In 21st International Teletraffic Congress, Paris, France, September 2009.

[840] R. Vannier and I. Guérin Lassous. Partage équitable de la bande passante dans les réseaux ad hoc. In CFIP, Les Arcs, France, March.
[841] R. Vannier and I. Guérin Lassous. Towards a practical and fair rate allocation for multihop wireless networks based on a simple node model. In 11th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), Vancouver, Canada, October.
[842] Dominique Verchere, Olivier Audouin, Bela Berde, Agostino Chiosi, Richard Douville, Helia Pouyllau, Pascale Vicat-Blanc Primet, Marcelo Pasin, Sebastien Soudan, Thierry Marcot, Veronique Piperaud, Remi Theillaud, Dohy Hong, Dominique Barth, Christian Cadéré, V. Reinhart, and Joanna Tomasik. Automatic network services aligned with grid application requirements in the CARRIOCAS project. In GridNets 2008, October.
[843] Pascale Vicat-Blanc Primet, Fabienne Anhalt, and Guilherme Koslovski. Exploring the virtual infrastructure service concept in Grid'5000. In 20th ITC Specialist Seminar on Network Virtualization, Hoi An, Vietnam, May.
[844] Pascale Vicat-Blanc Primet, Jean-Patrick Gelas, Olivier Mornard, Guilherme Koslovski, Vincent Roca, Lionel Giraud, Johan Montagnat, and Tram Truong Huu. A scalable security model for enabling dynamic virtual private execution infrastructures on the internet. In IEEE International Conference on Cluster Computing and the Grid (CCGrid2009), Shanghai, May.
[845] Pascale Vicat-Blanc Primet, R. Takano, Y. Kodama, T. Kudoh, Olivier Gluck, and C. Otal. Large scale gigabit emulated testbed for grid transport evaluation. In Proceedings of The Fourth International Workshop on Protocols for Fast Long-Distance Networks (PFLDnet 2006), Nara, Japan, February.
[846] Pascale Vicat-Blanc Primet and Jingdi Zeng. Traffic isolation and network resource sharing for performance control in grids. In Proceedings of the Joint International Conference on Autonomic and Autonomous Systems and International Conference on Networking and Services (ICAS/ICNS 2005), IEEE, April.
[847] Jingdi Zeng and Pascale Vicat-Blanc Primet. An overlay infrastructure for bulk data transfers in grids. In Proceedings of the Third International Workshop on Protocols for Very Long Distance Fat Networks (PFLDnet 2005), Lyon, February.

Short communications [COM] and posters [AFF] in conferences and workshops

[848] Narjess Ayari, Denis Barbaron, Laurent Lefèvre, and Pascale Vicat-Blanc Primet. Implementation of an active replication based framework for highly available services. Netfilter Workshop 2007, Karlsruhe, Germany, September.
[849] Martine Chaudier, Jean-Patrick Gelas, and Laurent Lefèvre. IAN2: industrial autonomic network node. Poster, INRIA booth, Supercomputing 2005, Seattle, USA, November.
[850] Jean-Patrick Gelas and Laurent Lefèvre. MoonGrid: bring processing power to the moon. ISU Annual International Symposium "Why the Moon?", Strasbourg, France, February.
[851] Jean-Patrick Gelas, Laurent Lefèvre, and Eric Rohmer. Network support for a long distance telerobotic platform. Poster, INRIA booth in collaboration with Tohoku University (Japan), Supercomputing 2007, Reno, USA, November.
[852] I. Guérin Lassous. Standard pour les réseaux sans fil : IEEE 802.11. Journal Techniques de l'Ingénieur.
[853] Romaric Guillier, Ludovic Hablot, Sebastien Soudan, and Pascale Vicat-Blanc Primet. Evaluating transport protocols in very high speed grid networks: how to benefit from huge available bandwidth? Poster, SuperComputing 2006, October.
[854] Romaric Guillier and Pascale Vicat-Blanc Primet. High speed transport protocol test suite. Poster, SuperComputing 2007, November.
[855] Romaric Guillier and Pascale Vicat-Blanc Primet. TCP variants and transfer time predictability in very high speed networks. Poster, École d'été RESCOM 2007, doctoral session, June.
[856] Romaric Guillier and Pascale Vicat-Blanc Primet. Congestion collapse in Grid5000. Demo, Stanford Congestion Collapse workshop, April 2008.

242 228 Part 8. Reso [857] Laurent Lefèvre and Jean-Patrick Gelas. High performance autonomic gateways for large scale grids and distributed environments. Poster INRIA Booth, Supercomputing 2006, Tampa Bay USA, November [858] Laurent Lefèvre, Jean-Patrick Gelas, and Anne-Cécile Orgerie. How an experimental grid is used : the Grid5000 case and its impact on energy usage. Poster CCGrid2008 : 8th IEEE International Symposium on Cluster Computing and the Grid, Lyon, France, May [859] Laurent Lefèvre and Anne-Cécile Orgerie. EARI : Energy aware resource infrastructure for large scale distributed systems. Poster INRIA Booth, Supercomputing 2008, Austin, USA, November [860] Laurent Lefèvre and Anne-Cécile Orgerie. GREEN-NET : Power aware software frameworks for high performance data transport and computing in large scale distributed systems. Poster National ARC Days - INRIA Sophia Antipolis, France, October [861] Patrick Loiseau, Paulo Gonçalves, Guillaume Dewaele, Pierre Borgnat, Patrice Abry, and Pascale Vicat- Blanc Primet. Vérification du lien entre auto-similarité et distributions à queues lourdes sur un dispositif grande échelle, June ième Atelier en Evaluation de Performances, Aussois, France. [862] Patrick Loiseau, Paulo Gonçalves, Yuetsu Kodama, and Pascale Vicat-Blanc Primet. Metroflux: A fully operational high speed metrology platform, September Euro-NF workshop: New trends in modeling, quantitative methods and measurements, in cooperation with Net-coop, THOMSON Paris Research Labs, France. [863] Patrick Loiseau, Paulo Gonçalves, and Pascale Vicat-Blanc Primet. How TCP can kill self-similarity, September Euro-NF workshop: Traffic Engineering and Dependability in the Network of the Future, VTT, Finland. [864] Patrick Loiseau, Romaric Guillier, Oana Goga, Matthieu Imbert, Paulo Goncalves, and Pascale Vicat-Blanc Primet. 
Automated traffic measurements and analysis in grid Best Demonstration award at ACM SIGMET- RICS/PERFORMANCE (Seattle, US), June [865] Anne-Cécile Orgerie and Laurent Lefèvre. Greening the clouds! Poster during Rescom 2009 Summer School, La Palmyre, France, June [866] Anne-Cécile Orgerie and Laurent Lefèvre. Towards a green Grid5000. Best presentation award of the Grid5000 school, April [867] Sebastien Soudan and Pascale Vicat-Blanc Primet. Passerelle à base de processeurs réseaux pour le controle des flux de grille. poster, CFIP 2006, October Scientific books and book chapters [OS] [868] C. Chaudet and I. Guérin Lassous. Réseaux de capteurs, chapter Principes et protocoles d accès au médium. ISTE/Wiley, [869] Patrick Flandrin, Paulo Gonçalves, and Patrice Abry. Scale Invariance and Wavelets, pages ISTE John Wiley & Sons, Inc., [870] Eric Fleury and Yu Chen. Scheduling Activities in Wireless Sensor Networks. In Sudip Misra, Isaac Woungang, and Subhas Chandra Misra, editors, Guide to Wireless Sensor Networks Computer Communications and Networks, Computer Communications and Networks, page 707. Springer, [871] Eric Fleury and David Simplot Ryl. Réseaux de capteurs : théorie et modélisation. Architecture, Applications, Service. Hermes, [872] Paulo Gonçalves, Jean-Philippe Ovarlez, and Richard Baraniuk. Quadratic Time-Frequency Analysis III: The Affine Class and Other Covariant Classes, pages ISTE John Wiley & Sons, Inc., [873] C. Pham and M. Maimour. Le controle de congestion dans les communications multicast, chapter 7. TRAITE IC2, Multicast Multimédia sur l Internet. Hermes-Lavoiser, March Scientific popularization [OV] [874] I. Guérin Lassous. Standard pour les réseaux sans fil : IEEE Journal Techniques de l Ingénieur, [875] Laurent Lefèvre. Towards green computing platforms - vers des plateformes de calcul vertes. INEDIT : the Newsletter of INRIA, May 2009.

243 8.14 Optimized Protocols and Software for High-Performance Networks 229 Book or Proceedings editing [DO] [876] Patrice Abry, Paulo Gonçalves, and Jaques Lévy Véhel. Scaling, Fractals and wavelets. Digital signal and image processing series. ISTE John Wiley & Sons, Inc., London (UK) and Hoboken (NJ, USA), [877] O. Akan and I. Guérin Lassous, editors. Fourth ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), ACM, Chania, Crete Islands, Greece, October ACM. [878] L. Bao and I. Guérin Lassous, editors. Third ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), ACM, Torremolinos, Spain, October ACM. [879] L. Bononi and I. Guérin Lassous, editors. Fifth ACM International Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), ACM, Vancouver, Canada, October ACM. [880] A. Boukerche and I. Guérin Lassous, editors. Sixth ACM International Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), ACM, Canary Islands, Spain, October ACM. [881] Lionel Brunie, Salim Hariri, Laurent Lefèvre, and Jean-Marc Pierson. Proceedings of ICPS2006 : International Conference on Pervasive Services. IEEE, Lyon, France, June [882] Zhiyi Huang, Zhiwei Xu, Laurent Lefèvre, Hong Shen, John Hine, and Yi Pan. Special Issue on Emerging Research in Parallel and Distributed Computing - Journal of Supercomputing. December To Appear. [883] Zhiyi Huang, Zhiwei Xu, Nathan Rountree, Laurent Lefèvre, Hong Shen, John Hine, and Yi Pan. Proceedings of PDCAT 2008 : The Ninth International Conference on Parallel and Distributed Computing, Applications and Technologies. IEEE Computer Society, Dunedin, New Zealand, December [884] David Hutchison, Spiros Denazis, Laurent Lefèvre, and Gary Minden. 
Proceedings of IWAN2005 : Seventh Annual International Working Conference on Active and Programmable Networks, volume 4388 of LNCS. Nice, France, November [885] Craig A. Lee, Thilo Kielmann, Laurent Lefèvre, and Joao Gabriel Silva. Topic 6 Grid and Cluster Computing: Models, Middleware and Architectures : EuroPar 2005 Parallel Processing, volume 3648 of Lecture Notes in Computer Science. Springer Berlin, Lisboa, Portugal, August ISBN : [886] Laurent Lefèvre and Jean-Marc Pierson. Special issue from International Conference on Pervasive Services - Journal of System and Software, volume 80. Elsevier, December [887] C. Pham and B. Tourancheau. Grid Infrastructures: practice and perspectives - Future Generation Computer System, volume 21. January Editorial. [888] Thierry Priol, Laurent Lefèvre, and Rajkumar Buyya. Proceedings of CCGrid2008 : Eight IEEE International Symposium on Cluster Computing and Grid. IEEE, Lyon, France, May [889] Piero Spinnato, Pascale Vicat-Blanc Primet, Chris Edwards, and Michael Welzl, editors. Special Section on Networks for Grid Applications. Elsevier, april International Journal on Future Generation Computer Systems. [890] Joe Touch, Katsushi Kobayashi, and Pascale Vicat-Blanc Primet. Special issue Hot topics in Transport Protocols for Very Long distance networks - International Journal of Computer Networks (COMNET). january [891] Pascale Vicat-Blanc Primet, Richard Hughes-Jones, and Cheng Jin. Proceedings of the 3rd International Workshop on Protocols for Very Long Distance networks. February [892] Pascale Vicat-Blanc Primet, Tomohiro Kudoh, and Joe Mambretti, editors. Networks for Grid Applications. Springer, Beijing, China, june [893] Pascale Vicat-Blanc Primet, Joe Mambretti, and Tomohiro Kudoh. Proceedings of the 2nd International ACM/ICST GRIDNETS conference on Networks for Grid Applications. October Other Publications [AP] [894] Narjess Ayari, Denis Barbaron, and Laurent Lefèvre. Procédés de gestion de sessions multi-flux. 
france telecom r&d patent. June [895] Romaric Guillier and Pascale Vicat-Blanc Primet. PATHNIF. INRIA patent

244 230 Part 8. Reso Doctoral Dissertations and Habilitation Theses [TH] [896] Narjess Ayari. Contributions to Session Aware Frameworks for Next Generation Internet Services. PhD thesis, École normale supérieure de Lyon, 46, allée d Italie, Lyon cedex 07, France, October pages. [897] Hugo Carrao. The contribution of multitemporal information from multispectral satellite images for automatic land cover classification at the national scale. PhD thesis, Université de Lisbonne (Portugal), [898] Brice Goglin. R seaux rapides et stockage distribu dans les grappes de calculateurs : propositions pour une interaction efficace. PhD thesis, cole normale sup rieure de Lyon, 46, all e d Italie, Lyon cedex 07, France, October pages. [899] Paulo Gonçalves. Some scale based tools and applications. Habilitation à diriger des recherches, École doctorale de mathématiques et d informatique fondamentale de Lyon, In preparation. [900] Romaric Guillier. Methodologies and Tools for the Evaluation of Transport Protocols in the Context of Highspeed Networks. PhD thesis, ENS-Lyon, Université de Lyon, September [901] Ludovic Hablot. Exécution d applications MPI dans les réseaux de grille. PhD thesis, ENS Lyon, Université de Lyon, [902] Julien Laganier. Architecture de sécurité décentralisée basé sur l identification cryptographique. PhD thesis, ENS-Lyon, Université de Lyon, 46, allée d Italie, Lyon cedex 07, France, [903] Patrick Loiseau. Contribution to the analysis of scaling behavior and quality of service in networks: experimental and theoretical aspects. PhD thesis, École Normale Supérieure de Lyon, [904] Dino M. Lopez P. Propositions for a robust and inter-operable explicit Control Protocol on heterogeneous high speed networks. PhD thesis, École normale supérieure de Lyon, 46, allée d Italie, Lyon cedex 07, France, June pages. [905] Vannier Rémi. Partage de la bande passante équitable dans les réseaux ad hoc. PhD thesis, ENS Lyon, INRIA, [906] Sebastien Soudan. 
Bandwidth Sharing and Control in High-Speed Networks: Combining Packet- and Circuit- Switching Paradigms. PhD thesis, ENS-Lyon, Université de Lyon, 46, allée d Italie, Lyon cedex 07, France, [907] Antoine Vernois. Ordonnancement et réplication de données bioinformatiques dans un contexte de grille de calcul. PhD thesis, École normale supérieure de Lyon, France, October Software [908] Martine Chaudier, Jean-Patrick Gelas, and Laurent Lefèvre. Tamanoir embedded : an industrial active networking framework, [909] Jean-Patrick Gelas and Laurent Lefèvre. Tamanoir :a high performance active networking framework, [910] Jean-Patrick Gelas, Laurent Lefèvre, and Anne-Cecile Orgerie. Showwatts: Real time energy consumption grapher, [911] Jean-Patrick Gelas, Laurent Lefèvre, and Anne-Cecile Orgerie. Wattm : Monitoring framework for energy consumption of data centers, [912] Romaric Guillier and Pascale Vicat-Blanc Primet. Nxe, [913] Romaric Guillier and Pascale Vicat-Blanc Primet. Pathnif, [914] Guilherme Koslovski and Pascale Vicat-Blanc Primet. Vxdl parser, [915] Dino Lopez Pacheco, Anne-Cecile Orgerie, and Laurent Lefèvre. Xcp-i : Interoperable explicit control protocol, [916] Olivier Mornard and Pascale Vicat-Blanc Primet. Hipernet, [917] Pablo Neira Ayuso and Laurent Lefèvre. Sne : Stateful network equipment, 2006.

245 8.14 Optimized Protocols and Software for High-Performance Networks 231 [918] Marcelo Pasin, Sebastien Soudan, and Pascale Vicat-Blanc Primet. Buld data transfer scheduling service, [919] Pablo Pazos Rey and Laurent Lefèvre. Lscan : Large scale deployment of autonomic networks, [920] Sebastien Soudan and Romaric Guillier. Tcp raw, [921] Sebastien Soudan and Pascale Vicat-Blanc Primet. Floc : Flow control, [922] Sebastien Soudan and Pascale Vicat-Blanc Primet. Vx calendar, [923] Sebastien Soudan and Pascale Vicat-Blanc Primet. Vx scheduler, [924] Sebastien Soudan and Pascale Vicat-Blanc Primet. Vx topology, Total ACL - International and national peer-reviewed journal INV - Invited conferences ACT - International and national peer-reviewed conference proceedings COM / AFF - Short communications and posters in international and national conferences and workshops OS - Scientific books and book chapters OV - Scientific popularization DO - Book or Proceedings editing AP - Other Publications TH - Doctoral Dissertations and Habilitation Theses Total

246 232 Part 8. Reso

9. IT Support Team

Team: MI-LIP
Coordinator: Serge TORRES
Parent Organizations: École Normale Supérieure de Lyon, Université de Lyon, CNRS, associated with INRIA

Team History: The IT support team of the laboratory, MI-LIP, was created in September 2001 with an investigator and two engineers. Its activity and membership evolved over time and it was eventually acknowledged as a CATI (Centre Automatisé du Traitement de l'Information - Automated Data Management Center) by the CNRS.

9.1 Team Composition

Current team members (permanent and fixed-term engineers):

Name      First name        Function                      % of time   Institution   Arrival
MIGNOT    Jean-Christophe   Research Engineer (IR)        50%         CNRS          1996
PONSARD   Dominique         Study Engineer (IE)                       CNRS          1999
TORRES    Serge             Research Engineer (IR)        60%         ENS de Lyon   2001

Past members:

Name      First name   Function                 Current position
D'ALU     Stéphane     Research Engineer (IR)   Research Engineer with the CITI lab at INSA-Lyon
CEDEYN    Aurélien     Study Engineer (IE)      Engineer with the Commissariat à l'énergie atomique (CEA/DAM) at Bruyères-le-Châtel

Evolution of the team: The main factor driving the team's evolution is the departure and arrival of the fixed-term engineers. Stéphane d'Alu left after being hired as a permanent research engineer in the CITI laboratory at INSA Lyon. Aurélien Cedeyn also left the LIP on July 31st, 2009 and joined the Commissariat à l'Énergie Atomique as a permanent engineer. This cyclically disrupts the team's operation.

9.2 Executive summary

Keywords: IT infrastructure management, scientific research support, scientific instrument services.

Research area

The MI-LIP does not have a research area per se: its activity comes in support of scientific investigations carried out by the laboratory research teams and, partly, by external scientists.

Main goals

- build and manage an IT environment for the laboratory activity;
- provide support and expertise to research teams for their projects;
- build and manage local nodes of distributed experimental platforms.

Methodology

The activity of the team follows the rules of the trade in IT support and administration. It progressively adopts more formal standards as they appear (e.g. ISO 2700x on security). The team's knowledge of these methods is maintained through regular training.

Highlights

- a sound and comfortable IT infrastructure for the laboratory users;
- several specific and customized services to suit particular research needs;
- building and administration of local elements of large-scale platforms.

The MI-LIP team does not have an independent research activity. It operates in support of the research teams and activities of the laboratory and offers some services to external users.

9.3 Software production and Research Infrastructure

Software Descriptions

As is usual for any substantial IT administration activity, a lot of code has been produced; however, no piece has been formally disseminated as a software package.

Contribution to Research Infrastructures

The main contribution of the MI-LIP team is its commitment to build, develop, and maintain IT research infrastructures, whether of local interest or open to a larger community.
The operational environment is widely open, since all the elements have to, and actually do, interoperate with other partner services and systems: the ENS IT infrastructure (for all the elements), the other laboratory affiliations (CNRS, INRIA, UCBL Lyon 1, for management information systems and electronic documentation assets) and the other distant nodes (for distributed research infrastructures such as GRID 5000 or the Décrypthon project). Openness is also naturally emphasized by the multi-localization of the laboratory premises, which span several buildings on the ENS campus, rented offices outside the campus where a LIP team is hosted by the IXXI, and offices in the Gerland branch of UCBL Lyon 1. This context is very challenging, as the MI-LIP team is committed to offering the same level of service at all the laboratory locations.

Coupling between teaching and research is particularly tight at the ENS de Lyon, and the relationship between the LIP and the Computer Science Department must be considered along these lines. As a consequence, some infrastructure elements managed by the MI-LIP are used for student training and scientific investigations alike.

As a whole, the MI-LIP team has been able to follow the growth of the laboratory membership and activities in spite of a permanent shortage of IT manpower. Nevertheless, the coming and going of fixed-term engineers shakes the team's operation which, combined with the laboratory's expansion, is reaching a breaking point.

Local infrastructure

The local infrastructure provides basic as well as more advanced or specialized services (according to the laboratory peculiarities). Basic services include:

- workstation administration (automatic installation and upgrade: Linux for research and Windows for staff; over 50 workstations);
- file and backup services (over 12 Tbytes);

- printing services (spanning all the locations of the laboratory);
- Windows terminal services (the bulk of operating systems consists of a mix of Unices and Mac OS X, and scientists sometimes need access to the Microsoft Office suite);
- user support (through information services, such as the laboratory intranet, and direct interaction with users);
- laboratory Web site.

The main evolutions during the period have been:

- a net growth (storage and printing capacity, number of workstations, locations), following the overall growth of the laboratory;
- technological upgrades (wikis and plain HTML services replaced by modern CMS tools, virtualization of servers);
- organizational consolidation (documentation, disaster recovery plans...);
- infrastructure consolidation, by realizing the first of a two-phase cooling equipment upgrade of the laboratory computer room.

Advanced and/or specialized local services

These fall under six categories:

- Computing services. These are provided in two flavors: traditional clusters (two 24-node clusters, one being also used for student training) and computing servers (84 cores in 9 servers, with up to 80 Gbytes of RAM on one of them). A batch job execution infrastructure encompasses all the computing resources of the laboratory (including user workstations) and allows the recruitment of over 220 cores for special purposes (for instance, the search, by Vincent Lefèvre, for the worst cases for the correct rounding of elementary functions uses it extensively).
- Special purpose servers: servers that host specific tools such as VLSI and FPGA design software (Xilinx, ModelSim...).
- Videoconferencing: a permanently set up videoconferencing room, fitted with professional grade equipment (thanks to INRIA), is available for the laboratory and is also extensively used by other interested parties at the ENS de Lyon.
- Multiple-architecture access: while the vast majority of equipment is based on the x86 architecture (now x86-64), other architectures are available for specific research needs, notably PowerPC and IA-64.
- Collaborative Development Environment: a GForge server is available for collaborative work, not only between laboratory members but also with outside co-workers (more than 100 projects, 200 users).
- Technical expertise: beyond basic user support, the team engineers are often called upon to study and give advice on the technology used in research projects. This covers hardware, development tools, and element integration, as well as compatibility with the ENS network.

The main evolutions during the period have been:

- the extension of the available computing power, mainly by the deployment of the batch infrastructure and by the addition of some specific computing servers (e.g. a 16-core server with 80 Gbytes of RAM);
- the upgrade and consolidation of the other services;
- the upgrade, again with the support of INRIA, of the videoconferencing equipment.
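As an illustration of the kind of computation the batch infrastructure absorbs, the sketch below (hypothetical Python, far simpler than the algorithms actually used for the worst-case search) estimates how hard an argument is to round: it computes exp(x) in high precision, then measures how far the exact value lies from the midpoint between two consecutive doubles. Worst cases are the arguments for which this distance is smallest.

```python
import math
from decimal import Decimal, getcontext

def rounding_hardness(x):
    """Distance (in ulps) from the high-precision value of exp(x) to the
    nearest rounding boundary of IEEE double precision.
    Values close to 0 indicate hard-to-round arguments."""
    getcontext().prec = 60                      # well beyond double precision
    exact = Decimal(x).exp()                    # high-precision exp(x)
    nearest = float(exact)                      # correctly rounded double
    ulp = Decimal(math.ulp(nearest))
    frac = abs(exact - Decimal(nearest)) / ulp  # in [0, 0.5]
    return float(Decimal("0.5") - frac)         # distance to the midpoint

# A brute-force search keeps the arguments with the smallest hardness.
worst = min((rounding_hardness(x / 1000.0), x / 1000.0) for x in range(1, 1000))
```

Each candidate argument is independent of the others, which is what makes this kind of search embarrassingly parallel and a good fit for harvesting idle workstation cores through a batch infrastructure.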

Large Scale Research Instruments

The team is engaged in two national initiatives of unequal importance (as far as the MI-LIP is concerned): GRID 5000 and the Décrypthon project.

GRID 5000

The team manages the Lyon node of the instrument. Its computing power relies on 260 processors. While this is a rather modest size, even at the scale of GRID 5000, its operation represents a large part of the team activity (one full-time engineer). Time has been dedicated to six main realms:

- day-to-day management;
- dealing with hardware problems;
- deploying research equipment and resources (e.g. energy sensors, GNet network control boxes...);
- following the network evolutions of GRID 5000;
- improving monitoring services;
- organizing the technical team at the national level.

A lot of time has been spent (lost?) fixing technical problems that were irrelevant to research issues. Most of them were related to data center cooling malfunctions, which forced us into a time- and money-costly move from one computer room to another inside the ENS. In spite of this move, cooling problems have continued to plague the operation of our GRID 5000 equipment. Other problems arose from the inability of hardware providers to comply with their commitments for the remote operation of the servers. Last but not least, it took a lengthy and painful debugging process to deal with issues in the Lyon metropolitan network when we switched from 1 Gbit/s to 10 Gbit/s links. Nevertheless, the Lyon node has been able to keep pace with the whole of GRID 5000 and busily take its share of the experimental burden.

Many improvements have taken place in the management and services area. First, a standardization of the organization has been realized, coupled with a better integration of services. This allows engineers from different locations to safely and usefully intervene on a distant node should the administrator in charge be absent (which can be a long time when a fixed-term engineer leaves!).
Second, extensive monitoring of the platform has been deployed so that proactive action can be taken instead of mere reaction to failures. Third, following a nowadays common trend, all the new and most of the old services have been virtualized.

In the networking realm, the main change was the switch to 10 Gbit/s. While this upgrade was rather transparent in the core of the network, up to the local RENATER NOC, things were more complicated downstream, in the metropolitan network (LYRES). Eventually, after a lot of busy work and debugging, the Lyon node joined the pack of 10 Gbit/s GRID 5000 nodes.

Décrypthon

The MI-LIP team's involvement is only a small part of the LIP activity in the Décrypthon project. Software development and research have taken place in the GRAAL team; yet the basic management of the Lyon node and the provision of good operational conditions have been supplied by the MI-LIP.

Team organization, infrastructure management, and adequacy to laboratory needs

The previous realizations were made possible thanks to team and laboratory efforts in several directions.

Team organization: The process of acknowledgement of the team as a CATI has helped to formalize and strengthen its organization (meeting planning, work plan, cross training and information sharing). A training policy has been defined to procure the new skills and keep the level of proficiency needed to carry on the team's missions.

Infrastructure management: The team has internally developed some tools needed to manage the platform it operates (e.g. automatic workstation and server installation, backup tools to supplement the ENS-wide infrastructure). Rules and multiannual planning have been set up to standardize and optimize equipment expenditures, so as to relieve the burden of an anarchic proliferation of equipment configurations and maximize the efficiency/cost ratio. The maintenance and upgrade of basic facilities such as the computer room have been undertaken.
Expenditure and operational disturbance are planned over several years.

Adequacy to laboratory needs: Laboratory needs have been more rigorously taken into account. Some elements of the platform management have been discussed by the laboratory council. Efforts have been concentrated on providing the services most in demand, for which proximity is essential. Other services are delegated to external providers (e.g. the ENS IT services or high-performance computing centers). A reflection is under way to refocus the laboratory platform policy and find a better articulation between the activity in the research teams and the MI-LIP.
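The shift from reactive to proactive monitoring mentioned above can be sketched as follows (illustrative Python, not the tools actually deployed on the platform): rather than alerting once a disk is already full, fit a linear trend to recent usage samples and warn while there is still time to act.

```python
def days_until_full(samples):
    """Least-squares linear fit of disk usage samples [(day, fraction_used)].
    Returns the predicted number of days until usage reaches 100%,
    or None if usage is flat or decreasing."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None
    slope = (n * sxy - sx * sy) / denom   # usage growth per day
    if slope <= 0:
        return None
    _, last_use = samples[-1]
    return (1.0 - last_use) / slope

# Warn proactively, well before the disk is actually full.
usage = [(0, 0.50), (1, 0.55), (2, 0.60), (3, 0.65)]
eta = days_until_full(usage)
if eta is not None and eta < 14:
    print(f"warning: disk predicted full in {eta:.1f} days")
```

The same extrapolation idea applies to any slowly drifting resource (storage, temperature, memory); the point is that the alert fires on the trend, not on the failure.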
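The internally developed backup tools mentioned above can be illustrated by a minimal sketch (hypothetical Python, not the team's actual tool): an incremental backup copies only the files whose content has changed since the previous run, which matters when the volume to protect exceeds 12 Tbytes.

```python
import hashlib
import os
import shutil

def _digest(path):
    """Content hash of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(src, dst):
    """Copy to dst only the files under src that are new or modified.
    Returns the list of relative paths actually copied."""
    copied = []
    for root, _, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            rel = os.path.relpath(s, src)
            d = os.path.join(dst, rel)
            if not os.path.exists(d) or _digest(s) != _digest(d):
                os.makedirs(os.path.dirname(d), exist_ok=True)
                shutil.copy2(s, d)
                copied.append(rel)
    return copied
```

A production tool (in the spirit of rsync) would also track deletions, keep historical snapshots, and use modification times to avoid re-hashing unchanged files; the sketch only conveys the change-detection idea.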

9.4 Consulting Activities

As mentioned earlier, the team members have been internally consulted for technology selection for research. They have also participated, as experts, in several workgroups inside the ENS for government contracts or for infrastructure projects.

9.5 Self-Assessment

Strong points

- Experienced and mostly stable work force: most engineers have been in the LIP for several years and are well integrated within the laboratory and the ENS de Lyon.
- The process of organizational strengthening and strategic refocusing: this will help to have, on one hand, a more robust infrastructure and, on the other hand, a more optimal use of resources.
- A wide and rather coherent base of services and platforms: over the years, the team has built a suite of elements that provide a comfortable working environment for the laboratory. Future work will be able to capitalize on the existing base to extend and upgrade it.

Weak points

- Insufficient work force and the effects of fixed-term contracts: manpower is chronically insufficient, and the ends of fixed-term contracts shake the team operation, especially for the GRID 5000 management area. This is exacerbated by the low-manpower issue.
- Slowness and precariousness in organizational evolution: it takes a constant effort to maintain the focus on objectives and plans. This applies to the internal functioning of the team as well as to the coordination with the laboratory.
- Loose supervision and security: not enough attention has been given so far to the monitoring of the infrastructure and, too often, malfunctions are reported by users rather than by automatic supervision systems. Security issues have not been systematically analyzed and addressed, even if this aspect is not overlooked for some critical systems.

Opportunities

- The arrival of (several) new permanent engineer(s): this would relax the workload tension and allow for a different organization. The arrival of young blood would also be an occasion to reconsider current working methods and technologies.
- Evolution of the context caused by the creation of a new school and the so-called Plan Campus: this opens new opportunities for access to computing services and expertise (Centre Blaise Pascal, synergies around scientific computing in the Lyon area).

Risks

- Departure of a member: this risk has already materialized with the departure of Aurélien Cedeyn, and others are possible, if not presently planned. The seniority of the team members and their long standing in their present positions make them eligible for an assignment to other duties, should one of them want to make a transfer. In the current situation, this would seriously jeopardize the day-to-day operation of the team, not to speak of its future plans.
- Freeze in IT investments at the ENS de Lyon: present indications and the lack of formal commitment of the ENS management to the development of IT infrastructure for research (mainly data center / computer room) raise some concern about the possibility for the LIP to expand its platform development. GRID 5000 is the first threatened project, and the laboratory could be deterred from taking new initiatives in the realm of research instruments by a lack of visibility on its ability to host them.
- Disruption, in the course of the creation of the new ENS de Lyon, of the present mode of cooperation established with the current ENS de Lyon PSI.

9.6 Perspectives

Keywords

Vision and new goals of the team

The future of the team's activity is delineated by a vision where the laboratory carries on with its current projects and has the ability to take new initiatives as needed by the development of research in the laboratory. The stand of the MI-LIP is based on the conception that the most efficient use of its limited manpower is to support the research projects of the laboratory. In fact, research engineers of the MI-LIP are already directly involved part-time in research teams; this will probably become the norm for all. As a consequence, we will try to rely as much as possible on our partners for vanilla services. For instance:

- We have got rid of our own authentication system and now rely on the authentication infrastructure connected to the ENS information system.
- We will participate in the ENS-wide file service system instead of developing our own solution, as we did up to now.
- We have tried, with little success so far, to involve the other laboratories of the ENS in the definition of common tools and procedures for the management of Linux workstations.

This is not always possible, and sometimes, after a careful analysis, we will continue to offer basic services locally when proximity and tailoring to specific needs are requested. One of our primary goals remains to offer an efficient work environment to the laboratory scientists. Efficiency in this realm involves:

- responsiveness;
- personalization;
- an uncluttered user computing environment, despite the increasing complexity of the infrastructures.

In the case of high-end services, such as high-performance computing, we do not try to compete with established neighboring computing centers such as the PSMN (Pôle Scientifique de Modélisation Numérique, a local mesocenter), not to speak of the IN2P3 computing center (one of the largest French computing centers, based in Lyon).
Nevertheless, we will continue to try to offer a varied computing environment, including some rather powerful equipment, to help our scientists develop and debug their experiments in a familiar setting. When vast amounts of computing power are really required, we will help them transition to platforms better fitted to their needs. This leaves room for involvement with possibly large and/or technologically advanced research (as opposed to production) infrastructures such as GRID 5000 or Décrypthon. The experience already gathered in these two projects can be leveraged for their next stages as well as for other initiatives. This confers greater value on the skills of the team members and helps vivify their interest in their trade.

This vision also includes the idea that not all the engineers of the laboratory should be members of the MI-LIP team. Many of them, in fact a majority, are directly embedded in research teams and work on specific projects. The laboratory finds this organization more efficient than a cleaner solution (from a mere management perspective) such as gathering all the engineers into a single administrative entity. Besides, such a gathering would contradict the organization of INRIA, where engineers are directly attached to projects. This does not mean a lack of interchange between engineers, whatever their internal affiliation. Exchanges do exist, though informally, and could be further extended with initiatives such as an engineers' seminar, similar to the scientific seminars held in the laboratory. There are plenty of technical topics of general interest to fill the agenda of, say, monthly meetings.

The merger of ENS de Lyon and ENS LSH to create a new research and higher education institution, which will host most of the laboratory's activities, may significantly alter the context of IT operations and, as a consequence, impact the MI-LIP activities.
Due care will have to be given, in coordination with the other research labs, to preserving and, possibly, improving the fruitful cooperation established with the PSI of the current ENS de Lyon.

Evolution of research (support) activities

For the forthcoming period, the main activity axes and goals will be:

Local infrastructure

Consolidate the team: incorporate new engineers, strengthen operations (more formal management relying on standards, such as ISO 2700x, and best practices).

Develop monitoring and, in general, all the aids that allow for a more proactive management of the infrastructure.

Define and enforce a security policy for the laboratory (PSSI); this can be done efficiently only at campus level (in the frame of the new ENS de Lyon).

Rationalize the computing infrastructure: virtualization, setting up new and more cost-efficient solutions (e.g. merging departmental photocopying and network printing into a single system).

Fortify the supporting infrastructure: computer room cooling and powering; participate in new infrastructure projects.

Switch to IPv6: this change, driven by powerful external factors, will have a deep impact on all our IT operations.

Set up new tools to enhance and professionalize user support (e.g. a request tracker system) and to create, as well, interruptibility shields for the engineers (to allow for more efficient work).

Large scale research instruments

GRID 5000: in the foreseeable future, GRID 5000 will remain the single most important project the MI-LIP is involved with. The ADT ALADDIN-G5K initiative, the ongoing flow of new experiments and projects based on this platform, and the commitment of the LIP to its development make it a clear priority for the MI-LIP team. Many improvements and evolutions are planned at project level that the Lyon node will have to participate in and/or comply with:

Increase the node capacity in Lyon: we are still far from the minimum requirement of 500 sockets/cores per Grid 5000 site. Actions (funding, government contract, hardware installation, incorporation into the platform) will have to be taken. But this implies that some issues are solved beforehand...

Fix the infrastructure hosting problems at ENS de Lyon and, if that proves impossible, find another solution!

Integrate the services more tightly using Puppet or a similar configuration management framework.

Set up an advanced network monitoring solution: the current devices do not allow for a sufficient level of control; new tools are needed.
Enforce better network control: e.g. provide the ability to transport VLANs between distant nodes all over the platform.

Switch to IPv6 with public addresses: switching to IPv6 will probably be a necessity in the forthcoming period (see above). It will also be an opportunity, using public addresses, to improve the usability of the platform. Beyond the mere deployment of IPv6, this also implies strengthened security.

This wealth of projects allows us to put some emphasis, once again, on the stable manpower that is needed to achieve them.

Décrypthon: Lyon will remain a node of the project. As stated before, the MI-LIP participation will consist essentially of offering healthy hosting conditions and basic system administration. The fixed-term contract engineer hired in will be embedded into the team that drives the research activity. Should the project grow, the MI-LIP would be able to cope with the increased needs.

Other projects: stating that research is an unpredictable activity is a classical cliché and cannot be an excuse for a lack of planning. But, up to a certain point, it holds a bit of truth, especially nowadays: projects can emerge at any time, and this period is particularly prone to unexpected events. The context of IT research is moving quickly at different scales (national, regional, local) and, without incurring the waste of withholding useful resources for hypothetical new projects, we must be ready to adapt our plans should the situation change. In more specific terms, this means the team must maintain a good level of basic infrastructure (with some margin) and sufficient technical proficiency to accommodate, in good conditions, at least the early steps of new projects. Their full development is a different story...
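To illustrate the kind of addressing work that a switch to public IPv6 implies, the following is a minimal sketch, using Python's standard ipaddress module; the /48 prefix (taken from the RFC 3849 documentation range) and the site names are invented for the example, not the platform's actual allocation:

```python
import ipaddress

# Hypothetical public IPv6 prefix for the platform
# (documentation range from RFC 3849, not a real allocation).
platform = ipaddress.ip_network("2001:db8:5668::/48")

# Carve one /64 per site, so every node can receive a public,
# globally routable address within its site's subnet.
sites = ["lyon", "grenoble", "rennes"]  # illustrative site names
site_nets = dict(zip(sites, platform.subnets(new_prefix=64)))

for name, net in site_nets.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")
```

A /48 yields 65,536 such /64 subnets, which shows why a single public prefix can comfortably cover a multi-site platform while still leaving room for growth.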


Estelle BROSSET Senior lecturer with habilitation to supervise doctoral theses Jean Monnet Chair Section 02- Public law Estelle BROSSET Senior lecturer with habilitation to supervise doctoral theses Jean Monnet Chair Section 02- Public law Professional address: Centre d Etudes et de Recherches Internationales et Communautaires

More information

Sun TM SNMP Management Agent Release Notes, Version 1.6

Sun TM SNMP Management Agent Release Notes, Version 1.6 Sun TM SNMP Management Agent Release Notes, Version 1.6 Sun Microsystems, Inc. www.sun.com Part No. 820-5966-12 December 2008, Revision A Submit comments about this document by clicking the Feedback[+]

More information

I will explain to you in English why everything from now on will be in French

I will explain to you in English why everything from now on will be in French I will explain to you in English why everything from now on will be in French Démarche et Outils REACHING OUT TO YOU I will explain to you in English why everything from now on will be in French All French

More information

Soitec. Eric Guiot, Manager R&D Soitec. 7 mars 2011 JSIam 2011

Soitec. Eric Guiot, Manager R&D Soitec. 7 mars 2011 JSIam 2011 Soitec Eric Guiot, Manager R&D Soitec 1 Agenda Soitec: company presentation Applications and market PhDs at Soitec 2 Mission Statement Supply innovative materials and technologies for the electronics and

More information

Draft dpt for MEng Electronics and Computer Science

Draft dpt for MEng Electronics and Computer Science Draft dpt for MEng Electronics and Computer Science Year 1 INFR08012 Informatics 1 - Computation and Logic INFR08013 Informatics 1 - Functional Programming INFR08014 Informatics 1 - Object- Oriented Programming

More information

Accélérer le développement d'applications avec DevOps

Accélérer le développement d'applications avec DevOps Accélérer le développement d'applications avec DevOps RAT06 Bataouche Samira [email protected] Consultante Rational France Our Challenges >70% Line-of-business Takes too long to introduce or make changes

More information

School of Management and Information Systems

School of Management and Information Systems School of Management and Information Systems Business and Management Systems Information Science and Technology 176 Business and Management Systems Business and Management Systems Bachelor of Science Business

More information

N1 Grid Service Provisioning System 5.0 User s Guide for the Linux Plug-In

N1 Grid Service Provisioning System 5.0 User s Guide for the Linux Plug-In N1 Grid Service Provisioning System 5.0 User s Guide for the Linux Plug-In Sun Microsystems, Inc. 4150 Network Circle Santa Clara, CA 95054 U.S.A. Part No: 819 0735 December 2004 Copyright 2004 Sun Microsystems,

More information

Masters in Human Computer Interaction

Masters in Human Computer Interaction Masters in Human Computer Interaction Programme Requirements Taught Element, and PG Diploma in Human Computer Interaction: 120 credits: IS5101 CS5001 CS5040 CS5041 CS5042 or CS5044 up to 30 credits from

More information

Primauté industrielle. Photonique dans LEIT TIC

Primauté industrielle. Photonique dans LEIT TIC Primauté industrielle Photonique dans LEIT TIC ICT in Industrial Leadership (LEIT) Excellent science ICT in Industrial leadership Societal challenges 2 Industrial Leadership - ICT P A new generation of

More information