
Future of the apps area software build system

Torre Wenaus, BNL/CERN
LCG Applications Area Manager
http://lcgapp.cern.ch

Applications Area Internal Review, October 20, 2003

- RTAG
- Evaluation
- SCRAM build system: dev program, status, CMS manpower
- What to do, why, why not
- The prototype
- Plan, coming to a decision
- Timeline, project impact

RTAG on LCG Software Process Management

Excerpts from the RTAG final report recommendations:

Supporting tools:
- Build tool: GNUmake
- Configuration management: CMT, SCRAM
- Tools for managing platform dependencies, packaging, exporting: autoconf, CMT, SCRAM/DAR, rpm, GRID install and export tool

Specific free tools:
- automake, autoconf, gmake
- CMT, SCRAM (choose one; both appear to meet requirements)

SCRAM vs. CMT (vs. configure+make)

As the RTAG pointed out, we have several related needs:
- Configuration, dependency management
- Software build
- Packaging, distribution

SCRAM and CMT both have capabilities in all these areas. The approach a year ago, in line with the RTAG, was to select one or the other to address all these functions.

A configure+make based build system was advocated by ALICE/ROOT. It was not seriously considered as a viable third option to SCRAM and CMT for the build system, mainly because it was not clear it could answer the need for a configuration system with adequate inter-package dependency handling. In retrospect, that was probably a mistake on my part. configure+make support was, however, put in the SCRAM workplan.

SCRAM/CMT Evaluation

A SCRAM vs. CMT technical evaluation was done twice last year:
- Once by ATLAS and CMS experts (A. Undrus, I. Osbourne). It recommended SCRAM, but complaints were made that CMT as used in ATLAS was evaluated, rather than CMT itself.
- Again by TW, taking into account the input received: http://lcgapp.cern.ch/project/mgmt/scramcmt.html. SCRAM was selected.

The evaluation specified a development program required to bring the SCRAM build system up to requirements, all of it relating to the SCRAM build system and to be carried out in an agreed collaboration with CMS:
- Speed improvement
- Support for parallel make
- More flexibility; e.g. one cpp per binary application is limiting
- Development of a "configure; make; make install" layer
- Windows support

SCRAM in LCG Today

The SCRAM functionalities other than the build system have worked well, generate little fuss, and have been stable. For the build system it is a different story.

Where are we with respect to the most important elements of the development program?
- Windows support: SCRAM on Windows is still not functional enough to build SEAL. Instead, CMT was used in the latest release, in order to meet an urgent LHCb need for Windows builds.
- Build system speed: a no-op "scram b" of POOL takes 6 minutes, a ridiculous length of time.
- configure/make/make install layer: nothing exists, nothing was done; the available effort was consumed by the above items.

Exacerbating the bad situation with the build system and the incomplete work program is that the two CMS experts on SCRAM have effectively left the project.

Where we are today

- A SCRAM based infrastructure for configuration management, software builds, packaging and distribution.
- A SCRAM build system that is today inadequate in important respects, despite significant invested manpower.
- We are essentially on our own in dealing with this deficiency.
- We adopted a large and sophisticated (even if not the largest and most sophisticated) system requiring help and support from outside the project, resulting in the project competing for support attention with the priorities of a manpower-stressed experiment.
- What we actually need, I believe, could be met by a small system that we could efficiently support and adapt. LCG applications area requirements are not so different from those of the open source community that we should reject the standard open source approach without trial.
- Before diving into another big system like CMT with similar characteristics, or throwing more effort into SCRAM, I think we should prove or disprove a lighter weight approach based on standard open source tools. (And I do not mean adopting or re-re-implementing SRT.)

Build system prototype

So, I decided to see how far I could get with a quick prototype.

Objectives:
- Understand whether we can leverage standard open source tools to build a small but solid system that meets our needs and is manpower-efficient.
- Build all of POOL, examples etc. (and later SEAL) with full support for hierarchical package-level dependencies.
- Clean, maintainable makefiles and configuration information, not significantly more complicated than SCRAM's.
- Work in conjunction with the non-build parts of SCRAM (scram project, release package version configuration from SCRAM).
- Fast configure, very fast make.
- User-friendly, whether the user is an LCG developer, an experiment integrator or user, or an outsider trying out our software.
- Support the typical developer environment in which packages in development come from the local area and the rest from central release areas.
- Support also developer environments in which pieces of different projects (e.g. POOL and SEAL) are both in development.

autoconf+make prototype: appwork

- Started Oct 3, building on appwork, a prototype work environment providing SCRAM-independent, configure+make based usage of LCG software, developed over several days in August.
- That version used a configure system derived from ROOT's.
- In going to a system that builds the LCG software itself, autoconf was found to be better suited (more capable and maintainable); a sketch of the bootstrap step follows below.
- POOL was building on Oct 6 with hierarchical dependency support.
- That of course leaves lots to do before we have a real system, but it indicates we are not talking about a major development effort.
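
For concreteness, here is a minimal sketch of what such a bootstrap step could look like. It is an assumption for illustration, not the actual prototype script; in particular the config/ location of the m4 macro files is hypothetical:

============ bootstrap (hypothetical sketch):
#!/bin/sh
# Gather the per-package dependency macros (lcg_*.m4) into aclocal.m4,
# where autoconf picks them up automatically, then generate and run
# configure to resolve dependencies and write the build configuration.
cat config/lcg_*.m4 > aclocal.m4
autoconf
./configure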

Characteristics of the prototype

- Hierarchical dependencies are resolved once, at configure time, not at build time, so make is fast (and the configure step is as well).
- The configure step ("bootstrap") takes ~10 sec.
- Dependency handling is based on the recursive text substitution of m4, as used by automake; dependencies resolve into included makefile fragments (see the sketch after this list).
- A no-op make on POOL + examples takes ~7 sec.
- Uses recursive make; the plan is to try and compare an include-based single makefile, as long as package-level make can still be supported.
- Make can be done at the top level (POOL_x_x_x/src) or at lower levels.
- Bootstrap generates setup.csh for runtime environment setup.
- The top appwork level provides general environment setup; this will be the basis for supporting multiple development projects in one environment, e.g. concurrently modifying chunks of SEAL and POOL and developing an application using SEAL and POOL.
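
As an illustration of what configure-time resolution produces, a hypothetical excerpt of the generated configuration that make includes is sketched below. The variable lcg_use_persistencysvc and the include mechanism appear on the following slides; the exact contents of config.mk and the $(MKFRAGDIR) location are assumptions:

============ config.mk (hypothetical excerpt):
# Written at configure time. Each lcg_use_x variable holds the fully
# expanded, flat list of makefile fragments that package x depends on,
# so make itself does no dependency resolution.
lcg_use_persistencysvc = \
  $(MKFRAGDIR)/lcg_SealBase.mk $(MKFRAGDIR)/lcg_PluginManager.mk \
  $(MKFRAGDIR)/lcg_POOLCore.mk $(MKFRAGDIR)/lcg_StorageSvc.mk \
  $(MKFRAGDIR)/lcg_FileCatalog.mk $(MKFRAGDIR)/lcg_PersistencySvc.mk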

BuildFile vs. Makefile

The Makefile is similar to the corresponding BuildFile. Dependencies are not expressed directly in the Makefile (at least not for those packages used by other packages), but in the m4 based configure information. Packages using package X, and package X itself, express the dependency and trigger the right .mk includes via include $(lcg_use_x).

================ BuildFile:
<include_path path=.></include_path>
<export autoexport=true>
  <lib name=lcg_PersistencySvc>
</export>
# Dependency on StorageSvc
<use name=StorageSvc></use>
# Dependency on FileCatalog
<use name=FileCatalog></use>
# This provides at least one plugin
PLUGIN_MODULE := PersistencySvc

================ Makefile:
include $(SRCTOPDIR)/config.mk
# Dependencies
include $(lcg_use_persistencysvc)
# This provides at least one plugin
PLUGIN_MODULE := PersistencySvc
LIBS = liblcg_PersistencySvc.so
all: $(LIBS) plugin

m4 and mk code written for POOL, SEAL, external packages

-rw-r--r-- 1 wenaus zp  114 Oct  6 08:36 lcg_athenaexampledict.mk
-rw-r--r-- 1 wenaus zp   92 Oct  6 00:03 lcg_attributelist.mk
-rw-r--r-- 1 wenaus zp   83 Oct  6 12:37 lcg_collection.mk
-rw-r--r-- 1 wenaus zp   74 Oct  6 00:06 lcg_datasvc.mk
-rw-r--r-- 1 wenaus zp  125 Oct  6 08:35 lcg_examplebase.mk
-rw-r--r-- 1 wenaus zp   82 Oct 15 00:57 lcg_exampledict.mk
-rw-r--r-- 1 wenaus zp   86 Oct  6 00:03 lcg_filecatalog.mk
-rw-r--r-- 1 wenaus zp   77 Oct  5 23:47 lcg_poolcore.mk
-rw-r--r-- 1 wenaus zp   95 Oct  6 00:05 lcg_persistencysvc.mk
-rw-r--r-- 1 wenaus zp   92 Oct  5 23:25 lcg_pluginmanager.mk
-rw-r--r-- 1 wenaus zp   82 Oct 15 00:57 lcg_pointerdict.mk
-rw-r--r-- 1 wenaus zp   76 Oct  6 08:32 lcg_poolexamples.mk
-rw-r--r-- 1 wenaus zp   83 Oct  6 00:16 lcg_reflection.mk
-rw-r--r-- 1 wenaus zp  104 Oct  6 00:15 lcg_reflectionbuilder.mk
-rw-r--r-- 1 wenaus zp   77 Oct  5 23:26 lcg_sealbase.mk
-rw-r--r-- 1 wenaus zp   83 Oct  5 23:26 lcg_sealkernel.mk
-rw-r--r-- 1 wenaus zp   89 Oct  5 23:26 lcg_sealplatform.mk
-rw-r--r-- 1 wenaus zp   83 Oct  6 00:04 lcg_storagesvc.mk
-rw-r--r-- 1 wenaus zp  103 Oct 14 22:13 lcg_tinyeventmodeldict.mk
-rw-r--r-- 1 wenaus zp  265 Oct  5 12:27 lcg_aida.m4
-rw-r--r-- 1 wenaus zp  505 Oct  5 12:21 lcg_boost.m4
-rw-r--r-- 1 wenaus zp  149 Oct  5 23:29 lcg_boost.mk
-rw-r--r-- 1 wenaus zp  511 Oct  5 13:23 lcg_bz2lib.m4
-rw-r--r-- 1 wenaus zp  513 Oct  5 12:14 lcg_clhep.m4
-rw-r--r-- 1 wenaus zp 3481 Oct 18 16:02 lcg_config.m4
-rw-r--r-- 1 wenaus zp  889 Oct 16 23:22 lcg_dependencies.m4
-rw-r--r-- 1 wenaus zp  614 Oct  6 00:44 lcg_edgrls.m4
-rw-r--r-- 1 wenaus zp  214 Oct  6 00:31 lcg_edgrls.mk
etc.

Expressing dependencies

============ lcg_pool.m4:
m4_define([req_persistencysvc],[req_storagesvc req_filecatalog] AG_/lcg_PersistencySvc.mk)
m4_define([req_storagesvc],[req_poolcore req_reflectionbuilder req_pluginmanager] AG_/lcg_StorageSvc.mk)
m4_define([req_poolcore],[req_sealkernel req_uuid] AG_/lcg_POOLCore.mk)
lcg_use_persistencysvc="req_persistencysvc"

============ lcg_seal.m4:
m4_define([req_sealkernel],[req_sealbase req_pluginmanager req_boost] AG_/lcg_SealKernel.mk)
m4_define([req_reflectionbuilder],[req_reflection] AG_/lcg_ReflectionBuilder.mk)
m4_define([req_reflection],AG_/lcg_Reflection.mk)
m4_define([req_pluginmanager],[req_sealbase] AG_/lcg_PluginManager.mk)

req_x is recursively expanded until it is a string of .mk files to be included via include $(lcg_use_x), as traced below.
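
To make the mechanism concrete, here is an illustrative trace of how req_persistencysvc expands, using the definitions above. It is a reconstruction, not from the slides: req_filecatalog, req_uuid, req_sealbase and req_boost are defined in m4 files not shown, and the fragment names in the final line assume the same naming pattern.

============ expansion of req_persistencysvc (illustrative trace):
req_persistencysvc
-> [req_storagesvc req_filecatalog] AG_/lcg_PersistencySvc.mk
-> [[req_poolcore req_reflectionbuilder req_pluginmanager] AG_/lcg_StorageSvc.mk req_filecatalog] AG_/lcg_PersistencySvc.mk
-> ... and so on, until only .mk files remain:
AG_/lcg_SealBase.mk AG_/lcg_SealBase.mk AG_/lcg_PluginManager.mk AG_/lcg_Boost.mk AG_/lcg_SealKernel.mk AG_/lcg_UUID.mk AG_/lcg_POOLCore.mk AG_/lcg_Reflection.mk AG_/lcg_ReflectionBuilder.mk AG_/lcg_SealBase.mk AG_/lcg_PluginManager.mk AG_/lcg_StorageSvc.mk AG_/lcg_FileCatalog.mk AG_/lcg_PersistencySvc.mk

Note the repeated entries (e.g. AG_/lcg_SealBase.mk, reached via req_sealkernel and via req_pluginmanager): such redundant includes are why the fragments on the next slide are reentry protected.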

Typical makefile include fragment

============ lcg_poolcore.mk:
ifneq ($(lcg_poolcore),yes)
LDFLAGS += -llcg_poolcore
lcg_poolcore=yes
endif

============ lcg_uuid.mk:
ifneq ($(lcg_uuid),yes)
INCLUDES += -I$(UUIDINCDIR)
LDFLAGS += -I$(UUIDINCDIR) -L$(UUIDLIBDIR) -luuid
lcg_uuid=yes
endif

The fragments are reentry protected because, at the moment, multiple instances of includes arising from redundant dependencies are not filtered (trivial to add).
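
For illustration, the filtering the slide calls trivial could be done at configure time, once the req_x macros are fully expanded. A hypothetical sketch, not from the slides:

============ dedup (hypothetical sketch):
#!/bin/sh
# Collapse an expanded fragment list to unique entries, keeping the
# first occurrence of each so that dependency order is preserved.
echo "$1" | tr ' ' '\n' | awk '!seen[$0]++' | tr '\n' ' '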

Locating software

Release configuration (package versions) is obtained from SCRAM, expressed (currently) in lcg_versions.m4:

============ lcg_versions.m4 (excerpt):
AC_DEFUN([LCG_VERSIONS],[dnl
AC_SUBST(lcg_ver_pool,POOL_1_3_0)
AC_SUBST(lcg_ver_pi,PI_0_4_1)
AC_SUBST(lcg_ver_seal,SEAL_1_1_0)
AC_SUBST(lcg_ver_gcc3,3.2)
AC_SUBST(lcg_ver_sockets,1.0)
AC_SUBST(lcg_ver_uuid,1.32)
...

Software is located using the standard autoconf approach: flexible, powerful, and with convenient site customization. It works well in conjunction with SPI's automated distribution tool.

============ locating MySQL (excerpt):
AC_DEFUN([LCG_MYSQL],[dnl
AC_REQUIRE([LCG_CONFIG])
MYSQLINCDIR=[$LCGEXT/mysql/$lcg_ver_mysql/$ARCHNAME/include]
AC_CHECK_FILE([$MYSQLINCDIR/mysql.h],,
  [AC_MSG_ERROR([cannot find MySQL headers in $MYSQLINCDIR])])
AC_SUBST(MYSQLINCDIR)dnl
...
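
The same pattern extends naturally to other externals. As a hedged illustration following the LCG_MYSQL excerpt above (an assumption, not code from the slides), a macro locating the uuid library used by lcg_uuid.mk might look like:

============ LCG_UUID (hypothetical sketch):
AC_DEFUN([LCG_UUID],[dnl
AC_REQUIRE([LCG_CONFIG])
# Locate headers and libraries under the external software area
UUIDINCDIR=[$LCGEXT/uuid/$lcg_ver_uuid/$ARCHNAME/include]
UUIDLIBDIR=[$LCGEXT/uuid/$lcg_ver_uuid/$ARCHNAME/lib]
AC_CHECK_FILE([$UUIDINCDIR/uuid/uuid.h],,
  [AC_MSG_ERROR([cannot find uuid headers in $UUIDINCDIR])])
AC_SUBST(UUIDINCDIR)dnl
AC_SUBST(UUIDLIBDIR)dnl
])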

Using the prototype

tar xzf appwork.tar.gz
cd appwork
make
(if the makefiles were in the repository you would do: scram project POOL POOL_1_3_0)
(or with no SCRAM: mkdir POOL_1_3_0; cd POOL_1_3_0; echo "SCRAM_PROJECTNAME=POOL" > .SCRAM)
(instead, there is a pre-installed POOL_1_3_0, made by scram project, with the makefiles)
cd POOL_1_3_0
(if the makefiles were in the repository: cvs co -r POOL_1_3_0 src)
cd src
./bootstrap
source setup.csh
make
rehash
SimpleWriter.exe

The build is done under POOL_1_3_0/build/rh73_gcc32/; libraries and exes are installed under POOL_1_3_0/rh73_gcc32/.

Tarball: /afs/cern.ch/user/w/wenaus/public/appwork-dist/appwork.tar.gz (see the README). At your own risk; not adequately tested yet. Must be run at CERN.

ToDo on the prototype

- Test and refine the tarball distribution and developer/user environment
- Flexible control over which software installations are and are not used in the environment: use, or don't use, centrally installed releases, and which ones; site-specific configurations
- Add POOL tests
- Extend to SEAL
- Support evaluations and bugfixes, refine
- Add testing support
- Windows (cygwin)
- Try out include-based rather than recursive make

Proposal

- Retain SCRAM for now, but halt improvements on its build system.
  - Given the experience to date, and the CMS move, I think we can conclude that proceeding with the SCRAM build system is extremely unlikely to emerge as the optimal choice.
  - The rest of SCRAM works stably and is the basis of our infrastructure; minimize disruption and keep it, at least unless/until cost vs. benefit drives us away.
- Address the immediate needs of LHCb in the way begun in SEAL: adding CMT requirements files to POOL as well as SEAL, and using CMT on Windows.
- Continue to allow/support CMT requirements files independently of the in-house build tool decision.
  - If it is manageable in terms of maintainability and manpower; assess this.
  - Since we have, and will continue to have, two experiments using CMT.
  - As part of our effort to better support experiment integration.
- Evaluate and refine the autoconf+make prototype, with a view to:
  - Adopting this approach if it proves to meet our needs with a significantly simpler system than the other options. Projects should evaluate it, and its impact on them.
  - Using it at least for an end-user layer, to avoid exposing SCRAM or CMT to users, if it does not prove to be the best choice for our in-house build system.
- Timescale TBD.