Site specific monitoring of multiple information systems - the HappyFace Project
Published in J. Phys.: Conf. Ser.
Site specific monitoring of multiple information systems - the HappyFace Project

Volker Büge, Viktor Mauch, Günter Quast, Armin Scheurer, Artem Trunov
Karlsruhe Institute of Technology, Postfach 6980, Karlsruhe, Germany
[email protected]

Abstract. An efficient administration of computing centres requires sophisticated tools for the monitoring of the local infrastructure. Sharing such resources in a grid infrastructure, like the Worldwide LHC Computing Grid (WLCG), entails a large number of external monitoring systems offering information on the status of the services and user jobs at a grid site. This flood of information from many different sources delays the identification of problems and complicates the local administration. In addition, the web interfaces for access to the site-specific information are often slow and uncomfortable to use. A meta-monitoring system which automatically queries the relevant monitoring systems can provide fast and comfortable access to all important information for the local administration. It also becomes feasible to easily correlate information from different sources, and easy access is offered to non-expert users as well. In this paper, we describe the HappyFace Project, a modular software framework for this purpose. It queries existing monitoring sources and processes the results to provide a single point of entry for information on a grid site and its specific services.

1. Current situation of the monitoring of a grid centre
Current computing centres have to provide a multiplicity of services for different communities and user groups. Most of these services have to be offered with stringent requirements concerning their quality and availability. This includes the quality of data transfers and the performance and reliability of mass storage systems, as well as the maintenance of the requested software and service installations.
Moreover, the user communities are increasingly organised internationally, relying on multiple computing centres all over the world. To avoid downtimes in such a complex environment or, if necessary, to keep them as short as possible, a multitude of monitoring applications is needed. They have to provide detailed information about the current status of the system, allowing fast detection of and thereby a real-time response to problems. An example of such a distributed user community are the particle collider experiments in modern High Energy Physics, which have to deal with huge data rates. The experiments at the Large Hadron Collider (LHC) [1] at CERN [2] will create a data output on the order of petabytes per year. Furthermore, it is essential for scientists all over the world to generate and analyse these data, including corresponding Monte Carlo simulations to compare expected results with real data from the experiments.

© 2010 IOP Publishing Ltd
The requirements concerning computing power and storage space led to the decision by the LHC community to build a network of computing centres, called the Worldwide LHC Computing Grid (WLCG) [3]. Data storage and processing power from different computing centres are organised to provide a working environment for an international collaboration. Each integrated computing centre provides standardised interfaces and software environments for the WLCG so that the workload can be distributed equally among all available sites. The users of the grid are members of Virtual Organisations (VOs), abstract entities that group users and resources in a common administrative domain. In the Worldwide LHC Computing Grid, the VOs are associated with the different experiments. The monitoring of such grid-enabled computing centres is an even more complex undertaking. To illustrate these circumstances, the particularities with respect to monitoring are described in detail for a centre of the Worldwide LHC Computing Grid. Such a grid site has very strict obligations concerning operational availability, necessary due to the reception and processing of the detector data coming from CERN. Several domains have to be monitored for a successful operation. First of all, local aspects like the operation of the hardware and a stable infrastructure of the computing centre are the basis. Furthermore, each Virtual Organisation requires its own working environment and software services for stable operation, which have to be monitored, too. On the other hand, there are multiple local grid services, essential for the connectivity to the grid, which are monitored centrally by the WLCG. Moreover, the functionality of the computing centre for the global VO is monitored as well. An example of this is the status of data transfers between the CMS grid centres, which is coordinated and monitored by PhEDEx [4].
All these local and global components have to be fully operational to provide a stable working environment for the different VOs. However, ascertaining the current status of such a grid site relies on information from many different monitoring sources and is quite challenging. A look at current monitoring systems of the WLCG reveals numerous difficulties, especially concerning the monitoring of one single grid site and its services. First of all, there are many valuable monitoring applications that provide rather unstructured information. For non-experts it is very hard to find out which information is relevant or responsible for a specific problem and where it is located. Furthermore, each monitoring application uses its own technology for presenting data values or graphical output. This also makes it difficult to identify correlations between different error sources. Additionally, the totality of all monitoring systems is uncomfortable to use: a site operator has to check a number of different services to access all relevant information. Nearly every monitoring system of the WLCG is designed to store information on all grid sites, so it is necessary to adjust the settings in the web interfaces according to one's own requirements. Another handicap is their high latency. Several monitoring websites, especially complex systems with a database backend, often need more than a minute to submit the settings, query the database and finally deliver the desired information. This unnecessarily increases the time required for a regular site check.

2. A possibility to improve the situation: meta-monitoring
A system with an adaptable site-specific configuration that automatically queries the relevant sources and provides fast and comfortable access to all required information for the local administration would remedy these shortcomings.
Such a meta-monitoring system does not act as an additional source of monitoring information; it rather creates a smart summary from existing ones. A preferable meta-monitoring system can be defined by the following properties:

flexible: The system should be built as generally as possible to support all common system environments.
only one web site: The final output should be one single website showing all requested information from the existing sources.

up-to-date: The complete monitoring information should be renewed at a regular time interval.

history functionality: It should be possible to use basic history functions, like calling up the site status for a specific time or creating the status progression over a specific time scale. This functionality can help to find correlations between problems reported by different monitoring sources at different times.

fast access: To minimise the load time of the final output it is important to use a system with a very simple architecture, optimised for fast data access. All information, including external plot images, has to be stored in a cache or alternatively in a database; the latter should prioritise high performance.

comfortable: For the highest possible comfort, each result should be accessible with less than three mouse clicks.

simple warning system: To quickly recognise critical pieces of information, the framework should provide a simple visualisation system to highlight the results. It can be realised by a display of smileys, arrows or a traffic-light logic, which divides the site status into well-defined warning levels.

customizable: The meta-monitoring framework should allow an easy implementation of customised tests. It should be possible to define alert algorithms according to one's own requirements as well as to share tests between different instances.

The customers for such a system are site operators or administrators who want to automate site checks and need a quick view of the status of their computing centre, taking defined monitoring sources into account. Furthermore, responsible persons from the Virtual Organisations may profit from such a system, as it may also summarise information from all important services of their collaboration.
Finally, grid users could get a less detailed view of the system to recognise possible problems of a grid site in case of failure of their jobs.

3. The HappyFace Project
One answer to these requirements is the HappyFace Project [5]. This modular software framework is designed to query existing monitoring sources, to process the gathered information and to create an overview of the site status and its services.

3.1. Design
To keep the system as flexible as possible and not to be limited by the actual choice of visualisation, the acquisition of information and its visualisation are fully decoupled. For each cycle, all collected data as well as the derived results are stored in a database. The visualisation of the stored information then simply proceeds via a fast query of the local database. Besides offering a dynamic HTML webpage, it is also possible to export dedicated information in other formats like XML [6]. Moreover, this decoupling intrinsically offers the requested history functionality. The framework itself is split into two parts. The HappyCore provides the basic functionalities of the system, such as the regular execution of all active modules, access to the database and its initialisation, and the composition of the final output as a dynamic web page.
Moreover, it holds basic modules covering functionalities needed by the different tests. In this picture, each test corresponds to a test module with its own configuration and algorithms for processing the collected information. The totality of all these specialised test modules constitutes the second part of the framework. After initialisation, a test module collects the requested monitoring information from an external source, processes it and stores the output in the database. Besides the results, additional information like a rating value is also saved. This value is a float between 0 and 1, where a low value indicates possible problems. It is used to combine the status of different tests into a global value. To prioritise modules, a weight can be specified. This combination is performed by user-defined functions of the HappyCore. In addition, each module creates a PHP [7] fragment for its web output, including the logic for the database query. The framework finally collects all these PHP fragments and composes the final website. As the information is visualised dynamically, features like the history functionality are intrinsically included. To provide the most flexible management and optimal customisability, each module is configured by two configuration files of its own. The first consists of default settings ensuring that the module is properly executed once activated. The second file gives the possibility to adapt the module to the local particularities of a site. A very simple example is the HappyCore module SinglePlot, which provides the functionality to download a plot image from an external website and store it locally. It prepares the corresponding hyperlink for the database and finally creates the PHP logic to present the desired plot on the monitoring web page. Each test module which needs such functionality can inherit from SinglePlot. The only adaptation required is to set the URL, the test name and its description.
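The SinglePlot pattern described above can be sketched as follows. Only the name SinglePlot comes from the text; the attribute names, the download logic and the derived module are illustrative assumptions, not the actual HappyFace API.

```python
# Illustrative sketch of a SinglePlot-style HappyFace module; all names
# except SinglePlot are assumptions, and the real framework API differs.
import urllib.request

class SinglePlot:
    """Downloads one plot image from an external website and stores it
    locally; the path of the local copy would later go into the database."""
    url = None          # set by the concrete test module
    name = "unnamed"
    description = ""

    def run(self):
        local_path = f"/tmp/{self.name}.png"
        with urllib.request.urlopen(self.url) as response, \
                open(local_path, "wb") as out:
            out.write(response.read())
        return local_path

class CMSJobEfficiencyPlot(SinglePlot):
    # The only adaptation a derived test module needs: URL, name, description.
    url = "https://example.org/plots/cms_job_efficiency.png"
    name = "cms_job_efficiency"
    description = "Efficiency of all CMS jobs during the last 24 hours"
```

In this sketch, a new plot-based test is a three-attribute subclass; everything else is inherited, which is what keeps the test modules small.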
Another basic module is PhpPlotDashboard, for the download and presentation of statistics plots from the Dashboard monitoring application [8]. The requested Dashboard graphic can be obtained via a set of dedicated PHP settings, which are prepared in this module and initialised with settings for the local instance. A test module inherits from this class and therefore only needs to specify the settings which differ from the defaults. The advantage of this architecture is that only a few adaptations of the PhpPlotDashboard configuration are sufficient to migrate the totality of all derived test modules from one site to another. In case of changes at the monitoring source, only the base class has to be adapted, not the final modules. This modular layout intrinsically leads to very small test modules, excludes code duplication and thereby considerably eases the maintenance of the system. It also allows modules to be easily shared between different sites and their maintenance to be distributed. Figure 1 shows an extract of the inheritance tree. The base class is ModuleBase, which contains the functionality required by each module, such as database access or the processing of the configuration files. The sub-module SAM contains the logic to parse the XML output of the SAM Monitoring Application [9]. The next layer of sub-modules provides algorithms to rate the status of the Computing and Storage Elements. A similar structure is realised with the sub-modules dcache and PoolStatus. To activate a module, it only has to be specified in the global configuration of HappyFace. To ease navigation, categories can be specified which hold the final output of each module, keeping the ordering of the configuration file. Note that the categories are defined only for the webpage. This allows an easy migration of module output between categories when needed.
For each category, a rating function can be specified, used for the calculation of the global category status based on the status of each member module and its weight. For the monitoring of the German CMS T1 centre GridKa [10], categories for the local infrastructure, the CMS data transfers via the production and debugging instances as well as information on
the batch and mass storage system are visualised. The rating is performed such that the category status is identical to the lowest status of its test modules. A screenshot of the HappyFace instance used for the monitoring of GridKa is presented in Figure 2. It shows the output of two test modules, arranged in the category Batch System. The two plots show the efficiency of all CMS jobs during the last 24 hours. The second module monitors the jobs terminated at and submitted to this centre via grid means. With the time control bar at the top of the webpage, the user is able to query the complete site status for a specific time. This functionality allows searching for status changes back in time and detecting time-dependent correlations between different sources. To make monitoring as comfortable as possible, the complete framework design places a high value on providing information with minimal usage of computer input devices. Each test result is reachable in less than three mouse clicks.

Figure 1. Extract of the inheritance tree of the HappyFace Project. ModuleBase provides basic functionalities for each module. The sub-module SAM contains the logic to parse the XML output of the SAM Monitoring Application. The next layer of sub-modules provides algorithms to rate the status of the Computing and Storage Elements. A similar structure is realised with the sub-modules dcache and PoolStatus.

3.2. Technical Details of the Implementation
The complete framework is written in the scripting language Python [11]. To allow an easy implementation of new modules and an extension of the functionality, it has a strictly modular layout, realised using the concept of inheritance. For the configuration, the standard Python ConfigParser is used. Hereby, each module has access to the configuration of all classes from which it inherits. If a configuration variable is set several times, the value of the last occurrence in the inheritance tree is chosen.
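The last-occurrence-wins rule for layered configuration can be reproduced with the standard configparser, which merges successive reads so that later values override earlier ones. This is a minimal sketch: the section and option names are invented for illustration, and the file contents are inlined as strings rather than read from the module's two configuration files.

```python
# Sketch of layered module configuration with Python's configparser:
# the module's default settings are read first, the site-local file last,
# so an option set in both takes the value of the last occurrence.
from configparser import ConfigParser

default_cfg = """
[plot_module]
url = https://example.org/default.png
timeout = 30
"""

local_cfg = """
[plot_module]
url = https://example.org/gridka.png
"""

parser = ConfigParser()
parser.read_string(default_cfg)   # defaults shipped with the module
parser.read_string(local_cfg)     # site-local overrides, read last

print(parser.get("plot_module", "url"))      # value from the local file
print(parser.get("plot_module", "timeout"))  # value from the defaults
```

Reading the files in inheritance order is all that is needed: the most derived configuration automatically shadows the base settings, option by option.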
The decoupling of data acquisition, processing and visualisation is realised with an SQLite [12] database, in which all kinds of ASCII data are stored. To keep the database as fast as possible, binary files like plot images are stored in the local file system, and only the path to the local copy of the file is accessible via the database. The final webpage consists of the PHP fragments from the different modules, which contain the complete logic to query the database depending on the point in time requested by the user. Once a time is specified, the timestamp in the database closest to the given one is determined and the corresponding information is displayed. By default, the webpage displays the latest information and updates every five minutes. The workflow of the framework is illustrated in Figure 3. Note that the current web output based on PHP has been chosen in order to minimise the requirements on the web server hosting the HappyFace Project; it is rather easy to add additional output webpages based on other technologies. All modules are executed at the same time in a multi-threading environment. Besides the time saved, this procedure has the advantage that modules exceeding a global timeout setting can be terminated by the system. This is important for handling the case that an external source has temporary problems providing the requested data. Other possible internal errors of the modules usually concern inconsistent or missing information. For these cases, error-handling routines catch the failure and display an error message in the final output.

Figure 2. Screenshot of the HappyFace instance used for the monitoring of the German CMS T1 centre, GridKa. It shows the output of two test modules, arranged in the category Batch System. The two plots show the efficiency of all CMS jobs during the last 24 hours. The second module monitors both the terminated and the submitted jobs of this centre via grid means. With the time control bar at the top of the webpage, the user is able to query the complete site status at a specific time.

Figure 3. Workflow of the framework. Each active module collects the requested external data, processes and rates it. The relevant results are stored in the database. An output fragment is provided by each module; these are later composed into the final output webpage. This fragment contains the complete SQL query logic, which is used when the website is visited and the status for a specific timestamp is requested.

3.3. Requirements
The HappyFace Project requires only very basic programs and libraries commonly available under GNU/Linux; thus it can easily run on a desktop workstation. A mechanism for execution at periodic time intervals, such as cron jobs, as well as Python (v2.5.2 or newer) are required. Furthermore, the database engine SQLite3 and the Python package SQLObject [13] provide the database functionality. The web server has to support the scripting language PHP (v5.x).

4. Related Work
Besides this approach there are further monitoring systems providing availability and status information on grid sites and their services. However, their overall aim differs from that of the HappyFace Project. Most of them, like the SiteView GridMap monitoring tool [14], try to provide a grid-wide monitoring information centre for all existing sites. The main difference of these, compared with the HappyFace Project, is their centralised maintenance. People using such services have no direct influence on the rating algorithms, the monitoring information used or the display output. Furthermore, centralised tools have a natural limitation in accessing monitoring information. In contrast, a local monitoring instance of a grid site can be configured as desired. Local information, which is perhaps not publicly released, can also be taken into account for a more sophisticated status rating.
The responsible site administrators can realise their own analysis logic, alert levels and display output. The HappyFace Project implements exactly this concept. The framework allows using existing modules and provides the possibility to build and use new modules if required. Every aspect of a HappyFace monitoring instance, from the choice of relevant information via the rating through to the display output, is configurable and under the control of the administrator using the tool.
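One way such site-defined rating logic might look is sketched below: module ratings are floats between 0 and 1 (low means a possible problem), a category status is derived from its member modules and their weights, and a traffic-light logic maps the status to warning levels. The function names, the weight convention (weight 0 excludes a module) and the thresholds are illustrative assumptions, not the actual HappyFace functions.

```python
# Sketch of a site-defined category rating and traffic-light warning logic;
# names, weight handling and thresholds are illustrative assumptions.

def category_status(ratings_and_weights):
    """One possible choice, matching the text's 'lowest status' rule:
    a category is only as healthy as its worst member module.
    A weight of 0 excludes a module from the rating entirely."""
    rated = [r for r, w in ratings_and_weights if w > 0]
    return min(rated) if rated else 1.0

def warning_level(status):
    """Traffic-light logic dividing the status into warning levels."""
    if status >= 0.8:
        return "ok"
    if status >= 0.5:
        return "warning"
    return "critical"

modules = [(0.9, 1.0), (0.4, 1.0), (0.2, 0.0)]  # (rating, weight) per module
status = category_status(modules)               # 0.2 excluded by weight 0
print(status, warning_level(status))
```

Swapping `category_status` for a weighted average, or moving the thresholds, is exactly the kind of local customisation the text describes.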
5. Conclusion & Outlook
An efficient administration of computing centres requires sophisticated tools for the monitoring of the local infrastructure. Especially if the centre is connected to the grid, a multitude of internal and external monitoring sources has to be checked to obtain reliable information on the status of a grid site. However, these monitoring sources have limitations with respect to their response time, and the needed information is often not clearly accessible. One possibility to improve this situation for site administrators is a meta-monitoring system, which automatically queries the relevant sources and provides fast and comfortable access to all required information. It is not an additional source of information but offers a smart summary of existing ones. HappyFace fulfils all requirements that have been defined for a preferable meta-monitoring system. Due to the decoupling of the acquisition and visualisation of the information, the system is very flexible and can be adapted for many different monitoring purposes. Moreover, due to its modular structure, the development of new modules and their maintenance can be shared between different instances. The modular structure of HappyFace fits perfectly to the fact that most grid centres have to monitor the same sources, only with slightly different PHP requests or XML data. Here, it is often sufficient to configure one parameter in the HappyCore modules to adapt multiple existing test modules to the local particularities. The same holds once the format of the output of a monitoring source changes, as only a modification of the corresponding core module is necessary. Once adapted, it can be published and all other instances can update. To benefit most from this plug-in system, HappyFace is maintained in a Subversion repository. Each available module comes with a default configuration that works out of the box once activated at a site.
The adaptation of a module to the particularities of a site takes place in a local configuration file. As soon as a fix in one of the basic modules is available, the instance can be updated without losing the local configuration. The same holds for new modules. The HappyFace Project has already been used for the monitoring of the CMS activities at the German CMS T1 centre GridKa during several computing challenges. Especially the clear arrangement of the required information helped non-expert shift crews to monitor such a complex system. Moreover, HappyFace instances are used for the monitoring of the WLCG centres in Aachen, Göttingen and Hamburg/DESY by local groups of the ATLAS and CMS experiments. Besides the improvement of existing modules and the development of new ones, the XML export of the category status as well as other key information will become available in the near future. Having defined categories and rating functions, such an XML output can be used to summarise the status of several computing centres, allowing a central shift crew to monitor multiple centres. Once a failure is detected in one of the categories of a centre, the local HappyFace instance can then provide more detailed information. With these functionalities implemented, HappyFace contributes to an efficient monitoring of complex grid centres and eases the discovery of failures, leading to a better reliability and availability for the user communities.

Acknowledgment
We thank Stefan Birkholz, Friederike Nowak and Philip Sauerland for fruitful discussions, testing of the recent developments and the maintenance of some modules. Furthermore, we acknowledge the financial support of the Bundesministerium für Bildung und Forschung (BMBF) as well as the Helmholtz Alliance "Physics at the Terascale".
References
[1] Large Hadron Collider
[2] Organisation Européenne pour la Recherche Nucléaire (CERN)
[3] Worldwide LHC Computing Grid
[4] PhEDEx
[5] HappyFace Project
[6] XML (Extensible Markup Language)
[7] PHP
[8] ARDA Dashboard
[9] Site Availability Monitoring (SAM)
[10] GridKa
[11] Python
[12] SQLite
[13] SQLObject
[14] SiteView
XpoLog Center Suite Log Management & Analysis platform
XpoLog Center Suite Log Management & Analysis platform Summary: 1. End to End data management collects and indexes data in any format from any machine / device in the environment. 2. Logs Monitoring -
Print Audit 6 Technical Overview
Print Audit 6 Technical Overview Print Audit 6 is the most accurate and powerful suite of print tracking and print management products available. It is used to analyse, reduce and recover costs along with
Analyzing Network Servers. Disk Space Utilization Analysis. DiskBoss - Data Management Solution
DiskBoss - Data Management Solution DiskBoss provides a large number of advanced data management and analysis operations including disk space usage analysis, file search, file classification and policy-based
Building Views and Charts in Requests Introduction to Answers views and charts Creating and editing charts Performing common view tasks
Oracle Business Intelligence Enterprise Edition (OBIEE) Training: Working with Oracle Business Intelligence Answers Introduction to Oracle BI Answers Working with requests in Oracle BI Answers Using advanced
1 (11) Paperiton DMS Document Management System System Requirements Release: 2012/04 2012-04-16
1 (11) Paperiton DMS Document Management System System Requirements Release: 2012/04 2012-04-16 2 (11) 1. This document describes the technical system requirements for Paperiton DMS Document Management
DEVELOPMENT OF AN ANALYSIS AND REPORTING TOOL FOR ORACLE FORMS SOURCE CODES
DEVELOPMENT OF AN ANALYSIS AND REPORTING TOOL FOR ORACLE FORMS SOURCE CODES by Çağatay YILDIRIM June, 2008 İZMİR CONTENTS Page PROJECT EXAMINATION RESULT FORM...ii ACKNOWLEDGEMENTS...iii ABSTRACT... iv
Digital Asset Management A DAM System for TYPO3
Digital Asset Management A DAM System for TYPO3 Published under the GNU General Public License Copyright 2005 René Fritz, Daniel Hinderink 1 What is Digital Asset Management A DAM system is a tool to handle
End to End Solution to Accelerate Data Warehouse Optimization. Franco Flore Alliance Sales Director - APJ
End to End Solution to Accelerate Data Warehouse Optimization Franco Flore Alliance Sales Director - APJ Big Data Is Driving Key Business Initiatives Increase profitability, innovation, customer satisfaction,
Advanced Event Viewer Manual
Advanced Event Viewer Manual Document version: 2.2944.01 Download Advanced Event Viewer at: http://www.advancedeventviewer.com Page 1 Introduction Advanced Event Viewer is an award winning application
TORNADO Solution for Telecom Vertical
BIG DATA ANALYTICS & REPORTING TORNADO Solution for Telecom Vertical Overview Last decade has see a rapid growth in wireless and mobile devices such as smart- phones, tablets and netbook is becoming very
Clarity Assurance allows operators to monitor and manage the availability and quality of their network and services
Clarity Assurance allows operators to monitor and manage the availability and quality of their network and services clarity.com The only way we can offer World Class Infocomm service is through total automation
GFI LANguard 9.0 ReportPack. Manual. By GFI Software Ltd.
GFI LANguard 9.0 ReportPack Manual By GFI Software Ltd. http://www.gfi.com E-mail: [email protected] Information in this document is subject to change without notice. Companies, names, and data used in examples
VX Search File Search Solution. VX Search FILE SEARCH SOLUTION. User Manual. Version 8.2. Jan 2016. www.vxsearch.com [email protected]. Flexense Ltd.
VX Search FILE SEARCH SOLUTION User Manual Version 8.2 Jan 2016 www.vxsearch.com [email protected] 1 1 Product Overview...4 2 VX Search Product Versions...8 3 Using Desktop Product Versions...9 3.1 Product
ServerView Inventory Manager
User Guide - English FUJITSU Software ServerView Suite ServerView Inventory Manager ServerView Operations Manager V6.21 Edition October 2013 Comments Suggestions Corrections The User Documentation Department
Enterprise Service Bus
We tested: Talend ESB 5.2.1 Enterprise Service Bus Dr. Götz Güttich Talend Enterprise Service Bus 5.2.1 is an open source, modular solution that allows enterprises to integrate existing or new applications
NetIQ. How to guides: AppManager v7.04 Initial Setup for a trial. Haf Saba Attachmate NetIQ. Prepared by. Haf Saba. Senior Technical Consultant
How to guides: AppManager v7.04 Initial Setup for a trial By NetIQ Prepared by Haf Saba Senior Technical Consultant Asia Pacific 1 Executive Summary This document will walk you through an initial setup
AMT - Asset Management Software: Solutions for Mining Companies
AMT - Asset Management Software: Solutions for Mining Companies AMT Mining Solutions AMT is a highly configurable asset management software tool built for mining and other heavy equipment operators. AMT
Authoring for System Center 2012 Operations Manager
Authoring for System Center 2012 Operations Manager Microsoft Corporation Published: November 1, 2013 Authors Byron Ricks Applies To System Center 2012 Operations Manager System Center 2012 Service Pack
Your Go! to intelligent operational management
Your Go! to intelligent operational management FareGo Data CS The heart of our system architecture FareGo Data CS is the centre for data acquisition and administration for your fare collection system.
The CMS analysis chain in a distributed environment
The CMS analysis chain in a distributed environment on behalf of the CMS collaboration DESY, Zeuthen,, Germany 22 nd 27 th May, 2005 1 The CMS experiment 2 The CMS Computing Model (1) The CMS collaboration
BIRT Document Transform
BIRT Document Transform BIRT Document Transform is the industry leader in enterprise-class, high-volume document transformation. It transforms and repurposes high-volume documents and print streams such
AdminToys Suite. Installation & Setup Guide
AdminToys Suite Installation & Setup Guide Copyright 2008-2009 Lovelysoft. All Rights Reserved. Information in this document is subject to change without prior notice. Certain names of program products
Status and Evolution of ATLAS Workload Management System PanDA
Status and Evolution of ATLAS Workload Management System PanDA Univ. of Texas at Arlington GRID 2012, Dubna Outline Overview PanDA design PanDA performance Recent Improvements Future Plans Why PanDA The
Cleopatra Enterprise. Software highlights. Cost Engineering
Cleopatra Enterprise Software highlights Cost Engineering Version : 1..0 Content 1 Introduction...3 2 Common features...3 3 Estimating...5 4 Workflow...6 5 Interfacing...7 6 Web sharing...8 7 Plugins...9
Information Technology Web Solution Services
Information Technology Web Solution Services Icetech 940 West North Avenue Baltimore, Maryland 21217 Tel: 410.225.3117 Fax: 410.225.3120 www. Icetech. net Hubzone Copyright @ 2012 Icetech, Inc. All rights
NAI Global s Technology Solutions
NAI Global s Technology Solutions One of the key aspects of any successful technology implementation is the ability to quickly bring to the user desktop, that critical information that is integrated across
Web Analytics Understand your web visitors without web logs or page tags and keep all your data inside your firewall.
Web Analytics Understand your web visitors without web logs or page tags and keep all your data inside your firewall. 5401 Butler Street, Suite 200 Pittsburgh, PA 15201 +1 (412) 408 3167 www.metronomelabs.com
Monitoring individual traffic flows within the ATLAS TDAQ network
Home Search Collections Journals About Contact us My IOPscience Monitoring individual traffic flows within the ATLAS TDAQ network This content has been downloaded from IOPscience. Please scroll down to
How To Use The Numara Track-It! Help Desk And Asset Management Solution
Welcome to the Numara Track-It! Evaluation Guide Page 1 of 23 INTRODUCTION The purpose of this is to give you an overview of Numara Track-It! so you can get started using the solution right away. Keep
Vanguard Knowledge Automation System
KNOWLEDGE AUTOMATION SYSTEM: OVERVIEW Vanguard Knowledge Automation System Turn routine processes into easy-to-use Web Apps Vanguard Knowledge Automation System lets you capture routine business processes
NETWORK MONITORING. Network Monitoring. Product brief. NETWORK MONITORING Logger Only
Network Monitoring 1 Network Monitoring Product brief Logger Only CONTENTS 1 Page 1. Introduction 2-3 2. Network Structure 4 3. Data Collection 5 4. Data Visualisation 6 5. Dashboard 7 6. Alarm Management
SOFTWARE TESTING TRAINING COURSES CONTENTS
SOFTWARE TESTING TRAINING COURSES CONTENTS 1 Unit I Description Objectves Duration Contents Software Testing Fundamentals and Best Practices This training course will give basic understanding on software
Planning the Migration of Enterprise Applications to the Cloud
Planning the Migration of Enterprise Applications to the Cloud A Guide to Your Migration Options: Private and Public Clouds, Application Evaluation Criteria, and Application Migration Best Practices Introduction
Table of Contents. 2015 Cicero, Inc. All rights protected and reserved.
Desktop Analytics Table of Contents Contact Center and Back Office Activity Intelligence... 3 Cicero Discovery Sensors... 3 Business Data Sensor... 5 Business Process Sensor... 5 System Sensor... 6 Session
Evaluation and implementation of CEP mechanisms to act upon infrastructure metrics monitored by Ganglia
Project report CERN Summer Student Programme Evaluation and implementation of CEP mechanisms to act upon infrastructure metrics monitored by Ganglia Author: Martin Adam Supervisors: Cristovao Cordeiro,
Symfony2 and Drupal. Why to talk about Symfony2 framework?
Symfony2 and Drupal Why to talk about Symfony2 framework? Me and why Symfony2? Timo-Tuomas Tipi / TipiT Koivisto, M.Sc. Drupal experience ~6 months Symfony2 ~40h Coming from the (framework) Java world
Business Benefits From Microsoft SQL Server Business Intelligence Solutions How Can Business Intelligence Help You? PTR Associates Limited
Business Benefits From Microsoft SQL Server Business Intelligence Solutions How Can Business Intelligence Help You? www.ptr.co.uk Business Benefits From Microsoft SQL Server Business Intelligence (September
SOA, case Google. Faculty of technology management 07.12.2009 Information Technology Service Oriented Communications CT30A8901.
Faculty of technology management 07.12.2009 Information Technology Service Oriented Communications CT30A8901 SOA, case Google Written by: Sampo Syrjäläinen, 0337918 Jukka Hilvonen, 0337840 1 Contents 1.
Mascot Integra: Data management for Proteomics ASMS 2004
Mascot Integra: Data management for Proteomics 1 Mascot Integra: Data management for proteomics What is Mascot Integra? What Mascot Integra isn t Instrument integration in Mascot Integra Designing and
IT INFRASTRUCTURE MANAGEMENT SERVICE ADDING POWER TO YOUR NETWORKS
IT INFRASTRUCTURE MANAGEMENT SERVICE ADDING POWER TO YOUR NETWORKS IT INFRASTRUCTURE MANAGEMENT SERVICES Nortech Remote management IT security Services provide around clock remote Management, real time
A Tool for Evaluation and Optimization of Web Application Performance
A Tool for Evaluation and Optimization of Web Application Performance Tomáš Černý 1 [email protected] Michael J. Donahoo 2 [email protected] Abstract: One of the main goals of web application
MONyog White Paper. Webyog
1. Executive Summary... 2 2. What is the MONyog - MySQL Monitor and Advisor?... 2 3. What is agent-less monitoring?... 3 4. Is MONyog customizable?... 4 5. Comparison between MONyog and other Monitoring
Realization of Inventory Databases and Object-Relational Mapping for the Common Information Model
Realization of Inventory Databases and Object-Relational Mapping for the Common Information Model Department of Physics and Technology, University of Bergen. November 8, 2011 Systems and Virtualization
World-wide online monitoring interface of the ATLAS experiment
World-wide online monitoring interface of the ATLAS experiment S. Kolos, E. Alexandrov, R. Hauser, M. Mineev and A. Salnikov Abstract The ATLAS[1] collaboration accounts for more than 3000 members located
GCE APPLIED ICT A2 COURSEWORK TIPS
GCE APPLIED ICT A2 COURSEWORK TIPS COURSEWORK TIPS A2 GCE APPLIED ICT If you are studying for the six-unit GCE Single Award or the twelve-unit Double Award, then you may study some of the following coursework
Rotorcraft Health Management System (RHMS)
AIAC-11 Eleventh Australian International Aerospace Congress Rotorcraft Health Management System (RHMS) Robab Safa-Bakhsh 1, Dmitry Cherkassky 2 1 The Boeing Company, Phantom Works Philadelphia Center
Sharperlight Web Interface
Sharperlight Web Interface www.sharperlight.com [email protected] Sharperlight Web Interface Published by philight Software International Pty Ltd All other copyrights and trademarks are the property
The Complete Performance Solution for Microsoft SQL Server
The Complete Performance Solution for Microsoft SQL Server Powerful SSAS Performance Dashboard Innovative Workload and Bottleneck Profiling Capture of all Heavy MDX, XMLA and DMX Aggregation, Partition,
Installing GFI Network Server Monitor
Installing GFI Network Server Monitor System requirements Computers running GFI Network Server Monitor require: Windows 2000 (SP4 or higher), 2003 or XP Pro operating systems. Windows scripting host 5.5
CA Virtual Assurance/ Systems Performance for IM r12 DACHSUG 2011
CA Virtual Assurance/ Systems Performance for IM r12 DACHSUG 2011 Happy Birthday Spectrum! On this day, exactly 20 years ago (4/15/1991) Spectrum was officially considered meant - 2 CA Virtual Assurance
