Preparing Databases for Network Traffic Monitoring

Liberios Vokorokos, Adrián Pekár, Norbert Ádám
Technical University of Košice, Faculty of Electrical Engineering and Informatics, Dept. of Computers and Informatics, Letná 9, Košice, Slovak Republic

Abstract
This paper deals with database-related problems which can occur during network traffic monitoring. When data are evaluated through a database, excessive response times can be observed. Since the time necessary to obtain the data from the database has a significant effect on the result of the monitoring, it is important to keep it as low as possible. In practice, the vast majority of monitoring tools have to deal with problems related to the database's excessive response time, so solving them is well justified. In this paper an alternative network monitoring tool, BasicMeter, is introduced, on which the exact problems and their solution are illustrated.

I. INTRODUCTION

Today's computer networks are designed to transmit video, sound and data at the same time. This kind of transmission is provided by converged networks, which, instead of setting up separate links for each traffic type, make do with a single converged link. To ensure the desired quality and performance of these networks, it is important to monitor their traffic. By measuring and analyzing network traffic parameters, the correct function of various multimedia applications can be ensured; the network and its users can be secured against external or internal attacks; Intrusion Detection Systems can be applied; the loss or leakage of sensitive information can be prevented; etc.
For this reason, monitoring has a significant role in the management of today's computer networks and their services. BasicMeter is a measuring tool which offers an alternative method of network traffic monitoring. It is developed by the MONICA research group at the Technical University of Košice. The main purpose of the tool is to measure network traffic parameters and analyze the data in real time. During the development of the tool, a considerable response time was noticed when obtaining data from the database. It was caused by the database management system: processing even a relatively small amount of data can be quite time-consuming in some cases. Moreover, as the data in the database grow, which is inevitable in the case of network traffic monitoring, the time required to evaluate the queries can exceed tens of seconds, because processing millions of records takes time even on systems with the most expensive, highest-performance hardware. This fact has a negative impact on the functionality of network traffic monitoring tools. Inspired by the Data Warehouse technology, improvements were designed to the way the data are stored in the database and accessed.

In the following sections, the BasicMeter network monitoring tool is introduced, starting with a brief description of its main components and continuing with a more detailed explanation of the problems it faces during data analysis. Subsequently, Section IV presents one of the most popular database technologies for large-scale data management, the Data Warehouse. Section V addresses the solution of the problems with the excessive response time of the database in the monitoring tool. The last two sections focus on the verification of the technology implemented in BasicMeter's architecture and on planned future work.

II.
THE BASICMETER MEASURING TOOL

The concept of the BasicMeter tool, as described in Figure 1, conforms to the IPFIX architecture. The measurement of network traffic is based on a passive method, which does not require the generation of additional network traffic but makes do with the existing traffic. The architecture of the tool was developed within the following three sub-projects:

BEEM stands for BasicmEter Exporting and Measuring process. It includes packet capturing, filtering, sampling, creating and maintaining flow records in the flow cache, and exporting flow records from the observation point via the IPFIX protocol.

JXColl stands for Java XML Collector. It can collect flow records from one or more metering points in the format of the NetFlow v5, v9 or IPFIX protocol. Flow records can be stored in a database for future use or analysis and/or sent directly to one or more analyzing applications via ACP.

WebAnalyzer is a web application for data analysis. It provides a Graphical User Interface (GUI) both for visualizing the information obtained by the exporter(s) and for managing the lower components of the architecture.
Fig. 1. The architecture of the BasicMeter measuring tool

However, the analyzer itself is not a part of the IPFIX architecture. Therefore, further enhancements had to be drafted and implemented so that communication between the analyzer(s) and the collector could be established and control of the measuring tool could be enabled. For these purposes the Analyzer Collector Protocol (ACP) and the Exporter Collector Analyzer Manager (ECAM) module were developed. The former is intended for real-time network traffic analysis; the latter for the management of the tool's other components. All of the above-mentioned components can be configured through their configuration files.

PostgreSQL was chosen as the database of the BasicMeter tool. It is an open-source Object-Relational Database Management System (ORDBMS) which offers an alternative to other freeware database systems (MySQL, Firebird, MaxDB, etc.) as well as to proprietary ones (Oracle, DB2, Microsoft SQL Server). Thanks to the flexibility of its license, PostgreSQL can be used, modified and distributed for free by any person for any purpose. According to many experts, PostgreSQL is currently the most advanced and most sophisticated freeware Database Management System (DBMS). With regard to the roles of these projects (BEEM, JXColl, WebAnalyzer, PostgreSQL DBMS), in the following they will be referred to as the exporter, collector, analyzer and database.

III. DESCRIPTION OF THE PROBLEMS ASSOCIATED WITH EXCESSIVE RESPONSE TIME

In the exporter's configuration file, among other options, a time period can be set. After this time period expires, the flow records calculated from the currently measured data are exported to the collector. Depending on these parameters it can happen that, although a flow has not yet finished, the information about it is already stored in the database, and data measured later by the exporter represent updated information about the same flow. Consequently, some of the records in the database, according to the IP packet structure, will have the same IP addresses, port numbers, etc. These records occur in the database multiple times.

In the analyzing application of the measuring tool, the user can create a graphical continuance on the basis of certain filtering criteria. These criteria include, inter alia, unique source and destination port numbers, source and destination IP addresses, and the identifier of the metering point (exporter). Unfortunately, while the database contains duplicate records, the efficient evaluation of the unique records (a list in which each record occurs only once) is not easy to achieve. The list containing only unique records is obtained from the database by the query

SELECT DISTINCT name_of_column FROM name_of_table ORDER BY name_of_column; (1)

This command eliminates duplicate rows from the result. In the case of 1 million records, it takes the database management system tens of seconds to evaluate the analyzer's query and subsequently send the result to the analyzing application. The higher the number of records in the database, the more time is needed to evaluate the queries. It is important to mention that these numbers are only informational, mainly because the result of a network measurement can be influenced by a wide range of factors, so the results of the monitoring may vary.

Another problem occurs during the evaluation of the data obtained from the database by the analyzer. Since the collector stores all the data about the IP flows measured by the exporter(s), the amount of data in the database can become really large. For example, monitoring a network with balanced traffic for 10 days results in approximately 1.5 million records stored in the database. If a user would like to see what happened on the network during these 10 days, the database management system would return 1.5 million records. On the basis of these records, the analyzing application would have to create a continuation in the form of interconnected points. However, with such a large amount of data this is really problematic. Moreover, it is assumed that in the future the analyzer will be capable of creating a continuation from a much larger time interval (more records). In this case both the processing of the queries by the database and the subsequent visualization of the results by the analyzer result in a really large response time of the BasicMeter tool.
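To make the scale of the problem concrete, the following sketch shows the kind of flow-record table the collector fills and a query of form (1); the table and column names here are illustrative assumptions, not BasicMeter's actual schema.

```sql
-- Illustrative flow-record table (a small subset of IPFIX fields;
-- names are assumptions, not BasicMeter's actual schema)
CREATE TABLE flow_record (
    flow_id    bigserial PRIMARY KEY,
    src_ip     inet,
    dst_ip     inet,
    src_port   integer,
    dst_port   integer,
    octets     bigint,
    flow_start timestamp,
    flow_end   timestamp
);

-- A query of form (1): with millions of partly duplicated rows,
-- eliminating the duplicates forces the DBMS to scan the whole table
-- and sort or hash the entire column before returning the first row
SELECT DISTINCT src_port FROM flow_record ORDER BY src_port;
```

On a table of this shape, the cost of the DISTINCT step grows with the total number of stored flow records, not with the (much smaller) number of distinct values.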
For these reasons it was necessary to analyze and improve the way the data of network traffic flows are stored in the database and accessed.

IV. DESCRIPTION OF THE DATA WAREHOUSE TECHNOLOGY

Issues related to database systems with large data sets are well known to the scientific community. Many database experts have long dedicated their attention to the development of concepts and technologies to solve problems similar to the one BasicMeter has to deal with. One of the most commonly used technologies for easy management of large-scale data is the Data Warehouse.

A Data Warehouse is a methodological concept for organizing and managing large data structures. Its main purpose is to provide reliable, complex and integrated materials for applications that rely on extensive data. Over the past few years, Data Warehouses have gone through rapid growth in the number of provided products and services. Their concept was adopted by the public and successfully spread through various segments of industry such as manufacturing, sales, financial services, transport, telecommunications, utilities and health care.

The main advantage of Data Warehouses is that the data are stored in a structure which allows their efficient analysis and querying (the main effort was to reach such a condition in the BasicMeter monitoring tool). Further advantages of a Data Warehouse are the following:

- integration of data from various sources into one system;
- history is provided, so the data are available for the last few periods of time (e.g. years);
- the data are arranged according to individual subjects;
- data are stored at different levels of aggregation;
- data are periodically retrieved from a source system (usually at night or on weekends);
- users only read the data, i.e. they do not insert new data or change existing data.

Using a wide range of methods, data from the Data Warehouse can be used for effective analysis and presentation.
The architecture of Data Warehouses is shown in Figure 2. Data incoming from multiple external sources are subjected to filtering criteria, extracted, merged and stored in the central repository, which is the Data Warehouse. The stored data are also extended by history and summary information. In technical terms, a Data Warehouse is a huge database whose size ranges from several gigabytes up to hundreds of terabytes. Thanks to this architecture, users work with data in a local, homogeneous and centralized data repository, which reduces the access time to these data. In addition, the Data Warehouse also periodically updates itself against the content of the external sources. Currently two architectures of Data Warehouses are dominant: one based on Inmon's methodology and the other on Kimball's methodology.

Fig. 2. The common architecture of the Data Warehouses

A. Inmon's methodology

According to Inmon's 1 methodology, a Data Warehouse is a subject-oriented, integrated, constant and time-dependent data collection, which serves as support for the management of these data. The main focus is placed on the centralization and normalization of data in the Data Warehouse. Since the normalization of data complicates ease of use and performance, this architecture also provides various datamarts. Datamarts are derived databases. They normally contain summarized data derived from the data of the Data Warehouse. Their structure is optimized for efficient querying of the records from the database.

B. Kimball's methodology

According to Kimball's 2 definition, a Data Warehouse is a copy of transactional data structured specifically for querying and analysis. His method, also known as dimensional modeling, represents the most elegant compromise between end-user integrity and ease of use. Its main aim is the development of a systematic and progressive Data Warehouse, with the main focus on its dimensions.

C.
Considering the criteria of the transition to the new technology

Before the transition to a technology for managing large-scale data, the following factors were considered:

- the amount of data to be stored;

1 William (Bill) Harvey Inmon is an American computer scientist, known to most people as the father of Data Warehouses.
2 Ralph Kimball is the author of the concepts of the Data Warehouse and Business Intelligence (BI).
- the time (speed) within which the data should be ready for use;
- the history of the records and changes;
- the types of analysis that need to be performed on the data;
- the cost of the technology and its implementation.

A further important factor to consider was the type of information that was going to be stored in the database. For this reason, the database and its existing data, the process which fills the database (the collector) and the process which queries the information from the database (the analyzer) were put under a complex analysis. The goal of this analysis was to gather all the requirements. Since the BasicMeter tool is still under development, some of these requirements were not yet fully defined (mainly those regarding the analyzer). Unfortunately, this would make the transition to the new technology hard to implement and in some cases also pointless (if the requirements of the collector and analyzer change, the method of pre-processing the data for efficient evaluation has to be changed too). However, solving the problems from the previous section still remained necessary. For this reason, after considering the factors described previously, the Data Warehouse technology and some other optimization improvements were implemented in the current database of the BasicMeter tool, which should solve the persistent problems of the database's excessive response time. This solution is also easily customizable from the point of view of the collector's or the analyzer's requirements. So in the future, when the requirements change, their influence can easily be implemented in the database empowered with the Data Warehouse technology.

V. MOVING TO THE DATA WAREHOUSE

As seen above, one of the most critical parts of BasicMeter's architecture, and of basically any IPFIX-like architecture, is its database.
This is also reflected in the fact that most of the performance- and query-related issues occurred while obtaining data from the database.

First, a new database was created, where the new optimized and summarized data are stored. Then, as a solution for the excessive response time during the retrieval of the list of unique data, this database was extended with tables designated for storing only unique records. Each of these tables contains two columns: one for the unique records and one for the number of the record's duplicate occurrences 3. With these tables, instead of query (1), the standard query

SELECT name_of_column FROM name_of_table ORDER BY name_of_column; (2)

can be used. Thus, the database management system does not need to eliminate duplicate records from the result. This results in a significant reduction of the time required by the database to process the query and provide the results to the analyzer.

The second main problem, i.e. the difficulty of displaying the large amount of data received from the database in the analyzing application (which is also connected with the above-described problem of excessive response time), was solved in the following way. The new database was extended with further tables. These tables and their fields were designed so that traffic information can be obtained from them on the basis of time characteristics. They represent a kind of summarization of selected information elements to a time unit of one minute. This makes it easier for the analyzer to obtain and represent the data. More rapid access to the data stored in the designed tables was also ensured by indexing the records when they are stored.

3 Storing the number of duplicate record occurrences is intended for statistical purposes.
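As a sketch of the extensions described above, the following PostgreSQL fragment shows one possible unique-records table with its occurrence counter, a query of form (2), and a one-minute summary table with a time index; all table and column names are illustrative assumptions, not BasicMeter's actual schema.

```sql
-- A unique-records table for one filtering criterion: the unique value
-- itself plus the number of times it has occurred (kept for statistics)
CREATE TABLE unique_src_port (
    src_port    integer PRIMARY KEY,
    occurrences bigint NOT NULL DEFAULT 1
);

-- Maintaining the counter when a flow record arrives (PostgreSQL 9.1
-- predates INSERT ... ON CONFLICT, hence the update-then-insert form)
UPDATE unique_src_port SET occurrences = occurrences + 1
 WHERE src_port = 80;
INSERT INTO unique_src_port (src_port)
SELECT 80
 WHERE NOT EXISTS (SELECT 1 FROM unique_src_port WHERE src_port = 80);

-- A query of form (2): no duplicate elimination is needed any more
SELECT src_port FROM unique_src_port ORDER BY src_port;

-- A summary table aggregating selected information elements
-- to a time unit of one minute
CREATE TABLE flow_summary_minute (
    minute_start timestamp NOT NULL,
    src_ip       inet      NOT NULL,
    dst_ip       inet      NOT NULL,
    octets       bigint    NOT NULL DEFAULT 0
);

-- Indexing the stored records ensures rapid access by time interval
CREATE INDEX flow_summary_minute_time_idx
    ON flow_summary_minute (minute_start);
```

With tables of this shape, the analyzer's interval queries touch only the pre-aggregated rows instead of every raw flow record.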
The principle of filling the tables was fully inspired by the Data Warehouse technology, while the main effort was aimed at the pre-processing and aggregation of the indexed data on the basis of time characteristics. This is possible through the PL/pgSQL extension of the basic SQL language of the PostgreSQL database. This extension allows procedural and object-oriented programming directly in the database. Using the PL/pgSQL language, the following database objects were created in the database of the BasicMeter tool: a function 4 and a trigger pertaining to this function. After each insertion of a record into the database, the trigger calls the proposed function. Subsequently, the function inserts records obtained from the currently stored values of the default database into the proposed tables of the new database in the following order:

1) First, the command by which the function was called (it has to be an INSERT statement) and the end-of-flow identifier are checked (examining the end-of-flow identifier is needed to reduce duplicate records). When these criteria are satisfied, the function connects to the database where the tables proposed for efficient querying were created.
2) In the next step, the function begins to store the records in the tables. First it checks whether the entry already exists in the table proposed for unique records. If the record is already there, only its duplicate-record counter field is increased. Otherwise the new, unique entry is stored in the table.
3) In case the currently inserted record meets the criteria of some distinguished information, this record is also stored in the tables dedicated to summarizing purposes.
4) In the last step, data about the start and end of the flow and some other entries characterizing the IP flow, summarized to a time unit of one minute, are stored in the tables for aggregation purposes.

4 In the terminology of PostgreSQL, the term function is used instead of stored procedure.

The implementation of the newly created objects was performed using regular SQL commands of the PostgreSQL database. After the execution of these statements, the proposed function and its trigger were added to the old database's default structure. From then on, whenever new data were inserted into the old database, new optimized and summarized data were stored in the new database on the basis of these data. Since the explanation of these database objects (they represent extensive source code) is out of the scope of this paper, they will not be listed in the following.

VI. VERIFYING THE BENEFITS OF THE IMPLEMENTED ENHANCEMENTS

The above-introduced enhancements were proposed to bring efficient evaluation of the analyzer's queries by using the pre-processed and summarized records. The verification of the expected results was performed on the topology illustrated in Figure 3.

Fig. 3. The topology of the verification process

Approximately 2 million records were collected by the BasicMeter monitoring tool. During the monitoring, the measured network traffic information was sent by the exporter to the collector (continuous line in Figure 3), where, after processing the IPFIX flow records, the obtained data were stored in the default (old) database (fine dashed line in Figure 3). Less than a quarter 5 of these records were also inserted into the new database. On the basis of these stored records, a group of measurements was performed by querying the database from the WebAnalyzer (two-dots-one-dash line in Figure 3); they were repeated 12 times.

5 This value is heavily influenced by the records of the tables containing unique records.
The main interest was aimed at determining the amount of time consumed by the database to evaluate the query. The server on which the database was running had the following hardware and software equipment:

- Intel(R) Pentium(R) 4 CPU, 3.20 GHz;
- 1 GB system RAM;
- 160 GB IDE hard disk;
- PostgreSQL 9.1 database;
- Ubuntu Server 9.10 LTS operating system (32-bit);
- OpenJDK Java.

The comparison of the average values before and after the transition to the Data Warehouse is given in Table I. Before optimization, after entering query (1), the database performed a sequential search in which, to select the unique information, it compared each record with every other. Such a search took the database system approximately 20 seconds. After optimization, with query (2), the database performed only a standard selection of the unique records from the proposed table. This selection took the database management system up to approximately half a second. Based on these facts, the first problem with excessive response time was solved.

TABLE I
COMPARING THE RESPONSE TIMES BEFORE AND AFTER THE IMPLEMENTATION OF THE DATA WAREHOUSE TECHNOLOGY

  Method of selecting data                                        Time for evaluating the query
  Before optimization: SELECT DISTINCT (sequential search)        approx. 20 s
  After optimization:  SELECT (standard selection of all data)    up to 0.5 s

The verification of the proposed tables for summarized records was performed under the same conditions, with the measurement repeated 20 times. The average response time was the same as in the case of the unique records (which can partly be considered a success). Moreover, the data summarized to a time unit of one minute were in each case easily processed and visualized (without delay) in the analyzer's graph of continuation. So the second problem, the complex visualization of a large amount of data in the graph of continuation, can also be considered solved.
The analyzer no longer has to create the continuation from 2 million records; it makes do with only a part of them.

VII. CONCLUSION AND FUTURE WORK

The goal of this paper was to introduce a method for optimizing the access time to the database of the BasicMeter network monitoring tool. As a solution, Data Warehouse technology was successfully implemented in the tool and fulfilled the expected results. The advantage of this solution is that it can easily be implemented in the database of any IPFIX-like tool. Although the optimization of the database succeeded, it serves only as a temporary solution, because not all requirements were completely defined during the implementation of the Data Warehouse technology. Therefore, in the future, the database of BasicMeter should first be subjected to a thorough analysis, during which all the requirements of both the collector and the analyzer should be definitively identified. Consequently, the pre-processing and aggregation of data on the basis of time characteristics should be re-designed and re-implemented. Future work should also address the way of filling the new database containing the unique and summarized records. A preferable way would be to calculate and store these records at a time (during the weekend or at midnight) which has the least impact on the performance of the network monitoring.

ACKNOWLEDGMENT

This work was supported by the Slovak Research and Development Agency under the contract No. APVV.

REFERENCES

[1] L. Vokorokos, N. Ádám, A. Baláž, "Application of Intrusion Detection Systems in Distributed Computer Systems and Dynamic Networks," Computer Science and Technology Research Survey, Elfa s.r.o., Košice, 2008.
[2] L. Vokorokos, A. Kleinová, O. Látka, "Network Security on the Intrusion Detection System Level," Proc. 10th IEEE International Conference on Intelligent Engineering Systems, Budapest Tech, Hungary, 2006.
[3] F. Jakab, Ľ. Koščo, M. Potocký, J. Giertl, "Contribution to QoS Parameters Measurement: The BasicMeter Project," Proc. 4th International Conference on Emerging e-Learning Technologies and Applications, ICETA 2005, vol. 4.
[4] G. Sadasivan, N. Brownlee, B. Claise, J. Quittek, "Architecture for IP Flow Information Export," RFC 5470, 2009.
[5] T. Zseby, E. Boschi, N. Brownlee, B. Claise, "IP Flow Information Export (IPFIX) Applicability," RFC 5472, 2009.
[6] J. Sučík, F. Jakab, "Measurement and Evaluation of Quality of Service Parameters in Computer Networks."
[7] B. Claise, "Cisco Systems NetFlow Services Export Version 9," RFC 3954, 2004.
[8] B. Claise, "Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information," RFC 5101, 2008.
[9] J. Giertl, R. Jakab, Ľ. Koščo, "Communication Protocol in Computer Network Performance Parameters Measurement," Proc. 4th International Information and Telecommunication Technologies Symposium, Federal University of Santa Catarina, Florianópolis, Brazil, December 14-16, 2005.
[10] A. Pekár, J. Giertl, M. Révés, P. Feciľak, "Simplification of the Real-Time Network Traffic Monitoring," Electrical Engineering and Informatics II, Proc. of the Faculty of Electrical Engineering and Informatics of the Technical University of Košice, Košice.
[11] F. Jakab, J. Giertl, R. Jakab, M. Kaščák, "Improving Efficiency and Manageability in IPFIX Network Monitoring Platform," Proc. 6th International Network Conference, INC 2006, University of Plymouth, Plymouth, UK.
[12] T. Mihok, J. Giertl, M. Révés, A. Pekár, P. Feciľak, "System for Centralized Management of the BasicMeter Tool," IP Networking 1 - Theory and Practice, Žilina.
[13] The PostgreSQL Global Development Group (2011), PostgreSQL Documentation. [Online].
[14] J. Quittek, S. Bryant, B. Claise, P. Aitken, J. Meyer, "Information Model for IP Flow Information Export," RFC 5102, 2008.
[15] S. Chaudhuri, U. Dayal, "An Overview of Data Warehousing and OLAP Technology," ACM SIGMOD Record, vol. 26, no. 1.
[16] K. Bandarupalli (2010), Importance of Data Warehousing and its Design. [Online].
[17] C.T. Hammergren, A. Simon, Data Warehousing For Dummies, Wiley.
[18] W.H. Inmon, Building the Data Warehouse, Wiley.
[19] R. Kimball, The Data Warehouse Toolkit.
[20] J. Quittek, T. Zseby, B. Claise, S. Zander, "Requirements for IP Flow Information Export (IPFIX)," RFC 3917, 2004.
Database Systems Journal vol. IV, no. 1/2013 11 Reverse Engineering in Data Integration Software Vlad DIACONITA The Bucharest Academy of Economic Studies firstname.lastname@example.org Integrated applications
Chapter 2 Planning the Installation and Installing SQL Server In This Chapter c SQL Server Editions c Planning Phase c Installing SQL Server 22 Microsoft SQL Server 2012: A Beginner s Guide This chapter
High-Volume Data Warehousing in Centerprise Product Datasheet Table of Contents Overview 3 Data Complexity 3 Data Quality 3 Speed and Scalability 3 Centerprise Data Warehouse Features 4 ETL in a Unified
1 B.Sc (Computer Science) Database Management Systems UNIT-V Business Intelligence? Business intelligence is a term used to describe a comprehensive cohesive and integrated set of tools and process used
A Design and implementation of a data warehouse for research administration universities André Flory 1, Pierre Soupirot 2, and Anne Tchounikine 3 1 CRI : Centre de Ressources Informatiques INSA de Lyon
Unified network traffic monitoring for physical and VMware environments Applications and servers hosted in a virtual environment have the same network monitoring requirements as applications and servers
Chapter 6 Basics of Data Integration Fundamentals of Business Analytics Learning Objectives and Learning Outcomes Learning Objectives 1. Concepts of data integration 2. Needs and advantages of using data
Document management and exchange system supporting education process Emil Egredzija, Bozidar Kovacic Information system development department, Information Technology Institute City of Rijeka Korzo 16,
IP Telephony Contact Centers Mobility Services WHITE PAPER Business Value Reporting and Analytics Avaya Operational Analyst April 2005 avaya.com Table of Contents Section 1: Introduction... 1 Section 2:
Recommendations for Network Traffic Analysis Using the NetFlow Protocol Best Practice Document Produced by AMRES NMS Group (AMRES BPD 104) Author: Ivan Ivanović November 2011 TERENA 2010. All rights reserved.
Indexing Techniques for Data Warehouses Queries Sirirut Vanichayobon Le Gruenwald The University of Oklahoma School of Computer Science Norman, OK, 739 email@example.com firstname.lastname@example.org Abstract Recently,
Oracle Data Integrator: Administration and Development What you will learn: In this course you will get an overview of the Active Integration Platform Architecture, and a complete-walk through of the steps
Visualization, Management, and Control for Cisco IWAN Data sheet Overview Intelligent WAN is a Cisco solution that enables enterprises to realize significant cost savings by moving to less expensive transport
International Journal of Soft Computing Applications ISSN: 1453-2277 Issue 4 (2009), pp.35-40 EuroJournals Publishing, Inc. 2009 http://www.eurojournals.com/ijsca.htm An Approach for Facilating Knowledge
Scalable Extraction, Aggregation, and Response to Network Intelligence Agenda Explain the two major limitations of using Netflow for Network Monitoring Scalability and Visibility How to resolve these issues
Rocky Mountain Technology Ventures Exploring the Intricacies and Processes Involving the Extraction, Transformation and Loading of Data 3/25/2006 Introduction As data warehousing, OLAP architectures, Decision
WHITE PAPER With the introduction of NetFlow and similar flow-based technologies, solutions based on flow-based data have become the most popular methods of network monitoring. While effective, flow-based
FileMaker 12 ODBC and JDBC Guide 2004 2012 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker and Bento are trademarks of FileMaker, Inc.
Gaining Operational Efficiencies with the Enterasys S-Series Hi-Fidelity NetFlow There is nothing more important than our customers. Gaining Operational Efficiencies with the Enterasys S-Series Introduction
International Journal of Computing Academic Research (IJCAR) ISSN 2305-9184 Volume 3, Number 6(December 2014), pp. 131-137 MEACSE Publications http://www.meacse.org/ijcar Measuring the Impact of Security
SAS BI Course Content; Introduction to DWH / BI Concepts SAS Web Report Studio 4.2 SAS EG 4.2 SAS Information Delivery Portal 4.2 SAS Data Integration Studio 4.2 SAS BI Dashboard 4.2 SAS Management Console
Chapter 1 A Pragmatic Introduction to Oracle In This Chapter Getting familiar with Oracle Implementing grid computing Incorporating Oracle into everyday life Oracle 11g is by far the most robust database
Database Systems Journal vol. IV, no. 4/2013 21 Data Mining Solutions for the Business Environment Ruxandra PETRE University of Economic Studies, Bucharest, Romania email@example.com Over
Anwendungssoftwares a -Warehouse-, -Mining- und OLAP-Technologien Warehouse Architecture Overview Warehouse Architecture Sources and Quality Mart Federated Information Systems Operational Store Metadata
Pastel Evolution BIC Getting Started Guide Table of Contents System Requirements... 4 How it Works... 5 Getting Started Guide... 6 Standard Reports Available... 6 Accessing the Pastel Evolution (BIC) Reports...
Pg. 1 03/18/2011 JEFAQ - 02/13/2013 - Copyright 2013 - Jet Reports International, Inc. Regarding Jet Enterprise What are the software requirements for Jet Enterprise? The following components must be installed
FileMaker 13 ODBC and JDBC Guide 2004 2013 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker and Bento are trademarks of FileMaker, Inc.
CHAPTER-29 Data Mining, System Products and Research Prototypes 29.1 How to Choose a Data Mining System 29.2 Data, mining functions and methodologies: 29.3 Coupling data mining with database anti/or data
Designing an Object Relational Data Warehousing System: Project ORDAWA * (Extended Abstract) Johann Eder 1, Heinz Frank 1, Tadeusz Morzy 2, Robert Wrembel 2, Maciej Zakrzewicz 2 1 Institut für Informatik
WHITE PAPER Domo Advanced Architecture Overview There are several questions that any architect or technology advisor may ask about a new system during the evaluation process: How will it fit into our organization
IAF Business Intelligence Solutions Make the Most of Your Business Intelligence White Paper INTRODUCTION In recent years, the amount of data in companies has increased dramatically as enterprise resource
OLAP and OLTP AMIT KUMAR BINDAL Associate Professor Databases Databases are developed on the IDEA that DATA is one of the critical materials of the Information Age Information, which is created by data,
METU DEPARTMENT OF COMPUTER ENGINEERING Software Requirements Specification SNMP Agent & Network Simulator Mustafa İlhan Osman Tahsin Berktaş Mehmet Elgin Akpınar 05.12.2010 Table of Contents 1. Introduction...
- Analyzer 2007 Executive Summary Strategy Companion s Analyzer 2007 is enterprise Business Intelligence (BI) software that is designed and engineered to scale to the requirements of large global deployments.
FileMaker 11 ODBC and JDBC Guide 2004 2010 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker is a trademark of FileMaker, Inc. registered
WhatsUp Gold v11 Features Overview This guide provides an overview of the core functionality of WhatsUp Gold v11, and introduces interesting features and processes that help users maximize productivity
Network Monitoring On Large Networks Yao Chuan Han (TWCERT/CC) firstname.lastname@example.org 1 Introduction Related Studies Overview SNMP-based Monitoring Tools Packet-Sniffing Monitoring Tools Flow-based Monitoring
When Recognition Matters THE COMPARISON OF PROGRAMS FOR NETWORK MONITORING www.pecb.com Imagine a working environment comprised of a number of switches, routers, some terminals and file servers. Network
White Paper INTEROPERABILITY OF SAP BUSINESS OBJECTS 4.0 WITH - AN INTEGRATION GUIDE FOR WINDOWS USERS (64 BIT) Abstract This paper presents interoperability of SAP Business Objects 4.0 with Greenplum.
Database Systems Journal vol. IV, no. 2/2013 3 ETL as a Necessity for Business Architectures Aurelian TITIRISCA University of Economic Studies, Bucharest, Romania email@example.com Today, the
Open Source Business Intelligence Tools: A Review Amid Khatibi Bardsiri 1 Seyyed Mohsen Hashemi 2 1 Bardsir Branch, Islamic Azad University, Kerman, IRAN 2 Science and Research Branch, Islamic Azad University,
A Survey Study on Monitoring Service for Grid Erkang You firstname.lastname@example.org ABSTRACT Grid is a distributed system that integrates heterogeneous systems into a single transparent computer, aiming to provide
SAP BW Connector for BIRT Technical Overview How to Easily Access Data from SAP Cubes Using BIRT www.yash.com 2011 Copyright YASH Technologies. All rights reserved. www.yash.com 2013 Copyright YASH Technologies.
Wireshark Developer and User Conference Using NetFlow to Analyze Your Network June 15 th, 2011 Christopher J. White Manager Applica6ons and Analy6cs, Cascade Riverbed Technology email@example.com SHARKFEST
Next Generation Data Warehouse and In-Memory Analytics S. Santhosh Baboo,PhD Reader P.G. and Research Dept. of Computer Science D.G.Vaishnav College Chennai 600106 P Renjith Kumar Research scholar Computer