Preparing Databases for Network Traffic Monitoring




Liberios Vokorokos, Adrián Pekár, Norbert Ádám
Technical University of Košice, Faculty of Electrical Engineering and Informatics,
Department of Computers and Informatics, Letná 9, 042 00 Košice, Slovak Republic
Email: liberios.vokorokos@tuke.sk, adrian.pekar@tuke.sk, norbert.adam@tuke.sk

Abstract: This paper deals with database-related problems which can occur during network traffic monitoring. When the measured data are evaluated through a database, excessive response times can be observed. Since the time necessary to obtain the data from the database has a significant effect on the results of the monitoring, it is important to keep its value as low as possible. In practice, the vast majority of monitoring tools have to deal with problems related to the database's excessive response time, so solving them is well justified. In this paper an alternative method of network monitoring, the BasicMeter tool, is introduced, and the exact problems and their solution are illustrated with it.

I. INTRODUCTION

Today's computer networks are designed to transmit video, sound and data at the same time. This kind of transmission is provided by converged networks which, instead of setting up a separate link for each traffic type, make do with a single converged link. To ensure the desired quality and performance of these networks, it is important to monitor their traffic. By measuring and analyzing network traffic parameters, the correct functioning of various multimedia applications can be ensured; the network and its users can be secured against external or internal attacks [1]; Intrusion Detection Systems can be applied [2]; the loss or leakage of sensitive information can be prevented; etc. For this reason, monitoring plays a significant role in the management of today's modern computer networks and their services.

BasicMeter is a measuring tool which offers an alternative method of network traffic monitoring. It is developed by the MONICA research group at the Technical University of Košice. The main purpose of the tool is to measure network traffic parameters and analyze the data in real time.

During the development of the tool, a considerable response time was noticed while obtaining data from the database. It was caused by the database management system: processing even a relatively small amount of data can be quite time-consuming in some cases. Moreover, with the growth of the data in the database, which is inevitable in the case of network traffic monitoring, the time required for evaluating the queries can exceed tens of seconds, because processing millions of records takes time even on systems with the most expensive, highest-performance hardware. This fact has a negative impact on the functionality of network traffic monitoring tools. Inspired by the Data Warehouse technology, improvements were therefore designed to the way the data are stored in the database and accessed.

In the following sections the BasicMeter network monitoring tool is introduced, starting with a brief description of its main components and continuing with a more detailed explanation of the problems it has to face during data analysis.
Subsequently, Section IV presents one of the most popular database technologies for large-scale data management, the Data Warehouse. Section V addresses the solution of the problems with the excessive response time of the database in the monitoring tool. The last two sections focus on the verification of the technology implemented in the BasicMeter architecture and on the planned future work.

II. THE BASICMETER MEASURING TOOL

The concept of the BasicMeter tool [3], depicted in Figure 1, conforms to the IPFIX architecture [4], [5]. The measurement of the network traffic is based on the passive method [6], which does not require the generation of additional network traffic but makes do with the existing one. The architecture of the tool was developed within the following three sub-projects:

- BEEM stands for BasicmEter Exporting and Measuring process. It includes packet capturing, filtering, sampling, creating and maintaining flow records in the flow cache, and exporting flow records from the observation point via the IPFIX protocol.
- JXColl stands for Java XML Collector. It can collect flow records from one or more metering points in the NetFlow v5, v9 [7] or IPFIX [8] format. Flow records can be stored in a database for future use or analysis and/or sent directly to one or more analyzing applications via ACP [9], [10].
- WebAnalyzer is a web application for data analysis. It provides a Graphical User Interface (GUI) both for the visualization of the information obtained from the exporter(s) and for the management of the architecture's lower components.

Fig. 1. The architecture of the BasicMeter measuring tool.

However, the analyzer itself is not a part of the IPFIX architecture [4]. Therefore, further enhancements had to be drafted and implemented to establish communication between the analyzer(s) and the collector and to enable the control of the measuring tool. For these purposes the Analyzer Collector Protocol (ACP) and the Exporter Collector Analyzer Manager (ECAM) module were developed [11], [12]. The former is intended for real-time network traffic analysis; the latter for the management of the tool's other components. All of the above-mentioned components can be configured via their configuration files.

PostgreSQL [13] was chosen as the database of the BasicMeter tool. It is an open-source Object-Relational Database Management System (ORDBMS) which offers an alternative to other freeware database systems (MySQL, Firebird, MaxDB, etc.) as well as to proprietary ones (Oracle, DB2, Microsoft SQL Server). Thanks to the flexibility of its license, PostgreSQL can be used, modified and distributed free of charge by anyone for any purpose. According to many experts, PostgreSQL is currently the most advanced and most sophisticated freeware Database Management System (DBMS) [13].

With regard to the roles of these projects (BEEM, JXColl, WebAnalyzer, PostgreSQL DBMS), in the following they will be referred to as the exporter, collector, analyzer and database, respectively.

III. DESCRIPTION OF THE PROBLEMS ASSOCIATED WITH EXCESSIVE RESPONSE TIME

In the exporter's configuration file, among other options, a time period can be set. After this time period expires, the flow records calculated from the data measured so far are exported to the collector. Depending on these parameters it can happen that, although a flow has not yet finished, the information about it is already stored in the database, and the data measured later by the exporter represent updated information about the same flow. Consequently, some of the records in the database will, in accordance with the IP packet structure [8], have the same IP addresses, port numbers, etc.; such records occur in the database multiple times.

In the analyzing application of the measuring tool, the user can create a time-series graph on the basis of certain filtering criteria. These criteria include, inter alia, the unique source and destination port numbers, the source and destination IP addresses and the identifier of the metering point (exporter) [14]. Unfortunately, while the database contains duplicate records, the efficient evaluation of the unique records (a list in which each record occurs only once) is not easy to achieve. The list containing only unique records is obtained from the database by the query

    SELECT DISTINCT name_of_column FROM name_of_table ORDER BY name_of_column;    (1)

This command eliminates duplicate rows from the result [13]. In the case of 1 million records, evaluating the analyzer's query and subsequently sending the result to the analyzing application takes the database management system tens of seconds; the higher the number of records in the database, the more time is needed to evaluate the queries. It is important to mention that these numbers are only informational, mainly because the result of a network measurement can be influenced by a wide range of factors, so the results of the monitoring may vary.
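To see where the cost of query (1) comes from, PostgreSQL's EXPLAIN command can be used. The following is only an illustrative sketch with hypothetical table and column names (not the tool's actual schema); the exact plan depends on the schema, the PostgreSQL version and the optimizer statistics:

    EXPLAIN SELECT DISTINCT src_ip FROM flow_records ORDER BY src_ip;
    -- On a large table without a suitable index, the plan typically has the shape
    --   Unique
    --     ->  Sort
    --           ->  Seq Scan on flow_records
    -- i.e. every row is read and sorted before duplicates can be eliminated,
    -- so the evaluation time grows with the total number of stored records.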
Another problem occurs during the evaluation of the data obtained from the database by the analyzer. Since the collector stores all the data about the IP flows measured by the exporter(s), the amount of data in the database can become very large. For example, monitoring a network with balanced traffic for 10 days results in approximately 1.5 million stored records. If a user would like to see what happened on the network during these 10 days, the database management system would return all 1.5 million records, and on the basis of these records the analyzing application would have to create a time-series graph in the form of interconnected points. With such a large number of records this is very problematic. Moreover, it is assumed that in the future the analyzer will be capable of creating a graph from a much larger time interval (more records). In that case both the processing of the queries by the database and the subsequent visualization of the results by the analyzer give the BasicMeter tool a very large response time.
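For perspective, a quick calculation on the figures above: 1.5 million records in 10 days correspond on average to 1 500 000 / (10 × 86 400 s) ≈ 1.7 flow records per second. The difficulty therefore lies not in the insertion rate but in returning and plotting millions of accumulated records at once.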

For these reasons it was necessary to analyze and improve the way the network traffic flow data are stored in the database and accessed.

IV. DESCRIPTION OF THE DATA WAREHOUSE TECHNOLOGY

Issues related to database systems with large data sets are well known to the scientific community. Many database experts have long dedicated their attention to the development of concepts and technologies that solve problems similar to the one BasicMeter has to deal with. One of the most commonly used technologies for the convenient management of large-scale data is the Data Warehouse.

A Data Warehouse is a methodological concept for organizing and managing large data structures. Its main purpose is to provide reliable, complete and integrated material for applications that rely on extensive data. Over the past few years Data Warehouses have gone through a rapid growth in the number of provided products and services. Their concept was adopted by the public and successfully spread to various industry segments such as manufacturing, sales, financial services, transport, telecommunications, utilities and health care [15].

The main advantage of Data Warehouses is that the data are stored in a structure which allows their efficient analysis and querying (reaching such a condition in the BasicMeter monitoring tool was the main goal of this work). Further advantages of a Data Warehouse are the following [16]:

- It integrates data from various sources into one system.
- It provides history, so the data of the last few periods of time (e.g. years) remain available.
- The data are arranged according to the individual subjects.
- The data are stored at different levels of aggregation.
- The data are retrieved from the source systems periodically (usually at night or at weekends).
- Users only read the data, i.e. they neither insert new records nor change the existing ones.

Using a wide range of methods, the data from the Data Warehouse can be used for effective analysis and presentation. The architecture of Data Warehouses is shown in Figure 2. Data incoming from multiple external sources are subjected to filtering criteria, extracted, merged and stored in the central repository, which is the Data Warehouse. The stored data are also extended with history and summary information [17]. In technical terms, a Data Warehouse is a huge database whose size ranges from several gigabytes up to hundreds of terabytes. Thanks to this architecture, users work with the data in a local, homogeneous and centralized data repository, which reduces the access time to these data. In addition, the Data Warehouse also updates itself periodically against the content of the external sources.

Fig. 2. The common architecture of Data Warehouses.

Currently two Data Warehouse architectures are dominant: one based on Inmon's methodology and the other on Kimball's methodology.

A. Inmon's methodology

According to Inmon's¹ methodology, a Data Warehouse is a subject-oriented, integrated, non-volatile and time-variant collection of data which serves as support for management decision making. The main focus is placed on the centralization and normalization of the data in the Data Warehouse. Since the normalization of the data complicates ease of use and performance, this architecture also provides various data marts. Data marts are derived databases; they normally contain summarized data derived from the data of the Data Warehouse, and their structure is optimized for the efficient querying of records [18].

¹ William (Bill) Harvey Inmon is an American computer scientist, known to most people as the father of Data Warehousing.
B. Kimball's methodology

According to Kimball's² definition, a Data Warehouse is a copy of transactional data structured specifically for querying and analysis. His method, also known as dimensional modeling, represents an elegant compromise between data integrity and ease of use for end users. Its main aim is the development of a systematic and incremental Data Warehouse, focusing mainly on facts and their dimensions [19].

² Ralph Kimball is the author of the concepts of the Data Warehouse and Business Intelligence (BI).

C. Criteria considered for the transition to the new technology

Before the transition to the technology for managing large-scale data, the following factors [18] were considered:

- the amount of data to be stored;

- the time (speed) within which the data should be ready for use;
- the history of the records and changes;
- the types of analysis that need to be performed over the data;
- the cost of the technology and its implementation.

A further important factor to consider was the type of information that was going to be stored in the database. For this reason the database and its existing data, the process filling the database (the collector) and the process querying the information from the database (the analyzer) were subjected to a thorough analysis. The goal of this analysis was to gather all the requirements. Since the BasicMeter tool is still under development, some of these requirements were not yet fully defined (mainly those regarding the analyzer). Unfortunately, this would make a complete transition to the new technology hard to implement and in some cases also pointless (if the requirements of the collector and the analyzer change, the method of pre-processing the data for efficient evaluation has to change too). However, solving the problems described in the previous section remained necessary. Therefore, after considering the factors described above, the Data Warehouse technology together with some further optimizations was implemented in the current database of the BasicMeter tool, which should solve the persistent problem of the database's excessive response time. This solution is also easily customizable with respect to the collector's or the analyzer's requirements, so when the requirements change in the future, their influence can easily be incorporated into the database empowered with the Data Warehouse technology.

V. MOVING TO THE DATA WAREHOUSE

As seen above, one of the most critical parts of the BasicMeter architecture, and basically of any IPFIX-like architecture [20], is its database. This is also reflected by the fact that most of the performance- and query-related issues occurred while obtaining the data from the database.

First, a new database was created in which the new, optimized and summarized data are stored. Then, as a solution to the excessive response time during the retrieval of the list of unique data, this database was extended with tables designated for storing only unique records. Each of these tables contains two columns: one for the unique records and one for the number of duplicate occurrences of the record³. With these tables, instead of query (1), the standard query

    SELECT name_of_column FROM name_of_table ORDER BY name_of_column;    (2)

can be used. Thus the database management system does not need to eliminate duplicate records from the result [13], which significantly reduces the time required by the database to process the query and provide the results to the analyzer.

³ Storing the number of duplicate record occurrences is intended for statistical purposes.

The second main problem, i.e. the difficulty of displaying the large amount of data received from the database in the analyzing application (which is also connected with the above-described problem of excessive response time), was solved as follows. The new database was extended with further tables. These tables and their fields were designed so that traffic information can be obtained from them on the basis of time characteristics: they represent a summarization of selected information elements to one-minute time units.
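As an illustration of the two kinds of proposed tables, the following minimal SQL sketch may help. It is not the actual BasicMeter schema, which the paper does not list; all table and column names are assumptions:

    -- Hypothetical table for the unique values of one information element
    -- (here source IP addresses), with the duplicate-occurrence counter.
    CREATE TABLE unique_src_ip (
        src_ip      inet PRIMARY KEY,          -- the unique record itself
        occurrences bigint NOT NULL DEFAULT 1  -- how many times it was seen
    );

    -- Hypothetical table with selected elements summarized to one-minute units.
    CREATE TABLE flow_minute_summary (
        minute  timestamp NOT NULL,  -- flow data aggregated to this minute
        octets  bigint    NOT NULL,
        packets bigint    NOT NULL
    );
    CREATE INDEX flow_minute_summary_minute_idx ON flow_minute_summary (minute);

    -- The analyzer can then use query (2) for the unique list...
    SELECT src_ip FROM unique_src_ip ORDER BY src_ip;
    -- ...and fetch pre-aggregated points for a time-series graph directly:
    SELECT minute, sum(octets) FROM flow_minute_summary
        WHERE minute BETWEEN '2011-06-01' AND '2011-06-10'
        GROUP BY minute ORDER BY minute;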
This makes it easier for the analyzer to obtain and represent the data. Faster access to the data stored in the designed tables was also ensured by indexing the records as they are stored [13].

The principle of filling the tables was fully inspired by the Data Warehouse technology, with the main effort aimed at the pre-processing and aggregation of the indexed data on the basis of time characteristics. This is possible through PL/pgSQL, a procedural extension of the basic SQL language of the PostgreSQL database [13], which allows procedural and object-oriented programming directly in the database. Using the PL/pgSQL language, the following database objects [13] were created in the database of the BasicMeter tool:

- a function⁴;
- a trigger pertaining to this function.

⁴ In the terminology of PostgreSQL, the term function is used instead of stored procedure.

After each insertion of a record into the database, the trigger calls the proposed function. The function then inserts records, derived from the values actually stored in the default database, into the proposed tables of the new database in the following order:

1) First, the command by which the function was called (it has to be an INSERT statement) and the end-of-flow identifier are checked (examining the end-of-flow identifier is needed to reduce duplicate records). When these criteria are satisfied, the function connects to the database in which the proposed tables for efficient querying were created.

2) In the next step the function begins to store the records in the tables. First it checks whether the entry already exists in the table proposed for unique records. If the record is already there, only its duplicate-record counter field is increased; otherwise the new, unique entry is stored in the table.

3) If the actually inserted record meets the criteria of some distinguished information, the record is also stored in the tables dedicated for summarization purposes.

4) In the last step, data about the start and end of the flow and some other entries characterizing the IP flow, summarized to one-minute time units, are stored in the tables dedicated for aggregation purposes.
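The paper does not list the source of these objects, so the following PL/pgSQL fragment is only a hedged sketch of how such a function and trigger could look. All names are hypothetical, the end-of-flow check is reduced to a single assumed column, and for brevity everything is kept inside one database (writing into a separate database, as described above, would additionally require a mechanism such as dblink):

    -- Illustrative sketch only; not the actual BasicMeter implementation.
    CREATE OR REPLACE FUNCTION summarize_flow_record() RETURNS trigger AS $$
    BEGIN
        -- Step 1: proceed only for finished flows (assumed end-of-flow column).
        IF NEW.flow_end_reason IS NULL THEN
            RETURN NEW;
        END IF;

        -- Step 2: maintain the unique-records table and its duplicate counter
        -- (update-then-insert, since ON CONFLICT did not exist in PostgreSQL 9.1).
        UPDATE unique_src_ip SET occurrences = occurrences + 1
            WHERE src_ip = NEW.src_ip;
        IF NOT FOUND THEN
            INSERT INTO unique_src_ip (src_ip) VALUES (NEW.src_ip);
        END IF;

        -- Steps 3 and 4: store the selected elements summarized to one minute.
        INSERT INTO flow_minute_summary (minute, octets, packets)
        VALUES (date_trunc('minute', NEW.flow_start), NEW.octets, NEW.packets);

        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- The trigger that fires the function after every INSERT of a flow record.
    CREATE TRIGGER flow_record_summarizer
        AFTER INSERT ON flow_records
        FOR EACH ROW EXECUTE PROCEDURE summarize_flow_record();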

The implementation of the newly created objects was performed using regular SQL commands of the PostgreSQL database. After the execution of these statements the proposed function and its trigger were added to the old database's default structure. From then on, whenever new data were inserted into the old database, new optimized and summarized data derived from them were stored in the new database. Since the explanation of these database objects (they represent extensive source code) is out of the scope of this paper, they are not listed here.

VI. VERIFYING THE BENEFITS OF THE IMPLEMENTED ENHANCEMENTS

The enhancements introduced above were proposed to bring efficient evaluation of the analyzer's queries by using the pre-processed and summarized records. The verification of the expected results was performed on the topology illustrated in Figure 3.

Fig. 3. The topology of the verification process.

Approximately 2 million records were collected with the BasicMeter monitoring tool. During the monitoring, the measured network traffic information was sent by the exporter to the collector (continuous line in Figure 3), where, after processing the IPFIX flow records, the obtained data were stored in the default (old) database (fine dashed line in Figure 3). Less than a quarter⁵ of these records were also inserted into the new database. On the basis of these stored records, a group of measurements was performed by querying the database from the WebAnalyzer (two-dots-one-dash line in Figure 3); they were repeated 12 times. The main interest was the amount of time consumed by the database to evaluate the query.

⁵ This value is strongly influenced by the records of the tables containing unique records.

The server on which the database was running had the following hardware and software configuration:

- Intel(R) Pentium(R) 4 CPU, 3.20 GHz;
- 1 GB system RAM;
- 160 GB IDE hard disk;
- PostgreSQL 9.1 database;
- Ubuntu Server 9.10 (32-bit) operating system;
- OpenJDK Java version 1.6.0_23.

The comparison of the average values before and after the transition to the Data Warehouse is given in Table I. Before the optimization, on entering query (1) the database performed a sequential search in which, to select the unique information, it compared each record with every other one. Such a search took the database system approximately 20 seconds. After the optimization, on entering query (2) the database performed only a standard selection of the unique records from the proposed table, which took the database management system up to about half a second. Based on these facts, the first problem with excessive response time was solved.

TABLE I
COMPARISON OF THE RESPONSE TIMES BEFORE AND AFTER THE IMPLEMENTATION OF THE DATA WAREHOUSE TECHNOLOGY

    Method of selecting data                                     | Time to evaluate the query
    ------------------------------------------------------------ | --------------------------
    Before optimization: SELECT DISTINCT (sequential search)     | approx. 20 s
    After optimization:  SELECT (standard selection of all data) | up to 0.5 s
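As a note on methodology, such response times can be observed directly in PostgreSQL; a sketch under the same assumed table names as above (the figures in Table I are the paper's measured values, not reproduced here):

    -- In psql, enable the display of each statement's execution time:
    \timing on

    -- Before the optimization: de-duplication happens at query time.
    SELECT DISTINCT src_ip FROM flow_records ORDER BY src_ip;

    -- After the optimization: a plain selection from the pre-built table.
    SELECT src_ip FROM unique_src_ip ORDER BY src_ip;

    -- EXPLAIN ANALYZE additionally reports the server-side plan and timing.
    EXPLAIN ANALYZE SELECT src_ip FROM unique_src_ip ORDER BY src_ip;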
The verification of the proposed tables for summarized records was performed under the same conditions, with the measurement repeated 20 times. The average response time was the same as in the case of the unique records (which in itself can be considered a partial success). Moreover, the data summarized to one-minute time units were in each case easily processed and visualized without delay in the time-series graph of the analyzer. So the second problem, the complex rendering of a large amount of data in the time-series graph, can also be considered solved: the analyzer no longer has to create the graph from 2 million records, it makes do with only a fraction of them.

VII. CONCLUSION AND FUTURE WORK

The goal of this paper was to introduce a method for optimizing the access time to the database of the BasicMeter network monitoring tool. As a solution, the Data Warehouse technology was successfully implemented in the tool and fulfilled the expected results. The advantage of this solution is that it can easily be implemented in the database of any IPFIX-like tool.

Although the optimization of the database succeeded, it serves only as a temporary solution, because not all requirements were completely defined during the implementation of the Data Warehouse technology. Therefore, in the future the database of BasicMeter should first be subjected to a thorough analysis, during which all the requirements of both the collector and the analyzer should be definitively identified. Consequently, the pre-processing and aggregation of the data on the basis of time characteristics should be re-designed and re-implemented. Future work should also address the way of filling the new database containing the unique and summarized records: a preferable approach would be to calculate and store these records at a time (during the weekend or at midnight) when this has the least impact on the performance of the network monitoring.

ACKNOWLEDGMENT

This work was supported by the Slovak Research and Development Agency under contract No. APVV-0008-10.

REFERENCES

[1] L. Vokorokos, N. Ádám, A. Baláž, "Application of Intrusion Detection Systems in Distributed Computer Systems and Dynamic Networks," Computer Science and Technology Research Survey, Elfa s.r.o., Košice, 2008, pp. 19-24, ISBN 978-80-8086-100-1.
[2] L. Vokorokos, A. Kleinová, O. Látka, "Network Security on the Intrusion Detection System Level," Proc. of the 10th IEEE International Conference on Intelligent Engineering Systems, Budapest Tech, Hungary, 2006, pp. 270-275, ISBN 1-4244-9708-8.
[3] F. Jakab, Ľ. Koščo, M. Potocký, J. Giertl, "Contribution to QoS Parameters Measurement: The BasicMeter Project," Proc. of the 4th International Conference on Emerging e-Learning Technologies and Applications, ICETA 2005, vol. 4, pp. 371-377, 2005.
[4] G. Sadasivan, N. Brownlee, B. Claise, J. Quittek, "Architecture for IP Flow Information Export," RFC 5470, 2009.
[5] T. Zseby, E. Boschi, N. Brownlee, B. Claise, "IP Flow Information Export (IPFIX) Applicability," RFC 5472, 2009.
[6] J. Sučík, F. Jakab, "Measurement and Evaluation of Quality of Service Parameters in Computer Networks."
[7] B. Claise, "Cisco Systems NetFlow Services Export Version 9," RFC 3954, 2004.
[8] B. Claise, "Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information," RFC 5101, 2008.
[9] J. Giertl, R. Jakab, Ľ. Koščo, "Communication Protocol in Computer Network Performance Parameters Measurement," Proc. of the 4th International Information and Telecommunication Technologies Symposium, Federal University of Santa Catarina, Florianópolis, Brazil, December 14-16, 2005, pp. 161-162, ISBN 85-89264-05-X.
[10] A. Pekár, J. Giertl, M. Révés, P. Feciľak, "Simplification of the Real-Time Network Traffic Monitoring," Electrical Engineering and Informatics II, Proc. of the Faculty of Electrical Engineering and Informatics of the Technical University of Košice, Košice, pp. 272-277, ISBN 978-80-553-0611-7, 2011.
[11] F. Jakab, J. Giertl, R. Jakab, M. Kaščák, "Improving Efficiency and Manageability in IPFIX Network Monitoring Platform," Proc. of the 6th International Network Conference, INC 2006, Plymouth, UK, July 11-14, 2006, University of Plymouth, pp. 81-88, ISBN 1-84102-157-1, 2006.
[12] T. Mihok, J. Giertl, M. Révés, A. Pekár, P. Feciľak, "System for Centralized Management of the BasicMeter Tool," IP Networking 1: Theory and Practice, Žilina, 2011.
[13] The PostgreSQL Global Development Group (2011), PostgreSQL Documentation. [Online]. Available: http://www.postgresql.org/docs/current/static/
[14] J. Quittek, S. Bryant, B. Claise, P. Aitken, J. Meyer, "Information Model for IP Flow Information Export," RFC 5102, 2008.
[15] S. Chaudhuri, U. Dayal, "An Overview of Data Warehousing and OLAP Technology," ACM SIGMOD Record, vol. 26, no. 1, 1997.
[16] K. Bandarupalli (2010), Importance of Data Warehousing and its Design. [Online]. Available: http://www.techbubbles.com/sql-server/importanceof-data-warehousing-and-its-design/
[17] T. C. Hammergren, A. Simon, Data Warehousing For Dummies, Wiley, pp. 104-124, 2009.
[18] W. H. Inmon, Building the Data Warehouse, Wiley, pp. 3-9, 2002.
[19] R. Kimball, The Data Warehouse Toolkit, pp. 12-15, 1996.
[20] J. Quittek, T. Zseby, B. Claise, S. Zander, "Requirements for IP Flow Information Export (IPFIX)," RFC 3917, 2004.