Cacti 0.8.6 HowTo
The Cacti HowTo Section
Reinhard Scheck
Copyright 2006 The Cacti Group

Prerequisites

This chapter will guide you through some of the prerequisites for successfully setting up your cacti site.

Setting Up SNMP

This HowTo explains how to install and configure the Net-SNMP agent. At the time of writing, the latest version available is 5.4 (published on 12/06/2006).

Version History
Version 0.7 (02/16/2007): count entries in logfiles (thanks to gerdesj)
Version 0.6 (11/02/2006): added the "dontprintunits" keyword (thanks to netpoke2369)
Version 0.5 (09/22/2006): graph processes
Version 0.4 (08/30/2006): make Net-SNMP listen on TCP, and bind to a specific IP address
Version 0.3 (08/14/2006): build from sources instructions
Version 0.2 (08/11/2006): added SNMP version 3, "exec" and "proxy" directives
Version 0.1 (08/10/2006): initial release

Chapter I: Getting Net-SNMP binaries

Depending on your operating system, you'll find packages or tarballs to install Net-SNMP:

Linux
Usually every Linux distribution comes with Net-SNMP packages:
RedHat / Fedora: install the net-snmp, net-snmp-libs and net-snmp-utils packages
Debian / Ubuntu: install the libsnmp-base, libsnmp5, snmp and snmpd packages
SuSE: install the net-snmp package
Gentoo: simply emerge the net-snmp ebuild
Mandriva: install the libnet-snmp5, net-snmp and net-snmp-utils packages

AIX
Packages are available in the University of California repository:
release 5.0.6 for AIX 4.1
release 5.0.6 for AIX 4.2
release 5.2 for AIX 4.3
release 5.2 for AIX 5.1
release 5.2 for AIX 5.2
release 5.2 for AIX 5.3

Solaris
Solaris 10 ships with Net-SNMP 5.0.9. For older Solaris releases, packages are available in the Sunfreeware repository:
release 5.1.1 for Solaris 9 Sparc
release 5.1.1 for Solaris 9 Intel
release 5.1.1 for Solaris 8 Sparc
release 5.1.1 for Solaris 8 Intel
release 5.1.1 for Solaris 7 Sparc
release 5.1.1 for Solaris 2.6 Sparc
release 5.1.1 for Solaris 2.5 Sparc
For these packages to work, the OpenSSL and GCC libraries need to be installed as well.
Tarballs are also available from the Net-SNMP main site:
release 5.2.2 for Solaris 9 on sun4u hardware
release 5.2.2 for Solaris 8 on sun4u hardware
release 5.2.2 for Solaris 7 on sun4u hardware
These tarballs have to be extracted from /, as they contain absolute paths. Files are copied to /usr/local/share/snmp, /usr/local/libs, /usr/local/include/net-snmp, /usr/local/man, /usr/local/bin and /usr/local/sbin.

HP-UX
Tarballs are available from the Net-SNMP main site:
release 5.4 for HP-UX 11.11 PA-RISC
release 5.4 for HP-UX 11.00 PA-RISC
release 5.4 for HP-UX 10.20 PA-RISC
These tarballs have to be extracted from /, as they contain absolute paths. Beware that the binaries in these tarballs are not stripped, which wastes a lot of space. Files are copied to /usr/local/share/snmp, /usr/local/libs, /usr/local/include/net-snmp, /usr/local/man, /usr/local/bin and /usr/local/sbin.

FreeBSD
Net-SNMP is available through the ports.

Chapter II: Building the Net-SNMP agent from sources

If you can't find binaries for your architecture, you can build the Net-SNMP agent from sources. The latest sources are available here. Here's how to get the configure options of an already running Net-SNMP agent:
$ snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.100.6.0
UCD-SNMP-MIB::versionConfigureOptions.0 = STRING: "'--disable-shared' '--with-mib-modules=host/hr_system'"

Some useful mib modules are:
mibII/mta_sendmail, to graph MTA (Sendmail, Postfix, etc.) statistics
diskio, to graph I/O statistics
ucd-snmp/lmsensors, for hardware monitoring (Linux and Solaris only)

Mib modules can be added like this:

$ ./configure --with-mib-modules="module1 module2"

To compile Net-SNMP and build a compressed archive, follow these steps:

$ ./configure --with-your-options
$ make
# mkdir /usr/local/dist
# make install prefix=/usr/local/dist/usr/local exec_prefix=/usr/local/dist/usr/local
# cd /usr/local/dist
# tar cvf /tmp/net-snmp-5.3.1-dist.tar usr
# gzip /tmp/net-snmp-5.3.1-dist.tar
# rm -rf /usr/local/dist

You can then copy the /tmp/net-snmp-5.3.1-dist.tar.gz file to other servers, and uncompress it from the root directory (everything will get extracted to /usr/local).

Chapter III: Configuring the Net-SNMP agent

Depending on how you've installed Net-SNMP, the main configuration file (snmpd.conf) is located in /etc/snmp (installation from package) or /usr/local/share/snmp (installation from tarball). Please note that you need to restart the snmpd daemon (or send it the HUP signal) whenever you modify snmpd.conf.

The minimum configuration is this one:

rocommunity public

This will enable SNMP version 1/2 read-only requests from any host, with the community name public. With this minimal configuration, you'll be able to graph CPU usage, load average, network interfaces, memory / swap usage, logged in users and number of processes.

You can restrict from which hosts SNMP queries are allowed:
rocommunity public 127.0.0.1
rocommunity test 87.65.43.21

By default Net-SNMP listens on UDP port 161 on all IPv4 interfaces. With the following example, Net-SNMP will listen on UDP port 10000 on the 10.20.30.40 IP address:

agentaddress 10.20.30.40:10000

You can also make it listen on TCP, which is supported by Cacti:

agentaddress tcp:161

The "tcp" keyword can then be used in Cacti.

For those who want some more security, you can use the SNMP version 3 protocol, with MD5 or SHA hashing:

createuser frederic MD5 mypassphrase DES
group groupv3 usm frederic
view all included .iso 80
access groupv3 "" any auth exact all all all

This creates a user "frederic" whose password is "mypassphrase". To test it:

# snmpget -v 3 -l AuthNoPriv -u frederic -A mypassphrase 10.50.80.45 sysname.0
SNMPv2-MIB::sysName.0 = STRING: cyclopes

In Cacti, add your device, choose SNMP version 3, and fill in the username and password fields accordingly.
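Since the createuser line above also names DES, the same user can additionally be queried with encryption. A hedged example (with Net-SNMP, if only one passphrase is given in createuser, the privacy passphrase defaults to the authentication passphrase):

# snmpget -v 3 -l authPriv -u frederic -a MD5 -A mypassphrase -x DES -X mypassphrase 10.50.80.45 sysname.0
SNMPv2-MIB::sysName.0 = STRING: cyclopes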
Now that you're done with access control, add these two lines to snmpd.conf to indicate the location and contact name of your device:

syslocation Bat. C2
syscontact someone@somewhere.org

They will then appear in the Cacti management interface.

Some OIDs return a unit, e.g. "-153 0.1 dbm". It's a safe idea to turn this off, by adding this to snmpd.conf:

dontprintunits true

The next step is to graph filesystems in Cacti; the easiest way is to add this line to snmpd.conf:

includealldisks

When you run the "ucd/net - Get Monitored Partitions" Data Query, all the mounted filesystems will show up.

If you want a filesystem not to be listed here, add this line to snmpd.conf:
ignoredisk /dev/rdsk/c0t2d0

Unfortunately, some older versions of Net-SNMP do not fully work with the includealldisks keyword. You'll then have to list explicitly all filesystems you want to graph:

disk /
disk /usr
disk /var
disk /oracle

You can also specify NFS mount points. Please note that the Net-SNMP agent can only report filesystems which were mounted before it started. If you manually mount filesystems later, you'll have to reload the Net-SNMP agent (send the HUP signal).

You can also graph processes, by adding this to snmpd.conf:

proc httpd

The result will be accessible under the ucdavis.prtable.prentry tree:
prcount, the number of processes currently running with the name in question
prnames, the process name you're counting
In our example, the number of Apache processes will be available under the .1.3.6.1.4.1.2021.2.1.5 OID.

Chapter IV: Test your configuration

Once Net-SNMP is configured and started, here's how to test it:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysDescr.0 = STRING: Linux cronos 2.4.28 #2 SMP ven jan 14 14:12:01 CET 2005 i686

This basic query shows that your Net-SNMP agent is reachable. You can even query which Net-SNMP version is running on a host:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.100.2.0
UCD-SNMP-MIB::versionTag.0 = STRING: 5.2.1.2

An answer like this one
$ snmpwalk -v 1 -c foo localhost .1.3.6.1.2.1.1.1.0
Timeout: No Response from localhost

indicates that either the agent is not started, or that the community string is incorrect, or that this device is unreachable. Check your community string, add firewall rules if necessary, etc.

If using SNMP version 3, specifying an unknown user will result in this error message:

$ snmpget -v 3 -l AuthNoPriv -u john -A mypassphrase 10.50.80.45 sysname.0
snmpget: Unknown user name

An incorrect passphrase will result in this error message:

$ snmpget -v 3 -l AuthNoPriv -u frederic -A badpassphrase 10.50.80.45 sysname.0
snmpget: Authentication failure (incorrect password, community or key)

This query will show you what filesystems are mounted:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.9.1.2
UCD-SNMP-MIB::dskPath.1 = STRING: /
UCD-SNMP-MIB::dskPath.2 = STRING: /BB
UCD-SNMP-MIB::dskPath.3 = STRING: /dev/shm

If the answer is empty, it usually means the includealldisks keyword is not supported by your Net-SNMP agent (you'll have to list each filesystem you want to graph, as explained in the previous chapter).

Finally, this query will display your network interfaces:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.2.1.2.2.1.2
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifDescr.3 = STRING: eth1

Chapter V: Extending the Net-SNMP agent

A great functionality of Net-SNMP is that you can "extend" it. Let's run the /tmp/foo.sh script:
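The HowTo does not show the contents of /tmp/foo.sh; any script that prints a single value on stdout will do. A minimal, purely illustrative sketch (the -arg1 switch is accepted only so the call below matches):

#!/bin/sh
# /tmp/foo.sh -- hypothetical example script for Net-SNMP's exec directive.
# A real script would compute something useful (a queue length, a sensor
# reading, ...); this one just echoes a constant, ignoring its argument,
# so its output matches the example that follows.
echo 123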
$ /tmp/foo.sh -arg1
123

Now put this in snmpd.conf:

exec foo /bin/sh /tmp/foo.sh -arg1

The result of your script will be accessible under the ucdavis.exttable.extentry tree:
output of the script: ucdavis.exttable.extentry.extoutput
exit status: ucdavis.exttable.extentry.extresult
command: ucdavis.exttable.extentry.extcommand

You can check the result with this SNMP query:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.8.1
UCD-SNMP-MIB::extIndex.1 = INTEGER: 1
UCD-SNMP-MIB::extNames.1 = STRING: foo
UCD-SNMP-MIB::extCommand.1 = STRING: /bin/sh /tmp/foo.sh -arg1
UCD-SNMP-MIB::extResult.1 = INTEGER: 0
UCD-SNMP-MIB::extOutput.1 = STRING: 123
UCD-SNMP-MIB::extErrFix.1 = INTEGER: 0
UCD-SNMP-MIB::extErrFixCmd.1 = STRING:

extoutput translates to .1.3.6.1.4.1.2021.8.1.101. As "foo" is our first exec directive, add ".1" at the end of the OID.

In Cacti, use the "SNMP - Generic OID Template" like this:

Voila! The result of the /tmp/foo.sh script is now graphed in Cacti.

Now let's run this second script, which returns more than one result:

$ /tmp/bar.sh
456
789
It returns two values, one per line (this is important). Another way to call scripts from snmpd.conf is by specifying an OID, like this:

exec .1.3.6.1.4.1.2021.555 /bin/sh /tmp/bar.sh

Run this query:

$ snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.2021.555
UCD-SNMP-MIB::ucdavis.555.1.1 = INTEGER: 1
UCD-SNMP-MIB::ucdavis.555.2.1 = STRING: "/bin/sh"
UCD-SNMP-MIB::ucdavis.555.3.1 = STRING: "/tmp/bar.sh"
UCD-SNMP-MIB::ucdavis.555.100.1 = INTEGER: 0
UCD-SNMP-MIB::ucdavis.555.101.1 = STRING: "456"
UCD-SNMP-MIB::ucdavis.555.101.2 = STRING: "789"
UCD-SNMP-MIB::ucdavis.555.102.1 = INTEGER: 0
UCD-SNMP-MIB::ucdavis.555.103.1 = ""

The first line returned by the script is available at .1.3.6.1.4.1.2021.555.101.1, the second one at .1.3.6.1.4.1.2021.555.101.2, and so on. You can then use the "SNMP - Generic OID Template" in Cacti (one Data Source per OID).

Let's say you want to count the number of entries in a log file. Add this to snmpd.conf:

logmatch cactistats /home/cactiuser/cacti/log/cacti.log 120 SYSTEM STATS

the global count of matches will be available under the .1.3.6.1.4.1.2021.16.2.1.5.1 OID
the "Regex match counter" (which is reset with each file rotation) will be available under the .1.3.6.1.4.1.2021.16.2.1.7.1 OID

To list all the available variables, use this query:

$ snmpwalk -v 1 -c public localhost logmatch
UCD-SNMP-MIB::logMatchMaxEntries.0 = INTEGER: 50
UCD-SNMP-MIB::logMatchIndex.1 = INTEGER: 1
UCD-SNMP-MIB::logMatchName.1 = STRING: cactistats
UCD-SNMP-MIB::logMatchFilename.1 = STRING: /home/cactiuser/cacti/log/cacti.log
UCD-SNMP-MIB::logMatchRegEx.1 = STRING: SYSTEM STATS
UCD-SNMP-MIB::logMatchGlobalCounter.1 = Counter32: 301634
UCD-SNMP-MIB::logMatchGlobalCount.1 = INTEGER: 301634
UCD-SNMP-MIB::logMatchCurrentCounter.1 = Counter32: 6692
UCD-SNMP-MIB::logMatchCurrentCount.1 = INTEGER: 6692
UCD-SNMP-MIB::logMatchCounter.1 = Counter32: 1
UCD-SNMP-MIB::logMatchCount.1 = INTEGER: 0
UCD-SNMP-MIB::logMatchCycle.1 = INTEGER: 120
UCD-SNMP-MIB::logMatchErrorFlag.1 = INTEGER: 0
UCD-SNMP-MIB::logMatchRegExCompilation.1 = STRING: Success
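To read just the global counter mentioned above (for instance to feed the "SNMP - Generic OID Template"), a single snmpget is enough. A hedged example; the value will of course differ on your system:

$ snmpget -v 1 -c public localhost .1.3.6.1.4.1.2021.16.2.1.5.1
UCD-SNMP-MIB::logMatchGlobalCounter.1 = Counter32: 301634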
We'll then use another interesting directive, the "proxy" one. Let's take the Squid proxy as an example: when enabled, its SNMP agent listens on UDP port 3401. If you want to have system graphs and Squid graphs without declaring two devices in Cacti, add this to snmpd.conf:

proxy -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1

The Squid SNMP tree will be available under the .1.3.6.1.4.1.3495.1 branch. Let's query this host:

$ snmpwalk -v 1 -c public 10.151.33.3 sysdescr
SNMPv2-MIB::sysDescr.0 = STRING: Linux srv1.foo.com 2.6.8.1-12mdk #1 Fri Oct 1 12:53:41 CEST 2004 i686

And here's the Squid part (this specific OID returns the Squid version):

$ snmpwalk -v 1 -c public 10.151.33.3 .1.3.6.1.4.1.3495.1.2.3.0
SNMPv2-SMI::enterprises.3495.1.2.3.0 = STRING: "2.5.STABLE6"

You'll find how to enable the Squid SNMP agent here.

Special Installation Instructions

Find some installation instructions here.

Ubuntu Installation Instructions

Requirements:
One Linux machine. Almost any machine will do, but try to have at least 256 MB of RAM. I have tested cacti on a 1.4 GHz processor with 256 MB of RAM, running CentOS and monitoring around 350 devices, and it has run without a hitch for more than a month. Currently cacti resides on a 1.8 GHz machine with 512 MB of RAM, which also runs a Squid proxy server plus an intranet web/FTP server.
The cacti package - we will need to download this from http://cacti.net
RRDtool - this is the de facto package used by 95% of all NMS tools out on the net for graphing; details can be found at http://oss.oetiker.ch/rrdtool/
Xampp - the reason I am going for Xampp is that it makes a lot of things very easy to maintain (the Apache webserver, the MySQL database, the PHP language and all needed dependencies). Of course we can do it without Xampp, but you can search for those docs on the net.

Ubuntu installation:
1: The first step is to install Linux on our machine. For this example we will download the Ubuntu ISO image from the following site: http://cdimage.ubuntu.com/releases/gutsy/tribe-5/gutsy-desktop-i386.iso - this is the current latest version of Ubuntu. Note: there is also a server version of Ubuntu, but we will not use it here because it lacks a GUI.

2: Once downloaded, burn it onto a CD and then boot the machine you have decided to make your server with that CD.

3: Once the machine boots up, you will notice that it is running in live CD mode, i.e. the hard disk is not being used. You will find an Install icon on the top left of your screen; double click on it and go ahead with the install. The only problem you might have is during partitioning; as we are using a dedicated machine, it is best to go for automatic partitioning. (A detailed Ubuntu install guide is not really possible right now, but one can be found here: https://help.ubuntu.com/6.10/ubuntu/installation-guide/i386/index.html.) Also remember the username you create during the installation; in this example it is deadwait.

4: Once Ubuntu is installed, all further steps happen from within Ubuntu itself. The next thing to do is to make sure it is updated. For that we need access to the internet, so your network card will have to be configured. Make sure you remember the password you supplied during the installation, as it is needed for sudo.

4.1: Click on System --> Administration --> Network and, in the wired connection tab, supply your IP address, subnet mask and gateway.

4.2: If your internet access is through a proxy, click on System -> Preferences -> Network Proxy and supply your proxy server's IP address and port.

4.3: Open up a terminal window (Applications -> Accessories -> Terminal) and type in the following commands:

sudo aptitude update

and then

sudo aptitude install build-essential

4.4: Once build-essential is installed you are set to install Xampp, RRDtool and Cacti.

5: We will begin with Xampp - to learn in detail what Xampp is, check out the website http://www.apachefriends.org/en/xampp.html. Now we need to download Xampp for Linux by clicking on this link: http://jaist.dl.sourceforge.net/sourceforge/xampp/xampp-linux-1.6.3b.tar.gz. This is the current latest version of Xampp. Remember to download it to /opt. (The reason I am going for /opt is that this is what the Xampp website recommends; you can download the tarball anywhere, but it should be extracted under /opt.)

5.1: Now, assuming you have downloaded the file to /opt, you need to do the following next. (I am going to guide you in command line mode - it can also be done in GUI mode, but I get confused in GUI mode; as these docs will be open for editing later, anyone who wishes to add the GUI method can do so.) As usual, click on Applications -> Accessories -> Terminal and type in

cd /opt

This will bring you into the /opt directory. Then type in
sudo tar -xvzf xampp-linux-1.6.3b.tar.gz

What this does is unpack the file into its own directory. If you type in the command dir, you will now see that a new directory (lampp) has been created. Whether you want to delete the xampp-linux-1.6.3b.tar.gz file or not is up to you; if you want to delete it, the command is

sudo rm -rf xampp-linux-1.6.3b.tar.gz

What I usually do is move these files into my home folder. Assuming my login name is deadwait, my home folder is /home/deadwait, and the command to move the file is as follows:

sudo mv xampp-linux-1.6.3b.tar.gz /home/deadwait

Now for the cool part: in the same terminal type in

cd lampp

then type in

sudo ./lampp start

and your webserver along with MySQL and FTP will have started. To check, open up Firefox and type in http://localhost; you should get the Xampp screen.

5.2: Now let's take care of some basics. Cacti needs a database server, which we have already installed using Xampp; what we need to do now is create cacti's own database. Since you have opened http://localhost in Firefox, Xampp will ask you for its language preference; click on English. Then on the left pane you will see a link for phpMyAdmin; click on it. What you see now is a web based administration tool for MySQL. On the first page itself you will see an option named Create Database; in the field below type in cacti, since this is the name we will use for our database (of course you could name it whatever you want). Then go on to the next step.

6: So one part of our work is done; the next thing to do is to install RRDtool. You are going to love this: in a terminal type in the magic command

sudo aptitude install rrdtool

and that's it, RRDtool is installed. (Now for a bit of history: we could install all of cacti along with the webserver, PHP and MySQL by doing sudo aptitude install cacti, but we haven't done that because, if you are not comfortable with Linux, it could lead to a lot of confusion as to where the files are installed; also the package could break if an upgrade takes place.) At the same time let's install one more tool we need, SNMP, with the same command:

sudo aptitude install snmp

and then

sudo aptitude install snmpd

What is important to remember is that rrdtool gets installed as /usr/bin/rrdtool; we will need this path later.

7: Now the cream - the cacti installation. First we need to download the cacti package, which we can do from this link: http://www.cacti.net/downloads/cacti-0.8.6j.tar.gz. Save it to, for example, your Desktop, then open up a terminal and navigate to your Desktop. The command is (assuming your user login is deadwait)

cd /home/deadwait/Desktop

Remember Linux is case-sensitive, so "desktop" won't work, it has to be "Desktop". Once we are in Desktop, type in the following commands
sudo tar -xvzf cacti-0.8.6j.tar.gz

which will extract the files into a directory named cacti-0.8.6j. For ease of use, let's rename it to just cacti with the following command:

sudo mv cacti-0.8.6j cacti

Now that the directory is renamed, let's move it into our lampp directory so that we can access it via our webserver. To do so, run the following command:

sudo mv cacti /opt/lampp/htdocs

Now our cacti directory sits in lampp's webroot directory. Remember we created a database in MySQL named cacti; now we need to populate this database. Don't worry if you don't understand this part, just follow these steps.

7.2: Open up Firefox and again go to phpMyAdmin, http://localhost/phpmyadmin. On the left pane select the database which we have created, in our case cacti. Then on the right pane select Import, click on Browse, navigate to the directory /opt/lampp/htdocs/cacti, select the file cacti.sql and then click on Go.

7.3: Again in a terminal type in

cd /opt/lampp/htdocs/cacti/include

then type in

sudo nano config.php

nano is an editor which will open up the file config.php. At the beginning you will see these options:

$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cactiuser";
$database_password = "cactiuser";
$database_port = "3306";

You need to change the username and password so that it looks like this (root with an empty password is the Xampp default):

$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "root";
$database_password = "";
$database_port = "3306";

Then press Ctrl-X, confirm saving, and exit.

7.4: Now open up Firefox and type the following in the address bar: http://localhost/cacti. You will be greeted with a screen which is the beginning of the installation; just click on Next. On the next screen you will be asked if this is a new install, which of course it is. Confirm that the database user and the database name mentioned are correct (go back to step 7.3 and check), then click on Next.

7.5: The next page shows the base paths of all needed binaries. We will notice that the path for PHP is marked in red because the
path is wrong: the path shown in the installer is /usr/bin/php, and we have to change it to /opt/lampp/bin/php, then click Finish. Cacti is now installed.

It will open the cacti home page and ask for a username and password; type in admin as the username and admin as the password. It will then force you to change the password; type in the new password that you decide on and log in using it.

We need to do a bit more. Cacti works by polling the devices we set it up for, so let's set the poller to run every 5 minutes. Open up a terminal and type in the following command:

sudo nano /etc/crontab

This will open up the crontab file. Now at the end add the following line:

*/5 * * * * deadwait /opt/lampp/bin/php /opt/lampp/htdocs/cacti/poller.php > /dev/null 2>&1

then press Ctrl-X and exit. All along we have assumed the username used to log in to your machine is deadwait, hence deadwait appears in the crontab line above. Now we need to do one last thing; type the following command in a terminal so that the poller running as deadwait can write to cacti's rra and log directories:

sudo chown -R deadwait /opt/lampp/htdocs/cacti

That's it, you are done!!! Phew!

Basic Usage

This chapter may help you understand cacti's basic usage principles. Let me first say a word about the general way cacti works, but "theory" is quickly followed by some examples that may help with setting up the first graphs. Have fun!

01. Basic Principles

Cacti is a Monitoring Solution. As such, operation may be divided into three different tasks:
Data Retrieval

The first task is to retrieve data. Cacti will do so using its Poller. The Poller is executed from the operating system's scheduler, e.g. crontab for Unix flavored OSes. In current IT installations, you're dealing with lots of devices of different kinds, e.g. servers, network equipment, appliances and the like. To retrieve data from remote targets/hosts, cacti will mainly use the Simple Network Management Protocol (SNMP). Thus, all devices capable of using SNMP are eligible to be monitored by cacti. Later on, we demonstrate how to extend cacti's data retrieval capabilities with scripts, script queries and more.

Data Storage

There are lots of different approaches for this task. Some may use an (SQL) database, others flat files. Cacti uses rrdtool to store data. RRD is the acronym for Round Robin Database. RRD is a system to store and display time-series data (e.g. network bandwidth, machine-room temperature, server load average). It stores the data in a very compact way that will not expand over time, and it can create beautiful graphs. This keeps storage requirements at bay. Read more about this in the following chapters.

Data Presentation

One of the most appreciated features of rrdtool is the built-in graphing function. This comes in useful when combined with a commonly used webserver: it makes the graphs accessible from nearly any browser on any platform. Graphing can be done in very different ways: it is possible to graph one or many items in one graph, autoscaling is supported, and a logarithmic y-axis is available as well. You may stack items onto one another and print pretty legends denoting characteristics such as minimum, average, maximum and lots more. A small hand-written example of such a graph definition follows.
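To give an impression of what graphing looks like at the rrdtool level, here is a deliberately small sketch (illustrative only, not something cacti generates literally; the file and data source names are made up):

rrdtool graph traffic.png \
  --start -86400 --title "Traffic last 24h" --vertical-label "bits per second" \
  DEF:inoctets=router_traffic_in_5.rrd:traffic_in:AVERAGE \
  CDEF:inbits=inoctets,8,* \
  AREA:inbits#00CF00:"Inbound" \
  GPRINT:inbits:AVERAGE:"Average\: %8.2lf %s" \
  GPRINT:inbits:MAX:"Maximum\: %8.2lf %s\n"

One DEF reads a data source from an rrd file, the CDEF converts bytes to bits, the AREA draws it, and the GPRINTs print the legend values.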
Cacti

Cacti glues all this together. It is mainly written in PHP, a widely-used general-purpose scripting language that is especially suited for web development and can be easily embedded into HTML. Cacti provides the Poller and uses RRDTool for storage and graphing. All administrative information is stored in a MySQL database.

Cacti to rrdtool translation table

When using cacti, be it with or without current rrdtool knowledge, you may become confused by all those technical expressions. Let's try to translate them (Cacti notation, RRDTool notation, usage):

Data Template
  RRDTool notation: structure of an rrd file
  Usage: Used to define the structure for storing data. This Template will be applied to specific hosts to create real RRD files.

Data Source Item as part of a Data Template
  RRDTool notation: data source (ds)
  Usage: An RRD file may hold data for more than one single variable. Each one is named "data source" in RRDTool speech.

Data Source as a real instantiation of a Data Template when applied to a Device
  RRDTool notation: RRD file
  Usage: Yep, this is an ugly one. When a Data Template is applied, Cacti names the result "Data Source"; RRDTool says it's an RRD file.

Graph Template
  RRDTool notation: structure of an rrdtool graph statement
  Usage: Used to create a "raw rrdtool graph statement". This Template will be applied to specific hosts to create real RRD graphs.

Graph Template Item as a part of a Graph Template
  RRDTool notation: graph element
  Usage: This is a complex one. Each item will create parts of an RRDTool graph statement. Typically, this will include the (reference to the) DEF needed, LINEx/AREA/STACK along with a color for graph elements or GPRINTs for legends, (reference to a) CDEF and textual elements.

Graph as a real RRDTool graph statement, created when applying a Graph Template to a Device
  RRDTool notation: RRDTool graph statement
  Usage: The whole statement, including all options and graph elements.

You may be put off by all that template stuff. If you like a more practical approach, just skip to Why Templates?.

02. My First Graph
Now let's create the very first graph. I won't stick to the host cacti is running on, because this is a very special one. So I'm assuming you're running at least one other device. As cacti's roots are in network monitoring with SNMP, I will use some SNMP capable device; in this case, I chose the router of my home network. But you may of course choose any device that is SNMP enabled.

Let's start from the very beginning. Assuming you've just logged in, you'll see a page like this:

Choose either of those marked links to access the Devices page. Add a new Device like:

Now you're presented with the next page:

Description: Give this host a meaningful description.
Hostname: Fill in the fully qualified hostname for this device. Personally, I love to use DNS names instead of IP addresses, but you may choose either of them.
Host Template: Choose what type of host (host template) this is. The host template will govern what kinds of data should be gathered from this type of host. The magic of templates is explained later.
SNMP Community: Fill in the SNMP read community for this device. If you don't know it, use the string "public" as a start.
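If you are unsure about the community string, it can save a failed device entry to test it from the cacti server's command line first. A hedged example (replace hostname and community with your own values):

$ snmpget -v 1 -c public router.mydomain.org sysDescr.0

If the community is right, the reply shows the device's system description; a timeout points at a wrong community string, a firewall in between or a stopped agent.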
18 von 143 18.10.2007 21:35 Now hit Create to see: Please notice the information already retrieved from this device. Of course, this output pertains to my special device. The text may vary for your equipment. In case you see: there is an error with the SNMP Community String that must be fixed prior to graph generation. When scrolling down, you should see some more information, that was provided by assigning this device to the given Host Template. I'm aiming at SNMP - Interface Statistics: Now, back to the top of the page, select Create Graphs for this Host and find the following:
19 von 143 18.10.2007 21:35 Check the box next to an interface you want to get data for. A good choice is a row, where a Hardware Address (aka: MAC Address) or the like is shown. From the dropdown, select a graph template of your liking. But remember, that 64 bit graphs are only supported with SNMP V2 (and some more conditions). Finally, Create to get: You want to see your work immediately? So, here is the answer: You have to be patient. Assuming you did not forget to configure your cacti host's scheduler to run the poller every 5 minutes, you'll have to wait at least 10 minutes to see anything. Then, please move to Graph Management: and select the newly generated graph. Please notice, that I've filtered for the device. This was for demonstrating purpose only and to suppress all devices from the list I've already created.
20 von 143 18.10.2007 21:35 The last steps are not the recommended way to handle this. Later on, I'll show how to use the Graph tab and all the magic within. 03. More Graphs Now let's create some more graphs. Please go back to the Devices list and select your Device. Again, Create Graphs for this Host. First, select the wanted Graph from the dropdown, Non-Unicast Packets in this case. Then, please select the wanted interface: Now Create to see: Please perform this procedure a second time, choosing Unicast Packets this time:
and Create:

Now, again, have a cup of coffee. It takes two polling cycles before these new graphs get filled.

As there are three graphs now, the question arises how to handle the graph display in a more convenient manner. Please follow me to the next chapter to see the Graphs Tab in action!

04. Using Graphs Tab

This chapter shows how to use the Graphs Tab to view your results.

The Tree Mode

The start page of cacti shows two blue tabs when logging in with admin permissions, the Console and the Graphs. Users without special permissions will see the latter one only. If you click the Graphs Tab right after generating some graphs, you won't see anything yet. So let's fill it first. This can be done from the Devices page, when using cacti 0.8.6h. Select your device by entering a search pattern. Then, please select the checkbox to the right. From the Choose an Action dropdown, select Place on a Tree (Default Tree) to see:
Now click Go to see:

Accept this by selecting Yes and you're done. Now let's look at the results by selecting the blue Graphs Tab. You'll have to select your Device, my own routing device in this case. Notice the four new tabs to the right, one of them, the Graphs Tree Tab, being displayed all in red. One other thing to pay attention to is the little magnifying glass next to each graph. We'll explain this in a minute.
You will have noticed that this view displays all currently defined graphs for this host. In fact, as soon as you add more graphs to this host, they will automagically show up in this view. In this case, we've added the whole Host to the Graph Tree, but there are other options as well.

But first, please select the Graph itself by clicking anywhere on it. Now you'll see (by default) four new graphs, each of them showing a different timespan, from Daily to Yearly. The next image shows the two topmost of them:

Now to the magnifying glass. You've seen it in the previous graph, and now it appears again next to each of the four graphs. Let's click it to see:

The little red square was drawn by placing the cursor at one corner and dragging it to the diagonally opposite corner. Thus you define the area to be magnified. In this case, only the x-axis takes effect. You'll see:
That's nice, eh?

The List Mode

Now let's turn to the next Graphs View Mode, the List View. This is the second to last tab on the right side. Find the Filter by Host dropdown accompanied by an additional text field that allows for freetext filtering. I've selected the well-known router to find all three recently defined Graphs. From the headings, you may learn how many Graphs are in the result set after filtering; there may be more than one page. Now, I've selected the first and the third row. Selecting View yields the following result:

The display now shows both graphs side by side. Notice that the Legends are suppressed. The layout is defined by the user-specific
25 von 143 18.10.2007 21:35 values to be found under the Settings Tab. You may play with those values to design the layout to your likings. Please also notice, that the Tab changed from List View to Preview Mode automatically. To get more details of a specific view, you may again click on one of the graphs to see: The Magnifying Glass works as described above. The Preview Mode You've already had a short glance at this mode in the previous chapter. When selecting the rightmost tab you're presented with a list of all existing graphs, divided up into several pages. You may scroll to Next or Previous pages.
26 von 143 18.10.2007 21:35 Lets have a look at all those filtering capabilities. Most of those will hold for other lists as well. Lets start with the explicit selection of a host via Filter by Host: Notice the text field to the right to the Filter by Host. Text entered here will be searched in all existing Graph Titles:
Be aware of the fact that this text shows up in an SQL SELECT clause. If you remember your SQL skills, the percent (%) sign is used to make up partly qualified SQL SELECT clauses (wildcard). So look at the next image. It is also possible to use the underscore (_) for wildcarding a single character.

Why Templates?

You've surely seen all that Template stuff and may have asked yourself, "Why Templates?". You may compare them to macros or subroutines of commonly known programming languages. Imagine you would have to define all rrdtool create parameters to define the logical layout of each and every rrd file. And you would have to define all rrdtool graph parameters to create those nice graphs, for every new graph. Well, this would yield maximum flexibility, but maximum effort, too.

In most installations, however, there are lots of devices of the same kind. And there are lots of data of the same kind, e.g. traffic information is needed for almost every device. Therefore, the parameters needed to create a traffic rrd file are defined by a Data Template, in this case known as Interface - Traffic. These definitions are used by all Traffic-related rrd files.
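For illustration only, the kind of command such a Data Template spares you from writing by hand for every single interface looks roughly like this (a hedged sketch; the file name is made up and the RRAs cacti really defines are more numerous):

rrdtool create router_traffic_in_5.rrd \
  --step 300 \
  DS:traffic_in:COUNTER:600:0:U \
  DS:traffic_out:COUNTER:600:0:U \
  RRA:AVERAGE:0.5:1:600 \
  RRA:AVERAGE:0.5:6:700 \
  RRA:MAX:0.5:6:700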
The same approach is used for defining Graph Templates. This work is done only once, and all parameters defined within such a Graph Template are copied to all Graphs that are created using this Template.

The last type of Templates are the Host Templates. They are not related to any rrdtool stuff. The purpose of Host Templates is to group all Graph Templates and Data Queries (these are explained later) for a given Device type. So you will make up a Host Template e.g. for a specific type of router, switch, host and the like. By assigning the correct Host Template to each new Device, you'll never forget to create all needed Graphs. Well, nice stuff, isn't it? But here comes the bad news. Unlike a subroutine, Templates are not invoked at runtime:

Graph Templates
Good news! Almost every setting of a Graph Template is propagated to all related Graphs when saving the changes. But you may encounter problems when checking the Use Per-Graph Value (Ignore this Value) checkbox. When creating new Graphs, the latest definitions are taken into account.

Data Templates
No change of a Data Template is propagated to already existing rrd files. But most of them may be changed by using rrdtool tune from the command line. Pay attention not to append new Data Source Items to already existing rrd files. There's no rrdtool command to achieve this!

Host Templates
No change of a Host Template is propagated to already existing Devices. But when creating a new one, the latest definitions are taken into account. There's an easy (if perhaps a bit tedious) way to apply changes to already existing Devices: first change the Host Template to None, then change it back to the desired one. All new items are now associated with this Device. Attention! No items are deleted by this procedure.

My first Data Template

For this task, let's stick to SNMP stuff. For you to be able to reproduce this example, I've chosen the UDP information of the IP MIB.

snmpwalk -c <community string> -v1 <device> udp
UDP-MIB::udpInDatagrams.0 = Counter32: 7675
UDP-MIB::udpNoPorts.0 = Counter32: 128
UDP-MIB::udpInErrors.0 = Counter32: 0
UDP-MIB::udpOutDatagrams.0 = Counter32: 8406
... more to follow ...

As cacti does not use the MIBs but pure ASN.1 OIDs, let's search for the OID used as udpindatagrams:

snmpwalk -c <community string> -v1 -On <device> udp
.1.3.6.1.2.1.7.1.0 = Counter32: 7778
.1.3.6.1.2.1.7.2.0 = Counter32: 129
.1.3.6.1.2.1.7.3.0 = Counter32: 0
.1.3.6.1.2.1.7.4.0 = Counter32: 8514
... more to follow ...

The needed OID is .1.3.6.1.2.1.7.1.0. Now learn how to enter this into a new Cacti Data Template: please proceed to Data Templates and filter for SNMP. Check the SNMP - Generic OID Template
29 von 143 18.10.2007 21:35 After clicking Go, you're prompted with a new page to enter the name for the new Data Template: Due to the filter defined above, you won't see the new Template at once, so please enter udp as a new filter to find: Now select this entry to change some definitions according to the following images:
for the upper half of the page and for the lower one. Please pay attention to change the MAXIMUM value to 0 to prevent data suppression for values exceeding 100. And you saw the OID .1.3.6.1.2.1.7.1.0 from above, didn't you? Please copy another one for OID .1.3.6.1.2.1.7.4.0, using the description udpoutdatagrams.

Name: The Title of the Data Source will be derived from this. If Use Per-Data Source Value (Ignore this Value) is unchecked, the string entered here is taken literally. Checking this box allows for target-specific values by substituting cacti's built-in variables (|host_description| will be substituted by the description of the host this Data Template will be associated with).
Data Input Method: This selection box allows you to associate this Data Template with a specific Data Input Method and the output variables defined therein. For SNMP data use the predefined method Get SNMP Data.

Associated RRA's: RRA's define how to store retrieved data and how to consolidate them. Find more about this topic in the rrdtool related sections. This example is built with the predefined settings for RRA's.

Step: Defines the interval size in seconds between two polling requests. Default is 300 seconds.

Data Source Active: You may deactivate the Data Source here, e.g. to prevent using it.

Internal Data Source Name: Each Data Source (there may be more than one per rrd file) has its own name. It is used to access the data. Therefore, it is wise to choose some "self-explanatory" name. For SNMP data, I prefer to take the string representation of the OID (if it is not too long).

Minimum Value: When updating the data source, rrdtool will skip all values that are lower than this Minimum Value. Using negative values requires this to be changed.

Maximum Value: When updating the data source, rrdtool will skip all values that are higher than this Maximum Value. Pay attention! When creating a new data source, this value defaults to 100. This is not always a good choice. Entering 0 will result in "no Maximum Value".

Data Source Type: This defines how rrdtool handles the data. While snmpwalking e.g. UDP OIDs, you will notice that all data are associated with some type. The most important ones are:
COUNTER: Data representation that counts all occurrences (e.g. udpindatagrams) since the SNMP agent's start time. Can be compared to the mileage of a car, so this value will always increase. It will decrease only if the SNMP agent is restarted. To get the data for the last interval, rrdtool builds the difference between two data points.
GAUGE: Data representation that always shows the actual value. Can be compared to the actual speed of a car.

Heartbeat: If rrds are not updated within Heartbeat seconds, the needed data point is assumed to be NaN ("not a number" = no valid data).

OID: Numerical representation of the OID that will be used for querying the target to retrieve data.

That's all, for now.

My first Graph Template

Now let's generate the Graph Template for those already generated Data Templates. Please go to Graph Templates and Add a new one. Now you have to fill in some global parameters:

on the lower part of the page, please fill in:

and Create to see:
where

Name: The Name for this Graph Template. Find this in the Graph Templates list.
Title: The Title to be displayed on Graphs generated from this Template. Some cacti-specific variables are allowed; one of these is |host_description|, which takes the host's description from the Devices definition to generate the Title.
Vertical Label: You may specify a string as a label for the y-axis of the graph.

Now let's add some Graph Template Items. They specify which Data Sources defined by some Data Template should be displayed on the Graph. Please click Add as shown on the last image:

Data Source: Select the needed Data Source from the dropdown list: udpindatagrams.
Color: Find a nice color from the dropdown for this item.
Graph Item Type: Graph Items may be of type AREA or LINEx, where x is the thickness of the line.
Text Format: This string is printed as part of the Legend.

Now click Save to see:

I always appreciate some nice legends to see the numbers for e.g. maximum, average and last value. There's a shortcut for this:

Press Save to see three legend items created in one step!
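Roughly speaking, the AREA item plus the three legend items defined this way end up as rrdtool graph arguments along these lines (an illustrative sketch, not the exact statement cacti builds; the DEF that binds udpindatagrams to its rrd file is omitted, and color and formats will follow your own choices):

AREA:udpindatagrams#FF0000:"UDP Datagrams In" \
GPRINT:udpindatagrams:LAST:"Current\: %8.2lf %s" \
GPRINT:udpindatagrams:AVERAGE:"Average\: %8.2lf %s" \
GPRINT:udpindatagrams:MAX:"Maximum\: %8.2lf %s\n"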
35 von 143 18.10.2007 21:35 Now let's turn to the second data source. This works very much the same way. So see all four images in sequence:
36 von 143 18.10.2007 21:35 Please scroll down to the bottom of the page and Save your whole work. Now, you may add this new Graph Template to any hosts that responds to those udp OIDs. But in this case, please wait a moment. Let's first proceed to the Host Templates and use this new Graph template for our first own Host Template. My first Host Template The next task is creating a new Host Template. Switch over to Host Templates and Add:
37 von 143 18.10.2007 21:35 and fill in the name of this new Template: Now you'll find two sections added. First, let's deal with Associated Graph Templates. The Add Graph template select box holds all defined Graph Templates. Select the one we've just created and Add it: Next, let's add the Data Query already selected above:
Now, Save your work. That's all.

Using Templates

Using Host Templates

Surely you want to see this Host Template in action. This was already described in My first Graph, where we took the ucd/net SNMP Host as a Host Template. So please select a Host that is already defined in cacti:
and Save. Then scroll down to see the Associated Graph Template and the Associated Data Query:

Now select Create Graphs for this Host from the top of the page. You'll be presented with a new page to select the wanted Graphs:

Select our new UDP thingy and some Traffic Graph Template for an interesting interface and Create. The result is displayed on the next page:
You'll have to wait for two polling cycles for data to be filled in.

Using the Graph Template

It is also possible to use only the Graph Template created above; it is not necessary to associate a Host Template to get the new UDP stuff. As an example, please select from the Device list an arbitrary device that responds to SNMP requests for UDP (see My first Data Template). Scroll down to the Associated Graph Templates section, select the UDP Traffic Graph Template and Add:

Again, select Create Graphs for this Host:
and Create:

Please notice that not only a creation message appears: the Graph Template just selected is grayed out and its checkbox has disappeared. This is to make clear which Graph Templates were already chosen, to prevent unwanted duplication. Please select Edit this Host again to see what changed in the Associated Graph Templates section:

The Status of this Graph Template has changed to Is Being Graphed. You may click Edit to jump to Graph Management and see your graph:
Advanced Magic

This chapter shows how to extend cacti's built-in capabilities with scripts and queries. Some of them are of course part of the standard cacti distribution files.

Scripts and Queries extend cacti's capabilities beyond SNMP. They allow for data retrieval using custom-made code. This is not even restricted to certain programming languages; you'll find php, perl, shell/batch and more. These scripts and queries are executed locally by cacti's poller. But they may retrieve data from remote hosts by different protocols, e.g.
ICMP: e.g. ping to measure round trip times and availability
telnet: e.g. programming telnet scripts to retrieve data available to sysadmins only
ssh: much like telnet, but more secure (and more complicated)
http(s): invoke remote cgi scripts to retrieve data via a web server, or parse web pages for statistical data (e.g. some network printers)
snmp: e.g. use net-snmp's exec/pass functions to call remote scripts and get data
ldap: e.g. to retrieve statistics about your ldap server's activities
use your own: e.g. invoke nagios agents
... and much more ...

There are two ways of extending cacti's built-in capabilities:

Data Input Methods, for querying single or multiple, but non-indexed readings:
temperature, humidity, wind, ...
cpu, memory usage
number of users logged in
IP readings like ipinreceives (number of ip packets received per host)
TCP readings like tcpactiveopens (number of tcp open sockets)
UDP readings like udpindatagrams (number of UDP packets received)
...

Data Queries, for indexed readings:
network interfaces with e.g. traffic, errors, discards
other SNMP tables, e.g. hrstoragetable for disk usage
you may even create Data Queries as scripts, e.g. for querying a name server (index = domain) for requests per domain

By using the Exporting and Importing facilities, it is possible to share your results with others.

Common Tasks

In principle, it is possible to divide the following tasks into three different parts:
how to retrieve data
how to store data
how to present data

Create own Scripts

Graphs based on a single OID

Ha! You do not remember? See the previous chapter on My first Data Template. Everything was already explained there!

A Simple Data Input Method

Find more about this topic in the cacti documentation: Chapter 9. Data Input Methods. All steps are explained in detail now.

Chapter I: Create a Data Input Method
Data Input Method returning a single value

Let's start with a simple script that takes a hostname or IP address as an input parameter and returns a single value. You may find this one as <path_cacti>/scripts/ping.pl:

#!/usr/bin/perl
$ping = `ping -c 1 $ARGV[0] | grep icmp_seq`;
$ping =~ m/(.*time=)(.*) (ms|usec)/;
print $2;

To define this script as a Data Input Method in cacti, please go to Data Input Methods and click Add. You should see:

Please fill in the Name, select Script/Command as Input Type and provide the command that should be used to retrieve the data. You may use <path_cacti> as a symbolic name for the path to your cacti installation. Those commands will be executed from crontab, so pay attention to providing the full path to binaries if required (e.g. /usr/bin/perl instead of perl). Enter all Input Parameters in <> brackets. Click create to see:

Now let's define the Input Fields. Click Add as given above to see:
The dropdown Field [Input] contains one single value only. This is taken from the Input String above. Fill in the Friendly Name to serve your needs. The Special Type Code allows you to provide parameters from the current Device to be queried; in this case, the hostname will be taken from the current device. Click create to see:

At last, define the Output Fields. Again, click Add as described above:

Provide a short Field [Output] name and a more meaningful Friendly Name. As you will want to save this data, select Update RRD File. Create to see:
Click Save and you're done.

Chapter II: Create a Data Template

Now you want to tell cacti how to store the data retrieved from this script. Please go to Data Templates and click Add. You should see:

Fill in the Data Template's Name with a reasonable text. This name will be used to find this Template among others. Then, please fill in the Data Source Name. This is the name given to the host-specific Data Source. The variable |host_description| is taken from the actual Device; this is to distinguish data sources for different devices. The Data Input Method is a dropdown containing all known scripts and the like. Select the Data Input Method you just created. The Associated RRA's field is filled by default; at the moment there's no need to change this. The lower part of the screen looks like:
47 von 143 18.10.2007 21:35 The Internal Data Source Name may be defined at your wish. There's no need to use the same name as the Output Field of the Data Input Method, but it may look nicer. Click create to see: Notice the new DropDown Output Field. As there is only one Output Field defined by our Data Input Method, you'll see only this. Here's how to connect the Data Source Name (used in the rrd file) to the Output Field of the Script. Click Save and you're done. Chapter III: Create a Graph Template Now you want to tell cacti, how to present the data retrieved from this script. Please go to Graph Templates and click Add. You should see:
48 von 143 18.10.2007 21:35 Fill in Name and Title. The variable host_description will again be filled from the Device's definition when generating the Graph. Keep the rest as is and Create. See: Now click Add to select the first item to be shown on the Graphs:
Select the correct Data Source from the dropdown, fill in a color of your liking and select AREA as the Graph Item Type. You will want to fill in a Text Format that will be shown underneath the Graph as a legend. Again, Create:

Notice that not only an entry was made under Graph Template Items, but under Graph Item Inputs as well. Don't bother with that now. Let's fill in some more nice legends, see:

Notice that the Data Source is filled in automagically. Select LEGEND as the Graph Item Type (it is not really a Graph Item Type in rrdtool-speak, but a nice time-saver), and click Create to see:
50 von 143 18.10.2007 21:35 Wow! Three items filled with one action! You may want to define a Vertical Label at the very bottom of the screen and Save. Chapter IV: Apply the Graph Template to your Device Now go to the Devices and select the one of your choice. See the Associated Graph Templates in the middle of this page: Select your newly created Graph template from the Add Graph Template DropDown. Click Add to see:
The Template is added and shown as Not Being Graphed. At the top of the page you'll find the Create Graphs for this Host link. Click this to see:

Check the box that belongs to the new template and Create. See the results:

This will automatically
create the needed Graph description from the Graph Template. As you may notice from the success message, this Graph carries the host's name in it: router - Test ping (router is the hostname in this example).
create the needed Data Source description from the Data Template. Again, you will find the host's name substituted for |host_description|.
create the needed rrd file with definitions from the Data Template. The name of this file is derived from the Host and the Data Template, in conjunction with an auto-incrementing number.
create an entry in the poller_table to instruct cacti to gather data on each polling cycle.

You'll have to wait for at least two polling cycles to find data in the Graph. Find your Graph by going to Graph Management, filtering for your host and selecting the appropriate Graph (there are other methods as well). This may look like:
More Scripts

It is possible to operate scripts not only with one but with many input and output parameters. As an example, let's create a script version of the UDP Packets In/Out stuff. The solution using the SNMP - Generic OID Template was already shown in Why Templates?.

Chapter I: The Code

The script will be implemented in perl (as I have no profound knowledge of php). As such, it should execute on most platforms.

#!/usr/bin/perl -w
# --------------------------------------------------
# ARGV[0] = <hostname>   required
# ARGV[1] = <snmp port>  required
# ARGV[2] = <community>  required
# ARGV[3] = <version>    required
# --------------------------------------------------
use Net::SNMP;

# verify input parameters
my $in_hostname  = $ARGV[0] if defined $ARGV[0];
my $in_port      = $ARGV[1] if defined $ARGV[1];
my $in_community = $ARGV[2] if defined $ARGV[2];
my $in_version   = $ARGV[3] if defined $ARGV[3];

# usage notes
if ( (! defined $in_hostname ) ||
     (! defined $in_port ) ||
     (! defined $in_community ) ||
     (! defined $in_version ) ) {
    print "usage:\n\n $0 <host> <port> <community> <version>\n\n";
    exit;
}

# list all OIDs to be queried
my $udpindatagrams  = ".1.3.6.1.2.1.7.1.0";
my $udpoutdatagrams = ".1.3.6.1.2.1.7.4.0";

# get information via SNMP
# create session object
my ($session, $error) = Net::SNMP->session(
    -hostname  => $in_hostname,
    -port      => $in_port,
    -version   => $in_version,
    -community => $in_community,
    # please add more parameters if there's a need for them:
    # [-localaddr    => $localaddr,]
    # [-localport    => $localport,]
    # [-nonblocking  => $boolean,]
    # [-domain       => $domain,]
    # [-timeout      => $seconds,]
    # [-retries      => $count,]
    # [-maxmsgsize   => $octets,]
    # [-translate    => $translate,]
    # [-debug        => $bitmask,]
    # [-username     => $username,]     # v3
    # [-authkey      => $authkey,]      # v3
    # [-authpassword => $authpasswd,]   # v3
    # [-authprotocol => $authproto,]    # v3
    # [-privkey      => $privkey,]      # v3
    # [-privpassword => $privpasswd,]   # v3
    # [-privprotocol => $privproto,]    # v3
);

# on error: exit
if (!defined($session)) {
    printf("error: %s.\n", $error);
    exit 1;
}

# perform get requests for all wanted OIDs
my $result = $session->get_request(
    -varbindlist => [$udpindatagrams, $udpoutdatagrams]
);

# on error: exit
if (!defined($result)) {
    printf("error: %s.\n", $session->error);
    $session->close;
    exit 1;
}

# print results
printf("udpindatagrams:%s udpoutdatagrams:%s",   # <<< cacti requires this format!
    $result->{$udpindatagrams},
    $result->{$udpoutdatagrams},
);

$session->close;

It should produce the following output when executed from the command line:

Output:
[prompt]> perl udp_packets.pl localhost 161 public 1
udpindatagrams:10121 udpoutdatagrams:11102

where "public" may be replaced by your community string. Of course, the numbers will vary.

Chapter II: Define the Data Input Method

To define this script as a Data Input Method in cacti, please go to Data Input Methods and click Add.
54 von 143 18.10.2007 21:35 You should see: Enter the name of the new Data Input Method, select Script/Command and type in the command to call the script. Please use the full path to the command interpreter. Instead of entering the specific parameters, type <symbolic variable name> for each parameter the script needs. Save: Now Add each of the input parameters in the Input Fields section, one after the other. All of them are listed in sequence, starting with <host>: <port>
<community> <version> We've used some of cacti's builtin parameters. When applied to a host, those variables will be replaced by the host's actual settings. Then, this command will be stored in the poller_command table. Now Save your work to see
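Putting it all together, the complete Input String stored by cacti will look roughly like the following; the interpreter path and the script location are examples and depend on your installation:

/usr/bin/perl <path_cacti>/scripts/udp_packets.pl <host> <port> <community> <version>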
After having entered all Input Fields, let's now turn to the Output Fields. Add the first one, udpindatagrams: Now udpoutdatagrams: Be careful to avoid typos. The strings entered here must exactly match those spit out by the script. Double check the Output Fields! Now, the results should look like
Finally Save and be proud! Chapter III: Create a New Data Template The previous step explained how to call the script that retrieves the data. Now it's time to tell cacti how to store it in rrd files. You will need a single Data Template only, even if two different output fields will be stored. rrd files are able to store more than one output field; rrdtool's name for those is data source. So we will create
1. one single Data Template representing one rrd file
2. two output fields/data sources
Data Queries for indexed values What is a Data Query? Here's the text from cacti's website (Chapter 10. Data Queries): Data queries are not a replacement for data input methods in Cacti. Instead they provide an easy way to query, or list data based upon an index, making the data easier to graph. The most common use of a data query within Cacti is to retrieve a list of network interfaces via SNMP.... While listing network interfaces is a common use for data queries, they also have other uses such as listing partitions, processors, or even cards in a router. One requirement for any data query in Cacti, is that it has some unique value that defines each row in the list. This concept follows that of a 'primary key' in SQL, and makes sure that each row in the list can be uniquely referenced. Examples of these index values are 'ifindex' for SNMP network interfaces or the device name for partitions. There are two types of data queries that you will see referred to throughout Cacti. They are script queries and SNMP queries. Script and SNMP queries are virtually identical in their functionality and only differ in how they obtain their information. A script query will call an external command or script and an SNMP query will make an SNMP call to retrieve a list of data.
All data queries have two parts, the XML file and the definition within Cacti. An XML file must be created for each query, that defines where each piece of information is and how to retrieve it. This could be thought of as the actual query. The second part is a definition within Cacti, which tells Cacti where to find the XML file and associates the data query with one or more graph templates. A New SNMP Data Query For SNMP Queries, you won't need to create a data retrieval script. Cacti will use SNMP to retrieve information. But cacti will need additional information on how the indexed data is structured. Think about a table (a MIB table in this case); you'll have to tell cacti about the table structure. This is done by defining an XML file (see: SNMP Query XML Syntax for all details). Basically, you have to define the index to tell cacti about the number of rows and about their unique index. This index is later used to access each row's data. Furthermore, you may define columns that serve as descriptive fields to be shown in the selection table. The XML file knows them as <direction>input</direction> Finally, you will have to define those fields that will be queried for the readings, e.g. ifinoctets, ifoutoctets, ifinerrors, ... The XML file knows them as <direction>output</direction> Let's have an example: the standard Interface MIB and the corresponding part of the /resources/snmp_queries/interfaces.xml file are displayed in the following table:
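As a rough sketch only (shortened for readability; the interfaces.xml shipped with your cacti installation contains more fields and a few additional header elements), the relevant parts of that file look roughly like this:

<interface>
 <name>Get SNMP Interfaces</name>
 <oid_index>.1.3.6.1.2.1.2.2.1.1</oid_index>
 <fields>
  <ifIndex>
   <name>Index</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.2.2.1.1</oid>
  </ifIndex>
  <ifOperStatus>
   <name>Status</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.2.2.1.8</oid>
  </ifOperStatus>
  <ifDescr>
   <name>Description</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.2.2.1.2</oid>
  </ifDescr>
  <ifInOctets>
   <name>Bytes In</name>
   <method>walk</method>
   <source>value</source>
   <direction>output</direction>
   <oid>.1.3.6.1.2.1.2.2.1.10</oid>
  </ifInOctets>
  <ifOutOctets>
   <name>Bytes Out</name>
   <method>walk</method>
   <source>value</source>
   <direction>output</direction>
   <oid>.1.3.6.1.2.1.2.2.1.16</oid>
  </ifOutOctets>
 </fields>
</interface>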
59 von 143 18.10.2007 21:35 and see the corresponding table structure when defining New Graphs for that device (my laptop):
Now you can map
Index: IF-MIB::ifIndex
Status: IF-MIB::ifOperStatus
Description: IF-MIB::ifDescr
Type: IF-MIB::ifType
Speed: IF-MIB::ifSpeed
All those are input parameters. They serve as descriptive information for each row, to help you identify the proper interface to use. The output parameters can be compared to the output parameters of a script (see the ping.pl script above); these are the readings from the device. By selecting the appropriate row (the one greyed out had been selected by me), you tell cacti to retrieve data from the interface defined by this index. But how does cacti know what output parameters it shall retrieve? See the Select a Graph Type DropDown. It specifies a Graph Template defined for this Data Query. The Graph Template in turn references a Data Template which incorporates the needed output parameters as Data Sources. This works quite the same way as for a Data Input Method. To sum up: for indexed values, the SNMP XML file is essentially a replacement for the Data Input Method described above. It tells cacti what data it should retrieve (direction: output). To help you identify the relevant indexes, the XML defines descriptive parameters (direction: input) to be displayed in the selection table. A walkthrough for this is given now. It is based on the already supplied interfaces.xml XML file. Create a Data Query to tell cacti how to retrieve data Go to Data Queries and click Add to see: Here, we are using the already existing interfaces.xml file. Select Get SNMP Data (Indexed) as Data Input Method. Create to see:
See that cacti found the XML file. Don't bother with the Associated Graph Templates at the moment. The success message does not include a check of the XML file's content. Now let's proceed to the next definitions. Create a Data Template to tell cacti how to store data This is an exact copy of the definitions made above, so I do not repeat everything here. The Data Input Method must be set to Get SNMP Data (Indexed). As this data source is a COUNTER type, select this as the Data Source Type. After saving the new Data Source definition, you may want to define a second Data Source in the same Data Template. To do so, select New from the Data Source Item heading to see: The name of the Data Source (ifoutoctets) is not shown in the Tab until you save your work. By default, Maximum Value is set to 100. This is way too low for an interface; all readings above this value will be stored as NaN by rrdtool. To avoid this, set it to 0 (no clipping) or to a reasonable value (e.g. the interface speed). Don't forget to specify COUNTER! You will have noticed that the name of the data source does not match the Name in interfaces.xml. Don't worry, the solution to this is given later on. Before leaving, pay attention to the bottom of the page:
This is specific to indexed SNMP Queries. You will have to check the last three items to make indexing work. All other items should be left alone; their values will be taken from the appropriate device definitions. Now Save and you're done with this step. Create a Graph Template to tell cacti how to present the data Now you want to tell cacti how to present the data retrieved by the SNMP Query. Again, this is done by merely copying the procedure described above. When selecting the Data Source, make sure to select from the just defined data sources. The next step is new and applies only to Data Queries: Add Graph Template to the Data Query Now it's time to re-visit our Data Query. Remember the Associated Graph Template we've left alone in the very first step? Now it will get a meaning. Go to Data Queries and select our new one. Then Add a new Associated Graph Template: Give it a Name and select the generated Graph Template. Create.
Select the correct Data Source and make sure to check the checkbox of each row. Apply a name to the Data Template and a title to the Graph Template. Use cacti variables as defined in Chapter 15. Variables - Data Query Fields. You may use all XML fields defined as input; in this example, input fields of interfaces.xml were used. Add those Suggested Values. They will be used to distinguish Data Sources and Graphs for the same device; without this, they would all carry the same name. At last: Save: Apply the Data Query to your Device Now go to your Device to add the Associated Data Query: Click Add and then Create Graphs for this Host to see:
Now select the wanted interface and Create to generate the Traffic Graph. As long as there's only one Associated Graph Template for that Data Query, there will be no Select a Graph Type DropDown.
From snmptable to XML Graphs (Data Query walkthrough) This walkthrough will show you how to implement a new SNMP Data Query. Assuming you know the SNMP table, the next steps show how to proceed. Chapter I: Building the raw XML file The starting point will be snmptable for a well known table of the HOST-RESOURCES MIB:

snmptable -c <community> -v 1 <host> HOST-RESOURCES-MIB::hrStorageTable

SNMP table: HOST-RESOURCES-MIB::hrStorageTable
hrstorageindex hrstoragetype hrstoragedescr hrstorageallocationunits hrstoragesize hrstorageused hrstorageallocationfailures
1 HOST-RESOURCES-TYPES::hrStorageOther Memory Buffers 1024 Bytes 1035356 59532 ?
2 HOST-RESOURCES-TYPES::hrStorageRam Real Memory 1024 Bytes 1035356 767448 ?
3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap Space 1024 Bytes 1048568 0 ?
4 HOST-RESOURCES-TYPES::hrStorageFixedDisk / 4096 Bytes 2209331 826154 ?
5 HOST-RESOURCES-TYPES::hrStorageFixedDisk /sys 4096 Bytes 0 0 ?
6 HOST-RESOURCES-TYPES::hrStorageFixedDisk /proc/bus/usb 4096 Bytes 0 0 ?
7 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot 1024 Bytes 102454 9029 ?
8 HOST-RESOURCES-TYPES::hrStorageFixedDisk /home 4096 Bytes 507988 446407 ?
9 HOST-RESOURCES-TYPES::hrStorageFixedDisk /usr/local 4096 Bytes 507988 17133 ?
10 HOST-RESOURCES-TYPES::hrStorageFixedDisk /var 4096 Bytes 507988 129429 ?
11 HOST-RESOURCES-TYPES::hrStorageFixedDisk /var/lib/nfs/rpc_pipefs 4096 Bytes 0 0 ?

This given, the first step will be the definition of an xml file based on those OIDs. So change to your <path_cacti>/resources/snmp_queries directory and create a file named hrstoragetable.xml. You may of course choose your own name, but to me it seems appropriate to take the name of the SNMP table itself. Before doing so, it is necessary to identify the Index of that table. Without looking at the MIB file, simply perform

snmpwalk -c <community> -v 1 -On <host> HOST-RESOURCES-MIB::hrStorageTable | more

.1.3.6.1.2.1.25.2.3.1.1.1 = INTEGER: 1
.1.3.6.1.2.1.25.2.3.1.1.2 = INTEGER: 2
.1.3.6.1.2.1.25.2.3.1.1.3 = INTEGER: 3
.1.3.6.1.2.1.25.2.3.1.1.4 = INTEGER: 4
.1.3.6.1.2.1.25.2.3.1.1.5 = INTEGER: 5
.1.3.6.1.2.1.25.2.3.1.1.6 = INTEGER: 6
.1.3.6.1.2.1.25.2.3.1.1.7 = INTEGER: 7
.1.3.6.1.2.1.25.2.3.1.1.8 = INTEGER: 8
.1.3.6.1.2.1.25.2.3.1.1.9 = INTEGER: 9
.1.3.6.1.2.1.25.2.3.1.1.10 = INTEGER: 10
.1.3.6.1.2.1.25.2.3.1.1.11 = INTEGER: 11
.1.3.6.1.2.1.25.2.3.1.2.1 = OID: .1.3.6.1.2.1.25.2.1.1
.1.3.6.1.2.1.25.2.3.1.2.2 = OID: .1.3.6.1.2.1.25.2.1.2

The first index is .1.3.6.1.2.1.25.2.3.1.1.1, but the Index Base is .1.3.6.1.2.1.25.2.3.1.1. This OID is needed for the xml file:

<interface>
 <name>get hrstoragedtable Information</name>
 <description>get SNMP based Partition Information out of hrstoragetable</description>
 <index_order_type>numeric</index_order_type>
 <oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>
 <fields>
  <hrstorageindex>
   <name>index</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
  </hrstorageindex>
 </fields>
</interface>

Let's talk about the header elements:
name: Short Name; choose your own if you want
description: Long Name
index_order_type: numeric instead of alphabetic sorting
oid_index: the index (base OID) of the table
There are more header elements, but for the sake of simplification, we'll stick to those for now. Let's turn to the fields. They correspond to the columns of the snmptable. For debugging purposes it is recommended to start with the Index field only; this will keep the XML as tiny as possible. The <fields> section contains one or more field definitions, each wrapped in its own opening and closing tag (<hrstorageindex>...</hrstorageindex> in this example). It is recommended, but not necessary, to take the textual representation of the OID or an abbreviation of it as the tag name.
name: Short Name
method: walk or get (representing snmpwalk or snmpget to fetch the values)
source: value = take the value of that OID as the requested value. Sounds ugly, but there are more options that we won't need for the purpose of this Howto
direction: input (for values that may be printed as COMMENTs or the like) or output (for values that shall be graphed, e.g. COUNTERs or GAUGEs)
oid: the real OID in numeric representation
Now save this file and let's turn to cacti to implement it. First, go to Data Queries and Add a new one: snmptable-dq-01
snmptable-dq-02 Fill in Short and Long Names as you wish. Enter the file name of the XML file and don't forget to choose Get SNMP Data (Indexed). Create to see snmptable-dq-03 It has now Successfully located XML file. But this does not mean that there are no errors, so let's go on with that. Turn to the Device you want to query and add the new Data Query as shown: snmptable-dev-01 Index Count Changed was chosen on purpose to tell cacti to re-index not only on reboot but each time the Index Count (e.g. the number of partitions) changes. When done, see the results as snmptable-dev-02 You'll notice that on my laptop there are 11 indices = 11 partitions. So the XML has worked up to now! To make this clear, select Verbose Query to see:
snmptable-dev-03 Chapter II: Insert all descriptive table columns Now let's put all descriptive table columns into the SNMP Query XML file. This refers to hrstoragetype, hrstoragedescr and hrstorageallocationunits. I like to take the XML field names from the snmptable output, but this is not a must.

<interface>
 <name>get hrstoragedtable Information</name>
 <description>get SNMP based Partition Information out of hrstoragetable</description>
 <index_order_type>numeric</index_order_type>
 <oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>
 <fields>
  <hrstorageindex>
   <name>index</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
  </hrstorageindex>
  <hrstoragetype>
   <name>type</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.2</oid>
  </hrstoragetype>
  <hrstoragedescr>
   <name>description</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
  </hrstoragedescr>
  <hrstorageallocationunits>
   <name>allocation Units (Bytes)</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
  </hrstorageallocationunits>
 </fields>
</interface>

The <name></name> information will later show up as a column heading. Don't forget to provide the correct base OIDs. Remember that the Index will always be appended to those OIDs; e.g. the first Description will be fetched from OID .1.3.6.1.2.1.25.2.3.1.3.1 (that is, the base OID .1.3.6.1.2.1.25.2.3.1.3 together with the appended index .1 forms the complete OID .1.3.6.1.2.1.25.2.3.1.3.1). Please notice that all fields that yield descriptive columns only take <direction>input</direction>. If you have completed your work, turn to the cacti web interface and select your host from the Devices list to see: Select the little green circle next to our SNMP XML to update your last changes. Then you'll see something like: snmptable-dev-10 When using Verbose Query, you'll now find snmptable-dev-11
And clicking Create Graphs for this Host will result in snmptable-dev-12 snmptable-dev-13 You're not supposed to really create graphs at this moment, because the XML is not yet complete. And you'll notice that the second column does not present very useful information, so it may be omitted in later steps. Chapter III: Get the Output Values Now let's modify the XML again. As said earlier, the second column is not very meaningful, so let's drop it. To get the output values, I appended the last two XML field descriptions, see:
<interface>
 <name>get hrstoragedtable Information</name>
 <description>get SNMP based Partition Information out of hrstoragetable</description>
 <index_order_type>numeric</index_order_type>
 <oid_index>.1.3.6.1.2.1.25.2.3.1.1</oid_index>
 <fields>
  <hrstorageindex>
   <name>index</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.1</oid>
  </hrstorageindex>
  <hrstoragedescr>
   <name>description</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.3</oid>
  </hrstoragedescr>
  <hrstorageallocationunits>
   <name>allocation Units (Bytes)</name>
   <method>walk</method>
   <source>value</source>
   <direction>input</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
  </hrstorageallocationunits>
  <hrstoragesize>
   <name>total Size (Units)</name>
   <method>walk</method>
   <source>value</source>
   <direction>output</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.5</oid>
  </hrstoragesize>
  <hrstorageused>
   <name>used Space (Units)</name>
   <method>walk</method>
   <source>value</source>
   <direction>output</direction>
   <oid>.1.3.6.1.2.1.25.2.3.1.6</oid>
  </hrstorageused>
 </fields>
</interface>

This works very much the same way as above:
- provide the fields hrstoragesize and hrstorageused
- provide a useful name
- don't forget to specify <direction>output</direction>
- give the corresponding base OIDs
Now we may proceed as described above: pressing the green circle runs the XML definitions against the host and updates the rows/columns. You will notice the "missing" second column only when Create Graphs for this Host is selected. Don't forget to set <direction>output</direction> for all variables/fields that should be stored in rrd files and be graphed! This is the mistake that occurs most often. Chapter IV: Defining the Data Template The Data Template will define how the data retrieved by the XML Query is saved. For more information about the principles of
operation, please see Common Tasks. Please go to Data Templates and Add: snmptable-dt-01 Define the Name of the Data Template. When defining the Name of the Data Source, do not forget to check the Use Per-Data Source Value (Ignore this Value) checkbox. This will come in useful later. Data Input Method will read Get SNMP Data (Indexed). Select Associated RRAs as usual (don't bother with my settings): Now on to the Data Source Items. I like giving them the names of the MIB OIDs, see: snmptable-dt-02 and Create. Now enter the second Data Source Item: snmptable-dt-03 snmptable-dt-04 Please pay attention to setting the Maximum Value to 0 (zero). Otherwise, all values exceeding the pre-defined value of 100 would be
stored as NaN. Now scroll down to the bottom of the page and check Index Type, Index Value and Output Type Id. Save and the Data Template is done. snmptable-dt-05 Chapter V: Defining the Graph Template The Graph Template will define how the data is presented. For more information about the principles of operation, please see Common Tasks. Please go to Graph Templates and Add: snmptable-gt-01 Fill in the header names and don't forget to check Use Per-Graph Value (Ignore this Value) for the Graph Template Title: and Create. snmptable-gt-02 Now Add the first Graph Item as usual: snmptable-gt-03
79 von 143 18.10.2007 21:35 Add the Legend and the second Graph Item: snmptable-gt-04 Again, add the Legend to end up with snmptable-gt-05 snmptable-gt-06
Chapter VI: Revisiting the Data Query According to Summing Up, we'll now have to revisit the Data Query snmptable-dq-10 Now Add the Associated Graph Templates and fill in a meaningful name. Select the newly created Graph Template to see: Create: snmptable-dq-11 snmptable-dq-12 Select the correct Data Sources and check the boxes on the right. Save. Now fill in some useful Suggested Values, first for the Data Template: Now apply suggested values for the Graph Template: snmptable-dq-13
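Purely as an illustration (the query_ field name has to match the XML above, everything else is free text), such Suggested Values could read:

Data Template name:   |host_description| - Used Space - |query_hrstoragedescr|
Graph Template title: |host_description| - Used Space (|query_hrstoragedescr|)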
snmptable-dq-14 Now the Data Query is complete: snmptable-dq-15 Chapter VII: Create Graphs for this Host Now we're almost done; everything's ready for use. So go to your device and select Create Graphs for this Host. Select some of the partitions you're interested in: and Create to see: snmptable-dev-20 Let's visit the Data Sources: snmptable-dev-21
snmptable-ds-01 As you can see, the Suggested Values of the Data Query defined the Name of the Data Template. So let's go to Graph Management: to see the title defined by the Suggested Values. When turning to the Graphs, you may see something like snmptable-gm-01 snmptable-graph-01 This might be the end of the show. While this should be enough to define some "easy" SNMP XML based Data Queries, there are some tricks and hints left to explain. As you may have noticed, the quantities defined by this example are counted in Units, not Bytes. This is somewhat inconvenient but may be changed. Let's wait for the next Chapter... Chapter VIII: VALUE/REGEXP in Action As said above, with the current XML, size values are measured in Units. The current Unit size is given by hrstorageallocationunits, but its reading looks like 4096 Bytes. To use this in any calculations, we must get rid of the string Bytes. This can be done with the VALUE/REGEXP feature of cacti's XML definitions. So please replace
<hrstorageallocationunits>
 <name>allocation Units (Bytes)</name>
 <method>walk</method>
 <source>value</source>
 <direction>input</direction>
 <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrstorageallocationunits>

with

<hrstorageallocationunits>
 <name>allocation Units (Bytes)</name>
 <method>walk</method>
 <source>value/regexp:([0-9]*) Bytes</source>
 <direction>input</direction>
 <oid>.1.3.6.1.2.1.25.2.3.1.4</oid>
</hrstorageallocationunits>

To prove this, go to your device and again Verbose Query our Data Query to see: snmptable-dev-30 Now select Create Graphs for this Host and notice the change in the column Allocation Units (Bytes). The string "Bytes" is gone:
To use these values, we define a CDEF: snmptable-dev-31 snmptable-cdef-01 Notice that with recent releases of cacti, it is possible to use query_* values within CDEFs. Finally, go to Graph Templates and use this CDEF with all Graph Items: Change the Base Value to 1024 (for Bytes -> kBytes) and the y-axis description to Bytes: snmptable-gt-10
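As a rough sketch of what such a CDEF boils down to (the exact item layout depends on your cacti release, and the query_ variable must match the field defined in the XML), the resulting RPN expression multiplies the current data source by the allocation unit size:

CURRENT_DATA_SOURCE,|query_hrstorageallocationunits|,*

In the CDEF editor this is typically entered as three items: Special Data Source CURRENT_DATA_SOURCE, Custom String |query_hrstorageallocationunits| and Operator *.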
Now the Graph looks like snmptable-gt-11 snmptable-graph-10
Script Data Queries
The goal of this HowTo is to show the principles of writing a Script Query, including the script, the xml and all needed templates. Why should you create such a thing? Suppose your target features some indexed readings that are not available via SNMP but by some other method (e.g. wget/cgi, ssh, NRPE, ...). Writing a Script Data Query works very much the same way as an SNMP Data Query. But nevertheless, I'll take you through all of the steps now. The example uses php. Why php? First, it's easier to copy stuff from already existing php scripts. Second, it is possible to use cacti functions. It should be possible to imagine how this works with other programming languages. Strictly speaking, I'm not much of a php expert, so be patient with me. Please pay attention: this HowTo will not explain how to write a Script Server Data Query (yes, there is such a thing!). It would not introduce that many changes, but this will be left to some other HowTo. Personally, my primary goal was to use an example that all users should be able to copy, to execute each and every step on their own. Unfortunately, there seems to be no example that is common enough and interesting at the same time. So I'm sorry to announce that this HowTo will show "Interface Traffic Data Gathering". Yes, I know, this is not that new. And surely, it will not be as fast as pure SNMP. So, to my shame, I suppose that this will never make it into any production environment. But, again, this is not the primary goal. Before starting the work, I feel encouraged to point out a drawback of this approach: cacti will start a php instance each time it has to fetch a value from the target device. This is not that fast, obviously. And it will not profit from the performance boost when switching over from cmd.php to cactid. Of course, even cactid will need to start php! And that's exactly where the thingy called Script Server Data Query drops in. But let's leave this for the next main chapter.
Chapter I: Basic script
The starting point will be some very basic php script. Put it into <path_cacti>/scripts/query_interface_traffic.php. It will show interface indices only for the given target host. The script takes two parameters as input, the hostname of the target and the string index. You have to implement the index method, as OO programmers would say. In this case, there's an "if" clause to process index requests. Output is a list of indices, each one on a separate line.

<?php
# deactivate http headers
$no_http_headers = true;

# include some cacti files for ease of use
include(dirname(__FILE__) . "/../include/config.php");
include(dirname(__FILE__) . "/../lib/snmp.php");

# define all OIDs we need for further processing
$oids = array(
    "index" => ".1.3.6.1.2.1.2.2.1.1",
);
$xml_delimiter = "!";

# all required input parms
$hostname       = $_SERVER["argv"][1]; # hostname/ip
# put your own community string here
$snmp_community = "public";            # community string
$snmp_version   = 1;                   # snmp version
$snmp_port      = 161;                 # snmp port
$snmp_timeout   = 500;                 # snmp timeout
$snmp_user      = "";                  # SNMP V3: user
$snmp_pw        = "";                  # SNMP V3: password
$cmd            = $_SERVER["argv"][2]; # one of: index/query/get
$snmp_retries   = 3;                   # snmp retries

# -------------------------------------------------------------------------
# main code starts here
#
# snmp walk will not be provided with snmp_user and snmp_password
# so this will not work for SNMP V3 hosts
# -------------------------------------------------------------------------
# -------------------------------------------------------------------------
# script MUST respond to index queries
# the command for this is defined within the XML file as
# <arg_index>index</arg_index>
# you may replace the string "index" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> index
# will list all indices of the target values
# e.g. in case of interfaces
# it has to respond with the list of interface indices
# -------------------------------------------------------------------------
if ($cmd == "index") {
    # retrieve all indices from target
    $return_arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
        $snmp_version, $snmp_user, $snmp_pw, $snmp_port, $snmp_timeout, $snmp_retries));

    # and print each index as a separate line
    for ($i=0; ($i < sizeof($return_arr)); $i++) {
        print $return_arr[$i] . "\n";
    }
# -------------------------------------------------------------------------
# -------------------------------------------------------------------------
} else {
    print "Invalid use of script query, required parameters:\n\n";
    print " <hostname> <cmd>\n";
}

function reindex($arr) {
    $return_arr = array();
    for ($i=0; ($i < sizeof($arr)); $i++) {
        $return_arr[$i] = $arr[$i]["value"];
    }
    return $return_arr;
}
?>

It will be called like this:

php -q query_interface_traffic.php <your target host> index
1
2
3
4

As you see, my <target> has 4 indices (interfaces).

Discussion: the reindex function
You may wonder why this function drops in. Well, let's have a look at cacti_snmp_walk. This function is part of cacti itself and eases the use of SNMP. That's why I call it here. But unfortunately, its output looks like

Array
(
    [0] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.1
            [value] => 1
        )
    [1] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.2
            [value] => 2
        )
    [2] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.3
            [value] => 3
        )
    [3] => Array
        (
            [oid] => 1.3.6.1.2.1.2.2.1.1.4
            [value] => 4
        )
)

The values of interest are stored in $return_arr[$i] = $arr[$i]["value"];. The reindex function collects them all.

Chapter II: XML File This given, the next step will be the xml file defining how to access index values only. So change to your <path_cacti>/resources/script_queries directory and create a file named iftraffic.xml. You may of course choose your own name.

<interface>
 <name>get Interface Traffic Information</name>
 <script_path>|path_php_binary| -q |path_cacti|/scripts/query_interface_traffic.php</script_path>
 <arg_prepend>|host_hostname|</arg_prepend>
 <arg_index>index</arg_index>
 <fields>
  <ifindex>
   <name>index</name>
   <direction>input</direction>
   <query_name>index</query_name>
  </ifindex>
 </fields>
</interface>

Let's talk about the header elements:
name: Short Name; choose your own if you want.
script_path: Whole command to execute the script from the cli. |path_php_binary| is a cacti builtin variable for /the/full/path/to/php. |path_cacti| in turn gives the path of the current cacti installation directory.
arg_prepend: All arguments passed to the script go here. There are some builtin variables, again. |host_hostname| represents the hostname of the device this query will be associated to.
arg_index: The string given here will be passed, just after all <arg_prepend> arguments, to the script for indexing requests. Up to now, this is the only method our script will answer to.
fields: All fields will be defined in this section. Up to now, only the index field is defined.
name: The name of this very field.
direction: input defines all fields that serve as descriptive information for a specific table index; these values will not be graphed but may be printed in e.g. graph titles by means of query_<name>.
output defines all fields that will yield a number that should be stored in some rrd file.
query_name: Name of this field when performing a query or a get request (this will be shown later, don't worry now).
Now save this file and let's turn to cacti to implement it. First, go to Data Queries and Add a new one: Fill in Short and Long Names as you wish. Enter the file name of the XML file and don't forget to choose Get Script Data (Indexed). Create to see It has now Successfully located XML file. But this does not mean that there are no errors, so let's go on with that. Turn to the Device you want to query and add the new Data Query as shown:
Index Count Changed was chosen on purpose to tell cacti to re-index not only on reboot but each time the Index Count (e.g. the number of interfaces) changes. When done, see the results as
To see your script at work, select Verbose Query to see:
Chapter III: Completing the Script
Now, let's improve our basic script. First, let's define all the variables (OIDs) this script should ask for.

<?php
# deactivate http headers
$no_http_headers = true;

# include some cacti files for ease of use
include(dirname(__FILE__) . "/../include/config.php");
include(dirname(__FILE__) . "/../lib/snmp.php");

# define all OIDs we need for further processing
$oids = array(
    "index"         => ".1.3.6.1.2.1.2.2.1.1",
    "ifstatus"      => ".1.3.6.1.2.1.2.2.1.8",
    "ifdescription" => ".1.3.6.1.2.1.2.2.1.2",
    "ifname"        => ".1.3.6.1.2.1.31.1.1.1.1",
    "ifalias"       => ".1.3.6.1.2.1.31.1.1.1.18",
    "iftype"        => ".1.3.6.1.2.1.2.2.1.3",
    "ifspeed"       => ".1.3.6.1.2.1.2.2.1.5",
    "ifhwaddress"   => ".1.3.6.1.2.1.2.2.1.6",
    "ifinoctets"    => ".1.3.6.1.2.1.2.2.1.10",
    "ifoutoctets"   => ".1.3.6.1.2.1.2.2.1.16",
);
$xml_delimiter = "!";

The next step removes all the builtin "magic strings" and replaces them by parameters. We'll have to change the XML template for that (see later on). cacti supports "snmp_retries" since version 0.8.6i. This is a global config option; access to those is available using "read_config_option".

# all required input parms
$hostname       = $_SERVER["argv"][1];
$snmp_community = $_SERVER["argv"][2];
$snmp_version   = $_SERVER["argv"][3];
$snmp_port      = $_SERVER["argv"][4];
$snmp_timeout   = $_SERVER["argv"][5];
$snmp_user      = $_SERVER["argv"][6];
$snmp_pw        = $_SERVER["argv"][7];
$cmd            = $_SERVER["argv"][8];
if (isset($_SERVER["argv"][9]))  { $query_field = $_SERVER["argv"][9]; };
if (isset($_SERVER["argv"][10])) { $query_index = $_SERVER["argv"][10]; };

# get number of snmp retries from global settings
$snmp_retries = read_config_option("snmp_retries");

The code responsible for the "index" option is left unchanged:

# -------------------------------------------------------------------------
# script MUST respond to index queries
# the command for this is defined within the XML file as
# <arg_index>index</arg_index>
# you may replace the string "index" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> index
# will list all indices of the target values
# e.g. in case of interfaces
# it has to respond with the list of interface indices
# -------------------------------------------------------------------------
if ($cmd == "index") {
    # retrieve all indices from target
    $return_arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
        $snmp_version, $snmp_user, $snmp_pw, $snmp_port, $snmp_timeout, $snmp_retries));

    # and print each index as a separate line
    for ($i=0; ($i < sizeof($return_arr)); $i++) {
        print $return_arr[$i] . "\n";
    }

The new code implements the query function as follows:

# -------------------------------------------------------------------------
# script MUST respond to query requests
# the command for this is defined within the XML file as
# <arg_query>query</arg_query>
# you may replace the string "query" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> query <function>
# where <function> is a parameter that tells this script,
# which target value should be retrieved
# e.g. in case of interfaces, <function> = ifdescription
# it has to respond with the list of
# interface indices along with the description of the interface
# -------------------------------------------------------------------------
} elseif ($cmd == "query") {
    $arr_index = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids["index"],
        $snmp_version, $snmp_user, $snmp_pw, $snmp_port,
        $snmp_timeout, $snmp_retries));
    $arr = reindex(cacti_snmp_walk($hostname, $snmp_community, $oids[$query_field],
        $snmp_version, $snmp_user, $snmp_pw, $snmp_port,
        $snmp_timeout, $snmp_retries));
    for ($i=0; ($i < sizeof($arr_index)); $i++) {
        print $arr_index[$i] . $xml_delimiter . $arr[$i] . "\n";
    }

Last option is the get function:

# -------------------------------------------------------------------------
# script MUST respond to get requests
# the command for this is defined within the XML file as
# <arg_get>get</arg_get>
# you may replace the string "get" both in the XML and here
# -------------------------------------------------------------------------
# php -q <script> <parms> get <function> <index>
# where <function> is a parameter that tells this script,
# which target value should be retrieved
# and <index> is the index that should be queried
# e.g. in case of interfaces, <function> = ifdescription
# <index> = 1
# it has to respond with
# the description of the interface for interface #1
# -------------------------------------------------------------------------
} elseif ($cmd == "get") {
    print (cacti_snmp_get($hostname, $snmp_community, $oids[$query_field] . "." . $query_index,
        $snmp_version, $snmp_user, $snmp_pw,
        $snmp_port, $snmp_timeout, $snmp_retries));
The rest of it is left unchanged. For the sake of completeness, I repeat it here:

# -------------------------------------------------------------------------
# -------------------------------------------------------------------------
} else {
    print "Invalid use of script query, required parameters:\n\n";
    print " <hostname> <community> <version> <snmp_port> <timeout> <user> <pw> <cmd>\n";
}

function reindex($arr) {
    $return_arr = array();
    for ($i=0; ($i < sizeof($arr)); $i++) {
        $return_arr[$i] = $arr[$i]["value"];
    }
    return $return_arr;
}
?>

You may want to copy all those fragments together and replace the basic script. Now, let's have a try using the command line. The "index" option was already shown, but is repeated here:

Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" index
1
2
3
4

Now, let's test the "query" option. The keyword "query" must be given along with the variable that should be queried. The script will now scan all indices and report the contents of the given variable as follows:

Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" query iftype
1!ethernetCsmacd(6)
2!0
3!0
4!ethernetCsmacd(6)

The output reports the index, followed by the chosen delimiter; then, the content of the requested variable is printed. Last, the "get" option is shown. The keyword "get" is required, followed again by the variable (see above). The last needed option is the index for which the "get" should be performed. Contrary to the "query" option, only one index is scanned, so the index number is not required and will not be printed.

Output:
[me@gandalf scripts]$ php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" get iftype 1
ethernetcsmacd(6)

The output is not followed by a "newline"!

Chapter IV: The Complete XML File
Of course, we now have to complete the XML file given in Chapter II. Find it at <path_cacti>/resources/script_queries/iftraffic.xml.

<interface>
 <name>get Interface Traffic Information</name>
 <script_path>|path_php_binary| -q |path_cacti|/scripts/query_interface_traffic.php</script_path>
 <arg_prepend>|host_hostname| |host_snmp_community| |host_snmp_version| |host_snmp_port| |host_snmp_timeout| "|host_snmp_username|" "|host_snmp_password|"</arg_prepend>
 <arg_index>index</arg_index>
 <arg_query>query</arg_query>
 <arg_get>get</arg_get>
 <output_delimeter>!</output_delimeter>
 <index_order>ifindex</index_order>
 <index_order_type>numeric</index_order_type>
 <index_title_format>|chosen_order_field|</index_title_format>

Let's discuss the changes:
arg_prepend: Some more parameters were added to provide all necessary values for the script. They are position-dependent. You may notice the strange ticks I've added for |host_snmp_username| and |host_snmp_password|. If you're not using those SNMP V3 parameters, they must be quoted, else the script would fail because two parameters would be missing. Unfortunately, I don't have any SNMP V3 capable system, so I was not able to test this version.
arg_query: The string passed to the script to perform query requests is given here. You may modify it to your liking (in this case, the script has to be modified accordingly).
arg_get: Same as above, for get requests.
output_delimiter: The delimiter used for query requests to separate index and value.
index_order (optional): Cacti will attempt to find the best field to index off of, based on whether each row in the query is unique and non-null. If specified, Cacti will perform this check on the fields listed here in the order specified. Only input fields can be specified and multiple fields should be delimited with a comma.
index_order_type (optional): For sorting purposes, specify whether the index is numeric or alphanumeric. numeric: the indexes in this script query are to be sorted numerically (i.e. 1,2,3,10,20,31). alphabetic: the indexes in this script query are to be sorted alphabetically (1,10,2,20,3,31).
index_title_format (optional): Specify the title format to use when representing an index to the user. Any input field name can be used as a variable if enclosed in pipes (|). The variable |chosen_order_field| will be substituted with the field chosen by Cacti to index off of (see index_order above). Text constants are allowed as well.
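A purely hypothetical example that combines a text constant with two of the input fields defined in this file would be:

<index_title_format>Interface |query_ifname| (|query_ifdescription|)</index_title_format>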
94 von 143 18.10.2007 21:35 Now lets turn to the fields section: <fields> <ifindex> <name>index</name> <direction>input</direction> <query_name>index</query_name> </ifindex> <ifstatus> <name>status</name> <direction>input</direction> <query_name>ifstatus</query_name> </ifstatus> <ifdescription> <name>description</name> <direction>input</direction> <query_name>ifdescription</query_name> </ifdescription> <ifname> <name>name</name> <direction>input</direction> <query_name>ifname</query_name> </ifname> <ifalias> <name>alias</name> <direction>input</direction> <query_name>ifalias</query_name> </ifalias> <iftype> <name>type</name> <direction>input</direction> <query_name>iftype</query_name> </iftype> <ifspeed> <name>speed</name> <direction>input</direction> <query_name>ifspeed</query_name> </ifspeed> <ifhwaddress> <name>hwaddress</name> <direction>input</direction> <query_name>ifhwaddress</query_name> </ifhwaddress> <ifinoctets> <name>inoctets</name> <direction>output</direction> <query_name>ifinoctets</query_name> </ifinoctets> <ifoutoctets> <name>outoctets</name> <direction>output</direction> <query_name>ifoutoctets</query_name> </ifoutoctets> </fields> </interface> These fields are related to the OID array of the script.
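To make the coupling explicit: every <query_name> entered here must have a matching key in the $oids array of the script, for example:

XML:    <query_name>ifinoctets</query_name>
Script: "ifinoctets" => ".1.3.6.1.2.1.2.2.1.10",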
95 von 143 18.10.2007 21:35 Attention: The query_name strings must match the OID names exactly! Please notice, that all but the last two fields use direction input. All variables representing numeric values to be graphed must be defined as direction output instead. Chapter V: See it at work! Lets return to the Device and perform a Verbose Query again, see Chapter II. The result is as follows Output: + Running data query [21]. + Found type = '4 '[script query]. + Found data query XML file at '/var/www/html/cacti/resource/script_queries/iftraffic.xml' + XML file parsed ok. + Executing script for list of indexes '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifindex='1'] index: 1 + Found item [ifindex='2'] index: 2 + Found item [ifindex='3'] index: 3 + Found item [ifindex='4'] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifstatus='up(1)'] index: 1 + Found item [ifstatus='up(1)'] index: 2 + Found item [ifstatus='up(1)'] index: 3 + Found item [ifstatus='up(1)'] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifdescription='ethernet0'] index: 1 + Found item [ifdescription=''] index: 2 + Found item [ifdescription=''] index: 3 + Found item [ifdescription='ethernet1'] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifname=''] index: 1 + Found item [ifname=''] index: 2 + Found item [ifname=''] index: 3 + Found item [ifname=''] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifalias=''] index: 1 + Found item [ifalias=''] index: 2 + Found item [ifalias=''] index: 3 + Found item [ifalias=''] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [iftype='ethernetcsmacd(6)'] index: 1 + Found item [iftype='0'] index: 2 + Found item [iftype='0'] index: 3 + Found item [iftype='ethernetcsmacd(6)'] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifspeed='100000000'] index: 1 + Found item [ifspeed='0'] index: 2 + Found item [ifspeed='0'] index: 3 + Found item [ifspeed='10000000'] index: 4 + Executing script query '/usr/bin/php -q /var/www/html/cacti/scripts/query_interface_traffic.php router snmp-get 1 161 600 "" " + Found item [ifhwaddress='00:30:30:2e:35:30:2e:37:46:2e:30:43:2e:30:30:2e:44:16:00:00:00:01:00'] index: 1 + Found item [ifhwaddress=''] index: 2 + Found item [ifhwaddress=''] index: 3 + Found item [ifhwaddress=''] index: 4 + Found data query XML file at '/var/www/html/cacti/resource/script_queries/iftraffic.xml' + Found data query XML file at '/var/www/html/cacti/resource/script_queries/iftraffic.xml' + Found data query XML file at '/var/www/html/cacti/resource/script_queries/iftraffic.xml' Read it carefully, and you'll notice, that all XML fields were scanned and the output shown. All? No, not all. The direction output fields are missing! 
But this is on purpose as those won't make sense as header fields but will be written to rrd files.
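If you want to double check one of those output fields anyway, you may call the script's get function by hand, exactly as shown in Chapter III; the returned counter value will of course differ on your target:

php -q query_interface_traffic.php <target> <community> 1 161 500 "" "" get ifinoctets 1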
96 von 143 18.10.2007 21:35 Chapter VI: Create the Data Template As usual, next step is to create the Data Template. Select that menu item and Add: script_query-data_template-add-01 and find: script_query-data_template-add-02 fill in Data Template Name, Data Source Name, and, most important, select Data Input Method to read Get Script Data (Indexed). Leave Associated RRAs as is. When creating the data template and graph template, you SHOULD check the "Use Per Data Source Value" checkbox for name & title. When you first create graphs using the data query, it will use the "Suggested Values" to name the templates. But then if you ever edit the templates and leave the "Use Per Data Source Value" unchecked, then saving will overwrite all the data source and graph names. (comment: thanks to user goldburt) Now, please proceed to the lower half script_query-data_template-add-03 enter the Internal Data Source Name. You may select this name freely. There's no need to match it to any of the XML field names. As the OID is a COUNTER, the Data Source Type must be selected appropriately. Save.
97 von 143 18.10.2007 21:35 script_query-data_template-add-04 For the second data source item, please select New. script_query-data_template-add-05 Again, fill in the Data Source Name. Pay attention to set the maximum value to 0 to avoid clipping it off during updating of the rrd file. COUNTER has to be set as done above. Important! You have to select the marked Index fields! Now, save again and you're done. Chapter VII: Create the Graph Template Now, its time for the Graph Template. Select this menu item and Add.
98 von 143 18.10.2007 21:35 and fill in the values as usual: Enter the y-axis description on the lower part of the screen Now Save. Next, fill in the Graph Items Select the Data Source from our Data Template, take the color and select AREA, enter some text Save and add the next graph item. Now, we're going to use the "LEGEND" timesaver again:
99 von 143 18.10.2007 21:35 For the next step, it's necessary to remove the newline added with the last action. Please select the 4th item as follows and remove the newline by deselecting the checkbox Now lets add the same data source again, but as a LINE1, MAXimum with a slightly changed color. Newline is checked this time
Pooh. Now let's apply the same procedure for the Outgoing Traffic. Personally, I love the outgoing traffic to be presented on the negative y-axis, so we'll have to apply some CDEF magic to some items. Let's see. Please pay attention when adding the "LEGEND" stuff: no CDEF is to be applied in this case (else the legends would show negative values). Again, select the last legend item
to remove the newline and add a new LINE1, MAXimum, "Make Stack Negative" CDEF with some text and a newline. Hoping you've got all those steps right, finally Save your work. Take a cup of coffee to clear your brain again, kiss your wife, hug your children and/or pet your dog; the sequence is arbitrary. Chapter VIII: Associate Graph Template with Data Query Huh, that sounds complicated. Why would it be necessary to do so? Let me explain: You remember the Data Template, don't you? The names of the data source items were chosen arbitrarily. The Graph Items were associated with those data source items, but those in turn were not related to anything in the XML file. Not related? Not yet! So, let's revisit the Data Query. Remember the lower part on Associated Graph Templates. Click Add
fill in a name of your choice and select the Graph Template that we have created in the last step. Create to see First, let's have a look at the upper half of the screen. The red box on the left shows the Internal Data Source Names taken from the Data Template that is associated with the Graph Template we've just added. The red box in the middle has a dropdown for each data source item. The dropdown list contains all output fields taken from the XML file; in our case, there are only two of them. The red box on the right must be checked on each line to make the association valid. Now, let's turn to the lower half of the screen, denoted Suggested Values. The example shows |host_description| - Traffic - |query_ifdescription| entered both for the name of the Data Template and the title of the Graph Template. Click Add, one by one
Notice the second title I've added here. If more than one entry is present, they are applied from top to bottom until a match is found. Match means that all variables present are filled. Of course, you may add more than one variable taken from the XML file. But pay attention: not all devices will fill all those variables. Neither does my router, sigh. You may use all input variables listed in the XML file. A <variable> is referenced as query_<variable>, e.g. for ifalias write query_ifalias and so forth. Click Save, and find the new Graph Template added to the list of Associated Graph Templates. You may continue to add more Graph Templates, each of them related to other output fields of the XML file. Find, as an example, the many graph templates associated with the standard Interface Statistics Data Query to get an idea what I'm talking about
Don't worry about the first two entries; they are home-made. Chapter IX: Creating the Images Now, let's return to the Device that we have already used for this Data Query. Create Graphs for this Host to see I've left the standard Interface Statistics in the screenshot, so you may compare both Queries. Our PHP Interface Traffic stuff has two more header items, Name and Alias. But all the data seen equals the standard SNMP Data Query; not that bad, eh? Now, select one item and Create. You'll have to wait a bit, at least two polling cycles. Then, you may notice some data in your new graph. The next image shows both our new graph (the first one) and a standard interface traffic graph. The latter one holds more data in this example; don't worry about that.
Having a closer view, you may notice a difference in magnitude (y-axis). But please compare the units used. The first graph uses Bytes, the latter one uses Bits. For comparison, it would be necessary to multiply the first one by 8. This may be done using the CDEF Turn Bytes into Bits, applied to all items of the Graph Template. This task is left to you.

Summing Up

In the chapter Common Tasks, I've shown some basic principles of operation. The graph shown there should demonstrate the underlying structure, but it was a bit incomplete. To be more precise, cacti's tasks sum up as follows:
You'll notice the association of Graph Templates to the Data Query as a last step. And a new theme has popped up, the Host Template. This one is for grouping Graph Templates and Data Queries with Associated Graph Templates together as a single Host Template. You may associate each Host with one of those Host Templates. This will ease the burden of associating endless lists of Graph Templates to dozens of hosts.

Maintenance

Database Setup seems to fail?

When I visit http://localhost/cacti the page claims I need to run the cacti.sql file. I've already done this, and I can see the table structure in my database.

Solution: You have a connectivity problem between php and mysql. If you're running MySQL 4.1 or 5, then you will need to apply the old password trick for user authentication to work with Cacti. Add the following to the [mysqld] sub-section:

# Use old password encryption method (needed for 4.0 and older clients)
old-passwords

(Courtesy "BSOD2600")
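As a minimal sketch (the file location varies by distribution, and the credentials shown are just the cactiuser/cactiuser pair used elsewhere in this howto, so check ./include/config.php for your real values): the directive goes into my.cnf, and since it only affects passwords set from then on, the cacti database user's password usually has to be set again in the old format before restarting mysqld:

# /etc/my.cnf (or a file below /etc/mysql/)
[mysqld]
old-passwords

mysql -u root -p
mysql> SET PASSWORD FOR 'cactiuser'@'localhost' = OLD_PASSWORD('cactiuser');
mysql> FLUSH PRIVILEGES;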
Run php -i | grep MYSQL to find the mysql sock file (MYSQL_SOCKET), e.g. at /var/lib/mysql/mysql.sock rather than /tmp/mysql.sock (which is the default location for mysqld). In this case, create a symlink from /var/lib/mysql/mysql.sock to /tmp/mysql.sock or edit /etc/my.cnf to solve this issue. (Courtesy "doctor_octagon")

Debug NaN's in your graphs

Cacti users sometimes complain about NaN's in their graphs. Unfortunately, there are several possible reasons for this result. The following is a step-by-step procedure I recommend for debugging this. To debug the NaN's:

1. Check the Cacti Log File

Please have a look at your cacti log file. Usually, you'll find it at <path_cacti>/log/cacti.log. Else see Settings, Paths. Check for this kind of error:

CACTID: Host[...] DS[...] WARNING: SNMP timeout detected [500 ms], ignoring host '...'

For "reasonable" timeouts, this may be related to a snmpbulkwalk issue. To change this, see Settings, Poller and lower the value for The Maximum SNMP OID's Per SNMP Get Request. Start at a value of 1 and increase it again if the poller starts working. Some agents don't have the horsepower to deliver that many OID's at a time. Therefore, we can reduce the number for those older/underpowered devices.

2. Check Basic Data Gathering

For scripts, run them as cactiuser from the cli to check basic functionality. E.g. for a perl script named your-perl-script.pl with parameters "p1 p2" under *nix this would look like:

su - cactiuser
/full/path/to/perl your-perl-script.pl p1 p2
... (check output)

For snmp, snmpget the _exact_ OID you're asking for, using the same community string and snmp version as defined within cacti. For an OID of .1.3.6.1.4.something, a community string of "very-secret" and version 2 for target host "target-host" this would look like
snmpget -c very-secret -v 2c target-host .1.3.6.1.4.something
... (check output)

3. Check cacti's poller

First make sure that crontab always shows poller.php. This program will either call cmd.php, the PHP based poller, _or_ cactid, the fast alternative written in C. Define the poller you're using at "Settings" -> "Poller". Cactid has to be installed separately, it does not come with cacti by default. Now, clear ./log/cacti.log (or rename it to get a fresh start). Then, change "Settings -> Poller Logging Level" to DEBUG for _one_ polling cycle. You may rename this log as well to avoid more stuff being added to it with subsequent polling cycles. Now, find the host/data source in question. The Host[<id>] is given numerically, the <id> being a specific number for that host. Find this <id> from the Devices menu when editing the host: the url contains a string like &id=<id>. Check whether the output is as expected. If not, check your script (e.g. /full/path/to/perl). If ok, proceed to the next step.

This procedure may be replaced by running the poller manually for the failing host only. To do so, you need the <id> again. If you're using cmd.php, set the DEBUG logging level as defined above and run

php -q cmd.php <id> <id>

If you're using cactid, you may override the logging level when calling the poller:

./cactid --verbosity=5 <id> <id>

All output is printed to STDOUT in both cases. This procedure allows for repeated tests without waiting for the next polling interval. And there's no need to manually search for the failing host between hundreds of lines of output.

4. Check MySQL updating

In most cases, this step may be skipped. You may want to return to this step if the next one fails (e.g. no rrdtool update to be found). From the debug log, please find the MySQL update statement for that host concerning table poller_output. On very rare occasions, this will fail. So please copy that sql statement and paste it into a mysql session started from the cli. This may as well be done from some tool like phpmyadmin. Check the sql return code.

5. Check rrd file updating

Down in the same log, you should find some rrdtool update <filename> --template... You should find exactly one update statement for each file.
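Just for illustration (the file name is the one from the ls listing in the next step, and the data source names follow cacti's standard Load Average template, so treat both as assumptions for your own data sources), such an update line typically looks like:

rrdtool update /var/www/html/cacti/rra/localhost_load_1min_5.rrd --template load_1min:load_5min:load_15min N:0.04:0.01:0.00

If a value could not be parsed, you will see a U in its place instead of a number.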
109 von 143 18.10.2007 21:35 RRD files should be created by the poller. If it does not create them, it will not fill them either. If it does, please check your Poller Cache from Utilities and search for your target. Does the query show up here? 6. Check rrd file ownership If rrd files were created e.g. with root ownership, a poller running as cactiuser will not be able to update those files cd /var/www/html/cacti/rra ls -l localhost* -rw-r--r-- 1 root root 463824 May 31 12:40 localhost_load_1min_5.rrd -rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_mem_buffers_3.rrd -rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_mem_swap_4.rrd -rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_proc_7.rrd -rw-r--r-- 1 cactiuser cactiuser 155584 Jun 1 17:10 localhost_users_6.rrd chown cactiuser:cactiuser *.rrd will help. 7. Check rrd file numbers You're perhaps wondering about this step, if the former was ok. But due to data sources MINIMUM and MAXIMUM definitions, it is possible, that valid updates for rrd files are suppressed, because MINIMUM was not reached or MAXIMUM was exceeded. Assuming, you've got some valid rrdtool update in step 3, perform a rrdtool fetch <rrd file> AVERAGE and look at the last 10-20 lines. If you find NaN's there, perform rrdtool info <rrd file> and check the ds[...].min and ds[...].max entries, e.g. ds[loss].min = 0.0000000000e+00 ds[loss].max = 1.0000000000e+02 In this example, MINIMUM = 0 and MAXIMUM = 100. For a ds.[...].type=gauge verify, that e.g. the number returned by the script does not exceed ds[...].max (same holds for MINIMUM, respectively). If you run into this, please do not only update the data source definition within the Data Template, but perform a
rrdtool tune <rrd file> --maximum <ds-name>:<new ds maximum>

for all existing rrd files belonging to that Data Template. At this step, it is wise to check step and heartbeat of the rrd file as well. For standard 300 second polling intervals (step=300), it is wise to set minimal_heartbeat to 600 seconds. If a single update is missing and the next one occurs in less than 600 seconds from the last one, rrdtool will interpolate the missing update. Thus, gaps are "filled" automatically by interpolation. Be aware of the fact that this is no "real" data! Again, this must be done in the Data Template itself and by using rrdtool tune for all existing rrd files of this type.

8. Check the rrdtool graph statement

The last resort would be to check that the correct data sources are used. Goto Graph Management and select your Graph. Enable DEBUG Mode to find the whole rrdtool graph statement. You should notice the DEF statements. They specify the rrd file and data source to be used. You may check that all of them are as wanted.

9. Miscellaneous

Up to current cacti 0.8.6h, table poller_output may increase beyond reasonable size. This is commonly due to php.ini's default memory setting of 8MB. Change this to at least 64 MB. To check this, please run the following sql from the mysql cli (or phpmyadmin or the like)

select count(*) from poller_output;

If the result is huge, you may get rid of that stuff by

truncate table poller_output;

As of current SVN code for upcoming cacti 0.9, I saw measures were taken on both issues (memory size, truncating poller_output).

10. RPM Installation?

Most rpm installations will set up the crontab entry now. If you've followed the installation instructions to the letter (which you should always do ;-) ), you may now have two pollers running. That's not a good thing, though. Most rpm installations will set up cron in /etc/cron.d/cacti. Now, please check all your crontabs, especially /etc/crontab and the crontabs of users root and cactiuser. Leave only one poller entry among all of them. Personally, I've chosen /etc/cron.d/cacti to avoid problems when updating rpm's. Most often, you won't remember this item when updating lots of rpm's, so I felt more secure to put it here. And I've made some slight modifications, see
111 von 143 18.10.2007 21:35 prompt> vi /etc/cron.d/cacti */5 * * * * cactiuser /usr/bin/php -q /var/www/html/cacti/poller.php > /var/local/log/poller.log 2>&1 This will produce a file /var/local/log/poller.log, which includes some additional informations from each poller's run, such as rrdtool errors. It occupies only some few bytes and will be overwritten each time. If you're using the crontab of user "cactiuser" instead, this will look like prompt> crontab -e -u cactiuser */5 * * * * /usr/bin/php -q /var/www/html/cacti/poller.php > /var/local/log/poller.log 2>&1 11. Not NaN, but 0 (zero) values? Pay attention to custom scripts. It is required, that external commands called from there are in the $PATH of the cactiuser running the poller. It is therefor recommended to provide /full/path/to/external/command. User "criggie" reported an issue with running smartctl. It was complaining "you are not root" so a quick chmod +s on the script fixed that problem. Secondly, the script was taking several seconds to run. So cacti was logging a "U" for unparseable in the debug output, and was recording NAN. So my fix there was to make the script run faster - it has to complete in less than one second, and the age of my box make that hard. Logrotate cacti.log Requirements By default, cacti uses the file /log/cacti.log for logging purpose. There's no automatic cleanup of this file. So, without further intervention, there's a good chance, that this file reaches a file size limit of your filesystem. This will stop any further polling process. For *NIX type systems, logrotate is a widely known utility that solves exactly this problem. The following descriptions assumes you've set up a standard logrotate environment. The examples are based on a Fedora 6 environment. I assume, that Red Hat type installations will work the same way. I hope, but am not sure, that this howto is easily portable for Debian/Ubuntu and stuff and hopefully even for *BSD. The logrotate Configuration File The logrotate function is well described in the man pages. My setup is as follows: # logrotate cacti.log /var/www/html/cacti/log/cacti.log { # keep 7 versions online rotate 7 # rotate each day daily
# don't compress, but
# if disk space is an issue, change to
# compress
nocompress
# create new file with attributes
create 644 cactiuser cactiuser
# add a YYYYMMDD extension instead of a number
dateext
}

Descriptions are given inline. Copy those statements from above into /etc/logrotate.d/cacti. This is the recommended file for application-specific logrotate files.

Test

logrotate configuration files are tested by running

logrotate -fd /etc/logrotate.d/cacti
reading config file /etc/logrotate.d/cacti
reading config info for /var/www/html/cacti/log/cacti.log
Handling 1 logs
rotating pattern: /var/www/html/cacti/log/cacti.log forced from command line (7 rotations)
empty log files are rotated, old logs are removed
considering log /var/www/html/cacti/log/cacti.log
log needs rotating
rotating log /var/www/html/cacti/log/cacti.log, log->rotatecount is 7
glob finding old rotated logs failed
renaming /var/www/html/cacti/log/cacti.log to /var/www/html/cacti/log/cacti.log-20071004
creating new log mode = 0644 uid = 502 gid = 502

This is a dry run, no rotation is actually performed. Option -f forces log rotation even if the rotate criterion is not fulfilled. Option -d issues debug output but will suppress any real log rotation. Verify by listing the log directory: nothing has changed at all! Now we will request log rotation using

logrotate -f /etc/logrotate.d/cacti

No output is produced, but you will see the effect:

ls -l /var/www/html/cacti/log
-rw-r--r-- 1 cactiuser cactiuser 0 4. Okt 21:35 cacti.log
-rw-r--r-- 1 cactiuser cactiuser 228735 4. Okt 21:35 cacti.log-20071004

Of course, the date extension on the file will change accordingly. Please notice that a new cacti.log file was created. If you issue the command again, nothing will happen:
113 von 143 18.10.2007 21:35 logrotate -fv /etc/logrotate.d/cacti reading config file /etc/logrotate.d/cacti reading config info for /var/www/html/cacti/log/cacti.log Handling 1 logs rotating pattern: /var/www/html/cacti/log/cacti.log forced from command line (7 rotations) empty log files are rotated, old logs are removed considering log /var/www/html/cacti/log/cacti.log log needs rotating rotating log /var/www/html/cacti/log/cacti.log, log->rotatecount is 7 destination /var/www/html/cacti/log/cacti.log-20071004 already exists, skipping rotation If you want to see all those 7 rotations on one single day, remove the dateext directive temporarily from the configuration file. Daily MySQL Dump of the Cacti SQL Database using logrotate Requirements By default, cacti uses the MySQL database named cacti. You may want to consider dumping this database on regular intervals for failsafe reason. For a single dump, you will usually enter this dump command directly into crontab. It is possible, to mis-use logrotate to create daily dumps, append dateext-like timestamps to each dump and keep a distinct number of generations online. For a basic setup, see Logrotate cacti.log, The following descriptions assumes you've set up a standard logrotate environment. The examples are based on a Fedora 6 environment. I assume, that Red Hat type installations will work the same way. I hope, but am not sure, that this howto is easily portable for Debian/Ubuntu and stuff and hopefully even for *BSD. The logrotate Configuration File for MySQL Dumping the Cacti Database It is absolutely necessary for this example, that a single dump file already exists. Else, logrotate will skip any execution due to a missing "log" file. My setup is as follows: # logrotate sql dump file /var/www/html/cacti/log/cacti_dump.sql { # keep 31 generations online rotate 31 # create a daily dump daily # don't compress the dump nocompress # create using this create 644 cactiuser cactiuser
# append a nice date to the file
dateext
# delete all generations older than 31 days
maxage 31
# this script runs BEFORE the cacti_dump.sql file is rotated, so the fresh dump gets the date stamp
# make sure to use the correct database, user and password, see ./include/config.php
prerotate
/usr/bin/mysqldump --user=cactiuser --password=cactiuser --lock-tables --add-drop-database --add-drop-table cacti > /var/www/html/cacti/log/cacti_dump.sql
endscript
}

You may add this configuration to /etc/logrotate.d/cacti, even if the logrotate of cacti.log is already given there. Prior to testing this configuration, don't forget to

touch /var/www/html/cacti/log/cacti_dump.sql

Now run the test as follows

logrotate -fv /etc/logrotate.d/cacti
reading config file /etc/logrotate.d/cacti
reading config info for /var/www/html/cacti/log/cacti_dump.sql
Handling 1 log
rotating pattern: /var/www/html/cacti/log/cacti_dump.sql forced from command line (31 rotations)
empty log files are rotated, old logs are removed
considering log /var/www/html/cacti/log/cacti_dump.sql
log needs rotating
rotating log /var/www/html/cacti/log/cacti_dump.sql, log->rotatecount is 31
glob finding old rotated logs failed
running prerotate script
renaming /var/www/html/cacti/log/cacti_dump.sql to /var/www/html/cacti/log/cacti_dump.sql-20071004
creating new log mode = 0644 uid = 502 gid = 502

Now list the results

ls -l /var/www/html/cacti/log/cacti_dump*
-rw-r--r-- 1 cactiuser cactiuser 0 4. Okt 22:10 cacti_dump.sql
-rw-r--r-- 1 cactiuser cactiuser 318441 4. Okt 22:10 cacti_dump.sql-20071004

RRDTool Stuff

How CONSOLIDATION works

You will find lots of tutorials for rrdtool at the main rrdtool site. Up to now, I personally had some trouble understanding the principles of the CONSOLIDATION FUNCTIONS (cf). While studying several posts in the cacti forum, and also based on discussions elsewhere, I suppose this howto may be useful.
115 von 143 18.10.2007 21:35 But I'm not that rrdtool guru. So I apologize for errors in this document. Example Attached, you will find a perl script that generates two separate rrd's and will generate a single graph based on both of them. Inline, you will find several constants to play with. The script fills both of them with data generated by a loop. The base value is 2. For each following data point, the value will be incremented by 0.1. After 40 iterations, the value will have increased to 6. Defining the 1. rrd file First, lets define some constants needed for rrd file creation #--------------------------------------------------------------------------------- # create first DB #--------------------------------------------------------------------------------- # name of rrd file for test data my $db1 = "/tmp/rrddemo1.rrd"; my $interval = 300; # time between two data points (pdp's) my $heartbeat = 2*$interval; # heartbeat my $xff = 0.5; # pdp's necessary to form one cdp The timespan for this file will be dynamically computed from current timestamp # last timestamp of rrd should equal actual time # rounded to the last interval my $no_iter = 40; my $end = `date +%s`; $end = $interval * int($end/$interval); my $start = $end - $no_iter * $interval; By default, it contains 2 rra's for 4 consolidation functions (AVERAGE, MAX, MIN, LAST). # define all consolidation functions to be used my $CF1 = "AVERAGE"; my $CF2 = "MAX"; my $CF3 = "MIN"; my $CF4 = "LAST"; The first rra holds 5 data points (pdp's). The second one holds 9 data points, that are generated automatically by rrdtool by consolidating 5 pdp's each. So you will have 2*4=8 rra's. # steps and rows my $rra1step = 1; # no of steps in rra 1 my $rra1rows = 5; # no of pdp's in rra 1 my $rra2step = 5; # no of steps (pdp's of rra 1) to form one cdp my $rra2rows = int($no_iter/$rra2step)+1; # no of cdp's in rra 2
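As a quick cross-check of these constants (plain arithmetic on the values above, not part of the script): with $no_iter = 40 and $rra2step = 5, the second archive gets $rra2rows = int(40/5)+1 = 9 rows, which matches the 9 consolidated data points mentioned above, and the test data covers $no_iter * $interval = 40 * 300 = 12000 seconds, i.e. 3 hours and 20 minutes.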
116 von 143 18.10.2007 21:35 The rrd file will be created by means of the perl module RRDs.pm: RRDs::create( $db1, "--step=$interval", "--start=". ($start-10), # define datasource "DS:load:GAUGE:$heartbeat:U:U", # consolidation function 1 "RRA:$CF1:$xff:$rra1step:$rra1rows", "RRA:$CF1:$xff:$rra2step:$rra2rows", # consolidation function 2 "RRA:$CF2:$xff:$rra1step:$rra1rows", "RRA:$CF2:$xff:$rra2step:$rra2rows", # consolidation function 3 "RRA:$CF3:$xff:$rra1step:$rra1rows", "RRA:$CF3:$xff:$rra2step:$rra2rows", # consolidation function 4 "RRA:$CF4:$xff:$rra1step:$rra1rows", "RRA:$CF4:$xff:$rra2step:$rra2rows", ) or die "Cannot create rrd ($RRDs::error)"; Defining the 2. rrd file This rrd contains exactly one rra only. There is enough space for all (default:40) data points generated by this script. There is no need for consolidation. #--------------------------------------------------------------------------------- # create second DB # it will hold all data in its first rra # without consolidation # (therefor it is much bigger than the first one) #--------------------------------------------------------------------------------- # name of rrd file for test data my $db2 = "/tmp/rrddemo2.rrd"; RRDs::create( $db2, "--step=$interval", "--start=". ($start-10), # define datasource "DS:load:GAUGE:$heartbeat:U:U", # consolidation function 1 "RRA:$CF1:$xff:$rra1step:$no_iter", ) or die "Cannot create rrd ($RRDs::error)"; Running the Perl Script You may run the script without any parameter. In this case, it will create the 2 rrd files, fill them and generate one png file: #------------------------------------------ # generate rrd graph #------------------------------------------ my $graph = "/tmp/rrddemo1.png";
117 von 143 18.10.2007 21:35 # defines some constants for graphing my $width = 500; my $height = 180; RRDs::graph("$graph", "--title=rrdtool Test: consolidation principles", "--start=". $start, "--end=". $end, "--width=". $width, "--height=". $height, "DEF:demo2=$db2:load:$CF1", "DEF:demo11=$db1:load:$CF1", "DEF:demo12=$db1:load:$CF2", "DEF:demo13=$db1:load:$CF3", "DEF:demo14=$db1:load:$CF4", "COMMENT:raw data as follows, filesize=$db2size\\n", "LINE1:demo2#CCCCCC:RAW DATA, no consolidation\\n", "COMMENT:Consolidated data as follows, filesize=$db1size\\n", "LINE1:demo11#FF0000:CF=AVERAGE\\n", "LINE1:demo12#00FF00:CF=MAX equals CF=LAST in this case\\n", "LINE1:demo13#0000FF:CF=MIN\\n", # "LINE1:demo14#000000:CF=LAST\\n", ) or die "graph failed ($RRDs::error)"; The result may be viewed by a browser, e.g. firefox file:///tmp/rrddemo1.png The result should be similar to: consolidation rrddemo1 Discussing the results One of the basic principles of rrd's is, that they will not grow in space while storing additional data. Let us look at this more carefully. Remember that the script increments each value by 0.1 for each data point. But the first rra will hold only 5 data points, e.g the values 2.0, 2.1, 2.2, 2.3, 2.4. But what happens, if the next value, 2.5, is added? This is where the CONSOLIDATION FUNCTIONS comes in, e.g. AVERAGE. In this case, the average of all 5 values (2.2 in this case) will be stored in the second rra. So, there is a consolidation of the data, only 1 consolidated data point is stored instead of 5 originally entered ones. As a result, you will loose "some information". There is no chance to identify, that the average 2.2 was build out of these 5 values above. It may have
been built out of 1.0, 1.5, 2.2, 2.9, 3.4 as well. This is why people often want to increase the size of the first rra to store more data points. But remember, there are more consolidation functions. Use of MAX yields 2.4 in the case above. MIN yields 2.0 and LAST results in 2.4 (the last value of all 5 primary data points). Yes, even in this case it is not possible to rebuild the originally entered data. But you will have an idea at least for MIN, MAX, AVERAGE and even LAST. In the long run, this saves lots of disk space and is VERY fast in processing. And even if you "lose" the original data, you will see the range between MIN and MAX and the AVERAGE.

Using with cacti

To use this function in cacti, you will have to modify your graph templates. Most of them contain line definitions based on AVERAGE. You may want to add another line using the consolidation function MAX/MIN. You won't notice any effect until you use graphs spanning a time frame greater than about 2 days (the default size of the first rra). In this example, AVERAGEs were graphed using an AREA, whereas MAXimums use LINE1 in a slightly darker shade of the corresponding color. This gives nice graphs even for the daily view, IMHO. The example uses an additional feature, a CDEF=CURRENT_DATA_SOURCE,-1,* to mirror outbound traffic to the negative side.

consolidation traffic

Please notice that MAX does not always match AVERAGE, which is not that surprising from the mathematical point of view. AVERAGEs show volume-based information whereas MAXimums show Peak Usage. Both kinds of information are useful.

See the script working

If you would like to see what's going on when running the script, you may call it by

perl rrdtest.pl verbose more

Then, it will produce output like

RRD definitions: Start: 1160814900, End: 1160826900, Updates every: 300
update: 1160814900:2
update: 1160815200:2.1
update: 1160815500:2.2
update: 1160815800:2.3
119 von 143 18.10.2007 21:35 update: 1160816100:2.4 update: 1160816400:2.5 update: 1160816700:2.6 update: 1160817000:2.7 update: 1160817300:2.8 update: 1160817600:2.9 update: 1160817900:3 update: 1160818200:3.1... update: 1160826300:5.8 update: 1160826600:5.9 update: 1160826900:6 Last 5 minutes CF AVERAGE: 1160825400: 5.6 1160825700: 5.7 1160826000: 5.8 1160826300: 5.9 1160826600: 6 Last 6*5 minutes CF AVERAGE: 1160817900: 3 1160819400: 3.5 1160820900: 4 1160822400: 4.5... Last 30 minutes CF LAST: 1160817900: 3.2 1160819400: 3.7 1160820900: 4.2 1160822400: 4.7 1160823900: 5.2 1160825400: 5.7 1160826900: N/A Filesize of rrdfile 1 at /tmp/rrddemo1.rrd: 2336 Filesize of rrdfile 2 at /tmp/rrddemo2.rrd: 864 Attention: in this very case, the filesize of the rrd using consolidation is bigger. But for real world rrd's it is the other way round. Now, you may study all rrd file values in detail. Howto View Historical Data after Consolidation This text was written to help you configuring cacti to display MAXimum values alongside the commonly plotted AVERAGE ones. This will help getting more information out of the usually defined rra's without the need to change anything concerning the existing rrd files. As always: Use this information at your own risk. Here we go! As an attachment to the forum entry you will find a Graph Template that contains all items discussed here. It is a modified Traffic Graph Template. Things discussed here will of course apply to other Graphs as well. Common view of Traffic Graph First, I'll show you the modified template viewed with the standard timeframe of one day. It doesn't look very strange, but let me talk about few things: You'll notice, that Outbound Traffic is displayed on the negative side. This is often done; there are lots of those graphs on the
120 von 143 18.10.2007 21:35 forum. It is simply done by a CDEF named Turn bytes into bits, make negative (include in the Template below) that works like cdef=current_data_source,8,*,-1,* You'll see both a deeper green and a deeper blue line that fits exactly to the AREA definitions. You'll notice a black line that does some TRENDing (there's a nice forum post on that, I've copied from there) for Inbound and Outbound Traffic As usual, you'll see Current, Average and Maximum legend entries Well, you'll notice that my laptop wasn't online the whole day... The magic comes, when you look at some historic data. consolidated-view-01 See the MAXimum Data after Consolidation There are some post on the forum complaining about the data loss after consolidation (that is: data is automatically compressed by rrdtool). But usually, there's not only the AVERAGE rra but also the MAXimum rra defined. And while executing consolidation automatically, not only the AVERAGE values are stored in the rrd but also the MAXimum ones. Example: Values stored initially: 40, 50, 60, 60, 70, 80 Averaged value after consolidation 6 data Points: (40+50+60+60+70+80)/6 = 60 Additionally stored MAXimum value: 80 (if chosen: additionally stored MINimum value: 40) After consolidation, there is still knowledge about what was MAXimum! And this may be graphed as well, see:
consolidated-view-02

So you do not only notice the Graph Overall Maximum from the legend (that is: 29.40 k for Inbound) but also when it occurred, and additionally the whole timeseries for that MAXimum. (Well, whether TRENDing is helpful here may be answered by yourself.) In this case, CONSOLIDATION took place for 6 data points each, so each AVERAGE value displayed here stands for 6 original data points. You will see this if zooming a little deeper:

consolidated-view-03

Viewing Historical Data

Well, of course this works even if choosing historical data (e.g. Monthly)

consolidated-view-04

The minimum resolution now is 2 hours. But the MAXimum values plotted still represent the biggest of those consolidated values.
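To make the principle explicit, here is a minimal sketch as a raw rrdtool command; the file name and data source are borrowed from the file listing further below in this document, and the colors are arbitrary, so adapt everything to your own rrds. Within cacti you achieve the same effect by adding Graph Template items that use the MAX consolidation function on top of the usual AVERAGE items:

/usr/bin/rrdtool graph traffic-max.png \
--start=-1month \
DEF:in_avg=gandalf_traffic_in_17.rrd:traffic_in:AVERAGE \
DEF:in_max=gandalf_traffic_in_17.rrd:traffic_in:MAX \
AREA:in_avg#9FC855:"Inbound Average" \
LINE1:in_max#4D6B00:"Inbound Maximum"

Both DEFs read the very same rrd file; only the consolidation function differs, so this view costs no additional disk space.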
Conclusion When you look at your rrd's, you will notice that often MAXimum is already defined. To display these values, nothing has to be modified at those rrd's. And there is no additional disk space required compared to methods, that keep data without consolidation. While graphing the MAXimum values along with the AVERAGE ones, you'll be able to discover the strength of the rrdtool principles. Howto define a very BIG rra "without data loss" This text shows, how to configure cacti for use with a single round robin archive (rra) without using consolidation (e.g. without averaging out some data points). Personally, I do not like this approach. So I would recommend reading of How CONSOLIDATION works and Howto View Historical Data after Consolidation. Be warned! You won't really do that! Why? One of the inherent features of rrd's is: they never grow in space. In other words: When creating a new rrd, it is allocated with all space needed. See rrd-beginners tutorial. As usually, you may use the information given here at your own risk. Basic Knowledge for understanding RRAs Often, there's a fundamental misunderstanding (which is enforced by cacti's way of defining "rra related parameters"). Basically, there are no daily, weekly, monthly, yearly rra's in any rrd file! RRDTool defines different levels of consolidation only. It does not define timespans explicitely. It only defines the AMOUNT OF DATAPOINTS for each consolidation level (known as rows in rrdtool lingo). Assuming you are trying to keep only one level of consolidation, this is defined by step in the rra definition. And, if you want to omit consolidation, this equals to step=1. By default, all rrd files will have 4 levels of consolidation, step=1,6,24,288, respectively. Forget about the last three ones (well, they will use some amount of space; but forget about this for the time being). So lets deal with the first rra (step=1) only. If you want to extend this rra to span a longer time, you have to deal with the number of rows. You will have to increase the number of rows until the wanted timespan is reached. You may compute the timespan by multiplying rows * step. Here we go! Cacti's logic to generate rrd files works as follows: 1. create a device (the host that shall be queried) 2. create a graph for this host (using a graph template or a data query that refers to graph templates) 122 von 143 18.10.2007 21:35
3. each graph template refers to a data template
4. each data template defines one or more data sources
5. each data template uses one or more round robin archives (rra)
6. each of the data sources uses the same set of rra's

This tutorial works its way back.

Defining a new round robin archive (rra)

For the following, let's assume you are logged in with admin permissions and use the console tab.
1. Go to Management -> Data Sources -> RRAs
2. Click Add to add a new rra

Now fill in the data as follows and SAVE:
Name: you may choose your own
Consolidation function: AVERAGE is needed
X-Files Factor: always 0.5
Steps: 1 (that is the number of data points to use for consolidation, 1 says: no consolidation at all)
Rows: 115200 = 400 days with 24 hours and 12 data points per hour (= 5 min interval)
Timespan: used for displaying; 33,053,184 seconds = about 382 days (taken from another cacti rra)

Define a new data template

For ease of use (yes, I'm lazy), please copy an existing template. Goto Data Templates, and check the box on the right of Interface - Traffic: Then scroll to the bottom of the page, select Duplicate and Go. You will be prompted for a new name of this template: Of course, you may choose your own name here. Now it is time to modify this template:
1. You may change the name of the template 2. Select the just created RRA (Don't worry about the other RRAs in this list; they are needed for the next tutorial...) Please leave the rest as is; SAVE. Of course, you may define a new data template from scratch. The only thing to keep in mind is to select the appropriate RRA. The data template is now done. Define a new graph template Well, you will imagine what comes next. Again, I decided to copy the appropriate graph template. So goto Graph Templates and repeat the steps above for the template Interface Traffic (bits/sec). It will look like this: Please pay attention to the next steps! You will have to delete both Graph Item Inputs, as they refer to the wrong data source. Please select the red X to the right of Inbound Data Source as well as Outbound Data Source. Then you will have to add the newly generated data sources. In order to do that, please select each item of the list of Graph Items, one after the other. This will look like: 125 von 143 18.10.2007 21:35
As Data Source you will choose the appropriate data source you generated in the previous step. Don't forget to do this for each and every item of the Graph Item list. When you're done, scroll to the bottom of the Graph Template definition and SAVE.

Modify Data Query to add Graph Template

This example uses the Interface Traffic Graph Template. This is referenced by the Data Query SNMP Interface Statistics. Now we're going to add the newly defined Graph Template to this very Data Query. If you have chosen some other Graph Template, e.g. ucd/net Load Average, you will skip this step. The Data Query goes like this. Goto Data Queries and select SNMP - Interface Statistics. Now Add to see this: Define a new name for this Associated Graph and CREATE.

Finally: Create Graphs for this Host
127 von 143 18.10.2007 21:35 Goto Devices and select your favorite device to see the rra in action. If you have modified the SNMP Interface Statistics Data Query, you may immediately select Create Graphs for this Host to see the following: Select the interface as you would have done for any Traffic Graph. Then Select a graph type from the dropdown list (of course our newly defined Graph Template!) and CREATE. As usually, you will have to wait at least two polling cycles to get the graph generated and filled with the first value. Don't be impatient! Let it run for awhile. Under the Graphs tab you will notice something like Well, this looks like usual, doesn't it? You may wonder about the Outbound traffic displayed negative. Well, this is a little CDEF but is of no matter here. And of course, for the first two days you will not notice anything unusual. This is because the default cacti rra configuration keeps all data points without consolidation for 600 intervals (about 2 days). Some advice: Please do not click onto the graph too fast. I had to wait some time (don't remember exactly) before clicking gave a result like the next one:
128 von 143 18.10.2007 21:35 This is already a zoomed image. You will notice, that my personal laptop isn't online for the whole day. Now, where's the trick? At first, you may wonder, whether only this one graph will be displayed. This is, because only one single rra exists. And cacti associated the time interval of the graph with each rra. Only one rra defined gives only one image displayed. But you may zoom in at any place and will reach down to the 5 min intervals. This is, what had to be proved (q.e.d as the old romans said). Something to keep in mind Space allocation with rrdtool The space needed is calculated from the number of data sources needed (e.g. traffic in and traffic out form two data sources) the number of rra's needed (e.g. one archive for storing original data points, a second one to hold averaged data points for some weeks, a third for holding averaged data points for some months...) the number of data points to be stored in each rra some header space If you omit consolidation (that is: averaging out some data points), you won't loose data. But you will loose space! Example: Store data every 300 seconds for a whole year. This leads to 12 (data points each hour) * 24 (hours per day) * 365 (days per year) data point (= 105120). Each data point holds 8 bytes, so the whole rrd will occupy about 840,960 Bytes (plus some header space) for each single data source. A closer look to rrd file properties Please have a look at the file sizes on my computer:
129 von 143 18.10.2007 21:35 -rw-r--r-- 1 cactiuser cactiuser 94660 Oct 2 19:40 gandalf_traffic_in_17.rrd -rw-r--r-- 1 cactiuser cactiuser 1844056 Oct 2 19:40 gandalf_traffic_in_71.rrd They belong to following rrd definitions (see Data Source Debug of that data source) /usr/bin/rrdtool create \ /var/www/html/cacti-0.8.6f/rra/gandalf_traffic_in_17.rrd \ --step 300 \ DS:traffic_in:COUNTER:600:0:100000000 \ DS:traffic_out:COUNTER:600:0:100000000 \ RRA:AVERAGE:0.5:1:600 \ RRA:AVERAGE:0.5:6:700 \ RRA:AVERAGE:0.5:24:775 \ RRA:AVERAGE:0.5:288:797 \ RRA:MIN:0.5:1:600 \ RRA:MIN:0.5:6:700 \ RRA:MIN:0.5:24:775 \ RRA:MIN:0.5:288:797 \ RRA:MAX:0.5:1:600 \ RRA:MAX:0.5:6:700 \ RRA:MAX:0.5:24:775 \ RRA:MAX:0.5:288:797 \ RRA:LAST:0.5:1:600 \ RRA:LAST:0.5:6:700 \ RRA:LAST:0.5:24:775 \ RRA:LAST:0.5:288:797 \ and respectively: /usr/bin/rrdtool create \ /var/www/html/cacti-0.8.6f/rra/gandalf_traffic_in_71.rrd \ --step 300 \ DS:traffic_out:COUNTER:600:0:100000000 \ DS:traffic_in:COUNTER:600:0:100000000 \ RRA:AVERAGE:0.5:1:115200 \ As you will notice, the newly generated rrd is about 20 times the size of the original one (and this one spreads two years, not only 400 days). So please pay attention, before using this widely. The performance impact for updating and displaying such rrd's in a large installation may not be desired. Howto RESIZE existing RRAs of existing RRDs Note: Find an easy way to resize RRDs without using the command line at The Toolsmith. There's a free and a commercial version. I did not use any of them until now. I hope, the informations given below are at least helpful to understand rrdtool operation. Be warned!
130 von 143 18.10.2007 21:35 BACKUP ALL YOUR RRDs! There's a good chance, that you will destroy all of your rrd files. I'm not joking! At the time of writing, rrdtool 1.2.12 is stable. Pay attention to older rrdtool-1.2.x version as they contain a bug when resizing rrd files created by rrdtool-1.0.x (see above reference for more). SO BACKUP ALL YOUR RRDs! And check for sufficient file space! As always: Use this information at your own risk. Here we go! At the bottom of this page, please find a perl script resize.pl. It is necessary, to customize the /path/to/the/rrd/binary, e.g /usr/bin/rrdtool. Help! Put resize.pl wherever you want. There's no need to put it into the rrd working directory. But you will need some scratch space here for all rrds to be resized (due to the way rrdtool resize works). The user that runs this script must have write permissions to the current directory used for scratch read permissions on the original rrds to be resized write permissions to the target directory to store the resized rrds in The script does not care about space provided. To get help, simply type perl resize.pl -h you will receive resize.pl Version 0.43 - resize an existing rrd Usage: resize.pl -f <filemask> -r <rra> -s <actual row size> -o <output dir> -g <growth> -i [-d <debug>] Requires: Getopt::Std, File::Basename, File::stat, File::Copy, File::KGlob, RRDp Author: Reinhard Scheck Date: 2006-01-15 Options: -f, filemask of the source rrds -r, rra to be changed (first rra denotes as -r 0) -s, take only rra's with exactly that actual row size -o, output directory for resized rrds -g, growth (number of data points to be ADDED to those already defined) -i, invoke rrdtool info instead of resizing -d, debug level (0=standard, 1=function trace, 2=verbose) -h, usage and options (this help) -s or -r must be given. -s will override -r option No parameter validation done. Hope you know what you're going to do! Dry run You may want to have a look at your rrds before resizing them. Specially for the required parameter -r (denoting the rra to be resized), you will want to have a look at those rras, that are defined in the rrd in question. Example (linefeeds only for ease of
131 von 143 18.10.2007 21:35 reading): perl resize.pl -f "/var/www/html/cacti/rra/localhost_uptime_57.rrd" / -r 0 / -o /var/www/html/cacti/rra/resized/ -g 8000 / -i will result in: -- RRDTOOL INFO localhost_uptime_57.rrd... ds[uptime].type = "GAUGE" rra[0].cf = "AVERAGE" rra[0].rows = 600 rra[1].cf = "AVERAGE" rra[1].rows = 700 rra[2].cf = "AVERAGE" rra[2].rows = 775 rra[3].cf = "AVERAGE" rra[3].rows = 797 rra[4].cf = "MIN" rra[4].rows = 600 rra[5].cf = "MIN" rra[5].rows = 700 rra[6].cf = "MIN" rra[6].rows = 775 rra[7].cf = "MIN" rra[7].rows = 797 rra[8].cf = "MAX" rra[8].rows = 600 rra[9].cf = "MAX" rra[9].rows = 700 rra[10].cf = "MAX" rra[10].rows = 775 rra[11].cf = "MAX" rra[11].rows = 797 rra[12].cf = "LAST" rra[12].rows = 600 rra[13].cf = "LAST" rra[13].rows = 700 rra[14].cf = "LAST" rra[14].rows = 775 rra[15].cf = "LAST" rra[15].rows = 797 You may notice a single data source (uptime) four consolidation functions (AVERAGE, MIN, MAX, LAST) four rra's for each of the consolidation functions Of course, you may also enter a partly qualified dataset name. But it makes sense to take only those rrd's, that belong to the same datasource (e.g. with the same rrd file structure). Resizing a single RRA of a single RRD For ease of use, you may simply omit the trailing parameter -i. But pay attention to the parameter -r! In this example, only the first RRA of the consolidation function AVERAGE shall be resized. It depends on your needs, whether this will result in a correct RRD!
perl resize.pl -f "/var/www/html/cacti/rra/localhost_uptime_57.rrd" / -r 0 / -o /var/www/html/cacti/rra/resized/ -g 8000

The output will look like:

-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0) growing 8000.. (95328).. RRA#0.. (159328).. Done.

The first parenthesis contains the file size before resizing, the second one the size after resizing.

Resizing multiple RRAs of a single RRD

Simply enter all RRAs to be resized in quotes:

perl resize.pl -f "/var/www/html/cacti/rra/localhost_uptime_57.rrd" / -r "0 4 8 12" / -o /var/www/html/cacti/rra/resized/ -g 8000

to result in

-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.

Resizing multiple RRAs of multiple RRDs

Please enter all RRAs to be resized in quotes and partly qualify all RRDs:

perl resize.pl -f "/var/www/html/cacti/rra/*_uptime_*.rrd" / -r "0 4 8 12" / -o /var/www/html/cacti/rra/resized/ -g 8000

to result in

-- RRDTOOL RESIZE router_uptime_59.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
-- RRDTOOL RESIZE gandalf_uptime_58.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.
-- RRDTOOL RESIZE localhost_uptime_57.rrd RRA (0 4 8 12) growing 8000.. (95328).. RRA#0#4#8#12.. (351328).. Done.

Resizing all RRAs of a given row size

This is a new feature of this version. Use the parameter -s to specify the rowsize of the rra's you want to change. This parameter
133 von 143 18.10.2007 21:35 overrides the -r parameter, cause all relevant rra's will be calculated from the current rrd definition. This is useful if you're working on a list of files with different rrd structure (e.g. different Data Templates) perl resize.pl -g 8000 -f "/var/www/html/workspace/branch/rra/gandalf*.rrd" / -s 600 -o /var/www/html/cacti/rra/resized/ -g 8000 to result in... removed... -- RRDTOOL RESIZE gandalf_cpu_system_9.rrd RRA (0 4 ) growing 8000..47836.. RRA#0#4..175840.. Done. -- RRDTOOL RESIZE gandalf_cpu_user_10.rrd RRA (0 4 ) growing 8000..47836.. RRA#0#4..175840.. Done. -- RRDTOOL RESIZE gandalf_errors_in_18.rrd RRA (0 4 ) growing 8000..188308.. RRA#0#4..700312.. Done.... removed... -- RRDTOOL RESIZE gandalf_unicast_in_20.rrd RRA (0 4 ) growing 8000..94660.. RRA#0#4..350664.. Done.... removed... -- RRDTOOL RESIZE gandalf_uptime_58.rrd RRA (0 4 8 12 ) growing 8000..95328.. RRA#0#4#8#12..351328.. Done. -- RRDTOOL RESIZE gandalf_users_89.rrd RRA (0 4 8 12 ) growing 8000..95328.. RRA#0#4#8#12..351328.. Done. user time: 0.34 system time: 1.93 real time: 7.16 Please notice the last line of output, which reports the rrdtool runtime. If -s is given so that no rowsize of any rra will match, the corresponding rrd file is skipped: perl resize.pl -g 8000 -f "/var/www/html/workspace/branch/rra/gandalf*.rrd" -s 601 -o new-resized user time: 0.01 system time: 0.02 real time: 0.23 Something to keep in mind Be warned! You may even enter -o to resolve to the current RRD directory. This will result in overwriting your existing RRDs. YOU DON'T WANT TO DO THAT. Always look at the output after resizing. Try to generate graphs from them. Verify, that everything runs fine. BACKUP YOUR ORIGINAL RRDs. Howto use externally updated rrd files + BONUS TRACK Cacti itself provides a very flexible and fast data gatherer, but situations may occur, where this is not what you want. Assuming, that there already is some rrd file filled by other means, question arises how to use this with cacti's flexible rrdtool front-end administration facilities. Sometimes people call this feature "Importing external rrds" to cacti. But what I'm going to explain is not an automated function. It will require some manual interaction. Of course, the webserver must have at least read access to the required rrd file(s). For sake of easiness, I'll assume the file to be located in cacti's default./rra/ directory. In my examples, this file is called example.rrd.
134 von 143 18.10.2007 21:35 As always: Use this on your own risk Chapter I: Get rrdtool info First task is to get information about the data sources used in the rrd file. This is done using rrdtool info <rrd file>: rrdtool info./rra/external.rrd filename = "/var/www/html/cacti/rra/external.rrd" rrd_version = "0003" step = 300 last_update = 1140957748 ds[external_ds1].type = "GAUGE" ds[external_ds1].minimal_heartbeat = 600 ds[external_ds1].min = 0.0000000000e+00 ds[external_ds1].max = 5.0000000000e+02 ds[external_ds1].last_ds = "UNKN" ds[external_ds1].value = 4.4510209500e+02 ds[external_ds1].unknown_sec = 0 ds[external_ds2].type = "GAUGE" ds[external_ds2].minimal_heartbeat = 600 ds[external_ds2].min = 0.0000000000e+00 ds[external_ds2].max = 5.0000000000e+02 ds[external_ds2].last_ds = "UNKN" ds[external_ds2].value = 7.4183682500e+02 ds[external_ds2].unknown_sec = 0 ds[external_ds3].type = "GAUGE" ds[external_ds3].minimal_heartbeat = 600 ds[external_ds3].min = 0.0000000000e+00 ds[external_ds3].max = 5.0000000000e+02 ds[external_ds3].last_ds = "UNKN" ds[external_ds3].value = 1.0385715550e+03 ds[external_ds3].unknown_sec = 0 rra[0].cf = "AVERAGE" rra[0].rows = 600 rra[0].pdp_per_row = 1 (... more to follow...) From the ds[...] statements, the names of the ds' are taken. In this case the data sources are named external_ds1, external_ds2, external_ds3 respectively. While this is not that a meaningful name, it should show you the principles when dealing with multi-ds rrds. Alongside with this, it is good to know the ds[...].type, ds[...].min and ds[...].max for correct definition of the data sources. In this case, all data sources are of type GAUGE. This will not affect data gathering nor graphing, but to me it seems to be advantageous to create the correct data sources. All other paramaters (step, heartbeat, xff, rra[..].rows) are assumed to be standard settings. Chapter II: Create the Data Template The Data Template will tell cacti, how the data is stored within the rrd file. This is the way to tell cacti about all data sources and their properties. The purpose is twofold: tell cacti, how to create the correct rrd file with all parameters
135 von 143 18.10.2007 21:35 store the name of the data sources for later use with Graph Templates While the first goal is not needed in this context, the second one is crucial. So lets define a new Data Template. Goto Data Templates and Add a new one: Fill in the usual header data: If you want to associate this to a certain host, you may use host_description as a placeholder as usual. As this external.rrd file is updated externally, you must set the Data Input Method to None. Select the Associated RRA's as they will define the Detailed Graph Views (usually Daily, Weekly, Monthly, Yearly). And you'll have to uncheck the Data Source Active checkbox. This will prevent cacti from actually gathering data for this Data Template. Now add the first Data Source: Internal Data Source Name You must use the existing data source name of the rrd as retrieved in Chapter I. In this example, use external_ds1. Minimum Value Fill in the ds[...].min value from rrdtool info above. In this case, use 0. This is not really needed, but for sake of consistency I recommend this. Maximum Value Fill in the ds[...].max value from rrdtool info above. In this case, use 500. This is not really needed, but for sake of consistency I
136 von 143 18.10.2007 21:35 recommend this. Data Source Type Fill in the ds[...].type value from rrdtool info above. In this case, use GAUGE. This is not really needed, but for sake of consistency I recommend this. Heartbeat Fill in the ds[...].minimal_heartbeat value from rrdtool info above. In this case, use 600. This is not really needed, but for sake of consistency I recommend this. Now Create to see: Repeat this for all other data sources of the example.rrd (use the New function of Data Source Items): You will have noticed, that no Custom Data is given as defined by the Data Input Method set to NONE. You'll see the result as: Now you're done. Chapter III. Create the Graph Template The Graph template will tell cacti, what Data Sources should be shown on the Graph. This is very straightforward; simply use the Data Sources as defined in Chapter II and use all the Graph magic cacti provides. So Add a new Graph Template
137 von 143 18.10.2007 21:35 and fill in the usual header data: Now add the first Graph Item: and fill in the usual Data: Use the Legend Option as a time saver:
And add all other Graph Items like that: to end up in: Of course you will use more meaningful input for Text Format. That's all for now.

Chapter IV: Prepare the Host

Now it's time for the Host to be prepared. You have the choice between two different approaches:
the Host already exists (perhaps you're polling some other data from this host) and the status is up
the Host does not yet exist in cacti's tables and shall never be polled for other data by cacti's own poller

The first approach does not need any additional changes to the Devices list. The second approach will be more common. You will need a Host entry in the Devices list even for this host. So we will create kind of a dummy entry. Please goto the Devices list and Add a new one: Fill in Description and Name as usual. To deactivate all checks, please check Disable Host and leave SNMP Community empty.
139 von 143 18.10.2007 21:35 Create Please proceed to the next chapter now Chapter V: Creating the Data Sources Usually, you would create the Data Sources and Graphs automagically using Create Graphs for this Host. But using this approach, cacti would enumerate and generate the needed rrd file(s) on its own. That's not wanted now. Our task is: make cacti generate all Data Sources stuff but accept our external.rrd file name. This is accomplished by adding the Data Sources manually from the Data Sources list. In this case, there's no difference whether you're going to use an already existing host or the kind of host we generated in the previous chapter. Now select the Data Template we generated in Chapter II:
140 von 143 18.10.2007 21:35 and Create. You will be prompted to fill in the full path to your external.rrd file. If this resides in cacti's default./rra directory, you may use <path_rra> for this. Remember, that the web server must have at least read access to that file.: The result is shown in the next image Chapter VI: Create the new Graph Now we're nearly done! The last Step now! Let's create a new Graph from Graph Management now. Select the Selected Graph Template be defined in Chapter III: and Create. Now select all needed Data Source [...] and Save:
141 von 143 18.10.2007 21:35 The result is shown like: Please select this Graph again and Turn on Graph Debug Mode to see Bonus Track: Updating RRD Files from Remote It is known, that rrd files are architecture-dependant. So, if you're updating external rrd files and graphing them via cacti, there's a need to make cacti access these rrd files, e.g. by nfs. This will not succeed when mixing architectures.
142 von 143 18.10.2007 21:35 And nfs may not be the best choice for all cases. But with rrdtool 1.2.x there's a new feature, rrd server. Pasted from rrdtool homepage: Quote: RRD Server If you want to create a RRD-Server, you must choose a TCP/IP Service number and add them to /etc/services like this: rrdsrv 13900/tcp # RRD server Attention: the TCP port 13900 isn't officially registered for rrdsrv. You can use any unused port in your services file, but the server and the client system must use the same port, of course. With this configuration you can add RRDtool as meta-server to /etc/inetd.conf. For example: rrdsrv stream tcp nowait root /opt/rrd/bin/rrdtool rrdtool - /var/rrd Don't forget to create the database directory /var/rrd and reinitialize your inetd. If all was setup correctly, you can access the server with perl sockets, tools like netcat, or in a quick interactive test by using 'telnet localhost rrdsrv'. NOTE: that there is no authentication with this feature! Do not setup such a port unless you are sure what you are doing. For my local setup (RHEL 4.0), I had to modify this a bit. My /etc/services rrdsrv 13900/tcp # RRD server is standard. And I put # default: off # description: RRDTool as a service service rrdsrv { disable = no socket_type = stream protocol = tcp wait = no user = cactiuser server = /usr/bin/rrdtool server_args = - /var/www/html/cacti/rra } as /etc/xinetd.d/rrdsrv. Then xinetd is updated to start rrdsrv. user should be set to the user defined in include/config.php. server_args should be set to cacti's rra directory. server contains the full path to rrdtool's binary. To verify this, try
143 von 143 18.10.2007 21:35 telnet localhost 13900 info external.rrd assuming, that the rrd file "external.rrd" used in this howto is located in the./rra directory. Now its time for some remote script to use this new feature. As an example, see #!/usr/bin/perl use IO::Socket; my $host = shift @ARGV; my $port = shift @ARGV; my $rrd = shift @ARGV; my $socket = IO::Socket::INET->new(PeerAddr=> $host, PeerPort=> $port, Proto=> 'tcp', Type=> SOCK_STREAM) or die "Can't talk to $host at $port"; my $_cmd = "update ". $rrd. " N:". int(rand(10)). ":". int(rand(10)). ":". int(rand(10)); print $socket $_cmd. "\n"; close $socket; Of course, my $_cmd = "update ". $rrd. " N:". int(rand(10)). ":". int(rand(10)). ":". int(rand(10)); is only an example prepared for the "external.rrd" of our example. To use this for updating your own rrd files, this must fit to the data source definitions of your special rrd file. In our example, I put */5 * * * * cactiuser /usr/bin/perl /var/www/html/cacti/scripts/rrd-remote-update.pl localhost 13900 external.rrd into crontab to perform regular updates. Disadvantages This handling has the great disadvantage, that you must configure the rrd file name to each single updating script. This rrd file name on the remote system must match the one on cacti's host. There's a good chance to mess things up when used for lots of rrds. But for some few files this may be appropriate.