Set up a Web server cluster in 5 easy steps
Get up and running with the Linux Virtual Server and Linux-HA.org's Heartbeat

Eli M. Dow ([email protected]), Software Engineer, IBM
Frank Lefevre ([email protected]), Senior Software Engineer, IBM

Summary: Construct a highly available Apache Web server cluster that spans multiple physical or virtual Linux servers in 5 easy steps with Linux Virtual Server and Heartbeat v2.

Date: 22 Aug 2007
Level: Intermediate

Spreading a workload across multiple processors, coupled with various software recovery techniques, provides a highly available environment and enhances its overall RAS (Reliability, Availability, and Serviceability). Benefits include faster recovery from unplanned outages, as well as minimal effects of planned outages on the end user.

To get the most out of this article, you should be familiar with Linux and basic networking, and you should have Apache servers already configured. Our examples are based on standard SUSE Linux Enterprise Server 10 (SLES10) installations, but savvy users of other distributions should be able to adapt the methods shown here.

This article illustrates the robust Apache Web server stack with 6 Apache server nodes (though 3 nodes is sufficient for following the steps outlined here) as well as 3 Linux Virtual Server (LVS) directors. We used 6 Apache server nodes to drive higher workload throughputs during testing and thereby simulate larger deployments. The architecture presented here should scale to many more directors and backend Apache servers as your resources permit, but we haven't tried anything larger ourselves. Figure 1 shows our implementation using the Linux Virtual Server and the linux-ha.org components.

Figure 1. Linux Virtual Servers and Apache
As shown in Figure 1, the external clients send traffic to a single IP address, which may exist on any of the LVS director machines. The director machines actively monitor the pool of Web servers they relay work to. Note that the workload progresses from the left side of Figure 1 toward the right.

The floating resource address for this cluster will reside on one of the LVS director instances at any given time. The service address may be moved manually through a graphical configuration utility, or (more commonly) it can be self-managing, depending on the state of the LVS directors. Should any director become ineligible (due to loss of connectivity, software failure, or similar), the service address is relocated automatically to an eligible director. The floating service address must span two or more discrete hardware instances in order to continue operation through the loss of one physical machine.

With the configuration decisions presented in this article, each LVS director is able to forward packets to any real Apache Web server regardless of physical location or proximity to the active director providing the floating service address. This article shows how each of the LVS directors can actively monitor the Apache servers in order to ensure requests are sent only to operational back-end servers. With this configuration, practitioners have successfully failed entire Linux instances with no interruption of service to the consumers of the services enabled on the floating service address (typically http and https Web requests).

New to Linux Virtual Server? Terminology

LVS directors: Linux Virtual Server directors are systems that accept arbitrary incoming traffic and pass it on to any number of realservers. They then accept the response from the realservers and pass it back to the clients who initiated the request. The directors need to perform their task in a transparent fashion such that clients never know that realservers are doing the actual workload processing.

LVS directors themselves need the ability to float resources (specifically, a virtual IP address on which they listen for incoming traffic) between one another in order to not become a single point of failure. LVS directors accomplish floating IP addresses by leveraging the Heartbeat component from linux-ha.org. This allows each configured director that is running Heartbeat to ensure that one, and only one, of the directors lays claim to the virtual IP address servicing incoming requests.

Beyond the ability to float a service IP address, the directors need to be able to monitor the status of the realservers that are doing the actual workload processing. The directors must keep a working knowledge of which realservers are available for processing at all times. In order to monitor the realservers, the mon package is used. Read on for details on configuring Heartbeat and configuring mon.

Realservers: These systems are the actual Web server instances providing the HA service. It is vital to have more than one realserver providing the service you wish to make HA. In our environment, 6 realservers are implemented, but adding more is trivial once the rest of the LVS infrastructure is in place. In this article, the realservers are all assumed to be running the Apache Web Server, but other services could just as easily have been implemented (in fact, it is trivially easy to enable SSH serving as an additional test of the methodology presented here).
The realservers used are stock Apache Web servers, with the notable exception that they were configured to respond as if they were serving the LVS directors' floating IP address, or a virtual hostname corresponding to the floating IP address used by the directors. This is accomplished by altering a single line in the Apache configuration file.

You can duplicate our configuration using an entirely open source software stack consisting of Heartbeat technology components provided by linux-ha.org, and server monitoring via mon and Apache. As stated, we used SUSE Linux Enterprise Server for testing our configuration.

All of the machines used in the LVS scenario reside on the same subnet and use the Network Address Translation (NAT) configuration. Numerous other network topologies are described at the Linux Virtual Server Web site (see Resources); we favor NAT for simplicity. For added security, you should limit traffic across firewalls to only the floating IP address that is passed between the LVS directors.

The Linux Virtual Server suite provides a few different methods to accomplish a transparent HA back-end infrastructure. Each method has advantages and disadvantages. LVS-NAT operates on a director server by grabbing incoming packets that are destined for configuration-specified ports and dynamically rewriting the destination address in the packet header. The director does not process the data content of the packets itself, but rather relays them on to the realservers. The destination address in the packets is rewritten to point to a given realserver from the cluster. The packet is then placed back on the network for delivery to the realserver, and the realserver is unaware that anything has happened. As far as the realserver is concerned, it has simply received a request directly from the outside world. The replies from the realserver are then sent back to the director, where they are rewritten again to carry the source address of the floating IP address that clients are pointed at, and are sent along to the original client.

Using the LVS-NAT approach means the realservers require only simple TCP/IP functionality. The other modes of LVS operation, namely LVS-DR and LVS-Tun, require more complex networking concepts. The major benefit behind the choice of LVS-NAT is that very little alteration is required to the configuration of the realservers. In fact, the hardest part is remembering to set the routing statements properly.
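One plausible way to make that single-line change is to set each realserver's ServerName to the virtual hostname that resolves to the directors' floating IP. The following is only a sketch under that assumption; the hostname, file path, and reload command are placeholders and may differ on your installation:

# Hypothetical excerpt from a realserver's Apache configuration
# (/etc/apache2/httpd.conf or the relevant vhost file on SLES10):
#
#   ServerName www.example.com   # virtual hostname that resolves to the floating IP
#
# After changing the line, reload Apache so it takes effect:
rcapache2 reload    # SLES init wrapper; "apachectl graceful" also works on most installs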
Step 1: Building realserver images

Begin by making a pool of Linux server instances, each running Apache Web server, and ensure that the servers are working as designed by pointing a Web browser to each of the realservers' IP addresses. Typically, a standard install will be configured to listen on port 80 on its own IP address (in other words, on a different IP for each realserver).

Next, configure the default Web page on each server to display a static page containing the hostname of the machine serving the page. This ensures that you always know which machine you are connecting to during testing.

As a precaution, check that IP forwarding on these systems is OFF by issuing the following command:

cat /proc/sys/net/ipv4/ip_forward

If it's not OFF and you need to disable it, issue this command:

echo "0" >/proc/sys/net/ipv4/ip_forward

An easy way to ensure that each of your realservers is properly listening on the http port (80) is to use an external system and perform a scan. From some other system with network connectivity to your server, you can use the nmap utility to make sure the server is listening.

Listing 1. Using nmap to make sure the server is listening

nmap -P0 <realserver IP address>

Starting nmap 3.70 at <date>
Interesting ports on <realserver IP address>:
(The 1656 ports scanned but not shown below are in state: closed)
PORT    STATE     SERVICE
22/tcp  open      ssh
80/tcp  filtered  http
111/tcp open      rpcbind
631/tcp open      ipp

Be aware that some organizations frown on the use of port scanning tools such as nmap: make sure that your organization approves before using it. Next, point your Web browser to each realserver's actual IP address to ensure each is serving the appropriate page as expected. Once this is completed, go to Step 2.

Step 2: Installing and configuring the LVS directors

Now you are ready to construct the 3 LVS director instances needed. If you are doing a fresh install of SUSE Linux Enterprise Server 10 for each of the LVS directors, be sure to select the high availability packages relating to heartbeat, ipvsadm, and mon during the initial installation. If you have an existing installation, you can always use a package management tool, such as YaST, to add these packages after your base installation.

It is strongly recommended that you add each of the realservers to the /etc/hosts file. This will ensure there is no DNS-related delay when servicing incoming requests. At this time, double-check that each of the directors is able to perform a timely ping to each of the realservers:

Listing 2. Pinging the realservers

ping -c 1 $REAL_SERVER_IP_1
ping -c 1 $REAL_SERVER_IP_2
ping -c 1 $REAL_SERVER_IP_3
ping -c 1 $REAL_SERVER_IP_4
ping -c 1 $REAL_SERVER_IP_5
ping -c 1 $REAL_SERVER_IP_6
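If you want to repeat these checks quickly, the ping test and the Step 1 page check can be combined in a small loop run from a director. This is only a sketch; the addresses are made-up placeholders you must replace with your own, and curl can be swapped for wget -q -O - if it is not installed:

#!/bin/sh
# Placeholder addresses -- replace with your realservers' actual IPs.
REALSERVERS="10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15 10.0.0.16"

for ip in $REALSERVERS; do
    echo "--- $ip ---"
    # basic reachability check
    ping -c 1 -w 2 "$ip" >/dev/null && echo "ping: ok" || echo "ping: FAILED"
    # fetch the self-identifying page configured in Step 1
    curl -s --max-time 3 "http://$ip/" | head -n 3
done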
Once completed, install ipvsadm, Heartbeat, and mon from the native package management tools on the server. Recall that Heartbeat will be used for intra-director communication, and mon will be used by each director to maintain information about the status of each realserver.

Step 3: Installing and configuring Heartbeat on the directors

If you have worked with LVS before, keep in mind that configuring Heartbeat Version 2 on SLES10 is quite a bit different than it was for Heartbeat Version 1 on SLES9. Where Heartbeat Version 1 used files (haresources, ha.cf, and authkeys) stored in the /etc/ha.d/ directory, Version 2 uses the new, XML-based Cluster Information Base (CIB). The recommended approach for upgrading is to use the haresources file to generate the new cib.xml file. The contents of a typical ha.cf file are shown in Listing 3. We took the ha.cf file from a SLES9 system and added the bottom 3 lines (respawn, pingd, and crm) for Version 2. If you have an existing version 1 configuration, you may opt to do the same. If you are using these instructions for a new installation, you can copy Listing 3 and modify it to suit your production environment.

Listing 3. A sample /etc/ha.d/ha.cf config file

# Log to syslog as facility "daemon"
use_logd on
logfacility daemon

# List our cluster members (the directors)
node litsha22
node litsha23
node litsha21

# How often to send heartbeats, in seconds
keepalive 3

# Warn when heartbeats are late
warntime 5

# Declare nodes dead after 10 seconds
deadtime 10

# Keep resources on their "preferred" hosts - needed for active/active
auto_failback on

# The cluster nodes communicate on their heartbeat lan (.68.*) interfaces
ucast <heartbeat interface> <director heartbeat address>
ucast <heartbeat interface> <director heartbeat address>
ucast <heartbeat interface> <director heartbeat address>

# Fail over on network failures:
# make the default gateway on the public interface a node to ping.
#   (-m) -> For every connected node, add <integer> to the value set in the CIB, Default=1
#   (-d) -> How long to wait for no further changes to occur before updating the CIB
#           with a changed attribute
#   (-a) -> Name of the node attribute to set, Default=pingd
respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s

# Ping our router to monitor ethernet connectivity
ping litrout71_vip

# Enable version 2 functionality supporting clusters with > 2 nodes
crm yes

The respawn directive is used to specify a program to run and monitor while it runs. If this program exits with anything other than exit code 100, it will be automatically restarted.
The first parameter is the user id to run the program under, and the second parameter is the program to run. The -m parameter sets the pingd attribute to 100 times the number of ping nodes reachable from the current machine, and the -d parameter delays 5 seconds before modifying the pingd attribute in the CIB. The ping directive is given to declare the PingNode to Heartbeat, and the crm directive specifies whether Heartbeat should run the 1.x-style cluster manager or the 2.x-style cluster manager that supports more than 2 nodes.

This file should be identical on all the directors. It is absolutely vital that you set the permissions appropriately such that the hacluster daemon can read the file. Failure to do so will cause a slew of warnings in your log files that may be difficult to debug.

For a release 1-style Heartbeat cluster, the haresources file specifies the node name and networking information (floating IP, associated interface, and broadcast). For us, this file remained unchanged:

litsha21 <floating IP address>/24/eth0/<broadcast address>

This file will be used only to generate the cib.xml file.

The authkeys file specifies a shared secret allowing directors to communicate with one another. The shared secret is simply a password that all the heartbeat nodes know and use to communicate with one another. The secret prevents unwanted parties from trying to influence the heartbeat server nodes. This file also remained unchanged:

auth 1
1 sha1 ca0e f55794b23461eb4106db
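One note on the authkeys file: Heartbeat refuses to start if this file is readable by anyone other than root, so it is worth confirming the permissions on each director. A minimal sketch:

chmod 600 /etc/ha.d/authkeys
ls -l /etc/ha.d/authkeys    # should show -rw------- owned by root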
The next few steps show you how to convert the version 1 haresources file to the new version 2 XML-based configuration format (cib.xml). Though it should be possible to simply copy and use the configuration file in Listing 4 as a starting point, it is strongly suggested that you follow along to tailor the configuration for your deployment. To convert the file to the XML-based CIB (Cluster Information Base) format you will use in deployment, issue the following command:

python /usr/lib64/heartbeat/haresources2cib.py /etc/ha.d/haresources > /var/lib/heartbeat/crm/test.xml

A configuration file similar to the one shown in Listing 4 will be generated and placed in /var/lib/heartbeat/crm/test.xml.

Listing 4. Sample cib.xml file

<cib admin_epoch="0" have_quorum="true" num_peers="3" cib_feature_revision="1.3"
     generated="true" ccm_transition="7" dc_uuid="114f3ad1-f18a-4bec-9f01-7ecc4d820f6c"
     epoch="280" num_updates="5205" cib-last-written="tue Apr 3 16:03: ">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <attributes>
          <nvpair id="cib-bootstrap-options-symmetric_cluster" name="symmetric_cluster" value="true"/>
          <nvpair id="cib-bootstrap-options-no_quorum_policy" name="no_quorum_policy" value="stop"/>
          <nvpair id="cib-bootstrap-options-default_resource_stickiness" name="default_resource_stickiness" value="0"/>
          <nvpair id="cib-bootstrap-options-stonith_enabled" name="stonith_enabled" value="false"/>
          <nvpair id="cib-bootstrap-options-stop_orphan_resources" name="stop_orphan_resources" value="true"/>
          <nvpair id="cib-bootstrap-options-stop_orphan_actions" name="stop_orphan_actions" value="true"/>
          <nvpair id="cib-bootstrap-options-remove_after_stop" name="remove_after_stop" value="false"/>
          <nvpair id="cib-bootstrap-options-transition_idle_timeout" name="transition_idle_timeout" value="5min"/>
          <nvpair id="cib-bootstrap-options-is_managed_default" name="is_managed_default" value="true"/>
        </attributes>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node uname="litsha21" type="normal" id="01ca9c3e db5-ba33-a25cd46b72b3">
        <instance_attributes id="standby-01ca9c3e db5-ba33-a25cd46b72b3">
          <attributes>
            <nvpair name="standby" id="standby-01ca9c3e db5-ba33-a25cd46b72b3" value="off"/>
          </attributes>
        </instance_attributes>
      </node>
      <node uname="litsha23" type="normal" id="dc9a784f af-96d2ab651eac">
        <instance_attributes id="standby-dc9a784f af-96d2ab651eac">
          <attributes>
            <nvpair name="standby" id="standby-dc9a784f af-96d2ab651eac" value="off"/>
          </attributes>
        </instance_attributes>
      </node>
      <node uname="litsha22" type="normal" id="114f3ad1-f18a-4bec-9f01-7ecc4d820f6c">
        <instance_attributes id="standby-114f3ad1-f18a-4bec-9f01-7ecc4d820f6c">
          <attributes>
            <nvpair name="standby" id="standby-114f3ad1-f18a-4bec-9f01-7ecc4d820f6c" value="off"/>
          </attributes>
        </instance_attributes>
      </node>
    </nodes>
    <resources>
      <primitive class="ocf" provider="heartbeat" type="ipaddr" id="ipaddr_1">
        <operations>
          <op id="ipaddr_1_mon" interval="5s" name="monitor" timeout="5s"/>
        </operations>
        <instance_attributes id="ipaddr_1_inst_attr">
          <attributes>
            <nvpair id="ipaddr_1_attr_0" name="ip" value=" "/>
            <nvpair id="ipaddr_1_attr_1" name="netmask" value="24"/>
            <nvpair id="ipaddr_1_attr_2" name="nic" value="eth0"/>
            <nvpair id="ipaddr_1_attr_3" name="broadcast" value=" "/>
          </attributes>
        </instance_attributes>
      </primitive>
    </resources>
    <constraints>
      <rsc_location id="rsc_location_ipaddr_1" rsc="ipaddr_1">
        <rule id="prefered_location_ipaddr_1" score="200">
          <expression attribute="uname" id="prefered_location_ipaddr_1_expr" operation="eq" value="litsha21"/>
        </rule>
      </rsc_location>
      <rsc_location id="my_resource:connected" rsc="ipaddr_1">
        <rule id="my_resource:connected:rule" score_attribute="pingd">
          <expression id="my_resource:connected:expr:defined" attribute="pingd" operation="defined"/>
        </rule>
      </rsc_location>
    </constraints>
  </configuration>
</cib>

Once your configuration file is generated, move test.xml to cib.xml, change the owner to hacluster and the group to haclient, and then restart the heartbeat process.

Now that the heartbeat configuration is complete, set heartbeat to start at boot time on each of the directors. To do this, issue the following command (or equivalent for your distribution) on each director:

chkconfig heartbeat on

Restart each of the LVS directors to ensure the heartbeat service starts properly at boot. By halting the machine that holds the floating resource IP address first, you can watch as the other LVS director images establish quorum, and instantiate the service address on a newly elected primary node within a matter of seconds. When you bring the halted director image back online, the machines will re-establish quorum across all nodes, at which time the floating resource IP may transfer back. The entire process should take only a few seconds.
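Taken together, the activation steps just described might look like the following on each director. This is only a sketch using the SLES10 paths from this article; adapt the service commands to your distribution:

cd /var/lib/heartbeat/crm
mv test.xml cib.xml
chown hacluster:haclient cib.xml
/etc/init.d/heartbeat restart    # rcheartbeat restart is equivalent on SLES
chkconfig heartbeat on           # start heartbeat automatically at boot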
Additionally, at this time you may wish to use the graphical utility for the heartbeat process, hb_gui (see Figure 2), to manually move the IP address around in the cluster by setting various nodes to the standby or active state. Retry these steps numerous times, disabling and re-enabling various machines that are active or inactive. With the configuration policy selected earlier, as long as quorum can be established and at least one node is eligible, the floating resource IP address will remain operational. During your testing, you can use simple pings to ensure that no packet loss occurs. When you have finished experimenting, you should have a strong feel for how robust your configuration is. Make sure you are comfortable with the HA configuration of your floating resource IP before continuing on.

Figure 2. Graphical configuration utility for the heartbeat process, hb_gui

Figure 2 illustrates the graphical console as it appears after login, showing the managed resources and associated configuration options. Note that you must log into the hb_gui console when you first launch the application; the credentials used will depend on your deployment.

Notice in Figure 2 how the nodes in the cluster, the litsha2* systems, are each in the running state. The system labeled litsha21 is the current active node, as indicated by the resource (IPaddr_1) displayed immediately below it and indented. Also note that the selection labeled "No Quorum Policy" is set to the value "stop". This means that any isolated node releases the resources it would otherwise own. The implication of that decision is that at any given time, 2 heartbeat nodes must be active to establish quorum (in other words, a voting majority). Even if a single active, 100% operational node loses connection to its peer systems due to network failure, or if both of its inactive peers halt simultaneously, the resource will be voluntarily released.

Step 4: Creating LVS rules with the ipvsadm command

The next step is to take the floating resource IP address and build on it. Because LVS is intended to be transparent to remote Web browser clients, all Web requests must be funneled through the directors and passed on to one of the realservers. Then any results need to be relayed back to the director, which then returns the response to the client that initiated the Web page request.

To accomplish that flow of requests and responses, first configure each of the LVS directors to enable IP forwarding (thus allowing requests to be passed on to the realservers) by issuing the following commands:

echo "1" >/proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward

If all was successful, the second command returns a "1" as output to your terminal. To make this setting permanent, add IP_FORWARD="yes" to /etc/sysconfig/sysctl.
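The persistence step can also be scripted; the following is a minimal sketch for SLES10 (other distributions typically set net.ipv4.ip_forward=1 in /etc/sysctl.conf instead):

echo "1" > /proc/sys/net/ipv4/ip_forward        # takes effect immediately
# persist across reboots: update or append the SLES sysconfig setting
grep -q '^IP_FORWARD=' /etc/sysconfig/sysctl \
    && sed -i 's/^IP_FORWARD=.*/IP_FORWARD="yes"/' /etc/sysconfig/sysctl \
    || echo 'IP_FORWARD="yes"' >> /etc/sysconfig/sysctl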
Next, to tell the directors to relay incoming HTTP requests for the HA floating IP address on to the realservers, use the ipvsadm command. First, clear the old ipvsadm tables:

/sbin/ipvsadm -C

Before you can configure the new tables, you need to decide what kind of workload distribution you want the LVS directors to use. On receipt of a connect request from a client, the director assigns a realserver to the client based on a "schedule," and you will set the scheduler type with the ipvsadm command. Available schedulers include:

Round Robin (RR): New incoming connections are assigned to each realserver in turn.
Weighted Round Robin (WRR): RR scheduling with an additional weighting factor to compensate for differences in realserver capabilities such as additional CPUs, more memory, and so on.
Least Connected (LC): New connections go to the realserver with the least number of connections. This is not necessarily the least-busy realserver, but it is a step in that direction.
Weighted Least Connection (WLC): LC with weighting.

It is a good idea to use RR scheduling for testing, as it is easy to confirm. You may want to add WRR and LC to your testing routine to confirm that they work as expected. The examples shown here assume RR scheduling and its variants.

Next, create a script to enable ipvsadm service forwarding to the realservers, and place a copy on each LVS director. This script will not be necessary once mon is configured later to automatically monitor for active realservers, but it aids in testing the ipvsadm component until then. Remember to double-check for proper network and http/https connectivity to each of your realservers before executing this script.

Listing 5. The HA_CONFIG.sh file

#!/bin/sh

# The virtual address on the director which acts as a cluster address,
# followed by the realserver addresses (fill in your own values)
VIRTUAL_CLUSTER_ADDRESS=
REAL_SERVER_IP_1=
REAL_SERVER_IP_2=
REAL_SERVER_IP_3=
REAL_SERVER_IP_4=
REAL_SERVER_IP_5=
REAL_SERVER_IP_6=

# Set ip_forward ON for the vs-nat director (1 on, 0 off).
cat /proc/sys/net/ipv4/ip_forward
echo "1" >/proc/sys/net/ipv4/ip_forward

# The director acts as the gw for the realservers.
# Turn OFF icmp redirects (1 on, 0 off); if not, the realservers might be
# clever and not use the director as the gateway!
echo "0" >/proc/sys/net/ipv4/conf/all/send_redirects
echo "0" >/proc/sys/net/ipv4/conf/default/send_redirects
echo "0" >/proc/sys/net/ipv4/conf/eth0/send_redirects

# Clear ipvsadm tables (better safe than sorry)
/sbin/ipvsadm -C
# We install LVS services with ipvsadm for HTTP and HTTPS connections with RR scheduling
/sbin/ipvsadm -A -t $VIRTUAL_CLUSTER_ADDRESS:http -s rr
/sbin/ipvsadm -A -t $VIRTUAL_CLUSTER_ADDRESS:https -s rr

# First realserver
# Forward HTTP to REAL_SERVER_IP_1 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_1:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_1:https -m -w 1

# Second realserver
# Forward HTTP to REAL_SERVER_IP_2 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_2:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_2:https -m -w 1

# Third realserver
# Forward HTTP to REAL_SERVER_IP_3 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_3:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_3:https -m -w 1

# Fourth realserver
# Forward HTTP to REAL_SERVER_IP_4 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_4:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_4:https -m -w 1

# Fifth realserver
# Forward HTTP to REAL_SERVER_IP_5 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_5:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_5:https -m -w 1

# Sixth realserver
# Forward HTTP to REAL_SERVER_IP_6 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_6:http -m -w 1
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_6:https -m -w 1

# We print the new ipvsadm table for inspection
echo "NEW IPVSADM TABLE:"
/sbin/ipvsadm

As you can see in Listing 5, the script simply enables the ipvsadm services, then has virtually identical stanzas to forward Web and SSL requests to each of the individual realservers. We used the -m option to specify NAT, and weighted each realserver equally with a weight of 1 (-w 1). The weights specified are superfluous when using normal round robin scheduling (the default weight is always 1). The option is shown only so that you may later switch to weighted round robin. To do so, change rr to wrr on the 2 consecutive lines below the comment about RR scheduling, and do not forget to adjust the weights accordingly; a sketch of this change follows. For more information about the various schedulers, consult the man page for ipvsadm.

You have now configured each director to handle incoming Web and SSL requests to the floating service IP by rewriting them and passing the work on to the realservers in succession.
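For example, the weighted round robin variant of those lines might look like this; the weights shown are invented purely for illustration, and the same variables from Listing 5 are assumed:

# use weighted round robin instead of plain round robin
/sbin/ipvsadm -A -t $VIRTUAL_CLUSTER_ADDRESS:http -s wrr
/sbin/ipvsadm -A -t $VIRTUAL_CLUSTER_ADDRESS:https -s wrr

# a realserver with more CPU or memory can be given a larger share of connections
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:http -r $REAL_SERVER_IP_1:http -m -w 3
/sbin/ipvsadm -a -t $VIRTUAL_CLUSTER_ADDRESS:https -r $REAL_SERVER_IP_1:https -m -w 3

# inspect the resulting table (numeric output)
/sbin/ipvsadm -L -n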
But in order to get traffic back from the realservers, and do the reverse rewriting before handing the responses back to the client that made the request, you need to alter a few of the networking settings on the directors. This is necessary because of the decision to implement the LVS directors and realservers in a flat network topology (that is, all on the same subnet). We need to perform the following steps to force the Apache response traffic back through the directors rather than having the realservers answer clients directly:

echo "0" > /proc/sys/net/ipv4/conf/all/send_redirects
echo "0" > /proc/sys/net/ipv4/conf/default/send_redirects
echo "0" > /proc/sys/net/ipv4/conf/eth0/send_redirects

This is done to prevent the active LVS director from trying to take a TCP/IP shortcut by telling the realserver and the floating service IP to talk directly to one another (since they are on the same subnet). Normally redirects are useful, as they improve performance by cutting out unnecessary middlemen in network connections. But in this case, it would have prevented the response traffic from being rewritten, as is necessary for transparency to the client. In fact, if redirects were not disabled on the LVS director, the traffic sent from the realserver directly to the client would appear to the client as an unsolicited network response and would be discarded.

At this point, it is time to set the default route of each of the realservers to point at the service floating IP address, to ensure all responses are passed back to the director for packet rewriting before being passed back to the client that originated the request (a sketch of this change is shown just before Step 5, below). Once redirects are disabled on the directors, and the realservers are configured to route all traffic through the floating service IP, you may proceed to test your HA LVS environment.

To test the work done thus far, point a Web browser on a remote client to the floating service address of the LVS directors. For testing in the laboratory, we used a Gecko-based browser (Mozilla), though any browser should suffice. To ensure the deployment was successful, disable caching in the browser, and click the refresh button multiple times. With each refresh, you should see that the Web page displayed is one of the self-identifying pages configured on the realservers. If you are using RR scheduling, you should observe the page cycling through each of the realservers in succession.

Are you now thinking of ensuring that the LVS configuration starts automatically at boot? Don't do that just yet! There is one more step needed (Step 5) to perform active monitoring of the realservers (thus keeping a dynamic list of which Apache nodes are eligible to service work requests).
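Here is the realserver-side default-route change mentioned above as a minimal sketch. The variable value is a placeholder you must fill in with your directors' floating service address, and the equivalent ip route commands work just as well:

# Run on each realserver
FLOATING_SERVICE_IP=             # the directors' floating service address
route del default 2>/dev/null    # drop any existing default route first
route add default gw $FLOATING_SERVICE_IP
route -n                         # verify the new default route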
Step 5: Installing and configuring mon on the LVS directors

So far, you have established a highly available service IP address and bound it to the pool of realserver instances. But you must never trust any of the individual Apache servers to be operational at any given time. By choosing RR scheduling, if any given realserver becomes disabled, or ceases to respond to network traffic in a timely fashion, 1/6th of the HTTP requests would be failures! Thus it is necessary to implement monitoring of the realservers on each of the LVS directors in order to dynamically add or remove them from the service pool. Another well-known open source package called mon is well suited for this task. The mon solution is commonly used for monitoring LVS realnodes.

Mon is relatively easy to configure, and is very extensible for people familiar with shell scripting. There are essentially three main steps to get everything working: installation, service monitoring configuration, and alert creation. Use your package management tool to handle the installation of mon. When finished with the installation, you need only to adjust the monitoring configuration and create some alert scripts. The alert scripts are triggered when the monitors determine that a realserver has gone offline or come back online.

Note that with heartbeat v2 installations, monitoring of the realservers can also be accomplished by making all of the realserver services heartbeat resources, or by using the Heartbeat ldirectord package.

By default, mon comes with several monitor mechanisms ready to be used. We altered the sample configuration file in /etc/mon.cf to make use of the HTTP service.

In the mon configuration file, ensure the header reflects the proper paths. SLES10 is a 64-bit Linux image, but the sample configuration as shipped was for the default (31- or 32-bit) locations. The configuration file sample assumed the alerts and monitors are located in /usr/lib, which was incorrect for our particular installation. The parameters we altered were as follows:

alertdir = /usr/lib64/mon/alert.d
mondir = /usr/lib64/mon/mon.d

As you can see, we simply changed lib to lib64. Such a change may not be necessary for your distribution.

The next change to the configuration file was to specify the list of realservers to monitor. This was done with the following 6 directives:
Listing 6. Specifying realservers to monitor

hostgroup litstat1 <realserver 1 hostname and IP>
hostgroup litstat2 <realserver 2 hostname and IP>
hostgroup litstat3 <realserver 3 hostname and IP>
hostgroup litstat4 <realserver 4 hostname and IP>
hostgroup litstat5 <realserver 5 hostname and IP>
hostgroup litstat6 <realserver 6 hostname and IP>

If you want to add additional realservers, simply add additional entries here. Once you have all of your definitions in place, you need to tell mon how to watch for failure, and what to do in case of failure. To do this, add the following monitor sections (one for each realserver). When done, you will need to place both the mon configuration file and the alerts on each of the LVS heartbeat nodes, enabling each heartbeat cluster node to independently monitor all of the realservers.

Listing 7. The /etc/mon/mon.cf file

# global options
cfbasedir    = /etc/mon
alertdir     = /usr/lib64/mon/alert.d
mondir       = /usr/lib64/mon/mon.d
statedir     = /var/lib/mon
logdir       = /var/log
maxprocs     = 20
histlength   = 100
historicfile = mon_history.log
randstart    = 60s

# authentication types:
#   getpwnam  standard Unix passwd, NOT for shadow passwords
#   shadow    Unix shadow passwords (not implemented)
#   userfile  "mon" user file
authtype = getpwnam

# downtime logging, uncomment to enable;
# if the server is running, don't forget to send a reset command when you change this
dtlogfile = downtime.log
dtlogging = yes

# NB: hostgroup and watch entries are terminated with a blank line (or end of file).
# Don't forget the blank lines between them or you lose.

# group definitions (hostnames or IP addresses)
# example: hostgroup servers www mail pop server4 server5
#
# For simplicity we monitor each individual server as if it were a "group",
# so we add only the hostname and the IP address of an individual node for each.
hostgroup litstat1 <realserver 1 hostname and IP>
hostgroup litstat2 <realserver 2 hostname and IP>
hostgroup litstat3 <realserver 3 hostname and IP>
hostgroup litstat4 <realserver 4 hostname and IP>
hostgroup litstat5 <realserver 5 hostname and IP>
hostgroup litstat6 <realserver 6 hostname and IP>

# Now we set identical watch definitions on each of our groups. They could be
# customized to treat individual servers differently, but we have made the
# configurations homogeneous here to match our homogeneous LVS configuration.
watch litstat1
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 1>
            upalert dowem.up.alert -h <realserver 1>
            alertevery 600s
            alertafter 1

watch litstat2
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 2>
            upalert dowem.up.alert -h <realserver 2>
            alertevery 600s
            alertafter 1

watch litstat3
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 3>
            upalert dowem.up.alert -h <realserver 3>
            alertevery 600s
            alertafter 1

watch litstat4
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 4>
            upalert dowem.up.alert -h <realserver 4>
            alertevery 600s
            alertafter 1

watch litstat5
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 5>
            upalert dowem.up.alert -h <realserver 5>
            alertevery 600s
            alertafter 1

watch litstat6
    service http
        description http check servers
        interval 6s
        monitor http.monitor -p 80 -u /index.html
        allow_empty_group
        period wd {Mon-Sun}
            alert dowem.down.alert -h <realserver 6>
            upalert dowem.up.alert -h <realserver 6>
            alertevery 600s
            alertafter 1
Listing 7 tells mon to use the http.monitor, which is shipped with mon by default. Additionally, port 80 is specified as the port to use. Listing 7 also provides the specific page to request; you may opt to serve a small, more efficient segment of HTML as proof of success rather than a complicated default HTML page for your Web server.

The alert and upalert lines invoke scripts that must be placed in the alertdir specified at the top of the configuration file. The directory is typically the distribution default, such as "/usr/lib64/mon/alert.d". The alerts are responsible for telling LVS to add or remove Apache servers from the eligibility list (by invoking the ipvsadm command, as we shall see in a moment).

When one of the realservers fails the http test, dowem.down.alert is executed by mon with several arguments automatically. Likewise, when the monitors determine that a realserver has come back online, the mon process executes dowem.up.alert with numerous arguments automatically. Feel free to alter the names of the alert scripts to suit your own deployment.

Save this file, and create the alerts (using simple bash scripting) in the alertdir. Listing 8 shows a bash script alert that will be invoked by mon when a realserver connection is re-established.

Listing 8. Simple alert: we have connectivity!

#!/bin/bash
#
# The -h arg is followed by the hostname we are interested in acting on,
# so we skip ahead to the -h option since we don't care about the others.

REALSERVER=     # the floating cluster (virtual service) address from Listing 5

while [ "$1" != "-h" ]; do
    shift
done

ADDHOST=$2

# For the HTTP service
/sbin/ipvsadm -a -t $REALSERVER:http -r $ADDHOST:http -m -w 1

# For the HTTPS service
/sbin/ipvsadm -a -t $REALSERVER:https -r $ADDHOST:https -m -w 1

Listing 9 shows a bash script alert that will be invoked by mon when a realserver connection is lost.

Listing 9. Simple alert: we have lost connectivity!

#!/bin/bash
#
# The -h arg is followed by the hostname we are interested in acting on,
# so we skip ahead to the -h option since we don't care about the others.

REALSERVER=     # the floating cluster (virtual service) address from Listing 5

while [ "$1" != "-h" ]; do
    shift
done
BADHOST=$2

# For the HTTP service
/sbin/ipvsadm -d -t $REALSERVER:http -r $BADHOST

# For the HTTPS service
/sbin/ipvsadm -d -t $REALSERVER:https -r $BADHOST

Both of those scripts use the ipvsadm command-line tool to dynamically add and remove realservers from the LVS tables. Note that these scripts are far from perfect. With mon monitoring only the http port for simple Web requests, the architecture as outlined here is vulnerable to situations where a given realserver might be operating correctly for http requests but not for SSL requests. Under those circumstances, we would fail to remove the offending realserver from the list of https candidates. Of course, this is easily remedied by making more advanced alerts specifically for each type of Web request, in addition to enabling a second https monitor for each realserver in the mon configuration file. This is left as an exercise for the reader.

To ensure monitoring has been activated, enable and disable the Apache process on each of the realservers in sequence, observing each of the directors for their reaction to the events. Only when you have confirmed that each director is properly monitoring each realserver should you use the chkconfig command to make sure that the mon process starts automatically at boot. The specific command used was chkconfig mon on, but this may vary based on your distribution.

With this last piece in place, you have finished the task of constructing a cross-system, highly available Web server infrastructure. Of course, you might now opt to do more advanced work. For instance, you may have noticed that the mon daemon itself is not monitored (the heartbeat project can monitor mon for you), but with this last step, the basic foundation has been laid.

Troubleshooting

There are many reasons why an active node could stop functioning properly in an HA cluster, either voluntarily or involuntarily. The node could lose network connectivity to the other nodes, the heartbeat process could be stopped, there might be any one of a number of environmental occurrences, and so on. To deliberately fail the active node, you can issue a halt on that node, or set it to standby mode using hb_gui (a clean take-down). If you feel inclined to test the robustness of your environment, you might opt to be a bit more aggressive (yank the plug!).

Indicators and failover

There are two types of log file indicators available to the system administrator responsible for configuring a Linux HA heartbeat system. The log files vary depending on whether or not a system is the recipient of the floating resource IP address. Log results for cluster members that did not receive the floating resource IP address look like so:

Listing 10. Log results for also-rans

litsha21:~ cat /var/log/messages
Jan 16 12:00:20 litsha21 heartbeat: [3057]: WARN: node litsha23: is dead
Jan 16 12:00:21 litsha21 cib: [3065]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jan 16 12:00:21 litsha21 cib: [3065]: info: mem_handle_event: instance=13, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
Jan 16 12:00:21 litsha21 crmd: [3069]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jan 16 12:00:21 litsha21 crmd: [3069]: info: mem_handle_event: instance=13, nodes=3, new=1, lost=0, n_idx=0, new_idx=3, old_idx=6
Jan 16 12:00:21 litsha21 crmd: [3069]: info: crmd_ccm_msg_callback:callbacks.c Quorum lost after event=not PRIMARY (id=13)
Jan 16 12:00:21 litsha21 heartbeat: [3057]: info: Link litsha23:eth1 dead.
Jan 16 12:00:38 litsha21 ccm: [3064]: debug: quorum plugin: majority
Jan 16 12:00:38 litsha21 ccm: [3064]: debug: cluster:linux-ha, member_count=2, member_quorum_votes=200
Jan 16 12:00:38 litsha21 ccm: [3064]: debug: total_node_count=3, total_quorum_votes=
... Truncated For Brevity ...
Jan 16 12:00:40 litsha21 crmd: [3069]: info: update_dc:utils.c Set DC to litsha21 (1.0.6)
Jan 16 12:00:41 litsha21 crmd: [3069]: info: do_state_transition:fsa.c litsha21: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=i_integrated cause=c_fsa_internal origin=check_join_state ]
Jan 16 12:00:41 litsha21 crmd: [3069]: info: do_state_transition:fsa.c All 2 cluster nodes responded to the join offer.
Jan 16 12:00:41 litsha21 crmd: [3069]: info: update_attrd:join_dc.c Connecting to attrd...
Jan 16 12:00:41 litsha21 cib: [3065]: info: sync_our_cib:messages.c Syncing CIB to all peers
Jan 16 12:00:41 litsha21 attrd: [3068]: info: attrd_local_callback:attrd.c Sending full refresh
... Truncated For Brevity ...
Jan 16 12:00:43 litsha21 pengine: [3112]: info: unpack_nodes:unpack.c Node litsha21 is in standby-mode
Jan 16 12:00:43 litsha21 pengine: [3112]: info: determine_online_status:unpack.c Node litsha21 is online
Jan 16 12:00:43 litsha21 pengine: [3112]: info: determine_online_status:unpack.c Node litsha22 is online
Jan 16 12:00:43 litsha21 pengine: [3112]: info: IPaddr_1 (heartbeat::ocf:ipaddr): Stopped
Jan 16 12:00:43 litsha21 pengine: [3112]: notice: StartRsc:native.c litsha22 Start IPaddr_1
Jan 16 12:00:43 litsha21 pengine: [3112]: notice: Recurring:native.c litsha22 IPaddr_1_monitor_5000
Jan 16 12:00:43 litsha21 pengine: [3112]: notice: stage8:stages.c Created transition graph
... Truncated For Brevity ...
Jan 16 12:00:46 litsha21 mgmtd: [3070]: debug: update cib finished
Jan 16 12:00:46 litsha21 crmd: [3069]: info: do_state_transition:fsa.c litsha21: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=i_te_success cause=c_ipc_message origin=do_msg_route ]
Jan 16 12:00:46 litsha21 cib: [3118]: info: write_cib_contents:io.c Wrote version of the CIB to disk (digest: 83b00c386e8b67c42d033a4141aaef90)

As you can see from Listing 10, a roll is taken, and sufficient members for quorum are available for the vote. A vote is taken, and normal operation is resumed with no further action needed.

In contrast, log results for cluster members that did receive the floating resource IP address are as follows:

Listing 11. The log file of the resource holder

litsha22:~ cat /var/log/messages
Jan 16 12:00:06 litsha22 syslog-ng[1276]: STATS: dropped 0
Jan 16 12:01:51 litsha22 heartbeat: [3892]: WARN: node litsha23: is dead
Jan 16 12:01:51 litsha22 heartbeat: [3892]: info: Link litsha23:eth1 dead.
Jan 16 12:01:51 litsha22 cib: [3900]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jan 16 12:01:51 litsha22 cib: [3900]: info: mem_handle_event: instance=13, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
Jan 16 12:01:51 litsha22 crmd: [3904]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jan 16 12:01:51 litsha22 crmd: [3904]: info: mem_handle_event: instance=13, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
Jan 16 12:01:51 litsha22 crmd: [3904]: info: crmd_ccm_msg_callback:callbacks.c Quorum lost after event=not PRIMARY (id=13)
Jan 16 12:02:09 litsha22 ccm: [3899]: debug: quorum plugin: majority
Jan 16 12:02:09 litsha22 crmd: [3904]: info: do_election_count_vote:election.c Election check: vote from litsha21
Jan 16 12:02:09 litsha22 ccm: [3899]: debug: cluster:linux-ha, member_count=2, member_quorum_votes=200
Jan 16 12:02:09 litsha22 ccm: [3899]: debug: total_node_count=3, total_quorum_votes=300
Jan 16 12:02:09 litsha22 cib: [3900]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Jan 16 12:02:09 litsha22 cib: [3900]: info: mem_handle_event: no mbr_track info
Jan 16 12:02:09 litsha22 cib: [3900]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Jan 16 12:02:09 litsha22 cib: [3900]: info: mem_handle_event: instance=14, nodes=2, new=0, lost=1, n_idx=0, new_idx=2, old_idx=5
Jan 16 12:02:09 litsha22 cib: [3900]: info: cib_ccm_msg_callback:callbacks.c LOST: litsha23
Jan 16 12:02:09 litsha22 cib: [3900]: info: cib_ccm_msg_callback:callbacks.c PEER: litsha21
Jan 16 12:02:09 litsha22 cib: [3900]: info: cib_ccm_msg_callback:callbacks.c PEER: litsha22
... Truncated For Brevity ...
Jan 16 12:02:12 litsha22 crmd: [3904]: info: update_dc:utils.c Set DC to litsha21 (1.0.6)
Jan 16 12:02:12 litsha22 crmd: [3904]: info: do_state_transition:fsa.c litsha22: State transition S_PENDING -> S_NOT_DC [ input=i_not_dc cause=c_ha_message origin=do_cl_join_finalize_respond ]
Jan 16 12:02:12 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Update (client: 3069, call:25): > (ok)
... Truncated For Brevity ...
Jan 16 12:02:14 litsha22 IPaddr[3998]: INFO: /sbin/ifconfig eth0: netmask broadcast
Jan 16 12:02:14 litsha22 IPaddr[3998]: INFO: Sending Gratuitous Arp for on eth0:0 [eth0]
Jan 16 12:02:14 litsha22 IPaddr[3998]: INFO: /usr/lib64/heartbeat/send_arp -i 500 -r 10 -p /var/run/heartbeat/rsctmp/send_arp/send_arp eth auto ffffffffffff
Jan 16 12:02:14 litsha22 crmd: [3904]: info: process_lrm_event:lrm.c LRM operation (3) start_0 on IPaddr_1 complete
Jan 16 12:02:14 litsha22 kernel: send_arp uses obsolete (PF_INET,SOCK_PACKET)
Jan 16 12:02:14 litsha22 kernel: klogd 1.4.1, state change
Jan 16 12:02:14 litsha22 kernel: NET: Registered protocol family 17
Jan 16 12:02:15 litsha22 crmd: [3904]: info: do_lrm_rsc_op:lrm.c Performing op monitor on IPaddr_1 (interval=5000ms, key=0:f9d962f0-4ed6-462d-a28d-e27b c)
Jan 16 12:02:15 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Update (client: 3904, call:18): > (ok)
Jan 16 12:02:15 litsha22 mgmtd: [3905]: debug: update cib finished

As shown in Listing 11, the /var/log/messages file shows this node has acquired the floating resource. The ifconfig line shows the eth0:0 device being created dynamically to maintain service. And as you can see from Listing 11, a roll is taken, and sufficient members for quorum are available for the vote. A vote is taken, followed by the ifconfig commands that are issued to claim the floating resource IP address.

As an additional means of indicating when a failure has occurred, you can log in to any of the cluster members and execute the hb_gui command. Through this method, you can determine by visual inspection which system has the floating resource.

Lastly, we would be remiss if we did not illustrate a sample log file from a no-quorum situation. If any single node cannot communicate with either of its peers, it has lost quorum (since 2/3 is the majority in a three-member voting party). In this situation, the node understands that it has lost quorum and invokes the no quorum policy handler. Listing 12 shows an example of the log file from such an event. When quorum is lost, a log entry indicates it. The cluster node showing this log entry will disown the floating resource, and the ifconfig down statement releases it.

Listing 12. Log entry showing loss of quorum

litsha22:~ cat /var/log/messages
...
Jan 16 12:06:12 litsha22 ccm: [3899]: debug: quorum plugin: majority
Jan 16 12:06:12 litsha22 ccm: [3899]: debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
Jan 16 12:06:12 litsha22 ccm: [3899]: debug: total_node_count=3, total_quorum_votes=
... Truncated For Brevity ...
Jan 16 12:06:12 litsha22 crmd: [3904]: info: crmd_ccm_msg_callback:callbacks.c Quorum lost after event=invalid (id=15)
Jan 16 12:06:12 litsha22 crmd: [3904]: WARN: check_dead_member:ccm.c Our DC node (litsha21) left the cluster
... Truncated For Brevity ...
Jan 16 12:06:14 litsha22 IPaddr[5145]: INFO: /sbin/route -n del -host
Jan 16 12:06:15 litsha22 lrmd: [1619]: info: RA output: (IPaddr_1:stop:stderr) SIOCDELRT: No such process
Jan 16 12:06:15 litsha22 IPaddr[5145]: INFO: /sbin/ifconfig eth0: down
Jan 16 12:06:15 litsha22 IPaddr[5145]: INFO: IP Address released
Jan 16 12:06:15 litsha22 crmd: [3904]: info: process_lrm_event:lrm.c LRM operation (6) stop_0 on IPaddr_1 complete
Jan 16 12:06:15 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Update (client: 3904, call:32): > (ok)
Jan 16 12:06:15 litsha22 mgmtd: [3905]: debug: update cib finished

As you can see from Listing 12, when quorum is lost for any given node, it relinquishes any resources as a result of the chosen no quorum policy configuration. The choice of no quorum policy is up to you.

Fail-back actions and messages

One of the more interesting implications of a properly configured Linux HA system is that you do not need to take any action to reinstantiate a cluster member. Simply activating the Linux instance is sufficient to let the node rejoin its peers automatically. If you have configured a primary node (that is, one that is favored to gain the floating resource above all others), it will regain the floating resources automatically. Non-favored systems will simply join the eligibility pool and proceed as normal.

Adding another Linux instance back into the pool will cause each node to take notice and, if possible, re-establish quorum. The floating resources will be re-established on one of the nodes if quorum can be re-established.

Listing 13. Quorum is re-established

litsha22:~ tail -f /var/log/messages
Jan 16 12:09:02 litsha22 heartbeat: [3892]: info: Heartbeat restart on node litsha21
Jan 16 12:09:02 litsha22 heartbeat: [3892]: info: Link litsha21:eth1 up.
Jan 16 12:09:02 litsha22 heartbeat: [3892]: info: Status update for node litsha21: status init
Jan 16 12:09:02 litsha22 heartbeat: [3892]: info: Status update for node litsha21: status up
Jan 16 12:09:22 litsha22 heartbeat: [3892]: debug: get_delnodelist: delnodelist=
Jan 16 12:09:22 litsha22 heartbeat: [3892]: info: Status update for node litsha21: status active
Jan 16 12:09:22 litsha22 cib: [3900]: info: cib_client_status_callback:callbacks.c Status update: Client litsha21/cib now has status [join]
Jan 16 12:09:23 litsha22 heartbeat: [3892]: WARN: 1 lost packet(s) for [litsha21] [36:38]
Jan 16 12:09:23 litsha22 heartbeat: [3892]: info: No pkts missing from litsha21!
Jan 16 12:09:23 litsha22 crmd: [3904]: notice: crmd_client_status_callback:callbacks.c Status update: Client litsha21/crmd now has status [online]
...
Jan 16 12:09:31 litsha22 crmd: [3904]: info: crmd_ccm_msg_callback:callbacks.c Quorum (re)attained after event=new MEMBERSHIP (id=16)
Jan 16 12:09:31 litsha22 crmd: [3904]: info: ccm_event_detail:ccm.c NEW MEMBERSHIP: trans=16, nodes=2, new=1, lost=0 n_idx=0, new_idx=2, old_idx=5
Jan 16 12:09:31 litsha22 crmd: [3904]: info: ccm_event_detail:ccm.c CURRENT: litsha22 [nodeid=1, born=13]
Jan 16 12:09:31 litsha22 crmd: [3904]: info: ccm_event_detail:ccm.c CURRENT: litsha21 [nodeid=0, born=16]
Jan 16 12:09:31 litsha22 crmd: [3904]: info: ccm_event_detail:ccm.c NEW: litsha21 [nodeid=0, born=16]
Jan 16 12:09:31 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Local-only Change (client:3904, call: 35): (ok)
Jan 16 12:09:31 litsha22 mgmtd: [3905]: debug: update cib finished
...
Jan 16 12:09:34 litsha22 crmd: [3904]: info: update_dc:utils.c Set DC to litsha22 (1.0.6)
Jan 16 12:09:35 litsha22 cib: [3900]: info: sync_our_cib:messages.c Syncing CIB to litsha21
Jan 16 12:09:35 litsha22 crmd: [3904]: info: do_state_transition:fsa.c litsha22: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=i_integrated cause=c_fsa_internal origin=check_join_state ]
Jan 16 12:09:35 litsha22 crmd: [3904]: info: do_state_transition:fsa.c All 2 cluster nodes responded to the join offer.
Jan 16 12:09:35 litsha22 attrd: [3903]: info: attrd_local_callback:attrd.c Sending full refresh
Jan 16 12:09:35 litsha22 cib: [3900]: info: sync_our_cib:messages.c Syncing CIB to all peers
...
Jan 16 12:09:37 litsha22 tengine: [5119]: info: send_rsc_command:actions.c Initiating action 4: IPaddr_1_start_0 on litsha22
Jan 16 12:09:37 litsha22 tengine: [5119]: info: send_rsc_command:actions.c Initiating action 2: probe_complete on litsha21
Jan 16 12:09:37 litsha22 crmd: [3904]: info: do_lrm_rsc_op:lrm.c Performing op start on IPaddr_1 (interval=0ms, key=2:c5131d14-a9d9-400c-a4b1-60d8f5fbbcce)
Jan 16 12:09:37 litsha22 pengine: [5120]: info: process_pe_message:pengine.c Transition 2: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-72.bz2
Jan 16 12:09:37 litsha22 IPaddr[5196]: INFO: /sbin/ifconfig eth0: netmask broadcast
Jan 16 12:09:37 litsha22 IPaddr[5196]: INFO: Sending Gratuitous Arp for on eth0:0 [eth0]
Jan 16 12:09:37 litsha22 IPaddr[5196]: INFO: /usr/lib64/heartbeat/send_arp -i 500 -r 10 -p /var/run/heartbeat/rsctmp/send_arp/send_arp eth auto ffffffffffff
Jan 16 12:09:37 litsha22 crmd: [3904]: info: process_lrm_event:lrm.c LRM operation (7) start_0 on IPaddr_1 complete
Jan 16 12:09:37 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Update (client: 3904, call:46): > (ok)
Jan 16 12:09:37 litsha22 mgmtd: [3905]: debug: update cib finished
Jan 16 12:09:37 litsha22 tengine: [5119]: info: te_update_diff:callbacks.c Processing diff (cib_update): >
Jan 16 12:09:37 litsha22 tengine: [5119]: info: match_graph_event:events.c Action IPaddr_1_start_0 (4) confirmed
Jan 16 12:09:37 litsha22 tengine: [5119]: info: send_rsc_command:actions.c Initiating action 5: IPaddr_1_monitor_5000 on litsha22
Jan 16 12:09:37 litsha22 crmd: [3904]: info: do_lrm_rsc_op:lrm.c Performing op monitor on IPaddr_1 (interval=5000ms, key=2:c5131d14-a9d9-400c-a4b1-60d8f5fbbcce)
Jan 16 12:09:37 litsha22 cib: [5268]: info: write_cib_contents:io.c Wrote version of the CIB to disk (digest: 98cb6685c25d14131c49a998dbbd0c35)
Jan 16 12:09:37 litsha22 crmd: [3904]: info: process_lrm_event:lrm.c LRM operation (8) monitor_5000 on IPaddr_1 complete
Jan 16 12:09:38 litsha22 cib: [3900]: info: cib_diff_notify:notify.c Update (client: 3904, call:47): > (ok)
Jan 16 12:09:38 litsha22 mgmtd: [3905]: debug: update cib finished

In Listing 13, you see that quorum has been re-established. When quorum is re-established, a vote is performed and litsha22 becomes the active node with the floating resource.

Next steps

High availability is best seen as a series of challenges, and the solution outlined here describes the first step. From here, there are many ways to move forward with your environment: you may choose to install redundant networking, a cluster file system to support the realservers, or more advanced middleware that supports clustering directly.
Resources

Learn

- Wikipedia's article on basic networking is a good resource for learning about basic computer topology and networking tasks. Wikipedia has more information about Network Address Translation (NAT), too.
- Visit the home page of the Apache HTTP server project if you are new to basic Apache configuration. The site offers useful documentation and how-to information. See also your distribution-provided Apache man pages for more.
- The Linux Virtual Server (LVS) project provides the LVS technology used in this article, including ipvsadm. The site contains a wealth of tutorials and other useful documentation, and provides a mailing list.
- The Linux-HA Web site contains much useful information on high availability, as well as the Heartbeat component described in this article.
- mon is a framework for monitoring service outages and their recovery; check it out.
- Read our two-part series on setting up a large Linux cluster using IBM Cluster Systems Management software: "Installing a large Linux cluster, Part 1: Introduction and hardware configuration" (developerWorks, Dec 2006) and "Installing a large Linux cluster, Part 2: Management server configuration and node installation" (developerWorks, Jan 2007).
- "Craft a load-balancing cluster with ClusterKnoppix" (developerWorks, Dec 2004) shows how to build a cluster using the ClusterKnoppix Live CD.
- In the developerWorks Linux zone, find more resources for Linux developers, including Linux tutorials, as well as our readers' favorite Linux articles and tutorials over the last month.
- Stay current with developerWorks technical events and webcasts.

About the authors

Eli M. Dow is a software engineer in the IBM Test and Integration Center for Linux in Poughkeepsie, NY. He holds a B.S. degree in computer science and psychology and a master's of computer science from Clarkson University. He is an alumnus of the Clarkson Open Source Institute. His interests include the GNOME desktop, human-computer interaction, virtualization, and Linux systems programming. He is the coauthor of an IBM Redbook, Linux for IBM System z9 and IBM zSeries.

Frank LeFevre is a Senior Software Engineer in the IBM Systems and Technology Group in Poughkeepsie, NY. He has over 28 years of experience with IBM mainframe hardware and operating systems. He is currently the Team Leader for the Linux Virtual Server Platform Evaluation Test team.